Akamatsu 2014
I. INTRODUCTION
Fig. 1. Target environment
It is very important for administrators of public facilities or
shops to know the number of visitors using their facilities. In
order to determine the exact number of people that visit a
particular establishment, the number of people passing through
the doorways of the building must be counted. Doorways located
in semi-outdoor environments are affected by rain and sunlight,
and occlusions often occur in crowded passages. As a result,
existing systems achieve poor accuracy in such settings.

There are conventional methods by which to detect a person in
such an environment. One typical method for human detection
involves the use of ceiling cameras [1][2][3]. K. Terada et al.
and D. Beymer used a stereo camera attached to the ceiling
[1][2], and S. Velipasalar et al. used a single camera attached
to the ceiling [3]. These systems often fail to detect a person
when the lighting conditions change. Methods using 2D laser
scanners have also been proposed [4][5]. K. Nakamura et al. used
multiple single-row laser scanners installed on a pole inside a
passageway [4], and K. Katabira et al. used a single-row laser
scanner attached to the ceiling [5]. A 2D laser scanner can be
used to measure distance; however, blind areas often occur in
crowded environments. In addition, the sensors must be installed
at the same height as the flow line of the people, which may not
be acceptable to building owners because of potential problems
associated with landscaping.

Recently, we have been developing an intelligent automatic door
system using a 3D laser scanner [6]. The system measures the
velocity, position, and width of a pedestrian and judges whether
the person intends to pass through a doorway, so that the
door-opening timing is appropriate. The 3D laser scanner used in
the system is robust to illumination changes. In addition, it is
not necessary to install a pole or a sensor on the floor or wall,
because the sensor is attached to the top of the door. Moreover,
this system has no dead space and does not adversely affect the
landscape.

In the present paper, we propose a method by which to count the
number of people entering and exiting a structure using a 3D
laser scanner that was developed for use as an automatic door
sensor.

1 S. Akamatsu and T. Tomizawa are with the Department of Human
Media Systems, Graduate School of Information Systems, The
University of Electro-Communications, Choufu, Tokyo, Japan,
{akamatsu, tomys}@taka.is.uec.ac.jp
2 N. Shimaji is with the Department of Engineering, Hokuyo
Automatic Co., Ltd., Osaka, Japan, shimaji@hokuyo-aut.jp

II. SENSOR

In the present study, we used the 3D laser scanner developed by
Hokuyo Automatic Co., Ltd. as a door sensor. The measurement is
based on a time-of-flight (TOF) method, in which the distance is
calculated from the time difference between emitting a laser
beam and receiving the reflected beam. The specifications of the
sensor are shown in Table I. The detection area on the ground is
5.0 m in width and 2.8 m in depth when the sensor is attached at
a height of 3.1 m. The scanning area is shown in Fig. 2. The
data sent from the sensor include the distance to the object and
the projection angle of the laser beam, and it is possible to
obtain data (5,440 points/frame) at 10 Hz.

III. ALGORITHM

In this section, we describe the algorithm for counting the
number of people passing through a doorway. The algorithm can be
roughly divided into person-detection, tracking, and counting
components. First, the sensor data is separated into individual
objects by grouping. Next, the individual objects are processed
by the person-determination, tracking, and counting steps
described below.
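Before any grouping, each raw return (a distance plus projection angles) must be converted into a 3D point. The following is a minimal sketch of such a conversion; the angle conventions, argument names, and sensor-frame definition are assumptions for illustration, not the sensor's actual interface:

```python
import math

def polar_to_xyz(distance, h_angle_deg, v_angle_deg, sensor_height=3.1):
    """Convert one laser return to a 3D point in a frame whose origin
    lies on the floor directly below the sensor (z up).

    Assumed conventions (illustration only):
      - v_angle_deg is measured from straight down (nadir),
      - h_angle_deg rotates the beam about the vertical axis.
    """
    h = math.radians(h_angle_deg)
    v = math.radians(v_angle_deg)
    x = distance * math.sin(v) * math.cos(h)
    y = distance * math.sin(v) * math.sin(h)
    z = sensor_height - distance * math.cos(v)  # height above the floor
    return (x, y, z)
```

Under these conventions, a beam pointing straight down that measures 3.1 m (the mounting height) maps to a point on the floor at the origin.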
TABLE I
SPECIFICATIONS OF THE SENSOR

Optical source                 laser diode
Type of measurement            TOF (pulse-modulated signals)
Scanning device                resonant mirror
Horizontal range               72 deg
Vertical range                 42 deg
Frame rate                     10 Hz
Number of observation points   5,440 points/frame
Temperature resistance         -20 to 50 °C
Size                           127 (H) × 230 (L) × 83 (W) mm
A. Person detection
The person-detection component extracts human figures
from point clouds observed by the 3D sensor.
1) Grouping: The point cloud generally contains more than one
person as well as other moving objects. If these objects are
sufficiently separated, it is possible to identify each person
using a simple grouping method based only on the point-to-point
distance. However, if the distance between individuals is very
short, persons cannot be separated by such a simple grouping
method. On the other hand, since the head of a walking person is
generally not in contact with another object, we adopt a method
that groups the point cloud from top to bottom based on the
heads of individual people, which makes it possible to separate
the point cloud associated with each person. This grouping
method is described below.

1) The point cloud is sorted in descending order according to
   the heights of the points (on the z-axis).
2) The highest point is labeled group A.
3) Next, focus on the second highest point. If the distance
   between this point and a point of a previously defined group
   is short, the same label is assigned to the point of focus.
   The thresholds for determining closeness are a horizontal
   distance of less than 0.2 m (x- and y-axes) between the point
   and a previously labeled point, and a vertical distance of
   less than 0.5 m (z-axis). If there is no such labeled point,
   the second highest point is assigned a new label, i.e.,
   group B.
4) This procedure is repeated until all points in the cloud are
   labeled.

The raw point cloud data is shown in Fig. 3. The results of the
grouping are shown in Fig. 4. Points of the same color are
assigned to a single object.

Fig. 3. Raw point cloud data ((a) photograph of the experiment;
(b) raw point cloud)

2) Determination of whether a detected object is a person: We
next consider whether or not a detected object is a person. The
determination is based on the whole-body size of each object.
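The top-down grouping steps above can be sketched as follows. This is a naive O(n²) illustration rather than the authors' implementation, and it interprets the 0.2 m horizontal threshold as a Euclidean distance in the x-y plane, which is an assumption:

```python
import math

def group_points(points, h_thresh=0.2, v_thresh=0.5):
    """Top-down grouping of (x, y, z) points.

    Points are visited from highest to lowest; each point joins an
    existing group if it lies within h_thresh horizontally and
    v_thresh vertically of an already-labeled point, otherwise it
    starts a new group.  Thresholds follow the paper (0.2 m / 0.5 m).
    """
    groups = []
    for p in sorted(points, key=lambda q: q[2], reverse=True):
        placed = False
        for members in groups:  # naive search over labeled points
            if any(math.hypot(p[0] - q[0], p[1] - q[1]) < h_thresh
                   and abs(p[2] - q[2]) < v_thresh for q in members):
                members.append(p)
                placed = True
                break
        if not placed:
            groups.append([p])  # assign a new label, e.g. group B
    return groups
```

For example, two head-and-shoulder clusters separated by a couple of metres come out as two groups, while points stacked below a head stay with that head's group.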
TABLE II
PARAMETERS AND CONDITIONS FOR PERSON DETERMINATION

Parameter                          Name            Condition
Range of the height of the object  object_height   > 0.2 m
Area of the object                 object_area     > 0.05 m^2
Width of the object                object_width    > 0.2 m
Depth of the object                object_depth    > 0.2 m
Range of the height of the head    head_height     > 0.1 m
Area of the head                   head_area       > 0.02 m^2
Width of the head                  head_width      > 0.1 m
Depth of the head                  head_depth      > 0.1 m
Ground height of the head          head_z          > 0.5 m
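A straightforward way to apply the conditions of Table II is a conjunction of threshold checks, one per row. The dictionary key names below are illustrative stand-ins for the table's parameters, not identifiers from the paper:

```python
def is_person(obj):
    """Apply the size conditions of Table II to one detected object.

    `obj` is a plain dict of measured dimensions in metres (m^2 for
    areas); every condition must hold for the object to be counted
    as a person.
    """
    return (obj["object_height"] > 0.2
            and obj["object_area"] > 0.05
            and obj["object_width"] > 0.2
            and obj["object_depth"] > 0.2
            and obj["head_height"] > 0.1
            and obj["head_area"] > 0.02
            and obj["head_width"] > 0.1
            and obj["head_depth"] > 0.1
            and obj["head_z"] > 0.5)
```

For instance, a low object whose "head" cluster sits below 0.5 m above the ground (such as a pushed cart) is rejected even if its body dimensions pass.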
• object_area
  1) Project the point cloud of the object onto the x-y plane.
  2) Calculate the smallest convex polygon that contains all of
     the projected points.
  3) Calculate the area of the convex polygon. This area is
     defined as object_area.

• object_width, object_depth

Fig. 5. Extracted head points
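One standard way to realize the object_area steps above is a convex hull (here Andrew's monotone chain) followed by the shoelace formula. The paper does not specify which hull algorithm is used, so this is only a sketch:

```python
def convex_hull_area(points_xy):
    """Area of the smallest convex polygon containing the given
    (x, y) points: monotone-chain hull, then the shoelace formula."""
    pts = sorted(set(points_xy))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]  # counter-clockwise hull vertices

    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

Interior points do not affect the result: a unit square with an extra point at its centre still yields an area of 1.0 m².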
Fig. 7. Result of person tracking (blue lines)
(a) Point cloud and results of counting
REFERENCES
[1] K. Terada, D. Yoshida, S. Oe, and J. Yamaguchi, “A method of
counting the passing people by using the stereo images”, International
Conference on Image Processing, vol. 2, pp. 338-342, 1999.
[2] D. Beymer, “Person counting using stereo”, Workshop on Human
Motion, pp. 127-133, 2000.
[3] S. Velipasalar, Y.-L. Tian, and A. Hampapur, “Automatic counting
of interacting people by using a single uncalibrated camera”, IEEE
International Conference on Multimedia and Expo, pp. 1265-1268,
2006.
[4] K. Nakamura, H. Zhao, X. Shao, and R. Shibasaki, “Human Sensing
in Crowd Using Laser Scanners”, Laser Scanner Technology, Dr. J.
Apolinar Munoz Rodriguez (Ed.), ISBN: 978-953-51-0280-9, InTech.
[5] K. Katabira, K. Nakamura, H. Zhao, and R. Shibasaki, “A method
for counting pedestrians using a laser range scanner”, In: 25th Asian
Conference on Remote Sensing (ACRS 2004), Thailand, November
22-26, 2004.
[6] D. Nishida, K. Tsuzura, S. Kudoh, K. Takai, T. Momodori, N. Asada,
T. Mori, T. Suehiro, and T. Tomizawa, “Development of Intelligent
Automatic Door System”, Proceedings of the 2014 IEEE International
Conference on Robotics and Automation, pp. 6368-6374, 2014-06-04.