
On Solving Mirror Reflection in LIDAR Sensing

Shao-Wen Yang, Student Member, IEEE, and Chieh-Chih Wang, Member, IEEE
Abstract—This paper presents a characterization of sensing failures of light detection and ranging (LIDAR) given the presence of mirrors, which are quite common in our daily lives. Although LIDARs play an important role in the field of robotics, previous research has addressed little regarding the challenges in optical sensing such as mirror reflections. As light can be reflected off a mirror and penetrate a window, mobile robots equipped with LIDARs only may not be capable of dealing with real environments. It is straightforward to deal with mirrors and windows by fusing sensors of heterogeneous characteristics. However, indistinguishability between mirror images and true objects makes the map inconsistent with the true environment, even for a robot with heterogeneous sensors. We propose a Bayesian framework to detect and track mirrors using only LIDAR information. Mirrors are detected by utilizing the property of mirror symmetry. Spatiotemporal information is integrated using a Bayesian filter. The proposed approach can be seamlessly integrated into the occupancy grid map representation and the mobile robot localization framework, and has been demonstrated using real data from a LIDAR. Mirrors, as potential obstacles, are successfully detected and tracked.

Index Terms—Mobile robots, optical scanners, range sensing, sensor fusion.

I. INTRODUCTION

SIMULTANEOUS localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. As the SLAM problem has attracted immense attention in the mobile robotics literature, a large variety of sensors have been used for SLAM, such as sonar, light detection and ranging (LIDAR), IR, monocular vision, stereo vision, and GPS. The past decade has seen rapid progress in solving the SLAM problem [1], [2], and LIDARs are at the core of most state-of-the-art robot systems, such as Boss [3] and Stanley [4], and the autonomous vehicles in the Defense Advanced Research Projects Agency (DARPA) Urban Challenge and Grand Challenge. Because of their narrow beamwidth and fast time of flight,
Manuscript received February 2, 2009; revised June 6, 2009, October 2, 2009, and December 7, 2009; accepted December 25, 2009. Date of publication February 8, 2010; date of current version January 19, 2011. Recommended by Technical Editor C. A. Kitts. This work was supported in part by the Taiwan National Science Council under Grant 96-2628-E-002-251-MY3, Grant 96-2218-E-002-035, Grant 97-2218-E-002-017, and Grant 98-2218-E-002-006, in part by the Excellent Research Projects of the National Taiwan University, in part by Taiwan Micro-Star International, in part by Compal Communications, Inc., and in part by Intel.

S.-W. Yang is with the Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan (e-mail: any@robotics.csie.ntu.edu.tw).

C.-C. Wang is with the Department of Computer Science and Information Engineering and the Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei 10617, Taiwan (e-mail: bobwang@ntu.edu.tw).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TMECH.2010.2040113

LIDARs are appropriate for high-precision applications in the field of robotics. A LIDAR estimates the distance to a surface by measuring the round-trip time of flight of an emitted pulse of light. Only a fraction of the photons emitted by the LIDAR are received back through the sensor's optics, with this amount being a strong function of the reflectivity of the object being imaged. Table I summarizes how surface properties affect the amount of light reflected, absorbed, and transmitted. White surfaces reflect a large fraction of light, while black surfaces reflect only a small amount. Transparent objects such as glass often refract the light, and a LIDAR measurement of such a surface typically results in the range information for the object behind the transparent surface. In addition, the mirror-like reflection of light, in which light from a single incoming direction is reflected into a single outgoing direction, is called specular reflection or regular reflection. This is in contrast to diffuse reflection, where light bounces off in a number of angles due to the irregularity of a surface. Mirrors are very flat surfaces and reflect nearly all incident light such that the angles of incidence and reflection are equal. In geometry, the mirror image of an object is the virtual image formed by reflection in a plane mirror. The mirror image that is formed appears to be behind the mirror and is of the same size as the real object illuminated by the LIDAR via the mirror. Figs. 1 and 2 illustrate the circumstances of mirror reflection and glass transparency. As a result, detection of mirrors and windows can be problematic in laser sensing [5], [6]. To our best knowledge, the solution to the problem of mirror reflection has not been addressed yet. As LIDARs have become the major perceptual sensors, mirrors and windows can pose a real danger to robots with limited perceptual capability.

In this paper, the problem of mirror reflection is addressed. The main contribution of this study is to provide a solution to detect and track mirrors using only LIDAR information. The mirror detector utilizes the geometric property of mirror symmetry to generate hypothetical mirror locations. An identified mirror location is represented using a line model with endpoints. The mirror tracker is then used to integrate the potential mirror locations temporally using a Bayesian filter. The spatiotemporal information is accumulated and used to provide reliable scene understanding. A Bayesian framework is also introduced to the mobile robot mapping and localization process so that the mirror images can be eliminated. The proposed approach has been demonstrated using real data from the experimental platform equipped with a SICK LMS 291 LIDAR, as shown in Fig. 1. The performance of the proposed approach has also been evaluated using real data. The ground truth is obtained using another LIDAR that can observe the actual boundary of a mirror. The ample experimental results demonstrate the feasibility and effectiveness of our approach.


TABLE I: LIGHT REFLECTION, ABSORPTION, AND TRANSMISSION

Fig. 1. Experimental platform.

II. BACKGROUND

Current LIDARs are a standard sensor for both indoor and outdoor mobile robots, given their inherent reliability. The data from a LIDAR include the angles and the distances to the objects in the field of view. Compared with LIDARs, vision sensors require complicated and error-prone processing before obtaining depth information. Range sensors such as sonar sensors and IR sensors are not capable of fine angular resolution. As a result, LIDARs are capable of fine angular and distance resolution, real-time data retrieval, and low false rates. As light can be reflected off a mirror and penetrate a window, mobile robots equipped with LIDARs only may not be capable of dealing with real environments. The sonar, in contrast, is capable of detecting those objects that a LIDAR can miss. The main drawbacks in sonar sensing are specularity, wide beamwidth, and frequent misreadings due to either external ultrasound sources or crosstalk [7]. In optical sensing, specular reflection can cause loss of data and noisy signals in optical scans [8].

Several new LIDAR systems have been introduced recently. A time-of-flight camera [9] is a 3-D LIDAR that can provide immediate depth images. It enables a diverse set of emerging medical, biometric, and robotics applications. Several small LIDARs have been introduced for indoor use, and have a reasonable price and low power consumption. They operate at high data rates with approximately millimeter resolution. Konolige et al. [10] proposed a low-cost laser distance sensor with reasonable accuracy. As the development of LIDARs matures, prices are greatly reduced, and robots rely more and more on laser sensing. However, new LIDARs also suffer from the problems of mirror reflection and glass transparency.

Making robots fully autonomous in a wide variety of environments is difficult, especially in environments with transparent objects, light-reflecting objects, or light-absorbing objects [11]. To make robots fully autonomous in environments with mirrors and windows, detection and modeling of these objects are critical. Jorg [7] proposed to use LIDAR measurements to filter out spurious sonar measurements. The objective is the extraction of sonar range readings that are complementary to the corresponding laser range information in the sense that they provide additional environmental information. The LIDAR information is used to verify corresponding sonar range information. Dudek et al. [12] introduced an approach to extract line segments in laser scans and sonar readings. A collection of sonar measurements is acquired to obtain a dense range map. The laser sensing is used to complement the sonar sensing by accurately pinpointing the corners and the borders of objects, where the sonar data are ambiguous. Both of these works proposed to extract complementary sonar readings to detect those objects not seen by LIDARs. However, the indistinguishability between mirrors and windows makes robot exploration problematic.

III. SENSOR FUSION

In order to demonstrate the ambiguities that arise in a conventional sensor fusion approach, we maintain two individual occupancy grid maps [13] accumulated from a LIDAR and a sonar array, respectively. Instead of making hard decisions at every time step, the occupancy grid maps are utilized to accumulate the temporal information of the sensor readings. Let $M^l$ and $M^s$ be the occupancy grid maps built using data from a LIDAR and a sonar array, respectively. Each grid cell $(x, y)$ is determined as a potential obstacle if the following inequalities hold:

$$M^l_{x,y} < \tau_l \tag{1}$$

$$M^s_{x,y} > \tau_s \tag{2}$$

where $\tau_l$ and $\tau_s$ are predefined probabilities. The values of $\tau_l$ and $\tau_s$ can be obtained according to the a priori probabilities used in the occupancy grid map representation. In our experiments, $\tau_l$ is 0.05 and $\tau_s$ is 0.95. At every time step, the sensor fusion map is calculated accordingly. The probability $M_{x,y}$ of the grid cell $(x, y)$ in the sensor fusion map $M$ is $M^s_{x,y}$ if (1) and (2) hold; otherwise, it is $M^l_{x,y}$.
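For concreteness, the fusion rule above can be written as a few lines of array code. The following is a minimal sketch of our own (not from the paper), assuming the two grids are stored as NumPy arrays of occupancy probabilities:

```python
import numpy as np

def fuse_grids(m_lidar, m_sonar, tau_l=0.05, tau_s=0.95):
    """Fuse LIDAR and sonar occupancy grids per inequalities (1)-(2).

    A cell is a potential obstacle when the LIDAR map reports free space
    (M^l_{x,y} < tau_l) while the sonar map reports occupancy
    (M^s_{x,y} > tau_s); such cells take the sonar probability, and all
    other cells keep the LIDAR probability.
    """
    potential = (m_lidar < tau_l) & (m_sonar > tau_s)
    return np.where(potential, m_sonar, m_lidar)
```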

Fig. 2. Environments with mirrors and windows. (a), (c), and (e) Laser scans in environments with mirrors or windows that are marked with rectangles. The robot is at the origin and heads toward the positive x-axis. (b), (d), and (f) Camera images of (a), (c), and (e), respectively, for visual reference. In (a), there is a mirror placed on the right-hand side of the corridor. In (c), there is a French window on the right-hand side of the corridor. In (e), there is a mirror pillar on the right-hand side of the room.

Fig. 4 visualizes the resulting grid maps using data collected in an environment with mirrors and windows. Fig. 4(a) and (b) depicts the occupancy grid maps built using data from a LIDAR and a sonar array, respectively. It can be observed that mirrors and windows are objects that are likely to be seen by sonar sensors, but less likely to be identified by LIDARs. Fig. 4(c) shows the sensor fusion map in which most of the mirror and window locations are successfully identified, in contrast to the LIDAR-only map. Fusion of heterogeneous sensors is important for collision-free navigation in real environments. However, mirrors and windows both appear merely as potential obstacles and cannot be told apart. As illustrated in Fig. 4(c), it is difficult to distinguish whether an object behind a potential obstacle is a real object or a mirror image. Indistinguishability between mirror images and true objects makes the map inconsistent with the true environment. Inconsistency between the map and the true environment makes mobile robot navigation problematic. To deal with the problem of mirror reflection, conventional approaches might include the use of sonar to detect obstacles unseen by a LIDAR. However, this still fails to resolve the ambiguity of whether an obstacle is specifically a mirror or a window. The interpretation of an object that appears to be behind the obstacle can be ambiguous. In order to ensure collision-free navigation and reliable localization capability, having a consistent understanding of the environment is important. We take advantage of the property of mirror symmetry to resolve the ambiguity, and use the Bayesian framework to incorporate spatial and temporal information. By investigating the spatial symmetry of the environment and using only LIDAR information, our approach can identify mirrors, estimate their locations, and properly interpret the mirror images of objects.

Fig. 3. Blueprint for the environment shown in Fig. 2(b) and (d). The top rectangle indicates the mirror location and the bottom rectangle shows the location of a French window. In Fig. 2(a), the robot takes the observation at place B heading toward place A. In Fig. 2(c), the robot takes the observation at place B heading toward place C.

IV. MIRROR DETECTION

In this section, we describe a method to identify potential mirror locations within a laser scan. We assume that mirrors are planar. A distance-based criterion is used to determine gaps in a laser scan. The geometric property of mirror symmetry is exploited to restore the spatial information of reflected scan points. The likelihood field sensor model [14] is applied to calculate the likelihood that a gap is indeed a mirror. A mirror prediction is then represented by a Gaussian. The iterative closest point (ICP) algorithm [15] is utilized for evaluating the uncertainty of a mirror prediction.

Fig. 4. Occupancy grid maps. The maps are depicted with respect to a global coordinate system. In (a), (b), and (c), the top rectangle indicates the mirror location and the bottom rectangle shows the location of a French window. (a) and (b) Maps obtained by using the data from the LIDAR and the sonar sensors, respectively. (c) Map obtained by fusion of the LIDAR and the sonar sensors. Rectangles show the potential obstacles not seen by using only the LIDAR. The potential obstacles are successfully identified with the use of the sensor fusion map. However, the robot still cannot distinguish the differences in the sensor fusion map between mirrors and windows. The actual map is shown in Fig. 3, in which the robot moves from place A to place C.

A. Prediction

The mirror prediction method utilizes the fact that mirrors are usually framed, i.e., mirrors are physically bounded. For instance, in Fig. 2(b), the mirror is enclosed by a wooden frame, whereas in Fig. 2(f), the mirror that is supported by a pillar is framed with steel. The assumption can fail when a mirror that is not placed along anything else does not have a boundary. First, we assume environments are smooth and define gaps as discontinuities of range measurements within a laser scan. Letting $z$ be an observation containing range measurements taken from a LIDAR, a gap $G_{i,j}$ consists of two measurements $\{z_i, z_j \mid 1 \le i < j \le n,\ j - i > 1\}$ such that

$$z_{i+1} - z_i > \tau_d \tag{3}$$

$$z_{j-1} - z_j > \tau_d \tag{4}$$

$$|z_k - z_{k+1}| \le \tau_d \quad \text{for } i < k < j - 1 \tag{5}$$

where $n$ is the cardinality $|z|$ of the observation $z$, $z_i$ is the $i$th range measurement, and $\tau_d$ is a predetermined constant. The cardinality of an observation is a measure of the number of measurements of the observation. In our experiments, $\tau_d$ is 1.5 m. The line with endpoints $\{p_i, p_j\}$ is thus considered as a potential mirror location, where $p_i$ and $p_j$ are the Cartesian coordinates of the range measurements $z_i$ and $z_j$, respectively, in the robot frame.

Fig. 5. Restoration of reflected scan points. The rectangle (in blue) shows the robot location. The heavy lines (in red) indicate the measurement acquired by the LIDAR. The light lines (in cyan) indicate the reflected scan points obtained by applying the mirror symmetry. The thick line (in black) indicates the gap location.

B. Verification

For each gap $G_{i,j}$ with endpoints $\{p_i, p_j\}$, the measurements $\{z_{i+1}, z_{i+2}, \ldots, z_{j-1}\}$ are restored in accordance with the geometric property of mirror symmetry. Let $e_{i,j}$ be the line with endpoints $p_i$ and $p_j$, $e_{0,k}$ be the line with endpoints $p_k$ and the origin $0$, and $p_{i,j,k}$ be the intersection point between the two lines $e_{i,j}$ and $e_{0,k}$. The reflected scan point $p'_k$ with respect to the $k$th range measurement $z_k$ is calculated such that

$$\delta(0, p'_k) = \delta(0, p_{i,j,k}) + \delta(p_{i,j,k}, p_k) \tag{6}$$

$$\angle(0, p_{i,j,k}, p_i) = \angle(p_j, p_{i,j,k}, p'_k) \tag{7}$$

where $\delta(\cdot, \cdot)$ is the Euclidean distance function and $\angle(p_1, p_2, p_3)$ is the function calculating the angle between the vectors $\overrightarrow{p_1 p_2}$ and $\overrightarrow{p_3 p_2}$. The process is illustrated in Fig. 5. The likelihood $\lambda_{i,j}$ of the reflected scan points $\{p'_{i+1}, p'_{i+2}, \ldots, p'_{j-1}\}$ with respect to the local map around the robot is then calculated using the likelihood field sensor model. A gap $G_{i,j}$ with likelihood $\lambda_{i,j}$ greater than or equal to $\tau_\lambda$ is considered likely to be a mirror $M_{i,j}$, where $\tau_\lambda$ is a predefined constant probability. In our experiments, $\tau_\lambda$ is 0.5, meaning that a gap with at least 50% confidence is considered as a possible mirror location.
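As a concrete sketch of both steps (variable names and array layout are ours, not from the paper): the first function below scans for depth discontinuities satisfying (3)-(5); the second restores the in-gap points by reflecting them across the candidate mirror line, which realizes the distance and angle conditions (6)-(7) for a planar mirror:

```python
import numpy as np

def find_gaps(ranges, tau_d=1.5):
    """Return index pairs (i, j) whose in-between readings form a gap (3)-(5)."""
    gaps, i = [], None
    for k in range(len(ranges) - 1):
        step = ranges[k + 1] - ranges[k]
        if step > tau_d:                 # (3): depth jumps away from the robot
            i = k
        elif step < -tau_d and i is not None:
            gaps.append((i, k + 1))      # (4): depth jumps back toward the robot
            i = None                     # (5): intermediate steps stayed <= tau_d
    return gaps

def restore_reflected(points, p_i, p_j):
    """Reflect the virtual (behind-mirror) points across the mirror line
    through p_i and p_j, recovering the true object locations."""
    pts = np.asarray(points, dtype=float)
    d = (p_j - p_i) / np.linalg.norm(p_j - p_i)   # unit direction of the mirror
    feet = p_i + np.outer((pts - p_i) @ d, d)     # perpendicular feet on the line
    return 2.0 * feet - pts                        # mirror-symmetric restoration
```

Reflecting a point across the mirror line is the same construction as (6)-(7): the path length through the intersection point is preserved and the angles of incidence and reflection are equal.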

C. Representation

To incorporate temporal integration, a mirror location has to be represented properly so that the uncertainty can be taken into account.

Fig. 6. Instability of measurements around a mirror. Laser scans at different time steps are shown. The scene is the same as that shown in Fig. 2(b), where a mirror is placed on the right-hand side of the corridor. The robot is at the origin and heads toward the positive x-axis. The rectangles indicate the true mirror locations. (a) Almost all of the laser beams are reflected back directly from the mirror. (b) Laser beams are missing due to the long travel distance and possibly multiple reflections of the light. (c) Laser beams are reflected off from the mirror surface.

Intuitively, a mirror location is a line segment and can be described with its endpoints. A filtering algorithm updates the two endpoints with the associated mirror measurement separately. However, whether a laser beam is reflected back or reflected off is highly relevant to the smoothness of the mirror surface and the angle of incidence. The distance between the endpoints of a mirror prediction is never longer than the true distance. Accompanying the basic light property, the observed endpoints are, almost surely, not the true endpoints of the mirror. The instability of the measurements around mirrors is illustrated in Fig. 6. Instead of storing the endpoints of a mirror measurement directly in the state vector, we propose to represent the mirror with a line model and store the corresponding endpoints separately. In Section V, this property will be further used to facilitate the process of estimating the endpoints of a mirror.

We propose to represent mirrors as line segments. In the state vector, a line segment is represented by the angle and the distance of the closest point on the line to the origin of the robot frame. The endpoints of a line segment are not placed within the state vector, but stored separately. The mean vector of the line segment of $M_{i,j}$ with respect to the robot frame is given as

$$\mu^R_{M_{i,j}} = \begin{pmatrix} \alpha^R_{M_{i,j}} \\ r^R_{M_{i,j}} \end{pmatrix} = \begin{pmatrix} \arctan\left( y_{i,j,k} / x_{i,j,k} \right) \\ \sqrt{x_{i,j,k}^2 + y_{i,j,k}^2} \end{pmatrix} \tag{8}$$

where $x_{i,j,k}$ and $y_{i,j,k}$ are the $xy$ coordinates of the closest point on the line to the origin.

Image registration is the process of transforming different sets of data, acquired at different times or from different perspectives, into one coordinate system. We propose to exploit the ICP algorithm to estimate the uncertainty of a mirror prediction. By matching the reflected scan points with the whole laser scan, the displacement, including translation and rotation, between the reflected scan points and the environment is calculated. However, adjusting the four parameters of a mirror prediction, two parameters for the line model and two parameters for the endpoints, using the three parameters of the displacement is infeasible. Note that a point on a line has 1 DOF. Instead of using the registration result to refine a mirror prediction, the displacement is utilized to calculate the covariance matrix of a mirror prediction, which can be expressed as

$$\Sigma^R_{M_{i,j}} = \begin{pmatrix} \sigma_\alpha^2 + \epsilon_\theta^2 & 0 \\ 0 & \sigma_r^2 + \epsilon_x^2 + \epsilon_y^2 \end{pmatrix} \tag{9}$$

where $\sigma_\alpha$ and $\sigma_r$ are predetermined values of the measurement noise for the covariance matrix, and $\epsilon_x$, $\epsilon_y$, and $\epsilon_\theta$ are the registration results using the ICP algorithm by which $\{p'_{i+1}, p'_{i+2}, \ldots, p'_{j-1}\}$ and the whole laser scan are aligned. The values of $\sigma_\alpha$ and $\sigma_r$ can be obtained by taking into account the modeled uncertainty sources. In our experiments, $\sigma_\alpha$ is 3° and $\sigma_r$ is 0.2 m. Fig. 7 illustrates the mirror detection results in which the gaps in the laser scans are identified.
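For concreteness, a sketch of how (8) and (9) can be computed; the helper names are ours, the noise defaults follow the values quoted in the text, and arctan2 is used to resolve the quadrant of the text's arctan(y/x):

```python
import numpy as np

def line_model(p_i, p_j):
    """Angle and distance (alpha, r) of the closest point on the infinite
    line through p_i and p_j to the origin of the robot frame (eq. (8))."""
    d = (p_j - p_i) / np.linalg.norm(p_j - p_i)
    closest = p_i - (p_i @ d) * d        # foot of the perpendicular from origin
    return np.arctan2(closest[1], closest[0]), np.hypot(*closest)

def prediction_covariance(eps_x, eps_y, eps_theta,
                          sigma_alpha=np.deg2rad(3.0), sigma_r=0.2):
    """Covariance of a mirror prediction (eq. (9)): fixed measurement noise
    inflated by the residual ICP displacement (eps_x, eps_y, eps_theta)."""
    return np.diag([sigma_alpha**2 + eps_theta**2,
                    sigma_r**2 + eps_x**2 + eps_y**2])
```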

D. Complexity Analysis

The mirror detector requires $O(|z|^2)$ operations in the general case and $O(|z|^3)$ operations in the worst case at each time step, where $|z|$ denotes the cardinality of the observation $z$, as shown in Table II. The mirror prediction stage takes $O(|z|^2)$ time to identify all gaps within a laser scan. The mirror verification stage takes constant time to calculate the Cartesian coordinates of the endpoints and $O(|z|)$ time to restore the reflected scan points for each gap. There are $O(|z|)$ gaps in this stage. The mirror representation stage takes constant time to calculate the mean vector and the covariance matrix of a mirror prediction, and $O(|z|^2)$ time to perform scan matching. There are $O(|z|)$ mirror predictions in this stage. As there are usually only a couple of mirrors around an environment, the number of gaps and mirror predictions in the verification stage and the representation stage, respectively, can be bounded by some constant. The overall time complexity in the general case is thus greatly reduced to $O(|z|^2)$, which is sufficient for real-time applications. The step-by-step algorithm is shown in Algorithm 1.

Fig. 7. Mirror detection. Mirrors are detected using the property of mirror symmetry. The scene of (a) and (b) is the same as that shown in Fig. 2(b), and the scene of (c) is the same as that shown in Fig. 2(f). The robot is at the origin and heads toward the positive x-axis. Dots are the raw range measurements, where the heavy dots (in red) are the measurements not identified as the mirror images, and the light dots (in cyan) are the measurements with false range information due to mirror reflection. Lines indicate the predicted line models of the mirrors, where the thick lines (in black) are the verified mirror locations and the thin lines (in magenta) are not. Circles and crosses are the restored reflected points with respect to the verified mirror locations and the predicted mirror locations, respectively.

TABLE II: PERFORMANCE ANALYSIS OF THE MIRROR DETECTION STAGE

V. MIRROR TRACKING

In this section, we describe a method to update mirror locations for temporal integration. Bayesian filtering is a general probabilistic approach for estimating an unknown probability density function over time using a mathematical process model and incoming observations. Mirror predictions at different time steps are integrated using an extended Kalman filter (EKF), which is inherently a nonlinear Bayesian filter. As the endpoints are not stored in the state vector, the update stage is separated into two stages: the line update stage and the endpoints update stage. The line update stage integrates mirror predictions temporally using EKFs. The endpoints update stage updates the endpoints of a mirror by exploiting the basic light property.

A. Line Update

The mean vector and the covariance matrix of a line model are first transformed into global coordinates, which are given as

$$\mu_{M_{i,j}} = \begin{pmatrix} \alpha_{M_{i,j}} \\ r_{M_{i,j}} \end{pmatrix} = \begin{pmatrix} \alpha^R_{M_{i,j}} + \theta_t \\ r^R_{M_{i,j}} + x_t \cos(\alpha^R_{M_{i,j}} + \theta_t) + y_t \sin(\alpha^R_{M_{i,j}} + \theta_t) \end{pmatrix} \tag{10}$$

$$\Sigma_{M_{i,j}} = J_x P_t J_x^T + J_M \Sigma^R_{M_{i,j}} J_M^T \tag{11}$$
where $J_x$ and $J_M$ are the Jacobian matrices of the line model with respect to the robot pose $x_t = (x_t\ y_t\ \theta_t)^T$ and the line measurement, respectively, and $P_t$ is the covariance matrix of the robot pose. Data association is implemented using a validation gate defined by the Mahalanobis distance. The standard EKF process is then applied to update the mean vector and the covariance matrix of a mirror estimate.
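A compact sketch of the transform and the covariance propagation in (10) and (11), with the Jacobians written out; this is our reading of the equations (the paper does not spell out the Jacobian entries), assuming $P_t$ is the 3×3 pose covariance:

```python
import numpy as np

def line_to_global(alpha_R, r_R, pose, P_t, Sigma_R):
    """Transform a line (alpha, r) from the robot frame to global
    coordinates (eq. (10)) and propagate its uncertainty (eq. (11))."""
    x_t, y_t, theta_t = pose
    alpha = alpha_R + theta_t
    r = r_R + x_t * np.cos(alpha) + y_t * np.sin(alpha)
    # Shared partial derivative of r w.r.t. both alpha_R and theta_t
    dr_dalpha = -x_t * np.sin(alpha) + y_t * np.cos(alpha)
    J_x = np.array([[0.0,           0.0,           1.0],
                    [np.cos(alpha), np.sin(alpha), dr_dalpha]])  # w.r.t. pose
    J_M = np.array([[1.0,       0.0],
                    [dr_dalpha, 1.0]])                           # w.r.t. line
    Sigma = J_x @ P_t @ J_x.T + J_M @ Sigma_R @ J_M.T
    return np.array([alpha, r]), Sigma
```

The transformed mean and covariance can then feed the Mahalanobis validation gate and the standard EKF update described above.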

Fig. 8. Endpoints update. The dashed line shows the updated line model $e^{t+1}$ of a mirror at time $t+1$. The solid lines indicate the line models of the updated mirror estimate $M^t_{i,j}$ and the associated mirror measurement $M^{t+1}_{u,v}$. The thick lines show the corresponding line segments of $M^t_{i,j}$ and $M^{t+1}_{u,v}$. The point set $P'$, which contains the closest points from $M^t_{i,j}$ and $M^{t+1}_{u,v}$ to the line $e^{t+1}$, can be computed accordingly.

B. Endpoints Update

After the line model of a mirror estimate is updated, the endpoints of the mirror should be updated accordingly. Let $M^t_{i,j}$ be the updated mirror estimate at time $t$, $M^{t+1}_{u,v}$ be the associated mirror measurement at time $t+1$, $\{p^t_i, p^t_j\}$ and $\{p^{t+1}_u, p^{t+1}_v\}$ be the endpoints of $M^t_{i,j}$ and $M^{t+1}_{u,v}$, respectively, $M^{t+1}$ be the updated mirror estimate at time $t+1$, and $e^{t+1}$ be the corresponding line model of the updated mirror estimate. We can compute the point set $P' = \{p'^t_i, p'^t_j, p'^{t+1}_u, p'^{t+1}_v\}$, which includes the closest points from the points in $P = \{p^t_i, p^t_j, p^{t+1}_u, p^{t+1}_v\}$ to the line $e^{t+1}$. The process is illustrated in Fig. 8. As described in Section IV-C and illustrated in Fig. 6, the observed endpoints of a mirror are usually not the true counterparts, and thus, the distance between the endpoints of a mirror prediction is never longer than the true distance. We take advantage of this a priori knowledge to accommodate the phenomenon. The endpoints of the mirror estimate $M^{t+1}$ are obtained by finding the pair of points in $P'$ whose mutual distance is maximum, which can be expressed as

$$\left( p^{t+1}_1, p^{t+1}_2 \right) = \underset{p_1, p_2 \in P'}{\arg\max}\ \delta(p_1, p_2) \tag{12}$$

where $p^{t+1}_1$ and $p^{t+1}_2$ are the resulting endpoints of the mirror estimate $M^{t+1}$. Fig. 9 illustrates a mirror tracking result in which a mirror is correctly detected and tracked. As can be seen from Fig. 4(c), although the mirror detected with sensor fusion is spatially sparse, the proposed approach can accurately estimate the location of the mirror.
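A small sketch of this endpoints update, under our assumption that the updated line is parameterized by $(\alpha, r)$ as in Section IV-C: the four candidate endpoints are projected onto the updated line and the farthest-apart pair is kept, per (12):

```python
import numpy as np
from itertools import combinations

def update_endpoints(candidates, alpha, r):
    """Project the endpoints of the previous estimate and the new measurement
    onto the updated line model, then keep the pair with maximum separation."""
    n = np.array([np.cos(alpha), np.sin(alpha)])  # unit normal of the line
    d = np.array([-n[1], n[0]])                   # unit direction along the line
    base = r * n                                  # closest point on line to origin
    projected = [base + ((p - base) @ d) * d for p in np.asarray(candidates)]
    return max(combinations(projected, 2),
               key=lambda pair: np.linalg.norm(pair[0] - pair[1]))
```

Here `candidates` would hold the four points $\{p^t_i, p^t_j, p^{t+1}_u, p^{t+1}_v\}$, and the returned pair corresponds to $(p^{t+1}_1, p^{t+1}_2)$.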

C. Complexity Analysis

The mirror tracker requires $O(1)$ operations in the general case and $O(|z|)$ in the worst case, where $|z|$ denotes the cardinality of the observation $z$, as shown in Table III. The line update stage takes constant time to perform an EKF update for each of the mirror estimates. The endpoints update stage also takes constant time to update the endpoints of a mirror. There are $O(|z|)$ mirror estimates in this stage. Similarly, as there are usually only a couple of mirrors around an environment, the number of mirror estimates in the line update stage and the endpoints update stage can be bounded by some constant. The overall time complexity in the general case is greatly reduced to $O(1)$, which is sufficient for real-time applications. The step-by-step algorithm is shown in Algorithm 2.

VI. EXPERIMENTAL RESULTS

A. Mapping, Localization, and Navigation

First, we describe the mapping, localization, and navigation problems in environments with mirrors. Without the mirror detection and tracking process, mirror images are considered as parts of real environments. As an occupancy grid map represents the configuration space (C-space) of a robot, the inconsistency between the real environment and the map containing mirror images makes robot navigation problematic. Robots should be capable of figuring out mirror locations and avoid entering the fake areas formed due to mirror reflection. To deal with the phenomenon of mirror reflection, the mirror images within a map have to be detected and corrected accordingly. In this paper, mirrors are detected and tracked while the SLAM process is performed. The map is further refined by incorporating mirror information such that mirror images are eliminated. Accompanying the postprocessing process [16], each measurement perceiving the distance between the robot and a mirror is updated as the distance to the mirror surface. Figs. 10 and 11 illustrate the postprocessing process. In Figs. 10(a) and 11(a), the maps built in environments with mirrors are shown. The mirror locations, which were estimated while the robot drove by, are also visualized. As can be seen, the maps that contain mirror images are inconsistent with the real environments. In Figs. 10(b) and 11(b), the maps incorporating


Fig. 9. Mirror tracking. The scene is the same as that shown in Fig. 3, where the robot is at around place B. The maps are depicted with respect to a global coordinate system. The occupancy grid map of the environment is shown, where the rectangle (filled with blue) indicates the robot pose, the lines (in red) are the line models of the mirrors, the ellipses (in green) show the 2σ covariances of the line models, and the thick lines (in red) indicate the mirror locations. (a) Mirror tracking result. (b) Enlargement of (a). (c) Enlargement of (b).

Fig. 10. Occupancy grid maps without and with mirror information incorporated. The scene is the same as that shown in Fig. 2(b). The maps are depicted with respect to a global coordinate system. Mirror estimates are eliminated upon divergence. (a) and (b) Occupancy grid maps without and with mirror information incorporated, where the lines (in red) are the line models of the mirrors, the ellipses (in green) show the 2σ covariances of the line models, and the thick lines (in red) indicate the mirror locations.

TABLE III: PERFORMANCE ANALYSIS OF THE MIRROR TRACKING STAGE

the mirror information are depicted. Mirror images are eliminated by correcting LIDAR measurements affected by mirrors. The false estimates are removed probabilistically by discarding uncertain mirror estimates. With the use of the proposed mirror detection and tracking process, the map can be estimated consistently without a priori knowledge of mirror locations. For mobile robot localization, such as EKF localization, Markov localization, and Monte Carlo localization, compared to the postprocessing process, the preprocessing process is required to take the mirror information into account. The preprocessing process eliminates the mirror image within a laser scan by applying the property of mirror symmetry, as described in Section IV-B. The updated LIDAR measurements are then used to perform the localization task.

B. Quantitative Evaluation

The feasibility of the proposed algorithm has been demonstrated using real data. Furthermore, we present a performance analysis of the proposed algorithm. In this experiment, the SICK LMS 100 LIDAR is used, whose angle of view is 270°. As the ground truth mirror locations are usually unobtainable, markers are placed at the boundary of the mirror. Two LIDARs are used to collect data whose observations are parallel to each other, as shown in Fig. 12. While one perceives a mirror image, the other can obtain the ground truth mirror location by observing the markers placed alongside. The two LIDARs are calibrated by calculating the mean of the displacements from matching empirical observations. To quantify the performance, we perform SLAM using data from the two LIDARs separately. There are seven datasets collected around the environment shown in Fig. 3. Each dataset contains about 500 observations. The ground truth mirror locations are annotated and taken into account in the mapping process for obtaining consistent mirror locations in the global coordinates. The maps can be slightly different from each other due to various noise sources. The resulting maps are aligned for a fair comparison and used to calculate the estimation error. Fig. 13 illustrates the calibrated observations and the resulting maps obtained from the LIDARs. As the maps are similar, only one estimated map and one ground truth map are depicted in Fig. 13(e) and (f). We define the overall error of a mirror estimate as the sum of the residuals between the estimated endpoints and the true endpoints, and define the angular error of a mirror estimate as the angular misalignment between the estimated line model and the true line model. Root-mean-squared error (RMSE) is used to evaluate the accuracy of our algorithm. In the experiment, the overall error is 0.12 m and the angular error is 0.47°. The majority of the error tends to be in the plane of the wall, and the angular misalignment of the estimated mirror location is small. This is mainly because of the instability of LIDAR measurements around a mirror. The predicted locations of the endpoints depend on whether the emitted photon is reflected back, reflected off, or missing. Mirror reflection can make the observed endpoints ambiguous. However, a LIDAR that offers high precision can provide accurate angular estimates of mirrors. Note that the error includes uncertainties from the SLAM process. Just as with solving the SLAM problem, the performance also depends on sensor characteristics and the environment. The experiment shows that the proposed approach is effective, even though various noise sources are involved.
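The paper gives no code for this evaluation; the sketch below is one plausible reading, with hypothetical helper names, reducing per-estimate endpoint residuals and line misalignment to RMSE values:

```python
import numpy as np

def mirror_errors(est_endpoints, true_endpoints, est_alpha, true_alpha):
    """Overall error: sum of endpoint residuals (m).
    Angular error: line misalignment (rad), wrapped to [0, pi/2]."""
    residuals = np.linalg.norm(np.asarray(est_endpoints)
                               - np.asarray(true_endpoints), axis=1)
    overall = residuals.sum()
    diff = est_alpha - true_alpha
    angular = abs((diff + np.pi / 2) % np.pi - np.pi / 2)  # lines are unoriented
    return overall, angular

def rmse(errors):
    """Root-mean-squared error over the per-estimate errors."""
    errors = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(errors**2))
```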

Fig. 11. Occupancy grid maps without and with mirror information incorporated. The scene is the same as that shown in Fig. 2(f). The maps are depicted with respect to a global coordinate system. The sensor data is collected at the demo room of Taiwan Shin Kong Security, in which there is a mirror pillar. (a) and (b) Occupancy grid maps without and with mirror information incorporated, where the lines (in red) are the line models of the mirrors, the ellipses (in green) show the 2σ covariances of the line models, and the thick lines (in red) indicate the mirror locations.

The case in which the LIDAR sees the robot itself in the mirror is also illustrated in Fig. 13(b) and (d). As can be seen, the LIDAR can detect a mirror when the angle of incidence of the emitted photon is zero. More than one mirror prediction is generated when the robot sees itself. By adopting the Bayesian framework, the spurious predictions are eliminated naturally through temporal integration of observations. The resulting mirror estimate is shown in Fig. 13(e).

VII. CONCLUSION AND FUTURE WORK

A. Conclusion

Making robots fully autonomous in a wide variety of environments is difficult. To our best knowledge, the solution to the problem of mirror reflection has not been addressed previously. The primary contribution of this paper is to introduce the mirror detection and tracking framework using only LIDAR information. The mirror detection method utilizes the property of mirror symmetry to calculate the confidence of a mirror prediction. The image registration technique is used for evaluating the uncertainty of a mirror prediction. The proposed endpoints update strategy employs the fact that the distance between the endpoints of a mirror prediction is never longer than the true distance. The proposed approach can be seamlessly integrated into the mobile robot localization framework and the occupancy grid map representation. The ample experimental results using real data from a LIDAR have demonstrated the feasibility and effectiveness of the proposed approach.

B. Future Work

In this paper, we use a heuristic method to guess possible mirror locations in the continuous Cartesian space. It relies on the fact that mirrors are usually framed or placed along a wall. If the boundary of a mirror is not apparent or the mirror is not placed along anything else, the proposed approach will fail. Sensor fusion is versatile in its capability to deal with diversified surfaces, but less precise. On the other hand, the major drawback of LIDAR-only approaches can be their incapability to detect transparent objects, due to the nature of light. Future work will include an approach to guess the possible mirror locations using sensor fusion. Because of the inaccuracy of sonar readings, the extraction and reconstruction of disjointed line segments is required to generate a mirror prediction. Indistinguishability between mirrors and windows in sensor fusion can also be resolved through the use of sensor fusion and the proposed mirror detection and tracking process. In addition, it would also be of interest to study some of the special cases: multiple reflections of mirrors, curved mirrors, and mirror-symmetric scenes.

ACKNOWLEDGMENT

The authors would like to thank the Editor, the Technical Editor, and the anonymous reviewers for their time and for the constructive comments that improved the manuscript.
Fig. 13. Observations from the LIDARs. (a)-(d) The robot is shown by the rectangle (in black) and heads toward the positive x-axis. Dots are the range measurements containing mirror images, where the heavy dots (in red) are the measurements not identified as the mirror images, and the light dots (in cyan) are the measurements with false range information due to mirror reflection. Lines (in black) indicate the detected mirror locations. Circles (in green) are the range measurements used for performance evaluation. (e) The estimated map is shown, in which the thick (red) line indicates the mirror location. (f) The ground truth map is depicted, where the ground truth endpoints of the mirror are shown as (red) crosses. The maps are depicted with respect to a global coordinate system.

Fig. 12. Experimental setup. Two LIDARs whose observations are parallel to each other are mounted. Markers are placed at the boundary of the mirror.

REFERENCES

[1] E. Asadi and M. Bozorg, "A decentralized architecture for simultaneous localization and mapping," IEEE/ASME Trans. Mechatronics, vol. 14, no. 1, pp. 64-71, Feb. 2009.
[2] A. Franchi, L. Freda, G. Oriolo, and M. Vendittelli, "The sensor-based random graph method for cooperative robot exploration," IEEE/ASME Trans. Mechatronics, vol. 14, no. 2, pp. 163-175, Apr. 2009.
[3] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. N. Clark, J. Dolan, D. Duggins, T. Galatali, C. Geyer, M. Gittleman, S. Harbaugh, M. Hebert, T. M. Howard, S. Kolski, A. Kelly, M. Likhachev, M. McNaughton, N. Miller, K. Peterson, B. Pilnick, R. Rajkumar, P. Rybski, B. Salesky, Y.-W. Seo, S. Singh, J. Snider, A. Stentz, W. R. Whittaker, Z. Wolkowicki, J. Ziglar, H. Bae, T. Brown, D. Demitrish, B. Litkouhi, J. Nickolaou, V. Sadekar, W. Zhang, J. Struble, M. Taylor, M. Darms, and D. Ferguson, "Autonomous driving in urban environments: Boss and the urban challenge," J. Field Robot., vol. 25, no. 8, pp. 425-466, Jul. 2008.
[4] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L.-E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney, "Stanley, the robot that won the DARPA Grand Challenge," J. Field Robot., vol. 23, no. 9, pp. 661-692, Sep. 2006.
[5] C.-C. Wang, "Simultaneous localization, mapping and moving object tracking," Ph.D. dissertation, Robot. Inst., Carnegie Mellon Univ., Pittsburgh, PA, Apr. 2004.
[6] A. Diosi and L. Kleeman, "Advanced sonar and laser range finder fusion for simultaneous localization and mapping," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Sendai, Japan, Sep. 2004, pp. 1854-1859.
[7] K.-W. Jorg, "World modeling for an autonomous mobile robot using heterogeneous sensor information," Robot. Auton. Syst., vol. 14, pp. 159-170, 1995.
[8] X. Chen, D. Wang, and H. Li, "A hybrid method of reconstructing 3D airfoil profile from incomplete and corrupted optical scans," Int. J. Mechatron. Manuf. Syst., vol. 2, no. 1/2, pp. 39-57, 2009.
[9] T. Oggier, B. Büttgen, F. Lustenberger, G. Becker, B. Rüegg, and A. Hodac, "SwissRanger SR3000 and first experiences based on miniaturized 3D-TOF cameras," presented at the 1st Range Imag. Res. Day, ETH Zurich, Zurich, Switzerland, Sep. 2005.
[10] K. Konolige, J. Augenbraun, N. Donaldson, C. Fiebig, and P. Shah, "A low-cost laser distance sensor," in Proc. IEEE Int. Conf. Robot. Autom., Pasadena, CA, May 2008, pp. 3002-3008.
[11] A. Petrovskaya and S. Thrun, "Model based vehicle tracking for autonomous driving in urban environments," presented at Robot.: Sci. Syst. IV, Zurich, Switzerland, Jun. 2008.
[12] G. Dudek, P. Freedman, and I. Rekleitis, "Just-in-time sensing: Efficiently combining sonar and laser range data for exploring unknown worlds," in Proc. IEEE Int. Conf. Robot. Autom., Minneapolis, MN, Apr. 1996, pp. 667-671.
[13] A. Elfes, "Occupancy grids: A probabilistic framework for robot perception and navigation," Ph.D. dissertation, Electr. Comput. Eng./Robot. Inst., Carnegie Mellon Univ., Pittsburgh, PA, 1989.
[14] S. Thrun, "A probabilistic online mapping algorithm for teams of mobile robots," Int. J. Robot. Res., vol. 20, no. 5, pp. 335-363, Apr. 2001.
[15] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239-256, Feb. 1992.
[16] F. Lu and E. Milios, "Globally consistent range scan alignment for environment mapping," Auton. Robots, vol. 4, no. 4, pp. 333-349, 1997.

Shao-Wen Yang (S'07) received the B.Sc. degree in computer science from National Taiwan Ocean University, Keelung, Taiwan, in 2005. Currently, he is working toward the Ph.D. degree in the Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan. His research interests include mobile robot localization in highly dynamic environments, and simultaneous ego-motion estimation and moving object detection. Mr. Yang is a member of the IEEE Robotics and Automation Society.

Chieh-Chih Wang (S'02-M'05) received the B.S. and M.S. degrees from National Taiwan University, Taipei, Taiwan, in 1994 and 1996, respectively, and the Ph.D. degree in robotics from the School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, in 2004. He was with the Bayesian Vision Group at the National Aeronautics and Space Administration Ames Research Center and at Z+F, Inc., Pittsburgh. From 2004 to 2005, he was an Australian Research Council (ARC) Research Fellow of the ARC Centre of Excellence for Autonomous Systems and the Australian Centre for Field Robotics, University of Sydney. In 2005, he joined the Department of Computer Science and Information Engineering, National Taiwan University, where he is currently an Assistant Professor, and also with the Graduate Institute of Networking and Multimedia. His research interests include robotics, machine perception, and machine learning. Dr. Wang was the recipient of the Best Conference Paper Award at the 2003 IEEE International Conference on Robotics and Automation and the Best Reviewer Award at the 8th Asian Conference on Computer Vision (ACCV 2007).
