
Optik 124 (2013) 5357–5362


An improved building boundary extraction algorithm based on fusion of optical imagery and LIDAR data
Yong Li a,∗, Huayi Wu b, Ru An a, Hanwei Xu a, Qisheng He a, Jia Xu a
a School of Earth Sciences and Engineering, Hohai University, Nanjing 210098, China
b State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430079, China

Article history: Received 12 September 2012; Accepted 20 March 2013

Keywords: LIDAR; Optical image; Fusion; Boundary extraction; Building; Mathematical morphology

Abstract

This article presents a new method of automatic boundary extraction using LIDAR-optical fusion suited to handle diverse building shapes. The method makes full use of the complementary advantages of LIDAR data and optical imagery: different building features are extracted from the two data sources and fused to form the final complete building boundaries. First, the points of each roof patch are detected from the LIDAR point cloud. This process consists of four steps: filtering, building detection, wall point removal and roof patch detection. Second, initial building edges are extracted from the optical imagery using an improved Canny detector constrained by edge location information derived from the LIDAR point cloud in the form of edge buffer areas. Finally, the roof patches and initial edges are integrated by mathematical morphology to form the final complete building boundaries. No constraints or rules on building shape are imposed at any stage; the method is fully data-driven and suitable for any building shape. LIDAR data and aerial images of complex geographical environments are used to test the method. The experimental results demonstrate that our method can automatically extract accurate boundaries for buildings with complex shapes, and is also highly robust in complex environments.

© 2013 Elsevier GmbH. All rights reserved.

1. Introduction

Accurate building boundaries are important for diverse applications such as real estate, city planning, and disaster management. The automatic extraction of building boundaries is challenging due to building shape variability and surrounding environmental complexity. High-resolution optical imagery contains rich spectral and textural information, but is easily affected by many factors such as contrast, illumination and occlusion. Airborne Light Detection and Ranging (LIDAR) can rapidly acquire dense and precise height data over large areas by emitting and receiving laser pulses. Height changes are more suitable than spectral and texture changes for locating building boundaries. However, the horizontal accuracy of boundaries extracted from LIDAR data is poor because of the discontinuousness of the laser pulses.

Hence a number of methods have been developed to make use of LIDAR point clouds to extract building boundaries and solve the horizontal accuracy problem arising from laser footprint discontinuousness. These approaches are broadly divided into three categories. The first category simplifies and generalizes the coarse edges derived from raw data based on building shape hypotheses such as parallel or perpendicular edges [1–7]. However, it is difficult to judge whether the hypotheses are in conformity with reality. The second category obtains complete building boundaries by grouping extracted features such as corner points, line segments and roof planes [8–16]. The feature grouping process follows complex rules which are crucial for the success and robustness of this method, and this category of methods performs well mainly on regular buildings composed of straight lines [17,18]. The third category uses model primitives to determine building boundaries by fitting input data to the adopted models [19,20]. The adopted model primitives are often some regular shape or a designed model dataset, and it is difficult to accurately represent various irregular building shapes using predefined model primitives.

Considering the complementary advantages of LIDAR data and high-resolution imagery, the fusion of the two data sources is regarded as a promising strategy to extract high-quality building boundaries [8,9,12–15,20–25]. However, it is challenging to extract the correct features from optical images or LIDAR data of complex landscapes. Additionally, there is no general solution for fusing different features for automatic building boundary extraction of various building shapes such as curved, wavy, zigzag and other irregular shapes. Most existing fusion methods can only handle simple building shapes like polygons.

In order to automatically extract building boundaries from complex geographic environments, a new method of building boundary extraction by LIDAR-optical fusion is proposed in this paper. This

∗ Corresponding author. Tel.: +86 13655192670.
E-mail address: liyong@hhu.edu.cn (Y. Li).

0030-4026/$ – see front matter © 2013 Elsevier GmbH. All rights reserved.
http://dx.doi.org/10.1016/j.ijleo.2013.03.045
Fig. 1. Work flow of boundary extraction adaptive for complex building shape. (Flow chart: the LIDAR point cloud passes through filtering, building detection, wall point removal and roof patch detection; the aerial image passes through Gaussian smoothing and gradient calculation; edge information deriving and edge buffer area creation link the two branches, followed by non-maximal suppression in the edge buffer area and initial edge detection by edge tracking; finally, roof patches and initial edges are integrated by a closing operation and the complete boundary is extracted by mathematical morphology.)

Fig. 2. Edge location information derived from LIDAR data. (Legend: edge point; peripheral point; peripheral point with the height of the neighboring edge point.)
method is fully data-driven, and self-adaptive for diverse building shapes. This paper is organized as follows. The work flow and detailed procedure of the proposed method are explained in Section 2. Experiments are described and discussed in Section 3. Finally, the conclusions are given in Section 4.

2. Methodology

The work flow of this method is shown in Fig. 1. First, the points of each roof patch are derived from the LIDAR point cloud. This consists of four steps, namely filtering, building detection, wall point removal and roof patch detection, as seen in frame 1 of Fig. 1. Second, the initial edges are extracted from the image using the improved Canny detector, which is cued by the edge location information from the LIDAR point cloud in the form of edge buffer areas, as seen in frame 2 of Fig. 1. Finally, the roof patches and initial edges are integrated by mathematical morphology to form the final complete boundary, as seen in frame 3 of Fig. 1.

2.1. Extracting roof patches from LIDAR data

The LIDAR point cloud is divided into ground points and non-ground points, a process called filtering. Then building points are separated from the non-ground points. Many algorithms are available for filtering; this research utilizes the filtering method based on the morphological gradient proposed by Li and Wu [26]. The non-ground points derived by filtering include buildings, vehicles, vegetation and so on, and other objects such as vegetation and low objects are often attached to buildings. Here three steps are taken to detect building points. First, low object points are removed if the difference between the object height and the adjacent ground height is less than a predefined threshold, e.g. 2 m. Second, a morphological opening operation is carried out, which breaks away the non-building portions that are attached to buildings. Third, the connected regions are detected by a region growing technique, and those regions that are larger than a certain threshold are regarded as buildings.

The wall points need to be removed because they may influence the subsequent extraction of roof patches and the determination of building edge points. A building point is judged to be a wall point if there are a much higher point, a much lower point and few points of approximately the same height in that point's neighborhood. That means a wall point is both a low jump point and a high jump point, and is an isolated point at the same time.

Each roof patch can be extracted from the building points after removing the wall points. The point sets with gradual height changes are detected by the region growing technique, and a point set whose size is larger than a certain threshold is determined to be a building roof patch, which avoids the influence of roof details such as antennas, chimneys, etc.

2.2. Extracting initial edges from aerial images

The Canny edge detector [27] is widely used in digital image processing. It works in four stages. First, the image is smoothed by Gaussian convolution. Second, a 2-D first derivative operator is applied to the smoothed image to calculate the gradient magnitude and direction. Third, non-maximal suppression (NMS) is imposed on the gradient image. Finally, an edge tracking process controlled by two thresholds is applied.

Non-maximal suppression and edge tracking are mainly responsible for the success of the Canny edge detector. But if these two steps are carried out using only images, the determination of edges is random and easily influenced by false edge information such as shadows; in addition, some building edges may be missed, as shown in Fig. 7. So the first two steps, namely Gaussian smoothing and gradient calculation, are implemented on the aerial image, while the non-maximal suppression and edge tracking processes are implemented by combining LIDAR points and images in the following steps.

In order to reduce the randomness of edge extraction in images, the edge location information derived from the LIDAR data is used to specify the working areas of the non-maximal suppression and edge tracking processes. Since roof patches have been extracted as elaborated in Section 2.1, the edge points of each roof patch can be located by neighborhood analysis. That is, if a point outside the roof patch lies in the neighborhood of a point inside the roof patch, the inside point is marked as an edge point and the outside point is marked as a peripheral point. Considering the height displacement of perspective projection, the height of the neighboring edge point of each peripheral point is recorded, as shown in Fig. 2. For each roof patch, the edge buffer areas are created from the edge points and the peripheral points with the heights of the neighboring edge points. Since the LIDAR and image data have been aligned relative to a common reference frame by the photogrammetric technique [28], those points are first back-projected to the image plane as shown in Fig. 3(a). Considering the discontinuousness of the LIDAR laser pulses, the real location of building edges must lie between the two kinds of points, as the overlay of the two buffer areas in Fig. 3(a) shows. So buffer area 1 is created around the edge points as in Fig. 3(b), and buffer area 2 is created around the peripheral points as in Fig. 3(c). The width of the buffer areas is the average point spacing. As shown in Fig. 3(d), the two kinds of buffer areas are integrated to form the
Fig. 3. Edge buffer area creation. (Panels (a)–(d), with buffer areas labeled 1–4.)

terminal buffer area, which is composed of three portions. The overlay area, i.e. buffer area 3, contains true building edges. Buffer area 2 may be missing because of hollows in the LIDAR data caused by oblique laser scan angles, aboveground object obstruction, unstable object reflectance, etc. So non-maximal suppression is carried out in buffer area 1, and the edge tracking process starts from buffer area 3 and is then extended into buffer area 1.

2.3. Forming complete boundaries by fusing roof patches and initial edges

Although the extraction of initial edges is constrained by the edge buffer areas, it is still difficult to extract continuous and complete edges solely from images. There are broken lines and noise in the initial edges due to the complex spectral and texture characteristics of high-resolution images. LIDAR point clouds implicitly provide a closed edge around each roof patch. So the initial edges extracted from images can be fused with the roof patch extracted from the LIDAR data to form the final complete boundaries, as in Fig. 4.

First, the initial edges near the roof patch are selected as features for fusion. The distance between the real building edges and the roof patch is generally less than the average point spacing of the LIDAR point cloud. In Fig. 4(a), lines 1–4 are the initial edges extracted from the image. The distances between lines 1, 3, 4 and the roof patch are less than the average point spacing, so they are selected for fusion as in Fig. 4(b).

Second, a morphological closing operation, a combination of dilation and erosion operations, is applied to the roof patch and the selected initial edges. The shaded areas in Fig. 4(c) and (d) illustrate the results of the dilation and erosion operations, respectively. The two kinds of features derived from the two different data sources are thus naturally integrated into a single entity.

Finally, the complete boundary is determined by the mathematical morphology boundary extraction method [29] following the formula:

β(A) = A − (A ⊖ B)    (1)

where A is the entity derived by the morphological closing operation integrating the roof patch and initial edges, shown by the shaded area in Fig. 4(e); B is a structuring element with the size of 3 × 3 pixels; A ⊖ B denotes A eroded with the structuring element B, whose result is shown as the dashed polygon in Fig. 4(e); and β(A) is the final complete building boundary, as in Fig. 4(f).

Fig. 4. Complete boundary extraction by fusing the roof patch and initial edges. (Legend: points on roof patch; initial edges; morphological operation; erosion operation for edge extraction by mathematical morphology.)

Comparing Fig. 4(a) and (f), it can be seen that the different features are well fused, forming a complete building boundary. On one hand, the broken initial edges extracted from the image are selected and connected using the roof patch information. On the other hand, the final boundary is improved in comparison to the boundary derived solely from the points of the roof patch. Moreover, the whole procedure is fully data-driven.

3. Experimental results and discussion

In order to test the performance of the method proposed in this paper, LIDAR-optical data from a real geographical environment were used in experiments. The data cover part of the downtown area of the City of Toronto, Ontario, Canada, as shown in Fig. 5. The LIDAR data were acquired by an Optech ALTM 3100 system. The average density of the point cloud is 0.8 points/m2. Fig. 5(b) is the digital surface model (DSM) generated from the LIDAR data. The aerial image resolution is 1 m, as shown in Fig. 5(a). These two kinds of data have been aligned relative to a common reference frame with the method in [28]. Fig. 5(c) is the LIDAR point cloud shown in the colors of the corresponding pixels. As can be seen in Fig. 5, the test area contains numerous buildings of complex shapes, with complex surrounding environments. The aerial image shows the detailed building outlines, as well as complex spectral and texture characteristics such as contrast, illumination and shadow, while the DSM derived from the LIDAR data shows the jagged edges caused by the discontinuousness of the laser pulses. The test data are thus well suited to evaluating a building boundary extraction method adaptive to diverse building shapes.
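The fusion and boundary-extraction step of Section 2.3 can be sketched with standard binary morphology. The sketch below is an illustrative reimplementation using NumPy and SciPy, not the authors' code; it assumes the roof patch and selected initial edges have already been rasterized onto a common boolean grid, and uses the 3 × 3 structuring element B from Eq. (1).

```python
import numpy as np
from scipy import ndimage


def complete_boundary(roof_patch, initial_edges):
    """Fuse a rasterized roof patch with initial image edges and return a
    closed, one-pixel-wide boundary via Eq. (1): beta(A) = A - (A erode B)."""
    se = np.ones((3, 3), dtype=bool)  # structuring element B, 3 x 3 pixels

    # Morphological closing (dilation followed by erosion) merges the two
    # feature sets into a single entity A.
    A = ndimage.binary_closing(roof_patch | initial_edges, structure=se)

    # Boundary extraction: subtract the erosion of A from A itself.
    return A & ~ndimage.binary_erosion(A, structure=se)
```

The closing connects the fragmentary edges to the roof patch, and the final expression implements β(A) = A − (A ⊖ B), leaving only the one-pixel-wide ring of boundary pixels.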
Fig. 5. Test data. (a) Aerial image; (b) DSM generated from LIDAR data; (c) the LIDAR point cloud in the colors of the corresponding pixels.

The LIDAR point cloud was divided into ground points and non-ground points after filtering, as shown in Fig. 6(a). Building points were detected from the non-ground points by the three steps described in Section 2.1, as in Fig. 6(b). Fig. 6(c) shows the building points after removing wall points, from which each roof patch can be extracted according to height continuousness. Fig. 6(d) shows the detected edge points and peripheral points of each roof patch, which indicate the approximate location of the real edges.

Fig. 6. Processing of the LIDAR data. (a) The result of filtering (white: ground points; green: aboveground points); (b) the result of building detection (red: building points); (c) building points after removing wall points (purple: wall points); (d) edge points and peripheral points of each roof patch (orange: edge points; blue: peripheral points). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

During the extraction of initial edges from the aerial image, the improved Canny detector was employed as introduced in Section 2.2. After Gaussian smoothing and gradient calculation were implemented, the non-maximal suppression and edge tracking processes were applied to the edge buffer areas created by projecting the edge points and peripheral points onto image space, as shown in Fig. 7(a). Fig. 7(b) illustrates the initial edges extracted by the improved Canny detector. For comparison, Fig. 7(c) shows the result of the traditional Canny detector on the whole image, and Fig. 7(d) shows those of its edges appearing in the buffer areas. It is obvious that the traditional Canny detector misses many building edges because of the influence of the surrounding spectrum and texture. The purpose of non-maximal suppression is to ensure a single response along the gradient direction, so that the real edges indeed run along the buffer areas and lie in their overlay area. So, using the edge buffer areas as cues, the improved Canny detector can extract edges more effectively, as in Fig. 7.

Fig. 7. Extraction of initial edges from the image. (a) Edge buffer areas; (b) edges extracted by the improved Canny detector; (c) edges extracted by the traditional Canny detector; (d) edges extracted by the traditional Canny detector appearing in the buffer areas.

The roof patches and initial edges were fused to form the final complete boundaries by mathematical morphology as described in Section 2.3. The overlay of roof patches and initial edges is shown in Fig. 8(a). It can be seen that there are some hollows in the roof patches because the density of the point cloud does not match the image resolution, a common occurrence in practical applications. The initial edges are still fragmentary, although the improved Canny detector yielded better results than the traditional Canny detector. As shown in Fig. 8(b), each roof patch and its surrounding initial edges were integrated into an entity through a morphological closing operation. Then the final complete boundaries were determined by the boundary extraction method of mathematical morphology, as in Fig. 8(c). The final boundaries are closed and one pixel in width, and can be used for large-scale cartography or three-dimensional modeling with the heights of the roof patch points.

In order to make a quantitative assessment of boundary accuracy, 26 pairs of check points were selected from the LIDAR point cloud and the aerial image, respectively, as shown in Fig. 9. The check points lie on building corners, which are easily distinguished. Because the final boundaries are compensated by the initial edges extracted from the image, the check points from the LIDAR point cloud may not lie on the final boundaries after projection. Therefore the pixels of the final boundaries determined by the check points from the LIDAR point cloud, i.e. the pixel of the final boundary nearest to each projected LIDAR check point, were compared with the corresponding check points of the image. The Root Mean Square Errors (RMSE) in the x and y directions reached 0.55 and 0.39 pixels, respectively, as shown in Table 1, which is a high boundary extraction accuracy considering the accuracy of the raw data.

Table 1
The errors of the check points (pixels).

Check point   1   2   3   4   5   6   7   8   9  10  11  12  13
x             0   0   0   0   0   0   1   0   0   0   1   0   0
y             0   0   0   0  −1   0   0   0   0   0   1   1   0

Check point  14  15  16  17  18  19  20  21  22  23  24  25  26   RMSE
x             0   0   1   0   0   2   0   0   0   0   0   0   1   0.55
y             0   0   0   0   0   1   0   0   0   0   0   0   0   0.39
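As a quick consistency check, the RMSE values in Table 1 follow directly from the listed per-point errors, assuming the usual definition RMSE = sqrt(mean(e²)) over the 26 check points:

```python
import math

# Per-check-point boundary errors in pixels, transcribed from Table 1.
x_err = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
         0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 1]
y_err = [0, 0, 0, 0, -1, 0, 0, 0, 0, 0, 1, 1, 0,
         0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]


def rmse(errors):
    # Root mean square error over all check points.
    return math.sqrt(sum(e * e for e in errors) / len(errors))


print(round(rmse(x_err), 2), round(rmse(y_err), 2))  # 0.55 0.39
```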

4. Conclusions

Fig. 8. Extraction of the final complete boundaries by fusing roof patches and initial edges. (a) The overlay of the roof patches and initial edges (gray: roof patches; black: initial edges); (b) integration of patches and edges by the morphological closing operation; (c) the final complete boundaries.

Building boundaries are an important kind of geospatial information. Automatic extraction of accurate building boundaries is a challenging task due to building shape variability and surrounding environment complexity. In order to deal with various buildings of complex shapes, this paper presents a new method for automatic boundary extraction by LIDAR-optical fusion. The following conclusions can be drawn:

This method makes full use of the complementary advantages of LIDAR data and optical images. Different building features are extracted from the two data sources and fused to form the final complete building boundaries.

The whole procedure does not impose any constraint on building shape. The method is fully data-driven and self-adaptive for diverse building shapes.

The initial edges are extracted from the image using the improved Canny detector constrained by the edge location information derived from the LIDAR point cloud in the form of edge buffer areas, yielding more efficient edge extraction.

An innovative general strategy based on mathematical morphology was presented to fuse the features from the different data sources into complete, highly detailed building boundaries.

The final boundaries are closed and one pixel in width, and can then be used for large-scale cartography or three-dimensional modeling with the height information of the LIDAR data.

There are still some problems to be studied in future work. The final boundaries are sets of neighboring pixels with much data redundancy, so the boundaries need further simplification. In addition, roof surface changes also need to be represented by a suitable method for detailed 3D modeling.
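One common approach to the boundary simplification mentioned above is the Ramer–Douglas–Peucker algorithm, which keeps only the vertices that deviate from a straight chord by more than a tolerance. This generic sketch is offered as one possible follow-up, not as part of the proposed method:

```python
import math


def _point_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / norm


def douglas_peucker(points, epsilon):
    """Simplify an ordered polyline: keep the endpoints, and recurse on the
    point farthest from the chord if it deviates by more than epsilon."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

For a one-pixel-wide boundary, the pixel chain would first be ordered into a polyline; epsilon could then be tied to the LIDAR point spacing so that the simplification stays within the accuracy of the raw data.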

Acknowledgments

This work was supported by the National Natural Science Foundation of China (41101374; 41021061; 41271361; 41101308) and the Fundamental Research Funds for the Central Universities (2011B06614).

Fig. 9. Selecting check points for quantitative assessment. (a) The distribution of check points in the LIDAR data (white: ground points; red: building points; green: non-building points; purple: wall points; yellow cross: check points); (b) a local zoom of Fig. 9(a); (c) the distribution of check points in the image (blue cross: check points); (d) the final complete boundaries. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

References

[1] H.G. Maas, G. Vosselman, Two algorithms for extracting building models from raw laser altimetry data, ISPRS J. Photogramm. Remote Sens. 54 (1999) 153–163.
[2] R.J. Ma, DEM generation and building detection from Lidar data, Photogramm. Eng. Remote Sens. 71 (2005) 847–854.
[3] K. Zhang, J. Yan, S.-C. Chen, Automatic construction of building footprints from airborne LIDAR data, IEEE Trans. Geosci. Remote Sens. 44 (2006) 2523–2533.
[4] A. Sampath, J. Shan, Building boundary tracing and regularization from airborne lidar point clouds, Photogramm. Eng. Remote Sens. 73 (2007) 805–812.
[5] S.R. Lach, J.P. Kerekes, Robust extraction of exterior building boundaries from topographic lidar data, in: IEEE International Geoscience and Remote Sensing Symposium, IGARSS, 2008, pp. 85–88.
[6] H. Neidhart, M. Sester, Extraction of building ground plans from LIDAR data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 37 (2008) 405–410.

[7] Y. Jwa, G. Sohn, V. Tao, W. Cho, An implicit geometric regularization of 3D building shape using airborne LIDAR data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 37 (2008) 69–75.
[8] L. Chen, T. Teo, Y. Shao, Y. Lai, J. Rau, Fusion of LIDAR data and optical imagery for building modeling, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 35 (2004) 732–737.
[9] L. Chen, T. Teo, C. Hsieh, J. Rau, Reconstruction of building models with curvilinear boundaries from laser scanner and aerial imagery, Adv. Image Video Technol. (2006) 24–33.
[10] J.Y. Rau, L.C. Chen, Robust reconstruction of building models from three-dimensional line segments, Photogramm. Eng. Remote Sens. 69 (2003) 181–188.
[11] B.B. Madhavan, C. Wang, H. Tanahashi, H. Hirayu, Y. Niwa, K. Yamamoto, K. Tachibana, T. Sasagawa, A computer vision based approach for 3D building modelling of airborne laser scanner DSM data, Comput. Environ. Urban Syst. 30 (2006) 54–77.
[12] G. Sohn, I. Dowman, Data fusion of high-resolution satellite imagery and LiDAR data for automatic building extraction, ISPRS J. Photogramm. Remote Sens. 62 (2007) 43–63.
[13] A.F. Elaksher, Fusion of hyperspectral images and lidar-based dems for coastal mapping, Opt. Lasers Eng. 46 (2008) 493–498.
[14] J.J. Jaw, C.C. Cheng, Building roof reconstruction by fusion laser range data and aerial images, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 37 (2008) 707–712.
[15] F. Rottensteiner, Automatic generation of high-quality building models from lidar data, IEEE Comput. Graph. Appl. 23 (2003) 42–50.
[16] M. Peternell, T. Steiner, Reconstruction of piecewise planar objects from point clouds, Comput.-Aid. Des. 36 (2004) 333–342.
[17] L.C. Chen, T.A. Teo, C.Y. Kuo, J.Y. Rau, Shaping polyhedral buildings by the fusion of vector maps and lidar point clouds, Photogramm. Eng. Remote Sens. 74 (2008) 1147–1157.
[18] G. Sohn, X.F. Huang, V. Tao, Using a binary space partitioning tree for reconstructing polyhedral building models from airborne lidar data, Photogramm. Eng. Remote Sens. 74 (2008) 1425–1438.
[19] G. Vosselman, P. Kessels, B. Gorte, The utilisation of airborne laser scanning for mapping, Int. J. Appl. Earth Observ. Geoinform. 6 (2005) 177–186.
[20] J. Hu, S. You, U. Neumann, Integrating LiDAR, aerial image and ground images for complete urban building modeling, in: The Third International Symposium on 3D Data Processing, Visualization, and Transmission, 2006, pp. 184–191.
[21] G.Q. Zhou, C. Song, J. Simmers, P. Cheng, Urban 3D GIS from LiDAR and digital aerial images, Comput. Geosci. 30 (2004) 345–353.
[22] C. Brenner, Building reconstruction from images and laser scanning, Int. J. Appl. Earth Observ. Geoinform. 6 (2005) 187–198.
[23] H.J. You, S.Q. Zhang, 3D building reconstruction from aerial CCD image and sparse laser sample data, Opt. Lasers Eng. 44 (2006) 555–566.
[24] A. Habib, J. Kersting, T. McCaffrey, A. Jarvis, Integration of LIDAR and airborne imagery for realistic visualization of 3D urban environments, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 37 (2008) 617–623.
[25] L. Cheng, J. Gong, X. Chen, P. Han, Building boundary extraction from high resolution imagery and LIDAR data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 37 (2008) 693–698.
[26] Y. Li, H. Wu, DEM extraction from LIDAR data by morphological gradient, in: The Fifth International Joint Conference on INC, IMS and IDC, NCM, IEEE, 2009, pp. 1301–1306.
[27] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (1986) 679–698.
[28] H. Wu, Y. Li, J. Li, J. Gong, A two-step displacement correction algorithm for registration of lidar point clouds and aerial images without orientation parameters, Photogramm. Eng. Remote Sens. 76 (2010) 1135–1145.
[29] R. Gonzalez, R. Woods, S. Eddins, Digital Image Processing Using MATLAB, Prentice Hall, Upper Saddle River, NJ, 2004.