Article history: Received 23 June 2017; Received in revised form 21 November 2017; Accepted 21 November 2017; Available online 17 March 2018.

Keywords: Three-dimensional point cloud data; Two-dimensional optical images; Structural information; Facade features; Three-dimensional building modeling.

Abstract: Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

© 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.isprsjprs.2017.11.015
Y. Wang et al. / ISPRS Journal of Photogrammetry and Remote Sensing 139 (2018) 146–153 147
only unstructured 3D PCD to extract facade features, their precision is often very low (Rutzinger et al., 2009; Zolanvari and Laefer, 2016).

In indirect extraction methods, to acquire features with high accuracy, other information, such as range images, optical images, and geo-information, is often included in the facade feature extraction process (Yang et al., 2014, 2016; Gilani et al., 2015). Indirect extraction methods can be divided into two types. The first type constructs range images from the 3D PCD, extracts features from the range images with the aid of digital-image processing, and then maps the range-image features back to the 3D PCD (Tan and Cheng, 2015). This type applies range images to extract facade features from 3D PCD and enables detection of the edge features of heterogeneous regions. However, it requires transformation of the 3D PCD into two-dimensional (2D) range images, and only the range information associated with the 3D PCD is preserved during the transformation, with loss of substantial amounts of boundary information. Therefore, this type of indirect extraction method misses many facade features.

The second type of indirect extraction method uses 2D optical images obtained by a camera registered on a 3D laser scanner to assist facade feature extraction (Gilani et al., 2015). Because high-resolution 2D optical images can supply important information, the second type can acquire extraction results that are more accurate than those of the first type (Wang et al., 2013; Li et al., 2013; Gilani et al., 2015). However, conventional 2D image-processing algorithms are still used to extract 2D features in the second type (Yang et al., 2014, 2016), which does not take advantage of the textural and geometric structural information contained in the 2D optical images of building facades during the extraction process; as a result, many non-features are extracted while some important facade features are ignored.

2D optical images contain rich textural and geometric structural information that can be used to help extract facade features after adding scales (Fig. 1). Images of building facades contain regular and repetitive textural structural information, such as wall surfaces covered with the same painting material. In addition, there are many window and texture edges with simple geometric structural information, such as line segments. This structural information can be used to describe the spatial distribution and internal variation of images. By applying statistical analysis to the structural information of adjacent areas, the marching directions of information can be determined efficiently during the feature extraction process, and these directions are important for improving feature extraction accuracy (Li et al., 2012; Zhou and Yin, 2013). Thus, structural information can be used in facade feature extraction for 3D PCD but has not been sufficiently utilized in current methods (Truong-Hong et al., 2012; Vo et al., 2015; Aljumaily et al., 2015). This leads to many weaknesses, including decreased accuracy of feature extraction from 2D images of building facades, a lack of rigorous mapping models between 2D-image features and 3D PCD, and a lack of reasonable optimization of the building facade features of 3D PCD. Therefore, it is necessary to investigate methods allowing higher-accuracy feature extraction from building facades in 3D PCD.

By combining the advantages of both 3D PCD and 2D optical images, we achieved a highly accurate building facade feature extraction method from 3D PCD that considers structural information. Section 2 introduces the basic idea and details of the method. Section 3 describes a case study that extracts the facade features from the 3D PCD of the School of Architectural and Surveying & Mapping Engineering (SASME) building, Jiangxi University of Science and Technology (JXUST). The effectiveness, value settings, and potential limitations of the method are discussed in Section 4. In Section 5, we deliver our conclusions on the validity and accuracy of the method. Our results show that this method can provide data support for 3D modeling and BIM construction in urban planning applications.

Fig. 1. Schematic diagram of textural and geometric structural information in a facade image.

2. Methodology

2.1. Basic idea and overall design

The 3D PCD of buildings is unstructured, and features directly extracted from the 3D PCD often exhibit low geometrical continuity. 2D optical images, which can be mapped to real-world scale according to the 3D PCD, contain substantial amounts of geometric and textural structural information. This information can be used to describe the spatial distribution and internal variation of image elements, such as line segment features. By applying statistical analysis to the structural information of adjacent areas, the marching directions of information, such as line segments, can be determined efficiently in the feature extraction process, and these directions are very important for improving the accuracy of feature extraction. Therefore, introducing the structural information contained in 2D optical images into the process of extracting building facade features from 3D PCD should be feasible and effective.

The overall design of the new method involves three major steps (Fig. 2). First, extract the image features of building facades based on structural information. In this step, the textural and geometric structural information is used to extract image features accurately. Second, explore the mapping method between the image features and the 3D PCD. In this step, we combine the image features of building facades with the 3D PCD and then extract the initial 3D PCD facade features based on the image features. Finally, optimize the initial 3D PCD facade features, taking the structural information into consideration. In this step, we propose an
Fig. 2. Flowchart of the new facade feature extraction method for buildings. (Step 1: image feature extraction of building facades, using the structural information of building images after image pretreatment and facade decomposition; Step 2: mapping method between image features and 3D PCD, based on analysis of the internal and external parameters of the camera after registration; Step 3: optimizing method for the initial 3D PCD features of building facades based on RANSAC, yielding the final 3D PCD features of building facades.)
optimizing method for the 3D PCD features of building facades. This step also addresses the noise and discontinuity of the initial 3D PCD features.

2.2. Image feature extraction of building facades considering structural information

Currently, numerous classical edge detection algorithms, such as the Roberts operator, the Sobel operator, the Laplacian operator, and the Canny operator, exist for the extraction of 2D image features of building facades (Canny, 1986; Von Gioi et al., 2010, 2012; Maini and Aggarwal, 2009). These algorithms usually use differential operators that check derivative changes to detect alterations in the gray values across an image. They can be used directly or indirectly (with some adjustments) to extract building facade features; however, the facade features they extract are incomplete and inaccurate because they do not use facade structure information. To address these problems, we developed a new method for extracting the image features of building facades based on structural information.

The key processes of this extraction method are as follows. (1) Preprocess images. A Gaussian filter with a window height and width of 3 pixels is used to suppress noise and smooth the input images. This step involves zooming out from high-resolution images to a suitable scale according to the scale-zooming factor. (2) Remove texture. The relative total variation (RTV) method is applied with filter windows to decompose and remove the texture of building facades (Li et al., 2012). RTV regulates texture removal, combines general pixel-wise windowed total-variation measures with the novel windowed inherent variation, and produces texture-structure images. (3) Calculate gradients. Gray-change-detection operators, such as Canny, are used to compute the gradient magnitude and direction at each pixel and obtain optimized gradient maps. (4) Detect line segments. EDLines, a well-known line segment detector, is used to detect features contained in the gradient maps (Akinlar and Topal, 2011; Von Gioi et al., 2012). These line segment detection algorithms make full use of neighboring pixel-gradient magnitudes and directions. By calculating or setting appropriate detection parameters, such as the gradient threshold and scale factor, which are suggested to be set to 5.22 and 0.8 according to Akinlar and Topal (2011) and Von Gioi et al. (2012), these algorithms can more accurately detect the pixels that exhibit peak alterations in gray values. By using Edge Drawing, a recently proposed edge detection algorithm, to connect these pixels, we obtain additional line segments that constitute the image features of building facades (Akinlar and Topal, 2011). (5) Validate line segments of image features. Validation is performed using the "Number of False Alarms" (NFA) of a line segment, which was defined by Desolneux et al. (2000) and is widely used in many line segment detection methods (Von Gioi et al., 2010, 2012; Akinlar and Topal, 2011). The NFA value of a segment A of length n containing k aligned points, in an image of N × N pixels, is calculated by Formula (1). A is considered a validated line segment if its NFA ≤ ε, where ε is recommended to be set to 1 by Desolneux et al. (2000).
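The NFA used in step (5) is, per Desolneux et al. (2000), a scaled binomial tail and can be evaluated directly from Formula (1). A minimal Python sketch; the segment length, aligned-point count, precision p = 1/8, and image size below are illustrative values of our own, not measurements from the paper:

```python
from math import comb

def nfa(n, k, p, N):
    """Number of False Alarms (Formula (1)): N^4 times the binomial tail
    probability of seeing at least k aligned points among n."""
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return N**4 * tail

# Illustrative check: in a 512 x 512 image, a 30-pixel segment with 25
# aligned points at precision p = 1/8 is far too regular to be accidental.
value = nfa(30, 25, 1 / 8, 512)
accepted = value <= 1.0  # epsilon = 1, as recommended by Desolneux et al.
```

Lowering k weakens the evidence: with only 10 of the 30 points aligned, the same segment's NFA rises far above 1 and it is rejected.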
NFA(n, k) = N^4 · Σ_{i=k}^{n} C(n, i) · p^i · (1 − p)^(n−i)    (1)

In Formula (1), p is defined as follows: two line segments P and Q exhibit the same directionality with angular precision p = 1/n when the angles of P and Q are within π/n of each other.

Compared with classical feature extraction operators, the new method for extracting the image features of building facades uses the textural and geometric structural information of building images more effectively. Furthermore, the new method maintains better geometric continuity of the extracted features than other building facade feature extraction algorithms. In addition, the new extraction method can detect valid features in images that exhibit only subtle gray-level changes. The validity of the image features extracted from building facades indicates that this method is an effective framework and basis for the next step of feature extraction.

2.3. Mapping between image features and 3D PCD

Image features extracted from building facades represent a series of feature pixels; an image feature is a set of 2D pixels. Therefore, the mapping between the 2D image features and the 3D PCD can be simplified into a mapping from the 2D plane to 3D space. Because each pixel coordinate (u, v) may correspond to more than one 3D point (x, y, z), the mapping relationship between pixels of image features and points of the 3D PCD is one-to-many.

This section describes the method used to map between the image features of building facades and the 3D PCD. The aim of this process is to discover the corresponding 3D PCD features according to the 2D image features of building facades. The flowchart is shown in Fig. 3.

The following steps describe the mapping process:

(1) The pixel coordinates of the image features are recorded as a set (U, V), which is traversed to analyze every pixel (Ui, Vi) and determine its status as a feature pixel (if the pixel belongs to a line segment, it is considered a feature pixel). All feature pixels are saved to the set (UC, VC).

(2) Coordinates are registered from the 3D PCD to the 2D optical images acquired by a camera fixed to the 3D laser scanner. The intrinsic parameters (M1) and external parameters (M2) of the camera are optimized using 3D PCD-processing software, such as RiSCAN PRO. The 3D PCD set is traversed, and every point coordinate (XWi, YWi, ZWi) is introduced into conversion Formula (2), which converts the 3D PCD from geospatial coordinates (XW, YW, ZW) to camera coordinates (XC, YC, ZC); the camera coordinates (XCi, YCi, ZCi) of each point are then analyzed, especially the optical-axis value ZCi. We then introduce the geospatial coordinates (XWi, YWi, ZWi) and the ZCi value of each point into conversion Formula (3) to calculate the corresponding image pixel coordinates (ui, vi).

[XC YC ZC 1]^T = M2 [XW YW ZW 1]^T    (2)

ZC [u v 1]^T = M1 M2 [XW YW ZW 1]^T    (3)

(3) If the coordinate (ui, vi) is contained within the feature pixel set (UC, VC), the 3D laser points corresponding to this image pixel represent feature points in the 3D PCD and are saved, along with the corresponding image pixels, into a 3D PCD feature-point set (XWC, YWC, ZWC).

(4) By executing steps (2) and (3) iteratively, we obtain the initial 3D PCD features of building facades.
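Steps (1)–(4) with Formulas (2) and (3) amount to projecting every laser point through the camera matrices and keeping the points that land on feature pixels. A schematic NumPy sketch; the function name and the matrices M1 (3 × 4 intrinsic) and M2 (4 × 4 extrinsic) below are hypothetical placeholders, not calibration values from the paper:

```python
import numpy as np

def map_features_to_pcd(points_w, M1, M2, feature_pixels):
    """Keep the 3D points whose projections fall on feature pixels.

    points_w: (n, 3) geospatial coordinates (X_W, Y_W, Z_W)
    M1: 3x4 intrinsic matrix; M2: 4x4 extrinsic matrix
    feature_pixels: iterable of integer (u, v) feature-pixel coordinates
    """
    homog = np.hstack([points_w, np.ones((len(points_w), 1))])
    cam = (M2 @ homog.T).T               # Formula (2): camera coordinates
    zc = cam[:, 2]                       # optical-axis value Z_C per point
    img = (M1 @ M2 @ homog.T).T          # Formula (3), before dividing by Z_C
    u, v = img[:, 0] / zc, img[:, 1] / zc
    pix = set(map(tuple, feature_pixels))
    keep = [(int(round(ui)), int(round(vi))) in pix for ui, vi in zip(u, v)]
    return points_w[np.array(keep)]

# Toy example: camera at the origin, focal length 100, principal point (0, 0).
M1 = np.array([[100., 0., 0., 0.],
               [0., 100., 0., 0.],
               [0., 0., 1., 0.]])
M2 = np.eye(4)
pts = np.array([[1., 2., 10.],    # projects to pixel (10, 20)
                [3., 3., 10.]])   # projects to pixel (30, 30)
selected = map_features_to_pcd(pts, M1, M2, feature_pixels=[(10, 20)])
```

Because the mapping is one-to-many, several 3D points may legitimately project to the same feature pixel; all of them are retained in the feature-point set.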
Fig. 4. Final 3D PCD facade features extracted by the new method. (a) Overall 3D PCD facade features of the SASME building. (b and c) Partial detail of the 3D PCD belonging to the facade features (red) overlapping with the original 3D PCD. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 5. Comparison of facade features extracted by different methods. (a) The original 3D PCD overlapping with the 2D optical images. (b) 3D PCD facade features (red) extracted by the range-image method. (c) 3D PCD facade features (red) extracted by the optical-image method in the absence of structural information. (d) 3D PCD facade features (red) extracted by the new extraction method proposed in this paper. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
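The optimization behind the features compared here is RANSAC-based (Step 3 of Fig. 2; the parameter values are discussed in Section 4.2). As a rough illustration of the underlying idea only, not the authors' exact procedure, here is a generic RANSAC plane fit on synthetic data (the function name, iteration count, and data are our own choices):

```python
import numpy as np

def ransac_plane(points, dist_thresh, n_iters=200, seed=None):
    """Generic RANSAC: sample 3 points, fit a plane through them, and
    keep the plane with the most inliers within dist_thresh."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # skip degenerate (collinear) samples
            continue
        normal /= norm
        dists = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dists < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic facade: 500 noisy points near the plane z = 0 plus 50 outliers.
rng = np.random.default_rng(0)
plane_pts = np.column_stack([rng.uniform(0, 10, 500),
                             rng.uniform(0, 10, 500),
                             rng.normal(0, 0.005, 500)])
outliers = rng.uniform(0, 10, (50, 3))
pts = np.vstack([plane_pts, outliers])
inliers = ransac_plane(pts, dist_thresh=0.02, seed=1)
```

In the paper's procedure, the distance threshold plays the role of `dist_thresh` (0.02% of the bounding-box diagonal of the candidate point set, per Section 4.2); how the chosen probability W and the proportion g enter is described there.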
features and result in some false detections, as shown in Fig. 5(b) and (c). By comparison, the new method eliminates most of the false detections, as shown in Fig. 5(d), further indicating its effectiveness.

Furthermore, a statistical analysis of facade feature information (Table 1) for the three extraction methods was performed based on Fig. 5. A quantifiable assessment was completed by referencing Truong-Hong and Laefer's (2013) work. Table 1 supports the following conclusions: the range-image method extracts only a few true facade features along with many false features; the optical-image method without structural information extracts most true facade features but also numerous false facade features; and the new method extracts most true facade features with few false features.

4.2. Value setting of optimization parameters

Three important parameters govern the initial 3D PCD facade feature optimization process: the distance threshold n, the chosen probability W, and the proportion g of out-of-range points. Our analysis indicated that a distance threshold of 0.02% of the length of the diagonal of the bounding box associated with the candidate point set Q is optimal. The chosen probability W is suggested as 0.99, and the proportion g is suggested as 0.9. Based on these values, we selected partial initial 3D PCD facade features of the SASME building as test data (Fig. 6, black) and used the proposed optimization method to obtain the final 3D PCD facade features (Fig. 6, red).

In the initial 3D PCD facade features, many non-features (at the top of Fig. 6) are extracted through false detection, and some non-feature points are included because of the swell phenomenon (at the bottom of Fig. 6). However, in the optimized 3D PCD facade features, the non-features and the swell phenomenon are mostly eliminated; accurate 3D PCD facade features are obtained after the optimization process.

4.3. Potential limitations of this approach

By using the textural and geometric structural information contained in the optical images, the proposed extraction method improves the accuracy and continuity of the resulting facade features. However, the method still has some potential limitations. When the textures of building facades become increasingly complex, the number of fragmentary line segments extracted will increase. Additionally, when there are multiple types of geometric structures in the building facades, the accuracy of the final facade features will be reduced. Finally, occlusion can cause missing data in the LiDAR point cloud and images, and false detections arising from the missing data are difficult to eliminate. These potential limitations will be investigated further in the future.

5. Conclusion
Acknowledgement
Fig. 6. Facade features after optimization using the suggested values.

References

Akinlar, C., Topal, C., 2011. EDLines: a real-time line segment detector with a false detection control. Pattern Recogn. Lett. 32, 1633–1642.
Aljumaily, H., Laefer, D.F., Cuadra, D., 2015. Big-data approach for three-dimensional building extraction from aerial laser scanning. J. Comput. Civil Eng. 30 (3).
Aljumaily, H., Laefer, D.F., Cuadra, D., 2017. Urban point cloud mining based on density clustering and MapReduce. J. Comput. Civil Eng. 31 (5), 215–219.
AlHalawani, S., Yang, Y., Liu, H., Mitra, N.J., 2013. Interactive facades: analysis and synthesis of semi-regular facades. Comput. Graphics Forum 32, 215–224.
An, Y., Li, Z.L., Shao, C., 2013. Feature extraction from 3D point cloud data based on discrete curves. Math. Probl. Eng. 2013, 290740.
Bauer, J., Karner, K., Schindler, K., Klaus, A., Zach, C., 2003. Segmentation of building models from dense 3D point-clouds. In: Proceedings of the 27th Workshop of the Austrian Association for Pattern Recognition. OAGM, Laxenburg, Austria, pp. 253–259.
Biosca, J.M., Lerma, J.L., 2008. Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. J. Photogramm. Rem. Sens. 63, 84–98.
Canny, J., 1986. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698.
Ceylan, D., Mitra, N.J., Li, H., Weise, T., Pauly, M., 2012. Factored facade acquisition using symmetric line arrangements. Comput. Graphics Forum 31, 671–680.
Cheng, L., Wang, Y., Chen, Y., Li, M., 2016. Using LiDAR for digital documentation of ancient city walls. J. Cult. Heritage 17, 188–193.
Desolneux, A., Moisan, L., Morel, J.M., 2000. Meaningful alignments. Int. J. Comput. Vis. 40 (1), 7–23.
Gilani, S.A.N., Awrangjeb, M., Lu, G., 2015. Fusion of lidar data and multispectral imagery for effective building detection based on graph and connected component analysis. ISPRS – Int. Arch. Photogramm., Rem. Sens. Spatial Inform. Sci. XL-3/W2, 65–72.
Hinks, T., Carr, H., Truong-Hong, L., Laefer, D.F., 2013. Point cloud data conversion into solid models via point-based voxelization. J. Surv. Eng. 139 (2), 72–83.
Jordana, T.R., Goetcheus, C.L., Madden, M., 2016. Point cloud mapping methods for documenting cultural landscape features at the Wormsloe state historic site, Savannah, Georgia, USA. ISPRS – Int. Arch. Photogramm., Rem. Sens. Spatial Inform. Sci. XLI-B5, 277–280.
Laefer, D.F., Truong-Hong, L., Fitzgerald, M., 2011. Processing of terrestrial laser scanning point cloud data for computational modelling of building facades. Recent Patents Comput. Sci. 4, 16–29.
Li, X., Yan, Q., Xia, Y., Jia, J., 2012. Structure extraction from texture via relative total variation. ACM Trans. Graphics 31 (6), 131–139.
Li, H., Zhong, C., Hu, X.G., Xiao, L., Huang, X., 2013. New methodologies for precise building boundary extraction from LiDAR data and high resolution image. Sensor Rev. 33 (2), 157–165.
Maini, R., Aggarwal, H., 2009. Study and comparison of various image edge detection techniques. Int. J. Image Process. 3 (1), 1–11.
Pang, S.Y., Liu, Y.W., Zuo, Z.Q., Chen, Z.F., 2015. Combination of region growing and TIN edge segmentation for extraction of geometric features on building facades. Geomatics Inform. Sci. Wuhan Univ. 40 (1), 102–106.
Rutzinger, M., Elberink, S.O., Pu, S., Vosselman, G., 2009. Automatic extraction of vertical walls from mobile and airborne laser scanning data. In: Bretar, F., Pierrot-Deseiligny, M., Vosselman, G. (Eds.), Laserscanning '09, ISPRS Archives, XXXVIII-3/W8, pp. 7–11.
Sajadian, M., 2014. A data driven method for building reconstruction from LiDAR point clouds. In: The 1st ISPRS International Conference on Geospatial Information Research, ISPRS Archives, XL-2/W3, pp. 15–17.
Schnabel, R., Wahl, R., Klein, R., 2007. Efficient RANSAC for point-cloud shape detection. Comput. Graphics Forum 26, 214–226.
Tan, K., Cheng, X.J., 2015. Dual-threshold algorithm for intensity image edge extraction of terrestrial laser scanning point cloud. J. Tongji Univ. (Nat. Sci.) 43 (9), 1425–1431 (in Chinese).
Truong-Hong, L., Laefer, D.F., Hinks, T., Carr, H., 2012. Flying voxel method with Delaunay triangulation criterion for facade/feature detection for computation. J. Comput. Civil Eng. 26 (6), 691–707.
Truong-Hong, L., Laefer, D.F., 2013. Validating computational models from laser scanning data for historic facades. J. Test. Eval. 41 (3), 481–496.
Turk, Ž., 2016. Ten questions concerning building information modelling. Build. Environ. 107, 274–284.
Vo, A.V., Truong-Hong, L., Laefer, D.F., Bertolotto, M., 2015. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 104, 88–100.
Von Gioi, R.G., Jakubowicz, J., Morel, J.M., Randall, G., 2010. LSD: a fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 32, 722–732.
Von Gioi, R.G., Jakubowicz, J., Morel, J.M., Randall, G., 2012. LSD: a line segment detector. Image Process. On Line 2, 35–55.
Wang, Y., Ewert, D., Schilberg, D., Jeschke, S., 2013. Edge extraction by merging 3D point cloud and 2D image data. In: Proceedings of the 10th International Conference & Expo on Emerging Technologies for a Smarter World (CEWIT), New York, USA, pp. 21–22.
Yang, L., Sheng, Y.H., Wang, B., 2014. LiDAR data reduction assisted by optical image for 3D building reconstruction. Optik – Int. J. Light Electron Opt. 125, 6282–6286.
Yang, L., Sheng, Y.H., Wang, B., 2016. 3D reconstruction of building facade with fused data of terrestrial LiDAR data and optical image. Optik – Int. J. Light Electron Opt. 127, 2165–2168.
Zhou, S.R., Yin, J.P., 2013. LBP texture feature based on Haar characteristics. J. Softw. 24 (8), 1909–1926 (in Chinese).
Zolanvari, S.M.I., Laefer, D.F., 2016. Slicing method for curved façade and window extraction from point clouds. J. Photogramm. Rem. Sens. 119, 334–346.