Abstract: Terrestrial Laser Scanning (TLS) technology is increasingly used for the generation of accurate three-dimensional (3D) models of objects and scenes. However, converting the acquired 3D point cloud data into a representative, semantic 3D model of the scene requires advanced processing and skills. This research field is challenging, particularly when considering inhabited, furnished environments that are characterised by clutter and occlusions. This paper presents a TLS data-processing pipeline aimed at producing semantic 3D models of furnished office and home interiors. The structure of rooms (floor, ceiling, and walls with window and door openings) is created using Boundary Representation (B-Rep) models that not only encode the geometry of those elements, but also their connectivity. Windows and doors are recognized and modeled using a novel method based on molding detection. For the furniture, the approach uniquely integrates smart technology [radio frequency identification (RFID)] that is increasingly used for Facilities Management (FM). RFID tags attached to furniture are sensed at the same time as laser scanning is conducted. The collected IDs are used to retrieve discriminatory geometric information about those objects from the building's FM database; this information is used to support their recognition and modeling in the point cloud data. The manuscript particularly reports results for the recognition and modeling of chairs, tables, and wardrobes (and other similar objects like chests of drawers). Extended experimentation of the method has been carried out in real scenarios, yielding encouraging results. DOI: 10.1061/(ASCE)CP.1943-5487.0000525. © 2015 American Society of Civil Engineers.
Author keywords: Three-dimensional (3D) laser scanner; 3D data processing; Radio frequency identification (RFID); B-Rep model;
Building information modeling.
Introduction

Terrestrial laser scanning (TLS) technology is increasingly used for the generation of accurate three-dimensional (3D) models of objects and scenes. While the technology has applications in numerous fields such as cultural heritage or forensics, this article focuses on its application in the Architecture, Engineering, Construction, and Facilities Management (AEC/FM) industry. In the AEC/FM sector, a particular area of interest for the application of such systems is the synthesis of 3D building information modeling (BIM) models of buildings, facilities, or infrastructure.

The wide majority of systems reported to date, both academic research and commercial software, focus on the modeling of building or room structural components, with far fewer works tackling the automatic generation of 3D (BIM) models for furnished (i.e., inhabited) rooms, houses, and buildings. Yet, there are applications that require 3D models of interiors that contain not just their structure (floor, ceiling, and walls, with openings like doors and windows), but also their furniture, and possibly even other smaller objects. These particularly include applications based on robot navigation and robot interaction with the environment (grasping, pushing, moving) (Srinivasa et al. 2008). One such field of application, for which interest is currently growing rapidly, is the use of robots for supporting elderly or handicapped people in their homes (Mast et al. 2012). Other applications include conducting security inspections; conducting inventories (Ehrenberg et al. 2007); or situations where hazards are present for humans, for example when navigating buildings after earthquakes (Wandel et al. 2003; Marques et al. 2002; Ishida et al. 2004).

The complete 3D reconstruction of interior scenes from point cloud data, that is, the reconstruction of 3D models where the structure and furniture are both precisely modeled, is a complex task at two levels:
• Furniture creates clutter and occlusion that challenge algorithms for recognizing and modeling the structure of rooms; and
• Furniture can have a wide range of shapes and is movable, meaning that little prior information can be used regarding its potential location in the room.

Accordingly, this paper presents a unique system for the automatic generation of 3D models of inhabited interiors, by means of TLS and radio frequency identification (RFID) technology. The use of RFID technology is motivated by the fact that it is increasingly used for applications in Facilities Management (FM), where objects (especially mobile ones like furniture) are tagged to simplify their identification; see Lu et al. (2011) for a broad review of applications of RFID in this field. Furthermore, recent BIM developments such as the NBS National BIM Library (http://www.nationalbimlibrary.com/) (see also the Polantis Ikea 3D model database at http://www.polantis.com/ikea) will inevitably lead to FM databases containing detailed 3D models of all objects, including furniture, contained in buildings.

Fig. 1 illustrates an example of the scenes dealt with in this paper, with the point cloud acquired by the scanner and the final 3D interior model produced by the proposed system.

1 Research Associate, School of Energy, Geoscience, Infrastructure and Society, Heriot-Watt Univ., Edinburgh EH14 4AS, U.K. (corresponding author). E-mail: e.valero@hw.ac.uk
2 Associate Professor, 3D Visual Computing and Robotics Lab, Univ. de Castilla-La Mancha, 13005 Ciudad Real, Spain. E-mail: antonio.adan@uclm.es
3 Lecturer, School of Energy, Geoscience, Infrastructure and Society, Heriot-Watt Univ., Edinburgh EH14 4AS, U.K. E-mail: f.n.bosche@hw.ac.uk
Note. This manuscript was submitted on January 9, 2015; approved on July 9, 2015; published online on September 7, 2015. Discussion period open until February 7, 2016; separate discussions must be submitted for individual papers. This paper is part of the Journal of Computing in Civil Engineering, © ASCE, ISSN 0887-3801/04015053(19)/$25.00.
Fig. 1. Point cloud and the produced 3D model

The rest of the article is organized as follows. Some of the most significant and relevant works in the field of modeling from 3D point clouds are reviewed in the sections "Modeling from 3D Point Clouds" and "TLS and RFID Integration: Motivation and Contribution," with particular focus on applications in the AEC/FM sector. The proposed approach is then described in the section "Method." The section "Experimental Results" presents experimental results demonstrating its performance. Finally, the section "Conclusions" presents the conclusions and areas for future work.

Modeling from 3D Point Clouds

Dense raw data provided by laser scanners are typically processed manually or semiautomatically by engineers to create 3D models. These models range from mesh and computer-aided design (CAD) models, in which components are considered as individual geometric elements, are not explicitly classified, and do not include connectivity information, to BIM models, in which the objects have all this information, and possibly more (e.g., material, energy performance).

In the last few years, commercial solutions have emerged that enable the creation of 3D models, a process now commonly called as-is modeling. A basic solution is for example provided by Matterport (http://matterport.com/), which uses proprietary 3D cameras to create 3D point clouds and meshes of indoor environments. This solution, however, does not tackle the automatic recognition and/or segmentation of the different scanned components for the creation of semantically-rich models. More advanced software solutions are also available for the modeling of 3D CAD/BIM models from TLS point clouds. For example, ClearEdge3D commercializes the EdgeWise 5.0 software package that features functionalities for (semi-)automatically extracting structural components like walls, windows, and doors from TLS point clouds and exporting them in a BIM model format. This can be considered an example of the current state of the art in commercial solutions for structural modeling of interiors. Yet, the software does not consider the modeling of furniture, and in fact may require that rooms be as empty as possible when scanned so that point clouds are acquired from the entire surface of the structural components.

From a research viewpoint, different approaches dealing with the creation of realistic nonparametric models have been proposed over the last decades. Some of these focus on outdoor environments, others on interiors. As regards the modeling of building exteriors, Frueh et al. (2005) and Bohm (2008) present different algorithms for reconstructing façades from 3D laser-scanned data and images, and Remondino et al. (2009) propose a combination of measurement techniques for the same purpose.

Modeling interiors is complicated by the fact that furniture (and other objects), which can have various shapes, creates disorder and occludes the walls and other components during scanning. Many researchers have focused their attention on the recognition and modeling of particular parts of the building structure of interior spaces, such as walls, columns, and doors, as well as specific furniture. El-Hakim et al. (1997) present a mobile mapping system to generate 3D models of indoor environments using a combination of location tracking and 2D range-imaging devices. Stamos and Allen (2000) present an approach to model large planar areas with primitives and nonplanar areas with dense mesh elements. Plane-sweep approaches have also been widely investigated to find planar regions and define the walls in a point cloud of a room (Hahnel et al. 2003; Budroni and Bohm 2005). Kwon et al. (2004) recognize different objects by automatically fitting elementary geometric shapes (cuboids, cylinders, and spheres) to point clouds. Okorn et al. (2010) present an automatic method to generate precise 2D plans for indoors, and Adán and Huber (2011) propose a solution based on region labeling to reconstruct interior boundaries. Valero et al. (2012b) present a method that automatically yields Boundary Representation (B-Rep) models of the structure of interiors (floor, ceiling, and walls) from dense TLS point clouds. Interestingly, the B-Rep representation enables the modeling of not only the surface geometry of each object, but also their connectivity. Recently, Dimitrov and Golparvar-Fard (2015) presented a robust algorithm for segmenting a point cloud into smooth surfaces. While the work does not address the actual recognition and modeling of building components, the segmented surfaces would provide a valuable basis for it. Ali et al. (2008) identify windows by means of boundary analysis in binary images and texture information.

In relation to the specific problem of 3D reconstruction from partial data due to occlusions, Dell'Acqua and Fisher (2002) present a method to reconstruct planar surfaces located behind objects which occlude them. Xiong et al. (2013) detect and model the main structural components of an indoor environment (floors, ceilings, and walls), including windows and doorways, despite the presence of significant clutter and occlusion.

Other approaches relate to the present work through their consideration of prior knowledge about the scanned scene. For example, Bosché (2010), Anil et al. (2013), and Kim et al. (2013) identify components in built environments using the valuable information contained in existing (e.g., as-designed) 3D models.

The rapid development of inexpensive solutions to generate 3D point clouds, such as Red, Green, Blue-Depth (RGB-D) cameras, has encouraged many researchers, mainly in the field of computer vision, to employ these devices to capture and rapidly understand and model indoor environments. For example, Guo and Hoiem (2013) tackle the identification of horizontal and planar surfaces that can support objects and people (such as chairs, beds, and tables) from single RGB-D images. Silberman et al. (2012) present an algorithm to segment RGB-D data into surfaces and object regions and infer their structural and support topology. Jia et al. (2013) segment different objects in RGB-D data.
Few works have tackled the recognition and modeling of furniture beyond the coarse labeling of data regions as furniture or clutter. Relevant works in this area include that of Rusu et al. (2008), who identify pieces of furniture and utensils in point clouds of a corner of a kitchen. Wang and Oliveira (2002) take advantage of the symmetry of certain objects to reconstruct them when they are affected by occlusion. And Castellani et al. (2002) work on the reconstruction of corners and edges of furniture that is partially occluded.

TLS and RFID Integration: Motivation and Contribution

From the literature reviewed in the previous section, it appears that none of the existing works on as-is modeling provides a complete solution for the reconstruction of semantically-rich models of interiors where both structural elements and furniture are precisely modeled. Indeed, while many of the approaches previously discussed are able to reconstruct the structure of interiors with detailed classification of the segmented regions (e.g., surfaces classified as walls, floor, and ceiling), they generally only provide a coarse segmentation and classification of the rest of the interior objects (e.g., objects classified as furniture or clutter without more precision).

In contrast, the authors present in this paper a novel approach that achieves the reconstruction of semantic 3D models of inhabited, furnished interiors where both the structure and furniture are accurately and precisely segmented, classified, and modeled in TLS point cloud data. The approach is geared towards the specific context where furniture is tagged with RFID tags that link each piece of furniture to additional relevant information stored in a Facilities Management (FM) database. While such a context is arguably simpler, the authors have shown that it is also increasingly common to use smart technologies, like RFID technology, in construction and FM (Jaselskis et al. 1995; Lu et al. 2011).

While RFID technology has been widely discussed in the literature and is already used in industry for various applications, few papers relate RFID technology to 3D scene reconstruction and understanding. El-Omari and Moselhi (2011) have proposed to integrate RFID and TLS technologies to track the state of materials on jobsites, but they consider the two technologies at two different stages in the construction supply chain; RFID is not used to detect objects within TLS data. In contrast, the work of Cerrada et al. (2009) is more relevant, as they use the information stored in smart tags to improve the performance of object recognition in laser-scanned data. However, they deal with small scenarios with geometrical shapes laid on a table. Ehrenberg et al. (2007) developed a mobile robot that conducts library inventory, where the robot is equipped with an RFID reader to identify the books present on the shelves.

In the present work, the authors focus on the 3D reconstruction of large building interiors, a problem for which the previously-mentioned integration has not been considered to date. Previous work by the authors is extended here in many ways:
• Modeling of walls: An improved algorithm is proposed that models wall planes more accurately;
• Detection and modeling of wall openings: B-Rep models of interiors are completed with wall openings, such as windows and doorways, as well as interior free columns. In particular, a novel algorithm is presented for the recognition of openings based on molding detection;
• Identification and modeling of furniture: The algorithms that calculate the pose of furniture in the point cloud are improved and detailed; and
• Experimental validation: Experimentation is extended to larger environments, and new assessment procedures are included that further demonstrate the strengths of the method.

Method

Overview

The flowchart in Fig. 2 summarizes the strategy to create a 3D model of the scene. The elements that compose an inhabited interior are divided into two groups:
• (Immobile) Structural elements, which include the ceiling, floor, walls, and columns, along with the main openings within the walls (windows and doors); and
• (Mobile) Furniture, which includes tables, chairs, and wardrobe-like objects.

First of all, it is assumed that 3D data acquisition and preprocessing (i.e., filtering and registration of multiple point clouds) have been carried out, leading to the point cloud labeled D_1 in Fig. 2. Then, sequential processing is applied where, at each consecutive stage, one type of element is automatically recognized (point set S_i), modeled, and removed from the point cloud (D_{i+1} = D_i − S_i). This sequential strategy starts with structural elements (successively floor/ceiling, walls, columns, doors, and windows) and then moves on to furniture (successively tables, chairs, wardrobes, and wastepaper baskets). Structural elements are detected using the 3D TLS data only, and the room's structure is modeled using a B-Rep representation that captures not only the geometry of the individual structural elements but also their connectivity. Then, furniture is identified in the room by means of RFID tags attached to the pieces and sensed by a reader mounted alongside the laser scanner. The IDs enable the retrieval, from the building's FM database, of relevant geometric information for the identified pieces of furniture (including their 3D models), which is used to accurately recognize and precisely model them in the TLS point cloud. All these stages are detailed in the following subsections.

Modeling of Room Structure

In order to detect and model the structural elements from the point cloud data, the 3D space is first discretized into a uniform 3D voxel grid. This voxelization has two purposes: reducing computational complexity, and acting as a noise filter. Then, the approach conducts the following successive steps:
1. Detect and model the data planes that contain the 3D points belonging to the boundary of the scene (i.e., floor, ceiling, and walls);
2. Detect and model the data corresponding to interior free columns;
3. Detect and model openings in the wall corresponding to doors and windows; and
4. Generate a complete semantic 3D model of the room's structure.

A detailed presentation of the overall approach to model the room structural boundary elements (ceiling, floor, and walls) can be found in Valero et al. (2012b). However, an improved method for modeling the walls is reported in the present study. Also, Steps 2 and 3 as listed in the preceding paragraph constitute new contributions to the overall room-structure modeling approach.

The previously-introduced processing steps are presented in the following subsections. To support the presentation, an illustrative example, corresponding to a real case study, is used. Fig. 3 shows a photo and the initial point cloud (D_1) of this illustrative interior room.

Floor and Ceiling Segmentation

Formally, the uniform voxel space can be defined in a universal coordinate system (UCS) by means of the voxel size ε and the 3D coordinates of each voxel's center (v_x, v_y, v_z). The voxel space's origin and voxel size are initially considered unknown and are collectively calculated with the detection of the floor and ceiling data planes. The goal is then to produce a voxelization of the space that leads to most of the points belonging to the floor and ceiling being contained in thin parallelepipeds M_f and M_c of height ε [Fig. 4(a)]. An integrated algorithm has been developed that automatically achieves this optimization; see Valero et al. (2012a) for details. Fig. 4(b) shows floor and ceiling segmentation results for the illustrative case. This method assumes that the floor and ceiling are both horizontal, which is by far the most common situation.

Wall Segmentation

In rectangular indoor plans, the walls could be detected by adapting the process previously described to two pairs of parallel voxel planes. Instead, the authors consider the more complex case of the extraction of walls in arbitrary plans, in which walls are designed to be straight, but not necessarily perpendicular or parallel to one another.

As illustrated in Fig. 5, the authors propose to carry out the wall detection in a top-view 2D binary image, I, obtained by orthogonal projection of the entire initial 3D point cloud data (D_1) onto a horizontal plane [Fig. 5(a)]. Pixels of I are labeled occupied (1) or empty (0) depending on whether TLS points fall within them [Fig. 5(b)]. I is used to detect the room boundary (walls) [Fig. 5(c)]. Then, a Hough Transform algorithm is applied to the dilated room boundary binary image to detect initial approximate locations and extents of the walls [Fig. 5(d)]. For each of the walls, an algorithm based on random sample consensus (RANSAC) (Fischler and Bolles 1981) is then used to calculate the plane equation that optimally fits the data while discarding outlier points (e.g., from objects positioned against the wall). Singular Value Decomposition (SVD), which provides the normal to the wall plane, is used to calculate the equation of the plane that best fits each wall. At each iteration, three points are randomly selected to calculate a plane equation, and the plane support is calculated as the number of remaining points that satisfy a distance-to-plane threshold [e.g., according to DIN 18202 (Deutsches Institut für Normung 2005)] and whose normal vector does not differ from that of the hypothesized wall plane by more than 15°. The plane leading to the largest support after 1,000 iterations is considered the best plane.

Fig. 5(e) shows the resulting segmentation of points belonging to the walls for the illustrative case.

Fig. 5. Illustration of the wall segmentation process: (a) top view of the initial point cloud (D_1); (b) binary image generated after discretization; (c) detected contour (dilated); (d) walls and their intersections recovered by the Hough transform and plane intersection stages; (e) 3D points matched to the detected walls

Room Boundary B-Rep Model

The floor, ceiling, and wall detection and segmentation stages produce the planes corresponding to each one of those elements. The following step consists in converting this information into a unified B-Rep model of the room boundary.
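To make the RANSAC wall-fitting step described in the "Wall Segmentation" subsection concrete, the following Python sketch is an illustrative reimplementation under simplified assumptions (an approximate wall normal from the Hough stage is given, and thresholds are fixed); it is not the authors' code:

```python
import numpy as np

def fit_plane_ransac(points, approx_normal, dist_thresh=0.01,
                     max_angle_deg=15.0, n_iters=1000, seed=0):
    """RANSAC plane fit: hypothesize planes from random point triplets and
    keep the one with the largest inlier support, rejecting hypotheses whose
    normal deviates more than max_angle_deg from the approximate wall normal."""
    rng = np.random.default_rng(seed)
    approx_normal = approx_normal / np.linalg.norm(approx_normal)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) triplet
        n = n / norm
        # Reject hypotheses far from the approximate wall orientation.
        angle = np.degrees(np.arccos(min(1.0, abs(n @ approx_normal))))
        if angle > max_angle_deg:
            continue
        d = -n @ p0
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    if best_inliers is None:
        raise ValueError("no valid plane hypothesis found")
    # Refine with SVD over the inliers: the right singular vector associated
    # with the smallest singular value is the least-squares plane normal.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    n = vt[-1]
    return n, -n @ centroid, best_inliers
```

Here, outlier points (e.g., furniture leaning against the wall) simply fail the distance test and do not pull the final SVD fit off the wall plane.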
A pixel is labeled occupied if at least one 3D point falls within it [green pixels in Fig. 8(a)]. This yields the detection of the boundary of empty regions [Fig. 8(b)] that may correspond to openings (empty regions can also be the result of occlusions).

Fig. 11. Final B-Rep model with columns and openings

Modeling of Furniture

The furniture in the scene is identified, recognized, and positioned with the help of RFID tags attached to the pieces, as well as relevant geometric information contained in the building's FM database. An RFID reader mounted alongside the scanner acquires the object IDs stored in the tags during the scanning process. Each ID has a corresponding entry in the FM database that provides relevant geometric information about the given object. This study stored the following information for each object:
• Discriminatory geometric information, used to uniquely recognize/locate the object in the point cloud data; and
• A 3D model (mesh), used to accurately position the object in the point cloud data, and subsequently for visualization.

The following subsections present the methods developed to recognize and determine the pose of the following types of furniture: tables, chairs, wardrobes (and the like), and paper baskets. All these elements are assumed to be in contact with the floor in a stable position.

Tables

The discriminatory geometric information used to recognize a table in the point cloud data includes the length, width, and height of the tabletop.

The recognition process, illustrated in Fig. 12, goes as follows. A horizontal slice of the current 3D point cloud (D_4) is first extracted at the table height, and a binary image I_t is generated by projecting the points contained in that slice orthogonally onto a horizontal plane, and labeling the pixels of I_t as occupied (by points) or empty. Compact regions in the image are filtered by calculating the normal vectors of the points by means of an algorithm based on the scatter matrix (Shi et al. 1994); see Fig. 12(d) for an indication of how the boundary of the tabletop is clearly distinguished. Then, the lengths of the two main orthogonal directions of the compact regions are used as the recognition criterion. The main directions also enable the initial approximate positioning of the table's 3D model (from the FM database) in the scene. This position is subsequently optimized by finely fitting the 3D model to the complete point cloud by means of an Iterative Closest Point (ICP) algorithm (Rusinkiewicz and Levoy 2001).

Fig. 12. Illustration of the process to recognize and model a table: (a) 3D points inside the slice defined around the expected tabletop height; (b) binary image I_t; (c) extracted segment; (d) normal vectors for the detected segment; (e) recognized tabletop (in green) from the segment 3D points (in red); (f) final positioning of the table (red) in the entire point cloud

Chairs

The discriminatory geometric information used to recognize a chair in the point cloud data is the leg pattern to be searched for at a specified height above the floor.

Fig. 13. Illustration of the process for detecting and modeling a chair with a discrete leg pattern: (a) example of a discrete pattern; (b) slice of points close to the floor; (c) binary image I_c; (d) detection of the discrete pattern (multiple detections); (e) 3D chair model accurately positioned in the scene
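Before detailing the chair recognition process, the tabletop test described under "Tables" can be made concrete with a short Python sketch. This is an illustrative toy under simplifying assumptions (principal extents via PCA of the region's pixels, rather than the paper's scatter-matrix normal filtering), not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def find_tabletop(points, top_height, length, width,
                  slice_half=0.03, cell=0.02, tol=0.05):
    """Return the pixel coordinates of a slice region whose two principal
    extents match the expected tabletop length/width (units in meters)."""
    # 1. Horizontal slice around the expected tabletop height.
    sl = points[np.abs(points[:, 2] - top_height) < slice_half]
    if len(sl) == 0:
        return None
    # 2. Rasterize the slice into a binary top-view occupancy image (I_t).
    ij = np.rint((sl[:, :2] - sl[:, :2].min(axis=0)) / cell).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True
    # 3. Connected components give the candidate compact regions.
    labels, n = ndimage.label(img)
    for k in range(1, n + 1):
        rc = np.argwhere(labels == k)
        if len(rc) < 4:
            continue
        # 4. Principal extents via PCA of the region's pixel coordinates.
        centred = (rc - rc.mean(axis=0)) * cell
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        proj = centred @ vt.T
        extents = np.sort(proj.max(axis=0) - proj.min(axis=0))[::-1]
        # 5. Accept if the extents match the FM-database tabletop size.
        if (abs(extents[0] - max(length, width)) < tol and
                abs(extents[1] - min(length, width)) < tol):
            return rc
    return None
```

The principal directions returned by the SVD also provide the initial orientation for placing the table's 3D model before ICP refinement.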
The recognition process starts with the extraction of a slice of the point cloud around the specified height above the floor, and the generation of a binary image I_c from the points contained in that slice projected onto a horizontal plane. The goal of the subsequent image processing is to recognize the leg pattern, i.e., the signature, of the given chair in the image. The authors distinguish discrete from continuous leg patterns. Discrete patterns consist of sets of spots disposed at the vertices of regular polygons [see example in Fig. 13(a)], and continuous patterns are continuous patches, usually with a star shape [see example in Fig. 14(a)].

In the case of discrete patterns (Fig. 13), since the projections of the legs are discrete, each being only a few centimeters wide, only small and compact segments are considered as potential legs, and their patterns are studied. For each possible leg, its distance to the other ones is calculated, and the pattern of these distances is checked against the expected one. For a chair having n legs, any set of n potential legs simultaneously obeying the chair distance-pattern conditions is selected as a candidate for the considered chair.

In the case of continuous patterns (Fig. 14), a shape-matching process is executed by cross-correlating the leg pattern (calculated beforehand from the chair's 3D model) with image I_c. To address the fact that the orientation of the chair leg pattern in the room is unknown, this process is conducted 360/(2n) times, with 2° incremental rotations of the pattern between two consecutive cross-correlation operations.

In both the discrete and continuous cases, the detection stage yields an initial pose of the chair that is refined by using an ICP algorithm to fit the 3D model of the chair to the 3D point cloud data of the scene. To address issues surrounding symmetries in the chair leg layout, the ICP algorithm is also initialized n times.

Fig. 14. Illustration of the process for detecting and modeling a chair with a continuous leg pattern: (a) example of a chair with a continuous leg pattern; (b) binary pattern obtained from the chair's 3D model and used for the cross-correlation step; (c) binary image of the top view of the point cloud slices (I_c); (d) detection of the continuous pattern
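The rotating cross-correlation used for continuous leg patterns can be illustrated with the following Python sketch; it is a toy under stated assumptions (the binary template is given rather than generated from the chair's 3D model, and the rotational symmetry is passed explicitly), not the authors' implementation:

```python
import numpy as np
from scipy import ndimage, signal

def match_leg_pattern(image, template, n_sym=1, step_deg=2):
    """Cross-correlate a binary leg-pattern template with the floor-slice
    image I_c over rotations.  With n_sym-fold rotational symmetry, only
    360/n_sym degrees need to be swept; at 2-degree steps for an n-leg
    chair this gives the 360/(2n) correlations mentioned in the text.
    Returns (best_score, best_angle_deg, (row, col) of the peak)."""
    best = (-np.inf, None, None)
    for angle in np.arange(0.0, 360.0 / max(n_sym, 1), step_deg):
        # Rotate the template; bilinear interpolation keeps it smooth.
        rot = ndimage.rotate(template.astype(float), angle,
                             reshape=True, order=1)
        score = signal.correlate2d(image.astype(float), rot, mode='same')
        idx = np.unravel_index(np.argmax(score), score.shape)
        if score[idx] > best[0]:
            best = (float(score[idx]), float(angle), idx)
    return best
```

The peak location gives the initial chair position and the best angle its initial orientation, both subsequently refined by ICP.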
Wardrobe-Like Furniture

Wardrobe-like furniture, such as wardrobes, filing cabinets, or chests of drawers, is hypothesized to be shaped as a parallelepiped. As a result, the discriminatory geometric information used to recognize a wardrobe-like piece of furniture in the point cloud data is the width, depth, and height of the parallelepiped.

As illustrated in Fig. 15, the recognition algorithm is based on processing an image obtained from the subtraction of two point cloud data slices, one from above and one from below the specified wardrobe height. Two binary images I_w1 and I_w2 are generated from these slices, and the subtraction binary image I_w = I_w1 − I_w2 is processed. The wardrobe is detected in I_w as the white segment whose bounding box has the correct width and depth.

The 3D model of the wardrobe (from the database) is then initially positioned by placing its center at the centroid of the point cloud and aligning the normal vectors of the planes extracted from the matched segment to those of the 3D model. Finally, ICP is employed to adjust the wardrobe's position so that it optimally fits the overall point cloud data.

Paper Baskets

There exist other kinds of furniture, such as wastepaper baskets, which can also be easily recognized and modeled. Baskets are hypothesized to have a constant cylindrical or rectangular cross-section with a vertical longitudinal axis. Therefore, the discriminatory geometric information used to recognize a wastepaper basket in the data is its cross-section type and size, and its height.

The approach to detect and model these objects is similar to the one used for interior structural columns, that is, the detection of the cross-section in a binary image obtained from a horizontal slice of the point cloud data spanning from the floor to the specified basket height. Like for other furniture, the initial position yielded by this process is refined using ICP. Fig. 16 shows two wastepaper baskets recognized and modeled in a scene.

Experimental Results

The proposed system has been tested with diverse real inhabited interiors. The authors first present results for what can be considered a simple case of a living room. Then, results for a more complex case of an entire high school floor are reported. Altogether, these results enable a fair assessment of the current performance of the system.

Simple Case: Living Room

The approach has first been tested in the context of a living room, for which three different furniture configurations have been arranged and scanned. The experimental equipment setup is composed of a Faro Photon 80 laser scanner (Faro Technologies) and an OBID LRU 3500 RFID sensor (FEIG Electronic, Germany). The sensor platform is moved towards commanded positions to perform scans. For each room configuration, 3D data were acquired from five positions, resulting in a total of 8 million points. Fig. 17 presents the plan of the room, and Table 1 summarizes the number of tagged furniture items in each configuration.
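As an aside before turning to the results, the slice-subtraction test used for wardrobe-like furniture (section "Wardrobe-Like Furniture") can be sketched in Python. This is an illustrative toy under simplifying assumptions, notably an axis-aligned footprint and a bounding-box size test, whereas the paper aligns segment normals before ICP refinement; it is not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def detect_wardrobe(points, height, width, depth, cell=0.02, tol=0.05):
    """Locate a wardrobe-like parallelepiped: occupancy just below the
    specified height but not above it (I_w = I_w1 - I_w2), matched by the
    size of its footprint.  Returns the footprint centroid (x, y) or None."""
    origin = points[:, :2].min(axis=0)
    shape = np.rint((points[:, :2].max(axis=0) - origin) / cell).astype(int) + 1

    def occupancy(mask):
        img = np.zeros(shape, dtype=bool)
        ij = np.rint((points[mask, :2] - origin) / cell).astype(int)
        img[ij[:, 0], ij[:, 1]] = True
        return img

    # Binary top-view images of slices below (I_w1) and above (I_w2)
    # the specified wardrobe height.
    below = occupancy((points[:, 2] > height - 0.10) & (points[:, 2] < height))
    above = occupancy((points[:, 2] > height) & (points[:, 2] < height + 0.10))
    iw = below & ~above  # subtraction image I_w
    labels, n = ndimage.label(iw)
    for k in range(1, n + 1):
        rc = np.argwhere(labels == k)
        ext = np.sort((rc.max(axis=0) - rc.min(axis=0) + 1) * cell)
        # Accept the segment whose bounding box matches width x depth.
        if (abs(ext[1] - max(width, depth)) < tol and
                abs(ext[0] - min(width, depth)) < tol):
            return rc.mean(axis=0) * cell + origin
    return None
```

The returned centroid would serve as the initial placement of the wardrobe's 3D model prior to ICP.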
Structure

Once the 3D data are acquired and aligned, and the floor, ceiling, and walls are recognized and modeled, the precision of the obtained B-Rep models is assessed. The distances between points and planes are represented in color maps in Fig. 18. Regions corresponding to objects like pictures or moldings are clearly visible through significant color variations. For example, two paintings and the skirting board, whose width is between 1 and 1.5 cm, can be seen in Wall 7 [Fig. 18(b)]. Of course, some of the error is also due to walls not being totally flat.

Fig. 15. Wardrobe-like furniture detection and positioning: (a) slices above and below the wardrobe height (first row) and respective binary images I_w1 and I_w2 (second row); (b) subtraction image I_w; (c) final position of the wardrobe model superimposed on the point cloud

The precision of the wall detection and modeling is also assessed with the following four quantitative metrics. The latter three are obtained by comparing the generated B-Rep model to a ground-truth model previously built manually using measurements obtained with a Leica DISTO A6 laser distance meter (Leica Geosystems AG, Switzerland):
• δ: The plane fitting error, i.e., the mean value of the distances of the matched points to the calculated wall plane;
• α: The plane orientation error, i.e., the horizontal angle between the ground-truth and estimated normal vectors; and
• dh and dw: The wall height and width errors, i.e., the differences between the ground-truth and estimated heights and widths.
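These four metrics can be computed directly from the fitted plane and the ground-truth values. The snippet below is an illustrative sketch, not the authors' implementation; it assumes the fitted wall plane is given in Hessian normal form (unit normal n, offset d), with the estimated height and width taken from the B-Rep model.

```python
import numpy as np

def wall_metrics(points, n_est, d_est, n_gt,
                 height_est, height_gt, width_est, width_gt):
    """Quantitative wall-modeling metrics:
    delta -- mean distance of the matched points to the fitted plane,
    alpha -- horizontal angle between ground-truth and estimated normals (deg),
    dh, dw -- absolute height and width errors."""
    delta = np.abs(points @ n_est + d_est).mean()
    # walls are vertical, so both normals are (near-)horizontal; project
    # onto the horizontal plane before measuring the angle between them
    h_est = n_est[:2] / np.linalg.norm(n_est[:2])
    h_gt = n_gt[:2] / np.linalg.norm(n_gt[:2])
    alpha = np.degrees(np.arccos(np.clip(abs(h_est @ h_gt), -1.0, 1.0)))
    return delta, alpha, abs(height_est - height_gt), abs(width_est - width_gt)
```

Taking the absolute value of the dot product makes α insensitive to the sign convention of the normals.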
Table 2 summarizes the results obtained for all 16 walls. The values of δ range from 0.3 to 1.6 cm, demonstrating good modeling precision; values above 1 cm are generally obtained for narrow walls only. The values of α do not exceed 2.4°, with errors above 1° typically obtained for narrow walls. The values for dh are all approximately 0.80 cm. This is simply due to the fact that dh effectively assesses the accuracy in the modeling of the floor and ceiling, which are both considered to be flat horizontal surfaces. The values for dw range between 0.1 and 4.5 cm, with errors above 2 cm essentially obtained for narrow walls.

Altogether, the results are very positive. The accuracy of the modeling is generally high, with larger errors only noticed for narrow walls. Wall 16 may at first be considered an exception to this; but, as shown in Fig. 18(d), the presence of a large window and the additional significant occlusion from curtains on one of its sides explain the comparatively poorer results.

Fig. 16. Example detection and modeling of two wastepaper baskets

Fig. 18. Error maps obtained after the plane fitting stage, for four representative walls; the error is simply the distance (cm) of each point to the fitted plane: (a) Wall 1; (b) Wall 7; (c) Wall 11; (d) Wall 16

Table 2. Modeling Errors for Each Room Wall
Wall identifier   δ (cm)   α (°)   dh (cm)   dw (cm)
1                 0.65     0.23    0.80      0.74
2                 1.15     1.83    0.79      0.39
3                 0.35     0.32    0.79      0.09
4                 0.54     1.42    0.80      0.99
5                 0.87     0.21    0.79      0.15
6                 1.00     0.06    0.79      1.80
7                 0.49     0.87    0.80      0.11
8                 1.32     0.02    0.80      2.05
9                 0.33     0.12    0.80      0.32
10                1.19     2.33    0.80      1.46
11                0.63     1.21    0.80      2.18
12                1.07     2.04    0.80      1.31
13                0.65     0.54    0.80      1.29
14                0.99     0.53    0.80      2.15
15                0.50     0.42    0.80      1.17
16                1.63     0.20    0.80      4.45
Mean              0.84     0.77    0.80      1.29
Note: dh and dw = plane dimensional errors (absolute value), i.e., the differences between measured ground-truth and estimated heights and widths; α = plane orientation error, i.e., the horizontal angle between the measured ground-truth and estimated normal vectors; δ = plane fitting error.

As shown in Fig. 19, three openings corresponding to two open doors and one window are detected. The figure also shows their estimated sizes and positions. To quantitatively evaluate the accuracy of the detection and modeling of these components, three metrics are used:
• dc: Positioning error, i.e., the distance between the centers of the reconstructed and ground-truth opening models; and
• dh and dw: Sizing errors, i.e., the differences in, respectively, height and width of the reconstructed and ground-truth opening models.
The results, summarized in Table 3, show that the doors fit better than the window. For the window, the error is due to curtains that partially occlude it, leading to the fitting of the component to a smaller opening.

Furniture

Fig. 20 illustrates the three furniture configurations arranged for the living room together with the reconstruction results. Different pieces of furniture are identified and positioned in each point cloud. Several components such as curtains, a sofa, and other small objects are correctly not recognized in this process.
Fig. 19. Result of the detection and modeling of openings: (a) planar image of the room with the detected and modeled openings; (b) B-Rep model with the detected openings

Fig. 20. Furniture recognition and modeling results for the three different configurations; configurations 1, 2, and 3 are shown in the three successive columns: (a) planar views of the acquired point cloud data; (b) recognized and positioned furniture
More Complex Case: First Floor of a High School

This subsection presents the results obtained for the first floor of a high school composed of five rooms and one corridor that are equipped with typical classroom furniture. The experimental equipment setup is composed of a mobile robot (Robotnik, Spain) equipped with a Riegl VZ-400 laser scanner (Riegl Laser Measurement System GmbH, Austria), an OBID LRU 3500 (FEIG) RFID sensor, and two computers (Fig. 22). The mobile robot is able to move towards commanded positions to perform scans.
In total, 30 scans were acquired and 6 floors, 6 ceilings, 94 walls, 5 doors, 15 windows, 33 chairs, 15 tables, and 4 wastepaper
Table 4. Furniture Recognition and Positioning Results for the Three Configurations (C Denotes a Chair; W Denotes a Wardrobe; T Denotes a Table)
Configuration   Object   d (cm)   P (%)   α (°)   ϕ (°)   θ (°)   μ (°)   x (cm)   y (cm)   z (cm)   ‖t‖ (cm)
1 C1 1.28 88.40 0.04 0.25 0.80 0.82 0.80 1.01 0.81 0.87
C2 1.63 80.74 0.50 0.43 0.86 1.08 1.03 0.19 2.99 1.40
C3 1.82 80.10 0.15 0.32 0.52 0.62 0.59 1.61 3.30 1.83
C4 1.69 100 0.38 0.01 0.22 0.44 1.03 0.65 1.35 1.01
C5 1.68 99.47 0.28 0.60 0.22 0.7 0.80 0.50 1.59 0.96
W1 1.83 81.61 0.76 0.86 0.12 1.16 1.02 2.21 2.75 1.99
W2 1.84 53.43 1.78 1.95 2.67 3.78 3.88 5.00 2.18 3.69
T1 0.84 100 0.00 0.00 0.16 0.16 1.11 0.63 0.83 0.86
T2 1.06 99.98 0.06 0.06 0.89 0.9 0.09 0.85 1.71 0.88
T3 0.30 100 0.05 0.00 0.05 0.06 0.17 0.41 0.05 0.21
2 C1 1.58 99.08 0.63 0.49 0.64 1.02 1.00 0.34 1.22 0.85
C2 1.72 99.25 0.69 0.14 1.40 1.56 0.39 1.63 0.96 0.99
C3 0.99 96.13 0.18 0.02 0.90 0.92 0.52 0.41 0.58 0.50
C4 1.73 98.31 0.48 0.56 1.03 1.26 0.96 1.21 1.26 1.14
C5 1.88 92.67 0.34 1.32 0.28 1.38 1.63 1.78 1.52 1.64
C6 1.75 73.4 1.23 0.15 1.15 1.7 0.48 2.89 1.11 1.49
W1 1.67 66.08 0.87 0.87 0.41 1.3 1.57 1.40 4.41 2.46
W2 1.75 51.88 0.04 2.3 105.99 106 8.64 1.17 1.58 3.80
T1 0.93 99.65 0.01 0.01 1.23 1.24 1.15 0.45 1.00 0.87
T2 1.45 86.98 0.02 1.34 0.52 1.44 1.97 2.14 2.06 2.06
T3 0.89 69.55 0.23 0.03 2.03 2.04 2.89 9.54 0.74 4.39
3 C1 1.73 97.16 0.56 0.54 0.79 1.1 0.33 1.80 1.41 1.04
C2 1.54 100 0.13 0.23 0.81 0.86 0.55 0.27 1.21 0.68
C3 1.67 82.95 0.27 0.15 0.03 0.32 1.00 0.11 1.48 0.86
C4 1.84 93.76 0.20 0.83 0.61 1.06 1.86 0.18 1.37 1.14
C5 1.41 83.21 0.16 0.35 0.78 0.88 1.19 1.15 0.82 3.16
W1 1.66 77.61 0.57 0.83 0.03 1 1.14 1.29 3.67 2.03
W2 1.58 71.68 4.61 1.07 0.88 4.82 0.85 0.64 3.87 1.79
W3 1.68 100 0.21 0.10 0.09 0.24 1.74 0.61 1.17 1.17
T1 0.91 100 0.03 0.00 0.37 0.38 1.63 0.36 0.96 0.98
T2 1.22 90.57 0.00 0.08 0.51 0.52 2.64 0.95 1.70 1.76
T3 0.72 87.48 0.06 0.22 0.99 1.02 3.28 0.85 0.44 1.52
Mean 1.45 87.54 0.49 0.50 3.99 4.43 1.50 1.38 1.63 1.56
Note: μ = quaternion angle of the rotational error; ‖t‖ = module of vector t. The bold values indicate that the quaternion angle (combining the Euler angles α, ϕ, and θ) and the module of vector t are the most significant parameters.
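The pose-error quantities reported in Table 4 can be reproduced from an estimated and a ground-truth pose: the translational error is the vector t between the two positions, reported per axis and as its modulus ‖t‖, while the residual rotation R_err = R_gt^T R_est is summarized by the single quaternion angle μ = 2 arccos(q_w), with q_w obtained from the trace identity trace(R) = 4 q_w² − 1. The sketch below is illustrative and assumes 3×3 rotation matrices as input.

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotational error as quaternion angle mu (deg) and translational
    error as per-axis components plus modulus ||t|| (input units)."""
    R_err = R_gt.T @ R_est                      # residual rotation
    # quaternion scalar part from the trace: q_w = sqrt(trace(R)+1)/2
    cos_half = np.sqrt(max(np.trace(R_err) + 1.0, 0.0)) / 2.0
    mu = np.degrees(2.0 * np.arccos(np.clip(cos_half, -1.0, 1.0)))
    t = t_est - t_gt                            # per-axis position error
    return mu, t, float(np.linalg.norm(t))
```

The clipping guards against round-off pushing the cosine marginally above 1 for near-identical poses.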
Fig. 21. 3D furnished models for the three configurations of the studied living room; part of the B-Rep has been cropped for visualization purposes
Furniture

The performance of the recognition and positioning of furniture is assessed per room using the following quantitative metrics, relating to those used earlier:
• d̄: Mean of the d values obtained for all furniture contained within the given room;

Table 7. Performance of the Recognition and Positioning of the Furniture
Room   d̄ (cm)   P̄ (%)   μ̄ (°)   ‖t̄‖ (cm)
1      1.91     78.7     2.87     3.43
2      1.81     78.3     2.56     1.95
3      1.86     78.7     2.82     1.73
4      1.94     72.8     2.40     1.77
5      1.58     90.9     2.34     1.74
6      n/a      n/a      n/a      n/a
Note: d̄ and P̄ are the averages of the d and P values obtained for all furniture contained within each room; μ̄ and t̄ are the means of the orientation and position errors obtained for all furniture contained within the given room; Room 6 did not contain any furniture.

Conclusions

Over the last five years, various partial solutions for automatically generating 3D models of buildings using laser scanners have been published. In the case of inhabited, furnished interiors, the automatic generation of 3D models is considered challenging because of the large amount of information that needs to be processed, the wide variability in room shape and unpredictable furniture layout, and the frequent presence of significant clutter and occlusion.
This paper proposes a hierarchical 3D data processing algorithm that is able, with the help of RFID technology, to generate detailed and precise semantic 3D models of furnished, inhabited interiors. RFID tags, attached to furniture and scanned at the same time as laser scanning, provide access to discriminatory geometric information (contained in the building's FM database) about the furniture present in the scanned room that greatly alleviates the difficulties of recognizing them in the point cloud data. This makes the automatic modeling process faster and more robust and accurate.
Altogether, the approach proposed in this paper is capable of recognizing and modeling the main structural components of an indoor environment (walls, floors, ceilings), the wall openings like doorways and windows, as well as typical furniture, such as tables,
Fig. 25. Final modeling results for (a) Room 2 and (b) Room 4 of the high-school floor experiment