
remote sensing

Article
Classification of 3D Digital Heritage
Eleonora Grilli 1,2, * and Fabio Remondino 1
1 3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), Via Sommarive 18, 38121 Trento, Italy;
remondino@fbk.eu
2 Department of Architecture, Alma Mater Studiorum—University of Bologna, Viale del Risorgimento 2,
40136 Bologna, Italy
* Correspondence: grilli@fbk.eu

Received: 21 February 2019; Accepted: 26 March 2019; Published: 8 April 2019

Abstract: In recent years, the use of 3D models in cultural and archaeological heritage for
documentation and dissemination purposes has been increasing. The association of heterogeneous
information to 3D data by means of automated segmentation and classification methods can help
to characterize, describe and better interpret the object under study. Indeed, the high complexity of
3D data, along with the large diversity of heritage assets, has made segmentation and classification
currently active research topics. Although machine learning methods have brought great progress
in this respect, few advances have been developed in relation to cultural heritage 3D data.
Starting from the existing literature, this paper aims to develop, explore and validate reliable and
efficient automated procedures for the classification of 3D data (point clouds or polygonal mesh
models) of heritage scenarios. In more detail, the proposed solution works on 2D data (“texture-based”
approach) or directly on the 3D data (“geometry-based” approach) with supervised or unsupervised
machine learning strategies. The method was applied and validated on four different
archaeological/architectural scenarios. Experimental results demonstrate that the proposed approach
is reliable and replicable, and that it is effective for restoration and documentation purposes, providing
metric information, e.g., of damaged areas to be restored.

Keywords: classification; segmentation; cultural heritage; machine learning; random forest

1. Introduction
The generation of 3D data of heritage sites or monuments, be they point clouds or polygonal
models, is altering the approach that cultural heritage specialists use for the analysis, interpretation,
communication and valorization of such historical information. Indeed, 3D information allows one,
e.g., to perform morphological measurements, quantitative analysis, and information annotation, as
well as to produce decay maps, while enabling easy access and study of remote sites and structures.
The management of architectural heritage information is considered crucial for a better understanding
of the heritage data as well as for the development of targeted conservation policies and actions. An efficient
information management strategy should take into consideration three main concepts: segmentation,
structuring the hierarchical relationships and semantic enrichment [1]. The demand for automatic model
analysis and understanding is continuously increasing. Recent years have witnessed significant progress
in automatic procedures for segmentation and classification of point clouds or meshes [2–4]. There are
multiple studies related to the segmentation topic, mainly driven by specific needs provided by the field
of application (Building Information Modeling (BIM) [5], heritage documentation and preservation [6],
robotics [7], autonomous driving [8], urban planning [9], etc.).
In the cultural heritage field, the identification of different components (Figure 1) in point clouds
and 3D meshes is of primary importance because it can facilitate the study of monuments and their
integration with heterogeneous information and attributes. However, it remains a challenging task
considering the complexity and high variety that heritage case studies can have.

Figure 1. Example of a segmented and classified point cloud.

The research presented in this article was motivated by the need to identify and map different
states of conservation phases or employed materials in heritage objects. Towards this direction, we
developed a method: (i) to document and retrieve historical and architectural information; (ii) to
distinguish different construction techniques (e.g., types of opus); and (iii) to recognize the presence
of existing restoration evidence. The retrieval of such information in historic buildings by traditional
methods (e.g., manual mapping or simple visual inspection by experts) is considered a time-consuming
and laborious procedure [10]. The aim of our research was not to develop a new algorithm for
the classification of 3D data (point clouds or polygonal mesh models) of heritage scenarios, but to
explore the applicability of supervised machine learning approaches to an almost unexplored field of
application (i.e., cultural heritage), proposing a reliable and efficient pipeline that can be standardized
for different case studies.

2. State of the Art: 2D/3D Segmentation and Classification Techniques
Both image and point cloud segmentation are fundamental tasks in various applications, such as
object detection [11], medical analyses [12], license plate and vehicle recognition [13], classification of
microorganisms [14], fruit recognition [15] and many more [16]. Segmentation is the process of
grouping data (e.g., images, point clouds or meshes) into multiple homogeneous regions with similar
properties [17] (Figure 2). These regions are homogeneous with respect to some criteria, called
features, that constitute a characteristic property or set of properties which is unique, measurable
and differentiable. In the case of 2D imagery, features refer to visual properties such as size, color,
shape, scale patterns, etc., while, for 3D point cloud data, they typically result from specific geometric
characteristics of the global or local 3D structure [18]. Typically, in 3D data, surface normals, gradients
and curvature in the neighborhood of a point are used.

Figure 2. An example of image segmentation [19].

Once 2D or 3D scenarios have been segmented, each group can be labeled with a class, giving the
parts some semantics; hence, classification is often called semantic segmentation or pixel/point labeling.

2.1. Segmentation Methods

The image segmentation topic has been widely explored [20] and current state-of-the-art
techniques include edge-based [21,22] and region-based approaches [23] and clustering techniques
[24–27] (Figure 3).

Figure 3. Synthetic representation of the segmentation and classification methods. The 3D model
represents the classification of the archaeological elements of the Neptune temple in Paestum [26].

Except for the model fitting approach, most of the point cloud data segmentation methods have
some roots in image segmentation. However, due to the complexity and variety of point clouds caused
by irregular sampling, varying density, different types of objects, etc., point cloud segmentation and
classification are more challenging and still very active research topics.
Edge-based segmentation methods have two main stages [27]: (i) edge detection to outline the
borders of different regions [28]; and (ii) grouping of points inside the boundaries to deliver the final
segments. Edges in a given depth map are defined by the points where changes in the local surface
properties exceed a given threshold. The most used local surface properties are normals, gradients,
principal curvatures or higher-order derivatives. Methods based on edge-based segmentation
techniques were reported by Bhanu et al. [29], Sappa and Devy [30], and Wani and Arabnia [31].
Although such methods allow a fast segmentation, they may produce inaccurate results due to noise
and uneven density of point clouds, situations that commonly occur in point cloud data. In 3D space,
such methods often detect disconnected edges, making the identification of closed segments difficult
without a filling or interpretation procedure [32].
Region-based methods work with region growing algorithms. In this case, the segmentation
starts from one or more points (seed points) featuring specific characteristics and then grows around
neighboring points with similar characteristics, such as surface orientation, curvature, etc. [27,33].
The initial algorithm was introduced by Besl et al. [34], and several variations are presented in the
literature [35–40]. In general, the region growing methods are more robust to noise than the edge-based
ones because they utilize global information [41]. However, these methods are sensitive to: (i) the
location of initial seed regions; and (ii) inaccurate estimations of the normals and curvatures of points
near region boundaries.
The model fitting approach is based on the observation that many man-made objects can be
decomposed into geometric primitives such as planes, cylinders and spheres (Figure 4). Therefore,
primitive shapes (such as cylinders, cubes, spheres, etc.) are fitted onto 3D data and the points that best
fit the mathematical representation of the fitted shape are labeled as one segment. As part of the model
fitting category, two widely employed algorithms are the Hough Transform (HT) [42] and the Random
Sample Consensus (RANSAC) [43]. Model fitting methods are fast and robust to outliers, although
a set of dedicated primitives is necessary. As it falls short for complex shapes or fully automated
implementations, the use of the richness of surface geometry through local descriptors provides a
better solution [44]. In the architectural field, details cannot always be modeled into easily recognizable
geometric shapes. Thus, while some entities can be characterized by geometric properties, others are
more readily distinguished by their color content [45].

Figure 4. Segmentation of 3D point cloud by geometric primitive fitting.

Machine learning approaches are described in detail in the next sections. It should be noted that,
while clustering algorithms (unsupervised techniques) belong to segmentation methods, after applying
supervised machine learning methods the 3D data are not only segmented but also classified.
Several benchmarks have been proposed in the research community, providing labeled terrestrial
and airborne data on which users can test and validate their own algorithms. However, most of
the available datasets provide classified natural, urban and street scenes [46–52]. While in those
scenarios the object classes and labels are almost defined (mainly ground, roads, trees, and buildings),
the identification of precise categories in the heritage field is much more complex (several classes
could be used to identify and describe the same building characteristics, based on different purposes).
For these reasons, to the authors’ knowledge, the literature currently lacks benchmarks of labeled
images or 3D heritage data.

2.2. Machine Learning for Data Segmentation and Classification

2D and 3D data classification is gaining interest and becoming an active field of research
connected with machine learning applications ([53–57] and [58–61], respectively).
Machine learning (including deep learning) is a scientific discipline concerned with the design
and development of Artificial Intelligence algorithms that allow computers to make decisions based
on empirical and training data. Broadly, there are three types of approaches with machine learning
algorithms:
• A supervised approach is where semantic categories are learned from a dataset of annotated data
and the trained model is used to provide a semantic classification of the entire dataset. While for the
aforementioned methods the classification is a step after the segmentation, when using supervised
machine learning methods the class labeling procedure is planned before segmenting the model.
Random forest [62], described in detail in Section 4.2, is one of the most used supervised learning
algorithms for classification problems [63,64].
• An unsupervised approach is where the data are automatically partitioned into segments based on
a user-provided parameterization of the algorithm. No annotations are requested, but the outcome
might not be aligned with the user’s intention. Clustering is a type of unsupervised machine
learning that aims to find homogeneous subgroups such that objects in the same group (clusters)
are more similar to each other than those in other groups. K-means is a clustering algorithm that
divides observations into k clusters using features. Since we can dictate the number of clusters,
it can be easily used in classification, where we divide data into a number of clusters equal to or
greater than the number of classes. The original K-means algorithm presented by MacQueen et
al. [65] has since been largely exploited for images and point clouds by various researchers [66–70].
• An interactive approach is where the user is actively involved in the segmentation/classification
loop by guiding the extraction of segments via feedback signals. This requires a large effort from
the user side, but it can adapt and improve the segmentation result based on the user’s feedback.
Feature extraction is a prerequisite for image/cloud segmentation. Features play an important role
in these problems and their definition is one of the bottlenecks of machine learning methods [71,72].
Weinmann et al. [73] discussed the suitability of features, which should privilege quality over quantity
(Figure 5). High-quality features allow better interpretation of the models and enhance algorithm
performance with respect to both speed and accuracy. This shows a need to prioritize and find robust
and relevant features to address the heterogeneity in images or point clouds.
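As an illustration of what such 3D features can look like (a sketch under generic assumptions, not the specific implementation of the cited works), the eigenvalues of the covariance matrix of a point’s neighborhood yield common descriptors such as linearity, planarity and sphericity:

```python
# Sketch: eigenvalue-based geometric features of a point's neighborhood.
# The neighborhood points are synthetic; real pipelines use k-NN or radius search.
import numpy as np

neighborhood = np.random.rand(50, 3)              # stand-in local 3D points
cov = np.cov(neighborhood.T)                      # 3x3 covariance matrix
evals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # l1 >= l2 >= l3
l1, l2, l3 = evals

linearity = (l1 - l2) / l1                        # 1D (edge-like) structure
planarity = (l2 - l3) / l1                        # 2D (surface-like) structure
sphericity = l3 / l1                              # fully 3D structure
print(linearity, planarity, sphericity)
```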

Figure 5. Framework for 3D scene analysis: a 3D point cloud serves as input and the output consists of
a semantically labeled 3D point cloud [18].

In both 2D and 3D segmentation/classification, approaches can be combined to exploit the
strengths of one method and bypass the weaknesses of others [41,74]. The success of these “hybrid
methods” depends on the success of the underlying approaches.

3. Segmentation and Classification in Cultural Heritage

In the field of cultural heritage, processes such as segmentation and classification can be applied
at different scales, from entire archaeological sites and landscapes to small artifacts.
In the literature, different solutions are presented for the classification of architectural images,
using different techniques such as pattern detection [75], Gabor filters and support vector machines
[76], K-means algorithms [77], clustering and learning of local features [78], hierarchical sparse coding
of blocks [79] or CNN deep learning [16,80].
Many experiments were also carried out on 3D data at different scales [6,81,82]. Some works aim
to define a procedure for the integration of architectural 3D models within BIM [1,5,83]. In many others,
the classification is conducted manually for annotation purposes (www.aioli.cloud). In the NUBES
project, for example, 3D models are generated from 2D annotated images. In particular, the NUBES
web platform [84] allows the displaying and cross-referencing of 2D mapping data on the 3D model in
real time, by means of structured 2D layers, such as annotations concerning stone degradation, dating
and material. Apollonio et al. [85] used 3D models and data mapping on 3D surfaces in the context
of the restoration documentation of Neptune’s Fountain in Bologna. Campanaro et al. [86] realized
a 3D management system for heritage structures by exploiting the combination of 3D visualization
and GIS analysis. The 3D model of the building was originally split into architectural sub-elements
(facades) to add color information, projecting orthoimages by means of planar mapping techniques
(texture mapping). Sithole [87] proposed an automatic segmentation method for detecting bricks in
masonry walls, working on the point clouds and assuming that mortar channels are reasonably deep
and wide. Oses et al. [76] used machine learning classifiers, support vector machines and classification
trees for masonry classification. Riveiro et al. [88] suggested an algorithm for the segmentation of
bricks in point clouds, built on a 2.5D approach and creating images based on the intensity attribute of
LiDAR sensors. Recently, Messaoudi et al. [89] developed a correlation pipeline for the integration of
the semantic, spatial and morphological dimensions of a built heritage. The annotation process provides
a 3D point-based representation of each 2D region.
4. Project’s Methodology
Considering the availability and reliability of segmentation methods applied to (2D) images and
the efficiency of machine learning strategies, a new methodology was developed to assist cultural
heritage experts in analyzing digital 3D data. In particular, the approach presented hereafter relies on
supervised and unsupervised machine learning methods for segmenting the texture information of 3D
digital models. Starting from colored 3D point clouds or textured surface models, our pipeline relies
on the following steps:
• Create and optimize models, orthoimages (for 2.5D geometries) and UV maps (for 3D geometries)
(Figure 6a–c).
• Segment the orthoimage or the UV map following different approaches tailored to the case study
(clustering, random forest) (Figure 6d,e).
• Project the 2D classification results onto the 3D object space by back-projection and the collinearity
model (Figure 6f); the collinearity relation is recalled after this list.
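For reference, the back-projection of the last step can be expressed with the standard photogrammetric collinearity equations, recalled here in their textbook form (the paper does not detail its specific implementation): an object point (X, Y, Z) maps to image coordinates (x, y) given the projection center (X0, Y0, Z0), the object-to-camera rotation matrix R = (r_ij), the principal distance c and the principal point (x0, y0):

$$x = x_0 - c\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$

$$y = y_0 - c\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$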

Figure 6. Schematic representation of the developed segmentation and classification methodology:
3D model of a portion of the Circus Maximus Cavea in Rome, Italy (a); 3D model after re-meshing (b);
UV map (c); manually identified training areas on the unwrapped texture (d); supervised classification
results (e); and re-projection of the classification results onto the 3D model (f).

4.1. Image Preparation


The method works on the texture information of a 3D model. The texture is prepared according
to the geometry and complexity of the considered 3D object:

• Planar objects (e.g., walls, Section 5.1): The object orthophoto is created and the procedure classifies
and finally re-maps the information onto the 3D geometry.
• Regular objects (e.g., buildings or other 3D structures with a certain level of complexity fit into
this category, Section 5.2): Instead of creating various orthoimages from different points of view,
unwrapped textures (UV maps) are generated and classified. To generate a good texture image to
be classified, we followed these steps:

1. Remeshing: Beneficial to improve the quality of the mesh and to simplify the next steps.
2. Unwrapping: UV maps are generated using Blender, adjusting and optimizing seam lines
and overlaps (Figure 6c) to facilitate the subsequent analysis with machine learning strategies.
This correction is made by commanding the UV unwrapper to cut the mesh along edges chosen
in accordance with the shape of the case study [90] (a minimal scripting sketch of this step is
reported after this list).
3. Texture mapping: The created UV map is then textured (Figure 6d) using the original
textured polygonal model (as vertex color or with an external texture). This way, the radiometric
quality is not compromised despite the remeshing phase.
• Complex objects (e.g., monuments or statues, Section 5.3): When objects are too complex for a
good unwrap, the classification is done directly on the texture generated as output during the 3D
modeling procedure.
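For the unwrapping step (item 2 in the list above, for regular objects), a minimal Blender scripting sketch is given below; the object name "Portico" and the margin value are hypothetical, and it assumes the seam edges have already been selected in Edit Mode:

```python
# Minimal Blender (bpy) sketch: mark seams and unwrap a mesh to a UV map.
# Object name and unwrap parameters are illustrative, not from the paper;
# assumes the edges to cut along are already selected in Edit Mode.
import bpy

obj = bpy.data.objects["Portico"]                 # hypothetical mesh object
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.mark_seam(clear=False)               # cut the mesh along selected edges
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)  # generate the UV layout
bpy.ops.object.mode_set(mode='OBJECT')
```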

When we consider color image segmentation, choosing a proper color space becomes an important
issue [91]. This is because different color spaces present color information in different ways that make
certain calculations more convenient and also provide a way to identify colors that is more intuitive.
Several color representations are currently in use in color image processing. The most common is RGB,
but HSV and L*A*B* are also frequently chosen color spaces [92,93]. In the RGB color space,
for example, shadowed areas will most likely have very different characteristics than areas without
shadows. In the HSV color space, the hue component of areas with and without shadow is more likely
to be similar: the shadow will primarily influence the value or the saturation component, while the
hue (indicating the primary “color” without its brightness and dilution by white/black) should not
change much. Another popular option is the L*A*B* color space, where the A and B channels represent
the color and Euclidean distances in the AB plane better match the human perception of color; again,
ignoring the L channel (luminance) makes an algorithm more robust to lighting differences.
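As a simple illustration of this choice (a sketch, not part of the paper's pipeline), the snippet below converts a texture to HSV and L*A*B* with OpenCV and keeps only the chromatic channels as per-pixel features; the input array stands in for a real orthoimage:

```python
# Illustrative sketch: compare color spaces of a texture before segmentation.
# A random array stands in for cv2.imread("wall_orthoimage.png") (hypothetical file).
import cv2
import numpy as np

bgr = np.random.randint(0, 255, (60, 80, 3), dtype=np.uint8)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # hue is less shadow-sensitive
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # AB distances ~ perceived color

# Drop the lightness/value channel to reduce the influence of shadows,
# keeping only the chromatic components as per-pixel features.
hue_sat = hsv[:, :, :2].reshape(-1, 2).astype(np.float32)
ab_only = lab[:, :, 1:].reshape(-1, 2).astype(np.float32)
print(hue_sat.shape, ab_only.shape)
```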

4.2. Supervised Learning Classification


The 2D classification method relies on different machine learning models embedded in Weka [94],
coupled with the Fiji distribution of ImageJ, an image processing software that exploits Weka as an
engine for machine learning models [95]. The method combines a collection of machine learning
algorithms (random tree, support vector machine, random forest, etc.) with a set of selected image
features to produce pixel-based segmentations. All the available classifiers are based on a decision
tree learning method. In this approach, during the training, a set of decision nodes over the values
of the input features (e.g., “is feature x greater than 0.7?”) are built and connected to each other in a
tree structure.
This structure, as a whole, represents a complex decision process over the input features. The result
of this decision is a value for the label that classifies the input example. During the training phase,
the algorithm learns these decision nodes and connects them.
Among the different approaches, we achieved the best results in terms of accuracy exploiting
the random forest method (Section 5.1) [62]. In this approach, several decision trees are trained as an
ensemble, with the mode of all the predictions taken as the final one. This allows us to overcome
some typical problems in decision tree learning, such as overfitting the training data and learning
uncommon irregular patterns that may occur in the training set. This behavior is mitigated by the
random forest procedure by randomly selecting different subsets of the training set and, for each of
these subsets, a random subset of input features; for each of these subsets of training examples and
features, a decision tree is learned. The main intuition behind these procedures, called “feature bagging”,
is that some features can be very strong predictors for the output class: such features would otherwise
be selected in many of the trees, causing them to become correlated.
For each case study, the random forest was trained giving as input the manually annotated
model’s textures. More specifically, not all the pixels of those images were manually annotated with
their corresponding label, but just some significant and well-distributed portions (e.g., see Figure 8b).

The first time the training process starts, the features of the input image are extracted and converted to
a set of vectors of float values (Weka input). This step can take some time depending on the size of
the images, the number of features and the number of cores of the machine where the classification
is running. The feature calculation is done in a completely multi-thread fashion. The features are
calculated only the first time it is trained after starting the plugin or after changing any of the feature
options. In the case of color (RGB) images, the hue, saturation and brightness are also part of
the features.
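A minimal sketch of this kind of pixel-based training, using scikit-learn as a stand-in for the Weka/Fiji plugin described above, could look as follows; the feature set (raw channels plus Gaussian-smoothed versions) and all array shapes are illustrative assumptions:

```python
# Minimal sketch of pixel-based random forest classification (scikit-learn
# stand-in for the Weka/Fiji plugin). Feature set and arrays are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb):
    """Stack raw channels and Gaussian-smoothed channels as per-pixel features."""
    chans = [rgb[:, :, c].astype(np.float32) for c in range(3)]
    chans += [gaussian_filter(c, sigma=2.0) for c in chans]
    return np.stack(chans, axis=-1).reshape(-1, len(chans))

rgb = np.random.randint(0, 255, (100, 120, 3), np.uint8)  # stand-in orthoimage
labels = np.zeros(100 * 120, np.int32)                    # 0 = unannotated pixel
labels[:500] = 1                                          # toy annotated patches
labels[500:900] = 2

X = pixel_features(rgb)
mask = labels > 0                                         # train on annotated pixels only
clf = RandomForestClassifier(n_estimators=100).fit(X[mask], labels[mask])
pred = clf.predict(X).reshape(100, 120)                   # classify every pixel
```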

4.3. Unsupervised Learning Classification


The unsupervised segmentation approach is performed using the k-means clustering plugin of
ImageJ/Fiji [96]. The algorithm performs pixel-based segmentation of multi-band images. Each pixel
in the input image is assigned to one of the clusters, and the values in the output image represent the
cluster number to which the original pixel was assigned. Before starting the process, the operator
decides the number K of classes the image will be divided into and the cluster center tolerance.
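The following sketch reproduces the same idea with scikit-learn instead of the ImageJ plugin; the number of clusters and the tolerance are illustrative user choices:

```python
# K-means pixel clustering sketch (scikit-learn stand-in for the ImageJ/Fiji plugin).
# The number of clusters k and the tolerance are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

rgb = np.random.randint(0, 255, (80, 100, 3), np.uint8)  # stand-in texture
pixels = rgb.reshape(-1, 3).astype(np.float32)

k = 5                                                    # user-chosen number of classes
km = KMeans(n_clusters=k, tol=1e-4, n_init=10).fit(pixels)
cluster_map = km.labels_.reshape(80, 100)                # per-pixel cluster index
```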

4.4. Evaluation Method


To assess the performance of the classification, the classifiers were trained on small sections of the
entire datasets and their predictions were then compared quantitatively with the ground truth. More
specifically, we relied on the precision, recall and F1 score calculated for each class (computed for each
point by comparing the label predicted by the classifier with the manually annotated one) and on the
overall accuracy, which is useful to evaluate the overall performance of the classifier.

$$\mathrm{Precision} = \frac{T_p}{T_p + F_p} \qquad (1)$$

$$\mathrm{Recall} = \frac{T_p}{T_p + F_n} \qquad (2)$$

$$F_1\,\mathrm{score} = 2 \cdot \frac{\mathrm{Recall} \cdot \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} \qquad (3)$$

$$\mathrm{Overall\ accuracy} = \frac{\text{number of correct predictions}}{\text{total number of predictions}} \qquad (4)$$

where, for each considered class, Tp (true positive), Tn (true negative), Fp (false positive), and Fn (false
negative) come from the confusion matrix, which is commonly used to evaluate machine learning classifiers.
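As a compact illustration, the per-class scores and the overall accuracy can be derived from a confusion matrix as follows (the matrix below is a toy example, not the paper's data):

```python
# Per-class precision/recall/F1 and overall accuracy from a confusion matrix.
# Rows = true labels, columns = predicted labels; the matrix is a toy example.
import numpy as np

cm = np.array([[50, 3, 2],
               [4, 40, 6],
               [1, 5, 44]], dtype=float)

tp = np.diag(cm)
precision = tp / cm.sum(axis=0)          # Tp / (Tp + Fp), per class
recall = tp / cm.sum(axis=1)             # Tp / (Tp + Fn), per class
f1 = 2 * precision * recall / (precision + recall)
overall_accuracy = tp.sum() / cm.sum()   # correct predictions / all predictions
print(precision, recall, f1, overall_accuracy)
```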
Once considerable levels of overall accuracy were reached (>70%), the classification was extended
to the whole datasets, which were evaluated qualitatively. In this way, the performance of the model
was assessed against a set of images different from the ones used during the training phase, so that
the capability of the model to generalize over unseen data could be effectively measured.

5. Test Objects and Classification Results


The proposed methodology was applied to and tested on various archaeological and architectural
scenarios to prove its replicability and reliability with different 3D objects.
In particular, these case studies were considered:

• The Pecile’s wall of Villa Adriana in Tivoli: it is a 60 m L × 9 m H wall (Figure 7a) with holes on
its top meant for the beams of a roof. The digital model of the wall was classified to identify the
different categories of opus (Roman building techniques), distinguishing original and restored parts.
• Part of a Renaissance portico located in the city center of Bologna: it spans ca. 8 m L × 13 m H ×
5 m D (Figure 7b). The classification aimed to identify principal parts and architectural elements.
• The Sarcophagus of the Spouses (Figure 7c): it is a late 6th century BC Etruscan anthropoid
sarcophagus, 1.14 m high by 1.9 m wide, made of terracotta, which was once brightly painted [97].
The classification aimed at identifying surface anomalies (such as fractures and decays) and
quantifying the percentage of mimetic cement used to assemble the sarcophagus.
• The Bartoccini’s Tomb in Tarquinia (Figure 7d): the tomb, excavated in the hard sand in the 4th
century, has four rooms, a central one (ca. 5 m × 4 m) and three lateral rooms (ca. 3 m × 3 m), all
connected through small corridors. The height of the tomb rooms does not exceed 3 m and it is all
painted with a reddish color and various figures. The aim was to automatically identify the still
painted areas on the wall and the deteriorated parts.

For the porticoes and the Bartoccini’s tomb, a supervised segmentation approach directly on the 3D
models was also applied.

Figure 7. The case studies of the work to validate the semantic classification for analyses and restoration
purposes: Pecile’s wall of Villa Adriana in Tivoli, Italy (a); Renaissance building in Bologna, Italy (b);
Sarcophagus of the Spouses, National Etruscan museum of Villa Giulia in Rome, Italy (c); Etruscan
tomb in Tarquinia, Italy (d).
5.1. The Pecile’s Wall

To define the correctness of the developed approach, different analyses were conducted on this
first case study. Only a portion of the Pecile’s wall (4 m length × 9 m height) was considered at first
(Figure 8). On this portion of the wall’s orthoimage, different training processes were run using
different image scales to identify the solution that best fit this case study, taking into account the
classification aims.

Figure 8. Orthoimage of a portion of Pecile’s wall (4 m length × 9 m height) exported at 1:20 scale (a);
corresponding training samples in the image (b); and classification results obtained at different scales:
scale 1:10 (c); scale 1:20 (d); scale 1:50 (e); and ground truth (f).

At a 1:10 scale, the results present an over-segmentation. Using a 1:50 scale, many details were
lost, identifying only some macro-areas. The scale 1:20 (normally used for restoration purposes as it
allows distinguishing bricks) turned out to be the optimal choice. It allowed the capture of the details
but could not consider the cracks of the mortar between the bricks (Figure 8d). Given the manually
selected training classes (seven classes), different classifiers were trained and evaluated. Table 1 reports
the overall accuracy results for all tested classifiers run on the orthoimage at scale 1:20. Moreover,
we report the time elapsed for each algorithm, considering that creating the classes and the training
data took around 10 min and the feature stack array required 14 min.

Table 1. Accuracy results and elapsed time for various classifiers applied to an orthoimage at 1:20 scale.

Classifier            Overall Accuracy    Time
j48                   0.44                22 s
Random Tree           0.46                15 s
RepTREE               0.47                33 s
LogitBoost            0.52                20 s
Random Forest         0.57                23 s
Fast Random Forest    0.70                120 s
Out of all the tests performed with the different algorithms, the best overall accuracy obtained was
70%, using a random forest classifier. To better identify the classification errors, a confusion matrix was
used (Table 2). From the table analysis, it was possible to understand that most errors in classification
were in those classes where an overlap of plaster was present on the surface of the opus. However,
it is believed that an expert should not consider the accuracy percentage absolute without previous
verification. Comparing the segmentation handled by the operator and by the algorithm, it was found
that the supervised method allowed the identification of more details and differences in the material’s
composition. In fact, it could not only distinguish the classes, but also identify the presence of plaster
above the wall surface. This is an important advantage for the degradation analysis.

Table 2. Normalized confusion matrix to analyze the results of the supervised classification of a portion
of Pecile’s wall at scale 1:20. Precision, recall and F1 calculated for each category are also reported;
the predicted-label columns follow the same class order as the true-label rows.

TRUE LABEL \ PREDICTED LABEL                                        Precision  Recall  F1
Undercut/holes              0.64  0.01  0     0.07  0.08  0.02  0.13    0.56    0.67   0.6
Restored Opus latericium    0.2   0.73  0.02  0.02  0.18  0.01  0       0.85    0.63   0.72
Plaster wall                0.02  0.03  0.75  0.04  0.12  0.02  0.03    0.91    0.74   0.82
Old Opus latericium         0.08  0.02  0.01  0.43  0.2   0.05  0.21    0.56    0.43   0.49
Opus reticulatum grey       0.06  0.05  0.01  0.08  0.66  0.05  0.08    0.47    0.67   0.55
Restored Opus reticulatum   0.02  0.01  0.01  0.01  0.07  0.83  0.06    0.77    0.82   0.8
Old Opus reticulatum        0.15  0.01  0.02  0.12  0.08  0.09  0.52    0.5     0.52   0.51

Starting from this result, the training dataset was applied to a larger part of the wall (Figure 9b).
To classify 540 m² of surface, the process took about 1 h. Considering that the operator took 4 h just
for classifying a smaller part (24 m²), the supervised technique could obtain a more accurate result in
a shorter time. The classification results can also be used to create the most commonly requested map
for restoration purposes, with dedicated symbols/legend (Figure 9d).

Figure 9. The original (a); and classified (b) model of the Pecile’s wall, ca. 60 m long. A closer view is also
reported to better show the classification results with random colors (c); or dedicated symbols (d).

5.2. Bologna’s Porticoes

The historical porticoes of Bologna were built during the 11th–20th centuries and can be regarded
as unique from an architectural viewpoint in terms of their authenticity and integrity. Thanks to their
great extension, permanence, use and history, the porticoes of Bologna are considered of outstanding
universal value. They span approximately 40 km, mainly in the historic city center of Bologna, and
they represent a high-quality architectural work.
Such structures combine variegated geometric shapes, different materials and many architectural
details such as moldings and ornaments. According to the different classification requirements, the aim
of the task could be the identification of:

• different architectural elements;
• diverse materials (bricks vs. stones vs. marble); and
• categories of decay (cracks vs. humidity vs. swelling).
5.2.1. Classification
5.2.1. Classification of2D
• diverse
of 2D Data (bricks vs. stones vs. marble); and
data
materials
Starting•from
Starting fromcategories
the of decay (cracks
the available
available3D 3Ddata
vs. humidity
dataofofthe
vs.[98],
theporticoes
porticoes
swelling).
thethe
[98], texture of the
texture 3D 3D
of the digital porticoes
digital was
porticoes
unwrapped
was unwrapped (Figure 10)
(Figure and used
10)data to manually identify training patches and classes
and used to manually identify training patches and classes (10). The(10). The results
5.2.1. Classification of 2D
results (Figure 11), based on fast forest
(Figure 11), based on fast random random model/classifier, show many
forest model/classifier, classification
show errors under
many classification the
errors
porticoes,Starting
where from
the the available
plaster is not 3D data of the porticoes
homogeneous and [98], the
presents texture decays.
different of the 3DIndigital
this porticoes
case, a solution
under the porticoes, where the plaster is not homogeneous and presents different decays. In this case,
was unwrapped (Figure 10) and used to manually identify training patches and classes (10). The
amight be to createbemany
solution
resultsmight
(Figure 11),
different
to create
based on many classes according
fast different classes
random forest
toaccording
the number
model/classifier,
of decay
to show
the numbercategories
of decay orcategories
many classification
to apply, asora
errors
post processing phase, algorithms to make more uniform the areas with small
to apply, as a post processing phase, algorithms to make more uniform the areas with small spots. spots.
under the porticoes, where the plaster is not homogeneous and presents different decays. In this case,
a solution might be to create many different classes according to the number of decay categories or
to apply, as a post processing phase, algorithms to make more uniform the areas with small spots.
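As a minimal illustration of this texture-based step, the following sketch performs a pixel-wise random forest classification with scikit-learn rather than the tool actually used; the file names and the per-pixel label map for the training patches are hypothetical.

```python
# Illustrative sketch of texture-based classification (not the authors' exact tool).
import numpy as np
from skimage import io, color
from sklearn.ensemble import RandomForestClassifier

texture = io.imread("portico_texture.png")[:, :, :3]   # hypothetical unwrapped texture
labels = np.load("training_patches.npy")               # hypothetical labels, 0 = unlabeled

# HSV channels serve as per-pixel features to reduce lighting sensitivity.
features = color.rgb2hsv(texture).reshape(-1, 3)

annotated = labels.reshape(-1) > 0                     # train only on annotated pixels
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(features[annotated], labels.reshape(-1)[annotated])

# Predict a class for every pixel and restore the image layout.
predicted = clf.predict(features).reshape(texture.shape[:2])
```

A majority filter over local windows can then be run on the predicted map to make small spotted regions more uniform, as suggested above.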
Figure 10. Manually identified training areas on the unwrapped texture of the porticoes portion (a); and classification results based on the selected 10 classes (b).
Figure 11. 3D model (a); and from 2D to 3D classification results of historical porticoes in Bologna (b).

5.2.2. Classification of 3D Data

Considering the classification errors on the facades, due to the heterogeneous surfaces and the presence of different decays, a supervised classification approach was performed based on the Computational Geometry Algorithms Library (CGAL) [99] and the Random Forest Template Library (ETHZ Random Forest, 2018) [100]. The classification consists of three steps: feature computation, model training and prediction. To characterize the model, a combination of features considering both geometry and color factors was selected: distance to plane, eigenvalues of the neighborhood, elevation, verticality and HSV channels. The training dataset of correct labels was manually annotated on small portions of the entire 3D model. To conclude that phase, a classifier was defined and trained using random forest: from the set of values taken by the features at an input item, it measured the likelihood of this item belonging to one label or another. Finally, once the classifier was generated, the prediction process was performed on the entire point cloud (Figure 12).
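As an illustration of the feature computation step, the following sketch derives normalized covariance eigenvalues for each point's neighborhood with NumPy/SciPy. It is a simplified stand-in for the CGAL/ETHZ implementation; the input file name and the neighborhood size are assumptions.

```python
# Simplified sketch of neighborhood-based geometric features (not the CGAL/ETHZ code).
import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("portico_cloud.xyz")[:, :3]   # hypothetical Nx3 point cloud
tree = cKDTree(points)
k = 20                                            # assumed neighborhood size
_, idx = tree.query(points, k=k)

eigen_features = np.empty((len(points), 3))
for i, neighbors in enumerate(idx):
    cov = np.cov(points[neighbors].T)             # 3x3 covariance of the k-neighborhood
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]    # eigenvalues, largest first
    eigen_features[i] = w / w.sum()               # normalized eigenvalues

# Elevation is a further geometric feature; verticality and the HSV color
# channels would be stacked alongside these values to form the feature vector.
elevation = points[:, 2] - points[:, 2].min()
```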

Figure 12. 3D model (a); and classification results of historical porticoes in Bologna (b).

As shown in the figure, the classification problems on the walls were solved, but there were some false positives due to the classification of drain pipes as columns (vertical pipes) and vaults (horizontal pipes). However, this was not considered a classification error, but an error in the training phase, as no class was assigned to these features.

Considering the results obtained working on the 3D models and the suitability of the case study (40 km of similar buildings), as future work the authors aim to extend the classification to a bigger portion of the porticoes of Bologna.
5.3. The Etruscan Sarcophagus of the Spouses

The Etruscan masterpiece "Sarcofago degli Sposi" was found in 1881 in the Banditaccia necropolis in Cerveteri (Italy). The remains were found broken into more than 400 pieces. The sarcophagus was then reassembled and joined using a mimetic cement to fill the gaps among the different pieces. In 2013, digital acquisitions and 3D modeling of the sarcophagus, based on different technologies (photogrammetry, TOF and triangulation-based laser scanning), were conducted to deliver a highly detailed, photo-realistic 3D representation of the Etruscan masterpiece for successive multimedia purposes [101]. Using the high-resolution photogrammetric 3D model (5 million triangles), the segmentation task aimed to detect the surface anomalies (fractures and decays) and to test the reliability of the method on a heritage object with a more complex topology and few chromatic differentiations on the texture. For the training set, three main categories, and two accessory ones, were identified (Figure 13a).
Figure 13. Manually identified training areas on the unwrapped texture of the sarcophagus (a); and related classification result on the texture (b).

The manual identification of the necessary patches took about 15 min and was accomplished with the support of restoration experts. The sustaining legs of the sarcophagus were excluded from the classification, as they are the only parts where pigment decorations are clearly visible, thus their analysis was outside the segmentation scope. After the patch identification, the model took some 2 h of processing to classify the entire texture (Figure 13b), which was then mapped onto the available 3D geometry (Figures 14 and 15). The segmented 3D model highlighted every single detail of the masterpiece assembly; fractures were distinguished from engravings; and the different grades of conservation were also identified. The classification output also allowed calculating the percentage that each label occupies. From the results, 12% of the entire surface of the object (i.e. the 3D model) is composed of mimetic cement. As the overall surface of the 3D model is 6.8 m2, approximately 0.8 m2 are reconstructed parts (a sketch of this per-label computation is given after Figure 14).
Figure 14. Texturized (a) and segmented (b) 3D model of the Sarcophagus of the Spouses.
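Such per-label surface statistics can be derived directly from a labeled mesh. The following is an illustrative sketch only, assuming hypothetical NumPy arrays for the vertices, triangular faces and per-face class labels of the model.

```python
# Illustrative sketch: per-class surface areas on a labeled triangle mesh.
import numpy as np

vertices = np.load("sarcophagus_vertices.npy")    # hypothetical (V, 3) float array
faces = np.load("sarcophagus_faces.npy")          # hypothetical (F, 3) index array
face_labels = np.load("sarcophagus_labels.npy")   # hypothetical (F,) class per face

# Triangle area = half the norm of the cross product of two edge vectors.
v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)

total = areas.sum()
for label in np.unique(face_labels):
    class_area = areas[face_labels == label].sum()
    print(f"class {label}: {class_area:.2f} m2 ({100 * class_area / total:.1f}%)")
```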
Figure 15. Mimetic cement areas of the sarcophagus of the Spouses highlighted in red (ca. 12% of the model).
5.4. The Etruscan Bartoccini's Tomb in Tarquinia, Italy

Tarquinia was one of the most ancient cities of the Etruscan civilization. The necropolis, situated in the areas of Monterozzi and Calvario, is composed of some 6000 tombs, 60 of which are decorated with paintings. The Bartoccini tomb, dated to around the 4th century B.C., was discovered in 1959. Combined TOF scanning and panoramic photographic surveys were carried out to obtain the complete 3D model (3 million triangles) [102]. The TOF range data were used to derive the geometry of the tomb, while the panoramic image was used to derive the photo-realistic, high-resolution texture. In-house developed algorithms were used to project the panoramic image onto the 3D geometry and then extract a high-resolution texture (a generic sketch of this kind of mapping is given below). Over the centuries, the tomb has suffered from erosion caused by various factors such as infiltration, seasoning and aging. The aim of the segmentation was the automated identification of the deteriorated surface areas on the painted walls. To reach this goal, different strategies were tested, on both 2D (Section 5.4.1) and 3D (Section 5.4.2) data. Quantification analyses are presented in Section 5.4.3.
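Since the authors' projection algorithms are in-house and not publicly documented, the sketch below is only a generic illustration of how an equirectangular panorama can be sampled to color 3D points via spherical coordinates, assuming the panorama center coincides with the acquisition position (file names are hypothetical).

```python
# Generic illustration (not the authors' in-house tool): sampling an
# equirectangular panorama to color 3D points.
import numpy as np
from skimage import io

pano = io.imread("tomb_panorama.jpg")              # hypothetical equirectangular image
points = np.loadtxt("tomb_room.xyz")[:, :3]        # hypothetical Nx3 points
center = np.zeros(3)                               # assumed acquisition center

d = points - center
r = np.linalg.norm(d, axis=1)
theta = np.arctan2(d[:, 1], d[:, 0])               # azimuth in [-pi, pi]
phi = np.arccos(np.clip(d[:, 2] / r, -1.0, 1.0))   # zenith in [0, pi]

h, w = pano.shape[:2]
u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
v = (phi / np.pi * (h - 1)).astype(int)
colors = pano[v, u]                                # one RGB sample per 3D point
```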
5.4.1. Classification of 2D Data

Given the texture of the tomb, a clustering process was chosen instead of manually training various classes. K-means clustering (Section 4.3) was performed to generate a pixel-based segmentation of the panoramic images. To optimize the working time, in a trial phase only one wall was analyzed, and then the segmentation was extended to the whole model. To avoid segmentation errors, and thus achieve better results, the image was transformed from the RGB to the Lab* color space (Figure 16b). The obtained results (Figure 16d) were compared with the ground truth (manually segmented) data (Figure 16c), achieving an overall accuracy of 91.15%. The clustering method was then applied to the entire panoramic texture of the tomb's room (Figure 17) and finally mapped onto the 3D geometry (Figure 18).
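As a minimal illustration of this clustering step, the sketch below runs K-means on the Lab* values of a texture with scikit-image and scikit-learn; the file name and the number of clusters are assumptions.

```python
# Minimal sketch: K-means clustering of texture pixels in Lab* color space.
import numpy as np
from skimage import io, color
from sklearn.cluster import KMeans

image = io.imread("tomb_wall_texture.png")[:, :, :3]   # hypothetical texture file
lab = color.rgb2lab(image)                             # Lab* is less lighting-sensitive
pixels = lab.reshape(-1, 3)

k = 4                                                  # assumed number of clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
segmented = km.labels_.reshape(image.shape[:2])

# Each cluster can then be inspected and named (e.g. "eroded" vs. "painted").
```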
Figure 16. Texture of a wall of the Bartoccini's tomb with RGB (a) and Lab* colors (b); ground truth of eroded areas (c); clustering results (d); and marking (pink color) of not identified eroded areas by the automatic clustering method (e).
Figure 17. Panoramic image of one room of the tomb (a); Lab* color space (b); and automatically identified eroded areas by clustering segmentation (c).
Figure 18. Section of the tomb (a); and visualization of the 2D segmentation results mapped on the 3D model (b).
5.4.2. Classification of 3D Data
From the 3D geometry of a tomb's wall, a plane fitting procedure allowed extracting a depth map of the wall and identifying the eroded surfaces (Figure 19b), i.e. those below the ideal fitted surface. Although only the damaged areas below a certain depth variation threshold were identified, the volume of the eroded wall could still be calculated (Table 3).
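A minimal version of this plane-fitting step can be sketched as follows, assuming a hypothetical point file and erosion threshold: the normal of the least-squares plane is the direction of smallest variance of the points, and the signed point-to-plane distances form the depth map.

```python
# Minimal sketch: least-squares plane fit and signed depths to flag eroded areas.
import numpy as np

points = np.loadtxt("tomb_wall.xyz")[:, :3]   # hypothetical Nx3 wall points

# The plane passes through the centroid; its normal is the singular vector
# associated with the smallest singular value of the centered points.
centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
normal = vt[-1]

depth = (points - centroid) @ normal          # signed distance to the fitted plane
threshold = -0.005                            # assumed 5 mm erosion threshold
eroded = depth < threshold                    # sign convention assumed: negative = below surface
print(f"eroded points: {100 * eroded.mean():.1f}%")

# With a gridded depth map, the eroded volume can additionally be approximated
# by summing |depth| times the cell area over the eroded cells.
```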


Figure 19. A wall of the Bartoccini's tomb (a); depth map (b); and 3D supervised segmentation results (c).

Consequently, as for the porticoes case study, a supervised classification approach was applied. The correct labels (three classes) were annotated in small, well-distributed portions, and then the prediction process was performed on the entire 3D model (Figure 20).

Table 3. Computed eroded areas (and volumes) with the aforementioned approaches.

Classifier                     % Eroded Surfaces   Area/Volume        Time for Elaborations
2D Manual classification       58%                 2.8 m2             30 min
2D Unsupervised clustering     49%                 2.4 m2             5 min
3D Supervised segmentation     56%                 2.7 m2             10 min
Depth map                      40%                 1.96 m2 / 0.1 m3   10 min
Figure 20. Texturized (a); and classified (b) 3D model of the Bartoccini's Tomb in Tarquinia.
5.4.3. Quantification Analyses

For all the aforementioned experiments, the identified eroded surfaces were estimated as a percentage of the total area of interest. In particular, for the approaches based on 2D images, the percentage was calculated as the ratio between the number of pixels classified as eroded and the total number of pixels in the segmented image. On the other hand, in the case of 3D data, the percentage resulted as the number of points belonging to the deteriorated areas over the total number of 3D points. Table 3 reports the eroded areas computed for one wall of the tomb (4.9 m2), with both percentages and square meter areas shown. This type of metric result could be beneficial for monitoring and restoration purposes.
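The ratios above reduce to simple label counting; the sketch below is a minimal illustration with hypothetical label arrays and class id.

```python
# Minimal sketch: eroded-surface percentages from classified 2D and 3D data.
import numpy as np

ERODED = 1                                         # assumed class id

pixel_labels = np.load("segmented_texture.npy")    # hypothetical per-pixel labels
eroded_2d = 100 * (pixel_labels == ERODED).mean()  # eroded pixels / all pixels

point_labels = np.load("classified_points.npy")    # hypothetical per-point labels
eroded_3d = 100 * (point_labels == ERODED).mean()  # eroded points / all points

wall_area = 4.9                                    # m2, wall area from the case study
print(f"2D: {eroded_2d:.0f}% -> {wall_area * eroded_2d / 100:.2f} m2")
print(f"3D: {eroded_3d:.0f}% -> {wall_area * eroded_3d / 100:.2f} m2")
```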

6. Conclusions and Future Works


This paper presents a pipeline to classify 3D heritage data, either working on the texture or
directly on the 3D geometry, depending on the needs and scope of the classification. With the proposed
methods, archaeologists, restorers, conservators and scientists can automatically annotate 2D textures
of heritage objects and visualize them onto 3D geometries for a better understanding. The existence
of a vast variety of building techniques and the usage of diverse ornamental elements introduces an
extra barrier in generalizing segmentation techniques to heritage case studies. Moreover, a monument
can be subject to different types of degradation depending on its exposure under various conditions,
hence increasing the complexity of the classification tasks. A machine learning-based approach becomes
beneficial for speeding up classification tasks on large and complex scenarios, provided that the
training datasets are sufficiently large and diverse.
The advantages of the proposed method are:

• shorter time to classify objects with respect to manual methods (Table 1);
• over-segmentation results useful for restoration purposes to detect small cracks or
deteriorated parts;
• replicability of the training set for buildings of the same historical period or with similar
construction material (e.g. Roman walls);
• visualization of classification results onto 3D models from different points of view, using
unwrapped textures;
• possibility to compute absolute and relative areas of each class (Table 3), useful for analysis and
restoration purposes; and
• applicability of the pipeline to different kinds of heritage buildings, monuments or any other kind
of 3D models.

On the other hand, lessons learned and open critical issues can be summarized as:

• Difficult identification of the classes of analysis case by case (e.g. problems with the
drainpipes classified as columns): the choice of the right classes during the training phase
becomes fundamental.
• Misinterpretation of the shadows can introduce errors in the classification: the use of different
color spaces from the classic RGB one, e.g. HSV and Lab, makes the lighting differences less
problematic during the segmentation phase.
• Over-segmentation results in many classes, commonly useless in semantic analysis: this implies
the need to make the regions more uniform in a post processing phase.

For future works, the authors aim to work across different heritage buildings, improving the
generalization of the classification of some basic classes (e.g. windows, doors, and columns). To do
that, it will be necessary to increase the number of labeled images and exploit more complex machine
learning algorithms, in particular deep neural networks.
We will also tackle the objective of increasing the homogeneity of the segmentation to minimize
and ideally avoid any post-processing phase.
Finally, we will work on new case studies, applying both techniques presented (from 2D to 3D
and directly on 3D), to better understand the advantages of each method with respect to the other.

Author Contributions: E.G. designed the project objectives, performed the experiments and wrote the manuscript.
F.R. supervised all the steps of the research, provided substantial insights and edits.
Funding: This research received no external funding.
Acknowledgments: The authors would like to acknowledge Accademia Adrianea and University of Bologna
(Dept. Architecture) for providing the datasets of Pecile’s wall in Villa Adriana and Bologna’s porticoes. We are
also thankful to Soprintendenza per i Beni Archeologici dell'Etruria Meridionale and the Etruscan National
Museum of Villa Giulia in Rome for the possibility to work on the Sarcophagus and Bartoccini's tomb case studies.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Saygi, G.; Remondino, F. Management of architectural heritage information in BIM and GIS: State-of-the-art
and future perspectives. Int. J. Herit. Digit. Era 2013, 2, 695–713. [CrossRef]
2. Hackel, T.; Wegner, J.D.; Schindler, K. Fast semantic segmentation of 3D point clouds with strongly varying
density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184. [CrossRef]
3. Weinmann, M.; Weinmann, M. Geospatial Computer Vision Based on Multi-Modal Data—How Valuable Is
Shape Information for the Extraction of Semantic Information? Remote Sens. 2017, 10, 2. [CrossRef]
4. Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on
point clouds. arXiv, 2018; arXiv:1801.07829.
5. Macher, H.; Landes, T.; Grussenmeyer, P. Point clouds segmentation as base for as-built BIM creation.
ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 191. [CrossRef]
6. Barsanti, S.G.; Guidi, G.; De Luca, L. Segmentation of 3D Models for Cultural Heritage Structural
Analysis–Some Critical Issues. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 115. [CrossRef]
7. Maturana, D.; Scherer, S. Voxnet: A 3D convolutional neural network for real-time object recognition.
In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
Hamburg, Germany, 28 September–2 October 2015; pp. 922–928.
8. Wang, L.; Zhang, Y.; Wang, J. Map-Based Localization Method for Autonomous Vehicles Using 3D-LIDAR.
IFAC-PapersOnLine 2017, 50, 276–281.
9. Xu, S.; Vosselman, G.; Oude Elberink, S. Multiple-entity based classification of airborne laser scanning data
in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 88, 1–15. [CrossRef]
10. Corso, J.; Roca, J.; Buill, F. Geometric analysis on stone façades with terrestrial laser scanner technology.
Geosciences 2017, 7, 103. [CrossRef]
11. Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm.
Remote Sens. 2016, 117, 11–28. [CrossRef]
12. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19,
221–248. [CrossRef]
13. Li, H.; Shen, C. Reading car license plates using deep convolutional neural networks and LSTMs. arXiv,
2016; arXiv:1601.05610.
14. Li, C.; Shirahama, K.; Grzegorzek, M. Application of content-based image analysis to environmental
microorganism classification. Biocybern. Biomed. Eng. 2015, 35, 10–21. [CrossRef]
15. Dubey, S.R.; Dixit, P.; Singh, N.; Gupta, J.P. Infected fruit part detection using K-means clustering
segmentation technique. Ijimai 2013, 2, 65–72. [CrossRef]
16. Llamas, J.; Lerones, P.M.; Medina, R.; Zalama, E.; Gómez-García-Bermejo, J. Classification of architectural
heritage images using deep learning techniques. Appl. Sci. 2017, 7, 992. [CrossRef]
17. Grilli, E.; Menna, F.; Remondino, F. A review of point clouds segmentation and classification algorithms.
ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344. [CrossRef]
18. Weinmann, M. Reconstruction and Analysis of 3D Scenes: From Irregularly Distributed 3D Points to Object Classes;
Springer: Cham, Switzerland, 2016.
19. Stathopoulou, E.K.; Remondino, F. Semantic photogrammetry: boosting image-based 3D reconstruction with
semantic labeling. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 422, 685–690. [CrossRef]
20. Yuheng, S.; Hao, Y. Image Segmentation Algorithms Overview. arXiv, 2017; arXiv:1707.02051.
21. Al-Amri, S.S.; Kalyankar, N.V.; Khamitkar, S.D. Image segmentation by using edge detection. Int. J. Comput.
Sci. Eng. 2010, 2, 804–807.
22. Kaur, J.; Agrawal, S.; Vig, R. A comparative analysis of thresholding and edge detection segmentation
techniques. Int. J. Comput. Appl. 2012, 39, 29–34. [CrossRef]
23. Schoenemann, T.; Kahl, F.; Cremers, D. Curvature regularity for region-based image segmentation and
inpainting: A linear programming relaxation. In Proceedings of the International Conference on Computer
Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 17–23.
24. Chitade, A.Z.; Katiyar, S.K. Colour based image segmentation using k-means clustering. Int. J. Eng. Sci. Technol.
2010, 2, 5319–5325.
25. Saraswathi, S.; Allirani, A. Survey on image segmentation via clustering. In Proceedings of the 2013
International Conference on Information Communication and Embedded Systems (ICICES), Chennai, India,
21–22 February 2013; pp. 331–335.
26. Fiorillo, F.; Fernández-Palacios, B.J.; Remondino, F.; Barba, S. 3D Surveying and modelling of the
Archaeological Area of Paestum, Italy. Virtual Archaeol. Rev. 2013, 4, 55–60. [CrossRef]
27. Naik, D.; Shah, P. A review on image segmentation clustering algorithms. Int. J. Comput. Sci. Inf. Technol.
2014, 5, 3289–3293.
28. Rabbani, T.; Van Den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint.
Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
29. Bhanu, B.; Lee, S.; Ho, C.C.; Henderson, T. Range data processing: Representation of surfaces by edges.
In Proceedings of the 8th International Conference on Pattern Recognition, Paris, France, 27–31 October 1986;
pp. 236–238.
30. Sappa, A.D.; Devy, M. Fast range image segmentation by an edge detection strategy. In Proceedings of the
IEEE 3rd 3D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 292–299.
31. Wani, M.A.; Arabnia, H.R. Parallel edge-region-based segmentation algorithm targeted at reconfigurable
multiring network. J. Supercomput. 2003, 25, 43–62. [CrossRef]
32. Castillo, E.; Liang, J.; Zhao, H. Point cloud segmentation and denoising via constrained nonlinear least
squares normal estimates. In Innovations for Shape Analysis; Springer: Berlin/Heidelberg, Germany, 2013;
pp. 283–299.
33. Jagannathan, A.; Miller, E.L. Three-dimensional surface mesh segmentation using curvedness-based region
growing approach. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 2195–2204. [CrossRef]
34. Besl, P.J.; Jain, R.C. Segmentation through variable order surface fitting. IEEE Trans. Pattern Anal. Mach. Intell. 1988,
10, 167–192. [CrossRef]
35. Vosselman, M.G.; Gorte, B.G.H.; Sithole, G.; Rabbani, T. Recognising structure in laser scanning point clouds.
Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 46, 33–38.
36. Belton, D.; Lichti, D.D. Classification and segmentation of terrestrial laser scanner point clouds using local
variance information. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 44–49.
37. Klasing, K.; Althoff, D.; Wollherr, D.; Buss, M. Comparison of surface normal estimation methods for range
sensing applications. In Proceedings of the IEEE International Conference on Robotics and Automation,
Kobe, Japan, 12–17 May 2009; pp. 3206–3211.
38. Xiao, J.; Zhang, J.; Adler, B.; Zhang, H.; Zhang, J. Three-dimensional point cloud plane segmentation in both
structured and unstructured environments. Robot. Auton. Syst. 2013, 61, 1641–1652. [CrossRef]
39. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud
segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [CrossRef]
40. Liu, Y.; Xiong, Y. Automatic segmentation of unorganized noisy point clouds based on the Gaussian map.
Comput. Aided Des. 2008, 40, 576–594. [CrossRef]
41. Vieira, M.; Shimada, K. Surface mesh segmentation and smooth surface extraction through region growing.
Comput. Aided Geom. Des. 2005, 22, 771–792. [CrossRef]
42. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 183–194.
[CrossRef]
43. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to
image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [CrossRef]
44. Poux, F.; Hallot, P.; Neuville, R.; Billen, R. Smart point cloud: Definition and remaining challenge.
In Proceedings of the 11th 3D Geoinfo Conference, Athens, Greece, 20–21 October 2016.
45. Barnea, S.; Filin, S. Segmentation of terrestrial laser scanning data using geometry and image information.
ISPRS J. Photogramm. Remote Sens. 2013, 76, 33–48. [CrossRef]
46. 2D Semantic labelling. Available online: http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html (accessed on 28 March 2019).
47. Deepglobe. Available online: http://deepglobe.org/challenge.html (accessed on 28 March 2019).
48. Mapping challenge. Available online: https://www.crowdai.org/challenges/mapping-challenge (accessed
on 28 March 2019).
49. Large-Scale Semantic 3D Reconstruction. Available online: https://www.grss-ieee.org/community/technical-committees/data-fusion/data-fusion-contest/ (accessed on 28 March 2019).
50. Large-Scale Point Cloud Classification Benchmark. Available online: http://www.semantic3d.net/ (accessed
on 28 March 2019).
51. KITTI Vision Benchmark Suite. Available online: http://www.cvlibs.net/datasets/kitti/ (accessed on
28 March 2019).
52. Semantic, instance-wise, dense pixel annotations of 30 classes. Available online: https://www.cityscapes-dataset.com/ (accessed on 28 March 2019).
53. Noh, H.; Hong, S.; Han, B. Learning deconvolution network for semantic segmentation. In Proceedings of
the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1520–1528.
54. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for
image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [CrossRef]
55. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep
learning techniques applied to semantic segmentation. arXiv, 2017; arXiv:1704.06857.
56. Perez, L.; Wang, J. The effectiveness of data augmentation in image classification using deep learning. arXiv,
2017; arXiv:1712.04621.
57. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with
deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2018,
40, 834–848. [CrossRef]
58. Guo, B.; Huang, X.; Zhang, F.; Sohn, G. Classification of airborne laser scanning data using JointBoost.
ISPRS J. Photogramm. Remote Sens. 2014, 92, 124–136. [CrossRef]
59. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of LiDAR data and building object
detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165. [CrossRef]
60. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3D classification and
segmentation. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA,
21–26 July 2017; Volume 1, p. 4.
61. Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B. Contextual classification of point
cloud data by exploiting individual 3D neighbourhoods. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
2015, 2, 271–278. [CrossRef]
62. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [CrossRef]
63. Bassier, M.; Van Genechten, B.; Vergauwen, M. Classification of sensor independent point cloud data of
building objects using random forests. J. Build. Eng. 2019, 21, 468–477. [CrossRef]
64. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions.
ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [CrossRef]
65. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of
the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, June 1967.
66. Jain, A.K. Data clustering: 50 years beyond K-means. Pattern Recognit. Lett. 2010, 31, 651–666. [CrossRef]
67. Burney, S.A.; Tariq, H. K-means cluster analysis for image segmentation. Int. J. Comput. Appl. 2014, 96.
[CrossRef]
68. Dhanachandra, N.; Manglem, K.; Chanu, Y.J. Image segmentation using K-means clustering algorithm and
subtractive clustering algorithm. Procedia Comput. Sci. 2015, 54, 764–771. [CrossRef]
69. Zhang, C.; Mao, B. 3D Building Models Segmentation Based on K-means++ Cluster Analysis. Int. Arch.
Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 42. [CrossRef]
70. Hétroy-Wheeler, F.; Casella, E.; Boltcheva, D. Segmentation of tree seedling point clouds into elementary
units. Int. J. Remote Sens. 2016, 37, 2881–2907. [CrossRef]
71. Guo, Y.; Bennamoun, M.; Sohel, F.; Lu, M.; Wan, J.; Kwok, N.M. A comprehensive performance evaluation of
3D local feature descriptors. Int. J. Comput. Vis. 2016, 116, 66–89. [CrossRef]
72. Georganos, S.; Grippa, T.; Vanhuysse, S.; Lennert, M.; Shimoni, M.; Kalogirou, S.; Wolff, E. Less is more:
Optimizing classification performance through feature selection in a very-high-resolution remote sensing
object-based urban application. GISci. Remote Sens. 2018, 55, 221–242. [CrossRef]
73. Weinmann, M.; Jutzi, B.; Mallet, C. Feature relevance assessment for the semantic interpretation of 3D point
cloud data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 5, W2. [CrossRef]
74. Aijazi, A.K.; Serna, A.; Marcotegui, B.; Checchin, P.; Trassoudaine, L. Segmentation and Classification of
3D Urban Point Clouds: Comparison and Combination of Two Approaches. In Field and Service Robotics;
Springer: Cham, Switzerland, 2016; pp. 201–216.
75. Mathias, M.; Martinovic, A.; Weissenberg, J.; Haegler, S.; Van Gool, L. Automatic architectural style
recognition. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 3816, 171–176. [CrossRef]
76. Oses, N.; Dornaika, F.; Moujahid, A. Image-based delineation and classification of built heritage masonry.
Remote Sens. 2014, 6, 1863–1889. [CrossRef]
77. Shalunts, G.; Haxhimusa, Y.; Sablatnig, R. Architectural style classification of building facade windows.
In Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA, 26–28 September 2011;
Springer: Berlin/Heidelberg, Germany, 2011; pp. 280–289.
78. Zhang, L.; Song, M.; Liu, X.; Sun, L.; Chen, C.; Bu, J. Recognizing architecture styles by hierarchical sparse
coding of blocklets. Inf. Sci. 2014, 254, 141–154. [CrossRef]
79. Chu, W.T.; Tsai, M.H. Visual pattern discovery for architecture image classification and product image search.
In Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, Hong Kong, China,
5–8 June 2012; p. 27.
80. Llamas, J.; Lerones, P.M.; Zalama, E.; Gómez-García-Bermejo, J. Applying deep learning techniques to cultural
heritage images within the INCEPTION project. In Proceedings of the Euro-Mediterranean Conference,
Nicosia, Cyprus, 31 October–5 November 2016; Springer: Cham, Switzerland, 2016; pp. 25–32.
81. Manferdini, A.M.; Remondino, F.; Baldissini, S.; Gaiani, M.; Benedetti, B. 3D modeling and semantic
classification of archaeological finds for management and visualization in 3D archaeological databases.
In Proceedings of the International Conference on Virtual Systems and MultiMedia (VSMM), Limassol,
Cyprus, 20–25 October 2008; pp. 221–228.
82. Poux, F.; Neuville, R.; Hallot, P.; Billen, R. Point cloud classification of tesserae from terrestrial laser data
combined with dense image matching for archaeological information extraction. ISPRS Ann. Photogramm.
Remote Sens. Spat. Inf. Sci. 2017, 4, 203–211. [CrossRef]
83. De Luca, L.; Buglio, D.L. Geometry vs. Semantics: Open issues on 3D reconstruction of architectural elements.
In 3D Research Challenges in Cultural Heritage; Springer: Berlin/Heidelberg, Germany, 2014; pp. 36–49.
84. Stefani, C.; Busayarat, C.; Lombardo, J.; Luca, L.D.; Véron, P. A web platform for the consultation of spatialized
and semantically enriched iconographic sources on cultural heritage buildings. J. Comput. Cult. Herit. 2013, 6, 13.
[CrossRef]
85. Apollonio, F.I.; Basilissi, V.; Callieri, M.; Dellepiane, M.; Gaiani, M.; Ponchio, F.; Rizzo, F.; Rubino, A.R.;
Scopigno, R. A 3D-centered information system for the documentation of a complex restoration intervention.
J. Cult. Herit. 2018, 29, 89–99. [CrossRef]
86. Campanaro, D.M.; Landeschi, G.; Dell’Unto, N.; Touati, A.M.L. 3D GIS for cultural heritage restoration:
A ‘white box’workflow. J. Cult. Herit. 2016, 18, 321–332. [CrossRef]
87. Sithole, G. Detection of bricks in a masonry wall. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008,
XXXVII, 1–6.
88. Riveiro, B.; Lourenço, P.B.; Oliveira, D.V.; González-Jorge, H.; Arias, P. Automatic Morphologic Analysis of
Quasi-Periodic Masonry Walls from LiDAR. Comput.-Aided Civ. Infrastruct. Eng. 2016, 31, 305–319. [CrossRef]
89. Messaoudi, T.; Véron, P.; Halin, G.; De Luca, L. An ontological model for the reality-based 3D annotation of
heritage building conservation state. J. Cult. Herit. 2018, 29, 100–112. [CrossRef]
90. Cipriani, L.; Fantini, F. Digitalization culture vs. archaeological visualization: Integration of pipelines and
open issues. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 195. [CrossRef]
91. Bora, D.J.; Gupta, A.K. A new approach towards clustering based color image segmentation. Int. J. Comput. Appl.
2014, 107, 23–30.
92. Jurio, A.; Pagola, M.; Galar, M.; Lopez-Molina, C.; Paternain, D. A comparison study of different color spaces
in clustering based image segmentation. In Proceedings of the International Conference on Information
Processing and Management of Uncertainty in Knowledge-Based Systems, Dortmund, Germany, 28
June–2 July 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 532–541.
93. Sural, S.; Qian, G.; Pramanik, S. Segmentation and histogram generation using the HSV color space for
image retrieval. In Proceedings of the 2002 International Conference on Image Processing, Rochester, NY,
USA, 22–25 September 2002; Volume 2, p. II.
94. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques;
Morgan Kaufmann: Burlington, MA, USA, 2016.
95. Image processing package Imagej. Available online: http://imagej.net/Fiji (accessed on 25 March 2019).
96. Imagej K-means plugin. Available online: http://ij-plugins.sourceforge.net/plugins/segmentation/k-means.html (accessed on 25 March 2019).
97. Kleiner, F.S. A History of Roman Art; Cengage Learning: Boston, MA, USA, 2016.
98. Remondino, F.; Gaiani, M.; Apollonio, F.; Ballabeni, A.; Ballabeni, M.; Morabito, D. 3D documentation of 40
kilometers of historical porticoes-the challenge. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41.
[CrossRef]
99. Giraudot, S.; Lafarge, F. Classification. In CGAL User and Reference Manual, 4.14 ed; CGAL Editorial Board,
2019.
100. ETHZ Random Forest code. Available online: www.prs.igp.ethz.ch/research/Source_code_and_datasets.html (accessed on 25 March 2019).
101. Menna, F.; Nocerino, E.; Remondino, F.; Dellepiane, M.; Callieri, M.; Scopigno, R. 3D digitization of an
heritage masterpiece-a critical analysis on quality assessment. Int. Arch. Photogramm. Remote Sens. Spat.
Inf. Sci. 2016, 41. [CrossRef]
102. Jiménez Fernández-Palacios, B.; Rizzi, A.; Remondino, F. Etruscans in 3D—Surveying and 3D modeling for a
better access and understanding of heritage. Virtual Archaeol. Rev. 2013, 4, 85–89. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).