Abstract: Accurate and rapid condition assessment of in-service structural components is critical to ensure safety and serviceability. One major assessment consideration is the detection and quantification of structural section loss due to deterioration, for instance, from corrosion. Modern three-dimensional (3D) imaging techniques, which generate high-resolution 3D point clouds, are capable of detecting and measuring these deteriorations. However, despite advancements in the fields of automated point cloud analysis for as-built modeling and structural inspection, the potential use of spatial 3D data for updating numerical finite-element (FE) models of structures is still an emergent topic. This paper presents a localized methodology for the automatic and systematic detection and quantification of damage in structural components using high-fidelity 3D point cloud data, followed by a corresponding local update to an FE model. In this study, 3D point cloud data of a targeted structure were first obtained by using dense structure from motion (DSfM) algorithms. Section loss damage was then identified and located through computer vision and 3D data processing techniques. In order to preserve data integrity and resolve localized high-fidelity details, direct 3D point cloud comparisons were performed. An experimental study validating the developed approach is presented as well. The results indicate that the presented methodology will enable engineers to use the updated structural model to determine the reserve capacity and remaining service life of structural elements, though further studies on methods to improve mesh generation and defect quantification are warranted. DOI: 10.1061/(ASCE)AS.1943-5525.0000885. © 2018 American Society of Civil Engineers.

Author keywords: Three-dimensional (3D) reconstruction; Condition assessment; Computer vision; Damage detection; Finite-element model; Computational mechanics; Structural damage; Digital image correlation; Infrastructure monitoring; 3D data processing.
…ing methodologies for creating and updating FE models.

In the work done by Lubowiecka et al. (2009), a 3D laser scanner was used to capture the complex geometry of a medieval masonry bridge, and the resulting data were later used to create an FE model. In their method, two-dimensional (2D) Delaunay flat triangulation was used to create 3D surfaces from point clouds, which was observed to produce inaccuracies in capturing the actual geometry of damaged regions on the structure. In the study by Truong-Hong et al. (2013), a combination of voxelization with an angle criterion was used to convert point clouds to finite-element models for building façades. In another work by Hinks et al. (2013), an automated conversion of point cloud data into solid computer models was performed through a pointwise voxelization technique as well. However, due to the inherent sensitivity of the algorithm to voxel size, it suffered from a lack of accuracy in modeling small and detailed regions.

Barazzetti et al. (2015) and Stavroulaki et al. (2016) investigated using both 3D laser scanners and image-based reconstruction to generate 3D point clouds of historic structures. Although these models were later converted to solid models compatible with computational analysis, the capabilities of their methods to incorporate potential structural damages were not investigated. In the studies by Ma et al. (2016) and Zeibak-Shini et al. (2016), automated acquisition of an as-damaged building information model (BIM) for evaluating the postearthquake state of reinforced concrete buildings based on a laser-scanned 3D point cloud was presented. Fernandez et al. (2016) utilized 3D scanning results to capture the geometry of corroded steel bars tested under cyclic or monotonic loads to create their corresponding 3D FE models.

In most related prior efforts, a complete point cloud was used to generate a new finite-element model of a damaged component or structural system, typically through global surface reconstruction. One major limitation of such approaches is the often undue number of finite elements in the resulting models, leading to excessive computational costs, particularly if nonlinear analysis is required. Global reconstructions are also susceptible to inaccuracies stemming from the meshing process itself, which in general attempts to fit a local surface to a point cloud, with a corresponding degradation of geometric features such as flat surfaces and sharp edge boundaries within the 3D model. This problem is exacerbated as the relative scale between a component and a defect increases. Finally, such approaches are dependent on the generation of a complete 3D point cloud of a component because occlusions and missing data in such approaches result in artificial defects and inaccurate finite-element models.

Contributions of This Work

The main objective of this study was to address these challenges through the development of a localized approach to the updating of finite-element models based on damage detected and quantified in […] primary contribution of this work.

The remainder of the paper is structured as follows. The algorithmic methodology is first presented. An evaluation of the accuracy of the damage detection and quantification subcomponents of the algorithm is then provided. This is followed by a discussion of the results of a series of experimental tests designed to evaluate the overall numerical model updating process. The paper concludes with an examination of the limitations of the approach and potential avenues for future work and extensions.

Methodology

The overall computational methodology (Fig. 1) was composed of several steps that are described in the following subsections. This process was designed to identify and quantify changes in the state of the targeted component at different times. Therefore, the presented methodology required a baseline model in the form of as-built or as-designed 3D solid and/or point cloud models of a structural component, as well as a point cloud representing current conditions (as-is) as input data. The current state point cloud, Pc, was used for comparative analysis with respect to the reference point cloud, Pr, on a point-to-point scale to detect structural damage in the component. Once damage was detected, the point cloud of the defect was extracted and used to update an existing solid model of the component, which can then be meshed for finite-element analysis. The approach was designed for localized damage on individual structural components, and is not meant for the generation of finite-element models of complete structural systems. However, if a finite-element model of a system exists, this approach can be used to locally update it in the region of a defect.

Damage Detection and Quantification

Point Cloud Generation

In general, 3D point clouds of the reference (Pr) and current (Pc) condition of a component can be generated via 3D laser scanning or photogrammetry, or a combination of the two. In the absence of an as-built reference point cloud, a 3D solid model of the as-designed component can be used as a basis for sampling points on each surface to generate a synthetic reference 3D point cloud.

In this study, point clouds were generated from a set of 2D digital images, using the photogrammetric approach known as structure from motion (SfM). SfM was chosen over laser scanning due to the research team's previous experiences working with the two technologies (Khaloo et al. 2018; Khaloo and Lattanzi 2016). The presented algorithm does not require SfM-generated point clouds to function properly, and the choice of laser scanning or SfM should be evaluated on a case-by-case basis.

The SfM approach reconstructs the parameters of cameras (both extrinsic and intrinsic parameters) solely from correspondences in an unstructured image data set. This process includes salient feature extraction and feature matching and estimation of relative camera poses from known point correspondences, followed by the computation of the 3D coordinates of extracted features (Hartley and Zisserman 2004). Upon finding the orientation of each image, a bundle adjustment is then performed to optimize the resulting camera orientations and 3D point coordinates. This procedure is repeated until an orientation is available for all images. The result of this procedure is a relatively sparse set of points. Next, in order to densify the reconstructed model and produce a photorealistic 3D model dense enough to capture small geometric changes, a semiglobal (pixelwise) multiview stereo (MVS) algorithm (Hirschmüller et al. 2012) is used to capture information from all pixels in the input 2D images to derive up to one 3D point for each pixel. The result is a dense 3D point cloud of the targeted structural component suitable for the following analysis.

Point Cloud Data Preprocessing

The generated Pr and Pc point clouds then undergo preprocessing. First, to improve computational efficiency and to overcome the variability in local point density in both 3D models, a uniform subsampling step is applied. This helps to eliminate redundant information while preserving the geometric detail needed for automated damage detection. In addition, the nonuniform point density in the original point clouds could influence point correspondence matching in the subsequent transformation and registration of 3D models.

One of the most efficient ways to achieve uniform sampling within point clouds, and the approach taken here, is by subdividing the data set into a set of cubical regions (3D voxel grid) and then taking the centroid (average point inside a voxel) of each voxel as a key point. The number of points in the resulting point cloud is then reduced and downsampled uniformly.

Point clouds are then smoothed and denoised using a statistical approach (Rusu et al. 2008). The mean, μ, and standard deviation, σ, of the Euclidean distances of every point from its k-nearest neighbors are computed. Next, every point that falls outside μ ± ασ is considered as noise and removed from the 3D point cloud. In this study, k = 30 and α = 1 were assigned to account for varying local point density, as recommended in Rusu et al. (2008).

The reference and current state point clouds are then automatically aligned and globally registered against each other using the iterative closest point (ICP) algorithm (Besl and McKay 1992). The overall aim of the ICP algorithm is to estimate a rigid transformation between points p_i ∈ Pr and points q_i ∈ Pc by solving an optimization problem in the least-squares sense. By using nearest-neighbors search and Euclidean distance calculation, the algorithm estimates the closest neighbors p_i and q_i as correspondence points. In order to calculate the rotation R and translation t between p_i and q_i, the error function shown in Eq. (1) is minimized given all points p_i ∈ Pr, with p_i and q_i represented numerically as a set of 3D spatial (x, y, z) coordinates:

E(R, t) = min_{R,t} Σ_i ‖p_i − (R q_i + t)‖²    (1)

In the case of using the original ICP algorithm (Besl and McKay 1992) to align the entire reference (as-built) 3D model (using all available points) to the deformed one (current condition), inaccurate registration may occur due to the sensitivity of this method to incorrect point pairing when the compared model is severely deformed (Rusinkiewicz and Levoy 2001). In order to avoid erroneous registration, in this study the rigid transformation was only executed at the component level, rather than for the global structural configuration, minimizing the effect of large distortions. Furthermore, it is possible to utilize a combination of both rigid and …
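The preprocessing steps described above — voxel-grid centroid subsampling, statistical outlier removal with the μ ± ασ rule, and the least-squares rigid fit of Eq. (1) — can be sketched in a few NumPy functions. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are ours, the outlier filter uses brute-force distances (adequate only for small clouds), and the closed-form SVD (Kabsch) solution shown assumes point correspondences are already known; ICP alternates this step with nearest-neighbor matching.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Uniform subsampling: replace the points inside each cubical voxel
    by their centroid (the 'key point' described in the text)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(counts), 3))
    for d in range(3):  # per-axis weighted average inside each voxel
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def remove_outliers(points, k=30, alpha=1.0):
    """Statistical denoising (Rusu-style): drop points whose mean k-NN
    distance falls outside mu +/- alpha*sigma of the cloud-wide distribution."""
    diff = points[:, None, :] - points[None, :, :]  # brute force; small clouds only
    dist = np.sqrt((diff ** 2).sum(-1))
    dist.sort(axis=1)
    mean_knn = dist[:, 1:k + 1].mean(axis=1)        # column 0 is the self-distance
    mu, sigma = mean_knn.mean(), mean_knn.std()
    return points[np.abs(mean_knn - mu) <= alpha * sigma]

def best_fit_transform(P, Q):
    """Closed-form least-squares solution of Eq. (1) for KNOWN correspondences:
    returns R, t minimizing sum ||p_i - (R q_i + t)||^2 (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cq).T @ (P - cp)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ cq
    return R, t
```

In a full ICP loop, `best_fit_transform` would be called repeatedly, re-pairing points by nearest neighbor between iterations; the component-level restriction discussed in the text simply limits which points enter `P` and `Q`.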
…correspond to deformed regions or section loss. At this step of the presented method, the primary aim of the change detection is to identify changes in the targeted scene over time in a simple binary change/no change assessment.

To identify the deformed regions, the local point density of the reference and compared models Pr and Pc is first computed. Next, the Pr and Pc models are merged together to form a combined cloud, Pm, that contains points associated with both the reference and compared dense 3D models. By computing the local point density in the merged model, Pm, and later isolating the points with a point density, ρ, less than or equal to the lesser value of the estimated location parameter (e.g., mean or median) of the point density distribution in Pr and Pc, point sets corresponding to the damaged surfaces can be detected. Incorporating the uniform sampling using 3D voxel grids at the preprocessing stage helps to generate a relatively homogeneous density in the models that can improve the damage identification using density analysis.

To obtain an estimate of the local point density ρ at a point p, ρ = (k + 1)/[(4/3)πr³], where r is the radius of the enclosing sphere of the k-nearest neighbors of p, denoted by the index set N_p, given by

r = max_{i∈N_p} ‖p − p_i‖    (2)

where r = distance from the point of interest to the furthest neighbor. Upon estimating the local point density for each point in the merged point cloud Pm, as well as Pr and Pc, an M-estimator with a logistic ψ-function (Rousseeuw and Verboven 2002) was used to represent the point density location parameter in Pr and Pc in order to lessen the impact of noise in the 3D models. Consequently, the result of this step in the process was an automated identification of changes in the structural component that has undergone damage.

Direct Cloud-to-Cloud Damage Quantification

Once damage is detected through the density analysis, damage quantification is performed on a pointwise basis using a cloud-to-cloud (C2C) distance measurement. There are several methods available to calculate the distance between two 3D point clouds (or point sets) (Qin et al. 2016). In the current study, the approach presented by Girardeau-Montaut et al. (2005) was utilized. For each already detected and isolated point from the merged model Pm, the distance (e) to its correspondence in the reference model Pr was calculated. The linear solution for the nearest-neighbor search had a running time of O(n), where n is the number of points. In order to make this process more efficient, the k-d tree algorithm (Friedman et al. 1977) was implemented to find the corresponding points within the 3D models.

The approximate distance between two 3D models can be expressed as the Euclidean distance between the corresponding nearest points in the models. Given a point p ∈ Pm and a reference cloud Pr, it is possible to define the distance e(p, Pr) as […]

The damage in a detected region of Pm is then quantified by offsetting the corresponding points in Pr using each point's calculated displacement vector e. This can be used to create an offset point set for each extracted damaged region.

Finite-Element Model Updating

Volumetric Reconstruction of Damage

After pointwise damage quantification, there are two point sets for each damaged region: a damaged surface (DPS_i) corresponding to the reference surface and an offset point set (O-DPS_i) corresponding to the estimated deficiencies in the current condition. Once DPS and O-DPS are converted into their corresponding surfaces, the volume enclosed by these surfaces corresponds to a volumetric quantification of section loss or surface expansion (depending on the direction of vector e). This is accomplished by generating a meshed surface encompassing the volume created between each O-DPS and DPS. The created surface is then converted into a solid 3D model of the defect.

Fundamentally, surface reconstruction algorithms generate a polygonal mesh from a dense point cloud model in order to recover the original surface on which those points lie. The ideal method should take into account properties of point clouds such as sampling density, noise, outliers, misalignment, and missing data, all of which have an impact on the accuracy of reconstruction algorithms (Berger et al. 2017).

In this study, two alternative methods for surface reconstruction were implemented to further evaluate the impact of the utilized meshing technique on the presented algorithm. First, an interpolatory reconstruction technique using a 2.5-dimensional (2.5D) Delaunay-based triangulation (Dyn et al. 1990) was used. The general idea behind this method is to reconstruct triangulated surfaces that are formed by a subcomplex of Delaunay triangulations. In addition, the screened Poisson surface reconstruction developed by Kazhdan and Hoppe (2013) was also used to generate 3D watertight surfaces of the damaged regions. By using points as interpolation positional constraints, Kazhdan and Hoppe (2013) were able to significantly improve the accuracy (optimize the oversmoothing) and sharpness of their original Poisson surface reconstruction technique (Kazhdan et al. 2006) without amplifying noise and overfitting data. These two methods, Delaunay triangulation and Poisson reconstruction, were chosen for this study because they are well-established but fundamentally different approaches to surface meshing and, as such, provide insight into how the choice of meshing algorithm impacts the results of the overall approach.

The created meshes from both approaches are then converted into a boundary-representation (B-rep) solid model (Várady et al. 1997). B-reps were adopted herein because of their compatibility with commercial FE analysis packages (e.g., ANSYS) (Hinks et al. 2013).
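The C2C measurement described above — a nearest-neighbor search in Pr for every detected point of Pm, accelerated with a k-d tree rather than a linear scan — can be sketched with SciPy's `cKDTree`. This is an illustrative sketch, not the authors' code; the function name is ours, and the returned displacement vectors correspond to the per-point offsets used to build the offset point set described in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_quantify(P_m, P_r):
    """For each detected point in the merged cloud P_m, return the Euclidean
    distance e to its nearest neighbor in the reference cloud P_r, plus the
    per-point displacement vector (the direction/magnitude of the defect).
    The k-d tree replaces the O(n) linear scan per query point."""
    tree = cKDTree(P_r)
    e, idx = tree.query(P_m)      # nearest-neighbor distance and index in P_r
    disp = P_m - P_r[idx]         # displacement vectors for the offset point set
    return e, disp
```

Summing these per-point distances over a segmented defect region (weighted by a local area element) is one plausible route to the volumetric quantification discussed in the next section, though the paper performs that step through surface reconstruction instead.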
Experimental Validation

Damage Quantification Accuracy

For experimental validation of damage quantification accuracy, the methodology was tested using two different types of specimens. The specimens included a built-up section made from medium-density fiberboard (MDF) material (Fig. 2) and an aluminum plate. The 19-mm-thick I-profile specimen represented a structural component with planar surfaces. The aluminum plate specimen, with dimensions of 203.2 × 152.4 × 12.7 mm, was used to represent planar and textureless structural components. Controlled flaws of known dimensions were machined into both specimens using a computer numerical control (CNC) machine.

In order to create dense 3D point cloud data for the targeted specimens before and after being damaged, a multiscale structure-from-motion technique was utilized (Khaloo and Lattanzi 2016) in order to maximize the resolution of the point clouds. A commercially available Nikon D-800E (36.3-megapixel resolution) (Nikon Corporation, Tokyo) digital single-lens reflex (DSLR) camera was used to obtain images with a Nikon AF-S 50-mm lens and a Nikon AF-S 105-mm Micro-NIKKOR lens. All images were taken with a sensitivity (ISO) of 200, an aperture of f/8, and a resolution of 7,360 horizontal and 4,912 vertical pixels.

Fig. 2. I-profile specimen: (a) undamaged 3D point cloud; and (b) damaged 3D point cloud.

…overall model. […] in Autodesk Inventor based on the actual dimensions of the specimen. Each defect was modeled separately following the methodology described in the previous section, and the initial model was updated with one defect at a time.

According to Table 1, the damage identification step of the presented method (i.e., the point cloud density analysis) in tandem with the defect quantification step (i.e., the cloud-to-cloud distance analysis) was able to successfully locate and provide a quantification estimate of all nine defects.

Generally, better damage extent estimations were achieved with an increase in the defect dimensions and volume. This was mainly due to the inaccuracy in the nearest-neighbor search, where designated closest points do not correspond to the same physical part of the component when the damage is relatively small. For the defects with a minimum dimension of less than 5 mm, it was observed that the captured volumes were systematically lower than the measured values. For instance, the captured defect volume for DPS6 (3-mm-deep square groove) was significantly lower than the actual volume.

To further study the correlation between the captured and actual volume of the defects, the point cloud–based measurements are plotted versus the actual measurements in Fig. 4. According to this figure, the correlation between the measurements improves for 100 mm³ and larger defects; however, the point cloud–based measurements were systematically lower than the direct measurements. This underprediction of flaw size was observed in all defects. The reason for this underprediction is that, during density analysis, points on the boundaries of a defect may not have been sufficiently distinguishable from the component itself due to the nature of nearest-neighbor density estimation. This would result in a miscategorization of points and a smaller segmented defect region. This is particularly true for smaller defects, particularly those less than 5 mm in minimum dimension, where such miscategorization would have a disproportionate effect.

Plate Specimen

A total of four defects were machined in the plate specimen, with varying sizes and shapes (Fig. 5). Similar to the I-profile specimen, the damaged regions on the specimen were detected using the density analysis described in the previous sections. As shown in Fig. 5, the C2C distance pattern for the detected circular and transverse grooves in the plate illustrates the robust performance of the presented algorithm in quantifying automatically located damaged regions. Table 2 summarizes the comparison between the quantification of the four detected defects using the presented algorithm with respect to the actual (ground-truth) dimensions of these regions.

With regard to measurement accuracy, several key observations can be made. Good agreement was observed between the point cloud–based and direct measurements when all of the dimensions of the defect were about 3 mm or larger. The results generally improved as the scale of the measured dimensions increased.

The employed cloud-to-cloud distance definition again caused a systematic underestimation of defect size and volume. A distinct distance deviation pattern was observed for DPS4, where the estimated damage volume was less than half of the ground-truth measurement. Furthermore, the inherent inaccuracy in the rigid ICP registration process can be a source of error in the detection and quantification of relatively small damages.

Finite-Element Model Updating Analysis

After evaluating the quantification capabilities of the presented approach, the complete FE model updating process was evaluated through a series of uniaxial tensile tests. Tensile coupons with controlled geometric damage were fabricated, each with a distinctly different damage geometry. Point clouds of the damaged specimens were generated and an associated finite-element model of each specimen was updated. The specimens were tested in uniaxial tension and the strain fields in the specimens were measured through digital image correlation (DIC). These measured strains were then compared with the strains predicted by the updated finite-element models.

Experimental Setup

The tensile coupons were fabricated out of structural steel (Grade 36) with dimensions of 300 × 40 × 3.2 mm, based on ASTM (2016) testing standards. Circular holes and triangular V-notch openings (flaws) were then machined into these coupons. These flaw shapes were selected in order to study the behavior of the updating process on both rounded and sharp-edged features. The diameter of the hole and the length of the V-notch opening were both 15 mm.

The specimens were tested in uniaxial tension on a Tinius Olsen KT50 universal testing machine (UTM) (Tinius Olsen Testing Machine, Horsham, Pennsylvania). A Correlated Solutions DIC system (Correlated Solutions, Inc., Irmo, South Carolina) was used for capturing the experimental strain fields during the tensile tests, with a reported error of less than 1 pixel (Hild and Roux 2006). Loads were increasingly applied to the specimen up to component failure in order to capture both the elastic and postyielding behavior of the specimens.

Solid Model Generation

To build the solid models of the test specimens, images were taken using the same camera discussed in the previous section with a sensitivity (ISO) of 60 and an aperture of f/5. Overall, 108 and 134 images were generated for the specimens with the V-notch and hole, respectively. The corresponding 3D dense point clouds were built in Agisoft PhotoScan software, and consisted of 594,566 and 796,719 points. A point cloud of a specimen representing an undamaged component was synthetically produced in CloudCompare with the same point cloud density as for the other two point clouds. The geometries of the hole and V-notch defects were detected and isolated, and the aforementioned 2.5D and screened Poisson meshing algorithms were used to reconstruct the surfaces of the detected defects. Isolated damaged sections were meshed and imported into Autodesk Inventor to update the solid models. The damaged specimens along with their resulting 3D solid models are shown in Fig. 6. The updated 3D solid models were then imported into the finite-element analysis program ANSYS, which was used to both mesh and analyze all FE models.

FE Simulation

After importing the updated solid models into ANSYS, structural steel (Grade 36) material properties were assigned to the models, with bilinear isotropic hardening to simulate nonlinear behavior. The yield strength and tangent modulus properties of the material were determined from coupon testing.

Imported solid models were meshed with 20-node hexahedral brick elements, with three translational degrees of freedom at each node. This type of element was chosen for its intrinsic ability to handle material plasticity and its resistance to locking issues (Puso and Solberg 2006). The meshes were specified to have a global average side length of 2 mm, with a mesh three times finer around the defects. The element size was then evaluated through a mesh convergence analysis (Macneal and Harder 1985), with convergence at less than 0.1% error. The results from the convergence study are shown in Table 3, as well as the number of elements in the final mesh for both models.

After meshing the geometry, boundary conditions and loads were defined. One side of the model was fixed for support, representing the static grip of the tensile machine, and the other side was subjected to a tensile load. For the linear elastic analysis, the model was subjected to an 8-kN load, within the linear elastic region for the specimen. For the nonlinear analysis, a 16-kN load was applied to capture the postyield behavior of the material.

Two-dimensional strain fields corresponding to the 8- and 16-kN loads were extracted from the DIC system. The equivalent von Mises strains along the width of the specimens at the center of the flaw were computed in order to capture the effect of stress concentrations. The stress concentration factor of the specimen with the hole was also measured and compared with the theoretical value.

Results and Discussion

Figs. 7 and 8 show the comparison of equivalent von Mises strains along the width of the specimens with a hole and a V-notch opening, respectively. The updated FE models generated through the Delaunay (2.5D) and screened Poisson surface reconstruction algorithms are shown here to compare their performance against each other. For comparative purposes, the solid models generated through computer-aided design (CAD) and used to fabricate the machined tensile specimens were also meshed and analyzed in ANSYS. These ground-truth results are reported as well.
Fig. 5. Plate specimen: (a) 3D point cloud model; and (b) located and quantified damages.
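The systematic underestimation reported for the plate specimen in Table 2 can be checked with a line of arithmetic. The volumes below are copied from the table; the relative deviations are derived here as an illustration (they are not tabulated in the paper). Note that the DPS4 deviation of roughly −59% matches the observation that its estimated volume was less than half of the ground truth.

```python
# Actual vs. captured defect volumes (mm^3) for the plate specimen, from Table 2.
actual   = [3739, 6693, 2806, 1120]   # DPS1..DPS4
captured = [3189, 6352, 2284, 457]

# Relative deviation of the point cloud-based estimate from ground truth.
deviation_pct = [100.0 * (c - a) / a for a, c in zip(actual, captured)]
# Every deviation is negative: the C2C-based quantification consistently
# underestimates volume, most severely for the shallow DPS4 groove.
```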
Table 2. Flaw measurements (DPSi ) on plate specimen these increases were less than 4% per refinement and suggest that
mesh refinement does not play a significant role in the behavior of
Depth Area Actual Captured
i Defect (mm) (mm2 ) volume (mm3 ) volume (mm3 ) the updating approach.
The second sensitivity study considered how reductions in point
1 Transverse groove 3.7 1,016 3,739 3,189 cloud density impacted results (Tables 5 and 6). Halving the num-
2 Circular groove 3.3 2,023 6,693 6,352
ber of points in the clouds increased strain prediction errors by 5%
3 Transverse groove 3.3 1,734 2,806 2,284
4 Transverse groove 1.1 1,028 1,120 457 for the hole specimen and 10% for the V-notch specimen relative to
DIC measurements, and increased volume measurement errors by 9
and 10% for the hole and V-notch specimens, respectively, when
compared with the CAD models. A 75% reduction increased strain
hole diameter was approximately 89% of the actual diameter. This errors by an additional 9% for the hole and 15% for the V-notch.
also caused the lower strains predicted by FE model, as discussed. Volume errors increased by another 7% for the hole and 10% for the
No closed-form value of the theoretical stress concentration factor V-notch. These results indicate that maximizing point cloud density
exists for the triangular flaw due to the nature of the machined flaw, is essential for the modeling and assessment of small defects,
so these results are not reported. and should be a critical decision when selecting a 3D imaging
Sensitivity Analysis approach.
In order to better understand the behavior of the model updating
process, two sensitivity analyses were performed. The first was Sources of Error and Methodological Limitations
a mesh refinement and convergence analysis. The mesh refinement
in ANSYS was sequentially increased until the difference in maxi- There are several potential sources of error in this FE model up-
mum strain between subsequent models was less than 0.1%. The dating methodology. Some errors stem from the damage detection
higher levels of mesh refinement did result in higher peak strains and quantification process itself, which tended to underpredict
that more closely matched the experimental results (Table 3), but flaw size and resulted in systematic underprediction of strains
Fig. 6. Tensile specimens and updated solid models: (a) V-notch defect and updated solid model using (b) 2.5D mesh and (c) Poisson mesh; (d) hole
defect and updated solid model using; (e) 2.5D mesh; and (f) Poisson mesh.
Table 3. Convergence analysis for V-notch and hole defect models performance observed for the two surface reconstruction algo-
Number of Maximum strain Change in maximum
rithms suggest that a more in-depth study on the impacts of mesh
Models elements (mm=mm) strain (%) generation is warranted. In particular, there is a need to develop an
approach for selecting the optimal meshing algorithm for a given
V-notch defect 2,864 0.00125 —
14,048 0.00130 3.95 flaw type.
26,123 0.00136 3.92 The most important limitation of this methodology stems from
39,488 0.00136 −8.0 × 10−3 the comparative nature of the approach, which necessitates that the
point clouds be consistent between scans. If occlusions or obstruc-
Hole defect 4,425 0.00136 —
tions are present in one scan, but not the other, the result will be
28,888 0.00142 4.53
41,743 0.00142 −2 × 10−4 false positive or negative determinations of damage. However, the
localized nature of the approach does mitigate this problem to some
extent because a comprehensive point cloud of a component is not
necessary for performing finite-element updating. This approach
in the FE models as a result. Second, the bilinear isotropic hard- does present a potential difficulty in a scenario where both global
ening curve used to model plasticity is an approximation that may and local deformations are present. In those cases, it would be
have resulted in inaccurate strain measurements near the flaws, as necessary to isolate and calibrate extracted point cloud data to ad-
was observed for the V-notch specimen. Finally, the differences in just for quantified global deformations. Such a process has not
Fig. 7. Comparison of equivalent von Mises strains along flaw cross section for hole defect specimen: (a) 8-kN load; and (b) 16-kN load.
Fig. 8. Comparison of equivalent von Mises strains along flaw cross section for V-notch defect specimen: (a) 8-kN load; and (b) 16-kN load.
previously been developed and presents one avenue for future work. The updating approach is also limited to volumetric changes in a component, such as wastage from corrosion, and is not suitable for quantifying deformations. Jafari et al. (2017) and Mukupa et al. (2017) describe recent efforts in this domain. Finally, the approach relies on the existence of a finite-element model with properly modeled boundary conditions. If the original finite-element model is not representative of the in situ conditions, the results of the updated model may not be sufficiently accurate.

Table 4. Stress concentration factor analysis

Metric                 Theoretical   Numerical (FE)   Relative error (%)
Force (N)              8,000         8,000            —
Average stress (MPa)   106.67        91.95            13.8
Maximum stress (MPa)   237.87        213              10.5
K factor               2.23          2.34             3.9

Conclusions and Future Work

In this work, a computational methodology to update the finite-element model of a damaged structural component based on comparative point cloud analysis was introduced. Using computer vision techniques, the extent of damage to a component was quantified and extracted. The point cloud data corresponding to this damage were then used to update an existing solid model of the component, followed by meshing and finite-element analysis. The proposed point cloud–based methodology has a number of advantages over current practices, primarily due to the localized approach to quantification and model updating. These advantages
Table 5. Sensitivity analysis of point cloud resolution for specimen with hole defect

Model             Maximum strain   Strain error (%)   Captured area (mm²)   Captured volume (mm³)   Deviation from CAD (%)
DIC               0.00084          —                  —                     —                       —
100% resolution   0.00082          −2.75              157.00                471.00                  −11.11
50% resolution    0.00078          −7.66              141.87                425.61                  −19.68
25% resolution    0.00070          −16.64             129.01                387.03                  −26.96
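The derived quantities in Tables 4 and 5 are simple ratios, and recomputing them from the tabulated values is a useful sanity check. A sketch follows; note that the tables were generated from unrounded data, so percentages recomputed from the rounded entries differ slightly from the printed ones:

```python
# Table 4: stress concentration factor K = maximum stress / average (net-section) stress
K_theoretical = 237.87 / 106.67   # approx. 2.23, matching the tabulated theoretical K
K_numerical = 213.0 / 91.95       # approx. 2.32 from the rounded stresses (table lists 2.34)
K_error = 100.0 * (K_numerical - K_theoretical) / K_theoretical  # approx. 3.9%, as tabulated

# Table 5: strain error of each downsampled model relative to the DIC baseline
def strain_error(model_strain, dic_strain):
    return 100.0 * (model_strain - dic_strain) / dic_strain

full_res_error = strain_error(0.00082, 0.00084)  # approx. -2.4% from rounded values (table: -2.75%)
```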
include direct use of the original point cloud data, quantitative measurements for the extent of damage, and the capability to further update the model based on future inspection data and for future referencing. Overall, apart from the bias caused by strain localization at the V-notch, the FE models and DIC results showed good agreement and indicate that the FE process offers the potential for quantitative and accurate structural assessments from remotely sensed 3D point cloud data, with applications across a broad range of engineering disciplines.

This study is part of an ongoing research program, and several aspects of the presented methodology are being considered for further improvement. More advanced and robust cloud-to-cloud comparison techniques, such as the multiscale model-to-model cloud comparison (M3C2) (Lague et al. 2013), are being evaluated, as is the potential to use nonuniform rational basis splines (NURBS) to create a solid model rather than mesh-based watertight surface reconstruction techniques. There is also a need to assess the computational cost of the presented approach relative to other methods, and to evaluate how to improve the computational efficiency of the individual subroutines, for instance, the computationally expensive k-d tree algorithm used for point correspondence matching. In particular, methods to improve detection at defect boundaries, in order to reduce the error from underprediction of flaw size, are also being studied. Applications of the approach to fatigue life estimation and dynamic analysis are being considered, as is the potential for full-scale testing on in-service structures. One notable avenue for future work is to adapt the approach for more complex and large-scale structural changes. As presented, the model updating algorithm only accounts for volumetric changes, without the presence of accompanying deformations stemming from phenomena such as bending or torsion. There is the potential to develop a complementary process that quantifies such deformations and then uses them to parametrically warp point clouds to a common reference frame, facilitating the process delineated in this work.
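The point correspondence step mentioned above pairs each point in one scan with its nearest neighbor in the other; the k-d tree (Friedman et al. 1977) exists only to make those queries fast. A minimal brute-force sketch of the underlying cloud-to-cloud distance computation follows (the grid, the 1-mm noise threshold, and all names are illustrative, not from the paper; a production version would use a k-d tree library such as scipy.spatial.cKDTree instead of the O(m·n) broadcast):

```python
import numpy as np

def cloud_to_cloud_distances(reference, target):
    """Distance from each point in `target` to its nearest neighbor in `reference`.

    Brute force for clarity; a k-d tree returns the same distances in
    O(log n) expected time per query instead of O(n).
    """
    diff = target[:, None, :] - reference[None, :, :]    # (m, n, 3) pairwise offsets
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)  # (m,) nearest-neighbor distances

# Toy example: a flat 5 x 5 reference patch (1-mm spacing), and a repeat scan
# in which one point has receded by 4 mm (simulated section loss).
reference = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
target = reference.copy()
target[12, 2] = 4.0                                      # units: mm
d = cloud_to_cloud_distances(reference, target)
damaged = np.flatnonzero(d > 1.0)                        # flag points beyond a 1-mm noise floor
```

Thresholding the distances, as in the last line, is what turns a raw comparison into a damage determination; this is also where occlusions in one scan but not the other would produce the false positives discussed above.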
Acknowledgments

The authors would like to thank the National Science Foundation (NSF) (Grant No. CMMI-1433765), as well as the Thomas F. and Kate Miller Jeffress Memorial Trust, for their support of this study.

References

Ali, S., Z. Toony, and D. Laurendeau. 2015. "A 3D vision-based inspection method for pairwise comparison of locally deformable 3D models." Mach. Vision Appl. 26 (7–8): 1061–1078. https://doi.org/10.1007/s00138-015-0711-0.
ASTM. 2016. Standard test methods for tension testing of metallic materials. ASTM E8/E8M-16a. West Conshohocken, PA: ASTM.
Barazzetti, L., F. Banfi, R. Brumana, G. Gusmeroli, M. Previtali, and G. Schiantarelli. 2015. "Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans." Simul. Model. Pract. Theory 57: 71–87.
04015053. https://doi.org/10.1061/(ASCE)CF.1943-5509.0000807.
Dai, K., D. Boyajian, W. Liu, S.-E. Chen, J. Scott, and M. Schmieder. 2014. "Laser-based field measurement for a bridge finite-element model validation." J. Perform. Constr. Facil. 28 (5): 04014024. https://doi.org/10.1061/(ASCE)CF.1943-5509.0000484.
Dyn, N., D. Levin, and S. Rippa. 1990. "Data dependent triangulations for piecewise linear interpolation." IMA J. Numer. Anal. 10 (1): 137–154. https://doi.org/10.1093/imanum/10.1.137.
Erkal, B. G., and J. F. Hajjar. 2017. "Laser-based surface damage detection and quantification using predicted surface properties." Autom. Constr. 83: 285–302. https://doi.org/10.1016/j.autcon.2017.08.004.
Fathi, H., F. Dai, and M. Lourakis. 2015. "Automated as-built 3D reconstruction of civil infrastructure using computer vision: Achievements, opportunities, and challenges." Adv. Eng. Inf. 29 (2): 149–161. https://doi.org/10.1016/j.aei.2015.01.012.
Fernandez, I., J. M. Bairán, and A. R. Marí. 2016. "3D FEM model development from 3D optical measurement technique applied to corroded steel bars." Constr. Build. Mater. 124: 519–532. https://doi.org/10.1016/j.conbuildmat.2016.07.133.
Friedman, J. H., J. L. Bentley, and R. A. Finkel. 1977. "An algorithm for finding best matches in logarithmic expected time." ACM Trans. Math. Software 3 (3): 209–226. https://doi.org/10.1145/355744.355745.
Girardeau-Montaut, D., M. Roux, R. Marc, and G. Thibault. 2005. "Change detection on points cloud data acquired with a ground laser scanner." In Proc., Int. Archives of Photogrammetry, Remote Sensing, and Spatial Information Series, 30–35. Enschede, Netherlands: ISPRS.
Gong, J., and A. Maher. 2014. "Use of mobile lidar data to assess hurricane damage and visualize community vulnerability." Transp. Res. Rec. 2459: 119–126. https://doi.org/10.3141/2459-14.
Hartley, R., and A. Zisserman. 2004. Multiple view geometry in computer vision. New York: Cambridge University Press.
Hild, F., and S. Roux. 2006. "Digital image correlation: From displacement measurement to identification of elastic properties: A review." Strain 42 (2): 69–80. https://doi.org/10.1111/j.1475-1305.2006.00258.x.
Hinks, T., H. Carr, L. Truong-Hong, and D. F. Laefer. 2013. "Point cloud data conversion into solid models via point-based voxelization." J. Surv. Eng. 139 (2): 72–83. https://doi.org/10.1061/(ASCE)SU.1943-5428.0000097.
Hirschmüller, H., M. Buder, and I. Ernst. 2012. "Memory efficient semi-global matching." ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. I-3: 371–376. https://doi.org/10.5194/isprsannals-I-3-371-2012.
Hoffmann, C. M. 1989. Geometric and solid modeling: An introduction. Burlington, MA: Morgan Kaufmann.
Huttenlocher, D. P., G. A. Klanderman, and W. J. Rucklidge. 1993. "Comparing images using the Hausdorff distance." IEEE Trans. Pattern Anal. Mach. Intell. 15 (9): 850–863. https://doi.org/10.1109/34.232073.
Jafari, B., A. Khaloo, and D. Lattanzi. 2017. "Deformation tracking in 3D point clouds via statistical sampling of direct cloud-to-cloud distances." J. Nondestr. Eval. 36 (4): 65. https://doi.org/10.1007/s10921-017-0444-2.
Jahanshahi, M. R., J. S. Kelly, S. F. Masri, and G. S. Sukhatme. 2009. "A survey and evaluation of promising approaches for automatic image-based defect detection of bridge structures." Struct. Infrastruct. Eng. 5 (6): 455–486. https://doi.org/10.1080/15732470801945930.
Khaloo, A., D. Lattanzi, K. Cunningham, R. Dell'Andrea, and M. Riley. 2018. "Unmanned aerial vehicle inspection of the Placer River Trail Bridge through image-based 3D modelling." Struct. Infrastruct. Eng. 14 (1): 124–136. https://doi.org/10.1080/15732479.2017.1330891.
Koch, C., K. Georgieva, V. Kasireddy, B. Akinci, and P. Fieguth. 2015. "A review on computer vision based defect detection and condition assessment of concrete and asphalt civil infrastructure." Adv. Eng. Inf. 29 (2): 196–210. https://doi.org/10.1016/j.aei.2015.01.008.
Laefer, D., D. Hinks, H. Carr, and L. Truong-Hong. 2011. "New advances in automated urban modelling from airborne laser scanning data." Recent Pat. Eng. 5 (3): 196–208.
Laefer, D. F., L. Truong-Hong, H. Carr, and M. Singh. 2014. "Crack detection limits in unit based masonry with terrestrial laser scanning." NDT&E Int. 62: 66–76. https://doi.org/10.1016/j.ndteint.2013.11.001.
Lague, D., N. Brodu, and J. Leroux. 2013. "Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z)." ISPRS J. Photogramm. Remote Sens. 82: 10–26. https://doi.org/10.1016/j.isprsjprs.2013.04.009.
Lubowiecka, I., J. Armesto, P. Arias, and H. Lorenzo. 2009. "Historic bridge modelling using laser scanning, ground penetrating radar and finite element methods in the context of structural dynamics." Eng. Struct. 31 (11): 2667–2676. https://doi.org/10.1016/j.engstruct.2009.06.018.
Ma, L., R. Sacks, R. Zeibak-Shini, A. Aryal, and S. Filin. 2016. "Preparation of synthetic as-damaged models for post-earthquake BIM reconstruction research." J. Comput. Civ. Eng. 30 (3): 04015032. https://doi.org/10.1061/(ASCE)CP.1943-5487.0000500.
Macneal, R. H., and R. L. Harder. 1985. "A proposed standard set of problems to test finite element accuracy." Finite Elem. Anal. Des. 1 (1): 3–20. https://doi.org/10.1016/0168-874X(85)90003-4.
Mukupa, W., G. W. Roberts, C. M. Hancock, and K. Al-Manasir. 2017. "A review of the use of terrestrial laser scanning application for change detection and deformation monitoring of structures." Surv. Rev. 49 (353): 99–116. https://doi.org/10.1080/00396265.2015.1133039.
Olsen, M. J., F. Kuester, B. J. Chang, and T. C. Hutchinson. 2009. "Terrestrial laser scanning-based structural damage assessment."
.1016/S0167-9473(02)00078-6.
Rusinkiewicz, S., and M. Levoy. 2001. "Efficient variants of the ICP algorithm." In Proc., 3rd Int. Conf. on 3-D Digital Imaging and Modeling, 145–152. New York: IEEE.
Rusu, R. B., Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz. 2008. "Towards 3D point cloud based object maps for household environments." Rob. Auton. Syst. 56 (11): 927–941. https://doi.org/10.1016/j.robot.2008.08.005.
Sánchez-Aparicio, L. J., B. Riveiro, D. Gonzalez-Aguilera, and L. F. Ramos. 2014. "The combination of geomatic approaches and operational modal analysis to improve calibration of finite element models: A case of study in Saint Torcato Church (Guimarães, Portugal)." Constr. Build. Mater. 70: 118–129. https://doi.org/10.1016/j.conbuildmat.2014.07.106.
Stavroulaki, M. E., B. Riveiro, G. A. Drosopoulos, M. Solla, P. Koutsianitis, and G. E. Stavroulakis. 2016. "Modelling and strength evaluation of masonry bridges using terrestrial photogrammetry and finite elements." Adv. Eng. Software 101: 136–148. https://doi.org/10.1016/j.advengsoft.2015.12.007.
Truong-Hong, L., D. F. Laefer, T. Hinks, and H. Carr. 2013. "Combining an angle criterion with voxelization and the flying voxel method in reconstructing building models from LiDAR data." Comput. Aided Civ. Infrastruct. Eng. 28 (2): 112–129. https://doi.org/10.1111/j.1467-8667.2012.00761.x.
Várady, T., R. R. Martin, and J. Cox. 1997. "Reverse engineering of geometric models: An introduction." Comput. Aided Des. 29 (4): 255–268. https://doi.org/10.1016/S0010-4485(96)00054-1.
Zeibak-Shini, R., R. Sacks, L. Ma, and S. Filin. 2016. "Towards generation of as-damaged BIM models using laser-scanning and as-built BIM: First estimate of as-damaged locations of reinforced concrete frame members in masonry infill structures." Adv. Eng. Inf. 30 (3): 312–326. https://doi.org/10.1016/j.aei.2016.04.001.
Zhu, Z., S. German, and I. Brilakis. 2010. "Detection of large-scale concrete columns for automated bridge inspection." Autom. Constr. 19 (8): 1047–1055. https://doi.org/10.1016/j.autcon.2010.07.016.