

Engineering Structures 156 (2018) 105–117

Contents lists available at ScienceDirect

Engineering Structures
journal homepage: www.elsevier.com/locate/engstruct

Review article

Computer vision for SHM of civil infrastructure: From dynamic response
measurement to damage detection – A review

Dongming Feng, Maria Q. Feng
Department of Civil Engineering and Engineering Mechanics, Columbia University, New York, USA

A R T I C L E  I N F O

Keywords: Computer vision; Displacement measurement; Structural health monitoring; Structural dynamics; Damage detection; Natural frequency; Mode shape; Model updating

A B S T R A C T

To address the limitations of current sensor systems for field applications, the research community has been actively exploring new technologies that can advance the state-of-the-practice in structural health monitoring (SHM). Thanks to the rapid advances in computer vision, the camera-based noncontact vision sensor has emerged as a promising alternative to conventional contact sensors for structural dynamic response measurement and health monitoring. Significant advantages of the vision sensor include its low cost, ease of setup and operation, and flexibility to extract displacements of any points on the structure from a single video measurement. This review paper is intended to summarize the collective experience that the research community has gained from the recent development and validation of vision-based sensors for structural dynamic response measurement and SHM. General principles of the vision sensor systems are first presented by reviewing different template matching techniques for tracking targets, coordinate conversion methods for determining calibration factors to convert image pixel displacements to physical displacements, measurements by tracking artificial targets vs. natural targets, measurements in real time vs. by post-processing, etc. Then the paper reviews laboratory and field experimentation carried out to evaluate the performance of vision sensors, followed by a discussion on measurement error sources and mitigation methods. Finally, applications of the measured displacement data for SHM are reviewed, including examples of structural modal property identification, structural model updating, damage detection, and cable force estimation.

1. Introduction

Structures and infrastructure systems including bridges, buildings, dams, and pipelines are complex engineering systems that support a society's economic prosperity and quality of life. As these systems age and deteriorate, their proper inspection, monitoring and maintenance have become increasingly important. The conventional practice based on periodic human visual inspection is inadequate. Nondestructive evaluation (NDE) has shown potential for detecting hidden damage, but the structures' large size presents a significant challenge to implementing such local inspection methods. Over the past two decades, a significant number of studies have been conducted in the emerging field of structural health monitoring, aiming at objective and quantitative structural damage detection and integrity assessment based on measurements by sensors, mostly accelerometers [1–11]. For example, Carden and Fanning [7] presented an extensive literature review of damage detection techniques based on changes in the frequency-domain modal properties, such as natural frequencies, mode shapes and their curvatures, modal flexibility and its derivatives, modal strain energy, and frequency response function. Although these studies have produced SHM methods, frameworks and algorithms validated through numerical, laboratory and field experimental studies, their wide deployment on realistic engineering structures is limited by the requirement of cumbersome and expensive installation and maintenance of sensor networks and data acquisition (DAQ) systems.

To address these limitations, the research community has been actively exploring new technologies that can advance the current state-of-the-practice in SHM, such as wireless sensors [12–14], fiber optic sensors [15–17], and the interferometric radar system [18]. In recent years, camera and computer vision-based sensors have emerged as a promising tool for non-contact remote measurement of structural responses, in which displacements are extracted by tracking the movement of targets from video images. Compared to the structural acceleration response (on which most SHM studies are based), the displacement response directly reflects the structural overall stiffness, and thus offers a potential for more accurate assessment of structural conditions [2]. Conventional contact-type displacement sensors such as the linear variable differential transducer (LVDT) require a stationary


Corresponding author.
E-mail address: df2465@columbia.edu (D. Feng).

https://doi.org/10.1016/j.engstruct.2017.11.018
Received 6 May 2017; Received in revised form 4 October 2017; Accepted 8 November 2017
0141-0296/ © 2017 Elsevier Ltd. All rights reserved.

reference point, which is often difficult to find in the field. Thanks to the advances in cameras and computer vision algorithms, noncontact vision sensor technologies for displacement measurement offer significant advantages over the contact-type and other noncontact-type (e.g., GPS, laser vibrometer) displacement sensors, as summarized below [19–49]:

(1) In contrast to the conventional contact-type sensor (such as an LVDT), which requires time-consuming and costly installation of the sensor on the structure with physical connections to not only a stationary reference point but also DAQ and power supply, the vision sensor does not require physical access to the structure, as the camera can be set up at a remote location. This represents a significant time and cost saving. For bridge monitoring, for example, no traffic control is required.
(2) Compared with the GPS, which still requires installation on the structure (but not a stationary reference point), the vision sensor is far more accurate and less expensive. Depending on the cost, the GPS measurement error is typically in the range of 5 mm–10 mm, more than an order of magnitude larger than that of the vision sensor.
(3) Compared with the non-contact laser vibrometer, which needs to be placed relatively close to the measurement target due to the low laser power for safety concerns, the vision sensor can be placed hundreds of meters away (when using a zoom lens) and still achieve satisfactory measurement accuracy.
(4) In contrast to these conventional sensors, all of which are point-wise sensors, the vision sensor can be termed a noncontact distributed sensing technique, as it can simultaneously track multiple points from a long distance. More importantly, one can easily alter the measurement points after the video images are taken.

The research community has applied vision sensor systems to a diverse set of structures to measure their displacements in either controlled laboratory or complex and challenging field environments. In SHM applications, structural natural frequencies and mode shapes can be conveniently obtained from displacement measurements using one or more cameras. The adoption of vision sensors can significantly reduce the test cost and time associated with conventional instrumentation. For example, Poozesh et al. [50] pointed out that testing a typical 50 m utility-scale wind turbine blade requires approximately 200 gages (costing $35k–$50k) and about 3 weeks to set up a conventional strain gauge system. By contrast, a multi-camera noncontact measurement system can significantly reduce the test time and cost. Data analytics for FE model updating, structural damage detection and integrity evaluation can be carried out utilizing the measured displacement time histories and the corresponding operational modal analysis results.

In the field of experimental mechanics, such as material mechanical testing and structural stress analysis, the digital image correlation (DIC) technique has been commonly used as a practical and effective tool. It can directly provide full-field displacements to sub-pixel accuracy and full-field strains by comparing the digital images of a test object surface acquired before and after deformation. Experimental mechanics applications usually involve specific specimens, and the measurements are made in well-controlled environments. To achieve reliable and accurate DIC analysis, artificial speckle or texture patterns are often applied on the specimen surface [26]. Pan et al. [51] systematically reviewed and discussed the methodologies of the 2D DIC technique for displacement field measurement and strain field estimation, and provided detailed analyses of the measurement accuracy considering the influences of both experimental conditions and algorithm details. Based on the measured strain fields, various material mechanical parameters including Young's modulus, Poisson's ratio, stress intensity factor, residual stress and thermal expansion coefficient can be further identified [51,52]. It is noted that the emphasis of this paper is placed on vision-based sensor systems for structural dynamic displacement measurement and SHM applications. DIC-based applications for experimental mechanics are not included in this paper.

In summary, this paper aims to provide a review of the collective experience that the research community has gained from the development and application of vision-based displacement sensors, with emphasis on structural dynamics and health monitoring applications. The paper is organized as follows. In Section 2, general principles of vision-based sensor systems are presented by reviewing various template matching techniques, coordinate conversion methods, measurement by tracking artificial targets vs. natural targets, etc. In Section 3, validations of the measurement capacity and accuracy of vision-based sensors are reviewed by providing a description of the current state of experimentation in both laboratory and field environments, followed by a detailed discussion on measurement error sources and error mitigation methods. In Section 4, current studies on using the measured displacement data for SHM are reviewed in detail, including examples of modal analysis, model updating, damage detection, and cable force estimation. Finally, Section 5 concludes the paper with a summary and outlook of future directions of vision-based sensors for SHM.

2. Basics and principles of vision-based sensor system

2.1. System basics: hardware and software

The vision-based displacement sensor system typically consists of a video camera (or cameras), a zoom lens (or lenses), and a computer. It may also require lighting lamps for conducting measurements at night [27]. Table 1 shows typical hardware components of a vision sensor system.

Table 1
Typical hardware components of vision sensor system.

Video camera
- Point Grey FL3-U3-13Y3M-C [53]: maximum resolution 1280 × 1024; frame rate 150 fps; monochrome; CMOS sensor; pixel size 4.8 μm; C-mount; USB 3.0 interface
- Nile IMX-5040FT [31]: image sensor 1/3″ CCD ICX424 AL/AQ; active picture elements 640 × 480; maximum frame rate 86 fps; video output digital 12-bit Camera Link; progressive scan
- Genie HM1400 [27]: maximum resolution 1400 × 1024; frame rate at maximum resolution 64 fps; pixel size 7.4 μm; CMOS sensor; monochrome; C-mount + F-mount adaptor

Optical lens
- Kowa LMVZ990 IR [53]: focal length 9–90 mm; maximum aperture F1.8; C-mount
- Samsung Techwin SLA-12240 [31]: focal length 12–240 mm; C-mount
- Nikon 80-400 VR ED [27]: zoom 80–400 mm; aperture F4.5–5.6; manual zoom and luminosity control

Pan-tilt drive and housing
- YUSIN EPT-6000s [31]: rotation angle pan 0°–350°, tilt −90° to +20° ± 5°; receiver functions: AUX1–AUX5 (light, wiper, pump, heater), camera control (zoom, focus), pan-tilt/camera preset; communication method RS-485/RS-422

Tripod and accessories
- Laptop computer, tripod, USB 3.0 type-A to micro-B cable, etc.

The camera equipped with the lens is fixed on a tripod and placed at a remote location away from the structure. The camera is connected to the computer, which is installed with an image acquisition and analytics software package. If the software has real-time processing capability, the measured displacement time histories can be displayed on the computer screen in real time and automatically saved to the computer. Otherwise, the images can be saved for post-processing. Oftentimes, it is required to take measurements from a remote distance. To guarantee the measurement resolution, an optical lens with a proper focal length should be selected to zoom in the image and obtain an enlarged view of the tracking target/targets.

In literature [54], an easy-to-use user interface, as shown in Fig. 1, is built into a real-time image processing and displacement extraction software package for easy operation by non-technical staff. It summarized the procedure of vision-based displacement measurement, which typically includes:

(1) Vision sensor setup. Fix the camera equipped with the lens on a tripod and place it at a remote convenient location away from the structure. The camera is connected to the notebook computer installed with the image-processing software.
(2) Single- or multiple-target/template registration. Any natural or artificial texture (summarized in detail in Section 2.7) on the structural surface can be registered as a tracking target, as long as its pattern has a contrast to the surrounding background. For each measurement point, a subset with a proper size should be chosen, which should contain sufficient local texture to allow accurate pattern matching [49].
(3) Template matching for displacement extraction. The template matching algorithm (summarized in detail in Section 2.2), mostly together with subpixel techniques, is employed to track the targets registered in the previous step. The motion of the target is tracked by finding its position in a sequence of video images. It would be highly time-consuming if the target were searched within the whole image of each video frame. To reduce computational time, the searching area can be confined to a predefined region of interest (ROI) near the template's location in the previous image. It is noted that the new ROI of a target must be large enough to cover its potential position in the next frame. Otherwise, mismatching will be introduced [41,54].
(4) Coordinate conversion. In order to obtain physical displacements of the target object from the captured video images, the relationship between the pixel coordinate and the physical coordinate must be established. The scaling factor (e.g., with units of mm/pixel) can be obtained in two ways, as discussed in Section 2.5.

2.2. Motion/displacement tracking using template matching techniques

Computer vision-based displacement sensors are primarily enabled by the template matching technique, one of the most effective image processing techniques for tracking objects. Template matching is a computationally intensive process that aims at locating a template within an image [55]. As illustrated in Fig. 2, the technique involves two primary components: (1) the template image T; and (2) the source image I in which a match to the template image is expected to be found. General classifications of template matching approaches are: area- or template-based matching approaches and feature-based matching approaches [56–58].

2.2.1. Area-based template matching using cross correlation or sum of squared differences

Area-based template matching methods, sometimes called correlation-like methods, put emphasis on the matching step rather than on the detection of salient objects [59]. Classical area-based methods directly match image intensities using an exhaustive search strategy. To identify the matching area, the template image is compared against the source image by moving the template one pixel at a time (left to right, top to bottom). At each location, a metric is calculated to represent the similarity between the template image and the particular area of the source image. For each location (x, y) of T over I, the match metric is stored. The position of the template in the source image is determined by searching for the peak position of the distribution of the match metric. The differences of the positions of the template in video images yield the in-plane displacement vector, as illustrated in Fig. 2. The existing correlation criteria or similarity metrics/measures employed for vision sensors are often categorized into two groups [60]: namely, (1) the cross-correlation (CC) criterion, which includes CC, normalized cross-correlation (NCC) and zero-normalized cross-correlation (ZNCC); and (2) the sum of squared differences (SSD) correlation criterion, which includes SSD, normalized sum of squared differences (NSSD) and zero-normalized sum of squared differences (ZNSSD). Studies reveal that the ZNCC and ZNSSD correlation criteria offer the most robust noise-proof performance and are insensitive to both the offset and linear scale in illumination lighting; the NCC and NSSD correlation criteria are insensitive to the linear scale in illumination lighting but sensitive to the offset of the lighting; and the CC and SSD correlation criteria are sensitive to all lighting fluctuations [51]. All these methods have been successfully applied to structural displacement/deflection measurement [31,49,61–63]. For example, vision-based displacement measurement systems were developed by Ye et al. [62] based on the NCC criterion, by Dworakowski et al. [63] based on the ZNCC criterion, and by Pan et al. [49] using the ZNSSD criterion. In addition, Feng et al. [47] developed a vision sensor based on the upsampled cross correlation (UCC), which is essentially the CC computed by means of the Fourier transform.
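As a concrete illustration of the exhaustive-search ZNCC matching described in Section 2.2.1, the criterion and the integer-pixel search can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from any of the cited systems; the frame size, target location, and shift are arbitrary synthetic assumptions.

```python
import numpy as np

def zncc(template, patch):
    # Zero-normalized cross-correlation: subtracting the means and
    # normalizing by the norms makes the score insensitive to offset
    # and linear scale in illumination; the score lies in [-1, 1].
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    # Exhaustive search: slide the template one pixel at a time
    # (left to right, top to bottom) and keep the peak of the metric.
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = zncc(template, image[r:r + th, c:c + tw])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic check: register a target in frame 0, rigidly shift the
# scene, and recover the in-plane pixel displacement vector.
rng = np.random.default_rng(0)
frame0 = rng.random((40, 40))
template = frame0[10:18, 12:20].copy()               # target at (10, 12)
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))  # move down 3, right 5
row, col = match_template(frame1, template)
displacement = (row - 10, col - 12)
print(displacement)  # (3, 5): integer-pixel displacement
```

In a real system the search would be confined to an ROI around the previous location, a subpixel refinement (Section 2.3) would be applied around the integer peak, and the pixel displacement would be converted to physical units with the scaling factor of Section 2.5.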


Fig. 1. User interface of a vision sensor software package [54].

2.2.2. Feature-based template matching

Feature-based matching exhibits both geometric (i.e., translation, rotation and scale) invariance and photometric (i.e., brightness and exposure) invariance. Its main steps often include: (1) detecting a set of distinctive key-points and defining a region around each key-point, (2) computing and extracting local descriptors from the normalized regions, and (3) matching local descriptors. Thus, feature detection, description and matching are three essential components in feature-based matching applications [64].

Fig. 2. Development schematic of vision-based sensors [47].

Feature detection selects points of interest in an image that have unique features, such as corners (sharp image features) or blobs (smooth image features). The key to feature detection is to find features that remain locally invariant so that they can be detected even in the presence of illumination and scale changes, rotation, and occlusion. For example, the Harris, features-from-accelerated-segment-test (FAST), and Shi & Tomasi methods can be adopted for detecting corner features [65], and the speeded-up robust features (SURF) [66], KAZE [67], and maximally stable extremal regions (MSER) [68] methods for detecting blob features. On the other hand, feature extraction involves computing a compact vector representation of a local region centered around each detected feature. Descriptors such as the scale-invariant feature transform (SIFT) or SURF rely on local gradient computations. Binary descriptors, such as the binary robust invariant scalable keypoint (BRISK) [69] or fast retina keypoint (FREAK) [70], rely on pairs of local intensity differences, which are then encoded into a binary vector.

Once the local features and their descriptors have been extracted, the matching of feature points between two images can be performed by minimizing the Euclidean distance between the descriptors using nearest-neighbor matching algorithms. Two algorithms have been found most efficient for matching high-dimensional features: the randomized k-d forest and the fast library for approximate nearest neighbors (FLANN). It is noted that these algorithms are not suitable for binary features (e.g., FREAK or BRISK), which can instead be compared using the Hamming distance, calculated by performing a bitwise XOR operation followed by a bit count on the result [71]. In computer vision applications, in order to remove false matching points, statistically robust methods such as the RANdom SAmple Consensus (RANSAC) can be used to filter outliers in matched feature sets while estimating the geometric transformation or fundamental matrix [72]. In Ref. [57], Soh et al. compared several well-known techniques for feature selection and matching, i.e., the Kanade-Lucas-Tomasi (KLT) method, SURF with FLANN, SURF with brute-force matching, and SIFT with RANSAC.

Due to advantages such as geometric and photometric invariance, the aforementioned and other available feature detection and matching techniques have been extensively adopted for vision-based displacement sensor development [27,30,32,33,39,46,53,73–81].

2.3. Pixel level vs. subpixel level

In practical applications, one major concern for the vision sensor is its measurement accuracy. The abovementioned template matching techniques alone usually measure displacements with integer-pixel resolution, since the minimal unit in a video image is one pixel. Although in many applications pixel-level accuracy is adequate, a higher resolution is often required for measuring small structural vibrations such as the ambient vibration of a short-span concrete bridge. In particular, pixel-level template matching may result in unacceptable measurement errors if the displacement to be measured is of the same order of magnitude as the scaling factor [47]. To improve the measurement accuracy, incorporating subpixel registration into the template matching algorithm is regarded as the best practice. The interpolation technique is the most commonly used subpixel approach, examples of which include intensity interpolation, correlation coefficient curve-fitting or interpolation, phase correlation interpolation and geometric methods [82–84].

Subpixel registration can also be formulated as an optimization problem and solved through heuristic algorithms such as genetic algorithms, artificial neural network algorithms, and particle swarm optimization [85,86]. There are also other subpixel techniques that are based on the Newton-Raphson method [87] and gradient-based methods [88]. Studies have been conducted to investigate their performance for vision-based displacement measurement. For example, to further improve the accuracy of DIC, Pan et al. [51] reviewed various subpixel registration algorithms, including the coarse-fine search algorithm, peak-finding algorithm, iterative spatial domain cross-correlation algorithm, spatial-gradient-based algorithm, genetic algorithm, finite element method and B-spline algorithm. Debella-Gilo and Kaab [89] evaluated two different approaches, namely intensity interpolation and correlation interpolation, to achieve sub-pixel precision when measuring surface displacements of mass movements using NCC. Through a shaking table test, Feng et al. [90] demonstrated the improvement of measurement accuracy by applying an upsampling subpixel technique.

It is noted that although in theory subpixel resolution can achieve maximum accuracy, in practice the resolution is limited, as images may be contaminated with various environmental noises and system noises arising from the electronics of the imaging digitizer [47]. The subpixel accuracies reported in many studies vary within orders of magnitude from 0.5 to 0.01 pixel [82]. Mas et al. [91] demonstrated, through numerical analysis, a realistic limit for subpixel accuracy, and found that the maximum achievable resolution enhancement is related to the dynamic range of the image.

2.4. Single- vs. multi-point measurement

The vision sensor can conduct single-point displacement measurement at its best resolution. By zooming out the lens, multi-point or full-field displacements in a large field of view (FOV), i.e., the area that is visible in the image, can be measured simultaneously by one camera. However, a tradeoff between the measurement resolution and the number of measurement points or FOV is necessary. This is because a decreased measurement resolution would be expected when measuring multiple points in a larger FOV for large-scale structures, even though the sensor accuracy can be significantly improved through sub-pixel registration techniques. In this case, one possible solution is using multiple synchronized cameras, with each camera targeting different sections of a large-scale structure. For example, aiming at measuring multi-point displacements along structures, Fukuda et al. [92] developed a time-synchronous measurement system using multiple computer and camera subsystems. The system embeds an algorithm that automatically and periodically performs synchronization through TCP/IP communication to maintain the time lag between the internal clocks of multiple computers within a range of < 5 ms. Lee et al. [93] introduced a synchronized multi-point vision-based system for real-time displacement measurement of high-rise buildings using a partitioning approach, the accuracy and feasibility of which were verified on a five-story steel frame tower. Ojio et al. [94] synchronized two cameras by triggering an interval timer through a relay module driven by solid-state relays. A pair of high-luminance LEDs were activated by the timer within sight of each camera and used as a synchronizing timing marker.

2.5. Coordinate conversion and scaling factor

In order to measure structural displacements from the captured video images, the relationship between image coordinates in units of pixels and physical (or world) coordinates in units of millimeters or inches must be established to obtain the scaling factor for converting the image pixel to the physical length (e.g., with units of mm/pixel). The scaling factor can be determined (1) based on the intrinsic parameters of the camera as well as the extrinsic parameters between the camera and the object structure, which can be obtained through camera calibration; or (2) from a known physical dimension on the object surface and its corresponding image dimension in pixels.

2.5.1. Camera calibration method

Camera calibration is the process of estimating the parameters of the camera for obtaining the scaling factor using images of a special calibration pattern. The parameters include the camera intrinsics, distortion coefficients, and camera extrinsics [63]. Fig. 3 shows the schematic of stereo calibration to estimate the parameters of a pair of cameras, as well as the relative positions and orientations of the cameras. After obtaining the camera parameters, the world coordinates of any image point can be reconstructed from its image coordinates.

Fig. 3. Schematic of stereo calibration.

The most frequently adopted pinhole model for camera calibration relates the 3-dimensional (3D) world coordinate (X, Y, Z) of a calibration target point P to its corresponding location (u, v) in the image plane using a perspective transformation. The projection equation can be expressed as:

sm′ = A[R|T]M′ (1)

or

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad (2)

where s is a scale factor, f_x and f_y are the horizontal and vertical focal lengths expressed in pixel units, (c_x, c_y) is the principal point, which is usually at the image center, and γ is a skew factor. The extrinsic parameters, R and T, represent a rigid rotation and translation transformation from the 3D world coordinate system to the 3D camera coordinate system. The intrinsic parameter matrix A denotes a projective transformation from the 3D camera coordinates into the 2D image coordinates. The intrinsic parameters do not depend on the scene viewed. Therefore, once estimated, they can be reused as long as the focal length is fixed (if a zoom lens is used).

Then, given the corresponding locations of the calibration control points in the 3D world coordinates and in the 2D image coordinates, the unknown camera parameters can be estimated based on a nonlinear optimization, e.g., with the Levenberg-Marquardt algorithm. Wang et al. [26] summarized the total number of unknowns in both single- and stereo-camera calibrations. It is noted that the camera calibration can be conveniently conducted using the well-known OpenCV and MATLAB camera calibration packages. To calibrate the camera, multiple images of a calibration pattern from different angles are needed. Popular calibration patterns include the asymmetric checkerboard (with one side containing an even number of squares, both black and white, and the other containing an odd number of squares) [95] and circular control points [96,97]. There also exist other calibration patterns. For example, Park et al. [98] used a T-shaped wand with multiple markers attached at pre-determined positions for the calibration of multiple cameras and to set up the origin of the 3D displacement measurement. After the calibration, the wand is removed and not used during displacement measurement.

Coordinate conversion through camera calibration has been adopted in many vision-based displacement measurement systems [63,79,99]. When a projection of 2D world coordinates to 2D image coordinates is considered, i.e., the objects are only subjected to in-plane motion, Wu et al. [41] proposed a simplified camera calibration method on the basis of known world coordinates of at least four points. In literature [99], the known dimensions of the edges and diagonals of the installed artificial target are used to establish the transformation between image and physical coordinates, under the assumption that the out-of-plane motion is negligible.

2.5.2. Practical calibration method

The camera calibration method requires one-time access to the target structure in order to install the calibration panel, which can be difficult in the field. For this reason, research has been carried out to develop more practical calibration methods. When the camera optical axis is perpendicular to the object surface, all points on this surface have an equal depth of field, meaning that these points can be equally scaled down into the image plane. In this case, only one identical scaling factor is needed. In general, the scaling factor can be obtained from one of two methods, as expressed in Eq. (3): (1) SF_1, based on a known physical dimension on the object surface and its corresponding image dimension in pixels (i.e., d_{known} and I_{known}); (2) SF_2, based on the intrinsic parameters of the camera as well as the extrinsic parameters between the camera and the object structure (i.e., D, f and d_{pixel}) [47,54].

SF_1 = \frac{d_{known}}{I_{known}}, \qquad SF_2 = \frac{D}{f}\, d_{pixel} \quad (3)

where d_{known} is the known physical length on the object surface, I_{known} is the corresponding pixel length in the image plane, d_{pixel} is the per-pixel length (e.g., in μm/pixel), D is the distance between the camera and the object, and f is the focal length.

However, the prerequisite for SF_1 and SF_2 is the perpendicularity of the camera's optical axis to the object surface. Such a requirement would impose some difficulties in practical implementations, because a small camera misalignment angle can go unnoticed during the experimental setup but can cause errors, especially when the object distance from the camera is relatively large. Moreover, in outdoor field tests, it is often unavoidable to tilt the camera optical axis by a small angle in order to track the object surface target. When the camera optical axis is tilted away from the normal direction of the object surface by an angle θ, the scaling factor can be approximated by SF_3 in Eq. (4) [47]:

SF_3 = \frac{D}{f \cos^2 \theta}\, d_{pixel} \quad (4)

In literatures [47,62,92,100–105], the scale ratio is constructed using SF_1 based on the known physical dimension on the object surface, such as the size of artificial target panels (if such panels can be installed on the object) or the sizes of structural members known from design drawings, and the corresponding image dimensions in pixels. Generally, the ratio d_{pixel}/f in SF_2 and SF_3 can be calculated from the camera specifications provided by the manufacturer; however, this information can hardly be found for the majority of low-cost cameras [106]. Instead, in literature [106], a unit conversion (e.g., from image pixel to


millimeter) method is developed with the help of a checkerboard. However, it is noted that the proposed unit conversion method requires the zoom factor to be kept constant in the subsequent test measurements. In fact, in cases of a non-perpendicular optical axis, the scaling factor at each measurement point is not a constant. It depends on various parameters, including the image coordinates of the measurement point, the tilt angle θ of the camera, the focal length f, as well as the distance D between the camera and each measurement point [49]. Pan et al. [49] converted image displacements to physical displacements using an improved calibration model of Eq. (4), where the tilt angle and distance D are measured by a laser rangefinder.

2.6. 2D vs. 3D measurement

Most vision sensor studies focus on 2D dynamic displacement measurement. It is known that the in-plane displacement measurement accuracy using a single camera is sensitive to out-of-plane motion. Sutton et al. [107] found that the in-plane measurement error due to out-of-plane translation is proportional to ΔZ/Z, where ΔZ is the out-of-plane translation displacement and Z the distance from the object to the camera.

To minimize the effect of the out-of-plane motion, a 3D stereovision system with a pair of synchronized cameras in stereoscopic configuration can be employed. Park et al. [98] proposed a motion capture system with multiple cameras to measure 3D structural displacements. The 2D coordinate data from each camera are used to calculate the 3D coordinates of the markers attached on structures with respect to the predetermined origin of the 3D space. The effectiveness was validated through a free vibration experiment on a 3-story structure. Poozesh et al. [50] assessed the accuracy of a 3D stereo-vision system in measuring full-field distributed strain and displacement over a large area of a scaled wind turbine blade. Pan et al. [51] pointed out that for deformation measurement of a macroscopic object with a curved surface, stereovision-based 3D measurement is more practical and effective because it can be used for 3D profile and deformation measurement and is insensitive to out-of-plane displacement.

Three-dimensional measurement based on stereo vision systems is expected to attract more research and application interest. However, due to its convenience and efficiency, 2D measurement would still be sufficient for most civil engineering structural applications, such as measuring the vertical and transverse deformation of bridges and the horizontal displacements of buildings or towers.

2.7. Artificial target vs. natural feature

Template matching algorithms rely on sufficient intensity variations in the reference and target images to ensure reliable point identification and matching [26]. In order to improve the robustness of the target tracking and to reduce measurement errors, high-contrast artificial targets are often attached to the structural surface [49,108], such as a roundel target [20], concentric rings [99], crosses [42], LEDs [43], black and white blocks with random sizes [39], and speckle patterns [61]. For example, in literature [92], a planar target with four circles is attached to the structure to measure the vibration of a bridge, while in [42] cross-shaped targets are used and the viewing system is equipped with an additional reference system, which decreases the sensitivity to vibrations. Ring-shaped and random targets are used in [30], and multiple targets are simultaneously measured with a single camera, producing displacements at multiple points. In order to completely eliminate the need for accessing the structure, efforts have been made to track natural features on the structural surface without installing artificial targets. Recent studies on vision-based displacement sensors have demonstrated the accuracy of vision sensors in tracking natural structural features [39,79,81,106].

2.8. Real time vs. post-processing

Most existing vision sensor systems post-process the recorded video files. In this case, only a consumer-grade commercial video camera is needed to take videos. This also provides the flexibility to extract structural displacements at any point from a single recording. The inability to perform real-time displacement measurement, however, would limit the application to continuous online monitoring.

To achieve high measurement accuracy, many vision-based techniques sacrifice analysis speed, although real-time and fast-speed analysis is often necessary in computer vision based applications. Khuc and Catbas [106] mentioned that the challenges associated with the video storage requirement and the processing time need to be addressed. Baqersad et al. [109] also pointed out that real-time capability is critical to make photogrammetry more practical for dynamic measurement. The feasibility of real-time measurement depends on the complexity of the adopted template matching algorithm, the programming language, as well as the code efficiency of the developed software. A real-time vision sensor system contains a notebook computer installed with a developed image processing software package. The real-time displacement measurement data are saved in the computer, avoiding the time-consuming and memory-intensive task of saving video files. For example, Pan et al. [49] developed a real-time displacement tracking system using subset-based DIC. Feng et al. developed real-time video-processing software based on both the UCC and OCM algorithms [47,110], in which the programming environment is Visual Studio 2010 using the C++ language. During measurement, the FlyCapture Software Development Kit (SDK) by Point Grey Research is used to capture video images from Point Grey USB 3.0 cameras using the same application programming interface (API) under 32- or 64-bit Windows 7/8 operating systems. The frame-by-frame images are then processed by the UCC/OCM algorithm and displayed on the screen using the DirectShow library. Meanwhile, the measured displacement history is shown on the screen in real time and saved to the computer.

The real-time displacement measurement capability is particularly important for long-term continuous monitoring. Data analytics can also be incorporated in the software to assess the structural health conditions and detect post-event structural damage. However, real-time measurement may not be feasible depending on the number of measurement points, the required video resolution, the maximum frame rate per second, and the template and ROI sizes.

3. Validation of measurement capacity and accuracy

Vision sensors are developed to address the challenges of remote and accurate displacement measurement for both small and large engineering structures. In the literature, a large number of evaluation tests have been reported on laboratory structures as well as on medium- and long-span bridges, buildings, and wind turbines, among many others.

3.1. Laboratory experimental evaluation

Many recent studies on vision-based displacement sensors by different research groups have experimentally demonstrated that high accuracy can be achieved for both single-point and multi-point structural displacement measurements by either tracking high-contrast predesigned target panels or natural features on the structural surface [27,29,30,36,37,39,41,43,47–49,53,102,106,111,112]. Note that this review is not intended to be an exhaustive listing of laboratory experiments of vision sensors. In fact, almost all the vision sensors in the literature have been validated through laboratory tests. Recently, efforts have also been made to investigate the feasibility of displacement measurements utilizing the advanced onboard sensing capabilities of smartphone technologies, such as embedded high-resolution/high-speed video features, powerful processors and memories, and open-source


computer vision libraries, etc. For example, Min et al. [113] developed a smartphone software application for real-time displacement measurement, and shaking table tests were conducted to study its accuracy. Based on vibration testing of a small-scale multistory laboratory model, Ozer et al. [114] demonstrated the dual usage of the ubiquitously available smartphone for measuring both structural deflections/displacements and accelerations with its embedded camera and accelerometer.

Warren et al. [115] experimentally compared vision-based, laser and accelerometer measurements for structural dynamics analysis. It is concluded that vision-based photogrammetry techniques provide additional measurement capabilities that complement the current array of measurement systems by providing an alternative that favors high-displacement and low-frequency vibration, which is typically difficult to measure with accelerometers and laser vibrometers. Additionally, D'Emilia [108] introduced the concept of synchronization among three transducers: the cameras, the lasers and the accelerometers.

3.2. Field tests

For civil engineering structural applications, field evaluation of the vision sensors is of particular importance. The ability of vision sensors to remotely measure displacements of short- or medium-span bridges has been reported. For example, field tests, individually conducted by Feng et al. [47] and Shariati and Schumacher [116], on a pedestrian bridge located on the Princeton University campus cross-validated the frequency-domain characteristics of the bridge identified from the measurements by different vision sensors. Feng et al. [39] carried out field tests on two railway bridges subjected to freight trainloads traveling at various speeds. Measurements were remotely taken not only during the daytime but also at night, from different distances, with and without an artificial target panel. Through comparison with a conventional LVDT reference sensor, the high accuracy of the proposed remote sensor system that tracks natural targets was demonstrated in realistic field environments. Similarly, Pan et al. [49] demonstrated the efficacy and practicality of the proposed video deflectometer through real-time deflection measurement of a railway bridge. Ribeiro et al. [27] measured the displacement of a railway bridge's deck induced by the passage of trains, yielding a good agreement between the displacement measurements obtained with the video system and with an LVDT, achieving accuracy below 0.1 mm for distances from the camera to the target up to 15 m, and on the order of 0.25 mm for a distance of 25 m. Busca et al. [30] proposed a vision-based technique to measure both the static and dynamic displacement responses of a railway bridge. It was found that without an artificial target, the reliability is strongly affected by the structure's texture contrast. This study concluded that in order to measure a large bridge portion with one single camera, a compromise between field of view and measurement resolution is necessary.

Studies have also been carried out for long-span bridges and other large-size structures. As one of the earliest applications, Stephen et al. [20] employed a visual tracking system in the measurement of deck displacements at the center of the 1410 m span of the Humber Bridge in the UK. Wahbeh et al. [43] developed a video camera system with targets consisting of black steel sheets, on which two high-resolution red lights (LEDs) were mounted, to measure displacement of the Vincent Thomas Bridge located in San Pedro, California. Fukuda et al. [100] conducted vision sensor measurement of the mid-span displacement of the same bridge from 300 m away and demonstrated the robustness of their OCM algorithm in dealing with the changing natural lighting conditions in the field. Field tests on the long-span Manhattan Bridge (Fig. 4) by Feng and Feng [54], with the camera placed 300 m away from the bridge, confirmed the remote, real-time and multi-point measurement capacities of the vision sensor system. Brownjohn et al. [99] evaluated the performance of a commercial optical system for tracking mid-span displacement of the Humber Bridge by using both a pre-attached artificial target panel and natural targets on the bridge, with the camera placed 710 m away from the mid-span targets. The measurements agree well with data from a reference GPS. Ye et al. [48] demonstrated the robustness of their vision sensor system through field measurement of the mid-span vertical displacement of the Tsing Ma Bridge in the operational condition, from which a good agreement was observed between the measurement results by the vision-based system and GPS. In addition, the vision sensor system was used to measure the vertical mid-span displacement influence lines of the Stonecutters Bridge in Hong Kong under different loading scenarios. By tracking six actively illuminated LED targets mounted on the bridge, the video deflectometer developed by Tian and Pan [117] was applied for field, remote, and multipoint displacement measurement of the Wuhan Yangtze River Bridge in China during its routine safety evaluation tests. In [106], the measurement accuracy of the vision sensor for displacement responses and modal parameters of a football stadium is compared with those from reference LVDTs and accelerometers, respectively, for various conditions such as changing ambient light and distance of the camera.

3.3. Measurement error sources

Measurement errors, caused by various sources, cannot be completely eliminated in vision-based measurement [118]. The application studies, especially in field tests, have acknowledged the interference factors that affect the accuracy of vision-based sensor systems. Measurement errors may arise from sources such as the calibration procedure, optical distortion effects, non-linearity of the field of view, optical components, system resolution, data synchronization among cameras, light intensity, and non-uniform air refraction, among others [108]. D'Emilia et al. [108] evaluated the performance of vision-based vibration measurements by considering the effect of some peculiar parameters, i.e., the type of target, the vibration frequency and amplitude, the exposure time and the image acquisition frequency. Literatures [119,120] analyzed the sensitivity of displacement to the image acquisition noise (e.g., digitization, read-out noise, dark current noise and photon noise), with reference images corrupted by different levels of zero-mean Gaussian noise. It is demonstrated that the standard deviation of the measurement errors is proportional to that of the image noise and inversely proportional to the subset size and to the average of the squared grey level gradients. Haddadi [121] investigated the error sources related to the DIC technique. Based on both numerical and experimental tests of rigid-body motion, this study assessed the errors related to lighting, the optical lens (distortion), the charge-coupled device (CCD) sensor, the out-of-plane displacement, the speckle pattern, the grid pitch, the size of the subset and the correlation algorithm. Pan et al. [51] systematically reviewed the displacement measurement errors of 2D DIC caused by the speckle pattern, non-parallel camera sensor and object surface, out-of-plane displacement, image distortion, various noises, subset size, correlation criterion, interpolation scheme, and shape function, etc. Ferrer et al. [122] performed a parametric study of the measurement errors introduced by the vision-based method, in which influencing factors such as the distance to the target, the image size, the type of camera and the movement amplitude were analyzed for four different distances and two types of excitations.

3.3.1. Errors from camera motion
In field measurements, the camera itself is often subjected to ambient vibration (wind, traffic, etc.), causing displacement measurement errors. Studies have highlighted the adverse effect of camera vibration on the measurement accuracy [39,49,99]. For example, during the field tests in [39], the camera vibration caused by passing-train-induced ground motion affected the measurement accuracy, particularly when a zoom lens was used, which magnified not only the images but also the camera vibration. This problem becomes more serious for a lightweight compact camera-tripod system.
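Camera ego-motion adds directly to the apparent motion of the tracked target. When a stationary background object, such as the building in Fig. 4(b), shares the field of view, its apparent displacement can be subtracted out, as proposed in [54,123,124]. The following is a minimal sketch of that subtraction, not the implementation from those studies: the arrays are hypothetical, and the single shared scaling factor assumes the reference lies at a distance similar to the target's.

```python
import numpy as np

def correct_camera_motion(target_px, reference_px, scaling_factor):
    """Cancel camera ego-motion from a vision-based displacement record.

    target_px      -- pixel displacement history of the structural target
    reference_px   -- pixel displacement history of a stationary background
                      object (e.g., a building) tracked in the same frames
    scaling_factor -- physical length per pixel (SF1 in Eq. (3)), e.g. mm/pixel
    """
    target_px = np.asarray(target_px, dtype=float)
    reference_px = np.asarray(reference_px, dtype=float)
    # The stationary reference sees only the camera motion, so subtracting
    # its apparent displacement removes the camera-motion contribution.
    return (target_px - reference_px) * scaling_factor

# Synthetic check: 1 Hz structural vibration contaminated by slow tripod sway.
t = np.linspace(0.0, 2.0, 201)
structure_px = 5.0 * np.sin(2.0 * np.pi * 1.0 * t)   # true motion, in pixels
camera_px = 2.0 * np.sin(2.0 * np.pi * 0.2 * t)      # camera sway, in pixels
corrected_mm = correct_camera_motion(structure_px + camera_px,
                                     camera_px, scaling_factor=1.5)
```

With a perfect reference track, `corrected_mm` recovers the true structural motion scaled to physical units; in practice the reference track carries its own tracking noise, so the subtraction reduces rather than eliminates the camera-motion error.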


Fig. 4. Field measurement of Manhattan Bridge: (a) test setup, and (b) tracking targets on the bridge and the background building [54].

Yoneyama and Ueda [61] proposed a method for correcting the effect of camera movement. The relationship between images before and after the camera movement is described by a perspective transformation equation. The unknown coefficients of the equation are determined from un-deformed regions of the images. The effect of the camera movement is then eliminated by applying the perspective transformation. The effectiveness of the correction method is validated by applying it to the rigid-body rotation and translation measurement of a planar specimen, the deflection measurement of a wide-flange beam, and bridge deflection measurement during vehicle passage. A more convenient camera vibration correction method has been proposed in literatures [54,123,124]. When measuring, for example, a long-span bridge displacement, a reference object (such as a building) in the background can be assumed stationary, as shown in Fig. 4. By subtracting the displacement of the reference object from the bridge displacement, the camera motion can be canceled. Kim and Kim [123] applied this correction method to remove deck-vibration-induced camera motion when measuring the hanger cable tension forces of the Gwangan Bridge in Korea. However, the need to have a static reference in the FOV may lessen the advantage of this method. Besides, a tradeoff between the FOV and the proportion of tracking targets is necessary if a static reference needs to be in the FOV. Therefore, camera motion cancellation is still an open problem.

3.3.2. Errors from coordinate conversion
The scaling factor determined by either of the coordinate conversion methods described earlier can introduce measurement errors. For the camera calibration method in Section 2.5.1, the extrinsic and intrinsic matrices for camera calibration do not account for lens distortion, mostly radial distortion and slight tangential distortion. Lava et al. [125] and Pan et al. [126] investigated the impact of lens distortion on the uncertainty of DIC measurement. In order to accurately represent a real camera for an accurate camera calibration, a general lens distortion model should also be considered during camera calibration [26]. In literature [127], a method of lens distortion correction is proposed to improve the measurement accuracy of DIC for 2D displacement measurement. The lens distortion was first evaluated from displacement distributions obtained in rigid-body in-plane translation and rotation tests, and the measured displacement was then corrected using a coefficient determined by a least squares method. Note that in field applications measuring small structural displacements with a relatively long focal-length lens from a remote distance, the errors caused by the lens distortion may be negligible.

For the practical calibration method described in Section 2.5.2, there also exist uncertainties in the estimated scaling factor. For the scaling factors SF2 and SF3, estimation errors would arise from uncertainties in the tilt angle estimation, the camera distance measurement and the focal length readings from an adjustable-focal-length lens. Using a fixed zoom lens and an angle measurement device can minimize such errors. For the scaling factor SF1, based on a known physical dimension (such as the length of a truss member) on the target surface and the corresponding image dimension in pixels, errors in the range of ± 2 pixels could occur when selecting the physical members from the image using a mouse. It is recommended to use the mean value of several repeated picking operations to average out some of the random errors. Furthermore, if a dot extraction algorithm is implemented, the error could be reduced to ± 1/10th of a pixel. Feng et al. [47] theoretically studied the effects of the optical axis tilt angle and lens focal length based on 1D in-plane translation. It was found that the errors in SF1 and SF3 increase as the tilt angle increases, and the error is inversely related to the focal length. Furthermore, through laboratory tests, Feng et al. [54] demonstrated that the estimated scaling factor SF1 utilizing known physical dimensions can yield satisfactory accuracy when the camera tilt angle is small (e.g., 9°). It is also suggested that for a non-perpendicular lens optical axis, scaling factors in the horizontal and vertical directions should be obtained separately. Moreover, when a series of targets are simultaneously tracked with a single camera to measure displacements at multiple points along the structure, due to the projective distortion, different scaling factors should be determined for each measurement point by utilizing the structural dimension closer to or encompassing that point.

3.3.3. Errors from hardware limitations
Measurement errors using vision-based systems can further arise from hardware limitations such as the rolling shutter effect and temporal aliasing. Two types of image sensors widely used in digital cameras are the CCD and the complementary metal oxide semiconductor (CMOS). Although CMOS imaging sensors have brought positive improvements over CCD, one major distinction between the two sensors is the readout mode. CCD cameras often use the global shutter mode, which captures the entire image frame at the same instant. This is particularly beneficial when the image is changing from frame to frame. By contrast, most CMOS cameras use the rolling shutter mode, in which each image frame is recorded by scanning row-by-row or column-by-column across the pixels. In other words, the rolling shutter method may produce distortions when recording fast-moving objects, the effect of which on the measurement accuracy should be rectified when this type of video camera is used [128]. Note that not all CMOS sensors have rolling shutters. For example, in [53], the adopted video camera (Point Grey FL3-U3-13Y3M-C) has a CMOS-type sensor but a global shutter.

Temporal aliasing is another concern for vibration tests. When the structural response has frequency content higher than half of the frame rate, the measured displacement will contain aliased information from higher frequencies. Different from other vibration sensor systems, where the aliasing effect can be removed by using an anti-aliasing filter, temporal aliasing cannot be removed in vision-based systems since the images are already aliased [79,129]. According to the Nyquist theorem, in order to avoid aliasing, the sample rate must be at least twice the highest frequency component in the measured signal [41]. For field measurement of a structure whose natural frequencies are unknown, preliminary analysis can be conducted to estimate the frequencies, and other sensors (such as lasers or accelerometers) can be used to conduct some preliminary measurements [108]. It is noted that for civil engineering structures, the dominant natural frequencies are usually less


than 50 Hz, and thus a sampling rate of 100 Hz should be sufficient to avoid the aliasing problem.

3.3.4. Errors from environmental sources
It is well known that the accuracy of template matching techniques is largely dependent on the image quality, which is often difficult to guarantee in outdoor field environmental conditions such as illumination variation, partial target occlusion, partial shading, and background disturbance, etc. Shadow and low lighting conditions are the fundamental technical limitation of computer vision systems [110]. For high-speed, high-magnification or night-time applications, lighting the area under test is important. Measurement errors can also arise from the heat haze that occurs when the air is heated non-uniformly at high ambient temperatures during field testing [130]. The non-uniformly heated air causes variation in its optical refraction index, resulting in image distortion. The measurement errors caused by the heat haze increase as the measurement distance increases, because the air volume between the target object and the lens of the camera becomes large. Research has been conducted to study measurement errors from these environmental sources. For example, Ye et al. [62] conducted a series of shaking table experiments to examine the environmental influence factors affecting the accuracy and stability of the vision-based system. It is demonstrated that the measurement results are adversely affected by illumination and vapor. Preliminary tests of a video-based system by Ribeiro et al. [27] have shown that the measurement precision can be affected by the distortion of the field of view caused by the flow of heat waves generated by IR incandescent lighting and that, therefore, the operating time of the lamps should be limited. Lee et al. [131] pointed out that external factors, such as precipitation, fog, variation of natural light and wind action, may influence the performance of a vision system. Anantrasirichai et al. [130] proposed a novel method for mitigating the effects of atmospheric distortion using complex wavelet-based fusion.

4. Applications for SHM

Although the study of vision-based sensor applications for SHM is still at an early stage, extensive efforts have been made towards extracting quantitative structural condition measures from the low-cost vision-based displacement data. Some examples are presented as follows.

4.1. Modal property identification

SHM is often based on vibration measurement and structural modal property identification. Available vibration sensors (such as accelerometers, GPS, and laser vibrometers) are mostly point sensors. As a result, the spatial resolution of the obtained mode shapes is limited by the number of deployed sensors, which can result in less accurate finite element (FE) model updating and damage localization. Mas et al. [132] developed a method for simultaneous multipoint measurement of vibration frequencies through the analysis of a high-speed video sequence. Wang et al. [133] carried out full-field vibration measurement on the 3D surface of a car bonnet with a 3D DIC system under random excitation, from which modal parameters of the bonnet were successfully identified by using the frequency response functions (FRFs) of the shape features of the DIC measurements. Through laboratory experiments on a scaled simply supported beam and a 3-story frame structure, Feng and Feng [54,110] demonstrated that dynamic displacement responses at a series of points can be simultaneously and accurately measured using one camera, and the natural frequencies and mode shapes identified by the vision sensor match well with those by multiple accelerometers. Yoon et al. [79] carried out laboratory vibration experiments on a scaled frame structure. Modal parameters were identified and agreed with those obtained by the conventional accelerometer-based method. Poozesh et al. [50] extracted operating mode shapes and natural frequencies of a small-scale wind turbine blade by using two synchronized stereo-vision systems in conjunction with output-only system identification. The measurements obtained from the camera pairs are stitched together. The extracted modal properties were shown to be accurate when compared to those from a validated finite element model. This study validated the effectiveness of the stitching approach for developing a multi-camera system for monitoring the entire surface of large-sized blades. Through experiments on scaled laboratory structures, Yang et al. [134,135] showed the potential for output-only modal identification using either non-aliased measurements or temporally-aliased video measurements at low frame rates.

4.2. Model updating and damage detection

Finite element models of a structure can be updated by comparing the analytical and experimental modal properties, including the natural frequencies, mode shapes, and damping ratios, based on the vision sensor measurement [136]. The updated model can then be used for structural damage detection. For example, based on the displacement of a cantilever beam measured by the phase-based optical flow algorithm, Cha et al. [77] utilized the unscented Kalman filter to detect structural damage by identifying structural properties such as stiffness and damping coefficient under an assumption of known structural mass. Feng and Feng [54] experimentally demonstrated that smooth mode shapes from full-field displacement responses measured by a single camera enabled structural damage detection and localization in a simple beam based on the mode shape curvature index. Feng and Feng [110] utilized the modal parameters identified from the vision sensor to successfully update the inter-story stiffness of a laboratory frame structure. Oh et al. [137] conducted model updating using a multi-objective optimization algorithm based on displacement responses from a motion capture system (MCS). Through a free vibration test of a three-story shear frame model, the performance of the model updating method is validated by comparing the dynamic properties between the updated model and the direct MCS measurement. Wang et al. [138,139] demonstrated that the region-based Zernike moment descriptor (ZMD) is a robust image processing technique for mode-shape recognition and finite element model updating of simple plate structures. Furthermore, FE model updating of nonlinear elasto-plastic material properties was carried out using the ZMD derived from full-field strain measurements [140]. Song et al. [33] demonstrated the use of subpixel virtual visual sensors to acquire mode shapes and frequencies of structures and their use in a wavelet-based structural damage detection algorithm through laboratory experiments on steel cantilever beams. Dworakowski et al. [141] obtained the deflection curve of small-scale laboratory beams by means of digital image correlation (DIC), and evaluated two deflection-shape-based algorithms for damage detection of the beams. Feng and Feng [40] proposed a time-domain method to identify the equivalent stiffness of a railway bridge based on vision-based displacement measurement with the prerequisite of known trainloads. Sensitivity studies showed that the train-induced displacement response is better suited than acceleration responses to identify the bridge stiffness. Based on experiments on a laboratory-scale beam structure, Feng and Feng [142] further demonstrated that the global stiffness of the beam specimen as well as external hammer excitation forces can be successfully and accurately identified from displacement measurements at two points using one camera.

4.3. Cable force estimation

A cable system is the most important component in cable-supported bridges and roof structures, and their tensions need to be monitored. Existing methods for cable tension estimation are based on acceleration response measurement using accelerometers attached on each of the cables. Such practice is relatively expensive and time-consuming due to the required installation of the sensors and data acquisition systems [128]. A few attempts have been made to apply vision-based sensors for
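These vibration-based tension estimates commonly rest on taut-string theory, which relates cable tension to the measured natural frequencies. A minimal sketch in Python of that workflow (the cable properties and the synthetic displacement record below are illustrative assumptions, not values from the studies cited):

```python
import numpy as np

def dominant_frequency(u, fs):
    """Peak-picking on the one-sided amplitude spectrum (Hz)."""
    spec = np.abs(np.fft.rfft(u - np.mean(u)))
    freqs = np.fft.rfftfreq(len(u), d=1.0 / fs)
    return freqs[np.argmax(spec)]

def taut_string_tension(f_n, n, length, mass_per_length):
    """Tension (N) from the n-th natural frequency of a taut cable:
    f_n = (n / (2L)) * sqrt(T / m)  =>  T = 4 m L^2 (f_n / n)^2."""
    return 4.0 * mass_per_length * length**2 * (f_n / n) ** 2

# Synthetic 60 s displacement record at 100 Hz with a 2.5 Hz fundamental;
# illustrative cable: L = 40 m, m = 50 kg/m (assumed, not from the paper).
fs, L, m = 100.0, 40.0, 50.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
u = 3.0 * np.sin(2 * np.pi * 2.5 * t) + 0.1 * rng.standard_normal(t.size)

f1 = dominant_frequency(u, fs)        # recovers ~2.5 Hz
T = taut_string_tension(f1, 1, L, m)  # ~2.0e6 N
```

In practice, the displacement record would come from the vision sensor tracking a target or a natural feature on the cable, and sag-extensibility or bending-stiffness corrections may be needed for short or stiff cables.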

A few attempts have been made to apply vision-based sensors for cable force estimates [31,32,123,128]. For example, Ji and Chang [32] and Kim et al. [31] employed the optical flow method and the normalized cross-correlation template matching method, respectively, for cable vibration measurement and cable force estimation. To ensure that cable forces reach their design values, a series of field tests was carried out by Feng et al. [128] to measure cable forces for the cable-supported roof structure of the Hard Rock Stadium in Florida. Satisfactory agreement was observed between the cable forces measured by the vision-based sensor and the reference readings from load cells.

4.4. Other SHM-related applications

In addition to the aforementioned remote measurement of bridge displacement, computer vision can also be applied to classify traveling vehicles and measure their weights and positions. For example, Khan et al. [143] presented the design and application of novel multisensory testbeds for the collection, synchronization, archival, and analysis of multimodal data for health monitoring of transportation infrastructure, where computer-vision algorithms are used to detect and track vehicles and extract their properties. Tan et al. [144] used video images of passing vehicles to extract information about vehicle types, arrival times and speeds to develop a physics-based model of the traffic excitation on bridges. The effectiveness of this video-assisted approach was validated by field experimental results on a bridge. Catbas et al. [145] presented a methodology for bridge load rating by integrating computer images of passing vehicles with strain gauge measurements. Ojio et al. [94] proposed a contactless bridge weigh-in-motion (WIM) method without the need for any sensors to be attached to the bridge. One camera is used to track bridge deflections under vehicular loadings, while a second camera is used to monitor traffic and to determine axle spacing. The static axle weights can be found by minimizing the squared differences between the measured and theoretical deflection responses. By developing the capability of simultaneously measuring the traffic loading input and the bridge displacement output, vision-based systems can provide more accurate and actionable information to assess bridge structural condition and safety.

Yeum and Dyke [146] proposed vision-based automated crack detection for bridge inspection. As a pilot study, cracks near bolts on a steel structure were identified from images. Similarly, Cha et al. [147] proposed a vision-based method using a deep architecture of convolutional neural networks for detecting concrete cracks without calculating the defect features. Vision sensor systems can also be used for real-time tracking of water levels without disturbing the water flow. For example, in order to characterize the dynamics of tuned liquid column dampers (TLCDs), a vision-based sensing system was developed by Kim et al. [148] for measurement of the water depth over time during shaking table dynamic tests. Dynamic characteristics such as frequencies and damping ratios of the TLCDs are then estimated from the measured experimental data. Additionally, vision sensors can be used to track earth mass movements such as glacier flow, rock glacier creep and landslides. For example, Debella-Gilo and Kääb [89] evaluated the accuracy of pixel and subpixel image processing algorithms when measuring surface displacements of mass movements.

5. Conclusions

This paper presents a comprehensive review of the recent development of computer vision-based sensors for structural displacement response measurement and their applications for SHM. The goal is to help broaden the application of this emerging low-cost sensor technology in not only scientific research but also engineering practice, such as field condition assessment of aging civil engineering structures and infrastructure systems.

The general principles of vision-based sensor systems based on various template matching algorithms are first reviewed. Important issues critical to successful measurement are discussed in detail, including how to convert pixel displacements to physical displacements, how to achieve sub-pixel resolutions, and what causes measurement errors and how to mitigate them. Other subjects of interest include the comparison between measurement using artificial targets versus natural features on structural surfaces, 2D versus 3D measurement, and real-time versus post-processing of images. To evaluate measurement accuracy and demonstrate the unique features and merits of vision-based structural monitoring, the research community has undertaken both laboratory and field experimental studies on a wide range of structures, including buildings, bridges, wind turbines and mechanical structures. Some of these studies have further applied the measurement data for SHM, including modal parameter identification, structural model updating, damage detection, as well as cable force estimation.

However, in many respects, the vision-based sensor technology is still in its infancy. The majority of studies have focused on measurements of small-scale laboratory structures or field measurements of large structures at a limited number of points for a short period of time. In the near future, the technology is expected to be deployed on real structures to fully validate its performance in outdoor field environments. For large structures such as long-span bridges, multiple synchronized cameras targeting different sections of the structure will be applied to monitor the entire structure. Another direction is to further improve measurement accuracy, resolution and robustness by addressing the error sources discussed in Section 3.3. Most of the current field studies have focused on measurement of relatively large-amplitude displacements, such as bridges subjected to moving trainloads. The various noise sources in complex outdoor conditions, such as heat haze, pose challenges for accurate measurement of small-amplitude displacements, such as the response of short- or medium-span concrete bridges under light-weight vehicles or ambient excitations.

Most existing image processing software programs are for post-processing of recorded video images. Future applications of vision sensors for long-term continuous monitoring of structures in the field will require real-time on-site image processing. Increasingly ubiquitous traffic and security cameras or cell phone cameras provide opportunities to tap into low-cost sensor resources for real-time SHM and safety assessment.

Acknowledgement

This work is supported by NCHRP Highway IDEA Project (No. 20-30/IDEA 189). The authors would also like to acknowledge the anonymous reviewers for their constructive comments, which helped in improving the quality of this paper.

References

[1] Wang H, Tao T, Guo T, Li J, Li A. Full-scale measurements and system identification on Sutong cable-stayed bridge during typhoon Fung-Wong. Sci World J 2014;2014:13.
[2] Feng D, Feng MQ. Model updating of railway bridge using in situ dynamic displacement measurement under trainloads. J Bridge Eng 2015;20(12):04015019.
[3] Wang H, Li A, Guo T, Tao T. Establishment and application of the wind and structural health monitoring system for the Runyang Yangtze River bridge. Shock Vibration 2014;2014:15.
[4] Guo T, Li A, Wang H. Influence of ambient temperature on the fatigue damage of welded bridge decks. Int J Fatigue 2008;30:1092–102.
[5] Chen Y, Feng M, Soyoz S. Large-scale shake table test verification of bridge condition assessment methods. J Struct Eng 2008;134:1235–45.
[6] Chen Y, Feng M. Structural health monitoring by recursive Bayesian filtering. J Eng Mech 2009;135:231–42.
[7] Carden EP, Fanning P. Vibration based condition monitoring: a review. Struct Health Monit 2004;3:355–77.
[8] Farrar CR, Doebling SW, Nix DA. Vibration-based structural damage identification. Philos Trans R Soc London, A 2001;359:131–49.
[9] Mita A, Takahira S. Risk control of smart structures using damage index sensors. Advances in Building Technology. Oxford: Elsevier; 2002. p. 521–8.
[10] Catbas FN, Susoy M, Frangopol DM. Structural health monitoring and reliability estimation: Long span truss bridge application with environmental monitoring data. Eng Struct 2008;30:2347–59.


[11] Xing Z, Mita A. A substructure approach to local damage detection of shear structure. Struct Control Health Monit 2012;19:309–18.
[12] Lynch J, Loh K. A summary review of wireless sensors and sensor networks for structural health monitoring. Shock Vibration Digest 2006;38:91–128.
[13] Li J, Mechitov KA, Kim RE, Spencer BF. Efficient time synchronization for structural health monitoring using wireless smart sensor networks. Struct Control Health Monit 2016;23:470–86.
[14] Sabato A, Niezrecki C, Fortino G. Wireless MEMS-based accelerometer sensor boards for structural vibration monitoring: a review. IEEE Sens J 2017;17:226–35.
[15] Li S, Wu Z. Development of distributed long-gage fiber optic sensing system for structural health monitoring. Struct Health Monit 2007;6:133–43.
[16] Kim DH, Feng MQ. Real-time structural health monitoring using a novel fiber-optic accelerometer system. IEEE Sens J 2007;7:536–43.
[17] Feng MQ, Kim D-H. Novel fiber optic accelerometer system using geometric moiré fringe. Sens Actuators, A 2006;128:37–42.
[18] Gentile C, Bernardini G. An interferometric radar for non-contact measurement of deflections on civil engineering structures: laboratory and full-scale tests. Struct Infrastructure Eng 2010;6:521–34.
[19] Lee JJ, Fukuda Y, Shinozuka M, Cho S, Yun CB. Development and application of a vision-based displacement measurement system for structural health monitoring of civil structures. Smart Struct Syst 2007;3:373–84.
[20] Stephen GA, Brownjohn JMW, Taylor CA. Measurements of static and dynamic displacement from visual monitoring of the Humber Bridge. Eng Struct 1993;15:197–208.
[21] Park J-W, Lee J-J, Jung H-J, Myung H. Vision-based displacement measurement method for high-rise building structures using partitioning approach. NDT E Int 2010;43:642–7.
[22] Ye XW, Dong CZ, Liu T. A review of machine vision-based structural health monitoring: methodologies and applications. J Sens 2016;2016:10.
[23] Zhang D, Guo J, Lei X, Zhu C. A high-speed vision-based sensor for dynamic vibration analysis using fast motion extraction algorithms. Sensors 2016;16:572.
[24] Qin J, Gao Z, Wang X, Yang S. Three-dimensional continuous displacement measurement with temporal speckle pattern interferometry. Sensors 2016;16:2020.
[25] Farzad N, Dimitrios K. Evaluation of vision-based measurements for shake-table testing of nonstructural components.
[26] Wang Z, Kieu H, Nguyen H, Le M. Digital image correlation in experimental mechanics and image registration in computer vision: Similarities, differences and complements. Opt Lasers Eng 2015;65:18–27.
[27] Ribeiro D, Calçada R, Ferreira J, Martins T. Non-contact measurement of the dynamic displacement of railway bridges using an advanced video-based system. Eng Struct 2014;75:164–80.
[28] Kohut P, Holak K, Uhl T, Ortyl Ł, Owerko T, Kuras P, et al. Monitoring of a civil structure's state based on noncontact measurements. Struct Health Monit 2013.
[29] Lee JJ, Shinozuka M. A vision-based system for remote sensing of bridge displacement. NDT E Int 2006;39:425–31.
[30] Busca G, Cigada A, Mazzoleni P, Zappa E. Vibration monitoring of multiple bridge points by means of a unique vision-based measuring system. Exp Mech 2014;54:255–71.
[31] Kim S-W, Jeon B-G, Kim N-S, Park J-C. Vision-based monitoring system for evaluating cable tensile forces on a cable-stayed bridge. Struct Health Monit 2013;12:440–56.
[32] Ji Y, Chang C. Nontarget image-based technique for small cable vibration measurement. J Bridge Eng 2008;13:34–42.
[33] Song Y-Z, Bowen CR, Kim AH, Nassehi A, Padget J, Gathercole N. Virtual visual sensors and their application in structural health monitoring. Struct Health Monit 2014;13:251–64.
[34] Zhang H, Hu S, Zhang X. SIFT flow for large-displacement object tracking. Appl Opt 2014;53:6194–205.
[35] Gehle RW, Masri SF. Tracking the multi-component motion of a cable using a television camera. Smart Mater Struct 1998;7:43.
[36] Jeon H, Bang Y, Myung H. A paired visual servoing system for 6-DOF displacement measurement of structures. Smart Mater Struct 2011;20:045019.
[37] Lee J-H, Ho H-N, Shinozuka M, Lee J-J. An advanced vision-based system for real-time displacement measurement of high-rise buildings. Smart Mater Struct 2012;21:125019.
[38] Santos CA, Costa CO, Batista JP. Calibration methodology of a vision system for measuring the displacements of long-deck suspension bridges. Struct Control Health Monit 2012;19:385–404.
[39] Feng M, Fukuda Y, Feng D, Mizuta M. Nontarget vision sensor for remote measurement of bridge dynamic response. J Bridge Eng 2015;04015023.
[40] Feng D, Feng M. Model updating of railway bridge using in situ dynamic displacement measurement under trainloads. J Bridge Eng 2015;04015019.
[41] Wu L-J, Casciati F, Casciati S. Dynamic testing of a laboratory model via vision-based sensing. Eng Struct 2014;60:113–25.
[42] Olaszek P. Investigation of the dynamic characteristic of bridge structures using a computer vision method. Measurement 1999;25:227–36.
[43] Wahbeh AM, John PC, Sami FM. A vision-based approach for the direct measurement of displacements in vibrating systems. Smart Mater Struct 2003;12:785.
[44] Ho H-N, Lee J-H, Park Y-S, Lee J-J. A synchronized multipoint vision-based system for displacement measurement of civil infrastructures. Sci World J 2012;2012:9.
[45] Myung H, Lee S, Lee B. Paired structured light for structural health monitoring robot system. Struct Health Monit 2010.
[46] Caetano E, Silva S, Bateira J. A vision system for vibration monitoring of civil engineering structures. Exp Tech 2011;35:74–82.
[47] Feng D, Feng M, Ozer E, Fukuda Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015;15:16557–75.
[48] Ye XW, Ni YQ, Wai TT, Wong KY, Zhang XM, Xu F. A vision-based system for dynamic displacement measurement of long-span bridges: algorithm and verification. Smart Struct Syst 2013;12:363–79.
[49] Pan B, Tian L, Song X. Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation. NDT E Int 2016;79:73–80.
[50] Poozesh P, Baqersad J, Niezrecki C, Avitabile P, Harvey E, Yarala R. Large-area photogrammetry based testing of wind turbine blades. Mech Syst Signal Process 2017;86(Part B):98–115.
[51] Pan B, Qian K, Xie H, Asundi A. Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review. Meas Sci Technol 2009;20:062001.
[52] Hild F, Roux S. Digital image correlation: from displacement measurement to identification of elastic properties – a review. Strain 2006;42:69–80.
[53] Feng D, Feng MQ. Vision-based multipoint displacement measurement for structural health monitoring. Struct Control Health Monit 2016;23(5):876–90.
[54] Feng D, Feng MQ. Experimental validation of cost-effective vision-based structural health monitoring. Mech Syst Signal Process 2017;88:199–211.
[55] Mattoccia S, Tombari F, Di Stefano L. Efficient template matching for multi-channel images. Pattern Recogn Lett 2011;32:694–700.
[56] Mahalakshmi T, Muthaiah R, Swaminathan P. Review article: an overview of template matching technique in image processing. Res J Appl Sci, Eng Technol 2012.
[57] Soh Y, Qadir M, Mehmood A, Hae Y, Ashraf H, Kim I. A feature area-based image registration. Int J Comput Theory Eng 2014;6:407–11.
[58] Brunelli R. Template matching techniques in computer vision: theory and practice. John Wiley and Sons, Ltd; 2009.
[59] Zitová B, Flusser J. Image registration methods: a survey. Image Vis Comput 2003;21:977–1000.
[60] Pratt WK. Digital image processing: PIKS inside. John Wiley & Sons, Inc.; 2001.
[61] Yoneyama S, Ueda H. Bridge deflection measurement using digital image correlation with camera movement correction. Mater Trans, JIM 2012;53:285–90.
[62] Ye XW, Yi T-H, Dong CZ, Liu T. Vision-based structural displacement measurement: System performance evaluation and influence factor analysis. Measurement 2016;88:372–84.
[63] Dworakowski Z, Kohut P, Gallina A, Holak K, Uhl T. Vision-based algorithms for damage detection and localization in structural health monitoring. Struct Control Health Monit 2016;23:35–50.
[64] Hassaballah M, Abdelmgeid AA, Alshazly HA. Image features detection, description and matching. In: Awad AI, Hassaballah M, editors. Image feature detectors and descriptors: foundations and applications. Cham: Springer International Publishing; 2016. p. 11–45.
[65] Mair E, Hager GD, Burschka D, Suppa M, Hirzinger G. Adaptive and generic corner detection based on the accelerated segment test. In: Daniilidis K, Maragos P, Paragios N, editors. Computer vision – ECCV 2010: 11th European conference on computer vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part II. Berlin Heidelberg: Springer; 2010. p. 183–96.
[66] Bay H, Tuytelaars T, Van Gool L. SURF: speeded up robust features. In: Leonardis A, Bischof H, Pinz A, editors. Computer vision – ECCV 2006: 9th European conference on computer vision, Graz, Austria, May 7–13, 2006, Proceedings, Part I. Berlin Heidelberg: Springer; 2006. p. 404–17.
[67] Alcantarilla PF, Bartoli A, Davison AJ. KAZE features. In: Fitzgibbon A, Lazebnik S, Perona P, Sato Y, Schmid C, editors. Computer vision – ECCV 2012: 12th European conference on computer vision, Florence, Italy, October 7–13, 2012, Proceedings, Part VI. Berlin Heidelberg: Springer; 2012. p. 214–27.
[68] Mikolajczyk K, Tuytelaars T, Schmid C, Zisserman A, Matas J, Schaffalitzky F, et al. A comparison of affine region detectors. Int J Comput Vision 2005;65:43–72.
[69] Leutenegger S, Chli M, Siegwart RY. BRISK: binary robust invariant scalable keypoints. In: Proceedings of the 2011 international conference on computer vision: IEEE Computer Society; 2011. p. 2548–55.
[70] Ortiz R. FREAK: fast retina keypoint. In: Proceedings of the 2012 IEEE conference on computer vision and pattern recognition (CVPR): IEEE Computer Society; 2012. p. 510–7.
[71] Awad AI, Hassaballah M. Image feature detectors and descriptors: foundations and applications. Springer International Publishing; 2016.
[72] Peng J, Peng S, Hu Y. Partial least squares and random sample consensus in outlier detection. Anal Chim Acta 2012;719:24–9.
[73] Son K-S, Jeon H-S, Park J-H, Park JW. Vibration displacement measurement technology for cylindrical structures using camera images. Nucl Eng Technol 2015;47:488–99.
[74] Feng B, Chen F, Liu G, Xiang Y, Liu B, Lv Z. Image-based displacement and rotation detection using scale invariant features for 6 degree of freedom ICF target positioning. Appl Opt 2015;54:4130–4.
[75] Jeong H-J, Choi JD, Ha Y-G. Vision based displacement detection for stabilized UAV control on cloud server. Mobile Inf Syst 2016;2016:1–11.
[76] Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded-up robust features (SURF). Comput Vis Image Underst 2008;110:346–59.
[77] Cha YJ, Chen JG, Büyüköztürk O. Output-only computer vision based damage detection using phase-based optical flow and unscented Kalman filters. Eng Struct 2017;132:300–13.
[78] Guo J, Zhu Ca. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm. Mech Syst Signal Process 2016;66–67:425–36.
[79] Yoon H, Elanwar H, Choi H, Golparvar-Fard M, Spencer BF. Target-free approach for vision-based structural system identification using consumer-grade cameras. Struct Control Health Monit 2016;23:1405–16.
[80] Liu B, Zhang D, Guo J, Zhu Ca. Vision-based displacement measurement sensor using modified Taylor approximation approach. Opt Eng 2016;55:114103.
[81] Choi I, Kim J, Kim D. A target-less vision-based displacement sensor based on image convex hull optimization for measuring the dynamic response of building structures. Sensors 2016;16:2085.
[82] Bing P, Hui-min X, Bo-qin X, Fu-long D. Performance of sub-pixel registration algorithms in digital image correlation. Meas Sci Technol 2006;17:1615.
[83] Foroosh H, Zerubia JB, Berthod M. Extension of phase correlation to subpixel registration. Image Process, IEEE Trans 2002;11:188–200.


[84] Berenstein CA, Kanal LN, Lavine D, Olson EC. A geometric approach to subpixel registration accuracy. Comput Vision, Graphics, Image Process 1987;40:334–60.
[85] Pilch A, Mahajan A, Chu T. Measurement of whole-field surface displacements and strain using a genetic algorithm based intelligent image correlation method. J Dyn Syst Meas Contr 2004;126:479–88.
[86] Li L, Chen Y, Yu X, Liu R, Huang C. Sub-pixel flood inundation mapping from multispectral remotely sensed images based on discrete particle swarm optimization. ISPRS J Photogrammetry Remote Sens 2015;101:10–21.
[87] Bruck HA, McNeill SR, Sutton MA, Peters III WH. Digital image correlation using Newton-Raphson method of partial differential correction. Exp Mech 1989;29:261–7.
[88] Davis CQ, Freeman DM. Statistics of subpixel registration algorithms based on spatiotemporal gradients or block matching. Opt Eng 1998;37:1290–8.
[89] Debella-Gilo M, Kääb A. Sub-pixel precision image matching for measuring surface displacements on mass movements using normalized cross-correlation. Remote Sens Environ 2011;115:130–42.
[90] Feng D, Feng M, Ozer E, Fukuda Y. A vision-based sensor for noncontact structural displacement measurement. Sensors 2015;15:16557.
[91] Mas D, Perez J, Ferrer B, Espinosa J. Realistic limits for subpixel movement detection. Appl Opt 2016;55:4974–9.
[92] Fukuda Y, Feng MQ, Shinozuka M. Cost-effective vision-based system for monitoring dynamic response of civil engineering structures. Struct Control Health Monit 2010;17:918–36.
[93] Jong-Han L, Hoai-Nam H, Masanobu S, Jong-Jae L. An advanced vision-based system for real-time displacement measurement of high-rise buildings. Smart Mater Struct 2012;21:125019.
[94] Doherty C, Carey CH, OBrien EJ, Taylor SE, Ojio T. Contactless bridge weigh-in-motion; 2016.
[95] Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000;22:1330–4.
[96] Heikkilä J. Geometric camera calibration using circular control points. IEEE Trans Pattern Anal Mach Intell 2000;22:1066–77.
[97] Luo P-F, Wu J. Easy calibration technique for stereo vision using a circle grid. Opt Eng 2008;47:033607–33610.
[98] Park SW, Park HS, Kim JH, Adeli H. 3D displacement measurement model for health monitoring of structures using a motion capture system. Measurement 2015;59:352–62.
[99] Brownjohn J, Hester D, Xu Y, Bassitt J, Koo KY. Viability of optical tracking systems for monitoring deformations of a long span bridge. In: EACS 2016 – 6th European conference on structural control; 2016.
[100] Fukuda Y, Feng MQ, Narita Y, Kaneko S, Tanaka T. Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm. Sens J, IEEE 2013;13:4725–32.
[101] Feng D, Feng MQ. Vision-based multipoint displacement measurement for structural health monitoring. Struct Control Health Monit 2015;876–90.
[102] Feng DM. Advanced vision-based displacement sensors for structural health monitoring. Columbia University Academic Commons 2016.
[103] Sładek J, Ostrowska K, Kohut P, Holak K, Gąska A, Uhl T. Development of a vision based deflection measurement system and its accuracy assessment. Measurement 2013;46:1237–49.
[104] Chen JG, Wadhwa N, Cha YJ, Durand F, Freeman WT, Buyukozturk O. Modal identification of simple structures with high-speed video using motion magnification. J Sound Vib 2015;345:58–71.
[105] Bartilson DT, Wieghaus KT, Hurlebaus S. Target-less computer vision for traffic signal structure vibration studies. Mech Syst Signal Process 2015;60–61:571–82.
[106] Khuc T, Catbas FN. Completely contactless structural health monitoring of real-life structures using cameras and computer vision. Struct Control Health Monit 2016. http://dx.doi.org/10.1002/stc.852.
[107] Sutton M, Yan JH, Tiwari V, Orteu JJ. The effect of out-of-plane motion on 2D and 3D digital image correlation measurements. Opt Lasers Eng 2008;46:746–57.
[108] D'Emilia G, Razzè L, Zappa E. Uncertainty analysis of high frequency image-based vibration measurements. Measurement 2013;46:2630–7.
[109] Baqersad J, Poozesh P, Niezrecki C, Avitabile P. Photogrammetry and optical methods in structural dynamics – a review. Mech Syst Signal Process 2016.
[110] Feng D, Feng MQ. Vision-based multipoint displacement measurement for structural health monitoring. Struct Control Health Monit 2016;23:876–90.
[111] Kohut P, Holak K, Uhl T, Ortyl Ł, Owerko T, Kuras P, et al. Monitoring of a civil structure's state based on noncontact measurements. Struct Health Monit 2013;12:411–29.
[112] Nikfar F, Konstantinidis D. Evaluation of vision-based measurements for shake-table testing of nonstructural components. J Comput Civil Eng 2016;04016050.
[113] Min JH, Gelo NJ, Jo H. Non-contact and real-time dynamic displacement monitoring using smartphone technologies. J Life Cycle Reliability Saf Eng 2015;4:40–51.
[114] Ozer E, Feng D, Feng MQ. Hybrid motion sensing and experimental modal analysis using collocated smartphone camera and accelerometers. Meas Sci Technol 2017;28:105903.
[115] Warren C, Niezrecki C, Avitabile P, Pingle P. Comparison of FRF measurements and mode shapes determined using optically image based, laser, and accelerometer measurements. Mech Syst Signal Process 2011;25:2191–202.
[116] Shariati A, Schumacher T. Eulerian-based virtual visual sensors to measure dynamic displacements of structures. Struct Control Health Monit 2016:e1977.
[117] Tian L, Pan B. Remote bridge deflection measurement using an advanced video deflectometer and actively illuminated LED targets. Sensors (Basel, Switzerland) 2016;16:1344.
[118] Wang D, DiazDelaO FA, Wang W, Lin X, Patterson EA, Mottershead JE. Uncertainty quantification in DIC with Kriging regression. Opt Lasers Eng 2016;78:182–95.
[119] Roux S, Hild F. Stress intensity factor measurements from digital image correlation: post-processing and integrated approaches. Int J Fract 2006;140:141–57.
[120] Besnard G, Hild F, Roux S. "Finite-element" displacement fields analysis from digital images: application to Portevin–Le Châtelier bands. Exp Mech 2006;46:789–803.
[121] Haddadi H, Belhabib S. Use of rigid-body motion for the investigation and estimation of the measurement errors related to digital image correlation technique. Opt Lasers Eng 2008;46:185–96.
[122] Ferrer B, Mas D, García-Santos JI, Luzi G. Parametric study of the errors obtained from the measurement of the oscillating movement of a bridge using image processing. J Nondestruct Eval 2016;35:53.
[123] Kim S-W, Kim N-S. Dynamic characteristics of suspension bridge hanger cables using digital image processing. NDT E Int 2013;59:25–33.
[124] Neal W, Frédo D, Justin GC, Oral B, William TF, Abe D. Video camera-based vibration measurement for civil infrastructure applications.
[125] Lava P, Van Paepegem W, Coppieters S, De Baere I, Wang Y, Debruyne D. Impact of lens distortions on strain measurements obtained with 2D digital image correlation. Opt Lasers Eng 2013;51:576–84.
[126] Pan B, Yu L, Wu D, Tang L. Systematic errors in two-dimensional digital image correlation due to lens distortion. Opt Lasers Eng 2013;51:140–7.
[127] Yoneyama S, Kikuta H, Kitagawa A, Kitamura K. Lens distortion correction for digital image correlation by measuring rigid body displacement. Opt Eng 2006;45:023602–23609.
[128] Feng D, Scarangello T, Feng MQ, Ye Q. Cable tension force estimate using novel noncontact vision-based sensor. Measurement 2017;99:44–52.
[129] Choi H-S, Cheung J-H, Kim S-H, Ahn J-H. Structural dynamic displacement vision system using digital image processing. NDT E Int 2011;44:597–608.
[130] Anantrasirichai N, Achim A, Kingsbury NG, Bull DR. Atmospheric turbulence mitigation using complex wavelet-based fusion. IEEE Trans Image Process 2013;22:2398–408.
[131] Lee JW, Oh JS, Park MK, Kwon SD, Kwark JW. Bridge displacement measurement system using image processing. In: Advances in bridge maintenance, safety management, and life-cycle performance. CRC Press; 2006. p. 281–2.
[132] Mas D, Ferrer B, Acevedo P, Espinosa J. Methods and algorithms for video-based multi-point frequency measuring and mapping. Measurement 2016;85:164–74.
[133] Wang W, Mottershead JE, Siebert T, Pipino A. Frequency response functions of shape features from full-field vibration measurements using digital image correlation. Mech Syst Signal Process 2012;28:333–47.
[134] Yang Y, Dorn C, Mancini T, Talken Z, Nagarajaiah S, Kenyon G, et al. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements. J Sound Vib 2017;390:232–56.
[135] Yang Y, Dorn C, Mancini T, Talken Z, Kenyon G, Farrar C, et al. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech Syst Signal Process 2017;85:567–90.
[136] Wang W, Mottershead JE, Ihle A, Siebert T, Reinhard Schubach H. Finite element model updating from full-field vibration measurement using digital image correlation. J Sound Vib 2011;330:1599–620.
[137] Oh BK, Hwang JW, Choi SW, Kim Y, Cho T, Park HS. Dynamic displacements-based model updating with motion capture system. Struct Control Health Monit 2016.
[138] Wang WZ, Mottershead JE, Mares C. Mode-shape recognition and finite element model updating using the Zernike moment descriptor. Meas Sci Technol 2009;23:2088–112.
[139] Wang WZ, Mottershead JE, Mares C. Vibration mode shape recognition using image processing. J Sound Vib 2009;326:909–38.
[140] Wang W, Mottershead JE, Sebastian CM, Patterson EA. Shape features and finite element model updating from full-field strain data. Int J Solids Struct 2011;48:1644–57.
[141] Dworakowski Z, Kohut P, Gallina A, Holak K, Uhl T. Vision-based algorithms for damage detection and localization in structural health monitoring. Struct Control Health Monit 2016;35–50.
[142] Feng D, Feng MQ. Identification of structural stiffness and excitation forces in time domain using noncontact vision-based displacement measurement. J Sound Vib 2017;406:15–28.
[143] Gandhi T, Chang R, Trivedi MM. Video and seismic sensor-based structural health monitoring: framework, algorithms, and implementation. IEEE Trans Intell Transp Syst 2007;8:169–80.
[144] Tan CA, Beyene Ashebo D, Feng MQ, Fukuda Y. Integration of traffic information in the structural health monitoring of highway bridges; 2007. p. 65291D-D-10.
[145] Catbas FN, Zaurin R, Gul M, Gokce HB. Sensor networks, computer imaging, and unit influence lines for structural health monitoring: case study for bridge load rating. J Bridge Eng 2012;17:662–70.
[146] Yeum CM, Dyke SJ. Vision-based automated crack detection for bridge inspection. Comput-Aided Civil Infrastructure Eng 2015;30:759–70.
[147] Cha Y-J, Choi W, Büyüköztürk O. Deep learning-based crack damage detection using convolutional neural networks. Comput-Aided Civil Infrastructure Eng 2017;32:361–78.
[148] Kim J, Park C-S, Min K-W. Fast vision-based wave height measurement for dynamic characterization of tuned liquid column dampers. Measurement 2016;89:189–96.
