Automated Photogrammetry for Construction Monitoring

Computers in Industry
journal homepage: [Link]/locate/compind

Article history:
Received 16 August 2017
Received in revised form 7 March 2018
Accepted 15 March 2018
Available online 22 March 2018

Keywords: Photogrammetry; Point cloud; BIM; Construction site; Monitoring; Delays

Abstract

The construction industry has a poor productivity record, which is predominantly ascribed to inadequate monitoring of how a project is progressing at any given time. Most available approaches do not offer key stakeholders a shared, real-time understanding of project performance and, as a result, fail to identify any slippage against the original schedule. This paper reports on the development of a novel automatic system for monitoring, updating and controlling construction site activities in real time. The proposed system seeks to harness advances in close-range photogrammetry to deliver an original approach capable of continuously monitoring construction activities, with progress status determined at any given time throughout the construction lifecycle. The proposed approach has the potential to identify any deviation from the as-planned construction schedule, so that prompt action can be taken via an automatic notification system that informs decision-makers by email and SMS. The system was rigorously tested in a real-life case study of an in-progress construction site. The findings revealed that the proposed system achieved a high level of accuracy and automation, and was relatively cheap and easy to operate.

© 2018 Elsevier B.V. All rights reserved.
H. Omar et al. / Computers in Industry 98 (2018) 172–182 173
waste of time and money. This demonstrates that current monitoring systems are often unreliable, time-consuming, costly, and prone to subjectivity and errors [11,12].

To address construction project overruns, rigorous and reliable monitoring systems are needed to rapidly detect delays together with their root causes, in order to alleviate them once they occur [13]. Over the last few decades, a great deal of work has sought to improve conventional monitoring, updating and controlling systems in the construction industry [14,15]. In the same context, early efforts applied standalone technology to monitor and control construction site activities, but due to its inherent limitations, recent research has sought to combine two or more technologies to improve the results of the resulting monitoring systems.

2. Current and emerging site monitoring and controlling systems

Several attempts have been made to resolve the salient challenges associated with the limitations of current monitoring and controlling systems. Consequently, a great deal of work has been devoted to developing monitoring technologies. Broadly speaking, these developments can be classified as standalone and integrated technologies.

2.1. Standalone technology

In a standalone system, only one technology is utilised [16]. Earlier efforts by Navon [16] focused on the development of a robotic system that could not only install tiles but also monitor the site, as it was equipped with cameras to measure the progress of the installed tiles. However, the robotic system required continuous human intervention for stabilisation and system calibration. Moreover, the robot's camera was monochromatic, which in poor lighting conditions, especially indoors, adversely affected the accuracy of progress measurements. Therefore, the proposed robotic system lacked accuracy and reliability. It is worth mentioning that Navon [16] himself considered the system incomplete and in need of further development to minimise or eliminate human intervention.

A more developed automatic system proposed by Dick et al. [17] produced an automatic acquisition framework for the 3D as-built model. The model was constructed from a small number of site photos by developing an algorithm that enabled recognition of the structural objects in the site photos. This framework succeeded, to a limited extent, in comparing the as-built 3D model against the as-planned one. However, the results lacked sufficient accuracy for site monitoring, as the optimum accuracy was 83% for vertical elements and 91% for horizontal ones.

A further advanced system proposed by Lukins and Trucco [18] used Computer Vision (CV) to develop a classifier that can observe and detect changes (the progress status) during construction through a fixed camera. This was achieved by developing a prior building model and aligning it with the camera scenes to identify the progress. However, the system suffered from several limitations, including weather interference, occlusions, and daylight fluctuations. In addition, the whole system required continuous manual intervention. Consequently, the accuracy of the classifier is subject to the operator's accuracy, which makes the system prone to errors and time-consuming.

Software developers such as Autodesk have tried to overcome the recognised limitations of conventional monitoring systems. Autodesk produced a semi-automatic cloud-based system to collect data from construction sites using Personal Digital Assistants (PDAs) such as tablets or smartphones. Generally, PDAs overcame some of the recognised limitations, especially those related to the time required to collect data [8,6]. However, the Autodesk-developed software still relied heavily on inspectors to manually insert the construction site updates, which were not only subjective but also unreliable. Consequently, this method lacked instant detection of delays, reliability and accuracy.

A more promising system proposed by Bosché [19] sought to automate progress monitoring on construction sites by developing a point matching method using the Iterative Closest Point (ICP) recognition algorithm. This was achieved by using Laser Scanning (LS) technology to build a 3D point cloud model, which was then compared with the as-planned Building Information Model (BIM). BIM involves the development and use of "a computer software model to simulate the construction and operation of a facility. The resulting model is a data-rich, object-oriented, intelligent and parametric digital representation of the facility, from which views and data are appropriated and analysed to generate information that can be used to make decisions and improve the process of delivering the facility" [38]. Eastman et al. [20] claimed that using BIM produced error-free design and boosted offsite prefabrication.

Bosché [19] succeeded in achieving optimal registration and comparison between the project's Computer-Aided Design (CAD) model and the site LS model. The ICP approach was applied for comparison-based registration to estimate the differences between the two models. This system seemed promising, but it relied heavily on manual intervention to perform synchronisation between the two models (i.e. the as-built point cloud and the as-planned model). Overall, this system is prone to human errors due to manual intervention, and is therefore time-consuming. Bosché [19] himself considered the system quasi-automated. Above all, LS technology is not suitable for the majority of construction sites, due to its high costs as well as the expertise needed for operation.

2.2. Integrated technologies

It is clear that standalone systems that use a single technology are confronted with several limitations. Consequently, there has been a quest to integrate two or more technologies in order to combine their benefits and to reduce the adverse effects of standalone technology [21–23,9,24,25,5].

The thrust of recent efforts has shifted attention to mixed technologies to address the limitations of standalone technology. Accordingly, El-Omari and Moselhi [21] proposed the integration of LS and photogrammetry to enhance the speed and accuracy of the data acquired from construction sites. The system succeeded in building a 3D as-built point cloud model with satisfactory accuracy by integrating LS and photogrammetry techniques and synchronising the common points between the two point cloud models. Using the constructed 3D point cloud model, a comparison could be performed between the progress (as-built) model and the as-planned model. One of the main limitations of this system is the long time needed to perform a single scan. Indeed, to scan the entire built asset, multiple moves from different positions are required, which makes this system time-consuming and cumbersome. In addition, specialised technicians are often needed to perform the scans and collect accurate data. Above all, the LS technique is still relatively expensive, which hampers its applicability for regular updates of construction site activities or for supporting timely and informed management decisions.

A similar system proposed by Ibrahim et al. [22] relied on CV techniques to develop a progress monitoring system. This system analysed the geometric and material properties of the components in a BIM model and compared them with the corresponding elements in the collected site photos (the as-built). The comparison helped to identify the changes, reflected in the progress status of
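The ICP registration used by Bosché [19] alternates two steps: match each as-built point with its nearest as-planned point, then solve in closed form for the rigid transformation that best aligns the matched pairs. The sketch below (Python/NumPy, with synthetic points; not from the original implementation) illustrates only the second step, the SVD-based least-squares rotation and translation that ICP applies at every iteration.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t mapping
    matched points src -> dst (the inner alignment step of ICP)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Illustrative check: recover a known rotation about z and a known shift.
rng = np.random.default_rng(0)
pts = rng.random((50, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
moved = pts @ R_true.T + t_true
R, t = best_rigid_transform(pts, moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In a full ICP loop this solve is repeated after each re-matching of nearest neighbours until the alignment error stops decreasing.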
the construction site, which was then used to update the Work Breakdown Structure (WBS). Despite the promising results, the system suffered from several limitations, including the lack of sufficient detail of the elements in the collected photos, which negatively affected the synchronisation with the BIM elements. Consequently, several elements could not be recognised, which had an impact on the accuracy and reliability of the collected data.

Another integrated system, developed by Golparvar-Fard et al. [23], sought to automate the monitoring of construction site activities by combining photogrammetry and time-lapse techniques to superimpose the as-built point cloud model constructed from the site photos over the as-planned 3D model, using the Augmented Reality (AR) technique. This system was able to depict the progress status in colour codes, where green meant as scheduled, dark green referred to ahead of schedule, and red depicted behind schedule. However, this monitoring system suffered from a mismatched level of detail in the baseline schedule (as-planned) compared with the actual site. The proposed system could only recognise completed activities with the same patterns, and it overlooked uncompleted/ongoing activities. In addition, manual intervention was needed to filter and select suitable photos, which made the process time-consuming and highly dependent on the operator's experience and visual ability. Moreover, replicating the construction progress in a colour-coding system was also deemed unsuitable for staff suffering from colour-blindness. A few years later, Golparvar-Fard et al. [9] further developed the previous research to evaluate the status of construction progress using the same colour-coding approach. In this case, the colour codes represented time and cost values, as an analysis of Earned Value (EV). However, the later study did not address the other limitations of the Golparvar-Fard et al. [23] proposal.

Another system, proposed by Roh et al. [24], endeavoured to solve some of the weaknesses of previous systems by comparing the components of the as-built model, abstracted from site photographs, against the as-planned BIM model. This was achieved by developing a classifier algorithm to synchronise the objects from the BIM model with the corresponding objects in the captured photos. This system succeeded in providing the progress of construction elements in percentages. However, due to significant manual intervention, critical data such as location and time were fed in manually, which made it unreliable and inaccurate. In addition, training the classifier was an extremely time-consuming process, whereby the development of the algorithm required manual synchronisation of the construction elements, and thus the classifier accuracy depended heavily on human accuracy.

More recently, Dimitrov and Golparvar-Fard [25] developed an algorithm that used a material texture recognition technique to automatically create 3D models by extracting information from randomly collected site photos of construction elements (columns, walls, slabs, etc.). The automatically created 3D models were further used to depict the construction progress after comparing them with the as-planned schedules. The proposed system was distinguished by its simplicity, but it was challenged for its inconsistent accuracy, which affected its reliability. The system's accuracy varied from one material to another; for instance, the accuracy of the material recognition algorithm was 92.1% for cast concrete and 92.3% for compacted soil. High accuracy was achieved for three elements only (formwork, grass and marble), but these are considered minor elements in the construction industry (i.e. they usually have less impact on the critical path). In addition, the richness of the pre-prepared material recognition library heavily controlled the accuracy of the created 3D model; thus, some construction elements could not be recognised by the algorithm, leading to significant confusion in the constructed 3D model.

The latest research, by Behnam et al. [5], proposed an automated system to generate and visualise the progress status of repetitive construction activities for linear infrastructure projects. They integrated satellite remote sensing techniques and a Geographic Information System (GIS) web-based platform to build 3D models from satellite images and locations recognised by the GIS system. The accuracy of the created 3D model was improved by the collected site photos. A further comparison between the created 3D model and the 4D BIM model depicts the progress status in the form of visualised charts for easy understanding by the PM and other stakeholders. The proposed automatic system succeeded, to a certain extent, in addressing some of the limitations of previous attempts to automate the monitoring of construction site progress. However, this system cannot be generalised to all site activities, as it is limited to linear projects with repetitive activities, such as pipelines, road works, and railways. In addition, it did not provide sufficient information about start dates, which needed to be inserted manually. Therefore, the system was deemed not fully automated, as manual intervention is required to conjugate the geographical objects between the collected site photos and the satellite images. Consequently, even for longitudinal projects, the accuracy and reliability of the progress updates are still doubtful.

This review revealed that, to date, there is no approach or system that has successfully automated the monitoring, analysing and controlling of construction site activities so as to detect any delays instantly, once they occur. The aim of this study is to develop an original, close-range photogrammetry-based approach for monitoring and controlling construction site activities.

3. Computer vision based close-range photogrammetry

CV-based metric cameras introduced significant technology that enabled the building of 3D models from 2D captured stills [26]. In the last two decades, building accurate 3D models from 2D images has attracted a great deal of attention. The constructed 3D model is used in several fields, such as monitoring construction activities [27].

Traditionally, building 3D models from camera stills required a network of cameras to cover the target object(s) from different views [28]. The captured images are used to construct 3D models that include enough detail to measure the dimensions and coordinates present in the 2D images. Consequently, the camera calibration process is the mainstay of correct data abstraction, as it enables the determination of the accurate relationship between each object in the photo and its physical representation [29]. Poor camera calibration may result in blemishes in images, such as distortions, which in turn can affect the reliability and accuracy of the abstracted information. Therefore, camera calibration tends to be the most crucial process for any computer vision application [30,31].

The purpose of this paper is to develop a novel automatic system that is able to promptly detect delays occurring on construction sites and, as a result, send automatic notifications with progress updates to the concerned decision-makers, such as the PM and client.

4. Methodology

The proposed system is intended to be fully automated, which requires using a close-range photogrammetry technique to collect images from the construction site. These images are stored and fused together in one folder, based on the capturing date, on a cloud server. The registered photos are exported to Agisoft PhotoScan Pro software. This software has a built-in property to recognise and
Tan et al. [32] concluded that there are two main techniques to calibrate cameras: (1) photogrammetric calibration, which requires prerequisite knowledge of the physical object, such as its dimensions, coordinates and directions, in addition to the 2D information from the captured image; and (2) self-calibration, which does not require any prerequisite knowledge. Using the first approach gives more reliable and accurate data compared with the second one, because using advance knowledge reduces the number of parameters [32]. Therefore, this paper selected the photogrammetric calibration approach, using a 9 × 6 checkerboard with a known pattern, size and structure.

To ensure the cameras have the same geometrical representation of the captured scene, they were calibrated by shooting 32 images of the checkerboard from different positions (Fig. 3). These photos were uploaded to MATLAB R2017b to determine the intrinsic (focal length, principal point, lens distortion coefficients) and extrinsic parameters of the cameras. Once the 32 images were uploaded to the MATLAB toolbox, the software recognised the corner points of the checkerboard (Fig. 4). Subsequently, to determine the intrinsic and extrinsic parameters, the MATLAB toolbox re-projected the corners to determine the re-projection errors (Fig. 5a) [31]. During the calibration process, the following measures were considered: (1) an optimum and clear view of the checkerboard, with minimum camera magnification to reduce blurred images [30]; (2) the Angle Of View (AOV) and Depth Of Field (DOF) were calculated to attain a minimum of 75% side overlap for consecutive shots; and (3) the control point pattern was well known and manufactured accurately. The calibration process was iterated a total of 6 times to acquire the best result by reducing the errors in the determined parameters (Fig. 5b and c). The following parameters produced the best results for building the 3D model with accurate and reliable detail: focal length 45 mm; distance from object 27 m; camera angle 45° from the horizontal axes; and Ground Sampling Distance (GSD) of 0.1966 (width) and 0.1971 (height) cm/pixel.

4.2. Case study

To test the proposed system's validity and performance, it was rigorously tested in a real-life case study, to monitor and control the progress. Cameras were installed with precluded motion (i.e. the cameras are static) to capture photos of the Reinforced Concrete (RC) columns. Only the RC column superstructure was selected, mainly because these elements often lie on the critical path. Accordingly, any delay to the superstructure was likely to affect the project adversely, thus leading to overruns.

Fig. 2. a) Point cloud built from captured photos for columns. b) Sample of captured site photos used to construct the point cloud model.

Fig. 1. Developed model to automate monitoring and controlling the construction site activities.
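The reprojection check performed by the MATLAB toolbox can be mimicked in a few lines: project the known checkerboard corners through the estimated intrinsic matrix and average the pixel distance to the detected corners. The sketch below uses made-up intrinsics and synthetic detection noise, not the calibration results reported above; only the 9 × 6 board geometry follows the text.

```python
import numpy as np

# Hypothetical intrinsics in pixels (fx, fy, cx, cy); illustrative values,
# not the calibrated parameters from this study.
K = np.array([[1200.0,    0.0, 960.0],
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])

def project(points_cam, K):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixel coords."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# A 9 x 6 grid of checkerboard corners, 25 mm squares, 1 m from the camera.
xs, ys = np.meshgrid(np.arange(9) * 0.025, np.arange(6) * 0.025)
corners_3d = np.column_stack([xs.ravel(), ys.ravel(), np.ones(54)])

reprojected = project(corners_3d, K)
# Simulated "detected" corners: the reprojection plus 0.2 px detection noise.
rng = np.random.default_rng(0)
detected = reprojected + rng.normal(0.0, 0.2, reprojected.shape)

# Mean reprojection error in pixels (the quantity reported in Fig. 5).
err = np.linalg.norm(detected - reprojected, axis=1).mean()
print(round(err, 2))
```

Lens distortion is deliberately omitted here; a real calibration would also estimate and apply the distortion coefficients before comparing corner positions.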
Table 1
Camera specifications.

Sensor: 1″ CMOS, effective pixels: 20 M
Lens: FOV 84°, 8.8 mm/24 mm (35 mm format equivalent), f/2.8–f/11, auto focus at 1 m–∞
Mechanical/electronic shutter speed: 8–1/2000 s; 8–1/8000 s
Shooting mode: Auto+, Program AE, Shutter priority AE, Aperture priority AE
Wireless file transmitter: Wireless File Transmitter WFT-E7
Photo: JPEG, DNG (RAW), JPEG + DNG
Battery life (CIPA): 500 shots
Internet wireless frequency band: 2.4 GHz
Min/max operating temperature: −10 °C/50 °C
Remote controller/switch: Remote control with N3-type contact, Wireless Controller LC-5, Remote Controller RC-6
Wi-Fi range extender: Wi-Fi speeds up to 2200 Mbps, providing up to 1000 m² of Wi-Fi range
Fig. 3. Images captured of the 9 × 6 planar checkerboard used for camera calibration.
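The spacing between consecutive shots that yields a given side overlap follows directly from each shot's ground footprint. A small worked example (Python; the 84° field of view from Table 1 and the 27 m object distance from the calibration parameters, combined with the 75% overlap target from the text):

```python
import math

def footprint_width(aov_deg, distance_m):
    """Ground width covered by a single shot for a given horizontal AOV."""
    return 2.0 * distance_m * math.tan(math.radians(aov_deg) / 2.0)

def camera_spacing(aov_deg, distance_m, overlap):
    """Camera spacing that yields the requested side overlap between shots."""
    return (1.0 - overlap) * footprint_width(aov_deg, distance_m)

w = footprint_width(84.0, 27.0)       # 84 deg FOV (Table 1) at 27 m
s = camera_spacing(84.0, 27.0, 0.75)  # 75% overlap -> shift 25% of footprint
print(round(w, 2), round(s, 2))       # 48.62 12.16
```

The same relation, applied in reverse, is how the AOV/DOF calculation above arrives at camera positions guaranteeing a minimum of 75% side overlap.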
Twelve cameras were installed and programmed to cover the tested elements and to capture a photo every 10 min for two hours at the end of each working day (i.e. from 5:00 PM till 7:00 PM). This was deemed the best time to capture photos, because of lighting and glare issues. In addition, it best captured progress as a daily cut-off. Photos were taken with 75% side overlap between any two consecutive captured stills. Cameras were installed in a protected container following the Leung et al. [33] container specification. The container reduced the impact of the local climate (i.e. temperature and humidity). In addition, it eliminated the negative effect on the captured images caused mainly by dirt accumulation on the camera [33].

4.3. Algorithms to detect delays

In order to automatically detect any discrepancies between the as-planned and as-built progress status, four algorithms were developed.

4.3.1. Algorithm 1: registration of 3D BIM model column surfaces
The aim of this algorithm is to create internal and external boundary/surface planes (Fig. 6a) and combine them into bounding boxes. The created bounding surfaces contain each column, so as to isolate the points relevant to each column for later processing. The algorithm takes the column surfaces modelled in the 3D BIM model in 6 directions (top, bottom, left, right, front and back), and then simply adds a displacement of 5 mm to each in each direction (external and internal). The result is a pair of surfaces for each of the 6 sides, containing all relevant point cloud points.

Algorithm 1
Input
  Column from 3D BIM model
  Column surfaces (S0, S1, S2, …, S5)
  Displacement offset distance Doffset
1. Recognise the column's original faces (S0, S1, S2, …, S5)
2. Add external planar surfaces surrounding the column with offset Doff1 = (Se0 + 5 mm, …, Se5 + 5 mm)
3. Identify intersections between the planar surfaces (Se0, Se1, …, Se5)
4. Combine the planar surfaces (Se0, Se1, …, Se5)
5. Add internal planar surfaces surrounding the column with offset Doff2 = (Si0 + 5 mm, …, Si5 + 5 mm)
6. Recognise intersections between the planar surfaces (Si0, Si1, …, Si5)
7. Combine the planar surfaces (Si0, Si1, …, Si5)
Output
  Column in the 3D BIM model surrounded with additional external and internal surfaces
  External surfaces (Se0, Se1, Se2, …, Se5) (Fig. 6b)
  Internal surfaces (Si0, Si1, Si2, …, Si5) (Fig. 6c)

Fig. 4. The centre of each circle marks the images' detected corners, and the red cross intersections mark the corners reprojected using the calibrated camera parameters. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
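For an axis-aligned column, the paired offset surfaces of Algorithm 1 reduce to two nested bounding boxes, one grown 5 mm outward and one shrunk 5 mm inward from the modelled faces. A minimal sketch (Python/NumPy; the 0.4 m × 0.4 m × 3 m column is illustrative, not from the case study):

```python
import numpy as np

OFFSET = 0.005  # the 5 mm displacement used in Algorithm 1

def offset_boxes(col_min, col_max, d=OFFSET):
    """Return (external, internal) axis-aligned boxes around a column.

    Each box is a (min_corner, max_corner) pair: the external box moves
    every face outward by d, the internal box moves every face inward by d,
    giving the paired surfaces that will trap the column's point cloud.
    """
    col_min = np.asarray(col_min, float)
    col_max = np.asarray(col_max, float)
    return (col_min - d, col_max + d), (col_min + d, col_max - d)

# Illustrative 0.4 m x 0.4 m column, 3 m tall, with one corner at the origin.
external, internal = offset_boxes([0.0, 0.0, 0.0], [0.4, 0.4, 3.0])
print(external[0], internal[0])  # [-0.005 -0.005 -0.005] [0.005 0.005 0.005]
```

Columns that are not axis-aligned would instead need the plane-intersection steps of the pseudocode, but the nested-box picture is the same.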
Fig. 5. a) Reprojection error is the offset from the circle centre to the cross intersection. b) High reprojection mean error (0.87 pixels). c) Adjusted reprojection mean error (0.18 pixels).
4.3.2. Algorithm 2: aligning the 3D point cloud with the 3D column from the BIM model
This algorithm seeks to align the two models, i.e. the 3D point cloud model developed by Agisoft PhotoScan Pro and the 3D BIM model with the external and internal surface planes from algorithm 1.

Algorithm 2
Input
  Internal offset planes (Si0, Si1, Si2, …, Si5)
  External offset planes (Se0, Se1, Se2, …, Se5)
1. Origin Apj is the mean point in 3D:
   Apj = (1/n) Σi=1..n (Xi, Yi, Zi)
   where n is the number of points (Xi, Yi, Zi) in the point cloud shaping the column
2. Define the column origin Am in the BIM model
3. Translate the point cloud origin Apj to the column origin Am
4. Compute the transformation (R, T)
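Steps 1–3 of Algorithm 2 amount to computing the centroid Apj of the column's point cloud and translating the cloud so that Apj coincides with the BIM column origin Am; step 4 then estimates the remaining transformation (R, T). A short sketch of the translation part (Python/NumPy, with synthetic points; the variable names follow the pseudocode):

```python
import numpy as np

def align_to_bim_origin(points, Am):
    """Steps 1-3 of Algorithm 2: compute the cloud centroid Apj and
    translate the cloud so that Apj coincides with the BIM origin Am."""
    Apj = points.mean(axis=0)          # step 1: mean point in 3D
    T = np.asarray(Am, float) - Apj    # step 3: translation vector
    return points + T, Apj, T

rng = np.random.default_rng(1)
cloud = rng.random((100, 3)) + [5.0, 2.0, 0.0]   # synthetic column points
aligned, Apj, T = align_to_bim_origin(cloud, Am=[0.0, 0.0, 0.0])
print(np.allclose(aligned.mean(axis=0), 0.0))  # True
```

A rotation R, if needed, can then be estimated on the centred clouds, for example with the SVD solve shown earlier for ICP.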
Fig. 6. a) Creating virtual internal and external surface planes. b) Creating virtual external surface planes. c) Creating virtual internal surface planes. d) Points determined within the surface boundaries, according to algorithm 1.

4.3.3. Algorithm 3: confining the column point cloud within the surface boundaries
The aim of this algorithm is to isolate the points belonging to each column from the 3D point cloud aligned in algorithm 2, by confining these points between the external and internal surface boundaries.

Algorithm 3
Input
  External surfaces (Se0, Se1, Se2, …, Se5)
  Internal surfaces (Si0, Si1, Si2, …, Si5)
  Aligned point cloud Pc with the 3D BIM model (output of algorithm 2)
1. (Pn, P0, P1) ∈ Pc, where
   Pn represents points outside the surface boundaries (Fig. 6d)
   P0 represents points located within the surface boundaries (between internal and external)
   P1 represents points located within the internal boundary only
2. For every point p ∈ Pc: if p lies between Si0 and Se0, or Si1 and Se1, …, or Si5 and Se5, register p ∈ P0
3. For every point p ∈ Pc: if p lies between Si0 and Si5 (front and back internal surfaces), register p ∈ P1
4. For every point p ∈ Pc: if p ∉ P1 and p ∉ P0, register p ∈ Pn
5. Report the column point cloud P0; the algorithm stops.
Output
  Confined point cloud P0 between the internal and external surface boundaries (Fig. 6d)
  3D point cloud cleared from clutter and occlusion

4.3.4. Algorithm 4: calculation of the progressive column volume
The aim of this algorithm is to determine the as-built concrete volume for each column, by measuring the true column height H (maximum height) obtained from the point cloud in algorithm 3. To obtain the column's progressive volume, arithmetic multiplication (H × A) is applied, where A is the cross-sectional area of the column. The algorithm starts by calculating the maximum height of the point cloud confined between the internal and external boundaries. The height calculation starts from the lower surface and progressively increases the height by a constant value of 10 mm; the existence and density of points in each increment are then computed.

Algorithm 4
Input
  3D BIM column cross-sectional area A
  Column with point cloud density P0
A = W × L
  where W is the column width and L is the column length, from the BIM model
∴ Vj = H × W × L
The point cloud height H represents the true progress of the concrete element; Vj is the as-built progressive volume for any column.

5. Results and discussion

One of the most challenging issues encountered in the implementation of this system relates to occlusions. Occlusion is defined as any blockage of the camera's vision by a physical object [34]. Occlusions can be classified into two main categories based on their source: static occlusion, which results from a static object (such as scaffolding, steel rebar, timber, etc.), and dynamic occlusion, which results from movable objects (such as labourers, machines, etc.). Indeed, it is extremely difficult to obtain a clear image without occlusions (Fig. 7). On a construction site, occlusions are often inevitable [25,22].

To reduce dynamic occlusion, it was decided to capture site photos after duty time (i.e. after 5:00 pm). The selected time significantly reduced the dynamic occlusions in the captured photos because the site is shut down; accordingly, there are no active labourers or machines. Although 60% side overlap is the Agisoft PhotoScan Pro recommendation, it was decided to use 75% side overlap for the captured photos, to ensure the same object appears in at least 3 photos from different viewing positions. This enabled the software to automatically remove any dynamic occlusion by applying simple background-foreground subtraction on the 2D images [35,34,36,37].

However, static objects usually occlude the camera's vision in two forms, either partially or fully. Full occlusion is defined as any condition that precludes the camera from capturing photo(s) of the target object [34]. Full occlusion was deemed beyond the scope of this research.

The case study revealed that the whole process, from the initial storing of the photos to the sending of notification emails and SMS, takes less than 60 min. Details of the accuracy obtained from the case study are shown in Table 2. The recorded error for the horizontal dimensions is 0 mm, i.e. a horizontal accuracy of 100% was achieved, because the data was abstracted from the BIM model. The error for the vertical dimensions is 6 mm for every linear metre, resulting in a vertical accuracy of 99.4%. The accuracy for areas and volumes is 100% and 99.99% respectively, and 100% accuracy was achieved for quantified numbers.

Fig. 9. a) SMS sent automatically to notify the project manager of the deviated activities detected by the system. b) Email sent automatically to notify the project manager of the deviated activities detected by the system.
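For an axis-aligned column, Algorithms 3 and 4 above reduce to filtering the aligned cloud with the internal/external boxes and then scanning the confined points upward in 10 mm slabs to find the as-built height H, from which Vj = H × W × L. A condensed sketch (Python/NumPy; representing the boundaries as min/max box corners and sampling synthetic face points are both assumptions made for illustration):

```python
import numpy as np

def confine(points, external, internal):
    """Algorithm 3, simplified to axis-aligned boxes: keep the shell P0 of
    points inside the external box but outside the internal box."""
    (e_min, e_max), (i_min, i_max) = external, internal
    in_ext = np.all((points >= e_min) & (points <= e_max), axis=1)
    in_int = np.all((points >= i_min) & (points <= i_max), axis=1)
    return points[in_ext & ~in_int]                      # P0

def progressive_height(p0, z0, step=0.010):
    """Algorithm 4's height scan: grow H from the lower surface z0 in
    10 mm increments while each slab still contains points."""
    h = 0.0
    while np.any((p0[:, 2] >= z0 + h) & (p0[:, 2] < z0 + h + step)):
        h += step
    return h

# Illustrative 0.4 x 0.4 m column cast to 1.5 m; sample points on one face.
z = np.linspace(0.0, 1.5, 2000, endpoint=False)
x = np.linspace(0.0, 0.4, 2000)
face_pts = np.column_stack([x, np.zeros(2000), z])       # face at y = 0
external = (np.array([-0.005, -0.005, -0.005]), np.array([0.405, 0.405, 3.005]))
internal = (np.array([0.005, 0.005, 0.005]), np.array([0.395, 0.395, 2.995]))

p0 = confine(face_pts, external, internal)
H = progressive_height(p0, z0=0.0)
W = L = 0.4                                              # from the BIM model
print(round(H, 2), round(H * W * L, 3))                  # 1.5 0.24
```

Comparing Vj for each cut-off date against the as-planned volume is what triggers the email/SMS notifications described above.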
[21] S. El-Omari, O. Moselhi, Integrating automated data acquisition technologies for progress reporting of construction projects, Autom. Constr. 20 (2011) 699–705, doi:[Link].
[22] Y. Ibrahim, T. Lukins, X. Zhang, A. Kaka, Towards automated progress assessment of work package components in construction projects using computer vision, J. Adv. Eng. Inform. 23 (2009) 93–103, doi:10.1016/j.aei.2008.07.002.
[23] M. Golparvar-Fard, F. Pena-Mora, C. Arboleda, S. Lee, Visualization of construction progress monitoring with 4D simulation model overlaid on time-lapse photographs, Comput. Civil Eng. 23 (6) (2009) 391–404.
[24] S. Roh, Z. Aziz, F. Pena-Mora, An object-based 3D walk-through model for interior construction progress monitoring, Autom. Constr. 20 (2011) 66–75, doi:[Link].
[25] M. Dimitrov, M. Golparvar-Fard, Vision-based material recognition for automated monitoring of construction progress and generating building information modelling from unordered site image collections, J. Adv. Eng. Inform. 28 (1) (2014) 37–49, doi:[Link].
[26] Y. Dong, Z. Ye, X. He, A novel calibration method combined with calibration toolbox and generic algorithm, IEEE (2016) 1416–1420, doi:10.1109/ICIEA.2016.7603807.
[27] E. Kwak, I. Detchev, A. Habib, M. El-Badry, C. Hughes, Precise photogrammetric reconstruction using model-based image fitting for 3D beam deformation monitoring, J. Survey. Eng. 139 (2013) 14–155, doi:10.1061/(ASCE)SU.1943-5428.0000105.
[28] I. Detchev, M. Mazaheri, S. Rondeel, A. Habib, Calibration of multi-camera photogrammetric systems, Int. Arch. Photogr. Remote Sens. Spat. Inform. Sci. XL (1) (2014) 101–108.
[29] Z. Zhang, A flexible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. 22 (11) (2000) 1330–1334, doi:10.1109/34.888718.
[30] G. Percoco, M. Guerra, A. Salmeron, L. Galantucci, Experimental investigation on camera calibration for 3D photogrammetric scanning of micro-features for micrometric resolution, Int. J. Adv. Manuf. Technol. 91 (9) (2017) 2935–2947, doi:[Link].
[31] A. Fetić, D. Jurić, D. Osmanković, The procedure of a camera calibration using Camera Calibration Toolbox for MATLAB, Proceedings of the 35th International Convention MIPRO, Opatija, Croatia (2012) 1752–1757.
[32] L. Tan, Y. Wang, H. Yu, J. Zhu, Automatic camera calibration using active displays of a virtual pattern, IEEE (2017) 1416–1428, doi:10.1109/ICIEA.2016.7603807.
[33] S. Leung, S. Mak, B. Lee, Using a real-time integrated communication system to monitor the progress and quality of construction works, Autom. Constr. 17 (2008) 749–757, doi:[Link].
[34] C. Chi, Y. Bisheng, Dynamic occlusion detection and inpainting of in situ captured terrestrial laser scanning point cloud sequence, J. Photogr. Remote Sens. 119 (2016) 90–107, doi:[Link].
Companies' public annual reports (2013). [Link]/industries/capital-projects-and-infrastructure/our-insights/the-construction-productivity-imperative (Accessed 5 August 2017).
[35] H. Nguyen, A. Smeulders, Robust tracking using foreground-background texture discrimination, Int. J. Comput. Vis. 69 (3) (2006) 277–293, doi:10.1007/s11263-006-7067-x.
[36] S. Sengar, S. Mukhopadhyay, Foreground detection via background subtraction and improved three-frame differencing, Arab. J. Sci. Eng. 42 (8) (2017) 3621–3633, doi:[Link].
[37] Agisoft LLC, Agisoft PhotoScan User Manual, Version 1.3, Professional Edition, 2017. Available at: [Link] (Accessed 2 January 2018).
American Institute of Architects, The Architect's Handbook of Professional Practice, John Wiley and Sons, Inc., New York, 2002.
[38] Associated General Contractors of America, The Contractor's Guide to BIM, 1st edition, AGC Research Foundation, Las Vegas, 2005.