
Alexandria Engineering Journal (2019) 58, 171–179

ORIGINAL ARTICLE

Study the accuracy of digital close range photogrammetry technique software as a measuring tool
Hossam El-Din Fawzy
Civil Engineering Department, Kafrelsheikh University, Kafrelsheikh, Egypt

Received 26 December 2017; revised 2 April 2018; accepted 24 April 2018


Available online 22 November 2018

KEYWORDS: Digital Close Range Photogrammetry (DCRP); PhotoModeler; Laboratory experiments; Software; (SIFT), (DLT)

Abstract: This paper presents a study of the accuracy of a digital technique that uses a computer photogrammetric software package, PhotoModeler. The software is based mainly on two mathematical theories used to derive measured quantities, the Direct Linear Transformation (DLT) and the Scale Invariant Feature Transform (SIFT), which are discussed as an explanation of the software technology. A set of laboratory experiments was prepared to compare the software with traditional techniques, to assess the highest accuracy it can reach, and to check whether it matches the proposed standard values. The pilot recommendations were also taken into account in the conduct of the tests as well as in the creation of the software projects. The experiments focused on the accuracy of length measurement, volume measurement, observation of ground cartesian coordinates, and camera format size calculation. The results show that this technique can be relied upon effectively for several kinds of measurement as an alternative to traditional methods.

© 2018 The Author. Published by Elsevier B.V. on behalf of Faculty of Engineering, Alexandria University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

E-mail address: hossameldinfawzy@yahoo.com
Peer review under responsibility of Faculty of Engineering, Alexandria University.
https://doi.org/10.1016/j.aej.2018.04.004

1. Introduction

Digital Close Range Photogrammetry (DCRP) is a technology that has undergone an unremitting evolution in recent years, and the importance of using DCRP shows in many different applications and fields, to name but a few: architectural photogrammetry, engineering photogrammetry, industrial photogrammetry, forensic photogrammetry, biostereometrics, motography, multi-media photogrammetry and shape from stereo [1]. DCRP relies mainly on digital technology to store and manipulate the data (images) acquired in the field, in order to create digital models that correspond to the real captured objects. That opens up the possibility of standard metric modeling (length, area and volume, as well as spatial coordinates). There is no doubt that digital modeling works through software based on multiple mathematical models, such as interactive digital stereo systems, PHIDIAS, iWitness, ImageModeler and PhotoModeler [1]. It is also accepted that users tend to look for cheap, easy and safe methods and techniques, not to mention the newest and most accurate ones. This paper focuses on the PhotoModeler software, one of the most famous photogrammetric software packages, pointed to by many previous studies that used a variety of its versions to model the objects under study, such as Gesch (2015) [2] and Wells et al. (2016) [3]. PhotoModeler is mainly based on two mathematical theories, the Direct
Linear Transformation (DLT) [4] and the Scale Invariant Feature Transform (SIFT) [2], used to calculate the camera parameters through a calibration process and to create a dense surface model, respectively. That is why this study set out to examine the accuracy of the software in the targeted measurements, and a set of laboratory and computational experiments was proposed for that purpose. The experiments were designed to simulate different field applications and were carried out at the Survey Engineering Laboratory, Faculty of Engineering, Kafrelsheikh University. They were designed to test the following:

- Calculation of camera format size (a calibration process).
- Measurement of a line length.
- Volume quantification (3D amount measurement).
- Point coordinates relative to a specific axis.

2. PhotoModeler mathematical models

Direct Linear Transformation (DLT) and Scale Invariant Feature Transform (SIFT) are the two mathematical theories discussed in turn below.

2.1. DLT

Photogrammetric three-dimensional coordinate determination is mainly based on the co-linearity equation, which simply states that an object point, the camera projective center and the corresponding image point lie on a straight line [5]. Thus, each point of interest should be captured in at least two images, so that the point coordinates can be measured by the three-dimensional intersection performed by photogrammetric software such as PhotoModeler. Abdel-Aziz and Karara (1974) [6] proposed a simple method for close range photogrammetric data reduction with non-metric cameras; it establishes the DLT between the two-dimensional image coordinates and the corresponding object-space coordinates. The DLT between a point (X, Y, Z) in object space and its corresponding image space coordinates (x, y) can be modeled by two linear fractional equations:
x + Δx + (C1·X + C2·Y + C3·Z + C4) / (C9·X + C10·Y + C11·Z + 1) = 0

y + Δy + (C5·X + C6·Y + C7·Z + C8) / (C9·X + C10·Y + C11·Z + 1) = 0

Taking random error into account, this becomes:

A·vx + A·Δx + x + C1·X + C2·Y + C3·Z + C4 + C9·x·X + C10·x·Y + C11·x·Z = 0

A·vy + A·Δy + y + C5·X + C6·Y + C7·Z + C8 + C9·y·X + C10·y·Y + C11·y·Z = 0   (1)

where:
C1, C2, C3 ... C11 are the transformation parameters; X, Y and Z are the object-space coordinates; vx and vy are the random errors in the image coordinates; and A = (C9·X + C10·Y + C11·Z + 1). Eq. (1) results from the central perspective equations in an obvious manner; Δx and Δy are systematic deformations of the image, i.e. deviations from the central perspective. Eq. (1) can be solved directly for the eleven transformation parameters (C1, C2, C3 ... C11) if the image contains at least six points whose object-space coordinates are known. Eq. (1) is rewritten to serve in a least-squares formulation relating known control points to image coordinate measurements.

Calculation of the lens distortion:

Δx = x̄·(d1·s² + d2·s⁴ + d3·s⁶) + e1·(s² + 2·x̄²) + 2·e2·x̄·ȳ   (2)

Δy = ȳ·(d1·s² + d2·s⁴ + d3·s⁶) + 2·e1·x̄·ȳ + e2·(s² + 2·ȳ²)   (3)

where:

x̄ = x − x0,  ȳ = y − y0,  s² = (x − x0)² + (y − y0)²

x and y are the image coordinates; e1 and e2 are the two asymmetric decentring distortion parameters; d1, d2 and d3 are the three symmetric radial distortion parameters; s is the radial distance from the principal point [7].
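Once the multiplier A is expanded, Eq. (1) is linear in the eleven parameters, so six or more control points give an over-determined system solvable by least squares. The sketch below illustrates that estimation step under simplifying assumptions (the systematic terms Δx, Δy are set to zero and the data is noise-free and synthetic); the function names and test values are illustrative, not part of the paper's workflow.

```python
import numpy as np

# Hedged sketch of the DLT estimation of Eq. (1): each control point
# (X, Y, Z) with image coordinates (x, y) contributes two equations
# linear in C1..C11, following the paper's sign convention.

def project(C, X, Y, Z):
    """Image coordinates from x + (C1 X + C2 Y + C3 Z + C4)/A = 0."""
    A = C[8] * X + C[9] * Y + C[10] * Z + 1.0
    x = -(C[0] * X + C[1] * Y + C[2] * Z + C[3]) / A
    y = -(C[4] * X + C[5] * Y + C[6] * Z + C[7]) / A
    return x, y

def solve_dlt(points, image_coords):
    """Least-squares estimate of C1..C11 from >= 6 control points."""
    M, b = [], []
    for (X, Y, Z), (x, y) in zip(points, image_coords):
        # x-equation: C1 X + C2 Y + C3 Z + C4 + x(C9 X + C10 Y + C11 Z) = -x
        M.append([X, Y, Z, 1, 0, 0, 0, 0, x * X, x * Y, x * Z])
        b.append(-x)
        # y-equation: C5 X + C6 Y + C7 Z + C8 + y(C9 X + C10 Y + C11 Z) = -y
        M.append([0, 0, 0, 0, X, Y, Z, 1, y * X, y * Y, y * Z])
        b.append(-y)
    C, *_ = np.linalg.lstsq(np.array(M), np.array(b), rcond=None)
    return C

# Synthetic check: recover known parameters from six control points.
rng = np.random.default_rng(0)
C_true = np.concatenate([rng.uniform(-1, 1, 8), rng.uniform(-0.05, 0.05, 3)])
pts = rng.uniform(1.0, 2.0, (6, 3))
img = [project(C_true, *p) for p in pts]
C_est = solve_dlt(pts, img)
```

With exact synthetic coordinates the eleven parameters are recovered to machine precision; with real, noisy measurements the same over-determined system is solved in the least-squares sense, which is the formulation referred to above.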
2.2. SIFT

DCRP is a technique for accurately measuring objects directly from digital images captured with a camera at close range. Multiple overlapping images are captured consecutively from different perspectives to produce measurements that can be used to create accurate three-dimensional models of real objects. Determining the coordinates of the camera is not necessary, because the local features of the object are generated directly from the images. Photogrammetric techniques allow image data of an object to be transformed into a three-dimensional model using pre-known digital camera parameters (lens focal length, imager size, number of image pixels and distortion factors). The SIFT features come from the University of British Columbia (UBC); the product is based in part on the SIFT technology developed by David Lowe [8]. SIFT is an approach that aims to transform stored digital image data into a large collection of local feature vectors that are invariant to scaling, rotation and translation, and partially invariant to illumination changes. The approach relies on four prime stages of computation to generate the image's local features [8]:

2.2.1. Scale-space extrema detection

In the first phase, image locations and scales are searched by applying a difference-of-Gaussian function to pick out prospective interest points that are invariant to orientation and scale.

B(x, y, σ) = G(x, y, σ) * M(x, y)

where the Gaussian blur operator can be written as:

G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))   (4)

where:
B is the blurred image, G is the Gaussian blur operator, M is the captured image, x and y are the location coordinates and σ is the "scale" parameter; the greater the scale value, the greater the amount of blur.
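As a rough numeric illustration of Eq. (4) and the difference-of-Gaussian step, the sketch below blurs a synthetic image at two scales and subtracts. The image, the sigma pair and the kernel radius are arbitrary assumptions; a real SIFT implementation builds a whole pyramid of such differences.

```python
import numpy as np

# Blur a synthetic captured image M at two scales and subtract; the
# response of the difference concentrates at edges and corners, the
# prospective interest points mentioned above.

def gaussian_kernel(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()  # renormalise the truncated kernel

def blur(image, sigma, radius=4):
    """B(x, y, sigma) = G * M via a direct (slow but explicit) convolution."""
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(image, radius, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2*radius + 1, j:j + 2*radius + 1] * k)
    return out

# A bright square on a dark background stands in for the captured image M.
M = np.zeros((32, 32))
M[12:20, 12:20] = 1.0
dog = blur(M, 2.0) - blur(M, 1.6)   # D = B(k*sigma) - B(sigma)
```

Far from the square the response is exactly zero, while the square's edges and corners produce strong positive/negative lobes, which is what the extrema search of the next stage operates on.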
2.2.2. Keypoint localization

Measuring the stability of keypoints determines their efficiency; moreover, a precise model is generated to identify scale and location.

D(X) = D + (∂D/∂X)ᵀ·X + ½·Xᵀ·(∂²D/∂X²)·X   (5)
where:
X = (x, y, σ)ᵀ is the offset from the sample point in the scale-space function, and the maximum or minimum is located where the derivative of the function equals zero.

x̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X)   (6)

where:
D can be calculated from image subtraction, and x̂ is the location of the maximum or minimum extremum.
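Eqs. (5) and (6) amount to fitting a quadratic to the sampled difference-of-Gaussian values and stepping to its stationary point. A minimal numeric sketch, with a made-up gradient and Hessian standing in for the image derivatives:

```python
import numpy as np

# Sub-sample refinement of Eqs. (5)-(6): around a sampled extremum the
# DoG value D is modelled by a quadratic in the offset X = (x, y, sigma)^T
# and the refinement is x_hat = -(d2D/dX2)^-1 (dD/dX).
# g and H below are illustrative sample values, not measured ones.

g = np.array([0.04, -0.02, 0.01])        # dD/dX at the sample point
H = np.array([[0.30, 0.02, 0.00],        # d2D/dX2, symmetric
              [0.02, 0.25, 0.01],
              [0.00, 0.01, 0.20]])

x_hat = -np.linalg.solve(H, g)           # Eq. (6): offset to the extremum
grad_at_extremum = g + H @ x_hat         # modelled gradient vanishes here
```

If any component of the offset exceeds 0.5, the extremum actually lies nearer a neighbouring sample and the interpolation is repeated there.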
2.2.3. Orientation assignment

The single or multiple orientations of each keypoint are determined basically from local image directions. The orientation, scale and location of each feature control the operations of image data transformation.

m(x, y) = [(L(x + 1, y) − L(x − 1, y))² + (L(x, y + 1) − L(x, y − 1))²]^0.5   (7)

θ(x, y) = tan⁻¹[(L(x, y + 1) − L(x, y − 1)) / (L(x + 1, y) − L(x − 1, y))]   (8)

θ is the orientation of the peak of the histogram descriptor according to the keypoint number n (θ1 ... θn).
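Eqs. (7) and (8) are simple central differences on the blurred image L. A small sketch follows; the ramp image is an assumed example, and arctan2 is used in place of a bare tan⁻¹ so the orientation keeps its quadrant.

```python
import numpy as np

# Gradient magnitude m (Eq. (7)) and orientation theta (Eq. (8)) of the
# blurred image L, from central differences of the four neighbours.

def grad_mag_ori(L, x, y):
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    m = np.hypot(dx, dy)          # Eq. (7)
    theta = np.arctan2(dy, dx)    # Eq. (8), in radians
    return m, theta

# Brightness grows left to right, one unit per pixel.
L = np.tile(np.arange(8, dtype=float), (8, 1))
m, theta = grad_mag_ori(L, 4, 4)   # m = 2.0, theta = 0.0 for this ramp
```

In SIFT these values, weighted by a Gaussian window, fill the orientation histogram whose peaks give the keypoint orientations θ1 ... θn mentioned above.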
2.2.4. Keypoint descriptor

Around each identified keypoint, local image gradients are measured at the selected scale to create a representation that holds even if the image contains distortion or illumination differences.

From the previous paragraphs, some advantages of this theory can be seen:

- Locality: expression of the depicted objects in a coherent model free of interference and dispersion.
- Distinctiveness: objects are distinguished by collecting as many individual details as possible.
- Ability to collect enough data for even the smallest visual object.
- Close to instantaneous performance.

3. Testfield preparation

A digital mobile phone main camera (Microsoft Lumia 640 XL, 13 MP, f = 3.71 mm constant focus, 1/3 in. sensor size) is used to carry out all the mentioned experiments at the Survey Engineering Lab. Coded target sheets, created by the software from the "Create coded target" pane, are used for an easy automatic marking process. All modeling operations are processed on a computer with a Core i5 6200U 2.3 GHz CPU, 12 GB RAM and an AMD Radeon R5 M330 graphics card. A trial PhotoModeler UAS [64-bit] version is used. A Sokkia SET330RK total station was used for ground observations.

4. Laboratory experiments

4.1. Calculation of camera format size (with a calibration process)

This experiment was conducted to calculate the camera format size by the PhotoModeler calibration process; the calibration result is then compared with the result of an experiment established by installing a specific centric preset A4 white sheet vertically on a dark wall, with a digital phone camera installed in parallel on a tri-horizontal rack. By adjusting the camera center in the imaging settings and matching it to the center of the paper, a picture was captured in which the sheet occupied about 75% of the total photographed area. The captured image was digitally stored and processed to determine the required values.

4.1.1. Camera calibration

A single calibration sheet (Fig. 1) printed on A4 paper with 4 control points is used; one of them is considered the principal point (0, 0, 0). All points on the sheet have Z = 0, and the X and Y values differ by constant predetermined amounts known to the software. Three pictures were captured at (0, 90, 180) degrees of roll around the camera lens axis for each side of the sheet, so the total number of captured pictures = 12 (this number of images is enough to determine the unknown lens distortion values d1, d2, e1, e2 and the camera format size).

Fig. 1 Single calibration sheet model.

Using the PhotoModeler calibration pane, the 12 calibration sheet images were loaded into the software to be analyzed. One of the most important calibration results is the camera format size. This parameter will also be calculated manually and compared with the software calibration result.

4.1.2. Calculation of camera format size manually [4]

The A4 white sheet is taped on a dark wall (Figs. 2 and 3) to make the edges of the paper more contrasted. An image in which the sheet covers about 75% of the scene is captured as shown in Fig. 4; the image is then digitalized (Fig. 5) to determine the parameters (sx, sy, Nx, Ny). The sheet corners in the image, as well as the image corners, are specified precisely in point mark mode; hovering the mouse arrow over any point mark, the software shows at which pixel of the image it is located.

Fx = (Px · f · sx) / (S · Nx)    Fy = (Py · f · sy) / (S · Ny)   (9)

where:
sx, sy: the total number of image pixels in the X and Y directions; Nx, Ny: the number of pixels needed to capture the lengths Px and Py. By applying the equation, the image format size is calculated. Fx, Fy: the image format size in the X and Y directions (horizontal and vertical); Px: the length of the A4 sheet = 297 mm; Py:
the width of the A4 sheet = 210 mm; S: the distance from the center of the lens to the center of the sheet, determined by a measuring tape.

Fig. 2 Elevation and side-view positions.
Fig. 3 Real scene.
Fig. 4 The captured image.
Fig. 5 Digitalized captured image.

4.2. Measuring of a line length

In this experiment, four millimeter-divided standard rulers are used, placed in random directions next to a couple of RAD (Ringed Automatically Detected) targets created by PhotoModeler with a specific distance between their centers. The aim is to make that distance a guiding scale measurement for the software, so that it can measure a distance drawn across two points a standard distance apart on any ruler. Through the software, a standard distance was measured on each ruler and compared with its real observed value, to identify the potential error rate in the software's measurements of longitudinal distances.

4.2.1. Creating RAD coded targets by PhotoModeler [4]

The coded target is a high-contrast dot with a pattern around it that is placed in the scene before capturing images. The targets are identified automatically by the software's image automarking. There are two forms of coding, the RAD target and the RAD dot target; the RAD is more robust and recommended for new projects (Fig. 6). Coded targets are created precisely by the software, with an accurate standard distance between their centers (Fig. 7).

Creation of a coded target by PhotoModeler depends on the following equation:

Minimum target radius = (7.5 · Fu · S) / (f · Pw)   (10)

where:
Fu: the format width of the camera; S: the distance between the farthest target and the camera lens; f: the focal length; Pw: the number of pixels in the width of the image.
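To give Eq. (10) a feel, the sketch below plugs in this study's camera values (format width ≈ 4.54 mm, f = 3.71 mm, 4200 pixels across); the 1 m distance S to the farthest target is an assumed example value, not one taken from the experiments.

```python
# Eq. (10): smallest coded-target radius the automarker can resolve,
# roughly 7.5 pixels of object space on the sensor.

def min_target_radius_mm(Fu_mm, S_mm, f_mm, Pw_px):
    return 7.5 * Fu_mm * S_mm / (f_mm * Pw_px)

# Format width, focal length and pixel count from this study's camera;
# S = 1 m is an illustrative assumption.
r = min_target_radius_mm(Fu_mm=4.54, S_mm=1000.0, f_mm=3.71, Pw_px=4200)
```

At a 1 m range this gives a radius of about 2.2 mm, i.e. targets only a few millimetres across are already usable with this camera, which is why the printed target sheets can stay small.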

4.2.2. PhotoModeler project formation

Four images (four different scenes, separated by more than 30 degrees of horizontal difference) were captured for the same proposed arrangement of the rulers with the RAD control points (Fig. 8). To create a new project, the images were digitally stored and analyzed to be referenced together by the software.

Fig. 6 RAD coded target.
Fig. 7 RAD coded target sheet.
Fig. 8 An image of the four captured images.

An automated-targets project was created (Fig. 9) to generate a three-dimensional model containing the same spatial details as the designated field of the experiment. Four equal lengths were measured on the four rulers shown in the model.

Fig. 9 Created project.

4.3. Volumes quantification (3D amount measurement)

The objective of this experiment is to measure the size of a specific sand mass pre-calibrated with a standard vessel (Fig. 10). The sand was then poured onto a specific horizontal surface carrying a coded target sheet with known spacing between the target centers, serving as the standard scale measurement value for the software (Fig. 11). The phone camera was set to a constant focal length to scan the sample with 10 images from different low-oblique angles directed and concentrated on the sample. The digitally captured images are stored on the computer, and a three-dimensional textured surface model of the sample was created to be measured by the software, so that the error in the volume measurement could be calculated by comparison with the standard volume value.

Fig. 10 A calibrated sample of sand.
Fig. 11 The sand sample setting.

In this experiment a "Dense surface model" project, chosen from the "Projects" pane, is implemented to create a three-dimensional model that can be manipulated for modeling. The surface of the sample was expressed by a multi-triangulation algorithm (Fig. 12), and a smart surface scan was generated to create a dense point cloud (Fig. 13), which was textured from the source images (Fig. 14).

Fig. 12 Describing the sample surface by triangulation formation.
Fig. 13 Generated smart point cloud.

Coded targets were referenced automatically to be used as points for creating a specific surface (Fig. 15) to rule the model and calculate the sample size. The distance created as a reference scale between two RAD coded targets is inserted
as a standard value, to make the software able to measure the size of the model as precisely as possible.

Fig. 14 3D textured model.
Fig. 15 Final simulated model.

4.4. Point coordinates relative to a specific axis

This experiment was conducted in order to determine the accuracy of the software in observing the three-dimensional (cartesian) coordinates of specific points, by fixing a set of targets randomly (X, Y, Z in random differences) and specifying a principal point (X = 0, Y = 0, Z = 0). All points are observed with the Sokkia SET330RK total station (Fig. 16); five images are captured by the digital phone camera with a constant focal length. The images were digitally stored on the computer in order to create a 3D-coordinate model of the observed points (Fig. 17) and to calculate the coordinates of each point relative to the principal point. A comparison was made between the coordinates observed by the total station and the coordinates calculated by the software.

Four images were captured and analyzed for the purpose of modeling the points, and the model was formed by the (automated target) project. Through the model it became easy to determine the coordinates of any control point relative to the origin point (0, 0, 0); the X, Y and Z axes were set typically at the origin point.

Fig. 16 The distributed coded target points observed by the total station.
Fig. 17 3D model of distributed points according to the project form.

5. Results and discussion

5.1. Calculation of camera format size

The calibration results report produced after analyzing the images shows the values of the camera parameters (Table 1).

Table 1 Calibration report.

Parameter                      Value           Deviation
Focal length f                 3.708025 mm     0.006 mm
Xp - principal point           2.253131 mm     0.003 mm
Yp - principal point           1.276118 mm     0.001 mm
Fx - format width              4.544863 mm     0.001 mm
Fy - format height             2.552578 mm     0.00 mm
d1 - radial distortion 1       8.453e-003      3.1e-004
d2 - radial distortion 2       1.785e-003      6.7e-005
e1 - decentering distortion 1  5.130e-004      8.6e-005
e2 - decentering distortion 2  0.000e+000      0.000e+000

PhotoModeler automatic calibration detects that the camera format size is Fx × Fy = 4.54 × 2.55 mm; Fig. 18 shows that sx = 4200 pixels and sy = 2360 pixels.

Nx and Ny are identified from the pixel differences between the A4 sheet corners in the captured image: Nx = 3335 − 906 = 2429 pixels, Ny = 2055 − 309 = 1746 pixels; S = 432 mm is measured with a measuring tape.

So:

Fx = (Px · f · sx) / (S · Nx) = (297 × 3.71 × 4200) / (432 × 2429) = 4.41 mm

Fy = (Py · f · sy) / (S · Ny) = (210 × 3.71 × 2360) / (432 × 1746) = 2.44 mm

Fig. 18 Number of pixels determination.
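The arithmetic of Eq. (9) with these values can be checked directly; the snippet below reproduces the 4.41 mm and 2.44 mm results and the error percentages against the software's reported 4.54 × 2.55 mm.

```python
# Arithmetic check of Eq. (9) with the measured values of Section 5.1.

f_mm = 3.71                        # focal length
S_mm = 432.0                       # lens-to-sheet distance (taped)
sx, sy = 4200, 2360                # total image pixels (Fig. 18)
Nx, Ny = 3335 - 906, 2055 - 309    # pixels spanning the sheet
Px, Py = 297.0, 210.0              # A4 length and width, mm

Fx = Px * f_mm * sx / (S_mm * Nx)  # 4.41 mm
Fy = Py * f_mm * sy / (S_mm * Ny)  # 2.44 mm

# Relative errors against the software's reported format size.
err_x = (4.54 - round(Fx, 2)) / 4.54 * 100  # 2.86 %
err_y = (2.55 - round(Fy, 2)) / 2.55 * 100  # 4.31 %
```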
Table 2 Lines measuring report.

Line no                        1         2         3         4
Angle to X axis (deg)          69.257    48.255    89.433    22.563
Angle to Y axis (deg)          88.930    62.602     0.944    67.471
Angle to Z axis (deg)          20.773    54.034    89.246    88.819
XY plane intersection (deg)    20.743    41.745    67.437    67.437
XZ plane intersection (deg)    69.227    35.966     0.754     1.181
YZ plane intersection (deg)     1.070    27.398    89.056    22.529
Distance (mm)                 102.141   101.013   100.215    97.350
Mean (mm)                     100.126

According to the manual calculations, the camera format size is Fx × Fy = 4.41 × 2.44 mm.

It is clear from this experiment that the calculated values are very close to the values detected by the software, with a percentage of error ranging between 2.86% and 4.31% of the value observed by the software. It is noticeable that the more accurate the manual procedures in determining the parameters, the lower the error rate.

5.2. Measurement of a line length

Four lines, each equal to 100 mm on its ruler, were created to be measured by the software. The measuring pane shows all information about any selected object (Table 2). The measuring pane shows a set of varied measurements of the same nominal line length; calculating the standard deviation gives σ = ±1.77 mm, which indicates that the predicted measurements may range between 98.23 mm and 101.77 mm, a percentage of error of 1.77% in general.
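The ±1.77 mm figure can be reproduced from the four distances in Table 2; the paper's value corresponds to the population (divide-by-n) standard deviation:

```python
import statistics

# Population standard deviation of the four software-measured lengths
# in Table 2, each a nominal 100 mm on the rulers.

lengths_mm = [102.141, 101.013, 100.215, 97.350]
sigma = statistics.pstdev(lengths_mm)   # +/- 1.77 mm
```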
5.3. Volumes quantification (3D amount measurement)

The sample model is bounded by a tight virtual surface that limits the size of the sample; therefore, any small change in the position of the surface control points effectively affects the calculated final value. Once that default surface and the model surface are selected together, the volume of the sample is calculated immediately. Fig. 19 shows how PhotoModeler bounds the volume against the textured created surface: the software divides the simulated surface into two parts, a volume above the surface and another below it, so that both are limited. The volume is defined according to the local coordinate axes. The software measuring pane shows the volume of the sample = 455.726 cm3, while the calibrated volume = 435 cm3, giving a relative expected error of about ±4.7%.
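The quoted volume error can be checked in one line; the exact ratio works out to about 4.76%, consistent with the rounded ±4.7% above.

```python
# Relative error of the PhotoModeler volume against the
# vessel-calibrated sample volume of Section 5.3.

v_model_cm3 = 455.726
v_calibrated_cm3 = 435.0
rel_err_pct = (v_model_cm3 - v_calibrated_cm3) / v_calibrated_cm3 * 100
```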

Fig. 19 Dense surface model volume limitation.

5.4. Point coordinates relative to a specific axis

In the same measuring pane, all the proposed point coordinates are determined by the software; once the points are selected from the 3D pane, their coordinate values are displayed relative to the proposed origin point (0, 0, 0). Each point also carries the particular number it was assigned. The same set of point coordinates was observed with the total station to be compared with the PhotoModeler observations (Table 3).

Table 3 PhotoModeler & total station observations.

            PhotoModeler observations (m)    Total station observations (m)   Residual (m)
Point (ID)  X         Y         Z            X      Y      Z                  DX        DY        DZ
1           0.472958  0.468801  0.405412     0.470  0.470  0.404              0.002958  0.001199  0.001412
2           0.510631  0.408511  0.066141     0.510  0.406  0.068              0.000631  0.002511  0.001859
3           0.352889  0.018446  0.006736     0.354  0.019  0.008              0.001111  0.000554  0.001264
4           0.388111  0.035824  0.009742     0.386  0.035  0.010              0.002111  0.000824  0.000258
5           0.387974  0.479269  0.162004     0.388  0.477  0.160              0.000026  0.002269  0.002004
6           0.400181  0.462826  0.148939     0.403  0.463  0.149              0.002819  0.000174  0.000061
7           0.001924  0.348210  0.020333     0.000  0.350  0.021              0.001924  0.001790  0.000667
8           0.023110  0.478319  0.147056     0.020  0.480  0.145              0.003110  0.001681  0.002056
13          0.090240  0.000400  0.001445     0.093  0.000  0.000              0.002760  0.000400  0.001445
14          0.090025  0.083062  0.001979     0.093  0.083  0.000              0.002975  0.000062  0.001979
15          0.000000  0.000000  0.000000     0.000  0.000  0.000              0.000000  0.000000  0.000000
16          0.000046  0.083030  0.000000     0.000  0.083  0.000              0.000046  0.000030  0.000000
17          0.090627  0.000000  0.000000     0.087  0.000  0.000              0.003627  0.000000  0.000000
18          0.090364  0.083158  0.000733     0.091  0.081  0.000              0.000636  0.002158  0.000733
Mean                                                                          0.0017667 0.000975143 0.000981
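The residual statistics quoted below can be reproduced from the DX column of Table 3 (the DY and DZ columns follow the same pattern). Residual signs are not printed, so these are statistics of the magnitudes; the paper's σx matches the sample (divide-by-n−1) standard deviation.

```python
import statistics

# Mean and sample standard deviation of the X residuals
# (PhotoModeler minus total station) from Table 3.

dX = [0.002958, 0.000631, 0.001111, 0.002111, 0.000026, 0.002819,
      0.001924, 0.003110, 0.002760, 0.002975, 0.000000, 0.000046,
      0.003627, 0.000636]
mean_dX = statistics.fmean(dX)      # 0.0017667 m
sigma_x = statistics.stdev(dX)      # ~0.0013154 m
```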
σx = 0.001315351 m, σy = 0.000938336 m, σz = 0.000825856 m. By comparing the observations and calculating the standard deviation of the average differences between them, it is clear that the variation domain does not exceed 1.3 mm as a maximum possible error value. Fig. 20 shows how close the usual observation method (total station) and the PhotoModeler observation technique are.

Bryan Randles et al. (2010) [9], in their car accident study, concluded that measuring with PhotoModeler as a photogrammetry technique is more accurate than manual methods, especially as photogrammetry reduces potential errors thanks to the visual nature of the analysis of the targeted object. However, compared with standard values there may be a small error in the measured values, depending on the basic manual setup procedures of the targeted objects as well as on the photographing procedure itself. Hossam El-Din Fawzy (2015) [10] found that using this technique to measure volumes reaches an accuracy of 97.21%. That is why the following should be taken into account to obtain the best possible results [4,11]:

- Make sure the angles between the image capturing positions are in the range of 35°–90°.
- All targeted objects should be included in 3 images or more.
- There should be sufficient overlap between captured images.
- Make sure that the total scanning images cover all object characteristics and textures.
- All coded targets and control points must accurately reflect the surfaces; fix them well to avoid spatial differences.

6. Conclusions

The use of photogrammetry as a measurement technique for different purposes is a very effective, fast, cheap and safe method, especially as it has the ability to survey and observe targets without physical contact.

With the widespread use of computer software based on mathematical theories, it became easy for technical specialists to use this technology to measure and quantify various engineering quantities. PhotoModeler was one of the most commonly used computer applications in previous studies for test-field measurements and 3D project creation, such as soil erosion quantification, crime or accident scene reconstruction,
observation of structural element deformations, gridded Digital Elevation Model (DEM) generation, and many other application fields, owing to its ability to create robust, high-contrast coded targets and small/large, single/multi metric calibration sheets, to analyze images from any type of digital camera in varied formats, and to manipulate and process digital images to extract results in a few minutes.

Through the proposed experiments, this study shows the efficiency of using PhotoModeler as a measuring tool. As with any measuring technique, it has a percentage of residual error; in general, the measurement accuracy and reliability of PhotoModeler reach at least an estimated 95% for various engineering quantification purposes such as volume, length, digital non-metric camera calibration and the monitoring of cartesian point coordinates, and that percentage can be increased by adjusting the manual procedures with some simple precautions during project formation.

Fig. 20 A comparison between Total station and PhotoModeler observations.

References

[1] T. Luhmann, S. Robson, S. Kyle, I. Harley, Close Range Photogrammetry: Principles, Techniques and Applications, Whittles Publishing, 2011.
[2] K.R. Gesch, Validation and application of close-range photogrammetry to quantify ephemeral gully erosion, Paper 14319, 2015.
[3] R.R. Wells et al., A measurement method for rill and ephemeral gully erosion assessments, Soil Sci. Soc. Am. J. 80 (2016) 203–214.
[4] Eos Systems Inc., PhotoModeler UAS User's Manual, Vancouver, B.C., Canada, 2016.
[5] Hossam El-Din Fawzy, The accuracy of mobile phone camera instead of high resolution camera in digital close range photogrammetry, Int. J. Civil Eng. Technol. (IJCIET) 6 (1) (2015) 76–85.
[6] Y.I. Abdel-Aziz, H.M. Karara, Accuracy Aspects of Non-Metric Imageries, University of Illinois, Urbana, Illinois, 1974, pp. 1107–1117.
[7] J.P. de Villiers, F.W. Leuschner et al., Centi-pixel accurate real-time inverse distortion correction, in:
International Symposium on Optomechatronic Technologies, California, United States, 2008.
[8] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vision 60 (2) (2004) 91–110.
[9] B. Randles et al., The accuracy of photogrammetry vs. hands-on measurement techniques used in accident reconstruction, SAE International, 2010.
[10] Hossam El-Din Fawzy, The accuracy of determining the volumes using close range photogrammetry, IOSR J. Mech. Civil Eng. 12 (2) (2015) 10–15.
[11] B. Dierckx, C. De Veuster et al., Measuring a Geometry by Photogrammetry: Evaluation of the Approach in View of Experimental Modal Analysis on Automotive Structures, Society of Automotive Engineers, 2001.