PII: S0263-2241(18)30490-1
DOI: https://doi.org/10.1016/j.measurement.2018.05.100
Reference: MEASUR 5601
Please cite this article as: Y-T. Lin, Y-C. Lin, J-Y. Han, Automatic Water-Level Detection Using Single-Camera
Images with Varied Poses, Measurement (2018), doi: https://doi.org/10.1016/j.measurement.2018.05.100
Automatic Water-Level Detection Using Single-Camera Images with Varied Poses
Abstract
Monitoring of the water level of reservoirs, rivers, and lakes is an essential task for hydraulic
facility management and disaster mitigation. Nowadays, although automated instruments for
water-level detection have been widely applied, their reliability and robustness still need to be
further improved. On the other hand, surveillance cameras are typically available at major
rivers and hydraulic facilities and could provide opportune field observations of the water level. In this study, an automatic water-level detection approach using single-camera images is developed. It employs digital image processing techniques to minimize the ambient
noise and to detect the water level in real-time. The photogrammetric technique is also
introduced to track the camera movements and accurately determine the water level in the
object space in a rigorous manner. According to the results of real field experiments, it has
been illustrated that the proposed approach is capable of overcoming interference due to
changing weather conditions and unexpected movement of the camera and automatically
identifying the water level. Consequently, it can provide an efficient and reliable water-level monitoring solution.
Keywords: facility management; automated analysis
1. Introduction
Heavy precipitation in a short period of time can cause flash floods in riparian regions, so real-time water-level monitoring systems are particularly important for flood warnings and disaster risk assessment [1, 2]. In densely populated areas, man-made facilities like river channels and reservoirs are installed with water-level measuring equipment to enable real-time monitoring and flood hazard
management. Water-level measuring techniques can be divided into two categories: contact
measurement systems and non-contact measurement systems [3]. Sensors in modern contact
measurement systems (e.g. float sensors [4] and submersible pressure sensors) are compact,
low cost, and easy to use. However, sediment deposition accompanied by frequent floods
often buries or damages the sensors, causing inaccuracies in reporting water levels.
Limitations on the local electricity supply and Internet access, influences from fluctuations of
water temperature, and other environmental factors also introduce uncertainties into the
readings [5]. On the other hand, non-contact sensors (e.g. radar sensors [6, 7] or ultrasonic
sensors [8]) can collect high-quality observations while requiring only low maintenance and
low power consumption. However, non-contact sensors are usually installed on adjacent
structures, such as bridges, poles, and scaffoldings. As a result, the obtained measurements
often contain noticeable errors due to the interference of vibration from the structures to
which the sensors are attached [9, 10].
Another non-contact technique for measuring the water level is to capture images of the
water gauges for visual interpretation. For monitoring of water levels during a flood, a typical
method is to take a series of images and to manually determine the gauge reading in each
image. However, by-eye manual readings are not only inefficient but also often biased and
inaccurate, especially when the image is of low quality. Therefore, how to automatically and
accurately identify gauge readings from images has been an issue of concern. With
advancement in digital image processing, gauge or water line images can now be read
automatically, replacing time- and labor-consuming manual methods. Imaging methods based
on template-matching algorithms [1, 11] or pattern recognition [12, 13] have been proposed
in some recent studies. However, image quality is still a major concern in automatic
gauge-reading identification, limiting its applicability in real field cases with significant ambient noise.
Alternatively, via the photogrammetric method, the water level can be calculated by
introducing fiducials or control points to link the image to the world coordinates [14–17].
Although they are less dependent on image quality, such methods face additional challenges
like lens distortion and unexpected camera movements that could introduce significant
perturbations into the calculations. In this study, the potential of integrating image-processing techniques with photogrammetric principles is explored.
The aim of this study is to develop an automatic water-level detection approach for flood
hazard management that considers both the ambient noise and camera movements. In the
proposed method, image enhancement was employed first to reduce image noise.
Feature-matching techniques and the photogrammetric principle were then applied to track
the camera’s movements and determine the water level in real-time. Finally, field tests were
performed and the results were evaluated to demonstrate the feasibility and reliability of the
proposed approach.
2. Methodology
The proposed water-level detection approach consists of two ideas: identifying the water line
in the water-gauge image and solving the water level based on photogrammetric principles
(i.e. the well-known collinearity equations). In order to solve the collinearity equations, the
object (world) coordinate system must be defined according to the description in Section 2.1.
Section 2.1 also introduces the parameters that establish the relation between the object space
and the image space. Section 2.2 then introduces a method to automatically check and update
these parameters whenever a new image is read. Section 2.3 presents a strategy to reduce
ambient noises and enhance the quality of line detection. Finally, the water level can be determined in the object space based on the identified water line.
2.1 System initialization
The proposed method’s aim is to monitor water level at man-made facilities like river
channels and reservoirs, which is often key to flood hazard management. The condition for setting up a camera is that the full range of water-level change, as well as the gauge interval, can be completely observed in the images. According to the sampling theorem, the size of the gauge interval in the image needs to be larger than 2 pixels in order to clearly identify the water-level value, and the
distance between the camera and the gauge is adjusted according to this condition. The
distance is relevant: a larger distance increases the ground sample distance and thus decreases the image measurement accuracy. In the case study described below, the object distance is about 20 m. Setting up the camera approximately along the Y-axis (Fig. 1) may simplify computation, but it is not mandatory. After the camera is installed, a system initialization process must be carried out in order to establish the relation between the object coordinate system and
the image coordinate system. The process includes three steps: (1) identifying calibration points in the field, (2) measuring the coordinates of the calibration points, and (3) obtaining the interior and exterior orientations of the camera.
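The 2-pixel criterion above can be checked with a back-of-the-envelope calculation. This is a minimal sketch assuming a pinhole model: the focal length and object distance are the case-study values from Section 3, while the sensor pixel pitch and the gauge interval are assumed values for illustration.

```python
# Back-of-the-envelope check of the 2-pixel visibility criterion.
f = 0.031719          # focal length [m] (from the case study)
distance = 19.63      # camera-to-gauge distance [m] (from the case study)
pixel_pitch = 4.3e-6  # sensor pixel size [m] (assumed)
interval = 0.01       # one water-gauge interval [m] (assumed)

# The ground sample distance grows linearly with the object distance,
# which is why a larger distance decreases image measurement accuracy.
gsd = pixel_pitch * distance / f      # metres on the gauge per image pixel
pixels_per_interval = interval / gsd  # image size of one gauge interval

# The gauge interval must span more than 2 pixels to be clearly identified.
assert pixels_per_interval > 2
```

With these assumed numbers one gauge interval spans roughly 3.8 pixels, so an object distance near 20 m satisfies the criterion.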
Upon setting up the camera, an image is first taken as a reference image, and multiple points on the gauge are selected manually as calibration points. Figure 1 illustrates
the geometry of the system, where H marks the unknown water level, P1–P6 are the
calibration points in the object coordinate system, and h and p1–p6 denote their
correspondences in the image coordinate system, respectively. In this study, the calibration
points also serve as reference points in the image-matching process; therefore, it is important
to choose points with distinct features as calibration points. Note that there should be at least
three points in order to solve the exterior orientation parameters of the camera. Additionally,
at least two points on the Z-axis should be selected in order to conduct single-camera water-level determination.
The second step is to measure the object and image coordinates of the calibration points
manually. The object and image coordinate systems used in the proposed approach are
illustrated in Fig. 1. The origin of the object coordinate system is defined at the zero level of
the water gauge. The Z-axis is coincident with the gauge and the X-axis is perpendicular to it.
The scale is the same as in the water gauge. Based on the definition, the object coordinates of
the calibration points are denoted as Pi (Xi, 0, Zi) and can be measured accurately by
equipment like a total station or laser rangefinder. The image coordinate system corresponds
to the image’s pixel indices. The origin is at the top left of the image, the positive x-axis
points right, and the positive y-axis points down. The image coordinates of the calibration
points pi (xi, yi) can then be measured manually. In addition, the slope of the water line in the
reference image is calculated and will act as an initial value in the Hough transform (see
Section 2.3).
The final step to establish the relation between the object space and the image space is
obtaining the interior and exterior orientations of the camera. The interior orientation includes
the focal length, the location of the principal point, and a description of the lens distortion.
These parameters are acquired through camera calibration, which can be done in a laboratory
prior to setting up the camera in the field. Because the focal length is a constant in the model,
the minimum requirement for the camera is to have a fixed-focal-length lens. The exterior
orientation describes the position and orientation of the camera in the object space, which includes the camera position (X_L, Y_L, Z_L) and the rotation angles (ω_x, ω_y, ω_z) defining the camera orientation. These parameters can be obtained by solving the well-known collinearity equations [18]:

$$x = x_0 - f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad y = y_0 - f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)} \quad (1)$$
where (x, y) represent the image coordinates of the calibration point; (X, Y, Z) represent the
object coordinates of the calibration point; m11–m33 are the elements in a 3 × 3 rotation matrix
M; (x0, y0) are the offsets from the fiducial-based origin to the perspective center origin; and f
is the focal length. The rotation matrix M represents the camera orientation in the object space and is composed of rotations about the three axes:

$$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega_x & \sin\omega_x \\ 0 & -\sin\omega_x & \cos\omega_x \end{bmatrix} \begin{bmatrix} \cos\omega_y & 0 & -\sin\omega_y \\ 0 & 1 & 0 \\ \sin\omega_y & 0 & \cos\omega_y \end{bmatrix} \begin{bmatrix} \cos\omega_z & \sin\omega_z & 0 \\ -\sin\omega_z & \cos\omega_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$
In Eq. (1), x0, y0, and f are interior parameters and have been decided previously by camera
calibration. Once three or more calibration points have been identified and measured, one can
write at least six equations based on Eq. (1). The camera's position (X_L, Y_L, Z_L) and orientation (ω_x, ω_y, ω_z) can then be uniquely determined by applying a least-squares technique [19, 20]. After obtaining the interior and exterior orientations, the relation between object space and image space, which is the foundation of the proposed single-camera approach, is established.
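The least-squares solution of the collinearity equations can be sketched as follows. This is a minimal synthetic example, not the authors' implementation: the rotation order follows Eq. (2), but the sign conventions, the Gauss-Newton scheme with a numerical Jacobian, and the camera pose and point layout are assumptions made for illustration.

```python
import numpy as np

def rotation_matrix(wx, wy, wz):
    """M = Rx(wx) Ry(wy) Rz(wz), the rotation order of Eq. (2)."""
    cx, sx = np.cos(wx), np.sin(wx)
    cy, sy = np.cos(wy), np.sin(wy)
    cz, sz = np.cos(wz), np.sin(wz)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(params, pts, f):
    """Collinearity equations (Eq. 1), with x0 = y0 = 0 for brevity."""
    XL, YL, ZL, wx, wy, wz = params
    u = (rotation_matrix(wx, wy, wz) @ (pts - [XL, YL, ZL]).T).T
    return np.column_stack((-f * u[:, 0] / u[:, 2], -f * u[:, 1] / u[:, 2]))

def resection(image_obs, pts, f, init, iterations=30):
    """Gauss-Newton least squares: 6 unknowns, 2 equations per point."""
    p = np.asarray(init, dtype=float)
    for _ in range(iterations):
        r = (image_obs - project(p, pts, f)).ravel()
        J = np.empty((r.size, 6))
        for k in range(6):                      # numerical Jacobian
            step = np.zeros(6); step[k] = 1e-6
            J[:, k] = (project(p + step, pts, f)
                       - project(p - step, pts, f)).ravel() / 2e-6
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Synthetic check: recover a known pose from six calibration points.
true_pose = np.array([1.0, 2.0, 10.0, 0.05, -0.03, 0.04])
pts = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 1],
                [4, 4, 2], [2, 1, 3], [1, 3, 0.5]], dtype=float)
f = 0.032  # a 32 mm fixed-focal-length lens, in metres (assumed)
obs = project(true_pose, pts, f)
est = resection(obs, pts, f, init=[0.5, 1.5, 9.5, 0, 0, 0])
assert np.allclose(est, true_pose, atol=1e-5)
```

With noise-free observations and a reasonable initial guess, the iteration converges to the true pose; real observations would leave small least-squares residuals instead.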
2.2 Camera-movement detection
To obtain the water level with high accuracy by the proposed approach, accurate exterior
parameters are essential. Although these parameters were evaluated manually in the system
initialization process, they need to be updated once the camera rotates or moves. In this
section, a strategy that detects camera movement and adjusts the exterior orientation
parameters is developed by matching the calibration points in sequential images. The strategy
is applied whenever a new image is read, and it is fully automatic. Hence, no manual
calibration work is required once the system is set up and initialized in the field.
The method matches the calibration points based on two well-known area-based image-matching techniques: least-squares matching (LSM) and normalized cross-correlation (NCC) [25–27]. In general, NCC is more robust than LSM since it requires
a less accurate initial position of the reference points. LSM, on the other hand, can achieve a
higher precision but is very sensitive to the initial estimation [28, 29]. Instead of the
commonly used strategy that applies NCC first to obtain an approximate position, followed
by LSM for refinement, the proposed method uses LSM prior to NCC, where the former only
acts as a camera-movement detector and the latter determines the positions of the matching
points.
A target window consisting of the reference point (x_i^t, y_i^t) (i.e. the i-th calibration point on the image at time t) and its surrounding neighborhood is selected in the beginning. Whenever a new image is read, the LSM algorithm determines the transformation that best fits the search window S(x_i^{t+1}, y_i^{t+1}) to the target window T(x_i^t, y_i^t) by minimizing the gray-value difference V [21, 30] between the two windows and simultaneously estimating the transformation parameters, in which h_1 is a radiometric shift and h_2 is a radiometric scale factor. Since the LSM technique is
very sensitive to the initial position of the reference points, the matching process gives a poor
result when the camera moves or rotates. This seeming disadvantage can instead serve as a
signal of camera movements. First, a pre-defined tolerance is set based on the image resolution. In this study, the tolerance is simply set equal to the number of pixels per water-gauge unit, which means any camera movement causing a difference of more than one gauge unit will be detected. A change in camera orientation is then recognized if the average distance between the reference points and their corresponding matching points is larger than the tolerance.
If the camera has moved or rotated, solving the coordinates of matching points by LSM
will not be accurate enough. For this reason, NCC is adopted to find the matching points once
the camera movement has been recognized by LSM. Equation (6) gives a basic definition of the normalized cross-correlation coefficient:

$$\rho = \frac{\sigma_{TS}}{\sigma_T\,\sigma_S} = \frac{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[T(x^t, y^t) - \bar{T}\right]\left[S(x^{t+1}, y^{t+1}) - \bar{S}\right]}{\sqrt{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[T(x^t, y^t) - \bar{T}\right]^2 \, \sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[S(x^{t+1}, y^{t+1}) - \bar{S}\right]^2}} \quad (6)$$
where $\bar{T}$ and $\bar{S}$ denote the average gray values, and $\sigma_T$ and $\sigma_S$ denote the standard deviations of the gray values, of the target window and the search window, respectively; $\sigma_{TS}$ denotes the covariance between the target window and the search window; and $n_1$ and $n_2$ denote the numbers of rows and columns in the search window. The highest cross-correlation
coefficient gives the optimized estimate of the position of the matching point. The precision
achieved by NCC is around one pixel, which is adequate in this study since the tolerance
is usually more than one pixel. As a result, there will be no need to apply LSM again to refine
the matching result. Finally, the exterior orientation parameters can be updated automatically
by solving the collinearity equations, considering that the image coordinates of the calibration points (i.e. the coordinates of the matching points) have been determined and the object coordinates remain unchanged.
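The NCC matching step can be sketched in a few lines. This is a minimal example on small synthetic windows (assumed data); a field implementation would search only a neighborhood around each reference point rather than the whole image.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size windows (Eq. 6)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match(search, target):
    """Return the top-left offset of the best match of `target` in `search`."""
    th, tw = target.shape
    sh, sw = search.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = ncc(search[i:i + th, j:j + tw], target)
    # The highest coefficient gives the estimated matching position.
    return np.unravel_index(scores.argmax(), scores.shape)

rng = np.random.default_rng(0)
search = rng.uniform(0, 255, size=(40, 40))
target = search[12:20, 5:13].copy()   # the true match is at offset (12, 5)
assert match(search, target) == (12, 5)
```

The returned position is pixel-accurate, which matches the roughly one-pixel precision the text attributes to NCC.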
2.3 Water-line detection
The water line, generally a straight line representing the intersection between the air and the water body in the image, can be identified by automatic line-detection methods [31]. However, automatic line-detection techniques often yield poor
results under bad weather conditions like heavy rain because of the rain-induced noises. To
cope with ambient noises and to detect the water line robustly, a method combining multiple sequential images is developed.
Instead of taking one single image in an epoch, a series of images is taken in a short
period of time, usually within a few seconds. An averaging image is formed by calculating

$$\mathrm{avg}(x, y) = \frac{\sum_{t=1}^{T} \mathrm{edge}_t(x, y)}{T} \quad (7)$$

where edge_t(x, y) is the digital number of pixel (x, y) in the edge image at time t and T is the total elapsed
time. For each image, the Gaussian filter first smooths the original gray-level image to remove noise, and the Canny edge detector [32] then extracts the structural information from the
images. Figure 2 displays an example of forming an averaging image. Considering the fact
that the blur caused by raindrops occurs randomly among the images and also the assumption
that the water level would not change significantly within a short time, the averaging image
can remove rain-induced noises effectively while preserving the water line and background features.
Fig. 2. An averaging image formed by three sequential images
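The effect of Eq. (7) can be demonstrated with synthetic edge maps. The data below are assumed for illustration; a real pipeline would obtain each edge_t from Gaussian smoothing followed by the Canny detector.

```python
import numpy as np

rng = np.random.default_rng(42)
frames = []
for _ in range(10):                  # ten frames taken within a few seconds
    edge = np.zeros((60, 80))
    edge[30, :] = 1.0                # the persistent water-line edge
    noise_idx = rng.integers(0, 60 * 80, size=120)
    edge.flat[noise_idx] = 1.0       # random rain-induced edge responses
    frames.append(edge)

avg = np.mean(frames, axis=0)        # the averaging image of Eq. (7)

water_line_strength = avg[30, :].mean()
background_strength = np.delete(avg, 30, axis=0).mean()
assert water_line_strength == 1.0    # the stable edge is preserved
assert background_strength < 0.1     # random noise is strongly attenuated
```

Because the rain-like responses fall on different pixels in each frame while the water line does not move within a few seconds, averaging suppresses the former and keeps the latter.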
The water-line detection can be carried out once the image quality is enhanced. A region
of interest (ROI) is first selected according to the positions of the calibration points to reduce
the search range (shown in Fig. 3). In this study, ROI is simply defined as a k-pixel buffer
around the water gauge. The Hough transform is then applied to identify the position of the
water line in the image space [33, 34]. The equation of the water line, written in the Hesse normal form, is

$$r = x\cos\theta + y\sin\theta \quad (8)$$

where r is the distance from the origin to the closest point on the straight line and θ is the angle between the x-axis and the line connecting the origin with that closest point. The angle θ is constrained by

$$\left|\theta - \tan^{-1}\frac{y_R - y_L}{x_R - x_L}\right| \le C \quad (9)$$

where (x_R, y_R) and (x_L, y_L) are the right and left ends of the initial water line, respectively,
and C is a pre-defined angular tolerance. The offset between each candidate water line in the
ROI and the initial water line is computed, and the one with the minimum offset is selected as the detected water line.
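The angle-constrained Hough voting described above can be sketched as follows; the synthetic edge image, the initial angle, and the tolerance C are assumed values for illustration.

```python
import numpy as np

edge = np.zeros((50, 50))
edge[20, :] = 1                     # a near-horizontal water-line edge

initial_angle = 90.0                # slope from the reference image [deg]
C = 10.0                            # pre-defined angular tolerance [deg]
thetas = np.deg2rad(np.arange(initial_angle - C, initial_angle + C + 1))
r_bins = np.arange(0, 71)           # quantized r values (Hesse normal form)

# Accumulate votes only for angles inside the constraint of Eq. (9).
acc = np.zeros((len(r_bins), len(thetas)))
ys, xs = np.nonzero(edge)
for x, y in zip(xs, ys):
    for t_idx, th in enumerate(thetas):
        r = x * np.cos(th) + y * np.sin(th)   # Eq. (8)
        r_idx = int(round(r))
        if 0 <= r_idx < len(r_bins):
            acc[r_idx, t_idx] += 1

# The strongest (r, theta) cell is taken as the detected water line.
peak_r, peak_t = np.unravel_index(acc.argmax(), acc.shape)
assert r_bins[peak_r] == 20
assert round(float(np.degrees(thetas[peak_t]))) == 90
```

Restricting the angle range both speeds up the voting and rejects spurious lines far from the initial water-line orientation.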
Fig. 3. The ROI (blue frame) and the candidate water lines (yellow lines) extracted by the Hough transform
2.4 Water-level determination
Despite the need for a pair of conjugate images for typical space intersection calculations, in
situ water-level monitoring work often involves only a single camera. For this reason, a
water-level detection method using single-camera images and collinearity equations is developed.
By definition, the water level is the elevation of the water surface. According to the
object coordinate system defined in Section 2.1, it can be specified as the point of
intersection of the water surface and the Z-axis, denoted H in Fig. 1, and its coordinates can
be written as (0, 0, Z_r). The key concept of the proposed approach is that the water level is simply the Z coordinate of point H. Since the X and Y coordinates of H are both zero, one can solve for Z_r once the corresponding image point coordinates h(x_r, y_r) are acquired.
According to Fig. 1, the image point h is the intersection of the water line and the line p1p5 in
the image. Given that the water line has been identified in Section 2.3 and the coordinates of
points p1 and p5 are known (see Section 2.1), one can then solve for the coordinates of point h
(x_r, y_r) in the image space. The water level Z_r can then be obtained by solving for the coordinates of its correspondence, point H, in the object space. The collinearity equations can be rearranged as

$$X - X_L = (Z - Z_L)\,\frac{m_{11}x + m_{21}y - m_{31}f}{m_{13}x + m_{23}y - m_{33}f}, \qquad Y - Y_L = (Z - Z_L)\,\frac{m_{12}x + m_{22}y - m_{32}f}{m_{13}x + m_{23}y - m_{33}f} \quad (10)$$

Substituting X = 0 and Y = 0 yields two estimates of the water level:

$$Z_{r,1} = Z_L - X_L\,\frac{m_{13}x + m_{23}y - m_{33}f}{m_{11}x + m_{21}y - m_{31}f}, \qquad Z_{r,2} = Z_L - Y_L\,\frac{m_{13}x + m_{23}y - m_{33}f}{m_{12}x + m_{22}y - m_{32}f} \quad (11)$$
Finally, the water level can be estimated by taking the average of Z_{r,1} and Z_{r,2}:

$$Z_r = \frac{Z_{r,1} + Z_{r,2}}{2} \quad (12)$$
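A round-trip check of Eqs. (1), (11), and (12) can be sketched numerically. The rotation order and sign conventions, the camera pose, and the focal length below are assumptions for illustration, not the field values.

```python
import numpy as np

def rotation_matrix(wx, wy, wz):
    """M = Rx(wx) Ry(wy) Rz(wz), as in Eq. (2); signs are assumed conventions."""
    cx, sx = np.cos(wx), np.sin(wx)
    cy, sy = np.cos(wy), np.sin(wy)
    cz, sz = np.cos(wz), np.sin(wz)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

f = 0.032                      # focal length [m] (assumed)
XL, YL, ZL = 1.0, 2.0, 10.0    # camera position (assumed synthetic pose)
M = rotation_matrix(0.05, -0.03, 0.04)
Zr_true = 1.234                # water level to recover

# Forward projection of H = (0, 0, Zr) through Eq. (1), with x0 = y0 = 0.
u = M @ (np.array([0.0, 0.0, Zr_true]) - [XL, YL, ZL])
x, y = -f * u[0] / u[2], -f * u[1] / u[2]

# Back-projection via Eq. (11): two estimates, averaged by Eq. (12).
# m_ij corresponds to M[i-1, j-1].
D = M[0, 2] * x + M[1, 2] * y - M[2, 2] * f
Zr1 = ZL - XL * D / (M[0, 0] * x + M[1, 0] * y - M[2, 0] * f)
Zr2 = ZL - YL * D / (M[0, 1] * x + M[1, 1] * y - M[2, 1] * f)
Zr = (Zr1 + Zr2) / 2
assert abs(Zr - Zr_true) < 1e-9
```

With noise-free data the two estimates coincide; with real measurements they differ slightly, which is why the average of Eq. (12) is used.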
The derivation up to this point shows that the Z coordinate is the only relevant component. Therefore, vertical control is critical: the minimal requirement is two control points on the Z-axis.
The proposed integrated method aims to cope with three main issues in automatic
water-level detection: (1) the camera movements, (2) the ambient noises, and (3)
single-camera measurement. LSM and NCC are employed for detecting the camera
movement and updating the exterior orientation parameters automatically. Rather than using a
single image in the water-line detection process, an averaging image is formed in order to
reduce the rain-induced noises. Ultimately, the water level is obtained using only
single-camera images. Figure 4 shows a flowchart of all of the steps of the analysis in the
proposed approach.
Fig. 4. Flowchart for the proposed automatic single-camera water-level detection
3. Case study
The feasibility of the proposed water-level detection method was tested in the field. The case
study took place in an urban area prone to flash floods and road safety issues due to high
population density and heavy traffic [35]. The water level changes considerably with the tide
as the site lies in the downstream region near the sea, increasing the risk of flood under heavy
rain. These unique characteristics make water-level monitoring an essential task in this area.
Figure 5 illustrates the field setup: a digital camera (model: Canon 650D) was set up
19.63 m from the water gauge and captured images with a resolution of 1920 by 1280 pixels.
The interior orientation was obtained by laboratory camera calibration; the procedure was repeated several times until the intrinsic parameters stabilized at (x_0, y_0, f) = (11.376, 7.582, 31.719) mm. After setting up the system, the camera first
acquired a reference image. Ten calibration points on the gauge were then selected, and their object and image coordinates were measured manually (see the list in Table 1). The exterior orientation parameters were then calculated from the collinearity equations (Eq. 1). Considering the image size and the computational efficiency, the ROI was defined as a 200-pixel buffer around the water gauge.
The system was tested for a 25-hour period in the field to capture the change of the
lighting condition throughout the day (including sunlight in day-time and street light at night).
The camera captured 10 images in a row at a rate of one frame per second (i.e. 1 fps) every
half hour and collected 51 10-image sets (image sets 1–51, hereafter) for subsequent analyses.
During the experiment, the ability of the proposed water-level detection method to overcome
two situations was tested: (1) the camera was purposely rotated between image sets 10–11,
26–27, and 38–39, and (2) rainfall occurred and the lens fogged up during image sets 41 and
50.
Fig. 5. Field setup for single-camera water-level detection
Examples of the average images and the water-line estimation corresponding to each
camera pose are shown in Fig. 6. The results of automatic water-level detection and the
corresponding by-eye measurements (which serve as the reference values in this experiment)
are listed in Table 2. Figure 7 further plots the time series of the water level. One can see
immediately that the water level changes significantly with the tide. While the resolution of manual observation is 0.05 m (identifying water levels from 0.05 to 1.99 m), the resolution of the image-based estimation is up to 0.001 m (identifying water levels from –0.110 to 1.975 m). Note that water levels below 0 m are not available from by-eye measurement since the
minimum reading of the water gauge is zero. On the other hand, the proposed method is able to specify water levels lower than 0 m because it solves for the position of the water line in the object space.
The overall Root-Mean-Square Error (RMSE) between water levels estimated by the
two methods is 0.011 m, which suggests that the proposed method reaches comparable
accuracy to manual observation in general. The RMSEs corresponding to the two situations,
camera movement and rain-induced noise, are 0.009 and 0.009 m, showing no significant
differences in accuracy. In addition, the changing light condition throughout the 25-hour field
test had no significant effect on the accuracy in this experiment. Note that the by-eye
measurements could also contain error under varying light conditions due to poor image
quality.
Finally, it is noted that the analysis was implemented using MATLAB (R2013a, 64-bit),
and tested on a personal computer (processor: Intel Core i7-4700HQ 2.4 GHz; memory: 8.00
GB DDR3; operating system: Windows 7, 64 bit). The time required to process a single
image set was 7 seconds on average. Since the analysis was executed every half hour, the processing time is sufficiently short for real-time monitoring.
In conclusion, the proposed approach detects water level with an accuracy of 0.01 m
using single-camera images. It is able to cope with camera movement and rain-induced noise effectively.
Fig. 6. Examples of the average images and the water-line estimations corresponding to each camera pose: panels (a)–(d)
Table 2. A comparison between manual observation and image-based estimation
Set   Time    Manual obs. (m)   Image-based est. (m)   Difference (m)
34    05:30   0.20              0.191                  0.009
35    06:00   N/A               –0.051                 N/A
36    06:30   N/A               –0.076                 N/A
37    07:00   N/A               –0.082                 N/A
38    07:30   N/A               –0.076                 N/A
39    08:00   N/A               –0.072                 N/A
40    08:30   N/A               –0.076                 N/A
41    09:00   0.05              0.061                  –0.011
42    09:30   N/A               –0.103                 N/A
43    10:00   N/A               –0.110                 N/A
44    10:30   N/A               –0.107                 N/A
45    11:00   N/A               –0.107                 N/A
46    11:30   N/A               –0.101                 N/A
47    12:00   0.34              0.329                  0.011
48    12:30   0.75              0.742                  0.008
49    13:00   1.10              1.103                  –0.003
50    13:30   1.41              1.418                  –0.008
51    14:00   1.70              1.704                  –0.004
RMSE                                                   0.011
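As a sanity check, the RMSE over the Table 2 rows that have both a manual reading and an image-based estimate can be recomputed. The overall 0.011 m value reported in the text covers all 51 image sets, so this subset value comes out slightly smaller.

```python
import math

# Differences (manual minus image-based) for the rows of Table 2 listed above
# that have both readings: sets 34, 41, 47, 48, 49, 50, 51.
differences = [0.009, -0.011, 0.011, 0.008, -0.003, -0.008, -0.004]
rmse = math.sqrt(sum(d * d for d in differences) / len(differences))
assert rmse < 0.011   # consistent with the overall RMSE over all 51 sets
```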
Fig. 7. The time series of the water level obtained by manual observation and image-based estimation
4. Conclusion
To achieve a real-time water-level monitoring system, water gauge images in a river are
captured consecutively. However, changes in camera orientation and bad weather conditions
can add difficulty to automatic identification of the water line in the water gauge images,
sometimes even interrupting the monitoring process. This study integrates image-processing techniques with photogrammetric principles to achieve automatic water-level detection using single-camera data. The case study demonstrates that the method resolves
difficulties in real-time water-level detection with changes in camera orientation and blurred
images effectively. By utilizing not only computer vision techniques but also
photogrammetric principles, the proposed method is able to distinguish the water level with a
resolution of up to 0.01 m, which is finer than the gauge unit. Furthermore, since the method solves for the water level rather than identifying the exact gauge reading, it could be implemented even without a water gauge, provided that identifiable and reliable control points are available.
The system can be upgraded by implementing nighttime observation techniques (e.g. infrared
sensors) to provide better image quality under poor lighting conditions. An algorithm that
automatically adjusts the ROI and the tolerance can also improve the robustness of water-line
detection. The current method requires the water line to be relatively still, as it was designed to be applied in man-made facilities like river channels and reservoirs. However, by integrating the water-line detection algorithm developed by Kröhnert and Meichsner (2017) [36] with the water-level determination procedure used in this study, the proposed method could be extended to rougher water surfaces. The system can also be equipped with wired or wireless communication channels, remote control, and real-time recording.
5. Acknowledgments
The authors thank Mr Tsung-Hsien Ruan (National Taiwan University) for assisting field data
collection, and Ms Yusan Yang (University of Pittsburgh) for valuable comments during the
manuscript preparation stage. Dr. Tong Sun and two anonymous reviewers provided helpful suggestions.
References
[1] Lin, F., Chang, W. Y., Lee, L. C., Hsiao, H. T., Tsai, W. F., and Lai, J. S., 2013.
Applications of image recognition for real-time water level and surface velocity.
In Multimedia (ISM), 2013 IEEE International Symposium on (pp. 259–262). IEEE. doi:
10.1109/ISM.2013.49
[2] Wu, J.H., Chen L.C., Tseng, C.H., Lo, S.W., and Lin, F.P., 2015. Automated image
[3] Buchanan, T. J., and Somers, W. P., 1968. Stage measurement at gaging stations. US
[4] Stannard, D. I., and Rosenberry, D. O., 1991. A comparison of short-term measurements
of lake evaporation using eddy correlation and energy budget methods. Journal of
[5] Sweet, H. R., Rosenthal, G., and Atwood, D. F., 1990. Water level
[6] Fukami, K., Yamaguchi, T., Imamura, H., and Tashiro, Y., 2008. Current status of river
discharge observation using non-contact current meter for operational use in Japan.
10.1061/40976(316)278
[7] Costa, J. E., Spicer, K. R., Cheng, R. T., Haeni, F. P., Melcher, N. B., Thurman, E. M.,
Plant, W. J., and Keller, W. C., 2000. Measuring stream discharge by non‐contact
doi: 10.1029/1999GL006087
[8] Simpson, M. R., and Oltmann, R. N., 1993. Discharge-measurement system using an
acoustic Doppler current profiler with applications to large rivers and estuaries (p. 32).
US Government Printing Office.
[9] Chen, Y. C., Kuo, J. T., Yang, H. C., Yu, S. R., and Yang., H. C., 2007. Discharge
measurement during high flow. Journal of Taiwan Water Conservancy, 55, 21–33.
[10] Tsai, T. M., 2004. Stage measuring technique improvement of ultrasonic sensor gauge
and accuracy verifying procedure of water level data, Ph.D. dissertation, National
[11] Bin, S., Zhang, C., Li, L., and Wang, J., 2014. Research on water level measurement
based on image recognition for industrial boiler. In The 26th Chinese Control and
10.1109/CCDC.2014.6852375
[12] Hies, T., Babu, P. S., Wang, Y., Duester, R., Eikaas H. S. and Meng, T., 2012. Enhanced
[13] Lin, K. H., 2013. Water-level Detection and Recognition Using Image Processing
Technology.
[14] Yu, J., and Hahn, H., 2010. Remote detection and monitoring of a water level using
Narrow Band Channel. Journal of Information Science and Engineering, 26(1), 71–82.
[15] Royem, A. A., Mui, C. K., Fuka, D. R., and Walter, M. T., 2012. Technical note:
of the ASABE, 55(6), 2237–2242. doi: 10.13031/2013.42512
[16] Gilmore, T. E., Birgand, F., and Chapman, K. W., 2013. Source and magnitude of error
[17] Yang, H. C., Wang, C. Y., and Yang, J. X., 2014. Applying image recording and
identification for measuring water stages to prevent flood hazards. Natural hazards,
[18] Mikhail, E. M., and Weerawong, K., 1997. Exploitation of linear features in surveying
10.1061/(ASCE)0733-9453(1997)123:1(32)
[19] Mikhail, E. M., and Ackermann, F. E., 1982. Observations and least squares, IEP-A
[20] Ghilani, C. D., 2010. Adjustment computations: spatial data analysis. John Wiley and
[21] Förstner, W., 1982. On the geometric precision of digital correlation. Int. Arch.
[22] Ackermann, F., 1984. Digital image correlation: performance and potential application in
10.1111/j.1477-9730.1984.tb00505.x
[23] Gruen, A., 1985. Adaptive least squares correlation: a powerful image matching technique. South African Journal of Photogrammetry, Remote Sensing and Cartography, 14(3), 175–187.
[24] Wrobel, B. P., 1987. Digital image matching by facets using object space models. In
Hague International Symposium (pp. 325-335). International Society for Optics and
[25] Lewis, J. P., 1995. Fast normalized cross-correlation. In Vision interface (Vol. 10, No. 1,
pp. 120-123).
[26] Pilu, M., 1997. A direct method for stereo correspondence based on singular value
10.1109/CVPR.1997.609330
[27] Zhao, F., Huang, Q., and Gao, W., 2006. Image matching by normalized
10.1109/ICASSP.2006.1660446
[28] Lerma, J. L., Navarro, S., Cabrelles, M., Seguí, A. E., and Hernández, D., 2013.
Automatic orientation and 3D modelling from markerless rock art imagery. ISPRS
10.1016/j.isprsjprs.2012.08.002
[29] Tasdelen, E., Unbekannt, H., Willner, K., and Oberst, J., 2012, September.
10.5194/isprsarchives-XL-4-257-2014
[30] Bethmann, F., and Luhmann, T., 2011. Least-squares matching with advanced geometric
[31] Moselhi, O., and Shehab-Eldeen, T., 1999. Automated detection of surface defects in
10.1016/S0926-5805(99)00007-2
[32] Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on
10.1109/TPAMI.1986.4767851
[33] Duda, R. O., and Hart, P. E., 1972. Use of the Hough transformation to detect lines and
10.1145/361237.361242
[34] Kirstein, S., Müller, K., Walecki-Mingers, M., and Deserno, T. M., 2012. Robust
adaptive flow line detection in sewer pipes. Automation in Construction, 21, 24–31. doi:
10.1016/j.autcon.2011.05.009
[35] Water Resources Agency Ministry of Economic Affairs, 1992. Research of urban area
[36] Kröhnert, M. and Meichsner, R., 2017. Segmentation of environmental time lapse image
cameras. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W4, 1-8. doi:
10.5194/isprs-annals-IV-2-W4-1-2017.
Highlights
Image matching algorithms detect camera movements and adjust exterior orientations.
Averaging images are formed to reduce ambient noises and enhance the robustness of water-line detection.