
Accepted Manuscript

Automatic Water-Level Detection Using Single-Camera Images with Varied Poses

Yan-Ting Lin, Yi-Chun Lin, Jen-Yu Han

PII: S0263-2241(18)30490-1
DOI: https://doi.org/10.1016/j.measurement.2018.05.100
Reference: MEASUR 5601

To appear in: Measurement

Received Date: 30 March 2017


Revised Date: 22 May 2018
Accepted Date: 25 May 2018

Please cite this article as: Y-T. Lin, Y-C. Lin, J-Y. Han, Automatic Water-Level Detection Using Single-Camera
Images with Varied Poses, Measurement (2018), doi: https://doi.org/10.1016/j.measurement.2018.05.100

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Automatic Water-Level Detection Using Single-Camera Images with Varied Poses

Yan-Ting Lin, Yi-Chun Lin, Jen-Yu Han

Department of Civil Engineering, National Taiwan University, Taipei 10617, Taiwan

Abstract

Monitoring of the water level of reservoirs, rivers, and lakes is an essential task for hydraulic

facility management and disaster mitigation. Nowadays, although automated instruments for

water-level detection have been widely applied, their reliability and robustness still need to be

further improved. On the other hand, surveillance cameras are typically available at major

rivers and hydraulic facilities and could provide opportune field observations of the water

level. In this study, an automatic water-level detection approach based on single-camera

images is developed. It employs digital image processing techniques to minimize the ambient

noise and to detect the water level in real-time. The photogrammetric technique is also

introduced to track the camera movements and accurately determine the water level in the

object space in a rigorous manner. Results from real field experiments illustrate that the proposed approach is capable of overcoming interference due to

changing weather conditions and unexpected movement of the camera and automatically

identifying the water level. Consequently, it can provide an efficient and reliable water-level

monitoring technique for hydraulic management authorities.

Keywords: water-level detection; digital image processing; real-time monitoring; hydraulic facility management; automated analysis

1. Introduction

Heavy precipitation in a short period of time can cause flash floods in riparian regions,

especially under extreme weather conditions. Consequently, real-time water-level monitoring

systems are particularly important for flood warnings and disaster risk assessment [1, 2]. In

densely populated areas, man-made facilities like river channels and reservoirs are equipped with water-level measuring instruments to support real-time monitoring and flood hazard

management. Water-level measuring techniques can be divided into two categories: contact

measurement systems and non-contact measurement systems [3]. Sensors in modern contact

measurement systems (e.g. float sensors [4] and submersible pressure sensors) are compact,

low cost, and easy to use. However, sediment deposition accompanied by frequent floods

often buries or damages the sensors, causing inaccuracies in reporting water levels.

Limitations on the local electricity supply and Internet access, influences from fluctuations of

water temperature, and other environmental factors also introduce uncertainties into the

readings [5]. On the other hand, non-contact sensors (e.g. radar sensors [6, 7] or ultrasonic

sensors [8]) can collect high-quality observations while requiring only low maintenance and

low power consumption. However, non-contact sensors are usually installed on adjacent

structures, such as bridges, poles, and scaffoldings. As a result, the obtained measurements

often contain noticeable errors due to the interference of vibration from the structures to

which the sensors are attached [9, 10].

Another non-contact technique for measuring the water level is to capture images of the

water gauges for visual interpretation. For monitoring of water levels during a flood, a typical

method is to take a series of images and to manually determine the gauge reading in each

image. However, by-eye manual readings are not only inefficient but also often biased and

inaccurate, especially when the image is of low quality. Therefore, how to automatically and

accurately identify gauge readings from images has been an issue of concern. With

advancement in digital image processing, gauge or water line images can now be read

automatically, replacing time- and labor-consuming manual methods. Imaging methods based

on template-matching algorithms [1, 11] or pattern recognition [12, 13] have been proposed

in some recent studies. However, image quality is still a major concern in automatic

gauge-reading identification, limiting its applicability in real field cases with significant

ambient noise (e.g. changing light and precipitation).

Alternatively, via the photogrammetric method, the water level can be calculated by

introducing fiducials or control points to link the image to the world coordinates [14–17].

Although they are less dependent on image quality, such methods face additional challenges

like lens distortion and unexpected camera movements that could introduce significant

perturbations into the calculations. In this study, the potential of integrating image techniques

and photogrammetric principles to enhance the robustness in automatic water-level detection

is explored.

The aim of this study is to develop an automatic water-level detection approach for flood

hazard management that considers both the ambient noise and camera movements. In the

proposed method, image enhancement was employed first to reduce image noise.

Feature-matching techniques and the photogrammetric principle were then applied to track

the camera’s movements and determine the water level in real-time. Finally, field tests were

performed and the results were evaluated to demonstrate the feasibility and reliability of the

proposed approach.

2. Methodology

The proposed water-level detection approach consists of two ideas: identifying the water line

in the water-gauge image and solving the water level based on photogrammetric principles

(i.e. the well-known collinearity equations). In order to solve the collinearity equations, the

object (world) coordinate system must be defined according to the description in Section 2.1.

Section 2.1 also introduces the parameters that establish the relation between the object space

and the image space. Section 2.2 then introduces a method to automatically check and update

these parameters whenever a new image is read. Section 2.3 presents a strategy to reduce

ambient noises and enhance the quality of line detection. Finally, the water level can be

obtained following the derivation in Section 2.4.

2.1 System initialization

The proposed method’s aim is to monitor water level at man-made facilities like river

channels and reservoirs, which is often key to flood hazard management. The camera must be set up so that the full range of water-level change, as well as the gauge interval, can be completely observed in the images. According to the sampling theorem, the gauge interval must span more than 2 pixels in the image in order to clearly identify the water-level value, and the distance between the camera and the gauge is adjusted to satisfy this condition. This distance matters: a larger distance increases the ground sample distance and thus decreases

image measurement accuracy. In the case study described below, the object distance is

approximately 20 m, resulting in a ground sample distance of 0.002 m/pix. Setting up the

camera approximately along the Y-axis (Fig. 1) may simplify computation, but it is not

necessary and it does not affect the accuracy.
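To relate these quantities concretely, the ground sample distance (GSD) follows the usual pinhole relation; the pixel pitch p below is an assumed, illustrative value that is not reported for the field test:

$$\mathrm{GSD} \approx \frac{p \cdot D}{f}$$

where D is the object distance and f the focal length. For example, with the case-study values D = 20 m and GSD = 0.002 m/pix, a focal length of roughly 32 mm (Section 3) implies an effective pixel pitch of about 3.2 µm, and the two-pixel sampling condition then requires a gauge interval wider than 2 × 0.002 m = 0.004 m.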

While installing a camera for automatic water-level detection, several initializations

must be carried out in order to establish the relation between the object coordinate system and

the image coordinate system. The process includes three steps: (1) identifying calibration

points in the field, (2) measuring the coordinates of the calibration points, and (3)

determining the initial interior and exterior orientation of the camera.

Upon setting up the camera, an image is first taken as a reference image. Multiple points on the gauge are then selected manually as calibration points. Figure 1 illustrates the geometry of the system, where H marks the unknown water level, P1–P6 are the

calibration points in the object coordinate system, and h and p1–p6 denote their

correspondences in the image coordinate system, respectively. In this study, the calibration

points also serve as reference points in the image-matching process; therefore, it is important

to choose points with distinct features as calibration points. Note that there should be at least

three points in order to solve the exterior orientation parameters of the camera. Additionally,

at least two points on the Z-axis should be selected in order to conduct single-camera

water-level detection (see further discussion in Section 2.4).

The second step is to measure the object and image coordinates of the calibration points

manually. The object and image coordinate systems used in the proposed approach are

illustrated in Fig. 1. The origin of the object coordinate system is defined at the zero level of

the water gauge. The Z-axis is coincident with the gauge and the X-axis is perpendicular to it.

The scale is the same as in the water gauge. Based on the definition, the object coordinates of

the calibration points are denoted as Pi (Xi, 0, Zi) and can be measured accurately by

equipment like a total station or laser rangefinder. The image coordinate system corresponds

to the image’s pixel indices. The origin is at the top left of the image, the positive x-axis

points right, and the positive y-axis points down. The image coordinates of the calibration

points pi (xi, yi) can then be measured manually. In addition, the slope of the water line in the

reference image is calculated and will act as an initial value in the Hough transform (see Section 2.3).

The final step to establish the relation between the object space and the image space is

obtaining the interior and exterior orientations of the camera. The interior orientation includes

the focal length, the location of the principal point, and a description of the lens distortion.

These parameters are acquired through camera calibration, which can be done in a laboratory

prior to setting up the camera in the field. Because the focal length is a constant in the model,

the minimum requirement for the camera is a fixed-focal-length lens. The exterior

orientation describes the position and orientation of the camera in the object space, which

includes six independent parameters: $(X_L, Y_L, Z_L)$ for position and $(\omega_x, \omega_y, \omega_z)$ for orientation. These parameters can be obtained by solving the well-known collinearity equations [18]:

$$x - x_0 = -f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad y - y_0 = -f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)} \quad (1)$$

where (x, y) represent the image coordinates of the calibration point; (X, Y, Z) represent the

object coordinates of the calibration point; m11–m33 are the elements in a 3 × 3 rotation matrix

M; (x0, y0) are the offsets from the fiducial-based origin to the perspective center origin; and f

is the focal length. The rotation matrix M represents the camera orientation in the object

space and can be written as:

$$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega_x & \sin\omega_x \\ 0 & -\sin\omega_x & \cos\omega_x \end{bmatrix} \begin{bmatrix} \cos\omega_y & 0 & -\sin\omega_y \\ 0 & 1 & 0 \\ \sin\omega_y & 0 & \cos\omega_y \end{bmatrix} \begin{bmatrix} \cos\omega_z & \sin\omega_z & 0 \\ -\sin\omega_z & \cos\omega_z & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (2)$$

In Eq. (1), x0, y0, and f are interior parameters that were determined previously by camera calibration. Once three or more calibration points have been identified and measured, one can write at least six equations based on Eq. (1). The camera's position $(X_L, Y_L, Z_L)$ and orientation $(\omega_x, \omega_y, \omega_z)$ can then be uniquely determined by applying a least-squares

technique [19, 20]. After obtaining the interior and exterior orientations, the relation between

object space and image space, which is the foundation of the proposed single-camera

water-level detection approach, is uniquely determined.

Fig. 1. The geometry of the system used in single-camera water-level detection
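To make this initialization step concrete, the following Python sketch solves the six exterior orientation parameters from three or more calibration points by nonlinear least squares. The function names, the use of NumPy/SciPy, and the assumption that the image coordinates have already been reduced to the same units as the interior parameters are illustrative; this is a sketch, not the implementation used in this study.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation_matrix(wx, wy, wz):
    """Rotation matrix M of Eq. (2): rotations about the x-, y-, and z-axes."""
    cx, sx = np.cos(wx), np.sin(wx)
    cy, sy = np.cos(wy), np.sin(wy)
    cz, sz = np.cos(wz), np.sin(wz)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def residuals(params, xy, XYZ, x0, y0, f):
    """Image-space residuals of the collinearity equations (Eq. 1)."""
    XL, YL, ZL, wx, wy, wz = params
    M = rotation_matrix(wx, wy, wz)
    d = (XYZ - np.array([XL, YL, ZL])).T   # 3 x n vectors, camera to points
    u, v, w = M @ d                        # camera-frame coordinates
    x_pred = x0 - f * u / w
    y_pred = y0 - f * v / w
    return np.concatenate([xy[:, 0] - x_pred, xy[:, 1] - y_pred])

def solve_exterior_orientation(xy, XYZ, x0, y0, f, guess):
    """xy: n x 2 image coordinates, XYZ: n x 3 object coordinates; n >= 3
    calibration points give at least six equations for the six unknowns."""
    return least_squares(residuals, guess, args=(xy, XYZ, x0, y0, f)).x
```

A rough initial guess (e.g. the measured camera position and approximate tilt angles) is typically sufficient for the least-squares iteration to converge.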

2.2 Camera-movement detection

To obtain the water level with high accuracy by the proposed approach, accurate exterior

parameters are essential. Although these parameters were evaluated manually in the system

initialization process, they need to be updated once the camera rotates or moves. In this

section, a strategy that detects camera movement and adjusts the exterior orientation

parameters is developed by matching the calibration points in sequential images. The strategy

is applied whenever a new image is read, and it is fully automatic. Hence, no manual

calibration work is required once the system is set up and initialized in the field.

The method matches the calibration points based on two well-known area-based

matching techniques: least-squares matching (LSM) [21–24] and normalized

cross-correlation (NCC) [25–27]. In general, NCC is more robust than LSM since it requires

a less accurate initial position of the reference points. LSM, on the other hand, can achieve a

higher precision but is very sensitive to the initial estimation [28, 29]. Instead of the

commonly used strategy that applies NCC first to obtain an approximate position, followed

by LSM for refinement, the proposed method uses LSM prior to NCC, where the former only

acts as a camera-movement detector and the latter determines the positions of the matching

points.

A target window consisting of the reference points $(x_i^t, y_i^t)$ (i.e. the i-th calibration point on the image at time t) and the corresponding neighborhood is selected in the beginning. Whenever a new image is read, the LSM algorithm determines the transformation that best fits the search window $S(x_i^{t+1}, y_i^{t+1})$ to the target window $T(x_i^t, y_i^t)$ by minimizing the gray-value difference V [21, 30] between the two windows, and simultaneously solves for the coordinates of the matching points $(x_i^{t+1}, y_i^{t+1})$:

V  h1  h2  S  xit 1 , yit 1   T  xit , yit  (3)

 xit 1  a0  a1 xit  a2 yit


 t 1 (4)
 yi  b0  b1 xi  b2 yi
t t

where a0 – a2 and b0 – b2 are geometric transformation parameters, h1 is the radiometric

transformation parameter, and h2 is a radiometric scale factor. Since the LSM technique is

very sensitive to the initial position of the reference points, the matching process gives a poor

result when the camera moves or rotates. This seeming disadvantage can instead serve as a

signal of camera movements. First, a pre-defined tolerance $\varepsilon$ is set based on the image resolution. In this study, $\varepsilon$ is simply set equal to the number of pixels per water-gauge unit, which means any camera movement causing a displacement of more than one gauge unit will be detected. A change in camera orientation is then declared if the

average distance between the reference points and their corresponding matching points is

larger than the predefined tolerance:

x  xit 1    yit  yit 1   


t 2 2
i
(5)

If the camera has moved or rotated, solving the coordinates of matching points by LSM

will not be accurate enough. For this reason, NCC is adopted to find the matching points once
the camera movement has been recognized by LSM. Equation (6) gives a basic definition of

the NCC coefficient [27]:

$$\rho = \frac{\sigma_{ts}}{\sigma_t \sigma_s} = \frac{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[T(x_i^t, y_i^t) - \bar{T}\right]\left[S(x_i^{t+1}, y_i^{t+1}) - \bar{S}\right]}{\sqrt{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[T(x_i^t, y_i^t) - \bar{T}\right]^2}\,\sqrt{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}\left[S(x_i^{t+1}, y_i^{t+1}) - \bar{S}\right]^2}} \quad (6)$$
where $\bar{T}$ and $\bar{S}$ denote the average gray values and $\sigma_t$ and $\sigma_s$ denote the standard deviations of the gray values of the target window and the search window, respectively; $\sigma_{ts}$ denotes the covariance between the target window and the search window; and $n_1$ and $n_2$

denote the number of rows and columns in the search image. The highest cross-correlation

coefficient gives the optimized estimate of the position of the matching point. The precision

achieved by NCC is around one pixel, which is adequate in this study since the tolerance $\varepsilon$

is usually more than one pixel. As a result, there will be no need to apply LSM again to refine

the matching result. Finally, the exterior orientation parameters can be updated automatically

by solving the collinearity equations, considering that the image coordinates of calibration

points (i.e. the coordinates of the matching points) have been determined and the object

coordinates of calibration points remain unchanged.
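The two-stage strategy can be sketched as follows, with OpenCV's matchTemplate standing in for the NCC step (its TM_CCOEFF_NORMED score is a normalized cross-correlation of the form of Eq. (6)); the function names, window handling, and tolerance are illustrative assumptions rather than the exact implementation of this study.

```python
import cv2
import numpy as np

def camera_moved(ref_pts, lsm_pts, tol):
    """Eq. (5): flag camera movement when the average displacement of the
    calibration points after LSM exceeds the tolerance (pixels)."""
    d = np.linalg.norm(np.asarray(ref_pts) - np.asarray(lsm_pts), axis=1)
    return d.mean() > tol

def ncc_match(new_image, template, search_region):
    """Re-locate one calibration-point template by NCC after movement.
    search_region = (x, y, w, h) limits the search window."""
    x, y, w, h = search_region
    roi = new_image[y:y + h, x:x + w]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)   # location of the highest score
    # center of the best-matching window, in full-image coordinates
    return (x + best[0] + template.shape[1] // 2,
            y + best[1] + template.shape[0] // 2)
```

Once all calibration points are re-matched, the updated image coordinates are fed back into Eq. (1) to re-solve the exterior orientation.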

2.3 Noise reduction and water-line detection

The water line, generally a straight line representing the intersection between the air and the

water surface, is a recognizable feature in an image and can thus be identified by

line-detection methods [31]. However, automatic line-detection techniques often yield poor
results under bad weather conditions like heavy rain because of the rain-induced noises. To

cope with ambient noises and to detect the water line robustly, a method combining different

feature-detection techniques is introduced here.

Instead of taking one single image in an epoch, a series of images are taken in a short

period of time, usually within a few seconds. An averaging image is formed by calculating

the average digital number of the serial images:

$$\mathrm{avg}(x, y) = \frac{\sum_{t=1}^{T} \mathrm{edge}_t(x, y)}{T} \quad (7)$$

where $\mathrm{edge}_t(x, y)$ is the digital number of pixel (x, y) at time t and T is the total elapsed

time. Afterwards, a Gaussian filter first smooths the original gray-level images to remove noise, and the Canny edge detector [32] then extracts the structural information from the

images. Figure 2 displays an example of forming an averaging image. Considering the fact

that the blur caused by raindrops occurs randomly among the images and also the assumption

that the water level would not change significantly within a short time, the averaging image

can remove rain-induced noises effectively while preserving the water line and background

information in the image.

Fig. 2. An averaging image formed by three sequential images
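A compact sketch of this noise-reduction step is given below, assuming the burst of frames has already been converted to gray level; the Gaussian kernel size and Canny thresholds are illustrative choices.

```python
import cv2
import numpy as np

def averaged_edge_map(frames):
    """Average a burst of gray-level frames (Eq. 7), smooth, and extract
    edges, so that randomly placed raindrop blur is suppressed while the
    static water line and background survive the averaging."""
    avg = np.mean(np.stack(frames).astype(np.float32), axis=0).astype(np.uint8)
    smoothed = cv2.GaussianBlur(avg, (5, 5), 1.0)  # Gaussian pre-filter
    return cv2.Canny(smoothed, 50, 150)            # binary edge map
```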

The water-line detection can be carried out once the image quality is enhanced. A region

of interest (ROI) is first selected according to the positions of the calibration points to reduce

the search range (shown in Fig. 3). In this study, ROI is simply defined as a k-pixel buffer

around the water gauge. The Hough transform is then applied to identify the position of the

water line in the image space [33, 34]. The equation of the water line, written in the Hesse

normal form, is:

r  x cos  y sin  (8)

where r is the distance from the origin to the closest point on the straight line and $\theta$ is the angle between the x-axis and the line connecting the origin with the closest point. The angle $\theta$ is constrained by:

$$\left|\theta_c - \tan^{-1}\!\left(\frac{y_R - y_L}{x_R - x_L}\right)\right| \leq C \quad (9)$$

where $\theta_c$ is the angle of a candidate line, $(x_R, y_R)$ and $(x_L, y_L)$ are the right and left ends of the initial water line, respectively,

and C is a pre-defined angular tolerance. The offset between each candidate water line in the

ROI and the initial water line is computed, and the one with the minimum offset will be the

new water line.

Fig. 3. The ROI (blue frame) and the candidate water lines (yellow lines) extracted by the

Hough transform technique
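The constrained search of Eqs. (8)–(9) can be sketched as follows; the accumulator threshold and the conversion between the Hesse normal angle and the line's slope angle are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_water_line(edge_roi, init_angle, C, init_r):
    """Pick the Hough line closest to the initial water line.
    edge_roi: edge map of the ROI; init_angle: slope angle of the initial
    water line (rad); C: angular tolerance of Eq. (9); init_r: its r."""
    lines = cv2.HoughLines(edge_roi, 1, np.pi / 180, 80)
    if lines is None:
        return None
    best = None
    for r, theta in lines[:, 0]:
        # Hesse form r = x*cos(theta) + y*sin(theta); the line's slope
        # angle is theta - 90 deg, which Eq. (9) compares with init_angle
        if abs((theta - np.pi / 2) - init_angle) > C:
            continue                    # outside the angular tolerance
        offset = abs(r - init_r)        # offset from the initial line
        if best is None or offset < best[0]:
            best = (offset, r, theta)
    return None if best is None else best[1:]   # (r, theta) of water line
```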

2.4 Single-camera water-level detection

Despite the need for a pair of conjugate images for typical space intersection calculations, in

situ water-level monitoring work often involves only a single camera. For this reason, a

water-level detection method using single-camera images and collinearity equations is

developed in this section.

By definition, the water level is the elevation of the water surface. According to the

object coordinate system defined in Section 2.1, it can be specified as the point of

intersection of the water surface and the Z-axis, denoted H in Fig. 1, and its coordinates can

be written as $(0, 0, Z_r)$. The key concept of the proposed approach is that the water level is simply the Z coordinate of point H. Since the X and Y coordinates of H are both zero, one can solve for $Z_r$ once the corresponding image point, h $(x_r, y_r)$, is acquired.

According to Fig. 1, the image point h is the intersection of the water line and the line p1p5 in

the image. Given that the water line has been identified in Section 2.3 and the coordinates of

points p1 and p5 are known (see Section 2.1), one can then solve for the coordinates of point h

$(x_r, y_r)$ in the image space. The water level $Z_r$ can be obtained by solving for the

coordinates of its correspondence, point H, in the object space. The collinearity equations are

rewritten in another basic form:

$$X = \left[\frac{m_{11}x + m_{21}y - m_{31}f}{m_{13}x + m_{23}y - m_{33}f}\right](Z - Z_L) + X_L, \qquad Y = \left[\frac{m_{12}x + m_{22}y - m_{32}f}{m_{13}x + m_{23}y - m_{33}f}\right](Z - Z_L) + Y_L \quad (10)$$

Substituting the coordinates of point H $(0, 0, Z_r)$ into Eq. (10), one obtains:

$$Z_{r,1} = Z_L - X_L\,\frac{m_{13}x + m_{23}y - m_{33}f}{m_{11}x + m_{21}y - m_{31}f}, \qquad Z_{r,2} = Z_L - Y_L\,\frac{m_{13}x + m_{23}y - m_{33}f}{m_{12}x + m_{22}y - m_{32}f} \quad (11)$$

Finally, the water level can be estimated by taking the average of $Z_{r,1}$ and $Z_{r,2}$:

$$Z = \frac{Z_{r,1} + Z_{r,2}}{2} \quad (12)$$

The derivation up to this point shows that the Z coordinate is the only relevant component.

Therefore, vertical control is critical: the minimal requirement is two control points on the

Z-axis and a third, non-collinear control point.
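Given the reconstruction of Eqs. (10)–(12) above, the water level follows directly from the image point h; the sketch below assumes the sign convention used in that reconstruction and that the measured pixel coordinates are first reduced to the principal point.

```python
import numpy as np

def water_level(xr, yr, x0, y0, f, M, XL, YL, ZL):
    """Water level Z from the image point h = (xr, yr), where h is the
    intersection of the detected water line with the gauge axis p1p5.
    M: rotation matrix of Eq. (2); (XL, YL, ZL): camera position."""
    x, y = xr - x0, yr - y0                           # principal-point offset
    denom = M[0, 2] * x + M[1, 2] * y - M[2, 2] * f   # m13*x + m23*y - m33*f
    num_x = M[0, 0] * x + M[1, 0] * y - M[2, 0] * f   # m11*x + m21*y - m31*f
    num_y = M[0, 1] * x + M[1, 1] * y - M[2, 1] * f   # m12*x + m22*y - m32*f
    # Eq. (11): set X = 0 and Y = 0 in Eq. (10), solve each for Z
    Zr1 = ZL - XL * denom / num_x
    Zr2 = ZL - YL * denom / num_y
    return 0.5 * (Zr1 + Zr2)                          # Eq. (12)
```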

The proposed integrated method aims to cope with three main issues in automatic

water-level detection: (1) the camera movements, (2) the ambient noises, and (3)

single-camera measurement. LSM and NCC are employed for detecting the camera

movement and updating the exterior orientation parameters automatically. Rather than using a

single image in the water-line detection process, an averaging image is formed in order to

reduce the rain-induced noises. Ultimately, the water level is obtained using only

single-camera images. Figure 4 shows a flowchart of all of the steps of the analysis in the

proposed approach.

Fig. 4. Flowchart for the proposed automatic single-camera water-level detection

3. Case study

The feasibility of the proposed water-level detection method was tested in the field. The case

study took place in an urban area prone to flash floods and road safety issues due to high

population density and heavy traffic [35]. The water level changes considerably with the tide

as the site lies in the downstream region near the sea, increasing the risk of flood under heavy

rain. These unique characteristics make water-level monitoring an essential task in this area.

Figure 5 illustrates the field setup: a digital camera (model: Canon 650D) was set up

19.63 m from the water gauge and captured images with a resolution of 1920 by 1280 pixels.

Software PhotoModeler (http://www.photomodeler.com) was applied to calibrate the camera.

The procedure was repeated several times until the intrinsic parameters stabilized at $(x_0, y_0, f) = (11.376, 7.582, 31.719)$ mm. After setting up the system, the camera first

acquired a reference image. Ten calibration points on the gauge were then selected, and their

object and image coordinates were measured manually (see the list in Table 1). The exterior orientation parameters were calculated according to the collinearity equations (Eq. 1), where $(X_L, Y_L, Z_L) = (0.00, -19.63, 5.40)$ m and $(\omega_x, \omega_y, \omega_z) = (-79.76, 42.72, -7.92)$ deg. Considering

the image size and the computational efficiency, the ROI was defined as a 200-pixel buffer

around the water gauge.

The system was tested for a 25-hour period in the field to capture the change of the

lighting condition throughout the day (including sunlight in day-time and street light at night).

The camera captured 10 images in a row at a rate of one frame per second (i.e. 1 fps) every

half hour and collected 51 10-image sets (image sets 1–51, hereafter) for subsequent analyses.

During the experiment, the ability of the proposed water-level detection method to overcome

two situations was tested: (1) the camera was purposely rotated between image sets 10–11,

26–27, and 38–39, and (2) rainfall occurred and the lens fogged up during image sets 41 and

50.

Fig. 5. Field setup for single-camera water-level detection

Table 1. Coordinates of the calibration points

       Object coordinates            Image coordinates
No.    X [m]    Y [m]    Z [m]       x [pix]    y [pix]
1 0.00 0.00 0.50 940 757
2 0.38 0.00 0.50 983 758
3 0.00 0.00 1.00 941 692
4 0.38 0.00 1.00 985 692
5 0.00 0.00 2.10 952 542
6 0.38 0.00 2.10 994 540
7 0.00 0.00 2.50 953 488
8 0.38 0.00 2.50 994 487
9 0.00 0.00 3.55 955 346
10 0.38 0.00 3.55 997 346

Examples of the average images and the water-line estimation corresponding to each

camera pose are shown in Fig. 6. The results of automatic water-level detection and the

corresponding by-eye measurements (which serve as the reference values in this experiment)

are listed in Table 2. Figure 7 further plots the time series of the water level. One can see

immediately that the water level changes significantly with the tide. While the resolution of

manual observation is 0.05 m, identifying the water level from 0.05 to 1.99 m, the resolution

of image-based estimation is up to 0.001 m, identifying the water level from –0.110 to 1.975

m. Note that the water level below 0 m is not available for by-eye measurement since the

minimum reading of the water gauge is zero. On the other hand, the proposed method is able

to specify water levels lower than 0 m because it solves the position of the water line in the

object space instead of recognizing gauge readings in the image.

The overall Root-Mean-Square Error (RMSE) between water levels estimated by the

two methods is 0.011 m, which suggests that the proposed method reaches comparable

accuracy to manual observation in general. The RMSEs corresponding to the two situations,

camera movement and rain-induced noise, are 0.009 and 0.009 m, showing no significant

differences in accuracy. In addition, the changing light condition throughout the 25-hour field

test had no significant effect on the accuracy in this experiment. Note that the by-eye

measurements could also contain error under varying light conditions due to poor image

quality.

Finally, it is noted that the analysis was implemented using MATLAB (R2013a, 64-bit),

and tested on a personal computer (processor: Intel Core i7-4700HQ 2.4 GHz; memory: 8.00

GB DDR3; operating system: Windows 7, 64 bit). The time required to process a single

image set was 7 seconds on average. Since the analysis was executed every half hour, the

performance is adequate for the current study.

In conclusion, the proposed approach detects the water level with an accuracy of 0.01 m

using single-camera images. It is able to cope with camera movement and rain-induced noise

and provide reliable results.

Fig. 6. The averaging images corresponding to four different camera poses (panels a–d)

Table 2. A comparison between manual observation and image-based estimation

No.    Time    Manual observation [m]    Image-based estimation [m]    Difference [m]
1 (reference) 13:00 1.52 1.520 0.000
2 13:30 1.78 1.778 0.002
3 14:00 1.94 1.938 0.002
4 14:30 1.99 1.975 0.015
5 15:00 1.87 1.853 0.017
6 15:30 1.59 1.587 0.003
7 16:00 1.29 1.277 0.013
8 16:30 0.99 0.978 0.012
9 17:00 0.72 0.720 0.000
10 17:30 0.45 0.431 0.019
11 18:00 0.20 0.189 0.011
12 18:30 N/A –0.055 N/A
13 19:00 N/A –0.076 N/A
14 19:30 0.05 0.057 –0.007
15 20:00 N/A 0.001 N/A
16 20:30 N/A –0.085 N/A
17 21:00 N/A –0.076 N/A
18 21:30 N/A –0.096 N/A
19 22:00 N/A –0.086 N/A
20 22:30 N/A –0.067 N/A
21 23:00 0.23 0.218 0.012
22 23:30 0.55 0.546 0.004
23 00:00 0.85 0.843 0.007
24 00:30 1.15 1.145 0.005
25 01:00 1.41 1.426 –0.016
26 01:30 1.60 1.606 –0.006
27 02:00 1.72 1.712 0.008
28 02:30 1.70 1.704 –0.004
29 03:00 1.55 1.560 –0.010
30 03:30 1.28 1.274 0.006
31 04:00 1.02 1.022 –0.002
32 04:30 0.74 0.714 0.026
33 05:00 0.47 0.452 0.018

34 05:30 0.20 0.191 0.009
35 06:00 N/A –0.051 N/A
36 06:30 N/A –0.076 N/A
37 07:00 N/A –0.082 N/A
38 07:30 N/A –0.076 N/A
39 08:00 N/A –0.072 N/A
40 08:30 N/A –0.076 N/A
41 09:00 0.05 0.061 –0.011
42 09:30 N/A –0.103 N/A
43 10:00 N/A –0.110 N/A
44 10:30 N/A –0.107 N/A
45 11:00 N/A –0.107 N/A
46 11:30 N/A –0.101 N/A
47 12:00 0.34 0.329 0.011
48 12:30 0.75 0.742 0.008
49 13:00 1.10 1.103 –0.003
50 13:30 1.41 1.418 –0.008
51 14:00 1.70 1.704 –0.004
RMSE 0.011

Fig. 7. The time series of the water level obtained by manual observation and image-based estimation

4. Conclusion

To achieve a real-time water-level monitoring system, water gauge images in a river are

captured consecutively. However, changes in camera orientation and bad weather conditions

can add difficulty to automatic identification of the water line in the water gauge images,

sometimes even interrupting the monitoring process. The proposed study integrates image

matching, feature detection, and collinearity equations to achieve automatic water-level

detection using single-camera data. The case study demonstrates that the method resolves

difficulties in real-time water-level detection with changes in camera orientation and blurred

images effectively. By utilizing not only computer vision techniques but also

photogrammetric principles, the proposed method is able to distinguish the water level with a

resolution of up to 0.01 m, which is finer than the gauge unit. Furthermore, since the method solves for the water level rather than identifying the exact gauge reading, it could be implemented even without a water gauge, provided that identifiable and reliable control points are available.

The system can be upgraded by implementing nighttime observation techniques (e.g. infrared

sensors) to provide better image quality under poor lighting conditions. An algorithm that

automatically adjusts the ROI and the tolerance can also improve the robustness of water-line

detection. The current method requires the water line to be relatively still, as it was designed for man-made facilities like river channels and reservoirs. However, by integrating the water-line detection algorithm developed by Kröhnert and Meichsner (2017) [36] and

water-level determination procedure used in this study, the proposed method could be

modified to specialize in automatic monitoring of running water. Ultimately, in combination

with wired or wireless communication channels, remote control, and real-time recording

techniques, a low-cost, all-weather, real-time monitoring system for various hydrological

applications can then be accomplished.

5. Acknowledgments

The authors thank Mr Tsung-Hsien Ruan (National Taiwan University) for assisting field data

collection, and Ms Yusan Yang (University of Pittsburgh) for valuable comments during the

manuscript preparation stage. Dr. Tong Sun and two anonymous reviewers provided helpful

comments that improved the quality of the manuscript.

References

[1] Lin, F., Chang, W. Y., Lee, L. C., Hsiao, H. T., Tsai, W. F., and Lai, J. S., 2013.

Applications of image recognition for real-time water level and surface velocity.

In Multimedia (ISM), 2013 IEEE International Symposium on (pp. 259–262). IEEE. doi:

10.1109/ISM.2013.49

[2] Wu, J.H., Chen L.C., Tseng, C.H., Lo, S.W., and Lin, F.P., 2015. Automated image

identification method in large-scale streaming for flood disaster monitoring: A case

study in Taiwan. Taiwan Water Conservancy, 63(1), 93–99. doi: 10.2991/iea-15.2015.65

[3] Buchanan, T. J., and Somers, W. P., 1968. Stage measurement at gaging stations. US

Government Printing Office.

[4] Stannard, D. I., and Rosenberry, D. O., 1991. A comparison of short-term measurements

of lake evaporation using eddy correlation and energy budget methods. Journal of

Hydrology, 122(1–4), 15–22. doi: 10.1016/0022-1694(91)90168-H

[5] Sweet, H. R., Rosenthal, G., and Atwood, D. F., 1990. Water level

monitoring-achievable accuracy and precision. Ground water and vadose zone

monitoring: Philadelphia, American Society for Testing and Materials, Special

Technical Publication, 1053, 178–192. doi: 10.1520/STP23407S

[6] Fukami, K., Yamaguchi, T., Imamura, H., and Tashiro, Y., 2008. Current status of river

discharge observation using non-contact current meter for operational use in Japan.

In World Environmental and Water Resources Congress (pp. 1–10). doi:

10.1061/40976(316)278

[7] Costa, J. E., Spicer, K. R., Cheng, R. T., Haeni, F. P., Melcher, N. B., Thurman, E. M.,

Plant, W. J., and Keller, W. C., 2000. Measuring stream discharge by non‐contact

methods: A proof‐of‐concept experiment. Geophysical Research Letters, 27(4), 553–556.

doi: 10.1029/1999GL006087

[8] Simpson, M. R., and Oltmann, R. N., 1993. Discharge-measurement system using an

acoustic Doppler current profiler with applications to large rivers and estuaries (p. 32).

US Government Printing Office.

[9] Chen, Y. C., Kuo, J. T., Yang, H. C., Yu, S. R., and Yang., H. C., 2007. Discharge

measurement during high flow. Journal of Taiwan Water Conservancy, 55, 21–33.

[10] Tsai, T. M., 2004. Stage measuring technique improvement of ultrasonic sensor gauge

and accuracy verifying procedure of water level data, Ph.D. dissertation, National

Cheng Kung University, Taiwan, Tainan, 22 p.

[11] Bin, S., Zhang, C., Li, L., and Wang, J., 2014. Research on water level measurement

based on image recognition for industrial boiler. In The 26th Chinese Control and

Decision Conference (2014 CCDC) (pp. 1344-1348). IEEE. doi:

10.1109/CCDC.2014.6852375

[12] Hies, T., Babu, P. S., Wang, Y., Duester, R., Eikaas H. S. and Meng, T., 2012. Enhanced

water-level detection by image processing. The 10th International Conference on

Hydroinformatics, Hamburg, Germany.

[13] Lin, K. H., 2013. Water-level Detection and Recognition Using Image Processing

Technology.

[14] Yu, J., and Hahn, H., 2010. Remote detection and monitoring of a water level using

Narrow Band Channel. Journal of Information Science and Engineering, 26(1), 71–82.

[15] Royem, A. A., Mui, C. K., Fuka, D. R., and Walter, M. T., 2012. Technical note:

proposing a low-tech, affordable, accurate stream stage monitoring system. Transactions

of the ASABE, 55(6), 2237–2242. doi: 10.13031/2013.42512

[16] Gilmore, T. E., Birgand, F., and Chapman, K. W., 2013. Source and magnitude of error

in an inexpensive image-based water level measurement system. Journal of Hydrology,

496, 178-186. doi: 10.1016/j.jhydrol.2013.05.011

[17] Yang, H. C., Wang, C. Y., and Yang, J. X., 2014. Applying image recording and

identification for measuring water stages to prevent flood hazards. Natural hazards,

74(2), 737-754. doi: 10.1007/s11069-014-1208-2

[18] Mikhail, E. M., and Weerawong, K., 1997. Exploitation of linear features in surveying

and photogrammetry. Journal of Surveying Engineering, 123(1), 32–47. doi:

10.1061/(ASCE)0733-9453(1997)123:1(32)

[19] Mikhail, E. M., and Ackermann, F. E., 1982. Observations and least squares, IEP-A

Dun-Donnelley, New York.

[20] Ghilani, C. D., 2010. Adjustment computations: spatial data analysis. John Wiley and

Sons. 178-200 p. doi: 10.1002/9780470121498.ch1

[21] Förstner, W., 1982. On the geometric precision of digital correlation. Int. Arch.

Photogrammetry and Remote Sensing, 24(3), 176–189.

[22] Ackermann, F., 1984. Digital image correlation: performance and potential application in

photogrammetry. The Photogrammetric Record, 11(64), 429-439. doi:

10.1111/j.1477-9730.1984.tb00505.x

[23] Gruen, A., 1985. Adaptive least squares correlation: a powerful image matching

technique. South African Journal of Photogrammetry, Remote Sensing and Cartography,

14(3), 175-187.

[24] Wrobel, B. P., 1987. Digital image matching by facets using object space models. In

Hague International Symposium (pp. 325-335). International Society for Optics and

Photonics. doi: 10.1117/12.941331

[25] Lewis, J. P., 1995. Fast normalized cross-correlation. In Vision interface (Vol. 10, No. 1,

pp. 120-123).

[26] Pilu, M., 1997. A direct method for stereo correspondence based on singular value

decomposition. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997

IEEE Computer Society Conference on (pp. 261-266). IEEE. doi:

10.1109/CVPR.1997.609330

[27] Zhao, F., Huang, Q., and Gao, W., 2006. Image matching by normalized

cross-correlation. In 2006 IEEE International Conference on Acoustics Speech and

Signal Processing Proceedings (Vol. 2, pp. II–II). IEEE. doi:

10.1109/ICASSP.2006.1660446

[28] Lerma, J. L., Navarro, S., Cabrelles, M., Seguí, A. E., and Hernández, D., 2013.

Automatic orientation and 3D modelling from markerless rock art imagery. ISPRS

Journal of Photogrammetry and Remote Sensing, 76, 64–75. doi:

10.1016/j.isprsjprs.2012.08.002

[29] Tasdelen, E., Unbekannt, H., Willner, K., and Oberst, J., 2012, September.

Implementation of an ISIS compatible stereo processing chain for 3D stereo

reconstruction. In European Planetary Science Congress 2012 (Vol. 1, p. 483). doi:

10.5194/isprsarchives-XL-4-257-2014

[30] Bethmann, F., and Luhmann, T., 2011. Least-squares matching with advanced geometric

transformation models. Photogrammetrie-Fernerkundung-Geoinformation, 2011(2),

57–69. doi: 10.1127/1432-8364/2011/0073

[31] Moselhi, O., and Shehab-Eldeen, T., 1999. Automated detection of surface defects in

water and sewer pipes. Automation in Construction, 8(5), 581–588. doi:

10.1016/S0926-5805(99)00007-2

[32] Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on

Pattern Analysis and Machine Intelligence, 8(6), 679–698. doi:

10.1109/TPAMI.1986.4767851

[33] Duda, R. O., and Hart, P. E., 1972. Use of the Hough transformation to detect lines and

curves in pictures. Communications of the ACM, 15(1), 11–15. doi:

10.1145/361237.361242

[34] Kirstein, S., Müller, K., Walecki-Mingers, M., and Deserno, T. M., 2012. Robust

adaptive flow line detection in sewer pipes. Automation in Construction, 21, 24–31. doi:

10.1016/j.autcon.2011.05.009

[35] Water Resources Agency Ministry of Economic Affairs, 1992. Research of urban area

floods by using monitoring system (1/2), 76–86 p.

[36] Kröhnert, M. and Meichsner, R., 2017. Segmentation of environmental time lapse image

sequences for the determination of shore lines captured by hand-held smartphone

cameras. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., IV-2/W4, 1-8. doi:

10.5194/isprs-annals-IV-2-W4-1-2017.

Highlights

• Automatic water-level detection is achieved using single-camera images.

• The method integrates computer vision techniques and photogrammetric principles.

• Image matching algorithms detect camera movements and adjust exterior orientations.

• Averaging images are formed to reduce ambient noise and enhance robustness.

• Centimeter accuracy for the water-level measurements is achieved.
