
CHAPTER 1

INTRODUCTION
To deal with underwater image processing, we must first consider the basic physics of
light propagation in the water medium. Physical properties of the medium cause degradation
effects not present in normal images taken in air. Underwater images are essentially
characterized by their poor visibility: light is exponentially attenuated as it travels through
water, so scenes appear poorly contrasted and hazy. Light attenuation limits the visibility
distance to about twenty meters in clear water and five meters or less in turbid water.
The light attenuation process is caused by absorption (which removes light energy) and
scattering (which changes the direction of light path). The absorption and scattering processes of
the light in water influence the overall performance of underwater imaging systems. Forward
scattering (randomly deviated light on its way from an object to the camera) generally leads to
blurring of the image features. On the other hand, backward scattering (the fraction of the light
reflected by the water towards the camera before it actually reaches the objects in the scene)
generally limits the contrast of the images, generating a characteristic veil that superimposes
itself on the image and hides the scene. Absorption and scattering effects are due not only to the
water itself but also to other components such as dissolved organic matter or small observable
floating particles.

The presence of floating particles known as “marine snow” (highly variable in kind
and concentration) increases absorption and scattering effects. The visibility range can be
increased with artificial lighting, but such sources not only suffer from the difficulties described
above (scattering and absorption) but also tend to illuminate the scene in a non-uniform
fashion, producing a bright spot in the center of the image surrounded by a poorly illuminated
area. Finally, as the amount of light is reduced with depth, colors drop off one by one
depending on their wavelengths. Blue light, having the shortest wavelength, travels the farthest
in water, so underwater images are dominated essentially by blue. In summary, the images we
are interested in can suffer from one or more of the following problems: limited range
visibility, low contrast, non-uniform lighting, blurring, bright artifacts, diminished colors
(bluish appearance) and noise. Therefore, applying standard computer vision techniques to
underwater imaging requires dealing first with these added problems.
1.1. Mathematical Model

Figure 1.1 shows the interaction between light, transmission medium, camera and scene.
The camera receives three types of light energy along the line of sight (LOS): the light
reflected from the scene that reaches the camera directly (direct transmission); the light from
the scene that is scattered by small particles but still reaches the camera (forward scattering);
and the ambient light reflected towards the camera by suspended particles (background
scattering). In real-world underwater scenes, the use of artificial light sources tends to
aggravate the adverse effect of background scattering, and the particles suspended in the water
generate unwanted noise and further degrade the visibility of already dim images. The imaging
process of underwater images can be represented as the linear superposition of the above three
components:

ET(x, y) = Ed(x, y) + Ef(x, y) + Eb(x, y)        (1.1)

where (x, y) represents the coordinates of an individual image pixel; ET(x, y), Ed(x, y),
Ef(x, y), and Eb(x, y) represent the total signal energy captured by the camera, the direct
transmission component, the forward scattering component, and the background scattering
component, respectively. Since the distance between the underwater scene and the camera is
relatively small, the forward scattering component can be ignored and only the direct
transmission and background scattering components are considered.

If we define J as the underwater scene radiance, t as the residual energy ratio of the
light from J that reaches the camera, and B as the homogeneous background light, then the
image I captured by the camera can be represented as in Eq. (1.2), known as the simplified
underwater image formation model (IFM):

Ic(x) = Jc(x) tc(x) + Bc (1 − tc(x))        (1.2)

where x represents one particular point (i, j) of the scene, c is one of the red, green and
blue (RGB) channels, and Jc(x) tc(x) and Bc(1 − tc(x)) represent the direct transmission and
background scattering components, respectively. The visibility of underwater images can be
improved using hardware and software solutions. The specialized hardware platforms and
cameras can be expensive and power-consuming. What is more, they are not adaptive to different
underwater environments. Thus, many algorithmic methods have been developed for underwater
image quality improvement by image enhancement or restoration.

1. The image restoration aims to recover a degraded image using a model of the degradation
and of the original image formation; it is essentially an inverse problem. These methods
are rigorous but they require many model parameters (such as the attenuation and
diffusion coefficients that characterize the water turbidity), which are seldom available in
tables and can be extremely variable. Another important parameter required is the depth
estimation of a given object in the scene.
2. Image enhancement uses qualitative subjective criteria to produce a more visually
pleasing image and they do not rely on any physical model for the image formation.
These kinds of approaches are usually simpler and faster than deconvolution methods.
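As a concrete illustration of the restoration viewpoint, the simplified model of Eq. (1.2) can be simulated and inverted numerically. The sketch below assumes the transmission map t and the background light B are already known, which in practice they are not; the clipping threshold t_min is an illustrative choice:

```python
import numpy as np

def apply_ifm(J, t, B):
    """Forward simplified image formation model, Eq. (1.2):
    I_c(x) = J_c(x) t_c(x) + B_c (1 - t_c(x))."""
    return J * t + B * (1.0 - t)

def invert_ifm(I, t, B, t_min=0.1):
    """Restoration as inversion of Eq. (1.2); t is clipped below
    t_min to avoid amplifying noise where transmission is tiny."""
    return (I - B) / np.maximum(t, t_min) + B

# Toy single-channel example.
J = np.array([[0.8, 0.2], [0.5, 0.9]])   # scene radiance
t = np.array([[0.9, 0.5], [0.7, 0.3]])   # transmission map (assumed known)
B = 0.6                                  # homogeneous background light
I = apply_ifm(J, t, B)                   # degraded observation
J_hat = invert_ifm(I, t, B)              # recovered scene
```

In practice the difficulty lies entirely in estimating t and B from the single degraded image, which is what the methods reviewed below attempt to do.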

In what follows we give a general view of some of the most recent methods that address
the topic of underwater image processing, providing an introduction to the problem and
enumerating the difficulties found. Our aim is to give the reader, in particular one who is not a
specialist in the field and who has a specific problem to address and solve, an indication of the
available methods, focusing on the imaging conditions for which they were developed (lighting
conditions, depth, environment where the approach was tested, quality evaluation of the results)
and considering the model characteristics and assumptions of each approach. In this way we
wish to guide the reader towards the technique that best suits his or her problem or application.

1.2. Characteristics of Underwater Images

Unlike conventional images taken in open air, underwater photographs show a strong
dominance of bluish and greenish colors. Moreover, the strong attenuation of light in water
relative to air, together with a greater diffusion of the incident light, considerably reduces
visibility. Objects at large distances from the acquisition system or the observer, but also at
medium or even relatively short distances in some cases, are thus hardly visible and poorly
contrasted with respect to their environment. In addition, in the presence of particles suspended
in the water (sand, plankton, algae, etc.), the incident light is reflected by these particles and
forms a kind of inhomogeneous mist that is added to the observed scene. This turbidity of the
water, most often whitish, affects not only the visibility but also the color dynamics of the
objects in the image by tarnishing or veiling them. Furthermore, the formation of an underwater
image depends highly on the nature of the water in which it was acquired. Natural waters can
have very varied constitutions in terms of plants or minerals dissolved or suspended in them,
and the propagation of light in such a medium is strongly governed by this factor.

Figure 1.1. Diagram of underwater optical imaging.

1.2.1. Bio-optical Properties of Natural Waters

Natural waters and their inherent optical properties (IOPs) depend on the various
elements that enter into their composition. While clear waters mostly diffuse blue light,
organic-rich waters appear greener, sometimes even yellow. Numerous measurements have
made it possible to establish a link between the optical properties of absorption and diffusion
and the chemical nature of the main components of the water [1].

Bukata et al. [2] established a first-approximation classification according to the
total concentration of chlorophyll-based pigments, including phytoplankton. A second
component corresponding to Dissolved Organic Matter, or yellow substance, was added later
improving the model. From optical measurements and chemical measurements, spectral
attenuation curves for different concentrations were obtained. An attenuation model was then
established by regression of these data, making it possible to define a function expressing the
attenuation coefficient directly from the concentrations.

1.2.2. Components and Attenuation by Water

The first component of natural waters is the water molecule itself. If we consider the total
attenuation coefficient, it is possible to decompose this term into several components
corresponding respectively to attenuation by pure water and to different types of particles. In
fact, the components that most affect attenuation in natural waters are respectively the
chlorophyll-based pigments present in some living cell organisms in suspension and the
dissolved organic matter. These relatively large particles scatter more light at higher wavelengths
than water, thus affecting the color perceived by the observer (Gibson, 2015). According to
Chiang and Chen (2011), it is possible to restrict the model to three coefficients corresponding
to water, pigments and dissolved matter.

In conventional cameras the pinhole is replaced by a lens which, thanks to its larger
surface area and its light-focusing property, gathers more light and yields a brighter, sharper
image. The elements that make up a basic camera are the following: one
or more lenses, the image plane (film, CCD, CMOS), a focusing device that makes it possible to
move the lens with respect to the image plane, the diaphragm that controls the amount of light
received by the lens and the shutter that controls the aperture time of the diaphragm [3]. The
quality of the image observed will depend on sensor optics, electronics (amplification,
quantification and sampling) as well as the environment in which the light will propagate before
reaching the objective and lighting conditions [4].

1.2. Dataset

An open image dataset called TURBID has been created [5] to support research on
underwater images. The dataset consists of three groups of images: Milk (20 images),
DeepBlue (20 images) and Chlorophyll (42 images). Another dataset, UIEB, includes 890
underwater images, each with a raw image and its corresponding high-quality reference, plus a
further set of 60 challenging underwater images [6].
1.3. Motivation

Software-based underwater image enhancement techniques usually work by
manipulating some aspect of the mathematical model of underwater image formation to
compensate for the degrading effects introduced by the water’s light absorption and by the
organic and inorganic particles suspended in it. Current state-of-the-art methods for underwater
image restoration are typically designed for a single image input, since using multiple images
usually requires more computational resources and may not be suitable for real-time
applications.
Among single-image enhancement methods, those based on image fusion show the most
promising results. The general principle followed by fusion-based techniques is to split the
input image into two versions, process them separately, calculate weights based on the features
of each version, and finally use the calculated weights to fuse the images into a final result.
Although this approach produces good results in most cases, it can over-compensate the color
and sometimes distorts the contrast. Different variations of these techniques exist and there is
room for improvement. Specifically, the color-degradation part of the method can be improved,
for instance with a linear mapping function. Combining an improved color-correction
algorithm with the image-fusion principle could therefore further improve the overall quality of
the enhanced image.
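The fusion principle described above can be sketched as follows. The two branches (a contrast stretch and a gray-world balance) and the saliency-style weight are illustrative stand-ins; published fusion methods use more elaborate inputs and multiscale blending:

```python
import numpy as np

def contrast_stretch(img):
    # Branch 1: global min-max stretch (illustrative choice).
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def gray_world(img):
    # Branch 2: gray-world white balance (illustrative choice).
    gain = img.mean() / (img.mean(axis=(0, 1)) + 1e-8)
    return np.clip(img * gain, 0.0, 1.0)

def saliency_weight(img):
    # Per-pixel weight: distance of luminance from its mean,
    # a crude stand-in for the feature-based weights in the text.
    g = img.mean(axis=2)
    return np.abs(g - g.mean()) + 1e-6

def fuse(img):
    a, b = contrast_stretch(img), gray_world(img)
    wa, wb = saliency_weight(a), saliency_weight(b)
    s = wa + wb                      # normalize so weights sum to 1
    wa, wb = wa / s, wb / s
    return a * wa[..., None] + b * wb[..., None]

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.6, size=(8, 8, 3))   # synthetic low-contrast image
out = fuse(img)
```

Because the fusion is a per-pixel convex combination of the two branches, the result always stays within the value range of its inputs.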

1.4. Problem Statement

It is a general observation that the quality of an image taken in water is degraded. Such
an image loses the tonal quality and the contrast necessary for distinguishing the objects of
interest. The situation becomes more challenging when neighboring objects have very minor
differences in pixel intensity values. This poses a serious challenge for extracting finer details
from the data and reduces the performance of the algorithms used to extract information from
the images. Therefore, there is a pressing demand that images taken underwater be processed in
such a way that they represent their true tonal details. Underwater imagery has a wide range of
applications, for example the investigation of aquatic life, water quality monitoring, and
defense and security; images or videos obtained for these purposes must therefore carry exact
details.
CHAPTER 2
RELATED WORK
This chapter presents a comprehensive review of the literature on the various
techniques used to enhance underwater images and to restore their visibility and quality. There
is a wide variety of technologies and techniques for underwater image restoration; the
following discussion therefore categorizes and groups them according to their distinguishing
features.
Iqbal et al. [7] worked on enhancing low-quality images using an Unsupervised Colour
Correction Method (UCM). Underwater images suffer from reduced contrast and a non-uniform
color cast because of the absorption and scattering of light in the marine environment. To
address this, the authors proposed the UCM for underwater image quality enhancement, based
on color balancing and on contrast improvement in both the RGB and HSI color models.
Firstly, the color cast is reduced by equalizing the color values. Secondly, a contrast-stretching
method is applied to increase the red component by stretching the red histogram towards the
maximum, while the blue component is reduced by stretching the blue histogram towards the
minimum. Thirdly, the Saturation and Intensity components of the HSI color model are used
for contrast correction: Saturation to restore the true colors and Intensity to address the
illumination problem.
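The histogram-stretching step of this kind of colour correction can be sketched as below; the stretch targets (red towards 1, blue towards 0) follow the idea just described, not the full UCM:

```python
import numpy as np

def stretch(channel, new_lo, new_hi):
    """Affine stretch of one channel's histogram to [new_lo, new_hi]."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-8) * (new_hi - new_lo) + new_lo

def ucm_like_stretch(img):
    """Red stretched towards the maximum, blue towards the minimum,
    as in the second step described above (not the full UCM)."""
    out = img.copy()
    out[..., 0] = stretch(img[..., 0], img[..., 0].min(), 1.0)
    out[..., 2] = stretch(img[..., 2], 0.0, img[..., 2].max())
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.7, size=(8, 8, 3))
out = ucm_like_stretch(img)
```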
Jinbo Chen et al. [8] proposed a sonar-image-based detection method for an underwater
pipeline tracker. The surveillance and inspection of underwater pipelines are carried out by
operators who drive a remotely operated underwater vehicle (ROV) with a camera mounted on
it. In extremely turbid water, however, the camera cannot capture the scene even with
supplementary high-intensity lighting, and optical detection devices are unable to complete the
surveillance task. In recent years forward-looking sonar, which is not affected by light or
turbidity, has been broadly applied to underwater inspection, making it appropriate for pipeline
inspection. However, motion of the ROV caused by the water flow can easily make the target
drift out of the sonar image. In addition, sonar images have high noise and low contrast, so it is
difficult for an operator to identify the pipeline in them. Furthermore, manual observation of
underwater pipelines is tedious and time-consuming, and mistakes are easily made due to
operator fatigue and distraction. The study therefore focuses on image processing algorithms
that detect the pipeline automatically. In the proposed technique, the images are first enhanced
using a Gabor filter; an edge detector is then applied, and finally the parameters of the pipeline
are computed with a Hough transform. To reduce the search area, a Kalman filter is used to
predict the parameters of the pipeline in the next frame. The research shows that the vision
system is suited to the observation of underwater pipelines.
Hung-Yu et al. [9] worked on low-complexity underwater image enhancement based on
the dark channel prior. Blurred underwater images are a recurring problem in deep-sea
engineering. The authors proposed an efficient, low-complexity underwater image enhancement
technique based on the dark channel prior. Their technique employs a median filter in place of
the soft-matting step to estimate the depth map of the image. Furthermore, a color correction
method is adopted to improve the color contrast of underwater images. Experimental results
show that the proposed approach improves underwater images while reducing execution time;
the technique requires fewer computing resources and is well suited to real-time surveillance
and underwater navigation.
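A minimal sketch of the dark-channel computation and the resulting transmission estimate is given below. The patch size, omega and the brute-force minimum filter are illustrative choices; the cited method additionally refines the map with a median filter:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB followed by a local minimum filter
    (plain loops keep the sketch dependency-free)."""
    m = img.min(axis=2)
    pad = patch // 2
    mp = np.pad(m, pad, mode='edge')
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = mp[i:i + patch, j:j + patch].min()
    return out

def transmission_from_dark(img, B, omega=0.95):
    """DCP transmission heuristic t = 1 - omega * dark(I / B)."""
    return 1.0 - omega * dark_channel(img / B)

img = np.full((4, 4, 3), 0.5)            # uniform toy image
B = np.array([1.0, 1.0, 1.0])            # background light (assumed known)
t = transmission_from_dark(img, B)       # 1 - 0.95 * 0.5 = 0.525 everywhere
```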
Chiang et al. [10] addressed underwater image enhancement by wavelength
compensation and dehazing. Light scattering and color change are the two main sources of
distortion in underwater photography. Light scattering is caused by light incident on objects
being reflected and deflected many times by particles in the water before reaching the camera,
which lowers the visibility and contrast of the captured image. Color change corresponds to the
varying degrees of attenuation encountered by light of different wavelengths traveling in water,
leaving ambient underwater environments dominated by a bluish tone. No existing underwater
processing technique could simultaneously handle light scattering, color change, and the
possible presence of artificial lighting. This work proposed a novel systematic approach that
improves underwater images with a dehazing algorithm, compensates the attenuation difference
along the transmission path, and takes the influence of a possible artificial light source into
consideration. First the depth map, i.e., the distances between the objects and the camera, is
estimated, and the foreground and background of the scene are segmented. By accounting for
the effect of artificial light, the haze and the wavelength-dependent attenuation along the
underwater path to the camera are corrected. Secondly, the water depth in the scene is estimated
from the residual energy ratios of the different color channels in the background light.
Shamsuddin et al. [11] studied the significance level of image enhancement techniques
for underwater images. Underwater imaging is quite demanding in the area of photography,
especially with low-resolution, ordinary digital cameras. Problems arising in underwater
images include limited range visibility, low contrast, non-uniform lighting, blurring, bright
artifacts, color diminution, and noise. This research concentrated on color diminution, since
applying typical computer vision techniques to marine imaging requires dealing with these
problems first. Both automatic and manual methods are used to record the mean values of the
stretched histogram.
Hitam et al. [12] worked on mixture Contrast Limited Adaptive Histogram Equalization
(CLAHE) for underwater image enhancement. Improving the quality of underwater images has
received substantial attention because of the reduced visibility caused by the physical
properties of water. The authors presented a hybrid CLAHE method, developed specifically for
underwater image enhancement, that operates on the RGB and HSV color spaces; the two
results are combined using the Euclidean norm. Experimental results show that the proposed
approach considerably improves the visual quality of underwater images by enhancing
contrast as well as reducing noise and artifacts.
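The combination step can be sketched as a per-pixel Euclidean (quadratic) mean of the two enhanced results; below, two simple tone curves stand in for the actual RGB- and HSV-space CLAHE outputs:

```python
import numpy as np

def euclidean_fuse(a, b):
    """Per-pixel Euclidean (quadratic) mean of two enhanced results."""
    return np.sqrt((a ** 2 + b ** 2) / 2.0)

rng = np.random.default_rng(1)
img = rng.uniform(0.3, 0.5, size=(16, 16))
a = np.sqrt(img)      # stand-in for the RGB-space CLAHE output
b = img ** 2          # stand-in for the HSV-space CLAHE output
out = euclidean_fuse(a, b)
```

The quadratic mean always lies between the two inputs, so the fused image never leaves the value range of the enhanced branches.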
The correction method proposed by [13] is based on an estimate of the attenuation
coefficient. The estimation is performed using the known reflectance of a grey reference target
present in the images. This object, named Spectralon, consists of a plastic surface that reflects
light very strongly, following an almost perfect Lambertian reflection [13]. In addition, the
camera used, its properties and its position are known. Thanks to this a-priori knowledge about
the content of the scene, the photometric behavior of the objects and the acquisition system, the
luminance reaching the Spectralon is known, as well as the luminance reaching the camera for
each of the three chromatic channels. In this method, the camera is placed vertically above the
ocean floor, so the height of the water column corresponds to the depth, and the luminance
received by the camera can be written as a function of depth. The author makes the following
three hypotheses allowing the correction [14]:
 The photographed seabed reflects light following a Lambertian distribution;
 The Spectralon receives as much light as the surrounding environment;
 The camera has stable sensitivity curves with respect to illumination variations.
Having all these elements, it is possible to express the attenuation coefficient as a
function of depth and luminance. The reflectance at a given point of an object corresponds to
the ratio of the incident (incoming) luminance Lin received by the Spectralon to the diffused
(outgoing) luminance Lout received by the camera [15]. Having measured the depth and the
attenuation coefficient, Peng, Zhao, and Cosman (2015) applied Beer’s law to all the pixels of
the image to find the corrected pixel values. We could not reproduce the experiment, which
requires an acquisition campaign with the reference object. As far as one can judge from the
published results, the correction is of good quality from a visual point of view. However, the
study is limited to clear water and photographs taken in shallow waters. High turbidity of the
water, causing white scattering of the incident light, would likely distort the estimation of the
attenuation coefficient and of the background luminance. Moreover, this method requires
controlled acquisition conditions (illumination, depth, position of the camera, etc.).
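Once the attenuation coefficient and the depth are available, the Beer's-law correction amounts to multiplying each pixel by exp(c·d); the coefficient values below are illustrative, not measured ones:

```python
import numpy as np

def beer_lambert_correct(I, c, d):
    """Invert Beer's law: luminance decays as exp(-c*d) over a water
    column of depth d, so the corrected value is I * exp(c*d)."""
    return I * np.exp(c * d)

# Hypothetical per-channel coefficients (red attenuates fastest).
c = np.array([0.30, 0.10, 0.04])    # per metre, R/G/B (assumed values)
d = 5.0                             # depth of the water column, metres
I = np.array([0.10, 0.35, 0.50])    # observed pixel
J = beer_lambert_correct(I, c, d)   # red is boosted the most
```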
The method described in [16] is based on the use of a polarizing filter. One of the
problems in inverting the physical model is estimating the distance between the objects and the
camera. Due to attenuation and diffusion, the visibility in the images is very small, and the
purpose of this method is to increase it. The authors therefore propose acquiring two images
with different polarizations in order to recover additional information. They first describe the
mechanism of image formation at the camera and then implement a processing step that
compensates for the effect of scattering to increase the visibility in the image. This method
relies not only on the inversion of the propagation model but also on the nature and properties
of the acquisition system. Following the light propagation model, the signal is considered as
the sum of the direct transmission D, i.e. the attenuated light coming from the objects, and the
forward scatter, i.e. the ambient light scattered in a direction forming a small angle with the
line of sight of the camera. The signal corresponding to the image is measured and the direct
transmission is obtained. Forward scatter makes the image blurry; the authors use a Point
Spread Function (PSF) to compensate for this phenomenon. A PSF usable in the underwater
setting accounts for the distance, the inverse Fourier transform and the spatial frequency in the
image plane [17].
Generally, a degraded image may negatively impact its interpretation by the human eye
as well as the performance of a computer vision system. This section describes the phenomena
that come into play at the different stages of the scene acquisition chain and that can alter the
quality of the captured image, and then moves on to the methods used to improve image
quality [18].
The method in [19] exploits the effects of polarization to compensate for the degradation
of visibility. Considering a light source illuminating the particles along the line of sight, an
incidence plane is formed by a ray coming from the source and the line of sight. The
backscattered light is partially polarized perpendicularly to this plane; for this reason, typical
natural backscattering in the underwater environment is partially horizontally polarized. In
order to measure the different polarization components, the scene is acquired through a
polarizing filter. Since the backscattering is polarized, its intensity depends on the orientation of
the filter around the optical axis. There are two orthogonal orientations for which the
transmittance of the backscattered light reaches its extreme values, Bmax and Bmin; thus there
are two linear polarization components [20]. When the polarizer is mounted, the intensity of
each pixel in the image depends on a cosine function of the orientation angle. Similar to the
backscattering, there are two intensity extremes, and the visibility enhancement algorithm
compensates for the haze effect caused by the scattering [21].
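Under these assumptions the recovery from the two polarizer images can be sketched as follows; the degree of polarization p and the backscatter at infinity A_inf are assumed to have been estimated beforehand from a scene-free region:

```python
import numpy as np

def polarization_dehaze(I_max, I_min, A_inf, p):
    """Visibility recovery from two polarizer images: estimate the
    backscatter from the polarization difference, then invert the haze
    model. p (degree of polarization of the backscatter) and A_inf
    (backscatter at infinity) are assumed known."""
    I_total = I_max + I_min                   # total measured intensity
    B = (I_max - I_min) / p                   # backscatter estimate
    t = np.clip(1.0 - B / A_inf, 0.05, 1.0)   # transmission estimate
    return (I_total - B) / t                  # recovered scene radiance

# Synthetic pixel built with true radiance 0.6, t = 0.5, A_inf = 0.8, p = 0.4.
I_max, I_min = np.array([0.43]), np.array([0.27])
out = polarization_dehaze(I_max, I_min, A_inf=0.8, p=0.4)   # recovers ~0.6
```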

From the local characteristics of the image a global regularity measure is computed.
This regularity is called “total variation” (TV) and is calculated as the sum of the local
gradients of the image. In the case of additive noise, the denoised image is found by
minimizing a functional in which TV is used as a regularization term, penalizing large
variations while allowing discontinuities along sufficiently regular contours. The denoising
strength is controlled by a parameter: the larger it is, the smaller the total variation of the
resulting image [22]. The disadvantage of this type of smoothing is that textures can be
considered as noise and erased. Yang [23] also observed the creation of staircase effects. This
method can be used not only for noise suppression but also to restore images degraded by blur.
Note that the “Total Variation Model” method can be interpreted as a special case of the
Bayesian approaches.
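A minimal gradient-descent sketch of TV denoising is shown below, using a smoothed TV term so the gradient is defined everywhere; the step size and regularization weight are illustrative:

```python
import numpy as np

def tv_denoise(f, lam=0.2, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam * TV(u), with the TV
    term smoothed by eps so its gradient is defined everywhere."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag              # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * ((u - f) - lam * div)     # descent step
    return u

rng = np.random.default_rng(3)
f = 0.5 + 0.2 * rng.standard_normal((16, 16))    # noisy flat image
u = tv_denoise(f)                                # smoother than f
```

Increasing lam strengthens the smoothing, which is exactly the trade-off (noise removal versus texture loss) discussed above.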

The Bayes formula expresses the posterior probability of an image given a noisy
observation, and denoising amounts to maximizing this probability. The probability of the
observation is known and plays the role of a normalization constant, while the likelihood is
determined from the model of data formation. The maximum likelihood (ML) estimate consists
of looking for the image that maximizes the likelihood. For simplicity, it is preferable to
maximize the logarithm of this product of probabilities. In the case of Gaussian noise, the
maximization is done by looking for the image that best fits the corresponding probability
density. The maximum a posteriori (MAP) estimate has the advantage of being able to take the
prior into account: it consists of finding the image that maximizes the posterior, obtained by
applying the logarithmic function to the Bayes formula [24].
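For a one-dimensional Gaussian example the MAP estimate has a closed form that makes the role of the prior explicit; this toy computation assumes scalar data and known variances:

```python
# MAP toy example: observation y = x + n with Gaussian noise n ~ N(0, s2)
# and Gaussian prior x ~ N(mu, t2). Maximizing log p(y|x) + log p(x)
# gives the closed form below: a precision-weighted average of the data
# and the prior mean (all numeric values are illustrative).
def map_gaussian(y, mu, s2, t2):
    return (t2 * y + s2 * mu) / (t2 + s2)

y, mu = 0.9, 0.5          # observation and prior mean
s2, t2 = 0.04, 0.04       # noise and prior variances (equal here)
x_map = map_gaussian(y, mu, s2, t2)   # equal variances -> midpoint 0.7
```

As t2 grows (a vaguer prior) the estimate approaches the ML solution y; as t2 shrinks it approaches the prior mean mu.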

Markov fields are often used to define the prior probability. The image is considered as
an arrangement of atoms that can occupy several energy states, the grey levels, where the state
of each atom depends only on its neighbors. The Hammersley-Clifford theorem gives the
expression of this probability, involving a normalization constant and a sum of potential
functions computed on cliques (neighborhoods). The prior corresponds to the choice of cliques
and potential functions (a differential operator, for example) [25]. To determine the MAP, there
are two families of methods:

 Deterministic algorithms that are fast, but may converge to a local minimum far
from the global minimum. We mention for example the gradient descent
algorithm and its variants.
 Stochastic algorithms that are slow but provide convergence towards a global
minimum.

Wavelets were introduced in the early 1980s to overcome a limitation of the Fourier
transform, which does not allow locating the frequencies of a signal in time. Denoising consists
in keeping the approximation and detail coefficients that have significant values, considering
that low values correspond to noise, and then inverting the transform in order to recover the
image without noise [26]. To recover the coefficients of interest, a threshold must be found that
separates the coefficients corresponding to noise; a great diversity of methods exists for this
(hard and soft thresholding). Note that there are other methods belonging to the same family as
wavelets (time-frequency representations), such as curvelets or contourlets. In addition to
noise-related problems, the image may suffer from loss of contrast [27].
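Wavelet-shrinkage denoising can be sketched with a single level of the 1-D Haar transform and soft thresholding; the threshold value is an illustrative choice:

```python
import numpy as np

def haar_1d(x):
    """One level of the 1-D Haar transform: approximation a, detail d."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar_1d(a, d):
    """Inverse of the single-level Haar transform."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(d, thr):
    """Soft thresholding: shrink small detail coefficients towards zero."""
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

def denoise(x, thr=0.3):
    a, d = haar_1d(x)
    return ihaar_1d(a, soft(d, thr))   # small details treated as noise

x = np.array([1.0, 1.1, 1.0, 0.9])
out = denoise(x)        # small fluctuations removed: pairs are averaged
```

With thr = 0 the transform is perfectly invertible, so all of the denoising behaviour comes from the shrinkage of the detail coefficients.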
The purpose of the histogram equalization approach is to modify the histogram of the
image by assigning new values to the pixels of the input image. The histogram of a low-contrast
image occupies a small portion of the intensity range; the goal of equalization is to spread the
histogram over a larger range. To do this, the approach computes the cumulative histogram of
the image and applies it (after normalization) to the image in order to spread its histogram
uniformly over the entire dynamic range. Other mapping functions, such as logarithmic,
exponential and power functions, can be used to obtain a histogram with a particular shape.
Histogram equalization often gives better results when applied locally [28].
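The cumulative-histogram mapping described above can be sketched as follows; the bin count and the synthetic low-contrast input are illustrative:

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Apply the normalized cumulative histogram of the input as a
    monotone mapping of its grey levels (values assumed in [0, 1])."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                       # normalize: output spans [0, 1]
    return np.interp(img, edges[:-1], cdf)

rng = np.random.default_rng(2)
low = rng.uniform(0.40, 0.55, size=(32, 32))   # low-contrast input
out = hist_equalize(low)                       # much wider intensity range
```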

In many cases the histogram of the image covers a broad dynamic range. Local
histogram equalization is then necessary to bring out the contrasts of the different parts of the
image. For this, the image is scanned with a small window and the equalization principle
described above is applied to each window separately. Then, in order to eliminate the block
effects generated by the difference between the histograms of neighboring blocks, a bilinear
interpolation is used. This method is called Contrast Limited Adaptive Histogram Equalization
(CLAHE). The defect of this type of method is over-enhancement of contrast: it can bring out
false details. Because of its local character, the method also requires more processing time
than a global equalization [29].

Retinex is a portmanteau of the words retina and cortex. The method is based on the
observation that the human visual system perceives the contrast and colour of an object in
roughly the same way under different illumination conditions. This is not the case for camera
sensors, because the intensity value of a pixel depends strongly on the photon flux. The objective
is to build, from a given image, a new image illuminated by a constant white light [30].
Single-scale retinex applies a nonlinear operation to the logarithm of the input image: the log of a
smoothed illumination estimate is subtracted from the log of the image. Multi Scale Retinex
(MSR) is, as the name suggests, a combination of several single-scale retinex outputs (usually
three) computed at different scales (different surround sizes). Experimentally, uniform weighting
of the scales has been shown to give good results. The last step of the algorithm is a
normalization that brings the result back to the definition interval of the image using an affine
operation. The retinex algorithm is simple and automatic but requires a large signal-to-noise ratio
to obtain a satisfactory result [31]. To improve the processing time and handle large images more
quickly, it is customary to replace the convolution in the spatial domain by a multiplication in the
frequency domain. In the following, we tackle another problem, which is white balance [32].
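The single-scale log-difference at the heart of retinex can be sketched as follows. This is a minimal illustration, assuming a crude box blur in place of the Gaussian surround used in practice; the +1 offset inside the logarithms (to avoid log(0)) and all names are our choices.

```python
import math

def box_blur(image, radius):
    """Crude box blur standing in for the Gaussian surround of retinex."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Clamp window indices at the borders.
            vals = [image[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in range(-radius, radius + 1)
                    for dj in range(-radius, radius + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def single_scale_retinex(image, radius=1):
    """R = log(I) - log(blur(I)): reflectance estimate with illumination removed."""
    blurred = box_blur(image, radius)
    return [[math.log(p + 1.0) - math.log(b + 1.0)
             for p, b in zip(row, brow)]
            for row, brow in zip(image, blurred)]
```

MSR would average several such outputs computed with different surround radii, then apply the affine normalization mentioned above.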

Chao and Wang [33] used the dark channel prior to recover the original clarity of objects
in underwater images. The technique was used to counter the scattering of particles. The study
showed that the existence of purplish pixels is mainly due to the scattering of incident light. In
fact, when the fog sensation is strong at a point, the luminance of the pixel is high because of the
scattering of the incident light. When the inversion of the attenuation is performed, the values of
the red channel are greatly increased while the additional luminance due to the incoming
scattering is not subtracted [34]. In order to remove this purplish appearance from the corrected
images, the value of the affected pixels is adjusted by adding to the correction the effect of the
luminance and the red diffusion coefficient in the propagation. This procedure is applied
iteratively to successively reduce the number of defective pixels detected. The algorithm
employed is based on a least squares method that minimizes their cardinality [35]. Schettini and
Corchs [36] conducted a review of various image enhancement techniques and concluded that
there are significant challenges in obtaining object visibility at short and long distances in
underwater settings. The study concluded that numerous approaches are available for image
enhancement, but the majority of them are limited to ordinary images and only a small number
are focused specifically on the enhancement of underwater images [37].

One class of underwater image enhancement methods is contrast correction based
algorithms, which tackle the problem of poor contrast in underwater images. One of the popular
and traditional methods for contrast enhancement is histogram equalization (HE) [38]. But HE
does not give the desired results, as underwater images have non-uniform contrast and color
distortion that cannot be rectified by HE. Adaptive HE [39] can address the problem of non-
uniform contrast in underwater images, but again the color distortion problem remains
unaddressed. Some researchers employed histogram stretching for underwater image
enhancement; e.g., the Integrated Colour Model (ICM) [40] used contrast stretching of the RGB
and HSI models to achieve an enhanced image. The method in [41] employed CLAHE (Contrast
Limited Adaptive HE) on the RGB and HSI models and formed the enhanced image by taking
the Euclidean norm of the contrast stretched RGB and HSI images. Ghani et al. [42] applied
contrast correction on different color models and integrated the results to improve the overall
quality of the image. The authors of [43] proposed an adaptive fuzzy based contrast correction
method for underwater images captured in turbid media. The method in [44] applied CLAHE in
the HSV and YIQ color spaces to improve the contrast and fused the results to get the final
image. This fusion improves the color quality, but the method is slow and the results are not
visually appealing. The authors of [45] also employed histogram equalization of the RGB
channels to improve the contrast of underwater images. Mathur et al. [46] fused the results of
CLAHE and guided filters to get an enhanced image, but the results still had poor contrast. Since
poor contrast is not the only issue with underwater images, contrast correction alone does not
yield desirable results. Color correction, on the other hand, sometimes also improves the contrast
of the images by changing the color distribution of the pixels in the three color channels. The
next subsection provides insight into color correction based underwater image enhancement.
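The histogram (contrast) stretching used by ICM and related methods above maps a chosen low/high percentile range of each channel onto the full display range. A minimal per-channel sketch follows; the 1%/99% percentile choices and the clamping convention are our assumptions, not taken from the cited works.

```python
def contrast_stretch(channel, low_pct=1.0, high_pct=99.0):
    """Map the [low_pct, high_pct] percentile range of a channel onto [0, 255]."""
    vals = sorted(channel)
    n = len(vals)
    lo = vals[int(n * low_pct / 100)]
    hi = vals[min(int(n * high_pct / 100), n - 1)]
    span = max(hi - lo, 1)  # avoid division by zero on flat channels
    # Linear rescale, clamping values that fall outside the percentile window.
    return [min(255, max(0, round((v - lo) / span * 255))) for v in channel]
```

Applying this independently to R, G and B (or to the saturation/intensity channels of HSI) is the basic operation these stretching-based enhancement methods build on.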

Another class of underwater image enhancement methods focuses on color correction of
underwater images to remove the problem of color cast. The fundamentals of color correction
include the gray world assumption [47], white balance [48] and retinex theory [49]. These
theories form the basis of the methods for underwater image color correction. The method in [50]
used the gray world assumption and white balance in lαβ space for color correction of
underwater images. The authors of [51] modified Automatic Color Equalization (ACE) [52],
which is based on white balance theory, so that the modified ACE can be applied to underwater
videos. Although stated to be automatic, the method requires tuning of parameters, making it less
usable for underwater images. The method in [53] employed empirical mode decomposition
(EMD) and used a genetic algorithm to adjust the weights of the EMD layers to enhance
underwater images, but the contrast is not improved to a good extent. The approach in [54] used
retinex theory along with an optimization strategy for underwater image enhancement; this
method can also be applied to other types of color degraded images. Hou et al. [55] applied
filtering to the saturation and intensity values while leaving the hue component unchanged. The
results are appealing, but the contrast is not good and haze is present in the images. The method
in [56] restored the colors of the image by applying Rayleigh distribution based stretching in the
YCbCr color model.
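The gray world assumption [47], which underpins several of the color correction methods above, posits that the average reflectance of a scene is achromatic, so each channel can be rescaled toward the common mean. A minimal sketch under that assumption; the flat pixel-list representation and the 255 clamp are our illustrative choices.

```python
def gray_world(pixels):
    """Scale each RGB channel so its mean equals the mean of the channel means."""
    n = len(pixels)
    # Per-channel means over the whole image.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0  # the 'gray' the scene average is assumed to be
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]
```

In underwater images the red channel mean is typically far below the others, so this rescaling boosts red strongly, which is exactly why unguided gray world correction can over-amplify red noise in deep-water scenes.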
Both color and contrast related issues need to be eliminated from underwater images to
achieve the desired results. Hence, most of the works in the literature are hybrids of color and
contrast correction algorithms.
Another class consists of hybrid contrast and color correction algorithms, which solve the
contrast and color related problems in underwater images. The Unsupervised Colour Correction
Method (UCM) [57] used the gray world assumption for color correction, followed by contrast
stretching, to obtain an enhanced image. The above-mentioned method sometimes generates
over-enhanced and under-enhanced regions in the output image. To overcome these drawbacks,
Ghani et al. [58] proposed a technique that combined a modified ICM and UCM to produce
better results. This technique produces visually appealing results but creates halos and artifacts
for low-lit images. Ghani et al. [59] applied contrast correction using the modified von Kries
hypothesis [60] followed by color correction in the HSV model to enhance underwater images.
But the resultant images of this technique still have drawbacks such as poor contrast, blue-green
illumination and partial enhancement.
Ancuti et al. [61] proposed a fusion of two versions (color corrected and contrast
corrected) of the input image using four different weight maps derived from those versions. In
[62], Ancuti et al. extended this work by modifying the contrast correction method and reducing
the number of weight maps. Both techniques employ multi-scale Laplacian pyramid
decomposition based fusion and give good results, but a little haze is still present in the output.
Zhang et al. [63] proposed an underwater image enhancement method that first restores the
image and then fuses two versions of the restored image using multi-scale fusion, but the results
are not better than those obtained by Ancuti et al. Wong et al. [64] combined color and contrast
correction using adaptive gray world and histogram equalization to remove the color cast and
improve the contrast of underwater images, but the color quality of the images was not good.
The authors of [65] modified the gray world assumption to improve color correction and applied
PSO to the contrast correction method so as to control the artifacts in the resultant image. The
results have improved color but a hazy appearance, which reduces the overall quality of the
image.
Hybrids of color and contrast correction based algorithms handle both the color and
contrast related problems of underwater images, thereby producing better results than the other
underwater image enhancement classes mentioned above. But there is a lack of a guidance
mechanism during contrast stretching, and most of the techniques assume the color cast to be
blue for every underwater image. Artificial lighting is also not handled by most of the
techniques, which leads to over-enhanced and under-enhanced regions in the output image.
Being computationally less intensive and fast, underwater image enhancement methods have an
edge over restoration methods, as they handle two major problems of underwater images
effectively and do not need any other prior information about the image.
One class of underwater image restoration techniques makes use of different filters to
remove the problem of noise in underwater images. In [66], a pre-processing step using various
filters was proposed to enhance image quality by removing noise. Lu et al. [67] used a
trigonometric filter for the enhancement of underwater images. Sheng et al. [68] employed the
multi-wavelet transform and a median filter for underwater image enhancement. The approach
in [69] employed a modified Multi-Scale Retinex (MSR) [70] which uses bilateral and trilateral
filters instead of Gaussian filters for underwater image enhancement. Nnolim [71] proposed
partial differential equations for optimizing entropy and gradient based image processing
algorithms.
Researchers are also working on finding the parameters of the image formation model
using different soft computing techniques to restore the image. A PSO based technique was
proposed for underwater images by Abunaser et al. [72], but it requires repeated execution of
PSO to tune the parameters for image enhancement. Some researchers are also focusing on
training convolutional neural networks (CNNs) with image restoration techniques to dehaze
underwater images [73]. Another deep neural network based study for underwater image
enhancement [74] trained a CNN to find the parameters of the image formation model using
synthetic underwater images. Trained neural networks give good results; however, exhaustively
training them on every type of underwater image to get effective results takes a lot of time. This
class of underwater image restoration methods considers noise as the only problem in
underwater images and gives satisfactory results for some images. Thus, to address all the
problems of underwater images, an extensive filter design is necessary, which is a
computationally expensive and difficult approach.
One more category of algorithms for underwater image restoration considers the
blurriness and color cast in underwater images analogous to haze in outdoor images and applies
dehazing based techniques [75] for underwater image enhancement. Another reason for applying
these techniques is the similarity between the equations of image formation in the underwater
medium and the atmospheric medium. These techniques require estimation of the transmission
map and the waterlight (airlight for atmospheric images), making use of the dark channel prior
(DCP) [76] or one of its variants, to get an enhanced image. Chao et al. [77] and Carlevaris-
Bianco et al. [78] used the dark channel prior to clarify blurred underwater images but failed to
color correct them. The method in [79] also employed the dark channel prior for image
deblurring and then followed it with a color correction method. Chiang et al. [80] proposed a
dark channel prior based algorithm that further removes artifacts caused by artificial light. The
authors of [81] developed a dark channel prior based algorithm in which the dark channel is
estimated using an image blurriness model. The method in [82] modified the dark channel prior
into a bright channel prior to enhance underwater images.
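The dark channel and the standard DCP transmission estimate, t = 1 − ω · min over a patch of the per-pixel channel minimum of I/A [76], can be sketched as follows. This is a pure-Python illustration; the 3×3 patch, ω = 0.95 and all names are conventional choices of ours, not taken from any specific cited method.

```python
def dark_channel(image, patch=3):
    """Per-pixel minimum over the channels, then minimum over a local patch."""
    h, w = len(image), len(image[0])
    mins = [[min(image[i][j]) for j in range(w)] for i in range(h)]
    r = patch // 2
    # Patch minimum with windows clamped at the image borders.
    return [[min(mins[ii][jj]
                 for ii in range(max(i - r, 0), min(i + r + 1, h))
                 for jj in range(max(j - r, 0), min(j + r + 1, w)))
             for j in range(w)] for i in range(h)]

def transmission(image, waterlight, omega=0.95, patch=3):
    """t = 1 - omega * dark_channel(I / A): haze-model transmission estimate."""
    normalized = [[tuple(c / a for c, a in zip(px, waterlight)) for px in row]
                  for row in image]
    dark = dark_channel(normalized, patch)
    return [[1.0 - omega * d for d in row] for row in dark]
```

A pixel whose patch contains a near-zero channel gets t ≈ 1 (haze-free), while a pixel whose colors approach the waterlight gets t ≈ 1 − ω (heavily scattered), which is what the restoration step then inverts.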
The authors of [83] formulated the problem around red channel restoration, as red
attenuates the most in the underwater environment. The algorithm in [84] used the minimum
information loss principle for image dehazing and followed it with a contrast enhancement
algorithm for further enhancement. Peng et al. [85] proposed a method for estimating the depth
in the image formation model based on image blurriness and the light absorption pattern. The
work in [86] proposed a new adaptive attenuation prior for image restoration which delivered
good results but still requires an accurate assumption of the attenuation coefficients. The method
in [87] used an optimized version of the DCP which estimates the ambient color and helps in
object detection in the water medium, but it does not give accurate results in turbid media. The
method in [88] dehazed the underwater image by applying edge preserving smoothing at
different levels along with dehazing.
The results are convincing, but some images have spurious colors, which indicates that
parameter tuning is required. In [89], a new method was proposed to estimate the transmission
map using an inverse red channel attenuation prior for the dehazing model. Lu et al. [90]
proposed a method based on the dark channel prior and a Multi-Scale Cycle Generative
Adversarial Network (MCycle GAN) for underwater image restoration. But both methods cannot
handle non-uniform illumination due to artificial lighting. These dark channel prior based
methods give good results but require heavy computation. The results are highly dependent on
correct estimation of the transmission map, so these methods work well only for some
underwater images.
To summarize, image restoration based techniques give good results only for the few
images for which the parameters have been estimated correctly. These techniques are
computationally intensive and require either parameter estimation for existing models or the
formation of new models or priors for different types of underwater images. Moreover, the
efficacy of such models is best only for the media for which their parameters have been tuned.
CHAPTER 3
PROPOSED METHOD
Wavelet Fusion
Wavelet transforms provide a framework in which a signal is decomposed, with each
level corresponding to a coarser resolution, or lower frequency band. There are two main groups
of transforms, continuous and discrete. Discrete transforms are more commonly used and can be
subdivided into various categories. Figure 3.2 shows the decomposition process of the wavelet
transform.

Figure 3.2 Implementation of Discrete Wavelet Transform


Based on the idea given in figure 3.2, we combine/fuse the images obtained from the
color balancing procedure using a “Symlet4” based DWT, as shown in figure 3.3.
Figure 3.3 Block diagram for the wavelet fusion
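The fusion step can be illustrated at one decomposition level: average the approximation (low-frequency) coefficients of the two inputs and keep the larger-magnitude detail (high-frequency) coefficients, then invert the transform. The sketch below uses the 1-D Haar wavelet for brevity in place of Symlet4, and the choose-max detail rule is one common fusion convention, assumed here for illustration rather than taken from our exact pipeline.

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: approximation and detail halves."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return a, d

def ihaar_1d(a, d):
    """Inverse of haar_1d: perfect reconstruction from (a, d)."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse_1d(x, y):
    """Fuse two signals: average approximations, keep larger-magnitude details."""
    ax, dx = haar_1d(x)
    ay, dy = haar_1d(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return ihaar_1d(a, d)
```

For images the same idea is applied separably (rows then columns) over the LL, LH, HL and HH sub-bands of the 2-D DWT, with a longer filter pair such as Symlet4 in place of Haar.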
CHAPTER 4
RESULTS and DISCUSSIONS
CHAPTER 5
CONCLUSION
REFERENCES

1. Cosmin Ancuti, Codruta Orniana Ancuti, Tom Haber, and Philippe Bekaert. Enhancing
underwater images and videos by fusion. In 2012 IEEE Conference on Computer Vision and
Pattern Recognition, pages 81–88. IEEE, 2012.
2. Robert P Bukata, John H Jerome, Alexander S Kondratyev, and Dimitry V Pozdnyakov. Optical
properties and remote sensing of inland and coastal waters. CRC press, 2018.
3. Daily Daleno de O Rodrigues, Wagner F de Barros, José P de Queiroz-Neto, Anderson
G Fontoura, and José Reginaldo H Carvalho. Enhancement of underwater images in
low-to-high turbidity rivers. In 2016 29th SIBGRAPI Conference on Graphics, Patterns
and Images (SIBGRAPI), pages 233–240. IEEE, 2016.
4. Sofia Ahlberg Pilfold. Managing knowledge for through life capability. PhD thesis,
2016.
5. Duarte, A., Codevilla, F., Gaya, J.D.O. and Botelho, S.S., 2016, April. A dataset to
evaluate underwater image restoration methods. In OCEANS 2016-Shanghai (pp. 1-6).
IEEE.
6. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, D. Tao. 2019. An Underwater Image
Enhancement Benchmark Dataset and Beyond. IEEE Trans. Image Process. 29:4376-
4389
7. Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H., "Enhancing the low
quality images using Unsupervised Colour Correction Method," Systems Man and
Cybernetics (SMC), 2010 IEEE International Conference on , vol., no., pp.1703,1709,
10-13 Oct. 2010.
8. Jinbo Chen; Zhenbang Gong; Hengyu Li; Shaorong Xie, "A detection method based on
sonar image for underwater pipeline tracker," Mechanic Automation and Control
Engineering (MACE), 2011 Second International Conference on , vol., no.,
pp.3766,3769, 15-17 July 2011.
9. Hung-Yu Yang; Pei-Yin Chen; Chien-Chuan Huang; YaZhu Zhuang; Yeu-Horng Shiau,
"Low Complexity Underwater Image Enhancement Based on Dark Channel Prior,"
Innovations in Bio-inspired Computing and Applications (IBICA), 2011 Second
International Conference on , vol., no., pp.17,20, 16-18 Dec. 2011.
10. Chiang, J.Y.; Ying-Ching Chen, "Underwater Image Enhancement by Wavelength
Compensation and Dehazing," Image Processing, IEEE Transactions on , vol.21, no.4,
pp.1756,1769, April 2012.
11. bt. Shamsuddin, N.; bt. Wan Ahmad, W.F.; Baharudin, B.B.; Kushairi, M.; Rajuddin, M.;
bt. Mohd, F., "Significance level of image enhancement techniques for underwater
images," Computer & Information Science (ICCIS), 2012 International Conference on ,
vol.1, no., pp.490,494, 12-14 June 2012.
12. Hitam, M.S.; Yussof, W.N.J.H.W.; Awalludin, E.A.; Bachok, Z., "Mixture contrast
limited adaptive histogram equalization for underwater image enhancement," Computer
Applications Technology (ICCAT), 2013 International Conference on , vol., no., pp.1,5,
20-22 Jan. 2013.
13. Paulo LJ Drews, Erickson R Nascimento, Silvia SC Botelho, and Mario Fernando
Montenegro Campos. Underwater depth estimation and image restoration based on single
images. IEEE computer graphics and applications, 36(2):24–35, 2016.
14. Wenhao Zhang, Ge Li, and Zhenqiang Ying. Underwater image enhancement by the
combination of dehazing and color correction. In Pacific Rim Conference on Multimedia,
pages 145–155. Springer, 2018.
15. Codruta O Ancuti, Cosmin Ancuti, Christophe De Vleeschouwer, Laszlo Neumann, and
Rafael Garcia. Color transfer for underwater dehazing and depth estimation. In 2017
IEEE International Conference on Image Processing (ICIP), pages 695–699. IEEE, 2017
16. Huimin Lu, Yujie Li, Xing Xu, Li He, Yun Li, Donald Dansereau, and Seiichi Serikawa.
Underwater image descattering and quality assessment. In 2016 IEEE International
Conference on Image Processing (ICIP), pages 1998– 2002. IEEE, 2016.
17. Huimin Lu, Yujie Li, and Seiichi Serikawa. Single underwater image descattering and
color correction. In 2015 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), pages 1623–1627. IEEE, 2015.
18. Khadidja Ould Amer, Marwa Elbouz, Ayman Alfalou, Christian Brosseau, and Jaouad
Hajjami. Enhancing underwater optical imaging by using a lowpass polarization filter.
Optics express, 27(2):621–643, 2019.
19. Fei Liu, Yi Wei, Pingli Han, Kui Yang, Lu Bai, and Xiaopeng Shao. Polarization-based
exploration for clear underwater vision in natural illumination. Optics Express,
27(3):3629–3641, 2019.
20. Bingjing Huang, Tiegen Liu, Haofeng Hu, Jiahui Han, and Mingxuan Yu. Underwater
image recovery considering polarization effects of objects. Optics express, 24(9):9826–
9838, 2016.
21. Chongyi Li and Jichang Guo. Underwater image enhancement by dehazing and color
correction. Journal of Electronic Imaging, 24(3):033023, 2015.
22. Dana Berman, Tali Treibitz, and Shai Avidan. Diving into haze-lines: Color restoration
of underwater images. In Proc. British Machine Vision Conference (BMVC), volume 1,
2017.
23. Miao Yang, Arcot Sowmya, ZhiQiang Wei, and Bing Zheng. Offshore underwater image
restoration using reflection-decomposition-based transmission map estimation. IEEE
Journal of Oceanic Engineering, 2019.
24. Laurent Guillon, Stan E Dosso, N Ross Chapman, and Achraf Drira. Bayesian
geoacoustic inversion with the image source method. IEEE Journal of Oceanic
Engineering, 41(4):1035–1044, 2016.
25. Tengyue Li, Bo He, Shizhe Tan, Chen Feng, Shuai Guo, Hanmin Liu, and Tianhong Yan.
Optical sources optimization for 3d reconstruction based on underwater vision system. In
2019 IEEE Underwater Technology (UT), pages 1–4. IEEE, 2019.
26. Adarsh Jamadandi and Uma Mudenagudi. Exemplar-based underwater image
enhancement augmented by wavelet corrected transforms. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition Workshops, pages 11–17,
2019.
27. Xi Qiao, Jianhua Bao, Hang Zhang, Lihua Zeng, and Daoliang Li. Underwater image
quality enhancement of sea cucumbers based on improved histogram equalization and
wavelet transform. Information processing in agriculture, 4(3):206–213, 2017.
28. WNJHW Yussof, Muhammad Suzuri Hitam, Ezmahamrul Afreen Awalludin, and
Zainuddin Bachok. Performing contrast limited adaptive histogram equalization
technique on combined color models for underwater image enhancement. International
Journal of Interactive Digital Media, 1(1):1–6, 2013.
29. Haocheng Wen, Yonghong Tian, Tiejun Huang, and Wen Gao. Single underwater image
enhancement with a new optical model. In 2013 IEEE International Symposium on
Circuits and Systems (ISCAS2013), pages 753–756. IEEE, 2013.
30. Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel
prior. IEEE transactions on pattern analysis and machine intelligence, 33(12):2341–2353,
2010.
31. Mading Li, Jiaying Liu, Wenhan Yang, Xiaoyan Sun, and Zongming Guo. Structure-
revealing low-light image enhancement via robust retinex model. IEEE Transactions on
Image Processing, 27(6):2828–2841, 2018.
32. Xueyang Fu, Peixian Zhuang, Yue Huang, Yinghao Liao, Xiao-Ping Zhang, and Xinghao
Ding. A retinex-based enhancing approach for single underwater image. In 2014 IEEE
International Conference on Image Processing (ICIP), pages 4572–4576. IEEE, 2014.
33. Liu Chao and Meng Wang. Removal of water scattering. In 2010 2nd International
Conference on Computer Engineering and Technology, volume 2, pages V2–35. IEEE,
2010.
34. Marco Block, Benjamin Gehmlich, and Damian Hettmanczyk. Automatic underwater
image enhancement using improved dark channel prior. Studies in Digital Heritage,
1(2):566–589, 2017.
35. Adrian Galdran, David Pardo, Artzai Picón, and Aitor Alvarez-Gila. Automatic red-
channel underwater image restoration. Journal of Visual Communication and Image
Representation, 26:132–145, 2015.
36. Raimondo Schettini and Silvia Corchs. Underwater image processing: state of the art of
restoration and image enhancement methods. EURASIP Journal on Advances in Signal
Processing, 2010(1):746052, 2010.
37. Tomasz Łuczyński and Andreas Birk. Underwater image haze removal and color
correction with an underwater-ready dark channel prior. arXiv preprint
arXiv:1807.04169, 2018.
38. R. C. Gonzalez and R. E. Woods, Digital Image Processing (3rd Edition). Upper Saddle
River, NJ, USA: Prentice-Hall, Inc., 2006.
39. R. Dale-Jones and T. Tjahjadi, “A study and modification of the local histogram
equalization algorithm,” Pattern Recognition, vol. 26, no. 9, pp. 1373 – 1381, 1993.
40. K. Iqbal, R. A. Salam, A. Osman, and A. Z. Talib, “Underwater Image Enhancement
Using an Integrated Colour Model,” International Journal of Computer Science, vol. 34,
no. 2, pp. 239–244, 2007.
41. M. S. Hitam, E. A. Awalludin, W. N. Jawahir Hj Wan Yussof, and Z. Bachok, “Mixture
contrast limited adaptive histogram equalization for underwater image enhancement,”
International Conference on Computer Applications Technology, ICCAT 2013, 2013.
42. A. S. A. Ghani and N. A. M. Isa, “Underwater image quality enhancement through
Rayleigh stretching and averaging image planes,” International Journal of Naval
Architecture and Ocean Engineering, vol. 6, no. 4, pp. 840 – 866, 2014.
43. K. Srividhya and M. Ramya, “Fuzzy based adaptive contrast enhancement of underwater
images,” Res J Inf Technol, vol. 8, pp. 29–38, 2016.
44. J. Ma, X. Fan, S. X. Yang, X. Zhang, and X. Zhu, “Contrast limited adaptive histogram
equalization-based fusion in yiq and hsi color spaces for underwater image
enhancement,” 129 International Journal of Pattern Recognition and Artificial
Intelligence, vol. 32, no. 07, p.1854018, 2018.
45. Y. Fan, S. Wang, T. Yu, and B. L. Hu, “Underwater image enhancement algorithm based
on rgb channels histogram equalization,” in Optical Sensing and Imaging Technologies
and Applications, vol. 10846. International Society for Optics and Photonics, 2018, p.
108460G.
46. P. Mathur, K. Monica, and B. Soni, “Improved fusion-based technique for underwater
image enhancement,” in 2018 4th International Conference on Computing
Communication and Automation (ICCCA), Dec 2018, pp. 1–6.
47. G. Buchsbaum, “A spatial processor model for object colour perception,” Journal of the
Franklin Institute, vol. 310, no. 1, pp. 1–26, jul 1980.
48. V. C. Cardei and B. Funt, “Committee-based color constancy,” in Color and Imaging
Conference, vol. 1999, no. 1. Society for Imaging Science and Technology, 1999, pp.
311–313.
49. E. H. Land, “The retinex theory of color vision,” Scientific American, vol. 237, no. 6, pp.
108–129, 1977.
50. G. Bianco, M. Muzzupappa, F. Bruno, R. Garcia, and L. Neumann, “A new color
correction method for underwater imaging,” The International Archives of
Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 40, no. 5, p. 25,
2015.
51. M. Chambah, D. Semani, A. Renouf, P. Courtellemont, and A. Rizzi, “Underwater color
constancy: enhancement of automatic live fish recognition,” in Electronic Imaging 2004.
International Society for Optics and Photonics, 2003, pp. 157–168.
52. A. Rizzi, C. Gatta, and D. Marini, “From retinex to automatic color equalization: issues
in developing a new algorithm for unsupervised color equalization,” Journal of Electronic
Imaging, vol. 13, no. 1, pp. 75–84, 2004.
53. A. T. Çelebi and S. Ertürk, “Visual enhancement of underwater images using empirical
mode decomposition,” Expert Systems with Applications, vol. 39, no. 1, pp. 800–805,
2012.
54. X. Fu, P. Zhuang, Y. Huang, Y. Liao, X. Zhang, and X. Ding, “A retinex-based
enhancing approach for single underwater image,” in 2014 IEEE International
Conference on Image Processing (ICIP), Oct 2014, pp. 4572–4576.
55. G. Hou, Z. Pan, B. Huang, G. , and X. Luan, “Hue preserving-based approach for
underwater colour image enhancement,” IET Image Processing, vol. 12, no. 2, 2017.
56. H. H. kareem, H. G. Daway, and E. G. Daway, “Underwater image enhancement using
colour restoration based on YCbCr colour model,” IOP Conference Series: Materials
Science and Engineering, vol. 571, p. 012125, aug 2019.
57. K. Iqbal, M. Odetayo, A. James, R. A. Salam, and A. Z. H. Talib, “Enhancing the low
quality images using unsupervised colour correction method,” Conference Proceedings –
IEEE International Conference on Systems, Man and Cybernetics, pp. 1703–1709, 2010.
58. A. S. Abdul Ghani and N. A. Mat Isa, “Underwater image quality enhancement through
integrated color model with Rayleigh distribution,” Applied Soft Computing Journal, vol.
27, pp. 219–230, 2015.
59. A. S. Abdul Ghani and N. A. Mat Isa, “Underwater image quality enhancement through
composition of dual-intensity images and rayleigh-stretching,” SpringerPlus, vol. 3, no. 1,
p. 757, Dec 2014.
60. R. Ramanath and M. S. Drew, von Kries Hypothesis. Boston, MA: Springer US, 2014,
pp. 874–875.
61. C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, “Enhancing underwater images and
videos by fusion,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE
Conference on. IEEE, 2012, pp. 81–88.
62. C. O. Ancuti, C. Ancuti, C. D. Vleeschouwer, and P. Bekaert, “Color balance and fusion
for underwater image enhancement,” IEEE Transactions on Image Processing, vol. 27,
no. 1, pp.379–393, Jan 2018.
63. C. Zhang, X. Zhang, and D. Tu, “Underwater image enhancement by fusion,” in
International Workshop of Advanced Manufacturing and Automation. Springer, 2017,
pp. 81–92.
64. S.-L. Wong, R. Paramesran, and A. Taguchi, “Underwater image enhancement by
adaptive gray world and differential gray-levels histogram equalization,” Advances in
Electrical and Computer Engineering, vol. 18, no. 2, pp. 109–117, 2018.
65. K. Z. M. Azmi, A. S. A. Ghani, Z. M. Yusof, and Z. Ibrahim, “Deep underwater image
enhancement through colour cast removal and optimization algorithm,” The Imaging
Science Journal, vol. 67, no. 6, pp. 330–342, 2019.
66. S. Bazeille, I. Quidu, L. Jaulin, and J. P. Malkasse, “Automatic underwater image pre-
processing,” Proceedings of, vol. 1900, no. 1, p. 8, 2006.
67. H. Lu, Y. Li, and S. Serikawa, “Underwater image enhancement using guided
trigonometric bilateral filter and fast automatic color correction,” 2013 IEEE
International Conference on Image Processing, ICIP 2013 - Proceedings, pp. 3412–3416,
2013.
68. M. Sheng, Y. Pang, L. Wan, and H. Huang, “Underwater images enhancement using
multiwavelet transform and median filter,” TELKOMNIKA Indonesian Journal of
Electrical Engineering, vol. 12, no. 3, pp. 2306–2313, 2014.
69. S. Zhang, T. Wang, J. Dong, and H. Yu, “Underwater image enhancement via extended
multi-scale retinex,” Neurocomputing, vol. 245, pp. 1 – 9, 2017.
70. D. J. Jobson, Z. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap
between color images and the human observation of scenes,” IEEE Transactions on
Image Processing, vol. 6, no. 7, pp. 965–976, Jul 1997.
71. U. A. Nnolim, “Smoothing and enhancement algorithms for underwater images based on
partial differential equations,” Journal of Electronic Imaging, vol. 26, pp. 26 – 26 – 21,
2017.
72. A. AbuNaser, I. A. Doush, N. Mansour, and S. Alshattnawi, “Underwater image
enhancement using particle swarm optimization,” Journal of Intelligent Systems, vol. 24,
no. 1, pp. 99–115, 2015.
73. J. Perez, A. C. Attanasio, N. Nechyporenko, and P. J. Sanz, “A deep learning approach
for underwater image enhancement,” in InternationalWork-Conference on the Interplay
Between Natural and Artificial Computation. Springer, 2017, pp. 183–192.
74. S. Anwar, C. Li, and F. Porikli, “Deep underwater image enhancement,” arXiv preprint
arXiv:1807.03528, 2018.
75. R. Fattal, “Single image dehazing,” in ACM SIGGRAPH 2008 Papers, ser. SIGGRAPH ’08.
New York, NY, USA: ACM, 2008, pp. 72:1–72:9
76. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE
transactions on pattern analysis and machine intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
77. L. Chao and M. Wang, “Removal of water scattering,” ICCET 2010 - 2010 International
Conference on Computer Engineering and Technology, Proceedings, vol. 2, pp. 35–39, 2010.
78. N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, “Initial results in underwater single image
dehazing,” MTS/IEEE Seattle, OCEANS 2010, 2010.
79. H. Y. Yang, P. Y. Chen, C. C. Huang, Y. Z. Zhuang, and Y. H. Shiau, “Low complexity
underwater image enhancement based on dark channel prior,” Proceedings - 2011 2nd
International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA
2011, pp. 17–20, 2011.
80. J. Y. Chiang and Y. C. Chen, “Underwater image enhancement by wavelength compensation and
dehazing,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1756–1769, 2012.
81. Y.-T. Peng, X. Zhao, and P. C. Cosman, “Single underwater image enhancement using depth
estimation based on blurriness,” IEEE International Conference on Image Processing (ICIP), pp. 2–6,
2015.
82. Y. Gao, H. Li, and S. Wen, “Restoration and Enhancement of Underwater Images Based on
Bright Channel Prior,” Mathematical Problems in Engineering, vol. 2016, 2016.
83. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic Red-Channel underwater
image restoration,” Journal of Visual Communication and Image Representation, vol. 26, pp.
132–145, 2015.
84. C. Y. Li, J. C. Guo, R. M. Cong, Y. W. Pang, and B. Wang, “Underwater image enhancement by
Dehazing with minimum information loss and histogram distribution prior,” IEEE Transactions
on Image Processing, vol. 25, no. 12, pp. 5664–5677, 2016.
85. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light
absorption,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1579–1594, April 2017.
86. Y. Wang, H. Liu, and L. P. Chau, “Single Underwater Image Restoration Using Adaptive
Attenuation-Curve Prior,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 65,
no. 3, 2018.
87. K. O. Amer, M. Elbouz, A. Alfalou, C. Brosseau, and J. Hajjami, “Enhancing underwater optical
imaging by using a low-pass polarization filter,” Opt. Express, vol. 27, no. 2, pp. 621–643, Jan
2019.
88. K. Purohit, S. Mandal, and A. N. Rajagopalan, “Multilevel weighted enhancement for underwater
image dehazing,” J. Opt. Soc. Am. A, vol. 36, no. 6, pp. 1098–1108, Jun 2019.
89. X. Deng, H. Wang, and X. Liu, “Underwater image enhancement based on removing light source
color and dehazing,” IEEE Access, vol. 7, pp. 114 297–114 309, 2019.
90. J. Lu, N. Li, S. Zhang, Z. Yu, H. Zheng, and B. Zheng, “Multi-scale adversarial network for
underwater image restoration,” Optics & Laser Technology, vol. 110, pp. 105–113, 2019.
Appendix A
MATLAB
A.1 Introduction

MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation. MATLAB stands for matrix
laboratory, and was written originally to provide easy access to matrix software developed by the
LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB
is therefore built on a foundation of sophisticated matrix software in which the basic element is
an array that does not require dimensioning, which allows many technical computing problems,
especially those with matrix and vector formulations, to be solved in a fraction of the time.
MATLAB features a family of application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow learning and applying specialized
technology. These are comprehensive collections of MATLAB functions (M-files) that extend
the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are
available include signal processing, control systems, neural networks, fuzzy logic, wavelets,
simulation and many others.
Typical uses of MATLAB include math and computation, algorithm development, data
acquisition, modeling, simulation and prototyping, data analysis, exploration and visualization,
scientific and engineering graphics, and application development, including graphical user
interface building.

A.2 Basic Building Blocks of MATLAB

The basic building block of MATLAB is the matrix. The fundamental data type is the
array. Vectors, scalars, real matrices and complex matrices are handled as specific classes of this
basic data type. The built-in functions are optimized for vector operations. No dimension
statements are required for vectors or arrays.
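As a brief illustrative sketch of these points (the variable names here are chosen for this example only), matrices and vectors can be created and used directly, with no dimension statements:

```matlab
% Create a 3x3 matrix by listing rows separated by semicolons
A = [1 2 3; 4 5 6; 7 8 10];

% Create a row vector, then transpose it into a column vector
v = 1:5;      % row vector [1 2 3 4 5]
w = v';       % column vector

% No dimensioning is required: arrays grow as elements are assigned
b = [];
b(4) = 7;     % b is now the 1x4 vector [0 0 0 7]

% Built-in functions are optimized to operate on whole arrays at once
s = sum(v);   % sum of all elements of v, here 15
```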
A.2.1 MATLAB Window

MATLAB works with several windows: the command window, workspace window,
current directory window, command history window, editor window, graphics (figure) window
and online-help window.

A.2.1.1 Command Window

The command window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. It is opened when the
application program is launched. All commands, including user-written programs, are typed in
this window at the MATLAB prompt for execution.

A.2.1.2 Work Space Window

MATLAB defines the workspace as the set of variables that the user creates in a work
session. The workspace browser shows these variables and some information about them.
Double clicking on a variable in the workspace browser launches the Array Editor, which can be
used to obtain information about the variable and, in some instances, edit certain of its properties.

A.2.1.3 Current Directory Window

The Current Directory tab shows the contents of the current directory, whose path is
shown in the current directory window. For example, in the Windows operating system the path
might be C:\MATLAB\Work, indicating that the directory “work” is a subdirectory of
the main directory “MATLAB”, which is installed in drive C. Clicking on the arrow in the
current directory window shows a list of recently used paths. MATLAB uses a search path to
find M-files and other MATLAB-related files. Any file run in MATLAB must reside in the
current directory or in a directory that is on the search path.

A.2.1.4 Command History Window

The command history window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands. Right clicking also launches
a menu offering various options in addition to executing the commands, which is a useful feature
when experimenting with various commands in a work session.
A.2.1.5 Editor Window

The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a subwindow in
the desktop. In this window one can write, edit, create and save programs in files called M-files.
The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.

A.2.1.6 Graphics or Figure Window

The output of all graphics commands typed in the command window is displayed in this
window.

A.2.1.7 Online Help Window

MATLAB provides online help for all its built-in functions and programming language
constructs. The principal way to get help online is to use the MATLAB help browser, opened as
a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or
by typing helpbrowser at the prompt in the command window. The Help Browser is a web
browser integrated into the MATLAB desktop that displays Hypertext Markup Language
(HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to
find information, and the display pane, used to view the information. Self-explanatory tabs other
than the navigator pane are used to perform a search.

A.3 MATLAB Files

MATLAB has two main types of files for storing information: M-files and MAT-files.

A.3.1 M-Files

These are standard ASCII text files with a .m extension to the file name. M-files are text
files containing MATLAB code; the MATLAB editor or another text editor can be used to create
a file containing the same statements that would be typed at the MATLAB command line, saved
under a name that ends in .m. There are two types of M-files:
1. Script Files

It is an M-file with a set of MATLAB commands in it and is executed by typing the name
of the file on the command line. Script files work on the variables currently present in the
workspace.

2. Function Files

A function file is also an M-file, except that the variables in a function file are all local.
This type of file begins with a function definition line.
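As a minimal sketch of the two kinds of M-file (the file names areascript.m and circarea.m and the variables used are illustrative only), the fragments below would each be saved as a separate file:

```matlab
% ---- Script file: saved as areascript.m ----
% Run by typing "areascript" at the prompt;
% works directly on variables in the workspace.
r = 2;
area = pi * r^2;

% ---- Function file: saved as circarea.m ----
% Begins with a function definition line; r and a are local variables.
function a = circarea(r)
    a = pi * r.^2;   % element-wise power lets r be a vector too
end
```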

A.3.2 MAT-Files

These are binary data files with a .mat extension that are created by MATLAB when data
is saved. The data is written in a special format that only MATLAB can read. MAT-files are
loaded back into MATLAB with the ‘load’ command.
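A short sketch of the save/load round trip (the file name mydata.mat and the variables are illustrative only):

```matlab
% Save selected variables to a MAT-file in the current directory
x = magic(3);
y = 1:10;
save('mydata.mat', 'x', 'y')   % creates the binary file mydata.mat

% Clear the variables, then restore them from the file
clear x y
load('mydata.mat')             % x and y reappear in the workspace
```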

A.4 The MATLAB System:

The MATLAB system consists of five main parts:

A.4.1 Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many
of these tools are graphical user interfaces. It includes the MATLAB desktop and Command
Window, a command history, an editor and debugger, and browsers for viewing help, the
workspace, files, and the search path.

A.4.2 The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions
like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix
inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
A.4.3 The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both "programming
in the small" to rapidly create quick and dirty throw-away programs, and "programming in the
large" to create complete large and complex application programs.

A.4.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as
annotating and printing these graphs. It includes high-level functions for two-dimensional and
three-dimensional data visualization, image processing, animation, and presentation graphics. It
also includes low-level functions that allow you to fully customize the appearance of graphics as
well as to build complete graphical user interfaces in your MATLAB applications.

A.4.5 The MATLAB Application Program Interface (API):

This is a library that allows you to write C and FORTRAN programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling
MATLAB as a computational engine, and for reading and writing MAT-files.

A.5 SOME BASIC COMMANDS:

pwd prints working directory

demo demonstrates what is possible in MATLAB

who lists all of the variables in your MATLAB workspace

whos lists the variables and describes their matrix size

clear erases variables and functions from memory

clear x erases the matrix 'x' from your workspace

close by itself, closes the current figure window


figure creates an empty figure window

hold on holds the current plot and all axis properties so that subsequent graphing

commands add to the existing graph

hold off sets the next plot property of the current axes to "replace"

find find indices of nonzero elements e.g.:

d = find(x>100) returns the indices of the vector x that are greater than 100

break terminate execution of m-file or WHILE or FOR loop

for repeat statements a specific number of times, the general form of a FOR

statement is:

FOR variable = expr, statement, ..., statement END

for n=1:cc/c;

magn(n,1)=nanmean(a((n-1)*c+1:n*c,1));

end

diff difference and approximate derivative e.g.:

DIFF(X) for a vector X, is [X(2)-X(1) X(3)-X(2) ... X(n)-X(n-1)].

NaN the arithmetic representation for Not-a-Number, a NaN is obtained as a

result of mathematically undefined operations like 0.0/0.0

INF the arithmetic representation for positive infinity; an infinity is also produced

by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000).

save saves all the matrices defined in the current session into the file,

matlab.mat, located in the current working directory


load loads contents of matlab.mat into current workspace

save filename x y z saves the matrices x, y and z into the file titled filename.mat

save filename x y z -ascii saves the matrices x, y and z into the file titled filename.dat

load filename loads the contents of filename into current workspace; the file can

be a binary (.mat) file

load filename.dat loads the contents of filename.dat into the variable filename

xlabel('text') allows you to label the x-axis

ylabel('text') allows you to label the y-axis

title('text') allows you to give a title to the plot

subplot() allows you to create multiple plots in the same window

A.6 SOME BASIC PLOT COMMANDS:

Kinds of plots:

plot(x,y) creates a Cartesian plot of the vectors x & y

plot(y) creates a plot of y vs. the numerical values of the elements in the y-vector

semilogx(x,y) plots y vs. x with a logarithmic scale on the x-axis

semilogy(x,y) plots y vs. x with a logarithmic scale on the y-axis

loglog(x,y) plots log(x) vs log(y)

polar(theta,r) creates a polar plot of the vectors r & theta where theta is in radians

bar(x) creates a bar graph of the vector x. (Note also the command stairs(x))
bar(x, y) creates a bar-graph of the elements of the vector y, locating the bars

according to the vector elements of 'x'

Plot description:

grid creates a grid on the graphics plot

title('text') places a title at top of graphics plot

xlabel('text') writes 'text' beneath the x-axis of a plot

ylabel('text') writes 'text' beside the y-axis of a plot

text(x,y,'text') writes 'text' at the location (x,y)

text(x,y,'text','sc') writes 'text' at point x,y assuming lower left corner is (0,0)

and upper right corner is (1,1)

axis([xmin xmax ymin ymax]) sets scaling for the x- and y-axes on the current plot
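The plot commands above can be combined as in the following sketch (the sine-curve data is chosen purely for illustration):

```matlab
% Plot a sine curve and annotate it using the commands listed above
x = 0:0.1:2*pi;
y = sin(x);

plot(x, y)                 % Cartesian plot of the vectors x & y
grid                       % create a grid on the graphics plot
title('Sine curve')        % title at the top of the plot
xlabel('x (radians)')      % text beneath the x-axis
ylabel('sin(x)')           % text beside the y-axis
axis([0 2*pi -1.2 1.2])    % set scaling for the x- and y-axes
```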

A.7 ALGEBRAIC OPERATIONS IN MATLAB:

Scalar Calculations:

+ Addition

- Subtraction

* Multiplication

/ Right division (a/b means a ÷ b)

\ left division (a\b means b ÷ a)

^ Exponentiation

For example, 3*4 executed in MATLAB gives ans = 12

4/5 gives ans = 0.8


Array products: Recall that addition and subtraction of matrices involve addition or
subtraction of the individual elements of the matrices. Sometimes it is desired to simply multiply
or divide each element of a matrix by the corresponding element of another matrix; these are
called ‘array operations’.

Array or element-by-element operations are executed when the operator is preceded by a '.'
(period):

a .* b multiplies each element of a by the respective element of b

a ./ b divides each element of a by the respective element of b

a .\ b divides each element of b by the respective element of a

a .^ b raise each element of a by the respective b element
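The four array operators can be sketched on two small matrices (the values of a and b are illustrative only):

```matlab
a = [1 2 3; 4 5 6];
b = [2 2 2; 3 3 3];

p = a .* b;   % element-by-element product: [2 4 6; 12 15 18]
q = a ./ b;   % divides each element of a by the matching element of b
r = a .\ b;   % divides each element of b by the matching element of a
s = a .^ 2;   % raises every element of a to the power 2
```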

A.8 MATLAB WORKING ENVIRONMENT:

A.8.1 MATLAB DESKTOP

The MATLAB desktop is the main MATLAB application window. The desktop contains five
sub windows: the command window, the workspace browser, the current directory window, the
command history window, and one or more figure windows, which are shown only when the
user displays a graphic.

The command window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. MATLAB defines the
workspace as the set of variables that the user creates in a work session.

The workspace browser shows these variables and some information about them. Double
clicking on a variable in the workspace browser launches the Array Editor, which can be used to
obtain information and, in some instances, edit certain properties of the variable.

The Current Directory tab above the workspace tab shows the contents of the current
directory, whose path is shown in the current directory window. For example, in the Windows
operating system the path might be C:\MATLAB\Work, indicating that the directory
“work” is a subdirectory of the main directory “MATLAB”, which is installed in
drive C. Clicking on the arrow in the current directory window shows a list of recently used
paths. Clicking on the button to the right of the window allows the user to change the current
directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are
organized in directories in the computer file system. Any file run in MATLAB must reside in the
current directory or in a directory that is on the search path. By default, the files supplied with
MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see
which directories are on the search path, or to add or modify the search path, is to select Set Path
from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add
any commonly used directories to the search path to avoid repeatedly having to change the
current directory.

The Command History Window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands.

This action launches a menu from which to select various options in addition to executing
the commands. This is a useful feature when experimenting with various commands in a work
session.

A.8.2 Using the MATLAB Editor to create M-Files:

The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in
the desktop. M-files are denoted by the extension .m, as in pixelup.m.

The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.
To open the editor, type edit at the prompt; typing edit filename.m opens the M-file
filename.m in an editor window, ready for editing. As noted earlier, the file must be in the
current directory, or in a directory in the search path.

A.8.3 Getting Help:

The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by
typing helpbrowser at the prompt in the command window. The Help Browser is a web browser
integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)
documents. The Help Browser consists of two panes: the help navigator pane, used to find
information, and the display pane, used to view the information. Self-explanatory tabs other than
the navigator pane are used to perform a search.
