I. INTRODUCTION
Outdoor images lose their quality due to the reflection of sunlight off air pollution (haze), water droplets in the air (fog), or a combination of smoke and fog (smog). Haze is a natural process in which dust and smoke particles reflect sunlight, causing loss of vision (Nayar and Narasimhan 1999). Visibility from the camera is degraded by interference with the environmental light source reflected by the dust particles. Since the amount of dispersion depends on the distance of the scene from the camera, the haze varies from image to image. These blurred images gain noise and lose color attenuation, as described in Figure 1(a).

Figure 1(b): Haze-free image

Haze removal is highly desirable in fields such as computer vision, image processing, and photography. First, removing haze increases the visibility of a hazy image degraded by atmospheric particles; generally, haze-free images are more pleasing than hazy ones. Second, most computer image processing techniques, from large-scale image processing to finer-scale shape detection, assume that the input image is the scene luminance, and the performance of these techniques depends greatly on the scene. If the image or scene is dull, vision algorithms face many issues and do not perform efficiently. Removing haze is therefore needed for better results and efficient performance, and degraded images can be put to better use.

In computer vision and image enhancement, removing haze from an image is a challenging task. Popular methods such as those of R. Fattal (Fattal 2008), R. Tan (Tian, Xiao et al. 2016), and S. G. Narasimhan (Narasimhan and Nayar 2003) describe the formation of haze in the environment with the model:

Q(a) = W(a) i(a) + P (1 − i(a))    Eq. (1)

where Q is the observed intensity, W is the reflected light, P is the average sunlight (atmospheric light) present in the environment, and i is the transmission, the portion of light that is not scattered and travels to the target. The aim of this equation is to recover W (reflected light), P (light source), and i (transmission medium) from the intensity Q.

Figure 2: Major causes of haze formation

The problem of vision blurriness is one of the most common problems in computer vision; it is due to haze and light phenomena such as reflection, refraction, and radiance. Visibility loss due to haze is one of the most common causes of traffic accidents. Recently, single image haze removal has made notable progress. The success of these methods lies in using a strong prior or assumption. Tan hypothesized that the haze-free image must have higher contrast than the input hazy image, and he removes haze by maximizing the local contrast of the restored image. The results are visually convincing but cannot be validated mathematically. Fattal computed the albedo of the scene and then inferred the transmission map, under the assumption that the transmission and the shading of the image are statistically unrelated. Fattal's work can be evaluated validly and gives remarkable results; however, it does not handle indoor images and may give faulty conclusions when the assumption breaks down, as shown in Figure 2(a) and (b). The haze formation model in Fattal's approach also differs from the original one (Fattal 2008).

The first term in Equation (1) describes the light reflected from the surface and its decay in the medium (direct attenuation); the second term describes the airlight, which results from previously scattered light and leads to a shift in scene color (Levin, Lischinski et al. 2008). When the atmosphere is homogeneous, the transmission i can be defined as

i(b) = e^(−∃ o(b))    Eq. (2)

where ∃ is the scattering coefficient, indicating that the surface light decays exponentially with the scene depth o. Geometrically, Equation (1) states that, in each color channel, the end points of the vectors P, W(a), and Q(a) lie in the same plane.

In 2008, R. Tan's model focused on improving the visibility of an image; it was mainly intended for images taken in bad weather. Tan's approach is based on two observations: first, the dehazed image has more contrast than the hazy image; second, the airlight, whose intensity depends on the distance from the object to the camera, should vary smoothly. Based on these two observations he developed a cost function in the framework of a Markov Random Field (MRF) to enhance the output image by recovering its detail and structure. This method focuses on the enhancement of visibility; it does not aim to recover the reflected areas.
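The formation model of Eq. (1), combined with the exponential transmission of Eq. (2), can be sketched numerically. The snippet below is a minimal illustration (not the paper's code): it synthesizes a hazy observation Q from a clear scene W, a depth map, and atmospheric light P; the variable names mirror the equations, and the depth values are arbitrary toy numbers.

```python
import numpy as np

def synthesize_haze(W, depth, P=1.0, beta=1.0):
    """Apply the haze formation model Q = W*i + P*(1 - i),
    with transmission i = exp(-beta * depth) as in Eq. (1) and Eq. (2)."""
    i = np.exp(-beta * depth)      # transmission decays exponentially with depth
    if W.ndim == 3:                # broadcast transmission over RGB channels
        i = i[..., None]
    return W * i + P * (1.0 - i)

# Toy example: a 2x2 grayscale scene where the bottom row is very distant.
W = np.array([[0.2, 0.8],
              [0.4, 0.6]])
depth = np.array([[0.0, 0.0],
                  [10.0, 10.0]])
Q = synthesize_haze(W, depth, P=1.0, beta=1.0)
# Near pixels (depth 0) keep their radiance; distant pixels approach P.
```

Note how the direct attenuation term W*i and the airlight term P*(1 − i) trade off: as depth grows, the observed intensity converges to the atmospheric light, which is exactly the ambiguity dehazing methods must invert.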
Similarly, in 2008 R. Fattal introduced another image dehazing model. It is based on the assumption that removing the scattered light increases visibility and recovers the haze-free image. In this approach he refined the image model to account for surface shading in addition to the transmission function. Based on this model, the image is divided into small patches of constant albedo, and the airlight ambiguity is resolved by adding a constraint that requires the surface shading and the medium transmission function to be locally statistically unrelated. Fattal used a physics-based approach to produce a haze-free image, but it relies on this statistical assumption holding in local patches, which makes it less convenient in practice.

Figure 3: (a) Haze formation model and (b) constant albedo model used in Fattal's work
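Fattal's key statistical assumption, that shading and transmission are uncorrelated within a constant-albedo patch, can be illustrated with a simple sample-correlation check. This is only an illustrative sketch of the independence idea, not Fattal's actual estimator; the patch values are randomly generated stand-ins.

```python
import numpy as np

def patch_correlation(shading, transmission):
    """Sample correlation between per-pixel shading and transmission in a patch.
    Fattal's model resolves the airlight ambiguity by requiring this to be ~0."""
    s = shading.ravel() - shading.mean()
    t = transmission.ravel() - transmission.mean()
    denom = np.sqrt((s * s).sum() * (t * t).sum())
    return float((s * t).sum() / denom)

rng = np.random.default_rng(0)
shading = rng.uniform(0.2, 1.0, size=(8, 8))        # hypothetical shading patch
transmission = rng.uniform(0.5, 1.0, size=(8, 8))   # drawn independently of it
c = patch_correlation(shading, transmission)
# For independently drawn values the correlation is near zero, which is the
# regime in which Fattal's decomposition is well posed.
```

When the assumption fails (for example, shading and depth vary together along the same edge), this correlation is far from zero and the decomposition becomes unreliable, which matches the failure cases noted above.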
II. LITERATURE REVIEW

Qingsong Zhu (Zhu, Mai et al. 2014) used another approach to remove haze from an image. He introduced a color attenuation prior model. A linear model with supervised learning is used for the scene depth of the hazy image; in this way a link between the image and its depth map is established. By estimating the depth map of an image we can easily remove the haze from it. The methodology has high efficiency and a better dehazing effect, but not in all cases.

Codruta Orniana (Wu, Zhu et al. 2014) proposed another methodology for haze removal. Their method is able to accurately dehaze an image using the original degraded information. The aim is the enhancement of visibility, similar to Tan's model.

Yanlin Tian (Scharstein and Szeliski 2002) proposed an approach to remove haze from a hazy image by segmenting super pixels. Previous methodologies for removing haze from an image might not work under particular circumstances due to the noise distribution. They propose an improved method that combines segmented pixels with the light intensity of the hazy image in order to compute the sunlight instead of using the dark channel prior, and it also estimates the transmission map. Color space conversion is done by converting RGB into HSI. In the RGB channels, the brightest pixel is picked and its location recorded; this recorded pixel is known as a super pixel. Averaging it with its 8 surrounding pixels reduces the haze of the hazy image.

Images captured in rainy weather are discolored and noisy due to water droplets on the camera lens (Xie, Guo et al. 2010). Images taken in hazy environmental conditions give poor visibility, which greatly affects outdoor computer vision systems such as video surveillance, traffic surveillance cameras, security systems, cameras in space, and so on. The paper presents recent discoveries in restoring hazy images and the visibility techniques applied in the fields of computer vision and image processing.

A color attenuation prior model is proposed using the brightness and saturation of the pixels of a hazy image. Moreover, supervised learning is used to estimate the parameters from the information recovered from the hazy image (Badhe and Prabhakar 2016).

In this paper, we propose a method for single image dehazing. The method applies only to outdoor images (Scharstein and Szeliski 2002). The main technique is to find the low-intensity pixels in the hazy image, which is achieved by applying a particular filter. The method is based on the low value that pixels of a hazy image take in at least one of the RGB color channels: the values of these pixels identify the hazy part of the image and provide a clearly computed map of the haze transmission, which further improves visibility and gives an accurate haze model. After that, a Convolutional Neural Network (CNN) is applied to recover a high-quality dehazed image. The method can handle distant objects even in a very turbid medium, and it recovers a dehazed image from heavily hazy images (Hsu, Mertens et al. 2008).

Figure 4: Summary of the previous techniques to remove haze

III. METHODOLOGY

Haze removal is one of the most common problems in the field of computer vision. The problem is far from simple when the input is only a single image. Many methodologies have been proposed for removing haze from a hazy image, but some methods fail under certain circumstances (Zhu, Mai et al. 2014). These methods include polarization techniques, which remove the haze by applying a haze removal filter such as the mean guided filter. Other papers also compute the transmission map to further improve the visibility of the hazy image (Nayar and Narasimhan 1999).

The proposed training model first extracts dehazing features from the image using the convolution operation and passes the feature map to hidden layer one. These features are not enough to remove haze from the given datasets, so further features must be extracted; for this purpose the size of the feature map is reduced and its significant part is extracted in hidden layer two. The output layer can then easily check whether the input image is hazy or not, as shown in Figure 5.

Figure 5: The CNN architecture to remove haze from an image
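The low-intensity-pixel filtering used by the proposed method, a minimum over the RGB channels followed by a local minimum filter, and the transmission map computed from it can be sketched in a dark-channel style. This is an illustrative sketch only; the 3x3 patch size and the constant omega are assumed values, not parameters given in the paper.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a local minimum filter: the
    'particular filter' that finds the low-intensity pixels of a hazy image."""
    h, w, _ = img.shape
    mins = img.min(axis=2)              # lowest value among the 3 color channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=3):
    """Transmission map from the dark channel of the normalized image:
    t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

# A saturated region is dark in at least one channel, so its dark channel
# is ~0 and the estimated transmission there is ~1 (no haze detected).
img = np.zeros((5, 5, 3))
img[..., 0] = 0.9                       # bright in red, dark in green and blue
t = estimate_transmission(img, airlight=np.array([1.0, 1.0, 1.0]))
```

A uniformly bright (whitish) input, by contrast, has a high dark channel everywhere and therefore a low estimated transmission, which is exactly how these low-value pixels reveal the hazy part of the image.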
In this paper, we propose a method for removing haze from the image using a novel approach. This dehazing method follows prior work on image dehazing (Peng, Xiao et al. 2012). The low values in the color channels are actually due to three causes: a) dim light, b) brighter images, and c) dark objects (Zhu, Mai et al. 2015).

To verify this, we collected a dataset. Since haze usually occurs in outdoor landscape and cityscape scenes, we manually picked the dehazed landscape and cityscape images from the downloaded ones. We used around 1500 images to verify our results. On this dataset of outdoor hazy images, our approach works more efficiently than the previous methodologies and improves on the failure cases of all the previous approaches. The training results on outdoor images are presented in Figure 6.

In the encoding phase, shown in Figure 5, we apply the filters of all the previous approaches, forming a Convolutional Neural Network (CNN), in order to extract feature maps from the hazy image. This CNN model contains 7 neurons in the 2nd hidden layer and 8 neurons in the 3rd hidden layer, forwarding the information in a feed-forward manner as shown in Figure 3. The type of model presented in this paper ensures an efficient way to pass the parameters in both the forward and backward propagation passes.

Figure 6: CNN model to remove haze from the image

The decoder is similar to the encoding phase except that it uses a residual function, which ensures that each hidden neuron is fully connected to all the neurons of the second layer. The advantage of this function is that it improves the learning rate and helps our model converge on the training dataset. The decoder phase is shown in Figure 6.
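The encoding path described above, convolution to extract a feature map, downsampling to keep its significant part, and an output unit that scores the image as hazy or not, can be illustrated with a minimal NumPy sketch. The layer sizes, kernel, and weights here are arbitrary illustrations (not the 7- and 8-neuron layers of the actual model), and training/backpropagation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution: extracts a feature map from the input image."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[y, c] = (x[y:y + kh, c:c + kw] * k).sum()
    return out

def max_pool(x, size=2):
    """Reduce the feature map size, keeping the most significant activations."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def forward(img, kernel, w_out):
    """Conv + ReLU (hidden layer one), pooling (hidden layer two), then a
    sigmoid output unit scoring the input as hazy (1) or haze-free (0)."""
    f = np.maximum(conv2d(img, kernel), 0.0)
    p = max_pool(f)
    z = p.ravel() @ w_out
    return 1.0 / (1.0 + np.exp(-z))

img = rng.uniform(0.0, 1.0, size=(8, 8))   # toy single-channel input
kernel = rng.standard_normal((3, 3))       # 8x8 -> 6x6 feature map
w_out = rng.standard_normal(9)             # pooled 3x3 map -> one output unit
score = forward(img, kernel, w_out)        # probability-like hazy score
```

The sketch is purely feed-forward; in the actual model the same structure is traversed backward during backpropagation, and the decoder mirrors the encoder with residual connections as described above.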