
Single Image Dehazing using CNN

Javaid Iqbal1, Huzaifa Rashid2, Nauman Zafar3

Abstract—Haze is a natural phenomenon in which dust, smoke and other particles alter the appearance of the sky and reduce visibility. Hazy images are a common problem in computer vision, image processing and related fields. Haze causes visibility problems for road users and tourists, especially in northern (hilly) areas where haze and fog are frequent; fog is a different phenomenon, in which water particles reduce visibility. In this paper, we propose a method for single image dehazing using a Convolutional Neural Network (CNN). The method operates on outdoor images, to which we apply filters to locate the haze. In hazy images, some regions have a small value in at least one of the red, green and blue (RGB) color channels, and the intensity of these pixels is contributed mainly by the airlight and the depth map. By estimating these low-value points we can find the haze transmission map and recover a high quality dehazed image. We then train our proposed convolutional model, for which the ground truth dehazed images are available. We first apply a mean guided filter to remove haze while convolving the image, then introduce non-linearity in the feature maps, and finally reduce the dimensions of the image by sub-sampling. We present an end-to-end encoder-decoder training model that produces high quality dehazed images. The approach achieves 97% accuracy in removing haze from hazy images. We used around 1500 outdoor images to validate the results of the proposed methodology. The proposed method also yields the transmission map of the hazy image, which can further be used to enhance the visibility of the scene.

Keywords: Image dehazing, high intensity, mean filter, guided filter, haze removal, transmission map, depth map, atmospheric light, encoder-decoder, CNN

I. INTRODUCTION
Outdoor images lose quality because of sunlight reflected by air pollution (haze), water droplets in the air (fog), or a combination of smoke and fog (smog). Haze is a natural process in which dust and smoke particles reflect sunlight, causing a loss of visibility (Nayar and Narasimhan 1999). The view from the camera fades because of interference from the environmental light reflected by dust particles. Since the amount of scattering depends on the distance of the scene from the camera, the haze varies from image to image. These blurred images gain noise and suffer from color attenuation, as illustrated in Figure 1(a).

Figure 1 (a): Hazy Image

Because the amount of scattering depends on the distance of the scene from the camera, the degradation is spatially variant. Haze removal is highly desirable in fields such as computer vision, image processing and photography (Scharstein and Szeliski 2002; Levin, Lischinski et al. 2008). First, removing haze increases the visibility of an image degraded by atmospheric particles; generally, haze-free images are more pleasing than hazy ones, as shown in Figure 1(b).

Figure 1 (b): Haze-Free Image

Second, most computer image processing techniques, from basic image processing to advanced shape detection, assume that the input image is the scene radiance. The performance of these techniques depends greatly on the scene: if the image is dull, vision algorithms face many issues and do not perform efficiently. Removing haze is therefore necessary for better results and efficient performance, and the degraded images can be put to better use.

In computer vision and image enhancement, removing haze from a single image is a challenging task. Popular methods such as those of R. Fattal (Fattal 2008), R. Tan (Tian, Xiao et al. 2016) and S. G. Narasimhan (Narasimhan and Nayar 2003) describe the formation of haze in the environment as:

Q(a) = W(a) i(a) + P (1 − i(a))    Eq. (1)

where Q is the observed intensity, W is the light reflected from the scene, P is the average sunlight (atmospheric light) present in the environment, and i is the transmission, that is, the portion of light that is not scattered and reaches the camera. The aim is to recover W (reflected light), P (light source) and i (transmission medium) from the intensity Q.

Figure 2: Major causes of haze formation

The first term in equation (1) describes the light reflected from the surface and its decay in the medium (direct attenuation), while the second term describes the airlight, which results from previously scattered light and leads to a shift in the scene color (Levin, Lischinski et al. 2008). When the atmosphere is homogeneous, the transmission i can be written as

i(a) = e^(−β o(a))    Eq. (2)

where β is the scattering coefficient of the atmosphere, so the transmission decays exponentially with the scene depth o. Geometrically, equation (1) states that, in each color channel, the vectors P, W(a) and Q(a) are coplanar and their end points are collinear.

The problem of blurred vision is one of the most common problems in computer vision; it is caused by haze and by light phenomena such as reflection, refraction and radiance. Reduced visibility due to haze is also one of the most common causes of traffic accidents. Recently, single image haze removal has made considerable progress, and the success of these methods relies on using a strong prior or assumption. Tan hypothesized that the restored image must have higher contrast than the input hazy image, and removed haze by maximizing the local contrast of the recovered image; the results are visually convincing but cannot be shown to be physically valid. Fattal estimated the albedo of the scene and then inferred the transmission map under the assumption that the transmission and the surface shading are locally uncorrelated. Fattal's work can be evaluated as physically valid and gives remarkable results; however, it does not handle indoor images well and may give faulty results when the assumption does not hold, as shown in Figure 3 (a) and (b). Fattal's approach also uses a haze formation model different from the original one (Fattal 2008).

Figure 3: (a) Haze formation model and (b) constant albedo model used in Fattal's work

Tan's 2008 model focuses on improving the visibility of an image and was designed mainly for images taken in bad weather. The approach is based on two observations: first, a dehazed image has more contrast than a hazy image; second, the airlight, whose intensity depends on the distance from the object to the camera, should vary smoothly. Based on these two observations, Tan developed a cost function in the framework of a Markov Random Field (MRF) to enhance the output image by recovering its detail and structure. This method focuses on enhancing visibility; it does not aim to recover the reflected light.

Similarly, in 2008 R. Fattal introduced another image dehazing model, based on estimating and removing the scattered light in order to increase visibility and recover the haze-free image. In this approach the image model is redefined to account for surface shading in addition to the transmission function. The image is divided into small regions of constant albedo, and the ambiguity of the light source is resolved by requiring the surface shading and the transmission function to be locally statistically uncorrelated. Fattal used a physics-based approach to produce a haze-free image, but the statistical assumption on local patches makes the method less convenient.
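To make the haze imaging model in equations (1) and (2) concrete, the short sketch below synthesizes a hazy image from a clear image and a depth map. It is a minimal illustration only: the scattering coefficient beta, the toy depth map and the value chosen for the atmospheric light P are assumptions made for the sketch, not parameters taken from this paper.

```python
import numpy as np

def synthesize_haze(W, depth, P=np.array([0.9, 0.9, 0.9]), beta=1.0):
    """Apply the haze imaging model of Eqs. (1) and (2).

    W     : clear image, float array of shape (H, W, 3) in [0, 1]
    depth : scene depth o(a), float array of shape (H, W)
    P     : assumed global atmospheric light, one value per RGB channel
    beta  : assumed scattering coefficient of the atmosphere
    """
    # Eq. (2): transmission decays exponentially with scene depth
    i = np.exp(-beta * depth)[..., None]              # shape (H, W, 1)
    # Eq. (1): direct attenuation plus airlight
    Q = W * i + P[None, None, :] * (1.0 - i)
    return Q, i

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.random((4, 4, 3))                         # toy "clear" image
    depth = np.linspace(0.0, 3.0, 16).reshape(4, 4)   # toy depth map
    Q, i = synthesize_haze(W, depth)
    print(Q.shape, float(i.min()), float(i.max()))
```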
II. LITERATURE REVIEW

Haze removal is highly desirable in fields such as computer vision, image processing and photography, since removing haze increases the visibility of images degraded by atmospheric particles.

Qingsong Zhu (Zhu, Mai et al. 2014) used another approach to remove haze from an image, introducing a color attenuation prior. A linear model trained with supervised learning estimates the scene depth of the hazy image, establishing a link between the image and its depth map. By estimating the depth map of an image, the haze can easily be removed. The methodology is efficient and gives a good dehazing effect, but not in all cases.

Codruta Orniana (Wu, Zhu et al. 2014) proposed another methodology for haze removal. Their method is able to dehaze an image accurately using only the original degraded information. The aim is the enhancement of visibility, similar to Tan's model.

Yanlin Tian (Tian, Xiao et al. 2016) proposed an approach to remove haze from a hazy image by segmenting superpixels. Previous haze removal methods may not work under particular circumstances because of the noise distribution, so that work combines segmented superpixels with the light intensity of the hazy image to compute the sunlight instead of using a dark channel prior, and it also estimates the transmission map. Color space conversion is done by converting RGB into HSI; in the RGB channels the brightest pixel is picked and its location is recorded, and this recorded pixel is known as a superpixel. Averaging it with its 8 surrounding pixels reduces the haze of the hazy image.

Images captured in rainy weather are discolored and noisy because of water droplets on the camera lens (Xie, Guo et al. 2010). Images taken in hazy conditions have poor visibility, which strongly affects outdoor computer vision systems such as video surveillance, traffic surveillance cameras, security systems, cameras in space, and so on. That paper surveys recent work on restoring hazy images and on visibility techniques applied in computer vision and image processing.

A color attenuation prior model has also been proposed that uses the brightness and saturation of the pixels of a hazy image; supervised learning is used to obtain the model parameters from the information recovered from hazy images (Badhe and Prabhakar 2016).
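As an illustration of the color attenuation prior discussed above, the sketch below estimates a rough per-pixel depth map from the brightness and saturation of a hazy image with a linear model. It is only a sketch of the idea: the coefficients theta0, theta1 and theta2 are placeholders standing in for the supervised-learning parameters of that line of work and are not values reported in this paper.

```python
import numpy as np

def depth_from_color_attenuation(img, theta0=0.1, theta1=1.0, theta2=-1.0):
    """Color attenuation prior sketch: depth grows with brightness and
    shrinks with saturation.  img is a float RGB image in [0, 1].
    theta0..theta2 are illustrative placeholders, not learned values."""
    v = img.max(axis=2)                                     # HSV value (brightness)
    s = np.where(v > 0, (v - img.min(axis=2)) / np.maximum(v, 1e-6), 0.0)  # saturation
    return theta0 + theta1 * v + theta2 * s                 # estimated scene depth
```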
Figure 4: Summary of the previous techniques to remove haze

In this paper, we propose a method for removing haze from an image using a novel approach. The method operates on outdoor hazy images and is based on the observation that, in a hazy image, some pixels have a low value in at least one of the RGB color channels. The values of these pixels reveal the hazy part of the image, and from them a clean estimate of the haze transmission can be computed. We then apply a Convolutional Neural Network (CNN) to recover a high quality dehazed image.

III. METHODOLOGY

Haze removal is one of the most common problems in the field of computer vision, and it is far from simple when the input is only a single image. Many methods have been proposed for removing haze from a hazy image, but some of them fail under certain circumstances (Zhu, Mai et al. 2014). These methods include polarization techniques and approaches that remove haze by applying a haze removal filter such as the mean guided filter. Other papers also compute the transmission map to further improve the visibility of the hazy image (Nayar and Narasimhan 1999).

The proposed training model first extracts dehazing features from the image using the convolution operation and passes the feature map to the first hidden layer. These features are not enough to remove haze from the given dataset, so we extract further features: we reduce the size of the feature map and extract its significant part in the second hidden layer. The output layer can then easily check whether the input image is hazy or not, as shown in Figure 5.

Figure 5: The CNN architecture to remove haze from an image

The main technique is to find the low intensity pixels in the hazy image, which is achieved by applying a particular filter (Scharstein and Szeliski 2002). These low intensity pixels are then used to compute the transmission map, which further improves visibility and gives us an accurate haze model. The method can handle distant objects even in a very turbid medium and recovers dehazed images from heavily hazy images (Hsu, Mertens et al. 2008). The low values in the color channels are mainly due to three causes: a) dim lighting, b) brighter image regions, and c) dark objects (Peng, Xiao et al. 2012; Zhu, Mai et al. 2015).

In the encoding phase shown in Figure 5, we apply the filters of the previous approaches to form a Convolutional Neural Network (CNN) that extracts a feature map from the hazy image. This CNN model contains 7 neurons in the 2nd hidden layer and 8 neurons in the 3rd hidden layer, and it forwards the information in a feed-forward manner as shown in Figure 6. The type of model presented in this paper allows the parameters to be passed efficiently in both the forward and backward propagation passes.

Figure 6: CNN model used to remove haze from the image

The decoder function is similar to the encoding phase except that it uses a residual function, which ensures that each hidden neuron is fully connected to all the neurons of the second layer. The advantage of this function is that it improves the learning rate and the convergence of the model on the training dataset. The decoder phase is shown in Figure 7.
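The listing below is a minimal PyTorch sketch of an encoder-decoder of the kind described above: a convolutional encoder whose hidden layers use 7 and 8 channels, a sub-sampling step, a decoder, a residual connection and a mean squared error loss trained with plain gradient descent. The kernel sizes, the upsampling step and the skip connection back to the hazy input are our own illustrative choices and are not claimed to be the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DehazeEncoderDecoder(nn.Module):
    """Toy encoder-decoder in the spirit of Figures 5-7; the layer widths of 7
    and 8 follow the description in the text, everything else is an assumption."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 7, kernel_size=3, padding=1), nn.ReLU(),  # 7-channel hidden layer
            nn.Conv2d(7, 8, kernel_size=3, padding=1), nn.ReLU(),  # 8-channel hidden layer
            nn.MaxPool2d(2),                                       # sub-sampling
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(8, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        out = self.decoder(self.encoder(x))
        return torch.clamp(out + x, 0.0, 1.0)   # residual connection to the hazy input

if __name__ == "__main__":
    model = DehazeEncoderDecoder()
    hazy = torch.rand(1, 3, 64, 64)              # stand-in hazy image
    clear = torch.rand(1, 3, 64, 64)             # stand-in ground truth
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)  # plain gradient descent
    loss = nn.MSELoss()(model(hazy), clear)      # mean squared error loss
    loss.backward()
    optimizer.step()
    print(float(loss))
```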

Figure 7: (a) The encoder phase (b) Residual function in the decoder

A non-linear relation is established between a hazy image and the estimated ground truth; this relationship recovers a high quality dehazed image from a hazy image in an efficient manner. The proposed model uses an encoder-decoder structure built with a deep neural network. The use of gradient descent increases the reliability of convergence on the training dataset, and the mean squared error and residual loss functions play a vital role in training the model.

To verify this, we collected a dataset. Since haze usually occurs in outdoor landscape and cityscape scenes, we manually picked dehazed landscape and cityscape images from the downloaded images and used around 1500 images to verify our results. On this outdoor hazy image dataset, our approach works more efficiently than previous methods and improves on their failure cases. The training results on outdoor images are presented in Figure 8.

Figure 8: Left: dehazed images in our database. Bottom: ground truth image. Right: output image.

The figure above shows the dehazed images stored in our database together with other images having high intensity pixels; here we first assume that the atmospheric light P is given. The figure clearly shows the different variations of intensity in the hazy images of our database.

Figure 9 shows the results of our proposed training model on indoor images. The model efficiently removes the haze and recovers a high quality dehazed image from hazy indoor images. The loss function used in our model recovers a high transmission value from the input image, which further helps us estimate the transmission map of the hazy part of the image.

Figure 9: Left: dehazed images in our database. Bottom: ground truth image. Right: output image.

Our approach is based on the above-mentioned hypothesis about hazy images: in most regions that do not contain the environmental light (such as the sky), at least one of the RGB color channels has a very low value in some patches of the image (Van Herk 1992). In general, the lowest value in such a scenario is

W_Max(a) = min_{z ∈ Ω(a)} ( min_{c ∈ {r,g,b}} W_c(z) )    Eq. (3)

where W_c is a color channel of the original hazy image W and Ω(a) is a local neighborhood of the image centered at a. Our hypothesis says that, except for the environmental (sky) regions, the value of W_Max is small and tends towards 0 if W is a haze-free image. We call W_Max the high intensity value of W, and we therefore have

W_Max(a) = min_{z ∈ Ω(a)} ( min_{c ∈ {r,g,b}} W_c(z) ) → 0    Eq. (6)

P_c is always positive. As mentioned earlier, high intensity pixels are not a good cue for mountainous and hilly regions where the haze is dense; the low-value assumption may fail under such circumstances because of the limited visibility in hilly areas. Nevertheless, since the atmosphere extends without limit and the transmission tends towards 0, equation (6) gives remarkably good results on both the environmental and non-environmental parts of the image (Schechner, Narasimhan et al. 2001).

We also present a novel method for the computation of the sunlight (environmental light), as shown in Figure 4.

On normal days the atmosphere contains various dust particles that reflect sunlight, which is one of the major causes of reduced visibility for humans. To counteract this, we have developed a method that removes haze for better human visibility. Our results reach up to 97% accuracy on a dataset of about 1500 hazy images, and the transmission map is obtained as a byproduct that can be used in further image preprocessing.
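As a concrete illustration of equation (3), the sketch below computes W_Max by taking, for every pixel, the minimum over the three color channels and then over a local neighborhood Ω(a). The use of scipy.ndimage.minimum_filter and the 20 × 20 default patch size (borrowed from the experiments in the next section) are implementation choices for this sketch, not requirements of the method.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def w_max(W, patch=20):
    """Eq. (3): W_Max(a) = min over z in Omega(a) of min over c of W_c(z).

    W is a float RGB image of shape (H, W, 3).  By the hypothesis in the
    text, the returned (H, W) map tends towards 0 for haze-free images."""
    per_pixel_min = W.min(axis=2)                      # minimum over the color channels
    return minimum_filter(per_pixel_min, size=patch)   # minimum over the local patch
```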
IV. EXPERIMENTAL RESULTS
We use M. Van Herk's algorithm, whose time complexity is proportional to the image dimensions, with an image resolution of 720 × 480. In our proposed methodology we also implemented a Preconditioned Conjugate Gradient solver. Our algorithm takes 5 seconds to process a 720 × 480 pixel image on a Core i7 processor, using MATLAB for the filtering used to test our results and PyTorch on Python 3.6 to implement the neural network. The computation is time consuming: training on the dataset took almost 6 hours (Zhao, Xiao et al. 2015).

We assume that the transmission in a local patch Ω(a) is constant and denote the patch transmission by t̃(a). Taking the minimum operation over the local patch in the haze imaging equation (1), and assuming first that the environmental light P is known (we propose a novel method for computing the environmental light, as shown in Figure 4), we have:

min_{z ∈ Ω(a)} Q_c(z) = t̃(a) min_{z ∈ Ω(a)} W_c(z) + (1 − t̃(a)) P_c    Eq. (4)

Keeping in mind that the minimum is computed independently on each RGB channel, equation (4) is equivalent to:

min_{z ∈ Ω(a)} ( Q_c(z) / P_c ) = t̃(a) min_{z ∈ Ω(a)} ( W_c(z) / P_c ) + (1 − t̃(a))    Eq. (5)

According to our approach, the high intensity value W_Max of the dehazed radiance W tends towards zero. The transmission estimated with a patch size of 20 × 20 looks good, but it contains some block effects because the transmission is not always constant within a patch (see Figure 12).

Equations (4) and (5) above compute the transmission map of the image; the light source is estimated in this section. In Figure 14 we present a comparison between our model and Tan's model: our methodology clearly recovers the structure of the image without sacrificing color. In Figure 13 we compare Fattal's result with our approach, and in Figure 8 we show that, in the case of dark images, our model gives much better results than Fattal's approach (Xiang, Sahay et al. 2013).

Figure 10: Haze removal results. Top: input hazy images. Bottom: dehazed images. The highlighted part of the figure shows the novelty of computing the light.

Figure 11: (a) Input image (b) Before extrapolation (c) After extrapolation (Fattal's result) (d) Our result
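Combining equations (4) and (5) with the hypothesis of equation (6), that W_Max tends towards zero, gives a simple estimate of the patch transmission, t~(a) = 1 - min over the patch and channels of Q_c(z)/P_c, after which equation (1) can be inverted to recover the radiance. The sketch below assumes the environmental light P has already been estimated; the small lower bound placed on the transmission is only an illustrative safeguard for the division, not a value used in this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(Q, P, patch=20):
    """t~(a) = 1 - min_{z in Omega(a)} min_c Q_c(z) / P_c   (from Eqs. (4)-(6)).

    Q : observed hazy image, float array of shape (H, W, 3) in [0, 1]
    P : estimated environmental (atmospheric) light, length-3 array
    """
    normalized = Q / np.maximum(P[None, None, :], 1e-6)
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return np.clip(1.0 - dark, 0.05, 1.0)   # small floor keeps the division below stable

def recover_radiance(Q, P, t):
    """Invert Eq. (1) per channel: W = (Q - P) / t + P."""
    t3 = t[..., None]
    return np.clip((Q - P[None, None, :]) / t3 + P[None, None, :], 0.0, 1.0)
```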
Table 1 shows a comparison of the different approaches used for haze removal and why our approach is more efficient in terms of the amount of data used, computational speed, accuracy, and the availability of a transmission map: the Fattal and Tan approaches do not produce a transmission map, while our approach does.

Approach / Parameters        | Image Dataset | Transmission Map | Computational Speed | Accuracy
Fattal approach              | 500           | No               | high                | 86%
Tan's approach               | 700           | No               | high                | 92%
Dark channel prior approach  | 1000          | Yes              | low                 | 95%
CNN approach                 | 1400          | Yes              | low                 | 97%

Table 1: Comparison between different approaches

Figure 12: (a) Input hazy image (b) Estimated transmission map

After applying all the formulas and equations (4), (5) and (6), we obtain a dehazed image, as shown in Figure 12(c).

Figure 12 (c): Dehazed image

If we compare our result with Fattal's work, the following figure shows the comparison:

Figure 13: Left: hazy image. Middle: Fattal's work. Right: our result

If we compare our result with Tan's work, our approach also gives better results. Figure 14 shows the comparison of the results.

Figure 14: Comparison between our result and Tan's result

As a further contribution of this paper, the following figure presents the results on both outdoor and indoor images.

Figure 15: Comparison of our results on indoor and outdoor images

V. CONCLUSION

In this paper, we have proposed a very simple and efficient method for image dehazing using a Convolutional Neural Network (CNN) in which an end-to-end encoder-decoder architecture is proposed. It gives better and more efficient results than Fattal's and Tan's methods. Previous work on the dark channel prior reported a result of 0.95, while our approach using high intensity pixel values gives approximately 0.97, as shown in this paper.

The approach may be invalid for images that contain no cast shadows, and for such objects it might underestimate the transmission. Our future work is therefore to refine the dehazing of images having no cast shadows so that the method works in all cases. Future work also includes real-time dehazing of streaming video, which is composed of frames of images; the results in this paper are already real time for a single image. Our next step is a system that removes haze from video in real time, which would be a great achievement for addressing traffic problems in northern areas (Sun, Lai et al. 2016).
VI. ACKNOWLEDGEMENTS

We thank our colleague from Virginia Tech who gave us helpful guidelines, even though we may have missed some points and not fulfilled all of his expectations. We thank Dr. Javaid Iqbal for his kindness, great support, and comments that greatly improved the manuscript; we thank him wholeheartedly for sharing his knowledge during this research and for strengthening our foundations.

VII. REFERENCES

[1] Anwar, M. I. and A. Khosla (2017). "Vision enhancement through single image fog removal." Engineering Science and Technology, an International Journal 20(3): 1075-1083.

[2] Badhe, M. V. and L. R. Prabhakar (2016). "A Survey on Haze removal using Image visibility Restoration Technique." International Journal of Computer Science and Mobile Computing 5(2): 96-101.

[3] Cai, B., et al. (2016). "DehazeNet: An end-to-end system for single image haze removal." IEEE Transactions on Image Processing 25(11): 5187-5198.

[4] Fattal, R. (2008). "Single image dehazing." ACM Transactions on Graphics (TOG) 27(3): 72.

[5] Gibson, K. B., et al. (2012). "An investigation of dehazing effects on image and video coding." IEEE Transactions on Image Processing 21(2): 662-673.

[6] Hsu, E., et al. (2008). "Light mixture estimation for spatially varying white balance." ACM Transactions on Graphics (TOG), ACM.

[7] Levin, A., et al. (2008). "A closed-form solution to natural image matting." IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2): 228-242.

[8] Narasimhan, S. G. and S. K. Nayar (2003). "Contrast restoration of weather degraded images." IEEE Transactions on Pattern Analysis and Machine Intelligence 25(6): 713-724.

[9] Nayar, S. K. and S. G. Narasimhan (1999). "Vision in bad weather." Proceedings of the Seventh IEEE International Conference on Computer Vision, IEEE.

[10] Peng, L., et al. (2012). "Single image fog removal based on dark channel prior and local extrema." American Journal of Engineering and Technology Research 12(2).

[11] Scharstein, D. and R. Szeliski (2002). "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms." International Journal of Computer Vision 47(1-3): 7-42.

[12] Schechner, Y. Y., et al. (2001). "Instant dehazing of images using polarization." Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), IEEE.

[13] Sun, C.-C., et al. (2016). "Single image fog removal algorithm based on an improved dark channel prior method." 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), IEEE.

[14] Tian, Y., et al. (2016). "Haze removal of single remote sensing image by combining dark channel prior with superpixel." Electronic Imaging 2016(2): 1-6.

[15] Van Herk, M. (1992). "A fast algorithm for local minimum and maximum filters on rectangular and octagonal kernels." Pattern Recognition Letters 13(7): 517-521.

[16] Wu, D., et al. (2014). "Image haze removal: Status, challenges and prospects." 2014 4th IEEE International Conference on Information Science and Technology (ICIST), IEEE.

[17] Xiang, Y., et al. (2013). "Hazy image enhancement based on the full-saturation assumption." 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), IEEE.

[18] Xie, B., et al. (2010). "Improved single image dehazing using dark channel prior and multi-scale retinex." 2010 International Conference on Intelligent System Design and Engineering Application (ISDEA), IEEE.

[19] Zhao, H., et al. (2015). "Single image fog removal based on local extrema." IEEE/CAA Journal of Automatica Sinica 2(2): 158-165.

[20] Zhu, Q., et al. (2014). "Single Image Dehazing Using Color Attenuation Prior." BMVC, Citeseer.

[21] Zhu, Q., et al. (2015). "A fast single image haze removal algorithm using color attenuation prior." IEEE Transactions on Image Processing 24(11): 3522-3533.
