
V. Sujatha et al. / (IJAEST) International Journal of Advanced Engineering Sciences and Technologies, Vol. 5, Issue 2, pp. 261–268

A Genetic Algorithm Based Fog Intensity Detection Method for Driver Safety

V. Sujatha, Y. Prathima, and Dr. K. Rama Krishna
E.C.E. Department, V.R. Siddhartha Engineering College, TIFAC CORE in Telematics, Kanuru, Vijayawada
Sujatha.vundavali@gmail.com, prathimasiva@gmail.com, srk_kalva@yahoo.com

Abstract- Obstacle detection under adverse weather conditions, especially foggy conditions, is a challenging task because contrast is drastically reduced. In daylight fog in particular, the contrast of images grabbed by in-vehicle cameras in the visible light range is drastically degraded, which makes current driver assistance systems that rely on cameras very sensitive to weather conditions. The visibility distance is calculated from the camera projection equations and the blurring due to the fog. The effects of daylight fog vary across the scene and are exponential with respect to the depth of scene points.

Key-Words- Fog removal, atmospheric visibility distance, contrast restoration, genetic algorithm

I. INTRODUCTION

Recently, many systems have been developed that use computers and various sensors to assist driving. Notable examples include self-steering by white-line detection, rear-end collision prevention that operates by measuring the distance to the vehicle ahead, danger notification that recognizes pedestrians, and automatic operation of the windshield wipers upon recognizing rain drops. When considering a driving assistance system, we cannot ignore changes in weather conditions, since in adverse weather such as rain, snow, or fog, driving is more difficult than in fair conditions, leading to a significant increase in the accident rate. Therefore, a close relationship exists between driver assistance and weather recognition. In this paper, we focus on fog detection. Fog negatively influences human perception of traffic conditions, creating potentially dangerous situations. Automatic lighting of fog lamps, speed control, and rousing of the driver's attention are examples of potential assistance to be realized through fog recognition. Under foggy conditions, the distance to a preceding vehicle's tail lamps is perceived to be 60% greater than under fair conditions. Furthermore, fog changes significantly both temporally and spatially, so there is a need for real-time detection using in-vehicle sensors. Installing large numbers of sensors along roads might be one solution, but it would not accurately reflect a driver's visual condition and would be very expensive to establish. Considering these problems, we propose a method that classifies fog density into three levels using in-vehicle camera images and millimetre-wave (mm-W) radar data. The image from the in-vehicle camera reflects the driver's visual conditions, which are vital when driving; this is the prime advantage of using an in-vehicle camera. We also evaluate the degradation in visibility of images captured in foggy conditions, focusing especially on the change in visibility of a preceding vehicle. We must also take the distance to the targets into account to determine the fog density, because under the same fog condition nearby objects are easy to see while distant objects are not. We therefore use mm-W radar together with the in-vehicle camera, since the radar can measure distance without being influenced by adverse weather.

Under adverse meteorological conditions, the contrast of images grabbed by a classical in-vehicle camera in the visible light range is drastically degraded, which makes current in-vehicle applications relying on such sensors very sensitive to weather conditions. An in-vehicle vision system should take fog effects into account to be more reliable. A first solution is to adapt the operating thresholds of the system, or to momentarily deactivate it if these thresholds have been surpassed. A second solution is to remove weather effects from the image beforehand. Unfortunately, the effects vary across the scene and are exponential with respect to the depth of scene points. Consequently, space-invariant filtering techniques cannot be directly used to adequately remove weather effects from images. A judicious approach is to detect and characterize the weather conditions, estimate the decay in the image, and then remove it.


II. FOG EFFECTS ON VISION

A. Visual Properties of Fog

The attenuation of luminance through the atmosphere was studied by Koschmieder, who derived an equation relating the apparent luminance L of an object located at distance d to the luminance L0 measured close to the object:

    L = L0 e^{-kd} + L∞ (1 - e^{-kd})                    (1)

This expression indicates that the luminance of an object seen through fog is attenuated by the factor e^{-kd} (Beer–Lambert law); it also reveals a luminance reinforcement of the form L∞ (1 - e^{-kd}), resulting from daylight scattered by the slab of fog between the object and the observer, which is also named air light. L∞ is the atmospheric luminance; in the presence of fog, it is also the background luminance against which the target can be detected. The previous equation may be written as

    L - L∞ = (L0 - L∞) e^{-kd}                            (2)

On the basis of this equation, Duntley developed a contrast attenuation law, stating that a nearby object exhibiting contrast C0 with the background will be perceived at distance d with the contrast

    C = (L - L∞) / L∞ = C0 e^{-kd}                        (3)

This expression serves as the basis for the definition of a standard dimension called the "meteorological visibility distance" Vmet, i.e., the greatest distance at which a black object (C0 = -1) of a suitable dimension can be seen against the sky on the horizon, with the threshold contrast set at 5%. It is thus a standard dimension that characterizes the opacity of a fog layer. This definition yields the expression

    Vmet = -(1/k) ln(0.05) ≈ 3/k                          (4)

Fig. 1. Fog or haze luminance is due to the scattering of daylight. Light coming from the sun and scattered by atmospheric particles toward the camera is the air light A; it increases with distance. The light emanating from the object, R, is attenuated by scattering along the line of sight; the direct transmission T of R decreases with distance.

B. Camera Response

Let us denote by f the camera-response function, which models the mapping from scene luminance to image intensity by the imaging system, including optics as well as electronics. The intensity I of a pixel is the result of f applied to the sum of the air light A and the direct transmission T:

    I = f(T + A)                                          (5)

In this paper, we assume that the conversion process between the incident energy on the charge-coupled device (CCD) sensor and the intensity in the image is linear. This is generally the case for short exposure times, because they prevent the CCD array from being saturated. Furthermore, short exposure times (1–4 ms) are used on in-vehicle cameras to reduce motion blur. This assumption can thus be considered valid, and (5) becomes

    I = R e^{-kd} + A∞ (1 - e^{-kd})                      (6)

where R = f(L0) is the intrinsic intensity of the scene point and A∞ = f(L∞) is the intensity of the sky background.
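As a worked illustration of the model in (1)-(6), the following Python sketch synthesizes the appearance of a fog-free scene under homogeneous daylight fog and evaluates the meteorological visibility distance of (4). It is a minimal sketch, not part of the proposed method; the array shapes, the constant extinction coefficient k, and the synthetic depth map are assumptions chosen only for this example.

    import numpy as np

    def apply_koschmieder(R, d, k, A_inf=255.0):
        """Simulate daylight fog on an image: I = R e^{-kd} + A_inf (1 - e^{-kd}), eq. (6)."""
        t = np.exp(-k * d)                  # direct transmission factor
        return R * t + A_inf * (1.0 - t)    # attenuated object light + air light

    # Toy example (all values are illustrative assumptions).
    R = np.full((240, 320), 60.0)                           # intrinsic (fog-free) intensities
    d = np.tile(np.linspace(5.0, 200.0, 320), (240, 1))     # depth grows left to right, in metres
    k = 0.06                                                 # extinction coefficient, in 1/m

    I_foggy = apply_koschmieder(R, d, k)

    # Meteorological visibility distance, eq. (4): Vmet = -ln(0.05)/k, roughly 3/k.
    V_met = -np.log(0.05) / k
    print("Vmet = %.1f m" % V_met)                           # about 50 m for k = 0.06

Pixels at large depth converge to the sky intensity A∞, which is exactly the degradation that the contrast-restoration step of Section III inverts.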
C. Enhancement

The aim of image enhancement is to improve the interpretability or perception of information in images for human viewers, or to provide 'better' input for other automated image processing techniques. Image enhancement techniques can be divided into two broad categories:

1. Spatial domain methods, which operate directly on pixels, and
2. Frequency domain methods, which operate on the Fourier transform of an image.

Unfortunately, there is no general theory for determining what 'good' image enhancement is when it comes to human perception: if it looks good, it is good. However, when image enhancement techniques are used as pre-processing tools for other image processing techniques, quantitative measures can determine which techniques are most appropriate.

Contrast enhancement techniques can be classified into two types: global and local. Histogram equalization is the most popular global enhancement algorithm because of its simplicity and effectiveness. However, because it uses global histogram information over the whole image as its transformation function to stretch contrast, it cannot reflect local changes in scene depth, and the enhancement is unsatisfactory when depth changes across the scene. One image-clearing method for fog applies a moving mask-based, sub-block overlapped histogram equalization to de-weather the degraded image. The idea is to assume that the pixels inside the sub-block mask share the same scene depth; by segmenting the sky region to restrain over-enhancement, the contrast in the non-sky region can be restored and the fog effect lightened. However, because this method moves the mask in steps of one pixel while performing sky-region pixel judgement and sub-block overlapped histogram equalization, its computational complexity is very high.

Fig. 2. Gray scale images.
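For reference, a minimal global histogram equalization is sketched below in Python/NumPy; the function name and the 8-bit input assumption are ours, and the sketch is only meant to show why a single global look-up table cannot adapt to local scene depth, which is the limitation the sub-block overlapped variant addresses.

    import numpy as np

    def equalize_hist(gray):
        """Global histogram equalization for an 8-bit gray scale image."""
        hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum().astype(np.float64)
        cdf = cdf / cdf[-1]                              # normalized cumulative distribution
        lut = np.round(255.0 * cdf).astype(np.uint8)     # one look-up table for the whole image
        return lut[gray]                                 # the same mapping is applied everywhere

Because the look-up table is built from the whole image, nearby (lightly fogged) and distant (densely fogged) regions receive the same transformation, which is why depth-aware or local methods are preferred for foggy scenes.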

III. CONTRAST-RESTORATION METHOD

Restoration Principle

Here, we describe a simple method to restore scene contrast from an image of a foggy scene. Let us consider a pixel with known depth d. Its intensity I is given by (6). The pair (A∞, k) characterizes the weather condition and is estimated separately; consequently, R can be directly estimated for all scene points from

    R = I e^{kd} + A∞ (1 - e^{kd})                        (7)

The contrast Cr in the restored image is thus

    Cr = (R - A∞) / A∞ = (I - A∞) e^{kd} / A∞ = C0        (8)

i.e., the restoration recovers the intrinsic contrast of the scene point against the sky. However, R may become negative for certain values of (I, d). Solving R(d*) = 0 yields

    d* = (1/k) ln( A∞ / (A∞ - I) )                        (9)

In the case of negative values during the restoration process, we propose to set them to 0. The restoration equation finally becomes

    R = max( 0, I e^{kd} + A∞ (1 - e^{kd}) ),   d ∈ [0, +∞)      (10)

Fig. 3. Restoration principle outputs.

Fig. 4. Histogram output for images.
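A direct NumPy transcription of (7)-(10) is sketched below. The function names and the idea of obtaining the per-pixel depth d from the mm-W radar (or any other range estimate) are our own assumptions for illustration; the sketch is not the exact implementation used in the experiments.

    import numpy as np

    def restore_contrast(I, d, k, A_inf):
        """Invert the fog model, eq. (10): R = max(0, I e^{kd} + A_inf (1 - e^{kd}))."""
        e = np.exp(k * np.asarray(d, dtype=np.float64))
        R = I.astype(np.float64) * e + A_inf * (1.0 - e)
        return np.clip(R, 0.0, 255.0)   # negatives set to 0 as in (10); capped at 255 for 8-bit display

    def critical_depth(I, k, A_inf):
        """Depth d* beyond which R would go negative for a pixel of intensity I < A_inf, eq. (9)."""
        return np.log(A_inf / (A_inf - I)) / k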


IV. BLOCK DIAGRAM

The processing chain of the proposed method is:

Foggy image → gray scale foggy image → edge detection (binary image) → genetic algorithm → iteration to reduce intensity levels → output.

Each stage is described below.
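Read as code, the diagram corresponds roughly to the following sketch. The OpenCV calls, the Canny thresholds, and the uniform quantization used here as a stand-in for the genetic-algorithm intensity-reduction stage are illustrative assumptions, not the authors' exact implementation.

    import cv2

    def fog_pipeline(path):
        color = cv2.imread(path)                        # foggy input image (any format)
        gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)  # converted to gray scale
        edges = cv2.Canny(gray, 50, 150)                # binary edge map (Canny)
        # The genetic algorithm then iterates to reduce the number of intensity levels;
        # a plain uniform quantization stands in for that stage in this sketch.
        reduced = (gray // 32) * 32
        return gray, edges, reduced                     # outputs of the chain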

1. Foggy Image

First, a foggy image is taken as the input. It may be in any format, for example a JPEG or bitmap color image. The foggy color image is then converted into a gray scale image: a color image is a three-dimensional array, whereas a gray scale image is a two-dimensional array, and feature extraction in two dimensions is much simpler than in three. The Canny algorithm is then applied to the gray scale image for edge detection.
2. Edge Detection

Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.

Canny Edge Detection: The purpose of detecting sharp changes in image brightness is to capture important events and changes in the properties of the world. It can be shown that, under rather general assumptions for an image formation model, discontinuities in image brightness are likely to correspond to

• discontinuities in depth,
• discontinuities in surface orientation,
• changes in material properties, and
• variations in scene illumination.

In the ideal case, applying an edge detector to an image leads to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, and curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm may significantly reduce the amount of data to be processed and may filter out information that is regarded as less relevant, while preserving the important structural properties of the image. If the edge detection step is successful, the subsequent task of interpreting the information content of the original image may be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from nontrivial images are often hampered by fragmentation (the edge curves are not connected), missing edge segments, and false edges that do not correspond to interesting phenomena in the image, all of which complicate the subsequent task of interpreting the image data.

Fig. 5. Input images.

Fig. 6. Edge detection outputs.
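To make the notion of a sharp brightness change concrete, the short sketch below computes a Sobel gradient magnitude and thresholds it into a binary edge image. This is a simplified illustration of the principle only, not the full Canny detector (which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding), and the threshold value is an arbitrary assumption.

    import cv2
    import numpy as np

    def simple_edges(gray, thresh=60.0):
        """Binary edge map from the gradient magnitude of a gray scale image."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)        # horizontal brightness derivative
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)        # vertical brightness derivative
        magnitude = np.hypot(gx, gy)                  # large values mark discontinuities
        return (magnitude > thresh).astype(np.uint8) * 255

In practice, cv2.Canny(gray, low, high) can be used instead; its two hysteresis thresholds keep weak edge pixels only when they are connected to strong ones, which reduces the fragmentation discussed above.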


3. Gray Scale Images

A gray scale image (also called gray-scale, grayscale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a gray scale image as an individual matrix, with each element of the matrix corresponding to one image pixel; by convention, the variable name I refers to a gray scale image. The matrix can be of class uint8, uint16, int16, single, or double. While gray scale images are rarely saved with a color map, MATLAB uses a color map to display them. For a matrix of class single or double, using the default gray scale color map, the intensity 0 represents black and the intensity 1 represents white. For a matrix of type uint8, uint16, or int16, the intensity intmin(class(I)) represents black and the intensity intmax(class(I)) represents white.

Fig. 7. Input images.

Fig. 8. Gray scale images.
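The same intensity-range conventions (0 to 1 for floating-point classes, intmin to intmax for integer classes) can be mirrored outside MATLAB; the small Python/NumPy sketch below is only an illustration, and the helper names are ours.

    import numpy as np

    def to_double(gray_u8):
        """uint8 gray image (0 = black, 255 = white) -> float image in [0, 1]."""
        return gray_u8.astype(np.float64) / 255.0

    def to_uint8(gray_f):
        """Float gray image in [0, 1] -> uint8 image (0 = black, 255 = white)."""
        return np.round(np.clip(gray_f, 0.0, 1.0) * 255.0).astype(np.uint8)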

4. Fog Removal Algorithm (Genetic Algorithm)

The field of genetic algorithms is very wide, and it is not possible to cover everything here, but this section should give an idea of what genetic algorithms are and what they can be useful for, without sophisticated mathematical theory. Genetic algorithms are a part of evolutionary computing, a rapidly growing area of artificial intelligence, and are inspired by Darwin's theory of evolution: simply put, the solution to a problem solved by a genetic algorithm is evolved.

A. Initialization

Initially, many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundred or several thousand possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Some selection methods rate the fitness of every solution and preferentially select the best ones; other methods rate only a random sample of the population, because rating every solution may be very time-consuming.
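A minimal initialization step, under the assumption that each candidate solution is encoded as a fixed-length bit string, might look as follows; the population size and chromosome length are arbitrary example values.

    import numpy as np

    rng = np.random.default_rng(0)

    def init_population(pop_size=200, n_bits=64):
        """Random bit-string population spread uniformly over the search space."""
        return rng.integers(0, 2, size=(pop_size, n_bits), dtype=np.uint8)

    population = init_population()
    print(population.shape)    # (200, 64)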

B. Reproduction

The next step is to generate a second-generation population of solutions from those selected, through the genetic operators: crossover (also called recombination) and/or mutation. For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research suggests that using more than two "parents" can produce higher-quality chromosomes. These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally, the average fitness of the population will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions, for the reasons already mentioned above. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
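For bit-string chromosomes, the two main operators can be sketched as single-point crossover and bit-flip mutation; the particular operator choices and the mutation rate below are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    def crossover(parent_a, parent_b):
        """Single-point crossover: the child takes a prefix of one parent and a suffix of the other."""
        point = rng.integers(1, parent_a.size)          # cut point chosen so both parents contribute
        return np.concatenate([parent_a[:point], parent_b[point:]])

    def mutate(child, rate=0.01):
        """Bit-flip mutation: each gene flips independently with probability `rate`."""
        flips = rng.random(child.size) < rate
        return np.where(flips, 1 - child, child)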


C. Termination

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:

• A solution is found that satisfies minimum criteria
• A fixed number of generations is reached
• The allocated budget (computation time/money) is reached
• The highest-ranking solution's fitness is reaching, or has reached, a plateau such that successive iterations no longer produce better results
• Manual inspection
• Combinations of the above

Simple generational genetic algorithm pseudo code:

1. Choose the initial population of individuals.
2. Evaluate the fitness of each individual in that population.
3. Repeat on this generation until termination (time limit, sufficient fitness achieved, etc.):
   a. Select the best-fit individuals for reproduction.
   b. Breed new individuals through crossover and mutation operations to give birth to offspring.
   c. Evaluate the individual fitness of the new individuals.
   d. Replace the least-fit individuals of the population with the new individuals.
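The pseudo code can be realized, for example, as the compact Python sketch below. The toy "one-max" fitness function, tournament selection, and all numeric parameters are illustrative assumptions only; in the fog removal stage they would be replaced by the image-based fitness driving the intensity-level reduction.

    import numpy as np

    rng = np.random.default_rng(42)

    def fitness(chrom):
        return chrom.sum()                          # toy "one-max" fitness: count of 1 bits

    def tournament(pop, scores, k=3):
        idx = rng.integers(0, len(pop), size=k)     # pick k random candidates
        return pop[idx[np.argmax(scores[idx])]]     # keep the fittest of them

    def evolve(pop_size=100, n_bits=64, generations=200, p_mut=0.01):
        pop = rng.integers(0, 2, size=(pop_size, n_bits), dtype=np.uint8)   # step 1: initial population
        for _ in range(generations):                                        # step 3: generational loop
            scores = np.array([fitness(c) for c in pop])                    # step 2/c: evaluate fitness
            if scores.max() == n_bits:                                      # termination: criterion satisfied
                break
            children = []
            for _ in range(pop_size):                                       # steps a/b: select and breed
                a, b = tournament(pop, scores), tournament(pop, scores)
                point = rng.integers(1, n_bits)
                child = np.concatenate([a[:point], b[point:]])              # single-point crossover
                flips = rng.random(n_bits) < p_mut
                children.append(np.where(flips, 1 - child, child))          # bit-flip mutation
            pop = np.array(children, dtype=np.uint8)                        # step d: replace the population
        scores = np.array([fitness(c) for c in pop])
        return pop[np.argmax(scores)], scores.max()

    best, best_score = evolve()
    print(best_score)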
The Building Block Hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:

1. A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e., low-order, low defining-length schemata with above-average fitness.
2. A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Problem Domains

Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.

As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, because mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima in which a traditional hill-climbing algorithm might get stuck. Observe that commonly used crossover operators cannot change a uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Examples of problems solved by genetic algorithms include mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, and walking methods for computer figures. Many of these solutions have been highly effective, unlike anything a human engineer would have produced, and inscrutable as to how they were arrived at.

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating-point representations. The floating-point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is really a misnomer, because it does not really represent the building block theory that was proposed by Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results. The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains. When bit-string representations of integers are used, Gray coding is often employed: small changes in the integer can then be readily effected through mutations or crossovers.
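Gray coding, mentioned above, guarantees that adjacent integers differ in exactly one bit, so a single mutation can step the search to a neighbouring parameter value. A standard conversion pair is sketched below; the helper names are ours.

    def int_to_gray(n: int) -> int:
        """Binary-reflected Gray code of a non-negative integer."""
        return n ^ (n >> 1)

    def gray_to_int(g: int) -> int:
        """Inverse conversion: XOR the progressively shifted code back down."""
        n = g
        shift = 1
        while (g >> shift) > 0:
            n ^= g >> shift
            shift += 1
        return n

    # Example: 7 and 8 are adjacent integers; their Gray codes 0b0100 and 0b1100
    # differ in a single bit, so one bit flip moves between them.
    assert int_to_gray(7) == 0b0100 and int_to_gray(8) == 0b1100
    assert gray_to_int(int_to_gray(8)) == 8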


Fig. 9. Input images.

Fig. 10. Fog removal algorithm outputs (genetic algorithm).

Fig. 11. Histograms for input and output images.


V. CONCLUSION
In this paper, we proposed a method that classifies fog
density according to a visibility feature of a preceding
vehicle and the distance to the vehicle. We obtained
promising results through an experiment using data
collected from an in-vehicle camera while driving the
vehicle. From the results, we confirmed that the
proposed method could make judgments that comply
with human perception.
In the future, we will consider an improved visibility feature that does not vary depending on the type or colour of the preceding vehicle. In addition, we will consider the situation in which there is no preceding vehicle at all.

VI. REFERENCES
1. N. Hautière, J.-P. Tarel, and D. Aubert, "Mitigation of visibility loss for advanced camera-based driver assistance," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, June 2010.
2. R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2008.
3. N. Hautière, J.-P. Tarel, and D. Aubert, "Towards fog-free in-vehicle vision systems through contrast restoration," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2007, pp. 1–8.
4. N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, "Automatic fog detection and estimation of visibility distance through use of an onboard camera," Machine Vision and Applications, vol. 17, no. 1, pp. 8–20, April 2006.
5. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, June 2003.