
Bahir Dar University

Bahir Dar Institute of Technology


Faculty of Electrical and Computer Engineering
TRAFFIC CONTROL USING IMAGE PROCESSING
By
1. Yohannes Eshetie-------0601870
2. Yonatan Ngusu----------0601883
3. Yosef Assefa------------0601887
4. Zelalem Abebe----------0601901

Advisor
Mr. Esubalew A.

A Project Submitted to the Faculty of Electrical and Computer Engineering of Bahir Dar
University in Partial Fulfillment of the Requirements for the Degree of Bachelor of Science in
Electrical Engineering [Communication and Electronics Engineering]
February, 2018
Bahir Dar, Ethiopia
Declaration
We, the undersigned, declare that this Project is our original work, that it has not been presented
for a degree in this or any other university, and that all sources of material used for the Project
have been fully acknowledged.
Name Signature
1. Yohannes Eshetie-------------------------------------------------
2. Yonatan Ngusu----------------------------------------------------
3. Yosef Assefa-------------------------------------------------------
4. Zelalem Abebe----------------------------------------------------

Date of Submission: __________


This Project has been submitted for examination with my approval as a university advisor.
_____________________ _________
Project Advisor Signature

Acknowledgment
First of all, we are grateful to the Almighty God for enabling us to complete this semester project
work.
We wish, also, to express our sincere gratitude to Mr. Esubalew A. for his expert, sincere and
valuable guidance and encouragement. We are thankful for his aspiring guidance, invaluably
constructive criticism and friendly advice during the project work. We are sincerely grateful to
them for sharing their truthful and illuminating views on a number of issues related to the project.
Finally, we take this opportunity to sincerely thank all the faculty members of Electrical and
Computer Engineering for their help and encouragement in our educational endeavors.

Table of Contents
Declaration........................................................................................................................................ i
Acknowledgment ............................................................................................................................. ii
List of Figures .................................................................................................................................. v
List of Acronyms ............................................................................................................................ vi
Abstract .......................................................................................................................................... vii
CHAPTER ONE .............................................................................................................................. 1
1. INTRODUCTION ....................................................................................................................... 1
1.1 Background Information ....................................................................................................... 2
1.2 Statement of Problem ............................................................................................................ 3
1.3 Objective of the Project ......................................................................................................... 4
1.4 Methodology.......................................................................................................................... 4
1.4.1 Procedure ........................................................................................................................ 8
1.5 Major Assumption Made ....................................................................................................... 9
1.6 Contribution of the Project .................................................................................................... 9
1.7 Scope of the Project ............................................................................................................... 9
1.8 Organization of the Project .................................................................................................... 9
CHAPTER TWO ........................................................................................................................... 10
2. LITERATURE REVIEW .......................................................................................................... 10
CHAPTER THREE ....................................................................................................................... 12
3. SYSTEM COMPONENT AND OPERATION ........................................................................ 12
3.1 Introduction to MATLAB ................................................................................................... 12
3.2 Graphical User Interface (GUI): .......................................................................................... 13
CHAPTER FOUR ......................................................................................................................... 16
4. SYSTEM DESIGN AND ANALYSIS ..................................................................................... 16
CHAPTER FIVE ........................................................................................................................... 18
5. RESULTS AND DISCUSSION ............................................................................................... 18
5.1 Software Simulation Result and Discussion........................................................................ 18
CHAPTER SIX ............................................................................................................................. 22
6. CONCLUSION AND RECOMMENDATION FOR FUTURE WORK .................................. 22

6.1 Conclusion ........................................................................................................................... 22
6.2 Recommendations for Future work ..................................................................................... 22
Reference ....................................................................................................................................... 23
Appendix ....................................................................................................................................... 24

List of Figures
Figure 3.1 GUI ............................................................................................................... 13
Figure 4.1 Block Diagram ............................................................................................. 16
Figure 5.1 Matching for 30 to 46% ............................................................................... 18
Figure 5.2 Matching for 46 to 50% ............................................................................... 19
Figure 5.3 Matching for 50 to 80% ............................................................................... 20
Figure 5.4 Matching for 80 to 100% ............................................................................. 21

List of Acronyms
1D--------------------------------------One Dimension

3D--------------------------------------Three Dimension

EISPACK----------------------------- Eigen System Package

GUI-----------------------------------Graphical User Interface

GUIDE ------------------------------Graphical User Interface Development Environment

ITS------------------------------------ Intelligent Transportation Systems

LINPACK----------------------------Linear System Package

MATLAB---------------------------- Matrix Laboratory

RGB-----------------------------------Red Green Blue

Abstract
At present, owing to population growth, there are a great many vehicles in our country, and
traffic congestion is becoming more serious every day. The techniques in use today have proven
unsatisfactory, so these problems will eventually become disastrous unless new ways of dealing
with traffic congestion are considered. In this paper we examine the possibility of using image
processing techniques for a traffic control system.

The first chapter introduces traffic light systems and image processing. It gives background on
the existing traffic lighting system and the role image processing can play in it, and states the
methodology and scope of the project.
The second chapter presents the literature review: what has been done in the area so far, its major
drawbacks, and the contribution of this project.
The third chapter covers the system components and operation, and the fourth chapter covers
system design and analysis. The fifth chapter presents and discusses the results of the project,
and chapter six draws conclusions and gives recommendations for future work.

CHAPTER ONE

1. INTRODUCTION
Automatic traffic monitoring and surveillance are important for road usage and management.
Traffic parameter estimation has been an active research area for the development of intelligent
transportation systems (ITS). For ITS applications, traffic information needs to be collected and
distributed [1].
In modern life, time is the most precious resource, and time wasted in travel is a major problem,
with traffic congestion as its main cause. It is said that the high volume of vehicles, the scanty
infrastructure, and the irrational distribution of development are the main reasons for increasing
traffic jams; the leading cause is the high number of vehicles, itself driven by population growth
and economic development [2].

As the population of modern cities increases day by day, vehicular travel increases with it,
leading to congestion. Traffic congestion has been causing critical problems and challenges in
major, densely populated cities. Increased traffic leads to longer waiting times and wasted fuel;
because of these congestion problems, people lose time, miss opportunities, and get frustrated.
Traffic load depends strongly on parameters such as time, day, season, and weather, and on
unpredictable situations such as accidents, special events, or construction activities. If these
parameters are not taken into account, the traffic control system will create delays. New roads
are constructed to relieve congestion [3], but the disadvantage of building new roads next to
existing facilities is that it makes the surroundings more congested, so there is a need to change
the control system rather than repeatedly build new infrastructure. A traffic control system that
continuously senses the actual traffic load and adjusts the timing of the traffic lights accordingly
is called an Intelligent Traffic Control System. Building an Intelligent Traffic Control System
reduces congestion, reduces operational costs, provides alternate routes to travelers, and
increases the capacity of the infrastructure. One such traffic control system can be built using
image processing techniques such as edge detection to find the traffic density; the traffic signal
lights can then be regulated based on that density.

Digital image processing is the use of computer algorithms to perform image processing on
digital images. It is a technology widely used for operations such as feature extraction, pattern
recognition, segmentation, and image morphology. Edge detection is a well-developed field in
its own right within image processing. Edges are important characteristics of an image: they
characterize boundaries and are therefore of fundamental importance in image processing. Edges
typically occur on the boundary between two different regions in an image. Edge detection
allows the user to observe those features of an image where there is a more or less abrupt change
in gray level or texture, indicating the end of one region in the image and the beginning of
another. It finds practical applications in medical imaging, computer-guided surgery and
diagnosis, locating objects in satellite images, face recognition, fingerprint recognition,
automatic traffic control systems, the study of anatomical structure, and more. Many edge
detection techniques have been developed for extracting edges from digital images [3].

1.1 Background Information


Image processing is a technique to enhance raw images received from cameras and sensors
placed on space probes, aircraft, and satellites, or pictures taken in normal day-to-day life, for
various applications. An image is a rectangular graphical object. Image processing involves
issues related to image representation, compression techniques, and the various complex
operations that can be carried out on image data. Operations that fall under image processing
include image enhancement operations such as sharpening, blurring, brightening, and edge
enhancement. Image processing is any form of signal processing for which the input is an image,
such as a photograph or a frame of video; the output can be either another image or a set of
characteristics or parameters related to the image. Most image processing techniques treat the
image as a two-dimensional signal and apply standard signal processing techniques to it. Image
processing usually refers to digital image processing, but optical and analog image processing
are also possible.

Standard Traffic Control Systems:


Manual Controlling
As the name suggests, manual control requires manpower to control the traffic. Depending on
the country or state, traffic police are allotted to a required area or city to control traffic. The
traffic police carry a sign board, sign light, and whistle, and are instructed to wear specific
uniforms, in order to control the traffic.

Automatic Controlling
An automatic traffic light is controlled by timers and electrical sensors. With timers, a constant
numerical value is loaded into the timer for each phase, and the lights switch ON and OFF
automatically as the timer value changes. With electrical sensors, the sensor captures the
presence of vehicles at each phase, and the lights switch ON and OFF depending on the sensor
signal.

Image Processing in Traffic Light Control


We propose a system for controlling the traffic light by image processing. The vehicles are
detected by the system through images instead of by electronic sensors embedded in the
pavement. A camera placed alongside the traffic light captures image sequences. Image
processing is a better technique to control the state changes of the traffic light: it can decrease
traffic congestion and avoid the time wasted by a green light on an empty road. It is also more
reliable in estimating vehicle presence because it uses actual traffic images, so it functions much
better than systems that rely on detecting the vehicles' metal content.

1.2 Statement of Problem


In Ethiopia, all of the traffic light systems in use are of the traditional kind. These systems have
many limitations; in particular, the timing is not based on the number of vehicles, which leads to
the following problems:
A. Heavy Traffic Jam
With an increasing number of vehicles on the road, heavy traffic congestion occurs substantially
in major cities, usually at the main junctions in the morning before office hours and in the
evening after office hours. This increases the amount of time people waste on the road.
B. Green Light for An Empty Road
There are times when there are no vehicles at a junction but the green light is on for that junction,
while at another junction a queue of vehicles waits at a red light.
C. No Traffic, But the Pedestrians Still Need to Wait

At certain junctions, pedestrians sometimes have to wait even when there is no traffic, because
the traffic light remains green for its preset time period and road users must wait until the light
changes.

1.3 Objective of the Project


1.3.1 General Objective
• To measure the traffic density using Image Processing techniques.
1.3.2 Specific Objective
• To show that Image Processing can be a useful technique for traffic controlling.
• To save time and reduce wastage of fuel.
• Avoid unnecessary wastage of green lights on empty roads.
• Decrease the number of human resource that is used to control traffic.

1.4 Methodology
Following are the steps involved
• Image acquisition
• Image Pre-Processing
➢ Image Resizing/Scaling
➢ RGB to GRAY Conversion
• Image enhancement
• Image matching using edge detection
Image Acquisition:
Generally, an image is a two-dimensional function f(x, y), where x and y are plane coordinates.
The amplitude of the image at any point f is called the intensity, or gray level, of the image at
that point. To process an image on a digital computer, the x and y values and the amplitude must
be converted to finite discrete values, forming a digital image; that is, the analog image must be
converted to a digital image. Each digital image is composed of a finite number of elements,
each of which is called a pixel. In this project, the input images are photographs of the road
captured by a camera.

Image Pre-Processing:
Image Resizing/Scaling:
Image scaling occurs in all digital photos at some stage, whether in Bayer demosaicing or in
photo enlargement; it happens any time an image is resized from one pixel grid to another. Image
resizing is necessary when the total number of pixels must be increased or decreased, and even
for the same target size the result can vary significantly depending on the algorithm. Images are
resized for a number of reasons, one of which is particularly important in our project: every
camera has its own resolution, so a system designed for one camera's specifications will not run
correctly with another camera unless the specifications are similar. It is therefore necessary to
make the resolution constant for the application, and hence to perform image resizing.
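The report does not specify which resizing algorithm is used (the project itself is in MATLAB, where the built-in `imresize` would handle this). As an illustration only, the simplest approach, nearest-neighbour sampling, can be sketched in NumPy:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Resize a 2-D image to (new_h, new_w) by nearest-neighbour sampling:
    each output pixel is mapped back to the closest source pixel."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return img[rows[:, None], cols]
```

This is a sketch, not the project's code; in practice bilinear or bicubic interpolation gives smoother results, but the point is only that every captured frame is brought to one fixed resolution before further processing.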

RGB to GRAY Conversion:


Humans perceive colour through wavelength-sensitive sensory cells called cones. There are three
varieties of cones, each with a different sensitivity to electromagnetic radiation (light) of
different wavelengths: one is mainly sensitive to green light, one to red light, and one to blue
light. By emitting a suitable combination of these three colours (red, green, and blue), and hence
stimulating the three types of cones at will, almost any detectable colour can be generated. This
is why colour images are often stored as three separate image matrices: one storing the amount
of red (R) in each pixel, one the amount of green (G), and one the amount of blue (B). Such
colour images are said to be stored in RGB format. In grayscale images, by contrast, the colours
are not differentiated; the same amount is emitted in every channel, and only the total amount of
emitted light varies per pixel: little light gives dark pixels, and much light gives bright pixels.
When converting an RGB image to grayscale, the RGB values of each pixel are combined into a
single output value reflecting the brightness of that pixel [3].
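Conceptually, the conversion is a weighted sum of the three channels. A minimal NumPy sketch using the standard ITU-R BT.601 luminance weights (the same weights MATLAB's `rgb2gray` applies) might look like:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image to an H x W grayscale image using
    the luminance weights 0.2989*R + 0.5870*G + 0.1140*B."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb.astype(float) @ weights
```

Green is weighted most heavily because the eye is most sensitive to it, which matches the channel-brightness observation made later in this chapter.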
Image Enhancement
Image enhancement is the process of adjusting digital images so that the results are more suitable
for display or further analysis. For example, noise can be eliminated, making it easier to identify
the key characteristics.
In poor-contrast images, adjacent characters merge during binarization, so the spread of the
characters must be reduced before applying a threshold to the image. Hence we introduce the
power-law transformation, which increases the contrast and helps in better segmentation. The
basic form of the power-law transformation is

s = c·r^γ,

where r and s are the input and output intensities, respectively, and c and γ are positive constants.
A variety of devices used for image capture, printing, and display respond according to a power
law. By convention, the exponent in the power-law equation is referred to as gamma, and the
process used to correct these power-law response phenomena is called gamma correction.
Gamma correction is important whenever displaying an image accurately on a computer screen
is of concern. In our experimentation, γ is varied in the range 1 to 5. If c is not equal to 1, the
dynamic range of the pixel values is significantly affected by the scaling; thus, to avoid another
rescaling stage after the power-law transformation, we fix c = 1. With γ = 1, passing the
power-law-transformed image through binarization produces no change compared to simple
binarization. With γ > 1, the histogram changes, since more samples fall into the bins towards a
gray value of zero [3].

Edge Detection and Image Matching:


Edge Detection
Edge detection is the name for a set of mathematical methods that aim to identify points in a
digital image at which the image brightness changes sharply or, more technically, has
discontinuities. The points at which image brightness changes sharply are typically organized
into a set of curved line segments termed edges.
The analogous problem of detecting discontinuities in a 1D signal is known as step detection,
and the problem of finding signal discontinuities over time is known as change detection. Edge
detection is a basic tool in image processing, machine vision, and computer vision, particularly
in the areas of feature detection and feature extraction.
Edge detection techniques: the different colour channels have different brightness values. The
green channel is brighter than the red and blue channels, while the blue channel tends to be
blurred and the red channel tends to carry the most noise.
The following is a list of common edge-detection methods:
• Sobel edge detection
• Prewitt edge detection
• Roberts edge detection
• Zero-cross threshold edge detection
• Canny edge detection
In our project we use the Canny edge detection technique because of its various advantages over
the other methods.
Canny Edge Detection: The Canny edge detector is one of the most commonly used image
processing tools, detecting edges in a very robust manner. It is a multi-step process. The Canny
edge detection technique is based on three basic objectives:
I. Low error rate:
All edges should be found, and there should be no spurious responses.
II. Edge points should be well localized:
The edges located must be as close as possible to the true edges; that is, the distance
between a point marked as an edge by the detector and the center of the true edge should
be minimal.
III. Single edge point response:
The detector should return only one point for each true edge point; that is, the number of
local maxima around the true edge should be minimal, and the detector should not
identify multiple edge pixels where only a single edge point exists.

In summary, the Canny edge detection algorithm consists of the following basic steps:
i. Smooth the input image with a Gaussian filter.
ii. Compute the gradient magnitude and angle images.
iii. Apply nonmaxima suppression to the gradient magnitude image.
iv. Use double thresholding and connectivity analysis to detect and link edges.
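In MATLAB the whole pipeline is available as the built-in `edge(I, 'canny')`. For exposition only, the gradient step (ii) can be sketched with Sobel kernels and a single threshold; this is an assumption for illustration, not the project's code, and it deliberately omits the smoothing, nonmaxima suppression, and hysteresis steps:

```python
import numpy as np

def conv3x3(img, k):
    """Valid-mode 2-D correlation of img with a 3x3 kernel k."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def gradient_edges(img, threshold=100.0):
    """Sobel gradient magnitude followed by a single global threshold.
    Real Canny adds Gaussian smoothing, nonmaxima suppression, and
    double (hysteresis) thresholding on top of this gradient step."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx = conv3x3(img.astype(float), kx)  # horizontal intensity change
    gy = conv3x3(img.astype(float), ky)  # vertical intensity change
    return np.hypot(gx, gy) > threshold  # binary edge map
```

On a road image, the binary map this produces is what gets compared against the empty-road reference in the matching step.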
Image Matching:-
Recognition techniques based on matching represent each class by a prototype pattern vector; an
unknown pattern is assigned to the class whose prototype is closest in terms of a predefined
metric. The simplest approach is the minimum-distance classifier, which, as its name implies,
computes the (Euclidean) distance between the unknown pattern and each of the prototype
vectors, and chooses the smallest distance to make the decision. Another approach is based on
correlation, which can be formulated directly in terms of images and is quite intuitive.
We have used a different approach for image matching: comparing a reference image with the
real-time image pixel by pixel. Although pixel-based matching has some disadvantages, it is one
of the best techniques for the decision-making algorithm used in this project. The reference
image is stored as a matrix in memory, and the real-time image is converted into a matrix of the
same form. For two images to be the same, their pixel values in the matrices must be the same;
this is the simple fact used in pixel matching. Any mismatch in pixel value increments a counter
used to count the number of pixel mismatches. Finally, the percentage of matching is expressed
as

%match = (number of pixels matched successfully / total number of pixels) × 100.
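The counting scheme above can be sketched in a few lines (the helper name is my own; the report gives only the formula):

```python
import numpy as np

def match_percentage(reference, current):
    """Percentage of pixel positions whose values are identical in the
    reference (empty-road) image and the current road image."""
    if reference.shape != current.shape:
        raise ValueError("images must have the same dimensions")
    matched = np.count_nonzero(reference == current)
    return 100.0 * matched / reference.size
```

An exact-equality test would be fragile on raw photographs under changing lighting; it is workable here because the comparison is done on the binary edge maps produced in the previous step, not on the raw images.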

1.4.1 Procedure
Phase1:
• Initially, image acquisition is done with the help of a web camera
• The first image of the road is captured when there is no traffic on the road
• This empty road's image is saved as the reference image at a particular location specified in
the program
• Image resizing and RGB to gray conversion are done on the reference image
• Gamma correction is then applied to the reference gray image to achieve image enhancement
• Edge detection of this reference image is then done with the help of the Canny edge
detection operator
Phase2:
• Images of the road are captured
• RGB to gray conversion is done on the sequence of captured images
• Gamma correction is then applied to each captured gray image to achieve image
enhancement
• Edge detection of these real-time images of the road is then done with the help of the Canny
edge detection operator
Phase3:

• After the edge detection procedure, the reference and real-time images are matched; the
traffic lights can then be controlled, and their timing set, based on the percentage of matching
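The report leaves the exact timings to Chapter 5's results; as a sketch, the match percentage can be mapped to a green-light duration using the bands that appear in the result figures (30–46%, 46–50%, 50–80%, and 80–100%). The durations below are placeholder values of my own, not taken from the report:

```python
def green_time(match_pct):
    """Map the percentage of matching against the empty-road reference
    to a green-light duration in seconds. A high match means the road is
    nearly empty, so it gets a short green phase. Band boundaries follow
    the result figures; the durations themselves are placeholders."""
    if match_pct >= 80:
        return 10   # road essentially empty
    elif match_pct >= 50:
        return 30   # light traffic
    elif match_pct >= 46:
        return 60   # moderate traffic
    else:
        return 90   # heavy traffic (30-46% match)
```

The inversion is the key design point: the reference image is the empty road, so a low match percentage means many pixels differ, i.e. many vehicles, and warrants a longer green phase.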

1.5 Major Assumptions Made


• This project was made under the assumption that the roads are used only by vehicles, not by
pedestrians or animals.
• In line with the above, the project is best suited to roads that do not accumulate too much
rubbish and dirt.

1.6 Contribution of the Project


• Minimize traffic congestion in large cities
• Save time for road users
• Provide valuable assistance in facilitating the transportation system
• Support the country's economy

1.7 Scope of the Project


• This project is intended for highways with high traffic congestion
• It is also applicable to urban areas
1.8 Organization of the Project
To start our project, we asked our advisor, Mr. Esubalew, about image processing techniques,
and downloaded YouTube videos and various documents from the internet.
The images used in the project were taken by ourselves using a mobile phone camera.

CHAPTER TWO

2. LITERATURE REVIEW
Pezhman Niksaz et al. [3] propose a system that estimates the traffic volume on highways using
image processing; as a result, a message is shown reporting the number of cars on the highway.
The project was implemented in MATLAB and aims to prevent heavy traffic on highways.
Implementing it involves the following steps:
1) Image acquisition
2) RGB to grayscale transformation
3) Image enhancement
4) Morphological operations
First, film of the highway is captured by a camera installed there. The film arrives as consecutive
frames, and each frame is compared with the first frame. The number of cars on the highway is
then determined, and if it exceeds a threshold, a message is shown reporting the traffic status.
From this message the need to reduce the traffic volume can be predicted. Experiments show
that the algorithm works properly.
In this particular research paper, a video camera is used: the camera shoots video, which is then
converted to a sequence of images by taking snapshots. This is more difficult than necessary,
since our project does not require video coverage; it is better to use a simple still camera.
Chandrasekhar M., Saikrishna C., and Phaneendra Kumar [4] propose the implementation of an
image processing algorithm for real-time traffic light control that controls the traffic light
efficiently. A web camera placed at each stage of the traffic light captures still images of the
road where the traffic is to be controlled. The captured images are then successively matched,
using image matching, against a reference image of the empty road, and the traffic is governed
according to the percentage of matching.
The key point of the paper is the technique used for image comparison: the authors use an image
matching technique based on the SIFT algorithm, which is very effective and quite simple.
Vikramaditya Dangi, Amol Parab, Kshitij Pawar, and S. S. Rathod [5] propose a way to
implement an intelligent traffic controller using real-time image processing. The image
sequences from a camera are analyzed using various edge detection and object counting methods
to obtain the most efficient technique. The number of vehicles at the intersection is then
evaluated and the traffic is managed efficiently. The paper also proposes a real-time emergency
vehicle detection system: if an emergency vehicle is detected, its lane is given priority over all
the others.
The key point of this paper is the technique used for edge detection: the authors compare various
edge detection techniques and conclude that Canny edge detection is the best method.
Pallavi Choudekar et al. [6] propose a system for controlling the traffic light by image
processing. The system detects vehicles through images instead of using electronic sensors
embedded in the pavement. A camera installed alongside the traffic light captures image
sequences, which are then analyzed using digital image processing for vehicle detection; the
traffic light can be controlled according to the traffic conditions on the road.

CHAPTER THREE

3. SYSTEM COMPONENT AND OPERATION


The proposed algorithm, whose block diagram is discussed above, is implemented in MATLAB
R2013a, so it is necessary to gain some insight into MATLAB.

3.1 Introduction to MATLAB


The name MATLAB stands for MATrix LABoratory. MATLAB was written originally to provide
easy access to matrix software developed by the LINPACK (linear system package) and EISPACK
(Eigen system package) projects.
MATLAB is a high-performance language for technical computing. It integrates computation,
visualization, and programming environment. Furthermore, MATLAB is a modern programming
language environment: it has sophisticated data structures, contains built-in editing and debugging
tools, and supports object-oriented programming. These factors make MATLAB an excellent tool
for teaching and research.
MATLAB has many advantages over conventional computer languages for solving technical
problems. It is an interactive system whose basic data element is an array that does not require
dimensioning. The software package has been commercially available since 1984 and is now
considered a standard tool at most universities and in industry worldwide.
It has powerful built-in routines that enable a very wide variety of computations, and
easy-to-use graphics commands that make the visualization of results immediately available.
Specific applications are collected in packages referred to as toolboxes. There are toolboxes for
signal processing, symbolic computation, control theory, simulation, optimization, and several
other fields of applied science and engineering.
There are various tools in MATLAB that can be used for image processing, such as Simulink and
GUIs. Simulink contains various toolboxes, the image processing toolbox being one such example,
and is used for the simulation of various projects. The GUI is another important tool in MATLAB. It
can be designed either by manual programming, which is a tedious task, or by using GUIDE. GUIs
are explained in the next section.

3.2 Graphical User Interface (GUI):
A graphical user interface (GUI) is a graphical display in one or more windows containing controls,
called components, which enable a user to perform interactive tasks. The user of the GUI does not
have to create a script or type commands at the command line to accomplish the tasks. Unlike
coding programs to accomplish tasks, the user of a GUI need not understand the details of how the
tasks are performed. GUI components can include menus, toolbars, push buttons, radio buttons,
list boxes, and sliders—just to name a few. GUIs created using MATLAB tools can also perform
any type of computation, read and write data files, communicate with other GUIs, and display data
as tables or as plots. The following figure illustrates a simple GUI that can be easily built.

Figure 3.1 GUI

The GUI contains:

• An axes component
• A pop-up menu listing three data sets that correspond to the MATLAB
functions peaks, membrane, and sinc
• A static text component to label the pop-up menu
• Three buttons that provide different kinds of plots: surface, mesh, and contour

When you click a push button, the axes component displays the selected data set using the specified
type of 3-D plot.
Typically, GUIs wait for an end user to manipulate a control, and then respond to each user action
in turn. Each control, and the GUI itself, has one or more call-backs, named for the fact that they
“call back” to MATLAB to ask it to do things. A particular user action, such as pressing a screen
button or passing the cursor over a component, triggers the execution of the corresponding
call-back. The GUI then responds to these events. You, as the GUI creator, write call-backs that
define what the components do to handle events. This kind of programming is often referred to as
event-driven programming. In event-driven programming, call-back execution is asynchronous;
that is, events external to the software trigger call-back execution. In the case of MATLAB GUIs,
most events are user interactions with the GUI, but the GUI can respond to other kinds of events
as well, for example, the creation of a file or the connection of a device to the computer.
You can code call-backs in two distinct ways:

• As MATLAB language functions stored in files
• As strings containing MATLAB expressions or commands (such as 'c = sqrt(a*a + b*b);'
or 'print')

Using functions stored in code files as call-backs is preferable to using strings, because functions
have access to arguments and are more powerful and flexible. You cannot use MATLAB scripts
(sequences of statements stored in code files that do not define functions) as call-backs. Although
you can provide a call-back with certain data and make it do anything you want, you cannot
control when call-backs execute. That is, when your GUI is being used, you have no control over
the sequence of events that trigger particular call-backs or what other call-backs might still be
running at those times. This distinguishes event-driven programming from other types of control
flow, for example, processing sequential data files.
A MATLAB GUI is a figure window to which you add user-operated components. You can select,
size, and position these components as you like. Using call-backs, you can make the components
do what you want when the user clicks or manipulates them with keystrokes.
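The event-driven call-back pattern described above is not specific to MATLAB. It can be sketched in a few lines of plain Python as a hypothetical minimal dispatcher (all class and event names here are illustrative only; the project's actual GUI is built with MATLAB GUIDE, as shown in the appendix):

```python
# Minimal event-driven sketch: components register call-backs with a
# dispatcher, and the dispatcher invokes them when events arrive.
# Names are illustrative, not part of any real GUI toolkit.

class Dispatcher:
    def __init__(self):
        self.callbacks = {}  # event name -> list of handler functions

    def register(self, event, handler):
        self.callbacks.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        # In a real GUI this is asynchronous: events come from the user,
        # and the program cannot control the order in which they arrive.
        for handler in self.callbacks.get(event, []):
            handler(*args)

log = []
gui = Dispatcher()
gui.register("button_press", lambda name: log.append("plot " + name))
gui.register("file_created", lambda name: log.append("loaded " + name))

gui.fire("button_press", "surface")    # the user clicks a push button
gui.fire("file_created", "test1.jpg")  # a non-user event, as in the text
print(log)  # -> ['plot surface', 'loaded test1.jpg']
```

The dispatcher only decides *which* handlers run for an event, not *when* the events occur, which is the essence of the asynchrony noted above.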

You can build MATLAB GUIs in two ways:


• Use GUIDE (GUI Development Environment), an interactive GUI construction kit.
This approach starts with a figure that you populate with components from within a graphic
layout editor. GUIDE creates an associated code file containing call-backs for the GUI and
its components. GUIDE saves both the figure (as a FIG-file) and the code file. Opening
either one also opens the other to run the GUI.
• Create code files that generate GUIs as functions or scripts (programmatic GUI
construction). Using this approach, you create a code file that defines all component
properties and behaviors. When a user executes the file, it creates a figure, populates it with
components, and handles user interactions. Typically, the figure is not saved between
sessions because the code in the file creates a new one each time it runs.
The code files of the two approaches look different. Programmatic GUI files are generally longer,
because they explicitly define every property of the figure and its controls, as well as the call-backs.
GUIDE GUIs define most of the properties within the figure itself, storing the definitions in the
FIG-file rather than in the code file; the code file contains the call-backs and other functions that
initialize the GUI when it opens.
A GUI created with GUIDE can later be modified programmatically. However, you cannot
create a GUI programmatically and then modify it with GUIDE. The GUI-building technique you
choose depends on your experience, your preferences, and the kind of application you need the
GUI to operate. This table outlines some possibilities.

CHAPTER FOUR

4. SYSTEM DESIGN AND ANALYSIS


Block Diagram

Figure 4.1 Block Diagram

This chapter discusses the attributes of the project's block diagram. The algorithm behind the
block diagram consists of the following steps:
1. We have a reference image and the image to be matched is continuously captured using a
camera that is installed at the junction.

2. The images are pre-processed in two steps as follows

a. Images are rescaled to 300x300 pixels.

b. Then the above rescaled images are converted from RGB to gray.

3. Edge detection of pre-processed images is carried out using Canny edge detection
technique.

4. The output images of previous step are matched using pixel to pixel matching technique.

5. After matching the timing allocation is done depending on the percentage of matching as

a. If the matching is between 30 to 46% - green light is on for 60 seconds.

b. If the matching is between 46 to 50% - green light is on for 30 seconds.

c. If the matching is between 50 to 80% - green light is on for 10 seconds.

d. If the matching is between 80 to 100% - red light is on for 70 seconds.

The program written in MATLAB to implement the above algorithm is given in the appendix;
the output of each step and the final results of the program are given in the next sections.
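To make steps 4 and 5 concrete, the pixel-to-pixel matching and the timing allocation can be sketched in pure Python on two small binary edge maps (4x4 here instead of 300x300 for readability). The edge maps are assumed to have already been produced by the Canny edge detection of step 3, and the function names below are our own illustrative choices, not taken from the MATLAB code in the appendix:

```python
# Pixel-to-pixel matching of two binary edge maps (1 = edge pixel).
# ref_edges plays the role of the empty-road reference image's edges;
# cur_edges is the edge map of the currently captured frame.

def match_percentage(ref_edges, cur_edges):
    # Count edge pixels in the reference, and pixels that are edges in
    # BOTH images at the same position, mirroring the appendix code.
    whites = sum(px for row in ref_edges for px in row)
    matched = sum(r & c
                  for ref_row, cur_row in zip(ref_edges, cur_edges)
                  for r, c in zip(ref_row, cur_row))
    return 100.0 * matched / whites

def signal_timing(pct):
    # Timing allocation from step 5 of the algorithm.
    if 30 <= pct < 46:
        return "green for 60 sec"
    elif 46 <= pct < 50:
        return "green for 30 sec"
    elif 50 <= pct < 80:
        return "green for 10 sec"
    elif 80 <= pct <= 100:
        return "red for 70 sec"
    return "no rule"  # below 30% is not covered by the report's table

ref = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0],
       [1, 0, 1, 1]]   # 8 edge pixels in the reference
cur = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]]   # 3 of them coincide with the reference

pct = match_percentage(ref, cur)
print(pct, signal_timing(pct))  # 37.5 -> green for 60 sec
```

A high match with the empty-road reference means few vehicles (long green is unnecessary), which is why the green duration shrinks as the percentage rises.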

CHAPTER FIVE

5. RESULT AND DISCUSSION


5.1 Software Simulation Result and Discussion
The images were taken around the former Ghion Hotel location in Bahir Dar. The simulation results
are discussed below with figures.

A. If the matching is between 30 to 46% - green light is on for 60 seconds

Figure 5.1 Matching for 30 to 46%

B. If the matching is between 46 to 50% - green light is on for 30 seconds

Figure 5.2 Matching for 46 to 50%

C. If the matching is between 50 to 80% - green light is on for 10 seconds

Figure 5.3 Matching for 50 to 80%

D. If the matching is between 80 to 100% - red light is on for 70 seconds

Figure 5.4 Matching for 80 to 100%

CHAPTER SIX

6. CONCLUSION AND RECOMMENDATION FOR FUTURE WORK


6.1 Conclusion
The "traffic control using image processing" technique that we propose overcomes the limitations
of the earlier techniques in use for controlling traffic. Earlier automatic traffic control based on a
timer had the drawback that time is wasted when the green light stays on for an empty road; this
technique avoids that problem. Upon comparison of various edge detection algorithms, it was
inferred that the Canny edge detector is the most efficient one. The project demonstrates
that image processing is a far more efficient method of traffic control than traditional
techniques. The use of our technique removes the need for extra hardware such as sound sensors.
The improved response time for vehicles is crucial for the prevention of loss of life. A major
advantage is the variation in signal time, which handles the traffic density appropriately using image
matching. The accuracy of the time calculation with a single moving camera depends on the
camera's registration position while facing the road each time. The output of the GUI clearly
indicated the expected results.

6.2 Recommendations for Future work


The hardware implementation of this project would enable the project to be used in real-time
practical conditions. In addition, we propose a system to identify the vehicles as they pass by,
giving preference to emergency vehicles and assisting in surveillance on a large scale.

Reference
[1]. http://www.ijcse.com/docs/IJCSE11-02-01-031.pdf
[2]. http://www.irdindia.in/journal_ijaeee/pdf/vol2_iss5/19.pdf
[3]. https://s3.amazonaws.com/pptdownload/imageprocessingbasedintelligenttrafficcontrolsystemmatlabgui
[4]. Pezhman Niksaz (Science & Research Branch, Azad University of Yazd, Iran), "Automatic
Traffic Estimation Using Image Processing", 2012 International Conference on Image, Vision and
Computing.
[5]. Vikramaditya Dangi, Amol Parab, Kshitij Pawar and S. S. Rathod, "Image Processing Based
Intelligent Traffic Controller", Undergraduate Academic Research Journal (UARJ), ISSN: 2278-1129,
Volume-1, Issue-1, 2012.
[6]. Pallavi Choudekar, Sayanti Banerjee and M. K. Muju (Ajay Kumar Garg Engineering College,
Ghaziabad, UP, India), "Real Time Traffic Light Control Using Image Processing", Indian Journal
of Computer Science and Engineering (IJCSE), ISSN: 0976-5166, Vol. 2, No. 1.
[7]. Kühne, R. D.; Schäfer, R.-P.; Mikat, J.; Thiessenhusen, K.-U.; Böttger, U.; Lorkowski, S.,
"New Approaches for Traffic Management in Metropolitan Areas", Proceedings of the 10th
IFAC (International Federation of Automatic Control) Symposium on Control in Transportation
Systems, Tokyo, Japan, 2003.

Appendix
GUI for the project
function varargout = imagepro(varargin)
% IMAGEPRO MATLAB code for imagepro.fig
% IMAGEPRO, by itself, creates a new IMAGEPRO or raises the existing
% singleton*.
%
% H = IMAGEPRO returns the handle to a new IMAGEPRO or the handle to
% the existing singleton*.
%
% IMAGEPRO('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in IMAGEPRO.M with the given input arguments.
%
% IMAGEPRO('Property','Value',...) creates a new IMAGEPRO or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before imagepro_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to imagepro_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help imagepro

% Last Modified by GUIDE v2.5 03-Feb-2018 01:39:41

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @imagepro_OpeningFcn, ...
'gui_OutputFcn', @imagepro_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before imagepro is made visible.


function imagepro_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to imagepro (see VARARGIN)
global a;
a = imread('test1.JPG');
axes(handles.axes1);
imshow(a);
% Choose default command line output for imagepro
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes imagepro wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = imagepro_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
x='green for 60 sec';
y='green for 30 sec';
z='green for 10 sec';
zz='Red for 70 sec ';
global a;
handles.output = hObject;

[filename, pathname] = uigetfile({'*.*';'*.bmp';'*.jpg';'*.gif'}, 'C:\Users\Red-Jonathan\Desktop\deadly final\figures');
b = imread([pathname,filename]);
axes(handles.axes2);
imshow(b);

% -----------------------------------------------------------------
c=imresize(a,[300 300]);
d=imresize(b,[300 300]);
c=rgb2gray(c);
d=rgb2gray(d);
edg1=edge(c,'canny');
edg2=edge(d,'canny');
% figure;
% imshow(edg1)
% figure;
% imshow(edg2)
matched_points=0;
whites=0;
for i=1:1:300
for j=1:1:300
if(edg1(i,j)==1)
whites=whites+1;
end
end
end

for i=1:1:300
for j=1:1:300
if((edg1(i,j)==1)&&(edg2(i,j)==1))
matched_points = matched_points+1;
end

end
end
whites
matched_points
percentage_matched=(matched_points/(whites))*100
set(handles.text4,'string',num2str(whites));
set(handles.text6,'string',num2str(matched_points));
set(handles.text8,'string',num2str(percentage_matched));
if ((percentage_matched>=30)&&(percentage_matched<46))
set(handles.text10,'string',x);
elseif ((percentage_matched>=46)&&(percentage_matched<50))
set(handles.text10,'string',y);
elseif ((percentage_matched>=50)&&(percentage_matched<80))
set(handles.text10,'string',z);
elseif ((percentage_matched>=80)&&(percentage_matched<=100))
set(handles.text10,'string',zz);
end

% --- Executes on selection change in popupmenu1.
function popupmenu1_Callback(hObject, eventdata, handles)
% hObject handle to popupmenu1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns popupmenu1 contents as cell array


% contents{get(hObject,'Value')} returns selected item from popupmenu1

% --- Executes during object creation, after setting all properties.


function popupmenu1_CreateFcn(hObject, eventdata, handles)

% hObject handle to popupmenu1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: popupmenu controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end
