CHAPTER 1
INTRODUCTION

This project is primarily aimed at a new technique of video image processing used to solve problems associated with real-time road traffic control systems. Increasing congestion, together with the shortcomings of existing detectors, has spawned an interest in new vehicle detection technologies, but such systems have difficulties with congestion, shadows and lighting transitions. Any practical image processing application in road traffic has to be developed and validated using real-world images processed in real time. Various algorithms, mainly based on background techniques, have been developed for these purposes; however, because background-based algorithms are very sensitive to ambient lighting conditions, they have not yielded the expected results. A real-time image tracking approach using edge detection techniques was therefore developed for detecting vehicles under these troublesome conditions.

Ever since the automation of the traffic signal, traffic management has become easier and traffic flows better. Highway traffic has benefited too: traffic safety has been enhanced by speed detectors and automated red light running detection. Previously, lawbreakers often went unprosecuted for their violations and so became less wary of the consequences; automated enforcement has changed this. The improvement of equipment and the making of better traffic plans have worked to decrease the number of violations, and traffic safety has never been better. Government departments and agencies have helped to improve flow, security and systems, and schemes have been changed to accommodate the increasing number of vehicles that flow in and out of cities, localities and even rural areas.


Traffic lights involve a rather complicated automated system that relies on sensors and programs. There are basically two types. The first type of traffic light has a fixed time: the green light may be on for a minute and off for the next few minutes while the rest of the traffic lights turn green, with a fixed time allotted to every street meeting at an intersection. The second is the variable type of traffic light, which relies on underground sensors that detect the flow of traffic approaching the intersection. If traffic is heavy, the green light stays on longer than it would if traffic were light. Coupled with the traffic light are the speed detector and the red light running detector. The speed detector uses a device that registers the speed of an oncoming vehicle; the red light running detector arose because of drivers beating the stoplight. Nowadays, camera-surveillance-controlled traffic lights have replaced the above methodologies for controlling traffic. These use various parameters to find the vehicle count and thereby control the traffic lighting period with the use of timers.

Our project deals with image processing technology, wherein we count the vehicles on each side using edge detection. The development of our project was divided into four phases:
• Taking snapshots using camera at each interval
• Vehicle counting using edge detection
• Calculation of timer values to control the traffic light
• Upload the timers with the timings (in seconds)

These functions are integrated to make the system operate. Thus the vehicle traffic can be monitored periodically and controlled in order to solve the traffic congestion problem. The system makes use of three main components: a camera, Matlab and a timer.
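The fixed-time versus variable (vehicle-actuated) distinction above can be sketched in a short illustrative function (a Python sketch, not part of the original project; the durations and the cap are invented for illustration):

```python
def green_duration(mode, vehicle_count, fixed_time=60, per_vehicle=10, max_green=120):
    """Return the green-light duration in seconds.

    mode 'fixed'   : a pre-set duration, regardless of traffic.
    mode 'variable': duration grows with the detected vehicle count,
                     capped at max_green (all numbers are illustrative).
    """
    if mode == "fixed":
        return fixed_time
    elif mode == "variable":
        return min(vehicle_count * per_vehicle, max_green)
    raise ValueError("mode must be 'fixed' or 'variable'")

print(green_duration("fixed", 3))      # 60
print(green_duration("variable", 3))   # 30
print(green_duration("variable", 20))  # 120 (capped)
```

The variable mode is the behaviour the rest of this report implements with cameras instead of underground sensors.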


CHAPTER 2
LITERATURE REVIEW

2.1 Fuzzy Expert System
A fuzzy expert system has been used to control traffic lights in many cities; it is the most common system used in major areas. The fuzzy expert system is composed of seven elements: a radio frequency identification (RFID) reader, an active RFID tag, a personal digital assistant (PDA), a wireless network, a database, a knowledge base and a backend server. The following figure gives a brief overview of the connections between the different elements of the system.

Fig 2.1.1 Fuzzy Expert System

In this system, the RFID reader detects an RF-ACTIVE code at 1024 MHz from the active tag pasted on the car. The active tag has an inbuilt battery, so that it can periodically and actively transmit the messages stored in the tag. As soon as the data is received, the reader saves all the information in the PDA. When the PDA has accumulated the required amount of data, it uses its wireless card to connect to the backend server and stores the data in the database in the server.

When all possible congestion roads, inter-arrival times and average car speeds have been collected, these data are used as the input parameters of the traffic light control simulation model in the server, which uses the data stored in the database to calculate maximum flow. The simulation model gives three optimal alternatives: the best, second and third best traffic light durations. The system uses these alternatives, together with the collected data, to choose the most suitable solution for the particular traffic congestion situation. The system uses the forward chaining approach, a data-driven approach that starts from basic facts and tries to draw conclusions; all rules and reasoning are expressed in IF-THEN format. After getting the simulation results, the system is able to automatically give different alternatives for a variety of traffic situations, and the red or green light duration is then set via a traffic light control interface to relieve the traffic congestion.
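The forward-chaining, IF-THEN style of reasoning described above can be illustrated with a toy rule engine (a Python sketch; the rule contents are invented for illustration and are not taken from the cited system):

```python
# Toy forward-chaining engine: facts are asserted, and IF-THEN rules fire
# repeatedly until no rule can add a new fact (data-driven reasoning).
rules = [
    ({"queue_long", "speed_low"}, "congestion"),
    ({"congestion"},              "extend_green"),
    ({"extend_green"},            "notify_controller"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # IF all conditions hold THEN assert the conclusion as a new fact
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"queue_long", "speed_low"}, rules)
print(sorted(derived))
```

Starting from the observed facts, the engine derives "congestion", then "extend_green", then "notify_controller", mirroring how a data-driven system works forward from sensor data to a control decision.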

When all the required data has been collected and the simulation is run, the simulation gives results in the form of different alternatives. These results are the input for the light control.

Fig 2.1.2 Traffic control simulation

From the above figure, we can say that there are six road traffic control simulation models. The sub-models A, B and C are similar, and the sub-models D, E and F are similar. We now explain model A, which has the logic diagram shown below.

Fig 2.1.3 The simulation flow diagram for Traffic Light A1

The upper dashed area in the above figure controls the traffic signal at the first intersection on road A. Light control A1 generates an entity to control the traffic light signal. The element named 'Assign for light A1 clock time' gets the current simulation time. Prompt seizes the resource Switch A1 as first priority to get the resource. Delay for red light A1 sets the duration of the red light, and finally the delay for green light A1 sets the duration of the green light. Release switch A1 then releases the resource Switch A1, allowing the cars to seize the resource. The processes of light control A2 and light control A3 are the same as that of light control A1.

2.2 Artificial Neural Network Approach

The adaptive traffic light problem has also been modeled using the ANN approach. The researchers (Patel and Ranganathan) created an ANN model which included predicting the traffic parameters for the next time frame and computing the cycle time adjustment values. Sketched, the ANN model looks like the figure shown below.

Fig 2.2.1 ANN traffic model

The inputs given to the ANN model are the data collected by sensors placed around the traffic lights; the sensors give the model all the data related to the past and present traffic parameters. The model consisted of nine inputs (one for each past and present traffic parameter), one hidden layer with 70 hidden nodes, and three output nodes. After getting the input, the model uses the hidden layer to decide which nodes suit the current traffic situation. Each hidden node is given a membership function (i.e. a value between 0 and 1). After comparing the nodes and matching them with the current situation with the help of the membership functions, the most suitable results or alternatives are selected as the output and are then used by the traffic lights to set the timing for the red and green lights. The output of the ANN model is thus in the form of membership values ranging from 0 to 1.

2.3 An Intelligent Decision-making system for Urban Traffic Control

IDUTC is a real-time intelligent decision-making system that computes decisions within a dynamically changing application environment. The architecture of the IDUTC is shown in the figure below.

Fig 2.3.1 The architecture of the IDUTC

The IDUTC model consists of seven elements:
• Application environment
• Sensors
• Artificial Neural Network (ANN)
• Fuzzification element
• Fuzzy expert system (FES)
• Defuzzification element
• Controllers

The IDUTC is a self-adjusting traffic light control system. The sensors of the system are placed at the road to sense the different parameters of the traffic conditions, which together constitute the application environment shown in the figure above. The sensors are the actual input of the IDUTC model: they collect past data of the traffic conditions along with the present data. After sensing the surrounding environmental conditions, the sensors send crisp data inputs to the artificial neural network. The ANN model collects all the data from the sensors, processes it through its hidden layers and gives the desired output. The outputs of the ANN model are then assigned fuzzy labels indicating the degree to which each crisp value is a member of a domain. The fuzzy expert system fires its rules based on these fuzzy values, and the defuzzification unit converts the computed decisions into crisp values that are used to control the environment through the controllers installed at the traffic lights. The cycle repeats, adjusting the traffic light timings each time; this shows that the system is self-adjusting according to the situation.
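The fuzzification, rule-firing and defuzzification chain can be sketched as follows (a Python illustration with invented membership functions and rule consequents; in the real IDUTC, the fuzzified values come from the ANN's outputs):

```python
def fuzzify(queue_len):
    """Map a crisp queue length (vehicles) to fuzzy labels in [0, 1].

    The linear memberships below are invented for illustration.
    """
    short = max(0.0, 1.0 - queue_len / 10.0)
    long_ = min(1.0, queue_len / 10.0)
    return {"short": short, "long": long_}

def fire_rules(fuzzy):
    # IF queue is short THEN green = 20 s; IF queue is long THEN green = 80 s
    return [(fuzzy["short"], 20.0), (fuzzy["long"], 80.0)]

def defuzzify(fired):
    """Weighted-average (centroid-style) defuzzification to a crisp value."""
    num = sum(w * v for w, v in fired)
    den = sum(w for w, v in fired)
    return num / den if den else 0.0

green = defuzzify(fire_rules(fuzzify(7)))
print(round(green, 1))  # a queue of 7 is mostly "long", giving 62.0 s
```

The crisp output (a green-light duration here) is what the controllers at the lights would act on, closing the sense-decide-actuate cycle described above.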

2.4 Automated traffic control using Image processing

Computer Vision has been an active research area since the 1960s. A computer vision system accepts an image or a set of images as input and attempts to produce results similar in nature to those produced by a human viewer using the same image(s). Its applications include document processing, microscopy, industrial automation and surveillance, to name a few major areas. By 1983, many ad hoc techniques for analyzing images had been developed, and the subject had gradually begun to develop a scientific basis. The researcher (Rosenfeld) identified the basic steps in a general image analysis process. The researcher (Binford) suggested criteria to evaluate the performance of an industrial computer vision system. The symbolic information required to produce results is generated in the process of image analysis. Image sequence analysis uses temporal relations and properties of the objects in a scene, in addition to the results gathered by image analysis; here the input is a sequence of images. Automated traffic flow measurement using video image sequences can therefore be classified as an application of computer vision.

This review consists of three main sections. The first and second sections review research in the preliminary research areas of computer vision and image sequence analysis respectively. In the third section, research in the specific area of traffic image sequence analysis is reviewed.

A general computer vision system performs the following basic functions:
(1) Pre-processing
(2) Feature extraction
(3) Image segmentation
(4) Analysis of the results of (2) and (3) to produce results

A brief description of the state of the art in each step follows.

2.4.1 Pre-processing
The input images (or image sequences) to most computer vision systems contain some amount of noise. These images are preprocessed to reduce the noise, so that the effect of noise on system performance is minimized. Another objective of pre-processing is to enhance certain image features while suppressing others, so that subsequent feature extraction and segmentation become easier.

2.4.2 Feature extraction
A computer vision system extracts a selected set of features from an image to produce results. These features are then analyzed, or matched with those of other images or with a model of the real-world objects. The features to be extracted vary from one system to another; the most commonly used features are outlined in the following paragraphs.

The set of edges of the objects in a scene can be considered an important feature for a computer vision system working on images of that scene. Sharp, directional intensity variations in images correspond to these edges, making them easy to detect. Techniques for edge detection have been available since 1965. Roberts' operator, one of the earliest edge detectors developed, was based on analyzing the first order derivatives of pixel intensity. Griffith, Hueckel, Canny and Sobel were other edge detectors based on the same principle. The researchers (Haralick and Abdou et al.) have compared the performance of several edge detectors, including those mentioned above. The researchers (Marr and Hildreth) identified criteria for evaluating the performance of edge detectors and developed a method to detect edges in noisy images using the Laplacian of Gaussian operator. The researchers (Kashyap and Nevatia) have suggested the use of stochastic methods for extracting edges.

Detection of corners has been another popular technique in image analysis. Corners are easier to detect and match than edges for two main reasons: one, corners are localized in a smaller area; two, they are less sensitive to affine transforms of the objects in the scene. Several corner detectors with acceptable performance are available. The Plessey corner detector provides a reliable set of corners, but consumes a lot of processing time. The Wang and Brady corner detector consumes less processing time, but results in a large number of spurious edges. Both of these are based on intensity gradients of the image. The Susan corner detector takes a different approach and looks at the intensities of the pixels in the neighbourhood of a particular pixel to identify whether it is a corner.

Curves are another feature of interest in image analysis. The Hough transform can be used to identify various types of curves in an image, and Hough transforms can be used effectively to identify basic geometric shapes. Where curves cannot be detected directly, B-spline curves can be formed by combining feature points (such as corners). For applications where objects of known shape are sought in an image, several techniques of shape matching exist.

2.4.3 Image segmentation
The next important step in digital image analysis is image segmentation, the process of partitioning the given image into meaningful regions and labeling the individual regions. This can be performed in several ways. Simpler techniques use gray levels and local feature values such as edges, corners and closed curves to segment images. A more sophisticated technique is splitting and merging: an image is split into small regions, and these small regions are then merged together to form larger regions. This has been employed successfully in texture-based segmentation. Attempts have also been made to use expert systems to improve the performance of segmentation. There are two main problems associated with image segmentation. One, determining the number of regions of interest is difficult; the number may be obvious only in very simple situations, such as a uniform background with one object. Two, image segmentation is very highly scene-dependent and problem-dependent.
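As an illustration of the gradient-based edge detectors discussed above, here is a minimal Sobel operator in pure Python (an illustrative sketch; a production system would use optimized library routines):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with 3x3 Sobel kernels.

    img is a list of lists of gray levels; border pixels are left at 0.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal gradient: right column minus left column, center weighted 2
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            # vertical gradient: bottom row minus top row, center weighted 2
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the gradient response is strong along the step.
img = [[0, 0, 9, 9]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # [0, 36, 36, 0]
```

Thresholding such a magnitude map yields the binary edge image that the later chapters' vehicle-counting step operates on.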

CHAPTER 3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM
In a normal traffic light system, the traffic signals are changed at predetermined times. This method is suitable when the number of vehicles (fig 3.1.1) on all sides of a junction is the same. But when there is an unequal number of vehicles in the queues (fig 3.1.2), this method is inefficient. So we go for image processing techniques using a queue detection algorithm to change the traffic signals.

Fig 3.1.1 Traffic in Road 1

Fig 3.1.2 Traffic in Road 2

Even if there is no vehicle on a path, the vehicles from all other paths have to wait till the count becomes zero; hence there is wastage of time.

3.2 PROPOSED SYSTEM
Our system is a conservative approach to solving this problem. We have focused on avoiding the wastage of time that occurs due to waiting for a period greater than that needed for the vehicles to pass through. In the queue detection algorithm, the number of vehicles on each side of the road is calculated using the edge detection method: the count of vehicles is found from the camera image using object counting with image processing techniques. After finding the count, the timer is loaded with a time (in seconds) based on the vehicle count. Instead of being loaded with a fixed value of sixty seconds, the timer is loaded with a time based on the vehicle count on the corresponding road. Thus the timer works, the traffic light is controlled, and traffic congestion is reduced.

Fig 3.2.1 Queue Detection and Edge Detection
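The time saved by count-based timing over a fixed 60-second phase can be sketched as follows (a Python illustration using the 10-seconds-per-vehicle assumption made in Chapter 4; the example counts are invented):

```python
def phase_times(counts, per_vehicle=10):
    """Green time per side, proportional to its vehicle count."""
    return [c * per_vehicle for c in counts]

counts = [5, 2, 0, 3]             # vehicles waiting on the four sides
greens = phase_times(counts)
fixed_cycle = 60 * len(counts)    # conventional fixed-time cycle
print(greens)                     # [50, 20, 0, 30]
print(fixed_cycle - sum(greens))  # 140 seconds saved per cycle
```

Note how the empty side gets no green time at all, which is exactly the waste the proposed system eliminates.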

CHAPTER 4
MODULE DESCRIPTION
• Taking snapshots using camera at each interval
• Vehicle counting using edge detection
• Calculation of timer values to control the traffic light
• Upload the timers with the timings (in seconds)

4.1 Taking snapshots using camera at each interval
Here background images are compared with images of each side of the road, and the number of vehicles on each side is then found using the edge detection method. For a real-time implementation, an Axis P1343 network camera, with superb image quality at SVGA resolution, can be used. These cameras are capable of functioning both during day and night, so there is no problem with the lighting needed to take photos at night. They have the feature of Digital PTZ. These cameras have an IP66 rating, giving protection against dust, rain, snow and sun, and can operate in temperatures as low as -40 °C (-40 °F). This eliminates the need for power cables and reduces installation costs. The images captured by the cameras can be stored for a few seconds, until the next image is obtained, and then erased automatically. Local storage of the images, or maintaining a large database for the images taken, is not needed; thus the system memory requirement is reduced to a great extent. As the images are not retained, no security surveillance of the vehicles is provided.

Fig 4.1.1 Background

Fig 4.1.2 Side 1

Fig 4.1.3 Side 2

Fig 4.1.4 Side 3

Fig 4.1.5 Side 4

4.2 Vehicle counting using edge detection
A vehicle detection operation is applied to the profile of the image. Most of the vehicle detection algorithms developed so far are based on a background differencing technique applied to the unprocessed image, which is sensitive to variations of ambient lighting. The method used here is instead based on applying edge detector operators to a profile of the image: edges are less sensitive to the variation of ambient lighting and are used in full-frame applications (detection). To implement the algorithm in real time, two strategies are often applied: key region processing and simple algorithms. The syntax below specifies the Canny method, using sigma as the standard deviation of the Gaussian filter; the default sigma is 1, and the size of the filter is chosen automatically, based on sigma.

BW = edge(I,'canny',thresh,sigma)

4.3 Calculation of timer values to control the traffic light
By doing the above process, the count of the vehicles is found. The timing is then calculated based on the number of vehicles. Say 10 seconds are allotted for each vehicle: for the above simulation output, the vehicle count for side 1 was found to be five, so the total time needed for signalling that phase is 50 seconds (5*10). Thus the timings are calculated and the lighting sequence is given. When the timer count reaches half the loaded value, the image of traffic on the road is captured, fed to the Matlab database, processed, and the count of vehicles found.

4.4 Upload the timer with timings (in seconds)
By the end of the last step, the timings needed for controlling the signal have been calculated using Matlab. These values are then downloaded to the timers, which control the traffic lights. The decrement of the loaded timer value is continuously monitored by Matlab; when it reaches half the loaded value, the next image is taken, processed, and the new count calculated and downloaded to the timer from Matlab. When the timer count drops to zero, the timer is loaded with the new time value and the lighting sequence is carried out. Thus the timer and Matlab communicate to achieve efficient control of the traffic light.
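The counting rule used in the Appendix scans a row of the edge-difference image and counts every run of white pixels longer than a threshold as one vehicle. A Python re-expression of that run-length rule (an illustrative sketch, not the original MATLAB code; the 25-pixel threshold matches the sample coding):

```python
def count_vehicles(row, min_run=25):
    """Count runs of 1s longer than min_run in a binary edge row.

    Mirrors the Appendix logic: the run length resets on a 0 pixel and
    also after a vehicle is counted, so one very long blob is counted
    once per min_run-pixel stretch.
    """
    count, run = 0, 0
    for pixel in row:
        if pixel == 0:
            run = 0
        else:
            run += 1
        if run > min_run:
            run = 0
            count += 1
    return count

# One 30-pixel blob and one 10-pixel blob: only the first counts as a vehicle.
row = [1] * 30 + [0] * 5 + [1] * 10
print(count_vehicles(row))  # 1
```

The per-side counts produced this way are what the timer-value calculation in section 4.3 consumes.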

CHAPTER 5
CONCLUSION AND FUTURE WORK

This novel approach would be efficient over conventional systems in diversified ways. The application of the queue detection algorithm, with the aid of image processing techniques, would result in a phenomenal decrease in the congestion of vehicles. The automatic control system would condense the manual work involved in the process. Economical time control is yet another advantage, counting towards its remarkable success in curtailing the number of accidents. For future work, we intend to take real images using a camera at each interval and to interface the camera with MATLAB.

APPENDIX A
SAMPLE CODING

%% The Program for the side1: It senses the vehicles and calculates the total
%% no of vehicles
clc;
clear all;
close all;

% these lines are used to read the background picture
s1=imread('C:\Users\Happy\Desktop\project\ip\final\back1.png');
u=imresize(s1,[256,66],'bilinear');
level=graythresh(u);
s1=im2bw(u,level);
s1=s1.*ones(size(s1));
s1=edge(s1,'canny',0.14,3.2);
% figure, imshow(s1);

% read the current traffic image for side 1
t1=imread('C:\Users\Happy\Desktop\project\ip\final\image1.png');
u=imresize(t1,[256,66],'bilinear');
level=graythresh(u);
t1=im2bw(u,level);
t1=t1.*ones(size(t1));
t1=edge(t1,'canny',0.14,3.2);
% figure, imshow(t1);

% subtract the background edge map from the current edge map
side1=imsubtract(s1,t1);
for x=1:256
    for y=1:66
        if side1(x,y)==1
            side1(x,y)=1;
        else
            side1(x,y)=0;
        end
    end
end
imview(side1);

% count white runs longer than 25 pixels as vehicles
count=0;
length=0;
y=1;
for y=23:1:24
    for x=1:250
        if side1(x,y)==0
            length = 0;
        end
        if side1(x,y)==1
            length = length + 1;
        end
        if length > 25
            length=0;
            count = count + 1;
        end
    end
end

w1=count;

%% The Program for the side2: It senses the vehicles and calculates the total
%% no of vehicles
% these lines are used to read the background picture
s2=imread('C:\Users\Happy\Desktop\project\ip\final\back2.png');
u=imresize(s2,[55,237],'bilinear');
level=graythresh(u);
s2=im2bw(u,level);
s2=s2.*ones(size(s2));
s2=edge(s2,'canny',0.14,3.2);
% figure, imshow(s2);

t2=imread('C:\Users\Happy\Desktop\project\ip\final\image2.png');
u=imresize(t2,[55,237],'bilinear');
level=graythresh(u);
t2=im2bw(u,level);
t2=t2.*ones(size(t2));
t2=edge(t2,'canny',0.14,3.2);
% figure, imshow(t2);

side2=imsubtract(t2,s2);
imview(side2);
count=0;
length=0;
y=1;

% count white runs longer than 25 pixels as vehicles
for x=40:1:50
    for y=1:230
        if side2(x,y)==0
            length = 0;
        end
        if side2(x,y)==1
            length = length + 1;
        end
        if length > 25
            length=0;
            count = count + 1;
        end
    end
end
w2=count;

%% The Program for the side3: It senses the vehicles and calculates the total
%% no of vehicles
% these lines are used to read the background picture
s3=imread('C:\Users\Happy\Desktop\project\ip\final\back3.png');
u=imresize(s3,[93 255],'bilinear');
level=graythresh(u);
s3=im2bw(u,level);
s3=s3.*ones(size(s3));
s3=edge(s3,'canny',0.14,3.2);
% figure, imshow(s3);

t3=imread('C:\Users\Happy\Desktop\project\ip\final\image3.png');
u=imresize(t3,[93 255],'bilinear');
level=graythresh(u);
t3=im2bw(u,level);
t3=t3.*ones(size(t3));
t3=edge(t3,'canny',0.14,3.2);
% figure, imshow(t3);

side3=imsubtract(s3,t3);
imview(side3);

% count white runs longer than 25 pixels as vehicles
count=0;
length=0;
y=1;
for y=1:255
    for x=1:93
        if side3(x,y)==0
            length = 0;
        end
        if side3(x,y)==1
            length = length + 1;
        end
        if length > 25
            length=0;
            count = count + 1;
        end
    end
end
w3=count;

% figure. t4=edge(t4.'bilinear'). s4=s4. % figure. level=graythresh(u). u=imresize(t4.t4). imshow(t4).png'). side4=imsubtract(s4. s4=edge(s4.'canny'.'bilinear').*ones(size(t4)). else side4(x.*ones(size(s4)). s4=im2bw(u. for x=1:152 for y=1:118 if side4(x.[152 118].3.y)=0.%%The Program for the side4: It senses the vehicles and calculate the total %%no of vehicles % these lines are used to read the background pictures s4=imread('C:\Users\Happy\Desktop\project\ip\final\back4.level). end 25 . t4=im2bw(u.'canny'. 0.png').14. imshow(s4).2).y)=1.3. 0.y)==1 side4(x. level=graythresh(u).14.[152 118]. u=imresize(s4. % imshow(s1). t4=t4.level). t4=imread('C:\Users\Happy\Desktop\project\ip\final\image4.2).

imview(side4);

% count white runs longer than 25 pixels as vehicles
count=0;
length=0;
y=1;
for y=60:1:64
    for x=1:150
        if side4(x,y)==0
            length = 0;
        end
        if side4(x,y)==1
            length = length + 1;
        end
        if length > 25
            length=0;
            count = count + 1;
        end
    end
end
w4=count;

% collect the four vehicle counts
whitebox=[w1 w2 w3 w4];
road=[1,2,3,4];
time=[];
ontime=[];

% we assume that a vehicle takes 10 sec to move
for i=1:4
    if i==1
        time(i)=10*whitebox(i);
    else
        time(i)=time(i-1)+10*whitebox(i);
    end
    % it counts the on-time for the four sides and stores it in an array;
    % we give it to the traffic light through output ports
    ontime(i)=10*whitebox(i);
end

figure, barh(whitebox,ontime);
title('Graph on Vehicles Vs Ontime');
xlabel('OnTime in sec');
ylabel('Vehicles Count');

% figure, barh(whitebox,ontime,'g');
% title('Graph on Vehicles Vs Ontime');
% xlabel('OnTime in sec');
% ylabel('Vehicles');

% figure, bar(road,time);
% title('Graph on Vehicles Vs TotalTime');
% xlabel('TotalTime in sec');
% ylabel('Vehicles Count');

APPENDIX B
SCREEN SHOTS

Fig 5.2.1 Output Image for Side 1

Fig 5.2.2 Output Image for Side 2

Fig 5.2.3 Output Image for Side 3

Fig 5.2.4 Output Image for Side 4

Fig 5.2.5 Graph for vehicles count

Fig 5.2.6 Graph for calculation of timer values

REFERENCES

 Hoose, N.: 'Computer Image Processing in Traffic Engineering'
 Rourke, A., and Bell, M.G.H.: 'Queue detection and congestion monitoring using Image processing', Traffic Engg. and Control
 Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods
 A Real-time Computer Vision System for Vehicle Tracking and Traffic Surveillance by Benjamin Coifman
