
A SEMINAR REPORT

ON

SENSOR BASED AUTOMATED CAR

Submitted in
partial fulfillment of the requirements
for the award of
Bachelor of Technology
in
Electronics & Instrumentation Engineering

BY

DHAMMISHETTY RAVITEJA

B19EI025

DEPARTMENT OF ELECTRONICS & INSTRUMENTATION ENGINEERING


KAKATIYA INSTITUTE OF TECHNOLOGY & SCIENCE, WARANGAL-15
2021 – 2022
CERTIFICATE

This is to certify that the report entitled "SENSOR BASED AUTOMATED CAR" is a bonafide record of the Seminar presented by D. RAVITEJA (B19EI025), in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Electronics & Instrumentation Engineering from Kakatiya Institute of Technology & Science, Warangal.

B. Shashikanth
(Seminar Guide)
Professor
Dept. of Electronics & Instrumentation Engineering

Dr. M. Raghu Ram
Head & Associate Professor
Dept. of Electronics & Instrumentation Engineering

Place: Hanamkonda
Date: 11.12.2021
ACKNOWLEDGEMENT

I wish to take this opportunity to express my deep gratitude to all the people who have
extended their cooperation in various ways during my Seminar. It is my pleasure to acknowledge
the help of all those individuals.

I thank Dr. K. Ashoka Reddy, Principal of Kakatiya Institute of Technology & Science,
Warangal, for his strong support.

I thank Dr. M. Raghu Ram, Professor & Head, Department of E&I Engg., for his
constant support in bringing shape to this Seminar.

I would like to thank my guide, B. Shashikanth, Professor, Dept. of E&I, for his guidance and
help throughout the Seminar.

In completing this Seminar successfully, all our faculty members extended excellent
cooperation by guiding us in every aspect. All your guidance helped me a lot and I am very
grateful to you.

D.Raviteja

B19EI025
ABSTRACT

In this project a basic self-driving car is designed and implemented. The idea
for the project was inspired by the recent surge in the automated-vehicle
industry. The car designed here is capable of detecting road signs and taking
the correct turn accordingly. To implement the whole system, the body of the
car is connected to an analysis PC over Wi-Fi, where the PC can analyze the
video feed frame by frame. In a real car, the analysis PC could simply be
mounted on board. The complete setup was able to take the correct decision
with good accuracy.
CONTENTS
1. Introduction
   Introduction to Self-Driving Cars
   Problem Statement
2. Necessary Components
   Arduino Uno
   ESP8266 (NodeMCU v0.9)
   Motor Driver L298
   DC Motor
   Camera
   Traffic Signals
   Chassis
3. Process of Implementation
   Flowchart of Implementation
   Getting Data from the Camera
   Sending Data to the Image Processor via TCP Server
   Recognizing Feed Data by Image Processing
   Sending Instructions to the Prototype
4. Final Assembly of the Prototype
5. Areas of Upgrade
6. Conclusion
7. References

1. Introduction
Introduction to Self-Driving Cars:
A self-driving car can analyze its surroundings and take decisions accordingly without any human interaction. A number of sensors are combined and used to identify the pathway and road signs from the surroundings. An autonomous car has reduced costs due to less wastage of fuel, increased safety, increased mobility and increased customer satisfaction, and therefore has advantages over traditional cars. The biggest benefit of using a self-driving car is significantly fewer traffic accidents. More than 90% of all accidents are caused by some degree of human error, including distraction, impaired driving, or poor decision making. With self-driving cars making decisions and communicating with one another, the number of accidents should reduce.
Problem Statement:
In this project a low-cost prototype of a self-driving car is proposed and implemented. The
car carries an on-board camera, and from the video feed the analysis PC can detect traffic signs
(turn right, turn left, stop) and give the correct commands to the car.
2. Necessary Components
Arduino Uno:
The Arduino Uno is an open-source microcontroller board based on the Microchip
ATmega328P microcontroller and developed by Arduino.cc. The board is equipped with sets of
digital and analog input/output pins that may be interfaced to various expansion boards and
other circuits.
ESP8266 (NodeMCU v0.9)
The ESP8266 is a low-cost Wi-Fi microchip, with a full TCP/IP stack and
microcontroller capability, developed by Espressif Systems in Shanghai, China. The chip
first came to the attention of Western makers in August 2014 with the ESP-01
module, made by the third-party manufacturer Ai-Thinker.
Motor Driver L298
The L298 motor driver module is a high-power driver module for DC and stepper
motors. The module consists of an L298 motor driver IC and a 78M05 5 V regulator. The
L298 module can control up to four DC motors, or two DC motors with direction and speed
control.
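On the L298, the direction of each motor channel is set by the logic levels on its two input pins, while speed is set by PWM on the enable pin. The helper below is an illustrative Python sketch of that direction truth table only; the function name and return strings are assumptions for illustration, not part of any firmware in this project:

```python
def l298_channel_state(in1: bool, in2: bool) -> str:
    """Model the L298 direction truth table for one motor channel.

    IN1/IN2 are the two direction inputs of a channel; the enable pin
    (driven with PWM) would additionally gate the output and set the speed.
    """
    if in1 and not in2:
        return "forward"
    if in2 and not in1:
        return "reverse"
    # Both inputs equal: the motor terminals see the same potential (brake).
    return "brake"


# Example: drive forward, then brake.
print(l298_channel_state(True, False))   # forward
print(l298_channel_state(False, False))  # brake
```

With two such channels, the car can turn by driving the left and right motors in opposite directions.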
DC Motor
A DC motor is any of a class of rotary electrical motors that converts direct-current
electrical energy into mechanical energy. The most common types rely on the forces produced
by magnetic fields. Nearly all types of DC motors have some internal mechanism, either
electromechanical or electronic, to periodically change the direction of current. Simple
5 V DC motors are used in this project.
Camera
An on-board camera was used to feed live video of the pathway to the analysis
PC over Wi-Fi.
Traffic Signals
Three different traffic signs were used as test signals.
Chassis (LFR/Obstacle Avoider)
A rigid chassis of the kind used for line-follower (LFR) or obstacle-avoider robots can be
used for the prototype.
3. Process of Implementation
Flowchart of Implementation
Getting data from the camera: To send the video feed, a mobile camera and an
application that streams the feed to the desired TCP server are used.
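On the analyzer side, such a stream can be read from a plain TCP socket. The sketch below is a minimal, illustrative receiver that assumes a simple length-prefixed framing (a 4-byte big-endian length followed by the JPEG bytes); the real streaming app may use a different protocol such as MJPEG over HTTP, so treat this only as a sketch of the idea:

```python
import socket
import struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket (raise if the stream ends early)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream ended early")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame: 4-byte big-endian size, then payload."""
    (size,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, size)
```

Each returned payload would then be decoded as one video frame and handed to the sign-detection stage.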

Sending data to the image processor via TCP server

A Wi-Fi module (ESP8266, NodeMCU v0.9) is used to create a TCP server, which serves as the
communication channel between the image processor and the prototype car.

Recognizing feed data by image processing:

Traffic Sign Detection: To determine the traffic sign from the live video, the feed is
split into frames roughly 30 milliseconds apart. As the number of signs to be recognized
is small enough, a rather simple approach, correlation, is taken. This task
can be divided into the following subtasks.
1) Acquiring Training Images: Around 10 images of each sign were taken; the sub-rectangle
   containing the sign was cropped from each picture, and the image matrix was saved as a
   MATLAB file. Each image was resized to 600x600 pixels.
2) Dividing the Real-Time Video: A frame is taken every 30 milliseconds and processed
   to detect the traffic sign.
3) Detecting Circular Objects in a Frame: The circle Hough Transform (CHT) is used to
   detect circles in the frame. The CHT is a basic feature-extraction technique in
   digital image processing for detecting circular objects in a digital image; it is a
   specialization of the Hough Transform. Its purpose is to find circles in imperfect
   image inputs. Circle candidates are produced by "voting" in the Hough parameter
   space, and the local maxima in a so-called accumulator matrix are then selected.
   This technique is implemented in the MATLAB function imfindcircles, which we used.
4) Correlation: The extracted circle is then correlated with each sign template.
   Thresholds were chosen to detect each type of sign through trial and error.
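The correlation step can be pictured with a small sketch. The original work used MATLAB; the pure-Python version below computes a normalized cross-correlation score between a detected patch and each stored template and picks the best match above a threshold. The function names, the 0.5 threshold, and the flattened-list image representation are illustrative assumptions, not the report's actual values:

```python
from math import sqrt

def ncc(a: list[float], b: list[float]) -> float:
    """Normalized cross-correlation of two equal-length, flattened images."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

def classify(patch, templates, threshold=0.5):
    """Return the best-matching sign name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, tmpl in templates.items():
        score = ncc(patch, tmpl)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy example with 2x2 "images" flattened to lists:
templates = {"stop": [1.0, 0.0, 0.0, 1.0], "left": [0.0, 1.0, 1.0, 0.0]}
print(classify([0.9, 0.1, 0.0, 1.0], templates))  # stop
```

In the real system each template would be a 600x600 cropped sign image, and the patch would be the circular region returned by the circle detector.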
Sending Instructions to the Prototype
The action for the recognized sign is then sent to the prototype through the TCP server. The
Arduino Uno was used to drive the DC motors according to those actions.
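The command channel can be pictured as a one-byte-per-action protocol over the same TCP link. The mapping and sender below are an illustrative sketch: the byte values and the helper names are assumptions for illustration, not the report's real protocol:

```python
import socket

# Hypothetical one-byte command codes understood by the prototype's firmware.
ACTIONS = {"left": b"L", "right": b"R", "stop": b"S"}

def encode_action(sign: str) -> bytes:
    """Translate a recognized sign name into its command byte."""
    try:
        return ACTIONS[sign]
    except KeyError:
        raise ValueError(f"unknown sign: {sign!r}")

def send_action(sock: socket.socket, sign: str) -> None:
    """Send the command for a recognized sign to the car over TCP."""
    sock.sendall(encode_action(sign))
```

On the car, the Arduino would read each byte from the Wi-Fi module and set the motor-driver pins accordingly.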

4. Final Assembly of the Prototype

The prototype was assembled following a simple line-follower robot (LFR) build procedure,
and the resulting car was made. The prototype car in the test field.
5. Areas of Upgrade
Many enhancements can be made to the prototype according to the field of application. Instead
of a separate PC, an on-board Raspberry Pi can be used for image processing. At present the
prototype follows only a few instructions; the instruction set can be expanded, and
machine-learning algorithms can be implemented for further self-sustained improvements.

6. CONCLUSION
In this project a simple prototype of a self-driving car is demonstrated. The prototype car can
recognize three separate signs (turn left, turn right and stop) with great accuracy and take
decisions accordingly. The prototype can be further improved by including more signs and
using an ML approach.

7. REFERENCES

1. Taeihagh, Araz; Lim, Hazel Si Min (2 January 2019). "Governing autonomous
   vehicles: emerging responses for safety, liability, privacy, cybersecurity,
   and industry risks". Transport Reviews. 39 (1): 103–128.
   doi:10.1080/01441647.2018.1494640.
2. Maki, Sydney; Sage, Alexandria (19 March 2018). "Self-driving Uber car kills
   Arizona woman crossing street". Reuters. Retrieved 14 April 2019.
3. Thrun, Sebastian (2010). "Toward Robotic Cars". Communications of the ACM.
   53 (4): 99–106. doi:10.1145/1721654.1721679.
4. Hu, Junyan; et al. (2020). "Cooperative control of heterogeneous connected
   vehicle platoons: An adaptive leader-following approach". IEEE Robotics and
   Automation Letters. 5 (2): 977–984. doi:10.1109/LRA.2020.2966412.
5. Gehrig, Stefan K.; Stein, Fridtjof J. (1999). "Dead reckoning and cartography
   using stereo vision for an automated car". IEEE/RSJ International Conference
   on Intelligent Robots and Systems, vol. 3, Kyongju, pp. 1507–1512.
   doi:10.1109/IROS.1999.811692.
