
Opening Report

Graduation Project Title: Design and implementation of small automatic driving intelligent car system

Class: Computer Science and Technology (all-English international students) 19(1)    Name: K.AZAMAT

Project title: Design and implementation of small automatic driving intelligent car system

Opening report (must include the following 5 items; no fewer than 3,000 words, except for art and foreign-language majors)

Contents

1 Significance and feasibility analysis of the selected topic
2 Current state of research
3 Basic content of the research and main problems to be solved
4 Overall research approach and expected research results
5 Research work plan
6 References

Grade:

Opening defense comments    Panel leader signature:            Date (Y/M/D):

Supervisor acknowledgment    Signature:            Date (Y/M/D):

Department review    Signature:            Date (Y/M/D):
Design and implementation of small automatic driving intelligent car system

Opening Report

1 Analysis of the significance and feasibility of the selected topic

The development of fully autonomous vehicles is moving forward at a rapid pace: every
major automaker, ride-sharing service, and tech company from Apple to Baidu has invested in
the driverless car market, and there are new advances in the field almost every day.
Autonomous vehicles are already beginning to appear on our roadways. It will not be long before
the technological obstacles to full AV deployment are overcome, and the legal, social, and
transportation issues surrounding autonomous cars are already being openly discussed.
Autonomous cars also have the potential to be a significant accelerator of urban
transformation[1]. Most automakers are working on autonomous driving, but only a few have
released self-driving vehicles on the market (with varying degrees of sophistication), with more
than 10 million of them anticipated to be on the road by 2020[2]. One study evaluated drivers'
a priori acceptability of a fully autonomous vehicle, together with their attitudes, personality
attributes, and intention to use one.
421 drivers (153 men, M = 40.2 years, range 19–73) responded to the online survey. A priori,
68.1% of the sample accepted having fully autonomous vehicles[3]. The consequences of handing
control to the automobile are far-reaching. Accidents will inevitably occur, so a self-driving
car will have to make decisions that could save or end human lives[4]. In the USA, 94% of car
crashes are attributed to human error. Self-driving cars have the potential to drastically
reduce accidents by removing the human element while continuously monitoring the environment
to identify and respond swiftly to potentially hazardous events and driving behaviors[2].
Moreover, a self-driving system never gets tired or sleepy and can usually make decisions faster
than a human being. Self-driving technology also promises other advantages, including improved
traffic flow, decreased pollution, and a reduction in accidents caused by human error. Future
cars are expected to make better decisions because they can acquire detailed situational
awareness through multiple sensors, which, combined with artificial intelligence, may allow
self-driving cars to anticipate and respond to the environment better than humans[4].

2 Current state of research


According to recent World Health Organization statistics, 1.25 million people die in traffic
accidents each year. In addition, the cost of these accidents has reached US$518 billion
annually in recent years, equivalent to 1% to 2% of global GDP[5]. Autonomous vehicles have
the potential to be significantly safer than the manually operated vehicles we currently use.
This is among the factors that make people enthusiastic about the development and adoption of
self-driving vehicles. However, self-driving automobiles cannot guarantee complete safety,
because they will be traveling at high speed while avoiding unexpected pedestrians, cyclists,
and human drivers[6].
The Society of Automotive Engineers (SAE) standard defines six levels (0 through 5) of driving
automation. In level one vehicles, ADASs help the driver handle steering or
acceleration/deceleration in some circumstances, with human input. In level two vehicles, ADASs
control both steering and acceleration/deceleration in certain conditions, with driver input. In
lower-level vehicles (levels zero to two), the driver continuously monitors the road environment;
in higher-level vehicles (levels three to five), the ADAS monitors it instead. Level three
vehicles, like the 2016 Tesla Model S, have the most advanced ADASs and handle several safety
systems, but the driver can still take over when necessary. Level four vehicles can operate in a
wider variety of situations and manage numerous safety systems. The ultimate aim of autonomous
driving is level five automation, where the ADAS controls all of the vehicle's systems in all
conditions (such as snow-covered highways and unlabeled dirt roads) without any human
intervention[5].

Figure 1 Levels of self-driving automation
Figure 2 Levels of self-driving assistance

In daily life, when we talk about self-driving cars we don't usually think philosophy has
anything to do with them, but recent developments show that work on self-driving cars raises
problems that directly involve philosophy. Here is one of them: "The presumption is that a
startling event occurs, after which there are two possible courses of action. If no active
choice is taken, some people will die, and if a choice is made, more people will live but fewer
will die. If you don't do anything, five people will perish. One person will perish if the
trolley is turned by pulling the lever"[7]. This is just one question from the philosophical
front.
Research in this area continues to focus heavily on adapting and upgrading current automated
driving technologies to the traffic patterns and particular scenarios relevant to Asia[8].
A number of ground-breaking technologies that eventually gave rise to the driver-assistance
systems we know today were supported by the European Commission's Eureka PROMETHEUS Project
(Programme for a European Traffic of Highest Efficiency and Unprecedented Safety, 1987–1995).
The project, which sought to increase traffic safety, drew numerous European automakers, who
joined it and created the initial prototypes of these components and procedures[1].
3 The basic content of the research and the main problems to be solved
3.1 The research content

In my project I will assemble a robotic car that can drive itself on a handmade artificial road
and navigate past obstacles along the way. It will also be able to detect traffic lights: it
will drive when the light is green and stop when the light is red.
3.1.1 The Software

For the software part of my project, I need to develop a program that runs on the robot.
I am going to use OpenCV and TensorFlow to build the machine-learning pipeline that detects
objects on the road. I picked OpenCV and TensorFlow because they make it possible to launch
programs quickly, and because they have large communities where questions can be asked. This
software essentially does three things: detects the lane lines on the road, detects the cars on
the road, and detects the traffic light.
I also need to develop a program for the Arduino Uno. That software will be written in the
C/C++ programming language; I will write the program on my laptop and upload it to the Arduino.

3.1.2 The Hardware

Here I am going to describe the most important hardware components.

My project needs many hardware parts. I am going to use a Raspberry Pi 4 Model B as the brain
of the car: the software I develop in C++ and Python will run on the Raspberry Pi. The Arduino
Uno, on the other hand, will control the wheels, including their turning angle.
I also need an L298N DC motor driver to drive the wheels; the driver will be connected to four
small DC motors.
3.2 The problems to be solved

1. Developing a fast algorithm for image classification

2. Developing the software and slimming it down to run on the robot car's limited hardware

3. Testing the car on a handmade artificial car track

4 The overall research idea and expected research results

Many algorithms are available for car detection, but I needed one that is particularly suited
to my project and my hardware. That is why I picked the Haar cascade classification algorithm.
Haar Cascade is a machine-learning approach that can be used to find objects in pictures or
videos[9]. There are many reasons I picked this algorithm; the most obvious is that individual
object detection and recognition using the Haar Cascade Classifier has been demonstrated to be
both effective and accurate[9]. As mentioned, I needed an algorithm that suits my hardware:
since my control system runs on a Raspberry Pi single-board computer, I need a lightweight
algorithm that is cheap to compute and makes decisions quickly.
Viola and Jones first developed the Haar feature in the field of computer vision, as the Haar
Cascade Classifier for fast image recognition[10]. In their system, Viola and Jones make use of
elements such as Haar features, integral images, AdaBoost, and cascade structures[11]. For an
object to be recognized, object recognition algorithms need to use distinctive traits[11].

Below I explain how my algorithm works and the steps it takes to recognize objects on the road.

How the algorithm works

A rectangular Haar-like feature is evaluated by calculating the difference between the pixel
sums of its black and white rectangles[12]. First, the grayscale values of all pixels inside
the black rectangles are summed; this sum is then subtracted from the sum over the white
rectangles. Finally, the result is compared to a defined threshold, and if the criterion is
met, the feature counts as a hit.
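As a concrete sketch of this computation (the rectangle layout and threshold below are invented for illustration, not values from a trained cascade):

```python
def haar_feature_value(gray, white_rects, black_rects):
    """Sum of pixels under the white rectangles minus the sum under the
    black rectangles. gray is a 2D list; each rect is (row, col, height, width)."""
    def rect_sum(r, c, h, w):
        return sum(gray[i][j] for i in range(r, r + h) for j in range(c, c + w))
    white = sum(rect_sum(*rc) for rc in white_rects)
    black = sum(rect_sum(*rc) for rc in black_rects)
    return white - black

# A two-rectangle "edge" feature on a tiny 2x4 grayscale patch:
patch = [[9, 9, 1, 1],
         [9, 9, 1, 1]]
value = haar_feature_value(patch, white_rects=[(0, 0, 2, 2)], black_rects=[(0, 2, 2, 2)])
# white sum = 36, black sum = 4, so value = 32; the feature is a hit
# if this value exceeds the trained threshold
```

A real cascade evaluates thousands of such features per window, which is why the naive per-pixel sums above are replaced by integral-image lookups in practice.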
Naively, each Haar feature computation would need to read every single pixel inside the
feature's rectangles. This step can be bypassed by using an integral image, in which the value
of each pixel equals the sum of all gray values above it and to its left in the image. The sum
over any rectangle can then be obtained with only four lookups into the integral image.
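A minimal sketch of building such an integral image in pure Python (library-free for clarity; in practice OpenCV's cv2.integral can compute this):

```python
def integral_image(img):
    """ii[r][c] holds the sum of img[i][j] for all i <= r and j <= c."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        row_sum = 0  # running sum over the current row
        for c in range(cols):
            row_sum += img[r][c]
            # add the column total accumulated in the row above
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
# ii[1][1] = 1 + 2 + 4 + 5 = 12, and ii[2][2] = 45, the sum of the whole image
```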

Figure 1 The Overall process to detect image

Convert the image into grayscale

In the first step, the algorithm converts the image taken from the frame into a grayscale
image.

Figure 2 Converted grayscale image


Although the colors in an image are a great tool for classifying it, they are not useful for
our Haar cascade algorithm; they make the feature calculations considerably more expensive.
Objects are easier to detect in a grayscale image: grayscale images have a consistent pattern
of objects, which is exactly what object detection needs, and in this respect they can beat
RGB images. In OpenCV, COLOR_BGR2GRAY is the conversion code used to turn an image into
grayscale. After receiving the video frames, we perform the necessary image processing to turn
the colored frame images into grayscale[9].
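In the pipeline this is a single call, gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), which applies the standard luma weights. A plain-Python sketch of the per-pixel formula:

```python
def bgr_to_gray(b, g, r):
    """Plain-Python version of OpenCV's BGR-to-gray formula:
    Y = 0.299*R + 0.587*G + 0.114*B (OpenCV stores channels as B, G, R)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(bgr_to_gray(255, 255, 255))  # white stays 255
print(bgr_to_gray(0, 0, 255))      # pure red -> round(0.299 * 255) = 76
```

The heavy green weight reflects the eye's sensitivity; what matters for the Haar cascade is simply that each pixel collapses to one intensity value.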
The integral image can compute the features of various targets at various positions in the
image in the same constant time, regardless of image size, considerably reducing detection
time[13].
Integral Image

The Haar cascade image classification algorithm is a supervised learning algorithm, which is
why it is trained on both positive and negative images. Haar cascades use white and black
rectangles that are compared against every part of the picture. The locations of the Haar
features, with their defined dark and white regions, must be set before the integral-image
values over the features are found. Without the integral image, evaluation is expensive: a
single 24x24 image window already contains about 160,000 possible rectangular features[13].
For cars, the extracted traits must be distinct, different for each vehicle, and able to fully
characterize the vehicle without being affected by a change in the vehicle's position[13].
Black and white squares of the same size and placement serve as the foundation for the Haar
characteristics[10].

Figure 3 The Haar feature

A Haar feature consists of rectangular regions over part of the image in two forms: black
(dark) and white (bright). The Haar-like feature value is calculated from those rectangles.

Figure 4 The Haar feature on car

A single image contains not one but hundreds of Haar features. Using the formula, the sum over
the black area of each Haar feature is subtracted from the sum over its white area. A 10x10
input image is used as the example below.

Figure 5 Image on a digital form

The integral-image values computed from the pixel values are displayed above.

Figure 6 The taken part of Image for Haar feature

Each entry of the integral image is obtained by summing all the original pixels above and to
the left of it, inclusive. For example, the value at the second row and second column is the
sum of the four original pixels at (0,0), (0,1), (1,0), and (1,1): with pixel values 0.1, 0.2,
0.3, and 0.1, the integral value is 0.7.
The computer initially builds a first classifier using the positive photos, evaluates it
against the negative images, and then builds a second classifier with a higher detection
rate[14]. The features are made up of edge and line features; the white bar in the grayscale
image represents the pixels closest to the light source[15].

Figure 7 Integral Image


Calculation with and without the integral image

Once the integral image has been established, the next step is to find the values of the Haar
features. By summing entries of the integral image, the feature values can be determined much
more quickly. The following example compares searching for feature values with and without the
integral image; first, the Haar feature value is determined without using the integral image.

Figure 8 Image without integral image

Figure 9 Image with integral image


To calculate the sum of a rectangular region using the integral image, this equation is used:

Sum = D - B - C + A

where A, B, C, and D are the integral-image values at the rectangle's four corners (A top-left,
B top-right, C bottom-left, D bottom-right).
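The four-lookup computation can be sketched as follows (the tiny integral image is precomputed for a 2x2 example):

```python
def rect_sum(ii, r0, c0, r1, c1):
    """Sum of the original pixels in rows r0..r1, cols c0..c1 using four
    integral-image lookups: Sum = D - B - C + A."""
    D = ii[r1][c1]
    B = ii[r0 - 1][c1] if r0 > 0 else 0                  # block above the rectangle
    C = ii[r1][c0 - 1] if c0 > 0 else 0                  # block left of the rectangle
    A = ii[r0 - 1][c0 - 1] if r0 > 0 and c0 > 0 else 0   # overlap, added back once
    return D - B - C + A

# Integral image of the 2x2 image [[1, 2], [3, 4]]:
ii = [[1, 3],
      [4, 10]]
print(rect_sum(ii, 0, 0, 1, 1))  # 10, the whole image
print(rect_sum(ii, 1, 1, 1, 1))  # 4, the bottom-right pixel recovered from four lookups
```

This is why the cost of a Haar feature no longer depends on the size of its rectangles.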

Figure 10 The Haar feature notation

AdaBoost

After obtaining all the feature values from the Haar features via the integral image, the next
step is to select the discriminative features with AdaBoost. Each sub-image is processed to
determine whether the obtained feature is true or false. If a true feature matches one already
stored in the database from training, the feature is identified and the sub-image matches the
target object; otherwise the feature is discarded, meaning the sub-image does not match the
database.
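The cascade's early-rejection logic can be sketched as below. The toy features, weights, and thresholds are invented for illustration; a real cascade learns them with AdaBoost from thousands of positive and negative images:

```python
def cascade_classify(window, stages):
    """Each stage is (weak_classifiers, stage_threshold), where a weak
    classifier is (feature_fn, feature_threshold, alpha). A sub-image is
    rejected as soon as any stage's weighted vote falls below its threshold."""
    for weak_classifiers, stage_threshold in stages:
        vote = sum(alpha for fn, thr, alpha in weak_classifiers if fn(window) > thr)
        if vote < stage_threshold:
            return False  # discarded early; most windows never reach later stages
    return True  # passed every stage: likely contains the target object

left = lambda w: w[0]   # toy "features" that just read one value
right = lambda w: w[1]
stages = [
    ([(left, 5, 1.0)], 1.0),                   # cheap first stage
    ([(left, 5, 0.6), (right, 5, 0.4)], 1.0),  # stricter second stage
]
print(cascade_classify((9, 9), stages))  # True
print(cascade_classify((1, 9), stages))  # False, rejected by stage 1
```

This early-exit structure is what makes Haar cascades fast enough for the Raspberry Pi: the vast majority of windows are rejected by the first cheap stages.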

Figure 11 AdaBoost decision process


4.1 Basic hardware

Raspberry Pi 4 Model B

Figure 12 Raspberry Pi 4B

Advantages

Powerful processing power on a compact board
Numerous specialized interfaces (UART, I2C, SPI, I2S, CSI, DSI) for connecting a variety of
sensors and electronic components
High-speed connectivity (HDMI, USB, Ethernet, Wi-Fi, Bluetooth)
Low cost: 800 yuan (32 GB storage)
Highly customizable long-term automated image and video recording
Built-in graphics with HDMI support (up to 4K on the latest models)
Low power usage (though greater than microcontrollers); can be powered by a variety of external
batteries and solar panels
Silent, with no moving parts
Backwards compatibility and high transferability[16]

Disadvantages

Comes as a bare circuit board (but many compact cases are available)
Not as powerful as a conventional PC
While programming and electronics expertise are not required for most off-the-shelf solutions,
they must be learned for more complicated applications
No integrated analog-to-digital converter (one can be added)
No sleep mode or power button (possible with power-management HATs)
Customized installations can make standardization and replication more difficult (but this can
be overcome with detailed, standardized online documentation)[16]

Type Single-board Computer


Operating System Multiple operating systems possible
Dimensions 85.6 mm * 56.6 mm // 65*30 mm
Weight 46 g// 9g
Price 800 yuan
Multitasking Yes
Setting up required Yes
Processor 64-bit
Memory 32 GB
Clock speed 4*1.5 GHz // 1GHz
Ethernet Gigabit// Adapter needed
Wi-fi Yes
Bluetooth Yes
USB 2*USB 2&3 // micro-USB
Camera port Yes
Audio port No
HDMI 2*//1*micro-HDMI
Input voltage 5V
GPIO Ports 40 pins: 5V, 3.3V, Ground Digital I/O
Storage MicroSD card
Desktop Interface Yes
Power consumption 3,000 mW// 750 mW

Arduino Uno R3

Figure 13 Arduino Uno R3


Advantages

Affordable and cheap platform

Energy efficient
Extendable, open-source software
Expandable, open-source hardware
Massive community
Cross-platform
Suitable for rapid prototyping[17]

Disadvantages

Low processing speed

Limited memory and storage
Extra work required for activities like scheduling and database storage
Cannot manage highly sophisticated, complex projects
Hardware constraints[17]

Type Microcontroller
Operating system None
Dimensions 68.6mm*53.4mm
Weight 25g
Price 42 yuan
Multitasking No
Setting up required No
Processor 8-bit
Memory 32Kb
Clock speed 16MHz
Ethernet No
Wi-fi No
Input voltage 7-12 V
GPIO 20 pins: 5V, 3.3V Ground Digital, analogue
Shut down required No
Desktop interface No, C/C++ required
Power consumption <250 mW

L298N motor driver

Figure 14 L298N motor driver


The L298N is a dual H-bridge motor driver that can simultaneously control the speed and
direction of two DC motors. The module is built around H-bridges (an H-bridge is a simple
circuit that lets us drive a DC motor forward or backward). In addition to screw terminal
blocks for the ground pin, the motor supply (VCC), and a 5V pin that can be used as an input or
output, the L298N motor driver board provides two screw terminal blocks for motors A and B[18].
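The direction control of one H-bridge channel amounts to a small truth table over its two input pins. The sketch below uses the hypothetical pin names IN1/IN2 for one channel's direction inputs; in my project this logic will actually run as C/C++ on the Arduino, and plain Python is used here only to illustrate it:

```python
def h_bridge_inputs(direction):
    """Logic levels for (IN1, IN2), the two direction inputs of one
    L298N channel. Opposite levels make current flow through the motor
    one way or the other; equal levels stop it."""
    table = {
        "forward": (1, 0),
        "backward": (0, 1),
        "stop": (0, 0),  # motor coasts; (1, 1) brakes on many driver boards
    }
    return table[direction]

print(h_bridge_inputs("forward"))   # (1, 0)
print(h_bridge_inputs("backward"))  # (0, 1)
```

Speed is then set separately by PWM on the channel's enable pin, which the logic above leaves untouched.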

4.1.1 Main components of the proposed system

On successful completion of my project, the robot car will be able to drive itself on a road.
It will also be able to pass cars on the road and to detect traffic lights. There are still
some small components not shown in this paper, for example the DC motors. I use the Haar
cascade image classification algorithm to detect cars and traffic lights; all image
computations are done on the Raspberry Pi single-board computer.

The basic appearance of my car is shown below.

Figure 15 Finished Project's outside view


5 Research work plan

PROJECT IMPLEMENTATION PLAN (START AND END DATES)

2022.11.15 – 2022.11.29: (1) Students select topics and decide on the graduation thesis topic.
(2) Teachers give students the graduation project task.

2022.11.30 – 2023.01.11: Students finish and turn in the opening report, literature review, and
English translation.

2023.01.12 – 2023.01.13: Opening report defense, score evaluation.

2023.01.14 – 2023.02.28: Students create designs for the graduation project, implement some of
the functions, and write program code.

Remaining stages, in order:
Mid-term inspection of the graduation project.
Students develop the full graduation project; write program code and produce design
documentation.
Software testing.
Modify the program and write the paper.
Teacher review.
Students further revise and improve the graduation project.
First defense of the graduation project.

6 References

[1] F. Duarte, C. Ratti, "The impact of autonomous vehicles on cities: A review," Journal of Urban Technology,
vol. 25, no. 4, pp. 3-18, 2018.
[2] S. Karnouskos, "Self-driving car acceptance and the role of ethics," IEEE Transactions on Engineering
Management, vol. 67, no. 2, pp. 252-265, 2018.
[3] W. Payre, J. Cestac, P. Delhomme, "Intention to use a fully automated car: Attitudes and a priori
acceptability," Transportation Research Part F: Traffic Psychology and Behaviour, vol. 27, pp. 252-263,
2014.
[4] S. Karnouskos, "The role of utilitarianism, self-safety, and technology in the acceptance of self-driving
cars," Cognition, Technology & Work, vol. 23, no. 4, pp. 659-667, 2021.
[5] V. K. Kukkala, J. Tunnell, S. Pasricha et al., "Advanced driver-assistance systems: A path toward
autonomous vehicles," IEEE Consumer Electronics Magazine, vol. 7, no. 5, pp. 18-25, 2018.
[6] S. Nyholm, J. Smids, "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?,"
Ethical Theory and Moral Practice, vol. 19, no. 5, pp. 1275-1289, 2016.
[7] R. Johansson, J. Nilsson, "Disarming the trolley problem – why self-driving cars do not need to choose
whom to kill," in Workshop CARS 2016 - Critical Automotive applications: Robustness & Safety, 2016.
[8] M. Daily, S. Medasani, R. Behringer et al., "Self-driving cars," Computer, vol. 50, no. 12, pp. 18-23, 2017.
[9] K. Pavani, P. Sriramya, "Novel vehicle detection in real time road traffic density using Haar cascade
comparing with KNN algorithm based on accuracy and time mean speed," Revista Geintec-Gestao
Inovacao e Tecnologias, vol. 11, no. 2, pp. 897-910, 2021.
[10] R. A. Harahap, E. P. Wibowo, R. K. Harahap, "Detection and simulation of vacant parking lot space using
EAST algorithm and Haar cascade," in 2020 Fifth International Conference on Informatics and Computing
(ICIC), 2020: IEEE, pp. 1-5.
[11] I. M. Hakim, D. Christover, A. M. J. Marindra, "Implementation of an image processing based smart
parking system using Haar-cascade method," in 2019 IEEE 9th Symposium on Computer Applications &
Industrial Electronics (ISCAIE), 2019: IEEE, pp. 222-227.
[12] P. Pankajavalli, V. Vignesh, G. Karthick, "Implementation of Haar cascade classifier for vehicle security
system based on face authentication using wireless networks," in International Conference on Computer
Networks and Communication Technologies, 2019: Springer, pp. 639-648.
[13] L. Zhang, J. Wang, Z. An, "Vehicle recognition algorithm based on Haar-like features and improved
Adaboost classifier," Journal of Ambient Intelligence and Humanized Computing, pp. 1-9, 2021.
[14] L. T. H. Phuc, H. Jeon, N. T. N. Truong et al., "Applying the Haar-cascade algorithm for detecting safety
equipment in safety management systems for multiple working environments," Electronics, vol. 8, no. 10,
p. 1079, 2019.
[15] A. B. Shetty, J. Rebeiro, "Facial recognition using Haar cascade and LBP classifiers," Global Transitions
Proceedings, vol. 2, no. 2, pp. 330-335, 2021.
[16] J. W. Jolles, "Broad-scale applications of the Raspberry Pi: A review and guide for biologists," Methods in
Ecology and Evolution, vol. 12, no. 9, pp. 1562-1579, 2021.
[17] H. K. Kondaveeti, N. K. Kumaravelu, S. D. Vanambathina et al., "A systematic literature review on
prototyping with Arduino: Applications, challenges, advantages, and limitations," Computer Science
Review, vol. 40, p. 100364, 2021.
[18] H. J. Habil, Q. Al-Jarwany, M. N. Hawas et al., "Raspberry Pi 4 and Python based on speed and direction
of DC motor," in 2022 4th Global Power, Energy and Communication Conference (GPECOM), 2022:
IEEE, pp. 541-545.
[19] G. Guido, V. Gallelli, D. Rogano et al., "Evaluating the accuracy of vehicle tracking data obtained from
Unmanned Aerial Vehicles," International Journal of Transportation Science and Technology, vol. 5, no. 3,
pp. 136-151, 2016.
