Method of Processing Images Collected From Self-Driving Car Control Camera 13-9
I. Introduction
Self-driving cars are now applied in many practical fields and play an important role in traffic environments with a high risk of accidents, such as steep or slippery roads, and in intelligent traffic systems. The vehicle's intelligent automatic motion control can help reduce accidents and traffic jams.
In principle, a simple AGV system consists of two main components: the preprocessor and the control unit.
The preprocessor uses sensors, radar, GPS, or computer-vision systems with attached cameras to acquire information from the environment, such as the limits, direction, width, curvature, and flatness of the lanes, as well as the appearance of obstacles. Among these options, using cameras to identify lanes is quite popular and widely used. The advantages of a camera are that it is easy to observe with, provides color, contrast, and optical character recognition, and is affordable. Its main disadvantages are a limited scanning angle and poor performance in low-light conditions.
The controller steers the car or autonomous robot so that it moves within its lane limits and can avoid obstacles on the road if necessary. Autonomous vehicle navigation comprises three main steps: perception and localization, planning, and control. Vehicle control is the final step in the navigation system and is usually performed by one of two independent controllers.
- Lateral controller: adjusts the steering angle so that the vehicle follows the reference line, minimizing the error between the current vehicle position and the predetermined path.
- Longitudinal controller: minimizes the difference between the direction of the vehicle and the direction of the reference line, helping the vehicle move stably without shaking and accelerate and decelerate more smoothly.
In this study, we propose simple image processing algorithms and methods that help autonomous vehicles move and handle obstacles within a given lane limit. We use the concept of lane vectors, based on Non-uniform B-spline (NUBS) theory [1], to construct the limit lines of the left and right lanes. We then describe the Uncertainty Resolving System (URS) with the Visual Grounding (VG) model [2], which detects the object mentioned in a command within the visual scene. Finally, we use the point-minimal-solution motion estimation algorithm [3] to produce the most suitable motion for the autonomous vehicle. Experimental results show that the algorithms are very successful at identifying large urban lanes, even under heavy shadow noise, and at identifying medium- and large-sized obstacles so that the direction of movement can be corrected. However, limitations remain in detecting small obstacles, such as predicting an obstacle on the left while it is actually on the right, or failing to recognize obstacles whose colors blend with the lane.
II. Methods
1. Lane vectors based on Non-uniform B-spline (NUBS) theory
- Where:
+ A_l, B_l, … are the control points for the left lane; A_r, B_r, … are the control points for the right lane.
+ l⃗_1 = A_l B_l and r⃗_1 = A_r B_r are the left and right lane vectors.
- Step 1: Set up 2 horizontal scan lines in the empty area at the bottom of the image and find the corresponding control points A_l, B_l, A_r, B_r.
- Step 2: Build the 2 lane vectors l⃗_1 and r⃗_1, then calculate the angles α_l and α_r according to the formula

α_l = arctg((x_{A_l} − x_{B_l}) / (y_{A_l} − y_{B_l})),  α_r = arctg((x_{A_r} − x_{B_r}) / (y_{A_r} − y_{B_r}))   (1)
- Step 3: Divide the remaining space of the image into 4 parts (based on the length of the lane-map image) using 3 horizontal sweep lines. Stretching the vectors l⃗_1 and r⃗_1, we obtain the vectors l⃗′_2 and r⃗′_2. These two vectors intersect horizontal sweep line 3 at two points C′_l and C′_r. Taking the midpoint C_M of these two points and sweeping to the sides, we obtain the two control points C_l and C_r. If the lane is discontinuous or noisy, we take C′_l and C′_r as the next control points.
α_{l_i} = arctg((x_{l_{i+1}} − x_{l_i}) / (y_{l_{i+1}} − y_{l_i})),  α_{r_i} = arctg((x_{r_{i+1}} − x_{r_i}) / (y_{r_{i+1}} − y_{r_i}))   (2)
α_r = a_r·Δα_{r_2} + b_r·Δα_{r_1} + c_r·α_{r_1}   (6)

Where a_l, b_l, c_l (for the left lane) and a_r, b_r, c_r (for the right lane) are established and selected experimentally. The signs of a_l, b_l, c_l follow, in turn, those of Δα_{l_2}, Δα_{l_1}, α_{l_1}, and the signs of a_r, b_r, c_r follow those of Δα_{r_2}, Δα_{r_1}, α_{r_1}. In the tests here, we take:

|a_l| = |a_r| = 3, |b_l| = |b_r| = 2, |c_l| = |c_r| = 1.
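Equations 2 and 6 can be sketched together as below. This is our own illustration: it assumes the experimentally chosen magnitudes 3, 2, 1 with positive signs, whereas the paper sets the signs per case:

```python
import math

def segment_angle(p0, p1):
    """Angle between two consecutive lane control points (Eq. 2)."""
    (x0, y0), (x1, y1) = p0, p1
    return math.atan2(x1 - x0, y1 - y0)

def smoothed_lane_angle(alpha_1, d_alpha_1, d_alpha_2, a=3.0, b=2.0, c=1.0):
    """Weighted lane angle of Eq. 6.

    a, b, c default to the experimentally chosen magnitudes
    |a| = 3, |b| = 2, |c| = 1; in the paper their signs follow the
    signs of the corresponding angle terms.
    """
    return a * d_alpha_2 + b * d_alpha_1 + c * alpha_1
```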
k_l = 2·(W − p_Vehicle)/W,  k_r = 2·(1 − k_l)   (7)

α_Out = (k_l·α_l + k_r·α_r)/(k_l + k_r) − α_Vehicle   (8)
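A minimal sketch of the steering computation of Equations 7 and 8, with our own function and variable names (the paper gives only the symbols):

```python
def steering_angle(alpha_l, alpha_r, p_vehicle, W, alpha_vehicle):
    """Blend the lane angles into one steering command (Eqs. 7-8).

    Eq. 7 weights each lane by the vehicle's lateral position
    p_vehicle inside a lane of width W; Eq. 8 averages the two lane
    angles with those weights and subtracts the vehicle's current
    heading alpha_vehicle.
    """
    k_l = 2.0 * (W - p_vehicle) / W
    k_r = 2.0 * (1.0 - k_l)
    return (k_l * alpha_l + k_r * alpha_r) / (k_l + k_r) - alpha_vehicle
```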
In our tests, we will use the notation Ens_E, where E indicates the number of models in the ensemble. In addition to correcting a single model, using an ensemble increases accuracy.
Detecting certain and uncertain objects
- Softmax Addition (SA): relies on the softmax probability distribution summing to 1. The top-k objects from O_I are selected based on their probability from the distribution p(O_I|Φ,θ), such that the sum of these k probabilities is higher than the sum of the remaining |O_I| − k probabilities, where |O_I| is the number of objects in O_I.
+ When no more clusters can be merged, i.e., no clusters lie within a distance δ of each other, the cluster with the largest probability is selected.
+ If there is only one object in that cluster, the model is classified as certain.
+ Otherwise, it is uncertain, and all the objects in the highest-probability cluster become members of the candidate set O_c.
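A minimal sketch of the Softmax Addition selection rule (our own function name; the δ-clustering step that decides certain vs. uncertain is omitted here):

```python
def softmax_addition(probs):
    """Smallest top-k index set whose summed probability exceeds the rest (SA).

    probs are softmax scores over the detected objects O_I; since they
    sum to 1, the condition 'top-k sum > remaining sum' means the
    selected mass must exceed 0.5.
    """
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    total, picked = 0.0, []
    for i in order:
        picked.append(i)
        total += probs[i]
        if total > sum(probs) - total:  # top-k sum > sum of the rest
            return picked
    return picked
```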
- Thresholding: makes use of a threshold η for classifying the model as certain or not. In the case of the softmax output of the model, the threshold (trained on the validation set) is applied over the probability distribution p(O_I|Φ,θ) to create the candidate set O_c as follows:

∀o ∈ O_c ⟺ p(o|Φ,θ) > η   (12)
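The thresholding rule of Equation 12 can be sketched as follows (names are ours, for illustration):

```python
def threshold_candidates(probs, eta):
    """Build the candidate set O_c of Eq. 12.

    Every object whose probability exceeds the threshold eta (tuned on
    the validation set) joins O_c; the model counts as certain when
    exactly one object survives.
    """
    candidates = [i for i, p in enumerate(probs) if p > eta]
    return candidates, len(candidates) == 1
```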
R = [cosθ  −sinθ  0; sinθ  cosθ  0; 0  0  1],  t = ρ·[cos φ_v; sin φ_v; 0]   (13)

Where θ is the relative rotation angle and ρ is the scale of the relative translation; the z-axis of V_k points out of the paper. A further observation from the figure is that the angle between ρ and the line perpendicular to the circle at V_k gives φ_v = θ/2. We immediately see that the relative motion between frames V_k and V_k+1 depends on only 2 parameters: the scale ρ and the yaw angle θ.
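The two-parameter relative pose of Equation 13 can be sketched as follows (our own function name; the paper gives only the matrices):

```python
import numpy as np

def ackermann_motion(theta, rho):
    """Relative pose between frames V_k and V_k+1 under Eq. 13.

    Planar Ackermann motion: a rotation about z by the yaw angle theta
    and a translation of length rho along the direction phi_v = theta/2.
    """
    phi_v = theta / 2.0
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    t = rho * np.array([np.cos(phi_v), np.sin(phi_v), 0.0])
    return R, t
```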
E_GC = [E  R; R  0]   (14)

For brevity, let us now drop all the indices on the Plücker-line vector from Equation 2 and simply denote the 2 corresponding Plücker lines as l = [u^T  (t_C × u)^T]^T from frame k and l′ = [u′^T  (t_C′ × u′)^T]^T from frame k+1. Evaluating l′^T·E_GC·l, we get:
a·cosθ + b·sinθ + c·ρ·cos(θ/2) + d·ρ·sin(θ/2) + e = 0   (16)
where, for example, one coefficient takes the form −u_x(t_{C_x}·u_w − t_{C_w}·u_x) − u′_y(t_{C_y}·u_w − t_{C_w}·u_y), and the remaining coefficients are similar combinations of the Plücker-line components.
Here, the subscripts x, y and w refer to the components of the vector. Equation 16 is our new GEC with the Ackermann motion model. We need 2 Plücker-line correspondences to solve for the 2 unknowns ρ and θ in Equation 16. Denoting the set of known coefficients obtained from each Plücker-line correspondence by (a1, b1, c1, d1, e1) and (a2, b2, c2, d2, e2), and using the trigonometric half-angle formulas

cosθ = 1 − 2·sin²(θ/2)   (14a)
sinθ = 2·sin(θ/2)·cos(θ/2)   (14b)
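The two-correspondence system of Equation 16 can also be checked numerically. The sketch below is our own illustration, not the paper's closed-form solver: it eliminates ρ using the first correspondence and root-finds the residual of the second over θ, where the paper's half-angle substitution would instead reduce the system to a polynomial:

```python
import math

def solve_two_point(c1, c2, n=2000):
    """Numerically solve Eq. 16 for (rho, theta) given two coefficient
    tuples (a, b, c, d, e), one per Plucker-line correspondence.

    rho is expressed from the first equation, then theta is found by a
    sign-change scan plus bisection over (-pi, pi).
    """
    def rho_of(theta):
        a, b, c, d, e = c1
        denom = c * math.cos(theta / 2) + d * math.sin(theta / 2)
        return -(a * math.cos(theta) + b * math.sin(theta) + e) / denom

    def resid(theta):
        a, b, c, d, e = c2
        return (a * math.cos(theta) + b * math.sin(theta)
                + (c * math.cos(theta / 2) + d * math.sin(theta / 2)) * rho_of(theta)
                + e)

    thetas = [-math.pi + 2 * math.pi * i / n for i in range(1, n)]
    roots = []
    for lo, hi in zip(thetas, thetas[1:]):
        if resid(lo) * resid(hi) < 0:
            for _ in range(60):          # bisection refinement
                mid = 0.5 * (lo + hi)
                if resid(lo) * resid(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            theta = 0.5 * (lo + hi)
            roots.append((rho_of(theta), theta))
    return roots
```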
III. Results
1. Lane vectors based on Non-uniform B-spline (NUBS) theory
a) Experiment.
Case 1: The lane is almost a straight line, there is no interference, and the lengths of the left and right lanes are approximately the same.
2. Simulation results
Case 1: The lane is curved to the right and the lane is slightly disturbed
Figure 9. Simulation results of case 1
The simulation results show that α_l = 63.889°, α_r = −18.142°, α_out = 22.874°. This indicates that from this position, the car will turn to the right with a relatively large steering angle of 22.874°.
Case 2: The lane curves heavily to the left, and the right and left lane markings are partly lost.
The simulation results show that α_l = 0°, α_r = −66.914°, α_out = −33.457°. This indicates that from this position, the car will turn to the left with a relatively large steering angle of −33.457°.
Figure 13. Top view of trajectory and 3D map points after pose-graph loop closure and full bundle adjustment, compared with GPS/INS ground truth
IV. Conclusions
The above articles have proposed simple and suitable image processing algorithms for the
problem of construction, lane detection, obstacle and steering angle control for autonomous
vehicles to provide appropriate motion on the lane. allow. By using properly installed camera
systems along with simple and powerful algorithms, experiments have shown that the system's
intuitiveness in controlling self-driving cars is completely feasible in remote areas. The traffic
area is not too crowded. High processing speed, image acquisition, more than 13 frames per
second, low latency, the ability to set lane trajectory and provide feedback to obstacles on the
road with relatively high accuracy for see that the car can completely operate freely without
human intervention. Some of the strengths of the system can be seen as follows:
- The lane-vector technique overcomes noise in the image and fully detects lanes that are discontinuous or obscured by other objects.
- The camera system was successfully modeled as a generalized camera using the 2-point minimal solution.
- Combining the URS system with the VG model gives a positive result, increasing accuracy by more than 9% compared to using the VG model alone.
V. REFERENCES
[1] https://123docz.net//document/5855309-xu-ly-anh-xe-tu-hanh.htm
[2] https://husteduvn.sharepoint.com/:b:/s/20213ME2021Technicalwritingandpresentation134869/Ef_a-b7IWP9DpeBbF61i42UBfUHiiW77YGpPa5oP-B7xMQ?e=Ekor8R
[3] https://husteduvn.sharepoint.com/:b:/s/20213ME2021Technicalwritingandpresentation134869/Ea5twn00E-hNlzCiq5dM07ABw0Pxw0ky-AJMCPjV4OKRFQ?e=uUeZre