
HW Implementation of Real-Time Road & Lane Detection

in FPGA-based Stereo Camera


Jung-Gu Kim and Jae-Hyung Yoo
R&D Center
VisionST Co., LTD.
Seoul, Korea.
jgkim@visionst.com, jhyoo@visionst.com

Abstract— This paper introduces an HW implementation of real-time road and lane recognition in an FPGA-based stereo camera. Information on the road, such as lanes, stop lines, crosswalks, and directional lines, is the most basic and essential information for a self-driving car. To recognize this information accurately, it is necessary to separate the road from the rest of the actual urban road scene. We implemented the road separation function in an FPGA-based stereo camera for real-time computation and show good experimental results.

Keywords— road sign, lane detection, road separation, stereo camera, FPGA, real-time, Dynamic Programming

I. INTRODUCTION
An autonomous vehicle recognizes the environment surrounding the vehicle in real time using sensors such as lidar, cameras, and ultrasonic sensors mounted on the vehicle, and autonomously travels a planned route without human assistance based on the recognized information. Until now, autonomous driving has shown good performance in precisely prepared areas or on highways.
For safe autonomous driving, the various markings on the road, such as lanes, direction indicators, stop lines, and pedestrian crossings, must first be recognized. Signals such as traffic lights and the signs around the road must also be recognized. A self-driving car additionally needs to recognize the vehicles, bicycles, people, and facilities in its vicinity in real time in order to make accurate and safe driving decisions.
However, when there are many vehicles and obstacles nearby, as in urban areas, autonomous driving performance is not as good as that of a human driver. When a vehicle is on the road, simple image processing is easily misled; in particular, when the color of the vehicle is white, it may cause confusion in stop line or lane recognition.
In the paper [1], we used a stereo camera to separate the road from the vehicles, and effectively recognized the lane using the disparity map information of the separated road. In this paper, we describe an HW implementation that outputs the road image from the left and right input images in real time by implementing the road separation process directly in the FPGA mounted inside the stereo camera.
For an autonomous vehicle to travel on an actual road at a speed of about 60 km/h or more, the recognition latency should be minimized. If all the recognition processes are implemented in software alone, a delay of at least one frame time may occur. However, if real-time HW is implemented in the FPGA, the latency can be held within a few image lines (within about 10 milliseconds).
The output of the stereo camera is a disparity map and a road-only image together with the left and right input color images. Lanes can then be recognized by various algorithms operating on the road-only image. In this paper, we show the results of lane detection using dynamic programming in the same way as in the paper [1]. The processing speed is improved by more than a factor of three compared with the software method.

Fig. 1. Concept of Road and Lane Detection using Stereo Camera.

This work was supported by the ICT R&D program of MSIP/IITP. [2016-
0-00004, Development of Driving Computing System Supporting Real-time
Sensor Fusion Processing for Self-Driving Car]

978-1-5386-7789-6/19/$31.00 ©2019 IEEE


II. HW IMPLEMENTATION

A. HW System
The cameras used in the autonomous vehicle include a stereo camera and front, rear, left, and right cameras, and all cameras must be synchronized with each other so that images are captured at the same time. Fig. 2 shows the block diagram of the real-time synchronization, demosaicing, color correction, gamma correction, color space conversion, and image merging. Each camera image is transmitted to the input interface board through the FPD-LINK3 interface, and the final images after processing are combined into a single image output through the USB3.0 UVC or FPD-LINK3 output interface. Fig. 3 is a more detailed block diagram of the process of Fig. 2.

Fig. 2. Block diagram of Multi-Camera Synchronization.

Fig. 3. Block diagram of image processing flow of Fig. 2.

Fig. 4 is a block diagram showing the process of combining and packing 1, 2, and 4 camera input images. Fig. 5 shows the stereo camera and the FPGA-based HW computing board for real-time processing of the video signals. The HW board consists of a base board, four input interface boards, two output interface boards, and a stereo matching board. The input interface board is designed to receive one stereo camera and two mono cameras, or four mono cameras, via RJ45 connectors. The left and right images input through the stereo camera are processed in real time on an FPGA mounted on the stereo matching board. The stereo matching algorithm is the double Trellis Dynamic Programming of [2] with an enhanced post-processing algorithm developed by VisionST. In addition, the real-time road area separation algorithm is implemented by analyzing the disparity map in real time. Table I shows the hardware specifications.

Fig. 4. Input image packing and merging process.

Fig. 5. Stereo Camera and FPGA based computing Board.

TABLE I. HW SPECIFICATIONS

Item                       Specification
Input channel              Max. 5 channels (default: 4)
Input camera resolution    640x480, 800x600, 950x540, 1280x720, 1920x1080
Output resolution          2560x480 (30), 3200x600 (30), 3840x540 (30),
(frame rate)               5120x720 (30), 7680x1080 (15)
Input camera combination   1 stereo camera + 2 mono cameras / 4 mono cameras
HW Module                  Base board, camera interface boards, stereo
                           matching board, output interface boards
                           (FPD-LINK3 and USB3.0 UVC)
Size (mm)                  160x135x30
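The side-by-side packing of Table I (e.g. four 1280x720 inputs merged into one 5120x720 output) can be illustrated with a short numpy sketch. This is only a software illustration under assumed names; the actual merge runs line-synchronously inside the FPGA:

```python
import numpy as np

def merge_side_by_side(frames):
    """Pack synchronized, same-height camera frames into one wide frame
    by horizontal concatenation, as in the 4 x 1280x720 -> 5120x720
    output mode of Table I (function name is hypothetical)."""
    # All frames must share the same height and pixel type.
    assert len({(f.shape[0], f.dtype) for f in frames}) == 1
    return np.hstack(frames)

# Four synthetic 720p frames, one per camera, filled with the camera index.
frames = [np.full((720, 1280, 3), i, dtype=np.uint8) for i in range(4)]
merged = merge_side_by_side(frames)   # shape (720, 5120, 3)
```

In the hardware, the same effect is achieved by interleaving the synchronized line buffers of each camera, so no full-frame buffering is needed.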
B. Real-Time Road Detection in the Stereo Matching FPGA
The process of separating the road from the input image exploits a property of the disparity map of the road area. If there is no car on the road, the disparity decreases with distance from the front of the vehicle: in the disparity map image, the disparity value is largest at the bottom of the image and decreases toward zero at the top. Fig. 6 shows a disparity map and a road image obtained by removing the vehicle area after determining, by examining the disparity map, whether a vehicle exists on the road.

Fig. 6. Disparity map and road area detection result.

The rule for road separation is shown in equation (1), where Dx denotes the disparity value in the xth column and Dref denotes the disparity map obtained on the road when no vehicle is present. The road area is obtained by keeping only the pixels whose disparity value is not larger than that of the empty road:

    Road(x) = 1, if Dx <= Dref; 0, otherwise.    (1)

Fig. 7 shows the output image of the stereo camera: the left and right input images, the disparity map, and the road-only disparity map.

Fig. 7. Stereo camera output: left and right input images, disparity map and road-only disparity map.

C. Real-Time Lane Detection using Dynamic Programming
After separating the road using the stereo camera, the lane is recognized on the separated road. Recent studies on lane recognition are surveyed in [3]. Dynamic programming (DP) [4] is applied as a method that imitates how a person intuitively finds a lane on the road: the lane can be thought of as the brightest continuous line on the road ahead, to the left and right of the driving vehicle, and DP finds the optimal path connecting the starting and ending points of such a lane, as shown in Fig. 8.

Fig. 8. Lane detection using Dynamic Programming. The lane is an optimal path through the brightest line on the road.

Fig. 9. Cost map and detected lane using Dynamic Programming.

The DP algorithm operates when a lane mark is present on the road; if one of the left and right lane marks disappears, the direct calculation becomes impossible. In this case, the lane is searched for by considering the relation between the left and right lanes: if only one of the two lanes is recognized, the other lane is calculated by referring to the lane information obtained from previous images and the distance between the lanes.
Since VisionST's real-time stereo camera already has a Trellis DP structure, the DP-based lane recognition algorithm can also be implemented in an FPGA to obtain real-time results.
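The two steps above — the disparity test of equation (1) followed by a DP search for the brightest continuous path — can be sketched in a few lines of numpy. This is a minimal software illustration, not the FPGA design: the function names, the tolerance `margin`, and the one-pixel-per-row path constraint are all assumptions made for the sketch.

```python
import numpy as np

def road_mask(disparity, d_ref, margin=4):
    """Equation (1) with a tolerance: keep pixels whose disparity does
    not exceed the empty-road reference profile d_ref by more than
    `margin` (obstacles such as cars stand out with larger disparity)."""
    return disparity <= (d_ref[:, None] + margin)

def dp_lane_path(cost):
    """Find the brightest continuous path from the bottom row to the top
    row of the H x W cost map, letting the column shift by at most one
    pixel per row -- a minimal dynamic-programming lane search."""
    H, W = cost.shape
    acc = cost.astype(np.float64)              # accumulated brightness
    back = np.zeros((H, W), dtype=np.int64)    # backtracking pointers
    for r in range(H - 2, -1, -1):             # sweep bottom-up
        for c in range(W):
            lo, hi = max(0, c - 1), min(W, c + 2)
            j = int(np.argmax(acc[r + 1, lo:hi])) + lo
            acc[r, c] = cost[r, c] + acc[r + 1, j]
            back[r, c] = j
    path = [int(np.argmax(acc[0]))]            # best entry at the top row
    for r in range(H - 1):
        path.append(int(back[r, path[-1]]))
    return path                                # path[r] = lane column in row r

# Tiny synthetic scene: road disparity shrinks toward the top,
# a "vehicle" adds extra disparity, a bright diagonal "lane" on the road.
H, W = 6, 8
d_ref = np.linspace(10, 0, H)
disparity = np.tile(d_ref[:, None], (1, W))
disparity[2:5, 5:7] += 20                      # obstacle with larger disparity
mask = road_mask(disparity, d_ref)             # equation (1): road-only mask
img = np.zeros((H, W))
for r in range(H):
    img[r, 2 + r // 2] = 1.0                   # bright lane pixels
path = dp_lane_path(img * mask)                # DP search on road pixels only
```

Masking before the DP search is what removes the white-vehicle confusion discussed in the introduction: pixels rejected by equation (1) contribute zero brightness, so the path cannot run through them.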
D. Experimental Results
The stereo camera and the HW boards developed by VisionST were connected to ETRI's driving computing platform for lane detection. The computing environment was Ubuntu 14.04.5 LTS with OpenCV 3.1.0, and the stereo camera, with a 5120x720 output at 30 fps, was connected to the platform via the USB 3.0 UVC interface. Fig. 10 shows the vehicle-mounted stereo camera, the hardware board, the ETRI platform, the vehicle test scene, and the lane detection results. We obtained a lane recognition rate of 95.4% at 27 frames per second.

III. CONCLUSION
In this paper, we introduced an FPGA-based HW system developed by VisionST that receives the input images of a stereo camera or several mono cameras and performs real-time stereo matching and road and lane detection. The hardware system can be connected to ETRI's driving computing platform or to a general PC via the FPD-LINK3 or USB3.0 UVC interface, and it minimizes the computation delay compared with a system implemented in SW only.
Dynamic programming and the relationship between the left and right lane information were used for lane recognition. We showed experimental results obtained by mounting the developed system on a real vehicle.
REFERENCES
[1] J. G. Kim, J. H. Yoo, and J. C. Koo, "Road and Lane Detection using Stereo Camera", IEEE BigComp 2018, Shanghai, China, Jan. 2018.
[2] H. Jeong and S. Park, "Generalized Trellis Stereo Matching with Systolic Array", Lecture Notes in Computer Science, Vol. 3358, pp. 263-267, Nov. 2004.
[3] S. P. Narote, P. N. Bhujbal, A. S. Narote, and D. M. Dhane, "A Review of Recent Advances in Lane Detection and Departure Warning System", Pattern Recognition, Vol. 73, pp. 216-234, Jan. 2018.
[4] R. Bellman, "The Theory of Dynamic Programming", Bulletin of the American Mathematical Society, Vol. 60, No. 6, pp. 503-516, 1954.

Fig. 10. Vehicle-mounted stereo camera, hardware board, ETRI platform, vehicle test scene, and lane detection results.
