
sensors

Article
Research and Implementation of Vehicle Target
Detection and Information Recognition Technology
Based on NI myRIO
Hongliang Wang *, Shuang He, Jiashan Yu, Luyao Wang and Tao Liu
National Key Laboratory for Electronic Measurement Technology, Key Laboratory of Instrumentation Science &
Dynamic Measurement (North University of China), Ministry of Education, North University of China,
Taiyuan 030051, China; 1306024217@st.nuc.edu.cn (S.H.); 1506024121@st.nuc.edu.cn (J.Y.);
1205014127@st.nuc.edu.cn (L.W.); 1201014310@st.nuc.edu.cn (T.L.)
* Correspondence: wanghongliang@nuc.edu.cn; Tel.: +86-15934042158

Received: 21 December 2019; Accepted: 19 March 2020; Published: 22 March 2020 

Abstract: A vehicle target detection and information extraction scheme based on NI (National
Instruments) myRIO is designed in this paper. The vehicle information acquisition and processing
method based on image recognition is used to design a complete vehicle detection and information
extraction system. In the LabVIEW programming environment, the edge detection method is used to
realize the vehicle target detection, the pattern matching method is used to realize the vehicle logo
recognition, and the Optical Character Recognition (OCR) character recognition algorithm is used
to realize the vehicle license plate recognition. The feasibility of the design scheme in this paper is
verified through the actual test and analysis. The scheme is intuitive and efficient, with high
recognition accuracy.

Keywords: vehicle target detection; vehicle information recognition; NI myRIO

1. Introduction
In the current intelligent transportation system (ITS), the accuracy and efficiency of real-time
vehicle target detection and vehicle identity recognition become more important with the increasing
number of vehicles and the increasingly complex traffic environment [1,2]. Traffic information
monitoring technology based on image recognition has become a hot spot in the transportation field
in recent years. It collects traffic video by the camera, and uses image recognition and processing
technology to identify and process the target area, monitor the traffic condition, and extract traffic
information through a certain algorithm, which has the advantages of convenient installation, wide
coverage, and high detection accuracy. With the development of computer technology, traffic
information detection technology based on image recognition and processing technology has gradually
matured and improved, and will be more and more widely used in intelligent transportation systems [3].
Vehicle detection and recognition is a key component of the intelligent transportation system.
It has broad application prospects in the fields of traffic guidance, assisted driving system, and
road monitoring, and it can provide important clues and evidence for public security cases and
traffic accident investigation [4,5]. The current target recognition methods mainly include statistical
pattern recognition method, template matching recognition method, model recognition method, neural
network recognition method, fuzzy set recognition method, and edge detection method [5–7]. The edge
detection method based on the Canny operator is used in this paper. It has a low computational cost
and high recognition accuracy, and can effectively remove noise and improve recognition reliability [8].
Moreover, its compatibility with NI myRIO is good.

Sensors 2020, 20, 1765; doi:10.3390/s20061765 www.mdpi.com/journal/sensors



At present, the recognition of vehicle identity still relies mainly on license plate information [9].
The license plate recognition system has become key to smart city applications such as
vehicle management, stolen vehicle investigation, and traffic monitoring. When a cloned-plate car appears, or the
license plate is blocked, vehicle recognition relying solely on the license plate will be ineffective [10].
Therefore, vehicle color and logo are added to vehicle identity recognition. The vehicle recognition
method adopted in this paper is based on license plate and supplemented by vehicle color and vehicle
logo information, which makes up for the deficiencies in the traditional license plate recognition and
greatly improves the accuracy and reliability of the vehicle identity recognition. Thereby, it further
improves the reliability of the intelligent transportation system.
In general, color recognition is mainly divided into two steps: (1) color feature extraction and
(2) feature classification and matching. Color feature extraction means that the color information of
the image is statistically analyzed and expressed by a feature vector. Feature classification and
matching means that, based on the expression of the given image color, the color category of the image
is predicted by a classification or matching algorithm, such as nearest neighbor search, a neural network,
or a support vector machine. From the earliest proposal of using a color histogram as a feature to match
the existing library, to the proposal of using color invariant moment to describe the color information
of the whole object, and then to the proposal of extracting the color feature from the local region of the
object to replace the global region, color classification and matching technology has been gradually
improved and matured [11]. The vehicle color recognition method based on NI Vision Assistant color
classification function is adopted in this paper, which is intuitive, efficient, easy to implement, and has
good recognition accuracy.
At present, there is little research on the vehicle logo recognition technique, which is a
new research direction of vehicle recognition technology. The vehicle logo recognition technique
can be widely used in the field of automated safety management, such as criminal vehicle search,
traffic accident handling, and illegal vehicle monitoring. However, it is still difficult to develop and
implement a complete vehicle logo recognition technique due to the unfixed position, different sizes
and various types of the vehicle logo, and the different external environmental conditions. A vehicle
logo recognition algorithm based on convolutional neural network is proposed in [12]. The algorithm
consists of two parts: a knowledge base and an inference engine. Its recognition accuracy is good, but its
recognition speed is slow, and it cannot correctly depict the contour features when dealing with a tilted
vehicle logo, which has a certain influence on recognition. A vehicle logo recognition method based on
HOG (Histogram of Oriented Gradient) and PCA (Principal Component Analysis) is proposed in [13].
It uses HOG features in combination with a multiclass SVM (Support Vector Machine) to deal with
logo recognition, and accurate recognition results are obtained. The classification function of the KNN
(K-nearest neighbor) algorithm based on machine vision is used for training and recognition in this
paper. Its recognition speed is fast, the accuracy is high, and a variety of simple vehicle logos can be
recognized at the same time.
With the improvement of computer performance and the development of computer vision
technology, the research on the license plate recognition technique has gradually developed towards
a systematic direction. License plate recognition is an image processing solution that
captures images of vehicle license plates, segments the characters from the plate area
by detecting and extracting the license plate, and then displays the license plate information by
using feature extraction and the character recognition technique [14]. An improved technique of OCR
(Optical Character Recognition)-based license plate recognition using neural network trained dataset
of object features is proposed and is compared with the existing methods for improving accuracy
in [15]. The license plate recognition scheme based on OCR character recognition algorithm in NI
Vision Assistant is adopted in this paper, which realizes the correct recognition of license plates with
white characters on a blue background with a length of 8 characters. In the complex environment
background, the results of the license plate position recognition and extraction are ideal, the accuracy
of character recognition is high, and the recognition effect for license plates with a large tilting angle
is also good.
The purpose of this paper is to design a mobile platform for intelligent transportation systems to
achieve vehicle target detection and vehicle information acquisition. NI myRIO is used as the hardware
platform and LabVIEW as the software platform in this mobile platform. The information of the
vehicle target is acquired by an industrial camera, and the target detection of the vehicle and the
extraction of the vehicle information are performed simultaneously. Finally, the detection result and
the acquired information are output and displayed.
2. Materials and Methods

The overall flow of vehicle detection and information extraction based on NI myRIO can be
observed from the system block diagram in Figure 1. The image input used a DH-HV3151UC industrial
camera, and the main controller was the NI myRIO-1900. The DH-HV3151UC
industrial camera was a color area-array CMOS image sensor with a resolution of 3.1 million pixels,
supporting continuous acquisition, external trigger, and soft-trigger acquisition. It had a standard USB
2.0 interface that supported hot swap, a DirectX-compatible device driver, and a software interface that
could be directly recognized by HALCON, LabVIEW, and other software. The core chip of NI myRIO
was a Xilinx Zynq, which integrated a 667 MHz dual-core ARM Cortex-A9 and an FPGA with 28k logic
units, 80 DSP slices, and 16 DMA channels, and an integrated USB host for connecting external devices.
[Figure 1 block diagram: image acquisition → NI myRIO (vehicle target detection; vehicle information recognition: vehicle color recognition, vehicle logo recognition, vehicle license plate recognition) → information display.]

Figure 1. Overall design of vehicle target detection and information extraction system.
Specifically, the edge detection method was used to implement vehicle target detection, the color
classification function was used to implement vehicle color recognition, the pattern matching method
was used to implement vehicle logo recognition, and the OCR character recognition algorithm was
used to implement vehicle license plate recognition in real time on NI myRIO. Real-time image data
acquisition and processing were realized through the LabVIEW software platform, and the output
data were displayed on a PC or mobile devices.
3. Vehicle Target Detection Scheme and Implementation
3.1. Edge Detection Method
The edge of the image refers to the part of the digital image where the brightness of a local area
changes significantly. The edge mainly exists between one target and another, and between the target
and the background. It is an important basis for image analysis such as image texture feature extraction
and shape feature extraction. Most of the image information is concentrated in the edge of the image.
The determination and extraction of the image edge are very important for recognition and analysis of
the entire image scene, and they are also an important feature for image segmentation. Edge detection
is mainly the measurement, detection, and localization of grayscale changes of the image. Edge
detection is used to detect the vehicle target in this paper.

Its advantages are a low computational load and high recognition accuracy, and it can be well adapted
to the NI myRIO environment.
The common edge detection operators in NI Vision Assistant include Laplacian operator,
Diff operator, Sobel operator, Roberts operator, and Canny operator, and each of them has a different
algorithm. Canny edge detection uses the first-order differentiation of the Gaussian function, which
can balance the noise suppression and edge detection process well. Therefore, Canny operator is used
to extract the image edge in this paper in order to obtain better edge detection performance.
The two-dimensional Gaussian function is:

G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right). (1)

In order to improve the operation speed, the convolution kernel ∇G can be decomposed into two
one-dimensional row and column filters:

\frac{\partial G}{\partial x} = kx \exp\left(-\frac{x^2}{2\sigma^2}\right) \exp\left(-\frac{y^2}{2\sigma^2}\right) = h_1(x) h_2(y), (2)

\frac{\partial G}{\partial y} = ky \exp\left(-\frac{y^2}{2\sigma^2}\right) \exp\left(-\frac{x^2}{2\sigma^2}\right) = h_1(y) h_2(x), (3)

where h_1(x) = kx \exp\left(-\frac{x^2}{2\sigma^2}\right), h_1(y) = ky \exp\left(-\frac{y^2}{2\sigma^2}\right), h_2(x) = k \exp\left(-\frac{x^2}{2\sigma^2}\right), h_2(y) = k \exp\left(-\frac{y^2}{2\sigma^2}\right),
and k is a constant.

Then, calculate the convolution of Equation (2) with the image f(x, y), and of Equation (3) with the
image f(x, y), respectively:

P_x = \frac{\partial G(x, y, \sigma)}{\partial x} * f(x, y), (4)

P_y = \frac{\partial G(x, y, \sigma)}{\partial y} * f(x, y). (5)
Then, the edge strength A and the direction α can be obtained, as shown in Equations (6) and (7):

A(i, j) = \sqrt{P_x^2(i, j) + P_y^2(i, j)}, (6)

\alpha(i, j) = \arctan\frac{P_y(i, j)}{P_x(i, j)}. (7)
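As a concrete illustration of Equations (2)–(7), the separable Gaussian-derivative filtering and the edge strength and direction computation can be sketched in plain Python. This is only an illustrative stand-in for the NI Vision implementation used in this paper; the kernel radius, σ, and the synthetic step-edge image are assumptions for the sketch:

```python
import math

def gaussian_pair(sigma, radius):
    """1-D filters from Eqs. (2)-(3): h1 is the Gaussian derivative, h2 the Gaussian."""
    h1 = [t * math.exp(-t * t / (2 * sigma ** 2)) for t in range(-radius, radius + 1)]
    h2 = [math.exp(-t * t / (2 * sigma ** 2)) for t in range(-radius, radius + 1)]
    return h1, h2

def convolve_rows(img, k):
    """Convolve each row with kernel k, clamping indices at the borders."""
    r, h, w = len(k) // 2, len(img), len(img[0])
    return [[sum(k[i + r] * img[y][min(max(x - i, 0), w - 1)]
                 for i in range(-r, r + 1)) for x in range(w)] for y in range(h)]

def convolve_cols(img, k):
    """Convolve each column with kernel k, clamping indices at the borders."""
    r, h, w = len(k) // 2, len(img), len(img[0])
    return [[sum(k[i + r] * img[min(max(y - i, 0), h - 1)][x]
                 for i in range(-r, r + 1)) for x in range(w)] for y in range(h)]

def edge_strength_direction(img, sigma=1.0, radius=2):
    """Px, Py via separable filtering (Eqs. (4)-(5)); A, alpha via Eqs. (6)-(7))."""
    h1, h2 = gaussian_pair(sigma, radius)
    px = convolve_cols(convolve_rows(img, h1), h2)  # derivative along x, smoothing along y
    py = convolve_rows(convolve_cols(img, h1), h2)  # derivative along y, smoothing along x
    strength = [[math.hypot(px[y][x], py[y][x]) for x in range(len(img[0]))]
                for y in range(len(img))]
    direction = [[math.atan2(py[y][x], px[y][x]) for x in range(len(img[0]))]
                 for y in range(len(img))]
    return strength, direction

# A vertical step edge: the edge strength A peaks near the boundary column.
image = [[0.0] * 5 + [100.0] * 5 for _ in range(10)]
A, alpha = edge_strength_direction(image)
```

On the step image, A is zero in the flat regions and large at the boundary columns, which is the property the vehicle detection step relies on.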

3.2. Vehicle Target Recognition Method


The vehicle target recognition algorithm in this paper mainly includes image graying processing,
morphology processing, and two edge detections based on the Canny operator. The flowchart of the
vehicle target recognition scheme is shown in Figure 2.
The specific steps are as follows: First, extract the luminance plane and grayscale the image.
Morphology processing uses a 3 × 3 regular-octagon erosion with one iteration. Then, perform
edge detection twice, from right to left and from left to right. If both sides of the vehicle are
detected, the vehicle target is regarded as detected. The edge detection parameter settings are shown
in Figure 3, and the vehicle target recognition program block diagram is shown in Figure 4.
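The two-pass straight-edge check can be sketched on a single scan line. This is a simplified stand-in for the NI Vision straight-edge detection step, not the actual LabVIEW program; the jump threshold and the intensity profiles are illustrative assumptions:

```python
def first_edge(profile, threshold, reverse=False):
    """Index of the first adjacent-pixel jump above threshold, scanning one direction."""
    indices = range(len(profile) - 1)
    if reverse:
        indices = range(len(profile) - 2, -1, -1)
    for i in indices:
        if abs(profile[i + 1] - profile[i]) >= threshold:
            return i
    return None

def detect_vehicle(profile, threshold=50):
    """Mimic the two-pass check: report a target only when a left-to-right scan and
    a right-to-left scan both find an edge, and the two edges bound a region."""
    left = first_edge(profile, threshold, reverse=False)
    right = first_edge(profile, threshold, reverse=True)
    return left is not None and right is not None and left < right

# A bright vehicle region against a dark road along one scan line:
scan_line = [10] * 20 + [200] * 40 + [10] * 20
print(detect_vehicle(scan_line))  # both sides found -> True
print(detect_vehicle([10] * 80))  # flat background -> False
```

Requiring both scan directions to succeed is what filters out one-sided gradients such as shadows at the image border.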
[Figure 2 flowchart: original image → extract color plane → morphology processing → first straight edge detection → second straight edge detection → output test result.]

Figure 2. Flowchart of vehicle target recognition scheme.

Figure 3. Edge detection parameter settings: (a) from right to left and (b) from left to right.

Figure 4. Program block diagram of vehicle target recognition.
4. Vehicle Color Recognition Scheme and Implementation

The vehicle color recognition method based on the color classification function of NI Vision
Assistant is used in this paper. First, create a color sample class and color samples using NI Vision
Assistant. The number of color samples should be neither too small nor too large, so each color class
adds 5–10 samples under different light. Color recognition with sensitivity based on brightness
is adopted for sample recognition. The nearest neighbor training algorithm is used for sample training.
The color recognition process outputs a score for each color and displays the result with the
highest score, thereby realizing the recognition of eight colors: red, yellow, green, cyan, blue, purple,
black, and white. The program block diagram of color recognition is shown in Figure 5, and the eight
color recognition results are shown in Figure 6.

Figure 5. Program block diagram of color recognition.

Figure 6. Color recognition results: (a) white, (b) black, (c) red, (d) blue, (e) yellow, (f) green, (g) cyan, and (h) purple.
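The score-and-keep-highest behavior of the color classifier can be illustrated with a minimal nearest-neighbor sketch. The actual classifier is the NI Vision Assistant color classification function; the RGB sample values below are illustrative assumptions, not the trained sample library:

```python
import math

# Hypothetical training samples: mean (R, G, B) of labeled patches under
# different lighting (the paper uses 5-10 per class; fewer here for brevity).
samples = [
    ((220, 30, 25), "red"), ((180, 20, 20), "red"),
    ((30, 40, 200), "blue"), ((20, 30, 150), "blue"),
    ((240, 240, 240), "white"), ((200, 205, 210), "white"),
    ((15, 15, 15), "black"), ((40, 40, 45), "black"),
]

def classify_color(feature):
    """Score every sample by distance and report the best class, echoing how the
    classifier outputs a score per color and displays the highest-scoring one."""
    best_label, best_score = None, float("inf")
    for ref, label in samples:
        score = math.dist(feature, ref)  # Euclidean distance in RGB space
        if score < best_score:
            best_label, best_score = label, score
    return best_label

print(classify_color((210, 35, 30)))  # -> red
```

Adding samples under different lighting, as the paper does, widens each class's region in this feature space and makes the nearest-neighbor decision robust to brightness changes.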
5. Vehicle Logo Recognition Scheme and Implementation

5.1. Classification Algorithm Introduction

The KNN (K-nearest neighbor) algorithm is a nonparametric statistical method proposed
by Cover and Hart in 1968 for classification and regression. Classification in KNN is made by
measuring the distance between different eigenvalues. The specific method is that if the majority of
the K most similar (that is, nearest) samples of a sample in a feature space belong to a certain
category, that sample also belongs to this category, where K is usually an integer not greater than 20.
In the KNN algorithm, the selected neighbors are all objects that have been correctly classified.
When the data and labels in the training set are known, enter the test data. Compare the features
of the test data with the features corresponding to the training set to find the top K data most similar
to the test data; the predicted category of the test data is then the one with the most occurrences
among the K data. The algorithm steps are:
(1) Calculate the distance between the test data and each training data;
(2) Sort according to the increasing relationship of distances;
(3) Select the K points with the smallest distance;
(4) Determine the occurrence frequency of the category of the first K points;
(5) Return the category with the highest frequency among the top K points as the prediction category
of the test data.
In the KNN algorithm, there are three commonly used distances, namely, Manhattan distance,
Euclidean distance, and Minkowski distance.
Let the feature space X be an n-dimensional real number vector space, x_i, x_j ∈ X,
x_i = (x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n)})^T, x_j = (x_j^{(1)}, x_j^{(2)}, \ldots, x_j^{(n)})^T; the distance L_p of x_i, x_j is defined as

L_p(x_i, x_j) = \left( \sum_{l=1}^{n} \left| x_i^{(l)} - x_j^{(l)} \right|^p \right)^{1/p}, (8)

here p ≥ 1.
here p ≥ 1.
 Xn
When p = 1, it is called ManhattanLdistance, and the xformula − x j (lis:

(l) )
1 xi , x j = i . (9)
l=1
𝐿1 (𝑥𝑖 , 𝑥𝑗 ) = ∑𝑛𝑙=1|𝑥𝑖 (𝑙)
− 𝑥𝑗 |. (𝑙)
(9)
When p = 2, it is called Euclidean distance, and the formula is:
When p = 2, it is called Euclidean distance, and the formula is:
 Xn 1
(l2) 21 2

𝑛 (𝑙)(l) (𝑙)
𝐿L 2 x
2 (𝑥
,x = x −x
𝑖 ,i 𝑥𝑗 )j = (∑𝑙=1 |𝑥𝑖 i − 𝑥𝑗 j | )2 .
. (10)(10)
l=1

When p = ∞, it is the maximum value of each coordinate distance, and the formula is:
When p = ∞, it is the maximum value of each coordinate distance, and the formula is:
 𝑚𝑎𝑥𝑙 |𝑥 𝑖 (𝑙) − 𝑥𝑗 (𝑙) |.
𝐿∞ (𝑥𝑖, 𝑥𝑗 ) = (11)
L∞ xi , x j = maxl xi (l) − x j (l) . (11)
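Steps (1)–(5) together with the distances of Equations (8)–(10) can be sketched as follows. This is a generic KNN illustration, not the NI machine-vision classification function used in this paper; the two-class toy training data are assumptions:

```python
from collections import Counter

def minkowski(a, b, p):
    """L_p distance of Eq. (8); p = 1 gives Eq. (9), p = 2 gives Eq. (10)."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def knn_predict(train, query, k=3, p=2):
    """Steps (1)-(5): compute distances, sort, take the K nearest, count their
    labels, and return the most frequent label as the prediction."""
    dists = sorted((minkowski(x, query, p), label) for x, label in train)  # (1)-(2)
    top_k = [label for _, label in dists[:k]]                              # (3)
    return Counter(top_k).most_common(1)[0][0]                             # (4)-(5)

# Toy 2-D features with two classes:
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((0.1, 0.3), "A"),
         ((2.0, 2.0), "B"), ((2.2, 1.9), "B"), ((1.8, 2.1), "B")]
print(knn_predict(train, (0.1, 0.1)))  # -> A
print(knn_predict(train, (2.1, 2.0)))  # -> B
```

Changing p swaps the distance of Equations (9)–(10) without touching the voting logic, which is why the three distances are interchangeable in KNN.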

5.2. Vehicle Logo Recognition Scheme

The vehicle logo recognition based on pattern matching is adopted in this paper, which is divided
into three parts: the production of library files, the preprocessing of images, and the recognition display
of vehicle logos. The production of the library file includes the creation of the library file, and the addition
and deletion of the vehicle logo sample, as shown in Figures 7 and 8. The vehicle logo images in different
lights are added to the sample library trained by a classification function based on the KNN algorithm.

Figure 7. Program block diagram: (a) creation of library files and (b) deletion of the library files.
Figure 8. Program block diagram of addition of the sample.

Image preprocessing uses grayscale corrosion and then binarization. The classification function in pattern matching is used to compare important features with the features of the known object category. The preprocessing and vehicle logo recognition process is shown in Figure 9. First, the acquired image is preprocessed: the brightness plane is extracted to convert the 32-bit color image into an 8-bit grayscale image. Then, a corrosion operation with a 5 × 5 square structuring element is applied to the image for three iterations, as shown in Figure 10. Next, the image is binarized: pixels with gray values from 0 to 127 are set to 0, and the rest to 255, as shown in Figure 11. Pattern matching and vehicle logo recognition are performed by the classification function, and the matching result is judged. After multiple tests, a test value of 700 was found to meet the identification requirements, so a result is output only when its score is greater than or equal to the test value. The recognition of Honda, Toyota, Volkswagen, Mercedes-Benz, and other vehicle logos is realized. The overall program block diagram of the vehicle logo recognition is shown in Figure 12, and the recognition results are shown in Figure 13.
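For readers without LabVIEW, the erosion, binarization, and score-threshold steps described above can be sketched in plain Python (a simplified illustration, not the NI Vision implementation; the 5 × 5 kernel, three iterations, the 0–127 threshold, and the test value of 700 follow the text):

```python
def erode(img, k=5, iterations=3):
    """Grayscale erosion: each pixel becomes the minimum of its k x k
    neighborhood (clamped at the borders), repeated `iterations` times."""
    r, h, w = k // 2, len(img), len(img[0])
    for _ in range(iterations):
        img = [[min(img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1)))
                for j in range(w)] for i in range(h)]
    return img

def binarize(img, threshold=127):
    """Set gray values 0..threshold to 0 and the rest to 255."""
    return [[0 if p <= threshold else 255 for p in row] for row in img]

def accept_match(score, test_value=700):
    """A logo match is accepted only when the score reaches the test value."""
    return score >= test_value

print(binarize([[100, 200]]))                 # [[0, 255]]
print(accept_match(700), accept_match(699))   # True False
```

In a real deployment these operations run inside the NI Vision VIs; the sketch only makes the per-pixel arithmetic explicit.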
[Flow chart: Start → Extract color plane (grayscale processing) → Morphological processing (corrosion operation) → Image binarization → Pattern matching based on particle detection → Is the pattern matching score greater than the test value? (Y) → Output and display → End]

Figure 9. Process of preprocessing and vehicle logo recognition.

Figure 10. Corrosion operation: (a) before and (b) after.

Figure 11. Binarization: (a) before and (b) after.

Figure 12. Overall program block diagram of the vehicle logo recognition.

Figure 13. Vehicle logo recognition results: (a) Honda, (b) Toyota, (c) Volkswagen, and (d) Mercedes-Benz.
6. License Plate Recognition Scheme and Implementation
6.1. OCR Character Recognition

The OCR (Optical Character Recognition) technique refers to the process in which an electronic device, such as a scanner or digital camera, examines characters printed on paper, determines their shapes by detecting dark and bright patterns, and then translates those shapes into computer characters through character recognition.
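As a toy illustration of this dark/bright-pattern principle (a hypothetical sketch, not the algorithm used by NI Vision Assistant), a binary glyph can be classified by counting pixel disagreements against trained templates, with "?" returned for poor matches:

```python
def ocr_char(glyph, templates, max_diff=1):
    """Return the label of the template with the fewest pixel mismatches,
    or '?' when even the best match differs in more than `max_diff` pixels."""
    best_label, best_diff = "?", float("inf")
    for label, template in templates.items():
        diff = sum(g != t
                   for g_row, t_row in zip(glyph, template)
                   for g, t in zip(g_row, t_row))
        if diff < best_diff:
            best_label, best_diff = label, diff
    return best_label if best_diff <= max_diff else "?"

# Tiny 2 x 2 "templates" just to show the mechanism:
templates = {"A": [[1, 0], [0, 1]], "B": [[1, 1], [1, 1]]}
print(ocr_char([[1, 0], [0, 1]], templates))  # A
print(ocr_char([[0, 1], [1, 0]], templates))  # ?
```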
6.2. License Plate Recognition Scheme
The license plate recognition scheme mainly includes image acquisition, image preprocessing, license plate position extraction, character segmentation, and character recognition. The license plate recognition scheme based on the OCR character recognition algorithm in NI Vision Assistant is used in this design. The design flow chart of license plate recognition is shown in Figure 14, and the design flow chart of OCR character recognition is shown in Figure 15.
[Flow chart: Start → Capture the original image → Image binarization → Denoise → Particle filter → Morphological processing → Detect the blue edge of the license plate → Read characters in the image → Character recognition → Verify license plate characters → Is the number of characters correct, with no "?" character? (Y) → Output and display → End]

Figure 14. Design flow chart of license plate recognition.

[Flow chart: Start → New template library → Add OCR characters → Train OCR characters → Verify and proofread OCR characters → End]

Figure 15. Design flow chart of Optical Character Recognition (OCR) character recognition.
Image preprocessing: This includes binarization, removal of noise interference, particle filtering, and morphological transformation operations on the dynamically acquired images. First, the color image is binarized: pixels in the HSL (Hue, Saturation, Lightness) plane with a hue value of 150–180 and a saturation value of 20–180 are replaced with white, and the others are replaced with black. Then the image is denoised. The IMAQ RemoveParticle VI function is used to carry out particle detection in 8-connectivity mode; the number of corrosion operations is 8, and the shape is octagonal. Then, particle filtering is applied to remove the interference particles. Finally, a 3 × 3 octagonal corrosion operation is carried out on the image with two iterations. The program block diagram is shown in Figure 16.
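The HSL thresholding step can be mimicked in a few lines of Python (an illustrative sketch, not the IMAQ implementation; the hue range 150–180 and saturation range 20–180 follow the text):

```python
def hsl_binarize(pixels, hue_range=(150, 180), sat_range=(20, 180)):
    """Map each (hue, saturation, lightness) pixel to white (255) when it
    falls inside the plate color ranges, otherwise to black (0)."""
    out = []
    for h, s, _l in pixels:
        inside = (hue_range[0] <= h <= hue_range[1]
                  and sat_range[0] <= s <= sat_range[1])
        out.append(255 if inside else 0)
    return out

# A blue-plate-like pixel and a background pixel:
print(hsl_binarize([(160, 100, 120), (40, 10, 200)]))  # [255, 0]
```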

Figure 16. Program block diagram: (a) binarization, (b) denoising, (c) particle filter, and (d) corrosion operation.
License plate position extraction: The background of vehicle images is often complicated in the natural environment, so determining the license plate position and extracting the license plate position information is the key to the entire license plate recognition. In this paper, the license plate position is determined by detecting the blue edge of the license plate, and the license plate is segmented from the image. The IMAQ Particle Analysis VI function is used to carry out particle analysis on the preprocessed image, and the boundary pixel values of the license plate area are extracted. The license plate positioning results are shown in Figure 17.

Figure 17. License plate positioning results: (a) Position 1, (b) Position 2, (c) Position 3, and (d) Position 4.
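The boundary extraction after particle analysis amounts to finding the bounding box of the plate-colored pixels; a minimal sketch on a binary mask (hypothetical, standing in for the IMAQ Particle Analysis VI):

```python
def plate_bounding_box(mask):
    """Return (top, left, bottom, right) of the nonzero pixels in a binary
    mask, or None when the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None
    return rows[0], cols[0], rows[-1], cols[-1]

mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plate_bounding_box(mask))  # (1, 1, 2, 3)
```

The returned corners play the role of the boundary pixel values used to segment the plate from the frame.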
Character recognition and verification: The OCR character recognition algorithm is used for character segmentation and character recognition in this paper. First, the OCR character library is trained, and the trained character template is saved as a file. The character training interface is shown in Figure 18, and the training result is shown in Figure 19. When a character is recognized, the characters of the license plate are compared with those in the character template. The result closest to the character template is output, and the license plate information is displayed. The character recognition parameter settings are shown in Figure 20. In the verification, errors and recognition results containing "?" are eliminated, realizing the correct recognition of license plates with white characters on a blue background and a length of 8 characters. The results of the license plate position recognition and extraction are ideal, and the accuracy of the character recognition is also high. The main program block diagram of license plate recognition is shown in Figure 21, and the recognition results are shown in Figure 22.
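The verification rule described above (exactly 8 characters, no "?" placeholder) is simple enough to state as code; a sketch, with the length parameter taken from the text:

```python
def verify_plate(text, expected_len=8):
    """Accept an OCR result only when it has the expected length and no
    '?' placeholder for an unrecognized character."""
    return len(text) == expected_len and "?" not in text

print(verify_plate("AB12345C"))  # True
print(verify_plate("AB1?345C"))  # False
print(verify_plate("AB12345"))   # False
```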

Figure 18. Character training interface.

Figure 19. Character training result.

Figure 20. Character recognition parameter settings.

Figure 21. Main program block diagram of license plate recognition.

Figure 22. License plate recognition results: (a) License Plate 1 and (b) License Plate 2.
7. System Integration and Performance Analysis
After each module of the vehicle target detection and information extraction system is completed, the whole system is integrated. The whole image is partitioned into three regions and marked with red lines. The program block diagram of image segmentation and extraction is shown in Figure 23, and the results are shown in Figures 24 and 25.

Figure 23. Program block diagram of image segmentation and extraction.

Figure 24. Image segmentation result.

Figure 25. Image extraction result.
The user-defined button of NI myRIO is used to control the program running in the system. When the button is released, the vehicle target detection is carried out. When the button is pressed, the vehicle information extraction and recognition function is carried out to identify the color, logo, and license plate information of the vehicle, and the real-time images and recognition results are displayed on the front panel. LED0–LED2 are used to display the real-time status of the system program. When the vehicle target detection program is running, LED0 lights up. When the vehicle information extraction and recognition program is running, LED1 lights up. When there is a car coming, LED2 lights up, whereas LED2 goes out when there is no car coming. The recognition result is shown in Figure 26, and the overall system program block diagram is shown in Figure 27.
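The LED logic described above reduces to a small truth table; a hypothetical sketch (the function and dictionary names are illustrative, not the NI myRIO API):

```python
def led_status(button_pressed, car_present):
    """LED0: detection mode (button released); LED1: recognition mode
    (button pressed); LED2: a car is currently present."""
    return {
        "LED0": not button_pressed,
        "LED1": button_pressed,
        "LED2": car_present,
    }

print(led_status(button_pressed=False, car_present=True))
```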

Figure 26. Overall system recognition results: (a) Vehicle 1 and (b) Vehicle 2.

Figure 27. Overall system program block diagram.

8. Conclusions

The vehicle detection and information extraction system in this paper uses the NI myRIO hardware platform to realize vehicle target detection based on edge detection, vehicle color recognition based on the color classification function, vehicle logo recognition based on pattern matching, and license plate recognition based on the OCR character recognition algorithm. The system can detect the vehicle target in real time and determine the vehicle identity according to the color, logo, and license plate information of the vehicle. The recognition speed and accuracy are satisfactory. The development cycle is shortened and the scalability is good thanks to the visual LabVIEW programming environment.

The system will be expanded and optimized in the future for more vehicle target processing tasks, including target object segmentation, target object tracking, deep learning, and so on, to further improve the system’s functions and performance.
Author Contributions: All work with relation to this paper has been accomplished by all authors’ efforts. The idea was proposed and conceptualized by H.W. The final draft of the article was reviewed and edited by S.H. The original draft was written by J.Y. The test work was completed by H.W., S.H. and J.Y. The data curation and formal analysis were done with the help of L.W. and T.L. All authors have read and agreed to the published version of the manuscript.

Funding: The Fund for Shanxi ‘1331 Project’ Key Subject Construction (1331KSC); the open project of Key Laboratory of Micro Opto-electro Mechanical System Technology, Tianjin University, Ministry of Education; the open project of Key Laboratory of Dynamic Measurement (North University of China), Ministry of Education, North University of China; the Major State Basic Research Development Program of China (2016YFC0101900).

Conflicts of Interest: The authors declare that there is no conflict of interest.

References

1. Morris, B.T.; Tran, C.; Scora, G.; Trivedi, M.M.; Barth, M.J. Real-Time Video-Based Traffic Measurement and Visualization System for Energy/Emissions. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1667–1678. [CrossRef]
2. Nandyal, S.; Patil, P. Vehicle Detection and Traffic Assessment Using Images. Int. J. Comput. Sci. Mob. Comput. 2013, 2, 8–17.
3. Song, H.; Zhang, X.; Zheng, B.; Yan, T. Vehicle Detection Based on Deep Learning in Complex Scene. Appl. Res. Comput. 2018, 35, 1270–1273.
4. Cho, H.; Hwang, S.Y. High-Performance On-Road Vehicle Detection with Non-Biased Cascade Classifier by Weight-Balanced Training. EURASIP J. Image Video Process. 2015, 1, 1–7. [CrossRef]
5. Hadavi, M.; Shafahi, Y. Vehicle Identification Sensor Models for Origin-Destination Estimation. Transp. Res.
Part B Methodol. 2016, 89, 82–106. [CrossRef]
6. Mu, K.; Hui, F.; Zhao, X. Multiscale Edge Fusion for Vehicle Detection Based on Difference of Gaussian.
Opt.-Int. J. Light Electron Opt. 2016, 127, 4794–4798. [CrossRef]
7. Nur, S.A.; Ibrahim, M.M.; Ali, N.M.; Nur, F.I.Y. Vehicle Detection Based on Underneath Vehicle Shadow
Using Edge Features. In Proceedings of the 2016 6th IEEE International Conference on Control System,
Computing and Engineering (ICCSCE), Batu Ferringhi, Malaysia, 25–27 November 2016; pp. 407–412.
8. Yuan, L.; Xu, X. Adaptive Image Edge Detection Algorithm Based on Canny Operator. In Proceedings of the
2015 4th International Conference on Advanced Information Technology and Sensor Application (AITS),
Harbin, China, 21–23 August 2015; pp. 28–31.
9. Wang, C.-M.; Liu, J.-H. License Plate Recognition System. In Proceedings of the 2015 12th International
Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015;
pp. 1708–1710.
10. Zhao, W.; Li, X.; Wang, P. Study on Vehicle Target Detection Method in Gas Station Complex Environment.
For. Eng. 2014, 30, 74–79.
11. Chen, P. Research on Vehicle Color Recognition in Natural Scene. Master’s Thesis, Huazhong University of
Science and Technology, Wuhan, China, 2016.
12. Huang, Y.; Wu, R.; Sun, Y.; Wang, W.; Ding, X. Vehicle Logo Recognition System Based on Convolutional
Neural Networks with a Pretraining Strategy. IEEE Trans. Intell. Transp. Syst. 2015, 16, 1951–1960. [CrossRef]
13. Llorca, D.F.; Arroyo, R.; Sotelo, M.A. Vehicle logo recognition in traffic images using HOG features and SVM.
In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013),
The Hague, The Netherlands, 6–9 October 2013; pp. 2229–2234.
14. Tejas, B.; Omkar, D.; Rutuja, D.; Prajakta, K.; Bhakti, P. Number plate recognition and document verification
using feature extraction OCR algorithm. In Proceedings of the 2017 International Conference on Intelligent
Computing and Control Systems (ICICCS), Madurai, India, 15–16 June 2017; pp. 1317–1320.
15. Kakani, B.V.; Gandhi, D.; Jani, S. Improved OCR based automatic vehicle number plate recognition using
features trained neural network. In Proceedings of the 2017 8th International Conference on Computing,
Communication and Networking Technologies (ICCCNT), Delhi, India, 3–5 July 2017; pp. 1–6.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
