
LICENSE PLATE RECOGNITION FOR SECURITY APPLICATIONS

ABSTRACT

Vehicle licence plate identification is useful in a variety of situations. In this work, an efficient and remarkably easy approach for recognising the number plate is utilised. The suggested solution employs the OpenCV package and the Python programming language, with py-tesseract for character recognition. The input picture is converted to grayscale before being filtered with a bilateral filter to eliminate unwanted detail. The Canny edge detection method is utilised in this research to identify the edges of licence plates, and TESSERACT, an optical character recognition (OCR) system, reads the characters.

INTRODUCTION

The number of unregistered automobiles in India has skyrocketed together with the population growth, which has only exacerbated the nation's long-standing traffic issue and has led to more accidents and crimes overall [20]. A system that can issue fines promptly is therefore necessary. Admitting only approved automobiles costs business enterprises a significant amount of time and money. What we want, therefore, is a reliable and effective system that can capture the image of a licence plate and extract data from it. Automatic number plate recognition is a technique used to extract the characters of licence plate numbers from an image [21]; it is implemented via image processing.

A licence plate recognition system uses image processing technology to recognise an automobile based on its licence plate. The goal is to develop a vehicle identification system that uses licence plates and is trustworthy, automated, and verified. The system is used to monitor security at the entrance to a restricted area, such as a conflict zone, or in the vicinity of important government buildings, including the Supreme Court, the Parliament, and educational institutions. Anybody can use this technology for security purposes: with an Android app installed on a phone, a member of the general public merely needs to capture and process the image of a licence plate to learn more about a particular automobile. Before the picture of the car is taken, a specially constructed apparatus first detects the vehicle for security reasons. The photo is then segmented, the region containing the number plate is extracted, and the characters are recognised with optical character recognition (OCR). The extracted information is then compared with database entries to retrieve distinctive details such as the owner's name, registration location, and address. The framework is created and tested in Python using simulations of actual picture data. Character recognition software, computing knowledge, and procedures for verified plate identification are all crucial in the examination of licence plates; the core parts of any ANPR system are constructed around them. The Number Plate Recognition system is composed of a camera, an edge capturing device, a computer, and specially created software for image management, analysis, and identification.

LITERATURE REVIEW

The image data was pre-processed using various methods and examined using a bilateral filter. The cropped image of the vehicle number plate is automatically saved in a folder of cropped licence plate photos, where the image text is converted to a string; Tesseract is used to read the text from the image, and the result is displayed in the Python output terminal in [1]. For academics and students studying computer vision, Python and OpenCV are excellent places to start. An ANPR system's camera captures a picture of the car's licence plate, and the vehicle's number is then recognised to retrieve the owner's data and information. An image capturing system may be created with a camera mounted at the entry: when a car pulls up, the camera takes a picture of the front of the vehicle and locates the licence plate before doing further recognition. The door opens if the licence plate is recognised; otherwise, an alarm is sounded. To obtain the vehicle licence number, the image collection system reads the car's licence plate using the NPR system in order to get the vehicle owner's information and data in [2]. The number plate is located using automatic licence plate recognition (ALPR); its strengths are high plate detection accuracy and the ability to detect numerous car licence plates in a single camera frame. Plates with complex backgrounds are tracked accurately and successfully in [3]. Pre-processing, number plate classification (with Inception v3), object detection (utilising SSD), and text detection and recognition (using the Tesseract OCR engine) are the stages of the system in [4]. A vehicle number plate dataset is used to build vehicle number plate detection: the camera takes real-time traffic photographs from the various places, distances, and angles at which the number plates were recognised in [5]. There are two ways of detecting licence plates: one uses Sobel edge detection and the other uses morphological gradient detection. Sobel edge detection is appropriate for vehicle images where the camera shooting direction is parallel to the licence plate area, while licence plate detection based on the morphological gradient is appropriate for vehicle images where the licence plate is tilted in [6]. An open-source library for automatic licence plate recognition that may be used with both static photos and video streams is presented in [7]. A fixed, standard shooting area should only be used in car-park licence plate recognition systems, which have limits; image detection to locate the target can be utilised not only in parking-lot recognition systems but also in monitoring systems, detection of vehicles unlawfully running traffic lights, and so on, so it will become increasingly popular in the future [8]. The effectiveness of a licence plate recognition system in detecting the plate number has been demonstrated; the output of the system is simply the identified plate number [9]. Licence plate recognition makes it likely that all vehicle-related data can be collected: to improve the quality of fusion-based automobile photographs, first extract the licence plate and separate its characters, then use an artificial neural network to recognise the characters on the plate [10].
PROPOSED BLOCK DIAGRAM

A) Algorithm Design

Out of the popular colour-image-based approaches for detecting license plates, we chose the Canny edge detection-based license plate detection technique. The steps in the method are as follows:

 Pre-processing
 Contours
 Optical Character Recognition

Fig.1 Basic LPR block diagram

METHODOLOGY

Work flow:

PRE-PROCESSING
i) Bilateral filter
ii) Canny edge detection

BILATERAL FILTER

A type of de-noising technique known as bilateral filtering can remove noise while keeping sharp edges. The filter is simple to understand, since it replaces each pixel with a weighted mean of nearby pixels; because the weights also depend on the difference between pixel values, it adapts appropriately around edges. Not least, this filter is non-iterative, which makes it quite straightforward.

The box, Gaussian, and bilateral filters are some of the most well-known filters used in image processing, and all of them are used for deblurring and smoothing. Furthermore, certain elements that are present in the original image but hidden by noise are retained, which enhances the final image. After analysing these filters and taking the ideal sigma value into consideration, we conclude that the bilateral filter is the best for this task.

Bilateral filtering is quite helpful in this situation, since we need to blur and smooth our image while still retaining the edges [11]. Numerous technical and scientific fields make use of this filter. In this method, grayscale values are combined with colours, and nearby values are weighted more heavily than far or remote ones. The primary goal of the technique is to eliminate the phantom colours that may appear at the margins of the original image.

Fig.3 Noisy image
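To make the double weighting concrete, here is a toy one-dimensional bilateral filter in plain NumPy. This is an illustrative sketch, not the OpenCV implementation (cv2.bilateralFilter would be used in practice); the radius and sigma values are arbitrary choices.

```python
import numpy as np

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Toy 1-D bilateral filter: each sample becomes a weighted mean of
    its neighbours, where the weight falls off with both spatial distance
    (sigma_s) and difference in value (sigma_r)."""
    out = np.empty(len(signal), dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi].astype(float)
        dist = np.arange(lo, hi) - i
        # Spatial weight times range weight, exactly the bilateral idea.
        w = np.exp(-dist**2 / (2 * sigma_s**2)) * \
            np.exp(-(window - signal[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * window) / np.sum(w)
    return out
```

On a step signal the edge survives because samples on the far side of the step receive a near-zero range weight, which is exactly why the filter smooths noise without blurring the plate boundary.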

Fig.2 LPR work flow

The input image must first be converted to grayscale, adjusting the contrast and brightness so that the image can be blurred to remove noise. The first step is therefore to filter the image to remove noise, so that plate localisation and edge detection are effective. A Gaussian filter is frequently applied for noise reduction [14]. The standard 2-D Gaussian filter is defined as:

G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

where x is the horizontal axis distance from the origin, y is the vertical axis distance from the origin, and σ is the Gaussian distribution's standard deviation.

Fig.4 De-noised image
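The 2-D Gaussian above can be sampled directly to build a smoothing kernel. A small NumPy sketch (the kernel size and sigma are arbitrary illustrative choices; the kernel is normalised to sum to 1, as is standard for a smoothing kernel):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sample G(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2)/(2*sigma^2))
    on a size x size grid centred at the origin, then normalise."""
    ax = np.arange(size) - size // 2          # e.g. [-2, -1, 0, 1, 2]
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()
```

Convolving the grayscale image with this kernel is the Gaussian blur step; OpenCV's cv2.GaussianBlur performs the same operation.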
CANNY EDGE DETECTION

Principle of the Canny edge detection algorithm

This approach is a detection algorithm based on edge characteristics. Processing continues with the text attribute values unchanged: the text itself is not altered, only the data size of the text image is reduced. Several edge detection-based techniques are now in use; the Canny algorithm is employed in this research. Text recognition benefits from this algorithm's ability to handle the edge detection challenge effectively. Since there are many different edge detection techniques, a good algorithm is chosen according to the following criteria:

(1) Strive for the best detection: edge detection should extract as many text edge properties as possible while aiming for the lowest probability of missed detections.

(2) Edge location rule: there must not be a considerable discrepancy between the detected edge position and the position of the actual text edge; the detected edge point must be reasonably close to the true edge point.

(3) Correspondence between the search location and the edge location: the algorithm lines up the search point with the text point.

The Canny edge detection method was created to improve edge recognition, and three crucial requirements were considered for this purpose. The first and most crucial requirement was identifying all of the principal edges in the reference image, with reduced edge error as the main objective. The second condition was that the detected edge points be as close to the correct edge as physically possible. The last criterion was not having several responses to one edge; it was added because the first two conditions were not strict enough to entirely rule out the possibility of more than one response to an edge.

The Canny method is one of the most well-known because it can maintain a low error rate, preserve important information by removing noise, retain few alterations to the original picture, and remove multiple responses near an edge. Based on these criteria, the Canny edge detection method functions as follows [13], in five distinct steps.

Noise reduction: Smoothing may be used to reduce noise; without it, the results are susceptible to image noise, since the mathematics used in edge detection is inherently derivative-based. The image can be made nearly noise-free by smoothing it with a Gaussian blur, which is done by convolving the image with a Gaussian kernel (3x3, 5x5, 7x7, etc.). The kernel size determines the expected amount of blur: the smaller the kernel, the less noticeable the blur.

Calculate image gradient: The Canny edge detection operator is used in the gradient computation stage to calculate the picture gradient and identify the edge and direction intensities. Edge pixels, where grey values vary sharply, are found by computing the picture gradient. The gradient is a vector pointing in the direction of greatest intensity variation. The gradient's vertical and horizontal components are computed first, followed by the gradient's magnitude and direction [15]. The following formulas determine the gradient's magnitude G and angle θ:

Gradient magnitude: G = √(Gx² + Gy²)

Gradient angle: θ = arctan(Gy / Gx)

The horizontal and vertical gradients are represented by Gx and Gy, respectively.

Four filters are used in the Canny edge detection technique to determine the diagonal, vertical, and horizontal edges in the blurred picture. Additionally, the edge direction angle is rounded to one of four angles representing vertical, horizontal, and the two diagonals (0°, 45°, 90°, 135°). The first derivative is estimated in the horizontal direction (Gx) and the vertical direction (Gy) as a consequence [16]. The Canny algorithm then finds the edges where the grey-level intensity fluctuates the most.

Non-maxima suppression (NMS): This step is based on one of the two approaches commonly used to identify edges: the first is to treat edges as zero-crossings of the Laplacian of picture intensity [17, 18]; the second is NMS, which suppresses the local non-maxima of the magnitude of the gradient of image intensity in the gradient's direction. The final image should ideally have sharp edges, so NMS is performed to thin them. NMS can also successfully locate the edge and prevent the occurrence of false edges [18].

NMS works on the gradient magnitudes: the detector converts the thick edges of the image into nearly thin and sharp edges, which are more useful for identification purposes. In this process, the image is scanned along the edge direction, and any pixel value that is not considered to be an edge is rejected, resulting in a thin line in the output image.

Double thresholding: The threshold value is divided into two parts: T1 is a high threshold and T2 is a low threshold.
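The magnitude and angle formulas can be checked with a few lines of NumPy. Here Gx and Gy are estimated with central differences for brevity (the Canny operator proper uses Sobel-like filters; this is a simplified sketch):

```python
import numpy as np

def image_gradient(img):
    """Gradient magnitude G = sqrt(Gx^2 + Gy^2) and angle
    theta = arctan(Gy/Gx), with Gx, Gy estimated by central
    differences (border pixels are left at zero)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # horizontal derivative
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0  # vertical derivative
    # arctan2 handles Gx = 0 without dividing by zero.
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

On a vertical step edge the angle comes out as 0 (the gradient points horizontally, across the edge), matching the rounding of edge directions to 0°, 45°, 90°, 135° described above.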
Pixels with a grayscale value greater than T1 are strong edge pixels and form the edge area; pixels with a grayscale value less than T2 are non-edge pixels and form the non-edge area. For pixels with grayscale values between T1 and T2, the outcome depends on the neighbouring pixels [24]. The objective of this stage is to categorise the pixels into three groups: strong, weak, and irrelevant. We applied this step as follows in this paper:

− Strong pixels are recognised using the high threshold (intensity higher than the high threshold); they are powerful because they contribute to the outer border.

− Weak pixels have an intensity value between the two thresholds: not enough to be taken into account on their own, but too large to be discarded as irrelevant.

− Irrelevant pixels are identified using the low threshold (intensity less than the low threshold); they make no contribution to the edge.
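A small NumPy sketch of the three-way classification, plus the neighbour rule for the in-between pixels (the threshold values in the test are illustrative; np.roll wraps around at the borders, which this sketch ignores):

```python
import numpy as np

def double_threshold(mag, t_low, t_high):
    """Label map: 2 = strong (>= t_high), 1 = weak (between the
    thresholds), 0 = irrelevant (< t_low)."""
    labels = np.zeros(mag.shape, dtype=np.uint8)
    labels[mag >= t_high] = 2
    labels[(mag >= t_low) & (mag < t_high)] = 1
    return labels

def link_weak(labels):
    """Keep strong pixels, plus weak pixels that have a strong pixel
    among their 8 neighbours; everything else is suppressed."""
    strong = labels == 2
    grown = strong.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # Grow the strong mask by one pixel in every direction.
            grown |= np.roll(np.roll(strong, dy, axis=0), dx, axis=1)
    return strong | ((labels == 1) & grown)
```

link_weak is the decision rule for the "depends on the neighbouring pixels" case: a weak pixel survives only when it touches a strong one.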

Track edge by hysteresis: If an edge is not linked to a very clear edge, it is removed from the resulting image; weak edges that are linked to strong edges are kept in the final picture. A pixel is classified as a strong edge pixel if its gradient magnitude exceeds the upper threshold, and as a weak edge pixel if its gradient magnitude falls between the lower and upper thresholds. Strong edges are added to the final edge image right away; weak edges, by contrast, are marked as edges only if they are connected to strong edges.

Fig.5 edge detection using Canny edge detection

CONTOURS
i) Find Contours
ii) Draw Contours

FIND CONTOURS

The find-contour algorithm was created in OpenCV (EmguCV) to recognise two-dimensional pictures or objects being searched for. The OpenCV library, which stands for Open Source Computer Vision, was introduced by Intel in 1999. Objects that have the same intensity or hue can be blocked out by contours, which are simply read as a curve connecting continuous points. In a line of text, the contour of each word is identified using a straightforward procedure that involves tracing the line [12].

Contour finding is most useful for analysing the form of the supplied image, determining the size and dimensions of the object that must be found in the provided image, and identifying particular objects. This is done in order to classify the forms of objects, segment photos, crop objects from the processed image, and many other related tasks.

Fig.6 finding all contours

Fig.7 finding the contours from all the edges

DRAW CONTOURS

Provided you have the boundary points of a form, you may use them to draw any shape's contours. The first parameter is the source picture; the second argument is the contours, which should be supplied as a Python list; the third argument is the index of the contour to draw (helpful for drawing individual contours; pass -1 to draw all contours); and the remaining arguments are colour, thickness, and so on.
Fig.8 drawing the selected contours

OPTICAL CHARACTER RECOGNITION (OCR)

Fig.9 OCR process flow

OCR, also known as optical character recognition or optical character reading, is an electronic or mechanical method of transforming images of typed, handwritten, or printed text into machine-encoded text, whether from scanned documents, images of paperwork, photos of scenes (such as the text on signs and hoardings in a landscape image), or subtitle text superimposed on an image (for example, from a television broadcast).

The techniques in OCR involve:

1. Pre-processing: To increase the likelihood of a successful recognition, OCR software frequently "pre-processes" images. Techniques consist of:
o De-skew – Text lines may need to be rotated a few degrees in either direction to make them totally horizontal or vertical if the document's alignment was incorrect after scanning.
o De-speckle – Smooth out edges and eliminate positive and negative spots.
o Binarization – Converting a picture from colour or grayscale to black and white (called a "binary image" because there are two colours). Binarization is done to make it simple to distinguish the text (or any other necessary visual component) from the background. Since the majority of commercial recognition algorithms work on binary images, binarization is a necessary step. Furthermore, the effectiveness of the binarization stage has a big impact on the quality of the character recognition stage, so it is crucial to pick the binarization technique carefully for each type of input picture (a scanned document, a scene-text picture, an old damaged document, etc.), because the type of input determines how well a given binarization method performs.
o Line elimination – Removes non-glyph lines and boxes.
o Layout analysis or "zoning" – Recognises columns, paragraphs, captions, etc. as separate blocks; important in particular for tables and multi-column layouts.
o Word and character detection – Establishes a framework for word and character forms, separating words as appropriate.
o Script recognition – In multilingual documents, the script may change at the level of the words; identification of the script is therefore necessary before the right OCR can be invoked to handle the specific script.
o Character isolation or "segmentation" – For per-character OCR, multiple characters that are connected due to image artifacts must be separated; single characters that are broken into multiple pieces due to artifacts must be connected.
o Normalise aspect ratio and scale.

2. Character recognition: A ranked list of candidate characters may be generated by one of two fundamental types of core OCR algorithm. Matrix matching, also known as "pattern matching," "pattern recognition," or "image correlation," involves comparing a picture to a stored glyph pixel by pixel. It depends on the stored glyph having the same scale and font as the input glyph, as well as being properly separated from the rest of the picture. This method does not perform well with unfamiliar typefaces and is most effective when used with typewritten material; the early physical photocell-based OCR employed this method quite directly. Feature extraction, the second type, decomposes glyphs into "features" such as lines, closed loops, line direction, and line intersections. The extracted features reduce the dimensionality of the representation, which makes the recognition procedure computationally efficient. These features are compared with an abstract, vector-like representation of a character, which may reduce to one or more glyph prototypes. This form of OCR, which is frequently used in "intelligent" handwriting recognition and in most modern OCR software, is related to general approaches to feature detection in computer vision. To compare image features with stored glyph features and select the closest match, nearest-neighbour classifiers such as the k-nearest-neighbours algorithm are utilised.

Character recognition programs such as Tesseract and CuneiForm employ a two-pass method. Known as "adaptive recognition," the second pass makes greater use of the letter shapes identified with high confidence on the first pass to identify the remaining letters. This is helpful for typefaces that are uncommon or for deformed fonts in low-quality scans (e.g. blurred or faded). The OCR output may be stored in the standard ALTO format, a specialised XML schema maintained by the United States Library of Congress. See "Comparison of optical character recognition software" for a list of available programs.

3. Post-processing: OCR accuracy can be improved if the output is constrained by a lexicon, a list of terms that are permitted to appear in a document. This may be, for instance, the whole English language or a more technical vocabulary for a particular industry. This strategy can be challenging if the text uses proper nouns or other terms that are not part of the vocabulary. Tesseract uses its vocabulary to influence the character segmentation step and increase accuracy. The output stream may be a plain text stream or file of characters, although increasingly advanced OCR systems can maintain the original page layout and generate products such as annotated PDFs that contain both the original image of the page and a searchable textual representation. "Near-neighbour analysis" can employ co-occurrence frequencies to fix mistakes by noticing that specific words are frequently found together; for instance, the term "Washington, D.C." is far more frequently used in English than "Washington DOC." Knowledge of the grammar of the language being scanned also helps determine, for instance, whether a word is likely to be a verb or a noun, allowing greater accuracy. The Levenshtein distance algorithm has also been utilised in OCR post-processing to further enhance the output of an OCR API.
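As a concrete post-processing example, the Levenshtein distance mentioned above can snap a noisy OCR read to the closest entry in a list of known registrations. The plate strings below are made-up illustrations, not data from this work:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance: the minimum number of
    single-character insertions, deletions and substitutions needed to
    turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_plate(ocr_text, known_plates):
    """Snap a noisy OCR read to the nearest known registration."""
    return min(known_plates, key=lambda p: levenshtein(ocr_text, p))
```

A single misread character (a common OCR confusion such as 3 read as J) is one edit away from the true plate, so the lexicon lookup recovers it.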

Fig.10 Text format image of detected license plate.

RESULTS ANALYSIS

Fig.11 and Fig.12 demonstrate the retrieved text, placed alongside the image of the number plate on the licence plate. Tesseract was used to read the output from the cropped image. We obtained 88 precise licence plate numbers after running our algorithm on more than 100 images. In addition, we tested our algorithm on a wide range of random datasets and found that it performed well on most of them. A few errors still occur, which shows that the project needs further work on accuracy: a more capable OCR engine that can read text more accurately, and more efficient filters to reduce background noise.

Fig.11 assessment of license plate recognition in detail

Fig.12 assessment of license plate recognition in detail
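The success rates quoted in the results and in Table 1 follow from simple ratios; a sketch of the arithmetic (the 88.8% in the table appears to be 80/90 with the repeating decimal truncated):

```python
# Results discussion: 88 correct reads out of just over 100 images.
reads_ok, reads_total = 88, 100
print(f"recognition accuracy = {reads_ok / reads_total:.0%}")

# Table 1: 80 of 90 images had their characters segmented, 10 failed.
segmented, total = 80, 90
success_rate = segmented / total * 100
print(f"segmentation success rate = {success_rate:.1f}%")
```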
OUTPUT ACCURACY TABLE

PLATE LOCALIZATION
CHARACTER SEGMENTATION

TABLE 1

TOTAL IMAGES | CHARACTER SEGMENTED | FAILED | SUCCESS RATE | CUMULATIVE SUCCESS RATE
90           | 80                  | 10     | 88.8%        | 80%

CONCLUSION

This study implements an effective approach for detecting automobile licence plates. The input picture was pre-processed using several techniques and evaluated using a bilateral filter. The cropped image of the car's licence plate is automatically saved in a folder called "cropped licence plates photos", where the image text is converted to a string. The result was displayed in the Python output terminal after the text was read from the image using Tesseract. We tested this procedure on a variety of photographs and discovered that it performed as intended for the majority of the images. On pictures of licence plates with white backgrounds, our code operated as intended; images with a lot of background noise did not respond to our technique.

REFERENCES

[1]. A. S. Mohammed Shariff, R. Bhatia, R. Kuma and S. Jha, "Vehicle Number Plate Detection Using Python and Open CV," 2021 International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), 2021, pp. 525-529, doi: 10.1109/ICACITE51222.2021.9404556.

[2]. M. Samantaray, A. K. Biswal, D. Singh, D. Samanta, M. Karuppiah and N. P. Joseph, "Optical Character Recognition (OCR) based Vehicle's License Plate Recognition System Using Python and OpenCV," 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA), 2021, pp. 849-853, doi: 10.1109/ICECA52323.2021.9676015.

[3]. A. Menon and B. Omman, "Detection and Recognition of Multiple License Plate From Still Images," 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), 2018, pp. 1-5, doi: 10.1109/ICCSDET.2018.8821138.

[4]. S. Jain, R. Rathi and R. K. Chaurasiya, "Indian Vehicle Number-Plate Recognition using Single Shot Detection and OCR," 2021 IEEE India Council International Subsections Conference (INDISCON), 2021, pp. 1-5, doi: 10.1109/INDISCON53343.2021.9582216.

[5]. G. Joshi, S. Kaul and A. Singh, "Automated Vehicle Numberplate Detection and Recognition," 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2021, pp. 465-469, doi: 10.1109/Confluence51648.2021.9377101.

[6]. L. Xu, W. Shang, W. Lin and W. Huang, "License Plate Detection Methods Based on OpenCV," 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter), 2021, pp. 11-16, doi: 10.1109/SNPDWinter52325.2021.00012.

[7]. N. H. Lin, Y. L. Aung and W. K. Khaing, "Automatic Vehicle License Plate Recognition System for Smart Transportation," 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), 2018, pp. 97-103, doi: 10.1109/IOTAIS.2018.8600829.

[8]. C. Xu, H. Zhang, W. Wang and J. Qiu, "License Plate Recognition System Based on Deep Learning," 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), 2020, pp. 1300-1303, doi: 10.1109/ICAICA50127.2020.9182382.

[9]. D. Jiang, T. M. Mekonnen, T. E. Merkebu and A. Gebrehiwot, "Car Plate Recognition System," 2012 Fifth International Conference on Intelligent Networks and Intelligent Systems, 2012, pp. 9-12, doi: 10.1109/ICINIS.2012.55.

[10]. F. Ali, H. Rathor and W. Akram, "License Plate Recognition System," 2021 International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), 2021, pp. 1053-1055, doi: 10.1109/ICACITE51222.2021.9404706.

[11]. Mirpouya Mirmozaffari, "Filtering in Image Processing," ENG Transactions, H & T Publication, 2020, hal-03213844.

[12]. P. Manuaba and K. A. T. Indah, "The object detection system of Balinese script on traditional Balinese manuscript with findContours method," Matrix: Jurnal Manajemen Teknologi Dan Informatika, vol. 11, no. 3, pp. 177-184, 2021, doi: 10.31940/matrix.v11i3.177-184.

[13]. A. L. K. a. D. V. Sangam, "Canny edge detection algorithm," International Journal of Advanced Research in Electronics and Communication Engineering (IJARECE), vol. 5, pp. 1292-1295, 2016.

[14]. R. M. a. D. Aggarwal, "Study and Comparison of Various Image Edge Detection Techniques," International Journal of Image Processing (IJIP), vol. 3, 2009.

[15]. Rezai-Rad, "Comparison of SUSAN and Sobel Edge Detection in MRI Images for Feature Extraction," in Information and Communication Technologies (ICTTA), vol. 6, pp. 1103-1107, 2006.

[16]. J.-M. J. a. F. C. C. Wolf, "Text localization, enhancement and binarization in multimedia documents," in Object recognition supported by user interaction for service robots, Quebec City, Quebec, Canada, vol. 2, pp. 1037-1040, 2002.

[17]. J.-M. J. a. F. C. C. Wolf, "Text localization, enhancement and binarization in multimedia documents," in Object recognition supported by user interaction for service robots, Quebec City, Quebec, Canada, vol. 2, pp. 1037-1040, 2002.

[18]. R. Haralick, "Digital step edges from zero crossing of second directional derivatives," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 58-68, January 1984.

[19]. E. A. Sekehravani, E. Babulak and M. Masoodi, "Implementing canny edge detection algorithm for noisy image," Bulletin of Electrical Engineering and Informatics, vol. 9, no. 4, pp. 1404-1410, August 2020, ISSN 2302-9285, doi: 10.11591/eei.v9i4.1837. Available at SSRN: https://ssrn.com/abstract=3904360

[20]. A. Sasi, S. Sharma and A. N. Cheeran, "Automatic car number plate recognition," 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), pp. 1-6, 2017.

[21]. A. Kashyap, B. Suresh, A. Patil, S. Sharma and A. Jaiswal, "Automatic Number Plate Recognition," 2018 International Conference on Advances in Computing Communication Control and Networking (ICACCCN), pp. 838-843, 2018.
