
Intelligent vision system for quality classification of airport lamp prisms


Jakub Suder, Kacper Podbucki, Tomasz Marciniak, Adam Dąbrowski
Poznan University of Technology, Faculty of Automatic Control, Robotics, and Electrical Engineering
Institute of Automatic Control and Robotics
Division of Electronic Systems and Signal Processing
Jana Pawła II 24, 60-965 Poznań, Poland
e-mail: jakub.a.suder@doctorate.put.poznan.pl

Abstract—The article presents a concept of the analysis of mechanical wear of prisms in the in-pavement airport lamps. The solution is based on image processing techniques that require an appropriate selection of parameters due to the specificity of the objects. During the experimental tests, a database consisting of 316 photos of IDM airport lamps mounted in the airport areas was used. The proposed solution using an artificial neural network allows for the classification of lamps with an efficiency of 81.4%.

Keywords-vision system; airport; lighting control; neural networks; artificial intelligence; Hough transform

Figure 1. An example of a built-in airport lamp, damaged (left) and new (right) [2]
I. INTRODUCTION

Due to growing requirements of the aviation agencies, airports all over the world are increasingly burdened with tasks to improve safety of the air operations [1]. One of the important aspects in this field is the correct lighting of the airport areas, especially runways, where the most critical air operations occur. For this purpose, measuring devices are built [2, 3], which are able to measure the luminous efficiency of each lamp, assess its wear, and then classify it as suitable for further use or requiring replacement [4]. Reduction of the luminous efficiency occurs through weather conditions and the operation of the runway (e.g., through sticky, powdered rubber from aircraft tires). Machines that keep runways and taxiways clean may also damage lamp housings and their prisms, especially during snow removal. Metal brushes scratch the lamp holders and chip their prisms, making it necessary to replace the damaged lamps.

For example, on the runway of the Poznań-Ławica airport, there are 356 in-pavement lamps, distributed over a distance of approx. 2.5 km. Lamp performance is specified in the European Aviation Safety Agency (EASA) standard in Chapter U, "Colours for aeronautical ground lights, markings, signs and panels" [1]. This document presents the intensity of illumination of individual lights and its dependence on the angle of incidence. The given values differ depending on the lamp type and the light color. The standards also define the requirements relating to the airport category. One of the most important parameters is the minimum luminous intensity for the main beam, which lies in a different angular range depending on the lamp type. It should be noted that stricter guidelines have been provided for the runway than for other airport areas, which is justified by the safety requirements of air operations. Fig. 1 shows a lamp damaged after the winter season and a new lamp.

The quality of airport lamps can be assessed in a stationary manner (using a light goniometer) or a mobile one (using luminous intensity measuring platforms [2] or drones [5, 6]). The stationary method is very time-consuming, as it requires disassembly of the lamps. The use of measurement platforms or drones, due to the short time gaps between air operations, ranging from 5 to 10 minutes, necessitates short occupations of the runway areas.

An alternative mobile method is the use of a vision system that allows analyzing the degree of mechanical wear of lamps, mainly their prisms. An advantage of such a solution, in relation to the luminous intensity measurement, is the precise assessment of the degree of degradation of the prisms' frontal plane. The presented solution can also be a supplement to the sensory luminous intensity measurement system. Image acquisition and processing can be performed with the use of embedded systems [7, 8], using dedicated vision sensors [9-11]. Fig. 2 shows a general concept of the described mobile vision system.

Figure 2. Concept of intelligent vision system for the analysis of mechanical wear of the prisms of in-pavement airport lamps

The paper presents various stages of image processing of the built-in airport lamps. For the final classification assessment, a two-layer feed-forward neural network was used, based on the sigmoidal function in the hidden layer and the softmax function (normalized exponential function) in the output layer. Experimental research was carried out using the proprietary database and the Matlab 2022a environment [12].

This research was funded partly by the 2022 subvention and partly by the SMART4ALL EU Horizon 2020 project, Grant Agreement No 872614.
II. GENERAL CONCEPT OF PROPOSED VISION SYSTEM

The general scheme of the image processing is shown in Figure 3. Processing can be divided into two stages:
• detection of the area of interest (ROI),
• decision-making process based on a neural network.

Figure 3. General scheme of the processing

The prism localization and detection is shown in Figure 4. Lamp locating is performed with the circular Hough transform to detect the lamp and then its position and orientation. The prism area is detected with morphological operations and high-pass filtering using the Sobel mask.

Figure 4. Diagram of the detection of lamp and prism

III. DATA SET FOR EXPERIMENTAL TESTS

The in-pavement lamps database consisting of 316 images was prepared for the experimental research. The photos were taken in different weather conditions and at different times of the day, which makes it possible to test the operation of the algorithms in many situations. Attempts were made to obtain different positions of the lamps in the photographs and different backgrounds (asphalt, concrete, or a white background in the case of a lamp in the area of road markings on the airfield). Ultimately, the photos have a resolution of 4928×3264 pixels and a 24-bit color depth. Their format is JPG with the RGB color space. Figures 5, 6 and 7 show examples of different types of in-pavement airport lamps collected in the database.

Figure 5. Taxiway center line lamp IDM5582

Figure 6. Runway center line lamp IDM4582

Figure 7. Touchdown zone lamp IDM4671

The lamps have different positions and brightness in the photos. Additionally, some of them were switched on (the runway centerline lamps work in both directions, the taxiway lamps only in one direction, according to the regulations). Such diversity is valuable because the algorithms can be fine-tuned, allowing for more accurate inspection regardless of the conditions.

IV. LAMP AND PRISM DETECTION PROCESS

The lamp detection and prism localization starts with loading an image from the database, scaled to 15% of its original size (reduction of the data amount is critical for the neural network training [11]). Then the image is converted to grayscale. On this basis, the lamp is searched for in the image using the Circular Hough Transform (CHT), with appropriately selected ranges of the radius value of the lamp being sought. This approach is used because of its resistance to noise, occlusion, and fluctuating illumination.
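The paper uses Matlab for this step; as a rough, self-contained illustration of how the Circular Hough Transform accumulates votes over a range of candidate radii, the following Python sketch (not the authors' implementation; all names and the synthetic image are illustrative) localizes a single circle in a binary edge map:

```python
import numpy as np

def hough_circle(edges, r_min, r_max):
    """Vote in a (row, col, radius) accumulator for each edge pixel
    and return the best-scoring circle (cy, cx, r)."""
    h, w = edges.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        for ri, r in enumerate(radii):
            # Candidate centers lie on a circle of radius r around the edge pixel
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]

# Synthetic edge image: a circle outline of radius 20 centered at (50, 60)
img = np.zeros((100, 120), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 360)
img[np.round(50 + 20 * np.sin(t)).astype(int),
    np.round(60 + 20 * np.cos(t)).astype(int)] = 1

cy, cx, r = hough_circle(img, 15, 25)  # close to (50, 60, 20)
```

Restricting the radius range, as the authors do for the lamp housing, keeps the accumulator small and suppresses spurious circles of the wrong scale.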
Then, smaller circles are searched for, which are the airport lighting mounting bolts. The pixel span has been selected experimentally and optimally for search applications. These operations are followed by an appropriate rotation of the input image in order to obtain the same position for all photos. In addition, the use of the primary image allows for obtaining better quality of data necessary for further steps. Fig. 8 shows the result of the lamp detection operation (blue circle) and its orientation based on bolts (green circles). Table I summarizes the effectiveness of the various stages of the algorithm's operation.

TABLE I. IDENTIFICATION EFFECTIVENESS

Type of operation    Correct identification    Incorrect identification
Lamp detection       90.81%                    9.09%
Lamp orientation     90.21%                    9.79%
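The paper does not give the formula behind the bolt-based rotation; one plausible way to normalize the pose, sketched below with invented helper names, is to take the angle of the line through two detected bolt centers and rotate coordinates by its negative so that the bolt axis becomes horizontal:

```python
import numpy as np

def bolt_axis_angle(bolt_centers):
    """Angle (radians) of the line through two mounting-bolt centers."""
    (x1, y1), (x2, y2) = bolt_centers
    return np.arctan2(y2 - y1, x2 - x1)

def rotate_points(points, center, angle):
    """Rotate point coordinates by -angle about `center`, so that the
    bolt axis ends up aligned with the image x-axis."""
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])
    return (np.asarray(points, dtype=float) - center) @ R.T + center

# Bolts detected on a 45-degree diagonal: after normalization they
# should lie on one horizontal line through the lamp center
bolts = [(10.0, 10.0), (20.0, 20.0)]
angle = bolt_axis_angle(bolts)
aligned = rotate_points(bolts, np.array([15.0, 15.0]), angle)
```

The same rotation would be applied to the full-resolution primary image before cropping the prism region, which is what makes every photo comparable in the later steps.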

Figure 8. The result of the operation of detecting the lamp (blue circle) and its orientation (green circles)

The next operation limits the working area by cropping to the part of the image on which the prism is located. This image is converted to grayscale and the edges are detected. Then morphological operations and filtering with the Sobel operator are performed to determine the position of the searched prism, which also allows reducing the ROI. The finally used Sobel mask has the form:

[0 0 0]
[1 1 1]
[0 0 0]

Based on the conducted experiments, we focused on finding horizontal edges, as this was the optimum solution for our application. As a result, distortions and small edges, which caused errors in the further processing steps, have been partially eliminated.

For the Sobel mask in diamond form, the obtained image was completely devoid of reference points, which made it impossible for the algorithm to determine the position of the prism. Fig. 9 shows the final result of detecting the prism location of the tested built-in lamp.

Figure 9. The result of the prism detection operation

V. PRISM QUALITY CLASSIFICATION

The two-layer feed-forward network implemented using the Open Neural Network Start app (nnstart in the Matlab environment), based on the sigmoidal function in the hidden layer and the softmax function (normalized exponential function) in the output layer, is able to correctly classify the vectors when it contains a sufficient number of neurons in the hidden layer. The network is trained using scaled conjugate gradient backpropagation. The input to the network is the matrix I with dimensions of 1,440,000×284, which contains 284 samples, each with 1,440,000 elements (image vector). The target matrix T, dimensioned 3×284, also represents 284 samples but with 3 elements (prism classification). The database is limited at this stage to 284 images due to incorrect performance in the earlier stages (no lamp detected). The first class (Fig. 10) are undamaged prisms, the second are those that are no longer suitable for operation (Fig. 11). The third group (Fig. 12) consists of images for which it is impossible to judge the wear of the prism. Images in this group are most often the result of incorrect earlier stages of finding the location of the lamp and prism. Fig. 13 shows the ratio of the class distribution in our database.

Figure 10. Examples of tarnished prisms but with no mechanical damage

Figure 11. Examples of damaged prisms

Figure 12. Examples of improper prism segmentation

Figure 13. The ratio of the class distribution in the database (pie chart; the three classes contain 191, 58, and 35 of the 284 images)
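The architecture described above (sigmoidal hidden layer, softmax output, one column per sample) can be illustrated with a minimal numpy forward pass. This is only a structural sketch, not the nnstart implementation: the dimensions are shrunk (the real input matrix I is 1,440,000×284 with a 3×284 target matrix T), the weights are random rather than trained, and scaled conjugate gradient training is omitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Column-wise normalized exponential, stabilized against overflow
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

rng = np.random.default_rng(0)

# Toy stand-in for the image matrix I (features x samples)
n_features, n_hidden, n_classes, n_samples = 100, 10, 3, 8
I = rng.standard_normal((n_features, n_samples))

W1 = 0.01 * rng.standard_normal((n_hidden, n_features))
b1 = np.zeros((n_hidden, 1))
W2 = 0.01 * rng.standard_normal((n_classes, n_hidden))
b2 = np.zeros((n_classes, 1))

H = sigmoid(W1 @ I + b1)   # sigmoidal hidden layer
Y = softmax(W2 @ H + b2)   # class probabilities, one column per sample
pred = Y.argmax(axis=0)    # predicted prism class (0, 1, or 2)
```

Each column of Y sums to 1, so the output can be compared directly against the one-hot columns of T with a cross-entropy error, which is the quantity the validation stop criterion below monitors.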
Samples entering the network input are randomly divided with each call, maintaining the following proportions:
• Training: 70% − 198 images. These images are fed to the network during its training, during which the network is adjusted according to its error.
• Validation: 15% − 43 images. These samples are used to measure the network generalization and to stop the training process when the generalization stops improving. This is indicated by an increase of the cross-entropy error of the validation samples.
• Testing: 15% − 43 images. This part of the process does not affect the training of the neural network; thus it is an independent indicator of performance during and after training.

Figure 14. Neural network confusion matrices

The results of the trained neural network turned out to be very satisfactory: 81.4% of correct classifications were obtained during the tests (Figure 14).

VI. SUMMARY

The paper presents an intelligent vision system and the performed experiments of quality classification for the prisms of in-pavement airport lamps. The assessment was made with the use of a proprietary database prepared in cooperation with Poznań-Ławica Airport, Poland. The specificity of airport lamps requires proper selection of parameters for image processing algorithms and functioning of the neural network. Improvement of the ROI detection efficiency can be sought with lighting systems when taking photos, which is currently the subject of further research. The accuracy obtained in the presented classification process is 81.4% and can be improved by increasing the number of photos in the database and balancing the number of photos in each class, which the authors also intend to do.

The experiments were performed using the standard Open Neural Network Start app in the Matlab environment with standard parameters of neural network operation. In the future, the authors plan to expand the database and examine the impact of neural network parameters on the efficiency and speed of the segmentation and classification process.

REFERENCES

[1] Certification Specifications (CS) and Guideline Material (GM) for Aerodrome Design, Edition 3, Annex to Decision No. 2016/027/R of the EASA Executive Director, European Aviation Safety Agency. Available online: https://www.easa.europa.eu/downloads/21730/en (accessed on 4 July 2022).
[2] Suder J., Maciejewski P., Podbucki K., Marciniak T., Dąbrowski A., Platforma pomiarowa do badania jakości działania lamp lotniskowych [Measurement platform for testing the quality of operation of airport lamps], Pomiary Automatyka Robotyka, R. 23, No. 2/2019, pp. 5-13, https://doi.org/10.14313/PAR_232/5
[3] Suder J., Podbucki K., Marciniak T., Dąbrowski A., Spectrum sensors for detecting type of airport lamps in a light photometry system, Opto-Electronics Review, 2021, 29(4), pp. 133-140, https://doi.org/10.24425/opelre.2021.139383
[4] Podbucki K., Suder J., Marciniak T., Dąbrowski A., Elektroniczna matryca pomiarowa do badania lamp lotniskowych [Electronic measurement matrix for testing airport lamps], Przegląd Elektrotechniczny, No. 02/2021, pp. 47-51, https://doi.org/10.15199/48.2021.02.12
[5] Sitompul D. S. D., Surya F. E., Suhandi F. P., Zakaria H., Runway Edge Light Photometry System by Using Drone-Mounted Instrument, 2019 International Symposium on Electronics and Smart Devices (ISESD), 2019, pp. 1-5, doi: 10.1109/ISESD.2019.8909498
[6] Sitompul D. S. D., Surya F. E., Suhandi F. P., Zakaria H., Horizontal Scanning Method by Drone Mounted Photodiode Array for Runway Edge Light Photometry, 2019 International Seminar on Intelligent Technology and Its Applications (ISITIA), 2019, pp. 41-45, doi: 10.1109/ISITIA.2019.8937211
[7] Barua B., Gomes C., Baghe S., Sisodia J., A Self-Driving Car Implementation using Computer Vision for Detection and Navigation, 2019 International Conference on Intelligent Computing and Control Systems (ICCS), 2019, pp. 271-274, doi: 10.1109/ICCS45141.2019.9065627
[8] Podbucki K., Suder J., Marciniak T., Dąbrowski A., CCTV based system for detection of anti-virus masks, 2020 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2020, pp. 87-91, doi: 10.23919/SPA50552.2020.9241303
[9] Suder J., Podbucki K., Marciniak T., Dąbrowski A., Low Complexity Lane Detection Methods for Light Photometry System, Electronics, 2021, 10, 1665, https://doi.org/10.3390/electronics10141665
[10] Suder J., Możliwości przetwarzania sekwencji wizyjnych w systemach wbudowanych [Capabilities of processing video sequences in embedded systems], Przegląd Elektrotechniczny, No. 01/2022, pp. 188-191, doi: 10.15199/48.2022.01.41
[11] Podbucki K., Suder J., Marciniak T., Dąbrowski A., Evaluation of Embedded Devices for Real-Time Video Lane Detection, 2022 29th International Conference on Mixed Design of Integrated Circuits and System (MIXDES), 2022, pp. 187-191, doi: 10.23919/MIXDES55591.2022.9838167
[12] MATLAB 2022a Documentation. Available online: https://www.mathworks.com/help/matlab/release-notes.html (accessed on 4 July 2022).
