1 NORCE Technology, Norwegian Research Centre AS, 4879 Grimstad, Norway
2 Department of Engineering Sciences, University of Agder, 4879 Grimstad, Norway
3 Jendra Power AG, 8635 Dürnten, Switzerland
4 Department of Mathematics and Statistics, UiT the Arctic University of Norway, 9019 Tromsø, Norway
Abstract—Aerial inspection of solar modules is becoming increasingly popular in automatizing operations and maintenance in large-scale photovoltaic power plants. Current practices are typically time-consuming, as they make use of manual acquisition and analysis of thousands of images to scan for faults and anomalies in the modules. In this paper, we explore and evaluate the use of computer vision and deep learning methods for automating the analysis of fault detection and classification in large-scale photovoltaic module installations. We use convolutional neural networks to analyze thermal and visible color images acquired by cameras mounted on unmanned aerial vehicles. We generate composite images by overlaying the thermal and visible images to investigate improvements in detection accuracy of faint features related to faults on modules. Our main goal is to evaluate whether image processing with multi-wavelength composite images can improve both the detection and the classification performance compared to using thermal images alone. The hypothesis is that fusion of images acquired at different wavelengths (i.e., thermal infrared and the red, green, and blue visible ranges) would enhance the multi-wavelength representation of faults and thus their histogram feature signatures. The results showed successful detection and localization of faint fault features using composite images. However, the classification of the fault categories did not show significant improvements and needs continued investigation. This research represents a step towards the design of robust automated methods to improve fault detection from airborne images. Further work is still necessary to reach a classification accuracy comparable with the performance of human experts.

Keywords—classification, composite image, deep learning, detection, fault, infrared, module, photovoltaic, visible

I. INTRODUCTION

Efficient monitoring and fault detection are required to maintain high performance and avoid economic loss throughout the lifetime of a photovoltaic (PV) power plant. This has led to a need for efficient fault detection methods for PV modules in large-scale power plants. Aerial inspection by drones is becoming essential to efficiently cover large areas and inspect PV modules by carrying multiple imaging devices. Mellit et al. [1] and Pillai et al. [2] provided comprehensive reviews of various types of PV faults and means for their detection. Machine learning has recently been adopted to automatize detection and classification in several applications, including detecting faults in PV plants using electrical and non-electrical data (e.g., visible or infrared imaging) [1][3][4][5][6]. In power plants, operation and maintenance systems typically integrate monitoring and inspection components to provide timely updates about the PV plant condition and initiate necessary actions, such as raising alarms or scheduling maintenance cycles. The large number of images collected during aerial inspection of large PV power plants makes the analysis demanding. While traditional practice relies on manual labor to analyze thousands of images to correctly identify and classify faults, machine learning (ML) techniques based on deep learning (DL) can significantly speed up and automatize the analysis. In this work, composite images are created from visible (VIS) and infrared thermal (IRT) images, with metadata extracted from both image sources and global positioning system (GPS) coordinates collected by aerial drone imaging of a large-scale PV power plant. Different strategies for image pre-processing (i.e., creating composite images, scaling, de-warping) and object labeling for automated fault detection using supervised ML algorithms are investigated. Our main goal is to evaluate whether image processing such as multi-wavelength composite imaging can improve both the detection and the classification performance, with improved selectivity of faults, compared to using thermal images only.

II. METHOD

We frame the problem as supervised object/pattern detection and region proposal and use the "You Only Look Once" (YOLO) DL architecture [8]. For the work in this paper, we chose YOLOv3, which offers a good trade-off between computational complexity and performance in terms of detection and localization accuracy [9]. This paper focuses on creating three-layered composite images from infrared and visible images. The methodology can be divided into two main parts: i) cleaning the dataset, creating composite images, and annotating the images by drawing a bounding box around each category of fault; ii) training the object detection model YOLOv3 and finally testing the model on new images. Cleaning and preparation of the raw images is here performed manually by filtering away unusable images from the dataset and removing fisheye lens distortion from the images. However, work is ongoing to automate these processes based on calibration parameters for data collection including drone
B. Detection results
The prediction output is given in mean average precision
(mAP) for each class. A mAP of 1 for a given class indicates
that the predicted bounding boxes for that class completely
overlap with the true bounding boxes, while a mAP equal to
zero indicates that the predicted bounding boxes for that class
are always wrong.
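The overlap criterion underlying mAP is the intersection over union (IoU) between a predicted and a true bounding box; a prediction is typically counted as correct when its IoU with a ground-truth box of the same class exceeds a threshold (commonly 0.5). A minimal sketch of the IoU computation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).

    Returns 1.0 for identical boxes and 0.0 for disjoint boxes.
    """
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The per-class average precision is then the area under the precision-recall curve built from these IoU-based matches, and mAP is its mean over classes.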
The results show that YOLOv3 performed well in detecting a fault but struggled to correctly determine the type of fault category. The fault detection accuracy was first tested by using the IRT images alone, see example in Fig. 4. From a total dataset of 880 annotated IRT images, the set was divided into 760 for training and 120 for validation and testing, respectively. An overall mAP fault detection/classification accuracy above 0.5 was achieved, which is considered satisfactory. Indeed, it is difficult to

Fig. 4. (a) Illustration of the deep learning detection process using IRT images only. (b) Example of results from tests of the YOLOv3 algorithm using IRT images only, displaying the successful detection of junction box JBD (ID4) and cell defect combination class CD_CC (ID3; label is hidden behind the JBD label).
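The 880-image split into 760 training and 120 validation/test images can be reproduced as a simple shuffled partition. The helper name and the fixed seed below are illustrative choices for reproducibility, not details from the experiment:

```python
import random

def split_dataset(paths, n_train=760, seed=42):
    """Shuffle annotated image paths and split them into training and
    validation/test subsets (760/120 for the 880-image IRT dataset).

    The seed is an arbitrary illustrative choice; fixing it makes the
    split reproducible across runs.
    """
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    return paths[:n_train], paths[n_train:]
```

Keeping the split fixed ensures that reported mAP values are computed on images the model never saw during training.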