FRUITS AND VEGETABLES SORTING USING COMPUTER VISION
Bachelor of Science
In
Electrical Technology
University of Gujrat
Session 2013-17
ACKNOWLEDGEMENT
We are glad at the completion of our project report. It is the result of the co-operation
and collective effort of our team members: Qasim Raza, Qaisar Ali, Zeeshan Ali and
Umair Mansha. We sincerely thank our project supervisor for his guidance and support.
DEDICATION
We dedicate all our efforts and struggles of our educational life to our dear parents;
without them we are meaningless. In addition, we devote the effort of this project to our
respectable and honourable teachers and supervisor, who taught and supported us throughout.
DECLARATION
We, students of BS Electrical Technology, University of Gujrat, Pakistan, hereby solemnly declare that the data quoted in this
report titled “FRUITS AND VEGETABLES SORTING USING COMPUTER VISION” is based on our original work, and has not yet been submitted or published
elsewhere. We also solemnly declare that the entire report is free of deliberate
plagiarism and we shall not use this report for obtaining any other degree from this or
any other institution. If plagiarism is found at any stage, even after the award of the degree, the degree shall be cancelled or revoked.
--------------------------------------------------------------------------------------------------------
I certify that Qasim Raza, Zeeshan Ali, Umair Mansha and Qaisar Ali, students of BS
Electrical Technology, University of Gujrat, Pakistan, worked under my supervision and the above stated declaration is true to the
best of my knowledge.
Dated: ________________________________
REPORT COMPLETION CERTIFICATE
It is verified that this report titled “FRUITS AND VEGETABLES SORTING USING
COMPUTER VISION”, submitted to the University of Gujrat, Pakistan, contains sufficient material required for the award of
the degree of Bachelor of Science in Electrical Technology.
Table of Contents
Abstract .......................................................................................................................... 1
Introduction ................................................................................................ 2
Methods.................................................................................................... 11
3.9 Raspberry Pi ................................................................................................. 24
3.13 Actuator........................................................................................................ 29
Results ...................................................................................................... 31
4.12.2 Neural Network .................................................................................... 42
Appendix ...................................................................................................................... 59
List of Figures
Figure 1.1: Block diagram of the system. ...................................................................... 3
Figure 3.6: Raspberry Pi camera board V1.3, image from raspberrypi.org ................. 25
Figure 4.9: (a) Ripe, (b) semi ripe and (c) not ripe lemons.......................................... 38
Figure 4.12: Red-Green mean correlation ................................................................... 43
Figure 4.16: Confusion matrix for training, testing and validation .............................. 47
Figure 5.1: Comparison between (Momin, 2013) and our method .............................. 52
List of Tables
Table 3.1: Gaussian kernel, example. .......................................................................... 17
Table 4.1: Determining ripeness using Mean Values of Red, Green and Blue colour
channels ................................................................................................................ 38
ABSTRACT
The focus of this project was mainly on the sorting of fruits; the fruit selected for the
project was lemon. The goal of the project was to automate the sorting and grading process,
which could minimize the human errors caused by manual grading after harvesting. An imaging
chamber was assembled and digital images were captured using a CCD camera.
Furthermore, a mechanism to eradicate image blur was designed, which could take the
image before placing the lemon on the belt. Captured images were enhanced using image
processing techniques and useful features were extracted. The RGB colour space was
used. The background was removed, followed by noise removal. The extracted features
were fruit area, mean of skin colour and global standard deviation of individual colour
channels (red, green and blue), local contrast differences, and local standard deviation
of the three colour channels. These nine features were fed to a back propagation neural
network. The neural network was trained using 99 samples from three classes: ripe, semi-ripe
and a combined class of defective and unripe. The classification accuracy of the
INTRODUCTION
1.1 BACKGROUND
Image processing techniques have been evolving for years. Image processing and
computer vision have been used in many fields and processes, for example in robots and
self-driving cars that use object recognition and edge detection to avoid obstacles. Face
recognition systems use computer vision for security. In agriculture, computer vision
has been applied for several tasks such as grading, counting and sorting for about two
decades. Sorting and grading using computer vision has been improving over time.
It allows farmers to categorize their products accurately and provides better control over
them, enabling farmers to make good decisions about target markets.
Sorting and grading of fruits and vegetables play an important role in the post-harvest
process. Manually sorting and grading products is a very tiring job and requires a lot
of time and many workers. Computer vision techniques, if applied
carefully, can help the farmer categorize fruits and vegetables correctly. Some
product counting techniques, able to count the fruits in an image, have been proposed
and can provide a good estimate of the number of fruits even before harvesting.
Automated sorting and grading of vegetables and fruits using computer vision is
accomplished using digital photographs of the products. The automated sorting and
grading uses non-destructive visual features to classify the products, meaning the
product can be classified quite accurately without damaging it. Visual fruit and
vegetable grading typically involves steps such as:
• Removal of background.
• Calculation of size.
Sorting and grading of fruits are important steps after harvesting and should be carried
out accurately. False grading can affect the farmer’s reputation in the market, which can lead to
long-term financial problems. Incorrect grading can also lead to wastage of food, since a
healthy fruit can be dumped as waste. Conventionally, sorting and grading of fruits after
harvesting is performed by humans, which is a very tiring task and requires a lot of time
and labour. Since manual classification depends solely on human resources, it is prone
to errors; for example, a colour-blind worker can make classification errors. An automated system is not guided by human
intervention and is not subject to biological limits such as tiredness and distraction.
1.3 IMPORTANCE OF LEMON FRUITS
Tropical regions are suitable for the cultivation of citrus fruits. Pakistan is not a big grower
of citrus due to its sub-tropical climate, but citrus are still very important crops.
The Pakistan Horticulture Development & Export Company, a state company under the
Ministry of Commerce, Government of Pakistan, reports the country’s total citrus
production in metric tons. Punjab has favourable growing conditions with adequate water for citrus.
Lemons find their uses in food and drinks such as lemonade, cocktails and soft drinks. Lemons
are used in industry for the production of citric acid and are also used as cleaning agents.
This project was based on computer vision techniques, and the code was written in C++
using the OpenCV library. The process included image segmentation and analysis.
The goal of this project was to design a method for sorting and grading of lemons. The
project was built on image processing and computer vision techniques and was
designed so that the resulting process was automatic. It did not require
any human supervision in predicting the quality of the fruit. Only manual placement of
the fruit in the image capturing chamber was required; after that, the whole process was
automatic. The mechanical system was able to sort the lemons into their specific bins.
LITERATURE SURVEY
Computer vision researchers have long been trying to propose methods for visual
sorting and grading of fruits. Sorting of fruits is mostly done based on
characteristics such as colour, size and surface irregularities. Some advanced
techniques use laser imaging, fluorescent imaging and spectroscopy for defect
detection.
This section reviews various methods and papers for sorting and grading of fruits.
(Kondo & Ting, 1998) showed a fundamental setup to acquire data such as colour, size
and mass. The authors provided a simple prototype for industry to classify the product
and forward it to the proper channel. Modern sorters can sort fruits very fast, at speeds of more
than ten fruits per second, based on colour, shape, defects and stem detection.
(Jahnsa, 2001) sorted tomatoes using computer vision techniques and observed that
tomatoes can be sorted by mass using only images and computer vision.
Mangoes can be sorted based on their colour and shape. Geometric features such as
shape can be compared with a reference shape, and shape analysis is a good feature for varieties
of mangoes. For grading purposes, the pixel value is another good feature: a pixel value
greater than 100 means the skin is good and pure. This method has 83.3% accuracy.
Fruits such as mango can also be sorted based on their maturity. A camera is used to acquire
a digital image of the mango. In the second step, the noise is removed using a pseudo-median
filter. The image is then converted to binary for edge detection. The method is 90% accurate.
To evaluate the quality of fruit, a new method was proposed using the HSI colour model. A
digital image of the fruit, taken with a CCD camera in the RGB colour space, was
transformed into the HSI colour space. A colour intensity histogram of only the hue (H) channel
was calculated. The histogram was provided as input to a back propagation neural network,
whose output was a description of the quality of the fruit (Cui, Wang,
A date fruit sorting and grading system was also proposed, consisting of software
and hardware. The hardware section included a conveyor belt system with a camera
integrated into it. A computer loaded with the software was used to analyse the digital images
of dates and classify them. The overall accuracy was found to be 80%. The problem associated
A robot was designed to identify and pick fruits automatically using computer vision. A
physical system was designed that could be mounted on a tractor. A camera was used to
capture the images, which were further processed to detect defective apples.
Food colour measurement in computer vision applications was reviewed. The paper
analysed the pros and cons of colour measurement for food and proposed the future
scope and trends in the field (Wu & Sun, 2013).
A very intuitive method for apple defect detection was proposed. The method
incorporated automatic light correction, and it counted and distinguished
between true defects and the stem end. The method used a support vector machine for
classification.
(Jhawar, 2015) proposed a lemon sorting system based on pattern recognition
techniques such as nearest prototype, edited multi-seed nearest neighbour and linear
regression. The features extracted were the mean values of red, green and blue, size, standard
deviation, and the minimum and maximum values of the grey-level image. They collected their samples
from different locations in India, consisting of five different breeds. The scope of their
research was limited to ripeness measurement only. Our model closely resembles this
model in ripeness measurement but goes beyond this research in terms of defective
fruit detection. Their system was able to perform at 100% accuracy using linear
regression.
(Seema, Kumar, & Gill, 2015) prepared a fruit recognition system to sort mixed fruits
based on the type of fruit. The features used for fruit recognition were shape, size and colour.
(Khojasteh, 2010) proposed a lemon grading embedded system based on colour and
volume only; no defect-based classification was done. Greenish lemons with smaller
size were considered grade B, while larger yellowish lemons were considered grade A.
(Momin, 2013) proposed a very advanced technique for lemon defect detection.
Fluorescent imaging was the basis of the research, since it can extract the
fluorescent component from the peels of citrus fruits. The technique of spectroscopy
helps identify the chemical composition of the lemon peel, which in turn can be used for
defect detection.
(Khoje, 2013) used the Curvelet transform for pattern recognition. Fruit quality was
evaluated using a multi-resolution technique that works on lower and higher resolutions to extract both local and global
features related to the fruit’s surface. The technique was evaluated on lemons and guavas.
Textural features extracted from the Curvelet transform were standard deviation, energy,
entropy, and mean. A Probabilistic Neural Network and a Support Vector Machine were
trained using these features and performance was evaluated for two classes, healthy and defective.
(Swapnil S. Pawar & Dale, 2016) designed a system to recognize a fruit based on
features such as roundness value and colour. Once the object is recognized as a fruit,
thresholding is used to isolate the defective area: if a pixel value exceeds a threshold
value, it belongs to the pure skin, otherwise the pixel belongs to the defective area.
All such pixels are counted to get the total defective area.
(Iqbal, 2016) devised an approach to sort citrus fruits, especially lemons, oranges and
sweet limes. A single-view image was shown to be enough for classification based on colour
features. Only the hue from the HSV colour space was used for classification. Different
distributions were used to evaluate the classification accuracy, and an accuracy of 90% was
obtained based on colour classification. Moreover, colour variability, measured using the hue
mean and hue median, was used for fruit maturity analysis.
Defect detection on spherical fruits is a tough task due to uneven lighting around
the spherical shape. The study covered different defects such as scarring and copper burn,
which are common in oranges. Non-uniform spherical orange images were transformed
using a Butterworth filter, resulting in an even lighting distribution. It was observed that the
stem end was detected as a defect by the algorithm; the red to green ratio in the colour image,
along with big-area and elongated-region removal algorithms, was used to detect the stem
end. The method detected defects extremely well, with an accuracy of 98.9%. However,
it could not discriminate between the types of defects (Li, 2013).
(Blasco, 2014) designed an automated system for citrus fruit harvesting. The authors
realized that field conditions vary massively; to make the system consistent, a good
and efficient lighting system was necessary, along with a low-power processing unit.
Our method used various techniques presented in these papers.
METHODS
This chapter describes the methods and algorithms for lemon fruit sorting, which comprise
pre-processing, feature extraction and neural network training.
The algorithm used in the system has seven major steps, shown in Figure 3.1 and Figure 3.2.
Figure 3.3: Flow chart for machine training
Figure 3.4: Flow chart for testing
3.1 IMAGE ACQUISITION
Capturing the digital image is the very first step in image processing. A controlled light
source is required to get a better image. Moreover, the background and the distance from the
camera to the object should also be controlled in order to get better pictures and
consistent results; these factors have a great effect on image segmentation. The image
was captured by a CCD camera, which produced colour images in the RGB colour space.
The exposure value and AWB gains for the camera were set manually to get images with
consistent parameters. Automatic metering was turned off and images were captured as
frames from the video stream.
3.2 CROPPING
An imaging chamber was used to capture the image, with a physical
arrangement such that the fruit could only appear in a fixed central region of the camera’s
field of view. The probability of finding the fruit outside that region was zero. Therefore,
it was practical to crop only the central region and perform further operations on that
region alone. This saved memory and processing power, speeding up the whole process.
3.3 BACKGROUND REMOVAL
The background should be removed so that the system performs calculations on
the object of interest only. Background removal can save a lot of processing power later
and reduces the complexity of later algorithms. Since we used a fixed black background, the
algorithm checks the pixel values of all three channels (R, G, and B). Knowing the
pixel intensity range for the fruit, all other values are set to zero. The process was as follows:
Algorithm 1:
For each pixel p(x, y) of image i with values v_red(x, y) and v_green(x, y):
    If |v_red(x, y) − v_green(x, y)| < threshold:
        p(x, y) = 0
Since the background used was black, under different lighting conditions it could
only produce levels of grey. Using the fact that grey levels always have all three channel
values (R, G, and B) nearly equal, green channel values were subtracted from red channel
values, and whenever the absolute difference was below a threshold, the pixel was set to black.
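As a rough illustration, this red–green difference rule can be sketched in plain C++ without OpenCV. The `Pixel` struct and the threshold value of 20 are illustrative choices for this sketch, not values taken from the project:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Pixel { uint8_t r, g, b; };

// Set near-grey pixels to black: a black belt under varying light can only
// produce shades of grey, whose R and G values are nearly equal, so a small
// |R - G| difference marks a background pixel.
void removeBackground(std::vector<Pixel>& image, int threshold) {
    for (Pixel& p : image) {
        if (std::abs(static_cast<int>(p.r) - static_cast<int>(p.g)) < threshold) {
            p = {0, 0, 0};
        }
    }
}
```

A grey pixel such as (60, 62, 61) is zeroed, while a yellow fruit pixel such as (180, 160, 70) survives because its red–green gap is large.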
3.4 NOISE REMOVAL
Throughout the whole sensing process, noise is added from various sources, which
may include fixed pattern noise, dark current noise, shot noise and amplifier noise.
Various filters are used to remove noise, but the Gaussian filter is a natural choice for noise
removal. The Gaussian filter is used extensively in image processing and signal processing. It is
used at the pre-processing stage and provides good results under all kinds of noise. The Gaussian
filter is a low pass filter and thus reduces high frequency components and smooths the
image. It is a little slower at runtime than the box filter and the median filter. Finding an
appropriate filter size is necessary so that the image does not become too flat. The image
is convolved with a Gaussian kernel of odd size, such as 3x3 or 5x5 and so on.
Larger filter sizes are slower and can reduce details in the image.
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))        (3.1)
The kernel is centred at every pixel of the image and multiplied element-wise, followed
by a summation of all terms. The result is the new value of the central pixel, and the process
is repeated over the whole image.
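Equation (3.1) can be turned into a discrete kernel as sketched below in plain C++. The final normalization step (dividing so the weights sum to one, which preserves overall brightness) is standard practice rather than something stated explicitly in the text:

```cpp
#include <cmath>
#include <vector>

// Build a normalized (2k+1)x(2k+1) Gaussian kernel from G(x, y) in eq. (3.1).
// k = 1 gives a 3x3 kernel, k = 2 a 5x5 kernel, and so on.
std::vector<std::vector<double>> gaussianKernel(int k, double sigma) {
    int n = 2 * k + 1;
    std::vector<std::vector<double>> kernel(n, std::vector<double>(n));
    double sum = 0.0;
    for (int y = -k; y <= k; ++y) {
        for (int x = -k; x <= k; ++x) {
            double v = std::exp(-(x * x + y * y) / (2.0 * sigma * sigma));
            kernel[y + k][x + k] = v;
            sum += v;
        }
    }
    // Normalize so the weights sum to 1, keeping image brightness unchanged.
    for (auto& row : kernel)
        for (double& v : row) v /= sum;
    return kernel;
}
```

The central weight is always the largest, and weights fall off with distance from the centre, which is why larger kernels blur (and slow down) more.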
3.5 COLOUR FEATURES
The colour feature used to determine the ripeness of the lemon was the global mean value of
each colour channel. Jhawar (2015) used the mean along with the standard deviation to determine
four classes of ripeness. A ripe lemon fruit has a red channel intensity around 180, green from
150 to 180, and blue below 80.
3.5.1 Mean Value
The mean value of the colour channels gives knowledge about ripeness. Humans consider a
yellow lemon as ripe, greenish yellow as semi-ripe and green as not ripe. The mean values of
the red, green and blue channels were used to classify the fruit as ripe, semi-ripe or not ripe
at all. The colour space used was the RGB colour space; by convention, it is closest to the
human visual system, where three kinds of colour receptor cells (red, green and blue) sense the
colours. A high mean value of red shows that the lemon is ripe, because in the RGB colour
space yellow has a strong red component.
A thing worth mentioning is that while computing the mean, background pixels (which
were previously set to zero) should never be considered part of the mean computation,
because they yield false values. A mask was used while computing the mean to consider
only the fruit pixels.
3.6 SIZE
Size is a very important feature; usually humans consider a bigger fruit as the better
quality one. In computer vision, the size of an object can be determined by counting the
number of pixels covered by that object in the digital image. It is not very accurate, but
it certainly provides a very good estimate. The way it worked is given as:
• Count the number of non-zero pixels of the image (all other pixels were set to
zero).
• Took the real object and found its diameter d using Vernier callipers.
• Used the formula A = πd²/4 to calculate the area.
• The number of pixels corresponds to this area.
Initially, surface defects were determined using region-based segmentation (Mohana &
C.J., 2015). The technique provided good results, but region-based segmentation turned
out to be a time-consuming process for our ARM-based processing unit. Centre-surround
difference, in contrast, is a type of spatial filtering that is independent of global context
(Vonikakis & Winkler).
The algorithm computes the average pixel intensity of a local 5x5 neighbourhood. The
central pixel intensity is subtracted from the average to check the local contrast difference.
If the absolute difference exceeds some predetermined value, the contrast for
that pixel is high and the pixel is set to logical high; otherwise the pixel is set to zero.
We used this technique to detect local contrast differences because the resultant image
is a contrast map where only strong contrast differences are shown. Knowing that
defect-free skin is smooth and does not produce strong contrasts, the
mechanism was used to detect strong local contrasts, and pixels with strong contrasts were
treated as defect candidates.
The image was rescaled to 40% in both directions to increase the filter strength. The algorithm
for centre-surround (Frintrop, 2006) is presented below.
Algorithm 2:
For each pixel v(x, y):
    Centre = v(x, y)
    Surround = mean of the 5x5 neighbourhood of (x, y)
    If |Centre − Surround| > threshold:
        i(x, y) = 1
    Else:
        i(x, y) = 0
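A direct C++ sketch of this centre-surround contrast map follows; it takes the 5x5 mean to include the centre pixel itself, which is one reasonable reading of the description (the threshold value is left to the caller, as in the project):

```cpp
#include <cmath>
#include <vector>

// Centre-surround contrast map: mark 1 where a pixel differs from the mean
// of its 5x5 neighbourhood by more than a threshold. Border pixels, whose
// neighbourhood falls outside the image, are left at 0.
std::vector<std::vector<int>> contrastMap(
        const std::vector<std::vector<double>>& img, double threshold) {
    int h = static_cast<int>(img.size());
    int w = static_cast<int>(img[0].size());
    std::vector<std::vector<int>> out(h, std::vector<int>(w, 0));
    for (int y = 2; y < h - 2; ++y) {
        for (int x = 2; x < w - 2; ++x) {
            double sum = 0.0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx)
                    sum += img[y + dy][x + dx];
            double surround = sum / 25.0;
            out[y][x] = (std::fabs(img[y][x] - surround) > threshold) ? 1 : 0;
        }
    }
    return out;
}
```

On a smooth patch the map stays at zero everywhere; a single bright blemish in a smooth neighbourhood is flagged with a 1.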
Standard deviation is a measure of how spread out the pixel values of an image are.
A low standard deviation means the values are mostly close to the mean value of the
image pixels; it is a good measure of how smooth an image surface is. A higher standard
deviation means the pixel values are non-uniform and spread over a wider range, while a lower
standard deviation shows that the image is smooth and has little spatial variation. The
name global standard deviation indicates that the computation was performed on the
whole image channel at once. All three channels, red, green and blue, were
treated separately.
σ = √( (1/N) Σᵢ,ⱼ₌₁ᴺ (xᵢ,ⱼ − μ)² )        (3.3)
where:
μ = (Σ xᵢ,ⱼ) / M        (3.4)
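Equations (3.3) and (3.4) translate directly into code; the sketch below treats one channel as a flat list of pixel values, as the global computation effectively does:

```cpp
#include <cmath>
#include <vector>

// Global mean of one colour channel, eq. (3.4).
double channelMean(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s / v.size();
}

// Global (population) standard deviation of one channel, eq. (3.3):
// root of the mean squared deviation from the channel mean.
double channelStdDev(const std::vector<double>& v) {
    double mu = channelMean(v);
    double s = 0.0;
    for (double x : v) s += (x - mu) * (x - mu);
    return std::sqrt(s / v.size());
}
```

For the values {2, 4, 4, 4, 5, 5, 7, 9} the mean is 5 and the standard deviation is 2; a perfectly uniform channel would give a standard deviation of 0.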
The method described in the previous section is quite handy, but in practice it was not
suitable for all kinds of lemons. A lemon turning from green to yellow has yellow patches
on green and possesses strong colour contrast even if it does not have any defect.
To overcome this difficulty, another method was proposed. The whole image was
divided into 16x16 patches and the standard deviation was calculated for each patch, hence
the name local standard deviation. The method calculates the standard deviation in each 16x16
patch locally, independent of the global context. Patches with a higher standard deviation
indicate possible defects, since pixel values should not spread significantly for a defect-free
surface; only a defective region has a high standard deviation. The computed standard
deviations were stored in a matrix with one value per patch.
It was determined experimentally that the border of the fruit had a high standard deviation
even for good fruits, because the patches at the border have a wider pixel value distribution
beyond the mean. A morphological operation was performed to remove some border values.
Most patches in the fruit region had standard deviation values in the range of 0 to 1
even for smoother skin, because of the little bumps on the lemon surface. So patches
with values up to 1 were set to zero. The remaining patches, where the value of the standard
deviation was non-zero, were added together and used as feature M.
Features from both the centre-surround method and the local patch standard deviation were used
for defective fruit detection.
Algorithm 3:
For each patch standard deviation Sᵢ,ⱼ:
    If Sᵢ,ⱼ ≥ 1:
        Sᵢ,ⱼ = Sᵢ,ⱼ − 1
    Else:
        Sᵢ,ⱼ = 0
M = Σ Sᵢ,ⱼ
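The patch computation and Algorithm 3 can be sketched together in one C++ function; the patch size is a parameter so the same sketch covers 16x16 patches or smaller ones:

```cpp
#include <cmath>
#include <vector>

// Feature M: split a greyscale image into square patches, compute the
// standard deviation of each patch, subtract 1 to discount the normal
// lemon-skin texture (clamping at zero), and sum what remains. Smooth skin
// contributes ~0, so defective regions dominate M.
double featureM(const std::vector<std::vector<double>>& img, int patch) {
    int h = static_cast<int>(img.size());
    int w = static_cast<int>(img[0].size());
    double m = 0.0;
    for (int py = 0; py + patch <= h; py += patch) {
        for (int px = 0; px + patch <= w; px += patch) {
            double sum = 0.0, sq = 0.0;
            int n = patch * patch;
            for (int y = 0; y < patch; ++y)
                for (int x = 0; x < patch; ++x) {
                    double v = img[py + y][px + x];
                    sum += v;
                    sq += v * v;
                }
            double mean = sum / n;
            double sd = std::sqrt(sq / n - mean * mean);
            if (sd >= 1.0) m += sd - 1.0;  // Algorithm 3: clamp texture to zero
        }
    }
    return m;
}
```

A uniform patch contributes nothing, while a patch of alternating extreme values contributes its (reduced) standard deviation, so M grows with the extent and severity of surface defects.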
The features used included the area. We did not use the mean value of the blue colour
channel because it has no significant effect on quality.
The extracted features were in the form of numbers; the five features from each fruit
sample were arranged in a matrix. A total of 150 training samples, 50 from each category,
were collected.
3.7.4 Normalization
Machine learning algorithms require normalized data in the form of floating point
numbers ranging from zero to one. For this purpose, the matrix was converted to a 32-bit
floating point data type and each row was divided by the highest number in that row. The
operation resulted in normalized floating point data in the range [0.0, 1.0].
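The row-wise division described here can be sketched as follows (a per-row max normalization; the guard against a zero maximum is our own defensive addition):

```cpp
#include <algorithm>
#include <vector>

// Divide each feature row by its largest value, scaling every entry into
// [0.0, 1.0] as the learning algorithms expect. Rows whose maximum is zero
// are left untouched to avoid division by zero.
void normalizeRows(std::vector<std::vector<float>>& features) {
    for (auto& row : features) {
        float mx = *std::max_element(row.begin(), row.end());
        if (mx > 0.0f) {
            for (float& v : row) v /= mx;
        }
    }
}
```

For example, a row {2, 4, 8} becomes {0.25, 0.5, 1.0}; the largest entry of every row maps to exactly 1.0.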
3.7.5 Labelling
Since supervised learning algorithms were used for training and classification, labels
for each sample must be passed to the learning algorithm. A floating point matrix was
created to hold the labels.
3.7.6 Training
Two different machine learning algorithms were used for training, and both were trained
on the same feature data. The resulting learned weights were saved to storage for later use.
The results obtained are presented in the Results chapter.
A complete fruit sorting mechanism was built to make the process automatic. The major
components were:
• Raspberry Pi
• Camera
• Conveyor belt
• Actuator
• Sorting bins
• Electrical components for control and switching
3.9 RASPBERRY PI
Raspberry Pi is a single board computer, also called a development board. It was created
to promote the teaching of basic computer science, but it became popular in many other
fields such as robotics. Various developers and inventors use the Raspberry Pi for
prototyping. Since its release, the Raspberry Pi organization has released many models
and revisions, differing in features such as memory, peripherals and processor.
The development board used in this project was the Raspberry Pi 3 Model B. This particular
computer has a 1.2 GHz ARM64 processor and 1 GB of RAM, which was enough for it to serve
as our central control and processing unit. It also has a 40-pin I/O header that can be
used to interface external hardware.
3.10 CAMERA
The camera used in the project was also from the Raspberry Pi organization: the Pi Camera
V1.3. The camera has a 5 megapixel sensor that can take 2592x1944 resolution images
with decent quality. It has a serial interface and can be connected directly to the Raspberry
Pi board. The pictures were obtained directly from the video stream as frames, which is
not as good as a picture captured in the camera's still mode, because still mode applies
advanced noise removal and filtering to improve quality and correct colours.
Since our application required a series of frames from the video stream, the use of still
mode was not possible. The video mode produces a 1080p stream.
3.11 BELT CONVEYOR
The belt conveyor is the medium that carries the lemons from one end to the other. It has
four major parts:
1. A drive motor
2. A pair of rollers
3. The frame
4. Belt
3.11.1 Drive Motor
The drive motor used to spin the rollers was obtained from an electric scooter. The
motor had external gears to reduce the angular velocity, resulting in high torque. The
motor required a 12V DC supply, with a no-load current of 1.5A and a full-load current of
2.3A. The motor was controlled directly by the Raspberry Pi itself, and its speed was
reduced to half using pulse-width modulation (PWM) of a square wave. The driver circuit used to
control the motor was connected to the Raspberry Pi; a MOSFET that can handle high
current was used as a switch. The wiringPi library with C++ was used to produce the PWM signal.
Figure 3.8: Schematic diagram of motor control circuit
The PWM frequency was set to 5 kHz with a duty cycle of 50%.
3.11.2 Rollers
Two rollers made from wood were used to support the belt. Both sides of each roller
were supported by ball bearings, which allowed the rollers to spin with much reduced
friction. Each roller had a diameter of 5cm and a length of 20.3cm. One roller was
driven by the motor. The rollers were covered with a rubbery material to make the surface
contact between belt and roller non-slip.
3.11.3 Frame
A wooden frame was made to support all the components of the belt conveyor, the image
capturing chamber and the actuator. The frame has a structure at both ends where bearing
blocks can be mounted; the bearing blocks can be adjusted to set the tension of
the belt. The frame has a length of 1.22 meters and a width of 25cm.
3.11.4 Belt
A non-elastic rubbery cloth was used as the belt. The belt was composed of two
materials, fabric and a leather-like material, which makes it fit for the application.
The belt tension was adjusted so that it would not slip or track sideways. The belt was
painted black because the image processing algorithm required the background to be
black.
3.12 IMAGE CAPTURING CHAMBER
An imaging chamber was built as part of the project to protect the sample from
changing outside lighting conditions during capture. The camera was
mounted in the roof of the chamber and two LED lamps with built-in diffusers were mounted inside.
The lamps were supplied with a fixed 12 volts to keep the lighting conditions constant.
The openings of the chamber were covered with paper to prevent as much light as
possible from entering the chamber. The chamber was 20cm high, 30cm long and 20cm wide.
The inside of the chamber was designed such that the lemon was placed directly into the
chamber, where it was stopped right beneath the camera and the image was taken. The
idea of stopping the fruit before taking the image was to eliminate motion blur.
3.13 ACTUATOR
An actuator is the part of a machine responsible for mechanical control, such
as opening or closing valves. The actuator in this project was used as a pusher to move
the fruit into its respective bin. The bin into which a lemon was pushed was decided
by the back propagation neural network: the output of the network decided which way
the actuator moved. The actuator was built using PVC pipe and a small motor, which the
Raspberry Pi controlled directly through an H-bridge.
3.13.1 H-Bridge
The circuit used to control the actuator was an H-bridge, which can turn a DC motor in both
forward and reverse directions (Paquino25, n.d.). It was constructed using two N-channel
and two P-channel power MOSFETs. A small NPN transistor was used at the
driver stage to control switching, which acts as protection between the gate capacitance and
the Raspberry Pi pins.
The H-bridge has two inputs, S1 and S2, and the output is forward or reverse rotation of the motor.
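The input-to-direction mapping of such a two-input H-bridge can be modelled as below. This sketch assumes active-high inputs, and the guard against driving both inputs at once (which would short the supply through the MOSFETs, known as shoot-through) is a standard precaution rather than something described in the report:

```cpp
#include <string>

// Direction of a simple two-input H-bridge: S1 alone closes one diagonal
// MOSFET pair (forward), S2 alone the other (reverse), both low stops the
// motor, and both high is forbidden because it would short the supply.
std::string hBridgeState(bool s1, bool s2) {
    if (s1 && s2) return "invalid";  // shoot-through: never drive both
    if (s1) return "forward";
    if (s2) return "reverse";
    return "stop";
}
```

Control software should always release one input before asserting the other, so the bridge passes through the "stop" state when reversing.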
3.14 POWER SUPPLY
As mentioned earlier, the system required a fixed 12V for the lighting conditions to
remain constant; even a small voltage drop is highly undesirable. Motors can draw six
to ten times more current at start-up, and our system used three motors, two of which
were started and stopped regularly, so the choice of power supply was critical. ATX power
supplies are designed to withstand changing loads and high currents. Therefore, an ATX
power supply was modified to power the system. It was a 250 watt switched-mode
power supply rated at an output of 12V 16A, which was more than sufficient
for our application. Additionally, it could provide 5V and 3.3V outputs too.
RESULTS
In most image processing applications, pre-processing of the image is required. Pre-processing
includes noise removal, background subtraction, cropping and resizing. This chapter presents
the results of these steps.
4.1 IMAGE ACQUISITION
Digital image acquisition is the very first step in this project. Varying ambient light has an
adverse effect on the quality of the captured image. For this reason, an image capturing
chamber was built and the images were captured inside the chamber. The camera used was the
Raspberry Pi Camera Board v1.3 (5MP, 1080p), as described earlier. It was a fixed-focus
camera and hence could be used in a fixed position. The chamber was designed to stop ambient
light from entering and was fitted with two fluorescent lamps, each powered by a 12V source,
providing consistent lighting and good quality images. The captured images had dimensions of
1280x960x3 in the RGB colour space. The image captured by the camera is shown in Figure 4-1,
which is a scaled-down version of the original.
4.2 CROPPING
The image was cropped to remove the parts where the probability of finding the object of interest is zero. Cropping is an easy but very important step in image processing. It allows the system to focus only on the main object and removes distracting content from the image. Cropping not only reduces size but also saves a lot of computing effort later. Moreover, cropping can change the aspect ratio. Only the area of the image where the object could appear was retained. The area to crop is shown in Figure 4-3.
In this step the original image, with a resolution of 1280x930 and a total of 1,190,400 pixels, was reduced to 410x300 with only 123,000 pixels. The resulting cropped image is only about 10% of the original. Figure 4-4 shows the resulting cropped image.
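The cropping step can be sketched in plain C++. The actual system used OpenCV; this standalone version, with a hypothetical `crop` helper, simply copies the rectangle of interest out of a 2D pixel array:

```cpp
#include <vector>

// Crop a greyscale image (stored as a 2D array) to a rectangle of interest.
// x, y give the top-left corner of the rectangle; w, h its width and height.
std::vector<std::vector<int>> crop(const std::vector<std::vector<int>>& img,
                                   int x, int y, int w, int h) {
    std::vector<std::vector<int>> out(h, std::vector<int>(w));
    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c)
            out[r][c] = img[y + r][x + c];  // copy only the region of interest
    return out;
}
```

Cropping before any further processing means every later operation touches roughly a tenth of the original pixels, which is where the computational saving comes from.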
Noise removal is an important step in image processing. Noise from different sources such as the lens, the sensor, quantization and the transmission channel should be removed before further processing; the algorithms perform better on noise-free images. A Gaussian filter was used for this purpose. The parameters set manually for the Gaussian filter were sigma and ksize. Ksize means kernel size and it was set to 3. Larger kernel sizes eliminate more noise but are slower in operation, so the choice of parameter was critical. Sigma is the standard deviation of the Gaussian filter. The filter is a 2D function, which allows two different values of sigma for the x and y directions. In this example, both parameters sigma-x and sigma-y were set to 3. The result was a smoother image.
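A minimal sketch of the 3x3 Gaussian smoothing described above, assuming a greyscale image stored as a 2D array. The real system used OpenCV's GaussianBlur; the `gaussianBlur3x3` helper here is illustrative only, and it leaves border pixels unchanged for simplicity:

```cpp
#include <vector>
#include <cmath>

// Smooth a greyscale image with a 3x3 Gaussian kernel (sigma-x = sigma-y),
// mirroring the ksize = 3, sigma = 3 parameters described in the text.
std::vector<std::vector<double>> gaussianBlur3x3(
        const std::vector<std::vector<double>>& img, double sigma) {
    // Build the kernel from the 2D Gaussian and normalize it to sum to 1.
    double k[3][3], sum = 0.0;
    for (int i = -1; i <= 1; ++i)
        for (int j = -1; j <= 1; ++j) {
            k[i + 1][j + 1] = std::exp(-(i * i + j * j) / (2.0 * sigma * sigma));
            sum += k[i + 1][j + 1];
        }
    for (auto& row : k)
        for (double& v : row) v /= sum;

    // Convolve interior pixels; border pixels are copied through unchanged.
    std::vector<std::vector<double>> out = img;
    for (std::size_t r = 1; r + 1 < img.size(); ++r)
        for (std::size_t c = 1; c + 1 < img[0].size(); ++c) {
            double acc = 0.0;
            for (int i = -1; i <= 1; ++i)
                for (int j = -1; j <= 1; ++j)
                    acc += k[i + 1][j + 1] * img[r + i][c + j];
            out[r][c] = acc;
        }
    return out;
}
```

Because the kernel weights sum to one, smooth regions keep their brightness while isolated noisy pixels are pulled toward their neighbours.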
The background should be removed so that the system only performs calculations on the object of interest. Background removal saves a lot of processing power later and also reduces the complexity of feature extraction. All background pixels were set to zero based on the assumption that the background was black: pixels with intensity close to zero were set to zero. Some pixels had intensities around 50 due to reflections or imperfections, but even for these higher values the black-background assumption worked. It was noted that bright spots on the background produced only shades of grey and not a colour; the pixel values for the background in all three channels (RGB) were almost equal. Therefore the difference of two channels was used to check whether a pixel belonged to the background. For this purpose, the blue channel was subtracted from the red channel and the absolute difference was calculated. If the difference exceeded a value of 30, the pixel certainly belonged to the fruit and not the background; the red-blue difference for pixels belonging to the fruit was always more than 60. This was a simple and efficient approach.
A blob is an isolated object in a binary image. In image processing, blobs are used to compute shape-related features; in some operations, such as calculating the Mean Value, the binary image is passed to the function to restrict computation to the image area covered by the fruit.
After the pre-processing operations described previously, the blob was isolated (Figure 4-6). A simple thresholding operation was enough for this purpose. The image was first converted to greyscale and then thresholded, an operation in which pixel values less than a set threshold are set to zero and all other pixels are set to one (binary). The threshold value used was 30. To suppress sharp edges in the binary image (also referred to as the binary mask), a morphological closing operation was performed.
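The thresholding step can be sketched as a standalone stand-in for OpenCV's threshold function; `thresholdBinary` is a hypothetical name:

```cpp
#include <vector>

// Global thresholding as described above: pixels below the threshold (30 in
// the text) become 0 and all others become 1, producing the binary mask.
std::vector<std::vector<int>> thresholdBinary(
        const std::vector<std::vector<int>>& grey, int thresh) {
    std::vector<std::vector<int>> mask = grey;
    for (auto& row : mask)
        for (int& v : row)
            v = (v < thresh) ? 0 : 1;
    return mask;
}
```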
Erosion, just like soil erosion, takes away the white pixels forming the shape. The foreground object should be kept in white. Erosion removes boundary pixels depending on the type of kernel used: a pixel is kept as one only if all the pixels under the kernel are one. The operation grows the black region and shrinks the white foreground.
Dilation is just the reverse of erosion: a pixel is set to one if any of the pixels under the kernel is one. As a result, the operation grows the white region.
The thresholded image obtained in the previous section was subjected to erosion. The operation smoothed the boundary of the shape. The kernel used in erosion was 3×3.
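Binary erosion with a full 3x3 kernel can be sketched as below. The system used OpenCV's erode; this `erode3x3` helper is illustrative and treats the image border as background:

```cpp
#include <vector>

// Binary erosion with a full 3x3 kernel: an output pixel stays 1 only if
// every pixel under the kernel is 1, which shrinks the white foreground.
std::vector<std::vector<int>> erode3x3(const std::vector<std::vector<int>>& mask) {
    int rows = mask.size(), cols = mask[0].size();
    std::vector<std::vector<int>> out(rows, std::vector<int>(cols, 0));
    for (int r = 1; r < rows - 1; ++r)
        for (int c = 1; c < cols - 1; ++c) {
            int all = 1;
            for (int i = -1; i <= 1 && all; ++i)
                for (int j = -1; j <= 1; ++j)
                    if (mask[r + i][c + j] == 0) { all = 0; break; }
            out[r][c] = all;
        }
    return out;
}
```

Dilation is the mirror image of this loop: the output pixel becomes 1 if any pixel under the kernel is 1, so it grows the white region instead.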
Discussion related to pre-processing ends here; feature extraction is discussed next.
The Mean Value of the image was the first feature. A colour image comprises three channels: Red, Green and Blue. The mean of each channel was calculated separately, giving three features:
• Red Mean
• Green Mean
• Blue Mean
These features relate to ripeness measurement. The Mean Values of Red, Green and Blue for Ripe, Semi Ripe and Not Ripe lemons are shown in Table 4.1. A total of 100 samples were used to extract features. It can be observed that the ripeness features are linearly separable: fixed thresholds can separate the data into the respective categories.
Table 4.1: Determining ripeness using Mean Values of Red, Green and Blue colour channels

Ripeness    Red Mean    Green Mean    Blue Mean
Figure 4.9: (a) Ripe, (b) semi ripe and (c) not ripe lemons
Standard deviation is a statistical property that describes how much the data differs from its Mean Value. A fair, smooth surface has a low standard deviation whereas a defective surface has a higher one. As described earlier, the colour image is composed of three channels, Red, Green and Blue. The standard deviation of each channel was computed separately, just like the Mean Value feature. The standard deviation features extracted in the process were:
• Red Standard Deviation
• Green Standard Deviation
• Blue Standard Deviation
It is worth mentioning that the standard deviation computed in this particular section is global, taken over the whole fruit region.
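Both the per-channel mean and global standard deviation features can be computed in one masked pass. This `channelMeanStd` sketch is a hypothetical helper: it assumes one flattened channel and a matching mask with at least one non-zero entry:

```cpp
#include <vector>
#include <cmath>

// Masked mean and standard deviation of one colour channel, as used for the
// ripeness (mean) and surface-quality (standard deviation) features. Only
// pixels where the mask is non-zero (the fruit) are included.
void channelMeanStd(const std::vector<double>& channel,
                    const std::vector<int>& mask,
                    double& mean, double& stddev) {
    double sum = 0.0, sumSq = 0.0;
    int n = 0;
    for (std::size_t i = 0; i < channel.size(); ++i) {
        if (mask[i]) {
            sum += channel[i];
            sumSq += channel[i] * channel[i];
            ++n;
        }
    }
    mean = sum / n;                               // average intensity (ripeness cue)
    stddev = std::sqrt(sumSq / n - mean * mean);  // spread (surface-quality cue)
}
```

Masking matters here: including the zeroed background pixels would drag the mean down and inflate the standard deviation for every sample.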
4.9 AREA
The size of a fruit is a good indication of its quality; bigger fruit is usually considered better. As described in the previous section, blobs are used for shape feature extraction. A lemon is a three-dimensional body which has volume, but the image captured by the camera loses the third dimension, so volume calculation from a 2-dimensional image is not possible. Since the whole shape of the object is mapped to pixels in the image, counting the pixels that describe the object produces a good estimate of its size.
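The pixel-counting idea can be sketched with a hypothetical `blobArea` helper operating on the binary mask:

```cpp
#include <vector>

// Area feature: the fruit's 2D size is approximated by counting the non-zero
// pixels of the binary mask (the blob), since volume cannot be recovered
// from a single 2D image.
int blobArea(const std::vector<std::vector<int>>& mask) {
    int area = 0;
    for (const auto& row : mask)
        for (int v : row)
            if (v != 0) ++area;
    return area;
}
```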
The centre surround method takes a neighbourhood around each pixel and computes its Mean Value. If the pixel value exceeds the Mean Value by a certain margin, the pixel is said to have strong contrast. The method becomes inefficient for large neighbourhoods. Instead of using a larger filter, which slows down the operation, the image can be reduced in size and a smaller filter used: a filter of fixed size has a greater effective reach when the image is subsampled. This approach was used in the system; the image was resized to 20% in both rows and columns and a neighbourhood of 5x5 was used for the centre surround computations. Pixels with higher local contrast were set high; all other pixels were set to zero. The resulting image shows where the contrast changes strongly.
Counting the non-zero elements in the contrast map obtained through the centre surround method gave the total defective area of the fruit: the more non-zero pixels, the more defective the surface.
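A sketch of the centre surround computation on the subsampled greyscale image. `defectiveArea` is a hypothetical helper, and the `margin` parameter is an assumption, since the report does not state the exact margin used:

```cpp
#include <vector>
#include <cmath>

// Centre-surround contrast: compare each pixel with the mean of its 5x5
// neighbourhood; pixels deviating from the local mean by more than `margin`
// are marked as strong-contrast (potential defect) pixels. The defective-area
// feature is the count of marked pixels.
int defectiveArea(const std::vector<std::vector<int>>& img, int margin) {
    int rows = img.size(), cols = img[0].size(), count = 0;
    for (int r = 2; r < rows - 2; ++r)
        for (int c = 2; c < cols - 2; ++c) {
            double sum = 0.0;
            for (int i = -2; i <= 2; ++i)
                for (int j = -2; j <= 2; ++j)
                    sum += img[r + i][c + j];
            double localMean = sum / 25.0;  // mean of the 5x5 neighbourhood
            if (std::fabs(img[r][c] - localMean) > margin)
                ++count;
        }
    return count;
}
```

On a smooth surface every pixel is close to its local mean and the count stays at zero; a blemish produces pixels far from their surroundings and raises the count.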
The whole image was divided into 16x16 patches and the standard deviation of each patch was computed separately. The standard deviation of each patch was stored in a results matrix with one element per 16x16 patch in the original image. The elements of the results matrix were summed to obtain the local standard deviation feature value. For the local standard deviation, the image was first converted to greyscale.
Figure 4.11: 16x16 patches to compute local standard deviation
(a) Smooth image with feature value 56, (b) Defective lemon with feature value 125
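The patch-wise computation can be sketched as below; `localStdFeature` is a hypothetical helper, shown here for a configurable patch size (16 in the text):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Local standard deviation feature: tile the greyscale image into
// patch x patch blocks, compute the standard deviation of each block, and
// sum the results. A smooth surface gives a small sum; a defective one a
// large sum.
double localStdFeature(const std::vector<std::vector<double>>& grey, int patch) {
    int rows = grey.size(), cols = grey[0].size();
    double feature = 0.0;
    for (int r = 0; r + patch <= rows; r += patch)
        for (int c = 0; c + patch <= cols; c += patch) {
            double sum = 0.0, sumSq = 0.0;
            int n = patch * patch;
            for (int i = 0; i < patch; ++i)
                for (int j = 0; j < patch; ++j) {
                    double v = grey[r + i][c + j];
                    sum += v;
                    sumSq += v * v;
                }
            double mean = sum / n;
            // max() guards against tiny negative values from rounding error
            feature += std::sqrt(std::max(0.0, sumSq / n - mean * mean));
        }
    return feature;
}
```

Unlike the global standard deviation, this feature localises variation: a single rough patch raises the sum even when the rest of the fruit is smooth.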
4.12 TRAINING
The algorithm began with image acquisition; the acquired image was enhanced and cropped (pre-processing) and nine useful features related to ripeness, size and defects were extracted. Now it was time to train a neural network. Neural network training requires pre-processing of the extracted features. All the samples (lemons) were placed in the chamber one by one and the features were extracted and arranged in a feature vector. The feature vector was a 32-bit floating point array with one column per feature; each row contained one data point. Since we used 100 lemons for training, our feature vector had 100 rows. Similarly, a 100-row labels vector was created which contained the labels for all samples.
Features were normalized into the range 0.0 to 1.0. Machine learning algorithms require features arranged in a certain way: every column in the feature vector should contain a single feature. To normalize the data, every element of a column was divided by the largest value in that column.
A Multilayer Perceptron was used for training and prediction of fruit quality. A Multilayer Perceptron has at least three layers: an input layer, a hidden layer and an output layer. Each layer consists of nodes, each of which is a neuron. Apart from the input layer, every neuron applies a nonlinear activation function, which is what allows the perceptron to distinguish nonlinearly separable data. The network was trained using backpropagation. The input layer must have as many neurons as there are input features, the number of neurons in the hidden layer is determined experimentally through trial and error, and the output layer should have as many neurons as there are outputs.
Input layer neurons 9
Hidden layer neurons 36
Output layer neurons 3
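Given the three output-layer activations, the predicted class is simply the index of the largest one, which is how the appendix code interprets the network's response. A minimal sketch:

```cpp
#include <vector>
#include <cstddef>

// The output layer has three neurons (ripe / semi-ripe / defective-or-unripe);
// the predicted class is the index of the largest output activation.
int predictedClass(const std::vector<float>& outputs) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < outputs.size(); ++i)
        if (outputs[i] > outputs[best]) best = i;
    return static_cast<int>(best);
}
```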
The learned data was saved as a ‘.yaml’ file for future use.
In this section, the correlation between features is presented in the form of scatter plots. Figure 5-1 shows the correlation between the red and green Mean Values, whereas Figure 5-2 shows the correlation between the red and blue Mean Values. The plots show high correlation between the Red, Green and Blue Mean Values; one or two of these features could be excluded from training without affecting the accuracy very much. The reason to include all of them was to get the maximum possible accuracy.
Figure 4.13: Red-Blue mean correlation
A backpropagation multilayer perceptron was trained using 100 samples. The data set was split into three subsets; only the first was used for training, while the other two were used for validation and testing. Seventy samples were used for training, fifteen for validation and fifteen for testing. Neural network performance was observed on these sets, and several statistical tools were used in MATLAB to analyse it.
The neural network achieved its best performance after only 21 iterations, as shown in Figure 5-1. The graph shows three curves: training, validation and test. In MATLAB, once the best validation performance is achieved, training continues for six more iterations and then stops. It can be observed that, after fifteen iterations, the model started to overfit the data and the validation curve started to rise.
Figure 5-14 shows the error rates for the training, testing and validation data sets. It can be seen that the training, testing and validation errors decrease with the number of training iterations.
Figure 4.14
The cross-entropy plot measures the quality of the neural network's predictions rather than the classification error: classification error only counts misclassifications, whereas cross entropy reflects the quality of each prediction. The training error after 21 iterations reduced to 4.3%, whereas the testing error was about 6%.
Figure 4.15: Performance plot, neural network
A confusion matrix is a very simple tool used to analyse the performance of a classifier. The left confusion matrix shows the overall classification accuracy of the neural network, about 94%. Two out of 27 samples of good quality lemons were classified as average quality and one was classified as defective or unripe (the third category combines both defective and green lemons). There were no misclassifications for average lemons, so an accuracy of 100% was obtained, shown in column four. Three of the defective or unripe lemons were misclassified; only one of them was classified as good quality.
The fourth row of the confusion matrix shows the true positive and false positive rates. The figure shows that a total of 25 lemons were classified as good quality and only one lemon in this class was a false positive, compared to the average class where, out of 22 lemons, 4 were false positives. Fifty-two lemons were classified as defective or unripe, of which only one was a false positive. No lemon from the average class was misclassified into other classes.
Figure 4.16: confusion matrix for training, testing and validation
(a) Training confusion matrix and (b) overall confusion matrix
A dual H-bridge was built to control two motors: one for the actuator and one inside the imaging chamber that stops the lemon for image capture. An H-bridge controls the direction of a DC motor. It was a two-input H-bridge; the simulation results are presented in Table 5.3.
Input A  Input B  Motor
0        0        Brake
0        1        Forward
1        0        Reverse
1        1        Brake
Applying the same signal at both inputs of the H-bridge makes the motor brake.
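The truth table can be expressed directly as a small function; `hBridgeState` is a hypothetical name:

```cpp
#include <string>

// Truth table of the two-input H-bridge (Table 5.3): equal inputs brake the
// motor, unequal inputs select the direction.
std::string hBridgeState(int a, int b) {
    if (a == b) return "Brake";
    return (a == 0 && b == 1) ? "Forward" : "Reverse";
}
```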
MOTOR SPEED CONTROL
Figure 4.3 shows the circuit used to control the DC motor speed. Speed was controlled using the pulse width modulation technique. The input frequency was provided by the Raspberry Pi and the circuit amplified the signal.
Figure 5.17 shows the simulation results for pulse width modulation for motor speed control. The simulation was performed in Proteus with a virtual oscilloscope.
Input signal: -----
Output signal: …….
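The reason PWM controls speed is that the motor effectively sees the time-average of the switched supply voltage. A one-line sketch, ignoring motor dynamics:

```cpp
// Pulse width modulation: the average voltage delivered to the motor is the
// supply voltage scaled by the duty cycle, which is how PWM controls speed.
double pwmAverageVoltage(double supplyVolts, double dutyCycle /* 0.0..1.0 */) {
    return supplyVolts * dutyCycle;
}
```

At a 50% duty cycle the 12V supply therefore delivers an average of 6V to the motor.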
DISCUSSION AND CONCLUSION
5.1 CONCLUSION
The project was about sorting and grading fruit using image processing and computer vision techniques. The fruit selected for this purpose was the lemon; there are five kinds of lemon available in Pakistan. Sorting and grading is an important post-harvest process which is a tiring job for humans, and human workers can produce inconsistent results, which can lead to food wastage and financial loss.
The objective of the project was to eliminate human intervention in decision making. A physical system was designed to make the whole sorting process automatic. The physical system consists of a conveyor belt, an image capturing chamber, actuators and sorting bins. The fruit is placed in the image capturing chamber, where its image is taken, and the fruit is then automatically placed on the conveyor belt. The system uses a camera inside the chamber to capture images.
The first major step is pre-processing, in which the image is prepared for feature extraction. The image is first cropped to focus only on the main object of interest, the lemon in this case, and the useless part is eliminated. The background is then removed to make the later algorithms less complex. Noise is removed using a Gaussian filter of size 3x3, which smooths the image and reduces high frequency components.
In the next step, the pre-processed image was used to extract useful features. The image was split into its three RGB channels and each channel was treated individually during feature extraction. The Mean Value of all three channels was computed, followed by the global standard deviation of all three channels. The Mean Value determines the ripeness of the fruit, whereas the standard deviation measures surface irregularities in a global context. The area of the fruit was computed, which determines its size. A local contrast map was obtained using the centre surround method, which approximated the surface defects well. Standard deviation was computed again, this time locally in 16x16 patches. A total of nine features were extracted.
The third and final major step was to train a machine learning algorithm and store its output for future use. A feed-forward, backpropagation multilayer perceptron was trained using 99 samples of lemons obtained from the local market, and the learned data was saved for later use. At the testing stage, a sample was placed inside the chamber, the picture was captured and features were extracted as described earlier. The test sample was fed to the input of the neural network and the class of the lemon was predicted. The lemon was then dropped onto the belt and a command was issued to the sorting actuator.
(Jhawar, December 2015) classified citrus into four classes based on ripeness only; the classes were ripe, semi ripe, green and over ripe. He was able to obtain an accuracy of 97.98%. Our model outperformed this model in terms of ripeness measures.
(Mohana & C.J., 2015) proposed a method to detect defects as well as stem ends. Stem end detection was implemented to lower the chance of stem ends being detected as defects. The system performed at an accuracy of 97.5% for defect detection and 95% for stem end detection. Our approach detected defective lemons with an accuracy of 98%.
Another reported approach used light intensity correction for better results in defect detection. They also considered stem detection to increase accuracy and prevent stem ends from being detected as defects.
(Khoje, 2013) used the curvelet transform and pattern recognition for defective fruit detection, testing on guava and lemon. The approach classified lemons with an accuracy of 91.72%; our approach is superior to this technique.
(Momin, 2013) used fluorescent lamps and spectrographic techniques to detect defects on lemons. The method was only for defect detection and the paper does not report accuracy values, although it was able to locate defects. Our method uses the extracted features to determine whether there is a defect on the fruit surface or the skin is clean, and hence does not provide any location information for defects. Apart from location, there is no baseline for comparison between the two methods, for example
(a) (Momin, 2013)’s method highlights the defect location whereas our method (b)
shows a defect map which does not pinpoint the defect.
classification accuracy. Our approach is well suited to the domain of our project.
Our approach considers three attributes (area, ripeness and defects) for lemon classification, whereas the other models deal with only one or two attributes. For ripeness alone, (Jhawar, December 2015) reported 97.98% while our model achieved 100%.
The overall accuracy of our model, as described earlier, is 94% while considering all attributes at once. Our system is more practical and general for grading and sorting fruit.
5.3 DISCUSSION
In this section, the limitations of the system are discussed; areas for improvement are stated in the next section. Our system was able to perform at 94% accuracy, while most fruit and vegetable sorting methods already proposed have accuracies in the range of 90-100%. One caveat is the small data set of only 99 samples collected from a single source: our model was trained on a small data set and might not achieve the stated accuracy in the real world.
The scope of our project was to build a prototype able to sort fruit automatically, without human intervention. Our system does have several limitations, discussed here. The system performed well for the given training and testing sets, but the training samples were obtained from the local market only. The training set was very small and did not cover all the different kinds of lemons found in Pakistan; the model can be thought of as trained only on the lemons available in mid-summer. According to the Pakistan Agricultural Research Council, there are five major types of lemons cultivated in Pakistan (Cultivation of lemon, n.d.). Our model does not consider all the types, nor the seasonal variation.
Our project embeds a Raspberry Pi board, which has an ARM64-based CPU. ARM CPUs are well behind the AMD64 architecture in performance, which did not allow the use of region-based segmentation techniques; these are very time consuming on the ARM architecture.
The second biggest limitation of our project was that it used only one camera, mounted at the top inside the image capturing chamber. It could only take an image of the top side of the fruit; there was no way to scan all sides, so the bottom side of the lemon could not be captured and defects on the bottom were always ignored.
The previous section discussed the limitations of our project. Clear directions for future improvement are:
• Use of a faster x86 or AMD64 based processor for faster real-time processing.
• Collection of a larger data set from different locations and seasons, covering all lemon types.
• Trying other machine learning techniques and comparing results to figure out which one performs better for this particular case.
References
Bipan Tudu, C. S. (2012). An Automated Machine Vision Based System for Fruit
(ICST).
Blasco, S. C.-N. (2014). Optimised computer vision system for automatic pre-grading of
Clowting, E. (2007). Robotic apple picker relies on a camera inside the gripper and
http://www.vision-systems.com/articles/print/volume-12/issue-
8/features/profile-in-industry-solutions/vision-system-simplifies-robotic-fruit-
picking.html
Cui, Y., Wang, Y., Chen, S., & Ping. (2013). Study on HSI Color Model-Based Fruit
Processing (CISP2010).
Cultivation of lemon. (n.d.). Retrieved from http://parc.gov.pk/index.php/en/153-urdu-m/fruits-m/1088-cultivation-of-lemon
Frintrop, S. (2006). A Visual Attention System for Object Detection and Goal Directed
Search.
Huang, W., Zhang, B., Gong, L., & Li, J. (2015). Computer vision detection of
Iqbal, S. M. (2016). Classification of Selected Citrus Fruits Based on Color Using
Jahnsa, G. (2001). Measuring image analysis attributes and modelling fuzzy consumer
Khojasteh, M. (2010). Development of a lemon sorting system based on color and size.
combined lighting transform and image ratio methods. Postharvest Biology and
Technology.
https://www.researchgate.net/publication/259308877_Detection_of_Visual_De
fects_in_Citrus_Fruits_Multivariate_Image_Analysis_vs_Graph_Image_Segm
entation.
Mohana, & C.J., P. (2015). Automatic Detection of Surface Defects on Citrus Fruit
Processing.
Ohali, Y. A. (2011). Computer vision based date fruit Grading system: Design and
http://fritzing.org/projects/hbridge-with-transistors
Pauly, L., & Sankar, D. (2015). A New Method for Sorting And Grading Of Mangos
Seema, Kumar, A., & Gill, G. S. (2015). Computer Vision based Model for Fruit
Swapnil S. Pawar, & Dale, M. P. (2016). Computer Vision Based Fruit Detection and
http://szeliski.org/Book/.
Vonikakis, V., & Winkler, S. (n.d.). A center-surround framework for spatial image
Wu, D., & Sun, D.-W. (2013). Color measurements by Computer vision for food quality
APPENDIX
C++ Code:
  digitalWrite(door_c, LOW);
  softPwmCreate(motor, 0, 3); // pin, initial value, range
}
////////////////////////////////// Main Function //////////////////////////////////
int main() {
  setup();
  // The following three statements start three threads
  piThreadCreate(actuator_1);
  piThreadCreate(actuator_2);
  piThreadCreate(door);
  raspicam::RaspiCam_Cv Camera; // Initialize camera
  Camera.set(CV_CAP_PROP_FORMAT, CV_8UC3); // Set camera pixel format
  Mat imgOriginal, imgCropped, mask, imGray, imgWindow, imgCoeff;
  Scalar mean_1;
  namedWindow("cropped", WINDOW_KEEPRATIO);
  if (!Camera.open()) {
    cerr << "Error opening the camera" << endl;
    return -1;
  }
  cout << "Make Sure Chamber is Empty" << endl;
  usleep(3000000); // Delay so that the camera starts properly
  // Set camera parameters
  Camera.set(CAP_PROP_EXPOSURE, 3);
  Camera.set(CAP_PROP_WHITE_BALANCE_BLUE_U, 0.0155);
  Camera.set(CAP_PROP_WHITE_BALANCE_RED_V, 0.0165);
  // Calibration complete
  cout << "Camera Calibrated" << endl;
  usleep(2000000); // Camera stabilizes
  softPwmWrite(motor, motor_value); // Start belt
  // Initialize neural network from the trained weights
  Ptr<ANN_MLP> ANN = ANN_MLP::load<ANN_MLP>("ANN_MLPtrained.yaml");
  int key = 0;
  // Main loop
  while (1) {
    key = 0;
    // Wait for user input to start or stop
    key = waitKey(1);
    if (key == 32) {
      softPwmWrite(motor, 0);
      key = waitKey();
      if (key == 27) {
        break;
      }
      softPwmWrite(motor, motor_value);
    }
    if (key == 27) {
      break;
    }
    Camera.grab();
    Camera.retrieve(imgOriginal); // Get image
    imgCropped = crop_1(imgOriginal); // Crop image
    imshow("cropped", imgCropped);
    mask = mask_1(imgCropped); // Get thresholded image
    cvtColor(imgCropped, imGray, CV_BGR2GRAY); // Convert to greyscale
    mean_1 = mean(imGray, mask); // Find mean to check whether there is fruit
    // If fruit detected
    if (mean_1[0] > 30) {
      cout << "mean " << mean_1 << endl;
      usleep(1000000); // Wait for fruit to stop
      // Acquire image
      Camera.grab();
      Camera.retrieve(imgOriginal); // Get image
      // Preprocessing
      imgCropped = crop_1(imgOriginal); // Crop image
      mask = mask_1(imgCropped); // Thresholded image
      Mat mrg[] = { mask, mask, mask };
      Mat mask3;
      merge(mrg, 3, mask3);
      bitwise_and(imgCropped, mask3, imgCropped); // Remove background
      // Feature extraction
      // Find mean value of all three channels and global standard deviation
      mean_std(imgCropped, mask);
      find_area(mask); // Find area
      find_dct(imgCropped); // Find local standard deviation (16x16 windows)
      area_defect(imgCropped, mask); // Find local contrast differences (defective area)
      imshow("Fruit", imgCropped);
      // Create feature vector
      Mat testData(1, 9, CV_32FC1, features);
      cout << " | " << testData << endl;
      Mat response_ann;
      float response_ann_f = 0;
      int response_ann_idx;
      // Prediction using neural network
      ANN->predict(testData, response_ann);
      for (int z = 0; z < 3; z++) {
        if (response_ann_f < response_ann.at<float>(0, z)) {
          response_ann_f = response_ann.at<float>(0, z);
          response_ann_idx = z;
        }
      }
      cout << " => ANN_Response: " << response_ann_idx << endl;
      d_v = 1; // Communicate to the door thread to open the door
      // Issue a command to an actuator based on the decision
      if (response_ann_idx == 1) {
        t2_v = 1;
      }
      if (response_ann_idx == 2) {
        t1_v = 1;
      }
      piLock(1);
    }
    usleep(1000000);
  }
  // Terminate programme on detection of ESC key
  cout << "Exiting" << endl;
  // Terminate threads and stop motor
  t1_v = 2;
  t2_v = 2;
  d_v = 2;
  softPwmStop(motor);
  waitKey(100);
}
////////////////////////////////// First Thread Function //////////////////////////////////
// Used to push defective fruit
PI_THREAD(actuator_1) {
  while (true) {
    if (t1_v == 1) {
      piUnlock(1);
      cout << "First actuator" << endl;
      t1_v = 0;
      delay(900);
      digitalWrite(actuator_1_p, HIGH);
      delay(300);
      digitalWrite(actuator_1_p, LOW);
      delay(100);
      digitalWrite(actuator_1_h, HIGH);
      delay(300);
      digitalWrite(actuator_1_h, LOW);
    } else if (t1_v == 2) {
      cout << "Closing Thread1" << endl;
    }
    delay(10);
  }
}
////////////////////////////////// Second Thread Function //////////////////////////////////
// Controls the actuator for pushing B category fruit
PI_THREAD(actuator_2) {
  while (true) {
    if (t2_v == 1) {
      piUnlock(1);
      cout << "Second actuator" << endl;
      t2_v = 0;
      delay(1);
      digitalWrite(actuator_1_p, HIGH);
      delay(300);
      digitalWrite(actuator_1_p, LOW);
      delay(600);
      digitalWrite(actuator_1_h, HIGH);
      delay(300);
      digitalWrite(actuator_1_h, LOW);
    } else if (t2_v == 2) {
      cout << "Closing Thread2" << endl;
    }
    delay(10);
  }
}
////////////////////////////////// Third Thread Function //////////////////////////////////
// Opens and closes the door to place fruit on the belt after the image is taken
PI_THREAD(door) {
  while (true) {
    if (d_v == 1) {
      piUnlock(1);
      cout << "Door" << endl;
      d_v = 0;
      digitalWrite(door_o, HIGH);
      delay(100);
      digitalWrite(door_o, LOW);
      delay(500);
      digitalWrite(door_c, HIGH);
      delay(100);
      digitalWrite(door_c, LOW);
    } else if (d_v == 2) {
      cout << "Closing Thread3" << endl;
    }
    delay(10);
  }
}
// Function finds the local standard deviation by dividing the image into
// small windows and computing the standard deviation of each
void find_dct(Mat imgCropped) {
  Mat imgCoeff, imgWindow;
  int windows_n_rows = 16; // Height of window
  int windows_n_cols = 16; // Width of window
243. int StepSlide = 16;
244. cvtColor(imgCropped, imgCropped, CV_BGR2GRAY);
245. imgCoeff = Mat::zeros(Size(18, 25), CV_8U);
246. int x = 0, y = 0;
247. Scalar mean_, std;
248. double minval, maxval;
249. for (int row = 0; row <= (imgCropped.rows - windows_n_rows); row += StepSlide) {
250. for (int col = 0; col <= (imgCropped.cols - windows_n_cols); col += StepSlide) {
251. Rect windows(col, row, windows_n_rows, windows_n_cols);
252. imgWindow = imgCropped(windows);
253. meanStdDev(imgWindow, mean_, std);
254. if (std[0] > 0) std[0] -= 1;
255. imgCoeff.at < uchar > (y, x) = std[0];
256. x++;
257. }
258. x = 0;
259. y++;
260. }
261. Mat element = getStructuringElement(MORPH_ELLIPSE, Size(3, 3), Point(-1, -1));
262. erode(imgCoeff, imgCoeff, element, Point(-1, -1), 2);
263. Scalar g = sum(imgCoeff);
264. features[0][8] = g[0] / 100;
265. }
///////////////// Local contrast difference: estimate the defective area /////////////////
void area_defect(Mat Image, Mat mask) {
    GaussianBlur(Image, Image, Size(3, 3), 5, 5);
    cvtColor(Image, Image, CV_BGR2GRAY);
    Image.convertTo(Image, -1, 1.5, 0);             // boost contrast (gain 1.5)
    float resiz = 0.20f;
    Mat Element = getStructuringElement(MORPH_ELLIPSE, Size(3, 3)); // kernel for erosion
    resize(mask, mask, Size(), resiz, resiz);
    erode(mask, mask, Element, Point(-1, -1), 4);   // pull the mask away from the fruit boundary
    threshold(mask, mask, 1, 255, THRESH_BINARY);
    resize(Image, Image, Size(), resiz, resiz);
    // Initialise to all-white so the 2-pixel border the loop skips is not left undefined.
    cv::Mat lbp(Image.rows, Image.cols, CV_8UC1, Scalar(255));
    for (int row = 2; row < Image.rows - 2; row++) {
        for (int col = 2; col < Image.cols - 2; col++) {
            // Mean of the 5x5 neighbourhood centred on (row, col).
            float center = 0;
            for (int dr = -2; dr <= 2; dr++)
                for (int dc = -2; dc <= 2; dc++)
                    center += Image.at<uchar>(row + dr, col + dc);
            center /= 25;
            // A pixel deviating from its local mean by more than 10 grey levels
            // is marked as a potential defect (0); all others stay white (255).
            if (Image.at<uchar>(row, col) - center > 10 ||
                center - Image.at<uchar>(row, col) > 10)
                lbp.at<uchar>(row, col) = 0;
            else
                lbp.at<uchar>(row, col) = 255;
        }
    }
    bitwise_and(mask, lbp, lbp);   // keep only pixels inside the fruit mask
    bitwise_not(lbp, lbp, mask);   // inside the mask: defect pixels become white
    imshow("LBP", lbp);
    float defective_area = countNonZero(lbp);
    defective_area = defective_area / 200;          // empirical scaling
    features[0][7] = defective_area;
}
////////////////// Colour-channel means and global standard deviations //////////////////
void mean_std(Mat image, Mat mask) {
    Mat mean, stdDeviation;
    meanStdDev(image, mean, stdDeviation, mask);    // statistics inside the mask only
    // OpenCV channel order is BGR: index 0 = blue, 1 = green, 2 = red.
    float redMean   = mean.at<double>(2, 0);
    float greenMean = mean.at<double>(1, 0);
    float blueMean  = mean.at<double>(0, 0);
    float redStd    = stdDeviation.at<double>(2, 0);
    float greenStd  = stdDeviation.at<double>(1, 0);
    float blueStd   = stdDeviation.at<double>(0, 0);
    // Empirical scaling so each feature falls roughly in [0, 1].
    redMean   = redMean / 195;
    greenMean = greenMean / 195;
    blueMean  = blueMean / 110;
    redStd    = redStd / 30;
    greenStd  = greenStd / 30;
    blueStd   = blueStd / 30;
    features[0][1] = redMean;
    features[0][2] = greenMean;
    features[0][3] = blueMean;
    features[0][4] = redStd;
    features[0][5] = greenStd;
    features[0][6] = blueStd;
}
/////////////////////////////////// Find area of fruit ///////////////////////////////////
void find_area(Mat mask) {
    Mat image = mask.clone();                       // findContours modifies its input
    vector<vector<Point>> contour;
    findContours(image, contour, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, Point(0, 0));
    if (contour.size() < 1) {                       // no fruit detected
        return;
    }
    // The largest external contour is taken to be the fruit; smaller ones are noise.
    float largestarea = 0;
    int largestIndex = 0;
    for (int i = 0; i < (int)contour.size(); i++) {
        double area = contourArea(contour[i]);
        if (area > largestarea) {
            largestarea = area;
            largestIndex = i;
        }
    }
    feature_area = largestarea;
    float scale_area = feature_area / 36000;        // empirical scaling
    features[0][0] = scale_area;
}
Mat mask_1(Mat image) {
    GaussianBlur(image, image, Size(3, 3), 3, 3);
    vector<Mat> channel;
    split(image, channel);                          // separate B, G and R planes
    absdiff(channel[0], channel[1], image);         // blue-green difference highlights the fruit
    threshold(image, image, 30, 255, THRESH_BINARY);
    Mat Element = getStructuringElement(MORPH_ELLIPSE, Size(3, 3)); // kernel for erosion
    erode(image, image, Element, Point(-1, -1), 2); // remove small noise speckles
    return image;
}
Mat crop_1(Mat image) {
    // Fixed crop window around the image centre where the fruit is expected.
    Rect rec(Point((image.cols / 2) - 79, (image.rows / 2) - 179),
             Point((image.cols / 2) + 220, (image.rows / 2) + 230));
    return image(rec);
}