
New Zealand Journal of Agricultural Research

ISSN: 0028-8233 (Print) 1175-8775 (Online) Journal homepage: https://www.tandfonline.com/loi/tnza20

Recognition of greenhouse cucumber fruit using computer vision

Libin Zhang , Qinghua Yang , Yi Xun , Xiao Chen , Yongxin Ren , Ting Yuan ,
Yuzhi Tan & Wei Li

To cite this article: Libin Zhang , Qinghua Yang , Yi Xun , Xiao Chen , Yongxin Ren , Ting Yuan ,
Yuzhi Tan & Wei Li (2007) Recognition of greenhouse cucumber fruit using computer vision, New
Zealand Journal of Agricultural Research, 50:5, 1293-1298, DOI: 10.1080/00288230709510415

To link to this article: https://doi.org/10.1080/00288230709510415

Published online: 22 Feb 2010.

New Zealand Journal of Agricultural Research, 2007, Vol. 50: 1293-1298
0028-8233/07/5005-1293 © The Royal Society of New Zealand 2007

Recognition of greenhouse cucumber fruit using computer vision

LIBIN ZHANG
QINGHUA YANG
The Ministry of Education Key Laboratory of
Mechanical Manufacture and Automation
Zhejiang University of Technology
No. 18 Chaowang Road
Hangzhou, China

YI XUN
XIAO CHEN
YONGXIN REN
TING YUAN
YUZHI TAN
WEI LI*
College of Engineering
China Agricultural University
PO Box 448
No. 17 Qinghua Dong Road
Haidian District, Beijing, China

*Author for correspondence: xun_yi@126.com

A07156; Online publication date 16 May 2008
Received and accepted 10 August 2007

Abstract A method for cucumber fruit recognition in greenhouses is presented for a robotic fruit picking system. A three-layer back propagation (BP) neural network was set up to segment the cucumber plant images. The B (blue) and S (saturation) components were extracted as the input of the network. The multiple colour space feature fusion reduced illumination effects and enhanced image colour information. After successful training of the network, the cucumber plant images were segmented. The fruiting regions were preserved while most other portions were removed. Then, a binary image was made after morphological processing and fruits were recognised by a logical operation with two templates based on the discriminated image. Finally, by texture analysis a "third moment" was selected as a feature to identify the upper part of the fruit, which provides the grip position for a robot end-effector. The experimental results on 40 cucumber plant images show that the recognition rate of fruits is about 76%.

Keywords computer vision; cucumbers; neural network; recognition; texture analysis

INTRODUCTION

Being a popular salad vegetable, cucumber is widely planted all over the world. At harvest time, it must be picked at the right moment or it becomes over-mature, which affects quality and reduces value. However, manual operations are uncomfortable in a warm, humid greenhouse environment. Fruit recognition and location in a single image are key to a vision system for a picking robot, so this study on a cucumber picking robot has the potential to be very useful.

In recent years, there have been many reports published on in situ fruit recognition. Bulanon et al. (2000, 2001, 2002, 2004) proposed an algorithm for detecting Fuji apples on the tree. Luminance and colour difference transformations of red/green/blue (RGB) colour were used to segment the fruit from the background portion of the image. Annamalai et al. (2004) developed a machine vision system to identify citrus fruits and to estimate yield information of citrus groves. The threshold for segmentation of the images to recognise citrus fruits was estimated from the pixel distribution in the hue/saturation/intensity (HSI) colour plane (Annamalai & Lee 2003; Annamalai et al. 2004; Regunathan & Lee 2005; Chinchuluun & Lee 2006). Zhao et al. (2005) used a combination of redness index (r = 3R-G-B), texture-based edge detection, and circle fitting in RGB colour to locate apple fruit in a single image. Cai & Zhao (2005) used hue and saturation data to produce a fusion image. After segmentation by the Otsu operation, regions containing mature
tomatoes were extracted. Xu et al. (2005) adopted a contrast colour index (R-B) model to recognise citrus fruit. The contrast colour index of citrus fruit was significantly different from that of leaves and branches, and experimental results showed that the rate of recognition was high. Foglia & Giulio (2006) designed an agricultural robot for radicchio (leaf chicory) harvesting. An algorithm based on intelligent colour filtering and morphological image operations was provided to locate the radicchio plants. Most of the above research detected objects by colour and brightness differences between fruits and other portions. For cucumber discrimination, these methods are not applicable because fruit colour is too similar to leaf and stem colour. Some investigators observed that the reflectances of fruit, leaf and stem were almost the same in visible light, but the reflectance of fruit was higher in the near infra-red (NIR) region, so fruit could be identified based on their spectral characteristics. Kondo et al. (1994) developed a visual sensor for a cucumber harvesting robot, which employed 550 and 850 nm wavelength interference filters to discriminate fruit from its leaves and stems. Van Henten et al. (2002, 2006) detected cucumber fruit in the green environment using different filters on each of two cameras: one camera was equipped with an 850 nm filter, the other with a filter in the 970 nm band. With filters, however, more demands were placed on camera quality and the complexity of the machine vision system was also increased.

In this paper, cucumber fruit recognition in the greenhouse was studied using computer vision. A back propagation (BP) neural network and a template operation method were used for discrimination. An algorithm for determination of the fruit upper (stalk) region was also developed, using texture analysis based on the discriminated image.

IMAGE ACQUISITION

The variety of cucumber tested in this study was 'JinYu 5', which was planted in the greenhouse. Fifty images of cucumber plants were obtained using a charge-coupled device (CCD) camera (JVC TK-C1381) in a stationary mode by means of backlighting between 9:00 h and 15:00 h on a sunny day. The colour data signals from the camera were transferred as a 24-bit RGB colour image to an image board (JOINHOPE OK_C20A). The effective resolution of the image was 640(H) × 480(V) pixels. Brightness, contrast, shutter speed, and aperture of the camera were kept constant most of the time during imaging. The camera was placed horizontally. The distance between camera and fruit was between 0.40 and 0.60 m. Under backlighting conditions, very little light permeates the fruit, so the brightness of the cucumber surface is less than that of the leaf and stem in images, which is helpful to image analysis.

Fig. 1 Cucumber plant images in RGB space. A, Red-green-blue (RGB) image. B, R component image. C, G component image. D, B component image.

IMAGE SEGMENTATION

Colour feature extraction
Figure 1A shows an image of a cucumber plant including fruits, leaves, stems, sky and soil. The area of the fruits accounts for about 9% of the total area of the image. First, the colour image was decomposed into R, G and B component images, as shown in Fig. 1. By analysing the grey values of the component images, it was found that the maximum difference between cucumber and other portions was in the B component image. The grey values of most leaf regions approached 0 and those of most sky regions were close to 255, so the B component was selected as a feature for image segmentation.

In RGB colour space, the R, G and B components were strongly influenced by light. To reduce this effect, the image was transformed into HSI space, and the H, S and I component images were extracted, as shown in Fig. 2. In the S channel, the leaves behind the fruit were

brighter, while those in front of the fruit were darker. The S component was therefore more favourable for image segmentation.

The B and S components were extracted as colour features. This multiple colour space combination could reduce the illumination effect and enhance image colour information, which established a basis for fruit extraction.
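As a concrete illustration, the B + S feature extraction described above can be sketched in a few lines of NumPy. This is a minimal sketch, assuming RGB values scaled to [0, 1] and the standard HSI saturation formula; the function name is ours, not the authors'.

```python
import numpy as np

def extract_bs_features(rgb):
    """Stack the B (blue) and S (saturation) components of an RGB image.

    rgb: float array of shape (H, W, 3), values scaled to [0, 1].
    Returns an (H, W, 2) array holding the two colour features
    used as inputs to the segmentation network.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    # HSI saturation: S = 1 - 3*min(R, G, B)/(R + G + B),
    # guarded against division by zero for black pixels.
    s = np.where(total > 0,
                 1.0 - 3.0 * rgb.min(axis=-1) / np.maximum(total, 1e-12),
                 0.0)
    return np.stack([b, s], axis=-1)
```

Stacking the two channels gives, for every pixel, the two-element input vector that the BP network consumes.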
BP neural network
Ten images were chosen randomly. As mentioned above, cucumber plant images are mainly composed of fruits, leaves, stems, sky and soil. Because of the illumination, the colour of each portion varied in a complex manner in both RGB and HSI space, so 2500 pixels were selected manually in each image, chosen so that as many image characteristics as possible were contained in the sample sets. In total, 25 000 pixels were taken as training samples. The grey values of the B and S components of each pixel were calculated individually and served as input parameters of the network. The five regions of interest were used as output.

Fig. 2 Cucumber plant images in hue/saturation/intensity (HSI) space. A, HSI image. B, H component image. C, S component image. D, I component image.

A three-layer BP neural network was adopted for image segmentation. The network was developed with two neurons in the input layer and five neurons in the output layer. It was trained by the resilient propagation (RPROP) algorithm, which provides much faster convergence and smaller computational complexity. The number of hidden layer neurons was calculated by the empirical formula (Eqn 1).

h = √(n + m) + a,  a ∈ [1, 10]    (1)

h = number of hidden layer neurons
n = number of input neurons
m = number of output neurons

Further, the number was adjusted according to the image's segmentation effect and the convergence rate of the network. Through tests, the optimal number was found to be 12.

The weights and biases of each neuron were initialised by the Nguyen-Widrow method. Each hidden and output layer had a sigmoid transfer function. The training performance goal and the learning rate were both set at 0.01. The number of epochs was kept at 10 000 to allow sufficient events before the training was stopped. To ensure reliability and generalisation ability of the network, the number of training samples should satisfy the following formula (Eqn 2).

N > W/e    (2)

N = number of training samples
W = number of weights which needed to be adjusted
e = performance goal

Image segmentation using BP network
After colour feature extraction, the images were segmented by the neural network that had already been trained, as shown in Fig. 3. The result was a binary image, in which the fruit targets were white and the others were black. Most leaf, sky and soil regions were removed. However, many stem regions were still retained because their colour was more similar to fruit. Apart from occluded parts, only a small amount of data from the cucumber fruit region was lost. Thus, the result was convenient for post image processing.

POST IMAGE PROCESSING

Cucumber fruit recognition
Although most fruit regions were segmented, they were discontinuous. Dilation-erosion was a good method for filling some region gaps and holes. A 5 × 5 diamond-shaped structuring element was used in the morphological processing. Based on prior knowledge, the average projected area and length of an unoccluded cucumber were about 12 615 and 374 pixels in an image. If the fruit was partially obscured (occluded) by leaves or stems, it would be divided into several sections after image segmentation.
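As a rough numerical check of the network design rules above (Eqn 1 and Eqn 2), the reported configuration can be evaluated directly. The helper names below are ours, and counting biases in W is our assumption:

```python
import math

def hidden_neurons(n_in, n_out, a):
    """Eqn 1 in one common form: h = sqrt(n + m) + a, with a in [1, 10].
    The paper further tuned the result, settling on 12 hidden neurons."""
    assert 1 <= a <= 10
    return round(math.sqrt(n_in + n_out) + a)

def weights_count(n_in, n_hidden, n_out):
    """Adjustable parameters of a fully connected three-layer network
    (weights plus biases; whether W counts biases is our assumption)."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

def enough_samples(n_samples, n_weights, goal):
    """Eqn 2: the training set must satisfy N > W / e."""
    return n_samples > n_weights / goal

# Paper's configuration: 2 inputs (B, S), 12 hidden neurons, 5 outputs,
# 25 000 training pixels, performance goal e = 0.01.
W = weights_count(2, 12, 5)
print(W, enough_samples(25000, W, 0.01))
```

With these numbers W = 101 and W/e = 10 100, so the 25 000 training pixels comfortably satisfy the Eqn 2 condition.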


Fig. 5 Two templates.

Fig. 3 Result of image segmentation.


Fig. 4 Image after dilation-erosion and small-target removal processing.

Considering this condition, an area threshold (Ath) was set at 4025, which was one-third of the whole projected area of the fruit. Then, those small targets whose areas were below Ath were marked and removed from the image. Figure 4 shows the image processing result.

In Fig. 4, some parts of the leaves and stems were detected by mistake. The template operation method was used to remove these parts (Hayashi et al. 2002). The mature cucumber fruit has a long cylindrical shape and its attitude is near-vertical, so the vertical long portions in an image were recognised as fruit. Therefore, two templates were built. The templates were binary images of the same size as Fig. 4; their detailed dimensions are shown in Fig. 5. They are composed of horizontal and vertical white lines which are just 1 pixel wide. The distances between adjacent horizontal lines and between adjacent vertical lines are 80 pixels and 1 pixel, respectively.

By a logical operation (AND) with the templates, Fig. 4 was vertically divided, and every object in Fig. 4 was divided into many regions, as shown in Fig. 6A,B. For each object, only the region with the maximum area remained; Fig. 6C,D show the maximum areas of all objects. Then, a logical operation (AND) between these two images was performed, so that several long lines were obtained (Fig. 6E). These lines were joined by morphological closing and, finally, according to Ath, the resulting object was recognised as the fruit (Fig. 6F).

After a fruit was detected, its barycentre and centreline could be obtained easily. This provided the basis for spatial position determination.

Fruit upper part determination
Cucumber fruit with more burrs on the surface look fresher and have higher economic value. To avoid injuring the burrs, the upper part of the fruit should be grasped when picking by hand; similarly, a robot end-effector should grasp the upper part of the cucumber. By measurement and statistics, the length of this part is one-quarter of the whole fruit. The texture of this part is smooth, while that of the remainder is rough, so this region can be found by texture analysis. The upper part of the fruit is indicated in Fig. 7. From an initial point (A), a series of rectangular windows of 20(H) × 40(V) pixels was drawn along the fruit centreline to the end point (B), with their centre points located on the centreline. The distance between adjacent centre points was 5 pixels.

In each window, the original colour image was translated into a grey image. Then, some descriptors of texture based on the intensity histogram were obtained, such as mean, standard deviation, smoothness and third moment. By comparison, the "third moment" was the most effective parameter for upper part determination. Figure 8 shows the parameter variation from lower to upper parts along the fruit centreline.

According to the median of the "third moment", the approximate demarcation line could be obtained straightforwardly. The portion above the demarcation line was considered as the upper part. This provided grip-position information for the robot end-effector.
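The sliding-window third-moment scan described above can be sketched as follows. The function names are ours, and the handling of windows at image borders is simplified relative to whatever the authors implemented:

```python
import numpy as np

def third_moment(window):
    """Third central moment of the window's intensities, the texture
    descriptor used here to find the smooth upper part of the fruit."""
    g = window.astype(float).ravel()
    return float(np.mean((g - g.mean()) ** 3))

def scan_centreline(grey, centreline, win_h=40, win_w=20, step=5):
    """Slide 20(H) x 40(V) windows along the fruit centreline, 5 pixels
    apart, and return the third moment in each window (cf. Fig. 8).

    grey: 2-D grey image; centreline: list of (row, col) pixels from
    initial point A to end point B.
    """
    values = []
    for row, col in centreline[::step]:
        r0 = max(row - win_h // 2, 0)
        c0 = max(col - win_w // 2, 0)
        win = grey[r0:r0 + win_h, c0:c0 + win_w]
        if win.size:
            values.append(third_moment(win))
    return values
```

The demarcation line can then be taken where the per-window values cross the median of the scan, as the paper describes.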

Fig. 7 Fruit upper part determination (diagram showing the upper part, initial point A, end point B, the scanning window and the fruit centreline).

Fig. 6 Template operation method. A, Logical operation between Fig. 4 and template A. B, Logical operation between Fig. 4 and template B. C, Maximum areas of all objects in A. D, Maximum areas of all objects in B. E, Logical operation between C and D. F, Morphological closing.
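The area-threshold test used above, both for small-target removal and for the final check in Fig. 6F, can be sketched with a plain flood-fill labelling. The helper names are ours; a library routine such as scipy-style connected-component labelling would do the same job:

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """4-connected component labelling by flood fill (a plain-Python
    stand-in for the labelling any morphology library provides)."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for r, c in zip(*np.nonzero(binary)):
        if labels[r, c]:
            continue
        current += 1
        labels[r, c] = current
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def remove_small_targets(binary, area_threshold=4025):
    """Drop connected regions smaller than Ath (the paper uses 4025,
    one-third of an average cucumber's projected area)."""
    labels, n = label_regions(binary)
    keep = np.zeros_like(binary)
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() >= area_threshold:
            keep |= mask
    return keep
```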

Fig. 8 Third moment variation.


TEST RESULTS AND ANALYSIS

The recognition algorithm was tested on 40 cucumber plant images. The number of mature fruits visible in each image was counted by human observers: three people counted the fruits in all 40 images manually, and the average of their counts was used for comparison. The experimental results show that the recognition rate of mature fruits was about 76%. The reasons for erroneous identification were: (1) fruits with brighter surfaces were falsely identified as leaves; (2) fruit clusters were sometimes counted as a single fruit; and (3) when, because of partial occlusion, only a small portion of a fruit was visible, it was sometimes removed as noise.

Highly non-uniform illumination in the image and colour similarity caused difficulty for colour-based image segmentation. In the cucumber greenhouse, only the backlighting condition was considered. The algorithm worked worse under front lighting, with only half of the fruits being recognised. This indicated that an auxiliary lighting source technique could improve image quality and recognition accuracy.

CONCLUSION

A method is presented that can be used to recognise cucumber fruits. This worked well for fruits under backlight conditions in a greenhouse. The following conclusions were reached:
(1) A three-layer BP neural network was successfully used for image segmentation. After segmentation, the fruit targets were preserved while most other portions of the image were rejected.

(2) Two templates composed of horizontal and vertical white lines were defined. The non-fruit parts remaining in a discriminated image were removed by logical operations employing these templates. The fruits were usually recognised.
(3) By texture analysis, the parameter "third moment" was selected for determination of a fruit's upper part. According to the median of the "third moment", the approximate demarcation line could be obtained. The portion above the line was considered as the upper part. This provided target grip-position information for a robot end-effector.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the financial support provided by the 863 project (2007AA04Z222).

REFERENCES

Annamalai P, Lee WS 2003. Citrus yield mapping system using machine vision. ASAE Paper No. 031002.

Annamalai P, Lee WS, Burks T 2004. Colour vision system for estimating citrus yield in real-time. ASAE Paper No. 043054.

Bulanon DM, Kataoka T, Ota Y et al. 2000. Estimating of apple fruit location using machine vision system for apple harvesting robot. Proceedings of the XIV Memorial CIGR World Congress. Pp. 574-579.

Bulanon DM, Kataoka T, Ota Y, Hiroma T 2001. A machine vision system for the apple harvesting robot. Agricultural Engineering International: the CIGR Journal of Scientific Research and Development III: 1-11.

Bulanon DM, Hiroma T, Ota Y et al. 2002. A segmentation algorithm for the automatic recognition of Fuji apples at harvest. Biosystems Engineering 83(4): 405-412.

Bulanon DM, Kataoka T, Okamoto H et al. 2004. Development of a real-time machine vision system for the apple harvesting robot. In: SICE 2004 Annual Conference, Sapporo, Japan, 4-6 August. Pp. 595-598.

Cai Jianrong, Zhao Jiewen 2005. Recognition of mature fruit in natural scene using computer vision. Transactions of the CSAM 36(2): 61-64 (in Chinese).

Chinchuluun R, Lee WS 2006. Citrus yield mapping system in natural outdoor scenes using the Watershed transform. ASAE Paper No. 063010.

Foglia MM, Giulio R 2006. Agricultural robot for radicchio harvesting. Journal of Field Robotics 23(6/7): 363-377.

Hayashi S, Ganno K, Ishii I, Tanaka I 2002. Robotic harvesting system for eggplants. Japan Agricultural Research Quarterly 36(3): 163-168.

Kondo N, Nakamura H, Monta M et al. 1994. Visual sensor for cucumber harvesting robot. In: Proceedings of the Food Processing Automation Conference III: 461-470.

Regunathan M, Lee WS 2005. Citrus yield mapping and size determination using machine vision and ultrasonic sensors. ASAE Paper No. 053017.

Van Henten EJ, Hemming J, van Tuijl BAJ et al. 2002. An autonomous robot for harvesting cucumbers in greenhouses. Autonomous Robots 13(3): 241-258.

Van Henten EJ, van Tuijl BAJ, Hoogakker GJ et al. 2006. An autonomous robot for de-leafing cucumber plants grown in a high-wire cultivation system. Biosystems Engineering 94(3): 317-323.

Xu Huirong, Ye Zunzhong, Ying Yibin 2005. Identification of citrus fruit in a tree canopy using colour information. Transactions of the CSAE 21(5): 98-101 (in Chinese).

Zhao J, Tow J, Katupitiya J 2005. On-tree fruit recognition using texture properties and colour data. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2-6 August. Pp. 263-268.
