

February 1, 2000

MACHINE VISION IDENTIFICATION OF TOMATO SEEDLINGS FOR AUTOMATED WEED CONTROL

L. Tian, MEMBER ASAE; D. C. Slaughter, MEMBER ASAE; R. F. Norris

ABSTRACT
A machine vision system to detect and locate tomato seedlings and weed plants in a
commercial agricultural environment was developed and tested. Images acquired in agricultural
tomato fields under natural illumination were studied extensively, and an environmentally
adaptive image segmentation algorithm was developed to improve machine recognition of plants
under these conditions. The system was able to identify the majority of non-occluded target
plant cotyledons, and to locate plant centers even when the plant was partially occluded. Of all
the individual target crop plants, 65% to 78% were correctly identified, and less than 5% of the
weeds were incorrectly identified as crop plants.
Keywords. Machine vision, pattern recognition, tomato, weeds.

INTRODUCTION
Agricultural production experienced a revolution in mechanization over the past century.
However, due to the working environment, plant characteristics, or costs, there are still tasks
which have remained largely untouched by the revolution. Hand laborers in the 1990s may still
have to perform tedious field operations that have not changed for centuries. Identifying
individual crop plants in the field and locating their exact positions is one of the most important
tasks needed to further automate farming. Only with the technology to locate individual plants
can "smart" field machinery be developed to automatically and precisely perform treatments
such as weeding, thinning, and chemical application.
Early studies of machine vision systems for outdoor field applications concentrated
mainly on robotic fruit harvesting. Parrish and Goksel (1977) first studied the use of machine
vision for fruit harvesting. In France, a vision system was developed at the
CEMAGREF center to pick apples (Grand d'Esnon et al., 1987). Slaughter and Harrel (1989)
developed a machine vision system that successfully picked oranges in the grove. Fruits
generally have regular shapes and are often distinguishable by their unique color when compared
to the color of the background foliage. Less work has been done on outdoor plant identification.
Jia et al. (1990) investigated the use of machine vision to locate corn plants by finding the main
leaf vein from a top view. Unfortunately this technique is not applicable to most dicot row
crops. A group of researchers at the University of California at Davis have developed a machine

vision guided cultivator for between-row cultivation (Slaughter et al., 1996). This color machine
vision system could identify the center of the crop row from field images of crops such as
tomato, lettuce or cotton even when weeds were present, however it did not identify plants on an
individual basis.
Object shapes have proven to be one of the most important ways of describing biological
targets. Under a controlled indoor environment, some researchers have studied the problem of
using machine vision to identify individual biological targets. Semantic shape description was
used in cytopathology to detect abnormal cells (Tucker, 1976). For non-occluded plant seedling
species identification, Guyer et al. (1986) used four semantic shape features in a classifier and
showed that up to 91 percent of the sample plants could be correctly recognized. Woebbecke et
al. (1992) used a group of semantic shape features in their research on plant species
identification. Using five experimental plant species grown in pots in a greenhouse they
observed that the performance of their features were functions of the plant growth stages. They
also found that the features were sensitive to plant species, with some features being useful for
only certain plant species. Using leaf shape as a source of information, Franz et al. (1991) tried
to create a general description for plant leaves. Experiments showed that completely visible
leaves (no occlusion) could be identified by aligning a curvature function of each leaf with leaf
curvature models. For the partially occluded leaves, a Fourier-Mellin correlation was used to
calculate re-sampling curvature functions which were then aligned with each model.
Machine vision recognition of plant leaf shape is still at the stage of studying individual
potted plants which are viewed under a controlled indoor environment. Differences between the
uncontrolled outdoor working environment of agriculture and the controlled environment of
indoor facilities require that robotic systems designed for the agricultural sector be more
adaptable than their industrial counterparts and that the sensors for such robotic systems be
capable of operating in these unstructured environments.
The purpose of this research was to investigate the feasibility of using a machine vision
system to distinguish individual crop plants from weeds in the natural outdoor environment of a
commercial agricultural field. The processing field tomato plant (Lycopersicon esculentum L.)
was selected as the target crop plant for this study. The seedling plant stage was selected
because it is appropriate for many labor intensive operations (e.g. weeding and thinning) and
because plant leaf occlusion is at its minimum at this early stage.
Many different weed species grow in tomato fields. The common species of weeds and
their growth condition change from field to field. To simplify the problem of plant
identification, tomato plants were distinguished from all other plant species by the vision system.
In this study, all non-tomato plants found in the field were classified as weeds. This approach is


consistent with the use of a machine vision system for automated weeding which would treat all
weeds in the same manner independent of species.

MATERIALS AND METHODS

The experimental data was collected in vivo under normal California commercial farming
conditions. The images for this study were taken from 13 different commercial tomato fields in
the spring of 1994. The commercial tomato fields selected for this study contained tomato plants
which ranged in developmental age from cotyledon to first true leaf stage. Tomato varieties
included Halley 3155, Brigade and many other popular commercial varieties grown in Northern
California. The most common weeds in the images included Hairy Nightshade (Solanum
sarrachoides Sendtner), Ground cherries (Physalis spp.), Jimsonweed (Datura stramonium L.),
Yellow Nutsedge (Cyperus esculentus L.), Field Bindweed (Convolvulus arvensis L.),
Johnsongrass (Sorghum halepense (L.) Pers.), and Little Mallow (Malva parviflora L.).
The equipment setup for outdoor image collection is shown in figure 1. There were two
cameras in the vision system: a front camera for row guidance and a rear camera for the in-row
detection of individual plants. The front camera was tilted forward, facing the direction of travel,
to allow a 1 to 2m section of the crop row to be viewed enabling the vision system to calculate an
accurate row center line (Slaughter et al., 1996) to position the rear camera directly above the
seedline. The rear camera was mounted directly behind the vision guided toolbar with the lens
pointed vertically down toward the plant row to capture a top view. The image covered an area
of approximately 130 x 95 mm of the bed. The longer edge of the image was
parallel to the seedline, which was also parallel to the direction of travel. A Sony CCD-VX3
color camcorder (Sony Corp.) was used as the rear camera. Real-time video images were
recorded on Hi8 metal video tape. All images were digitized (off-line) into 24 bit 640 x 480
pixel RGB color images using a Macintosh IIfx computer, a RasterOps (model 24XLTV) color
frame grabber, and Media Grabber 2.1 software.
The crop rows were traversed in the same serpentine pattern in which they were planted.
Thus the angle of sunlight changed from row to row, field to field, and from time to time. The
camcorder shutter speed was set to 1/500 of a second to prevent blurred images, caused not only
by the travel speed but also by the high-frequency oscillation of field plants in the wind.
Knowledge Based Vision (KBV, America Artificial Intelligence, Inc., Amherst, MA)
system software on a Sun Sparcstation (Sun Microsystems, Inc., model IPX) was used for
algorithm development and evaluation. Algorithms written in C, and Common LISP were
integrated into the routine tasks of the KBV system for image analysis. SAS/STAT (SAS


Institute Inc., Cary, NC) statistical procedures were used for morphological feature selection
and in classical discriminant analysis for system evaluation.

A tomato plant has a generally green color, a highly irregular leaf shape, and an open plant
structure, all of which make it a challenging crop plant to identify in the field.
To identify tomato plants and distinguish them from weeds, a two-stage plant identification
algorithm was developed. The first stage was a color pre-processing operation to segment all
plants from the background. The second stage was a pattern recognition analysis to locate the
center of and identify the type of each individual plant. These algorithms were trained with a
training set of 30 images. An independent set of 271 images was used to evaluate the
performance of the system. Based on the lighting conditions and the plant leaf shapes, these
images were divided into three quality groups: high, fair, and poor. There was
no overlap between the images in the training set and those in the evaluation set and the images
in the evaluation set included environmental conditions (e.g. lighting) outside those found in the
training set.
To distinguish plant materials from background objects in a color image, a color
segmentation image processing step was conducted in which objects were classified into one of two
classes (plant and background) by their color differences in red, green, blue (RGB) color space. In this
study, the changing illumination conditions encountered in the outdoor fields prevented the use
of a static segmentation algorithm.
The idea of an environmentally adaptive segmentation algorithm (EASA) was developed
to simulate the human vision system which makes necessary adjustments to accommodate the
changing lighting environment when operating outdoors. The EASA was designed to learn from
local conditions in each field or for each time, including the specific lighting and color
conditions of the different crop plant varieties, the weeds, and the soil. The kernel of an EASA
was the adaptive or self-learning process. Since the general data structure properties of our
image were known (e.g., the object should be green, the background anything but green, both
the object and background classes are distributed close to the gray-scale axis, and the object class
is very close to a normal distribution), a modified clustering method called partially-supervised
learning (PSL) was introduced. The PSL procedure started from a set of previously
selected seed colors for each class. The pixels in the sample images were then clustered using
the nearest neighbor technique (Duda and Hart, 1973) until the required number of classes was
found. The program then displayed the classification results to the operator, rendering each class
of objects in the image in a different color. The operator decided what group(s) of
cluster regions should be considered as “object” (tomato cotyledons). The mean and covariance


matrices for each class were then used in a Bayesian classifier (Duda and Hart, 1973) to develop
a lookup table (LUT) for real-time color image segmentation.
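The original implementation was written in C and Common LISP within the KBV system; the sketch below is only an illustration of the PSL-plus-LUT idea in Python with NumPy. All names, the coarse 8-level quantization of the color cube, and the equal-prior assumption in the Bayesian rule are ours, not the authors'.

    import numpy as np

    def cluster_to_seeds(pixels, seeds):
        # PSL step: assign each RGB pixel (N, 3) to the nearest seed color (K, 3).
        d = np.linalg.norm(pixels[:, None, :] - seeds[None, :, :], axis=2)
        return np.argmin(d, axis=1)

    def gaussian_log_score(x, mean, cov):
        # Log multivariate-normal density, up to an additive constant.
        diff = x - mean
        inv = np.linalg.inv(cov)
        return -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff) \
               - 0.5 * np.log(np.linalg.det(cov))

    def build_lut(plant_pixels, background_pixels, step=8):
        # plant_pixels / background_pixels: (N, 3) RGB samples from the clusters
        # the operator labeled as "object" and "background" respectively.
        stats = [(p.mean(axis=0), np.cov(p, rowvar=False))
                 for p in (plant_pixels, background_pixels)]
        grid = np.arange(0, 256, step, dtype=float)
        r, g, b = np.meshgrid(grid, grid, grid, indexing='ij')
        colors = np.stack([r.ravel(), g.ravel(), b.ravel()], axis=1)
        scores = [gaussian_log_score(colors, m, c) for m, c in stats]
        # Equal priors assumed: a color is "plant" wherever its class score wins.
        return (scores[0] > scores[1]).reshape(r.shape)

    def segment(image, lut, step=8):
        # One table lookup per pixel of an (H, W, 3) uint8 image.
        idx = image.astype(int) // step
        return lut[idx[..., 0], idx[..., 1], idx[..., 2]]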
The boundary of the objects in the image may have to be smoothed to eliminate noise.
Logical image processing was used to eliminate noise, smooth irregular object boundaries, and
separate objects which slightly touched (Tian and Slaughter, 1993). To increase the
identification accuracy, a special object-partition algorithm, the watershed (Vincent and Soille,
1991), was used for separating partially occluded plant leaves (Tian and Slaughter, 1993).
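Vincent and Soille's immersion watershed is now available off the shelf; as a sketch only (scikit-image postdates this work, and the marker strategy is an assumption), touching leaves can be split at the narrow necks of the binary mask:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_touching_leaves(mask, min_distance=10):
        # mask: 2-D boolean plant/background image from the color segmentation.
        distance = ndi.distance_transform_edt(mask)      # peaks near leaf centers
        coords = peak_local_max(distance, min_distance=min_distance, labels=mask)
        markers = np.zeros(mask.shape, dtype=int)        # one marker per leaf
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        # Flooding the inverted distance map makes basins meet at leaf boundaries.
        return watershed(-distance, markers, mask=mask)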
The 13 morphological features identified as appearing to have promise for
distinguishing between tomato cotyledons and weeds were:
Perimeter (PRI) was the count of the number of boundary pixels.
Centroid (CEN) was the center of the area of an object, the average location of all of its pixels.
Pixel-count (PXC) was the area of an object in an image.
Height (HET) was the difference between the largest and the smallest vertical coordinate plus
one.
Width (WID) was the difference between the largest and the smallest horizontal coordinate
plus one.
The major and minor axes (MJX and MNX) were the axes of the best-fit ellipse for the object.
The ratio of area to length (ATL) was defined as:

    ATL = PXC / MJX

The compactness (CMP) was the ratio of area to the perimeter squared. In this study, it was
defined as:

    CMP = 16 PXC / PRI^2

Elongation (ELG) was the measurement of how long and narrow an object was. It was
calculated as the difference between the lengths of the major and minor axes of the best-fit
ellipse, divided by the sum of the lengths:

    ELG = (MJX - MNX) / (MJX + MNX)

The logarithm of the ratio of height to width (LHW) gave a symmetric measure of the aspect
ratio of the object. The definition was:

    LHW = log10(HET / WID)

The ratio of perimeter to broadness (PTB) was a measurement of the convexity of a region. It
was defined as:

    PTB = PRI / (2 (HET + WID))

The ratio of length to perimeter (LTP) was a measure of the 2-D distribution pattern of the
boundary of an object. It was defined as:

    LTP = MJX / PRI
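Most of these features map directly onto standard region properties. A sketch (Python with scikit-image; the function name and the assumption that tiny noise blobs were already removed are ours) computing the features for each blob in the smoothed binary image:

    import numpy as np
    from skimage.measure import label, regionprops

    def leaf_features(binary):
        # binary: smoothed plant/background image; one feature dict per blob.
        feats = []
        for r in regionprops(label(binary)):
            pxc = r.area                        # PXC, pixel count
            pri = r.perimeter                   # PRI, boundary length
            het = r.bbox[2] - r.bbox[0]         # HET (max - min row + 1)
            wid = r.bbox[3] - r.bbox[1]         # WID (max - min column + 1)
            mjx = r.major_axis_length           # MJX of the best-fit ellipse
            mnx = r.minor_axis_length           # MNX
            feats.append({
                'CEN': r.centroid,
                'ATL': pxc / mjx,
                'CMP': 16.0 * pxc / pri ** 2,
                'ELG': (mjx - mnx) / (mjx + mnx),
                'LHW': np.log10(het / wid),
                'PTB': pri / (2.0 * (het + wid)),
                'LTP': mjx / pri,
            })
        return feats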

To facilitate their use in a real-time system, the number of features was limited to fewer
than five. Because of complex intercorrelations between features, two heuristic methods were used
to select a final feature subset. Individual feature performance and classical discriminant
analysis (SAS) indicated that a subset of the following 4 features: CMP, ELG, LHW, and LTP
out of the 13 studied would provide the best results (Tian, 1995).
To increase the accuracy and to simplify the algorithm, a Bayesian classifier (Jain, 1989)
was used to build a lookup table for real-time implementation. With this classifier, all the
objects in an image were classified into the cotyledon class or the weed class. If a cotyledon was
not occluded by weeds or other cotyledons the classification was likely to be successful.
Occluded cotyledons could only be recognized using more sophisticated processes.
Identification of occluded cotyledons was found to be necessary in the interpretation of the
position of the whole plant.
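As an illustrative stand-in for this Bayesian classifier (a quadratic Gaussian discriminant is one common reading of it), the four retained features can be classified in a few lines; the scikit-learn call and the random placeholder data are ours:

    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(40, 4))     # placeholder [CMP, ELG, LHW, LTP] rows
    y_train = rng.integers(0, 2, size=40)  # placeholder labels: 1 = cotyledon, 0 = weed

    clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
    labels = clf.predict(X_train[:5])      # in practice: the blobs of a new image

A real-time lookup table like the one described in the paper could then be approximated by evaluating clf.predict once over a quantized grid of feature values, as was done for the color LUT.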
Once the cotyledons were found in an image, the whole plant identification process was
initiated. Distance between cotyledons was the first property used to determine which
cotyledons should be considered as part of the same plant. Cotyledons from the same plant are
normally close to each other, typically less than half a cotyledon length apart. In a field in which
the plants were fairly well separated, a method using this criterion worked fairly well. However,
in outdoor field conditions where the crop plants were planted very close to each other, and with
weeds located anywhere, this method often led to incorrect cotyledon pairing. Figure 4e shows
black squares at the predicted stem locations of each plant when the definition of a whole plant
was based only upon the distance between the cotyledons. To overcome this problem, more
structural features like leaf size, relative position and orientation were used in a syntactic
procedure to describe the whole plant structure. An algorithm that uses whole-plant properties of
field plants can also overcome some of the problems of partial occlusion in an indirect way. The
algorithm developed here was based on the following observations (a code sketch of the Rule 6
pairing test follows the list of rules).


Rule 1. There must be an initial tomato cotyledon (ITC) to begin with.


Rule 2. The stem is always in the line of the extended major axis of a cotyledon.
Rule 3. An occluded cotyledon always belongs to an object with a larger PXC value (binary
image object area) as compared with that of the classified cotyledon.
Rule 4. The occluded cotyledon is the nearest object in the weed sub-set in the immediate
neighborhood of the classified cotyledon.
Rule 5. Incomplete cotyledons caused by incorrect color, extreme position, or twisted shape
always have smaller PXC values than complete cotyledons.
Rule 6. If there is another tomato cotyledon (ATC) within the near neighborhood (defined as
a circle diameter of 1.5*MJX of ITC), the one to be paired with the ITC has the
following properties:
1). The inward end point (EDP) must be closest to the MJX of ITC;
2). The EDP is the closest to the CEN of ITC;
3). The PXC is between 60 % to 130% of PXC of ITC;
4). The angle (α) between MJXs is the smallest and not greater than 20 degrees, as
shown in figure 2.
Rule 7. If there is no cotyledon within the near neighborhood but a possible partially
occluded tomato cotyledon (OTC) exists, the one to be paired with the ITC has to
have the following characteristics:
1). The occluded cotyledon to be paired is the one with a PXC bigger than that of
the ITC, and located near the EDP and within an angle β < 80 degrees as shown
in figure 3;
2). The maximum distance, D in figure 3, between the two boundary intersection
points on the radial line from the nearer end of the ITC is greater than 80% of
the MJX of the ITC.

[Figures 2 and 3: diagrams of the pairing geometry, labeling CEN, MJX, EDP, the angle
α < 20 degrees, the angle β < 80 degrees, and the distance D.]

Figure 2. Example of the syntactic relationship between the initial tomato cotyledon (ITC) and
another tomato cotyledon (ATC) being considered for pairing with the ITC.

Figure 3. Example of the syntactic relationship between the initial tomato cotyledon (ITC) and a
partially occluded tomato cotyledon (OTC) being considered for pairing with the ITC.

Rule 8. If there is no cotyledon (or possible occluded one) within the near neighborhood but a
possible incomplete tomato cotyledon (ETC) exists, the one to be paired with the ITC
has to be:
1). The one with the CEN closest to the MJX of ITC.
2). The one with the CEN closer to the EDP of ITC than to CEN of ITC.
Rule 9. If there is no other object which will form a cotyledon pair within the near
neighborhood, pick an EDP randomly and extend the MJX by 25% from this EDP as
the stem.
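As a sketch of how the core Rule 6 screening might be coded (illustrative Python; the 0.75 MJX radius follows from the stated 1.5 MJX diameter, and the end-point tests of properties 1 and 2 are approximated here by preferring the nearest surviving candidate):

    import numpy as np

    def rule6_pair(itc, candidates):
        # Pick the tomato cotyledon (ATC) to pair with an initial tomato
        # cotyledon (ITC) under Rule 6.  Each cotyledon is a dict holding
        # CEN (row, col), PXC, MJX and theta (major-axis angle, radians).
        best, best_dist = None, None
        for atc in candidates:
            dr, dc = np.subtract(atc['CEN'], itc['CEN'])
            dist = np.hypot(dr, dc)
            if dist > 0.75 * itc['MJX']:        # outside the circle of
                continue                        # diameter 1.5*MJX
            if not 0.6 * itc['PXC'] <= atc['PXC'] <= 1.3 * itc['PXC']:
                continue                        # property 3: 60% to 130% of PXC
            alpha = abs(atc['theta'] - itc['theta']) % np.pi
            alpha = min(alpha, np.pi - alpha)   # angle between undirected axes
            if np.degrees(alpha) > 20.0:
                continue                        # property 4: alpha <= 20 degrees
            # Properties 1 and 2 (end-point proximity) are approximated by
            # keeping the nearest candidate that survived the tests above.
            if best is None or dist < best_dist:
                best, best_dist = atc, dist
        return best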

RESULTS AND DISCUSSION


The color segmentation LUT was trained with one or two images from the same image
data set being processed, meaning they came from similar field conditions. This is equivalent
to an on-demand training procedure in a real-life system. Under real field conditions, the system
would capture an image for training each new LUT when the lighting or field condition changed.
In the laboratory system, a new LUT was created when a new image was found to be quite
different in its lighting and field conditions. Figure 4b shows the result of segmentation of a field
image using a color LUT. Examples of occluded leaves processed with the watershed algorithm
are shown with the identified tomato cotyledons in black and the "weeds" in gray in figure 4d.
Four morphological features (CMP, ELG, LHW, and LTP) were found to provide a high
degree of separation between the two classes (cotyledon leaves and non-cotyledon leaves), figure
5. Leaf recognition experiments were carried out on the training data set to test the
performance of the selected 4-feature subset. The results of this classifier were very promising.
In the training set, more than 95 percent of tomato cotyledons were correctly classified. More
than 98 percent of the non-cotyledons were assigned correctly to the weed class. The overall
error was 3.27 percent.
An example of final stem location using the whole-plant syntactic algorithm is shown in
Figure 4f (predicted stems are shown with black squares). When compared with the results using
distance-based pairing alone (figure 4e) the improvement in correct stem placement can be seen.
This syntactic procedure frequently leads to the correct stem location even when the plants are
close to each other.


More than 65% of the tomato seedlings and more than 95% of the weeds in the 271 field
trial images of the validation set were correctly recognized by the machine vision algorithm,
Table 3. In Table 3, “cotyledon” means the results of leaf detection with the 4-feature classifier
and “plant” means the percentage of successfully recognized whole tomato plants using the
syntactic procedure. In all three different image quality groups, the failures were mainly caused
by heavily overlapped leaves, plant leaf positions which did not allow their morphological
features to be observed (i.e. a vertical leaf), or poor illumination conditions. Algorithm failure
caused by the first two reasons happened randomly. Algorithm failure caused by illumination
changes would happen in a sequence of frames. This failure could be easily detected by the
computer. In the prototype system, a new classifier would be trained and a new LUT created
when a sequence of failed frames was observed.
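The paper does not spell out how a sequence of failed frames was detected; one simple possibility, offered purely as an assumption, is to track the segmented plant-pixel fraction of recent frames:

    def needs_new_lut(plant_fractions, lo=0.02, hi=0.60, run=5):
        # plant_fractions: plant-pixel fraction of each recent frame, newest last.
        # Flag retraining when `run` consecutive frames fall outside the range
        # observed during training (the bounds here are arbitrary placeholders).
        recent = plant_fractions[-run:]
        return len(recent) == run and all(f < lo or f > hi for f in recent)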
Table 3. Overall plant recognition results.

Image quality   Cotyledon leaves           Whole tomato plants        Error (%): weeds
group           correctly identified (%)   correctly identified (%)   identified as tomato plants
high            59.5                       78.3                       3.74
fair            45.59                      67.94                      2.76
poor            40                         65.9                       1.53

To provide an estimate of the maximum crop stand and weed plants remaining along the
seedline after a hypothetical weed control operation using the machine vision algorithm, the
number of whole tomato plants correctly recognized and weed leaves incorrectly recognized per
meter was calculated, Table 4. Theoretically an average of 1.2 weed leaves per meter would be
left after application of a weeding operation based upon this machine vision algorithm. In the
whole-plant syntactic algorithm, each leaf identified as a tomato cotyledon caused the system to
locate a whole plant in the field. So a weed leaf incorrectly identified as a tomato would have
been incorrectly allowed to survive once for every 0.82 m after weeding in this study. If
thinning is done by removing half of the plants remaining, the number of weeds that might
survive would drop to one in every 1.6 m.

Table 4. Plant recognition results along the seedline.

                Tomato                      Weeds
image group     actual       found          actual       missed
                (plants/m)   (plants/m)     (leaves/m)   (leaves/m)
high            16.4         12.8           38.4         0.70
fair            12.1         8.2            107.3        1.80
poor            13.4         8.8            70.5         1.09

The percentage of correct tomato plant recognition was higher than that for the
cotyledons. Many of the plants were identified based only on one cotyledon and the information
in the immediate area of that cotyledon. The overall percentage of tomato plants in the images
that were successfully recognized was greater than 65 percent. This is
much higher than the percentage of plants that typically remain after the current weeding and
thinning operations. Theoretically, three tomato plants per meter would provide the desired yield
in a typical processing tomato field. If the prototype system was to be used in the field, the
number of tomato plants remaining (per meter) after a machine vision based weeding operation
would be as shown in Table 4. More than twice the total number of tomato plants needed for
the thinning operation would remain after weeding.

CONCLUSION
The feasibility of using an outdoor natural-light-only machine vision system to
distinguish between tomato seedlings and weeds was demonstrated. An environmentally
adaptive image segmentation algorithm was used to reduce problems associated with variations
in illumination caused by time of day, field conditions, or even the color
characteristics of the tomato plants. This algorithm extended the dynamic range of the vision
sensing system to meet outdoor lighting conditions.
Object partition methods were developed to minimize the occlusion problem which
frequently occurs in a field when tomato and weed seedlings grow close together. The binary
watershed algorithm showed that cotyledons could be successfully separated from occluding
plants when the overlap was small. Four semantic shape features were used to distinguish
tomato cotyledons from weed leaves, and a whole-plant syntactic algorithm was used to predict the
stem location of the whole plant. Using this technique, more than 65% of the tomato plants could
be successfully detected.

ACKNOWLEDGMENTS
This research has been supported by California Tomato Research Institute (CTRI) and the
University of California Integrated Pest Management Project.

REFERENCES
Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New
York.


Franz, E., M. R. Gebhardt, and K. B. Unklesbay. 1991. Shape description of completely visible
and partially occluded leaves for identifying plants in digital images. Trans. of ASAE
34(2):673-681.
Grand d'Esnon, A., R. Pellenc, G. Rabatel, A. Journeau, and M. J. Aldon. 1987. Magali: a self-
propelled robot to pick apples. ASAE Paper No. 87037, ASAE, St. Joseph, MI.
Guyer, D. E., G. E. Miles, M. M. Schreiber, O. R. Mitchell, and V. C. Vanderbilt. 1986.
Machine vision and image processing for plant identification. Trans. of ASAE.
29(6):1500-1507.
Jain, A. K. 1989. Fundamentals of digital image processing. Prentice Hall, Englewood Cliffs,
NJ.
Jia, J., G. W. Krutz, and H. G. Gibson. 1990. Corn plant locating by image processing. SPIE
Optics in Agriculture 1379:246-253.
Parrish, E. A., Jr., and A. K. Goksel. 1977. Pictorial pattern recognition applied to harvesting.
Trans. of ASAE 20(5):822-827.
SAS. 1990. SAS/STAT User's Guide. SAS Institute Inc., Cary, NC.
Slaughter, D. C., and R. C. Harrell. 1989. Discriminating fruit in a natural outdoor scene for
robotic harvest. Trans. of the ASAE 32(2):757-763.
Slaughter, D. C., R. C. Curley, P. Chen, and D. K. Giles. 1996. Robotic cultivator. U.S. Patent
No. 5,442,552. U.S. Patent & Trademark Office, Washington, D.C.
Tian, L. 1995. Knowledge Based Machine Vision System for Outdoor Plant Identification.
Ph.D. Dissertation. University of California, Davis, CA.
Tian, L. and D. C. Slaughter. 1993. Algorithm for outdoor field plant identification. ASAE
Paper No. 93-3608. ASAE, St. Joseph, MI.
Tucker, J.H. 1976. CERVISCAN: An image analysis system for experiments in automatic
cervical smear screening. Computers in Biomedical Res. 9:93-107.
Vincent, L. and P. Soille. 1991. Watersheds in digital space: an efficient algorithm based on
immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence.
13(6):583-598.
Woebbecke, D.M., G.E. Meyer, K.V. Bargen and D.A. Mortensen. 1992. Plant species
identification, size, and enumeration using machine vision techniques on near-binary images.
SPIE Optics in Agriculture and Forestry 1836:208-217.


[Figure 1: diagrams of the equipment, labeling the stationary toolbar and tools, the toolbar
hydraulic controller, the moving toolbar, the computer, the rear and front cameras, and a
diffuser.]

Figure 1. Setup of the equipment. (a) Top view of the setup; (b) detail of the camera
mountings. The toolbar was controlled to follow the seed line in the field.


Figure 4. Stages of the plant identification algorithm: (a) grayscale version of the raw color
image; (b) color-segmented image (unsmoothed); (c) binary image after smoothing; (d) cotyledon
identification after cutting; (e) distance-based cotyledon pairing and stem (black squares)
location; (f) syntax-based cotyledon pairing and stem (black squares) location.


[Figure 5: scatter-plot matrix of the feature pairs CMP, ELG, LHW, and LTP.]

Figure 5. Scatter plots of the training set leaf features selected for classification.
• = cotyledons; x = other plant leaves.

