Using the intensity I = (R + G + B) / 3, the saturation and hue components are computed as

S = 1 - Min(R, G, B) / I                                  (2)

H = ArcTan( sqrt(3) * (G - B) / ((R - G) + (R - B)) )     (3)

The results of the HSI transformation are shown in Figure 4.

Fig. 4: Images after HSI transformation
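The HSI transformation above can be sketched in NumPy; this is a minimal version of the arctangent form of the hue equation, assuming RGB channels already normalized to [0, 1]:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to HSI
    using the arctangent form of the hue equation."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10                                            # guard against /0
    I = (R + G + B) / 3.0                                  # intensity
    S = 1.0 - np.minimum(np.minimum(R, G), B) / (I + eps)  # saturation, Eq. (2)
    H = np.arctan2(np.sqrt(3.0) * (G - B),
                   (R - G) + (R - B))                      # hue, Eq. (3)
    return H, S, I
```

For a gray pixel the saturation is zero, and for a saturated primary it is one, as expected from Eq. (2).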
Histogram Processing. The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function

p(r_k) = n_k / n,   k = 0, 1, ..., L-1                    (4)

where L is the number of gray levels (usually 256), n is the total number of image pixels, and n_k is the number of pixels having intensity level k. The image histogram carries important information about the image content. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. A plot of this function for all values of k provides a global description of the appearance of an image. If the pixel values are concentrated in the low image intensities, as can be seen in Fig. 5(a), the image is 'dark'. A 'bright' image has a histogram that is concentrated in the high image intensities, as seen in Fig. 5(b). In Fig. 5(c), the histogram has a narrow shape, which indicates little dynamic range and thus corresponds to an image having low contrast. As all gray levels occur toward the middle of the gray scale, the image would appear a murky gray. In Fig. 5(d), the histogram possesses significant spread, corresponding to an image with high contrast. The histogram therefore gives us useful information about the possibility for contrast enhancement. The results of generating the histogram of the defected image are shown in Figure 6.

Fig. 5: Histograms corresponding to the four basic image types

Fig. 6: Histogram of the Defected Image of Figure 4
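Equation (4) amounts to a normalized bin count; a minimal sketch for integer gray images:

```python
import numpy as np

def gray_histogram(img, levels=256):
    """Normalized histogram p(r_k) = n_k / n of an integer gray image."""
    n_k = np.bincount(img.ravel(), minlength=levels)  # pixel count per level
    return n_k / img.size                             # Eq. (4)
```

By construction the returned values sum to one, matching the probability interpretation of p(r_k).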
Intensity Adjustment. Frequently, image intensity values do not make full use of the available dynamic range. This can be easily observed in the histogram shown in Fig. 7. Intensity adjustment is a technique for mapping an image's intensity values to a new range. This situation can be corrected by stretching the histogram over the available dynamic range, as shown in Fig. 8. We generally map the minimum value to zero and the maximum value to 1.
The appropriate transformation is given by the following formula:

b(m, n) = 0                                              if a(m, n) <= p_low
b(m, n) = ((a(m, n) - p_low) / (p_high - p_low))^gamma   if p_low < a(m, n) < p_high    (5)
b(m, n) = 1                                              if a(m, n) >= p_high

Fig. 7: Low Contrast Image
Fig. 8: High Contrast Image
Depending on the value of gamma, the mapping between values in the input and output images may be linear or nonlinear. Gamma can be any value between 0 and infinity. If gamma is 1, the mapping is linear. If gamma is less than 1, the mapping is weighted toward higher (brighter) output values. If gamma is greater than 1, the mapping is weighted toward lower (darker) output values.
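This gamma-weighted stretch can be sketched as follows; it mirrors the behavior of MATLAB's imadjust, although the paper does not name a specific implementation:

```python
import numpy as np

def adjust_intensity(a, p_low, p_high, gamma=1.0):
    """Map intensities in [p_low, p_high] to [0, 1] with gamma weighting,
    clipping values outside the range (the piecewise rule of Eq. (5))."""
    a = np.asarray(a, dtype=float)
    stretched = (a - p_low) / (p_high - p_low)   # linear stretch
    return np.clip(stretched, 0.0, 1.0) ** gamma # clip, then gamma-weight
```

Clipping before exponentiation reproduces the three cases of the piecewise rule in one expression: values at or below p_low map to 0, values at or above p_high map to 1.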
To get the p_low and p_high thresholds of the histogram, we have to calculate the probability density function of the histogram, as shown in Fig. 10. After obtaining the two thresholds p_low and p_high, the transformation of Eq. (5) can be applied.
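The paper does not state exactly how p_low and p_high are read off the PDF; one common choice, assumed here, is to clip a small fraction of the darkest and brightest pixels via the cumulative distribution:

```python
import numpy as np

def histogram_limits(img, low_frac=0.01, high_frac=0.99, levels=256):
    """Pick p_low / p_high as the gray levels where the cumulative PDF
    crosses low_frac and high_frac. The 1%/99% cut-offs are an assumption;
    the paper does not state the exact fractions."""
    pdf = np.bincount(img.ravel(), minlength=levels) / img.size
    cdf = np.cumsum(pdf)                          # cumulative distribution
    p_low = int(np.searchsorted(cdf, low_frac))   # first level past low_frac
    p_high = int(np.searchsorted(cdf, high_frac)) # first level past high_frac
    return p_low / (levels - 1), p_high / (levels - 1)  # normalized to [0, 1]
```

The returned pair can be fed directly into the transformation of Eq. (5).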
Fig. 9: Gamma Correction (mapping curves for gamma < 1, gamma = 1, and gamma > 1)

Fig. 10: PDF for histogram of Figure 6
3.2 Segmentation Phase
Image segmentation is the first step in image analysis and pattern recognition. It is a critical and essential step and is one of the most difficult tasks in image processing, as it determines the quality of the final result of the analysis. The problem of segmentation has been broadly investigated by scientists using both classical [12,13] and fuzzy-based techniques [14-17]. Classical segmentation approaches take crisp decisions about the regions. However, regions in an image are not always crisply defined, and uncertainty can arise within each level of image processing, as in the problem we address. Most plant images are represented by overlapping grayscale intensities for different tissues. In addition, borders between tissues are not clearly defined, and memberships in the boundary regions are intrinsically fuzzy. Fuzzy set theory provides a mechanism to represent and manipulate this uncertainty and ambiguity. Therefore, fuzzy clustering turns out to be particularly suitable for the segmentation of plant images. One widely used algorithm is the fuzzy c-means (FCM) algorithm, which was first presented by Dunn [18] and further developed by Bezdek [19]. Subsequently, it was revised by Rouben [20], Gu [21], and Xie [22]. However, Bezdek's FCM remains the most commonly used algorithm.
The segmentation of defected plant images involves partitioning the image space into different cluster regions with similar image intensity values. The success of applying FCM to the segmentation problem depends mainly on adapting the input parameter values [23,24]. As a consequence, if any of the parameters is assigned an improper value, the clustering results in a partitioning scheme that is not optimal for the specific data set, which leads to a wrong decision. These parameters include the features of the data set, the optimal number of clusters, and the degree of fuzziness. Based on experiments with these parameters, we have shown that a good cluster number for leaf spots is 4 and that the degree of fuzziness is 2 [25]. We have applied those parameters to our data set, and the results are presented in Figures 12-15. In these figures, the top row represents the original defected input images, while the bottom row represents abnormalities detected via segmentation.

Fig. 11: Images after Intensity Adjustment
Fig. 12: Segmentation Results on some Literature Images
Fig. 13: Original and segmented CLAES images for Leafminer
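Bezdek's FCM iteration with the parameters discussed above (c = 4 clusters, fuzziness m = 2) can be sketched for one-dimensional intensity data; this is a minimal version, not the paper's exact implementation:

```python
import numpy as np

def fcm(x, c=4, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on 1-D intensity samples x: alternate between the
    center update and the membership update of Bezdek's algorithm."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)     # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-10
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)             # membership update
    return centers, u
```

Pixels are then assigned to the cluster with the largest membership, which yields the segmented regions.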
3.3 Feature Extraction Phase
The third phase is the feature extraction phase. The purpose of feature extraction is to reduce the image data by measuring certain features or properties of each segmented region, such as color, shape, or texture. This phase consists of two steps, namely spot isolation and spot extraction.

Spot Isolation. Often, a segmented image consists of a number of spots. In order to extract features from an individual spot, it is necessary to have an algorithm that identifies each spot. To identify the spots, we label each spot with a unique integer; the largest integer label gives the number of spots in the image. Such an identification algorithm is called component labeling [26]. The following figure depicts the binary segmented image and the labeled image after applying the component-labeling algorithm.

Fig. 14: Original and segmented CLAES images for Downy
Fig. 15: Original and Segmented CLAES Images for Powdery
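A breadth-first sketch of component labeling follows; 4-connectivity is an assumption here, as the cited algorithm may equally use 8-connectivity:

```python
from collections import deque

def label_components(binary):
    """Label 4-connected foreground components of a binary image
    (list of lists of 0/1); returns the label image and the spot count."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                current += 1                      # new spot found
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:                      # flood-fill the spot
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current   # largest label == number of spots
```

As in the text, the largest assigned integer is exactly the number of spots in the image.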
Feature Extraction. In order to recognize the spot category, we measure a number of features from the segmented image, to be used later for classification purposes. Some features correspond to color characteristics of the spots, such as the mean gray level of the red, green, and blue channels of the spots. Other features correspond to morphological characteristics of the spots [26,27], such as:
1. The length of the principal axes: the major and minor axis lengths of a spot.
2. The diameter of a spot, measured as:
   Diameter = SQRT(4 * Area / PI)                                    (6)
3. Eccentricity measure: also called the circularity ratio, its value lies in [0, 1]. A spot whose circularity ratio is zero is a circle, while a spot whose circularity ratio is one is a line. The circularity ratio is computed as:
   Eccentricity = 2 * SQRT((Major/2)^2 - (Minor/2)^2) / Major        (7)
4. Compactness measure: also called the solidity ratio, has a value in (0, 1]. If the spot has a solidity value equal to 1, this means that it is fully compact. It is the ratio between the area of the spot and its convex area, computed as:
   Solidity = SpotArea / ConvexArea                                  (8)
5. Extent measure: also called the rectangularity ratio, has a value in (0, 1]. When this ratio has the value one, the shape is a perfect rectangle. It is computed as:
   Extent = SpotArea / BoundingBoxArea                               (9)
6. Euler's number measure: this measure describes a simple, topologically invariant property of the spot. It is computed as the number of objects in the region minus the number of holes in those objects.
7. Orientation measure: the angle in degrees between the x-axis and the major axis of the spot.
Figure 17 presents the calculated measures for the three previous classes of spots.
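Measures (6), (7), and (9) can be computed directly from a spot mask, as sketched below; solidity (8) is omitted because it needs a convex-hull routine, and the major/minor axis lengths are assumed to be supplied by a separate ellipse fit:

```python
import math
import numpy as np

def spot_measures(mask, major, minor):
    """Compute the diameter (6), eccentricity (7), and extent (9) measures
    for one spot given its boolean mask and fitted axis lengths."""
    area = int(mask.sum())
    ys, xs = np.nonzero(mask)
    bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    diameter = math.sqrt(4.0 * area / math.pi)                     # Eq. (6)
    eccentricity = 2.0 * math.sqrt((major / 2) ** 2
                                   - (minor / 2) ** 2) / major     # Eq. (7)
    extent = area / bbox_area                                      # Eq. (9)
    return diameter, eccentricity, extent
```

For a filled axis-aligned rectangle the extent is exactly one, and for equal axis lengths the eccentricity is zero, matching the interpretations in the list above.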
Fig. 16: Labeling Segmented Image
4. Features Database
The features database is the component used to store the outputs of the feature extraction phase for later use by the classifier. The database is a relational one consisting of two tables: a disorder table, which is used to keep track of the disorders that have been processed, and a feature table, which is used to store the spot features for each disorder. The database created contains 1500 records, 300 records per class.
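A sketch of the two-table layout in SQLite follows; the paper only names the tables, so all column names here are assumptions drawn from the feature list above:

```python
import sqlite3

# In-memory database standing in for the features database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE disorder (
    disorder_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL            -- e.g. 'Downy', 'Leafminer'
);
CREATE TABLE feature (
    feature_id  INTEGER PRIMARY KEY,
    disorder_id INTEGER REFERENCES disorder(disorder_id),
    mean_red    REAL, mean_green REAL, mean_blue REAL,
    diameter    REAL, eccentricity REAL, solidity REAL,
    extent      REAL, euler_number INTEGER, orientation REAL
);
""")
conn.execute("INSERT INTO disorder VALUES (1, 'Downy')")
conn.execute("INSERT INTO feature (disorder_id, mean_red) VALUES (1, 0.42)")
row = conn.execute("""
    SELECT d.name, f.mean_red FROM feature f
    JOIN disorder d ON d.disorder_id = f.disorder_id
""").fetchone()
```

The foreign key from feature to disorder is what lets the classifier retrieve all spot features for a given disorder in one join.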
5 The Classifier
Before the online processing is done, the system needs to be trained manually using a set of training samples. An Artificial Neural Network (ANN) was used to perform our classification task. There are many different types of ANNs; the most widely used is the back-propagation ANN. This type of ANN is excellent for performing classification tasks [28,29]. The developer of an ANN has to answer the main question: which configuration of the ANN is appropriate for good out-of-sample prediction? The configuration of the ANN needs to be determined accurately to give an optimal classification result. This configuration includes the number of layers, the number of neurons in each layer, and the minimal number of training samples; it is also called the topology of the ANN. It is known that too many neurons degrade the effectiveness of the model and lead to undesirable consequences such as long training times and local minima. A large number of connection weights in the ANN model may cause overfitting and loss of generalization capacity. On the other hand, choosing too few neurons may not capture the full complexity of the data. Many heuristic rules have been suggested for finding the optimal number of neurons in the hidden layer, and several techniques are now available. Most of them employ trial-and-error methods, in which the ANN training starts with a small number of neurons, and additional neurons are gradually added until some performance goal is satisfied [30]. Unfortunately, there is no theoretical result for determining the optimal network topology for a specific problem. We used a well-known statistical analysis technique, the analysis of variance (ANOVA), to determine the optimal configuration of the neural network. A good introduction to ANOVA is given by [31].

Fig. 17: Morphological measurements of the three classes
5.1 Experimental Procedure
The experiment was designed to determine the structure of the neural network that gives the best identification percentage for the following classes: class A (Downy), class B (Leafminer), class C (Powdery), class D (Normal), and class N (Negative). The experiment also investigates the influence of each factor on the identification percentage. These factors are the number of learning samples, the number of neurons, and the number of hidden layers (Table 1).

Learning Samples. Different combinations of learning-sample categories were used to investigate the influence of increasing the number of learning samples on the classifier accuracy. These categories are: train with 50 samples and test with 250, train with 100 and test with 200, train with 150 and test with 150, train with 200 and test with 100, and finally train with 250 samples and test with 50.

Table 1: Experiment Design for Neural Network Factors
Neurons. Different numbers of neurons were used to investigate the effect of increasing the number of neurons on the classifier accuracy. These numbers are 5, 10, 15, and 20.

Layers. The last factor is the number of hidden layers. One and two hidden layers were tried, to determine how many layers are necessary to give the highest classifier accuracy.
5.2 Experimental Analysis
After designing the experiment as depicted in Table 1, the ANOVA table is constructed to measure the significance of each factor. The results are shown in Table 2, where SS is the sum of squares, DF is the degrees of freedom, MS is the mean square, and F is the ratio of the variance between groups to the variance within groups.
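The F ratio reported in the ANOVA table is the standard one-way statistic; a minimal sketch of its computation:

```python
def anova_f(groups):
    """One-way ANOVA: return (SS_between, SS_within, F) for a list of
    sample groups (equal or unequal sizes)."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)                      # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    ms_between = ss_between / df_between                 # MS = SS / DF
    ms_within = ss_within / df_within
    return ss_between, ss_within, ms_between / ms_within
```

A large F means the between-group variance dominates the within-group variance, i.e. the factor is significant.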
Examining the results in Table 2, it can be seen that:
- The layers factor has a high significance on the identification percentage. This means that using two layers has a better effect on the identification percentage than one layer.
- The samples factor has a high significance on the identification percentage. This means that using 250 training samples has a better effect on the identification percentage than using 50 samples.
- The neurons factor is not significant for the accuracy of the classifier. This means that using five neurons is sufficient.
The mean of each factor, regardless of the other factors, was calculated to measure the effect of each factor individually on the classifier accuracy. These results are depicted in Table 3 and plotted in Fig. 18.
Table 2: ANOVA Table for Main Effects

Main Effects | SS   | DF | MS    | F     | Significance
Layers       | 0.47 | 1  | 0.47  | 18.5  | Highly
Neurons      | 0.17 | 3  | 0.055 | 2.22  | Not
Samples      | 1.24 | 4  | 0.31  | 12.07 | Highly
It is obvious from Fig. 18 that using a small number of neurons, a large number of layers, and a large number of training samples leads to a much better identification ratio. So the optimal decision for designing the classifier is the 11-5-5-1 configuration, trained with 250 samples. These results are depicted in Table 4.
As shown in Table 4, the capability of the classifier to identify normal leaves is 98%, which is the highest recognition percentage. This can be attributed to the fact that normal leaves exhibit no abnormal features, which makes it easy for the classifier to identify them. The third highest recognition percentage was for unknown diseases. This high recognition percentage is due to the fact that the classifier has been trained extensively to recognize 3 specific diseases. As a result, the classifier is capable of identifying the features that best point to them, as well as the features that do not indicate their presence. The second highest identification percentage was for the powdery
Table 3: Mean of each factor: Samples, Layers, and Neurons

Samples | Mean
50-250  | 0.47
100-200 | 0.54
150-150 | 0.61
200-100 | 0.49
250-50  | 0.69

Neurons | Mean
5       | 0.62
10      | 0.61
15      | 0.58
20      | 0.55

Layers | Mean
One    | 0.54
Two    | 0.64

Table 4: Optimal choice for the classifier

Examples | Neurons | Layers | A    | B    | C    | D    | N
250-50   | 5       | Two    | 0.86 | 0.74 | 0.94 | 0.98 | 0.92

Fig 18: Mean of each factor: Samples, Layers, and Neurons
mildew disease. This can be explained by the ease with which features related to that disease can be detected. The fourth and fifth percentages were for downy mildew and leafminer diseases. These percentages are acceptable for those diseases because there was an overlap in the appearance of some symptoms related to those disorders.
6 Conclusions
In this paper we have developed an integrated image processing system capable of diagnosing three disorders: downy mildew with a percentage of 84%, leafminer with 74%, and powdery mildew with 94%. The system is also capable of identifying normal leaves with a percentage of 98%. Moreover, the system is capable of recognizing unknown disorders with a percentage of 92%.
A set of features was selected to be extracted in the feature extraction phase, and those features were stored in the feature database, which was designed for this purpose.
We have used feedforward neural networks with two hidden layers and the standard back-propagation rule as the training algorithm. Each neural network node uses a sigmoid transfer function, and the objective function to be minimized was the MSE.
A statistical experiment was designed for different ANN configurations and analyzed using the ANOVA approach to select the optimal structure of the neural network that gives good identification results. The chosen ANN topology for a high identification percentage was two hidden layers, five neurons per layer, and 250 training samples.
References
[1] Agrios, G.N., 1997. Plant Pathology, 4th Edition. Academic Press.
[2] Barnett, H.L. and Hunter, B.B., 1998. Illustrated Genera of Imperfect Fungi, 4th Edition. APS Press.
[3] Berger, R.D. and Jones, J.W., 1985. A general model for disease progress with functions for variable latency and lesion expansion on growing host plants. Phytopathology, 75: 792-797.
[4] Campbell, C.L. and Madden, L.V., 1990. Introduction to Plant Disease Epidemiology. John Wiley & Sons, New York City. 532 pp.
[5] Shurtleff, M.C. and Averre, A.W., 1997. The Plant Disease Clinic and Field Diagnosis of Abiotic Diseases. APS Press.
[6] Simone, G.W., Pohronezny, K.L. and Blake, J.H., 1988. Training in Microscope Diagnosis of Plant Diseases. July 26-27.
[7] Huntsberger, T.L., Jacobs, C.L. and Cannon, R.L., 1985. Iterative Fuzzy Image Segmentation. Pattern Recognition, Vol. 18, No. 2, 131-138.
[8] Carron, T. and Lambert, P., 1994. Color Edge Detector Using Jointly Hue, Saturation and Intensity. IEEE International Conference on Image Processing, Austin, USA, 977081.
[9] Rui, Y., She, A.C. and Huang, T.S., 1996. Automated Region Segmentation Using Attraction-based Grouping in Spatial-color-texture Space. International Conference on Image Processing, A, 53-56.
[10] Kim, W.S. and Park, R.H., 1996. Color Image Palette Construction Based on the HSI Color System for Minimizing the Reconstruction Error. IEEE International Conference on Image Processing, C, 1041-1044.
[11] Cheng, H.D., Jiang, X.H., Sun, Y. and Wang, J.L., 2001. Color Image Segmentation: Advances and Prospects. Pattern Recognition, Vol. 34, pp. 2259-2281.
[12] Gonzalez, R.C. and Woods, R.E., 2002. Digital Image Processing. Pearson Education, Inc.
[13] Spirkovska, L., 1993. A Summary of Image Segmentation Techniques. NASA Technical Memorandum 104022, June.
[14] Bezdek, J.C., Ehrlich, R. and Full, W., 1984. FCM: The Fuzzy c-means Clustering Algorithm. Computers and Geosciences, 10, pp. 191-203.
[15] Chi, Z., Yan, H. and Pham, T., 1996. Fuzzy Algorithms: With Applications to Image Processing and Pattern Recognition. World Scientific.
[16] Pham, D.L., Xu, C. and Prince, J.L., 2001. A Survey of Current Methods in Medical Image Segmentation. Technical Report JHU/ECE 99-01, The Johns Hopkins University.
[17] Pal, S.K., 1992. Image Segmentation Using Fuzzy Correlation. Information Sciences, 62, 223-250.
[18] Dunn, J.C., 1974. Well-separated clusters and the optimal fuzzy partitions. Journal of Cybernetics, 4: 95-104.
[19] Bezdek, J.C., 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, NY.
[20] Rouben, M., 1982. Fuzzy clustering algorithms and their cluster validity. European Journal of Operational Research, 10: 294-301.
[21] Gu, T. and Dubuisson, B., 1990. Similarity of classes and fuzzy clustering. Fuzzy Sets and Systems, 34: 213-221.
[22] Xie, X.L. and Beni, G., 1991. A Validity Measure for Fuzzy Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 8, 841-847.
[23] Halkidi, M., Batistakis, Y. and Vazirgiannis, M., 2002. Cluster Validity Methods: Part I. SIGMOD Record, June.
[24] Halkidi, M., Batistakis, Y. and Vazirgiannis, M., 2002. Cluster Validity Methods: Part II. SIGMOD Record, September.
[25] El-Helly, M., Onsi, H., Rafea, A. and El-Gamal, S., 2003. Segmentation Technique for Detecting Leaf Spots in Cucumber Crop Using Fuzzy Clustering Algorithm. 11th International Conference on AI (ICAIA).
[26] Sonka, M., Hlavac, V. and Boyle, R., 1999. Image Processing, Analysis and Machine Vision. Brooks/Cole Publishing Company, USA.
[27] Matlab Documentation, 2002. Image Processing Toolbox User's Guide, Version 3. The MathWorks, Inc.
[28] Hecht-Nielsen, R., 1990. Neurocomputing. Addison-Wesley.
[29] Hertz, J., Krogh, A. and Palmer, R., 1991. Introduction to the Theory of Neural Computation. Addison-Wesley: Redwood City, California.
[30] Boger, Z. and Weber, R., 2000. Finding an Optimal Artificial Neural Network Topology in Real-Life Modeling. In: Proceedings of the ICSC Symposium on Neural Computation, Article No. 1403/109.
[31] Norman, R., 1983. Introduction to Social Statistics. McGraw-Hill, Inc.