
Int. J. Biomedical Engineering and Technology, Vol. 7, No. 4, 2011, 339

Mammogram tumour classification using Q learning

S. Thamarai Selvi
Department of Computer Technology,
Madras Institute of Technology,
Anna University, India
E-mail: stselvi@annauniv.edu

R. Malmathanraj*
Department of ECE,
National Institute of Technology,
Tiruchirappalli, India
E-mail: malmathan@gmail.com
*Corresponding author

Abstract: In mammography, there have been quite a number of papers on
image enhancement and denoising; however, image segmentation is still at a
developing stage. Mammograms, as normally viewed, display texture
properties. In Magnetic Resonance Imaging, mammograms with glandular
tissues and a malignant region show periodic repetition in the image, which
makes the detection of small malignancies difficult. This paper intends
to contribute to the medical community by implementing a novel approach to
segmentation using the Q-learning algorithm for a multilevel thresholding technique.
Furthermore, various feature datasets of cancerous and non-cancerous
mammograms are calculated and used for classification as either benign or
malignant. The performance evaluation is made using the Receiver Operating
Characteristic (ROC) graph and the Overall Performance (OP) rate.

Keywords: Q learning; reinforcement learning; multilevel thresholding;
SVMs; support vector machines; ROC; receiver operating characteristic.

Reference to this paper should be made as follows: Thamarai Selvi, S.
and Malmathanraj, R. (2011) ‘Mammogram tumour classification using
Q learning’, Int. J. Biomedical Engineering and Technology, Vol. 7, No. 4,
pp.339–352.

Biographical notes: S. Thamarai Selvi received her BE in Mechanical Engineering
from Madurai Kamaraj University, her ME in Computer Science and Engineering
from Bharathiar University and her PhD in Computer Science and Engineering from
Manonmaniam Sundaranar University. She was granted a patent for a Trust
Resource Broker, published in Journal No. 16/2007 dated 20/04/2007. Under her
guidance, one scholar has completed the PhD programme, two have submitted their
theses and eleven are pursuing it; in the MS (By Research) programme, two scholars
have completed, one has submitted the thesis and one is pursuing.

R. Malmathanraj is pursuing his Doctoral studies at Anna University.
He completed his BE at Manonmaniam Sundaranar University and his ME at
Madurai Kamaraj University. His research interests include digital image
processing and neural networks. He has published several papers at the
national and international level.

1 Introduction

Breast cancer is one of the leading cancers among women in developed countries and is
the cause of death in approximately 20% of all females who die from cancer in these
countries. The World Health Organization’s International Agency for Research on Cancer
in Lyon, France, estimates that more than 150,000 women worldwide die of breast cancer
each year. In India, breast cancer accounts for 23% of all female cancers in
metropolitan cities like Mumbai, Calcutta and Bangalore. Although the incidence is
lower in India than in the developed countries, the burden of breast cancer in India is
alarming. Primary prevention is not possible since the cause of the disease is still not
known. Survival from breast cancer is directly related to the stage at diagnosis. Thus,
detection of early and subtle signs of breast cancer requires high-quality images and
skilled radiologists. Most imaging studies and biopsies of the breast are conducted using
mammography or ultrasound and, in some cases, Magnetic Resonance Imaging (MRI).
MRI is excellent at imaging the augmented breast, including both the breast implant itself
and the breast tissue surrounding the implant. MRI is also useful for staging
breast cancer, determining the most appropriate treatment and for patient follow-up
after breast cancer treatment. One notable feature of MR images is their contrast. Thus,
MR images help oncologists and biomedical engineers alike in generating accurate
results (Weigel et al., 2010).
It has been proven that double reading of mammograms by two radiologists reduces
the missed detection rate, but at a considerable expense. The estimated interobserver
variation rate of radiologists in breast cancer screening is only about 65–75%, but the
performance would be improved if they were prompted with the possible locations of
abnormalities. Because of the nature of medical images, the classification of medical
images still faces challenges such as:
• Low resolution and strong noise, two common characteristics of most medical
images. With these characteristics, medical images cannot be precisely segmented,
and the visual content of their features cannot be reliably extracted.
• Medical images are digitally represented in a multitude of formats based on their
modality and the scanning device used.
Another characteristic of medical images is that many are represented in grey
level rather than colour. Since masses are often indistinguishable from the surrounding
parenchymal region, automated mass detection and classification is more
challenging.
In an attempt to overcome these difficulties, there have been many attempts to assist
radiologists by prompting sites of potential abnormalities using Computer-Aided
Detection (CAD) tools. Currently, there are several image processing methods proposed for the
detection of tumours in mammograms. Various techniques such as fractal analysis
(Yang and Yan, 2000), multiresolution-based image processing (Brazokovic and
Neskovic, 1993; Chen and Lee, 1997) and Markov Random Field (MRF) modelling (Li et al., 1995)

have been used. Brazokovic and Neskovic (1993) described an algorithm for tumour
detection from mammograms based on fuzzy pyramid linking and multiresolution
segmentation. Thresholding techniques use only grey-level information (Hao et al., 1999).
Consequently, this research work intends to increase the efficiency by implementing
a novel approach to medical image segmentation using a Q-learning algorithm for
multilevel thresholding, feature extraction and a neural network classifier. In this
research work, a Reinforcement Learning (RL) method for maximum entropy-based
thresholding is implemented to segment the tumour region in the mammograms.
The thresholding process can be divided into two phases. In the first phase, Q-values
for selected actions are updated, starting from the initial state to the final state. This is
done for a specified number of epochs, given as an input. Actions are selected
according to the action selection rule. After learning is complete, the optimum thresholds
are computed using the updated Q-values. In the second phase, the input image is
segmented into multiple binary images according to the optimal thresholds. The goal
is to maximise the cumulative entropy using RL. The paper is arranged as follows:
Section 1 gives the introduction, Section 2 explains multilevel thresholding,
Section 3 describes the derived features, Section 4 describes the classifiers, Section 5
presents the results and discussion and Section 6 concludes the paper.

2 Multilevel thresholding

Thresholding is a popular tool for image segmentation. It is essentially a pixel
classification problem and can be divided into two categories, bilevel or
multilevel, depending on the underlying application. Bilevel thresholding classifies
the pixels into two groups, one including those pixels with grey levels above a threshold,
the other including the rest. Multilevel thresholding divides the pixels into several groups
separated by the selected multiple thresholds. To segment complex images, a multilevel
thresholding method is required. However, its time-consuming computation is often an
obstacle in real-time application systems.
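As a concrete illustration of the difference between bilevel and multilevel thresholding, the following minimal sketch partitions a grey-scale image into several groups given a set of thresholds. The image and the threshold values used here are hypothetical and serve only to show the mechanics; the thresholds learned by the method of this paper would be substituted for them.

```python
import numpy as np

def multilevel_threshold(image, thresholds):
    """Partition a grey-scale image into len(thresholds) + 1 groups.

    Pixels below the first threshold get label 0, pixels between the
    first and second threshold get label 1, and so on.
    """
    thresholds = sorted(thresholds)
    # np.digitize assigns each pixel the index of the interval it falls into
    return np.digitize(image, bins=thresholds)

# Hypothetical usage with arbitrary thresholds (not the learned values)
image = np.random.randint(0, 256, size=(64, 64))
labels = multilevel_threshold(image, [75, 130, 171, 232])
print(labels.min(), labels.max())   # 0 .. 4 -> five groups
```

With a single threshold the same call reduces to bilevel thresholding, which is why the multilevel case is the computationally expensive one.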

2.1 Reinforcement Learning


RL is learning what to do, that is, how to map situations to actions, so as to maximise
a numerical reward signal. Trial-and-error search and delayed reward are the two most
important distinguishing features of RL. An RL agent is defined not by characterising
learning algorithms, but by characterising a learning problem.

2.2 Q-learning algorithm


The optimal policy for such problems can be learned by the Q-learning algorithm, which
approximates the maximum cumulative reward through the recursive definition of the
Q function. First, the learner initialises a table of estimates of the Q function for each
possible state-action pair. Then, these table entries are iteratively updated using the
Q-learning update equation. The details of the Q-learning algorithm, as applied to
thresholding, are presented in the following subsections.
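The table update referred to above is the standard tabular Q-learning rule; a minimal sketch is shown below. The learning rate and discount factor are assumptions of this illustration; with a deterministic environment and alpha = gamma = 1, the rule reduces to the undiscounted form used later in equation (4).

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=1.0):
    """One tabular Q-learning update:
       Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    With a deterministic environment and alpha = 1, gamma = 1 this reduces
    to Q(s,a) = r + max_a' Q(s',a'), the form used in equation (4)."""
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
    return Q

# Hypothetical toy problem: 4 states, 2 actions
Q = np.zeros((4, 2))
Q = q_update(Q, state=0, action=1, reward=0.7, next_state=2)
print(Q)
```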

2.3 Maximum entropy-based thresholding


First, Pun (1980) proposed the entropy thresholding method by which the posterior
entropy of the segmented objects and background is maximised, and so is the information
content. Kapur et al. (1985) further corrected some of Pun’s derivations. The fuzzy
approach of maximum entropy thresholding was developed by Cheng et al. (2003).
Let the grey levels of a given image of N pixels range over [0, L − 1] and h(i) denote the
number of occurrences of grey level i. Let $P_i = h(i)/N$, for i = 0, 1, …, L − 1. Kapur’s
method maximises the posterior entropy of the segmented histogram. It can be
formulated as follows. Maximise

$f_1(t) = H(0, t) + H(t, L)$,

where

$H(0, t) = -\sum_{i=0}^{t-1} \frac{P_i}{\omega_0} \ln \frac{P_i}{\omega_0}, \qquad \omega_0 = \sum_{i=0}^{t-1} P_i,$    (1)

$H(t, L) = -\sum_{i=t}^{L-1} \frac{P_i}{\omega_1} \ln \frac{P_i}{\omega_1}, \qquad \omega_1 = \sum_{i=t}^{L-1} P_i,$
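A small sketch of Kapur's bilevel criterion, assuming a 256-bin histogram: it evaluates f1(t) = H(0, t) + H(t, L) for every candidate t and returns the argmax. The synthetic bimodal histogram at the end is purely for demonstration.

```python
import numpy as np

def segment_entropy(P, lo, hi):
    """Entropy of the normalised histogram P over grey levels [lo, hi)."""
    w = P[lo:hi].sum()
    if w <= 0:
        return 0.0
    p = P[lo:hi] / w
    p = p[p > 0]                      # ignore empty bins to avoid log(0)
    return float(-(p * np.log(p)).sum())

def kapur_bilevel_threshold(hist):
    """Return the threshold t maximising f1(t) = H(0, t) + H(t, L)."""
    L = len(hist)
    P = hist / hist.sum()
    scores = [segment_entropy(P, 0, t) + segment_entropy(P, t, L)
              for t in range(1, L)]
    return int(np.argmax(scores)) + 1

# Hypothetical usage with a synthetic bimodal histogram
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(pixels.clip(0, 255), bins=256, range=(0, 256))
print(kapur_bilevel_threshold(hist.astype(float)))
```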

and the optimal threshold is the grey level that maximises $f_1$. It can be extended to the
multilevel thresholding case with n thresholds as follows. Maximise

$f_n(t_1, t_2, \ldots, t_n) = H(0, t_1) + H(t_1, t_2) + \cdots + H(t_{n-1}, t_n) + H(t_n, L) = \sum_{i=0}^{n} H(t_i, t_{i+1}),$    (2)

where $t_0 \equiv 0$, $t_{n+1} \equiv L$, and

$H(t_i, t_{i+1}) = -\sum_{j=t_i}^{t_{i+1}-1} \frac{P_j}{\omega_i} \ln \frac{P_j}{\omega_i}, \qquad \omega_i = \sum_{j=t_i}^{t_{i+1}-1} P_j.$

The optimal n thresholds are those grey levels that maximise $f_n$. For the 2D entropy case,
let $P_{ij} = h(i, j)/N$ for i, j = 0, 1, …, L − 1, where h(i, j) is the 2D histogram.
The optimal threshold pair (m, n) can be calculated as follows. Maximise

$f_1(m, n) = H((0, 0), (m, n)) + H((m, n), (L, L)),$    (3)

where

$H((0, 0), (m, n)) = -\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \frac{P_{ij}}{\omega_0} \ln \frac{P_{ij}}{\omega_0}, \qquad \omega_0 = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} P_{ij},$

$H((m, n), (L, L)) = -\sum_{i=m}^{L-1} \sum_{j=n}^{L-1} \frac{P_{ij}}{\omega_1} \ln \frac{P_{ij}}{\omega_1}, \qquad \omega_1 = \sum_{i=m}^{L-1} \sum_{j=n}^{L-1} P_{ij}.$

2.4 Formatting maximum entropy thresholding to the Q-learning


The multilevel form of the 1D entropy thresholding is first formatted to the Q-learning
algorithm. The formulation of the 2D entropy thresholding is similarly derived as in
Peng-Yeng (2002). The objective of the 1D entropy thresholding is to maximise the
cumulative entropy by selecting a sequence of n optimal thresholds. Figure 1 depicts
the relationship between the 1D entropy thresholding and the Q-learning algorithm.
For the convenience of presentation, the indexing of the states is changed to two
dimensions.

Figure 1 Formulation of the multilevel thresholding problem as an MDP

The first dimension indicates the serial order of the threshold and the second dimension
represents its grey level. The agent starts from $S_{0,0}$ and repeatedly chooses an action
and moves into another state until the state $S_{n+1,L}$ is observed. Now, the key
components of the Q-learning algorithm for the 1D entropy thresholding are defined as follows.

1 A set of environment states, $S = \{S_{i,j}\}$, 0 ≤ i ≤ n + 1, 0 ≤ j ≤ L. The two states $S_{0,0}$ and
$S_{n+1,L}$ are called the initial state and the final state, respectively. Any path from $S_{0,0}$ to
$S_{n+1,L}$ represents a possible solution of selecting n thresholds using the maximum
entropy criterion.
2 A set of agent actions, $A = \{a_i\}$, 1 ≤ i ≤ L. Selecting action $a_i$ means
choosing grey level i as the value of the next threshold. To maintain the sequence
order of the selected thresholds, the actions available in each state are
constrained. For the agent in the state $S_{k,i}$, only $a_{i+1}, a_{i+2}, \ldots, a_{L-n+k}$ can be chosen.
When n actions (thresholds) have been selected, the action $a_L$ forces the agent into the
final state.
3 A set of scalar rewards, $R = \{H(i, j)\}$, 0 ≤ i, j ≤ L, where H(i, j) is as defined
in equation (2).
4 A state transition function, $\delta(S_{k,i}, a_j) = S_{k+1,j}$. This is apparent from the definition of the
state index.
5 A reward function, $r(S_{k,i}, a_j) = H(i, j)$. That is, the immediate reward given the state
$S_{k,i}$ and the action $a_j$ is the entropy value computed within the histogram range from
h(i) to h(j).
To maximise the objective function given in equation (2), the function $Q(S_{k,i}, a_j)$
(the maximum cumulative entropy) can be achieved by performing action $a_j$ in the state
$S_{k,i}$ and then proceeding optimally until the final state $S_{n+1,L}$ is observed. The recursive
definition of $Q(S_{k,i}, a_j)$ is given by

$Q(S_{k,i}, a_j) = H(i, j) + \max_{a_l} Q(S_{k+1,j}, a_l),$    (4)

where $Q(S_{n+1,L}, a_j) = 0$ sets the boundary case to avoid being undefined.
Now, equation (2) can be rewritten as

$\max_{t_1, t_2, \ldots, t_n} f_n(t_1, t_2, \ldots, t_n) = \max_{a_j} Q(S_{0,0}, a_j).$    (5)

In other words, learning the optimal selection of n thresholds to maximise the cumulative
entropy is equivalent to learning the optimal policy of choosing a sequence of n actions
to maximise the cumulative reward. For the bilevel thresholding case, the recursive definition
of $Q(S_{k,i}, a_j)$ is given by

$Q(S_{k,i}, a_j) = \begin{cases} 0 + \max_{a_l} Q(S_{k+1,j}, a_l) & \text{if } k = 0, 1 \\ f_1(i, j) + \max_{a_l} Q(S_{k+1,j}, a_l) & \text{if } k = 2, \end{cases}$    (6)

and $Q(S_{3,L}, a_j) = 0$ sets the boundary case. Thus, the objective function of the 2D entropy
bilevel thresholding can be rewritten as

$\max_{m,n} f_1(m, n) = \max_{a_j} Q(S_{0,0}, a_j).$    (7)

The algorithm is repeated for the given maximum number of iterations. Then, the
n sequential actions chosen by the learned optimal policy determine the n optimal
thresholds (Figure 1).
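To make the state-action-reward formulation above concrete, the following sketch learns n thresholds with a tabular, deterministic, undiscounted Q-learning agent whose reward is the segment entropy H(i, j) of equation (2) and whose update follows equation (4). It is an illustration under assumed settings (epsilon-greedy exploration, epoch count, synthetic histogram), not the paper's exact implementation.

```python
import numpy as np

def segment_entropy(P, lo, hi):
    """Reward H(lo, hi): entropy of the normalised histogram over [lo, hi)."""
    w = P[lo:hi].sum()
    if w <= 0:
        return 0.0
    p = P[lo:hi] / w
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def q_learning_thresholds(hist, n, epochs=2000, eps=0.3, seed=0):
    """Select n thresholds by tabular Q-learning on the MDP described above.

    State s_{k,i}: k thresholds chosen so far, the last one at grey level i.
    Action a_j   : choose grey level j (> i) as the next threshold.
    Reward       : H(i, j), the entropy of the histogram segment [i, j).
    Q(s_{k,i}, a_j) = H(i, j) + max_a' Q(s_{k+1,j}, a'), per equation (4).
    """
    L = len(hist)
    P = hist / hist.sum()
    rng = np.random.default_rng(seed)
    Q = np.zeros((n + 2, L + 1, L + 1))          # Q[k, i, j]

    def actions(k, i):
        # After the n-th threshold the only remaining action is grey level L.
        return [L] if k == n else list(range(i + 1, L - n + k + 1))

    for _ in range(epochs):
        k, i = 0, 0
        while k <= n:
            avail = actions(k, i)
            if rng.random() < eps:               # epsilon-greedy exploration
                j = int(rng.choice(avail))
            else:
                j = avail[int(np.argmax(Q[k, i, avail]))]
            reward = segment_entropy(P, i, j)
            future = 0.0 if k == n else np.max(Q[k + 1, j, actions(k + 1, j)])
            Q[k, i, j] = reward + future         # deterministic, undiscounted update
            k, i = k + 1, j

    # Greedy roll-out of the learned policy gives the n thresholds.
    thresholds, k, i = [], 0, 0
    while k < n:
        avail = actions(k, i)
        j = avail[int(np.argmax(Q[k, i, avail]))]
        thresholds.append(j)
        k, i = k + 1, j
    return thresholds

# Hypothetical usage on a synthetic histogram with three modes
rng = np.random.default_rng(1)
pix = np.concatenate([rng.normal(50, 8, 4000), rng.normal(130, 10, 4000),
                      rng.normal(200, 12, 4000)]).clip(0, 255)
hist, _ = np.histogram(pix, bins=256, range=(0, 256))
print(q_learning_thresholds(hist.astype(float), n=2, epochs=500))
```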

3 Features derived
The features derived are grouped into intensity, texture (GLDS and RLS) and shape features:

Intensity features: FI1 contrast measure of tumour; FI2 average grey level of tumour; FI3 standard deviation; FI4 skewness of tumour; FI5 kurtosis of tumour; FI6 a set of features composed of third-order normalised Zernike moments; FI7 mean/variance; FI8 mean absolute deviation.

GLDS features: FT1 contrast; FT2 angular second moment; FT3 entropy; FT4 mean.

RLS features: FT5 short runs emphasis; FT6 long runs emphasis; FT7 grey-level non-uniformity; FT8 run length non-uniformity.

Shape features: FG1 convexity; FG2 rectangularity; FG3 perimeter; FG4 centroid; FG5 minor axis length; FG6 major axis length; FG7 eccentricity; FG8 orientation; FG9 Fourier descriptor; FG10 Euler number; FG11 solidity.

1 Area: The number of pixels within the segmented region.


2 Centroid: Centre of the segmented region.
3 Major axis length: Length (in pixels) of the major axis of the ellipse that has the
same normalised second central moments as the region.
4 Minor axis length: Length (in pixels) of the minor axis of the ellipse that has the
same normalised second central moments as the region.
5 Eccentricity: Eccentricity is the measure of aspect ratio. It is the ratio of the length of
major axis to the length of minor axis. It can be calculated by principal axes method
or minimum bounding rectangle method.
6 Orientation: The angle (in degrees ranging from −90 to 90°) between the x-axis and
the major axis of the ellipse that has the same second-moments as
the region.
7 Filled area: The number of pixels in filled image.
8 Extrema: The extreme points of the ROI.
9 Solidity: Solidity describes the extent to which the shape is convex or concave;
it is defined as the proportion of the pixels in the convex hull that are also in the
region.
10 Equivdiameter: The diameter of a circle with the same area as the region. Computed
as sqrt(4*Area/pi).
11 Bounding box: It is the smallest rectangle circumscribing the tumour region.
12 Euler number: The number of objects in the region minus the number of holes
in those objects.

13 Compactness is defined as the ratio of area of tumour region to the area of the
smallest rectangle that circumscribes the tumour region.
14 Perimeter.
15 Skewness is a measure of symmetry, or more precisely, the lack of symmetry.
A distribution, or data set, is symmetric if it looks the same to the left and right of the
centre point.
16 Kurtosis is a measure of whether the data are peaked or flat relative to a normal
distribution. That is, data sets with high kurtosis tend to have a distinct peak near
the mean, decline rather rapidly, and have heavy tails. Data sets with low kurtosis
tend to have a flat top near the mean rather than a sharp peak.
17 Mean/variance.
18 Mean absolute deviation.
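Several of the shape descriptors listed above (area, centroid, axis lengths, eccentricity, orientation, solidity, Euler number, equivalent diameter, bounding box, perimeter) map directly onto scikit-image's regionprops; a hedged sketch is shown below, assuming a binary mask of the segmented tumour region. Note that property names differ slightly across scikit-image versions, and that regionprops' eccentricity is the ellipse eccentricity rather than the axis-length ratio described in item 5.

```python
import numpy as np
from skimage import measure

def shape_features(mask):
    """Extract a subset of the shape features listed above from a binary
    mask of the segmented tumour region (largest labelled object)."""
    labelled = measure.label(mask.astype(int))
    props = max(measure.regionprops(labelled), key=lambda r: r.area)
    return {
        "area": props.area,
        "centroid": props.centroid,
        "major_axis_length": props.major_axis_length,
        "minor_axis_length": props.minor_axis_length,
        "eccentricity": props.eccentricity,          # ellipse eccentricity
        "orientation_deg": np.degrees(props.orientation),
        "solidity": props.solidity,
        "euler_number": props.euler_number,
        "equiv_diameter": props.equivalent_diameter,  # sqrt(4*Area/pi)
        "perimeter": props.perimeter,
        "bounding_box": props.bbox,
    }

# Hypothetical usage with a synthetic elliptical "tumour" mask
yy, xx = np.mgrid[0:128, 0:128]
mask = ((xx - 64) / 30.0) ** 2 + ((yy - 64) / 18.0) ** 2 <= 1.0
print(shape_features(mask))
```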
Texture features
• Mean gradient within the current region measures the average value of the gradient in each
region:

$m_{wg} = \frac{1}{N} \sum_{k=1}^{N} g_k.$    (8)

• Mean gradient of the region boundary measures the average value of the gradient
on the boundary:

$m_g = \frac{1}{N'} \sum_{k=1}^{N'} g'_k.$    (9)

Grey-level cooccurrence matrix


1 Angular Second Moment: The Angular Second Moment (ASM) is defined as
$F_1 = \sum_i \sum_j C(i, j)^2$, where C(i, j) represents the joint probability of occurrence of pixels
with intensities i and j and L is the number of distinct grey levels. This is a measure
of local homogeneity in the image. Its value is high when the image has very good
homogeneity. In a non-homogeneous image, where there are many grey-level
transitions, the ASM assumes lower values.
2 Contrast:

$F_2 = \sum_i \sum_j (i - j)^2 C(i, j), \quad i \neq j.$    (10)

This returns a measure of the intensity contrast between a pixel and its neighbour over the
whole image.
3 Entropy:

$F_3 = -\sum_i \sum_j C(i, j) \log C(i, j).$    (11)

This yields a measure of complexity; complex textures tend to have high entropy.
4 Inverse Difference Moment (IDM):

$F_4 = \sum_i \sum_j \frac{C(i, j)}{1 + (i - j)^2},$    (12)

where i and j are two different pixel intensities. The IDM feature is the inverse of the
contrast of the cooccurrence matrix. It is a measure of the amount of local uniformity
present in the image.
5 Grey Level Cooccurrence Mean (GLCM mean):

$F_5 = \sum_i \sum_j i \, p(i, j).$    (13)

6 Variance:

$F_6 = \sum_i \sum_j p(i, j)(i - \mu_i)^2.$    (14)

7 Dissimilarity:

$F_7 = \sum_i \sum_j p(i, j)\,|i - j|.$    (15)
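The cooccurrence features F1 to F7 can be computed with scikit-image's graycomatrix and graycoprops (spelled greycomatrix/greycoprops in older releases); a sketch under that assumption follows. graycoprops' "homogeneity" corresponds to the IDM of equation (12), while entropy is computed directly from the normalised matrix since graycoprops does not provide it.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image, distance=1, angle=0.0, levels=256):
    """Compute cooccurrence-based features F1-F7 from a grey-level image."""
    glcm = graycomatrix(image, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    C = glcm[:, :, 0, 0]                       # normalised matrix C(i, j)
    i, j = np.indices(C.shape)
    nz = C[C > 0]
    mean_i = float((i * C).sum())              # GLCM mean of eq. (13)
    return {
        "ASM": float(graycoprops(glcm, "ASM")[0, 0]),
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "entropy": float(-(nz * np.log(nz)).sum()),
        "IDM": float(graycoprops(glcm, "homogeneity")[0, 0]),
        "glcm_mean": mean_i,
        "variance": float((C * (i - mean_i) ** 2).sum()),
        "dissimilarity": float(graycoprops(glcm, "dissimilarity")[0, 0]),
    }

# Hypothetical usage on a random 8-bit patch
patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```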

4 Classifiers

Once the features related to masses are extracted and selected, they are input
into a classifier to classify the detected suspicious areas into normal tissue, benign
masses or malignant masses. The classifiers that have been used are Back Propagation (BPN),
Radial Basis Function (RBF), Learning Vector Quantisation (LVQ) neural networks and
Support Vector Machines (SVMs).

4.1 Support Vector Machines (SVMs)


SVM is a powerful machine learning method based on small-sample Statistical
Learning Theory (SLT). SVM for classification and non-linear function estimation,
as introduced by Vapnik (1995) and further investigated by many others (Schölkopf
et al., 1997), is an important methodology in the area of neural networks and
non-linear modelling. The Least Squares Support Vector Machine (LS-SVM) proposed by
Suykens and Vandewalle (1999) is trained by solving a set of linear equations. LS-SVM
is an extension of the standard SVM and is formulated for both two-class and multiclass
classification problems. LS-SVMs have been investigated for classification and function
estimation.
The SVM proposed by Vapnik has been studied extensively for classification,
regression and density estimation. SVM maps the input patterns into a
higher-dimensional feature space through some non-linear mapping chosen a priori.
A linear decision surface is then constructed in this high-dimensional feature space. Thus,
SVM is a linear classifier in the parameter space, but it becomes a non-linear classifier
as a result of the non-linear mapping of the space of the input patterns into the
high-dimensional feature space. Let the m-dimensional training data be $X_i$ (i = 1, …, M) and
their class labels be $Y_i$, where $Y_i = 1$ and $Y_i = -1$ for classes 1 and 2, respectively. If these
input data are linearly separable in the feature space, then the following decision function
can be determined:

$D(x) = w^T g(x) + b,$

where g(x) is a mapping function that maps x into the l-dimensional feature space, w is an
l-dimensional vector and b is a scalar. To separate the data linearly, the decision function
satisfies the following condition:

$Y_i (w^T g(X_i) + b) \geq 1$ for i = 1, …, M.    (16)

If the problem is linearly separable in the feature space, there are an infinite number
of such decision functions. Among them, the hyperplane that has the largest margin between
the two classes is required. The margin is the minimum distance from the separating
hyperplane to the input data and is given by $|D(x)|/\|w\|$. The separating
hyperplane with the maximum margin is called the optimal separating hyperplane. Assuming that the
margin is q, the condition $Y_i D(X_i)/\|w\| \geq q$, i = 1, …, M, needs to be satisfied, and
maximising the margin is equivalent to minimising $\tfrac{1}{2} w^T w$. The optimal separating
hyperplane is therefore determined so that the maximisation of the margin and the minimisation
of the training error are achieved together. When the slack variables are penalised with
exponent p = 1, the SVM is called the L1 soft-margin SVM (L1-SVM), and when p = 2, the L2 soft-margin
SVM (L2-SVM).
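The paper's classifiers were implemented in Matlab (Section 5); the sketch below shows the same kind of SVM training with scikit-learn instead, including a parameter grid that roughly mirrors the C and gamma ranges reported in Section 5. The feature matrix and labels here are synthetic placeholders; real inputs would be the feature vectors of Sections 2 and 3.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical feature matrix: rows are mammogram feature vectors,
# labels are 1 (malignant) and -1 (benign).
rng = np.random.default_rng(0)
X = rng.normal(size=(108, 17))
y = np.where(rng.random(108) < 0.43, 1, -1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Grid roughly mirroring the ranges reported in Section 5
# (C from 1 to 1e5, RBF gamma from 0.13 to 2.5).
param_grid = {"svc__C": [1, 10, 100, 1e3, 1e5],
              "svc__gamma": [0.13, 0.5, 1.0, 2.5]}
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(model, param_grid, cv=5)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```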

5 Results and discussion

Pattern classification was carried out using the BPN, RBF, LVQ and SVM
architectures. The set of feature vectors was split into training and test sets through
a random choice. The training and test sets consisted of normal and abnormal feature
vectors. The feature vectors were prepared for pattern classification as described in
Section 4. The number of input nodes in the network is equal to the number of features,
and the number of output nodes is equal to the number of target classes. The algorithm
was implemented in Matlab 7.1 (Mathworks, Natick, MA) on a desktop computer
(3 GHz Pentium IV processor and 3 GB RAM). For classification, the best-performing
three-hidden-layer MLP NN (17-19-20-6-1) with Tanh transfer functions in the hidden
layers was used. The classifier delivered its best performance with the Tanh activation
function when used for the neurons of the output layer. This is expected because, for
classification, the output processing must be non-linear to generate arbitrarily complex
decision regions.
LVQ neural network for the initialisation of the codebook vectors, a set of vectors is
chosen from the training data, one at a time. All the entries used for initialisation must
fall within the borders of the corresponding classes, and this is checked by the K-nearest
neighbour (K-nn) algorithm. The accuracy of classification may depend on the number of
codebook entries allocated to each class. Various parameters for the SVM, such as the
regularisation parameter C, the degree of the polynomial kernel and the width of the
RBF kernel, were varied (C from 1 to 10^5, the polynomial degree from 0 to 9 and
gamma (γ) from 0.13 to 2.5) to choose the best parameters for the SVM. The Receiver
Operating Characteristic (ROC) curve, as shown in Figure 2, is one of the best ways to
evaluate a classifier. ROC methodology is appropriate in situations where there are two
possible truth states (i.e., diseased/normal, event/non-event). From the results of the ROC
analyses, a reasonable trade-off between specificity and sensitivity is observed. For a
perfect classifier, the area under the ROC curve must approach unity. The sensitivity and
specificity values represent a measure of the classification accuracy, which takes into
account the difference in diagnostic consequences between a false diagnosis of a malignant
abnormality and vice versa. True negative findings are a correct benign diagnosis; False
Positives (FPs) are an incorrect diagnosis of a malignant abnormality. Sensitivity is a
measure of detecting cancer (TP/(TP + FN)), whereas specificity is a measure of the
classification accuracy, or the probability that a benign abnormality is correctly
identified (TN/(TN + FP)). The higher the values of both sensitivity and specificity, the
better the performance of the system. Further, in this analysis the Overall Performance
(OP) rate of the classification method was calculated:
$\text{OP rate} = \frac{MC}{NI_m} \times \text{TP rate} + \frac{BC}{NI_m} \times (100 - \text{FP rate}),$    (17)

where
MC: number of malignant cases in the test set
BC: number of benign cases in the test set
NI_m: total number of images in the test set.
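The following helpers show how sensitivity, specificity and the OP rate of equation (17) are computed from test-set counts. The confusion-matrix counts used in the example are hypothetical; only the 46/62 malignant/benign split follows Table 2.

```python
def sensitivity(tp, fn):
    """TP / (TP + FN): fraction of malignant cases correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """TN / (TN + FP): fraction of benign cases correctly identified."""
    return tn / (tn + fp)

def op_rate(tp_rate, fp_rate, n_malignant, n_benign):
    """Overall Performance rate of equation (17); rates are in percent."""
    n_images = n_malignant + n_benign
    return (n_malignant / n_images) * tp_rate \
         + (n_benign / n_images) * (100.0 - fp_rate)

# Hypothetical confusion-matrix counts for a 46/62 malignant/benign test set
tp, fn, tn, fp = 40, 6, 55, 7
sens, spec = sensitivity(tp, fn), specificity(tn, fp)
print("sensitivity:", sens)
print("specificity:", spec)
print("OP rate:", op_rate(100 * sens, 100 * (1 - spec),
                          n_malignant=46, n_benign=62))
```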

Figure 2 Receiver Operating Characteristic (ROC) curves for the BPN, RBF, LVQ and SVM
architectures (see online version for colours)

The images were thresholded using the sample images. Consider the image mdb001 from
the MIAS database, as given in Figure 4, segmented with the proposed multilevel
thresholding method. The thresholds obtained for the mammogram image mdb001 in the
MIAS database are T1, T2, T3, T4 and T5. The different sets of features were extracted
from the mdb001 image with grey values above threshold T1, between T1 and T2, above T2,
between T4 and T5, and above T5. Figure 3 shows the Monalisa image segmented with
multiple thresholds 75, 130, 171 and 232. Figure 5 shows a mammogram image segmented
with multiple thresholds 76, 133, 163 and 195.
Then, the TPF and FPF values were calculated for the BPN, RBF, LVQ and SVM
architectures under consideration, as given in Tables 1 and 2.

Figure 3 Monalisa image segmented with multiple thresholds 75, 130, 171 and 232 (see online
version for colours)

Figure 4 Mammogram image segmented with multiple thresholds 55, 77, 130 and 164

Figure 5 Mammogram image segmented with multiple thresholds 195, 163, 133 and 76

Table 1 True Positive Fraction (TPF) and False Positive Fraction (FPF) values for the BPN, LVQ,
RBF and SVM architectures

Sl.  Image name                 NN             FS 1           FS 2           FS 3
no.                             architecture   TPF     FPF    TPF     FPF    TPF    FPF
1    mdb 001 with thresholds    BPN            0.601   0.40   0.5     0      0.80   0.02
     55, 130, 164 and 192       RBF            0.62    0.59   0.7     0      0.82   0.03
                                LVQ            0.73    0.70   0.9     0.4    0.85   0.12
                                SVM            0.80    0.28   0.92    0.1    0.91   0.13
2    mdb 002 with thresholds    BPN            0.570   0.497  0.7     0      0.75   0.06
     76, 133, 163 and 195       RBF            0.61    0.52   0.71    0.15   0.75   0.06
                                LVQ            0.71    0.69   0.8     0.3    1      0
                                SVM            0.83    0.22   0.97    0.2    1      0
3    mdb 005 with thresholds    BPN            0.63    0.39   0.77    0.29   0.75   0.18
     70, 123, 157 and 200       RBF            0.69    0.40   0.86    0.19   0.75   0.18
                                LVQ            0.75    0.31   0.87    0.09   1      0.22
                                SVM            0.82    0.02   0.91    0.04   1      0.03

Table 2 Summary of ANN performance with various features and different architectures

Sl.  Features  Architecture  No. of cases          Training  OP rate  Time
no.  used                    (malignant/benign)    epochs
1    F1        BPN           108 (46, 62)          3000      46.7     0.73
               RBF           108 (46, 62)          500       57.2     0.40
               LVQ           108 (46, 62)          50        51.5     0.29
               SVM           108 (46, 62)          50        46.6     0.27
2    F2        BPN           108 (46, 62)          2700      54.2     0.69
               RBF           108 (46, 62)          300       47.9     0.45
               LVQ           108 (46, 62)          50        47.0     0.32
               SVM           108 (46, 62)          45        46.6     0.20
3    F3        BPN           108 (46, 62)          3200      68.9     0.71
               RBF           108 (46, 62)          400       63.2     0.56
               LVQ           108 (46, 62)          75        50.4     0.41
               SVM           108 (46, 62)          50        49.5     0.31
4    F4        BPN           108 (46, 62)          3400      70.2     2.16
               RBF           108 (46, 62)          500       70.8     0.75
               LVQ           108 (46, 62)          75        71.0     0.51
               SVM           108 (46, 62)          50        73.6     0.20
5    F5        BPN           108 (46, 62)          2500      71.1     0.43
               RBF           108 (46, 62)          475       73.2     0.29
               LVQ           108 (46, 62)          65        73.5     0.28
               SVM           108 (46, 62)          50        75.0     0.19

6 Conclusion

The aim of this work was to develop a robust algorithm for segmentation of masses in MR
images, which is a unique challenge in mammogram tumour segmentation. The results
show that this algorithm can identify the tumour lesions present in the MR
images automatically. No radiologist intervention is required, and the processing time is
reduced to 45–60 s, compared with the 5–10 min needed for manual segmentation. This
work may help physicians and radiologists in classifying tumours into benign or
malignant using MR images.

References
Brazokovic, D. and Neskovic, M. (1993) ‘Mammogram screening using multiresolution-based
image segmentation’, Int. J. Pattern Recog. Artif. Intelligence, Vol. 7, pp.1437–1460.
Chen, C.H. and Lee, G.G. (1997) ‘On digital mammogram segmentation and microcalcification
detection using multiresolution wavelet analysis’, Graphical Models Image Processing,
Vol. 59, pp.349–364.
Cheng, H.D., Cai, X., Chen, X.W., Hu, L. and Lou, X. (2003) ‘Computer-aided detection and
classification of microcalcifications in mammograms: a survey’, Pattern Recognition, Vol. 36,
pp.2967–2991.
352 S. Thamarai Selvi and R. Malmathanraj

Hao, X., Gao, S. and Gao, X. (1999) ‘A novel multi-scale nonlinear thresholding method for
ultrasonic speckle suppressing’, IEEE Trans. Med. Imaging, Vol. 8, pp.787–794.
Kapur, J.N., Sahoo, P.K. and Wong, A.K.C. (1985) ‘A new method for grey-level picture
thresholding using the entropy of the histogram’, Comput. Vision Graphics Image Process,
Vol. 29.
Li, H.D., Kallergi, M., Clarke, L.P., Jain, V.K. and Clark, R.A. (1995) ‘Markov random field for
tumor detection in digital mammography’, IEEE Trans. Med. Imag., Vol. 14, pp.565–576.
Pun, T. (1980) ‘A new method for gray level picture thresholding using the entropy of the
histogram’, Signal Process, Vol. 2, No. 3, pp.223–237.
Schölkopf, B., Sung, K.K., Burges, C., Girosi, F., Niyogi, P., Poggio, T. and Vapnik, V. (1997)
‘Comparing support vector machines with Gaussian kernels to radial basis function
classifiers’, IEEE Trans. Signal Process, Vol. 45, pp.2758–2765.
Suykens, J.A.K. and Vandewalle, J. (1999) ‘Least squares support vector machine classifiers’,
Neural Process. Lett., Vol. 9, pp.293–300.
Vapnik, V. (1995) The Nature of Statistical Learning Theory, Springer-Verlag, New York.
Weigel, S., Schrading, S., Arand, B., Bieling, H., König, R., Tombach, B., Leutner, C.,
RiebKuhl Cer-Brambs, A., Nordhoff, D., Heindel, W., Reiser, M. and Schild, H. (2010)
‘Prospective multicenter cohort study to refine management recommendations for women at
elevated familial risk of breast cancer’, Journal of Clinical Oncology, Vol. 28, No. 9,
pp.1450–1457.
Yang, Y. and Yan, H. (2000) ‘An adaptive logical method for binarization of degraded document
images’, Pattern Recognition, Vol. 33, No. 5, pp.787–807.

Bibliography
Ilankumaran, V., Thamarai Selvi, S. et al. (2005) ‘Wavelet Implementation for ECG
characterization in pacemakers – an overview’, Caledonian Journal of Engineering, Vol. 01,
Peng-Yeng, Y. (2002) ‘Maximum entropy-based optimal threshold selection using deterministic
reinforcement learning with controlled randomization’, Signal Processing, Vol. 82,
pp.993–1006.
Poonguzhali, S. and Ravindran, G. (2008) ‘Automated detection of abnormal masses in ultrasound
images’, Int. J. Biomedical Engineering and Technology, Vol. 1, No. 3, pp.250–258.
Selvaraj, H., Thamarai Selvi, S., Selvathi, D. and Gewali, L. (2006) ‘Brain MRI slices classification
using least squares support vector machine’, International Journal of Intelligent Computing
in Medical Science and Image Processing.
Selvaraj, H., Thamarai Selvi, S., Selvathi, D. and Ramkumar, R. (2005) ‘Support vector machine
based automatic classification of human brain using MR image features’, International
Journal of Computational Intelligence and Applications.
Selvathi, D., Thamarai Selvi, S. and Alagappan, S. (2005) ‘Performance analysis of fuzzy logic
based filtering techniques for noise reduction from images’, International Journal on Lateral
Computing, Vol. 1, No. 2.
Selvathi, D., Thamarai Selvi, S. and Selvaraj, H. (2006) ‘Abnormality detection in brain MR
images using minimum error thresholding method’, International Journal of Computational
Intelligence and Applications, Vol. 6, No. 2, pp.177–191.
Thamarai Selvi, S., Selvathi, D., Selvaraj, H. and Ramkumar, R. (2006) ‘Least squares support
vector machine based classification of abnormalities in brain MR image’, Systems Science.
