
THE IMAGING SCIENCE JOURNAL
2019, VOL. 67, NO. 8, 434–446
https://doi.org/10.1080/13682199.2019.1700875

RESEARCH ARTICLE

Semi-automatic unsupervised MR brain tumour segmentation using a simple Bayesian Framework

Archana Chaudhari and Jayant Kulkarni

Instrumentation Engineering Department, Vishwakarma Institute of Technology, Pune, Maharashtra, India

CONTACT Jayant Kulkarni vitjvk@yahoo.com Instrumentation Engineering Department, Vishwakarma Institute of Technology, 666 Upper Indira Nagar, Bibvewadi, Pune 411037, Maharashtra, India

ABSTRACT
Tumours, one of the pathologies of the brain, are diverse in shape and appearance and overlap with the normal brain tissues, making accurate automatic segmentation a challenge. This work proposes a semi-automatic, unsupervised method for brain tumour segmentation using magnetic resonance images in a simple Bayesian framework. A pixel is classified into the tumour class by taking into account knowledge of the different brain tissue classes, the grey levels of the pixels and their neighbourhood. For the Bayesian framework, the likelihood of the different brain tissue classes is assumed to be Gaussian, and Gaussian density weights of the pixel neighbourhood serve as the prior information for accurate tumour segmentation. Experiments conducted on the publicly available BRATS database result in an overall accuracy of 98% for tumour core and 96% for oedema.

ARTICLE HISTORY
Received 17 December 2018; Accepted 25 November 2019

KEYWORDS
MRI; Bayes theorem; brain tumour; segmentation; likelihood; conditional probability; prior; posterior probability

Introduction

Tumour, one of the pathologies of the brain, is an uncontrolled growth of cancer cells. Clinically, tumours are of different types and have different characteristics. Primary brain tumours originate in the brain, while metastatic brain tumours begin as cancer elsewhere in the body and spread to the brain. Brain tumours are further divided into benign and malignant. The World Health Organization classifies brain tumours into grades I to IV: grade I and grade II are benign, low-grade tumours, while grade III and grade IV are high-grade, malignant tumours. Statistical reports show that brain tumours are the second leading cause of cancer-related deaths all over the world. Brain tumours therefore seriously endanger people's lives, and there is an urgent need for early discovery and action. Clinically, treatment options for brain tumours include surgery in most cases, but radiation therapy or chemotherapy can be used to slow the growth of the tumour. Magnetic resonance imaging (MRI) provides good differentiation of soft tissues and detailed images of the brain. Its non-invasive nature, high contrast, relatively high spatial resolution across the entire field of view and multi-spectral characteristics make it a popular modality for brain tumour diagnosis [1]. The brain is a complex structure and brain images contain a great deal of information. The brain consists of different tissues, including white matter (WM), grey matter (GM), cerebrospinal fluid (CSF), deep brain and sub-cortical structures, and pathologies such as tumours and multiple sclerosis lesions. For diagnostic purposes, often only one or two structures are of interest. Segmentation is the process of labelling an image into non-overlapping regions such that all elements of a region share a common property [2]. Image segmentation consists in finding a correspondence between radiometric information and symbolic labelling [3]. It permits the visualization of the structure of interest by removing unwanted information. Segmentation of the tumour from a brain image is key for diagnosis, surgery and therapy planning. It also enables structural analysis, such as calculation of the volume of a tumour [4].

Related work

Segmentation techniques can be broadly classified into pixel classification-based, threshold-based, region-based and model-based techniques. In thresholding-based methods, objects are classified using image intensity either at a global or a local level.

In the specific area of brain tumours, segmentation consists in separating the different tumour tissues from the normal brain tissues [5]. A tumour consists of the following regions: the tumour core, which is the active tumour tissue, and the oedema (swelling) and necrotic tissues surrounding it. The goal of segmentation is to detect the location and extent of the tumour regions. Tumours are often diffused into the surrounding oedema, emerge anywhere in the brain in different shapes and sizes, and extend tentacle-like structures, all of which make them difficult to segment [6]. Different MRI modalities produce different types of tissue contrast images, thus providing valuable structural information and enabling the diagnosis and segmentation of tumours along with their sub-regions [5].


Pixel classification methods proposed in the literature use pixel features such as grey levels, local texture or colour to cluster the pixels for MR image segmentation. In clustering, different objects are grouped into different clusters based on some appropriate distance measure between pixels in the feature space. Fuzzy C means (FCM) and K means are popular clustering methods proposed for MR image segmentation, and several variants of the FCM and K means algorithms have been proposed for accurate segmentation, including FCM with spatial constraints, which incorporates spatial information. Other algorithms exploiting neighbouring information or spatial connection can be found in [7–13]. This research article proposes a new semi-automatic unsupervised method for segmentation of the tumour core and the surrounding oedema. The method is based on pixel classification using a Bayesian framework and incorporates clustering and conditional probability along with spatial neighbourhood information. It differs from other clustering methods in that, along with clustering and neighbourhood information, Bayesian analysis is employed for accurate tumour segmentation. Clustering methods use a distance measure for pixel classification, whereas the proposed method is based on the posterior probability of the pixel: a pixel is classified into the tumour class only when its posterior probability is maximum for the tumour class. The Gaussian-weighted grey levels of the neighbourhood pixels aid the pixel classification in the form of the prior in the Bayesian framework.

This research article is organized as follows: Section 2 discusses the proposed segmentation method. Section 3 describes the database along with the evaluation metrics. Section 4 discusses the experimental work along with the results, and Section 5 presents the conclusion of the research work.

The proposed segmentation method using a simple Bayesian Framework

In the proposed method, K means clustering is used to quickly produce a primary classification into different classes, which serves as an initialization step. With the aid of the primary clusters from K means and the Bayesian framework, segmentation of tumour and oedema is achieved.

Tumour segmentation using a simple Bayesian Framework

Consider an MR image with m rows and n columns, represented as F(m, n), and let F(m, n) involve j pattern classes. The initial classes are decided by the K means algorithm. A pixel p(x, y) with coordinates (x, y) in the image F(m, n) has grey level g. For pixel classification in terms of grey levels in an image, the conditional probability $P_k$ that a pixel p(x, y) belongs to class j can be represented as

$$P_k = P(p(x, y) = j \mid g, F) \qquad (1)$$

The grey levels of the set of 8 neighbourhood pixels of p(x, y) can be represented as $G \in \{g_N\}$, where $N = 1, 2, \ldots, 8$. Pixel classification in terms of neighbourhood grey level information can be represented as

$$P_k = P(p(x, y) = j \mid g, G) \qquad (2)$$

In the Bayesian framework, the conditional probability of a pixel being classified into the jth class can be represented as

$$\text{Posterior} = \text{Likelihood} \ast \text{Prior} \qquad (3)$$

From Equations (2) and (3),

$$P(p(x, y) = j \mid g, G) = \frac{P(g, G \mid p(x, y) = j) \ast P(j)}{P(g, G)} \qquad (4)$$

where $P(g, G \mid p(x, y) = j)$ is the likelihood, $P(j)$ represents the prior and $P(g, G)$ is the marginal probability, a normalization factor assumed to be unity. The sign '∗' denotes multiplication.

According to the Bayesian analysis, the pixel p(x, y) under consideration is classified into the jth class if its posterior probability is maximum for the jth class. In other words, the product of the likelihood and the prior probability for pixel p(x, y) has to be maximum for pixel classification into the jth class.

In the proposed method, the likelihood of the pixels belonging to each pattern class is assumed to have a Gaussian density. For an n-dimensional case, the Gaussian density of a vector $x$ in the jth pattern class has the form

$$P(x \mid j) = \frac{1}{(2\pi)^{n/2}\,|C_j|^{1/2}} \exp\left(-\frac{1}{2}(x - m_j)^T C_j^{-1}(x - m_j)\right) \qquad (5)$$

where each density is specified completely by its mean vector $m_j$ and covariance matrix $C_j$, which are defined as

$$m_j = E_j\{x\} \qquad (6)$$

$$C_j = E_j\{(x - m_j)(x - m_j)^T\} \qquad (7)$$
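For concreteness, the class-conditional density of Equation (5) can be evaluated in a few lines of Python/NumPy. This is an illustrative helper, not code from the paper; the function name and the scalar handling of the 1-D grey-level case are assumptions.

```python
import numpy as np

def gaussian_density(x, mean, cov):
    """Multivariate Gaussian density of Equation (5) for one pattern class.

    For grey-level classification the feature is 1-D, so `mean` and `cov`
    reduce to scalars: the class mean and variance of Equations (6)-(7).
    """
    x = np.atleast_1d(x).astype(float)
    mean = np.atleast_1d(mean).astype(float)
    cov = np.atleast_2d(cov).astype(float)
    n = x.size
    diff = x - mean
    norm = (2.0 * np.pi) ** (n / 2.0) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm)
```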

In the Bayesian analysis, the selection of the prior plays an important role [14]. Considering the probabilities of each class alone as the prior may not lead to an accurate pixel classification. A need therefore arises to embed more information into the prior such that the posterior probability is maximized for accurate pixel classification. The information provided by the neighbourhood pixels also contributes to the assignment of a pixel to a particular class. In the proposed method, the product of the prior probability of the jth class and the sum of the Gaussian density weights of the eight neighbours of pixel p(x, y) for the jth class is taken as the prior information. The pixel p(x, y) under consideration is classified into the jth class when the product of the Gaussian likelihood of the pixel and this prior information is maximum for the jth class. Mathematically,

$$P_k = P(p(x, y) = j \mid g, G) = P(g, G \mid p(x, y) = j) \ast P(j) \qquad (8)$$

In Equation (8), $P(g, G \mid p(x, y) = j)$ represents the Gaussian likelihood for pixel p(x, y) and $P(j)$ represents the prior, detailed for the proposed method as Equation (8a):

$$P(j) = p(j\text{th class}) \ast \sum_N \left(g_N \ast P(x_N \mid j)\right) \qquad (8a)$$

where $p(j\text{th class})$ represents the prior probability of the jth class, computed from the initial clusters (Step 4 below), $g_N$ is the grey level of the Nth neighbourhood pixel and $P(x_N \mid j)$ represents the Gaussian density weight of the Nth neighbourhood pixel of p(x, y), computed using Equation (5).

Equation (8) can be stated in words as: the probability of pixel p(x, y) belonging to the jth class = (Gaussian likelihood of pixel p(x, y)) ∗ (prior probability of the jth class) ∗ (sum of the products of the grey levels and the Gaussian density weights of the eight neighbourhood pixels around p(x, y)).

From Equation (8), therefore, pixel p(x, y) is classified into the jth class when the value of $P_k$ is maximum for the jth class. Equation (9) represents this classification rule:

$$p(x, y) \in j\text{th class}, \quad \text{for } \max_j(P_k) \qquad (9)$$

In the proposed method, the initial clusters in the image are found using the K means clustering algorithm. The mean of each class is computed from the clusters, and the prior probability of each class is computed from the initial knowledge of the clusters. The probability of each pixel p(x, y) in the image F(m, n) is then computed for all j classes, and pixel p(x, y) is classified as belonging to the jth class if its probability is maximum for the jth class. Figure 1 presents the flow chart of the proposed method.

The proposed method is elaborated in the following steps; a code sketch of the full procedure is given after Figure 1.

Step 1: Compute the j classes of the input MR image F(m, n) using K means.
Step 2: In the proposed method the input image is initialized into j = 5 classes.
Step 3: Compute the mean of each jth class from the K means clusters; the mean of the jth class is represented as $m_j$.
Step 4: From the initial clusters, compute the prior probability of each class as
$$p(j\text{th class}) = \frac{\text{number of pixels belonging to the } j\text{th class}}{\text{number of pixels in the image } (m \ast n)}$$
Step 5: Compute the Gaussian likelihood of the pixel p(x, y) for the jth class using the mean of the jth class from Step 3 and Equation (5).
Step 6: Compute the Gaussian density weights of all the neighbourhood pixels around p(x, y).
Step 7: Obtain the sum of the Gaussian density weights of the neighbours of pixel p(x, y) from Equation (5) using the term $\sum_N (g_N \ast P(x_N \mid j))$, where $g_N$ represents the grey level of the Nth neighbour of pixel p(x, y) and $P(x_N \mid j)$ is the Gaussian density weight of the Nth neighbour of pixel p(x, y). Since the term involves the product of the grey levels and the Gaussian weights, no normalization is needed. Figure 2 represents the eight neighbours of the pixel p(x, y), and using Figure 2 the term in Step 7 can be expanded as
$$\begin{aligned}\sum_N g_N \ast P(x_N \mid j) = {} & g_1 \ast P(x-1, y-1 \mid j) + g_2 \ast P(x, y-1 \mid j) \\ & + g_3 \ast P(x+1, y-1 \mid j) + g_4 \ast P(x-1, y \mid j) \\ & + g_5 \ast P(x+1, y \mid j) + g_6 \ast P(x-1, y+1 \mid j) \\ & + g_7 \ast P(x, y+1 \mid j) + g_8 \ast P(x+1, y+1 \mid j)\end{aligned}$$
where each P is obtained using Equation (5).
Step 8: Obtain the expression of the prior, Equation (8a), by combining the information from Steps 4 and 7.
Step 9: Compute the probability of the jth class for pixel p(x, y) using Equation (8).
Step 10: Classify the pixel p(x, y) into the jth class if the value of the probability of the jth class is maximum.

Figure 2. Representation of the eight neighbours of pixel p(x, y). (Images reproduced by kind permission of the authors).

The algorithm computes the product of the likelihood and the prior for each pixel and each class, and the pixel is assigned to the class for which this product is maximum.

Figure 1. Flow chart of the proposed method. (Images reproduced by kind permission of the authors).
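The steps above can be condensed into a short Python sketch. This is a minimal single-slice illustration, not the authors' original implementation: scikit-learn's KMeans stands in for the K means step, the 1-D case of Equation (5) is used (the covariance of Equation (7) reduces to a per-class variance), and image borders are handled by wrap-around for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

def bayesian_segment(image, j=5):
    """Steps 1-10: K means initialization, then per-pixel Bayesian
    classification with the neighbourhood prior of Equation (8a)."""
    m, n = image.shape
    g = image.astype(float).ravel()

    # Steps 1-3: initial clustering into j classes; per-class mean/variance.
    labels = KMeans(n_clusters=j, n_init=10).fit_predict(g.reshape(-1, 1))
    means = np.array([g[labels == c].mean() for c in range(j)])
    variances = np.array([g[labels == c].var() + 1e-6 for c in range(j)])

    # Step 4: prior probability of each class from the initial clusters.
    class_prior = np.bincount(labels, minlength=j) / (m * n)

    def gauss(v, c):  # 1-D case of Equation (5)
        return np.exp(-0.5 * (v - means[c]) ** 2 / variances[c]) \
            / np.sqrt(2.0 * np.pi * variances[c])

    img = image.astype(float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    post = np.zeros((j, m, n))
    for c in range(j):
        likelihood = gauss(img, c)             # Step 5
        neigh = np.zeros_like(img)             # Steps 6-7: sum of g_N * P(x_N | c)
        for dy, dx in offsets:
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            neigh += shifted * gauss(shifted, c)
        post[c] = likelihood * class_prior[c] * neigh  # Steps 8-9: Equation (8)

    return post.argmax(axis=0)                 # Step 10: maximum-probability class
```

Since the method is semi-automatic, the label corresponding to the tumour core or oedema (typically the hyper-intense class on a T1-c or FLAIR slice) would be selected by the user from the j output classes.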

Results and discussion

The performance of the proposed method is evaluated on the MICCAI 2012 and 2013 Challenge (BRATS) databases [15–17]. The training data consist of multi-contrast MR scans; for each patient, T1, T2, FLAIR and post-Gadolinium T1 MR images are available. Segmentation evaluation is the task of comparing two segmentations by measuring the distance or similarity between them, where one is the segmentation to be evaluated and the other is the corresponding ground truth segmentation [18]. Fenster and Chiu [19] categorized the requirements of medical segmentation evaluation into accuracy (the degree to which the segmentation results agree with the ground truth segmentation), precision as a measure of repeatability, and efficiency, which is mostly related to time. Taha and Hanbury [18] give an overview of the different metrics used for segmentation. For the proposed method, segmentation evaluation is done using Accuracy, Sensitivity, Specificity, Precision and the Dice coefficient [18], as sketched below.

The proposed method is compared with the most basic versions of the KM and FCM algorithms in order to compare distance-based and probability-based metrics for pixel classification. The proposed method is also compared with a few recent works in the literature.
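The five reported metrics reduce to counts of true/false positives and negatives between a predicted binary mask and its ground truth. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def evaluation_metrics(pred, truth):
    """Accuracy, Sensitivity, Specificity, Precision and Dice
    for a predicted binary mask against the ground truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # tumour pixels correctly detected
    tn = np.sum(~pred & ~truth)  # healthy pixels correctly detected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```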

Results for tumour core and oedema segmentation

In the proposed method, T1-w contrast-enhanced images are used to detect the tumour core, and the corresponding FLAIR images from the BRATS database are used to detect the surrounding oedema. Figure 3 demonstrates the results for the BRATS 2012 database, along with the ground truth images, separately for tumour core and oedema, together with the segmentation results for the KM and FCM algorithms.

Figure 3. Segmentation results for the BRATS 2012 database. First column (odd rows): T1w contrast-enhanced images. First column (even rows): corresponding FLAIR images. Second column (odd rows): ground truth images of tumour. Second column (even rows): ground truth images of oedema. Third column: segmentation results using the proposed method. Fourth column: K means algorithm. Fifth column: FCM algorithm. (Images reproduced by kind permission of the authors).

Figure 4. Segmentation results for the BRATS 2013 database. First column (odd rows): T1w contrast-enhanced images. First column (even rows): corresponding FLAIR images. Second column (odd rows): ground truth images of tumour. Second column (even rows): ground truth images of oedema. Third column: segmentation results using the proposed method. Fourth column: K means algorithm. Fifth column: FCM algorithm. (Images reproduced by kind permission of the authors).

Figure 5. Segmentation results for the BRATS-2 database. First column (odd rows): T1w contrast-enhanced images. First column (even rows): corresponding FLAIR images. Second column (odd rows): ground truth images of tumour. Second column (even rows): ground truth images of oedema. Third column: segmentation results using the proposed method. Fourth column: K means algorithm. Fifth column: FCM algorithm. (Images reproduced by kind permission of the authors).

Figure 6. Segmentation results for the BRATS-2 synthetic database. First column (odd rows): T1w contrast-enhanced images. First column (even rows): corresponding FLAIR images. Second column (odd rows): ground truth images of tumour. Second column (even rows): ground truth images of oedema. Third column: segmentation results using the proposed method. Fourth column: K means algorithm. Fifth column: FCM algorithm. (Images reproduced by kind permission of the authors).
Table 1. Evaluation metrics for tumour core segmentation for the BRATS database.
Image Set Accuracy Sensitivity Specificity Precision Dice
PM KM FCM PM KM FCM PM KM FCM PM KM FCM PM KM FCM
12hg3522 0.9898 0.9875 0.9896 0.3399 0.2509 0.5474 0.9995 0.9984 0.9962 0.9100 0.7094 0.6856 0.4949 0.3708 0.6087
12hg3526 0.9912 0.9872 0.9879 0.6166 0.0952 0.5904 0.9959 0.9983 0.9928 0.6556 0.4210 0.5092 0.6355 0.1553 0.5468
12hg3518 0.9934 0.9758 0.9893 0.8568 0.1090 0.6494 0.9972 0.9999 0.9988 0.8952 0.9824 0.9394 0.8756 0.1963 0.7679
12hg3538 0.9741 0.9718 0.9731 0.3566 0.1410 0.3456 0.9926 0.9967 0.9919 0.5935 0.5687 0.5622 0.4455 0.2259 0.4280
13hgg514 0.9964 0.9931 0.9957 0.8718 0.3140 0.6462 0.9976 0.9997 0.9991 0.7828 0.9255 0.8753 0.8249 0.4690 0.7435
13hgg526 0.9961 0.9904 0.9955 0.8779 0.4118 0.8606 0.9978 0.9986 0.9975 0.8516 0.8186 0.8319 0.8646 0.54799 0.8460
13hgg550 0.9953 0.9874 0.9940 0.8582 0.3042 0.7135 0.9978 0.9996 0.9990 0.8765 0.9417 0.9278 0.8673 0.4599 0.8067
13hgg556 0.9871 0.9832 0.9877 0.4329 0.1911 0.4864 0.9976 0.9982 0.9971 0.7764 0.6732 0.7643 0.5559 0.2978 0.5945
2hg686 0.9954 0.9938 0.9950 0.6439 0.3900 0.6656 0.9987 0.9995 0.9981 0.832 0.8811 0.7733 0.7260 0.5407 0.7154
2hg699 0.9936 0.9861 0.9928 0.7727 0.4687 0.8759 0.9991 0.9988 0.9957 0.9552 0.9085 0.8338 0.8543 0.6183 0.8543
2hg705 0.9858 0.9840 0.9860 0.5823 0.2323 0.4382 0.9939 0.9991 0.9970 0.66 0.8404 0.7506 0.6187 0.3640 0.5533
2hg735 0.9712 0.9656 0.9701 0.3954 0.1783 0.3853 0.9954 0.9987 0.9947 0.7852 0.8615 0.7545 0.5259 0.2955 0.5101
hgsyn885 0.9957 0.9916 0.9607 0.8378 0.3391 0.2623 0.9977 0.9998 0.9694 0.8206 0.9580 0.09675 0.8291 0.5009 0.1413
hgsyn903 0.9973 0.9947 0.9977 0.9658 0.5141 0.8826 0.9976 0.9997 0.9989 0.8104 0.9558 0.8972 0.8813 0.6685 0.8898
hgsyn909 0.9923 0.9876 0.9918 0.5570 0.2448 0.5355 0.9995 0.9999 0.9993 0.9551 0.9924 0.9362 0.7036 0.3928 0.6813
hgsyn945 0.9824 0.9737 0.9866 0.4472 0.1659 0.5927 0.9997 0.9999 0.9994 0.9860 0.9855 0.9712 0.6153 0.2840 0.7361
Average 0.9898 0.9846 0.9871 0.6508 0.2719 0.5923 0.9973 0.9991 0.9953 0.8216 0.8390 0.7568 0.7074 0.3992 0.6515

Table 2. Evaluation metrics for oedema segmentation for the BRATS database.
Image Set Accuracy Sensitivity Specificity Precision Dice
PM KM FCM PM KM FCM PM KM FCM PM KM FCM PM KM FCM
12hg3520 0.9771 0.9823 0.9714 0.7161 0.2624 0.0000 0.9807 0.9921 0.9847 0.3357 0.3120 0.0000 0.4571 0.2850 0.1465
12hg3524 0.9855 0.9740 0.9848 0.8330 0.4051 0.7812 0.9923 0.9993 0.9939 0.8278 0.9623 0.8498 0.8304 0.5702 0.8141
12hg3516 0.9502 0.9574 0.9449 0.4449 0.2246 0.9286 0.9728 0.9901 0.9456 0.4221 0.5034 0.4328 0.4332 0.3106 0.5904
12hg3536 0.9474 0.9459 0.9421 0.4696 0.2838 0.5807 0.9830 0.9952 0.9691 0.6724 0.8154 0.5830 0.5530 0.4210 0.5818
13hgg512 0.9839 0.9737 0.9698 0.6029 0.3696 0.5725 0.9907 0.9846 0.9770 0.5399 0.3018 0.3097 0.5697 0.3323 0.4019
13hgg524 0.9797 0.9835 0.9833 0.7962 0.5372 0.6677 0.9860 0.9991 0.9943 0.6651 0.9524 0.8037 0.7248 0.6869 0.7294
13hgg548 0.9759 0.9857 0.9184 0.5702 0.3814 0.1837 0.9815 0.9941 0.9286 0.2980 0.4694 0.0343 0.3914 0.4208 0.0578
13hgg545 0.9607 0.9588 0.9592 0.5585 0.3095 0.5035 0.9796 0.9893 0.9806 0.5620 0.5761 0.5495 0.5602 0.4026 0.5255
2hg684 0.9764 0.9630 0.9754 0.8804 0.3149 0.7275 0.9803 0.9888 0.9852 0.6394 0.5279 0.6618 0.7408 0.3945 0.6931
2hg697 0.9528 0.9545 0.9517 0.7194 0.4659 0.6867 0.9708 0.9922 0.9722 0.6559 0.8225 0.6559 0.6861 0.5948 0.6710
2hg703 0.9791 0.9702 0.9676 0.2308 0.0935 0.0136 0.9937 0.9873 0.9863 0.4169 0.1263 0.0190 0.2971 0.1075 0.0158
2hg733 0.9683 0.9696 0.9715 0.6681 0.3719 0.5845 0.9832 0.9992 0.9906 0.6624 0.9573 0.7554 0.6652 0.5356 0.6590
hgsyn883 0.9530 0.9685 0.9637 0.1588 0.5755 0.3833 0.9785 0.9812 0.9824 0.1918 0.4954 0.4116 0.1738 0.5324 0.3970
hgsyn901 0.9479 0.9685 0.9837 0.3116 0.5382 0.6273 0.9727 0.9853 0.9976 0.3080 0.5872 0.9092 0.3098 0.5617 0.7424
hgsyn907 0.9871 0.9755 0.9890 0.5395 0.5645 0.6600 0.9986 0.9860 0.9974 0.9070 0.5099 0.8687 0.6766 0.5358 0.7501
hgsyn943 0.9810 0.9686 0.9841 0.3135 0.6013 0.4466 0.9984 0.9782 0.9982 0.8408 0.4190 0.8643 0.4568 0.4938 0.5889
Average 0.9691 0.9687 0.9663 0.5508 0.3937 0.5217 0.9839 0.9901 0.9802 0.5591 0.5836 0.5443 0.5329 0.4491 0.5228

Figures 4–6 demonstrate the segmentation results for tumour core and oedema with high-grade glioma for the BRATS 2013, BRATS-2 and BRATS-2 synthetic images. Since K means and Fuzzy C means are classical pixel classification methods, the proposed method is compared with both. Tables 1 and 2 present the comparison of the evaluation metrics for the proposed method (PM) against ground truth with the KM and FCM algorithms, for tumour core and oedema segmentation, respectively. The proposed method is also compared with other recent tumour segmentation methods in the literature; Table 3 presents this comparison for tumour core segmentation on the BRATS database.

The average segmentation accuracy using the proposed method for tumour core is observed as 98%, which is comparable with the K means and FCM algorithms. Sensitivity, which refers to the ability to detect the tumour core or oedema correctly, is observed to be around 65%, better than the K means and FCM algorithms; one reason is that clustering methods depend on the Euclidean distance between pixels for classification, whereas the proposed method implements conditional probability along with neighbourhood information for pixel classification. Specificity is related to the ability to correctly detect healthy tissues and is observed to be around 99%, comparable with the KM and FCM algorithms. Precision, a measure of repeatability, is around 82%. The average Dice coefficient, which is a measure of overlap between segmentation and ground truth, is observed to be 70% and is better than the KM and FCM algorithms.

In the case of oedema segmentation, the average segmentation accuracy using the proposed method is observed as 96%, comparable with the K means and FCM algorithms. The average sensitivity is observed to be 55%, better than the K means and FCM algorithms. The average specificity and precision are observed to be 98% and 55%, respectively. The average Dice coefficient is observed to be 53% and is better than the K means and FCM algorithms. Since the probability of each pixel for each class is computed, the computation time of the proposed method is greater than that of the K means and Fuzzy C-means algorithms.

Table 3 reports the evaluation metrics for the proposed method using T1-weighted contrast-enhanced (T1-c) images from the BRATS database for tumour core segmentation, together with the following recent supervised methods: particle swarm optimization using Markov random field (PSO), principal component analysis (PCA), Shannon entropy and level set (SE-LS), Shannon entropy-based teaching learning-based optimization (SE-TLBO) and a two-input convolutional neural network (CNN). Except for sensitivity, the experimental results of the proposed method are comparable with these recent supervised methods.

Table 3. Comparison of the proposed method with recent methods for tumour core segmentation from the BRATS database (T1c images).
Methods Accuracy Sensitivity Specificity Dice Coefficient
PSO 0.9397 0.9796 0.8268 0.9316
PCA 0.9374 0.9807 0.8271 0.9316
SE-LS 0.9413 0.9815 0.8277 0.9408
PM 0.9898 0.6508 0.9839 0.7074
SE-TLBO 0.9459 0.9923 0.9025 0.8762
CNN Not computed 0.79 0.79 0.79

Table 4 summarizes the average computation time for the proposed method along with the K means and Fuzzy C-means algorithms for one slice of the BRATS database. The computation time depends on the algorithm complexity and on the processor, and can be reduced by using faster processors.

Table 4. The average computation time (sec) required for the proposed segmentation method for the BRATS database.
Time required for Proposed method K means Fuzzy C-means
Tumour segmentation 85.55 0.297 32.44
Oedema segmentation 88.32 0.310 33.56

Results using the proposed method without the use of neighbourhood information as the prior

In the Bayesian framework, the likelihood of the tissue class and the prior knowledge of the tissue class, along with spatial neighbourhood information, are used to estimate the pixels belonging to a certain tissue class. Figure 7 presents the pixel profiles of the segmented images with and without the use of the 8 neighbours as the prior. Experiments are conducted on images from the BRATS 2012 and BRATS 2013 high-grade and low-grade glioma databases, respectively. The pixel profiles and the visual appearance of the segmented images with and without neighbourhood pixels further validate the effectiveness of the spatial neighbourhood information as the prior.

Segmentation results on real brain tumour images from Radiopaedia

To test the effectiveness of the proposed method, experiments are conducted on real MR brain images containing tumours from Radiopaedia [20]. Figure 8 shows segmentation results on axial T1-w and FLAIR images with gliosarcoma. Figure 8(b) shows the skull-stripped T1-w image. Skull stripping is a pre-processing step, carried out before image segmentation to remove the skull, and many different methods have been proposed in the literature for it [21,22].

Figure 7. Pixel profile of the segmented image with the use of 8 neighbours as the priors and without the use of 8 neighbours as
the priors for the images from the BRATS 2012 and 2013 database. (a) T1w contrast-enhanced image with high-grade Glioma (b)
synthetic T2w image with high-grade glioma (c) FLAIR image with low-grade Glioma (d)–(f) Segmented image using 8 neighbours
as the priors. (g)–(i) Pixel profile using 8 neighbours as prior. (j)–(l) Segmented image without 8 neighbours as the priors. (m)–(o)
Pixel profile without 8 neighbours as the prior. (Images reproduced by kind permission of the authors).

In the proposed work, skull stripping is carried out using morphological operations; one possible recipe is sketched below. Figure 8(c) demonstrates the results of segmentation using the proposed method, and Figure 8(d) and (e) demonstrate the segmented images using the KM and FCM algorithms for T1-w images.
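The article does not detail which morphological operations are used, so the following SciPy sketch is only one plausible reconstruction; the threshold choice and iteration counts are assumptions.

```python
import numpy as np
from scipy import ndimage

def skull_strip(t1, erosions=3):
    """Rough morphological skull stripping: threshold, erode to detach the
    skull, keep the largest connected component (the brain), dilate back,
    fill holes and mask the input image."""
    mask = t1 > t1.mean()                                   # crude foreground threshold
    mask = ndimage.binary_erosion(mask, iterations=erosions)
    labels, count = ndimage.label(mask)                     # connected components
    if count > 1:
        sizes = ndimage.sum(mask, labels, range(1, count + 1))
        mask = labels == (np.argmax(sizes) + 1)             # largest component = brain
    mask = ndimage.binary_dilation(mask, iterations=erosions)
    mask = ndimage.binary_fill_holes(mask)
    return t1 * mask
```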

Figure 8. Results of the proposed segmentation method on axial images with Gliosarcoma from Radiopaedia. (a) T1-w image
enhancing tumour core (b) skull stripped T1-w image. Tumour segmented images using (c) Proposed method (d) K means algor-
ithm (e) Fuzzy C means algorithm. (f) FLAIR image enhancing oedema (g) skull stripped FLAIR image. Oedema-segmented images
using (h) Proposed method (i) K means algorithm (j) Fuzzy C means algorithm. (Images reproduced by kind permission of the
authors).

The T1-w image is used to segment the tumour core and the FLAIR image is used to segment the surrounding oedema.
Machine Learning.

Conclusions

This article presented a semi-automatic method for the segmentation of the tumour core and the surrounding oedema using a simple Bayesian framework. The Gaussian likelihood of the tissue class, the prior knowledge of the tissue class and Gaussian-weighted spatial neighbourhood information are used to classify the pixels belonging to the tumour and oedema classes. A pixel is classified as tumour or oedema when its posterior probability is maximum for the corresponding class. Experiments conducted on the publicly available BRATS database result in an average segmentation accuracy of 98% and 96% for tumour core and oedema, respectively. Comparison of the proposed method with other pixel classification-based clustering methods demonstrates encouraging performance in most cases, and the proposed method, based on probability metrics, demonstrates encouraging results when compared with other distance-based pixel classification methods.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Archana Chaudhari received the B.E. and M. Tech. degrees from Savitribai Phule Pune University, Pune, India. She is currently working as Assistant Professor in the Instrumentation Engineering Department at Vishwakarma Institute of Technology, affiliated to Savitribai Phule Pune University. Her main areas of interest are Signal and Image Processing, Pattern Recognition, Biomedical Image Analysis and Machine Learning.

Jayant Kulkarni received the B.E., M.E., and Ph.D. degrees from Marathwada University, India. He is currently working as Professor in the Instrumentation Engineering Department at Vishwakarma Institute of Technology, affiliated to Savitribai Phule Pune University. His main areas of interest are Signal and Image Processing, Pattern Recognition, Machine Learning and Image Surveillance applications.

References

[1] Zexuan J, Jinyao L, Guo C, et al. Robust spatially constrained fuzzy c-means algorithm for brain MR image segmentation. Pattern Recogn. 2014;47:2454–2466.
[2] Gonzalez R, Woods R. Digital image processing. 2nd ed. Upper Saddle River (NJ): Pearson Education; 2000.
[3] Richard N, Dojat M, Garbay C. Distributed Markovian segmentation: application to MR brain scans. Pattern Recogn. 2007;40:3467–3480.
[4] Smistad E, Falch T, Bozorgi M, et al. Medical image segmentation on GPUs – a comprehensive review. Med Image Anal. 2015;20(1):1–18.
[5] Gordillo N, Montseny E, Sobrevilla P. State of the art survey on MRI brain tumor segmentation. Magn Reson Imaging. 2013;31(8):1426–1438.
[6] Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal. 2017;35:18–31.
[7] Sanjay G, Hebert T. Bayesian pixel classification using spatially variant finite mixtures and the generalized EM algorithm. IEEE Trans Image Process. 1998;7:1014–1028.
[8] Blekas K, Likas A, Galatsanos N, et al. A spatially constrained mixture model for image segmentation. IEEE Trans Neural Netw. 2005;16:494–498.

[9] Greenspan H, Ruf A, Goldberger J. Constrained Gaussian mixture model framework for automatic segmentation of MR brain images. IEEE Trans Med Imaging. 2006;25:1233–1245.
[10] Diplaros A, Vlassis N, Gevers T. A spatially constrained generative model and an EM algorithm for image segmentation. IEEE Trans Neural Netw. 2007;18:798–808.
[11] Nikou C, Galatsanos N, Likas A. A class-adaptive spatially variant mixture model for image segmentation. IEEE Trans Image Process. 2007;16:1121–1130.
[12] Nguyen T, Wu Q. Fast and robust spatially constrained Gaussian mixture model for image segmentation. IEEE Trans Circuits Syst Video Technol. 2013;23:621–635.
[13] Zeng J, Xie L, Liu Z. Type-2 fuzzy Gaussian mixture models. Pattern Recogn. 2008;41:3636–3643.
[14] Hanson K. Introduction to Bayesian image analysis. Proc SPIE. 1993;1898:716–731.
[15] http://www.imm.dtu.dk/projects/BRATS2012
[16] Menze BH, Jakab A, Bauer S, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans Med Imaging. 2015;34(10):1993–2024.
[17] Kistler M, Bonaretti S, Pfahrer M, et al. The virtual skeleton database: an open access repository for biomedical research and collaboration. J Med Internet Res. 2013;15(11).
[18] Taha A, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging. 2015;15:29.
[19] Fenster A, Chiu B. Evaluation of segmentation algorithms for medical imaging. Conf Proc IEEE Eng Med Biol Soc. 2005;7:186–189.
[20] https://radiopaedia.org
[21] Capelle A, Colot O, Fernandez-Maloigne C. Evidential segmentation scheme of multi-echo MR images for the detection of brain tumors using neighbourhood information. Inf Fusion. 2004;5(3):203–216.
[22] Zhuang A, Valentino D, Toga A. Skull-stripping magnetic resonance brain images using a model-based level set. Neuroimage. 2006;32(1):79–92.
