
Specification of Atherosclerosis

Functioning of atherosclerosis
Related Research
Methodology basics
Dataset and manual segmentation
ELM Implementation
ELM auto-encoder architecture
Methodology flowchart
Methodology steps
Results
Discussion
Atherosclerosis (also known as arteriosclerotic
vascular disease or ASVD) is a specific form
of arteriosclerosis in which an artery wall thickens
as a result of the invasion and accumulation of white
blood cells (foam cells) and the proliferation of
intimal smooth-muscle cells, creating
an atheromatous (fibrofatty) plaque.
Atherosclerosis is a chronic disease that
remains asymptomatic for decades.
Atherosclerosis involves a progressive thickening
of the arterial walls by fat accumulation, which
hinders blood flow and reduces the elasticity of the
affected vessels.
The intima-media thickness (IMT) of the common
carotid artery (CCA) is considered an early and
reliable indicator of atherosclerosis, and it is
extracted from ultrasound scans.
In the human body, blood vessels present three
different layers, from innermost to outermost:
intima, media and adventitia.
The IMT is defined as the distance from the lumen-
intima interface (LII) to the media-adventitia interface
(MAI).
The use of different protocols and the variability
between observers are recurrent problems in the IMT
measurement procedure.
Several solutions have been developed to perform
the carotid wall segmentation in ultrasound images
for the IMT measurement.
Edge detection and Gradient-based techniques.
Dynamic Programming.
Active contours
Neural Networks
Statistical modelling
Hough transform
A fully automated segmentation technique, completely
based on machine learning, to recognize IMT intensity
patterns in carotid ultrasound images.
Detection of the optimal measurement area and, then,
identification of the arterial wall layers.
Based on abstraction and efficient feature
representations by means of auto-encoders built on
the extreme learning machine (ELM).
Dataset of 67 ultrasounds of the CCA taken with a
Philips iU22 Ultrasound System using three
different ultrasound transducers or probes, with
frequency ranges of 9-3 MHz, 12-5 MHz and 17-5 MHz.
The spatial resolution of the images ranges from
0.029 to 0.081 mm/pixel, with mean and standard
deviation equal to 0.051 and 0.015 mm/pixel.
Some blurred and noisy images, affected by
intraluminal artifacts, and some others with
partially visible boundaries are included in the
studied set.
ELM is a learning algorithm for single-layer
feed-forward networks (SLFNs).
For N arbitrary distinct samples $(\mathbf{x}_n, \mathbf{t}_n)$, where $\mathbf{x}_n$ is the
input vector and $\mathbf{t}_n$ is the label of that input vector, the
output of the single-layer feed-forward network is
defined as:

$$\mathbf{y}_n = \sum_{j=1}^{M} \boldsymbol{\beta}_j \, f(\mathbf{w}_j \cdot \mathbf{x}_n + b_j), \qquad n = 1, \ldots, N$$

where $\mathbf{w}_j$ is the input weight vector connecting the input units
to the j-th hidden neuron, $\boldsymbol{\beta}_j = [\beta_{j1}, \beta_{j2}, \ldots, \beta_{jm}]$ is the output
weight vector connecting the j-th hidden neuron and the output units,
and $b_j$ is the bias of the j-th hidden neuron.
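In ELM, the input weights $\mathbf{w}_j$ and biases $b_j$ are assigned randomly and only the
output weights are learned. Collecting the hidden-layer outputs of all $N$ samples in a
matrix $H$ and the targets in $T$, the standard ELM solution (consistent with the elm.m
code shown later in these slides) obtains the output weights in a single step through the
Moore-Penrose generalized inverse $H^{\dagger}$, or through its regularized variant with
the coefficient $C$:

$$\boldsymbol{\beta} = H^{\dagger} T \qquad \text{or} \qquad \boldsymbol{\beta} = \left(\frac{I}{C} + H^{T}H\right)^{-1} H^{T} T$$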
Fig. 2: Diagram of the arterial layers in a transverse section (left) and a
longitudinal view of the CCA (common carotid artery) in an ultrasound
image.
The ELM auto-encoder (ELM-AE) is a building block for
deep-learning architectures based on ELM.
In this work, this tool is used for the segmentation task.
The structure of the ELM auto-encoder is illustrated by the sketch below.
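A minimal MATLAB sketch of an ELM-AE is given below. It is an assumed simplification,
not the authors' exact implementation: the target output equals the input, the input
weights and biases are random (the original ELM-AE additionally orthogonalizes them),
and the output weights beta, obtained by regularized least squares, define the learned
feature mapping. The column-per-sample convention matches the elm.m code listed later
in these slides.

function [beta, coded] = elm_ae(X, M, C)
% Minimal ELM auto-encoder sketch (assumed simplification).
% X : d x N data matrix (one sample per column)
% M : number of hidden neurons, C : regularization term
N = size(X, 2);
A = rand(M, size(X,1))*2 - 1;                     % random input weights in [-1, 1]
b = rand(M, 1)*2 - 1;                             % random hidden biases
H = 1 ./ (1 + exp(-(A*X + repmat(b, 1, N))));     % hidden-layer outputs (sigmoid), M x N
% Output weights: regularized least squares so that H'*beta reconstructs X'
beta = (eye(M)/C + H*H') \ (H*X');                % M x d output weights
% Learned representation: the data projected through beta feeds the next layer
coded = 1 ./ (1 + exp(-(beta*X)));                % M x N coded features
end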
Methodology flowchart: Start → collection of the raw CCA ultrasound image →
image crop → step 1: ROI detection → step 2: arterial layers recognition
(machine-learning approach, pixel classification) → LII (lumen-intima interface)
and MAI (media-adventitia interface) → End.
Collection of raw data in the form of images.
Raw images contain the CCA ultrasound, a frame with
patient data and additional information.
Crop the image to remove the frame and additional
information (application of morphology).
Obtain a binary image from the cropped image and fill regions
or holes (application of the opening and reconstruction
operators in morphology); a minimal sketch of this preprocessing
is given below.
Investigate the region of interest (ROI), which is the wall of
the blood vessel, and classify the ROI pixels for
LII and MAI recognition.
Obtain the parameters of the domain after classification.
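The morphological preprocessing sketched below is an assumption based on the steps
listed above, not the authors' exact code; the file name 'cca_raw.png' is a
hypothetical placeholder and only standard Image Processing Toolbox functions are used.

I = imread('cca_raw.png');                 % hypothetical raw image with frame and annotations
if size(I,3) == 3, I = rgb2gray(I); end    % work on the grayscale image
BW = imbinarize(I);                        % binary image of the scan content
marker = imopen(BW, strel('square', 5));   % opening removes thin frame lines and text
BW = imreconstruct(marker, BW);            % opening by reconstruction keeps large regions intact
BW = imfill(BW, 'holes');                  % fill regions or holes
stats = regionprops(BW, 'BoundingBox', 'Area');
[~, k] = max([stats.Area]);                % keep the largest connected region (the ultrasound)
cropped = imcrop(I, stats(k).BoundingBox); % cropped CCA ultrasound without frame / patient data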
Division of the CCA ultrasound image into squared
blocks; the block size for the ELM auto-encoder is
39×39 pixels (see the sketch below).
An ELM-AE has been designed to obtain useful and
efficient representations of the image blocks for their
posterior classification as ROI-block, if a typical pattern
of the far wall is recognized, or non-ROI-block otherwise.
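A minimal sketch (assumed, not the authors' code) of splitting the image into 39×39
blocks, each vectorized so that it can be classified as ROI / non-ROI; it reuses the
variable cropped from the preprocessing sketch above.

bs = 39;                                    % block size used in the paper
[rows, cols] = size(cropped);
blocks = zeros(bs*bs, 0);                   % each column will hold one vectorized block
for r = 1:bs:rows - bs + 1
    for c = 1:bs:cols - bs + 1
        blk = cropped(r:r+bs-1, c:c+bs-1);
        blocks(:, end+1) = double(blk(:));  % 1521 x 1 vectorized block
    end
end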
Design parameters for the ELM auto-encoder:
Number of hidden neurons (M): 28 candidate values {10, 20, ..., 100, 150, 200, ..., 1000}.
Regularization term (C): 38 candidate values {2^(-18), 2^(-17), ..., 2^(19)}.
The ELM was retrained 50 times for every pair of values
(50 × 28 × 38 trainings) and its mean performance was analyzed
(a minimal sketch of this grid search is given below).
20% of the training samples were randomly selected as the
validation set in each trial.
The optimal coding is obtained with 850 hidden neurons
and C1 = 2^(-6), C2 = 2^(19).
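The grid search could be sketched as follows. This is a simplified assumption, not the
authors' code: it reuses the elm() function listed later in these slides (which takes a
single C instead of the pair C1, C2), and the file names are hypothetical placeholders
for the ROI / non-ROI block datasets, with the validation file standing in for the 20%
hold-out split.

Ms = [10:10:100, 150:50:1000];             % 28 candidate numbers of hidden neurons
Cs = 2.^(-18:19);                          % 38 candidate regularization terms
bestAcc = -Inf;
for M = Ms
    for C = Cs
        accs = zeros(1, 50);               % 50 retrainings per (M, C) pair
        for trial = 1:50
            [~, ~, ~, accs(trial)] = elm('train_blocks.txt', ...
                'validation_blocks.txt', 1, M, 'sig', C);
        end
        if mean(accs) > bestAcc            % keep the configuration with best mean accuracy
            bestAcc = mean(accs);
            bestM = M; bestC = C;
        end
    end
end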
The accuracy of the classification between ROI and
non-ROI image blocks is 98.45 ± 0.06% (mean
and standard deviation from 50 trials). Moreover,
the sensitivity is 99.38 ± 0.06% and the specificity
is 97.56 ± 0.11%, which describe the ability of the
system to identify positive results (ROI
observations) and negative results (non-ROI
observations), respectively.
Detection of the IMT boundaries is performed by two
different multilayer ELM-AEs.
Multilayer ELM-AEs are used for the recognition of
the arterial layers in CCA ultrasounds.
The tuning parameters for the ELM-AE are as follows:
M = {10, 20, ..., 500, 550, ..., 1000, 1100, ..., 2000}
C = {2^(-18), 2^(-17), ..., 2^(50)}
Total number of trials: 50.
The performance parameter was the RMSE; see the
performance on the next slide.
Figures: ROI and manual segmentation; recognition of the IMT boundaries and
manually marked points; final IMT boundaries obtained for the image.
The difference between manual tracings of the LII
ranges, on average, from 29.7 μm to 40.6 μm,
whereas the manual segmentation error for the MAI varies
between 43.5 and 53.9 μm.
The statistical distribution of the different parameters is
given below.
Given an ultrasound image and the corresponding
boundaries of the arterial wall, either manually or
automatically segmented contours, the IMT is
estimated by using three different metrics: mean
absolute difference (MAD), polyline distance (PLD)
and centre line distance (CLD).
MAD is the most used metric to evaluate the IMT. It is
based on the vertical distance between contours along
the longitudinal axis of an image. In particular, both
contours must have the same number of points (N) to
calculate the average of these vertical distances.
The calculation of the MAD is performed as follows.
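Following the definition above, and denoting by $C_1(n)$ and $C_2(n)$ the vertical
coordinates of the two contours at the N common longitudinal positions, the MAD can be
written as:

$$\mathrm{MAD}(C_1, C_2) = \frac{1}{N} \sum_{n=1}^{N} \left| C_1(n) - C_2(n) \right|$$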
The difference between the automatic IMT and the GT is of
5.8 ± 34 μm for the MAD and PLD metrics (6.7 ± 34 μm
for CLD), whereas the absolute error of the automatic
measurements is 27.3 ± 21 μm for MAD and PLD
(27.2 ± 22 μm for CLD). These values reveal that the
measurement error associated with the proposed
method is lower than the inter-observer errors and is
in the range of the intra-observer errors. In addition, the
correlation coefficient (98.1%) is comparable to the
intra-observer variability.
The classification performance is measured by the
accuracy (ACC), specificity (SPEC) and sensitivity (SEN)
metrics and by the Matthews correlation coefficient (MCC).
These metrics are defined as follows.
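In terms of true positives (TP), true negatives (TN), false positives (FP) and false
negatives (FN), the standard definitions of these metrics are:

$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{SEN} = \frac{TP}{TP + FN}, \qquad \mathrm{SPEC} = \frac{TN}{TN + FP}$$

$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$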
The method has been tested over a database of 67 images with
different spatial resolutions. The validation of the technique is
carried out by comparing the automatic contours with the average
of four manual segmentations performed by two different
observers. The results show a mean segmentation error of 0.028
mm for the LII and 0.035 mm for the MAI and demonstrate that
the proposed methodology reduces the uncertainty and variability
of the manual procedure.
In reference to the IMT measurements, a high grade of agreement
between manual and automatic observations is obtained, with a
difference of only 5.8 ± 34 μm (mean and standard deviation).
In particular, the new approach is completely based on machine
learning (ML), both for the recognition of the carotid far wall (region
of interest, ROI) and for the identification of the IMT contours (LII
and MAI).
The use of a new pattern recognition strategy based on ML for the
recognition of the ROI implies that the system is able to adapt to
the optimal area for the measurement, by avoiding those uncertain
regions in which the characteristic IMT pattern is unclear, blurred
or even hidden. Moreover, in this work, the use of multilayer ELM-AEs
to obtain sparse data representations has been studied, with the aim
of obtaining a high-performance classifier. With the proposed
architecture, the obtained results show an overall success rate
exceeding 99% in the classification of the nearly 13,000 test samples.
In the present study, ELM has provided advantages in the learning
process (training and design) of the proposed system, because of
its good performance at fast speed even with high-dimensional
data. Furthermore, the suggested strategy has been designed to
recognize the LII and MAI jointly and is able to identify and
differentiate both contours by means of the developed multiclass
classifier (4 classes).
This work also presents ELM as a multiclass classifier, which is a
new proposal by the authors, since before this research ELM had
been recommended mainly as a binary classifier.
function [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = ...
    elm(TrainingData_File, TestingData_File, Elm_Type, NumberofHiddenNeurons, ...
    ActivationFunction, C)
This function takes six parameters as input and returns four parameters as
output.
Input parameters are:
A. TrainingData_File
B. TestingData_File
C. Elm_Type (use 0 for regression; 1 for classification, both binary and
multi-class)
D. NumberofHiddenNeurons (number of hidden neurons used for processing)
E. ActivationFunction (sigmoid / sine)
F. C = regularization (tuning) coefficient for the learning of the ELM algorithm
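A hypothetical call of this function could look as follows; the file names are
placeholders (not from the original work) for text files whose first column holds the
label and the remaining columns hold the features of each sample.

% Train and test a multiclass ELM classifier with 100 sigmoid hidden neurons and C = 1
[TrainTime, TestTime, TrainAcc, TestAcc] = elm('train.txt', 'test.txt', 1, 100, 'sig', 1);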
REGRESSION=0;
CLASSIFIER=1;                           % in this application, ELM is used as a multiclass classifier
train_data=load(TrainingData_File);     % load the image data set for training
T=train_data(:,1)';                     % labels: first column of the training data, transposed to a row vector
P=train_data(:,2:size(train_data,2))';  % features: remaining columns, one training sample per column
clear train_data;                       % release the training data from memory

test_data=load(TestingData_File);       % the same steps are applied to the testing data
TV.T=test_data(:,1)';
TV.P=test_data(:,2:size(test_data,2))';
clear test_data;
NumberofTrainingData=size(P,2);         % number of training samples (columns of P)
NumberofTestingData=size(TV.P,2);       % number of testing samples (columns of TV.P)
NumberofInputNeurons=size(P,1);         % number of input neurons = feature dimension (rows of P)

InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
                                        % randomized input weights in [-1, 1], one row per hidden neuron
BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
                                        % randomized biases of the hidden neurons
tempH=InputWeight*P;                    % weighted sums of the inputs for every hidden neuron and sample
clear P;                                % release input of training data
ind=ones(1,NumberofTrainingData);       % index vector of all ones, size [1, N]
BiasMatrix=BiasofHiddenNeurons(:,ind);  % extend the bias vector BiasofHiddenNeurons to match the dimension of tempH
tempH=tempH+BiasMatrix;                 % pre-activation of the single-layer feed-forward network
switch lower(ActivationFunction)
case {'sig','sigmoid'}
%%%%%%%% Sigmoid
H = 1 ./ (1 + exp(-tempH));
case {'sin','sine'}
%%%%%%%% Sine
H = sin(tempH);
case {'hardlim'}
%%%%%%%% Hard Limit
H = double(hardlim(tempH));
case {'tribas'}
%%%%%%%% Triangular basis function
H = tribas(tempH);
case {'radbas'}
%%%%%%%% Radial basis function
H = radbas(tempH);
%%%%%%%% More activation functions can be added here
end
clear tempH;
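The computation of the output weights is not shown in this excerpt. In the standard ELM
implementation they are obtained in one step from the Moore-Penrose generalized inverse
of H (a regularized variant also uses the coefficient C); a minimal, non-regularized
version of the missing step would be:

OutputWeight=pinv(H') * T';             % output weights via the Moore-Penrose pseudoinverse of H'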
Y=(H' * OutputWeight)';                 % Y: actual output of the network for the training data

if Elm_Type == REGRESSION
    TrainingAccuracy=sqrt(mse(T - Y))   % training error for the regression case, as root mean square error
end
clear H;
Calculate the output of testing input

start_time_test=cputime;
tempH_test=InputWeight*TV.P;            % pre-activation of the hidden layer for the testing data
clear TV.P;
switch lower(ActivationFunction)        % hidden-layer output for the testing data, using the selected activation function
case {'sig','sigmoid'}
%%%%%%%% Sigmoid
H_test = 1 ./ (1 + exp(-tempH_test));
case {'sin','sine'}
%%%%%%%% Sine
H_test = sin(tempH_test);
case {'hardlim'}
%%%%%%%% Hard Limit
H_test = hardlim(tempH_test);
case {'tribas'}
%%%%%%%% Triangular basis function
H_test = tribas(tempH_test);
case {'radbas'}
%%%%%%%% Radial basis function
H_test = radbas(tempH_test);
%%%%%%%% More activation functions can be added here
end
TY=(H_test' * OutputWeight)';           % actual output of the network for the testing data
end_time_test=cputime;
TestingTime=end_time_test-start_time_test   % CPU time (seconds) spent on testing

if Elm_Type == REGRESSION
    TestingAccuracy=sqrt(mse(TV.T - TY))    % testing accuracy (RMSE) for the regression case
end
if Elm_Type == CLASSIFIER               % ELM application as a classifier
% These counters represent the misclassifications with respect to the known output.
% Initially there is no error, so their value is zero.
MissClassificationRate_Training=0;
MissClassificationRate_Testing=0;
for i = 1 : size(T, 2)
[x, label_index_expected]=max(T(:,i));
[x, label_index_actual]=max(Y(:,i));
if label_index_actual~=label_index_expected
MissClassificationRate_Training=MissClassificationRate_Training+1;
end
end
TrainingAccuracy=1-MissClassificationRate_Training/size(T,2)
for i = 1 : size(TV.T, 2)
[x, label_index_expected]=max(TV.T(:,i));
[x, label_index_actual]=max(TY(:,i));
if label_index_actual~=label_index_expected
MissClassificationRate_Testing=MissClassificationRate_Testing+1;
end
end
TestingAccuracy = 1-MissClassificationRate_Testing/size(TV.T,2)
end

The misclassification counts are divided by the total number of samples to obtain the
training and testing accuracy of the classifier.
