Authorized licensed use limited to: ANNA UNIVERSITY. Downloaded on May 27,2020 at 11:21:43 UTC from IEEE Xplore. Restrictions apply.
Fig. 1. Illustration of the speech experiment, which includes four audio stimuli: complex forward speech, simple forward speech, backward speech, and silence, alternating repeatedly over 6 min 20 sec.

forward speech, backward speech, and silence, which alternate repeatedly over 6 min 20 sec. Fig. 1 illustrates the fMRI experiment design. To divide the subjects in our study, we used the calibrated severity score (CSS) for the toddler module, which indicates the level of autism severity. The CSS is obtained from the raw total domain scores of the autism diagnostic observation schedule (ADOS) [16] and ranges from 0 to 10. It is divided into 3 classes: (i) mild (CSS 1-4), (ii) moderate (CSS 5-7), and (iii) severe (CSS 8-10). Three matched sets, one for each autistic group, are included in this study, each with 13 subjects.

3. METHODS

Fig. 2 shows the general three-stage framework of our proposed system for grading an ASD subject's severity level using task-based fMRI images. The following sections explain each step in detail.

3.1. Data preprocessing

The preprocessing pipeline is performed using the fMRI expert analysis tool (FEAT) [17] included in FMRIB's software library (FSL) [18], applying the following steps: (i) interleaved slice timing correction, to correct for the effect of recording slices at different points in time within the volume; (ii) motion correction using MCFLIRT, to correct for subject motion in the scanner by applying rigid-body transformations with 6 degrees of freedom (DOF) [19]; (iii) spatial smoothing using a Gaussian window with a full width at half maximum (FWHM) of 5 mm, to improve the signal-to-noise ratio; (iv) high-pass temporal filtering (100 s), to remove low-frequency artefacts and scanner drifts; (v) brain extraction using BET, to remove the scalp and all non-brain data in the sMRI images; and (vi) registration, to standardize the fMRI brain images in two steps: first the functional volume is registered to its high-resolution anatomical scan, then the anatomical scan is registered to MNI-152 space with 12 DOF [20].

3.2. Feature extraction with multi-level General Linear Model (GLM)

Consider an experiment with N subjects, where for each subject there is a vector Y_n of T time points, n = 1, ..., N. The first-level GLM is defined as:

[ Y_1 ]   [ X_1  0   ...  0  ]   [ β_1 ]   [ ε_1 ]
[ Y_2 ]   [ 0   X_2  ...  0  ]   [ β_2 ]   [ ε_2 ]
[  :  ] = [ :    :    .   :  ] * [  :  ] + [  :  ] = Xβ + ε    (1)
[ Y_N ]   [ 0    0   ... X_N ]   [ β_N ]   [ ε_N ]

where X_n is the design matrix, ε_n is the residual error, and β_n are the parameters to be estimated. To estimate β_n, we assume that ε_n has zero mean (E(ε_n) = 0). Adding the second, group-level stage of analysis gives:

β = X_G β_G + η    (2)

where X_G is the group-level design matrix, which separates the two groups (controls and patients), β_G is the vector of group-level parameters, and η is the group-level residual, also assumed to have zero mean (E(η) = 0). Substituting (2) in (1) we get:

Y = X X_G β_G + X η + ε    (3)

Let

γ = X η + ε    (4)

then

Y = X X_G β_G + γ    (5)

Let V be the covariance of ε and V_G the covariance of η; then, using the generalized least squares approach [21], the first-level parameters can be estimated as:

β̂ = (X^T V^-1 X)^-1 X^T V^-1 Y    (6)

and

cov(β̂) = (X^T V^-1 X)^-1    (7)

In the same way, at the group level the parameters are given by:

β̂_G = (X_G^T V_G^-1 X_G)^-1 X_G^T V_G^-1 β̂    (8)

and

cov(β̂_G) = (X_G^T V_G^-1 X_G)^-1    (9)

After estimating the parameters for each voxel, contrasts are defined to check which voxels are significant with respect to the task conditions. The most common technique for testing voxel significance is the paired z-test [22, 23]. The result of the statistical analysis can be expressed as a map of corrected P-values at each voxel. For more details about the GLM parameter estimates, the reader is referred to [22].

In this study, we use the output z-stats of each subject to extract features. The Brainnetome atlas (BNT) [24] is applied to parcellate the brain into 246 areas. For each brain area, a histogram with 6 bins is constructed to measure the percentage of z-stat intensity values falling within each interval. Since the z-stat map varies between 0 and 6, the intervals used are 0 <= |z| < 1, 1 <= |z| < 2, 2 <= |z| < 3, 3 <= |z| < 4, 4 <= |z| < 5, and |z| >= 5. This creates a feature vector of 246 areas x 6 features per subject. We apply a feature selection algorithm prior to classification to reduce the dimensionality of the feature space and to detect the significant features.

3.3. Classification and severity grading

As mentioned above, the speech task has 4 conditions: complex forward speech, simple forward speech, backward speech, and silence. In this study, we apply both the first-level analysis and the higher-level analysis to model these four regressors in the GLM. The first-level analysis is used to quantify the activation differences between subjects for the classification task, while the higher-level analysis is applied to gain more insight into the overall group differences. Hence, we study the relation between the brain areas selected for their discriminating features in classification and the commonly activated significant areas in each group. As shown in Fig. 3, the algorithm used for feature selection is RFE [25]. This algorithm uses a random forest classifier to fit a model to the data and sort the features by importance, then recursively removes the least important features.
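The two-level GLM estimation of Section 3.2 (Eqs. 1-9) can be illustrated with a small numerical sketch. Everything below is illustrative rather than taken from the paper: the toy dimensions, the sine-wave task regressor, and the identity covariances used for V and V_G are all assumptions made so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N subjects, T time points each (illustrative, not the paper's).
N, T = 6, 50

# First-level design: block-diagonal X with one regressor per subject (Eq. 1).
X = np.zeros((N * T, N))
for n in range(N):
    X[n * T:(n + 1) * T, n] = np.sin(np.arange(T) / 3.0)  # toy task regressor

# Group-level design X_G: one indicator column per group (Eq. 2).
XG = np.zeros((N, 2))
XG[:N // 2, 0] = 1.0
XG[N // 2:, 1] = 1.0

# Simulate data from the two-level model.
beta_G_true = np.array([2.0, -1.0])
beta = XG @ beta_G_true + 0.1 * rng.standard_normal(N)   # Eq. 2 with noise eta
Y = X @ beta + 0.5 * rng.standard_normal(N * T)          # Eq. 1 with noise eps

# Generalized least squares, here with V = I for simplicity (Eq. 6).
V_inv = np.eye(N * T)
beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ Y)

# Group-level estimate, again with V_G = I (Eq. 8).
VG_inv = np.eye(N)
beta_G_hat = np.linalg.solve(XG.T @ VG_inv @ XG, XG.T @ VG_inv @ beta_hat)
print(beta_G_hat)  # should be close to [2, -1]
```

In practice V and V_G are estimated from the residual autocorrelation rather than set to the identity; the identity choice here just reduces GLS to ordinary least squares so the two stages are easy to follow.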
Fig. 2. Block diagram of the proposed framework for grading an autistic subject's severity level using task-based fMRI images.
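The per-area histogram features of Section 3.2 can be sketched as follows. The synthetic z-map and random atlas labels are stand-ins for a real FEAT z-stat volume and the Brainnetome parcellation; only the six |z| intervals are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for real data: a |z| value per voxel and an atlas label (1..246) per voxel.
n_voxels, n_areas = 10_000, 246
z_map = np.abs(rng.normal(0.0, 1.5, n_voxels))      # |z| values (synthetic)
labels = rng.integers(1, n_areas + 1, n_voxels)     # area label per voxel (synthetic)

# The six |z| intervals from the paper: [0,1), [1,2), ..., [4,5), [5, inf).
edges = [0, 1, 2, 3, 4, 5, np.inf]

features = np.zeros((n_areas, 6))
for area in range(1, n_areas + 1):
    z_area = z_map[labels == area]
    hist, _ = np.histogram(z_area, bins=edges)
    # Fraction of the area's voxels falling in each |z| bin.
    features[area - 1] = hist / max(len(z_area), 1)

feature_vector = features.ravel()  # 246 areas x 6 bins = 1476 features per subject
print(feature_vector.shape)  # (1476,)
```

Each row of `features` sums to 1, so the representation is invariant to area size, which is presumably why the paper uses percentages rather than raw voxel counts.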
4.1. Classification results

In this experiment, we use a random forest classifier in both RFE and classification. We used 10-fold cross-validation and calculated the classification accuracy, which reached 71.79%. In addition to the accuracy, the confusion matrix between the 3 classes was also calculated; Table 1 shows the confusion matrix. The model hyperparameters for both RFE and the random forest classifier were selected using a grid search. The optimal performance is obtained using 11 trees with a maximum depth of 9 for RFE, and 238 trees with a maximum depth of 28 for the random forest. Table 2 shows the areas with the most features selected by RFE. Classifiers other than random forest were also tested: (i) linear SVM (accuracy = 54%), (ii) SVM with RBF kernel (accuracy = 59%), and (iii) a neural network (accuracy = 64%). The random forest outperformed the other tested classifiers.

4.2. Validation with higher level analysis

We have validated the features selected for our classification by comparing them with the most activated areas in the group analysis. Two

To show how well the histogram features per area match the group activation, we extracted the mean percentage of activated voxels in the most significant histogram bins (|z| > 3) per group for each area. Table 3 shows the top 5 areas per group. The intersection between the top activated areas and the FLAME group analysis is shown in Fig. 4. Our top informative areas clearly match the group analysis results. Moreover, such an analysis gives insightful information about significant activation over brain areas rather than over independent voxels, as in the FLAME group analysis.

It is important to check the quality of the feature representation and selection. Comparing Table 2 and Table 3 reveals the high intersection between the areas selected by RFE in the feature selection step and the top activated areas in each group.

5. CONCLUSION AND FUTURE WORK

In this study, we introduce a machine learning based approach for grading autism severity on the autism spectrum. To the best of our knowledge, this is the first effort to utilize task-based fMRI for this goal. With a limited number of subjects (n = 39), our algorithm achieved an accuracy of 72% using a random forest classifier following
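The RFE procedure used for feature selection (Section 3.3, [25]) follows a simple fit-rank-drop cycle. The sketch below is a hedged illustration: the paper ranks features with a random forest's importances, whereas here `corr_importance` is a hypothetical stand-in (absolute correlation with the label) so the loop itself runs with NumPy alone.

```python
import numpy as np

rng = np.random.default_rng(2)

def recursive_feature_elimination(X, y, n_keep, importance_fn, step=1):
    """Generic RFE loop: score the surviving features, drop the weakest
    `step` of them, and repeat until `n_keep` remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        imp = importance_fn(X[:, keep], y)
        drop = set(np.argsort(imp)[:step])            # least important first
        keep = [f for i, f in enumerate(keep) if i not in drop]
    return keep

def corr_importance(X, y):
    # Stand-in importance score: absolute Pearson correlation with the label.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    return np.abs(Xc.T @ yc) / denom

# Toy data: 60 subjects, 20 features; only features 0 and 1 carry signal.
y = rng.integers(0, 3, 60).astype(float)              # 3 severity classes
X = rng.standard_normal((60, 20))
X[:, 0] = y + 0.1 * rng.standard_normal(60)
X[:, 1] = -y + 0.1 * rng.standard_normal(60)

selected = recursive_feature_elimination(X, y, n_keep=2,
                                         importance_fn=corr_importance)
print(sorted(selected))  # expected: [0, 1]
```

To match the paper's setup, `importance_fn` would instead fit a random forest on `X[:, keep]` and return its per-feature importances; the elimination loop is unchanged.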
Fig. 4. A cross section showing the top 5 informative areas for each group, in terms of the percentage of significant voxels in each area, intersecting with the FSL group-level analysis results (orange).

Table 3. The top 5 areas having the highest percentage of significant voxels for each group

                      Area                  Percent of significant voxels
              Mild  Moderate  Severe        Mild  Moderate  Severe
First area     14      13       14          25.5    24.8     32.1
Second area    13       5       19          20.7    23.5     31.5
Third area     19      11       20          19.9    23.3     30.9
Fourth area    20      19       13          18.5    23.1     28.6
Fifth area     32      20       24          14.5    22.1     24.7

recursive feature elimination. We also applied group analysis to validate the selected features and to study common group brain activation. Our future work will focus mainly on integrating more data from different experiments and modalities to gain a more comprehensive understanding of how brain activation abnormalities in response to different tasks explain autism. Such significant differences will be correlated with ADOS reports to develop personalized diagnosis and treatment.

6. REFERENCES

[1] David G. Amaral et al., "Neuroanatomy of autism," Trends in Neurosciences, vol. 31, no. 3, pp. 137-145, 2008.
[2] Manuel F. Casanova et al., Autism Imaging and Devices, CRC Press, 2017.
[3] Marwa Ismail et al., "Studying autism spectrum disorder with structural and diffusion magnetic resonance imaging: a survey," Frontiers in Human Neuroscience, vol. 10, 2016.
[4] R. Haweel et al., "Functional magnetic resonance imaging based framework for autism diagnosis," in 2019 Fifth International Conference on Advances in Biomedical Engineering (ICABME), Oct 2019, pp. 1-4.
[5] Y. ElNakieb et al., "Towards accurate personalized autism diagnosis using different imaging modalities: sMRI, fMRI, and DTI," in 2018 IEEE International Symposium on Signal Processing and Information Technology, Dec 2018, pp. 447-452.
[6] Omar Dekhil et al., "Identifying personalized autism related impairments using resting functional MRI and ADOS reports," in Medical Image Computing and Computer Assisted Intervention - MICCAI 2018, 2018, pp. 240-248.
[7] O. Dekhil et al., "A novel CAD system for autism diagnosis using structural and functional MRI," in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), April 2017, pp. 995-998.
[8] Omar Dekhil et al., "Using resting state functional MRI to build a personalized autism diagnosis system," PLOS ONE, vol. 13, no. 10, pp. 1-22, 2018.
[9] John D. Van Horn et al., "The functional magnetic resonance imaging data center (fMRIDC): the challenges and rewards of large-scale databasing of neuroimaging studies," Philosophical Transactions of the Royal Society of London B: Biological Sciences, vol. 356, no. 1412, pp. 1323-1339, 2001.
[10] Ayman El-Baz et al., Imaging the Brain in Autism, Springer, 2013.
[11] Karl J. Friston et al., "Statistical parametric maps in functional imaging: a general linear approach," Human Brain Mapping, vol. 2, no. 4, pp. 189-210, 1994.
[12] Marie Gomot et al., "Brain hyper-reactivity to auditory novel targets in children with high-functioning autism," Brain, vol. 131, no. 9, pp. 2479-2488, 2008.
[13] Michael V. Lombardo et al., "Different functional neural substrates for good and poor language outcome in autism," Neuron, vol. 86, no. 2, pp. 567-577, 2015.
[14] Guillaume Chanel et al., "Classification of autistic individuals and controls using cross-task characterization of fMRI activity," NeuroImage: Clinical, vol. 10, pp. 78-88, 2016.
[15] Nicha C. Dvornek et al., "Learning generalizable recurrent neural networks from small task-fMRI datasets," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2018, pp. 329-337.
[16] Katherine Gotham et al., "Standardizing ADOS scores for a measure of severity in autism spectrum disorders," Journal of Autism and Developmental Disorders, vol. 39, no. 5, pp. 693-705, 2009.
[17] Mark W. Woolrich et al., "Temporal autocorrelation in univariate linear modeling of fMRI data," NeuroImage, vol. 14, no. 6, 2001.
[18] Mark Jenkinson, Christian F. Beckmann, Timothy E.J. Behrens, Mark W. Woolrich, and Stephen M. Smith, "FSL," NeuroImage, vol. 62, no. 2, pp. 782-790, 2012.
[19] Mark Jenkinson et al., "Improved optimization for the robust and accurate linear registration and motion correction of brain images," NeuroImage, vol. 17, no. 2, pp. 825-841, 2002.
[20] Jack L. Lancaster, Diana Tordesillas-Gutiérrez, Michael Martinez, Felipe Salinas, Alan Evans, Karl Zilles, John C. Mazziotta, and Peter T. Fox, "Bias between MNI and Talairach coordinates analyzed using the ICBM-152 brain template," Human Brain Mapping, vol. 28, no. 11, pp. 1194-1205, 2007.
[21] Shayle Searle et al., Variance Components, vol. 391, John Wiley & Sons, 2009.
[22] Christian F. Beckmann, Mark Jenkinson, and Stephen M. Smith, "General multilevel linear modeling for group analysis in fMRI," NeuroImage, vol. 20, no. 2, pp. 1052-1063, 2003.
[23] Yufeng Zang et al., "Regional homogeneity approach to fMRI data analysis," NeuroImage, vol. 22, no. 1, 2004.
[24] Lingzhong Fan et al., "The human Brainnetome atlas: a new brain atlas based on connectional architecture," Cerebral Cortex, vol. 26, no. 8, pp. 3508-3526, 2016.
[25] Pablo M. Granitto et al., "Recursive feature elimination with random forest for PTR-MS analysis of agroindustrial products," Chemometrics and Intelligent Laboratory Systems, vol. 83, no. 2, pp. 83-90, 2006.