
Transfer learning radiomics based on multimodal three-dimensional MRI/PET imaging for staging Alzheimer's disease


Abstract
Deep neural networks have been successfully applied to unsupervised feature learning for single modalities. Neuroimaging scans acquired from MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple image modalities that provide complementary information could help improve early diagnosis of a patient's mental state: Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD).

In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep neural networks that learn features to address them. In particular, we demonstrate cross-modality feature learning, where better features for one modality (e.g., structure) can be learned if multiple modalities (e.g., structure and function) are present at feature-learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task in which the classifier is trained with structure-only images but tested with function-only data, and vice versa. Our models are validated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database [1]. We used FreeSurfer, a brain-imaging software package, to reconstruct the ADNI images and segment the brain scans into different subcortical structures (e.g., hippocampus, ventricle, thalamus) [2]. Using these reconstructed 3D images, we were able to effectively isolate the regions of interest (ROIs).

Even though neuroimaging scans and metabolism images provide important information for AD diagnostics, large portions of the brain remain unmapped, which limits our understanding of it. Due to the data-collection rates of modern technologies, there is an ever-increasing volume of brain data. These large data volumes present significant quality-assurance and validation challenges, with current approaches often requiring manual input.

The aim of this study is to test the efficacy of applying novel 3D Convolutional Neural Network models to the problem of staging AD progression. The network analyzes important features, such as the separation and shapes of structures like the ventricles and hippocampus, which provide additional parameters for learning. The results reported on hold-out test sets show promising performance with comparable classification accuracy. The main limitation of a 2D slice-level approach is that MRI is three-dimensional, whereas 2D convolutional filters analyze each slice of a subject independently. Moreover, there are many ways to select the slices used as input (not all of them may be informative), and slice-level accuracy and subject-level accuracy are often confused. By comparing the correlations of multiple modalities, we can determine the dominating feature in forming the surrogate model. Deploying a sufficiently trained model in a productionized processing pipeline could be transformational, reducing the manual intervention required to locate the needed information (e.g., the hippocampus).
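The ROI isolation step described above can be sketched with FreeSurfer's aseg-style label volumes, where each voxel carries an integer structure ID from the standard color lookup table (17 = Left-Hippocampus, 53 = Right-Hippocampus). The `extract_roi` helper below is hypothetical (not the study's code), and the label volume is synthetic for illustration:

```python
import numpy as np

# Label IDs from FreeSurfer's standard color LUT (assumption: aseg-style labels).
LEFT_HIPPOCAMPUS = 17
RIGHT_HIPPOCAMPUS = 53

def extract_roi(volume, labels, target_labels):
    """Zero out every voxel outside the target structures, then crop to
    the bounding box of the remaining voxels."""
    mask = np.isin(labels, target_labels)
    roi = np.where(mask, volume, 0.0)
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    return roi[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Toy 8x8x8 "scan" with a synthetic hippocampus block, purely for illustration.
labels = np.zeros((8, 8, 8), dtype=int)
labels[2:5, 2:5, 2:5] = LEFT_HIPPOCAMPUS
volume = np.random.rand(8, 8, 8)

roi = extract_roi(volume, labels, [LEFT_HIPPOCAMPUS, RIGHT_HIPPOCAMPUS])
print(roi.shape)  # (3, 3, 3)
```

In practice the labels would come from FreeSurfer's segmentation output rather than a synthetic array, with the same masking logic applied per structure.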

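To make the 2D-versus-3D filter distinction concrete, here is a minimal numpy sketch of a "valid" 3D convolution (this is not the study's network, just an illustration): a 3x3x3 filter sees three consecutive slices at once, so it can respond to through-plane structure that a 2D filter applied slice-by-slice cannot capture.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid'-mode 3D convolution (cross-correlation, as in CNNs)."""
    kd, kh, kw = kernel.shape
    out_shape = tuple(v - k + 1 for v, k in zip(volume.shape, kernel.shape))
    out = np.zeros(out_shape)
    for z in range(out_shape[0]):
        for y in range(out_shape[1]):
            for x in range(out_shape[2]):
                out[z, y, x] = np.sum(volume[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

# A filter that computes a through-plane gradient: it compares the slice
# behind with the slice ahead, which no single-slice 2D filter can do.
volume = np.arange(5 ** 3, dtype=float).reshape(5, 5, 5)
kernel = np.zeros((3, 3, 3))
kernel[0, 1, 1] = -1.0   # previous slice
kernel[2, 1, 1] = 1.0    # next slice

out = conv3d_valid(volume, kernel)
print(out.shape)  # (3, 3, 3)
```

Real 3D CNN layers (e.g., a framework's Conv3d operation) implement the same sliding-window contraction, with learned kernels and many channels.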
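The distinction between slice-level and subject-level accuracy noted above can be sketched as follows: per-slice predictions are aggregated into one prediction per subject (here by majority vote, one common choice), and accuracy is then scored per subject. All names below are hypothetical:

```python
from collections import Counter

def subject_level_accuracy(slice_preds, slice_subjects, subject_labels):
    """Aggregate per-slice predictions into one prediction per subject
    by majority vote, then score against subject-level labels."""
    votes = {}
    for pred, subj in zip(slice_preds, slice_subjects):
        votes.setdefault(subj, []).append(pred)
    correct = sum(
        Counter(preds).most_common(1)[0][0] == subject_labels[subj]
        for subj, preds in votes.items()
    )
    return correct / len(votes)

# Two subjects, three slices each: only 4 of 6 slices are classified
# correctly (slice-level accuracy ~0.67), yet majority voting recovers
# the right class for both subjects (subject-level accuracy 1.0).
slice_preds    = ["AD", "AD", "CN", "CN", "CN", "MCI"]
slice_subjects = ["s1", "s1", "s1", "s2", "s2", "s2"]
subject_labels = {"s1": "AD", "s2": "CN"}
print(subject_level_accuracy(slice_preds, slice_subjects, subject_labels))  # 1.0
```

The example shows why the two numbers should be reported separately: the same set of predictions yields different scores depending on the level of aggregation.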
REFERENCES
1. Alzheimer's Disease Neuroimaging Initiative (ADNI), http://adni.loni.usc.edu/
2. FreeSurfer, https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferWiki
