Authorized licensed use limited to: University of Exeter. Downloaded on July 01,2020 at 18:45:20 UTC from IEEE Xplore. Restrictions apply.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
MUHAMMAD et al.: DEEP LEARNING FOR MULTIGRADE BTC IN SMART HEALTHCARE SYSTEMS 3
Fig. 2. General flow of deep learning (CNN)-based methods for BTC. The overall flow consists of three steps: 1) preprocessing; 2) training; and 3) testing.
Additional steps can be added and/or adjusted as per the requirements of the target users.
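The three-step flow of Fig. 2 can be illustrated with a minimal, self-contained sketch. Everything below is a hypothetical stand-in: the synthetic 8x8 "images," the nearest-centroid classifier replacing the CNN, and the even/odd train-test split are illustrative assumptions, not the methods reviewed in this survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: preprocessing (toy data standing in for MRI slices) ---
# 40 grayscale 8x8 "images", two classes; a real system would load MRI data here.
images = rng.normal(size=(40, 8, 8))
labels = np.repeat([0, 1], 20)
images[labels == 1] += 0.8            # give class 1 a detectable intensity offset
X = images.reshape(len(images), -1)   # flatten each slice to a feature vector
X = (X - X.mean()) / X.std()          # intensity normalization

# --- Step 2: training (a nearest-centroid classifier as a stand-in for the CNN) ---
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)
centroids = np.stack([X[train][labels[train] == c].mean(axis=0) for c in (0, 1)])

# --- Step 3: testing ---
dists = np.linalg.norm(X[test, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == labels[test]).mean()
print(f"test accuracy: {accuracy:.2f}")
```

Extra steps (e.g., skull stripping or augmentation) would slot into step 1, exactly as the caption notes.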
TABLE I
Detailed Analysis and Comparison of Our Study With Existing Surveys
and several remarks to highlight the overall impact of each survey and its weaknesses.
There are several limitations of existing surveys, which are addressed in our survey. Among them, the most limiting issue is their lack of detailed reviews to highlight the limitations of other studies and motivations for a new survey. Second, the majority of BTC surveys do not provide detailed future research directions, which is a compulsory section for any
TABLE II
Detailed Description of the Deep Learning-Based Methods With Their Respective Segmentation, Features, Classifiers, and Total Number of Classified Classes
the multiscale features to enhance tumor regions. In another work, Ge et al. [46] used a 2-D CNN with feature aggregation to enhance the performance of classification. Decuyper et al. [63] used a pretrained VGG model and extracted features from the first fully connected layer for the classification of glioma into high- and low-level grades. Banerjee et al. [64] presented a deep CNN-based CAD system for glioma classification. Pereira et al. [42] first extracted the tumor regions using a 3-D U-Net model and then fed them into their proposed glioma grading CNN after resizing the images. In their proposed CNN model, global average pooling is used to summarize each feature map, followed by a cascade of
TABLE III
Statistics of Multigrade Brain Tumor Data Set [32] Grades With and Without Augmentation
TABLE V
Detailed Description of Medical Imaging With a Focus on MRI Transfer Learning Techniques
their implementation in real-world BTC systems. Results show that VGGNet obtained the best accuracy compared with all other CNNs under consideration, with a similar average fps but larger model size than other CNNs. These results are insightful to both industry and hospitals in the sense that they can select a method of their choice, considering their requirements, accuracy, deployment environment, and other constraints.
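The kind of trade-off reported above (accuracy against average fps and model size) can be profiled generically. The sketch below is only an illustration: the two "models" are plain weight matrices standing in for real CNN backbones, and the names `small_net` and `large_net` are hypothetical, not the networks benchmarked in this survey.

```python
import time
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for CNN backbones: each "model" is a single weight
# matrix, so parameter count and forward cost grow with its width.
models = {
    "small_net": rng.normal(size=(4096, 64)),
    "large_net": rng.normal(size=(4096, 1024)),
}

batch = rng.normal(size=(8, 4096))  # a batch of flattened 64x64 inputs

def profile(weights, batch, runs=20):
    """Return (parameter count, average frames per second) for one model."""
    start = time.perf_counter()
    for _ in range(runs):
        batch @ weights                  # forward-pass stand-in
    elapsed = time.perf_counter() - start
    fps = runs * len(batch) / elapsed    # images processed per second
    return weights.size, fps

for name, w in models.items():
    n_params, fps = profile(w, batch)
    print(f"{name}: {n_params} params, {fps:.0f} fps")
```

Reporting both numbers side by side is what lets a hospital weigh accuracy against deployment cost, as the text suggests.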
TABLE VI
Experimental Results of Different CNN Models Over Data Set 1 and Data Set 2 With Accuracy and Necessary Details. The Terms "WOA" and "WA" Used in This Table Refer to "Without Augmentation" and "With Augmentation," Respectively
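The "with augmentation" (WA) setting referenced in the caption above can be sketched minimally. The 4x4 slice and the rotation/flip set below are illustrative choices, not the exact augmentation pipeline of [32].

```python
import numpy as np

def augment(image):
    """Yield simple geometric variants of one slice: rotations and flips."""
    for k in range(4):                  # 0/90/180/270-degree rotations
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)        # plus a horizontal mirror of each

slice_ = np.arange(16.0).reshape(4, 4)  # hypothetical 4x4 MRI slice
augmented = list(augment(slice_))
print(len(augmented))                   # 8 variants per input slice
```

Applying such a generator to every training image multiplies the WA training-set size relative to WOA, which is what the per-grade counts in tables like this one reflect.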
contain a limited number of images for each class, resulting in lower accuracy levels for the implemented methods. For this reason, we used data augmentation in our experiments in Section VI. Furthermore, the majority of already available data sets only provide data for high-level classification, with no focus on further grades of brain tumors that can be very helpful to radiologists for early diagnosis. Therefore, it is highly recommended to create challenging data sets with detailed grading of each image in future research and to ensure their availability for the benefit of the entire research community.

B. End-to-End Deep Learning Models

Although the majority of recent techniques are based on deep learning, they use different models for detection, classification, and segmentation, as discussed in Section III. This increases the computational complexity of the implemented methods, making them less suitable for clinical practice. Currently, there is no end-to-end deep learning model that can detect a tumor in the input MRI image, segment it, and classify its nature as a final output. Thus, both industry and academia are highly encouraged to further investigate deep learning models for BTC in this context. This can greatly reduce the overall running time of the target BTC model, ultimately matching the practical constraints of smart healthcare and clinical practice.

C. Edge Intelligence for MRI Data Analysis

Edge intelligence is used on a wide scale in different domains [121], [122] because of its numerous advantages, such as reduced bandwidth and threats, minimal latency, improved consistency, compliance, and lower cost. For instance, Chen et al. [123] presented a deep learning and edge intelligence-assisted system for distributed video surveillance applications. Their system processes data at network edges instead of network centers, reducing communication overhead and thereby providing accurate video analysis results with elastic and scalable computing power. Similarly, Pace et al. [124] proposed "BodyEdge," a framework for supporting healthcare applications in Industry 4.0 with the main focus on reducing the data load toward the Internet. In the field of BTC, MRI data are normally collected in the Digital Imaging and Communications in Medicine (DICOM) format, which is manually converted into slices for further analysis. This process is time-consuming and tedious, with comparatively higher chances of errors. Through edge intelligence, the DICOM images can be automatically processed on the capturing device for efficient analysis with better accuracy. Currently, there is no such concept of edge intelligence for the specific problem of BTC, which can be a new trend for further research in this domain.

D. Merging Fog and Cloud Computing With Federated Learning: A New Dawn for BTC

Fog computing is an extension of cloud computing performed over the edge through a distributed network. Fog computing facilitates regular processing and generates output fast enough by using the capabilities of the edge network [125]. In the medical field, the data collected from a specific MRI capturing device should be quickly formulated and processed over different cloud and fog layers for efficient analysis. This can be made possible by exploring fog and cloud computing with an extensive investigation of the newly emerging AI framework of federated learning (FL) for BTC in future studies. In the FL architecture, models use distributed mobile/edge devices for computation due to their recently improved capabilities for executing a machine learning model. Using this hybrid computing platform, a model can be improved by training it locally on the data collected by the concerned edge device, and the changes in terms of model parameters and weights can be reported to the cloud through a secure communication link, e.g., homomorphic encryption as employed by Feng et al. [126] for outsourcing big data in the federated cloud environment. The most important aspect of FL is the preservation of users' privacy, which is utterly important in the medical domain. In the case of BTC, the output of brain tumors, classified into various grades, will be generated directly over the cloud or fog, making the overall process smarter and more feasible compared with lengthy manual processes.

E. Advanced Data-Enrichment Techniques

Data augmentation can be used to generate data up to an extent, which can be used for training deep learning BTC systems. However, the quality of generated data stalls or even degrades after a certain level of augmentation is reached. Thus, more advanced data-enrichment techniques need to be investigated for BTC. Generative adversarial networks (GANs) are among the advanced data-enrichment techniques and are widely used for many applications in diverse domains, such as medical image synthesis [127], [128], compressive sensing MRI [129], [130], superresolution [130], security [131], classification [132], image-to-image transformation [133], and many other useful purposes. A GAN creates new data instances from the existing data, realistically complying with the distribution of the input data. As mentioned earlier, the main problem in the BTC literature is the lack of data, which is currently handled through various data augmentation techniques. GANs have not yet been explored for the problem of BTC, where they could handle the limited-data problem by generating new similar data from the input images. Thus, it is recommended that scientists working in the BTC domain utilize GANs for new data generation to effectively solve the limited-data problem and possibly improve the performance of state-of-the-art methods.

F. Sequential Learning in DICOM Images

DICOM images are sequences of slices captured in an order, where some slices contain small tumors, while others contain large ones [134]. The appearance of a brain tumor varies in size and angle across slices. When these slices are converted into normal image formats, we may not locate the exact position of a brain tumor. Singular 2-D image data cannot provide enough information for further treatment of tumors using laser therapy. This problem is not addressed in the current literature and needs special attention from both industry and academia. In future work, the DICOM image slices in a group should be analyzed sequentially, which can give enough information in 3-D form to find the exact location of the brain tumor. Thus, it can directly help and
facilitate specialists in various diagnosis processes that are applied after finding the exact location of a brain tumor.

G. Effective Methods for Commercial Clinical Applications

There is a special need for trustworthy methods in the BTC literature that can be implemented in clinics and hospitals on a commercial scale and can be extended to smart healthcare [135]. The accuracy achieved by deep learning-based BTC methods is better than that of traditional methods and convincing enough to be considered as a second opinion for medical specialists. However, it still needs improvement in terms of accuracy, execution time, flexibility, cost, and scalability for commercialization and practicability in real-world medical applications.

H. Confidence and Explainability in Learning-Based BTC

Another crucial aspect when undertaking BTC via machine learning methods is to also consider the usability of the developed methods by medical practitioners. Indeed, the performance (accuracy) of the model when detecting and estimating the severity of a tumor is of pivotal importance due to the relevance and consequences of decisions to be made upon the model's output. However, for the medical community to embrace the benefits of deep learning methods used for this purpose, there are far more aspects to be considered beyond model performance. Usability aspects, such as the confidence of the output, must also be placed under the spotlight for the developed models to be actionable. The reliability of the produced predictions (as estimated, for instance, by methods such as test dropout [136] or Bayesian deep learning methods [137]) can, indeed, make the medical specialist feel more confident with decisions supported by black-box deep learning models.

Likewise, there is a rising concern with assessing what deep learning models eventually learn to observe from data, particularly in the medical domain. Reasons span beyond the usability of the model: recently reported cases confirm that there is little knowledge about what models surpassing human performance in the diagnosis of certain diseases are learning from data [138], [139]. Posthoc techniques for eXplainable Artificial Intelligence [140], [141] can give the clue needed not only to disentangle the knowledge learned by these powerful methods but also to open up new medical research directions.

I. Internet of Medical Things

Recently, the Internet of Medical Things (IoMT) has attracted increasing attention from researchers due to the rapid development of intelligent technologies and real-time data sharing using the Internet of Things (IoT) [142], [143]. Due to efficient data sharing and communication technologies, the concept of the IoT has been adopted by almost every field of research, e.g., smart networking [144], energy management [145], business [146], healthcare [147], and medicine [148]. Among these areas, the medical domain, especially that dealing with brain diseases, is considered the most severe case due to the life-threatening issues of patients. Considering this aspect, BTC can be implemented in the IoMT, which will minimize the life risk of brain tumor patients by providing the necessary precautions online and informing them about the grade of their tumor. The main advantage of BTC integration with the IoMT is to treat patients in remote areas, i.e., those who have limited access to medical services. The reported performance of existing reviewed methods and models is good enough to be considered in real-time scenarios. However, there are still several challenges for researchers to present a complete IoMT environment for making decisions in real time. These challenges include privacy preservation [149], the computational complexity of deep learning models [150], the latency of deployed networks [151], and the compatibility of smart devices with existing technologies [152], all needing effective solutions for smooth integration with existing medical and patient management systems for smart and personalized healthcare.

J. Synergies With Other Areas of Computational Intelligence

The interest in deep learning has propelled intense research efforts around this family of machine learning models in the last few years, yielding countless applications such as the one targeted in this overview. However, many issues underneath the use of deep learning methods still remain insufficiently addressed to date. Renowned practical caveats include the difficult architectural design of deep learning models, the slow convergence of backpropagation for highly complex deep architectures, their relative lack of interpretability, and the burdensome hyperparameter tuning phase required for such models to perform optimally for a given task [153]. The wide acknowledgment of the research community around these problems has ignited a shift of focus toward hybridizing deep learning models with elements from other areas of computational intelligence [154], with an emphasis lately placed on the use of evolutionary computation and swarm intelligence. Indeed, a growing number of research works have focused on different flavors of bio-inspired optimization heuristics to overcome problems inherently deriving from deep learning, such as the ones exemplified earlier. For instance, the use of different forms of evolutionary programming has given rise to tools to automate the design and configuration of complex neural architectures [155], much like a fresh renaissance of old concepts intersecting neural and evolutionary computation (e.g., NEAT). In this same line of reasoning, bio-inspired heuristics specially devoted to large-scale global optimization problems can today be considered a serious competitor to the classical gradient backpropagation algorithm that dominates the spectrum of training algorithms for deep architectures [156]. Further intersections between these two areas include a more effective transfer of learned knowledge between different classification tasks, which, for the medical area, can be a catalyst to allow using deep learning models in new scenarios with scarcely available data [156].

We believe that, as in any other specific application area, deep learning models empowered with evolutionary computation, edge and swarm intelligence, and FL will expedite and increase the quality of their results when facing new problems in multigrade brain tumor detection and characterization, eventually reaching unprecedented levels of self-configurability, knowledge transferability, and accuracy. For this to occur, newcomers to the field should steer their efforts toward this expectedly profitable research avenue.
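As a concrete illustration of the confidence estimation discussed in Section H, here is a minimal numpy sketch of test-time (Monte Carlo) dropout in the spirit of [136]; the two-layer network, its random weights, and the four-grade output are placeholder assumptions rather than any model reviewed in this survey.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-layer network; the weights would come from a trained BTC model.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 4))      # four tumor grades

def predict_mc(x, n_samples=100, drop_p=0.5):
    """Monte Carlo dropout: keep dropout active at test time and average the
    softmax outputs; their spread is a per-prediction confidence signal."""
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)
        mask = rng.random(h.shape) >= drop_p   # dropout stays ON at test time
        h = h * mask / (1.0 - drop_p)          # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    probs = np.array(probs)
    return probs.mean(axis=0), probs.std(axis=0)  # prediction + uncertainty

mean_p, std_p = predict_mc(rng.normal(size=64))
print("predicted grade:", mean_p.argmax())
```

A specialist could then be shown not only the predicted grade but also `std_p`, flagging inputs on which the stochastic forward passes disagree.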
[30] G. Mohan and M. M. Subashini, "MRI based medical image analysis: Survey on brain tumor grade classification," Biomed. Signal Process. Control, vol. 39, pp. 139–161, Jan. 2018.
[31] J. Cheng et al., "Enhanced performance of brain tumor classification via tumor region augmentation and partition," PLoS ONE, vol. 10, no. 10, Oct. 2015, Art. no. e0140381.
[32] M. Sajjad, S. Khan, K. Muhammad, W. Wu, A. Ullah, and S. W. Baik, "Multi-grade brain tumor classification using deep CNN with extensive data augmentation," J. Comput. Sci., vol. 30, pp. 174–182, Jan. 2019.
[33] M. Sajjad et al., "CNN-based anti-spoofing two-tier multi-factor authentication system," Pattern Recognit. Lett., vol. 126, pp. 123–131, Sep. 2019.
[34] K. Muhammad, S. Khan, M. Elhoseny, S. H. Ahmed, and S. W. Baik, "Efficient fire detection for uncertain surveillance environment," IEEE Trans. Ind. Informat., vol. 15, no. 5, pp. 3113–3122, May 2019.
[35] S. Khan, K. Muhammad, S. Mumtaz, S. W. Baik, and V. H. C. de Albuquerque, "Energy-efficient deep CNN for smoke detection in foggy IoT environment," IEEE Internet Things J., vol. 6, no. 6, pp. 9237–9245, Dec. 2019.
[36] A. Ullah, J. Ahmad, K. Muhammad, M. Sajjad, and S. W. Baik, "Action recognition in video sequences using deep bi-directional LSTM with CNN features," IEEE Access, vol. 6, pp. 1155–1166, 2018.
[37] H. Dong, G. Yang, F. Liu, Y. Mo, and Y. Guo, "Automatic brain tumor detection and segmentation using U-net based fully convolutional networks," in Proc. Annu. Conf. Med. Image Understand. Anal., 2017, pp. 506–517.
[38] N. Arunkumar, M. A. Mohammed, S. A. Mostafa, D. A. Ibrahim, J. J. P. C. Rodrigues, and V. H. C. Albuquerque, "Fully automatic model-based segmentation and classification approach for MRI brain tumor using artificial neural networks," Concurrency Comput., Pract. Exper., vol. 32, no. 1, p. e4962, Jan. 2020.
[39] Z. Akkus et al., "Predicting 1p19q chromosomal deletion of low-grade gliomas from MR images using deep learning," 2016, arXiv:1611.06939. [Online]. Available: http://arxiv.org/abs/1611.06939
[40] K. C. L. Wong, T. Syeda-Mahmood, and M. Moradi, "Building medical image classifiers with very limited data using segmentation networks," Med. Image Anal., vol. 49, pp. 105–116, Oct. 2018.
[41] H. Mohsen, E.-S. A. El-Dahshan, E.-S. M. El-Horbaty, and A.-B. M. Salem, "Classification using deep learning neural networks for brain tumors," Future Comput. Informat. J., vol. 3, no. 1, pp. 68–71, 2018.
[42] S. Pereira, R. Meier, V. Alves, M. Reyes, and C. A. Silva, "Automatic brain tumor grading from MRI data using convolutional neural networks and quality assessment," in Understanding and Interpreting Machine Learning in Medical Image Computing Applications. Cham, Switzerland: Springer, 2018, pp. 106–114.
[43] P. Afshar, K. N. Plataniotis, and A. Mohammadi, "Capsule networks for brain tumor classification based on MRI images and coarse tumor boundaries," 2018, arXiv:1811.00597. [Online]. Available: http://arxiv.org/abs/1811.00597
[44] P. Afshar, A. Mohammadi, and K. N. Plataniotis, "Brain tumor type classification via capsule networks," 2018, arXiv:1802.10200. [Online]. Available: http://arxiv.org/abs/1802.10200
[45] C. Ge, Q. Qu, I. Y.-H. Gu, and A. S. Jakola, "3D multi-scale convolutional networks for glioma grading using MR images," in Proc. 25th IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 141–145.
[46] C. Ge, I. Y.-H. Gu, A. S. Jakola, and J. Yang, "Deep learning and multi-sensor fusion for glioma classification using multistream 2D convolutional networks," in Proc. 40th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Jul. 2018, pp. 5894–5897.
[47] N. Abiwinanda, M. Hanif, S. T. Hesaputra, A. Handayani, and T. R. Mengko, "Brain tumor classification using convolutional neural network," in World Congress on Medical Physics and Biomedical Engineering. Singapore: Springer, 2019, pp. 183–189.
[48] Z. Akkus et al., "Semi-automated segmentation of pre-operative low grade gliomas in magnetic resonance imaging," Cancer Imag., vol. 15, p. 12, Aug. 2015.
[49] R. Mehta and J. Sivaswamy, "M-net: A convolutional neural network for deep brain structure segmentation," in Proc. IEEE 14th Int. Symp. Biomed. Imag. (ISBI), Apr. 2017, pp. 437–440.
[50] H. Mohsen, E. El-Dahshan, E. El-Horbaty, and A. Salem, "Brain tumor type classification based on support vector machine in magnetic resonance images," in Proc. Ann. Dunarea De Jos Univ. Galati, Math., Phys., Theor. Mech., Fascicle II, Year IX (XL), 2017, p. 1.
[51] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., 2015, pp. 234–241.
[52] M. Havaei et al., "Brain tumor segmentation with deep neural networks," Med. Image Anal., vol. 35, pp. 18–31, Jan. 2017.
[53] J. S. Paul, A. J. Plassard, B. A. Landman, and D. Fabbri, "Deep learning for brain tumor classification," Proc. SPIE, vol. 10137, Mar. 2017, Art. no. 1013710.
[54] K. B. Ahmed, L. O. Hall, D. B. Goldgof, R. Liu, and R. A. Gatenby, "Fine-tuning convolutional deep features for MRI based brain tumor classification," Proc. SPIE, vol. 10134, Mar. 2017, Art. no. 101342E.
[55] N. M. Balasooriya and R. D. Nawarathna, "A sophisticated convolutional neural network model for brain tumor classification," in Proc. IEEE Int. Conf. Ind. Inf. Syst. (ICIIS), Dec. 2017, pp. 1–5.
[56] K. Clark et al., "The cancer imaging archive (TCIA): Maintaining and operating a public information repository," J. Digit. Imag., vol. 26, no. 6, pp. 1045–1057, Dec. 2013.
[57] P. Afshar, A. Mohammadi, and K. N. Plataniotis, "Brain tumor type classification via capsule networks," in Proc. 25th IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 3129–3133.
[58] E.-S.-A. El-Dahshan, H. M. Mohsen, K. Revett, and A.-B.-M. Salem, "Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm," Expert Syst. Appl., vol. 41, no. 11, pp. 5526–5545, Sep. 2014.
[59] A. Vidyarthi and N. Mittal, "Comparative study for brain tumor classification on MR/CT images," in Proc. 3rd Int. Conf. Soft Comput. Problem Solving, 2014, pp. 889–897.
[60] N. M. Saad, S. A. R. S. A. Bakar, A. S. Muda, and M. M. Mokji, "Review of brain lesion detection and classification using neuroimaging analysis techniques," Jurnal Teknologi, vol. 74, no. 6, pp. 1–13, May 2015.
[61] G. S. Tandel et al., "A review on a deep learning perspective in brain cancer classification," Cancers, vol. 11, no. 1, p. 111, Jan. 2019.
[62] P. Afshar, K. N. Plataniotis, and A. Mohammadi, "Capsule networks for brain tumor classification based on MRI images and coarse tumor boundaries," in Proc. ICASSP-IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2019, pp. 1368–1372.
[63] M. Decuyper, S. Bonte, and R. Van Holen, "Binary glioma grading: Radiomics versus pre-trained CNN features," in Medical Image Computing and Computer Assisted Intervention—MICCAI 2018. Cham, Switzerland: Springer, 2018, pp. 498–505.
[64] S. Banerjee, S. Mitra, A. Sharma, and B. Uma Shankar, "A CADe system for gliomas in brain MRI using convolutional neural networks," 2018, arXiv:1806.07589. [Online]. Available: http://arxiv.org/abs/1806.07589
[65] A. K. Anaraki, M. Ayati, and F. Kazemi, "Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms," Biocybernetics Biomed. Eng., vol. 39, no. 1, pp. 63–74, Jan. 2019.
[66] B. H. Menze et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Trans. Med. Imag., vol. 34, no. 10, pp. 1993–2024, Oct. 2015.
[67] E. D. Vidoni, "The whole brain atlas: www.med.harvard.edu/aanlib," J. Neurologic Phys. Therapy, vol. 36, p. 108, Jun. 2012.
[68] B. Caldairou, N. Passat, P. A. Habas, C. Studholme, and F. Rousseau, "A non-local fuzzy segmentation method: Application to brain MRI," Pattern Recognit., vol. 44, no. 9, pp. 1916–1927, Sep. 2011.
[69] S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between capsules," in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 3856–3866.
[70] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014, arXiv:1409.1556. [Online]. Available: http://arxiv.org/abs/1409.1556
[71] P. Afshar, K. N. Plataniotis, and A. Mohammadi, "Capsule networks' interpretability for brain tumor classification via radiomics analyses," in Proc. IEEE Int. Conf. Image Process. (ICIP), Sep. 2019, pp. 3816–3820.
[72] M. Talo, U. B. Baloglu, Ö. Yıldırım, and U. R. Acharya, "Application of deep transfer learning for automated brain abnormality classification using MR images," Cognit. Syst. Res., vol. 54, pp. 176–188, May 2019.
[73] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 770–778.
[74] Z. N. K. Swati et al., "Brain tumor classification for MR images using transfer learning and fine-tuning," Computerized Med. Imag. Graph., vol. 75, pp. 34–46, Jul. 2019.
[75] K. Pathak, M. Pavthawala, N. Patel, D. Malek, V. Shah, and B. Vaidya, "Classification of brain tumor using convolutional neural network," in Proc. 3rd Int. Conf. Electron., Commun. Aerosp. Technol. (ICECA), Jun. 2019, pp. 128–132.
[76] S. Basheera and M. S. S. Ram, "Classification of brain tumors using deep features extracted using CNN," J. Phys., Conf. Ser., vol. 1172, Mar. 2019, Art. no. 012016.
[77] D. N. Louis et al., "The 2016 World Health Organization classification of tumors of the central nervous system: A summary," Acta Neuropathologica, vol. 131, no. 6, pp. 803–820, Jun. 2016.
[78] K. Skogen, A. Schulz, J. B. Dormagen, B. Ganeshan, E. Helseth, and A. Server, "Diagnostic performance of texture analysis on MRI in grading cerebral gliomas," Eur. J. Radiol., vol. 85, no. 4, pp. 824–829, Apr. 2016.
[79] Y. Pan et al., "Brain tumor grading based on neural networks and convolutional neural networks," in Proc. 37th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Aug. 2015, pp. 699–702.
[80] M. Huang, W. Yang, Y. Wu, J. Jiang, W. Chen, and Q. Feng, "Brain tumor segmentation based on local independent projection-based classification," IEEE Trans. Biomed. Eng., vol. 61, no. 10, pp. 2633–2645, Oct. 2014.
[81] I. U. Haq, K. Muhammad, A. Ullah, and S. W. Baik, "DeepStar: Detecting starring characters in movies," IEEE Access, vol. 7, pp. 9265–9272, 2019.
[82] K. Muhammad, S. Khan, V. Palade, I. Mehmood, and V. H. C. de Albuquerque, "Edge intelligence-assisted smoke detection in foggy surveillance environments," IEEE Trans. Ind. Informat., vol. 16, no. 2, pp. 1067–1075, Feb. 2020.
[83] K. Muhammad, R. Hamza, J. Ahmad, J. Lloret, H. Wang, and S. W. Baik, "Secure surveillance framework for IoT systems using probabilistic image encryption," IEEE Trans. Ind. Informat., vol. 14, no. 8, pp. 3679–3689, Aug. 2018.
[84] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 248–255.
[85] H. Shan et al., "3-D convolutional encoder-decoder network for low-dose CT via transfer learning from a 2-D trained network," IEEE Trans. Med. Imag., vol. 37, no. 6, pp. 1522–1534, Jun. 2018.
[86] A. van Opbroek, M. A. Ikram, M. W. Vernooij, and M. de Bruijne, "Transfer learning improves supervised image segmentation across imaging protocols," IEEE Trans. Med. Imag., vol. 34, no. 5, pp. 1018–1030, May 2015.
[87] A. J. Worth. (1996). The Internet Brain Segmentation Repository (IBSR). Accessed: Jan. 15, 2009. [Online]. Available:
[99] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 2818–2826.
[100] G. Litjens, O. Debats, J. Barentsz, N. Karssemeijer, and H. Huisman, "Computer-aided detection of prostate cancer in MRI," IEEE Trans. Med. Imag., vol. 33, no. 5, pp. 1083–1092, May 2014.
[101] S. G. Armato et al., "PROSTATEx challenges for computerized classification of prostate lesions from multiparametric magnetic resonance images," Proc. SPIE, vol. 5, Nov. 2018, Art. no. 044501.
[102] Y. Yuan et al., "Prostate cancer classification with multiparametric MRI transfer learning model," Med. Phys., vol. 46, no. 2, pp. 756–765, Feb. 2019.
[103] Z. N. K. Swati et al., "Content-based brain tumor retrieval for MR images using transfer learning," IEEE Access, vol. 7, pp. 17809–17822, 2019.
[104] S. Deepak and P. M. Ameer, "Brain tumor classification using deep CNN features via transfer learning," Comput. Biol. Med., vol. 111, Aug. 2019, Art. no. 103345.
[105] C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 1–9.
[106] J. Cheng et al., "Retrieval of brain tumors by adaptive spatial pooling and Fisher vector representation," PLoS ONE, vol. 11, no. 6, Jun. 2016, Art. no. e0157112.
[107] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2012, pp. 1097–1105.
[108] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size," 2016, arXiv:1602.07360. [Online]. Available: http://arxiv.org/abs/1602.07360
[109] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 4510–4520.
[110] T. Akiba, S. Suzuki, and K. Fukuda, "Extremely large minibatch SGD: Training ResNet-50 on ImageNet in 15 minutes," 2017, arXiv:1711.04325. [Online]. Available: http://arxiv.org/abs/1711.04325
[111] Y. Jia et al., "Caffe: Convolutional architecture for fast feature embedding," in Proc. ACM Int. Conf. Multimedia (MM), 2014,
http://www.cma.mgh.Harvard.edu/ibsr pp. 675–678.
[88] B. Cheng, M. Liu, D. Zhang, B. C. Munsell, and D. Shen, “Domain [112] L. Yeager, J. Bernauer, A. Gray, and M. Houston, “DIGITS:
transfer learning for MCI conversion prediction,” IEEE Trans. Biomed. The deep learning GPU training system,” in Proc. ICML AutoML
Eng., vol. 62, no. 7, pp. 1805–1817, Jul. 2015. Workshop, 2015. [Online]. Available: https://indico.lal.in2p3.fr/event/
[89] M. Ghafoorian et al., “Transfer learning for domain adaptation in MRI: 2914/contributions/6476/subcontributions/170/attachments/6036/7161/
Application in brain lesion segmentation,” in Proc. Int. Conf. Med. digits.pdf
Image Comput. Comput.-Assist. Intervent., 2017, pp. 516–524. [113] D. P. Kingma and J. Ba, “Adam: A method for stochastic opti-
[90] A. G. van Norden et al., “Causes and consequences of cerebral small mization,” 2014, arXiv:1412.6980. [Online]. Available: http://arxiv.
vessel disease. The RUN DMC study: A prospective cohort study. org/abs/1412.6980
Study rationale and protocol,” BMC Neurol., vol. 11, no. 1, p. 29, [114] R. Hu, B. Tian, S. Yin, and S. Wei, “Efficient hardware architecture of
Dec. 2011. softmax layer in deep neural network,” in Proc. IEEE 23rd Int. Conf.
[91] S. Ul Hassan Dar, M. Özbey, A. B. Çatlı, and T. Çukur, “A transfer- Digit. Signal Process. (DSP), Nov. 2018, pp. 1–5.
learning approach for accelerated MRI using deep neural networks,” [115] J. Ahmad, K. Muhammad, and S. W. Baik, “Medical image retrieval
2017, arXiv:1710.02615. [Online]. Available: http://arxiv.org/abs/1710. with compact binary codes generated in frequency domain using highly
02615 reactive convolutional features,” J. Med. Syst., vol. 42, no. 2, p. 24,
[92] A. Berg, J. Deng, S. Satheesh, H. Su, and F.-F. Li. (2011). Image-Net Feb. 2018.
Large Scale Visual Recognition Challenge 2011 (ILSVRC). [Online]. [116] G. Cui et al., “Machine-learning-based classification of Glioblas-
Available: http://www.image-net.org/challenges/LSVRC/ toma using MRI-based radiomic features,” Proc. SPIE, vol. 10950,
[93] S. He and B. Jalali, “Brain MRI image super resolution using phase Mar. 2019, Art. no. 1095048.
stretch transform and transfer learning,” 2018, arXiv:1807.11643. [117] P. Korfiatis and B. Erickson, “Deep learning can see the unseeable:
[Online]. Available: http://arxiv.org/abs/1807.11643 Predicting molecular markers from MRI of brain gliomas,” Clin.
[94] Y. Romano, J. Isidoro, and P. Milanfar, “RAISR: Rapid and accurate Radiol., vol. 74, no. 5, pp. 367–373, May 2019.
image super resolution,” IEEE Trans. Comput. Imag., vol. 3, no. 1, [118] H.-C. Shin et al., “Deep convolutional neural networks for computer-
pp. 110–125, Mar. 2017. aided detection: CNN architectures, dataset characteristics and transfer
[95] H. Johansen-Berg and T. E. Behrens, Diffusion MRI: From Quanti- learning,” IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1285–1298,
tative Measurement to In Vivo Neuroanatomy. New York, NY, USA: May 2016.
Academic, 2013. [119] N. Tajbakhsh et al., “Convolutional neural networks for medical image
[96] C. Haarburger et al., “Transfer learning for breast cancer malignancy analysis: Full training or fine tuning?” IEEE Trans. Med. Imag., vol. 35,
classification based on dynamic contrast-enhanced MR images,” in no. 5, pp. 1299–1312, May 2016.
Bildverarbeitung für die Medizin. Berlin, Germany: Springer Vieweg, [120] O. Russakovsky et al., “ImageNet large scale visual recognition
2018, pp. 216–221. challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252,
[97] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Dec. 2015.
inception-resnet and the impact of residual connections on learning,” [121] A. M. Rahmani et al., “Exploiting smart e-health gateways at the edge
in Proc. 31st AAAI Conf. Artif. Intell., 2017, pp. 1–7. of healthcare Internet-of-Things: A fog computing approach,” Future
[98] Q. Chen, S. Hu, P. Long, F. Lu, Y. Shi, and Y. Li, “A transfer Gener. Comput. Syst., vol. 78, pp. 641–658, Jan. 2018.
learning approach for malignant prostate lesion detection on multi- [122] A. Kumari, S. Tanwar, S. Tyagi, and N. Kumar, “Fog computing for
parametric MRI,” Technol. Cancer Res. Treatment, vol. 18, Jun. 2019, healthcare 4.0 environment: Opportunities and challenges,” Comput.
Art. no. 1533033819858363. Electr. Eng., vol. 72, pp. 1–13, Nov. 2018.
[123] J. Chen, K. Li, Q. Deng, K. Li, and P. S. Yu, “Distributed deep learning model for intelligent video surveillance systems with edge computing,” IEEE Trans. Ind. Informat., early access, Apr. 4, 2019, doi: 10.1109/TII.2019.2909473.
[124] P. Pace, G. Aloi, R. Gravina, G. Caliciuri, G. Fortino, and A. Liotta, “An edge-based architecture to support efficient applications for healthcare industry 4.0,” IEEE Trans. Ind. Informat., vol. 15, no. 1, pp. 481–489, Jan. 2019.
[125] J. Xu, H. Liu, W. Shao, and K. Deng, “Quantitative 3-D shape features based tumor identification in the fog computing architecture,” J. Ambient Intell. Humanized Comput., vol. 10, no. 8, pp. 2987–2997, Aug. 2019.
[126] J. Feng, L. T. Yang, Q. Zhu, and K.-K.-R. Choo, “Privacy-preserving tensor decomposition over encrypted data in a federated cloud environment,” IEEE Trans. Dependable Secure Comput., early access, Nov. 15, 2018, doi: 10.1109/tdsc.2018.2881452.
[127] P. Costa et al., “End-to-end adversarial retinal image synthesis,” IEEE Trans. Med. Imag., vol. 37, no. 3, pp. 781–791, Mar. 2018.
[128] D. Nie et al., “Medical image synthesis with deep convolutional adversarial networks,” IEEE Trans. Biomed. Eng., vol. 65, no. 12, pp. 2720–2730, Dec. 2018.
[129] M. Mardani et al., “Deep generative adversarial neural networks for compressive sensing MRI,” IEEE Trans. Med. Imag., vol. 38, no. 1, pp. 167–179, Jan. 2019.
[130] A. Lucas, S. Lopez-Tapia, R. Molina, and A. K. Katsaggelos, “Generative adversarial networks and perceptual losses for video super-resolution,” IEEE Trans. Image Process., vol. 28, no. 7, pp. 3312–3327, Jul. 2019.
[131] W. Yang, C. Hui, Z. Chen, J.-H. Xue, and Q. Liao, “FV-GAN: Finger vein representation using generative adversarial networks,” IEEE Trans. Inf. Forensics Security, vol. 14, no. 9, pp. 2512–2524, Sep. 2019.
[132] L. Zhu, Y. Chen, P. Ghamisi, and J. A. Benediktsson, “Generative adversarial networks for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., vol. 56, no. 9, pp. 5046–5063, Sep. 2018.
[133] C. Wang, C. Xu, C. Wang, and D. Tao, “Perceptual adversarial networks for image-to-image transformation,” IEEE Trans. Image Process., vol. 27, no. 8, pp. 4066–4079, Aug. 2018.
[134] A. Dixit and A. Nanda, “Brain tumor detection and classification in MRI: Technique for smart healthcare adaptation,” in Smart Healthcare Systems. Boca Raton, FL, USA: CRC Press, 2019, pp. 125–134.
[135] K. Muhammad, M. Sajjad, and S. W. Baik, “Dual-level security based cyclic18 steganographic method and its application for secure transmission of keyframes during wireless capsule endoscopy,” J. Med. Syst., vol. 40, no. 5, p. 114, May 2016.
[136] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in Proc. Int. Conf. Mach. Learn., 2016, pp. 1050–1059.
[137] A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 5574–5584.
[138] S. M. Raghunath et al., “Deep neural networks can predict 1-year mortality directly from ECG signal, even when clinically interpreted as normal,” Circulation, vol. 140, no. 1, 2019, Art. no. A14425.
[139] S. M. Raghunath et al., “A deep neural network for predicting incident atrial fibrillation directly from 12-lead electrocardiogram traces,” Circulation, vol. 140, no. 1, 2019, Art. no. A14407.
[140] A. Barredo Arrieta et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” 2019, arXiv:1910.10045. [Online]. Available: http://arxiv.org/abs/1910.10045
[141] W. Samek, T. Wiegand, and K.-R. Müller, “Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models,” 2017, arXiv:1708.08296. [Online]. Available: http://arxiv.org/abs/1708.08296
[142] R. Basatneh, B. Najafi, and D. G. Armstrong, “Health sensors, smart home devices, and the Internet of Medical things: An opportunity for dramatic improvement in care for the lower extremity complications of diabetes,” J. Diabetes Sci. Technol., vol. 12, no. 3, pp. 577–586, May 2018.
[143] G. J. Joyia, R. M. Liaqat, A. Farooq, and S. Rehman, “Internet of medical Things (IOMT): Applications, benefits and future challenges in healthcare domain,” J. Commun., vol. 12, no. 4, pp. 240–247, 2017.
[144] J. Chen, L. Zhang, Y.-C. Liang, X. Kang, and R. Zhang, “Resource allocation for wireless-powered IoT networks with short packet communication,” IEEE Trans. Wireless Commun., vol. 18, no. 2, pp. 1447–1461, Feb. 2019.
[145] Y. Liu, C. Yang, L. Jiang, S. Xie, and Y. Zhang, “Intelligent edge computing for IoT-based energy management in smart cities,” IEEE Netw., vol. 33, no. 2, pp. 111–117, Mar. 2019.
[146] M. R. F. B. Junior, C. L. Batista, M. E. Marques, and C. R. M. Pessoa, “Business models applicable to IoT,” in Handbook of Research on Business Models in Modern Competitive Scenarios. Hershey, PA, USA: IGI Global, 2019, pp. 21–42.
[147] T. Hussain, K. Muhammad, S. Khan, A. Ullah, M. Y. Lee, and S. W. Baik, “Intelligent baby behavior monitoring using embedded vision in IoT for smart healthcare centers,” J. Artif. Intell. Syst., vol. 1, no. 15, p. 15, 2019.
[148] S. Rani, S. H. Ahmed, R. Talwar, J. Malhotra, and H. Song, “IoMT: A reliable cross layer protocol for Internet of multimedia things,” IEEE Internet Things J., vol. 4, no. 3, pp. 832–839, Jun. 2017.
[149] M. Usman, M. A. Jan, X. He, and J. Chen, “P2DCA: A privacy-preserving-based data collection and analysis framework for IoMT applications,” IEEE J. Sel. Areas Commun., vol. 37, no. 6, pp. 1222–1230, Jun. 2019.
[150] K. Muhammad, J. Ahmad, and S. W. Baik, “Early fire detection using convolutional neural networks during surveillance for effective disaster management,” Neurocomputing, vol. 288, pp. 30–42, May 2018.
[151] M. Alaslani and B. Shihada, “Analyzing latency and dropping in today’s Internet of multimedia things,” in Proc. 16th IEEE Annu. Consum. Commun. Netw. Conf. (CCNC), Jan. 2019, pp. 1–4.
[152] G. Hatzivasilis, O. Soultatos, S. Ioannidis, C. Verikoukis, G. Demetriou, and C. Tsatsoulis, “Review of security and privacy for the Internet of Medical Things (IoMT),” in Proc. 15th Int. Conf. Distrib. Comput. Sensor Syst. (DCOSS), May 2019, pp. 457–464.
[153] Q. Zhang, L. T. Yang, Z. Chen, and P. Li, “A survey on deep learning for big data,” Inf. Fusion, vol. 42, pp. 146–157, Jul. 2018.
[154] J. Del Ser et al., “Bio-inspired computation: Where we stand and what’s next,” Swarm Evol. Comput., vol. 48, pp. 220–250, Aug. 2019.
[155] R. Miikkulainen et al., “Evolving deep neural networks,” in Artificial Intelligence in the Age of Neural Networks and Brain Computing. Amsterdam, The Netherlands: Elsevier, 2019, pp. 293–312.
[156] S. Fong, S. Deb, and X.-S. Yang, “How meta-heuristic algorithms contribute to deep learning in the hype of big data analytics,” in Progress in Intelligent Computing Techniques: Theory, Practice, and Applications. Springer, 2018, pp. 3–25.

Khan Muhammad (Member, IEEE) received the Ph.D. degree in digital contents from Sejong University, Seoul, South Korea, in 2019.

He is currently an Assistant Professor with the Department of Software and a Lead Researcher of the Intelligent Media Laboratory, Sejong University. His research interests include medical image analysis (brain magnetic resonance imaging (MRI), diagnostic hysteroscopy, and wireless capsule endoscopy), information security (steganography, encryption, watermarking, and image hashing), video summarization, computer vision, fire/smoke scene analysis, and video surveillance. He has filed over ten patents and published over 100 articles in peer-reviewed international journals and conferences in these research areas with target venues, such as the IEEE Network Magazine, the IEEE Communications Magazine, the IEEE Transactions on Industrial Informatics, the IEEE Transactions on Industrial Electronics, the IEEE Transactions on Systems, Man, and Cybernetics: Systems, the IEEE Internet of Things Journal, IEEE Access, the IEEE Transactions on Services Computing, Elsevier Information Sciences, Neurocomputing, Pattern Recognition Letters, Future Generation Computer Systems, Applied Soft Computing, the International Journal of Information Management, Swarm and Evolutionary Computation, Computer Communications, Computers in Industry, Journal of Parallel and Distributed Computing, Pervasive and Mobile Computing, Biomedical Signal Processing and Control, CASE, Springer Neural Computing and Applications, Multimedia Tools and Applications, Journal of Medical Systems, Journal of Real-Time Image Processing, and so on.

Dr. Muhammad is a member of the Association for Computing Machinery (ACM). He is also serving as a professional reviewer for over 70 well-reputed journals and conferences. He is also involved in the editing of several special issues as GE/LGE.
Salman Khan (Student Member, IEEE) received the master’s degree in software convergence from Sejong University, Seoul, South Korea, in 2020. He is currently pursuing the Ph.D. degree in computer vision (deep learning for modeling complex video activities) with Oxford Brookes University, Oxford, U.K.

He is currently a Research Assistant with the Visual Artificial Intelligence Laboratory (VAIL), Oxford Brookes University. He has published several articles in well-reputed journals, including the IEEE Internet of Things Journal, the IEEE Transactions on Industrial Informatics, Pattern Recognition Letters, and the Journal of Computational Science. His research interests include complex video activity recognition, fire/smoke scene analysis, and medical image analysis.

Mr. Khan is serving as a reviewer in reputed journals, including the IEEE TII, IEEE Access, and the IEEE IoTJ. For a complete profile, visit https://sites.google.com/view/salman-k.

Victor Hugo C. de Albuquerque (Senior Member, IEEE) graduated in mechatronics engineering from the Federal Center for Technological Education, Fortaleza, Brazil. He received the M.Sc. degree in teleinformatics engineering from the Federal University of Ceará, Fortaleza, in 2007, and the Ph.D. degree in mechanical engineering from the Federal University of Paraíba, João Pessoa, Brazil, in 2010.

He is currently a Professor and a Senior Researcher with the University of Fortaleza (UNIFOR), Fortaleza, and the Data Science Director at the Superintendency for Research and Public Safety Strategy of Ceará State (SUPESP/CE), Brazil. He is also a Full Professor with the Graduate Program in Applied Informatics, UNIFOR, where he is also the Leader of the Industrial Informatics, Electronics and Health Research Group (CNPq). He is also a specialist, mainly, in the IoT, machine/deep learning, pattern recognition, and robotics.