

Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Ahmad Chaddad, Qizong Lu, Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast,
Ahmed Bouridane, Senior Member, IEEE, and Ahmed Abdulkadir

   Abstract—Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant corner-stone and state-of-the-art research in the field, and discusses perspectives.
   Index Terms—Domain adaptation, explainable artificial intelligence, federated learning.

   Manuscript received July 29, 2022; revised October 28, 2022; accepted November 10, 2022. This work was supported in part by the National Natural Science Foundation of China (82260360) and the Foreign Young Talent Program (QN2021033002L). Recommended by Associate Editor Jiayi Ma. (Corresponding author: Ahmad Chaddad.)
   Citation: A. Chaddad, Q. Z. Lu, J. L. Li, Y. Katib, R. Kateb, C. Tanougast, A. Bouridane, and A. Abdulkadir, "Explainable, domain-adaptive, and federated artificial intelligence in medicine," IEEE/CAA J. Autom. Sinica, vol. 10, no. 4, pp. 859–876, Apr. 2023.
   A. Chaddad, Q. Z. Lu, and J. L. Li are with the School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China. A. Chaddad is also with The Laboratory for Imagery, Vision and Artificial Intelligence, Ecole de Technologie Superieure, Montreal, Quebec H3C 1K3, Canada (e-mail: ahmadchaddad@guet.edu.cn; qizong.lu@hotmail.com; indigo.aomg@gmail.com).
   Y. Katib is with the College of Medicine, Taibah University, Madinah 42353, Saudi Arabia (e-mail: yousef_katib@hotmail.com).
   R. Kateb is with the College of Computer Science and Engineering, Taibah University, Madinah 42353, Saudi Arabia (e-mail: rkatb@taibahu.edu.sa).
   C. Tanougast is with the Laboratory of Design, Optimization and Modeling of Systems, University of Lorraine, Metz 57070, France (e-mail: camel.tanougast@univ-lorraine.fr).
   A. Bouridane is with the Center for Data Analytics and Cybersecurity (CDAC), University of Sharjah, Sharjah 27272, United Arab Emirates (e-mail: abouridane@sharjah.ac.ae).
   A. Abdulkadir is with the Laboratoire de recherche en neuroimagerie, Centre Hospitalier Universitaire Vaudois, Mont Paisible 16, 1011 Lausanne, Switzerland (e-mail: Ahmed.Abdulkadir@pennmedicine.upenn.edu).
   Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
   Digital Object Identifier 10.1109/JAS.2023.123123

I.  Introduction
   ARTIFICIAL intelligence (AI), understood here as the capability of computers to transform input data to elicit an appropriate response [1], has a huge potential to transform many aspects of our lives. The methodological advances that drive the progress generally focus on core issues like solving specific tasks more accurately [2], creating models that generalize to unseen data [3], or addressing fundamentally new tasks [4]. Within these innovation cycles, the application to medical data faces particular challenges. Personal health related information is sensitive and therefore not readily sharable. When it is shared, the data itself is highly complex and heterogeneous. There is not only a large variation between individuals, but there is also measurement noise and systematic effects due to acquisition instruments and protocols. This noise, uncertainty, and heterogeneity in the data and labels create distinctly challenging sets of problems. The goal of AI in medicine is to support the clinical decision process. The decisions in clinical processes can have grave consequences, sometimes observed with a substantial delay and usually uncorrectable. Furthermore, state-of-the-art AI models tend to have a large number of parameters, which are necessary to learn complex interactions and generalize well. Thus, even when deterministic, AI models do not behave intuitively and may fail or behave unexpectedly [5]. This perception of black-box behavior leads to subjective mistrust and objectively limited reliability of an unknown extent.
   With the growing availability of extensive medical data and the increase in computational power, AI has become more important in clinical practice for early disease detection, accurate diagnosis, predictions, and prognosis [6]. However, AI models, mostly instantiated as deep artificial neural networks, are sufficiently complicated to be non-intuitive. Thus, the computer models are perceived as black boxes [7], and the black-box nature leads to mistrust of predictive models by clinicians [8]. For this reason, explainable artificial intelligence (explainable AI) was proposed in order to make AI models more transparent and easier to understand and interpret [9]. Explainable AI can thus complement human perception of the raw data [10], [11] and will contribute to making AI models in medicine more trusted.

   Computer-aided diagnosis is increasingly involved in clinical decision processes to advance the goal of personalized medicine [12]. However, some challenges limit its application in clinical practice, such as domain transfer [13]. Specifically, machine learning (ML) models assume that the training and test sets are independent and identically distributed (IID). In reality, the distributions of the training and test sets may not be identical, which leads to a domain shift when we transfer the model [14]. Even with advanced deep learning models [15], the error increases with the discrepancy between the training and test distributions [13]. Therefore, solving the domain transfer problem is key to applying AI to clinical tasks. Domain adaptation (DA) breaks the traditional ML assumption that the training and test distributions must be consistent [16]. In this context, the data set is divided into source and target data. The source data is related to the task to be solved, and the target data (with no labels or only a few labels) is related to the same task. Note that there are always domain shifts between the two domains [16]. In medical imaging, there are significant differences within data sets of the same type/modality due to various equipment, sites, protocols, etc. [17]. Domain adaptation techniques aim to reduce these differences.
   More and more attention has been paid to data privacy in recent years [18]. In clinical settings, personalized models require protection of personal health information [19]. Federated learning (FL) can integrate data from private sources without sharing the private data itself [20]. Specifically, FL protects users' privacy primarily by exchanging non-personal data, such as model updates, between agents (clients) of the learning federation [21].
   This paper aims to provide a brief narrative review of explainable/interpretable, domain-adaptive, and federated AI for clinical applications. Fig. 1 illustrates the yearly growing number of publications in the field. These search results indicate a clear trend. However, it is worth noting that the search technique is not fully sensitive because synonymous terms may be used for similar techniques; for instance, transfer learning is a type of DA. In addition, some research works may discuss one of the topics only superficially yet still be counted. This review provides a brief overview of XAI, DA, and FL and categorizes techniques within each field. For each topic, we motivate its use in medical research and healthcare applications and discuss challenges and potential for future research.

Fig. 1. Number of publications indexed on PubMed (https://pubmed.ncbi.nlm.nih.gov) that matched the search queries related to the topics in this survey. FL: ("federated learning") AND ((medicine) OR (healthcare)); XAI: (("explainable AI") OR ("explainable artificial intelligence")) AND ((medicine) OR (healthcare)); DA: ("domain adaptation") AND ((medicine) OR (healthcare)). The number for the year 2022 is a projection based on the assumption that publications appear at the same rate for the remaining two months of the year 2022.

   The remainder of this paper is structured as follows. Section II presents the fundamental concepts of the three methodological approaches. Section III discusses the applications of XAI, DA, and FL in medical research and healthcare. Finally, Section IV concludes by providing perspectives on challenges and untapped potential.

II.  Methodological Foundation and Background

A. Explainable Artificial Intelligence
   Human-centric medical decision processes are designed to be trustworthy because of the high stakes that are involved. Isolated outputs from AI models derived from deterministic, yet highly non-linear networks that consist of artificial neurons organized in many (up to a thousand and more) layers [22] leave no room for interpretation and provide no explanation of the prediction. AI models are therefore casually considered black boxes because the "How?" and "Why?" behind a machine's conclusion are in general not understandable by humans [9]. The field has a rich and nuanced taxonomy that is not always used consistently. Explainability and interpretability, sometimes used interchangeably, relate to human-focused understanding [23]–[25]. A machine is understandable if its decisions are based on descriptive and relevant information that is recognizable by humans. In this context, descriptive means how well the interpretation method captures the learned relationships, and relevant means how useful the interpretation is [26]. Explainable AI models strive for high causability, the extent to which a human can causally understand the decision-making process of the artificial intelligence. Explainability and related research fields are elements of building reliable and trusted AI systems [10].
   XAI elements are either injected into the model architecture or are external. The former leads to models that intrinsically increase the transparency of the model, whereas the latter introduces explainability post hoc [27]. Transparency is achieved by highlighting the relevance of features, by simplification, and by visual data that is relevant and meaningful for the practitioners [28].

B. Domain-Adaptive Artificial Intelligence
   Generalizability is the implicit ability to use a trained predictive model from one or more source domains in a different, but related, target domain. In contrast, domain adaptation is achieved through explicit modelling of domain differences. Assume we train a classifier using data from a source domain Ds for some task Ts, and we want to learn the same or a similar task Tt using target data Dt. If the two domains have the same distribution (Ds ∼ Dt) and the tasks are identical, then the model trained for task Ts on Ds can also be used for task Tt in Dt. If the two domains are not from the same distribution (Ds ≁ Dt), then models trained on the source domain Ds are likely to perform worse on Dt. DA is utilized to exploit similarities of tasks and learn the differences between data domains to produce models for task Tt in Dt [29].
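To make this setting concrete, the following minimal sketch (not a method from the works surveyed here) aligns source features to a target domain by matching their first- and second-order statistics, in the spirit of CORAL-style feature alignment; the data, dimensionality, and function names are illustrative assumptions.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """Map source features Xs onto the target statistics of Xt (rows = samples)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):
        # Matrix power via eigendecomposition; covariances are symmetric PSD.
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, eps, None) ** p) @ V.T

    # Whiten the source covariance, re-color with the target covariance,
    # and shift to the target mean.
    Xs_centered = Xs - Xs.mean(axis=0)
    return Xs_centered @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(axis=0)

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 16))   # hypothetical source-site features
Xt = rng.normal(0.5, 2.0, size=(300, 16))   # target-site features, shifted/rescaled
Xs_adapted = coral_align(Xs, Xt)            # now matches the target statistics
```

A classifier fitted on the adapted source features with the original source labels can then be applied to the target data; deep DA variants instead minimize such statistic-matching losses inside the network.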

   Fig. 2 shows an example of homogeneous domain adaptation, in which the same task is applied to the same features that have different distributions. Other types of DA methods are outlined in the taxonomy below.

Fig. 2. Illustration of homogeneous domain adaptation, in which the source and target data consist of the same features but have different distributions. Red and blue colors represent the source and target data, respectively. Circles and squares represent class labels. Domain adaptation aims at applying models (dashed line) learned on data from a source domain (red) to data from a target domain (blue). In the example shown, the domain adaptation consists of shifting and rotating the data points such that, after the transformation, the source and target domains overlap. The separating classification hyperplane, shown as a grey dashed line, accurately classifies the data points from the target data only after domain adaptation.

   1) Supervised, Semi-Supervised, and Unsupervised Domain Adaptation: Depending on the availability of annotations, DA applications can be classified as supervised [30], semi-supervised [31], and unsupervised [32]. In supervised DA models, labels of all training instances are available. In semi-supervised learning, only a part of the training data is annotated. Learning with no labels is considered unsupervised. Most studies focus on unsupervised DA models due to the insufficient number of labeled samples. For example, a novel unsupervised DA algorithm was proposed to eliminate domain differences in order to reduce label noise [17], while a deep domain adaptation method was proposed to transfer domain knowledge from a well-labeled source domain to a partially labeled target domain [33]. This method minimizes differences between the domains that have an important clinical impact on disease diagnosis.
   2) Single- and Cross-Modality: Medical images are derived from many types of scanners (e.g., magnetic resonance imaging (MRI), CT, X-ray), which leads to texture differences between the source and target domains [34]. This phenomenon is called the modality difference. Based on the acquisition, DA methods are categorized as either single- or cross-modality DA. In single-modality DA, the adaptation happens within a single type of data [35]. In clinical practice, medical images are acquired on hardware from different vendors and at different centers [36]. Because different modalities provide complementary information and vary in acquisition cost and expertise, a diagnosis can be established with different modalities. Cross-modality DA helps to make models applicable across modalities. For example, some models can be transferred between CT and MRI images [37]. In addition, bidirectional cross-modality DA between MRI and CT images has been applied using the synergistic image and feature alignment (SIFA) framework [38]. Despite the practicability of DA, there are always remaining risks of inherent biases in the data [39] or unfairness in models [40].
   3) Single- and Multi-Source Domain Adaptation: Depending on the number of source domains, DA can be classified into single-source DA (SDA), multi-source DA (MDA), and source-free DA [41]. Models trained on a single source domain may lack generalization. Multi-source DA collects labeled data from multiple sources with different distributions [42] and implicitly learns to be robust. For example, MDA learning can alleviate the difficulties caused by aligning multimodal characteristics between multiple domains [43]. Although MDA is more advanced, MDA applications are challenging due to the heterogeneity of data between domains. For this reason, most existing studies are based on SDA [44]. Another issue is that the source domain may be unavailable, owing to data privacy or a lack of storage space on small devices [45]. Generalized source-free domain adaptation (G-SFDA) was proposed to exploit source-domain knowledge via transfer learning without access to the source data [46].
   4) One- and Multi-Step DA: Based on the distance between the source and the target domain, domain adaptation methods can be roughly categorized into two types: one-step and multi-step domain adaptation. One-step DA methods often simply align the feature distributions of the two domains by minimizing their distance, but it is difficult for the network to bridge a very large domain discrepancy. Specifically, in remote-domain transfer learning, when the distance between the source and the target domain is too large [47], adaptation may fail and can even degrade the performance of the learning model in the target domain. This situation is called negative transfer [48]. To avoid it, it is necessary to find an "intermediate bridge" connecting the source and the target domain; this is called multi-step DA. It can be reduced to one-step domain adaptation by choosing an appropriate intermediate domain. For example, through efficient instance selection, transferable features have been successfully applied between remote domains, effectively solving the remote-domain problem [49].
   5) Shallow and Deep DA: According to the type of model, DA methods can be divided into shallow and deep DA. Usually, shallow DA leverages feature engineering and is used with traditional machine learning, whereas few timely reviews cover the emerging deep learning based methods. Unlike shallow DA, deep DA methods can leverage deep networks to learn more transferable representations by embedding DA into a deep learning pipeline [50]. In addition, deep DA integrates feature learning into the model to improve performance metrics [13].
   These methods may reduce the differences between domains, which can have a positive impact on clinical research. Nevertheless, more effort is required to fully apply such methods to clinical topics while avoiding bias.
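As a concrete illustration of how DA can be embedded into a deep learning pipeline, the sketch below implements the gradient-reversal idea used in adversarial domain adaptation (DANN-style training); the layer sizes, heads, and batch are placeholders rather than the architectures of the cited works.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
label_head = nn.Linear(64, 2)    # task classifier, trained on labeled source data
domain_head = nn.Linear(64, 2)   # tries to tell source features from target features

x = torch.randn(16, 32)                          # a mixed source/target batch
h = feature_extractor(x)
class_logits = label_head(h)                     # optimized with the task loss
domain_logits = domain_head(GradReverse.apply(h, 1.0))
# Minimizing the domain-classification loss through the reversed gradients pushes
# the feature extractor toward domain-invariant representations.
```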

C. Federated Learning
   Federated learning (FL) is a form of distributed collaborative machine learning in which the training data is not shared. This concept of data-private collaborative learning [51], initially motivated in the context of mobile devices, each of which holds only a small fraction of the data, applies equally to medical applications in which the data is distributed across hospitals [52].
   This framework works by coordinating multiple devices to update a machine learning model without sharing their private data, under the supervision of a central server. An example in medicine is the COVID-19 chest scan engine, which was built by many researchers and medical practitioners from different parts of the world. During the COVID-19 pandemic, along with increased privacy concerns, FL was mainly useful to support the diagnosis and detection of COVID-19. This was done by coordinating many hospitals to build a common AI model [53].
   We now give a general formal definition of FL. Let N be the number of data clients, for example hospitals, {H1, H2, ..., HN}, all of whom wish to contribute to the training of a predictive model by combining their respective data {D1, D2, ..., DN}. A traditional method is to combine all data and use D = D1 ∪ D2 ∪ ... ∪ DN to train a model Mall. FL is a learning process in which the data clients collaboratively train a model MFL such that no data client Hi shares its data Di with the others. In addition, the accuracy of MFL, denoted as VFL, should approach the performance Vall of Mall. Formally, let δ be a non-negative real number; if

   |VFL − Vall| < δ                                   (1)

then the federated learning algorithm is said to have an accuracy loss of δ.
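The most common way to approach this bound in practice is iterative model averaging. The following minimal sketch of federated averaging (in the spirit of FedAvg [160]) illustrates the definition above with a toy linear model; the client data, learning rate, and number of rounds are illustrative assumptions, not a prescription from the cited works.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear least squares."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: clients train locally on private data, the server
    averages the returned parameters, weighted by local sample counts."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_update(w_global, X, y) for X, y in clients]
    return np.average(local_models, axis=0, weights=sizes)

# Toy federation of three "hospitals" holding disjoint patient data.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):               # 20 communication rounds
    w = fedavg_round(w, clients)  # only parameters travel, never raw data
print(w)                          # close to true_w, i.e., a small accuracy loss
```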
  

   FL is particularly attractive for smart healthcare, as it coordinates multiple hospitals to perform AI training without sharing raw data [54]. However, FL is an emerging research area and has not yet gained much trust in the medical community; it therefore requires more research, particularly with respect to security and privacy aspects. Although this framework sounds ideal in theory, it is not immune to attacks. Moreover, its current development is not mature enough to be expected to solve all privacy issues by default. The most prominent security threats facing the FL framework are communication bottlenecks, poisoning, backdoor attacks, and inference-based attacks, which can be considered the most critical to FL privacy [20]. In this context, FL research is organized around three main topics: a) privacy protection [55], b) effective communication [56], and c) data heterogeneity [57].
   1) Privacy Protection: For generalization of predictive models in medical data analysis, recent work focuses on multisite, multidimensional samples. However, data sharing between institutions is controlled by laws such as the United States Health Insurance Portability and Accountability Act (HIPAA) [58] and the European General Data Protection Regulation (GDPR) [59]. FL has demonstrated its feasibility in sharing and protecting the privacy of 20 medical institutions and centers [60]. In FL, models are trained simultaneously at each site; these models are periodically aggregated and redistributed, requiring only the transfer of learned model weights between institutions. This step eliminates the requirement of sharing data directly [52]. Although FL can efficiently protect data privacy, its application still bears some security-related risks [61].
   2) Effective Communication: Communication is a critical bottleneck in federated networks [62]. An FL network can consist of thousands of agents contributing model updates. To obtain high-performance models, this multi-participant situation is inevitable. In this context, network communication is slower than local computation by many orders of magnitude due to limited resources such as bandwidth, energy, and power. To address this problem, two key aspects can be exploited: a) reducing the total number of communication rounds, and b) reducing the size of the messages transmitted in each round [63]. This can be achieved with client-side filtering, a reduced update frequency, and P2P (peer-to-peer) learning together with compression [64].
   3) Data Heterogeneity: Currently, data samples in hospitals are not independent and identically distributed (IID), which increases the bias of the model [65]. A proposed initial shared model for hospitals can easily be adapted to the local dataset [66]. By intelligently choosing the participants, it can reduce the number of communication rounds in FL and counterbalance the bias introduced by non-IID data. However, the data are not always of the IID type. To systematically and intuitively understand current FL methods under different data distributions, new studies have conducted comprehensive experimental comparisons between the new algorithms [67]. This means that FL can address the problem of multisite data, such as in multiple clinical institutions [68].
   Regardless of these advances in FL techniques, more effort is needed to bring these models into the healthcare field.

III.  Methods
   Deep learning models are widely applied in medicine. However, these models require more interpretability to simplify the steps of the prediction process. Despite advances in XAI models, there is a strong need to consider XAI together with DA and FL when the topics are related to medicine, in order to achieve the goal of personalized medicine. For example, DA-based XAI models have the ability to improve the performance of computer-aided diagnosis [69]. But the same model may not be able to provide feasible performance on multi-institutional data. For this reason, FL is a good option for data sharing under the condition of protecting data privacy [70]. However, there is an additional issue related to intra- and inter-site differences of distributions, which leads to a domain shift between sites. Recent work proposes to integrate unsupervised domain adaptation (UDA) into the FL framework [71]. For example, UDA guides the model to learn domain-agnostic features through adversarial learning or a specific type of batch normalization [47]. Furthermore, DA techniques such as a mixture of experts (MoE) with adversarial domain alignment can be used to improve the accuracy across different sites in the FL setup [72]. These techniques are presented below in three subsections related to XAI, DA, and FL.

A. Current Methods of Explainable Artificial Intelligence
   With high-performance deep learning models of increasing complexity, the way specific results were obtained is not accessible to humans.

TABLE I
Summary of the XAI Models Used for Clinical Applications
Application XAI / Explainability level Prediction model Reference
Preclinical relevance assessment Model combination / G GNN, FNN [89]
Detection and prediction for Alzheimer’s disease SHAP / L & G RF [90]
Deterioration risk prediction of hepatitis SHAP, LIME, PDP / L & G RF [91]
Non-communicable diseases prediction DeepSHAP / L & G DNN [92]
Spinal posture classification LIME / L SVM [93]
Classification of estrogen receptor status SmoothGrad / L DCNN [94]
Differential diagnosis of COVID-19 Grad-CAM / G CNN [95]
Prediction of mortality Shapley additive / L RNN [96]
Medical image segmentation Combination / L & G CNN, Attention Mechanism [97]
Early detection of Parkinson’s disease LIME / L VGG-16 [98]
COVID-19 diagnosis Grad-CAM ++, LRP / L DNN [99]
Evaluation of cancer in Barrett’s esophagus 5 XAI methods / L AlexNet, SqueezeNet, ResNet50, VGG16 [100]
Heart anomaly detection SHAP and Occlusion maps / L CNN, MLP [101]
Distinguish VTE patients DeepLIFT / L ANN [102]
Detection of ductal carcinoma in situ LRP / L Resnet-50 CNN [103]
Lesion prediction Activation Maps, Saliency Maps / L U-net [104]
Early detection of COVID-19 Saliency Maps / L Fuzzy-enhanced CNN [105]
EEG emotion recognition SmoothGrad, saliency maps / L CNN [106]
Cerebral hemorrhage detection and localization Grad-CAM / L ResNet [107]
Macular disease classification t-SNE, model simplification / L CNN [108]
Automated diagnosis and grading of ulcerative colitis Grad-CAM / L CNN [109]
Diagnostic for breast cancer Causal-TabNet / G GNN [110]
Prediction of depressive symptoms LIME / L RNN [111]
Early prediction of antimicrobial multidrug resistance SHAP / L & G LSTM [112]
Predictions of clinical time series CAM / Local FCN [113]
L: Local; G: Global; ANN: Artificial Neural Network; GNN: Graph Neural Network; FNN: Fuzzy Neural Network; DNN: Deep Neural Network; RNN: Recurrent Neural Network; FCN: Fully Convolutional Network; DCNN: Deep Convolutional Neural Network; VGG-16: VGG-Very-Deep-16 CNN; VGG-19: VGG-Very-Deep-19 CNN; Fuzzy-enhanced CNN: Fuzzy-enhanced Convolutional Neural Network; RF: Random Forest; SVM: Support Vector Machine; MLP: Multi-Layer Perceptron; LSTM: Long Short-Term Memory network; PDP: Partial Dependence Plot; t-SNE: t-distributed Stochastic Neighbor Embedding. Seven Networks: SqueezeNet, Inception, ResNet, ResNeXt, Xception, ShuffleNet, DenseNet; Five XAI methods: Saliency, guided backpropagation, integrated gradients, input gradients, and DeepLIFT.

   A number of methods have been proposed in recent years to increase explainability, 17 of which were discussed in a recent survey [73]; a selection is listed in Table I and discussed in the following paragraphs. These techniques are mainly divided into two categories: i) explainability in modeling and ii) explainability after modeling (see Fig. 3).

Fig. 3. Example of a flowchart for an explainable artificial intelligence model. 1) Data acquisition consists of images and a clinical report for each patient. 2) Four example methods during modeling: layers can be modified (layer modification) to simplify the structure, or the model can be combined (model combination) with another model to build a fully explainable model; in addition, an attention network can be used for visualization and the loss function can be modified. 3) The two most used methods after modeling: feature relevance analysis and visual explanation.

   1) Explainability in Modeling: Explainability is derived from modifications of the model architecture. This process can consist of simplifying (and/or quantifying) layers and the loss function, or of adding an explainable network and a visualization module. For example, CNN architectures consist of convolutional, pooling, and fully connected layers; to increase the explainability of a CNN, layers can be quantified, modified, and combined with a visual model. Replacing convolutional layers with max-pooling layers provides better visualization without losing performance [74]. Another scenario related to CNNs adds an attention network and modifies the loss function [75]. Recent studies have quantified the convolutional layers with Gaussian mixture models and/or texture features for clinical classifications [76]–[81]. However, only a few studies have focused on explainable hidden layers.
   2) Explainability After Modeling: Post-hoc interpretability (explainability after modeling) requires many experiments to balance it against model accuracy. It is grouped into the following categories:
   a) Feature relevance analysis: It considers the importance of features in the prediction model (e.g., layer-wise relevance propagation (LRP)). LRP is an explainable AI method that explores the relation between output and input data by generating a separate relevance map for each individual image. LRP with the explainability concept is also used to prune CNNs: it finds the most relevant units (i.e., weights or filters) via their relevance scores [82]. DeepLIFT is another method to improve the effectiveness of feature learning and the interpretability of the model itself. It works by comparing the activation of each neuron to its reference activation and assigns scores according to the difference between the two [83], [84].
   b) Visual explanation: To further improve model visualization, heat maps, saliency maps, or class activation methods are considered. Among these, class activation mapping (CAM) is the most widely used. In a CNN, it can improve explainability by replacing the fully connected layer with a global average pooling (GAP) layer [85]. However, CAM requires modifying the structure of the original model and retraining it. Grad-CAM, another version of CAM for deep neural networks [86], instead uses the gradient information flowing into the convolutional layer for the decision of interest. Whether with CAM or Grad-CAM, the generated visualizations are not always visually sufficient. In addition, the Score-CAM model removes the dependence on gradients by obtaining the linear weights in a different way [87].
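To make the gradient-based variant concrete, here is a minimal Grad-CAM sketch in PyTorch using a toy, randomly initialized CNN as a stand-in for a trained diagnostic model; the architecture, hooked layer, and input shape are illustrative assumptions rather than any model cited in Table I.

```python
import torch
from torch import nn

# Toy CNN standing in for a trained classifier (weights are random here).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),     # last convolutional layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
model.eval()

store = {}
def keep_feature_maps(module, inputs, output):
    output.retain_grad()            # keep gradients of the conv feature maps
    store["maps"] = output
model[2].register_forward_hook(keep_feature_maps)

x = torch.randn(1, 1, 64, 64)                      # e.g., one grayscale image
logits = model(x)
target_class = logits.argmax(dim=1).item()
logits[0, target_class].backward()                 # gradient of the class score

maps = store["maps"]                               # (1, 16, 64, 64) activations
weights = maps.grad.mean(dim=(2, 3), keepdim=True) # GAP of gradients per channel
cam = torch.relu((weights * maps).sum(dim=1)).squeeze(0).detach()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map in [0, 1]
```

Upsampling the resulting map to the input resolution and overlaying it on the image gives the familiar Grad-CAM heat map.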

   3) Explainability Level: The explainability scale is divided into different categories in different situations. For example, XAI has been divided into five scales according to the ethology of explaining animal intentionality [88]. According to the transparency of the model, XAI is divided into simulatability, decomposability, and algorithmic transparency [25]. Furthermore, XAI for deep learning models can be grouped into a) local and b) global explanations. For a local explanation, the model considers a local approximation of how the black box works; it mainly focuses on explaining individual data instances. A global explanation describes the whole mechanism of the model. Generally, a group of data instances is needed to generate one or more explanation maps.
   Table I summarizes XAI in deep learning models applied to clinical tasks. We note that CNNs are the most widely used deep learning models and gradient-based explainability is the most applied principle. The level of explainability is mostly local.
   Agnostic models: In addition to the previous XAI methods, Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are agnostic to the deep learning models they are applied to. Specifically, LIME uses explainable models (such as linear models and decision trees) to approximate the target predictions of the black-box model. It detects the relationships with the output of the black-box model by slightly perturbing the input of the XAI model. For example, LIME demonstrated feasible explanations of artificial neural networks by altering the input features and fitting a linear model [114]. SHAP explains how each feature affects the predicted value (via the Shapley value). It is computed as the average of the marginal contributions over all permutations of the features. It provides different explanations for a particular model (e.g., feature importance visualization, linear formulation). With SHAP [115], it is possible to illustrate how design parameters affect performance using DNN features.
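The sketch below shows the LIME idea in a dozen lines: perturb the input around one case, query the black box, and fit a locally weighted linear surrogate whose coefficients act as the explanation. The black-box function, kernel width, and feature dimensionality are hypothetical stand-ins, not any model from Table I.

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque risk model: returns a probability-like score."""
    return 1.0 / (1.0 + np.exp(-(1.8 * X[:, 0] - 1.2 * X[:, 2] + 0.3)))

def lime_explain(x0, predict, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around the instance x0."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))  # local perturbations
    y = predict(X)                                              # query the black box
    dist = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)                # proximity weights
    Xd = np.hstack([np.ones((n_samples, 1)), X - x0])           # intercept + offsets
    beta = np.linalg.solve(Xd.T @ (Xd * w[:, None]), Xd.T @ (w * y))
    return beta[1:]                                             # local feature effects

x0 = np.array([0.2, -1.0, 0.5, 0.0])
print(lime_explain(x0, black_box))  # positive effect of feature 0, negative of feature 2
```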

B. Domain Adaptation With Deep Learning
   We present the main steps of DA models by summarizing the existing literature. We group the existing methods into 1) data-related DA, 2) feature-related DA, and 3) adversarial domain adaptation, and we report representative models in Table II. We note that the problem of domain bias occurs when the target domain differs from the source domain. This type of bias limits the performance of transferred deep learning models and prevents them from generalizing.
   1) Domain Adaptation Related to Data: It is assumed that DA can solve the problem of domain shift. However, there is a bias due to different hospital equipment. Instance re-weighting adaptation is a method to solve this issue when the domain bias is small. It removes samples from the source domain that do not match the distribution of the target domain. For example, samples with a higher correlation with target samples are assigned larger weights [116]. The re-weighted data are then fed back into the training model to reduce the problems arising from domain shifts [117]. This idea has been developed further with a re-weighting mechanism that dynamically updates the weights in the loss function [118]. Regardless of the advanced algorithms and methods used in data-related DA, these models can only solve the problem of small domain bias.
   2) Domain Adaptation Related to Features: It is based on the feature similarity between the source and the target domains. For example, a feature transformation strategy can solve the bias problem by transferring the source and target samples from the original feature space to a new feature representation space [119]. A novel feature-based transfer learning algorithm for remote domains requires only a small portion of labeled target samples from a different domain [120]. However, transferred features can only reduce the distribution variance from a single perspective, and it is a challenge to find an optimal transfer learning method for a given dataset. For this reason, the distribution variance has been reduced from multiple perspectives by applying multi-feature-based transfer learning methods [121].
   3) Building an Adversarial Domain Adaptation: Data- and feature-based DA aim to align a labeled source domain and an unlabeled target domain, which requires accessing the source data. This raises concerns about data privacy, portability, and transmission efficiency. Therefore, a method that can adapt source-trained models towards target distributions without accessing the source data is promising. To adapt a model, samples from the source and target domains can share similar representations to provide similar predictions [122]. These models need to be pre-trained using the source domain data and re-trained on the target domain [122], which increases the processing time; therefore, these types of predictive models are not preferred. Another scenario, based on adversarial domain adaptation, is currently one of the mainstream methods. It is known that deep models (e.g., adversarial domain adaptation) can use the transformed features for domain adaptation [123]. For example, an unsupervised DA-based method has been used for predicting glaucoma from fundus images [124]. Specifically, the authors designed a loss function that learns the source-region features while keeping the original labels unchanged. An improved adversarial domain adaptation network has been considered for tumor image diagnosis with few and/or no labels [125].
   Table II reports the DA methods used for clinical applications. We find that the dominant research approaches in recent years are related to adversarial networks, with a lower prevalence of DA methods based on features and data. We also observe that most of these approaches are based on unsupervised DA, which is due to the limited availability of labeled medical data. These contributions provide a solid foundation for future DA methods applied in medicine.

TABLE II
Summary of Recent Domain Adaptation Models Used for Clinical Topics
Application Method Cross-domain Topics Reference
Gastric epithelial tumor GAN N Y [125]
Multiple sclerosis lesions CNN N N [122]
Arrhythmia heartbeat CNN N N [126]
Bio-metric identification CycleGAN Y S and N [127]
Medical image DDA-Net Y N [128]
Cardiac PnP-AdaNet Y N [129]
Brain dementia identification SVM Y Y [130]
Brain dementia identification AD2A N N [131]
Autism spectrum disorder MSDA/MVSR N Y [132]
Cardiac CycleGAN Y N [133]
Lung ADDA N N [134]
Histopathological HisNet-SSDA Y S [135]
Breast cancer DADA N Y [136]
Pancreatic cancer GCN N N [137]
Atrial fibrillation 3D U-Net N Y [138]
Y: Yes; N: No; S: Semi-supervised; GAN: Generative Adversarial Network; CNN: Convolutional Neural Network; CycleGAN: Cycle-Consistent Generative Adversarial Network; DDA-Net: Dual Domain Adaptation Network; DASC-Net: Domain Adaptation based Self-Correction model; PnP-AdaNet: Plug-and-Play Adversarial Domain Adaptation; SVM: Support Vector Machine; AD2A: Attention-guided Deep Domain Adaptation; MSDA: Multi-Source Domain Adaptation; MVSR: Multi-View Sparse Representation; ADDA: Adversarial Discriminative Domain Adaptation; HisNet-SSDA: deep transferred semi-supervised domain adaptation; DADA: Depth-Aware Domain Adaptation; GCN: Graph Convolutional Network; 3D U-Net: three-dimensional U-Net.

C. Federated Learning Modeling
   FL updates the models to achieve high performance through a co-modeling step between different data sources (clients). FL is divided into the following categories (see Fig. 4):
   1) Horizontal Federated Learning: Horizontal federated learning (HFL) is a setting in which all the parties share the same feature space. Its training process is as follows: a) participants first download the latest model from the server and train it individually on their local data; b) the trained model with its parameters (i.e., encrypted gradients) is uploaded to the server; and c) the server aggregates the gradients of each user and returns the updated model to each participant to update the local models [154]. With this advantage, HFL is widely used in clinical topics [155].
   2) Vertical Federated Learning: Vertical federated learning (VFL) lets multiple parties that possess different attributes (e.g., features and/or labels) of the same data entities (e.g., persons) jointly train a model. Its learning process consists of a) encrypted sample alignment and b) encrypted model training. a) Encrypted sample alignment: encryption-based user ID alignment technology is used to align the common users. These common users' data are then considered for b) the encrypted model training. However, during the learning process, VFL requires multiple communications between all participants, which entails a high processing time. For this reason, several studies have developed asynchronous training algorithms based on vertically partitioned data [156]. We note that many existing methods focus on HFL, where all feature sets and labels are available.
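The difference between the two settings is essentially how a joint patient table is partitioned. The toy sketch below makes this explicit; the feature counts, patient IDs, and split are invented for illustration.

```python
import numpy as np

# A toy joint "patient table": rows are patients, columns are clinical features.
rng = np.random.default_rng(0)
patient_ids = np.arange(12)
features = rng.normal(size=(12, 6))          # six hypothetical measurements
labels = rng.integers(0, 2, size=12)

# Horizontal FL: sites share the SAME feature space but hold DIFFERENT patients.
hfl_site_a = (patient_ids[:6], features[:6], labels[:6])
hfl_site_b = (patient_ids[6:], features[6:], labels[6:])

# Vertical FL: parties hold DIFFERENT attributes of the SAME patients. Entity
# alignment first finds the shared patient IDs; each party then keeps only its
# own columns (here, party B also holds the labels).
shared = np.intersect1d(patient_ids[:10], patient_ids[2:])
vfl_party_a = (shared, features[shared, :3])
vfl_party_b = (shared, features[shared, 3:], labels[shared])
```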

Fig. 4. Example of federated learning based on a) the horizontal and b) the vertical technique. In horizontal federated learning, multiple clients (hospitals) with the same data structure collaborate to train a predictive/classification model with the help of a server. In vertical federated learning, the common users of the different participants are aligned first; the encrypted model training is then performed based on these common users.

   3) Federated Transfer Learning: Federated transfer learning (FTL) generalizes the concept of federated learning and emphasizes that collaborative modeling can be performed on any data distribution and entity [157]. For example, participants first compute and encrypt their intermediate results together with their gradients and losses. The gradients and losses are then collected and decrypted by a collaborator. Finally, the participants receive the aggregated results, which are used to update their respective models. We note that FTL uses homomorphic encryption for security and polynomial difference approximation for a) privacy and b) prevention of data leakage [154]. Unlike HFL, which keeps the original data and models locally, FTL makes the participants encrypt the gradients before transmission and add random masks to prevent participants from obtaining information about each other. In healthcare applications, FTL can not only provide a standard framework, but also achieve efficient classification and avoid the problem of unbalanced classes [158].
   4) Federated Optimization: Compared with traditional machine learning, FL must cope with significant variability in the system characteristics of each device in the network (systems heterogeneity) and with non-identically distributed data across the network (statistical heterogeneity) [159]. Furthermore, FL aggregation requires defining an aggregation strategy, i.e., a method to combine the local models coming from the clients into a global one. The standard and simplest aggregation strategy is federated averaging (known as FedAvg [160]). Another strategy, FedProx [161], is a generalization of FedAvg with some modifications to address data and system heterogeneity; FedProx has also been used to explore the interactions between statistical and systems heterogeneity [159]. In addition, there is a new version of FedAvg, called FedAvg+, based on personalization and clustered FL [162]. The FedEM model is based on a mixture-distribution assumption with the expectation-maximization algorithm [163]. Unfortunately, these optimization algorithms are still rarely applied in healthcare topics.
   Table III reports FL techniques used recently for healthcare data. It can be observed that the accuracy of FL models under these optimization algorithms is close to that of the central model, which demonstrates the potential of federated learning. However, the number of clients studied is still limited to 100 [148]. Moreover, most of the data from these clients are derived from private datasets or multi-site datasets, which will limit the development of FL in practical medical applications [164]. So far, FL for healthcare is still in its early stages. The preliminary investigations provide a good foundation for future work, regardless of whether they involved replicating FL environments or doing limited experiments across hospitals [165].

D. Overview of Methods and Datasets Used for XAI, DA, and FL
   Table IV compares methods of XAI, DA and FL based on key strengths and weaknesses and lists code and datasets within each topic. To overcome the limited availability of medical datasets, many of the deep learning based XAI, DA and FL techniques employ transfer learning from a large dataset such as ImageNet [166]. Thus, we include the original datasets and code used for the recent and popular techniques. For example, Grad-CAM is one of the common XAI methods used for multiple tasks in medical topics, such as grading of ulcerative colitis [109], detecting cerebral hemorrhage [167], and predicting COVID-19 [67], [168], with a high degree of image interpretation. LIME also shows feasible performance in multiple tasks such as spinal posture classification [93], detection of Parkinson's disease [98], and prediction of depressive symptoms [111]. Many other techniques used recently (e.g., Shapley additive explanations, GraphLIME, etc.) are mostly based on deep learning models that need powerful, high-performance computers.
   DA in high-dimensional complex data like medical imaging relies on deep learning to implicitly model nonlinear transformations [169]. For example, the difference between the feature distributions of the source and target domains is narrowed by the deep domain confusion model for cross-subject recognition [170]. Recent algorithms such as deep CORAL [171], deep adaptation networks [172], the deep subdomain associate adaptation network (DSAAN) [173], and the emotional domain adversarial neural network (EDANN) [174] are recommended to reduce domain differences.
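Deep domain confusion and related methods add a maximum mean discrepancy (MMD) term between source and target feature batches to the task loss. The following minimal sketch computes such a term; the feature dimensionality, kernel bandwidth, and data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian kernel values between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches."""
    return (rbf_kernel(source, source, sigma).mean()
            + rbf_kernel(target, target, sigma).mean()
            - 2.0 * rbf_kernel(source, target, sigma).mean())

rng = np.random.default_rng(0)
src_feats = rng.normal(0.0, 1.0, size=(64, 32))   # e.g., source-site deep features
tgt_feats = rng.normal(0.8, 1.0, size=(64, 32))   # target-site features, shifted
print(mmd2(src_feats, tgt_feats))  # larger values indicate a bigger domain gap
```

During training, this quantity (computed on differentiable features) would be added to the classification loss so that the network learns representations on which the two domains are hard to distinguish.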

TABLE III
Summary of Federated Learning Models Used for Clinical Topics
Application Model / Learning / optimization n Performance (Local / Central / FL models) Reference
Predicting clinical outcomes EXAM / HFL /—— 20 AUC: 0.79/ - / 0.92 [139]
Precision medicine RETAIN / HFL / FedAvg 42 AUC: - / 0.73 / 0.62 [140]
Spinal cord gray matter segmentation DNN / HFL / FedAvg / FedPRox 4 Sensitivity: - / 0.76 / 0.78 [141]
Prostate cancer diagnosis 3D AH-Net / HFL /—— 3 Dice: 0.812–0.872/ - / 0.89 [52]
Detection diabetic retinopathy AlexNet / FTL / FedAvg / FedProx 5 Acc: 0.81-0.92/ - / 0.65-0.90 [142]
Prediction in COVID-19 patients LASSO, MLP / HFL /—— 5 AUC: - / 0.719-0.822 / 0.786-0.836 [143]
Image reconstruction U-net / HFL / FedAvg 4 SSIM: 0.94/ - / 0.93 [144]
Computational pathology CNN / HFL / 3 AUC: - / 0.985±0.004 / 0.976±0.007 [145]
Predicting outcomes in patients ResNet-34 / HFL / FedAvg 20 AUC: 0.79 / - / 0.92 [146]
Multimodal melanoma detection EfficientNet / HFL / FedAvg 5 Acc: - / 0.83 / 0.83 [147]
Arrhythmia detection of Non-IID ECG ——/ HFL / EWC 100 F1-score: - / — / 0.80 [148]
Joint training ——/ HFL / FedACS 3 Acc: - / — / 0.70 [149]
MultiView training CNN / VFL/—— 2 Acc: 0.62/ - / 0.71 [150]
Prognostic prediction ELM / HFL /—— 4 Acc: - / 0.89 / 0.8899 [151]
Designing ECG monitoring healthcare system CNN / FTL /—— 3 Acc: - / 0.94 / 0.92 [70]
Multi-center imaging diagnostics 3D-CNN, ResNet-18 / HFL /—— 4 AUC: - / 0.85 / 0.86 [152]
COVID-19 detection ResNet / HFL / FedMoCo 3 Acc: - / 0.96 / 91.56 [153]
Local model: trained on a single client's data alone; Central model: trained on all client data pooled centrally; FL model: model trained with FL methods; n: number of institutions; EXAM: Electronic Medical Record (EMR) chest X-ray AI model; HFL: Horizontal Federated Learning; VFL: Vertical Federated Learning; FTL: Federated Transfer Learning; DNN: Deep Neural Network; CNN: Convolutional Neural Network; 3D-CNN: three-Dimensional Convolutional Neural Network; AlexNet, EfficientNetB7, ResNet, ResNet-34, ResNet-18, U-net: CNN architectures; AUC: Area Under the Curve; Acc: Accuracy; 3D AH-Net: 3D Anisotropic Hybrid Network; LASSO: Least Absolute Shrinkage and Selection Operator; MLP: Multilayer Perceptron; Dice: Dice coefficient; Kappa score: a measure of classification accuracy; SSIM: Structural Similarity Index Measure; PSNR: Peak Signal-to-Noise Ratio; ELM: Extreme Learning Machine; RETAIN: two parallel RNN branches merged at a final logistic layer; EWC: Elastic Weight Consolidation; FedAvg: Federated Averaging; FedACS: FedAvg with adaptive client sampling; FedMoCo: a robust federated contrastive learning framework; FedProx: an optimization method to tackle heterogeneity in federated networks.

   Despite the advances in DA models, few algorithms have been applied to medical data [169]. An example in medical imaging is the explicit harmonization of uni-modal brain imaging data across multiple sites with varying acquisition hardware [175].
   Recent studies of FL provide promising results in healthcare applications. For example, data from 20 institutes across the globe were used to train a model (EXAM: electronic medical record (EMR) chest X-ray AI model) for predicting COVID-19 [139]. This model is based on HFL and uses differential privacy to mitigate the risk of data "interception" during site-server communication. In [142], three main models, using standard transfer learning, FedAvg, and federated proximal (FedProx) frameworks, respectively, are considered to diagnose diabetic retinopathy. In [176], a new study proposes federated disentangled representation learning for unsupervised brain anomaly detection using MR scans from four different institutions. To address the problems of security and trustworthiness, a densely connected CNN based on FedAvg was proposed and used for COVID-19 [177]. Given the limitations on sharing data across hospitals and countries, more collaborations will improve the feasibility of FL in medical applications.

TABLE IV
The Strengths and Weaknesses of XAI, DA, and FL Techniques

Explainable Artificial Intelligence
Gradient-weighted Class Activation Mapping (Grad-CAM). Strengths: simple to use; no need to change the network structure; efficient to compute; high degree of image explanation. Weaknesses: the weight of the saliency map depends on the model and data set; sometimes prominent areas cannot be correctly labeled; poor robustness. Code: C1. Data: D1, D2. References: [67], [86], [109], [167], [168].
Local Explainable Model-agnostic Explanation (LIME). Strengths: uses a simple model for local explanation. Weaknesses: poor robustness; inefficient calculation. Code: C2. Data: D3. References: [93], [98], [111], [178].
GraphLIME. Strengths: simple to use; trustworthy. Weaknesses: inefficient calculation; unstable. Code: *C3. Data: D4. References: [91], [179].
Shapley Additive Explanations (SHAP). Strengths: useful for interpretation; repeatability. Weaknesses: inefficient computation. Code: C4. Data: D5, D6. References: [96], [180].

Domain Adaptation
Deep Domain Confusion. Strengths: reduces the distribution difference between the source and target domains. Weaknesses: only adapts one layer of the network; a single fixed kernel may not be optimal. Code: C5. Data: D7. References: [170], [181].
Deep Adaptation Networks. Strengths: reduce domain differences. Weaknesses: the domain difference problem cannot be solved by semi-supervised learning. Code: C5. Data: D7. References: [182], [183].
Deep Subdomain Adaptation Network. Strengths: the weights can be defined flexibly. Weaknesses: the network weight values may not be the best. Code: C5. Data: D8. References: [184], [185].
Dynamic Adversarial Adaptation Networks. Strengths: solve the dynamic distribution adaptation problem in adversarial networks. Weaknesses: cannot handle regression problems. Code: C5. Data: D7. References: [186], [187].

Federated Learning
EXAM / HFL. Strengths: can use large and heterogeneous data-sets; robust and generalizable. Weaknesses: bias may occur due to limitations in data quality. Code: C6. Data: D9. Reference: [139].
AlexNet / FTL / FedAvg & FedProx. Strengths: high prediction rate; mitigates statistical heterogeneity. Weaknesses: sub-optimal security; data accuracy pitfalls. Code: −. Data: D10–D12. Reference: [142].
CNN / HFL / FedDis. Strengths: good generalizability in clinical applications; flexibility. Weaknesses: privacy concerns. Code: C7. Data: D13–D17. Reference: [176].
3D-CNN / HFL / FedAvg. Strengths: high performance metrics; provides visual explanations; suitable for transfer learning. Weaknesses: weak federated training process under unstable internet connections; inefficient computation. Code: C8. Data: D18. Reference: [177].

FedDis: Federated Disentangled representation learning for unsupervised brain anomaly segmentation; EXAM: Electronic Medical Record (EMR) chest X-ray AI model; "−": not available; "*": not official; C: code; D: data.
C1 https://github.com/zhoubolei/CAM, https://github.com/Cloud-CV/Grad-CAM
C2 https://github.com/marcotcr/lime
C3 https://github.com/WilliamCCHuang/GraphLIME
C4 https://github.com/slundberg/shap
C5 https://github.com/jindongwang/transferlearning/tree/master/code/DeepDA/, https://github.com/ting2696/Deep-Symmetric-Adaptation-Network.
C6 https://ngc.nvidia.com/catalog/models/nvidia:med:clara_train_covid19_exam_ehr_xray
C7 https://github.com/albarqounilab/
C8 https://github.com/HUST-EIC-AI-LAB/UCADI
D1 http://host.robots.ox.ac.uk/pascal/VOC/voc2007
D2 ImageNet Object Localization Challenge [166]
D3 https://github.com/marcotcr/lime-experiments
D4 Datasets are from Cora and Pubmed.
D5 [188]
D6 http://yann.lecun.com/exdb/mnist
D7 [189]–[193], ultrasoundcases.info
D8 https://github.com/jindongwang/transferlearning/blob/master/data/dataset.md, [186]
D9 https://www.kaggle.com/datasets/mariaherrerot/eyepacspreprocess
D10 https://www.adcis.net/en/third-party/messidor
D11 https://ieee-dataport.org/open-access/indian-diabetic-retinopathy-image-dataset-idrid
D12 https://www.kaggle.com/datasets/mariaherrerot/aptos2019
D13 https://www.oasis-brains.org
D14 http://adni.loni.usc.edu/data-553samples/access-data/
D15 http://lit.fe.uni-lj.si/tools.php?lang=eng
D16 https://smart-stats-tools.org/lesion-challenge-2015
D17 https://www.med.upenn.edu/sbia/brats2018/data.html
D18 https://www.nhsx.nhs.uk/covid-19-response/data-and-covid-19/national-covid-19-chest-imaging-database-nccid/
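
To connect the first row of the table above to running code, the following minimal PyTorch sketch outlines the core Grad-CAM computation: the gradients of a class score are global-average-pooled and used to weight the activation maps of the last convolutional block, yielding a coarse class-discriminative heatmap. The backbone, the hooked layer, and the random input are illustrative assumptions; the repositories listed under C1 provide the reference implementations.

```python
# Minimal Grad-CAM sketch (illustrative only; see C1 for the official implementations).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # load pretrained weights in practice
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    # Keep the feature maps and register a hook to also keep their gradients.
    activations["value"] = output
    output.register_hook(lambda grad: gradients.update(value=grad))

# Hook the last convolutional block (layer4 in ResNet-18).
model.layer4.register_forward_hook(save_activation)

def grad_cam(image, class_idx=None):
    """Return a coarse, class-discriminative heatmap for one (1, 3, H, W) image."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

heatmap = grad_cam(torch.randn(1, 3, 224, 224))           # stand-in for a medical image
```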

the model understanding can only explain the model output rather than the internal decision-making. Global explainability is desirable in clinical tasks to achieve trust. More specifically, practical XAI applications inevitably have to consider users without a relevant AI background; additional explanations, such as user manuals, are therefore strongly recommended.
2) Domain Adaptation: Domain adaptation can effectively promote the transfer of models between different domains. However, data heterogeneity and label validity remain critical challenges. Many DA methods, such as data-based and feature-based methods, can reduce domain bias and differences in data distribution. Unfortunately, these methods are often too rigid and require substantial processing time. Adversarial DA, in contrast, shows strong performance: it can effectively reduce domain differences and improve generalization.
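
The feature-based DA family discussed above (and the Deep Domain Confusion and Deep Adaptation Networks entries in the table) reduces the distribution difference by penalizing a discrepancy between source and target features during training. The sketch below adds a simple multi-kernel maximum mean discrepancy (MMD) term to the task loss; the toy encoder, kernel bandwidths, and loss weight are illustrative assumptions rather than settings from any cited paper.

```python
# Minimal feature-level domain adaptation sketch with a Gaussian-kernel MMD penalty
# (in the spirit of Deep Domain Confusion / Deep Adaptation Networks; all
# hyper-parameters below are illustrative assumptions).
import torch
import torch.nn as nn

def gaussian_mmd(source, target, bandwidths=(1.0, 2.0, 4.0)):
    """Simple (biased) multi-kernel MMD^2 estimate between two feature batches (n, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2 * bw ** 2)) for bw in bandwidths)
    return kernel(source, source).mean() + kernel(target, target).mean() \
        - 2 * kernel(source, target).mean()

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
classifier = nn.Linear(16, 2)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()
lambda_mmd = 0.5   # trade-off between the task loss and domain confusion

# One illustrative training step with random stand-in batches.
xs, ys = torch.randn(32, 64), torch.randint(0, 2, (32,))   # labelled source batch
xt = torch.randn(32, 64)                                   # unlabelled target batch
fs, ft = encoder(xs), encoder(xt)
loss = ce(classifier(fs), ys) + lambda_mmd * gaussian_mmd(fs, ft)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```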
3) Federated Learning: Federated learning is not yet widely applied to healthcare data, and most studies focus on horizontal federated learning, which does not fully exploit the potential of FL. Only a few optimization algorithms achieve performance similar to that of a centrally trained model. We note that FL faces the challenges of poor data quality and insufficient model accuracy, largely because of the absence of standardized, uniform data collection across institutions. Standardized, unified data combined with model security protection mechanisms is therefore key to improving deep learning models for clinical tasks.
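
To make the horizontal FL setting discussed above concrete, the sketch below implements the weighted parameter averaging at the heart of FedAvg [160]: each site trains on its own records, and only model parameters, never patient data, are sent for aggregation. The toy model, client sample sizes, and single communication round are illustrative assumptions.

```python
# Minimal FedAvg sketch: clients share parameter updates, never raw health data.
# The model, data, and single round below are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))

def local_update(global_model, data, labels, epochs=1, lr=0.01):
    """Train a copy of the global model on one site's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return model.state_dict(), len(data)

def fed_avg(updates):
    """Average client state dicts, weighted by local sample counts."""
    total = sum(n for _, n in updates)
    avg = copy.deepcopy(updates[0][0])
    for key in avg:
        avg[key] = sum(sd[key] * (n / total) for sd, n in updates)
    return avg

global_model = make_model()
clients = [(torch.randn(n, 10), torch.randint(0, 2, (n,))) for n in (40, 25, 60)]
updates = [local_update(global_model, x, y) for x, y in clients]     # local training
global_model.load_state_dict(fed_avg(updates))                       # one communication round
```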
4) Outlook to Further Challenges: In the past decade, many methods originating in computer vision and other non-medical fields found applications in medical research and healthcare with some delay and minor adaptations. If this trend continues, we expect to see more of the emerging XAI, DA, and FL methods in medicine for tasks that are structured similarly to those in non-medical fields. We note, however, that healthcare processes often incorporate irregularly sampled, longitudinal, multi-modal data for decision-making. The imputation of missing data can lead to inaccurate predictions and biased estimates [197], [198]. With strong assumptions and simplified models, clinical decision processes can be modeled and optimized using mixed linear models that naturally handle missing data, as long as the input data is low-dimensional and lies in a homogeneous domain [199]. To train deep learning models with similar flexibility on more complex data, much larger training data sets are required. Combined, FL may help leverage massive pools of private data that are transformed into a common reference domain with DA to train interpretable, and thus more trustworthy, AI models using XAI.
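
As a minimal illustration of the point about mixed linear models, the sketch below fits a random-intercept model to simulated, irregularly sampled longitudinal scores with statsmodels; because each patient may contribute a different number of visits, no imputation is required. All variable names and data are hypothetical and are not taken from any study cited here.

```python
# Minimal mixed linear model sketch for irregularly sampled longitudinal data.
# Variable names and the simulated records are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for patient in range(30):
    base = rng.normal(0, 3)                 # patient-specific offset (random intercept)
    n_visits = int(rng.integers(2, 6))      # unbalanced, irregular follow-up per patient
    for t in sorted(rng.uniform(0, 24, size=n_visits)):
        rows.append({"patient": patient,
                     "months": float(t),
                     "score": 50.0 + base - 0.4 * t + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Random intercept per patient; unequal visit counts are handled without imputation.
model = smf.mixedlm("score ~ months", data=df, groups=df["patient"])
print(model.fit().summary())
```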
References
[1] S. C. Shapiro, Encyclopedia of Artificial Intelligence. 2nd ed. New York, USA: Wiley, 1992.
[2] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. A. Ma, Z. H. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, Dec. 2015.
[3] H. D. Li, J. C. Li, X. M. Guan, B. H. Liang, Y. T. Lai, and X. L. Luo, "Research on overfitting of deep learning," in Proc. 15th Int. Conf. Computational Intelligence and Security, Macao, China, 2019, pp. 78−81.
[4] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, J. Oh, D. Horgan, M. Kroiss, I. Danihelka, A. Huang, L. Sifre, T. Cai, J. P. Agapiou, M. Jaderberg, A. S. Vezhnevets, R. Leblond, T. Pohlen, V. Dalibard, D. Budden, Y. Sulsky, J. Molloy, T. L. Paine, C. Gulcehre, Z. Y. Wang, T. Pfaff, Y. H. Wu, R. Ring, D. Yogatama, D. Wünsch, K. McKinney, O. Smith, T. Schaul, T. Lillicrap, K. Kavukcuoglu, D. Hassabis, C. Apps, and D. Silver, "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, vol. 575, no. 7782, pp. 350–354, Nov. 2019.
[5] A. Nguyen, J. Yosinski, and J. Clune, "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Boston, USA, 2015, pp. 427−436.
[6] G. Varoquaux and V. Cheplygina, "Machine learning for medical imaging: Methodological failures and recommendations for the future," NPJ Digit. Med., vol. 5, no. 1, p. 48, Apr. 2022.
[7] M. Gaur, K. Faldu, and A. Sheth, "Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable?" IEEE Internet Comput., vol. 25, no. 1, pp. 51–59, Jan.-Feb. 2021.
[8] A. Adadi and M. Berrada, "Explainable AI for healthcare: From black box to interpretable models," in Embedded Systems and Artificial Intelligence, V. Bhateja, S. C. Satapathy, and H. Satori, Eds. Singapore, Singapore: Springer, 2020, pp. 327−337.
[9] G. Yang, Q. H. Ye, and J. Xia, "Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond," Inf. Fusion, vol. 77, pp. 29–52, Jan. 2022.
[10] A. Holzinger, "The next frontier: AI we can really trust," in Proc. Joint European Conf. Machine Learning and Knowledge Discovery in Databases, 2021, pp. 427−440.
[11] A. Holzinger, M. Dehmer, F. Emmert-Streib, R. Cucchiara, I. Augenstein, J. Del Ser, W. Samek, I. Jurisica, and N. Díaz-Rodríguez, "Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence," Inf. Fusion, vol. 79, pp. 263–278, Mar. 2022.
[12] H. P. Chan, L. M. Hadjiiski, and R. K. Samala, "Computer-aided diagnosis in the era of deep learning," Med. Phys., vol. 47, no. 5, pp. e218–e227, Jun. 2020.
[13] H. Guan and M. X. Liu, "Domain adaptation for medical image analysis: A survey," IEEE Trans. Biomed. Eng., vol. 69, no. 3, pp. 1173–1185, Mar. 2022.
[14] G. Bode, S. Thul, M. Baranski, and D. Müller, "Real-world application of machine-learning-based fault detection trained with experimental data," Energy, vol. 198, p. 117323, May 2020.
[15] Y. G. Lei, B. Yang, X. W. Jiang, F. Jia, N. P. Li, and A. K. Nandi, "Applications of machine learning to machine fault diagnosis: A review and roadmap," Mech. Syst. Signal Process., vol. 138, p. 106587, Apr. 2020.
[16] C. C. Zhang and Q. J. Zhao, "Deep discriminative domain adaptation," Inf. Sci., vol. 575, pp. 599–610, Oct. 2021.
[17] Y. F. Zhang, Y. Wei, Q. Y. Wu, P. L. Zhao, S. C. Niu, J. Z. Huang, and M. K. Tan, "Collaborative unsupervised domain adaptation for medical image diagnosis," IEEE Trans. Image Process., vol. 29, pp. 7834–7844, Jul. 2020.
[18] P. Yang, N. X. Xiong, and J. L. Ren, "Data security and privacy protection for cloud storage: A survey," IEEE Access, vol. 8, pp. 131723–131740, Jul. 2020.
[19] C. Zhang, Y. Xie, H. Bai, B. Yu, W. H. Li, and Y. Gao, "A survey on federated learning," Knowl.-Based Syst., vol. 216, p. 106775, Mar. 2021.
[20] V. Mothukuri, R. M. Parizi, S. Pouriyeh, Y. Huang, A. Dehghantanha, and G. Srivastava, "A survey on security and privacy of federated learning," Future Gener. Comput. Syst., vol. 115, pp. 619–640, Feb. 2021.
[21] V. Kulkarni, M. Kulkarni, and A. Pant, "Survey of personalization techniques for federated learning," in Proc. 4th World Conf. Smart Trends in Systems, Security and Sustainability, London, UK, 2020, pp. 794−797.
[22] S. Rashidi, M. Mehrad, H. Ghorbani, D. A. Wood, N. Mohamadian, J. Moghadasi, and S. Davoodi, "Determination of bubble point pressure & oil formation volume factor of crude oils applying multiple hidden layers extreme learning machine algorithms," J. Pet. Sci. Eng., vol. 202, p. 108425, Jul. 2021.
[23] G. Vilone and L. Longo, "Notions of explainability and evaluation approaches for explainable artificial intelligence," Inf. Fusion, vol. 76, pp. 89–106, Dec. 2021.
[24] W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu, "Definitions, methods, and applications in interpretable machine learning," Proc. Natl. Acad. Sci. USA, vol. 116, no. 44, pp. 22071–22080, Oct. 2019.
[25] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R.
Chatila, and F. Herrera, “Explainable artificial intelligence (XAI): [43] H. Wang, M. H. Xu, B. B. Ni, and W. J. Zhang, “Learning to combine:
Concepts, taxonomies, opportunities and challenges toward Knowledge aggregation for multi-source domain adaptation,” in Proc.
responsible AI,” Inf. Fusion, vol. 58, pp. 82–115, Jun. 2020. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp.
[26] A. Holzinger and H. Müller, “Toward human–AI interfaces to support 727−744.
explainability and causability in medical AI,” Computer, vol. 54, [44] S. C. Zhao, X. Y. Yue, S. H. Zhang, B. Li, H. Zhao, B. C. Wu, R.
no. 10, pp. 78–86, Oct. 2021. Krishna, J. E. Gonzalez, A. L. Sangiovanni-Vincentelli, S. A. Seshia,
[27] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. and K. Keutzer, “A review of single-source deep unsupervised visual
Pedreschi, “A survey of methods for explaining black box models,” domain adaptation,” IEEE Trans. Neural Netw. Learning Syst., vol. 33,
ACM Comput. Surv., vol. 51, no. 5, p. 93, Sept. 2019. no. 2, pp. 473–493, Feb. 2022.
[28] A. Mohanty and S. Mishra, “A comprehensive study of explainable [45] J. N. Kundu, N. Venkat, M. V. Rahul, and R. V. Babu, “Universal
artificial intelligence in healthcare,” in Augmented Intelligence in source-free domain adaptation,” in Proc. IEEE/CVF Conf. Computer
Healthcare: A Pragmatic and Integrated Analysis, S. Mishra, H. K. Vision and Pattern Recognition, Seattle, USA, 2020, pp. 4543−4552.
Tripathy, P. Mallick, and K. Shaalan, Eds. Singapore, Singapore: [46] Y. Liu, W. Zhang, and J. Wang, “Source-free domain adaptation for
Springer, 2022, pp. 475−502. semantic segmentation,” in Proc. IEEE/CVF Conf. Computer Vision
[29] G. Csurka, “A comprehensive survey on domain adaptation for visual and Pattern Recognition, Nashville, USA, 2021, pp. 1215−1224.
applications,” in Domain Adaptation in Computer Vision Applications, [47] A. Jiménez-Sánchez, M. Tardy, M. A. G. Ballester, D. Mateus, and G.
G. Csurka, Ed. Cham, Germany: Springer, 2017, pp. 1−35. Piella, “Memory-aware curriculum federated learning for breast cancer
[30] P. Koniusz, Y. Tas, H. G. Zhang, M. Harandi, F. Porikli, and R. classification,” Comput. Methods Programs Biomed., vol. 229, p.
Zhang, “Museum exhibit identification challenge for the supervised 107318, Feb. 2023.
domain adaptation and beyond,” in Proc. 15th European Conf. [48] M. Jiménez-Guarneros and P. Gómez-Gil, “A study of the effects of
Computer Vision, Munich, Germany, 2018, pp. 815−833. negative transfer on deep unsupervised domain adaptation methods,”
[31] R. G. Praveen, E. Granger, and P. Cardinal, “Deep weakly supervised Expert Syst. Appl., vol. 167, p. 114088, Apr. 2021.
domain adaptation for pain localization in videos,” in Proc. 15th IEEE [49] L. T. Feng, F. Qian, X. He, Y. Q. Fan, H. P. Cai, and G. M. Hu,
Int. Conf. Automatic Face and Gesture Recognition, Buenos Aires, “Transitive transfer sparse coding for distant domain,” in Proc. IEEE
Argentina, 2020, pp. 473−480. Int. Conf. Acoustics, Speech and Signal Processing, Toronto, Canada,
[32] L. L. Guo, S. R. Pfohl, J. Fries, A. E. W. Johnson, J. Posada, C. 2021, pp. 3165−3169.
Aftandilian, N. Shah, and L. Sung, “Evaluation of domain [50] M. Wang and W. H. Deng, “Deep visual domain adaptation: A
generalization and adaptation on improving model robustness to survey,” Neurocomputing, vol. 312, pp. 135–153, Oct. 2018.
temporal dataset shift in clinical medicine,” Sci. Rep., vol. 12, no. 1, p. [51] M. J. Sheller, B. Edwards, G. A. Reina, J. Martin, S. Pati, A.
2726, Feb. 2022. Kotrotsou, M. Milchenko, W. L. Xu, D. Marcus, R. R. Colen, and S.
[33] Y. F. Zhang, S. C. Niu, Z. Qiu, Y. Wei, P. L. Zhao, J. H. Yao, J. Z. Bakas, “Federated learning in medicine: Facilitating multi-institutional
Huang, Q. Y. Wu, and M. K. Tan, “COVID-DA: Deep domain collaborations without sharing patient data,” Sci. Rep., vol. 10, no. 1, p.
adaptation from typical pneumonia to COVID-19,” arXiv preprint 12598, Jul. 2020.
arXiv: 2005.01577, 2020. [52] K. V. Sarma, S. Harmon, T. Sanford, H. R. Roth, Z. Y. Xu, J.
[34] M. Kim and H. Byun, “Learning texture invariant representation for Tetreault, D. G. Xu, M. G. Flores, A. G. Raman, R. Kulkarni, B. J.
domain adaptation of semantic segmentation,” in Proc. IEEE/CVF Wood, P. L. Choyke, A. M. Priester, L. S. Marks, S. S. Raman, D.
Conf. Computer Vision and Pattern Recognition, Seattle, USA, 2020, Enzmann, B. Turkbey, W. Speier, and C. W. Arnold, “Federated
pp. 12972−12981. learning improves site performance in multicenter deep learning
[35] W. J. Yan, Y. Y. Wang, S. J. Gu, L. Huang, F. H. Yan, L. M. Xia, and without data sharing,” J. Am. Med. Inf. Assoc., vol. 28, no. 6,
Q. Tao, “The domain shift problem of medical image segmentation pp. 1259–1264, Jun. 2021.
and vendor-adaptation by Unet-GAN,” in Proc. 22nd Int. Conf. [53] R. Kumar, A. A. Khan, J. Kumar, Zakria, N. A. Golilarz, S. M. Zhang,
Medical Image Computing and Computer-Assisted Intervention, Y. Ting, C. Y. Zheng, and W. Y. Wang, “Blockchain-federated-
Shenzhen, China, 2019, pp. 623−631. learning and deep learning models for COVID-19 detection using CT
[36] R. Argelaguet, D. Arnol, D. Bredikhin, Y. Deloro, B. Velten, J. C. imaging,” IEEE Sens. J., vol. 21, no. 14, pp. 16301–16314, Jul. 2021.
Marioni, and O. Stegle, “MOFA+: A statistical framework for [54] M. G. Crowson, D. Moukheiber, A. R. Arévalo, B. D. Lam, S.
comprehensive integration of multi-modal single-cell data,” Genome Mantena, A. Rana, D. Goss, D. W. Bates, and L. A. Celi, “A
Biol., vol. 21, no. 1, p. 111, May 2020. systematic review of federated learning applications for biomedical
[37] J. L. Yang, N. C. Dvornek, F. Zhang, J. Chapiro, M. D. Lin, and J. S. data,” PLOS Digit. Health, vol. 1, no. 5, p. e0000033, May 2022.
Duncan, “Unsupervised domain adaptation via disentangled [55] D. McGraw and K. D. Mandl, “Privacy protections to encourage use of
representations: Application to cross-modality liver segmentation,” in health-relevant digital data in a learning health system,” NPJ Digit.
Proc. 22nd Int. Conf. Medical Image Computing and Computer- Med., vol. 4, no. 1, p. 2, Jan. 2021.
Assisted Intervention, Shenzhen, China, 2019, pp. 255−263. [56] C. H. Wu, F. Z. Wu, L. J. Lyu, Y. F. Huang, and X. Xie,
[38] C. Chen, Q. Dou, H. Chen, J. Qin, and P. A. Heng, “Unsupervised “Communication-efficient federated learning via knowledge
bidirectional cross-modality adaptation via deeply synergistic image distillation,” Nat. Commun., vol. 13, no. 1, p. 2032, Apr. 2022.
and feature alignment for medical image segmentation,” IEEE Trans. [57] L. Q. Qu, Y. Y. Zhou, P. P. Liang, Y. D. Xia, F. F. Wang, E. Adeli, L.
Med. Imaging, vol. 39, no. 7, pp. 2494–2505, Jul. 2020. Fei-Fei, and D. Rubin, “Rethinking architecture design for tackling
[39] A. J. Larrazabal, N. Nieto, V. Peterson, D. H. Milone, and E. Ferrante, data heterogeneity in federated learning,” in Proc. IEEE/CVF Conf.
“Gender imbalance in medical imaging datasets produces biased Computer Vision and Pattern Recognition, New Orleans, USA, 2022,
classifiers for computer-aided diagnosis,” Proc. Natl. Acad. Sci. USA, pp. 10051−10061.
vol. 117, no. 23, pp. 12592–12594, May 2020. [58] G. J. Annas, “HIPAA regulations–A new era of medical-record
[40] M. N. Du, F. Yang, N. Zou, and X. Hu, “Fairness in deep learning: A privacy?” N. Engl. J. Med., vol. 348, pp. 1486–1490, Apr. 2003.
computational perspective,” IEEE Intell. Syst., vol. 36, no. 4, pp. 25–34, [59] P. Voigt and A. Von dem Bussche, The EU General Data Protection
Jul.–Aug. 2021. Regulation (GDPR). A Practical Guide. Cham, Germany: Springer,
[41] Y. Kim, D. Cho, K. Han, P. Panda, and S. Hong, “Domain adaptation 2017.
without source data,” IEEE Trans. Artif. Intell., vol. 2, no. 6, pp. 508– [60] S. Rajendran, J. S. Obeid, H. Binol, R. Jr. D’Agostino, K. Foley, W.
518, Dec. 2021. Zhang, P. Austin, J. Brakefield, M. N. Gurcan, and U. Topaloglu,
[42] S. C. Zhao, B. Li, C. Reed, P. F. Xu, and K. Keutzer, “Multi-source “Cloud-based federated learning implementation across medical
domain adaptation in the deep learning era: A systematic survey,” centers,” JCO Clin. Cancer Inf., vol. 5, pp. 1–11, Jan. 2021.
arXiv preprint arXiv: 2002.12169, 2020. [61] J. W. Kang, Z. H. Xiong, D. Niyato, Y. Z. Zou, Y. Zhang, and M.
Guizani, “Reliable federated learning for mobile networks,” IEEE vol. 25, no. 7, pp. 2454–2462, Jul. 2021.
Wirel. Commun., vol. 27, no. 2, pp. 72–80, Apr. 2020. [79] A. Chaddad, M. L. Zhang, L. Hassan, and T. Niazi, “Modeling of
[62] K. A. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. textures to predict immune cell status and survival of brain tumour
Ivanov, C. Kiddon, J. Konecný, S. Mazzocchi, B. McMahan, T. Van patients,” in Proc. IEEE 18th Int. Symp. Biomedical Imaging, Nice,
Overveldt, D. Petrou, D. Ramage, J. Roselander, “Towards federated France, 2021, pp. 1067−1071.
learning at scale: System design,” in Proc. Machine Learning and [80] A. Chaddad, L. Hassan, and C. Desrosiers, “Deep radiomic analysis
Systems, Stanford, USA, 2019, pp. 374−388. for predicting coronavirus disease 2019 in computerized tomography
[63] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: and X-ray images,” IEEE Trans. Neural Netw. Learn. Syst., vol. 33,
Challenges, methods, and future directions,” IEEE Signal Process. no. 1, pp. 3–11, Jan. 2022.
Mag., vol. 37, no. 3, pp. 50–60, May 2020. [81] A. Chaddad, P. Daniel, M. L. Zhang, S. Rathore, P. Sargos, C.
[64] J. Xu, B. S. Glicksberg, C. Su, P. Walker, J. Bian, and F. Wang, Desrosiers, and T. Niazi, “Deep radiomic signature with immune cell
“Federated learning for healthcare informatics,” J. Healthc. Inform. markers predicts the survival of glioma patients,” Neurocomputing,
Res., vol. 5, no. 1, pp. 1–19, Nov. 2021. vol. 469, pp. 366–375, Jan. 2022.
[65] Y. Zhao, M. Li, L. Z. Lai, N. Suda, D. Civin, and V. Chandra, [82] S. K. Yeom, P. Seegerer, S. Lapuschkin, A. Binder, S. Wiedemann, K.
“Federated learning with non-IID data,” arXiv preprint arXiv: R. Müller, and W. Samek, “Pruning by explaining: A novel criterion
1806.00582, 2018. for deep neural network pruning,” Pattern Recognit., vol. 115, p.
[66] A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized federated 107899, Jul. 2021.
learning: A meta-learning approach,” arXiv preprint arXiv: [83] J. B. Li, C. Q. Zhang, J. T. Zhou, H. Z. Fu, S. Y. Xia, and Q. H. Hu,
2002.07948, 2020. “Deep-LIFT: Deep label-specific feature learning for image
[67] G. H. Lee and S. Y. Shin, “Federated learning on clinical benchmark annotation,” IEEE Trans. Cybern., vol. 52, no. 8, pp. 7732–7741, Aug.
data: Performance assessment,” J. Med. Internet Res., vol. 22, no. 10, p. 2022.
e20891, Oct. 2020. [84] A. Shrikumar, P. Greenside, and A. Kundaje, “Learning important
[68] H. R. Roth, K. Chang, P. Singh, N. Neumark, W. Q. Li, V. Gupta, S. features through propagating activation differences,” in Proc. 34th Int.
Conf. Machine Learning, Sydney, Australia, 2017, pp. 3145−3153.
Gupta, L. Q. Qu, A. Ihsani, B. C. Bizzo, Y. H. Wen, V. Buch, M.
Shah, F. Kitamura, M. Mendonça, V. Lavor, A. Harouni, C. Compas, [85] M. Lin, Q. Chen, and S. C. Yan, “Network in network,” arXiv preprint
J. Tetreault, P. Dogra, Y. Cheng, S. Erdal, R. White, B. Hashemian, T. arXiv: 1312.4400, 2013.
Schultz, M. Zhang, A. McCarthy, B. M. Yun, E. Sharaf, K. V. Hoebel, [86] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D.
J. B. Patel, B. Chen, S. Ko, E. Leibovitz, E. D. Pisano, L. Coombs, D. Batra, “Grad-CAM: Visual explanations from deep networks via
G. Xu, K. J. Dreyer, I. Dayan, R. C. Naidu, M. Flores, D. Rubin, and J. gradient-based localization,” in Proc. IEEE Int. Conf. Computer
Kalpathy-Cramer, “Federated learning for breast density classification: Vision, Venice, Italy, 2017, pp. 618−626.
A real-world implementation,” in Proc. 2nd MICCAI Workshop on
[87] H. F. Wang, Z. F. Wang, M. N. Du, F. Yang, Z. J. Zhang, S. R. Ding,
Domain Adaptation and Representation Transfer, Lima, Peru, 2020,
P. Mardziel, and X. Hu, “Score-CAM: Score-weighted visual
pp. 181−191.
explanations for convolutional neural networks,” in Proc. IEEE/CVF
[69] G. Joshi, A. Srivastava, B. Yagnik, M. Hasan, Z. Saiyed, L. A. Conf. Computer Vision and Pattern Recognition Workshops, Seattle,
Gabralla, A. Abraham, R. Walambe, and K. Kotecha, “Explainable USA, 2020, pp. 111−119.
misinformation detection across multiple social media platforms,”
[88] R. Dazeley, P. Vamplew, C. Foale, C. Young, S. Aryal, and F. Cruz,
arXiv preprint arXiv: 2203.11724, 2022.
“Levels of explainable artificial intelligence for human-aligned
[70] A. Raza, K. P. Tran, L. Koehl, and S. J. Li, “Designing ECG conversational explanations,” Artif. Intell., vol. 299, p. 103525, Oct.
monitoring healthcare system with federated transfer learning and 2021.
explainable AI,” Knowl.-Based Syst., vol. 236, p. 107763, Jan. 2022.
[89] J. Jiménez-Luna, M. Skalic, N. Weskamp, and G. Schneider,
[71] X. C. Peng, Z. J. Huang, Y. Z. Zhu, and K. Saenko, “Federated “Coloring molecules with explainable artificial intelligence for
adversarial domain adaptation,” in Proc. 8th Int. Conf. Learning preclinical relevance assessment,” J. Chem. Inf. Model., vol. 61, no. 3,
Representations, Addis Ababa, Ethiopia, 2020. pp. 1083–1094, Feb. 2021.
[72] X. X. Li, Y. F. Gu, N. Dvornek, L. H. Staib, P. Ventola, and J. S. [90] S. El-Sappagh, J. M. Alonso, S. M. R. Islam, A. M. Sultan, and K. S.
Duncan, “Multi-site fMRI analysis using privacy-preserving federated Kwak, “A multilayer multimodal detection and prediction model based
learning and domain adaptation: ABIDE results,” Med. Image Anal., on explainable artificial intelligence for Alzheimer’s disease,” Sci.
vol. 65, p. 101765, Oct. 2020. Rep., vol. 11, no. 1, p. 2660, Jan. 2021.
[73] A. Holzinger, A. Saranti, C. Molnar, P. Biecek, and W. Samek, [91] J. F. Peng, K. Q. Zou, M. Zhou, Y. Teng, X. Y. Zhu, F. F. Zhang, and
“Explainable AI methods-a brief overview,” in Proc. Int. Workshop on J. Xu, “An explainable artificial intelligence framework for the
Extending Explainable AI Beyond Deep Models and Classifiers, deterioration risk prediction of hepatitis patients,” J. Med. Syst.,
Vienna, Austria, 2022, pp. 13−38. vol. 45, no. 5, p. 61, Apr. 2021.
[74] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, [92] K. Davagdorj, J. W. Bae, V. H. Pham, N. Theera-Umpon, and K. H.
“Striving for simplicity: The all convolutional net,” in Proc. 3rd Int. Ryu, “Explainable artificial intelligence based framework for non-
Conf. Learning Representations, San Diego, USA, 2015. communicable diseases prediction,” IEEE Access, vol. 9,
[75] S. Heidarian, P. Afshar, N. Enshaei, F. Naderkhani, M. J. Rafiee, F. B. pp. 123672–123688, Sept. 2021.
Fard, K. Samimi, S. F. Atashzar, A. Oikonomou, K. N. Plataniotis, and [93] C. Dindorf, J. Konradi, C. Wolf, B. Taetz, G. Bleser, J. Huthwelker, F.
A. Mohammadi, “COVID-FACT: A fully-automated capsule network- Werthmann, E. Bartaguiz, J. Kniepert, P. Drees, U. Betz, and M.
based framework for identification of COVID-19 cases from chest CT Fröhlich, “Classification and automated interpretation of spinal posture
scans,” Front. Artif. Intell., vol. 4, p. 598932, Feb. 2021. data using a pathology-independent classifier and explainable artificial
[76] A. Chaddad, M. L. Zhang, C. Desrosiers, and T. Niazi, “Deep radiomic intelligence (XAI),” Sensors, vol. 21, no. 18, p. 6323, Sept. 2021.
features from MRI scans predict survival outcome of recurrent [94] Z. Papanastasopoulos, R. K. Samala, H. P. Chan, L. Hadjiiski, C.
glioblastoma,” in Proc. 1st Int. Workshop on Radiomics and Paramagul, M. A. Helvie, and C. H. Neal, “Explainable AI for medical
Radiogenomics in Neuro-Oncology, Shenzhen, China, 2019, pp. imaging: Deep-learning CNN ensemble for classification of estrogen
36−43. receptor status from breast MRI,” in Proc. SPIE 11314, Medical
[77] A. Chaddad, C. Desrosiers, B. Abdulkarim, and T. Niazi, “Predicting Imaging 2020: Computer-Aided Diagnosis, Houston, Texas, United
the gene status and survival outcome of lower grade glioma patients States, pp. 113140Z.
with multimodal MRI features,” IEEE Access, vol. 7, pp. 75976–75984, [95] R. K. Singh, R. Pandey, and R. N. Babu, “COVIDScreen: Explainable
Jun. 2019. deep learning framework for differential diagnosis of COVID-19 using
[78] A. Chaddad, P. Sargos, and C. Desrosiers, “Modeling texture in deep chest X-rays,” Neural Comput. Appl., vol. 33, no. 14, pp. 8871–8892,
3D CNN for survival analysis,” IEEE J. Biomed. Health Inform., Jan. 2021.
[96] H. C. Thorsen-Meyer, A. B. Nielsen, A. P. Nielsen, B. S. Kaas- dataset,” Neural Comput. Appl., vol. 34, no. 1, pp. 721–744, Jan. 2022.
Hansen, P. Toft, J. Schierbeck, T. Strøm, P. J. Chmura, M. Heimann, [112] S. Martínez-Agüero, C. Soguero-Ruiz, J. M. Alonso-Moral, I. Mora-
L. Dybdahl, L. Spangsege, P. Hulsen, K. Belling, S. Brunak, and A. Jiménez, J. Álvarez-Rodríguez, and A. G. Marques, “Interpretable
Perner, “Dynamic and explainable machine learning prediction of clinical time-series modeling with intelligent feature selection for early
mortality in patients in the intensive care unit: A retrospective study of prediction of antimicrobial multidrug resistance,” Future Gener.
high-frequency data in electronic patient records,” Lancet Digit. Comput. Syst., vol. 133, pp. 68–83, Aug. 2022.
Health, vol. 2, no. 4, pp. e179–e191, Apr. 2020.
[113] K. Wickstrøm, K. Ø. Mikalsen, M. Kampffmeyer, A. Revhaug, and R.
[97] R. Gu, G. T. Wang, T. Song, R. Huang, M. Aertsen, J. Deprest, S. Jenssen, “Uncertainty-aware deep ensembles for reliable and
Ourselin, T. Vercauteren, and S. T. Zhang, “CA-Net: Comprehensive explainable predictions of clinical time series,” IEEE J. Biomed.
attention convolutional neural networks for explainable medical image Health Inform., vol. 25, no. 7, pp. 2435–2444, Jul. 2021.
segmentation,” IEEE Trans. Med. Imaging, vol. 40, no. 2, pp. 699–711,
[114] C. Y. Wang, T. S. Ko, and C. C. Hsu, “Machine learning with
Feb. 2021.
explainable artificial intelligence vision for characterization of solution
[98] P. R. Magesh, R. D. Myloth, and R. J. Tom, “An explainable machine conductivity using optical emission spectroscopy of plasma in aqueous
learning model for early detection of Parkinson’s disease using LIME solution,” Plasma Process. Polym., vol. 18, no. 12, p. 2100096, Dec.
on DaTSCAN imagery,” Comput. Biol. Med., vol. 126, p. 104041, 2021.
Nov. 2020.
[115] Y. G. Akhlaghi, K. Aslansefat, X. D. Zhao, S. Sadati, A. Badiei, X.
[99] R. Karim, T. Döhmen, M. Cochez, O. Beyan, D. Rebholz-Schuhmann, Xiao, S. Shittu, Y. Fan, and X. L. Ma, “Hourly performance forecast of
and S. Decker, “DeepCOVIDExplainer: Explainable COVID-19 a dew point cooler using explainable artificial intelligence and
diagnosis from chest X-ray images,” in Proc. IEEE Int. Conf. evolutionary optimisations by 2050,” Appl. Energy, vol. 281, p.
Bioinformatics and Biomedicine, Seoul, Korea (South), 2020, pp. 116062, Jan. 2021.
1034−1037. [116] J. M. Valverde, V. Imani, A. Abdollahzadeh, R. De Feo, M. Prakash,
[100] L. A. Jr. de Souza, R. Mendel, S. Strasser, A. Ebigbo, A. Probst, H. R. Ciszek, and J. Tohka, “Transfer learning in magnetic resonance
Messmann, J. P. Papa, and C. Palm, “Convolutional neural networks brain imaging: A systematic review,” J. Imaging, vol. 7, no. 4, p. 66,
for the evaluation of cancer in Barrett’s esophagus: Explainable AI to Apr. 2021.
lighten up the black-box,” Comput. Biol. Med., vol. 135, p. 104578, [117] A. Farahani, S. Voghoei, K. Rasheed, and H. R. Arabnia, “A brief
Aug. 2021. review of domain adaptation,” in Advances in Data Science and
[101] T. Dissanayake, T. Fernando, S. Denman, S. Sridharan, H. Information Engineering, R. Stahlbock, G. M. Weiss, M. Abou-Nasr,
Ghaemmaghami, and C. Fookes, “A robust interpretable deep learning C. Y. Yang, H. R. Arabnia, and L. Deligiannidis, Eds. Cham,
classifier for heart anomaly detection without segmentation,” IEEE J. Germany: Springer, 2021, pp. 877−894.
Biomed. Health Inform., vol. 25, no. 6, pp. 2162–2171, Jun. 2021. [118] D. N. Liu, D. H. Zhang, Y. Song, F. Zhang, L. O’Donnell, H. Huang,
[102] M. Razzaq, L. Goumidi, M. J. Iglesias, G. Munsch, M. Bruzelius, M. M. Chen, and W. D. Cai, “Unsupervised instance segmentation in
Ibrahim-Kosta, L. Butler, J. Odeberg, P. E. Morange, and D. A. microscopy images via panoptic domain adaptation and task re-
Tregouet, “Explainable artificial neural network for recurrent venous weighting,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern
thromboembolism based on plasma proteomics,” in Proc. 19th Int. Recognition, Seattle, USA, 2020, pp. 4242−4251.
Conf. Computational Methods in Systems Biology, Bordeaux, France, [119] A. Choudhary, L. Tong, Y. D. Zhu, and M. D. Wang, “Advancing
2021, pp. 108−121. medical imaging informatics by deep learning-based domain adap-
[103] M. La Ferla, M. Montebello, and D. Seychell, “An XAI approach to tation,” Yearb. Med. Inform., vol. 29, no. 1, pp. 129–138, Aug. 2020.
deep learning models in the detection of ductal carcinoma in situ,” [120] S. T. Niu, Y. H. Hu, J. Wang, Y. X. Liu, and H. B. Song, “Feature-
arXiv preprint arXiv: 2106.14186, 2021. based distant domain transfer learning,” in Proc. IEEE Int. Conf. Big
[104] C. Gillmann, L. Peter, C. Schmidt, D. Saur, and G. Scheuermann, Data, Atlanta, USA, 2020, pp. 5164−5171.
“Visualizing multimodal deep learning for lesion prediction,” IEEE [121] Y. Yang, J. Guo, Q. W. Ye, Y. L. Xia, P. Yang, A. Ullah, and K.
Comput. Graph. Appl., vol. 41, no. 5, pp. 90–98, Sep.–Oct. 2021. Muhammad, “A weighted multi-feature transfer learning framework
[105] C. Ieracitano, N. Mammone, M. Versaci, G. Varone, A. R. Ali, A. for intelligent medical decision making,” Appl. Soft Comput., vol. 105,
Armentano, G. Calabrese, A. Ferrarelli, L. Turano, C. Tebala, Z. p. 107242, Jul. 2021.
Hussain, Z. Sheikh, A. Sheikh, G. Sceni, A. Hussain, and F. C. [122] A. Ackaouy, N. Courty, E. Vallée, O. Commowick, C. Barillot, and F.
Morabito, “A fuzzy-enhanced deep learning approach for early Galassi, “Unsupervised domain adaptation with optimal transport in
detection of COVID-19 pneumonia from portable chest X-ray multi-site segmentation of multiple sclerosis lesions from MRI data,”
images,” Neurocomputing, vol. 481, pp. 202–215, Apr. 2022. Front. Comput. Neurosci., vol. l, no. 14, p. 19, Mar. 2020.
[106] J. M. Mayor-Torres, S. Medina-DeVilliers, T. Clarkson, M. D. Lerner, [123] X. J. Ma, Y. H. Niu, L. Gu, Y. S. Wang, Y. T. Zhao, J. Bailey, and F.
and G. Riccardi, “Evaluation of interpretability for deep learning Lu, “Understanding adversarial attacks on deep learning based medical
algorithms in EEG emotion recognition: A case study in autism,” image analysis systems,” Pattern Recognit., vol. 110, p. 107332, Feb.
arXiv preprint arXiv: 2111.13208, 2021. 2021.
[107] K. H. Kim, H. W. Koo, B. J. Lee, S. W. Yoon, and M. J. Sohn, [124] Y. Z. Sun, G. Yang, D. Y. Ding, G. W. Cheng, J. P. Xu, and X. Ro. Li,
“Cerebral hemorrhage detection and localization with medical imaging “A GAN-based domain adaptation method for glaucoma diagnosis,” in
for cerebrovascular disease diagnosis and treatment using explainable Proc. Int. Joint Conf. Neural Networks, Glasgow, UK, 2020, pp. 1−8.
deep learning,” J. Korean Phys. Soc., vol. 79, no. 3, pp. 321–327, May [125] C. M. He, S. M. Wang, H. Y. Kang, L. Q. Zheng, T. F. Tan, and X. J.
2021. Fan, “Adversarial domain adaptation network for tumor image
[108] R. Arefin, M. D. Samad, F. A. Akyelken, and A. Davanian, “Non- diagnosis,” Int. J. Approx. Reason., vol. 135, pp. 38–52, Aug. 2021.
transfer deep learning of optical coherence tomography for post-hoc [126] G. J. Wang, M. Chen, Z. J. Ding, J. W. Li, H. Z. Yang, and P. Zhang,
explanation of macular disease classification,” in Proc. IEEE 9th Int. “Inter-patient ECG arrhythmia heartbeat classification based on
Conf. Healthcare Informatics, Victoria, Canada, 2021, pp. 48−52. unsupervised domain adaptation,” Neurocomputing, vol. 454,
[109] R. T. Sutton, O. R. Zaïane, R. Goebel, and D. C. Baumgart, “Artificial pp. 339–349, Sept. 2021.
intelligence enabled automated diagnosis and grading of ulcerative [127] E. Lee, A. Ho, Y. T. Wang, C. H. Huang, and C. Y. Lee, “Cross-
colitis endoscopy images,” Sci. Rep., vol. 12, no. 1, p. 2748, Feb. 2022. domain adaptation for biometric identification using
[110] D. H. Chen, H. J. Zhao, J. R. He, Q. Pan, and W. L. Zhao, “An causal photoplethysmogram,” in Proc. IEEE Int. Conf. Acoustics, Speech and
XAI diagnostic model for breast cancer based on mammography Signal Processing, Barcelona, Spain, 2020, pp. 1289−1293.
reports,” in Proc. IEEE Int. Conf. Bioinformatics and Biomedicine, [128] X. S. Bian, X. B. Luo, C. Wang, W. Q. Liu, and X. H. Lin, “DDA-Net:
Houston, USA, 2021, pp. 3341−3349. Unsupervised cross-modality medical image segmentation via dual
[111] Z. Uddin, K. K. Dysthe, A. Følstad, and P. B. Brandtzaeg, “Deep domain adaptation,” Comput. Methods Programs Biomed., vol. 213, p.
learning for prediction of depressive symptoms in a large textual 106531, Jan. 2022.
[129] Q. Dou, C. Ouyang, C. Chen, H. Chen, B. Glocker, X. H. Zhuang, and 655−660.


P. A. Heng, “PnP-AdaNet: Plug-and-play adversarial domain [143] A. Vaid, S. K. Jaladanki, J. Xu, S. Teng, A. Kumar, S. Lee, S. Somani,
adaptation network at unpaired cross-modality cardiac segmentation,” I. Paranjpe, J. K. De Freitas, T. Y. Wanyan, Wanyan, K. W. Johnson,
IEEE Access, vol. 7, pp. 99065–99076, Jul. 2019. M. Bicak, E. Klang, Y. J. Kwon, A. Costa, S. Zhao, R. Miotto, A. W.
[130] H. Guan, L. Wang, and M. X. Liu, “Multi-source domain adaptation Charney, E. Böttinger, Z. A. Fayad, G. N. Nadkarni, F. Wang, and B.
via optimal transport for brain dementia identification,” in Proc. IEEE S. Glicksberg, “Federated learning of electronic health records to
18th Int. Symp. Biomedical Imaging, Nice, France, 2021, pp. improve mortality prediction in hospitalized patients with COVID-19:
1514−1517. Machine learning approach,” JMIR Med. Inform., vol. 9, no. 1, p.
[131] H. Guan, Y. B. Liu, E. K. Yang, P. T. Yap, D. G. Shen, and M. X. Liu, e24207, Jan. 2021.
“Multi-site MRI harmonization via attention-guided deep domain [144] P. F. Guo, P. Y. Wang, J. Y. Zhou, S. S. Jiang, and V. M. Patel,
adaptation for brain disorder identification,” Med. Image Anal., vol. 71, “Multi-institutional collaborations for improving deep learning-based
p. 102076, Jul. 2021. magnetic resonance image reconstruction using federated learning,” in
[132] J. Wang, L. C. Zhang, Q. Wang, L. Chen, J. Shi, X. B. Chen, Z. Y. Li, Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition,
and D. G. Shen, “Multi-class ASD classification based on functional Nashville, USA, 2021, pp. 2423−2432.
connectivity and functional correlation tensor via multi-source domain [145] M. Y. Lu, R. J. Chen, D. H. Kong, J. Lipkova, R. Singh, D. F. K.
adaptation and multi-view sparse representation,” IEEE Trans. Med. Williamson, T. Y. Chen, and F. Mahmood, “Federated learning for
Imaging, vol. 39, no. 10, pp. 3137–3147, Oct. 2020. computational pathology on gigapixel whole slide images,” Med.
[133] H. F. Cui, C. Yuwen, J. Lei, Y. Xia, and Y. N. Zhang, “Bidirectional Image Anal., vol. 76, p. 102298, Feb. 2022.
cross-modality unsupervised domain adaptation using generative [146] M. Flores, I. Dayan, H. Roth, A. X. Zhong, A. Harouni, A. Gentili, A.
adversarial networks for cardiac image segmentation,” Comput. Biol. Abidin, A. Liu, A. Costa, B. Wood, C. S. Tsai, C. H. Wang, C. N. Hsu,
Med., vol. 136, p. 104726, Sept. 2021. C. K. Lee, C. Ruan, D. G. Xu, D. F. Wu, E. Huang, F. Kitamura, G.
[134] R. Xu, Z. Cong, X. C. Ye, S. Kido, and N. Tomiyama, “Unsupervised Lacey, G. C. de AntÔnio Corradi, H. H. Shin, H. Obinata, H. Ren, J.
content-preserved adaptation network for classification of pulmonary Crane, J. Tetreault, J. H. Guan, J. Garrett, J. G. Park, K. Dreyer, K.
textures from different CT scanners,” in Proc. IEEE Int. Conf. Juluru, K. Kersten, M. A. B. C. Rockenbach, M. Linguraru, M. Haider,
Acoustics, Speech and Signal Processing, Barcelona, Spain, 2020, pp. M. AbdelMaseeh, N. Rieke, P. Damasceno, P. M. C. e Silva, P. Wang,
1060−1064. S. Xu, S C. Kawano, S. Sriswasdi, S. Y. Park, T. Grist, V. Buch, W.
Jantarabenjakul, W. Wang, W. Y. Tak, X. Li, X. H. Lin, F. Kwon, F.
[135] P. Wang, P. F. Li, Y. M. Li, J. Xu, and M. F. Jiang, “Classification of Gilbert, J. Kaggie, Q. Z. Li, A. Quraini, A. Feng, A. Priest, B.
histopathological whole slide images based on multiple weighted semi- Turkbey, B. Glicksberg, B. Bizzo, B. S. Kim, C. Tor-Diez, C. C. Lee,
supervised domain adaptation,” Biomed. Signal Process. Control, C. J. Hsu, C. Lin, C. L. Lai, C. Hess, C. Compas, D. Bhatia, E.
vol. 73, p. 103400, Mar. 2022. Oermann, E. Leibovitz, H. Sasaki, H. Mori, I. Yang, J. H. Sohn, K. N.
[136] Y. Wang, Y. Q. Feng, L. Zhang, Z. Z. Wang, Q. Lv, and Z. Yi, “Deep K. Murthy, L C. Fu, M. R. F. de Mendonça, M. Fralick, M. K. Kang,
adversarial domain adaptation for breast cancer screening from M. Adil, N. Gangai, P. Vateekul, P. Elnajjar, S. Hickman, S.
mammograms,” Med. Image Anal., vol. 73, p. 102147, Oct. 2021. Majumdar, S. McLeod, S. Reed, S. Graf, S. Harmon, T. Kodama, T.
[137] J. Li, C. L. Feng, X. Z. Lin, and X. H. Qian, “Utilizing GCN and meta- Puthanakit, T. Mazzulli, V. de Lima Lavor, Y. Rakvongthai, Y. R.
learning strategy in unsupervised domain adaptation for pancreatic Lee, Y. H. Wen, “Federated learning used for predicting outcomes in
cancer segmentation,” IEEE J. Biomed. Health Inform., vol. 26, no. 1, SARS-COV-2 patients,” Res. Sq., DOI: 10.21203/rs.3.rs-126892/v1
pp. 79–89, Jan. 2022. [147] B. L. Y. Agbley, J. P. Li, A. U. Haq, E. K. Bankas, S. Ahmad, I. O.
[138] M. Saiz-Vivó, A. Colomer, C. Fonfría, L. Martí-Bonmatí, and V. Agyemang, D. Kulevome, W. D. Ndiaye, B. Cobbinah, and S.
Naranjo, “Supervised domain adaptation for automated semantic Latipova, “Multimodal melanoma detection with federated learning,”
segmentation of the atrial cavity,” Entropy, vol. 23, no. 7, p. 898, Jul. in Proc. 18th Int. Computer Conf. Wavelet Active Media Technology
2021. and Information Processing, Chengdu, China, 2021, pp. 238−244.
[139] I. Dayan, H. R. Roth, A. X. Zhong, A. Harouni, A. Gentili, A. Z. [148] M. F. Zhang, Y. N. Wang, and T. Luo, “Federated learning for
Abidin, A. Liu, A. B. Costa, B. J. Wood, C. S. Tsai, C. H. Wang, C. N. arrhythmia detection of Non-IID ECG,” in Proc. IEEE 6th Int. Conf.
Hsu, C. K. Lee, P. Y. Ruan, D. G. Xu, D. F. Wu, E. Huang, F. C. Computer and Communications, Chengdu, China, 2020, pp.
Kitamura, G. Lacey, G. C. de Antônio Corradi, G. Nino, H. H. Shin, 1176−1180.
H. Obinata, H. Ren, J. C. Crane, J. Tetreault, J. H. Guan, J. W. Garrett, [149] Y. C. Gu, Q. Q. Hu, X. L. Wang, Z. Zhou, and S. X. Lu, “FedACS: An
J. D. Kaggie, J. G. Park, K. Dreyer, K. Juluru, K. Kersten, M. A. B. C. efficient federated learning method among multiple medical
Rockenbach, M. G. Linguraru, M. A. Haider, M. AbdelMaseeh, N. institutions with adaptive client sampling,” in Proc. 14th Int. Congr.
Rieke, P. F. Damasceno, P. M. C. e Silva, P. C. Wang, S. Xu, S. Image and Signal Processing, BioMedical Engineering and
Kawano, S. Sriswasdi, S. Y. Park, T. M. Grist, V. Buch, W. Informatics, Shanghai, China, 2021, pp. 1−6.
Jantarabenjakul, W. Wang, W. Y. Tak, X. Li, X. H. Lin, Y. J. Kwon, [150] Y. Kang, Y. Liu, and T. J. Chen, “FedMVT: Semi-supervised vertical
A. Quraini, A. Feng, A. N. Priest, B. Turkbey, B. Glicksberg, B. federated learning with MultiView training,” arXiv preprint arXiv:
Bizzo, B. S. Kim, C. Tor-Díez, C. C. Lee, C. J. Hsu, C. N. Lin, C. L. 2008.10838, 2020.
Lai, C. P. Hess, C. Compas, D. Bhatia, E. K. Oermann, E. Leibovitz,
H. Sasaki, H. Mori, I. Yang, J. H. Sohn, K. N. K. Murthy, L. C. Fu, M. [151] W. H. Sun, Y. Q. Chen, X. D. Yang, J. B. Cao, and Y. X. Song,
R. F. de Mendonça, M. Fralick, M. K. Kang, M. Adil, N. Gangai, P. “FedIO: Bridge inner-and outer-hospital information for perioperative
Vateekul, P. Elnajjar, S. Hickman, S. Majumdar, S. L. McLeod, S. complications prognostic prediction via federated learning,” in Proc.
Reed, S. Gräf, S. Harmon, T. Kodama, T. Puthanakit, T. Mazzulli, V. IEEE Int. Conf. Bioinformatics and Biomedicine, Houston, USA,
L. de Lavor, Y. Rakvongthai, Y. R. Lee, Y. H. Wen, F. J. Gilbert, M. 2021, pp. 3215−3221.
G. Flores, and Q. Z. Li, “Federated learning for predicting clinical [152] A. Linardos, K. Kushibar, S. Walsh, P. Gkontra, and K. Lekadir,
outcomes in patients with COVID-19,” Nat. Med., vol. 27, no. 10, “Federated learning for multi-center imaging diagnostics: A simulation
pp. 1735–1743, Oct. 2021. study in cardiovascular disease,” Sci. Rep., vol. 12, no. 1, p. 3551, Mar.
[140] S. Boughorbel, F. Jarray, N. Venugopal, S. Moosa, H. Elhadi, and M. 2022.
Makhlouf, “Federated uncertainty-aware learning for distributed [153] N. Q. Dong and I. Voiculescu, “Federated contrastive learning for
hospital EHR data,” arXiv preprint arXiv: 1910.12191, 2019. decentralized unlabeled medical images,” in Proc. 24th Int. Conf.
[141] C. X. Tian, H. L. Li, Y. F. Wang, and S. Q. Wang, “Privacy-preserving Medical Image Computing and Computer-Assisted Intervention,
constrained domain generalization for medical image classification,” Strasbourg, France, 2021, pp. 378−387.
arXiv preprint arXiv: 2105.08511, 2021. [154] Q. Yang, Y. Liu, T. J. Chen, and Y. X. Tong, “Federated machine
[142] M. Nasajpour, M. Karakaya, S. Pouriyeh, and R. M. Parizi, “Federated learning: Concept and applications,” ACM Trans. Intell. Syst. Technol.,
transfer learning for diabetic retinopathy detection using CNN vol. 10, no. 2, p. 12, Mar. 2019.
architectures,” in Proc. SoutheastCon, Mobile, USA, 2022, pp. [155] J. Lo, T. T. Yu, D. Ma, P. X. Zang, J. P. Owen, Q. Q. Zhang, R. K.
Wang, M. F. Beg, A. Y. Lee, Y. L. Jia, and M. V. Sarunic, “Federated [173] M. Meng, J. H. Hu, Y. Y. Gao, W. Z. Kong, and Z. Z. Luo, “A deep
learning for microvasculature segmentation and diabetic retinopathy subdomain associate adaptation network for cross-session and cross-
classification of OCT data,” Ophthalmol. Sci., vol. 1, no. 4, p. 100069, subject EEG emotion recognition,” Biomed. Signal Process. Control,
Dec. 2021. vol. 78, p. 103873, Sept. 2022.
[156] B. Gu, A. Xu, Z. Y. Huo, C. Deng, and H. Huang, “Privacy-preserving [174] Z. K. Tian, D. H. Li, Y. Yang, F. Z. Hou, Z. Y. Yang, Y. Song, and Q.
asynchronous vertical federated learning algorithms for multiparty Gao, “A novel domain adversarial networks based on 3D-LSTM and
collaborative learning,” IEEE Trans. Neural Netw. Learn. Syst., local domain discriminator for hearing-impaired emotion recognition,”
vol. 33, no. 11, pp. 6103–6115, Nov. 2022. IEEE J. Biomed. Health Inform., vol. 27, no. 1, pp. 363–373, Jan. 2023.
[157] Y. Liu, Y. Kang, C. P. Xing, T. J. Chen, and Q. Yang, “A secure [175] V. M. Bashyam, J. Doshi, G. Erus, D. Srinivasan, A. Abdulkadir, A.
federated transfer learning framework,” IEEE Intell. Syst., vol. 35, Singh, M. Habes, Y. Fan, C. L. Masters, P. Maruff, C. J. Zhuo, H.
no. 4, pp. 70–82, Jul.–Aug. 2020. Völzke, S. C. Johnson, J. Fripp, N. Koutsouleris, T. D. Satterthwaite,
[158] S. Li, T. X. Cai, and R. Duan, “Targeting underrepresented D. H. Wolf, R. E. Gur, R. C. Gur, J. C. Morris, M. S. Albert, H. J.
populations in precision medicine: A federated transfer learning Grabe, S. M. Resnick, N. R. Bryan, K. Wittfeld, R. Bülow, D. A.
approach,” arXiv preprint arXiv: 2108.12112, 2021. Wolk, H. C. Shou, I. M. Nasrallah, C. Davatzikos, and The iSTAGING
and PHENOM Consortia, “Deep generative medical image
[159] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith,
harmonization for improving cross-site generalization in deep learning
“Federated optimization in heterogeneous networks,” in Proc.
Machine Learning and Systems, Austin, USA, 2020, pp. 429−450. predictors,” J. Magn. Reson. Imaging, vol. 55, no. 3, pp. 908–916, Mar.
2022.
[160] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas,
“Communication-efficient learning of deep networks from [176] C. I. Bercea, B. Wiestler, D. Rueckert, and S. Albarqouni, “Federated
decentralized data,” in Proc. 20th Int. Conf. Artificial Intelligence and disentangled representation learning for unsupervised brain anomaly
Statistics, Fort Lauderdale, USA, 2017, pp. 1273−1282. detection,” Nat. Mach. Intell., vol. 4, no. 8, pp. 685–695, Aug. 2022.
[161] S. U. Stich, “Local SGD converges fast and communicates little,” in [177] X. Bai, H. C. Wang, L. Y. Ma, Y. C. Xu, J. F. Gan, Z. W. Fan, F.
Proc. 7th Int. Conf. Learning Representations, New Orleans, LA, Yang, K. Ma, J. H. Yang, S. Bai, C. Shu, X. Y. Zou, R. H. Huang, C.
USA, May, 2019. Z. Zhang, X. W. Liu, D. D. Tu, C. O. Xu, W. Q. Zhang, X. Wang, A.
G. Chen, Y. Zeng, D. H. Yang, M. W. Wang, N. Holalkere, N. J.
[162] F. Sattler, K. R. Müller, and W. Samek, “Clustered federated learning: Halin, I. R. Kamel, J. Wu, X. H. Peng, X. Wang, J. B. Shao, P.
Model-agnostic distributed multitask optimization under privacy Mongkolwat, J. J. Zhang, W. Y. Liu, M. Roberts, Z. Z. Teng, L. Beer,
constraints,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 8, L. E. Sanchez, E. Sala, D. L. Rubin, A. Weller, J. Lasenby, C. S.
pp. 3710–3722, Aug. 2021. Zheng, J. M. Wang, Z. Li, C. Schönlieb, and T. Xia, “Advancing
[163] O. Marfoq, G. Neglia, A. Bellet, L. Kameni, and R. Vidal, “Federated COVID-19 diagnosis with privacy-preserving collaboration in artificial
multi-task learning under a mixture of distributions,” in Proc. 35th intelligence,” Nat. Mach. Intell., vol. 3, no. 12, pp. 1081–1089, Dec.
Advances in Neural Information Processing Systems, 2021, pp. 2021.
15434−15447.
[178] M. T. Ribeiro, S. Singh, and C. Guestrin, ““Why should I trust you?”:
[164] A. C. Hauschild, M. Lemanczyk, J. Matschinske, T. Frisch, O. Explaining the predictions of any classifier,” in Proc. 22nd ACM
Zolotareva, A. Holzinger, J. Baumbach, and D. Heider, “Federated SIGKDD Int. Conf. Knowledge Discovery and Data Mining, San
random forests can improve local performance of predictive models Francisco, USA, 2016, pp. 1135−1144.
for various healthcare applications,” Bioinformatics, vol. 38, no. 8,
[179] Q. Huang, M. Yamada, Y. Tian, D. Singh, and Y. Chang,
pp. 2278–2286, Apr. 2022.
“GraphLIME: Local interpretable model explanations for graph neural
[165] A. Chowdhury, H. Kassem, N. Padoy, R. Umeton, and A. Karargyris, networks,” IEEE Trans. Knowl. Data Eng., DOI: 10.1109/TKDE.
“A review of medical federated learning: Applications in oncology and 2022.3187455
cancer research,” in Proc. 7th Int. MICCAI Brainlesion Workshop,
2022, pp. 3−24. [180] S. M. Lundberg and S. I. Lee, “A unified approach to interpreting
model predictions,” in Proc. 31st Int. Conf. Neural Information
[166] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, Processing Systems, Long Beach, USA, 2017, pp. 4768−4777.
“ImageNet: A large-scale hierarchical image database,” in Proc. IEEE
Conf. Computer Vision and Pattern Recognition, Miami, USA, 2009, [181] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, “Deep
pp. 248−255. domain confusion: Maximizing for domain invariance,” arXiv preprint
arXiv: 1412.3474, 2014.
[167] Y. Otaki, A. Singh, P. Kavanagh, R. J. H. Miller, T. Parekh, B. K.
Tamarappoo, T. Sharir, A. J. Einstein, M. B. Fish, T. D. Ruddy, P. A. [182] M. S. Long, Y. Cao, J. M. Wang, and M. I. Jordan, “Learning
Kaufmann, A. J. Sinusas, E. J. Miller, T. M. Bateman, S. Dorbala, M. transferable features with deep adaptation networks,” in Proc. 32nd
Di Carli, S. Cadet, J. X. Liang, D. Dey, D. S. Berman, and P. J. Int. Conf. Machine Learning, Lille, France, 2015, pp. 97−105.
Slomka, “Clinical deployment of explainable artificial intelligence of [183] X. T. Han, L. Qi, Q. Yu, Z. Q. Zhou, Y. F. Zheng, Y. H. Shi, and Y.
SPECT for diagnosis of coronary artery disease,” in JACC: Gao, “Deep symmetric adaptation network for cross-modality medical
Cardiovasc. Imaging, vol. 15, no. 6, pp. 1091−1102, Jun. 2022. image segmentation,” IEEE Trans. Med. Imaging, vol. 41, no. 1,
[168] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. T. Jiao, Y. C. Liang, Q. pp. 121–132, Jan. 2022.
Yang, D. Niyato, and C. Y. Miao, “Federated learning in mobile edge [184] Y. C. Zhu, F. Z. Zhuang, J. D. Wang, G. L. Ke, J. W. Chen, J. Bian, H.
networks: A comprehensive survey,” IEEE Commun. Surv. Tut., Xiong, and Q. He, “Deep subdomain adaptation network for image
vol. 22, no. 3, pp. 2031–2063, 2020. classification,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 4,
[169] H. Guan and M. X. Liu, “Domain adaptation for medical image pp. 1713–1722, Apr. 2021.
analysis: A survey,” IEEE Trans. Biomed. Eng., vol. 60, no. 3, [185] Y. R. Jin, Z. Y. Li, Y. Q. Liu, J. L. Liu, C. J. Qin, L. Q. Zhao, and C.
pp. 1173–1185, Mar. 2022. L. Liu, “Multi-class 12-lead ECG automatic diagnosis based on a
[170] W. W. Zhang, F. Wang, Y. Jiang, Z. F. Xu, S. C. Wu, and Y. H. novel subdomain adaptive deep network,” Sci. China Technol. Sci.,
Zhang, “Cross-subject EEG-based emotion recognition with deep vol. 65, no. 11, pp. 2617–2630, Sept. 2022.
domain confusion,” in Proc. 12th Int. Conf. Intelligent Robotics and [186] J. D. Wang, Y. Q. Chen, W. J. Feng, H. Yu, M. Y. Huang, and Q.
Applications, Shenyang, China, 2019, pp. 558−570. Yang, “Transfer learning with dynamic distribution adaptation,” ACM
[171] S. T. Niu, M. Liu, Y. X. Liu, J. Wang, and H. B. Song, “Distant Trans. Intell. Syst. Technol., vol. 11, no. 1, p. 6, Feb. 2020.
domain transfer learning for medical imaging,” IEEE J. Biomed. [187] B. Xu, K. W. Wu, Y. Wu, J. He, and C. Y. Chen, “Dynamic
Health Inform., vol. 25, no. 10, pp. 3784–3793, Oct. 2021. adversarial domain adaptation based on multikernel maximum mean
[172] C. Chen, Q. Dou, H. Chen, J. Qin, and P. A. Heng, “Synergistic image discrepancy for breast ultrasound image classification,” Expert Syst.
and feature adaptation: Towards cross-modality domain adaptation for Appl., vol. 207, p. 117978, Nov. 2022.
medical image segmentation,” in Proc. AAAI Conf. Artif. Intell., vol. [188] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image
33, no. 1, pp. 865–872, Jul. 2019. dataset for benchmarking machine learning algorithms,” arXiv preprint
arXiv: 1708.07747, 2017. Ahmad Chaddad received the master degree (2008)
from Université de Technologie de Compiegne and
[189] K. Saenko, B. Kulis, M. Fritz, and T. Darrell, “Adapting visual the Ph.D. degree (2012) from the University of Lor-
category models to new domains,” in Proc. 11th European Conf. raine, France. He worked for seven years at McGill
Computer Vision, Heraklion, Crete, Greece, 2010, pp. 213−226. University, Canada, École de Technologie Supéri-
eure (ETS), Montreal, The University of Texas MD
[190] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J.
Anderson Cancer Center, USA, and Villanova Uni-
Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, L. Lanczi, E. versity, USA. In 2020, he joined the School of Artifi-
Gerstner, M. A. Weber, T. Arbel, B. B. Avants, N. Ayache, P. cial Intelligence, Guilin University of Electronic
Buendia, D. L. Collins, N. Cordier, J. J. Corso, A. Criminisi, T. Das, Technology, as a Professor. He is an Associate Mem-
H. Delingette, Ç. Demiralp, C. R. Durst, M. Dojat, S. Doyle, J. Festa, ber at LIVIA, ETS (Canada) and LCOMS, University of Lorraine (France).
F. Forbes, E. Geremia, B. Glocker, P. Golland, X. T. Guo, A. He has authored more than 80 research papers (60 papers first co-author). His
Hamamci, K. M. Iftekharuddin, R. Jena, N. M. John, E. Konukoglu, D. current research interests include AI and radiomics analysis to improve per-
Lashkari, J. A. Mariz, R. Meier, S. Pereira, D. Precup, S. J. Price, T. R. sonalized medicine strategies, by allowing clinicians to monitor disease in real
Raviv, S. M. S. Reza, M. Ryan, D. Sarikaya, L. Schwartz, H. C. Shin, time as patients move through treatment. Dr. Chaddad is a member of several
J. Shotton, C. A. Silva, N. Sousa, N. K. Subbanna, G. Szekely, T. J. international technical and organizational committees.
Taylor, O. M. Thomas, N. J. Tustison, G. Unal, F. Vasseur, M.
Wintermark, D. H. Ye, L. Zhao, B. S. Zhao, D. Zikic, M. Prastawa, M. Qizong Lu is undergraduate student in School of
Reyes, and K. Van Leemput, “The multimodal brain tumor image Artificial Intelligence, Guilin University of Elec-
segmentation benchmark (BRATS),” IEEE Trans. Med. Imaging, tronic Technology. From 2021 to 2022, he works on
vol. 34, no. 10, pp. 1993–2024, Oct. 2015. the current problems faced the clinical practice. His
current research interests include AI for healthcare
[191] X. H. Zhuang and J. Shen, “Multi-scale patch and multi-modality and medical image analysis. He has authored a
atlases for whole heart segmentation of MRI,” Med. Image Anal., research paper about radiomics analysis for autism
vol. 31, pp. 77–87, Jul. 2016. spectrum disorder.
[192] A. E. Kavur, N. S. Gezer, M. Barış, S. Aslan, P. H. Conze, V. Groza,
D. D. Pham, S. Chatterjee, P. Ernst, S. Özkan, B. Baydar, D. Lachinov,
S. Han, J. Pauli, F. Isensee, M. Perkonigg, R. Sathish, R. Rajan, D.
Sheet, G. Dovletov, O. Speck, A. Nürnberger, K. H. Maier-Hein, G. B. Jiali Li is undergraduate student in School of Artifi-
Akar, G. Ünal, O. Dicle, and M. A. Selver, “CHAOS challenge- cial Intelligence, Guilin University of Electronic Tech-
combined (CT-MR) healthy abdominal organ segmentation,” Med nology. She works on artificial intelligence for clini-
Image Anal, vol. 69, p. 101950, Apr. 2021. cal diagnosis. Her current research interests include
machine learning and biomedical engineering. She
[193] M. H. Yap, G. Pons, J. Martí, S. Ganau, M. Sentís, R. Zwiggelaar, A. has authored a research paper about radiomics analy-
K. Davison, and R. Martí, “Automated breast ultrasound lesions sis for Autism spectrum disorder.
detection using convolutional neural networks,” IEEE J. Biomed.
Health Inform., vol. 22, no. 4, pp. 1218–1226, Jul. 2018.
[194] J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N.
Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O’Donoghue, D.
Visentin, G. van den Driessche, B. Lakshminarayanan, C. Meyer, F. Yousef Katib finished his residency training pro-
Mackinder, S. Bouton, K. Ayoub, R. Chopra, D. King, A. gram in Radiation Oncology and fellowship in Geni-
tourinary and gastrointestinal cancer at McGill Uni-
Karthikesalingam, C. O. Hughes, R. Raine, J. Hughes, D. A. Sim, C.
versity, Canada. He also received the master degree
Egan, A. Tufail, H. Montgomery, D. Hassabis, G. Rees, T. Back, P. T. in science from McGill University. He is currently an
Khaw, M. Suleyman, J. Cornebise, P. A. Keane, and O. Ronneberger, Assistant Professor and Consultant in Radiation-
“Clinically applicable deep learning for diagnosis and referral in Oncology at Taibah University, Saudi Arabia. His
retinal disease,” Nat. Med., vol. 24, no. 9, pp. 1342–1350, Sept. 2018. clinical activity is oriented toward the management
of prostate tumors and radiomic analysis. He coordi-
[195] K. E. Stephan, F. Schlagenhauf, Q. J. M. Huys, S. Raman, E. A.
nates several prospective and retrospective studies in
Aponte, K. H. Brodersen, L. Rigoux, R. J. Moran, J. Daunizeau, R. J.
prostate cancers.
Dolan, K. J. Friston, and A. Heinz, “Computational neuroimaging
strategies for single patient predictions,” NeuroImage, vol. 145,

Reem Kateb is an Assistant Professor in the Department of Information Systems at Taibah University, Saudi Arabia. She received the M.A.Sc. degree in information systems security and the Ph.D. degree in information and systems engineering from Concordia University, Canada. Her research interests are cybersecurity, cyber-physical systems, IoT, and secure AI.

Camel Tanougast received the Ph.D. degree in microelectronics and electronic instrumentation from the Henri Poincaré University of Nancy, France, in 2001, and the accreditation to supervise research (HDR, Habilitation degree) from the University of Metz, France, in 2009. He is currently a Professor in electronics and embedded systems with the University of Lorraine, France. He joined the Electronic Architectures Group, Electronic Instrumentation Laboratory of Nancy, in 2003, and the Microelectronic and Sensor Interface Laboratory of Metz (LICM), in 2008.
He has been the Head of Research for networked adaptive and self-organized systems at LICM. He joined the Laboratory of Design, Optimization and Modeling of Systems (LCOMS) in 2013. He is currently the Deputy Director and the ASEC Team Leader of LCOMS for research in microelectronics, communicating embedded systems, and smart sensors. He has authored or coauthored more than 120 publications and several research books, has served on editorial boards of books and international journals, and has supervised 21 Ph.D. theses. His research interests include radio communication, reconfigurable systems and NoCs, design and implementation of real-time processing architectures, optimization and implementation design, System-on-Chip development, FPGA design, computer vision, radiomic analysis, image processing, cryptography, and digital video broadcasting (DVB).

Ahmed Bouridane (Senior Member, IEEE) received the "Ingénieur d'État" degree in electronics from the École Nationale Polytechnique of Algiers (ENPA), Algeria, in 1982, the M.Phil. degree in electrical engineering (VLSI design for signal processing) from the University of Newcastle-upon-Tyne, U.K., in 1988, and the Ph.D. degree in electrical engineering (computer vision) from the University of Nottingham, U.K., in 1992. From 1992 to 1994, he worked as a research developer in telesurveillance and access control applications. In 1994, he joined Queen's University Belfast, U.K., initially as a Lecturer in computer architecture and image processing, and was later promoted to Reader in computer science. In 2009, he joined Northumbria University at Newcastle, leading the Computational Intelligence and Visual Computing Lab. He is now a Professor of machine intelligence and Director of the Cybersecurity and Data Analytics Research Center at the University of Sharjah, UAE. His research interests are in machine learning with applications to imaging for forensics and security, quantitative pathology and biomedical engineering, homeland security, and video analytics. He has authored or co-authored more than 350 publications and three research books on imaging for forensics and security and on biometric security and privacy.

Ahmed Abdulkadir received the master's degree in life sciences (2012) from the École Polytechnique Fédérale de Lausanne, Switzerland, and the Ph.D. degree in computer engineering (2018) from the Albert-Ludwigs-University Freiburg, Freiburg im Breisgau, Germany. He worked as a Researcher at the University of Bern, Switzerland, and the University of Pennsylvania, USA. He is currently completing a three-year mobility fellowship granted by the Swiss National Science Foundation at the Lausanne University Hospital, Switzerland. His work focuses on the use of statistical methods and AI to study aging and dementia in order to improve the diagnosis of brain disorders.