
Computer Science Review 43 (2022) 100452

journal homepage: www.elsevier.com/locate/cosrev

Review article

Developing future human-centered smart cities: Critical analysis of smart city security, data management, and ethical challenges

Kashif Ahmad a,∗, Majdi Maabreh b, Mohamed Ghaly c, Khalil Khan d, Junaid Qadir e,f, Ala Al-Fuqaha a
a Information and Computing Technology (ICT) Division, College of Science and Engineering (CSE), Hamad Bin Khalifa University, Doha, Qatar
b Department of Information Technology, Faculty of Prince Al-Hussein Bin Abdallah II For Information Technology, The Hashemite University, P.O. Box 330127, Zarqa 13133, Jordan
c Research Center for Islamic Legislation & Ethics, College of Islamic Studies, Hamad Bin Khalifa University, Doha, Qatar
d Department of Information Technology and Computer Science, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology, Haripur-KPK, Pakistan
e Department of Computer Science and Engineering, Faculty of Engineering, Qatar University, Doha, Qatar
f Department of Electrical Engineering, Information Technology University, Lahore, Pakistan

Article history:
Received 6 July 2021
Received in revised form 24 September 2021
Accepted 27 November 2021
Available online 28 December 2021

Keywords: Smart cities; Machine learning; AI ethics; Adversarial attacks; Explainability; Interpretability; Privacy; Security; Data management; Data auditing; Data ownership; Data bias; Trojan attacks; Evasion attacks

Abstract

As the globally increasing population drives rapid urbanization in various parts of the world, there is a great need to deliberate on the future of the cities worth living. In particular, as modern smart cities embrace more and more data-driven artificial intelligence services, it is worth remembering that (1) technology can facilitate prosperity, wellbeing, urban livability, or social justice, but only when it has the right analog complements (such as well-thought-out policies, mature institutions, and responsible governance); and (2) the ultimate objective of these smart cities is to facilitate and enhance human welfare and social flourishing. Researchers have shown that various technological business models and features can in fact contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In the light of these observations, addressing the philosophical and ethical questions involved in ensuring the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities assumes paramount importance. Globally, there are calls for technology to be made more humane and human-centered. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical (data and algorithmic) challenges, to a successful deployment of AI in human-centric applications, with a particular emphasis on the convergence of these concepts/challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and on how the field can fill the current gaps and lead to better solutions. We believe such rigorous analysis will provide a baseline for future research in the domain.

© 2021 Elsevier Inc. All rights reserved.

Contents

1. Introduction
1.1. AI-based smart city applications
1.2. Scope of the survey
1.3. Related surveys
1.4. Contributions
2. Smart city AI security and robustness
2.1. Adversarial attacks
2.1.1. Adversarial AI and smart city applications
2.2. Security attacks on AI
2.2.1. Data poisoning
2.2.2. Evasion attacks
2.2.3. Trojan attacks
2.2.4. Model stealing (model extraction)
2.2.5. Membership inference attacks
2.3. AI safety in smart city
3. Smart city AI interpretability
3.1. Explainable AI for smart city applications
3.2. Explainable AI and adversarial attacks
4. Smart city and data-related challenges
4.1. Challenges in collection and sharing data
4.2. Explainability and datasets
4.2.1. Exploratory analysis of the datasets
4.2.2. Dataset description and standardization
4.2.3. Explainable features
4.2.4. Dataset summarization
5. AI ethics and smart cities
5.1. Academic publications
5.2. Policies & guidelines
5.3. Analytical review of the key issues
5.3.1. Singularity/superintelligence
5.3.2. Human-centered branch (AI ethics)
5.3.3. Machine-centered branch (machine ethics)
6. Insights and lessons learned
6.1. Smart city AI security and robustness
6.2. Smart city AI interpretability
6.3. AI ethics
7. Open issues and future research directions
7.1. Smart city AI security and robustness
7.2. Smart city AI interpretability
7.2.1. Interpretation vs. performance
7.2.2. Concepts and evaluation metrics
7.2.3. Explanation of deep learning models
7.2.4. Explainability and adversarial AI
7.3. AI ethics
8. Conclusions
Declaration of competing interest
Acknowledgment
References

∗ Corresponding author.
E-mail addresses: kahmad@hbku.edu.qa (K. Ahmad), aalfuqaha@hbku.edu.qa (A. Al-Fuqaha).
https://doi.org/10.1016/j.cosrev.2021.100452

1. Introduction

According to a recent report [1], around 54% of the world's population lives in cities, and the number is expected to reach 66% by 2050. Rapid urbanization is driven by economic incentives, but it also has a significant collateral environmental and social impact. Therefore, environmental, social, and economic sustainability is crucial to maintaining a balance between the rapid expansion of urbanization and the resources of the cities. Modern technologies can help improve the environmental, financial, and social aspects of urban life and mitigate the associated challenges. More recently, the concept of smart cities has been introduced, which aims to make use of modern technologies, including a wide range of Internet of Things (IoT) sensors, to collect and analyze data on different aspects of urban life [2,3]. A smart city application demands a joint effort of people from different disciplines, such as engineering, architecture, urban design, and economics, to plan, design, implement, and deploy a smart solution for an underlying task.

Artificial Intelligence (AI) techniques have also proved very effective in gaining insights from data collected through different IoT sensors to manage and utilize resources more efficiently. In this paper, we use the term AI broadly as an umbrella term covering techniques and algorithms able to learn from data (i.e., data science, statistical learning, machine learning, deep learning) as well as intelligent systems able to perform tasks such as perception, reasoning, and inference (i.e., expert systems, probabilistic graphical models, Bayesian networks). According to Greg Stone [4], "If you know the right questions and understand the risks, data can help build better cities", and AI helps extract such insights from the data. Some key smart city applications where AI has proved very effective include healthcare, transportation, education, environment, agriculture, defense, and public services [5–9]. We note here that while our focus in this paper is on machine learning (ML) techniques, our ideas apply more broadly to the more general case of AI technology for smart cities. We also note that most of the recent significant advances have been made possible by advances in ML, and much of the work on AI safety and AI ethics is directly relevant to ML; therefore, we will mostly use the two terms synonymously. In a smart city application, AI techniques aim to process and identify patterns in data obtained from individual sensors, or in collective data generated by several sensors, and to provide useful insights on how to optimize the underlying services. For instance, in transportation, AI could be used to analyze data collected from different parts of a city (e.g., roads, commute modes, and number of passengers) for future planning or for deploying different transportation schemes in the city.

However, there are several risks and challenges, such as availability, biases, and privacy of data, to successfully deploying AI in different smart city applications [10–12]. Various data biases can result in detrimental AI predictions in sensitive human-centric applications, e.g., algorithmic predictions may be biased against certain races and genders, as reported in [13,14]. Apart from data-oriented challenges, there are other threats to AI in smart city applications. For instance, attackers can launch different types of adversarial attacks on AI models to affect their predictive capabilities.
Such attacks in sensitive application domains such as connected autonomous vehicles can lead to significant loss in terms of human lives and infrastructure [15].

Another key challenge to the deployment of AI in smart city applications is the lack of interpretability (i.e., humans are unable to understand the cause of an AI model's decision) [16]. Explainability is a key characteristic of AI models to be deployed in critical smart city applications, where the predictive capabilities of the models are not enough to solve a problem completely; rather, the reasons behind the predictions need to be understood [17,18]. It also helps to ensure that AI decisions in an underlying application are equitable, by avoiding decisions based on protected attributes (e.g., race, gender, and age; for instance, Amazon's AI recruiting tool was found biased against women [19]; similarly, Amazon's Rekognition, a gender recognition tool, was found 31.4% less accurate in classifying the gender of dark-skinned women compared to light-skinned men [20,21]) and by ensuring an equal representation of protected attributes in the sample space [22]. In recent years, ever-growing concerns have been raised about the deployment of AI algorithms in human-centric smart city applications, for instance, regarding privacy in surveillance systems, unequal inclusion of citizens in different services, and biases in predictive policing [20,23,24].

A breakthrough in one of these challenges may have a knock-on effect. For instance, besides offsetting the problem of interpretability and biases in decisions, explainable AI may also help to guard against adversarial attacks; the explanations produced by explainable AI, on the other hand, may also help attackers to generate more adverse attacks [25,26].

1.1. AI-based smart city applications

In a smart city, sensors are deployed at various places to gather data about different aspects of the city – e.g., data related to transportation, healthcare, and environment – which is then sent to a central server for analysis or processed locally at the edge devices to obtain useful insights using AI techniques. Thanks to recent advancements in technology, government authorities can now gather real-time data which, combined with the capabilities of AI, can be used to manage public services in cities more efficiently and effectively. For instance, with enough information about road conditions, traffic volume, and people's commute means in a city, authorities can eliminate bottlenecks, which can in turn reduce city traffic, crowding, and pollution, leading to more optimized, sustainable, and clean services and environment.

Fig. 1. Some interesting applications of AI in smart cities [27].

Some of the currently available key applications of AI in smart cities are illustrated in Fig. 1 and described next.

• Healthcare: The basic motivation of ML applications in healthcare lies in their ability to automatically analyze large volumes of data, identify hidden patterns, and extract meaningful clinical insights, which is beyond the scope of human capabilities. The automatically extracted insights are generally efficient and help medical staff in planning and treatment, ultimately leading to effective and low-cost treatment with increased patient satisfaction [28]. In recent years, AI has been heavily deployed in healthcare and proved very effective, thanks to the recent advancement in deep learning. For instance, a solution proposed by Google [29] outperformed human doctors (i.e., by around 16% accuracy) in the identification of breast cancer in mammograms. Similarly, AI solutions proposed in [30,31] have proved very effective in the diagnosis of skin and lung cancer, respectively.

• Transportation and autonomous cars: Transportation can benefit from AI in several ways. For instance, its predictive capabilities can help in traffic volume and congestion estimation for route optimization [32]. AI algorithms can also be jointly used with multimedia processing techniques for road safety [33], driver distraction [34], accident event detection [35], and road passability analysis [36,37]. Moreover, AI can be considered a backbone of autonomous cars, where one of the key responsibilities of the AI module is continuous monitoring of the surrounding environment and the prediction of different events, which generally involves the detection and recognition of various objects such as pedestrians, vehicles, and roadside objects [38].

• Education: AI brings several advantages in education by contributing to several tasks, such as automatic grading and evaluation, students' retention and dropout prediction, personalized learning, and intelligent tutoring systems [8]. AI predictive capabilities could also help in predicting students' career paths by applying AI techniques to students' data covering different aspects, such as interests and performance in different subjects.

• Crime detection/prediction and tracking: This is another interesting smart city application where AI has shown its potential. AI is transforming the way law enforcement agencies operate to prevent, detect, and deal with crimes. In the modern world, law enforcement agencies rely heavily on predictive analysis to track crimes and identify the most vulnerable areas of a city, where additional force and patrolling teams could be deployed. One example of such tools is PredPol, which relies on AI techniques to predict "hot spot" crime neighborhoods [39].

• Clean and sustainable environment: AI also helps in monitoring and maintaining a clean and sustainable environment. Thanks to the recent advancements in deep learning and satellite technologies, environment monitoring and enforcement are more efficient than ever [40]. AI techniques have been widely deployed in analyzing remotely sensed data for environmental changes. Moreover, AI techniques have also been demonstrated to be very effective in disaster detection [41], water management [42], and waste classification [43].

• Smart building: A smart building is an automatic structure/system to control a building's operations such as lighting, heating, ventilation, air conditioning, and security. AI has been widely exploited for various tasks in such smart systems, as elaborated upon in [44].

• Tourism, culture, services, and entertainment: The tourism and entertainment industries also benefit greatly from AI and social media [45]. For instance, AI-based recommendation systems are widely used by travelers when deciding their holiday destinations, considering different variables such as transportation and accommodation facilities, cost, food, and historical points. In addition, AI-based applications could help travelers in fraud detection, cost optimization, and identification of entertainment venues and transportation facilities at the destination. Apart from recommendation systems, which are one of the main applications of AI in the sector, AI-enabled visual sentiment analysis tools could be used to search for or extract scenes from long TV show videos based on sentiment analysis [46].

Despite the outstanding performance and success, AI also brings challenges in the form of privacy and unintentional bias in public services. For instance, to analyze people's commute patterns, the administration needs to collect and process a lot of people's data, including their movements, risking the leakage of people's personal information. The intentional and unintentional bias in AI decisions is even more dangerous, as it might endanger citizens' lives in healthcare or law enforcement applications. For instance, AI-based software used to predict future criminals was found biased against Black defendants [14]. Similarly, a smart system used to predict the healthcare needs of about 70 million patients in the US was assigning higher risk scores to Black patients compared to white patients with the same medical conditions [47]. It must be noted that the algorithms do not learn the bias on their own; rather, it comes from the data used to train the algorithms, which reflects the social and institutional biases of the society practiced over the years [3]. Moreover, being a product of humans, AI algorithms reflect the beliefs, objectives, priorities, and design choices of humans (i.e., developers). For instance, to make accurate predictions, AI algorithms need the training data to be properly annotated and to contain sufficient representation of each class. An over-representation of a class may develop a tendency towards that class in predictions. A trade-off between false positives and false negatives is also very crucial for AI predictions. These limitations of AI hinder its way of overcoming social and political biases to achieve smart cities' true objectives. According to Green [3], AI algorithms in smart city applications are mostly influenced by the social and political choices of the society and authorities. Therefore, to ensure privacy and reduce bias of AI algorithms in human-centric applications, we need to discuss the need, goals, and potential impact of their decisions on society before deploying them.

Moreover, there are several security threats to AI models in smart city applications; for instance, attackers can launch adversarial attacks on AI models to bias their decisions by disturbing their prediction capabilities. For example, an adversarial attacker might turn off an autonomous car on a highway and ask for money to restart it. A more serious situation could be stopping a train on the platform just before the arrival of the next train [48]. Another challenge is the lack of interpretability, which results in humans being unable to understand the causes of an AI model's decision. To deal with such risks involved in deploying AI in smart city applications, the concepts of explainability and ethics in AI have been introduced.

In the next sections, we provide a detailed overview and analysis of the potential security, robustness, interpretability, and ethical (data and algorithmic) challenges to AI in smart city applications.

Fig. 2. Visual depiction of the scope of the paper.

1.2. Scope of the survey

The paper revolves around the key challenges to a successful deployment of AI in smart city applications, including security, robustness, interpretability, and ethical (data and algorithmic) challenges. Fig. 2 visually depicts the scope of the paper. The paper emphasizes these concepts/challenges by exploring how one of these challenges/problems may cause or help in solving others. We also analyze research trends in these domains. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and on how the field can fill the current gaps in the literature and lead to better solutions.

1.3. Related surveys

Due to the keen interest of the research community in leveraging AI for smart city applications, this has always been a popular area of research [49]. In the literature, several interesting articles analyzing different aspects of AI applications in smart cities have been proposed [9]. In addition, being among the key active research topics, a significant amount of the literature can be found on adversarial attacks, explainability, availability of datasets, and the ethical aspects of AI in human-centric applications. Adversarial and explainable AI are comparatively more explored in the literature, and there are some interesting surveys covering different aspects of these individual topics. However, to the best of our knowledge, there is no survey jointly analyzing the challenges and, more importantly, emphasizing the connection between the four challenges. For instance, Zhang et al. [50] provide a survey of adversarial attacks on deep learning models. Similarly, Serban et al. [51] provide a comprehensive survey on adversarial examples in object recognition. Zhou et al. [52], on the other hand, provide a survey of game-theoretic approaches for adversarial AI. There are also some recent surveys on explainable AI. For instance, in [53–56], surveys of the existing literature on explainable AI are presented. Some surveys focus on a particular type of technique for explainable AI; for instance, [57] and [58] survey web technologies and reinforcement learning-based approaches for explainability. Baum et al. [59], on the other hand, provide a survey of AI projects on ethics, risk, and policy. Similarly, Morley et al. [60] provide an overview of the literature on AI ethics in healthcare. In contrast to other surveys, this paper emphasizes the connection between these four challenges and analyzes how a solution to one of the challenges may also help or cause the others.

Fig. 3. Structure of the survey.

1.4. Contributions

In this paper, we provide a detailed survey of the literature on the security, safety, robustness, interpretability, and ethical (data and algorithmic) challenges to AI in smart city applications. The paper mainly focuses on the connection among these concepts and analyzes how these concepts and challenges are dependent on each other. The main contributions of the paper are summarized as follows:

• We provide a detailed analysis of how AI is helping to develop our cities, and of the potential challenges, such as salient ethical, interpretation, safety, security, and fairness challenges, hindering its way in different smart city applications.
• The paper analyzes the literature on major challenges, including security, safety, robustness, interpretability, and ethical challenges, in deploying AI in human-centric applications.
• The paper provides useful insights into the relationship among these challenges and describes how they may affect each other.
• We also identify the limitations, pitfalls, and open research challenges in these domains.

The rest of the paper is organized as follows. Section 2 provides an overview of different security and robustness challenges to AI in smart city applications. Section 3 details the importance of interpretability, and explains how explainable AI can help in extracting more insightful information from AI decisions and how it can be linked with adversarial attacks. Section 4 details the challenges associated with data collection and sharing. Section 5 focuses on the ethical aspects of deploying AI in human-centric smart city applications. Section 6 summarizes the key insights and lessons learned from the literature. In Section 7, we highlight the open issues and future research directions. Finally, Section 8 provides some concluding remarks. Fig. 3 visually depicts the structure of the paper.

2. Smart city AI security and robustness

Machine learning has tremendous potential in smart cities, where it can improve the productivity and effectiveness of the different city systems. Despite the positive outcomes and the promise of AI in smart cities, security is one of the main concerns that still needs further investigation and experiments. AI models can be vulnerable to different kinds of attacks, such as adversarial examples, model extraction attacks, backdooring attacks, Trojan attacks, membership inference, and model inversion [61].

Attacks on AI models introduce new challenges to the existing software security systems and approaches, which need to address a rather different nature of challenges [62]. AI has its unique security issues, where a small modification of the objects (inputs or data consumed by AI algorithms) might change the decision of AI models and cause serious consequences. The following short list of security incidents involving AI applications in the last five years clearly raises the urgent need to intensively study the safety and security aspects of AI while transforming cities to be smart.

• 2016: Tesla's Autopilot system confused the white side of a truck with the sky, leading to a deadly crash.¹
• 2016: Microsoft's chatbot was shut down and closed a few hours after its release; the model was attacked and forced to post offensive tweets.² Chatbots are not just for businesses but also for government services. The City of North Charleston in South Carolina, USA, has launched Citibot, a communication tool between citizens and their governments through which citizens can ask for information or request repairs.³ Such smart systems are vulnerable to hacking, as they consume data from citizens to analyze and manage their requests.
• 2016: A Google autonomous vehicle in autonomous mode crashed after a failure in speed estimation.⁴
• 2016: A face recognition detection attack was carried out using eyeglass frames [63].
• 2017: Apple's face recognition was fooled by a cheap 3D-printed mask.⁵
• 2018: One of Uber's self-driving cars killed a pedestrian; the vehicle did not stop in time.⁶
• 2018: Robust physical perturbations fooled the DNN-based classifier of a self-driving car into misclassifying speed limit signs [64].
• 2018: Targeted audio adversarial examples successfully attacked DeepSpeech, a deep learning-based speech recognition system; AI works on sound waves, which can carry secret commands to connected devices [65].
• 2019: The Tesla Autopilot AI system was attacked at Tencent's Keen Security Lab through small changes to lane markings that remained clear to humans; the Tesla Model S swerved into the wrong lane, making Tesla's lane recognition models risky and unreliable under some conditions [66].
• 2019: A neural network model diagnosed a benign mole as malignant because of tiny noise added to the medical image [67].
• 2019: Deepfakes; Facebook created a dataset for deepfake detection.⁷
• 2019: A smart algorithm guiding care for tens of millions of people was found to be biased against dark-skinned patients in New Jersey, USA, assigning them lower scores than white patients with the same medical conditions.⁸
• 2020: A shopping guide robot in Fuzhou Zhongfang Marlboro Mall, China, walked to an escalator by itself, fell off, and knocked over passengers; the robot was suspended from its duties.⁹
• 2020: Starsky Robotics was shut down due to a safety issue in its self-driving software on highways; the Starsky team reported that supervised ML alone is not enough to build a safe robot-truck industry.¹⁰
• 2021: Tesla cars crashed due to the Autopilot feature.¹¹
• 2021: The FBI issued a warning about a rise in AI-based synthetic materials, including deepfake content.¹²
• 2021: An AI chatbot was suspended, and the firm is now being sued in South Korea, for making offensive comments and leaking users' information.¹³

Footnotes:
1. Tesla website: https://www.tesla.com/en_JO/blog/tragic-loss
2. Microsoft: https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
3. https://www.northcharleston.org/connect/citibot/
4. BBC technology news: https://www.bbc.com/news/technology-35692845
5. Forbes: https://www.forbes.com/
6. The National Transportation Safety Board (NTSB); media at: https://www.theverge.com/2018/5/24/17388696/uber-self-driving-crash-ntsb-report
7. Facebook AI: https://ai.facebook.com/blog/deepfake-detection-challenge/
8. Media: https://www.wired.com/
9. Media: https://syncedreview.com, https://weibo.com/
10. Medium: https://medium.com/
11. https://www.jumpstartmag.com/ai-gone-wrong-5-biggest-ai-failures-of-all-time/
12. https://securityintelligence.com/articles/how-protect-against-deepfake-attacks-extortion/
13. https://www.straitstimes.com/

This list indicates that there are several issues to be handled beyond building AI models of good performance. In the following subsections, we discuss the strategies of attacks on machine learning models in smart city applications, which sheds light on the necessity of safe and robust AI solutions at both the technical and policy levels.

2.1. Adversarial attacks

This challenge has been recognized and discussed both for crafting fake data in different domains, e.g., text [68], images [69], audio [65], and network signals [70], known as adversarial examples, and for evaluating and developing solutions against this security threat [71]. Formally, given a benign input X which is classified as class 1 by a model M, the attacker seeks a perturbation function F that generates X′ = F(X) such that X′ is classified as class 2 by the same model M, while the difference between X and X′ is not discoverable by humans.

This former definition refers to un-targeted adversarial attacks. Targeted attacks have a target class Y, where the function F tries to find a version of any benign input for which the model M becomes biased towards Y in its prediction. Another classification of adversarial attacks is based on the amount of knowledge the attackers have about the target model (the victim). The threat model can be white-box, gray-box, or black-box. In white-box attacks, adversaries have full knowledge of the targeted model architecture, which eases the process of crafting poisoned data and thus fooling the system. In gray-box threat models, attackers may have some information about the overall structure of the model, while in black-box threat models all they have is access to use the model [72]. A detailed taxonomy of adversarial attacks can be found in [5]. Fig. 4 illustrates a common adversarial attack on an image classifier, where an AI model has been deceived by adding a tiny perturbation, amplified in the figure for visual depiction, to a legitimate sample to disturb the prediction capabilities of the model.

Fig. 4. An illustration of a common adversarial attack on an image classification AI model. The shown adversarial perturbation (amplified for illustration) is added into a sample to force the model to make a wrong prediction with high confidence [73].

Since the main focus of the paper is on smart city applications, without going into further details, in the next subsection we provide an overview of the literature on adversarial attacks on AI models in smart city applications.
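Before turning to these application-specific attacks, we give a minimal sketch of one common way to instantiate the perturbation function F described above: the fast gradient sign method (FGSM). The snippet below uses PyTorch; the model, loss function, and epsilon value are illustrative assumptions and not code from any of the surveyed systems.

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    # Un-targeted attack: compute X' = X + epsilon * sign(grad_X loss(M(X), y)),
    # i.e., a small step in the input direction that most increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # loss w.r.t. the benign label
    loss.backward()                   # gradient of the loss w.r.t. the input
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()             # the adversarial example X'

With a small epsilon, the difference between X and X′ is typically imperceptible to humans, matching the definition above; a targeted variant would instead step so as to decrease the loss towards a chosen target class Y.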


2.1.1. Adversarial AI and smart city applications

Adversarial attacks are considered severe security threats to learner-based models due to their possible consequences. In smart cities, with their complex networks and collaborations of data-driven applications and devices, the impact of misleading a model, e.g., a classifier, could result in harsh situations and costly failures. This could happen whether the attack intentionally misled the model, such as through inputs crafted by attackers, or unintentionally ("accidentally"), such as through a defect in traffic light signals or varying weather conditions that could impact the illumination of signs consumed by autonomous vehicles [74]. In [75], a perturbation on a regular image of a stop sign forces a deep neural network classifier to see it as a yield sign. This information can lead the vehicle to behave unsafely and might cause severe accidents. This case could be worse if other neighboring vehicles consume data sent by the attacked vehicle. A DNN-based solution was developed in [76] to detect and then isolate the attacked vehicle from the cooperating vehicles. AI models in autonomous vehicles depend not only on the exchanged sensor data but also on consuming street signs to control driving and traffic. The security of these models is crucial, since a slight change in a sign image could be enough to fool the model; for example, one pixel is often enough to attack a classifier in [77].

Similarly, human lives and billions of dollars could become victims of AI models that misclassify diseases and medical reports. In [67], using slight noise on disease images, or even replacing some words in a disease description with their synonyms, the AI models changed their decisions to the opposites of the true ones. Although medical images are taken in pre-defined settings, where some manipulations applied to images in other domains are not valid (such as rotations) and some manipulation methods can be easily detected by specialists' eyes [78], there is still a chance of manipulation by other methods [79]. In [80], a GAN was able to modify breast medical images by adding/removing features and change the AI decision, while radiologists could not discriminate between the original and manipulated images at low resolution rates. Brain medical images have been manipulated by three different methods (noise generation, fast gradient sign, and virtual adversarial training) to generate adversarial examples that mislead a brain tumor classifier [81]. Fortunately, with the help of DNNs, a detector has been developed that showed surprisingly high accuracy in detecting manipulated medical images [82]. Another detector results from ensembling CNN networks [83], or from augmenting the training dataset with the adversarial examples of modified CT scan images [84]. The literature shows more evaluation of medical image attacks than of text attacks, probably because such attacks arose in the computer vision field. However, texts in natural language are also liable to attacks [50]. This means that prescriptions, medical records classification for insurance decisions, patient history and allergy information, and medical claims codes that determine reimbursement are all vulnerable to attacks. The sensitive nature of these applications and the resulting harms (economic as well as social) raise concerns about the safety, security, and dependability of AI systems. In the future, extra computational interventions (e.g., adversarial data detectors) may form an integral part of AI-based medical solutions.

Other components of smart cities are not far from serious attacks. In the smart energy sector, attacks come in different forms: denial-of-service, where systems or parts of them become inaccessible (systems that can also be optimized for more efficient energy needs [85]); random manipulation of sensor readings; or, with some information the attacker has about the system and sensors, injection of false data into the system [86–88]. Several detection solutions have been proposed and evaluated to mitigate attacks in a grid, such as false data injection detection [89] and securing the grid physical layers against attacks [90]. Adversarial attacks also have a serious impact on food safety and production control [91]. Several AI solutions feed on images, videos, and text in smart agriculture and smart waste. These two smart sectors may be more vulnerable to unintentional attacks, one reason being the natural conditions in which the sensors and cameras work. Table 1 provides a summary of some of the works on adversarial AI in smart city applications.

2.2. Security attacks on AI

In this section, we introduce the readers to some other common strategies for launching attacks on AI, particularly in cloud and edge deployments, such as data poisoning, evasion attacks, exploratory attacks, model extraction, backdooring, trojan, model-reuse, cyber kill chain-based attacks, membership inference, and model inversion attacks, which are very common in smart city applications.

2.2.1. Data poisoning

In this attack, as illustrated in Fig. 5, attackers intentionally share manipulated data (e.g., with incorrect labels), which the model would consume in any re-training process, with the goal of degrading the AI model's performance. In this case, attackers somehow have control over the training data or can contribute to the training data [109]. In smart cities, crowdsensing is an integral data source for smart services involved in several areas such as transportation, pollution monitoring, and energy management [110]. However, it is highly susceptible to data poisoning attacks [111,112], which in some settings gain greater degrees of reliability so that they are hard to identify [113,114]. In a very sensitive field of study, an experiment on around 17,000 records of healthy and unhealthy (disease-infected) people showed that a poisoning attack on the training data was able to drop the classifier accuracy by about 28% of its original accuracy by poisoning 30% of the data [100]. This could have severe consequences, for example, on dosage or treatment management.

2.2.2. Evasion attacks

Compared to data poisoning, evasion attacks can take place after model training, as shown in Fig. 6. The attackers may have no idea about the data manipulation required to attack the model. A practical evaluation in [115] shows that commonly used classification algorithms, such as SVMs and NNs, can be easily evaded even with limited knowledge about the system and an artificial dataset. To highlight the risk of using deep learning in the context of malware detection, [116] proposed a novel attack that changes a few bytes in the file header, without injecting any other data, and forces MalConv, a convolutional neural network for malware detection, to misclassify benign and fabricated inputs. In this attack strategy, attackers keep querying the AI models in a trial-and-error fashion so they can learn how to design their inputs to pass the model. This creates an overhead on the systems, and a solution that identifies suspicious queries may preserve the availability of the systems and reduce power consumption, especially if the target models run on devices with a limited energy supply.

2.2.3. Trojan attacks

Trojan attacks on AI algorithms are also very common in cloud and edge deployments of AI [117,118]. In a trojan attack, the attackers modify the weights of a model in a way that its structure remains the same. A trojan-attacked AI model works fine on normal samples; however, it predicts the trojan target label for an input sample when a trojan trigger (an infected sample crafted to activate the attack) is present. Fig. 7 illustrates how a trojan attack behaves when triggered on a face recognition system. In this case, the victim classifier always predicts the trojan target label when test samples with the trojan trigger are used.
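As a concrete illustration of the poisoning and trojan strategies above, the following minimal NumPy sketch stamps a small trigger patch on a fraction of training images and relabels them to an attacker-chosen class; the trigger size, poisoning rate, and array layout are illustrative assumptions, not details from any of the surveyed works.

import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    # Backdoor-style poisoning: a model re-trained on the returned data
    # behaves normally on clean inputs but predicts `target_label`
    # whenever the trigger patch appears (the trojan behavior of Fig. 7).
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 white trigger in the bottom-right corner
    labels[idx] = target_label    # flipped labels teach the model the trigger
    return images, labels

The sketch assumes images is an (N, H, W) array scaled to [0, 1]; with a 10% poisoning rate, the clean-data accuracy of the re-trained model often barely changes, which is what makes such attacks hard to identify.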

Table 1. Summary of some key works on adversarial AI in terms of smart city application, type of attack (white/black-box), AI model, dataset, and key features of the method.

• [92] | Transportation | White-box and black-box | CNNs | GTSRB [74] and GTSDB [93]: Proposes adversarial attacks on the traffic sign recognition systems/models of autonomous cars. It mainly proposes two types of attacks, namely (i) Out-of-Distribution attacks and (ii) Lenticular Printing attacks. The former modifies innocuous signs in a way that the model predicts them as potentially dangerous traffic signs, while the latter relies on an optical phenomenon to deceive the traffic sign recognition system.

• [94] | Transportation | White-box | DNNs | KITTI [95]: Introduces adversarial attacks on LiDAR-based perception in autonomous vehicles, where LiDAR spoofing attacks are used to generate fake obstacles in front of the target autonomous vehicle to disturb its decision-making abilities.

• [96] | Transportation | Black-box | DNNs | GTSRB [74]: Targets Deep Neural Network (DNN)-based traffic sign recognition models with black-box attacks by employing an efficient sampling strategy for the Adaptive Square Attack (ASA), capable of generating perturbations for traffic sign images with fewer query times.

• [79] | Healthcare | White-box and black-box | CNNs | Chest X-ray [97]: Demonstrates how adversarial attacks can be launched against deep learning-based systems for healthcare, and analyzes how healthcare is susceptible to adversarial attacks both in terms of monetary incentives and technical vulnerabilities.

• [98] | Healthcare | White-box | MLP | Los Alamos National Laboratory (LANL) dataset [99]: Explores and demonstrates the vulnerabilities of data-driven approaches to structural health monitoring by generating/mapping the records of the Los Alamos National Laboratory into adversaries using a white-box attack. The work also proposes an adversarial threat model specific to structural health monitoring.

• [100] | Healthcare | White-box and black-box | DT, RF, and ANNs | Self-collected: Proposes a new type of adversarial attack for targeting AI models in healthcare, where the attacker/adversary has partial information about the data distribution and AI model. The attacks are intended to change medical device readings to alter patient status/diagnosis results.

• [101] | Industry | White-box and black-box | DNNs | Self-collected: Proposes a decentralized framework, namely DeSVig, to identify and guard against adversarial attacks on an industrial AI system. The biggest advantage of the framework is its ability to reduce the failure of identifying and being deceived by adversaries.

• [102] | Smart grids | White-box | GANs | Self-collected: Explores the vulnerabilities in smart grids. To this aim, a data-driven, learning-based algorithm is proposed to detect unobservable false data injection attacks in distribution systems. Moreover, the method needs fewer training samples and makes use of unlabeled data in a semi-supervised way.

• [88] | Smart grids | Black-box | RNN and LSTM | Self-collected: Analyzes and explores the vulnerabilities of smart grids against adversarial attacks. The authors mainly focus on the key functions of smart grids, such as load forecasting algorithms, and analyze the potential impact of adversaries on load shedding and increased dispatch costs using data injection attacks.

• [103] | Smart grids | Black-box | RNN and LSTM | Self-collected: Analyzes the vulnerabilities and resilience of AI models in power distribution networks against adversarial attacks on smart meters via a domain-specific deep learning architecture. Smart meters are attacked under the assumption that the attacker has full knowledge of both the model and the detector.

• [104] | Person re-identification (surveillance) | Black-box | CNNs | Market1501 [105], CUHK03 [106], and DukeMTMC [107]: Explores and analyzes how person re-identification frameworks in CCTV cameras can suffer from adversarial attacks. To this aim, the authors launch black-box attacks using a novel multi-stage network architecture that stacks the features extracted at different levels for the adversarial perturbations.

• [91] | Food safety | White-box | CNNs | UCR [108]: Analyzes the vulnerabilities of deep learning algorithms on time-series data by adding noise to the input samples to decrease a deep learning model's confidence in food safety and quality assurance applications.

2.2.4. Model stealing (model extraction)

This strategy is also called model extraction, as illustrated in Fig. 8. As its name implies, the ultimate objective of the adversary is to clone or reconstruct the target model, to re-engineer a black-box model, or to compromise the nature and the properties of the training data [119]. This strategy of attacking AI models dates back to 2005, when the authors of [120] were able to develop an effective algorithm for reverse engineering a spam filter model. Compared to the above two attack strategies on AI applications, model extraction needs neither knowledge about the training data nor the model properties and architecture. All that the adversaries have is access to the model, and they get its answers to the submitted queries [75,121]. Machine Learning as a Service (MLaaS) could be the main target of this attack, since a few dollars may suffice to create a free copy of a paid model over the cloud [122]. Creating a private copy of the victim model is not only a copyright issue but also exposes the victim model to other attacks of different strategies, since the attackers gain new information for crafting adversarial examples [75,123].
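The query-and-clone loop of Fig. 8 can be sketched in a few lines. In the snippet below, victim_predict stands in for a hypothetical black-box prediction API (e.g., an MLaaS endpoint); the surrogate architecture, query budget, and input distribution are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def steal_model(victim_predict, n_queries=10000, n_features=20, seed=0):
    # Model extraction: query the victim with synthetic inputs, collect
    # its labels, and train a local surrogate on the (input, label) pairs.
    rng = np.random.default_rng(seed)
    queries = rng.uniform(-1.0, 1.0, size=(n_queries, n_features))
    stolen_labels = victim_predict(queries)   # black-box access only
    surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
    surrogate.fit(queries, stolen_labels)     # local copy of the victim
    return surrogate

Beyond the copyright issue, the surrogate gives the attacker a white-box stand-in against which adversarial examples can be crafted and then transferred back to the victim [75,123].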

Fig. 5. The demonstration of the poisoning attack on machine learning, where the adversary can deliberately inject bad data (poisoned data) into the training pool, which is then used for training models. Models are built on something they should not learn, allowing subsequent mispredictions.

Fig. 6. The demonstration of the evasion attack on machine learning, where the adversary queries the model with carefully crafted examples that seem normal to a human, to have them misclassified. The attackers try to perturb the data in an iterative mode, where a little noise is added in each iteration until the input changes its original label according to the model.

Fig. 7. An illustration of a trojan attack on a face classification AI model. The trojan trigger added into a sample activates the trojan attack and predicts/generates the trojan target label [73].

Fig. 8. The demonstration of the model extraction attack, or stealing, in machine learning. In such an attack, the adversary queries the classifier with different inputs and collects the labels. The combination of the returned labels and the input data is used to build a training dataset to train another model.

2.2.5. Membership inference attacks

In such attacks, the attackers do not necessarily need knowledge about the parameters of an AI model; rather, knowledge of the type and architecture of the model and/or the service used for developing the model is used to launch an attack. Such attacks are very common due to the growing interest in using AI as a service, which allows the attackers to develop and launch membership inference attacks using the same services. For instance, Shokri et al. [124] proposed a membership attack technique capable of launching attacks on AI models developed using Amazon and Google services.

The severity and risks associated with membership inference attacks largely depend on the application and the type of data used for training an AI model. In certain applications involving complex image and speech classification tasks, the effort involved in generating training data reduces the severity of the attacks. On the other hand, in some human-centric applications, such as education, finance, and healthcare applications with tabular data, which can be easily generated, membership inference attacks may have severe implications.

Table 2 provides a summary of some of the works on security attacks on AI in cloud and edge deployments for smart city applications.
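A minimal sketch of the intuition behind such attacks is the confidence-thresholding test below, a simplified stand-in for shadow-model attacks such as [124]: models tend to be more confident on records they were trained on. The victim_predict_proba function and the threshold are illustrative assumptions.

import numpy as np

def infer_membership(victim_predict_proba, samples, threshold=0.9):
    # Flag a record as a likely training-set member when the victim's
    # top predicted probability on it exceeds the threshold; overfitted
    # models are typically more confident on data they have already seen.
    probs = victim_predict_proba(samples)   # black-box probability API
    return probs.max(axis=1) > threshold    # True = likely member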

2.3. AI safety in smart city

The concept of safety in AI is not much different from its definition in other engineering sectors. It mainly covers the minimization of risk and uncertainty of damage [140]. An important issue is that the safety evaluation of AI-based systems may need further effort beyond the testing dataset, since the real environment could have a larger probability of uncertainty and risk. The models trained and tested on large datasets could be more robust in production environments [141].

Table 2
Summary of some key works on security attacks on AI in cloud and edge deployment in terms of application, type of attack, dataset and key features of the
method.
Ref. Application Type of attack AI model Dataset Description of the method
[125] Fashion and Trojan DNNs Fashion-MNIST It targets DNNs using stealth infection on the models in cloud-based
supply chain [126] neural computing frameworks. It is to be noted that the attack aims to harm
the end users without any impact on the service provider.
[127] Communication Trojan CNNs RML2016.10A It introduces and launches a Trojan attack against an AI model for wireless
[128] signal classification. Moreover, the authors evaluate several types of
mechanisms for the detection of such attacks on wireless signal
classification.
[118] Autonomous Trojan DNNs GTSRB [74] It proposes and develops a framework namely STRong Intentional
cars Perturbation (STRIP) to guard against run-time Trojan attacks on image
classification frameworks where certain input samples are intentionally
perturbed by superimposing various image patterns to analyze the
behavior of the model for the detection of malicious samples.
[129] Communication Data poisoning DNNs Spectrum sensing It proposes a data poisoning attack on an AI model for the cognitive
(exploratory [130] transmitter by changing the channel occupancy information, for
attack) instance from busy to idle or vice-versa, to disturb the decision
capabilities of the model on certain samples.
[131] Sentiment Data poisoning DNNs IMDB [132] It guards against data poisoning attacks on AI models by constructing
analysis approximate upper bounds on the loss under two assumptions: (i) the
''dataset is large enough for a statistical concentration between train
and test error to hold'', and (ii) the outliers in the cleaned dataset do
not have any impact on the model. The technique is meant for
defenders aiming at outlier detection.
[133] IRS tax pattern Model DT IRS tax pattern, It provides a mechanism for guarding against extraction attacks where
and Email extraction GSS survey, Email a framework is firstly attacked with extraction attacks by measuring
importance importance, and the learning rate of the model. A cloud-based extraction monitoring
Steak survey [132] mechanism is then developed to quantify the extraction status of
models by analyzing the query and the corresponding response streams.
[134] Facial Model CNNs Multiple dataset It aims to analyze whether a black-box CNN model can be stolen or
expression extraction including AR Face not. To this aim, a CNN model is queried with unlabeled samples to
recognition [135], BU3DFE extract the model's information by analyzing its response to the
[136], and JAFFE unlabeled samples, which are then used to create a fake dataset.
[137]
[138] Digits Evasion attacks CNNs MNIST [139] It analyzes the robustness and reliability of one of the commonly used
classification types of evasion attacks defense methods namely watermarking
schemes for CNNs where the authors claim that attackers can evade the
verification of original ownership under such schemes.

This simply means that the availability of useful and representative datasets is not only a concern for getting benefit from AI algorithms, but also for building safer and more robust solutions. In subsequent sections, we discuss the challenge of dataset availability.

Unsafe machine learning-based solutions could impact lives directly, such as those autonomous vehicle systems that have killed people, or indirectly, for example by raising racism issues. The serious issue with the Tesla autopilot is that a human driver was killed because of a mistake made after millions of miles of testing the autopilot system. The Google Photos app also returned racist results after training on thousands of images. This simply means that, even with the availability of massive datasets, AI-based systems still need serious and solid research work to mitigate the effect of mistakes and to develop counter-strategies against illegal usage, for example adversarial attacks.

3. Smart city AI interpretability

In a typical AI framework, a set of features is fed to an AI algorithm, which learns from the data by identifying hidden patterns, and in return produces predictions. In such frameworks, which are also termed black-boxes, the predictions come without any justification/explanation, and the users have no idea of the reasons behind the outcome. In an explainable AI framework, on the other hand, besides predictions/decisions, an AI model also details the causes of the prediction/decision. To this aim, an additional function/interface is used to interpret the causes behind an underlying decision [56]. In the literature, interpretability and explainability are generally used interchangeably. However, the terminologies are related but slightly different: interpretability shows the extent to which a cause and effect can be observed within a system, while explainability represents the extent to which the mechanism of an AI algorithm can be explained/described to a human.

There are several factors motivating the need for explanation/justification of the potential causes of an AI model's decisions in general, and in smart city applications in particular, where justification and explanation of an AI model's outcome are very critical for developing the users' trust in AI models used to take critical decisions about their lives, such as whether we get a job or not (AI-based recruitment), or whether an individual is guilty/involved in a crime or not (i.e., predictive policing) [24]. According to Guidotti et al. [142], the justification of the causes of an AI model's predictions could be obtained in two ways: either by developing techniques/methods to describe the potential reasons behind the model's decision, which they term ''Black-box Explanation'', or by directly designing and developing transparent AI algorithms.

Some key advantages of explainable AI are:

• Explainability of AI models helps in building users' trust in the technology, which will ultimately speed up its adoption by industry.
• Explainability is a must-have characteristic for AI models in some sensitive smart city applications, such as healthcare and banking.

• Explainable AI models are more impactful compared to traditional AI models in decision making.
• Explainability helps in detecting algorithms' biases.

Explainable AI methods could be categorized, at different levels, using different criteria [56,143,144]. Fig. 9 provides a taxonomy of explainable AI. There are two main categories of explainable AI, namely (i) transparent models, and (ii) post-hoc explainability. The former represents the methods restricting the complexity of AI models for explainability, while the latter category represents the methods used for analyzing a model's behavior after training. It is to be noted that there is a trade-off between performance (e.g., accuracy) and explanation. In the literature, lower accuracy has been observed for transparent models, such as fuzzy rule-based predictors, compared to the so-called black-box methods, such as CNNs [145]. However, explanation and interpretability are preferred properties in critical applications, such as healthcare, smart grids, and predictive policing. Thus, there is a particular focus on developing post-hoc explainable methods to keep a better balance between accuracy and transparency.

Fig. 9. A taxonomy of achieving transparent and explainable AI decisions by opening the so-called black-box AI models [56].

In the next subsections, we provide an analysis of how important explainable AI models are in smart city applications, how explainable AI meets adversarial attacks, and the ethical aspects of explainable AI.

3.1. Explainable AI for smart city applications

As described earlier, explainability brings several advantages to AI [55,146,147]. In smart city applications, its impact is more evident and crucial, especially given the direct impact of the technology on society and its people. Explainable AI is particularly important in some key applications of smart cities, such as healthcare, transportation, banking, and other financial services, where key decisions about humans—such as who should get a particular service? which medicine should be used? who should get a job?—are made [24]. Such decisions in smart city applications require interpretation of data (i.e., features) to mitigate the impurities, if any, for better predictions/decisions [148]. Healthcare is one of the critical smart city applications demanding explainable AI models instead of traditional black-box AI. No doubt AI has proven very effective in healthcare, facilitating health professionals in diagnosis and treatment; however, traditional black-box AI just makes decisions without interpretation. Several factors are motivating the need for explainable AI in healthcare, such as the far-reaching consequences and the cost associated with a mistake in prediction [149]. Moreover, understanding the causes of AI predictions/decisions is very critical for building doctors' trust in AI-based diagnosis. Doctors would feel more confident in taking decisions given an AI-based diagnosis if the decision of the AI model is understandable/interpretable by humans. Explainable AI models would also benefit from the domain experts' knowledge to be refined. Moreover, in healthcare, predictive performance alone is not enough to obtain clinical insights for decisions [150]. In [142,151], seven pillars of explainable AI in healthcare have been provided, showing its relationship with transparency, domain sense, consistency, parsimony, generalizability, trust/performance, and fidelity.

Transportation and autonomous cars are another critical smart city application where the consequences and the cost associated with a mistake by an AI model are very high. For instance, an error in differentiating between red and green traffic lights or an error in pedestrian detection may lead to heavy losses in terms of human lives and damage to public property and vehicles. This has already happened when a self-driving Uber killed a woman in Arizona, where the object (i.e., the lady) was detected but treated in the same way it would a plastic bag or tumbleweed carried on the wind, due to a prediction/classification error [55,169].
Table 3
Summary of some key works on explainable AI in terms of smart city application, type (intrinsic and post hoc), dataset, and key features of the method.
Ref. Application Type Dataset Description of the method
[152] Healthcare Post hoc Self-collected It is a game-theoretic post hoc approach explaining the predictions of
an AI model. To this aim, it assigns each feature an importance value
based on its contribution to a decision, by conditioning predictions on the
underlying feature.
[17] Healthcare Post hoc Self-collected The method provides a real-time prediction of the risk of hypoxemia
during a surgery. The model is trained on time-series data from a large
collection of medical records. The existing explanation methods namely
Model-agnostic prediction explanation [153,154] are employed for the
explanation of the model’s prediction.
[155] Healthcare Post hoc Multiple online It provides a distributed deep learning based framework for COVID-19
sources [155] diagnosis in a distributed environment ensuring low-latency and
high-bandwidth using edge computing. Feature visualization methods
are used for the explanation of the model’s outcome.
[156] Environment (air Intrinsic SIRTA [157] It relies on a tree-based method Gradient Boosted Regression Trees
quality) (GBRT) to predict daily total and speciated PM1 concentrations.
Moreover, to further improve the performance of the model, decision
trees are combined to form an ensemble prediction
[158] Transportation Post hoc Madrid open data It relies on the existing xAI tools to extract insights from black-box
portal [159] traffic forecasting models namely Random Forests and Recurrent Neural
Networks.
[160] Transportation Post hoc Self-collected It focuses on three major steps of decision support, namely (i) synthesis
of diverse traffic data, (ii) multilayered traffic demand estimation, and
(iii) marginal effect analyses for transport policies. For implementation,
the authors rely on the big data-driven transportation computational
graph (BTCG) framework [161]. The framework integrates data from
several external sources including surveys, mobile phone data, floating
car data, etc.
[162] Transportation Post hoc Self-collected Proposes a reinforcement learning based solution for traffic volumes
and road lanes occupancy prediction. For explanation of the outcome,
the method relies on the SHAP model-agnostic technique.
[163] Agriculture Post hoc Self-collected Proposes a deep learning framework namely xPLNet to identify and
dataset classify different types of biotic (bacterial and fungal diseases) and
abiotic (chemical injury and nutrient deficiency) stresses in plant
images. For better explanation, the authors rely on high-resolution
feature maps isolating the visual symptoms in plants.
[164] Agriculture Post hoc Self-collected Relies on a 3-D DCNN for the identification of different diseases in plants in
hyperspectral imagery. For explanation purposes, the method relies on
saliency maps, visualizing the most sensitive pixels for a decision.
Moreover, the method also identifies the most sensitive wavelengths
used by the model for classifying/differentiating different plant
diseases.
[165] Agriculture Post hoc It relies on a deep neural network applied to multivariate time-series of
vegetation and meteorological data for crop yield estimation. For the
explanation of the predictions, the method makes use of feature
visualization techniques to analyze the relevance of the features to the
predictions made by the model.
[166] Fake news Post hoc Buzzface [167] The method relies on Extreme Gradient Boosting (XGB) machines [168]
detection for fake news detection. Explanation of the outcomes is provided using
feature relevance; the authors observed that some features are favorable
for detecting certain types of fake news.

We believe transportation in general, and autonomous cars in particular, will benefit from explainable AI. Some interesting works, such as those proposed in [170–173], have already been reported in the domain.

AI models also need to be interpretable and explainable to fully explore their potential in the education sector. Despite outstanding capabilities, it is still risky to blindly follow AI models' predictions in making critical decisions in such a high-stakes domain. How will people allow a machine (i.e., AI tools) to determine their child's education? In order to trust AI in education, AI models need to make sure stakeholders (i.e., parents, teachers, and administration) understand the decision-making processes [174,175]. There are already some efforts in this direction [176,177].

The literature also reports some efforts for explainable AI in defense. The concept of explainable AI was first introduced in a defense project by the Defense Advanced Research Projects Agency (DARPA) [178]. Explainable AI is also a need for modern entertainment and businesses [179]. In order to trust AI predictions, the prediction and decision-making process of the models should be understandable for all the stakeholders in the business, such as investors, customers, and CEOs. Table 3 summarizes some key explainable AI publications in different smart city applications.

3.2. Explainable AI and adversarial attacks

The literature also shows a connection between adversarial attacks and explainability [25,26,180]. It is believed that explainable AI models are robust against adversarial attacks and can help in the identification of adversarial inputs/samples by generating an anomalous explanation for the perturbed samples [26]. To verify this hypothesis, several efforts have been made in the literature to guard AI models against adversarial attacks via the emerging
concept of explainability. For instance, Fidel et al. [26] employed an explainable AI framework, namely Shapley Additive Explanations (SHAP) [152], which evaluates the relevance/importance of a feature by assigning it an importance value for a particular prediction, to generate 'XAI Signatures' from the internal layers of a Deep Neural Network (DNN) classifier to differentiate between normal and adversarial inputs. Dhaliwal et al. [181] proposed a gradient similarity-based approach for differentiating between normal and adversarial inputs. According to them, gradient similarity shows the influence of training data on test samples and behaves differently for genuine and adversarial input samples, enabling the detection of various adversarial attacks with high accuracy. Some other interesting works also rely on explainable AI techniques to guard against adversarial attacks [25,180,182,183]. However, the explanations/information regarding the working mechanism of AI algorithms revealed by explainability methods could also be utilized to generate more effective adversarial attacks on the algorithms [56].
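As a rough illustration of the attribution signal such defenses build on, the following sketch computes SHAP values with the shap library for a tree-based model; the model and data are illustrative, not those used in [26].

```python
# A minimal sketch of producing SHAP-style per-feature importance values;
# the model and data are illustrative stand-ins.
import shap  # pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)       # game-theoretic explainer
shap_values = explainer.shap_values(X[:5])  # importance value per feature
# Unusual attribution patterns relative to clean inputs can then serve as a
# signal that a sample may be adversarial, in the spirit of [26].
```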
Adversarial AI also provides an opportunity to increase the interpretability (i.e., a human's understanding) of AI models [184,185]. For instance, in [184] an adversarial AI approach is used to identify the relevance of features concerning the predictions made by an AI. The adversarial AI technique aims to find the magnitude of changes required in the features of the input samples to correctly classify a given set of misclassified samples, which is then used as an explanation of the misclassification. In [185], on the other hand, adversarial AI techniques are used for explaining the predictions of a DNN by identifying the relevance/importance of features for the predictions based on the behavior of an adversarial attack on the DNN.

4. Smart city and data-related challenges

Several challenges are associated with the collection, storage, and sharing of data, as well as with ensuring and maintaining its quality. For instance, the smart city's infrastructure requires physical resources for storing and processing the data. In addition, these resources also consume a significant amount of electricity and space, and cause environmental issues due to their carbon emissions. Smart city applications may also make use of cloud and edge deployment to overcome the lack of physical infrastructure for data storage and computing [61,186–188]. Thanks to the recent advancement and popularity of cloud storage, the technology meets the data storage and processing requirements of a diversified set of smart city applications. However, cloud and edge deployments are also vulnerable to several adversarial, security, privacy, and ethical challenges [61]. For instance, using third-party services may result in no control over the data; thus, the data's privacy settings are beyond the control of the enterprise/authorities. Such deployments may also lead to potential data leakage risk by the service provider [189,190]. Moreover, cloud and edge deployments could also be subject to several types of attacks, such as adversarial attacks, backdoor attacks, cyber kill chain-based attacks, data manipulation attacks, and Trojan attacks [61].

There are also several challenges associated with the heterogeneous nature of the data, collected through several IoT devices from different vendors, in smart city research [191]. Some of the key challenges are:

• Quality of the data: The quality of the data in smart city applications largely depends on the accuracy of the IoT devices/sensors used for collecting the data. Therefore, it should be ensured that the data infrastructure is accurate and error-free [191]. In addition, some external factors, such as temperature, weather, etc., may also affect accurate data collection.
• Diversity/characteristics of the data: Generally, in typical smart city applications data is collected through several devices, making it hard to understand the characteristics of the data for removing outliers [192]. Moreover, the data is collected continuously, which may result in scalability issues in the infrastructure.
• Constrained environment: In smart city applications, generally, the devices, including data collection sensors and data transfer networks, have limited resources (i.e., storage, bandwidth, and processing power) [191,193]. In order to collect and transfer a large amount of data, such systems require a reliable data collection and transmission infrastructure.

In the next subsection, we will focus on some major challenges and concerns in collecting, developing, and sharing smart city data/datasets.

4.1. Challenges in collection and sharing data

The performance of AI algorithms is also constrained by the quality of the data. Thus, it is important to discuss the major challenges and issues related to dataset collection and sharing. These challenges and concerns arise as a result of data collection, analysis, sharing, and the use of the data in sensitive applications [194]. The main challenges and concerns in dataset collection and sharing include informed consent, in the form of understanding how and for what purpose the data will be used, as well as transparency, interpretation, and trust [195,196]. Though informed consent is one of the key concerns of data collection, considering the fact that future applications are sometimes unspecified and unknown, it may be inconvenient to give prior commitments regarding potential future use of the data. Moreover, data could be merged with other existing sources, making informed consent even more challenging [197]. In several cases, it is not even possible to ensure informed consent of all people subject to data collection. For instance, these days delivery by drone is very common, where those who opt for free delivery consent to unlimited data collection from their home. In areas where drone delivery is permitted, a whole neighborhood could be subject to such data collection activities [48].

For data collection or annotation, usually crowdsourcing studies are conducted, where a large population is typically involved to collect or annotate training data for AI models in a particular application. During the process, several factors need to be considered. For instance, it is really important to inform the participants about your organization and the purpose for which the data is collected or annotated. The information of the participants should be kept confidential, and they should be allowed to withdraw from the data collection process at any time. More importantly, one should remain neutral and unbiased in conducting a crowdsourcing study, as personal preconceptions or opinions may affect the quality of the data. In the modern world, data is also collected as a result of a product/service. For instance, social media platforms can be used to collect users' data for different services. In such cases, several questions arise [48]. For instance, are the users aware of the data collection process and purpose? Do they have a right and access to the data? Is the company sharing or selling the users' data? Is there any policy for maintaining informed consent if the company is sold to another one? How can the companies ensure the privacy of the users if their data is leaked to some bad actors?

Data sharing is also subject to several questions, such as the transparency of the data, its interpretation, and how trustworthy the data is in a particular application. According to [198], ''data sharing is not simply the sharing of data, it is also the sharing of interpretation''.
Moreover, the re-identification of individuals or groups, or linking data back to them through data mining and analysis, are also key ethical concerns in data sharing. For instance, the possibility of identification or linking data to an individual or a particular group may result in gender, race, and religious discrimination [194,199]. Recently, a growing concern has been noticed for the transparency and interpretation of the data used for training AI models [200,201]. For instance, Bauchner et al. [200] emphasize the importance of data sharing in healthcare, and the ethical concerns regarding data collection and sharing in the domain. Bertino et al. [202] also analyze the importance of transparency and interpretation of data in sensitive applications, which they term as providing a 360◦ view. The authors link the transparency and interpretation of data with the privacy, trust, compliance, and ethics of data management systems.

Fig. 10 shows some of the major challenges in data management (collection and sharing) highlighted in the literature, which are summarized as follows:

Fig. 10. Major challenges in data management in smart city applications.

• Privacy: The biggest challenge in human-centric smart city applications is ensuring the privacy of the citizens, which is their fundamental right. An improved data privacy mechanism not only helps in developing citizens' trust in different smart city services and businesses but also ensures individuals' safety, as the leakage of sensitive information may endanger individuals' lives. For instance, though some off-the-shelf encryption, authentication, and anonymity techniques could reduce the chances, intelligent malicious attackers may misuse residents' sensitive information collected from smart home applications and surveillance systems to harm individuals using side-channel and cold boot attacks [203–205]. Thus, for the effectiveness of smart city applications, the concerned authorities should ensure that individuals' information is not misused by the authorities or any individual for any sort of personal or financial gain [192]. In recent years, there has been growing concern over citizens' privacy, and several international bodies, such as the European Union (EU), have introduced new privacy regulations. One recent example of the community's concerns over privacy and racial bias is the demand for abandoning the use of face recognition technology from giant companies, such as Amazon, Microsoft, and IBM, for law enforcement [20]. To address the privacy-related concerns, various privacy-friendly techniques and algorithms have been developed using methods where AI systems' ''sight'' is ''darkened'' via cryptography [206]. On the other hand, some believe that the traditional ''narrow'' understanding of privacy as a moral concept will eventually cease to exist and there is a need to revise the concept itself in the post-AI age [207]. Although it may entail some challenges, the newly introduced concept of ''data philanthropy'' can also be of help in this regard [208]. The basic idea that we propose here is to extend the scope of this concept to include certain cases of individuals who would voluntarily ''donate'' their data for the advancement of science or better-functioning smart cities. Within the discourse of various religious and moral traditions, there is the concept of ''charity'', where people would voluntarily donate something they own and cherish for the benefit of others. Considering the great value that data can have in our modern world, one can argue that data would also fall within the category of valuable objects that can be donated for charitable purposes, under conditions that would vary from case to case. This will be especially applicable within communities where familial or societal interests usually occupy a higher position than individual interests. Moreover, there are also different solutions, such as differential privacy, to ensure individuals' privacy by withholding an individual's information, or information that could lead to the identification of an individual, in a dataset [209] (a minimal sketch of this mechanism is given after this list).

• Informed consent: Informed consent, which is the process of informing and obtaining a participant's consent for data collection, is a key element of data ethics. In a data collection process, it is important to make sure the users subject to data collection know about the data collection process, its goal, and the way and purpose of its use in the future [48]. Informed consent should fulfill four conditions: (i) the participants have information/knowledge about the data collection process, (ii) they understand the information and are fully aware of the goal, future use, and the way data is collected, (iii) the participants should be volunteers and should not be manipulated or persuaded in any way, and (iv) the participants should have the capabilities to understand the risks involved with the data, and be able to decide whether to participate or not [210].

• Open data: For transparency and developing trust, the data and the insights obtained from the data should be openly accessible. However, there are also several challenges associated with open data. For instance, it is important to determine which information should be made open, who should have access to the data, and for what purpose the data should be allowed to be made open/used, to ensure individuals' privacy [211].

• Data ownership: Data ownership is another key aspect of smart cities that has raised serious concerns recently [212]. In smart cities, a lot of services are generally deployed by private companies whose ultimate goal and priorities, unlike public authorities, are to make a profit, posing serious threats of citizens' data being monetized [213]. Under these circumstances, key questions will include: who will have access to, and control over, these data? Will the upper hand be given to private companies, where the market logic will dominate, or will the voice of the normal citizen count, and thus more weight be given to public control? The answer could have been straightforward if the services were initiated and sponsored by public authorities; however, the investment from the private sector makes it very complicated. The various choices to be made in this regard will greatly determine the level of (im)morality in big data management [214].
According to Ben Rossi [215], unfortunately, in smart cities public authorities provide private companies with the opportunity to monetize smart city data by allowing them to deploy different services, and these companies have more information about citizens compared to the public authorities. There are also some debates on data nationalization. For instance, Ben Rossi [215] provides hints on how public authorities can get hold of the data again. One of the potential solutions is to encourage joint ventures of the public and private sectors, where public authorities could have control over the data. Some efforts have already been noticed in this direction. For instance, the Chinese Government has initiated several joint smart city projects with big private companies. There are also debates, and some efforts have been made in terms of legislation, to give citizens/users the ownership of the data [216]. Moreover, there are also some solutions allowing users to retain ownership of their data while attaining different services. For instance, Bozzelli et al. [217] proposed a user data protection service. The service allows users to analyze and evaluate their data protection requirements by considering the terms and conditions of a service, which are normally overlooked by users, before accepting the terms and conditions of a service that might compromise their personal data.

• Interpretation: Interpretation is another key challenge of data shared and used for training AI models. For better results, the data used for training an AI model should be interpretable, as also demanded by explainable AI. For instance, the big data predictive policing solution namely PredPol, used by police in the USA, collects and analyzes the usefulness of the data before training and making predictions about crimes in an underlying area. A very significant reduction has been observed in crimes, mainly because of the useful and interpretable data [218]. However, in smart city applications data is collected through different IoT sensors from various vendors. Managing, interpreting, and picking relevant and useful data from such a heterogeneous and unstructured collection is a very challenging task.

• Data biases: Datasets generally contain different types of hidden biases, introduced either by the collector or the respondent in the collection phase, which are generally hard to undo and have a direct impact on the analysis [10]. These biases are very risky in human-centric applications and need to be eliminated at the beginning. In dataset collection using surveys/questionnaires, generally two types of biases can be incorporated, namely (i) response bias, and (ii) non-response bias. The former represents the intentional bias from respondents giving wrong answers, while the latter type of bias is encountered when no response at all is received from the respondent. One of the possible solutions for avoiding bias in such processes is to use closed-ended surveys or restrict the respondents to some pre-defined options [219]. However, in smart city applications data is collected from different services using different IoT sensors, and the problem of data bias goes beyond typical data collection issues. Therefore, to avoid bias in such applications, a more proactive response from the citizens and authorities is needed to help in eliminating unintended bias in smart city solutions. For instance, the authorities need to invest more in research before deploying the technology in an application. Moreover, better communication and messaging strategies need to be adopted to inform and educate citizens about the goal, process, importance, and risks involved with the data collected around their city [220].

• Data auditing: Data auditing involves the assessment of data to analyze whether the available data is suitable for a specific application or not, and the risks associated with poor data. In smart cities, data is generally collected through several IoT sensors from various vendors, which results in an unstructured collection of data. There are several challenges associated with the unstructured collection of data, as detailed earlier. Data auditing is essential, under such circumstances, to analyze and assess the quality of collected data, as the performance of AI algorithms in smart city applications is also constrained by the quality of the data [221]. In the literature, several interesting data auditing techniques have been proposed [221–224]. For instance, Yu et al. [221] propose a decentralized big data auditing scheme for smart city applications by employing blockchain technology. One of the key advantages of the method is the elimination of third-party auditors, which are prone to several security threats.
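As a concrete illustration of the differential privacy solution mentioned in the Privacy item above, the following sketch implements the classic Laplace mechanism for a counting query; the privacy budget and the query itself are illustrative assumptions.

```python
# A minimal sketch of the Laplace mechanism behind differential privacy;
# epsilon and the query are illustrative.
import numpy as np

def private_count(data, predicate, epsilon=0.5, rng=np.random.default_rng(0)):
    """Release a count query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for record in data if predicate(record))
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 38]
print(private_count(ages, lambda a: a > 30))  # noisy count protects individuals
```

Smaller values of epsilon add more noise and thus give stronger privacy at the cost of accuracy.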
Table 4 lists some key papers on the data-associated challenges in different smart city applications.

Table 4
Summary of some key works on challenges, risks, and issues associated with data collection and sharing in smart city applications in terms of application and issues covered.
Ref. Application Challenges/Issues discussed
[225] Healthcare Security and privacy
[226] Healthcare Data interpretation and fusion
[227] Healthcare Security and privacy
[228] Healthcare Informed consent
[229] Healthcare Informed consent and confidentiality
[230] Surveillance Privacy
[231] Surveillance Security and privacy
[232] Surveillance Privacy
[233] Recruitment Privacy and informed consent
[234] Generic Security, privacy, bias, and informed consent
[235] Generic Informed consent
[236] Recruitment Bias
[237] Generic Bias
[238] Generic Bias
[239] Generic Open data, interpretation, and annotation

4.2. Explainability and datasets

In the literature, the majority of the efforts made for explainable AI focus on the design of the algorithm to interpret AI predictions/decisions. However, other aspects also contribute to the interpretation of AI decisions, such as the datasets and post-modeling analysis [240]. For instance, a dataset used for training an AI model may contain features incomprehensible to the stakeholders, which may result in a lack of trust in the AI predictions. Therefore, to achieve better interpretation/explanation of AI models' decisions, explainability should be considered throughout the process, starting from data/features and concluding at post-modeling explainability [240,241]. In this section of the paper, we focus on the explainability aspects of the dataset used for training and validation of AI models. The literature on the explainability of the dataset can be divided into four main categories, namely (i) exploratory data analysis of the dataset, (ii) description and standardization of the dataset, (iii) explainable features, and (iv) dataset summarization methods. In the next subsections, we provide the details of these methods.

4.2.1. Exploratory analysis of the datasets
Exploratory analysis of datasets aims to provide a summary of key characteristics, such as dimensionality, mean/average, standard deviation, and missing features, of the dataset used for training an AI model. Different data visualization tools are available to visualize different properties of a dataset and extract informative insights that could help in understanding its impact on the decisions of the AI model.
For instance, Google's Facets [242], which is an open-source data visualization library/tool, allows us to visualize and better understand data. The exploratory analysis helps in understanding the limitations of a dataset. For instance, in the case of an imbalanced dataset, such analysis could provide an early clue about the poor performance of a classifier, which can then be mitigated using different sampling techniques.
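A minimal sketch of such an exploratory pass with pandas is given below; the file and column names are hypothetical.

```python
# A minimal exploratory-analysis sketch with pandas; the dataset file
# "air_quality_sensors.csv" and the "label" column are hypothetical.
import pandas as pd

df = pd.read_csv("air_quality_sensors.csv")  # hypothetical smart city dataset

print(df.shape)          # dimensionality
print(df.describe())     # mean, standard deviation, quartiles per feature
print(df.isna().sum())   # missing values per feature
print(df["label"].value_counts(normalize=True))  # class balance
# A strongly imbalanced class distribution here is an early clue that a
# classifier trained on this data may underperform, which can then be
# mitigated by resampling techniques.
```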
4.2.2. Dataset description and standardization
AI datasets are usually released without proper documentation and description. In order to fully understand a dataset, a proper description should be provided. In this regard, a standardized documentation/description of the dataset could be really helpful to mitigate the communication gap between the provider and the user of a dataset. To this aim, several schemes, such as datasheets, data statements, and nutrition labels, have been proposed [243]. All the schemes aim to associate detailed and standardized documents containing a description of a dataset's creation/collection process, composition, and legal/ethical considerations. Nutrition labeling, which is a diagnostic framework for datasets, provides a comprehensive overview of a dataset's ingredients, helping the developers of AI models to be trained on the dataset [244].
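As an illustration, a datasheet in the spirit of these schemes [243] can be kept alongside the data as a machine-readable record; the fields and values below are hypothetical.

```python
# A minimal sketch of a machine-readable datasheet in the spirit of the
# standardized documentation schemes cited above [243]; all fields and
# values are hypothetical.
datasheet = {
    "name": "city-traffic-flows-v1",  # hypothetical dataset
    "collection_process": "loop detectors and roadside cameras, 2019-2020",
    "composition": {"samples": 120_000, "features": 24, "missing_rate": 0.03},
    "intended_use": "traffic volume forecasting",
    "consent": "collected under municipal open-data policy",
    "ethical_considerations": "no personally identifiable information retained",
}
```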

4.2.3. Explainable features

Another important aspect of explainable AI is explainable feature engineering, which aims for the identification of features influencing an AI model's decision. Moreover, as one of the key characteristics of a dataset, the features should also be explainable and make sense to the users and developers. Besides improvement in an AI model's performance, explainable features also help in the model's explainability. Explainable feature engineering can be performed in two different ways, namely (i) domain-specific feature engineering, and (ii) model-based feature engineering [241]. The former method utilizes a domain expert's knowledge in combination with the insights extracted from exploratory data analysis, while the latter makes use of various mathematical models to unlock the underlying structure of a dataset [245,246]. For instance, Shi et al. [245] used domain exploratory data analysis for relevant feature selection for cloud detection in satellite imagery.

4.2.4. Dataset summarization
Dataset summarization is a technique to achieve a representative subset of a dataset for case-based reasoning. Case-based reasoning is an explainable modeling approach aiming to predict an underlying sample based on its similarity with training samples, which are both presented to the users for explanations. One of the main limitations of case-based reasoning is keeping track of the complete training set for comparison purposes. Dataset summarization is one of the possible solutions: it avoids keeping track of the complete training set and rather selects a subset providing a condensed view of the training set.
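One simple way to realize such a condensed view is to keep only the training samples closest to cluster centers, as sketched below; the clustering method and subset size are illustrative choices, not a method from the cited literature.

```python
# A minimal sketch of dataset summarization: pick the training samples
# closest to k cluster centers as a condensed, representative subset.
# The choice of k and of the clustering method is illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import pairwise_distances_argmin

X, _ = make_classification(n_samples=2000, n_features=15, random_state=0)

k = 50
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
summary_idx = pairwise_distances_argmin(centers, X)  # nearest real sample per center
summary = X[summary_idx]  # condensed view usable for case-based reasoning
```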
and a number of academic journals dedicated special issues to
5. AI ethics and smart cities

AI code of ethics is another aspect of smart city applications that has recently received a lot of attention from the community. An AI code of ethics is a formal document/statement from an organization that defines the scope and role of AI in human-focused applications. The three-volume Handbook of Artificial Intelligence published in 1981–1982 [247] hardly paid any attention to ethics [248]. After the lapse of about three decades, the situation has radically changed. The exponential progress in AI systems and their applications in various aspects of life has produced great benefits, but has concurrently continued to trigger complex moral questions and challenges. In response, an interdisciplinary AI ethics discourse is emerging. This is owed to scholarly input from cognate disciplines, including data ethics, information ethics, robot ethics, internet ethics, machine ethics, and military ethics. These new developments were reflected in an increasing number of publications that assumed more than one form. To provide a systematic overview, relevant literature will be divided into two main categories, viz., (a) academic publications and (b) policies and guidelines.

The ethical and moral discourse on AI systems is usually divided into two main branches. The larger and more mature branch, sometimes named just ''AI ethics'' or ''robot ethics'', is premised on a human-centered perspective which focuses on the morality of humans who deal with AI systems, including developers, manufacturers, operators, consumers, etc. The smaller and younger branch, usually called ''machine ethics'', is a machine-centered discourse which mainly examines how AI systems, intelligent machines, and robots can themselves behave ethically [249–251]. Both branches (i.e., human-centered and machine-centered) overlap, as shown in Fig. 11. We will review the moral questions addressed within the first branch below, and those of the second branch separately in Sections 5.3.2 and 5.3.3.

Fig. 11. A visual depiction of the overlap between AI Ethics (which is human-centered) and Machine Ethics (which is machine-centered).

5.1. Academic publications

The interdisciplinary character of AI ethics was manifested in the considerably diverse backgrounds and research interests of the academics who contributed to this emerging field. Due to their close connections with AI ethics, many of the contributing authors came from the cognate fields of (moral) philosophy, engineering, and computer science. Additionally, many important authors came from other fields as well, including nanotechnology, psychology, social sciences, applied ethics, bioethics, and legal studies, along with some researchers who simply identified themselves as AI researchers. It is to be noted that some of the contributing authors already have an interdisciplinary background. This diverse group of researchers contributed to AI ethics in various ways, e.g., writing book chapters, journal articles, and book-length studies, editing volumes, and editing journal special issues. Below, we give representative examples of each type of these publications.

Besides individual book chapters [252,253], important book-length studies have provided rigorous and critical insights on the moral questions of AI systems, AI, and related fields. Examples include Moral machines: Teaching robots right from wrong, published in 2008 [254], Machine ethics, published in 2011 [255], The Machine question: Critical perspectives on AI, robots, and ethics, published in 2012 [256], Robot ethics: The Ethical and social implications of robotics, published in 2012 [257], Superintelligence: Paths, dangers, strategies, published in 2014, Programming machine ethics, published in 2016 [258], and Robot ethics 2.0: From autonomous cars to artificial intelligence, published in 2017 [259].

In addition, many individual journal articles [251,260–267] and a number of academic journals dedicated special issues to contribute to AI ethics. For instance, the Journal of Experimental & Theoretical Artificial Intelligence published ''Philosophical foundations of artificial intelligence'' in 2000 [268], the IEEE Intelligent Systems published ''Machine Ethics'' in 2006 [269], the AI & Society: Journal of Knowledge, Culture and Communication published ''Ethics and artificial agents'' in 2008 [270], the Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science published ''Ethics and artificial intelligence'' in 2017, the Ethics and Information Technology published ''Ethics in artificial intelligence'' in 2018 [271], the Proceedings of the IEEE published ''Machine ethics: The Design and governance of ethical AI and autonomous systems'' in 2019 [249], and The American Journal of Bioethics published ''Planning for the known unknown: AI for human healthcare systems'' [272].

An important milestone towards the maturation and canonization of AI ethics as a scholarly discipline was the publication of some authoritative reference works. The Cambridge handbook of artificial intelligence, published in 2014, included a distinct chapter on ''the ethics of artificial intelligence'' [273]. Recently, dedicated handbooks started to appear, including Handbuch Maschinenethik (handbook of machine ethics), published in 2019 [274], and The Oxford Handbook of Ethics of AI, published in 2020 [275], where the last chapter was dedicated to ''Smart City Ethics'' [214].

These publications addressed a wide range of moral issues that are relevant to the context of smart cities, even if not explicitly stated. Thus, no serious moral discourse on smart cities can be developed without critical engagement with such publications. Additionally, an increasing number of publications started to highlight the AI moral questions within the specific context of smart cities, especially themes like privacy and information transparency. Besides the aforementioned chapter in The Oxford Handbook of Ethics of AI and various journal articles and book chapters [207,276–285], one also observes a growing ethics genre with a focus on smart cities. Reference works dedicated to the theme of smart cities also included chapters relevant to ethics, including The Routledge Companion to Smart Cities [286] and the Handbook of Smart Cities, which dedicated a distinct part to ''Ethical Challenges'' [287]. Representative examples also include book-length studies like Data and the City [288]; The right to the smart city [282]; Citizens in the 'Smart City': Participation, Co-production, Governance [289]; and Technology and the city: Towards a philosophy of urban technologies [290].

5.2. Policies & guidelines

Besides academic researchers, AI ethics has proved to be of interest to a wide range of stakeholders. For instance, AI ethics is appealing to managers of tech giants such as Apple, Facebook, and Google, as well as politicians and policymakers. Rather than the theoretical and philosophical ramifications, which usually dominate the academic discourse, these stakeholders are more interested in applicable policies and practical guidelines that would help in developing morally-justified (self-)governance frameworks. For tech giants and multinational companies, having such policies and guidelines in hand usually serves the purpose of calming critical voices and improving the image of these companies among the general public, and particularly among their potential clients and customers.

The efforts made by these stakeholders, especially from 2016 onwards, resulted in a great number of AI guidelines, policies, and principles. These documents and reports were surveyed, sometimes with analytical and critical insights, by some recently published papers [206,249,291–293]. Furthermore, some academic researchers contributed to this debate by providing theoretical foundations and critical views concerning the drafting of AI codes of ethics [294,295]. In her work, Boddington paid special attention to the Future of Life Institute's ''Asilomar AI principles'', which was the outcome of an international conference that hosted a large interdisciplinary group with expertise in various disciplines, including law, philosophy, economics, industry, and social science [294].

From their side, almost all tech giants and multinational companies developed their own guidelines (see Table 5). After various checks, it seems that Twitter still has no published systematic AI guidelines, but this case would just represent the exception to the rule [206]. Google has ''Artificial Intelligence at Google: Our Principles'' [296] and ''Perspectives on issues in AI governance'' [297]. OpenAI issued their ''OpenAI Charter'' [298], IBM has ''Everyday ethics for artificial intelligence'', and Microsoft has ''Microsoft AI principles'' [299]. Sometimes, the adopted guidelines are the product of joint efforts and collaboration among more than one company. A good example here is the coalition ''Partnership on AI'', where large companies like Amazon, Apple, Facebook, Google, IBM, Sony, and Intel collaborated to facilitate and support the responsible use of AI [206]. Table 5 provides an overview of moral principles in the AI codes of tech companies. One recent example of these companies considering the ethical aspects of AI in human-centric applications is quitting the use of face recognition technology for law enforcement after the privacy and racial concerns over it from the community [20].

At the governmental level, many countries drafted guidelines and policy frameworks for AI governance. The two leading AI superpowers, China and the United States, were at the forefront in this regard. For the USA, there are several documents and reports, including ''Preparing for the future of artificial intelligence'' published in 2016 and ''The National artificial intelligence research and development strategic plan: 2019 update'' by the National Science and Technology Council [300–302].
Table 5
Overview of moral principles in the AI codes of tech companies.
Company — Key principles
IBM: accountability; value alignment; explainability; fairness; user data rights.
Google: no overall harm to society; no use of AI for weapons; no violation of human rights and international law.
Microsoft: fairness; security and privacy; empower and engage everyone; transparency and interpretability; accountability.
Samsung: equality and diversity; no unfair bias; easy access for all; explainability; social and ethical responsibility; benefit to society and corporate citizenship.
Intel: new employment opportunities; people's welfare; accountability and responsibility; privacy.
Facebook: privacy & security; fairness & inclusion; robustness & safety; transparency & control; accountability & governance.
Global convergence of AI codes of ethics: transparency; fairness; non-maleficence; responsibility; privacy.

As for China, there is the ''Beijing AI Principles'' issued in 2019 by the Beijing Academy of Artificial Intelligence and backed by the Chinese Ministry of Science and Technology [303].

At the transnational or global level, there are also important initiatives [304,305]. The Institute of Electrical and Electronics Engineers (IEEE) produced two versions of the ''Ethically Aligned Design''. The first version came out in 2016 and the second in 2019 [306,307]. After open consultation on a draft made publicly available in December 2018, the European Commission published ''Ethics guidelines for trustworthy AI'' [308]. The last example to be mentioned here is the intergovernmental Organisation for Economic Co-operation and Development (OECD), which adopted in May 2019 the ''OECD Principles on AI''. The document is meant to promote innovative and trustworthy AI that respects human rights and democratic values [309]. Inspired by this initiative, the G20 adopted its human-centered AI Principles [309,310].

5.3. Analytical review of the key issues

Fig. 12 provides a taxonomy of the key ethical issues discussed in the literature. In this section, we will mainly focus on the algorithmic issues, as a detailed description of the data ethics has been provided in Section 4.

Fig. 12. A taxonomy of ethical issues.

Before delving into the detailed issues addressed by the above-sketched literature (see the overview below and in Table 6), a methodological note is in order. Due to the popularity of AI and the polarizing debates in media, some of the contributors to the field of AI ethics stress the need to distinguish between genuine and pretentious moral problems, and to stress that this field should focus on the former rather than the latter type of problems [250,311].

The publicity of certain ''exotic'' anecdotes and their wide circulation in media would make people mistakenly think that they raise genuine ethical issues. This holds true for the public unveiling of the Japanese roboticist Hiroshi Ishiguro's Geminoids, an android that so closely resembles his own appearance and does human-like movements, such as blinking and fidgeting with its hands. Another example here is the robot ''Sophia'', which received ''citizenship'' status from Saudi Arabia after her speech at a United Nations meeting [311]. It is to be noted that it is quite difficult to get Saudi citizenship, even for people who were born in that country and spent a great deal of their life there but had no Saudi parents. Such incidents make some people imagine or create fearful scenarios whose moral ramifications, they believe, ethicists and policymakers should urgently address, as if they were part of an already existing dilemma. However, Ishiguro's robot is a remotely controlled android, not an autonomous agent, and the speech given by Sophia was not her own work but was prerecorded by an organic human female. Thus, the fears and concerns promoted after such incidents are more pretentious in nature and are usually viewed as non-issues from a moral perspective. They come close to analogous claims made about earlier technologies, e.g., writing will destroy memory, trains are too fast for souls, telephones will destroy personal communication, video cassettes will make going out redundant, etc. [250].

Moral philosophers argue that such ''non-issues'' should not be part of the mainstream AI ethics [250,311,312]. However, sometimes it proves difficult to agree whether some AI-related questions and challenges should be considered as genuine or pretentious issues. The main example here is the so-called ''singularity hypothesis'', which will be discussed in a distinct section below.

5.3.1. Singularity/superintelligence

Unlike the usual concern linked with most technological advances, viz., undermining people's health or wellbeing, advances in AI systems (sometimes together with the related field of neurology) are believed to pose an existential threat to the human species altogether. This concern is usually couched under the so-called ''singularity hypothesis''.

The basic idea of this hypothesis is that once AI systems are able to produce machines or robots with a human level of intelligence, these machines will also be able to act autonomously and create their own ''superintelligent'' machines that will eventually surpass the human level of intelligence. With such a shift-making sequence of developments, the point of ''singularity'', similar to that of physics, will be a natural outcome. After this point, the superintelligent machine will be the last invention made by man, because humans will not be able to have things under control anymore, including their own destiny. Consequently, human affairs and basic values in life (including even what it means to be human), as we understand them today, will collapse [250,312].
Table 6
Overview of the key issues that (should) deserve attention in the moral discourse on AI. The significance of some issues is agreed upon (serious issues), whereas other issues are viewed as less important or simply non-issues (pretentious issues). The (in)significance of some other issues, mainly represented by the singularity hypothesis, is still a point of controversy and disagreement.

Pretentious issues:
• Worries raised by Hiroshi Ishiguro’s Geminoid
• Worries raised by the humanoid robot Sophia, especially when she delivered a talk at the UN

Serious issues:
(A) Human-centered branch (AI ethics)
  (A.1) Data-related concerns
    - Preserving key values: confidentiality, privacy, accountability, fairness, justice, transparency, trust
    - Addressing the moral implications of technical problems: safety, adversarial attacks, bias, explainability
  (A.2) Social concerns: discrimination, undermining inter-human relations, unemployment
(B) Machine-centered branch (machine ethics): accountability, autonomy, culpability, liability, moral agency

Singularity hypothesis (?): whether it is a serious or a pretentious issue remains contested (see Section 5.3.1).
For those who believe in the singularity hypothesis, one of the possible post-singularity scenarios is that humans will be replaced by superintelligent machines and thus mankind will become obsolete. The proponents of a more optimistic scenario do not speak of human extinction but of transformation into superhuman intelligent beings. Owing to mutual hybridization between men and machines, humans will be able to exponentially increase their levels of intelligence, all their other capacities, and their lifespan, up to the possibility of achieving immortality [250,312].

On the other hand, some voices consider the singularity hypothesis dubious, untenable, and an overestimation of AI risks. Thus, some wonder whether this hypothesis deserves to be viewed as a real moral issue at all, or whether it should actually be seen as something imaginary whose right place is science fiction rather than moral discourse [250,311,312]. The critics of the singularity hypothesis sometimes even accuse its proponents of lacking work experience in the AI field [206,295]. Such reservations about the singularity hypothesis, and the questioning of whether it is even a serious issue to be addressed, may explain the silence of many of the above-reviewed policies and guidelines on this issue [302]. Even the report released in 2017 by the US Center for a New American Security (CNAS), which had the term singularity in its title, did not provide a serious analysis of the singularity hypothesis [313]. When the aforementioned ‘‘Preparing for the future of artificial intelligence’’ specifically touched upon the singularity hypothesis, it stated that the hypothesis should have little impact on current policy and that it should not be the main driver of AI public policy [302]. The same attitude was adopted by the first version of the IEEE’s ‘‘Ethically aligned design’’, where an implicit reference was made to the singularity hypothesis, warning against adopting ‘‘dystopian assumptions concerning autonomous machines threatening human autonomy’’ [306].

5.3.2. Human-centered branch (AI ethics)

Data-related concerns (e.g., privacy, transparency, explainability, adversarial attacks). Broadly speaking, the efficiency of AI systems heavily depends on the quality of the training data. Thus, a great deal of the AI moral issues and dilemmas revolves around the central question of how such big data should be managed in an ethical way. While trying to collect and process as much data as possible, the AI systems can actually be seen as performing a modernized form of the conventional state surveillance by secret services. Various techniques that can be used in smart cities, such as face recognition and device fingerprinting, in combination with ‘‘smart’’ phones and TVs, ‘‘smart governance’’, and the ‘‘Internet of Things’’, are tools for a huge data-gathering machinery. As some observers have stated, the resulting data will not only include ‘‘private’’ or ‘‘confidential’’ information about us; these tools will even know more about us than we know about ourselves. Consequently, the data gathered can be used to manipulate one’s behavior. Besides the possibility of deploying it to infringe upon people’s privacy and the confidentiality of information, this massive data-gathering machinery can also make money from our collected data without our consent or even our knowledge. This is sometimes called ‘‘surveillance economy’’ and ‘‘surveillance capitalism’’ [250,314,315]. A more detailed discussion of the data-related ethical issues and concerns, such as privacy, bias, ownership, data openness, interpretation, and informed consent, has been provided in Section 4.

Explainability – which is closely related to key moral concepts such as fairness, bias, accountability, and trust – is another significant aspect of big data management. The minimum level of required explainability intersects with the concept of transparency, which would simply mean developing an easily-understood overview of system functionality. In other words, the AI systems should at least maintain precise accounts of when, how, by whom, and with what motivation these systems have been constructed, and these accounts should be explainable and understandable. Moreover, the very tools used to build the AI systems can be set to capture and store such information [316]. On the other hand, explainability, as a technical term, has further moral requirements. It means that the causes behind an AI model’s decision should be explainable and understandable for humans, so that stakeholders can be aware of the AI model’s biases, the potential causes of the bias, etc. [317]. The lack of explainability and transparency, which will be seen as opacity, continues to trigger public and scholarly debates about possible moral violations related to discrimination, manipulation, bias, injustice, etc. An AI algorithm developed by Goldman Sachs was said to be discriminating against women [318]. Also, the Google Health study, published in Nature, which argued that an AI system can outperform radiologists at predicting cancer, was said to violate transparency and reproducibility [319–321]. To address such concerns, the AI field has been developing techniques to facilitate so-called ‘‘explainable AI’’ and ‘‘discrimination aware data mining’’ [206]. On the other hand, governmental efforts continue to put pressure on the AI industry to produce more explainable applications. For instance, the EU General Data Protection Regulation (GDPR) underlined the ‘‘right to explanations’’ [317].
Furthermore, the aforementioned EU ‘‘Ethics guidelines for trustworthy AI’’ included the principle of explicability as one of the four core ethical principles in the context of AI systems [308].
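To make the notion of post-hoc explanation concrete, the following minimal sketch (our own illustration, not taken from any of the surveyed systems) uses scikit-learn's permutation importance to ask which input features actually drive a trained model's decisions. The data is synthetic and the feature names are hypothetical stand-ins for a smart-city decision task.

```python
# A minimal sketch of post-hoc explainability: which features drive a
# model's predictions? Data and feature names are synthetic/hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["income", "age", "zip_code", "usage", "tenure", "gender"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# drop in accuracy; a large drop means the model leans on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:10s} importance: {imp:.3f}")
```

If a protected attribute such as the hypothetical ‘‘gender’’ column surfaces near the top of such a ranking, that is exactly the kind of opacity-hidden bias the moral debate above is concerned with.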
Another major concern related to data governance has to do with ensuring its security and developing protective measures against adversarial attacks, which can have a serious impact on AI systems. AI algorithms, whose behavior substantially shapes life in smart cities, mainly feed on data collected from every participating device in order to deliver fully integrated, complex smart solutions. However, AI algorithms are not safe by nature, since adversarial attacks have been demonstrated in different smart domains. This creates a deep ethical responsibility, shared by all stakeholders, to ensure data safety for both assets and people, to the extent that some have considered it a human-rights issue [322].
Several defense techniques have been developed to mitigate or minimize the risk of adversarial attacks [323–325]. Also, Generative Adversarial Networks (GANs) support decision systems in several smart areas by generating realistic examples to enrich the available dataset (data augmentation) and thus improve the efficiency of the AI models [326–328]. It is to be noted that commissioned cyberattacks, originally meant to test the immunity of AI systems to the threat of adversarial or offensive AI, can also help address some of the aforementioned concerns. For instance, they can step in to secure fairness in AI solutions, so that classifiers will not judge based on protected attributes related to gender, religion, wealth, etc. [329]; this is called adversarial fairness. They may also be needed to preserve the privacy of some sensitive data; this is called adversarial representation [330,331]. Such benefits explain the presence of a clear trend in the literature to expose all possible adversarial attacks on different systems. This can be viewed as part of typical ethical hacking, where AI specialists look for every possible form of attack in order to improve the development of defenses.
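As an illustration of how easily such attacks can be mounted, the sketch below crafts fast-gradient-sign-method (FGSM) style adversarial examples against a plain logistic regression. It is a deliberately simplified stand-in for the attacks surveyed above, with synthetic data; the gradient of the logistic loss is computed in closed form rather than with a deep-learning framework.

```python
# Sketch: FGSM-style evasion attack on a logistic regression
# (simplified stand-in for the attacks discussed above; synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, y_true, eps):
    # For logistic loss, the gradient w.r.t. the input is (p - y) * w,
    # so the attack nudges x by eps in the sign of that gradient.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - y_true) * w)

eps = 0.3
X_adv = np.array([fgsm(x, t, eps) for x, t in zip(X, y)])
print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```

The same perturbed samples can be fed back into training (adversarial training), which is one of the defense and data-augmentation uses of adversarial data noted above.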
Social concerns (e.g., discrimination, unemployment). In addition to the problems highlighted above, big data misgovernance can also create social problems. For instance, the absence of explainability can pose a serious threat to democracy: the so-called ‘‘threat of algocracy’’ [332]. This threat will likely materialize by standardizing dependence on ‘‘intelligent’’ systems whose rationale, or mode of reasoning for the decisions they make, is inaccessible to individual citizens and sometimes even to experts [250].

Additionally, the aforementioned example of Goldman Sachs and similar stories show that data-driven algorithms can contribute to sexism, racism, or the reproduction of other negative stereotypes that we collectively agreed to judge as bad, even if they sometimes reflect part of our current reality. Unregulated usage of AI applications like automated facial analysis has been shown to exhibit systematic biases by skin type and gender [248,316]. Instead of helping us reform the existing inequalities in societies, mathematical models and algorithms often reinforce them [24]. To address such biases and discriminatory stereotyping, more carefully programmed AI systems are being developed. For instance, some discrimination-sensitive programs can be used in the early stages of human resources processes to help shortlist diverse CVs [316]. What is also important in this regard is that the AI field itself should be more inclusive and diverse when it comes to the cultural and ethnic background and gender of the AI teams [271].

Given AI’s increasing ability to take over skilled and unskilled jobs, another socio-economic concern is that AI will disrupt the labor market. The pessimistic view sometimes goes as far as to warn of a dystopian climax, where a handful of AI giants will take jobs away from millions of people, who will end up having nothing to do except ‘‘entertaining’’ themselves with whatever the AI industry allows them to access. At the other end of the spectrum, there is an optimistic view whose advocates promise an AI utopia where AI systems will generate wealth, create more jobs, and improve overall economic growth. One of the key challenges in properly navigating these concerns is that there is little economics research in this area, and the available predictions are premised on past technologies. This state of academic research makes it difficult for policymakers to prepare well for the prospective AI impact on the labor market and the economy in general [333–335]. Beyond AI’s positive or negative economic impact, some researchers have expressed specific concerns about certain applications, like the so-called ‘‘carebots’’, which are meant to offload caregiving to a machine. Even if this automation of caregiving does not result in job cuts, replacing human care will still have social costs, e.g., the exchange of feelings and emotions among humans will cease to be part of caregiving [336].

5.3.3. Machine-centered branch (machine ethics)

The machine-centered branch of AI ethics, or ‘‘machine ethics’’, approaches machines as subjects or agents, rather than objects or tools used by humans. Despite some vagueness about the exact scope and subject of this branch, the basic idea is that the ‘‘machine ethics’’ discourse focuses on questions related to the morality of the machine itself, e.g., can a machine behave ethically towards humans or other machines? If yes, which moral standards should apply to judge this behavior? Would the machine in such a case be held accountable, morally responsible, or a holder of rights and obligations? [249,250].

Available research shows a variety of approaches, already applied in experimental demonstrations with robots, that explore how a machine can be trained to recognize and correctly respond to morally challenging situations. It is to be noted that the outcome of these trials is still far from producing even a human-like being whose acts can be judged in the same way we judge human moral agents. Researchers speak merely of ‘‘robots with very limited ethics in constrained laboratory settings’’ [249]. In order to accommodate the restricted moral autonomy of some (future) AI systems, some researchers have proposed multi-layered typologies of ethical agents. In these typologies, the highest category of full ethical agents is (for now) exclusive to an average adult human, whereas machines trained to behave ethically fall under lower categories [337].

Whatever one’s conviction about the nature of the morality that can be assigned to certain AI systems, and however far we can regard them as ‘‘artificial moral agents’’, the very idea itself has raised complex questions about key concepts like moral responsibility, accountability, and liability. This holds particularly true for two famous AI applications, namely autonomous vehicles and autonomous weapons. In principle, such applications challenge the conventional idea that whenever there is a victim, there should be an identifiable culprit. The victims of violations made by autonomous cars or weapons will face the difficulty of allocating punishment, sometimes called the ‘‘retribution gap’’, because there will be no human driver or shooter who can be held accountable [250,338]. In response to these difficulties, proposals were made to forgo the idea of accountability assigned to a specific individual (e.g., the motorist or the shooter) and to assign it instead to a pool of involved stakeholders (e.g., programmers, manufacturers, and operators of the AI systems, besides the bodies responsible for infrastructure, policy, and legal decisions, etc.) [250,255,339].

6. Insights and lessons learned

In this section, we present the insights and lessons learned from the literature on each challenge to AI in smart cities.
6.1. Smart city AI security and robustness

Adversarial AI is not an emerging topic; however, it has become a crucial and hot topic in the era of smart cities, and extra efforts are needed to reach an acceptable level of trust in our AI-based products. Trust might be defined with respect to the possible attacks, the defense mechanisms, and the expected effect on the overall system. This may create a trade-off between safety and performance, which needs further exploration.

AI safety strategies come in four categories, based on four general safety strategies in engineering [141]. We highlight the basics of each of them, with possible examples related to the discussion in this section; a code sketch of the second and third strategies follows below.

• Safe design strategy: The main idea in this strategy is to study the data and any potential bias or harm before building AI solutions. For example, training a model on a mix of animals and humans could lead to harmful results. Using a dataset that is biased towards specific classes – such as one in which lighter-skin examples overwhelmingly outnumber darker-skin examples – could also yield a solution biased towards specific classes [21]. The imbalance of the examples in the dataset leads the classifiers to perform better, in terms of accuracy, on specific classes: males over females, and lighter skin colors over darker skin colors. General-purpose IBM face recognition was discontinued because it was used for racial profiling; the MIT Technology Review showed that such software performs better on lighter-skinned females than on darker-skinned females (https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/).

• Safety reserves: The feature set could be partitioned into protected (such as gender or race) and unprotected groups, where the risk ratio of harm for a protected group relative to an unprotected group should not exceed a predefined threshold.

• Safe fail: If a decision cannot be given with confidence, the rejection option would be the choice, and a human would step in to make the decision manually.

• Procedural safeguards: The availability of open-source machine learning algorithms could improve testing and auditing work. Moreover, since data plays a major role in any AI-based solution, open, freely available datasets could help in developing safer applications.

Although the above strategies could improve the safety of AI-based solutions, several defense methods have also been developed against security attacks to maintain the safety of AI-based applications.
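The following minimal sketch (our own illustration, using synthetic data) shows how the ‘‘safety reserves’’ and ‘‘safe fail’’ strategies above can look in code: a group-wise risk-ratio check and a confidence-based rejection option. The ‘‘protected’’ attribute and the two thresholds are hypothetical choices, not values prescribed by the cited work.

```python
# Sketch of two of the safety strategies above (synthetic data; the
# "protected" attribute and the thresholds are hypothetical choices).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, random_state=2)
protected = np.random.RandomState(2).randint(0, 2, size=len(y))  # group A/B

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, y, protected, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

# Safety reserves: the ratio of error (harm) rates between the protected
# and unprotected groups should stay below a predefined threshold.
err_a = np.mean((proba[p_te == 1] > 0.5) != y_te[p_te == 1])
err_b = np.mean((proba[p_te == 0] > 0.5) != y_te[p_te == 0])
risk_ratio = err_a / max(err_b, 1e-9)
print(f"risk ratio: {risk_ratio:.2f} (flag if above, say, 1.25)")

# Safe fail: abstain when the model is not confident enough and
# hand the case over to a human operator instead.
confident = np.maximum(proba, 1 - proba) >= 0.8
print(f"decided automatically: {confident.mean():.0%}, "
      f"deferred to a human: {(~confident).mean():.0%}")
```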
Some key lessons learned from this section are summarized as follows:

• Adversarial attacks have been proven in several smart city applications, and they have serious consequences for people’s lives, privacy, opportunities, and assets. They could also significantly impact the economy and the environment of countries.

• All stakeholders in developing smart city applications are ethically responsible for following good technical practices and for extensively evaluating the impact of any AI application on fairness, privacy, and lives.

• Anti-adversarial-attack solutions are not magic, and all authorities and organizations share the responsibility of risk prevention and mitigation.

• Adversarial data do not mean ‘‘harm’’ all the time; they can be utilized as a data augmentation technique and to build more robust AI-based solutions.

• Due to their high severity, adversarial attacks should be put into the educational track as an integral part of the model building and deployment process of AI applications.

• The transferability of adversarial examples across models enables an attacker to target even a black-box model. There is currently no effective defense mechanism against this, which sheds light on the importance of substitution-model techniques.

• Organizations may need to invest more, not only in their collected data but also in securing the models they have developed. This probably requires more budget for security, training, and tools.

• AI models that show high accuracy at testing time may no longer be good choices once the robustness of the model against attacks becomes part of the evaluation process.

6.2. Smart city AI interpretability

Despite their outstanding capabilities, the decisions/predictions made by traditional black-box AI algorithms are not straightforward – in fact, not understandable – for the different stakeholders, such as government authorities and citizens, involved in a smart city application. Even the data scientists who created a model may have trouble explaining why their algorithm made a particular decision; sometimes the developers of AI models are not fully aware of the causes of a particular decision. One way to achieve better model transparency is to adopt models from a specific family that is considered explainable. Understanding the causes of a model’s decision, in general, and in smart city applications in particular, is critical for developing users’ trust in the system. For instance, in healthcare, understanding the causes of AI predictions/decisions is very critical for doctors considering AI-based clinical insights. Doctors would feel more confident in making decisions based on an AI diagnosis if the decision of the AI model were understandable/interpretable by a human. Explainability also provides an opportunity for AI models/developers to benefit from domain experts’ knowledge in dealing with impurities in the data and in the structure of the models.

Some key lessons learned from this section are summarized as follows:

• A lot of interest in and demand for explainable AI has been observed over the last few years.

• Explainability helps in building stakeholders’ trust in AI models’ predictions, which will ultimately speed up their adoption in critical smart city applications, such as healthcare.

• Explainability also plays a vital role in ensuring fair AI decisions by identifying and eliminating decisions based on protected attributes such as race, gender, and age.

• There is a trade-off between explanation and performance. Transparent models are good for explanation; however, their performance is lower compared to black-box models, such as deep learning models.

• There is a deep connection between explainability and other emerging concepts in AI, namely adversarial attacks and ethics.

• Explainability helps AI models guard against adversarial attacks by differentiating between genuine samples and adversaries.

• Explainability and ethics also link to and cross-fertilize each other in AI.

6.3. AI ethics

The literature reviewed in this section demonstrates a growing interest in and concern over the ethical aspects of AI systems and their applications. A diverse group has contributed to the emerging field of AI ethics, including not only academics and researchers but also governments and tech giants, such as Apple, Facebook, and Google. They have all realized the growing impact of AI technology on society and believe that ethical deliberations, guidelines, and governing policies are necessary to make a rigorous trade-off between potential benefits and possible harms.

The key lessons learned can be summarized as follows:

• AI ethics is increasingly moving towards being a distinct scholarly field of inquiry with a strong interdisciplinary character. Besides the two main groups involved, namely philosophers and engineers, this young field is also benefiting from insights provided by an interdisciplinary group of scholars, researchers, and practitioners.

• In their attempt to canonize the young field of AI ethics and to theorize and standardize its scope, questions, and methodology, various academic journals and publishers have been actively producing books, edited volumes, journal special issues, and recently also handbooks.

• The key players in the AI industry, including multinational companies alongside national and transnational governmental bodies, have drafted various policies and guidelines meant to demonstrate their commitment to the ethical governance of their activities in the AI industry.

• The wide range of moral issues addressed by academic publications and/or guidelines shows disagreement on certain issues (such as the singularity hypothesis) and on whether they should be regarded as real problems. On the other hand, a great number of issues were consensually viewed as serious challenges, including those with relevance to smart city applications. Representative examples were discussed under broad themes, including big data management (e.g., privacy, explainability, transparency, opacity, bias) and social problems (e.g., facilitating discrimination and disrupting the labor market).

7. Open issues and future research directions

7.1. Smart city AI security and robustness

Google Scholar shows growth in the number and scope of adversarial attack research over the last decade [61]. The collaboration of multidisciplinary teams, including data scientists, cybersecurity engineers, and domain-specific professionals, is needed for adversarial attack research and development. Future research is expected to set policies that accurately describe ethical outlines, and how and when AI should be part of an organization’s ecosystems [67,79]. Some possible research opportunities and open issues are listed below; a sketch of a robustness-aware evaluation follows the list.

• Performance and accuracy vs. security. The classical trade-off between response time and safety procedures would be the first concern raised in deploying AI in smart cities, where decisions are supposed to be taken on time. Applying detection algorithms against adversarial attacks must be carefully evaluated in different fields, especially those that depend mainly on fast decisions, such as autonomous vehicles (AVs). Another concern related to performance is the accuracy of AI models when they are trained on both benign and adversarial data, i.e., the false positive and true negative rates. Different parameter optimization methods for learning-based algorithms share the same objective, i.e., maximizing the overall accuracy of the model [340]. However, the interesting question would be: do those parameters have any impact on the model’s immunity against adversarial attacks?

• Estimating attack implications (the ripple effect). The ripple effect of attacks must be considered in future works. Given the complexity of a smart city’s ecosystems, attacking one model may have a series of consequences for the whole city and may also unintentionally attack other models. Estimating the loss and effect of attacking a model, and evaluating functional dependencies, could become integral parts of the future AI-based systems development life-cycle. We can expect more interest in simulation work in this area soon.

• Real-time adversarial attacks. This is another challenge for AI safety teams. There is a need to evaluate the current techniques for generating poisoning data when only part of the benign data is available, i.e., streaming. What should the structures of defenders look like in real-time environments? [341]

• Future works may show more efforts in defining the rules for operating smart cyber-systems and the accountability of service providers and operators [342].

• Unintentional attacks in smart waste and agriculture. Smart waste and agriculture mainly depend on networks of sensors that work in harsher conditions compared to some other fields, such as transportation. In such scenarios, the environments might be wet, humid, or dirty, have varying temperatures, and may suffer from pollution. For example, the sensors attached to animals on large farms, sensors on trash bins, electrochemical sensors for soil nutrients, etc., are liable to convey some noise besides the required data, due to these environmental effects. This could be an important source of unintentional attacks that should be evaluated and taken into account in future works.

• AI model detection and isolation techniques. In [76], a technique for abnormal vehicle behavior detection and isolation is applied at the object level (i.e., the vehicle), which may run several models to control driving tasks and traffic management. Evaluating the approach at a lower level, i.e., the models, to detect and isolate possibly attacked models might add value to the overall safety. Developing guidelines for replacing suspected models, or defining alternatives in an AI model maintenance plan, could improve consumers’ trust.

• Robustness and safety metrics to be involved in the evaluation process of AI models. The current metrics for evaluating the performance of AI models could take into account the factors of safety and the robustness of the model against different types of attacks. AI models with high accuracy at testing time might be the worst once a little noise is added by attackers in the production environment [343]. This points to a possible need to revisit the AI model evaluation policy before deployment, and thus to an agreement between the stakeholders on such an evaluation policy before services go live.
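As a minimal illustration of such a robustness-aware evaluation (our own sketch, with synthetic data and an arbitrary noise model rather than a cited benchmark), accuracy can be reported as a curve over perturbation strength instead of as a single clean-test number:

```python
# Sketch: report accuracy under increasing input perturbation, not just
# clean test accuracy (synthetic data; the noise model is an arbitrary choice).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=15, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
model = RandomForestClassifier(n_estimators=100, random_state=3).fit(X_tr, y_tr)

rng = np.random.RandomState(3)
for eps in [0.0, 0.1, 0.3, 0.5, 1.0]:
    # Worst-of-k random perturbations: a crude stand-in for an adaptive attack.
    accs = [model.score(X_te + eps * rng.randn(*X_te.shape), y_te)
            for _ in range(5)]
    print(f"eps={eps:.1f}  worst-case accuracy: {min(accs):.3f}")
```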
7.2. Smart city AI interpretability

Although a lot of effort has gone into the interpretation/explainability of AI algorithms since the concept of explainable AI was introduced, there are still many aspects of explainable AI that need to be analyzed. In this section, we provide some of the open issues and future research directions in the domain.

7.2.1. Interpretation vs. performance

Despite all the benefits explainability brings for stakeholders in different application domains, there are some concerns about its impact on performance and on the development process. It is believed that the efforts towards explainability will not only slow down the development process but also put constraints on it,
which might also hurt the performance (i.e., accuracy) of the models [344]. For better interpretability, AI models need to be simple: the simpler the model, the more explainable the causes of an underlying decision are. However, the literature shows that more complex AI algorithms (e.g., deep learning) usually tend to be more accurate. The trade-off between explainability and performance is believed to be amenable to optimization through better explainability methods, which is one of the key research challenges in the domain [56,345].
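The trade-off can be made tangible with a toy comparison (our own sketch on synthetic data, not a benchmark from the surveyed literature): a depth-limited, human-readable decision tree versus a larger ensemble model.

```python
# Sketch of the interpretability-accuracy trade-off on synthetic data:
# a small, readable tree vs. a larger black-box ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

tree = DecisionTreeClassifier(max_depth=3, random_state=4).fit(X_tr, y_tr)
boost = GradientBoostingClassifier(random_state=4).fit(X_tr, y_tr)

print("depth-3 tree accuracy: ", round(tree.score(X_te, y_te), 3))
print("boosted model accuracy:", round(boost.score(X_te, y_te), 3))
# The shallow tree can be printed and audited rule by rule;
# the ensemble usually wins on accuracy but cannot be read this way.
print(export_text(tree))
```

The boosted model typically scores higher, while only the shallow tree yields a decision procedure a stakeholder can read end to end, which is precisely the tension discussed above.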
7.2.2. Concepts and evaluation metrics

The literature still lacks a common ground, structure, and a unified concept of explainability [56]. However, several efforts have been made in this regard. For instance, Arrieta et al. [56] attempted to provide a common ground or a reference point: according to them, the explainability of an AI model refers to its ability to make its functioning (i.e., the causes of its decisions) clearer to an audience. The authors also emphasize the need for, and the definition of, an evaluation metric or set of metrics for the evaluation and comparison of AI models in terms of their explainability and interpretation capabilities.

7.2.3. Explanation of deep learning models

Despite the sincere efforts made towards explainable AI, there are still several challenges hindering its success and adoption. One of the key challenges is the interpretability of deep learning. In this regard, efforts are ongoing to develop explainable deep learning techniques and applications. To this aim, different visualization techniques are used to explain the models’ reasoning steps, which is expected to make them explainable and trustworthy.

7.2.4. Explainability and adversarial AI

As detailed earlier, explainability and adversarial AI have a direct connection. On the one hand, explainability can guard against adversarial attacks by differentiating between a genuine sample and an adversary; on the other hand, the information revealed by explainability techniques can be used to generate more effective adversarial attacks on AI algorithms [56]. One of the interesting directions of research on explainable AI is to analyze how effectively it can be used to guard against adversarial attacks. There are already ongoing efforts in this direction, as detailed in Section 3.2.
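A simplified sketch of this detection idea is shown below. It is inspired by the SHAP-signature approach of [26], but, to stay self-contained, it uses the exact additive attributions of a linear model (w_i * x_i per feature) instead of SHAP values, synthetic data, and an in-sample score; it illustrates the principle, not a deployable defense.

```python
# Sketch: detect adversarial inputs from their explanation "signatures".
# Simplified stand-in for SHAP-signature detection [26]; for a linear model
# the per-feature contributions w_i * x_i are exact attributions of the logit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=5)
victim = LogisticRegression(max_iter=1000).fit(X, y)
w, b = victim.coef_[0], victim.intercept_[0]

def attribution(x):
    return w * x  # per-feature contribution to the victim's logit

def fgsm(x, y_true, eps=0.5):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return x + eps * np.sign((p - y_true) * w)

X_adv = np.array([fgsm(x, t) for x, t in zip(X, y)])

# Train a second classifier on attribution vectors:
# label 0 = genuine sample, label 1 = adversarial sample.
sig = np.vstack([np.array([attribution(x) for x in X]),
                 np.array([attribution(x) for x in X_adv])])
lab = np.r_[np.zeros(len(X)), np.ones(len(X_adv))]
detector = LogisticRegression(max_iter=1000).fit(sig, lab)
# In-sample score, for illustration only; a real evaluation needs held-out data.
print("detector accuracy on signatures:", round(detector.score(sig, lab), 3))
```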
7.3. AI ethics

Despite the significant progress AI ethics has been able to make in a short period, many issues remain open and various challenges still need to be addressed by future research. Below, we summarize the key points in this regard.

• Due to its strongly interdisciplinary character and relatively young age, AI ethics suffers from serious conceptual ambiguity. Many of the key terms have fundamentally different, and sometimes even incompatible, meanings for different people. For example, key terms like agent, autonomy, and intelligence do not have the same meaning for moral philosophers and AI engineers. For engineers, cars or weapons are ‘‘autonomous’’ when they can behave without direct human intervention. Moral philosophers, however, would use the term ‘‘autonomous’’ exclusively for an entity that can define its own laws or rules of behavior by itself [311]. To improve the AI moral discourse and make it more efficient, there is a dire need for future research that will enhance its conceptual clarity and standardize the primary and secondary meanings of its key terms.

• There is a need to explore innovative ways of bridging the existing gaps between academic research and policymaking on the one hand, and between policymaking and the AI reality on the other. The questions raised and addressed by academics are sometimes too abstract and theoretical to be of relevance to policymakers and those engaged in the AI business. Instead of broad philosophical questions like ‘‘Will this contribute to human flourishing or put the human species at risk?’’, policymakers are more interested in practical questions like ‘‘Which harms should we expect if we are going to do this, and how can we mitigate or minimize these harms?’’. Despite some good but still seemingly exceptional instances, various researchers also warn that there is hardly any tangible impact of ethics in general, or of policies and guidelines in particular, on the reality of the AI industry. Most of the time, large companies are driven by economic logic and incentives rather than by value- or principle-based ethics [206,346].

• The moral discourse on AI systems is almost exclusively ‘‘Western’’ in nature. In other words, ethical deliberations and academic publications are produced by institutions based in Western Europe and the United States and are thus imbued with secular-oriented moral thought. With the expected growth of the AI industry and the adoption of its technologies by other communities worldwide, there is a need to diversify and enrich the current AI moral discourse by incorporating insights from other cultural and religious traditions. Available research shows that people’s cultural norms do influence their understanding of what makes AI systems ethical [347]. Moreover, reports coming from Muslim-majority countries like Qatar show that their interest in having AI technologies is espoused with a parallel interest in developing a religio-culturally sensitive discourse and compliant policies, where Arabic language processing will also be a national priority [348].

8. Conclusions

In this paper, we have reviewed the key challenges in the successful deployment of AI in smart city applications. In particular, we focused on four key challenges, namely security, robustness, interpretability, and ethical (data and algorithmic) challenges in the deployment of AI in human-centric applications. We particularly focused on the connection between these challenges and discussed how they are linked. Based on our analysis of the existing literature and our experience in the domain, we also identified the current limitations and pitfalls of the existing solutions proposed for tackling these challenges. We also identified open research issues in the domain. We believe such a rigorous analysis of the domain will provide a baseline for future research.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This publication was made possible by NPRP grant # [13S-0206-200273] from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
References [30] S.S. Han, I.J. Moon, W. Lim, I.S. Suh, S.Y. Lee, J.-I. Na, S.H. Kim, S.E.
Chang, Keratinocytic skin cancer detection on the face using region-based
[1] Secure, sustainable smart cities and the IoT, 2020, Accessed: 2020-06-24, convolutional neural network, JAMA Dermatol. 156 (1) (2020) 29–37.
https://tinyurl.com/y6qw479s. [31] A. Bhandary, G.A. Prabhu, V. Rajinikanth, K.P. Thanaraj, S.C. Satapathy, D.E.
[2] A. Gharaibeh, M.A. Salahuddin, S.J. Hussini, A. Khreishah, I. Khalil, M. Robbins, C. Shasky, Y.-D. Zhang, J.M.R. Tavares, N.S.M. Raja, Deep-learning
Guizani, A. Al-Fuqaha, Smart cities: A survey on data management, framework to detect lung abnormality–A study with chest X-Ray and lung
security, and enabling technologies, IEEE Commun. Surv. Tutor. 19 (4) CT scan images, Pattern Recognit. Lett. 129 (2020) 271–278.
(2017) 2456–2501. [32] S. Lee, Y. Kim, H. Kahng, S.-K. Lee, S. Chung, T. Cheong, K. Shin, J. Park,
[3] B. Green, The Smart Enough City: Putting Technology in Its Place to S.B. Kim, Intelligent traffic control for autonomous vehicle systems based
Reclaim Our Urban Future, MIT Press, 2019. on machine learning, Expert Syst. Appl. 144 (2020) 113074.
[4] ARUP: If you know the right questions and understand the risks, data [33] K.-T. Nguyen, T.-H. Hoang, M.-T. Tran, T.-N. Le, N.-M. Bui, T.-L. Do,
can help build better cities, 2020, Accessed: 2020-07-07, https://tinyurl. V.-K. Vo-Ho, Q.-A. Luong, M.-K. Tran, T.-A. Nguyen, et al., Vehicle re-
com/y4b8bq6e. identification with learned representation and spatial verification and
[5] A. Qayyum, J. Qadir, M. Bilal, A. Al-Fuqaha, Secure and robust machine abnormality detection with multi-adaptive vehicle detectors for traffic
learning for healthcare: A survey, 2020, arXiv preprint arXiv:2001.08103. video analysis., in: CVPR Workshops, 2019, pp. 363–372.
[6] M. Veres, M. Moussa, Deep learning for intelligent transportation systems: [34] G. Li, W. Yan, S. Li, X. Qu, W. Chu, D. Cao, A temporal-spatial deep learning
A survey of emerging trends, IEEE Trans. Intell. Transp. Syst. (2019). approach for driver distraction detection based on EEG signals, IEEE Trans.
[7] J. Xie, H. Tang, T. Huang, F.R. Yu, R. Xie, J. Liu, Y. Liu, A survey Autom. Sci. Eng. (2021).
of blockchain technology applied to smart cities: Research issues and [35] S. Bai, Z. He, Y. Lei, W. Wu, C. Zhu, M. Sun, J. Yan, Traffic anomaly
challenges, IEEE Commun. Surv. Tutor. 21 (3) (2019) 2794–2830. detection via perspective map based on spatial-temporal information
[8] K. Ahmad, J. Qadir, A. Al-Fuqaha, W. Iqbal, A. El-Hassan, D. Benhaddou, matrix., in: CVPR Workshops, 2019, pp. 117–124.
M. Ayyash, Artificial intelligence in education: A panoramic review, 2020. [36] K. Ahmad, N. Conci, How deep features have improved event recognition
[9] Z. Ullah, F. Al-Turjman, L. Mostarda, R. Gagliardi, Applications of artificial in multimedia: a survey, ACM Trans. Multimed. Comput. Commun. Appl.
intelligence and machine learning in smart cities, Comput. Commun. (TOMM) 15 (2) (2019) 1–27.
(2020). [37] K. Ahmad, K. Pogorelov, M. Riegler, O. Ostroukhova, P. Halvorsen, N. Conci,
[10] S. Latif, A. Qayyum, M. Usama, J. Qadir, A. Zwitter, M. Shahzad, Caveat R. Dahyot, Automatic detection of passable roads after floods in remote
emptor: the risks of using big data for human development, IEEE Technol. sensed and social media data, Signal Process., Image Commun. 74 (2019)
Soc. Mag. 38 (3) (2019) 82–90. 110–118.
[11] H. Ekbia, M. Mattioli, I. Kouper, G. Arave, A. Ghazinejad, T. Bowman, V.R. [38] S. Kuutti, R. Bowden, Y. Jin, P. Barber, S. Fallah, A survey of deep learning
Suri, A. Tsou, S. Weingart, C.R. Sugimoto, Big data, bigger dilemmas: A applications to autonomous vehicle control, IEEE Trans. Intell. Transp.
critical review, J. Assoc. Inform. Sci. Technol. 66 (8) (2015) 1523–1545. Syst. (2020).
[12] K. Crawford, R. Calo, There is a blind spot in AI research, Nature 538 [39] E. Huet, Server and protect: Predictive policing firm PredPol promises to
(7625) (2016) 311–313. map crime before it happens, Forbes Mag. (2015).
[13] Machine bias: There’s software used across the country to predict future [40] Stanford scholars show how machine learning can help environmental
criminals. And it’s biased against blacks., 2020, Accessed: 2020-08-26, monitoring and enforcement, 2020, Accessed: 2020-07-17, https://tinyurl.
https://tinyurl.com/j847koh. com/y3h8wcau.
[14] K. Crawford, Artificial intelligence’s white guy problem, N.Y. Times 25 [41] N. Said, K. Ahmad, M. Riegler, K. Pogorelov, L. Hassan, N. Ahmad, N. Conci,
(06) (2016). Natural disasters detection in social media and satellite imagery: a survey,
[15] A. Qayyum, M. Usama, J. Qadir, A. Al-Fuqaha, Securing connected & Multimedia Tools Appl. 78 (22) (2019) 31267–31302.
autonomous vehicles: Challenges posed by adversarial machine learning [42] D. Saboe, H. Ghasemi, M.M. Gao, M. Samardzic, K.D. Hristovski, D.
and the way forward, IEEE Commun. Surv. Tutor. 22 (2) (2020) 998–1026. Boscovic, S.R. Burge, R.G. Burge, D.A. Hoffman, Real-time monitoring and
[16] S.C.-H. Yang, P. Shafto, Explainable artificial intelligence via Bayesian prediction of water quality parameters and algae concentrations using
teaching, in: NIPS 2017 Workshop on Teaching Machines, Robots, and microbial potentiometric sensor signals and machine learning tools, Sci.
Humans, 2017, pp. 127–137. Total Environ. 764 (2021) 142876.
[17] S.M. Lundberg, B. Nair, M.S. Vavilala, M. Horibe, M.J. Eisses, T. Adams, D.E. [43] K. Ahmad, K. Khan, A. Al-Fuqaha, Intelligent fusion of deep features for
Liston, D.K.-W. Low, S.-F. Newman, J. Kim, et al., Explainable machine- improved waste classification, IEEE Access (2020).
learning predictions for the prevention of hypoxaemia during surgery,
[44] B. Qolomany, A. Al-Fuqaha, A. Gupta, D. Benhaddou, S. Alwajidi, J. Qadir,
Nat. Biomed. Eng. 2 (10) (2018) 749–760.
A.C. Fong, Leveraging machine learning and big data for smart buildings:
[18] R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, A comprehensive survey, IEEE Access 7 (2019) 90316–90356.
Grad-cam: Visual explanations from deep networks via gradient-based
[45] H. Go, M. Kang, S.C. Suh, Machine learning of robots in tourism and hos-
localization, in: Proceedings of the IEEE International Conference on
pitality: interactive technology acceptance model (iTAM)–cutting edge,
Computer Vision, pp. 618–626.
Tourism Rev. (2020).
[19] Amazon scraps secret AI recruiting tool that showed bias against women,
[46] K. Ahmad, S. Zohaib, N. Conci, A. Al-Fuqaha, Deriving emotions and
2020, Accessed: 2020-08-26, https://tinyurl.com/y8eelatr.
sentiments from visual content: A disaster analysis use case, 2020, arXiv
[20] The two-year fight to stop amazon from selling face recognition to the
preprint arXiv:2002.03773.
police, 2020, Accessed: 2020-08-05, https://tinyurl.com/y8q7cvue.
[47] Z. Obermeyer, S. Mullainathan, Dissecting racial bias in an algorithm
[21] J. Buolamwini, T. Gebru, Gender shades: Intersectional accuracy dispar-
that guides health decisions for 70 million people, in: Proceedings of
ities in commercial gender classification, in: Conference on Fairness,
the Conference on Fairness, Accountability, and Transparency, 2019, pp.
Accountability and Transparency, 2018, pp. 77–91.
89–89.
[22] S. Corbett-Davies, S. Goel, The measure and mismeasure of fairness:
[48] G. Thippeswamy, A guide to anticipating the future impact of today’s
A critical review of fair machine learning, 2018, arXiv preprint arXiv:
technology, 2019.
1808.00023.
[23] R. Kitchin, The ethics of smart cities and urban science, Phil. Trans. R. [49] L. Mora, R. Bolici, M. Deakin, The first two decades of smart-city research:
Soc. A 374 (2083) (2016) 20160115. A bibliometric analysis, J. Urban Technol. 24 (1) (2017) 3–27.
[24] C. O’neil, Weapons of Math Destruction: How Big Data Increases [50] W.E. Zhang, Q.Z. Sheng, A. Alhazmi, C. Li, Adversarial attacks on deep-
Inequality and Threatens Democracy, Broadway Books, 2016. learning models in natural language processing: A survey, ACM Trans.
[25] A. Ignatiev, N. Narodytska, J. Marques-Silva, On relating explanations Intell. Syst. Technol. (TIST) 11 (3) (2020) 1–41.
and adversarial examples, in: Advances in Neural Information Processing [51] A. Serban, E. Poll, J. Visser, Adversarial examples on object recognition:
Systems, 2019, pp. 15883–15893. A comprehensive survey, ACM Comput. Surv. 53 (3) (2020) 1–38.
[26] G. Fidel, R. Bitton, A. Shabtai, When explainability meets adversarial [52] Y. Zhou, M. Kantarcioglu, B. Xi, A survey of game theoretic approach for
learning: Detecting adversarial examples using shap signatures, 2019, adversarial machine learning, Wiley Interdiscip. Rev. Data Mining Knowl.
arXiv preprint arXiv:1909.03418. Discov. 9 (3) (2019) e1259.
[27] How AI is transforming the smart cities IoT? 2020, Accessed: 2020-07-07, [53] R. Roscher, B. Bohn, M.F. Duarte, J. Garcke, Explainable machine learning
https://hub.packtpub.com/how-ai-is-transforming-the-smart-cities-iot- for scientific insights and discoveries, IEEE Access 8 (2020) 42200–42216.
tutorial/. [54] E. Tjoa, C. Guan, A survey on explainable artificial intelligence (XAI):
[28] E. Corbett, The real-world benefits of machine learning in healthcare, towards medical XAI, 2019, arXiv preprint arXiv:1907.07374.
HealthCatalyst (2017). [55] A. Adadi, M. Berrada, Peeking inside the black-box: A survey
[29] Google computers trained to detect cancer, 2020, Accessed: 2020-07-07, on explainable artificial intelligence (XAI), IEEE Access 6 (2018)
https://tinyurl.com/y28vzkr6. 52138–52160.

24
K. Ahmad, M. Maabreh, M. Ghaly et al. Computer Science Review 43 (2022) 100452

[56] A.B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, [82] X. Ma, Y. Niu, L. Gu, Y. Wang, Y. Zhao, J. Bailey, F. Lu, Understanding ad-
S. García, S. Gil-López, D. Molina, R. Benjamins, et al., Explainable artificial versarial attacks on deep learning based medical image analysis systems,
intelligence (XAI): Concepts, taxonomies, opportunities and challenges Pattern Recognit. (2020) 107332.
toward responsible AI, Inf. Fusion 58 (2020) 82–115. [83] R. Paul, M. Schabath, R. Gillies, L. Hall, D. Goldgof, Mitigating adversarial
[57] A. Seeliger, M. Pfaff, H. Krcmar, Semantic web technologies for explainable attacks on medical image understanding systems, in: 2020 IEEE 17th
machine learning models: A literature review., in: PROFILES/SEMEX@ International Symposium on Biomedical Imaging (ISBI), IEEE, 2020, pp.
ISWC, 2019, pp. 30–45. 1517–1521.
[58] E. Puiutta, E. Veith, Explainable reinforcement learning: A survey, 2020, [84] S. Liu, A.A.A. Setio, F.C. Ghesu, E. Gibson, S. Grbic, B. Georgescu, D.
arXiv preprint arXiv:2005.06247. Comaniciu, No surprises: Training robust lung nodule detection for low-
[59] S. Baum, A survey of artificial general intelligence projects for ethics, risk, dose CT scans by augmenting with adversarial attacks, 2020, arXiv
and policy, Global Catastrophic Risk Institute Working Paper, 2017, 17–1. preprint arXiv:2003.03824.
[60] J. Morley, C.C. Machado, C. Burr, J. Cowls, I. Joshi, M. Taddeo, L. Floridi, [85] H. Zhang, Y. Qi, J. Wu, L. Fu, L. He, DoS attack energy management against
The ethics of AI in health care: A mapping review, Soc. Sci. Med. (2020) remote state estimation, IEEE Trans. Control Netw. Syst. 5 (1) (2016)
113172. 383–394.
[61] A. Qayyum, I. Aneeqa, M. Usama, W. Iqbal, J. Qadir, Y. Elkhatib, A. Al- [86] K. Manandhar, X. Cao, F. Hu, Y. Liu, Detection of faults and attacks
Fuqaha, Securing machine learning (ML) in the cloud: A systematic review including false data injection attack in smart grid using Kalman filter,
of cloud ML security, Front. Big Data (2020). IEEE Trans. Control Netw. Syst. 1 (4) (2014) 370–379.
[62] F. Hussain, R. Hussain, S.A. Hassan, E. Hossain, Machine learning in IoT [87] F. Marulli, C.A. Visaggio, Adversarial deep learning for energy
security: current solutions and future challenges, IEEE Commun. Surv. management in buildings, in: SummerSim, 2019, pp. 50–51.
Tutor. (2020). [88] Y. Chen, Y. Tan, B. Zhang, Exploiting vulnerabilities of load forecast-
[63] M. Sharif, S. Bhagavatula, L. Bauer, M.K. Reiter, Accessorize to a ing through adversarial attacks, in: Proceedings of the Tenth ACM
crime: Real and stealthy attacks on state-of-the-art face recognition, International Conference on Future Energy Systems, 2019, pp. 1–11.
in: Proceedings of the 2016 Acm Sigsac Conference on Computer and [89] O.A. Beg, T.T. Johnson, A. Davoudi, Detection of false-data injection attacks
Communications Security, 2016, pp. 1528–1540. in cyber-physical DC microgrids, IEEE Trans. Ind. Inf. 13 (5) (2017)
[64] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, 2693–2703.
T. Kohno, D. Song, Robust physical-world attacks on deep learning visual [90] S.N. Islam, Z. Baig, S. Zeadally, Physical layer security for the smart grid:
classification, in: Proceedings of the IEEE Conference on Computer Vision vulnerabilities, threats, and countermeasures, IEEE Trans. Ind. Inf. 15 (12)
and Pattern Recognition, 2018, pp. 1625–1634. (2019) 6522–6530.
[65] N. Carlini, D. Wagner, Audio adversarial examples: Targeted attacks on [91] H.I. Fawaz, G. Forestier, J. Weber, L. Idoumghar, P.-A. Muller, Adversarial
speech-to-text, in: 2018 IEEE Security and Privacy Workshops (SPW), attacks on deep neural networks for time series classification, in: 2019
IEEE, 2018, pp. 1–7. International Joint Conference on Neural Networks (IJCNN), IEEE, 2019,
[66] E. Ackerman, Three small stickers in intersection can cause tesla autopilot
pp. 1–8.
to swerve into wrong lane, IEEE Spect. April 1 (2019).
[92] C. Sitawarin, A.N. Bhagoji, A. Mosenia, M. Chiang, P. Mittal, Darts:
[67] S.G. Finlayson, J.D. Bowers, J. Ito, J.L. Zittrain, A.L. Beam, I.S. Kohane,
Deceiving autonomous cars with toxic signs, 2018, arXiv preprint arXiv:
Adversarial attacks on medical machine learning, Science 363 (6433)
1802.06430.
(2019) 1287–1289.
[93] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, C. Igel, Detection
[68] M. Sato, J. Suzuki, H. Shindo, Y. Matsumoto, Interpretable adversarial
of traffic signs in real-world images: The german traffic sign detec-
perturbation in input embedding space for text, 2018, arXiv preprint
tion benchmark, in: The 2013 International Joint Conference on Neural
arXiv:1805.02917.
Networks (IJCNN), IEEE, 2013, pp. 1–8.
[69] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, A. Swami,
[94] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q.A. Chen, K. Fu, Z.M.
The limitations of deep learning in adversarial settings, in: 2016 IEEE
Mao, Adversarial sensor attack on LiDAR-based perception in autonomous
European Symposium on Security and Privacy (EuroS&P), IEEE, 2016, pp.
driving, in: Proceedings of the 2019 ACM SIGSAC Conference on Computer
372–387.
and Communications Security, 2019, pp. 2267–2281.
[70] I. Corona, G. Giacinto, F. Roli, Adversarial attacks against intrusion de-
[95] A. Geiger, P. Lenz, C. Stiller, R. Urtasun, Vision meets robotics: The KITTI
tection systems: Taxonomy, solutions and open issues, Inform. Sci. 239
dataset, Int. J. Robot. Res. 32 (11) (2013) 1231–1237.
(2013) 201–225.
[96] Y. Li, X. Xu, J. Xiao, S. Li, H.T. Shen, Adaptive square attack: Fooling
[71] N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I.
autonomous cars with adversarial traffic signs, IEEE Internet Things J.
Goodfellow, A. Madry, A. Kurakin, On evaluating adversarial robustness,
(2020).
2019, arXiv preprint arXiv:1902.06705.
[72] K. Ren, T. Zheng, Z. Qin, X. Liu, Adversarial attacks and defenses in deep [97] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, R.M. Summers, Chest X-ray
learning, Engineering (2020). 8: Hospital-scale chest X-ray database and benchmarks on weakly-
[73] F.V. Massoli, F. Falchi, G. Amato, Cross-resolution face recognition supervised classification and localization of common thorax diseases, in:
adversarial attacks, Pattern Recognit. Lett. (2020). Proceedings of the IEEE Conference on Computer Vision and Pattern
[74] J. Stallkamp, M. Schlipsing, J. Salmen, C. Igel, Man vs. computer: Bench- Recognition, 2017, pp. 2097–2106.
marking machine learning algorithms for traffic sign recognition, Neural [98] M.D. Champneys, A. Green, J. Morales, M. Silva, D. Mascarenas, On
Netw. 32 (2012) 323–332. the vulnerability of data-driven structural health monitoring models to
[75] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z.B. Celik, A. Swami, adversarial attack, Struct. Health Monit. (2020) 1475921720920233.
Practical black-box attacks against machine learning, in: Proceedings of [99] C.R. Farrar, S.W. Doebling, Structural health monitoring at los alamos
the 2017 ACM on Asia Conference on Computer and Communications national laboratory, 1999.
Security, 2017, pp. 506–519. [100] A. Newaz, N.I. Haque, A.K. Sikder, M.A. Rahman, A.S. Uluagac, Adversarial
[76] E. Khanapuri, T. Chintalapati, R. Sharma, R. Gerdes, Learning-based adver- attacks to machine learning-based smart healthcare systems, 2020, arXiv
sarial agent detection and identification in cyber physical systems applied preprint arXiv:2010.03671.
to autonomous vehicular platoon, in: 2019 IEEE/ACM 5th International [101] G. Li, K. Ota, M. Dong, J. Wu, J. Li, DeSVig: Decentralized swift vigilance
Workshop on Software Engineering for Smart Cyber-Physical Systems against adversarial attacks in industrial artificial intelligence systems, IEEE
(SEsCPS), IEEE, 2019, pp. 39–45. Trans. Ind. Inf. 16 (5) (2019) 3267–3277.
[77] J. Su, D.V. Vargas, K. Sakurai, One pixel attack for fooling deep neural [102] Y. Zhang, J. Wang, B. Chen, Detecting false data injection attacks in smart
networks, IEEE Trans. Evol. Comput. 23 (5) (2019) 828–841. grids: A semi-supervised deep learning approach, IEEE Trans. Smart Grid
[78] S.A. Taghanaki, A. Das, G. Hamarneh, Vulnerability analysis of chest X- (2020).
ray image classification against adversarial attacks, in: Understanding and [103] X. Zhou, Y. Li, C.A. Barreto, J. Li, P. Volgyesi, H. Neema, X. Koutsoukos,
Interpreting Machine Learning in Medical Image Computing Applications, Evaluating resilience of grid load predictions under stealthy adversarial
Springer, 2018, pp. 87–94. attacks, in: 2019 Resilience Week (RWS), Vol. 1, IEEE, 2019, pp. 206–212.
[79] S.G. Finlayson, H.W. Chung, I.S. Kohane, A.L. Beam, Adversarial attacks [104] H. Wang, G. Wang, Y. Li, D. Zhang, L. Lin, Transferable, Controllable, and
against medical deep learning systems, 2018, arXiv preprint arXiv:1804. Inconspicuous Adversarial Attacks on Person Re-identification With Deep
05296. Mis-Ranking, in: Proceedings of the IEEE/CVF Conference on Computer
[80] A.S. Becker, L. Jendele, O. Skopek, N. Berger, S. Ghafoor, M. Marcon, E. Vision and Pattern Recognition, 2020, pp. 342–351.
Konukoglu, Injecting and removing suspicious features in breast imaging [105] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, Q. Tian, Scalable person
with CycleGAN: A pilot study of automated adversarial attacks using re-identification: A benchmark, in: Proceedings of the IEEE International
neural networks on small images, Eur. J. Radiol. 120 (2019) 108649. Conference on Computer Vision, 2015, pp. 1116–1124.
[81] J. Kotia, A. Kotwal, R. Bharti, Risk susceptibility of brain tumor classifica- [106] W. Li, R. Zhao, T. Xiao, X. Wang, DeepReID: Deep filter pairing neural net-
tion to adversarial attacks, in: International Conference on Man–Machine work for person re-identification, in: Proceedings of the IEEE Conference
Interactions, Springer, 2019, pp. 181–187. on Computer Vision and Pattern Recognition, 2014, pp. 152–159.

25
K. Ahmad, M. Maabreh, M. Ghaly et al. Computer Science Review 43 (2022) 100452

[107] E. Ristani, F. Solera, R. Zou, R. Cucchiara, C. Tomasi, Performance measures and a data set for multi-target, multi-camera tracking, in: European Conference on Computer Vision, Springer, 2016, pp. 17–35.
[108] H.A. Dau, A. Bagnall, K. Kamgar, C.-C.M. Yeh, Y. Zhu, S. Gharghabi, C.A. Ratanamahatana, E. Keogh, The UCR time series archive, IEEE/CAA J. Autom. Sin. 6 (6) (2019) 1293–1305.
[109] C. Dunn, N. Moustafa, B. Turnbull, Robustness evaluations of sustainable machine learning models against data poisoning attacks in the internet of things, Sustainability 12 (16) (2020) 6434.
[110] O. Alvear, C.T. Calafate, J.-C. Cano, P. Manzoni, Crowdsensing in smart cities: Overview, platforms, and environment sensing issues, Sensors 18 (2) (2018) 460.
[111] M. Li, Y. Sun, H. Lu, S. Maharjan, Z. Tian, Deep reinforcement learning for partially observable data poisoning attack in crowdsensing systems, IEEE Internet Things J. (2019).
[112] Z. Huang, M. Pan, Y. Gong, Robust truth discovery against data poisoning in mobile crowdsensing, in: 2019 IEEE Global Communications Conference (GLOBECOM), IEEE, 2019, pp. 1–6.
[113] C. Miao, Q. Li, H. Xiao, W. Jiang, M. Huai, L. Su, Towards data poisoning attacks in crowd sensing systems, in: Proceedings of the Eighteenth ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2018, pp. 111–120.
[114] C. Miao, Q. Li, L. Su, M. Huai, W. Jiang, J. Gao, Attack under disguise: An intelligent data poisoning attack mechanism in crowdsourcing, in: Proceedings of the 2018 World Wide Web Conference, 2018, pp. 13–22.
[115] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, F. Roli, Evasion attacks against machine learning at test time, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, 2013, pp. 387–402.
[116] L. Demetrio, B. Biggio, G. Lagorio, F. Roli, A. Armando, Explaining vulnerabilities of deep learning to adversarial malware binaries, 2019, arXiv preprint arXiv:1901.03583.
[117] Y. Liu, A. Mondal, A. Chakraborty, M. Zuzak, N. Jacobsen, D. Xing, A. Srivastava, A survey on neural trojans, IACR Cryptol. ePrint Arch. 2020 (2020) 201.
[118] Y. Gao, C. Xu, D. Wang, S. Chen, D.C. Ranasinghe, S. Nepal, STRIP: A defence against trojan attacks on deep neural networks, in: Proceedings of the 35th Annual Computer Security Applications Conference, 2019, pp. 113–125.
[119] M. Juuti, S. Szyller, S. Marchal, N. Asokan, PRADA: Protecting against DNN model stealing attacks, in: 2019 IEEE European Symposium on Security and Privacy (EuroS&P), IEEE, 2019, pp. 512–527.
[120] D. Lowd, C. Meek, Adversarial learning, in: Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, 2005, pp. 641–647.
[121] T. Orekondy, B. Schiele, M. Fritz, Knockoff nets: Stealing functionality of black-box models, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4954–4963.
[122] K. Krishna, G.S. Tomar, A.P. Parikh, N. Papernot, M. Iyyer, Thieves on Sesame Street! Model extraction of BERT-based APIs, 2020.
[123] T. Orekondy, B. Schiele, M. Fritz, Prediction poisoning: Towards defenses against DNN model stealing attacks, in: International Conference on Learning Representations, 2019.
[124] R. Shokri, M. Stronati, C. Song, V. Shmatikov, Membership inference attacks against machine learning models, in: 2017 IEEE Symposium on Security and Privacy (SP), IEEE, 2017, pp. 3–18.
[125] T. Liu, W. Wen, Y. Jin, SIN2: Stealth infection on neural network—A low-cost agile neural trojan attack methodology, in: 2018 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), IEEE, 2018, pp. 227–230.
[126] H. Xiao, K. Rasul, R. Vollgraf, Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017, arXiv preprint arXiv:1708.07747.
[127] K. Davaslioglu, Y.E. Sagduyu, Trojan attacks on wireless signal classification with adversarial machine learning, in: 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), IEEE, 2019, pp. 1–6.
[128] T.J. O'Shea, N. West, Radio machine learning dataset generation with GNU Radio, in: Proceedings of the GNU Radio Conference, Vol. 1, No. 1, 2016.
[129] Y. Shi, T. Erpek, Y.E. Sagduyu, J.H. Li, Spectrum data poisoning with adversarial deep learning, in: MILCOM 2018-2018 IEEE Military Communications Conference (MILCOM), IEEE, 2018, pp. 407–412.
[130] K. Davaslioglu, Y.E. Sagduyu, Generative adversarial learning for spectrum sensing, in: 2018 IEEE International Conference on Communications (ICC), IEEE, 2018, pp. 1–6.
[131] J. Steinhardt, P.W.W. Koh, P.S. Liang, Certified defenses for data poisoning attacks, in: Advances in Neural Information Processing Systems, 2017, pp. 3517–3529.
[132] A. Yenter, A. Verma, Deep CNN-LSTM with combined kernels from multiple branches for IMDB review sentiment analysis, in: 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), IEEE, 2017, pp. 540–546.
[133] M. Kesarwani, B. Mukhoty, V. Arya, S. Mehta, Model extraction warning in MLaaS paradigm, in: Proceedings of the 34th Annual Computer Security Applications Conference, 2018, pp. 371–380.
[134] J.R. Correia-Silva, R.F. Berriel, C. Badue, A.F. de Souza, T. Oliveira-Santos, Copycat CNN: Stealing knowledge by persuading confession with random non-labeled data, in: 2018 International Joint Conference on Neural Networks (IJCNN), IEEE, 2018, pp. 1–8.
[135] A.M. Martinez, The AR face database, CVC Technical Report 24, 1998.
[136] L. Yin, X. Wei, Y. Sun, J. Wang, M.J. Rosato, A 3D facial expression database for facial behavior research, in: 7th International Conference on Automatic Face and Gesture Recognition (FGR06), IEEE, 2006, pp. 211–216.
[137] M. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, Coding facial expressions with Gabor wavelets, in: Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, IEEE, 1998, pp. 200–205.
[138] D. Hitaj, L.V. Mancini, Have you stolen my model? Evasion attacks against deep neural network watermarking techniques, 2018, arXiv preprint arXiv:1809.00615.
[139] Y. Mizukami, K. Tadamura, J. Warrell, P. Li, S. Prince, CUDA implementation of deformable pattern recognition and its application to MNIST handwritten digit database, in: 2010 20th International Conference on Pattern Recognition, IEEE, 2010, pp. 2001–2004.
[140] S. Mohseni, M. Pitale, V. Singh, Z. Wang, Practical solutions for machine learning safety in autonomous vehicles, 2019, arXiv preprint arXiv:1912.09630.
[141] K.R. Varshney, H. Alemzadeh, On the safety of machine learning: Cyber-physical systems, decision sciences, and data products, Big Data 5 (3) (2017) 246–255.
[142] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A survey of methods for explaining black box models, ACM Comput. Surv. 51 (5) (2018) 1–42.
[143] P. Hall, An Introduction to Machine Learning Interpretability, O'Reilly Media, Incorporated, 2019.
[144] K. Sokol, P. Flach, Explainability fact sheets: a framework for systematic assessment of explainable approaches, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 56–67.
[145] I. Kamwa, S. Samantaray, G. Joós, On the accuracy versus transparency trade-off of data-mining models for fast-response PMU-based catastrophe predictors, IEEE Trans. Smart Grid 3 (1) (2011) 152–161.
[146] J.M. Alonso, C. Mencar, Building cognitive cities with explainable artificial intelligent systems, in: CEx@AI*IA, 2017.
[147] R.K. Bellamy, K. Dey, M. Hind, S.C. Hoffman, S. Houde, K. Kannan, P. Lohia, J. Martino, S. Mehta, A. Mojsilovic, et al., AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias, 2018, arXiv preprint arXiv:1810.01943.
[148] J. Choo, S. Liu, Visual analytics for explainable deep learning, IEEE Comput. Graph. Appl. 38 (4) (2018) 84–92.
[149] M.A. Ahmad, C. Eckert, A. Teredesai, Interpretable machine learning in healthcare, in: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, 2018, pp. 559–560.
[150] F. Doshi-Velez, B. Kim, Towards a rigorous science of interpretable machine learning, 2017, arXiv preprint arXiv:1702.08608.
[151] Explainable models for healthcare AI: ACM webinar, 2020, Accessed: 2020-07-22, https://tinyurl.com/y5ug9utp.
[152] S.M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, in: Advances in Neural Information Processing Systems, 2017, pp. 4765–4774.
[153] S. Bramhall, H. Horn, M. Tieu, N. Lohia, QLIME-A quadratic local interpretable model-agnostic explanation approach, SMU Data Science Review 3 (1) (2020) 4.
[154] E. Štrumbelj, I. Kononenko, Explaining prediction models and individual predictions with feature contributions, Knowl. Inf. Syst. 41 (3) (2014) 647–665.
[155] M.A. Rahman, M.S. Hossain, N.A. Alrajeh, N. Guizani, B5G and explainable deep learning assisted healthcare vertical at the edge: COVID-19 perspective, IEEE Netw. 34 (4) (2020) 98–105.
[156] R. Stirnberg, J. Cermak, S. Kotthaus, M. Haeffelin, H. Andersen, J. Fuchs, M. Kim, J.-E. Petit, O. Favez, Meteorology-driven variability of air pollution (PM1) revealed with explainable machine learning, Atmos. Chem. Phys. Discuss. (2020) 1–35.
[157] M. Haeffelin, L. Barthès, O. Bock, C. Boitel, S. Bony, D. Bouniol, H. Chepfer, M. Chiriaco, J. Cuesta, J. Delanoë, et al., SIRTA, a ground-based atmospheric observatory for cloud and aerosol research, in: Annales Geophysicae, 23 (2), Copernicus GmbH, 2005, pp. 253–275.
[158] A. Barredo-Arrieta, I. Laña, J. Del Ser, What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting, in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 2019, pp. 2232–2237.
[159] Real traffic flow data was retrieved from the Madrid open data portal, 2020, Accessed: 2020-08-05, https://datos.madrid.es/portal/site/egob/.
[160] J. Sun, J. Guo, X. Wu, Q. Zhu, D. Wu, K. Xian, X. Zhou, Analyzing the impact of traffic congestion mitigation: From an explainable neural network learning framework to marginal effect analyses, Sensors 19 (10) (2019) 2254.
[161] X. Wu, J. Guo, K. Xian, X. Zhou, Hierarchical travel demand estimation using multiple data sources: A forward and backward propagation algorithmic framework on a layered computational graph, Transp. Res. C 96 (2018) 321–346.
[162] S.G. Rizzo, G. Vantini, S. Chawla, Reinforcement learning with explainability for traffic signal control, in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), IEEE, 2019, pp. 3567–3572.
[163] S. Ghosal, D. Blystone, A.K. Singh, B. Ganapathysubramanian, A. Singh, S. Sarkar, An explainable deep machine vision framework for plant stress phenotyping, Proc. Natl. Acad. Sci. 115 (18) (2018) 4613–4618.
[164] K. Nagasubramanian, S. Jones, A.K. Singh, S. Sarkar, A. Singh, B. Ganapathysubramanian, Plant disease identification using explainable 3D deep learning on hyperspectral images, Plant Methods 15 (1) (2019) 98.
[165] A. Wolanin, G. Mateo-García, G. Camps-Valls, L. Gómez-Chova, M. Meroni, G. Duveiller, Y. Liangzhi, L. Guanter, Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt, Environ. Res. Lett. 15 (2) (2020) 024019.
[166] J.C. Reis, A. Correia, F. Murai, A. Veloso, F. Benevenuto, Explainable machine learning for fake news detection, in: Proceedings of the 10th ACM Conference on Web Science, 2019, pp. 17–26.
[167] G.C. Santia, J.R. Williams, BuzzFace: A news veracity dataset with Facebook user commentary and egos, in: Twelfth International AAAI Conference on Web and Social Media, 2018.
[168] T. Chen, C. Guestrin, XGBoost: A scalable tree boosting system, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 785–794.
[169] Uber shuts down self-driving operations in Arizona: CNN, 2020, Accessed: 2020-07-22, https://tinyurl.com/y3yqj9xm.
[170] J. Kim, J. Canny, Interpretable learning for self-driving cars by visualizing causal attention, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2942–2950.
[171] E. Soares, P. Angelov, D. Filev, B. Costa, M. Castro, S. Nageshrao, Explainable density-based approach for self-driving actions classification, in: 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE, 2019, pp. 469–474.
[172] J. Haspiel, N. Du, J. Meyerson, L.P. Robert Jr., D. Tilbury, X.J. Yang, A.K. Pradhan, Explanations and expectations: Trust building in automated vehicles, in: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 119–120.
[173] J. Kim, A. Rohrbach, T. Darrell, J. Canny, Z. Akata, Textual explanations for self-driving vehicles, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 563–578.
[174] Explainability of AI (and what it means for educational AI), 2020, Accessed: 2020-07-22, https://tinyurl.com/y3luk2me.
[175] C. Conati, K. Porayska-Pomsta, M. Mavrikis, AI in education needs interpretable machine learning: Lessons from open learner modelling, 2018, arXiv preprint arXiv:1807.00154.
[176] V. Putnam, C. Conati, Exploring the need for explainable artificial intelligence (XAI) in intelligent tutoring systems (ITS), in: IUI Workshops, 2019.
[177] S. Tulli, Explainability in autonomous pedagogical agents, in: AAAI, 2020, pp. 13738–13739.
[178] Explainable artificial intelligence (XAI), 2020, Accessed: 2020-07-22, https://www.darpa.mil/program/explainable-artificial-intelligence.
[179] J. Zhu, A. Liapis, S. Risi, R. Bidarra, G.M. Youngblood, Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation, in: 2018 IEEE Conference on Computational Intelligence and Games (CIG), IEEE, 2018, pp. 1–8.
[180] P. Panda, K. Roy, Explainable adversarial learning: Implicit generative modeling of random noise during training for adversarial robustness, 2018.
[181] J. Dhaliwal, S. Shintre, Gradient similarity: An explainable approach to detect adversarial attacks against deep learning, 2018, arXiv preprint arXiv:1806.10707.
[182] M. Melis, M. Scalas, A. Demontis, D. Maiorca, B. Biggio, G. Giacinto, F. Roli, Do gradient-based explanations tell anything about adversarial robustness to Android malware?, 2020, arXiv preprint arXiv:2005.01452.
[183] A. Hartl, M. Bachl, J. Fabini, T. Zseby, Explainability and adversarial robustness for RNNs, 2019, arXiv preprint arXiv:1912.09855.
[184] D.L. Marino, C.S. Wickramasinghe, M. Manic, An adversarial approach for explainable AI in intrusion detection systems, in: IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society, IEEE, 2018, pp. 3237–3243.
[185] A. Rahnama, A. Tseng, An adversarial approach for explaining the predictions of deep neural networks, 2020, arXiv preprint arXiv:2005.10284.
[186] C. Kakderi, N. Komninos, P. Tsarchopoulos, Smart cities and cloud computing: Lessons from the STORM CLOUDS experiment, J. Smart Cities 1 (2) (2019) 4–13.
[187] M.A. Jan, W. Zhang, M. Usman, Z. Tan, F. Khan, E. Luo, SmartEdge: An end-to-end encryption framework for an edge-enabled smart city application, J. Netw. Comput. Appl. 137 (2019) 1–10.
[188] B. Qolomany, I. Mohammed, A. Al-Fuqaha, M. Guizani, J. Qadir, Trust-based cloud machine learning model selection for industrial IoT and smart city services, IEEE Internet Things J. (2020).
[189] 6 security risks of enterprises using cloud storage and file sharing apps, 2020, Accessed: 2020-11-17, https://tinyurl.com/yy68o3z9.
[190] T. Braun, B.C. Fung, F. Iqbal, B. Shah, Security and privacy challenges in smart cities, Sustainable Cities Soc. 39 (2018) 499–507.
[191] S. Mallapuram, N. Ngwum, F. Yuan, C. Lu, W. Yu, Smart city: The state of the art, datasets, and evaluation platforms, in: 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), IEEE, 2017, pp. 447–452.
[192] A. Ali, J. Qadir, R. ur Rasool, A. Sathiaseelan, A. Zwitter, J. Crowcroft, Big data for development: Applications and techniques, Big Data Anal. 1 (1) (2016) 2.
[193] F. Samie, L. Bauer, J. Henkel, Hierarchical classification for constrained IoT devices: A case study on human activity recognition, IEEE Internet Things J. (2020).
[194] L. Floridi, M. Taddeo, What is data ethics?, Phil. Trans. R. Soc. A 374 (2083) (2016) 20160360.
[195] J.-J. Boté, M. Térmens, Reusing data: Technical and ethical challenges, DESIDOC J. Lib. Inform. Technol. 39 (6) (2019).
[196] D.R. Thomas, S. Pastrana, A. Hutchings, R. Clayton, A.R. Beresford, Ethical issues in research using datasets of illicit origin, in: Proceedings of the 2017 Internet Measurement Conference, 2017, pp. 445–462.
[197] D.J. Hand, Aspects of data ethics in a changing world: Where are we now?, Big Data 6 (3) (2018) 176–190.
[198] The ethics of data sharing: A guide to best practices and governance, 2020, Accessed: 2020-08-08, https://tinyurl.com/y2hyexr8.
[199] L. Taylor, L. Floridi, B. Van der Sloot, Group Privacy: New Challenges of Data Technologies, Vol. 126, Springer, 2016.
[200] H. Bauchner, R.M. Golub, P.B. Fontanarosa, Data sharing: An ethical and scientific imperative, JAMA 315 (12) (2016) 1238–1240.
[201] M. Beardsley, P. Santos, D. Hernández-Leo, K. Michos, Ethics in educational technology research: Informing participants on data sharing risks, Br. J. Educ. Technol. 50 (3) (2019) 1019–1034.
[202] E. Bertino, A. Kundu, Z. Sura, Data transparency with blockchain and AI ethics, J. Data Inform. Qual. (JDIQ) 11 (4) (2019) 1–8.
[203] X. Li, R. Lu, X. Liang, X. Shen, J. Chen, X. Lin, Smart community: An internet of things application, IEEE Commun. Mag. 49 (11) (2011) 68–75.
[204] K. Zhang, J. Ni, K. Yang, X. Liang, J. Ren, X.S. Shen, Security and privacy in smart city applications: Challenges and solutions, IEEE Commun. Mag. 55 (1) (2017) 122–129.
[205] A. Martínez-Ballesté, P.A. Pérez-Martínez, A. Solanas, The pursuit of citizens' privacy: A privacy-aware smart city is possible, IEEE Commun. Mag. 51 (6) (2013) 136–141.
[206] T. Hagendorff, The ethics of AI ethics: An evaluation of guidelines, Minds Mach. (2020) 1–22.
[207] J.S. Hiller, J.M. Blanke, Smart cities, big data, and the resilience of privacy, Hastings LJ 68 (2016) 309.
[208] Y. Lev-Aretz, Data philanthropy, Hastings LJ 70 (2018) 1491.
[209] M.U. Hassan, M.H. Rehmani, J. Chen, Differential privacy techniques for cyber physical systems: A survey, IEEE Commun. Surv. Tutor. 22 (1) (2019) 746–789.
[210] Everything you need to know about informed consent, 2020, Accessed: 2020-11-15, https://humansofdata.atlan.com/2018/04/informed-consent/.
[211] C. Drew, Data science ethics in government, Phil. Trans. R. Soc. A 374 (2083) (2016) 20160119.
[212] P. Hummel, M. Braun, P. Dabrock, Own data? Ethical reflections on data ownership, Phil. Technol. (2020) 1–28.
[213] Who owns the smart city's data?, 2020, Accessed: 2020-09-18, http://smartcityhub.com/governance-economy/who-owns-the-smart-citys-data/.
[214] E.P. Goodman, Smart city ethics: The challenge to democratic governance, 2019.
[215] Is it time to nationalise data?, 2020, Accessed: 2020-09-18, https://www.information-age.com/time-nationalise-data-123466627/.
[216] Why it's so hard for users to control their data, 2020, Accessed: 2020-09-18, https://tinyurl.com/y2y3cdoh.
[217] G. Bozzelli, A. Raia, S. Ricciardi, M. De Nino, N. Barile, M. Perrella, M. Tramontano, A. Pagano, A. Palombini, An integrated VR/AR framework for user-centric interactive experience of cultural heritage: The ArkaeVision project, Digital Appl. Archaeol. Cultural Heritage 15 (2019) e00124.
[218] Common challenges with interpreting big data (and how to fix them), 2020, Accessed: 2020-11-15, https://tinyurl.com/yxjszwda.
[219] Response vs non-response bias in surveys, 2020, Accessed: 2020-11-15, https://www.formpl.us/blog/response-non-response-bias.
[220] Addressing unintended bias in smart cities, 2020, Accessed: 2020-11-15, https://www.ichorstrategies.com/ideas-insights-blog/addressing-unintended-bias-in-smart-cities.
[221] H. Yu, Z. Yang, R.O. Sinnott, Decentralized big data auditing for smart city environments leveraging blockchain technology, IEEE Access 7 (2018) 6288–6296.
[222] L. Zang, Y. Yu, L. Xue, Y. Li, Y. Ding, X. Tao, Improved dynamic remote data auditing protocol for smart city security, Pers. Ubiquitous Comput. 21 (5) (2017) 911–921.
[223] J. Han, Y. Li, W. Chen, A lightweight and privacy-preserving public cloud auditing scheme without bilinear pairings in smart cities, Comput. Stand. Interfaces 62 (2019) 84–97.
[224] F. Peng, H. Tian, H. Quan, J. Lu, Data auditing for the internet of things environments leveraging smart contract, in: International Conference on Frontiers in Cyber Security, Springer, 2020, pp. 133–149.
[225] H.K. Patil, R. Seshadri, Big data security and privacy issues in healthcare, in: 2014 IEEE International Congress on Big Data, IEEE, 2014, pp. 762–765.
[226] H. Lee, K. Park, B. Lee, J. Choi, R. Elmasri, Issues in data fusion for healthcare monitoring, in: Proceedings of the 1st International Conference on PErvasive Technologies Related to Assistive Environments, 2008, pp. 1–8.
[227] M. Meingast, T. Roosta, S. Sastry, Security and privacy issues with health care information technology, in: 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, 2006, pp. 5453–5458.
[228] T. Ploug, In defence of informed consent for health record research – why arguments from 'easy rescue', 'no harm' and 'consent bias' fail, BMC Med. Ethics 21 (1) (2020) 1–13.
[229] S. Swedan, O.F. Khabour, K.H. Alzoubi, A.A. Aljabali, Graduate students reported practices regarding the issue of informed consent and maintaining of data confidentiality in a developing country, Heliyon 6 (9) (2020) e04940.
[230] K. van der Schyff, S. Flowerday, S. Furnell, Duplicitous social media and data surveillance: An evaluation of privacy risk, Comput. Secur. (2020) 101822.
[231] H. Kim, Y. Cha, T. Kim, P. Kim, A study on the security threats and privacy policy of intelligent video surveillance system considering 5G network architecture, in: 2020 International Conference on Electronics, Information, and Communication (ICEIC), IEEE, 2020, pp. 1–4.
[232] A. Romanou, The necessity of the implementation of privacy by design in sectors where data protection concerns arise, Comput. Law Secur. Rev. 34 (1) (2018) 99–110.
[233] F. Kreuter, G.-C. Haas, F. Keusch, S. Bähr, M. Trappmann, Collecting survey and smartphone sensor data with an app: Opportunities and challenges around privacy and informed consent, Soc. Sci. Comput. Rev. 38 (5) (2020) 533–549.
[234] S. Wachter, Normative challenges of identification in the internet of things: Privacy, profiling, discrimination, and the GDPR, Comput. Law Secur. Rev. 34 (3) (2018) 436–449.
[235] E.E. Anderson, S.B. Newman, A.K. Matthews, Improving informed consent: Stakeholder views, AJOB Empir. Bioethics 8 (3) (2017) 178–188.
[236] M. Raghavan, S. Barocas, J. Kleinberg, K. Levy, Mitigating bias in algorithmic hiring: Evaluating claims and practices, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 469–481.
[237] J. Silberg, J. Manyika, Notes from the AI frontier: Tackling bias in AI (and in humans), McKinsey Global Institute, June 2019.
[238] E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, et al., Bias in data-driven artificial intelligence systems—An introductory survey, Wiley Interdiscip. Rev. Data Mining Knowl. Discov. 10 (3) (2020) e1356.
[239] Y. Roh, G. Heo, S.E. Whang, A survey on data collection for machine learning: A big data-AI integration perspective, IEEE Trans. Knowl. Data Eng. (2019).
[240] A. Chander, R. Srinivasan, Creation of user friendly datasets: Insights from a case study concerning explanations of loan denials, 2019, arXiv preprint arXiv:1906.04643.
[241] The how of explainable AI: Pre-modelling explainability, 2020, Accessed: 2020-08-05, https://tinyurl.com/y4zufygs.
[242] Google AI blog: Facets: An open source visualization tool for machine learning training data, 2020, Accessed: 2020-08-05, https://tinyurl.com/y56xjm9j.
[243] M.R. Costa-jussà, R. Creus, O. Domingo, A. Domínguez, M. Escobar, C. López, M. Garcia, M. Geleta, MT-adapted datasheets for datasets: Template and repository, 2020, arXiv preprint arXiv:2005.13156.
[244] S. Holland, A. Hosny, S. Newman, The dataset nutrition label, Data Protect. Privacy: Data Protect. Democracy (2020) 1.
[245] T. Shi, B. Yu, E.E. Clothiaux, A.J. Braverman, Daytime arctic cloud detection based on multi-angle satellite data with case studies, J. Amer. Statist. Assoc. 103 (482) (2008) 584–593.
[246] W.J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, B. Yu, Interpretable machine learning: Definitions, methods, and applications, 2019, arXiv preprint arXiv:1901.04592.
[247] E.A. Feigenbaum, A. Barr, P.R. Cohen, The Handbook of Artificial Intelligence, 1981.
[248] T. Gebru, Oxford handbook on AI ethics book chapter on race and gender, 2019, arXiv preprint arXiv:1908.06165.
[249] A.F. Winfield, K. Michael, J. Pitt, V. Evers, Machine ethics: The design and governance of ethical AI and autonomous systems, Proc. IEEE 107 (3) (2019) 509–517.
[250] V.C. Müller, Ethics of artificial intelligence and robotics, 2020.
[251] J. Borenstein, A. Howard, Emerging challenges in AI and the need for AI ethics education, AI Ethics 1 (1) (2021) 61–65.
[252] J. Savulescu, H. Maslen, Moral enhancement and artificial intelligence: Moral AI? in: Beyond Artificial Intelligence, Springer, 2015, pp. 79–95.
[253] K. LaGrandeur, Emotion, artificial intelligence, and ethics, in: Beyond Artificial Intelligence, Springer, 2015, pp. 97–109.
[254] W. Wallach, C. Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2008.
[255] M. Anderson, S.L. Anderson, Machine Ethics, Cambridge University Press, 2011.
[256] D.J. Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics, MIT Press, 2012.
[257] P. Lin, K. Abney, G.A. Bekey, Robot Ethics: The Ethical and Social Implications of Robotics, Intelligent Robotics and Autonomous Agents series, 2012.
[258] T. Mulgan, Superintelligence: Paths, dangers, strategies, 2016.
[259] P. Lin, K. Abney, R. Jenkins, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford University Press, 2017.
[260] A. Potapov, S. Rodionov, Universal empathy and ethical bias for artificial general intelligence, J. Exp. Theor. Artif. Intell. 26 (3) (2014) 405–416.
[261] M. Brundage, Limitations and risks of machine ethics, J. Exp. Theor. Artif. Intell. 26 (3) (2014) 355–372.
[262] E. Davis, Ethical guidelines for a superintelligence, Artificial Intelligence 220 (2015) 121–124.
[263] S. Russell, S. Hauert, R. Altman, M. Veloso, Ethics of artificial intelligence, Nature 521 (7553) (2015) 415–416.
[264] T.J. Bench-Capon, Ethical approaches and autonomous systems, Artificial Intelligence 281 (2020) 103239.
[265] I.D. Raji, M.K. Scheuerman, R. Amironesei, You can't sit with us: Exclusionary pedagogy in AI ethics education, in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021, pp. 515–525.
[266] J. Morley, A. Elhalal, F. Garcia, L. Kinsey, J. Mökander, L. Floridi, Ethics as a service: A pragmatic operationalisation of AI ethics, Minds and Machines (2021) 1–18.
[267] M. Hickok, Lessons learned from AI ethics principles for future actions, AI Ethics 1 (1) (2021) 41–47.
[268] V. Akman, Introduction to the special issue on philosophical foundations of artificial intelligence, J. Exp. Theor. Artif. Intell. 12 (3) (2000) 247–250.
[269] M. Anderson, S.L. Anderson, Guest editors' introduction: Machine ethics, IEEE Intell. Syst. 21 (4) (2006) 10–11.
[270] S. Torrance, Special issue on ethics and artificial agents, AI Soc. 22 (4) (2008) 461.
[271] V. Dignum, Ethics in artificial intelligence: Introduction to the special issue, 2018.
[272] J.H. Chen, A. Verghese, Planning for the known unknown: Machine learning for human healthcare systems, Amer. J. Bioethics 20 (11) (2020) 1–3.
[273] N. Bostrom, E. Yudkowsky, The ethics of artificial intelligence, in: The Cambridge Handbook of Artificial Intelligence, Vol. 1, Cambridge University Press, Cambridge, 2014, pp. 316–334.
[274] O. Bendel, Handbuch Maschinenethik, Springer, 2019.
[275] M. Dubber, F. Pasquale, S. Das, The Oxford Handbook of Ethics of AI, 2020.
[276] N. David, J.G. McNutt, J.B. Justice, Smart cities, transparency, civic technology and reinventing government, in: Smart Technologies for Smart Governments, Springer, 2018, pp. 19–34.
[277] P. Milić, N. Veljković, L. Stoimenov, Semantic technologies in e-government: Toward openness and transparency, in: Smart Technologies for Smart Governments, Springer, 2018, pp. 55–66.
[278] E. Ismagilova, L. Hughes, N.P. Rana, Y.K. Dwivedi, Security, privacy and risks within smart cities: Literature review and development of a smart city interaction framework, Inform. Syst. Front. (2020) 1–22.
[279] S. Sholla, R. Naaz, M.A. Chishti, Ethics aware object oriented smart city architecture, China Commun. 14 (5) (2017) 160–173.
[280] S. Sholla, R.N. Mir, M.A. Chishti, Docile smart city architecture: Moving toward an ethical smart city, Int. J. Comput. Digital Syst. 7 (03) (2018) 167–174.
[281] R. Mark, G. Anya, Ethics of using smart city AI and big data: The case of four large European cities, ORBIT J. 2 (2) (2019) 1–36.
[282] P. Cardullo, C. Di Feliciantonio, R. Kitchin, The Right to the Smart City, Emerald Group Publishing, 2019.
[283] P. Calvo, The ethics of smart city (EoSC): Moral implications of hyperconnectivity, algorithmization and the datafication of urban digital society, Ethics Inform. Technol. 22 (2) (2020) 141–149.
[284] D. Offenhuber, Towards ethical legibility: An inclusive view of waste technologies, in: The Routledge Companion to Smart Cities, Routledge, 2020, pp. 210–224.
[285] S. Sholla, R.N. Mir, M.A. Chishti, A neuro fuzzy system for incorporating ethics in the internet of things, J. Ambient Intell. Humaniz. Comput. 12 (1) (2021) 1487–1501.
[286] K.S. Willis, A. Aurigi, The Routledge Companion to Smart Cities, Routledge, 2020.
[287] J.C. Augusto, Handbook of Smart Cities, Springer, 2019.
[288] R. Kitchin, T.P. Lauriault, G. McArdle, Data and the City, Routledge, 2017.
[289] P. Cardullo, Citizens in the 'Smart City': Participation, Co-Production, Governance, Routledge, 2020.
[290] M. Nagenborg, T. Stone, M.G. Woge, P.E. Vermaas, Technology and the City: Towards a Philosophy of Urban Technologies, Vol. 36, Springer Nature, 2021.
[291] Y. Zeng, E. Lu, C. Huangfu, Linking artificial intelligence principles, 2018, arXiv preprint arXiv:1812.04814.
[292] A. Jobin, M. Ienca, E. Vayena, The global landscape of AI ethics guidelines, Nat. Mach. Intell. 1 (9) (2019) 389–399.
[293] J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, M. Srikumar, Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI, Berkman Klein Center Research Publication (2020–1), 2020.
[294] P. Boddington, Towards a Code of Ethics for Artificial Intelligence, Springer, 2017.
[295] R. Calo, Artificial intelligence policy: A primer and roadmap, UCDL Rev. 51 (2017) 399.
[296] Artificial intelligence at Google: Our principles, 2020, Accessed: 2020-09-18, https://ai.google/principles.
[297] Perspectives on issues in AI governance, 2020, Accessed: 2020-09-18, https://tinyurl.com/tgq4kkr.
[298] OpenAI charter, 2020, Accessed: 2020-09-18, https://openai.com/charter/.
[299] Microsoft AI principles, 2020, Accessed: 2020-07-07, https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6.
[300] L.E. Parker, Creation of the national artificial intelligence research and development strategic plan, AI Mag. 39 (2) (2018).
[301] A. Bundy, Preparing for the future of artificial intelligence, 2017.
[302] National Science and Technology Council (US), Select Committee on Artificial Intelligence, The national artificial intelligence research and development strategic plan: 2019 update, 2019.
[303] Beijing AI principles, Beijing Academy of Artificial Intelligence, 2019.
[304] C. Ebell, R. Baeza-Yates, R. Benjamins, H. Cai, M. Coeckelbergh, T. Duarte, M. Hickok, A. Jacquet, A. Kim, J. Krijger, et al., Towards intellectual freedom in an AI ethics global community, AI Ethics 1 (2) (2021) 131–138.
[305] D. Schiff, J. Borenstein, J. Biddle, K. Laas, AI ethics in the public, private, and NGO sectors: A review of a global document collection, IEEE Trans. Technol. Soc. (2021).
[306] R. Chatila, J.C. Havens, The IEEE global initiative on ethics of autonomous and intelligent systems, in: Robotics and Well-Being, Springer, 2019, pp. 11–16.
[307] K. Shahriari, M. Shahriari, IEEE standard review—Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems, in: 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), IEEE, 2017, pp. 197–201.
[308] High-level expert group on artificial intelligence, 2020, Accessed: 2020-09-18, https://tinyurl.com/yxopcjv2.
[309] OECD principles on AI, 2020, Accessed: 2020-09-18, https://www.oecd.org/going-digital/ai/principles/.
[310] G20 adopted human-centred AI principles, 2020, Accessed: 2020-09-18, https://www.mofa.go.jp/files/000486596.pdf.
[311] T.M. Powers, J.-G. Ganascia, The ethics of the ethics of AI, in: The Oxford Handbook of Ethics of AI.
[312] M. Shanahan, The Technological Singularity, MIT Press, 2015.
[313] E.B. Kania, Battlefield singularity: Artificial intelligence, military revolution, and China's future military power, Washington, DC: CNAS, November 2017; Costello and Elsa Kania, "Quantum Technologies, US-China Strategic Competition, and Future Dynamics of Cyber Stability," CyCon US, 2017.
[314] C. Burr, N. Cristianini, Can machines read our minds?, Minds Mach. 29 (3) (2019) 461–494.
[315] S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Profile Books, 2019.
[316] J.J. Bryson, The artificial intelligence of the ethics of artificial intelligence: An introductory overview for law and regulation, in: The Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press, 2019.
[317] F. Sado, C.K. Loo, M. Kerzel, S. Wermter, Explainable goal-driven agents and robots – a comprehensive review and new framework, 2020, arXiv preprint arXiv:2004.09705.
[318] Apple Card algorithm sparks gender bias allegations against Goldman Sachs, 2020, Accessed: 2020-07-22, https://tinyurl.com/qr426ba.
[319] S.M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian, T. Back, M. Chesus, G.C. Corrado, A. Darzi, et al., International evaluation of an AI system for breast cancer screening, Nature 577 (7788) (2020) 89–94.
[320] S.M. McKinney, A. Karthikesalingam, D. Tse, C.J. Kelly, Y. Liu, G.S. Corrado, S. Shetty, Reply to: Transparency and reproducibility in artificial intelligence, Nature 586 (7829) (2020) E17–E18.
[321] B. Haibe-Kains, G.A. Adam, A. Hosny, F. Khodakarami, L. Waldron, B. Wang, C. McIntosh, A. Goldenberg, A. Kundaje, C.S. Greene, et al., Transparency and reproducibility in artificial intelligence, Nature 586 (7829) (2020) E14–E16.
[322] K. Yeung, A. Howes, G. Pogrebna, AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing, in: The Oxford Handbook of AI Ethics, Oxford University Press, 2019.
[323] P. Samangouei, M. Kabkab, R. Chellappa, Defense-GAN: Protecting classifiers against adversarial attacks using generative models, 2018, arXiv preprint arXiv:1805.06605.
[324] V. Zantedeschi, M.-I. Nicolae, A. Rawat, Efficient defenses against adversarial attacks, in: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 39–49.
[325] H. Zhang, J. Wang, Defense against adversarial attacks using feature scattering-based adversarial training, in: Advances in Neural Information Processing Systems, 2019, pp. 1831–1841.
[326] S. Shao, P. Wang, R. Yan, Generative adversarial networks for data augmentation in machine fault diagnosis, Comput. Ind. 106 (2019) 85–93.
[327] A. Antoniou, A. Storkey, H. Edwards, Data augmentation generative adversarial networks, 2017, arXiv preprint arXiv:1711.04340.
[328] X. Gao, F. Deng, X. Yue, Data augmentation in fault diagnosis based on the Wasserstein generative adversarial network with gradient penalty, Neurocomputing 396 (2020) 487–494.
[329] P. Delobelle, P. Temple, G. Perrouin, B. Frénay, P. Heymans, B. Berendt, Ethical adversaries: Towards mitigating unfairness with adversarial machine learning, 2020, arXiv preprint arXiv:2005.06852.
[330] J. Martinsson, E.L. Zec, D. Gillblad, O. Mogren, Adversarial representation learning for synthetic replacement of private attributes, 2020, arXiv preprint arXiv:2006.08039.
[331] D. Ericsson, A. Östberg, E.L. Zec, J. Martinsson, O. Mogren, Adversarial representation learning for private speech generation, 2020, arXiv preprint arXiv:2006.09114.
[332] J. Danaher, The threat of algocracy: Reality, resistance and accommodation, Phil. Technol. 29 (3) (2016) 245–268.
[333] A. Goldfarb, J. Gans, A. Agrawal, The Economics of Artificial Intelligence: An Agenda, University of Chicago Press, 2019.
[334] P. Boddington, Normative modes, in: The Oxford Handbook of Ethics of AI.
[335] P. Moradi, K. Levy, The future of work in the age of AI: Displacement or risk-shifting? 2020.
[336] J. Donath, Ethical issues in our relationship with artificial entities, in: The Oxford Handbook of Ethics of AI.
[337] J.H. Moor, The nature, importance, and difficulty of machine ethics, IEEE Intell. Syst. 21 (4) (2006) 18–21.
[338] J. Danaher, Robots, law and the retribution gap, Ethics Inform. Technol. 18 (4) (2016) 299–309.
[339] J.A. Kroll, Accountability in computer systems, 2020.
[340] B. Qolomany, M. Maabreh, A. Al-Fuqaha, A. Gupta, D. Benhaddou, Parameters optimization of deep learning models using particle swarm optimization, in: 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), IEEE, 2017, pp. 1285–1290.
[341] Y. Gong, B. Li, C. Poellabauer, Y. Shi, Real-time adversarial attacks, 2019, arXiv preprint arXiv:1905.13399.
[342] H.S.M. Lim, A. Taeihagh, Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities, Sustainability 11 (20) (2019) 5791.
[343] B. Biggio, F. Roli, Wild patterns: Ten years after the rise of adversarial machine learning, Pattern Recognit. 84 (2018) 317–331.
[344] L.H. Gilpin, D. Bau, B.Z. Yuan, A. Bajwa, M. Specter, L. Kagal, Explaining explanations: An overview of interpretability of machine learning, in: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), IEEE, 2018, pp. 80–89.
[345] D. Gunning, Explainable artificial intelligence (XAI), Defense Advanced Research Projects Agency (DARPA), Web, Vol. 2, 2017, p. 2.
[346] A. Korinek, Integrating Ethical Values and Economic Value to Steer Progress in Artificial Intelligence, Tech. rep., National Bureau of Economic Research, 2019.
[347] E. Awad, S. Dsouza, R. Kim, J. Schulz, J. Henrich, A. Shariff, J.-F. Bonnefon, I. Rahwan, The moral machine experiment, Nature 563 (7729) (2018) 59–64.
[348] Qatar's national artificial intelligence strategy launched, 2020, Accessed: 2020-08-05, https://tinyurl.com/y6p3bkng.