
Introduction

Nanotechnology is an innovative scientific field encompassing materials and equipment capable of manipulating the physicochemical properties of matter at the
molecular level. Biotechnology is the use of biological techniques and knowledge to manipulate
genetic, molecular and cellular functions in order to develop useful services and products in a
number of different areas, from health to agricultural sciences [1, 2]. Nanobiotechnology is
considered a novel combination of nanotechnology and biotechnology through which
conventional micro-technology can be merged into a real molecular approach. With this
technology, molecular or even atomic machines can be manufactured by integrating or imitating
biological phenomena or synthesizing small tools to modulate various properties of living
systems at the molecular level [3]. Therefore, nanobiotechnology can facilitate many pathways
in the life sciences by incorporating cutting edge applications of nanotechnology and information
technology into current biological problems. This is a leading technology with the potential to
break the boundaries between physics, chemistry, and biology to some extent and improve our
current understanding and ideas. Thus, over time, many new directions and challenges in research and diagnostics may emerge through the wide use of nanobiotechnology.
Artificial intelligence (AI) is a new research area that has come to occupy a prominent place in our lives. AI is now present in almost every industry that routinely handles large amounts of information. It offers predictive power grounded in data analysis and a degree of autonomous learning whose raw material is simply vast quantities of data. Informatics is concerned with extracting value from data, and the insights gained in this way have become a core source of business value. AI has a wide range of fundamental applications and can be applied across many different sectors and industries. In recent decades, artificial intelligence has been

widely used in technology research. The convergence of computing and technology will pave the way for various technological developments and broad new disciplines. In this chapter we survey the innovative and dynamic advances that apply artificial intelligence to different areas of nanobiotechnology. To be effective in sudden pandemics like COVID-19, an AI approach based on nanobiotechnology could offer concrete solutions for social betterment. This effort aims to provide a suitable platform for researchers to focus on advancements, challenges, and future prospects in the field of nanobiomedical applications.

Nano-biotechnology at a glance

Nanotechnology and biotechnology are among the most promising technologies of the 21st century.
Biotechnology deals with the physiological and metabolic processes of living organisms,
including microbial species. Nanotechnology deals with the application and development
of materials whose smallest functional unit is between 1 and 100 nm [4]. The combination of
these new technologies, i.e. nanobiotechnology, can play a remarkable role in the implementation
and development of many useful tools for the study of biological systems. Current research has
shown that microorganisms, plant extracts, and fungi can biologically produce nanoparticles [5].
Nanoparticles have extraordinary properties that differ from bulk forms of the same element: their electronic, optical and chemical properties diverge from those observed in bulk composites [6], largely because of their much higher surface-to-volume ratio.
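The surface-to-volume effect mentioned above can be made concrete with a short calculation: for a sphere, the ratio of surface area to volume is 3/r, so it grows rapidly as the particle shrinks. The particle sizes below (a 1 mm bulk grain versus a 50 nm nanoparticle) are illustrative choices, not values taken from the text.

```python
import math

def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a sphere, in m^-1 (equals 3 / radius)."""
    area = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return area / volume

bulk = surface_to_volume(1e-3)   # 1 mm bulk particle
nano = surface_to_volume(50e-9)  # 50 nm nanoparticle

print(f"bulk:  {bulk:.0f} m^-1")                                  # ~3,000 m^-1
print(f"nano:  {nano:.2e} m^-1")                                  # ~6.0e7 m^-1
print(f"ratio: {nano / bulk:.0f}x more surface per unit volume")  # 20000x
```

Shrinking the radius by a factor of 20,000 multiplies the surface available for chemical interaction, per unit of material, by the same factor, which is why nanoscale reactivity differs so sharply from bulk behaviour.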

Nanotechnology is very diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to investigating whether we can directly control matter at the atomic level. This idea entails the application of fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc. In the International System of Units (SI), the prefix "nano" means one-billionth, or 10^-9; therefore, one nanometer is one-billionth of a meter. Some examples that illustrate the nanoscale include:

1. A sheet of paper is about 100,000 nanometers thick.

2. A human hair is approximately 80,000-100,000 nanometers wide.

3. A single gold atom is about a third of a nanometer in diameter.

4. One nanometer is about as long as your fingernail grows in one second.

5. A strand of human DNA is 2.5 nanometers in diameter.

6. There are 25,400,000 nanometers in one inch.
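These scale figures can be sanity-checked with simple unit arithmetic; the hair width and nail growth rate used below are assumed mid-range values, not measurements from the text.

```python
NM_PER_M = 1e9  # nanometers per meter (prefix "nano" = 10^-9)

inch_in_m = 0.0254
print(inch_in_m * NM_PER_M)     # ~25,400,000 nm in one inch (item 6)

hair_width_m = 90e-6            # assumed ~90 micrometers, mid-range human hair
print(hair_width_m * NM_PER_M)  # ~90,000 nm, within the 80,000-100,000 range (item 2)

# Fingernails grow roughly 3 mm per month (an assumed typical rate);
# per second, that is on the order of one nanometer (item 4).
month_s = 30 * 24 * 3600
growth_nm_per_s = 3e-3 * NM_PER_M / month_s
print(round(growth_nm_per_s, 2))  # ~1.16 nm per second
```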

The illustration in Figure 2 gives three visual examples of the size and scale of nanotechnology, showing just how small things at the nanoscale are. Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules.
Everything on Earth is made up of atoms: the food we eat, the clothes we wear, the buildings and
houses we live in, and our bodies. But something as small as an atom is impossible to see with
the naked eye. In fact, it’s impossible to see with the microscopes typically used in a high school
science class. The microscopes needed to see things at the nanoscale were invented in the early
1980s.

Fig 2: The Scale of Things [2].

Once scientists had the right tools, such as the Scanning Tunneling Microscope (STM) and the
Atomic Force Microscope (AFM), the age of nanotechnology was born. Although modern nanoscience and nanotechnology are quite new, nanoscale materials have been used for centuries. Variously sized gold and silver particles created the colors in medieval churches' stained-glass
windows hundreds of years ago. The artists back then did not know that the process they used to
create these beautiful works of art actually led to changes in the composition of the materials
they were working with.

Today's scientists and engineers are finding a wide variety of ways to deliberately make
materials at the nanoscale to take advantage of their enhanced properties such as higher strength,
lighter weight, increased control of the light spectrum, and greater chemical reactivity than their larger-scale counterparts [3].

Biotechnology

Biotechnology can be broadly defined as the application of biological organisms, systems, and processes to manufacture products or provide services. Three generations of
biotechnology have been projected, beginning with the use of whole organisms (initially, unknowingly) in fermentation, for example in brewing. The second generation exploited greater
microbiological understanding and led to development of culture and extractive techniques in the
first half of the twentieth century (e.g., for the production of antibiotics from fungi). The third
generation, dating from the 1970s, is related to the isolation and application of restriction enzymes and monoclonal antibodies (e.g., recombinant production of insulin in bacteria, monoclonal drugs from mammalian cell hybridomas). The long history and breadth of biotechnology, in terms of its activities, technologies, and spheres of application, render precise and universal definitions of its meaning difficult. Third-generation biotechnology, with specific reference to its application in the pharmaceutical innovation process, encompasses the use of recombinant DNA and monoclonal antibody-based technologies since the 1970s, the emergence of genomics as a subset of biotechnology in the 1990s, and wider developments in the post-genomics period. In the closing decades of the twentieth century,
biotechnology emerged as a site of rapid change in science and technology and as an arena of
social and institutional transformation. The development of new techniques for studying,
manipulating, and redesigning living things produced important applications in medicine and
agriculture, and generated massive investment. In the world of research, biotechnology was often
at the forefront of change in scientific institutions and practices. More broadly, biotechnology, widely perceived as having 'revolutionary' implications, inspired both intense enthusiasm and determined opposition.

Bio-Nanotechnology
The terms nanobiotechnology and bionanotechnology are often used synonymously. However, whether a distinction
is to be made will depend on whether the focus is on the application of biological ideas or on the
study of biology with nanotechnology. Bionanotechnology generally refers to the study of how the goals of nanotechnology can be guided by studying how biological "machines" work and adapting these biological ideas to improve existing nanotechnologies or to create new ones. Nanobiotechnology, on the other hand, refers to the way nanotechnology is used to develop devices to study biological systems. In other words, nanobiotechnology is essentially a miniaturized biotechnology, whereas bionanotechnology is a specific application of nanotechnology. DNA nanotechnology and cell engineering, for example, would be classified as bionanotechnology because they work with biomolecules at the nanoscale. In contrast, many
emerging medical technologies that use nanoparticles as delivery systems or as sensors would be
examples of nanobiotechnology, as they involve the use of nanotechnology to advance the goals
of biology.
The definitions listed above are used whenever a distinction between nanobio and bionano is made in this article. However, given the overlapping use of these terms in modern language,
individual technologies may need to be evaluated to determine which term is more appropriate.
Therefore, it is better to discuss them in parallel. Nanobiotechnology, bionanotechnology and
nanobiology are terms that refer to the interface between nanotechnology and biology. Since the
topic has emerged recently, bionanotechnology and nanobiotechnology serve as generic terms
for various related technologies. This discipline helps to show the fusion of biological research
with different areas of nanotechnology. Concepts enhanced by nanobiology include: nanodevices
(such as biological machines), nanoparticles, and nanoscale phenomena that occur within the
discipline of nanotechnology.

This technical approach to biology enables scientists to visualize and develop systems that can
be used for biological research. Bio-inspired nanotechnology uses biological systems as
inspiration for technologies yet to be developed. However, like nano and biotechnology, bio-
nanotechnology has many potential ethical problems associated with it. The most important
goals most often found in nanobiology are the application of nano-tools to relevant medico-biological problems and their further development. The development of new tools such
as peptide nano-sheets for medical and biological purposes is another important goal of
nanotechnology. New nano-tools often emerge from the refinement of applications of the nano-
tools that are already in use. Imaging native biomolecules, biological membranes, and tissues is
also an important topic for nanobiology researchers. Other topics in nanobiology are the use of
cantilever array sensors and the application of nanophotonics to manipulate molecular processes
in living cells. Recently, the use of microorganisms to synthesize functional nanoparticles has
been of great interest. Microorganisms can change the oxidation state of metals. These microbial
processes have opened up new opportunities for us to explore new applications for the
biosynthesis of metal nanomaterials.

Artificial Intelligence (AI)


Artificial Intelligence (AI) is a concept that has been in public discourse for decades and is often portrayed in science fiction movies or in debates about how intelligent machines will conquer the world, banishing humanity to a submissive existence. While this image is a somewhat cartoonish representation of AI, the reality is that artificial intelligence has arrived, and many of us interact with the technology on a regular
basis in our daily lives. AI technology is no longer the domain of futurologists, but an integral
part of the business model of many organizations and a central strategic element in the plans of
many branches of the economy, medicine and government around the world. This transformative influence of AI has generated considerable academic interest, with recent studies examining the broader effects and consequences of the technology rather than only the effect of AI on performance, which had been the main area of research for several years.

The literature offers various definitions of AI, each containing the key concept of non-human intelligence programmed to perform certain tasks. Russell and Norvig (2016) [7] defined the term AI to describe systems that mimic cognitive functions generally associated with human attributes such as learning, language, and problem solving. A more detailed and perhaps more sophisticated characterization was presented by Kaplan and Haenlein (2019) [8], where the study describes AI in the context of its ability to independently interpret and learn from external data in order to achieve specific results through flexible adaptation. The use of big data has allowed algorithms to perform excellently at specific tasks (robotic vehicles, games, autonomous planning, etc.), though thought and sentiment have not yet been effectively replicated [7]. The common thread between these definitions is the increasing ability of machines to perform specific functions and tasks currently performed by people in the workplace and in society at large.

The ability of AI to overcome some of the intellectual, creative and computation-intensive limitations of humans is opening up new application domains in education, marketing, healthcare, finance and manufacturing, with consequent impact on productivity and performance. AI-enabled systems in enterprises are expanding rapidly, transforming business and manufacturing, and extending their reach into areas normally considered fully human [9].
The era of artificial intelligence systems has reached a level where autonomous vehicles,
chatbots, autonomous planning and scheduling, games, translations, medical diagnostics, and
even antispam can all be carried out through artificial intelligence. The views of artificial intelligence experts, as presented in Müller and Bostrom (2016) [10], predicted that AI systems will likely reach general human capability by 2075, and some experts believe that further advances in artificial intelligence towards superintelligence may be bad for humanity.

Application domains

The AI literature has identified several separate domains in which the technology can be
applied: digital imaging, education, government, healthcare, manufacturing, robotics and
supply chain. Studies have analyzed the impact of AI and its potential to replace humans via
intelligent automation within manufacturing, supply chain, production and even the
construction industry [1]. Existing factory processes will be increasingly subject to analysis
to ascertain whether they could be automated [12]. AI centric technologies will be able to
monitor and control processes in real time offering significant efficiencies over manual
processes [13]. Organizations have posited the benefits of integrating AI technologies in the
development of intelligent manufacturing and the smart factory of the future [14]. The
literature has generally moved on from the somewhat dated concepts of AI based machines
replacing all human workers. Studies have recognized the realistic limits of the continuing
drive to automation, highlighting a more realistic human in the loop concept where the focus
on AI is to enhance human capability, not replace it [15]. Humans are likely to move up the
value chain to focus on design and integration related activities as part of an integrated AI,
machines and human based workforce [16,17]. Manufacturing organisations are likely to use
AI technologies within a production environment where intelligent machines are socially
integrated within the manufacturing process, effectively functioning as co-workers for key
tasks or to solve significant problems [18].

Khanna et al. [19] emphasized the importance of AI in healthcare, particularly in medical
informatics. There is a growing requirement for new technologies that understand the
complexities of hospital operations and provide the necessary productivity gains in resource
usage and patient service delivery. AI has the potential to offer improved patient care and
diagnosis as well as interpretation of medical imaging in areas such as radiology [20]. Screening
for breast cancer (BC) and other related conditions could be more accurate and efficient
using AI technology. Houssami et al. [21] studied the use of AI for BC screening, highlighting its potential in reducing false positives and related human detection error. The study acknowledges some of the interrelated ethical and societal trust factors, but the boundaries of reliance on AI and of acceptable human-in-the-loop involvement are still to be defined. The
application of AI and related digital technologies within public health is rapidly developing.
However, the collection, storage, and sharing of large data sets derived from AI technology raise ethical questions connected to governance, quality, safety, standards, privacy and data ownership [22].
The benefits of utilizing AI technology extend to insurance claims within healthcare: claim submission, claim adjudication and fraud analysis can all significantly benefit from AI use.
Education and information search is an area where the literature has identified the potential
benefits of AI technology solutions [23]. Chaudhri et al. [24] discussed the application of AI in education to improve teacher effectiveness and student engagement. The study analyzed the
potential of AI within education in the context of intelligent game-based learning
environments, tutoring systems and intelligent narrative technologies. The relevance of libraries
in the modern technology era has received focus within the literature. Arlitsch and Newell [25]
discussed how AI can change library processes, staffing requirements and library users. It is
important for libraries to focus on human qualities and the value added by human interaction integrated with AI to provide a richer user experience.

Data and information

The topic of big data and its integration with AI has received significant interest within the
wider literature. Studies have identified the benefits of applying AI technologies to big data
problems and the significant value of analytic insight and predictive capability for a number
of scenarios [26]. Health-related studies have analyzed the impact and contribution of big data and AI, arguing that these technologies can greatly support patient health-based diagnosis and predictive capability [27]. Big Data Analytics (BDA) develops the
methodological analysis of large data structures, often categorised under the terms: volume,
velocity, variety, veracity and value-adding. BDA combined with AI has the potential to transform areas of manufacturing, health and business intelligence, offering advanced insights within a predictive context [28, 29].

Organizations are increasingly deploying data visualization tools and methods to make
sense of their big data structures. In scenarios where the limitations of human perception and
cognition are taken into account, greater levels of understanding and interpretation can be
gained from the analysis and presentation of data using AI technologies [30]. The analysis
and processing of complex heterogeneous data is problematic. Organizations can extract
significant value and key management information from big data via intelligent AI based
visualization tools [31].

Challenges

The implementation of AI technologies can present significant challenges for governments and organisations as the scope and depth of potential applications increases and the use of AI becomes more mainstream. These challenges are categorized in Fig. 1 and discussed in this section. Table 2 lists the specific AI challenges from the literature and breaks down the details of each challenge.

Social challenges

The increasing use of AI is likely to challenge cultural norms and act as a potential barrier
within certain sectors of the population. For example, Xu et al. (2019) [32] highlighted the
challenges that AI will bring to healthcare in the context of the change in interaction and patient
education. This is likely to impact the patient as well as the clinician. The study highlighted the
requirement for clinicians to learn to interact with AI technologies in the context of healthcare delivery, and for patient education to mitigate the fear of technology for many patient
demographics [32]. Social challenges have been highlighted as potential barriers to the further
adoption of AI technologies. Sun and Medaglia [33] identified social challenges relating to
unrealistic expectations towards AI technology and insufficient knowledge on values and
advantages of AI technologies. Studies have also discussed the social aspects of potential job
losses due to AI technologies. This specific topic has received widespread publicity in the media and has been debated within numerous forums. The study by Risse [34] proposed that AI creates challenges for humans that can affect the nature of work and potentially influence people's status as participants in society. Human workers are likely to progress up the value chain to focus
on utilizing human attributes to solve design and integration problems as part of an integrated AI
and human centric workforce [16, 17].

Economic challenges

The mass introduction of AI technologies could have a significant economic impact on organisations and institutions in the context of required investment and changes to working
practices. Reza Tizhoosh and Pantanowitz [35] focused on the affordability of technology
within the medical field arguing that AI is likely to require substantial financial investment. The
study highlighted the impact on pathology laboratories where current financial pressures may be
exacerbated by the additional pressures to adopt AI technologies. Sun and Medaglia [33]
identified several healthcare related economic challenges arguing that the introduction of AI
based technologies is likely to influence the profitability of hospitals and potentially raise treatment costs for patients.

Table 2
AI challenges from the literature.

Social challenges: Patient/clinician education; cultural barriers; human rights; country-specific disease profiles; unrealistic expectations towards AI technology; country-specific medical practices; insufficient knowledge of the values and advantages of AI technologies.

Economic challenges: Affordability of required computational expenses; high treatment costs for patients; high cost and reduced profits for hospitals; ethical challenges including lack of trust towards AI-based decision making and unethical use of shared data.

Data challenges: Lack of data to validate benefits of AI solutions; quantity and quality of input data; transparency and reproducibility; dimensionality obstacles; insufficient size of the available data pool; lack of data integration and continuity; lack of standards for data collection; format and quality.

Organisational and managerial challenges: Realism of AI; better understanding of the needs of health systems; organisational resistance to data sharing; lack of in-house AI talent; threat of replacement of the human workforce; lack of strategy for AI development; lack of interdisciplinary talent.

Technological and technology implementation challenges: Non-Boolean nature of diagnostic tasks; adversarial attacks; lack of transparency and interpretability; design of AI systems; AI safety; specialisation and expertise; big data; architecture issues and complexities in interpreting unstructured data.

Political, legal and policy challenges: Copyright issues; governance of autonomous intelligence systems; responsibility and accountability; privacy/safety; national security threats from foreign-owned companies collecting sensitive data; lack of rules of accountability in the use of AI; costly human resources still legally required to account for AI-based decisions; lack of official industry standards for AI use and performance evaluation.

Ethical challenges: Responsibility and explanation of decisions made by AI; processes relating to AI and human behaviour; compatibility of machine versus human value judgement; moral dilemmas; AI discrimination.

Technological and technology implementation challenges

Studies have analysed the non-Boolean nature of diagnostic tasks within healthcare and the challenges of applying AI technologies to the interpretation of data and imaging. Reza Tizhoosh and Pantanowitz [35] highlighted the fact that humans apply cautious language or descriptive terminology, not just binary language, whereas AI-based systems tend to function as a black box in which the lack of transparency acts as a barrier to adoption of the technology. These points are
reinforced in Cleophas and Cleophas [36], where the research identified several limitations of AI for imaging and medical diagnosis, thereby impacting clinician confidence in the technology. Cheshire [37] discusses the limitation of medical AI termed 'loop think'. Loop think is defined as a type of implicit bias in which the system does not correctly reappraise information or revise an ongoing plan of action; thus, AI may disfavor qualitative human moral principles. Weak loop think refers to the intrinsic inability of computer intelligence to redirect executive data flow because of its fixed internal hard wiring, un-editable sectors of its operating system, or unalterable lines of its program code. Strong loop think refers to AI suppression due to internalization of the ethical framework.

Challenges exist around the architecture of AI systems and the need for sophisticated structures to understand human cognitive flexibility, learning speed and even moral qualities [38,39]. Sun and Medaglia [33] reviewed the technological challenges of algorithm opacity and the lack of ability to read unstructured data. Thrall et al. [40] considered the challenge of a limited pool of investigators trained in AI and radiology. This could be solved by recruiting scientists
with backgrounds in AI, but also by establishing educational programmes in radiology
professional services [41]. Varga-Szemes et al. [42] highlighted that machine learning
algorithms should be created by machine learning specialists with relevant knowledge of
medicine and an understanding of possible outcomes and consequences. It is highlighted that AI
systems do not yet have the essence of human intelligence [43]. AI systems are not able to understand the situations humans experience or to derive the right meaning from them. This barrier of meaning makes current AI systems vulnerable in many areas, but particularly to hacker attacks known as 'adversarial examples'. In these attacks, a hacker makes specific and subtle changes to sound, image or text files, changes which have no perceptible effect on humans but could cause a programme to make potentially catastrophic errors. As the programmes
do not understand the inputs they process and outputs they produce, they are susceptible to
unexpected errors and undetectable attacks. These impacts can influence domains such as:
computer vision, medical image processing, speech recognition and language processing [43].
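As a hedged illustration of how such an adversarial example works, the sketch below attacks a toy linear classifier rather than a real speech or vision model: each input feature is nudged by a tiny amount chosen against the model's weights, and although no single change is noticeable, the classification flips. The model, weights and epsilon are all invented for the demonstration.

```python
import numpy as np

# Toy linear "classifier": score(x) = w . x; positive score -> class A.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # fixed model weights (stand-in for a trained model)
x = w * 0.01               # an input the model confidently classifies as A

def classify(v):
    return "A" if np.dot(w, v) > 0 else "B"

assert classify(x) == "A"

# FGSM-style perturbation: step each feature slightly *against* the sign of
# its weight. Each individual change is tiny (0.05 per feature), but the
# changes accumulate in the score and flip the decision.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print(classify(x))                # the clean input: class A
print(classify(x_adv))            # the perturbed input: flips to class B
print(np.max(np.abs(x_adv - x)))  # ~0.05: the largest per-feature change
```

A real attack computes the perturbation from the gradient of a deep model's loss rather than from known weights, but the principle is the same: because the program does not understand its inputs, imperceptibly small changes can produce a confidently wrong output.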

Political, legal and policy challenges

Gupta and Kumari [44] discussed legal challenges connected to AI responsibility when errors
occur using AI systems. Another legal challenge of using AI systems is the issue of copyright. Current legal frameworks need significant changes in order to effectively protect and incentivize human-generated work [45]. Wirtz, Weyerer, and Geyer [46] focused on the
challenges of implementing AI within government positing the requirement for a more holistic
understanding of the range and impact of AI based applications and associated challenges. The
study analysed the concept of AI law and regulations to control governance including
autonomous intelligence systems, responsibility and accountability as well as privacy/safety.

Studies have identified the complexities of implementing AI based systems within government
and the public sector. Sun and Medaglia (2019) [33] used a case study approach to analyze the
challenges of applying AI within the public sector in China. The study analyzed three groups of
stakeholders – government policy-makers, hospital managers/doctors, and IT firm managers to
identify how they perceive the challenges of AI adoption in the public sector. The study
analyzed the scope of changes and impact on citizens in the context of: Political, legal and policy
challenges as well as national security threats from foreign-owned companies.

Ethical challenges

Researchers have discussed the ethical dimensions of AI and the implications of greater use of the technology. Individuals and organisations can exhibit a lack of trust and concerns relating to the ethical dimensions of AI systems and their use of shared data [33]. The rapid pace of change and development of AI technologies increases the concern that ethical issues are not dealt with formally. It is not clear how ethical and legal concerns, especially around responsibility for and analysis of decisions made by AI-based systems, can be solved. Adequate policies, regulations, ethical guidance and a legal framework to prevent the misuse of AI should be developed and enforced by regulators [47]. Gupta and Kumari (2017) [44] reinforce many of these points, highlighting the ethical challenges relating to greater use of AI, data sharing issues and the interoperability of systems. AI-based systems may exhibit levels of discrimination even when the decisions made do not involve humans in the loop, highlighting the criticality of AI algorithm transparency [48].

Future opportunities

AI technology in all its forms is likely to see greater levels of adoption within organisations
as the range of applications and levels of automation increase. Studies have estimated that
by 2030, 70 per cent of businesses are likely to have adopted some form of AI technology
within their business processes or factory setting [49]. Studies have posited the benefits of
greater levels of adoption of AI within a range of applications, with manufacturing, healthcare and digital marketing attracting significant academic interest [50].

The factories of the future are likely to utilise AI technology extensively, as production
becomes more automated and industry migrates to a more intelligent platform using AI and
cyber-physical systems [17]. Within healthcare-related studies, researchers have proposed new
opportunities for the application of AI within medical diagnosis and pathology, where mundane
tasks can be automated with greater levels of speed and accuracy [35]. Through the use of human
biofield technology, AI systems linked to sensors placed on and near the human body can
monitor health and well-being [26]. AI technologies will be able to monitor numerous life-sign
parameters via Body Area Networks (BANs), where remote diagnosis requiring specialised
clinical opinion and intervention will be checked by a human (Hughes, Wang, & Chen, 2012).

AI technologies have been incorporated into marketing and retail, where big data analytics are
used to develop personalised profiles of customers and their predicted purchasing habits.
Understanding and predicting consumer demand via integrated supply chains is more critical
than ever, and AI technology is likely to be a critical integral element. One study predicts that
demand forecasting using AI will more than treble between 2019 and 2023 and that chatbot
interactions will reach 22bn in the same period, from current levels of 2.6bn. The study highlights
that firms are investing heavily in AI to improve trend analysis, logistics planning and stock
management. AI-based innovations such as the virtual mirror and visual search are set to improve
customer interaction and narrow the gap between the physical and virtual shopping experience.
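As a simple illustration of the forecasting idea (not the methodology of the study cited above), a demand forecast can be sketched as a least-squares trend line extrapolated forward. The data and figures below are invented for illustration:

```python
# Illustrative sketch: a minimal demand-forecasting baseline using an
# ordinary least-squares trend line. Real AI-driven forecasting systems
# use far richer models; the data below is invented.

def fit_trend(demand):
    """Fit y = a + b*t by least squares and return (a, b)."""
    n = len(demand)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(demand) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, demand)) \
        / sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def forecast(demand, steps):
    """Extrapolate the fitted trend `steps` periods ahead."""
    a, b = fit_trend(demand)
    n = len(demand)
    return [a + b * (n + k) for k in range(steps)]

# Hypothetical monthly unit demand
history = [100, 110, 120, 130, 140, 150]
print(forecast(history, 2))  # trend continues: [160.0, 170.0]
```

A production system would add seasonality, promotions and external signals, but the principle of learning a pattern from history and projecting it forward is the same.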

Researchers have argued for a more realistic future in which the relationship between AI and
humans is likely to transition towards a human-in-the-loop collaborative context rather than an
industry-wide replacement of humans [15]. Stead (2018) [51] asserts the importance of establishing
a partnership in which the AI machine will calculate and/or predict and humans will explain and
decide on the appropriate action. Humans are likely to focus on higher value-add activities
requiring design, analysis and interpretation based on AI processing and outputs. Future
organizations are likely to focus on creating value from an integrated human and AI
collaborative workforce [16, 17].

1. Perspectives from invited contributors

This section has been structured to present consolidated yet multiple perspectives on various
aspects of AI from invited expert contributors. We invited each expert to set out their contribution
in up to 3–4 pages, which are compiled in this section in largely unedited form, expressed directly
as they were written by the authors. Such an approach creates an inherent unevenness in the
logical flow but captures the distinctive orientations of the experts and their recommendations at
this critical juncture in the evolution of AI.
Explainability and AI systems – John S. Edwards

Explainability is the ability to explain the reasoning behind a particular decision, classification
or forecast. It has recently become an increasingly topical issue in both the theory and practice
of AI and machine learning systems.
Challenges. Explainability has been an issue ever since the earliest days of AI use in business in
the 1980s. This accounted for much of the early success of rule-based expert systems, where
explanations were straightforward to construct, compared to frame-based systems, where
explanations were more difficult, and neural networks, where they were impossible. At their
inception, neural networks were unable to give explanations except in terms of weightings
with little real-world relevance. As a result, they were often referred to as “black box” systems.
More recently, so-called deep learning systems (typically neural networks with more than one
hidden layer) make the task of explanation even more difficult.
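The point about weightings can be made concrete with a sketch. The tiny network and its weights below are invented; the only "explanation" such a model natively provides is its raw parameters:

```python
# Illustrative sketch (invented weights): a tiny one-hidden-layer network.
# The only native "explanation" of its decision is the weight matrices,
# which carry little real-world meaning for an end-user.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical trained weights: 2 inputs -> 2 hidden units -> 1 output
W_hidden = [[0.8, -0.4], [0.3, 0.9]]   # rows: hidden units
W_output = [1.2, -0.7]

def predict(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_hidden]
    return sigmoid(sum(w * h for w, h in zip(W_output, hidden)))

score = predict([1.0, 0.0])
print(round(score, 3))
# The decision is perfectly reproducible, but "because W_hidden[0][0] = 0.8"
# is not an explanation a loan applicant or patient could act on.
```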

The implied “gold standard” has been that when a person makes a decision, they can be asked
to give an explanation, but this human explanation process is more complex than is usually
recognised in the AI literature, as indicated by Miller (2019) [52]. Even if a human explanation
is given that appears valid, is it accurate? Face-to-face job interviews are notorious for the risk of
being decided on factors (such as how the interviewee walks across the room) other than the
ones the panel members think they are using. This is related to the difficulty of making tacit
knowledge explicit.

There is also a difference between the “how” explanations that are useful for AI system
developers and the “why” explanations that are most helpful to end-users. Preece (2018)
describes how this too was recognised in the earliest days of expert systems such as MYCIN.
Nevertheless, some of the recent AI literature seems unaware of this; it is perhaps significant
that the machine learning literature tends to use the term interpretability rather than
explainability. There are, however, many exceptions, such as work identifying four reasons for
explanation: to justify, to control, to improve and to discover.

An important change in context is that governments are now introducing guidelines for the
use of any type of automated decision-making system, not just AI systems. For example, the
European Union's General Data Protection Regulation (GDPR) Article 22 states “The data
subject shall have the right not to be subject to a decision based solely on automated
processing”, and the associated Recital 71 gives the data subject “the right…to obtain an
explanation of the decision reached after such assessment and to challenge the decision”.
Similarly, the UK government has introduced a code of conduct for the use of “data-driven
technology” in health and social care [53]. In regulated industries, existing provisions
about decision-making, such as outlawing “red-lining” in evaluating mortgage or loan
applications, which were first enshrined in law in the United States (US) as far back as the
1960s, also apply to AI systems.

Opportunities. People like explanations, even when they are not really necessary. It is not a
major disaster if Netflix® recommends a film I don’t like to me, but even there a simple
explanation like “because you watched <name of film/TV programme>” is added.

Unfortunately, at the time of writing, it doesn’t matter whether I watched that other film/TV
programme all the way through or gave up after five minutes. There is plenty of scope for
improving such simple explanations. More importantly, work here would give a foundation for
understanding what really makes a good explanation for an automated decision, and this
understanding should be transferable to systems which carry a much higher level of
responsibility, such as safety-critical systems, medical diagnosis systems or crime detection
systems.
Alternatively, a good explanation for an automated decision may not need to be judged on
the same criteria that would be used for a human decision, even in a similar domain. People are
good at recognising faces and other types of image, but most of us do not know how we do
it, and so cannot give a useful explanation. Research into machine learning-based image
recognition is relatively well advanced. The work of researchers at IBM and MIT on
understanding the reasoning of generative adversarial networks (GANs) for image recognition
suggests that “to some degree, GANs are organising knowledge and information in ways that are
logical to humans” [54]. For example, one neuron in the network corresponds to the concept
“tree”. This line of study may even help us to understand how we humans do some tasks.
Contrary to both of these views, London (2019) [55] argues that in medical diagnosis and
treatment, explainability is less important than accuracy. London argues that human medical
decision-making is not so different from a black box approach, in that there is often no agreed
underlying causal model: “Large parts of medical practice frequently reflect a mixture of
empirical findings and inherited clinical culture” (p. 17). The outputs from a deep learning black
box approach should therefore simply be judged in the same way, using clinical trials and
evidence-based practice, and research should concentrate on striving for accuracy.

Lastly, advances in data visualisation techniques and technology offer the prospect of
completely different approaches to the traditional “explanation in words”.

Research agenda. We offer suggestions for research in five linked areas.

• Can explanations from a single central approach be tailored to different classes of explainee?
Explanation approaches are typically divided into transparency and post hoc interpretation, the
former being more suitable for “how” explanations, the latter for “why”. Is it possible to tailor
explanations from a single central approach to different classes of explainee (developers,
end-users, domain experts…)? For example, a visualisation approach for end-users that would
allow drill-down for more knowledgeable explainees?
• What sort of explanation best demonstrates compliance with statute/regulation? For example,
how specific does it have to be? UK train travellers often hear “this service is delayed because
of delays to a previous service”, which is a logically valid but completely useless explanation.
Do there need to be different requirements for different industry sectors? What form should the
explanation take – words, pictures, probabilities? The latter links to the next point.
• Understanding the validity and acceptability of using probabilities in AI explanation. It is well
known that many people are poor at dealing with probabilities. Are explanations from AI
systems in terms of probabilities acceptable? This is already widely used in the healthcare
sector, but it is not clear how well understood even the existing explanations are, especially in
the light of the comments by London mentioned in the previous section.
• Improving explanations of all decisions, not just automated ones. Can post hoc approaches
like the IBM/MIT work on GANs produce better explanations of not only automated decisions,
but also those made by humans?
• Investigating the perceived trade-off between transparency and system performance. It is
generally accepted that there is an inverse relationship between performance/accuracy and
explainability for an AI system, and hence a trade-off that needs to be made. For example, Niel
Nickolaisen, vice president and CTO at human resource consulting company O.C. Tanner,
observed: “I agree that there needs to be some transparency into the algorithms, but does that
weaken the capabilities of the [machine learning] to test different models and create the
ensemble that best links cause and effect?”. Does this trade-off have to be the case? Could a
radical approach to explanation be an outlier to the trade-off curve?
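One concrete post hoc technique relevant to this agenda is permutation importance, which probes a black-box model by measuring how much shuffling one input feature degrades accuracy. The "model" and data below are invented for illustration:

```python
# Illustrative sketch of one post hoc explanation technique: permutation
# importance. Shuffling a feature the model relies on hurts accuracy;
# shuffling an ignored feature changes nothing. Model and data are invented.
import random

def model(x):
    # Stand-in "black box": depends only on feature 0, ignores feature 1.
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(data)

def permutation_importance(data, labels, feature, seed=0):
    rng = random.Random(seed)
    shuffled_col = [x[feature] for x in data]
    rng.shuffle(shuffled_col)
    perturbed = [list(x) for x in data]
    for row, v in zip(perturbed, shuffled_col):
        row[feature] = v
    return accuracy(data, labels) - accuracy(perturbed, labels)

data = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [1, 0, 1, 0]

print(permutation_importance(data, labels, 0) >= 0)   # True
print(permutation_importance(data, labels, 1) == 0.0) # True: unused feature
```

Such scores are a "why-ish" signal for end-users without opening the model internals, which is exactly the transparency/post hoc distinction drawn in the first bullet above.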

Table 6
AI opportunities.

• Modelling explainability (John S. Edwards): In the fields of medical diagnosis and treatment,
explainability is perhaps less important than accuracy. Opportunities exist in conceptualising AI
in the context of a black box approach where outputs should be judged using clinical trials and
evidence-based practice to strive for accuracy (London, 2019).
• Organisation effectiveness (Paul Walton): There are a number of opportunities for organisations
to utilise AI within a number of categories: organisational environment, operations, interaction,
case management automation, governance and adaptiveness. AI can provide the opportunity for
organisations to develop both operational and strategic situation awareness and to link that
awareness through to action increasingly quickly, efficiently and effectively.
• Transformational potential of AI (Yanqing Duan, John Edwards, Yogesh Dwivedi):
Opportunities exist for the development of a greater understanding of the real impact of decision
making within organisations using AI in the context of: key success factors, culture, performance,
and system design criteria.
• Automation complacency (Crispin Coombs): Automation complacency and bias can speed up
decision making when recommendations are correct. In instances where AI provides incorrect
recommendations, omission errors can occur as humans are either out of the loop or less able to
assure decisions. Opportunities exist to explore and understand the factors that influence
over-reliance on automation and how to counter identified errors.
• Workforce transition (Spyros Samothrakis): Society is likely to be significantly impacted by the
AI technological trajectory if, as commentators suggest, society achieves full automation in the
next 100 years (Müller & Bostrom, 2016; Walsh, 2018). The opportunity here for organisations
and government is the effective management of this transition to mitigate this potentially painful
change.
• Enabler for platforms and ecosystems (Arpan Kar): The exploration of opportunities as to how
AI can be leveraged not only at the firm level but as an enabler in platforms and ecosystems. AI
may help to connect multiple firms and help in automating and managing information flows
across multiple organisations in such platforms. Significant opportunities exist for AI to be used
in such platforms to impact platform, firm and ecosystem productivity.
• Enhanced digital marketing (Emmanuel Mogaji): AI offers opportunities to enhance campaign
creation, planning, targeting, and evaluation, and to process big datasets faster and more
efficiently. Opportunities exist for more innovative and relevant content creation and sharing
using AI tools and technologies.
• Sales performance (Kenneth Le Meunier-FitzHugh & Leslie Caroline Le Meunier-FitzHugh):
Opportunities exist for improving sales performance using AI-driven dashboards, predictive and
forecasting capability, and the use of big data to retain and develop new customer leads.
Additionally, the use of AI algorithms can contribute to productivity and provide sales process
enhancement through the elimination of non-productive activities and the removal of mundane
jobs.
• Emerging markets (P. Vigneswara Ilavarasan): The presence of complementary assets is likely
to influence the transition to AI in the developing world. Opportunities exist for the lessons learnt
from India and Kenya to benefit similar low-income countries in future. For instance, Pakistan,
Vietnam, and others are imitating the success story of Indian software services exports.
• People centred AI (Jak Spencer): AI can potentially be used to enhance ‘softer’ goals rather
than the drive to economic productivity or efficiency. The genuine needs of people can be
identified to solve real-world problems. As our interactions with machines start to become more
and more human-like, the opportunity lies in the design of new personalities and the creation of
new types of relationship.

Taste, fear and cultural proximity

Applications of AI in COVID-19 pandemic


In this worldwide health crisis, the medical industry is looking for new technologies to monitor
and control the spread of the COVID-19 (coronavirus) pandemic. AI is one such technology
that can readily track the spread of the virus, identify high-risk patients, and help control the
infection in real time. It can also predict mortality risk by adequately analyzing previous patient
data. AI can help us fight this virus through population screening, medical help, notifications,
and suggestions about infection control [2-3]. As an evidence-based medical tool, this technology
has the potential to improve the planning, treatment and reported outcomes of COVID-19
patients. Fig. 1 shows the general procedure of AI and non-AI based applications that help
general physicians identify COVID-19 symptoms. The flow diagram compares minimal non-AI
treatment with AI-based treatment, and shows the involvement of AI in the significant steps of
treatment, with high accuracy and reduced complexity and time. With the AI application, the
physician is focused not only on the treatment of the patient but also on the control of the
disease. Major symptom and test analysis is done with the help of AI with high accuracy. The
diagram also shows that AI reduces the total number of steps in the whole process, making it
more practicable.

Fig. 1. General procedure of AI and non-AI based applications that help general physicians
to identify the COVID-19 symptoms.

I) Early detection and diagnosis of the infection

AI can quickly analyze irregular symptoms and other ‘red flags’ and thus alert patients
and the healthcare authorities [5]. It enables faster, cost-effective decision making. It
helps to develop new diagnosis and management systems for COVID-19 cases through
useful algorithms. AI is helpful in diagnosing infected cases with the help of medical
imaging technologies such as computed tomography (CT) and magnetic resonance
imaging (MRI) scans of human body parts.
II) Monitoring the treatment

AI can build an intelligent platform for automatic monitoring and prediction of the spread
of this virus. A neural network can also be developed to extract the visual features of
this disease, which would help in proper monitoring and treatment of the affected
individuals [6-8]. It is capable of providing day-to-day updates on patients and of
providing solutions to be followed during the COVID-19 pandemic.
III) Contact tracing of the individuals

AI can help analyze the level of infection by identifying clusters and ‘hot spots’, can
successfully perform contact tracing of individuals, and can monitor them. It can predict
the future course of this disease and its likely reappearance.
IV) Projection of cases and mortality
This technology can track and forecast the nature of the virus from the available data,
social media and media platforms, assessing the risks of the infection and its likely
spread. Further, it can predict the number of positive cases and deaths in any region.
AI can help identify the most vulnerable regions, people and countries so that
measures can be taken accordingly.
V) Development of drugs and vaccines
AI is used for drug research by analyzing the available data on COVID-19. It is useful
for drug delivery design and development. This technology is used to speed up drug
testing in real time, where standard testing takes a long time, and hence helps to
accelerate the process significantly in a way that may not be possible for a human [6].
It can help to identify useful drugs for the treatment of COVID-19 patients and has
become a powerful tool for diagnostic test design and vaccine development [9-10]. AI
helps in developing vaccines and treatments at a much faster rate than usual and is also
helpful for clinical trials during the development of a vaccine.
VI) Reducing the workload of healthcare workers
Due to a sudden and massive increase in the number of patients during the COVID-19
pandemic, healthcare professionals have a very high workload. AI can be used to reduce
this workload [15]. It helps in early diagnosis and in providing treatment at an early
stage using digital approaches and decision science, and offers the best training to
students and doctors regarding this new disease [19]. AI can improve future patient care
and address further potential challenges, reducing the workload of doctors.

VII) Prevention of the disease
With the help of real-time data analysis, AI can provide updated information that is
helpful in the prevention of this disease. It can be used to predict probable sites of
infection, the influx of the virus, and the need for beds and healthcare professionals
during this crisis. AI is helpful for future virus and disease prevention, drawing on
previously curated data and data prevalent at different times. It identifies the traits,
causes and reasons for the spread of infection. In future, this will become an important
technology for fighting other epidemics and pandemics, providing preventive measures
against many other diseases. AI will play a vital role in delivering more predictive and
preventive healthcare.
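The projection of cases described in item IV can be sketched, in highly simplified form, as extrapolating a constant daily growth factor. Real epidemic models (e.g. SIR-type models) are far more sophisticated, and the numbers below are invented:

```python
# Illustrative sketch for case projection (invented numbers): estimate a
# constant daily growth factor from recent case counts and project forward.

def project_cases(daily_cases, days_ahead):
    # Geometric-mean growth factor over the observed window
    growth = (daily_cases[-1] / daily_cases[0]) ** (1 / (len(daily_cases) - 1))
    return [round(daily_cases[-1] * growth ** d) for d in range(1, days_ahead + 1)]

observed = [100, 120, 144]           # hypothetical daily new cases
print(project_cases(observed, 2))    # growth factor 1.2 -> [173, 207]
```

Even this naive projection illustrates why identifying hot spots early matters: with a constant growth factor above 1, case counts compound daily.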

[1] Haleem A, Javaid M, Vaishya R. Effects of COVID-19 pandemic in daily life. Curr Med Res
Pract 2020. https://doi.org/10.1016/j.cmrp.2020.03.011.
[2] Bai HX, Hsieh B, Xiong Z, Halsey K, Choi JW, Tran TM, Pan I, Shi LB, Wang DC, Mei J,
Jiang XL. Performance of radiologists in differentiating COVID-19 from viral pneumonia on
chest CT. Radiology 2020. https://doi.org/10.1148/radiol.2020200823.
[3] Hu Z, Ge Q, Jin L, Xiong M. Artificial intelligence forecasting of COVID-19 in China.
arXiv preprint arXiv:2002.07112. 2020 Feb 17.
[4] Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, Tao Q, Sun Z, Xia L. Correlation of chest CT
and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases.
Radiology 2020. https://doi.org/10.1148/radiol.2020200642.
[5] Luo H, Tang QL, Shang YX, Liang SB, Yang M, Robinson N, Liu JP. Can Chinese medicine
be used for prevention of coronavirus disease 2019 (COVID-19)? A review of historical classics,
research evidence and current prevention programs. Chin J Integr Med 2020.
https://doi.org/10.1007/s11655-020-3192-6.
[6] Haleem A, Vaishya R, Javaid M, Khan IH. Artificial Intelligence (AI) applications in
orthopaedics: an innovative technology to embrace. J Clin Orthop Trauma 2019.
https://doi.org/10.1016/j.jcot.2019.06.012.
[7] Biswas K, Sen P. Space-time dependence of coronavirus (COVID-19) outbreak. arXiv
preprint arXiv:2003.03149. 2020 Mar 6.
[8] Stebbing J, Phelan A, Griffin I, Tucker C, Oechsle O, Smith D, Richardson P. COVID-19:
combining antiviral and anti-inflammatory treatments. Lancet Infect Dis 2020 Feb 27.
[9] Sohrabi C, Alsafi Z, O'Neill N, Khan M, Kerwan A, Al-Jabir A, Iosifidis C, Agha R. World
Health Organization declares global emergency: a review of the 2019 novel coronavirus
(COVID-19). Int J Surg 2020 Feb 26.
[10] Chen S, Yang J, Yang W, Wang C, Bärnighausen T. COVID-19 control in China during
mass population movements at New Year. Lancet 2020.
https://doi.org/10.1016/S0140-6736(20)30421-9.
[11] Bobdey S, Ray S. Going viral-COVID-19 impact assessment: a perspective beyond clinical
practice. J Mar Med Soc 2020 Jan 1;22(1):9.
[12] Gozes O, Frid-Adar M, Greenspan H, Browning PD, Zhang H, Ji W, Bernheim A, Siegel E.
Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for
automated detection & patient monitoring using deep learning CT image analysis. arXiv preprint
arXiv:2003.05037. 2020 Mar 10.
[13] Pirouz B, Shaffiee Haghshenas S, Shaffiee Haghshenas S, Piro P. Investigating a serious
challenge in the sustainable development process: analysis of confirmed cases of COVID-19
(new type of coronavirus) through a binary classification using artificial intelligence and
regression analysis. Sustainability 2020 Jan;12(6):2427.
[14] Ting DS, Carin L, Dzau V, Wong TY. Digital technology and COVID-19. Nat Med 2020
Mar 27:1-3.
[15] Wan KH, Huang SS, Young A, Lam DS. Precautionary measures needed for
ophthalmologists during pandemic of the coronavirus disease 2019 (COVID-19). Acta
Ophthalmol 2020 Mar 29.
[16] Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, Bai J, Lu Y, Fang Z, Song Q, Cao K. Artificial
intelligence distinguishes COVID-19 from community-acquired pneumonia on chest CT.
Radiology 2020 Mar 19:200905.
[17] Smeulders AW, Van Ginneken AM. An analysis of pathology knowledge and decision
making for the development of artificial intelligence-based consulting systems. Anal Quant Cytol
Histol 1989 Jun 1;11(3):154-65.
[18] Gupta R, Misra A. Contentious issues and evolving concepts in the clinical presentation and
management of patients with COVID-19 infection with reference to use of therapeutic and other
drugs used in co-morbid diseases (hypertension, diabetes etc.). Diabetes Metab Syndrome: Clin
Res Rev 2020;14(3):251-4.
[19] Gupta R, Ghosh A, Singh AK, Misra A. Clinical considerations for patients with diabetes in
times of COVID-19 epidemic. Diabetes & Metabolic Syndrome: Clin Res Rev
2020;14(3):211-2.

Artificial Intelligence in healthcare

The complexity and growth of data in the healthcare sector mean that Artificial Intelligence (AI)
is being used more and more in this area. Various types of artificial intelligence are already used
by customers and service providers, as well as by life sciences companies. The most important
application categories include diagnostic and treatment recommendations, patient participation
and compliance, and administrative activities. Although there are many cases where AI can
perform healthcare tasks as well as or better than humans, implementation factors will prevent
extensive automation of healthcare professions for a considerable period of time.

In recent years, machines have surpassed human performance in many cognitive tasks. The
transformative power of artificial intelligence (AI) extends to many industries. The effects of AI
in healthcare are very promising and could completely transform healthcare in the near future.
AI can be used in many health-related areas, from hospital care and clinical research to drug
discovery and diagnosis prediction. The rapidly increasing availability and low cost of
high-performance computing resources are leading to the digital transformation of the healthcare
system. The use of innovative technologies in daily medical practice enables secure, real-time
access to data and big data analytics. This increases collaboration between specialists and
improves the overall quality of treatment. Large organizations use big data analytics to diagnose
disease. For example, IBM's Watson for Health helps healthcare organizations analyze large
amounts of health-related data to improve diagnosis [51]. An obstacle to data analysis is the
heterogeneity of medical information, e.g. medical journals, symptoms, test results and treatment
cases. Therefore, big data technology used with novel artificial intelligence methods should
provide doctors with diagnostic tools. Watson can review, store and process the medical data
mentioned above. Another example of successful collaboration between technological innovators
and medical institutions is Google's DeepMind Health [52]. Researchers and clinicians work
with patients to solve real-world health problems by using machine learning (ML) algorithms,
such as neural network models, that mimic the human brain. ML looks for hidden patterns in the
data to identify patients at risk, segment regions of interest, evaluate data, and support diagnosis
and decision-making [54].
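As a minimal sketch of the pattern-finding described above (not the actual methods used by Watson or DeepMind), patients can be grouped into risk strata with a simple clustering step. The readings below are invented:

```python
# Illustrative sketch (invented data): clustering patients into risk groups,
# one simple example of unsupervised pattern-finding. A minimal 1-D k-means
# with fixed initial centroids keeps the result deterministic.

def kmeans_1d(values, centroids, iterations=10):
    for _ in range(iterations):
        groups = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(c - v))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centroids)

# Hypothetical systolic blood pressure readings
readings = [112, 118, 121, 158, 162, 170]
low, high = kmeans_1d(readings, [110.0, 180.0])
print(low, high)  # two natural groups emerge, roughly ~117 and ~163
```

Clinical ML systems work on far higher-dimensional data (labs, imaging, genomics), but the underlying idea of letting structure in the data define patient subgroups is the same.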

[51] IBM is counting on its bet on Watson, and paying big money for it (2016).
https://www.nytimes.com/2016/10/17/technology/ibm-is-counting-on-its-bet-on-watson-and-paying-big-money-for-
it.html. Accessed 7th Sep 2020.
[52] DeepMind Health (2019). https://deepmind.com/applied/deepmind-health/. Accessed 7th Sep 2020.
[53] W.-H. Hu, D.-H. Tang, J. Teng, S. Said, R. Rohrmann, et al. Structural health monitoring of a prestressed
concrete bridge based on statistical pattern recognition of continuous dynamic measurements over 14 years.
Sensors, 18 (12) (2018), p. 4117.
[54] J. Yang, G. Sha, Y. Zhou, G. Wang, B. Zheng. Statistical pattern recognition for structural health monitoring
using ESN feature extraction method. Int J Robot Autom 33 (6).

AI in medical image analysis

In recent decades, medical imaging has become an integral part of medical care. Images are
widely used for the detection, verification, differential diagnosis and treatment of diseases and in
rehabilitation, and AI algorithms have achieved significant results in processing them. Doctors
analyze various digital medical imaging modalities, including X-ray, ultrasound (US), computed
tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET),
mammography, retinal photography, and histology, morphology and dermoscopy slides.
Table 2 summarizes the data on the imaging modalities and their most common uses. Reporting
images is a time-consuming task performed primarily by experienced radiologists and
physicians. Image reading is subject to error due to variation in the visual appearance of
pathology and in approaches to interpreting images. Fatigue of human experts can also be
responsible for incorrect diagnostic decisions. For example, the sensitivity and specificity of
mammography examinations were reported to be between 77–87% and 89–97%, respectively
[55].
[55] M.S. Bae, W.K. Moon, J.M. Chang, H.R. Koo, W.H. Kim, N. Cho, A. Yi, B. La Yun, S.H. Lee, M.Y. Kim, et al., Breast
cancer detected with screening us: reasons for nondetection at mammography, Radiology, 270 (2) (2014), pp. 369-377
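For readers unfamiliar with the metrics quoted above, sensitivity and specificity are computed from a confusion matrix as follows. The screening counts below are invented for illustration:

```python
# Illustrative sketch: computing the sensitivity and specificity figures
# quoted above from confusion-matrix counts. Counts are invented.

def sensitivity(tp, fn):
    """True positive rate: share of actual cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of healthy cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical screening results: 85 cancers found, 15 missed;
# 930 healthy patients correctly cleared, 70 false alarms.
print(f"sensitivity = {sensitivity(85, 15):.0%}")  # 85%
print(f"specificity = {specificity(930, 70):.0%}") # 93%
```

These hypothetical figures fall inside the 77–87% and 89–97% ranges reported for mammography in [55].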

Patient engagement and adherence applications

Patient commitment and adherence have long been considered the “last mile” problem of
healthcare, the final barrier between poor and good health outcomes. The more patients are
proactively involved in their own well-being and care, the better the outcomes: utilization,
financial results, and member experience. Big data and artificial intelligence are increasingly
addressing these factors. Healthcare providers and hospitals often use their clinical experience to
develop a plan of care that they know will improve the health of an acute or chronic patient.
However, this often does not matter if the patient does not make the necessary behavioural
adjustments, e.g. losing weight, making a follow-up appointment, filling prescriptions, or
following a treatment plan. Non-compliance, when a patient does not follow treatment or does
not take prescribed medications as recommended, is a major problem. In a survey of more than
300 clinical and healthcare leaders, more than 70% of respondents said that less than 50% of
their patients were highly engaged, and 42% of respondents said that less than 25% of their
patients were highly engaged [56]. If greater patient involvement leads to better health outcomes,
can AI-based capabilities be effective in personalizing and contextualizing care? There is a
growing emphasis on using business rules engines and machine learning to drive nuanced
interventions throughout the continuum of care [57]. News alerts and specific, relevant content
that triggers action at critical times are a promising research area. Another growing focus in
health care is the effective design of “choice architecture” to shape patient behaviour in a more
prospective way based on the findings of practice. Using information provided by EHR systems,
biosensors, watches, smartphones, chat interfaces, and other tools, software can tailor
recommendations by comparing patient data with other effective treatment routes for similar
cohorts. Recommendations can be shared with care providers, patients, nurses, call center agents,
or care delivery coordinators.
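The business-rules-engine idea mentioned above can be sketched minimally. The rules, field names and thresholds below are invented for demonstration only:

```python
# Illustrative sketch of a tiny rules engine for adherence interventions.
# Rules, record fields and thresholds are invented, not from any real system.

RULES = [
    # (condition on patient record, intervention to trigger)
    (lambda p: p["missed_appointments"] >= 2,
     "Send follow-up appointment reminder"),
    (lambda p: not p["prescription_filled"],
     "Notify care coordinator about unfilled prescription"),
    (lambda p: p["bmi"] >= 30,
     "Enrol in weight-management programme"),
]

def recommend_interventions(patient):
    """Return every intervention whose condition the patient record meets."""
    return [action for condition, action in RULES if condition(patient)]

patient = {"missed_appointments": 3, "prescription_filled": True, "bmi": 27}
print(recommend_interventions(patient))
# -> ['Send follow-up appointment reminder']
```

In practice the rule conditions would themselves be informed by ML-derived risk scores, which is the combination of rules engines and machine learning described above.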

[56] Davenport TH, Hongsermeier T, McCord KA. Using AI to improve electronic health records. Harvard Business
Review 2018. https://hbr.org/2018/12/using-ai-to-improve-electronic-health-records.
[57] Volpp K, Mohta S. Improved engagement leads to better outcomes, but better tools are needed. Insights Report.
NEJM Catalyst, 2016. https://catalyst.nejm.org/patient-engagement-report-improved-engagement-leads-better-outcomes-better-tools-
needed.

5. AI in precision medicine



Precision medicine is an emerging approach to the prevention and treatment of disease that takes into account individual variability in genes, environment and lifestyle. In recent years, the healthcare paradigm has shifted accordingly [58]. The field has advanced rapidly thanks to AI algorithms that can analyze large amounts of genomic data to predict and prevent disease. Traditional medicine applies uniform treatment to the entire population, whereas precision medicine develops personalized treatment regimens for subgroups of patients, since some factors may matter more to a particular subgroup. This motivates clinicians and medical researchers to develop new approaches to subgroup identification and analysis as an effective strategy for personalized treatment [60]. The original concept of precision medicine included prevention and treatment strategies that account for individual variability by evaluating large data sets comprising patient information, medical images, and genomic sequences [61]. This approach allows clinicians and researchers to predict which treatment and prevention strategies will work for a given patient.
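The subgroup idea at the heart of precision medicine can be made concrete with a toy example. This is an invented sketch: the patients, the biomarker, and the threshold are hypothetical, and real pipelines use far richer methods (e.g. the supervised bi-clustering of [60]) on genomic-scale data.

```python
# Hypothetical sketch: split patients on a biomarker and check whether
# treatment response differs by subgroup. All data below are invented.

PATIENTS = [
    {"biomarker": 0.9, "responded": True},
    {"biomarker": 0.8, "responded": True},
    {"biomarker": 0.2, "responded": False},
    {"biomarker": 0.3, "responded": True},
    {"biomarker": 0.1, "responded": False},
]

def response_rate_by_subgroup(patients, threshold=0.5):
    """Treatment response rate for biomarker-high vs biomarker-low patients."""
    rates = {}
    for label, group in (
        ("high", [p for p in patients if p["biomarker"] >= threshold]),
        ("low", [p for p in patients if p["biomarker"] < threshold]),
    ):
        responders = sum(p["responded"] for p in group)
        rates[label] = responders / len(group)
    return rates
```

A large gap between the two rates is the signal that motivates subgroup-specific regimens: the biomarker-high group responds uniformly here, while the biomarker-low group mostly does not.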

[58] F. S. Collins, H. Varmus. A new initiative on precision medicine. N Engl J Med, 372 (9) (2015), pp. 793-795.
[59] Z.-G. Wang, L. Zhang, W.-J. Zhao. Definition and application of precision medicine. Chin J Traumatol, 19 (5) (2016), p. 249.
[60] M. Z. Nezhad, D. Zhu, N. Sadati, K. Yang, P. Levi. SUBIC: A supervised bi-clustering approach for precision medicine. 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), IEEE (2017), pp. 755-760.
[61] J.-G. Lee, S. Jun, Y.-W. Cho, H. Lee, G. B. Kim, J. B. Seo, N. Kim. Deep learning in medical imaging: General overview. Korean J Radiol, 18 (4) (2017), pp. 570-584.

Opportunities exist to improve organisational use of AI by focusing on market taste, fear, and cultural proximity. While organisations' attention is currently focused on efficiency gains, they might be overlooking the market's reaction to the integration of AI into their production processes. Learning about tastes informs the market about AI-generated products and services. Learning about fear within AI-related social opinion and policy-making tendencies can help us make evidence-based AI-related decisions. Learning about the importance of cultural proximity, in the context of AI-human cultural distance, can help to quantify the cultural gravity effect that bounds our consumption of AI goods and products.

References:

1. Stewart Jr, C. N., 2016. Plant biotechnology and genetics: principles, techniques,
and applications. John Wiley & Sons.
http://125.234.102.146:8080/dspace/handle/DNULIB_52011/8909.

2. Gartland, K. M., Gartland, J. S., 2018. Opportunities in biotechnology. J. Biotechnol. 282, 38-45. https://doi.org/10.1016/j.jbiotec.2018.06.303.

3. Thirumavalavan, M., Settu, K., Lee, J. F., 2016. A Short Review on Applications of Nanomaterials in Biotechnology and Pharmacology. Current Bionanotechnology 2, 116-121. https://www.ingentaconnect.com/content/ben/cbnt/2016/00000002/00000002/art0001.

4. Thomas, D. G., Courtney, J., Gao, X., Elmlund, L., Maniruzzaman, M., Freitas, D. N., 2017. Advances in Nano Biotechnology. 276. https://www.nano.gov/nanotech-101/what/definition.

5. Verma, M. L., Kumar, P., Sharma, D., Verma, A. D., Jana, A. K., 2019.
Advances in Nanobiotechnology with Special Reference to Plant Systems. In Plant
Nanobionics (pp. 371-387). Springer, Cham. https://doi.org/10.1007/978-3-030-12496-0_13.

6. Roy, R., Pal, A., Chaudhuri, A. N., 2015. Antimicrobial Effect of Silver Nanoparticle on Pathogenic Organisms Isolated from East Kolkata Wetland. Int. J. Appl. Res. 1, 745-752.

7. Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Malaysia: Pearson Education Limited.

8. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the
land? On the interpretations, illustrations, and implications of artificial intelligence.
Business Horizons, 62(1), 15–25.

9. Wilson, J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 115–123.

10. Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. Fundamental issues of artificial intelligence. Cham: Springer, 555–572.

11. Muhuri, P. K., Shukla, A. K., & Abraham, A. (2019). Industry 4.0: A bibliometric
analysis and detailed overview. Engineering Applications of Artificial Intelligence, 78,
218–235.

12. Yang, J., Chen, Y., Huang, W., & Li, Y. (2017). Survey on artificial intelligence for additive manufacturing. 2017 23rd International Conference on Automation and Computing (ICAC) (pp. 1–6).

13. Zhong, R. Y., Xu, X., Klotz, E., & Newman, S. T. (2017a). Intelligent
manufacturing in the context of industry 4.0: A review. Engineering, 3(5), 616–630.

14. Li, B. H., Hou, B. C., Yu, W. T., Lu, X. B., & Yang, C. W. (2017).
Applications of artificial intelligence in intelligent manufacturing: A review. Frontiers of
Information Technology & Electronic Engineering, 18(1), 86–96.

15. Katz, Y. (2017). Manufacturing an artificial intelligence revolution. Available at SSRN 3078224.

16. Makridakis, S. (2018). Forecasting the impact of artificial intelligence, Part 3 of 4: The potential effects of AI on businesses, manufacturing, and commerce. Foresight: The International Journal of Applied Forecasting, (49), 18–27.

17. Wang, L., & Wang, X. V. (2016). Outlook of cloud, CPS and IoT in manufacturing. Cloud-based cyber-physical systems in manufacturing. Cham: Springer, 377–398.
18. Haeffner, M., & Panuwatwanich, K. (2017, September). Perceived impacts of Industry 4.0 on manufacturing industry and its workforce: Case of Germany. International conference on engineering, project, and product management. Cham: Springer, 199–208.

19. Khanna, S., Sattar, A., & Hansen, D. (2013). Artificial intelligence in health – The
three big challenges. Australasian Medical Journal, 6(5), 315–317.

20. Thrall, J. H., Li, X., Li, Q., Cruz, C., Do, S., Dreyer, K., & Brink, J. (2018).
Artificial intelligence and machine learning in radiology: Opportunities, challenges,
pitfalls, and criteria for success. Journal of the American College of Radiology, 15(3),
504–508.
21. Houssami, N., Lee, C. I., Buist, D. S. M., & Tao, D. (2017). Artificial
intelligence for breast cancer screening: Opportunity or hype? Breast, 36, 31–33.

22. Zandi, D., Reis, A., Vayena, E., & Goodman, K. (2019). New ethical challenges of
digital technologies, machine learning and artificial intelligence in public health: A call
for papers. Bulletin of the World Health Organization, 97(1), 2.

23. Thesmar, D., Sraer, D., Pinheiro, L., Dadson, N., Veliche, R., & Greenberg, P. (2019). Combining the power of artificial intelligence with the richness of healthcare claims data: Opportunities and challenges. PharmacoEconomics. https://doi.org/10.1007/s40273-019-00777-6.

24. Chaudhri, V. K., Lane, H. C., Gunning, D., & Roschelle, J. (2013). Applications
of artificial intelligence to contemporary and emerging educational challenges. Artificial
Intelligence Magazine, Intelligent Learning Technologies: Part, 2(34), 4.

25. Arlitsch, K., & Newell, B. (2017). Thriving in the age of accelerations: A brief look
at the societal effects of artificial intelligence and the opportunities for libraries. Journal
of Library Administration, 57(7), 789–798.

26. Rubik, B., & Jabs, H. (2018). Artificial intelligence and the human biofield: New opportunities and challenges. Cosmos and History, 14(1), 153–162.

27. Beregi, J., Zins, M., Masson, J., Cart, P., Bartoli, J.-, Silberman, B., …, &
Meder, J. (2018). Radiology and artificial intelligence: An opportunity for our specialty.
Diagnostic and Interventional Imaging, 99(11), 677–678.

28. Abarca-Alvarez, F. J., Campos-Sanchez, F. S., & Reinoso-Bellido, R. (2018). Demographic and dwelling models by artificial intelligence: Urban renewal opportunities in Spanish coast. International Journal of Sustainable Development and Planning, 13(7), 941–953.

29. Shukla, N., Tiwari, M. K., & Beydoun, G. (2018). Next generation smart
manufacturing and service systems using big data analytics. Computers & Industrial
Engineering, 128, 905–910.

30. Olshannikova, E., Ometov, A., Koucheryavy, Y., & Olsson, T. (2015). Visualizing
big data with augmented and virtual reality: Challenges and research agenda. Journal of
Big Data, 2(1), https://doi.org/10.1186/s40537-015-0031-2.
31. Zheng, Y., Wu, W., Chen, Y., Qu, H., & Ni, L. M. (2016). Visual analytics in urban computing: An overview. IEEE Transactions on Big Data, 2(3), 276–296. https://doi.org/10.1109/TBDATA.2016.2586447.
32. Xu, J., Yang, P., Xue, S., Sharma, B., Sanchez-Martin, M., Wang, F., …, & Parikh, B. (2019). Translating cancer genomics into precision medicine with artificial intelligence: Applications, challenges and future perspectives. Human Genetics. https://doi.org/10.1007/s00439-019-01970-5.

33. Sun, T. Q., & Medaglia, R. (2019). Mapping the challenges of artificial intelligence
in the public sector: Evidence from public healthcare. Government Information
Quarterly, 36(2), 368–383.

34. Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1–16.
35. Tizhoosh, H. R., & Pantanowitz, L. (2018). Artificial intelligence and digital pathology: Challenges and opportunities. Journal of Pathology Informatics, 9(1).
36. Cleophas, T. J., & Cleophas, T. F. (2010). Artificial intelligence for
diagnostic purposes: Principles, procedures and limitations. Clinical Chemistry
and Laboratory Medicine, 48(2), 159–165.

37. Cheshire, W. P. (2017). Loopthink: A limitation of medical artificial intelligence.


Ethics and Medicine, 33(1), 7–12

38. Baldassarre, G., Santucci, V. G., Cartoni, E., & Caligiore, D. (2017). The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction. The Behavioral and Brain Sciences, 40, e254.

39. Edwards, S. D. (2018). The HeartMath coherence model: Implications and challenges for artificial intelligence and robotics. AI and Society, 1–7. https://doi.org/10.1007/s00146-018-0834-8.
40. Thrall, J. H., Li, X., Li, Q., Cruz, C., Do, S., Dreyer, K., & Brink, J. (2018).
Artificial intelligence and machine learning in radiology: Opportunities, challenges,
pitfalls, and criteria for success. Journal of the American College of Radiology,
15(3), 504–508.

41. Nguyen, G. K., & Shetty, A. S. (2018). Artificial intelligence and machine learning:
Opportunities for radiologists in training. Journal of the American College of Radiology,
15(9), 1320–1321.

42. Varga-Szemes, A., Jacobs, B. E., & Schoepf, U. J. (2018). The power and
limitations of machine learning and artificial intelligence in cardiac CT. Journal of
Cardiovascular Computed Tomography, 12(3), 202–203.

43. Mitchell, M. (2019). Artificial intelligence hits the barrier of meaning. Information
(Switzerland), 10(2), https://doi.org/10.3390/info10020051.

44. Gupta, R. K., & Kumari, R. (2017). Artificial intelligence in public health:
Opportunities and challenges. JK Science, 19(4), 191–192.

45. Zatarain, J. M. N. (2017). The role of automated technology in the creation of


copyright works: The challenges of artificial intelligence. International Review of Law,
Computers and Technology, 31(1), 91–104.
46. Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and
the public Sector—Applications and challenges. International Journal of Public
Administration, 42(7), 596–615.

47. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for
decision making in the era of big data – Evolution, challenges and research agenda.
International Journal of Information Management, 48, 63–71.

48. Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish (Ed.), Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.
49. Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018). Notes from the AI frontier: Modeling the global economic impact of AI. McKinsey Global Institute, 1–64, September. Retrieved from https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.

50. Juniper Research (2018). AI in retail: Segment analysis, vendor positioning & market forecasts 2019–2023. Accessed June 2019. https://www.juniperresearch.com/researchstore/fintech-payments/ai-in-retail.

51. Stead, W. W. (2018). Clinical implications and challenges of artificial intelligence and deep learning. JAMA – Journal of the American Medical Association, 320(11), 1107–1108.

52. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
53. Anonymous (2018). Initial code of conduct for data-driven health and care technology. Department of Health & Social Care (Ed.). Published online 5 September 2018. Her Majesty's Stationery Office.
54. Dickson, B. (2019). Explainable AI: Viewing the world through the eyes of neural
networks. Available at https://bdtechtalks.com/2019/02/04/explainable-ai-gan-dissection-
ibm-mit/ (accessed 21.03.19).
55. London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49, 15–21.
