Report
ABSTRACT
Nanotechnology is a multidisciplinary field that covers a vast and diverse array of devices
derived from engineering, physics, chemistry, and biology. Rapid advances in science and
technology have opened up nanotechnology, creating new opportunities for progress in the fields
of medicine, electronics, food, and the environment. Nanoscale structures have been
explored in many biological applications because their novel properties and functions differ
drastically from those of their bulk counterparts. Their high surface-to-volume ratio, improved
solubility, and multi-functionality open many new possibilities. The objective of this review is to
describe the potential benefits and impacts of nanobiotechnology in different areas. Artificial
intelligence (AI) has been developing rapidly in recent years in terms of software algorithms,
hardware implementation, and applications in a vast number of areas. In this review, we
summarize the latest developments of applications of AI in biomedicine, including disease
diagnostics, living assistance, biomedical information processing, and biomedical research[1].
The aim of this review is to keep track of new scientific accomplishments, to understand the
availability of technologies, to appreciate the tremendous potential of AI in biomedicine, and to
provide researchers in related fields with inspiration. It can be asserted that, just like AI itself, the
application of AI in biomedicine is still in its early stage. Advances in biological and medical
technologies have been providing us with explosive volumes of biological and physiological data,
such as medical images, electroencephalography, and genomic and protein sequences. Learning from
these data facilitates the understanding of human health and disease. Developed from artificial
neural networks, deep learning-based algorithms show great promise in extracting features and
learning patterns from complex data. The aim of this paper is to provide an overview of deep
learning techniques and some of the state-of-the-art applications in the biomedical field. We first
introduce the development of artificial neural networks and deep learning [2].
INTRODUCTION
Deep learning is a recent and fast-growing field of machine learning. It attempts to model
abstraction from large-scale data by employing multi-layered deep neural networks (DNNs), thus
making sense of data such as images, sounds, and texts.
The early framework for deep learning was built on artificial neural networks (ANNs) in the
1980s, while the real impact of deep learning became apparent in 2006. Since then, deep learning
has been applied to a wide range of fields, including automatic speech recognition, image
recognition, natural language processing, drug discovery, and bioinformatics.
The past decades have witnessed a massive growth in biomedical data, such as genomic
sequences, protein structures, and medical images, due to the advances of high-throughput
technologies. This deluge of biomedical big data necessitates effective and efficient
computational tools to store, analyze, and interpret such data. Deep learning-based algorithmic
frameworks shed light on these challenging problems. The aim of this paper is to provide the
bioinformatics and biomedical informatics community an overview of deep learning techniques
and some of the state-of-the-art applications of deep learning in the biomedical field. We hope
this paper will provide readers with an overview of deep learning and how it can be used for
analyzing biomedical data. The rapid development of knowledge in the field of advanced
materials and nanomaterials has fueled a discussion on the best means to develop this emerging
technology both safely and sustainably, without limiting the incredible potential benefits that
these advancements bring about in material design and formulation. One of the first difficulties
encountered in this domain pertains to how we organize and utilize the massive volume of
information that is being produced in relation to the performance and the environmental, health,
and safety (EHS) implications of these nanoscale materials. Nanotechnology, machine learning
(ML), and artificial intelligence (AI) are a few leading technologies in this domain; although ML
and AI have recently surpassed nanotechnology in popularity, they have largely complemented
each other. We have been conditioned to expect the development of AI in a wide range of
applications such as in flying drones for home delivery, traffic routing, and small-scale robotic
assistance in performing daily chores. We are probably interacting with AI more than we realize
due to a prominent upsurge in the use of AI in electronic gadgets and digital media, and with AI
grabbing the attention of the consumer industry[3].
The present contribution provides an interdisciplinary review of the existing research from the
areas of nano-engineering, biomedical engineering, and ML. To the best of the authors'
knowledge, no such review exists in the technical literature that focuses on the ML-related
methodologies employed in nano-scale biomedical engineering.
Finally, the advantages and limitations of each ML approach are highlighted, and future
research directions are provided [4].
Nanotechnology offers promise, as a broad spectrum of highly innovative approaches is emerging
to overcome this challenge. Four emerging approaches are reviewed below:
nanostructured surfaces for the enhancement of proteomic analysis via mass spectrometry (MS)
and reverse-phase protein microarrays; the bio-bar code method for the amplification of protein
signatures via the use of a two-particle sandwich assay; nanowires as biologically gated
transistors, transducing molecular binding events into real-time electrical signals; and silicon
cantilevers for the mechanics-based recognition of biomolecular populations. In simple terms, AI
is a broad area of computer science that attempts to impart to machines human-like intelligence
to learn and perform the given tasks.In 1956, John McCarthy, a Dartmouth professor, first coined
the term “Artificial Intelligence” when he observed that machines can solve problems such as
understanding language semantics and forming abstractions and concepts, which were thought to
be limited to humans. McCarthy, along with a group of computer scientists and mathematicians,
demonstrated that machines are capable of formal reasoning using trial and error, thus paving the
way for a new era of AI over 60 years ago. Since then, AI has mostly remained limited to the
Internet, university classrooms, and exclusive labs. The timeline of advances in computer
programming indicates that a wealth of applications has been created along with uncertainties in
different areas (Figure 1). AI and ML are growing exponentially and can soon become
ubiquitous. Over the past few years, two factors have led to the skyrocketing of AI worldwide,
i.e., data availability and a faster processing capacity. The amount of data being generated is
growing exponentially, which can be seen from the fact that 90% of the data globally has been
generated over the past two years alone. With high processing speeds, computers can process all of
this information more quickly and effectively, thus steadily rendering AI more real than
artificial, and significantly more intelligent. In this review, we aim to address the developments
in ML implemented in theoretical approaches and simulations used in characterizing nanoscale
materials over the last decade. However, by incorporating AI into its core, the ML process has
reached an all-time high. In this article, we review ML algorithms, which are continually being
applied in new areas based on the widely distributed branches of AI, for classifying the diverse
properties of nanomaterials, as well as correlation, validation, and grouping algorithms (Figure
2)[6].
CHAPTER 3
LITERATURE REVIEW
3) Biomedicine Applications
1) Disease detection
2) Therapy development
1) Structure and Material Design and Simulation
One of the fundamental challenges in materials science and chemistry is the understanding
of structure-property relationships. The complexity of this problem grows dramatically in the
case of nanomaterials because:
i) they exhibit properties different from those of their bulk counterparts; and
ii) they are usually heterostructures, consisting of multiple materials.
As a result, the design and optimization of novel structures and materials, by discovering
their properties and behavior through simulations and experiments, lead to multi-
parameter and multi-objective problems, which in most cases are extremely difficult or
impossible to solve through conventional approaches; ML can be an efficient
alternative for addressing this challenge.
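As a toy illustration of the surrogate-modeling idea above, the sketch below fits a least-squares model that maps two hypothetical design parameters to a simulated property; the data and the linear form are illustrative assumptions, not taken from any cited study:

```python
import numpy as np

def fit_surrogate(X, y):
    """Fit a least-squares surrogate model, linear in the design parameters,
    standing in for the ML models used to screen candidate nanomaterials
    before running costly simulations or experiments."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)     # least-squares weights
    return lambda x: float(np.dot(np.append(x, 1.0), w))

# Hypothetical training data: a property depending on two design parameters
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 5.0])
model = fit_surrogate(X, y)
prediction = model([3.0, 1.0])  # query an unseen design
```

Once trained, the surrogate is queried instead of re-running a simulation; in practice the model would be retrained as new simulation or experimental results arrive.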
2) Inverse design:
The availability of several high resolution lithographic techniques opened
the door to devising complex structures with unprecedented properties. However, the vast
design space, which is created by the large number of spatial degrees of freedom
complemented by the wide choice of materials, makes it extremely difficult or even
impossible for conventional inverse design methodologies to ensure the existence or
uniqueness of acceptable solutions. To address this challenge, the nanoscience community
turned to ML. In more detail, several researchers identified three possible
methods, based on artificial neural networks (ANNs), deep neural networks
(DNNs), and generative adversarial networks (GANs). ANNs follow a trial-and-error
approach in order to design multilayer nanoparticles. Meanwhile, DNNs are used in
metasurface design. Finally, GANs can be used to design nanophotonic structures with
precise user-defined spectral responses.
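The trial-and-error flavor of inverse design can be sketched with plain random search, a stand-in for the ANN-guided loop described above; the "simulator" and target response below are purely hypothetical:

```python
import random

def trial_and_error_design(target, simulate, n_layers, trials=2000, seed=2):
    """Random trial-and-error search over layer thicknesses: propose a
    design, score its simulated response against the target, keep the best."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        design = [rng.uniform(0.0, 1.0) for _ in range(n_layers)]
        err = abs(simulate(design) - target)
        if err < best_err:
            best, best_err = design, err
    return best, best_err

# Hypothetical 'simulator': the response is simply the mean layer thickness
best, err = trial_and_error_design(0.25, lambda d: sum(d) / len(d), n_layers=3)
```

A trained ANN would replace the blind random proposals with guided ones, but the propose-simulate-score loop has the same shape.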
3) Experiments planning and autonomous research:
ML has been widely employed to efficiently explore the vast parameter space
created by different combinations of nanomaterials and experimental conditions, and to
reduce the number of experiments needed to optimize heterostructures. In this
direction, fully autonomous research can be conducted, in which experiments are
designed based on insights extracted from data processing through ML, without a
human in the loop.
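A minimal sketch of such a closed experimental loop, assuming a toy explore-then-exploit rule in place of the Bayesian-optimization machinery real autonomous labs use; the objective function and candidate grid are hypothetical:

```python
import random

def autonomous_search(evaluate, candidates, budget, seed=5):
    """Closed-loop experiment planner: spend half the budget exploring random
    conditions, then repeatedly 'run' the untried condition closest to the
    best one observed so far. Returns the best condition found."""
    rng = random.Random(seed)
    observed = {}
    for i in range(budget):
        pool = [c for c in candidates if c not in observed]
        if not pool:
            break
        if i < budget // 2:  # exploration phase
            choice = rng.choice(pool)
        else:                # exploitation phase: refine around the best
            best = max(observed, key=observed.get)
            choice = min(pool, key=lambda c: abs(c - best))
        observed[choice] = evaluate(choice)  # the 'experiment'
    return max(observed, key=observed.get)

# Hypothetical objective: yield peaks at experimental condition 3
best_condition = autonomous_search(lambda x: -(x - 3) ** 2, list(range(11)), budget=8)
```

The point of the sketch is the loop structure: each round, results collected so far decide which experiment to run next.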
B. Communications and Signal Processing
In biomedical applications, nano-sensors can be utilized for a variety of tasks,
such as monitoring, detection, and treatment. The size of such nano-sensors ranges between 1 and 100 nm,
which corresponds to both macro-molecules and bio-cells. The proper selection of size and
materials is critical for the system performance, while it is constrained by the target area,
their purpose, and safety concerns. Such nano-networks are inspired by living organisms
and, when they are injected into the human body, they interact with biological processes
in order to collect the necessary information. However, they are characterized by limited
communication range and processing power, which allow only short-range transmission
techniques to be used. As a consequence, conventional electromagnetic-based
transmission schemes may not be appropriate for communications among molecules,
since, in the latter, the information is usually encoded in the number of released particles.
The simplest approach for the receiver to demodulate the symbol is to compare the
number of received particles with predetermined thresholds. In the absence of inter-
symbol interference (ISI), finding the optimal thresholds is a straightforward process.
However, in the presence of ISI, the thresholds need to be extracted as a solution of the
error probability minimization problem. The aforementioned approaches require
knowledge of the channel model. However, in several practical scenarios, where the
molecular communications (MC) system complexity is high, this may not be possible. To
counter this issue, ML methods can be employed to accurately model the channel
or perform data sequence detection. An alternative to MCs that has been used to support
nanonetworks is communications in the terahertz (THz) band. For these networks, apart
from their specifications, an accurate model for the THz communication between nano-
sensors is imperative for their simulation and performance assessment. In addition,
another problem that is entangled with novel nanosensor networks is their resilience
against attacks, which is of high importance since not only the system reliability is
threatened, but also the safety of the patients is at stake. Thus, it is imperative for any
possible threats to be recognized and for effective countermeasures to be developed.
Solving the above problems is relatively complex for conventional
computational methods. On the other hand, ML can provide the tools to model the space-
time trajectories of nano-sensors in the complex environments of the human body as well
as to draw strategies that mitigate the security risks of the novel network architectures.
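The threshold-based demodulation described earlier can be sketched in a few lines; the particle counts, noise model, and threshold below are illustrative assumptions for the no-ISI case:

```python
import random

def demodulate(counts, threshold):
    """Map each received particle count to a bit: a count above the
    threshold is read as 1, otherwise 0 (the ISI-free case from the text)."""
    return [1 if c > threshold else 0 for c in counts]

# Toy channel: a transmitted 1 releases about 50 particles, a 0 about 5,
# with noise modeled crudely as a small random offset.
random.seed(0)
bits = [1, 0, 1, 1, 0]
counts = [(50 if b else 5) + random.randint(-3, 3) for b in bits]
decoded = demodulate(counts, threshold=25)
```

With ISI, the counts of neighboring symbols overlap and a fixed threshold like this one is no longer optimal, which is what motivates the ML-based detectors discussed below.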
1) Channel modeling:
One of the fundamental problems in MCs is to accurately model the channel in
different environments and conditions. Most of the MC models assume that a
molecule is removed from the environment after hitting the receiver; hence, each
molecule can contribute to the received signal only once. To model this phenomenon, a
first-passage process is employed. Another approach stems from the
assumption that molecules can pass through the receiver. In this case, a molecule
contributes multiple times to the received signal. However, neither of the
aforementioned approaches is capable of modeling perfectly absorbing receivers
when the transmitters are reflecting spherical bodies. Interestingly, such models
accommodate practical scenarios where the emitter cells do not have receptors at the
emission site and cannot absorb the emitted molecules. An indicative example
lies in hormonal secretion in the synapses and pancreatic β-cell islets. To fill this
gap, ML was employed to model molecular channels in realistic scenarios, with
the aid of ANNs. Similarly, in THz nano-scale networks, where the in-body
environment is characterized by high path-loss and molecular absorption noise
(MAN), ML methods can be used in order to accurately model MAN. This opens the
road to a better understanding of the MAN’s nature and the design of new
transmission schemes and waveforms.
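The first-passage behavior mentioned above can be illustrated with a small Monte Carlo sketch, assuming a 1-D unbiased random walk toward an absorbing receiver (a deliberate simplification of 3-D diffusion):

```python
import random

def first_passage_fraction(n_molecules, n_steps, distance, seed=1):
    """Fraction of molecules absorbed by a receiver placed `distance` steps
    away within n_steps of an unbiased 1-D random walk. Each molecule is
    counted once and then removed, mirroring the first-passage assumption."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_molecules):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
            if x >= distance:   # hit the absorbing boundary
                absorbed += 1
                break           # molecule leaves the environment
    return absorbed / n_molecules

near = first_passage_fraction(2000, 100, distance=2)
far = first_passage_fraction(2000, 100, distance=10)
```

The simulated hitting fraction decays with distance, which is the shape of channel response that the ANN-based models in the text learn for realistic geometries.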
2) Signal detection:
To avoid channel estimation in MC, Farsad et al. proposed a sequence
detection scheme based on recurrent neural networks (RNNs). Compared with
previously presented ISI mitigation schemes, ML-based data sequence detection is
less complex, since it does not require channel estimation and data
equalization. Following a similar approach, the authors presented an ANN capable of
achieving the same performance as conventional detection techniques that require
perfect knowledge of the channel. In THz nano-scale networks, an energy detector is
usually used to estimate the received data. In more detail, if the received signal
power is below a predefined threshold, the detector decides that the bit 0 has been
sent; otherwise, it decides that 1 has been sent. However, the transmission of 1 causes a
MAN power increase, usually capable of affecting the detection of the subsequent symbols.
To counterbalance this, without increasing the symbol duration, a possible approach
is to design ML algorithms that are trained to detect the next symbol and take into
account the already estimated ones. Another ML challenge in signal detection in THz
nano-scale networks lies in detecting the modulation mode of the transmitted
signal at the receiver, when no prior synchronization between transmitter and receiver
has occurred. The solution to this problem will provide scalability to these networks.
Motivated by this, the authors provided an ML algorithm for modulation
recognition and classification.
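A minimal sketch of the energy detector, alongside a toy ISI-aware variant that raises the decision threshold after a detected 1 to compensate for the MAN power increase; all power values and the penalty are illustrative, not measured:

```python
def energy_detect(powers, threshold):
    """Plain energy detector: received power at or above the threshold is
    read as bit 1, below it as bit 0."""
    return [1 if p >= threshold else 0 for p in powers]

def isi_aware_detect(powers, threshold, penalty):
    """Toy ISI-aware variant: after a detected 1, the molecular-absorption
    noise floor rises, so the threshold for the next symbol is raised by
    `penalty` before the comparison is made."""
    bits, prev = [], 0
    for p in powers:
        level = threshold + (penalty if prev else 0)
        prev = 1 if p >= level else 0
        bits.append(prev)
    return bits

# Residual noise power 0.6 follows a transmitted 1
detected_plain = energy_detect([0.9, 0.6, 0.1], 0.5)
detected_isi = isi_aware_detect([0.9, 0.6, 0.1], 0.5, 0.2)
```

The plain detector reads the 0.6 residual as a spurious 1, while the adjusted threshold suppresses it; an ML detector trained on past decisions generalizes this hand-tuned rule.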
3) Routing and mobility management:
In THz nano-scale networks, the design of routing protocols capable of proactively
counteracting congestion has been identified as the next step for their utilization.
These protocols need to take into account the extremely constrained computational
resources, the stochastic nature of nano-node movements, as well as the existence of
obstacles that may interrupt line-of-sight transmission. The aforementioned
challenges can be addressed by employing state-of-the-art ML techniques for analyzing collected
data and modeling the nano-sensors' movements, discovering neighbors that can be
used as intermediate nodes, identifying possible blockers, and proactively
determining the message route from the source to the final destination. In this context,
the authors presented a multi-hop deflection routing algorithm based on
reinforcement learning and analyzed its performance in comparison to different
neural network (NN) and decision tree updating policies.
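The reinforcement-learning routing idea can be sketched with tabular Q-learning on a hypothetical four-node topology; the rewards and hyperparameters are illustrative, and a real protocol would also model mobility and blockage:

```python
import random

def q_route(graph, source, dest, episodes=500, alpha=0.5, gamma=0.9, seed=3):
    """Tabular Q-learning for routing: each hop costs -1 and delivery earns
    +10, so the learned greedy policy favors short routes to the destination."""
    rng = random.Random(seed)
    q = {(n, m): 0.0 for n in graph for m in graph[n]}
    for _ in range(episodes):
        node = source
        while node != dest:
            # epsilon-greedy next-hop selection
            if rng.random() < 0.2:
                nxt = rng.choice(graph[node])
            else:
                nxt = max(graph[node], key=lambda m: q[(node, m)])
            reward = 10.0 if nxt == dest else -1.0
            future = 0.0 if nxt == dest else max(q[(nxt, m)] for m in graph[nxt])
            q[(node, nxt)] += alpha * (reward + gamma * future - q[(node, nxt)])
            node = nxt
    # extract the greedy route (capped to avoid loops)
    route, node = [source], source
    while node != dest and len(route) <= len(graph):
        node = max(graph[node], key=lambda m: q[(node, m)])
        route.append(node)
    return route

# Hypothetical four-node nano-network topology
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
route = q_route(graph, "A", "D")
```

Deflection variants reuse the same Q-table but pick the next-best neighbor when the preferred next hop is congested or blocked.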
4) Event detection:
It is generally believed that AI tools will facilitate and enhance human work and not replace the
work of physicians and other healthcare staff as such. AI is ready to support healthcare personnel
with a variety of tasks from administrative workflow to clinical documentation and patient
outreach as well as specialized support such as in image analysis, medical device automation,
and patient monitoring. There are different opinions on the most beneficial applications of AI for
healthcare purposes. Forbes stated in 2018 that the most important areas would be administrative
workflows, image analysis, robotic surgery, virtual assistants, and clinical decision support. A
2018 report by Accenture mentioned the same areas and also included connected
machines, dosage error reduction, and cybersecurity. A 2019 report from McKinsey lists the
important areas as connected and cognitive devices, targeted and personalized medicine,
robotics-assisted surgery, and
electroceuticals [10]. In the next sections, some of the major applications of AI in healthcare will
be discussed covering both the applications that are directly associated with healthcare and other
applications in the healthcare value chain such as drug development and ambient assisted living
(AAL).
2. Precision medicine
It is believed that within the next decade a large part of the global population will be offered full
genome sequencing either at birth or in adult life. Such genome sequencing is estimated to take
up 100-150 GB of data and will provide a great tool for precision medicine. The interfacing of
genomic and phenotype information is still ongoing. The current clinical system would need a
redesign to be able to use such genomics data and realize its benefits. Deep Genomics, a Healthtech
company, is looking at identifying patterns in the vast genetic dataset as well as EMRs, in order
to link the two with regard to disease markers. This company uses these correlations to identify
therapeutics targets, either existing therapeutic targets or new therapeutic candidates with the
purpose of developing individualized genetic medicines. They use AI in every step of their drug
discovery and development process including target discovery, lead optimization, toxicity
assessment, and innovative trial design. Many inherited diseases result in symptoms without a
specific diagnosis, and interpreting whole-genome data remains challenging due to the many
genetic profiles. Precision medicine can provide methods to improve the identification of genetic
mutations based on full genome sequencing and the use of AI.
Drug discovery and development is an immensely long, costly, and complex process that can
often take more than 10 years from identification of molecular targets until a drug product is
approved and marketed. Any failure during this process has a large financial impact, and in fact
most drug candidates fail sometime during development and never make it onto the market. On
top of that are the ever-increasing regulatory obstacles and the difficulties in continuously
discovering drug molecules that are
substantially better than what is currently marketed. This makes the drug innovation process both
challenging and inefficient with a high price tag on any new drug products that make it onto the
market. There has been a substantial increase in the amount of available data assessing drug
compound activity and in biomedical data over the past few years. This is due to the increasing
automation and the introduction of new experimental techniques, including hidden Markov model-
based text-to-speech synthesis and parallel synthesis. However, mining of the large-scale
chemistry data is needed to efficiently classify potential drug compounds, and machine learning
techniques have shown great potential [15]. Methods such as support vector machines, neural
networks, and random forests have all been used to develop models to aid drug discovery since
the 1990s. More recently, DL has begun to be implemented due to the increased amount of data
and the continuous improvements in computing power. There are various tasks in the drug
discovery process where machine learning can be used to streamline the tasks. This includes
drug compound property and activity prediction, de novo design of drug compounds,
drug-receptor interactions, and drug reaction prediction [16]. The drug molecules and the
associated features used in the in silico models are transformed into vector format so they can be
read by the learning systems. Generally, the data used here include molecular descriptors (e.g.,
physicochemical properties) and molecular fingerprints (molecular structure) as well as
simplified molecular input line entry system (SMILES) strings and grids for convolutional neural
networks (CNNs)[7].
Figure 1. Timeline of AI and ML in nanomaterial development. Evolution timelines for both
the development of nanoparticles (NPs), starting with the first synthesis and quantum
effects observed by Faraday in 1853, and AI, including statistical approaches. In 2010,
the two timelines merged when AI was applied to tasks such as the identification of NP
properties or interaction partners, the grouping of NPs depending on their properties or
toxic effects, and the prediction of NP toxicity.
CASE STUDY
Epileptic seizure prediction
Epilepsy is one of the most common neurological conditions and is characterized by
spontaneous, unpredictable, and recurrent seizures. While the first line of treatment
consists of long-term medication-based therapy, more
than one third of patients are refractory. On the other hand, recourse to epilepsy surgery is still
relatively low due to very modest success rates and fear of complications. An interesting
research direction is to explore the possibility of predicting seizures, which, if made possible,
could result in the development of alternative interventional strategies. Although early seizure-
forecasting investigations date back to the 1970s, the limited number of seizure events, the
paucity of intracranial electroencephalography recordings, and the limited extent of interictal
epochs have been major hurdles toward an adequate evaluation of seizure prediction
performances. Interestingly, signals acquired from naturally epileptic canines implanted with the
ambulatory monitoring device have been made accessible through the ieeg.org online portal.
However, the seizure onset zone was not disclosed/available. Our group investigated the
possibility of forecasting seizures using the aforementioned canine data. Subsequently, we
performed a directed transfer function (DTF)-based, quantitative identification of electrodes
located within the epileptic network. A genetic algorithm was employed to select the features
most discriminative of the preictal state. We proposed a new fitness function that is insensitive to
skewed data distributions. An average sensitivity of 84.82% at a time-in-warning of 10% was
reported on the held-out dataset, improving previous seizure prediction performances. Trying to
find new opportunities for seizure prediction, we also explored novel features to track the
preictal state based on higher order spectral analysis. Extracted features were then used as inputs
to a multilayer perceptron for classification. Our preliminary findings revealed significant
differences between interictal and preictal states using each of three bispectrum-extracted
characteristics (p < 0.05). Test accuracies of 73.26%, 72.64%, and 78.11% were achieved for the
mean of magnitudes, normalized bispectral entropy, and normalized squared entropy,
respectively. In addition, we demonstrated the existence of consistent differences between the
epileptic preictal and interictal states in mean phase-amplitude coupling on the same bilateral
canine iEEG recordings. In contrast, we also explored the possibility of using quantitative
effective connectivity measures to determine the network seizure activity in high-density
recordings. The ability of the DTF to quantify causal relations between iEEG recordings has
been previously validated. However, quasi-stationarity of the analyzed signals remains a must to
avoid spurious connections between iEEG contacts. Although the identification of stationary
epochs is possible when dealing with a relatively small number of contacts, it becomes more
challenging when analyzing high-density iEEG signals. Recently, a time-varying version of the
DTF was proposed: the spectrum-weighted adaptive directed transfer function (swADTF). The
swADTF is able to cope with nonstationarity issues and automatically identify frequency ranges
of interest. Subsequently, we validated the possibility of finding seizure activity generators and
sinks by employing the swADTF on high-density recordings. The database consisted of patients
with refractory epilepsy admitted for pre-surgical evaluation at the University of Montreal
Hospital Center. Interestingly, the identified seizure activity sources were within the epileptic
focus and resected volume for patients who went seizure-free after surgical resection. In contrast,
additional or different generators were identified in non-seizure-free patients. Our findings
highlighted the feasibility of accurately identifying seizure generators and sinks using the
swADTF. Electrode selection methods based on effective connectivity measures are thus
recommended in future seizure-forecasting investigations. Recent findings highlight the
feasibility of predicting seizures using iEEG recordings; the transition from interictal into ictal
states consists of a ‘‘buildup” that can be tracked using advanced feature extraction and AI
techniques. Nevertheless, before current approaches can be translated into actual clinical devices,
further research is needed on feature extraction, electrode selection, hardware implementation,
and deep learning algorithms.
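A toy sketch of genetic-algorithm feature selection in the spirit of the case study above; a simple additive fitness over per-feature scores stands in for the skew-insensitive, classifier-based fitness described in the text, and all numbers are illustrative:

```python
import random

def ga_select_features(scores, k, pop_size=20, gens=30, seed=7):
    """Genetic algorithm choosing exactly k features. Individuals are binary
    masks; fitness is the sum of the scores of the selected features."""
    rng = random.Random(seed)
    n = len(scores)

    def fitness(mask):
        return sum(s for s, m in zip(scores, mask) if m)

    def random_mask():
        chosen = set(rng.sample(range(n), k))
        return [1 if i in chosen else 0 for i in range(n)]

    pop = [random_mask() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            # repair so the child keeps exactly k features
            ones = [i for i, m in enumerate(child) if m]
            while len(ones) > k:
                child[ones.pop(rng.randrange(len(ones)))] = 0
            zeros = [i for i, m in enumerate(child) if not m]
            while sum(child) < k:
                child[zeros.pop(rng.randrange(len(zeros)))] = 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

feature_scores = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7]  # hypothetical discriminability
best_mask = ga_select_features(feature_scores, k=3)
```

In a real seizure-prediction pipeline, evaluating an individual means training and validating a classifier on the selected features, which is exactly where a skew-insensitive fitness function becomes important.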
REFERENCES
1. de Morais MG, Martins VG, Steffens D, Pranke P, da Costa JAV. J Nanosci Nanotechnol
2014;14:1007–17.
2. Rong G, Mendez A, Bou Assi E, Zhao B, Sawan M. Engineering 2020;6:291–301.
3. Cao C, Liu F, Tan H, Song D, Shu W, Li W, Zhou Y, Bo X, Xie Z. Genomics Proteomics
Bioinformatics 2018;S1672-0229(18)30002-0.
4. Singh AV, Rosenkranz D, Ansari MHD, Singh R, Kanase A, Singh SP, Johnston B,
Tentschert J, Laux P, Luch A. Review on artificial intelligence and machine learning
empowering advanced biomedical material design to toxicity prediction.
5. Boulogeorgos AA, Trevlakis SE, Tegos SA, Papanikolaou VK, Karagiannidis GK. Machine
learning in nano-scale biomedical engineering. 2020.
6. Cheng MM, Cuda G, Bunimovich YL, Gaspari M, Heath JR, Hill HD, Mirkin CA, Nijdam AJ,
Terracciano R, Thundat T, Ferrari M. Nanotechnologies for biomolecular detection and
medical diagnostics.
7. Weng J, McClelland J, Pentland A, Sporns O, Stockman I, Sur M, et al. Autonomous
mental development by robots and animals. Science 2001;291(5504):599–600.
8. Wooldridge M, Jennings NR. Intelligent agents: theory and practice. Knowl Eng Rev
1995;10(2):115–52.
9. Huang G, Huang GB, Song S, You K. Trends in extreme learning machines: a review.
Neural Netw 2015;61:32–48.
10. Hopfield JJ. Neural networks and physical systems with emergent collective
computational abilities. Proc Natl Acad Sci USA 1982;79(8):2554–8.
11. Watts DJ, Strogatz SH. Collective dynamics of 'small-world' networks. Nature
1998;393(6684):440–2.
12. Zucker RS, Regehr WG. Short-term synaptic plasticity. Annu Rev Physiol
2002;64:355–405.
13. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw
2015;61:85–117.
14. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436–44.
15. Arel I, Rose DC, Karnowski TP. Deep machine learning—a new frontier in artificial
intelligence research. IEEE Comput Intell Mag 2010;5(4):13–8.