REVIEW ARTICLE

Neuromorphic Computing with Memristor Crossbar


Xinjiang Zhang, Anping Huang,* Qi Hu, Zhisong Xiao, and Paul K. Chu

Dr. X. Zhang, Prof. A. Huang, Dr. Q. Hu, Prof. Z. Xiao
School of Physics, Beihang University, Beijing 100191, China
E-mail: aphuang@buaa.edu.cn

Prof. P. K. Chu
Department of Physics and Department of Materials Science and Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, China

The ORCID identification number(s) for the author(s) of this article can be found under https://doi.org/10.1002/pssa.201700875.

DOI: 10.1002/pssa.201700875

Neural networks, one of the key artificial intelligence technologies today, have computational power and learning ability similar to those of the brain. However, implementations of neural networks on CMOS von Neumann computing systems suffer from the communication bottleneck set by the bus bandwidth and from the memory wall resulting from CMOS downscaling. Consequently, applications based on large-scale neural networks are energy- and area-hungry, and neuromorphic computing systems have been proposed for efficient implementation of neural networks. A neuromorphic computing system consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. With the two-terminal, nonvolatile, nanoscale memristor as the synaptic device and the crossbar as the parallel architecture, memristor crossbars are a promising candidate for neuromorphic computing. Herein, neuromorphic computing systems with memristor crossbars are reviewed. The feasibility and applicability of memristor-crossbar-based neuromorphic computing for the implementation of artificial neural networks and spiking neural networks are discussed, and the prospects and challenges are also described.

1. Introduction

On the heels of the rapid development of the internet, mobile internet, and internet of things, the volume and complexity of data are expanding exponentially. Meanwhile, driven by massive data and high-performance computing hardware, emerging technologies such as big data, cloud computing, machine learning, data mining, deep learning, and artificial intelligence (AI) are flourishing. Today, intelligent applications affect all aspects of human life, including business, education, medical care, and security, and consequently human society is transforming from the information era to the intelligence era. In this revolution, neural networks play one of the key roles. Recently, a series of impressive AI technologies have been based on neural networks, for instance, Google's AlphaGo, DeepMind's differentiable neural computer, DeepStack for no-limit poker play, and Stanford's skin cancer classification.[1-5] In addition, many of our daily operations and services use neural networks, such as smartphone voice control, shopping recommendation, voice recognition, and face recognition. Neural networks are also widely implemented in autonomous driving, smart grids, and so on, but their implementation has drawbacks in terms of energy, area, and time consumption, because learning systems based on neural networks demand a large data/computation volume for both the training and inference processes, and network models are getting larger and larger for real-world applications.

Implementing neural networks efficiently requires high-density, parallel synaptic storage and computation. Consisting of neurons connected by synapses, neural networks have computational power and learning ability similar to those of the brain.[6,7] As shown in Figure 1, neural networks can be classified into three types by the neuron model they use.[8-11] Artificial neural networks (ANNs), which use McCulloch-Pitts neurons and differentiable-activation-function neurons, map a vector space into a vector space. Spiking neural networks (SNNs), which use spiking neurons, map trains of spikes into trains of spikes. Table 1 compares biological neural networks, SNNs, and ANNs with respect to synapse models, neuron models, network topology, learning algorithms, implementation, applications, and other features.[12-17] Since ANNs and SNNs simulate different characteristics of biological neural networks,[18-20] ANN-based deep neural networks (DNNs) and SNN-based learning systems are both being developed continuously for different purposes.[21] Currently, implementations of neural networks are mostly based on the von Neumann computing system (VCS), such as CPUs, GPUs, and their clusters, which are powerful for logical computing but not efficient for neuronal and synaptic computing.[22-24] Figure 2(a) illustrates the complexity relationship between the data environment and the machine. Although powerful software, efficient learning algorithms, and novel network topologies are emerging constantly,[25-29] VCS-based DNNs and SNNs still have drawbacks pertaining to energy, area, and time consumption.[30-33] Therefore, the neuromorphic computing system (NCS) has been proposed for efficient implementation of neural networks.

NCS integrates a large number of synapses and neurons in a single chip and supports complex spatiotemporal learning algorithms, as described in Table 1.


The evolution of computing systems for neural networks is shown in Figure 2(b). Generally, NCS has two main purposes: accelerating ANN-based DNNs and developing SNNs.[34-36] Table 2 compares the different NCS approaches for SNNs with the human brain. The key operations in the training and inference processes of DNNs are the vector-matrix product, nonlinear function execution, and weight-matrix update, while SNNs require spiking neurons and synaptic devices. To accelerate DNNs, various computing systems have been designed as deep learning accelerators (DLAs), for instance, FPGA-based platforms and ASIC-based designs such as the TPU and DianNao.[37-46] These DLAs use novel computing architectures to expedite the training or inference process of DNNs. To develop SNNs, some impressive results have been obtained,[47-49] for example, IBM TrueNorth,[50] SpiNNaker,[51] Neurogrid,[52,53] Darwin,[54] and so on. Nevertheless, these silicon-transistor-based computing systems can still be improved at the device, circuit, and architecture levels.

To achieve a more efficient NCS, novel synaptic devices, compact neuronal circuits, and more parallel neuromorphic architectures are desirable. As shown in Figure 3, various devices with different materials, structures, mechanisms, and features are applicable to neuromorphic computing.[55] These neuromorphic devices can be loosely classified into memristor-type and transistor-type. Yet, the downscaling problem restricts transistor-type devices in future neuromorphic applications. Fortunately, as a nanoscale, solid-state, two-terminal neuromorphic device, the memristor is a potential candidate for future NCS.[56-60] Memristors can be used in both analog and digital computing for various types of synapses and neurons in ANNs and SNNs. Moreover, memristors can be easily integrated into the crossbar architecture, which naturally supports parallel in-memory computing and high-density storage. Built from nanoscale, two-terminal, solid-state memristors, the memristor crossbar (MC) is a parallel computing circuit boasting high density and low power consumption. Overall, neuromorphic computing with MC represents a possible path toward efficient implementation of neural networks.

Herein, neuromorphic computing with the memristor crossbar is reviewed. By exploiting the synaptic and neuronal characteristics of memristors and the parallel nature of the crossbar, MC is shown to be suitable for implementing both DNNs and SNNs. MC-based vector-matrix multiplication for DNNs and spike-timing dependent plasticity (STDP) for SNNs are then described, and the prospects and key challenges of neuromorphic computing using MC are discussed.

Xinjiang Zhang is currently a Ph.D. candidate at the School of Physics of Beihang University. He received his Bachelor's degree in automation science from Beihang University. His research interests include neural networks, learning systems, memristors, van der Waals heterostructures, as well as neuromorphic computing.

Prof. Anping Huang received his BS in physics from Lanzhou University in 1999 and Ph.D. in material science from Beijing University of Technology in 2004. His research interests include solid state electronics, interface science and engineering, semiconductor devices, as well as neuromorphic computing. He is the vice dean and professor of condensed matter physics in the School of Physics at Beihang University.

Prof. Paul K. Chu received his BS in mathematics from The Ohio State University in 1977 and MS and Ph.D. in chemistry from Cornell University in 1979 and 1982, respectively. His research covers quite a broad scope, encompassing plasma surface engineering, materials science and engineering, as well as surface science. He is the Chair Professor of Materials Engineering in the Department of Physics and Department of Materials Science and Engineering at City University of Hong Kong.

2. Synaptic Memristor

2.1. Synapse

A synapse is a two-terminal junction between a presynaptic neuron and a postsynaptic neuron, as shown in Figure 4(a). Synaptic plasticity, the basic function of a synapse, is the foundation of learning, memory, and adaptation.[61] Usually, the strength of synaptic connectivity, expressed as the synaptic weight, can be regulated by neuron activities, both excitatory and inhibitory. There are different types of synaptic plasticity, such as long-term plasticity (LTP), short-term plasticity (STP), structural plasticity, and molecular plasticity.[62-64] Synaptic devices are electrical switches which can simulate a biological synapse in both function and structure for synaptic computation. Generally, the synaptic weight is represented by the conductance of the device, and synaptic plasticity requires the device to exhibit resistive switching. Research on synaptic devices has been conducted for many years and, spurred by the rapid development of neuroscience and nanotechnology, different synaptic devices have been designed and adopted. As shown in Figure 4, synaptic devices can be classified by structure into synaptic transistors and synaptic memristors. Synaptic transistors comprise floating-gate transistors, which use charge stored on a floating-gate electrode to modulate the channel conductance and store the synaptic weight,[65-70] and ionic transistors, which use the ionic concentration in the channel, controlled by the gate, to achieve resistive switching of the channel.[71-77] However, these synaptic transistors are ultimately limited by downscaling. To design more synapse-like devices, memristors have been explored.


Figure 1. Schematic of biological neural networks, spiking neural networks, and artificial neural networks.

2.2. Synaptic Memristor

Memristors, short for memory resistors, were conceived by Leon Chua in 1971[78] and experimentally demonstrated by HP Labs in 2008.[79,80] A memristor is a two-terminal nonvolatile memory (NVM) device based on resistive switching, with a pinched hysteresis current-voltage loop as its fingerprint.[81-83] It consists of a switching layer between a top electrode and a bottom electrode, as shown in Figure 5. Since memristors have enormous scientific and commercial potential in information and computing technologies, especially neuromorphic computing, many types have been developed, and their mechanisms, materials, and switching phenomena have been reviewed.[84-98] Basically, memristors can be loosely grouped into three categories: ionic memristors, spin-based memristors, and phase-change memristors (PCM), and each can be further classified according to the mechanism, materials system, and switching phenomena. An ionic memristor changes its state when an applied voltage or current moves cations or anions in the switching layer, where the ion movement is usually accompanied by chemical (redox) reactions.[84,85,87,91,94,95,98-100] Ionic memristors can be divided into anion memristors and cation memristors. In the anion memristor, anion motion changes the valence of the metal to produce a resistance change in the switching material, which is termed valence change memory (VCM). The switching material in an anion memristor is an oxide insulator such as TiOx, ZnO, WOx, or SiOx, or a non-oxide insulator such as AlN, ZnTe, or ZnSe. In the cation memristor, the resistance change is induced by cation motion through an electrochemical reaction, so a cation memristor is also called an electrochemical memory (ECM).
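The pinched hysteresis fingerprint mentioned above can be reproduced with the simple linear ion-drift model often used for didactic purposes. The following is a minimal sketch; R_ON, R_OFF, the layer thickness, and the mobility are assumed, illustrative values, not parameters of any device discussed in this review.

```python
import numpy as np

# A minimal sketch of the linear ion-drift memristor model in the
# spirit of the HP device. The state x = w/D is the doped fraction
# of the switching layer; all parameter values are illustrative
# assumptions, not measurements of any device cited in this review.
R_ON, R_OFF = 100.0, 16e3        # limiting resistances (ohm)
D, MU_V = 10e-9, 1e-14           # layer thickness (m), ion mobility (m^2 V^-1 s^-1)
DT = 1e-5                        # integration time step (s)

def simulate(v_trace, x0=0.1):
    """Drive the device with a voltage trace; return the current trace."""
    x, currents = x0, []
    for v in v_trace:
        r = R_ON * x + R_OFF * (1.0 - x)     # doped/undoped regions in series
        i = v / r                            # Ohm's law
        x += MU_V * R_ON / D**2 * i * DT     # linear drift of the boundary
        x = min(max(x, 0.0), 1.0)            # state variable stays in [0, 1]
        currents.append(i)
    return np.array(currents)

t = np.arange(0.0, 0.2, DT)
v = 1.5 * np.sin(2 * np.pi * 10.0 * t)       # sinusoidal drive
i = simulate(v)
# Plotting i against v traces the pinched hysteresis loop that serves
# as the memristor fingerprint; the resistance at each instant is v/i.
r = np.divide(v, i, out=np.full_like(v, np.nan), where=i != 0)
print(f"resistance swings between {np.nanmin(r):.0f} and {np.nanmax(r):.0f} ohm")
```

Because the drift term depends on the accumulated charge, the current-voltage loop closes through the origin, which is the defining signature of memristive behavior.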

Table 1. Comparison of biological neural networks, spiking neural networks, and artificial neural networks with respect to synapse models, neuron models, network topology, learning algorithms, and developments. (Columns: biological neural networks | spiking neural networks | artificial neural networks.)

Synapses: Diverse | Short-term plasticity (STP), long-term plasticity (LTP), etc. | Numerical matrix
Neurons: Diverse | Integrate & fire, Hodgkin-Huxley, etc. | Sigmoid, tanh, ReLU, leaky ReLU
Topology: Complex | Hopfield networks, liquid state machines, etc. | FNN, CNN, RNN, LSTM, DNC, etc.
Learning algorithm: - | Spike-timing dependent plasticity, etc. | Gradient-descent backpropagation, etc.
Application: Cognition, inference, imagination, etc. | Real-time recognition cameras, brain-like neuromorphic chips, etc. | Autonomous driving, voice control systems, medical diagnosis, etc.
Implementation: Brain | TrueNorth, SpiNNaker, Neurogrid, Darwin, etc. | TensorFlow, PyTorch, MXNet, GPU, TPU, Cambricon, etc.
Features: The most complex and powerful computing and learning system | Biologically close; real-time; online learning; low power; noisy input; spatio-temporal | Multilayer; feasible and practical with current computing systems; data/computation intensive
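For concreteness, the ANN neuron models listed in Table 1 are scalar nonlinearities applied to a weighted sum of inputs; a minimal sketch follows, where the weights and inputs are arbitrary illustrative values.

```python
import numpy as np

# The ANN neuron models listed in Table 1 are scalar nonlinearities
# applied to a weighted sum of inputs (McCulloch-Pitts style). The
# weights and inputs below are arbitrary illustrative values.
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def tanh(z): return np.tanh(z)
def relu(z): return np.maximum(0.0, z)
def leaky_relu(z, a=0.01): return np.where(z > 0, z, a * z)

x = np.array([0.5, -1.0, 2.0])   # inputs from presynaptic neurons
w = np.array([0.8, 0.2, -0.5])   # synaptic weights
z = w @ x + 0.1                  # weighted sum plus bias
print({f.__name__: float(f(z)) for f in (sigmoid, tanh, relu, leaky_relu)})
```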


The materials system of the cation memristor has the signature that one electrode is made of an electrochemically active material such as Cu or Ag, while the counter electrode is usually an electrochemically inert metal like W, Pt, or Au. A spin-based memristor changes its state by altering the electron spin polarization.[86,97] It can be envisaged as a trilayer device consisting of a first electrode, a magnetic layer, and a second electrode. When a current is applied through the magnetic layer, the electron spin of the current changes the magnetization of the device, and the magnetization can be regulated by the cumulative effects of electron spin excitation. Spin-based memristors can be classified into two types: the spin-torque-induced magnetization memristor (STT) and the magnetic-domain-wall-motion-induced memristor. The PCM changes its resistance by transforming between the amorphous and crystalline phases of a phase-change material: the amorphous phase gives low conductance and the crystalline phase high conductance.[99,100] The behavior of the PCM depends on the phase-change material, but there are only a few reliable phase-change materials, the most common being Ge2Sb2Te5 (GST).

All two-terminal solid-state memristors can be considered synaptic devices regardless of the device materials, physical mechanisms, and switching phenomena.[56,101-114] From the structural perspective, a memristor is a two-terminal nanoscale device similar to the biological synapse. From the functional perspective, the synaptic weight can be represented as the conductance of the memristor and modified by applying a charge or flux to the memristor to achieve synaptic plasticity. Specifically, a two-terminal nanoscale memristor is suitable as the high-density synapse of large-scale neural networks, and the nonvolatile resistive switching properties of a memristor simulate synaptic plasticity while reducing power consumption. Although all types of memristors can be used as synapses in neuromorphic computing, different types of memristors mimic different kinds of synaptic plasticity, such as LTP, STP, and stochastic activation.[115-126] Besides, ANNs and SNNs require different types of synapses. Specifically, ANNs need nonvolatile analog memories with low write noise, linear conductance change, and a high on/off ratio, whereas SNNs require dynamic memristors with various synaptic properties. For example, an SNN-based learning system requires volatility to simulate STP, and a transition between STP and LTP to simulate memory consolidation in the biological synapse. On the other hand, an ANN-based learning system requires nonvolatile memory and multi-bit storage to reduce energy/area consumption. The different types of synaptic devices and their synaptic properties are summarized in Table 3.

Figure 2. Evolution of computing systems: (a) Relationship between machine complexity and data environment complexity. Currently, VCS is more efficient and NCS is still simple, but as the data environment gets more complex, NCS gains the advantage; (b) Illustration of computing system evolution with time.

As aforementioned, the synapse models of ANNs and SNNs are different. ANNs map a numerical vector space to another numerical vector space through vector-matrix multiplication and nonlinear activation functions, and the computations in the training and inference processes of ANNs are performed synchronously. According to these characteristics, the synapse of ANNs should behave like an analog memory device. In VCS-based implementations of ANNs, multiple transistors are used to represent the synaptic weight according to the accuracy requirement, which varies for different applications.


Table 2. Comparison of the major features of the human brain and neuromorphic systems for SNNs (adapted from ref. [36]). (Columns: human brain | Neurogrid | BrainScaleS | TrueNorth | SpiNNaker | Darwin.)

Material/devices: Biology | CMOS | Wafer-scale | ASIC CMOS | ARM boards, custom interconnection | CMOS
Programmable structure: Neurons & synapses for all platforms
Component complexity (neuron/synapse): 1/1 | 79/8 | Variable | 10/3 | Variable | >5/>5
Device technology: Biology | Analogue, subthreshold | Analogue, over threshold | Digital, fixed | Digital, programmable | Digital, programmable
Device feature size: 10 µm | 180 nm transistor | 180 nm transistor | 28 nm transistor | 130 nm transistor | Regular
Device number: - | 23 M | 15 M | 5.4 B | 100 M | M
Synapse model: Diverse | Shared dendrite | 4-bit digital | Binary, 4 modulators | Programmable | Digital, programmable
Synapse number: 10^15 | 10^8 | 10^5 | 2.56 × 10^8 | 1.6 × 10^6 | Programmable
Synapse feature size: 10 nm | 100 nm | 100 nm | 100 nm | 100 nm | 100 nm
Neuron model: Diverse | Adaptive quadratic IF | Adaptive exponential IF | LIF, fixed | Programmable | Programmable
Neuron number: 10^11 | 6.5 × 10^4 | 6.5 × 10^4 | 10^6 | 1.6 × 10^4 | Programmable
Neuron feature size: - | 20 µm | Variable | 10 µm | Variable | 10 µm
Interconnect: 3D direct signaling | Tree multicast | Hierarchical | 2D mesh unicast | 2D mesh multicast | Hierarchical
Network topology: SNN-based for all platforms
Learning algorithm: STDP for all platforms
Energy performance: 10 fJ | 100 pJ | 100 pJ | 25 pJ | 10 nJ | 10 nJ
On-chip learning: Yes | No | Yes | No | Yes | Yes

Generally, in NCS-based implementations of ANNs, analog, digital, or mixed memory devices represent the synaptic weights. These memory devices range from traditional MOSFETs to transistors using 2D materials and floating gates, and from metal-oxide memristors to memristors with novel materials and structures, as shown in Figure 3. When memristors are used as synapses, the conductance can be treated as either analog or digital. Figure 6 shows the conductance behavior of this type of memristor. M. Hu et al. of HP Labs built a dot-product engine for ANN acceleration with the Pt/TaOx/Ta memristor, which exhibited 8 resistance levels with 0.1% accuracy, suggesting potentially 1000 distinguishable states.[127] Recently, this team demonstrated analog signal and image processing with a large memristor crossbar based on the Ta/HfO2/Pd memristor.[128] The resistance of this device could be tuned from about 2 MΩ to 2 kΩ, and a resistance between 1.1 MΩ and 1.0 kΩ could be applied for practical vector-matrix multiplications. Many types of memristors have been applied to vector-matrix multiplication for ANN acceleration, for example, the WOx-based memristors proposed by W. D. Lu et al.[129] and the Al2O3/HfO2-based memristors proposed by P. Yao et al.[130] To develop more practical vector-matrix multiplication for ANN acceleration, the memristors need fast read/write speed, read/write linearity, low read/write noise, and high programmability.[131]

The requirements on the conductance behavior of memristors for ANNs differ from those for the implementation of SNNs. SNNs transform a set of spikes into another set of spikes by asynchronous computation based on the STDP rule. In this context, the synaptic devices are required to be more biological. Traditional MOSFET-based implementations are usually either complex and inefficient, or simple but impractical, since biological synapses exhibit various behaviors, among which timing-dependent plasticity is the most important. When memristors are used as biological-like synapses, their conductance behavior can emulate many plasticity behaviors of the biological synapse. Figure 7 illustrates the conductance behavior of a diffusive memristor when the pre-spike and post-spike are applied to the device.[124] In this work by J. J. Yang et al., the synapse consists of a diffusive memristor connected with a drift memristor. In addition, other memristors can be used as this kind of synapse, and some exhibit interesting synaptic behavior such as the short-term-memory to long-term-memory transition.[132-135] Since SNNs still need more development for practical application, the precise requirements for this type of synaptic memristor vary.

Overall, various types of memristors with different conductance behaviors can be used as neuromorphic computing hardware for the implementation of ANNs and SNNs. Beyond serving as synapses in ANNs and SNNs, some memristors exhibit spiking neuron-like behavior; these can be called neuronal memristors and are discussed in the next section.
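The linearity requirement for ANN synapses noted above can be made concrete with a common phenomenological update model in which each programming pulse moves the conductance by a step proportional to the remaining headroom. The following is a hedged sketch under assumed parameters, not a model of any specific device in this review.

```python
import numpy as np

# Phenomenological potentiation/depression model for an analog
# synaptic device: each programming pulse moves the conductance by
# a step proportional to the remaining headroom, so the update
# saturates. NU is an assumed nonlinearity parameter; NU -> 0
# approaches the ideal linear update that ANN training prefers.
G_MIN, G_MAX, N_PULSES, NU = 1e-6, 1e-4, 100, 3.0

def potentiate(g):
    return g + (G_MAX - g) * (1.0 - np.exp(-NU / N_PULSES))

def depress(g):
    return g - (g - G_MIN) * (1.0 - np.exp(-NU / N_PULSES))

g, trace = G_MIN, []
for _ in range(N_PULSES):        # a train of identical SET pulses
    g = potentiate(g)
    trace.append(g)
# Early pulses change the conductance far more than late ones,
# which is exactly the nonlinearity that degrades weight updates.
print(f"first step: {trace[0] - G_MIN:.2e} S, last step: {trace[-1] - trace[-2]:.2e} S")
```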


3. Neuronal Memristor

3.1. Neuron

A neuron, composed of dendrites, a soma, and an axon, performs a nonlinear function which maps the neuronal input to an output, as shown in Figure 8. The dendrites usually receive signals from other neurons and transmit them to the soma, and the axon usually transmits signals generated by the soma to other neurons. The soma processes the neuronal input and generates the neuronal output. The typical behavior of a neuron is to accumulate charge, changing the membrane potential of the soma through the excitatory and inhibitory postsynaptic signals received by the dendrites. When the membrane potential reaches a specific threshold, the soma generates an action potential which travels along the axon to change the charges on other neurons through synapses. Following this action potential, the membrane potential of the soma returns to the resting potential. As shown in Table 1, there are two types of neurons in neural networks: spiking neurons and artificial neurons. To emulate spiking neurons, a threshold circuit with charge accumulation is needed, which is usually implemented in software or by a specific operational circuit. Most NCSs use the leaky integrate-and-fire (LIF) model to achieve efficient hardware implementation. As a key to NCS implementation, several neuronal circuits, such as floating-gate-transistor-based LIF circuits and silicon neurons, have been designed.[136-138] Among them, some types of memristors are exploited as neurons to obtain significant area/power efficiency; specifically, active memristors can be used as neurons while passive memristors are used as synapses.[139]

Figure 3. Summary of neuromorphic hardware from devices to systems.
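The LIF dynamics referred to above integrate input current with a leak and fire on a threshold crossing; a minimal software sketch follows, in which all constants are illustrative assumptions.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron of the kind most
# NCSs adopt for efficient hardware implementation. All constants
# are illustrative assumptions.
TAU_M, V_REST, V_TH, DT = 20e-3, 0.0, 1.0, 1e-4   # s, V, V, s

def lif(input_current, r_m=1.0):
    """Integrate an input current trace; return 1 at firing steps."""
    v, spikes = V_REST, []
    for i in input_current:
        v += DT / TAU_M * (V_REST - v + r_m * i)  # leak + integration
        if v >= V_TH:            # threshold crossed: fire an action potential
            spikes.append(1)
            v = V_REST           # membrane potential returns to rest
        else:
            spikes.append(0)
    return spikes

drive = np.full(1000, 1.5)       # constant suprathreshold input, 0.1 s total
print(sum(lif(drive)), "spikes in 0.1 s")
```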

Figure 4. Biological synapse and synaptic devices. A biological synapse can be envisioned as a resistive switch. The floating-gate transistor and the ionic transistor use the conductance between the source and drain electrodes to represent the synapse. As a two-terminal device, a memristor is more similar to the biological synapse and represents the synapse by the conductance between the top and bottom electrodes.


3.2. Neuronal Memristor

Unlike a synaptic memristor, the neuronal memristor requires accumulative behavior and a threshold gate instead of continuous conductance states. The conductance of the neuronal memristor should hold its value as neuronal inputs arrive, and the neuron should fire only when the threshold is reached.[140] In addition, when a memristor is used as a neuron, non-volatility is not essential, and volatility can potentially be used to implement the LIF dynamics. According to recent investigations, PCMs, STTs, and neuristors have been reported as neurons for SNNs.

Figure 5. Schematic of a synaptic memristor. Reproduced with permission.[56] Copyright 2010, American Chemical Society.

The neuronal PCM represents the membrane potential in the phase configuration.[110] In an all-PCM neuromorphic system, both the neuronal and synaptic elements are realized using phase-change devices.[141,142] Unlike the synaptic PCM, which involves a smaller change of conductance, a large change of conductance is required for the neuronal PCM, which reduces the durability of the PCM device. The conductance behavior of the neuronal memristor is shown in Figure 9.[124] It is therefore important to ensure that spike firing is sparse throughout the lifetime of the PCM-based neuron. Alternatively, other materials with better durability can be considered. The neuristor, proposed as a Hodgkin-Huxley axon by Pickett et al., is built with two nanoscale Mott memristors.[143]
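The accumulate-and-fire behavior expected of a neuronal memristor can be sketched as follows; the per-pulse conductance increment and its spread are assumed values whose randomness only loosely mimics the stochastic firing reported for PCM neurons.

```python
import numpy as np

# Sketch of the accumulate-and-fire behavior of a neuronal
# memristor: each input pulse ratchets the conductance upward, and
# crossing the threshold fires the neuron and resets the device.
# The per-pulse increment and its spread are assumed values whose
# randomness loosely mimics the stochastic firing of PCM neurons.
rng = np.random.default_rng(42)
G_RESET, G_TH = 1e-6, 5e-5       # reset conductance and firing threshold (S)

def run(n_pulses, dg=4e-6, jitter=0.5):
    g, fire_steps = G_RESET, []
    for k in range(n_pulses):
        g += dg * rng.uniform(1.0 - jitter, 1.0 + jitter)  # accumulate
        if g >= G_TH:            # threshold reached: the neuron fires
            fire_steps.append(k)
            g = G_RESET          # device is reset (e.g., re-amorphized)
    return fire_steps

print(run(100))                  # interspike intervals vary pulse to pulse
```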

Table 3. Different synaptic memristors, including spiking synaptic memristors and analog synaptic memristors (adapted from ref. [104]). (Columns: dimensions (D or W × L) | energy consumption | programming time | multi-level states | max dynamic range | achievable retention/endurance.)

Phase change, Ge2Sb2Te5 (Kuzum et al.): D = 75 nm | 2-50 pJ | 50 ns | 100 | 1000 | Years/10^12
Phase change, Ge2Sb2Te5 (Suri et al.): D = 300 nm | 121-1552 pJ | 50 ns | 30 | 100 | -
Resistive change, TiOx/HfOx (Yu et al.): 5 × 5 µm^2 | 0.85-24 pJ | 10 ns | 100 | 100 | Years/10^13
Resistive change, PCMO (Park et al.): D = 150 nm-1 µm | 6-600 pJ | 10 µs-1 ms | 100 | 1000 | -
Resistive change, TiOx (Seo et al.): D = 250 nm | 200 nJ | 10 ms | 100 | >10, <100 | -
Resistive change, WOx (Yang et al.): 25 × 25 µm^2 | 40 pJ | 0.1 ms | >10 | 300 | -
Conductive bridge, Ag/a-Si/W (Jo et al.): 100 × 100 nm^2 | 720 pJ | 300 µs | 100 | 10 | Years/10^8
Conductive bridge, Ag/Ag2S/nanogap/Pt (Ohno et al.): Pt tip | 250 nJ | 0.5 s | 10 | >1000 | -
Conductive bridge, Ag/GeS2/W (Suri et al.): D = 200 nm | 1800-3100 pJ | 500 ns | 2 | 1000 | -
Ferroelectric, BTO/LSMO (Chanthbouala et al.): D = 350 nm | 15 pJ | 10-200 ns | 100 | 1000 | N/A
FET-based, ion-doped polymer FET (Lai et al.): 1.5 × 20 µm^2 | 10 pJ | 2 ms | 50 | >4 | N/A
FET-based, NOMFET (Alibart et al.): 5 × 1000 µm^2 | 5 mJ | 2-10 s | 30 | 15 | N/A


Figure 6. Conductance behavior of the synaptic memristor for ANNs: (a) 64 resistance levels obtained from the Pt/TaOx/Ta memristor using current compliance; (b) 8 resistance levels programmed with 0.1% accuracy, showing potentially 1000 distinguishable states. Reproduced with permission.[127] Copyright 2016, IEEE.

The Mott memristor is a dynamic device that exhibits transient memory and negative differential resistance due to an insulating-to-conducting phase transition driven by Joule heating. By exploiting the functional similarity between the dynamic resistance behavior of Mott memristors and the Hodgkin-Huxley Na+ and K+ ion channels, a neuristor comprising two NbO2 memristors has been shown to exhibit the important neuronal functions of all-or-nothing spiking with signal gain and diverse periodic spiking. The stochastic activation behavior of the Mott memristor has been described by HP Labs, and it possesses further complex characteristics useful for neuronal circuits.[144,145] STTs have also been proposed for neuronal circuits.[113,146-149] An STT neuron is described by A. Sengupta et al. to transform an already fully trained DNN into an SNN for feedforward inference.[149] The STT oscillator has been proposed for both synapse and neuron implementation by J. Grollier et al.,[113] although it should be noted that the spin wave generated by STT is quite small for synaptic device adaptation. In addition, a spiking neuron has also been realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and the chemical electromigration cell Ag/TiO2-x/Al.[150] This circuit can emulate dynamic spiking patterns in response to an external stimulus, including adaptation.

Figure 7. Conductance behavior of the synaptic memristor for SNNs: (a) Illustration of a biological synaptic junction between the pre- and postsynaptic neurons; (b) SRDP showing the change in the conductance (weight) of the drift memristor in the electronic synapse with change in the interval t_zero between the applied pulses; (c) Schematic of the pulses applied to the combined device for STDP demonstration; (d) Plot of the conductance (weight) change of the drift memristor with variation in Δt, showing the STDP response of the electronic synapse. Reproduced with permission.[124] Copyright 2017, Nature Research.


4. Neuromorphic Memristor Crossbar

4.1. Neuromorphic Architecture

A human brain has approximately 10^11 neurons, and each neuron is connected to about 5000-10 000 other neurons, producing an enormous number (about 10^15) of biological synapses. To be more energy/area efficient, a neuromorphic architecture is needed for NCS. The neuromorphic architecture is a computing architecture which can integrate synapses and neurons in a compact manner, configure the topology of neural networks easily, and execute neuronal and synaptic computation efficiently. Unlike the von Neumann architecture, the neuromorphic architecture should unify computing and memory and provide high-density, parallel data storage and computation.[151] Among the various parallel architectures for NCS, the crossbar architecture shows tremendous potential.[152-157] As a neuromorphic architecture, the crossbar can integrate both transistors and memristors: the device at each cross point can be treated as a synapse, and the neurons can be connected at the edges of the crossbar. This architecture is highly parallel, area efficient, and fault tolerant. Although transistor-based crossbars have been proposed for intensive memory and computational memory, as in IBM TrueNorth,[50] such chips are still quite expensive and area-inefficient because both synapses and neurons are implemented with transistors. Since most memristors are two-terminal crossbar-type devices and memristors can serve as synapses and neurons for ANNs or SNNs, memristor crossbars show more potential in neuromorphic computing.

Figure 8. Biological neuron and neuronal circuit.

Figure 9. Conductance behavior of the stochastic phase-change neurons. Reproduced with permission.[110] Copyright 2016, Nature Research.


4.2. Neuromorphic Memristor Crossbar

As a neuromorphic computing circuit, MC integrates storage and computation in a very dense crossbar array in which a memristor lies at each junction of the crossbar between a top electrode and a bottom electrode. MC can be used as the synaptic array, and the neuron circuit can be integrated with MC for hardware implementation of neural networks. Figure 10 shows a neuromorphic architecture based on STT neurons, where a synaptic memristor crossbar array and STT neurons are wired to implement a single layer of the neural network.[106,158]

Figure 10. Neuromorphic architecture based on STT neurons. Reproduced with permission.[106] Copyright 2015, IEEE.

The most important property of MC is that it can be programmed through the three basic operations of read, write, and adapt.[159] Read and write operations can be performed in three modes: current control, voltage control, and spiking control.[160] To write a memristor in the crossbar, a specific voltage is applied across the two lines whose cross point is that memristor. To read a memristor, a relatively small voltage (for example, less than V_write) is applied to the top and bottom lines of a junction to measure the current. Comprehensive analyses of MC read and write operations can be found elsewhere.[161-164] Compared to other neuromorphic platforms, MC can be operated as either analog or digital according to the conductance behavior of the memristors. When used as an analog computing circuit, MC has a high capacity to store multiple bits of information per element, and only a small energy is required to write distinct states (<50 fJ/bit).[165,166] Besides, MC can perform a variety of functions according to the properties of the memristors, for instance, look-up tables, content-addressable memories, and random number generators.[167] A detailed explanation of the neuromorphic operations performed by MC is given in the next section, covering vector-matrix multiplication for ANN-based deep learning models and STDP for SNNs.

5. Neural Networks Using Memristor Crossbar

5.1. Accelerating DNNs

DNNs have reliable network topologies and learning algorithms. Such networks, including feedforward neural networks, convolutional neural networks, recurrent neural networks, and other derivatives, are usually trained with supervised learning and the error-based backpropagation algorithm. The basic operations of DNNs are vector-matrix multiplication, weight-matrix updating, and nonlinear function execution. Both vector-matrix multiplication and weight-matrix updating benefit from parallelism, while the nonlinear function can be executed by artificial neurons such as the sigmoid. As a neuromorphic computing circuit, MC is suitable for DNN acceleration.

Vector-matrix multiplication is performed on MC by the readout operation, and weight-matrix updating provides the adaptation of MC.[58,127,168-173] Specifically, vector-matrix multiplication can be accelerated by exploiting a simple crossbar array for the multiplication and summation operations. As shown in Figure 11, the multiplication is performed at every cross point by Ohm's law, with current summation along rows or columns performed by Kirchhoff's current law.

Figure 11. Memristor crossbar for vector-matrix multiplication.
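In the ideal case this readout is exactly a matrix product; a minimal numerical sketch follows, in which wire resistance, sneak currents, and device noise are deliberately ignored and the conductance window is an assumption.

```python
import numpy as np

# Idealized crossbar readout: conductances G form an M x N matrix,
# input voltages V drive the rows, and the column currents realize
# the vector-matrix product in one step. Wire resistance, sneak
# currents, and device noise are deliberately ignored, and the
# conductance window is an assumption.
rng = np.random.default_rng(0)
M, N = 4, 3                                 # 4 rows (inputs), 3 columns (outputs)
G = rng.uniform(1e-6, 1e-4, size=(M, N))    # cross-point conductances (S)
V = np.array([0.10, 0.20, 0.00, 0.05])      # read voltages on the rows (V)

# Ohm's law at each cross point gives I_ij = V_i * G_ij, and
# Kirchhoff's current law sums the currents down each column,
# so the sensed column currents are exactly V @ G.
I = V @ G
print(I)                                    # amperes, one entry per column
```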


Figure 12. DNN implemented with MC and pattern classification experiment with an FNN (top-level description): (a) Input image; (b) Single-layer perceptron for classification of 3 × 3 binary images; (c) Input pattern set used; (d) Flow chart of one epoch of the in situ training algorithm used. In (d), the grey-shaded boxes show the steps implemented inside the crossbar, while those with solid black borders denote the only steps required to perform the classification operation. Reproduced with permission.[174] Copyright 2015, Nature Research.

Operation of the memristor-based vector-matrix multiplication can be divided into two basic parts: programming of the array and performing the computation. During array programming, the conductance of each cell is tuned to a targeted value and read back for verification. To achieve faster programming, multiple cells may be tuned in parallel. During computation, a vector of input voltages is driven across the rows in parallel, while the current at every column is sensed simultaneously to compute the vector-matrix product. These multiply-accumulate operations are performed in parallel at the location of the data with locally analog computing, reducing power by avoiding the time and energy of moving the weight data. Since accurate and reliable values are needed in the computation step, it is desirable to program arbitrary conductance states efficiently and rapidly.
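The tune-and-verify programming loop described above can be sketched abstractly as follows; apply_pulse and its noise model are hypothetical stand-ins for device behavior, not a real API.

```python
import numpy as np

# A toy tune-and-verify programming loop. `apply_pulse` and its
# noise model are hypothetical stand-ins for device behavior, not
# a real API: each pulse nudges the conductance with some spread.
rng = np.random.default_rng(1)

def apply_pulse(g, direction, step=2e-6):
    return float(np.clip(g + direction * step * rng.uniform(0.5, 1.5),
                         1e-6, 1e-4))

def program_cell(g, target, tol=2e-6, max_pulses=200):
    for pulses in range(max_pulses):
        error = target - g           # read the cell and compare
        if abs(error) <= tol:        # verified within tolerance
            return g, pulses
        g = apply_pulse(g, np.sign(error))
    return g, max_pulses             # best effort after max_pulses

g, pulses = program_cell(g=5e-6, target=6.4e-5)
print(f"programmed to {g:.2e} S in {pulses} pulses")
```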
With regard to neural network implementation, MC can accelerate both the training and inference processes. The training process of ANNs involves adapting the memristor conductance matrix to the data environment. Generally, the synaptic matrix of an ANN can be mapped onto a memristor conductance matrix and adapted by backpropagation with gradient algorithms.[130,168] To illustrate MC-based single-layer ANNs, Figure 12 shows a single-layer perceptron implemented with digital metal-oxide memristors by Prezioso et al.[174] This ANN uses tanh as the neuron activation function and a 12 × 12 Al2O3/TiO2-x memristor crossbar as the synaptic weight matrix with in situ learning. C. Yakopcic et al. have developed an MC that can perform N × M convolution operations in parallel, where N is the number of input maps and M is the number of output maps in a given layer of the CNN.[175] In this circuit, the convolution kernels of the CNN are assigned by software using an ex situ training process. The convolution kernel circuit is divided into two MCs so that positive and negative values can be represented. Combined with an op-amp circuit, this circuit can produce the same result (to within some analog circuit error) as a digital software equivalent. A. Shafiee et al. have built an in situ CNN implementation in which MCs are used to store input weights and perform dot-product operations in an analog manner.[176] The Hopfield network, a typical RNN in which any two neurons are linked through a weighted connection, has also seen MC-based analysis and implementation.[177] P. Yao et al. have demonstrated face classification neural networks with 3320 memristors.[130] The energy consumed in the analog synapses per iteration is about 1000 times lower than that of an implementation using an Intel Xeon Phi processor with off-chip memory, while the accuracy on the test sets is close to that obtained with a CPU. Furthermore, more ANNs have been implemented with MC for practical applications.[178-182] Thus, MC-based NCS can accelerate DNNs.
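The split into positive and negative crossbars mentioned above amounts to a differential conductance mapping; a minimal sketch follows, with an assumed device conductance window.

```python
import numpy as np

# Minimal differential mapping of a signed weight matrix W onto a
# pair of crossbars whose column currents are subtracted. The
# conductance window G_MIN/G_MAX is an assumed device property.
G_MIN, G_MAX = 1e-6, 1e-4    # siemens

def weights_to_conductances(W):
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0.0, None)    # positive part
    G_neg = G_MIN + scale * np.clip(-W, 0.0, None)   # negative part
    return G_pos, G_neg, scale

W = np.array([[0.5, -1.0],
              [2.0,  0.25]])
G_pos, G_neg, scale = weights_to_conductances(W)

V = np.array([0.1, 0.2])                    # input voltages on the rows
I_diff = V @ G_pos - V @ G_neg              # differential column currents
print(np.allclose(I_diff / scale, V @ W))   # True: V.W is recovered
```

Because the offset G_MIN appears identically in both arrays, it cancels in the subtraction, so only the conductance difference carries the signed weight.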


Figure 13. SNN circuit: (a) Implementation of STDP with a two-memristor-per-synapse scheme; (b) Key spiking characteristics of a spiking neural network: downstream spikes depend on the time integration of continuous inputs, with the synaptic weight change dependent on relative spike timing. Reproduced with permission.[111] Copyright 2017, Taylor & Francis Ltd.

5.2. Developing SNNs

To develop SNNs, the MC should consist of spiking synaptic memristors with STP, LTP, and stochastic activation, and hardware-based spiking neurons are also required.[183-185] Mapping SNNs onto MC with STDP as a local learning rule is highly intuitive, as shown in Figure 13. One edge of the crossbar array represents the pre-synaptic neurons; the orthogonal edge represents the post-synaptic neurons, and the voltage on the wiring leading to these latter neurons represents the membrane potential. One needs only to implement the STDP learning rule to modify the memristor conductance based on the timing of the pulses from the pre- and post-synaptic neurons.[186,187] As aforementioned, STDP can be implemented even with memristors that support small conductance changes in only one direction, by separating the long-term potentiation and long-term depression functionalities across two devices; in other words, STDP is a purely local learning rule. When MCs are used for SNNs, computation on the MC is asynchronous.[188] Unlike MC-based ANNs, MC-based SNNs still lack comprehensive research: most published works are simple simulations of SNNs with memristor SPICE models. Since MC-based SNNs promise low power consumption and more brain-like properties, more research effort is needed. For example, the requirements on the conductance behavior of memristors used to implement SNNs should be clarified, and the learning algorithms for MC-based SNNs should be explored.
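The pair-based STDP rule commonly mapped onto memristor conductance updates can be sketched as follows; the learning rates, time constant, and conductance window are illustrative assumptions, not values from any specific device.

```python
import numpy as np

# Pair-based STDP: potentiate when the pre-spike precedes the
# post-spike, depress otherwise. The learning rates, time constant,
# and conductance window are illustrative assumptions, not values
# from a specific device.
A_PLUS, A_MINUS, TAU = 0.010, 0.012, 20e-3   # rates, time constant (s)
G_MIN, G_MAX = 1e-6, 1e-4                    # conductance window (S)

def stdp_update(g, t_pre, t_post):
    dt = t_post - t_pre
    if dt >= 0:      # pre before post: long-term potentiation
        dw = A_PLUS * np.exp(-dt / TAU)
    else:            # post before pre: long-term depression
        dw = -A_MINUS * np.exp(dt / TAU)
    # map the relative weight change onto the conductance window
    return float(np.clip(g + dw * (G_MAX - G_MIN), G_MIN, G_MAX))

g = 5e-5
for dt in (5e-3, -5e-3, 50e-3):
    print(f"dt = {dt:+.3f} s -> g = {stdp_update(g, 0.0, dt):.3e} S")
```

Since the update depends only on the two spike times at a single junction, the rule is local in exactly the sense exploited by the crossbar mapping above.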
6. Prospects and Conclusion

Neuromorphic computing with the memristor crossbar has been reviewed in this article. Owing to its high density and small power consumption, MC is well suited for neuromorphic computing. As two-terminal nanoscale devices, memristors are mainly used as synapses, and some types of memristors can be used to build neuronal circuits owing to their stochastic and chaotic behavior. According to their different synaptic properties, different types of memristors can be used for DNNs and for SNNs. As a neuromorphic architecture, the crossbar provides highly parallel computing. Together, the memristor crossbar, integrating computing and memory, can be used for efficient implementation of neural networks, including DNNs and SNNs. On the heels of the rapid development of AI technology, neuromorphic computing using the memristor crossbar is likely to morph into a practical and powerful platform for future AI applications.

To make further progress on MC-based neuromorphic computing systems and efficient implementation of neural networks, challenges remain. Table 4 lists the key challenges and potential approaches at the device, circuit, and system levels.

Table 4. Key challenges and potential approaches of neuromorphic computing with memristor crossbars at the device, circuit, and system levels.

Device level:
- Materials, structures, and mechanisms. Challenge: the materials systems are diverse, and to achieve applicable memristors the core memristive materials systems need development. Approach: search among 2D materials, conducting polymers, or designed functional materials.
- Neuromorphic memristor. Challenge: nanoscale, solid-state, crossbar-type memristors are needed. Approach: apply structures and experience from semiconductor technology.
- Behavior and device modeling. Challenge: behaviors are diverse but comprehensive and accurate device models are lacking; stable and repeatable conductance behavior is needed for different synapse and neuron models; mechanisms and conductance behaviors vary, so a more general conductance behavior of memristors is needed. Approach: develop simulation software that models the device from basic principles; build SPICE models of memristors and compare them with physical devices.

Circuit level:
- Scaling. Challenge: sneak paths and IR drop. Approach: construct large crossbars, 3D crossbars, or crossbar arrays.
- Read/write. Challenge: read/write schemes in analog and digital modes. Approach: develop memristors with a rectifying effect; use selector devices.

System level:
- Neuromorphic scaling. Challenge: scalability, i.e., building large-scale neural networks. Approach: software simulation with memristor models.
- Operations. Challenge: generality, i.e., developing a general-purpose computing system for ANNs and SNNs, including data mapping, dot-product, STDP, etc. Approach: experimentally build applications with memristor crossbars.
- Neural networks. Challenge: algorithms, i.e., practical network topologies and learning algorithms for both MC-based ANNs and SNNs. Approach: develop training algorithms for memristor-based SNNs and DNNs; develop hybrid neuromorphic systems consisting of both ANNs and SNNs.

On the device level, in order to obtain more reliable and practical memristors, the memristive mechanisms need to be understood more thoroughly and the performance needs to be improved.[196,197] To identify reliable and practical memristive materials, two-dimensional materials such as graphene, MoS2, phosphorene, h-BN, and two-dimensional perovskites are being investigated for memristors.[189-195] In addition, crossbar-type memristors have better potential than planar memristors. When it comes to neuromorphic computing, the conductance behavior of synaptic and neuronal memristors needs to be studied thoroughly from the perspectives of both neuroscience and computer science. Furthermore, more simulation work should be performed to make use of existing device properties and to provide guidance for the development of future devices with different performance requirements.


On the circuit level, the key challenges are to effectively enlarge the MC circuit and to design efficient read/write schemes that overcome noise.[198,199] Adding a transistor selector for each memristor, or designing a memristor with rectifying effects, can dispel the sneak path, as in the 1T1R device and the nSi/SiO2/pSi memristor.[200] Besides, memristors with rectifying effects are also suitable for 3D crossbars. To reduce the IR drop, appropriate electrode materials, crossbar size, and read/write schemes should be considered. Finally, on the system level, the training and inference methods of MC-based ANNs and SNNs are still lacking. To design a practical neuromorphic computing system for efficient implementation of ANNs and SNNs, the basic computing operations should be supported, including matrix read/write, dot-product, and STDP functions. The learning algorithms of ANNs and SNNs, both in situ and ex situ, should also be supported by MC-based NCS. In spite of these challenges, neuromorphic computing with the memristor crossbar continues to be an attractive approach for the efficient implementation of neural networks, and the development of memristors and neuromorphic computing systems is expected to remain an active research area in the AI era.

Appendix

PACS Codes: 01.30.Rr, 07.05.Mh, 07.50.Ek, 87.18.Sn

Acknowledgements

The work was jointly financially supported by the National Natural Science Foundation of China (Grant Nos. 11574017, 11574021, 51372008, and 11604007), the Special Foundation of Beijing Municipal Science & Technology Commission (Grant No. Z161100000216149), and City University of Hong Kong Strategic Research Grant (SRG) No. 7004644.

Conflict of Interest

The authors declare no conflict of interest.

Keywords

deep neural networks, memristor crossbar, memristors, neuromorphic computing, spiking neural networks

Received: November 13, 2017
Revised: March 26, 2018
Published online: May 21, 2018

[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis, Nature 2015, 518, 529.
[2] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, Nature 2016, 529, 484.
[3] A. Graves, G. Wayne, M. Reynolds, T. Harley, I. Danihelka, A. Grabska-Barwinska, S. G. Colmenarejo, E. Grefenstette, T. Ramalho, J. Agapiou, A. P. Badia, K. M. Hermann, Y. Zwols, G. Ostrovski, A. Cain, H. King, C. Summerfield, P. Blunsom, K. Kavukcuoglu, D. Hassabis, Nature 2016, 538, 471.
[4] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, S. Thrun, Nature 2017, 542, 115.
[5] M. Moravcík, M. Schmid, N. Burch, V. Lisy, D. Morrill, N. Bard, T. Davis, K. Waugh, M. Johanson, M. Bowling, Science 2017, 356, 508.
[6] H. T. Siegelmann, E. D. Sontag, J. Comp. Syst. Sci. 1995, 50, 132.
[7] W. Maass, H. Markram, J. Comp. Syst. Sci. 2004, 69, 593.
[8] W. Maass, Neural Netw. 1997, 10, 1659.
[9] S. Ghosh-Dastidar, H. Adeli, Int. J. Neural Syst. 2009, 19, 295.
[10] A. Grüning, S. M. Bohte, in Proc. ESANN, 2014.
[11] H. Paugam-Moisy, S. Bohte, Computing with Spiking Neuron Networks, Springer, Berlin, Heidelberg 2012.
[12] A. M. Andrew, Kybernetes 2003, 32, https://doi.org/10.1108/k.2003.06732gae.003
[13] E. M. Izhikevich, IEEE Trans. Neural Netw. 2004, 15, 1063.
[14] A. N. Burkitt, Biol. Cybern. 2006, 95, 1.
[15] Y. Cao, Y. Chen, D. Khosla, Int. J. Comput. Vision 2015, 113, 54.
[16] L. F. Abbott, B. DePasquale, R.-M. Memmesheimer, Nat. Neurosci. 2016, 19, 350.
[17] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, D. S. Modha, Proc. Natl. Acad. Sci. USA 2016, 113, 11441.
[18] Y. LeCun, Y. Bengio, G. Hinton, Nature 2015, 521, 436.
[19] J. Schmidhuber, Neural Netw. 2015, 61, 85.
[20] T. E. Potok, C. D. Schuman, S. R. Young, R. M. Patton, F. Spedalieri, J. Liu, K.-T. Yao, G. Rose, G. Chakma, in Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, ACM, Salt Lake City, Utah 2016, https://doi.org/10.1109/MLHPC.2016.9
[21] A.-D. Almási, S. Wozniak, V. Cristea, Y. Leblebici, T. Engbersen, Neurocomputing 2016, 174, 31.
[22] D. Peteiro-Barral, B. Guijarro-Berdiñas, Prog. Artif. Intell. 2013, 2, 1.
[23] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, E. Muharemagic, J. Big Data 2015, 2, 1.
[24] K. Ota, M. S. Dao, V. Mezaris, F. G. B. D. Natale, ACM Trans. Multimedia Comput. Commun. Appl. 2017, 13, 34.
[25] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. A. Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, A. Y. Ng, in Advances in Neural Information Processing Systems 25 (Eds: F. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger), Curran Associates, Inc., Lake Tahoe 2012, pp. 1223.
[26] S. Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan, in Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015, pp. 1737-1746.
[27] S. Han, J. Pool, J. Tran, W. Dally, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., Lake Tahoe 2015, pp. 1135.
[28] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. J. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. G. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. A. Tucker, V. Vanhoucke, V. Vasudevan, F. B. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous distributed systems, arXiv preprint arXiv:1603.04467, 2016. Software available from tensorflow.org.


[29] R. Spring, A. Shrivastava, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, Halifax, NS, Canada 2017, pp. 445.
[30] A. Saulsbury, F. Pong, A. Nowatzyk, IEEE, Philadelphia, PA, USA, 1996, https://doi.org/10.1109/ISCA.1996.10008
[31] W. A. Wulf, S. A. McKee, ACM SIGARCH Computer Architecture News 1995, 23, 20.
[32] S. A. McKee, in Proceedings of the 1st Conference on Computing Frontiers, ACM, Ischia, Italy, 2004, pp. 162.
[33] H. Esmaeilzadeh, E. Blem, R. St. Amant, K. Sankaralingam, D. Burger, IEEE Micro 2012, 32, 122.
[34] Z. Du, D. D. B.-D. Rubin, Y. Chen, L. He, T. Chen, L. Zhang, C. Wu, O. Temam, in 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), ACM, Waikiki, HI, 2015, pp. 494–507.
[35] S. Soman, Jayadeva, M. Suri, Big Data Analytics 2016, 1, 15.
[36] R. A. Nawrocki, R. M. Voyles, S. E. Shaheen, IEEE Trans. Electron Devices 2016, 63, 3819.
[37] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, O. Temam, ACM Sigplan Not. 2014, 49, 269.
[38] Y. Chen, T. Chen, Z. Xu, N. Sun, O. Temam, Commun. ACM 2016, 59, 105.
[39] K. Ovtcharov, O. Ruwase, J.-Y. Kim, J. Fowers, K. Strauss, E. S. Chung, in 2015 IEEE Hot Chips 27 Symposium (HCS), IEEE, Cupertino, CA, USA, 2016, https://doi.org/10.1109/HOTCHIPS.2015.7477459
[40] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, Y. Wang, H. Yang, in Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM, Monterey, California, 2016, pp. 26–35.
[41] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, W. J. Dally, in Proceedings of the 43rd International Symposium on Computer Architecture, ACM, New York, NY, USA, 2016, pp. 243–254.
[42] F. Ortega-Zamorano, J. M. Jerez, D. Urda Muñoz, R. M. Luque-Baena, L. Franco, IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1840.
[43] C. Zhang, D. Wu, J. Sun, G. Sun, G. Luo, J. Cong, in Proceedings of the 2016 International Symposium on Low Power Electronics and Design, ACM, New York, NY, USA, 2016, pp. 326–331.
[44] E. Nurvitadhi, D. Sheffield, J. Sim, A. Mishra, G. Venkatesh, D. Marr (Eds: Y. C. Song, S. Wang, B. Nelson, J. Li, Y. Peng), IEEE, Xi'an, China, 2016, https://doi.org/10.1109/FPT.2016.7929192
[45] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, D. H. Yoon, in Proceedings of the 44th Annual International Symposium on Computer Architecture, ACM, New York, NY, USA, 2017, pp. 1–12.
[46] Z. Li, Y. Wang, T. Zhi, T. Chen, Front. Comput. Sci. 2017, 11, 746.
[47] J. L. Krichmar, P. Coussy, N. Dutt, ACM J. Emerg. Technol. Comput. Syst. 2015, 11, 36.
[48] S. Furber, J. Neural Eng. 2016, 13, 051001.
[49] C. D. James, J. B. Aimone, N. E. Miner, C. M. Vineyard, F. H. Rothganger, K. D. Carlson, S. A. Mulder, T. J. Draelos, A. Faust, M. J. Marinella, J. H. Naegle, S. J. Plimpton, Biologically Inspired Cognitive Architectures 2017, 19, 49.
[50] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. K. Esser, R. Appuswamy, B. Taba, A. Amir, M. D. Flickner, W. P. Risk, R. Manohar, D. S. Modha, Science 2014, 345, 668.
[51] S. B. Furber, F. Galluppi, S. Temple, L. A. Plana, Proc. IEEE 2014, 102, 652.
[52] J. Schemmel, D. Bruederle, A. Gruebl, M. Hock, K. Meier, S. Millner, in 2010 IEEE International Symposium on Circuits and Systems, IEEE, Paris, France, 2010, pp. 1947–1950.
[53] B. V. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A. R. Chandrasekaran, J.-M. Bussat, R. Alvarez-Icaza, J. V. Arthur, P. A. Merolla, K. Boahen, Proc. IEEE 2014, 102, 699.
[54] D. Ma, J. Shen, Z. Gu, M. Zhang, X. Zhu, X. Xu, Q. Xu, Y. Shen, G. Pan, J. Syst. Archit. 2017, 77, 43.
[55] B. Rajendran, F. Alibart, IEEE J. Emerging Sel. Top. Circuits Syst. 2016, 6, 198.
[56] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, W. Lu, Nano Lett. 2010, 10, 1297.
[57] G. Indiveri, B. Linares-Barranco, R. Legenstein, G. Deligeorgis, T. Prodromakis, Nanotechnology 2013, 24, 384010.
[58] B. Chen, F. Cai, J. Zhou, W. Ma, P. Sheridan, W. D. Lu, IEEE, Washington, DC, 2015, pp. 17.5.1–17.5.4.
[59] S. B. Eryilmaz, D. Kuzum, S. Yu, H.-S. P. Wong, in IEEE International Electron Devices Meeting (IEDM), IEEE, Washington, DC, 2015, https://doi.org/10.1109/IEDM.2015.7409622
[60] M. A. Zidan, J. P. Strachan, W. D. Lu, Nat. Electron. 2018, 1, 22.
[61] L. F. Abbott, W. G. Regehr, Nature 2004, 431, 796.
[62] R. S. Zucker, W. G. Regehr, Annu. Rev. Physiol. 2002, 64, 355.
[63] R. Lamprecht, J. LeDoux, Nat. Rev. Neurosci. 2004, 5, 45.
[64] C. Lohmann, H. W. Kessels, J. Physiol. London 2014, 592, 13.
[65] B. Lee, B. Sheu, H. Yang, IEEE Trans. Circuits Syst. 1991, 38, 654.
[66] S. Ramakrishnan, P. E. Hasler, C. Gordon, IEEE Trans. Biomed. Circuits Syst. 2011, 5, 244.
[67] M. Ziegler, H. Kohlstedt, J. Appl. Phys. 2013, 114, 194506.
[68] R. Gopalakrishnan, A. Basu, IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2596.
[69] S. Kim, B. Choi, M. Lim, J. Yoon, J. Lee, H.-D. Kim, S.-J. Choi, ACS Nano 2017, 11, 2814.
[70] H.-S. Choi, D.-H. Wee, H. Kim, S. Kim, K.-C. Ryoo, B.-G. Park, Y. Kim, IEEE Trans. Electron Devices 2018, 65, 101.
[71] L. Q. Zhu, C. J. Wan, L. Q. Guo, Y. Shi, Q. Wan, Nat. Commun. 2014, 5, 3158.
[72] L. Guo, J. Wen, J. Ding, C. Wan, G. Cheng, Sci. Rep. 2016, 6, 38578.
[73] R. A. John, J. Ko, M. R. Kulkarni, N. Tiwari, N. A. Chien, N. G. Ing, W. L. Leong, N. Mathews, Small 2017, 13.
[74] P. B. Pillai, M. M. De Souza, ACS Appl. Mater. Interfaces 2017, 9, 1609.
[75] E. J. Fuller, F. El Gabaly, F. Leonard, S. Agarwal, S. J. Plimpton, R. B. Jacobs-Gedrim, C. D. James, M. J. Marinella, A. A. Talin, Adv. Mater. 2017, 29, 1604310.
[76] Y. van de Burgt, E. Lubberman, E. J. Fuller, S. T. Keene, G. C. Faria, S. Agarwal, M. J. Marinella, A. A. Talin, A. Salleo, Nat. Mater. 2017, 16, 414.
[77] C. S. Yang, D. S. Shang, N. Liu, G. Shi, X. Shen, R. C. Yu, Y. Q. Li, Y. Sun, Adv. Mater. 2017, 29, 1700906.
[78] L. Chua, IEEE Trans. Circuit Theory 1971, 18, 507.
[79] D. B. Strukov, G. S. Snider, D. R. Stewart, R. S. Williams, Nature 2008, 453, 80.
[80] S. Williams, IEEE Spectr. 2008, 45, 24.
[81] L. Chua, Appl. Phys. A: Mater. Sci. Process. 2011, 102, 765.
[82] S. P. Adhikari, M. P. Sah, H. Kim, L. O. Chua, IEEE Trans. Circuits Syst. 2013, 60, 3008.
[83] L. Chua, Radioengineering 2015, 24, 319.
[84] R. Waser, M. Aono, Nat. Mater. 2007, 6, 833.
[85] J. J. Yang, M. D. Pickett, X. Li, D. A. A. Ohlberg, D. R. Stewart, R. S. Williams, Nat. Nanotechnol. 2008, 3, 429.
[86] Y. V. Pershin, M. Di Ventra, Phys. Rev. B 2008, 78, 113309.

[87] R. Waser, R. Dittmann, G. Staikov, K. Szot, Adv. Mater. 2009, 21, 2632.
[88] T. Driscoll, H.-T. Kim, B.-G. Chae, M. Di Ventra, D. N. Basov, Appl. Phys. Lett. 2009, 95, 043503.
[89] Y. V. Pershin, M. Di Ventra, Adv. Phys. 2011, 60, 145.
[90] Q. Xia, M. D. Pickett, J. J. Yang, X. Li, W. Wu, G. Medeiros-Ribeiro, R. S. Williams, Adv. Funct. Mater. 2011, 21, 2660.
[91] J. J. Yang, D. B. Strukov, D. R. Stewart, Nat. Nanotechnol. 2013, 8, 13.
[92] F. Pan, S. Gao, C. Chen, C. Song, F. Zeng, Mater. Sci. Eng., R 2014, 83, 1.
[93] L. Wang, C. Yang, J. Wen, S. Gai, Y. Peng, J. Mater. Sci. Mater. Electron. 2015, 26, 4618.
[94] A. Wedig, M. Luebben, D.-Y. Cho, M. Moors, K. Skaja, V. Rana, T. Hasegawa, K. K. Adepalli, B. Yildiz, R. Waser, I. Valov, Nat. Nanotechnol. 2016, 11, 67.
[95] B. Mohammad, M. A. Jaoude, V. Kumar, D. M. Al Homouz, H. Abu Nahla, M. Al-Qutayri, N. Christoforou, Nanotechnol. Rev. 2016, 5, 311.
[96] D. Ielmini, Semicond. Sci. Technol. 2016, 31, 063002.
[97] V. K. Joshi, Eng. Sci. Technol., Int. J. 2016, 19, 1503.
[98] S. Sahoo, S. R. S. Prabaharan, J. Nanosci. Nanotechnol. 2017, 17, 72.
[99] S. Raoux, F. Xiong, M. Wuttig, E. Pop, MRS Bull. 2014, 39, 703.
[100] G. W. Burr, M. J. Brightsky, A. Sebastian, H.-Y. Cheng, J.-Y. Wu, S. Kim, N. E. Sosa, N. Papandreou, H.-L. Lung, H. Pozidis, E. Eleftheriou, C. H. Lam, IEEE J. Emerging Sel. Top. Circuits Syst. 2016, 6, 146.
[101] S. D. Ha, S. Ramanathan, J. Appl. Phys. 2011, 110, 071101.
[102] P. Krzysteczko, J. Muenchenberger, M. Schaefers, G. Reiss, A. Thomas, Adv. Mater. 2012, 24, 762.
[103] D. Kuzum, R. G. D. Jeyasingh, B. Lee, H.-S. P. Wong, Nano Lett. 2012, 12, 2179.
[104] D. Kuzum, S. Yu, H.-S. P. Wong, Nanotechnology 2013, 24, 382001.
[105] S. Gaba, P. Sheridan, J. Zhou, S. Choi, W. Lu, Nanoscale 2013, 5, 5872.
[106] A. F. Vincent, J. Larroque, N. Locatelli, N. Ben Romdhane, O. Bichler, C. Gamrat, W. S. Zhao, J.-O. Klein, S. Galdin-Retailleau, D. Querlioz, IEEE Trans. Biomed. Circuits Syst. 2015, 9, 166.
[107] N. K. Upadhyay, S. Joshi, J. J. Yang, Sci. China: Inf. Sci. 2016, 59, 061404.
[108] J. Grollier, D. Querlioz, M. D. Stiles, Proc. IEEE 2016, 104.
[109] S. Lequeux, J. Sampaio, V. Cros, K. Yakushiji, A. Fukushima, R. Matsumoto, H. Kubota, S. Yuasa, J. Grollier, Sci. Rep. 2016, 6, 31510.
[110] T. Tuma, A. Pantazi, M. Le Gallo, A. Sebastian, E. Eleftheriou, Nat. Nanotechnol. 2016, 11, 693.
[111] G. W. Burr, R. M. Shelby, A. Sebastian, S. Kim, S. Kim, S. Sidler, K. Virwani, M. Ishii, P. Narayanan, A. Fumarola, L. L. Sanches, I. Boybat, M. Le Gallo, K. Moon, J. Woo, H. Hwang, Y. Leblebici, Adv. Phys.: X 2017, 2, 89.
[112] L. Wang, S.-R. Lu, J. Wen, Nanoscale Res. Lett. 2017, 12, 1.
[113] J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, J. Grollier, Nature 2017, 547, 428.
[114] J. Li, Q. Duan, T. Zhang, M. Yin, X. Sun, Y. Cai, L. Li, Y. Yang, R. Huang, RSC Adv. 2017, 7, 43132.
[115] T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. K. Gimzewski, M. Aono, Nat. Mater. 2011, 10, 591.
[116] T. Chang, S.-H. Jo, W. Lu, ACS Nano 2011, 5, 7669.
[117] S. Saighi, C. G. Mayr, T. Serrano-Gotarredona, H. Schmidt, G. Lecerf, J. Tomas, J. Grollier, S. Boyn, A. F. Vincent, D. Querlioz, S. La Barbera, F. Alibart, D. Vuillaume, O. Bichler, C. Gamrat, B. Linares-Barranco, Front. Neurosci. 2015, 9, 51.
[118] S. La Barbera, D. Vuillaume, F. Alibart, ACS Nano 2015, 9, 941.
[119] E. Prati, Int. J. Nanotechnol. 2016, 13, 509.
[120] S. La Barbera, A. F. Vincent, D. Vuillaume, D. Querlioz, F. Alibart, Sci. Rep. 2016, 6, 39216.
[121] C. T. Chang, F. Zeng, X. J. Li, W. S. Dong, S. H. Lu, S. Gao, F. Pan, Sci. Rep. 2016, 6, 18915.
[122] R. Berdan, E. Vasilaki, A. Khiat, G. Indiveri, A. Serb, T. Prodromakis, Sci. Rep. 2016, 6, 18639.
[123] C. H. Bennett, S. La Barbera, A. F. Vincent, J.-O. Klein, F. Alibart, D. Querlioz, in 2016 International Joint Conference on Neural Networks (IJCNN), IEEE, Vancouver, BC, Canada, 2016, pp. 947–954.
[124] Z. Wang, S. Joshi, S. E. Savel'ev, H. Jiang, R. Midya, P. Lin, M. Hu, N. Ge, J. P. Strachan, Z. Li, Q. Wu, M. Barnell, G.-L. Li, H. L. Xin, R. S. Williams, Q. Xia, J. J. Yang, Nat. Mater. 2017, 16, 101.
[125] X. Zhu, C. Du, Y. Jeong, W. D. Lu, Nanoscale 2017, 9, 45.
[126] X. Yan, J. Zhao, S. Liu, Z. Zhou, Q. Liu, J. Chen, X. Y. Liu, Adv. Funct. Mater. 2018, 28, 1705320.
[127] M. Hu, J. P. Strachan, Z. Li, R. S. Williams, in Proceedings of the Seventeenth International Symposium on Quality Electronic Design (ISQED 2016), IEEE, Santa Clara, CA, 2016, pp. 374–379.
[128] C. Li, M. Hu, Y. Li, H. Jiang, N. Ge, E. Montgomery, J. Zhang, W. Song, N. Dávila, C. E. Graves, Z. Li, J. P. Strachan, P. Lin, Z. Wang, M. Barnell, Q. Wu, R. S. Williams, J. J. Yang, Q. Xia, Nat. Electron. 2018, 1, 52.
[129] P. M. Sheridan, F. Cai, C. Du, W. Ma, Z. Zhang, W. D. Lu, Nat. Nanotechnol. 2017, 12, 784.
[130] P. Yao, H. Wu, B. Gao, S. B. Eryilmaz, X. Huang, W. Zhang, Q. Zhang, N. Deng, L. Shi, H.-S. P. Wong, H. Qian, Nat. Commun. 2017, 8, 15199.
[131] E. J. Merced-Grafals, N. Davila, N. Ge, R. S. Williams, J. P. Strachan, Nanotechnology 2016, 27, 365202.
[132] T. Chang, Y. Yang, W. Lu, IEEE Circuits Syst. Mag. 2013, 13, 56.
[133] Y. Park, J.-S. Lee, ACS Nano 2017, 11, 8962.
[134] X. Zhang, S. Liu, X. Zhao, F. Wu, Q. Wu, W. Wang, R. Cao, Y. Fang, H. Lv, S. Long, Q. Liu, M. Liu, IEEE Electron Device Lett. 2017, 38, 1208.
[135] W. Banerjee, Q. Liu, H. Lv, S. Long, M. Liu, Nanoscale 2017, 9, 14442.
[136] J. V. Arthur, K. A. Boahen, IEEE Trans. Circuits Syst. 2011, 58, 1034.
[137] S. Choudhary, S. Sloan, S. Fok, A. Neckar, E. Trautmann, P. Gao, T. Stewart, C. Eliasmith, K. Boahen, in Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning – Volume Part I, Springer-Verlag, Berlin, Heidelberg, 2012, pp. 121–128.
[138] S. Dutta, V. Kumar, A. Shukla, N. R. Mohapatra, U. Ganguly, Sci. Rep. 2017, 7, 8257.
[139] L. Chua, Nanotechnology 2013, 24, 383001.
[140] X. Zhang, W. Wang, Q. Liu, X. Zhao, J. Wei, R. Cao, Z. Yao, X. Zhu, F. Zhang, H. Lv, S. Long, M. Liu, IEEE Electron Device Lett. 2018, 39, 308.
[141] C. D. Wright, P. Hosseini, J. A. V. Diosdado, Adv. Funct. Mater. 2013, 23, 2248.
[142] A. Pantazi, S. Wozniak, T. Tuma, E. Eleftheriou, Nanotechnology 2016, 27, 355205.
[143] M. D. Pickett, G. Medeiros-Ribeiro, R. S. Williams, Nat. Mater. 2013, 12, 114.
[144] S. Kumar, J. P. Strachan, R. S. Williams, Nature 2017, 548, 318.
[145] L. Gao, P.-Y. Chen, S. Yu, Appl. Phys. Lett. 2017, 111, 103503.
[146] M. Sharad, D. Fan, K. Roy, J. Appl. Phys. 2013, 114, 234906.
[147] N. Locatelli, V. Cros, J. Grollier, Nat. Mater. 2014, 13, 11.
[148] D. Fan, Y. Shim, A. Raghunathan, K. Roy, IEEE Trans. Nanotechnol. 2015, 14, 1013.
[149] A. Sengupta, K. Roy, in International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, 2015, https://doi.org/10.1109/IJCNN.2015.7280306
[150] M. Ignatov, M. Ziegler, M. Hansen, A. Petraru, H. Kohlstedt, Front. Neurosci. 2015, 9, 376.
[151] L. A. Pastur-Romay, A. B. Porto-Pazos, F. Cedron, A. Pazos, Curr. Trends Med. Chem. 2017, 17, 1646.

[152] J. R. Heath, P. J. Kuekes, G. S. Snider, R. S. Williams, Science 1998, 280, 1716.
[153] O. Turel, K. Likharev, Int. J. Circ. Theor. App. 2003, 31, 37.
[154] W. S. Zhao, G. Agnus, V. Derycke, A. Filoramo, J.-P. Bourgoin, C. Gamrat, Nanotechnology 2010, 21, 175202.
[155] K. K. Likharev, Sci. Adv. Mater. 2011, 3, 322.
[156] H. Li, B. Gao, Z. Chen, Y. Zhao, P. Huang, H. Ye, L. Liu, X. Liu, J. Kang, Sci. Rep. 2015, 5, 13330.
[157] O. Tunali, M. Altun, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2017, 36, 747.
[158] D. Zhang, L. Zeng, K. Cao, M. Wang, S. Peng, Y. Zhang, Y. Zhang, J.-O. Klein, Y. Wang, W. Zhao, IEEE Trans. Biomed. Circuits Syst. 2016, 10, 828.
[159] L. V. Gambuzza, M. Frasca, L. Fortuna, V. Ntinas, I. Vourkas, G. C. Sirakoulis, IEEE Trans. Circuits Syst. 2017, 64, 2124.
[160] S. Yu, J. Liang, Y. Wu, H.-S. P. Wong, Nanotechnology 2010, 21, 465202.
[161] P. O. Vontobel, W. Robinett, P. J. Kuekes, D. R. Stewart, J. Straznicky, R. S. Williams, Nanotechnology 2009, 20, 425204.
[162] Z. Xu, A. Mohanty, P.-Y. Chen, D. Kadetotad, B. Lin, J. Ye, S. Vrudhula, S. Yu, J.-S. Seo, Y. Cao, in 5th Annual International Conference on Biologically Inspired Cognitive Architectures (BICA 2014) (Eds: A. V. Samsonovich, P. Robertson), Elsevier, MIT Campus, Cambridge, MA 2014, pp. 126.
[163] W. Joo, J. H. Lee, S. M. Choi, H.-D. Kim, S. Kim, J. Nanosci. Nanotechnol. 2016, 16, 11391.
[164] K. J. Yoon, W. Bae, D.-K. Jeong, C. S. Hwang, Adv. Electron. Mater. 2016, 2, 1600326.
[165] A. C. Torrezan, J. P. Strachan, G. Medeiros-Ribeiro, R. S. Williams, Nanotechnology 2011, 22, 485203.
[166] S. N. Truong, K.-S. Min, J. Semicond. Technol. Sci. 2014, 14, 356.
[167] H. Jiang, D. Belkin, S. E. Savel'ev, S. Lin, Z. Wang, Y. Li, S. Joshi, R. Midya, C. Li, M. Rao, M. Barnell, Q. Wu, J. J. Yang, Q. Xia, Nat. Commun. 2017, 8, 882.
[168] D. Soudry, D. Di Castro, A. Gal, A. Kolodny, S. Kvatinsky, IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2408.
[169] T. Gokmen, Y. Vlasov, Front. Neurosci. 2016, 10, 333.
[170] A. Velasquez, S. K. Jha, in 2016 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, Montreal, QC, Canada, 2016, pp. 1874–1877.
[171] A. Hawn, J. Yu, R. Nane, M. Taouil, S. Hamdioui, K. Bertels, in 14th International Conference on High Performance Computing & Simulation (HPCS) (Ed: W. W. Smari), IEEE, 2016, pp. 759–766.
[172] M. Nourazar, V. Rashtchi, A. Azarpeyvand, F. Merrikh-Bayat, Analog Integr. Circuits Signal Process. 2017, 93, 363.
[173] M. Hu, C. E. Graves, C. Li, Y. Li, N. Ge, E. Montgomery, N. Davila, H. Jiang, R. S. Williams, J. J. Yang, Q. Xia, J. P. Strachan, Adv. Mater. 2018, 30, 1705914.
[174] M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, D. B. Strukov, Nature 2015, 521, 61.
[175] C. Yakopcic, R. Hasan, T. M. Taha, in International Joint Conference on Neural Networks (IJCNN), IEEE, Killarney, Ireland, 2015, https://doi.org/10.1109/IJCNN.2015.7280813
[176] A. Shafiee, A. Nag, N. Muralimanohar, R. Balasubramonian, J. P. Strachan, M. Hu, R. S. Williams, V. Srikumar, in 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), IEEE, Seoul, Republic of Korea, 2016, pp. 14–26.
[177] X. Guo, F. Merrikh-Bayat, L. Gao, B. D. Hoskins, F. Alibart, B. Linares-Barranco, L. Theogarajan, C. Teuscher, D. B. Strukov, Front. Neurosci. 2015, 9, 488.
[178] Y. V. Pershin, M. Di Ventra, Neural Netw. 2010, 23, 881.
[179] L. Gao, I.-T. Wang, P.-Y. Chen, S. Vrudhula, J.-S. Seo, Y. Cao, T.-H. Hou, S. Yu, Nanotechnology 2015, 26, 455204.
[180] P. M. Sheridan, C. Du, W. D. Lu, IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2327.
[181] Y. Zhang, X. Wang, E. G. Friedman, IEEE Trans. Circuits Syst. 2018, 65, 677.
[182] S. Choi, J. H. Shin, J. Lee, P. Sheridan, W. D. Lu, Nano Lett. 2017, 17, 3113.
[183] W. Ma, L. Chen, C. Du, W. D. Lu, Appl. Phys. Lett. 2015, 107, 193101.
[184] C. Du, F. Cai, M. A. Zidan, W. Ma, S. H. Lee, W. D. Lu, Nat. Commun. 2017, 8, 2204.
[185] M. A. Zidan, Y. Jeong, W. D. Lu, IEEE Trans. Nanotechnol. 2017, 16, 721.
[186] J. Bill, R. Legenstein, Front. Neurosci. 2014, 8, 412.
[187] M. Hu, Y. Chen, J. J. Yang, Y. Wang, H. Li, IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2017, 36, 1353.
[188] T. Werner, E. Vianello, O. Bichler, D. Garbin, D. Cattaert, B. Yvert, B. De Salvo, L. Perniola, Front. Neurosci. 2016, 10, 474.
[189] Z. Xiao, J. Huang, Adv. Electron. Mater. 2016, 2, 1600100.
[190] S.-T. Han, L. Hu, X. Wang, Y. Zhou, Y.-J. Zeng, S. Ruan, C. Pan, Z. Peng, Adv. Sci. 2017, 4, 1600435.
[191] C. Pan, Y. Ji, N. Xiao, F. Hui, K. Tang, Y. Guo, X. Xie, F. M. Puglisi, L. Larcher, E. Miranda, L. Jiang, Y. Shi, I. Valov, P. C. McIntyre, R. Waser, M. Lanza, Adv. Funct. Mater. 2017, 27, 1604811.
[192] H. Tian, L. Zhao, X. Wang, Y.-W. Yeh, N. Yao, B. P. Rand, T.-L. Ren, ACS Nano 2017, 11, 12247.
[193] W. Quan-Tan, S. Tuo, Z. Xiao-Long, Z. Xu-Meng, W. Fa-Cai, C. Rong-Rong, L. Shi-Bing, L. Hang-Bing, L. Qi, L. Ming, Acta Phys. Sin. 2017, 66, 217304.
[194] R. Ge, X. Wu, M. Kim, J. Shi, S. Sonde, L. Tao, Y. Zhang, J. C. Lee, D. Akinwande, Nano Lett. 2018, 18, 434.
[195] Y. Wang, Z. Lv, L. Zhou, X. Chen, J. Chen, Y. Zhou, V. A. L. Roy, S.-T. Han, J. Mater. Chem. C 2018, 6, 1600.
[196] W. Yi, S. E. Savel'ev, G. Medeiros-Ribeiro, F. Miao, M.-X. Zhang, J. J. Yang, A. M. Bratkovsky, R. S. Williams, Nat. Commun. 2016, 7, 11142.
[197] Y. Kang, H. Ruan, R. O. Claus, J. Heremans, M. Orlowski, Nanoscale Res. Lett. 2016, 11, 179.
[198] G. C. Adam, B. D. Hoskins, M. Prezioso, F. Merrikh-Bayat, B. Chakrabarti, D. B. Strukov, IEEE Trans. Electron Devices 2017, 64, 312.
[199] C. Wu, T. W. Kim, H. Y. Choi, D. B. Strukov, J. J. Yang, Nat. Commun. 2017, 8, 752.
[200] C. Li, L. Han, H. Jiang, M.-H. Jang, P. Lin, Q. Wu, M. Barnell, J. J. Yang, H. L. Xin, Q. Xia, Nat. Commun. 2017, 8, 15666.
