Neurocomputing
journal homepage: www.elsevier.com/locate/neucom

Article history: Received 14 February 2018; Revised 9 October 2018; Accepted 12 November 2018; Available online 20 November 2018.
Communicated by Prof. Duan Shukai

Abstract: Recently, the memristor, as an emerging device, has been used as the basic synapse-realization component for circuit implementations of artificial neural networks. Following this trend, this paper studies the circuit realization of a memristor-based bidirectional associative memory (BAM) system that extends the memristor crossbar array structure to bidirectional synaptic weighting operation. The neuron cell in the BAM is represented by the digital circuit element JK flip-flop for hardware cost-saving. Meanwhile, a novel memristor programming strategy is considered and examined to ease on-chip learning. The design of the memristive BAM circuit system is carried out in the hardware description language Verilog-AMS and has been validated in a commercial circuit simulation environment via a case study. Test results show that a set of binary character patterns can be memorized and recalled successfully by the trained memristive BAM system.

Keywords: Memristor; Crossbar array; Bidirectional associative memory; Content-addressable memory; Verilog-AMS; Artificial neural network

https://doi.org/10.1016/j.neucom.2018.11.050
0925-2312/© 2018 Elsevier B.V. All rights reserved.
438 B. Li, Y. Zhao and G. Shi / Neurocomputing 330 (2019) 437–448
focused on the analysis of the dynamical properties of the BAM system by software simulation when memristors are involved (memristive BAM), such as stability, synchronization, and delay effects [29–33]. There is also interest in hardware implementation of the memristive BAM system. For instance, Tarkov [34] considered the application of a modified memristor-bridge synapse structure to both HNN and BAM. However, in the case of BAM, Tarkov had to use two sets of memristor synapse arrays to represent the weight matrix W and its transpose W^T for the bidirectional weighting operation; on-chip learning is also not considered in Tarkov's work. Another example is Zhao et al. [35], who propose and verify a programming method for memristor-array off-chip learning via a 6 × 6 memristive BAM case study.

As discussed above, the motivation of this study is to design memristive BAM circuits as a hardware implementation of BAM that reuses only one set of memristor crossbar synapse array, together with on-chip learning circuitry. Such a memristor-based synapse design is scalable, cost-effective, and applicable to other bidirectional ANN circuits. Another cost-saving design choice is that the BAM neuron circuit is realized with the digital circuit element JK flip-flop, in contrast to previous analog IC implementations [28,34]. Beyond these design details, we simulate and verify the whole memristive BAM system in the industry-standard mixed-signal circuit design language Verilog-AMS, rather than the SPICE used in previous work, to promote system-level design of memristor-based large-scale ANN circuits in which both digital and analog circuits are employed.¹

The rest of the paper is organized as follows: Section 2 presents a mathematical description of BAM. Section 3 is devoted to the extension of the memristor crossbar array for bidirectional synaptic weighting operation. Section 4 explains the on-chip learning algorithm and the corresponding circuits. In Section 5 we describe the system-level circuit implementation details of the memristive BAM. Circuit simulation results of the proposed memristive BAM system for pattern recognition in the Verilog-AMS environment are discussed in Section 6. The paper is concluded in Section 7.

2. Brief mathematical description of BAM

BAM is a two-layer neural network consisting of the X-layer and the Y-layer, as illustrated in Fig. 1. The state of each neuron unit takes a bipolar value, −1 or 1. In vector-matrix notation, the m neurons x = (x_1, ..., x_m)^T of the X-layer interact with the n neurons y = (y_1, ..., y_n)^T of the Y-layer via a fully connected weight matrix W = (w_{ij})_{m×n} with m rows and n columns; each w_{ij} is a real number denoting the (i, j)th entry of the m × n matrix, and T stands for vector/matrix transpose.

Fig. 1. Illustration of the BAM structure.

When an arbitrary vector x = (x_1, ..., x_m)^T or y = (y_1, ..., y_n)^T is input to the corresponding layer, the BAM runs the recall process between the two layers until a pair of stationary vectors is reached, by the following iterations:

x_i(t + 1) = f( \sum_{j=1}^{n} w_{ij} y_j(t), x_i(t) ),  for i = 1, ..., m    (1a)

y_j(t + 1) = f( \sum_{i=1}^{m} w_{ij} x_i(t), y_j(t) ),  for j = 1, ..., n    (1b)

where t denotes the iteration step and f(·, ·) is the bivariate activation function defined by

f(σ, ζ) = { 1 if σ > 0;  ζ if σ = 0;  −1 if σ < 0 }    (2)

where σ denotes the received sum of signals and ζ denotes the current binary state of the neuron before activation.

The main goal of BAM is to find a proper connection weight matrix W = (w_{ij})_{m×n} so that all training pattern pairs {x^(k), y^(k)}_{k=1}^{P} in a given set of P pairs become stationary points of the BAM, where x^(k) = (x_1^(k), ..., x_m^(k))^T and y^(k) = (y_1^(k), ..., y_n^(k))^T. Kosko proposed the outer-product rule in his original BAM paper [18], which simply sums the outer products of all the training patterns:

W = \sum_{k=1}^{P} x^(k) (y^(k))^T    (3)

However, as pointed out in [19,21,22], the outer-product rule is not optimal and does not work well when the training set contains a large number of patterns. Wang et al. [19] proposed and proved a learning method based on cyclic utilization of the training pattern pairs.

3. An extended memristor crossbar synapse

3.1. Existing memristive crossbar synapse

The crossbar structure has been adopted by many authors [4,5,10,36–39] for the prominent advantage of its scalability in constructing large synapse arrays. The key point in memristor crossbar synapse design is how to represent the weight value w_{ij} by the memristor conductance (G_{M,ij}) when w_{ij} is negative. The most frequently adopted idea is to incorporate signal subtraction using two memristors per synapse weight (w_{ij} ∝ G^+_{M,ij} − G^−_{M,ij}). When arranged in a two-dimensional array, two alternative forms can perform the signal subtraction: one is noted as pre-differentiation (Fig. 2(a)), used in [36,38], and the other as post-differentiation (Fig. 2(b)), employed in [4,5,39].

Most recently, Zhang et al. [37] replaced the plus memristor G^+_{M,ij} with a constant resistor G_{RS} = 1/R_S for each synapse (w_{ij} ∝ G_{RS} − G^−_{M,ij}). For a synapse array, the G_{RS} term array can be reduced to a single column with the aid of an analog adder, shown in Fig. 2(c). Hereby R_S is noted as the summing resistor. The output

¹ The Verilog-AMS circuit design code of this memristive BAM project will be open-sourced under the free BSD license at https://github.com/PhdBoLi/BAM.
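As an illustration only (this is a NumPy sketch of Eqs. (1)–(3), not the paper's Verilog-AMS implementation; all function names are ours), the recall iterations and Kosko's outer-product rule can be written as:

```python
import numpy as np

def activation(sigma, zeta):
    """Bivariate activation f(sigma, zeta) of Eq. (2):
    +1 if sigma > 0, -1 if sigma < 0, keep previous state zeta if sigma == 0."""
    return np.where(sigma > 0, 1, np.where(sigma < 0, -1, zeta))

def outer_product_weights(X, Y):
    """Kosko's outer-product rule, Eq. (3): W = sum_k x^(k) (y^(k))^T.
    X is P x m and Y is P x n, one bipolar pattern per row."""
    return X.T @ Y

def bam_recall(W, x, y, max_iter=100):
    """Iterate Eqs. (1a)-(1b) until the pattern pair is stationary."""
    for _ in range(max_iter):
        x_new = activation(W @ y, x)        # Eq. (1a): x_i = f(sum_j w_ij y_j, x_i)
        y_new = activation(W.T @ x_new, y)  # Eq. (1b): y_j = f(sum_i w_ij x_i, y_j)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y

# Store one pattern pair and recall it from a noisy x.
X = np.array([[1, -1, 1, -1]])
Y = np.array([[1, 1, -1]])
W = outer_product_weights(X, Y)
x_noisy = np.array([1, -1, 1, 1])           # one flipped bit
x_rec, y_rec = bam_recall(W, x_noisy, np.ones(3, dtype=int))
```

With a single stored pair the noisy input converges back to the stored pair in one iteration.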
Fig. 2. Existing memristor crossbar synapse circuits: (a) crossbar memristor synapse array with pre-differentiation [36]; (b) crossbar memristor synapse array with post-differentiation [4]; (c) crossbar synapse array with amplifier-based biasing [37].
voltage (V_F) of the differential-input amplifier (AMP) with a feedback resistor (R_f) is expressed as

V_F = −(R_f / R_S) \sum_{i=1}^{m} V_{Ii}    (4)

V_F can then be converted to a current signal via a resistor (noted as the biasing resistor R_b). Provided all R_b and R_f have the same value (R_b = R_f), the output voltage (V_{Oj}) of the jth column is

V_{Oj} = −R_0 ( \sum_{i=1}^{m} G^−_{M,ij} V_{Ii} + V_F / R_b ) = R_0 \sum_{i=1}^{m} ( G_{RS} − G^−_{M,ij} ) V_{Ii}    (5)

where V_{Ii} stands for the ith input voltage. As a result, a total of m × n memristors are used, unlike the pre-differentiation and post-differentiation designs, where two copies of the crossbar were needed.

3.2. Proposed memristive synapse for bidirectional weighting operation

The memristor crossbar synapses mentioned above focus merely on single-direction weighting operation (multiply-add) from one layer to another. This paper extends the design idea of Zhang et al. [37] to the bidirectional situation, so that an m × n BAM neural network requires only one memristor crossbar array to perform the W^T x and Wy operations illustrated in Fig. 1. The neural signal propagation directions are controlled by switches, which are switched between two terminals (1 or 2) by time-synchronized signals, as illustrated in Fig. 3(a). Here constant resistors are employed to replace the minus term G^−_{M,ij}, rather than G^+_{M,ij}, for the convenience of on-chip learning: when a learned weight variation Δw_{ij} is positive, we simply increase the corresponding memristor conductance G^+_{M,ij} with a positive voltage. The constant resistors connected to the X-layer are noted as R_CX, and their counterparts as R_CY.

The case of the synaptic operation direction from the Y-layer to the X-layer is obtained with all switches contacting node 1, as shown in Fig. 3(b), which performs as follows. The plus term \sum_{j=1}^{n} G^+_{M,ij} V_{Y,j} is realized by connecting the input voltages to a row of n memristors. The minus term factor −\sum_{j=1}^{n} V_{Y,j} is realized by the AMP with a matched feedback resistor (R_f = R_CY = R_CX). In this switch configuration, R_CY (resp. R_CX) functions as the summing (resp. biasing) resistor R_S (resp. R_b) of Zhang's design [37].

The working principle of the reversed propagation of the signals from the X-layer to the Y-layer is analogous, with all switches connected to node 2, whereas the roles of R_CX and R_CY interchange. The proposed design thus reuses the inverting AMP and the two arrays of resistors for voltage summation and current biasing in both signal propagation directions. The proposed extended memristor array structure is compared with those mentioned above in Table 1.

4. On-chip learning circuits for the proposed BAM system

4.1. A learning algorithm for memristive BAM

The weight variation Δw_{ij} during the BAM learning process is mapped to the variation of the memconductance G^+_{M,ij} according to the following on-chip training steps, altered from Wang's algorithm [19]. To facilitate the statement, we introduce the Hadamard product of two matrices of comparable dimension, defined by

A ∘ B = (a_{ij} · b_{ij})_{m×n}

where A = (a_{ij})_{m×n} and B = (b_{ij})_{m×n}.

1. Initialize all memristors with random memconductances G^+_{M,ij}.
2. For each input pattern pair {x^(k), y^(k)}_{k=1}^{P} in the training set, encoded in voltage form as x^(k) = (V_{X,1}, ..., V_{X,m})^T and y^(k) = (V_{Y,1}, ..., V_{Y,n})^T:
(a) Evaluate the judging condition ξ_i^(t) in the X-layer: apply the input pattern y^(t) to the crossbar synapse circuit to produce the vector W y^(t) and calculate the Hadamard product of W y^(t) and x^(t). Each component of the Hadamard product is ξ_i^(t) := [W y^(t) ∘ x^(t)]_i. Then compare ξ_i^(t) with the specification D_x.
(b) Update the weight values of the corresponding synapses: if the component ξ_i^(t) < D_x,

G^{(t+1)}_{M,ij} = G^{(t)}_{M,ij} + q V^{(t)}_{X,i} V^{(t)}_{Y,j},  for ξ_i^(t) < D_x, j = 1, ..., n    (8)

Otherwise, the related memristor synapse values remain unchanged.
(c) Evaluate the judging condition η_j^(t) in the Y-layer: the process is similar to sub-step (a), with each component η_j^(t) := [W^T x^(t) ∘ y^(t)]_j.
(d) Update the weight values of the corresponding synapses: the updating method is similar to sub-step (b),

G^{(t+1)}_{M,ij} = G^{(t)}_{M,ij} + q V^{(t)}_{X,i} V^{(t)}_{Y,j},  for η_j^(t) < D_y, i = 1, ..., m    (9)

4.2.1. Arbiter circuit

The arbiter circuit verifies the inequalities of the proposed on-chip learning algorithm, i.e., ξ_i < D_x or η_j < D_y. Assertion of an inequality fires an enabling signal for updating the synapse weights. Fig. 4 shows the schematic of an arbiter circuit. ξ_i or η_j, produced by the analog multiplier, is the voltage product of a neuron state x_i (or y_j) and the corresponding Sum signal representing [W y]_i (or [W^T x]_j). A reference voltage V_ref acts as the threshold value D_x or D_y. The voltage comparator output is passed to an encoder, which converts the voltage to a 2-bit digital code to activate an SR flip-flop. The flip-flop, controlled by the Arb_Clk signal, outputs the signal Enable_Lrn to enable the synapse weight update circuit.
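The on-chip training loop of Section 4.1 can be sketched in NumPy. This is an abstraction under our own assumptions: the signed weights stand in directly for the conductance terms, the voltages are the bipolar patterns themselves, and the margin and step parameters (Dx, Dy, q, epochs) are illustrative values, not the paper's circuit parameters.

```python
import numpy as np

def train_bam(patterns, m, n, Dx=1.0, Dy=1.0, q=0.1, epochs=100, seed=0):
    """Sketch of the on-chip training steps of Section 4.1 (after Wang [19]).
    patterns is a list of (x, y) bipolar pattern pairs."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.1, 0.1, size=(m, n))    # step 1: random initial weights
    for _ in range(epochs):
        updated = False
        for x, y in patterns:
            xi = (W @ y) * x                   # judging condition xi_i = [Wy o x]_i
            for i in np.flatnonzero(xi < Dx):  # step 2(b): row update, Eq. (8)
                W[i, :] += q * x[i] * y
                updated = True
            eta = (W.T @ x) * y                # judging condition eta_j = [W^T x o y]_j
            for j in np.flatnonzero(eta < Dy): # step 2(d): column update, Eq. (9)
                W[:, j] += q * x * y[j]
                updated = True
        if not updated:                        # every pair satisfies both margins
            break
    return W

# Two orthogonal training pairs converge within a few epochs.
pairs = [(np.array([1, 1, -1, -1]), np.array([1, -1, 1, -1])),
         (np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1]))]
W = train_bam(pairs, m=4, n=4)
```

Each triggered update strictly increases the deficient component, so for consistent pattern sets the loop terminates once all judging conditions reach their margins.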
Fig. 3. (a) Proposed memristor synapse array in crossbar. (b) Illustration of the crossbar operation as the signals from the Y-layer are summed by one neuron in the X-layer.
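The signal math behind the shared array of Fig. 3 can be sketched as follows. This is our own illustration, assuming (as in Section 3.2) a single conductance matrix G read row-wise in one direction and column-wise in the other, with a single constant-resistor conductance Gc playing the role of the R_CX/R_CY branches; the values are arbitrary.

```python
import numpy as np

def forward_y_to_x(G, Gc, v_y):
    """Y-to-X pass (switches at node 1): each X-row sums its n memristor
    currents while the constant-resistor branch subtracts sum(v_y),
    yielding [W y]_i with effective weight w_ij = G_ij - Gc (cf. Eq. (5))."""
    return G @ v_y - Gc * np.sum(v_y)

def forward_x_to_y(G, Gc, v_x):
    """X-to-Y pass (switches at node 2): the same array is read column-wise,
    yielding [W^T x]_j with the same effective weights."""
    return G.T @ v_x - Gc * np.sum(v_x)

# Both directions share one conductance matrix G.
rng = np.random.default_rng(1)
G = rng.uniform(0.2, 1.0, size=(3, 4))  # memconductances G+_{M,ij} (arbitrary units)
Gc = 0.6                                # constant-resistor conductance
W = G - Gc                              # effective signed weight matrix
v_y = np.array([1.0, -1.0, 1.0, -1.0])
v_x = np.array([1.0, 1.0, -1.0])
```

Because the constant term factors out of the sum, one resistor array plus the inverting AMP suffices to realize signed weights in both directions.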
Table 1
Comparison of memristive crossbar structures.
Array structure | Cost for m × n synapse array | Signal direction | Single weight value w_{ij} ∝

Fig. 4. Arbiter circuit that fires an enable signal for synapse weight updating.

Table 2
Using a JK flip-flop with encoders for neuron activation.

(a) Activation table
\sum_{j=1}^{n} w_{ij} y_j | x_i(t) | x_i(t + 1)
= 0 | * | x_i(t)
< 0 | * | −1
> 0 | * | 1

(b) JK flip-flop truth table and output encoder
J | K | Q_n | Q_{n+1} | Signal (V)
0 | 0 | * | Q_n | Keep
0 | 1 | * | 0 | −1
1 | 0 | * | 1 | 1

(c) Input encoder to JK flip-flop
Sum/Input (V) | J | K | State
= 0 | 0 | 0 | –
< 0 | 0 | 1 | Reset
> 0 | 1 | 0 | Set

Fig. 7. Neuron circuits realized by a JK flip-flop with voltage encoders.

Fig. 8. Mode switcher design: (a) switch control details at the X-layer; (b) control timing for switch S_2 in the learning mode; (c) control timing for switch S_2 in the recall mode.
• Input encoder: we refer to \sum_{j=1}^{n} w_{ij} V_{Y,j} as the signal Sum. Depending on the sign of the Sum signal, Sum is converted to the digital inputs of the JK flip-flop by the input encoder defined in Table 2(c). The encoder follows naturally from comparing Tables 2(a) and (b).
• Output encoder: simply map the digital output of the JK flip-flop by 1 to 1V, 0 to −1V, or no change.

The input state (J, K) = (1, 1) of the JK flip-flop is not used. Fig. 7 shows the circuit realization of a BAM neuron consisting of a JK flip-flop and the input and output voltage encoders. The BAM recall equations given by (1) are then translated into the following voltage-based signal processing equations:

V_{X,i}(t + 1) = f( \sum_{j=1}^{n} w_{ij} V_{Y,j}(t), V_{X,i}(t) )    (10a)

V_{Y,j}(t + 1) = f( \sum_{i=1}^{m} w_{ij} V_{X,i}(t), V_{Y,j}(t) )    (10b)

A voltage signal Sum representing \sum_{j=1}^{n} w_{ij} V_{Y,j}(t) or \sum_{i=1}^{m} w_{ij} V_{X,i}(t) is input to the voltage encoder port V_in before entering the JK flip-flop.

In the learning phase, a pair of binary patterns is presented to the two layers of the BAM. The voltage signals are mapped to the binary inputs of the JK flip-flops by the input encoder defined in Table 2(c). Simply speaking, a positive voltage sets and a negative voltage resets the JK flip-flop.

5.2. Mode switchers

The two working modes, combined with neural signal propagation direction control, are realized by the configuration of two sets of CMOS switches (mode switch S1, direction switch S2). It is adequate to explain the switch operation with one signal path at the X-layer, as the Y-layer situation is the exact counterpart.

As shown in Fig. 8(a), two switches S1_X and S2_X are introduced at the X-layer for the functioning of one neuron. Switch S1_X controls the working mode and is configured by the mode control signal Mode_Ctrl. Switch S2_X controls the signal propagation direction (layer X to Y, or layer Y to X) and is configured by the direction control signal S2_Ctrl. Switch S2_X has three states: by switching to node 1 or 2 it selects the signal propagation direction, and by switching to node n, i.e., neutral, signal propagation is disabled. The neutral state is necessary when memristors are being programmed.

5.2.1. Learning mode

By switching S1_X to node 4 (the arbiter module), the system enters the learning mode. The learning mode is divided into four clock phases:
• In the X-layer Read (XR) phase the inequality on D_x is judged;
• In the X-layer Write (XW) phase the synapse weight update given by (8) is executed;
• In the Y-layer Read (YR) phase the inequality on D_y is judged;
• In the Y-layer Write (YW) phase the synapse weight update given by (9) is executed.

In the XR phase, signals are transferred from the Y-layer to the X-layer through the synapse network. The produced sum signal \sum_{j=1}^{n} w_{ij} V_{Y,j} (i = 1, ..., m) is passed to the arbiter at the X-layer (with switch S2_X switched to node 1). The arbiter executes the computation and fires the Enable_Lrn signal to trigger the synapse weight update module to update the memristance if the threshold condition is satisfied.

In the XW phase, switch S2_X is switched to the neutral state by setting the control signal S2_Ctrl to 0V. During this period, the learning enabling signal Lrn_Ctrl is set high, and the weight update module generates a proper voltage to adjust the corresponding row (column) of memristances for a certain time length to emulate the update of Eq. (8).
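The JK-flip-flop neuron of Fig. 7 can be sketched in Python, directly transcribing Tables 2(a)–(c) and Eq. (10); the function names are ours, and the analog voltages are idealized numbers:

```python
def input_encoder(sum_v):
    """Table 2(c): map the analog Sum voltage to the (J, K) inputs."""
    if sum_v > 0:
        return (1, 0)    # Set
    if sum_v < 0:
        return (0, 1)    # Reset
    return (0, 0)        # keep state; (1, 1) is never used

def jk_step(j, k, q):
    """Table 2(b): JK flip-flop next state (the (1, 1) toggle input is unused)."""
    if (j, k) == (0, 0):
        return q         # keep previous state
    if (j, k) == (0, 1):
        return 0         # reset
    return 1             # (1, 0): set

def neuron_update(sum_v, state_v):
    """One neuron evaluation of Eq. (10); state_v is the current +/-1V output."""
    q = 1 if state_v > 0 else 0        # map the output voltage back to the stored bit
    q_next = jk_step(*input_encoder(sum_v), q)
    return 1 if q_next == 1 else -1    # output encoder: 1 -> 1V, 0 -> -1V
```

For example, a negative Sum resets the neuron to −1V, a positive Sum sets it to 1V, and a zero Sum keeps the previous state, exactly the activation of Eq. (2).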
6. Experimental results
The basic linear ion-drift memristor model [2] and the synaptic model with threshold voltages [42] are both used in the experiments, coupled with the unified window-function expression proposed by Wen et al. [43], which can produce many kinds of window functions, such as the Biolek window function [44], by setting different parameter values. Fig. 9(a) shows the characterization of the linear memristor model under a sinusoidal voltage stimulus with the following parameters: Ron = 100 Ω, Roff = 16000 Ω, and D = 10 nm. The pinched hysteresis loop of the corresponding voltage-current characteristics, shown in the inset, is identified as the fingerprint of a memristor [45]. The threshold model characterized in Fig. 9(b) uses the parameters of the AIST-based memristor listed in Table I of reference [42]. As shown in Fig. 9(b), the memristance changes only if the applied voltage is above the positive threshold voltage (+0.37V) or below the negative threshold voltage (−0.19V). Fig. 9(c) shows the memristor current waveform in response to a periodic pulse voltage source applied to the memristor. The memristance is tuned over the duration of each pulse, yet the variation of memristance per pulse is not the same, because it depends on the current value of the memristance. This characteristic is acceptable for the proposed BAM learning strategy.

Fig. 9. Characterization of the memristor model: (a) response of the basic memristor model; (b) response of the threshold memristor model; (c) memristance tuned by pulse voltage stimulus.

6.2. Verification of neuron circuit

Fig. 10 shows the simulated curves of a neuron circuit illustrated in Fig. 7, as a verification of the activation function during the recall mode. The clock signal controlling the JK flip-flop is shown at the bottom, with a clock period of 20 ns. Note that the logical values 0 and 1 of the JK flip-flop ports correspond to electrical voltage levels 0V and 1V, respectively.

In the beginning period (20 ns), the signal Sum representing \sum_{j=1}^{n} w_{ij} V_{Y,j}(t) or \sum_{i=1}^{m} w_{ij} V_{X,i}(t) is below 0V (rising from −2V to 0V); the (J, K) ports thus receive the code (0,1). The output of the JK flip-flop is Q = 0, i.e., a voltage of −1V. During the second period, from 20 ns to 40 ns, the Sum signal remains at 0V. The (J, K) ports thus receive the code (0,0); in this case the output Q keeps
Fig. 10. Verification of neuron activation function via a case in the recall mode.
the previous state, i.e., the voltage level −1V. In the third period, from 40 ns to 60 ns, Sum is above 0V (rising from 0V to 2V). The (J, K) ports then receive the code (1,0), and the output Q changes to 1V at the positive edge of JK_Clk. In the fourth period (60 ns to 80 ns), Sum returns to 0V, so the output Q remains at 1V. The sampled test case above demonstrates that the digital JK flip-flop with the voltage encoders correctly implements the neuron activation expressed in (10).
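The memristor behaviour described at the start of this section can be sketched with a simple Euler simulation. This is our own illustrative model, not the paper's Verilog-AMS code: Ron, Roff, D, and the two thresholds follow the text, while the ion mobility mu, the initial state w0, the time step, and the hard boundary window are assumptions.

```python
import numpy as np

def simulate_memristor(v, dt, Ron=100.0, Roff=16000.0, D=10e-9,
                       mu=1e-14, v_th_pos=0.37, v_th_neg=-0.19, w0=0.5):
    """Euler simulation of a linear ion-drift memristor [2] with the
    threshold behaviour of [42]: the normalized state w in [0, 1] moves
    only when the applied voltage exceeds a threshold."""
    w = w0
    memristance = []
    for vk in v:
        M = Ron * w + Roff * (1.0 - w)       # linear-drift memristance
        if vk > v_th_pos or vk < v_th_neg:   # sub-threshold drive: no change
            i = vk / M
            w += mu * Ron / D**2 * i * dt    # dw/dt = mu * Ron / D^2 * i(t)
            w = min(max(w, 0.0), 1.0)        # hard window at the boundaries
        memristance.append(M)
    return np.array(memristance)

# Supra-threshold pulses tune the device; sub-threshold drive leaves it unchanged.
M_pulsed = simulate_memristor(np.full(200, 1.0), dt=1e-3)
M_idle = simulate_memristor(np.full(200, 0.2), dt=1e-3)
```

As in Fig. 9(c), the per-step change of memristance is not constant: it depends on the instantaneous memristance through the current.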
Fig. 11. The proposed memristive BAM under learning mode: (a) pattern pairs for learning; (b) training process of the memristor-based synapse.

Fig. 12. Recall experiment: (a) recall from X-layer to Y-layer with 3 bits flipped in the X-layer patterns (about 10% noise); (b) recall from Y-layer to X-layer with 3 bits flipped in the Y-layer patterns (about 10% noise); (c) recall from Y-layer to X-layer with 5 bits flipped in the Y-layer patterns (about 20% noise); (d) recall from Y-layer to X-layer with 7 bits flipped in the Y-layer patterns (about 30% noise).
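The spirit of the Fig. 12 recall experiment can be reproduced in a small NumPy sketch. The patterns below are simple orthogonal 24-bit stand-ins of our own choosing (not the paper's character set), and the weights use the outer-product rule of Eq. (3) rather than the trained conductances:

```python
import numpy as np

def recall_from_x(W, x, iters=10):
    """BAM recall started from the X-layer (Eq. (1)); ties keep the old state."""
    y = np.ones(W.shape[1], dtype=int)
    for _ in range(iters):
        s = W.T @ x
        y = np.where(s > 0, 1, np.where(s < 0, -1, y))
        s = W @ y
        x = np.where(s > 0, 1, np.where(s < 0, -1, x))
    return x, y

# Two orthogonal 24-bit pattern pairs.
x0 = np.array([1] * 12 + [-1] * 12)
x1 = np.array([1, -1] * 12)
y0 = np.array([1, -1] * 12)
y1 = np.array([1, 1, -1, -1] * 6)
W = np.outer(x0, y0) + np.outer(x1, y1)  # outer-product weights, Eq. (3)

x_noisy = x0.copy()
x_noisy[[0, 5, 17]] *= -1                # 3 flipped bits, roughly the 10% noise case
x_rec, y_rec = recall_from_x(W, x_noisy)
```

With orthogonal stored pairs the 3-bit corruption is absorbed in a single pass, matching the behaviour reported for the low-noise cases of Fig. 12.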
above the memristor threshold voltage.

W_G = 10^{-3} ×
⎡ −31.78  −17.74  −17.74  −35.04  −31.47    3.49 ⎤
⎢ −33.49  −17.68  −17.68  −36.04    2.16  −34.95 ⎥
⎢ −19.56  −19.56  −19.56    0.27  −19.56  −35.04 ⎥
⎢   0.27    0.27    0.27  −19.56    0.27  −17.74 ⎥
⎢   0.27    0.27    0.27  −19.56    0.27  −17.74 ⎥
⎣   1.90  −17.98  −17.98  −35.01  −33.71  −33.97 ⎦    (11)

7. Conclusion

In summary, we have incorporated the extended memristor crossbar circuit into a memristive BAM neural network with an on-chip learning feature. The redesigned crossbar circuit reuses a single memristor array for the bidirectional synaptic weighting operation, with the benefits of modularity and cost-effectiveness, which can contribute to hardware implementations of memristive artificial neural networks. Meanwhile, a novel programming strategy is adopted by the on-chip learning circuit to ease the tuning of memristor-based synapse weights. The neuron activation function is realized by the digital circuit element JK flip-flop. A memristive BAM neural network of size 24 × 24 is used to demonstrate the associative memory capability of the proposed framework under the Verilog-AMS design methodology. The simulation results show that meaningful patterns can be successfully recalled thanks to the association relationships established by the on-chip learning circuits. We expect the Verilog-AMS based methodology to be a good candidate for the design of memristor-based artificial neural network circuits.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (NSFC) under grants No. 61176129 and No. 61474145.

References

[1] L.O. Chua, Memristor—the missing circuit element, IEEE Trans. Circ. Theory 18 (1971) 507–519.
[2] D.B. Strukov, G.S. Snider, D.R. Stewart, R.S. Williams, The missing memristor found, Nature 453 (7191) (2008) 80–83.
[3] J.S. Hyun, C. Ting, E. Idongesit, B.B. Bhadviya, P. Mazumder, L. Wei, Nanoscale memristor device as synapse in neuromorphic systems, Nano Lett. 10 (4) (2010) 1297–1301.
[4] A. Fabien, Z. Elham, D.B. Strukov, Pattern classification by memristive crossbar circuits using ex situ and in situ training, Nat. Commun. 25 (10) (2013) 1864–1878.
[5] M. Hu, H. Li, Y. Chen, Q. Wu, G.S. Rose, R.W. Linderman, Memristor crossbar-based neuromorphic computing system: a case study, IEEE Trans. Neural Netw. Learn. Syst. 25 (10) (2014) 1864–1878.
[6] G. Zhang, Y. Shen, Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control, IEEE Trans. Neural Netw. Learn. Syst. 26 (7) (2015) 1431–1441.
[7] X. Hu, S. Duan, G. Chen, L. Chen, Modeling affections with memristor-based associative memory neural networks, Neurocomputing 223 (2017) 129–137.
[8] S. Wen, S. Xiao, Z. Yan, Z. Zeng, T. Huang, Adjusting learning rate of memristor-based multilayer neural networks via fuzzy method, IEEE Trans. Comput.-Aided Design Integr. Circ. Syst. (2018a) 1.
[9] S. Wen, R. Hu, Y. Yang, T. Huang, Z. Zeng, Y. Song, Memristor-based echo state network with online least mean square, IEEE Trans. Syst. Man Cybern. (2018b) 1–10.
[10] P. Yao, H. Wu, B. Gao, S.B. Eryilmaz, X. Huang, W. Zhang, Q. Zhang, N. Deng, L. Shi, H.S.P. Wong, H. Qian, Face classification using electronic synapses, Nat. Commun. 8 (15199) (2017) 1–8.
[11] D. Soudry, D.D. Castro, A. Gal, A. Kolodny, S. Kvatinsky, Memristor-based multilayer neural networks with online gradient descent training, IEEE Trans. Neural Netw. Learn. Syst. 26 (10) (2015) 2408–2421.
[12] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, Germany, 1988.
[13] K. Pagiamtis, A. Sheikholeslami, Content-addressable memory (CAM) circuits and architectures: a tutorial and survey, IEEE J. Solid-State Circ. 41 (3) (2006) 712–727.
[14] S.G. Hu, Y. Liu, Z. Liu, Associative memory realized by a reconfigurable memristive Hopfield neural network, Nat. Commun. 6 (7522) (2015) 1–8.
[15] J. Yang, L.D. Wang, Y. Wang, T. Guo, A novel memristive Hopfield neural network with application in associative memory, Neurocomputing 227 (2017) 142–148.
[16] H. Kim, M.P. Sah, C. Yang, T. Roska, L.O. Chua, Memristor bridge synapses, Proc. IEEE 100 (6) (2012) 2061–2070.
[17] S.P. Adhikari, H. Kim, R.K. Budhathoki, C. Yang, L.O. Chua, A circuit-based learning architecture for multilayer neural networks with memristor bridge synapses, IEEE Trans. Circ. Syst. I 62 (1) (2015) 215–223.
[18] B. Kosko, Bidirectional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1) (1988) 49–60.
[19] T. Wang, X. Zhuang, X. Xing, Designing bidirectional associative memories with optimal stability, IEEE Trans. Syst. Man Cybern. 24 (5) (1994) 778–790.
[20] C.S. Leung, L.W. Chan, E. Lai, Stability, capacity, and statistical dynamics of second-order bidirectional associative memory, IEEE Trans. Syst. Man Cybern. 25 (10) (1995) 1414–1424.
[21] G. Shi, Genetic approach to the design of bidirectional associative memory, Int. J. Syst. Sci. 28 (2) (1997) 133–140.
[22] G. Shi, Optimal bidirectional associative memories, Int. J. Syst. Sci. 31 (6) (2000) 751–757.
[23] B. Fa, Y. Yin, C. Fu, The bidirectional associative memory neural network based on fault tree and its application to inverter's fault diagnosis, in: Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 2009, pp. 209–213.
[24] L. Chen, Z. Ling, L. Liu, L. Dai, Voice conversion using deep neural networks with layer-wise generative training, IEEE/ACM Trans. Audio Speech Lang. Process. 22 (12) (2014) 1859–1872.
[25] L. Chen, T. Raitio, C. Valentini-Botinhao, Z. Ling, J. Yamagishi, A deep generative architecture for postfiltering in statistical parametric speech synthesis, IEEE/ACM Trans. Audio Speech Lang. Process. 23 (11) (2015) 2003–2014.
[26] I. Nejadgholi, S. SeyyedSalehi, S. Chartier, A brain-inspired method of facial expression generation using chaotic feature extracting bidirectional associative memory, Neural Process. Lett. 46 (3) (2017) 943–960.
[27] A. Pedrycz, Bidirectional and multidirectional associative memories as models in linkage analysis in data analytics: conceptual and algorithmic developments, Knowl. Based Syst. 142 (2018) 160–169.
[28] B. Linares-Barranco, E. Sanchez-Sinencio, A. Rodriguez-Vazquez, J.L. Huertas, A CMOS analog adaptive BAM with on-chip learning and weight refreshing, IEEE Trans. Neural Netw. 4 (3) (1993) 445–455.
[29] K. Mathiyalagan, J.H. Park, R. Sakthivel, Synchronization for delayed memristive BAM neural networks using impulsive control with random nonlinearities, Appl. Math. Comput. 259 (2015) 967–979.
[30] M.S. Ali, R. Saravanakumar, J. Cao, New passivity criteria for memristor-based neutral-type stochastic BAM neural networks with mixed time-varying delays, Neurocomputing 171 (2016) 1533–1547.
[31] R. Chinnathambi, F.A. Rihan, L. Shanmugam, R. Rajan, P. Muthukumar, Synchronization of memristor-based delayed BAM neural networks with fractional-order derivatives, Complexity 21 (S2) (2016) 412–426.
[32] H. Li, H. Jiang, C. Hu, Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays, Neural Netw. 75 (2016) 97–109.
[33] J. Xiao, S. Zhong, Y. Li, F. Xu, Finite-time Mittag-Leffler synchronization of fractional-order memristive BAM neural networks with time delays, Neurocomputing 219 (2017) 431–439.
[34] M.S. Tarkov, Oscillatory neural associative memories with synapses based on memristor bridges, Opt. Mem. Neural Netw. 25 (4) (2016) 219–227.
[35] Y. Zhao, B. Li, G. Shi, A current-feedback method for programming memristor array in bidirectional associative memory, in: Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 2017, pp. 747–751.
[36] B. Li, Y. Wang, Y. Wang, Y. Chen, H. Yang, Training itself: mixed-signal training acceleration for memristor-based neural network, in: Proceedings of the Nineteenth Asia and South Pacific Design Automation Conference (ASP-DAC), Singapore, 2014, pp. 361–366.
[37] Y. Zhang, X. Wang, E.G. Friedman, Memristor-based circuit design for multilayer neural networks, IEEE Trans. Circ. Syst. I 65 (2) (2018) 677–686.
[38] R. Hasan, T.M. Taha, C. Yakopcic, A fast training method for memristor crossbar based multi-layer neural networks, Analog Integr. Circ. Sig. Process. 93 (3) (2017) 443–454.
[39] M. Prezioso, F. Merrikh-Bayat, B.D. Hoskins, G.C. Adam, K.K. Likharev, D.B. Strukov, Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521 (7550) (2015) 61–64.
[40] F. Alibart, L. Gao, B.D. Hoskins, D.B. Strukov, High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm, Nanotechnology 23 (7) (2012) 075201.
[41] Cadence Design Systems, Inc., Behavioral Modeling with Verilog-AMS, 2017.
[42] Y. Zhang, X. Wang, Y. Li, E.G. Friedman, Memristive model for synaptic circuits, IEEE Trans. Circ. Syst. II 64 (7) (2017) 767–771.
[43] S. Wen, X. Xie, Z. Yan, T. Huang, Z. Zeng, General memristor with applications in multilayer neural networks, Neural Netw. 103 (2018) 142–149.
[44] Z. Biolek, D. Biolek, V. Biolkova, SPICE model of memristor with nonlinear dopant drift, Radioengineering 18 (2) (2009) 210–214.
[45] M.D. Ventra, Y.V. Pershin, L.O. Chua, Circuit elements with memory: memristors, memcapacitors and meminductors, Proc. IEEE 97 (10) (2009) 1715–1716.

Bo Li received the B.E. degree in electronic science and technology from Harbin University of Science and Technology, China, in 2010 and the M.S. degree in physics from Shanghai University, China, in 2013. He is currently pursuing the Ph.D. degree under the supervision of Prof. Guoyong Shi in the School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China. His research interests cover memristors, memristor-based artificial neural network systems and neuromorphic circuits, as well as design automation for memristor-based artificial neural network circuits.

Yonglei Zhao received his B.E. degree from the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China, in 2016. He is now a graduate student in the Department of Micro/Nano Electronics, Shanghai Jiao Tong University, Shanghai, China. His research areas include memristive neural networks and pattern recognition.

Guoyong Shi has been a full professor in the Department of Micro/Nano Electronics, School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China, since 2005. He received the Ph.D. degree in Electrical Engineering from Washington State University in Pullman, USA. He worked as a post-doctor in the Department of Electrical Engineering, University of Washington in Seattle, USA, from 2002 to 2005. He is currently interested in research on design automation of mixed-signal circuits and neuromorphic circuits. He was a co-recipient of the Donald Pederson Best Paper Award in 2007. He is co-author of the book "Advanced Symbolic Analysis for VLSI Systems: Methods and Applications," published by Springer in 2014.