
Neurocomputing 330 (2019) 437–448

Contents lists available at ScienceDirect

Neurocomputing
journal homepage: www.elsevier.com/locate/neucom

A novel design of memristor-based bidirectional associative memory circuits using Verilog-AMS

Bo Li, Yonglei Zhao, Guoyong Shi∗

Department of Micro/Nano Electronics, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China

Article history: Received 14 February 2018; Revised 9 October 2018; Accepted 12 November 2018; Available online 20 November 2018.
Communicated by Prof. Duan Shukai

Keywords: Memristor; Crossbar array; Bidirectional associative memory; Content-addressable memory; Verilog-AMS; Artificial neural network

Abstract — Recently, the memristor as an emerging device has been used as the basic synapse realization component for the circuit implementation of artificial neural networks. Following this trend, this paper studies the circuit realization of a memristor-based bidirectional associative memory (BAM) system that extends the memristor crossbar array structure for bidirectional synaptic weighting operation. The neuron cell in the BAM is represented by the digital circuit element JK flip-flop for hardware cost-saving. Meanwhile, a novel memristor programming strategy is also considered and examined to ease the on-chip learning. The design of such a memristive BAM circuit system is conducted in the hardware description language Verilog-AMS and has been validated in a commercial circuit simulation environment via a case study. Test results show that a set of binary character patterns can be memorized and recalled successfully by the trained memristive BAM system.

© 2018 Elsevier B.V. All rights reserved.

1. Introduction

Since the memristor hypothesized by Leon Chua [1] was discovered by scientists at the HP lab in 2008 [2], a variety of circuit designs adopting the memristor have been studied and reported due to its unique properties. One notable trend is to use the memristor as a basic component for the implementation of the synapse [3] in artificial neural network (ANN) circuits [4–7]. Many studies [8–11] have shown that memristor-based synapses are more suitable than complementary metal oxide semiconductor (CMOS) transistor-based designs in terms of chip area, power, etc. A class of ANNs can mimic the functionality of human memory, known as associative memory (AM). AM by definition [12] can be treated and used as a kind of content-addressable memory (CAM) [13] from the perspective of circuit application, a design paradigm of memory circuits that differs from the location-addressable memory in today's computers. Therefore, it is of great interest to realize CAM or AM by means of ANN circuits. Circuit design and fabrication of memristor-based associative memory was first demonstrated by Hu et al. [14] via the Hopfield neural network (HNN). However, in that work the training process was conducted in software and the learned synapse weight values were loaded into the memristors; namely, on-chip training was not considered. Then Yang et al. [15] proposed another circuit realization of a memristive HNN based on the memristor bridge synapse structure [16,17]. They validated the capability of associative memory by circuit simulation.

Bidirectional associative memory (BAM), proposed by Kosko in 1988 [18], is another special type of AM featuring bidirectional pattern association and easy training. Many extensions of the memory structures and learning methods have been investigated in the literature [19–22]. Meanwhile, many researchers attempt to apply BAM to solve real problems, such as combining BAM with the fault tree analysis technique to build an intelligent diagnosis system [23]. One interesting trend of BAM application study is to exploit BAM as a generative model. For instance, BAM can be employed as middle layers embedded into deep neural networks for voice conversion [24] and postfiltering in speech synthesis [25], so that it can take advantage of both models and overcome their respective weaknesses. Another case of a generative model based on BAM is facial expression generation [26], which mimics the human brain to extract features. Recently, Pedrycz [27] showed the potential of using BAM and its variants as a linkage analysis method in the field of data analytics. Circuit realization of BAM has thereby also been an active subject of study, as with CAM. The early study conducted by Linares-Barranco [28] designed CMOS analog integrated circuits (ICs) as a realization of BAM. However, such traditional CMOS synapses suffered several limitations. For example, the synapse weight value is actually represented as the charge stored in capacitors, which require extra circuits for refreshing and reprogramming. The emergence of the memristor has driven active research on developing alternative BAM circuit realizations. Most of them

∗ Corresponding author. E-mail address: shiguoyong@sjtu.edu.cn (G. Shi).

https://doi.org/10.1016/j.neucom.2018.11.050
0925-2312/© 2018 Elsevier B.V. All rights reserved.
focused on the analysis of the dynamical properties of the BAM system by software simulation when memristors are involved (memristive BAM), such as the stability, synchronization, and delay effects [29–33]. There is also interest in hardware implementation of the memristive BAM system. For instance, the work by Tarkov [34] considered the application of a modified memristor bridge structure synapse to both HNN and BAM. However, in the case of BAM, Tarkov has to use two sets of memristor synapse arrays to represent the weight matrix W and its transpose W^T for the bidirectional weighting operation. On-chip learning is also not considered in Tarkov's work. Another example is Zhao et al. [35], who propose and verify a programming method for memristor array off-chip learning via a 6 × 6 memristive BAM case study.

As discussed above, the motivation of this study is to design memristive BAM circuits as a hardware implementation of BAM by reusing only one set of memristor crossbar synapse arrays, together with on-chip learning circuitry. Such a memristor-based synapse design can be scalable, cost-effective, and universal to other bidirectional ANN circuits. Another cost-effective design choice is that the BAM neuron circuit is realized by the digital circuit element JK flip-flop, compared to previous analog IC implementations [28,34]. Beyond these design details, we attempt to simulate and verify the whole memristive BAM system via the industry-standard mixed-signal circuit design language Verilog-AMS, rather than the SPICE reported in previous work, to promote the system-level design of memristor-based large-scale ANN circuits when both digital and analog circuits are employed.¹

The rest of the paper is organized as follows: Section 2 presents a mathematical description of BAM. Section 3 is devoted to the extension of the memristor crossbar array for bidirectional synaptic weighting operation. Section 4 explains the on-chip learning algorithm and the corresponding circuits. In Section 5 we describe the system-level circuit implementation details of the memristive BAM. Circuit simulation results of the proposed memristive BAM system for pattern recognition in the Verilog-AMS environment are discussed in Section 6. The paper is concluded in Section 7.

¹ The Verilog-AMS circuit design code of this memristive BAM project will be open-sourced and released under the free BSD license. Currently the project code is at the GitHub link: https://github.com/PhdBoLi/BAM.

2. Brief mathematical description of BAM

BAM is a two-layer neural network consisting of the X-layer and the Y-layer, as illustrated in Fig. 1. The state of each neuron unit takes on a bipolar value −1 or 1. In vector-matrix notation, the m neurons x = (x_1, ..., x_m)^T of the X-layer interact with the n neurons y = (y_1, ..., y_n)^T of the Y-layer via a fully connected weight matrix W = (w_ij)_{m×n}, with m rows and n columns, where each w_ij is a real number denoting the (i, j)th entry of the m × n matrix and T stands for vector/matrix transpose.

Fig. 1. Illustration of the BAM structure.

When an arbitrary vector x = (x_1, ..., x_m)^T or y = (y_1, ..., y_n)^T is input to the corresponding terminal layer of the BAM, the BAM runs the recall process between the two layers until a pair of stationary vectors is reached, by the following iterations:

x_i(t + 1) = f( Σ_{j=1}^{n} w_ij y_j(t), x_i(t) ),  for i = 1, ..., m   (1a)

y_j(t + 1) = f( Σ_{i=1}^{m} w_ij x_i(t), y_j(t) ),  for j = 1, ..., n   (1b)

where t denotes the iteration step and f(·, ·) is the bivariate activation function defined by

f(σ, ζ) = { 1 if σ > 0;  ζ if σ = 0;  −1 if σ < 0 }   (2)

where σ denotes the received sum of signals and ζ denotes the current binary state of the neuron before activation.

The main goal of BAM is to find a proper connection weight matrix W = (w_ij)_{m×n} so that all training pattern vector pairs {x^(k), y^(k)}_{k=1}^{P} in a given set of P pairs become stationary points of the BAM, where x^(k) = (x_1^(k), ..., x_m^(k))^T and y^(k) = (y_1^(k), ..., y_n^(k))^T. Kosko proposed the outer-product rule in his original BAM paper [18], which simply calculates the sum of the outer products of all the training patterns:

W = Σ_{k=1}^{P} x^(k) (y^(k))^T   (3)

However, as pointed out in [19,21,22], the outer-product rule is not an optimal learning rule and does not work well when the training data set has a large number of patterns. Wang et al. [19] proposed and proved a learning method that introduces cyclic utilization of the training pattern pairs.

3. An extended memristor crossbar synapse

3.1. Existing memristive crossbar synapse

The crossbar structure has been applied by many authors [4,5,10,36–39] due to the prominent advantage of its scalability for constructing large synapse arrays. The key point in memristor crossbar synapse design is how to represent a weight value w_ij by the conductance of a memristor (G_M,ij) when w_ij is negative. The most frequently adopted idea is to incorporate signal subtraction with two memristors per synapse weight (w_ij ∝ G+_M,ij − G−_M,ij). When arranged in a two-dimensional array, two alternative forms can perform the signal subtraction: one is noted as pre-differentiation (Fig. 2(a)), used in [36,38], and the other is noted as post-differentiation (Fig. 2(b)), employed in [4,5,39].

Most recently, Zhang et al. [37] replaced the plus memristor G+_M,ij with a constant resistor G_RS = 1/R_S for one synapse (w_ij ∝ G_RS − G−_M,ij). For a synapse array, the G_RS term array can be reduced to only one column with the aid of an analog adder, as shown in Fig. 2(c). Hereby R_S is noted as the summing resistor. The output
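The recall dynamics (1)–(2) and the outer-product rule (3) can be sketched as a small behavioral model. The following is an illustrative NumPy sketch, not the paper's Verilog-AMS implementation; function names and the example patterns are ours:

```python
import numpy as np

def activate(sigma, zeta):
    # Bivariate activation f(sigma, zeta) of Eq. (2): threshold the
    # weighted sum, but hold the previous state when it is exactly zero.
    return np.where(sigma > 0, 1, np.where(sigma < 0, -1, zeta))

def outer_product_rule(X, Y):
    # Eq. (3): W = sum_k x^(k) (y^(k))^T, for bipolar pattern matrices
    # X of shape (P, m) and Y of shape (P, n).
    return X.T @ Y

def recall(W, x, y, max_iter=100):
    # Bidirectional iteration of Eqs. (1a)-(1b) until a stationary pair:
    # each pass updates the X-layer from y, then the Y-layer from x.
    for _ in range(max_iter):
        x_new = activate(W @ y, x)        # x_i(t+1) = f([W y]_i, x_i(t))
        y_new = activate(W.T @ x_new, y)  # y_j(t+1) = f([W^T x]_j, y_j(t))
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y
```

For mutually orthogonal pattern pairs, each stored pair is a stationary point of this iteration, and a slightly corrupted input vector is pulled back to the stored pair, which is the content-addressing behavior discussed above.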
Fig. 2. Existing memristor crossbar synapse circuits: (a) crossbar memristor synapse array with pre-differentiation [36]; (b) crossbar memristor synapse array with post-differentiation [4]; (c) crossbar synapse array with amplifier-based biasing [37].

voltage (V_F) of the differential input amplifier (AMP) with a feedback resistor (R_f) is expressed as (4):

V_F = −(R_f / R_S) Σ_{i=1}^{m} V_Ii   (4)

V_F can then be converted to a current signal via a resistor (noted as the biasing resistor R_b). Provided all R_b and R_f have the same value (R_b = R_f), the output voltage (V_Oj) of the jth column is

V_Oj = −R_0 ( Σ_{i=1}^{m} G−_M,ij V_Ii + V_F / R_b ) = R_0 Σ_{i=1}^{m} (G_RS − G−_M,ij) V_Ii   (5)

where V_Ii stands for the ith input voltage. As a result, a total of m × n memristors is used, unlike the pre-differentiation and post-differentiation designs, where two copies of the crossbar are used.

3.2. Proposed memristive synapse for bidirectional weighting operation

The memristor crossbar synapses mentioned above merely focus on the single-direction weighting operation (multiply-add) from one layer to another. This paper extends the design idea of Zhang et al. [37] to the bidirectional situation so that an m × n BAM neural network requires only one memristor crossbar array to perform the W^T x and W y operations illustrated in Fig. 1. The neural signal propagation directions are controlled by switches, which are switched between two terminals (1 or 2) by time-synchronized signals, as illustrated in Fig. 3(a). Here constant resistors are employed to replace the minus term G−_M,ij rather than G+_M,ij, for the convenience of on-chip learning. For example, when the learned weight variation Δw_ij is positive, we simply increase the corresponding memristor conductance G+_M,ij by a positive voltage. The constant resistors connected to the X-layer are noted as R_CX, and the counterparts as R_CY.

The case of the synaptic operation direction from the Y-layer to the X-layer is realized by all switches contacting node 1, as shown in Fig. 3(b), which performs as below:

I_X,i = Σ_{j=1}^{n} (G+_M,ij − G_RCX) V_Y,j   (6)
     = Σ_{j=1}^{n} G+_M,ij V_Y,j − G_RCX Σ_{j=1}^{n} V_Y,j   (7)

The plus term Σ_{j=1}^{n} G+_M,ij V_Y,j is realized by connecting the input voltages to a row of n memristors. The minus term factor −Σ_{j=1}^{n} V_Y,j is realized by the AMP with a matched feedback resistor (R_f = R_CY = R_CX). In this switch configuration, R_CY (resp. R_CX) functions as the summing (resp. biasing) resistor R_S (resp. R_b) of Zhang's design [37].

The working principle of the reversed propagation of signals from the X-layer to the Y-layer is analogous, with all switches connected to node 2, whereupon the roles of R_CX and R_CY interchange. So the proposed design reuses the inverting AMP and the two arrays of resistors for voltage summation and current biasing in both signal propagation directions. The proposed and extended memristor array structure is compared to the above-mentioned designs in Table 1.

4. On-chip learning circuits for the proposed BAM system

4.1. A learning algorithm for memristive BAM

The variation Δw_ij during the BAM learning process is mapped to the variation of the memconductance ΔG+_M,ij according to the following on-chip training steps, altered from Wang's algorithm [19]. To facilitate the statement, we introduce the Hadamard product of two matrices of comparable dimension, defined by

A ∘ B = (a_ij · b_ij)_{m×n}

where A = (a_ij)_{m×n} and B = (b_ij)_{m×n}.

1. Initialize all memristors with random memconductances G+_M,ij.
2. For each input pattern pair {x^(k), y^(k)}_{k=1}^{P} in the training set, encoded in voltage form x^(k) = (V_X,1, ..., V_X,m)^T, y^(k) = (V_Y,1, ..., V_Y,n)^T:
   (a) Evaluate the judging condition ξ_i^(t) in the X-layer: apply the input pattern y^(t) to the crossbar synapse circuit to produce the vector W y^(t), and calculate the Hadamard product of W y^(t) and x^(t). Namely, each component of the Hadamard product is ξ_i^(t) := [W y^(t) ∘ x^(t)]_i. Then compare ξ_i^(t) with the specification D_x.
   (b) Update the weight values of the corresponding synapses: if the component ξ_i^(t) < D_x,

       G_M,ij^(t+1) = G_M,ij^(t) + q V_X,i^(t) V_Y,j^(t),  for ξ_i^(t) < D_x, j = 1, ..., n   (8)

       If ξ_i^(t) > D_x, the related memristor synapse values remain unchanged.
   (c) Evaluate the judging condition η_j^(t) in the Y-layer: the process is similar to sub-step (a), and each component is η_j^(t) := [W^T x^(t) ∘ y^(t)]_j.
   (d) Update the weight values of the corresponding synapses: the updating method is similar to sub-step (b):

       G_M,ij^(t+1) = G_M,ij^(t) + q V_X,i^(t) V_Y,j^(t),  for η_j^(t) < D_y, i = 1, ..., m   (9)

3. Iterate over the next pattern pair in step 2 until all Hadamard product components satisfy the specifications (ξ_i^(t) > D_x, η_j^(t) > D_y),

where D_x > 0, D_y > 0, and q is the learning rate.

4.2. Learning circuit mapping

4.2.1. Arbiter circuit

The arbiter circuit is used to verify the inequality in the proposed on-chip learning algorithm, i.e., ξ_i < D_x or η_j < D_y. Assertion of the inequality leads to the firing of an enabling signal for updating the synapse weights. Fig. 4 shows the schematic of an arbiter circuit. ξ_i or η_j, produced by the analog multiplier, is the voltage product of a neuron state x_i (or y_j) and the corresponding Sum signal that represents [W y]_i (or [W^T x]_j). A reference voltage V_ref acts as the threshold value D_x or D_y. The voltage comparator output is passed to the encoder, which converts the voltage to a 2-bit digital code to activate the SR flip-flop. The flip-flop, controlled by the Arb_Clk signal, outputs the signal Enable_Lrn to enable the synapse weight update circuit.
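The learning algorithm of Section 4.1 can be sketched in software. The NumPy model below is a hedged illustration: the weight w_ij stands in for the trainable memconductance G+_M,ij, the constant bias conductances are omitted, and the function name and defaults are ours, not the paper's:

```python
import numpy as np

def train_bam(pairs, m, n, Dx=0.01, Dy=0.01, q=0.5, max_epochs=100, seed=0):
    # Step 1: random initial weights (modeling random memconductances).
    rng = np.random.default_rng(seed)
    W = rng.uniform(-0.1, 0.1, size=(m, n))
    for _ in range(max_epochs):
        done = True
        for x, y in pairs:                  # step 2: cycle the pattern pairs
            xi = (W @ y) * x                # (a) xi_i = [W y o x]_i
            rows = xi < Dx                  # judging condition in the X-layer
            if rows.any():                  # (b) Eq. (8): row-wise update
                W[rows, :] += q * np.outer(x[rows], y)
                done = False
            eta = (W.T @ x) * y             # (c) eta_j = [W^T x o y]_j
            cols = eta < Dy                 # judging condition in the Y-layer
            if cols.any():                  # (d) Eq. (9): column-wise update
                W[:, cols] += q * np.outer(x, y[cols])
                done = False
        if done:                            # step 3: all components satisfied
            break
    return W
```

Multiple pattern pairs are cycled in the inner loop, following the cyclic utilization of training pairs proposed by Wang et al. [19]; training stops once every Hadamard-product component clears its threshold.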
Fig. 3. (a) Proposed memristor synapse array in crossbar. (b) Illustration of the crossbar operation as the signals from the Y-layer are summed by one neuron in the X-layer.

Table 1
Comparison of memristive crossbar structures.

Array structure          | Cost for m × n synapse array | Signal direction | Single weight value w_ij ∝
Pre/post-differentiation | 2(m × n)                     | Single           | G+_M,ij − G−_M,ij
Zhang's                  | m × n                        | Single           | G_R − G−_M,ij
Proposed                 | m × n                        | Bi-directional   | G+_M,ij − G_R
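The single-array bidirectional weighting of the last table row can be checked numerically. A minimal NumPy sketch under our own naming (G is the m × n memconductance array, g_r the constant bias conductance of the resistor array):

```python
import numpy as np

def forward_yx(G, g_r, Vy):
    # Switches at node 1 (Eq. (6)): I_X,i = sum_j (G_ij - g_r) * V_Y,j.
    return (G - g_r) @ Vy

def forward_xy(G, g_r, Vx):
    # Switches at node 2: the same single array realizes the W^T
    # operation, so no second crossbar copy is needed.
    return (G - g_r).T @ Vx
```

The decomposition of Eq. (7) — one memristor dot product plus one bias term proportional to the sum of the inputs — is what allows the shared inverting AMP and resistor arrays to serve both directions.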

Fig. 4. Arbiter circuit that fires an enable signal for synapse weight updating.

Fig. 5. Weight update circuit.

4.2.2. Weight update circuit

Fig. 5 shows the schematic of the learning circuit for weight updating. The input port En_x (resp. En_y) receives the Enable_Lrn signal from the arbiter circuit in the X-layer (resp. Y-layer). The iteration of (8) and (9) is thus fulfilled by the Lrn_Ctrl signal controlling the multiplexer (Mux) to alternately select one of the enabling signals (En_x or En_y), depending on whether Lrn_Ctrl is high (1 V) or low (−1 V). The Mux outputs the selected En_x or En_y to enable the analog multiplier that gives the product of two voltages; this also enables the switches at terminal Mem_p, connected to the output of the multiplier, and Mem_n, connected to ground. The corresponding row or column of memristors in the synapse array is then updated for a fixed duration of the applied voltage pulse. When the Lrn_Ctrl signal is 0 V, the Mux is in the high-Z state, blocking the signal En_x or En_y from passing through the Mux submodule.

Note that the learning rate q in software is a constant coefficient. Considering that V_X,i and V_Y,j are binary values, the product V_X,i V_Y,j takes only the values 1 or −1. The weight variation Δw_ij (ΔG+_M,ij) is then q or −q for each learning iteration. For a hardware implementation, two strategies in principle can be adopted to precisely program a memristor with the memconductance change q or −q. One is to utilize several width-fixed, variable-amplitude pulses according to a related programming algorithm [40]. The other is to apply a single amplitude-fixed pulse voltage with variable pulse width [11,35]. Due to the intrinsic properties of the memristor, the pulse width is a function of the initial memconductance value if the memconductance is to be changed by a constant quantity q or −q. A novel consideration is to adjust the learning rate via a fuzzy method [8], which helps improve the learning accuracy for multilayer neural networks. These strategies add complexity to the on-chip learning circuits.

In this paper, we apply the product voltage V_X,i V_Y,j to the corresponding memristor for a fixed duration (fixed pulse width). Namely, q in the proposed hardware implementation is a variable for each iteration. The effectiveness of this programming strategy is validated by circuit simulation.

5. The proposed memristive BAM circuit system

Fig. 6. Architecture of the proposed memristive BAM.

The architecture of the proposed memristive BAM is shown in Fig. 6. The system consists of two parts, as indicated by the two columns. Included in the left column (framed in the dashed box) are the circuit components for on-chip learning, which consist of two arbiter modules and one weight update module, as illustrated in Section 4. The right column displays the symmetrical structure whereby the system can work under the recall mode or the learning mode, controlled by the two switch boxes (called the mode switchers). The operation of the memristor crossbar synapse array has been explained in Section 3. The detailed designs of the other functional blocks are described in the sequel.

5.1. Neuron circuits

If the neuron activation function f(·, ·) defined in (2) is expressed in table format for a neuron in the X-layer, as given in Table 2(a), it suggests that the digital circuit element JK flip-flop is a suitable candidate for realizing this activation function. The truth table of the JK flip-flop is shown in Table 2(b). It suffices to create the following mapping (or encoding):

Table 2
Using a JK flip-flop with encoders for neuron activation.

(a) Activation table
Σ_{j=1}^{n} w_ij y_j | x_i(t + 1)
= 0                  | x_i(t)
< 0                  | −1
> 0                  | 1

(b) JK flip-flop truth table and output encoder
J | K | Q_{n+1} | Signal (V)
0 | 0 | Q_n     | keep
0 | 1 | 0       | −1
1 | 0 | 1       | 1

(c) Input encoder to JK flip-flop
Sum/Input (V) | J | K | State
= 0           | 0 | 0 | –
< 0           | 0 | 1 | Reset
> 0           | 1 | 0 | Set

Fig. 7. Neuron circuits realized by a JK flip-flop with voltage encoders.

• Input encoder: We refer to Σ_{j=1}^{n} w_ij V_Y,j by the signal Sum. Depending on the sign of the Sum signal, Sum is converted to the digital inputs of the JK flip-flop by the input encoder defined by Table 2(c). The encoder is a natural consequence of comparing Tables 2(a) and (b).
• Output encoder: Simply map the digital output of the JK flip-flop by: 1 to 1 V, 0 to −1 V, or no change.

The input state (J, K) = (1, 1) of the JK flip-flop is not used. Fig. 7 shows the circuit realization schematic of a BAM neuron consisting of a JK flip-flop and the input and output voltage encoders. The BAM recall equations given by (1) are then translated into the following voltage-based signal processing equations:

V_X,i(t + 1) = f( Σ_{j=1}^{n} w_ij V_Y,j(t), V_X,i(t) )   (10a)

V_Y,j(t + 1) = f( Σ_{i=1}^{m} w_ij V_X,i(t), V_Y,j(t) )   (10b)

A voltage signal Sum representing Σ_{j=1}^{n} w_ij V_Y,j(t) or Σ_{i=1}^{m} w_ij V_X,i(t) is input to the voltage encoder port V_in before entering the JK flip-flop.

For the learning phase, a pair of binary patterns is presented to the two layers of the BAM. The voltage signals are mapped to the binary inputs of the JK flip-flops by the input encoder defined in Table 2(c). Simply speaking, a positive voltage sets and a negative voltage resets the JK flip-flop.

5.2. Mode switchers

The two working modes, combined with the neural signal propagation direction control, are realized by the configuration of two sets of CMOS switches (mode switch S1, direction switch S2). It is adequate to use one signal path at the X-layer to explain the switch operation, as the Y-layer situation is just the counterpart.

Fig. 8. Mode switcher design: (a) switch control details at the X-layer; (b) control timing for switch S_2 in the learning mode; (c) control timing for switch S_2 in the recall mode.

As shown in Fig. 8(a), two switches S1_X and S2_X are introduced at the X-layer for the functioning of one neuron. Switch S1_X is used to control the working mode and is configured by the mode control signal Mode_Ctrl. Switch S2_X is used to control the signal propagation direction (layer X to Y, or layer Y to X) and is configured by the direction control signal S2_Ctrl. Switch S2_X has three states: by switching to node 1 or 2 it selects the signal propagation direction, and by switching to node n, i.e., neutral, signal propagation is disabled. The neutral state is necessary when the memristors are being programmed.

5.2.1. Learning mode

By switching S1_X to node 4 (the arbiter module) the system enters the learning mode. The learning mode is divided into four clock phases:

• In the X-layer Read (XR) phase the inequality on D_x is judged;
• In the X-layer Write (XW) phase the synapse weight update given by (8) is executed;
• In the Y-layer Read (YR) phase the inequality on D_y is judged;
• In the Y-layer Write (YW) phase the synapse weight update given by (9) is executed.

In the XR phase, signals are transferred from the Y-layer to the X-layer through the synapse network. The produced sum signal Σ_{j=1}^{n} w_ij V_Y,j (i = 1, ..., m) is passed to the arbiter at the X-layer (with the switch S2_X switched to node 1). The arbiter executes the computation and fires the Enable_Lrn signal to trigger the synapse weight update module to update the memristance if the threshold condition is satisfied.

In the XW phase, the switch S2_X is switched to the neutral state by setting the control signal S2_Ctrl to 0 V. During this period, the learning enabling signal Lrn_Ctrl is set high and the weight update module generates a proper voltage to adjust the corresponding row (column) of memristances for a certain time length to emulate the update Eq. (8).
The working details of the YR and YW phases are analogous to those described for the XR and XW phases, with the reversed signal propagation direction. In the YR and YW phases, learning is enabled by the low Lrn_Ctrl signal (−1 V), so that the weights are updated by the other update rule, Eq. (9). Fig. 8(b) shows the timing diagram of the control signals during the learning phases.

5.2.2. Recall mode

Switch S1_X is always connected to node 3 at the X-layer in the recall mode, which can be divided into two phases depending on the signal propagation direction:

• Y-to-X Propagation (YXP): signal propagation from the Y-layer to the X-layer; and
• X-to-Y Propagation (XYP): signal propagation from the X-layer to the Y-layer.

In the YXP phase, switch S2_X is switched to node 1 during the time slot when the switch control signal S2_Ctrl is high (1 V). During this time period the summation signal Σ_{j=1}^{n} w_ij V_Y,j is transmitted to the X-layer via the CCVS elements and activates the corresponding neuron in the X-layer. The X-layer neuron state is updated by the JK flip-flop module described earlier.

In the other direction, i.e., in the XYP phase, switch S2_X is switched to node 2 while S2_Ctrl is low (−1 V). During this time the summation signal Σ_{i=1}^{m} w_ij V_X,i is propagated from the X-layer to the Y-layer and activates the neurons in the Y-layer. The timing diagram of the recall mode is shown in Fig. 8(c).

6. Experimental results

The proposed memristive BAM circuit was simulated and validated in the Cadence Virtuoso software suite [41] via a 24 × 24 BAM system based on the linear memristor model and a 6 × 6 case based on the threshold memristor model. The simulation was conducted in a VMware virtual machine with a hardware configuration of 4 CPU cores and 4 GB of memory. The correctness of the BAM system was verified by presenting a set of digital patterns to the system and examining the learning phase and the recall phase.

6.1. Memristor characterization

The basic linear ion drift memristor model [2] and the synaptic model with threshold voltages [42] are both used in the experiments, coupled with the unified expression of the window function proposed by Wen et al. [43], which can produce many kinds of window functions, such as the Biolek window function [44], by setting different parameter values. Fig. 9(a) shows the characterization of the linear memristor model given a sinusoidal voltage stimulus with the following parameters: R_on = 100 Ω, R_off = 16000 Ω, and D = 10 nm. The pinched hysteresis loop of the corresponding voltage-current characteristics shown in the inset is identified as the fingerprint of the memristor [45]. The threshold model characterized in Fig. 9(b) is set by the parameters of the AIST-based memristor listed in Table I of reference [42]. As shown in Fig. 9(b), the memristance changes only if the applied voltage is above the positive threshold voltage (+0.37 V) or below the negative threshold voltage (−0.19 V). Fig. 9(c) shows the memristor current waveform in response to a periodic pulse voltage source applied to the memristor. The memristance is tuned over the duration of each pulse, yet the variation of memristance caused by each pulse is not the same, due to the initial value of the memristance. This characteristic is acceptable for the proposed BAM learning strategy.

Fig. 9. Characterization of the memristor model: (a) response of basic memristor model; (b) response of threshold memristor model; (c) memristance tuned by pulse voltage stimulus.

6.2. Verification of neuron circuit

Fig. 10 shows the simulated curves of a neuron circuit illustrated in Fig. 7 as a verification of the activation function during the recall mode. The clock signal controlling the JK flip-flop is shown at the bottom with a clock period of 20 ns. Note that the logical values 0 and 1 of the JK flip-flop ports correspond to electrical voltage levels 0 V and 1 V, respectively.

In the beginning period (20 ns), the signal Sum representing Σ_{j=1}^{n} w_ij V_Y,j(t) or Σ_{i=1}^{m} w_ij V_X,i(t) is below 0 V (rising from −2 V to 0 V); the (J, K) ports thus receive the code (0,1). The output of the JK flip-flop is Q = 0, i.e., a voltage of −1 V. During the second period, from 20 ns to 40 ns, the Sum signal remains at 0 V. The (J, K) ports thus receive the code (0,0); in this case the output Q keeps
Fig. 10. Verification of neuron activation function via a case in the recall mode.

the previous state, i.e., the voltage level −1 V. In the third period, from 40 ns to 60 ns, Sum is above 0 V (rising from 0 V to 2 V). The (J, K) ports then receive the code (1,0) and the output Q changes to 1 V at the positive edge of JK_Clk. In the fourth period (60 ns to 80 ns), Sum goes back to 0 V, so the output Q remains at 1 V. The sampled test case demonstrates that the digital JK flip-flop with the voltage encoders correctly functions as the neuron activation expressed in (10).

6.3. Learning phase of the memristive BAM system

For illustration purposes, the set of learning patterns consists of three pairs of 6 × 4 alphabetic letters {A, B, C} and numerical digits {1, 2, 3}, shown in Fig. 11(a). Each pixel in a binary pattern is encoded by a voltage level in the circuit: we assign the voltage 1 V to a black pixel and −1 V to a blank pixel. Each 6 × 4 pattern is expanded to a vector of dimension 24 and input to the corresponding layer of the BAM: the alphabetic letters {A, B, C} are presented to the X-layer and the digits {1, 2, 3} to the Y-layer. Hence both the X and Y layers have dimension 24 (i.e., an equal number of neurons), and the BAM memristor synapse array has a size of 24 × 24.

Fig. 11. The proposed memristive BAM under learning mode: (a) pattern pairs for learning; (b) training process of the memristor-based synapse.

We randomly selected four memristors out of the 24 × 24 array and measured their current variation over four training cycles. The current waveforms are shown in Fig. 11(b). As discussed previously, each training cycle consists of four sub-phases, i.e., XR, XW, YR, and YW, within a period of 2 μs. As indicated in the third cycle with memristor M4 in Fig. 11(b), the read phases XR and YR occupy very small portions of the whole period compared to the time spans taken by the write phases XW and YW. Hence, the variation of memristance during the XR and YR phases can be neglected.

Fig. 12. Recall experiment: (a) recall from X-layer to Y-Layer with 3 bits flipped in
the X-layer patterns (about 10% noise). (b) recall from Y-layer to X-Layer with 3 bits
flipped in the Y-layer patterns (about 10% noise). (c) recall from Y-layer to X-Layer
with 5 bits flipped in the Y-layer patterns (about 20% noise). (d) recall from Y-layer
to X-Layer with 7 bits flipped in the Y-layer patterns (about 30% noise).

We observe that in the XW and YW phases of the third cycle, all four memristors have zero current passing through them, which means that none of the threshold conditions was met. In the XW phase of the fourth cycle, we observe an increasing positive (resp. negative) current in memristor M4 (resp. M3), which indicates that a positive (resp. negative) training voltage pulse was applied to the respective memristor by the weight update module to tune the synapse weight.
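The four sub-phase training cycle (XR, XW, YR, YW) and the threshold-gated pulsing can be mimicked behaviorally. The Python sketch below is our own approximation, not the paper's circuit-level programming scheme: the learning rate and the sign-based update rule are assumptions. It reproduces the qualitative behavior seen in Fig. 11(b), where write phases carry no current once every threshold condition fails.

```python
def sign(v):
    return 1 if v >= 0 else -1

def recall(W, x):
    # Read phase: weighted sum through the crossbar followed by a sign
    # decision, standing in for the JK flip-flop neuron.
    return [sign(sum(W[i][j] * x[j] for j in range(len(x))))
            for i in range(len(W))]

def write_phase(W, x, target, lr=0.1):
    """Apply a training pulse only to rows whose recalled bit is wrong."""
    y = recall(W, x)
    pulses = 0
    for i, (yi, ti) in enumerate(zip(y, target)):
        if yi != ti:                     # threshold condition met
            for j, xj in enumerate(x):
                W[i][j] += lr * ti * xj  # signed pulse tunes the weight
            pulses += 1
    return pulses

def train_cycle(W, x, y_target):
    # XR + XW: recall Y from X, then program the forward direction.
    pulses = write_phase(W, x, y_target)
    # YR + YW: recall X from Y through the transposed array, then
    # program the backward direction on the same weights.
    WT = [list(col) for col in zip(*W)]
    pulses += write_phase(WT, y_target, x)
    for i in range(len(W)):
        for j in range(len(W[0])):
            W[i][j] = WT[j][i]
    return pulses
```

Once a full cycle applies zero pulses, the pair is stored in both directions, matching the quiet XW/YW write phases of the third training cycle; after convergence, recall also tolerates input noise such as a flipped bit, which is the property exercised in Section 6.4.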

6.4. Recall phase of the memristive BAM system

The recall capability of this memristive BAM is presented in Fig. 12. We added noise to the input patterns to test the robustness of recall. Fig. 12(a) and (b) show the recall results from the X-layer to the Y-layer and conversely, with three bits flipped over (equivalent to adding about 10% noise). In both cases the paired patterns were successfully recalled, which validates that the trained BAM functions correctly.

Fig. 12(c) and (d) show further tests at increased noise levels. Both show the recall results from the Y-layer to the X-layer, with the digit patterns disturbed by 20% and 30% noise, respectively. All the paired patterns at the X-layer were correctly recalled, demonstrating that the trained memristive BAM retains resilient content-addressing capability even when the input is corrupted by a significant amount of noise.

6.5. Compatibility

To show that our system design is compatible with other memristor models, we also built a 6 × 6 memristive BAM based on the threshold memristor model and verified it with a training set of three pairs of Tetris patterns, shown in Fig. 13(a). Each pattern is encoded using 1 (resp. −1) for a black (resp. white) block. The Tetris patterns to be recalled in the X-layer are described by the following vectors, respectively:

Pattern1 = (−1, 1, −1, 1, 1, 1),
Pattern2 = (1, −1, −1, 1, 1, 1),
Pattern3 = (1, 1, −1, 1, 1, −1).

Fig. 13. Recall experiment: (a) pattern pairs; (b) test result.

Fig. 13(b) shows that the three associated X-layer patterns, in signal form, are successfully recalled after the learning process. Each element of the learned synaptic weight matrix (11) is the conductance of the corresponding memristor minus the constant biasing conductance, measured in siemens. Note that the timing and the circuit schematics of the related modules are unchanged in the threshold-memristor-based BAM system, but some circuit parameters are adjusted to fit the memristor model: the output voltage of the neuron circuit is mapped to ±0.1 V, which is below the threshold voltage of this memristor model, and the output voltage of the weight update circuit is mapped to ±0.5 V,
above the memristor threshold voltage. The learned weight matrix is

W_G = 10^−3 ×
      [ −31.78  −17.74  −17.74  −35.04  −31.47    3.49 ]
      [ −33.49  −17.68  −17.68  −36.04    2.16  −34.95 ]
      [ −19.56  −19.56  −19.56    0.27  −19.56  −35.04 ]      (11)
      [   0.27    0.27    0.27  −19.56    0.27  −17.74 ]
      [   0.27    0.27    0.27  −19.56    0.27  −17.74 ]
      [   1.9   −17.98  −17.98  −35.01  −33.71  −33.97 ]

7. Conclusion

In summary, we incorporated the extended memristor crossbar circuit into a memristive BAM neural network with an on-chip learning feature. The redesigned crossbar circuit reuses a single memristor array for the bidirectional synaptic weighting operation, with the benefits of modularity and cost-effectiveness, which can contribute to the hardware implementation of memristive artificial neural networks. Meanwhile, a novel programming strategy is adopted by the on-chip learning circuit to ease the tuning of the memristor-based synapse weights, and the neuron activation function is realized by the digital circuit element JK flip-flop. A memristive BAM neural network of size 24 × 24 is used to demonstrate the associative memory capability of the proposed framework under the Verilog-AMS design methodology. The simulation results show that meaningful patterns can be successfully recalled owing to the association relationships established by the on-chip learning circuits. We expect the Verilog-AMS based methodology to be a good candidate for the design of memristor-based artificial neural network circuits.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (NSFC) under grants No. 61176129 and No. 61474145.

References

[1] L.O. Chua, Memristor-the missing circuit element, IEEE Trans. Circ. Theory 18 (1971) 507–519, doi:10.1109/TCT.1971.1083337.
[2] D.B. Strukov, G.S. Snider, D.R. Stewart, R.S. Williams, The missing memristor found, Nature 453 (7191) (2008) 80–83, doi:10.1038/nature06932.
[3] J.S. Hyun, C. Ting, E. Idongesit, B.B. Bhadviya, P. Mazumder, L. Wei, Nanoscale memristor device as synapse in neuromorphic systems, Nano Lett. 10 (4) (2010) 1297–1301, doi:10.1021/nl904092h.
[4] A. Fabien, Z. Elham, D.B. Strukov, Pattern classification by memristive crossbar circuits using ex situ and in situ training, Nat. Commun. 25 (10) (2013) 1864–1878, doi:10.1038/ncomms3072.
[5] M. Hu, H. Li, Y. Chen, Q. Wu, G.S. Rose, R.W. Linderman, Memristor crossbar-based neuromorphic computing system: a case study, IEEE Trans. Neural Netw. Learn. Syst. 25 (10) (2014) 1864–1878, doi:10.1109/TNNLS.2013.2296777.
[6] G. Zhang, Y. Shen, Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control, IEEE Trans. Neural Netw. Learn. Syst. 26 (7) (2015) 1431–1441, doi:10.1109/TNNLS.2014.2345125.
[7] X. Hu, S. Duan, G. Chen, L. Chen, Modeling affections with memristor-based associative memory neural networks, Neurocomputing 223 (2017) 129–137, doi:10.1016/j.neucom.2016.10.028.
[8] S. Wen, S. Xiao, Z. Yan, Z. Zeng, T. Huang, Adjusting learning rate of memristor-based multilayer neural networks via fuzzy method, IEEE Trans. Comput.-Aided Design Integr. Circ. Syst. (2018a) 1, doi:10.1109/TCAD.2018.2834436.
[9] S. Wen, R. Hu, Y. Yang, T. Huang, Z. Zeng, Y. Song, Memristor-based echo state network with online least mean square, IEEE Trans. Syst. Man Cybern. (2018b) 1–10, doi:10.1109/TSMC.2018.2825021.
[10] P. Yao, H. Wu, B. Gao, S.B. Eryilmaz, X. Huang, W. Zhang, Q. Zhang, N. Deng, L. Shi, H.S.P. Wong, H. Qian, Face classification using electronic synapses, Nat. Commun. 8 (15199) (2017) 1–8, doi:10.1038/ncomms15199.
[11] D. Soudry, D.D. Castro, A. Gal, A. Kolodny, S. Kvatinsky, Memristor-based multilayer neural networks with online gradient descent training, IEEE Trans. Neural Netw. Learn. Syst. 26 (10) (2015) 2408–2421, doi:10.1109/TNNLS.2014.2383395.
[12] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, Germany, 1988.
[13] K. Pagiamtis, A. Sheikholeslami, Content-addressable memory (CAM) circuits and architectures: a tutorial and survey, IEEE J. Solid-State Circ. 41 (3) (2006) 712–727.
[14] S.G. Hu, Y. Liu, Z. Liu, Associative memory realized by a reconfigurable memristive Hopfield neural network, Nat. Commun. 6 (7522) (2015) 1–8, doi:10.1038/ncomms8522.
[15] J. Yang, L.D. Wang, Y. Wang, T. Guo, A novel memristive Hopfield neural network with application in associative memory, Neurocomputing 227 (2017) 142–148, doi:10.1016/j.neucom.2016.07.065.
[16] H. Kim, M.P. Sah, C. Yang, T. Roska, L.O. Chua, Memristor bridge synapses, Proc. IEEE 100 (6) (2012) 2061–2070, doi:10.1109/JPROC.2011.2166749.
[17] S.P. Adhikari, H. Kim, R.K. Budhathoki, C. Yang, L.O. Chua, A circuit-based learning architecture for multilayer neural networks with memristor bridge synapses, IEEE Trans. Circ. Syst. I 62 (1) (2015) 215–223, doi:10.1109/TCSI.2014.2359717.
[18] B. Kosko, Bidirectional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1) (1988) 49–60, doi:10.1109/21.87054.
[19] T. Wang, X. Zhuang, X. Xing, Designing bidirectional associative memories with optimal stability, IEEE Trans. Syst. Man Cybern. 24 (5) (1994) 778–790, doi:10.1109/21.293491.
[20] C.S. Leung, L.W. Chan, E. Lai, Stability, capacity, and statistical dynamics of second-order bidirectional associative memory, IEEE Trans. Syst. Man Cybern. 25 (10) (1995) 1414–1424, doi:10.1109/21.464439.
[21] G. Shi, Genetic approach to the design of bidirectional associative memory, Int. J. Syst. Sci. 28 (2) (1997) 133–140, doi:10.1080/00207729708929371.
[22] G. Shi, Optimal bidirectional associative memories, Int. J. Syst. Sci. 31 (6) (2000) 751–757, doi:10.1080/00207720050030798.
[23] B. Fa, Y. Yin, C. Fu, The bidirectional associative memory neural network based on fault tree and its application to inverter's fault diagnosis, in: Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 2009, pp. 209–213, doi:10.1109/ICICISYS.2009.5357894.
[24] L. Chen, Z. Ling, L. Liu, L. Dai, Voice conversion using deep neural networks with layer-wise generative training, IEEE/ACM Trans. Audio Speech Lang. Process. 22 (12) (2014) 1859–1872, doi:10.1109/TASLP.2014.2353991.
[25] L. Chen, T. Raitio, C. Valentini-Botinhao, Z. Ling, J. Yamagishi, A deep generative architecture for postfiltering in statistical parametric speech synthesis, IEEE/ACM Trans. Audio Speech Lang. Process. 23 (11) (2015) 2003–2014, doi:10.1109/TASLP.2015.2461448.
[26] I. Nejadgholi, S. SeyyedSalehi, S. Chartier, A brain-inspired method of facial expression generation using chaotic feature extracting bidirectional associative memory, Neural Process. Lett. 46 (3) (2017) 943–960, doi:10.1007/s11063-017-9615-5.
[27] A. Pedrycz, Bidirectional and multidirectional associative memories as models in linkage analysis in data analytics: conceptual and algorithmic developments, Knowl. Based Syst. 142 (2018) 160–169, doi:10.1016/j.knosys.2017.11.034.
[28] B. Linares-Barranco, E. Sanchez-Sinencio, A. Rodriguez-Vazquez, J.L. Huertas, A CMOS analog adaptive BAM with on-chip learning and weight refreshing, IEEE Trans. Neural Netw. 4 (3) (1993) 445–455, doi:10.1109/72.217187.
[29] K. Mathiyalagan, J.H. Park, R. Sakthivel, Synchronization for delayed memristive BAM neural networks using impulsive control with random nonlinearities, Appl. Math. Comput. 259 (2015) 967–979, doi:10.1016/j.amc.2015.03.022.
[30] M.S. Ali, R. Saravanakumar, J. Cao, New passivity criteria for memristor-based neutral-type stochastic BAM neural networks with mixed time-varying delays, Neurocomputing 171 (2016) 1533–1547, doi:10.1016/j.amc.2015.03.022.
[31] R. Chinnathambi, F.A. Rihan, L. Shanmugam, R. Rajan, P. Muthukumar, Synchronization of memristor-based delayed BAM neural networks with fractional-order derivatives, Complexity 21 (S2) (2016) 412–426, doi:10.1002/cplx.21821.
[32] H. Li, H. Jiang, C. Hu, Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays, Neural Netw. 75 (2016) 97–109, doi:10.1016/j.neunet.2015.12.006.
[33] J. Xiao, S. Zhong, Y. Li, F. Xu, Finite-time Mittag-Leffler synchronization of fractional-order memristive BAM neural networks with time delays, Neurocomputing 219 (2017) 431–439, doi:10.1016/j.neucom.2016.09.049.
[34] M.S. Tarkov, Oscillatory neural associative memories with synapses based on memristor bridges, Opt. Mem. Neural Netw. 25 (4) (2016) 219–227, doi:10.3103/S1060992X16040068.
[35] Y. Zhao, B. Li, G. Shi, A current-feedback method for programming memristor array in bidirectional associative memory, in: Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, 2017, pp. 747–751, doi:10.1109/ISPACS.2017.8266575.
[36] B. Li, Y. Wang, Y. Wang, Y. Chen, H. Yang, Training itself: mixed-signal training acceleration for memristor-based neural network, in: Proceedings of the Nineteenth Asia and South Pacific Design Automation Conference (ASPDAC), Singapore, 2014, pp. 361–366, doi:10.1109/ASPDAC.2014.6742916.
[37] Y. Zhang, X. Wang, E.G. Friedman, Memristor-based circuit design for multilayer neural networks, IEEE Trans. Circ. Syst. I 65 (2) (2018) 677–686, doi:10.1109/TCSI.2017.2729787.
[38] R. Hasan, T.M. Taha, C. Yakopcic, A fast training method for memristor crossbar based multi-layer neural networks, Analog Integr. Circ. Sig. Process. 93 (3) (2017) 443–454, doi:10.1007/s10470-017-1051-y.
[39] M. Prezioso, F. Merrikh-Bayat, B.D. Hoskins, G.C. Adam, K.K. Likharev, D.B. Strukov, Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521 (7550) (2015) 61–64, doi:10.1038/nature14441.
[40] F. Alibart, L. Gao, B.D. Hoskins, D.B. Strukov, High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm, Nanotechnology 23 (7) (2012) 075201, doi:10.1088/0957-4484/23/7/075201.
[41] Cadence Design Systems, Inc., Behavioral Modeling with Verilog-AMS, Cadence Design Systems, Inc., 2017.
[42] Y. Zhang, X. Wang, Y. Li, E.G. Friedman, Memristive model for synaptic circuits, IEEE Trans. Circ. Syst. II 64 (7) (2017) 767–771, doi:10.1109/TCSII.2016.2605069.
[43] S. Wen, X. Xie, Z. Yan, T. Huang, Z. Zeng, General memristor with applications in multilayer neural networks, Neural Netw. 103 (2018) 142–149, doi:10.1016/j.neunet.2018.03.015.
[44] Z. Biolek, D. Biolek, V. Biolkova, SPICE model of memristor with nonlinear dopant drift, Radioengineering 18 (2) (2009) 210–214.
[45] M.D. Ventra, Y.V. Pershin, L.O. Chua, Circuit elements with memory: memristors, memcapacitors and meminductors, Proc. IEEE 97 (10) (2009) 1715–1716, doi:10.1109/JPROC.2009.2027660.

Bo Li received the B.E. degree in electronic science and technology from Harbin University of Science and Technology, China, in 2010 and the M.S. degree in physics from Shanghai University, China, in 2013. Currently, he is pursuing the Ph.D. degree under the supervision of Prof. Guoyong Shi in the School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China. His research interests cover memristors, memristor-based artificial neural network systems and neuromorphic circuits, as well as design automation for memristor-based artificial neural network circuits.

Yonglei Zhao received his B.E. degree from the School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China, in 2016. He is now a graduate student in the Department of Micro/Nano Electronics, Shanghai Jiao Tong University, Shanghai, China. His research areas include memristive neural networks and pattern recognition.

Guoyong Shi has been a full professor in the Department of Micro/Nano Electronics, School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China, since 2005. He received the Ph.D. degree in Electrical Engineering from Washington State University in Pullman, USA. He worked as a post-doctor in the Department of Electrical Engineering, University of Washington in Seattle, USA, from 2002 to 2005. He is currently interested in research subjects on design automation of mixed-signal circuits and neuromorphic circuits. He was a co-recipient of the Donald Pederson Best Paper Award in 2007. He is a co-author of the book "Advanced Symbolic Analysis for VLSI Systems - Methods and Applications" published by Springer in 2014.