
SNNIM: A 10T-SRAM based Spiking-Neural-Network-In-Memory architecture with capacitance computation


Bo Wang, Chen Xue, Han Liu, Xiang Li, Anran Yin, Zhongyuan Feng, Yuyao Kong, Tianzhu Xiong,
Haiming Hsu, Yongliang Zhou, An Guo, Yufei Wang, Jun Yang, Xin Si*
2022 IEEE International Symposium on Circuits and Systems (ISCAS) | 978-1-6654-8485-5/22/$31.00 ©2022 IEEE | DOI: 10.1109/ISCAS48785.2022.9937272

Southeast University, Nanjing, China


*Corresponding Author Email: xinsi@seu.edu.cn

Abstract—Spiking-Neural-Networks (SNN) have natural advantages in high-speed signal processing and big-data operation. However, due to the complex implementation of synaptic arrays, SNN-based accelerators may suffer from low area utilization and high energy consumption. Computing-In-Memory (CIM) shows great potential for intensive and highly energy-efficient computation. In this work, we propose a 10T-SRAM based Spiking-Neural-Network-In-Memory architecture (SNNIM) in a 28nm CMOS technology node. A compact 10T-SRAM bit-cell was developed to realize signed 5-bit synapse arrays and configurable bias arrays (SYBIA). A soma array based on standard 8T-SRAM (SMTA) stores the soma membrane voltage and the threshold value. A capacitance computation scheme (CCA) between them is proposed to support various SNN operations. The proposed SNNIM achieves an energy efficiency of 25.18 TSyOPS/W and over 1.79× better array efficiency than previous works.

Keywords—Spiking Neural Network, Computing In Memory (CIM), Neuromorphic hardware, Capacitance computation, 10T SRAM, Analog computation.

I. INTRODUCTION

Compared with the traditional artificial neural network (ANN), the Spiking-Neural-Network (SNN) has higher biological similarity and is often called the third-generation artificial neural network. Many previous works, such as IBM's TrueNorth [1], Intel's Loihi [2] and Tsinghua's Tianjic [3], have demonstrated that SNNs can be well qualified for ANN workloads such as multi-object detection and classification. In addition, building a large biologically-inspired Spiking-Neural-Network to explore brain-like chips is one of the current research hotspots. The complexity of the neuron function is the main challenge of large-scale SNN hardware implementation, so most SNNs are implemented in the digital domain. Some researchers instead implement SNNs in the analog domain to directly simulate the dynamics of the real nervous system, such as ROLLS [4]. However, a digital-domain SNN implementation brings a large power-consumption overhead, while an analog-domain implementation makes it difficult to ensure the configurability of the whole system. At the same time, both methods may lead to large area overhead, and it is difficult to achieve significant energy efficiency.

In recent years, SRAM-based Computing-In-Memory (CIM) technology has become increasingly mature. As a promising method, CIM has been well applied in ANN application scenarios, especially in the field of DNN acceleration [6-8]. Different from the conventional von-Neumann architecture, CIM implements a more compact organization, which achieves higher energy efficiency by performing some computing operations inside the memory array. Therefore, exploring the combination of CIM and SNN is highly attractive and promising.

In this paper, we propose a 10T-SRAM based Spiking-Neural-Network-In-Memory architecture (SNNIM). A compact 10T-SRAM was developed to implement a signed 5-bit synaptic array and a configurable bias array (SYBIA). A soma array based on standard 8T-SRAM (SMTA) is proposed to store the membrane potential and the neuron threshold value. A capacitance computation scheme (CCA) between them is proposed to support various SNN operations at lower energy cost.

The rest of this paper is organized as follows. Section II introduces the overall structure of the proposed SNNIM architecture. The critical circuit techniques, which include the SYBIA circuit, the CCA and the SMTA circuit, are discussed in Section III. Section IV shows the experimental results of the proposed SNNIM architecture, and Section V concludes the paper.

II. PROPOSED 10T-SRAM BASED SPIKING-NEURAL-NETWORK-IN-MEMORY ARCHITECTURE (SNNIM)

Fig. 1(a) is an abstract representation of the information interaction between neurons in biology. Each neuron consists of three main components: dendrites, axons and a soma. Dendrites act as the inputs to the neuron, axons act as its outputs, and the soma is responsible for processing the synaptic signals. From the axon of the presynaptic neuron to the dendrite of the postsynaptic neuron, the signals can be excitatory or inhibitory, strong or weak, depending on the properties of the synapses. Each neuron has many dendrites and axons, so designing neuromorphic hardware with high area utilization to realize the synaptic function is a bottleneck.

The overall structure of the proposed SNNIM macro is shown in Fig. 1(b). It is composed of the SNNIM groups, core

Fig. 1. (a) Abstract representation of the information interaction between neurons in biology. (b) The overall structure of the proposed SNNIM macro and (c) partial details of Group 0 of the SNNIM macro.

control and input driver, reference voltage generator (Vref Gen.), SRAM I/O interfaces and spike-output mapping control. In this design, an SNNIM macro integrates X SNNIM groups (X=16). All SNNIM groups in an SNNIM macro share one set of WL drivers and logic control modules. When a spike-event occurs on the SWLs, these SNNIM groups work in parallel. The input spikes are preprocessed by the Core Ctrl module, so the pulse-widths on these word-lines differ. The SWLs transmit the customized spikes among the SYBIAs of the SNNIM groups, and the pulse on the RWLs controls the read operation that develops Vmem and Vmth on the CCA from the SMTA. The assertion of WLs and WWLs is used to write and read data for the 10T-SRAM and the 8T-SRAM respectively when the SNNIM macro works in SRAM mode. Because two types of SRAM array (10T and 8T) exist in the SNNIM macro, there are two SRAM I/O interfaces. The Vref Gen module generates the reference voltages VN and VP that are used for clamping in the SWLC circuit. The Spike Output Mapping Ctrl module can deliver the output spikes from one SNNIM group to the next SNNIM macro and can also send them back to the local Core Ctrl module as new input spikes.

As shown in Fig. 1(c), each SNNIM group is a small CIM bank composed of a SYBIA based on 136×5b 10T-SRAM, an SMTA based on 16×4b 8T-SRAM, and a CCA. The 128×5b SRAM in the SYBIA acts as the synaptic array, and each row is a configurable synaptic branch composed of 4b 10T-SRAM for the synaptic weight and 1b SRAM with an SWL-converter (SWLC) module for sign-bit processing. If the synaptic array receives a spike-event XI (XI is a set of spikes) and N synaptic branches are activated, the MAC operation $\sum_{n=1}^{N}\sum_{i=1}^{I} X_n^i \times W_n[4{:}0]$ is performed between the SYBIA and the CCA. The remaining 8×5b 10T-SRAM in the SYBIA is used as a bias array that provides bias values for SNN operations (see Section III). The 16×4b 8T-SRAM of the SMTA stores the original membrane potential (Vmem0[3:0]) and the corresponding threshold values (Vmth0[3:0]).

The CCA is an array of capacitors arranged in a specific way to integrate the outputs of the SYBIA and the SMTA through RBL_s[3:0] and RBL_m[3:0] respectively. There is only one CCA in each SNNIM group; the configured neurons share the CCA of one SNNIM group and realize the computing properties of an SNN as the generalized integrate-and-fire (G.I&F) model does.

III. CIRCUIT TECHNIQUES EMPLOYED IN THE PROPOSED SNNIM

A. SYBIA Circuit

Each row in the SYBIA is a synaptic branch that can act as either an excitatory synapse or an inhibitory synapse. This attribute of the synaptic branch is controlled by its head-cell, which consists of the leftmost SRAM cell (6T-SRAM) together with the SWLC module (see Fig. 2). If the head-cell stores low on node QS (QS=0) and a spike-event occurs on the SWL, the synaptic branch behaves as an excitatory synapse and converts the SWL to SWL_c by clamping its low voltage to VP while SWL_d stays at GND. Otherwise (QS=1), the synaptic branch behaves as an inhibitory synapse and converts the SWL to SWL_d by clamping its high voltage to VN while SWL_c stays at VDD. VP and VN are generated by a biasing circuit in Vref Gen. Therefore, the current ($|I_c|$) flowing through the PMOS with VP as its gate voltage will be equal to

Fig.2 10T-SRAM bit-cell for SYBIA and the SWLC module with its operation waveform.
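The SWLC head-cell behavior shown in Fig. 2 can be summarized in a small behavioral model (a sketch assuming ideal clamping; the rail and clamp voltage values and the function name are illustrative, not from the paper):

```python
# Behavioral sketch of the SWLC head-cell (illustrative, ideal clamping).
# QS = 0 -> excitatory: spike routed to SWL_c, low level clamped to VP, SWL_d at GND.
# QS = 1 -> inhibitory: spike routed to SWL_d, high level clamped to VN, SWL_c at VDD.

VDD, GND = 0.9, 0.0      # illustrative supply rails
VP, VN = 0.3, 0.6        # illustrative clamp voltages from Vref Gen.

def swlc(qs: int, spike: bool):
    """Return the (SWL_c, SWL_d) levels for a given sign bit and spike."""
    if qs == 0:                       # excitatory branch
        swl_c = VP if spike else VDD  # active-low pulse, low level clamped to VP
        swl_d = GND                   # SWL_d keeps GND
    else:                             # inhibitory branch
        swl_c = VDD                   # SWL_c keeps VDD
        swl_d = VN if spike else GND  # active-high pulse, high level clamped to VN
    return swl_c, swl_d
```

With QS=0 a spike pulls SWL_c down to VP (turning on the PMOS path), while with QS=1 it raises SWL_d to VN (turning on the NMOS path), matching the waveform in Fig. 2.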

Fig. 3. The CIM methods in the CCA for SNN computing: the five phases (a)-(e), and (f) an example of Vmem updating and multi-spike (MS) output.

that of the NMOS with VN as its gate voltage. The input signal named Repolar_Rst enables the reset operation at the negative edge of the SWL, and it can also be used to disable the synaptic branch when the neuron works in a repolarization phase. Each 10T-SRAM cell has two extra computational embranchments: the cascade of MP0-MP1 transistors as an excitatory embranchment and the cascade of MN0-MN1 transistors as an inhibitory embranchment. Q connects to the gate of MN1 while QB connects to the gate of MP1, and MN0 and MP0 are controlled by SWL_d and SWL_c respectively. If the 10T-SRAM cell stores high on node Q (Q=1), one of the computational embranchments is activated by the assertion of SWL_c or SWL_d. Otherwise (Q=0), both computational embranchments in this 10T-SRAM cell are disabled. If the assertion time of SWL_c or SWL_d is $t_{swl}$, the charge added to or removed from the capacitor in the CCA can be expressed as:

$$\Delta = \mathrm{sign}(QS)\cdot |I_c|\cdot t_{swl}, \quad \mathrm{sign}(QS)=\begin{cases}-1, & QS=1\\ 1, & QS=0\end{cases}$$

B. The CCA and SMTA Circuit

The CIM methods in the CCA for SNN computing are shown in Fig. 3. Because the synaptic weight [3:0] (weight[4] is the sign-bit) is stored in the SYBIA from MSB to LSB, the corresponding capacitors connected to RBL_s[3:0] are $8C_0$, $4C_0$, $2C_0$ and $C_0$ respectively. On the contrary, Vmem0[3:0] and Vmth0[3:0] are stored in the SMTA from LSB to MSB, so the ratio of the set of capacitors on RBL_m[0:3] is 1:2:4:8. The two sets of capacitors, arranged in reverse order together with the extra $3C_0$ capacitors, compose a complementary computation array. As shown in Fig. 3, from computing the synaptic injection to the integration of Vmem and the comparison with Vmth, there are five operation phases. In phase (a), all RBL_m lines with their connected capacitors are pre-charged to VDD; then the Vmem0[3:0] stored in the SMTA is read out to pull the voltage of the corresponding capacitors to GND, or leave it unchanged, via RBL_m[3:0]. Then the state of the switches in the CCA is changed
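The per-cell charge injection $\Delta = \mathrm{sign}(QS)\cdot|I_c|\cdot t_{swl}$ from Section III-A can be sketched as a tiny behavioral model (an idealization that assumes a constant clamped current and ignores channel-length modulation; the function name is illustrative):

```python
# Behavioral sketch of one SYBIA computational embranchment (illustrative).
# QS = 1 -> inhibitory (discharge), QS = 0 -> excitatory (charge),
# per Delta = sign(QS) * |Ic| * t_swl in the text.

def branch_delta(qs: int, q: int, i_c: float, t_swl: float) -> float:
    """Charge injected onto the CCA capacitor by one 10T-SRAM cell.

    qs    : sign bit stored in the head-cell (1 = inhibitory, 0 = excitatory)
    q     : weight bit stored in the 10T cell (Q=0 disables both embranchments)
    i_c   : magnitude of the clamped branch current |Ic|
    t_swl : assertion time of SWL_c / SWL_d
    """
    if q == 0:                       # Q=0: both embranchments disabled
        return 0.0
    sign = -1.0 if qs == 1 else 1.0  # sign(QS)
    return sign * i_c * t_swl        # Delta = sign(QS) * |Ic| * t_swl

# Example: excitatory branch with the weight bit set injects positive charge.
delta = branch_delta(qs=0, q=1, i_c=1e-6, t_swl=2e-9)
```

The sign flip between the MP and MN cascades is what lets the same bit-cell implement both excitatory and inhibitory weights on a shared bit-line capacitor.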

Table I. Comparison with previous SNN works

| | TrueNorth [1] | Loihi [2] | Tianjic [3] | ROLLS [4] | IMPULSE [5] | This work |
|---|---|---|---|---|---|---|
| Technology node | 28nm | 14nm | 28nm | 180nm | 65nm | 28nm |
| Neuron type | Digital | Digital | Digital | Analog | CIM (Digital) | CIM (Mix-signal) |
| Basal neuron model | G.I&F | G.I&F | I&F+ANN | G.I&F | G.I&F | G.I&F |
| Data width | 9W-1S | 9W-1S | 8W-8A/1S | N.A. | 6W-1S | 5W-1S/MS |
| Area (um²)/Neuron | 4.3e6 | 457.88 | 360 | 2.008e5 | 4.02e3 | 100.6-201.25 |
| Clock frequency | Async. | Async. | 300M | N.A. | 200M | Async. |
| Accuracy | 98.22% (MNIST) | N.A. | 96% | N.A. | 98.96% (NMNIST) | 95.83% |
| Energy efficiency (TSyOPS/W) | 0.4 [9] | N.A. | 0.649 | N.A. | N.A. | 25.18 |

Fig. 4. Energy breakdown of the proposed SNNIM group among the SYBIA, CCA&SMTA, Timing & Ctrl, Comparator and Vref Gen. blocks (reported shares: 44.15%, 29.40%, 14.70%, 11.14% and 0.61%).

in phase (b), as shown in Fig. 3(b); the charge-sharing then happens and $U_0$ can be expressed as follows:

$$U_0 = \frac{8\delta_m^3 + 4\delta_m^2 + 2\delta_m^1 + 1\cdot\delta_m^0}{36}, \quad \delta_m^k = \delta(Q_m^k)=\begin{cases}V_{DD}, & Q_m^k=1\\ 0, & Q_m^k=0\end{cases} \tag{1}$$

$Q_m^k$ is the value stored in the 8T-SRAM cell corresponding to Vmem0[k] (k=0,1,2,3). After the charge-sharing, each RBL_s becomes independent and holds the voltage $U_0$, as shown in Fig. 3(c). In phase (c), the N synaptic branches are activated and $\Delta^k$ is integrated with $U_0$ on RBL_s[k]. The accumulated electric charge on RBL_s[k] can now be expressed as $E_0^k$:

$$E_0^k = \Delta^k + U_0\cdot 9C_0$$

In phase (d), the top set of capacitors disconnects from the bottom capacitors, completing the computation of the input synaptic injection. Meanwhile, the bottom set of capacitors develops Vmth0 as phase (a) does. The accumulated electric charge on RBL_s[k] becomes $E_1^k$:

$$E_1^k = \frac{2^k}{9}\cdot\left(\Delta^k + U_0\cdot 9C_0\right)$$

In the last phase (e), shown in Fig. 3(e), the charge-sharing happens synchronously on the two sets of capacitors, and the final Vmem and Vmth are determined:

$$V_{mem} = \frac{1}{15}\left(\frac{8\Delta^3}{9C_0} + \frac{4\Delta^2}{9C_0} + \frac{2\Delta^1}{9C_0} + \frac{1\cdot\Delta^0}{9C_0}\right) + U_0 \tag{2}$$

$$V_{mth} = \frac{8\delta_t^3 + 4\delta_t^2 + 2\delta_t^1 + 1\cdot\delta_t^0}{15}, \quad \delta_t^k = \delta(Q_t^k)=\begin{cases}V_{DD}, & Q_t^k=1\\ 0, & Q_t^k=0\end{cases} \tag{3}$$

$Q_t^k$ is the value stored in the 8T-SRAM cell corresponding to Vmth0[k]. Finally, Vmem and Vmth are sent to a simple comparator. If Vmem is higher than Vmth, the comparator delivers a flag to the Spike-Output Ctrl module. With this flag, the Spike-Output Ctrl module fires out a spike and controls a bias branch in the SYBIA to discharge Vmem through RBL_s[3:0] (subtracting $\Delta V_{disc}$). As shown in Fig. 3(f), the higher the Vmem held by the neuron, the more spikes are fired.

IV. EXPERIMENT RESULTS

A 10880b 10T-SRAM bit-cell based SNNIM macro was implemented in a 28nm CMOS technology node. Fig. 4 presents the energy breakdown of the proposed SNNIM group with one input spike in SNN mode. Owing to the compact design of the synaptic and soma arrays, within the five phases of SNN computing, the energy of the SYBIAs for all groups in the SNNIM macro with one spike-event (16×16 synapses activated by the single spike) was 0.349 pJ. Due to the sharing of the CCA in one SNNIM group, the power of initialization and reset for the membrane potential can be reduced by about 25%. Although the channel-length modulation (CLM) of MN0 and MP0 may disturb the computational linearity of the synaptic injection, as can be observed in Fig. 3(f), this can be greatly improved by setting Vmth below the nonlinear interval with the proposed SMTA bias circuits.

Table I presents the comparison between the SNNIM and existing SNN-based works [1-5]. Compared with a digital-type neuron design [3], 38× better energy efficiency is achieved owing to the SNNIM structure. Compared with a pure analog-type neuron design [4], this work (201.25 um²/neuron) achieves over 997× better neuron density. The benefits mainly arise from the proposed SYBIA and the employment of 10T-SRAM cells, which enable larger data parallelism. With a low-cost transition from an ANN (only one hidden layer) to the SNNIM structure in simulation, the SNN achieves an accuracy of 95.83% on MNIST without on-line training.

V. CONCLUSION

A 10T-SRAM based Spiking-Neural-Network-In-Memory architecture is implemented and analyzed in this paper. The critical circuit techniques, including the SYBIA, CCA and SMTA circuits, are proposed. The SYBIA, SMTA and CCA are used to construct a CIM array and perform SNN operations: the SYBIA realizes the function of configurable synaptic injection, the SMTA develops the pre-stored Vmem and Vmth on the CCA, and finally the CCA integrates them as a G.I&F model does. Thanks to the CIM structure, the area of the SNN neuron can be further compressed, and the computing power consumption of the SNN, which mainly concentrates on RBL_s, RBL_m and the CCA, can be significantly decreased. The SNNIM achieves a computing energy efficiency of 25.18 TSyOPS/W (tera synaptic operations per second per watt) with 5b synaptic weights and one input spike.
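As a numerical cross-check of the five-phase computation in Section III-B, equations (1)-(3) and the multi-spike firing of Fig. 3(f) can be modeled behaviorally (an ideal-capacitor sketch; the constant values and function names are illustrative, not from the paper):

```python
# Behavioral sketch of the G.I&F computation, eqs. (1)-(3) (illustrative).

VDD = 0.9    # supply voltage (illustrative value)
C0 = 1e-15   # unit capacitance (illustrative value)

def u0(q_m):
    """Eq. (1): voltage after charge-sharing of the stored Vmem0 bits.
    q_m = [Q_m^0, ..., Q_m^3], each 0 or 1 (delta(Q) = VDD or 0)."""
    return sum((2 ** k) * q_m[k] for k in range(4)) * VDD / 36.0

def v_mth(q_t):
    """Eq. (3): threshold developed from the stored Vmth0 bits."""
    return sum((2 ** k) * q_t[k] for k in range(4)) * VDD / 15.0

def v_mem(deltas, q_m):
    """Eq. (2): membrane potential after synaptic injection.
    deltas = [Delta^0, ..., Delta^3], injected charge on each RBL_s[k]."""
    inj = sum((2 ** k) * deltas[k] / (9.0 * C0) for k in range(4))
    return inj / 15.0 + u0(q_m)

def fire(vmem, vmth, dv_disc):
    """Multi-spike output: each fired spike discharges Vmem by dv_disc."""
    spikes = 0
    while vmem > vmth:
        spikes += 1
        vmem -= dv_disc
    return spikes, vmem

# Sanity checks: all threshold bits set gives Vmth = VDD, and the macro
# capacity quoted in Section IV matches 136 rows x 5 b x 16 groups.
assert abs(v_mth([1, 1, 1, 1]) - VDD) < 1e-12
assert 136 * 5 * 16 == 10880
```

With zero injected charge, `v_mem` reduces to the charge-shared `u0`, and a Vmem well above Vmth yields several spikes in a row, mirroring the multi-spike waveform of Fig. 3(f).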

ACKNOWLEDGEMENTS
This work was supported in part by the National
Natural Science Foundation of China under Grant
61834002.

REFERENCES

[1] P. A. Merolla et al., "A million spiking-neuron integrated circuit


with a scalable communication network and interface," Science,
2014, vol. 345, no. 6197, pp. 668–673.
[2] M. Davies et al., "Loihi: A neuromorphic manycore processor with
on-chip learning," IEEE Micro, vol. 38, no. 1, pp. 82–99, 2018.
[3] L. Deng et al., "Tianjic: A Unified and Scalable Chip Bridging
Spike-Based and Continuous Neural Computation," IEEE Journal
of Solid-State Circuits, vol. 55, no. 8, pp. 2228-2246, Aug. 2020.
[4] N. Qiao et al., "A reconfigurable on-line learning spiking
neuromorphic processor comprising 256 neurons and 128K
synapses," Frontiers in Neuroscience, 29 April 2015.
[5] A. Agrawal, M. Ali, M. Koo, N. Rathi, A. Jaiswal and K. Roy,
"IMPULSE: A 65-nm Digital Compute-in-Memory Macro With
Fused Weights and Membrane Potential for Spike-Based Sequential
Learning Tasks," in IEEE Solid-State Circuits Letters, vol. 4, pp.
137-140, 2021, doi: 10.1109/LSSC.2021.3092727.
[6] X. Si et al., "24.5 A Twin-8T SRAM Computation-In-Memory
Macro for Multiple-Bit CNN-Based Machine Learning," 2019
IEEE International Solid- State Circuits Conference - (ISSCC), San
Francisco, CA, 2019, pp. 396-398.
[7] J. Yang et al., "Sandwich-RAM: An Energy-Efficient In-Memory
BWN Architecture with Pulse-Width Modulation," 2019 IEEE
International Solid-State Circuits Conference - (ISSCC), San
Francisco, CA, 2019, pp. 394–396.
[8] X. Si et al., "15.5 A 28nm 64Kb 6T SRAM Computing-in-Memory
Macro with 8b MAC Operation for AI Edge Chips," 2020 IEEE
International Solid- State Circuits Conference - (ISSCC), San
Francisco, CA, 2020, pp. 246-248.
[9] F. Akopyan et al., "TrueNorth: Design and tool flow of a 65 mW 1
million neuron programmable neurosynaptic chip," IEEE Trans.
Comput.- Aided Design Integr. Circuits Syst., vol. 34, no. 10, pp.
1537–1557, Oct. 2015.
