MOTIVATION:
The Extreme Learning Machine (ELM) is an efficient analytical technique for solving classification and
regression tasks, and ELM techniques are used in face recognition algorithms. However, ELM networks are
difficult to implement in hardware due to the all-to-all connectivity between the input and hidden
layer neurons. This increases the hardware resource requirement, which grows further as the
hidden layer size increases.
The concept of using receptive fields (RF) for classification tasks originates from biology, in which
sensory neurons respond to a limited spatial range of the input stimulus. Incorporating this methodology
into a classification system improves performance.
Since SRAM has a smaller area than logic gates, this implementation is efficient in terms of
hardware resources.
LITERATURE SURVEY:
Output Weight Block: The schematic of the output weight block comprises a splitter
circuit in which the MR and the two M2R transistors form an R-2R network; this unit is
repeated 13 times in the block. The octave splitter is
terminated with a single MR transistor.
The output from the hidden neuron block, Ihid, is the input
current for the output weight block.
Ihid is divided successively to form a geometrically spaced
series of smaller currents.
There are a total of N stages in the splitter circuit; the
current at the k-th stage is given by Ihid / 2^k.
The master bias voltage Vgbias is the reference voltage for the p-FET gates in the splitter.
Two transistor switches in the lower half of the circuit route each branch current
either to the useful current, Igood, or to a current that is dumped to ground, Idump.
Igood is mirrored to generate a current, Iout, which is further routed to either
Ipos (positive current) or Ineg (negative current), as determined by the signW signal.
The signW signal, stored in a flip-flop, indicates the polarity of the output weight
connected between the hidden neuron and the output neuron.
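The splitter behaviour described above can be sketched in software. The following is an illustrative model, not the actual circuit: function and variable names are my own, and it assumes each R-2R stage halves the remaining current while a per-stage weight bit steers the branch current to Igood or Idump.

```python
def splitter_output(i_hid, weight_bits):
    """Model of an N-stage binary current splitter (illustrative names).

    weight_bits[k] == 1 routes the branch current of stage k+1
    (i_hid / 2^(k+1)) to the useful output i_good; 0 dumps it to i_dump.
    """
    i_good = 0.0
    i_dump = 0.0
    remaining = i_hid
    for bit in weight_bits:
        branch = remaining / 2.0   # current at this stage: i_hid / 2^k
        remaining -= branch        # the rest flows on to the next stage
        if bit:
            i_good += branch
        else:
            i_dump += branch
    return i_good, i_dump

# A 3-bit weight 0.101b scales the input current by 0.625.
i_good, i_dump = splitter_output(1.0, [1, 0, 1])
```

The sign of the weight is not part of this division: as described above, signW is applied afterwards when Iout is steered to Ipos or Ineg.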
[4] A neuromorphic hardware framework based on population coding: ref-9
Biological neurons encode input stimuli such as motion, position, colour, and sound
into neuronal firing. The encoded information is represented by a set of neurons in a
collective and distributed manner, referred to as population coding.
A Trainable Analogue Block (TAB) is designed that encodes a given input stimulus using a
large population of neurons with heterogeneous tuning-curve profiles.
Heterogeneity of the tuning curves is achieved using random device mismatch and by
adding a systematic offset to each hidden neuron.
[6] Fast, simple and accurate handwritten digit classification by training shallow neural network
classifiers with the `extreme learning machine' algorithm: ref-11
The main innovation is to ensure that each hidden unit operates only on a randomly sized
and positioned patch of each image.
This form of random `receptive field' sampling of the input ensures that the input weight
matrix is sparse, with about 90% of the weights equal to zero.
The algorithm for generating these `receptive fields' is as follows:
Generate a random input weight matrix W.
For each of the M hidden units, select two pairs of distinct random integers from {1, 2, ..., √L} to
form the coordinates of a rectangular mask.
If any mask has a total area smaller than some value q, discard it and repeat.
Set the entries of a √L × √L square matrix that fall within the rectangle defined by the two
pairs of integers to 1, and all other entries to 0.
Flatten each receptive-field matrix into a length-L vector in which each entry corresponds
to the same pixel as the corresponding entry in the data vectors Xtest or Xtrain.
Concatenate the resulting M vectors into a receptive-field matrix F of size M × L.
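The steps above can be sketched as follows. This is a minimal NumPy rendering of the recipe; function and parameter names are chosen for illustration and are not from the cited paper.

```python
import numpy as np

def receptive_field_matrix(M, L, q, rng=None):
    """Build the sparse M x L receptive-field mask matrix F.

    For each of M hidden units, pick two pairs of random integers that
    define a rectangle inside a sqrt(L) x sqrt(L) image, reject masks
    with area below q, and flatten each mask into a row of F.
    """
    rng = rng or np.random.default_rng(0)
    side = int(np.sqrt(L))
    assert side * side == L, "L must be a perfect square"
    F = np.zeros((M, L))
    for m in range(M):
        while True:
            r1, r2 = sorted(rng.choice(side, size=2, replace=False))
            c1, c2 = sorted(rng.choice(side, size=2, replace=False))
            if (r2 - r1 + 1) * (c2 - c1 + 1) >= q:
                break  # mask is large enough; otherwise redraw
        mask = np.zeros((side, side))
        mask[r1:r2 + 1, c1:c2 + 1] = 1.0
        F[m] = mask.ravel()   # row-major flattening matches image pixels
    return F

# e.g. 100 hidden units over 28x28 images (L = 784), minimum mask area 10
F = receptive_field_matrix(M=100, L=784, q=10)
```

Masking the random input weight matrix W elementwise with F then yields the sparse input weights, with most entries forced to zero.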
[7] Gradient-Based Learning Applied to Document Recognition:
Given an appropriate network architecture, gradient-based learning algorithms can be
used to classify high-dimensional patterns, such as handwritten characters, with minimal
pre-processing.
This paper reviews various methods applied to handwritten character recognition and
compares them on a standard handwritten digit recognition task.
[8] Convolutional neural networks are specifically designed to deal with the variability of
two-dimensional (2-D) shapes.
[9] A neuromorphic hardware architecture using the Neural Engineering
Framework for pattern recognition: ref-13
System Topology: The inputs are the pixels; they are connected to a
higher-dimensional hidden layer with 8k neurons using randomly weighted connections.
The output layer consists of linear neurons, and the output-layer weights are
solved analytically using the pseudo-inverse operation.
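The analytical solve can be sketched in a few lines, assuming the usual formulation (fixed random input weights, least-squares output weights via the Moore-Penrose pseudo-inverse). Sizes here are toy values, not the 8k-neuron layer of the paper, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))            # toy "pixel" inputs
W_in = rng.standard_normal((64, 256)) / 8.0   # fixed random input weights,
                                              # scaled to keep tanh unsaturated
H = np.tanh(X @ W_in)                         # hidden-layer activations
T = np.eye(10)[rng.integers(0, 10, 200)]      # one-hot class targets

W_out = np.linalg.pinv(H) @ T                 # closed-form least-squares solve
scores = H @ W_out                            # linear output neurons
```

Because only the output layer is trained, the whole "learning" step is this single pseudo-inverse, which is what makes the approach attractive for hardware with fixed random hidden connections.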
[10] Human face recognition based on multidimensional PCA and extreme learning machine:
A new human face recognition algorithm based on bidirectional two-dimensional
principal component analysis (B2DPCA) and extreme learning machine (ELM) is introduced.
Images from each database are converted into gray-level images.
The curvelet transform is used to generate initial feature vectors; the subband that
exhibits the highest standard deviation is selected as the initial feature vector
of size U × V.
B2DPCA is used to generate unique feature sets and to minimize computational
complexity.
Dimensionally reduced curvelet feature sets are randomly selected for training an
ELM; the remaining features of the same dataset are used to evaluate the learned classifier.
[11] A compact neural core for digital implementation of
the Neural Engineering Framework:
The digital neural core consists of 64 neurons that
are instantiated by a single physical neuron using a
time-multiplexing approach.
Inputs A and B are 9 bits wide and the output result
is 18 bits wide.
The result is obtained after four clock cycles.
The input stimulus is reconstructed by summing the results of all 64 virtual neurons.
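As a rough behavioural sketch of the time-multiplexing idea (not the actual RTL, and the neuron arithmetic below is a placeholder), one physical compute unit is reused sequentially for all 64 virtual neurons, and their results are summed:

```python
def physical_neuron(a, b, weight):
    """Placeholder for the shared neuron datapath; the real core takes
    9-bit inputs and produces an 18-bit result after four clock cycles."""
    return (a * weight + b) & (2**18 - 1)  # keep the result to 18 bits

def core_output(a, b, weights):
    """Evaluate 64 virtual neurons on one physical neuron and sum them."""
    total = 0
    for w in weights:          # one virtual neuron per iteration
        total += physical_neuron(a, b, w)
    return total

out = core_output(3, 5, list(range(64)))
```

The trade-off is straightforward: one datapath instead of 64 saves area, at the cost of 64 sequential evaluations per input.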
Tools Used:
Quartus II 13.0sp1
ModelSim-Altera 10.1d
Timeline Chart:
05.09.2019 – 30.11.2019 Implementation of Existing Architecture
01.12.2019 – 22.03.2020 Implementation of Modified Architecture
23.03.2020 – 31.01.2020 Documentation