15 VLSI Implementations of Neural Networks
15.1 Introduction
In the previous chapters of this book we presented a broad exposition of neural networks, describing a variety of algorithms for implementing supervised and unsupervised learning paradigms. In the final analysis, however, neural networks can only gain acceptance as tools for solving engineering problems such as pattern classification, modeling, signal processing, and control in one of two ways:
• Compared to conventional methods, the use of a neural network makes a significant difference in the performance of a system for a real-world application, or else it provides a significant reduction in the cost of implementation without compromising performance.
• Through the use of a neural network, we are able to solve a difficult problem for which there is no other solution.

Given that we have a viable solution to an engineering problem based on a neural network approach, we need to take the next step: build the neural network in hardware, and embed the piece of hardware in its working environment. It is only when we have a working model of the system that we can justifiably say we fully understand it. The key question that arises at this point in the discussion is: What is the most cost-effective medium for the hardware implementation of a neural network? A fully digital approach that comes to mind is to use a RISC processor; RISC is the acronym for reduced instruction set computer (Cocke and Markstein, 1990). Such a processor is designed to execute a small number of simple instructions, preferably one instruction for every cycle of the computer clock. Indeed, because of the very high speed of modern-day RISC processors, their use for the emulation of neural networks is probably fast enough for some applications. However, for certain complex applications such as speech recognition and optical character recognition, a level of performance is required that is not attainable with existing RISC processors, certainly within the cost limitations of the proposed applications (Hammerstrom, 1992). Also, there are many situations such as process control, adaptive beamforming, and adaptive noise cancellation where the required speed of learning is much too fast for standard processors. To meet the computational requirements of the complex applications and highly demanding situations described here, we may have to resort to the use of very-large-scale integrated (VLSI) circuits, a rapidly developing technology that provides an ideal medium for the hardware implementation of neural networks.

In the use of VLSI technology, we have the capability of fabricating integrated circuits with tens of millions of transistors on a single silicon chip, and it is highly likely that this number will be increased by two orders of magnitude before reaching the fundamental limits of the technology imposed by the laws of physics (Hoeneisen and Mead, 1972; Keyes, 1987). We thus find that VLSI technology is well matched to neural networks for two principal reasons (Boser et al., 1992):
1. The high functional density achievable with VLSI technology permits the implementation of a large number of identical, concurrently operating neurons on a single chip, thereby making it possible to exploit the inherent parallelism of neural networks.
2. The regular topology of neural networks and the relatively small number of well-defined arithmetic operations involved in their learning algorithms greatly simplify the design and layout of VLSI circuits.

Accordingly, we find that there is a great deal of research effort devoted worldwide to VLSI implementations of neural networks on many fronts. Today, there are general-purpose chips available for the construction of multilayer perceptrons, Boltzmann machines, mean-field-theory machines, and self-organizing neural networks. Moreover, various special-purpose chips have been developed for specific information-processing functions.

VLSI technology not only provides the medium for the implementation of complex information-processing functions that are neurobiologically inspired, but also can be seen to serve a complementary and inseparable role as a synthetic element to build test beds for postulates of neural organization (Mead, 1989). The successful use of VLSI technology to create a bridge between neurobiology and information sciences will have the following beneficial effects: deeper understanding of information processing, and novel methods for solving engineering problems that are intractable by traditional computer techniques (Mead, 1989). The interaction between neurobiology and information sciences via the silicon medium may also influence the very art of electronics and VLSI technology itself by having to solve new challenges posed by the interaction.

With all these positive attributes of VLSI technology, it is befitting that we devote this final chapter of the book to its use as the medium for hardware implementations of neural networks. The discussion will, however, be at an introductory level.¹
Organization of the Chapter
The material of the chapter is organized as follows. In Section 15.2 we discuss the basic design considerations involved in the VLSI implementation of neural networks. In Section 15.3 we categorize VLSI implementations of neural networks into analog, digital, and hybrid methods. Then, in Section 15.4 we describe commercially available general-purpose and special-purpose chips for hardware implementations of neural networks. Section 15.5 on concluding remarks completes the chapter and the book.
15.2 Major Design Considerations
The incredible functional density, ease of use, and low cost of industrial CMOS (complementary metal-oxide-silicon) transistors make CMOS technology the technology of choice for VLSI implementations of neural networks (Mead, 1989). Regardless of whether we are considering the development of general-purpose or special-purpose chips for neural networks, there are a number of major design issues that would have to be considered in the use of this technology. Specifically, we may identify the following items (Hammerstrom, 1992).

¹ For detailed treatment of analog VLSI systems, with emphasis on neuromorphic networks, see the book by Mead (1989). For specialized aspects of the subject, see the March 1991, May 1992, and May 1993 Special Issues of the IEEE Transactions on Neural Networks. The report by Andreou (1992) provides an overview of analog VLSI systems with emphasis on circuit models of neurons, synapses, and neuromorphic functions.
1. Sum-of-Products Computation. This is a functional requirement common to the operation of all neurons. It involves multiplying each element of an activation pattern (data vector) by an appropriate weight, and then summing the weighted inputs, as described in the standard equation

    v_j = \sum_{i=1}^{p} w_{ji} x_i          (15.1)

where w_{ji} is the weight of synapse i belonging to neuron j, x_i is the input applied to the ith synapse, p is the number of synapses, and v_j is the resulting activation potential of neuron j.
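To make the requirement concrete, Eq. (15.1) can be expressed in a few lines of software. The sketch below is purely illustrative; the name sum_of_products and its arguments are our own, not from the text, and a VLSI implementation would realize the same multiply-accumulate operation in dedicated parallel hardware rather than in a sequential loop.

    # Illustrative sketch of Eq. (15.1): v_j = sum over i of w_ji * x_i.
    # Names (sum_of_products, weights, inputs) are ours, for illustration only.
    def sum_of_products(weights, inputs):
        """Return the activation potential v_j of one neuron j,
        given its p synaptic weights w_ji and the p inputs x_i."""
        assert len(weights) == len(inputs), "one weight per synapse"
        v_j = 0.0
        for w_ji, x_i in zip(weights, inputs):
            v_j += w_ji * x_i
        return v_j

    # Example: a neuron with p = 3 synapses.
    print(sum_of_products([0.5, -1.0, 2.0], [1.0, 0.5, 0.25]))  # prints 0.5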
2. Data Representation. Generally speaking, neural networks have low-precision requirements, the exact specification of which is algorithm and application dependent.
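As one concrete, purely illustrative reading of this requirement, the sketch below quantizes real-valued weights to a small fixed-point word length. The function name quantize_weight, the 8-bit default, and the assumed weight range of [-1, 1) are our assumptions for the example, not specifications from the text.

    # Illustrative sketch: fixed-point quantization of a synaptic weight.
    # The bit width (8) and the weight range [-w_max, w_max) are assumed values.
    def quantize_weight(w, bits=8, w_max=1.0):
        """Map a real-valued weight to the nearest representable fixed-point level."""
        levels = 2 ** (bits - 1)              # signed representation
        q = round(w / w_max * levels)         # integer code
        q = max(-levels, min(levels - 1, q))  # saturate to the representable range
        return q * w_max / levels             # back to a real value for comparison

    print(quantize_weight(0.4037))           # 0.40625 with 8 bits
    print(quantize_weight(0.4037, bits=4))   # coarser: 0.375 with 4 bits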
3. Output Computation. The most common form of activation function at the output of a neuron is a smooth nonlinear function such as the sigmoid function described by the logistic function,

    \varphi(v_j) = \frac{1}{1 + \exp(-v_j)}          (15.2)

or the hyperbolic tangent,

    \varphi(v_j) = \tanh(v_j/2) = \frac{1 - \exp(-v_j)}{1 + \exp(-v_j)}          (15.3)

These two forms of the sigmoidal activation function are linearly related to each other; see Chapter 6. Occasionally, the threshold function

    \varphi(v_j) = \begin{cases} 1, & v_j > 0 \\ 0, & v_j < 0 \end{cases}          (15.4)

is considered to be sufficient.
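As a point of reference, Eqs. (15.2) through (15.4) can be written directly in software. The sketch below only illustrates the mathematics; the function names are ours, and nothing here describes how a given chip would realize the nonlinearity.

    import math

    # Illustrative sketches of Eqs. (15.2)-(15.4); function names are ours.
    def logistic(v_j):
        """Logistic sigmoid, Eq. (15.2)."""
        return 1.0 / (1.0 + math.exp(-v_j))

    def hyperbolic_tangent(v_j):
        """Hyperbolic tangent form, Eq. (15.3): tanh(v_j/2) = 2*logistic(v_j) - 1."""
        return math.tanh(v_j / 2.0)

    def threshold(v_j):
        """Threshold (hard-limiting) function, Eq. (15.4)."""
        return 1.0 if v_j > 0 else 0.0

    v = 0.5
    print(logistic(v), hyperbolic_tangent(v), threshold(v))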
4. Learning Complexity. Each learning algorithm has computational requirements of its own. Several popular learning algorithms rely on the use of local computations for making modifications to the synaptic weights of a neural network; this is a highly desirable feature from an implementation point of view. Some other algorithms have additional requirements, such as the back-propagation of error terms through the network, which imposes an additional burden on the implementation of the neural network, as in the case of a multilayer perceptron trained with the back-propagation algorithm.
5. Weight Storage. This requirement refers to the need to store the “old” values of synaptic weights of a neural network. The “new” values of the weights are computed by using the changes computed by the learning algorithm to update the old values.
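A minimal sketch of this bookkeeping is given below, assuming the weights are simply held in an array. The names update_weights, old_w, and changes are ours for illustration, and nothing here speaks to the physical storage medium used on a particular chip.

    # Illustrative sketch of the weight-storage requirement: keep the "old" weights,
    # apply the changes delivered by the learning algorithm, obtain the "new" weights.
    def update_weights(weights, deltas):
        """Return new weights: w_ji(new) = w_ji(old) + delta_w_ji."""
        assert len(weights) == len(deltas), "one change per stored weight"
        return [w + dw for w, dw in zip(weights, deltas)]

    old_w = [0.5, -1.0, 2.0]        # stored "old" values
    changes = [0.25, -0.5, 0.0]     # changes computed by the learning algorithm
    new_w = update_weights(old_w, changes)
    print(new_w)                    # [0.75, -1.5, 2.0]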
6. Communications. Metal is expensive in terms of silicon area, which leads to significant inefficiencies if bandwidth utilization of communication (connectivity) links among neurons is low. Connectivity is perhaps one of the most serious constraints imposed on the fabrication of a silicon chip, particularly as we scale up analog or digital technology to very large neural networks. Indeed, significant innovation in communication schemes is necessary if we are to implement very large neural networks on silicon chips efficiently. The paper by Bailey and Hammerstrom (1988) discusses the fundamental issues involved in the connectivity problem with the VLSI implementation of neural networks in mind;
