Speech Recognition Using Neural Networks

The modern study of neural networks actually began in the 19th century, when neurobiologists first began extensive studies of the human nervous system. Cajal (1892) determined that the nervous system consists of discrete neurons, which communicate with each other by sending electrical signals down their long axons, which ultimately branch out and touch the dendrites (receptive areas) of thousands of other neurons, transmitting the electrical signals through synapses (points of contact, with variable resistance). This basic picture was elaborated on in the following decades, as different kinds of neurons were identified, their electrical responses were analyzed, and their patterns of connectivity and the brain's gross functional areas were mapped out. While neurobiologists found it relatively easy to study the functionality of individual neurons (and to map out the brain's gross functional areas), it was extremely difficult to determine how neurons worked together to achieve high-level functionality, such as perception and cognition. With the advent of high-speed computers, however, it finally became possible to build working models of neural systems, allowing researchers to freely experiment with such systems and better understand their behavior.

McCulloch and Pitts (1943) proposed the first computational model of a neuron, namely the binary threshold unit, whose output was either 0 or 1 depending on whether its net input exceeded a given threshold. This model caused a great deal of excitement, for it was shown that a system of such neurons, assembled into a finite state automaton, could compute any arbitrary function, given suitable values of weights between the neurons (see Minsky 1967). Researchers soon began searching for learning procedures that would automatically find the values of weights enabling such a network to compute any specific function. Rosenblatt (1962) discovered an iterative learning procedure for a particular type of network, the single-layer perceptron, and he proved that this learning procedure always converged to a set of weights that produced the desired function, as long as the desired function was potentially computable by the network. This discovery caused another great wave of excitement, as many AI researchers imagined that the goal of machine intelligence was within reach.
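The two ideas above can be sketched in a few lines of code. The following is a minimal, illustrative implementation (all function names and parameter values are our own, not from the original sources): a McCulloch-Pitts binary threshold unit, plus Rosenblatt's iterative weight-update rule, trained here on logical AND, which is linearly separable and therefore covered by his convergence proof.

```python
# Sketch of a McCulloch-Pitts binary threshold unit and the Rosenblatt
# perceptron learning rule; names and constants are illustrative.

def threshold_unit(inputs, weights, threshold):
    """Output 1 if the net input exceeds the threshold, else 0."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net > threshold else 0

def train_perceptron(samples, lr=0.1, epochs=100):
    """Rosenblatt's iterative rule: nudge each weight toward the error.
    The threshold is learned as a bias via a constant extra input of 1."""
    n = len(samples[0][0])
    weights = [0.0] * (n + 1)          # last weight acts as a bias term
    for _ in range(epochs):
        converged = True
        for inputs, target in samples:
            out = threshold_unit(list(inputs) + [1.0], weights, 0.0)
            error = target - out
            if error != 0:
                converged = False
                for i, x in enumerate(list(inputs) + [1.0]):
                    weights[i] += lr * error * x
        if converged:                  # a full error-free pass: done
            break
    return weights

# Logical AND is linearly separable, so the rule is guaranteed to converge.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_samples)
```

The guarantee Rosenblatt proved applies only when some weight vector can separate the two output classes with a single hyperplane, a condition AND satisfies.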

3. Review of Neural Networks


However, in a rigorous analysis, Minsky and Papert (1969) showed that the set of functions potentially computable by a single-layer perceptron is actually quite limited, and they expressed pessimism about the potential of multi-layer perceptrons as well; as a direct result, funding for connectionist research suddenly dried up, and the field lay dormant for 15 years.
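The classic counterexample behind this limitation is exclusive-or (XOR): no single line separates its positive cases from its negative ones, so no single-layer perceptron can compute it. The sketch below (our own illustration, reusing a simple Rosenblatt-style training loop) demonstrates that however long the perceptron trains, it never classifies all four XOR cases correctly.

```python
# XOR is not linearly separable, so a single-layer perceptron cannot
# compute it; illustrative sketch of that failure.

def predict(inputs, weights):
    """Threshold unit with a bias folded in as a constant input of 1."""
    net = sum(x * w for x, w in zip(inputs + (1.0,), weights))
    return 1 if net > 0 else 0

xor_samples = [((0.0, 0.0), 0), ((0.0, 1.0), 1),
               ((1.0, 0.0), 1), ((1.0, 1.0), 0)]
weights = [0.0, 0.0, 0.0]
for _ in range(1000):                  # far more passes than AND required
    for inputs, target in xor_samples:
        error = target - predict(inputs, weights)
        for i, x in enumerate(inputs + (1.0,)):
            weights[i] += 0.1 * error * x

# No matter how long we train, at most 3 of the 4 cases come out right.
correct = sum(predict(x, weights) == t for x, t in xor_samples)
```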

Interest in neural networks was gradually revived when Hopfield (1982) suggested that a network can be analyzed in terms of an energy function, triggering the development of the Boltzmann Machine (Ackley, Hinton, & Sejnowski 1985), a stochastic network that could be trained to produce any kind of desired behavior, from arbitrary pattern mapping to pattern completion. Soon thereafter, Rumelhart et al. (1986) popularized a much faster learning procedure called backpropagation, which could train a multi-layer perceptron to compute any desired function, showing that Minsky and Papert's earlier pessimism was unfounded. With the advent of backpropagation, neural networks have enjoyed a third wave of popularity, and have now found many useful applications.
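A minimal sketch of backpropagation makes the contrast with the single-layer case concrete: a small multi-layer perceptron can be trained by gradient descent on XOR, the very function a single-layer perceptron cannot compute. The network size (2 inputs, 3 hidden units, 1 output), learning rate, and epoch count below are illustrative choices, not values from Rumelhart et al.; with these settings the squared error typically falls close to zero, though any single random initialization can in principle stall.

```python
# Backpropagation sketch: a 2-3-1 multi-layer perceptron with sigmoid
# units, trained on XOR by per-sample gradient descent.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: 3 units, each with 2 input weights plus a bias.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
# Output unit: 3 hidden weights plus a bias.
w_out = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)) + w_out[3])
    return h, y

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def epoch(lr=0.5):
    """One pass over the data; returns the total squared error."""
    total = 0.0
    for x, t in xor:
        h, y = forward(x)
        total += (t - y) ** 2
        # Backward pass: chain rule through the sigmoid nonlinearities.
        d_y = (y - t) * y * (1 - y)
        for j in range(3):
            d_h = d_y * w_out[j] * h[j] * (1 - h[j])   # before updating w_out[j]
            w_out[j] -= lr * d_y * h[j]
            w_hidden[j][0] -= lr * d_h * x[0]
            w_hidden[j][1] -= lr * d_h * x[1]
            w_hidden[j][2] -= lr * d_h
        w_out[3] -= lr * d_y
    return total

first = epoch()
for _ in range(5000):
    last = epoch()
# The error after training is lower than after the first pass.
```

The key difference from the perceptron rule is that the error signal is propagated backward through the hidden layer, so the hidden units learn internal features (here, effectively the two half-planes whose combination yields XOR) that no single-layer network can represent.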
