
# Neural Networks

Traditionally, the term neural network referred to a network or circuit of biological neurons.

The modern usage of the term usually refers to artificial neural networks, which are composed of artificial neurons or nodes. The term 'neural network' therefore has two distinct connotations:

1. Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral or central nervous system.
2. Artificial neural networks are made up of interconnected artificial neurons, which may share some properties of biological neural networks.

## Introduction

A neural network is an artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically "learn" complex relationships among data, which makes the technique very useful for modeling processes that are difficult or impossible to model mathematically. Two terms used throughout this discussion are the input vector and the weight vector: all the input values of a perceptron are collectively called its input vector, and all the weight values of a perceptron are collectively called its weight vector.

## A Simple Artificial Neuron

Our basic computational element (model neuron) is often called a node or unit. It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. The unit computes some function f of the weighted sum of its inputs: y = f(Σᵢ wᵢ xᵢ).
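This model neuron can be sketched directly in code. The function names and example values below are illustrative, not part of the original text:

```python
# A minimal sketch of the model neuron described above: the unit applies a
# function f to the weighted sum of its inputs.
def unit_output(inputs, weights, f):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return f(weighted_sum)

# Example with an identity "activation" that just passes the sum through.
print(unit_output([1.0, 0.5], [0.2, 0.4], lambda s: s))  # 0.2*1.0 + 0.4*0.5 = 0.4
```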

## Perceptron
The perceptron is a mathematical model of a biological neuron. While in actual neurons the dendrites receive electrical signals from the axons of other neurons, in the perceptron these signals are represented as numerical values. At the synapses between axons and dendrites, electrical signals are modulated by various amounts; this is modeled in the perceptron by multiplying each input value by a value called the weight. An actual neuron fires an output signal only when the total strength of the input signals exceeds a certain threshold. We model this phenomenon in a perceptron by calculating the weighted sum of the inputs to represent the total strength of the input signals, and applying a step function to the sum to determine its output. As in biological neural networks, this output is fed to other perceptrons.

Neural networks resemble the human brain in the following two ways: a neural network acquires knowledge through learning, and a neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.

The most common neural network model is the multilayer perceptron (MLP). This type of neural network is known as a supervised network because it requires a desired output in order to learn. The goal of this type of network is to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown. A graphical representation of an MLP is shown below.
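The computation just described, a weighted sum followed by a step function, can be sketched as follows. The inputs, weights, and threshold are illustrative values:

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum models the combined strength of the incoming signals.
    total = sum(w * x for w, x in zip(weights, inputs))
    # Step function: fire (1) only when the total exceeds the threshold.
    return 1 if total > threshold else 0

# Fires for a strong combined input, stays silent otherwise.
print(perceptron([1, 1], [0.6, 0.6], threshold=1.0))  # → 1
print(perceptron([1, 0], [0.6, 0.6], threshold=1.0))  # → 0
```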

The MLP and many other neural networks learn using an algorithm called backpropagation. With backpropagation, the input data is repeatedly presented to the neural network. With each presentation, the output of the neural network is compared to the desired output and an error is computed. This error is then fed back (backpropagated) to the neural network and used to adjust the weights such that the error decreases with each iteration and the neural model gets closer and closer to producing the desired output. This process is known as "training".
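The training loop described above can be sketched in miniature. For brevity this trains a single sigmoid unit rather than a full MLP, using the same present-compare-adjust cycle; the learning rate, epoch count, and the OR task are illustrative choices:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(samples, epochs=1000, lr=0.5):
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = target - out             # compare output to desired output
            delta = err * out * (1 - out)  # gradient through the sigmoid
            # Feed the error back: adjust weights so the error decreases.
            w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
            b += lr * delta
    return w, b

# Learn the OR function (linearly separable, so one unit suffices).
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(samples)
for x, t in samples:
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    print(x, round(out))
```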

A good way to introduce the topic is to look at a typical application of neural networks. Many of today's document scanners for the PC come with software that performs a task known as optical character recognition (OCR). OCR software allows you to scan a printed document and then convert the scanned image into an electronic text format such as a Word document, enabling you to manipulate the text. In order to perform this conversion the software must analyze each group of pixels (0's and 1's) that forms a letter and produce a value that corresponds to that letter. Some of the OCR software on the market uses a neural network as the classification engine.

## Types of Neural Networks

Figure 2. The (a) Rummelhart and (b) Hopfield types of neural networks.

The Rummelhart-type neural network (Figure 2a) shows data flow in one direction (i.e., it is a unidirectional network). Its simplicity and stability make it a natural choice for applications such as data analysis, classification, and interpolation. As the performance of two-layer neural networks is very limited, this type of network generally includes at least one intermediate layer, also called the hidden layer. Each neuron is linked to all of the neurons of the neighboring layers, but there are no links between neurons of the same layer. The behavior of the Rummelhart network is static; its output is a direct reflection of its input.
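The structure just described (data flowing one way through fully connected neighboring layers, with no links within a layer) can be sketched as a forward pass. The weights here are illustrative placeholders:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_rows):
    # One output per row of weights: each neuron in this layer is linked
    # to every neuron of the previous layer, and to none of its own.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_rows]

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]  # 2 inputs -> 2 hidden units
output_weights = [[1.0, -1.0]]              # 2 hidden units -> 1 output

x = [1.0, 0.0]
# Static behavior: the output is purely a function of the current input.
y = layer(layer(x, hidden_weights), output_weights)
print(y)
```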

The Hopfield neural network (Figure 2b), on the other hand, has multidirectional data flow. Its behavior is dynamic and more complex than that of the Rummelhart network. Hopfield networks have no distinct neuron layers; input and output are fully integrated, as all neurons are linked to one another. These networks are typically used for studies of the optimization of connections.
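A common way to demonstrate this dynamic, fully linked behavior is associative recall: store a pattern of ±1 states, then let the network settle from a corrupted version back to it. The Hebbian storage rule and the example pattern below go beyond what the text states and are purely illustrative:

```python
def store(pattern):
    n = len(pattern)
    # Hebbian weights: symmetric, fully connected, no self-connections.
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    # Dynamic behavior: states are updated repeatedly until they settle.
    n = len(state)
    state = list(state)
    for _ in range(steps):
        for i in range(n):
            s = sum(weights[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1]
W = store(stored)
noisy = [1, -1, -1, -1, 1]  # one unit flipped
print(recall(W, noisy))     # settles back to the stored pattern
```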

## Pattern Matching

A neural network is a software (or hardware) simulation of a biological brain (sometimes called an artificial neural network or "ANN"). The purpose of a neural network is to learn to recognize patterns in your data. Once the neural network has been trained on samples of your data, it can make predictions by detecting similar patterns in future data. Software that learns is truly "artificial intelligence". Neural networks are a branch of the field known as artificial intelligence; other branches include case-based reasoning, expert systems, and genetic algorithms, and related fields include classical statistics, fuzzy logic, and chaos theory.

A neural network can be considered a black box that is able to predict an output pattern when it recognizes a given input pattern. The neural network must first be "trained" by having it process a large number of input patterns, showing it what output resulted from each input pattern. Once trained, the neural network is able to recognize similarities when presented with a new input pattern, resulting in a predicted output pattern. Neural networks are able to detect similarities in inputs even though a particular input may never have been seen previously. This property allows for excellent interpolation capabilities, especially when the input data is noisy (not exact). Neural networks may be used as a direct substitute for autocorrelation, multivariable regression, linear regression, trigonometric, and other regression techniques. When a data stream is analyzed using a neural network, it is possible to detect important predictive patterns that were not previously apparent to a non-expert. Thus the neural network can act as an expert.

The goal of a neuron of this sort is to fire when it recognizes a known pattern of inputs. Let's say for example that the inputs come from a black and white image 10 x 10 pixels in size. That would mean we have 100 input values. For each pixel that is white, we'll say the input on its corresponding dendrite has a value of -1. Conversely, for each black pixel, we have a value of 1. Let's say our goal is to get this neuron to fire when it sees a letter "A" in the image, but not when it sees any other pattern. The key to getting our neuron to recognize an "A" in the picture is to set the weights on each dendrite so that each of the input-times-weight values will be positive. So if one dendrite (for a pixel) is expecting to match white (-1), its weight should be -1. Then, when it sees white, it will contribute -1 * -1 = 1 to the sum of all inputs. Conversely, when a dendrite is expecting to match black (1), its weight should be 1. Then, its contribution when it sees black will be 1 * 1 = 1. So if the input image is exactly like the archetype our single neuron has of the letter "A" built into its dendrite weights, the sum will be exactly 100, the highest possible sum for 100 inputs. Using our "sigma" function, this reduces to an output of 1, the highest possible output value, and a sure indication that the image is of an A.
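The worked example above can be written out in code: 100 inputs of ±1, weights set equal to the archetype, and a perfect match yielding the maximum weighted sum of 100, which the "sigma" (sigmoid) function squashes to an output of essentially 1. The "A" archetype below is a randomly generated stand-in, since any ±1 pattern behaves the same way:

```python
import math
import random

random.seed(1)
# Stand-in for the 10 x 10 "A" pixels: -1 for white, 1 for black.
archetype = [random.choice([-1, 1]) for _ in range(100)]
# Weight -1 where white (-1) is expected, 1 where black (1) is expected.
weights = list(archetype)

def weighted_sum(image):
    return sum(w * x for w, x in zip(weights, image))

def sigma(s):
    # The squashing ("sigma") function mentioned in the text.
    return 1.0 / (1.0 + math.exp(-s))

print(weighted_sum(archetype))  # → 100, the highest possible sum
print(sigma(weighted_sum(archetype)))  # essentially 1.0: a sure "A"
```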

## Applications of neural networks

Medicine
One of the areas that has gained attention is cardiopulmonary diagnostics. Neural networks work in this and other areas of medical diagnosis by comparing many different models. A patient may have regular checkups in a particular area, increasing the possibility of detecting a disease or dysfunction. The data, which may include heart rate, blood pressure, and breathing rate, is compared to different models. The models may include variations for age, sex, and level of physical activity. Each individual's physiological data is compared to previous physiological data and/or to data of the various generic models, and the deviations from the norm are compared to the known causes of deviation for each medical condition. The neural network can learn by studying the different conditions and models, merging them to form a complete conceptual picture, and then diagnose a patient's condition based upon the models.

Electronic Noses

The idea of a chemical nose may seem a bit absurd, but it has several real-world applications. The electronic nose is composed of a chemical sensing system (such as a spectrometer) and an artificial neural network, which recognizes certain patterns of chemicals. An odor is passed over the chemical sensor array, the chemicals are translated into a format that the computer can understand, and the artificial neural network identifies the chemical. A list at the Pacific Northwest Laboratory includes several different applications in the environmental, medical, and food industries.

*An actual electronic "nose" (image courtesy Pacific Northwest Laboratory).*

Environment: identification of toxic wastes, analysis of fuel mixtures (7-11 example), detection of oil leaks, identification of household odors, monitoring air quality, monitoring factory emissions, and testing ground water for odors.

Medical: the idea of using these in the medical field is to examine odors from the body to identify and diagnose problems. Odors in the breath, infected wounds, and body fluids can all indicate problems. Artificial neural networks have even been used to detect tuberculosis.

Food: the food industry is perhaps the biggest practical market for electronic noses, assisting or entirely replacing humans: inspection of food, grading food quality, fish inspection, fermentation control, checking mayonnaise for rancidity, automated flavor control, monitoring cheese ripening, verifying whether orange juice is natural, beverage container inspection, and grading whiskey.

## Example of Neural Networks in a Robot

The aim of this cooperative research initiative from INRIA is to develop a biologically inspired artificial neural network model for olfactory perception and to consider its implementation within a robotic framework.

The first aspect of this work consists in developing spiking neural network models for solving odour discrimination and localization problems for which classical (rate-coding) neural networks are not satisfactory. The second aspect concerns the use of olfactory perception in an autonomous robot so as to mimic the animal behavior of tracking specific odours. Possible future applications could include the detection and localization of drugs, explosives, and gas leaks in hostile environments (e.g., landmines) or in public places (e.g., airports).