
UNIT V: NEURAL NETWORK

Introduction – Difference between Human and Machine Intelligence – Features of Biological Neural Network – Human neurons to artificial neurons.

What is Artificial Neural Network?

The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as:

"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."

The term "Artificial Neural Network" is derived from the biological neural networks that form the structure of the human brain. Just as the human brain has neurons interconnected with one another, artificial neural networks have neurons interconnected with one another in the various layers of the network. These neurons are known as nodes.

The given figures illustrate the typical structure of a Biological Neural Network and of an Artificial Neural Network.

Dendrites in the biological neural network correspond to inputs in artificial neural networks, the cell nucleus corresponds to nodes, synapses correspond to weights, and the axon corresponds to the output.

Relationship between a biological neural network and an artificial neural network:

Biological Neural Network    Artificial Neural Network
Dendrites                    Inputs
Cell nucleus                 Nodes
Synapse                      Weights
Axon                         Output

An Artificial Neural Network is an attempt, in the field of Artificial Intelligence, to mimic the network of neurons that makes up the human brain, so that computers have a way to understand things and make decisions in a human-like manner. The artificial neural network is designed by programming computers to behave, in a simplified way, like interconnected brain cells.

There are roughly 86 billion neurons in the human brain, and each neuron forms somewhere between 1,000 and 100,000 connections to other neurons. In the human brain, data is stored in a distributed manner, and we can retrieve more than one piece of this data from memory in parallel when necessary. We can say that the human brain is an incredibly powerful parallel processor.
We can understand the artificial neural network with an example. Consider a digital logic gate that takes inputs and gives an output, such as an "OR" gate with two inputs: if one or both inputs are "On," the output is "On"; if both inputs are "Off," the output is "Off." Here the output depends only on the inputs, and the relationship is fixed. Our brain does not work this way: the relationship between outputs and inputs keeps changing, because the neurons in our brain are "learning."
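The OR gate behaviour can be written out directly; note that the input-output relationship is fixed in code, which is exactly the contrast with a learning brain:

```python
def or_gate(a: int, b: int) -> int:
    """Fixed logic: output is "On" (1) if either input is "On"."""
    return 1 if (a == 1 or b == 1) else 0

# The mapping never changes, no matter how often we use it.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", or_gate(a, b))
```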

The architecture of an artificial neural network:

To understand the architecture of an artificial neural network, we first have to understand what a neural network consists of: a large number of artificial neurons, termed units, arranged in a sequence of layers. Let us look at the various types of layers available in an artificial neural network.

Artificial Neural Network primarily consists of three layers:

Input Layer:

As the name suggests, it accepts inputs in several different formats provided by the programmer.

Hidden Layer:

The hidden layer sits between the input and output layers. It performs all the calculations needed to find hidden features and patterns.

Output Layer:

The input goes through a series of transformations in the hidden layers, finally producing the result that is conveyed by this layer.
The artificial neural network takes the inputs, computes the weighted sum of the inputs, and adds a bias. This computation is represented in the form of a transfer function.

The weighted total is then passed as input to an activation function, which produces the output. Activation functions decide whether a node should fire or not; only nodes that fire contribute to the output layer. There are distinctive activation functions available, chosen according to the sort of task we are performing.
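As a minimal sketch of this computation (with made-up inputs and weights, and a simple step function standing in for the activation), a single node looks like this:

```python
def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, then a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # fire (1) or not (0)

# Hypothetical values: 1.0*0.8 + 0.5*(-0.2) + 0.1 = 0.8 > 0, so the node fires.
print(neuron([1.0, 0.5], [0.8, -0.2], 0.1))  # -> 1
```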

Advantages of Artificial Neural Network (ANN)

Parallel processing capability:

Because computation is distributed across many nodes, an artificial neural network can perform more than one task simultaneously.

Storing data on the entire network:

Unlike traditional programming, where data is stored in a database, information in an ANN is stored across the entire network. The loss of a few pieces of data in one place does not prevent the network from working.

Capability to work with incomplete knowledge:

After training, an ANN may produce output even with incomplete data. The loss of performance depends on the importance of the missing data.

Having a memory distribution:

For an ANN to be able to adapt, it is important to select suitable examples and to train the network toward the desired output by presenting these examples to it. The success of the network is directly proportional to the chosen instances; if the situation cannot be shown to the network in all its aspects, the network can produce false output.

Having fault tolerance:

Corruption of one or more cells of an ANN does not prevent it from generating output; this feature makes the network fault-tolerant.

Limitations of ANN

1) ANN is not a general-purpose solver for everyday problems.

2) There is no structured methodology available for designing ANNs.

3) There is no single standardized paradigm for ANN development.

4) The output quality of an ANN may be unpredictable.

5) Many ANN systems do not describe how they solve problems.

6) Black-box nature.

7) Greater computational burden.

8) Proneness to overfitting.

9) Empirical nature of model development.

How the Human Brain Learns, and How Neural Networks Mirror It

The human brain and artificial neural networks share some similarities in
terms of learning principles, but they also have significant differences. Here's a
brief overview of how the human brain learns, drawing parallels with artificial
neural networks:
1. Synaptic Plasticity:
Human Brain: Learning in the human brain is largely based on synaptic
plasticity, which refers to the ability of synapses (connections between neurons)
to strengthen or weaken over time. This is often associated with long-term
potentiation (LTP) and long-term depression (LTD).
Neural Networks: In artificial neural networks, synaptic weights are
adjusted during the training process to strengthen or weaken connections
between artificial neurons. This adjustment is analogous to the synaptic
plasticity observed in the human brain.
2. Adaptation and Experience:
Human Brain: Learning in the brain is influenced by experiences and
environmental stimuli. Neurons and synapses adapt based on the patterns and
frequencies of activation they experience.
Neural Networks: Artificial neural networks learn from data through
exposure to patterns and examples. The network adapts its parameters (weights
and biases) based on the input data and the desired output, optimizing its
performance over time.
3. Generalization:
Human Brain: The brain has a remarkable ability to generalize learning
from one context to another. It can recognize and adapt to similar patterns in
different situations.
Neural Networks: Generalization is a key goal in artificial neural networks
as well. Trained networks should be able to make accurate predictions or
classifications on new, unseen data that shares similarities with the training
data.
4. Feedback Mechanisms:
Human Brain: Learning in the brain often involves feedback mechanisms,
where the consequences of an action or the correctness of a thought contribute
to the learning process.
Neural Networks: Backpropagation is a common feedback mechanism in
artificial neural networks. During training, the network receives feedback on the
error between its predictions and the actual target values, and it adjusts its
parameters accordingly.
5. Unsupervised Learning:
Human Brain: Much of human learning is unsupervised, meaning that the
brain can learn from the environment without explicit guidance or labeled
examples.
Neural Networks: Unsupervised learning algorithms in artificial neural
networks aim to discover patterns or representations in data without labeled
targets. This is akin to the brain's ability to extract information from the
environment without explicit supervision.
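In the same spirit, here is a minimal unsupervised sketch: grouping unlabeled one-dimensional points into two clusters, k-means style. The data and initial centres are made up for illustration:

```python
# No labels are given; structure is discovered from the data alone.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
c1, c2 = points[0], points[-1]  # arbitrary initial centre guesses

for _ in range(10):
    # Assign each point to its nearest centre, then move the centres.
    g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)

print(round(c1, 2), round(c2, 2))  # two cluster centres emerge
```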
While there are parallels between the human brain and artificial neural
networks, it's important to note that current artificial neural networks are highly
simplified models inspired by the brain's structure and function. The level of
complexity and sophistication seen in the human brain is not fully replicated in
current AI systems.

Feature-by-feature comparison of Artificial Intelligence (AI) and Human Intelligence (HI):

Learning
  AI: AI can learn from vast amounts of data using algorithms and statistical models.
  HI: HI can learn from experience, observation, and instruction, and can apply knowledge to new situations.

Creativity
  AI: AI can generate new solutions based on existing patterns and data, but lacks true creativity and originality.
  HI: HI can create new ideas, art, music, and literature through imagination and innovation.

Emotional Intelligence
  AI: AI does not have emotions or empathy, and cannot understand the emotions of others.
  HI: HI has emotional intelligence, and can recognize and respond to the emotions of others.

Adaptability
  AI: AI is highly adaptable to changes in input or environment, and can learn quickly from new data.
  HI: HI is adaptable to changes in the environment, and can learn from new experiences and situations.

Decision-making
  AI: AI can make decisions based on rules, algorithms, and data, but lacks intuition and the ability to make ethical judgments.
  HI: HI can make complex decisions based on intuition, experience, reasoning, and ethical considerations.

Physical Abilities
  AI: AI can perform physical tasks with precision and speed, but lacks the dexterity, strength, and flexibility of humans.
  HI: HI has a wide range of physical abilities, including fine motor skills, athleticism, and sensory perception.

Human neurons to artificial neurons

Human Neurons:

1. Structure:

Human neurons are the basic structural and functional units of the nervous system.
They consist of a cell body (soma), dendrites, and an axon.
Dendrites receive signals from other neurons, and the axon transmits signals to other
neurons.

2. Electrochemical Signaling:

Neurons communicate through electrochemical signals. When a neuron receives a


signal, an electrical impulse (action potential) travels down the axon.
At the synapse, neurotransmitters are released, transmitting the signal to the next
neuron.

3. Synapses:
Synapses are the junctions between neurons where information is transmitted.
Neurotransmitters bridge the synaptic gap and bind to receptors on the receiving
neuron, leading to the generation of new electrical signals.

Artificial Neurons:
1. Computational Units:

Artificial neurons are the basic computational units in artificial neural networks
(ANNs).
They receive input values, apply weights to these inputs, sum them up, and pass the
result through an activation function to produce an output.

2. Mathematical Representation:

The functioning of artificial neurons is often described by mathematical equations. The weighted sum of inputs is passed through an activation function, such as the sigmoid or rectified linear unit (ReLU), to introduce non-linearity.
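The two activation functions named above can be written directly from their definitions; the input values in the example are arbitrary:

```python
import math

def sigmoid(z: float) -> float:
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z: float) -> float:
    """Passes positive values through, zeroes out negatives."""
    return max(0.0, z)

# Weighted sum of (assumed) inputs, then the non-linearity:
z = 2.0 * 0.5 + 1.0 * (-0.3) + 0.1  # inputs * weights + bias = 0.8
print(round(sigmoid(z), 3), relu(z))  # -> 0.69 0.8
```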

3. Artificial Synapses:

The connections between artificial neurons are analogous to synapses in biological systems. Each connection has a weight, representing the strength of the connection. These weights are adjusted during the training of the neural network.

4. Layers and Networks:

Artificial neural networks consist of layers of artificial neurons. Input neurons receive
external data, hidden layers process this information, and output neurons produce the
final result.

Deep neural networks have multiple hidden layers, allowing for more complex
hierarchical representations.
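To illustrate the layered structure, here is a small forward pass through one hidden layer and one output layer. The weights and the tanh activation are arbitrary choices for the sketch, not a prescribed architecture:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: each output neuron sums its weighted inputs,
    adds a bias, and applies a tanh activation."""
    return [
        math.tanh(sum(x * w for x, w in zip(inputs, ws)) + b)
        for ws, b in zip(weights, biases)
    ]

# Input -> hidden -> output, with small made-up weights.
x = [0.5, -1.0]
hidden = layer(x, weights=[[0.4, 0.1], [-0.2, 0.3]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```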

5. Training:

Artificial neural networks are trained using a process called backpropagation. During
training, the network adjusts its weights to minimize the difference between predicted
outputs and actual outputs for a given set of input data.
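The idea of adjusting weights to shrink the error can be sketched for a single weight; this is a toy gradient-descent update on squared error with a made-up target function, not a full backpropagation implementation:

```python
# Learn the (hypothetical) target y = 3 * x from two examples.
w = 0.0             # initial weight
lr = 0.1            # learning rate
data = [(1.0, 3.0), (2.0, 6.0)]

for _ in range(50):
    for x, target in data:
        pred = w * x
        error = pred - target  # feedback: how wrong was the prediction?
        w -= lr * error * x    # move the weight against the error

print(round(w, 3))  # close to 3.0
```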

6. Activation Functions:

Activation functions in artificial neurons serve a role similar to the firing threshold in
biological neurons. They introduce non-linearity, enabling neural networks to model
complex relationships in data.

Challenges and Advancements:

Simplicity vs. Complexity:

Artificial neurons are simpler than their biological counterparts. Human neurons exhibit intricate behaviors, including learning, adaptation, and plasticity, which are challenging to replicate fully in artificial systems.

Neuromorphic Computing:
Neuromorphic computing seeks to design hardware that mimics the brain's
architecture more closely. This includes the development of artificial synapses and
neurons with spiking behavior.

Biological Inspiration:
Researchers are exploring ways to incorporate more biological inspiration into
artificial neural networks. This involves studying the brain's structure and function to
improve the efficiency and capabilities of AI systems.

Ethical Considerations:

As AI progresses, ethical considerations regarding the creation of intelligent systems with potential societal impacts become increasingly important.

Future Prospects:

The transition from human neurons to artificial neurons is an ongoing area of research
with several promising directions, including neuromorphic computing, bio-inspired
algorithms, and the development of more sophisticated artificial neural networks.

While artificial neurons currently lack the complexity of biological neurons, advancements in AI continue to push the boundaries of what can be achieved, raising possibilities for improved brain-machine interfaces, medical applications, and general AI capabilities.
