Unit V
Correspondence between a biological neuron and an artificial neuron:
Dendrites → Inputs
Synapse → Weights
Axon → Output
There are around 1000 billion neurons in the human brain, and each neuron is connected to other neurons through somewhere between 1,000 and 100,000 association points. In the human brain, data is stored in a distributed manner, and we can retrieve more than one piece of this data from memory in parallel when necessary. In this sense, the human brain can be viewed as an incredibly powerful parallel processor.
We can understand the artificial neural network with an example. Consider a digital logic gate that takes inputs and gives an output: an "OR" gate takes two inputs, and if one or both inputs are "On," the output is "On"; if both inputs are "Off," the output is "Off." Here the output is completely determined by the input, and this mapping never changes. Our brain does not work the same way: the relationship between outputs and inputs keeps changing, because the neurons in our brain are "learning."
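To make the contrast concrete, here is a minimal sketch (not from the notes) of such a fixed gate in Python; its input-to-output mapping is hard-wired and never changes, unlike a neuron whose behaviour shifts as it learns:

def or_gate(a, b):
    # A fixed logic gate: the same inputs always produce the same output.
    return int(a or b)

# The OR truth table; this mapping is hard-wired and never changes.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", or_gate(a, b))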
Input Layer:
It accepts inputs in several different formats provided by the programmer and passes them on to the hidden layer.
Hidden Layer:
The hidden layer sits between the input and output layers. It performs all the calculations needed to find hidden features and patterns.
Output Layer:
The input goes through a series of transformations in the hidden layer, and the final result is conveyed through this layer.
The artificial neural network takes the inputs, computes the weighted sum of the inputs, and adds a bias. This computation is then passed through a transfer function, which determines the output.
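As a rough sketch of this computation (the weight and bias values below are illustrative choices, not taken from the notes), a single artificial neuron that forms a weighted sum, adds a bias, and applies a step transfer function can reproduce the OR gate described earlier:

def step(z):
    # Threshold transfer function: output 1 if the weighted sum exceeds zero.
    return 1 if z > 0 else 0

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the transfer function.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(z)

# Hand-picked (hypothetical) weights and bias that realise an OR gate.
weights, bias = [1.0, 1.0], -0.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", artificial_neuron([a, b], weights, bias))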
Because its computation is distributed over many units working in parallel, an artificial neural network can perform more than one task simultaneously.
After training, an ANN may produce output even with incomplete data. The loss of performance here depends on the importance of the missing data.
The failure of one or more cells of an ANN does not prevent it from generating output, and this feature makes the network fault-tolerant.
LIMITATIONS OF ANN
Many ANN systems do not describe how they solve problems; the trained network largely behaves as a black box.
The human brain and artificial neural networks share some similarities in
terms of learning principles, but they also have significant differences. Here's a
brief overview of how the human brain learns, drawing parallels with artificial
neural networks:
1. Synaptic Plasticity:
Human Brain: Learning in the human brain is largely based on synaptic
plasticity, which refers to the ability of synapses (connections between neurons)
to strengthen or weaken over time. This is often associated with long-term
potentiation (LTP) and long-term depression (LTD).
Neural Networks: In artificial neural networks, synaptic weights are
adjusted during the training process to strengthen or weaken connections
between artificial neurons. This adjustment is analogous to the synaptic
plasticity observed in the human brain.
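As a toy illustration of this analogy, here is a minimal sketch assuming a simple Hebbian-style update rule (the rule and the learning rate are assumptions made for the example, not something prescribed by the notes):

LEARNING_RATE = 0.1

def hebbian_update(weight, pre_activity, post_activity):
    # Co-active pre- and post-synaptic units strengthen the connection
    # (loosely analogous to LTP); opposing activity weakens it (analogous to LTD).
    return weight + LEARNING_RATE * pre_activity * post_activity

w = 0.2
w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)   # co-activation: weight grows
w = hebbian_update(w, pre_activity=1.0, post_activity=-0.5)  # opposing activity: weight shrinks
print(w)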
2. Adaptation and Experience:
Human Brain: Learning in the brain is influenced by experiences and
environmental stimuli. Neurons and synapses adapt based on the patterns and
frequencies of activation they experience.
Neural Networks: Artificial neural networks learn from data through
exposure to patterns and examples. The network adapts its parameters (weights
and biases) based on the input data and the desired output, optimizing its
performance over time.
3. Generalization:
Human Brain: The brain has a remarkable ability to generalize learning
from one context to another. It can recognize and adapt to similar patterns in
different situations.
Neural Networks: Generalization is a key goal in artificial neural networks
as well. Trained networks should be able to make accurate predictions or
classifications on new, unseen data that shares similarities with the training
data.
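A minimal sketch of how generalization is typically measured in practice (scikit-learn and the synthetic dataset below are illustrative assumptions; the notes do not prescribe a library): the network is fitted on one portion of the data and scored on data it has never seen.

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data split into a training portion and an unseen test portion.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Accuracy on the unseen portion indicates how well the network generalizes.
print("test accuracy:", clf.score(X_test, y_test))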
4. Feedback Mechanisms:
Human Brain: Learning in the brain often involves feedback mechanisms,
where the consequences of an action or the correctness of a thought contribute
to the learning process.
Neural Networks: Backpropagation is a common feedback mechanism in
artificial neural networks. During training, the network receives feedback on the
error between its predictions and the actual target values, and it adjusts its
parameters accordingly.
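A minimal sketch of this feedback idea for a single linear neuron (illustrative only; a full network applies the same principle through every layer via backpropagation): the prediction error is fed back to nudge the weights in the direction that reduces it.

def feedback_step(weights, bias, inputs, target, lr=0.1):
    # One gradient step for a single linear neuron with squared-error loss.
    prediction = sum(w * x for w, x in zip(weights, inputs)) + bias
    error = prediction - target                      # the feedback signal
    new_weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * error
    return new_weights, new_bias

w, b = [0.0, 0.0], 0.0
for _ in range(50):
    w, b = feedback_step(w, b, inputs=[1.0, 2.0], target=1.0)
print(w, b)  # the weights settle near values that map [1.0, 2.0] to 1.0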
5. Unsupervised Learning:
Human Brain: Much of human learning is unsupervised, meaning that the
brain can learn from the environment without explicit guidance or labeled
examples.
Neural Networks: Unsupervised learning algorithms in artificial neural
networks aim to discover patterns or representations in data without labeled
targets. This is akin to the brain's ability to extract information from the
environment without explicit supervision.
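As one hedged example of unsupervised learning (k-means clustering is used here only because it is compact; the notes do not name a specific algorithm), structure is discovered in the data without any labels:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: only the inputs are available, no target values.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The algorithm discovers groupings in the data on its own.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)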
While there are parallels between the human brain and artificial neural
networks, it's important to note that current artificial neural networks are highly
simplified models inspired by the brain's structure and function. The level of
complexity and sophistication seen in the human brain is not fully replicated in
current AI systems.
Human Neurons:
1. Structure:
Human neurons are the basic structural and functional units of the nervous system.
They consist of a cell body (soma), dendrites, and an axon.
Dendrites receive signals from other neurons, and the axon transmits signals to other
neurons.
2. Electrochemical Signaling:
Neurons communicate through electrochemical signals. An electrical impulse (action potential) travels along the axon and, at its terminal, triggers the release of chemical neurotransmitters into the synapse.
3. Synapses:
Synapses are the junctions between neurons where information is transmitted.
Neurotransmitters bridge the synaptic gap and bind to receptors on the receiving
neuron, leading to the generation of new electrical signals.
Artificial Neurons:
1. Computational Units:
Artificial neurons are the basic computational units in artificial neural networks
(ANNs).
They receive input values, apply weights to these inputs, sum them up, and pass the
result through an activation function to produce an output.
Mathematical Representation:
The output of an artificial neuron can be written as y = f(w1*x1 + w2*x2 + ... + wn*xn + b), where the xi are the inputs, the wi are the corresponding weights, b is the bias, and f is the activation (transfer) function.
Artificial Synapses:
The weighted connections between artificial neurons play the role of synapses.
Artificial neural networks consist of layers of artificial neurons. Input neurons receive
external data, hidden layers process this information, and output neurons produce the
final result.
Deep neural networks have multiple hidden layers, allowing for more complex
hierarchical representations.
Training:
Artificial neural networks are trained using a process called backpropagation. During
training, the network adjusts its weights to minimize the difference between predicted
outputs and actual outputs for a given set of input data.
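A minimal numpy sketch of such a training loop (the network size, data, and learning rate below are arbitrary choices for illustration, not details from the notes): a tiny two-layer network is fitted to the XOR problem by repeatedly propagating the prediction error backwards and adjusting the weights.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR data: four input patterns and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input layer -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden layer -> output layer
lr = 0.5

for _ in range(5000):
    # Forward pass through the hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the network.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust weights and biases to reduce the prediction error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(float(np.mean((out - y) ** 2)))  # the squared error shrinks as training proceeds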
Activation Functions:
Activation functions in artificial neurons serve a role similar to the firing threshold in
biological neurons. They introduce non-linearity, enabling neural networks to model
complex relationships in data.
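A short sketch of three commonly used activation functions (the particular choices here are illustrative; the notes do not single out any one of them):

import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Squashes into (-1, 1), centred on zero.
    return np.tanh(z)

def relu(z):
    # Passes positive values through unchanged and zeroes out negative ones.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))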
Neuromorphic Computing:
Neuromorphic computing seeks to design hardware that mimics the brain's
architecture more closely. This includes the development of artificial synapses and
neurons with spiking behavior.
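As a hedged illustration of spiking behaviour, here is a highly simplified leaky integrate-and-fire neuron (the constants are arbitrary; this is not a description of any particular neuromorphic chip):

# Leaky integrate-and-fire neuron: the membrane potential integrates input
# current, leaks back toward rest, and emits a spike when it crosses a threshold.
rest, threshold, leak, dt = 0.0, 1.0, 0.1, 1.0
potential = rest
spike_times = []

for step in range(50):
    input_current = 0.15                        # constant drive (arbitrary value)
    potential += dt * (input_current - leak * (potential - rest))
    if potential >= threshold:
        spike_times.append(step)                # a spike is emitted
        potential = rest                        # the potential resets after firing

print("spike times:", spike_times)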
Biological Inspiration:
Researchers are exploring ways to incorporate more biological inspiration into
artificial neural networks. This involves studying the brain's structure and function to
improve the efficiency and capabilities of AI systems.
Ethical Considerations:
Future Prospects:
The transition from human neurons to artificial neurons is an ongoing area of research
with several promising directions, including neuromorphic computing, bio-inspired
algorithms, and the development of more sophisticated artificial neural networks.