
1. Distinguish between linearly separable and linearly inseparable problems with an example. Why can a single-layer perceptron not be used to solve linearly inseparable problems?
Answer:
There is a whole class of problems termed linearly separable. They are given this name because, if we plot them in the input space, the two classes can be separated by a straight line. The simplest examples are the logical AND and OR functions.

If you can draw a line or hyperplane that separates the points into two classes, then the data is linearly separable. If not, the points may still be separable by a hyperplane in a higher-dimensional space. If no hyperplane can separate them at all, the data is termed linearly non-separable.

A set of input vectors (or a training set) is said to be linearly non-separable if no hyperplane exists such that each vector lies on its pre-assigned side of the hyperplane.

Reason why a single-layer perceptron cannot be used to solve linearly inseparable problems:
A single-layer perceptron computes only a linear decision boundary: it classifies a point by which side of a line (or hyperplane) the point falls on. For a linearly inseparable problem the positive and negative points cannot be separated by any such line, i.e. no choice of weights and bias classifies every point correctly. This is why the XOR problem cannot be solved by a one-layer perceptron.
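
As an illustration, here is a minimal sketch in Python (the function names, learning rate, and toy data are illustrative choices, not part of the original answer) of the perceptron learning rule applied to AND and to XOR. On the linearly separable AND function the rule converges to a correct classifier; on XOR no weights and bias can separate the classes, so at least one of the four points is always misclassified.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # Single-layer perceptron with a step activation.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += lr * (target - pred) * xi      # perceptron weight update
            b += lr * (target - pred)
    return w, b

def predict(w, b, X):
    return np.array([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# AND is linearly separable: the perceptron learns it exactly.
y_and = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y_and)
print("AND:", predict(w, b, X))   # [0 0 0 1]

# XOR is linearly inseparable: no line separates the classes,
# so the learned weights always misclassify at least one point.
y_xor = np.array([0, 1, 1, 0])
w, b = train_perceptron(X, y_xor)
print("XOR:", predict(w, b, X))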

2. Explain Adaline and Madaline with a diagram. Explain the learning rule and transfer function used in Adaline.
Answer:
ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-
layer artificial neural network and the name of the physical device that implemented this
network. The network uses memistors. A multilayer network of ADALINE units is known as
a MADALINE.
The process of adjusting the weights and threshold of the ADALINE network is based on a learning algorithm called the Delta rule (Widrow and Hoff, 1960) or Widrow-Hoff learning rule, also known as the LMS (Least Mean Square) algorithm or gradient descent method. During training ADALINE uses a linear transfer function: the error that is minimized is the mean squared difference between the target and the raw weighted sum of the inputs (the net input), and a threshold (step) function is applied only to produce the final binary output.
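
A minimal sketch of the delta (LMS) rule in Python is given below, assuming a bipolar AND training set and illustrative helper names. It highlights the point about the transfer function: the weight update uses the raw linear net input, and a step function is applied only when reading out the final binary prediction.

import numpy as np

def train_adaline(X, y, epochs=20, lr=0.1):
    # Delta / Widrow-Hoff (LMS) rule: update against the *linear* output.
    X = np.c_[X, np.ones(len(X))]          # append a constant bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            net = np.dot(w, xi)            # linear transfer function: f(net) = net
            w += lr * (target - net) * xi  # delta rule: dw = eta * (t - net) * x
    return w

def predict(w, X):
    # Threshold (signum) applied only for the final binary output.
    X = np.c_[X, np.ones(len(X))]
    return np.where(X @ w >= 0, 1, -1)

# Bipolar AND problem (inputs and targets in {-1, +1})
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = np.array([-1, -1, -1, 1])
w = train_adaline(X, y)
print("weights:", w)
print("predictions:", predict(w, X))   # [-1 -1 -1  1]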

3. What is the Hopfield model of a neural network? What is a state transition diagram for a Hopfield Neural Network? Explain how to derive it in the Hopfield model.
Answer:
A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria.

Hopfield networks in artificial intelligence


The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single layer which contains one or more fully connected recurrent neurons. The Hopfield network is commonly used for auto-association and optimization tasks.

State transition diagram for Hopfield Neural Network:

The state of the net at any time is given by the vector of the node outputs. This information may be represented in graphical form as a state transition diagram: for a 3-node net there are 2^3 = 8 possible states, each represented by a circle labelled with its state number, and a directed edge is drawn from a state to the state the net moves to when a node is updated. To derive the diagram in the Hopfield model, take each possible state in turn, update each node from the sign of its weighted input sum, and record the resulting state; states that transition only to themselves are the stable states (attractors) of the network.
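
The derivation can be sketched in code. The example below (Python; the stored pattern and helper names are illustrative) builds a Hebbian weight matrix for a 3-node bipolar Hopfield net and, for each of the 8 possible states, lists the states reached by updating each node asynchronously. Drawing an arrow from every state circle to each of its successor states gives the state transition diagram.

import numpy as np
from itertools import product

def hopfield_weights(patterns):
    # Hebbian weight matrix with zero diagonal (no self-connections).
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W

def async_update(state, W, i):
    # Update node i from the sign of its weighted input sum.
    s = state.copy()
    s[i] = 1 if np.dot(W[i], s) >= 0 else -1
    return s

patterns = np.array([[1, -1, 1]])          # one stored pattern, 3 nodes
W = hopfield_weights(patterns)

# Enumerate all 8 states and their successors under single-node updates.
for state in product([-1, 1], repeat=3):
    state = np.array(state)
    successors = {tuple(async_update(state, W, i)) for i in range(3)}
    print(tuple(state), "->", sorted(successors))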
4. What are NP-hard problems?
Answer:
NP Hard Problems
In computational complexity theory, NP-hardness (non-deterministic polynomial-time
hardness) is the defining property of a class of problems that are informally "at least
as hard as the hardest problems in NP". A simple example of an NP-hard problem is the
subset sum problem.
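
As a concrete illustration, the brute-force approach to subset sum below (Python; the input list and target are arbitrary) checks every subset, so the work grows roughly as 2^n with the number of items, which is why the naive method quickly becomes impractical for large instances.

from itertools import combinations

def subset_sum(numbers, target):
    # Try every subset size and every subset of that size.
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo               # first subset found that hits the target
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # e.g. (4, 5)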
