
SUMMARY OF NEURAL NETWORKS AND THE ASCENT OF MACHINE LEARNING,

BY INMACULADA RODRÍGUEZ RUIZ.

As an improvement on the perceptron, the author presents multilayer neural networks, where "network" means a set of units connected in different ways. The main difference from the perceptron is that a multilayer neural network has three kinds of layers: one of input units, also called simulated neurons; at least one layer of hidden units; and one of output units.

As in perceptrons, each unit multiplies each of its inputs by the weight on that input's connection and then sums the results. Here, however, each unit uses its sum to compute a number between 0 and 1, which is called the unit's "activation". To process an image, each hidden unit computes its own activation value, and these values are then passed on to the output units, which compute their own activations in turn.
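The computation described above can be sketched in a few lines of Python. The sigmoid function and the tiny two-layer network with made-up weights below are illustrative assumptions, not details taken from the text; they only show how each unit turns a weighted sum into an activation between 0 and 1:

```python
import math

def sigmoid(x):
    # Squashes any weighted sum into a number between 0 and 1,
    # the unit's "activation".
    return 1.0 / (1.0 + math.exp(-x))

def unit_activation(inputs, weights):
    # Each unit multiplies every input by the weight on that
    # connection, sums the results, then squashes the sum.
    total = sum(i * w for i, w in zip(inputs, weights))
    return sigmoid(total)

# A toy network: 2 inputs -> 2 hidden units -> 1 output unit.
# All weights are invented for illustration.
inputs = [0.5, 0.8]
hidden_weights = [[0.4, -0.6], [0.7, 0.1]]
output_weights = [0.9, -0.3]

hidden = [unit_activation(inputs, w) for w in hidden_weights]
output = unit_activation(hidden, output_weights)
```

The hidden activations feed the output unit exactly as the text describes: each layer's activations become the next layer's inputs.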

Even so, it is still hard to say how many hidden units or how many layers a network needs in order to work well on all tasks.

One of the most challenging problems is developing a way for the network to learn. That is why the author discusses learning via back-propagation, a method that takes the error at the output units and propagates the blame for that error backward through the whole network so that it is not repeated. This lets back-propagation determine how much each weight should be changed to reduce the error. In this context, then, "learning" means modifying the weights of the connections so that the error shrinks.
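A minimal sketch of this idea for a single sigmoid output unit, assuming squared error and gradient descent (the learning rate, inputs, and starting weights are made-up illustrative values): the error at the output is scaled by the slope of the activation, and each weight is nudged in proportion to the input it carried.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(inputs, weights, target, lr=0.5):
    # Forward pass: weighted sum, then activation.
    s = sum(i * w for i, w in zip(inputs, weights))
    out = sigmoid(s)
    # Error observed at the output unit.
    error = target - out
    # "Blame" for the error: scale it by the sigmoid's slope.
    delta = error * out * (1.0 - out)
    # Each weight changes in proportion to its input's contribution.
    new_weights = [w + lr * delta * i for w, i in zip(weights, inputs)]
    return new_weights, error

inputs, target = [1.0, 0.5], 1.0
weights = [0.2, -0.4]
for _ in range(200):
    weights, error = train_step(inputs, weights, target)
```

Repeating the step shrinks the error, which is what "learning" means here; a full back-propagation pass would apply the same blame-assignment rule layer by layer, from the outputs back to the inputs.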

In the beginning, the neural-network approach was known as connectionism, a term that refers to the idea that knowledge in these networks resides in the weighted connections between units. Analyzing human behavior, researchers came to the conclusion that many of us cannot describe how we do things, because we rely on subconscious knowledge such as common sense. This knowledge is hard to express in deductive logic, which makes AI harder to develop.

Apart from that, symbolic and subsymbolic systems each have advantages and disadvantages. The former are closer to human conscious knowledge and even use human-understandable reasoning to solve problems. The latter tend to be hard to understand, and no one can directly read the knowledge encoded inside them. Ideally we would combine and use both kinds of systems at the same time, but so far this has not been possible.

In conclusion, researchers have developed algorithms that let computers learn on their own, leading to an independent subdiscipline of AI called machine learning.
