Neural Networks
What is Learning
The word "learning" has many different meanings. It is used, at least, to describe
• memorizing something
• learning facts through observation and exploration
• development of motor and/or cognitive skills through practice
• organization of new knowledge into general, effective representations
Learning
• Study of processes that lead to self-improvement of machine performance.
• It implies the ability to use knowledge to create new knowledge, or to integrate new facts into an existing knowledge structure.
• Learning typically requires repetition and practice to reduce the difference between actual and desired performance.
What is Learning?
Learning
Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience.
Learning & Adaptation
• "Modification of a behavioral tendency by expertise." (Webster 1984)
• "A learning machine, broadly defined as any device whose actions are influenced by past experiences." (Nilsson 1965)
• "Any change in a system that allows it to perform better the second time on repetition of the same task or on another task drawn from the same population." (Simon 1983)
Negative Features of Human Learning
• It's slow (5-6 years for motor skills, 12-20 years for abstract reasoning)
• Inefficient
• Expensive
• There is no copy process
• Learning strategy is often a function of the knowledge available to the learner
Applications of ML
• Learning to recognize spoken words
• Learning to drive an autonomous vehicle
• Learning to classify objects
• Learning to play world-class backgammon
• Designing the morphology and control structure of electro-mechanical artefacts
Motivating Problems
• Handwritten Character Recognition
Motivating Problems
• Fingerprint Recognition (e.g., border control)
Motivating Problems
• Face Recognition (security access to buildings, etc.)
Different kinds of learning
• Supervised learning:
– Someone gives us examples and the right answer
for those examples
– We have to predict the right answer for unseen
examples
• Unsupervised learning:
– We see examples but get no feedback
– We need to find patterns in the data
• Reinforcement learning:
– We take actions and get rewards
– Have to learn how to get high rewards
Reinforcement learning
Learning with a Teacher
[Diagram: the environment presents a state x to both the teacher and the learning system; the teacher supplies the desired response, the learning system produces the actual response, and their difference (the error signal) drives the adjustment of the learning system.]
Unsupervised Learning
[Diagram: the environment presents a state to the learning system, which receives no teacher signal and must find structure on its own.]
The red and the black
• Imagine that we were given all these points, and we
needed to guess a function of their x, y coordinates that
would have one output for the red ones and a different
output for the black ones.
What’s the right hypothesis?
• In this case, it seems like we could do pretty well by
defining a line that separates the two classes.
Now, what’s the right hypothesis?
• Now, what if we have a slightly different configuration of
points? We can't divide them conveniently with a line.
Now, what’s the right hypothesis?
• But this parabola-like curve seems like it might
be a reasonable separator.
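One way to see why a curved separator is still manageable: add a squared coordinate as an extra feature, and the parabola becomes a straight line in the enlarged feature space. A minimal sketch (the rule y ≥ x² − 1 is an invented example, not taken from the slides):

```python
# A sketch (not from the slides): the rule "y >= x^2 - 1" draws a parabola
# in (x, y) space, but becomes a straight line once x^2 is added as a feature.

def linear_score(features, weights, bias):
    """Weighted sum of the features plus a bias term."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def above_parabola(x, y):
    # In the enlarged feature space (x, y, x^2) the separator is linear:
    # 0*x + 1*y - 1*x^2 + 1 >= 0   is the same test as   y >= x^2 - 1
    return linear_score((x, y, x * x), (0.0, 1.0, -1.0), 1.0) >= 0
```

The same trick underlies using a linear classifier with non-linear features: the decision surface is curved in the original space but flat in the feature space.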
Design a Learning System
Step 0:
[Diagram: a black-box Learning System is shown a sample character "Z".]
Design a Learning System
[Slide: sample images of the characters 2, 3, 6, 7, 8, 9.]
Design a Learning System
Step 2: Representing Experience
D= (0,0,0,0,0,1,0,0,0,0)
D= (0,0,0,0,0,0,0,0,1,0)
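The D vectors above are one-hot encodings: a 10-component vector with a single 1 marking the class. A minimal sketch of building such a representation (0-indexed classes are my assumption; the slides do not say which class each vector stands for):

```python
def one_hot(label, num_classes=10):
    """Encode a class label as a vector with a single 1 at the label's index."""
    return tuple(1 if i == label else 0 for i in range(num_classes))
```

With 0-indexing, D = (0,0,0,0,0,1,0,0,0,0) is one_hot(5) and D = (0,0,0,0,0,0,0,0,1,0) is one_hot(8).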
Example of supervised learning:
classification
• We lend money to people
• We have to predict whether they will pay us back or not
• People have various (say, binary) features:
– do we know their Address? do they have a Criminal record? high Income?
Educated? Old? Unemployed?
• We see examples: (Y = paid back, N = not)
+a, -c, +i, +e, +o, +u: Y
-a, +c, -i, +e, -o, -u: N
+a, -c, +i, -e, -o, -u: Y
-a, -c, +i, +e, -o, -u: Y
-a, +c, +i, -e, -o, -u: N
-a, -c, +i, -e, -o, +u: Y
+a, -c, -i, -e, +o, -u: N
+a, +c, +i, -e, +o, -u: N
• Next person is +a, -c, +i, -e, +o, -u. Will we get paid back?
Learning by Examples
Concept: ”days on which my friend Aldo enjoys his
favourite water sports”
Task: predict the value of ”Enjoy Sport” for an
arbitrary day based on the values of the other
attributes
[Diagram: a decision tree. The root node tests "Criminal record?"; the yes-branch predicts NO, and the no-branch leads to a second attribute test whose yes-branch predicts NO and whose no-branch predicts YES.]
Constructing a decision tree, given examples:
+a, -c, +i, +e, +o, +u: Y
-a, +c, -i, +e, -o, -u: N
+a, -c, +i, -e, -o, -u: Y
• Numerical approaches
– Build a numeric model with parameters based on successes
• Structural approaches
– Concerned with the process of defining relationships by creating links between concepts
Learning methods
• Decision rules:
– If income < $30,000 then reject
• Bayesian network:
– P(good | income, credit history, …)
• Neural Network:
• Nearest Neighbor:
– Take the same decision as for the customer in the database that is most similar to the applicant
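The nearest-neighbour rule can be sketched on the loan examples from the earlier slide, using Hamming distance over the binary features (the 0/1 encoding and the tie-breaking by list order are my assumptions, not from the slides):

```python
# Minimal nearest-neighbour sketch using the loan data from the slides.
# Features: (address, criminal, income, educated, old, unemployed); 1 = '+', 0 = '-'.
DATA = [
    ((1, 0, 1, 1, 1, 1), "Y"),
    ((0, 1, 0, 1, 0, 0), "N"),
    ((1, 0, 1, 0, 0, 0), "Y"),
    ((0, 0, 1, 1, 0, 0), "Y"),
    ((0, 1, 1, 0, 0, 0), "N"),
    ((0, 0, 1, 0, 0, 1), "Y"),
    ((1, 0, 0, 0, 1, 0), "N"),
    ((1, 1, 1, 0, 1, 0), "N"),
]

def hamming(a, b):
    """Number of positions where the two feature vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbour(query):
    """Return the label of the stored example most similar to the query.

    Ties are broken by list order (an assumption, not from the slides)."""
    features, label = min(DATA, key=lambda ex: hamming(ex[0], query))
    return label
```

For the new applicant (+a, -c, +i, -e, +o, -u) three stored customers are at distance 1; with ties broken by order, the Y example (+a, -c, +i, -e, -o, -u) wins.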
Classification
• Assign object/event to one of a given finite set of categories.
– Medical diagnosis
– Credit card applications or transactions
– Fraud detection in e-commerce
– Worm detection in network packets
– Spam filtering in email
– Recommended articles in a newspaper
– Recommended books, movies, music, or jokes
– Financial investments
– DNA sequences
– Spoken words
– Handwritten letters
– Astronomical images
Problem Solving / Planning /
Control
• Performing actions in an environment in order to
achieve a goal.
– Solving calculus problems
– Playing checkers, chess, or backgammon
– Balancing a pole
– Driving a car or a jeep
– Flying a plane, helicopter, or rocket
– Controlling an elevator
– Controlling a character in a video game
– Controlling a mobile robot
Another Example:
Handwriting Recognition
• Positive examples:
– This is a letter S:
• Negative examples:
– This is a letter Z:
• Background concepts:
– Pixel information
• Categorisations:
– (Matrix, Letter) pairs
– Both positive & negative
• Task:
– Correctly categorise an unseen example into 1 of 26 categories
History
• Roots of work on NN are in:
• Neurobiological studies (more than one century ago):
• How do nerves behave when stimulated by different magnitudes of electric current? Is there a minimal threshold needed for nerves to be activated? Given that no single nerve cell is long enough, how do different nerve cells communicate with each other?
• Psychological studies:
• How do animals learn, forget, recognize and perform other types of tasks?
• Psycho-physical experiments helped to understand how individual neurons and groups of neurons work.
• McCulloch and Pitts introduced the first mathematical model of a single neuron, widely applied in subsequent work.
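The McCulloch-Pitts model just mentioned can be sketched as a simple threshold unit (the weights and thresholds below are standard textbook choices, not taken from the slides):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Illustrative settings (my choices):
# AND: both inputs must be on  -> weights (1, 1), threshold 2
# OR:  either input suffices   -> weights (1, 1), threshold 1
```

The only "knowledge" in the unit is its weights and threshold; learning rules that adjust them came later (perceptron, Adaline).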
History
• Widrow and Hoff (1960): Adaline
• Minsky and Papert (1969): limitations of single-layer perceptrons (and they erroneously claimed that the limitations hold for multi-layer perceptrons)
Stagnation in the 70's:
• Individual researchers continue laying foundations
• von der Malsburg (1973): competitive learning and self-organization
Big neural-nets boom in the 80's:
• Grossberg: adaptive resonance theory (ART)
• Hopfield: Hopfield network
• Kohonen: self-organising map (SOM)
Applications
• Classification:
– Image recognition
– Speech recognition
– Diagnostics
– Fraud detection
– …
• Regression:
– Forecasting (prediction on the basis of past history)
– …
• Pattern association:
– Retrieve an image from a corrupted one
– …
• Clustering:
– client profiles
– disease subtypes
– …
Real Neurons
• Cell structures
– Cell body
– Dendrites
– Axon
– Synaptic terminals
Non Symbolic Representations
• Decision trees can be easily read
– A disjunction of conjunctions (logic)
– We call this a symbolic representation
• Non-symbolic representations
– More numerical in nature, more difficult to read
• Artificial Neural Networks (ANNs)
– A Non-symbolic representation scheme
– They embed a giant mathematical function
• To take inputs and compute an output which is interpreted
as a categorisation
– Often shortened to “Neural Networks”
• Don’t confuse them with real neural networks (in heads)
Complicated Example:
Categorising Vehicles
Neural Network
[Diagram: a single artificial neuron. Inputs x1, x2, …, xn with weights w1, w2, …, wn feed the transfer function f, which produces the output.]
Neuron Model
• A neuron has more than one input: x1, x2, …, xm
• Each input is associated with a weight: w1, w2, …, wm
• The neuron has a bias b
• The net input of the neuron is
n = w1x1 + w2x2 + … + wmxm + b = Σi wixi + b
• Neuron output:
y = f(n)
• f is called the transfer function
Transfer Function
– Linear
– Sigmoid
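The net input and the two transfer functions can be written out directly. A minimal sketch (the example numbers in the note below are illustrative only):

```python
import math

def net_input(x, w, b):
    """Net input n = w1*x1 + ... + wm*xm + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def linear(n):
    """Linear transfer function: y = n."""
    return n

def sigmoid(n):
    """Sigmoid transfer function: y = 1 / (1 + e^-n), output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))
```

For example, net_input((1, 2), (0.5, 0.5), 1) gives n = 2.5, and sigmoid(0) gives 0.5.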
Architecture of ANN
• Feed-Forward networks
Allow the signals to travel one way only, from input to output
• Feed-Back networks
The signals travel in loops; the output is connected back to the input of the network
Learning Rule
Learning takes place by adjusting the weights of the connections.
[Diagram: the single-neuron network again. Inputs x1, …, xn with weights w1, …, wn, transfer function f, output.]
Perceptron
y = f(Σi wixi) = f(WᵀX)
Perceptron Learning Rule
[Diagram: a perceptron with input x (weight W = 1.5), a bias input (weight W = 1), threshold t = 0.0, and output y.]
Exercises
• Design a neural network to solve the following classification problem:
• X1=[2 2], t1=0
• X2=[1 -2], t2=1
• X3=[-2 2], t3=0
• X4=[-1 1], t4=1
Start with initial weights w=[0 0] and bias = 0
Problems
• Four one-dimensional data points belonging to two classes:
X = [1 -0.5 3 -2]
T = [1 -1 1 -1]
W = [-2.5 1.75]
Example
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
Example
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 -1 +1 +1 +1 -1 -1
-1 +1 -1 -1 -1 +1 -1 -1
-1 -1 -1 -1 -1 +1 -1 -1
-1 -1 +1 +1 +1 +1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
AND Network
• In this example we construct a network for the AND operation. The network draws a line that separates the two classes; this is called classification.
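A minimal sketch of training a perceptron on AND with the rule w ← w + η(t − y)x, b ← b + η(t − y) (starting from zero weights with η = 1 is my choice, not from the slides):

```python
def predict(w, b, x):
    """Threshold unit: fire (1) iff w.x + b >= 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1.0):
    """Perceptron rule: w <- w + lr*(t - y)*x, b <- b + lr*(t - y)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = t - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND truth table: output 1 only for input (1, 1).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

Because AND is linearly separable, the rule converges after a handful of epochs; the learned line separates (1, 1) from the other three points.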
Perceptron Geometric View
The equation below describes a (hyper-)plane in the input space consisting of real-valued m-dimensional vectors. The plane splits the input space into two regions, each of them describing one class.

Decision region for C1: w1x1 + w2x2 + w0 ≥ 0
In general: Σi=1..m wixi + w0 ≥ 0
Decision boundary between C1 and C2: w1x1 + w2x2 + w0 = 0
Perceptron: Limitations
• The perceptron can only model linearly separable
classes, like (those described by) the following
Boolean functions:
• AND
• OR
• COMPLEMENT
• It cannot model the XOR.
[Diagram: a multi-layer network with an input layer, a hidden layer, and an output layer.]
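A hidden layer removes the XOR limitation noted above. A minimal sketch with hand-picked threshold units (this particular choice of weights is a standard construction, not from the slides): the hidden units compute OR and NAND, and the output unit ANDs them.

```python
def step(n):
    """Hard threshold: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def xor_net(x1, x2):
    """XOR via one hidden layer of two threshold units (hand-set weights)."""
    h1 = step(x1 + x2 - 0.5)    # OR:   fires if at least one input is 1
    h2 = step(-x1 - x2 + 1.5)   # NAND: fires unless both inputs are 1
    return step(h1 + h2 - 1.5)  # AND of the two hidden units = XOR
```

No single line separates the XOR points, but the two hidden lines carve out the region where exactly one input is on.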
Types of decision regions
• A network with a single node forms a decision region bounded by a hyperplane: it outputs 1 when w0 + w1x1 + w2x2 ≥ 0, with decision boundary w0 + w1x1 + w2x2 = 0.
• A one-hidden-layer network can realize a convex region: four hidden units define the lines L1, L2, L3, L4, and an output unit with weights of 1 and a threshold of 3.5 fires only when all four hidden units are active, i.e. for points inside the intersection of the four half-planes.
Learning rule
• Forward step: network activation and error computation
• Backward step: error propagation
Bp Algorithm
• The weight change rule is
wij(new) = wij(old) + α · error · f′(inputi)
• where α is the learning factor (α < 1)
• error is the difference between the actual and the desired value
• f′ is the derivative of the sigmoid function: f′ = f(1 − f)
Delta Rule
• Each observation contributes a variable amount to the output
• The scale of the contribution depends on the input
• Output errors can be blamed on the weights
• A least-mean-square (LMS) error function can be defined (ideally it should be zero):
E = ½ (t − y)²
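One delta-rule update follows by differentiating E = ½(t − y)² for a linear unit y = w·x + b: dE/dwi = −(t − y)xi, so wi ← wi + η(t − y)xi. A minimal sketch (the learning rate and numbers are illustrative, not from the slides):

```python
def delta_rule_step(w, b, x, t, lr=0.1):
    """One gradient step on E = 0.5*(t - y)^2 for a linear unit y = w.x + b."""
    y = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = t - y                                        # since dE/dy = -(t - y)
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]   # w_i += lr * err * x_i
    b = b + lr * err                                   # bias treated as weight on input 1
    return w, b
```

Starting from w = [0, 0], b = 0 with x = (1, 2) and t = 1, the output is y = 0, the error is 1, and one step gives w = [0.1, 0.2], b = 0.1, which reduces E on the next pass.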
Calculation of Network Error
• Could calculate Network error as
– Proportion of mis-categorised examples
• But there are multiple output units, with numerical output
– So we use a more sophisticated measure:
• Caution:
– May not have enough momentum to get out of local minima
• The problem:
– Face recognition of persons of a known group
in an indoor environment.
• The approach:
– Learn face classes over a wide range of poses using a neural network.
Navigation of a car
• Done by Pomerleau. The network takes inputs from a 34x36 video image and a 7x36 range finder. Output units represent "drive straight", "turn left" or "turn right". After training about 40 times on 1200 road images, the car drove around the CMU campus at 5 km/h (using a small workstation on the car). This was almost twice the speed of any other non-NN algorithm at the time.
Automated driving at 70 mph on a public highway
[Diagram: a 30x32-pixel camera image serves as input; 30x32 weights feed into each of four hidden units; 30 output units encode the steering direction.]
Exercises
• Perform one iteration of backpropagation on a network of two layers. The first layer has one neuron with weight 1 and bias −2. The transfer function in the first layer is f = n².
• The second layer has only one neuron with weight 1 and bias 1. The f in the second layer is 1/n.
• The input to the network is x = 1 and t = 1.
E = ½ (t − y)²,  f(n) = 1/(1 + e^(−n))
[Diagram: a 2-2-1 network. Inputs X1 and X2 connect through weights W11, W12, W21, W22 to two hidden neurons with biases b1 and b2; the hidden neurons connect through weights W13 and W23 to the output neuron with bias b3.]
Using the initial weights [b1 = −0.5, w11 = 2, w12 = 2, w13 = 0.5, b2 = 0.5, w21 = 1, w22 = 2, w23 = 0.25, and b3 = 0.5] and the input vector [2, 2.5] with t = 8, process one iteration of the backpropagation algorithm.
Consider a transfer function f(n) = n². Perform one iteration of backpropagation with α = 0.9 for a neural network with two neurons in the hidden layer and one neuron in the output layer. The input values are X = [1 -1] and t = 8. The weight values between the input and hidden layers are w11 = 1, w12 = −2, w21 = 0.2, and w22 = 0.1. The weights between the hidden and output layers are w1 = 2 and w2 = −2. The biases in the hidden layer are b1 = −1 and b2 = 3.
[Diagram: a 2-2-1 network. Inputs X1 and X2 connect through weights W11, W12, W21, W22 to two hidden neurons, which connect through weights W1 and W2 to the output neuron.]
• Kakuro . . . is a kind of game puzzle. The object of
the puzzle is to insert a digit from 1 to 9 inclusive
into each white cell such that the sum of the
numbers in each entry matches the clue associated
with it and that no digit is duplicated in any entry.
Briefly describe how you’d use Constraint
Satisfaction Problem methods to solve Kakuro
puzzles intelligently.