
Artificial Neural Networks

-Application-

Peter Andras
peter.andras@ncl.ac.uk
www.staff.ncl.ac.uk/peter.andras/lectures
Overview

1. Application principles
2. Problem
3. Neural network solution
Application principles

The solution of a problem should be as simple as possible.

Complicated solutions waste time and resources.

If a problem can be solved with a small look-up table that can be
calculated easily, that is preferable to a complex neural network
with many layers that learns with back-propagation.
Application principles

Speed is crucial for computer game applications.

If possible, on-line neural network solutions should be avoided,
because they are big time consumers. Preferably, neural networks
should be applied in an off-line fashion, where the learning phase
doesn't happen during game playing time.
Application principles

On-line neural network solutions should be very simple.

Using neural networks with many layers should be avoided, if
possible. Complex learning algorithms should be avoided. If
possible, a priori knowledge should be used to set the initial
parameters such that only very short training is needed for optimal
performance.
Application principles

All the available data about the problem should be collected.

Having redundant data is usually a smaller problem than not having
the necessary data.

The data should be partitioned into training, validation and
testing data.
Application principles
The neural network solution of a problem should be
selected from a large enough pool of potential solutions.

Because of the nature of neural networks, if only a single solution
is built, it is likely not to be the optimal one.

If a pool of potential solutions is generated and trained, it is
more likely that one close to the optimal solution is found.
Problem
[Figure "Control": a controlled variable plotted over time
(horizontal axis 2-5), fluctuating between about 0.92 and 1.06
around the target value 1.]
The objective is to maintain some variable in a given range
(possibly around a fixed value), by changing the values of other,
directly modifiable (controllable) variables.

Example: keeping a stick vertical on a finger, by moving your arm,
such that the stick doesn't fall.
Problem
Movement control:

How to move the parts (e.g., legs, arms, head) of an animated
figure that moves on some terrain, using various types of movements
(e.g., walks, runs, jumps)?
Problem

Problem analysis:
• variables
• modularisation into sub-problems
• objectives
• data collection
Problem

Simple problems need simple solutions.

If the animated figure has only a few components, moves on simple
terrains, and is intended to perform only a few simple moves (e.g.,
two types of leg and arm movements, no head movement), the movement
control can be described by a few rules.
Problem

Example rules for a simple problem:

IF (left_leg IS forward) AND (right_leg IS backward) THEN
    right_leg CHANGES TO forward
    left_leg CHANGES TO backward
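Rules like this translate directly into ordinary conditionals; the sketch below is a hypothetical illustration (the state names and the `step` function are not part of any particular engine):

```python
# Minimal sketch of rule-based movement control (hypothetical state names).

def step(state):
    """Apply the walking rules to a figure's leg state and return the new state."""
    left, right = state["left_leg"], state["right_leg"]
    # IF (left_leg IS forward) AND (right_leg IS backward) THEN swap them.
    if left == "forward" and right == "backward":
        return {"left_leg": "backward", "right_leg": "forward"}
    if left == "backward" and right == "forward":
        return {"left_leg": "forward", "right_leg": "backward"}
    return dict(state)  # no rule applies: state unchanged

state = {"left_leg": "forward", "right_leg": "backward"}
state = step(state)
print(state)  # {'left_leg': 'backward', 'right_leg': 'forward'}
```

Each call to `step` advances the walk by one alternation, which is all a rule table of this size can express.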
Problem

Controlling complex movements needs complex rules.

Complex rules by simple solutions:

        A1     A2     A3     A4
  B1    M1     M4     M1a    M3
  B2    M3     M2     M2     M4
  B3    M1a    M1     M3     M4

Simple solutions acquire a very complex structure.
Problem
Complex solutions by complex methods:

[Figure: a curve relating Variable A (horizontal axis) to
Variable B (vertical axis).]

Approximation of the functional relationship by a neural network.
Neural network solution
Problem specification:
input and output variables
other specifications (e.g., smoothness)
Example: desired movement parameters for given input values
t    1      2      3      4      5      6      7      8      9      10
x1   0.105  0.060  0.754  0.892  0.414  0.881  0.171  0.447  0.966  0.593
x2   0.133  0.465  0.789  0.894  0.869  0.519  0.767  0.224  0.270  0.016
x3   0.685  0.292  0.732  0.969  0.567  0.047  0.581  0.009  0.621  0.623
y1   0.851  0.999  1.498  1.421  1.253  1.471  1.003  1.103  1.565  1.166
y2   0.083  0.059  0.965  1.125  0.480  1.265  0.164  0.491  1.035  0.487
Neural network solution
Problem modularisation:
separating sub-problems that are solved separately
Example: the movements should be separated on the basis of causal
independence and connectedness:

separate solutions for y1 and y2 if they are causally independent,
a joint solution if they are interdependent, a connected solution
if one is causally dependent on the other
Neural network solution

Data collection and organization:
training, validation and testing data sets

Example:
Training set: ~ 75% of the data
Validation set: ~ 10% of the data
Testing set: ~ 5% of the data
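A partition along these lines can be done with a simple shuffled split; in this sketch the training and validation fractions are taken first and all remaining rows fall into the testing set:

```python
import random

def partition(data, train_frac=0.75, val_frac=0.10, seed=0):
    """Shuffle the data and partition it into training, validation and testing sets."""
    items = list(data)
    random.Random(seed).shuffle(items)     # fixed seed keeps the split reproducible
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]         # everything left over
    return train, val, test

train, val, test = partition(range(100))
print(len(train), len(val), len(test))  # 75 10 15
```

Shuffling before splitting matters: if the data were collected in some order (e.g., by terrain type), an unshuffled split would give the network an unrepresentative training set.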
Neural network solution

Solution design:
neural network model selection

Example:
$f(x) = e^{-\frac{\|x - w\|^2}{2a^2}}$

[Figure: a network with inputs x1, x2, x3 feeding a layer of
Gaussian neurons, combined into the output yout.]
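A Gaussian-neuron network of this kind can be sketched in a few lines of NumPy; the centres, widths and weights below are random placeholders standing in for trained parameters:

```python
import numpy as np

def gaussian_neuron(x, c, a):
    """f(x) = exp(-||x - c||^2 / (2 a^2)): a Gaussian (radial basis) neuron."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * a ** 2))

def rbf_forward(x, centres, widths, weights):
    """Network output y_out: weighted sum of the Gaussian neuron activations."""
    acts = np.array([gaussian_neuron(x, c, a) for c, a in zip(centres, widths)])
    return float(weights @ acts)

rng = np.random.default_rng(0)
centres = rng.random((4, 3))          # 4 Gaussian neurons over 3 inputs (x1, x2, x3)
widths = np.full(4, 0.5)
weights = rng.random(4)
x = np.array([0.105, 0.133, 0.685])   # first input row from the example table
print(rbf_forward(x, centres, widths, weights))
```

Each neuron responds most strongly near its centre `c` and falls off at a rate set by its width `a`, which is what makes this model a smooth function approximator.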
Neural network solution

Generation of a pool of candidate models.

Example: a pool of 20 networks W1, ..., W20, each with four Gaussian
neurons:

$y_{out}^{j} = \sum_{k=1}^{4} w_k^{j} \cdot e^{-\frac{\|x - c_{k,j}\|^2}{2(a_{k,j})^2}}, \qquad j = 1, \ldots, 20$
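Generating the pool then amounts to drawing different random initial parameters for each candidate network; the parameter ranges in this sketch are illustrative assumptions:

```python
import numpy as np

def make_candidate(rng, n_neurons=4, n_inputs=3):
    """One candidate RBF network: random centres c_k, widths a_k and weights w_k."""
    return {
        "centres": rng.random((n_neurons, n_inputs)),   # c_k in the input range [0,1]
        "widths": rng.uniform(0.1, 1.0, n_neurons),     # a_k > 0
        "weights": rng.normal(0.0, 1.0, n_neurons),     # w_k
    }

def make_pool(n_candidates=20, seed=0):
    """The pool W1, ..., W20 of candidate networks."""
    rng = np.random.default_rng(seed)
    return [make_candidate(rng) for _ in range(n_candidates)]

pool = make_pool()
print(len(pool))  # 20
```

Because each candidate starts from a different random initialisation, training the whole pool explores different local optima, which is exactly why a pool beats a single trained network.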
Neural network solution
Learning the task from the data:
we apply the learning algorithm to each network from
the solution pool
we use the training data set
Example: for the first training input $x(1) = (0.105, 0.133, 0.685)$:

Before training:
$y_{out}^{1}(1) = w_1^1 f_1(x(1)) + w_2^1 f_2(x(1)) + w_3^1 f_3(x(1)) + w_4^1 f_4(x(1)) = 0.997$
$E = (y_{out}^{1}(1) - y_1(1))^2 = (0.997 - 0.851)^2 = 0.0213$
$w_{1,new}^{1} = w_1^1 - c \cdot 0.146 \cdot f_1(x(1))$

After further training:
$y_{out}^{1}(1) = 0.847$
$E = (y_{out}^{1}(1) - y_1(1))^2 = (0.847 - 0.851)^2 = 0.000016$
$w_{1,new}^{1} = w_1^1 - c \cdot (-0.004) \cdot f_1(x(1))$
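The update above is one step of gradient descent on the squared error; a minimal sketch (the learning rate value and the illustrative initial parameters are assumptions):

```python
import numpy as np

def gaussian(x, c, a):
    return np.exp(-np.sum((x - c) ** 2) / (2 * a ** 2))

def train_step(net, x, target, lr=0.1):
    """One gradient-descent step on E = (y_out - y)^2 for the output weights."""
    acts = np.array([gaussian(x, c, a)
                     for c, a in zip(net["centres"], net["widths"])])
    y_out = float(net["weights"] @ acts)
    error = y_out - target                 # e.g. 0.997 - 0.851 = 0.146 on the slide
    net["weights"] = net["weights"] - lr * error * acts   # w_new = w - c * error * f(x)
    return error ** 2

net = {"centres": np.zeros((4, 3)) + 0.5, "widths": np.full(4, 0.5),
       "weights": np.ones(4)}
x = np.array([0.105, 0.133, 0.685])
for _ in range(50):
    E = train_step(net, x, 0.851)
print(E < 1e-4)  # True: repeated steps drive the squared error down
```

Note that the error term carries its sign: once the output overshoots the target, the same rule pushes the weights back the other way.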
Neural network solution
Learning the task from the data:
[Figure: the network's approximated surface before learning (left)
and after learning (right).]

Neural network solution
Neural network solution selection:
each candidate solution is tested with the validation data and the
best performing network is selected

[Figure: the target surface, and the output surfaces of candidate
networks Network 11, Network 4 and Network 7.]
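Selection from the pool is then an argmin over validation error; a minimal sketch, assuming each candidate can be called as a prediction function:

```python
def select_best(candidates, val_inputs, val_targets):
    """Return the candidate with the lowest mean squared error on validation data."""
    def mse(predict):
        errs = [(predict(x) - y) ** 2 for x, y in zip(val_inputs, val_targets)]
        return sum(errs) / len(errs)
    return min(candidates, key=mse)

# Toy pool: three candidate "networks", each just a function here.
pool = [lambda x: 0.0, lambda x: x, lambda x: 2 * x]
best = select_best(pool, [1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
print(best(2.0))  # the identity-like candidate fits the validation data best: 2.0
```

The key point is that the validation set, not the training set, drives the choice: a candidate that merely memorised the training data would score poorly here.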
Neural network solution

Choosing a solution representation:

the solution can be represented directly as a neural network,
specifying the parameters of the neurons

alternatively, the solution can be represented as a
multi-dimensional look-up table

the representation should allow fast use of the solution within
the application
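A look-up table representation can be precomputed from the trained network once, so that no network evaluation happens at run time; a one-input sketch with a stand-in for the trained network:

```python
import numpy as np

def build_lookup(net, lo=0.0, hi=1.0, n=11):
    """Tabulate a one-input network on a uniform grid, once, off-line."""
    grid = np.linspace(lo, hi, n)
    return grid, np.array([net(x) for x in grid])

def lookup(grid, table, x):
    """Nearest-neighbour look-up at run time: no network evaluation needed."""
    i = int(round((x - grid[0]) / (grid[1] - grid[0])))
    return table[max(0, min(i, len(table) - 1))]   # clamp to the table's range

grid, table = build_lookup(lambda x: x * x)   # stand-in for the trained network
print(lookup(grid, table, 0.31))  # nearest grid point is 0.3, so about 0.09
```

The trade-off is memory for speed: a finer grid (or interpolation between entries) gives a closer match to the network at the cost of a larger table.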
Summary
• Neural network solutions should be kept as simple as possible.
• For the sake of gaming speed, neural networks should preferably be
applied off-line.
• A large data set should be collected and divided into training,
validation, and testing data.
• Neural networks are well suited as solutions to complex problems.
• A pool of candidate solutions should be generated, and the
best candidate solution should be selected using the validation
data.
• The solution should be represented to allow fast application.
Questions
1. Are the immune cells part of the nervous system?
2. Can an artificial neuron receive inhibitory and excitatory
inputs?
3. Do Gaussian neurons use a sigmoidal activation function?
4. Can we use general optimisation methods to calculate the weights
of neural networks with a single nonlinear layer?
5. Does the application of neural networks increase the speed of
simple games?
6. Should we have a validation data set when we train neural
networks?
