
Application
Peter Andras
peter.andras@ncl.ac.uk
www.staff.ncl.ac.uk/peter.andras/lectures

Overview

1. Application principles
2. Problem
3. Neural network solution

Application principles
The solution of a problem must be as simple as possible; complicated solutions waste time and resources. If a problem can be solved with a small, easily calculated look-up table, that is preferable to a complex neural network with many layers that learns with back-propagation.

Application principles
Speed is crucial for computer game applications. Neural networks should preferably be applied in an off-line fashion, where the learning phase does not happen during game play. If possible, on-line neural network solutions should be avoided, because they are big time consumers.

Application principles
On-line neural network solutions should be very simple: many-layer networks should be avoided, and complex learning algorithms should be avoided if possible. If possible, a priori knowledge should be used to set the initial parameters, so that only a very short training is needed for optimal performance.

Application principles
All the available data about the problem should be collected. Having redundant data is usually a smaller problem than not having the necessary data. The data should be partitioned into training, validation and testing sets.

Application principles
The neural network solution of a problem should be selected from a large enough pool of potential solutions. Because of the nature of neural networks, if a single solution is built, it is likely not to be the optimal one. If a pool of potential solutions is generated and trained, it is more likely that one close to the optimum is found.

Problem
Control: the objective is to maintain some variable in a given range (possibly around a fixed value) by changing the values of other, directly modifiable (controllable) variables.
Example: keeping a stick vertically balanced on a finger by moving your arm, such that the stick does not fall.
[Figure: the controlled variable plotted over time, held within the target range by the control actions.]
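The control objective can be sketched as a tiny feedback loop. This is a toy illustration only: the one-line "dynamics" and the gain value are invented for the sketch, not a physical model of a balancing stick.

```python
def control_step(stick_angle, arm_position, gain=0.5):
    """One step of a toy control loop: nudge the controllable variable
    (arm position) in the direction the stick leans, which pushes the
    controlled variable (stick angle) back towards zero."""
    arm_position += gain * stick_angle   # action on the controllable variable
    stick_angle -= gain * stick_angle    # crude stand-in for the correction
    return stick_angle, arm_position

angle, arm = 10.0, 0.0
for _ in range(5):
    angle, arm = control_step(angle, arm)
print(round(angle, 4))  # → 0.3125 (the angle shrinks towards zero)
```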

Problem
Movement control: how to move the parts (e.g. legs, arms, head) of an animated figure that moves on some terrain, using various types of movements (e.g. runs, walks, jumps)?

Problem
Problem analysis:
• variables
• modularisation into sub-problems
• objectives
• data collection

Problem
Simple problems need simple solutions. If the animated figure has only a few components, moves on simple terrain, and is intended to do a few simple moves (e.g. two types of leg and arm movements, no head movement), the movement control can be described by a few rules.

Problem
Example rules for a simple problem:
IF (left_leg IS forward) AND (right_leg IS backward)
THEN right_leg CHANGES TO forward
     left_leg CHANGES TO backward
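The IF/THEN rule above maps directly onto ordinary conditional code. The state keys and dictionary representation are illustrative choices, not part of the original slides:

```python
def step_rule(state):
    """Apply the slide's example rule:
    IF left_leg IS forward AND right_leg IS backward,
    THEN swap the two legs; otherwise leave the state unchanged."""
    if state["left_leg"] == "forward" and state["right_leg"] == "backward":
        return {"left_leg": "backward", "right_leg": "forward"}
    return state

state = {"left_leg": "forward", "right_leg": "backward"}
print(step_rule(state))  # → {'left_leg': 'backward', 'right_leg': 'forward'}
```

Adding body parts or movement types multiplies the number of such rules, which is exactly why the slides argue that complex movement control outgrows a hand-written rule base.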

Problem
Controlling complex movements needs complex rules. When complex rules are built from simple solutions, the simple solutions acquire a very complex structure.
[Diagram: a tangle of condition nodes A1–A4 and B1–B3 mapping to movements M1–M4.]

Problem
Complex solutions by complex methods: approximation of the functional relationship between the variables by a neural network.
[Figure: scatter of Variable A against Variable B with a fitted curve.]

Neural network solution
Problem specification:
• input and output variables
• other specifications (e.g. smoothness)
Example: desired movement parameters for given input values.
[Table: ten sampled rows with columns t, x1, x2, x3, y1, y2.]

Neural network solution
Problem modularisation: separating sub-problems that are solved separately.
Example: the movements should be separated on the basis of causal independence and connectedness:
• separate solutions for y1 and y2 if they are causally independent
• a joint solution if they are interdependent
• a connected solution if one is causally dependent on the other

Neural network solution
Data collection and organization: training, validation and testing data sets.
Example:
Training set: ~75% of the data
Validation set: ~10% of the data
Testing set: ~5% of the data
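The partitioning above can be sketched in a few lines. The function name, the fixed seed, and the choice to leave any remainder unused are assumptions made for the sketch:

```python
import random

def split_data(samples, fracs=(0.75, 0.10, 0.05)):
    """Partition the collected samples into training, validation and
    testing sets using the slide's rough proportions (75% / 10% / 5%).
    Samples not covered by the fractions are simply left unused here."""
    shuffled = samples[:]
    random.Random(42).shuffle(shuffled)   # fixed seed, for repeatability
    sets, start = [], 0
    for frac in fracs:
        n = int(len(shuffled) * frac)
        sets.append(shuffled[start:start + n])
        start += n
    return sets

train, val, test = split_data(list(range(1000)))
print(len(train), len(val), len(test))  # → 750 100 50
```

Shuffling before splitting matters: if the collected samples are ordered (e.g. by time), an unshuffled split would make the validation set unrepresentative of the training set.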

Neural network solution
Solution design: neural network model selection.
Example: a network with inputs x1, x2, x3 and output y_out built from Gaussian neurons, each computing

f(x) = exp( −‖x − w‖² / (2a²) )
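The Gaussian neuron's activation f(x) = exp(−‖x − w‖²/(2a²)) can be computed directly; the function name below is illustrative:

```python
import math

def gaussian_neuron(x, w, a):
    """Activation of a Gaussian (radial basis) neuron:
    f(x) = exp(-||x - w||^2 / (2 a^2)),
    where w is the neuron's centre and a its width."""
    sq_dist = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-sq_dist / (2.0 * a ** 2))

# The activation is 1 at the centre and decays with distance from it.
print(gaussian_neuron([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0))  # → 1.0
print(gaussian_neuron([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 1.0) < 1.0)  # → True
```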

Neural network solution
Generation of a pool of candidate models.
Example: twenty candidate networks W1, W2, …, W20, each with four Gaussian hidden neurons:

y_out^1 = Σ_{k=1..4} w_k^1 · exp( −‖x − c_{k,1}‖² / (2 a_{k,1}²) )
y_out^2 = Σ_{k=1..4} w_k^2 · exp( −‖x − c_{k,2}‖² / (2 a_{k,2}²) )
…
y_out^20 = Σ_{k=1..4} w_k^20 · exp( −‖x − c_{k,20}‖² / (2 a_{k,20}²) )
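Generating such a pool of candidates can be sketched as below. The parameter ranges, the dictionary layout, and the pool size of 20 (matching the slide's W1…W20) are assumptions for the sketch:

```python
import math
import random

def make_rbf_network(n_inputs, n_hidden, rng):
    """One candidate network: random centres c_k, widths a_k and output
    weights w_k for y_out = sum_k w_k * exp(-||x - c_k||^2 / (2 a_k^2))."""
    return {
        "centres": [[rng.uniform(0.0, 1.0) for _ in range(n_inputs)]
                    for _ in range(n_hidden)],
        "widths": [rng.uniform(0.5, 1.5) for _ in range(n_hidden)],
        "weights": [rng.uniform(-1.0, 1.0) for _ in range(n_hidden)],
    }

def rbf_output(net, x):
    """Evaluate the candidate network on input vector x."""
    y = 0.0
    for c, a, w in zip(net["centres"], net["widths"], net["weights"]):
        sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-sq / (2.0 * a ** 2))
    return y

rng = random.Random(0)
pool = [make_rbf_network(n_inputs=3, n_hidden=4, rng=rng) for _ in range(20)]
print(len(pool))  # → 20
```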

Neural network solution
Learning the task from the data: we apply the learning algorithm to each network from the solution pool, using the training data set.
Example: training sample x(1) = (0.105, 0.133, 0.685) with target y_1(1) = 0.851.

y_out^1(1) = w_1^1·f_1(x(1)) + w_2^1·f_2(x(1)) + w_3^1·f_3(x(1)) + w_4^1·f_4(x(1))

with the gradient-descent update w_1,new^1 = w_1^1 − c · (y_out^1(1) − y_1(1)) · f_1(x(1)).

Before training: y_out^1(1) = 0.997, E = (0.997 − 0.851)² = 0.0213, so the update factor is 0.146.
After some updates: y_out^1(1) = 0.847, E = (0.847 − 0.851)² = 0.000016, so the update factor is −0.004.
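The weight update in the example is ordinary gradient descent on the squared error. A minimal sketch, with made-up activation values and target (only the update scheme follows the slides; the factor 2 from differentiating the square is written out explicitly here rather than absorbed into the learning rate):

```python
def train_step(weights, activations, y_target, lr):
    """One gradient-descent update of the output weights:
    E = (y_target - y_out)^2, dE/dw_k = -2 (y_target - y_out) f_k(x),
    and each weight moves by -lr * dE/dw_k."""
    y_out = sum(w * f for w, f in zip(weights, activations))
    error = y_target - y_out
    new_weights = [w + lr * 2.0 * error * f
                   for w, f in zip(weights, activations)]
    return new_weights, error ** 2

# Illustrative values; repeated updates drive the squared error to zero.
acts, target = [0.9, 0.4, 0.2, 0.1], 0.851
weights = [0.5, 0.5, 0.5, 0.5]
for _ in range(50):
    weights, sq_err = train_step(weights, acts, target, lr=0.1)
print(sq_err < 1e-4)  # → True
```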

Neural network solution
Learning the task from the data:
[Figure: the network's output surface before learning and after learning, fitted to the training data.]

Neural network solution
Neural network solution selection: each candidate solution is tested with the validation data, and the best-performing network is selected.
[Figure: validation fits of Network 4, Network 7 and Network 11.]
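Selection over the pool reduces to scoring each candidate on the validation set and keeping the winner. In this sketch the "networks" are just slopes of a line, so the whole mechanism fits in a few lines; all names are illustrative:

```python
def select_best(candidates, predict, val_set):
    """Return the candidate with the lowest mean squared error
    on the validation set."""
    def mse(net):
        return sum((y - predict(net, x)) ** 2
                   for x, y in val_set) / len(val_set)
    return min(candidates, key=mse)

# Toy candidates: each "network" is a slope in y = slope * x.
candidates = [0.5, 1.0, 2.0, 3.0]
val_set = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]  # validation data from y = 2x
best = select_best(candidates, lambda net, x: net * x, val_set)
print(best)  # → 2.0
```

The same `select_best` works unchanged for real networks: only `candidates` and `predict` change.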

Neural network solution
Choosing a solution representation:
• the solution can be represented directly as a neural network, specifying the parameters of the neurons
• alternatively, the solution can be represented as a multi-dimensional look-up table
• the representation should allow fast use of the solution within the application
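The look-up table representation trades memory for speed: the network is evaluated once on a grid offline, and the game then only indexes into the table. A one-dimensional sketch, with made-up network parameters and a nearest-neighbour lookup:

```python
import math

def trained_net(x):
    """Stand-in for a trained one-input Gaussian network
    (centre, width and weight are made up for illustration)."""
    return 1.5 * math.exp(-(x - 0.5) ** 2 / (2.0 * 0.2 ** 2))

def build_lookup_table(fn, lo, hi, n):
    """Pre-compute fn on an n-point grid over [lo, hi] and return a
    fast nearest-neighbour lookup function over that table."""
    step = (hi - lo) / (n - 1)
    table = [fn(lo + i * step) for i in range(n)]
    def lookup(x):
        i = round((x - lo) / step)
        return table[max(0, min(n - 1, i))]   # clamp to the grid
    return lookup

lookup = build_lookup_table(trained_net, 0.0, 1.0, 101)
print(abs(lookup(0.5) - trained_net(0.5)) < 1e-6)  # → True
```

For several inputs the table becomes multi-dimensional, and its size grows with the grid resolution in every dimension, which is the usual limit of this representation.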

Summary
• Neural network solutions should be kept as simple as possible.
• For the sake of gaming speed, neural networks should preferably be applied off-line.
• Neural networks fit as solutions of complex problems.
• A large data set should be collected, and it should be divided into training, validation and testing data.
• A pool of candidate solutions should be generated, and the best candidate solution should be selected using the validation data.
• The solution should be represented in a way that allows fast application.

Questions
1. Are the immune cells part of the nervous system?
2. Can an artificial neuron receive inhibitory and excitatory inputs?
3. Do Gaussian neurons use a sigmoidal activation function?
4. Can we use general optimisation methods to calculate the weights of neural networks with a single nonlinear layer?
5. Does the application of neural networks increase the speed of simple games?
6. Should we have a validation data set when we train neural networks?

