
NEURAL NETWORKS
Basics using MATLAB
Neural Network Toolbox

Multilayer Perceptron (MLP)
· An MLP consists of an input layer, several
hidden layers, and an output layer.
· Node i, also called a neuron, includes a
summer and a nonlinear activation function g.
· n_i is the input to the activation function g.
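A minimal sketch of this summer-plus-activation structure for a single node (the input p, weights w and bias b below are made-up illustrative values, not from the slides):
% One neuron: weighted sum of the inputs plus a bias, passed through tanh
p = [0.5; -1.2; 2.0];     % example input vector (made up)
w = [0.1, 0.4, -0.3];     % example weight row vector (made up)
b = 0.2;                  % example bias (made up)
n = w*p + b;              % summer output n_i
a = tansig(n);            % neuron output through the tanh activation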
Multilayer Perceptron (MLP)
· As the activation function, for mathematical convenience a
hyperbolic tangent (tanh) or a sigmoid function is most
commonly used.
· By connecting several nodes in parallel and in series, an MLP
network is formed.
Basics using MATLAB Neural
Network Toolbox
· The MATLAB commands used in the procedure
are newff (type of architecture, size and type of
training algorithm), train and sim.
· newff : creates a feed-forward backpropagation
network.
· The MATLAB command newff generates an
MLP neural network, which here is called net.
· The training algorithm (e.g. traingd or trainlm) is
passed to newff as the network training function argument.
Activation function
transfer functions:
· hardlim
· hardlims
· purelin
Activation function
· satlin
· satlins
· logsig
· tansig
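These transfer functions can be compared by evaluating them over a range of net inputs (a minimal sketch; the input range -5:0.1:5 is an arbitrary choice):
n = -5:0.1:5;                                    % range of net-input values (arbitrary)
plot(n, hardlim(n), n, purelin(n), n, logsig(n), n, tansig(n));
legend('hardlim','purelin','logsig','tansig'); grid;
xlabel('net input n'); ylabel('a = f(n)'); title('Transfer functions');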
Training
· To create a network that can handle noisy input
vectors it is best to train the network on both ideal
and noisy vectors. To do this, the network is first
trained on ideal vectors until it has a low sum-
squared error.
· To test the result the sim command is applied. The
output of the MLP network is called a.
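A minimal sketch of this two-stage idea, assuming P and T are the ideal input/target data (as defined in Example-1 below) and that a noise level of 0.1 is acceptable:
% Stage 1: train on the ideal input/target pairs P, T
net1 = train(net, P, T);
% Stage 2: continue training on noise-corrupted copies of the same inputs
Pn = P + 0.1*randn(size(P));        % assumed noise level of 0.1
net2 = train(net1, [P Pn], [T T]);
% Test with sim; the network output is conventionally called a
a = sim(net2, P);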
Basic flow diagram
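The basic flow can be summarized with the commands used throughout the examples below (a sketch; the specific argument values are taken from Example-1):
% 1. Prepare training data (inputs P, targets T)
% 2. Create the network architecture
net = newff([0 2], [5,1], {'tansig','purelin'}, 'traingd');
% 3. Set training parameters (display interval, learning rate, epochs, goal)
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-3;
% 4. Train the network
net1 = train(net, P, T);
% 5. Simulate the trained network and compare its output a with the targets T
a = sim(net1, P);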
Example-1
· Consider the humps function in MATLAB. It is given by:
y = 1 ./ ((x-.3).^2 + .01) + 1 ./ ((x-.9).^2 + .04) - 6
but in MATLAB it can be called by humps. Here we would like to
see if it is possible to find a neural network to fit the data
generated by the humps function on the interval [0, 2]:
· a) Fit a multilayer perceptron network on the data. Try
different network sizes and different teaching algorithms.
· b) Repeat the exercise with radial basis function
networks.
solution (a)
· To obtain the data use the following commands:
x = 0:.05:2; y=humps(x);
P=x; T=y;
plot(P,T,'x')
grid; xlabel('time (s)'); ylabel('output'); title('humps function')
Step-1
[Figure: plot of the humps function, output vs. time (s)]
Design the network
Step-2
DESIGN THE NETWORK
==================
First try a simple one - feedforward (multilayer perceptron network)
net=newff([0 2], [5,1], {'tansig','purelin'},'traingd');
Here newff defines feedforward network architecture.
The first argument [0 2] defines the range of the input and initializes
the network parameters.
The second argument defines the structure of the network. There are two
layers.
5 is the number of the nodes in the first hidden layer,
1 is the number of nodes in the output layer,
Next the activation functions in the layers are defined.
In the first hidden layer there are 5 tansig functions.
In the output layer there is 1 linear function.
'traingd' defines the basic learning scheme - the gradient descent method.
traingd is a network training function that updates weight and bias values
according to gradient descent.
Design the network
Define learning parameters
net.trainParam.show = 50; The result is shown at every 50th iteration
(epoch)
net.trainParam.lr = 0.05; Learning rate used in some gradient schemes
net.trainParam.epochs =1000; Max number of iterations
net.trainParam.goal = 1e-3; Error tolerance; stopping criterion
Train network
net1 = train(net, P, T); Iterates gradient type of loop
Resulting network is stored in net1
· The goal is still far away after 1000 iterations
(epochs).
Step-3
solution (a)
[Figure: training record, Training-Blue / Goal-Black performance vs. epochs;
after 1000 epochs the performance is about 28.8, still far above the goal of 1e-3]
solution (a)
Simulate how good a result is achieved: Input is the same
input vector P.
Output is the output of the neural network, which should be
compared with output data
a= sim(net1,P);
Plot result and compare
plot(P,a-T, P,T); grid;
[Figure: network output and the error a-T plotted against the humps data over the input range 0 to 2]
Step-4
solution (a)
The fit is quite bad, especially in the beginning. Increase
the size of the network: use 20 nodes in the first hidden
layer.
net=newff([0 2], [20,1], {'tansig','purelin'},'traingd');
Otherwise apply the same algorithm parameters and
start the training process.
net.trainParam.show = 50; The result is shown at every 50th
iteration (epoch)
net.trainParam.lr = 0.05; Learning rate used in some gradient
schemes
net.trainParam.epochs =1000; Max number of iterations
net.trainParam.goal = 1e-3; Error tolerance; stopping criterion
Train network
net1 = train(net, P, T); Iterates gradient type of loop
Step-5
solution (a)
· The error goal of 0.001 is not reached now either, but the
situation has improved significantly.
[Figure: training record, Training-Blue / Goal-Black performance vs. epochs;
after 1000 epochs the performance is about 0.35, still above the goal of 1e-3]
solution (a)
Step-6
Simulate how good a result is achieved: Input is the same
input vector P.
Output is the output of the neural network, which should be
compared with output data
a= sim(net1,P);
Plot result and compare
plot(P,a-T, P,T); grid;
[Figure: network output and error compared with the humps data after increasing the hidden layer to 20 nodes]
solution (a)
· Try Levenberg-Marquardt (trainlm). Use also a smaller
network - 10 nodes in the first hidden layer.
net=newff([0 2], [10,1], {'tansig','purelin'},'trainlm');
Define parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs =1000;
net.trainParam.goal = 1e-3;
Train network
net1 = train(net, P, T);
Step-7
[Figure: trainlm training record, Training-Blue / Goal-Black performance vs. epochs; the goal is reached after 495 epochs]
solution (a)
· Performance is now
according to the tolerance
specification.
Simulate result
a= sim(net1,P);
Plot the result and the error
plot(P,a-T,P,T)
xlabel('Time (s)'); ylabel('Output
of network and error');
title('Humps function')
Step-8
[Figure: Humps function - output of the trained network and the error vs. time (s)]
solution (a)
It is clear that the LM (Levenberg-Marquardt) algorithm is
significantly faster than, and preferable to, basic back-
propagation. Note that depending on the
initialization the algorithm converges more slowly or
more quickly.
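The difference can be checked on the same humps data by training the same architecture with both algorithms (a minimal sketch; the tic/toc timing and the identical parameter settings are illustrative assumptions):
net_gd = newff([0 2], [10,1], {'tansig','purelin'}, 'traingd');   % gradient descent
net_lm = newff([0 2], [10,1], {'tansig','purelin'}, 'trainlm');   % Levenberg-Marquardt
net_gd.trainParam.epochs = 1000; net_gd.trainParam.goal = 1e-3;
net_lm.trainParam.epochs = 1000; net_lm.trainParam.goal = 1e-3;
tic; net_gd = train(net_gd, P, T); t_gd = toc;   % typically stops at the epoch limit
tic; net_lm = train(net_lm, P, T); t_lm = toc;   % typically reaches the goal in far fewer epochs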
solution (b)
· RADIAL BASIS FUNCTION NETWORKS
· Here we would like to find a function, which fits the 41 data
points using a radial basis network. A radial basis network
is a network with two layers. It consists of a hidden layer of
radial basis neurons and an output layer of linear neurons.
Here is a typical shape of a radial basis transfer function
used by the hidden layer:
p = -3:.1:3;
a = radbas(p);
plot(p,a)
[Figure: the radial basis transfer function radbas(p) plotted for p from -3 to 3]

solution (b)
· We can use the function newrb to quickly create a
radial basis network, which approximates the function
at these data points. Generate data as before :
x = 0:.05:2; y=humps(x);
P=x; T=y;
plot(P,T)
[Figure: the humps training data T plotted against P]
Step-1
solution (b)
· The simplest form of
newrb command is :
net1 = newrb(P,T);
Step-2
[Figure: newrb training record; after 25 epochs (neurons) the performance is about 822.6]
net = newrb(P,T,GOAL,SPREAD)
solution (b)
· For humps the network training leads to
singularity and therefore difficulties in training.
Simulate and plot the result:
a= sim(net1,P);
plot(P,T-a,P,T)
[Figure: network output and error T-a compared with the humps data for the default newrb network]
solution (b)
The plot shows that the network approximates humps but
the error is quite large. The problem is that
the default values of the two parameters of the network are
not very good. Default values are goal -
mean squared error goal = 0.0, spread - spread of radial
basis functions = 1.0.
In our example choose goal = 0.02 and spread = 0.1.
goal=0.02; spread= 0.1;
net1 = newrb(P,T,goal,spread);
Simulate and plot the result
a= sim(net1,P);
plot(P,T-a,P,T)
xlabel('Time (s)'); ylabel('Output of network and error');
title('Humps function approximation - radial basis function')
solution (b)
[Figure: Humps function approximation - radial basis function: output of network and error vs. time (s)]
What is the significance of a small
value of spread? What about a
large one?
· The problem in the first case was too large
a spread (default = 1.0), which leads to
too sparse a solution.
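The effect can be seen by training two radial basis networks on the same humps data with very different spreads (a sketch only; the spread values 0.01 and 2 are arbitrary illustrations):
goal = 0.02;
net_small = newrb(P, T, goal, 0.01);   % very small spread: narrow, spiky basis functions
net_large = newrb(P, T, goal, 2);      % very large spread: broad, strongly overlapping basis functions
a_small = sim(net_small, P);
a_large = sim(net_large, P);
plot(P, T, P, a_small, P, a_large); grid;
legend('data', 'spread = 0.01', 'spread = 2');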
Example-2
Consider a surface described by z = cos(x) sin(y)
defined on the square
-2 ≤ x ≤ 2, -2 ≤ y ≤ 2.
a) Plot the surface z as a function of x and y. This is
a demo function in MATLAB, so you can also find
it there.
b) Design a neural network, which will fit the data.
You should study different alternatives and test the
final result by studying the fitting error.
solution
Generate data:
x = -2:0.25:2; y = -2:0.25:2;
z = cos(x)'*sin(y);
Draw the surface (here grid size of 0.1 has
been used):
mesh(x,y,z)
xlabel('x axis'); ylabel('y axis'); zlabel('z axis');
title('surface z = cos(x)sin(y)');
gi=input('Strike any key ...');
Step-1
solution
[Figure: mesh plot of the surface z = cos(x)sin(y) over the x and y axes]
solution
Store data in input matrix P and output vector T
P = [x;y]; T = z;
Use a fairly small number of neurons in the first layer, say
25, and 17 in the output layer.
Initialize the network
net=newff([-2 2; -2 2], [25 17], {'tansig' 'purelin'},'trainlm');
Apply the Levenberg-Marquardt algorithm
Define parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-3;
Train network
net1 = train(net, P, T);
gi=input('Strike any key ...');
Step-2
solution
[Figure: trainlm training record for the surface fit (Training-Blue, Goal-Black) over 4 epochs]
solution
Simulate the response of the neural network and
draw the corresponding surface:
a= sim(net1,P);
mesh(x,y,a)
[Figure: mesh plot of the network response a over the same x, y grid]
Step-3
solution
The result looks
satisfactory, but a closer
examination reveals that
in certain areas the
approximation is
not so good. This can
be seen better by
drawing the error
surface.
Error surface
mesh(x,y,a-z)
xlabel('x axis');
ylabel('y axis');
zlabel('Error');
title('Error surface')
[Figure: mesh plot of the error surface a-z over the x and y axes]
Step-4
Example-3
Consider Bessel functions Jq(t), which are solutions
of the differential equation
t²y'' + t y' + (t² - q²) y = 0.
Use a backpropagation network to approximate the first
order Bessel function, q = 1, when t ∈ [0, 20].
a) Plot J1(t).
b) Try different structures for the fitting. Start with a
two-layer network.
solution (Plot J1(t))
First generate the data.
MATLAB has Bessel functions as built-in MATLAB functions.
t=0:0.1:20;
y=bessel(1,t);
plot(t,y)
grid
xlabel('time in secs'); ylabel('y');
title('First order bessel function');
[Figure: plot of the first order Bessel function J1(t), y vs. time in secs]
Step-1
solution
Next try to fit a backpropagation network on the data. Try
Levenberg-Marquardt.
P=t; T=y;
% Define network. First try a simple one
net=newff([0 20], [10,1], {'tansig','purelin'},'trainlm');
% Define parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-3;
% Train network
net1 = train(net, P, T);
[Figure: trainlm training record for the Bessel fit (Training-Blue, Goal-Black) vs. epochs]
solution
% Simulate result
a= sim(net1,P);
% Plot result and compare
plot(P,a,P,a-T)
xlabel('time in secs');ylabel('Network output and error');
title('First order bessel function'); grid
[Figure: First order bessel function - network output and error vs. time in secs]
solution
Since the error is fairly significant, let's reduce it by doubling the
number of nodes in the first hidden layer to 20 and decreasing the error
tolerance to 1e-4.
P=t; T=y;
% Define network
net=newff([0 20], [20,1], {'tansig','purelin'},'trainlm');
% Define parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-4;
% Train network
net1 = train(net, P, T);
Step-2
[Figure: training record for the 20-node network; the goal of 1e-4 is reached, final performance about 5.3e-5]
solution
[Figure: First order bessel function - network output and error vs. time in secs for the 20-node network]
% Simulate result
a= sim(net1,P);
% Plot result and compare
plot(P,a,P,a-T)
xlabel('time in secs');ylabel('Network output and error');
title('First order bessel function'); grid
solution
P=t; T=y;
% Define network
net=newff([0 20], [40,1], {'tansig','purelin'},'trainlm');
% Define parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-6;
% Train network
net1 = train(net, P, T);
The result is considerably better, although it would still require
improvement. This is left as a further exercise to the reader.
[Figure: training record for the 40-node network; the goal of 1e-6 is reached after about 100 epochs, final performance about 9.7e-7]
Step-3
solution
% Simulate result
a= sim(net1,P);
% Plot result and compare
plot(P,a,P,a-T)
xlabel('time in secs');ylabel('Network output and error');
title('First order bessel function'); grid
[Figure: First order bessel function - network output and error vs. time in secs for the 40-node network]
Neural Network Toolbox
Simulink
Block Set
The Neural Network Toolbox provides a set of blocks you can use to build
neural networks in Simulink, or which can be used by the function gensim
to generate the Simulink version of any network you have created in
MATLAB.
Bring up the Neural Network Toolbox block set with this command:
neural
The result is a window that
contains three blocks. Each
of these blocks contains
additional blocks.
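A minimal sketch of the gensim route mentioned above, assuming net1 is one of the networks trained in the earlier examples:
% Generate a Simulink model containing the trained network net1
gensim(net1)        % opens a Simulink system with the network as a block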
Transfer Function Blocks
Double-click on the Transfer Functions block in the Neural window to bring
up a window containing several transfer function blocks.
Each of these blocks takes a net input vector and generates a
corresponding output vector whose dimensions are the same as the input
vector.
Net Input Blocks
Double-click on the Net Input Functions block in the Neural window to bring up a
window containing two net-input function blocks.
Each of these blocks takes any number of weighted input vectors, weight
layer output vectors, and bias vectors, and returns a net-input vector.
Weight Blocks
Double-click on the Weight
Functions block in the Neural
window to bring up a window
containing three weight function
blocks.
Each of these blocks takes a neuron's weight vector and applies it to an input
vector (or a layer output vector) to get a weighted input value for a neuron.
It is important to note that the blocks above expect the neuron's weight vector to
be defined as a column vector. This is because Simulink signals can be column
vectors, but cannot be matrices or row vectors.
It is also important to note that because of this limitation you have to create S
weight function blocks (one for each row) to implement a weight matrix going to a
layer with S neurons.
Example-4
Consider the Van der Pol equation
x'' + (x² - 1) x' + x = 0,
or in state-space form
x1' = x2
x2' = (1 - x1²) x2 - x1.
Check if it is possible to find a neural network model which produces the same
behavior as the Van der Pol equation.
Use different initial conditions. Apply vector notation.
solution
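A minimal sketch of one possible approach, assuming the training data are generated by simulating the state-space form with ode45 and a feedforward network is fitted to the map from the state [x1; x2] to its derivative (the time grid, initial condition and network size are illustrative assumptions):
% Simulate the Van der Pol equation in state-space form to generate data
vdp = @(t, x) [x(2); (1 - x(1)^2)*x(2) - x(1)];
[t, X] = ode45(vdp, 0:0.05:20, [2; 0]);      % assumed time grid and initial condition
P = X';                                      % inputs: states [x1; x2], size 2 x N
T = zeros(2, length(t));
for k = 1:length(t), T(:,k) = vdp(t(k), X(k,:)'); end   % targets: state derivatives
% Fit a feedforward network to the state-derivative map
net = newff(minmax(P), [10 2], {'tansig','purelin'}, 'trainlm');   % assumed 10 hidden nodes
net.trainParam.epochs = 300; net.trainParam.goal = 1e-3;
net1 = train(net, P, T);
a = sim(net1, P);                            % compare a with T to judge the fit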
