
EACT631 - ADAPTIVE CONTROL SYSTEM

Chapters 6 & 7
Direct and Indirect Adaptive Control
Model Reference Adaptive System (MRAS)

Dr. Saravanakumar Gurusamy


EET Department,
Technical and Vocational Training Institute,
Addis Ababa, Ethiopia
gsk.ftvet@gmail.com
Model Reference Adaptive System (MRAS)
Introduction
• MRAS is an adaptive controller.
• It can be regarded as an adaptive servo system.
• The desired performance is expressed in terms of a reference model, which gives the desired response to a command signal.
An MRAS has two feedback loops:
• Inner loop: an ordinary feedback loop composed of the process and the controller.
• Outer loop: changes the controller parameters on the basis of feedback from the error, where
  Error = output of the system − output of the reference model.
• The mechanism for adjusting the parameters in an MRAS can be defined in two ways:
  • using the gradient method (MIT rule)
  • applying stability theory (Lyapunov rule)
• MRAS schemes were originally derived for deterministic continuous-time systems.
MRAC schemes
• MRAC schemes can be characterized as direct or indirect, each with normalized or unnormalized adaptive laws.
• Direct MRAC → the parameter vector µ of the controller C(µ) is updated directly by an adaptive law.
• Indirect MRAC → µ is calculated at each time t by solving an algebraic equation that relates µ to the on-line estimates of the plant parameters.
• In both direct and indirect MRAC with normalized adaptive laws, the form of C(µ), motivated by the known-parameter case, is kept unchanged.
• The controller C(µ) is combined with an adaptive law (or with an adaptive law and an algebraic equation in the indirect case) that is developed independently.
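The indirect idea can be made concrete with a small sketch. Assuming the first-order plant dy/dt = −a y + b u and the controller u = θ1 uc − θ2 y used in the examples later in these notes, the controller parameters follow algebraically from the on-line plant estimates (the function name and arguments here are hypothetical):

```python
def indirect_controller_params(a_hat, b_hat, a_m, b_m):
    """Indirect MRAC step: map on-line plant estimates (a_hat, b_hat)
    to controller parameters via the model-matching equations
    theta1 = bm/b and theta2 = (am - a)/b."""
    theta1 = b_m / b_hat
    theta2 = (a_m - a_hat) / b_hat
    return theta1, theta2
```

With the example values a = 1, b = 0.5, am = bm = 2 used later in these notes, this returns θ1 = 4 and θ2 = 2. Direct MRAC would instead update θ1 and θ2 themselves with an adaptive law.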
MIT Rule
• The name derives from the fact that the rule was developed at the Instrumentation Laboratory (now the Draper Laboratory) at the Massachusetts Institute of Technology (MIT).
• Consider a closed-loop system in which the controller has one adjustable parameter θ.
• The desired closed-loop response is specified by a model whose output is ym.
• Let the error be e = y − ym, where y is the output of the system.
• One possibility is to adjust the parameter in such a way that the loss function

  J(θ) = (1/2) e²   → (1)

is minimized.
MIT Rule
• To make J small, the parameter is changed in the direction of the negative gradient of J, that is,

  dθ/dt = −γ ∂J/∂θ = −γ e ∂e/∂θ   → (2)

This is the MIT rule.
• ∂e/∂θ is the sensitivity derivative of the system: it tells how the error is influenced by the adjustable parameter.
• If the parameter changes more slowly than the other variables in the system, the derivative ∂e/∂θ can be evaluated under the assumption that θ is constant.
MIT Rule
• An alternative loss function is

  J(θ) = |e|   → (3)

• To make this J small, the parameter is changed in the direction of the negative gradient of J, that is,

  dθ/dt = −γ (∂e/∂θ) sign(e)   → (4)

• Another possibility is, for example,

  dθ/dt = −γ sign(∂e/∂θ) sign(e)   → (5)

which is called the sign-sign algorithm.
• Equation (2) also applies when there are many parameters to adjust; θ is then interpreted as a vector and ∂e/∂θ as the gradient of the error with respect to the parameters.
• Sign (signum) function: y = sign(x) returns an array Y the same size as x, where each element of Y is 1 if the corresponding element of x is greater than 0, 0 if it equals 0, and −1 if it is less than 0 (x./abs(x) if x is complex).
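The three update laws (2), (4) and (5) can be compared in a short sketch (a minimal illustration in Python with NumPy; the function name is hypothetical):

```python
import numpy as np

def mit_updates(e, de_dtheta, gamma):
    """Parameter derivatives dtheta/dt for the three MIT-rule variants."""
    grad = -gamma * e * de_dtheta                          # eq (2): gradient rule
    sign_e = -gamma * de_dtheta * np.sign(e)               # eq (4): for J = |e|
    sign_sign = -gamma * np.sign(de_dtheta) * np.sign(e)   # eq (5): sign-sign
    return grad, sign_e, sign_sign
```

For example, mit_updates(2.0, 0.5, 1.0) gives −1.0, −0.5 and −1.0: the sign-sign variant moves at a fixed rate set by γ alone, regardless of the magnitudes of e and ∂e/∂θ.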
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a feedforward gain
• Assume a linear process with transfer function kG(s), where G(s) is known and k is an unknown parameter.
• The design problem is to find a feedforward controller that gives a system with transfer function Gm(s) = k0 G(s), where k0 is a given constant.
• Use the feedforward controller u = θ uc, where u is the control signal and uc is the command signal.
• The transfer function from command signal to output then becomes θ k G(s).
• θ k G(s) = Gm(s) if θ = k0/k.
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a feedforward gain
• Now use the MIT rule to obtain a method for adjusting the parameter θ when k is not known.
The error is

  e = y − ym = k G(p) θ uc − k0 G(p) uc

where uc is the command signal, ym the model output, y the process output, θ the adjustable parameter, and p = d/dt the differential operator.

The sensitivity derivative is

  ∂e/∂θ = k G(p) uc = (k/k0) ym

The MIT rule then gives the following adaptation law:

  dθ/dt = −γ′ (k/k0) ym e = −γ ym e

where γ = γ′ k/k0. To obtain the correct sign of γ, it is necessary to know the sign of k.
MRAS for adjustment of a feedforward gain based on the MIT rule

  dθ/dt = −γ ym e
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a feedforward gain (simulation results)

  G(s) = 1/(s + 1), input uc a sinusoid with frequency 1 rad/s, k = 1 and k0 = 2.

[Figure: model output and process output; controller parameter θ]

• The parameter converges toward the correct value reasonably fast when the adaptation gain is γ = 1, and the process output approaches the model output.
• The convergence rate depends on the adaptation gain and increases with γ.
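The simulation can be reproduced with a short forward-Euler sketch (the step size, horizon and zero initial conditions are assumptions, not from the slides):

```python
import numpy as np

# Forward-Euler simulation of Example 1: plant k*G(s) with G(s) = 1/(s+1),
# model k0*G(s), MIT rule dtheta/dt = -gamma*ym*e.
k, k0, gamma = 1.0, 2.0, 1.0
dt, T = 0.01, 100.0
y = ym = theta = 0.0
for n in range(int(T / dt)):
    t = n * dt
    uc = np.sin(t)                       # command: sinusoid, 1 rad/s
    e = y - ym                           # error between process and model output
    theta += dt * (-gamma * ym * e)      # MIT rule for the feedforward gain
    y += dt * (-y + k * theta * uc)      # process: dy/dt = -y + k*theta*uc
    ym += dt * (-ym + k0 * uc)           # model:   dym/dt = -ym + k0*uc
print(round(theta, 2))                   # theta should approach k0/k = 2
```

Rerunning with a larger γ makes θ approach 2 faster, consistent with the convergence-rate remark above.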
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• Consider the system described by the model

  dy/dt = −a y + b u   → (1)

where u is the control signal and y the output.
• Assume that we want to obtain a closed-loop system described by

  dym/dt = −am ym + bm uc

• Let the controller be given by

  u = θ1 uc − θ2 y   → (2)

• The controller has two parameters. If they are chosen to be

  θ1 = bm/b,  θ2 = (am − a)/b   → (3)

the input-output relations of the system and the model are the same.
• This is called "perfect model following."
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• To apply the MIT rule, introduce the error e = y − ym.
• From equations (1) and (2),

  y = (b θ1 / (p + a + b θ2)) uc   → (4)

• The sensitivity derivatives are obtained by taking partial derivatives of e = y − ym with respect to the controller parameters θ1 and θ2 (ym is not a function of θ1 or θ2):

  ∂e/∂θ1 = (b / (p + a + b θ2)) uc
  ∂e/∂θ2 = −(b / (p + a + b θ2)) y   → (5)
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• These formulas (in eq. (5)) cannot be used directly because the process parameters a and b are not known.
• So approximations are required.
• One possible approximation is based on the observation that

  p + a + b θ2 ≈ p + am

when the parameters give perfect model following.
• Therefore, use the approximate adaptation laws

  dθ1/dt = −γ (am/(p + am)) uc e   → (6)
  dθ2/dt = γ (am/(p + am)) y e   → (7)
With the approximation, the sensitivity derivatives become

  ∂e/∂θ1 = (b/(p + am)) uc  and  ∂e/∂θ2 = −(b/(p + am)) y

and the MIT rule gives

  dθ1/dt = −γ′ e (b/(p + am)) uc
         = −γ′ (b/am) (am/(p + am)) uc e
         = −γ (am/(p + am)) uc e

Similarly,

  dθ2/dt = γ′ (b/am) (am/(p + am)) y e = γ (am/(p + am)) y e

where γ = γ′ b/am.
Model reference controller for a first-order process

Parameters are chosen to be:
• a = 1, b = 0.5 and am = bm = 2.
• Input: square wave with amplitude 1, and γ = 1.
Model reference controller for a first-order process (simulation results)

[Figure: reference-model output and plant output; controller output]

For γ = 1, θ1 = 3.2 and θ2 = 1.2 at time t = 100.
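The closed loop above can be reproduced with a forward-Euler sketch (the square-wave period of 20 s, the step size and zero initial conditions are assumptions; the slides do not state them):

```python
import numpy as np

# Euler simulation of the first-order MRAS with the MIT rule, eqs (6)-(7).
a, b, am, bm, gamma = 1.0, 0.5, 2.0, 2.0, 1.0
dt, T = 0.002, 100.0
y = ym = th1 = th2 = 0.0
f_uc = f_y = 0.0                              # filtered signals am/(p+am)*uc, am/(p+am)*y
err = []
for n in range(int(T / dt)):
    t = n * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0   # square wave, amplitude 1
    e = y - ym
    err.append(abs(e))
    u = th1 * uc - th2 * y                    # controller, eq (2)
    th1 += dt * (-gamma * f_uc * e)           # adaptation law, eq (6)
    th2 += dt * (gamma * f_y * e)             # adaptation law, eq (7)
    f_uc += dt * (-am * f_uc + am * uc)       # filter am/(p+am) applied to uc
    f_y += dt * (-am * f_y + am * y)          # filter am/(p+am) applied to y
    y += dt * (-a * y + b * u)                # plant, eq (1)
    ym += dt * (-am * ym + bm * uc)           # reference model
print(np.mean(err[:5000]), np.mean(err[-5000:]))
```

The tracking error in the last 10 s should be clearly smaller than in the first 10 s as θ1 and θ2 adapt toward the model-matching values θ1 = 4 and θ2 = 2.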
Lyapunov Design of MRAC

Lyapunov Design of MRAC
Procedure
• Determine the controller structure
• Derive the error equation
• Find a Lyapunov function
• Determine an adaptation law that satisfies the Lyapunov theorem
Adaptation of Feedforward Gain

The process is dy/dt = −a y + k u, the model is dym/dt = −a ym + k0 uc, and the controller is u = θ uc. With e = y − ym,

  de/dt = dy/dt − dym/dt
        = −a y + k u + a ym − k0 uc
        = −a y + k θ uc + a ym − k0 uc
        = −a (y − ym) + (k θ − k0) uc

so

  de/dt = −a e + (k θ − k0) uc
Adaptation of Feedforward Gain

Choose the candidate Lyapunov function

  V(e, θ) = (γ/2) e² + (k/2) (θ − θ0)²,  where θ0 = k0/k

Then

  dV/dt = γ e (de/dt) + k (θ − θ0) (dθ/dt)
        = γ e [−a e + (k θ − k0) uc] + (k θ − k0) (dθ/dt)
        = −γ a e² + (k θ − k0) (dθ/dt + γ uc e)

Choosing the adaptation law dθ/dt = −γ uc e makes the last term vanish and gives dV/dt = −γ a e² ≤ 0.
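The resulting loop can be sketched in a few lines (Euler integration; the values a = 1, k = 1, k0 = 2, the sinusoidal command and the solver settings are assumptions consistent with Example 1):

```python
import numpy as np

# Euler simulation of the Lyapunov-rule feedforward-gain MRAS:
# process dy/dt = -a*y + k*u, u = theta*uc, adaptation dtheta/dt = -gamma*uc*e.
a, k, k0, gamma = 1.0, 1.0, 2.0, 1.0
dt, T = 0.01, 100.0
y = ym = theta = 0.0
for n in range(int(T / dt)):
    t = n * dt
    uc = np.sin(t)
    e = y - ym
    theta += dt * (-gamma * uc * e)      # Lyapunov rule: uses uc, not ym
    y += dt * (-a * y + k * theta * uc)  # process
    ym += dt * (-a * ym + k0 * uc)       # model: dym/dt = -a*ym + k0*uc
print(round(theta, 2))                   # theta should approach k0/k = 2
```

Note the only change from the MIT-rule loop of Example 1 is the signal multiplying e in the adaptation law (uc instead of ym).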
First Order MRAS based on Lyapunov Stability Theory

  e = y − ym
  de/dt = dy/dt − dym/dt
        = −a y + b u + am ym − bm uc
        = −a y + b (θ1 uc − θ2 y) + am ym − bm uc

Adding and subtracting am y,

  de/dt = −am (y − ym) − (b θ2 + a − am) y + (b θ1 − bm) uc
        = −am e − (b θ2 + a − am) y + (b θ1 − bm) uc
First Order MRAS based on Lyapunov Stability Theory

With the adaptation laws

  dθ1/dt = −γ uc e  and  dθ2/dt = γ y e

we get

  dV/dt = −am e²
Choose the candidate Lyapunov function

  V(e, θ1, θ2) = (1/2) e² + (1/(2bγ)) (b θ2 + a − am)² + (1/(2bγ)) (b θ1 − bm)²

Then

  dV/dt = e (de/dt) + (1/γ)(b θ2 + a − am)(dθ2/dt) + (1/γ)(b θ1 − bm)(dθ1/dt)
        = e [−am e − (b θ2 + a − am) y + (b θ1 − bm) uc]
          + (1/γ)(b θ2 + a − am)(dθ2/dt) + (1/γ)(b θ1 − bm)(dθ1/dt)
        = −am e² + (1/γ)(b θ2 + a − am)(dθ2/dt − γ y e)
          + (1/γ)(b θ1 − bm)(dθ1/dt + γ uc e)
First Order MRAS based on Lyapunov Stability Theory

With the Lyapunov method, the adaptation rule is similar to the MIT rule; the only difference is that the signals uc and y are not filtered with the Lyapunov rule.

In both cases the adjustment law can be written as

  dθ/dt = γ φ e

where θ is a vector of parameters and

  φ = (−uc  y)ᵀ   → Lyapunov rule
  φ = (am/(p + am)) (−uc  y)ᵀ   → MIT rule
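Both rules can be tried numerically. Here is a Lyapunov-rule sketch for the first-order example (the square-wave period, step size and zero initial conditions are assumptions):

```python
import numpy as np

# Euler simulation of the first-order MRAS with the Lyapunov rule:
# dtheta1/dt = -gamma*uc*e, dtheta2/dt = gamma*y*e (no filtering of uc and y).
a, b, am, bm, gamma = 1.0, 0.5, 2.0, 2.0, 1.0
dt, T = 0.002, 100.0
y = ym = th1 = th2 = 0.0
err = []
for n in range(int(T / dt)):
    t = n * dt
    uc = 1.0 if (t % 20.0) < 10.0 else -1.0   # square wave, amplitude 1
    e = y - ym
    err.append(abs(e))
    u = th1 * uc - th2 * y                    # controller u = th1*uc - th2*y
    th1 += dt * (-gamma * uc * e)             # phi_1 = -uc (unfiltered)
    th2 += dt * (gamma * y * e)               # phi_2 =  y  (unfiltered)
    y += dt * (-a * y + b * u)                # plant
    ym += dt * (-am * ym + bm * uc)           # reference model
print(np.mean(err[:5000]), np.mean(err[-5000:]))
```

Swapping in the filtered signals am/(p + am) uc and am/(p + am) y recovers the MIT-rule version; with the Lyapunov rule, dV/dt = −am e² guarantees the error decreases.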
End of Chapters 6 & 7