Chapter 6 & 7
Direct and Indirect Adaptive Control
Model Reference Adaptive System (MRAS)
Introduction
• MRAS is an adaptive controller.
• It can be regarded as an adaptive servo system.
• The desired performance is expressed in terms of a reference model.
• The reference model gives the desired response to a command signal.
An MRAS has two feedback loops:
• Inner loop: an ordinary feedback loop composed of the process and the controller.
• Outer loop: adjusts the controller parameters.
• The parameters are changed on the basis of feedback from the error.
• Error = output of the system − output of the reference model.
• The mechanism for adjusting the parameters in an MRAS can be designed in two ways:
• Using the gradient method (MIT rule)
• Applying stability theory (Lyapunov rule)
• MRAS schemes were originally derived for deterministic continuous-time systems.
MRAC schemes
• MRAC schemes can be characterized as
∗ Direct
∗ Indirect
each with normalized or unnormalized adaptive laws.
MIT Rule
• Consider the loss function J(θ) = (1/2)e², where e is the error between the process output and the model output.
• To make J small, the parameters are changed in the direction of the negative gradient of J, that is,
dθ/dt = −γ ∂J/∂θ = −γ e ∂e/∂θ → (2)
This is the MIT rule.
• ∂e/∂θ is the sensitivity derivative of the system: it tells how the error is influenced by the adjustable parameter.
• If it is assumed that the parameters change more slowly than the other variables in the system, the derivative ∂e/∂θ can be evaluated under the assumption that θ is constant.
MIT Rule
• An alternative loss function is J(θ) = |e| → (3)
• The MIT rule then gives
dθ/dt = −γ (∂e/∂θ) sign(e) → (4)
• Another possibility is
dθ/dt = −γ sign(∂e/∂θ) sign(e) → (5)
which is called the sign-sign algorithm.
• Equation (2) also applies when there are many parameters to adjust: the symbol θ is then interpreted as a vector, and ∂e/∂θ as the gradient of the error with respect to the parameters.
Sign function (signum function): Y = sign(x) returns an array Y of the same size as x, where each element of Y is:
• 1 if the corresponding element of x is greater than 0,
• 0 if the corresponding element of x equals 0,
• −1 if the corresponding element of x is less than 0,
• x./abs(x) if x is complex.
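As an illustration (not from the slides), the three update laws (2), (4) and (5) can each be written as one forward-Euler step; the gain γ and the step size dt below are arbitrary illustrative values.

```python
import numpy as np

# One Euler step of the three MIT-rule variants.
# gamma: adaptation gain, dt: step size, e: error, de_dtheta: sensitivity derivative.
def mit_step(theta, e, de_dtheta, gamma=1.0, dt=0.01):
    return theta - gamma * e * de_dtheta * dt                     # eqn (2): J = e^2/2

def sign_step(theta, e, de_dtheta, gamma=1.0, dt=0.01):
    return theta - gamma * de_dtheta * np.sign(e) * dt            # eqn (4): J = |e|

def sign_sign_step(theta, e, de_dtheta, gamma=1.0, dt=0.01):
    return theta - gamma * np.sign(de_dtheta) * np.sign(e) * dt   # eqn (5): sign-sign
```

Note that (4) and (5) only use the sign of the error (and of the sensitivity derivative), so the step size of the parameter update does not shrink as the error goes to zero.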
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a Feedforward Gain
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a Feedforward Gain
• Now use the MIT rule to obtain a method for adjusting the parameter θ when k is not known.
The error is
e = y − y_m = kG(p)θu_c − k0 G(p)u_c
where u_c is the command signal, y_m the model output, y the process output, θ the adjustable parameter, and p = d/dt the differential operator.
The sensitivity derivative is
∂e/∂θ = kG(p)u_c = (k/k0) y_m
The MIT rule then gives the adaptation law
dθ/dt = −γ′ (k/k0) y_m e = −γ y_m e
where the (unknown but positive) factor k/k0 has been absorbed into the adaptation gain γ.
Example 1: How the MIT rule is used to obtain a simple adaptive controller
Adaptation of a Feedforward Gain (simulation results)
G(s) = 1/(s + 1), input u_c a sinusoid with frequency 1 rad/s, k = 1 and k0 = 2.
[Figure: model output and process output, and the controller parameter θ, versus time]
• The parameter converges toward the correct value reasonably fast when the adaptation gain is γ = 1, and the process output approaches the model output.
• The convergence rate depends on the adaptation gain and increases with γ.
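The simulation above can be reproduced with a short script. The following is a minimal sketch using forward-Euler integration; the step size, simulation horizon and function name are my choices, not from the slides.

```python
import numpy as np

# MIT-rule adaptation of a feedforward gain (Example 1).
# Process: dy/dt = -y + k*theta*uc  (G(s) = 1/(s+1)),
# model:   dym/dt = -ym + k0*uc,
# adaptation law: dtheta/dt = -gamma*ym*e, with e = y - ym.
def simulate(gamma=1.0, k=1.0, k0=2.0, dt=0.001, t_end=100.0):
    y = ym = theta = 0.0
    for i in range(int(t_end / dt)):
        uc = np.sin(i * dt)                 # command signal, 1 rad/s sinusoid
        e = y - ym                          # error between process and model output
        theta += dt * (-gamma * ym * e)     # MIT rule
        y += dt * (-y + k * theta * uc)
        ym += dt * (-ym + k0 * uc)
    return theta

# theta should converge toward the correct feedforward gain k0/k = 2
```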
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• Consider the system described by the model
dy/dt = −ay + bu → (1)
where u is the control variable and y the measured output. The desired response is specified by the reference model
dy_m/dt = −a_m y_m + b_m u_c → (2)
• With the controller
u(t) = θ1 u_c(t) − θ2 y(t) → (3)
and the parameter values θ1 = b_m/b and θ2 = (a_m − a)/b, the input-output relations of the system and the model are the same.
• This is called "perfect model following".
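The perfect-model-following condition can be checked by direct substitution: inserting the controller into the process gives the closed loop dy/dt = −(a + bθ2)y + bθ1 u_c, which must match the model coefficient by coefficient. A quick numeric check (the parameter values below are illustrative, not from the slides):

```python
# Perfect model following for the first-order example: with th1 = bm/b and
# th2 = (am - a)/b, the closed loop dy/dt = -(a + b*th2)*y + b*th1*uc
# has the same coefficients as the model dym/dt = -am*ym + bm*uc.
def perfect_following_gains(a, b, am, bm):
    th1 = bm / b
    th2 = (am - a) / b
    return th1, th2

a, b, am, bm = 1.0, 0.5, 2.0, 2.0      # illustrative plant/model parameters
th1, th2 = perfect_following_gains(a, b, am, bm)
assert a + b * th2 == am               # closed-loop pole equals model pole
assert b * th1 == bm                   # closed-loop gain equals model gain
```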
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• To apply the MIT rule, introduce the error e = y − y_m.
• From equations (1)–(3), the closed-loop system is
y = (bθ1 / (p + a + bθ2)) u_c → (4)
and the sensitivity derivatives are
∂e/∂θ1 = (b / (p + a + bθ2)) u_c,  ∂e/∂θ2 = −(b / (p + a + bθ2)) y → (5)
Example 2: How the MIT rule is used to obtain a simple adaptive controller
MRAS for a first-order system
• These formulas (Eqn. 5) cannot be used directly because the process parameters a and b are not known, so approximations are required.
• One possible approximation is based on the observation that, when the parameters are close to their ideal values,
p + a + bθ2 ≈ p + a_m → (7)
With this approximation the sensitivity derivatives become
∂e/∂θ1 = (b / (p + a_m)) u_c  and  ∂e/∂θ2 = −(b / (p + a_m)) y
The MIT rule then gives the adaptation law for θ1:
dθ1/dt = −γ′ e (b / (p + a_m)) u_c
= −γ′ (b/a_m) [(a_m / (p + a_m)) u_c] e
= −γ [(a_m / (p + a_m)) u_c] e
where γ = γ′ b/a_m, so that the filter a_m/(p + a_m) has unit steady-state gain. Similarly,
dθ2/dt = γ′ (b/a_m) [(a_m / (p + a_m)) y] e = γ [(a_m / (p + a_m)) y] e
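These adaptation laws can be simulated by realizing each filtered signal a_m/(p + a_m)·(·) as an extra first-order state. The sketch below uses forward-Euler integration; the square-wave command signal and the numeric parameter values are my assumptions, not from the slides.

```python
import numpy as np

# First-order MRAS via the MIT rule (Example 2).
# Process: dy/dt = -a*y + b*u, model: dym/dt = -am*ym + bm*uc,
# controller: u = th1*uc - th2*y.  The filters am/(p+am) are realized as
# states f1 (driven by uc) and f2 (driven by y).
def simulate_mit(a=1.0, b=0.5, am=2.0, bm=2.0, gamma=1.0, dt=0.001, t_end=200.0):
    n = int(t_end / dt)
    y = ym = th1 = th2 = f1 = f2 = 0.0
    abs_err = []
    for i in range(n):
        uc = 1.0 if np.sin(0.1 * i * dt) >= 0 else -1.0  # square-wave command (assumption)
        u = th1 * uc - th2 * y
        e = y - ym
        # adaptation laws: dth1/dt = -gamma*f1*e, dth2/dt = gamma*f2*e
        th1 += dt * (-gamma * f1 * e)
        th2 += dt * ( gamma * f2 * e)
        # filtered regressors: df/dt = -am*f + am*(input)
        f1 += dt * (-am * f1 + am * uc)
        f2 += dt * (-am * f2 + am * y)
        y  += dt * (-a * y + b * u)
        ym += dt * (-am * ym + bm * uc)
        abs_err.append(abs(e))
    q = n // 4
    return th1, th2, float(np.mean(abs_err[:q])), float(np.mean(abs_err[-q:]))
```

The mean absolute error over the last quarter of the run should be smaller than over the first quarter as the parameters drift toward their ideal values b_m/b and (a_m − a)/b.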
Model Reference Controller for a first-order process
[Figure: block diagram of the model-reference controller; controller output u = θ1 u_c − θ2 y]
Lyapunov Design of MRAC
Procedure:
1. Derive a differential equation for the error e = y − y_m that contains the adjustable parameters.
2. Choose a Lyapunov function candidate V(e, θ).
3. Compute dV/dt and choose the parameter adaptation law so that dV/dt ≤ 0.
Adaptation of Feedforward Gain
The process is dy/dt = −ay + kθu_c and the model is dy_m/dt = −ay_m + k0 u_c, so the error e = y − y_m satisfies
de/dt = −ae + (kθ − k0) u_c
Choose the Lyapunov function candidate
V(e, θ) = (γ/2) e² + (k/2)(θ − θ0)²,  with θ0 = k0/k
Its time derivative is
dV/dt = γe (de/dt) + k(θ − θ0)(dθ/dt)
= γe[−ae + (kθ − k0)u_c] + (kθ − k0)(dθ/dt)
= −γae² + (kθ − k0)[dθ/dt + γ u_c e]
Choosing the adaptation law dθ/dt = −γ u_c e makes the last term vanish, giving
dV/dt = −γae² ≤ 0
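The Lyapunov-rule version of the feedforward-gain example differs from the MIT-rule version only in using u_c instead of y_m in the update. A minimal forward-Euler sketch (step size, horizon and the sinusoidal command are my assumptions):

```python
import numpy as np

# Lyapunov-rule adaptation of a feedforward gain.
# Process: dy/dt = -a*y + k*theta*uc, model: dym/dt = -a*ym + k0*uc,
# adaptation law from the Lyapunov analysis: dtheta/dt = -gamma*uc*e.
def simulate_lyap(a=1.0, k=1.0, k0=2.0, gamma=1.0, dt=0.001, t_end=100.0):
    y = ym = theta = 0.0
    for i in range(int(t_end / dt)):
        uc = np.sin(i * dt)                 # 1 rad/s sinusoidal command (assumption)
        e = y - ym
        theta += dt * (-gamma * uc * e)     # Lyapunov rule: uses uc, not ym
        y  += dt * (-a * y + k * theta * uc)
        ym += dt * (-a * ym + k0 * uc)
    return theta

# theta should again converge toward k0/k = 2
```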
First Order MRAS based on Lyapunov Stability Theory
e = y − y_m
de/dt = dy/dt − dy_m/dt
= −ay + bu + a_m y_m − b_m u_c
= −ay + b(θ1 u_c − θ2 y) + a_m y_m − b_m u_c
Adding and subtracting a_m y,
de/dt = −a_m (y − y_m) − y(bθ2 + a − a_m) + u_c (bθ1 − b_m)
= −a_m e − y(bθ2 + a − a_m) + u_c (bθ1 − b_m)
First Order MRAS based on Lyapunov Stability Theory
Choose the Lyapunov function candidate
V(e, θ1, θ2) = (1/2)[ e² + (1/(bγ))(bθ2 + a − a_m)² + (1/(bγ))(bθ1 − b_m)² ]
Its time derivative is
dV/dt = e (de/dt) + (1/γ)(bθ2 + a − a_m)(dθ2/dt) + (1/γ)(bθ1 − b_m)(dθ1/dt)
= e[−a_m e − (bθ2 + a − a_m)y + (bθ1 − b_m)u_c] + (1/γ)(bθ2 + a − a_m)(dθ2/dt) + (1/γ)(bθ1 − b_m)(dθ1/dt)
= −a_m e² + (1/γ)(bθ2 + a − a_m)[dθ2/dt − γye] + (1/γ)(bθ1 − b_m)[dθ1/dt + γu_c e]
Choosing the adaptation laws
dθ2/dt = γye  and  dθ1/dt = −γu_c e
makes the last two terms vanish, and we get
dV/dt = −a_m e² ≤ 0
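The complete first-order MRAS with the Lyapunov adaptation laws can be sketched as below, again with forward-Euler integration; the square-wave command and the numeric parameter values are my assumptions, not from the slides.

```python
import numpy as np

# First-order MRAS with Lyapunov-rule adaptation.
# Process: dy/dt = -a*y + b*u, model: dym/dt = -am*ym + bm*uc,
# controller: u = th1*uc - th2*y,
# adaptation laws: dth1/dt = -gamma*uc*e, dth2/dt = gamma*y*e.
def simulate_lyap_mras(a=1.0, b=0.5, am=2.0, bm=2.0, gamma=1.0,
                       dt=0.001, t_end=200.0):
    n = int(t_end / dt)
    y = ym = th1 = th2 = 0.0
    abs_err = []
    for i in range(n):
        uc = 1.0 if np.sin(0.1 * i * dt) >= 0 else -1.0  # square-wave command (assumption)
        u = th1 * uc - th2 * y
        e = y - ym
        th1 += dt * (-gamma * uc * e)   # no filtering of uc, unlike the MIT rule
        th2 += dt * ( gamma * y  * e)   # no filtering of y
        y  += dt * (-a * y + b * u)
        ym += dt * (-am * ym + bm * uc)
        abs_err.append(abs(e))
    q = n // 4
    return th1, th2, float(np.mean(abs_err[:q])), float(np.mean(abs_err[-q:]))
```

Note that, in contrast to the MIT-rule simulation, the regressors u_c and y enter the update laws unfiltered.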
First Order MRAS based on Lyapunov Stability Theory
With the Lyapunov method the adaptation rule is similar to the MIT rule; the only difference is that the signals u_c and y are not filtered with the Lyapunov rule. Both rules can be written as
dθ/dt = γφe
where θ is the vector of parameters and
φ = (−u_c  y)ᵀ → Lyapunov rule
φ = (a_m / (p + a_m)) (−u_c  y)ᵀ → MIT rule
End of Chapter-5