
EACT631-ADAPTIVE CONTROL

Chapter 4
Persistent Excitation (PE) condition
Gradient and least-squares algorithms

Dr. Saravanakumar Gurusamy


EET Department,
Technical and Vocational Training Institute,
Addis Ababa, Ethiopia
gsk.ftvet@gmail.com
Persistent Excitation (PE) condition
• The Persistent Excitation (PE) condition is essential for good
parameter estimation but is not required for good output
prediction.
• It is one of the key issues in parameter estimation.
• In order to have good estimation performance, the following
implementation issues must be considered:
• Generating linear-in-parameters model for on-line estimation
purposes
• Choosing initial parameters and initial gain matrix
• Specifying forgetting rate and gain bound
• Enforcing persistency of excitation through system input
Persistent Excitation (PE) condition
• When we derive the least-squares (LS) estimate, the matrix
𝑃 = (Φ^T Φ)^(−1) must exist. This requirement is related to the PE condition.
• The parameters of the model cannot be determined unless some
conditions are imposed on the input signal.
• Condition for uniqueness of the least-squares estimate: the matrix Φ^T Φ
must have full rank → excitation condition
• Consider the transfer function of a stable SISO system

  G(z) = Y(z)/U(z) = Σ_{k=0}^{∞} g(k) z^(−k) ≈ θ_1 z^(−1) + θ_2 z^(−2) + ⋯ + θ_n z^(−n)

  (for a stable system the impulse response decays, so the series can be truncated after n terms)
• y(k) = θ_1 u(k−1) + θ_2 u(k−2) + ⋯ + θ_n u(k−n) → Moving Average (MA) model
• y(k) = Φ^T(k) θ, with regressor Φ(k) = [u(k−1), …, u(k−n)]^T and θ = [θ_1, …, θ_n]^T
Persistent Excitation (PE) condition
• Let P^(−1) = Φ^T Φ = Σ_{k=1}^{t} 𝜑(k) 𝜑^T(k)

  = Σ_{k=1}^{t} [u(k−1), …, u(k−n)]^T [u(k−1), …, u(k−n)]

  = Σ_{k=1}^{t}
    [ u(k−1)²        u(k−1)u(k−2)   ⋯  u(k−1)u(k−n) ]
    [ u(k−1)u(k−2)   u(k−2)²        ⋯  u(k−2)u(k−n) ]
    [ ⋮                                 ⋮            ]
    [ u(k−1)u(k−n)   ⋯                  u(k−n)²      ]

  → a symmetric matrix
Persistent Excitation (PE) condition
• Define the empirical covariance matrix

  C_n = lim_{t→∞} (1/t) Φ^T Φ

  C_n =
    [ c(0)     c(1)     ⋯  c(n−1) ]
    [ c(1)     c(0)     ⋯  c(n−2) ]
    [ ⋮        ⋮            ⋮     ]
    [ c(n−1)   c(n−2)   ⋯  c(0)   ]

  C_n is a symmetric matrix,

  where the c(k) are the empirical covariances of the input, that is,
  c(k) = lim_{t→∞} (1/t) Σ_{i=1}^{t} u(i) u(i−k)

• k = 0 → c(0) = lim_{t→∞} (1/t) Σ_{i=1}^{t} u(i)² → diagonal elements of the C_n matrix
• k = 1 → c(1) = lim_{t→∞} (1/t) Σ_{i=1}^{t} u(i) u(i−1), and similarly
• k = 2 → c(2) = lim_{t→∞} (1/t) Σ_{i=1}^{t} u(i) u(i−2)
Persistent Excitation (PE) condition
The condition for the input signal to be persistently exciting is

  rank(C_n) = n, i.e., C_n must be a positive definite matrix,

where n is the number of estimated parameters.
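As a quick numerical check of this condition, the sketch below builds a finite-sample approximation of C_n from input data and compares a constant input with a white-noise input (signal length, seed, and n = 3 are arbitrary illustrative choices):

```python
import numpy as np

def empirical_covariance(u, n):
    # C_n ~ (1/N) * sum of phi(k) phi(k)^T, with phi(k) = [u(k-1), ..., u(k-n)]^T
    Phi = np.array([u[k - n:k][::-1] for k in range(n, len(u))])
    return Phi.T @ Phi / len(Phi)

rng = np.random.default_rng(0)
n = 3  # number of estimated parameters

C_const = empirical_covariance(np.ones(2000), n)              # constant input
C_noise = empirical_covariance(rng.standard_normal(2000), n)  # white noise

print(np.linalg.matrix_rank(C_const))  # 1: constant input is not PE of order 3
print(np.linalg.matrix_rank(C_noise))  # 3: C_n full rank and positive definite
```

The constant input yields a rank-one C_n (the plant dynamics are not excited), while white noise yields a full-rank, positive definite C_n.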
Persistent Excitation (PE) condition

• Case 1: constant input → constant output
• Case 2: varying input → varying output

• With a constant input and constant output (case 1), the dynamics of the plant are not excited.
The RLS method will not give a useful estimate in this case.
• With a varying input and varying output (case 2), the dynamics of the plant are captured.
In this case the RLS method gives a meaningful estimate.
• The PE condition ensures that the input excites the dynamics of the plant, so that the
corresponding plant parameters can be estimated and used in the design of the adaptive controller.
Persistent Excitation (PE) characteristics for the
example input signals
• White noise signal → PE of any order
• This makes it an excellent input signal for parameter estimation.
• A pseudorandom binary sequence (PRBS) is a deterministic binary signal with
noise-like correlation properties; a PRBS of period M is PE of order M and is
widely used in practice.
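PRBS signals are commonly generated with a linear feedback shift register (LFSR). A minimal sketch, where the 4-bit register and tap positions are one standard choice (not taken from these notes); the register visits all 15 nonzero states, so the ±1 output has period 2^4 − 1 = 15:

```python
def prbs4(length, state=0b0001):
    # 4-bit maximum-length LFSR: feedback bit is the XOR of the two
    # top bits; the state cycles through all 15 nonzero 4-bit values.
    out = []
    for _ in range(length):
        bit = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | bit) & 0b1111
        out.append(1 if state & 1 else -1)   # map output bit to +/-1 levels
    return out

seq = prbs4(30)
print(seq[:15] == seq[15:30])  # True: the sequence repeats with period 15
```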
Gradient and least-squares algorithms



Introduction
• Consider the identification problem for a first-order SISO LTI system
described by the transfer function

  y_p(s)/r(s) = k_p/(s + a_p) → (1)

• k_p and a_p are unknown parameters, to be determined by the
identification scheme on the basis of input and output measurements of
the plant.
• Assume the plant is stable, i.e., a_p > 0.
• Standard Approach for identification :
• Frequency domain approach
• Time-domain approach
Time-domain approach- Identification
• The time-domain expression of the plant is

  ẏ_p(t) = −a_p y_p(t) + k_p r(t) → (2)

• Measurement of ẏ_p, y_p, and r at one time instant t gives us
one equation with two unknowns.
• Two time instants t_1, t_2 may be sufficient to determine the unknown
parameters from

  ẏ_p(t_1) = −a_p y_p(t_1) + k_p r(t_1)
  ẏ_p(t_2) = −a_p y_p(t_2) + k_p r(t_2) → (3)
Time-domain approach- Identification
• We may refine this approach to avoid the measurement of ẏ_p(t).
• Consider the SISO LTI system transfer function, written as (s + a_p) y_p = k_p r.
• Divide both sides by (s + λ) for some λ > 0:

  y_p = [k_p/(s + λ)] r + [(λ − a_p)/(s + λ)] y_p → (4)

• This is equivalent to

  y_p = k_p · [1/(s + λ)] r + (λ − a_p) · [1/(s + λ)] y_p → (5)
Time-domain approach- Identification
• Let us define the signals

  w^(1) = [1/(s + λ)] r,  w^(2) = [1/(s + λ)] y_p → (6)

• In the time domain,

  ẇ^(1) = −λ w^(1) + r,  ẇ^(2) = −λ w^(2) + y_p → (7)

Then equation (5) can be written as

  y_p = k_p w^(1) + (λ − a_p) w^(2) → (8)

• The signals w^(1), w^(2) are obtained by stable filtering of the input and
output of the plant.
• Assume zero initial conditions on w^(1), w^(2).
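The relation y_p = k_p w^(1) + (λ − a_p) w^(2) can be checked numerically by simulating the plant and the two filters with Euler integration. The plant values (a_p = 2, k_p = 3) and filter pole λ = 5 below are illustrative assumptions, not values from the notes:

```python
import numpy as np

# Simulate the plant y_p' = -a_p*y_p + k_p*r and the stable filters
# w1' = -lam*w1 + r, w2' = -lam*w2 + y_p, all with zero initial conditions.
a_p, k_p, lam = 2.0, 3.0, 5.0
dt, T = 1e-4, 5.0
steps = int(T / dt)

y_p = np.zeros(steps)
w1 = np.zeros(steps)
w2 = np.zeros(steps)
r = np.sin(np.arange(steps) * dt)   # any bounded input works here

for k in range(steps - 1):
    y_p[k+1] = y_p[k] + dt * (-a_p * y_p[k] + k_p * r[k])
    w1[k+1] = w1[k] + dt * (-lam * w1[k] + r[k])
    w2[k+1] = w2[k] + dt * (-lam * w2[k] + y_p[k])

# The identity y_p = k_p*w1 + (lam - a_p)*w2 holds with zero initial
# conditions; it is also satisfied exactly by the Euler recursions above.
err = np.max(np.abs(y_p - (k_p * w1 + (lam - a_p) * w2)))
print(err)  # essentially zero (floating-point rounding only)
```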
Time-domain approach- Identification
• Assume the measurements of r and y_p are made continuously between 0 and t.
• Then look for algorithms that use the complete information, preferably updating the
estimates only on the basis of new data, without storing the entire signals.
From equation (8), define the vector of nominal identifier parameters

  θ* = (k_p, λ − a_p)^T → (9)

• Define θ(t) to be a vector of the same dimension as θ*, called the adaptive identifier
parameter.
• θ(t) is the estimate of θ* based on the input-output data up to time t.
Let

  w(t) = (w^(1)(t), w^(2)(t))^T → (10)

Then equation (8) can be written as

  y_p(t) = θ*^T w(t) → (11)

Time-domain approach- Identification
• Based on measurements of r(t) and y_p(t) up to time t, w(t) may be
calculated and an estimate θ(t) derived.
• Identification error:

  e_1(t) = θ^T(t) w(t) − y_p(t) → (12)

• The identification error in equation (12) is linear in the parameter error

  φ(t) = θ(t) − θ*,  so that  e_1(t) = φ^T(t) w(t) → (13)

Therefore, equation (12) can be called a linear error equation.
• The purpose of the identification scheme is to calculate θ(t) on
the basis of measurements of e_1(t) and w(t) up to time t.
Gradient Algorithm
• The gradient algorithm is a steepest-descent method to minimize e_1²(t).
• The gradient of the squared error is given by

  ∇_θ e_1²(t) = 2 e_1(t) w(t) → (14)

• Let the parameter update law be

  θ̇(t) = −g e_1(t) w(t) → (15)

• The update is thus proportional to the negative gradient of the squared error,
where g is a strictly positive gain → adaptation gain.
• It allows us to vary the rate of adaptation of the parameters.
• The initial condition θ(0) is arbitrary, but it can be chosen based on prior
knowledge of the plant parameters.
• Using equation (12), the update law (15) can be written as

  θ̇(t) = −g w(t) (θ^T(t) w(t) − y_p(t))
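A minimal Euler-discretized simulation of the gradient update law (15) on the first-order plant. The plant values (a_p = 2, k_p = 3), filter pole λ = 5, gain g = 100, and the two-sinusoid input are illustrative assumptions, not values from the notes:

```python
import numpy as np

a_p, k_p, lam, g = 2.0, 3.0, 5.0, 100.0
dt, T = 1e-4, 40.0

theta_star = np.array([k_p, lam - a_p])   # nominal parameters, eq (9)
theta = np.zeros(2)                        # arbitrary theta(0)
y_p = w1 = w2 = 0.0

for k in range(int(T / dt)):
    t = k * dt
    r = np.sin(t) + np.sin(3.0 * t)        # two sinusoids -> w is PE
    w = np.array([w1, w2])                 # regressor, eq (10)
    e1 = theta @ w - y_p                   # identification error, eq (12)
    theta = theta - dt * g * e1 * w        # gradient update, eq (15)
    y_p += dt * (-a_p * y_p + k_p * r)     # plant, eq (2)
    w1 += dt * (-lam * w1 + r)             # filters, eq (7)
    w2 += dt * (-lam * w2 + y_p)

print(np.round(theta, 2), theta_star)      # theta approaches theta_star
```

Under the PE input, the parameter error φ = θ − θ* decays exponentially; with a non-PE input (e.g., a constant r) the estimate would not converge to θ*.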
Normalized Gradient Algorithm
• This is an alternative to the gradient algorithm.
• The parameter update law for this algorithm is defined by

  θ̇(t) = −g e_1(t) w(t) / (1 + γ w^T(t) w(t)) → (16)

  where g and γ are positive constants.
• The update law in equation (16) is equivalent to the update law in
equation (15) with w replaced by w/(1 + γ w^T w).
• The new regressor is a normalized form of w, so this is known as the
normalized gradient algorithm.
Theorem: Least-Squares Estimation
The least-squares cost function is minimized by any parameter θ̂
satisfying the normal equations

  Φ^T Φ θ̂ = Φ^T Y

If the matrix Φ^T Φ is non-singular, the minimum is unique and given by

  θ̂ = (Φ^T Φ)^(−1) Φ^T Y


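A discrete-time sketch of this formula applied to the MA model from the PE section. The true parameters, signal length, and noise level are invented for illustration:

```python
import numpy as np

# Batch least squares for y(k) = theta1*u(k-1) + theta2*u(k-2)
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.5])
u = rng.standard_normal(500)   # white noise input -> Phi^T Phi full rank

n = 2
Phi = np.array([u[k - n:k][::-1] for k in range(n, len(u))])  # rows phi(k)^T
Y = Phi @ theta_true + 0.01 * rng.standard_normal(len(Phi))   # small noise

# theta_hat = (Phi^T Phi)^-1 Phi^T Y, via a linear solve (no explicit inverse)
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(np.round(theta_hat, 2))  # close to [0.8, -0.5]
```

Solving the normal equations with `np.linalg.solve` avoids forming the inverse explicitly, which is the numerically preferred approach.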
Least-Square Algorithm
• The least-squares algorithm minimizes the integral squared error (ISE)

  ISE = ∫_0^t (θ^T(t) w(τ) − y_p(τ))² dτ → (17)

• Owing to the linearity of the error equation, the estimate may be obtained
directly from the condition

  ∂/∂θ ∫_0^t (θ^T(t) w(τ) − y_p(τ))² dτ = 0 → (18)

• So that the least-squares estimate is given by

  θ(t) = [∫_0^t w(τ) w^T(τ) dτ]^(−1) ∫_0^t w(τ) y_p(τ) dτ → (19)
Recursive Least-Square Algorithm
Let

  P(t) = [∫_0^t w(τ) w^T(τ) dτ]^(−1) → (20)

So that

  (d/dt) P^(−1)(t) = w(t) w^T(t) → (21)

Since

  (d/dt)(P P^(−1)) = Ṗ P^(−1) + P (d/dt) P^(−1) = 0 → (22)

it follows that

  Ṗ(t) = −P(t) w(t) w^T(t) P(t) → (23)
Recursive Least-Square Algorithm
• Equation (19) can be written as

  θ(t) = P(t) ∫_0^t w(τ) y_p(τ) dτ → (24)

• So that, using equation (23),

  θ̇(t) = −P(t) w(t) (θ^T(t) w(t) − y_p(t)) = −P(t) w(t) e_1(t) → (25)
Recursive Least-Square Algorithm
• The correct initial conditions at some t_0 > 0 are such that

  P(t_0) = [∫_0^{t_0} w(τ) w^T(τ) dτ]^(−1),  θ(t_0) = P(t_0) ∫_0^{t_0} w(τ) y_p(τ) dτ → (26)

• In practice, the recursive least-squares algorithm is started with
arbitrary initial conditions at t_0 = 0, so that

  P(0) = P_0 = P_0^T > 0,  θ(0) arbitrary → (27)

• It may be verified that the resulting solution of equations (23) and (25) is

  θ(t) = [P_0^(−1) + ∫_0^t w(τ) w^T(τ) dτ]^(−1) [P_0^(−1) θ(0) + ∫_0^t w(τ) y_p(τ) dτ] → (28)
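The recursive updates (23) and (25) can be simulated directly with Euler integration. The plant values (a_p = 2, k_p = 3), filter pole λ = 5, P_0 = 1000·I, and the input signal are illustrative assumptions, not values from the notes:

```python
import numpy as np

a_p, k_p, lam = 2.0, 3.0, 5.0
dt, T = 1e-4, 20.0

theta_star = np.array([k_p, lam - a_p])   # nominal parameters, eq (9)
theta = np.zeros(2)                        # arbitrary theta(0), eq (27)
P = 1000.0 * np.eye(2)                     # P(0) = P_0 > 0, eq (27)
y_p = w1 = w2 = 0.0

for k in range(int(T / dt)):
    t = k * dt
    r = np.sin(t) + np.sin(3.0 * t)        # input rich enough to make w PE
    w = np.array([w1, w2])                 # regressor, eq (10)
    e1 = theta @ w - y_p                   # identification error, eq (12)
    theta = theta - dt * (P @ w) * e1      # parameter update, eq (25)
    P = P - dt * (P @ np.outer(w, w) @ P)  # covariance update, eq (23)
    y_p += dt * (-a_p * y_p + k_p * r)     # plant, eq (2)
    w1 += dt * (-lam * w1 + r)             # filters, eq (7)
    w2 += dt * (-lam * w2 + y_p)

print(np.round(theta, 2), theta_star)      # theta approaches theta_star
```

A large P_0 corresponds to low confidence in θ(0), so the early corrections are large; as ∫ w w^T dτ grows, P shrinks and the updates slow down, in agreement with equation (28).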
Recursive Least-Square Algorithm
The parameter error is given by

  θ(t) − θ* = [P_0^(−1) + ∫_0^t w(τ) w^T(τ) dτ]^(−1) P_0^(−1) (θ(0) − θ*) → (29)

It follows that θ(t) converges to θ* if the integral ∫_0^t w(τ) w^T(τ) dτ grows
unbounded in all directions (its smallest eigenvalue tends to infinity) as t → ∞.
End of Chapter 4
