
Linear Quadratic Gaussian (LQG) Control

Three different cases can be considered: smoothing, filtering, and prediction. In LQG control the Kalman filter is used as a predictor.

[Figure: estimating a signal from measurements up to time kh. Smoothing: \hat{x}((k-m)h \mid kh); filtering: \hat{x}(kh \mid kh); prediction: \hat{x}((k+m)h \mid kh).]


Process corrupted by system and measurement noise:

x(kh + h) = \Phi x(kh) + \Gamma u(kh) + v(kh)
y(kh) = C x(kh) + e(kh)

where v and e are discrete-time Gaussian white-noise processes with zero mean and symmetric covariance and cross-covariance functions

E\left( v(kh) v(kh)^T \right) = R_1
E\left( v(kh) e(kh)^T \right) = R_{12}
E\left( e(kh) e(kh)^T \right) = R_2

The initial state x(0) is assumed to be Gaussian distributed with E x(0) = m_0, \operatorname{cov}(x(0)) = R_0.
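As a concrete sketch, the noisy state-space model above can be simulated in a few lines. All numerical values below (\Phi, \Gamma, C, the covariances R_1, R_2, and m_0, R_0) are illustrative placeholders, not taken from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state system; all values are assumptions for the demo
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
Gamma = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
R1 = 0.01 * np.eye(2)    # process-noise covariance E[v v^T]
R2 = np.array([[0.1]])   # measurement-noise covariance E[e e^T]
m0, R0 = np.zeros(2), 0.5 * np.eye(2)

# Gaussian initial state x(0) ~ N(m0, R0)
x = rng.multivariate_normal(m0, R0)
xs, ys = [], []
for k in range(50):
    u = 0.0                                          # open-loop, no input
    y = C @ x + rng.multivariate_normal([0.0], R2)   # noisy measurement
    v = rng.multivariate_normal(np.zeros(2), R1)     # process noise
    x = Phi @ x + Gamma.flatten() * u + v            # state update
    xs.append(x)
    ys.append(y)
```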
A "lemma"

Consider the static quadratic function

J(x, u) = \begin{bmatrix} x^T & u^T \end{bmatrix} \begin{bmatrix} Q_x & Q_{xu} \\ Q_{xu}^T & Q_u \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}

where the weight matrices are symmetric, Q_x positive semidefinite and Q_u positive definite. It is an easy calculation (do it: start by calculating the derivative of J with respect to u) to show that the minimum is achieved for

u^* = -Q_u^{-1} Q_{xu}^T x

and the minimum is

J^* = x^T \left( Q_x - Q_{xu} Q_u^{-1} Q_{xu}^T \right) x
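The "lemma" is easy to check numerically. A minimal sketch with randomly generated weights (Q_u is made positive definite by construction; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2

# Random weights: Qx positive semidefinite, Qu positive definite
A = rng.standard_normal((n, n)); Qx = A @ A.T
B = rng.standard_normal((m, m)); Qu = B @ B.T + m * np.eye(m)
Qxu = rng.standard_normal((n, m))   # arbitrary coupling term

x = rng.standard_normal(n)

def J(u):
    """The static quadratic cost J(x, u) for the fixed x above."""
    z = np.concatenate([x, u])
    Q = np.block([[Qx, Qxu], [Qxu.T, Qu]])
    return z @ Q @ z

# Minimizer and minimum from the lemma
u_star = -np.linalg.solve(Qu, Qxu.T @ x)                        # u* = -Qu^{-1} Qxu^T x
J_star = x @ (Qx - Qxu @ np.linalg.solve(Qu, Qxu.T)) @ x        # J* = x^T(Qx - Qxu Qu^{-1} Qxu^T)x

assert np.isclose(J(u_star), J_star)
# Any perturbation of u* increases J, since J(u* + d) - J* = d^T Qu d >= 0
for _ in range(100):
    d = 0.1 * rng.standard_normal(m)
    assert J(u_star + d) >= J_star - 1e-9
```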
The Kalman filter

Consider the one-step-ahead predictor

\hat{x}(k+1 \mid k) = \Phi \hat{x}(k \mid k-1) + \Gamma u(k) + K(k) \left( y(k) - C \hat{x}(k \mid k-1) \right)

where we now have freedom in choosing the gain K (not just setting the estimation-error poles at desired places). The estimation error \tilde{x} = x - \hat{x} has dynamics

\begin{aligned}
\tilde{x}(k+1) &= \Phi \tilde{x}(k) + v(k) - K(k) \left( y(k) - C \hat{x}(k \mid k-1) \right) \\
&= \left( \Phi - K(k) C \right) \tilde{x}(k) + v(k) - K(k) e(k) \\
&= \begin{bmatrix} I & -K(k) \end{bmatrix} \left( \begin{bmatrix} \Phi \\ C \end{bmatrix} \tilde{x}(k) + \begin{bmatrix} v(k) \\ e(k) \end{bmatrix} \right)
\end{aligned}
We set the criterion of minimizing the variance of the estimation error

P(k) = E \left[ \left( \tilde{x}(k) - E\tilde{x}(k) \right) \left( \tilde{x}(k) - E\tilde{x}(k) \right)^T \right]
The mean value of \tilde{x} is

E \tilde{x}(k+1) = \left( \Phi - K(k) C \right) E \tilde{x}(k)

Because E x(0) = m_0, the mean value of the reconstruction error is zero for all k, independent of K, if \hat{x}(0) = m_0.
Because x ( k ) is independent of v(k) and e(k) we obtain
P(k  1)  Ex (k  1) x (k  1)T
      R1 R12    I 

T

 I  K (k )    P(k )      
 C   C   R12 R2    K (k )T 

 
 P(k )T  R1 P(k )C T  R12   I 
 I  K (k )   T
 CP ( k )  T
 R12 CP ( k )C T
 R 2    K ( k ) 

with P(0) = R_0. Now consider minimizing the scalar \alpha^T P(k+1) \alpha for any value of \alpha. By using the "lemma" we get that the minimizing gain K(k), the Kalman gain, is

K(k) = \left( \Phi P(k) C^T + R_{12} \right) \left( C P(k) C^T + R_2 \right)^{-1}
Inserting that into the previous formula gives

P(k+1) = \Phi P(k) \Phi^T + R_1 - \left( \Phi P(k) C^T + R_{12} \right) \left( C P(k) C^T + R_2 \right)^{-1} \left( C P(k) \Phi^T + R_{12}^T \right), \qquad P(0) = R_0

which together with

\hat{x}(k+1 \mid k) = \Phi \hat{x}(k \mid k-1) + \Gamma u(k) + K(k) \left( y(k) - C \hat{x}(k \mid k-1) \right)
K(k) = \left( \Phi P(k) C^T + R_{12} \right) \left( C P(k) C^T + R_2 \right)^{-1}

is the celebrated Kalman filter.
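The predictor, gain, and Riccati recursion above translate directly into code. A minimal sketch (the function name and interface are illustrative):

```python
import numpy as np

def kalman_predictor(Phi, Gamma, C, R1, R12, R2, m0, R0, us, ys):
    """One-step-ahead Kalman predictor.

    Runs the recursion
        K(k)        = (Phi P C^T + R12)(C P C^T + R2)^{-1}
        x_hat(k+1|k)= Phi x_hat(k|k-1) + Gamma u(k) + K(k)(y(k) - C x_hat(k|k-1))
        P(k+1)      = Phi P Phi^T + R1 - K(k)(C P Phi^T + R12^T)
    starting from x_hat(0) = m0, P(0) = R0.
    """
    x_hat, P = m0.copy(), R0.copy()
    preds = []
    for u, y in zip(us, ys):
        S = C @ P @ C.T + R2                                   # innovation covariance
        K = np.linalg.solve(S.T, (Phi @ P @ C.T + R12).T).T    # Kalman gain
        x_hat = Phi @ x_hat + Gamma @ u + K @ (y - C @ x_hat)  # predictor update
        P = Phi @ P @ Phi.T + R1 - K @ (C @ P @ Phi.T + R12.T) # Riccati step
        preds.append(x_hat.copy())
    return preds, P
```

For the scalar system of the example below (\Phi = C = 1, R_1 = 0, R_2 = \sigma^2), this reproduces P(k+1) = \sigma^2 P(k) / (\sigma^2 + P(k)).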


Note that this was an algebraic derivation of the Kalman filter (predictor case). There are other approaches (based on Bayesian analysis, using the orthogonality principle, etc.), which give more insight into the problem.

Example: Consider the scalar system

x(k+1) = x(k)
y(k) = x(k) + e(k)

where the measurement is corrupted by noise (zero-mean white noise with standard deviation \sigma); x(0) is assumed to have variance 0.5. The Kalman filter is given by

\hat{x}(k+1 \mid k) = \hat{x}(k \mid k-1) + K(k) \left( y(k) - \hat{x}(k \mid k-1) \right)
K(k) = \frac{P(k)}{\sigma^2 + P(k)}, \qquad P(k+1) = \frac{\sigma^2 P(k)}{\sigma^2 + P(k)}

[Figure: estimation error for fixed gains K = 0.01 and K = 0.05 versus the time-varying Kalman gain.]
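The scalar example is easy to simulate. A minimal sketch assuming \sigma^2 = 1, comparing the time-varying Kalman gain with a fixed gain K = 0.05 (both gain values and the horizon are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 1.0                            # measurement-noise variance sigma^2 (assumed)
x = rng.normal(0.0, np.sqrt(0.5))       # x(0) with variance 0.5; x(k+1) = x(k)

P, x_hat_kf = 0.5, 0.0                  # Kalman filter: P(0) = 0.5
x_hat_fix, K_fix = 0.0, 0.05            # fixed-gain observer for comparison
for k in range(200):
    y = x + rng.normal(0.0, np.sqrt(sigma2))   # noisy measurement
    K = P / (sigma2 + P)                       # time-varying Kalman gain
    x_hat_kf += K * (y - x_hat_kf)
    P = sigma2 * P / (sigma2 + P)              # Riccati recursion: P -> 0
    x_hat_fix += K_fix * (y - x_hat_fix)
```

Since 1/P(k+1) = 1/P(k) + 1/\sigma^2, here P(k) = 1/(2 + k) and the Kalman gain decays like 1/k: the filter gradually averages all measurements of the constant state, which a fixed gain cannot do.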
LQG control

Consider the system

x(k+1) = \Phi x(k) + \Gamma u(k) + v(k)
y(k) = C x(k) + e(k)

and the criterion to be minimized

\begin{aligned}
J &= E \left\{ \sum_{k=0}^{N-1} \left( x(k)^T Q_1 x(k) + u(k)^T Q_2 u(k) \right) + x(N)^T Q_{0c} x(N) \right\} \\
&= E \left\{ \sum_{k=0}^{N-1} \begin{bmatrix} x(k)^T & u(k)^T \end{bmatrix} Q \begin{bmatrix} x(k) \\ u(k) \end{bmatrix} + x(N)^T Q_{0c} x(N) \right\}
\end{aligned}

where Q = \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix} (Q_1, Q_{0c} positive semidefinite, Q_2 positive definite).
LQG control is given by the separation theorem (not proved here): the optimal control is a combination of optimal LQ control and optimal prediction. In other words,

u^*(k) = -L(k) \hat{x}(k \mid k-1)

where L(k) is given by the Riccati equation of the LQ problem and the state estimate is obtained by the Kalman filter. The separation theorem reflects the fact that optimal control can be separated into the solution of the optimal deterministic LQ problem and optimal stochastic prediction.
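The gains L(k) come from the backward Riccati recursion of the deterministic LQ problem, which is the "lemma" applied at each stage with Q_x = Q_1 + \Phi^T S(k+1) \Phi, Q_{xu} = \Phi^T S(k+1) \Gamma, Q_u = Q_2 + \Gamma^T S(k+1) \Gamma. A minimal sketch (function name and interface are illustrative):

```python
import numpy as np

def lq_gains(Phi, Gamma, Q1, Q2, Q0c, N):
    """Backward Riccati recursion of the deterministic LQ problem.

    Returns the gains L(0), ..., L(N-1) so that u*(k) = -L(k) x_hat(k|k-1).
    """
    S = Q0c.copy()                               # terminal condition S(N) = Q0c
    Ls = [None] * N
    for k in range(N - 1, -1, -1):
        # Apply the "lemma" stage-wise:
        Qu = Q2 + Gamma.T @ S @ Gamma            # Q_u  = Q2 + Gamma^T S Gamma
        Qxu = Phi.T @ S @ Gamma                  # Q_xu = Phi^T S Gamma
        L = np.linalg.solve(Qu, Qxu.T)           # L(k) = Qu^{-1} Qxu^T
        S = Q1 + Phi.T @ S @ Phi - Qxu @ L       # Riccati step for S(k)
        Ls[k] = L
    return Ls
```

Feeding the Kalman predictor's \hat{x}(k \mid k-1) into u^*(k) = -L(k)\hat{x}(k \mid k-1) then gives the complete LQG controller.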
Structure of LQG control

[Figure: block diagram of the LQG controller: the Kalman filter produces \hat{x}(k \mid k-1) from u and y, and the state feedback u = -L \hat{x} closes the loop around the process.]
End of Story
Intermediate exam 2 or alternatively full exam on Wednesday, 9th of December at 14:00-16:00, hall AS2. You can choose (after seeing the problems) which exam you do.

The following exam is on the 8th of February 2016. You can then do the full exam (5 problems) or (re)do intermediate exam 1 or 2 (3 problems). Later, only the full exam is possible. The intermediate exam results and bonus points are valid until the course lectures start again (autumn 2016).
Core material
-Discretization (state-space, transfer function), ZOH
-Properties of a discrete-time system (pulse transfer function, pulse response, weighting function, poles, zeros, mapping of poles from continuous- to discrete-time systems)
-Stability (state stability, BIBO stability, Jury stability test, frequency response, Bode, Nyquist, gain and phase margins)
Core material...
-Controllability, reachability, observability
-Pole placement by state feedback control, regulation and servo problems, static gain
-State observer, pole placement of the observer, combining an observer and a state feedback controller
Core material...
-Discrete approximations of continuous-time controllers (Euler, Tustin etc.)
-Discrete PID controller, integrator windup and antiwindup
-The alias effect, Nyquist frequency, choosing the sampling interval, pre-filters
-Disturbance models (stochastics, expectation, covariance, white noise, AR, MA, ARMA, ARMAX models, spectral density)
Core material...
-Optimal predictor

-Minimum variance controller

-LQ controller. Basics of LQG control

The end
