
EE6403D

COMPUTER CONTROLLED SYSTEM


Topics of Computer Controlled Systems Course
• Multivariable Control
• Programmable Logic Controller (PLC)
• Supervisory Control And Data Acquisition (SCADA)
• Distributed Control Systems (DCS)
• Real Time Systems (RTS)
Control Systems

Desired output R(s) → (+/−) → E(s) → [Controller C(s)] → U(s) → [Plant G(s)] → Output Y(s), with Y(s) fed back to the summing junction.

Y(s) = G(s) U(s)
U(s) = C(s) E(s)
E(s) = R(s) − Y(s)

If Y(s) = R(s), then E(s) = 0, and then U(s) = 0.
Will the output Y(s) become zero?
Multivariable Control

• Developments in linear control theory have increasingly focused on the control of multivariable systems.
• Many systems, in areas such as aerospace, are represented by models with several inputs, with each input having a significant effect on several outputs.
• Such cross-coupling makes the use of single-input single-output (SISO) methods difficult.
• In parallel with developments in multi-input multi-output (MIMO) systems, there has been a renewed emphasis on frequency response.
• The ability of the state-space framework to handle uncertainty, especially non-parametric uncertainty, proved deficient.
• In contrast, uncertainty fits quite naturally in an input-output setting such as the frequency response.
• The state-space framework has not been cast aside; rather, connections have been made between it and the frequency-response approach.
Basic expressions for multivariable systems
• The objective is to present a few basic expressions of MIMO linear systems theory.
• The following basic 1-DOF structure is assumed: the reference yd minus the fed-back measurement forms the error e, which drives the controller F(s); its output u drives the plant P(s); a disturbance d is added at the plant output y; a measurement noise v is added to y in the feedback path.
• In the figure, the signals are all vector quantities.
• F(s) and P(s) are transfer function matrices.
• The dimensions are m × r for P(s) and r × m for F(s), where m = dim(y) and r = dim(u).
• The most significant differences between this and a SISO system are:
(1) division becomes inversion;
(2) the order of multiplication matters.
• From the block diagram:

y = d + P u
  = d + P F (yd − y − v)
y + P F y = d + P F (yd − v)
Singular Values

1) The loop gain matrix for a two-input, two-output system is:

L(s) = [ 1/s    −0.5/(s+1)  ]
       [ 1      1/(s(s+1))  ]

(a) Calculate and display σmax(L(jω)) and σmin(L(jω)).
(b) Calculate S(s) and T(s).
(c) Compute and display σmax(S(jω)) and σmin(S(jω)).

Solution:
(a) The maximum and minimum singular values of L are shown in Figure-1, obtained with the MATLAB command sigma:

σmin(L(jω)) = √λmin(L*(jω) L(jω))
• sigma: singular value plot of dynamic systems.
• sigma(SYS) produces a singular value (SV) plot of the frequency response of the dynamic system SYS. The frequency range and number of points are chosen automatically.
• sigma(SYS,{WMIN,WMAX}) draws the SV plot for frequencies ranging between WMIN and WMAX, in radians/TimeUnit (relative to the time units specified in SYS.TimeUnit, the default being seconds).
• sigma(SYS,W) uses the vector W of frequencies (in radians/TimeUnit) to evaluate the frequency response. Use LOGSPACE to generate logarithmically spaced frequency vectors.
• sigma(SYS,W,TYPE) or sigma(SYS,[],TYPE) draws a modified SV plot, depending on the value of TYPE:
  TYPE = 1 --> SV of inv(SYS)
  TYPE = 2 --> SV of I + SYS
  TYPE = 3 --> SV of I + inv(SYS)
  SYS should be a square system when using this syntax.
• sigma(SYS1,SYS2,...,W,TYPE) draws the SV responses of several systems SYS1, SYS2,... on a single plot. The arguments W and TYPE are optional. You can also specify a color, line style, and marker for each system, for example sigma(sys1,'r',sys2,'y--',sys3,'gx').
• SV = sigma(SYS,W) and [SV,W] = sigma(SYS) return the singular values SV of the frequency response (along with the frequency vector W if unspecified); no plot is drawn on the screen. The matrix SV has length(W) columns, and SV(:,k) gives the singular values (in descending order) at the frequency W(k). The frequencies W are in rad/TimeUnit.
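A plot-free Python/NumPy equivalent of what sigma computes for this example can be sketched as follows (Python and the 200-point grid are illustrative choices, not part of the original MATLAB session):

```python
import numpy as np

def L_of(s):
    # Loop gain matrix of the example: L(s) = [[1/s, -0.5/(s+1)], [1, 1/(s(s+1))]]
    return np.array([[1/s, -0.5/(s + 1)],
                     [1.0, 1/(s*(s + 1))]])

# Frequency grid (rad/s), as LOGSPACE would generate for sigma(SYS, W)
w = np.logspace(-1, 1, 200)
sv = np.array([np.linalg.svd(L_of(1j*wk), compute_uv=False) for wk in w])
sv_db = 20*np.log10(sv)   # singular values in dB, largest first in each row

# At w = 1 rad/s the singular values are about 1.605 and 0.220
sv_at_1 = np.linalg.svd(L_of(1j*1.0), compute_uv=False)
```

Plotting sv_db against w on a log axis reproduces the sigma plot of Figure-1.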


(b)

S(s) = (I + L(s))⁻¹ = [ 1 + 1/s    −0.5/(s+1)       ]⁻¹
                      [ 1          1 + 1/(s(s+1))   ]

     = [ (1+s)/s    −0.5/(s+1)           ]⁻¹
       [ 1          (s²+s+1)/(s(s+1))    ]

     = s²(s+1)/(s³ + 2.5s² + 2s + 1) · [ (s²+s+1)/(s(s+1))    0.5/(s+1)  ]
                                       [ −1                   (1+s)/s    ]

S(s) = 1/(s³ + 2.5s² + 2s + 1) · [ s(s² + s + 1)    0.5s²      ]
                                 [ −s²(s + 1)       s(s + 1)²  ]

T(s) = I − S(s) = 1/(s³ + 2.5s² + 2s + 1) · [ 1.5s² + s + 1    −0.5s²          ]
                                            [ s²(s + 1)        0.5s² + s + 1   ]
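The closed-form S(s) and T(s) above can be spot-checked numerically at an arbitrary test point (s₀ = 2 + j is an illustrative choice, nothing more):

```python
import numpy as np

s = 2 + 1j
L = np.array([[1/s, -0.5/(s + 1)],
              [1.0, 1/(s*(s + 1))]])
S_direct = np.linalg.inv(np.eye(2) + L)          # S = (I + L)^-1 computed directly

den = s**3 + 2.5*s**2 + 2*s + 1                  # common denominator from above
S_closed = np.array([[s*(s**2 + s + 1), 0.5*s**2],
                     [-s**2*(s + 1),    s*(s + 1)**2]])/den
T_closed = np.array([[1.5*s**2 + s + 1, -0.5*s**2],
                     [s**2*(s + 1),     0.5*s**2 + s + 1]])/den
```

S_direct and S_closed agree to machine precision, and S + T = I as required.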

(c) The singular value σmax(S(jω)) is computed numerically and is displayed in Figure-2.

[Figure-2: singular values of the sensitivity S, in dB, versus frequency (rad/s), 10⁻¹ to 10¹.]

Note that σmax(S(jω)) is small where σmin(L(jω)) ≫ 1, and σmax(S(jω)) is near 1 where σmax(L(jω)) ≪ 1.
[Figure-1: singular values of the loop gain L, in dB, versus frequency (rad/s), 10⁻¹ to 10¹.]

• Loop transfer function matrix: L(s) = P(s) F(s)
• Find the singular values of L(s) at the frequency ω = 1 rad/s.
• Solution:

σmin(L(jω)) = √λmin(Lᵀ(−jω) L(jω))
σmax(L(jω)) = √λmax(Lᵀ(−jω) L(jω))

L(jω) = [ 1/(jω)    −0.5/(jω+1)     ]
        [ 1         1/(jω(jω+1))    ]

• At ω = 1 rad/s, L(jω) becomes:

L(j) = [ −j    −0.25 + 0.25j ]
       [ 1     −0.5 − 0.5j   ]

Lᵀ(s) = [ 1/s          1           ]
        [ −0.5/(s+1)   1/(s(s+1))  ]

so

Lᵀ(−jω) = [ 1/(−jω)         1                 ]
          [ −0.5/(−jω+1)    1/(−jω(−jω+1))    ]

• At ω = 1 rad/s, Lᵀ(−jω) becomes (the conjugate transpose of L(j), since L has real coefficients):

Lᵀ(−j) = [ j                1             ]
         [ −0.25 − 0.25j    −0.5 + 0.5j   ]

• At ω = 1 rad/s, the product Lᵀ(−jω) L(jω) becomes:

[ j                1            ] [ −j    −0.25 + 0.25j ]   [ 2               −0.75 − 0.75j ]
[ −0.25 − 0.25j    −0.5 + 0.5j  ] [ 1     −0.5 − 0.5j   ] = [ −0.75 + 0.75j   0.625         ]

• At ω = 1 rad/s, the eigenvalues of the product are 0.0485 and 2.5765.
• The minimum and maximum of the set of eigenvalues are:

λmin(Lᵀ(−jω) L(jω)) = 0.0485
λmax(Lᵀ(−jω) L(jω)) = 2.5765

• The smallest and largest singular values are then:

σmin(L(j)) = √0.0485 = 0.2203
σmax(L(j)) = √2.5765 = 1.6051

• In dB:

20 log₁₀(σmin(L(j))) = 20 log₁₀(0.2203) = −13.14 dB
20 log₁₀(σmax(L(j))) = 20 log₁₀(1.6051) = 4.11 dB
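The hand computation at ω = 1 rad/s can be reproduced numerically (a Python sketch; eigvalsh applies because the product matrix is Hermitian):

```python
import numpy as np

Lj = np.array([[-1j,  -0.25 + 0.25j],
               [1.0,  -0.50 - 0.50j]])      # L(j1) from above

prod = Lj.conj().T @ Lj                      # L^T(-j1) L(j1), Hermitian
eigs = np.sort(np.linalg.eigvalsh(prod))     # real eigenvalues, ascending
sig_min, sig_max = np.sqrt(eigs[0]), np.sqrt(eigs[-1])
db_min, db_max = 20*np.log10(sig_min), 20*np.log10(sig_max)
```

The eigenvalues come out as 0.0485 and 2.5765, matching the values above.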
• The largest and smallest singular values of S are denoted σmax(S) and σmin(S), respectively. Thus:

σmax(S) = √λmax(S*S)
σmin(S) = √λmin(S*S)

• From these equations, one can write:

σmin(S) ‖d‖₂ ≤ ‖y‖₂ ≤ σmax(S) ‖d‖₂
Properties of Singular Values

1) If S⁻¹ exists, then σmax(S) = 1/σmin(S⁻¹) and σmin(S) = 1/σmax(S⁻¹).
2) |det S| = σ₁ σ₂ σ₃ ⋯ σₙ.
3) σmax(A + B) ≤ σmax(A) + σmax(B)
4) σmax(AB) ≤ σmax(A) σmax(B)
5) max{σmax(A), σmax(B)} ≤ σmax([A B]) ≤ √2 max{σmax(A), σmax(B)}
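The five properties can be checked numerically on arbitrary complex test matrices (a Python sketch; the seeded random matrices are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))

smax = lambda M: np.linalg.svd(M, compute_uv=False).max()
smin = lambda M: np.linalg.svd(M, compute_uv=False).min()

eps = 1e-9
p1 = np.isclose(smax(A), 1/smin(np.linalg.inv(A)))            # 1) inversion swaps extremes
p2 = np.isclose(abs(np.linalg.det(A)),
                np.prod(np.linalg.svd(A, compute_uv=False)))  # 2) |det| = product of s.v.
p3 = smax(A + B) <= smax(A) + smax(B) + eps                   # 3) triangle inequality
p4 = smax(A @ B) <= smax(A)*smax(B) + eps                     # 4) submultiplicativity
AB = np.hstack([A, B])
p5 = (max(smax(A), smax(B)) <= smax(AB) + eps                 # 5) concatenation bounds
      and smax(AB) <= np.sqrt(2)*max(smax(A), smax(B)) + eps)
```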
• From the equation:

σmin(S) ‖d‖₂ ≤ ‖y‖₂ ≤ σmax(S) ‖d‖₂

one can see that σmax(S) describes the worst-case situation for a given ‖d‖₂.
• Given that the squares of the magnitudes of the components of the disturbance at a frequency ω sum to ‖d‖₂², the most unfavorable distribution of disturbance components will lead to:

‖y(jω)‖₂ = σmax(S(jω)) ‖d‖₂

• It follows that σmax(S(jω)) is the key scalar quantity describing the maximum amplification of S(jω); it plays the role that was given to |S(jω)| in the SISO case.
• The same is true of the other contributions in the equations for the output, error, and control effort in terms of the inputs and T, S.
• For example, the magnitude of the measurement-noise contribution at ω is measured by σmax(T(jω)) ‖v‖₂.
• Singular values also describe the magnitude of the loop gain.
• In the SISO case:
  If the loop gain |L(jω)| ≫ 1, then |S(jω)| is small and T(jω) ≈ 1.
  If the loop gain |L(jω)| ≪ 1, then S(jω) ≈ 1 and T(jω) ≈ L(jω).
• To arrive at a MIMO equivalent, consider:

(I + L) a = a + L a

• where a is a complex vector of appropriate dimension and L = PF is the loop gain matrix.
• The approximation (I + L) ≈ L is valid if, for any vector a, ‖(I + L)a‖₂ ≈ ‖La‖₂.
• From the equation (I + L)a = a + La, that is true if:

‖La‖₂ ≫ ‖a‖₂

• Since ‖La‖₂ is at minimum equal to σmin(L)‖a‖₂, the approximation holds if:

σmin(L(jω)) ≫ 1

• This is precisely what is meant by a "large loop gain". In that case:

S(jω) ≈ L⁻¹(jω)
T(jω) ≈ L⁻¹(jω) L(jω) = I
• On the other hand, (I + L) ≈ I if:

‖La‖₂ ≪ ‖a‖₂

that is, if ‖La‖₂ is small compared to ‖a‖₂.
• That is so, for all a, if σmax(L(jω)) ≪ 1.
• This is precisely what is meant by a "small loop gain". In that case:

S(jω) ≈ I
T(jω) ≈ L(jω)
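The two regimes can be illustrated numerically with fixed matrices chosen so that σmin(L) ≫ 1 in one case and σmax(L) ≪ 1 in the other (a sketch; the particular matrices are arbitrary):

```python
import numpy as np

# Large loop gain: smin(L) >> 1, so S ~ L^-1 and T ~ I
L_big = np.array([[200.0, 30.0], [-10.0, 150.0]])
S_big = np.linalg.inv(np.eye(2) + L_big)
T_big = np.eye(2) - S_big
L_big_inv = np.linalg.inv(L_big)
rel_err = np.linalg.norm(S_big - L_big_inv)/np.linalg.norm(L_big_inv)

# Small loop gain: smax(L) << 1, so S ~ I and T ~ L
L_small = 0.01*np.array([[1.0, 0.5], [-0.2, 0.8]])
S_small = np.linalg.inv(np.eye(2) + L_small)
T_small = np.eye(2) - S_small
```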
Stability
• One can write: P(s) F(s) = L(s) = N(s)/d(s)
• where N(s) is an m × m matrix of polynomials and d(s) is a polynomial.
• It is assumed that if d(s₀) = 0, then det N(s₀) ≠ 0.
• This is the equivalent of the SISO condition that excludes pole-zero cancellations.
• Then:

S(s) = (I + L(s))⁻¹ = [ I + N(s)/d(s) ]⁻¹
     = d(s) (d(s) I + N(s))⁻¹
     = d(s) Adj[ d(s) I + N(s) ] / det(d(s) I + N(s))
• The characteristic equation is:

det(d(s) I + N(s)) = 0

• For an n × n matrix A and a scalar k, det(kA) = kⁿ det(A); rewriting the above equation:

dⁿ(s) det[ I + N(s)/d(s) ] = 0

• If d(s₀) = 0, then s₀ is not a root of the characteristic equation, since that would require det N(s₀) = 0, which is excluded.
• So the factor dⁿ(s) must be cancelled by the determinant, and the roots of the characteristic equation satisfy:

det[ I + N(s)/d(s) ] = det[ I + L(s) ] = 0
• The Routh criterion may be applied to the characteristic equation, as in the SISO case.
• To use the Nyquist criterion in its usual form, we write:

1 + (det[ I + L(s) ] − 1) = 0

• The Nyquist plot is that of det[ I + L(s) ] − 1.
• N is counted as in the SISO case.
• P is the number of RHP poles of L(s), which is given by the number of RHP roots of d(s).
NORMS
• The Euclidean or l₂ norm of a vector x is defined as:

‖x‖₂ = ( Σ_{i=1}^{n} |xᵢ|² )^(1/2) = (xᵀx)^(1/2)

• For a vector signal x(t), the l₂ norm is:

‖x‖₂ = ( ∫_{−∞}^{+∞} xᵀ(t) x(t) dt )^(1/2)

• This norm is the square root of the sum of the energy in each component of the vector.
• For power signals, we may use the root-mean-square (rms) value:

rms(x) = ( lim_{T→∞} (1/2T) ∫_{−T}^{+T} xᵀ(t) x(t) dt )^(1/2)
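For a concrete scalar case where the answers are known in closed form — x(t) = e⁻ᵗ for t ≥ 0 has ‖x‖₂ = √(1/2), and x(t) = sin t has rms 1/√2 — the definitions can be evaluated numerically (a Python sketch using simple Riemann sums):

```python
import numpy as np

# l2 norm of x(t) = exp(-t), t >= 0: the integral of exp(-2t) is 1/2
t = np.linspace(0.0, 30.0, 300001)
dt = t[1] - t[0]
x = np.exp(-t)
l2 = np.sqrt(np.sum(x*x)*dt)

# rms of the power signal x(t) = sin(t): the limit value is 1/sqrt(2)
T = 1000*np.pi
tt = np.linspace(-T, T, 2000001)
dtt = tt[1] - tt[0]
rms = np.sqrt(np.sum(np.sin(tt)**2)*dtt/(2*T))
```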
• For an m × r matrix, we define the Frobenius norm:

‖A‖₂ = ( Σ_{i=1}^{m} Σ_{j=1}^{r} |A_{ij}|² )^(1/2)

• It can be shown that:

‖A‖₂² = tr(AᵀA) = tr(AAᵀ)

• where "tr" denotes the trace, i.e., the sum of the diagonal elements.
• Linear time-invariant (LTI) systems are generalizations of matrices: a matrix operates on a vector to produce another vector; an LTI system operates on a signal to produce another signal.
• By analogy with the Frobenius norm, we define the L₂ norm of an m × r transfer matrix G(s) as follows.
‖G‖₂ = ( (1/2π) ∫_{−∞}^{+∞} tr[ Gᵀ(−jω) G(jω) ] dω )^(1/2)

• ‖G‖₂ exists if and only if each element of G(s) is strictly proper and has no poles on the imaginary axis. In that case, we write G ∈ L₂.
• Under those conditions, the norm can be evaluated as an integral in the complex plane:

‖G‖₂ = ( (1/2πj) ∫_{−j∞}^{+j∞} tr[ Gᵀ(−s) G(s) ] ds )^(1/2)
     = ( (1/2πj) ∮ tr[ Gᵀ(−s) G(s) ] ds )^(1/2)

• where the last integral is taken over a contour that runs up the jω-axis and around an infinite semicircle in either half-plane.
• Since G(s) is strictly proper, the integrand vanishes over the semicircle, so the residue theorem can be used.
• If G ∈ L₂ and, in addition, G is stable, then we say that G ∈ H₂.
• H₂ is called a Hardy space, defined with the 2-norm.
Q(2) Calculate the L₂ norm of:

G(s) = 1/(s² + 3s + 2) · [ s+3    −(s+2) ]
                         [ −2     s+2    ]

Solution:

Gᵀ(−s) = 1/(s² − 3s + 2) · [ −s+3       −2    ]
                           [ −(−s+2)    −s+2  ]

Gᵀ(−s) G(s) = 1/[(s+2)(s+1)(−s+2)(−s+1)] · [ −s+3       −2    ] [ s+3    −(s+2) ]
                                            [ −(−s+2)    −s+2  ] [ −2     s+2    ]

The diagonal entries of the matrix product are (9 − s²) + 4 and (4 − s²) + (4 − s²), so:

tr[ Gᵀ(−s) G(s) ] = (−3s² + 21)/[(s+2)(s+1)(−s+2)(−s+1)]

• If we integrate around a contour enclosing the LHP of the s-plane in the positive direction, the integral below is the sum of the residues at s = −1 and s = −2:

‖G‖₂² = (1/2πj) ∮ tr[ Gᵀ(−s) G(s) ] ds

‖G‖₂² = lim_{s→−1} (s+1) · (−3s² + 21)/[(s+2)(s+1)(−s+2)(−s+1)]
      + lim_{s→−2} (s+2) · (−3s² + 21)/[(s+2)(s+1)(−s+2)(−s+1)]

‖G‖₂² = 18/[(−1+2)(1+2)(1+1)] + (−3·4 + 21)/[(−2+1)(2+2)(2+1)]
      = 18/6 − 9/12 = 9/4

‖G‖₂ = 3/2
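The residue computation can be cross-checked by numerical integration on the jω-axis, where tr[Gᵀ(−jω)G(jω)] = (3ω² + 21)/((ω² + 4)(ω² + 1)) (a Python/SciPy sketch):

```python
import numpy as np
from scipy.integrate import quad

# tr[G^T(-jw) G(jw)] evaluated on the imaginary axis s = jw
integrand = lambda w: (3*w**2 + 21)/((w**2 + 4)*(w**2 + 1))

val, _ = quad(integrand, -np.inf, np.inf)
G2_sq = val/(2*np.pi)        # ||G||_2^2, expected 9/4
G2 = np.sqrt(G2_sq)          # ||G||_2,   expected 3/2
```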
CALCULATION OF SYSTEM NORMS
• We shall calculate system norms from state-space data, because good algorithms have been devised for this purpose.
• First, let us compute the H₂ norm.
• Let g(t) be the matrix impulse response corresponding to G(s), i.e.:

g(t) = C e^{At} B

• Since A is assumed to be stable (G ∈ H₂), we use Parseval's theorem and write:

‖G‖₂² = tr[ ∫₀^{+∞} gᵀ(t) g(t) dt ]
‖G‖₂² = tr[ ∫₀^{+∞} Bᵀ e^{Aᵀt} Cᵀ C e^{At} B dt ]
‖G‖₂² = tr[ Bᵀ ( ∫₀^{+∞} e^{Aᵀt} Cᵀ C e^{At} dt ) B ]
‖G‖₂² = tr[ Bᵀ L_c B ]

• where

L_c = ∫₀^{+∞} e^{Aᵀt} Cᵀ C e^{At} dt

• We have the result that L_c satisfies the Lyapunov equation:

Aᵀ L_c + L_c A = −Cᵀ C

• Since tr(xy) = tr(yx), we may also use:

‖G‖₂² = tr[ ∫₀^{+∞} g(t) gᵀ(t) dt ]

• which leads to:

‖G‖₂² = tr[ C L_o Cᵀ ]

• where L_o satisfies the Lyapunov equation:

A L_o + L_o Aᵀ = −B Bᵀ
Q3) A minimal realization of the transfer function of the previous question is given as:

ẋ = [ 0    1  ] x + [ 1   −1 ] u
    [ −2   −3 ]     [ 0    1 ]

y = [ 1  0 ] x
    [ 0  1 ]

Compute the H₂ norm of G(s).

Solution: Solve the Lyapunov equation Aᵀ L_c + L_c A = −Cᵀ C with L_c = [ p₁ p₂ ; p₂ p₃ ]:

[ 0   −2 ] [ p₁  p₂ ] + [ p₁  p₂ ] [ 0    1  ] = − [ 1  0 ]
[ 1   −3 ] [ p₂  p₃ ]   [ p₂  p₃ ] [ −2   −3 ]     [ 0  1 ]

[ −2p₂       −2p₃      ] + [ −2p₂    p₁ − 3p₂ ] = − [ 1  0 ]
[ p₁ − 3p₂   p₂ − 3p₃  ]   [ −2p₃    p₂ − 3p₃ ]     [ 0  1 ]

[ −4p₂             p₁ − 3p₂ − 2p₃ ] = − [ 1  0 ]
[ p₁ − 3p₂ − 2p₃   2p₂ − 6p₃      ]     [ 0  1 ]

• Forming the equations:

−4p₂ = −1
p₁ − 3p₂ − 2p₃ = 0
2p₂ − 6p₃ = −1

• On solving, we get the L_c matrix:

L_c = [ 5/4   1/4 ]
      [ 1/4   1/4 ]

‖G‖₂² = tr(Bᵀ L_c B)

‖G‖₂² = tr( [ 1   0 ] [ 5/4  1/4 ] [ 1   −1 ] )
            [ −1  1 ] [ 1/4  1/4 ] [ 0    1 ]

      = tr( [ 5/4   1/4 ] [ 1   −1 ] )
            [ −1    0   ] [ 0    1 ]

      = tr( [ 5/4   −1 ] ) = 5/4 + 1 = 9/4
            [ −1     1 ]

so ‖G‖₂ = 3/2, in agreement with the previous question.
• A different type of norm is the induced norm, which applies to operators and is essentially a maximum gain.

(a) For a matrix, the induced Euclidean norm is:

‖A‖₂ᵢ = max_{‖d‖₂=1} ‖Ad‖₂ = σmax(A)

since ‖Ad‖₂ attains σmax(A) ‖d‖₂ at some d with ‖d‖₂ = 1.

(b) To obtain the induced norm for an LTI system, consider first a stable, strictly proper SISO system.
• If the input u(·) ∈ L₂, then the output y(·) ∈ L₂ (see the l₂-norm equation for the signal x(t)).
• By Parseval's theorem:

‖y‖₂² = (1/2π) ∫_{−∞}^{+∞} |G(jω)|² |u(jω)|² dω
• One can state that:

‖y‖₂² ≤ (1/2π) sup_ω |G(jω)|² ∫_{−∞}^{+∞} |u(jω)|² dω

• or:

‖y‖₂² ≤ sup_ω |G(jω)|² ‖u‖₂²

• The RHS of this equation can be approached arbitrarily closely, for a fixed value of ‖u‖₂ that is chosen to be 1 with no loss of generality.
• Suppose |u(jω)|² approaches an impulse of area 2π in the frequency domain at ω = ω₀.
• Then the integral above approaches |G(jω₀)|².
• If |G(jω)| is maximum at some finite value of ω, one may choose ω₀ to be that frequency.
• If not, then |G(jω)| must approach a supremum as ω → ∞.
• We can then make ω₀ as large as we like, and |G(jω₀)| will be as close to the supremum as we wish.
• The RHS of the equation ‖y‖₂² ≤ sup_ω |G(jω)|² ‖u‖₂² can therefore be approached arbitrarily closely, and:

sup_{‖u‖₂=1} ‖y‖₂ = sup_ω |G(jω)|

• This norm is also called the infinity norm of G, and is given by:

‖G‖_∞ = lim_{p→∞} ( (1/2π) ∫_{−∞}^{+∞} |G(jω)|^p dω )^(1/p)
• The ∞-norm of G(s) exists if and only if G(s) is proper with no poles on the imaginary axis.
• In that case, we write G ∈ L_∞.
• If, in addition, G is stable, then we say that G ∈ H_∞, which is the Hardy space with the infinity norm.
(c) For multivariable systems:

‖y‖₂² = (1/2π) ∫_{−∞}^{+∞} ‖G(jω) u(jω)‖₂² dω

‖y‖₂² ≤ (1/2π) ∫_{−∞}^{+∞} σmax²(G(jω)) ‖u(jω)‖₂² dω

‖y‖₂² ≤ [ sup_ω σmax(G(jω)) ]² (1/2π) ∫_{−∞}^{+∞} ‖u(jω)‖₂² dω

‖y‖₂² ≤ [ sup_ω σmax(G(jω)) ]² ‖u‖₂²

• Note that ‖u(jω)‖₂ in the integrand refers to the 2-norm of the vector u(jω), whereas in the last expression ‖u‖₂ refers to the 2-norm of the signal.
• As in the SISO case, the RHS of the above equation can be approached arbitrarily closely by a proper choice of u(jω).
• Essentially, we pick u(jω) to be the eigenvector of G*(jω) G(jω) corresponding to the largest eigenvalue, and we concentrate the spectrum of u(jω) at the frequency where σmax is largest (or at some frequency that is arbitrarily large, if σmax has no maximum but a supremum).
• Therefore:

sup_{‖u‖₂=1} ‖y‖₂ = sup_ω σmax(G(jω))

• and we define:

‖G‖_∞ = sup_ω σmax(G(jω))

• To derive an algorithm for the calculation of the H_∞ norm, we need the following result.

Lemma 1: Let

M = [ A        B Bᵀ ]
    [ −Cᵀ C    −Aᵀ  ]

where (A, B, C) is a realization of a strictly proper G(s). Then:

[ I − Gᵀ(−s) G(s) ]⁻¹ = I + [ 0  Bᵀ ] (sI − M)⁻¹ [ B ]
                                                  [ 0 ]
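Lemma 1 can be verified numerically at one test point, here with the realization from the H₂ example (A, B as in Q3, C = I); the point s₀ is arbitrary:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, -1.0], [0.0, 1.0]])
C = np.eye(2)
n = A.shape[0]

M = np.block([[A,         B @ B.T],
              [-C.T @ C, -A.T   ]])

G = lambda s: C @ np.linalg.inv(s*np.eye(n) - A) @ B
s0 = 0.7 + 1.3j

lhs = np.linalg.inv(np.eye(2) - G(-s0).T @ G(s0))

zero_Bt = np.hstack([np.zeros((2, n)), B.T])    # the row block [0  B^T]
B_zero = np.vstack([B, np.zeros((n, 2))])       # the column block [B; 0]
rhs = np.eye(2) + zero_Bt @ np.linalg.inv(s0*np.eye(2*n) - M) @ B_zero
```

Both sides agree to machine precision at the test point.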
• Proof: Consider the block diagram with y as the output signal: the input u is added (positive feedback) to y₂; the sum y drives G(s), whose output is y₁; y₁ drives Gᵀ(−s), whose output is y₂.
• The sensitivity TFM (with positive feedback) is:

y = [ I − Gᵀ(−s) G(s) ]⁻¹ u        (1)

• Now, a realization of G(s) is:

(d/dt) x₁ = A x₁ + B y
y₁ = C x₁

• and the transfer function is:

G(s) = C (sI − A)⁻¹ B

• The transfer function Gᵀ(−s) is:

Gᵀ(−s) = [ C (−sI − A)⁻¹ B ]ᵀ = −[ C (sI + A)⁻¹ B ]ᵀ = Bᵀ [ sI − (−Aᵀ) ]⁻¹ (−Cᵀ)

• which has the state-space realization:

(d/dt) x₂ = −Aᵀ x₂ − Cᵀ y₁
y₂ = Bᵀ x₂

• The state-space realization of the overall system in the block diagram is:

(d/dt) x₁ = A x₁ + B (u + Bᵀ x₂)
(d/dt) x₂ = −Aᵀ x₂ − Cᵀ (C x₁)
y = u + Bᵀ x₂
• In matrix form:

(d/dt) [ x₁ ] = [ A        B Bᵀ ] [ x₁ ] + [ B ] u
       [ x₂ ]   [ −Cᵀ C    −Aᵀ  ] [ x₂ ]   [ 0 ]

y = u + [ 0  Bᵀ ] [ x₁ ]
                  [ x₂ ]

• The input-output relationship is:

y = { I + [ 0  Bᵀ ] (sI − M)⁻¹ [ B ] } u        (2)
                               [ 0 ]

• Comparing equations (1) and (2) proves the result:

[ I − Gᵀ(−s) G(s) ]⁻¹ = I + [ 0  Bᵀ ] (sI − M)⁻¹ [ B ]
                                                  [ 0 ]
Lemma 2: Let the system (A, B, C) be stabilizable and detectable. Then the realization

(d/dt) [ x₁ ] = [ A        B Bᵀ ] [ x₁ ] + [ B ] u
       [ x₂ ]   [ −Cᵀ C    −Aᵀ  ] [ x₂ ]   [ 0 ]

y = u + [ 0  Bᵀ ] [ x₁ ]
                  [ x₂ ]

cannot have unobservable or uncontrollable modes on the jω-axis of the s-plane.

Proof:
• Let jω₀ be an unobservable eigenvalue of M, with v the corresponding eigenvector.
• Let v be partitioned into two n-vectors, v₁ and v₂.
• From the output equation, since this mode is unobservable, we get:

[ 0  Bᵀ ] [ v₁ ] = Bᵀ v₂ = 0
          [ v₂ ]

• Since v is an eigenvector:

M v = [ A        B Bᵀ ] [ v₁ ] = jω₀ [ v₁ ]
      [ −Cᵀ C    −Aᵀ  ] [ v₂ ]       [ v₂ ]

• Combining the above two equations, we get:

A v₁ = jω₀ v₁
−Cᵀ C v₁ − Aᵀ v₂ = jω₀ v₂
• From the equation A v₁ = jω₀ v₁, we get (A − jω₀ I) v₁ = 0.
• We have two possibilities:
(i) v₁ = 0;
(ii) A has an eigenvalue jω₀, with v₁ as the corresponding eigenvector.

First possibility:
• If v₁ = 0, then from the equation −Cᵀ C v₁ − Aᵀ v₂ = jω₀ v₂ we have:

−Aᵀ v₂ − jω₀ v₂ = 0
−(Aᵀ + jω₀ I) v₂ = 0

• which shows that Aᵀ has an eigenvalue −jω₀.
• Together with the equation Bᵀ v₂ = 0, this shows that the mode is uncontrollable.

• That is contrary to our assumption of stabilizability, so v₁ ≠ 0.

Second possibility:
• If v₁ ≠ 0, then A has an eigenvalue jω₀, with v₁ as the corresponding eigenvector.
• Pre-multiplying both sides of the equation −Cᵀ C v₁ − Aᵀ v₂ = jω₀ v₂ by v₁*ᵀ, we get:

−v₁*ᵀ Cᵀ C v₁ − v₁*ᵀ Aᵀ v₂ = jω₀ v₁*ᵀ v₂

• On rearranging:

−v₁*ᵀ Cᵀ C v₁ = v₁*ᵀ Aᵀ v₂ + jω₀ v₁*ᵀ v₂
−v₁*ᵀ Cᵀ C v₁ = v₁*ᵀ (Aᵀ + jω₀ I) v₂

• Taking the transpose on both sides (a scalar is its own transpose), we get:

−v₁ᵀ Cᵀ C v₁* = v₂ᵀ (A + jω₀ I) v₁*

• If jω₀ is an eigenvalue of A, so is −jω₀, and its associated eigenvector is v₁*.
• Hence A v₁* = −jω₀ v₁*, and the above equation can be written as:

−v₁ᵀ Cᵀ C v₁* = v₂ᵀ (A v₁* + jω₀ v₁*)

• Using the relationship A v₁* = −jω₀ v₁*, this becomes:

−v₁ᵀ Cᵀ C v₁* = v₂ᵀ (−jω₀ v₁* + jω₀ v₁*) = 0
v₁ᵀ Cᵀ C v₁* = 0
‖C v₁‖₂² = 0
C v₁ = 0

• which shows that the mode jω₀ is not observable, contrary to our detectability assumption.
• The assumed existence of an unobservable jω-axis mode for the realization leads to a contradiction in either case, and hence must be rejected.
• Theorem 1: Let G(s) be strictly proper with a stabilizable and detectable realization (A, B, C). Then:

‖G‖_∞ < 1

if and only if M has no eigenvalues on the imaginary axis.

Proof:
• By Lemma 1, the poles of [ I − Gᵀ(−s) G(s) ]⁻¹ must be eigenvalues of M.
• Conversely, by Lemma 2, all jω-axis eigenvalues of M will appear as poles of [ I − Gᵀ(−s) G(s) ]⁻¹.
• "If": Suppose ‖G‖_∞ < 1. For all ω and any complex vector v ≠ 0, we get:

v*ᵀ [ I − Gᵀ(−jω) G(jω) ] v = ‖v‖₂² − v*ᵀ Gᵀ(−jω) G(jω) v
v*ᵀ [ I − Gᵀ(−jω) G(jω) ] v ≥ ‖v‖₂² − σmax²(G(jω)) ‖v‖₂²
v*ᵀ [ I − Gᵀ(−jω) G(jω) ] v ≥ (1 − σmax²(G(jω))) ‖v‖₂²
v*ᵀ [ I − Gᵀ(−jω) G(jω) ] v ≥ (1 − ‖G‖_∞²) ‖v‖₂²
v*ᵀ [ I − Gᵀ(−jω) G(jω) ] v > 0

• This shows that I − Gᵀ(−jω) G(jω) is a positive-definite Hermitian matrix for all ω.
• Hence, its determinant is strictly positive.
• So [ I − Gᵀ(−s) G(s) ]⁻¹ has no poles at s = jω, for any ω.
• "Only if": Suppose ‖G‖_∞ ≥ 1.
• Since G is strictly proper, G(jω) → 0 as ω → ∞.
• Because of continuity, σmax(G(jω₀)) = 1 for some ω = ω₀.
• We may choose a v₀, with ‖v₀‖₂ = 1, such that v₀*ᵀ Gᵀ(−jω₀) G(jω₀) v₀ = 1.
• Following the steps of the "if" part, it can be shown that:

det[ I − Gᵀ(−jω₀) G(jω₀) ] = 0

• In that case, [ I − Gᵀ(−s) G(s) ]⁻¹ does have a pole at s = jω₀, and that pole appears as an eigenvalue of M.


• We want to test the statement: ‖G‖_∞ < γ.
• This is achieved by testing ‖γ⁻¹ G‖_∞ < 1.
• Note that:

‖γ⁻¹ G‖_∞ = γ⁻¹ ‖G‖_∞

• so that:

‖γ⁻¹ G‖_∞ < 1  ⟺  ‖G‖_∞ < γ

• The realization for γ⁻¹ G is constructed simply by replacing B with γ⁻¹ B in the matrix M.

The calculation of ‖G‖_∞ is an iterative process: a search for the value of γ at which the matrix

M(γ) = [ A        γ⁻² B Bᵀ ]
       [ −Cᵀ C    −Aᵀ      ]

has one or more eigenvalues on the imaginary axis.
• The procedure can begin with an upper bound for γ, and the search can commence there.
• It helps to realize that M is a Hamiltonian matrix: its eigenvalues are symmetrically located with respect to the jω-axis.
• Example: A minimal realization for a system is given as:

ẋ = [ 0    1  ] x + [ 1   −1 ] u
    [ −2   −3 ]     [ 0    1 ]

with y = x (C = I, as in Q3). Calculate the H_∞ norm of the system.

• Solution: One forms the matrix:

M(β) = [ A        β B Bᵀ ] = [ 0    1    2β   −β ]
       [ −Cᵀ C    −Aᵀ    ]   [ −2   −3   −β   β  ]
                             [ −1   0    0    2  ]
                             [ 0    −1   −1   3  ]

• where β = γ⁻² has been used.
• The characteristic polynomial of M(β) is:

det(sI − M(β)) = s⁴ + s²(3β − 5) + (β² − 21β + 4)

• Only even powers of s are present.
• This leads to roots that are symmetrically located with respect to both the real and imaginary axes.
• For β = 0 (γ = ∞), the characteristic polynomial is:

s⁴ − 5s² + 4

• with roots s² = 4, 1, and thus s = ±2, ±1.
• Indeed, with β = 0 (γ = ∞), M is block triangular and its eigenvalues are those of A and −Aᵀ, that is, of A and −A: −1, −2, +1, +2.
• Now M has imaginary-axis eigenvalues when s*² ≤ 0, where s* is a root of the characteristic polynomial.
• For β = 0 (γ = ∞), s*² takes the values 1 and 4.
• As β increases, s*² crosses zero and becomes negative.
• We have s*² = 0 for:

β² − 21β + 4 = 0

• or:

β = (21 ± √(21² − 16))/2 = (21 ± √(441 − 16))/2 = (21 ± √425)/2

• Taking the smaller of the two values, which corresponds to the largest value of γ that results in imaginary-axis eigenvalues:

β = 0.1922 and ‖G‖_∞ = γ = (1/β)^(1/2) = 2.281
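The example's result can be checked numerically: at the smaller root β of β² − 21β + 4 = 0, M(β) should acquire an imaginary-axis eigenvalue, and γ = β^(−1/2) ≈ 2.281 (a Python sketch, with C = I as in the example):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0, -1.0], [0.0, 1.0]])
C = np.eye(2)

beta = (21 - np.sqrt(21**2 - 16))/2     # smaller root of beta^2 - 21 beta + 4 = 0
gamma = beta**-0.5                       # estimate of the H-infinity norm

M_beta = np.block([[A,         beta*(B @ B.T)],
                   [-C.T @ C, -A.T          ]])
# At this beta, M(beta) has an eigenvalue (numerically) on the imaginary axis
min_abs_real = np.abs(np.linalg.eigvals(M_beta).real).min()
```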
ROBUST STABILITY
• The figure shows a MIMO feedback loop consisting of a linear m × r TFM G(s) and a memoryless r × m operator Δ, which may be nonlinear: the output z of G(s) drives Δ, and the output v of Δ is fed back with a negative sign to the input of G(s).
• The ∞-norm of Δ is defined as:

‖Δ‖_∞ = sup_{x≠0} ‖Δ(x)‖₂ / ‖x‖₂

• If Δ is linear, i.e., a matrix, this definition comes down to the one already given.
• The small gain theorem asserts that sufficient conditions for stability of this loop are:
(1) G(s) is stable;
(2) G(s) is strictly proper;
(3) ‖G‖_∞ ‖Δ‖_∞ < 1.
• Let us consider the case in which Δ is a matrix, with σmax(Δ) ≤ 1.
• From the small gain theorem, the condition ‖G‖_∞ < 1 is clearly sufficient to guarantee stability.
• We now establish that this condition is also necessary, if stability is to be maintained for all Δ of unit norm or less.
• Proof: Suppose ‖G‖_∞ ≥ 1.
• Then, for some ω = ω₀, σmax(G(jω₀)) = 1.
• And there exists a complex vector v₁ such that:
(i) ‖v₁‖₂ = 1, and
(ii) v₂ = G(jω₀) v₁ with ‖v₂‖₂ = 1.
• Choosing:

Δ = −v₁ (v₂*)ᵀ

it can be shown that σmax(Δ) = 1, so ‖Δ‖ = 1.
• Since G is stable, the number of encirclements of the origin by the locus:

det[ I + G(jω) Δ ]

must be zero for all admissible Δ.
• For Δ = 0, det[I] = 1 for all frequencies, and there are no encirclements.
• The number of encirclements changes if, for some admissible Δ, the locus moves across the origin at some ω.
• That happens if, for some ω = ω₀ and some Δ with σmax(Δ) ≤ 1:

det[ I + G(jω₀) Δ ] = 0

• This is always the case if ‖G‖_∞ ≥ 1.
• Now, form:

[ I + G(jω₀) Δ ] v₂ = v₂ − G(jω₀) v₁ (v₂*)ᵀ v₂
                    = v₂ − G(jω₀) v₁
                    = v₂ − v₂ = 0

• This shows that [ I + G(jω₀) Δ ] is singular, and that its determinant is therefore zero.
• Thus, if ‖G‖_∞ ≥ 1, there always exists a Δ, ‖Δ‖ ≤ 1, that will yield an unstable closed loop.
• If ‖G‖_∞ < 1, the loop is always stable, provided G(s) is stable and ‖Δ‖ ≤ 1.
• This result can be used to obtain sufficient conditions for robust stability in several situations.
• We shall represent the uncertainty as W(s) Δ, where Δ is a constant matrix such that ‖Δ‖ ≤ 1, and W(s) is used to incorporate the frequency information concerning the uncertainty.
• We consider the following cases:
1. Multiplicative uncertainty referred to the input
2. Multiplicative uncertainty referred to the output
3. Additive uncertainty
[Figure: illustration of a weighted input multiplicative uncertainty — the perturbation W(s)Δ branches off the plant input, and its output is added back at the input of P(s).]

[Figure: illustration of a weighted output multiplicative uncertainty — the perturbation W(s)Δ branches off the plant output, and its output is added back at the output of P(s).]

[Figure: illustration of an additive uncertainty — the perturbation Δ acts in parallel with P(s), its output added to the plant output.]
• In each case, one can reduce the diagram to the standard feedback system shown below.

[Figure: block diagram of the standard feedback system — Δ and G(s) connected in a loop, with v the output of Δ and z the signal entering Δ.]

• This requires calculation of the transfer function G(s) to which Δ is connected, i.e., the transfer function from v to z with Δ taken out.
• Thus the block diagram for the weighted input multiplicative uncertainty (W(s)Δ injecting at the input of the loop F(s), P(s)) can be reduced to the standard feedback representation with:

G(s) = −W [ I + FP ]⁻¹ FP = −W T
• The block diagram for the weighted output multiplicative uncertainty can be reduced to the standard feedback representation with:

G(s) = −W [ I + PF ]⁻¹ PF
• The block diagram for the additive uncertainty can be reduced to the standard feedback representation with:

G(s) = −W [ I + FP ]⁻¹ F
• Simultaneous perturbations are handled by using the fact that several inputs (outputs) can be gathered into one multivariable input (output).
• For example, consider the block diagram featuring a multiplicative input uncertainty W₁(s)Δ₁ (with signals z₁, v₁) and an additive uncertainty W₂(s)Δ₂ (with signals z₂, v₂) around the loop F(s), P(s).
• It can be shown that:

z₁ = −W₁ (I + FP)⁻¹ (FP v₁ + F v₂)
z₂ = W₂ (I + FP)⁻¹ (v₁ − F v₂)

• Therefore, one may view the system in the standard feedback representation, where:

G = [ −W₁ (I + FP)⁻¹ FP    −W₁ (I + FP)⁻¹ F ]
    [ W₂ (I + FP)⁻¹        −W₂ (I + FP)⁻¹ F ]

• The uncertainty block for this problem is:

Δ_T = [ Δ₁   0  ]
      [ 0    Δ₂ ]

• whose norm we need to bound.
• In order to bound the norm, we assume v₁ and v₂ to be of unit Euclidean length, and of dimensions equal to the numbers of columns of Δ₁ and Δ₂, respectively.
• Given α, 0 ≤ α ≤ 1, the vector

v_T = [ √α v₁       ]
      [ √(1−α) v₂   ]

is of unit length. We form:

(v_T*)ᵀ Δ_T*ᵀ Δ_T v_T = α (v₁*)ᵀ Δ₁*ᵀ Δ₁ v₁ + (1 − α) (v₂*)ᵀ Δ₂*ᵀ Δ₂ v₂

• Given ‖Δ₁‖_∞ = ‖Δ₂‖_∞ = 1, the RHS is maximized, for any α between 0 and 1, by choosing v₁ and v₂ such that:

(v₁*)ᵀ Δ₁*ᵀ Δ₁ v₁ = (v₂*)ᵀ Δ₂*ᵀ Δ₂ v₂ = 1

• in which case the RHS of the above equation is 1. Thus:

‖Δ_T‖_∞ ≤ 1  if  ‖Δ₁‖_∞, ‖Δ₂‖_∞ ≤ 1
Definition of the design problem
• The basic design configuration is a generalized plant with inputs w and u and outputs z and y, where all four signals are vectors.
• The vector w groups exogenous signals such as disturbances, set points, or test inputs.
• The vector z represents performance variables that must, in some sense, be kept small.
• More precisely, the design objective is to keep the norm of the transmission T_wz(s) small.
• The vector u contains the control inputs.
• The vector y contains the measurements used for feedback purposes.
• It is easy to identify the vectors u and y, since they are given at the outset.
• The problem definition consists in the identification of the inputs w and the outputs z.
• Because specifications for both performance and robustness are given in terms of the weighted norms of certain transmissions, we must locate input-output pairs that have the required transmissions, and form w and z from the unions of all required inputs and outputs, respectively.
• The solution algorithm will minimize or bound ‖T_wz(s)‖_k with k = 2 or ∞.
• As an example:

T_wz(s) = [ W₁(s) S(s) ]
          [ W₂(s) U(s) ]
          [ W₃(s) T(s) ]
• The quantities of interest are ‖W₁(s)S(s)‖, ‖W₂(s)U(s)‖, and ‖W₃(s)T(s)‖.
• We must therefore know how these quantities are related to ‖T_wz(s)‖.
• To proceed, we need the following result, in the form of a theorem.
• Theorem: Let T(s) be decomposed into an n × m array of sub-matrices T_ij(s). Then:

max_ij ‖T_ij‖₂ ≤ ‖T‖₂ ≤ √(mn) max_ij ‖T_ij‖₂

• The same result holds for the infinity norm.
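For a constant matrix (where the 2-norm reduces to the Frobenius norm defined earlier), the theorem's bounds can be checked directly (a sketch with an arbitrary 4 × 6 matrix partitioned into a 2 × 3 array of 2 × 2 blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 6))          # partition as a 2 x 3 array of 2 x 2 blocks
n_blk, m_blk = 2, 3

sub_norms = [np.linalg.norm(T[2*i:2*i + 2, 2*j:2*j + 2])
             for i in range(n_blk) for j in range(m_blk)]
T_norm = np.linalg.norm(T)               # Frobenius norm of the whole matrix

lower_ok = max(sub_norms) <= T_norm + 1e-12
upper_ok = T_norm <= np.sqrt(n_blk*m_blk)*max(sub_norms) + 1e-12
```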
• Proof: For the 2-norm, it is easy to show that:

tr(T*T) = Σ_ij tr(T_ij* T_ij)

• Since all the terms in the sum are non-negative:

tr(T*T) ≥ max_ij tr(T_ij* T_ij)

• Also:

tr(T*T) ≤ mn max_ij tr(T_ij* T_ij)

• So that:

max_ij tr(T_ij* T_ij) ≤ tr(T*T) ≤ mn max_ij tr(T_ij* T_ij)

• If T is a function of jω, the above inequality also holds for the Parseval integral over frequency, since it holds pointwise.
• Taking square roots everywhere yields the desired result.


• This theorem shows that the norm of each sub-matrix transfer function is bounded above by ‖T_wz(s)‖, so that making ‖T_wz(s)‖ small forces all the components to be at least as small.
• The lower bound applies only to the sub-matrix of maximum norm.
• It is therefore possible for some sub-matrices to be appreciably smaller in norm than:

(1/√(mn)) ‖T_wz(s)‖

• Design can proceed by trial and error.
• For example, if it is required that ‖W₂U(s)‖_∞ ≤ 1 and the algorithm actually returns the value ‖W₂U(s)‖_∞ = 0.2, another try can be made with a scaled-down W₂(s).
Augmented state-space model
• Solution algorithms are based on the state-space representation:

ẋ = A x + B₁ w + B₂ u
z = C₁ x + D₁₁ w + D₁₂ u
y = C₂ x + D₂₁ w + D₂₂ u

• This representation combines the system model and the models of the various weighting functions.
• The formation of the augmented state-space model can be explained by the following example.
• Example: Obtain the augmented state-space representation for the system shown in the figure: the error e = yd − y is weighted by W₁(s) to produce z₁; the control u is weighted by W₂(s) to produce z₂ and drives the plant P(s); the plant output y is weighted by W₃(s) to produce z₃.
• The measured output is taken to be e.
• Given:

P(s) = 1/(s² + 0.2s + 1);    W₁(s) = (s + 1)/(s(s + 10));
W₂(s) = 1;    W₃(s) = (2 × 10⁻³)(s + 10)²
• No state-space model can be written for W3(s) alone, since W3(s) is an
  improper transfer function.

• One can instead look upon the combination of P(s) and W3(s) as one system.

• With input u and output z3, we have:

        Z3(s)/U(s) = 2×10⁻³ (s + 10)² / (s² + 0.2s + 1)
                   = 2×10⁻³ [1 + (19.8s + 99) / (s² + 0.2s + 1)]

• The state-space representation in controllable canonical form is:

        [ẋ1]   [ 0     1  ] [x1]   [0]
        [ẋ2] = [−1   −0.2 ] [x2] + [1] u

        z3 = 2×10⁻³ (99x1 + 19.8x2) + 2×10⁻³ u
        y  = x1
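The proper-plus-constant split of Z3(s)/U(s) used above amounts to one step of polynomial division of (s + 10)² by s² + 0.2s + 1. A quick check (a sketch in NumPy; `np.polydiv` returns quotient and remainder coefficient arrays, highest power first):

```python
import numpy as np

# (s + 10)^2 = s^2 + 20s + 100 divided by s^2 + 0.2s + 1:
# quotient 1 and remainder 19.8s + 99 recover the split used for z3.
q, r = np.polydiv([1.0, 20.0, 100.0], [1.0, 0.2, 1.0])
assert np.allclose(q, [1.0])
assert np.allclose(r, [19.8, 99.0])
```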
• The measured output, the error signal, is:

        e = yd − x1

• Now taking input e and output z1, we get:

        W1(s) = Z1(s)/E(s) = (s + 1) / (s² + 10.01s + 0.1)

• The state-space representation in controllable canonical form is:

        [ẋ3]   [  0       1   ] [x3]   [0]
        [ẋ4] = [−0.1   −10.01 ] [x4] + [1] (yd − x1)

        z1 = [1  1] [x3; x4] = x3 + x4
• Since Z2(s) = W2(s)U(s) and W2(s) = 1, we have z2 = u.

• Putting these expressions together, with w = yd, we get:

             [ 0     1      0      0   ]     [0]      [0]
        ẋ  = [−1   −0.2     0      0   ] x + [0] yd + [1] u
             [ 0     0      0      1   ]     [0]      [0]
             [−1     0    −0.1  −10.01 ]     [1]      [0]

        ym = [−1  0  0  0] x + yd

             [  0       0      1  1]     [   0    ]
        z  = [  0       0      0  0] x + [   1    ] u
             [0.198  0.0396    0  0]     [2 × 10⁻³]
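As a sanity check on the assembled matrices, the sketch below (Python with NumPy; the test frequency s0 is an arbitrary choice) evaluates the z3 channel of the augmented model from u and compares it with W3(s)P(s) = 2×10⁻³(s + 10)²/(s² + 0.2s + 1):

```python
import numpy as np

# Augmented A matrix and the columns/rows relevant to the u -> z3 path.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-1.0, -0.2, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-1.0, 0.0, -0.1, -10.01]])
B2 = np.array([[0.0], [1.0], [0.0], [0.0]])
Cz3 = np.array([[0.198, 0.0396, 0.0, 0.0]])
D = 2e-3

s0 = 2.0j  # arbitrary test frequency on the imaginary axis
G = (Cz3 @ np.linalg.solve(s0 * np.eye(4) - A, B2) + D).item()
expected = 2e-3 * (s0 + 10) ** 2 / (s0 ** 2 + 0.2 * s0 + 1)
assert np.isclose(G, expected)
```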
H2 Solution
• For a system with a realization given by the state-space equations:

        ẋ = Ax + B1w + B2u
        z = C1x + D11w + D12u
        y = C2x + D21w + D22u

• The following assumptions are required:

  1. (A, B1) and (A, B2) are stabilizable.
  2. (A, C1) and (A, C2) are detectable.
  3. D12ᵀD12 and D21D21ᵀ are non-singular.
  4. D11 = 0.
• We write Twz(s) by columns, as:

        Twz(s) = [(Twz(s))1  (Twz(s))2  …  (Twz(s))m]

• To calculate the H2 norm, we need to form:

        tr(Twzᵀ(−s) Twz(s)) = Σ_{i=1..m} tr((Twz(−s))iᵀ (Twz(s))i)

• The square of the H2 norm is obtained by integrating the RHS of the above
  equation over the jω-axis.

• We may use Parseval's theorem to translate this into a time-domain integral:

        ||Twz(s)||₂² = Σ_{i=1..m} ∫₀^∞ ||(Twz)i(t)||₂² dt

• where (Twz)i(t) is the inverse Laplace transform of (Twz)i(s).
• (Twz)i(t) is the response z(t) to a unit impulse in wi(t).

• Now, exciting the system with an impulse in wi is the same as giving it the
  initial state bi, the i-th column of the B1 matrix.

• Let z(x0, t) be the response to an initial state x0. We may then write the
  equation above as:

        ||Twz(s)||₂² = Σ_{i=1..m} ∫₀^∞ ||z(bi, t)||₂² dt

• From the state-space equations, we can rewrite this as:

        ||Twz(s)||₂² = Σ_{i=1..m} ∫₀^∞ ||C1 x(bi, t) + D12 u(bi, t)||₂² dt

• where the notation for x and u parallels that for z.
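The column-by-column impulse-response formula can be cross-checked against the standard Gramian characterization of the H2 norm, ||Twz||₂² = tr(B1ᵀQB1) with AᵀQ + QA + C1ᵀC1 = 0. The sketch below (Python with SciPy; the 2-state system, with D11 = D12 = 0 so that z = C1x, is purely illustrative) computes both sides:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm
from scipy.integrate import trapezoid

# Small stable test system; z = C1 x (no direct feedthrough).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1 = np.eye(2)
C1 = np.array([[1.0, 0.5]])

# Gramian route: solve A'Q + QA = -C1'C1, then tr(B1' Q B1).
Q = solve_continuous_lyapunov(A.T, -C1.T @ C1)
h2_sq_gramian = np.trace(B1.T @ Q @ B1)

# Time-domain route: sum over columns b_i of B1 of the impulse-response
# energy  int_0^inf ||C1 exp(At) b_i||^2 dt  (trapezoid on a long grid).
t = np.linspace(0.0, 20.0, 4001)
h2_sq_time = 0.0
for i in range(B1.shape[1]):
    z = np.array([(C1 @ expm(A * tk) @ B1[:, i]).item() for tk in t])
    h2_sq_time += trapezoid(z ** 2, t)

assert np.isclose(h2_sq_gramian, h2_sq_time, rtol=1e-3)
```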
• The problem then is to minimize the RHS of

        ||Twz(s)||₂² = Σ_{i=1..m} ∫₀^∞ ||C1 x(bi, t) + D12 u(bi, t)||₂² dt

• subject to:

        ẋ = Ax + B2u
        y = C2x + D21w + D22u

• This leads to a problem formulation in an optimization setting.

• Let us examine the case y = x, that is, the case of full state feedback.
• Expanding the term corresponding to i = 1 in the RHS of the above equation,
  we get:

        J1 = ∫₀^∞ [ xᵀ(b1,t) C1ᵀC1 x(b1,t) + xᵀ(b1,t) C1ᵀD12 u(b1,t)
                    + uᵀ(b1,t) D12ᵀC1 x(b1,t) + uᵀ(b1,t) D12ᵀD12 u(b1,t) ] dt

• This expression for J1, together with the dynamics

        ẋ = Ax + B2u

  forms a Linear Quadratic (LQ) problem, even with the inclusion of the cross
  terms (mixed x and u) in J1.

• The solution of that problem is of the form u = −Kx and is independent of the
  initial state.

• The same control law therefore minimizes every term of the sum in the
  expression for ||Twz(s)||₂², and hence minimizes the sum itself.
• To remove the cross terms, define a new input v as:

        v = u + (D12ᵀD12)⁻¹ D12ᵀ C1 x

• It can be shown that:

        ẋ  = [A − B2 (D12ᵀD12)⁻¹ D12ᵀ C1] x + B2 v

        J1 = ∫₀^∞ [ xᵀ C1ᵀ (I − D12 (D12ᵀD12)⁻¹ D12ᵀ) C1 x + vᵀ D12ᵀD12 v ] dt

• These two equations represent a standard LQ problem, with solution:

        v = −K1 x

• Then

        u = −[K1 + (D12ᵀD12)⁻¹ D12ᵀ C1] x = −Kx

• If the state is not directly available, it is necessary to use a state
  estimator.
• Parseval's theorem states that the total energy of a signal computed in the
  time domain equals the total energy computed in the frequency domain;
  that is:

        ∫_{−∞}^{+∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{+∞} |X(jω)|² dω

• Equivalently, in terms of the 2-norm of x(t):

        ||x||₂² = (1/2π) ∫_{−∞}^{+∞} |X(jω)|² dω
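A quick numeric illustration (Python with SciPy; the signal x(t) = e⁻ᵗ for t ≥ 0 is an illustrative choice, with transform X(jω) = 1/(1 + jω)): both sides of Parseval's identity evaluate to 1/2.

```python
import numpy as np
from scipy.integrate import quad

# Time-domain energy: integral of |e^{-t}|^2 = e^{-2t} over t >= 0.
time_energy, _ = quad(lambda t: np.exp(-2 * t), 0, np.inf)

# Frequency-domain energy: (1/2pi) * integral of 1/(1 + w^2) over all w.
freq_energy, _ = quad(lambda w: 1.0 / (1 + w ** 2), -np.inf, np.inf)
freq_energy /= 2 * np.pi

assert np.isclose(time_energy, 0.5)
assert np.isclose(freq_energy, 0.5)
```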
Multi-loop and
Multivariable Control
PROCESS INTERACTION
RELATIVE GAIN ANALYSIS
DECOUPLING
DESIGN OF DECOUPLER
• Control problems that have only one controlled variable and one manipulated
  variable are referred to as single-input, single-output (SISO) or
  single-loop control problems.
• In many practical control problems, however, a number of variables must be
  controlled and a number of variables can be manipulated.
• These problems are referred to as multiple-input, multiple-output
(MIMO) control problems.
• For almost all important processes, at least two variables must be
controlled: product quality and throughput.
• A characteristic feature of MIMO control problems is the presence of process
  interactions; that is, each manipulated variable can affect several of the
  controlled variables.
• When significant process interactions are present, the selection of the most
  effective control configuration may not be obvious.
PROCESS INTERACTIONS AND CONTROL LOOP INTERACTIONS
• A schematic representation of several SISO and MIMO control applications is
shown in Fig. 1.
• For convenience, it is assumed that the number of manipulated variables is equal
to the number of controlled variables.
• This allows pairing of a single controlled variable and a single manipulated
variable via a feedback controller.
• On the other hand, more general multivariable control strategies do not
make such restrictions
• MIMO control problems are inherently more complex than SISO control
problems because process interactions occur between controlled and
manipulated variables.
• In general, a change in a manipulated variable, say u1, will affect all of the
  controlled variables y1, y2, ..., yn.
• Because of the process interactions, the selection of the best pairing of controlled
and manipulated variables for a multiloop control scheme can be a difficult task.
• In particular, for a control problem with n controlled variables and n manipulated
variables, there are n! possible multiloop control configurations.
• Figure 1: SISO and MIMO control problems.
Block Diagram Analysis
• Consider the 2 X 2 control problem shown in Fig.1.b.
• Because there are two controlled variables and two manipulated variables,
  four process transfer functions are necessary to completely characterize the
  process dynamics:

        Gp11(s) = Y1(s)/U1(s),   Gp12(s) = Y1(s)/U2(s)
        Gp21(s) = Y2(s)/U1(s),   Gp22(s) = Y2(s)/U2(s)        (18-1)

• The transfer functions in Eq. 18-1 can be used to determine the effect of a
  change in either U1 or U2 on Y1 and Y2.
• From the principle of superposition, it follows that simultaneous changes in
  U1 and U2 have an additive effect on each controlled variable:

        Y1(s) = Gp11(s)U1(s) + Gp12(s)U2(s)
        Y2(s) = Gp21(s)U1(s) + Gp22(s)U2(s)

• These input-output relations can also be expressed in vector-matrix notation
  as:

        Y(s) = Gp(s)U(s)

• where Y(s) = [Y1(s); Y2(s)] and U(s) = [U1(s); U2(s)] are vectors with two
  elements, and Gp(s) is the process transfer function matrix:

        Gp(s) = [Gp11(s)  Gp12(s)]
                [Gp21(s)  Gp22(s)]

• This matrix notation provides a compact representation for problems larger
  than 2 × 2.
• The steady-state process transfer function matrix (s = 0) is called the
process gain matrix and is denoted by K.
• Suppose that a conventional multi-loop control scheme consisting of
two feedback controllers is to be used.
• The two possible control configurations are shown in Fig. 2.
• In scheme (a), Y1 is controlled by adjusting U1, while Y2 is controlled
by adjusting U2. Consequently, this configuration will be referred to as
the 1-1/2-2 control scheme.
• The alternative strategy is to pair Y1 with U2 and Y2 with U1, the
• 1-2/2-1 control scheme shown in Fig. 2b.
• Note that these block diagrams have been simplified by omitting the
transfer functions for the final control elements and the sensor
transmitters.
• Also, the disturbance variables have been omitted.
• Figure 2 Block diagrams for 2 X 2 multi-loop control schemes
• Figure 2 indicates that the process interactions can induce undesirable
interactions between the control loops.
• For example, suppose that the 1-1/2-2 control scheme is used and a
  disturbance moves Y1 away from its set point, Ysp1. Then the following
  events occur:
• 1. The controller for loop 1 (Gc1) adjusts U1 so as to force Y1 back to
the set point Ysp1. However, U1 also affects Y2 via transfer function
Gp21.
2. Since Y2 has changed, the loop 2 controller ( Gc2) adjusts U2 so as
to bring Y2 back to its set point,Y2sp. However, changing U2 also
affects Y1 via transfer function Gp12.
• These controller actions proceed simultaneously until a new steady state is
reached.
• Note that the initial change in U1 has two effects on Y1: a direct effect (1)
and an indirect effect via the control loop interactions (2).
• Although it is instructive to view this dynamic behavior as a sequence of
events, in practice the process variables would change continuously and
simultaneously
• The control loop interactions in a 2 x 2 control problem result
from the presence of a third feedback loop that contains the
two controllers and two of the four process transfer functions
(Shinskey, 1996).
• Thus, for the 1-1/2-2 configuration, this hidden feedback loop
contains Gc1, Gc2, Gp12, and Gp21, as shown in Fig.3.
• A similar hidden feedback loop is also present in the 1-2/2-1
control scheme of Fig.2b.
• The third feedback loop causes two potential problems:
1. It usually destabilizes the closed-loop system.
2. It makes controller tuning more difficult.
• Figure 3: The hidden
feedback control
loop (in dark lines) for
a 1-1/2-2 controller
pairing.
• We now show that the transfer function between a controlled variable and a
  manipulated variable depends on whether the other feedback control loops
  are open or closed.
• Consider the control system in Fig. 2a.
• If the controller for the second loop, Gc2, is out of service or is placed in
  manual mode with its output constant at the nominal value, then U2 = 0.
• For this situation, the transfer function between Y1 and U1 is merely Gp11:

        Y1(s)/U1(s) = Gp11(s)

• If both loops are closed, the contributions to Y1 from the two loops are
  added together:

        Y1 = Gp11U1 + Gp12U2
• However, if the second feedback controller is in the automatic mode with
  Y2sp = 0, then, using block diagram algebra,

        U2 = −Gc2 Y2

• The signal to the first loop from the second loop is

        Gp12U2 = −Gp12 Gc2 Y2

• Substituting for Gp12U2 in the equation for Y1, and then substituting
  Y2 = Gp21U1 + Gp22U2, the overall closed-loop transfer function between Y1
  and U1 becomes:

        Y1/U1 = Gp11 − (Gp12 Gp21 Gc2) / (1 + Gc2 Gp22)

• Thus, the transfer function between Y1 and U1 depends on the controller for
  the second loop, Gc2, via the interaction term.
• Similarly, transfer function Y2/U2 depends on Gc1 when
the first loop is closed.
• These results have important implications for controller
tuning because they indicate that the two controllers
should not be tuned independently.
• For general n X n processes, Balchen and Mumme (1988)
have derived analogous results that illustrate the effect of
closing all but one of the n feedback loops.
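The dependence of the Y1-U1 relation on Gc2 is easiest to see at steady state, where block diagram algebra reduces it to the gain K11 − K12K21Kc2/(1 + Kc2K22) when loop 2 is closed under a proportional gain Kc2. A small sketch (Python; the numeric gains are illustrative):

```python
import numpy as np

# Steady-state gains for an illustrative 2x2 process.
K11, K12, K21, K22 = 1.0, 0.5, 0.8, 2.0

def loop1_gain(Kc2):
    """Gain from u1 to y1 when loop 2 is closed with proportional gain Kc2."""
    return K11 - K12 * K21 * Kc2 / (1 + Kc2 * K22)

# Loop 2 open (Kc2 = 0): the gain is just K11.
assert loop1_gain(0.0) == K11
# Tight control of y2 (large Kc2): the gain approaches K11 - K12*K21/K22.
assert np.isclose(loop1_gain(1e9), K11 - K12 * K21 / K22, rtol=1e-6)
```

The two limits bracket how strongly tuning of the second loop reshapes what the first controller "sees".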
Bristol's Relative Gain Array Method
• Bristol (1966) developed a systematic approach for the analysis of multivariable
process control problems.
• His approach requires only steady-state information (the process gain
matrix K) and provides two important items of information:
1. A measure of process interactions.
2. A recommendation concerning the most effective pairing of controlled
and manipulated variables.
• Bristol's approach is based on the concept of a relative gain.
• Consider a process with ‘n’ controlled variables and ‘n’ manipulated
variables.
• The relative gain λij between a controlled variable yi and a manipulated
  variable uj is defined to be the dimensionless ratio of two steady-state
  gains:

        λij = (∂yi/∂uj)_u / (∂yi/∂uj)_y

  for i = 1, 2, ..., n and j = 1, 2, ..., n.
• In the above equation, the symbol (∂yi/∂uj)_u denotes a partial derivative
  that is evaluated with all of the manipulated variables except uj held
  constant.
• Thus, this term is the open-loop gain (or steady-state gain) between yi and
  uj, which corresponds to the gain matrix element Kij.
• Similarly, (∂yi/∂uj)_y is evaluated with all of the controlled variables
  except yi held constant.
• This situation could be achieved in practice by adjusting the other
  manipulated variables using controllers with integral action.
• Thus, (∂yi/∂uj)_y can be interpreted as a closed-loop gain that indicates
  the effect of uj on yi when all of the other controlled variables (yk, k ≠ i)
  are held constant.
• It is convenient to arrange the relative gains in a relative gain array
  (RGA), denoted by Λ:

            [λ11  λ12  …  λ1n]
        Λ = [λ21  λ22  …  λ2n]
            [ ⋮    ⋮        ⋮ ]
            [λn1  λn2  …  λnn]
• RGA has several important properties for steady-state process models (Bristol, 1966;
McAvoy,1983):
1. It is normalized because the sum of the elements in each row or column is equal to one
2. The relative gains are dimensionless and thus not affected by choice of units or
scaling of variables.
3. The RGA is a measure of sensitivity to element uncertainty in the gain matrix K.
• The gain matrix K becomes singular if a single element Kij is changed to
  Kij(1 − 1/λij).
• Thus a large RGA element indicates that small changes in 𝐾𝑖𝑗 can markedly change the
process control characteristics.
Calculation of the RGA
• The relative gains can easily be calculated from either steady-state data or a
process model.
• For example, consider a 2 X 2 process for which a steady-state model is
available.
• Suppose that the model has been linearized and expressed in terms of
deviation variables as follows:
        y1 = K11u1 + K12u2        (1)
        y2 = K21u1 + K22u2        (2)

• where Kij denotes the steady-state gain between yi and uj.
• This model can be expressed more compactly in matrix notation as:

        y = Ku
• For stable processes, the steady-state (gain) model is related to the
  dynamic model by:

        K = lim_{s→0} Gp(s) = Gp(0)
• Next, we consider how to calculate λ11.
• It follows from Eq. 1 that

        (∂y1/∂u1)_u2 = K11        (3)

• Before calculating (∂y1/∂u1)_y2, we first must eliminate u2.
• This is done by solving Eq. 2 for u2 while holding y2 constant at its nominal
  value, y2 = 0:

        u2 = −(K21/K22) u1

• Then substituting into Eq. 1 gives:

        y1 = (K11 − K12K21/K22) u1

• It follows that

        (∂y1/∂u1)_y2 = K11 − K12K21/K22        (4)

• Substituting Eqs. 3 and 4 into the definition of λij gives an expression for
  the relative gain λ11:

        λ11 = 1 / (1 − K12K21/(K11K22))
• Because each row and each column of the relative gain array Λ sums to one,
  the other relative gains are easily calculated from λ11 for the 2 × 2 case:

        λ12 = λ21 = 1 − λ11,   λ22 = λ11

• Thus, the RGA for a 2 × 2 system can be expressed as:

             [  λ    1−λ ]
        Λ  = [ 1−λ    λ  ]

• where the symbol λ is now used to denote λ11.
• RGA for a 2 X 2 process is always symmetric.
• However, this will not necessarily be the case for a higher-dimension process
(n > 2).
• For higher-dimension processes, the RGA can be calculated from the
  expression:

        Λ = K ⊗ H        (5)

• where ⊗ denotes the Schur product (element-by-element multiplication):

        λij = Kij Hij        (6)

• Kij is the (i, j) element of K, and Hij is the (i, j) element of
  H = (K⁻¹)ᵀ;
• that is, Hij is an element of the transpose of the matrix inverse of K.
• Because computer software such as MATLAB is readily available to perform
  matrix algebra, the above expression can be easily evaluated.
• Note that Eq. 5 does not imply that Λ is the ordinary matrix product of K
  and H; the Schur product is performed element by element.
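The RGA computation via Eqs. 5-6, together with the row/column-sum and singularity properties listed earlier, can be sketched in a few lines of NumPy (the gain matrix below is illustrative; MATLAB would be equally direct):

```python
import numpy as np

# Illustrative 2x2 steady-state gain matrix.
K = np.array([[2.0, 1.5],
              [1.4, 2.0]])

H = np.linalg.inv(K).T   # H = transpose of the matrix inverse of K
RGA = K * H              # Schur (element-by-element) product, Eq. 5

# Agreement with the 2x2 closed-form lambda_11.
lam11 = 1.0 / (1.0 - K[0, 1] * K[1, 0] / (K[0, 0] * K[1, 1]))
assert np.isclose(RGA[0, 0], lam11)

# Each row and column of the RGA sums to one.
assert np.allclose(RGA.sum(axis=0), 1.0)
assert np.allclose(RGA.sum(axis=1), 1.0)

# Sensitivity property: changing one element to Kij*(1 - 1/lambda_ij)
# makes K singular.
Kp = K.copy()
Kp[0, 0] = K[0, 0] * (1.0 - 1.0 / RGA[0, 0])
assert np.isclose(np.linalg.det(Kp), 0.0)
```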
Decoupling Control