
Chapter-2

State-space Modeling

2.1. State-space Concept

2.1.1. Introduction

In conventional control theory, only the input, output, and error
signals are considered important; the analysis and design of control
systems are carried out using transfer functions, together with a
variety of graphical techniques such as root-locus plots. The unique
characteristic of conventional control theory is that it is based on the
input-output relation of the system, or the transfer function.
The main disadvantage of conventional control theory is that,
generally speaking, it is applicable only to linear time-invariant
systems having a single input and single output.
The modern trend in engineering systems is toward greater
complexity, due mainly to the requirements of complex tasks and
good accuracy. Complex systems may have multiple inputs and
multiple outputs and may be time-varying. Because of the necessity
of meeting increasingly stringent requirements on the performance of
the control systems, the increase in system complexity, and easy
access to large-scale computers, modern control theory, which is a
new approach to the analysis and design of complex control systems,
has been developed since around 1960. This new approach is based
on the concept of state.

State. The state of a dynamic system is the smallest set of variables
(called state variables) such that knowledge of these variables at
t = t0, together with the input for t ≥ t0, completely determines the
behavior of the system for any time t ≥ t0.

State variables. The state variables of a dynamic system are the
smallest set of variables which determine the state of the dynamic
system.

State vector. A vector which determines uniquely the system state
x(t) for any t ≥ t0, once the state at t = t0 and the input u(t) for
t ≥ t0 are specified.

State space. The n-dimensional space whose coordinate axes consist
of the x1 axis, x2 axis, …, xn axis is called a state space. Any state
can be represented by a point in the state space.

2.1.2. State Equation and State-space Representation of Systems.

a). nth-order systems of linear differential equations in which the
forcing function does not involve derivative terms.

Consider the following nth-order system:

y^(n) + a1 y^(n-1) + … + a_{n-1} ẏ + an y = u        (2.1)
Let us define
x1 = y
x2 = ẏ
⋮
xn = y^(n-1)
Then Eq. (2.1) can be written as
ẋ1 = x2
ẋ2 = x3
⋮
ẋ_{n-1} = xn
ẋn = −an x1 − … − a1 xn + u
or
ẋ = Ax + Bu        (2.2)
where

x = [x1; x2; … ; xn]

A = [  0      1        0      …   0
       0      0        1      …   0
       ⋮      ⋮        ⋮           ⋮
       0      0        0      …   1
      −an  −a_{n-1} −a_{n-2}  …  −a1 ],    B = [0; 0; … ; 0; 1]
The output equation becomes

y = [1 0 0 … 0][x1; x2; … ; xn]

or
y = Cx        (2.3)
where C = [1 0 0 … 0].
Eq. (2.2) is the State Equation, and Eq. (2.3) is the Output Equation.
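The construction above is entirely mechanical, so it can be sketched in a few lines of NumPy (the helper name companion_form is mine, not from the text):

```python
import numpy as np

def companion_form(a):
    """Build A, B, C of Eqs. (2.2)-(2.3) from the coefficients
    a = [a1, ..., an] of Eq. (2.1).  Helper name is mine."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                   # superdiagonal of ones
    A[-1, :] = -np.array(a[::-1], dtype=float)   # last row: -an ... -a1
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n)); C[0, 0] = 1.0
    return A, B, C

A, B, C = companion_form([6, 11, 6])   # for y''' + 6y'' + 11y' + 6y = u
```

For the third-order coefficients shown, this reproduces the matrices of Example a.2 below (up to the input scaling).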

Examples:
a.1. Convert the differential equation model

to state-space form. (u = input, y = output)

Solution:
The equation is manipulated to the form

We choose the state variables:

Their derivatives are:

From equations (1) and (2), and by substituting equations (1), (2)
and (4) into equation (0), we get

When these equations are written in matrix form and the output
equation (1) is included with them, we get the state-space model:

a.2. Consider the system defined by

y⃛ + 6ÿ + 11ẏ + 6y = 6u

where y is the output and u is the input of the system. Obtain a
state-space representation of the system.

Solution:
Let us choose the state variables as
x1 = y
x2 = ẏ
x3 = ÿ
Then we obtain
ẋ1 = x2
ẋ2 = x3
ẋ3 = −6x1 − 11x2 − 6x3 + 6u
The last of these three equations was obtained by solving the original
differential equation for the highest derivative term y⃛ and then
substituting y = x1, ẏ = x2, ÿ = x3 into the resulting equation. By
use of vector-matrix notation, these three first-order differential
equations can be combined into one as follows:
 x1   0 1 0   x1  0
 x  =  0 0 1   x 2  + 0[u ] (5)
 2 
 x 3  − 6 − 11 − 6  x3  6

The output equation is given by

y = [1 0 0][x1; x2; x3]        (6)
Equations (5) and (6) can be put in a standard form as
ẋ = Ax + Bu
y = Cx        (7)
where
A = [0 1 0; 0 0 1; −6 −11 −6],  B = [0; 0; 6],  C = [1 0 0]

b). nth-order systems of linear differential equations in which the
forcing function involves derivative terms.

If the differential equation of the system involves derivatives of the
forcing function, such as

y^(n) + a1 y^(n-1) + … + a_{n-1} ẏ + an y
    = b0 u^(n) + b1 u^(n-1) + … + b_{n-1} u̇ + bn u        (2.4)
( n −1)
then the set of n variables y, y , 
y, , y do not qualify as a set of
state variables, and the straightforward method previously employed
cannot be used. This is because n first-order differential equations
x1 = x2
x2 = x3
 (2.5)
xn −1 = xn
(n) ( n −1)
xn =−an x1 −  − a1 xn + b0 u + b1 u +  + bnu
where x1 = y may not yield a unique solution.

The main problem in defining the state variables for this case lies in
the derivative terms on the right-hand side of the last of the preceding
n equations. The state variables must be such that they will eliminate
the derivatives of u in the state equation.
It is a well-known fact in modern control theory that if we define the
following n variables as a set of n state variables,

x1 = y − β0 u
x2 = ẏ − β0 u̇ − β1 u = ẋ1 − β1 u
x3 = ÿ − β0 ü − β1 u̇ − β2 u = ẋ2 − β2 u        (2.6)
⋮
xn = y^(n-1) − β0 u^(n-1) − … − β_{n-2} u̇ − β_{n-1} u = ẋ_{n-1} − β_{n-1} u
where β0, β1, …, βn are determined from
β0 = b0
β1 = b1 − a1 β0
β2 = b2 − a1 β1 − a2 β0        (2.7)
β3 = b3 − a1 β2 − a2 β1 − a3 β0
⋮
βn = bn − a1 β_{n-1} − … − a_{n-1} β1 − an β0
then the existence and uniqueness of the solution of the state equation
is guaranteed. With the present choice of state variables, we obtain
the following state and output equations for the system of Eq. (2.4):

[ẋ1; ẋ2; … ; ẋ_{n-1}; ẋn] =
    [0 1 0 … 0; 0 0 1 … 0; … ; 0 0 0 … 1; −an −a_{n-1} −a_{n-2} … −a1][x1; x2; … ; x_{n-1}; xn]
    + [β1; β2; … ; β_{n-1}; βn]u

y = [1 0 … 0][x1; x2; … ; xn] + β0 u

or
ẋ = Ax + Bu
y = Cx + Du        (2.8)
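The recursion of Eq. (2.7) is straightforward to implement; here is a minimal sketch (the function name beta_coefficients is mine):

```python
def beta_coefficients(a, b):
    """Compute [beta0, ..., betan] of Eq. (2.7) from a = [a1, ..., an]
    and b = [b0, b1, ..., bn].  Helper name is mine."""
    n = len(a)
    beta = [b[0]]                          # beta0 = b0
    for k in range(1, n + 1):
        # beta_k = b_k - a1*beta_{k-1} - a2*beta_{k-2} - ... - ak*beta0
        beta.append(b[k] - sum(a[j - 1] * beta[k - j] for j in range(1, k + 1)))
    return beta

# With no input derivatives (b0 = ... = b_{n-1} = 0), the recursion
# reduces to beta = [0, ..., 0, bn], matching case a).
betas = beta_coefficients([6, 11, 6], [0, 0, 0, 6])
```

For the third-order example a.2 this gives β = [0, 0, 0, 6], i.e. D = β0 = 0 and B = [0; 0; 6], consistent with Eq. (7).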

c). Nonuniqueness of the set of state variables.

It has been stated that a set of state variables is not unique for a given
system. Suppose that x1, x2, …, xn are a set of state variables. Then we
may take as another set of state variables any set of functions

x̂1 = X1(x1, x2, …, xn)
x̂2 = X2(x1, x2, …, xn)
⋮
x̂n = Xn(x1, x2, …, xn)

provided that, for every set of values x̂1, x̂2, …, x̂n, there
corresponds a unique set of values x1, x2, …, xn, and vice versa.

Thus, if x is a state vector, then x̂, where

x̂ = Px        (2.9)

is also a state vector, provided the matrix P is nonsingular. Different
state vectors convey the same information about the system behavior.

Example
c.1. Consider the same system as discussed in Example a.2. We shall
show that Eq. (7) is not the only state equation possible for the
system. Suppose we define a set of new state variables z1, z2, z3 by
the transformation

[x1; x2; x3] = [1 1 1; −1 −2 −3; 1 4 9][z1; z2; z3]

or
x = Pz        (2.10)
where
P = [1 1 1; −1 −2 −3; 1 4 9]        (2.11)
Then by substituting Eq. (2.10) into Eq. (7), we obtain
Pż = APz + Bu
By premultiplying both sides of this last equation by P⁻¹, we get
ż = P⁻¹APz + P⁻¹Bu        (2.12)
or

[ż1; ż2; ż3] = [3 2.5 0.5; −3 −4 −1; 1 1.5 0.5][0 1 0; 0 0 1; −6 −11 −6][1 1 1; −1 −2 −3; 1 4 9][z1; z2; z3]
               + [3 2.5 0.5; −3 −4 −1; 1 1.5 0.5][0; 0; 6]u

Simplifying gives

[ż1; ż2; ż3] = [−1 0 0; 0 −2 0; 0 0 −3][z1; z2; z3] + [3; −6; 3]u        (2.13)
Equation (2.13) is also a state equation which describes the same
system as defined by Eq. (7).
The output equation, Eq. (7), is modified to
y = CPz
or
y = [1 0 0][1 1 1; −1 −2 −3; 1 4 9][z1; z2; z3]
  = [1 1 1][z1; z2; z3]        (2.14)
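The matrix arithmetic above is easy to verify numerically; a NumPy check of mine:

```python
import numpy as np

A = np.array([[0., 1., 0.], [0., 0., 1.], [-6., -11., -6.]])
B = np.array([[0.], [0.], [6.]])
C = np.array([[1., 0., 0.]])
P = np.array([[1., 1., 1.], [-1., -2., -3.], [1., 4., 9.]])

Pinv = np.linalg.inv(P)
A_new = Pinv @ A @ P      # should be diag(-1, -2, -3), as in Eq. (2.13)
B_new = Pinv @ B          # should be [3; -6; 3]
C_new = C @ P             # should be [1 1 1], as in Eq. (2.14)
```

The columns of P are [1; λ; λ²] for λ = −1, −2, −3, which is why the transformed system comes out diagonal (see Section 2.1.5).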

2.1.3. Eigenvalues of an n×n Matrix A.

The eigenvalues of an n×n matrix A are the roots of the characteristic
equation
|λI − A| = 0        (2.15)
The eigenvalues are sometimes called the characteristic roots.

Example:
d.1. Consider the following matrix A:
A = [0 1 0; 0 0 1; −6 −11 −6]
The characteristic equation is
|λI − A| = det [λ −1 0; 0 λ −1; 6 11 λ+6]
         = λ³ + 6λ² + 11λ + 6
         = (λ + 1)(λ + 2)(λ + 3) = 0
The eigenvalues of A are the roots of the characteristic equation, or
−1, −2, and −3.
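The same roots can be confirmed with NumPy's eigenvalue routines (a check of mine):

```python
import numpy as np

A = np.array([[0., 1., 0.], [0., 0., 1.], [-6., -11., -6.]])
char_poly = np.poly(A)                       # coefficients of |lambda I - A|
eigs = np.sort(np.linalg.eigvals(A).real)    # eigenvalues, sorted
```

np.poly(A) returns [1, 6, 11, 6], the coefficients of λ³ + 6λ² + 11λ + 6, and eigvals returns its roots −3, −2, −1.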

2.1.4. Invariance of Eigenvalues.

To prove the invariance of the eigenvalues under a linear
transformation (the eigenvalues of A and those of P⁻¹AP are
identical), we must show that the characteristic polynomials |λI − A|
and |λI − P⁻¹AP| are identical.
Since the determinant of a product is the product of the determinants,
we obtain
|λI − P⁻¹AP| = |λP⁻¹P − P⁻¹AP|
             = |P⁻¹(λI − A)P|
             = |P⁻¹| |λI − A| |P|
             = |P⁻¹| |P| |λI − A|
Noting that the product of the determinants |P⁻¹| and |P| is the
determinant of the product |P⁻¹P|, we obtain
|λI − P⁻¹AP| = |P⁻¹P| |λI − A|
             = |λI − A|

Thus we have proved that the eigenvalues of A are invariant under a
linear transformation.

2.1.5. Diagonalization of an n×n Matrix.

Note that if an n×n matrix A with distinct eigenvalues is given by

A = [0 1 0 … 0; 0 0 1 … 0; … ; 0 0 0 … 1; −an −a_{n-1} −a_{n-2} … −a1]        (2.16)

the transformation x = Pz, where

P = [ 1         1         1        …  1
      λ1        λ2        λ3       …  λn
      λ1²       λ2²       λ3²      …  λn²        (2.17)
      ⋮         ⋮         ⋮            ⋮
      λ1^(n-1)  λ2^(n-1)  λ3^(n-1) …  λn^(n-1) ]

with λ1, λ2, …, λn the n distinct eigenvalues of A, will transform A into
the diagonal matrix

P⁻¹AP = [λ1 0 … 0; 0 λ2 … 0; … ; 0 0 … λn]        (2.18)
If matrix A defined by Eq. (2.16) involves multiple eigenvalues, then
diagonalization is impossible. For example, if the 3×3 matrix A, where

A = [0 1 0; 0 0 1; −a3 −a2 −a1]

has the eigenvalues λ1, λ1, λ3, then the transformation x = Sz, where

S = [1 0 1; λ1 1 λ3; λ1² 2λ1 λ3²]

will yield

S⁻¹AS = [λ1 1 0; 0 λ1 0; 0 0 λ3]

Such a form is called the Jordan canonical form.
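For the distinct-eigenvalue case, the Vandermonde transformation of Eq. (2.17) can be built and checked with NumPy (the example matrix and the helper name vandermonde_modal are mine):

```python
import numpy as np

def vandermonde_modal(eigs):
    """Transformation matrix P of Eq. (2.17): column i is
    [1, lam_i, lam_i^2, ..., lam_i^(n-1)]^T.  Helper name is mine."""
    lam = np.asarray(eigs, dtype=float)
    return np.vander(lam, len(lam), increasing=True).T

# Companion matrix of lambda^3 + 7 lambda^2 + 14 lambda + 8,
# with distinct eigenvalues -1, -2, -4 (illustrative values of mine).
A = np.array([[0., 1., 0.], [0., 0., 1.], [-8., -14., -7.]])
P = vandermonde_modal([-1., -2., -4.])
D = np.linalg.inv(P) @ A @ P           # should be diag(-1, -2, -4)
```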

2.2. Controllability and Observability

Controllability and observability are important structural
properties of a control system. The controllability and observability
analyses below can be used to check whether a system is
controllable or observable.

Controllability analysis calculates the controllability matrix
and checks whether the system is controllable. If the controllability
matrix

[B  AB  A²B  …  A^(n-1)B]        (2.19)

has full row rank, the system is controllable.

Example:
1. Consider the system given by

[ẋ1; ẋ2] = [1 1; 0 −1][x1; x2] + [1; 0]u

Since [B AB] = [1 1; 0 0] is singular, the system is not completely
state controllable.

2. Consider the system given by

[ẋ1; ẋ2] = [1 1; 2 −1][x1; x2] + [0; 1]u

For this case [B AB] = [0 1; 1 −1] is nonsingular; the system is
therefore completely state controllable.
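Both rank tests can be reproduced with NumPy (a sketch of mine; the helper name controllability_matrix is not from the text):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise (Eq. 2.19)."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# Example 1: rank 1 < 2, not controllable.
A1, B1 = np.array([[1., 1.], [0., -1.]]), np.array([[1.], [0.]])
rank1 = np.linalg.matrix_rank(controllability_matrix(A1, B1))

# Example 2: rank 2, completely state controllable.
A2, B2 = np.array([[1., 1.], [2., -1.]]), np.array([[0.], [1.]])
rank2 = np.linalg.matrix_rank(controllability_matrix(A2, B2))
```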

Observability analysis calculates the observability matrix
and checks whether the system is observable. If the observability
matrix

[C; CA; … ; CA^(n-1)]        (2.20)

has full rank, the system is observable.

Example:
1. Consider the system given by

[ẋ1; ẋ2] = [1 1; −2 −1][x1; x2] + [0; 1]u
y = [1 0][x1; x2]

Since [Cᵀ AᵀCᵀ] = [1 1; 0 1] has rank 2 (its determinant is nonzero),
the system is completely observable.

2. Show that the following system is not completely observable.

ẋ = Ax + Bu
y = Cx

where
x = [x1; x2; x3],  A = [0 1 0; 0 0 1; −6 −11 −6],  B = [0; 0; 1],  C = [4 5 1]

Note that the control function u does not affect the complete
observability of the system. In order to examine complete
observability, we may simply set u = 0. For this system, we have

[Cᵀ AᵀCᵀ (Aᵀ)²Cᵀ] = [4 −6 6; 5 −7 5; 1 −1 −1]

Note that

det [4 −6 6; 5 −7 5; 1 −1 −1] = 0

so the rank of the matrix is less than three.
Therefore, the system is not completely observable.

2.3. Analysis and Design in State-space Form

To develop an understanding of solution techniques as applied to the
first-order vector-matrix model, it is useful to review a first-order
solution as applied to a scalar model. A first-order scalar equation
can be expressed as

ẋ(t) = ax(t) + bu(t)        (2.21)

in which a and b are constants and the value of a is typically
negative. The Laplace transformation is

sX(s) − x(0) = aX(s) + bU(s)        (2.22)

Solving for X(s) produces

X(s) = x(0)/(s − a) + b[1/(s − a)]U(s)        (2.23)

and the inverse transformation must be completed with the knowledge
that u(t) is an unspecified input function. A general expression for the
inverse transformation is

x(t) = x(0)e^(at) + b ∫₀ᵗ e^(a(t−λ)) u(λ) dλ        (2.24)

The validity of the solution can be readily established by verifying
that it satisfies Equation (2.21).

Note that the zero-input response (the response with the input set to
zero) is x(t) = x(0)e^(at). Because x(t) must be a solution of
ẋ(t) = ax(t), it is apparent that the solution embodies a
transcendental function that yields the desired mathematical property.
An algebraic representation of the exponential function exists only as
an infinite series, with

e^(at) = 1 + at + (at)²/2! + (at)³/3! + ⋯        (2.25)

and differentiation of the series confirms that the derivative of e^(at)
is ae^(at). This result, in turn, confirms that x(t) = Ke^(at) is a
solution of ẋ(t) = ax(t).

2.3.1. State Transition Matrix

If a Laplace transformation is applied to the vector-matrix model,

ẋ(t) = Ax(t) + Bu(t)        (2.26)

becomes
sX(s) − x(0) = AX(s) + BU(s)
Solving for X(s) is an operation that must be performed carefully.
Algebraic manipulation provides
(sI − A)X(s) = x(0) + BU(s)
and the solution for X(s) is
X(s) = (sI − A)⁻¹x(0) + (sI − A)⁻¹BU(s)

If the notation is simplified by replacing (sI − A)⁻¹ by Φ(s), then

X(s) = Φ(s)x(0) + Φ(s)BU(s)        (2.27)

The inverse transformation produces

x(t) = Φ(t)x(0) + ∫₀ᵗ Φ(t − λ)Bu(λ) dλ        (2.28)

The solution is expressed as a matrix equation, and Φ(t) is known as
the state transition matrix.

If the input is zero, the system model is ẋ(t) = Ax(t) and the
solution is x(t) = Φ(t)x(0). Thus, the first derivative of Φ(t)
must be equal to AΦ(t), and Φ(0) must be equal to I. In other
words, the transition matrix must exhibit a property that is very
similar to the property that is ascribed to the exponential function
when applied to a scalar solution. The concept of a vector-matrix
exponential function is introduced with Φ(t) = e^(At), and the required
mathematical properties are attained if e^(At) is defined such that

e^(At) = I + At + (At)²/2! + (At)³/3! + ⋯        (2.29)

Note that the first derivative of e^(At) is Ae^(At). Revising Equation
(2.28) to utilize the exponential notation produces

x(t) = e^(At)x(0) + ∫₀ᵗ e^(A(t−λ)) Bu(λ) dλ        (2.30)

The exponential representation of the transition matrix yields a
time-domain expression of the solution, and the series representation of
the matrix exponential introduces an effective programming option
when considering the utilization of a discrete-time algorithm to
simulate the system model.

Note:
Φ(t) = e^(At) is the state transition matrix.
Eq. (2.29) is a Maclaurin (Taylor) series.
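The defining properties Φ(0) = I and dΦ/dt = AΦ(t) can be checked directly against a truncated version of the series in Eq. (2.29); a NumPy sketch of mine:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Truncated Maclaurin series of Eq. (2.29): I + At + (At)^2/2! + ..."""
    n = A.shape[0]
    M, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k      # (At)^k / k! built incrementally
        M = M + term
    return M

A = np.array([[0., 1.], [-2., -3.]])
Phi0 = expm_series(A, 0.0)             # should equal I
# Central-difference estimate of dPhi/dt at t = 0.5, for comparison
# against A @ Phi(0.5).
h = 1e-6
dPhi = (expm_series(A, 0.5 + h) - expm_series(A, 0.5 - h)) / (2 * h)
```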

Example:
Obtain the state transition matrix Φ(t) of the following system:

[ẋ1; ẋ2] = [0 1; −2 −3][x1; x2]

Solution:
For this system,
A = [0 1; −2 −3]
The state transition matrix is given by Φ(t) = e^(At) = L⁻¹[(sI − A)⁻¹].
Since
sI − A = [s 0; 0 s] − [0 1; −2 −3] = [s −1; 2 s+3]
the inverse of (sI − A) is given by

(sI − A)⁻¹ = 1/((s+1)(s+2)) [s+3 1; −2 s]

           = [ (s+3)/((s+1)(s+2))   1/((s+1)(s+2))
               −2/((s+1)(s+2))      s/((s+1)(s+2)) ]

Hence

Φ(t) = e^(At) = L⁻¹[(sI − A)⁻¹]
     = [ 2e^(−t) − e^(−2t)        e^(−t) − e^(−2t)
         −2e^(−t) + 2e^(−2t)     −e^(−t) + 2e^(−2t) ]
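The closed-form result can be cross-checked against the series definition of e^(At) (a verification of mine):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])

def phi_closed(t):
    """The transition matrix found above by inverse Laplace transform."""
    e1, e2 = np.exp(-t), np.exp(-2.0 * t)
    return np.array([[2*e1 - e2,       e1 - e2],
                     [-2*e1 + 2*e2,   -e1 + 2*e2]])

def expm_series(M, t, terms=40):
    """Truncated Maclaurin series of Eq. (2.29)."""
    n = M.shape[0]
    S, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ (M * t) / k
        S = S + term
    return S
```

The two agree to machine precision for moderate t, and phi_closed(0) is the identity matrix, as required of any transition matrix.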

2.3.2. Transfer Matrix.

The concept of the transfer matrix is an extension of that of the
transfer function of SISO systems. We shall first obtain transfer
functions of SISO systems and then transfer matrices of MIMO
systems from state and output equations.
Let us consider the system whose transfer function is given by

Y(s)/U(s) = G(s)        (2.31)

The state-space representation for this system is given by
ẋ = Ax + Bu        (2.32)
y = Cx + Du        (2.33)
The Laplace transforms of Eqs. (2.32) and (2.33) are given by
sX(s) − x(0) = AX(s) + BU(s)        (2.34)
Y(s) = CX(s) + DU(s)        (2.35)
We assume x(0) = 0 in Eq. (2.34); substituting

X(s) = (sI − A)⁻¹BU(s)

into Eq. (2.35), we obtain

Y(s) = [C(sI − A)⁻¹B + D]U(s)        (2.36)

Upon comparing Eq. (2.36) with Eq. (2.31), we see that

G(s) = C(sI − A)⁻¹B + D        (2.37)

Example:
Obtain the transfer function of the system

[ẋ1; ẋ2] = [−5 −1; 3 −1][x1; x2] + [2; 5]u
y = [1 2][x1; x2]

The transfer function for the system is then

G(s) = C(sI − A)⁻¹B
     = [1 2] [s+5 1; −3 s+1]⁻¹ [2; 5]
     = [1 2] [ (s+1)/((s+2)(s+4))   −1/((s+2)(s+4))
                3/((s+2)(s+4))      (s+5)/((s+2)(s+4)) ] [2; 5]
     = (12s + 59)/((s+2)(s+4))
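A quick numerical check (mine): Eq. (2.37) evaluated at sample frequencies should match the rational expression just derived.

```python
import numpy as np

A = np.array([[-5., -1.], [3., -1.]])
B = np.array([[2.], [5.]])
C = np.array([[1., 2.]])

def G_state(s):
    """Eq. (2.37) with D = 0: C (sI - A)^{-1} B."""
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

def G_tf(s):
    """The transfer function derived above."""
    return (12 * s + 59) / ((s + 2) * (s + 4))
```

At s = 0, for instance, both give the DC gain 59/8.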
2.3.3. Stability Criteria as Applied to Linear State Models

The roots of the characteristic equation are, of course, the poles of the
transfer function, and the poles of the transfer function determine the
character of the natural response. Thus, if all the roots of the
characteristic equation are located in the LHP, all of the terms of the
natural response will decay asymptotically to zero.

Another definition of stability is described as bounded-input,
bounded-output (BIBO) stability. BIBO stability is obtained if the
output is bounded in response to all bounded inputs. Clearly, if a
linear system displays asymptotic stability, it will also display BIBO
stability.
If an LTI system is described using a state model, the system model is

ẋ(t) = Ax(t) + Bu(t)        (2.38)

and the system is asymptotically stable if all of the terms of the state
transition matrix approach zero as time approaches infinity. Hence, a
system exhibits asymptotic stability if

lim_{t→∞} Φ(t) = 0        (2.39)

The Laplace transform of the transition matrix is

Φ(s) = (sI − A)⁻¹ = adj(sI − A)/det(sI − A)        (2.40)

and the denominator polynomial of each term of Φ(s) is determined
by evaluating det(sI − A). Therefore, asymptotic stability is
satisfied if all of the roots of det(sI − A) are located in the LHP. The
characteristic equation is

det(sI − A) = 0        (2.41)

The roots of the characteristic equation (Eq. 2.41) are also known as
the eigenvalues of A.

Example:
From the preceding example, we know that the characteristic equation of
the system is

det(sI − A) = (s + 2)(s + 4) = 0

Hence the roots of the characteristic equation are −2 and −4, both
located in the LHP, so the system is stable.
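The same conclusion follows directly from the eigenvalues of A (a check of mine):

```python
import numpy as np

A = np.array([[-5., -1.], [3., -1.]])
eigs = np.linalg.eigvals(A)                 # roots of det(sI - A)
stable = bool(np.all(eigs.real < 0))        # all eigenvalues in the LHP?
```

eigvals returns −2 and −4, so stable is True, consistent with the pole locations found above.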

2.4. Control Analysis using MATLAB

2.4.1. Frequency-Response Plots using MATLAB

Consider the use of MATLAB to generate Bode plots. The system
(open-loop function) is represented by the equation below:

G(jω)H(jω) = 2 / [jω(jω/10 + 1)²]

The open-loop function is described; thus, the gain and phase
margins can also be calculated. Bode plots showing both gain (in
decibels) and phase versus frequency on semilog plots can be
obtained using the function bode as follows:

%Bode plots (gain and phase) with automatic scaling and labels
n=[0 0 0 200]; % specify the numerator
d=[1 20 100 0]; % specify the denominator
bode(n,d) % plot gain and phase

%Gain & phase margins and the frequencies of measurement


[gm, pm, wpc, wgc]=margin(n,d) % specific information request
Gmdb=20*log10(gm) % convert the gain margin to dB

The revised set of statements is as follows:

%Bode plots (open and closed-loop) with various options

n=[0 0 0 200]; % specify the numerator
d1=[1 20 100 0]; % specify the OL denominator
d2=[1 20 100 200]; % specify the CL denominator
w=logspace(-1,2,200); % frequency vector: 10^-1 to 10^2 with 200 equal log-scale increments
[m1,p1,w]=bode(n,d1,w); % calculate OL gain and phase vectors
[m2,p2,w]=bode(n,d2,w); % calculate CL gain and phase vectors
db1=20*log10(m1);
db2=20*log10(m2); % convert gain to dB

[gm, pm]=margin(n,d1) % calculate gain and phase margins
figure(1)
semilogx(w, db1, w, db2), grid % plot gain
axis([.1 100 -40 20]) % specify plot ranges (optional)
xlabel('Freq (r/s)'), ylabel('Gain (dB)') % label axes
pause % press any key to proceed to next plot
figure(2)
semilogx(w, p1, w, p2), grid % plot phase
axis([.1 100 -270 0]) % specify ranges (optional)
set(gca,'ytick', -270:30:0) % specify y scale (optional)
xlabel('Freq (r/s)'), ylabel('Phase (deg)') % label axes

2.4.2. Root-Locus Construction using MATLAB

Root loci are determined using a repeated application of a root-


finding algorithm, and there are several options for plotting the loci
and locating specific values of K on the plot. The simplest plotting
option is to use MATLAB function rlocus with an automatic
selection of the incremental steps and range of the adjustable
parameter K.

For example,
n=[0 0 0 1 2]; % numerator of P(s)
d=[1 3 4 2 0]; % denominator of P(s)
rlocus(n,d) % root locus calculation and plot
[k,poles]=rlocfind(n,d) % specific information request

When a root-locus plot is completed, the rlocfind function can be

applied as many times as desired to determine the value of K and the
numerical values of the roots at specific points on the locus.

The following program describes a sequence of steps that allows the

user to control the dimensions of the plot, the range and increments of
K, and the format of the plot. The root locations can be marked with a
symbol or connected by a continuous line. The specification of a
scale parameter g determines the dimensions of the plot. Unless
modified, the plot will extend horizontally from -8g to +2g and
vertically from -5g to +5g. If saved as a script M-file, the program can
be applied to a variety of applications with very little modification.

clear; g=.5; % clear old variables and set scale

k1=0:.005:.4; k2=.5:.5:30; % select K
k=[k1 k2];
n=[0 0 0 1]; % numerator
d=[1 3 2 0]; % denominator
[r,k]=rlocus(n,d,k); % calculation of root sequences
%plot(real(r),imag(r),'x') % discrete point option (remove % to use)
plot(real(r),imag(r),'linewidth',2) % continuous loci option
x1=[-8*g 2*g]; % horizontal dimension
y1=[-5*g 5*g]; % vertical dimension
z1=[0 0]; line(x1,z1), line(z1,y1) % draw axes
axis([x1 y1]) % set plot dimensions
axis('square') % set 1:1 aspect ratio (with 10g by 10g dimension)
z=.1:.1:.9; w=1:4; % select increments of ζ and ωn
grid, sgrid(z,w) % add rectangular grid and lines of ζ and ωn
hold on % hold plot
p=roots(d);
plot(real(p), imag(p), 'x') % mark poles of P(s)
q=roots(n);
plot(real(q), imag(q), 'o') % mark zeros of P(s)
hold off % remove hold
[k, poles]=rlocfind(n,d) % move cursor to find K and roots
[k, poles]=rlocfind(n,d) % repeat at another point

The parameters as specified in the program are, of course, easily

modified. The program exactly as presented produces a root-locus
plot for a system with a characteristic equation equal to

1 + K/(s(s + 1)(s + 2)) = 0

-o0o-

