LECTURES ON
SYSTEM THEORY
2008
PREFACE.
I
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).
2.1. Input-Output Description of SISO LTI. 42
Example 2.1.1. Proper System Described by Differential Equation. 46
2.2. State Space Description of SISO LTI. 49
2.3. Input-Output Description of MIMO LTI. 51
2.4. Response of Linear Time Invariant Systems. 54
2.4.1. Expression of the State Vector and Output Vector in s-domain. 54
2.4.2. Time Response of LTI from Zero Time Moment. 55
2.4.3. Properties of Transition Matrix. 56
2.4.4. Transition Matrix Evaluation. 57
2.4.5. Time Response of LTI from Nonzero Time Moment . 58
3. SYSTEM CONNECTIONS.
II
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.
4.1. Principle Diagrams and Block Diagrams. 84
4.1.1. Principle Diagrams. 84
4.1.2. Block Diagrams. 84
Example 4.1.1. Block Diagram of an Algebraical Relation. 85
Example 4.1.2. Variable's Directions in Principle Diagrams and Block Diagrams. 87
Example 4.1.3. Block Diagram of an Integrator. 89
4.1.3. State Diagrams Represented by Block Diagrams. 89
4.2. Systems Reduction Using Block Diagrams. 92
4.2.1. Systems Reduction Problem. 92
4.2.2. Analytical Reduction. 92
4.2.3. Systems Reduction Through Block Diagrams Transformations. 93
4.2.3.1. Elementary Transformations on Block Diagrams. 93
Example 4.2.1. Representations of a Multi Inputs Summing Element. 96
4.2.3.2. Transformations of a Block Diagram Area by Analytical Equivalence. 96
4.2.3.3. Algorithm for the Reduction of Complicated Block Diagrams. 96
Example 4.2.2. Reduction of a Multivariable System. 98
4.3 Signal Flow Graphs Method (SFG). 106
4.3.1. Signal Flow Graphs Fundamentals. 106
4.3.2. Signal Flow Graphs Algebra. 107
Example 4.3.1. SFGs of one Algebraic Equation. 110
Example 4.3.2. SFG of two Algebraic Equations. 111
4.3.3. Construction of Signal Flow Graphs. 113
4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic Equations. 113
Example 4.3.3. SFG of three Algebraic Equations. 114
4.3.3.2. Construction of SFG Starting from a Block Diagram. 115
Example 4.3.4. SFG of a Multivariable System. 115
4.4. Systems Reduction Using Signal Flow Graphs. 116
4.4.1. SFG Reduction by Elementary Transformations. 117
4.4.1.1. Elimination of a Self-loop. 117
4.4.1.2. Elimination of a Node. 118
4.4.1.3. Algorithm for SFG Reduction by Elementary Transformations. 120
4.4.2. SFG Reduction by Mason's General Formula. 121
Example 4.4.1. Reduction by Mason's Formula of a Multivariable System. 123
III
5. SYSTEMS REALISATION BY STATE EQUATIONS.
5.1. Problem Statement. 125
5.1.1. Controllability Criterion. 126
5.1.2. Observability Criterion. 126
5.2. First Type I-D Canonical Form. 127
Example 5.2.1. First Type I-D Canonical Form of a Second Order System. 130
5.3. Second Type D-I Canonical Form. 132
5.4. Jordan Canonical Form. 134
5.5 State Equations Realisation Starting from the Block Diagram 137
IV
7. SYSTEMS STABILITY.
7.1. Problem Statement. 170
7.2. Algebraical Stability Criteria. 171
7.2.1. Necessary Condition for Stability. 171
7.2.2. Fundamental Stability Criterion. 171
7.2.3. Hurwitz Stability Criterion. 171
7.2.4. Routh Stability Criterion. 173
7.2.4.1. Routh Table. 173
7.2.4.2. Special Cases in Routh Table. 174
Example 7.2.1. Stability Analysis of a Feedback System. 176
7.3. Frequency Stability Criteria. 177
7.3.1. Nyquist Stability Criterion. 177
7.3.2. Frequency Quality Indicators. 178
7.3.3. Frequency Characteristics of Time Delay Systems. 180
V
9. SAMPLED DATA SYSTEMS.
9.1. Computer Controlled Systems. 197
9.2. Mathematical Model of the Sampling Process. 204
9.2.1. Time-Domain Description of the Sampling Process. 204
9.2.2. Complex Domain Description of the Sampling Process. 205
9.2.3. Shannon Sampling Theorem. 207
9.3. Sampled Data Systems Modelling. 209
9.3.1. Continuous Time Systems Response to Sampled Input Signals. 209
9.3.2. Sampler - Zero Order Holder (SH). 211
9.3.3. Continuous Time System Connected to a SH. 212
9.3.4. Mathematical Model of a Computer Controlled System. 213
9.3.5. Complex Domain Description of Sampled Data Systems. 215
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS
10.1. Frequency Characteristics Definition. 217
10.2. Relations Between Frequency Characteristics and Attributes of Z-Transfer Functions. 218
10.2.1. Frequency Characteristics of LTI Discrete Time Systems. 218
10.2.2. Frequency Characteristics of First Order Sliding Average Filter. 220
10.2.3. Frequency Characteristics of m-Order Sliding Weighted Filter. 221
10.3. Discrete Fourier Transform (DFT). 222
VI
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.
1.1. Introduction
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.2. Abstract Systems; Oriented Systems; Examples.
Any physical system (or physical object), as an element of the real world, is a part (a piece) of a more general context. It is not isolated: it interacts with the outside through exchanges of information, energy and material. These exchanges alter its environment and cause modifications in time and space of some of its specific (characteristic) variables.
Such a representation is realised in Fig. 1.1: the system (the inside) exchanges information, energy and material with the outside; the inputs u1, ..., up enter the block marked with the system descriptor and the outputs y1, ..., yr leave it.
In practice, only those inputs are kept that have a significant influence (within a defined level of precision) on the chosen outputs.
By defining the inputs and outputs we define the border that separates the inside from the outside from a behavioural point of view.
Usually an input is denoted by u, a scalar if there is only one input, or a column vector u = [u1 u2 ... up]T if there are p input variables.
The output is usually denoted by y if there is only one output, or by a column vector y = [y1 y2 ... yr]T if there are r output variables.
Scalars and vectors are written with the same fonts; the difference between them, if necessary, is explicitly mentioned.
An oriented system can be graphically represented in a block diagram, as depicted in Fig. 1.2.2., by a rectangle which usually contains a system descriptor: a description of the system, its name, or a symbol for the mathematical model that identifies it. Inputs are represented by arrows directed towards the rectangle and outputs by arrows directed away from it.
Generally there are three main graphical representations of systems:
1. The physical diagram or construction diagram. This can be a picture of a physical object or a diagram illustrating how the object is built or has to be built.
2. The principle diagram or functional diagram is a graphical representation of a physical system using norms and symbols specific to the field to which the physical system belongs, drawn in such a way that the functioning (behaviour) of that system can be understood.
3. The block diagram is a graphical representation of the mathematical relations
between the variables by which the behaviour of the system is described. Mainly
the block diagram illustrates the abstract system. The representation is performed
by using rectangles or flow graphs.
We can look at the motor as an oriented object from the systems theory point of view. In this way we shall define the inside and the outside.
1. Suppose we are interested in the angular speed ω of the motor axle. This will be the output of the oriented system we are defining now. The inputs to this oriented system are all the causes that affect the selected output ω, within an accepted level of precision. To identify them, knowledge of electrical engineering is necessary. The inputs are: rotor voltage Ur, excitation voltage Ue, resistant torque Cr, external temperature θext. The oriented system related to the DC motor, having the angular speed ω as output within the agreed level of precision, is depicted in Fig. 1.2.4. The mathematical relations between ω and Ur, Ue, Cr, θext are denoted by S1, which expresses the abstract system. This abstract system is the mathematical model of the physical oriented object (or system) as defined above.
2. Suppose now we are interested in two attributes of the above DC motor: the rotor current Ir and the internal temperature θint. These two variables are selected as outputs. The inputs are the same: Ur, Ue, Cr, θext. The resulting oriented system is depicted in Fig. 1.2.5. The abstract system for this case is denoted by S2.
Anyone can see that S1 ≠ S2 even though they are related to the same physical object, so a conclusion can be drawn: different abstract systems can be attached to one physical object (system), depending on what we are looking for.
which also depends on the four above mentioned entities and can be considered in a condensed form as
y(t) = η(t, t0, x0, u[t0,t]) . (1.2.8)
This is called the input-initial state-output relation, or for short the i-is-o relation.
The time variation of the input is expressed by a function
u: T → U, t → u(t) (1.2.9)
so the input segment u[t0,t] is the graph of the restriction of the function u to the observation interval [t0,t]. In our case the set U of input values can be, for example, the interval [0, 10] volts.
Someone who manages the physical object represented by the principle diagram knows that there are some restrictions on the shape of the time evolution of the function u. For example, only piecewise continuous functions, or only continuous and differentiable functions, could be admitted.
We shall denote by Ω the set of admissible inputs,
Ω = {u | u: T → U, admitted to be applied to the system} (1.2.10)
Our system S1 is well defined by specifying three elements: the set Ω and the two relations ϕ and η,
S1 = {Ω, ϕ, η} (1.2.11)
This is the so-called explicit form of abstract system representation, or the representation by solutions.
An explicit form can be presented in the complex domain by applying the Laplace transform to the differential equation, if it is linear with constant coefficients, as in (1.2.2):
L{T ẏ(t) + y(t)} = L{K1K2 u(t)} ⇔ T[sY(s) − y(0)] + Y(s) = K1K2 U(s) ⇒
Y(s) = K1K2/(Ts + 1) · U(s) + T/(Ts + 1) · y(0) (1.2.12)
We can see that the differential equation has been transformed into an algebraical equation, simpler to manipulate. But, as in any Laplace transform, the initial values are stipulated for t = 0, not t = t0 as we considered. This can easily be overcome by considering that the time variable t in (1.2.12) is t − t0 from (1.2.2). With this in mind, the inverse Laplace transform of (1.2.12) will give us the relation (1.2.7), where y(0) = K2x(0) = K2x0 and t → t − t0.
From (1.2.12) we can denote
H(s) = K1K2/(Ts + 1) (1.2.13)
which is the so-called transfer function of the system. The transfer function can generally be defined as the ratio between the Laplace transform of the output Y(s) and the Laplace transform of the input U(s) under zero initial conditions,
H(s) = Y(s)/U(s) |y(0)=0 . (1.2.14)
The system S1 can be represented by the transfer function H(s) also.
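The first-order behaviour captured by H(s) = K1K2/(Ts + 1) can be checked numerically. The sketch below compares a forward-Euler integration of T ẏ + y = K1K2 u against the analytic step response obtained by inverting H(s)/s; the numeric values of T, K1, K2 and the unit-step input are illustrative assumptions, since the text keeps them symbolic.

```python
import math

# Illustrative parameter values; the text keeps T, K1, K2 symbolic.
T, K1, K2 = 2.0, 3.0, 0.5
K = K1 * K2

def step_response_analytic(t):
    # Inverse Laplace of H(s)*U(s) = K/((Ts+1)s) for a unit step, y(0) = 0.
    return K * (1.0 - math.exp(-t / T))

def step_response_euler(t_end, dt=1e-4):
    # Forward-Euler integration of T*dy/dt + y = K*u with u(t) = 1, y(0) = 0.
    y, t = 0.0, 0.0
    while t < t_end:
        y += dt * (K - y) / T
        t += dt
    return y

assert abs(step_response_euler(5.0) - step_response_analytic(5.0)) < 1e-3
```

For y(0) ≠ 0 the free-response term from (1.2.12) would have to be added to the analytic expression.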
The oriented system with the above defined input and output is represented in Fig. 1.2.10., where S1 denotes a descriptor of the abstract system. Writing the force equilibrium equations we get
KP x + KV ẋ = f ; y = K2 x .
Dividing the first equation by KP and denoting T = KV/KP ; K1 = 1/KP ; u = f, we get the mathematical model as state equations,
S1 : T ẋ + x = K1 u , y = K2 x (1.2.16)
This set of equations expresses the abstract object of the mechanical system. Formally, S1 from (1.2.16) is identical with S1 from (1.2.1) of the previous example. Even if we have different physical objects, they are characterised (for the above chosen outputs) by the same abstract system. This abstract system is a common base for different physical systems. Any development we have made for the electrical system, relations (1.2.2)--(1.2.15), is valid for the mechanical system too. These constitute the unitary framework of notions we mentioned in the systems theory definition.
Working with specific methods on the abstract system, expressed in one of the forms (1.2.2)--(1.2.15), some results are obtained. These results can be applied equally to the electrical system and to the mechanical system. Of course, in the first case x is, for example, the capacitor voltage, while in the second case the meaning of x is the displacement of point A.
Such a study is called a model based study. We can say that the mechanical system is a physical model for the electrical system and vice versa, because they are related to the same abstract system.
x(t) = e^(−t/T) x(0) + (K1/T) ∫[0,t] e^(−(t−τ)/T) u(τ)dτ = ϕ(t, 0, x(0), u[0,t]), ∀t ≥ 0 . (1.2.24)
This is the state evolution starting at the initial time moment t=0, from the
initial state x(0) and has the form of the input-initial state-state relation (i-is-s).
For t = t0 from (1.2.24) we obtain,
x(t0) = e^(−t0/T) x(0) + (K1/T) ∫[0,t0] e^(−(t0−τ)/T) u(τ)dτ = ϕ(t0, 0, x(0), u[0,t0]) (1.2.25)
Substituting x(0) from (1.2.25)
x(0) = e^(t0/T) x(t0) − (K1/T) ∫[0,t0] e^(τ/T) u(τ)dτ ,
into (1.2.24) we obtain
x(t) = e^(−t/T) e^(t0/T) x(t0) − (K1/T) ∫[0,t0] e^(−(t−τ)/T) u(τ)dτ + (K1/T) ∫[0,t0] e^(−(t−τ)/T) u(τ)dτ + (K1/T) ∫[t0,t] e^(−(t−τ)/T) u(τ)dτ
x(t) = e^(−(t−t0)/T) x(t0) + (K1/T) ∫[t0,t] e^(−(t−τ)/T) u(τ)dτ = ϕ(t, t0, x(t0), u[t0,t]) (1.2.26)
which is just (1.2.4), if we are taking into consideration that x(t0) = x0.
Now from (1.2.24), (1.2.25) and (1.2.26) we observe that
x(t) = ϕ(t, 0, x(0), u[0,t]) ≡ ϕ(t, t0, ϕ(t0, 0, x(0), u[0,t0]), u[t0,t]) (1.2.27)
(the inner term ϕ(t0, 0, x(0), u[0,t0]) being x(t0))
which is the so-called state transition property of the i-is-s relation.
According to this property, the state x(t) at any time moment t, as the result of the evolution from an initial state x(0) at the time moment t = 0 with an input u[0,t], is the same as the state obtained when the system evolves from any intermediate time moment t0 and initial state x(t0) with the input u[t0,t], provided that the intermediate state x(t0) is itself the result of the evolution from the same initial state x(0) at time t = 0 with the input u[0,t0]. It has to be specified that
u [0,t] = u [0,t 0 ] ∪ u [t 0 ,t] (1.2.28)
Two conclusions can be drawn from this example:
1. Any intermediate state is an initial state for the future evolution.
2. An initial state x0 at a time moment t0 contains all the essential information from the past evolution needed to assure the future evolution, provided the input is given starting from that time moment t0.
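The state transition property (1.2.27) can be verified numerically for the first-order system above. The sketch below uses an exact zero-order-hold discretisation in place of the integral in (1.2.24); the parameter values and input samples are illustrative assumptions. It evolves the state from 0 to t directly, and alternatively through an intermediate moment t0:

```python
import math

# Sketch of the state transition property (1.2.27) for T*dx/dt + x = K1*u.
# Parameter values, step size and input are illustrative assumptions.
T, K1, dt = 1.5, 2.0, 0.01
a = math.exp(-dt / T)       # state decay over one step
b = K1 * (1.0 - a)          # exact input gain for piecewise-constant u

def evolve(x, u_samples):
    # One application per sample of the i-is-s recursion x <- a*x + b*u.
    for u in u_samples:
        x = a * x + b * u
    return x

u = [math.sin(0.1 * k) for k in range(1000)]   # some admissible input
x0 = 3.0
x_direct = evolve(x0, u)          # phi(t, 0, x(0), u[0,t])
x_t0 = evolve(x0, u[:400])        # phi(t0, 0, x(0), u[0,t0]) = x(t0)
x_via = evolve(x_t0, u[400:])     # phi(t, t0, x(t0), u[t0,t])
assert abs(x_direct - x_via) < 1e-12   # state transition property holds
```

The intermediate state x(t0) indeed carries all the information needed for the future evolution: any split point gives the same final state.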
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.3. Inputs; Outputs; Input-Output Relations.
y(t) = K2 e^(−(t−t0)/T) x0 + (K1K2/T) ∫[t0,t] e^(−(t−τ)/T) u(τ)dτ = η(t, t0, x0, u[t0,t])
Figure no. 1.3.0. (time diagrams of two outputs y and ya on the interval [t0, t1] obtained for the same input)
For the same input u[t0,t1], the output depends also on the value x0 = x(t0), which is the voltage across the terminals of the capacitor C in Ex. 1.2.2., or the arm position (point A) in Ex. 1.2.3., at the time moment t0.
T ẏ + y = K1K2 u ⇔ R(u, y) = 0 , R(u, y) = T ẏ + y − K1K2 u (1.3.6)
which is an implicit input-output relation.
By denoting D = d/dt the time derivative operator, Dy = ẏ, we obtain
TDy(t) + y(t) − K1K2 u(t) = 0 ⇔ (TD + 1)y(t) − K1K2 u(t) = 0 ⇔
y(t) = K1K2/(TD + 1) · u(t) ⇔ y(t) = Su(t) , S = K1K2/(TD + 1) (1.3.7)
Here y(t) = Su(t) is an explicit input-output relation given by an integral-differential operator S. This relation is expressed in the time domain, but it can be expressed in any domain to which a one-to-one correspondence exists.
For example, we can express it in the s-complex domain by applying the Laplace transform to (1.3.6),
Y(s) = K1K2/(Ts + 1) · U(s) + T/(Ts + 1) · x(0) (1.3.8)
from where an operator H(s) called transfer function is defined,
H(s) = Y(s)/U(s) |x(0)=0 = K1K2/(Ts + 1) . (1.3.9)
The relation between the Laplace transform of the output Y(s) and the Laplace transform of the input U(s) which determined that output, under zero initial conditions,
Y(s)=H(s)U(s) (1.3.10)
is another form of explicit input-output relation.
(Principle diagram: a two-stage RC circuit with input u, capacitor voltages uC1 = x1, uC2 = x2, capacitor currents iC1, iC2 and output y; block diagram: u → S → y.)
To do this, we first observe from the principle diagram that there are 8 variables as functions of time involved: u, i1, iC1, uC1 = x1, iC2, uC2 = x2, i2 and y.
The other quantities R1, R2, C1, C2 are constant in time and represent the circuit parameters, as anyone skilled in electrical engineering understands at once.
Because u is a cause (an input) it is a free variable, so we have to look for 7 independent equations. These equations can be written using Kirchhoff's theorems and Ohm's law:
1. iC1 = i1 − i2   2. iC2 = i2   3. iC1 = C1 ẋ1   4. iC2 = C2 ẋ2
5. i1 = (1/R1)(−x1 + u)   6. i2 = (1/R2)(x1 − x2)   7. y = x2
We can observe that the two variables x1 and x2 appear with their first order derivatives, so by eliminating all the intermediate variables a relation between u and y will be obtained as a second order differential equation. But first we shall keep the variables x1 and x2 and their derivatives.
Denoting by T1=R1C1 and T2=R2C2 the two time constants, after some
substitutions we obtain,
T1 ẋ1 = −(1 + R1/R2) x1 + (R1/R2) x2 + u (1.3.11)
T2 ẋ2 = x1 − x2 (1.3.12)
y = x2 (1.3.13)
which, after dividing by T1 and T2 respectively, take the final form
ẋ1 = −(1/T1)(1 + R1/R2) x1 + (1/T1)(R1/R2) x2 + (1/T1) u(t) (1.3.14)
S: ẋ2 = (1/T2) x1 − (1/T2) x2 (1.3.15)
y = x2 (1.3.16)
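A minimal simulation of the state model S given by (1.3.14)-(1.3.16) is sketched below; the component values R1, R2, C1, C2 are illustrative assumptions, not taken from the text. With a unit-step input, the steady state of (1.3.11)-(1.3.12) gives x1 = x2 = u, so the output must settle at the source voltage:

```python
# Forward-Euler sketch of the state model (1.3.14)-(1.3.16).
# R1, R2, C1, C2 are illustrative component values.
R1, R2, C1, C2 = 1.0e3, 2.0e3, 1.0e-6, 1.0e-6
T1, T2 = R1 * C1, R2 * C2          # the two time constants

def simulate(u, t_end, dt=1e-6):
    # Integrate (1.3.14)-(1.3.15) from zero initial state; y = x2 by (1.3.16).
    x1 = x2 = 0.0
    t = 0.0
    while t < t_end:
        dx1 = (-(1.0 + R1 / R2) * x1 + (R1 / R2) * x2 + u(t)) / T1
        dx2 = (x1 - x2) / T2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        t += dt
    return x2

# With a unit-step input both capacitors charge to the source voltage.
y_final = simulate(lambda t: 1.0, t_end=0.05)
assert abs(y_final - 1.0) < 1e-3
```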
Let H(s) be
H(s) = 1/L(s) = Y(s)/U(s) |zero initial conditions (1.3.27)
where H(s) is called the transfer function of the system.
The transfer function of a system is the ratio between the Laplace transform of the output and the Laplace transform of the input which determines that output, under zero initial conditions, if and only if this ratio does not depend on the form of the input.
By using the inverse Laplace transform we can obtain the time response (the output response) of this system. The characteristic equation
L(s) = T^2 s^2 + 2.5Ts + 1 = 0 has the roots,
s1,2 = (−2.5T ± (1/2)√(25T^2 − 16T^2)) / (2T^2) = (−2.5T ± 1.5T) / (2T^2) (1.3.28)
so the characteristic polynomial is presented as
L(s) = T^2 (s − λ1)(s − λ2) with λ1 = −1/(2T) ; λ2 = −2/T . (1.3.29)
One way to calculate the inverse Laplace transform is to use the partial
fraction development of rational functions from Y(s) as in (1.3.26)
H(s) = 1/(T^2 (s − λ1)(s − λ2)) ; H(s) = A/(s − λ1) + B/(s − λ2) ⇒ A = 2/(3T) ; B = −2/(3T)
(T^2 s + 2.5T)/(T^2 (s − λ1)(s − λ2)) = A1/(s − λ1) + B1/(s − λ2) ⇒ A1 = 4/3 ; B1 = −1/3
T^2/(T^2 (s − λ1)(s − λ2)) = A2/(s − λ1) + B2/(s − λ2) ⇒ A2 = 2T/3 ; B2 = −2T/3
Y(s) = (2/(3T))[1/(s − λ1) − 1/(s − λ2)]U(s) + (1/3)[4/(s − λ1) − 1/(s − λ2)]y(0) + (2T/3)[1/(s − λ1) − 1/(s − λ2)]ẏ(0)
L^(−1){1/(s − λ1)} = e^(λ1 t) = α1(t) ; L^(−1){1/(s − λ2)} = e^(λ2 t) = α2(t)
L^(−1){U(s)/(s − λ1)} = ∫[0,t] α1(t − τ)u(τ)dτ ; L^(−1){U(s)/(s − λ2)} = ∫[0,t] α2(t − τ)u(τ)dτ
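The partial-fraction coefficients above can be cross-checked numerically by evaluating both sides of each expansion at test points away from the poles; T = 1 is an illustrative choice.

```python
# Numerical cross-check of the partial-fraction coefficients derived above.
# T = 1 is illustrative; lam1 = -1/(2T), lam2 = -2/T as in (1.3.29).
T = 1.0
lam1, lam2 = -1.0 / (2.0 * T), -2.0 / T

# Pairs of (original rational function, claimed expansion):
pairs = [
    (lambda s: 1.0 / (T**2 * (s - lam1) * (s - lam2)),
     lambda s: (2 / (3 * T)) / (s - lam1) + (-2 / (3 * T)) / (s - lam2)),
    (lambda s: (T**2 * s + 2.5 * T) / (T**2 * (s - lam1) * (s - lam2)),
     lambda s: (4 / 3) / (s - lam1) + (-1 / 3) / (s - lam2)),
    (lambda s: T**2 / (T**2 * (s - lam1) * (s - lam2)),
     lambda s: (2 * T / 3) / (s - lam1) + (-2 * T / 3) / (s - lam2)),
]
for f, g in pairs:
    for s in (1.0, 2.5, 10.0, -5.0):  # test points away from the poles
        assert abs(f(s) - g(s)) < 1e-12
```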
xk = Φ(k, p)xp + Σ[j=p to k−1] [Φ(k, j + 1) βj uj] (1.3.39)
yk = γkxk (1.3.40)
We observe that (1.3.39) is an input-initial state-state relation of the form,
xk = ϕ(k, k0, xk0, u[k0,k−1]) (1.3.41)
where xk is the state at the current time k (in our case the day index) and xk0 is the initial state at the initial time moment p = k0.
The output evolution is
k−1
y k = γ k Φ(k, p)x p + Σ [γ k Φ(k, j + 1)β j u j ] (1.3.42)
j=p
which is an input-initial state-output relation of the form,
y k = η(k, k 0 , x k 0 , u [k 0,k−1] ) (1.3.43)
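Relation (1.3.39) can be checked in the scalar case, where the transition factor Φ(k, j) reduces to the product a(k−1)···a(j) of the one-step coefficients. The sketch below (all coefficient and input values are illustrative) compares the step-by-step recursion with the closed form:

```python
# Scalar, time-varying sketch of (1.3.39): x_{k+1} = a_k x_k + b_k u_k,
# with transition factor Phi(k, j) = a_{k-1} * ... * a_j.
# Coefficients and inputs below are illustrative values.
a = [0.9, 0.8, 1.1, 0.95, 0.7]
b = [1.0, 0.5, 2.0, 1.0, 1.5]
u = [1.0, -1.0, 0.5, 2.0, 0.0]

def phi(k, j):
    # Phi(k, j): product of the one-step factors from index j up to k-1.
    p = 1.0
    for i in range(j, k):
        p *= a[i]
    return p

def x_recursive(x0, k):
    x = x0
    for i in range(k):
        x = a[i] * x + b[i] * u[i]
    return x

def x_closed_form(x0, k):
    # (1.3.39) with p = 0: x_k = Phi(k,0) x_0 + sum_j Phi(k, j+1) b_j u_j
    return phi(k, 0) * x0 + sum(phi(k, j + 1) * b[j] * u[j] for j in range(k))

for k in range(6):
    assert abs(x_recursive(2.0, k) - x_closed_form(2.0, k)) < 1e-12
```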
(Figure: relay circuit detail with current i and button RB between terminals c and d; the subsets S0 and S1 of input-output pairs correspond to the initial states xt0 = 0 and xt0 = 1.)
If the button RB is pushed, the current i is cut off and the relay relaxes; the lamp L is turned off.
The variables encountered in this description SB, RB, i, x, y are associated
with the variables st, rt, it, xt, yt called logical variables .
They represent the truth values, at the time moment t, of the propositions:
st: "The button SB is pushed" ⇔ "Through terminals a-b the current can run".
rt: "The button RB is pushed" ⇔"Through terminals c-d the current can not run"
it: "The current i runs".
xt:"The relay is activated" ⇔ "The relay normal opened contacts are connected".
yt: "The lamp lights".
These logical variables can take only two values denoted usually by the
symbols 0 and 1 on a set B={0;1} which represents false and true.
The set B is organised as a Boolean algebra. In a Boolean algebra three
binary fundamental operations are defined: conjunction " ∧ ", disjunction " ∨ "
and negation " ¯ ".
Suppose we are interested in the lamp status, so the output is y(t) = yt. This selected output depends only on the status of the buttons SB, RB (it is supposed that the supply voltage E is continuously applied) as external causes, so the input is the vector u(t) = [st rt]T.
An oriented system is defined now as depicted in Fig. 1.3.5.
The mathematical relations between u(t) and y(t), defining the abstract
system S are expressed as logical equations.
The value of the logical variable it is given by
it = (st ∨ xt) ∧ r̄t . (1.3.44)
Because of the mechanical inertia, the status of the relay changes to the value of it only after a small time interval ε, finite or ideally ε → 0,
xt+ε = it . (1.3.45)
Of course the status of the lamp equals the status of the relay, so
yt = xt (1.3.46)
Substituting (1.3.44) into (1.3.45), together with (1.3.46), we obtain the abstract system S as,
S: xt+ε = (st ∨ xt) ∧ r̄t (1.3.47)
   yt = xt (1.3.48)
To determine the output of this system, beside the two inputs st, rt we need another piece of information: the value of xt, the state of the relay (1 if the relay is activated and 0 if the relay is not activated).
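Equations (1.3.47)-(1.3.48) describe a set-reset latch, and their behaviour can be sketched directly as Boolean updates (a minimal Python sketch; one call to step() stands for one ε-interval of the relay's mechanical inertia):

```python
# Boolean sketch of the abstract system (1.3.47)-(1.3.48).
def step(x, s, r):
    """Return (next state, current output): x_{t+eps} = (s or x) and not r, y_t = x_t."""
    return (s or x) and not r, x

x = False                          # relay initially relaxed
x, _ = step(x, s=True, r=False)    # push SB: the relay latches
assert x is True
x, _ = step(x, s=False, r=False)   # release SB: self-holding through contact x
assert x is True
x, _ = step(x, s=False, r=True)    # push RB: the relay releases
assert x is False
```

The second step shows the memory effect: after SB is released, the state xt is exactly the extra piece of information needed to determine the output.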
It does not matter how we denote this information: A (or on) for 1 and B (or off) for 0. If we know that the state of the relay is A we can determine the output evolution, provided we know the inputs.
If we are doing experiments with this physical system on time interval
[t0,t1] for any t0, t1, a set S of input-output pairs (u,y)=(u [t 0,t 1 ] , y [t 0,t 1 ] ) can be
observed,
S = {(u [t 0,t 1] , y [t 0,t 1] )/ observed , ∀t 0 , t 1 ∈ R, ∀u [t 0,t 1] ∈ Ω} , (1.3.49)
The set S can describe the abstract system. It can be split into two subsets
depending whether a pair (u,y) is obtained having x t 0 equal to 0 or 1:
S0={(u,y)∈S/ if xt0=0}={(u,y)∈S/ if xt0=B}={(u,y)∈S/ if xt0=off} (1.3.50)
S1={(u,y)∈S/ if xt0=1}={(u,y)∈S/ if xt0=A}={(u,y)∈S/ if xt0=on} (1.3.51)
It can be proved that
S0 ∪ S1 = S ; S0 ∩ S1 = ∅ , (1.3.52)
as depicted in Fig. 1.3.6.
Also, inside any subset Si the input uniquely determines the output:
∀(u, ya) ∈ Si, ∀(u, yb) ∈ Si ⇒ ya ≡ yb , i = 0; 1 (1.3.53)
From this we understand that the initial state is a label which parametrizes the subsets Si ⊆ S as in (1.3.53).
This collection S will determine the behaviour of the black-box. For the same applied input u(t) we can get two outputs, y(t) = (1/2)u(t) or y(t) = (2/3)u(t). Our set of input-output pairs can be split into two subsets: S0 if they correspond to (1.3.54) and S1 if they correspond to (1.3.55).
Of course S0 ∪ S1 = S ; S0 ∩ S1 = ∅ .
If someone gave us an input u[t0,t1] we would not be able to say what the output is, because we have no idea from which subset, S0 or S1, to select the right pair. Some information is missing.
Suppose that the box cover face has been broken so we can have a look
inside the box as in Fig. 1.3.7.
Now we can understand why the two sets of input-output pairs (1.3.54),
(1.3.55) were obtained.
The box can be in two states depending on the switch status: opened or closed. We can define the box state by a variable x which takes two values denoted as: {off ; on} or {0 ; 1} or {A ; B}.
Now the subset S0 can be equivalently labelled by one of the marks: "off",
"0", "A" and the subset S1 by : "on", "1", "B" respectively.
It does not matter how the position of the switch is denoted (labelled). The
switch position will determine the state of the black-box.
The state is equivalently expressed by one of the variables:
x ∈ {off ; on} or x ∈ {0 ; 1} or x̃ ∈ {A ; B}
If someone gives us an input u[t0,t1] and an additional piece of information formulated as "the state is on", meaning x = on, or "the state is B", meaning x̃ = B, etc., we can uniquely determine the output y(t) = (2/3)u(t), selecting it from the subset S1.
With this example our intention is to point out that the system state appears as a way of parametrising the subsets of input-output pairs inside which one input uniquely determines one output.
We also wanted to point out that the state can be presented in different forms; the same input-output behaviour can have different state descriptions.
In this example the state of the system cannot be changed by the input, such a system being called state uncontrollable.
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.4. System State Concept; Dynamical Systems.
x̄01 = (4/3)·3 + (2/3)·9 = 10 V ; x̄02 = −(1/3)·3 − (2/3)·9 = −7 V/sec (with T = 1);
In this form of the state vector definition we can say that at the time moment t0 the system is in the state x̄0 = [10 −7]T, which is different from the state x0 = [3 9]T; but, applying the same input u[t0,t], we obtain the same output as was obtained in the case of x0 = [3 9]T.
The following important conclusion can be pointed out:
The same input-output behaviour of an oriented system can be obtained by defining the state vector in different ways.
If x is the state vector related to an oriented system and a square matrix T is non-singular, det T ≠ 0, then the vector x̄ = Tx is also a state vector for the same oriented system. Both states x and x̄ will determine the same input-output behaviour.
In the above example, the two state relationships
x̄1 = (4/3)x1 + (2T/3)x2
x̄2 = −(1/3)x1 − (2T/3)x2
can be written in a matrix form
x̄ = Tx , where T = [ 4/3  2T/3 ; −1/3  −2T/3 ] , det T = −2T/3 ≠ 0. (1.4.4)
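The numerical example can be reproduced exactly with rational arithmetic (a small sketch, taking the time constant T = 1 as in the example above):

```python
from fractions import Fraction as F

# Exact check of x_bar = T x and det T from (1.4.4), with T = 1.
Tmat = [[F(4, 3), F(2, 3)],
        [F(-1, 3), F(-2, 3)]]

def matvec(M, v):
    # 2x2 matrix times 2-vector.
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

x0 = [F(3), F(9)]
assert matvec(Tmat, x0) == [F(10), F(-7)]   # x_bar_0 = [10 V, -7 V/sec]

det = Tmat[0][0] * Tmat[1][1] - Tmat[0][1] * Tmat[1][0]
assert det == F(-2, 3)                      # det T = -2T/3, nonsingular
```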
This variable will be the output of the oriented system we are defining as in Fig. 1.4.2. The input is the thickness realised at the flap position, point A, and we shall denote it by the variable u(t).
The distance between points A and B is d. One piece of fuel passing from A to B will take a period of time τ = d/v.
Figure no. 1.4.1. (conveyor belt carrying dust fuel (coal) at speed v; a controlled flap sets the thickness u(t) at point A and the thickness y(t) is measured at point B, a distance d away). Figure no. 1.4.2. (pure time delay element: y(t) = u(t − τ); in the complex domain, U(s) → e^(−τs) → Y(s)).
The input-output relation is expressed by the equation,
y(t) = u(t − τ) (1.4.5)
We can read this relation as: the output at the time moment t equals the value the input u(t) had τ seconds ago. Such a dependence is illustrated in the diagram of Fig. 1.4.2. This is a so-called functional equation.
Now suppose an input u[t0,t] is given. Can we determine the output y(t) for any t ≥ t0? What do we need in addition to do this? Looking at the principle diagram of Fig. 1.4.1. or at the relation (1.4.5), we understand that in addition we need to know all the thicknesses along the belt between the points A and B, or in other words all the values the input u(t) had during the time interval [t0 − τ, t0).
This collection of information constitutes the system state at the time moment t0 and it will be denoted by x0.
So the state at the time moment t0, (t0, x0), denoted for short as x0 = x(t0) = xt0, is a set containing an infinite number of elements
x0 = xt0 = x(t0) = { u(θ), θ ∈ [t0 − τ, t0) } = u[t0−τ,t0) (1.4.6)
Because of this, the system has infinite dimension.
At any time t the state is (t, x) = x(t), defined by
x(t) = { u(θ), θ ∈ [t − τ, t) } = u[t−τ,t) (1.4.7)
All these intuitive observations can be given a mathematical support by applying the Laplace transform to the input-output relation (1.4.5).
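In discrete time the infinite-dimensional state u[t−τ,t) becomes a finite buffer of the last n = τ/dt input samples, which can be sketched directly (the sample values below are illustrative):

```python
from collections import deque

# Discrete-time sketch of the pure delay y(t) = u(t - tau): the state is
# the buffer of the last n = tau/dt input samples, cf. (1.4.6)-(1.4.7).
class PureDelay:
    def __init__(self, n_samples, initial_state):
        # initial_state plays the role of x0 = u over [t0 - tau, t0).
        self.state = deque(initial_state, maxlen=n_samples)

    def step(self, u):
        y = self.state[0]        # oldest stored sample: u(t - tau)
        self.state.append(u)     # maxlen drops the oldest automatically
        return y

d = PureDelay(n_samples=3, initial_state=[1.0, 2.0, 3.0])
outputs = [d.step(u) for u in [4.0, 5.0, 6.0]]
assert outputs == [1.0, 2.0, 3.0]           # first the stored past appears
assert list(d.state) == [4.0, 5.0, 6.0]     # then the new input fills the state
```

Until the initial buffer has drained, the output is the free response determined entirely by the initial state; afterwards it is the forced response driven by the new input.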
We remember that
L{u(t − τ)} = L{u(t − τ)1(t)} = e^(−τs) [U(s) + ∫[−τ,0] u(t)e^(−st) dt] (1.4.8)
so the Laplace transform of the output is
Y(s) = e^(−τs) U(s) + e^(−τs) ∫[−τ,0] u(t)e^(−st) dt = Yf(s) + Yl(s) (1.4.9)
where Yf(s) = e^(−τs) U(s) ⇒ yf(t) = η(t, 0, 0, u[0,t]) (1.4.10)
is the forced response, which depends on the input u(t) only; of course, here it depends on the Laplace transform U(s), which contains the input values u(θ) for any θ ≥ 0. We paid for the simplicity of the Laplace transform instrument with the restriction t0 = 0.
Yl(s) = e^(−τs) ∫[−τ,0] u(t)e^(−st) dt (1.4.11)
is the free response, that is the output response when the input is zero.
By zero input we must mean
u(t) ≡ 0 ∀t ≥ 0 ⇔ U(s) ≡ 0 ∀s
from the convergence domain of U(s).
The free response depends only on the initial state (here at the initial time moment t0 = 0) and, as we can see from (1.4.11), it depends on all the values
u(θ) ∀θ ∈ [−τ, 0) ⇔ u[−τ,0)
so it looks natural to choose the initial state as
x0 = x(0) = u[−τ,0) .
Now we can interpret the free response from (1.4.11) as
y l (t) = η(t, 0, x 0 , 0 [0,t) ) . (1.4.12)
From (1.4.10), (1.4.12) the general response ( the time image of (1.4.9) )
can be expressed as an input-initial state-output relation
y(t) = y f (t) + y l (t) = η(t, 0, 0, u [0,t] ) + η(t, 0, x 0 , 0 [0,t) ) = η(t, 0, x 0 , u [0,t) ) . (1.4.13)
x(t2) = e^(−(t2−t1)/T) · [e^(−(t1−t0)/T) x(t0) + (K1/T) ∫[t0,t1] e^(−(t1−τ)/T) u(τ)dτ] + (K1/T) ∫[t1,t2] e^(−(t2−τ)/T) u(τ)dτ
(the bracket being x(t1))
x(t2) = e^(−(t2−t1)/T) x(t1) + (K1/T) ∫[t1,t2] e^(−(t2−τ)/T) u(τ)dτ .
4. The causality condition: because in (1.4.22) u(τ) appears inside the integral from t0 to t, x(t) is independent of u(τ) ∀τ > t.
-------------------
Before, we said as a general statement that the input affects the state and the state influences the output. However, there are systems in which the inputs do not influence the state, or some components of the state vector.
Conversely, there are systems in which the outputs, or some of the outputs, are not influenced by the state. Such systems are called uncontrollable and unobservable, respectively; more will be said about them later on.
In Ex. 1.3.4., the black box system, the physical object is state uncontrollable because no admitted input can make the switch change its position. If, for example, the wire to the output were broken, then such a system would be unobservable.
A state that is both uncontrollable and unobservable cannot be detected by any experiment and has no physical meaning.
F(x1, x2, ..., xn, t0, x(t0)) = 0 (1.4.26)
where a given (known) input was supposed. Simpler expressions are obtained if the input is constant for all t.
If the state vector components are the output and its (n−1) time derivatives, the state space is called phase space and the trajectory in phase space is called phase trajectory.
The trajectory in state space can easier be obtained directly from state
equations, a system of first order differential equations. The plot can efficiently
be exploited for n=2 in state plane or phase plane.
For a given initial state (t0, x0), where we have denoted x0 = x(t0), only one trajectory is obtained. For different initial conditions a family of trajectories, called the state portrait or phase portrait, is obtained.
Because of the uniqueness condition satisfied by the i-is-s relation, for a given input u_[t0,t], one and only one trajectory passes through each point in state space and exists for all finite t ≥ t0. As a consequence, the state trajectories do not cross one another.
Supposing that λ1 < 0 and λ2 < 0, the time-trajectories of the two components are plotted in Fig. 1.4.3. and the corresponding state portrait in Fig. 1.4.4.
Eliminating the variable t from (1.4.28) we obtain
x1/x1(t0) = e^{λ1(t-t0)} ; x2/x2(t0) = e^{λ2(t-t0)} ⇒ [x1/x1(t0)]^{1/λ1} = [x2/x2(t0)]^{1/λ2} ⇔
x2 = x2(t0)·[x1/x1(t0)]^{λ2/λ1} ⇔ F(x1, x2, x0) = 0 . (1.4.29)
The same expression (1.4.29) can be obtained directly from the differential equations (1.4.27),
dx1/dt = λ1·x1 ; dx2/dt = λ2·x2 ⇒ dx2/dx1 = (λ2·x2)/(λ1·x1) ⇒ x2 = x2(t0)·[x1/x1(t0)]^{λ2/λ1} (1.4.30)
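Relation (1.4.29)/(1.4.30) says that every point of a trajectory satisfies the same algebraic curve; a minimal numerical sketch with illustrative eigenvalues λ1 = -1, λ2 = -0.5 and initial state (2, 3):

```python
import math

# Decoupled system x1' = l1*x1, x2' = l2*x2 (illustrative values);
# along any trajectory x2 = x2(t0)*(x1/x1(t0))**(l2/l1), per (1.4.29)/(1.4.30)
l1, l2 = -1.0, -0.5
x10, x20 = 2.0, 3.0

for t in (0.3, 1.0, 2.7):
    x1 = x10 * math.exp(l1 * t)       # component along the first mode
    x2 = x20 * math.exp(l2 * t)       # component along the second mode
    predicted = x20 * (x1 / x10) ** (l2 / l1)
    print(abs(x2 - predicted) < 1e-12)
```

All three time samples lie on the same phase trajectory, so t has indeed been eliminated.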
Figure no. 1.4.3. (time responses x1(t), x2(t), with time constants T1 = -1/λ1, T2 = -1/λ2). Figure no. 1.4.4. (state portrait for λ1 < 0, λ2 < 0, λ1 < λ2).
In Fig. 1.4.5. the state portrait is shown for the case λ1<0, λ2>0.
Figure no. 1.4.5. (time responses x1(t), x2(t) and the state portrait for λ1 < 0, λ2 > 0).
The input-initial state-output relation (i-is-o relation) determines the output at a time t,
y(t) = η(t, t0, x0, u_[t0,t]) . (1.4.31)
In the case of example Ex. 1.2.1., the relation
y(t) = K2·e^{-(t-t0)/T}·x0 + (K1·K2/T)·∫_{t0}^{t} e^{-(t-τ)/T} u(τ)dτ
is an i-is-o relation. It does not satisfy the consistency property, since y(t0) = K2·x0 ≠ x0, so it cannot be an i-is-s relation.
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.5. Examples of Dynamical Systems.
In this case:
u(t) and y(t) are scalars ,
the matrix B(t) degenerates to a column vector b(t),
the matrix C(t) degenerates to a row vector cT(t) and
the matrix D(t) degenerates to a scalar d(t).
If all these matrices do not depend on t (they are constant), the system is called a linear time-invariant (dynamical) system (LTIS), having the form,
S: ẋ = Ax + Bu ; y = Cx + Du (1.5.8)
We observe that in any form of the state equations the time derivatives of the input do not appear.
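A minimal numerical sketch of (1.5.8): forward-Euler integration of ẋ = Ax + bu for an illustrative stable system (the A, b, c, d values below are assumptions, not taken from the text):

```python
import numpy as np

# Minimal LTI state-space simulation sketch, x' = A x + b u, y = c x + d u,
# integrated with forward Euler; all numeric values are illustrative
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
d = 0.0

dt, n_steps = 1e-3, 5000
x = np.zeros(2)          # zero initial state
u = 1.0                  # unit step input
for _ in range(n_steps):
    x = x + dt * (A @ x + b * u)
y = c @ x + d * u
# A has eigenvalues -1 and -2, so after 5 s the step response has
# settled near the static gain c @ inv(-A) @ b = 0.5
print(float(y))
```

After a few time constants the simulated output approaches the static gain, as expected for a stable LTI system under a constant input.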
x(t) = x(t0) + ∫_{t0}^{t} ẋ(τ)dτ (1.5.11)
ẋ(t) = x(t - τ) - u(t) — time delay equation (1.5.12)
The initial state at the time t0 is x_[t0-τ, t0], even if the variable x in the differential equation is a scalar one.
-Stochastic Systems.
All the above systems are called deterministic systems (at any time any variable is well defined). Stochastic systems, in contrast, are described by means of probability theory and the theory of random functions.
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.6. General Properties of Dynamical Systems.
(figure: series R-L-C circuit driven by the voltage u, with voltages uR, uL, uC across its elements)
Supposing we are interested in i, uR, uL, uC: the input is u and the output is the vector
y = [i  uR  uL  uC]^T , r = 4, p = 1 .
We can determine the state equations by applying the Kirchhoff theorems,
uC = (1/C)·q ; i = q̇ ; uR = R·q̇ ; uL = L·q̈ , where q̈ = di/dt ;
uR + uL + uC = u
Some variables, but not all of them, can be eliminated (only those appearing in algebraic equations).
Choosing x1 = q, x2 = i = q̇ (so that q̈ = ẋ2), we obtain
ẋ1 = x2
ẋ2 = -(1/(LC))·x1 - (R/L)·x2 + (1/L)·u
y1 = x2
y2 = R·x2
y3 = -(1/C)·x1 - R·x2 + u
y4 = (1/C)·x1
These are the state equations in the form obtained when the electrical charge and the current are chosen as state components. We can write them in matrix form:
x = [x1; x2] , ẋ = Ax + bu , y = Cx + du ,
A = [ 0        1    ]     b = [ 0   ]     C = [ 0     1  ]     d = [0]
    [ -1/(LC)  -R/L ]         [ 1/L ]         [ 0     R  ]         [0]
                                               [ -1/C  -R ]         [1]
                                               [ 1/C   0  ]         [0]
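These matrices can be formed and sanity-checked numerically; the component values below (R = 100 Ω, L = 0.5 H, C = 5 F) are assumed for illustration, and the check is Kirchhoff's voltage law y2 + y3 + y4 = uR + uL + uC = u:

```python
import numpy as np

# State matrices of the series RLC circuit above, for illustrative
# (assumed) values R = 100 ohm, L = 0.5 H, C = 5 F
R, L, C = 100.0, 0.5, 5.0
A = np.array([[0.0, 1.0], [-1.0 / (L * C), -R / L]])
b = np.array([0.0, 1.0 / L])
Cm = np.array([[0.0, 1.0],          # y1 = i
               [0.0, R],            # y2 = uR
               [-1.0 / C, -R],      # y3 = uL (= u - uR - uC)
               [1.0 / C, 0.0]])     # y4 = uC
d = np.array([0.0, 0.0, 1.0, 0.0])

x = np.array([5.0, 10.0])   # q = 5 C, i = 10 A (arbitrary state)
u = 20.0                    # applied voltage, arbitrary
y = Cm @ x + d * u
# Kirchhoff's voltage law: uR + uL + uC must equal u
print(abs(y[1] + y[2] + y[3] - u) < 1e-12)
```

The voltage balance holds for any state and input, confirming the output equations above.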
Ā = [ 0      1/(RC) ]     b̄ = [ 0   ]     C̄ = [ 0    1/R ]     d̄ = [0]
    [ -R/L   -R/L   ]          [ R/L ]          [ 0    1   ]          [0]
                                                 [ -1   -1  ]          [1]
                                                 [ 1    0   ]          [0]
S = S(A, b, C, d, X) and S̄ = S(Ā, b̄, C̄, d̄, X̄), with x̄1 = (1/C)·x1, x̄2 = R·x2, that is
x̄ = T·x , T = [ 1/C  0 ]
               [ 0    R ] , det(T) = R/C ≠ 0 .
This means that the two abstract systems are equivalent: for any equivalent initial states we obtain the same output. We can pass from one system to the other with the following relations:
Ā = T·A·T^{-1} ; b̄ = T·b ; C̄ = C·T^{-1} ; d̄ = d
ẋ = Ax + bu , y = Cx + du , where
A = [ 0        1    ]     b = [ 0   ]     C = [ 0     1  ]     d = [0]
    [ -1/(LC)  -R/L ]         [ 1/L ]         [ 0     R  ]         [0]
                                               [ -1/C  -R ]         [1]
                                               [ 1/C   0  ]         [0]
Let (x0, t0) be
x0 = [ 5 (C) ; 10 (A) ] , and for a given U_[t0,t] ⇒ Y_[t0,t] .
The equivalent initial state of the transformed system is
x̄0 = T·x0 = [ (1/C)·5 ; R·10 ] = [ 1 (V) ; 1000 (V) ] , if C = 5 F and R = 100 Ω.
We can prove that, for this example, the following relations are true:
Ā = T·A·T^{-1} ; b̄ = T·b ; C̄ = C·T^{-1} ; d̄ = d
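The relations above are easy to verify numerically; a sketch using the same assumed component values:

```python
import numpy as np

# Numerical check of the similarity relations Abar = T A T^-1, bbar = T b,
# Cbar = C T^-1, with T = diag(1/C, R); R, L, C values are illustrative
R, L, C = 100.0, 0.5, 5.0
A = np.array([[0.0, 1.0], [-1.0 / (L * C), -R / L]])
b = np.array([0.0, 1.0 / L])
Cm = np.array([[0.0, 1.0], [0.0, R], [-1.0 / C, -R], [1.0 / C, 0.0]])
T = np.diag([1.0 / C, R])
Tinv = np.linalg.inv(T)

Abar = T @ A @ Tinv
bbar = T @ b
Cbar = Cm @ Tinv

# compare with the closed-form transformed matrices derived in the text
print(np.allclose(Abar, [[0.0, 1.0 / (R * C)], [-R / L, -R / L]]))
print(np.allclose(bbar, [0.0, R / L]))
print(np.allclose(Cbar, [[0.0, 1.0 / R], [0.0, 1.0], [-1.0, -1.0], [1.0, 0.0]]))
```

All three checks succeed, confirming the transformed matrices stated above.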
The system described by the relation y = 2u + 4 is not linear. We can prove this: for
ya = 2ua + 4 , yb = 2ub + 4
we obtain, for the input α·ua + β·ub,
y = 2(α·ua + β·ub) + 4 = 2α·ua + 2β·ub + 4 ≠ α·ya + β·yb .
The response of a linear system for zero initial state and zero input is just zero.
The maximum order of the output derivative will determine the order of
the system.
Three types of systems can be distinguished depending on the ratio
between m and n:
1. m<n : the system is called strictly proper (causal) system.
2. m=n : the system is called proper system.
3. m>n : the system is called improper system.
Improper systems are not physically realisable. They can represent an ideally desired mathematical behaviour of some physical objects, but one impossible to obtain in the real world.
For example, the system described by the IOR y(t) = du(t)/dt, that is
n = 0, m = 1, a0 = 1, b0 = 0, b1 = 1, represents a derivative element. It cannot process any input from the set Ω of admissible inputs if Ω contains functions with discontinuities or functions that are not differentiable.
A rigorous mathematical treatment can be performed by using the theory of distributions, but those results are not essential for our object of study. Because of that, our attention will concentrate on strictly proper and proper systems only.
The IOR (2.1.1) will be written as
Σ_{k=0}^{n} a_k·y^{(k)}(t) = Σ_{k=0}^{n} b_k·u^{(k)}(t) , an ≠ 0 (2.1.2)
If it is mentioned that bn = 0, this means the system is strictly proper. Also, if m < n we can consider bn = b_{n-1} = ... = b_{m+1} = 0.
The input-output relation (IOR) can be very easily expressed in the complex domain by applying the Laplace transform to the relation (2.1.2).
As we remember,
L{y^{(k)}(t)} = s^k·Y(s) - Σ_{i=0}^{k-1} y^{(k-i-1)}(0+)·s^i , k ≥ 1 (2.1.3)
L{u^{(k)}(t)} = s^k·U(s) - Σ_{i=0}^{k-1} u^{(k-i-1)}(0+)·s^i , k ≥ 1 (2.1.4)
where we have denoted by
Y(s) = L{ y(t)} , U(s) = L{ u(t)} (2.1.5)
the Laplace transforms of output and input respectively. For the moment the
convergence abscissas are not mentioned.
In (2.1.3), (2.1.4) the initial conditions are defined as right-side limits y^{(k-i-1)}(0+), u^{(k-i-1)}(0+).
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI). 2.1. Input-Output Description of SISO LTI.
In the case of the SISO LTI, as we can observe from (2.1.6) ... (2.1.11) ,
the transfer function is
H(s) = M(s)/L(s) = (bn·s^n + b_{n-1}·s^{n-1} + ... + b1·s + b0) / (an·s^n + a_{n-1}·s^{n-1} + ... + a1·s + a0) (2.1.14)
always a ratio of two polynomials (a rational function).
Sometimes we denote this SISO LTI system as
S = TF{M, L} ⇔ TF (2.1.15)
There are systems to which a transfer function can be defined but it is not
a rational function, as for example systems with time delay or systems defined by
partial differential equations.
If the polynomials L(s) and M(s) have no common factor (they are coprime), their ratio expresses the so-called nominal transfer function (NTF).
The transfer function expresses only the input-output (i-o) behaviour of a system, which is just the forced response, that is, the system response under zero initial conditions.
The same input-output behaviour can be assured by a family of transfer functions if the numerator and the denominator have a common factor, as in
M(s) = M'(s)·P(s) , L(s) = L'(s)·P(s) , (2.1.16)
then
H(s) = M'(s)·P(s) / (L'(s)·P(s)) ⇒ H(s) = M'(s)/L'(s) . (2.1.17)
If the two polynomials M'(s) and L'(s) are coprime,
gcd{M'(s), L'(s)} = 1 ,
then the last expression of H(s) is the reduced form of the transfer function (RTF). In such a case, when M(s) and L(s) have common factors, some properties such as controllability and/or observability are not satisfied.
The order of a system is expressed by the degree of the denominator
polynomial of the transfer function that is n=degree{L(s)}.
It results that
degree{L'(s)}=n'<n=degree{L(s)}, (2.1.18)
so systems can have different orders for their internal description but all of them
will have the same forced response.
If common factors appear in a transfer function and are cancelled, then the i-o behaviour (the forced response) can be described by lower order abstract systems, the order being the degree of the transfer function denominator after simplification; the is-o behaviour (the free response), however, still remains of the order the transfer function had before the simplification.
H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s) , n' = 1 (order = 1) .
This system can be denoted as
S' = TF{M', L'} = TF{(s + 1), (s + 4)} ,
which is a first order one; its general solution depends on only one initial condition.
The forced response of S' is,
Y'f(s) = [(s + 1)/(s + 4)]·U(s)
identical with the forced response of S.
We can say that the system S' expresses only some aspects of the internal behaviour of S, that is, only the modes (the movements) that depend on the input and affect the output.
This is the so-called completely controllable and completely observable part of the system S.
If M(s) and L(s) are coprime, the internal behaviour is completely revealed by the i-o behaviour of the system, that is, by the transfer function.
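The identity of the forced responses can be illustrated numerically; the sketch below assumes a common factor (s + 2) (chosen arbitrarily, not from the text) and compares the step responses of H(s) = (s+1)(s+2)/[(s+4)(s+2)] and H'(s) = (s+1)/(s+4):

```python
import numpy as np
from scipy import signal

# Forced (zero-initial-state) step responses of a transfer function with a
# common factor and of its reduced form; the polynomials are illustrative:
#   H(s)  = (s + 1)(s + 2) / ((s + 4)(s + 2))   (order 2)
#   H'(s) = (s + 1) / (s + 4)                   (order 1)
num_full = np.polymul([1, 1], [1, 2])   # (s+1)(s+2)
den_full = np.polymul([1, 4], [1, 2])   # (s+4)(s+2)

t = np.linspace(0, 5, 500)
_, y_full = signal.step((num_full, den_full), T=t)
_, y_red = signal.step(([1, 1], [1, 4]), T=t)

# the forced responses coincide, although the internal orders differ
print(np.allclose(y_full, y_red, atol=1e-6))
```

This is exactly the statement above: the reduced system reproduces the forced response, while the internal (free-response) order of the unreduced system is still 2.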
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI). 2.2. State Space Description of SISO LTI.
Some forms of SS have specific advantages that are useful for different
applications.
Figure no. 2.2.1. (relations between representations: the SS → TF transform is univocal, S = SS(A,b,c,d,x) of order n giving H(s) = M(s)/L(s), dg{L(s)} = n; the TF → SS transform yields one of many state realisations, e.g. S1 = SS(A1,b1,c1,d1,x1) of order n, related to S by similarity transforms — input-output and input-state-output equivalent systems; reduction of the TF by simplifying the common factors gives H'(s) = M'(s)/L'(s), dg{L'(s)} = n', whose state realisation S' = SS(A',b',c',d',x') of order n' < n is input-output equivalent only.)
In Fig.2.2.2. an example of two such systems is presented.
Figure no. 2.2.2. (step and free responses computed in Matlab. Transfer function:
H(s) = (100s^2 + 2s + 1)/(10000s^3 + 300s^2 + 102s + 1),
zero/pole/gain form: H(s) = 0.01(s^2 + 0.02s + 0.01)/[(s + 0.01)(s^2 + 0.02s + 0.01)],
with a third order state realisation (A, b, c, d); the reduced transfer function is
H'(s) = 1/(100s + 1),
with the first order realisation a = -0.01, b = 0.125, c = 0.08, d = 0. The step responses of H(s) and H'(s) coincide, while the free responses, for nonzero initial states, differ.)
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI). 2.3. Input-Output Description of MIMO LTI.
H_ji(s) = Y_j(s)/U_i(s) , under zero initial conditions and with U_k(s) ≡ 0 ∀s if k ≠ i (2.3.13)
This rational matrix is an operator if it is the same for any expression of U(s); it does not depend on the input.
Example.
Suppose we have a system with 2 inputs and 2 outputs described by a system of 2 differential equations,
2·y1^(4) + 3·y1 + 6·y2^(1) + 3·y2 = 3·u1^(3) + u1^(1) + 5·u2 (j = 1) , r = 2
3·y1^(2) + 2·y1^(1) + 5·y1 + 8·y2 = 2·u1^(1) + 5·u1 + 3·u2^(2) + 4·u2^(1) + 5·u2 (j = 2) , p = 2
whose Laplace transform in z.i.c. is,
(2s^4 + 3)·Y1(s) + (6s + 3)·Y2(s) = (3s^3 + s)·U1(s) + 5·U2(s)
(3s^2 + 2s + 5)·Y1(s) + 8·Y2(s) = (2s + 5)·U1(s) + (3s^2 + 4s + 5)·U2(s)
⇒
L(s) = [ 2s^4 + 3        6s + 3 ]     M(s) = [ 3s^3 + s   5             ]
       [ 3s^2 + 2s + 5   8      ] ,          [ 2s + 5     3s^2 + 4s + 5 ] ⇒
the TM is
H(s) = L^{-1}(s)·M(s)
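The inversion H(s) = L^{-1}(s)·M(s) can be carried out symbolically; a minimal sketch using sympy (assuming symbolic inversion is acceptable for this 2×2 case):

```python
import sympy as sp

# Transfer matrix H(s) = L(s)^-1 * M(s) for the two-equation example above
s = sp.symbols('s')
Lm = sp.Matrix([[2*s**4 + 3, 6*s + 3],
                [3*s**2 + 2*s + 5, 8]])
Mm = sp.Matrix([[3*s**3 + s, 5],
                [2*s + 5, 3*s**2 + 4*s + 5]])

H = sp.simplify(Lm.inv() * Mm)   # 2x2 matrix of rational functions of s

# sanity check: L(s) * H(s) must reproduce M(s)
print(sp.simplify(Lm * H - Mm) == sp.zeros(2, 2))
```

The residual L(s)·H(s) - M(s) simplifies to the zero matrix, confirming the computed transfer matrix.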
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI). 2.4. Response of LTI Systems.
If the series φ(x) = Σ_{k=0}^{∞} x^k/k! is uniformly convergent, then the matrix function
Φ(t) = φ(x)|_{x→At} = Σ_{k=0}^{∞} A^k·t^k/k! = e^{At}
has the Laplace transform
Φ(s) = L{e^{At}} = L{ Σ_{k=0}^{∞} A^k·t^k/k! } = s^{-1}·I + s^{-2}·A + s^{-3}·A^2 + ...
Φ(s) = Σ_{k=0}^{∞} A^k·s^{-(k+1)}
where we applied the formula,
L{t^k/k!} = 1/s^{k+1} = s^{-(k+1)} .
Because of the identity
(sI - A)·Φ(s) = (sI - A)·(s^{-1}·I + s^{-2}·A + s^{-3}·A^2 + ...) = I - s^{-1}·A + s^{-1}·A - s^{-2}·A^2 + s^{-2}·A^2 - ... = I
we have
(sI - A)·Φ(s) = I
and
Φ(s)·(sI - A) = I ⇒ Φ(s) = (sI - A)^{-1} (2.4.26)
2. The identity property.
Φ(0)=I, where Φ(0)=Φ(t)|t=0 (2.4.26)
3. The transition property.
Φ(t1+t2)=Φ(t1)Φ(t2)=Φ(t2)Φ(t1) ∀t 1 , t 2 (2.4.27)
4. The determinant property.
The transition matrix is a non-singular matrix.
5. The inversion property.
Φ(-t)=Φ-1(t) (2.4.28)
6. The transition matrix is the solution of the matrix differential equation,
Φ̇(t) = A·Φ(t) , Φ(0) = I , Φ̇(0) = A (2.4.29)
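Properties 2-5 can be verified numerically for a concrete matrix; the A below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm

# Checking the transition matrix properties for an illustrative A
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t1, t2 = 0.7, 1.3
Phi = lambda t: expm(A * t)  # Phi(t) = e^{A t}

print(np.allclose(Phi(0.0), np.eye(2)))                 # identity property
print(np.allclose(Phi(t1 + t2), Phi(t1) @ Phi(t2)))     # transition property
print(np.allclose(Phi(-t1), np.linalg.inv(Phi(t1))))    # inversion property
print(abs(np.linalg.det(Phi(t1))) > 0.0)                # non-singularity
```

All four checks hold for any square A, since they follow directly from the exponential series.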
Φ(t) ≈ Φ_p(t) = Σ_{k=0}^{p} A^k·t^k/k! (2.4.41)
If the dimension of the matrix A is very large, we may encounter a false convergence. It is better to express t = qτ, q ∈ N, with τ small enough, so that
Φ(t) = Φ(qτ) = (Φ(τ))^q ≈ (Φ_p(τ))^q (2.4.42)
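A minimal sketch of (2.4.42), with illustrative A, t, q and p, compared against a reference matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Truncated-series evaluation of Phi(t) with the scaling trick (2.4.42):
# t = q*tau with tau small, Phi(t) ~= (Phi_p(tau))**q
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t, q, p = 2.0, 64, 8
tau = t / q

# Phi_p(tau) = sum_{k=0}^{p} A^k tau^k / k!, accumulated term by term
Phi_p = np.zeros_like(A)
term = np.eye(2)
for k in range(p + 1):
    Phi_p = Phi_p + term
    term = term @ (A * tau) / (k + 1)

Phi_t = np.linalg.matrix_power(Phi_p, q)
print(np.allclose(Phi_t, expm(A * t), atol=1e-10))
```

Because tau is small, the truncated series converges quickly and the q-th power recovers Phi(t) accurately.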
This is called the general time response of a LTI, when the initial time
moment is now t 0 ≠ 0 . The formula is obtained by using the transition property of
the transition matrix.
We saw that using the Laplace transform the state response was,
x(t) = Φ(t)·x(0) + ∫_{0}^{t} Φ(t - τ)Bu(τ)dτ , ∀t ≥ 0 (2.4.43)
Substituting t = t0 in (2.4.43), we obtain
x(t0) = Φ(t0)·x(0) + ∫_{0}^{t0} Φ(t0 - τ)Bu(τ)dτ . (2.4.44)
Taking into consideration (2.4.27),
Φ(t 0 − τ) = Φ(t 0 )Φ(−τ) , (2.4.45)
and (2.4.28)
Φ(-t)=Φ-1(t) (2.4.46)
we can extract the vector x(0) from (2.4.44) by multiplying (2.4.44) on the left by Φ^{-1}(t0) = Φ(-t0):
x(0) = Φ(-t0)·x(t0) - ∫_{0}^{t0} Φ(-τ)Bu(τ)dτ
and, by substituting x(0) in the relation (2.4.43), we have,
x(t) = Φ(t - t0)·x(t0) - Φ(t)·∫_{0}^{t0} Φ(-τ)Bu(τ)dτ + ∫_{0}^{t0} Φ(t - τ)Bu(τ)dτ + ∫_{t0}^{t} Φ(t - τ)Bu(τ)dτ
x(t) = Φ(t - t0)·x(t0) + ∫_{t0}^{t} Φ(t - τ)Bu(τ)dτ
which is the general time response of the state vector.
Example.
H(s) = K/(Ts + 1) , U(s) = (1/s)·Δu ⇒
y(t) = Σ Res{ [K/(Ts + 1)]·(Δu/s)·e^{st} } , with the poles s = 0 and s = -1/T ,
y(t) = K·(1 - e^{-t/T})·Δu .
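The closed form y(t) = K(1 - e^{-t/T})Δu can be cross-checked against a numerically computed step response; K, T and Δu below are illustrative:

```python
import numpy as np
from scipy import signal

# Step response of H(s) = K/(T s + 1) against the closed form
# y(t) = K*(1 - exp(-t/T))*du, with illustrative K, T, du
K, T, du = 2.0, 0.5, 3.0

t = np.linspace(0, 3, 300)
_, y = signal.step(([K], [T, 1.0]), T=t)
y = du * y                       # scale the unit step response by du

y_exact = K * (1.0 - np.exp(-t / T)) * du
print(np.allclose(y, y_exact, atol=1e-6))
```

The numerical and analytical responses coincide over the whole time grid.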
Figure no. 2.4.1. (input u^a(t): initial steady state u_st^a(t0^a), step of size Δu, final value u^a(∞); output y^a(t) = L^{-1}{H(s)U(s)}: initial steady state y_st^a(t0^a), final value y^a(∞), with the amplitude A and time constant T marked on the response.)
u_st^a(t0^a) is the steady state of the input observed at the absolute time t0^a,
3. SYSTEM CONNECTIONS.
3.1. Connection Problem Statement.
3.1.1. Continuous Time Nonlinear System (CNS).
Si: ẋ^i(t) = f^i(x^i(t), u^i(t), t); x^i(t0) = x^i_0 , t ≥ t0 ∈ T ⊆ R
    y^i(t) = g^i(x^i(t), u^i(t), t); Ω^i = { u^i | u^i: T → U^i; T ⊆ R; u^i admitted }
(3.1.2)
3.1.2. Linear Time Invariant Continuous System (LTIC).
It is a particular case of CNS.
Si: ẋ^i(t) = A_i·x^i(t) + B_i·u^i(t); x^i(t0) = x^i_0 , t ≥ t0 ∈ T ⊆ R
    y^i(t) = C_i·x^i(t) + D_i·u^i(t); Ω^i = { u^i | u^i: T → U^i; T ⊆ R; u^i admitted }
(3.1.3)
3.1.3. Discrete Time Nonlinear System (DNS).
Si: x^i_{k+1} = f^i(x^i_k, u^i_k, k); x^i_{k0} = x^i_0 , k ≥ k0 ∈ T ⊆ Z
    y^i_k = g^i(x^i_k, u^i_k, k); Ω^i = { {u^i_k} | u^i_k: T → U^i; T ⊆ Z; {u^i_k} admitted }
(3.1.4)
3.1.4. Linear Time Invariant Discrete System (LTID).
It is a particular case of DNS.
Si: x^i_{k+1} = A_i·x^i_k + B_i·u^i_k; x^i_{k0} = x^i_0 , k ≥ k0 ∈ T ⊆ Z
    y^i_k = C_i·x^i_k + D_i·u^i_k; Ω^i = { {u^i_k} | u^i_k: T → U^i; T ⊆ Z; {u^i_k} admitted }
(3.1.5)
Let us consider that a set of subsystems as defined above constitutes a family of subsystems, denoted F_I,
F_I = {S_i, i ∈ I} . (3.1.6)
One can say that a subsystem S i (or generally a system) is defined (or
specified or that it exists) if the elements (Ω Ω i ; f i ; g i ; x i ) are specified.
From these elements all the other attributes, as discussed in Ch.1. and
Ch.2. , can be deduced or understood.
These all other attributes can be:
the system type (continuous, discrete, logical),
the sets T i ; U i ; Γ i ; Y i, etc.
A family of subsystems F_I builds up a connection if, in addition to the elements
S_i = S_i(Ω_i, f_i, g_i, x_i) ,
two other sets Rc and Cc are defined, having the following meaning:
1. Rc = the set of connection relations
2. Cc = the set of connection conditions.
Rc represents a set of algebraic relations between the variables u^i, y^i, i ∈ I, and other new variables introduced by these relations.
These new variables can be new inputs (causes), denoted by v_j, j ∈ J, or new outputs, denoted by w_l, l ∈ L, where J, L represent index sets.
Cc represents a set of conditions which must be satisfied by the input and
output variables (ui, yi), i ∈ I , of each subsystem, and also by the new
variables v j , j ∈ J , w l , l ∈ L introduced through Rc .
These conditions refer to:
The physical meaning, clarifying whether the abstract systems are mathematical models of some physical oriented systems (objects) or pure abstract systems conceived (invented) in a theoretical synthesis procedure.
The number of input-output components, that is, whether the input and output variables are vectors or scalars.
The properties of the entities Ω i ; f i ; g i interpreted as functions or set of
functions, each of them defined on its observation domain.
The 3-tuple
S = {F_I; Rc; Cc}, F_I = {S_i, i ∈ I} (3.1.7)
constitutes a correct connection, or a possible connection, if S has the attributes of a dynamical system to which v_j, j ∈ J, are input variables and w_l, l ∈ L, are output variables.
The system S is called the equivalent system or the interconnected system
of the subsystems family FI , family interconnected by the pair
{Rc; Cc}.
Observation 1.
Any connection of subsystems is performed only through the input and
output variables of each component subsystem but not through the state
variables.
The set Rc does not contain relations in which the state vectors x^i, i ∈ I, or components of the vectors x^i, i ∈ I, appear.
Observation 2.
If only the behaviour of the interconnected system S with respect to the input variables (the forced response) is of interest, then each subsystem Si can be represented by its input-output relation.
For LTI (LTIC or LTID) systems, these input-output relations are the transfer matrices (for MIMO) or transfer functions (for SISO), denoted by Hi(s) for LTIC and by Hi(z) for LTID systems.
Reciprocally, if in an interconnected structure only transfer matrices (functions) appear, we have to understand that only the input-output behaviour (the forced response) is expected, or is enough, for that analysis.
Obviously, if the transfer matrix (function) of the equivalent
interconnected system S is given or is determined (calculated), we can deduce
different forms of the state equations starting from this transfer matrix
(function), which are called different realisations of the transfer matrix
(function) by state equations.
If this transfer matrix (function) is a nominal rational one (there are no common factors between numerators and denominator), then all these state equation realisations certainly express only the completely controllable and completely observable part of the interconnected dynamical system S, that is, all the state components of these state realisations are both controllable and observable.
If this transfer matrix (function) is not rational, or is not a nominal one (there are common factors between numerators and denominator), no one will guarantee that the state realisations will be free of state components of the system S which are only uncontrollable, only unobservable, or both uncontrollable and unobservable.
Observation 3.
If only the steady-state behaviour of the interconnected system S is of interest, then each subsystem Si can be represented by its steady-state mathematical model (static characteristics).
Notice that the steady-state equivalent model of the interconnected system S, evaluated based on the steady-state models of the subsystems Si, has a meaning if and only if it is possible to reach a steady state of the interconnected system within the allowed domains of input values.
This means the interconnected system S must be asymptotically stable.
The steady-state mathematical model of the interconnected system can be
obtained by using graphical methods when some subsystems are described by
graphical static characteristics, experimentally determined.
Generally, the process of deducing the mathematical model, of whichever type: complete (by state equations), input-output (in particular by transfer matrices) or steady-state (by static characteristics), of an interconnected system is called the connection solving process, or the reduction of the structure to an equivalent compact form.
Essentially, to solve a connection means to eliminate all the intermediate variables introduced by the algebraic relations from Rc or from F_I.
There are three fundamental types of connections:
1: Serial connection (cascade connection);
2: Parallel connection;
3: Feedback connection,
through which the majority of practical connections can be expressed.
3. SYSTEM CONNECTIONS. 3.2. Serial Connection.
x^1 (n1 × 1); u^1 (p1 × 1); y^1 (r1 × 1);   x^2 (n2 × 1); u^2 (p2 × 1); y^2 (r2 × 1)
Rc: u^2 = y^1 ⇒ (p2 = r1); u = u^1 ⇒ (p = p1); y = y^2 ⇒ (r = r2); (3.2.4)
Obviously, x^1, y^1, u^1, x^2, u^2, y^2 are vectors of time functions.
By eliminating the intermediate variables u^2, y^1 and using the notations u, y for u^1, y^2 respectively, ⇒
S: ẋ^1 = f^1(x^1, u, t)
   ẋ^2 = f^2(x^2, g^1(x^1, u, t), t)
   y = g^2(x^2, g^1(x^1, u, t), t)
denoting
Ψ1(x, u, t) = Ψ̃1(x^1, x^2, u, t) = f^1(x^1, u, t)
Ψ2(x, u, t) = Ψ̃2(x^1, x^2, u, t) = f^2(x^2, g^1(x^1, u, t), t)
g(x, u, t) = g̃(x^1, x^2, u, t) = g^2(x^2, g^1(x^1, u, t), t)
x = [x^1; x^2] , ((n1 + n2) × 1) ,
the equivalent interconnected system is expressed by the equations,
S: ẋ = f(x, u, t) , y = g(x, u, t) , f(x, u, t) = [Ψ1(x, u, t); Ψ2(x, u, t)] , S = S2·S1 (3.2.5)
One can observe that the order of the system, determined for differential systems by the state vector size, is n = n1 + n2.
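For two LTIC subsystems the serial connection can be assembled explicitly in state space by eliminating u2 = y1; a sketch with illustrative first-order subsystems, cross-checked against the product of the two transfer functions:

```python
import numpy as np
from scipy import signal

# Serial connection of two LTI subsystems in state-space form (S = S2*S1):
#   x1' = A1 x1 + B1 u ,  y1 = C1 x1 + D1 u
#   x2' = A2 x2 + B2 y1 , y  = C2 x2 + D2 y1
# The equivalent system has order n1 + n2; all numeric values are illustrative
A1, B1, C1, D1 = np.array([[-1.0]]), np.array([[2.0]]), np.array([[1.0]]), np.array([[0.0]])
A2, B2, C2, D2 = np.array([[-3.0]]), np.array([[1.0]]), np.array([[4.0]]), np.array([[0.5]])

A = np.block([[A1, np.zeros((1, 1))],
              [B2 @ C1, A2]])
B = np.vstack([B1, B2 @ D1])
C = np.hstack([D2 @ C1, C2])
D = D2 @ D1

# cross-check: the composed transfer function must equal H2(s)*H1(s)
num, den = signal.ss2tf(A, B, C, D)
n1, d1 = signal.ss2tf(A1, B1, C1, D1)
n2, d2 = signal.ss2tf(A2, B2, C2, D2)

s_vals = 1j * np.array([0.1, 1.0, 10.0])
H_comp = np.polyval(num[0], s_vals) / np.polyval(den, s_vals)
H_prod = (np.polyval(n1[0], s_vals) / np.polyval(d1, s_vals)) * \
         (np.polyval(n2[0], s_vals) / np.polyval(d2, s_vals))
print(np.allclose(H_comp, H_prod))
```

The block-triangular A shows directly how the state of S2 is driven by the output of S1, and the composed order is n1 + n2 = 2.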
Figure no. 3.2.4. (state diagram of the serial connection: the SD of S1 — gain K1, integrator 1/s with initial condition x1(0) and feedback -p1, state x1 = y1 — is connected through Rc, u2 = y1, to the SD of S2 — gain K2(z2 - p2), integrator 1/s with initial condition x2(0) and feedback -p2, state x2, plus the direct path K2, so that y = y2 = x2 + K2·u2.)
Based on this SD, the state equations of the serial interconnected system are deduced,
ẋ = Ax + bu , y = c^T·x + du , x = [x1; x2] , (3.2.15)
where,
A = [ -p1           0   ]     b = [ K1 ]     c = [ K2 ]     d = 0 .
    [ K2(z2 - p2)   -p2 ]         [ 0  ]         [ 1  ]
Because
X(s) = Φ(s)x(0) + Φ(s)bU(s); Φ(s) = [sI - A]^{-1}
the (ij) components Φ_ij(s) of the transition matrix can easily be determined based on relation (3.2.15).
By manipulating the SD, the matrix inversion operation is avoided. For more convenience, the SD from Fig. 3.2.4. is equivalently transformed into the SD from Fig. 3.2.5.
Figure no. 3.2.5. (equivalent SD: the block K1/(s + p1), with initial condition x1(0), produces x1 = y1 = u2 from u = u1; the block K2(z2 - p2)/(s + p2), with initial condition x2(0), produces x2 from u2; the output is y = y2 = K2·u2 + x2.)
If p1 ≠ p2 one obtains,
yl(t) = [K2(p1 - z2)/(p1 - p2)]·x1(0)·e^{-p1·t} + [ [K2(z2 - p2)/(p1 - p2)]·x1(0) + x2(0) ]·e^{-p2·t} . (3.2.21)
When p1 = p2, the inverse Laplace transform is applied in the same way, yielding
yl(t) = K2·e^{-p1·t}·x1(0) + K2(z2 - p1)·t·e^{-p1·t}·x1(0) + e^{-p1·t}·x2(0) .
Case 2: z2 ≠ p 2 .
If z2 ≠ p2, det P ≠ 0, the system is completely controllable, for any relation between z2 and p1. Qualitatively, this property can also be seen in the state diagrams of Fig. 3.2.4. or, more clearly, of Fig. 3.2.5., where it can be observed that the input u can modify the component x1 and, through it, if z2 ≠ p2, also the component x2 of the state vector.
This inspection of the state diagram will not guarantee the property of complete controllability, but it will categorically reveal the situation in which the system does not have the controllability property.
Case 3: z2 = p2.
In this case det(P) = 0 and the system is uncontrollable. If z2 = p2, the component x2 certainly cannot be modified by the input, so this state component is categorically uncontrollable, but the component x1 can be controlled by the input. In this example the dependence between x1 and u is given by
X1(s) = [K1/(s + p1)]·U(s) ,
which shows that x1 is modified by u.
One says that a system is completely controllable, det(P) ≠ 0, if and only if all the state vector components are controllable for any values of these components in the state space.
The assessment of the full rank of the matrix P through its determinant value will not indicate which state vector components are controllable and which of them are not.
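For this second-order example the controllability matrix is P = [b  Ab]; a direct computation (with illustrative parameter values) shows that det P vanishes exactly when z2 = p2:

```python
import numpy as np

# Controllability matrix P = [b, A b] for the serial example, with
# A = [[-p1, 0], [K2*(z2 - p2), -p2]], b = [K1, 0]^T (illustrative numbers)
def ctrb_det(K1, K2, p1, p2, z2):
    A = np.array([[-p1, 0.0], [K2 * (z2 - p2), -p2]])
    b = np.array([K1, 0.0])
    P = np.column_stack([b, A @ b])
    return np.linalg.det(P)

# z2 != p2: completely controllable (det P != 0)
print(abs(ctrb_det(K1=1.0, K2=2.0, p1=3.0, p2=4.0, z2=5.0)) > 1e-9)
# z2 == p2: det P = 0, the connection loses controllability
print(abs(ctrb_det(K1=1.0, K2=2.0, p1=3.0, p2=4.0, z2=4.0)) < 1e-9)
```

Expanding the determinant gives det P = K1^2·K2·(z2 - p2), which is the algebraic form of the cancellation condition discussed in this case.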
More precisely, in our example, this means which of the evolutions (modes) generated by the poles (-p1) or (-p2) can be modified through the input.
When z2 = p2 the transfer function of the serial connection is of the first order,
H(s) = [K1/(s + p1)]·[K2(s + z2)/(s + p2)] = K1K2/(s + p1) (3.2.24)
even if the system still remains of the second order, this second order being put into evidence by the fact that its free response, (3.2.21), keeps depending on the two poles, that is, on the mode e^{-p1·t} and on the mode e^{-p2·t}, which are different.
Case 4: z2 ≠ p1.
If z2 ≠ p1, det Q ≠ 0, the system is observable for any relation between z2 and p2. The visual examination (inspection) of the state diagram, as in Fig. 3.2.5., will not reveal the observability property directly.
We must not conclude that a system is certainly observable just because each state vector component affects (modifies) the output variable; this is not true.
It must be remembered that the observability property expresses the possibility of determining the initial state which existed at a time moment t0, based on knowledge of the output and the input from that t0 onwards.
For LTI systems it is enough to know the output only, irrespective of the value of t0.
Because of that, for LTI systems one says: the pair (A, C) is observable.
From Fig. 3.2.5. it can be observed that when z2 = p1, that is, when the system is not observable, both x1 and x2 still modify the output variable.
Case a: p1 ≠ p2.
One obtains the expression (3.2.21) in the form,
yl(t) = [K2(p1 - z2)/(p1 - p2)]·e^{-p1·t}·x1(0) + [ [K2(z2 - p2)/(p1 - p2)]·x1(0) + x2(0) ]·e^{-p2·t} (3.2.21)
or equivalently
yl(t) = [K2/(p1 - p2)]·[(p1 - z2)·e^{-p1·t} + (z2 - p2)·e^{-p2·t}]·x1(0) + [e^{-p2·t}]·x2(0) (3.2.25)
Because the goal is the determination of the state vector x(0), through its two components x1(0) and x2(0), a system of two equations is built up from (3.2.25) for two different time moments t1 ≠ t2,
[ [K2/(p1 - p2)]·[(p1 - z2)·e^{-p1·t1} + (z2 - p2)·e^{-p2·t1}]   e^{-p2·t1} ]   [ x1(0) ]   [ yl(t1) ]
[ [K2/(p1 - p2)]·[(p1 - z2)·e^{-p1·t2} + (z2 - p2)·e^{-p2·t2}]   e^{-p2·t2} ] · [ x2(0) ] = [ yl(t2) ]
which can be expressed in a compressed form,
G·x(0) = [ yl(t1) ; yl(t2) ] , G being the above 2×2 matrix.
The possibility of determining the initial state x(0) univocally is assured by the determinant of the matrix G,
det G = [K2/(p1 - p2)]·(p1 - z2)·e^{-p1·t1}·e^{-p2·t2}·[1 - e^{-p1·(t2 - t1)}/e^{-p2·(t2 - t1)}]
Because p1 ≠ p2 and t1 ≠ t2,
det G ≠ 0 ⇔ p1 ≠ z2 ⇔ det Q = K2(p1 - z2) ≠ 0
x(0) = G^{-1}·[ yl(t1) ; yl(t2) ]
Case b: p1 = p2.
The analysis is performed in an identical manner, but the time-domain free response expression deduced from (3.2.19) is,
yl(t) = K2·e^{-p1·t}·x1(0) + K2(z2 - p1)·t·e^{-p1·t}·x1(0) + e^{-p1·t}·x2(0)
yl(t) = K2·[1 + (z2 - p1)·t]·e^{-p1·t}·x1(0) + e^{-p1·t}·x2(0) (3.2.26)
The matrix G has the form,
G = [ K2·[1 + (z2 - p1)·t1]·e^{-p1·t1}   e^{-p1·t1} ]
    [ K2·[1 + (z2 - p1)·t2]·e^{-p1·t2}   e^{-p1·t2} ]
det G = K2·(p1 - z2)·(t2 - t1)·e^{-p1·(t2 + t1)}
Also in this case,
z2 = p1 = p2 ⇔ det G = 0 ⇔ det Q = 0
which means the system is not observable when z2 = p1.
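The condition det Q = K2(p1 - z2) can be checked directly from the observability matrix Q = [c^T; c^T·A] of the example (parameter values are illustrative):

```python
import numpy as np

# Observability matrix Q = [c^T; c^T A] for the serial example, with
# c^T = [K2, 1]; its determinant equals K2*(p1 - z2), matching det G above
def obsv_det(K2, p1, p2, z2):
    A = np.array([[-p1, 0.0], [K2 * (z2 - p2), -p2]])
    c = np.array([K2, 1.0])
    Q = np.vstack([c, c @ A])
    return np.linalg.det(Q)

K2, p1, p2 = 2.0, 3.0, 4.0
for z2 in (1.0, p1):
    d = obsv_det(K2, p1, p2, z2)
    print(abs(d - K2 * (p1 - z2)) < 1e-9)   # det Q = K2*(p1 - z2)
```

The determinant vanishes exactly for z2 = p1, reproducing the loss of observability derived from det G.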
lim_{t→∞} yl(t) = [ [K2(z2 - p2)/(p1 - p2)]·x1(0) + x2(0) ]·[lim_{t→∞} e^{-p2·t}] = [..]·0 = 0 , for p1 ≠ p2.
This means that the free response is bounded too. However, we must mention that this apparent internal stability appears only because the serial interconnected system is unobservable, the unstable mode e^{-p1·t} having been disconnected from the output.
Formally, we can say that an unstable system S1 can be made externally stable (bounded input - bounded output, BIBO) by serial compensation, performing a serial connection of it with another system S2 which must have a zero equal to the undesired pole (here, the unstable pole) of the system S1.
This kind of stabilisation by serial connection is of interest "on paper" only, because:
1. It is very sensitive.
2. The system keeps on being internally unstable.
If the relation z2 = p1 is not exactly realised, but instead z2 = p1 + ε for ε as small as possible, the response y(t) goes to infinity.
From (3.2.25) it results,
yl(t) = [K2·ε/(p2 - p1)]·e^{-p1·t}·x1(0) + [ [K2(z2 - p2)/(p1 - p2)]·x1(0) + x2(0) ]·e^{-p2·t} (3.2.30)
lim_{t→∞} yl(t) → ±∞ if -p1 > 0
yf(t) = L^{-1}{ [K1K2(s + z2)/((s + p1)(s + p2))]·U(s) } = Σ_i r_i(t)·e^{λi·t} + [K1K2·U(-p1)·ε/(p2 - p1)]·e^{-p1·t} , -p1 > 0 (3.2.31)
where by r_i(t) we have denoted the residues of the function Yf(s) in the pole -p2 and in the poles of the Laplace transform U(s) of the input. The poles of U(s) belong to the left half plane because u(t) is a bounded function.
So, lim_{t→∞} yf(t) = 0 ± ∞ = ±∞ .
Each component of the state vector becomes unbounded for nonzero causes (the initial state and the input).
From (3.2.18), applying the inverse Laplace transform one obtains,
x1(t) = e^{-p1·t}·x1(0) + K1·U(-p1)·e^{-p1·t} + Σ_i r_i(t)·e^{λi·t} (3.2.32)
where now by r_i(t) we have denoted the residues of the function K1·U(s)/(s + p1) in the poles λi of the Laplace transform U(s), Re(λi) < 0.
We can express x1(t) as a sum of an unstable component x1^I(t), generated by the unstable pole -p1 > 0, and a stable component x1^S(t), generated by the poles λi,
x1(t) = x1^I(t) + x1^S(t) (3.2.33)
The system S2 stabilises the system S1 if S2 has such parameters that the component x_I1, which is transmitted through the two paths, is finally cancelled.
Indeed,
y_I2(t) = K2·x_I1(t) + x_I2(t) = K2·x_I1(t) − K2·(z2 − p2)/(p1 − p2)·x_I1(t) = K2·[1 − (z2 − p2)/(p1 − p2)]·x_I1(t).
If z2 = p1 then
y_I2(t) = K2·x_I1(t) − K2·(p1 − p2)/(p1 − p2)·x_I1(t) = K2·x_I1(t) − K2·x_I1(t) = 0, ∀t.
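The fragility of this cancellation is easy to see numerically. The following sketch is our own toy setup, not from the text: it assumes S1 with a single unstable mode (p1 = −1, so the pole is at +1), H2(s) = K2(s + z2)/(s + p2) with p2 = 2 and K2 = 1, and integrates the free response by forward Euler. The output stays bounded only for the exact cancellation z2 = p1.

```python
def simulate(z2, p1=-1.0, p2=2.0, K2=1.0, x1_0=1.0, t_end=10.0, dt=1e-3):
    """Free response of the cascade S1 -> S2 (hypothetical numeric values).

    S1: x1' = -p1*x1              (unstable mode e^{-p1 t}, since -p1 > 0)
    S2: H2(s) = K2(s+z2)/(s+p2), realised as
        x2' = -p2*x2 + K2*(z2 - p2)*x1,   y = x2 + K2*x1
    """
    x1, x2 = x1_0, 0.0
    for _ in range(int(round(t_end / dt))):
        # simultaneous forward-Euler update of both states
        x1, x2 = x1 + dt * (-p1 * x1), x2 + dt * (-p2 * x2 + K2 * (z2 - p2) * x1)
    return x2 + K2 * x1            # y(t_end)

y_exact = simulate(z2=-1.0)        # z2 = p1: unstable mode cancelled in y
y_mismatch = simulate(z2=-0.99)    # z2 = p1 + 0.01: cancellation fails
```

With the exact zero, the growing internal mode still appears in x1 and x2 but cancels in the output; with a 1% mismatch the output inherits the unstable mode and diverges.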
H(s) = H_q·H_{q−1}·…·H_1 = H_1·…·H_{q−1}·H_q   (3.2.62)
3. SYSTEM CONNECTIONS. 3.3. Parallel Connection.
Rc: U_i(s) = U(s), i = 1, …, q;  Y(s) = Σ_{i=1}^{q} Y_i(s)
Cc: Ω_i = Ω, Γ_i ⊆ Γ, ∀i = 1, …, q
Y(s) = Σ_{i=1}^{q} Y_i(s) = Σ_{i=1}^{q} H_i(s)·U(s) = [Σ_{i=1}^{q} H_i(s)] · U(s) = H(s)·U(s)
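The sum Σ H_i(s) can be carried out mechanically on numerator/denominator coefficient lists over a common denominator. A minimal sketch (the helper names are ours, not the book's):

```python
def poly_mul(a, b):
    # polynomial product; coefficient lists, highest degree first
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    # pad to equal length, then add term by term
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def tf_parallel(tfs):
    """Equivalent H(s) = sum_i Hi(s), each Hi given as (num, den)."""
    num, den = tfs[0]
    for n2, d2 in tfs[1:]:
        num = poly_add(poly_mul(num, d2), poly_mul(n2, den))
        den = poly_mul(den, d2)
    return num, den

# H1 = 1/(s+1), H2 = 2/(s+3)  ->  H = (3s+5)/((s+1)(s+3))
num, den = tf_parallel([([1.0], [1.0, 1.0]), ([2.0], [1.0, 3.0])])
```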
We can draw a block diagram which illustrates this connection: each subsystem H_1, …, H_q receives the common input u, and the outputs y_1, …, y_q are summed to give y.
3. SYSTEM CONNECTIONS. 3.4. Feedback Connection.
Rc: E = U ± Y2; U1 = E; Y1(s) = H1(s)·U1(s); Y2(s) = H2(s)·U2(s); Y = Y1.
We can represent this set of relations as in the block diagram below. The equivalent transfer matrix H(s) of the feedback interconnection can be obtained by an algebraic procedure.
(Block diagram: H1 in the forward path, driven by E = U1 = U ± Y2; H2 in the feedback path, with Y2 = H2·Y; the whole loop is equivalent to a single block H.)
Y = H1(U ± H2Y) = ±H1H2Y + H1U ⇒ Y = (I ∓ H1H2)^{−1}·H1·U
H = (I ∓ H1H2)^{−1}·H1, of dimension (r×p), where H1H2 is (r×p)(p×r) = (r×r).
Another way:
E = U ± H2H1E ⇒ E = (I ∓ H2H1)^{−1}·U ⇒ Y = H1·(I ∓ H2H1)^{−1}·U
H = H1·(I ∓ H2H1)^{−1}, again (r×p), where H2H1 is (p×r)(r×p) = (p×p).
The equivalent transfer matrix H(s),
Y(s) = H(s)·U(s),
of the feedback interconnected system has two forms which are algebraically identical. The first one,
H(s) = [I ∓ H1(s)H2(s)]^{−1}·H1(s),
requires an (r × r) matrix inversion, but the second,
H(s) = H1(s)·[I ∓ H2(s)H1(s)]^{−1},
requires a (p × p) matrix inversion.
In the SISO case we have a transfer function,
H(s) = H1(s)/(1 ∓ H1(s)H2(s)) = H1(s)/(1 ∓ H2(s)H1(s)) = Y(s)/U(s).
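The algebraic identity between the two matrix forms is the "push-through" rule, (I + H1H2)^{−1}H1 = H1(I + H2H1)^{−1}. It can be spot-checked numerically at a fixed s, with H1 of size r×p and H2 of size p×r as above (a sketch assuming numpy; the matrix values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
r, p = 3, 2
H1 = rng.normal(size=(r, p))     # forward path, r x p (values at some fixed s)
H2 = rng.normal(size=(p, r))     # feedback path, p x r

# first form: (I + H1 H2)^-1 H1, requires an (r x r) inversion
form1 = np.linalg.solve(np.eye(r) + H1 @ H2, H1)
# second form: H1 (I + H2 H1)^-1, requires a (p x p) inversion
form2 = H1 @ np.linalg.inv(np.eye(p) + H2 @ H1)
```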
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.1. Principle Diagrams and Block Diagrams.
1. Oriented Lines.
Oriented lines represent the variables involved in the mathematical relationships. They are drawn as straight lines marked with arrows; for short, an oriented line is called an "arrow".
The direction of the arrow points out the cause-effect direction and has nothing to do with the direction of the flow of variables in the principle diagram.
2. Blocks.
A block, usually drawn as a rectangle, represents the mathematical operator which relates the causes (input variables) and the effects (output variables).
Inside the rectangle representing a block, a symbol of that operator is marked. However, some specific operators are represented by other geometrical figures (for example, the sum operator is usually represented by a circle).
The input variables of a block are drawn as arrows incoming to the rectangle (geometrical figure) representing that block. The output variables are drawn as arrows outgoing from it. One oriented line (arrow), that is, one variable, can be an output variable for one block and an input variable for another block.
For example, an explicit relation between the variable u and the variable y is of the form
y = F(u)   (4.1.1)
where u and y can be time functions. The symbol F( ), denoting an operator (it can be a simple function), expresses an oriented system where u is the cause and y is the effect; that means y is the output variable and u is the only input variable. We can write (4.1.1) as
y = F(u) ⇔ y = F{u} ⇔ y = Fu
and the attached block diagram is as in Fig. 4.1.1.
Figure no. 4.1.1.
Suppose we are interested in the water level in the tank, denoted by L = y. In such a way the variable L = y, an attribute (a characteristic) of the physical object (the water tank), is the effect we are interested in.
All the causes which affect this selected output are represented by the two flow rates q1 = u1 and q2 = u2. So, in the oriented system, based on causality principles, both u1 and u2 are input variables.
The corresponding block diagram, in the systems theory meaning, will have L = y as output and both u1 and u2 as inputs. It is represented by the block diagram from Fig. 4.1.5.c or by one of the block diagrams from Fig. 4.1.6.
The mathematical relationship between y and u1, u2 in the time domain,
y(t) = K·∫_{t0}^{t} [u1(τ) − u2(τ)] dτ + y(t0),   (4.1.8)
is represented in Fig. 4.1.6.a.
To determine the mathematical model in the complex domain we define the variables as variations with respect to a steady state, defined by:
Y^ss = y(0), U1^ss = u1(0) = 0, U2^ss = u2(0) = 0.
Denoting
Y(s) = L{y(t) − Y^ss}, U1(s) = L{u1(t) − U1^ss}, U2(s) = L{u2(t) − U2^ss},
we have
Y(s) = (K/s)·[U1(s) − U2(s)],   (4.1.8)
the I-O relation by components, represented in Fig. 4.1.6.b.
In matrix form the I-O relation is
Y(s) = H(s)·[U1(s); U2(s)] = H(s)·U(s), H(s) = [K/s  −K/s],   (4.1.9)
which allows us to represent the system as a whole as in Fig. 4.1.6.c.
Figure no. 4.1.6.
L{ẋ(t)} = s·X(s) − x(0) = a·X(s) + b·U(s)
X(s) = (1/s)·[a·X(s) + x(0) + b·U(s)]
X(s) = [1/(s − a)]·[x(0) + b·U(s)]   (4.1.15)
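Relation (4.1.15) can be checked in the time domain. For a constant input, U(s) = u/s, the inverse transform gives x(t) = e^{at}x(0) + (bu/a)(e^{at} − 1); a crude forward-Euler integration of ẋ = ax + bu reproduces it (the numeric values below are arbitrary):

```python
import math

a, b = -2.0, 1.0          # stable scalar system x' = a*x + b*u
x0, u = 1.0, 1.0          # initial state and constant input

def x_analytic(t):
    # inverse Laplace of X(s) = (s - a)^(-1) [x(0) + b*u/s]
    return math.exp(a * t) * x0 + (b * u / a) * (math.exp(a * t) - 1.0)

def x_euler(t_end, dt=1e-4):
    # forward-Euler integration of the same differential equation
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * (a * x + b * u)
    return x
```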
The state diagram of a system can also be drawn for non-scalar systems, where at least one coefficient is a matrix, of the general form as in (4.1.16),
ẋ = Ax + Bu
y = Cx + Du,   (4.1.16)
where the matrices have the dimensions: A (n×n); B (n×r); C (p×n); D (p×r).
Because
L{ẋ(t)} = s·X(s) − x(0) = [s·X1(s) s·X2(s) … s·Xn(s)]^T − [x1(0) x2(0) … xn(0)]^T,
we have
X(s) = [(1/s)·I_n] · [L{ẋ(t)} + x(0)],   (4.1.17)
which is represented as in Fig. 4.1.12.
Sometimes, to point out that the oriented lines carry vectors, they are drawn as double lines, as in Fig. 4.1.12.
Figure no. 4.1.12. (State diagram: input matrix B, integrator block (1/s)·I_n with initial condition x(0), feedback matrix A, output matrix C and direct path D.)
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.2. System Reduction by Using Block Diagrams.
For example, suppose that a contour delimits an area with two contour input variables, denoted a1, a2, and two contour output variables, denoted b1, b2, having the Laplace transforms A1(s), A2(s), B1(s), B2(s) respectively.
This oriented contour is represented in Fig. 4.2.2.a. Suppose that, using analytical methods, we can express B1, B2 in terms of A1, A2 as in (4.2.4) when the dependence is linear.
Figure no. 4.2.2.
B1(s) = G11(s)·A1(s) + G12(s)·A2(s)
B2(s) = G21(s)·A1(s) + G22(s)·A2(s)   (4.2.4)
These relations are now represented by the block diagram in Fig. 4.2.2.b, which will replace the undesired connection.
Figure no. 4.2.3.
with 13 variables a0, …, a8, Y1, Y2, U1, U2, where the two input variables U1, U2 are independent (free variables of the algebraic system). The coefficients H1, …, H7 represent expressions of some complex functions; let us denote them as transfer functions too. To reduce the system means to eliminate all the intermediate variables a0, …, a8, expressing Y1, Y2 in terms of U1 and U2. This is a rather difficult task.
a) Determination of the component H11(s) = Y1(s)/U1(s) |_{U2=0}.
To do this we shall ignore Y2 and consider U2 = 0, as in Fig. 4.2.5, where Y2 is no longer drawn and, because U2 = 0, now a3 = a6 and a block with gain 1 appears instead of the summing point S3. To put into evidence the three standard connections we intend to move, in Fig. 4.2.5, the takeoff point a4 ahead of the block H3 (as pointed out by the dashed arrow) and to arrange this new takeoff point using the equivalences from Fig. 4.1.2.
Figure no. 4.2.5.
Ha = H2/(1 + H2H5H3); Hb = H4 + H3H6,   (4.2.9)
so the block diagram from Fig. 4.2.6 becomes, as in Fig. 4.2.7, a simple feedback loop described by the transfer function
H11(s) = H1HaHb/(1 + H1HaHbH7)   (4.2.10)
Figure no. 4.2.7.
b) Determination of the component H12(s) = Y1(s)/U2(s) |_{U1=0}.
To evaluate H12(s), in the initial BD from Fig. 4.2.3 we shall consider U1 = 0 and ignore Y2, resulting in a BD as in Fig. 4.2.8.
Figure no. 4.2.8.
It can be observed that now two simple cascade connections appear, together with the parallel connection denoted Ha, where Ha(s) = H4 + H3H6.
(Intermediate block diagrams: the takeoff point a5 is moved beyond the block H3, producing equivalent branches H5·H3 and H7·Ha, which are then grouped into a single parallel block Hc.)
c) Determination of the component H21(s) = Y2(s)/U1(s) |_{U2=0}.
To determine the transfer function H21(s) we have to consider U2 = 0 and to ignore the output Y1. Under such conditions, the initial block diagram from Fig. 4.2.3 looks like Fig. 4.2.13.
Figure no. 4.2.13.
Because U2 = 0, the relation a3 = U2 + a6 from Fig. 4.2.3 becomes a3 = a6, so the summing operator S3 is now represented by a block with gain 1, and later on it will be ignored. Moving the takeoff point a4 ahead of the block H3 we get the BD from Fig. 4.2.14.
Figure no. 4.2.14.
Redrawing the above BD we get the shape from Figure no. 4.2.15.
Figure no. 4.2.15. (After successive equivalences the branches H5·H3 and H7·Ha are summed into He, and the forward path is grouped into Hd.)
d) Determination of the component H22(s) = Y2(s)/U2(s) |_{U1=0}.
The component H22(s) is evaluated considering U1 = 0 and ignoring the output Y1, so the initial block diagram from Fig. 4.2.3 looks like Fig. 4.2.18.
Figure no. 4.2.18.
Redrawing the above BD to have the input on the left side and the output on the right side, we obtain Fig. 4.2.21.
Figure no. 4.2.21.
4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.3. Signal Flow Graphs Method (SFG).
A branch oriented from node xi to node xj, with transmittance t_ij (written also t_{xi xj}), expresses the algebraic relation xj = t_ij·xi; conversely, a branch oriented from xj to xi, with transmittance t_ji, expresses xi = t_ji·xj.
4.3.2.4. Parallel rule. A set of elementary branches leaving the same node and entering the same node (a parallel connection of elementary branches) can be replaced by (is equivalent to) a single branch whose transmission function is the sum of the transmission functions of the components of this parallel connection.
Example. Two parallel branches t'12, t''12, as in the SFG from Fig. 4.3.6, are equivalent to a single branch with the equivalent transmission function T12, where
T12 = t'12 + t''12   (4.3.6)
Figure no. 4.3.6.
4.3.2.5. Input node. An input node (source node) is a node that has only outgoing branches. Example. The node x1 in Fig. 4.3.4 is an input node. All these branches are outgoing with respect to x1 and incoming with respect to xk.
4.3.2.6. Output node. An output node (sink node) is a node that has only incoming branches.
Example. The node x1 in Fig. 4.3.3 is an output node. All these branches are incoming with respect to x1 and outgoing with respect to xk.
Any node other than an input node can be related to a new node, as an output node, through a unity-gain branch. This new node is a "dummy" node.
4.3.2.7. Path. A path from one node to another node is a continuous unidirectional succession of branches (traversed in the same direction) along which no node is passed more than once.
4.3.2.8. Loop. A loop is a path that originates and terminates at the same node, in such a way that no node is passed more than once.
4.3.2.9. Path gain. The gain of a path is the product of the gains of the branches encountered in traversing the path. Sometimes we say "path" meaning the gain of that path. For the path k its gain is denoted Ck.
4.3.2.10. Loop gain. The loop gain is the product of the transmission functions encountered along that loop. Sometimes we say "loop" meaning the gain of that loop. For the loop k its gain is denoted Bk.
4.3.2.11. Disjunctive Paths/Loops. Two paths or two loops are said to be nontouching (disjoint) if they have no node in common.
x1 = (D11·u1 + D12·u2)/D = [(−9)u1 + (25)u2]/(−23) = (9/23)·u1 + (−25/23)·u2   (4.3.14)
x2 = (D21·u1 + D22·u2)/D = [(1)u1 + (−13)u2]/(−23) = (−1/23)·u1 + (13/23)·u2   (4.3.15)
The same results can be obtained if we express the system (4.3.12) in matrix form,
Ax = Bu, x = [x1 x2]^T, u = [u1 u2]^T,   (4.3.16)
where
A = [3 4; 2 −5], B = [1 −1; 1 −5].
If det(A) ≠ 0, then
x = A^{−1}Bu = Hu = [9/23 −25/23; −1/23 13/23]·u ⇒   (4.3.17)
x1 = (9/23)·u1 + (−25/23)·u2 = T^{x1}_{u1}·u1 + T^{x1}_{u2}·u2
x2 = (−1/23)·u1 + (13/23)·u2 = T^{x2}_{u1}·u1 + T^{x2}_{u2}·u2   (4.3.18)
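The arithmetic in (4.3.14)-(4.3.18) is easy to verify numerically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[3.0, 4.0], [2.0, -5.0]])
B = np.array([[1.0, -1.0], [1.0, -5.0]])

D = np.linalg.det(A)        # determinant D = -23
H = np.linalg.inv(A) @ B    # x = A^-1 B u = H u

# the transmittances obtained by Cramer's rule in (4.3.14)-(4.3.15)
expected = np.array([[9.0, -25.0], [-1.0, 13.0]]) / 23.0
```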
The same system (4.3.11), or its equivalent (4.3.12), can be represented as an SFG and solved using the SFG techniques. If the coefficients of the equations contain letters (symbolic expressions) and the system order is higher than 3, the above methods cannot easily be implemented on computers and solving such a system is a very difficult task. Using the methods of SFG these difficulties disappear.
To construct the graph, first of all the two dependent variables (x1, x2) are withdrawn from the system equations, irrespective of which variable from which equation.
x1 = (−2)·x1 + (−4)·x2 + (1)·u1 + (−1)·u2
x2 = (−2)·x1 + (6)·x2 + (1)·u1 + (−5)·u2   (4.3.19)
After marking the four dots, the future nodes, with the variables x1, x2, u1, u2, they are connected, using the SFG rules, according to the equations (4.3.19); the SFG is obtained as in Fig. 4.3.9.
Figure no. 4.3.9. (Input nodes u1, u2; nodes x1, x2 with self-loops.)
or, in matrix form,
Ax = Bu, x = [x1 … xi … xn]^T, u = [u1 … uj … up]^T.   (4.3.24)
If the n equations are linearly independent, the determinant D = det(A) ≠ 0 and a unique solution exists (considering that the vector u is given),
x = A^{−1}Bu = Hu, where H = {Hij}, 1 ≤ i ≤ n, 1 ≤ j ≤ p,
whose i-th component is written as
xi = Σ_{j=1}^{p} Hij·uj = Σ_{j=1}^{p} T^{xi}_{uj}·uj   (4.3.25)
Example. Suppose that in Example 4.3.3 we are interested in evaluating only the variables x1 = x_{i1} and x3 = x_{i2}, that is, r = 2. We can consider two new output nodes y1 = x1, y2 = x3 attached to the graph from Fig. 4.3.11, as in Fig. 4.4.1.
Figure no. 4.4.1.
4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.4. Systems Reduction Using State Flow Graphs.
D = 1 + Σ_k (−1)^k S_pk, where S_pk is the sum of the products of the loop gains taken k disjoint (nontouching) loops at a time.
Example. Let us consider the graph from Ex. 4.3.3, redrawn in Fig. 4.4.7.
Suppose we are interested in all the dependent variables x1, x2, x3.
To apply Mason's general formula we must first define and draw in the graph the corresponding "dummy" output nodes which pick off the variables of interest. They are defined by the relations:
y1 = x1; y2 = x2; y3 = x3.
Figure no. 4.4.7. (The graph with loops B1, B2, B3 and the dummy output nodes y1 = x1, y2 = x2, y3 = x3.)
Evaluation of T^{y2}_{u2} = T(u2, y2): We identify one path from u2 to y2:
C6 = C6(u2, y2) = (1)(1) = 1; B6 = {B1} ⇒ D6 = 1 − (−1) = 2.
We obtain T^{y2}_{u2} = C6·D6/D = (1)·(2)/8 = 2/8.
Evaluation of T^{y3}_{u1} = T(u1, y3): We identify four paths from u1 to y3:
C7 = C7(u1, y3) = (3)(2) = 6; B7 = {B1} ⇒ D7 = 2.
C8 = C8(u1, y3) = (3)(3)(−7)(1) = −63; B8 = {∅} ⇒ D8 = 1.
C9 = C9(u1, y3) = (4)(−7) = −28; B9 = {B2} ⇒ D9 = 1 − 6 = −5.
C10 = C10(u1, y3) = (4)(−6)(2)(1) = −48; B10 = {∅} ⇒ D10 = 1.
We obtain
T^{y3}_{u1} = (C7D7 + C8D8 + C9D9 + C10D10)/D = [(6)·(2) + (−63)·(1) + (−28)·(−5) + (−48)·(1)]/8 = 41/8
Evaluation of T^{y3}_{u2} = T(u2, y3): We identify three paths from u2 to y3:
C11 = C11(u2, y3) = (4)(1) = 4; B11 = {B1; B2; B3} ⇒ D11 = D = 8.
C12 = C12(u2, y3) = (1)(2)(1) = 2; B12 = {B1} ⇒ D12 = 2.
C13 = C13(u2, y3) = (1)(3)(−7)(1) = −21; B13 = {∅} ⇒ D13 = 1.
T^{y3}_{u2} = (C11D11 + C12D12 + C13D13)/D = [(4)·(8) + (2)·(2) + (−21)·(1)]/8 = 15/8
Collecting the results, we write:
y1 = x1 = T^{y1}_{u1}·u1 + T^{y1}_{u2}·u2 = (−11/8)·u1 + (3/8)·u2
y2 = x2 = T^{y2}_{u1}·u1 + T^{y2}_{u2}·u2 = (−18/8)·u1 + (2/8)·u2
y3 = x3 = T^{y3}_{u1}·u1 + T^{y3}_{u2}·u2 = (41/8)·u1 + (15/8)·u2
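The path-by-path bookkeeping Σ Ck·Dk / D is easy to mis-copy; a tiny helper (the function name is ours, not from the text) reproduces the transmittances computed above:

```python
def mason_component(paths, D):
    """Mason's gain formula for one input-output pair:
    sum of C_k * D_k over the identified forward paths, divided by
    the graph determinant D."""
    return sum(C * Dk for C, Dk in paths) / D

D = 8.0
T_u1_y3 = mason_component([(6, 2), (-63, 1), (-28, -5), (-48, 1)], D)  # 41/8
T_u2_y3 = mason_component([(4, 8), (2, 2), (-21, 1)], D)              # 15/8
T_u2_y2 = mason_component([(1, 2)], D)                                # 2/8
```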
The same results can be obtained, more easily, by purely algebraic methods, solving the system (4.3.28). The goal here was only to illustrate how to apply Mason's general formula, which offers many advantages for symbolic systems and when there are not as many paths as in this example.
Figure no. 4.4.8.
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.1. Problem Statement.
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.2. First Type I-D Canonical Form.
W(s) = −(a_{n−1}/a_n)·s^{−1}W(s) − (a_{n−2}/a_n)·s^{−2}W(s) − … − (a_0/a_n)·s^{−n}W(s) + (1/a_n)·U(s)   (5.2.6)
3. Denote the products [s^{−k}W(s)], k = 1 : n, as n new variables: s^{−1}W(s) = X_n(s), s^{−2}W(s) = X_{n−1}(s), …, s^{−n}W(s) = X_1(s).
6. Draw, as a block diagram or state flow graph, a cascade of n integrators and fill in this graphical representation with the relations (5.2.6) or (5.2.8) and the output relations (5.2.13) or (5.2.14).
The integrators can be represented without initial conditions, or with their initial conditions if later on we want to obtain the free response from this diagram.
Figure no. 5.2.1.
7. Denote, from the right-hand side to the left-hand side, the integrator outputs by the variables X1, X2, …, Xn, as in (5.2.7), both in the complex domain and in the time domain.
8. Interpret the diagram in the time domain and write the relations between the variables in the time domain,
ẋ1 = x2
ẋ2 = x3
........   (5.2.15)
ẋ_{n−1} = x_n
ẋ_n = −(a0/an)·x1 − (a1/an)·x2 − … − (a_{n−1}/an)·x_n + (1/an)·u
Example 5.2.1. First Type I-D Canonical Form of a Second Order System.
Let a system be described by the second order differential equation
ÿ + 3ẏ + 2y = ü + 5u̇ + 6u.
The transfer function is
H(s) = Y(s)/U(s) = (s² + 5s + 6)/(s² + 3s + 2) = [(s + 2)(s + 3)]/[(s + 2)(s + 1)]
H(s) = (1 + 5s^{−1} + 6s^{−2})/(1 + 3s^{−1} + 2s^{−2})
Y(s) = (1 + 5s^{−1} + 6s^{−2}) · U(s)/(1 + 3s^{−1} + 2s^{−2})
W(s) = U(s)/(1 + 3s^{−1} + 2s^{−2})
W(s) + 3[s^{−1}W(s)] + 2[s^{−2}W(s)] = U(s)
(State diagram: two integrators in cascade with initial conditions x2(0), x1(0); W(s), s^{−1}W(s) = X2(s), s^{−2}W(s) = X1(s); feedback loops B1 = −3s^{−1}, B2 = −2s^{−2}; output gains 4 and 2 plus the direct path.)
y = 4x1 + 2x2 + u
A = [0 1; −2 −3], b = [0; 1], c = [4; 2], d = 1
Now we can check that this state realisation (the controllable companion canonical form) of the transfer function
H(s) = Y(s)/U(s) = (s² + 5s + 6)/(s² + 3s + 2) = [(s + 2)(s + 3)]/[(s + 2)(s + 1)]
is controllable but not observable.
As we can see, the transfer function has a common factor in the numerator and denominator, so one of the two properties, controllability or observability, is lost. In our case it cannot be the controllability.
P = [b  Ab],
b = [0; 1], Ab = [0 1; −2 −3]·[0; 1] = [1; −3] ⇒
P = [0 1; 1 −3], det(P) = −1 ≠ 0. The system is controllable.
Q = [c^T; c^T·A]
c^T = [4 2], c^T·A = [4 2]·[0 1; −2 −3] = [−4 −2]
Q = [4 2; −4 −2], det(Q) = −8 + 8 = 0. The system is not observable.
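The same check can be done numerically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[4.0, 2.0]])

P = np.hstack([b, A @ b])    # controllability matrix [b  Ab]
Q = np.vstack([c, c @ A])    # observability matrix [c^T; c^T A]

detP = np.linalg.det(P)      # -1: full rank, controllable
detQ = np.linalg.det(Q)      # 0: rank deficient, not observable
```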
Operating on this signal flow graph, which is a state diagram (SD), we can get the transfer function and the two components of the free response.
D = 1 + Σ_{k≥1} (−1)^k S_pk
B1 = −3s^{−1}; B2 = −2s^{−2}
S_p1 = B1 + B2 = −3s^{−1} − 2s^{−2}
S_p2 = 0
D = 1 − S_p1 = 1 − (−3s^{−1} − 2s^{−2}) = 1 + 3s^{−1} + 2s^{−2}
H(s) = T^y_u = (C1D1 + C2D2 + C3D3)/D
C1 = 1; D1 = D (the direct path touches no loop); C2 = 2s^{−1}; D2 = 1; C3 = 4s^{−2}; D3 = 1;
H(s) = (1 + 3s^{−1} + 2s^{−2} + 2s^{−1} + 4s^{−2})/(1 + 3s^{−1} + 2s^{−2}) = (s² + 5s + 6)/(s² + 3s + 2)
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.3. Second Type D-I Canonical Form.
a_n·y^{(n)} + a_{n−1}·y^{(n−1)} + … + a_1·y^{(1)} + a_0·y^{(0)} = b_n·u^{(n)} + b_{n−1}·u^{(n−1)} + … + b_1·u^{(1)} + b_0·u^{(0)}   (5.3.2)
where we denote
y^{(k)} = d^k y(t)/dt^k = D^k{y(t)} = D^k y = D{D^{k−1} y}   (5.3.3)
u^{(k)} = d^k u(t)/dt^k = D^k{u(t)} = D^k u = D{D^{k−1} u}.   (5.3.4)
Using the derivative symbolic operator
D{•} = d{•}/dt,   (5.3.5)
the equation (5.3.2) can be written as
a_n D^n y + a_{n−1} D^{n−1} y + … + a_1 Dy + a_0 y = b_n D^n u + b_{n−1} D^{n−1} u + … + b_1 Du + b_0 u
or arranged as
D^n[a_n y − b_n u] + D^{n−1}[a_{n−1} y − b_{n−1} u] + … + D[a_1 y − b_1 u] + [a_0 y − b_0 u] = 0
0 = a_0·y − b_0·u + ẋ_n ⇒ ẋ_n = −(a_0/a_n)·x_1 + (b_0 − b_n·a_0/a_n)·u
From (5.3.7) and (5.3.8) we can write the state equations in matrix form,
ẋ = Ax + bu, x = [x1, x2, …, x_{n−1}, x_n]^T
y = c^T x + du,
where,
A = [ −a_{n−1}/a_n  1  0  ..  0  0
      −a_{n−2}/a_n  0  1  ..  0  0
      ..            ..        ..
      −a_1/a_n      0  0  ..  0  1
      −a_0/a_n      0  0  ..  0  0 ],
b = [ b_{n−1} − b_n·a_{n−1}/a_n
      b_{n−2} − b_n·a_{n−2}/a_n
      ..
      b_1 − b_n·a_1/a_n
      b_0 − b_n·a_0/a_n ],
c = [1/a_n; 0; ..; 0], d = b_n/a_n
Example.
Our previous example with:
n = 2, b2 = 1, b1 = 5, b0 = 6, a2 = 1, a1 = 3, a0 = 2 ⇒
A = [−3 1; −2 0], b = [2; 4], c = [1; 0], d = 1
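Both canonical forms must realise the same transfer function; this can be spot-checked by evaluating H(s) = c^T(sI − A)^{−1}b + d at an arbitrary complex point (a sketch assuming numpy):

```python
import numpy as np

def tf_eval(A, b, c, d, s):
    """H(s) = c^T (sI - A)^(-1) b + d, evaluated at one complex point s."""
    n = A.shape[0]
    return (c.T @ np.linalg.solve(s * np.eye(n) - A, b) + d).item()

# First-type (controllable companion) form from Example 5.2.1
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]]); b1 = np.array([[0.0], [1.0]])
c1 = np.array([[4.0], [2.0]]); d1 = 1.0
# Second-type (D-I) form from the example above
A2 = np.array([[-3.0, 1.0], [-2.0, 0.0]]); b2 = np.array([[2.0], [4.0]])
c2 = np.array([[1.0], [0.0]]); d2 = 1.0

s = 1.0 + 1.0j
H_ref = (s**2 + 5 * s + 6) / (s**2 + 3 * s + 2)
H1 = tf_eval(A1, b1, c1, d1, s)
H2 = tf_eval(A2, b2, c2, d2, s)
```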
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.4. Jordan Canonical Form.
H(s) = (b4·s⁴ + … + b0)/[(s − λ1)(s − λ2)²((s − α)² + β²)] =
= c0 + c11/(s − λ1) + c21/(s − λ2) + c22/(s − λ2)² + (c31·s + c32)/((s − α)² + β²) = Y(s)/U(s)
Let us denote
H3(s) = (c31·s + c32)/((s − α)² + β²).
To determine this canonical form, first we plot as a block diagram the output relation, considering for repeated roots a series of first order blocks.
Then we express the relations in the time domain, associating to the output of each first order block a state component, and for complex roots (poles) considering a number of components equal to the number of complex poles.
Figure no. 5.4.1.
X11 = [1/(s − λ1)]·U(s) ⇒ s·X11 − λ1·X11 = U ⇒ ẋ11 = λ1·x11 + u
X21 = [1/(s − λ2)]·X22 ⇒ ẋ21 = λ2·x21 + x22
X22 = [1/(s − λ2)]·U(s) ⇒ ẋ22 = λ2·x22 + u
For the blocks with complex poles we can use, as an independent problem, any method of state realisation, like the I-D canonical form:
x3 = [x31; x32]
ẋ3 = A3·x3 + b3·u
y3 = c3^T·x3 + d3·u, d3 = 0
(d3 = 0 because H3 is a strictly proper part: the degree of the denominator is bigger than the degree of the numerator). We have a second order subsystem in this case.
y = c0·u + c11·x11 + c22·x21 + c21·x22 + c32·x31 + c31·x32
Construct a state vector appending all the variables chosen as state variables:
x¹ = x11; x² = [x21; x22]^T; x³ = [x31; x32]^T ⇒ x = [x¹; x²; x³]
Write the state equations in matrix form:
ẋ = A_J·x + B_J·u
y = C_J·x + d_J·u
A_J = [ λ1  0   0   0
        0   λ2  1   0
        0   0   λ2  0
        0   0   0   A3 ]   (Jordan blocks on the diagonal)
B_J = [b_{1J}; b_{2J}; b_{3J}], b_{1J} = 1, b_{2J} = [0; 1], b_{3J} = [b31; b32];
C_J = [C_{1J} C_{2J} C_{3J}], C_{1J} = c11, C_{2J} = [c22 c21], C_{3J} = [c32 c31];
d_J = c0 = b_n/a_n
H3(s) = (c31·s + c32)/(s² − 2αs + α² + β²)
ẋ3 = A3·x3 + b3·u
y3 = c3^T·x3 + d3·u
Y3 = (c31·s + c32) · U(s)/((s − α)² + β²)
W(s) = [1/((s − α)² + β²)]·U(s) = (1/β²) · [ (β²/(s − α)²)/(1 + β²/(s − α)²) ]·U(s)
We can interpret this relation as a feedback connection: two cascaded blocks β/(s − α) in the forward path, closed by a unity negative feedback loop and scaled by 1/β²; the outputs of the two blocks provide the state variables x32 and x31.
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.5 State Equations Realisation Starting from the Block Diagram.
H(s) = (b1·s + b0)/(a1·s + a0) = b1/a1 + (b0 − b1·a0/a1)/(a1·s + a0)   (5.5.2)
Denote as state variable the output of the first order dynamic block and write the state equations:
ẋ = −(a0/a1)·x + (1/a1)·(b0 − b1·a0/a1)·u
y = x + (b1/a1)·u   (5.5.3)
ẋ1 = x2
ẋ2 = −(a0/a2)·x1 − (a1/a2)·x2 + (1/a2)·u   (5.5.4)
y = (b0 − b2·a0/a2)·x1 + (b1 − b2·a1/a2)·x2 + (b2/a2)·u
For block 4 we can choose the input as being ẋ. In such a case this x will disappear if we want to get a minimal realisation (a minimum number of state variables).
Figure no. 5.5.3.
Denote the inputs and outputs of the blocks by variables and write the algebraic equations between them.
Using the above determined state equations and the algebraic equations, eliminate all the intermediary variables and write the state equations in matrix form, considering as state vector a vector composed of all the components denoted in the block diagram.
(Block diagram: the blocks (s + 2)/(s + 4), 1/s, 1/(s + 3) and 3/(s + 4) interconnected with feedback; the states x1, x2, x3 are associated with the dynamic blocks.)
x = [x1; x2; x3] ⇒ ẋ = Ax + bu, y = c^T x,
A = [−4 2 2; 1 −1 −1; 1 −1 −4], b = [−2; 1; 1], c = [0; 1; 0], d = 0
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.1. Experimental Frequency Characteristics .
Figure no. 6.1.3. (A two-way recorder displays the input sinusoid and the steady-state output y_a(t) of amplitude Ym; the time shift Δt = t_{y0} − t_{u0} = −ϕ/ω and the amplitudes are measured for each frequency ω.)
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.2. Relations Between Experimental Frequency Characteristics and Transfer Function Attributes.
The transient response will disappear when t → ∞ if the real parts of all transfer function poles are negative, Re(λi) < 0.
This means that all the poles must be located in the left half of the complex plane. This is the main stability criterion for linear systems.
The expression H(jω) is a complex number for which we can define
A(ω) = |H(jω)|, ϕ(ω) = arg(H(jω))   (6.2.6)
so that
H(jω) = A(ω)·e^{jϕ(ω)}, H(−jω) = A(ω)·e^{−jϕ(ω)}.
In such a way:
y_p(t) = [ H(jω)·e^{jωt}/(2j) − H(−jω)·e^{−jωt}/(2j) ]·Um
y_p(t) = Um·A(ω)·[e^{j(ωt+ϕ)} − e^{−j(ωt+ϕ)}]/(2j)
y_p(t) = A(ω)·Um·sin(ωt + ϕ(ω))   (6.2.7)
We can see that the permanent (steady-state) response is also a sinusoidal signal with the same frequency, of amplitude Ym = A(ω)·Um and phase shift ϕ(ω) = −ω·Δt, Δt = Δt(ω).
Ym/Um = A(ω) = |H(jω)|   (6.2.8)
that means the ratio A_exp(ω) between the experimental amplitudes, as defined in (6.1.9), is exactly the modulus of the complex expression H(jω) obtained by replacing s by jω. At the same time the experimental phase, given by (6.1.10), ϕ_exp(ω) = −ω·Δt_exp, is the argument of the same complex expression H(jω):
ϕ(ω) = arg(H(jω))   (6.2.9)
A(ω) = sqrt(P²(ω) + Q²(ω))   (6.2.16)
ϕ(ω) = { arctg[Q(ω)/P(ω)],       P(ω) > 0
         arctg[Q(ω)/P(ω)] + π,   P(ω) < 0
         π/2,                    P(ω) = 0, Q(ω) > 0
         −π/2,                   P(ω) = 0, Q(ω) < 0 },  ϕ ∈ (−π, π]   (6.2.17)
The complex frequency characteristic is the polar plot of the pair
(A(ω),ϕ(ω)) or the Cartesian plot of the pair (P(ω ),Q(ω )).
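Relations (6.2.16)-(6.2.17) are exactly what the quadrant-aware arctangent computes; a small sketch for a first order example (the example system is our assumption, not from the text):

```python
import math

def freq_response(H, w):
    """Amplitude and phase of H(jw); atan2 is the quadrant-aware
    version of the piecewise arctg formula (6.2.17), phi in (-pi, pi]."""
    z = H(1j * w)
    P, Q = z.real, z.imag
    return math.hypot(P, Q), math.atan2(Q, P)   # (6.2.16), (6.2.17)

H = lambda s: 1.0 / (s + 1.0)       # example: first order system, T = 1
A1, phi1 = freq_response(H, 1.0)    # A = 1/sqrt(2), phi = -pi/4
```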
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.3. Logarithmic Frequency Characteristics.
(Bode diagram: the magnitude characteristic and the phase frequency characteristic ϕ(ω) on a linear scale, both plotted against ω on a logarithmic scale.)
L(ω) = 20·lg[A(ω)] ≈ { 0, ωT < 1; 20·lg[ωT], ωT ≥ 1 } = L_a(ω).   (6.3.6)
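The quality of the asymptotic approximation (6.3.6) can be checked numerically, assuming the usual first order factor A(ω) = sqrt(1 + (ωT)²); the worst case is at the corner frequency ωT = 1, where the error is 10·lg 2 ≈ 3 dB:

```python
import math

def L_exact(wT):
    # exact magnitude of a first-order term, A = sqrt(1 + (wT)^2), in dB
    return 20 * math.log10(math.sqrt(1 + wT * wT))

def L_asymptotic(wT):
    # the two straight-line asymptotes of (6.3.6)
    return 0.0 if wT < 1 else 20 * math.log10(wT)

# worst case at the corner frequency wT = 1:
err_corner = L_exact(1.0) - L_asymptotic(1.0)     # 10*lg(2) ~ 3.01 dB
# sample the error over four decades of normalised frequency
errors = [abs(L_exact(10**x) - L_asymptotic(10**x))
          for x in [i / 100 - 2 for i in range(401)]]
```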
Let us denote by ωT the so-called normalised frequency, where ω = 2πf. Note that ω is also called, in English, frequency (angular frequency), which corresponds to the Romanian "pulsatie", while f is called frequency in both English and Romanian.
As 2π is a nondimensional number, both ω and f are measured in [sec]⁻¹, representing the natural frequency, and T, called the time constant, is measured in [sec]. So the normalised frequency ωT is a nondimensional number.
Frequently some items of the frequency characteristics are represented in normalised frequency. Recovering their shape in natural frequency is a matter of frequency-scale graduation, as depicted in Fig. 6.3.3.
Figure no. 6.3.3. (The same logarithmic scale graduated in the normalised frequency ωT and in the natural frequency ω, obtained by dividing the graduations by T.)
Figure no. 6.3.4. (The logarithmic frequency scale ωT = 10^x and the linear scale x = lg(ωT); a horizontal asymptote y = 0 has slope m = 0 dB/dec.)
- The two horizontal asymptotes of the phase frequency characteristic ϕ(ω), in the linear space of the variable x = lg(ωT), are evaluated for the function
G(x) = ϕ(ω)|_{ωT→10^x} = arctg(10^x):
x → −∞ ⇔ ω → 0 ⇒ G(x) → 0
x → +∞ ⇔ ω → ∞ ⇒ G(x) → π/2
- A line having the slope of ϕ(ω) in the particular point ωT = 1 ⇔ x = 0, slope which is evaluated for the function G(x) by the derivative
G′(x) = 10^x·ln10/(10^{2x} + 1) ⇒ G′(0) = (ln10)/2.
Both ϕ(ω) and ϕ_a(ω) are depicted in Fig. 6.3.5.
Figure no. 6.3.5. The exact phase frequency characteristic ϕ(ω ) and its asymptotic approximation ϕa(ω ), in degrees and radians, versus ωT = 10^x (logarithmic scale) and x = lg(ωT) (linear scale). The oblique line y = (ln10/2)·x + π/4 connects the horizontal asymptotes ϕ = 0 and ϕ = π/2, intersecting them at x1 = −0.69897 (ω1T = 0.2) and x2 = 0.69897 (ω2T = 5), so that ω2T = 1/(ω1T); the maximum error of the asymptotic characteristic is about 6 degrees.
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.4. Elementary Frequency Characteristics.
[Figure: frequency characteristics of the constant gain element K — the magnitude is the horizontal line 20·lg(|K|) for all ω; the phase ϕ(ω ) is 0 if K ≥ 0 and ±π if K < 0.]
H(s) = 1/s^α , α ∈ Z   (6.4.5)
if α = 1 then the system is a pure simple integrator;
if α = 2 then the system is a pure double integrator;
if α = −1 then the system is a pure derivative, H(s) = s.
s → jω ⇒ H(jω ) = 1/(j^α·ω^α) , ω > 0, α ∈ Z
A(ω ) = 1/ω^α , ω > 0   (6.4.6)
L(ω ) = −20·α·lg ω   (6.4.7)
ϕ(ω ) = −α·π/2   (6.4.8)
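Relations (6.4.7)-(6.4.8) can be checked with a small Python sketch (the helper names are illustrative); the dB value agrees with a direct evaluation of |1/(jω)^α|:

```python
import math

def L_dB(alpha, w):
    """Magnitude in dB of H(s) = 1/s^alpha, eq. (6.4.7)."""
    return -20.0 * alpha * math.log10(w)

def phase(alpha):
    """Phase of H(s) = 1/s^alpha for w > 0, eq. (6.4.8)."""
    return -alpha * math.pi / 2.0

# simple integrator (alpha = 1): slope -20 dB/dec, constant phase -pi/2
```

At ω = 10 the simple integrator gives −20 dB and the double integrator −40 dB, i.e. the slopes −20α dB/dec of the figure below.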
[Figure: magnitude characteristics A(ω ), L(ω ) of 1/s^α — straight lines through ω = 1 with slopes −40 dB/dec (α = 2), −20 dB/dec (α = 1), +20 dB/dec (α = −1), +40 dB/dec (α = −2) — and the constant phase characteristics ϕ(ω ) = −α·π/2: π (α = −2), π/2 (α = −1), −π/2 (α = 1), −π (α = 2).]
Figure no. 6.4.3. Frequency characteristics of the first order derivative element: the asymptotic magnitude is 0 dB up to ωT = 1 and rises with slope +20 dB/dec beyond it, with a maximum error of 3 dB at the breaking point; the asymptotic phase ϕa(ω ) goes from 0 to π/2 through π/4 at ωT = 1, with a maximum error of about 6 degrees (at ωT = 0.2 and ωT = 5).
Figure no. 6.4.4. The complex frequency characteristic: the polar plot starts at P = 1 for ω = 0; at a current frequency ω = ω1 the vector H(jω1) has magnitude A(ω1) and phase ϕ(ω1).
From the Bode and complex frequency characteristics the following observations can be made:
1. As the frequency increases, the output, as a sinusoidal time signal, is more and more in advance with respect to the input.
2. When ω → ∞ the output is in advance with respect to the input by a time interval of T/4.
3. At the breaking point ωT = 1 ⇔ ω = 1/T the output, as a sinusoidal time signal, is T/8 in advance, which corresponds to a phase of π/4.
Figure no. 6.4.6. Magnitude and phase frequency characteristics of the second order element for several damping factors: asymptotic slope +40 dB/dec; for ξ ≥ √2/2 the magnitude is monotone, while for 0 < ξ < √2/2 it shows an extremum A(ωn), Am around ωrezT (in the Matlab plot, L(ωn) = −13.2974 dB at ωrez = 0.09039); the phase goes from 0 to 180 degrees, passing through 90 degrees.
We note y = (ωT)² ⇒ (1 − y)² + 4ξ²y = 1 ⇒ y = 0 or y = 2(1 − 2ξ²)
(ωT)² = 2(1 − 2ξ²) ⇒ ωt = ωc = (1/T)·√2·√(1 − 2ξ²)
b. The asymptotic phase characteristic.
We said that:
          arctg(2ξωT/(1 − (ωT)²))     , ωT < 1
ϕ(ω ) =   π/2                         , ωT = 1     (6.4.35)
          π + arctg(2ξωT/(1 − (ωT)²)) , ωT > 1
Denoting,
g(x) = arctg(2ξωT/(1 − (ωT)²))|ωT=10^x = arctg(2ξ·10^x/(1 − 10^(2x)))   (6.4.36)
we have
G(x) = ϕ(ω )|ωT→10^x
as a function of x in a linear space X with three branches,
          arctg(2ξ·10^x/(1 − 10^(2x)))     , x < 0
G(x) =    π/2                              , x = 0
          π + arctg(2ξ·10^x/(1 − 10^(2x))) , x > 0
The tangent in x = 0, of slope ln10/ξ, is the line
y = (ln10/ξ)·x + π/2 ; y = π ⇒ x2 = (π·ξ)/(2·ln10) ⇒ ω2T = 10^(x2)
and similarly y = 0 ⇒ x1 = −x2 ⇒ ω1T = 1/(ω2T), so that
          0                              , ωT < ω1T
ϕa(ω ) =  π/2 + (ln(10)/ξ)·lg(ωT)        , ωT ∈ [ω1T, ω2T]     (6.4.38)
          π                              , ωT > ω2T
Figure no. 6.4.7. The exact and asymptotic phase characteristics ϕ(ω ), ϕa(ω ) versus ωT, with the points A (ωT = 1, ϕ = π/2), B (ωT = ω1T, ϕ = 0), C (ωT = 10, ϕ = π/2 + ln(10)/ξ) and D (ωT = ω2T, ϕ = π); the middle segment has slope ln(10)/ξ.
Algorithm for asymptotic phase drawing:
- mark A [(ωT) = 1; ϕ = π/2]
- mark C [(ωT) = 10; ϕ = π/2 + ln(10)/ξ]
- connect A and C and determine ω1T and ω2T at the intersections with ϕ = 0 (point B) and ϕ = π (point D), respectively.
So the exact phase characteristic has as semitangents the segments AB and AD.
[Figure: the complex frequency characteristic — polar plot of H(jω1) with magnitude A(ω1) and phase ϕ(ω1), starting from P = 1 at ω = 0; at the breaking point the imaginary part equals Q = 2ξ.]
H(s) = 1/(Ts + 1) = (1/T)/(s + p) ;   (6.4.40)
The pole is −p = −1/T.
H(jω ) = 1/(1 + jωT) = 1/(1 + (ωT)²) − j·ωT/(1 + (ωT)²)   (6.4.41)
P(ω ) = 1/(1 + (ωT)²) , Q(ω ) = −ωT/(1 + (ωT)²)   (6.4.42)
A(ω ) = 1/√(1 + (ωT)²)   (6.4.43)
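Relations (6.4.41)-(6.4.43) can be checked numerically; this Python sketch (function name illustrative) confirms the familiar −3 dB point of the first order lag at ωT = 1:

```python
import math

def first_order_lag(w, T):
    """P, Q and A of H(jw) = 1/(1 + jwT), eqs. (6.4.41)-(6.4.43)."""
    d = 1.0 + (w * T) ** 2
    P = 1.0 / d
    Q = -w * T / d
    A = 1.0 / math.sqrt(d)
    return P, Q, A

# at the breaking point wT = 1: P = 0.5, Q = -0.5, A = 1/sqrt(2) (about -3 dB)
P, Q, A = first_order_lag(1.0, 1.0)
```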
[Figure: frequency characteristics of the first order lag element — the asymptotic magnitude is 0 dB up to ωT = 1 and falls with slope −20 dB/dec beyond it, the maximum error being −3 dB at the breaking point; the phase ϕ(ω ) and its asymptote ϕa(ω ) go from 0 to −π/2, through −π/4 at ωT = 1.]
Figure no. 6.4.10. The complex frequency characteristic of the first order lag element: a semicircle from 1 (ω = 0) towards the origin (ω → ∞), with imaginary part −0.5 at ω = 1/T.
P(ω ) = (1 − (ωT)²)/[(1 − (ωT)²)² + 4ξ²(ωT)²] ; Q(ω ) = −2ξωT/[(1 − (ωT)²)² + 4ξ²(ωT)²]   (6.4.54)
ωc = ωt = √2·ωrez   (6.4.58)
Figure no. 6.4.13. Frequency characteristics of the second order lag element for several damping factors: asymptotic slope −40 dB/dec; for 0 < ξ < √2/2 the magnitude shows a resonance peak A(ωn), Am at ωrezT, while for ξ ≥ √2/2 it is monotone; the phase ϕ(ω ) and its asymptote ϕa(ω ) (middle segment of slope −ln(10)/ξ through A, between points C and D) go from 0 to −π, with ϕ(ωrez) ∈ (−π/2, 0).
Example.
H(s) = 1/(100s² + 2s + 1) = (0.1)²/(s² + 2ξωn·s + ωn²) ; ωn = 0.1 , T = 10
2ξT = 2 ⇔ ξ = 2/(2T) = 0.1 ; 20·lg(1/(2ξ)) = 20·lg(1/0.2) ≈ 14
Am = 1/(2ξ·√(1 − ξ²)) = 5.02 ; Lm = 20·lg(5.02) = 14.01
ωrez = (1/T)·√(1 − 2ξ²)
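The peak values of this example can be reproduced numerically; the short Python sketch below evaluates the analytic formulas and cross-checks the peak by scanning |H(jω)| on a fine frequency grid:

```python
import math

xi, wn = 0.1, 0.1      # H(s) = wn^2/(s^2 + 2*xi*wn*s + wn^2), T = 1/wn = 10
Am   = 1.0 / (2 * xi * math.sqrt(1 - xi**2))   # resonance peak, about 5.02
Lm   = 20 * math.log10(Am)                     # peak in dB, about 14.0
wrez = wn * math.sqrt(1 - 2 * xi**2)           # resonance frequency

def A(w):
    """|H(jw)| of the second order lag element."""
    return abs(wn**2 / ((1j * w)**2 + 2 * xi * wn * (1j * w) + wn**2))

# brute-force peak search on a grid of 20000 points up to 2*wn
peak = max(A(wn * k / 10000.0) for k in range(1, 20001))
```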
[Figure no. 6.4.14: Matlab drawn Bode diagrams for H(s) = 1/(100 s^2 + s + 1): magnitude peak L(ωn) = 13.2974 dB at ωrez = 0.09039, final slope −40 dB/dec; phase decreasing from 0 through −90 to −180 degrees between ω1 and ω2, for ω from 10⁻² to 10⁰ (1/sec).]
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.5. Frequency Characteristics for Series Connection of Systems.
We define,
AR(ω ) = |R(jω )| = |K|·ω^(−α)   (6.5.21)
LR(ω ) = 20·lg(AR(ω )) = 20·lg(|K|) − α·20·lg(ω )   (6.5.22)
ϕR(ω ) = arg(R(jω )) = { −α·π/2 if K ≥ 0 ; −α·π/2 − π if K < 0 }   (6.5.23)
The second factor G(s),
G(s) = [Π(i=1..m1)(θi·s + 1) · Π(i=m1+1..m1+m2)(θi²s² + 2ζiθi·s + 1)] / [Π(i=1..n1)(Ti·s + 1) · Π(i=n1+1..n1+n2)(Ti²s² + 2ξiTi·s + 1)]   (6.5.24)
has no effect on the frequency characteristics of the series connection when ω → 0. Indeed,
G(0) = 1 , lim(ω→0)|G(jω )| = 1 , lim(ω→0) arg(G(jω )) = 0 .   (6.5.25)
[Figure: the asymptotic magnitude components La3, La4 (from 20 dB down to −60 dB) and the asymptotic phase components ϕa2 … ϕa6 (between 90 and −180 degrees) of the series connection, for ω from 10⁻³ to 10².]
H(s) = 100(s + 10)/(s(100s + 1)) = 1000 · (1/s) · (0.1s + 1) · 1/(100s + 1)
with the factors H1 = 1000, H2(s) = 1/s, H3(s) = 0.1s + 1, H4(s) = 1/(100s + 1).
H1(jω ) = 1000 ; L1 = 20·lg(10³) = 60 dB.
H2(s) = 1/s ⇒ H2(jω ) = 1/(jω ) ; A2(ω ) = |H2(jω )| = 1/ω
L2 = 20·lg A2(ω ) = 20·lg(ω⁻¹) = −20·lg ω = −20·x
H3(s) = 0.1s + 1 ; H4(s) = 1/(100s + 1)
The Bode diagrams drawn using Matlab are depicted in Fig. 6.5.2.
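Because magnitudes in dB of series-connected factors simply add, the factored form can be checked against a direct evaluation of |H(jω)|; a small Python sketch at ω = 1 (variable names illustrative):

```python
import math

w = 1.0
# dB contributions of the four factors of H(s) = 100(s+10)/(s(100s+1))
L1 = 20 * math.log10(1000)                 # constant gain 1000
L2 = -20 * math.log10(w)                   # 1/s
L3 = 20 * math.log10(abs(0.1j * w + 1))    # 0.1s + 1
L4 = -20 * math.log10(abs(100j * w + 1))   # 1/(100s + 1)
L_sum = L1 + L2 + L3 + L4

# direct evaluation of |H(jw)| in dB
H = 100 * (1j * w + 10) / ((1j * w) * (100j * w + 1))
L_direct = 20 * math.log10(abs(H))
```

Both evaluations give the same value (about 20.04 dB at ω = 1), confirming the additive construction of the asymptotic diagram.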
[Figure no. 6.5.2: Matlab drawn Bode diagrams of H(s) — magnitude decreasing from about 150 dB, phase between −80 and −180 degrees, for ω from 10⁻⁴ to 10².]
Their Bode diagrams and step responses are depicted in Fig. 6.5.3.
[Figure no. 6.5.3: Bode diagrams (magnitude and phase, ω from 10⁻² to 10⁰ 1/sec) and step responses (t up to 350 sec) of H1(s) = 1/(100 s^2 + s + 1) and H2(s) = 1/(100 s^2 + 5 s + 1).]
1. Evaluate the poles and zeros, (zi, pi, ωn,i, ωn,k), ∀i, k, and place them on the plotting sheet on a logarithmic scale. In such a way we determine the system frequency bandwidth of interest.
2. Mark each zero by a small circle and each pole by a small cross. Complex zeros/poles are marked by double circles/crosses.
3. Choose a starting frequency ω0 inside the system frequency bandwidth.
4. Evaluate a starting point M = M(ω0, LR0) for the asymptotic magnitude characteristic, where LR0 is obtained from (6.5.22),
LR0 = LR(ω0) = 20·lg(AR(ω0)) = 20·lg(|K|) − α·20·lg(ω0)   (6.5.27)
If possible choose ω0 = 1 ⇒ LR0 = 20·lg(|K|).
5. Draw a straight line through the point M having the slope −(α·20) dB/dec, until the first breaking point abscissa is reached.
6. Keep drawing segments of straight lines between two consecutive breaking points, each with slope equal to the previous slope plus the slope change determined by the left side breaking point.
7. The same procedure can be applied for the phase characteristic, but we must take into consideration that the asymptotic phase characteristic keeps the same slope in each frequency interval
ω ∈ [ωi/5 , ωi·5] , ωi = zi, pi, ωni   (6.5.28)
Example.
Let us draw the magnitude characteristic only, for the system of Ex. 6.5.2.1.
[Figure: the asymptotic magnitude characteristic starts at the point M with LR0 = 81.9382 dB and initial slope S1 = −20 dB/dec; at the first breaking point the slope becomes S2 = S1 + 20 = −20 + 20 = 0 dB/dec; ω axis from 10⁻³ to 10².]
7. SYSTEM STABILITY
7. SYSTEM STABILITY. 7.2. Algebraical Stability Criteria.
So called stability criteria are used, which express necessary and sufficient conditions for stability.
Given a polynomial L(s) (it could be ∆(s)), we call it a stable polynomial (or Hurwitz polynomial) if the system having that polynomial as the denominator of the transfer function (or as characteristic polynomial) is a stable one. There are two types of criteria:
- algebraical criteria, which operate on the polynomial coefficients or roots;
- frequency stability criteria, which operate on the frequency characteristic of the system.
The table is continued horizontally and vertically until only zeros are obtained.
The system is asymptotically stable if and only if all the elements in the first column of the Routh table have the same sign and none of them is zero.
The number of poles, roots of the equation L(s) = 0, lying in the right half-plane is equal to the number of sign changes in the first column of the Routh table.
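The table construction and the sign-change count can be sketched in Python (a minimal implementation for the regular case, with no zero pivots; function names are illustrative):

```python
def routh_first_column(c):
    """First column of the Routh table of L(s) = c[0]s^n + ... + c[-1].
    Assumes the regular case: no zero appears in the first column."""
    r1 = [float(x) for x in c[0::2]]      # row of s^n
    r2 = [float(x) for x in c[1::2]]      # row of s^(n-1)
    r2 += [0.0] * (len(r1) - len(r2))
    col = [r1[0]]
    while any(r2):
        col.append(r2[0])
        # r_next[i] = -(1/r2[0]) * det[[r1[0], r1[i+1]], [r2[0], r2[i+1]]]
        nxt = [-(r1[0] * r2[i + 1] - r2[0] * r1[i + 1]) / r2[0]
               for i in range(len(r2) - 1)] + [0.0]
        r1, r2 = r2, nxt
    return col

def rhp_roots(c):
    """Number of right half-plane roots = sign changes in the first column."""
    col = routh_first_column(c)
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)
```

For L(s) = s³ + 2s² + 3s + 1 the first column is all positive (stable), while s³ + s² + s + 5 shows two sign changes, i.e. two right half-plane roots.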
The roots of the auxiliary polynomial could be a zero root, pure imaginary roots, real roots or complex roots.
If the polynomial has real coefficients, such roots appear as conjugated pairs.
It can be proved that a polynomial with a row of zeros in the Routh table has the auxiliary polynomial as a common factor:
L(s) = L1(s)·U(s)   (7.2.13)
The Routh criterion is convenient because it involves many determinants of second order only, whereas the Hurwitz criterion asks for higher order determinants.
r31 = −(1/p)·det[1, ω²; p, pω²] = −(1/p)·(pω² − pω²) = 0 ; r32 = −(1/p)·det[1, 0; p, 0] = 0.
We observe that all the elements in the third row, i = 3, are zero. The attached (auxiliary) polynomial U(s) is,
U(s) = p·s² + pω² = p(s² + ω²)
L(s) = L1(s)·U(s) ; U'(s) = 2ps + 0
r41 = −(1/(2p))·det[p, pω²; 2p, 0] = pω²
We can determine L1(s), which is a factor without symmetrical roots.
[Figure: closed loop system with input v, error e, the controller HR(s) = k(s+1)/s on the direct path followed by HF(s) = 1/((s+2)(s+3)), and unity negative feedback from the output y.]
r31 = −(1/5)·det[1, k+6; 5, k] = −(k − 5k − 30)/5 = (4k + 30)/5
r41 = −(5/(4k + 30))·det[5, k; (4k+30)/5, 0] = k
7. SYSTEM STABILITY. 7.3. Frequency Stability Criteria.
[Figure: complex plane plots of the open loop characteristics H1d(jω ), H2d(jω ), H3d(jω ) (axes Re(Hd(jω )), Im(Hd(jω ))), starting at ω = 0 and approaching the origin as ω → ∞, together with the critical point −1 + j0 and the circle R = 1; the crossover frequency ωc = ωc2 and the phase margins γ2 = −π and γ3 = ϕ(ωc3) are marked.]
8.1. Z - Transformation.
8. DISCRETE TIME SYSTEMS. 8.1. Z - Transformation.
Y(z) = Z{y(t)} = Σ(k=0..∞) y(kT+)·z^(−k) , |z| > Rc = e^(σ0·T)   (8.1.5)
where the string of numbers (yk)k≥0 to which the Z-transformation is applied is
represented by the values of this function yk=y(kT +).
Here σ0 is the convergence abscissa of y(t).
The process of getting the string of numbers yk, the values of a time function y(t) for t = kT (or t = kT+), is called the "sampling process", the variable T being the "sampling period".
Vice versa, a time function y(t) = ycov(t) can be obtained from a string of numbers (yk)k≥0, simply by replacing, for example, k by t/T.
This is one of the covering functions, a smooth function which passes through the points (k, yk).
Such a process is called a "uniformly covering process":
y(t) = ycov(t) = yk|k→t/T .   (8.1.6)
Example.
Suppose yk = k/(k² + 1), k ≥ 0. We can create a time function
y(t) = ycov(t) = (t/T)/((t/T)² + 1).
This time function has a Laplace transform Y(s). Through this covering process, the string values are forced to be considered equally spaced in time, even if the string, maybe, has nothing to do with the time variable.
Examples.
1. Y(z) = z/(z − 1).
yk = Z⁻¹{Y(z)} = Σ(poles of Y(z)·z^(k−1)) Rez[ (z/(z − 1))·z^(k−1) ] , k ≥ 0 being a parameter.
We must check whether the number of poles is different for different values of k.
In this case we can see that (z/(z − 1))·z^(k−1) has one simple pole z = 1 for any k ≥ 0, so
yk = 1 , ∀ k ≥ 0 .
Example.
We know that
yk = λ^k ⇒ Y(z) = Σ(k=0..∞) λ^k·z^(−k) = Σ(k=0..∞) (λz⁻¹)^k = 1/(1 − λz⁻¹) = z/(z − λ) .
Example:
Y(z) = (z² + 1)/(z² + 2z + 1) = 1 − 2z⁻¹ + 4z⁻² + ... ⇒
y0 = 1; y1 = −2; y2 = 4; ...
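The long division producing this power series in z⁻¹ is mechanical and easy to automate; a small Python sketch (illustrative function name) reproduces the coefficients above:

```python
def z_series(num, den, n):
    """First n coefficients y0, y1, ... of Y(z) = num/den, where num and
    den are coefficient lists in powers of z^-1 and den[0] != 0."""
    y, rem = [], list(map(float, num)) + [0.0] * n
    for k in range(n):
        yk = rem[k] / den[0]
        y.append(yk)
        for j, dj in enumerate(den):       # subtract yk * den shifted by k
            if k + j < len(rem):
                rem[k + j] -= yk * dj
    return y

# Y(z) = (z^2+1)/(z^2+2z+1) = (1 + z^-2)/(1 + 2z^-1 + z^-2)
coeffs = z_series([1, 0, 1], [1, 2, 1], 5)   # [1.0, -2.0, 4.0, -6.0, 8.0]
```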
if the time limit exists, that is, if the function [(1 − z⁻¹)Y(z)] has no pole on or outside the unit circle in the z-plane.
−Tz·dY(z)/dz = Σ(k=0..∞) (kT)·y(kT)·z^(−k) = Z{(kT)·y(kT)} = Z{t·y(t)}
the left hand side being denoted [−Tz·(d/dz)Y(z)]^[1].
−Tz·(d/dz)[−Tz·dY(z)/dz] = Σ(k=0..∞) k²T²·y(kT)·z^(−k) = Z{t²·y(t)}
the left hand side being denoted [−Tz·(d/dz)Y(z)]^[2].
By the operator definition we know that,
[−Tz·(d/dz)Y(z)]^[p] = −Tz·(d/dz)[−Tz·(d/dz)Y(z)]^[p−1] .
Suppose that
[−Tz·(d/dz)Y(z)]^[p−1] = Z{t^(p−1)·y(t)} = Σ(k=0..∞) (kT)^(p−1)·y(kT)·z^(−k) .
Examples.
1. Let w(t) be
w(t) = t = t·1(t) = t·y(t) , where y(t) = 1(t).
But we know Y(z) = Z{1(t)} = z/(z − 1), so
W(z) = Z{t·y(t)} = −Tz·(d/dz)(z/(z − 1)) = −Tz·(−1/(z − 1)²) = Tz/(z − 1)² ,
the same result we obtained through other methods.
2. Z{t²·1(t)} = Z{t·(t·1(t))} = −Tz·(d/dz)[Tz/(z − 1)²] =
= −Tz·[T(z − 1)² − 2Tz(z − 1)]/(z − 1)⁴ = T²z(z + 1)/(z − 1)³
This theorem, one of the most important for system theory, states that the Z-transform of the so called "convolution sum of two strings" is just the algebraical product of the corresponding Z-transforms:
Z{ Σ(i=0..k) ya(iT)·yb((k − i)T) } = Ya(z)·Yb(z) .   (8.1.26)
Vice versa, the inverse Z-transform of a product of two Z-transforms is a convolution sum,
Z⁻¹{Ya(z)·Yb(z)} = Σ(i=0..k) ya(iT)·yb((k − i)T) = Σ(i=0..k) ya((k − i)T)·yb(iT)   (8.1.27)
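For finite strings the theorem can be verified exactly, since both sides are polynomials in z⁻¹; a short Python sketch (illustrative names):

```python
def conv(a, b):
    """Convolution sum w_k = sum_i a_i * b_(k-i) of two finite strings."""
    w = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            w[i + j] += ai * bj
    return w

def Z(y, z):
    """Z-transform of a finite string, evaluated at a point z."""
    return sum(yk * z ** (-k) for k, yk in enumerate(y))

ya, yb = [1.0, 2.0], [3.0, 4.0]
w = conv(ya, yb)        # [3.0, 10.0, 8.0]
```

Evaluating both sides of (8.1.26) at any z ≠ 0 gives the same number, e.g. Z(w, 2) = Z(ya, 2)·Z(yb, 2) = 10.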
W(z) = Σ(i=0..∞) Σ(p=−i..∞) [ya(iT)·yb(pT)·z^(−p)·z^(−i)]
= Σ(i=0..∞) [ya(iT)·z^(−i)·( Σ(p=−i..∞) yb(pT)·z^(−p) )]
Since yb(pT) = 0 for p < 0,
W(z) = Σ(i=0..∞) [ya(iT)·z^(−i)·( Σ(p=0..∞) yb(pT)·z^(−p) )] = Σ(i=0..∞) [ya(iT)·z^(−i)·Yb(z)]
= [ Σ(i=0..∞) ya(iT)·z^(−i) ]·Yb(z) = Ya(z)·Yb(z) , q.e.d.
8. DISCRETE TIME SYSTEMS. 8.2. Pure Discrete Time Systems.
[Figure: a pure discrete-time system — the input string of numbers (uk)k≥0 (it can be a vector) is transformed into the output string of numbers (yk)k≥0 (it can be a vector).]
Analytical approach.
This is just an application of the Z-transform. Applying the Z-transform to (8.2.2) and using (8.1.14) we obtain
z(Y(z) − y0) − aY(z) = b·z·(U(z) − u0) .
The Z-transform of the output string can be expressed as
If the moduli of the z-transfer function poles are less than one (inside the unit circle in the z-plane), then the free response vanishes and the forced response goes to the permanent response (the permanent response is the component of the forced response determined by the poles of the input z-transform only).
In this particular step response the input has one pole z = 1 and the permanent response is just b/(1 − a), as we can see from the residues formula. This property is called the stability property.
The forced response for k → ∞ can be determined directly from the expression of Yf(z), by using the final value theorem, without needing to compute the expression of the time response:
yf(∞) = lim(z→1)[(1 − z⁻¹)·Yf(z)] = lim(z→1)[(1 − z⁻¹)·(b/(z − a))·(z/(z − 1))] = b/(1 − a) .
Of course we have to pay something for this simplicity: the validity of the theorem has to be checked ((1 − z⁻¹)·Yf(z) has to be analytic on and outside the unit circle).
The discrete time systems (which are linear and time invariant) can be very easily described in the z-complex plane by using the Z-transformation. Applying the Z-transformation to the difference equation (8.2.11) one can obtain,
Σ(i=0..n) ai·z^i·[Y(z) − Σ(k=0..i−1) yk·z^(−k)] = Σ(i=0..m) bi·z^i·[U(z) − Σ(k=0..i−1) uk·z^(−k)]   (8.2.15)
(for i = 0 the inner sums are empty), from which
Y(z) = H(z)·U(z) + I(z)/L(z) , H(z) = M(z)/L(z)   (8.2.16)
where the first term is the forced response Yf(z) and the second term is the free response Yl(z), with
M(z) = bm·z^m + ... + b1·z + b0   (8.2.17)
L(z) = an·z^n + ... + a1·z + a0   (8.2.18)
where the expression H(z) is called the z-transfer function.
The z-transfer function of a discrete time system is defined as the ratio between the Z-transform of the output and the Z-transform of the input which determined that output, in zero initial conditions, provided this ratio is the same for any input:
H(z) = Y(z)/U(z)|zero init. cond. = M(z)/L(z)   (8.2.19)
The z-transfer function determines only the forced response. The time expression of the forced response is,
yk = Σ(i=0..k) hi·u(k−i)   (8.2.20)
where hk is the inverse Z-transform of H(z), called the weighting function,
hk = Z⁻¹{H(z)} ⇔ H(z) = Z{hk} .   (8.2.21)
where the matrices are: A (n × n); B (n × p); C (r × n); D (r × p).
For: p = 1 ⇒ B → b ; r = 1 ⇒ C → cT.
Having a z-transfer function, the state space description can be obtained as for the continuous time systems, by using the same methods.
Any canonical form from continuous time systems can also be obtained for discrete time systems with the same formulae, only considering the variable z instead of s.
For example, the polynomial
M(s) = b m s m + ... + b 0 , s m → zm
will become
M(z)=bmzm+... +b1z+b0 .
By using the Z-transformation, the state equations (8.2.23) become,
z(X(z) − x 0 ) = AX(z) + BU(z) .
Y(z) = C·Φ(z)·x0 + [ (1/z)·C·Φ(z)·B + D ]·U(z)   (8.2.30)
where,
H(z) = (1/z)·C·Φ(z)·B + D   (8.2.31)
is the so called "Z-transfer matrix".
For single-input and single-output systems, the Z-transfer matrix is just the Z-transfer function.
9. SAMPLED DATA SYSTEMS. 9.1. Computer Controlled Systems.
Sampled data systems are systems in which discrete time systems and continuous time systems interact. We shall start analysing this type of systems by an example, a process whose output is a voltage y(t) ∈ [0, 10] V.
We put k, or kT, as argument for all the variables, considering that the acquisition of the system variables and all the numerical computing are very fast and performed just at the time moment t = kT.
The result of the numerical algorithm, denoted by wNk, is applied to the so called numerical-analogical converter (NAC). The NAC will offer a piecewise constant voltage:
u(t) = KNA·wNk , ∀t ∈ (kT, (k + 1)T] ,   (9.1.3)
[Figure: flowchart of the real time program — initialisation, reading some default variables, a yes/no test on the system, other jobs, and the control algorithm as Task no. 8.]
Initialise: X1, X2; a1, a2, a3, a4; b1, b2; c1, c2   % if it is required by a flag
Read R (Let R = IN(5))        % R = rNk = kAN·r(kT)
Read Y (Let Y = IN(6))        % Y = yNk = kAN·y(kT)
W = c1*X1 + c2*X2             % W = wNk = c1·x1k + c2·x2k
Write W (OUT(W,10))           % u(t) = W , ∀ t ∈ (kT, (k+1)T]
E = R - Y                     % E = eNk = kAN·e(kT) ; e(t) = r(t) − y(t)
Z = X1                        % Z = x1k is kept as additional variable, to be able to compute x2k+1
X1 = a1*Z + a2*X2 + b1*E      % x1k+1 = a1·x1k + a2·x2k + b1·eNk
X2 = a3*Z + a4*X2 + b2*E      % x2k+1 = a3·x1k + a4·x2k + b2·eNk
Return.
This corresponds, for the control algorithm, to the state equations:
xk+1 = A·xk + bN·eNk   (9.1.4)
wNk = cT·xk + dN·eNk , (dN = 0)   (9.1.5)
Usually in process computers there is only one ANC. Several analogical signals are converted to numbers by using so called analogical multiplexers (MUX).
An analogical-numerical converter (ANC) transforms an analogical signal into a string of numbers represented on a number of bits. If the converter has p bits and the analogical signal, say y, ranges from ymin to ymax, then
y ∈ [ymin, ymax) , yN ∈ [0, 2^p − 1) ⇒ kAN = 2^p/(ymax − ymin)   (9.1.6)
The physical structure of an ANC is a matter of hardware, well defined and well known.
From a behavioural point of view, as an oriented object, in any ANC there are two types of phenomena:
- time conversion
- amplitude conversion
Time conversion expresses the conversion of a time function into a string of numbers. For example, from y(t) the string (sequence) yk = y(kT) is obtained. This is the so called sampling process, with the sampling period T.
The variable yk from the string has the same dimension as y(t). If, for example, y(t) is a voltage taking values inside a domain, then yk is also a voltage.
This sampling process is represented in a principle diagram by a symbol like in Fig. 9.1.3; we can denote it SPS, the "sampling physical symbol".
It is not a mathematical operator; it only lets one understand that a time function is converted into a string of numbers representing its values at some time moments t = kT, and nothing else.
Amplitude conversion equivalently expresses all the phenomena by which yk is converted into a number yNk represented in a computer by a
[Figures: the staircase amplitude conversion characteristic of the ANC over [ymin, ymax), with slope kAN, its inverse with slope 1/kAN, and the quantization noise δN.]
same number:
δy = 1/kAN = (ymax − ymin)/2^p   (9.1.7)
If we want to represent the amplitude conversion noise, then the conversion part of an ANC can be represented like in Fig. 9.1.6. When p is large enough (p = 12 for example) this noise can be neglected.
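The conversion gain (9.1.6) and quantization step (9.1.7) are worth a quick numeric check; for a 12-bit converter over the 0–10 V range of the example:

```python
# p-bit ANC over [ymin, ymax): conversion gain (9.1.6) and step (9.1.7)
p, ymin, ymax = 12, 0.0, 10.0
kAN     = 2 ** p / (ymax - ymin)     # 409.6 numbers per volt
delta_y = 1.0 / kAN                  # = (ymax - ymin)/2^p, about 2.44 mV
```

The step of roughly 2.44 mV on a 10 V range illustrates why the quantization noise can usually be neglected at p = 12.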
The numerical-analogical converter (NAC) converts a string of numbers wNk into a piecewise constant time function u(t), as defined in (9.1.3).
[Figure: input-output equivalence of the acquisition structure — sampling and converting r(t) and y(t) separately (R = kAN·r(kT), Y = kAN·y(kT), y being the feedback variable) and computing E = R − Y is equivalent to sampling and converting the error e(t) = r(t) − y(t) directly, eNk = kAN·e(kT).]
9. SAMPLED DATA SYSTEMS. 9.2. Mathematical Model of the Sampling Process.
We can plot the sampled signal as a string of arrows whose lengths are
just the area of the corresponding Dirac impulse as in Fig. 9.2.1.
Figure no. 9.2.1. The sampled signal y*(t): a string of arrows at t = 0, T, 2T, ..., kT, whose lengths equal the areas y(kT+) of the corresponding Dirac impulses, drawn against the underlying function y(t).
This signal y*(t) has a Laplace transform Y*(s), but it contains
information only on the values yk=y(kT+).
This process is expressed by an operator called "sampling operator",
denoted by the symbol { }*. For example we write,
{y(t)}*=y*(t),
where y*(t) is defined as above.
SAMPLING OPERATOR - MATHEMATICAL MODEL
We observe from (9.2.5) that the Z-transform of a signal is just the Laplace transform of the sampled signal, replacing e^(sT) by z, that is taking s = (1/T)·ln z ,
Y*(s)|s=(1/T)·ln z = Σ(k=0..∞) yk·z^(−k) = Z{yk} = Z{y(t)} = Y(z)   (9.2.8)
p(t) = Σ(k=0..∞) δ(t − kT)   (9.2.10)
For t = kT, δ(t − kT) is not defined as a function, but its integral (its area) is defined, and it admits a Laplace transform. Also p(t) is a generalised function that admits a Laplace transform.
Because y*(t) from (9.2.9) is a product of two time functions (p(t) being a generalised one), which admit Laplace transforms, we can use the complex convolution theorem.
[Figure: the integration path Re(ξ) = c, from c − j∞ to c + j∞ in the ξ-plane, closed either by the contour Γ1 (radius R = ∞, enclosing the poles of Y(ξ) in the left half-plane) or by the contour Γ2 (enclosing the right half-plane, which contains the point s and the poles s ± jnω).]
P(s) = L{ Σ(k=0..∞) δ(t − kT) } = Σ(k=0..∞) e^(−kTs) = 1/(1 − e^(−Ts))   (9.2.13)
P(s − ξ) = 1/(1 − e^(−T(s−ξ))) ; Re(s) > 0 , Re(s) > Re(ξ)
We can say that,
Y*(s) = (1/2πj)·∫(c−j∞..c+j∞) Y(ξ)·P(s − ξ)·dξ
By completing the vertical line with a contour Γ1 containing all the left half-plane, and noting that the above integral is zero on Γ1, we obtain an integral on a closed contour which contains all the poles of Y(ξ), so the residues theorem can be applied.
∫(c−j∞..c+j∞) + ∫(Γ1) = ∮ (closed contour integral, the Γ1 part being zero) ⇒
Y*(s) = Σ(poles of Y(ξ)) Rez[ Y(ξ)·1/(1 − e^(−T(s−ξ))) ]
Because the Z-transform is just the result of the variable change in Y*(s):
Y(z) = Y*(s)|s=(1/T)·ln z = Z{y(t)} = Σ(poles of Y(ξ)) Rez[ Y(ξ)·1/(1 − z⁻¹·e^(Tξ)) ] .   (9.2.15)
If instead we complete the vertical line (c − j∞, c + j∞) with a contour Γ2 containing all the right half-plane, the above integral can again be computed on a closed contour by using the residues theorem. Inside this closed contour there are only the poles of P(s − ξ). These poles are
ξn = s + jnωs , where ωs = 2π/T = 2πfs .   (9.2.16)
By using the residues theorem the following formula is obtained:
Y*(s) = (1/T)·Σ(n=−∞..+∞) Y(s + jnωs) .   (9.2.17)
Two properties of Y*(s) can be mentioned:
1) Y*(s) is a periodical function with the period jωs, that is,
Y*(s) = Y*(s + jωs) = Y*(s + jnωs) , ∀s   (9.2.18)
2) If Y*(s) has a pole sk then it also has the poles sk + jnωs , ∀n ∈ Z.
[Figure: the spectrum |Y*(jω)| — periodic repetitions (1/T)·|Y(jω + jnωs)| of the base spectrum (1/T)·|Y(jω)|, centred at multiples of ωs; an ideal low-pass filter with cutoff ωc ≤ ωs/2 recovers the base band (−ωc, ωc).]
9. SAMPLED DATA SYSTEMS. 9.3. Sampled Data Systems Modelling.
because (H(s)·U*(s))* = H*(s)·U*(s).
This time function is the response of the system to the unit Dirac impulse, which means it is a weighting function and its Laplace transform is a transfer function,
L{he0(t)} = He0(s) = 1/s − e^(−Ts)/s ⇒ He0(s) = (1 − e^(−Ts))/s   (9.3.23)
This is the transfer function of the zero order holder.
Application. Let us suppose that U(z) = z/(z − 1) (we have applied a unit step function) and H(s) = b/(s + a). By using the above relations we can easily determine the response of this system:
Z{H(s)/s} = Z{b/(s(s + a))} = Σ(poles of b/(ξ(ξ+a))) Rez[ (b/(ξ(ξ + a)))·1/(1 − z⁻¹·e^(Tξ)) ] = (b/a)·1/(1 − z⁻¹) + (−b/a)·1/(1 − z⁻¹·e^(−aT))
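The two geometric series on the right correspond to the samples yk = (b/a)(1 − e^(−akT)), which are exactly the values of the continuous step response at t = kT; a small Python sketch confirms the agreement (numbers chosen arbitrarily for illustration):

```python
import math

b, a, T = 1.0, 0.5, 0.2
# samples of the continuous step response y(t) = (b/a)(1 - e^(-a t)) at t = kT
y_cont = [(b / a) * (1 - math.exp(-a * k * T)) for k in range(6)]

# inverse Z-transform of (b/a)/(1 - z^-1) - (b/a)/(1 - z^-1 e^(-aT)):
# two geometric series, term by term y_k = (b/a)*1^k - (b/a)*(e^(-aT))^k
y_z = [(b / a) * 1.0 ** k - (b / a) * math.exp(-a * T) ** k for k in range(6)]
```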
With this notation, the block diagram from Fig. 9.3.9. is represented in
Fig. 9.3.10. which is the standard block diagram, in the sense of system theory,
of the computer controlled system.
If we are able, we can compute the inverse Laplace transform of Y(s) from (9.3.40), if r(t), and as a result R*(s), is given. Unfortunately, direct computing is a very difficult task.
By using some special methods, as for example
"the modified Z-transform",
"the time approach of discrete systems",
it is possible to compute y(t) for any t ∈ R.
If we want to compute the values of y(t) only for t = kT, it is enough to compute Y*(s) and Y(z). So, we can get Y*(s) by applying to (9.3.40) the sampling rules from (9.3.1), (9.3.12).
Y(s) = G(s)·[HR*(s)/(1 + HR*(s)·G*(s))]·R*(s) ⇒

Y*(s) = [G*(s)·HR*(s)/(1 + G*(s)·HR*(s))]·R*(s)   (9.3.41)

where
G*(s) = [(1 − e^(−Ts))·HF(s)/s]*
The z-transform Y(z) can be simply obtained from (9.3.41) by the substitution s = (1/T)·ln z, getting,
Y(z) = [HR(z)·G(z)/(1 + HR(z)·G(z))]·R(z)   (9.3.42)
where:
HR(z) = kAN·kNA·D(z)
G(z) = (1 − z⁻¹)·Z{HF(s)/s}
We observe from (9.3.42) that, for this structure of a computer controlled system, a closed loop z-transfer function Hv(z), as the ratio between the z-transform of the output and the z-transform of the input, can be determined,
Hv(z) = Y(z)/R(z) = HR(z)·G(z)/(1 + HR(z)·G(z))   (9.3.43)
This relation can be expressed by a block diagram in the z-complex plane as in Fig. 9.3.11.
[Figure: block diagram in the z-plane — R(z) enters a summing point with negative feedback from Y(z); the error E(z) drives HR(z), whose output W(z) drives G(z), producing Y(z); the corresponding strings are rk, ek, wk, yk.]
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS. 10.1. Frequency Characteristics Definition.
Figure no. 10.1.2. The input string uk of amplitude Um, coming from u(t) = Um·sin(ωt), and the output string yk of amplitude Ym, shifted by λ = ϕ/(ωT) samples, with w = ωT.
In (10.1.1), we considered the string uk coming from the time function
u(t) = Um·sin(ωt) ,   (10.1.3)
uk = u(kT) ,   (10.1.4)
but it could be just a string, with nothing to do with a time variable,
uk = Um·sin(ak) .   (10.1.5)
Of course we can make the equivalence a = ωT.
Note: The amplitude values Um, Ym are the maximum values of the sine functions u(t), y(t), but not necessarily the maximum values of the strings uk, yk.
For given ω = 2πf, or a, and T = 1/fs, we can measure Um, Ym and λ.
Then we can compute,
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS. 10.2. Relations Between Frequency Characteristics
and Attributes of Z-Transfer Functions.
A(ω ) = Ym/Um   (10.1.6)
ϕ(ω ) = λ·(ωT) or ϕ(ω ) = λ·a   (10.1.7)
If we repeat the experiment for different values of ω (or a) we can plot the curves A(ω ), ϕ(ω ) with respect to ω. These are the so called "magnitude frequency characteristic" and "phase frequency characteristic", respectively, of the pure discrete time system.
Figure no. 10.2.1. The unit circle |z| = 1 in the z-plane is mapped by H(z) into the H(z)-plane: for z = e^(jωT), the magnitude is A(ω) = |H(e^(jωT))| and the phase is ϕ(ω) = arg{H(e^(jωT))}.
ϕ(ω ) = arctg( sin ωT/(1 + cos ωT) ) = arctg( tg(ωT/2) ) = ωT/2   (10.2.21)
Suppose that the sampling frequency is fs = 1000 Hz. With this filter the component of frequency f = 50 Hz passes with the factor
α = A(2πf) = A(100π) = (1/2)·√(2 + 2·cos(100π·0.001)) = 0.9877
The Matlab drawn Bode characteristics for (10.2.19) are depicted in Fig. 10.2.2.
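Assuming (10.2.19) is the two-sample averaging filter H(z) = (1 + z⁻¹)/2, which is consistent with the magnitude expression used above, the factor 0.9877 can be reproduced in Python:

```python
import cmath
import math

T = 1.0 / 1000.0                 # sampling period, fs = 1000 Hz
f = 50.0
wT = 2 * math.pi * f * T         # = 0.1*pi

# closed-form magnitude, as in the text
alpha_formula = 0.5 * math.sqrt(2 + 2 * math.cos(wT))

# direct evaluation on the unit circle (assumed filter H(z) = (1+z^-1)/2)
H = (1 + cmath.exp(-1j * wT)) / 2
alpha_direct = abs(H)
```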
[Figure no. 10.2.2: Matlab drawn Bode diagrams of the filter (10.2.19) — magnitude from 20 dB down to −60 dB and phase between ±100 degrees, for frequency from 10⁻² to 10¹ rad/sec.]
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS. 10.3. Discrete Fourier Transform (DFT).
[Figure: the points z = e^(j·(2π/N)·p), p = 0, 1, ..., N−1, spaced by the angle 2π/N on the unit circle |z| = 1 in the z-plane; the angle of the p-th point is ϕ = ωT = (2π/N)·p.]
This can be applied also to strings of numbers. The Z-transform is,
Y(z) = Σ(k=0..∞) yk·z^(−k) ⇒
YF(p) = Σ(k=0..∞) yk·WN^(−pk) = Y(z)|z=e^(j·(2π/N)·p)   (10.3.9)
Very interesting results are obtained by using the DFT for finite strings of numbers,
{yk}k≥0 , yk = 0 for k ≥ N   (10.3.10)
For such (finite) strings of numbers, the DFT has the form,
YF(p) = DFT{yk} = Σ(k=0..N−1) yk·WN^(−pk)   (10.3.11)
Because of the periodicity of the expression WN, and its form (10.3.7), for finite strings of numbers the so called Inverse Discrete Fourier Transform (IDFT) can be defined, of the form,
yk = IDFT{YF(p)} = (1/N)·Σ(p=0..N−1) YF(p)·WN^(pk)   (10.3.12)
It is very difficult to compute (10.3.11), (10.3.12) directly, but there are several methods to do it, like the algorithm known as "The Fast Fourier Transform", FFT for short, which is rather convenient if N = 2^q, q ∈ Z.
The Fast Fourier Transform is intensively used in the signal processing field.
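The transform pair (10.3.11)-(10.3.12) can be written directly in Python (a naive O(N²) sketch, not the FFT itself); the round trip IDFT(DFT(y)) recovers the original string:

```python
import cmath

def dft(y):
    """Y_F(p) = sum_k y_k * W_N^(-pk), W_N = e^(j*2*pi/N), eq. (10.3.11)."""
    N = len(y)
    W = cmath.exp(2j * cmath.pi / N)
    return [sum(y[k] * W ** (-p * k) for k in range(N)) for p in range(N)]

def idft(Y):
    """y_k = (1/N) * sum_p Y_F(p) * W_N^(pk), eq. (10.3.12)."""
    N = len(Y)
    W = cmath.exp(2j * cmath.pi / N)
    return [sum(Y[p] * W ** (p * k) for p in range(N)) / N for k in range(N)]

y = [1.0, 2.0, 0.0, -1.0]
Y = dft(y)
y_back = idft(Y)
```

An FFT replaces the double loop with an O(N·log N) recursion when N = 2^q, which is why that case is called convenient above.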
Suppose that we have a string of numbers {uk}k≥0 whose DFT is
UF(p) = Σ(k=0..∞) uk·WN^(−pk) .   (10.3.13)
If this string of numbers uk is passed through a dynamic system with the z-transfer function H(z), given by (10.3.1), the output of such a system is
yk = Σ(i=0..k) ui·h(k−i) ⇔ Y(z) = H(z)·U(z) ,   (10.3.14)
and it can be expressed by its DFT as,
YF(p) = HF(p)·UF(p) ,   (10.3.15)
where
UF(p) is the DFT of the input string of numbers uk, (10.3.13),
YF(p) is the DFT of the output string of numbers yk,
YF(p) = Σ(k=0..∞) yk·WN^(−pk) ,
and HF(p) is, by definition, "the DFT transfer function",
HF(p) = H(z)|z=e^(j·(2π/N)·p) = Σ(k=0..N−1) hk·WN^(−kp) .   (10.3.16)
If the input is a finite string
{uk}k≥0 , uk = 0 for k ≥ N ,
and the system is a Finite Impulse Response (FIR) system, (10.2.22) with m = N, then it is convenient to use the FFT to evaluate the system output, both in the time domain and in the frequency domain.
11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS. 11.1. Introduction.
11.1. Introduction.
Having a continuous time system with input u(t) and output y(t), both continuous time functions, the discretization problem means computing a discrete-time model that realises the best approximation of the continuous-time system.
But, as for any discrete-time system, the input to the model is a string of numbers, so we are able to supply the discrete model only with the values of the input at some time moments. The discrete time model output is a string of numbers too.
The discretization problem is: what is the best discrete time model, so that the error ek between the output values of the continuous system at some time moments and the corresponding output string values of the discrete system is minimal?
A graphical image of the discretization process is presented in Fig. 11.1.1.
Figure no. 11.1.1. (The input string u_k applied to the block "Discrete System" produces the output string y_k.)
11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS. 11.2. Direct Methods of Discretization.
Backward approximation.
For a time derivable function x(t), where we denote the sampled values by
x_k = x(kT) ,
the backward approximation of the first derivative is

dx(t)/dt |_{t=kT} ≈ (x_k − x_{k−1}) / T    (11.2.1)
Forward approximation.
For the same x(t) the forward approximation of the first derivative is

ẋ(t)|_{t=kT} ≈ (x_{k+1} − x_k) / T    (11.2.2)
Unfortunately, the forward approximation sometimes leads to unstable models.
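A small numerical experiment illustrates this instability. The Python sketch below (a hypothetical first-order stable system dx/dt = −a·x, not taken from the lectures) applies both approximations (11.2.1) and (11.2.2); for a·T > 2 the forward model diverges while the backward one remains stable:

```python
# Discretize dx/dt = -a*x with the backward (11.2.1) and forward (11.2.2)
# approximations of the derivative; the continuous system is stable for a > 0.
a, T, N = 30.0, 0.1, 20              # note a*T = 3 > 2
x_fwd, x_bwd = 1.0, 1.0
for _ in range(N):
    x_fwd = (1 - a * T) * x_fwd      # forward:  x_{k+1} = (1 - aT) x_k
    x_bwd = x_bwd / (1 + a * T)      # backward: x_k = x_{k-1} / (1 + aT)

assert abs(x_fwd) > 1e5              # the forward Euler model diverges
assert abs(x_bwd) < 1e-10            # the backward Euler model decays to zero
```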
x(t) = x(k_0T) + ∫_{k_0T}^{t} u(τ)dτ = x_{k_0} + ∫_{k_0T}^{t} u(τ)dτ ,    (11.2.6)
The integration process is illustrated in Fig. 11.2.1.
Figure no. 11.2.1. (The integral of u(t) between k_0T and kT, approximated by rectangles of width T.)
The integral is approximated by a sum of forward or backward rectangles
or trapezoids. We denote x(kT) = x_k .
Rectangular backward integral approximation:

x_k = x_{k_0} + T ⋅ Σ_{i=k_0+1}^{k} u_i    (11.2.5)
Trapezoidal backward integral approximation:

x_k = x_{k_0} + T ⋅ Σ_{i=k_0+1}^{k} (u_i + u_{i−1})/2    (11.2.6)
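To see the difference between the two approximations, consider the ramp u(t) = t on [0, kT], whose exact integral is (kT)²/2. A Python sketch (a hypothetical example, not from the lectures) comparing (11.2.5) and (11.2.6):

```python
import numpy as np

T, k = 0.1, 50
u = np.array([i * T for i in range(k + 1)])    # samples of u(t) = t
x0 = 0.0                                       # x_{k0} with k0 = 0

# Rectangular backward approximation (11.2.5)
x_rect = x0 + T * np.sum(u[1:k + 1])
# Trapezoidal approximation (11.2.6)
x_trap = x0 + T * np.sum((u[1:k + 1] + u[0:k]) / 2)

exact = (k * T) ** 2 / 2                       # integral of tau over [0, kT]
assert abs(x_trap - exact) < 1e-9              # trapezoids are exact for a ramp
assert 0 < x_rect - exact < 0.5                # rectangles leave an O(T) error
```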
Y(z)/U(z) = (T/2) ⋅ (z + 1)/(z − 1)    (11.2.8)

which allows us to perform the correspondence between the s-operator and the
z-operator:

1/s ↔ (T/2) ⋅ (z + 1)/(z − 1)
s ↔ (2/T) ⋅ (z − 1)/(z + 1)    (11.2.9)
For a transfer function H(s) we can obtain a z-transfer function H(z) by a
simple substitution

H(z) = H(s)|_{s = (2/T) ⋅ (z−1)/(z+1)}    (11.2.10)
Equation (11.2.9) is also called the bilinear transformation. It performs a
mapping from the s plane to the z plane which transforms the entire jω axis of
the s plane into one complete revolution of the unit circle in the z plane.
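This mapping of the jω axis can be checked numerically. The Python sketch below evaluates the substitution (11.2.9) on the unit circle z = e^{jωT}: the resulting s is purely imaginary, at the warped frequency (2/T)·tg(ωT/2):

```python
import numpy as np

T = 0.5
for w in [0.1, 1.0, 2.0, 5.0]:        # rad/s, chosen so that |w*T| < pi
    z = np.exp(1j * w * T)            # point on the unit circle
    s = (2 / T) * (z - 1) / (z + 1)   # the substitution (11.2.9)
    assert abs(s.real) < 1e-9                               # lies on the jw axis
    assert np.isclose(s.imag, (2 / T) * np.tan(w * T / 2))  # warped frequency
```

The tangent shows that the mapping is nonlinear in frequency (frequency warping), which matters when a continuous prototype must be matched at a given frequency.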
by which the poles matching is assured. The zeros matching and the steady
state gain are assured by special techniques.
11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS. 11.3. LTI Systems Discretization
Using State Space Equations.
G_i = ∫_0^T e^{A(T−θ)} (θ^i / i!) dθ    (11.3.11)

G_0 = ∫_0^T e^{A(T−θ)} dθ    (11.3.12)

G_0 = A^{−1} (e^{AT} − I)    (11.3.13)
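The closed form (11.3.13) can be cross-checked against the defining integral (11.3.12). A Python sketch with a hypothetical invertible matrix A (the matrix exponential is computed by a truncated Taylor series, adequate for small ||AT||):

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Matrix exponential e^M by truncated Taylor series.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical invertible matrix A
T = 0.1

# Closed form (11.3.13), valid when A is invertible.
G0 = np.linalg.inv(A) @ (expm_taylor(A * T) - np.eye(2))

# Numerical evaluation of the integral (11.3.12) by the trapezoidal rule.
n = 400
thetas = np.linspace(0.0, T, n + 1)
vals = np.stack([expm_taylor(A * (T - th)) for th in thetas])
h = T / n
G0_num = h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

assert np.allclose(G0, G0_num, atol=1e-7)
```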
12. DISCRETE TIME SYSTEMS STABILITY. 12.1. Stability Problem Statement.
Y1=2.8;Y0=2;U1=1;U0=1;           % initial conditions
t=0:70;lt=length(t);
y=zeros(1,lt);                   % preallocate the output as a row vector
y(1)=Y0;y(2)=Y1;
for k=3:lt
U=1;                             % unit step input
Y=2*Y1-0.99*Y0+U-1.1*U1;         % difference equation
y(k)=Y;
Y0=Y1;Y1=Y;U1=U;                 % shift the delayed values
end
As we understand, the first line "Y1=2.8;Y0=2;U1=1;U0=1;" establishes
the initial conditions. We can see that they are not zero. The response of this
program, depicted in Fig. 12.1.2., indicates a stable response approaching the
steady state value of 10, as it does with zero initial conditions,
"Y1=0;Y0=0;U1=0;U0=0;", also in Fig. 12.1.2.
But if the initial conditions are "Y1=-2;Y0=2;U1=1;U0=1;", then the
response is as in Fig. 12.1.3., which represents a disaster for a controlled
process. Even if the initial conditions have a very small variation with respect to
the first case, "Y1=2.799;Y0=2;U1=1;U0=1;", that is Y1=2.799 instead
of Y1=2.8, we get an unstable response, as in Fig. 12.1.4.
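This behaviour has a simple explanation in terms of poles and zeros. The characteristic polynomial of the difference equation, z² − 2z + 0.99, has the roots 0.9 and 1.1, so one pole lies outside the unit circle; the numerator z² − 1.1z has a zero exactly at z = 1.1, which cancels the unstable pole in the transfer function. A Python sketch of this check:

```python
import numpy as np

# Poles of y_k = 2 y_{k-1} - 0.99 y_{k-2} + u_k - 1.1 u_{k-1}
poles = np.roots([1, -2, 0.99])            # characteristic polynomial
assert np.isclose(max(abs(poles)), 1.1)    # one pole outside the unit circle
assert np.isclose(min(abs(poles)), 0.9)

zeros = np.roots([1, -1.1, 0])             # numerator z^2 - 1.1 z
assert np.isclose(max(abs(zeros)), 1.1)    # zero cancels the unstable pole
```

Only initial conditions consistent with this cancellation (Y1=2.8, Y0=2) leave the 1.1^k mode unexcited; any other initial condition lets it grow without bound.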
Figure no. 12.1.2. (Stable response approaching the steady state value 10; initial conditions Y1=2.8;Y0=2;U1=1;U0=1.)
Figure no. 12.1.3. (Unstable response diverging towards -20000; initial conditions Y1=-2;Y0=2;U1=1;U0=1.)
12. DISCRETE TIME SYSTEMS STABILITY. 12.2. Stability Criteria for Discrete Time Systems.
D_k =
| a_0        0        ...  0      ā_n        ā_{n−1}  ...  ā_{n−k+1} |
| a_1        a_0      ...  0      0          ā_n      ...  ā_{n−k+2} |
| ...        ...           ...    ...        ...           ...       |
| a_{k−1}    a_{k−2}  ...  a_0    0          0        ...  ā_n       |
| a_n        0        ...  0      ā_0        ā_1      ...  ā_{k−1}   |
| a_{n−1}    a_n      ...  0      0          ā_0      ...  ā_{k−2}   |
| ...        ...           ...    ...        ...           ...       |
| a_{n−k+1}  ...      ...  a_n    0          0        ...  ā_0       |
                                                              (12.2.5)
We denoted by ā_k the complex conjugate of the coefficient a_k.
The necessary and sufficient stability condition is:

∀k ∈ [1, n] : (−1)^k D_k > 0 .    (12.2.6)
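The determinants D_k can be formed and tested numerically. A Python sketch of the criterion, building the 2k×2k matrix of (12.2.5) and cross-checking (12.2.6) against the root magnitudes; the test polynomials are hypothetical examples:

```python
import numpy as np

def schur_cohn_determinants(coeffs):
    # D_k, k = 1..n, for P(z) = a_n z^n + ... + a_1 z + a_0,
    # with coeffs = [a_n, ..., a_0] (highest power first, as np.roots uses).
    a = np.array(coeffs, dtype=complex)[::-1]      # a[i] is the coefficient a_i
    n = len(a) - 1
    dets = []
    for k in range(1, n + 1):
        A = np.zeros((k, k), dtype=complex)        # lower block on a_0..a_{k-1}
        B = np.zeros((k, k), dtype=complex)        # upper block on conj(a_n)..
        for i in range(k):
            for j in range(k):
                if i >= j:
                    A[i, j] = a[i - j]
                if j >= i:
                    B[i, j] = np.conj(a[n - (j - i)])
        M = np.block([[A, B], [B.conj().T, A.conj().T]])   # as in (12.2.5)
        dets.append(np.linalg.det(M).real)
    return dets

def is_stable(coeffs):
    # (12.2.6): (-1)^k D_k > 0 for every k in [1, n]
    return all((-1) ** k * d > 0
               for k, d in enumerate(schur_cohn_determinants(coeffs), 1))

# Cross-check against the pole magnitudes.
assert is_stable([1, -0.5, 0.06])                  # roots 0.2 and 0.3
assert not is_stable([1, -2, 0.99])                # roots 0.9 and 1.1
assert is_stable([1, -2, 0.99]) == (max(abs(np.roots([1, -2, 0.99]))) < 1)
```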
Y*(s) to the entire s plane. The most utilised band is the fundamental band, for q = 0.
But, w = (z − 1)/(z + 1) ⇒

w = (e^{jϕ} − 1)/(e^{jϕ} + 1) = (cos ϕ − 1 + j sin ϕ)/(cos ϕ + 1 + j sin ϕ)
  = (−2 sin²(ϕ/2) + j 2 sin(ϕ/2) cos(ϕ/2)) / (2 cos²(ϕ/2) + j 2 sin(ϕ/2) cos(ϕ/2))

w = tg(ϕ/2) ⋅ (−sin(ϕ/2) + j cos(ϕ/2)) / (cos(ϕ/2) + j sin(ϕ/2)) = j tg(ϕ/2) , ϕ ∈ (−π , π) ⇒ w ∈ (−j∞ , j∞) .
The transformation of the other segments of SB0 is performed in a similar
manner.
References.