
Constantin MARIN

LECTURES ON
SYSTEM THEORY

2008
PREFACE.

In recent years systems theory has assumed an increasingly important role in the development and implementation of methods and algorithms, not only for technical purposes but also for a broader range of economical, biological and social fields.
The common key to this success is the notion of system and the system oriented thinking of those involved in such applications.
This is a student textbook mainly dedicated to developing such a form of thinking in future graduates, so as to achieve a minimal satisfactory understanding of the fundamentals of systems theory.
The material presented here has been developed from lectures given to
the second study year students from Computer Science Specialisation at
University of Craiova.
Knowledge of algebra, differential equations, integral calculus, complex
variable functions constitutes prerequisites for the book. The illustrative
examples included in the text are limited to electrical circuits to fit the technical
background on electrical engineering from the second study year.
The book is written with the students in mind, trying to offer a coherent development of the subjects with many detailed explanations. The study of the book should be accompanied by the practical exercises published in our problem book [19].
It is hoped that the book will be found useful by other students as well as
by industrial engineers who are concerned with control systems.
CONTENTS

1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.


1.1. Introduction 1
1.2. Abstract Systems; Oriented Systems; Examples. 2
Example 1.2.1. DC Electrical Motor. 3
Example 1.2.2. Simple Electrical Circuit. 5
Example 1.2.3. Simple Mechanical System. 8
Example 1.2.4. Forms of the Common Abstract Base. 9
1.3. Inputs; Outputs; Input-Output Relations. 11
1.3.1. Inputs; Outputs. 11
1.3.2. Input-Output Relations. 12
Example 1.3.1. Double RC Electrical Circuit. 13
Example 1.3.2. Manufacturing Point as a Discrete Time System. 17
Example 1.3.3. RS-memory Relay as a Logic System. 18
Example 1.3.4. Black-box Toy as a Two States Dynamical System. 20
1.4. System State Concept; Dynamical Systems. 22

1.4.1. General aspects. 22


Example 1.4.1. Pure Time Delay Element. 24
1.4.2. State Variable Definition. 26
Example 1.4.2. Properties of the i-is-s relation. 28
1.4.3. Trajectories in State Space. 29
Example 1.4.3. State Trajectories of a Second Order System. 30
1.5. Examples of Dynamical Systems. 33
1.5.1. Differential Systems with Lumped Parameters. 33
1.5.2. Time Delay Systems (Dead-Time Systems). 35
Example 1.5.2.1. Time Delay Electronic Device. 35
1.5.3. Discrete-Time Systems. 36
1.5.4. Other Types of Systems. 36
1.6. General Properties of Dynamical Systems. 37
1.6.1. Equivalence Property. 37
Example 1.6.1. Electrical RLC Circuit. 38
1.6.2. Decomposition Property. 40
1.6.3. Linearity Property. 40
Example 1.6.2. Example of Nonlinear System. 41
1.6.4. Time Invariance Property. 41
1.6.5. Controllability Property. 41
1.6.6. Observability Property. 41
1.6.7. Stability Property. 41

2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).
2.1. Input-Output Description of SISO LTI. 42
Example 2.1.1. Proper System Described by Differential Equation. 46
2.2. State Space Description of SISO LTI. 49
2.3. Input-Output Description of MIMO LTI. 51
2.4. Response of Linear Time Invariant Systems. 54
2.4.1. Expression of the State Vector and Output Vector in s-domain. 54
2.4.2. Time Response of LTI from Zero Time Moment. 55
2.4.3. Properties of Transition Matrix. 56
2.4.4. Transition Matrix Evaluation. 57
2.4.5. Time Response of LTI from Nonzero Time Moment . 58

3. SYSTEM CONNECTIONS.

3.1. Connection Problem Statement. 60


3.1.1. Continuous Time Nonlinear System (CNS). 60
3.1.2. Linear Time Invariant Continuous System (LTIC). 60
3.1.3. Discrete Time Nonlinear System (DNS). 60
3.1.4. Linear Time Invariant Discrete System (LTID). 60
3.2. Serial Connection. 64
3.2.1. Serial Connection of two Subsystems. 64
3.2.2. Serial Connection of two Continuous Time Nonlinear Systems (CNS). 65
3.2.3. Serial Connection of two LTIC. Complete Representation. 66
3.2.4. Serial Connection of two LTIC. Input-Output Representation. 66
3.2.5. The controllability and observability of the serial connection. 67
3.2.5.1. State Diagrams Representation. 68
3.2.5.2. Controllability and Observability of Serial Connection. 71
3.2.5.3. Observability Property Underlined as the Possibility to Determine
the Initial State if the Output and the Input are Known. 73
3.2.5.4. Time Domain Free Response Interpretation for
an Unobservable System. 75
3.2.6. Systems Stabilisation by Serial Connection. 76
3.2.7. Steady State Serial Connection of Two Systems. 80
3.2.8. Serial Connection of Several Subsystems. 81
3.3. Parallel Connection. 82
3.4. Feedback Connection. 83

4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.
4.1. Principle Diagrams and Block Diagrams. 84
4.1.1. Principle Diagrams. 84
4.1.2. Block Diagrams. 84
Example 4.1.1. Block Diagram of an Algebraical Relation. 85
Example 4.1.2. Variable's Directions in Principle Diagrams and
Block Diagrams. 87
Example 4.1.3. Block Diagram of an Integrator. 89
4.1.3. State Diagrams Represented by Block Diagrams. 89
4.2. Systems Reduction Using Block Diagrams. 92
4.2.1. Systems Reduction Problem. 92
4.2.2. Analytical Reduction. 92
4.2.3. Systems Reduction Through Block Diagrams Transformations. 93
4.2.3.1. Elementary Transformations on Block Diagrams. 93
Example 4.2.1. Representations of a Multi Inputs Summing Element. 96
4.2.3.2. Transformations of a Block Diagram Area by Analytical
Equivalence. 96
4.2.3.3. Algorithm for the Reduction of Complicated Block Diagrams. 96
Example 4.2.2. Reduction of a Multivariable System. 98
4.3 Signal Flow Graphs Method (SFG). 106
4.3.1. Signal Flow Graphs Fundamentals. 106
4.3.2. Signal Flow Graphs Algebra. 107
Example 4.3.1. SFGs of one Algebraic Equation. 110
Example 4.3.2. SFG of two Algebraic Equations. 111
4.3.3. Construction of Signal Flow Graphs. 113
4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic
Equations. 113
Example 4.3.3. SFG of three Algebraic Equations. 114
4.3.3.2. Construction of SFG Starting from a Block Diagram. 115
Example 4.3.4. SFG of a Multivariable System. 115
4.4. Systems Reduction Using State Flow Graphs. 116
4.4.1. SFG Reduction by Elementary Transformations. 117
4.4.1.1. Elimination of a Self-loop. 117
4.4.1.2. Elimination of a Node. 118
4.4.1.3. Algorithm for SFG Reduction by Elementary Transformations. 120
4.4.2. SFG Reduction by Mason's General Formula. 121
Example 4.4.1. Reduction by Mason's Formula of a Multivariable System. 123

5. SYSTEMS REALISATION BY STATE EQUATIONS.
5.1. Problem Statement. 125
5.1.1. Controllability Criterion. 126
5.1.2. Observability Criterion. 126
5.2. First Type I-D Canonical Form. 127
Example 5.2.1. First Type I-D Canonical Form of a Second Order
System. 130
5.3. Second Type D-I Canonical Form. 132
5.4. Jordan Canonical Form. 134
5.5 State Equations Realisation Starting from the Block Diagram 137

6. FREQUENCY DOMAIN SYSTEMS ANALYSIS.


6.1. Experimental Frequency Characteristics. 139
6.2. Relations Between Experimental Frequency Characteristics and
Transfer Function Attributes. 142
6.3. Logarithmic Frequency Characteristics. 145
6.3.1. Definition of Logarithmic Characteristics. 145
6.3.2. Asymptotic Approximations of Frequency Characteristic. 146
6.3.2.1. Asymptotic Approximations of Magnitude Frequency
Characteristic for a First Degree Complex Variable Polynomial. 146
6.3.2.2. Asymptotic Approximations of Phase Frequency Characteristic
for a First Degree Complex Variable Polynomial. 148
6.4. Elementary Frequency Characteristics. 150
6.4.1. Proportional Element. 150
6.4.2. Integral Type Element. 151
6.4.3. First Degree Polynomial Element. 152
6.4.4. Second Degree Polynomial Element with Complex Roots. 153
6.4.5. Aperiodic Element. Transfer Function with one Real Pole. 158
6.4.6. Oscillatory element. Transfer Function with two Complex Poles. 159
6.5. Frequency Characteristics for Series Connection of Systems. 162
6.5.1. General Aspects. 162
Example 6.5.1.1. Types of Factorisations. 164
6.5.2. Bode Diagrams Construction Procedures. 165
6.5.2.1. Bode Diagram Construction by Components. 165
Example 6.5.2.1. Examples of Bode Diagram Construction by
Components. 165
6.5.2.2. Direct Bode Diagram Construction. 168

7. SYSTEMS STABILITY.
7.1. Problem Statement. 170
7.2. Algebraical Stability Criteria. 171
7.2.1. Necessary Condition for Stability. 171
7.2.2. Fundamental Stability Criterion. 171
7.2.3. Hurwitz Stability Criterion. 171
7.2.4. Routh Stability Criterion. 173
7.2.4.1. Routh Table. 173
7.2.4.2. Special Cases in Routh Table. 174
Example 7.2.1. Stability Analysis of a Feedback System. 176
7.3. Frequency Stability Criteria. 177
7.3.1. Nyquist Stability Criterion. 177
7.3.2. Frequency Quality Indicators. 178
7.3.3. Frequency Characteristics of Time Delay Systems. 180

8. DISCRETE TIME SYSTEMS.


8.1. Z - Transformation. 181
8.1.1. Direct Z-Transformation. 181
8.1.1.1. Fundamental Formula. 181
8.1.1.2. Formula by Residues. 182
8.1.2. Inverse Z-Transformation. 183
8.1.2.1. Fundamental Formula. 183
8.1.2.2. Partial Fraction Expansion Method. 185
8.1.2.3. Power Series Method. 185
8.1.3. Theorems of the Z-Transformation. 186
8.1.3.1. Linearity Theorem. 186
8.1.3.2. Real Time Delay Theorem. 186
8.1.3.3. Real Time Shifting in Advance Theorem. 186
8.1.3.4. Initial Value Theorem. 186
8.1.3.5. Final Value Theorem. 186
8.1.3.6. Complex Shifting Theorem. 187
8.1.3.8. Partial Derivative Theorem. 188
8.2. Pure Discrete Time Systems (DTS). 190
8.2.1. Introduction ; Example. 190
Example 8.2.1. First Order DTS Implementation. 190
8.2.2. Input Output Description of Pure Discrete Time Systems. 193
Example 8.2.2.1. Improper First Order DTS. 193
Example 8.2.2.2. Proper Second Order DTS. 193
8.2.3. State Space Description of Discrete Time Systems. 195

9. SAMPLED DATA SYSTEMS.
9.1. Computer Controlled Systems. 197
9.2. Mathematical Model of the Sampling Process. 204
9.2.1. Time-Domain Description of the Sampling Process. 204
9.2.2. Complex Domain Description of the Sampling Process. 205
9.2.3. Shannon Sampling Theorem. 207
9.3. Sampled Data Systems Modelling. 209
9.3.1. Continuous Time Systems Response to Sampled Input Signals. 209
9.3.2. Sampler - Zero Order Holder (SH). 211
9.3.3. Continuous Time System Connected to a SH. 212
9.3.4. Mathematical Model of a Computer Controlled System. 213
9.3.5. Complex Domain Description of Sampled Data Systems. 215
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS.
10.1. Frequency Characteristics Definition. 217
10.2. Relations Between Frequency Characteristics and
Attributes of Z-Transfer Functions. 218
10.2.1. Frequency Characteristics of LTI Discrete Time Systems. 218
10.2.2. Frequency Characteristics of First Order Sliding Average Filter. 220
10.2.3. Frequency Characteristics of m-Order Sliding Weighted Filter. 221
10.3. Discrete Fourier Transform (DFT). 222

11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS.


11.1. Introduction. 224
11.2. Direct Methods of Discretization. 225
11.2.1. Approximation of the Derivation Operator.
Example 11.2.1. LTI Discrete Model Obtained by Direct Methods. 225
11.2.2. Approximation of the Integral Operator. 225
11.2.3. Tustin's Substitution. 226
11.2.4. Other Direct Methods of Discretization. 227
11.3. LTI Systems Discretization Using State Space Equations. 228
11.3.1. Analytical Relations. 228
11.3.2. Numerical Methods for Discretized Matrices Evaluation. 230

12. DISCRETE TIME SYSTEMS STABILITY.


12.1. Stability Problem Statement. 231
Example 12.1.1. Study of the Internal and External Stability. 232
12.2. Stability Criteria for Discrete Time Systems. 234
12.2.1. Necessary Stability Conditions. 234
12.2.2. Schur-Kohn Stability Criterion. 234
12.2.3. Jury Stability Criterion. 234
12.2.4. Periodicity Bands and Mappings Between Complex Planes. 235
12.2.5. Discrete Equivalent Routh Criterion in the "w" plane. 238

1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.

1.1. Introduction

Systems theory, or systems science, is a well defined discipline whose goal is the study of the behaviour of different types and forms of systems within a unitary framework of notions, performed on a common abstract base.
In such a context, systems theory is a set of general methods, techniques and special algorithms for solving problems such as analysis, synthesis, control, identification and optimisation, irrespective of whether the system to which they are applied is an electrical, mechanical, chemical, economical, social, artistic or military one.
In systems theory it is the mathematical form of a system which is important, not its physical aspect or its application field.
There are several definitions for the notion of system, each of them
aspiring to be as general as possible. In common usage the word " system " is a
rather nebulous one. We can mention the Webster's definition:
"A system is a set of physical objects or abstract entities united
(connected or related) by different forms of interactions or interdependencies as
to form an entirety or a whole ".
Numerous examples of systems according to this definition can be intuitively given: our planetary system, the car steering system, a system of algebraical or differential equations, the economical system of a country.
A peculiar category of systems is expressed by so called "physical
systems" whose definition comes from thermodynamics:
"A system is a part (a fragment) of the universe for which one inside and
one outside can be delimited from behavioural point of view".
Later on, several examples will be given according to this definition.
Systems theory is the foundation of control science, which deals with all the conscious activities performed inside a system to accomplish a goal under the influence of external systems. In control science three main branches can be pointed out: automatic control, cybernetics and informatics.
Automatic control, as a branch of control science, deals with automatic control systems.
An automatic control system, or just a control system, is a set of objects interconnected in such a structure that it is able to elaborate, command and control decisions based on information obtained with its own resources.
There are also many other meanings for the notion of control system.
Automation means all the activities required to put automatic control systems into practice.


1.2. Abstract Systems; Oriented Systems; Examples.

Any physical system (or physical object), as an element of the real world, is a part (a piece) of a more general context. It is not isolated; its interactions with the outside are performed by exchanges of information, energy and material. These exchanges with its environment cause modifications in time and space of some of its specific (characteristic) variables.
Such a representation is given in Fig. 1.2.1.
[Figure: a physical system delimiting an inside and an outside, exchanging material, energy and information with the outside.]
Figure no. 1.2.1.
[Figure: block diagram of an oriented system: a rectangle containing the system descriptor, with input arrows u1, u2, ..., up and output arrows y1, y2, ..., yr.]
Figure no. 1.2.2.


The interactions of the physical system (object) with the outside are realised through some signals, the so-called terminal variables. In systems theory, the mathematical relations between terminal variables are what matter. These mathematical relations define the mathematical model of the physical system.
By an abstract system one can understand the mathematical model of a
physical system or the result of a synthesis procedure.
A causal system (feasible system or realisable system) is an abstract system for which a physical model can be obtained in such a way that its mathematical model is precisely that abstract system.
An oriented system is a system (physical or abstract) whose terminal variables are split, based on causality principles, into two categories: input variables and output variables.
Input variables (or just "inputs") represent the causes by which the outside affects (informs) the inside.
Output variables (or just "outputs") represent the effects of the external and internal causes by which "the inside" affects (influences or informs) the outside. The output variables do not affect the input variables. This is the directional property of a system: the outputs are influenced by the inputs but not vice versa.
When an abstract system is defined starting from a physical system (object), first of all the outputs are defined (selected). The outputs represent those attributes (qualities) of the physical object which are of interest, taking into consideration the goal for which the abstract system is defined.
The inputs of this abstract system are all the external causes that affect the chosen outputs.


Practically, only those inputs are kept that have a significant influence (within a defined precision level) on the chosen outputs.
By defining the inputs and the outputs we define the border which separates the inside from the outside from a behavioural point of view.
Usually an input is denoted by u, a scalar if there is only one input, or by a
column vector u=[u1 u2 .. up]T if there are p input variables.
The output is usually denoted by y if there is only one output, or by a
column vector y=[y1 y2 .. yr]T if there are r output variables.
Scalars and vectors are written with the same fonts, the difference between
them, if necessary, is explicitly mentioned.
An oriented system can be graphically represented in a block diagram, as depicted in Fig. 1.2.2., by a rectangle which usually contains a system descriptor: a description or the name of the system, or a symbol for its mathematical model, for its identification. Inputs are represented by arrows directed towards the rectangle and outputs by arrows directed away from the rectangle.
Generally there are three main graphical representations of systems:
1. The physical diagram or construction diagram. This can be a picture of a physical object or a diagram illustrating how the object is built or has to be built.
2. The principle diagram or functional diagram is a graphical representation of a physical system using norms and symbols specific to the field to which the physical system belongs, represented in such a way as to make the functioning (behaviour) of that system understandable.
3. The block diagram is a graphical representation of the mathematical relations between the variables by which the behaviour of the system is described. Mainly the block diagram illustrates the abstract system. The representation is performed by using rectangles or flow graphs.

Example 1.2.1. DC Electrical Motor.


Let us consider a DC electrical motor with independent excitation voltage.
As a physical object it has several attributes: colour, weight, rotor voltage (armature voltage), excitation voltage (field voltage), cost price, etc. It can be represented by a picture as in Fig. 1.2.3. This is a physical diagram; any skilled person understands that it depicts a DC motor, and can identify it, but nothing else.
[Figure: picture of the DC motor (physical diagram).]
Figure no. 1.2.3.
[Figure: oriented system S1 with inputs Ur, Ue, Cr, θext and output ω.]
Figure no. 1.2.4.
[Figure: oriented system S2 with inputs Ur, Ue, Cr, θext and outputs Ir, θint.]
Figure no. 1.2.5.


We can look at the motor as an oriented object from the systems theory point of view. In this way we shall define the inside and the outside.
1. Suppose we are interested in the angular speed ω of the motor axle. This will be the output of the oriented system we are defining now. The inputs to this oriented system are all the causes that affect the selected output ω, within an accepted level of precision. To determine them, knowledge of electrical engineering is necessary. The inputs are: rotor voltage Ur, excitation voltage Ue, resistant torque Cr, external temperature θext. The oriented system related to the DC motor having the angular speed ω as output, within the agreed level of precision, is depicted in Fig. 1.2.4. The mathematical relations between ω and Ur, Ue, Cr, θext are denoted by S1, which expresses the abstract system. This abstract system is the mathematical model of the physical oriented object (or system) as defined above.
2. Suppose now we are interested in two attributes of the above DC motor: the rotor current Ir and the internal temperature θint. These two variables are selected as outputs. The inputs are the same: Ur, Ue, Cr, θext. The resulting oriented system is depicted in Fig. 1.2.5. The abstract system for this case is denoted by S2.
Anyone can understand that S1 ≠ S2 even though they are related to the same physical object, so a conclusion can be drawn: to one physical object (system), different abstract systems can be attached, depending on what we are looking for.

Example 1.2.2. Simple Electrical Circuit.


Let us consider an electrical circuit represented by a principle diagram as depicted in Fig. 1.2.6.
[Figure: principle diagram: a controlled voltage generator (knob position α, graduated in volts, output voltage u1 = K1·α, internal impedance Zi) feeding, through a resistor R, a capacitor C whose voltage is uC = x and current is iC = i; a voltage amplifier of gain K2, with zero input current, delivers u2 = K2·uC.]
Figure no. 1.2.6.
[Figure: block diagram of the oriented system S1 with input u = α, output y = u2, and state equations Tẋ + x = K1u, y = K2x.]
Figure no. 1.2.7.
From the above principle diagram we can understand how this physical object behaves.
Suppose we are interested in the amplifier voltage u2 only, so it will be selected as the output; the notation y = u2 is utilised and marked on the arrow outgoing from the rectangle, as in Fig. 1.2.7. Under common usage of this circuit we accept that the unique cause which affects the voltage u2 is the knob position α of the voltage generator, so the input is α; it is denoted by u = α and marked on the incoming arrow to the rectangle, as depicted in Fig. 1.2.7.


The abstract system attached to this oriented physical object, denoted by S1, is expressed by the mathematical relations between y = u2 and u = α. With elementary knowledge of electrical engineering one can write:

x = uC; iC = C·ẋ; −u1 + R·i + x = 0; u1 = K1·α = K1·u; i = iC; y = K2·x; T = RC ⇒

S1: Tẋ + x = K1u
    y = K2x    (1.2.1)
In (1.2.1) the abstract system is expressed by so-called "state equations". Here the variable x at the time moment t, x(t), represents the state of the system at that time moment t. The first equation is the state equation proper and the second is called the "output relation".
The same mathematical model S1 can be expressed by a single relation: a differential equation in y and u, as

S1: Tẏ + y = K1K2u    (1.2.2)

which can be presented as an input-output relation

S1: R(u, y) = 0, where R(u, y) = Tẏ + y − K1K2u    (1.2.3)
The three above forms of abstract system representation are called implicit forms, or representation by equations. The time evolutions of the system variables are solutions of these equations, starting at a time moment t0 with given initial conditions, for t ≥ t0.
The time evolution of the capacitor voltage x(t) can be obtained by integrating (1.2.1) for t ≥ t0 with x(t0) = x0, or just from system analysis, as

x(t) = e^(−(t−t0)/T)·x0 + (K1/T)·∫[t0,t] e^(−(t−τ)/T)·u(τ) dτ.    (1.2.4)
We can observe that the value of x at a time moment t, denoted x(t), depends on four entities:
1. The current (present) time variable t at which the value of x is expressed.
2. The initial time moment t0 from which the evolution is considered.
3. An initial value x0, which is just the value of x(t) for t = t0. This is called the initial state.
4. All the input values u(τ) on the time interval [t0, t], called the observation interval, expressed by the so-called input segment u[t0,t], where

u[t0,t] = {(τ, u(τ)), ∀τ ∈ [t0, t]}.    (1.2.5)
Putting into evidence these four entities, any relation such as (1.2.4) is written in a concentrated form as

x(t) = φ(t, t0, x0, u[t0,t])    (1.2.6)

called the input-initial state-state relation, in short the i-is-s relation.
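The i-is-s relation above can be checked numerically. The sketch below (the parameter values and function names are illustrative, not taken from the text) evaluates the convolution integral of (1.2.4) by trapezoidal quadrature and compares the result, for a constant input, with the elementary closed form of that integral:

```python
import math

def phi(t, t0, x0, u, T=0.5, K1=2.0, n=2000):
    """Sketch of the i-is-s relation (1.2.4):
    x(t) = e^(-(t-t0)/T)*x0 + (K1/T) * integral over [t0, t] of e^(-(t-tau)/T)*u(tau),
    with the integral evaluated by the trapezoidal rule on n steps."""
    h = (t - t0) / n
    acc = 0.0
    for k in range(n + 1):
        tau = t0 + k * h
        w = 0.5 if k in (0, n) else 1.0   # trapezoidal end-point weights
        acc += w * math.exp(-(t - tau) / T) * u(tau)
    return math.exp(-(t - t0) / T) * x0 + (K1 / T) * acc * h

# For a constant input u(t) = U the integral is elementary:
# x(t) = e^(-(t-t0)/T)*x0 + K1*U*(1 - e^(-(t-t0)/T)).
T, K1, x0, U = 0.5, 2.0, 1.0, 3.0
num = phi(1.2, 0.2, x0, lambda tau: U, T=T, K1=K1)
exact = math.exp(-1.0 / T) * x0 + K1 * U * (1 - math.exp(-1.0 / T))
assert abs(num - exact) < 1e-4
```

The quadrature approximates φ well for any piecewise continuous input segment, which is exactly the role Ω plays later in the text.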
Also, by substituting (1.2.4) into the output relation from (1.2.1), we get the output time evolution expression

y(t) = K2·e^(−(t−t0)/T)·x0 + (K1K2/T)·∫[t0,t] e^(−(t−τ)/T)·u(τ) dτ    (1.2.7)

which also depends on the four above-mentioned entities and can be written in a concentrated form as

y(t) = η(t, t0, x0, u[t0,t]).    (1.2.8)

This is called the input-initial state-output relation, in short the i-is-o relation.
The time variation of the input is expressed by a function

u: T → U, t → u(t)    (1.2.9)

so the input segment u[t0,t] is the graph of the restriction of the function u to the observation interval [t0, t]. In our case the set U of input values can be, for example, the interval [0, 10] volts.
Someone who manages the physical object represented by the principle diagram knows that there are some restrictions on the time evolution shape of the function u. For example, only piecewise continuous functions, or only continuous and differentiable functions, could be admitted.
We shall denote by Ω the set of admissible inputs,

Ω = {u | u: T → U, admitted to be applied to the system}.    (1.2.10)

Our system S1 is well defined by specifying three elements: the set Ω and the two relations φ and η,

S1 = {Ω, φ, η}.    (1.2.11)

This is a so-called explicit form of abstract system representation, or the representation by solutions.
An explicit form can be presented in the complex domain by applying the Laplace transform to the differential equation, if it is linear with constant coefficients, as in (1.2.2):

L{Tẏ(t) + y(t)} = L{K1K2u(t)} ⇔ T[sY(s) − y(0)] + Y(s) = K1K2U(s) ⇒

Y(s) = [K1K2/(Ts+1)]·U(s) + [T/(Ts+1)]·y(0)    (1.2.12)

We can see that the differential equation has been transformed into an algebraical equation, simpler to manipulate. But, as in any one-sided Laplace transform, the initial values are stipulated for t = 0, not for t = t0 as we considered. This can easily be overcome by considering that the time variable t in (1.2.12) is t − t0 from (1.2.2). With this in mind, the inverse Laplace transform of (1.2.12) gives us the relation (1.2.7), where y(0) = K2x(0) = K2x0 and t → t − t0.
From (1.2.12) we can denote

H(s) = K1K2/(Ts + 1)    (1.2.13)

which is the so-called transfer function of the system. The transfer function can generally be defined as the ratio between the Laplace transform of the output, Y(s), and the Laplace transform of the input, U(s), under zero initial conditions,

H(s) = Y(s)/U(s) | y(0)=0.    (1.2.14)

The system S1 can also be represented by the transfer function H(s).
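The transfer function can be confronted with a simulation of the differential equation (1.2.2): for a unit step input and zero initial conditions, the output must settle at the DC gain H(0) = K1K2. A minimal sketch, with assumed parameter values:

```python
# Numerical check of the transfer function (1.2.13):
# for a unit step input and zero initial state, the output of
# T*y' + y = K1*K2*u must settle at the DC gain H(0) = K1*K2.
T, K1, K2 = 0.5, 2.0, 1.5

def H(s):
    return K1 * K2 / (T * s + 1)      # transfer function (1.2.13)

# forward-Euler simulation of the differential equation (1.2.2)
dt, y = 1e-4, 0.0
for _ in range(int(10 * T / dt)):     # simulate about 10 time constants
    y += dt * (K1 * K2 * 1.0 - y) / T
assert abs(y - H(0)) < 1e-3
```

After roughly ten time constants the transient e^(−t/T) has decayed, so the simulated output agrees with H(0) to the stated tolerance.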


Sometimes an explicit form is obtained using the so-called integral-differential operators. Denoting by D = d/dt the differential operator, the differential equation (1.2.2) is expressed as TDy(t) + y(t) = K1K2u(t), from where formally one obtains

y(t) = [K1K2/(TD + 1)]·u(t) ⇔ y(t) = S(D)u(t), where S(D) = K1K2/(TD + 1)    (1.2.15)

so the system S1 can be represented by the integral-differential operator S(D).
Now suppose that, in another context, we are interested in the current i of the physical object represented by the principle diagram from Fig. 1.2.6. The output is now y(t) = i(t) and the input, considering the same experimental conditions, is again u(t) = α(t). This oriented system is represented in Fig. 1.2.8.
[Figure: block diagram of the oriented system S2 with input u = α, output y = i, and state equations Tẋ + x = K1u, y = −(1/R)x + (K1/R)u.]
Figure no. 1.2.8.


The mathematical model of this oriented system is now the abstract system S2, represented, for example, by the state equations as in Fig. 1.2.8.
Of course, any form of representation discussed above for the system S1 can be used. Because S1 ≠ S2 we can again draw the conclusion:
"For the same physical object it is possible to define different abstract systems, depending on the goal".
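This conclusion can be illustrated in code: one and the same state trajectory x(t) of the circuit feeds two different output relations, those of S1 and of S2, so the two abstract systems give different outputs for the same physical object. A sketch with assumed parameter values (the minus sign in the S2 output relation follows from i = (u1 − x)/R):

```python
import math

# The same physical state x (capacitor voltage) feeds two abstract systems:
#   S1: y = K2*x                    (amplifier output u2)
#   S2: y = -(1/R)*x + (K1/R)*u    (current i through the resistor)
R, C, K1, K2 = 1e3, 1e-3, 2.0, 1.5
T = R * C

def x_state(t, x0, U):
    # state response to a constant input U, from (1.2.4) with t0 = 0
    return math.exp(-t / T) * x0 + K1 * U * (1 - math.exp(-t / T))

t, x0, U = 2.0, 0.0, 5.0
x = x_state(t, x0, U)
y_s1 = K2 * x                  # output of S1 (a voltage)
y_s2 = (K1 * U - x) / R        # output of S2 (a current)
assert y_s1 != y_s2            # same object, different abstract systems
```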

Example 1.2.3. Simple Mechanical System.


Let us consider a mechanical system whose principle diagram is
represented in Fig. 1.2.9.
[Figure: principle diagram of the mechanical system: a force f applied at point A of the main arm (shift x), restrained by a spring of factor KP and a damping system of factor KV; point B of the secondary arm shifts by y.]
Figure no. 1.2.9.
[Figure: block diagram of the oriented system S1 with input u = f, output y, and state equations Tẋ + x = K1u, y = K2x.]
Figure no. 1.2.10.
If a force f is applied to the point A of the main arm, whose shift with respect to a reference position is expressed by the variable x, then the point B of the secondary arm has a shift expressed by the variable y. Against the movement determined by f, the spring develops a resistant force proportional to x, by a factor KP, and the damper one proportional to the derivative of x, by a factor KV.
Suppose we are interested in the shift of point B only, so the variable y is selected to be the output of the oriented system which is being defined. Under common experimental conditions, the unique cause changing y is the force f, which is the input, denoted u = f.

The oriented system with the above defined input and output is represented in Fig. 1.2.10., where S1 denotes a descriptor of the abstract system. Writing the force equilibrium equations we get

KP·x + KV·ẋ = f ; y = K2·x.

Dividing the first equation by KP and denoting T = KV/KP, K1 = 1/KP, u = f, we get the mathematical model as state equations,

S1: Tẋ + x = K1u
    y = K2x    (1.2.16)
This set of equations expresses the abstract object of the mechanical system. Formally, S1 from (1.2.16) is identical with S1 from (1.2.1) of the previous example. Even though we have different physical objects, they are characterised (for the above chosen outputs) by the same abstract system. This abstract system is a common base for different physical systems. Any development we have made for the electrical system, relations (1.2.2)-(1.2.15), is valid for the mechanical system too. These constitute the unitary framework of notions we mentioned in the definition of systems theory.
Managing the abstract system with specific methods, expressed in one of the forms (1.2.2)-(1.2.15), some results are obtained. These results can equally be applied both to the electrical system and to the mechanical system. Of course, in the first case, for example, x is the capacitor voltage, while in the second case the meaning of x is the shift of point A.
Such a study is called a model based study. We can say that the mechanical system is a physical model for the electrical system and vice versa, because they are related to the same abstract system.
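The common abstract base can be illustrated by one routine parameterised by (T, K1, K2) and reused for both physical systems. The numerical values below are illustrative; only the mappings T = RC for the circuit and T = KV/KP, K1 = 1/KP for the mechanical system come from the text:

```python
import math

def response(t, x0, U, T, K1, K2):
    """Output of the common abstract system T*x' + x = K1*u, y = K2*x
    for a constant input U (closed form from (1.2.7) with t0 = 0)."""
    x = math.exp(-t / T) * x0 + K1 * U * (1 - math.exp(-t / T))
    return K2 * x

# Electrical circuit (1.2.1): T = R*C           (illustrative values)
R, C = 1e3, 1e-3
y_el = response(1.0, 0.0, 5.0, T=R * C, K1=2.0, K2=1.5)

# Mechanical system (1.2.16): T = KV/KP, K1 = 1/KP
KV, KP = 2.0, 2.0
y_me = response(1.0, 0.0, 5.0, T=KV / KP, K1=1.0 / KP, K2=1.5)

# Only T, K2 and the product K1*U matter, whatever the physics behind them:
assert abs(response(1.0, 0.0, 5.0, 1.0, 2.0, 1.5)
           - response(1.0, 0.0, 10.0, 1.0, 1.0, 1.5)) < 1e-12
```

This is the model based study in miniature: one abstract computation serves a capacitor voltage in one reading and a mechanical shift in the other.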

Example 1.2.4. Forms of the Common Abstract Base.


The goal of this example is to manipulate the abstract system (1.2.1), or equivalently (1.2.16), from a mathematical point of view, using one element of the common abstract base: the Laplace transform, in short LT. Finally the solutions (1.2.4) and (1.2.7) will be obtained.
Now we write (1.2.1) putting into evidence the time variable t as

Tẋ(t) + x(t) = K1u(t), t ≥ t0, x(t0) = x0    (1.2.17)
y(t) = K2x(t)    (1.2.18)
The main problem is to get the expression of x(t), because y(t) is then obtained by a simple substitution. As we know, the one-sided Laplace transform always uses the initial time t0 = 0, while we have to obtain (1.2.4) depending on any t0. It is admitted that the restrictions of all the functions to t ≥ 0 are original functions.
Let us denote by

X(s) = L{x(t)}, U(s) = L{u(t)}

the Laplace transforms of x(t) and u(t) respectively.


We recall that the Laplace transform of the time derivative of a function, admitted to be an original function, is

L{ẋ(t)} = sX(s) − x(0+)

where x(0+) = x(t)|t=0+ = lim x(t) as t → 0, t > 0.
If x(t) is a continuous function at t = 0 we can simply write x(0) instead of x(0+). Moreover, it can be proved that the state of a differential system driven by bounded inputs is always a continuous function.
So, applying the LT to (1.2.17) we obtain

T[sX(s) − x(0)] + X(s) = K1U(s)

X(s) = [K1/(Ts+1)]·U(s) + [T/(Ts+1)]·x(0)    (1.2.19)

which gives us the expression of the state in the complex domain, but with the initial state at t = 0.
We remember now the convolution product theorem of the LT:
If F1(s) = L{f1(t)} and F2(s) = L{f2(t)} then
F1(s)F2(s) = L{∫_0^t f1(t−τ)f2(τ)dτ} = L{∫_0^t f1(τ)f2(t−τ)dτ}
and in the inverse form,
L⁻¹{F1(s)F2(s)} = ∫_0^t f1(t−τ)f2(τ)dτ = ∫_0^t f1(τ)f2(t−τ)dτ        (1.2.20)
The inverse LT of (1.2.19) is
x(t) = L⁻¹{K1/(Ts+1)·U(s)} + L⁻¹{T/(Ts+1)}·x(0).        (1.2.21)
We know from tables that
L⁻¹{T/(Ts+1)} = e^{−t/T}, for t ≥ 0        (1.2.22)
L⁻¹{K1/(Ts+1)} = (K1/T)e^{−t/T}, for t ≥ 0        (1.2.23)
Identifying now
F1(s) = K1/(Ts+1) ⇔ f1(t) = (K1/T)e^{−t/T} ⇒ f1(t−τ) = (K1/T)e^{−(t−τ)/T} and
F2(s) = U(s) ⇔ f2(t) = u(t) ⇒ f2(τ) = u(τ)
taking into consideration (1.2.20), applied to (1.2.21) after the substitution of (1.2.22) and (1.2.23), we have
x(t) = ∫_0^t (K1/T)e^{−(t−τ)/T}u(τ)dτ + e^{−t/T}x(0), ∀t ≥ 0, which is written as
x(t) = e^{−t/T}x(0) + (K1/T)∫_0^t e^{−(t−τ)/T}u(τ)dτ = φ(t, 0, x(0), u[0,t]), ∀t ≥ 0.        (1.2.24)
This is the state evolution starting at the initial time moment t=0, from the initial state x(0), and has the form of the input-initial state-state relation (i-is-s).
For t = t0, from (1.2.24) we obtain
x(t0) = e^{−t0/T}x(0) + (K1/T)∫_0^{t0} e^{−(t0−τ)/T}u(τ)dτ = φ(t0, 0, x(0), u[0,t0])        (1.2.25)
Substituting x(0) from (1.2.25),
x(0) = e^{t0/T}x(t0) − (K1/T)∫_0^{t0} e^{τ/T}u(τ)dτ,
into (1.2.24) we obtain
x(t) = e^{−t/T}[e^{t0/T}x(t0) − (K1/T)∫_0^{t0} e^{τ/T}u(τ)dτ] + (K1/T)∫_0^{t0} e^{−(t−τ)/T}u(τ)dτ + (K1/T)∫_{t0}^{t} e^{−(t−τ)/T}u(τ)dτ
x(t) = e^{−(t−t0)/T}x(t0) + (K1/T)∫_{t0}^{t} e^{−(t−τ)/T}u(τ)dτ = φ(t, t0, x(t0), u[t0,t])        (1.2.26)
which is just (1.2.4), if we take into consideration that x(t0) = x0.
Now from (1.2.24), (1.2.25) and (1.2.26) we observe that
x(t) = φ(t, 0, x(0), u[0,t]) ≡ φ(t, t0, φ(t0, 0, x(0), u[0,t0]), u[t0,t])        (1.2.27)
where the inner term φ(t0, 0, x(0), u[0,t0]) equals x(t0). This is the so-called state transition property of the i-is-s relation.
According to this property, the state at any time moment t, x(t), as the result of the evolution from an initial state x(0) at the time moment t=0 with an input u[0,t], is the same as the state obtained in the evolution of the system starting at any intermediate time moment t0 from an initial state x(t0) with an input u[t0,t], if the intermediate state x(t0) is the result of the evolution from the same initial state x(0) at the time t=0 with the input u[0,t0]. It has to be pointed out that
u[0,t] = u[0,t0] ∪ u[t0,t]        (1.2.28)
Two conclusions can be drawn from this example:
1. Any intermediate state is an initial state for the future evolution.
2. An initial state x0 at a time moment t0 contains all the essential information about the past evolution needed to determine the future evolution, once the input is given starting from that time moment t0.
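The transition property (1.2.27) can be verified numerically for this first-order system. A minimal sketch in Python, with hypothetical values for T, K1 and the initial state, a unit step input, and function names of our own choosing:

```python
import math

# Hypothetical parameter values, for illustration only.
T, K1 = 2.0, 3.0

def phi(t, t0, x_t0, u):
    """i-is-s relation (1.2.26): x(t) = e^{-(t-t0)/T} x(t0)
    + (K1/T) * integral from t0 to t of e^{-(t-tau)/T} u(tau) dtau,
    the integral being evaluated with the composite trapezoidal rule."""
    n = 2000
    h = (t - t0) / n
    integral = 0.0
    for i in range(n + 1):
        tau = t0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        integral += w * math.exp(-(t - tau) / T) * u(tau) * h
    return math.exp(-(t - t0) / T) * x_t0 + (K1 / T) * integral

u = lambda tau: 1.0   # unit step input
x0 = 0.5              # initial state at t = 0

# Direct evolution 0 -> t versus evolution through an intermediate moment t0,
# as in (1.2.27); the two results should coincide.
t0, t = 1.0, 3.0
x_direct = phi(t, 0.0, x0, u)
x_via_t0 = phi(t, t0, phi(t0, 0.0, x0, u), u)
print(x_direct, x_via_t0)
```

For the step input, the closed form x(t) = e^{−t/T}x(0) + K1(1 − e^{−t/T}) obtained from (1.2.24) gives the same value, which confirms the computation.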

To obtain the relation (1.2.7), we only have to substitute x = y/K2 in (1.2.26), getting, if we denote x(t0) = x0,
y(t) = K2e^{−(t−t0)/T}x0 + (K1K2/T)∫_{t0}^{t} e^{−(t−τ)/T}u(τ)dτ = η(t, t0, x0, u[t0,t])        (1.2.29)
This is an input-initial state-output relation (i-is-o).


1.3. Inputs; Outputs; Input-Output Relations.

1.3.1. Inputs; Outputs.


The time variable is denoted by letter t for the so called continuous-time
systems and by letter k for the so called discrete-time systems.
The time domain T , or observation domain, is the domain of functions
describing the time evolution of variables. For continuous-time systems T ⊆ R
and for discrete-time systems T ⊆ Z. Sometimes the letter t is utilised as the time variable also for discrete-time systems, with the understanding that t ∈ Z.
The input variable is the function
u :T→ U; t→u(t), (1.3.1)
where U is the set of input values (or the set of all inputs).
Usually U ⊆ R p if there are p inputs expressed by real numbers.
The set of admissible inputs Ω, is the set of functions u allowed to be
applied to an oriented system.
The input segment on a time interval [t0,t1] ⊆ T, called the observation interval, is the graph of the function u on this time interval, u[t0,t1]:
u[t0,t1] = {(t,u(t)), ∀t ∈ [t0,t1]}        (1.3.2)
When we say that an input is applied to a system on a time interval [t0,t1], we understand that the input variable changes in time according to the given graph u[t0,t1], that is, according to the input segment. Sometimes, for easier writing, we understand by u, depending on the context, one of the following:
u - a function as in (1.3.1);
u[t0,t1] - a segment as in (1.3.2) on an understood observation interval;
u(t) - the law of correspondence of the function (1.3.1), or the value of this function at a specific time moment denoted t.
All these conveniences will be encountered for all other variables in this
textbook.
The output variable is the function
y :T→ Y; t→y(t), (1.3.3)
where Y is the set of output values (or the set of all outputs). Usually Y ⊆ R r if
there are r outputs expressed by real numbers.
We denote by Γ the set of possible outputs that is the set of all functions y
that are expected to be got from a physical system if inputs that belong to Ω are
applied.
The input-output pair. If an input u[t0,t1] is applied to a physical system, the output time response is expressed by the output segment y[t0,t1], where
y[t0,t1] = {(t,y(t)), ∀t ∈ [t0,t1]}        (1.3.4)
which means that to each input there corresponds an output.


The pair of segments
[u[t0,t1]; y[t0,t1]] = (u; y)        (1.3.5)
observed on a physical system is called an input-output pair.
It is possible that for the same input u[t0,t1] another output segment y^a[t0,t1] is obtained, which means that the pair [u[t0,t1]; y^a[t0,t1]] = (u; y^a) is also an input-output pair of that system, as depicted in Fig. 1.3.0.
Figure no. 1.3.0. Two input-output pairs (u; y) and (u; y^a) observed for the same input segment u[t0,t1].
In the example of the electrical device from Ex. 1.2.2 or of the mechanical system from Ex. 1.2.3, the solution of the differential equation (1.2.2) for t ≥ t0 and x(t0) = x0 is
y(t) = K2e^{−(t−t0)/T}x0 + (K1K2/T)∫_{t0}^{t} e^{−(t−τ)/T}u(τ)dτ = η(t, t0, x0, u[t0,t])
For the same input u[t0,t1], the output depends also on the value x0 = x(t0), which is the voltage across the capacitor C terminals in Ex. 1.2.2 or the arm position (point A) in Ex. 1.2.3 at the time moment t0.

1.3.2. Input-Output Relations.


The totality of input-output pairs that describe the behaviour of a physical object is just the abstract system. Instead of a specific list of input time functions and their corresponding output time functions, the abstract system is usually characterised as the class of all time functions that obey a set of mathematical equations. This is in accordance with the scientific method of hypothesising an equation and then checking that the physical object behaves in a manner similar to that predicted by the equation /2/.
Practically an abstract system is expressed by the so called input-output
relation which can be a differential or difference equation, graph, table or
functional diagram.
A relation implicitly expressed by R(u,y)=0, or explicitly expressed by an operator S, y=S{u}, is an input-output relation for an oriented system if:
1. Any input-output pair observed on the system satisfies this relation.
2. Any pair (u,y) which satisfies this relation is an input-output pair of that oriented system.
We have to mention that by the operatorial notation y=S{u} or just y=Su,
we understand that the operator S is applied to the input (function) u and as a
result, the output (function) y is obtained.

For example, if in the differential equation Tẋ + x = K1u from Ex. 1.2.2 we substitute x = y/K2, we obtain,


Tẏ + y = K1K2u ⇔ R(u,y) = 0, R(u,y) = Tẏ + y − K1K2u        (1.3.6)
which is an implicit input-output relation.
By denoting D = d/dt the time derivative operator, Dy = ẏ, we obtain
TDy(t) + y(t) − K1K2u(t) = 0 ⇔ (TD+1)y(t) − K1K2u(t) = 0 ⇔
y(t) = K1K2/(TD+1)·u(t) ⇔ y(t) = Su(t), S = K1K2/(TD+1)        (1.3.7)
Here y(t) = Su(t) is an explicit input-output relation given by an integral-differential operator S. This relation is expressed in the time domain, but it can be expressed in any other domain if a one-to-one correspondence exists.
For example, we can express it in the s-complex domain, applying to (1.3.6) the Laplace transform:
Y(s) = K1K2/(Ts+1)·U(s) + T/(Ts+1)·x(0)        (1.3.8)
from where an operator H(s), called transfer function, is defined:
H(s) = Y(s)/U(s) |x(0)=0 = K1K2/(Ts+1).        (1.3.9)
The relation between the Laplace transform of the output, Y(s), and the Laplace transform of the input, U(s), which determined that output in zero initial conditions,
Y(s) = H(s)U(s)        (1.3.10)
is another form of explicit input-output relation.
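As a sketch of how the transfer function can be used: for a unit step input, U(s) = 1/s, the inverse transform of H(s)·(1/s) gives y(t) = K1K2(1 − e^{−t/T}) in zero initial conditions, which can be cross-checked against a direct numerical integration of Tẏ + y = K1K2u. The parameter values below are hypothetical:

```python
import math

# Hypothetical values, for illustration only.
T, K1, K2 = 1.5, 2.0, 0.5

# Step response predicted by the transfer function (1.3.9):
# U(s) = 1/s, Y(s) = H(s)/s, hence y(t) = K1K2 (1 - e^{-t/T}).
def y_step(t):
    return K1 * K2 * (1.0 - math.exp(-t / T))

# Cross-check: integrate T y' + y = K1K2 u, y(0) = 0, u = 1, by forward Euler.
def y_ode(t, steps=200000):
    h = t / steps
    y = 0.0
    for _ in range(steps):
        y += h * (K1 * K2 - y) / T
    return y

print(y_step(2.0), y_ode(2.0))   # the two values should agree closely
```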

Example 1.3.1. Double RC Electrical Circuit.


Let us consider an electrical network obtained by physical series
connections of two simple R-C circuits whose principle diagram is represented in
Fig. 1.3.1.
Figure no. 1.3.1. Principle diagram of the double R-C circuit: input u across the terminals A1-A1', currents i1, iC1, iC2, i2 (i = 0 at the output), capacitor voltages uC1 = x1, uC2 = x2, output y across the terminals B2-B2'.
Figure no. 1.3.2. The corresponding oriented system S with input u and output y.
Suppose that the second circuit runs unloaded and the first is controlled by the voltage u across the terminals A1,A1', and we are interested in the voltage y across the terminals B2,B2'.
Because the output y is defined, under common conditions only the voltage
u affects this selected output so the oriented system is specified as depicted in
Fig. 1.3.2.
The abstract system denoted by S will be defined establishing the
mathematical relations between u and y.

To do this first we observe from the principle diagram that there are 8
variables as time functions involved : u, i1, iC1, uC1=x1, iC2, uC2=x2, i2 and y.
The other quantities R1, R2, C1, C2 are constant in time and represent the circuit parameters, as any person skilled in electrical engineering understands at once.
Because u is a cause (an input) it is a free variable, so we have to look for 7 independent equations. These equations can be written using Kirchhoff's theorems and Ohm's law:
1. iC1 = i1 − i2    2. iC2 = i2    3. iC1 = C1ẋ1    4. iC2 = C2ẋ2
5. i1 = (1/R1)(−x1 + u)    6. i2 = (1/R2)(x1 − x2)    7. y = x2
We can observe that two variables, x1 and x2, appear with their first order derivatives, so after eliminating all the intermediate variables a relation between u and y will be obtained as a second order differential equation. But first let us keep the variables x1 and x2 and their derivatives.
Denoting by T1 = R1C1 and T2 = R2C2 the two time constants, after some substitutions we obtain
T1ẋ1 = −(1 + R1/R2)x1 + (R1/R2)x2 + u        (1.3.11)
T2ẋ2 = x1 − x2        (1.3.12)
y = x2        (1.3.13)
which, after dividing by T1 and T2 respectively, take the final form
     ẋ1 = −(1/T1)(1 + R1/R2)x1 + (1/T1)(R1/R2)x2 + (1/T1)u(t)        (1.3.14)
S:   ẋ2 = (1/T2)x1 − (1/T2)x2        (1.3.15)
     y = x2        (1.3.16)

The equations (1.3.14), (1.3.15), (1.3.16) are called the state equations of that oriented system and they constitute the abstract system S in state equation form. We can rewrite these equations in matrix form,
S:   ẋ = Ax + bu        (1.3.17)
     y = cᵀx + du        (1.3.18)
where:
A = [ −(1/T1)(1 + R1/R2)   (1/T1)(R1/R2) ]    b = [ 1/T1 ]    c = [ 0 ]    d = 0        (1.3.19)
    [        1/T2                −1/T2    ]        [  0   ]        [ 1 ]

and generally they are called:


A the system matrix
b the command vector
c the output vector
d the direct input-output connection factor .
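The state equations above can be simulated directly. The following sketch, in Python with hypothetical component values, builds A, b, c, d as in (1.3.19) and integrates ẋ = Ax + bu by the forward Euler method; for a constant input the output settles to y = u, as setting the derivatives to zero in (1.3.14)-(1.3.15) shows:

```python
# Hypothetical component values, for illustration only:
# R1 = 1 kOhm, R2 = 2 kOhm, C1 = 1 uF, C2 = 0.5 uF.
R1, R2 = 1000.0, 2000.0
C1, C2 = 1e-6, 0.5e-6
T1, T2 = R1 * C1, R2 * C2

# State matrices as in (1.3.19).
A = [[-(1 + R1 / R2) / T1, (R1 / R2) / T1],
     [1 / T2, -1 / T2]]
b = [1 / T1, 0.0]
c = [0.0, 1.0]
d = 0.0

def simulate(u, t_end, steps=100000):
    """Forward-Euler integration of x' = Ax + bu, y = c^T x + du, x(0) = 0."""
    h = t_end / steps
    x = [0.0, 0.0]
    for _ in range(steps):
        dx0 = A[0][0] * x[0] + A[0][1] * x[1] + b[0] * u
        dx1 = A[1][0] * x[0] + A[1][1] * x[1] + b[1] * u
        x = [x[0] + h * dx0, x[1] + h * dx1]
    return c[0] * x[0] + c[1] * x[1] + d * u

# For a constant input, the equilibrium of (1.3.14)-(1.3.15) is x1 = x2 = u,
# so after the transient the output settles to y = u.
print(simulate(u=5.0, t_end=0.05))
```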


The input-output relation R(u,y) = 0, as mentioned before, can be expressed as a single differential equation in u and y. To do this we can use for example (1.3.14), (1.3.15), (1.3.16) or, simpler, (1.3.11), (1.3.12), (1.3.13).
Substituting x2 from (1.3.13) into (1.3.12) and multiplying by T1 we obtain
T1T2ẏ = T1x1 − T1y ⇒ T1x1 = T1T2ẏ + T1y        (*)
Applying the first derivative to (*) and substituting T1ẋ1 from (1.3.11) and
x1 = T2ẏ + y
from (*), we obtain
T1T2ÿ + T1ẏ = −(1 + R1/R2)(T2ẏ + y) + (R1/R2)y + u
which finally goes to
T1T2ÿ + [T1 + (1 + R1/R2)T2]ẏ + y = u.        (1.3.20)
This is a differential equation expressing the mathematical model (the abstract system) of the oriented system. It can be presented as an i-o relation
R(u,y) = 0, where R(u,y) = T1T2ÿ + [T1 + (1 + R1/R2)T2]ẏ + y − u        (1.3.21)
If we denote by D = d/dt the derivative operator, we can express the i-o relation in explicit form
y(t) = 1/(T1T2D² + [T1 + T2(1 + R1/R2)]D + 1)·u(t) ⇒ y(t) = S(D)u(t)        (1.3.22)
where S(D) is an integral-differential operator.


For simplicity, let us consider the following values for the parameters:
R1 = R; R2 = 2R; C1 = C; C2 = C/2 ⇒ T1 = T2 = T = RC
so the differential equation (1.3.20) becomes
T²ÿ + 2.5Tẏ + y = u.        (1.3.23)
We can express the i-o relation in complex form by using the Laplace transform. Applying the Laplace transform to (1.3.23) we get
Y(s) = L{y(t)}; U(s) = L{u(t)} ⇒
T²[s²Y(s) − sy(0+) − ẏ(0+)] + 2.5T[sY(s) − y(0+)] + Y(s) = U(s) ⇒
Y(s) = 1/(T²s²+2.5Ts+1)·U(s) + (T²s+2.5T)/(T²s²+2.5Ts+1)·y(0+) + T²/(T²s²+2.5Ts+1)·ẏ(0+)        (1.3.24)
We denote by L(s) the characteristic polynomial
L(s) = T²s² + 2.5Ts + 1        (1.3.25)
so the output in the complex domain is
Y(s) = 1/L(s)·U(s) + (T²s+2.5T)/L(s)·y(0+) + T²/L(s)·ẏ(0+)        (1.3.26)
As we can see, the Laplace transform of the output, Y(s), depends on the Laplace transform of the input, U(s), and on two initial conditions: y(0+), the value of the output, and ẏ(0+), the value of the time derivative of the output, both at the time moment 0+.


Let H(s) be
H(s) = 1/L(s) = Y(s)/U(s) |zero initial conditions        (1.3.27)
where H(s) is called the transfer function of the system.
The transfer function of a system is the ratio between the Laplace transform of the output and the Laplace transform of the input which determines that output in zero initial conditions, if and only if this ratio does not depend on the form of the input.
By using the inverse Laplace transform we can obtain the time answer (the output response) of this system. The characteristic equation L(s) = T²s² + 2.5Ts + 1 = 0 has the roots
s1,2 = (−2.5T ± √((2.5T)² − 4T²))/(2T²) = (−2.5T ± 1.5T)/(2T²)        (1.3.28)
so the characteristic polynomial is presented as
L(s) = T²(s−λ1)(s−λ2) with λ1 = −1/(2T); λ2 = −2/T.        (1.3.29)
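The roots λ1, λ2 and, for instance, the residue 2/(3T) of 1/L(s) at λ1 can be checked numerically; a minimal sketch, with an arbitrary illustrative value of T:

```python
# Check, for an arbitrary illustrative value of T, that lambda1 = -1/(2T) and
# lambda2 = -2/T are the roots of L(s) = T^2 s^2 + 2.5 T s + 1, and that the
# partial-fraction residues of 1/L(s) are +-2/(3T).
T = 0.7

def L(s):
    return T**2 * s**2 + 2.5 * T * s + 1.0

lam1, lam2 = -1.0 / (2.0 * T), -2.0 / T
A = 1.0 / (T**2 * (lam1 - lam2))   # residue of 1/L(s) at s = lambda1
B = 1.0 / (T**2 * (lam2 - lam1))   # residue of 1/L(s) at s = lambda2
print(L(lam1), L(lam2))                            # both values ~ 0
print(A - 2.0 / (3.0 * T), B + 2.0 / (3.0 * T))    # both differences ~ 0
```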
One way to calculate the inverse Laplace transform is to use the partial fraction expansion of the rational functions from Y(s) as in (1.3.26):
1/L(s) = 1/(T²(s−λ1)(s−λ2)) = A/(s−λ1) + B/(s−λ2) ⇒ A = 2/(3T); B = −2/(3T)
(T²s+2.5T)/(T²(s−λ1)(s−λ2)) = A1/(s−λ1) + B1/(s−λ2) ⇒ A1 = 4/3; B1 = −1/3
T²/(T²(s−λ1)(s−λ2)) = A2/(s−λ1) + B2/(s−λ2) ⇒ A2 = 2T/3; B2 = −2T/3
Y(s) = (2/(3T))[1/(s−λ1) − 1/(s−λ2)]U(s) + (1/3)[4/(s−λ1) − 1/(s−λ2)]y(0) + (2T/3)[1/(s−λ1) − 1/(s−λ2)]ẏ(0)
L⁻¹{1/(s−λ1)} = e^{λ1t} = α1(t);  L⁻¹{1/(s−λ2)} = e^{λ2t} = α2(t)
L⁻¹{U(s)/(s−λ1)} = ∫_0^t α1(t−τ)u(τ)dτ;  L⁻¹{U(s)/(s−λ2)} = ∫_0^t α2(t−τ)u(τ)dτ
so
y(t) = (1/3)[4α1(t) − α2(t)]y(0) + (2T/3)[α1(t) − α2(t)]ẏ(0) + (2/(3T))∫_0^t [α1(t−τ) − α2(t−τ)]u(τ)dτ        (1.3.30)
where
α1(t) = e^{λ1t} = e^{−t/(2T)}        (1.3.31)
α2(t) = e^{λ2t} = e^{−2t/T}        (1.3.32)
By using the same procedure as for the first order R-C circuit presented in Ex. 1.2.2, we can express this time relation depending on the initial time t0 as:
y(t) = (1/3)[4α1(t−t0) − α2(t−t0)]y(t0) + (2T/3)[α1(t−t0) − α2(t−t0)]ẏ(t0) + (2/(3T))∫_{t0}^{t} [α1(t−τ) − α2(t−τ)]u(τ)dτ        (1.3.33)
As we can see, the general response by output depends on: t, t0, the initial state x0, and the input u[t0,t], where the state vector is defined as
x1(t0) = y(t0); x2(t0) = ẏ(t0) ⇒ x(t0) = [x1(t0) x2(t0)]ᵀ = x0        (1.3.34)
The relation (1.3.33) is an input-initial state-output relation of the form
y(t) = η(t, t0, x0, u[t0,t])        (1.3.35)
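As a sketch, the free response (u = 0) given by (1.3.33) can be cross-checked against a direct numerical integration of T²ÿ + 2.5Tẏ + y = 0; the values of T, y(t0), ẏ(t0) below are illustrative only:

```python
import math

# Hypothetical data, for illustration: T = 1 s, y(t0) = 3, y'(t0) = 9, u = 0.
T, y0, dy0 = 1.0, 3.0, 9.0
lam1, lam2 = -1.0 / (2.0 * T), -2.0 / T

def y_free(t):
    """Free response from (1.3.33) with t0 = 0 and u = 0."""
    a1, a2 = math.exp(lam1 * t), math.exp(lam2 * t)
    return (4 * a1 - a2) * y0 / 3.0 + 2.0 * T * (a1 - a2) * dy0 / 3.0

# Cross-check: integrate T^2 y'' + 2.5 T y' + y = 0 by forward Euler.
def y_ode(t, steps=400000):
    h = t / steps
    y, dy = y0, dy0
    for _ in range(steps):
        ddy = (-2.5 * T * dy - y) / T**2
        y, dy = y + h * dy, dy + h * ddy
    return y

print(y_free(2.0), y_ode(2.0))   # the two values should agree closely
```

At t = t0 the formula reduces to y(t0) = y0, as expected from the initial conditions.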

Example 1.3.2. Manufacturing Point as a Discrete Time System.


Let us consider a working point (manufacturing point) in which the work is done by a robot which manufactures semiproducts. We suppose that it works in a daily cycle. Suppose that on day k the working point is supplied with uk semiproducts, of which only a fraction βk is transformed into finished products. The working point has a store house which contains, on day k, xk finished products. Suppose that daily (say on day k) a fraction (1−αk) of the stock is delivered to other sections. Also on day k the production is reported as yk, a fraction γk of the stock. We want to determine the evolution of the stock and of the reported production for any day. It can be observed that the time variable is a discrete one, the integer number k.
This working point can be interpreted as an oriented system having the daily report yk as the output. The input is the daily rate of supply uk, so the oriented system is defined as in Fig. 1.3.3, where the input and the output are strings of numbers.
Figure no. 1.3.3. The oriented system S with input uk and output yk.
The mathematical model is determined from the above description. If we denote by xk+1 the stock for the next day, it is composed of the remaining stock xk − (1−αk)xk = αkxk and the new stock βkuk,
xk+1=αkxk+βkuk (1.3.36)
yk=γkxk (1.3.37)
This is the abstract system S of the working point looked upon as an
oriented system. These are difference equations expressing a discrete-time
system.
We can determine the solution of this system of equations by using a
general method, but in this case we shall proceed to difference equation
integration step by step.
The day p+1 :  xp+1 = αpxp + βpup            | ×αk−1···αp+1
The day p+2 :  xp+2 = αp+1xp+1 + βp+1up+1    | ×αk−1···αp+2
.............................
The day k−2 :  xk−2 = αk−3xk−3 + βk−3uk−3    | ×αk−1αk−2
The day k−1 :  xk−1 = αk−2xk−2 + βk−2uk−2    | ×αk−1
The day k   :  xk = αk−1xk−1 + βk−1uk−1      | ×1
Denoting by Φ(k,p) the discrete-time transition matrix (in this example it is a scalar),
Φ(k,p) = ∏_{j=p}^{k−1} αj = αk−1αk−2···αp+1αp;  Φ(k,k) = 1        (1.3.38)
and adding the above set of relations, each multiplied by the factor written on the right side, we obtain


xk = Φ(k,p)xp + Σ_{j=p}^{k−1} Φ(k, j+1)βjuj        (1.3.39)
yk = γkxk        (1.3.40)
We observe that (1.3.39) is an input-initial state-state relation of the form
xk = φ(k, k0, xk0, u[k0,k−1])        (1.3.41)
where xk is the state at the current time k (in our case the day index) and xk0 is the initial state at the initial time moment p = k0.
The output evolution is
yk = γkΦ(k,p)xp + Σ_{j=p}^{k−1} γkΦ(k, j+1)βjuj        (1.3.42)
which is an input-initial state-output relation of the form
yk = η(k, k0, xk0, u[k0,k−1])        (1.3.43)
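A short numerical sketch, with made-up coefficients αk, βk and supplies uk, confirming that the closed form (1.3.39) reproduces the day-by-day recursion (1.3.36):

```python
# Illustrative (made-up) daily coefficients and supplies, k = 0 .. 4.
alpha = [0.9, 0.8, 0.95, 0.7, 0.85]         # alpha_k: fraction of the stock kept
beta  = [0.6, 0.5, 0.55, 0.6, 0.5]          # beta_k: fraction of u_k finished
u     = [100.0, 120.0, 80.0, 90.0, 110.0]   # u_k: daily supplies
x0 = 50.0                                   # initial stock on day 0

def stock_recursive(k):
    """Day-by-day recursion (1.3.36): x_{k+1} = alpha_k x_k + beta_k u_k."""
    x = x0
    for j in range(k):
        x = alpha[j] * x + beta[j] * u[j]
    return x

def Phi(k, p):
    """Discrete-time transition 'matrix' (1.3.38), here a scalar product."""
    prod = 1.0
    for j in range(p, k):
        prod *= alpha[j]
    return prod

def stock_closed_form(k):
    """Closed form (1.3.39) with p = 0."""
    return Phi(k, 0) * x0 + sum(Phi(k, j + 1) * beta[j] * u[j] for j in range(k))

print(stock_recursive(5), stock_closed_form(5))   # identical stock values
```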

Example 1.3.3. RS-memory Relay as a Logic System.


Let us consider a physical object represented by a principle diagram as in Fig. 1.3.4. This is a logic circuit implementing an RS-memory, based on a relay.
Figure no. 1.3.4. The relay circuit: supply +E, set button SB (terminals a-b), reset button RB (terminals c-d), relay with coil current i and contact x, lamp L.
Figure no. 1.3.5. The oriented system S with inputs st, rt and output yt.
Figure no. 1.3.6. The set of input-output pairs S = S0 ∪ S1, where S0 corresponds to xt0 = 0 and S1 to xt0 = 1.


Here SB represents a normal opened button (the set-button) and RB a
normal closed button (the reset-button).
By normal it is understood "not pressed".
When the button SB is pushed the current can run through the terminals
a-b and when the button RB is pushed the current can not run through the
terminals c-d. By x are denoted the normally open contacts of the relay.
The normal status (or state) of the relay is the one in which the current through the coil is zero. L is a lamp which lights when the relay is activated. The functioning of this circuit can be explained in words:
If the button RB is free, pushing the button SB a current i runs, activating the relay, whose contact x will shortcut the terminals a-b and the lamp is turned on. Even if the button SB is released the lamp keeps on lighting.


If the button RB is pushed, the current i is cancelled and the relay becomes
relaxed, the lamp L is turned off.
The variables encountered in this description SB, RB, i, x, y are associated
with the variables st, rt, it, xt, yt called logical variables .
They represent the truth values, at the time moment t, of the propositions:
st: "The button SB is pushed" ⇔ "Through terminals a-b the current can run".
rt: "The button RB is pushed" ⇔"Through terminals c-d the current can not run"
it: "The current i runs through the relay coil".
xt:"The relay is activated" ⇔ "The relay normal opened contacts are connected".
yt: "The lamp lights".
These logical variables can take only two values denoted usually by the
symbols 0 and 1 on a set B={0;1} which represents false and true.
The set B is organised as a Boolean algebra. In a Boolean algebra three
binary fundamental operations are defined: conjunction " ∧ ", disjunction " ∨ "
and negation " ¯ ".
Suppose we are interested in the lamp status, so the output is y(t) = yt. This selected output depends only on the status of the buttons SB, RB (it is supposed that the supply voltage E is continuously applied) as external causes, so the input is the vector u(t) = [st rt]ᵀ.
An oriented system is defined now as depicted in Fig. 1.3.5.
The mathematical relations between u(t) and y(t), defining the abstract
system S are expressed as logical equations.
The value of the logical variable it is given by
it = (st ∨ xt) ∧ r̄t.        (1.3.44)
Because of the mechanical inertia, the status of the relay changes after a
small time interval ε finite or ideally ε→0 to the value of it,
x t+ε = i t . (1.3.45)
Of course the status of the lamp equals to the status of the relay, so
yt=xt (1.3.46)
Substituting (1.3.44) into (1.3.45) together with (1.3.46) we obtain the
abstract system S as,
S:   xt+ε = (st ∨ xt) ∧ r̄t        (1.3.47)
     yt = xt        (1.3.48)
To determine the output of this system, beside the two inputs st, rt we need another piece of information: the value of xt, the state of the relay (1 if the relay is activated and 0 if the relay is not activated).
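The logic equations (1.3.47)-(1.3.48) can be simulated step by step. A minimal sketch in Python; the input scenario is invented for illustration:

```python
# Step-by-step simulation of (1.3.47)-(1.3.48):
# x_{t+eps} = (s OR x) AND NOT r,  y = x.
def step(x, s, r):
    return int((s or x) and not r)

x = 0            # relay initially relaxed
history = []
# Scenario: idle, press SB, release SB (kept released twice), press RB, release RB.
for (s, r) in [(0, 0), (1, 0), (0, 0), (0, 0), (0, 1), (0, 0)]:
    x = step(x, s, r)
    history.append(x)    # y_t = x_t

print(history)   # the lamp stays on after SB is released, and goes off after RB
```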


It does not matter how we denote this information: A (or "on") for 1 and B (or "off") for 0. If we know that the state of the relay is A, we can determine the output evolution provided we know the inputs.
If we are doing experiments with this physical system on time interval
[t0,t1] for any t0, t1, a set S of input-output pairs (u,y)=(u [t 0,t 1 ] , y [t 0,t 1 ] ) can be
observed,
S = {(u [t 0,t 1] , y [t 0,t 1] )/ observed , ∀t 0 , t 1 ∈ R, ∀u [t 0,t 1] ∈ Ω} , (1.3.49)
The set S can describe the abstract system. It can be split into two subsets depending on whether a pair (u,y) is obtained having xt0 equal to 0 or 1:
S0 = {(u,y) ∈ S / xt0 = 0} = {(u,y) ∈ S / xt0 = B} = {(u,y) ∈ S / xt0 = off}        (1.3.50)
S1 = {(u,y) ∈ S / xt0 = 1} = {(u,y) ∈ S / xt0 = A} = {(u,y) ∈ S / xt0 = on}        (1.3.51)
It can be proved that
S0 ∪ S1 = S ; S0 ∩ S1 = ∅ , (1.3.52)
as depicted in Fig. 1.3.6.
Also, inside any subset Si the input uniquely determines the output:
∀(u, y^a) ∈ Si, ∀(u, y^b) ∈ Si ⇒ y^a ≡ y^b, i = 0; 1        (1.3.53)
From this we understand that the initial state is a label which parametrizes the subsets Si ⊆ S as in (1.3.53).

Example 1.3.4. Black-box Toy as a Two States Dynamical System.


Let us consider a black-box, as a toy, someone received. The box is
covered, nothing can be seen of what it contains inside , but it has a controllable
voltage supplier with a voltage meter u(t) across the terminals A-B and a voltage
meter y(t) across the terminals C-D, as depicted in Fig. 1.3.7.
Figure no. 1.3.7. The black-box: a controllable voltage supplier and a voltmeter u(t) across the terminals A-B; a voltmeter y(t) across the terminals C-D; inside, three resistors R and a switch X.
Playing with this black-box, we register the evolution of the input u(t) and of the resulting output y(t). We are surprised that sometimes we get the input-output pair
(u[t0,t1], y[t0,t1] = (1/2)u[t0,t1]) ⇒ y(τ) = (1/2)u(τ), ∀τ        (1.3.54)
but other times we get the input-output pair
(u[t0,t1], y[t0,t1] = (2/3)u[t0,t1]) ⇒ y(τ) = (2/3)u(τ), ∀τ.        (1.3.55)
Doing all the experiments possible we have a collection of input-output
pairs which constitute a set S as (1.3.49).


This collection S will determine the behaviour of the black-box. For the same applied input u(t) we can get two outputs, y(t) = (1/2)u(t) or y(t) = (2/3)u(t). Our set of input-output pairs can be split into two subsets: S0 if they correspond to (1.3.54) and S1 if they correspond to (1.3.55).
Of course S0 ∪ S1 = S; S0 ∩ S1 = ∅.
If someone gave us an input u[t0,t1] we would not be able to say what the output is, because we have no idea from which subset, S0 or S1, to select the right pair. Some information is missing.
Suppose that the box cover face has been broken so we can have a look
inside the box as in Fig. 1.3.7.
Now we can understand why the two sets of input-output pairs (1.3.54),
(1.3.55) were obtained.
The box can be in two states, depending on the switch status: open or closed. We can define the box state by a variable x which takes two values, denoted: {off; on} or {0; 1} or {A; B}.
Now the subset S0 can be equivalently labelled by one of the marks "off", "0", "A" and the subset S1 by "on", "1", "B" respectively.
It does not matter how the position of the switch is denoted (labelled). The switch position determines the state of the black-box.
The state is equivalently expressed by one of the variables:
x ∈ {off; on} or x ∈ {0; 1} or x ∈ {A; B}.
If someone gives us an input u[t0,t1] and an additional piece of information formulated as "the state is on", which means x = on, or "the state is B", which means x = B, etc., we can uniquely determine the output y(t) = (2/3)u(t), selecting it from the subset S1.
With this example our intention is to point out that the system state appears as a way of parametrising the subsets of input-output pairs inside which one input uniquely determines one output.
Also we wanted to point out that the state can be presented in different forms; the same input-output behaviour can have different state descriptions.
In this example the state of the system cannot be changed by the input; such a system is called state uncontrollable.


1.4. System State Concept; Dynamical Systems.

1.4.1. General aspects.


As we saw in the above examples, to determine univocally the output,
beside the input, some initial conditions have to be known.
For example in the case of simple RC circuit we have to know the voltage
across the capacitor, for the mechanical system the initial position of the arm, in
the case of double RC circuit the voltages across the two capacitors, for the
manufacturing point the initial value of the stock, for the relay based circuit the
initial status of the relay.
All this information defines the system state in the time moment from
which the input will affect the output.
The state of an abstract system is a collection of elements (the elements can be numbers) which, together with the input u(t) for all t ≥ t0, uniquely determines the output y(t) for all t ≥ t0.
In essence the state parametrizes the listing of input-output pairs.
The state is the answer to the question: "Given u(t) for t ≥ t0 and the
mathematical relationships between input and output (the abstract system), what
additional information is needed to completely specify y(t) for t ≥ t0 ? ".
The system state at a time moment contains (includes) all the essential
information regarding the previous evolution to determine, starting with that time
moment, the output if the input is known.
A state variable denoted by the vector x(t) is the time function whose value
at any specified time is the state of the system at that time moment.
The state can be a set consisting of an infinity of numbers, in which case the state variable is an infinite collection of time functions. However, in most cases considered here, the state is a set of n numbers and correspondingly x(t) is an n-vector function of time.
The state space, denoted by X, is the set of all x(t) values.
The state representation is not unique. There can be many different ways of expressing the relationship of input to output. For example, in the case of the black-box or of the logic circuit we can define the state as {on, off}, {A, B} and so on. For the double RC circuit one state representation consists of the output value y(t0) and the value of the time derivative of the output, ẏ(t0).
The state of a system is related to a time moment. For example, the state x0 at a time moment t0 is denoted by (t0, x0) = x(t0).
The minimum number of state vector elements, for which the output can be
univocally determined, for a known (given) input, represents the system order or
the system dimension. Systems from Ex. 1.2.2. or Ex. 1.2.3. are of first order


because it is enough to know a single element x0 to determine the output response, as we can see in relation (1.2.7).
Also the discrete-time system (1.3.36), (1.3.37) from Ex. 1.3.2 is of first order, because the response (1.3.42) can be uniquely determined if we know just the number xp.
But the abstract system (1.3.17), (1.3.18) or (1.3.20) from Ex. 1.3.1 is of second order because, as we can see in relation (1.3.33) (considering, for the sake of convenience, the particular case T1 = T2 = T; the system dimension is not affected), the output y(t) is uniquely determined by a given input u[t0,t] if two initial conditions y(t0), ẏ(t0) are known, as rewritten in (1.4.1):
y(t) = (1/3)[4α1(t−t0) − α2(t−t0)]y(t0) + (2T/3)[α1(t−t0) − α2(t−t0)]ẏ(t0) + (2/(3T))∫_{t0}^{t}[α1(t−τ) − α2(t−τ)]u(τ)dτ        (1.4.1)
These initial conditions are the output value and the output time derivative value at the time moment t0. These two values can be selected as components of a vector, the state vector
x0 = [x01 x02]ᵀ = [y(t0) ẏ(t0)]ᵀ
which means that at the time t0 the state is x0 ⇒ (t0, x0) = x(t0).
If, for example, y(t0) = 3V, ẏ(t0) = 9V/sec, the state at the time moment t0 is expressed by the numerical vector x0 = [3 9]ᵀ = [3 volts 9 volts/sec]ᵀ, so we can say that at the time moment t0 the system is in the state [3 9]ᵀ. Because two numbers are necessary to determine uniquely the output, we can say that this system is a second order one.
The response from (1.3.33) can be arranged as:
y(t) = α1(t−t0)[(4/3)y(t0) + (2T/3)ẏ(t0)] + α2(t−t0)[−(1/3)y(t0) − (2T/3)ẏ(t0)] + (2/(3T))∫_{t0}^{t}[α1(t−τ) − α2(t−τ)]u(τ)dτ        (1.4.2)
Denoting by
x̄01 = (4/3)y(t0) + (2T/3)ẏ(t0)
x̄02 = −(1/3)y(t0) − (2T/3)ẏ(t0)
the output response can be written as
y(t) = α1(t−t0)x̄01 + α2(t−t0)x̄02 + (2/(3T))∫_{t0}^{t}[α1(t−τ) − α2(t−τ)]u(τ)dτ        (1.4.3)
This output response can be univocally determined if the numbers x̄01 and x̄02 are known, which means they can constitute the components of the vector x̄0 = [x̄01 x̄02]ᵀ.
Let us consider a concrete example for T = 1 sec. For
y(t0) = x0^1 = 3 V;  ẏ(t0) = x0^2 = 9 V/sec  ⇒
23
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.4. System State Concept; Dynamical Systems.
x̄0^1 = (4/3)·3 + (2·1/3)·9 = 10 V;  x̄0^2 = −(1/3)·3 − (2·1/3)·9 = −7 V/sec.
In this form of the state vector definition we can say that at the time
moment t0 the system is in the state x̄0 = [10 −7]^T, which is different from the state
x0 = [3 9]^T but, applying the same input u_[t0,t], we obtain the same output as
was obtained in the case of x0 = [3 9]^T.
The following important conclusion can be pointed out:
The same input-output behaviour of an oriented system can be obtained by
defining the state vector in different ways.
If x is the state vector related to an oriented system and a square matrix T
is non-singular, det T ≠ 0, then the vector x̄ = Tx is also a state vector for the
same oriented system. Both states x, x̄ related as above will determine the same
input-output behaviour.
In the above example, the two state relationships
x̄1 = (4/3)·x1 + (2T/3)·x2
x̄2 = −(1/3)·x1 − (2T/3)·x2
can be written in a matrix form
x̄ = Tx,  where T = [4/3, 2T/3; −1/3, −2T/3],  det T = −2T/3 ≠ 0.    (1.4.4)
For example, in the case of the logic circuit (Ex. 1.3.3.) or of the black-box
(Ex. 1.3.4.) we can define the state values as {on, off}, {A, B} and so on, as we
discussed.
If the collection of numbers which defines the state is a finite one,
the state is defined as a column-vector: x = [x1 x2 . . xn]^T.
The minimal number of elements of this vector able to uniquely determine
the output defines the system order or the system dimension.
When the amount of such a collection strictly necessary is infinite (we can
say the vector x has an infinite number of elements), then the order of the system
is infinite, or the system is infinite-dimensional.
Such an infinite-dimensional system is presented in the next example by the
pure time delay element.

Example 1.4.1. Pure Time Delay Element.


Let us consider a belt conveyor transporting dust fuel (dust coal for
example) utilised in a heating system represented by a principle diagram as
shown in Fig. 1.4.1. The belt moves with a speed v.
The thickness of the fuel is controlled by a mobile flap.
Suppose we are interested in the thickness at the point B at the end of
the belt, expressed by the variable y(t).
This variable will be the output of the oriented system we are defining as
in Fig. 1.4.2. The input is the thickness realised at the flap position, point A, and
we shall denote it by the variable u(t).
The distance between the points A and B is d. One piece of fuel passing from
A to B will take a period of time τ = d/v.
Figure no. 1.4.1. (belt conveyor moving with speed v: controlled flap setting the thickness u(t) at point A; thickness y(t) at point B, at distance d from A)
Figure no. 1.4.2. (pure time delay element: y(t) = u(t−τ); in complex domain Y(s) = e^(−τs)·U(s); time diagram illustrating y(t1) = u(t1−τ))
The input-output relation is expressed by the equation
y(t) = u(t − τ)    (1.4.5)
We can read this relation as: the output at the time moment t equals the
value the input u(t) had τ seconds ago. Such a dependence is illustrated in the
diagram from Fig. 1.4.2. It is a so-called functional equation.
Now suppose an input u_[t0,t] is given. Can we determine the output y(t) for
any t ≥ t0? What do we need in addition to do this? Looking at the principle
diagram from Fig. 1.4.1. or at the relation (1.4.5), we understand that in addition
we need to know all the thicknesses along the belt between the points A and B, or
in other words all the values the input u(t) had during the time interval
[t0−τ, t0).
This collection of information constitutes the system state at the time
moment t0 and it will be denoted by x0 .
So the state at the time moment t0, (t0, x0), denoted in short
x0 = x(t0) = x_{t0}, is a set containing an infinite number of elements,
x0 = x_{t0} = x(t0) = { u(θ), θ ∈ [t0−τ, t0) } = u_[t0−τ,t0)    (1.4.6)
Because of that, this system is infinite-dimensional.
At any time t the state is (t, x) = x(t), defined by
x(t) = { u(θ), θ ∈ [t−τ, t) } = u_[t−τ,t)    (1.4.7)
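The role of the segment u_[t−τ,t) as state can be illustrated with a discrete-time sketch (not from the book): sampling with a step h, the delay τ = N·h becomes a FIFO buffer of the last N input samples, and that buffer is exactly the state.

```python
from collections import deque

class PureDelay:
    """Discrete approximation of y(t) = u(t - tau), with tau = N sampling steps."""
    def __init__(self, N, initial_segment):
        # the state is the whole stored segment u[t0-tau, t0), here N samples
        assert len(initial_segment) == N
        self.state = deque(initial_segment, maxlen=N)

    def step(self, u):
        y = self.state[0]        # oldest sample reaches point B and becomes the output
        self.state.append(u)     # newest sample enters at point A (oldest is dropped)
        return y

d = PureDelay(3, [0.0, 0.0, 0.0])                       # zero initial state
print([d.step(u) for u in [1.0, 2.0, 3.0, 4.0, 5.0]])   # -> [0.0, 0.0, 0.0, 1.0, 2.0]
```

The input sequence reappears at the output shifted by exactly N = 3 steps, and two delays with different buffer contents (different states) give different outputs for the same input.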
All these intuitive observations have a mathematical support:
applying the Laplace transform to the input-output relation (1.4.5).
We remember that
L{u(t − τ)} = L{u(t − τ)·1(t)} = e^(−τs)·[U(s) + ∫_{−τ}^{0} u(t)·e^(−st)dt]    (1.4.8)
so the Laplace transform of the output is
Y(s) = e^(−τs)·U(s) + e^(−τs)·∫_{−τ}^{0} u(t)·e^(−st)dt = Yf(s) + Yl(s)    (1.4.9)
where
Yf(s) = e^(−τs)·U(s)  ⇒  yf(t) = η(t, 0, 0, u_[0,t])    (1.4.10)
is the forced response, which depends on the input u(t) only; of course, here it
depends on the Laplace transform U(s), which contains the input values u(θ) for
any θ ≥ 0. The price paid for the simplicity of the Laplace transform is the
restriction t0 = 0.
Yl(s) = e^(−τs)·∫_{−τ}^{0} u(t)·e^(−st)dt    (1.4.11)
is the free response, that is, the output response when the input is zero.
By zero input we must mean
u(t) ≡ 0, ∀t ≥ 0  ⇔  U(s) ≡ 0, ∀s
from the convergence domain of U(s).
The free response depends on the initial state only (here the initial time
moment is t0 = 0) and, as we can see from (1.4.11), it depends on all the values of
u(θ), ∀θ ∈ [−τ, 0)  ⇔  u_[−τ,0),
so it is natural to choose the initial state as
x0 = x(0) = u_[−τ,0).
Now we can interpret the free response from (1.4.11) as
yl(t) = η(t, 0, x0, 0_[0,t)).    (1.4.12)
From (1.4.10), (1.4.12) the general response (the time image of (1.4.9))
can be expressed as an input-initial state-output relation
y(t) = yf(t) + yl(t) = η(t, 0, 0, u_[0,t]) + η(t, 0, x0, 0_[0,t]) = η(t, 0, x0, u_[0,t]).    (1.4.13)

1.4.2. State Variable Definition.


The state variable is a function
x: T → X,  t → x(t),    (1.4.14)
where X is the state space, which expresses the time evolution of the system
state. The state is not a constant (fixed) one: it can change during the system
time evolution, so the function x(t) can be a non-constant one.
The graph of this function on a time interval [t0, t1], denoted by
x_[t0,t1] = {(t, x(t)), ∀t ∈ [t0, t1]}    (1.4.15)
is called the time state trajectory on the interval [t0, t1]. The state variable x(t)
is an explicit function of time, but it also depends implicitly on the starting time t0,
the initial state x(t0) = x0 and the input u(τ), τ ∈ [t0, t].
This functional dependency, called the input-initial state-state relation (i-is-s
relation) or just a trajectory (more precisely, time-trajectory), can be written as
x(t) = ϕ(t, t0, x0, u_[t0,t]),  x0 = x(t0)    (1.4.16)
A relation of the form (1.4.16) is an input-initial state-state relation
(i-is-s relation) and expresses the state evolution of a system if the following four
conditions are fulfilled:
1. The uniqueness condition . For a given initial state x(t0)=x0 at time t0 and
a given well defined input u [t 0 , t] the state trajectory is unique. This can be
expressed as: "A unique trajectory ϕ(t, t 0 , x 0 , u [t 0,t] ) exists for all t > t0 , given
the state x0 at time t0 and a real input u(τ), for τ≥t0 ".
2. The consistency condition. For t = t0 the relation (1.4.16) has to satisfy
the condition:
x(t)|_{t=t0} = x(t0) = ϕ(t0, t0, x0, u_[t0,t0]) = x0.    (1.4.17)
Also,
∀ t1 ≥ t0,  lim_{t→t1, t>t1} ϕ(t, t1, x(t1), u_[t1,t]) = x(t1),    (1.4.18)
which means that a unique trajectory starts from each state.

3. The transition condition. Any intermediate state on a state trajectory is an
initial state for the future state evolution.
For any t2 ≥ t0, one input u_[t0,t2] takes the state x(t0) to x(t2),
x(t2) = ϕ(t2, t0, x(t0), u_[t0,t2])
but for any intermediate time t1, t0 ≤ t1 ≤ t2, applying u_[t0,t1], a subset of u_[t0,t2], that
means
u_[t0,t2] = u_[t0,t1] ∪ u_[t1,t2],    (1.4.19)
we get the intermediate state x(t1),
x(t1) = ϕ(t1, t0, x(t0), u_[t0,t1])
which, acting as an initial state from t1, will determine the same x(t2):
x(t2) = ϕ(t2, t0, x(t0), u_[t0,t2]) = ϕ(t2, t1, ϕ(t1, t0, x(t0), u_[t0,t1]), u_[t1,t2])    (1.4.20)
where the inner ϕ is x(t1).
According to this property we can say that the input u_[t0,t] (or u) takes the
system from a state (t0, x0) = x(t0) to a state (t, x) = x(t) and, if a state (t1, x1) = x(t1) is
on that trajectory, then the corresponding segment of the input will take the
system from x(t1) to x(t).
4. The causality condition . The state x(t) at any time t or the trajectories
ϕ(t, t 0 , x 0 , u [t 0,t] ) do not depend on the future inputs u(τ) for τ >t. This condition
assures the causality of the abstract system which has to correspond to the
causality of the original physical oriented system.
Example 1.4.2. Properties of the i-is-s relation.

Let us consider the examples Ex. 1.2.2. and Ex. 1.2.3. for which the
abstract system is described by relations (1.2.1) or (1.2.16),
S1:  T·ẋ + x = K1·u
     y = K2·x    (1.4.21)
The voltage across the capacitor x, or the displacement of the arm x, is the
state. Its time evolution is
x(t) = e^(−(t−t0)/T)·x0 + (K1/T)·∫_{t0}^{t} e^(−(t−τ)/T)·u(τ)dτ  ⇔  x(t) = ϕ(t, t0, x0, u_[t0,t])    (1.4.22)
We can show that the relationship (1.4.22) satisfies the four conditions
above, which means it is an i-is-s relation.

1. The uniqueness condition:
If x̄0 = x̃0 and ū_[t0,t] = ũ_[t0,t], the two state trajectories are
x̄(t) = e^(−(t−t0)/T)·x̄0 + (K1/T)·∫_{t0}^{t} e^(−(t−τ)/T)·ū(τ)dτ
x̃(t) = e^(−(t−t0)/T)·x̃0 + (K1/T)·∫_{t0}^{t} e^(−(t−τ)/T)·ũ(τ)dτ  ⇒
x̄(t) − x̃(t) = e^(−(t−t0)/T)·(x̄0 − x̃0) + (K1/T)·∫_{t0}^{t} e^(−(t−τ)/T)·[ū(τ) − ũ(τ)]dτ ≡ 0 ⇒ x̄(t) ≡ x̃(t),
since x̄0 − x̃0 = 0 and ū(τ) − ũ(τ) ≡ 0.
2. The consistency condition:
Substituting t = t0,
x(t0) = e^(−(t0−t0)/T)·x0 + (K1/T)·∫_{t0}^{t0} e^(−(t0−τ)/T)·u(τ)dτ = x0  ⇒  x(t0) = x0.
3. The transition condition:
For t = t1, denoting x0 = x(t0), (1.4.22) becomes
x(t1) = e^(−(t1−t0)/T)·x(t0) + (K1/T)·∫_{t0}^{t1} e^(−(t1−τ)/T)·u(τ)dτ
and for t = t2 (1.4.22) is
x(t2) = e^(−(t2−t0)/T)·x(t0) + (K1/T)·∫_{t0}^{t2} e^(−(t2−τ)/T)·u(τ)dτ.
Because e^(−(t2−t0)/T) = e^(−(t2−t1)/T)·e^(−(t1−t0)/T) and ∫_{t0}^{t2}(..)dτ = ∫_{t0}^{t1}(..)dτ + ∫_{t1}^{t2}(..)dτ, we get
x(t2) = e^(−(t2−t1)/T)·e^(−(t1−t0)/T)·x(t0) + (K1/T)·∫_{t0}^{t1} e^(−(t2−t1)/T)·e^(−(t1−τ)/T)·u(τ)dτ + (K1/T)·∫_{t1}^{t2} e^(−(t2−τ)/T)·u(τ)dτ
x(t2) = e^(−(t2−t1)/T)·[e^(−(t1−t0)/T)·x(t0) + (K1/T)·∫_{t0}^{t1} e^(−(t1−τ)/T)·u(τ)dτ] + (K1/T)·∫_{t1}^{t2} e^(−(t2−τ)/T)·u(τ)dτ
where the bracket is exactly x(t1), hence
x(t2) = e^(−(t2−t1)/T)·x(t1) + (K1/T)·∫_{t1}^{t2} e^(−(t2−τ)/T)·u(τ)dτ.
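The transition property can also be checked numerically (a sketch, assuming a constant input u(t) ≡ u0, for which the integral in (1.4.22) takes the closed form K1·u0·(1 − e^(−(t−t0)/T)); the values of T, K1, t0, t1, t2 are arbitrary, not from the book):

```python
import math

def phi(t, t0, x0, u0, T=2.0, K1=1.5):
    # i-is-s relation (1.4.22) evaluated for the constant input u(t) = u0
    a = math.exp(-(t - t0) / T)
    return a * x0 + K1 * u0 * (1.0 - a)

t0, t1, t2, x0, u0 = 0.0, 0.7, 1.9, 3.0, 2.0
direct = phi(t2, t0, x0, u0)                    # x(t2) reached in one step
via_t1 = phi(t2, t1, phi(t1, t0, x0, u0), u0)   # through the intermediate state x(t1)
print(abs(direct - via_t1) < 1e-12)  # -> True
```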

4. The causality condition: Because in (1.4.22) u(τ) appears only inside the integral
from t0 to t, x(t) does not depend on u(τ) for any τ > t.
-------------------
Earlier we said, as a general statement, that the input affects the state and
the state influences the output. However, there are systems in which the inputs do
not influence the state or some components of the state vector.
Conversely, there are systems in which the outputs, or some of the outputs, are not
influenced by the state. Such systems are called uncontrollable and unobservable,
respectively; more will be said about them later on.
In Ex. 1.3.4., the black box system, the physical object is state
uncontrollable because no admitted input can make the switch change its
position. If, for example, the wire to the output were broken, then such a system
would be unobservable.
A state that is both uncontrollable and unobservable cannot be detected
by any experiment and has no physical meaning.

1.4.3. Trajectories in State Space.


The input-initial state-state (i-is-s) relation
x(t) = ϕ(t, t 0 , x 0 , u [t 0,t] ) (1.4.23)
which expresses the time-trajectory of the state is an explicit function of time.
If the vector x is an n-dimensional one, there are n time-trajectories, one for
each component,
x i (t) = ϕ i (t, t 0 , x 0 , u [t 0,t] ) , i = 1, ..., n . (1.4.24)
These time-trajectories can be plotted as t increases from t0, with t as an implicit
parameter, in n+1 dimensional space or as n separate plots xi(t), t ≥ t0 , i=1,..,n.
Often this plot can be made by eliminating t from the solutions (1.4.24) of the
state equations, which is just a trajectory in state space.
If we denote xi = xi(t), i = 1,..,n, the i-is-s relation (1.4.24) is written as
x1 = ϕ1(t, t0, x0, u_[t0,t])
xi = ϕi(t, t0, x0, u_[t0,t])    (1.4.25)
xn = ϕn(t, t0, x0, u_[t0,t]),
and eliminating t from the n relations above we determine a trajectory in state
space, implicitly expressed as
F(x1, x2, ..., xn, t0, x(t0)) = 0    (1.4.26)
where a given (known) input was supposed. Simpler expressions are obtained if
the input is constant for any t.
If the state vector components are the output and its first (n−1) time derivatives,
the state space is called phase space and the trajectory in phase space is called
phase trajectory.
The trajectory in state space can be obtained more easily directly from the state
equations, a system of first order differential equations. The plot can be efficiently
exploited for n = 2, in the state plane or phase plane.
For a given initial state (t0, x0), where we have denoted x0 = x(t0), only one
trajectory is obtained. For different initial conditions a family of trajectories is
obtained, called the state portrait or phase portrait.
Because of the uniqueness condition fulfilled by the i-is-s relation,
for a given input u_[t0,t], one and only one trajectory passes through each point in
state space and exists for all finite t ≥ t0. As a consequence, the state
trajectories do not cross one another.

Example 1.4.3. State Trajectories of a Second Order System.

Let us consider a simple second order system,
ẋ1 = λ1·x1 + u  ⇔  ẋ1(t) = λ1·x1(t) + u(t)
ẋ2 = λ2·x2 + u  ⇔  ẋ2(t) = λ2·x2(t) + u(t)    (1.4.27)
For simplicity let u(t) ≡ 0, ∀t ⇔ 0_[t0,t] = {(τ, u(τ)) = 0, ∀τ ∈ [t0, t]}.
Under this hypothesis, the i-is-s relation is obtained by integrating the system (1.4.27),
x1(t) = e^(λ1(t−t0))·x1(t0) = ϕ1(t, t0, x1(t0), 0_[t0,t])
x2(t) = e^(λ2(t−t0))·x2(t0) = ϕ2(t, t0, x2(t0), 0_[t0,t])    (1.4.28)

Supposing that we have λ1 < 0 and λ2 < 0, the time-trajectories of the two
components are plotted as in Fig. 1.4.3.
Eliminating the variable t from (1.4.28) we obtain
x1/x1(t0) = e^(λ1(t−t0));  x2/x2(t0) = e^(λ2(t−t0))  ⇒  [x1/x1(t0)]^(1/λ1) = [x2/x2(t0)]^(1/λ2)  ⇔
x2 = x20·(x1/x10)^(λ2/λ1)  ⇔  F(x1, x2, x0) = 0.    (1.4.29)
The same expression (1.4.29) can be obtained directly from the
differential equations (1.4.27),
dx1/dt = λ1·x1;  dx2/dt = λ2·x2  ⇒  dx2/dx1 = (λ2·x2)/(λ1·x1)  ⇒  x2 = x2(t0)·[x1/x1(t0)]^(λ2/λ1)    (1.4.30)
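The elimination of t in (1.4.29)-(1.4.30) can be verified numerically (a sketch with sample values λ1 = −1, λ2 = −0.5 and an arbitrary initial state, chosen only for illustration):

```python
import math

lam1, lam2 = -1.0, -0.5
x10, x20, t0 = 2.0, 5.0, 0.0

for t in (0.5, 1.0, 3.0):
    x1 = math.exp(lam1 * (t - t0)) * x10            # time-trajectory (1.4.28)
    x2 = math.exp(lam2 * (t - t0)) * x20
    x2_from_x1 = x20 * (x1 / x10) ** (lam2 / lam1)  # state-plane trajectory (1.4.29)
    print(abs(x2 - x2_from_x1) < 1e-12)  # -> True (three times)
```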
Figure no. 1.4.3. (time-trajectories x1(t), x2(t) for λ1 < 0, λ2 < 0, λ1 < λ2; time constants T1 = −1/λ1, T2 = −1/λ2, several initial conditions)
Figure no. 1.4.4. (the corresponding trajectories in the state plane (x1, x2) for the same case λ1 < 0, λ2 < 0, λ1 < λ2)

In Fig. 1.4.5. the state portrait is shown for the case λ1<0, λ2>0.
Figure no. 1.4.5. (time-trajectories and state portrait for λ1 < 0, λ2 > 0: x1(t) decays while x2(t) diverges)
The input-initial state-output relation (i-is-o relation) determines the
output at a time t,
y(t) = η(t, t0, x0, u_[t0,t]).    (1.4.31)
In the case of example Ex. 1.2.1., the relation
y(t) = K2·e^(−(t−t0)/T)·x0 + (K1·K2/T)·∫_{t0}^{t} e^(−(t−τ)/T)·u(τ)dτ
is an i-is-o relation. It does not satisfy the consistency property,
y(t0) = K2·x0 ≠ x0, so it cannot be an i-is-s relation.
For t0 = t, taking into consideration that u_[t,t] = u(t), it becomes
y(t) = g(x(t), u(t), t).    (1.4.32)
This is an algebraical relation and is also called the output relation or
output equation.
By a dynamical system one can understand a set S of three elements
S = {Ω, ϕ, η} or S = {Ω, ϕ, g}    (1.4.33)
which are defined above, where:
Ω − the set of admissible inputs
ϕ − the input-initial state-state relation
η − the input-initial state-output relation
g − the output relation
This is called the explicit form of a dynamical system, expressed by
relations (solutions) or trajectories.
Another form of dynamical system representation is the implicit form, by
state equations,
S = {Ω, f, g}    (1.4.34)
whose solutions are the trajectories expressed by (1.4.33), where
f − the vector function defining a set of equations: differential, difference,
logical, functional.
The solution of the equations defined by f, for a given initial state (t0, x0), is
just the relation ϕ.
For example, the first order system from Ex. 1.2.2. or Ex. 1.2.3. can be
represented as follows.
The explicit form, by relations (functions) or state trajectories, is:
     u ∈ Ω                                                              (Ω)
S:   x(t) = e^(−(t−t0)/T)·x(t0) + (K1/T)·∫_{t0}^{t} e^(−(t−τ)/T)·u(τ)dτ  (ϕ)    (1.4.35)
     y(t) = K2·x(t)                                                     (g)
The implicit form, by state equations, is:
     u ∈ Ω
S:   ẋ = −(1/T)·x + (K1/T)·u,  t ≥ t0,  x(t0) = x0    (1.4.36)
     y = K2·x
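The link between the two forms can be illustrated by a short simulation (a sketch, with arbitrary values T = 1, K1 = K2 = 1 and a constant input u0 = 1, not taken from the book): forward Euler integration of the implicit form approaches the explicit trajectory as the step h decreases.

```python
import math

T, K1, K2 = 1.0, 1.0, 1.0
x0, u0, t_end, h = 0.0, 1.0, 2.0, 1e-4

# implicit form: integrate x' = -(1/T)*x + (K1/T)*u by forward Euler
x = x0
for _ in range(int(t_end / h)):
    x += h * (-(1.0 / T) * x + (K1 / T) * u0)
y_euler = K2 * x

# explicit form: closed-form state trajectory for the constant input u0
x_exact = math.exp(-t_end / T) * x0 + K1 * u0 * (1.0 - math.exp(-t_end / T))
y_exact = K2 * x_exact

print(abs(y_euler - y_exact) < 1e-3)  # -> True
```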
Frequently, the state equations of a dynamical system are composed of:
1. The proper state equation defined by f, whose solution is the state trajectory
x(t), t ≥ t0, expressed by ϕ.
2. The output equation or, more precisely, the output relation g.
Generally, we can say that a system is a dynamical one if the input-output
dependence is not a univocal one, but the univocity can be re-established by the
knowledge of the state (some additional information) at the initial time.
1.5. Examples of Dynamical Systems.


1.5.1. Differential Systems with Lumped Parameters.
These systems are continuous time systems. Both the input u and the
output y are vectors of time functions,
u ∈ Ω,  u − (p × 1) vector,  u = [u1, u2, ... up]^T
y ∈ Γ,  y − (r × 1) vector,  y = [y1, y2, ... yr]^T    (1.5.1)
The input-output (i-o) relation is a set of differential equations:
Fi(y, y^(1), ..., y^(ni), u, ..., u^(mi), t) = 0,  i = 1, ..., r    (1.5.2)
The system dimension or the order of the system is n ≤ Σ_{i=1}^{r} ni.
The standard form of the state equations of such a system is obtained by
transforming the above system of differential equations into n differential
equations of the first order (the Cauchy normal form), which do not contain the
input time derivatives, as in (1.5.3):
     ẋ(t) = f(x(t), u(t), t),  t ≥ t0,  x(t0) = x0
     y(t) = g(x(t), u(t), t)    (1.5.3)
     u ∈ Ω
This is the implicit form of the dynamical system
S = {Ω, f, g} or S = {Ω, f, g, x}
where:
f − is a function expressing a differential equation,
g − expresses an algebraical relation,
x − is an n-vector.
In (1.5.3) the first equation is called the proper state equation and the
second is called the output relation.
Here x(t) = (x1(t), ..., xn(t))^T is just the state of the system.
The number n of the elements of this vector is the dimension of the system
or the system order.
Of course, f and g are vector functions,
f(x(t),u(t),t) = [f1(x(t),u(t),t)  f2(x(t),u(t),t)  . .  fn(x(t),u(t),t)]^T
g(x(t),u(t),t) = [g1(x(t),u(t),t)  g2(x(t),u(t),t)  . .  gr(x(t),u(t),t)]^T.
The dimension of the state vector x has no connection with r, the number of
outputs, or p, the number of inputs.
If the function f(x,u,t) satisfies the Lipschitz conditions with respect to the
variable x, then the solution x(t), x(t0) = x0, exists and is unique for any t ≥ t0.
The system is called a time-invariant or autonomous system if the time
variable t does not explicitly appear in f and g, and its form is:
     ẋ(t) = f(x(t), u(t)),  t ≥ t0,  x(t0) = x0
     y(t) = g(x(t), u(t))    (1.5.4)
     u ∈ Ω
written in short,
     ẋ = f(x, u)
     y = g(x, u)    (1.5.5)
If the functions f and g are linear with respect to x and u, the system is
called a continuous-time linear system.
The state equations are
S:   ẋ(t) = A(t)·x(t) + B(t)·u(t)
     y(t) = C(t)·x(t) + D(t)·u(t)    (1.5.6)
Because the matrices A, B, C, D depend on the time variable t, we call this a
multivariable time-varying system:
multi-input (p inputs)--multi-output (r outputs).
The matrices have the following meaning:
A(t), (n×n) − the system matrix;
B(t), (n×p) − the input (command) matrix;
C(t), (r×n) − the output matrix;
D(t), (r×p) − the auxiliary matrix or
the straight input-output connection matrix.
The state equations of a single variable linear system, or single-input (p = 1),
single-output (r = 1) system, are
S:   ẋ(t) = A(t)·x(t) + b(t)·u(t)
     y(t) = c^T(t)·x(t) + d(t)·u(t)    (1.5.7)
In this case:
u(t) and y(t) are scalars,
the matrix B(t) degenerates to a column vector b(t),
the matrix C(t) degenerates to a row vector c^T(t) and
the matrix D(t) degenerates to a scalar d(t).
If all these matrices do not depend on t (are constant ones), the system
is called a linear time-invariant (dynamical) system (LTIS), having the form
S:   ẋ = Ax + Bu
     y = Cx + Du    (1.5.8)
We observe that in any form of the state equations the input time derivatives
do not appear.
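A minimal simulation sketch of the form (1.5.7)/(1.5.8) (the matrices below are chosen only for illustration; n = 2, single input, single output, and the free response is compared with its analytic expression):

```python
import math

# x' = A*x + b*u,  y = c^T*x + d*u   (constant matrices, so an LTI system)
A = [[0.0, 1.0],
     [-2.0, -3.0]]          # eigenvalues -1 and -2
b = [0.0, 1.0]
c = [1.0, 0.0]
d = 0.0

def f(x, u):
    # f(x, u) = A*x + b*u, written out component by component
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] * u for i in range(2)]

def g(x, u):
    return sum(c[j] * x[j] for j in range(2)) + d * u

# forward Euler integration of the proper state equation, free response (u = 0)
x, h, u = [1.0, 0.0], 1e-3, 0.0
for _ in range(5000):       # 5 seconds
    dx = f(x, u)
    x = [x[i] + h * dx[i] for i in range(2)]

# analytic free response of x1 from x(0) = [1, 0]: 2*exp(-t) - exp(-2t)
print(abs(g(x, u) - (2.0 * math.exp(-5.0) - math.exp(-10.0))) < 1e-3)  # -> True
```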
1.5.2. Time Delay Systems (Dead-Time Systems).

Somewhere in these systems at least one delay operator exists; that means
in the physical systems there are elements in which the information is transmitted
with a finite speed.
The state equations have the form
ẋ(t) = f(x(t), x(t − τ), u(t), t),  t ≥ t0    (1.5.9)
y(t) = g(x(t), x(t − τ), u(t), t)    (1.5.10)
The order of time delay systems is infinite and has nothing to do with the
number of elements of the vector x.
In Ex. 1.4.1. we discussed the pure time delay element as the simplest
dead time system. Let us now consider a more complex example of a time delay
system.

Example 1.5.2.1. Time Delay Electronic Device.

Let us consider an electronic device as depicted in the principle diagram
from Fig. 1.5.1.
The pure time delay element is a device consisting of a tape based record
and play structure. The record-head and the play-head are positioned at a
distance d, so it will take the tape τ seconds to pass from one head to the other.
The device has amplifiers, so we can consider the signals as being voltages
between −10 V and +10 V, for example.
Figure no. 1.5.1. (operational-amplifier circuit with resistors R1, R2, R0, R, capacitor C, amplifier parameters Zi; A; Ze, signals u(t), v(t), w(t), y(t); the pure time delay element feeds back w(t) = y(t−τ))

x(t) = ∫_{t0}^{t} ẋ(τ)dτ + x(t0)    (1.5.11)
ẋ(t) = x(t − τ) − u(t)  − the time delay equation    (1.5.12)
The initial state at the time t0 is x_[t0−τ,t0], even if the variable x in the
differential equation is a scalar one.
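The time delay equation (1.5.12) can be integrated numerically by carrying the whole segment x_[t−τ,t] as a buffer, which is exactly the infinite-dimensional initial state reduced to samples (a sketch; the step, delay, initial function and input below are chosen only for illustration):

```python
# Forward Euler for the time delay equation x'(t) = x(t - tau) - u(t), tau = N*h.
h, N = 0.01, 50                 # tau = 0.5 sec
hist = [1.0] * (N + 1)          # initial state: x(theta) = 1 on [t0 - tau, t0]

def u(t):
    return 0.0                  # zero input: we observe the free evolution

t, x = 0.0, hist[-1]
for _ in range(100):            # simulate 1 sec
    x_delayed = hist[-(N + 1)]  # the value x(t - tau), read from the stored segment
    x = x + h * (x_delayed - u(t))
    hist.append(x)
    t += h

print(x)   # grows from 1.0, since the delayed state feeds back positively
```

Note that the scalar x alone is not enough to restart the simulation: the whole buffer `hist[-(N+1):]` must be saved, confirming that the state is the function segment, not a number.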
1.5.3. Discrete-Time Systems.

The state equations are expressed by difference equations; the input uk, the
output yk and the state xk are sequences of numbers.
The general form of the state equations is:
x_{k+1} = f(xk, uk, k)    (1.5.13)
yk = g(xk, uk, k)    (1.5.14)
where k means the present step and k+1 means the next step.
If f and g are linear with respect to x and u, the state equations are:
x_{k+1} = Ak·xk + Bk·uk    (1.5.15)
yk = Ck·xk + Dk·uk    (1.5.16)
This is a time varying system.
If the matrices A, B, C, D are constant with respect to the variable k, then
the system is a linear discrete time invariant system.
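Iterating the difference equations (1.5.15)-(1.5.16) is direct (a sketch with scalar A, B, C, D values chosen only for illustration):

```python
# x_{k+1} = A*x_k + B*u_k,   y_k = C*x_k + D*u_k   (scalar case for brevity)
A, B, C, D = 0.5, 1.0, 2.0, 0.0

def simulate(x0, inputs):
    x, ys = x0, []
    for u in inputs:               # k = 0, 1, 2, ...
        ys.append(C * x + D * u)   # present output y_k
        x = A * x + B * u          # next state x_{k+1}
    return ys

print(simulate(0.0, [1.0, 1.0, 1.0, 1.0]))  # -> [0.0, 2.0, 3.0, 3.5]
```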

1.5.4. Other Types of Systems.

- Distributed Parameter Systems.
These are described by partial differential equations. They are infinite
dimensional systems.

- Finite State Systems (Logical Systems or Finite Automata).
The functions f and g are logical functions.

- Stochastic Systems.
All the systems above are called deterministic systems (at any time any
variable is well defined). Stochastic systems, by contrast, are described by means of
probability theory and the theory of random functions.
1.6. General Properties of Dynamical Systems.

For any form of dynamical system some general properties can be
established.

1.6.1. Equivalence Property.

We have a system denoted by
S = S(Ω, f, g) = S(Ω, f, g, x)
where x is mentioned just to specify how the state variable will be denoted.
Two states xa, xb ∈ X of the system are equivalent at the time t = t0 (k = k0) if,
starting at the initial time from these states as initial states, the output is the
same when the same input is applied:
ϕ(t, t0, xa, u_[t0,t]) ≡ ϕ(t, t0, xb, u_[t0,t])
η(t, t0, xa, u_[t0,t]) ≡ η(t, t0, xb, u_[t0,t])
If in a system there are equivalent states, then the system is not in a
reduced form, which means that the real dimension of the system is smaller than
that defined by the vector x.
Two systems S and S̄:
S = S(Ω, f, g, x) and S̄ = S(Ω̄, f̄, ḡ, x̄)
are called input-output (I/O) equivalent, and we denote this S ≈ S̄, if for any state
x there exists a state x̄ ∈ X̄ (x̄ = x̄(x)) such that, if the same inputs are applied to
both systems and the initial state is x for the system S and x̄ for the system S̄,
then the outputs are the same.
For linear time invariant differential systems (SLIT), denoted by
S:   ẋ = Ax + Bu,  y = Cx + Du,   S = S(A, B, C, D, x)
S̄:   dx̄/dt = Āx̄ + B̄u,  ȳ = C̄x̄ + D̄u,   S̄ = S(Ā, B̄, C̄, D̄, x̄)
if there exists a square matrix T, det T ≠ 0, so that x̄ = Tx, then S ≈ S̄ and we can
express
Ā = TAT^(−1),  B̄ = TB,  C̄ = CT^(−1),  D̄ = D;
then the above systems are I/O equivalent.
For single-input, single-output (siso) systems, b̄ = Tb and c̄^T = c^T·T^(−1).
Indeed, substituting x = T^(−1)·x̄ into the state equation,
dx̄/dt = T·ẋ = T·(Ax + Bu) = (TAT^(−1))·x̄ + (TB)·u,
y = Cx + Du = (CT^(−1))·x̄ + D·u,
which identifies Ā, B̄, C̄, D̄ as above.
Example 1.6.1. Electrical RLC Circuit.

Let us consider a series RLC circuit controlled by the voltage u, with
current i and voltage drops uR, uL, uC across the resistor, inductor and capacitor
(loop a-a′).
Supposing we are interested in i, uR, uL, uC, the input is u and the output is
the vector
y = [i  uR  uL  uC]^T,  r = 4, p = 1.
We can determine the state equations by applying the Kirchhoff theorems:
uC = (1/C)·q
uR = R·q̇            where  i = q̇,  q̈ = di/dt
uL = L·q̈
uR + uL + uC = u
Some variables, but not all of them, can be eliminated (those from the
algebraical equations only). Choosing
x1 = q,  x2 = q̇,  where x2 = i and q̈ = ẋ2,
we obtain
ẋ1 = x2
ẋ2 = −(1/(LC))·x1 − (R/L)·x2 + (1/L)·u
y1 = x2
y2 = R·x2
y3 = −(1/C)·x1 − R·x2 + u
y4 = (1/C)·x1
These are the state equations in a form where we have chosen as state
components the electrical charge and the current. We can write them in matrix form:
x = [x1  x2]^T,   ẋ = Ax + bu,  y = Cx + du
A = [0, 1; −1/(LC), −R/L],  b = [0; 1/L],
C = [0, 1; 0, R; −1/C, −R; 1/C, 0],  d = [0; 0; 1; 0]
We cannot choose as state variables (state vector components), for example,
x1 = i, x2 = uR, because uR = R·i (there is an algebraical relation
between them), but we can choose as state variables, for example,
x̄1 = uC
x̄2 = uR
for which
dx̄1/dt = (1/(RC))·x̄2
dx̄2/dt = −(R/L)·x̄1 − (R/L)·x̄2 + (R/L)·u
y1 = (1/R)·x̄2
y2 = x̄2
y3 = −x̄1 − x̄2 + u
y4 = x̄1
This is another representation of the same oriented system,
x̄ = [x̄1  x̄2]^T  ⇒  S̄:  dx̄/dt = Āx̄ + b̄u,  y = C̄x̄ + d̄u
Ā = [0, 1/(RC); −R/L, −R/L],  b̄ = [0; R/L],
C̄ = [0, 1/R; 0, 1; −1, −1; 1, 0],  d̄ = [0; 0; 1; 0]
S = S(A, b, C, d, X),  S̄ = S(Ā, b̄, C̄, d̄, X̄);  x̄1 = (1/C)·x1, x̄2 = R·x2  ⇒  x̄ = Tx,
T = [1/C, 0; 0, R],  det(T) = R/C ≠ 0

This means that the two abstract systems are equivalent because for any
initial equivalent states we have the same output. We can pass between the two
systems with the next equations:
 A = TAT −1

 b = Tb

 C = CT −1

 d=d

  0 1   0 
 .  0 1  0  0 R   
 x = Ax + bu    0 
 where A =  1 R  , b =  1  , C =  1 , d=  
 y = Cx + du  − LC − L  L   − C −R   1 
  1   
  C 0   0 

Let (x0, t0) be
x0 = [5 C  10 A]^T, and for a given u_[t0,t] ⇒ y_[t0,t].
In the second representation,
x̄0 = [(1/C)·5  R·10]^T = [1 V  1000 V]^T, if C = 5 F and R = 100 Ω.
We can prove that, for this example, the following relations are true:
Ā = TAT^(−1),  b̄ = Tb,  C̄ = CT^(−1),  d̄ = d.
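These relations can be checked numerically (a sketch with the sample values R = 100 Ω, L = 1 H, C = 5 F; only the 2×2 state matrices A and Ā are compared here, and the other relations check the same way):

```python
R, L, C = 100.0, 1.0, 5.0

A    = [[0.0, 1.0], [-1.0 / (L * C), -R / L]]    # states x1 = q,  x2 = i
Abar = [[0.0, 1.0 / (R * C)], [-R / L, -R / L]]  # states x1 = uC, x2 = uR
T    = [[1.0 / C, 0.0], [0.0, R]]                # x_bar = T*x

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

TAT = mul(mul(T, A), inv2(T))    # T*A*T^-1
print(all(abs(TAT[i][j] - Abar[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # -> True
```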

1.6.2. Decomposition Property.

For any system we can define the so-called forced response, or zero
initial state response, which is the response of the system when the initial state is
zero. This zero state is just the neutral element of the set X organised as a linear
space. We denote the forced response by yf, xf.
We can also define the free response, that is, the system response when the
input is the segment u = 0_[t0,t] ⇔ u(τ) = 0, ∀τ; this zero is the neutral
element of U organised as a linear space. The free response is denoted by yl, xl.
A system S has the decomposition property with respect to the state or the
output if they can be expressed as
y(t)=yl(t)+yf(t)
and
x(t)=xl(t)+xf(t).
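For the first order system (1.4.22) the decomposition can be verified directly (a sketch, assuming a constant input u0 and arbitrary values of T, K1, t and x0; the helper name `state_response` is ours):

```python
import math

def state_response(t, x0, u0, T=1.0, K1=2.0):
    # (1.4.22) with t0 = 0, evaluated for the constant input u(t) = u0
    a = math.exp(-t / T)
    return a * x0 + K1 * u0 * (1.0 - a)

t, x0, u0 = 1.3, 4.0, 0.5
x_full   = state_response(t, x0, u0)    # initial state and input acting together
x_free   = state_response(t, x0, 0.0)   # zero input          -> x_l(t)
x_forced = state_response(t, 0.0, u0)   # zero initial state  -> x_f(t)
print(abs(x_full - (x_free + x_forced)) < 1e-12)  # -> True
```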

1.6.3. Linearity Property.

A system is linear if its response with respect to state and output is a
linear combination of the pair initial state-input. Let
(xa, t0), u^a_[t0,t] → xa(t), ya(t)
(xb, t0), u^b_[t0,t] → xb(t), yb(t).
The system is linear if, for
(x, t0) = (αxa + βxb, t0),  u_[t0,t] = αu^a + βu^b,  ∀α, β ∈ C,
the responses are
x(t) = αxa(t) + βxb(t)
y(t) = αya(t) + βyb(t).
If the system is expressed in an implicit form, by state equations, it is
linear if the two functions involved in these equations are linear with respect to
the two variables x and u.
40
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.6. General Properties of Dynamical Systems.

Example 1.6.2. Example of a Nonlinear System.

The system
y(t) = 2u(t) + 4
(a gain of 2 followed by the addition of the constant 4) is not linear.
We can prove this: for
ya = 2ua + 4,  yb = 2ub + 4
we obtain
y = 2(α·ua + β·ub) + 4 = 2α·ua + 2β·ub + 4 ≠ α·ya + β·yb.
The response of a linear system for zero initial state and zero input is just zero.
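A quick numerical confirmation (a sketch; α = β = 1, ua = 1, ub = 2 chosen only for illustration):

```python
def sys_response(u):
    return 2.0 * u + 4.0          # y = 2u + 4

alpha, beta, ua, ub = 1.0, 1.0, 1.0, 2.0
lhs = sys_response(alpha * ua + beta * ub)                # response to combined input
rhs = alpha * sys_response(ua) + beta * sys_response(ub)  # combination of responses
print(lhs, rhs)  # -> 10.0 14.0  (unequal, so superposition fails)
```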

1.6.4. Time Invariance Property.

A system is time-invariant if its state and output responses do not depend
on the initial time moment, provided these responses are determined by the same
initial state and the same translated input.
A system is time-invariant if in the state equations the time variable t
does not appear explicitly. If a system is time-invariant, the initial time always
appears through the binomial (t − t0). We can express (t − τ) as: t − τ = (t − t0) − (τ − t0).

1.6.5. Controllability Property.

Let S be a dynamical system. A state x0 at the time t0 is controllable to a
state (x1, t1) if there exists an admissible input u_[t0,t1] ⊂ Ω which transfers the
state (x0, t0) to the state (x1, t1). If this property holds for any x0 ∈ X, the
system is called completely controllable. If in addition this property holds
for any finite [t0, t1], the system is called totally controllable.
There are many criteria for expressing this property.

1.6.6. Observability Property.

We say that the state x0 at the moment t0 is observable at a time
moment t1 ≥ t0 if this state can be uniquely determined knowing the input
u_[t0,t1] and the output y_[t0,t1].
If this property holds for any x0 ∈ X, the system is called completely
observable. In addition, if this property holds for any finite [t0, t1], the
system is called totally observable.

1.6.7. Stability Property.

This is one of the most important general properties of a dynamical system.
It will be studied in detail later on.
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).

The Linear Time-Invariant Differential Systems, LTI for short, are the most studied in systems theory, mainly because the mathematical tools for them are the easiest to apply.
Even if the majority of systems encountered in nature are nonlinear, their behaviour can be studied rather well by linear systems under some particular conditions, such as description around an operating point or for small disturbances. In addition, LTI systems benefit from the Laplace transform, which translates the complicated operations with differential equations into simple algebraic operations in the complex domain.
However, some properties like controllability, observability and optimality are better studied in the time domain, by state equations, even for LTI systems.
Linear Time-Invariant Differential Systems mainly means systems described by ordinary linear differential equations with constant coefficients. Sometimes they are also called continuous-time systems.
Two categories of LTI systems will be distinguished:
SISO : Single Input Single Output LTI;
MIMO : Multi Input Multi Output LTI.

2.1. Input-Output Description of SISO LTI.


Let us suppose that we have a time-invariant system with one input and one output.
The abstract system is expressed by an input-output relation (IOR) as a differential equation, or by state equations.
Which of the two forms we obtain depends on the physical aspect, on our knowledge of the physical system, and on our qualification and ability to derive the mathematical equations.
The block diagram of such a system is depicted in Fig. 2.1.1: a single block H(s), with the input u(t) (Laplace transform U(s)) and the output y(t) (Laplace transform Y(s)).

Figure no. 2.1.1.


Let us consider the input u ∈ Ω, where Ω is the set of scalar functions which admit a Laplace transform. The output is a function y ∈ Γ, where the set Γ will be, for these systems, a set of functions which admit a Laplace transform too.
The input-output relation (IOR) is expressed by an ordinary linear differential equation with constant coefficients of order n,
    Σ_{k=0}^{n} a_k y^(k)(t) = Σ_{k=0}^{m} b_k u^(k)(t) ,  a_n ≠ 0          (2.1.1)

where we understand y^(k)(t) = d^k y(t)/dt^k and u^(k)(t) = d^k u(t)/dt^k.

The maximum order of the output derivative will determine the order of
the system.
Three types of systems can be distinguished depending on the ratio
between m and n:
1. m<n : the system is called strictly proper (causal) system.
2. m=n : the system is called proper system.
3. m>n : the system is called improper system.
The improper systems are not physically realisable. They could represent an ideally desired mathematical behaviour of some physical objects, but one impossible to obtain in the real world.
For example, the system described by the IOR y(t) = du(t)/dt, that means n = 0, m = 1, a_0 = 1, b_0 = 0, b_1 = 1, represents a derivative element. It cannot process inputs from a set Ω of admissible inputs which may contain functions with discontinuities or functions that are not differentiable.
A rigorous mathematical treatment could be performed by using distribution theory, but those results are not essential for our object of study. Because of that, our attention will concentrate on the strictly proper and proper systems only.
The IOR (2.1.1) will be written as

    Σ_{k=0}^{n} a_k y^(k)(t) = Σ_{k=0}^{n} b_k u^(k)(t) ,  a_n ≠ 0          (2.1.2)

If it is mentioned that b_n = 0, this means the system is a strictly proper one. Also, if m < n, we can consider that b_n = b_{n−1} = ... = b_{m+1} = 0.
The input-output relation (IOR) can be very easily expressed in the complex domain by applying the Laplace transform to the relation (2.1.2).
As we remember,

    L{y^(k)(t)} = s^k Y(s) − Σ_{i=0}^{k−1} y^(k−i−1)(0+) s^i ,  k ≥ 1        (2.1.3)

    L{u^(k)(t)} = s^k U(s) − Σ_{i=0}^{k−1} u^(k−i−1)(0+) s^i ,  k ≥ 1        (2.1.4)

where we have denoted by

    Y(s) = L{y(t)} ,  U(s) = L{u(t)}                                         (2.1.5)

the Laplace transforms of the output and of the input respectively. For the moment the convergence abscissas are not mentioned.
In (2.1.3), (2.1.4) the initial conditions are defined as right-side limits y^(k−i−1)(0+), u^(k−i−1)(0+). For simplicity we shall denote them by y^(k−i−1)(0), u^(k−i−1)(0).


One obtains,

    (Σ_{k=0}^{n} a_k s^k)·Y(s) − Σ_{k=1}^{n} a_k [Σ_{i=0}^{k−1} y^(k−i−1)(0) s^i] = (Σ_{k=0}^{n} b_k s^k)·U(s) − Σ_{k=1}^{n} b_k [Σ_{i=0}^{k−1} u^(k−i−1)(0) s^i]

from where Y(s) can be extracted as

    [Σ_{k=0}^{n} a_k s^k]·Y(s) = [Σ_{k=0}^{n} b_k s^k]·U(s) + I(s)

    Y(s) = (M(s)/L(s))·U(s) + I(s)/L(s)                                      (2.1.6)

the first term being Y_f(s) and the second Y_l(s), where,

    M(s) = Σ_{k=0}^{n} b_k s^k = b_n s^n + b_{n−1} s^{n−1} + ... + b_1 s + b_0        (2.1.7)

    L(s) = Σ_{k=0}^{n} a_k s^k = a_n s^n + a_{n−1} s^{n−1} + ... + a_1 s + a_0        (2.1.8)

    I(s) = Σ_{k=1}^{n} a_k [Σ_{i=0}^{k−1} y^(k−i−1)(0) s^i] − Σ_{k=1}^{n} b_k [Σ_{i=0}^{k−1} u^(k−i−1)(0) s^i]      (2.1.9)
As we can observe from (2.1.6), the output appears as the sum of two components, called the forced response y_f(t) and the free response y_l(t), so that, in the complex domain, the system response is,

    Y(s) = Y_f(s) + Y_l(s)                                                   (2.1.10)
where,

    Y_f(s) = (M(s)/L(s))·U(s) = H(s)U(s)                                     (2.1.11)

is the Laplace transform of the forced response, which depends on the input only, and

    Y_l(s) = I(s)/L(s)                                                       (2.1.12)

is the Laplace transform of the free response, which depends on the initial conditions only. If the initial conditions are zero, then I(s) = 0 and Y(s) = Y_f(s).
If the input u(t) ≡ 0, ∀t ≥ 0, then U(s) = 0 and Y(s) = Y_l(s). These facts express the decomposition property.
Any linear system has the decomposition property.
The forced response Y_f(s) expresses the input-output behaviour (i-o response), which depends neither on the system state (because it is supposed to be zero) nor on how the system's internal description is organised (how the system state is defined).
The free response Y_l(s) expresses the initial state-output behaviour (is-o response), which does not depend on the input (because it is supposed to be zero) but does depend on how the system's internal description is organised (how the system state is defined).
We can now define the very important notion of the transfer function (TF).
The transfer function of a system, denoted by H(s), is the ratio between the Laplace transform of the output and the Laplace transform of the input which determined that output under zero initial conditions (z.i.c.), provided this ratio is the same for any input:

    H(s) = Y(s)/U(s) |_z.i.c. ,  the same for ∀U(s)                          (2.1.13)

In the case of the SISO LTI, as we can observe from (2.1.6) ... (2.1.11), the transfer function is

    H(s) = M(s)/L(s) = (b_n s^n + b_{n−1} s^{n−1} + ... + b_1 s + b_0) / (a_n s^n + a_{n−1} s^{n−1} + ... + a_1 s + a_0)     (2.1.14)

always a ratio of two polynomials (a rational function).
Sometimes we denote this SISO LTI system as

    S = TF{M, L} ⇔ TF                                                        (2.1.15)
There are systems to which a transfer function can be defined but it is not
a rational function, as for example systems with time delay or systems defined by
partial differential equations.
If the polynomials L(s) and M(s) have no common factor (they are coprime), their ratio expresses the so-called nominal transfer function (NTF).
The transfer function expresses only the input-output (i-o) behaviour of a system, which is just the forced response, that is, the system response under zero initial conditions.
The same input-output behaviour can be assured by a family of transfer functions. If the numerator and the denominator have a common factor, as in

    M(s) = M'(s)P(s) ,  L(s) = L'(s)P(s) ,                                   (2.1.16)

then

    H(s) = M'(s)P(s) / (L'(s)P(s))  ⇒  H(s) = M'(s)/L'(s)                    (2.1.17)
If the two polynomials M'(s) and L'(s) are coprime,

    gcd{M'(s), L'(s)} = 1 ,

then the last expression of H(s) is the reduced form of the transfer function (RTF). In such a case, when M(s) and L(s) have common factors, some properties such as controllability and/or observability are not satisfied.
The order of a system is expressed by the degree of the denominator
polynomial of the transfer function that is n=degree{L(s)}.
It results that
degree{L'(s)}=n'<n=degree{L(s)}, (2.1.18)
so systems can have different orders for their internal description but all of them
will have the same forced response.
If common factors appear in a transfer function and are cancelled, then the i-o behaviour (the forced response) can be described by abstract systems of lower order, the order being the degree of the transfer function denominator after simplification; however, the is-o behaviour (the free response) still remains of the order the transfer function had before the simplification.
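The cancellation of common factors can be carried out symbolically; a minimal sketch with sympy, using polynomials assumed only for illustration:

```python
import sympy as sp

s = sp.symbols('s')
M = s**2 + 4*s + 3            # = (s + 1)(s + 3)
L = s**2 + 7*s + 12           # = (s + 4)(s + 3), common factor (s + 3)

H = sp.cancel(M / L)          # reduced transfer function M'(s)/L'(s)
n_full = sp.degree(L, s)      # order suggested by L(s) before simplification
n_red = sp.degree(sp.denom(H), s)   # order after the simplification

print(H, n_full, n_red)       # (s + 1)/(s + 4) 2 1
```

The i-o behaviour is described by the reduced, first-order form, while the internal description keeps the original order.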


Example 2.1.1. Proper System Described by Differential Equation.

Let us consider a proper system with n = 2, m = 2, described by the differential equation,

    y^(2)(t) + 7y^(1)(t) + 12y(t) = u^(2)(t) + 4u^(1)(t) + 3u(t)             (2.1.19)
whose Laplace transform in z.i.c. is

    s^2 Y(s) + 7sY(s) + 12Y(s) = s^2 U(s) + 4sU(s) + 3U(s) ,

from where the TF is

    H(s) = Y(s)/U(s) |_z.i.c. = (s^2 + 4s + 3)/(s^2 + 7s + 12) = M(s)/L(s)  ⇒  n = 2

so we can consider the system

    S = TF{M, L} = TF{s^2 + 4s + 3, s^2 + 7s + 12} = TF{(s + 1)(s + 3), (s + 4)(s + 3)}
But we observe that,

    H(s) = (s + 1)(s + 3) / ((s + 4)(s + 3)) = (s + 1)/(s + 4) = M'(s)/L'(s) ,  ⇒  n' = 1 (order = 1)     (2.1.20)

which is the NTF.
The transfer function is related to the forced response, Y_f(s) = H(s)U(s):

    Y_f(s) = ((s + 1)(s + 3) / ((s + 4)(s + 3)))·U(s)  ⇒  Y_f(s) = ((s + 1)/(s + 4))·U(s)
The i-o behaviour is of the first order even if the differential equation
(2.1.19) is of the second order. Of course, the solution of the system expressed
by the differential equation (2.1.19) depends on two initial conditions.
From this point of view the system is of the second order.
The internal behaviour of the system is represented by two state variables.
If this differential equation is represented by state equations it will be of the
second order.
However, the time domain equivalent expression of the NTF,

    H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s) = Y(s)/U(s) |_z.i.c. ,

is a first order differential equation,

    y^(1)(t) + 4y(t) = u^(1)(t) + u(t) ,

which describes only a part of the system given by (2.1.19).

Let us now consider another system S' described by the differential equation,

    y^(1)(t) + 4y(t) = u^(1)(t) + u(t) ,                                     (2.1.21)

whose Laplace transform in z.i.c. is

    sY(s) + 4Y(s) = sU(s) + U(s) ,

and its transfer function is

    H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s) ,  n' = 1 (order = 1) .

This system can be denoted as

    S' = TF{M', L'} = TF{(s + 1), (s + 4)} ,

which is a first order one; its general solution depends on only one initial condition.
The forced response of S' is,

    Y'_f(s) = ((s + 1)/(s + 4))·U(s) ,
identical with the forced response of S.
We can say that the system S' expresses only some aspects of the internal behaviour of S, that is, only the modes (the movements) which depend on the input and which affect the output.
This is the so-called completely controllable and completely observable part of the system S.
If M(s) and L(s) are coprime, the internal behaviour is completely revealed by the i-o behaviour of the system, that is, by the transfer function.
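The input-output equivalence of S and S' can be checked numerically: their step responses from zero initial conditions coincide. A sketch with scipy:

```python
import numpy as np
from scipy import signal

t = np.linspace(0.0, 10.0, 500)

# H(s) = (s^2 + 4s + 3)/(s^2 + 7s + 12) and its reduced form H'(s) = (s + 1)/(s + 4)
_, y_full = signal.step(([1.0, 4.0, 3.0], [1.0, 7.0, 12.0]), T=t)
_, y_red = signal.step(([1.0, 1.0], [1.0, 4.0]), T=t)

print(np.max(np.abs(y_full - y_red)))   # practically zero: same forced response
```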


2.2. State Space Description of SISO LTI.

Sometimes the abstract system attached to an oriented system, as depicted in Fig. 2.1.1, can be directly determined, by using different methods, in the form of the state equations (SS),

    dx/dt = Ax + bu                                                          (2.2.1)

    y = c^T x + du                                                           (2.2.2)
where the matrices and vectors dimensions and names are:
x , (n×1) : the state column vector, x = [x_1, x_2, ..., x_n]^T
A , (n×n) : the system matrix
b , (n×1) : the command column vector
c , (n×1) : the output column vector
d , (1×1) : the coefficient of the direct input-output connection.
The relation (2.2.1) is called the proper state equation and the relation
(2.2.2) is called the output equation (relation).
In the previously analysed RLC-circuit example, we obtained the state equations directly, expressing all the mathematical relations between variables in Cauchy normal form (a system of first order differential equations).
Such a system form can be denoted as
S = S{A, b, c, d, x } (2.2.3)
where we understand that it is about the relations (2.2.1) and (2.2.2). Also the
variables x, u, y have to be understood as time functions x(t), u(t), y(t) which
admit Laplace transform.
In (2.2.3) we also mentioned the letter x just to show how the state vector is denoted. Because a system of the form (2.2.1) and (2.2.2) is completely described by the matrices A, b, c, d only, the system S from (2.2.3) is sometimes denoted as,

    S = SS{A, b, c, d} ⇔ SS{S} .                                             (2.2.4)
The abstract system (2.2.3) can also be obtained not as a procedure of
mathematical modelling of a physical oriented system but as a result of a
synthesis procedure.
Such a form, (2.2.1) and (2.2.2) or (2.2.3), expresses everything about the
system behaviour : the internal and external behaviour.
If a system of order n is given, described by state equations (SS) as (2.2.1) and (2.2.2) or (2.2.3), then a unique transfer function (TF) H(s) can be attached to it. This TF is

    H(s) = c^T [sI − A]^(−1) b + d = M(s)/L(s) ,                             (2.2.5)

which can be obtained by applying the Laplace transform to (2.2.1) and (2.2.2) under z.i.c. The degree of L(s) is n.
If d ≠ 0, then dg{M(s)} = dg{L(s)} = n : the system is proper.
If d = 0, then dg{M(s)} < dg{L(s)} = n : the system is strictly proper.
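The formula (2.2.5) can be evaluated symbolically. A sketch with sympy, using one assumed state realisation (the controllable canonical form) of the transfer function from Example 2.1.1:

```python
import sympy as sp

s = sp.symbols('s')
# An assumed controllable-canonical realisation of
# H(s) = (s^2 + 4s + 3)/(s^2 + 7s + 12):
A = sp.Matrix([[0, 1], [-12, -7]])
b = sp.Matrix([0, 1])
c = sp.Matrix([-9, -3])       # c_i = b_i - a_i*d for this canonical form
d = sp.Integer(1)             # d = b_n/a_n is nonzero: the system is proper

H = sp.cancel((c.T * (s * sp.eye(2) - A).inv() * b)[0, 0] + d)
print(H)   # (s**2 + 4*s + 3)/(s**2 + 7*s + 12)
```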

If a system is first described by a differential equation like (2.1.2), or by a transfer function (TF) like (2.1.14), of order n, then an infinite number of systems described by state equations (SS) as (2.2.1) and (2.2.2) or (2.2.3) can be attached to it.
However, the TF (2.1.14) depends on at most 2n + 2 coefficients, of which, because of the ratio, only 2n + 1 are essential.
The state equations (SS) as (2.2.1) and (2.2.2) or (2.2.3) depend, through A, b, c, d, on n·n + n + n + 1 = n^2 + 2n + 1 > 2n + 2 coefficients.
The determination of the SS from the TF, which is called the realisation of the transfer function by state equations, means determining n^2 + 2n + 1 unknown variables from (n + 1) + (n + 1) equations obtained from the two identities in s,

    nom{c^T [sI − A]^(−1) b + d} ≡ M(s)  and  den{c^T [sI − A]^(−1) b + d} ≡ L(s) ,      (2.2.6)

where "nom{ }" means "the numerator of { }" and "den{ }" means "the denominator of { }".
We shall denote this process by
TF{S} → SS{S} , ⇔ TF{M, L} → SS{A, b, c, d} (2.2.7)
If the TF H(s) of the system S is a reducible one, that means,

    H(s) = M(s)/L(s) = (M'(s)·P(s)) / (L'(s)·P(s)) ,                         (2.2.8)

then the expression obtained after simplification,

    H'(s) = M'(s)/L'(s) ,  dg{L'(s)} = n' < n = dg{L(s)} ,                   (2.2.9)

can be interpreted as the transfer function of another system S' of order n', for which one of its state realisations is,

    S' = SS{A', b', c', d', x'}                                              (2.2.10)

that means,

    TF{S'} → SS{S'} ,  ⇔  TF{M', L'} → SS{A', b', c', d', x'}                (2.2.11)
The systems S and S' are input-output equivalent. They determine the same forced response, but not the same free response.
If H'(s) is itself a nominal transfer function (NTF), that means M'(s) and L'(s) are coprime, then the system

    S' = SS{A', b', c', d', x'}                                              (2.2.12)

represents the so-called input-output equivalent minimal realisation, in short the minimal realisation, of the system

    S = TF{M, L}                                                             (2.2.13)

and we denote this by

    S --m.r.--> S' ,  ⇔  TF{M, L} --m.r.--> SS{A', b', c', d', x'}           (2.2.14)
The transitions from one form of realisation to another, mentioning the equivalence and uniqueness these realisations have, are presented in the diagram in Fig. 2.2.1.
There are methods, which will be studied later, for obtaining different SS forms from a given TF.
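One way to see that infinitely many state realisations share the same TF is to apply a similarity transform; a numerical sketch with scipy, where the transform matrix T is chosen arbitrarily:

```python
import numpy as np
from scipy import signal

# One state realisation of H(s) = (s^2 + 4s + 3)/(s^2 + 7s + 12) ...
A1, B1, C1, D1 = signal.tf2ss([1.0, 4.0, 3.0], [1.0, 7.0, 12.0])

# ... and another one, obtained with an arbitrarily chosen similarity transform T.
T = np.array([[1.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
A2, B2, C2, D2 = Ti @ A1 @ T, Ti @ B1, C1 @ T, D1

num1, den1 = signal.ss2tf(A1, B1, C1, D1)
num2, den2 = signal.ss2tf(A2, B2, C2, D2)
print(num1, den1)   # both realisations give back the same transfer function
```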


Some forms of SS have specific advantages that are useful for different applications.

Figure no. 2.2.1. (Diagram: the SS representation S = SS(A, b, c, d, x) of order n is mapped, by the univocal transform SS → TF, to the TF representation H(s) = M(s)/L(s) with dg{L(s)} = n, which is reduced, by simplifying the common factors, to H'(s) = M'(s)/L'(s) with dg{L'(s)} = n' < n. The transform TF → SS yields one state realisation among many; different realisations of order n, e.g. S1 = SS(A1, b1, c1, d1, x1), are related by similarity transforms and are input-output and input-state-output equivalent. The reduced TF yields, by the univocal transform TF → SS, state realisations S' = SS(A', b', c', d', x') of order n' < n, which are input-output equivalent only.)
In Fig. 2.2.2 an example of two such systems is presented: the step responses of

    H(s) = (100s^2 + 2s + 1) / (10000s^3 + 300s^2 + 102s + 1) = 0.01(s^2 + 0.02s + 0.01) / ((s + 0.01)(s^2 + 0.02s + 0.01))

and of its reduced form

    H'(s) = 1 / (100s + 1)

coincide, while the free responses of state realisations of H(s) and of H'(s), started from nonzero initial states, differ.

Figure no. 2.2.2.

2.3. Input-Output Description of MIMO LTI.

As was mentioned in Chap. 2.1, MIMO LTI means Multi Input-Multi Output Linear Time Invariant Systems, in short "multivariable systems".
They are systems with several inputs and several outputs described by a
system of linear constant coefficients ordinary differential equations.
Such a system denoted by S, with p-inputs u 1 , ... , u p and r-outputs
y 1 , ... , y r , is represented in a block diagram having the input and output
components explicitly represented or considering one input, the p-column vector
u and one output, the r-column vector y,
u = [u 1 , ... , u p ] T , y = [y 1 , ... , y r ] T , (2.3.1)
as depicted in Fig. 2.3.1: on the left, the system S with the individual input components u_1, u_2, ..., u_p and output components y_1, y_2, ..., y_r; on the right, the same system S with the single (p×1) input vector u and the single (r×1) output vector y.

Figure no. 2.3.1.


Usually the vectors u, y, x are denoted simply by u, y, x, without bold-face fonts, if no confusion can arise.
The input-output relation (IOR) is expressed by a system of r linear constant-coefficient ordinary differential equations, of the form,

    Σ_{i=1}^{r} [ Σ_{k=0}^{n_{j,i}} a_{k,i}^{j} · y_i^(k)(t) ] = Σ_{i=1}^{p} [ Σ_{k=0}^{m_{j,i}} b_{k,i}^{j} · u_i^(k)(t) ] ,  j = 1, ..., r      (2.3.2)

This system of differential equations can be expressed in the complex domain by applying the Laplace transform to (2.3.2). For an easier understanding we shall consider here zero initial conditions (z.i.c.):

    Σ_{i=1}^{r} [ Σ_{k=0}^{n_{j,i}} a_{k,i}^{j} s^k ] Y_i(s) = Σ_{i=1}^{p} [ Σ_{k=0}^{m_{j,i}} b_{k,i}^{j} s^k ] U_i(s)  ⇒

    Σ_{i=1}^{r} L_{j,i}(s) Y_i(s) = Σ_{i=1}^{p} M_{j,i}(s) U_i(s) ,  j = 1, ..., r      (2.3.3)
We can define the vectors:

    Y(s) = (Y_1(s), ..., Y_r(s))^T                                           (2.3.4)

    U(s) = (U_1(s), ..., U_p(s))^T                                           (2.3.5)

so that

    L(s)Y(s) = M(s)U(s)                                                      (2.3.6)

where:

    L(s) = { L_{j,i}(s) } , 1 ≤ j ≤ r , 1 ≤ i ≤ r ,  an (r×r) matrix         (2.3.7)

    M(s) = { M_{j,i}(s) } , 1 ≤ j ≤ r , 1 ≤ i ≤ p ,  an (r×p) matrix         (2.3.8)


L(s) and M(s) are called matrices of polynomials: any component of each matrix is a polynomial. The i-o behaviour of a multivariable system in z.i.c. is expressed by
Y(s)=H(s)U(s) (2.3.9)
where
H(s)=L-1(s)M(s) (2.3.10)
is called the Transfer Matrix, (TM).
    H(s) = [ H_{11}(s) ... H_{1i}(s) ... H_{1p}(s) ]
           [ .......................................]
           [ H_{j1}(s) ... H_{ji}(s) ... H_{jp}(s)  ]
           [ .......................................]
           [ H_{r1}(s) ... H_{ri}(s) ... H_{rp}(s)  ]                        (2.3.11)
The j-th component of the output is given by,

    Y_j(s) = Σ_{i=1}^{p} H_{ji}(s) U_i(s)                                    (2.3.12)
H(s) is a rational matrix: each of its components is a rational function. Any component of this rational matrix, for example H_{ji}, can be interpreted as a transfer function between the input U_i and the output Y_j, that means:

    H_{ji}(s) = Y_j(s)/U_i(s) ,  under zero initial conditions and U_k(s) ≡ 0 ∀s if k ≠ i      (2.3.13)

This rational matrix is an operator if it is the same for any expression of U(s); it does not depend on the input.

Example.
Suppose we have a system with 2 inputs and 2 outputs (r = 2, p = 2) described by a system of 2 differential equations,

    2y_1^(4) + 3y_1 + 6y_2^(1) + 3y_2 = 3u_1^(3) + u_1^(1) + 5u_2                          (j = 1)
    3y_1^(2) + 2y_1^(1) + 5y_1 + 8y_2 = 2u_1^(1) + 5u_1 + 3u_2^(2) + 4u_2^(1) + 5u_2       (j = 2)

whose Laplace transform in z.i.c. is,

    (2s^4 + 3)Y_1(s) + (6s + 3)Y_2(s) = (3s^3 + s)U_1(s) + 5U_2(s)
    (3s^2 + 2s + 5)Y_1(s) + 8Y_2(s) = (2s + 5)U_1(s) + (3s^2 + 4s + 5)U_2(s)

so that

    L(s) = [ 2s^4 + 3         6s + 3 ]        M(s) = [ 3s^3 + s    5             ]
           [ 3s^2 + 2s + 5    8      ]  ,            [ 2s + 5      3s^2 + 4s + 5 ]

and the TM is

    H(s) = L^(−1)(s)·M(s)
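The transfer matrix of this example can be computed symbolically; a sketch with sympy:

```python
import sympy as sp

s = sp.symbols('s')
L = sp.Matrix([[2*s**4 + 3, 6*s + 3],
               [3*s**2 + 2*s + 5, 8]])
M = sp.Matrix([[3*s**3 + s, 5],
               [2*s + 5, 3*s**2 + 4*s + 5]])

H = (L.inv() * M).applyfunc(sp.cancel)   # transfer matrix H(s) = L^-1(s) M(s)
print(H[0, 0])                           # H11(s): transfer from U1 to Y1
```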


For a system, we can directly obtain, by manipulating the mathematical relations between the system variables, the state equations (SS),

    dx/dt = Ax + Bu                                                          (2.3.14)

    y = Cx + Du                                                              (2.3.15)
where the matrices and vectors dimensions and name are:
x , (n×1) :the state column vector,
A , (n×n), :the system matrix
B , (n×p), :the command matrix
C , (r×n), :the output matrix
D , (r×p) :the matrix of the direct input-output connection.
The order of the system is just n, the dimension of the state vector x.
The system S is denoted as,
S = SS(A, B, C, D) . (2.3.16)
If the state equations (SS) are given, then the transfer matrix (TM) is uniquely determined by the relation,

    H(s) = C[sI − A]^(−1) B + D                                              (2.3.17)

This relation expresses the transform SS → TM.
The opposite transform, TM → SS, the state realisation of a transfer matrix, is possible but much more difficult.
Note that the order n of the system has nothing to do with the number of inputs p and the number of outputs r; it reflects the internal structure of the system.
A SISO LTI system is a particular case of a MIMO LTI system. If the
system has one input ( p=1 ) and one output ( r=1) the matrices will be denoted
as:
A → A
B → b
C → cT (2.3.18)
D → d
so a SISO as a particular case of (2.3.16) is
S = SS(A, B, C, D) = SS(A, b, c T , d) . (2.3.19)


2.4. Response of Linear Time Invariant Systems.

2.4.1. Expression of the State Vector and Output Vector in s-domain.


Let us consider an LTI system

    S = SS(A, B, C, D, x) ,  u ∈ Ω                                           (2.4.1)

where Ω is given and the state is the column vector x = [x_1, ..., x_n]^T.
The state equations are

    dx/dt = Ax + Bu                                                          (2.4.2)

    y = Cx + Du                                                              (2.4.3)

The order n of the system is independent of the number p of inputs and of the number r of outputs.
The system behaviour can be expressed in the s-domain by taking the Laplace transform of the state x(t). We know that the Laplace transform of a vector (matrix) is the vector (matrix) of the Laplace transforms,

    L{x(t)} = X(s) = [X_1(s), ..., X_n(s)]^T ,  L{x_i(t)} = X_i(s)           (2.4.4)

and,

    L{dx(t)/dt} = sX(s) − x(0) ,  where x(0) = lim_{t→0, t>0} x(t) .         (2.4.5)
By applying this to (2.4.2) and denoting
L{u(t)} = U(s) = [U 1 (s), ... , U p (s)] T
the Laplace transform of the input vector, we obtain
sX(s) − x(0) = AX(s) + BU(s) ⇒ (sI − A)X(s) = x(0) + BU(s) ⇒
X(s) = (sI − A) −1 x(0) + (sI − A) −1 BU(s) (2.4.6)
If we denote
Φ (s) = (sI − A) −1 , (2.4.7)
then the expression of the state vector in s-domain is,
X(s) = Φ (s)x(0) + Φ (s)BU(s) . (2.4.8)
The expression Φ(s) is the Laplace transform of the so-called transition matrix Φ(t), or state transition matrix, where

    Φ(s) = (sI − A)^(−1) = L{Φ(t)} ;  Φ(t) = e^(At) .                        (2.4.9)
From (2.4.8) we observe that the state response has two components:
- State free response X l (s)
X l (s) = Φ (s)x(0) (2.4.10)
- State forced response X f (s)
X f (s) = Φ (s)BU(s) (2.4.11)
which respects the decomposition property,

    X(s) = X_l(s) + X_f(s) .                                                 (2.4.12)
The complex s-domain expression of the output can be obtained by
substituting (2.4.8) in the Laplace transform of (2.4.3),
Y(s) = CX(s) + DU(s) (2.4.13)


    Y(s) = CΦ(s)x(0) + [CΦ(s)B + D]U(s)                                      (2.4.14)

Also the output is the sum of two components:

    Y(s) = Y_l(s) + Y_f(s)                                                   (2.4.15)

- the output free response,

    Y_l(s) = CΦ(s)x(0) = Ψ(s)x(0)                                            (2.4.16)

- the output forced response,

    Y_f(s) = [CΦ(s)B + D]U(s) = H(s)U(s)                                     (2.4.17)
which reveals the decomposition property.
We denoted by

    Ψ(s) = CΦ(s)                                                             (2.4.18)

the so-called matrix of base functions in the complex domain, and by

    H(s) = CΦ(s)B + D                                                        (2.4.19)

the transfer matrix of the system. This transfer matrix is univocally determined from the state equations.
For a SISO system the transfer function is,

    H(s) = c^T Φ(s)b + d                                                     (2.4.20)

2.4.2. Time Response of LTI from Zero Time Moment.


This response, from the zero initial time moment t_0 = 0, can be easily determined by using the real convolution theorem of the Laplace transform,

    L^(−1){F_1(s)·F_2(s)} = ∫_0^t f_1(t − τ)f_2(τ) dτ                        (2.4.21)

where F_1(s) = L{f_1(t)}, F_2(s) = L{f_2(t)}.
The relation (2.4.8) can be interpreted as

    X(s) = Φ(s)x(0) + Φ(s)BU(s) = Φ(s)x(0) + F_1(s)F_2(s) ,  with Φ(t) = L^(−1){Φ(s)} = L^(−1){(sI − A)^(−1)} ,

so, by applying the real convolution theorem, we obtain

    x(t) = Φ(t)x(0) + ∫_0^t Φ(t − τ)Bu(τ) dτ ,                               (2.4.22)

which is the state response of the system; substituting it in (2.4.3) gives

    y(t) = CΦ(t)x(0) + ∫_0^t CΦ(t − τ)Bu(τ) dτ + Du(t) ,                     (2.4.23)

the output response, both from the initial time moment t_0 = 0.
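The decomposition into free and forced responses implied by (2.4.22) and (2.4.23) can be verified numerically; a sketch with scipy, where the system matrices and the initial state are assumed only for illustration:

```python
import numpy as np
from scipy import signal
from scipy.linalg import expm

# Hypothetical SISO system, chosen only for illustration.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([1.0, -1.0])

t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)                               # unit step input

sys = signal.StateSpace(A, B, C, D)
_, y_full, _ = signal.lsim(sys, U=u, T=t, X0=x0)  # full response
_, y_forced, _ = signal.lsim(sys, U=u, T=t)       # forced response (zero state)
y_free = np.array([(C @ expm(A * ti) @ x0)[0] for ti in t])  # C*Phi(t)*x0

print(np.max(np.abs(y_full - (y_free + y_forced))))   # practically zero
```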


2.4.3. Properties of Transition Matrix.


Let A be a (n x n) matrix.
1. The transition matrix Laplace transform.
The transition matrix, defined as

    Φ(t) = e^(At) ,  ∀t                                                      (2.4.24)

has the Laplace transform

    L{Φ(t)} = L{e^(At)} = Φ(s) = (sI − A)^(−1) .                             (2.4.25)

Proof:
Because the power series

    φ(x) = Σ_{k=0}^{∞} x^k t^k / k! = e^(xt)

is uniformly convergent, the matrix function

    Φ(t) = φ(x)|_{x→A} = Σ_{k=0}^{∞} A^k t^k / k! = e^(At)

has the Laplace transform

    Φ(s) = L{e^(At)} = L{ Σ_{k=0}^{∞} A^k t^k / k! } = s^(−1) I + s^(−2) A + s^(−3) A^2 + ... = Σ_{k=0}^{∞} A^k s^(−(k+1)) ,

where we applied the formula,

    L{ t^k / k! } = 1/s^(k+1) = s^(−(k+1)) .

Because of the identity

    (sI − A)Φ(s) = (sI − A)(s^(−1) I + s^(−2) A + s^(−3) A^2 + ...) = I + s^(−1)A − s^(−1)A + s^(−2)A^2 − s^(−2)A^2 + ... = I

we have (sI − A)Φ(s) = I and Φ(s)(sI − A) = I, so that Φ(s) = (sI − A)^(−1).
2. The identity property.

    Φ(0) = I ,  where Φ(0) = Φ(t)|_{t=0}                                     (2.4.26)

3. The transition property.

    Φ(t_1 + t_2) = Φ(t_1)Φ(t_2) = Φ(t_2)Φ(t_1) ,  ∀ t_1, t_2                 (2.4.27)

4. The determinant property.
The transition matrix is a non-singular matrix.

5. The inversion property.

    Φ(−t) = Φ^(−1)(t)                                                        (2.4.28)

6. The transition matrix is the solution of the matrix differential equation,

    dΦ(t)/dt = AΦ(t) ,  Φ(0) = I ,  dΦ(t)/dt |_{t=0} = A                     (2.4.29)
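Properties 2, 3 and 5 can be verified numerically with scipy's matrix exponential; a minimal sketch, with the matrix A assumed only for illustration:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example matrix
Phi = lambda t: expm(A * t)                # Phi(t) = e^(A t)

t1, t2 = 0.7, 1.3
identity_ok = np.allclose(Phi(0.0), np.eye(2))                 # Phi(0) = I
transition_ok = np.allclose(Phi(t1 + t2), Phi(t1) @ Phi(t2))   # Phi(t1+t2) = Phi(t1)Phi(t2)
inversion_ok = np.allclose(Phi(-t1), np.linalg.inv(Phi(t1)))   # Phi(-t) = Phi(t)^-1

print(identity_ok, transition_ok, inversion_ok)   # True True True
```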


2.4.4. Transition Matrix Evaluation.


To compute the transition matrix we can use different methods.
1. The direct method. We have A, so we directly evaluate

    Φ(s) = (sI − A)^(−1) ,  Φ(t) = L^(−1){Φ(s)}                              (2.4.30)

2. By using the general formula of a matrix function. Let the characteristic polynomial of the square matrix A be

    L(λ) = det(λI − A)                                                       (2.4.31)

and the characteristic equation,

    L(λ) = 0  ⇒  L(λ) = Π_{k=1}^{N} (λ − λ_k)^(m_k) ,  λ_k ∈ C ,  Σ_{k=1}^{N} m_k = n      (2.4.32)

Then the attached matrix function is,

    f(A) = Σ_{k=1}^{N} [ Σ_{l=0}^{m_k − 1} f^(l)(λ_k) E_kl ] ,  where f^(l)(λ_k) = d^l f(λ)/dλ^l |_{λ=λ_k}      (2.4.33)

The matrices E_kl are called the spectral matrices of the matrix A. There are n matrices E_kl. They are determined by solving a matrix algebraic system using n arbitrary independent functions f(λ). Finally, for

    f(λ) = e^(λt)  →  f(A) = e^(At) ,  f^(l)(λ_k) = t^l e^(λ_k t)            (2.4.34)
3. The polynomial method.
Let f(λ): C → C be a continuous function which has m_k − 1 derivatives at each λ_k, that means

    f^(l)(λ_k) exists ∀ l ∈ [0, m_k − 1] ,                                   (2.4.35)

and let A be an (n×n) matrix whose characteristic polynomial is

    det(λI − A) = Π_{k=1}^{N} (λ − λ_k)^(m_k) .                              (2.4.36)

Let

    p(λ) = α_{n−1} λ^(n−1) + ... + α_1 λ + α_0                               (2.4.37)

be a polynomial of degree n − 1 (it has n coefficients α_i, 0 ≤ i ≤ n − 1). The attached polynomial matrix function p(A) is

    p(A) = α_{n−1} A^(n−1) + ... + α_1 A + α_0 I .                           (2.4.38)

In such conditions, if the system of n equations

    f^(l)(λ_k) = p^(l)(λ_k) ,  k = 1, ..., N ,  l = 0, ..., m_k − 1          (2.4.39)

is satisfied, then

    f(A) = p(A)                                                              (2.4.40)

The relation (2.4.39) expresses n conditions, that means an algebraic system with the n unknowns α_0, α_1, ..., α_{n−1}. Solving this system, the coefficients α_0, ..., α_{n−1} are determined.
4. The numerical method.

    Φ(t) ≈ Φ_p(t) = Σ_{k=0}^{p} A^k t^k / k!                                 (2.4.41)

If the dimension of the matrix A is very big, we may encounter false convergence. It is better to express t = qτ, q ∈ N, with τ small enough, so that

    Φ(t) = Φ(qτ) = (Φ(τ))^q ≈ (Φ_p(τ))^q                                     (2.4.42)
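The truncated series (2.4.41) with the scaling trick (2.4.42) can be sketched as follows (the values of A, p and q are assumed only for illustration); the result is compared against scipy's expm:

```python
import numpy as np
from scipy.linalg import expm

def phi_series(A, t, p=20):
    """Truncated power series of Phi(t) = e^(A t), eq. (2.4.41)."""
    total = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, p + 1):
        term = term @ (A * t) / k      # A^k t^k / k!, built incrementally
        total = total + term
    return total

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example matrix
t, q = 4.0, 16                              # t = q * tau, eq. (2.4.42)
tau = t / q

Phi_scaled = np.linalg.matrix_power(phi_series(A, tau), q)
err = np.max(np.abs(Phi_scaled - expm(A * t)))
print(err)   # very small approximation error
```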

2.4.5. Time Response of LTI from Nonzero Time Moment .

This is called the general time response of a LTI, when the initial time
moment is now t 0 ≠ 0 . The formula is obtained by using the transition property of
the transition matrix.
We saw that, using the Laplace transform, the state response was,

    x(t) = Φ(t)x(0) + ∫_0^t Φ(t − τ)Bu(τ) dτ ,  ∀t ≥ 0                       (2.4.43)

Substituting t = t_0 in (2.4.43), we obtain

    x(t_0) = Φ(t_0)x(0) + ∫_0^{t_0} Φ(t_0 − τ)Bu(τ) dτ .                     (2.4.44)
Taking into consideration (2.4.27),

    Φ(t_0 − τ) = Φ(t_0)Φ(−τ) ,                                               (2.4.45)

and (2.4.28),

    Φ(−t) = Φ^(−1)(t) ,                                                      (2.4.46)

we can extract the vector x(0) from (2.4.44) by multiplying (2.4.44) on the left by Φ^(−1)(t_0) = Φ(−t_0):

    x(0) = Φ(−t_0)x(t_0) − ∫_0^{t_0} Φ(−τ)Bu(τ) dτ

and, by substituting x(0) in the relation (2.4.43), we have,

    x(t) = Φ(t − t_0)x(t_0) − Φ(t) ∫_0^{t_0} Φ(−τ)Bu(τ) dτ + ∫_0^{t_0} Φ(t − τ)Bu(τ) dτ + ∫_{t_0}^{t} Φ(t − τ)Bu(τ) dτ

Since Φ(t)Φ(−τ) = Φ(t − τ), the second and the third terms cancel, and

    x(t) = Φ(t − t_0)x(t_0) + ∫_{t_0}^{t} Φ(t − τ)Bu(τ) dτ ,

which is the general time response of the state vector.

The output general time response is,

    y(t) = CΦ(t − t_0)x(t_0) + ∫_{t_0}^{t} [CΦ(t − τ)B + Dδ(t − τ)] u(τ) dτ

where the first term contains Ψ(t − t_0) = CΦ(t − t_0) and the kernel of the integral is the weighting matrix ℵ(t − τ), and where we have expressed,

    Du(t) = ∫_{t_0}^{t} Dδ(t − τ)u(τ) dτ .

The free response is,

    y_l(t) = Ψ(t − t_0)x(t_0) ,  where Ψ(t) = L^(−1){CΦ(s)} .

Similarly, we compute the so-called weighting matrix,

    ℵ(t) = L^(−1){CΦ(s)B + D} = L^(−1){H(s)} .

We can say that the transfer matrix is the Laplace transform of the impulse matrix ℵ(t).
The impulse matrix is the output response for zero initial conditions (zero initial state) when the input is the unit impulse:

    Y(s) = H(s)U(s)


    u(t) = I_p δ(t)  ⇒  U(s) = I_p  ⇒  Y(s) = H(s)  ⇒  y(t) = L^(−1){H(s)} = ℵ(t)

For scalar systems the weighting matrix is called the weighting function or impulse function (impulse response).
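The statement that the impulse response is the inverse Laplace transform of the transfer function can be checked numerically; a sketch with scipy for the assumed example H(s) = 1/(s + 2), whose analytic impulse response is e^(−2t):

```python
import numpy as np
from scipy import signal

t = np.linspace(0.0, 5.0, 500)
_, h = signal.impulse(([1.0], [1.0, 2.0]), T=t)   # impulse response of H(s) = 1/(s+2)

h_analytic = np.exp(-2.0 * t)                     # L^-1{1/(s+2)} = e^(-2t)
print(np.max(np.abs(h - h_analytic)))             # practically zero
```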
In practice some special responses are very important, like the step response (the output response from zero initial state determined by a unit step input).
For scalar systems, p = 1, r = 1 (one input, one output),

    u(t) = 1(t) = { 0, t < 0 ;  1, t ≥ 0 } ,  U(s) = 1/s

    Y(s) = H(s)·(1/s)  ⇒  y(t) = L^(−1){H(s)·(1/s)}

Example. Let

    H(s) = K/(Ts + 1) ,  U(s) = (1/s)Δu  ⇒

    y(t) = Σ Rez{ (K/(Ts + 1))·(Δu/s)·e^(st) } ,  over the poles s = 0 and s = −1/T, so that

    y(t) = K(1 − e^(−t/T))Δu

Figure no. 2.4.1. (Plot: the absolute input u^a(t) starts from the initial steady state u^a_st(t_0) and changes by Δu towards u^a(∞); the absolute output y^a(t) = L^(−1){H(s)U(s)} + y^a_st(t_0) starts from the initial steady state y^a_st(t_0) and settles at y^a(∞). The tangent drawn at a point of the response determines the time interval T, and A denotes the area between the new steady state and the response.)
u^a_st(t_0) is the steady state of the input, observed at the absolute time t_0^a, so that

    y^a(t) = y(t) + y^a_st(t_0) .

At any point B of the output response of this (first order) system we can draw the tangent and obtain the time interval T. We can also compute the area A between the new steady state and the output response. Then

    T = A / y(∞) ,  K = y(∞) / u(∞) .
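These identification relations can be verified numerically; a sketch with numpy/scipy, where K, T and Δu are assumed only for illustration:

```python
import numpy as np
from scipy.integrate import trapezoid

K, T, du = 2.0, 3.0, 1.5                       # assumed gain, time constant, step size
t = np.linspace(0.0, 60.0, 60001)
y = K * (1.0 - np.exp(-t / T)) * du            # step response y(t) = K(1 - e^(-t/T))du

y_inf = K * du                                 # new steady state y(inf)
area = trapezoid(y_inf - y, t)                 # area A between y(inf) and y(t)

print(area / y_inf, y_inf / du)                # ~T and K respectively
```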


3. SYSTEM CONNECTIONS.

3.1. Connection Problem Statement.


Let S i , i ∈ I be a set of systems where I is a set of indexes, each S i
being considered as a subsystem,
Si ≜ Si(Ωi, fi, gi, xi)   (3.1.1)
where the utilised symbols express:
Ωi - the set of allowed inputs;
fi - the proper state equation;
gi - the output relation;
xi - the state vector.
Knowing the above elements one can determine the set Γ i of the possible
outputs.
These subsystems can be dynamical or non-dynamical (scalor) ones, can
be continuous or discrete time ones, can be logical systems, stochastic systems
etc. as for example:

3.1.1. Continuous Time Nonlinear System (CNS).

Si:  ẋi(t) = fi(xi(t), ui(t), t);  xi(t0) = xi0,  t ≥ t0 ∈ T ⊆ R
     yi(t) = gi(xi(t), ui(t), t);  Ωi = { ui | ui : T → Ui; T ⊆ R; ui admitted }
(3.1.2)
3.1.2. Linear Time Invariant Continuous System (LTIC).
It is a particular case of CNS.
Si:  ẋi(t) = Ai xi(t) + Bi ui(t);  xi(t0) = xi0,  t ≥ t0 ∈ T ⊆ R
     yi(t) = Ci xi(t) + Di ui(t);  Ωi = { ui | ui : T → Ui; T ⊆ R; ui admitted }
(3.1.3)
3.1.3. Discrete Time Nonlinear System (DNS).
Si:  xi(k+1) = fi(xi(k), ui(k), k);  xi(k0) = xi0,  k ≥ k0 ∈ T ⊆ Z
     yi(k) = gi(xi(k), ui(k), k);  Ωi = { {ui(k)} | ui(k) : T → Ui; T ⊆ Z; {ui(k)} admitted }
(3.1.4)
3.1.4. Linear Time Invariant Discrete System (LTID).
It is a particular case of DNS.
Si:  xi(k+1) = Ai xi(k) + Bi ui(k);  xi(k0) = xi0,  k ≥ k0 ∈ T ⊆ Z
     yi(k) = Ci xi(k) + Di ui(k);  Ωi = { {ui(k)} | ui(k) : T → Ui; T ⊆ Z; {ui(k)} admitted }
(3.1.5)
Let us consider that a set of subsystems as defined above constitutes a family of subsystems, denoted FI,
FI = {Si, i ∈ I}.   (3.1.6)


One can say that a subsystem Si (or generally a system) is defined (or specified, or that it exists) if the elements (Ωi; fi; gi; xi) are specified.
From these elements all the other attributes, as discussed in Ch.1. and
Ch.2. , can be deduced or understood.
These other attributes can be: the system type (continuous, discrete, logical); the sets Ti, Ui, Γi, Yi, etc.
A family of subsystems FI builds up a connection if, in addition to the elements
Si ≜ Si(Ωi, fi, gi, xi),
two other sets Rc and Cc are defined, having the following meaning:
1. Rc = the set of connection relations;
2. Cc = the set of connection conditions.
Rc represents a set of algebraical relations between the variables ui, yi, i ∈ I ,
and other new variables introduced by these relations.
These new variables can be new inputs (causes), denoted by vj, j ∈ J, or new outputs denoted by wl, l ∈ L, where J, L represent index sets.
Cc represents a set of conditions which must be satisfied by the input and
output variables (ui, yi), i ∈ I , of each subsystem, and also by the new
variables v j , j ∈ J , w l , l ∈ L introduced through Rc .
These conditions refer to:
the physical meaning, clarifying whether the abstract systems are mathematical models of some physical oriented systems (objects) or purely abstract systems conceived (invented) in a theoretical synthesis procedure;
the number of input-output components, that is, whether the input and output variables are vectors or scalars;
the properties of the entities Ωi; fi; gi, interpreted as functions or sets of functions, each of them defined on its observation domain.
The 3-tuple
S = {FI; Rc; Cc},  FI = {Si, i ∈ I}   (3.1.7)
constitutes a correct connection, or a possible connection, if S has the attributes of a dynamical system for which vj, j ∈ J, are input variables and wl, l ∈ L, are output variables.
The system S is called the equivalent system or the interconnected system
of the subsystems family FI , family interconnected by the pair
{Rc; Cc}.


Observation 1.
Any connection of subsystems is performed only through the input and
output variables of each component subsystem but not through the state
variables.
The set Rc does not contain relations in which the state vectors xi, i ∈ I, or the components of the vectors xi, i ∈ I, appear.

If, in some examples, the interconnection relations Rc contain components of the state vectors, we must understand that they represent components of the output vectors yi, for example yik = xik, but for the sake of convenience and writing economy different symbols have not been utilised.
This is a very important issue when a subsystem Si (an abstract one) is the mathematical model of a physical oriented system for which some state variables (some components of the state vector) are not accessible for measurement and observation, or even have no physical existence (physical meaning), and so they cannot be taken as components of the output vector.

Observation 2.
If to the interconnected system S only the behaviour with respect to the
input variables (the forced response) is under interest, then each subsystem Si
can be represented by its input-output relation.
For LTI (LTIC or LTID) systems, these input-output relations are the
transfer matrices (for MIMO) or transfer functions (for SISO), denoted by Hi(s)
for LTIC and by Hi(z) for LTID systems.
Reciprocally, if in an interconnected structure only transfer matrices (functions) appear, we have to understand that only the input-output behaviour (the forced response) is expected, or is enough, for that analysis.
Obviously, if the transfer matrix (function) of the equivalent
interconnected system S is given or is determined (calculated), we can deduce
different forms of the state equations starting from this transfer matrix
(function), which are called different realisations of the transfer matrix
(function) by state equations.
If this transfer matrix (function) is a rational irreducible one (there are no common factors between numerators and denominator), then all these state equation realisations certainly express only the completely controllable and completely observable part of the interconnected dynamical system S; that means all the state components of these realisations are both controllable and observable.
If this transfer matrix (function) is not irreducible (there are common factors between numerators and denominator), no one can guarantee that these state realisations will not contain state components of the system S which are only uncontrollable, only unobservable, or both uncontrollable and unobservable.

Observation 3.
If to the interconnected system S only the steady-state behaviour is under
interest, then each subsystem Si can be represented by its steady-state
mathematical model (static characteristics).
Notice that the steady-state equivalent model of the interconnected
system S, evaluated based on the steady-state models of the subsystems Si, has a
meaning if and only if it is possible to get a steady-state for the interconnected
system in the allowed domains of inputs values.
This means the interconnected system S must be asymptotically stable.
The steady-state mathematical model of the interconnected system can be
obtained by using graphical methods when some subsystems are described by
graphical static characteristics, experimentally determined.
Generally, the process of deducing the mathematical model of an interconnected system, whichever its type: complete (by state equations), input-output (particularly by transfer matrices) or steady state (by static characteristics), is called the connection solving process, or the reduction of the structure to an equivalent compact form.
Essentially to solve a connection means to eliminate all the intermediate
variables introduced by algebraical relations from Rc or from FI .
There are three fundamental types of connections:
1: Serial connection (cascade connection);
2: Parallel connection;
3: Feedback connection,
through which the majority of practical connections can be expressed.

3. SYSTEM CONNECTIONS. 3.2. Serial Connection.

3.2. SERIAL CONNECTION.

3.2.1. Serial Connection of two Subsystems.


For the beginning let us consider only two subsystems S1, S2 in the family FI, represented by a block diagram as in Fig. 3.2.1.

(u = u1 → [S1] → y1 = u2 → [S2] → y2 = y,  equivalent to  u → [S] → y)

Figure no. 3.2.1.


The 3-tuple (3.2.1),
FI = {S1, S2},  I = {1; 2}
Rc = {u2 = y1;  u1 = u;  y = y2}   (3.2.1)
Cc = {Γ1 ⊆ Ω2;  Ω = Ω1;  Γ = Γ2}
builds up a serial connection of the two subsystems.
In a serial connection of two subsystems one of them, here S1, has the
attribute of "upstream" and the other S2, of "downstream".
The input of the downstream subsystem is identical to the output of the
upstream subsystem.
From the connection relation u2=y1 one can understand that the function
(u 2 : T 2 → U 2 ) ∈ Ω 2
is identical with any function
(y 1 : T 1 → Y 1 ) ∈ Γ 1
which arises at the output of the subsystem S1.
From this identity the following must be understood:
- The sets Y1 and U2 have the same dimensions (as Cartesian products)
- Y1 ⊆ U2 ;
- The variables have the same physical meaning if both S 1 and S 2 express
oriented physical systems;
- u2 and y1 are the same size vectors;
- T1 ≡ T2 .
From the connection condition Γ1 ⊆ Ω2 one understands that the set (class) Ω2 of the functions allowed to represent the input variable u2 to S2 (Ω2 is a defining element of S2) must contain all the functions y1 from the set Γ1 of the possible outputs of S1.
Of course, Γ1 depends both on the equations (f1, g1) which define S1 and on Ω1 too.
For example, if Ω2 represents a set of continuous and differentiable functions, but from S1 one can obtain output functions which have discontinuities of the first kind, the connection is not possible even if T2 ≡ T1 and Y1 ⊆ U2.


Here, the new letters u, Ω, y, Γ can be interpreted simply as notations making clear that they refer to the input and the output of the new equivalent interconnected system.
If S1, S2, S represent symbols for operators, formally one can write
y = y2 = S2{u2} = S2{y1} = S2{S1{u1}} = (S2 ∘ S1){u1} = (S2 ∘ S1){u} = S{u},   (3.2.2)
where it has been denoted
S = S2 ∘ S1 = S2S1,
understanding by "S2 ∘ S1", or just S2S1, the result of the composition of the two operators.
Many times, in connection solving problems, only FI and Rc are presented (given), as the elements strictly necessary to solve that connection formally.
The set of conditions Cc is explicitly mentioned only when the existence of the connection must be proved; otherwise it is tacitly assumed that all these conditions are fulfilled.

3.2.2. Serial Connection of two Continuous Time Nonlinear Systems


(CNS).
Let us consider two differential systems of the form (3.1.2), serially interconnected. We determine the complete system, described by state equations, which results after the connection.
FI:  S1: ẋ1 = f1(x1, u1, t);  y1 = g1(x1, u1, t)      S2: ẋ2 = f2(x2, u2, t);  y2 = g2(x2, u2, t)   (3.2.3)
x1 (n1 × 1); u1 (p1 × 1); y1 (r1 × 1);   x2 (n2 × 1); u2 (p2 × 1); y2 (r2 × 1)
Rc:  u2 = y1 ⇒ (p2 = r1);  u = u1 ⇒ (p = p1);  y = y2 ⇒ (r = r2)   (3.2.4)
Obviously, x1, y1, u1, x2, u2, y2 are vectors of time functions.
By eliminating the intermediate variables u2, y1 and using the notations u, y for u1, y2 respectively, we obtain
S:  ẋ1 = f1(x1, u, t)
    ẋ2 = f2(x2, g1(x1, u, t), t)
    y = g2(x2, g1(x1, u, t), t)
Denoting
Ψ1(x, u, t) = Ψ1(x1, x2, u, t) = f1(x1, u, t)
Ψ2(x, u, t) = Ψ2(x1, x2, u, t) = f2(x2, g1(x1, u, t), t)
g(x, u, t) = g̃(x1, x2, u, t) = g2(x2, g1(x1, u, t), t)
x = [x1; x2],  ((n1 + n2) × 1),
the equivalent interconnected system is expressed by the equations,

S:  ẋ = f(x, u, t),  f(x, u, t) = [Ψ1(x, u, t); Ψ2(x, u, t)]
    y = g(x, u, t),   S = S2 ∘ S1   (3.2.5)
One can observe that the order of the system, determined for differential systems by the size of the state vector, is n = n1 + n2.
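The construction above can be sketched numerically; the subsystems f1, g1, f2, g2 below are hypothetical examples, integrated with forward Euler:

```python
import math

# Sketch of the serial connection (3.2.5) of two first-order subsystems,
# integrated with forward Euler.  f1, g1, f2, g2 are hypothetical examples.
def f1(x1, u):  return -x1 + u          # subsystem S1 state equation
def g1(x1, u):  return math.tanh(x1)    # S1 output relation (nonlinear)
def f2(x2, u2): return -2*x2 + u2       # subsystem S2 state equation
def g2(x2, u2): return x2               # S2 output relation

dt, x1, x2 = 1e-3, 0.0, 0.0
for _ in range(int(10/dt)):             # unit step input u = 1
    u = 1.0
    u2 = g1(x1, u)                      # Rc: u2 = y1
    x1 += dt*f1(x1, u)                  # Psi1 component of f(x, u, t)
    x2 += dt*f2(x2, u2)                 # Psi2 component of f(x, u, t)
y = g2(x2, u2)
print(round(y, 2))  # steady state: x1 -> 1, u2 = tanh(1), x2 -> tanh(1)/2
```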

3.2.3. Serial Connection of two LTIC. Complete Representation.


Let us consider two LTIC, on short LTI, serially interconnected. One
can determine the interconnected system completely represented by state
equations.
S1:  ẋ1 = A1x1 + B1u1;  y1 = C1x1 + D1u1      S2:  ẋ2 = A2x2 + B2u2;  y2 = C2x2 + D2u2   (3.2.6)
x1 (n1 × 1); u1 (p1 × 1); y1 (r1 × 1);   x2 (n2 × 1); u2 (p2 × 1); y2 (r2 × 1)
Rc:  u2 = y1 ⇒ (p2 = r1);  u = u1;  y = y2;  p = p1;  r = r2   (3.2.7)
Eliminating the intermediate variables u2, y1 ⇒
ẋ1 = A1x1 + B1u
ẋ2 = A2x2 + B2[C1x1 + D1u]  ⇒  ẋ2 = B2C1x1 + A2x2 + B2D1u
y = C2x2 + D2[C1x1 + D1u]   ⇒  y = D2C1x1 + C2x2 + D2D1u
Concatenating the two state vectors x1, x2 into a single vector x = [x1; x2], one obtains the compact form of the interconnected system,
ẋ = Ax + Bu
y = Cx + Du   (3.2.8)
with
A = [A1, 0; B2C1, A2];   B = [B1; B2D1];   C = [D2C1  C2];   D = D2D1

3.2.4. Serial Connection of two LTIC. Input-Output Representation.


Let us assume we are interested only in the input-output behaviour of the serially interconnected system represented as in Fig. 3.2.2.
For nonlinear systems, as presented in Ch. 3.2.2, the explicit expression of the forced response in a general manner is practically impossible, because the decomposition property is not available for arbitrary nonlinear systems.
For the linear systems from Ch. 3.2.3, the transfer matrices can easily be determined as,
H1(s) = C1(sI − A1)⁻¹B1 + D1   (3.2.9)
H2(s) = C2(sI − A2)⁻¹B2 + D2

The forced responses of the two systems are expressed by



S1: Y1(s) = H1(s)U1(s);   S2: Y2(s) = H2(s)U2(s)

and the connection relations are
Rc:  U2(s) ≡ Y1(s);  U(s) ≡ U1(s);  Y(s) ≡ Y2(s);  p1 = p;  r2 = r;  p2 = r1.

(U(s) = U1(s) → [H1(s) (r1 × p1)] → Y1(s) = U2(s) → [H2(s) (r2 × p2)] → Y(s),  equivalent to  U(s) → [H(s) (r × p)] → Y(s))

Figure no. 3.2.2.


Y(s) = Y2(s) = H2(s)U2(s) = H2(s)Y1(s) = H2(s)H1(s)U1(s) = H(s)U(s)
H(s) = H2(s)H1(s)   (3.2.10)
It can easily be verified that the transfer matrix of the complete system (3.2.8), obtained by serial connection, is the product of the transfer matrices, as in (3.2.10).
Indeed, from (3.2.8), and denoting Φ1(s) = (sI − A1)⁻¹, Φ2(s) = (sI − A2)⁻¹, we obtain,
H(s) = C[sI − A]⁻¹B + D = [D2C1  C2] · [sI − A1, 0; −B2C1, sI − A2]⁻¹ · [B1; B2D1] + D2D1 =
= [D2C1  C2] · [Φ1(s), 0; Φ2(s)B2C1Φ1(s), Φ2(s)] · [B1; B2D1] + D2D1 =
= [D2C1Φ1(s) + C2Φ2(s)B2C1Φ1(s)   C2Φ2(s)] · [B1; B2D1] + D2D1
H(s) = D2C1Φ1(s)B1 + C2Φ2(s)B2C1Φ1(s)B1 + C2Φ2(s)B2D1 + D2D1 =
= C2Φ2(s)B2[C1Φ1(s)B1 + D1] + D2[C1Φ1(s)B1 + D1]
H(s) = [C2Φ2(s)B2 + D2][C1Φ1(s)B1 + D1] = H2(s)H1(s)
so the TF of the serial connection is the product of the components' TFs.

3.2.5. The controllability and observability of the serial connection.


The serial connection of a family of systems FI = {Si, i ∈ I} preserves some common properties of the component systems, as for example linearity and stability.
Other properties, such as controllability and observability, may disappear in the serially interconnected system even if each component system satisfies them.
In the sequel we shall analyse such aspects based on a concrete example: the serial connection of two LTI systems, each of the first order.

3.2.5.1. State Diagrams Representation.


We consider a serial connection of two SISO systems of the first order.
S1:  H1(s) = K1/(s + p1) = Y1(s)/U1(s)  ⇒  ẋ1(t) = −p1x1(t) + K1u1(t);  y1(t) = x1(t)   (3.2.11)
S2:  H2(s) = K2(s + z2)/(s + p2) = Y2(s)/U2(s)  ⇒  ẋ2(t) = −p2x2(t) + K2(z2 − p2)u2(t);  y2(t) = x2(t) + K2u2(t)   (3.2.12)
Rc:  u2(t) ≡ y1(t),  u(t) ≡ u1(t),  y(t) ≡ y2(t)   (3.2.13)
Cc:  Γ1 ⊆ Ω2,  Ω = Ω1,  Γ = Γ2.

The connection, expressed by input-output relations (transfer functions)


is represented in the block diagram from Fig. 3.2.3.

(U(s) = U1(s) → [H1(s)] → Y1(s) = U2(s) → [H2(s)] → Y2(s) = Y(s),  equivalent to  U(s) → [H(s) = H2(s)H1(s)] → Y(s))

Figure no. 3.2.3.
In this scalar case, the equivalent transfer function can be presented as an
algebraical expression (because of the commutability property) also under the
form,
H(s) = H2 (s)H1 (s) = H1 (s)H 2 (s)
H(s) = K1K2(s + z2)/[(s + p1)(s + p2)].   (3.2.14)
One can observe that H(s) is of the order n 1 + n 2 = 1 + 1 = 2 .
The same interconnected system, but represented by state equations, can
be illustrated using so called "state diagrams", as shown in Fig. 3.2.4.
The "State Diagram" (SD) is form of graphical representation of the
state equations, both in time domain or complex domain, using block
diagrams (BD) or signal flow graphs (SFG).
The state diagram (SD) of LTIs contains only three types of graphical
symbols: integrating elements; summing elements; proportional elements.
The summing and proportional elements, called on short summators
and scalors respectively, have the same graphical representation both in time
and complex domains. For the integrating elements, called on short
integrators, frequently the complex domain graphical form is utilised and the
corresponding variables are denoted in time domain together, sometimes, with
their complex form notation.


If, for a system given by a transfer matrix (function), we succeed in representing it by a block diagram (BD) or a signal flow graph (SFG) which contains only the three types of symbols (integrators, summators and scalors), then we can interpret that block diagram or signal flow graph as a state diagram (SD). Based on it we can write (deduce), on the spot, a realisation by state equations of that transfer matrix (function).
Given the state diagrams of the systems Si of a family FI, the state diagram of the interconnected system can very easily be drawn by using the connection relations, and from it the state equations of the interconnected system can be deduced.
In our case the state diagram (SD) means the graphical representation in
complex domain of the equations (3.2.11) which is the SD for S1, (3.2.12)
which is the SD for S2 and (3.2.13) which contains the connection relations.

(SD of S1: u = u1 → scalor K1 → summator → integrator 1/s with initial condition x1(0) → x1, with feedback −p1 to the summator; y1 = x1. Rc: u2 = y1. SD of S2: u2 → scalor K2(z2 − p2) → summator → integrator 1/s with initial condition x2(0) → x2, with feedback −p2 to the summator; y2 = x2 + K2u2 = y.)

Figure no. 3.2.4.
Based on this SD, the state equations of the serially interconnected system are deduced,
ẋ = Ax + bu
y = cᵀx + du,   x = [x1; x2],   (3.2.15)
where,
A = [−p1, 0; K2(z2 − p2), −p2];   b = [K1; 0];   c = [K2; 1];   d = 0
Because
X(s) = Φ(s)x(0) + Φ(s)bU(s); Φ(s) = [sI − A]−1
the (ij) components of the transition matrix Φ ij (s), can be easily determined
based on relation (3.2.15).
By manipulating the SD, the matrix inversion operation is avoided. For more convenience, the SD from Fig. 3.2.4. is equivalently transformed into the SD from Fig. 3.2.5.


(u = u1 → [K1/(s + p1)] with initial condition x1(0) → x1; y1 = x1 = u2 → [K2(z2 − p2)/(s + p2)] with initial condition x2(0) → x2; y = y2 = x2 + K2u2.)

Figure no. 3.2.5.


From (3.2.15) we obtain,
Φij(s) = Xi(s)/xj(0) | x_k(0) = 0, k ≠ j; U(s) ≡ 0   (3.2.16)
so,
Φ11(s) = X1(s)/x1(0) = 1/(s + p1);   Φ12(s) = X1(s)/x2(0) = 0;
Φ21(s) = X2(s)/x1(0) = K2(z2 − p2)/[(s + p1)(s + p2)];   Φ22(s) = X2(s)/x2(0) = 1/(s + p2)
and finally
Φ(s) = [1/(s + p1), 0; K2(z2 − p2)/((s + p1)(s + p2)), 1/(s + p2)]   (3.2.17)
The state response is,
X(s) = Φ(s)x(0) + Φ(s)bU(s), so
X1(s) = X1l(s) + X1f(s) = 1/(s + p1) · x1(0) + K1/(s + p1) · U(s)   (3.2.18)
X2(s) = X2l(s) + X2f(s) = K2(z2 − p2)/[(s + p1)(s + p2)] · x1(0) + 1/(s + p2) · x2(0) + K1K2(z2 − p2)/[(s + p1)(s + p2)] · U(s)
It can be proved that the transfer function of the serial connection is,
H(s) = cᵀΦ(s)b + d = K1K2(s + z2)/[(s + p1)(s + p2)] = H1(s)H2(s).
The expression of the output variable in the complex domain is,
Y(s) = cᵀΦ(s)x(0) + (cᵀΦ(s)b + d)U(s) = Yl(s) + Yf(s)
Y(s) = K2X1l(s) + X2l(s) + H(s)U(s) = Yl(s) + Yf(s)  ⇒
Yl(s) = K2/(s + p1) · x1(0) + K2(z2 − p2)/[(s + p1)(s + p2)] · x1(0) + 1/(s + p2) · x2(0)   (3.2.19)
Yf(s) = K1K2(s + z2)/[(s + p1)(s + p2)] · U(s)   (3.2.20)


If p1 ≠ p2 one obtains,
yl(t) = [K2(p1 − z2)/(p1 − p2) · x1(0)] e^(−p1t) + [K2(z2 − p2)/(p1 − p2) · x1(0) + x2(0)] e^(−p2t).   (3.2.21)
When p1 = p2, applying the inverse Laplace transform in the same way,
yl(t) = K2e^(−p1t) · x1(0) + K2(z2 − p1)te^(−p1t) · x1(0) + e^(−p1t) · x2(0)
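Formula (3.2.21) can be checked against yl(t) = cᵀe^(At)x(0) computed directly from the realisation (3.2.15); the numeric values below are hypothetical:

```python
import numpy as np

# Check the free-response formula (3.2.21) against y_l(t) = c^T e^{At} x(0)
# for the serial connection (3.2.15), with hypothetical numeric values.
K1, K2, p1, p2, z2 = 1.0, 2.0, 1.0, 3.0, 5.0   # p1 != p2, z2 != p1, p2
A = np.array([[-p1, 0.0], [K2*(z2 - p2), -p2]])
c = np.array([K2, 1.0])
x0 = np.array([0.7, -0.4])

def expm(M, terms=60):
    # truncated Taylor series of the matrix exponential (adequate here)
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.9
y_state = c @ expm(A*t) @ x0
y_formula = (K2*(p1 - z2)/(p1 - p2)*x0[0])*np.exp(-p1*t) \
          + (K2*(z2 - p2)/(p1 - p2)*x0[0] + x0[1])*np.exp(-p2*t)
print(np.isclose(y_state, y_formula))  # True
```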

3.2.5.2. Controllability and Observability of Serial Connection.


Case 1: z2 ≠ p1 and z2 ≠ p2.
If the transfer function (3.2.14) obtained from the serial connection,
H(s) = K1K2(s + z2)/[(s + p1)(s + p2)]   (3.2.14)
is irreducible, that is z2 ≠ p1 and z2 ≠ p2, then the complete system S, given by the state equations (3.2.15) and obtained by the serial connection of the subsystems S1 and S2, is completely controllable and completely observable.
These properties are preserved for any realisation by state equations of the irreducible transfer function.
For the system (3.2.15), both the controllability matrix P and the observability matrix Q can be calculated:
P = [b ⋮ Ab] = [K1, −K1p1; 0, K1K2(z2 − p2)];   det P = K1²K2(z2 − p2)   (3.2.22)
Q = [cᵀ; cᵀA] = [K2, 1; −K2p1 + K2(z2 − p2), −p2];   det Q = K2(p1 − z2).   (3.2.23)
In this case it can be observed that:
det(P) ≠ 0, which means the interconnected system is completely controllable;
det(Q) ≠ 0, which means the interconnected system is completely observable.
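The rank tests on P and Q can be repeated numerically for the three interesting situations (z2 distinct from both poles, z2 = p2, z2 = p1); the numeric values below are hypothetical:

```python
import numpy as np

# Controllability/observability tests (3.2.22)-(3.2.23) for the realisation
# (3.2.15): note what happens when z2 = p2 or z2 = p1 (hypothetical values).
def PQ(K1, K2, p1, p2, z2):
    A = np.array([[-p1, 0.0], [K2*(z2 - p2), -p2]])
    b = np.array([[K1], [0.0]])
    c = np.array([[K2, 1.0]])
    P = np.hstack([b, A @ b])     # controllability matrix [b | Ab]
    Q = np.vstack([c, c @ A])     # observability matrix [c^T; c^T A]
    return np.linalg.matrix_rank(P), np.linalg.matrix_rank(Q)

print(PQ(1.0, 2.0, 1.0, 3.0, 5.0))  # z2 != p1, p2: (2, 2), fully contr./obs.
print(PQ(1.0, 2.0, 1.0, 3.0, 3.0))  # z2 = p2: P loses rank -> (1, 2)
print(PQ(1.0, 2.0, 1.0, 3.0, 1.0))  # z2 = p1: Q loses rank -> (2, 1)
```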

Case 2: z2 ≠ p 2 .
If z2 ≠ p2, then det P ≠ 0 and the system is completely controllable, for any relation between z2 and p1. Qualitatively, this property can also be put into evidence by the state diagrams from Fig. 3.2.4, or more clearly from Fig. 3.2.5, where it can be observed that the input u can modify the component x1 and, through it, if z2 ≠ p2, also the component x2 of the state vector.
This inspection of the state diagram will not guarantee the property of complete controllability, but it will categorically reveal the situations when the system does not have the controllability property.


Case 3: z2 = p 2 .
In this case det(P) = 0 and the system is uncontrollable. If z2 = p2, the component x2 certainly cannot be modified by the input, so this state component is uncontrollable; the component x1, however, can still be controlled by the input. In this example the dependence between x1 and u is given by
X1(s) = K1/(s + p1) · U(s),
which shows that x1 is modified by u.
One says that a system is completely controllable, det(P) ≠ 0, if and only if all the components of the state vector are controllable, for any values of these components in the state space.
The assessment of the full rank of the matrix P through its determinant value will not indicate which state vector components are controllable and which of them are not.
More precisely, in our example, this means which of the evolutions (modes) generated by the poles (−p1) or (−p2) can be modified through the input.
When z2 = p2, the transfer function of the serial connection is of the first order,
H(s) = K1/(s + p1) · K2(s + z2)/(s + p2) = K1K2/(s + p1)   (3.2.24)
even though the system still remains of the second order, this second order being put into evidence by the fact that its free response, (3.2.21), keeps depending on the two poles, that is on the mode e^(−p1t) and on the mode e^(−p2t), which are different.
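That the free response keeps both modes even when z2 = p2 can be seen numerically; the values below, with z2 = p2, are hypothetical:

```python
import numpy as np

# With z2 = p2, the transfer function (3.2.24) is first order, yet the free
# response (3.2.21) still contains both modes e^{-p1 t} and e^{-p2 t}.
K1, K2, p1, p2 = 1.0, 2.0, 1.0, 3.0
z2 = p2                                  # the uncontrollable case
t = np.linspace(0.0, 4.0, 401)
x10, x20 = 1.0, 0.5

# free response (3.2.21); with z2 = p2 the e^{-p2 t} bracket keeps only x2(0)
c1 = K2*(p1 - z2)/(p1 - p2)*x10
c2 = K2*(z2 - p2)/(p1 - p2)*x10 + x20
yl = c1*np.exp(-p1*t) + c2*np.exp(-p2*t)

# a purely first-order free response would be proportional to a single mode;
# check that y_l is NOT proportional to e^{-p1 t} alone
ratio = yl/np.exp(-p1*t)
print(np.allclose(ratio, ratio[0]))  # False: the response mixes two modes
```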

Case 4: z2 ≠ p 1 .
If z2 ≠ p1, then det Q ≠ 0 and the system is observable, for any relation between z2 and p2. The visual examination (inspection) of the state diagram, as in Fig. 3.2.5, will not reveal the observability property directly.
We must not conclude that if each state vector component affects (modifies) the output variable then the system is certainly observable, because this is not true.
It must be remembered that the observability property expresses the possibility to determine the initial state which existed at a time moment t0, based on the knowledge of the output and of the input from that t0 onwards.
For LTI systems it is enough to know the output only, irrespective of the value of t0. Because of that, for LTI systems one says: the pair (A, C) is observable.
From Fig. 3.2.5 it can be observed that when z2 = p1, which means the system is not observable, both x1 and x2 still modify the output variable.


3.2.5.3. Observability Property Underlined as the Possibility to


Determine the Initial State if the Output and the Input are Known.
In the above analysis we saw that when z2 = p1 the interconnected system has det(Q) = 0, which means it is not observable, even if the output
y(t) = cᵀx(t) + du(t) = K2 · x1(t) + 1 · x2(t) + 0 · u(t)
depends on both components of the state vector.
We shall illustrate, through this example, that the observability property
means the possibility to determine the initial state based on the applied input
and the observed output starting from that initial time moment.
In the case of LTI systems, this property depends on the output only, as a response to that initial state. For LTI systems the property does not depend on the input, confirming that it is a structural property of the system.
With these observations, we shall consider u(t) ≡ 0 (no particular input is necessary; it is indifferent what input is applied), so the output is the free response, y(t) = yl(t).
The time expression of the free response is obtained by applying the
inverse Laplace transform to the relation (3.2.19) evaluated in two distinct
cases: Case a: p 1 ≠ p 2 and Case b: p 1 = p 2 .

Case a: p1 ≠ p2.
One obtains the expression (3.2.21) of the form,
yl(t) = [K2(p1 − z2)/(p1 − p2) · x1(0)] e^(−p1t) + [K2(z2 − p2)/(p1 − p2) · x1(0) + x2(0)] e^(−p2t)   (3.2.21)
or equivalently,
yl(t) = K2/(p1 − p2) · [(p1 − z2)e^(−p1t) + (z2 − p2)e^(−p2t)] · x1(0) + [e^(−p2t)] · x2(0)   (3.2.25)

Because the goal is the determination of the state vector x(0), through its two components x1(0) and x2(0), a system of two equations is built up from (3.2.25) for two different time moments t1 ≠ t2,

[ K2/(p1 − p2)·[(p1 − z2)e^(−p1t1) + (z2 − p2)e^(−p2t1)]   e^(−p2t1) ]   [ x1(0) ]   [ yl(t1) ]
[ K2/(p1 − p2)·[(p1 − z2)e^(−p1t2) + (z2 − p2)e^(−p2t2)]   e^(−p2t2) ] · [ x2(0) ] = [ yl(t2) ]

which can be expressed in a compressed form,
G · x(0) = [yl(t1); yl(t2)],
where G denotes the above 2 × 2 matrix.
The possibility to determine the initial state x(0) univocally is assured by the determinant of the matrix G,
det G = K2(p1 − z2)/(p1 − p2) · e^(−p1t1) · e^(−p2t2) · [1 − e^(−(p1 − p2)(t2 − t1))]
Because p1 ≠ p2 and t1 ≠ t2,


det G ≠ 0  ⇔  p1 ≠ z2  ⇔  det Q = K2(p1 − z2) ≠ 0
x(0) = G⁻¹ · [yl(t1); yl(t2)]
Case b: p1 = p2.
The analysis is performed in an identical manner, but the time domain free response expression deduced from (3.2.19) is,
yl(t) = K2e^(−p1t) · x1(0) + K2(z2 − p1)te^(−p1t) · x1(0) + e^(−p1t) · x2(0) =
= K2[1 + (z2 − p1)t]e^(−p1t) · x1(0) + e^(−p1t) · x2(0)   (3.2.26)
The matrix G has the form,
G = [ K2[1 + (z2 − p1)t1]e^(−p1t1),  e^(−p1t1) ;  K2[1 + (z2 − p1)t2]e^(−p1t2),  e^(−p1t2) ]
det G = K2(p1 − z2)(t2 − t1)e^(−p1(t2 + t1))
Also in this case,
z2 = p1 = p2  ⇔  det G = 0  ⇔  det Q = 0,
which means the system is not observable when z2 = p1.
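For case a, the reconstruction x(0) = G⁻¹[yl(t1); yl(t2)] can be carried out numerically; the values below are hypothetical:

```python
import numpy as np

# Reconstruct the initial state x(0) from two samples of the free response
# (case a: p1 != p2, z2 != p1), solving G x(0) = [y_l(t1), y_l(t2)]^T.
K2, p1, p2, z2 = 2.0, 1.0, 3.0, 5.0
x0 = np.array([0.7, -0.4])          # "unknown" initial state to recover

def g_row(t):
    # one row of the matrix G built from (3.2.25)
    return [K2/(p1 - p2)*((p1 - z2)*np.exp(-p1*t) + (z2 - p2)*np.exp(-p2*t)),
            np.exp(-p2*t)]

def y_free(t):
    # free response (3.2.25) generated by the "unknown" x0
    a, b = g_row(t)
    return a*x0[0] + b*x0[1]

t1, t2 = 0.2, 0.8
G = np.array([g_row(t1), g_row(t2)])
x0_est = np.linalg.solve(G, np.array([y_free(t1), y_free(t2)]))
print(np.allclose(x0_est, x0))  # the initial state is recovered
```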

3.2.5.4. Time Domain Free Response Interpretation


for an Unobservable System.
Let us consider the situation when z2 = p1, so the observability condition is not satisfied.
The free response (3.2.25),
yl(t) = [K2(p1 − z2)/(p1 − p2) · x1(0)] · e^(−p1t) + [K2(z2 − p2)/(p1 − p2) · x1(0) + x2(0)] · e^(−p2t),
in this case of z2 = p1, is of the form
yl(t) = [K2(z2 − p2)/(p1 − p2) · x1(0) + x2(0)] · e^(−p2t),  p1 ≠ p2,  (z2 = p1)   (3.2.27)
and that from (3.2.26) is,
yl(t) = [K2x1(0) + x2(0)] · e^(−p2t),  p1 = p2,  (z2 = p1 = p2)   (3.2.28)
From (3.2.27) it can be observed that the free response of this
unobservable system depends on both components of the state vector x 1 (0)
and x 2 (0) , but the output expresses the effect of the pole −p 2 only, through
the mode e−p 2 t .
In this case one cannot determine, based on the response, both components x1(0) and x2(0) separately; only the linear combination K2(z2 − p2)/(p1 − p2) · x1(0) + x2(0) can be determined.
From (3.2.28) we see that this structure is maintained also when p1 = p2.


Therefore, if in a serial connection (3.2.13) the subsystem S2 (3.2.12) has a zero s = −z2 equal to the pole s = −p1 of the subsystem S1 (3.2.11), then the interconnected system remains of the second order but it does not have the observability property.
Its forced response is of the first order, depending only on the pole s = −p2, having the transfer function,
H(s) = K1/(s + p1) · K2(s + z2)/(s + p2) = K1K2/(s + p2).   (3.2.29)
In the transfer function of this serial connection, a common factor appeared between numerator and denominator which, for the state equations realisation (3.2.15), determined the lack of the observability property of that realisation.
If the same common factor appeared in another realisation by state equations of the same transfer function, it would be possible to lose the controllability property of that realisation, or the observability property, or both the observability and the controllability properties.
Generally, there exist particular realisations by state equations which explicitly preserve the controllability property, called controllable realisations (but which do not guarantee the observability property), as well as particular realisations which explicitly preserve the observability property, called observable realisations (but which do not guarantee the controllability property).

3.2.6. System Stabilisation by Serial Connection.


We saw, through the example presented above, that by serial connection it is possible for a pole of a subsystem transfer function to be simplified (cancelled) by a zero of the transfer function of another subsystem, in such a way that the cancelled pole does not appear in the transfer function of the interconnection.
This process of eliminating poles of some subsystems by cancellation with zeros of other subsystems is called serial compensation or cascade compensation.
Of course, since notions such as poles, zeros and transfer functions are involved, it is expected that their effects appear in the forced response only.
In particular, at least from a theoretical point of view, in such a way we are able to eliminate all the poles of the system from the right half of the complex plane; that means a system which was unstable (because of the poles located in the right half complex plane) becomes a stable one.


This process is called stabilisation by serial connection or stabilisation


by serial compensation.
The stabilisation by serial compensation will assure only the external stability, known also as input-output stability.
We recall that there exist several concepts of stability, one of them being the external stability.
Such a procedure is strongly discouraged in practice. We sustain this recommendation by a concrete analysis performed on the example discussed above, described by relations (3.2.11) ... (3.2.15).
Shall we consider the case where −p 1 > 0 and −p 2 < 0 namely the
systems S 1 is stable but S 2 is unstable. Considering in addition that
−z2 = −p 1 > 0 then S 2 is a non minimum phase stable system.
In such conditions the input-output behaviour of the serial
interconnected system is stable (external stability), as it results from the
transfer function (3.2.29),
H(s) = [K1/(s + p1)] ⋅ [K2(s + z2)/(s + p2)] = K1K2/(s + p2) = Y(s)/U(s) (zero initial conditions) (3.2.29)
for which the forced response is determined by a first order transfer function
containing only the stable pole s = −p2.
Indeed, the forced component yf(t) of the output, as a response to a
bounded input u(t) which admits a steady state value u(∞), where
u(∞) = lim_{s→0} s ⋅ U(s),
evolves to a steady state bounded value yf(∞) given by
lim_{t→∞} yf(t) = lim_{s→0} s ⋅ Y(s) = lim_{s→0} s ⋅ [K1K2/(s + p2)] ⋅ U(s) = (K1K2/p2) ⋅ u(∞) = yf(∞),
because the complex variable function s ⋅ H(s) = s ⋅ K1K2/(s + p2) is analytic on both
the imaginary axis and the right half plane. This expresses the external stability
in the meaning of bounded input - bounded output (BIBO).
But from (3.2.27) and (3.2.28) we observe that, in the particular case of
z2 = p1, also the free response is bounded and asymptotically goes to zero,
lim_{t→∞} yl(t) = lim_{t→∞} [K2(z2 − p2)/(p1 − p2) ⋅ x1(0) + x2(0)] ⋅ e^(−p2 t) =
= [K2(z2 − p2)/(p1 − p2) ⋅ x1(0) + x2(0)] ⋅ [lim_{t→∞} e^(−p2 t)] = [..] ⋅ 0 = 0, for p1 ≠ p2,
lim_{t→∞} yl(t) = lim_{t→∞} [K2 x1(0) + x2(0)] ⋅ e^(−p2 t) =
= [K2 x1(0) + x2(0)] ⋅ [lim_{t→∞} e^(−p2 t)] = [..] ⋅ 0 = 0, for p1 = p2,
for any bounded initial state x1(0), x2(0).



This means that the internal stability is assured too. However we must
mention that this internal stability appeared only because the serial
interconnected system is an unobservable one, having disconnected the unstable
mode e^(−p1 t).
Formally we can say that an unstable system S1 can be made an externally
stable one (bounded input - bounded output, BIBO) by serial compensation,
performing this by a serial connection of it with another system S2 which
must have a zero equal to the undesired pole (here the unstable pole) of the
system S1.
This kind of stabilisation by serial connection is interesting "on
paper" only, because:
1. It is very sensitive.
2. The system keeps on being an internally unstable one.
If the relation z2 = p1 is not exactly realised, but instead z2 = p1 + ε with ε as
small as possible, the response y(t) goes to infinity.
From (3.2.25) it results,
yl(t) = [K2 ε/(p2 − p1)] ⋅ x1(0) ⋅ e^(−p1 t) + [K2(z2 − p2)/(p1 − p2) ⋅ x1(0) + x2(0)] ⋅ e^(−p2 t) (3.2.30)
lim_{t→∞} yl(t) → ±∞ if −p1 > 0
yf(t) = L^(−1){ [K1K2(s + z2)/((s + p1)(s + p2))] ⋅ U(s) } = Σ_i ri(t) e^(λi t) + [K1K2 ε/(p2 − p1)] ⋅ U(−p1) ⋅ e^(−p1 t), −p1 > 0 (3.2.31)
where by ri(t) we have denoted the residues of the function Yf(s) in the pole
−p2 and in the poles of the Laplace transform U(s) of the input. The poles of
U(s) belong to the left half plane because u(t) is a bounded function.
So,
lim_{t→∞} yf(t) = 0 ± ∞ = ±∞.
Each component of the state vector becomes unbounded for non zero
causes (the initial state and the input).
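This behaviour can be checked numerically. The following minimal sketch (not from the book; illustrative values p1 = −1, p2 = 1, K1 = K2 = 1, a unit step input, and one possible state realisation of H1 and H2) integrates the serial connection by the Euler method, once with exact cancellation z2 = p1 and once with z2 = p1 + ε:

```python
def simulate(z2, t_end=10.0, dt=1e-3):
    # S1: H1 = K1/(s+p1) with p1 = -1 (unstable pole at s = +1), y1 = x1
    # S2: H2 = K2(s+z2)/(s+p2) = K2 + K2(z2-p2)/(s+p2), p2 = 1
    p1, p2, K1, K2 = -1.0, 1.0, 1.0, 1.0
    x1 = x2 = 0.0
    u = 1.0                                   # bounded (step) input
    for _ in range(int(t_end / dt)):
        dx1 = -p1 * x1 + K1 * u               # state of S1
        dx2 = -p2 * x2 + K2 * (z2 - p2) * x1  # state of S2, u2 = y1 = x1
        x1 += dt * dx1
        x2 += dt * dx2
    y = x2 + K2 * x1                          # y = y2 = x2 + K2*u2
    return x1, y

x1_exact, y_exact = simulate(z2=-1.0)         # z2 = p1: exact cancellation
x1_pert, y_pert = simulate(z2=-1.0 + 0.01)    # z2 = p1 + eps, eps = 0.01
```

With exact cancellation the output settles near 1 while the internal state x1 diverges (external stability, internal instability); with ε = 0.01 the output itself diverges, as predicted by (3.2.30) and (3.2.31).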
From (3.2.18), applying the inverse Laplace transform, one obtains,
x1(t) = e^(−p1 t) x1(0) + K1 U(−p1) e^(−p1 t) + Σ_i ri(t) e^(λi t) (3.2.32)
where now by ri(t) we have denoted the residues of the function K1U(s)/(s + p1) in
the poles λi of the Laplace transform U(s), Re(λi) < 0.
We can express x1(t) as a sum between an unstable component xI1(t),
generated by the unstable pole −p1 > 0, and a stable component xS1(t), generated
by the poles λi:
x1(t) = xI1(t) + xS1(t) (3.2.33)


xI1(t) = [x1(0) + K1 U(−p1)] ⋅ e^(−p1 t) (3.2.34)
lim_{t→∞} xI1(t) = ±∞ ⇒ lim_{t→∞} x1(t) = ±∞ (3.2.35)
xS1(t) = Σ_i ri(t) e^(λi t), lim_{t→∞} xS1(t) = 0 (3.2.36)

The second relation from (3.2.18) leads to,
X2(s) = [K2(z2 − p2)/(p2 − p1)] ⋅ [x1(0) + K1 U(s)] ⋅ 1/(s + p1) +
      + [K2(z2 − p2)/(p1 − p2)] ⋅ [x1(0) + K1 U(s)] ⋅ 1/(s + p2) + x2(0)/(s + p2) (3.2.37)
Similarly, we can express the unstable component xI2(t), generated by
the pole (−p1 > 0), and the stable component xS2(t), generated by the pole
−p2 < 0 and by the poles λi of the function U(s):
x2(t) = xI2(t) + xS2(t) (3.2.38)
xI2(t) = [K2(z2 − p2)/(p2 − p1)] ⋅ [x1(0) + K1 U(−p1)] ⋅ e^(−p1 t) = [K2(z2 − p2)/(p2 − p1)] ⋅ xI1(t) (3.2.39)
If −p1 > 0 and z2 = p1, for K2 > 0 one obtains
xI1(t) → ±∞ ⇒ xI2(t) = −K2 xI1(t) → ∓∞.
Because
xS2(t) = L^(−1){ [K2(z2 − p2)/(p1 − p2)] ⋅ [x1(0) + K1 U(s)] ⋅ 1/(s + p2) + x2(0)/(s + p2) }
and lim_{t→∞} xS2(t) = 0, it follows that
lim_{t→∞} x2(t) = −K2 [lim_{t→∞} xI1(t)] + 0 = −K2(±∞) (3.2.40)

The component x2(t) is unbounded because the input u2 = y1 = x1 of the
system S2 is unbounded too.
Evidently, such a situation cannot appear in physical systems. In a
physical system an unstable linear mathematical model is possible
only in a bounded domain of the input and output values. At the borders of
these domains saturation phenomena will be encountered, so the linear model is
no longer valid.
One physical explanation of this stabilisation performed in the system
S2 is the following: the unstable component of the first subsystem,
xI1(t) = yI1(t), is transmitted to the output of S2 through two ways, as we
can also see in Fig. 3.2.5.:
1. Through the path of the direct connection by the factor K2 ,
2. Through the path of the dynamic connection with the component x2 .


The system S2 stabilises the system S1 if S2 has such parameters that the
component xI1, which is transmitted through the two ways, is finally
reciprocally cancelled.
Indeed,
yI2(t) = K2 xI1(t) + xI2(t) = K2 xI1(t) − K2 (z2 − p2)/(p1 − p2) ⋅ xI1(t) = K2 [1 − (z2 − p2)/(p1 − p2)] ⋅ xI1(t).
If z2 = p1 then,
yI2(t) = K2 xI1(t) − K2 (p1 − p2)/(p1 − p2) ⋅ xI1(t) = K2 xI1(t) − K2 xI1(t) = 0, ∀t.

Practically this is not possible because xI1(t) → ±∞ and
yI2(t) = K2 xI1(t) − K2 xI1(t) → K2(±∞) − K2(±∞) = ±(∞ − ∞),
which means "the stabilised" output yI2(t) is the difference of two very large
values which, in the limit when t → ∞, becomes an indetermination of the type
∞ − ∞.
The theoretical solution of this indetermination will give us a finite
value for the limit,
lim_{t→∞} yI2(t) = 0.
This finite limit is our interpretation of the output of the system S2 which is,
we say, serially "stabilised".

3.2.7. Steady State Serial Connection of Two Systems.


A system Si is in a so called equilibrium state, denoted xie, if its state
variable xi is constant for any time moment starting with an initial time
moment.
For a continuous time system Si of the form (3.1.2),
Si: ẋi(t) = fi(xi(t), ui(t), t); yi(t) = gi(xi(t), ui(t), t); (3.2.41)
this means,
xi(t) = xie = const., ∀t ≥ t0 ⇔ ẋi(t) ≡ 0, ∀t ≥ t0 (3.2.42)
The equilibrium state is the real solution of the equation
fi(xie, ui(t), t) = 0 ⇒ xie = fi^(−1)(ui(t), t), xie = const. (3.2.43)
possible only for some function ui(t) = uie(t).
The output in the equilibrium state is,
yie(t) = gi(xie, uie(t), t) (3.2.44)
If the system is time invariant, that means
Si: ẋi(t) = fi(xi(t), ui(t)); yi(t) = gi(xi(t), ui(t)); (3.2.45)
an equilibrium state
xie = fi^(−1)(ui(t)), xie = const. (3.2.46)
is possible only if the input is a time constant function,


ui(t) = uie(t) = Uie ∈ Du, ∀t ≥ t0 ⇒ xie = fi^(−1)(Uie), xie = const. ∈ R^n (3.2.47)
and such a regime is called a steady state regime.
The output in a steady state regime is
Yie = gi(xie, Uie) = gi(fi^(−1)(Uie), Uie) = Qi(Uie), Uie ∈ Du. (3.2.48)
For the sake of convenience we shall denote the input and output variables
in the steady state regime by
Yie = Yi, Uie = Ui. (3.2.49)
The input-output relation in the steady state regime is also called the static
characteristic,
Yi = Qi(Ui), Ui ∈ Du (3.2.50)
We can say that a system is in a steady state regime, starting from an initial
time moment t0, if all the variables (state, input, output) are time constant
functions for ∀t ≥ t0.
A system Si is called a static system if there exists at least one static
characteristic (3.2.50). It is possible for a system to have several equilibrium
states and so several static characteristics.
For SISO LTI systems,
Si: ẋi(t) = Ai xi(t) + bi ui(t); yi(t) = ciT xi(t) + di ui(t) (3.2.51)
the static characteristic is
Yi = Qi(Ui) = [−ciT Ai^(−1) bi + di] ⋅ Ui, ∀Ui ∈ R, if det(Ai) ≠ 0 (3.2.52)
If det(Ai) = 0, which means the system has at least one eigenvalue equal
to zero (the system is of the integral type), then the static characteristic exists
only for Ui = 0.
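As a numerical illustration of (3.2.52), the sketch below (hypothetical system matrices, chosen only for illustration) computes the static characteristic gain −ciT Ai^(−1) bi + di for a stable second-order example and cross-checks it against the steady state output reached by simulation:

```python
# Hypothetical stable SISO LTI system (illustrative values only):
# x' = A x + b u, y = c^T x + d u; eigenvalues of A are -1 and -2.
A = [[0.0, 1.0], [-2.0, -3.0]]
b = [0.0, 1.0]
c = [1.0, 0.0]
d = 0.0
U = 4.0                                   # constant steady state input

# 2x2 inverse of A (det(A) != 0, so a static characteristic exists)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

# static characteristic gain -c^T A^{-1} b + d, as in (3.2.52)
Ab = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
      Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
gain = -(c[0] * Ab[0] + c[1] * Ab[1]) + d
Y = gain * U

# cross-check: integrate x' = Ax + bU until the state settles (Euler)
x = [0.0, 0.0]
dt = 1e-3
for _ in range(20000):                    # 20 time units
    dx0 = A[0][0] * x[0] + A[0][1] * x[1] + b[0] * U
    dx1 = A[1][0] * x[0] + A[1][1] * x[1] + b[1] * U
    x[0] += dt * dx0
    x[1] += dt * dx1
y_sim = c[0] * x[0] + c[1] * x[1] + d * U
```

For this example the gain is 0.5, so the steady state output for U = 4 is Y = 2, and the simulated output converges to the same value.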
For discrete time systems as (3.1.4), the discussion is a similar one.
The equilibrium state is
xk = xe = const., ∀k ≥ k0 ⇔ xk+1 ≡ xk, ∀k ≥ k0 (3.2.53)
given by the equation,
xie = fi(xie, uik, k) ⇒ xie = ψi^(−1)(uik, k), xie = const. (3.2.54)
Let us consider now two nonlinear subsystems S1, S2, described in
steady state regime by the static characteristics
S1: Y1 = Q1(U1) (3.2.55)
S2: Y2 = Q2(U2) (3.2.56)
They are serially connected through the connection relation,
Rc: U2 = Y1, U = U1, Y = Y2. (3.2.57)
The serial interconnected system has a static characteristic,
Y = Q(U) = Q2[Q1(U)] = (Q2 ∘ Q1)(U)
obtained by simple composition of the two functions. This composition can be
performed also graphically.
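The composition can also be sketched directly in code; the characteristics Q1, Q2 below are hypothetical (a linear element followed by a saturating one), chosen only to illustrate Y = Q2(Q1(U)):

```python
import math

# Hypothetical static characteristics (illustrative values only)
def Q1(U):
    return 2.0 * U + 1.0      # S1: linear element with offset

def Q2(U):
    return math.tanh(U)       # S2: saturating element

def Q(U):
    # serial connection in steady state regime: Y = Q2(Q1(U))
    return Q2(Q1(U))
```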

3.2.8. Serial Connection of Several Subsystems.


All aspects discussed regarding the connection of two subsystems can be
extended without difficulty to several subsystems, let us say q subsystems.
For q LTI systems described by transfer matrices Hi(s),
Si: Yi(s) = Hi(s)Ui(s), i = 1 : q (3.2.58)
the connection relations are
Rc: ui+1 = yi, i = 1 : (q − 1) (3.2.59)
and the connection conditions are
Cc: Γi ⊆ Ωi+1, i = 1 : (q − 1) (3.2.60)
The input-output equivalent transfer matrix is,
H(s) = Hq Hq−1 ... H1 (3.2.61)
because
Yq = Hq Uq, Yq−1 = Hq−1 Uq−1, but Uq = Yq−1 ⇒ Yq = Hq (Hq−1 Uq−1), and so on.
For SISO systems, the succession of transfer functions in the equivalent product
can be changed, which means,
H(s) = Hq Hq−1 ... H1 = H1 ... Hq−1 Hq (3.2.62)
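For SISO subsystems given by numerator/denominator polynomials, the equivalent series transfer function (3.2.61) can be computed by polynomial multiplication. A minimal sketch (the first-order factors are hypothetical, for illustration only):

```python
def polymul(a, b):
    # multiply two polynomials given as coefficient lists
    # (ascending powers of s)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def series(tfs):
    # equivalent transfer function of q SISO subsystems in series:
    # numerators and denominators simply multiply (relation (3.2.61))
    num, den = [1.0], [1.0]
    for n, d in tfs:
        num = polymul(num, n)
        den = polymul(den, d)
    return num, den

# H1(s) = 2/(s+1), H2(s) = (s+3)/(s+2)  -> H(s) = (2s+6)/(s^2+3s+2)
H1 = ([2.0], [1.0, 1.0])
H2 = ([3.0, 1.0], [2.0, 1.0])
num, den = series([H1, H2])
```

Swapping the order of the factors, as (3.2.62) states for the SISO case, gives the same result.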


3.3. Parallel Connection.

Si: Yi(s) = Hi(s)Ui(s), i = 1, ..., q;
Rc: Ui(s) = U(s), i = 1, ..., q; Y(s) = Σ_{i=1}^{q} Yi(s);
Cc: Ωi = Ω, Γi ⊆ Γ, ∀i = 1, ..., q.
Then,
Y(s) = Σ_{i=1}^{q} Yi(s) = Σ_{i=1}^{q} Hi(s)U(s) = [Σ_{i=1}^{q} Hi(s)] ⋅ U(s) = H(s)U(s).
We can draw a block diagram which illustrates this connection: each subsystem
Hi, i = 1, ..., q, is driven by the common input u, and the outputs y1, ..., yq
are summed to form y.
Figure no. 3.3.1.


The equivalent transfer matrix is,
H(s) = Σ_{i=1}^{q} Hi(s).
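For SISO subsystems the equivalent parallel transfer function H(s) = H1(s) + H2(s) is obtained over the common denominator. A minimal sketch (the first-order factors are hypothetical, for illustration only):

```python
def polymul(a, b):
    # multiply two polynomials given as coefficient lists
    # (ascending powers of s)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyadd(a, b):
    # add two polynomials given as coefficient lists
    out = [0.0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] += ai
    for i, bi in enumerate(b):
        out[i] += bi
    return out

def parallel(tf1, tf2):
    # H = H1 + H2 over the common denominator d1*d2
    (n1, d1), (n2, d2) = tf1, tf2
    num = polyadd(polymul(n1, d2), polymul(n2, d1))
    den = polymul(d1, d2)
    return num, den

# H1(s) = 1/(s+1), H2(s) = 1/(s+2)  ->  H(s) = (2s+3)/((s+1)(s+2))
num, den = parallel(([1.0], [1.0, 1.0]), ([1.0], [2.0, 1.0]))
```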


3.4. Feedback Connection.


We shall consider the feedback connection of two subsystems,
Si: Yi(s) = Hi(s)Ui(s), i = 1, 2
through the connection relations
Rc: E = U ± Y2; U1 = E; U2 = Y; Y = Y1; Y1(s) = H1(s)U1(s); Y2(s) = H2(s)U2(s).
We can draw this set of relations as the usual feedback block diagram: a summing
point forms E = U ± Y2, the forward path block H1 produces Y = Y1 = H1E, and
the feedback path block H2 produces Y2 = H2Y. The equivalent feedback
interconnected transfer matrix H(s) can be obtained by an algebraical procedure:
Y = H1(U ± H2Y) = ±H1H2Y + H1U ⇒ Y = (I ∓ H1H2)^(−1) H1 U,
H = (I ∓ H1H2)^(−1) H1,
where H1H2 = (r×p)(p×r) is an (r×r) matrix.
Another way:
E = U ± H2H1E ⇒ E = (I ∓ H2H1)^(−1) U ⇒ Y = H1 (I ∓ H2H1)^(−1) U,
H = H1 (I ∓ H2H1)^(−1),
where H2H1 = (p×r)(r×p) is a (p×p) matrix.
The equivalent transfer matrix H(s),
Y(s) = H(s)U(s),
of the feedback interconnected system has two forms which algebraically are
identical.
The first one,
H(s) = [I ∓ H1(s)H2(s)]^(−1) H1(s),
requires an (r × r) matrix inversion, but the second,
H(s) = H1(s)[I ∓ H2(s)H1(s)]^(−1),
requires a (p × p) matrix inversion.
In the SISO case we have a transfer function,
H(s) = H1(s)/(1 ∓ H1(s)H2(s)) = H1(s)/(1 ∓ H2(s)H1(s)) = Y(s)/U(s).
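In the SISO case the closed-loop transfer function can be evaluated pointwise from H1 and H2. A minimal sketch for negative feedback (H1 and H2 are hypothetical, for illustration only):

```python
# SISO closed loop: H(s) = H1(s) / (1 + H2(s)*H1(s)) for negative feedback.
def closed_loop(H1, H2):
    def H(s):
        return H1(s) / (1.0 + H2(s) * H1(s))
    return H

H1 = lambda s: 1.0 / (s + 1.0)   # forward path (illustrative)
H2 = lambda s: 2.0               # feedback path, a static gain
H = closed_loop(H1, H2)          # algebraically H(s) = 1/(s + 3)
```

Evaluating H at a few points confirms the algebraic reduction: H(0) = 1/3 and H(1) = 1/4, exactly the values of 1/(s + 3).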

4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.

4.1. Principle Diagrams and Block Diagrams.

Frequently the dynamical systems are represented, interpreted and
manipulated using different graphical methods and techniques.
There are three main types of graphical representation of systems:
1. - Principle diagrams ;
2. - Block diagrams;
3. - Signal flow graphs.

4.1.1. Principle Diagrams.


The principle diagram is a method of physical systems representation
using norms and symbols that belong to the physical system domain, expressed
so as to make it possible to understand how the physical system is running.
They are also called schematic diagrams. By such diagrams only physical
objects are described. There is no mathematical model in these representations,
but they contain all the specifications or descriptions of that physical object
(system) configuration. A principle diagram reveals all its components in a form
amenable to analysis, design and evaluation.
To be able to understand and interpret a principle diagram, knowledge and
competence in the field to which that object belongs are necessary.
The same symbol can have different meanings depending on the field of
application. For example, one and the same symbol may represent a resistor for an
electrical engineer but a spring for a mechanical engineer.
Starting from and using a principle diagram, an oriented system (or several
oriented systems) can be specified if the output variables (in short, outputs) are
selected. After that the mathematical model, that means the abstract system
attached to that oriented system, can be determined.

4.1.2. Block Diagrams.


A block diagram, in the system theory approach, is a pictorial (graphical)
representation of the mathematical relations between the variables which
characterise a system. Mainly a block diagram represents the cause and effect
relationship between the input and output of an oriented system. So a block
diagram expresses the abstract system related to an oriented system. Block
diagrams consist of unidirectional operational blocks.
If in this representation the system state, including the initial state, is
involved, then the block diagram is called a state diagram (SD).
The fundamental elements of a block diagram are:


1. Oriented Lines.
Oriented lines represent the variables involved in the mathematical
relationships. They are drawn as straight lines marked with arrows. In short, an
oriented line is called an "arrow".
The direction of the arrow points out the cause-effect direction and has
nothing to do with the direction of the flow of variables from the principle diagram.
2. Blocks.
A block, usually drawn as a rectangle, represents the mathematical
operator which relates the causes (input variables) and effects (output variables).
Inside the rectangle representing a block, a symbol of that operator is
marked. However, some specific operators are represented by other geometrical
figures (for example, the sum operator is usually represented by a circle).
The input variables of a block are drawn as arrows incoming to the
rectangle (geometrical figure) representing that block. The output variables are
drawn as arrows outgoing from the rectangle (geometrical figure) representing
that block. One oriented line (arrow), that means a variable, can be an output
variable for one block and an input variable for another block.
For example, an explicit relation between the variable u and the variable y,
where u and y can be time-functions, is of the form
y = F(u) (4.1.1)
The symbol F( ), denoting an operator (it can be a simple function), expresses an
oriented system where u is the cause and y is the effect, that means y is the
output variable and u is the only input variable. We can write (4.1.1) as
y = F(u) ⇔ y = F{u} ⇔ y = Fu
and the attached block diagram is as in Fig. 4.1.1.

Figure no. 4.1.1.


3. Take-off Points.
A take-off point, also called a pickoff point, graphically illustrates that the
same variable, represented by an arrow, is delivered (dispersed) to several
arrows. It is drawn as a dot. The equivalent representations of a take-off point,
in which a variable y branches into several arrows all carrying y, are shown in
Fig. 4.1.2.


Example 4.1.1. Block Diagram of an Algebraical Relation.


Let us consider an algebraical relation,
R(x1, x2, x3) = 0 (4.1.2)
where the variables x1, x2, x3 can be time-functions, xi = xi(t). Let us consider
that (4.1.2) is a linear relation with constant coefficients,
a1 x1 + a2 x2 + a3 x3 = 0 (4.1.3)
The relations (4.1.2), (4.1.3) represent a non oriented system and cannot
be represented by a block diagram.
Suppose we are interested in the variable x1, so it can be expressed, if
a1 ≠ 0, as
x1 = −(a2/a1) x2 − (a3/a1) x3 ⇔ x1 = f(x2, x3) ⇔
x1 = F{[x2 x3]} ⇔ x1 = F{u}, u = [x2 x3]^T
which constitutes an oriented system where x1 is the output and x2, x3 are the
inputs.
The block diagram representing this oriented system is as in Fig. 4.1.3.,
where the symbols of the elementary operators involved are understood.
Figure no. 4.1.3.


Any of the variables x2 or x3 from (4.1.2), (4.1.3) could be chosen as an
output variable. Now let relation (4.1.3) be of the form,
x1 = K(b1 x1 + b2 x2 + b3 x3) (4.1.5)
where a1 = 1 − Kb1, a2 = −Kb2, a3 = −Kb3.
Suppose we are interested in x1, that means x1 is the output variable and
x2, x3 are input variables.
Relation (4.1.5) does not represent an unidirectional operator because x1
depends on itself. From (4.1.5) we can delimit several unidirectional operators
(blocks) by introducing some new variables, as for example
w1 = b2 x2 + b3 x3
w2 = b1 x1
w3 = w1 + w2 (4.1.6)
x1 = K w3.
Each relation from (4.1.6) represents an unidirectional operator which can
be represented by a block. All the relations from (4.1.6), which represent (4.1.5),
can be represented as several interconnected blocks as in Fig. 4.1.4.


Figure no. 4.1.4.


This is a feedback structure containing a loop. If K and b1 are constant
coefficients, this loop is an algebraical loop, which causes many difficulties in
numerical implementations.
Of course, by manipulating the block diagram from Fig. 4.1.4., or by eliminating
the intermediate variables w1, w2, w3 from (4.1.6), or by withdrawing x1 from
(4.1.5), we get the oriented system, having x1 as output variable and x2, x3 as
input variables, characterised by the unidirectional relation of the form
x1 = [Kb2/(1 − Kb1)] x2 + [Kb3/(1 − Kb1)] x3 = (−a2/a1) x2 + (−a3/a1) x3 (4.1.7)
Now the relation (4.1.7) can be depicted as an unidirectional block as in
Fig. 4.1.3.
If 1 − Kb1 = 0 ⇔ a1 = 0,
then relations (4.1.3) and (4.1.5) are degenerate, which means they do not contain
the variable x1.
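The algebraic loop of (4.1.5) can also be resolved numerically by fixed-point iteration, which is essentially what a naive block-diagram solver does; the closed form (4.1.7) gives the same value. A minimal sketch (the coefficients are hypothetical, chosen with |Kb1| < 1 so the iteration converges):

```python
# Algebraic loop x1 = K*(b1*x1 + b2*x2 + b3*x3): fixed-point iteration
# versus the closed form (4.1.7). Illustrative coefficient values.
K, b1, b2, b3 = 0.8, 0.5, 1.0, 2.0
x2, x3 = 3.0, 1.0

x1 = 0.0
for _ in range(200):          # converges because |K*b1| = 0.4 < 1
    x1 = K * (b1 * x1 + b2 * x2 + b3 * x3)

# closed form: x1 = K*b2/(1 - K*b1) * x2 + K*b3/(1 - K*b1) * x3
x1_closed = K * b2 / (1 - K * b1) * x2 + K * b3 / (1 - K * b1) * x3
```

When |Kb1| ≥ 1 the iteration diverges even though the closed form still exists (for Kb1 ≠ 1), which is one of the numerical difficulties algebraic loops cause.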

Example 4.1.2. Variable's Directions in Principle Diagrams and


Block Diagrams.
Let us consider a physical object, as described by the principle (schematic)
diagram from Fig. 4.1.5.
(Fig. 4.1.5. shows: a) the water tank as a physical object, with incoming flow
q1=u1 through pipe P1, outgoing flow q2=u2 through pipe P2 and water level L=y;
b) the water tank as an oriented system; c) the corresponding oriented "block"
with inputs q1=u1, q2=u2 and output L=y.)
Figure no. 4.1.5.
It represents a cylindrical water tank supplied, through the pipe P1, with
water at the flow rate q1=u1, and from which water drains, through the pipe
P2, at the flow rate q2=u2.
As we can see, from the physical point of view, q1=u1 is an incoming
flow and q2=u2 is an outgoing flow.

Suppose we are interested in the water level in the tank, denoted by L=y.
In such a way the variable L=y, an attribute (a characteristic) of the physical
object (water tank), is the effect we are interested in.
All the causes which affect this selected output are represented by the two
flow rates q1=u1 and q2=u2. So, in the oriented system, based on causality
principles, both u1 and u2 are input variables.
The corresponding block diagram, in the systems theory meaning, will have
L=y as output and both u1 and u2 as inputs. It is represented by the block
diagram from Fig. 4.1.5.c. or by one of the block diagrams from Fig. 4.1.6.
The mathematical relationship between y and u1, u2 in the time-domain,
y(t) = K ∫_{t0}^{t} [u1(τ) − u2(τ)] dτ + y(t0), (4.1.8)
is represented in Fig. 4.1.6.a.
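Relation (4.1.8) can be approximated by an Euler sum; the sketch below (hypothetical gain, initial level and flow rates, for illustration only) integrates the level over 10 time units:

```python
# Euler approximation of the tank level relation (4.1.8):
# y(t) = K * integral_{t0}^{t} (u1 - u2) dtau + y(t0)
K = 0.5               # illustrative tank gain (1/area)
dt = 0.01
y = 1.0               # initial level y(t0)
u1, u2 = 2.0, 1.0     # constant inflow and outflow rates
for _ in range(1000): # integrate over 10 time units
    y += dt * K * (u1 - u2)
```

With constant flow rates the integrand is constant, so the sum reproduces the exact level y(10) = y(t0) + K·(u1 − u2)·10.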
To determine the mathematical model in the complex domain we define the
variables as variations with respect to a steady state, defined by:
Yss = y(0), U1ss = u1(0) = 0, U2ss = u2(0) = 0.
Denoting,
Y(s) = L{y(t) − Yss}, U1(s) = L{u1(t) − U1ss}, U2(s) = L{u2(t) − U2ss},
we have
Y(s) = (K/s) ⋅ [U1(s) − U2(s)],
the I-O relation by components, represented in Fig. 4.1.6.b.
In matrix form the I-O relation is,
Y(s) = H(s) ⋅ [U1(s); U2(s)]^T = H(s) ⋅ U(s), H(s) = [K/s  −K/s] (4.1.9)
which allows us to represent the system as a whole as in Fig. 4.1.6.c.

Figure no. 4.1.6.


Example 4.1.3. Block Diagram of an Integrator.


The integrator is an operator which performs, in the time domain,
x(t) = x(t0) + ∫_{t0}^{t} u(τ) dτ, ẋ(t) = u(t), (4.1.10)
whose block diagram is as in Fig. 4.1.7.a. Because the first equation from
(4.1.10) can be written using the Dirac impulse as
x(t) = ∫_{t0}^{t} x(t0)δ(τ − t0) dτ + ∫_{t0}^{t} u(τ) dτ = ∫_{t0}^{t} [u(τ) + x(t0)δ(τ − t0)] dτ,
we can draw the integrator, in the time domain, as in Fig. 4.1.7.b.


Figure no. 4.1.7. Figure no. 4.1.8.
We can represent the integrator behaviour in the complex domain taking into
consideration that
L{ẋ(t)} = sX(s) − x(0). (4.1.11)
We have to keep in mind that, when using the Laplace transformation, the time
is measured from the initial time moment, i.e. t0 is taken as zero (t := ta − t0).
Denoting X(s) = L{x(t)}, we obtain,
X(s) = (1/s)[L{ẋ(t)} + x(0)] (4.1.12)
Using the integrator graphical representation (by block diagrams or signal
flow-graphs) together with summing and scalor operators we can represent the
so called state diagram (SD) of a system.

4.1.3. State Diagrams Represented by Block Diagrams.


State diagram, in short SD, means the graphical representation of the state
equations of a system. They can be drawn using both the block diagram and the
signal flow graph methods.
An SD contains only three types of elements (operators):
Nondynamical (scalor type) elements, represented by ordinary scalar or
vectorial functions. They can be matrix gains or scalar gains.
Summing operators.
Integrators, considering or not the initial state.
State diagrams can be used also for time-variant systems and for nonlinear
systems, but in these cases they are useless for algebraical computation. State
diagrams can be represented both in the time domain and in the complex domain
(s or z). They can be applied to both continuous-time and discrete-time systems.

To draw an SD, first the integrators involved in the state equations have to be
represented, and then the diagram is filled in with the other two types of components.
Let us consider a first order continuous-time system described by the state
equations (4.1.13),
ẋ(t) = a(t) ⋅ x(t) + b(t) ⋅ u(t)
y(t) = c(t) ⋅ x(t) + d(t) ⋅ u(t) (4.1.13)
The corresponding SD in the time domain is depicted in Fig. 4.1.9.
Figure no. 4.1.9.


For linear time-invariant systems all the coefficients a, b, c, d have
constant values, so we can write the state equations (4.1.13) as
ẋ(t) = a ⋅ x(t) + b ⋅ u(t)
y(t) = c ⋅ x(t) + d ⋅ u(t) (4.1.14)
The time domain SD of this system is identical with that from Fig. 4.1.9.,
except that the coefficients are constants.
The state diagram in the complex domain is identical with the time domain SD,
except that the integrator is replaced by its complex equivalent from Fig. 4.1.8.
and the variables are denoted by their complex equivalents, as in Fig. 4.1.10.
Sometimes, having the integrator represented in the complex domain (because in
the complex domain we can perform algebraic operations), we denote the variables
in the time domain, or in both the time and complex domains, drawing advantages
when necessary.
Figure no. 4.1.10.


In these state diagrams we can still see explicitly the state derivative, in
addition to the initial state, or its complex image. If we are not interested in
the state derivative we can transform the internal loop into a simple block.
To do this, from the above SD we obtain:


L{ẋ(t)} = sX(s) − x(0) = aX(s) + bU(s)
X(s) = (1/s)[aX(s) + x(0) + bU(s)]
X(s) = [1/(s − a)] ⋅ [x(0) + bU(s)] (4.1.15)
Now we replace the internal loop by the transfer function 1/(s − a) with a
summing element at its input, as in Fig. 4.1.11.
Figure no. 4.1.11.


On this form of the SD we can see very easily the dependence of Y(s) and
X(s) upon U(s) and x(0).
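The reduction can be verified numerically: the Euler-simulated state of (4.1.14) should approach the analytic step response implied by X(s) = (x(0) + bU(s))/(s − a). A minimal sketch (the coefficients are hypothetical, for illustration only):

```python
import math

# First-order LTI state diagram (4.1.14): x' = a*x + b*u, y = c*x + d*u.
# From (4.1.15), X(s) = (x(0) + b*U(s))/(s - a); for a unit step input,
# x(t) = x(0)*e^{a t} + (b/(-a))*(1 - e^{a t}).
a, b, c, d = -2.0, 4.0, 1.0, 0.5   # illustrative values, a < 0 (stable)
x0, u = 1.0, 1.0

dt, t = 1e-4, 0.0
x = x0
for _ in range(30000):             # Euler integration up to t = 3
    x += dt * (a * x + b * u)
    t += dt

x_exact = x0 * math.exp(a * t) + (b / (-a)) * (1.0 - math.exp(a * t))
y = c * x + d * u
```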

The state diagram of a system can be drawn also for non scalor systems,
where at least one coefficient is a matrix, of the general form as in (4.1.16),
ẋ = Ax + Bu
y = Cx + Du, (4.1.16)
where the matrices have the dimensions: A - (n×n); B - (n×r); C - (p×n); D - (p×r).
Because
L{ẋ(t)} = sX(s) − x(0) = [sX1(s) sX2(s) ... sXn(s)]^T − [x1(0) x2(0) ... xn(0)]^T
we have
X(s) = [(1/s) In] ⋅ [L{ẋ(t)} + x(0)], (4.1.17)
which is represented as in Fig. 4.1.12.
Sometimes, to point out that the oriented lines carry vectors, they are drawn
as double lines, as in Fig. 4.1.12.
Figure no. 4.1.12.


4.2. System Reduction Using Block Diagrams.

4.2.1. System Reduction Problem.


Sometimes complex systems appear to be expressed as a connection of
subsystems, in which connection some intermediate variables are pointed out.
The reduction of such a complex system to an equivalent structure means
to determine the expression of the input-output mathematical model of that
complex system as a function of the subsystems' mathematical models. This can
be done by eliminating all the intermediate variables.
In the case of SISO-LTI systems the equivalent transfer function has to be
determined (for MIMO-LTI, the equivalent transfer matrix).
In reduction processes, the goal is not to solve completely the system of
equations which characterises the connection (that means to determine all the
unknown variables: outputs and intermediate variables) but to eliminate only the
intermediate variables and to express one component (or all the components) of
the output vector as a function of the input vector.
Three methods of reduction are usually utilised:
1. Analytical reduction.
2. Reduction through block diagram transformations.
3. Reduction by using the signal flow-graphs method.

4.2.2. Analytical Reduction.


Mainly this means to solve analytically, by using different techniques and
methods, the set (system) of equations describing the dynamical system. It has
the advantage of being applicable to a broader class of systems, not only LTI ones.
Because the coefficients of the equations are expressions of some literal
parameters, particularly transfer functions in the complex variable s or z, the
determination of the solution becomes a very cumbersome one, mainly for systems
with a larger number of equations. In such cases, the numerical methods to be
implemented on computers cannot be applied.


4.2.3. System Reduction Through Block Diagrams Transformations.


If a system is represented by a complex block diagram it can be reduced to
a simple equivalent form by transforming, through manipulations, the block
diagram according to some rules.
If the complex system is a multi-input, multi-output one (MIMO LTI), the
equivalent structure will be expressed by an equivalent transfer matrix H(s).
Through this method it is possible to determine the equivalent transfer
function between one input Uk(s) and one output Yi(s), which represents the
component Hik(s) of the transfer matrix H(s), considering the relations,
Y(s) = H(s)U(s) (4.2.1)
Hik(s) = H_{Uk}^{Yi}(s) = Yi(s)/Uk(s), with Uj(s) ≡ 0, ∀j ≠ k (4.2.2)
H(s) = [H11 ... H1p; : Hik :; Hr1 ... Hrp], Yi(s) = Σ_{k=1}^{p} Hik Uk. (4.2.3)
To determine such a component Hik(s) we have to ignore all the other
outputs except Yi and to consider zero all the inputs except the input Uk.

4.2.3.1. Elementary Transformations on Block Diagrams.


For block diagram manipulation some graphical transformations can be
used. They are based on the identity of the algebraical input-output relations.
We consider all the relations in the complex domain s or z, but for the sake
of convenience these variables are omitted in the following.

1. Combining blocks in cascade.


Original relation: Y = H2 ⋅ (H1 U)
Equivalent relation: Y = (H2 H1) ⋅ U

2. Combining blocks in parallel.


Original relation: Y = H1 ⋅ U ± H2 ⋅ U
Equivalent relation: Y = (H1 ± H2) ⋅ U


3. Eliminating a forward loop.


Original relation: Y = H1 ⋅ U ± H2 ⋅ U
Equivalent relation: Y = (H1 H2^(−1) ± I) ⋅ H2 ⋅ U

4. Eliminating a feedback loop.


Original relation: Y = H1 ⋅ (U ± H2 ⋅ Y)
Equivalent relations: a) Y = [(I ∓ H1H2)^(−1) H1] ⋅ U;  b) Y = [H1(I ∓ H2H1)^(−1)] ⋅ U
Scalar case: Y = [H1/(1 ∓ H1H2)] ⋅ U = [H1/(1 ∓ H2H1)] ⋅ U

5. Removing a block from a feedback loop.


Original relation: Y = H1 ⋅ (U ± H2 ⋅ Y)
Equivalent relations: a) Y = [(I ∓ H1H2)^(−1) H1H2] ⋅ [H2^(−1) U];
b) Y = [H2^(−1)] ⋅ [H2H1(I ∓ H2H1)^(−1)] ⋅ U
Scalar case: H = H1/(1 ∓ H1H2) = [H1H2/(1 ∓ H1H2)] ⋅ (1/H2) = (1/H2) ⋅ [H2H1/(1 ∓ H2H1)]

6. Moving a takeoff point ahead of a block.


Original relation: Y = H ⋅ U (take-off point after the block, branching Y)
Equivalent relation: Y = H ⋅ U (take-off point ahead of the block, branching U;
each branch passes through its own block H to reproduce Y)


7. Moving a takeoff point beyond a block.


Original relations: Y = H ⋅ U, U = U (take-off point ahead of the block, branching U)
Equivalent relations: Y = H ⋅ U, U = [H^(−1) H] ⋅ U (take-off point beyond the
block; the branch recovers U through an added block H^(−1))

8. Moving a summing point ahead of a block.


Original relation: Y = H ⋅ U1 ± U2
Equivalent relation: Y = H ⋅ [U1 ± H^(−1) U2]

9. Moving a summing point beyond a block.


Original relation: Y = H ⋅ [U1 ± U2]
Equivalent relation: Y = [H U1] ± [H U2]

10. Rearranging summing points.

Original relation: Y = ±U1 + [U2 ± U3].    Equivalent relation: Y = U2 + [±U1 ± U3].
[Diagram: the order in which the inputs enter the chain of summing points is interchanged.]

Further equivalent rearrangements:
Y(s) = U1(s) + U2(s) ± U3(s) = (U1(s) + U2(s)) ± U3(s);
Y(s) = (U1(s) ± U2(s)) ± U3(s) = (U1(s) ± U3(s)) ± U2(s).
[Diagram: cascaded summing points with the inputs U1(s), U2(s), U3(s) applied in different orders.]


Example 4.2.1. Representations of a Multi Inputs Summing Element.


As we mentioned in the case of multivariable systems, to determine the component Hik(s) of a transfer matrix H(s) we must consider all the components Uj(s) of the vector U(s) to be zero (zero-inputs), except the component Uk(s), and ignore all the output vector components except the component Yi(s).
When this rule is applied and a summing operator has several inputs among which zero-inputs exist, the summing operator is replaced by an element (block) expressing the dependence of its output upon the inputs that are not considered to be zero.
Let us consider a two-input/one-output system as depicted in Fig. 4.2.1.a.
We want to determine the two components H11, H12 of the transfer matrix H(s) = [H11(s) H12(s)] using the relation (4.2.2). Of course this is a very simple example, but the goal is only to illustrate how the summing element is transformed.
[Figure: a) two-input/one-output system with a summing element; b) the representation used when U2 = 0, the summing element being replaced by a block of gain 1; c), d) the representations used when U1 = 0, the summing element acting as a block of gain −1.]
Figure no. 4.2.1.
When we determine H11 , U2 must be considered zero and the summing
element will be represented as in Fig. 4.2.1.b. by a block with gain 1.
When we determine H12 , U1 must be considered zero and the summing
element will be represented as in Fig. 4.2.1.c. or by a block with gain -1 as in
Fig. 4.2.1.d.

4.2.3.2. Transformations of a Block Diagram Area by Analytical Equivalence.

If non-standard connections appear in some parts of a block diagram, for which we cannot directly perform a reduction, then we can mark the area containing the undesired connections by a closed contour, specifying and denoting, by additional letters, all the oriented lines incoming to and outgoing from this contour.
The incoming oriented lines will be considered as input variables and the
outgoing oriented lines will be considered as output variables of the contour,
interpreted as a separate oriented system.
On this oriented system (our contour) we can perform an analytical
analysis and determine the expression of each contour output variable as a
function of the contour input variables. Then these relationships are graphically
represented as a block diagram which will replace the marked area from the
original block diagram.

96
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.2. System Reduction by Using Block Diagrams.

For example, suppose that a contour delimits an area with two contour input variables, denoted a1, a2, and two contour output variables, denoted b1, b2, having the Laplace transforms A1(s), A2(s), B1(s), B2(s) respectively.
This oriented contour is represented in Fig. 4.2.2.a. Suppose that, using analytical methods, we can express B1, B2 upon A1, A2 as in (4.2.4) if the dependence is a linear one.
[Figure: a) closed contour marked "Undesired connections", with incoming lines A1(s), A2(s) and outgoing lines B1(s), B2(s); b) the equivalent block diagram built from the blocks G11(s), G12(s), G21(s), G22(s) according to (4.2.4).]
Figure no. 4.2.2.
B 1 (s) = G 11 (s)A 1 (s) + G 12 (s)A 2 (s) (4.2.4)
B 2 (s) = G 21 (s)A 1 (s) + G 22 (s)A 2 (s)
These relations are now represented by a block diagram as in Fig. 4.2.2.b.
which will replace the undesired connection.

4.2.3.3. Algorithm for the Reduction of Complicated Block Diagrams.


In practice block diagrams are often quite complicated, having multiple
inputs and multiple outputs, including several feedback or feedforward loops.
In the linear case the reduced system is described by a transfer matrix H(s)
whose components are separately determined.
For example, to determine the component
H_ik(s) = Y_i(s)/U_k(s), with U_j(s) ≡ 0 ∀ j ≠ k,
which relates the input Uk to the output Yi, the following steps may be used:
1. Ignore all the outputs except Yi .
2. Consider zero all the inputs except Uk. Sometimes we can transform the
related summing points as in Fig. 4.2.1.
3. Combine and replace by its equivalent all blocks connected in cascade
(serial connections) using Transformation 1.
4. Combine and replace by its equivalent all blocks connected in parallel
using Transformation 2.
5. Eliminate all the feedback loops using Transformation 4.
6. Apply the graphical transformations 3, 5 : 10 to reveal the three standard connections mentioned above.
7. Solve complicated connections using analytical equivalence as in 4.2.3.2.
8. Repeat steps 3 to 7 and then select another component of the transfer
matrix.

Example 4.2.2. Reduction of a Multivariable System.


Let us consider a multivariable system described by a block diagram (BD) as in Fig. 4.2.3. In this example we shall follow in detail, for training reasons, all the steps involved in system reduction based on block diagram transformations. However, in practice and with some experience, many of the steps and drawings below can be avoided.
For easier manipulation it is recommended to attach additional variables to each takeoff point and to the output of each summing operator. In our case they are a0 : a8. The summing operators are denoted S1 : S4.

[Figure: block diagram with the blocks H1 : H7, the summing operators S1 : S4, the inputs U1, U2, the outputs Y1, Y2 and the intermediate variables a0 : a8.]

Figure no. 4.2.3.


It can be observed that the system has two inputs and two outputs,
U(s) = [U1(s)  U2(s)]^T;    Y(s) = [Y1(s)  Y2(s)]^T;    (4.2.5)
with the global block diagram (BD) as in Fig. 4.2.4., described by the transfer matrix H(s) containing 4 components:
H(s) = [H11(s)  H12(s); H21(s)  H22(s)],
H11(s) = Y1(s)/U1(s)|U2=0;    H12(s) = Y1(s)/U2(s)|U1=0;    (4.2.6)
H21(s) = Y2(s)/U1(s)|U2=0;    H22(s) = Y2(s)/U2(s)|U1=0.
[Figure: single block H(s) with the inputs U1(s), U2(s) and the outputs Y1(s), Y2(s).]

Figure no. 4.2.4.


This LTI system is in fact represented by a set of 11 simultaneous algebraic equations as in (4.2.7):
a0 = H1 a1;  a1 = U1 − a7;  a2 = a0 − a5;  a3 = U2 + a6;  a4 = H3 a3;
a5 = H5 a4;  a6 = H2 a2;  a7 = H7 a8;  a8 = H4 a3 + H6 a4;
Y1 = a8;  Y2 = a2,    (4.2.7)


with 13 variables: a0 : a8, Y1, Y2, U1, U2, where the two input variables U1, U2 are independent (free variables of the algebraic system). The coefficients H1 : H7 represent expressions of some complex functions; let us refer to them as transfer functions too. To reduce the system means to eliminate all the intermediate variables a0 : a8, expressing Y1, Y2 upon U1 and U2. This is a rather difficult task.

a). Determination of the component H11(s) = Y1(s)/U1(s)|U2=0.
To do this we shall ignore Y2 and consider U2 = 0, as in Fig. 4.2.5, where Y2 is no longer drawn and, because U2 = 0, now a3 = a6 and a block with gain 1 appears instead of the summing point S3. To put into evidence the three standard connections we intend to move, in Fig. 4.2.5., the takeoff point a4 ahead of the block H3 (as pointed out by the dashed arrow) and to arrange this new takeoff point using the equivalences from Fig. 4.1.2.
[Figure: block diagram for U2 = 0, with Y2 no longer drawn and S3 replaced by a block of gain 1; the dashed arrow shows the takeoff point a4 to be moved ahead of the block H3.]

Figure no. 4.2.5.


After these operations there clearly appear, marked by dashed rectangles, two structures: one standard feedback connection (with the equivalent transfer function Ha) and one parallel connection (with the equivalent transfer function Hb), as depicted in Fig. 4.2.6.
[Figure: block diagram in which the dashed rectangles mark the feedback connection Ha (forward path H2, feedback path H5 H3) and the parallel connection Hb (H4 in parallel with H3 H6).]

Figure no. 4.2.6.


This takeoff point movement reflects the following equivalence relationships:
a5 = H5 a4;  a4 = H3 a3  ⇒  a5 = H5 H3 a3.    (4.2.8)
The blocks marked Ha, Hb are reduced to the equivalent transfer functions,


Ha = H2/(1 + H2 H5 H3);    Hb = H4 + H3 H6,    (4.2.9)
so the block diagram from Fig. 4.2.6. becomes, as in Fig. 4.2.7., a simple feedback loop described by the transfer function
H11(s) = H1 Ha Hb/(1 + H1 Ha Hb H7).    (4.2.10)
[Figure: simple feedback loop with forward path H1 Ha Hb and feedback path H7, equivalent to the single block H11 for U2 = 0.]

Figure no. 4.2.7.


Substituting (4.2.9) into (4.2.10), we finally obtain the expression of H11(s):
H11(s) = (H1 H2 H3 H6 + H1 H2 H4)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7).    (4.2.11)
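The algebra in (4.2.9)-(4.2.11) can be cross-checked numerically: substituting Ha and Hb into the simple loop (4.2.10) must reproduce the expanded form (4.2.11). A minimal sketch with arbitrary sample gains (assumed values, not from the text):

```python
# Cross-check of (4.2.9)-(4.2.11) with arbitrary sample gains H1..H7.
H1, H2, H3, H4, H5, H6, H7 = 2.0, 0.5, 1.5, 0.7, 0.3, 1.2, 0.4

Ha = H2 / (1 + H2 * H5 * H3)        # inner feedback loop, (4.2.9)
Hb = H4 + H3 * H6                   # parallel connection, (4.2.9)

H11_loop = H1 * Ha * Hb / (1 + H1 * Ha * Hb * H7)              # (4.2.10)
H11_expanded = (H1 * H2 * H3 * H6 + H1 * H2 * H4) / (
    1 + H2 * H3 * H5 + H1 * H2 * H3 * H6 * H7 + H1 * H2 * H4 * H7)  # (4.2.11)

assert abs(H11_loop - H11_expanded) < 1e-9
```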

b). Determination of the component H12(s) = Y1(s)/U2(s)|U1=0.
To evaluate H12(s), in the initial BD from Fig. 4.2.3. we shall consider U1 = 0 and ignore Y2, resulting in a BD as in Fig. 4.2.8.
[Figure: block diagram for U1 = 0, with Y2 ignored; the dashed arrow shows the takeoff point a4 to be moved ahead of the block H3.]

Figure no. 4.2.8.


As discussed before, we move, in Fig. 4.2.8., the takeoff point a4 ahead of the block H3. Because now U1 = 0, a1 = −a7 and a2 = a0 − a5 = H1 a1 − a5 = −H1 a7 − a5, so we transfer the sign "−" of S1 to the input of S2 as in Fig. 4.2.9.
[Figure: rearranged block diagram in which the sign "−" of S1 has been transferred to the input of S2, with the cascade H5 H3 on the inner feedback path and the parallel connection Ha marked.]
Figure no. 4.2.9.

It can be observed that two simple cascade connections now appear, together with the parallel connection denoted Ha, which is Ha(s) = H4 + H3 H6.
[Figure: block diagram with the cascades H5 H3 and H1 H7 and the equivalent block Ha; the takeoff point a3 is to be moved beyond the block Ha.]

Figure no. 4.2.10.


After the takeoff point a3 has been moved beyond the block Ha, the BD looks as in Fig. 4.2.11.
[Figure: block diagram in which the branches H1 H7 and (H5 H3)/Ha form a parallel connection, denoted Hc.]

Figure no. 4.2.11.


A new parallel connection, denoted Hc, has appeared, equal to
Hc(s) = H1 H7 + H5 H3/Ha,    (4.2.12)
which determines the BD from Fig. 4.2.12.
[Figure: feedback loop with forward path Ha and feedback path H2 Hc, equivalent to the single block H12 for U1 = 0.]

Figure no. 4.2.12.


The equivalent relationship from the above BD is the component H12(s):
H12(s) = Ha/(1 + Ha H2 Hc) = (H4 + H3 H6)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7).    (4.2.13)
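As for H11, the substitution of (4.2.12) into the loop formula can be cross-checked numerically; a minimal sketch with arbitrary sample gains:

```python
# Cross-check of (4.2.12)-(4.2.13) with arbitrary sample gains H1..H7.
H1, H2, H3, H4, H5, H6, H7 = 2.0, 0.5, 1.5, 0.7, 0.3, 1.2, 0.4

Ha = H4 + H3 * H6                   # parallel connection
Hc = H1 * H7 + H5 * H3 / Ha         # (4.2.12)

H12 = Ha / (1 + Ha * H2 * Hc)       # feedback loop of Fig. 4.2.12.
L = 1 + H2 * H3 * H5 + H1 * H2 * H3 * H6 * H7 + H1 * H2 * H4 * H7

assert abs(H12 - Ha / L) < 1e-9     # expanded form (4.2.13)
```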


c). Determination of the component H21(s) = Y2(s)/U1(s)|U2=0.
To determine the transfer function H21(s) we have to consider U2 = 0 and to ignore the output Y1.
Under such conditions, the initial block diagram from Fig. 4.2.3. looks as in Fig. 4.2.13.
[Figure: block diagram for U2 = 0, with Y1 ignored and S3 replaced by a block of gain 1; the takeoff point a4 is to be moved ahead of the block H3.]
Figure no. 4.2.13.
Because U2 = 0, the relation a3 = U2 + a6 from Fig. 4.2.3. becomes a3 = a6, so the summing operator S3 is now represented by a block with gain 1, and later on it will be ignored. Moving the takeoff point a4 ahead of the block H3 we get the BD from Fig. 4.2.14.

[Figure: block diagram after moving the takeoff point a4 ahead of H3, with the parallel connection Ha marked.]

Figure no. 4.2.14.


The new parallel connection Ha is replaced by Ha(s) = H4 + H3 H6, and the BD from Fig. 4.2.15. is obtained.
[Figure: block diagram with the equivalent blocks H5 H3 and H7 Ha on the feedback paths.]

Figure no. 4.2.15.



Redrawing the above BD we get the shape from Figure no. 4.2.15.
[Figure: redrawn block diagram with the feedback branches H5 H3 and H7 Ha.]

Figure no. 4.2.15.


Moving the takeoff point a6 ahead of the block H2, we get the BD from Fig. 4.2.16., where a new feedback connection, denoted Hd, and a cascade connection, denoted He, appear.

[Figure: block diagram in which the loop formed by the branch H5 H3 and the block H2 constitutes the feedback connection Hd, and the branches H2 and H7 Ha form the cascade connection He.]

Figure no. 4.2.16.


The two new connections are expressed by
He(s) = H2 H7 Ha;    Hd(s) = 1/(1 + H2 H3 H5),    (4.2.14)
so the final structure is a simple feedback connection
[Figure: feedback loop with forward path H1 Hd and feedback path He, equivalent to the single block H21 for U2 = 0.]

Figure no. 4.2.17.


from where we obtain H21(s) as
H21(s) = H1 Hd/(1 + H1 Hd He) = H1/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7).    (4.2.15)
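The substitution of (4.2.14) into the loop formula can be verified in the same way; a minimal sketch with the same arbitrary sample gains:

```python
# Cross-check of (4.2.14)-(4.2.15) with arbitrary sample gains H1..H7.
H1, H2, H3, H4, H5, H6, H7 = 2.0, 0.5, 1.5, 0.7, 0.3, 1.2, 0.4

Ha = H4 + H3 * H6
Hd = 1 / (1 + H2 * H3 * H5)         # feedback connection, (4.2.14)
He = H2 * H7 * Ha                   # cascade connection, (4.2.14)

H21 = H1 * Hd / (1 + H1 * Hd * He)  # feedback loop of Fig. 4.2.17.
L = 1 + H2 * H3 * H5 + H1 * H2 * H3 * H6 * H7 + H1 * H2 * H4 * H7

assert abs(H21 - H1 / L) < 1e-9     # expanded form (4.2.15)
```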


d). Determination of the component H22(s) = Y2(s)/U2(s)|U1=0.
The component H22(s) is evaluated considering U1 = 0 and ignoring the output Y1, so the initial block diagram from Fig. 4.2.3. looks as in Fig. 4.2.18.
[Figure: block diagram for U1 = 0, with Y1 ignored.]

Figure no. 4.2.18.


To reduce this block diagram, the takeoff point a4 is moved ahead of the block H3, resulting in the structure from Fig. 4.2.19.

[Figure: block diagram after moving the takeoff point a4 ahead of H3, with the cascade H5 H3 and the parallel connection Ha marked.]

Figure no. 4.2.19.


Now we can take the equivalents of the cascade connection between H3 and H5 and of the parallel connection Ha(s) = H4 + H3 H6. After these reductions, a new cascade connection between Ha and H7 can be observed, as in Fig. 4.2.20.
[Figure: block diagram with the equivalent blocks H5 H3 and H7 Ha.]

Figure no. 4.2.20.


Redrawing the above BD so that the input is on the left side and the output on the right side, we obtain Fig. 4.2.21.

[Figure: redrawn block diagram in which the branches −H5 H3 and −H1 H7 Ha form the parallel connection Hf, followed by the loop closed through H2.]

Figure no. 4.2.21.


Inside this BD a parallel connection is observed, which we shall denote Hf, having the expression
Hf = −(H3 H5 + H1 H7 Ha).    (4.2.16)
The last component of the transfer matrix can now be obtained immediately, by solving a standard feedback loop:
H22(s) = Hf/(1 − H2 Hf) = (−H3 H5 − H1 H4 H7 − H1 H3 H6 H7)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7).    (4.2.17)

Concluding, the four components of the transfer matrix are:

H11(s) = (H1 H2 H3 H6 + H1 H2 H4)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)    (4.2.18)
H12(s) = (H4 + H3 H6)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)    (4.2.19)
H21(s) = H1/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)    (4.2.20)
H22(s) = (−H3 H5 − H1 H4 H7 − H1 H3 H6 H7)/(1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7).    (4.2.21)
It can be observed that all of them have the same polynomial as denominator. This polynomial, denoted by L(s),
L(s) = 1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7,    (4.2.22)
is the determinant of the system (4.2.7).
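The whole reduction can be cross-checked by solving the algebraic system (4.2.7) directly. The sketch below, in Python, assigns arbitrary rational sample values to H1 : H7 (assumptions for illustration only), solves the 9 equations for a0 : a8 by Gaussian elimination, and verifies that Y1, Y2 reproduce the four components (4.2.18)-(4.2.21) exactly.

```python
from fractions import Fraction as F

def solve(A, b):
    """Gaussian elimination with exact rational arithmetic."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

# Arbitrary sample gains (rational, so the comparison is exact).
H1, H2, H3, H4, H5, H6, H7 = F(2), F(1, 2), F(3), F(1, 3), F(1, 5), F(2), F(1, 4)

def outputs(U1, U2):
    """Solve (4.2.7) for a0..a8 and return Y1 = a8, Y2 = a2."""
    A = [[F(0)] * 9 for _ in range(9)]
    b = [F(0)] * 9
    A[0][0], A[0][1] = F(1), -H1                    # a0 = H1*a1
    A[1][1], A[1][7], b[1] = F(1), F(1), U1         # a1 = U1 - a7
    A[2][2], A[2][0], A[2][5] = F(1), F(-1), F(1)   # a2 = a0 - a5
    A[3][3], A[3][6], b[3] = F(1), F(-1), U2        # a3 = U2 + a6
    A[4][4], A[4][3] = F(1), -H3                    # a4 = H3*a3
    A[5][5], A[5][4] = F(1), -H5                    # a5 = H5*a4
    A[6][6], A[6][2] = F(1), -H2                    # a6 = H2*a2
    A[7][7], A[7][8] = F(1), -H7                    # a7 = H7*a8
    A[8][8], A[8][3], A[8][4] = F(1), -H4, -H6      # a8 = H4*a3 + H6*a4
    a = solve(A, b)
    return a[8], a[2]

L = 1 + H2*H3*H5 + H1*H2*H3*H6*H7 + H1*H2*H4*H7     # determinant (4.2.22)
Y1_u1, Y2_u1 = outputs(F(1), F(0))                  # response to U1 alone
Y1_u2, Y2_u2 = outputs(F(0), F(1))                  # response to U2 alone

assert Y1_u1 == (H1*H2*H3*H6 + H1*H2*H4) / L        # H11, (4.2.18)
assert Y1_u2 == (H4 + H3*H6) / L                    # H12, (4.2.19)
assert Y2_u1 == H1 / L                              # H21, (4.2.20)
assert Y2_u2 == (-H3*H5 - H1*H4*H7 - H1*H3*H6*H7) / L   # H22, (4.2.21)
```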

4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.3. Signal Flow Graphs Method (SFG).

4.3 Signal Flow Graphs Method (SFG).


A signal flow graph (SFG) is a graphical representation of a set of simultaneous linear equations representing a system. It graphically displays the transmission of signals through the system. Like block diagrams, SFGs illustrate the cause-and-effect relationships, but with more mathematical rules.
Unfortunately, they apply only to linear systems modelled by algebraic equations in the time domain, s-domain or z-domain. In such a situation they appear as a simplified version of block diagrams. On the other hand, SFGs are easier to draw and easier to manipulate than block diagrams.

4.3.1. Signal Flow Graphs Fundamentals.


Let
x_j = t_ij x_i  or  x_j = t_ij{x_i}    (4.3.1)
be a linear mathematical relationship, where:
x_i, x_j are time functions, complex functions or numerical variables;
t_ij (also written t_xi^xj) is an operator called elementary transmission function (ETF), which makes the correspondence from x_i to x_j. It is a time domain or complex domain operator, also called elementary transmittance or gain. We shall see how this mathematical relationship is represented in a SFG.
The fundamental elements of a signal flow graph are: node and branch.
Node.
A node, represented in a SFG by a small dot, expresses a variable denoted
by a letter, for example xi, which will be a variable of the graph. This variable
can be a numerical variable, a time function or a complex function (like Laplace
transforms or z transforms).
Usually we refer to a node by the variable associated with it. For example we say "the node xi", meaning the node to which the variable xi is associated.
Elementary branch.
An elementary branch, for short branch, between two nodes, from the node xi to the node xj for example, is a curved line oriented by an arrow, directly connecting the nodes xi and xj without passing through other nodes. It contains the information of how the node xi directly affects the node xj.
Branches are always unidirectional, reflecting cause-effect relationships. Any branch has two associated elements: the branch operator (gain or transmittance) and the direction. A branch is marked by a letter or a symbol, placed near the arrow, which defines the coefficient or the gain or the operator, tij for example, through which the variable xi contributes a term to the determination of the variable xj. This gain (coefficient, operator) between the nodes xi and xj, denoted by tij (or t_xi^xj), defines the so-called elementary transmittance or elementary transmission function. It is considered to be zero in the direction opposite to that indicated by the arrow.


The signal flow graph representation of the equation (4.3.1) is shown in Fig. 4.3.1.
[Figure: a single branch from the node xi to the node xj, marked with the transmittance tij, representing xj = tij xi.]

Figure no. 4.3.1.


Notice that the branch directed from the node xi (the input node of this branch) to the node xj (the output node of this branch) expresses the dependence of xj upon xi, but not the reverse; that means signals travel along branches only in the direction indicated by the arrows of the branches.
It is very important to note that the transmittance between the two nodes xi and xj should be interpreted as a unidirectional (unilateral) operator (amplifier) with the gain tij, so that a contribution of tij xi is delivered at the node xj.
We can algebraically write the relation (4.3.1) as
x_i = (1/t_ij) x_j  or  x_i = [t_ij]^-1{x_j},
but the SFG of Fig. 4.3.1. does not imply this relationship.
If at the same time the variable xj directly affects the variable xi, that means there exists (is given) an independent relationship xi = tji xj, then the set of two simultaneous equations
x_j = t_ij x_i
x_i = t_ji x_j    (4.3.2)
is represented by a SFG as in Fig. 4.3.2. This is a so-called elementary loop between two nodes.
[Figure: the nodes xi and xj connected by the branch tij from xi to xj and the branch tji from xj to xi, forming an elementary loop.]

Figure no. 4.3.2.


4.3.2. Signal Flow Graphs Algebra.

In SFGs the following fundamental operations, rules and notions are defined:
4.3.2.1. Addition rule. The value of a variable in a SFG denoted by a node is
equal to the sum of signals entering that node.
Example. The variable x1 depends on three variables, x2, x3, x4.
This dependence is expressed as in (4.3.3) and drawn as in Fig. 4.3.3.
x 1 = t 21 x 2 + t 31 x 3 + t 41 x 4 . (4.3.3)
If confusion may arise, we shall denote the transmission functions with indexes formed from the node names (from x2 to x1 as t_x2^x1).
[Figure: branches from the nodes x2, x3, x4 entering the node x1, with the gains t21, t31, t41.]

Figure no. 4.3.3.


4.3.2.2. Transmission rule. The value of a variable denoted by a node is
transmitted on every branch leaving that node.
Example. Let us consider three variables x2, x3, x4 which directly depend upon
the variable x1 through the relationships,
xk=t1kx1, k=2 : 4. (4.3.4)
These relations are represented by a SFG as in Fig. 4.3.4.
These relations are represented by a SFG as in Fig. 4.3.4.
[Figure: branches leaving the node x1 toward the nodes x2, x3, x4, with the gains t12, t13, t14.]

Figure no. 4.3.4.


4.3.2.3. Multiplication rule. A cascade (series) of elementary branches can be replaced by (is equivalent to) a single branch whose transmission function is the product of the transmission functions of the components of the cascade.
This rule is available if and only if no other branches are connected to the intermediate nodes.
Example. A cascade of three elementary branches as in the SFG from Fig. 4.3.5. is replaced by a unique equivalent branch T14, where
T14 = t12 t23 t34.    (4.3.5)
[Figure: the cascade x1 → x2 → x3 → x4 with the gains t12, t23, t34, replaced by the single branch T14 from x1 to x4.]

Figure no. 4.3.5.

108
4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.3. Signal Flow Graphs Method (SFG).

4.3.2.4. Parallel rule. A set of elementary branches leaving the same node and entering the same node (a parallel connection of elementary branches) can be replaced by (is equivalent to) a single branch whose transmission function is the sum of the transmission functions of the components of the parallel connection.
Example. Two parallel branches t'12, t''12 as in the SFG from Fig. 4.3.6. are equivalent to a single branch with the equivalent transmission function T12, where
T12 = t'12 + t''12.    (4.3.6)
[Figure: two parallel branches t'12, t''12 from x1 to x2, replaced by the single branch T12.]
Figure no. 4.3.6.
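The multiplication and parallel rules can be illustrated with sample numeric transmittances; a minimal sketch:

```python
# Numeric illustration of the multiplication rule (4.3.5) and the
# parallel rule (4.3.6). All transmittances are arbitrary sample values.
t12, t23, t34 = 2.0, -0.5, 3.0
x1 = 4.0

x4 = t34 * (t23 * (t12 * x1))                       # signal through the cascade
assert abs(x4 - (t12 * t23 * t34) * x1) < 1e-12     # T14 = t12*t23*t34

ta, tb = 1.5, -0.25                 # two parallel branches from x1 to x2
x2 = ta * x1 + tb * x1              # addition rule at the node x2
assert abs(x2 - (ta + tb) * x1) < 1e-12             # T12 = t'12 + t''12
```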
4.3.2.5. Input node. An input node (source node) is a node that has only outgoing branches.
Example. The node x1 in Fig. 4.3.4. is an input node. All its branches are outgoing with respect to x1 and incoming with respect to the nodes xk.

4.3.2.6. Output node. An output node (sink node) is a node that has only incoming branches.
Example. The node x1 in Fig. 4.3.3. is an output node. All its branches are incoming with respect to x1 and outgoing with respect to the nodes xk.
Any node, other than an input node, can be related to a new node, as an output node, through a unity gain branch. This new node is a "dummy" node.

4.3.2.7. Path. A path from one node to another node is a continuous unidirectional succession of branches (traversed in the same direction) along which no node is passed more than once.

4.3.2.8. Loop. A loop is a path that originates and terminates at the same node, in such a way that no node is passed more than once.

4.3.2.9. Self-loop. A self-loop is a loop consisting of a single branch.

4.3.2.10. Path gain. The gain of a path is the product of the gains of the branches encountered in traversing the path. Sometimes we say "path" meaning the gain of that path. For the path "k" the gain is denoted Ck.

4.3.2.11. Loop gain. The loop gain is the product of the transmission functions encountered along that loop. Sometimes we say "loop" meaning the gain of that loop. For the loop "k" the gain is denoted Bk.

4.3.2.12. Disjunctive paths/loops. Two paths or two loops are said to be nontouching if they have no node in common.

4.3.2.13. Equivalent transmission function. The equivalent transmission function (equivalent transmittance) between one input node uk and one node xi of the graph is the operator that expresses the node xi with respect to the input node uk, taking into consideration all the graph connections. It is denoted by a capital letter (symbol) indexed by the two node symbols, for example T_uk^xi.
One graph node xi, other than an input node, is expressed as a function of all the p input nodes uj, j = 1 : p, of the graph by a relation of the form
x_i = Σ_{j=1}^{p} T_uj^xi ⋅ u_j.    (4.3.7)
There is a difference between the elementary transmission function (elementary transmittance) t_uk^xi and the equivalent transmission function (equivalent transmittance) T_uk^xi.
The determination of the equivalent gains means solving the set of equations in which the unknown variables are the non-input nodes of the graph and the free variables are the input nodes.

Example 4.3.1. SFGs of one Algebraic Equation.


Let us consider one algebraic equation having 4 variables x1, x2, x3, x4:
a1 x1 + a2 x2 + a3 x3 + a4 x4 = 0.    (4.3.8)
Suppose that we are interested in x1. To draw the SFG we have to express this variable as a function of the others, as
x1 = −(a1^-1 ⋅ a2)x2 − (a1^-1 ⋅ a3)x3 − (a1^-1 ⋅ a4)x4  ⇔  x1 = t'21 x2 + t'31 x3 + t'41 x4.    (4.3.9)
To draw the graph, first 4 dots are marked by the names of all 4 variables; then they are connected by branches according to the relation (4.3.9).
The resulting SFG is illustrated in Fig. 4.3.7.
[Figure: SFG with the input nodes x2, x3, x4 and the output node x1, with the branch gains t'21 = −a1^-1 ⋅ a2, t'31 = −a1^-1 ⋅ a3, t'41 = −a1^-1 ⋅ a4.]

Figure no. 4.3.7.


As we can observe, the nodes x2, x3, x4 are input nodes and x1 is an output node.
To transform (4.3.8) into (4.3.9) we had to divide the equation by a1, which means to multiply by the inverse a1^-1 of the element a1. If the transmittances are operators, such an operation may be undesirable or even impossible.

To avoid this we consider a1 = 1 + (a1 − 1), obtaining
x1 + (a1 − 1)x1 + a2 x2 + a3 x3 + a4 x4 = 0  ⇔
x1 = (1 − a1)x1 − a2 x2 − a3 x3 − a4 x4  ⇔
x1 = t11 x1 + t21 x2 + t31 x3 + t41 x4.
Now the SFG looks as in Fig. 4.3.8.
[Figure: SFG with the self-loop t11 = 1 − a1 at the node x1 and the branch gains t21 = −a2, t31 = −a3, t41 = −a4.]

Figure no. 4.3.8.


We can observe that in the last SFG a self-loop (t11) appeared. However, the two graphs are equivalent (they express the same algebraic equation (4.3.8)), and this illustrates how to get rid of an undesired self-loop.
To eliminate a self-loop tii, replace all the elementary transmittances tki by t'ki, where
t'ki = tki/(1 − tii), if tii ≠ 1.    (4.3.10)
In this example i = 1, k = 2 : 4, and
t'k1 = tk1/(1 − t11) = −ak/(1 − (1 − a1)) = −ak/a1 = −a1^-1 ⋅ ak, if t11 = 1 − a1 ≠ 1 ⇔ a1 ≠ 0.
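The rule (4.3.10) is easy to check numerically for this example; a minimal sketch with arbitrary sample coefficients:

```python
# Check of the self-loop elimination rule (4.3.10) for equation (4.3.8):
# with the self-loop t11 = 1 - a1, the corrected transmittances must equal
# the loop-free gains -ak/a1. Coefficients are arbitrary sample values.
a1, a2, a3, a4 = 2.0, 3.0, -1.0, 5.0

t11 = 1 - a1                        # self-loop gain of Fig. 4.3.8.
for ak in (a2, a3, a4):
    tk1 = -ak                       # transmittance in the SFG with self-loop
    tk1_corrected = tk1 / (1 - t11)         # rule (4.3.10)
    assert abs(tk1_corrected - (-ak / a1)) < 1e-12  # gain of Fig. 4.3.7.
```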

Example 4.3.2. SFG of two Algebraic Equations.


Let us consider a system of two algebraic equations with numerical coefficients:
3x1 + 4x2 − x3 + x4 = 0
2x1 − 5x2 − x3 + 5x4 = 0.    (4.3.11)
Suppose we are interested in x1 and x2, so they will be the unknown (dependent) variables, and x3, x4 will be the free (independent) variables. We shall denote them, for convenience, x3 = u1 and x4 = u2.
With these notations, (4.3.11) becomes
3x1 + 4x2 = u1 − u2
2x1 − 5x2 = u1 − 5u2.    (4.3.12)
We can solve this system using Cramer's rule, computing the determinant of the system D,
D = |3  4; 2  −5| = −23,    (4.3.13)
and obtaining,


x1 = (D11 u1 + D12 u2)/D = ((−9)u1 + (25)u2)/(−23) = (9/23) ⋅ u1 + (−25/23) ⋅ u2    (4.3.14)
x2 = (D21 u1 + D22 u2)/D = ((1)u1 + (−13)u2)/(−23) = (−1/23) ⋅ u1 + (13/23) ⋅ u2.    (4.3.15)
The same results can be obtained if we express the system (4.3.12) in matrix form,
Ax = Bu,  x = [x1 x2]^T,  u = [u1 u2]^T,    (4.3.16)
where
A = [3  4; 2  −5],  B = [1  −1; 1  −5].
If det(A) ≠ 0, then
x = A^-1 B u = H u = [9/23  −25/23; −1/23  13/23] u  ⇒    (4.3.17)
x1 = (9/23) ⋅ u1 + (−25/23) ⋅ u2 = T_u1^x1 ⋅ u1 + T_u2^x1 ⋅ u2
x2 = (−1/23) ⋅ u1 + (13/23) ⋅ u2 = T_u1^x2 ⋅ u1 + T_u2^x2 ⋅ u2.    (4.3.18)
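The solution (4.3.13)-(4.3.18) can be checked with exact rational arithmetic; a minimal sketch:

```python
from fractions import Fraction as F

# Exact check of (4.3.13)-(4.3.18) for the system (4.3.12).
D = 3 * (-5) - 4 * 2                # determinant (4.3.13)
assert D == -23

def x(u1, u2):
    """Solve (4.3.12) by Cramer's rule for given inputs u1, u2."""
    b1, b2 = u1 - u2, u1 - 5 * u2   # right-hand sides of (4.3.12)
    x1 = F(b1 * (-5) - 4 * b2, D)
    x2 = F(3 * b2 - 2 * b1, D)
    return x1, x2

# Columns of H in (4.3.17): responses to u1 alone and to u2 alone.
assert x(1, 0) == (F(9, 23), F(-1, 23))
assert x(0, 1) == (F(-25, 23), F(13, 23))
```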
The same system (4.3.11), or its equivalent (4.3.12), can be represented as a SFG and solved by using the SFG techniques. If the coefficients of the equations contain letters (symbolic expressions) and the system order is higher than 3, the above methods are difficult to carry out, and solving such a system becomes a very hard task. Using the methods of SFG these difficulties disappear.
To construct the graph, first of all the two dependent variables (x1, x2) are withdrawn from the system equations, irrespective of which variable from which equation.
Let this be
x1 = (−2) ⋅ x1 + (−4) ⋅ x2 + (1) ⋅ u1 + (−1) ⋅ u2
x2 = (−2) ⋅ x1 + (6) ⋅ x2 + (1) ⋅ u1 + (−5) ⋅ u2.    (4.3.19)
After marking the four dots, the future nodes, by the variables x1, x2, u1, u2, they are connected, using the SFG rules, according to the equations (4.3.19); the SFG is obtained as in Fig. 4.3.9.
[Figure: SFG of the system, with the input nodes u1, u2, self-loops at the nodes x1 and x2, and branches connecting the nodes according to (4.3.19).]

Figure no. 4.3.9.


In the next paragraph we shall see how this graph can be reduced to the simplest form that expresses the system solution (4.3.18).


4.3.3. Construction of Signal Flow Graphs.


A signal flow graph can be built starting from two sources of information:
1. Construction of SFG starting from a system of linear algebraic equations.
2. Construction of SFG starting from a block diagram.

4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic Equations.
Suppose we have a system of n equations with m > n variables xk, k = 1 : m:
Σ_{j=1}^{m} a_ij x_j = 0,  i = 1 : n.    (4.3.20)
Based on some reasons (physical description, causality principles, etc.) a number of n variables are selected to be the unknown (dependent) variables, for example xk, k = 1 : n. The other p = m − n variables are considered to be free (independent) variables. For the sake of convenience let us denote them
u_j = x_{n+j}, 1 ≤ j ≤ p = m − n,    (4.3.21)
and the corresponding coefficients from the i-th equation
b_ij = −a_{i,n+j}, 1 ≤ j ≤ p = m − n.    (4.3.22)
In the case of dynamical systems they will represent the input variables.
With these notations, (4.3.20) is written as
Σ_{j=1}^{n} a_ij x_j = Σ_{j=1}^{p} b_ij u_j,  i = 1 : n,    (4.3.23)
or in matrix form,
Ax = Bu,  x = [x1 .. xi ... xn]^T,  u = [u1 ... uj ... up]^T.    (4.3.24)
If the n equations are linearly independent, the determinant D = det(A) ≠ 0 and a unique solution exists (considering that the vector u is given),
x = A^-1 B u = H u, where H = {H_ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ p,
whose i-th component is written as
x_i = Σ_{j=1}^{p} H_ij ⋅ u_j = Σ_{j=1}^{p} T_uj^xi ⋅ u_j = Σ_{j=1}^{p} T_ji ⋅ u_j.    (4.3.25)

We mention all these forms of the solution to point out the difference of notations. In the matrix algebra approach, H_ij is the component from row i and column j of the matrix H; in the SFG approach, T_uj^xi (or T_ji) is the equivalent transmittance between the node uj (or simply j) and the node xi (or simply i).
The next step in the SFG construction is to express each dependent variable xi, i = 1 : n, as a function of all the dependent and independent variables from (4.3.23).
It is not important from which equation the variable xi is withdrawn.


Denoting the elementary transmittances
t_ji = −a_ij for i ≠ j;  t_ii = 1 − a_ii;  g_ji = b_ij,  i, j ∈ [1, n],    (4.3.26)
the dependent variables are expressed as
x_i = Σ_{j=1}^{n} t_ji ⋅ x_j + Σ_{j=1}^{p} g_ji ⋅ u_j,  i = 1 : n.    (4.3.27)
Now we have to draw n + p = m dots and mark them by the names of the variables involved in (4.3.23). Then the SFG is obtained by connecting the nodes according to each equation from (4.3.27). The zero gain branches are not drawn.
The image of one equation (i) from (4.3.27) is illustrated in Fig. 4.3.10.
[Figure: SFG image of equation (i) from (4.3.27): the branches g1i ... gpi from the input nodes u1 ... up and the branches t1i ... tni from the nodes x1 ... xn enter the node xi, including the self-loop tii.]

Figure no. 4.3.10.


Example 4.3.3. SFG of three Algebraic Equations.
Let us consider a system of three algebraic equations with numerical coefficients, where we have already selected the dependent and independent variables as in (4.3.23):
2x 1 − 3x 2 = 4u 1
7x 1 − 2x 2 + x 3 = 4u 2 (4.3.28)
6x 1 − 5x 2 = 3u 1 + u 2
Now we express each dependent variable upon the other variables. The variable x3 can be withdrawn only from the second equation. Let us express x1 from the first equation and x2 from the third, getting,
x 1 = − x 1 + 3x 2 + 4u 1
x 2 = −6x 1 + 6x 2 + 3u 1 + u 2 (4.3.29)
x 3 = −7x 1 + 2x 2 + 4u 2
The corresponding SFG is depicted in Fig. 4.3.11.
We could select x1 from the third equation and x2 from the first one, the
same system being now represented by another SFG.
[Figure: SFG of (4.3.29), with the input nodes u1, u2, the self-loops t11 = −1 at x1 and t22 = 6 at x2, and branches into x1, x2, x3 according to the equations.]

Figure no. 4.3.11.
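The rearrangement (4.3.29) can be verified against the original system (4.3.28): any solution of (4.3.28) must satisfy each equation of (4.3.29) as a fixed point. A minimal sketch with arbitrary test inputs:

```python
from fractions import Fraction as F

# Check that (4.3.29) is equivalent to (4.3.28). u1, u2 are sample inputs.
u1, u2 = F(1), F(2)

# Solve (4.3.28) exactly: x1, x2 from the first and third equations,
# then x3 from the second one.
det = 2 * (-5) - (-3) * 6                           # = 8
x1 = F(4 * u1 * (-5) - (-3) * (3 * u1 + u2), det)
x2 = F(2 * (3 * u1 + u2) - 6 * 4 * u1, det)
x3 = 4 * u2 - 7 * x1 + 2 * x2

# The solution satisfies the original system (4.3.28)...
assert 2 * x1 - 3 * x2 == 4 * u1
assert 7 * x1 - 2 * x2 + x3 == 4 * u2
assert 6 * x1 - 5 * x2 == 3 * u1 + u2

# ...and each equation of the rearranged form (4.3.29) as a fixed point.
assert x1 == -x1 + 3 * x2 + 4 * u1
assert x2 == -6 * x1 + 6 * x2 + 3 * u1 + u2
assert x3 == -7 * x1 + 2 * x2 + 4 * u2
```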

4.3.3.2. Construction of SFG Starting from a Block Diagram.


As discussed, a block diagram graphically represents a system of equations. As a consequence, this system can also be represented by a signal flow graph. This process can be performed directly from the block diagram according to the following algorithm:
1. Attach additional variables to each takeoff point and to the output of each summing operator. They will become nodes of the graph.
2. The input variables and the initial conditions (when the block diagram also represents the effects of initial conditions, or is a state diagram) will be associated with input nodes. The output variables will be associated with output nodes.
3. Draw all the nodes in positions similar to those of the corresponding variables in the block diagram. It is recommended to avoid branch crossings.
4. For each node other than an input node, establish, based on the block diagram, the relationship expressing the value of that node in terms of the other nodes, and draw that relationship in the graph.
5. The component H_ij(s) = Y_i(s)/U_j(s), with U_k ≡ 0 for k ≠ j, of the transfer matrix H(s) will be represented by the equivalent transmittance T_{Uj}^{Yi} between the input node Uj and the output node Yi.
Example 4.3.4. SFG of a Multivariable System.
Let us consider the system from Example 4.2.2., whose block diagram, depicted in Fig. 4.2.3., is copied in Fig. 4.3.12.
(Figure: the block diagram of the multivariable system from Example 4.2.2., with blocks H1...H7, inputs U1, U2, outputs Y1, Y2, the additional variables a0...a8 and the takeoff points S1...S4.)
Figure no. 4.3.12.


This block diagram is already prepared with the additional variables we utilised in Example 4.2.2. This system contains 13 variables and is described by the 11 equations (4.2.7). We could draw the graph starting from this set of equations, but now we shall draw the graph, presented in Fig. 4.3.13., just by looking at the block diagram.
(Figure: the SFG of the block diagram from Fig. 4.3.12.; the branch gains are the transfer functions H1...H7, unity gains for the takeoff connections, and -1 gains for the subtracted feedback signals.)
Figure no. 4.3.13.

4.4. Systems Reduction Using State Flow Graphs.


To reduce a system to one of its canonical forms (input-output, input+initial_state-output or input+initial_state-state) means to solve the system of equations describing that system.
When the system is represented by a signal flow graph (SFG), this process is called SFG reduction or solving the SFG, which means determining all, or only the required, equivalent transmittances between input nodes and output nodes.
As we mentioned in Ch. 4.3., any node other than an input node can be related to a new node, as output node, through a unity-gain branch. This new node is a "dummy" node.
The nodes of the graph (other than input nodes) x_i, i = i_1, ..., i_k, ..., i_r, whose values we are interested in, will be related to "dummy" output nodes y_k, k = 1 : r, by the relation
    y_k = x_{i_k} , k = 1 : r                                                           (4.4.1)
and represented in the graph by r additional branches with unity gains.

Example. Suppose that in Example 4.3.3. we are interested in evaluating only the variables x1 = x_{i1} and x3 = x_{i2}, that is r = 2. We can consider two new output nodes y1 = x1, y2 = x3 attached to the graph from Fig. 4.3.11., as in Fig. 4.4.1.
(Figure: the SFG from Fig. 4.3.11. completed with two "dummy" output nodes y1 = x1 and y2 = x3, each connected through a unity-gain branch.)
Figure no. 4.4.1.


In a reduced graph, any output variable y_k is expressed by the so-called canonical relationship of the form
    y_k = Σ_{j=1}^{p} H_kj·u_j = Σ_{j=1}^{p} T_{uj}^{yk}·u_j = Σ_{j=1}^{p} T_jk·u_j      (4.4.2)
where by the symbols H_kj, T_{uj}^{yk}, T_jk we denote the equivalent transmission functions (equivalent transmittances).
A reduced graph is represented as in Fig. 4.4.2.
(Figure: the output node y_k fed directly from the input nodes u_1, ..., u_p through the equivalent transmittances T_1k = T_{u1}^{yk}, ..., T_jk = T_{uj}^{yk}, ..., T_pk = T_{up}^{yk}.)
Figure no. 4.4.2.


Two methods can be utilised for SFG reduction:


1. SFG reduction by elementary transformations.
2. SFG reduction using Mason's formula.

4.4.1. SFG Reduction by Elementary Transformations.


Beside the transformations based on the signal flow graph algebra presented in Ch. 4.3.2., two other operations can be utilised:
Elimination of a self-loop.
Elimination of a node.
These operations can also be utilised just to simplify a graph, even if we shall then apply Mason's formula.

4.4.1.1. Elimination of a Self-loop.


A self-loop appears in a graph because a node x_i is represented by an equation of the form (4.3.27),
    x_i = Σ_{k=1}^{n} t_ki·x_k + Σ_{j=1}^{p} g_ji·u_j ,                                  (4.4.3)
where x_i depends upon x_i itself through t_ii.
To eliminate the self-loop determined by this t_ii, where t_ii ≠ 1, we shall express (4.4.3) as
    x_i - t_ii·x_i = Σ_{k=1, k≠i}^{n} t_ki·x_k + Σ_{j=1}^{p} g_ji·u_j  ⇔
    x_i = Σ_{k=1, k≠i}^{n} [t_ki/(1 - t_ii)]·x_k + Σ_{j=1}^{p} [g_ji/(1 - t_ii)]·u_j  ⇔
    x_i = Σ_{k=1, k≠i}^{n} t′_ki·x_k + Σ_{j=1}^{p} g′_ji·u_j .                           (4.4.4)

Now the following rule can be formulated:

To eliminate an undesired self-loop t_ii attached to a node x_i, replace the nonzero elementary transmittances t_ki, ∀k, and g_ji, ∀j, of all the elementary branches entering the node x_i, by the transmittances t′_ki, ∀k, and g′_ji, ∀j, respectively, where,
    t′_ki = t_ki/(1 - t_ii) , k = 1 : n ;   g′_ji = g_ji/(1 - t_ii) , j = 1 : p.         (4.4.5)
Example. Let us consider the graph from Ex. 4.3.3. as redrawn in Fig. 4.4.1. We want to eliminate the self-loop attached to x1, whose transmittance is t11 = -1. The branches entering x1 (in short, transmittances) are t21 = 3 and g11 = 4. The new transmittances are,
    t′21 = t21/(1 - t11) = 3/(1 - (-1)) = 3/2 ;   g′11 = g11/(1 - t11) = 4/(1 - (-1)) = 4/2 .
The other transmittances are not modified.
After this elimination, the graph from Fig. 4.4.1. looks as in Fig. 4.4.3.


(Figure: the SFG from Fig. 4.4.1. after the elimination of the self-loop at x1: the branches entering x1 become t′21 = 3/2 and g′11 = 4/2; the self-loop t22 = 6 at x2 is still present.)
Figure no. 4.4.3.


Now, in the above graph, we want to eliminate the self-loop attached to x2, whose transmittance is t22 = 6. The branches entering x2 are t12 = -6, g12 = 3 and g22 = 1. Then we obtain,
    t′12 = t12/(1 - t22) = -6/(1 - 6) = 6/5 ;   g′12 = g12/(1 - t22) = 3/(1 - 6) = -3/5 ;   g′22 = g22/(1 - t22) = 1/(1 - 6) = -1/5 .
After this new elimination, the graph from Fig. 4.4.3. looks as in Fig. 4.4.4.
(Figure: the self-loop-free SFG: branch gains 4/2 (u1→x1), 3/2 (x2→x1), 6/5 (x1→x2), -3/5 (u1→x2), -1/5 (u2→x2), 2 (x2→x3), -7 (x1→x3), 4 (u2→x3), plus the unity branches to y1 and y2.)
Figure no. 4.4.4.


This graph could have been obtained directly if we had expressed the variables from the system (4.3.28) by division: the first equation by 2 and the third equation by -5.
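The self-loop elimination rule (4.4.5) can be sketched as a small NumPy routine (the function name is illustrative); applied twice, it reproduces the transmittances of Fig. 4.4.4:

```python
import numpy as np

def eliminate_self_loop(T, G, i):
    """Remove the self-loop t_ii of node i by scaling every branch entering
    node i with 1/(1 - t_ii), as in rule (4.4.5); requires t_ii != 1."""
    T, G = T.copy(), G.copy()
    d = 1.0 - T[i, i]
    T[i, :] = T[i, :] / d
    G[i, :] = G[i, :] / d
    T[i, i] = 0.0          # the self-loop itself disappears
    return T, G

# Graph of (4.3.29): T[i, j] is the gain of the branch x_{j+1} -> x_{i+1}.
T = np.array([[-1.0, 3.0, 0.0], [-6.0, 6.0, 0.0], [-7.0, 2.0, 0.0]])
G = np.array([[4.0, 0.0], [3.0, 1.0], [0.0, 4.0]])
T, G = eliminate_self_loop(T, G, 0)   # self-loop -1 at x1
T, G = eliminate_self_loop(T, G, 1)   # self-loop  6 at x2
print(T)   # entering branches of x1 and x2 rescaled as in Fig. 4.4.4
print(G)
```

Only the rows of the node whose self-loop is removed change; all other branches are left untouched, exactly as the rule states.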

4.4.1.2. Elimination of a Node.


Only dependent nodes can be eliminated. Eliminating a node x_i from a graph is equivalent to the elimination of the variable x_i from the system of equations. This is performed by substituting the expression of the variable x_i, as a function of all the other variables except x_i (expression obtained from one equation), into all the other equations of the system. This means the node to be eliminated must not have a self-loop.
Withdrawing the variable x_i, free of self-loop, from one equation of the system (4.3.20) or (4.3.23), we obtain,
    x_i = Σ_{j=1, j≠i}^{n} t_ji·x_j + Σ_{j=1}^{p} g_ji·u_j = Σ_{j=1, j≠i}^{m} t_ji·x_j   (4.4.6)
where, for the sake of convenience, we do not explicitly denote the transmittances g_ji coming from input nodes.
One node x_k, different from an input node (that means 1 ≤ k ≤ n) and different from our x_i (k ≠ i), was drawn taking into consideration the relation,
    x_k = Σ_{j=1, j≠i}^{m} t_jk·x_j + t_ik·x_i                                          (4.4.7)


Substituting (4.4.6) into (4.4.7) will result in,

    x_k = Σ_{j=1, j≠i}^{m} t_jk·x_j + t_ik·[Σ_{j=1, j≠i}^{m} t_ji·x_j] = Σ_{j=1, j≠i}^{m} [t_jk + t_ik·t_ji]·x_j
Denoting,
    t′_jk = t_jk + t_ik·t_ji , ∀k ∈ [1, n], k ≠ i
we have
    x_k = Σ_{j=1, j≠i}^{m} t′_jk·x_j , ∀k ∈ [1, n], k ≠ i                                (4.4.8)
Since we can write t_ik·t_ji = t_ji·t_ik, then ∀k ∈ [1, n], k ≠ i, ∀j ∈ [1, m], j ≠ i,
    t′_jk = t_jk + t_ji·t_ik , if t_ji·t_ik ≠ 0
    t′_jk = t_jk ,             if t_ji = 0 or t_ik = 0                                  (4.4.9)
which makes the node x_i elimination rule easier to interpret (as in Fig. 4.4.5.):
(Figure: a two-branch path x_j → x_i → x_k with gains t_ji and t_ik, together with the direct branch t_jk, is replaced by a single branch of gain t′_jk = t_jk + t_ji·t_ik.)
Figure no. 4.4.5.


To eliminate a non-input node x_i, replace all the elementary transmittances between any non-output node x_j and any non-input node x_k by a new transmittance t′_jk = t_jk + t_ji·t_ik if the gain of the path x_j → x_i → x_k is different from zero, and then erase all the branches related to the node x_i.
It is possible that initially t_jk = 0 (no branch directly connects x_j to x_k) and, after the elimination of x_i, a branch appears between x_j and x_k with the transmittance t′_jk = 0 + t_ji·t_ik.
In the category of nodes x_k we must include all the "dummy" output nodes and, of course, in the category of nodes x_j we must include all input nodes.
To avoid omissions, it is recommended to note in two separate lists which nodes can serve as x_j nodes and which as x_k nodes.
We must inspect all combinations (x_j, x_k) to detect a non-zero path (constituted of only two branches) passing through x_i, even if there is no direct branch between x_j and x_k.
Note that if two selections of the pairs (x_j, x_k) are, for example, (a, b) and (b, a), then they must be inspected independently. In particular, a pair with x_j = x_k must also be inspected: if both t_ji ≠ 0 and t_ij ≠ 0, the elimination creates a new self-loop t′_jj = t_jj + t_ji·t_ij at the node x_j.
Sometimes, for writing convenience, we shall use the notations
    t_jk = t_{x_j}^{x_k} = t(x_j, x_k) ,
    t′_jk = t′_{x_j}^{x_k} = t′(x_j, x_k) ,                                             (4.4.10)
    T_jk = T_{x_j}^{x_k} = T(x_j, x_k) .
Example. Let us consider the graph from Fig.4.4.4. related to Ex. 4.3.3.


We want to eliminate the node x2.


As x_j type nodes we can have: u1, u2, x1, x3.
As x_k type nodes we can have: y1, y2, x1, x3.
Non-zero paths (constituted of only two branches) passing through x2, which will affect the previous elementary transmittances (even those that are zero), are observed for the pairs: (u1, x1), (u1, x3), (u2, x1), (u2, x3), (x1, x3) and (x1, x1). They determine,
    t'(u1, x1) = 4/2 + (-3/5)(3/2) = 11/10,
    t'(u1, x3) = 0 + (-3/5)(2) = -6/5,
    t'(u2, x1) = 0 + (-1/5)(3/2) = -3/10,
    t'(u2, x3) = 4 + (-1/5)(2) = 18/5,
    t'(x1, x3) = -7 + (6/5)(2) = -23/5,
    t'(x1, x1) = 0 + (6/5)(3/2) = 9/5, a new self-loop at x1.
Now, in the graph from Fig. 4.4.4., we first replace the transmittances t(u1, x1), t(u1, x3), t(u2, x1), t(u2, x3), t(x1, x3) by t'(u1, x1), t'(u1, x3), t'(u2, x1), t'(u2, x3), t'(x1, x3), add the self-loop t'(x1, x1), and then erase all the branches related to x2.
The graph with the node x2 eliminated is illustrated in Fig. 4.4.6.
(Figure: the reduced SFG after the elimination of x2: branches 11/10 (u1→x1), -3/10 (u2→x1), -6/5 (u1→x3), 18/5 (u2→x3), -23/5 (x1→x3), a self-loop 9/5 at x1, and the unity branches to y1 and y2.)
Figure no. 4.4.6.
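Rule (4.4.9) can also be sketched in NumPy (again with illustrative names); note how eliminating x2 both updates the two-branch pairs and creates the self-loop at x1:

```python
import numpy as np

def eliminate_node(T, G, i):
    """Eliminate the dependent node i (no self-loop allowed: T[i, i] == 0)
    by the rule t'_jk = t_jk + t_ji * t_ik of (4.4.9); row/column i is dropped."""
    assert T[i, i] == 0.0, "eliminate the self-loop of node i first"
    T2 = T + np.outer(T[:, i], T[i, :])   # T2[k, j] = T[k, j] + T[k, i]*T[i, j]
    G2 = G + np.outer(T[:, i], G[i, :])
    keep = [k for k in range(T.shape[0]) if k != i]
    return T2[np.ix_(keep, keep)], G2[keep, :]

# Self-loop-free graph of Fig. 4.4.4 (rows/columns ordered x1, x2, x3).
T = np.array([[0.0, 1.5, 0.0], [1.2, 0.0, 0.0], [-7.0, 2.0, 0.0]])
G = np.array([[2.0, 0.0], [-0.6, -0.2], [0.0, 4.0]])
Tr, Gr = eliminate_node(T, G, 1)      # eliminate x2
print(Tr)  # note the new self-loop 9/5 created at x1
print(Gr)
```

The outer product collects, in one step, every two-branch path that enters x2 (row of T, G) and leaves it (column of T), which is exactly the pairwise inspection described above.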

4.4.1.3. Algorithm for SFG Reduction by Elementary Transformations.


The reduction of a SFG by elementary transformations is obtained by performing the following steps:
1. Eliminate the parallel branches using the parallel rule.
2. Eliminate the cascade of branches using the multiplication rule.
3. Eliminate the self-loops.
4. Eliminate the intermediate nodes.
Practically, the complete reduction of a SFG up to the canonical form is a rather tedious task. Steps of the above algorithm can also be utilised just to simplify the graph before applying Mason's formula.


4.4.2. SFG Reduction by Mason's General Formula.


The difficulties encountered in the algebraic manipulation of a graph are eliminated using the so-called Mason's general formula.
It determines the equivalent transmittance T_{uj}^{yk} = T(u_j, y_k) between an input node u_j and an output node y_k.
If we are interested in some intermediate nodes x_i, then we must construct "dummy" output nodes, connecting these intermediate nodes to output nodes through additional branches of unity transmittance, which perform x_i = y_i.
The value of an output variable y_i in a graph with p input nodes u_j, j = 1 : p, is expressed by the relation,
    y_i = Σ_{j=1}^{p} T_{uj}^{yi}·u_j                                                   (4.4.11)
Mason's general formula is,
    T_{uj}^{yi} = [Σ_q C_q·D_q] / D ,                                                   (4.4.12)
where:
D is the determinant of the graph, equal to the determinant of the algebraic system of equations multiplied by ±1. It is given by the formula,
    D = 1 + Σ_{k=1}^{m} (-1)^k·S_pk ,                                                   (4.4.13)
S_pk is the sum of all possible products of k nontouching loops¹. If in the graph there is a set B of n loops, B = {B_1, ..., B_n} (by B_k we understand, and also denote, the gain of the loop B_k), then:
    S_p1 = B_1 + B_2 + ... + B_n
    S_p2 = B_1·B_2 + B_1·B_3 + ... + B_{n-1}·B_n
    ...................................                                                 (4.4.14)
    S_pn = B_1·B_2·...·B_n
A term of S_pk is different from zero if and only if any two loops B_i, B_j entering that term are nontouching to each other, that is, they have no common node.
We recall that the gain of a loop equals the product of the transmittances of the branches defining that loop.
C_q means the q-th path from the input node u_j to the output node y_i, denoted also C_q = C_q(u_j, y_i). The index q is utilised for all the graph's paths. By C_q we understand, and also denote, the gain of the path C_q. The gain of a path equals the product of the transmittances of the branches defining that path.
D_q is computed just like D, relation (4.4.13), but taking into consideration for its S_pk only the loops nontouching with the path C_q. The set of these loops is denoted by B_q. If B_q = ∅, then D_q = 1.

¹ The sum of the products of k disjoint (nontouching) loops.

Example. Let us consider the graph from Ex. 4.3.3. as redrawn in Fig. 4.4.7. Suppose we are interested in all the dependent variables x1, x2, x3.
To apply Mason's general formula, we first must define and draw in the graph the corresponding "dummy" output nodes which extract the variables of interest. They are defined by the relations:
    y1 = x1 ; y2 = x2 ; y3 = x3 .
(Figure: the SFG from Fig. 4.3.11. with three "dummy" output nodes y1 = x1, y2 = x2, y3 = x3 attached through unity branches; the loops are B1, the self-loop -1 at x1, B2, the self-loop 6 at x2, and B3, the loop x1→x2→x1.)
Figure no. 4.4.7.


We identify only three loops (n = 3), B = {B1, B2, B3}:
    B1 = -1 ; B2 = 6 ; B3 = (3)(-6) = -18 .
B1 and B2 are nontouching loops, but (B1, B3) and (B2, B3) are touching loops.
Now we compute,
    S_p1 = B1 + B2 + B3 = (-1) + (6) + (-18) = -13
    S_p2 = B1·B2 = (-1)(6) = -6 .
The other two terms B2·B3, B3·B1 do not appear in S_p2 because the loops (B2, B3) and (B3, B1) are touching. Also S_p3 = 0 because there do not exist three loops nontouching to each other. In these conditions,
    D = 1 + Σ_{k=1}^{2} (-1)^k·S_pk = 1 - S_p1 + S_p2 = 1 - (-13) + (-6) = 8 .
Evaluation of T_{u1}^{y1} = T(u1, y1): We identify two paths from u1 to y1:
C1 = C1(u1, y1) = (4)(1) = 4. Because only B2 is nontouching with C1, it results that B_1 = {B2}, S_p1 = B2, S_p2 = 0, so D1 = 1 - S_p1 = 1 - B2 = 1 - 6 = -5.
C2 = C2(u1, y1) = (3)(3)(1) = 9. Because no loop is nontouching with C2, it results B_2 = ∅, that means D2 = 1.
We obtain, T_{u1}^{y1} = (C1·D1 + C2·D2)/D = (4·(-5) + 9·(1))/8 = -11/8 .
Evaluation of T_{u2}^{y1} = T(u2, y1): We identify only one path from u2 to y1:
C3 = C3(u2, y1) = (1)(3)(1) = 3. Because no loop is nontouching with C3, it results B_3 = ∅ ⇒ D3 = 1.
We obtain, T_{u2}^{y1} = C3·D3/D = 3·(1)/8 = 3/8 .
Evaluation of T_{u1}^{y2} = T(u1, y2): We identify two paths from u1 to y2:
C4 = C4(u1, y2) = (3)(1) = 3 ; B_4 = {B1} ⇒ D4 = 1 - (-1) = 2.
C5 = C5(u1, y2) = (4)(-6)(1) = -24 ; B_5 = ∅ ⇒ D5 = 1.
We obtain, T_{u1}^{y2} = (C4·D4 + C5·D5)/D = ((3)·(2) + (-24)·(1))/8 = -18/8 .

Evaluation of T_{u2}^{y2} = T(u2, y2): We identify one path from u2 to y2:
C6 = C6(u2, y2) = (1)(1) = 1 ; B_6 = {B1} ⇒ D6 = 1 - (-1) = 2.
We obtain, T_{u2}^{y2} = C6·D6/D = (1)·(2)/8 = 2/8 .
Evaluation of T_{u1}^{y3} = T(u1, y3): We identify four paths from u1 to y3:
C7 = C7(u1, y3) = (3)(2)(1) = 6 ; B_7 = {B1} ⇒ D7 = 2.
C8 = C8(u1, y3) = (3)(3)(-7)(1) = -63 ; B_8 = ∅ ⇒ D8 = 1.
C9 = C9(u1, y3) = (4)(-7)(1) = -28 ; B_9 = {B2} ⇒ D9 = 1 - 6 = -5.
C10 = C10(u1, y3) = (4)(-6)(2)(1) = -48 ; B_10 = ∅ ⇒ D10 = 1.
We obtain,
    T_{u1}^{y3} = (C7·D7 + C8·D8 + C9·D9 + C10·D10)/D = ((6)·(2) + (-63)·(1) + (-28)·(-5) + (-48)·(1))/8 = 41/8 .
Evaluation of T_{u2}^{y3} = T(u2, y3): We identify three paths from u2 to y3:
C11 = C11(u2, y3) = (4)(1) = 4 ; B_11 = {B1, B2, B3} ⇒ D11 = D = 8.
C12 = C12(u2, y3) = (1)(2)(1) = 2 ; B_12 = {B1} ⇒ D12 = 2.
C13 = C13(u2, y3) = (1)(3)(-7)(1) = -21 ; B_13 = ∅ ⇒ D13 = 1.
    T_{u2}^{y3} = (C11·D11 + C12·D12 + C13·D13)/D = ((4)·(8) + (2)·(2) + (-21)·(1))/8 = 15/8 .
Collecting the results, we write:
    y1 = x1 = T_{u1}^{y1}·u1 + T_{u2}^{y1}·u2 = (-11/8)·u1 + (3/8)·u2
    y2 = x2 = T_{u1}^{y2}·u1 + T_{u2}^{y2}·u2 = (-18/8)·u1 + (2/8)·u2
    y3 = x3 = T_{u1}^{y3}·u1 + T_{u2}^{y3}·u2 = (41/8)·u1 + (15/8)·u2
The same results are obtained, more easily, by pure algebraic methods, solving the system (4.3.28). The goal here was only to illustrate how to apply Mason's general formula, which brings many advantages for symbolic systems, where there are not so many paths as in this example.
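Since the graph encodes the algebraic system (4.3.28), the equivalent transmittances delivered by Mason's formula can be cross-checked by a direct linear solve; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# System (4.3.28) in matrix form A_c @ x = B_c @ u.
A_c = np.array([[2.0, -3.0, 0.0],
                [7.0, -2.0, 1.0],
                [6.0, -5.0, 0.0]])
B_c = np.array([[4.0, 0.0],
                [0.0, 4.0],
                [3.0, 1.0]])

# x = M @ u, so entry M[i, j] is the equivalent transmittance T(u_{j+1}, x_{i+1}).
M = np.linalg.solve(A_c, B_c)
print(8 * M)   # Mason's formula predicts [[-11, 3], [-18, 2], [41, 15]]
```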

Example 4.4.1. Reduction by Mason's Formula of a Multivariable System.


We saw in Ex. 4.2.2. how the multivariable system from Fig. 4.2.3. can be reduced using block diagram transformations.
Its SFG has been drawn in Fig. 4.3.13., which we copy now in Fig. 4.4.8. to apply Mason's general formula for reduction purposes.
(Figure: the SFG from Fig. 4.3.13. with the three loops marked: B1 through the -H5 feedback, B2 through the -H7 feedback and the H3H6 branch, and B3 through the -H7 feedback and the H4 branch.)
Figure no. 4.4.8.

Three loops are identified (n = 3), B = {B1, B2, B3}:

    B1 = H2H3(-H5) ; B2 = -H7H1H2H3H6 ; B3 = H1H2H4(-H7)
All of them are touching loops, so now we compute,
    S_p1 = B1 + B2 + B3 ; S_p2 = 0
    D = 1 - S_p1 = 1 - [H2H3(-H5) - H7H1H2H3H6 + H1H2H4(-H7)]
    D = 1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7
It can be observed that in three rows we determined the graph determinant, which is the same as the common denominator of the transfer matrix from relation (4.2.22).
Evaluation of T_{u1}^{y1} = T(u1, y1) = H11(s).
We identify two paths from u1 to y1:
C1 = C1(u1, y1) = H1H2H3H6. Because no loop is nontouching with C1, it results B_1 = ∅, that means D1 = 1.
C2 = C2(u1, y1) = H1H2H4. Because no loop is nontouching with C2, it results B_2 = ∅, that means D2 = 1. We obtain,
    T_{u1}^{y1} = H11 = (C1·D1 + C2·D2)/D = (H1H2H3H6 + H1H2H4)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7)
Evaluation of T_{u2}^{y1} = T(u2, y1) = H12(s).
We identify two paths from u2 to y1:
C3 = C3(u2, y1) = H4. B_3 = ∅, D3 = 1.
C4 = C4(u2, y1) = H3H6. B_4 = ∅, D4 = 1. We obtain,
    T_{u2}^{y1} = H12 = (C3·D3 + C4·D4)/D = (H4 + H3H6)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7)
Evaluation of T_{u1}^{y2} = T(u1, y2) = H21(s).
We identify one path from u1 to y2:
C5 = C5(u1, y2) = H1. B_5 = ∅, D5 = 1. We obtain,
    T_{u1}^{y2} = H21 = C5·D5/D = H1/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7)
Evaluation of T_{u2}^{y2} = T(u2, y2) = H22(s).
We identify three paths from u2 to y2:
C6 = C6(u2, y2) = H4(-H7)H1. B_6 = ∅, D6 = 1.
C7 = C7(u2, y2) = H3(-H5). B_7 = ∅, D7 = 1.
C8 = C8(u2, y2) = H3H6(-H7)H1. B_8 = ∅, D8 = 1. We obtain,
    T_{u2}^{y2} = H22 = (C6·D6 + C7·D7 + C8·D8)/D
    H22 = (-H4H7H1 - H3H5 - H3H6H7H1)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7)
The results are identical to those obtained by block diagram transformations,
relations (4.2.18)... (4.2.21), but with much less effort.


5. SYSTEM REALISATION BY STATE EQUATIONS.

5.1. Problem Statement.


Let us suppose a SISO system, p = 1, r = 1. If a system
    S = SS(A, b, c, d, x)                                                               (5.1.1)
is given by state equations, then the transfer function H(s) can be uniquely determined as
    H(s) = c^T·Φ(s)·b + d = M(s)/L(s) ,  Φ(s) = (sI - A)^{-1}                           (5.1.2)
in short, S = TF(M, L).
Having the transfer function of a system, several forms of state equations can be obtained; that means we have several system realisations by state equations.
    S = TF(M, L) ⇒ S = SS(A, b, c, d, x)                                                (5.1.3)
is one state realisation, but
    S = TF(M, L) ⇒ S̄ = SS(Ā, b̄, c̄, d̄, x̄)                                           (5.1.4)
is another state realisation, with the same transfer function,
    H̄(s) = c̄^T·Φ̄(s)·b̄ + d̄ = H(s) ,  Φ̄(s) = (sI - Ā)^{-1}                           (5.1.5)
The two state realisations are equivalent, that means,
    ∃T, x̄ = Tx, det T ≠ 0
    Ā = TAT^{-1} , b̄ = Tb , c̄^T = c^T·T^{-1} , d̄ = d                                  (5.1.6)
The two transfer functions H(s) and H̄(s) are identical,
    H̄(s) = c̄^T(sI - Ā)^{-1}b̄ + d̄ = c^T·T^{-1}(sI - TAT^{-1})^{-1}·Tb + d = c^T·T^{-1}·T(sI - A)^{-1}·T^{-1}·Tb + d = H(s)
(We recall that (AB)^{-1} = B^{-1}·A^{-1}.)
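The invariance of the transfer function under the similarity transformation (5.1.6) can be illustrated numerically; the following sketch (NumPy assumed, names illustrative) compares the two realisations at an arbitrary complex point s:

```python
import numpy as np

def H(A, b, c, d, s):
    """Transfer function value c^T (sI - A)^{-1} b + d at the complex point s."""
    return (c.T @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b) + d).item()

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); b = rng.standard_normal((3, 1))
c = rng.standard_normal((3, 1)); d = 0.5
T = rng.standard_normal((3, 3))           # a random transformation, invertible
Ti = np.linalg.inv(T)                     # with probability 1
Ab, bb, cb = T @ A @ Ti, T @ b, Ti.T @ c  # (5.1.6): the new c satisfies c̄^T = c^T T^{-1}

s = 1.0 + 2.0j
print(abs(H(A, b, c, d, s) - H(Ab, bb, cb, d, s)))  # ~0 up to round-off
```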
Suppose that m = degree(M) and n = degree(L), m ≤ n. Because L(s) has n+1 coefficients and M(s) has m+1 coefficients, H(s) has only n+m+1 free parameters, due to the ratio. The state realisations have
    n² (from A) + n (from b) + n (from c) + 1 (from d) = n² + 2n + 1
parameters, so
    n² + 2n + 1 > n + m + 1 ,
which leads to an underdetermined system of n² + 2n + 1 unknown variables in n + m + 1 equations. This explains why there are several (an infinity of) state realisations for the same transfer function.
There are several methods to determine the state equations starting from the transfer function. For multivariable systems this procedure is more complicated, but it is possible too.
Some state realisations have special forms with a minimum number of free parameters. They are called canonical forms (economical forms).
Some canonical forms are important because they put into evidence system properties like controllability and observability.

Mainly there are two structures for canonical forms:

- Controllable structures (I-D: integral-differential structures) and
- Observable structures (D-I: differential-integral structures).
Now we recall the controllability and observability criteria for LTI systems:

5.1.1. Controllability Criterion.


The MIMO LTI system S
S = SS(A, B, C, D) (5.1.7)
is completely controllable, or we say that the pair (A,B) is a controllable pair, if
and only if the so called controllability matrix P
P = [B AB ... A n−1 B] (5.1.8)
has maximum rank. P is an (n × (np)) matrix. The maximum rank of P is n.
Sometimes we can check this condition by computing
    Δ_P = det(P·P^T) ,                                                                  (5.1.9)
which must be different from zero. For a SISO LTI system S
    S = SS(A, b, c, d)
the matrix P is a square n × n matrix,
    P = [b Ab ... A^{n-1}b] .                                                           (5.1.10)

5.1.2. Observability Criterion.


The MIMO LTI system S
S = SS(A, B, C, D) (5.1.11)
is completely observable, or the pair (A, C) is an observable pair, if and only if the so-called observability matrix Q,
    Q = [C ; CA ; ... ; CA^{n-1}] (the blocks C, CA, ..., CA^{n-1} stacked on top of each other),   (5.1.12)
has maximum rank. Q is an ((rn) × n) matrix. The maximum rank of Q is n.
Sometimes we can check this condition by computing
    Δ_Q = det(Q^T·Q) ,                                                                  (5.1.13)
which must be different from zero. For a SISO LTI system S
    S = SS(A, b, c, d)                                                                  (5.1.14)
the matrix Q is a square n × n matrix,
    Q = [c^T ; c^T·A ; ... ; c^T·A^{n-1}] .                                             (5.1.15)
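The two criteria can be sketched as small NumPy helpers (ctrb and obsv are illustrative names; they build the matrices (5.1.8) and (5.1.12)); as data we use the controllable companion pair obtained later in Example 5.2.1:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix P = [B, AB, ..., A^{n-1}B] of (5.1.8)."""
    n = A.shape[0]
    cols, M = [], B
    for _ in range(n):
        cols.append(M)
        M = A @ M
    return np.hstack(cols)

def obsv(A, C):
    """Observability matrix Q = [C; CA; ...; CA^{n-1}] of (5.1.12)."""
    n = A.shape[0]
    rows, M = [], C
    for _ in range(n):
        rows.append(M)
        M = M @ A
    return np.vstack(rows)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c_T = np.array([[4.0, 2.0]])
print(np.linalg.matrix_rank(ctrb(A, b)),    # 2 -> controllable
      np.linalg.matrix_rank(obsv(A, c_T)))  # 1 -> not observable
```

Using the rank instead of a determinant also covers the non-square MIMO case directly.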


5.2. First Type I-D Canonical Form.

Let H(s) be a transfer function of the form

    H(s) = M(s)/L(s) = (b_n·s^n + ... + b_0)/(a_n·s^n + ... + a_0) = Y(s)/U(s) ,  a_n ≠ 0   (5.2.1)
The state realisation of this transfer function, as the I-D canonical form of the first type, can be obtained by the so-called direct programming method, according to the following algorithm:

1. Divide the numerator and the denominator of the transfer function by s^n,

    H(s) = (b_n + b_{n-1}·s^{-1} + ... + b_1·s^{-(n-1)} + b_0·s^{-n})/(a_n + a_{n-1}·s^{-1} + ... + a_1·s^{-(n-1)} + a_0·s^{-n}) = Y(s)/U(s) ,  a_n ≠ 0   (5.2.2)
and express the output as
    Y(s) = M(s)·[ (1/L(s))·U(s) ]                                                       (5.2.3)
where M(s) acts like a D operator and 1/L(s) like an I operator:
    Y(s) = (b_n + b_{n-1}·s^{-1} + ... + b_0·s^{-n}) · U(s)/(a_n + a_{n-1}·s^{-1} + ... + a_1·s^{-(n-1)} + a_0·s^{-n})   (5.2.4)
where the second factor will be denoted W(s).
2. Denote
    W(s) = U(s)/(a_n + a_{n-1}·s^{-1} + ... + a_1·s^{-(n-1)} + a_0·s^{-n})              (5.2.5)
and express W(s) as a function of U(s) and the products [s^{-k}·W(s)], k = 1 : n,
    a_n·[W(s)] + a_{n-1}·[s^{-1}W(s)] + ... + a_1·[s^{-(n-1)}W(s)] + a_0·[s^{-n}W(s)] = U(s)
    W(s) = -(a_{n-1}/a_n)·[s^{-1}W(s)] - (a_{n-2}/a_n)·[s^{-2}W(s)] - ... - (a_0/a_n)·[s^{-n}W(s)] + (1/a_n)·U(s)   (5.2.6)
where the products [s^{-1}W(s)], [s^{-2}W(s)], ..., [s^{-n}W(s)] will become the state variables x_n(s), x_{n-1}(s), ..., x_1(s).
3. Denote the products [s^{-k}·W(s)], k = 1 : n, as n new variables
    X_k(s) = [s^{-(n-k+1)}·W(s)] , k = 1 : n                                            (5.2.7)
that is,
    X_1(s) = s^{-n}·W(s)
    X_2(s) = s^{-(n-1)}·W(s)
    .......
    X_n(s) = s^{-1}·W(s)
so the expression of W(s) from (5.2.6) is now,
    W(s) = -(a_{n-1}/a_n)·X_n(s) - (a_{n-2}/a_n)·X_{n-1}(s) - ... - (a_0/a_n)·X_1(s) + (1/a_n)·U(s)   (5.2.8)

4. Express the output Y(s) from (5.2.3) and (5.2.4) as


Y(s) = b n W(s) + b n−1 [s −1 W(s)] + ... + b 0 [s −n W(s)] (5.2.9)


where W(s) is substituted from (5.2.6),

    Y(s) = (b_{n-1} - b_n·a_{n-1}/a_n)·[s^{-1}W(s)] + (b_{n-2} - b_n·a_{n-2}/a_n)·[s^{-2}W(s)] + ... + (b_0 - b_n·a_0/a_n)·[s^{-n}W(s)] + (b_n/a_n)·U(s)   (5.2.10)
or, from (5.2.8),
    Y(s) = (b_{n-1} - b_n·a_{n-1}/a_n)·X_n(s) + (b_{n-2} - b_n·a_{n-2}/a_n)·X_{n-1}(s) + ... + (b_0 - b_n·a_0/a_n)·X_1(s) + (b_n/a_n)·U(s)   (5.2.11)
where the coefficients are denoted, in order, c_n, c_{n-1}, ..., c_1 and d = b_n/a_n.
5. Denote
    c_1 = b_0 - b_n·a_0/a_n
    c_2 = b_1 - b_n·a_1/a_n
    .....
    c_k = b_{k-1} - b_n·a_{k-1}/a_n , k = 1 : n                                         (5.2.12)
    .....
    c_n = b_{n-1} - b_n·a_{n-1}/a_n
and express the output Y(s) from (5.2.10) as
    Y(s) = c_n·[s^{-1}W(s)] + c_{n-1}·[s^{-2}W(s)] + ... + c_1·[s^{-n}W(s)] + (b_n/a_n)·U(s)   (5.2.13)
or, from (5.2.11), as
    Y(s) = c_1·X_1(s) + c_2·X_2(s) + ... + c_n·X_n(s) + (b_n/a_n)·U(s)                  (5.2.14)

6. Draw, as a block diagram or signal flow graph, a cascade of n integrators and fill in this graphical representation the relations (5.2.6) or (5.2.8) and the output relations (5.2.13) or (5.2.14).
The integrators can be represented without initial conditions, or with their initial conditions if later on we want to get the free response from this diagram.
(Figure: the first type I-D canonical structure: the input U(s), scaled by 1/a_n, enters a summer producing W(s); a chain of n integrators s^{-1} generates X_n(s) = s^{-1}W(s), ..., X_1(s) = s^{-n}W(s); feedback branches with gains a_{n-1}/a_n, ..., a_0/a_n return to the summer; the output summer forms Y(s) from the direct branch b_n/a_n and the branches c_n, ..., c_1. In time domain ẋ_k(t) = x_{k+1}(t).)
Figure no. 5.2.1.


7. Denote, from the right-hand side to the left-hand side, the integrator outputs by the variables X_1, X_2, ..., X_n, as in (5.2.7), both in complex domain and in time domain.
8. Interpret the diagram in time domain and write the relations between variables in time domain,
    ẋ_1 = x_2
    ẋ_2 = x_3
    ........                                                                            (5.2.15)
    ẋ_{n-1} = x_n
    ẋ_n = -(a_0/a_n)·x_1 - (a_1/a_n)·x_2 - ... - (a_{n-2}/a_n)·x_{n-1} - (a_{n-1}/a_n)·x_n + (1/a_n)·u

    y = c_1·x_1 + c_2·x_2 + ... + c_{n-1}·x_{n-1} + c_n·x_n + d·u                       (5.2.16)

where we denoted,
    d = b_n/a_n ,  c_k = b_{k-1} - b_n·a_{k-1}/a_n , k = 1 : n                          (5.2.17)
9. Write in matrix form the relations (5.2.15), (5.2.16), (5.2.17):
    ẋ = A·x + b·u , x = [x_1, x_2, ..., x_{n-1}, x_n]^T                                 (5.2.18)
    y = c^T·x + d·u                                                                     (5.2.19)
where

    A =
        | 0          1          0          ..   0              0            |
        | 0          0          1          ..   0              0            |
        | ..         ..         ..         ..   ..             ..           |
        | 0          0          0          ..   0              1            |
        | -a_0/a_n   -a_1/a_n   -a_2/a_n   ..   -a_{n-2}/a_n   -a_{n-1}/a_n |

    b = [0, 0, ..., 0, 1/a_n]^T ,
    c = [b_0 - b_n·a_0/a_n , b_1 - b_n·a_1/a_n , ... , b_{n-1} - b_n·a_{n-1}/a_n]^T ,  d = b_n/a_n   (5.2.20)
This is also called the controllable companion canonical form of the state equations.
It can be observed that if b_n = 0, that is, the system is strictly proper, then the c vector is composed of the transfer function numerator coefficients. If a_n = 1, then the last row of the matrix A is composed of the transfer function denominator coefficients with changed sign.
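Steps 1-9 amount to filling the pattern (5.2.20) directly from the polynomial coefficients; a minimal NumPy sketch (the function name is illustrative, and it assumes num and den are padded to the same length):

```python
import numpy as np

def companion_controllable(num, den):
    """Build (A, b, c, d) of (5.2.20) from num = [b_n, ..., b_0] and
    den = [a_n, ..., a_0], with a_n != 0 and len(num) == len(den)."""
    b_, a_ = np.asarray(num, float), np.asarray(den, float)
    an, bn = a_[0], b_[0]
    n = len(a_) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # superdiagonal of ones
    A[-1, :] = -a_[:0:-1] / an            # last row: [-a_0/a_n, ..., -a_{n-1}/a_n]
    b = np.zeros((n, 1)); b[-1, 0] = 1.0 / an
    c = (b_[:0:-1] - bn * a_[:0:-1] / an).reshape(n, 1)  # c_k = b_{k-1} - b_n a_{k-1}/a_n
    return A, b, c, bn / an

A, b, c, d = companion_controllable([1, 5, 6], [1, 3, 2])
print(A, b.ravel(), c.ravel(), d)   # reproduces the matrices of the example that follows
```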


Example 5.2.1. First Type I-D Canonical Form of a Second Order System.
Let a system be described by the second order differential equation,
    ÿ + 3ẏ + 2y = ü + 5u̇ + 6u
The transfer function is
    H(s) = (s² + 5s + 6)/(s² + 3s + 2) = Y(s)/U(s) = ((s + 2)(s + 3))/((s + 2)(s + 1))
    H(s) = (1 + 5s^{-1} + 6s^{-2})/(1 + 3s^{-1} + 2s^{-2})
    Y(s) = (1 + 5s^{-1} + 6s^{-2}) · U(s)/(1 + 3s^{-1} + 2s^{-2})
    W(s) = U(s)/(1 + 3s^{-1} + 2s^{-2})
    W(s) + 3·[s^{-1}W(s)] + 2·[s^{-2}W(s)] = U(s)

    W(s) = -3·[s^{-1}W(s)] - 2·[s^{-2}W(s)] + U(s)                                      (5.2.21)

    Y(s) = 1·W(s) + 5·[s^{-1}W(s)] + 6·[s^{-2}W(s)] =
         = 1·{-3·[s^{-1}W(s)] - 2·[s^{-2}W(s)] + U(s)} + 5·[s^{-1}W(s)] + 6·[s^{-2}W(s)]
    Y(s) = (5 - 3)·[s^{-1}W(s)] + (6 - 2)·[s^{-2}W(s)] + U(s)
    Y(s) = 2·[s^{-1}W(s)] + 4·[s^{-2}W(s)] + U(s)                                       (5.2.22)
Now the relations (5.2.21) and (5.2.22) are represented by a signal flow
graph.
(Figure: the state diagram of (5.2.21) and (5.2.22): two integrators s^{-1}, with initial conditions x2(0) and x1(0), produce X2(s) = s^{-1}W(s) and X1(s) = s^{-2}W(s); the feedback branches -3 and -2 form the loops B1 and B2 closing into W(s); the branches 2, 4 and the direct branch 1 sum into Y(s).)
Figure no. 5.2.2.


Writing the relations in time domain we obtain,
    ẋ_1 = x_2
    ẋ_2 = -2x_1 - 3x_2 + u

    y = 4x_1 + 2x_2 + u

    A = [[0, 1], [-2, -3]] , b = [0, 1]^T , c = [4, 2]^T , d = 1


Now we can check that this state realisation (the controllable companion canonical form) of the transfer function
    H(s) = (s² + 5s + 6)/(s² + 3s + 2) = Y(s)/U(s) = ((s + 2)(s + 3))/((s + 2)(s + 1))
is controllable but not observable.
As we can see, the transfer function has common factors in the numerator and denominator, so one of the two properties, controllability or observability, is lost. In our case, since we built the controllable companion form, the lost property must be the observability.
    P = [b Ab] ,
    b = [0, 1]^T ,  Ab = [[0, 1], [-2, -3]]·[0, 1]^T = [1, -3]^T  ⇒
    P = [[0, 1], [1, -3]] ,  det(P) = -1 ≠ 0. The system is controllable.
    Q = [c^T ; c^T·A]
    c^T = [4 2] ,  c^T·A = [4 2]·[[0, 1], [-2, -3]] = [-4 -2]
    Q = [[4, 2], [-4, -2]] ,  det(Q) = -8 + 8 = 0. The system is not observable.

Operating on this signal flow graph, which is a state diagram (SD), we can get the transfer function and the two components of the free response.
    D = 1 + Σ_k (-1)^k·S_pk
    B_1 = -3s^{-1} ; B_2 = -2s^{-2}
    S_p1 = B_1 + B_2 = -3s^{-1} - 2s^{-2}
    S_p2 = 0
    D = 1 - S_p1 = 1 - (-3s^{-1} - 2s^{-2}) = 1 + 3s^{-1} + 2s^{-2}

    H(s) = T_u^y = (C_1·D_1 + C_2·D_2 + C_3·D_3)/D
    C_1 = 1 (the direct branch U→Y, which touches no loop), so D_1 = D = 1 + 3s^{-1} + 2s^{-2} ; C_2 = 2s^{-1} , D_2 = 1 ; C_3 = 4s^{-2} , D_3 = 1 ;

    H(s) = (1 + 3s^{-1} + 2s^{-2} + 2s^{-1} + 4s^{-2})/(1 + 3s^{-1} + 2s^{-2}) = (s² + 5s + 6)/(s² + 3s + 2)


5.3. Second Type D-I Canonical Form.

This state realisation is also called the observable companion canonical form: canonical because it has a minimum number of non-standard elements (0 or 1), and observable because it assures the observability of its state vector.
It is called a D-I (derivative-integral) realisation because it mainly interprets the input processing first as a derivative, whose result is then integrated. We can illustrate this D-I process considering the associative property
    Y(s) = H(s)·U(s) = (1/L(s))·[M(s)·U(s)] .                                           (5.3.1)
One method to get this canonical form is to start from the system differential equation,

    a_n·y^(n) + a_{n-1}·y^(n-1) + ... + a_1·y^(1) + a_0·y^(0) = b_n·u^(n) + b_{n-1}·u^(n-1) + ... + b_1·u^(1) + b_0·u^(0)   (5.3.2)
where we denote
    y^(k) = d^k y(t)/dt^k = D^k{y(t)} = D^k y = D{D^{k-1}y}                             (5.3.3)
    u^(k) = d^k u(t)/dt^k = D^k{u(t)} = D^k u = D{D^{k-1}u} .                           (5.3.4)
Using the derivative symbolic operator
    D{•} = d{•}/dt ,                                                                    (5.3.5)
the equation (5.3.2) can be written as,
    a_n·D^n y + a_{n-1}·D^{n-1}y + ... + a_1·Dy + a_0·y = b_n·D^n u + b_{n-1}·D^{n-1}u + ... + b_1·Du + b_0·u
or arranged as
    D^n[a_n·y - b_n·u] + D^{n-1}[a_{n-1}·y - b_{n-1}·u] + ... + D[a_1·y - b_1·u] + [a_0·y - b_0·u] = 0

    a_0·y - b_0·u + D[ a_1·y - b_1·u + D[ ... + D[ a_{n-1}·y - b_{n-1}·u + D[ a_n·y - b_n·u ] ]...] ] = 0 .   (5.3.6)
In this nested form, the innermost bracket a_n·y - b_n·u will be denoted x_1, the next bracket ẋ_1 + a_{n-1}·y - b_{n-1}·u will be denoted x_2, and so on up to x_n.
As we can see from (5.3.6), we denote,
    x_1 = a_n·y - b_n·u                                                                 (5.3.7)
from where,
    y = (1/a_n)·x_1 + (b_n/a_n)·u
    x_2 = a_{n-1}·y - b_{n-1}·u + ẋ_1 ⇒ ẋ_1 = -(a_{n-1}/a_n)·x_1 + x_2 + (b_{n-1} - b_n·a_{n-1}/a_n)·u
    x_3 = a_{n-2}·y - b_{n-2}·u + ẋ_2 ⇒ ẋ_2 = -(a_{n-2}/a_n)·x_1 + x_3 + (b_{n-2} - b_n·a_{n-2}/a_n)·u
    ..........                                                                          (5.3.8)
    x_n = a_1·y - b_1·u + ẋ_{n-1} ⇒ ẋ_{n-1} = -(a_1/a_n)·x_1 + x_n + (b_1 - b_n·a_1/a_n)·u
    0 = a_0·y - b_0·u + ẋ_n ⇒ ẋ_n = -(a_0/a_n)·x_1 + (b_0 - b_n·a_0/a_n)·u
From (5.3.7) and (5.3.8) we can write the state equations in matrix form,
    ẋ = A·x + b·u , x = [x_1, x_2, ..., x_{n-1}, x_n]^T
    y = c^T·x + d·u
where,

    A =
        | -a_{n-1}/a_n   1    0    ..   0    0 |
        | -a_{n-2}/a_n   0    1    ..   0    0 |
        | -a_{n-3}/a_n   0    0    ..   0    0 |
        | ..             ..   ..   ..   ..   .. |
        | -a_1/a_n       0    0    ..   0    1 |
        | -a_0/a_n       0    0    ..   0    0 |

    b = [b_{n-1} - b_n·a_{n-1}/a_n , b_{n-2} - b_n·a_{n-2}/a_n , ... , b_1 - b_n·a_1/a_n , b_0 - b_n·a_0/a_n]^T ,
    c = [1/a_n , 0 , 0 , ... , 0]^T ,  d = b_n/a_n
Example.
Our previous example with:
n = 2, b_2 = 1, b_1 = 5, b_0 = 6, a_2 = 1, a_1 = 3, a_0 = 2  ⇒
A = [ −3  1 ; −2  0 ] ,  b = [ 2 ; 4 ] ,  c = [ 1 ; 0 ] ,  d = 1
Now we can check the controllability and the observability properties.
P = [ b  Ab ] = [ 2  −2 ; 4  −4 ]  ⇒  det(P) = 0 .
This realisation of the same transfer function as in the above example is not a controllable one, but
Q = [ c^T ; c^T A ] = [ 1  0 ; −3  1 ]  ⇒  det(Q) = 1 ≠ 0
which confirms that this canonical form is an observable one.
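The construction above can be sketched as a short Python check (the function name `di_canonical_form` is our own; this is an illustrative sketch, not the book's algorithm). It builds the matrices from the polynomial coefficients and reproduces the determinants quoted in the example:

```python
def di_canonical_form(a, b):
    """a = [a_n, ..., a_0], b = [b_n, ..., b_0], with a[0] != 0."""
    n = len(a) - 1
    an, bn = a[0], b[0]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][0] = -a[i + 1] / an          # first column: -a_{n-1-i}/a_n
        if i + 1 < n:
            A[i][i + 1] = 1.0             # ones on the superdiagonal
    B = [b[i + 1] - bn * a[i + 1] / an for i in range(n)]
    c = [1.0 / an] + [0.0] * (n - 1)
    d = bn / an
    return A, B, c, d

# Example from the text: H(s) = (s^2 + 5s + 6)/(s^2 + 3s + 2)
A, B, c, d = di_canonical_form([1, 3, 2], [1, 5, 6])

# 2x2 controllability matrix P = [b  Ab], observability matrix Q = [c^T ; c^T A]
Ab = [A[0][0] * B[0] + A[0][1] * B[1], A[1][0] * B[0] + A[1][1] * B[1]]
cA = [c[0] * A[0][0] + c[1] * A[1][0], c[0] * A[0][1] + c[1] * A[1][1]]
det_P = B[0] * Ab[1] - Ab[0] * B[1]
det_Q = c[0] * cA[1] - cA[0] * c[1]

assert A == [[-3.0, 1.0], [-2.0, 0.0]] and B == [2.0, 4.0]
assert det_P == 0.0 and det_Q == 1.0   # not controllable, observable
```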


5.4. Jordan Canonical Form.

This is a canonical form which points out the eigenvalues of the system.
The system matrix has a diagonal form if the eigenvalues are distinct, or a block-diagonal form if there are multiple eigenvalues. This form can be obtained using the so-called partial fraction expansion of the transfer function. Of course, for that we have to know the roots of the denominator.
Example.
H(s) = (b_4 s^4 + ... + b_0) / [ (s − λ_1)(s − λ_2)^2 ((s − α)^2 + β^2) ] =
     = c_0 + c_11/(s − λ_1) + c_21/(s − λ_2) + c_22/(s − λ_2)^2 + (c_31 s + c_32)/((s − α)^2 + β^2) = Y(s)/U(s)
Let us denote
H_3(s) = (c_31 s + c_32) / ((s − α)^2 + β^2)
To determine this canonical form we first draw, as a block diagram, the output relation, considering for repeated roots a series connection of first order blocks. Then we express the relations in the time domain, assigning a state component to the output of each first order block; for complex roots (poles) we consider a number of state components equal to the number of complex poles.

Figure no. 5.4.1. Block diagram of the Jordan (partial fraction) realisation: the parallel branches c_0 (output Y_0) and c_11/(s − λ_1) (state x_11, output Y_1), the series chain 1/(s − λ_2), 1/(s − λ_2) (states x_22, x_21) weighted by c_21, c_22 (output Y_2), and the block H_3 (states x_31, x_32, output Y_3), all summed into Y.


x_11 = U(s)/(s − λ_1)  ⇒  s x_11 − λ_1 x_11 = U  ⇒  ẋ_11 = λ_1 x_11 + u
x_21 = x_22/(s − λ_2)  ⇒  ẋ_21 = λ_2 x_21 + x_22
x_22 = U(s)/(s − λ_2)  ⇒  ẋ_22 = λ_2 x_22 + u

For the blocks with complex poles we can use, as an independent problem, any method of state realisation, for example the D-I canonical form.
x_3 = [ x_31 ; x_32 ]
ẋ_3 = A_3 x_3 + b_3 u
y_3 = c_3^T x_3 + d_3 u ,  d_3 = 0
(d_3 = 0 because H_3 is a strictly proper part: the degree of the denominator is greater than the degree of the numerator). We have a second order system in this case.
The output is
y = c_0 u + c_11 x_11 + c_22 x_21 + c_21 x_22 + c_32 x_31 + c_31 x_32
We construct the state vector by appending all the variables chosen as state variables:
x_1 = x_11 ;  x_2 = [ x_21 ; x_22 ] ;  x_3 = [ x_31 ; x_32 ]  ⇒  x = [ x_1 ; x_2 ; x_3 ]
Write the state equations in matrix form:
ẋ = A_J x + b_J u
y = C_J x + d_J u
where A_J is block-diagonal, built from the Jordan blocks,

A_J = [ λ_1   0    0   |  0
        0    λ_2   1   |  0
        0     0   λ_2  |  0
        0     0    0   | A_3 ]

b_J = [ b_1J ; b_2J ; b_3J ] = [ 1 ; 0 ; 1 ; b_31 ; b_32 ] ,
C_J = [ C_1J ; C_2J ; C_3J ] = [ c_11 ; c_22 ; c_21 ; c_32 ; c_31 ] ,
d_J = c_0 = b_n/a_n .


For the block H_3 we write the numerator coefficients as c_131, c_130 (so that c_31, c_32 remain available for the output gains):
H_3(s) = (c_131 s + c_130) / (s^2 − 2αs + α^2 + β^2)
ẋ_3 = A_3 x_3 + b_3 u
y_3 = c_3^T x_3 + d_3 u
Y_3(s) = (c_131 s + c_130) · U(s)/((s − α)^2 + β^2)
Denote
W(s) = U(s)/((s − α)^2 + β^2) = (1/β^2) · [β/(s − α)]^2 / (1 + [β/(s − α)]^2) · U(s)
We can interpret this relation as a feedback connection:

Figure no. 5.4.2. Feedback realisation of W(s): the input U passes through the gain 1/β^2; the loop error (minus the feedback x_31) passes through β/(s − α), giving x_32, which passes through β/(s − α), giving x_31 = W.

x_31 = [β/(s − α)] x_32  ⇒  ẋ_31 = α x_31 + β x_32
x_32 = [β/(s − α)] (−x_31 + U/β^2)  ⇒  ẋ_32 = −β x_31 + α x_32 + (1/β) u
W = x_31  ⇒  (s − α)W = β x_32
Y_3 = [ c_131 (s − α + α) + c_130 ] W(s) = c_131 β x_32 + (c_130 + α c_131) x_31 = c_3^T x_3
c_32 = c_130 + α c_131 ,  c_31 = c_131 β ,  Y_3 = c_3^T x_3
A_3 = [ α  β ; −β  α ] ,  b_3 = [ 0 ; 1/β ] ,  c_3 = [ c_32 ; c_31 ]
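The complex-pole block realisation can be verified numerically with the 2x2 resolvent formula (a sketch of our own; α, β and the numerator coefficients are arbitrary test values):

```python
# alpha, beta and the numerator coefficients are arbitrary test values.
alpha, beta = -0.5, 2.0
c131, c130 = 1.5, 0.7

# Realisation derived in the text: A3 = [[a, b], [-b, a]], b3 = [0, 1/b]
A3 = [[alpha, beta], [-beta, alpha]]
b3 = [0.0, 1.0 / beta]
c3 = [c130 + alpha * c131, c131 * beta]   # gains of x31 and x32

s = 0.3 + 0.9j  # complex test point
# (sI - A3) = [[s-a, -b], [b, s-a]]; apply its inverse to b3
m11, m12 = s - alpha, -beta
m21, m22 = beta, s - alpha
det = m11 * m22 - m12 * m21               # equals (s-a)^2 + b^2
x31 = (m22 * b3[0] - m12 * b3[1]) / det
x32 = (-m21 * b3[0] + m11 * b3[1]) / det
H_state = c3[0] * x31 + c3[1] * x32

H3 = (c131 * s + c130) / ((s - alpha) ** 2 + beta ** 2)
assert abs(H_state - H3) < 1e-12
```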
The companion canonical forms D-I are very easy to determine, but they are not robust with respect to numerical computation for high dimensions.
The Jordan canonical form is a bit more difficult to determine, but in many cases it is indicated for numerical computation.
There are several algebraic methods for canonical form determination.
Example: the Jordan canonical form can be obtained using the modal matrix:
T = M^(−1) ,  M = [U_1, ..., U_n] ,
where U_i are the eigenvectors of the matrix A (A U_i = λ_i U_i), provided they are independent.
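A minimal 2x2 sketch of the modal-matrix method (the numbers are our own hand-picked example, not from the book): M collects the eigenvectors, T = M^(−1), and T·A·M comes out diagonal.

```python
# Hand-picked example: A has eigenvalues -1 and -2.
A = [[0.0, 1.0], [-2.0, -3.0]]
U1 = [1.0, -1.0]                        # A U1 = -1 * U1
U2 = [1.0, -2.0]                        # A U2 = -2 * U2

M = [[U1[0], U2[0]], [U1[1], U2[1]]]    # modal matrix M = [U1 U2]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
T = [[M[1][1] / detM, -M[0][1] / detM],
     [-M[1][0] / detM, M[0][0] / detM]]  # T = M^{-1} (2x2 inverse)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AJ = matmul(matmul(T, A), M)            # similarity transform: diagonal
assert all(abs(AJ[i][j] - [[-1.0, 0.0], [0.0, -2.0]][i][j]) < 1e-12
           for i in range(2) for j in range(2))
```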


5.5 State Equations Realisation Starting from the Block Diagram.

If the system is represented by a block diagram whose blocks have some physical meaning, we can get a useful state realisation in which as many state variables as possible have a physical meaning. Of course, the matrices have no economical form, but they are more robust for numerical computing. The following algorithm can be performed:
1) If the blocks are complex, factor them into series connections as much as possible, so that only the following elements appear:
1.  b_0/(a_1 s + a_0)
2.  (b_1 s + b_0)/(a_1 s + a_0)
3.  (b_2 s^2 + b_1 s + b_0)/(a_2 s^2 + a_1 s + a_0)
4.  s (the pure derivative)

Figure no. 5.5.1. The four admissible elementary blocks.


2) For any such block denote the state variables and determine the state equations, that means:
For block 1 the output can be chosen as a state variable:
ẋ = −(a_0/a_1) x + (b_0/a_1) u
y = x   (5.5.1)
For block 2 the output cannot be chosen as a state variable, because we do not want the time derivative of the input to appear; this transfer function is therefore split into two components:
H = (b_1 s + b_0)/(a_1 s + a_0) = b_1/a_1 + (b_0 − b_1 a_0/a_1)/(a_1 s + a_0)   (5.5.2)
Denote as state variable the output of the first order dynamical block and write the state equations:
ẋ = −(a_0/a_1) x + (1/a_1)(b_0 − b_1 a_0/a_1) u
y = x + (b_1/a_1) u   (5.5.3)

Figure no. 5.5.2. Block 2 split into the direct gain b_1/a_1 and the path (b_0 − b_1 a_0/a_1) · 1/(a_1 s + a_0) with output X, summed into Y.


For block 3 we can use any state realisation method (a canonical form, or the Jordan form pointing out the imaginary and real parts of the poles):
ẋ_1 = x_2
ẋ_2 = −(a_0/a_2) x_1 − (a_1/a_2) x_2 + (1/a_2) u   (5.5.4)
y = (b_0 − b_2 a_0/a_2) x_1 + (b_1 − b_2 a_1/a_2) x_2 + (b_2/a_2) u
For block 4 (the pure derivative) we can choose its input as being a variable x; its output is then ẋ, and this x will disappear if we want to get a minimal realisation (minimum number of state variables).

Figure no. 5.5.3. Block 4: input x, output y = ẋ.
3) Denote the inputs and outputs of the blocks by variables and write the algebraic equations between them.
4) Using the state equations determined above and the algebraic equations, eliminate all the intermediary variables and write the state equations in matrix form, considering as state vector the vector composed of all the state components denoted in the block diagram.
Example (Figure no. 5.5.4): a loop containing block 1 = (s + 2)/(s + 4) (state x_1, input u_1 = u − y_3 − y, output y_1), block 2 = 1/s (state x_2, input u_2 = y_1, output y_2 = y), and block 3 = 1/(s + 3) (state x_3, input u_3 = y_1, output y_3). Figure no. 5.5.5 shows the corresponding detailed realisation diagram.

1.  ẋ_1 = −4x_1 − 2u_1 ,  y_1 = x_1 + u_1
2.  ẋ_2 = u_2 ,  u_2 = y_1 ,  x_2 = (1/s) y_1
3.  ẋ_3 = −3x_3 + u_3 ,  y_3 = x_3 ,  y = x_2 ,  u_1 = u − y_3 − y ,  u_3 = y_1  ⇒
ẋ_1 = −4x_1 − 2(u − x_3 − x_2)  ⇒  ẋ_1 = −4x_1 + 2x_2 + 2x_3 − 2u
ẋ_2 = x_1 + u − x_3 − x_2  ⇒  ẋ_2 = x_1 − x_2 − x_3 + u
ẋ_3 = −3x_3 + x_1 + u − x_3 − x_2  ⇒  ẋ_3 = x_1 − x_2 − 4x_3 + u ,  y = x_2

x = [ x_1 ; x_2 ; x_3 ]  ⇒  ẋ = Ax + bu ,  y = c^T x ,
A = [ −4  2  2 ; 1  −1  −1 ; 1  −1  −4 ] ,  b = [ −2 ; 1 ; 1 ] ,  c = [ 0 ; 1 ; 0 ] ,  d = 0


6. FREQUENCY DOMAIN SYSTEM ANALYSIS.


Frequency domain system analysis involves two main topics:
1. Experimental frequency characteristics,
2. Transfer functions description in frequency domain.

6.1. Experimental Frequency Characteristics.


In practical engineering the so-called experimental frequency characteristics, or simply frequency characteristics, are frequently used. They can be obtained using devices connected as in Fig. 6.1.1.
Let us consider a physical object with the input u_a and the output y_a. For example, this object can be an electronic amplifier, an electric motor, etc.

Figure no. 6.1.1. A sinusoidal signal generator applies u_a to the physical object; a two-way recorder records u_a and y_a.


Supposing the system is in a steady state expressed by the constant input-output pair (U_0, Y_0), we apply a sinusoidal input signal of period T,
u_a(t) = U_0 + U_m · sin(ωt) ,  ω = 2πf = 2π/T   (6.1.1)
We have to note that the Romanian term "pulsatie",
ω = 2πf = 2π/T ,  [ω] = sec^(−1)   (6.1.2)
is called in English by the general term "frequency".
The deviation of the physically applied signal u_a(t) with respect to the steady state value U_0 is
u(t) = u_a(t) − U_0 = U_m · sin(ωt) .   (6.1.3)
Figure no. 6.1.2. The recorded input u_a(t) = U_0 + U_m sin(ωt) and output y_a(t) = Y_0 + Y_m sin[ω(t − ∆t)]; the time shift between the steady state crossings is ∆t = t_y0 − t_u0 = −ϕ/ω.



The curves of the input-output response, obtained using a two-way recorder, look like in Fig. 6.1.2.
After a transient time period, when a so-called "permanent regime" is installed, the output response is a sinusoidal function with the same frequency ω but with another amplitude Y_m, shifted in time (with respect to the steady state values time crossings t_y0, t_u0) by a value ∆t,
∆t = t_y0 − t_u0 .   (6.1.4)
y_a(t) = Y_0 + Y_m · sin[ω(t − ∆t)] .   (6.1.5)
Let us interpret this time interval ∆t as a phase ϕ with respect to the frequency ω,
ϕ = −ω · ∆t   (6.1.6)
If the shifting time ∆t > 0 (the output is delayed with respect to the input, t_y0 > t_u0) the phase is negative, ϕ < 0, a "retard of phase"; if ∆t < 0 (the output is in advance with respect to the input, t_y0 < t_u0) the phase is positive, ϕ > 0, an "advance of phase". The permanent response is written as,
y_a(t) = Y_0 + Y_m · sin(ωt + ϕ) ,  ϕ = −ω · ∆t   (6.1.7)
Let us denote the deviation with respect to the steady state value Y_0 by,
y(t) = y_a(t) − Y_0 = Y_m · sin(ωt + ϕ) .   (6.1.8)
During one experiment, performed for one value of the frequency ω, we have to measure the amplitudes U_m, Y_m and the shifting time ∆t. Two of the measured values, Y_m and ∆t, depend on this value of ω, and one can compute the ratio
A(ω) = Y_m / U_m   (6.1.9)
in the permanent regime at the frequency ω.
We can also compute
ϕ(ω) = −ω · ∆t   (6.1.10)
in the permanent regime at the frequency ω. We repeat this experiment for different values of ω > 0 and, if possible, for ω ≥ 0.
These two variables, A and ϕ, can be plotted versus the same frequency scale for different values of ω, as in Fig. 6.1.3.
Figure no. 6.1.3. The measured values A(ω_1), A(ω_2), ... and ϕ(ω_1), ϕ(ω_2), ... plotted versus ω.
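The measurement procedure described above can be imitated numerically (a sketch of our own, with arbitrary test values): drive a first-order object T·dy/dt + y = u with u(t) = sin(ωt), wait for the permanent regime, and read off the amplitude ratio; it should match the theoretical gain of 1/(Ts+1).

```python
import math

# Our own test choices: T = 1, omega = 1, Um = 1, explicit Euler integration.
T, w = 1.0, 1.0
dt, t_end = 1e-4, 40.0

y, t, ym = 0.0, 0.0, 0.0
while t < t_end:
    u = math.sin(w * t)
    y += dt * (u - y) / T                 # Euler step of T*dy/dt = u - y
    t += dt
    if t > t_end - 2 * math.pi / w:       # measure over the last full period
        ym = max(ym, abs(y))

A_measured = ym / 1.0                     # Um = 1
A_theory = 1.0 / math.sqrt(1.0 + (w * T) ** 2)   # |H(jw)| for 1/(Ts+1)
assert abs(A_measured - A_theory) < 1e-3
```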

The function A(ω) obtained in such a way is called the
Magnitude Characteristic (Magnitude-Frequency Characteristic, or Gain Characteristic):
A(ω) : ω ∈ [0, ∞)   (6.1.11)
The function
ϕ(ω) : ω ∈ [0, ∞)   (6.1.12)
is called the
Phase Characteristic (Phase-Angle Characteristic, Phase-Frequency Characteristic).
These experimental characteristics can completely describe all the system properties for linear systems, and some properties of non-linear systems.
In the above diagrams the frequency axis is on a linear scale, but in practice the so-called logarithmic scale is used.
Based on the magnitude and phase characteristics, other characteristics can be obtained, like:
Real Frequency Characteristic:
P(ω) = A(ω) · cos[ϕ(ω)] ,  ω ∈ [0, ∞)   (6.1.13)
Imaginary Frequency Characteristic:
Q(ω) = A(ω) · sin[ϕ(ω)] ,  ω ∈ [0, ∞)   (6.1.14)
Complex Frequency Characteristic:
It is a polar representation of the pair (A(ω), ϕ(ω)), ω ∈ [0, ∞), or a Cartesian representation of (P(ω), Q(ω)):
P = P(ω) ;  Q = Q(ω) .   (6.1.15)
This complex characteristic is marked by an arrow showing the increasing direction of ω; it is also marked by the values of ω, as in Fig. 6.1.4.
The ϕ angle can be considered on the circle [0, 2π) or on other trigonometric circles like, for example, [−π, π) or (−2π, 0].
Figure no. 6.1.4. Complex frequency characteristic in the (P, Q) plane: the points (P_1, Q_1), (P_2, Q_2) at ω_1, ω_2 have polar coordinates A_1, ϕ_1 and A_2, ϕ_2.


6.2. Relations Between Experimental Frequency Characteristics and Transfer Function Attributes.

Let us consider a linear system with the transfer function
H(s) = Y(s)/U(s) = M(s) / [ a_n ∏_{i=1}^{N} (s − λ_i)^{m_i} ] ,  λ_i ∈ C   (6.2.1)
and an input
u(t) = u_a(t) − U_0 = U_m · sin(ωt)  ⇒   (6.2.2)
U(s) = L{u(t)} = U_m ω/(s^2 + ω^2) = U_m ω/[(s − jω)(s + jω)]   (6.2.3)
The system has N distinct poles; the pole λ_i has the order of multiplicity m_i, Σ_{i=1}^{N} m_i = n. Among them N_1 poles are real and the other 2N_2 are complex, where N_1 + 2N_2 = n. The complex poles are
λ_i = σ_i + jω_i ,  σ_i = Re(λ_i) ,  e^{λ_i t} = e^{σ_i t} (cos ω_i t + j sin ω_i t)
The system response, from zero initial conditions, to this input is
Y(s) = L{ y_a(t) − Y_0 } = L{ y(t) } = H(s) · U(s) .   (6.2.4)
Supposing that ω ≠ Im(λ_i) ∀i, that means that ω is not a resonant frequency, the output is:
y(t) = Σ Rez{ M(s)/[a_n ∏_{i=1}^{N}(s − λ_i)^{m_i}] · ω U_m/[(s − jω)(s + jω)] · e^{st} }
y(t) = [ Σ_{i=1}^{N_1} P_i(t) e^{λ_i t} + Σ_{i=N_1+1}^{N_1+N_2} Ψ_i(t) e^{σ_i t} ] U_m + [ H(jω) · (ω/2jω) e^{jωt} + H(−jω) · (ω/−2jω) e^{−jωt} ] U_m   (6.2.5)
where the first bracket is the transient response r(t) and the second one is the permanent response y_p(t).

The transient response is determined by the poles of the transfer function, and the permanent response is determined by the poles of the Laplace transform of the input.


The transient response will disappear when t → ∞ if the real parts of all transfer function poles are negative, Re(λ_i) < 0.
This means that all the poles have to be located in the left half of the complex plane. This is the main stability criterion for linear systems.
The expression H(jω) is a complex number for which we can define,
A(ω) = |H(jω)|
ϕ(ω) = arg(H(jω))   (6.2.6)
H(jω) = A(ω) e^{jϕ(ω)}
H(−jω) = A(ω) e^{−jϕ(ω)}
In such a way:
y_p(t) = [ H(jω) (ω/2jω) e^{jωt} + H(−jω) (ω/−2jω) e^{−jωt} ] U_m
y_p(t) = U_m · A(ω) · [ e^{j(ωt+ϕ)} − e^{−j(ωt+ϕ)} ] / 2j
y_p(t) = A(ω) U_m sin(ωt + ϕ(ω))   (6.2.7)
with Y_m = A(ω) U_m and ϕ = −ω∆t, ∆t = ∆t(ω).
We can see that the permanent response is also a sinusoidal signal with the same frequency, but with a shifted phase.
Y_m / U_m = A(ω) = |H(jω)|   (6.2.8)
that means the ratio between the experimental amplitudes, as defined in (6.1.9), is exactly the modulus of the complex expression H(jω) obtained replacing s by jω. At the same time the experimental phase, given by (6.1.10), is the argument of the same complex expression H(jω),
ϕ(ω) = arg(H(jω))   (6.2.9)

There is a strong connection between the experimental frequency characteristics and the transfer function attributes: modulus and argument. Manipulating the transfer function attributes we can point out, in a reverse order, what will be the shape of some experimental characteristics.
Having a transfer function H(s), we can determine its frequency description just substituting (if possible) s by jω:
H(jω) = H(s)|_{s=jω}   (6.2.10)
From this theoretical expression we obtain,

Magnitude Frequency Characteristic:
A(ω) = |H(jω)|   (6.2.11)
Phase Frequency Characteristic:
ϕ(ω) = arg(H(jω))   (6.2.12)
Real Frequency Characteristic:
P(ω) = Re(H(jω))   (6.2.13)
Imaginary Frequency Characteristic:
Q(ω) = Im(H(jω))   (6.2.14)
We remember that,
H(jω) = A(ω) e^{jϕ(ω)} = P(ω) + jQ(ω)   (6.2.15)
A(ω) = sqrt( P^2(ω) + Q^2(ω) )   (6.2.16)
ϕ(ω) = { arctg[Q(ω)/P(ω)] , P(ω) > 0 ;
         arctg[Q(ω)/P(ω)] + π , P(ω) < 0, Q(ω) ≥ 0 ;
         arctg[Q(ω)/P(ω)] − π , P(ω) < 0, Q(ω) < 0 ;
         π/2 , P(ω) = 0, Q(ω) > 0 ;
         −π/2 , P(ω) = 0, Q(ω) < 0 } ,  ϕ ∈ (−π, π]   (6.2.17)
The complex frequency characteristic is the polar plot of the pair (A(ω), ϕ(ω)) or the Cartesian plot of the pair (P(ω), Q(ω)).
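The four characteristics can be computed directly from H(jω); a short Python check (our own sketch, for H(s) = 1/(Ts+1) with arbitrary T and ω) compares the quadrant formula (6.2.17) with the built-in complex phase:

```python
import cmath
import math

T = 2.0          # our own test value
def H(s):
    return 1.0 / (T * s + 1.0)

w = 0.8
Hjw = H(1j * w)                  # substitute s -> jw
P, Q = Hjw.real, Hjw.imag
A = math.sqrt(P ** 2 + Q ** 2)   # (6.2.16)
# quadrant-aware phase, as in (6.2.17); here P > 0 so the first branch applies
if P > 0:
    phi = math.atan(Q / P)
elif P < 0:
    phi = math.atan(Q / P) + (math.pi if Q >= 0 else -math.pi)
else:
    phi = math.pi / 2 if Q > 0 else -math.pi / 2

assert abs(A - abs(Hjw)) < 1e-12
assert abs(phi - cmath.phase(Hjw)) < 1e-12
```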


6.3. Logarithmic Frequency Characteristics.

6.3.1. Definition of Logarithmic Characteristics.


To point out better the properties of a system, the frequency
characteristics are plotted on the so called logarithmic scale, with respect to
frequency and amplitude.
The logarithmic scale for the variable ω is just the linear scale for the
variable x=lgω as is illustrated in Fig. 6.3.1.
Figure no. 6.3.1. The logarithmic scale ω = 10^x versus the linear scale x = lg ω; one decade on the ω axis corresponds to one unit on the x axis. For example lg(2) = 0.30103, lg(3) = 0.477121, lg(8) = 0.90309, lg(20) = 1 + lg(2) = 1.30103.


Example.
ω = 2 ⇒ x = lg 2 = 0.30103
ω = 3 ⇒ x = lg 3 = 0.477121
ω = 0.2 ⇒ x = lg 0.2 = −1 + lg 2 = −1 + 0.30103 = −0.69897
ω = 0 will be plotted at x = −∞ .
A decade is a frequency interval [ω_1, ω_2] such that ω_2/ω_1 = 10.
On the logarithmic scale we do not have a linear space and we cannot add or perform operations as in the usual analysis, but in the linear space "X" all the mathematical operations on x which are defined in linear spaces are possible.
The values of magnitude characteristics A(ω) can be plotted also on
logarithmic scale.
Frequently, for the values of the magnitude characteristic, a linear scale L(ω) expressed in decibels (dB) is utilised, where
L(ω) = 20 lg(A(ω)) .   (6.3.1)
If we have a value L dB, then the corresponding magnitude value A is
A = 10^{L/20}   (6.3.2)
Decibels are utilised, for example, to measure the sound intensity,
I_s = 20 lg(P/P_0)   (6.3.3)
I_s - sound intensity expressed in dB
P - pressure on the ear
P_0 - minimum pressure to produce a feeling of sound.
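The dB conversions (6.3.1)-(6.3.2) can be sketched as a round trip (our own helper names):

```python
import math

def to_dB(A):
    """L = 20*lg(A), eq. (6.3.1)."""
    return 20.0 * math.log10(A)

def from_dB(L):
    """A = 10^(L/20), eq. (6.3.2)."""
    return 10.0 ** (L / 20.0)

assert abs(to_dB(10.0) - 20.0) < 1e-12      # one decade of amplitude = 20 dB
assert abs(to_dB(2.0) - 6.0206) < 1e-3      # doubling the amplitude is about +6 dB
A = 3.7
assert abs(from_dB(to_dB(A)) - A) < 1e-12   # the two maps are inverses
```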

Bode Diagram (Bode Plot).

The Bode diagram is the pair of the magnitude characteristic, represented on logarithmic scale A(ω) or on linear scale in dB L(ω), and the phase characteristic ϕ(ω), represented on linear scale, both versus the same logarithmic scale in ω, as depicted in Fig. 6.3.2.
Figure no. 6.3.2. Bode diagram: the magnitude frequency characteristic (A(ω) on logarithmic scale, L(ω) in dB on linear scale) and the phase frequency characteristic ϕ(ω) on linear scale, both versus ω on logarithmic scale.

6.3.2. Asymptotic Approximations of Frequency Characteristics.

The magnitude and phase characteristics are approximated by their asymptotes with respect to the linear variable x. Such characteristics are called asymptotic frequency characteristics.

6.3.2.1. Asymptotic Approximations of the Magnitude Frequency Characteristic for a First Degree Complex Variable Polynomial.
Let us consider a first degree complex polynomial
H(s) = Ts + 1   (6.3.4)
The exact magnitude frequency characteristic of this polynomial is
H(jω) = jωT + 1 ,  A(ω) = sqrt( (ωT)^2 + 1 ) ,  L(ω) = 20 lg[A(ω)]   (6.3.5)
It is approximated by
A(ω) = sqrt( (ωT)^2 + 1 ) ≈ A_a(ω) = { 1 , ωT < 1 ;  ωT , ωT ≥ 1 }
and
and

L(ω) = 20 lg[A(ω)] ≈ L_a(ω) = { 0 , ωT < 1 ;  20 lg(ωT) , ωT ≥ 1 } .   (6.3.6)
Let us denote by ωT the so-called normalised frequency, where ω = 2πf. Note that ω is also called in English "frequency", which corresponds to the Romanian "pulsatie", and f is called frequency both in English and in Romanian.
As 2π is a nondimensional number, both ω and f are measured in [sec]^(−1), representing the natural frequency, and T, called the time constant, is measured in [sec]. So the normalised frequency ωT is a nondimensional number.
Frequently some items of the frequency characteristics are represented in normalised frequency. The recovery of their shape in natural frequency is a matter of frequency scale gradation, as depicted in Fig. 6.3.3.
Figure no. 6.3.3. The same logarithmic scale read in normalised frequency ωT (0.01, 0.1, 1, 10, 100) and in natural frequency ω (0.01/T, 0.1/T, 1/T, 10/T, 100/T).


The two approximating branches from (6.3.6) are the horizontal and oblique asymptotes of the corresponding function of L(ω) in the variable x in the linear space X,
F(x) = L(ω)|_{ωT=10^x} = 20 lg( sqrt(10^{2x} + 1) )   (6.3.7)
lg(ωT) = x ⇒ ωT = 10^x ,   (6.3.8)
The horizontal asymptote is,
y = lim_{x→−∞} F(x) = 0   (6.3.9)
The oblique asymptote is,
y = mx + n ,  m = lim_{x→∞} F(x)/x = 20 ,  n = lim_{x→∞} [F(x) − mx] = 0   (6.3.10)
So, the two asymptotes in the linear space are
y = 20x , for x → ∞   (6.3.11)
y = 0 , for x → −∞ .   (6.3.12)
The slopes of the asymptotes can also be expressed in logarithmic scale as a number of decibels over a decade, dB/dec.
When the linear variable x increases by 1 unit in the linear space X = R, the variable ωT (or ω) on the logarithmic scale increases 10 times (it covers a decade):
x = lg(ωT) :  x_2 − x_1 = 1 ⇒ lg(ω_2 T) − lg(ω_1 T) = 1 ⇒ ω_2 T/ω_1 T = 10 ⇒ ω_2/ω_1 = 10
The slope in the linear space is
m = (y_2 − y_1)/(x_2 − x_1)


which can be interpreted as the variation
m = y_2 − y_1  if  x_2 − x_1 = 1 ⇔ ω_2 T/ω_1 T = 10 ⇔ one decade
But
y_2 = 20 lg(A(ω_2)) = L(ω_2)
y_1 = 20 lg(A(ω_1)) = L(ω_1)
so the slope in logarithmic scale for the magnitude characteristic is expressed as,
m = y_2 − y_1 = 20 lg(A(ω_2)) − 20 lg(A(ω_1)) = L(ω_2) − L(ω_1) dB  if  ω_2 T/ω_1 T = ω_2/ω_1 = 10 ⇔ one decade
which allows us to express the slope as m [dB/dec].
For (6.3.10) the slope is m = 20 dB/dec.
The exact and asymptotic frequency characteristics of (6.3.5), (6.3.6) are depicted in Fig. 6.3.4.
Figure no. 6.3.4. Exact magnitude characteristic L(ω) of Ts + 1 and its asymptotes: the horizontal asymptote y = 0 (slope 0 dB/dec) for ωT < 1 and the oblique asymptote y = 20x (slope m = 20 dB/dec) for ωT ≥ 1; the maximum error, 3 dB, occurs at ωT = 1.
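The 3 dB maximum error at the corner can be checked numerically (our own sketch): scanning ωT over four decades, the worst gap between exact and asymptotic magnitude is 20·lg(sqrt(2)) ≈ 3.0103 dB, at ωT = 1.

```python
import math

def L_exact(wT):
    """Exact magnitude of Ts + 1 in dB, eq. (6.3.5)."""
    return 20.0 * math.log10(math.sqrt(wT ** 2 + 1.0))

def L_asym(wT):
    """Asymptotic magnitude, eq. (6.3.6)."""
    return 0.0 if wT < 1.0 else 20.0 * math.log10(wT)

# scan wT from 0.01 to 100 on a logarithmic grid (includes wT = 1 exactly)
grid = [10 ** (k / 100.0 - 2.0) for k in range(401)]
max_err = max(L_exact(wT) - L_asym(wT) for wT in grid)

assert abs(max_err - 20.0 * math.log10(math.sqrt(2.0))) < 1e-2
assert abs(max_err - 3.0103) < 1e-2
```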

6.3.2.2. Asymptotic Approximations of the Phase Frequency Characteristic for a First Degree Complex Variable Polynomial.
Let us consider a first degree complex polynomial
H(s) = Ts + 1   (6.3.13)
The exact phase frequency characteristic of this polynomial is
ϕ(ω) = arg(jωT + 1) = arctg(ωT)   (6.3.14)
It is approximated by
ϕ(ω) = arctg(ωT) ≈ ϕ_a(ω) = { 0 , ωT < 0.2 ;  (ln 10 / 2) lg(ωT) + π/4 , ωT ∈ [0.2, 5] ;  π/2 , ωT > 5 } ,  ω ∈ [0, ∞)   (6.3.15)
This approximation means three lines, which are:


- The two horizontal asymptotes of the phase frequency characteristic ϕ(ω) in the linear space of the variable x = lg(ωT), evaluated for the function
G(x) = ϕ(ω)|_{ωT=10^x} = arctg(10^x)
x → −∞ ⇔ ω → 0 ⇒ G(x) → 0
x → +∞ ⇔ ω → ∞ ⇒ G(x) → π/2
- A line having the slope of ϕ(ω) at the particular point
ωT = 1 ⇔ x = 0
slope which is evaluated for the function G(x) by the derivative
G'(x) = 10^x ln 10 / (10^{2x} + 1) ⇒ G'(0) = ln 10 / 2 .
Both ϕ(ω ) and ϕ a (ω ) are depicted in Fig. 6.3.5.

Figure no. 6.3.5. Exact phase characteristic ϕ(ω) = arctg(ωT) and its asymptotic approximation ϕ_a(ω): the horizontal asymptotes 0 and π/2, and the oblique line y = (ln 10 / 2)x + π/4 between ω_1 T = 0.2 (x_1 = −0.69897) and ω_2 T = 5 (x_2 = 0.69897); the maximum error is about 6 degrees.


Other asymptotic approximations will be presented later, when we discuss the elementary frequency characteristics.


6.4. Elementary Frequency Characteristics.

Elementary frequency characteristics are used to draw the frequency characteristics of any transfer function. There are six such elementary frequency characteristics, coming out from the factorisation of the transfer function polynomials. They are also called frequency characteristics of typical transfer functions, or of typical elements.
For each element A(ω), ϕ(ω), L(ω) are evaluated and their asymptotic counterparts are deduced, accompanied by Bode diagrams.

6.4.1. Proportional Element.


H(s) = K   (6.4.1)
A(ω) = |K|   (6.4.2)
ϕ(ω) = { 0 , K ≥ 0 ;  π , K < 0 }   (6.4.3)
L(ω) = 20 lg|K|   (6.4.4)
Its Bode diagram is depicted in Fig. 6.4.1.
Figure no. 6.4.1. Bode diagram of the proportional element: constant magnitude A(ω) = |K| (L(ω) = 20 lg|K| dB) and constant phase, ϕ = 0 if K ≥ 0 or ϕ = π if K < 0.


The complex frequency characteristic is a single point ∀ω, placed in the (P, Q) plane at P(ω) = K, Q(ω) = 0, ∀ω.


6.4.2. Integral Type Element.

H(s) = 1/s^α ,  α ∈ Z   (6.4.5)
if α = 1 then the system is a pure simple integrator;
if α = 2 then the system is a double pure integrator;
if α = −1 then the system is a pure derivative, H(s) = s.
s → jω ⇒ H(jω) = 1/(j^α ω^α) ,  ω > 0 ,  α ∈ Z
A(ω) = 1/ω^α ,  ω > 0 (also ω = 0 when α ≤ 0)   (6.4.6)
L(ω) = −20 α lg ω   (6.4.7)
ϕ(ω) = −α π/2   (6.4.8)
Figure no. 6.4.2. Bode diagrams of the integral type element: magnitude lines of slope −20α dB/dec (−40, −20, +20, +40 dB/dec for α = 2, 1, −1, −2) and constant phases ϕ = −απ/2 (−π, −π/2, π/2, π for α = 2, 1, −1, −2).


The complex frequency characteristics, in the plane (P, Q), are:
horizontal lines if α = 2k, k ∈ Z ⇒ P(ω) = (−1)^k/ω^{2k} ,  Q(ω) = 0, and
vertical lines if α = 2k + 1, k ∈ Z ⇒ P(ω) = 0 ,  Q(ω) = −(−1)^k/ω^{2k+1}.
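A quick numerical check of (6.4.6) and (6.4.8) for several exponents (our own sketch, with arbitrary test frequencies): the modulus is 1/ω^α, and the point H(jω)/|H(jω)| sits at the angle −απ/2 on the unit circle.

```python
import cmath
import math

# H(s) = 1/s^alpha evaluated at s = j*omega for a few alpha and omega values
for alpha in (1, 2, -1):
    for w in (0.5, 1.0, 3.0):
        Hjw = 1.0 / (1j * w) ** alpha
        # magnitude characteristic (6.4.6)
        assert abs(abs(Hjw) - 1.0 / w ** alpha) < 1e-12
        # phase characteristic (6.4.8), compared modulo 2*pi on the unit circle
        unit = Hjw / abs(Hjw)
        assert abs(unit - cmath.exp(-1j * alpha * math.pi / 2)) < 1e-12
ok = True
```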


6.4.3. First Degree Polynomial Element.

This element has a transfer function with one real zero (PD element). A PD element means Proportional-Derivative element.
y(t) = T u̇(t) + u(t) ,  Y(s) = (Ts + 1)U(s) ,  H(s) = Ts + 1 ;   (6.4.9)
H(s) = T(s + z) ,  z = 1/T ;  s → jω ⇒ H(jω) = 1 + j(ωT) = P(ω) + jQ(ω)
A(ω) = |H(jω)| = sqrt( 1 + (ωT)^2 )   (6.4.10)
L(ω) = 20 lg sqrt( 1 + (ωT)^2 )   (6.4.11)
The asymptotic approximation of the magnitude characteristic is
L_a(ω) = { 0 , ωT < 1 ;  20 lg(ωT) , ωT ≥ 1 }   (6.4.12)
ϕ(ω) = arctg(ωT)   (6.4.13)
and the asymptotic approximation of the phase characteristic is
ϕ_a(ω) = { 0 , ωT < 0.2 ;  (ln 10 / 2) lg(ωT) + π/4 , ωT ∈ [0.2, 5] ;  π/2 , ωT > 5 }   (6.4.14)
P(ω) = Re(H(jω)) = 1   (6.4.15)
Q(ω) = Im(H(jω)) = ωT   (6.4.16)
The Bode characteristics are represented in Fig. 6.4.3.
Figure no. 6.4.3. Bode diagram of the PD element: magnitude with horizontal asymptote 0 dB and oblique asymptote of slope +20 dB/dec (maximum error 3 dB at ωT = 1), and phase with its three-line approximation (maximum error about 6 degrees).

The complex frequency characteristic, in the plane (P, Q), is depicted in Fig. 6.4.4.

Figure no. 6.4.4. Complex frequency characteristic of the PD element: the vertical line P = 1, Q = ωT ≥ 0, starting at the point (1, 0) for ω = 0; at ω = ω_1 the vector H(jω_1) has modulus A(ω_1) and argument ϕ(ω_1); the point ωT = 1 corresponds to ω = 1/T = z.
From the Bode and complex frequency characteristics the following observations can be obtained:
1. If the frequency increases, the output, as a sinusoidal signal in time, will be in advance with respect to the input.
2. When ω → ∞ the output will be in advance with respect to the input by a quarter of the signal period.
3. For the breaking point ωT = 1 ⇔ ω = 1/T the output, as a sinusoidal signal in time, will be one eighth of the signal period in advance, which corresponds to a phase of π/4.

6.4.4. Second Degree Polynomial Element with Complex Roots.

This element has a transfer function with only two complex zeros.
H(s) = T^2 s^2 + 2ξTs + 1 ,  ξ ∈ (0, 1) ,  1/T = ω_n   (6.4.17)
where: ω_n - the natural frequency; ξ - the damping factor.
H(jω) = (1 − (ωT)^2) + j(2ξωT)   (6.4.18)
P(ω) = 1 − (ωT)^2 ,  Q(ω) = 2ξωT   (6.4.19)
A(ω) = |H(jω)| = sqrt( (1 − (ωT)^2)^2 + 4ξ^2(ωT)^2 )   (6.4.20)
L(ω) = 20 lg A(ω)   (6.4.21)
ϕ ∈ [−π/2, 3π/2)   (6.4.22)
ϕ(ω) = { arctg[2ξTω/(1 − (Tω)^2)] , ωT < 1 ;  π/2 , ωT = 1 ;  π + arctg[2ξTω/(1 − (Tω)^2)] , ωT > 1 }   (6.4.23)


a. The asymptotic magnitude characteristic.

The asymptotic approximations are:
L(ω) = 20 lg sqrt( (1 − (ωT)^2)^2 + 4ξ^2(ωT)^2 )   (6.4.24)
x = lg(ωT) ⇒ ωT = 10^x   (6.4.25)
F(x) = L(ω)|_{ωT=10^x} = 20 lg sqrt( (1 − 10^{2x})^2 + 4ξ^2 10^{2x} )   (6.4.26)
When x → −∞ (ω → 0) ⇒ F(−∞) = 0, that means a horizontal asymptote.
When x → +∞,
m = lim_{x→∞} F(x)/x = 40 ,  n = lim_{x→∞} [F(x) − mx] = 0
an oblique asymptote exists,
y = 40x
L_a(ω) = { 0 , ωT < 1 ;  40 lg(ωT) , ωT ≥ 1 }   (6.4.27)
A(ω) ≈ A_a(ω) = { 1 , ωT < 1 ;  (ωT)^2 , ωT ≥ 1 }   (6.4.28)
L_a = 20 lg A_a(ω)
The exact frequency characteristics depend on the damping factor ξ, so there is a family of characteristics.
A(ω) = sqrt( (1 − (ωT)^2)^2 + 4ξ^2(ωT)^2 ) = sqrt( (1 − (ω/ω_n)^2)^2 + 4ξ^2(ω/ω_n)^2 )   (6.4.29)
We can find the resonant frequency setting to zero the derivative A'(ω) with respect to ω:
dA(ω)/dω = 0 ⇒ ω_rez = (1/T) sqrt(1 − 2ξ^2)   (6.4.30)
We can obtain a resonance frequency if 0 < ξ < sqrt(2)/2.
The resonant magnitude, A_m = A_rez, is
A_m = 2ξ sqrt(1 − ξ^2) = A(ω_rez)   (6.4.31)
where ω_rez is the resonance frequency.
A(ω_n) = A(1/T) = 2ξ   (6.4.32)
L(1/T) = L(ω_n) = 20 lg(2ξ) .   (6.4.33)
If ξ = 0.5 ⇒ L(1/T) = 0 .
A(ω) = 1 ⇒ ω = ω_t ,
where ω_t is the crossing frequency,
ω_t = sqrt(2) ω_rez   (6.4.34)
All these allow us to draw the asymptotic magnitude characteristic and the family of the exact characteristics as in Fig. 6.4.5.

Figure no. 6.4.5. Asymptotic magnitude characteristic (horizontal 0 dB, then slope +40 dB/dec) and the family of exact characteristics for different ξ: for ξ ≥ sqrt(2)/2 there is no resonance; for 0 < ξ < sqrt(2)/2 the magnitude has a minimum A_m at ω_rez T; for ξ = 0 it reaches zero at ωT = 1.

Example: H(s) = 100 s^2 + 2 s + 1 ,  T^2 = 100 , 2ξT = 2  ⇒  T = 10, ξ = 0.1, ω_n = 0.1
L(ω_n) = 20 lg(2ξ) = 20 lg(0.2) = −13.9794
ω_rez = (1/T) sqrt(1 − 2ξ^2) = 0.1 sqrt(1 − 2 · 0.01) = 0.09899
A_m = A_rez = 2ξ sqrt(1 − ξ^2) = 0.19899
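These example values can be recomputed directly from (6.4.29)-(6.4.33) (our own check):

```python
import math

# Example data: H(s) = 100 s^2 + 2 s + 1, i.e. T = 10, xi = 0.1
T, xi = 10.0, 0.1
w_rez = (1.0 / T) * math.sqrt(1.0 - 2.0 * xi ** 2)   # (6.4.30)
A_m = 2.0 * xi * math.sqrt(1.0 - xi ** 2)            # (6.4.31)
L_wn = 20.0 * math.log10(2.0 * xi)                   # (6.4.33)

def A(w):
    """Exact magnitude (6.4.29)."""
    return math.sqrt((1.0 - (w * T) ** 2) ** 2 + 4.0 * xi ** 2 * (w * T) ** 2)

assert abs(w_rez - 0.098995) < 1e-5
assert abs(A_m - 0.198997) < 1e-5
assert abs(L_wn - (-13.9794)) < 1e-3
assert abs(A(w_rez) - A_m) < 1e-9     # the minimum of A is reached at w_rez
```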
Figure no. 6.4.6. Bode diagram of H(s) = 100 s^2 + 2 s + 1: the magnitude has a minimum around ω_rez ≈ 0.099 (L(ω_n) = 20 lg 0.2 ≈ −13.98 dB) and the phase increases from 0 to 180 degrees.

The crossing frequency is obtained solving the equation
A = sqrt( (1 − (ωT)^2)^2 + 4ξ^2(ωT)^2 ) = 1
We note y = (ωT)^2 ⇒ (1 − y)^2 + 4ξ^2 y = 1 ⇒ y = 0 or y = 2(1 − 2ξ^2)
(ωT)^2 = 2(1 − 2ξ^2) ⇒ ω_t = ω_c = (sqrt(2)/T) sqrt(1 − 2ξ^2)
b. The asymptotic phase characteristic.
We said that:
ϕ(ω) = { arctg[2ξωT/(1 − (ωT)^2)] , ωT < 1 ;  π/2 , ωT = 1 ;  π + arctg[2ξωT/(1 − (ωT)^2)] , ωT > 1 }   (6.4.35)

Denoting,
g(x) = arctg[2ξωT/(1 − (ωT)^2)]|_{ωT=10^x} = arctg[2ξ10^x/(1 − 10^{2x})]   (6.4.36)
we have
G(x) = ϕ(ω)|_{ωT=10^x}
as a function of x in a linear space X with three branches,
G(x) = { arctg[2ξ10^x/(1 − 10^{2x})] , x < 0 ;  π/2 , x = 0 ;  π + arctg[2ξ10^x/(1 − 10^{2x})] , x > 0 }   (6.4.37)
and then it is approximated by three lines: two horizontal asymptotes and one oblique line.
The asymptotes:  x → −∞ ⇒ G(x) → 0 ;  x → +∞ ⇒ G(x) → π
The oblique line:  y = G'(0) · x + π/2 ,  with G'(0) = ln 10 / ξ
The asymptotic phase ϕ_a is approximated between two points by the straight line passing through x = 0, ϕ = π/2:
y = (ln 10/ξ) x + π/2 = 0 ⇒ x_1 = −(π/2) ξ/ln 10 ⇒ ω_1 T = 10^{x_1}
y = (ln 10/ξ) x + π/2 = π ⇒ x_2 = (π/2) ξ/ln 10 ⇒ ω_2 T = 10^{x_2}
ϕ_a(ω) = { 0 , ωT < ω_1 T ;  π/2 + (ln 10/ξ) lg(ωT) , ωT ∈ [ω_1 T, ω_2 T] ;  π , ωT > ω_2 T }   (6.4.38)

Figure no. 6.4.7. Exact phase characteristic ϕ(ω) and its asymptotic approximation ϕ_a(ω): horizontal lines at 0 and π and the oblique line of slope ln(10)/ξ through the point A.

Algorithm for drawing the asymptotic phase:
- mark A [(ωT) = 1; ϕ = π/2]
- mark C [(ωT) = 10; ϕ = π/2 + ln 10/ξ]
- connect A and C; ω_1 T and ω_2 T are determined by the intersections with ϕ = 0 (point B) and ϕ = π (point D), respectively.
So the exact phase characteristic will have as semitangents the segments AB and AD.

Example: T = 10 , ξ = 0.1 , ln 10 = 2.3026 = 23.026rad = 7.33π .


ξ 0.1

The complex frequency characteristics, ∀ξ > 0 , in the plane (P,Q), are obtained eliminating ω in (6.4.19),

P = 1 − Q²/(4ξ²) ,  Q ≥ 0 .      (6.4.39)

It is represented in Fig. 6.4.8.
[Figure: the parabola P = 1 − Q²/(4ξ²) in the (P,Q) plane, starting from (1, 0) at ω = 0 and passing through (0, 2ξ) at ω = 1/T, with A(ω₁) and ϕ(ω₁) marked on the vector H(jω₁).]
Figure no. 6.4.8.


6.4.5. Aperiodic Element. Transfer Function with one Real Pole.


H(s) = 1/(Ts + 1) = (1/T)/(s + p) ;      (6.4.40)

The pole is −p = −1/T .

H(jω) = 1/(1 + jωT) = 1/[1 + (ωT)²] − j·ωT/[1 + (ωT)²]      (6.4.41)

P(ω) = 1/[1 + (ωT)²] ,  Q(ω) = −ωT/[1 + (ωT)²]      (6.4.42)

A(ω) = 1/√[1 + (ωT)²]      (6.4.43)

L(ω) = −20 lg √[1 + (ωT)²]      (6.4.44)

L a(ω) = 0             , ωT < 1
L a(ω) = −20 lg(ωT)    , ωT ≥ 1          (6.4.45)

ϕ(ω) = −arctg(ωT)      (6.4.46)

ϕ a(ω) = 0                         , ωT < 0.2
ϕ a(ω) = −π/4 − (ln 10/2)·lg(ωT)   , ωT ∈ [0.2, 5]          (6.4.47)
ϕ a(ω) = −π/2                      , ωT > 5

The Bode plot is depicted in Fig. 6.4.9. We observe that the asymptotic
characteristic has a downward break of -20 dB/dec in the slope.
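The largest deviation between the exact and the asymptotic magnitude occurs at the breaking point ωT = 1, where it equals about 3 dB; a minimal numeric sketch of this fact:

```python
import math

def L_exact(wT):
    """Exact logarithmic magnitude (6.4.44) of 1/(Ts + 1), as a function of wT."""
    return -20 * math.log10(math.sqrt(1 + wT * wT))

def L_asym(wT):
    """Asymptotic characteristic (6.4.45)."""
    return 0.0 if wT < 1 else -20 * math.log10(wT)

# Maximum error, at the breaking point wT = 1:
print(L_asym(1.0) - L_exact(1.0))   # ~3.01 dB
```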
[Figure: Bode diagram of the aperiodic element — magnitude with breaking point at ωT = 1, slope −20 dB/dec beyond it and maximum error −3 dB at the breaking point; phase with asymptote ϕ a(ω) between ωT = 0.2 and ωT = 5 and maximum error about 6 degrees.]
Figure no. 6.4.9.

The complex frequency characteristic, in the (P,Q) plane, obtained by eliminating ω in (6.4.42), is expressed by the equation (6.4.48),

P² + Q² − P = 0 ,  Q ≤ 0 .      (6.4.48)
It is represented in Fig. 6.4.10.
[Figure: the semicircle P² + Q² = P, Q ≤ 0, in the (P,Q) plane, running from (1, 0) at ω = 0 to the origin as ω → ∞, passing through (0.5, −0.5) at ω = 1/T.]
Figure no. 6.4.10.

6.4.6. Oscillatory element. Transfer Function with two Complex Poles.


H(s) = 1/(T²s² + 2ξTs + 1) = ω n²/(s² + 2ξω n s + ω n²)      (6.4.49)

ω n = 1/T is the natural frequency ;  ξ ∈ (0, 1) is the damping factor.

H(jω) = 1/{[1 − (ωT)²] + j2ξωT} = {[1 − (ωT)²] − j2ξωT}/{[1 − (ωT)²]² + 4ξ²(ωT)²}      (6.4.50)

A(ω) = 1/√{[1 − (ωT)²]² + 4ξ²(ωT)²}      (6.4.51)

L(ω) = −20 lg √{[1 − (ωT)²]² + 4ξ²(ωT)²}      (6.4.52)

ϕ(ω) = −arctg[2ξωT/(1 − (ωT)²)]      , ωT < 1
ϕ(ω) = −π/2                           , ωT = 1          (6.4.53)
ϕ(ω) = −π − arctg[2ξωT/(1 − (ωT)²)]  , ωT > 1

P(ω) = [1 − (ωT)²]/{[1 − (ωT)²]² + 4ξ²(ωT)²} ;  Q(ω) = −2ξωT/{[1 − (ωT)²]² + 4ξ²(ωT)²}      (6.4.54)

ω rez = ω n·√(1 − 2ξ²)  where  0 < ξ < √2/2      (6.4.55)

A m = A rez = A(ω rez) = 1/[2ξ·√(1 − ξ²)]      (6.4.56)

A(ω n) = A(1/T) = 1/(2ξ) ;  L(ω n) = L(1/T) = −20 lg(2ξ)      (6.4.57)

ω c = ω t = √2·ω rez      (6.4.58)

a. The asymptotic magnitude characteristic.

A(ω) ≈ A a(ω) = 1         , ωT < 1
A(ω) ≈ A a(ω) = 1/(ωT)²   , ωT ≥ 1          (6.4.59)

L a(ω) = 0            , ωT < 1
L a(ω) = −40 lg(ωT)   , ωT ≥ 1          (6.4.60)
[Figure: magnitude characteristics of the oscillatory element for several ξ — resonance peak A m at ω rez T for 0 < ξ < √2/2, no peak for ξ ≥ √2/2, and the asymptote of slope −40 dB/dec beyond ωT = 1.]
Figure no. 6.4.11.


b. The asymptotic phase characteristic.

ϕ a(ω) = 0                          , ωT < ω₁T
ϕ a(ω) = −π/2 − (ln 10/ξ)·lg(ωT)    , ωT ∈ [ω₁T, ω₂T]          (6.4.61)
ϕ a(ω) = −π                         , ωT > ω₂T

[Figure: exact and asymptotic phase of the oscillatory element — ϕ a(ω) falls from 0 to −π along the oblique segment of slope −ln(10)/ξ through A(ωT = 1, ϕ = −π/2), between ω₁T (point B) and ω₂T (point D).]
Figure no. 6.4.12.



The complex frequency characteristics, in the plane (P,Q), are obtained eliminating ωT in (6.4.54). They look as in Fig. 6.4.13.
If ξ ≥ √2/2 , the magnitude A has only decreasing values but, if ξ ∈ (0, √2/2),
a resonance appears at ω res and ϕ(ω res) = ϕ res ∈ (−π/2 , 0).
[Figure: complex frequency characteristics in the (P,Q) plane, from (1, 0) at ω = 0 to the origin as ω → ∞, crossing the imaginary axis at Q = −1/(2ξ) for ω = 1/T; for 0 < ξ < √2/2 the curve bulges out to A(ω res), with ϕ(ω res) ∈ (−π/2, 0).]
Figure no. 6.4.13.
Example.
H(s) = 1/(100s² + 2s + 1) = (0.1)²/(s² + 2ξω n s + ω n²) ;  ω n = 0.1 ,  T = 10

2ξT = 2 ⇔ ξ = 2/(2T) = 0.1 ;  20 lg(1/(2ξ)) = 20 lg(1/0.2) = 13.98

A m = 1/[2ξ·√(1 − ξ²)] = 5.02 ;  L m = 20 lg(5.02) = 14.01

ω rez = (1/T)·√(1 − 2ξ²) = 0.099
[Figure: Bode diagram of H(s) = 1/(100s² + 2s + 1) — magnitude with the resonance peak L(ω n) ≈ 14 dB near ω rez and slope −40 dB/dec beyond ω n, phase falling from 0 to −180° between ω₁ and ω₂.]
Figure no. 6.4.14.

6.5. Frequency Characteristics for Series Connection of Systems.

6.5.1. General Aspects.


Suppose that a system is expressed by a cascade of q transfer functions,
H(s) = H₁(s)·H₂(s)·…·H q(s)      (6.5.1)
whose frequency characteristics are known,
A k(ω) = |H k(jω)| ,  k = 1:q      (6.5.2)
ϕ k(ω) = arg(H k(jω)) ,  k = 1:q .      (6.5.3)
The magnitude characteristic of the series interconnected system is the product of the components' magnitudes,
A(ω) = Π_{k=1}^{q} A k(ω) ,      (6.5.4)
but the logarithmic magnitude and phase characteristics of the series interconnected system are the sums of the components' characteristics,
L(ω) = Σ_{k=1}^{q} L k(ω)      (6.5.5)
ϕ(ω) = Σ_{k=1}^{q} ϕ k(ω)      (6.5.6)
Also the asymptotic logarithmic magnitude and phase characteristics of the series interconnected system are the sums of the asymptotic components' characteristics,
L a(ω) = Σ_{k=1}^{q} L ak(ω)      (6.5.7)
ϕ a(ω) = Σ_{k=1}^{q} ϕ ak(ω)      (6.5.8)
These relationships are used for plotting characteristics for any transfer function.
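Relations (6.5.4)...(6.5.6) can be illustrated numerically on a two-element cascade (the transfer functions below are illustrative, not from the text):

```python
import cmath, math

def H1(s):
    """Aperiodic element, T = 2 (illustrative value)."""
    return 1 / (2 * s + 1)

def H2(s):
    """First-order derivative element, theta = 0.5 (illustrative value)."""
    return 0.5 * s + 1

w = 3.0
s = 1j * w
prod = H1(s) * H2(s)

L_total = 20 * math.log10(abs(prod))
L_sum = 20 * math.log10(abs(H1(s))) + 20 * math.log10(abs(H2(s)))
phi_total = cmath.phase(prod)
phi_sum = cmath.phase(H1(s)) + cmath.phase(H2(s))
print(L_total, L_sum)      # equal: L(w) = L1(w) + L2(w)
print(phi_total, phi_sum)  # equal (no phase wrapping occurs at this frequency)
```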
A transfer function can be factorised at the numerator and denominator by using time constants as in (6.5.9), in which the free term of any polynomial equals 1,

         K · Π_{i=1}^{m₁}(θ i s + 1) · Π_{i=m₁+1}^{m₁+m₂}(θ i²s² + 2ζ i θ i s + 1)
H(s) = ─────────────────────────────────────────────────────────────────────        (6.5.9)
         s^α · Π_{i=1}^{n₁}(T i s + 1) · Π_{i=n₁+1}^{n₁+n₂}(T i²s² + 2ξ i T i s + 1)

where  n₁ + n₂ = n ,  m₁ + m₂ = m ,  α ∈ Z.
Based on such a factorisation, any complicated transfer function is interpreted as a series connection of the 6 types of elementary transfer functions,

H(s) = Π_{i=1}^{p} H i(s) = Π_{i=1}^{p} H i^(k i)(s) ,  k i ∈ [1, 6] ,      (6.5.10)

where H i(s) = H i^(k i)(s) is one of the already studied elementary transfer functions, of the type indicated, if necessary, by the superscript k i.


The time constants factorisation reveals a factor K which is called "the general amplifier factor" of the transfer function.
The general amplifier factor K can be determined before the time constants factorisation, using the formula,

K = lim_{s→0} s^α H(s) ,      (6.5.11)

where α ∈ Z indicates the number of poles of the transfer function H(s) placed in the s-plane origin if α > 0, or the number of its zeros placed in the s-plane origin if α < 0. For particular values of α , K has the following names:

α = 0  K = Kp  - the position amplifier factor;
α = 1  K = Kv  - the speed amplifier factor;      (6.5.12)
α = 2  K = Ka  - the acceleration amplifier factor.

These general amplifier factors express the ratio between the steady state values of: the output y(∞) (Kp), the output first derivative ẏ(∞) (Kv), the output second derivative ÿ(∞) (Ka), and the steady state value of the input u(∞):

Kp = lim_{s→0} H(s) = lim_{t→∞} y(t)/u(t) = y(∞)/u(∞)      (6.5.13)
Kv = lim_{s→0} sH(s) = lim_{t→∞} ẏ(t)/u(t) = ẏ(∞)/u(∞)      (6.5.14)
Ka = lim_{s→0} s²H(s) = lim_{t→∞} ÿ(t)/u(t) = ÿ(∞)/u(∞) .      (6.5.15)
If the transfer function is factorised by poles and zeros we have the form (6.5.16), in which the leading coefficient of any factor equals 1,

         B · Π_{i=1}^{m₁}(s + z i) · Π_{i=m₁+1}^{m₁+m₂}(s² + 2ζ i Ω i s + Ω i²)
H(s) = ─────────────────────────────────────────────────────────────────        (6.5.16)
         s^α · Π_{i=1}^{n₁}(s + p i) · Π_{i=n₁+1}^{n₁+n₂}(s² + 2ξ i ω i s + ω i²)

where
z i = 1/θ i ⇒ ζ i = −z i  are the real zeros;      (6.5.17)
p i = 1/T i ⇒ λ i = −p i  are the real poles.      (6.5.18)

Sometimes we also call p i a "pole" and z i a "zero".
The factorisation by poles and zeros of a transfer function is also called the "z-p-k" factorisation. Here B is only a coefficient which has nothing to do with the system amplifier factor.
For fast Bode characteristics drawing it is useful to consider the time constants factorisation (6.5.9) split in two main factors,
H(s) = R(s) · G(s)      (6.5.19)
where,
R(s) = K/s^α      (6.5.20)
incorporates the essential behaviour for low frequencies, mainly for ω → 0.
We define,
A R(ω) = |R(jω)| = |K|·ω^(−α)      (6.5.21)
L R(ω) = 20 lg(A R(ω)) = 20 lg|K| − α·20 lg(ω)      (6.5.22)
ϕ R(ω) = arg(R(jω)) = −α·π/2       if K ≥ 0
ϕ R(ω) = arg(R(jω)) = −α·π/2 − π   if K < 0      (6.5.23)
The second factor G(s),

         Π_{i=1}^{m₁}(θ i s + 1) · Π_{i=m₁+1}^{m₁+m₂}(θ i²s² + 2ζ i θ i s + 1)
G(s) = ─────────────────────────────────────────────────────────────────        (6.5.24)
         Π_{i=1}^{n₁}(T i s + 1) · Π_{i=n₁+1}^{n₁+n₂}(T i²s² + 2ξ i T i s + 1)

has no effect on the frequency characteristics of the series connection when ω → 0. Indeed,
G(0) = 1 ,  lim_{ω→0} |G(jω)| = 1 ,  lim_{ω→0} arg(G(jω)) = 0 .      (6.5.25)

Example 6.5.1.1. Types of Factorisations.

Let us consider the transfer function (6.5.26),

H(s) = 10·(s + 10)(20s + 1)/[s(5s + 1)(s + 2)] .      (6.5.26)

Even if it is already in a factorised form, it fits none of the above factorisation types. The factor 10 in front of the fraction tells us nothing.
We can easily determine, directly from H(s), the two elements of R(s): α = 1, observed by a simple inspection, and K = Kv, evaluated using (6.5.14),

K = Kv = lim_{s→0} sH(s) = 10·(0 + 10)(0 + 1)/[(0 + 1)(0 + 2)] = 50  [y]/(sec·[u])

For this transfer function the factor R(s) is,
R(s) = K/s^α = 50/s ,      (6.5.27)
which determines,
L R(ω) = 20 lg|K| − α·20 lg(ω) = 20 lg(50) − 20 lg(ω)      (6.5.28)
ϕ R(ω) = arg(R(jω)) = −π/2 .      (6.5.29)
It is important to remark that H(s) has, besides the pole s = 0, two other simple real poles and two simple real zeros, denoted using the conventions (6.5.17), (6.5.18) as:
p₀ = 0 ;  p₁ = 0.2 ;  p₂ = 2 ;  z₁ = 0.05 ;  z₂ = 10 .      (6.5.30)
We can write the expression of G(s) as
G(s) = (0.1s + 1)(20s + 1)/[(5s + 1)(0.5s + 1)]      (6.5.31)
which accomplishes the condition G(0) = 1.
We shall use these results in the next paragraph regarding the Bode diagram construction. For the same system, Kp = ∞, Ka = 0.
The two types of factorisation for H(s) are:
- the time constants factorisation,
H(s) = 50·(0.1s + 1)(20s + 1)/[s(5s + 1)(0.5s + 1)] ,
- the "z-p-k" factorisation,
H(s) = 40·(s + 10)(s + 0.05)/[s(s + 0.2)(s + 2)] .
As a ratio of two polynomials, useless in frequency characteristics construction, the transfer function is,
H(s) = (200s² + 2010s + 100)/(5s³ + 11s² + 2s) .
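The three forms above describe the same system, which can be cross-checked numerically at an arbitrary test point, together with Kv = lim s·H(s) (a sketch; the point s0 is arbitrary):

```python
def H_zpk(s):
    """z-p-k factorisation of the example."""
    return 40 * (s + 10) * (s + 0.05) / (s * (s + 0.2) * (s + 2))

def H_tc(s):
    """Time constants factorisation."""
    return 50 * (0.1 * s + 1) * (20 * s + 1) / (s * (5 * s + 1) * (0.5 * s + 1))

def H_poly(s):
    """Ratio of polynomials."""
    return (200 * s**2 + 2010 * s + 100) / (5 * s**3 + 11 * s**2 + 2 * s)

s0 = 0.7 + 0.3j                    # arbitrary test point
print(H_zpk(s0), H_tc(s0), H_poly(s0))   # all three agree

Kv = (1e-9 * H_tc(1e-9)).real      # s*H(s) evaluated near s = 0
print(Kv)                          # ~50
```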

6.5.2. Bode Diagrams Construction Procedures.

The Bode diagram, for complicated transfer functions, can be plotted rather easily by using two methods:
1. Bode diagram construction by components.
2. Direct Bode diagram construction.

6.5.2.1. Bode Diagram Construction by Components.

This method is based on using the elementary frequency characteristics (prototype characteristics), presented in Ch. 6.4., as components of a transfer function according to the relations (6.5.1)...(6.5.8).
The following steps are recommended:
1. Realise the time constants factorisation of the transfer function.
2. Identify the elementary transfer functions of the six types.
3. Plot the asymptotic characteristics, magnitude and phase, for the prototypes.
4. The asymptotic characteristic of the transfer function results by adding point by point the asymptotic characteristics of the components.
5. To obtain the real characteristics, make corrections at the breaking points.
The values of the magnitude characteristics are added on a linear scale of L(ω) expressed in dB, even if finally we can use the logarithmic scale of A(ω) for their marking.
If a pole or a zero, real or complex, has the order of multiplicity m i, then it will be considered as determining m i separate elementary frequency characteristics, even if they coincide.

Example 6.5.2.1. Examples of Bode Diagram Construction by Components.

Let us construct the Bode diagram (Bode plot) of the transfer function analysed in Ex. 6.5.1.1., given by the transfer function (6.5.26),

H(s) = 10·(s + 10)(20s + 1)/[s(5s + 1)(s + 2)]      (6.5.26)

The factorised time constants form is interpreted as a series connection of 6 elementary elements,

H(s) = 50·(0.1s + 1)(20s + 1)/[s(5s + 1)(0.5s + 1)] = H₁(s)·H₂(s)·H₃(s)·H₄(s)·H₅(s)·H₆(s)

where  H₁(s) = H₁^(1)(s) = 50 ;   H₂(s) = H₂^(2)(s) = 1/s ;
H₃(s) = H₃^(5)(s) = 1/(5s + 1) ;  H₄(s) = H₄^(5)(s) = 1/(0.5s + 1) ;
H₅(s) = H₅^(3)(s) = 0.1s + 1 ;    H₆(s) = H₆^(3)(s) = 20s + 1 .

[Figure: asymptotic magnitude characteristics L a1 ... L a6 and phase characteristics ϕ a1 ... ϕ a6 of the six components over ω ∈ [10⁻³, 10²]; the breaking points are at ω = 0.05, 0.2, 2 and 10, and the phase segments extend from ω i/5 to 5ω i.]
Figure no. 6.5.1.
We denoted by Lai , ϕ ai the asymptotic characteristics of Hi, i=1:6.


Let us consider now a transfer function,

H(s) = (100s + 1000)/(100s² + s) .

The time constants factorisation is performed as below,

H(s) = 100(s + 10)/[s(100s + 1)] = 1000 · (1/s) · (0.1s + 1) · 1/(100s + 1)
            z₁ = 10                  H₁     H₂       H₃         H₄ (T₁ = 100)

H₁(jω) = 1000 ;  L₁ = 20 lg(10³) = 60 dB.
H₂(s) = 1/s ⇒ H₂(jω) = 1/(jω) ;  A₂(ω) = |H₂(jω)| = 1/ω = ω⁻¹
L₂ = 20 lg A₂(ω) = 20 lg ω⁻¹ = −20 lg ω = −20x ,  x = lg ω
H₃(s) = 0.1s + 1 ;  H₄(s) = 1/(100s + 1)
The Bode diagrams drawn using Matlab are depicted in Fig. 6.5.2.
[Figure: Bode diagram (magnitude and phase) of H(s) = (100s + 1000)/(100s² + s), for ω ∈ [10⁻⁴, 10²].]
Figure no. 6.5.2.


To reveal the influence of the Bode diagram shape on the time response, two transfer functions are considered,

H₁(s) = 1/(100s² + s + 1)  and  H₂(s) = 1/(100s² + 5s + 1) .

Their Bode diagrams and step responses are depicted in Fig. 6.5.3.
[Figure: Bode diagrams and step responses of H₁(s) = 1/(100s² + s + 1) and H₂(s) = 1/(100s² + 5s + 1) — H₁, with the sharper resonance peak, has the more oscillatory step response.]
Figure no. 6.5.3.

6.5.2.2. Direct Bode Diagram Construction.

The Bode diagrams can be directly plotted considering the following observations regarding the asymptotic behaviour:
1. A real zero ζ i = −z i < 0 determines, in the breaking point ω = z i > 0, a change of the slope of the asymptotic magnitude characteristic by +20 dB/dec (up-breaking point of 20 dB/dec).
2. A real pole λ i = −p i < 0 determines, in the breaking point ω = p i > 0, a change of the slope of the asymptotic magnitude characteristic by −20 dB/dec (down-breaking point of 20 dB/dec).
3. A complex pair of zeros, s = ζ i1,i2 , Re(ζ i1,i2) < 0, solution of the equation
θ i²s² + 2ζ i θ i s + 1 = 0 ,  ζ i ∈ (0, 1) ,  ω n,i = 1/θ i
determines, in the breaking point ω = ω n,i > 0, a change of the slope of the asymptotic magnitude characteristic by +40 dB/dec (up-breaking point of 40 dB/dec).
4. A complex pair of poles, s = λ k1,k2 , Re(λ k1,k2) < 0, solution of the equation
T k²s² + 2ξ k T k s + 1 = 0 ,  ξ k ∈ (0, 1) ,  ω n,k = 1/T k
determines, in the breaking point ω = ω n,k > 0, a change of the slope of the asymptotic magnitude characteristic by −40 dB/dec (down-breaking point of 40 dB/dec).
5. For ∀ω < min(z i , p i , ω n,i , ω n,k), ∀i, k, the asymptotic behaviour is determined only by the term R(s) from (6.5.20).
The following steps are recommended:
1. Evaluate the poles and zeros, (z i , p i , ω n,i , ω n,k), ∀i, k, and place them on the plotting sheet on a logarithmic scale. In such a way we determine the system frequency bandwidth of interest.
2. Mark each zero by a small circle, and each pole by a small cross. Complex zeros/poles are marked by double circles/crosses.
3. Choose a starting frequency ω₀ inside the system frequency bandwidth.
4. Evaluate a starting point M = M(ω₀, L R0) for the asymptotic magnitude characteristic, where L R0 is obtained from (6.5.22),
L R0 = L R(ω₀) = 20 lg(A R(ω₀)) = 20 lg|K| − α·20 lg(ω₀)      (6.5.27)
If possible choose,
ω₀ = 1 ⇒ L R0 = 20 lg|K| .
5. Draw a straight line through the point M having the slope equal to −(α·20) dB/dec till the first breaking point abscissa is reached.
6. Keep drawing segments of straight lines between two consecutive breaking points, with the slope equal to the previous slope plus the slope change determined by the left side breaking point.
7. The same procedure can be applied for the phase characteristic, but we must take into consideration that the asymptotic phase characteristic keeps the same slope in each frequency interval
ω ∈ [ω i/5 , 5·ω i] ,  ω i = z i , p i , ω n,i      (6.5.28)
Example.
Let us draw only the magnitude characteristic, for the system of Ex. 6.5.2.1.
[Figure: direct construction of the asymptotic magnitude characteristic, starting from M(ω₀ = 0.004, L R0 = 81.94 dB) with slope S₁ = −(α·20) = −20 dB/dec; at ω = 0.05 (zero) S₂ = S₁ + 20 = 0 dB/dec; at ω = 0.2 (pole) S₃ = S₂ − 20 = −20 dB/dec; at ω = 2 (pole) S₄ = S₃ − 20 = −40 dB/dec; at ω = 10 (zero) S₅ = S₄ + 20 = −20 dB/dec.]
Figure no. 6.5.4.


If we choose, for example, ω₀ = 0.004 we compute
L R0 = L R(ω₀) = 20 lg|K| − α·20 lg(ω₀) =
     = 20 lg(50) − 1·20 lg(0.004) = 33.979 + 47.959 = 81.938 dB
M = M(0.004, 81.94).
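The whole direct construction can be sketched as a short routine: start from the R(s) line and add the slope change of every breaking point already passed (values taken from Ex. 6.5.2.1):

```python
import math

# H(s) = 50(0.1s+1)(20s+1)/[s(5s+1)(0.5s+1)]:
K, alpha = 50.0, 1
# (breaking frequency, slope change in dB/dec): zeros break up, poles break down.
breaks = [(0.05, +20), (0.2, -20), (2.0, -20), (10.0, +20)]

def L_asymptotic(w):
    """Piecewise-linear La(w): the R(s) line plus the accumulated slope changes."""
    L = 20 * math.log10(K) - alpha * 20 * math.log10(w)  # contribution of R(s) = K/s^alpha
    for wb, dslope in breaks:
        if w > wb:                       # past this breaking point the slope has changed
            L += dslope * math.log10(w / wb)
    return L

print(L_asymptotic(0.004))   # ~81.94 dB, the starting point M
```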

7. SYSTEM STABILITY

7.1. Problem Statement.

The stability of a system is one of its most important properties. Intuitively we can say that a system is stable if it remains at rest unless it is excited by external disturbances, and if it returns to rest when all external causes are removed. There are several definitions of system stability.
One of them is the so-called bounded input - bounded output (BIBO) stability: if the input is bounded then the output remains bounded too.
In addition, a system is asymptotically stable if it is bounded input - bounded output stable and the output goes to a steady state. We can say that the system is stable if the transient response disappears and finally only the permanent regime settles down. This is called external stability.
There is also the so-called internal stability, which takes into account the transient response with respect to the components of the state vector. Internal stability involves a description by state equations.
Widely used, mainly for nonlinear systems, is the so-called Lyapunov stability, which is a continuity condition of the state trajectory with respect to the initial state. There is a large theory regarding Lyapunov stability.
For linear time invariant systems the external stability is determined by the poles of the transfer function
H(s) = M(s)/L(s) ,      (7.1.1)
that means the roots of the denominator:
L(s) = 0 ⇒ s = λ i ,      (7.1.2)
which must be placed in the left half complex plane, Re(λ i) < 0 .
The internal stability of a system, expressed by state equations, is determined by the eigenvalues of the system matrix A, that means the roots of the characteristic polynomial,
∆(s) = det(sI − A) = 0 ⇒ s = λ i      (7.1.3)
A system can be externally stable but not internally stable.
If the system is completely observable and controllable, both types of stability are equivalent, so we shall discuss the stability of completely observable and completely controllable systems.
The transfer function denominator polynomial
L(s) = a n s^n + a n−1 s^(n−1) + ... + a₁s + a₀      (7.1.4)
could be the characteristic polynomial
∆(s) = a n s^n + a n−1 s^(n−1) + ... + a₁s + a₀ .      (7.1.5)


So-called stability criteria are used; they express necessary and sufficient conditions for stability.
Given a polynomial L(s) (it could be ∆(s)), we call it a stable polynomial (or Hurwitz polynomial) if the system having that polynomial as the denominator of its transfer function (or as its characteristic polynomial) is a stable one. There are two types of criteria:
- algebraic criteria, which operate on the polynomial coefficients or roots;
- frequency stability criteria, which operate on the frequency characteristics of the system.

7.2. Algebraical Stability Criteria.

7.2.1. Necessary Condition for Stability.


A necessary condition for stability of a polynomial L(s) (or ∆(s)) is that all the polynomial coefficients have the same sign and none of them is missing (all of them ≠ 0).
Example: L(s) = s² − 2s + 3 is an unstable polynomial.

7.2.2. Fundamental Stability Criterion.


A system having L(s) or ∆(s) as stability polynomial is stable if and only if all the poles (eigenvalues) are placed in the left half of the complex plane, and if there are poles (eigenvalues) on the imaginary axis they have to be simple poles (simple eigenvalues).
If λ i is a pole then Re(λ i) < 0, or if Re(λ i) = 0 then λ i is a simple pole (eigenvalue). A system is asymptotically stable if all the poles (eigenvalues) are placed in the left half of the s-plane, that is Re(λ i) < 0 ∀i.
As we saw when we discussed the experimental frequency characteristics, the transient response is

y tr(t) = Σ_{i=1}^{N} p i(t)·e^(Re(λ i)·t)      (7.2.1)

where λ i are the eigenvalues of the system matrix A.
If Re(λ i) < 0 then lim_{t→∞} y tr(t) = 0 .      (7.2.2)
There are two very important algebraic stability criteria:
- the Hurwitz stability criterion;
- the Routh stability criterion.


7.2.3. Hurwitz Stability Criterion.


Let us consider a polynomial,
L(s) = a n s^n + a n−1 s^(n−1) + ... + a₁s + a₀ ,  a n ≠ 0      (7.2.3)
It is assumed that the first coefficient a n is positive (a n > 0). The so-called Hurwitz determinants ∆ k are constructed, where ∆ n is an n × n determinant:

        | a n−1   a n−3   a n−5   a n−7  ...   0  |
        | a n     a n−2   a n−4   a n−6  ...   0  |
∆ n =   | 0       a n−1   a n−3   a n−5  ...   0  |      (7.2.4)
        | 0       a n     a n−2   a n−4  ...   0  |
        | ...     ...     ...     ...    ...  ... |
        | 0       0       0       ...    0    a₀ |
                        n columns

On the main diagonal we write a n−i , i = 1, .., n.
The subscripts decrease by two at a time towards the right side and increase towards the left side. If a subscript overtakes n or becomes negative, the corresponding element is considered zero.
∆ k is the determinant built from ∆ n using its first k rows and k columns.
The polynomial L(s), with positive main coefficient a n, is a Hurwitz polynomial or a stable polynomial — that means all its roots lie in the left half of the s-plane — if and only if all the determinants ∆ k are positive (∆ k > 0, ∀k = 1, n).
If a polynomial is a stable one then
L*(s) = ã₀s^n + ã₁s^(n−1) + ... + ã n ,  ã k = a n−k ,      (7.2.5)
is also stable.
Example:
L(s) = 5s³ + 2s² + 3s + 1 ;  n = 3 ;

        | 2  1  0 |
∆₃ =    | 5  3  0 | ;  ∆₁ = 2 ;  ∆₂ = 2·3 − 1·5 = 1 ;  ∆₃ = 1·∆₂ = 1 ,
        | 0  2  1 |

so all ∆ k > 0 and L(s) is a stable (Hurwitz) polynomial.
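The construction (7.2.4) can be sketched in a few lines of code (a minimal pure-Python helper; the coefficient ordering [a n, ..., a₀] is an assumption of this sketch):

```python
def hurwitz_determinants(coeffs):
    """Hurwitz determinants D1..Dn per (7.2.4); coeffs = [a_n, ..., a_0], a_n > 0."""
    n = len(coeffs) - 1

    def a(k):                          # a_k, taken as zero outside 0..n
        return coeffs[n - k] if 0 <= k <= n else 0

    # Entry (i, j) of the n x n Hurwitz matrix is a_{n-1-2j+i} (0-indexed i, j).
    H = [[a(n - 1 - 2 * j + i) for j in range(n)] for i in range(n)]

    def det(M):                        # Laplace expansion; fine for small n
        if len(M) == 1:
            return M[0][0]
        return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
                   for j in range(len(M)))

    return [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]

print(hurwitz_determinants([5, 2, 3, 1]))   # [2, 1, 1] -> all positive: stable
```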


7.2.4. Routh Stability Criterion.

7.2.4.1. Routh Table.


The Routh stability criterion, called also the Routh-Hurwitz criterion, is similar to the Hurwitz criterion but with an arrangement of the coefficients in a table, called the "Routh table", which involves only the evaluation of several determinants of maximum order 2×2.
Let us consider a polynomial of degree n,
L(s) = a n s^n + a n−1 s^(n−1) + ... + a₁s + a₀ ,  a n ≠ 0      (7.2.6)
The Routh table, called also "Routh's array" or "Routh's tabulation", associated to this polynomial is illustrated in Fig. 7.2.1.

            col:  1        2        ...  j              ...
s^n        1  |  a n      a n−2    ...  a n−2(j−1)     ...
s^(n−1)    2  |  a n−1    a n−3    ...  a n−1−2(j−1)   ...
s^(n−2)    3  |  r 3,1    r 3,2    ...  r 3,j          ...
...           |  ...      ...      ...  ...            ...
s^(n−i+1)  i  |  r i,1    r i,2    ...  r i,j          ...
...           |  ...
s^0       n+1 |  r n+1,1  0        ...  0              ...

Figure no. 7.2.1.


It contains n+1 rows and a number of columns marked by indexes.
For any row the power of the variable s is denoted decreasing order.
At the beginning we fill in the first two rows.
If the subscripts became negative the table will be filled in with 0.
The algorithm for evaluating te entries of the array, is based on a 2x2
determinant, which, for the general element r i,j , i ≥ 3 , is
r i, j = r −1
r i−2,1 r i−2,j+1
, if r i−1,1 ≠ 0 (7.2.7)
i−1,1 r i−1,1 r i−1,j+1

The table is continued horizontally and vertically until only zeros are obtained.
The system is asymptotically stable if and only if all the elements in the
first column of the Routh table are of the same sign and they are not zero.
The numbers of poles, roots of the equation L(s)=0, from the right side is
equal with the number of sign changing in the first column of the Routh table.
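The plain case of the table (no zeros appearing in the first column, so neither special case below is triggered) can be sketched as follows; the function name and coefficient ordering are assumptions of this sketch:

```python
def routh_first_column(coeffs):
    """First column of the Routh table for coeffs = [a_n, ..., a_0].
    Plain case only: assumes no zero ever appears in the first column."""
    n = len(coeffs) - 1
    row1 = list(coeffs[0::2])
    row2 = list(coeffs[1::2])
    rows = [row1, row2 + [0.0] * (len(row1) - len(row2))]
    for i in range(2, n + 1):
        prev, pprev = rows[-1], rows[-2]
        new = []
        for j in range(len(prev) - 1):
            # (7.2.7): r[i][j] = -(1/r[i-1][1]) * det([[r[i-2][1], r[i-2][j+1]],
            #                                          [r[i-1][1], r[i-1][j+1]]])
            new.append(-(pprev[0] * prev[j + 1] - prev[0] * pprev[j + 1]) / prev[0])
        rows.append(new + [0.0])
    return [r[0] for r in rows]

# L(s) = (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6: all entries positive -> stable
print(routh_first_column([1.0, 6.0, 11.0, 6.0]))
```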


7.2.4.2. Special Cases in the Routh Table.

a. One element in the first column of the Routh table is zero.

In this case the computation cannot be performed.
There are two ways out of this case:
a.1. Replace that zero element by a letter, say ε, considering that ε > 0, ε → 0, and keep on computing the other elements as functions of ε.
To determine the stability of the system (the sign of the first column elements), the limits lim_{ε→0, ε>0} r i,1(ε) have to be determined.
a.2. If we are at the beginning of the Routh table filling, replace the polynomial by its reciprocal polynomial and build the Routh table for the reciprocal polynomial.
The polynomial
L(s) = a n s^n + ... + a₁s + a₀
is replaced by
L̃(s) = a₀s^n + a₁s^(n−1) + ... + a n−1 s + a n ,      (7.2.8)
and the Routh table is constructed for this reciprocal polynomial.

b. All the elements of a row are zero.

Suppose
r i+1,k = 0 , k = 1, 2, ...      (7.2.9)
Then this part of the table looks like,

s^(n−i+1)   row i     r i,1   r i,2   r i,3  ...      (7.2.10)
s^(n−i)     row i+1    0       0       0     ...

To keep on filling in the table, we construct the auxiliary polynomial by using the elements of the above non-zero row; its degree equals the degree attached to that non-zero row, in this case n−i+1, and the degrees decrease by two at a time.
The auxiliary polynomial is:
U(s) = r i,1 s^(n−i+1) + r i,2 s^(n−i−1) + r i,3 s^(n−i−3) + ...      (7.2.11)
Compute the derivative of this polynomial:
U'(s) = (n−i+1)·r i,1 s^(n−i) + (n−i−1)·r i,2 s^(n−i−2) + (n−i−3)·r i,3 s^(n−i−4) + ...      (7.2.12)
Use the coefficients of the derivative of the auxiliary polynomial as elements of the zero row and keep on computing the other elements.
The roots of the auxiliary polynomial are symmetrically disposed in the s-plane with respect to the origin.


The roots of the auxiliary polynomial can be a zero root, pure imaginary roots, real roots or complex roots.
If the polynomial has real coefficients then its complex roots appear in conjugated pairs.
It can be proved that a polynomial with a row of zeros in the Routh table has the auxiliary polynomial as a common factor:
L(s) = L₁(s)·U(s)      (7.2.13)
The Routh criterion is useful because it asks only for determinants of the second order, whereas the Hurwitz criterion asks for higher order determinants.

Example. Let us consider a polynomial containing two real parameters, p, ω :

L(s) = (s² + ω²)(s + p)
L(s) = s³ + ps² + ω²s + pω²

The Routh table is,

s³   row 1:   1       ω²     0   0
s²   row 2:   p       pω²    0   0
s¹   row 3:   0 → 2p  0      0   0
s⁰   row 4:   pω²     0      0   0

r 3,1 = (−1/p)·(pω² − pω²) = 0 ;  r 3,2 = (−1/p)·det| 1  0 ; p  0 | = 0.

We observe that all the elements of the third row, i = 3, are zero. The attached polynomial U(s) is,
U(s) = ps² + pω²·s⁰ = p(s² + ω²)
L(s) = L₁(s)·U(s) ;  U'(s) = 2ps + 0
The zero row is replaced by the coefficients of U'(s), i.e. r 3,1 = 2p, and then

r 4,1 = (−1/2p)·det| p  pω² ; 2p  0 | = pω² .

We can determine L₁(s), which is the factor without symmetrical roots.
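Relation (7.2.13) for this example can be checked numerically (a sketch with illustrative values p = 2, ω = 3, which are not from the text):

```python
# Numeric check of L(s) = L1(s) * U(s) for the example above.
p, w = 2.0, 3.0

def L(s):  return (s * s + w * w) * (s + p)   # s^3 + p s^2 + w^2 s + p w^2
def U(s):  return p * (s * s + w * w)         # auxiliary polynomial
def L1(s): return (s + p) / p                 # remaining factor

s0 = 1.3 + 0.7j                               # arbitrary test point
print(L(s0), L1(s0) * U(s0))                  # equal: U(s) is a common factor of L(s)
```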


Example 7.2.1. Stability Analysis of a Feedback System.

Suppose we have a feedback system, called also a "closed loop system", described by the block diagram from Fig. 7.2.2.

[Figure: unity negative feedback loop — v → (+/−) → e → HR(s) = k(s+1)/s → HF(s) = 1/[(s+2)(s+3)] → y.]
Figure no. 7.2.2.

Let us denote,
Hd(s) = HR(s)·HF(s) - the open loop transfer function,
Hd(s) = k(s + 1)/[s(s² + 5s + 6)] = M(s)/N(s) .
The closed loop transfer function is,
Hv(s) = Y(s)/V(s) = Hd(s)/[1 + Hd(s)] = M(s)/[N(s) + M(s)] = M(s)/L(s)
where,
L(s) = s³ + 5s² + 6s + ks + k = s³ + 5s² + (6 + k)s + k
is the characteristic polynomial of the closed loop system.
We analyse the stability using the Routh table,

s³   row 1:   1            k+6   0
s²   row 2:   5            k     0
s¹   row 3:   (4k+30)/5    0     0
s⁰   row 4:   k            0     0

r 3,1 = (−1/5)·det| 1  k+6 ; 5  k | = −(k − 5k − 30)/5 = (4k + 30)/5

r 4,1 = −(5/(4k + 30))·det| 5  k ; (4k+30)/5  0 | = k

The Routh stability criterion asks for,
(4k + 30)/5 > 0  and  k > 0  ⇒  k > 0 .
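The condition k > 0 can be exercised directly on the first column computed above (a small sketch; the function name is not from the text):

```python
def routh_first_column_closed_loop(k):
    """First column of the Routh table for L(s) = s^3 + 5 s^2 + (6+k) s + k."""
    r31 = (4 * k + 30) / 5.0      # computed above
    return [1.0, 5.0, r31, float(k)]

print(routh_first_column_closed_loop(1.0))    # k = 1: all entries positive, stable
print(routh_first_column_closed_loop(-1.0))   # k = -1: sign change, unstable
```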


7.3. Frequency Stability Criteria.


There are several criteria that give necessary and sufficient conditions for the stability of systems based on the shape of a frequency characteristic.
The algebraic criteria are very sensitive because they are coefficient based: someone has to tell us what the coefficients of the transfer function are, which is a difficult and error-prone problem for real systems. The complex frequency characteristic, on the other hand, can be determined experimentally, and the Nyquist criterion can be applied by analysing the shape of this characteristic, without any other mathematical representation.

7.3.1. Nyquist Stability Criterion.


It is related to a closed loop system having in the open loop the transfer function Hd(s).

[Figure: unity negative feedback loop — v → (+/−) → Hd(s) → y; the closed loop transfer function is Hv(s).]
Figure no. 7.3.1.


The transfer function of this closed loop system, denoted Hv(s), is,
Hv(s) = Hd(s)/[1 + Hd(s)] .      (7.3.1)
The following notations are utilised,
Ad(ω) = |Hd(jω)|      (7.3.2)
ϕd(ω) = arg Hd(jω)      (7.3.3)
Based on the complex frequency characteristic of Hd(s), the stability of the closed loop system is analysed.
One very simple form of the Nyquist criterion is:
If the open loop system Hd(s) is a stable one, then the closed loop system having the transfer function Hv(s), (7.3.1), is asymptotically stable if and only if the complex frequency characteristic Hd(jω) leaves the critical point (−1, j0) to its left side when ω increases from 0 to ∞.
The Nyquist criterion can also be developed for systems which are open loop unstable, or for systems having in open loop resonance frequencies (poles of Hd(s) on the imaginary axis). These forms are based on the Cauchy principle of the argument.
In such a case, instead of the complex frequency characteristic Hd(jω) determined for real ω ∈ [0, ∞), the so-called Nyquist characteristic HdN of Hd(s) is utilised.


The Nyquist characteristic HN of a transfer function H(s) is the image through the function H(s) of the so-called Nyquist contour N. The Nyquist contour N is a closed contour, clockwise oriented in the complex plane s, which contains all the right half complex plane except the poles placed on the imaginary axis. These poles are surrounded by circles of radius approaching zero.
If we realise the feedback connection around an Hd(s) which respects the Nyquist criterion, then the resulting closed loop system will be an asymptotically stable one.
If the complex characteristic Hd(jω) passes through the critical point (−1, j0) then the system will be at the limit of stability. That means the transfer function Hv(s) will have simple poles on the imaginary axis and all the other poles in the left half complex plane.
In Fig. 7.3.2. the complex characteristics of three open loop systems, Hd1(s), Hd2(s), Hd3(s), are represented.

[Figure: complex characteristics Hd1(jω), Hd2(jω), Hd3(jω) in the plane (Re Hd(jω), Im Hd(jω)) with the unit circle R = 1 and the critical point (−1, j0); Hd1 passes to the right of the critical point (γ₁ = ϕ(ω c1) + π > 0), Hd2 passes through it (ϕ(ω c2) = −π), and Hd3 encircles it (γ₃ = ϕ(ω c3) + π < 0).]
Figure no. 7.3.2.


The complex frequency characteristic Hd1(jω) of the system Hd1(s) from Fig. 7.3.2. leaves the critical point (−1, j0) on its left side, which means the future closed loop system Hv1(s) will be stable. The open loop system having the complex characteristic Hd3(jω) will determine an unstable Hv3(s), but the one having Hd2(jω) will determine an Hv2(s) at the limit of stability.

7.3.2. Frequency Quality Indicators.


The closed loop system behaviour can be assessed, in some of its aspects, using several quality indicators defined on the complex frequency characteristic of the open loop system, as specified in Fig. 7.3.2. and Fig. 7.3.3. Similarly, these quality indicators can also be defined on the Bode characteristics of the open loop system.
Among these quality indicators we can mention:


a. Crossing frequency ωt. The crossing frequency ωt, denoted also by ωc, represents the largest frequency at which the complex characteristic Hd(jω) crosses the unit radius circle.
[Figure: the characteristic Hd(jω) in the plane with axes Re(Hd(jω)) and jIm(Hd(jω)); marked are the critical point (−1, j0), the phase crossover frequency ωπ with the magnitude margin Adπ, and the crossing frequency ωt = ωc with the phase φd(ωt) and the phase margin γ.]
Figure no. 7.3.3.


It can be obtained, with the magnitude Ad(ω) = |Hd(jω)| as in (7.3.2), from the equation
ωc = ωt = max{ω | Ad(ω) = 1} (7.3.4)

b. Phase margin γ. The phase margin γ represents the clockwise angle between the direction of the vector Hd(jωt) and the negative real axis. According to the Nyquist stability criterion, it expresses the stability reserve of the closed loop system.
It is defined by the relation,
γ = π + φd(ωt) (7.3.5)
where we have considered φd(ωt) = arg Hd(jωt) on the circle (−2π, 0]. The value γ = 0 indicates the limit of stability. To have a good stability, the following condition is imposed on the quality indicator γ (performance condition):
γ ≥ γimp (7.3.6)

c. Phase crossover frequency ωπ.
The phase crossover frequency ωπ represents the smallest frequency at which the complex characteristic Hd(jω) crosses the negative real axis:
ωπ = min{ω | φd(ω) = −π} (7.3.7)
d. Magnitude (gain) margin Adπ.
The magnitude margin Adπ is the length of the vector Hd(jωπ), that means
Adπ = Ad(ωπ) (7.3.8)
If Ad(ω) < 1, ∀ω > ωπ, then Adπ very clearly expresses the reserve of closed loop system stability. According to the Nyquist criterion, the performance condition is expressed by
Adπ ≤ Adπ,imp. (7.3.9)
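The frequency quality indicators above can be evaluated numerically on a dense frequency grid. The sketch below is only an illustration: the open loop Hd(s) = 4/(s(s + 1)(s + 2)) and the grid limits are assumptions, not data from the text. It computes the crossing frequency ωt = ωc from (7.3.4), the phase margin γ from (7.3.5), the phase crossover frequency ωπ from (7.3.7) and the magnitude margin Adπ from (7.3.8).

```python
import numpy as np

def margins(Hd, w):
    """Crossing frequency, phase margin, phase crossover frequency and
    magnitude margin of an open-loop response Hd(jw) sampled on the grid w."""
    H = Hd(1j * w)
    A = np.abs(H)
    phi = np.unwrap(np.angle(H))                 # continuous phase curve
    idx = np.nonzero(np.diff(np.sign(A - 1.0)))[0]
    wt = w[idx[-1]]                              # largest w with A(w) = 1, (7.3.4)
    gamma = np.pi + np.interp(wt, w, phi)        # phase margin, (7.3.5)
    jdx = np.nonzero(np.diff(np.sign(phi + np.pi)))[0]
    wpi = w[jdx[0]]                              # smallest w with phi(w) = -pi, (7.3.7)
    Api = np.interp(wpi, w, A)                   # magnitude margin, (7.3.8)
    return wt, gamma, wpi, Api

# hypothetical open loop: Hd(s) = 4 / (s (s + 1) (s + 2))
Hd = lambda s: 4.0 / (s * (s + 1) * (s + 2))
w = np.linspace(0.01, 20.0, 200000)
wt, gamma, wpi, Api = margins(Hd, w)
```

For this example the negative real axis is crossed at ωπ = √2 with Adπ = 2/3 < 1 and γ > 0, so, by the Nyquist criterion, the closed loop system keeps a stability reserve.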


7.3.3. Frequency Characteristics of Time Delay Systems.


The Nyquist criterion can be applied to time delay systems as well.
For example, if the plant has a time delay τ, then its transfer function is:
Hτ(s) = H(s)e^{-τs} (7.3.10)
Hτ(jω) = H(jω)e^{-jωτ} (7.3.11)
Aτ(ω) = |Hτ(jω)| = |H(jω)|·|e^{-jωτ}| = |H(jω)| = A(ω) (7.3.12)
φτ(ω) = arg(Hτ(jω)) = arg(H(jω)) − ωτ ⇒ φτ(ω) = φ(ω) − ωτ (7.3.13)
These relations express the effect of time delay on the complex frequency
characteristics as illustrated in Fig. 7.3.4.
We can see that the complex frequency characteristic now has a spiral shape because, as the (negative) phase increases in absolute value, at the same time the magnitude approaches zero.
[Figure: the characteristics H(jω) and Hτ(jω) in the plane Re(H(jω)), jIm(H(jω)); Hτ(0) = H(0), and at each frequency ωi the point Hτ(jωi) is obtained from H(jωi) by an additional clockwise rotation of angle ωiτ, so that Hτ(jω) winds into a spiral around the origin.]
Figure no. 7.3.4.
The time delay systems are still linear ones, but their transfer functions unfortunately are not ratios of polynomials.
From the analytical point of view we can compute the closed loop transfer function,
Hdτ(s) = M(s)e^{-τs}/N(s) ⇒ Hrτ(s) = M(s)e^{-τs}/(N(s) + e^{-τs}M(s)) = M(s)e^{-τs}/L(s) (7.3.14)
but the denominator L(s) = N(s) + e^{-τs}M(s) is not a polynomial, and we cannot easily solve the characteristic equation L(s) = 0, so the algebraic criteria cannot be applied.
Instead, the Nyquist criterion can be applied even for this type of systems.
There are several methods to get rid of the factor e^{-τs}. One possibility is to use the Padé approximation:
e^{-τs} ≈ (1 − (τ/2)s)/(1 + (τ/2)s) (7.3.15)
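The quality of approximation (7.3.15) can be inspected numerically: like e^{-jωτ} itself, the first order Padé approximant has unit magnitude at every frequency, while its phase −2·arctan(ωτ/2) follows −ωτ only for small ωτ. A minimal sketch (the value τ = 0.5 is an arbitrary assumption):

```python
import numpy as np

tau = 0.5                                          # assumed time delay
# first order Pade approximant of e^{-tau s}, relation (7.3.15)
pade = lambda s: (1 - tau / 2 * s) / (1 + tau / 2 * s)

mags, phase_err = [], []
for w in (0.1, 0.5, 1.0, 5.0):
    exact = np.exp(-1j * w * tau)
    approx = pade(1j * w)
    mags.append(abs(approx))                       # always 1, like |e^{-j w tau}|
    phase_err.append(abs(np.angle(approx) - np.angle(exact)))
```

The magnitude is preserved exactly, while the phase error grows with ωτ; this is why the first order Padé form is trusted only over a limited frequency band.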

8. DISCRETE TIME SYSTEMS.

8.1. Z - Transformation.

Let {yk}, k ≥ 0, be a string of numbers. These numbers can be the values of a time function y(t) for t = kT, so we denote yk = y(kT).
If the time function y(t) has discontinuities of the first order at the time moments kT, then we consider yk as the right side limit,
yk = y(kT+) = lim_{t→kT, t>kT} y(t) (8.1.1)
The Z-transformation is a mapping from the set of strings {yk}k≥0 into the complex plane called the z-plane. The result is a complex function Y(z).
We call this transformation "the direct Z-transformation". The result of this transformation, Y(z), is called the Z-transform, denoted by
Y(z) = Z{(yk)k≥0} (8.1.2)
In the same way the so-called "inverse Z-transformation" is defined,
(yk)k≥0 = Z^{-1}{Y(z)} (8.1.3)
The interconnection between the two Z-transformations is represented by a diagram as in Fig. 8.1.1.,

Time variable function, string of numbers, complex variable function:
y(t) --SAMPLING (univocally)--> (yk)k≥0 --Z{ } (univocally)--> Y(z)
Y(z) --Z^{-1}{ } (univocally)--> (yk)k≥0 --COVERING (not univocally)--> y(t)
Figure no. 8.1.1.

8.1.1. Direct Z-Transformation.


There are several ways to define the Z-transformation:
8.1.1.1. Fundamental Formula.
By definition, the Z-transformation is:
Y(z) = Z{yk} = Σ_{k=0}^{∞} yk z^{-k} (8.1.4)
This power series is convergent for |z| > Rc, where Rc is called the convergence radius. The convergence property states that Z{(yk)k≥0} exists if lim_{n→∞} Σ_{k=0}^{n} yk z^{-k} exists.
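The convergence statement can be illustrated numerically: for a string with a known transform, the partial sums of (8.1.4) approach the closed form at any test point |z| > Rc. A quick sketch for the constant string yk = 1, whose transform z/(z − 1) is derived in the examples below (the test point z = 2 is arbitrary):

```python
# partial sums of the fundamental formula (8.1.4) for y_k = 1,
# evaluated at a point |z| > Rc = 1
z = 2.0
partial = sum(1.0 * z ** (-k) for k in range(200))
closed = z / (z - 1)   # known transform of the unit step string
```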
We can consider the Z-transformation as being applied to an original time
function y(t),



Y(z) = Z{y(t)} = Σ_{k=0}^{∞} y(kT+)·z^{-k}, |z| > Rc = e^{σ0·T} (8.1.5)
where the string of numbers (yk)k≥0 to which the Z-transformation is applied is represented by the values of this function, yk = y(kT+).
Here σ0 is the convergence abscissa of y(t).
The process of getting the string of numbers yk, the values of a time function y(t) for t = kT (or t = kT+), is called the "sampling process", having the variable T as "sampling period".
Vice versa, a time function y(t) = ycov(t) can be obtained from a string of numbers (yk)k≥0 by simply replacing, for example, k by t/T.
This is one of the covering functions: a smooth function which passes through the points (kT, yk).
Such a process is called a "uniform covering process":
y(t) = ycov(t) = yk|k→t/T. (8.1.6)
Example.
Suppose yk = k/(k² + 1), k ≥ 0. We can create a time function
y(t) = ycov(t) = (t/T)/((t/T)² + 1).
This time function has a Laplace transform Y(s). Through this covering process, the string values are forced to be considered equally spaced in time, even if the string, maybe, has nothing to do with the time variable.

8.1.1.2. Formula by Residues.


The second formula for the Z-transformation, the so-called "formula by residues", manipulates the Laplace transform of the genuine time function y(t) when the Z-transformation is applied to a time function, or of the covering function y(t) = ycov(t) when the Z-transformation is applied to a pure string of numbers (yk)k≥0:
Y(z) = Z{y(t)} = Σ_{poles of Y(ξ)} Rez[ Y(ξ)·1/(1 − z^{-1}e^{Tξ}) ] (8.1.7)
In this formula Y(s) is the Laplace transform of y(t) or ycov(t), and the expression Y(ξ) is obtained through a simple replacement of s by ξ, that means
Y(ξ) = Y(s)|s→ξ
Examples.
1. Let yk = 1, k ≥ 0; it could be obtained from y(t) = 1(t) or from y(t) = yk|k→t/T = 1, t ≥ 0.
This is the so-called "unit step string" or "unit step function" respectively. First we apply the fundamental formula:
Y(z) = Z{1(t)} = Z{1, k ≥ 0} = Σ_{k=0}^{∞} 1·z^{-k} = 1/(1 − z^{-1}) = z/(z − 1), if |z^{-1}| < 1 ⇒ |z| > 1 = Rc

We can get the same result by the second method:


L{1(t)} = 1/s = Y(s) ⇒ Y(ξ) = 1/ξ, Re(s) > 0 = σ0.
Y(z) = Z{1(t)} = Σ Rez[ (1/ξ)·1/(1 − z^{-1}e^{Tξ}) ]|ξ=0 = 1/(1 − z^{-1}) = z/(z − 1), |z| > e^{σ0·T} = 1 = Rc

2. yk = kT, k ≥ 0, = {0, T, 2T, 3T, ...}.
We can create a time function
y(t) = t ⇒ Y(s) = 1/s²
Y(z) = Z{kT} = Σ_{k=0}^{∞} kT·z^{-k} = ...
It is difficult to continue in this way, so we shall use the second formula:
Y(s) = 1/s² ⇒ Y(z) = Σ_* Rez[ (1/ξ²)·1/(1 − z^{-1}e^{Tξ}) ] = (1/(2 − 1)!)·d/dξ[ ξ²·(1/ξ²)·1/(1 − z^{-1}e^{Tξ}) ]|ξ=0
= z^{-1}Te^{Tξ}/(1 − z^{-1}e^{Tξ})²|ξ=0 = Tz^{-1}/(1 − z^{-1})² = Tz/(z − 1)²,
where " * " means: poles of 1/ξ².

3. Let us consider now a finite string of numbers,
yk = {1, 4, −5, 0, 0, ..., 0, ...}, with y0 = 1, y1 = 4, y2 = −5. Directly we compute,
Y(z) = 1 + 4z^{-1} − 5z^{-2} + 0·z^{-3} + ... = (z² + 4z − 5)/z²
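Both transforms obtained in these examples can be cross-checked numerically by truncating the defining series (8.1.4) and comparing with the closed forms Tz/(z − 1)² and (z² + 4z − 5)/z² at a test point; T = 0.1 and z = 1.5 are arbitrary choices:

```python
# numerical check of Z{kT} = Tz/(z-1)^2 and of the finite string transform
T, z = 0.1, 1.5
ramp_series = sum(k * T * z ** (-k) for k in range(2000))
ramp_closed = T * z / (z - 1) ** 2

finite_series = 1 + 4 * z ** (-1) - 5 * z ** (-2)
finite_closed = (z ** 2 + 4 * z - 5) / z ** 2
```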

8.1.2. Inverse Z-Transformation.


There are several methods to evaluate the inverse Z-transform:
8.1.2.1. Fundamental Formula.
The most general method for obtaining the inverse of a Z-transform is the inversion integral. Having a complex function Y(z) which is analytic in an annular domain R1 < |z| < R2, and Γ any simple closed curve separating R1 from R2, then
yk = (1/(2πj)) ∮_Γ Y(z) z^{k−1} dz (8.1.8)
Simply, we can say that Γ is any closed curve in the z-plane which encloses all the finite poles of Y(z)z^{k−1}.
If the function Y(z) is a rational and causal one, then the above integral is easily expressed using the theorem of residues:
(yk)k≥0 = Σ_{poles of Y(z)z^{k−1}} Rez[ Y(z)z^{k−1} ] (8.1.9)


The resulting string yk can be interpreted as the values of a time function,
yk = y(t)|t=kT = y(kT).
For example, if yk = k/(k² + 1), k ≥ 0, we can interpret it as
yk = y(t)|t=kT = (kT/T)/((kT/T)² + 1) = y(kT).

Examples.
1. Y(z) = z/(z − 1).
yk = Z^{-1}{Y(z)} = Σ_{poles of Y(z)z^{k−1}} Rez[ (z/(z − 1))·z^{k−1} ], where k ≥ 0 is a parameter.
We must check whether the number of poles is different for different values of k. In this case we can see that (z/(z − 1))·z^{k−1} = z^k/(z − 1) has one simple pole, z = 1, for any k ≥ 0, so
yk = 1, ∀k ≥ 0.

2. Y(z) = 1/(z − 1) ⇒ yk = Σ_{poles of z^{k−1}/(z−1)} Rez[ (1/(z − 1))·z^{k−1} ]
For k = 0,
y0 = Σ Rez[ 1/((z − 1)z) ] = 1 + 1/(0 − 1) = 0,
because the function has two simple poles: z = 0 and z = 1.
For k ≥ 1,
yk = Σ Rez[ z^{k−1}/(z − 1) ] = 1,
because the function has one simple pole, z = 1.

3. Y(z) = (z² + 4z − 5)/z² ⇒ yk = Σ Rez[ ((z² + 4z − 5)/z²)·z^{k−1} ]
For k = 0,
y0 = Rez[ (z² + 4z − 5)/z³ ] = (1/(3 − 1)!)·[(z² + 4z − 5)'']|z=0 = (1/2)·2 = 1.
For k = 1,
y1 = Rez[ (z² + 4z − 5)/z² ] = (1/(2 − 1)!)·[(z² + 4z − 5)']|z=0 = (1/1)·4 = 4.
In the same way, y2 = −5 and yk = 0 for k ≥ 3.
We have re-obtained the particular string from the above example.


8.1.2.2. Partial Fraction Expansion Method.


The expression of Y(z) is expanded into a sum of simple fractions for which the inverse Z-transform is known:
z/(z − λ), z/(z − λ)^p, z/((z − α)² + β²).
To do this, we expand Y(z)/z into a common sum of fractions.

Example.
We know that
yk = λ^k ⇒ Y(z) = Σ_{k=0}^{∞} λ^k z^{-k} = Σ_{k=0}^{∞} (λz^{-1})^k = 1/(1 − λz^{-1}) = z/(z − λ), so
Z^{-1}{ z/(z − λ) } = λ^k.
If Y(z)/z has simple poles,
Y(z)/z = Σ_i Ai/(z − λi) ⇒ Y(z) = Σ_i Ai·z/(z − λi), then
Z^{-1}{Y(z)} = Σ_i [ Ai·Z^{-1}{ z/(z − λi) } ] = Σ_i Ai·λi^k
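As a worked illustration (the transform below is our own hypothetical example, not one from the text), take Y(z) = z²/((z − 1)(z − 1/2)). Expanding Y(z)/z = 2/(z − 1) − 1/(z − 1/2) and applying Z^{-1}{z/(z − λ)} = λ^k gives yk = 2 − (1/2)^k, which can be verified against the series definition:

```python
# partial-fraction inversion of Y(z) = z^2/((z-1)(z-1/2)):
# Y(z)/z = 2/(z-1) - 1/(z-1/2)  =>  y_k = 2*1^k - (1/2)^k
yk = lambda k: 2.0 - 0.5 ** k

# consistency check: the series sum of y_k z^{-k} must reproduce Y(z)
z = 3.0
series = sum(yk(k) * z ** (-k) for k in range(300))
closed = z ** 2 / ((z - 1) * (z - 0.5))
```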
If some fractions have complex poles, then for those fractions the fundamental inversion formula can be used.

8.1.2.3. Power Series Method.


From the definition formula (8.1.4) we can see that yk is just the coefficient of z^{-k} in the power-series form of Y(z). By some method, Y(z) is expanded into a power series, and we just keep the coefficients:
Y(z) = Σ_{k=0}^{∞} ck z^{-k}, ck = yk (8.1.10)
For rational functions, this can be done just by dividing the numerator by the denominator (long division).

Example:
Y(z) = (z² + 1)/(z² + 2z + 1) = 1 − 2z^{-1} + 4z^{-2} + ... ⇒
y0 = 1; y1 = −2; y2 = 4; ...
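The long division can be mechanised; the helper below is our own sketch, not a routine from the text. It takes numerator and denominator coefficients in descending powers of z (with deg num ≤ deg den) and returns the first n coefficients yk of the expansion in z^{-1}:

```python
def z_series(num, den, n):
    """First n coefficients y_0..y_{n-1} of num(z)/den(z) expanded in
    powers of z^{-1} by long division; num, den list coefficients of
    descending powers of z, with deg(num) <= deg(den)."""
    num = [0.0] * (len(den) - len(num)) + list(num)   # align degrees
    coeffs, rem = [], list(num)
    for _ in range(n):
        c = rem[0] / den[0]                # next series coefficient
        coeffs.append(c)
        # subtract c*den and shift the remainder one power of z^{-1}
        rem = [r - c * d for r, d in zip(rem, den)][1:] + [0.0]
    return coeffs

coeffs = z_series([1, 0, 1], [1, 2, 1], 4)   # Y(z) = (z^2+1)/(z^2+2z+1)
```

For this Y(z) it returns 1, −2, 4, −6, reproducing the coefficients of the example above.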


8.1.3. Theorems of the Z-Transformation.


There are some properties useful for the fast computation of Z-transforms:

8.1.3.1. Linearity Theorem.


If {yak}k≥0 (or ya(t)) and {ybk}k≥0 (or yb(t)) have the Z-transforms
Ya(z) = Z{yak} = Z{ya(t)}; Yb(z) = Z{ybk} = Z{yb(t)},
then, for any α, β real or complex:
Z{α·yak + β·ybk} = α·Ya(z) + β·Yb(z) (8.1.11)
8.1.3.2. Real Time Delay Theorem.
 −1 
Z{yk−p } = z−p  Y(z) + Σ yk z−k  , Y(z) = Z{yk }, p ∈ N (8.1.12)
 k=−p 
 −1 
Z{y(t − pT)} = z−p  Y(z) + Σ y(kT)z−k  , Y(z) = Z{y(t)}, p ∈ N (8.1.13)
 k=−p 
If the string or the time function is an original one then the second term
does not appear.

8.1.3.3. Real Time Shifting in Advance Theorem.


Z{yk+p} = z^{p}[ Y(z) − Σ_{k=0}^{p−1} yk z^{-k} ], p ∈ N (8.1.14)
Z{y(t + pT)} = z^{p}[ Y(z) − Σ_{k=0}^{p−1} y(kT) z^{-k} ], p ∈ N (8.1.15)
Example.
Z{yk+1} = z[Y(z) − y0]
Z{yk+2} = z²[ Y(z) − Σ_{k=0}^{1} yk z^{-k} ] = z²Y(z) − y0·z² − y1·z

8.1.3.4. Initial Value Theorem.


y0 = lim_{z→∞} Y(z) (8.1.16)
We understand by y0,
y0 = lim_{t→0, t>0} y(t).

8.1.3.5. Final Value Theorem.


lim_{k→∞} yk = lim_{k→∞} y(kT) = lim_{z→1}[ (1 − z^{-1})Y(z) ] (8.1.17)
if the time limit exists, that is, if the function (1 − z^{-1})Y(z) has no poles on or outside the unit circle in the z-plane.
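Both the initial and the final value theorems can be checked numerically on a string with known limits, for example yk = 1 − 0.8^k, whose transform is Y(z) = z/(z − 1) − z/(z − 0.8). Here (1 − z^{-1})Y(z) = 1 − (z − 1)/(z − 0.8) has its only pole at z = 0.8, inside the unit circle, so the final value theorem applies (the evaluation points standing in for z → ∞ and z → 1 are arbitrary):

```python
# y_k = 1 - 0.8^k  ->  Y(z) = z/(z-1) - z/(z-0.8);  y_0 = 0, y_inf = 1
Y = lambda z: z / (z - 1) - z / (z - 0.8)

y0 = Y(1e9)                                   # initial value, (8.1.16)
yinf = (1 - 1 / (1 + 1e-6)) * Y(1 + 1e-6)     # final value, (8.1.17)
```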


8.1.3.6. Complex Shifting Theorem.


Z{λ^k yk} = Y(λ^{-1}z), Y(z) = Z{yk} (8.1.18)
Proof:
Z{λ^k yk} = Σ_{k=0}^{∞} λ^k yk z^{-k} = Σ_{k=0}^{∞} yk (λ^{-1}z)^{-k} = Y(z1)|z1=λ^{-1}z = Y(λ^{-1}z)
or
Z{e^{at} y(t)} = Y(e^{-aT}z), where: Y(z) = Z{y(t)}

8.1.3.7. Complex Derivative Theorem.


Z{k^p yk} = [−zY(z)]^{[p]} (8.1.19)
where the operator [p], applied p times, means:
[−zY(z)]^{[p]} = −z d/dz [−zY(z)]^{[p−1]}, [−zY(z)]^{[0]} = Y(z)
Z{t^p y(t)} = [−TzY(z)]^{[p]} = −Tz d/dz[ [−TzY(z)]^{[p−1]} ], [−TzY(z)]^{[0]} = Y(z) (8.1.20)
Examples.
1. Z{k yk} = −z dY(z)/dz (8.1.21)
2. Z{k² yk} = −z d/dz[−zY(z)]^{[1]} = −z d/dz[ −z dY(z)/dz ] (8.1.22)
3. Z{t y(t)} = −Tz dY(z)/dz (8.1.23)
4. Z{t² y(t)} = −Tz d/dz[−TzY(z)]^{[1]} = −Tz d/dz[ −Tz dY(z)/dz ] (8.1.24)
Proof of the Complex Derivative Theorem.
For p = 1 and p = 2 we can prove:
Y(z) = Σ_{k=0}^{∞} y(kT) z^{-k}; dY(z)/dz = −Σ_{k=0}^{∞} k y(kT) z^{-k} z^{-1}
−Tz dY(z)/dz = Σ_{k=0}^{∞} (kT y(kT)) z^{-k} = Z{(kT)y(kT)} = Z{t y(t)}, which is [−TzY(z)]^{[1]};
d/dz[ −Tz dY(z)/dz ] = −Σ_{k=0}^{∞} k²T y(kT) z^{-k-1};
−Tz d/dz[ −Tz dY(z)/dz ] = Σ_{k=0}^{∞} k²T² y(kT) z^{-k} = Z{t² y(t)}, which is [−TzY(z)]^{[2]}.
By the operator definition we know that
[−TzY(z)]^{[p]} = −Tz d/dz[ [−TzY(z)]^{[p−1]} ].
Suppose that
[−TzY(z)]^{[p−1]} = Z{t^{p−1} y(t)} = Σ_{k=0}^{∞} (kT)^{p−1} y(kT) z^{-k}.
From the above,
[−TzY(z)]^{[p]} = −Tz d/dz Σ_{k=0}^{∞} (kT)^{p−1} y(kT) z^{-k} = Σ_{k=0}^{∞} (kT)^{p−1}·(kT) y(kT) z^{-k} = Σ_{k=0}^{∞} (kT)^p y(kT) z^{-k} = Z{t^p y(t)}, q.e.d.

Examples.
1. Let w(t) be
w(t) = t = t·1(t) = t·y(t), where y(t) = 1(t).
But we know Y(z) = Z{1(t)} = z/(z − 1).
W(z) = Z{t y(t)} = −Tz d/dz( z/(z − 1) ) = −Tz·(−1/(z − 1)²) = Tz/(z − 1)²,
the same result we obtained through other methods.
2. Z{t²·1(t)} = Z{t·(t·1(t))} = −Tz d/dz( Tz/(z − 1)² ) = −Tz·[T(z − 1)² − 2Tz(z − 1)]/(z − 1)^4 = T²z(z + 1)/(z − 1)³

8.1.3.8. Partial Derivative Theorem.


If yk(α) or y(t, α) are functions differentiable with respect to a parameter α, then
∂/∂α Z{yk(α)} = Z{ ∂yk(α)/∂α } (8.1.25)

8.1.3.9. Real Convolution Sum Theorem.


Let ya(t), yb(t) be two original functions having:
Z{ya(t)} = Ya(z); Z{yb(t)} = Yb(z).
The same ya, yb could also be two original strings, yak and ybk.
This theorem, one of the most important for system theory, states that the Z-transform of the so-called "convolution sum of two strings" is just the algebraic product of the corresponding Z-transforms:
Z{ Σ_{i=0}^{k} ya(iT) yb((k − i)T) } = Ya(z)·Yb(z). (8.1.26)
Vice versa, the inverse Z-transform of a product of two Z-transforms is a convolution sum:
Z^{-1}{Ya(z)Yb(z)} = Σ_{i=0}^{k} ya(iT) yb((k − i)T) = Σ_{i=0}^{k} ya((k − i)T) yb(iT) (8.1.27)

188
8. DISCRETE TIME SYSTEMS. 8.1. Z - Transformation.

Proof: Let us denote
w(kT) = Σ_{i=0}^{k} ya(iT) yb((k − i)T), W(z) = Z{w(kT)} = Ya(z)Yb(z).
According to the fundamental formula,
W(z) = Σ_{k=0}^{∞} [ Σ_{i=0}^{k} ya(iT) yb((k − i)T) ]·z^{-k} = Σ_{k=0}^{∞} [ Σ_{i=0}^{k} ya(iT) yb((k − i)T)·z^{-(k−i)}·z^{-i} ]
Denoting p = k − i, then k = 0 ⇒ p = −i and k = ∞ ⇒ p = ∞.
Because ya, yb are original functions,
yb((k − i)T) = 0 for i > k (and ya(kT) = 0, yb(kT) = 0 for k < 0),
the upper limit of the inner sum can be taken to ∞ and the lower limit starts from zero, so we can change the summation order. With this, the above relation can be written:
W(z) = Σ_{i=0}^{∞} Σ_{k=0}^{∞} [ ya(iT) yb((k − i)T)·z^{-(k−i)}·z^{-i} ] = Σ_{i=0}^{∞} Σ_{p=−i}^{∞} [ ya(iT) yb(pT)·z^{-p}·z^{-i} ]
= Σ_{i=0}^{∞} [ ya(iT) z^{-i} ( Σ_{p=0}^{∞} yb(pT) z^{-p} ) ] = [ Σ_{i=0}^{∞} ya(iT) z^{-i} ]·Yb(z) = Ya(z)·Yb(z), q.e.d.
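The theorem can be verified numerically on two hypothetical original strings, ya(kT) = 0.9^k and yb(kT) = 0.5^k, by comparing the truncated transform of their convolution sum with the product of the truncated transforms at a test point |z| > 1:

```python
import numpy as np

ya = 0.9 ** np.arange(50)          # y_a(kT) = 0.9^k
yb = 0.5 ** np.arange(50)          # y_b(kT) = 0.5^k
wk = np.convolve(ya, yb)[:50]      # w_k = sum_i y_a(iT) y_b((k-i)T)

z = 2.0
Zt = lambda y: sum(y[k] * z ** (-k) for k in range(len(y)))  # truncated (8.1.4)
lhs = Zt(wk)                       # Z of the convolution sum
rhs = Zt(ya) * Zt(yb)              # product of the Z-transforms
```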


8.2. Pure Discrete Time Systems (DTS).

8.2.1. Introduction ; Example.


A pure discrete time system ("DTS") is an oriented system whose inputs are strings of numbers and whose outputs are strings of numbers too.
The input-output relation of a single-input, single-output system is a difference equation.
For multi-input, multi-output systems, the input-output relation is expressed by a set of difference equations. As an oriented system, a DTS is represented in Fig. no. 8.2.1.

(uk)k≥0 --> [ Pure discrete-time system ] --> (yk)k≥0
(string of numbers, can be a vector)          (string of numbers, can be a vector)

Figure no. 8.2.1.


Example 8.2.1. First Order DTS Implementation.
Let us consider a first order difference equation
yk = a·yk−1 + b·uk, k ≥ 1 (8.2.1)
which is an input-output relation of a DTS having the string uk as input variable and yk as output variable.
It can be materialised by a computer program run for k ≥ 1. Besides the coefficients a, b, which are structure parameters, we have to know the initial condition yk−1|k=1 = y0. This illustrates that the system is a dynamic one.

A program written in pseudocode is:


Initialise: a,b, kin=1, kmax, Y.
% Here Y as a computer variable "CV" has a semantic of y0 as a mathematical
% variable "MV".
For k=kin : kmax
Read U
% Here U as CV represents uk as MV.
Y=aY+bU
% The previous value of Y, ( yk-1 ), multiplied by the coefficient a plus the
% actual value of U, ( uk ), multiplied by b determines the actual value of Y,
% (yk ). This is just the relation (8.2.1).
Write Y
% The output is the actual value of Y , (yk ).
End
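The pseudocode above can be written, for instance, in Python; this is a minimal sketch, and the test values a = 0.5, b = 1 with a unit step input are arbitrary choices:

```python
def dts_first_order(a, b, u, y0=0.0):
    """Simulate y_k = a*y_{k-1} + b*u_k, k >= 1 (relation (8.2.1)).
    u holds the inputs u_1, u_2, ...; returns the list [y_0, y_1, ...]."""
    y = [y0]
    for uk in u:
        y.append(a * y[-1] + b * uk)   # previous output times a, plus b*u_k
    return y

a, b = 0.5, 1.0
y = dts_first_order(a, b, [1.0] * 60)  # unit step input
```

With |a| < 1 the output settles at b/(1 − a), a fact derived analytically in the next paragraphs.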


Of course we can replace k by k + 1, and relation (8.2.1) becomes equivalent to (8.2.2):
yk+1 − a·yk = b·uk+1, k ≥ 0 (8.2.2)
Running such a program we can obtain the string of numbers yk only if the input numbers uk and the initial value y0 are given, but we cannot understand the system's possibilities. Because of that, an analytical approach is necessary.
Relation (8.2.1) is more suitable for computer implementation. It is a "pure recursive relation: the actual result depends on the previous results and on the actual and previous inputs".
Relation (8.2.2) is more suitable for analytical treatment because everything is defined for k ≥ 0.

Analytical approach.
This is just an application of the Z-transform. Applying the Z-transform to (8.2.2) and using (8.1.14) we obtain
z(Y(z) − y0) − aY(z) = b[z(U(z) − u0)].
The Z-transform of the output string can be expressed as
Y(z) = (bz/(z − a))·U(z) + (z/(z − a))·(y0 − bu0)
where the first term is the forced term and the second one is the free term:
Y(z) = H(z)U(z) + Yl(z), Yf(z) = H(z)U(z) (8.2.3)

We observe that the output is a sum of two terms:
i. The forced term, Yf(z), which depends on the input only (more precisely, it does not depend on the initial conditions),
Yf(z) = H(z)U(z), H(z) = bz/(z − a) (8.2.4)
where the operator
H(z) = Y(z)/U(z)|Z.I.C. (8.2.5)
is called the "z-transfer function".
The z-transfer function is the ratio between the z-transform of the output variable and the z-transform of the input variable which determined that output, in zero initial conditions "Z.I.C.", if and only if this ratio is the same for any input variable.
We can see that this transfer function has one pole, z = a. A lot of behaviour properties depend on the transfer function poles, in our case on the value of a.
ii. The free term, Yl(z), which depends on the initial conditions only (more precisely, it does not depend on the input variable),
Yl(z) = (z/(z − a))·(y0 − bu0). (8.2.6)


Here the free term looks like depending on two initial conditions, y0 and u0, even if the difference equation (8.2.2) from which we started, and its equivalent (8.2.1), is of the first order and must depend on one condition (as a minimum number of conditions). However, we could start the computer program based on (8.2.1) with one initial condition only.
The explanation is the following: if we look at (8.2.2) we can see that, for k = −1, y0 − bu0 = a·y−1, and the free term looks like
Yl(z) = (z/(z − a))·a·y−1, (8.2.7)
depending on a single initial condition.
If we start integrating (8.2.2) from k = −1 (to use u0 as the first input number), y0 depends on the input u0 and on the genuine initial condition y−1. If we start from k = 0, the first output we can compute is y1, which depends on y0 and on the input u0.

Conclusion: in relation (8.2.6), obtained through the z-transform (shifting in advance theorem), u0 does not have to be interpreted as an initial condition. Relation (8.2.6) is only one expression of the free term.
The general time response (forced and free) can be easily calculated applying the inverse z-transformation to relation (8.2.3),
yk = Z^{-1}{Y(z)} = Σ_{i=0}^{k} hi·uk−i + yl(k) (8.2.8)
where
hk = Z^{-1}{H(z)} (8.2.9)
is called the "discrete weighting function", being the inverse z-transform of the z-transfer function.
The time free response is
Yl(z) = (z/(z − a))·(y0 − bu0) ⇒ yl(k) = Σ Rez[ (z/(z − a))·z^{k−1} ]·(y0 − bu0) ⇒ yl(k) = a^k·(y0 − bu0) (8.2.10)
Expressions of the responses to different inputs can be easily calculated. Supposing that uk is a unit step string, that is uk = 1, k ≥ 0 ⇒ U(z) = z/(z − 1),
yf(k) = Z^{-1}{ b·(z/(z − a))·(z/(z − 1)) } = Σ Rez[ (bz²/((z − a)(z − 1)))·z^{k−1} ] = b·a^{k+1}/(a − 1) + b/(1 − a) = (b/(1 − a))·(1 − a^{k+1})
If |a| < 1 then lim_{k→∞} a^{k+1} = 0 and
yf(∞) = b/(1 − a); yl(∞) = 0.
This allows us to point out an important conclusion:

If the moduli of the z-transfer function poles are below unity (inside the unit circle in the z-plane), then the free response vanishes and the forced response goes to the permanent response (the permanent response is the component of the forced response determined by the poles of the input z-transform only).
In this particular step response the input has one pole, z = 1, and the permanent response is just b/(1 − a), as we can see from the residues formula. This property is called the stability property.
The forced response for k → ∞ can be determined directly from the expression of Yf(z), using the final value theorem, without computing the expression of the time response:
yf(∞) = lim_{z→1}[ (1 − z^{-1})Yf(z) ] = lim_{z→1}[ (1 − z^{-1})·b·(z/(z − a))·(z/(z − 1)) ] = b/(1 − a).
Of course, we have to pay something for this simplicity: the validity of the theorem has to be checked ((1 − z^{-1})Yf(z) has to be analytic on and outside the unit circle).
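The closed form yf(k) = (b/(1 − a))(1 − a^{k+1}) found above can be confirmed by running the recursion (8.2.2) itself with a zero free term, that is starting from y0 = b·u0 (the values a = 0.5, b = 2 are arbitrary test choices):

```python
a, b = 0.5, 2.0
y, ys = b * 1.0, []              # forced response: y_0 = b*u_0, unit step input
for k in range(40):
    ys.append(y)
    y = a * y + b * 1.0          # y_{k+1} = a*y_k + b*u_{k+1}

closed = [b / (1 - a) * (1 - a ** (k + 1)) for k in range(40)]
err = max(abs(p - q) for p, q in zip(ys, closed))
```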

8.2.2. Input Output Description of Pure Discrete Time Systems.


For the so-called linear time invariant discrete-time systems, in short "LIDTS", the input-output relation is an ordinary difference equation with constant coefficients,
Σ_{i=0}^{n} ai·yk+i = Σ_{i=0}^{m} bi·uk+i, an ≠ 0. (8.2.11)
If m < n, the system is a strictly proper system (strictly causal);
if m = n, the system is a proper system (causal);
if m > n, the system is an improper system (non-causal).
An improper system cannot be physically realised (even by a computer program) as an on-line algorithm.

Example 8.2.2.1. Improper First Order DTS.


The system described by
yk − 2yk−1 = 3uk + 7uk+1 (8.2.12)
cannot be realised, because the actual value of the output, yk, depends on the future value of the input, uk+1, which is unknown. This is not the case when the expression of uk is a priori known for any k, but then it would not be a real physical system (an on-line or real-time algorithm).
Example 8.2.2.2. Proper Second Order DTS.
a2·yk+2 + a1·yk+1 + a0·yk = b2·uk+2 + b1·uk+1 + b0·uk, k ≥ 0 (8.2.13)
This difference equation can be implemented in a recursive form as follows. We can write
yk+2 = −(a1/a2)yk+1 − (a0/a2)yk + (b2/a2)uk+2 + (b1/a2)uk+1 + (b0/a2)uk
If we denote k + 2 = j, then replace j by k, and denote
αi = −ai/a2, i = 0, 1; βi = bi/a2, i = 0, 1, 2,

then (8.2.13) becomes


yk = α1·yk−1 + α0·yk−2 + β2·uk + β1·uk−1 + β0·uk−2, k ≥ 2 (8.2.14)
For a computer program we have to know:
α 1 , α 0 , β 2 , β 1 , β 0 ; y1 , y0 , u1 , u0 .
Example of a computer program.
Initialise : α 1 , α 0 , β 2 , β 1 , β 0 ; Y1, Y0, U1, U0, kin = 2, k max
For k = kin : kmax
Read U
Y ← α 1 Y1 + α 0 Y0 + β 2 U + β 1 U1 + β 0 U0
U0=U1; U1=U; Y0=Y1; Y1=Y
% Update the initial conditions
Write Y
End
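The program above translates directly into Python (a sketch). As a simple consistency check, zeroing a0, b1, b0 degenerates the second order recursion into the first order system yk = a1·yk−1 + b2·uk, whose unit step response settles at b2/(1 − a1):

```python
def dts_second_order(alpha, beta, u, y_init, u_init):
    """Recursive form (8.2.14): y_k = a1*y_{k-1} + a0*y_{k-2}
    + b2*u_k + b1*u_{k-1} + b0*u_{k-2}, k >= 2; u holds u_2, u_3, ..."""
    a1, a0 = alpha
    b2, b1, b0 = beta
    Y0, Y1 = y_init                  # initial conditions y_0, y_1
    U0, U1 = u_init                  # initial inputs u_0, u_1
    out = [Y0, Y1]
    for U in u:
        Y = a1 * Y1 + a0 * Y0 + b2 * U + b1 * U1 + b0 * U0
        U0, U1 = U1, U               # update the stored conditions
        Y0, Y1 = Y1, Y
        out.append(Y)
    return out

out = dts_second_order((0.5, 0.0), (1.0, 0.0, 0.0),
                       [1.0] * 30, (0.0, 0.0), (0.0, 0.0))
```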

The discrete time systems (which are linear and time invariant) can be very easily described in the complex z-plane by using the Z-transformation. Applying the Z-transformation to the difference equation (8.2.11), one obtains
Σ_{i=0}^{n} ai z^i [ Y(z) − Σ_{k=0}^{i−1} yk z^{-k} ] = Σ_{i=0}^{m} bi z^i [ U(z) − Σ_{k=0}^{i−1} uk z^{-k} ] (8.2.15)
(for i = 0 the inner sums are empty).

Y(z) = H(z)U(z) + I(z)/L(z), H(z) = M(z)/L(z) (8.2.16)
M(z) = bm·z^m + ... + b1·z + b0 (8.2.17)
L(z) = an·z^n + ... + a1·z + a0 (8.2.18)
where Yf(z) = H(z)U(z) is the forced component, Yl(z) = I(z)/L(z) is the free component (I(z) collects the initial condition terms of (8.2.15)), and the expression H(z) is called the z-transfer function.
The z-transfer function of a discrete time system is defined as the ratio between the Z-transform of the output and the Z-transform of the input which determined that output, in zero initial conditions, if this ratio is the same for any input:
H(z) = Y(z)/U(z)|zero init. cond. = M(z)/L(z) (8.2.19)
The z-transfer function determines only the forced response. The time expression of the forced response is
yk = Σ_{i=0}^{k} hi·uk−i (8.2.20)
where hk is the inverse Z-transform of H(z), called the weighting function:
hk = Z^{-1}{H(z)} ⇔ H(z) = Z{hk}. (8.2.21)


The weighting function represents the system response to a unit impulse input in zero initial conditions. Because of that, the weighting function is also called the "impulse response".
The unit impulse string is defined as
u0 = 1, uk = 0, k ≥ 1;
U(z) = 1 ⇒ Y(z) = H(z)·1 = H(z).
For both proper and strictly proper systems, hk = 0, ∀k < 0.
For strictly proper systems, h0 = 0. This reveals that the present input uk has no contribution to the present output yk.
Another important particular response is the so-called "unit step response", the response of a discrete time system to a unit step input in zero initial conditions. The unit step input string is defined as
uk = 1, k ≥ 0 ⇒ U(z) = z/(z − 1), |z| > 1.
The unit step response string at time k, denoted by hsk, where the superscript "s" means step, can be evaluated, for strictly proper systems, using the formula
hsk = Σ_* Rez[ H(z)·(z/(z − 1))·z^{k−1} ] (8.2.22)
where " * " means: poles of H(z)·(z/(z − 1))·z^{k−1}.

8.2.3. State Space Description of Discrete Time Systems.


Starting from a practical problem, a discrete time system can be expressed by a first order difference equation in matrix form,
xk+1 = A·xk + B·uk
yk = C·xk + D·uk (8.2.23)
where the matrices are: A (n×n); B (n×p); C (r×n); D (r×p).
For p = 1, B → b; for r = 1, C → cT.
Having a z-transfer function, the state space description can be obtained as for the continuous time systems, by using the same methods. Any canonical form from continuous time systems can also be obtained for discrete time systems with the same formulae, only considering the variable z instead of s.
For example, the polynomial
M(s) = bm·s^m + ... + b0, with s^m → z^m,
will become
M(z) = bm·z^m + ... + b1·z + b0.
By using the Z-transformation, the state equations (8.2.23) become,
z(X(z) − x 0 ) = AX(z) + BU(z) .


We recall that the Z-transform of a vector is the vector of the Z-transforms. So,
xk = [x1k, ..., xnk]T ⇒ X(z) = [X1(z), ..., Xn(z)]T
The z-form of the first state equation (8.2.23) is
X(z) = z(zI − A)^{-1}·x0 + (zI − A)^{-1}·B·U(z) (8.2.24)
where Φ(z) = z(zI − A)^{-1} is the z-transform of the so-called "transition matrix" Φ(k),
Φ(z) = z(zI − A)^{-1} = (I − z^{-1}A)^{-1} = Z{Φ(k)}. (8.2.25)
It can be proved that
Φ(k) = Z^{-1}{Φ(z)} = Z^{-1}{(I − z^{-1}A)^{-1}} = A^k (8.2.26)
Φ(k) = A^k, Φ(0) = I (8.2.27)
so that
X(z) = Φ(z)·x0 + (1/z)·Φ(z)·B·U(z), (8.2.28)
where we utilised the identity
(zI − A)^{-1} = (1/z)(I − z^{-1}A)^{-1} = (1/z)Φ(z).
The general time response with respect to the state vector is
xk = Φ(k)·x0 + Σ_{i=0}^{k−1} Φ(k − i − 1)·B·ui = A^k·x0 + Σ_{i=0}^{k−1} A^{k−i−1}·B·ui. (8.2.29)
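Relation (8.2.29) can be checked against the recursion (8.2.23) itself, since both must produce the same state trajectory; the matrices and the input string below are hypothetical test data:

```python
import numpy as np

def state_response(A, B, x0, u):
    """State response (8.2.29): x_k = A^k x0 + sum_{i=0}^{k-1} A^{k-i-1} B u_i."""
    xs = [np.asarray(x0, float)]
    for k in range(1, len(u) + 1):
        x = np.linalg.matrix_power(A, k) @ x0
        x = x + sum(np.linalg.matrix_power(A, k - i - 1) @ B * u[i]
                    for i in range(k))
        xs.append(x)
    return xs

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([1.0, 1.0])
x0 = np.array([1.0, -1.0])
u = [1.0, 0.5, -0.2]
xs = state_response(A, B, x0, u)

# cross-check with the recursion x_{k+1} = A x_k + B u_k of (8.2.23)
x, ok = x0.copy(), True
for k, uk in enumerate(u):
    x = A @ x + B * uk
    ok = ok and np.allclose(x, xs[k + 1])
```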

The Z-transform of the output is obtained simply by applying the Z-transform to the second equation from (8.2.23) and substituting X(z) from (8.2.28), obtaining
Y(z) = C·Φ(z)·x0 + [ (1/z)·C·Φ(z)·B + D ]·U(z) (8.2.30)
where the first term is the free component Yl(z), the second one is the forced component Yf(z), and
H(z) = (1/z)·C·Φ(z)·B + D (8.2.31)
is the so-called "Z-transfer matrix".
For single-input, single-output systems, the Z-transfer matrix is just the Z-transfer function.


9. SAMPLED DATA SYSTEMS.

Sampled data systems are systems in which discrete time systems and continuous time systems interact. We shall start analysing this type of systems by an example.

9.1. Computer Controlled Systems.


A process computer is a computer able to work in real time. It is a computer with analog or digital I/O interfaces, having a real time operating system able to perform data acquisition, computing and command in real time.
We can look at the process computer as at a black-box having some analogical (continuous time) terminals. The principle diagram of a computer controlled system is depicted in Fig. 9.1.1.
[Figure: block diagram of the closed loop. The process computer receives r(t) ∈ [0, 10] V on analog port 5 and y(t) ∈ [0, 10] V on analog port 6, converts them through two ANC blocks into rNk, yNk, runs the numerical control algorithm (NCA) producing wNk, and the NAC on analog port 10 delivers u(t) ∈ [0, 10] V to the controlled plant (actuator, technical plant, transducer).]
Figure no. 9.1.1.


The computer has some terminals, denoted (labelled, marked) by numbers, as analogical ports. For example, as depicted in Fig. 9.1.1., for this control system there are utilised two analogical input signals, on ports 5 and 6, and one analog output signal, on port 10.
To these ports are connected:
the desired input r(t), called also "set point";
the measured variable y(t), called also "feedback variable";
the command variable u(t), called also "manipulated variable".
The controlled plant contains an actuator, a technological plant, and a transducer.
The process computer has two analogical-numerical converters (ANC), called also "analog to digital converters", which convert the input variables r(t), y(t), which are time functions, into the strings of numbers rNk, yNk:
rNk = KAN·r(kT), r(kT) = r(t)|t=kT (9.1.1)
yNk = KAN·y(kT), y(kT) = y(t)|t=kT (9.1.2)
where KAN is the analogical-numerical conversion factor.


We put k, or kT, as argument for all the variables, considering that the data acquisition and all the numerical computing are very fast and performed just at the time moment t = kT.
The result of the numerical algorithm, denoted by wNk, is applied to the so-called numerical-analogical converter (NAC). The NAC will deliver a piecewise constant voltage:
u(t) = KNA·wNk, ∀t ∈ (kT, (k + 1)T], (9.1.3)
where KNA is the numerical-analogical conversion factor.


The structure described above and depicted in Fig. 9.1.1. constitutes a so-called "closed loop system".
In this structure, the information is represented in two ways:
by numbers, or strings of numbers, inside the numerical algorithm, and
by time functions for the controlled plant.
To obtain a good behaviour of the closed loop system we have to manage simultaneously numbers and time functions.
There are different ways by which the operating system physically manages this job, that means to control the output y(t) to be as close as possible to the set point r(t). The simplest one is the so-called simple cycle with a constant repeating time. All the aspects regarding this loop are included into (represented by) a subroutine or a task. Each task has a name or a label number. For example, in a simple cycle the logical structure of the software can be represented by the logical diagram depicted in Fig. 9.1.2.

Figure no. 9.1.2. (Logical diagram of the simple cycle: after system initialisation, some default variables are read; the loop then repeatedly executes other jobs, Task no. 8, and further jobs.)


For example, Task no. 8, regarding our computer controlled system, could be written as below (the bracketed statements are REAL-TIME BASIC, given as an example).
198
9. SAMPLED DATA SYSTEMS. 9.1. Computer Controlled Systems.

Task no. 8.
Initialise X1, X2; a1, a2, a3, a4; b1, b2; c1, c2   % only if required by a flag
Read R (Let R=IN(5))        % R = rNk = kAN·r(kT)
Read Y (Let Y=IN(6))        % Y = yNk = kAN·y(kT)
W = c1*X1 + c2*X2           % W = wNk = c1·x1k + c2·x2k
Write W (OUT(W,10))         % u(t) = kNA·W, ∀t ∈ (kT, (k+1)T]
E = R - Y                   % E = eNk = kAN·e(kT); e(t) = r(t) − y(t)
Z = X1                      % Z = x1k is kept as an additional variable so that x2k+1 can be computed
X1 = a1*Z + a2*X2 + b1*E    % x1k+1 = a1·x1k + a2·x2k + b1·eNk
X2 = a3*Z + a4*X2 + b2*E    % x2k+1 = a3·x1k + a4·x2k + b2·eNk
Return.
This corresponds, for the control algorithm, to the state equations:
xk+1 = A·xk + bN·eNk (9.1.4)
wNk = (cN)T·xk + dN·eNk, (dN = 0) (9.1.5)
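One pass of Task no. 8 can be sketched in Python (a minimal illustration of the state equations (9.1.4)-(9.1.5); the numerical values of a1..a4, b1, b2, c1, c2 are arbitrary here, not taken from the book, and the port I/O is replaced by plain function arguments):

```python
# Sketch of one sampling-period pass of the control task: the output W is
# written first from the old state (d^N = 0), then the state is updated.
def control_task(state, r_n, y_n, a, b, c):
    """state=[x1,x2]; r_n, y_n are the converted numbers r_k^N, y_k^N."""
    x1, x2 = state
    w_n = c[0] * x1 + c[1] * x2          # W = c1*X1 + c2*X2
    e_n = r_n - y_n                      # E = R - Y
    z = x1                               # keep x1_k before overwriting it
    x1 = a[0][0] * z + a[0][1] * x2 + b[0] * e_n
    x2 = a[1][0] * z + a[1][1] * x2 + b[1] * e_n
    return w_n, [x1, x2]

# run a few sampling periods with a constant set point and zero feedback
state = [0.0, 0.0]
outputs = []
for k in range(3):
    w, state = control_task(state, r_n=1.0, y_n=0.0,
                            a=((0.5, 0.1), (0.0, 0.4)),
                            b=(1.0, 0.5), c=(1.0, 1.0))
    outputs.append(w)
```

Note that, exactly as in the BASIC listing, the command sent at step k depends only on the state xk, i.e. on errors up to step k − 1.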
Usually in process computers there is only one ANC. Several analog signals are converted to numbers by using so-called analog multiplexers (MUX).
An analog-numerical converter, ANC, transforms an analog signal into a string of numbers represented on a number of bits. If the converter has p bits and the analog signal, say y, ranges from ymin to ymax, then
y ∈ [ymin, ymax), yN ∈ {0, 1, ..., 2^p − 1} ⇒ kAN = 2^p/(ymax − ymin) (9.1.6)
The physical structure of an ANC is a matter of hardware, well defined
and well known.
From the behavioural point of view, as an oriented object, in any ANC there are two types of phenomena:
- time conversion;
- amplitude conversion.
Time conversion expresses the conversion of a time function into a string of numbers. For example, from y(t) the string (sequence) yk = y(kT) is obtained. This is the so-called sampling process, with the sampling period T.
The variable yk of the string has the same dimension as y(t). If, for example, y(t) is a voltage which takes values inside some domain, then yk is also a voltage.
This sampling process is represented in a principle diagram by a symbol as in Fig. 9.1.3; we denote it SPS, "sampling physical symbol".
It is not a mathematical operator; it only indicates that a time function is converted into a string of numbers representing its values at the time instants t = kT, and nothing else.
Amplitude conversion equivalently expresses all the phenomena by which yk is converted into a number yNk, represented in the computer on a number of bits, also called the ANC number. It is expressed by the conversion factor kAN, whose dimension is [kAN] = 1/[y].
For example, if y is a voltage, [y] = volt ⇒ [kAN] = 1/volt.
The two phenomena allow us to represent an ANC, as a whole, in a principle diagram as in Fig. 9.1.4.
Figure no. 9.1.3. (SPS, sampling physical symbol: the time function y(t) is converted into the string of numbers y(kT) = yk.)
Figure no. 9.1.4. (ANC block diagram: y(t) → sampler → yk → kAN → yNk.)
We modelled the amplitude conversion just by a proportional factor kAN, which is also a linear mathematical operator.
In fact, this process is more complicated, because yk can take infinitely many values in a bounded domain, but yNk can take only a finite number of values: from 0 to 2^p − 1 for a p-bit converter. The steady-state diagram of the amplitude conversion is depicted in Fig. 9.1.5, considering p = 2.
Figure no. 9.1.5. (Static characteristic of the amplitude conversion for p = 2: the staircase yN versus y approximates the line kAN·y, taking the integer levels 0 … 2^p − 1 on [ymin, ymax); δN = kAN·y − yN.)
Figure no. 9.1.6. (Equivalent model of the amplitude conversion: the gain kAN followed by subtraction of the quantisation noise δN, yN = kAN·y − δN.)


Because yN is an integer and kAN·y is a real number, their difference δN = kAN·y − yN ∈ [0, 1) appears like a noise and is called the "amplitude conversion noise".
The "converter amplitude resolution", denoted by δy, can also be defined as the maximum analog interval that can be represented by the same number:
δy = 1/kAN = (ymax − ymin)/2^p (9.1.7)
If we want to represent the amplitude conversion noise, the conversion part of an ANC can be represented as in Fig. 9.1.6. When p is large enough (p = 12, for example) this noise can be neglected.
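As a numerical illustration of (9.1.6) and (9.1.7), the following sketch (ours, not the book's) models a p-bit amplitude conversion, under the simplifying assumptions ymin = 0 and truncation toward zero:

```python
import math

# Simplified p-bit amplitude conversion: y_N = floor(k_AN * y), with
# quantisation noise delta_N = k_AN*y - y_N in [0, 1).  Assumes y_min = 0.
def anc_convert(y, y_max, p):
    k_an = 2**p / y_max                      # conversion factor (9.1.6)
    y_n = min(math.floor(k_an * y), 2**p - 1)
    delta_n = k_an * y - y_n                 # amplitude conversion noise
    return y_n, delta_n

y_n, delta_n = anc_convert(2.6, y_max=10.0, p=2)   # k_AN = 0.4, k_AN*y = 1.04
resolution = 10.0 / 2**2                           # delta_y = (y_max - y_min)/2^p
```

With p = 2 the whole interval [0, 10) maps onto only four codes, so the resolution is 2.5 volts; with p = 12 it would shrink to about 2.4 millivolts, which is why the noise can then be neglected.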
The numerical-analog converter, NAC, converts a string of numbers wNk into a piecewise constant time function u(t), as defined in (9.1.3).
200
9. SAMPLED DATA SYSTEMS. 9.1. Computer Controlled Systems.

In a NAC there are also two phenomena:
- time conversion;
- amplitude conversion.
In any physical NAC, the number wNk is kept in a numerical memory, as a combination of bits, during a sampling period. It can be interpreted as a time function wN(t) as in (9.1.3).
This process of memorisation is also called the "holding process" over a sampling period.
The command of memorisation is issued only at the time instants t = kT, so the holding process is connected with a sampling process.
Together they form the so-called "sample-hold process" and express the time conversion. The sample-hold process is represented, in principle diagrams, by a "sample-hold physical symbol", SHPS, as depicted in Fig. 9.1.7.
It is not a mathematical operator; it only illustrates that a string of numbers wk, sampled at the time instants t = kT, is converted into a piecewise constant time function w(t):
w(t) = wk, ∀t ∈ (kT, (k + 1)T] (9.1.8)
In a physical NAC, the input of the SHPS is the string of numbers wNk and the output of the SHPS is a time function wN(t) whose values are numbers.
The bit combination of wN(t) is converted into a voltage or a current by a system of current keys, performing in such a way the amplitude conversion, the result being u(t).
This interpretation of the phenomena allows us to represent a NAC, in a principle diagram, by a block diagram as depicted in Fig. 9.1.8.
However, we can equivalently consider, even if physically it may not happen like that but the results are the same, that the numbers wNk are first converted into a string of physical signals wk, currents or voltages, which are then kept constant during a sampling period.
Now the input to the SHPS is wk and the output is u(t). This allows us to represent a NAC by a principle diagram as depicted in Fig. 9.1.9.
Both representations from Figs. 9.1.8 and 9.1.9 express the same input-output behaviour. The last one has the advantage that it allows us to include kNA (and kAN, also, for the ANC) into the numerical control algorithm and not into the analog part of the control system.
Figure no. 9.1.7. (SHPS, sample-hold physical symbol: string of numbers wk → piecewise constant time function w(t).)
Figure no. 9.1.8. (NAC, physical principle diagram: wNk → SHPS → wN(t) → kNA → u(t).)
Figure no. 9.1.9. (NAC, equivalent principle diagram: wNk → kNA → wk → SHPS → u(t).)


201
9. SAMPLED DATA SYSTEMS. 9.1. Computer Controlled Systems.

We saw that physically, as in the computer program, first r(t) is acquired as R, then y(t) is acquired as Y, and after that E = R − Y is computed numerically.
To get a simpler representation, but with the same input-output behaviour, we can consider that the difference
e(t) = r(t) − y(t)
is first formed as a physical signal, and that e(t) is then sampled and converted into a number E, as if only one ANC were used.
This equivalence process is illustrated in Fig. 9.1.10.

Figure no. 9.1.10. (Input-output equivalence: two ANCs converting r(t) and y(t) into rNk (R) and yNk (Y), followed by the numerical difference E = R − Y, are equivalent to the analog difference e(t) = r(t) − y(t) followed by a single ANC producing eNk (E).)


All these allow us to represent the computer controlled system from Fig. 9.1.1 by a block diagram as in Fig. 9.1.11.
This diagram contains both physical symbols, for the ANC and NAC, and mathematical symbols in the form of block diagrams: HF(s), the summing operator, kAN, kNA.
So, Fig. 9.1.11 will now be interpreted as a principle diagram.

Figure no. 9.1.11. (Principle diagram of the computer controlled system: set point r(t) and feedback variable y(t) form e(t); ANC: e(kT) → kAN → eNk; numerical control algorithm: xk+1 = A·xk + bN·eNk, wNk = CN·xk + dN·eNk; NAC: wNk → kNA → u(t); controlled plant (fixed part of the system): HF(s) → controlled variable y(t).)


Both factors kAN, kNA can be included into the control algorithm, considering
eNk = kAN·ek ⇒ b = bN·kAN (9.1.9)
wk = kNA·wNk ⇒ wk = (kNA·CN)·xk + (kNA·dN·kAN)·ek ⇒
C = kNA·CN; d = kNA·dN·kAN, (9.1.10)
kAN·kNA = 1 if p = q and ymax − ymin = umax − umin,
so we can represent the computer controlled system as a sampled data system as in Fig. 9.1.12.
This is a standard representation of such systems.
202
9. SAMPLED DATA SYSTEMS. 9.1. Computer Controlled Systems.

Figure no. 9.1.12. (Standard sampled data structure: set point r(t) → summing junction with feedback variable y(t) → e(t) → sampler symbol → ek → discrete time system xk+1 = A·xk + b·ek, wk = C·xk + d·ek → sample-hold symbol → u(t) → continuous time system HF(s) → controlled variable y(t).)


We denote by HF(s) the transfer function of the so-called "fixed part of the system". We suppose that it is linear.
Such a system is called a "sampled data system" (SDS). It works with strings of numbers and time functions.
The SDS feature appears because a physical plant (a continuous time system) is combined with a computer (a discrete time system).
There are many other examples where sampled data systems appear: radar, chemical plants, economic models, and so on.


9.2. Mathematical Model of the Sampling Process.

9.2.1. Time-Domain Description of the Sampling Process.


We saw that for the physical sampler the input is a time function and the output is a string of numbers.
For continuous time systems the Laplace transform, Y(s) = L{y(t)}, is a very useful mathematical instrument. The Laplace transform has no physical meaning; it is just in our mind. We cannot compute L{yk} because it does not exist.
To be able to manage both the information expressed by a time function and the information expressed by a string of numbers, the so-called "sampled signal" was invented, defined as a string of Dirac impulses,
y*(t) = Σ_{k=0}^{∞} yk·δ(t − kT), yk = y(kT+) (9.2.1)
where y*(t) is understood in the sense of distribution theory, and
δ(t) = 0 for t ≠ 0, δ(t) = ∞ for t = 0, but ∫_{−∞}^{∞} δ(t)dt = 1. (9.2.2)

We can plot the sampled signal as a string of arrows whose lengths are just the areas of the corresponding Dirac impulses, as in Fig. 9.2.1.
Figure no. 9.2.1. (The sampled signal y*(t) drawn as arrows at t = 0, T, 2T, … under the graph of y(t); by the selection property, ∫_{kT−aT}^{kT+aT} y(t)·δ(t − kT)dt = y(kT+), 0 < a < 1.)
This signal y*(t) has a Laplace transform Y*(s), but it contains information only about the values yk = y(kT+).
This process is expressed by an operator called the "sampling operator", denoted by the symbol { }*. For example, we write
{y(t)}* = y*(t),
where y*(t) is defined as above.
SAMPLING OPERATOR - MATHEMATICAL MODEL
Figure no. 9.2.2. (The sampling operator, with sampling period T, applied to a time function, y*(t) = {y(t)}*, and, equivalently, to a string of numbers, y*(t) = {yk}*.)



In a block diagram this mathematical operator is represented by the symbol depicted in Fig. 9.2.2, where T is "the sampling period".
The input of this operator is the time function y(t), whose Laplace transform is Y(s), and the output is the sampled signal defined above, which has a Laplace transform
Y*(s) = L{y*(t)} (9.2.3)
The sampling operator can also be applied to a pure string of numbers, but in such a case it is supposed that each number of the string is related to an equally spaced time moment: yk to the moment kT.

{yk}*_{k≥0} = Σ_{k=0}^{∞} yk·δ(t − kT) = {ycov(t)}*, where ycov(t)|t=kT = yk (9.2.4)

9.2.2. Complex Domain Description of the Sampling Process.


We can approach the sampling process in the complex domain by using the Laplace transform, which can also be related to continuous time systems.
The following terms are used:
T - sampling period;
fs = 1/T - sampling frequency;
ωs = 2πfs - sampling angular frequency.
The Laplace transform of a sampled signal is

Y*(s) = L{y*(t)} = Σ_{k=0}^{∞} yk·e^{−kTs} (9.2.5)
since
L{δ(t − kT)} = e^{−kTs} (9.2.6)


Denoting
z = e^{Ts}; s = (1/T)·ln z, (9.2.7)
we observe from (9.2.5) that the Z-transform of a signal is just the Laplace transform of the sampled signal, obtained by replacing e^{Ts} by z, i.e. s = (1/T)·ln z:
Y*(s)|_{s=(1/T)·ln z} = Σ_{k=0}^{∞} yk·z^{−k} = Z{yk} = Z{y(t)} = Y(z) (9.2.8)
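Relation (9.2.8) is easy to check numerically (our own sketch, not from the book): for y(t) = e^{−at} the samples are yk = e^{−akT}, the series Σ yk·z^{−k} is geometric, and it sums to the closed form z/(z − e^{−aT}).

```python
import cmath

# Truncated series (9.2.8) for y_k = e^{-a k T}, evaluated at z = e^{sT},
# compared with the closed-form Z-transform z / (z - e^{-aT}).
a, T = 1.0, 0.1
z = cmath.exp(complex(0.5, 1.0) * T)     # z = e^{sT}, Re(s) > -a for convergence

series = sum(cmath.exp(-a * k * T) * z**(-k) for k in range(2000))
closed_form = z / (z - cmath.exp(-a * T))
error = abs(series - closed_form)
```

The truncated sum agrees with the closed form to machine precision, because the ratio of consecutive terms, e^{−aT}/|z|, is well below one here.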

To express Y*(s) by a complex integral, the selection property of Dirac


distribution can be utilised

y*(t) = y(t)·Σ_{k=0}^{∞} δ(t − kT) = y(t)·p(t); (9.2.9)
p(t) = Σ_{k=0}^{∞} δ(t − kT) (9.2.10)
For t = kT, δ(t − kT) is not defined as a function, but its integral (its area) is, and it admits a Laplace transform. Also p(t) is a generalised function that admits a Laplace transform.

Because y*(t) from (9.2.9) is a product of two time functions (p(t) being a generalised one) which admit Laplace transforms, we can use the complex convolution theorem:
Y*(s) = L{y*(t)} = L{y(t)·p(t)} = (1/2πj)·∫_{c−j∞}^{c+j∞} Y(ξ)·P(s − ξ)dξ (9.2.11)
If σ1 is the convergence abscissa of y(t), σ2 is the convergence abscissa of p(t), and c is chosen so that
σ1 < c < σ − σ2, σ > max(σ1, σ2, σ1 + σ2), (9.2.12)
then the vertical line separates the singular points (poles) of Y(ξ) from those of P(s − ξ) in the ξ-complex plane, as in Fig. 9.2.3.
Figure no. 9.2.3. (The ξ-plane: the integration line from c − j∞ to c + j∞ separates the poles of Y(ξ), on its left, from the poles of P(s − ξ), located at ξn = s + jnωs, on its right; the contour Γ1 closes the line through the left half-plane and Γ2 through the right half-plane, both with R → ∞.)

∞  ∞ −kTs
P(s) = L Σ (t − kT)  = Σ e = 1 (9.2.13)
 k=0  k=0 1 − e−Ts
P(s − ξ) = 1 ; Re(s) > 0 , Re(s) > Re(ξ)
1 − e−T(s−ξ)
We can say that,
c+j∞

Y∗ (s) = 1 ∫ Y(ξ) 1 dξ (9.2.14)


2πj c−j∞ 1 − e eTξ
−Ts

If we complete the vertical line with a contour Γ1 enclosing the whole left half-plane, and if the above integral is zero on Γ1, then the integral is taken over a closed contour which contains all the poles of Y(ξ), so the residue theorem can be applied.


∫_{c−j∞}^{c+j∞} + ∫_{Γ1} = ∮ (closed contour integral) ⇒ Y*(s) = Σ_{poles of Y(ξ)} Rez[ Y(ξ)·1/(1 − e^{−Ts}·e^{Tξ}) ]
Because the Z-transform is just the result of the change of variable in Y*(s):
Y(z) = Y*(s)|_{s=(1/T)·ln z} = Z{y(t)} = Σ_{poles of Y(ξ)} Rez[ Y(ξ)·1/(1 − z^{−1}·e^{Tξ}) ]. (9.2.15)
If instead we complete the vertical line c − j∞, c + j∞ with a contour Γ2 enclosing the whole right half-plane, the above integral can again be computed on a closed contour, by the residue theorem. Inside this closed contour lie only the poles of P(s − ξ).
These poles are
ξn = s + jnωs, where ωs = 2π/T = 2πfs. (9.2.16)
By using the residue theorem the following formula is obtained:
Y*(s) = (1/T)·Σ_{n=−∞}^{+∞} Y(s + jnωs). (9.2.17)
Two properties of Y*(s) can be mentioned:
1) Y*(s) is a periodic function with period jωs, that is,
Y*(s) = Y*(s + jωs) = Y*(s + jnωs), ∀s, ∀n ∈ Z; (9.2.18)
2) if Y*(s) has a pole sk, then it also has the poles sk + jnωs, ∀n ∈ Z.
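Property 1 is easy to verify numerically (our own sketch): evaluating the series (9.2.5), truncated for a decaying yk, at s and at s + jωs gives the same value, since e^{−kT·jωs} = e^{−j2πk} = 1.

```python
import cmath, math

# Y*(s) via the truncated series (9.2.5) for y_k = e^{-a k T}; by (9.2.18)
# the result is unchanged when s is shifted by j*omega_s = j*2*pi/T.
a, T = 1.0, 0.2
omega_s = 2 * math.pi / T

def y_star(s, n_terms=500):
    return sum(math.exp(-a * k * T) * cmath.exp(-k * T * s)
               for k in range(n_terms))

s0 = complex(0.3, 1.7)
shift_error = abs(y_star(s0) - y_star(s0 + 1j * omega_s))
```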

9.2.3. Shannon Sampling Theorem.


Let y(t) be a time function from which, by sampling, the string of numbers {yk} = {y(kT)} has been obtained.
The problem is: what condition has to be fulfilled so that the continuous signal y(t) can be reconstructed using only the sampled values yk? This can be presented equivalently in the frequency domain.
The continuous time signal y(t) is uniquely expressed by the complex characteristic
Y(jω): Ay(ω) = |Y(jω)|, ϕy(ω) = arg(Y(jω)). (9.2.19)
The string of numbers yk is uniquely expressed by the discrete complex characteristics
Y(z)|_{z=exp(jωT)} = Y*(jω), (9.2.20)
Ay*(ω) = |Y*(jω)|, (9.2.21)
ϕy*(ω) = arg(Y*(jω)). (9.2.22)

The reconstruction problem in the frequency domain can be presented as: given Ay*(ω), ϕy*(ω), what conditions have to be fulfilled so that Ay(ω), ϕy(ω) can be exactly recovered?
To analyse this, the formula (9.2.17) for the Laplace transform of the sampled signal is utilised,
Y*(s) = (1/T)·Σ_{n=−∞}^{+∞} Y(s + jnωs), where ωs = 2π/T. (9.2.23)
Supposing that
Y(jω) = 0 for |ω| > ωc, ωc < ωs/2, (9.2.24)
then
Y*(jω) = (1/T)·Σ_{n=−∞}^{+∞} Y(jω + jnωs). (9.2.25)
The sampled signal contains an infinity of frequency bands, as represented in Fig. 9.2.4.
Figure no. 9.2.4. (Top: the band-limited spectrum |Y(jω)|, zero outside [−ωc, ωc]. Bottom: the periodic spectrum |Y*(jω)| = (1/T)·Σ|Y(jω + jnωs)|, with replicas centred at multiples of ωs, and the ideal low-pass filter that recovers |Y(jω)| when the replicas do not overlap.)


If
ωs > 2ωc, i.e. T < π/ωc, (9.2.26)
then by using an ideal low-pass filter it is possible to obtain the amplitude-frequency characteristic of the continuous signal from the amplitude-frequency characteristic of the sampled signal. This is the so-called Shannon theorem:
A continuous time signal y(t) with the frequency characteristic bounded at ωc,
Y(jω) = 0, |ω| > ωc, (9.2.27)
can be reconstructed from its sampled signal yk if T < π/ωc.
This holds just in theory, because in practice an ideal low-pass filter does not exist; such a filter is not a realisable system. It is theoretical also because few signals have a bounded spectrum of frequencies.
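The role of condition (9.2.26) can be illustrated numerically (our own sketch): two sinusoids whose frequencies differ by exactly fs = 1/T produce identical sample strings, so a component above the Shannon limit is indistinguishable, from its samples, from its low-frequency alias.

```python
import math

# Aliasing: sin(2*pi*f1*t) and sin(2*pi*(f1+fs)*t) coincide at t = k*T = k/fs,
# because the extra phase 2*pi*fs*k*T = 2*pi*k is a whole number of turns.
fs = 10.0            # sampling frequency, T = 1/fs
f1 = 3.0             # inside the Shannon band (f1 < fs/2)
f2 = f1 + fs         # alias of f1, outside the band

T = 1.0 / fs
samples1 = [math.sin(2 * math.pi * f1 * k * T) for k in range(50)]
samples2 = [math.sin(2 * math.pi * f2 * k * T) for k in range(50)]
max_diff = max(abs(a - b) for a, b in zip(samples1, samples2))
```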


9.3. Sampled Data Systems Modelling.

9.3.1. Continuous Time Systems Response to Sampled Input Signals.

Following the mathematical modelling of physical systems, a structure like the one drawn in Fig. 9.3.1 can appear.

Figure no. 9.3.1. (A continuous time system H(s) driven by a sampled signal: u(t), U(s) → sampler T → u*(t) = Σ_{k=0}^{∞} u(kT)·δ(t − kT), U*(s) → H(s) → y(t), Y(s); a second sampler T at the output produces y*(t), Y*(s).)


This is a continuous time system described by a transfer function H(s), whose output signal is y(t), but whose input is a sampled signal u*(t).
The Laplace transform of the output is Y(s). Sometimes we may denote
U*(s) = {U(s)}* = L{u*(t)}, (9.3.1)
meaning that U*(s) is the Laplace transform of the sampled signal u*(t). Then,
Y(s) = H(s)·U*(s). (9.3.2)
We could compute the time response
y(t) = L^{−1}{H(s)·U*(s)}, (9.3.3)
but it is very difficult because usually U*(s) contains terms of the form e^{−Ts}.
Example: u(t) = 1(t) ⇒ U*(s) = 1/(1 − e^{−Ts}). (9.3.4)
Because of that, we content ourselves with computing only the values of the response at the time moments t = kT+, giving up the values of the response between the sampling moments kT.
This giving-up is expressed in the block diagram, which illustrates our computing intentions, by another ideal sampler connected at the continuous system output, whose output is
y*(t) = Σ_{k=0}^{∞} y(kT+)·δ(t − kT); Y*(s) = L{y*(t)} (9.3.5)
It can be proved that
Y*(s) = {H(s)·U*(s)}* = H*(s)·U*(s), (9.3.6)
where H*(s) is the so-called "sampled transfer function", which is the Laplace transform of the sampled weighting function h(t) = L^{−1}{H(s)}:
H*(s) = Σ_{k=0}^{∞} h(kT)·e^{−kTs} = Σ_{poles of H(ξ)} Rez[ H(ξ)·1/(1 − e^{−Ts}·e^{Tξ}) ] (9.3.7)
Y(z) = H(z)·U(z) (9.3.8)


 
 
because in {H(s)·U*(s)}* = H*(s)·U*(s) the left-hand side is the sampled output Y*(s), whose z-domain form is Y(z), while H*(s) and U*(s) correspond to H(z) and U(z).

Based on (9.3.6), an algebra regarding sampled data systems can be developed. Some rules can be specified as follows:
{αA(s) + βB(s)}* = αA*(s) + βB*(s) (9.3.9)
Z{αA(s) + βB(s)} = αA(z) + βB(z) (9.3.10)
{A(s)·B*(s)}* = A*(s)·B*(s) (9.3.11)
Z{A(s)·B*(s)} = A(z)·B(z) (9.3.12)
{A(s)·B(s)}* = AB*(s) ≠ A*(s)·B*(s) (9.3.13)
Z{A(s)·B(s)} = AB(z) ≠ A(z)·B(z) (9.3.14)
If C(s) is a periodic function with period jωs,
C(s) = C(s + jωs), ∀s ∈ C, (9.3.15)
then
{A(s)·C(s)}* = A*(s)·C(s) and Z{A(s)·C(s)} = A(z)·C(s)|_{s=(1/T)·ln z}, (9.3.16)
where
A(z) = Σ_{poles of A(ξ)} Rez[ A(ξ)·1/(1 − z^{−1}·e^{ξT}) ]. (9.3.17)
Example: C(s) = 1 − e^{−Ts} ⇒ C(s + jωs) = 1 − e^{−Ts}·e^{−j2π} = 1 − e^{−Ts} = C(s).
If we have a system with the transfer function H(s), whose discrete form is H(z), and an input U(s), whose discrete transform is U(z), in the structure depicted in Fig. 9.3.1, then the Z-transform of the output, Y(z), is:
Y(z) = H(z)·U(z) (9.3.18)
Applying the inverse Z-transformation, we get
Z^{−1}{H(z)·U(z)} = y(kT+) (9.3.19)
If the system response
y(t) = L^{−1}{H(s)·U*(s)} = L^{−1}{Y(s)} (9.3.20)
has discontinuities at the sampling times t = kT, then the inverse Z-transform of Y(z) is
Z^{−1}{Y(z)} = y(kT+) = lim_{t→kT, t>kT} y(t). (9.3.21)
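Relation (9.3.19) can be checked numerically (our own sketch): the string y(kT+) is the coefficient sequence of the power series of Y(z) = H(z)·U(z) in z^{−1}, i.e. the discrete convolution of the impulse response of H(z) with the input string. For the illustrative case H(z) = c/(z − λ) and a unit step input, the result must match the closed form (c/(1 − λ))·(1 − λ^k).

```python
# y_k = sum_i h_i * u_{k-i}: discrete convolution realising Z^{-1}{H(z)U(z)}.
# For H(z) = c/(z - lam) the impulse response is h_0 = 0, h_k = c*lam**(k-1).
lam, c = 0.6, 0.8
n = 20
h = [0.0] + [c * lam**(k - 1) for k in range(1, n)]
u = [1.0] * n                                    # unit step input string

y = [sum(h[i] * u[k - i] for i in range(k + 1)) for k in range(n)]
closed = [c / (1 - lam) * (1 - lam**k) for k in range(n)]
max_err = max(abs(p - q) for p, q in zip(y, closed))
```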


9.3.2. Sampler - Zero Order Holder (SH).


There are many devices that have at the input a continuous function u(t) and at the output a piecewise constant function ye(t), where
ye(t) = u(kT), ∀t ∈ (kT, (k + 1)T], (9.3.22)
as depicted in Fig. 9.3.2. An example is a numerical-analog converter NAC connected to a data bus, as discussed above. In such a NAC, u(t) has the meaning of the number binary represented on the data bus.
Figure no. 9.3.2. (The continuous input u(t) and the staircase output ye(t), constant on each interval (kT, (k + 1)T] at the value u(kT).)
We can see that the information contained in the output ye(t) represents only the input values at certain time moments kT, called sampling time moments, so a sampling process is involved.
This sampling process is expressed by its mathematical model: the ideal sampler, i.e. the sampling operator whose output is the sampled signal u*(t).
The process through which u*(t) is transformed into ye(t), a piecewise constant time function, is represented by an operator, a transfer function He0(s), whose expression will be determined as follows.
The sample-hold process is then represented by a block diagram, a mathematical operator, as in Fig. 9.3.3.
Figure no. 9.3.3. (Mathematical model of the sample-hold process: u(t) → ideal sampler → u*(t) → He0(s) → ye(t).)


Suppose that u(t) is a special function having u(0) = 1 and u(kT) = 0, ∀k ≥ 1. According to the definition (9.2.1),
u*(t) = δ(t) = u(0)·δ(t) + u(T)·δ(t − T) + u(2T)·δ(t − 2T) + … , with u(0) = 1, u(T) = u(2T) = … = 0.
For this input signal, according to (9.3.22), the output ye(t) graphically is
represented in Fig. 9.3.4.
Figure no. 9.3.4. (Top: the input u*(t) = δ(t), an impulse of area 1 at t = 0. Bottom: the corresponding output ye(t), equal to 1 on (0, T] and 0 elsewhere.)

This time function is the response of the system to the unit Dirac impulse, which means that it is the weighting function, and its Laplace transform is the transfer function:
L{he0(t)} = He0(s) = 1/s − e^{−Ts}/s ⇒ He0(s) = (1 − e^{−Ts})/s (9.3.23)
This is the transfer function of the zero order holder.

9.3.3. Continuous Time System Connected to a SH.


Suppose that the output of a zero order holder acts as the input to a continuous time system with the transfer function H(s), as in Fig. 9.3.5.
Because the mathematical model of the SH contains the transfer function He0(s), this and H(s) can be considered as a series connection equivalent to G(s), as depicted in Fig. 9.3.6, where
G(s) = He0(s)·H(s). (9.3.24)

Figure no. 9.3.5. (Principle diagram: u(t) → SHPS → ye0(t) → H(s) → y(t); the SHPS is not a mathematical model, so we cannot operate with it.)
Figure no. 9.3.6. (Mathematical model of a SH controlling a continuous system: u(t) → ideal sampler → u*(t) → He0(s) → ye0(t) → H(s) → y(t); the series connection He0(s)·H(s) is the transfer function G(s).)


Now the behaviour, at least at the sampling time moments, can be evaluated by using the methods of SDS:
Y(s) = G(s)·U*(s)
Y*(s) = {Y(s)}* = {G(s)·U*(s)}* = G*(s)·U*(s) (9.3.25)
G*(s) = {He0(s)·H(s)}* = { (1 − e^{−Ts})·H(s)/s }* = (1 − e^{−Ts})·{H(s)/s}* (9.3.26)
(the factor 1 − e^{−Ts} is a periodic function, so rule (9.3.16) applies),
G(z) = Z{G(s)} = (1 − z^{−1})·Z{H(s)/s} (9.3.27)
Y(z) = (1 − z^{−1})·Z{H(s)/s}·U(z) (9.3.28)

Application. Let us suppose that U(z) = z/(z − 1) (we have applied a unit step function) and H(s) = b/(s + a). By using the above relations we can easily determine the response of this system:
Z{H(s)/s} = Z{b/(s(s + a))} = Σ_{poles of b/(ξ(ξ+a))} Rez[ b/(ξ(ξ + a))·1/(1 − z^{−1}·e^{Tξ}) ] = (b/a)·1/(1 − z^{−1}) − (b/a)·1/(1 − z^{−1}·e^{−aT})
If we note λ = e^{−aT}, c = (b/a)·(1 − λ), then:
Z{H(s)/s} = (b/a)·[ z/(z − 1) − z/(z − λ) ] = (b/a)·(1 − λ)·z/((z − 1)(z − λ))
Y(z) = ((z − 1)/z)·(b/a)·(1 − λ)·z/((z − 1)(z − λ))·U(z) ⇒ Y(z) = c/(z − λ)·U(z) = G(z)·U(z).
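The result G(z) = c/(z − λ) can be sanity-checked by simulation (our own sketch): the recursion yk+1 = λ·yk + c·uk implied by G(z) must reproduce the exact zero-order-hold step response of H(s) = b/(s + a), namely y(kT) = (b/a)·(1 − e^{−akT}).

```python
import math

# Step response of G(z) = c/(z - lam) via its difference equation
# y_{k+1} = lam*y_k + c*u_k, compared with the exact continuous response
# y(t) = (b/a)*(1 - e^{-a t}) of H(s) = b/(s+a) to a unit step, at t = kT.
a, b, T = 2.0, 3.0, 0.1
lam = math.exp(-a * T)
c = (b / a) * (1 - lam)

y, discrete = 0.0, []
for k in range(30):
    discrete.append(y)
    y = lam * y + c * 1.0                # unit step input u_k = 1

exact = [(b / a) * (1 - math.exp(-a * k * T)) for k in range(30)]
max_err = max(abs(p - q) for p, q in zip(discrete, exact))
```

The match is exact (up to rounding): for a ZOH input, the discretisation introduces no approximation at the sampling instants.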

9.3.4. Mathematical Model of a Computer Controlled System.


Let us come back to Ch. 9.1, "Computer Controlled Systems", where one result of the analysis was presented in Fig. 9.1.11.
The numerical control algorithm, NCA, implemented in the computer, can be represented by a z-transfer function D(z), if the NCA is LTI:
D(z) = WN(z)/EN(z) (9.3.29)
D(z) = z^{−1}·CN·(I − z^{−1}·A)^{−1}·bN + dN (9.3.30)
The block diagram from Fig. 9.1.11, with this notation, is presented in Fig. 9.3.7.

Figure no. 9.3.7. (Block diagram: r(t) → summing junction with y(t) → e(t) → sampler e(kT) → ek → kAN → eNk → D(z) → wNk → kNA → wk → u(t) → HF(s) → y(t).)


For example, if the computer program of the NCA is:
Read R, Y        % W, α, β have to be initialised.
E = R - Y
Write W
W = αW + βE      % wNk+1 = α·wNk + β·eNk
then we have
D(z) = β/(z − α),
but if we have the program:
Read R, Y        % W, α, β have to be initialised.
E = R - Y
W = αW + βE      % wNk = α·wNk−1 + β·eNk
Write W
then the NCA has the transfer function
D(z) = βz/(z − α).
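The one-step difference between the two programs is easy to see by simulation (our own Python sketch, with illustrative values α = 0.5, β = 2): for an impulse input eN0 = 1, the first program (write before update) outputs 0, β, βα, βα², …, matching D(z) = β/(z − α), while the second outputs β, βα, βα², …, matching D(z) = βz/(z − α).

```python
# Impulse responses of the two NCA program variants.
alpha, beta = 0.5, 2.0
e = [1.0, 0.0, 0.0, 0.0]        # impulse input string e_k^N

w1, out1 = 0.0, []              # program 1: Write W, then W = alpha*W + beta*E
for ek in e:
    out1.append(w1)
    w1 = alpha * w1 + beta * ek

w2, out2 = 0.0, []              # program 2: W = alpha*W + beta*E, then Write W
for ek in e:
    w2 = alpha * w2 + beta * ek
    out2.append(w2)
```

The placement of a single Write statement thus changes the transfer function by one factor of z, i.e. one sampling period of delay.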
The two factors kAN, kNA of the ANC and NAC can be grouped with the NCA transfer function, denoting
HR(z) = kAN·kNA·D(z) (9.3.31)

where, considering E(z) = Z{ek}, W(z) = Z{wk},
HR(z) = W(z)/E(z) (9.3.32)
is the z-transfer function of the so-called "Discrete Time Controller", DTC for short.
With this, Fig. 9.3.7 will be equivalently represented as in Fig. 9.3.8.

Figure no. 9.3.8. (Equivalent block diagram: r(t) → summing junction with y(t) → e(t) → sampler e(kT) → ek → HR(z) → wk → u(t) → HF(s) → y(t).)


Of course, HR(z) comes from (9.3.31), having a very clear physical meaning, but, equivalently, we can consider that it comes from an s-transfer function HR(s) whose Z-transform is just HR(z),
HR(z) = Z{HR(s)} = HR*(s)|_{s=(1/T)·ln z} (9.3.33)
where
HR*(s) = W*(s)/E*(s) = L{w*(t)}/L{e*(t)} (9.3.34)
and e*(t), w*(t) are the sampled signals associated with the strings ek, wk, respectively.
With this in mind, and taking into account the mathematical model of the sampler - zero order holder couple as depicted in Fig. 9.3.3, we can represent the computer controlled system by a block diagram in the sense of system theory, a block diagram which expresses a mathematical model, as in Fig. 9.3.9.
Figure no. 9.3.9. (Mathematical block diagram: r(t) → summing junction with y(t) → e(t) → sampler T → e*(t) → HR(s) → w(t) → sampler T → w*(t) → He0(s) = (1 − e^{−Ts})/s → u(t) → HF(s) → y(t); the first sampler and HR(s) form HR*(s), and He0(s)·HF(s) forms G(s).)


Because now we are working with a mathematical model, we can denote by G(s) the result of the series connection of the two transfer functions:
G(s) = He0(s)·HF(s) = ((1 − e^{−Ts})/s)·HF(s) = (1 − e^{−Ts})·[HF(s)/s] (9.3.35)
With this notation, the block diagram from Fig. 9.3.9 is represented in Fig. 9.3.10, which is the standard block diagram, in the sense of system theory, of the computer controlled system.


Figure no. 9.3.10. (Standard block diagram: r(t) → summing junction with y(t) → e(t) → sampler T → e*(t) → HR(s) → w(t) → sampler T → w*(t) → G(s) → y(t).)


Observations:
1. This is one example showing how, starting from physical phenomena, we can obtain continuous time systems, described by s-transfer functions, which have sampled signals at the input.
2. Now it is very easy to manipulate the structure from Fig. 9.3.10 in order to obtain anything we are interested in regarding the system behaviour.

9.3.5. Complex Domain Description of Sampled Data Systems.


The mathematical model of a sampled data system can easily be determined in the complex domains ("s" or "z") using the sampled data systems algebra presented in Ch. 9.3.1.
We can present, as an example, the computer controlled system behaviour starting from the block diagram of Fig. 9.3.10. The main goal is to express the output y(t), represented in the complex domains by Y(s), Y*(s) or Y(z), as a function of the complex representations of the input.
Here the sampled data systems algebra is applied step by step, but there are stronger techniques.
From the block diagram we can write:
Y(s) = G(s)·W*(s) (9.3.36)
W(s) = HR(s)·E*(s) (9.3.37)
E(s) = R(s) − Y(s) (9.3.38)
In the above relations appear both W(s) and W*(s), respectively Y(s) and Y*(s). New relations are obtained by applying the sampling rules to the three relations above:
E*(s) = [E(s)]* = [R(s) − Y(s)]* = R*(s) − Y*(s)
Y*(s) = [G(s)·W*(s)]* = G*(s)·W*(s)
W*(s) = [HR(s)·E*(s)]* = HR*(s)·E*(s)
W* = HR*·(R* − G*·W*)
W*(s) = HR*(s)/(1 + HR*(s)·G*(s))·R*(s) (9.3.39)
Y(s) = G(s)·HR*(s)/(1 + HR*(s)·G*(s))·R*(s) (9.3.40)
Because the controlled plant is a continuous system, y(t) exists for any t and, as a consequence, the Laplace transform Y(s) exists. We observe that Y(s) depends on R*(s), which means that y(t) depends on the values rk = r(kT), as we expected from the physical behaviour.

If we are able, we can compute the inverse Laplace transform of Y(s) from (9.3.40), if r(t), and as a result R*(s), is given. Unfortunately, the direct computation is a very difficult task.
By using some special methods, as for example
"the modified Z-transform",
"the time-domain approach of discrete systems",
it is possible to compute y(t) for any t ∈ R.
If we want to compute the values of y(t) only for t = kT, it will be enough to compute Y*(s) and Y(z). We can get Y*(s) by applying to (9.3.40) the sampling rules from Ch. 9.3.1:
Y(s) = G(s)·[HR*(s)/(1 + HR*(s)·G*(s))]·R*(s) ⇒
Y*(s) = G*(s)·HR*(s)/(1 + G*(s)·HR*(s))·R*(s) (9.3.41)
where
G*(s) = (1 − e^{−Ts})·{HF(s)/s}*
The z-transform Y(z) can be simply obtained from (9.3.41) by the substitution s = (1/T)·ln z, getting
Y(z) = HR(z)·G(z)/(1 + HR(z)·G(z))·R(z) (9.3.42)
where
HR(z) = kAN·kNA·D(z)
G(z) = (1 − z^{−1})·Z{HF(s)/s}
We observe from (9.3.42) that, for this structure of a computer controlled system, a closed loop z-transfer function Hv(z) can be determined, as the ratio between the z-transform of the output and the z-transform of the input:
Hv(z) = Y(z)/R(z) = HR(z)·G(z)/(1 + HR(z)·G(z)) (9.3.43)
This relation can be expressed by a block diagram in the z-complex plane as in Fig. 9.3.11.
Figure no. 9.3.11. (z-plane block diagram: R(z) (rk) → summing junction with Y(z) (yk) → E(z) (ek) → HR(z) → W(z) (wk) → G(z) → Y(z).)


The similarity can be observed between the relations and block diagrams for discrete time systems in the z-plane (for which we are able to study the behaviour at the sampling time moments only) and those of continuous time systems in the s-plane.
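The closed loop (9.3.43) can also be simulated directly in the time domain (our own sketch, using the illustrative plant G(z) = c/(z − λ) from the Application of Ch. 9.3.3 and a simple proportional controller HR(z) = kp, which is our choice, not the book's): at each step the error ek = rk − yk drives the controller, and the plant recursion produces the next output.

```python
import math

# Discrete closed loop: e_k = r_k - y_k, w_k = kp*e_k, y_{k+1} = lam*y_k + c*w_k.
a, b, T, kp = 2.0, 3.0, 0.1, 1.5
lam = math.exp(-a * T)
c = (b / a) * (1 - lam)

y, ys = 0.0, []
for k in range(200):
    e = 1.0 - y                  # unit step set point r_k = 1
    w = kp * e                   # proportional discrete time controller
    y = lam * y + c * w          # plant G(z) = c/(z - lam)
    ys.append(y)

# steady state predicted by (9.3.43): y_inf = kp*G(1)/(1 + kp*G(1))
g1 = c / (1 - lam)
y_inf = kp * g1 / (1 + kp * g1)
```

After the transient, the simulated output settles at the value predicted by evaluating Hv(z) at z = 1, illustrating the final-value behaviour of the closed loop.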


10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS.

10.1. Frequency Characteristics Definition.


Suppose we have a pure discrete time system with a z-transfer function H(z), as in Fig. 10.1.1.
If a sinusoidal string of numbers uk of the form (10.1.1) is applied to the input of the system, then, after a transient period, the response of the system is also a string of numbers yk of sinusoidal type, but with another amplitude and another phase, of the form (10.1.2), as represented in Fig. 10.1.2:
uk = Um·sin(ωkT) (10.1.1)
yk = Ym·sin(ωkT + ϕ) (10.1.2)
Figure no. 10.1.1. (A pure discrete time system H(z) with input string {uk}k≥0 and output string {yk}k≥0.)
Figure no. 10.1.2. (The strings uk = Um·sin(ωkT), samples of u(t) = Um·sin(ωt), and yk = Ym·sin(ωkT + ϕ); the output is shifted by λ = ϕ/(ωT) steps of the variable k.)
In (10.1.1) we considered the string uk as coming from the time function
u(t) = Um·sin(ωt), (10.1.3)
with uk = u(kT), (10.1.4)
but it could be just a string, with nothing to do with a time variable,
uk = Um·sin(ak). (10.1.5)
Of course, we can make the equivalence a = ωT.
Note: The amplitude values Um, Ym are the maximum values of the sinusoidal functions u(t), y(t), but not the maximum values of the strings uk, yk.
For given ω = 2πf, or a, and T = 1/fs, we can measure Um, Ym and λ.
Then we can compute,

A(ω) = Y_m / U_m   (10.1.6)
ϕ(ω) = λ·(ωT) or ϕ(ω) = λ·a   (10.1.7)
If we repeat the experiment for different values of ω (or a), we can plot the curves A(ω), ϕ(ω) with respect to ω. These are the so-called "magnitude frequency characteristic" and "phase frequency characteristic", respectively, of the pure discrete time system.
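The measurement procedure described above can be sketched numerically. The recursion below is a hypothetical first-order discrete system (an assumed example, not one from the text), chosen only to illustrate that, after the transient, the measured amplitude ratio Y_m/U_m matches |H(e^{jωT})|:

```python
import cmath, math

# Assumed example system: y_k = 0.5*y_{k-1} + 0.5*u_k, i.e. H(z) = 0.5 z / (z - 0.5)
def simulate(w, n):
    """Feed u_k = sin(w*k) (Um = 1, a = omega*T = w) and return the output string."""
    y, out = 0.0, []
    for k in range(n):
        u = math.sin(w * k)
        y = 0.5 * y + 0.5 * u
        out.append(y)
    return out

w = 2 * math.pi / 20            # 20 samples per period
y = simulate(w, 400)[200:]      # discard the transient part of the response
# Estimate Ym by projecting on sin/cos over whole periods (len(y) multiple of 20)
M = len(y)
a = 2 / M * sum(yk * math.sin(w * (200 + k)) for k, yk in enumerate(y))
b = 2 / M * sum(yk * math.cos(w * (200 + k)) for k, yk in enumerate(y))
Ym = math.hypot(a, b)           # measured amplitude of the permanent response
A = abs(0.5 * cmath.exp(1j * w) / (cmath.exp(1j * w) - 0.5))  # |H(e^{jwT})|
print(round(Ym, 4), round(A, 4))
```

With U_m = 1 the measured amplitude equals A(ω) directly; the projection trick recovers Y_m exactly when the averaging window spans whole periods.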

10.2. Relations Between Frequency Characteristics and Attributes of Z-Transfer Functions.
10.2.1. Frequency Characteristics of LTI Discrete Time Systems.
To reveal these important relations, we shall compute the permanent (steady-state) response of the system H(z) to the input u_k from (10.1.1).
We can compute U(z) by using relation (8.1.7),
U(z) = Z{u_k} = Z{U_m sin(ωkT)} = (U_m/2j)·(Z{e^{jωkT}} − Z{e^{−jωkT}}) = (U_m/2j)·[ z/(z − e^{jωT}) − z/(z − e^{−jωT}) ] ⇒
Z{U_m sin(ωkT)} = U_m sin(ωT)·z / [(z − e^{jωT})(z − e^{−jωT})]   (10.2.1)
The permanent response, y_k^p, of any system with the z-transfer function H(z) is determined by the residues at the poles of the input z-transform U(z). In our case there are two poles: z_1 = e^{jωT} and z_2 = e^{−jωT}.
y_k^p = Σ_{poles z_1, z_2} Rez[ H(z)·U_m sin(ωT)·z / ((z − e^{jωT})(z − e^{−jωT}))·z^{k−1} ]   (10.2.2)
y_k^p = U_m·[sin(ωT)/(e^{−jωT} − e^{jωT})]·[ e^{−jωTk} H(e^{−jωT}) − e^{jωTk} H(e^{jωT}) ]
      = U_m·[sin(ωT)/(−2j sin(ωT))]·A(ω)·[ e^{−j(ωkT+ϕ)} − e^{j(ωkT+ϕ)} ]
where we have
H(e^{jωT}) = H(z)|_{z=e^{jωT}}   (10.2.3)
that can be expressed as
H(e^{jωT}) = A(ω)·e^{jϕ(ω)}   (10.2.4)
where it has been denoted
A(ω) = |H(e^{jωT})|   (10.2.5)
ϕ(ω) = arg(H(e^{jωT}))   (10.2.6)
Because H(z) has real coefficients,
H(e^{−jωT}) = A(ω)·e^{−jϕ(ω)}.

Finally, the permanent response is
y_k^p = U_m A(ω) sin(ωkT + ϕ(ω))   (10.2.7)
The amplitude-frequency characteristic is
Y_m/U_m = A(ω) = |H(e^{jωT})| = |H(z)|_{z=e^{jωT}}   (10.2.8)
and the phase-frequency characteristic is
ϕ(ω) = arg(H(e^{jωT}))   (10.2.9)
Because sin[ωkT + ϕ(ω)] = sin[ωT·(k + ϕ(ω)/(ωT))], we can consider the output delay in the variable k as being
λ = ϕ(ω)/(ωT)   (10.2.10)
For discrete time systems the frequency characteristics are periodical functions, with the period equal to the sampling frequency ω_s,
ω_s = 2π/T = 2π f_s,
because e^{j(ω+ω_s)T} = e^{jωT}·e^{j2π} = e^{jωT}, so
A(ω) = A(ω + ω_s),  ϕ(ω) = ϕ(ω + ω_s).   (10.2.11)


Sometimes the frequency characteristics A(ω), ϕ(ω) of discrete time systems are expressed as functions of
w - the relative frequency,
r - the relative pure frequency,
f_s - the sampling frequency,
where
w = ωT = 2π f/f_s,  w ∈ [0, 2π]   (10.2.12)
r = f/f_s,  r ∈ [0, 1]   (10.2.13)
f_s = 1/T   (10.2.14)
Frequency characteristics are directly related to the sampled transfer function H*(s), because
H(z) = H*(s)|_{e^{sT}=z}   (10.2.15)
H(z)|_{z=e^{jωT}} = H*(s)|_{s=jω} ⇒ H(e^{jωT}) = H*(jω)   (10.2.16)
A(ω) = |H*(jω)|,  ϕ(ω) = arg(H*(jω))   (10.2.17)
As we mentioned before, the Laplace transform of a sampled signal is a periodical function, so the sampled transfer function is a periodical function too. The frequency characteristics of discrete systems are always represented on a logarithmic scale in relative frequency only: they are periodical on a linear scale, but not on a logarithmic one.
From (10.2.5), (10.2.6) we can see that the frequency characteristics are the modulus and the argument of the complex number H(z) evaluated for z = e^{jωT}.
Because |e^{jωT}| = 1 and arg(e^{jωT}) = ωT, the frequency characteristics are evaluated from H(z) considering the variable z moving along the unit circle of the z-plane, as depicted in Fig. 10.2.1.

[Figure: in the z-plane, z = e^{jωT} runs on the unit circle |z| = 1 at the angle ωT; in the H(z)-plane, the image H(e^{jωT}) has modulus A(ω) = |H(e^{jωT})| and argument ϕ(ω) = arg{H(e^{jωT})}.]
Figure no. 10.2.1.

10.2.2. Frequency Characteristics of First Order Sliding Average Filter.


The result of a numerical acquisition of a continuous signal u(t) with a sampling period T is the string u_k. We want to filter this string of numbers, getting the filtered variable y_k according to the relation
y_k = (u_k + u_{k−1})/2   (10.2.18)
which is a two-term sliding average filter, also called a "first order sliding average filter".
By the order of a sliding filter we understand the difference between the maximum step index (k) and the minimum step index (k−1) of the input samples involved. In this case it is k−(k−1) = 1. The number of input samples involved, u_k and u_{k−1}, is k−(k−1)+1 = 2.
A computer program for this filter could be:
Initialise U1
Read U % U=uk
Y=(U + U1)/2
Write Y
U1=U
This filter is expressed by a z-transfer function H(z),
Y(z) = (1/2)(1 + z^{−1})·U(z);  H(z) = (1/2)(1 + z^{−1}) = (z + 1)/(2z)   (10.2.19)
which expresses a first order proper discrete time system. We can evaluate
H(e^{jωT}) = (1/2)(1 + e^{−jωT}) = (1/2)(1 + cos ωT − j sin ωT)
A(ω) = (1/2)·√[(1 + cos ωT)² + sin² ωT] = (1/2)·√(2 + 2 cos ωT)   (10.2.20)


ϕ(ω) = −arctg( sin ωT / (1 + cos ωT) ) = −arctg( tg(ωT/2) ) = −ωT/2   (10.2.21)
Suppose that the sampling frequency is f_s = 1000 Hz. With this filter the component of frequency f = 50 Hz is attenuated by the factor
α = A(2πf) = A(100π) = (1/2)·√(2 + 2 cos(100π · 0.001)) = 0.9877
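The attenuation factor can be checked directly from (10.2.20); a short numeric sketch:

```python
import math

T = 1 / 1000            # sampling period for fs = 1000 Hz
f = 50                  # mains component, Hz
w = 2 * math.pi * f     # omega = 2*pi*f
alpha = 0.5 * math.sqrt(2 + 2 * math.cos(w * T))   # A(omega) from (10.2.20)
print(round(alpha, 4))  # -> 0.9877
```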
The Bode characteristics for (10.2.19), drawn in Matlab, are depicted in Fig. 10.2.2.

[Figure: Bode diagrams of the filter — magnitude (dB) and phase (deg) versus frequency (rad/sec), on a logarithmic frequency axis from 10^{-2} to 10^{1}.]
Figure no. 10.2.2.

10.2.3. Frequency Characteristics of m-Order Sliding Weighted Filter.


We can use another type of sliding filter,
y_k = b_m u_k + b_{m−1} u_{k−1} + ... + b_0 u_{k−m} = Σ_{i=0}^{m} b_{m−i} u_{k−i},  ∀k ≥ 0.   (10.2.22)
The order of this sliding filter is m, because the difference between the maximum step index (k) and the minimum step index (k−m) of the input samples involved is m. The number of input samples involved in the filtering process, u_k, ..., u_{k−m}, is k−(k−m)+1 = m+1. They determine the output through m+1 weighting factors b_i, i = 0 : m. When
Σ_{i=0}^{m} b_i = 1
the sliding weighted filter is called an m-order sliding average filter.
If m = 1 then
y_k = b_1 u_k + b_0 u_{k−1}
and if b_1 = b_0 = 1/2, then we have the previous first order sliding average filter.
The transfer function of the m-order sliding weighted filter, with h_k = b_{m−k}, is
H(z) = Σ_{k=0}^{∞} h_k z^{−k} = Σ_{k=0}^{m} b_{m−k} z^{−k} = (b_m z^m + b_{m−1} z^{m−1} + ... + b_0)/z^m   (10.2.23)
This is a proper m-order discrete time system. For its computer implementation it is recommended to express it by state equations as in (8.2.23).
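A minimal sketch of (10.2.22), with assumed weights b_i (the values below are illustrative, not from the text): the steady-state gain of the filter on a complex exponential input equals H(e^{jωT}) from (10.2.23).

```python
import cmath

b = [0.25, 0.5, 0.25]        # assumed weights b_0, b_1, b_2 (m = 2, sum = 1)
m = len(b) - 1

def filt(u):
    """y_k = b_m*u_k + ... + b_0*u_{k-m}, with zero samples before k = 0."""
    return [sum(b[m - i] * (u[k - i] if k - i >= 0 else 0.0)
                for i in range(m + 1)) for k in range(len(u))]

w = 0.7                       # w = omega*T, an arbitrary relative frequency
u = [cmath.exp(1j * w * k) for k in range(20)]
y = filt(u)
gain = y[-1] / u[-1]          # steady-state gain (exact once k >= m)
H = sum(b[m - k] * cmath.exp(-1j * w * k) for k in range(m + 1))  # H(e^{jwT})
print(abs(gain - H) < 1e-12)
```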


10.3. Discrete Fourier Transform (DFT).


We saw that, having a z-transfer function H(z),
H(z) = Σ_{k=0}^{∞} h_k z^{−k},   (10.3.1)
to evaluate the frequency characteristics of this transfer function means to compute the modulus and the phase of the complex number H(z) for z = e^{jωT}, where z belongs to the unit circle with
|z| = 1,  arg(z) = ωT = 2π f/f_s = 2πr = w = ϕ ∈ [0, 2π].   (10.3.2)
The frequency characteristic is
H(z)|_{z=e^{jωT}} = H(e^{j2πf/f_s}).   (10.3.3)
To reduce the computing effort, these are evaluated only for some values of ϕ, for example only N values,
ϕ = ωT = 2π f/f_s = 2π p/N,  p ∈ [0, N−1]   (10.3.4)
as shown in Fig. 10.3.1.
As we perform a sampling operation of the frequency, this is equivalent to computing the frequency characteristics only for a finite number of values,
f = (p/N)·f_s,  p ∈ [0, N−1].   (10.3.5)
[Figure: the unit circle |z| = 1 in the z-plane, with the N sampling points ϕ = ωT = 2πp/N marked at the angles 2π·1/N, 2π·2/N, ..., 2π(N−1)/N.]
Figure no. 10.3.1.


This will determine the so-called Discrete Fourier Transform (DFT),
H^F(p) = H(z)|_{z=e^{j(2π/N)p}} = H(e^{j(2π/N)p}) = Σ_{k=0}^{∞} h_k e^{−j(2π/N)pk}   (10.3.6)
Denoting
W_N = e^{j2π/N} = ᴺ√1,   (10.3.7)
the DFT is
H^F(p) = Σ_{k=0}^{∞} h_k W_N^{−pk} = H(z)|_{z=e^{j(2π/N)p}}   (10.3.8)

This can be applied also to strings of numbers. The z-transform is
Y(z) = Σ_{k=0}^{∞} y_k z^{−k} ⇒
Y^F(p) = Σ_{k=0}^{∞} y_k W_N^{−pk} = Y(z)|_{z=e^{j(2π/N)p}}   (10.3.9)


Very interesting results are obtained by using the DFT for finite strings of numbers,
{y_k}, k ≥ 0,  y_k = 0 for k ≥ N.   (10.3.10)
For such (finite) strings of numbers the DFT has the form
Y^F(p) = DFT{y_k} = Σ_{k=0}^{N−1} y_k W_N^{−pk}   (10.3.11)
Because of the periodicity of the expression W_N^{−pk}, and its form (10.3.7), one can define, for finite strings of numbers, the so-called Inverse Discrete Fourier Transform (IDFT) of the form
y_k = IDFT{Y^F(p)} = (1/N)·Σ_{p=0}^{N−1} Y^F(p) W_N^{pk}   (10.3.12)
It is rather difficult to compute (10.3.11), (10.3.12) directly, but there are several methods to do it, such as the algorithm known as "The Fast Fourier Transform", FFT for short, which is particularly convenient if N = 2^q, q ∈ Z.
The Fast Fourier Transform is intensely utilised in the signal processing field.
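The pair (10.3.11)-(10.3.12) can be checked with the direct definitions (no FFT speed-up), using W_N = e^{j2π/N}; the sample string below is arbitrary:

```python
import cmath

N = 8
WN = cmath.exp(2j * cmath.pi / N)                 # W_N = e^{j 2*pi/N}
y = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 4.0]   # finite string, yk = 0 for k >= N

# DFT, relation (10.3.11)
YF = [sum(y[k] * WN ** (-p * k) for k in range(N)) for p in range(N)]
# IDFT, relation (10.3.12), recovers the original string
yr = [sum(YF[p] * WN ** (p * k) for p in range(N)) / N for k in range(N)]

print(all(abs(yr[k] - y[k]) < 1e-9 for k in range(N)))
```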
Suppose that we have a string of numbers {u_k}, k ≥ 0, whose DFT is
U^F(p) = Σ_{k=0}^{∞} u_k W_N^{−pk}.   (10.3.13)
If this string of numbers u_k is passed through a dynamic system with the z-transfer function H(z), given by (10.3.1), the output of such a system is
y_k = Σ_{i=0}^{k} u_i h_{k−i} ⇔ Y(z) = H(z)·U(z),   (10.3.14)
and it can be expressed by its DFT as
Y^F(p) = H^F(p)·U^F(p),   (10.3.15)
where
U^F(p) is the DFT of the input string of numbers u_k, (10.3.13),
Y^F(p) is the DFT of the output string of numbers y_k,
Y^F(p) = Σ_{k=0}^{∞} y_k W_N^{−pk},
and H^F(p) is, by definition, "the DFT transfer function",
H^F(p) = H(z)|_{z=e^{j(2π/N)p}} = Σ_{k=0}^{N−1} h_k W_N^{−kp}.   (10.3.16)
If the input is a finite string,
{u_k}, k ≥ 0,  u_k = 0 for k ≥ N,
and the system is a Finite Impulse Response (FIR) system, (10.2.22) with m = N, then it is convenient to use the FFT to evaluate the system output, in both the time domain and the frequency domain.
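A sketch of computing a FIR output through DFTs. One practical caveat (an addition, not stated in the text): the pointwise product (10.3.15) of N-point DFTs corresponds to a circular convolution, so to reproduce the linear convolution (10.3.14) both strings are zero-padded to a length L ≥ N_u + N_h − 1:

```python
import cmath

def dft(x):
    N = len(x)
    WN = cmath.exp(2j * cmath.pi / N)
    return [sum(x[k] * WN ** (-p * k) for k in range(N)) for p in range(N)]

def idft(X):
    N = len(X)
    WN = cmath.exp(2j * cmath.pi / N)
    return [sum(X[p] * WN ** (p * k) for p in range(N)) / N for k in range(N)]

u = [1.0, 2.0, 3.0]                 # finite input string (arbitrary example)
h = [0.5, 0.5]                      # FIR weighting sequence h_k
L = len(u) + len(h) - 1             # pad length needed for linear convolution
up = u + [0.0] * (L - len(u))
hp = h + [0.0] * (L - len(h))
y = [v.real for v in idft([H * U for H, U in zip(dft(hp), dft(up))])]
direct = [sum(u[i] * h[k - i] for i in range(len(u)) if 0 <= k - i < len(h))
          for k in range(L)]        # y_k = sum_i u_i h_{k-i}, relation (10.3.14)
print(all(abs(p - q) < 1e-9 for p, q in zip(y, direct)))
```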


11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS.

11.1. Introduction.
Having a continuous time system with the input u(t) and the output y(t), both continuous time functions, the discretization problem means computing a discrete-time model that realises the best approximation of the continuous-time system.
But, as with any discrete-time system, the input to the model is a string of numbers, so we are able to supply the discrete model only with the values of the input at some time moments. The discrete time model output is a string of numbers too.
The discretization problem is: what is the best discrete time model such that the error e_k, between the output values of the continuous system at some time moments and the corresponding output string values of the discrete system, is minimum?
A graphical image of the discretization process is presented in Fig. 11.1.1.

[Figure: the input u(t) drives the continuous system with output y(t); the sampled input u_k drives the discrete system with output y_k; the error is e_k = y(kT+) − y_k.]
Figure no. 11.1.1.


This problem can be approached as a local (point based) minimisation if the criterion is selected to be
||e_k|| = ||y(kT+) − y_k|| = minimum, for any integer k,   (11.1.1)
or as a global minimisation if the criterion is
Σ_{k=0}^{N} ||e_k|| = minimum   (11.1.2)
to get the best approximation over a finite time evolution interval.
There are several methods for system discretization. Two main problems are encountered: how to approximate the derivation operator and how to approximate the integral operator.
For the derivation operator, the backward approximation (BA) and the forward approximation (FA) can be used, on a finite time interval T equal to the sampling period.


11.2. Direct Methods of Discretization.

11.2.1. Approximation of the derivation operator.

Backward approximation.
For a time derivable function x(t), where we denote the sampled values by x_k = x(kT), the backward approximation of the first derivative is
dx(t)/dt |_{t=kT} ≈ (x_k − x_{k−1})/T   (11.2.1)
Forward approximation.
For the same x(t), the forward approximation of the first derivative is
ẋ(t)|_{t=kT} ≈ (x_{k+1} − x_k)/T   (11.2.2)
Unfortunately, the latter approximation sometimes leads to unstable models.

Example 11.2.1. LTI Discrete Model Obtained by Direct Methods.


Let us consider the system described by the state equation
ẋ = Ax + Bu.
The backward approximation gives us
(x_k − x_{k−1})/T = A x_k + B u_k
x_k = (I − TA)^{−1} x_{k−1} + T(I − TA)^{−1} B u_k
x_k = F x_{k−1} + G u_k   (11.2.3)
where F = (I − TA)^{−1}, G = T(I − TA)^{−1}B.
The forward approximation gives us
(x_{k+1} − x_k)/T = A x_k + B u_k
x_{k+1} = (I + TA) x_k + TB u_k
x_k = F x_{k−1} + G u_{k−1}   (11.2.4)
where
F = (I + TA), G = TB.
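For a scalar system ẋ = ax + bu (an assumed example with a = −1, b = 1), both discrete models can be simulated and compared with the steady state b·u/(−a) = 1 of the continuous system under a unit step:

```python
# Backward (11.2.3) and forward (11.2.4) approximations of xdot = a*x + b*u
a, b, T = -1.0, 1.0, 0.1
F_b = 1 / (1 - T * a); G_b = T / (1 - T * a) * b  # backward: F=(I-TA)^-1, G=T(I-TA)^-1 B
F_f = 1 + T * a;       G_f = T * b                # forward:  F=I+TA,      G=TB

xb = xf = 0.0
for _ in range(200):          # unit step input u_k = 1
    xb = F_b * xb + G_b * 1.0
    xf = F_f * xf + G_f * 1.0
print(round(xb, 3), round(xf, 3))   # both approach the steady state 1.0
```

With this stable a and small T both models converge; the forward model would become unstable for T > 2 (when |1 + Ta| > 1), illustrating the warning above.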

11.2.2. Approximation of the Integral Operator.


Let us consider that the integral operator is applied to a function u(t), giving us
x(t) = x(t_0) + ∫_{t_0}^{t} u(τ)dτ   (11.2.5)
Suppose ∃ k_0 ∈ Z, k_0 T ≥ t_0, so
x(t) = x(k_0 T) + ∫_{k_0 T}^{t} u(τ)dτ = x_{k_0} + ∫_{k_0 T}^{t} u(τ)dτ.   (11.2.6)
The integration process is illustrated in Fig. 11.2.1.

[Figure: the function u(t) sampled at t_0, k_0T, (k_0+1)T, ..., (k−1)T, kT, with the samples u_{k_0} = u(k_0T), u_{k−1} = u((k−1)T), u_k = u(kT), and the accumulated areas x_{k_0}, x_{k−1}, x_k.]
Figure no. 11.2.1.
The integral is approximated by a sum of forward or backward rectangles or trapezoids. We denote x(kT) = x_k.
Rectangular backward integral approximation:
x_k = x_{k_0} + T·Σ_{i=k_0+1}^{k} u_i   (11.2.5')
Trapezoidal backward integral approximation:
x_k = x_{k_0} + T·Σ_{i=k_0+1}^{k} (u_i + u_{i−1})/2   (11.2.6')
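Both sums can be checked on an assumed example, u(t) = t on [0, 1], whose exact integral is 0.5; the trapezoidal sum is exact for this linear input, while the backward rectangular sum overestimates by about T/2:

```python
T = 0.01
N = 100                                   # integrate from t0 = 0 to t = N*T = 1
u = [i * T for i in range(N + 1)]         # samples u_i = u(iT) of u(t) = t

rect = T * sum(u[i] for i in range(1, N + 1))                   # rectangular backward
trap = T * sum((u[i] + u[i - 1]) / 2 for i in range(1, N + 1))  # trapezoidal
print(round(rect, 4), round(trap, 4))     # 0.505 vs the exact 0.5
```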

11.2.3. Tustin's Substitution.


Based on the approximation of the integral operator, represented in the complex domain as in Fig. 11.2.2, through a sum of trapezoids, a discrete algorithm is obtained as an equivalent for the s-operator. Tustin's substitution is a procedure for discretizing continuous transfer functions.

[Figure: the continuous integrator 1/s, with input u(t) and output x(t), and its discrete equivalent (T/2)·(z+1)/(z−1), with input U(z) and output X(z).]
Figure no. 11.2.2.


By using
x_k = x_{k_0} + (T/2)·Σ_{i=k_0+1}^{k} (u_i + u_{i−1}) ⇒ x_{k−1} = x_{k_0} + (T/2)·Σ_{i=k_0+1}^{k−1} (u_i + u_{i−1})
x_k − x_{k−1} = (T/2)(u_k + u_{k−1})   (11.2.7)
Applying the z-transformation to (11.2.7), it results in
(1 − z^{−1})X(z) = (T/2)(1 + z^{−1})U(z),
and the z-transfer function,


X(z)/U(z) = (T/2)·(z + 1)/(z − 1)   (11.2.8)
which allows us to perform the correspondence between the s-operator and the z-operator:
1/s ↔ (T/2)·(z + 1)/(z − 1)
s ↔ (2/T)·(z − 1)/(z + 1)   (11.2.9)
For a transfer function H(s) we can obtain a z-transfer function H(z) by a simple substitution,
H(z) = H(s)|_{s = (2/T)·(z−1)/(z+1)}   (11.2.10)
Equation (11.2.9) is also called the bilinear transformation. It performs a mapping from the s-plane to the z-plane which transforms the entire jω axis of the s-plane into one complete revolution of the unit circle in the z-plane.
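The mapping property can be verified numerically: solving (11.2.9) for z gives z = (1 + sT/2)/(1 − sT/2), and for every s = jω this z lies on the unit circle:

```python
# Bilinear mapping z = (1 + sT/2)/(1 - sT/2), the inverse of s = (2/T)(z-1)/(z+1)
T = 0.1
zs = [(1 + 1j * w * T / 2) / (1 - 1j * w * T / 2)      # images of s = j*omega
      for w in (-100.0, -3.0, 0.0, 0.5, 10.0, 314.0)]  # sample frequencies
on_circle = all(abs(abs(z) - 1) < 1e-12 for z in zs)   # the j*omega axis -> |z| = 1
print(on_circle)
```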

11.2.4. Other Direct Methods of Discretization.

Discretization by matching poles and zeros.
Sometimes the above simple difference approximations may transform a stable continuous system into an unstable discrete time system.
This method takes into consideration the relation between the s-plane and the z-plane,
z = e^{sT},   (11.2.11)
so that to a pole s = s_k in the s-plane corresponds a pole z_k = e^{s_k T} in the z-plane, through which the pole matching is assured. The zero matching and the steady state gain are assured by special techniques.

Discretization by z-transformation of the continuous transfer function.
Another way of getting H(z) from H(s) is to use the theory of SDS as has been presented,
H(z) = H*(s)|_{s = (1/T)·ln z}   (11.2.12)
which is also called "impulse equivalence discretization". This method, even if not so simple, gives the best results.
We can also mention other methods, such as:
Discretization by matching the step response.
Discretization by substitution of the first few terms in the series for (1/T)·ln(z).
Discretization by solution of the continuous equation over each time step.


11.3. LTI Systems Discretization Using State Space Equations.


11.3.1. Analytical Relations.
This method of discretization is based on the general solution of a continuous time system with respect to the state vector.
If the input is a piecewise constant function on time intervals of length T, as it is in any computer controlled system with a classical NAC (numerical-analog converter), then the discrete time model is an exact model: e_k = 0 for any k ≥ 0.
ẋ = Ax + Bu   (11.3.1)
x(t) = Φ(t − t_0)x(t_0) + ∫_{t_0}^{t} Φ(t − τ)Bu(τ)dτ.   (11.3.2)
If we consider t_0 = kT and denote x_k = x(kT), we can get the system evolution inside one sampling period. If u(t) is a bounded function, the state of a continuous system is a continuous function with respect to the time variable: x(kT+) = x(kT).
x(t) = Φ(t − kT)x_k + ∫_{kT}^{t} Φ(t − τ)Bu(τ)dτ,  t ∈ (kT, (k+1)T]   (11.3.3)
At the end of the k-th sampling interval, t = (k+1)T, x_{k+1} = x((k+1)T):
x_{k+1} = Φ(T)x_k + ∫_{kT}^{(k+1)T} Φ(kT + T − τ)Bu(τ)dτ.   (11.3.4)
Because τ ∈ (kT, (k+1)T], x_{k+1} depends on the values of the input in this time interval only. Performing the substitution θ = τ − kT, θ ∈ (0, T], with u(τ) = u(θ + kT), we get the discrete time model
x_{k+1} = Φ(T)x_k + ∫_{0}^{T} Φ(T − θ)Bu(θ + kT)dθ   (11.3.5)
0
If it is possible, we can develop u(kT + θ) in a Taylor series with respect to θ ∈ (0, T],
u(kT + θ) = u(τ) = Σ_{i=0}^{∞} u_i(kT)·θ^i/i!   (11.3.6)
u_i = u_i(kT) = ∂^i u(kT + θ)/∂θ^i |_{θ=0+},  i ≥ 1,  u_0 = u(kT+)   (11.3.7)
Substituting (11.3.6) in (11.3.5), after rearranging the terms it results in
x_{k+1} = Φ(T)x_k + Σ_{i=0}^{∞} G_i B u_i,   (11.3.8)
where
Φ(T) = e^{AT}   (11.3.9)
u_i = ∂^i u(kT + θ)/∂θ^i |_{θ=0+}   (11.3.10)

G_i = ∫_{0}^{T} e^{A(T−θ)}·(θ^i/i!)dθ   (11.3.11)
G_0 = ∫_{0}^{T} e^{A(T−θ)}dθ   (11.3.12)
G_0 = A^{−1}(e^{AT} − I)   (11.3.13)
The last expression of G_0 can be used only if A is not a singular matrix, that is det(A) ≠ 0; otherwise G_0 can be computed by performing the integral with respect to all the elements of e^{A(T−θ)} from (11.3.12).
Integrating by parts, we get the recursive formula
G_i = A^{−1}( G_{i−1} − (T^i/i!)·I ),  G_0 = A^{−1}(e^{AT} − I)   (11.3.14)
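For a scalar A = a (an assumed example), G_0 and G_1 have closed forms that can be checked against (11.3.13) and the recursion (11.3.14):

```python
import math

a, T = -2.0, 0.1
G0 = (math.exp(a * T) - 1) / a                    # A^-1 (e^{AT} - I), scalar case
G1_rec = (G0 - T) / a                             # recursion: G1 = A^-1 (G0 - T*I)
G1_int = (math.exp(a * T) - 1 - a * T) / a ** 2   # direct integral of e^{a(T-s)}*s
print(abs(G1_rec - G1_int) < 1e-12)
```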
i!
If the continuous system has a piecewise constant input on time intervals of length T, then an exact discrete time model can be obtained. This situation appears when a continuous system is controlled by a computer with a zero order holder, as depicted in Fig. 11.3.1, with the input time evolution as in Fig. 11.3.2.

[Figure 11.3.1: the staircase command w(t) drives the continuous system ẋ = Ax + Bu, y = Cx + Du through u(t). Figure 11.3.2: the piecewise constant input u(t) = w(kT), held on each interval (kT, (k+1)T].]
Figure no. 11.3.1. Figure no. 11.3.2.

From Fig. 11.3.2 we can see that
u(t) = w(kT) = w_k, ∀t ∈ (kT, (k+1)T].   (11.3.15)
Because u(t) is constant in time on (kT, (k+1)T], its right side derivatives are zero and
u_0 = u(t) = w_k,   (11.3.16)
so from (11.3.8) we get the exact mathematical model
x_{k+1} = F x_k + G w_k   (11.3.17)
y_k = C x_k + D w_k   (11.3.18)
where
F = e^{AT}   (11.3.19)
G = G_0 B   (11.3.20)


11.3.2. Numerical Methods for Discretized Matrices Evaluation.


If A is a singular matrix, or if it is difficult to perform the analytical computation of e^{AT}, then this matrix can be evaluated by a finite series approximation
Φ(T) ≈ Φ_N(T) = Σ_{i=0}^{N} A^i T^i/i!   (11.3.21)
For large matrices, because of the finite word representation of numbers in a computer, a false convergence of the series (11.3.21) can appear, that means
Φ_N(T) ≡ Φ_{N+1}(T)   (11.3.22)
with large numerical errors. This false convergence is more evident if the sampling period T is large.
To avoid this, an integer number m is determined in such a way that
T = mτ,  τ small enough.   (11.3.23)
The value of τ is determined by the so-called Gershgorin theorem.
Taking into consideration the property of the transition matrix, we can write
Φ(mτ) ≡ [Φ(τ)]^m, m ∈ N ⇒ Φ_N(T) ≈ [Φ_N(τ)]^m   (11.3.24)
Also, the matrix G_0 can be evaluated from the series
G_0 = Σ_{i=0}^{∞} A^i T^{i+1}/(i+1)!   (11.3.25)
Good numerical results are obtained if the transition matrix, approximated by the finite series (11.3.21),
Φ_N(T) = I + (T/1!)A + (T²/2!)A² + ··· + (T^N/N!)A^N   (11.3.26)
is arranged for numerical computing as
Φ_N(T) − I = TA(I + (TA/2)(I + (TA/3)(···(I + (TA/(N−1))(I + TA/N))···)))   (11.3.27)
Denoting
Ψ = I + (TA/2)(I + (TA/3)(···(I + (TA/(N−1))(I + TA/N))···))   (11.3.28)
we can then evaluate
F = Φ_N(T) = I + TAΨ   (11.3.29)
G = TΨB   (11.3.30)
It is an easy job to create computer programmes for evaluating such matrices.
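A scalar sketch of the nested evaluation (11.3.28)-(11.3.30) (with A = a, B = b assumed scalars): the Horner-like loop builds Ψ from the innermost factor outwards, and F, G then approach e^{aT} and a^{−1}(e^{aT} − 1)b:

```python
import math

a, b, T, N = -1.0, 2.0, 0.1, 20
psi = 1.0
for i in range(N, 1, -1):          # innermost factor first: I + TA/N, then /(N-1), ...
    psi = 1.0 + (T * a / i) * psi
F = 1.0 + T * a * psi              # Phi_N(T) = I + T*A*Psi  ~  e^{aT}
G = T * psi * b                    # G = T*Psi*B  ~  A^-1 (e^{aT} - I) B
print(abs(F - math.exp(a * T)) < 1e-12,
      abs(G - (math.exp(a * T) - 1) / a * b) < 1e-12)
```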


12. DISCRETE TIME SYSTEMS STABILITY.

12.1. Stability Problem Statement.


Let us consider a linear time invariant discrete system described by
H(z) = M(z)/L(z),   (12.1.1)
where the denominator polynomial is
L(z) = a_n z^n + ... + a_1 z + a_0 = a_n·Π_{i=1}^{N} (z − λ_i)^{m_i},  Σ_{i=1}^{N} m_i = n.   (12.1.2)
The system is stable if and only if the modulus of each pole is less than one, that means all the poles are placed inside the unit circle in the z-plane.
In Fig. 12.1.1 the transfer of poles from the s-plane to the z-plane is shown.
[Figure: poles marked to the left of the imaginary axis in the s-plane map, through z_i = e^{s_i T}, to poles inside the unit circle of the z-plane.]
Figure no. 12.1.1.


If a transfer function H(s) has a pole
s = λ_i = −p_i   (12.1.3)
then its z-transform
H(z) = Z{H(s)} = H*(s)|_{s=(1/T)·ln(z)}   (12.1.4)
has a pole
z = ζ_i = e^{λ_i T}.   (12.1.5)
If there are simple poles on the unit circle,
|z| = 1 ⇔ Re(s) = 0,   (12.1.6)
then the discrete system is at the limit of stability.
For z-transfer function stability it is not necessary, as it is for continuous systems, that all the coefficients of the denominator polynomial be different from zero and have the same sign.
Because z = e^{sT}, the left half s-plane, which is the stability half plane, is transformed into the inside of the unit circle of the z-plane, which is the stability domain of discrete systems.
There are some specific criteria, algebraical and frequency based, for the stability of discrete time systems.


The algebraical criteria are related to the z-transfer function denominator polynomial L(z), for the so-called external stability, and to the characteristic polynomial ∆(z), for the so-called internal stability, where
∆(z) = det(zI − A)   (12.1.7)
and A is the system matrix of one state realisation of the z-transfer function.
The external stability, or input-output stability, requires that all the roots of L(z) be inside the unit circle.
The internal stability asks that the roots of ∆(z) = 0 be inside the unit circle.
So for internal stability we manage the characteristic polynomial ∆(z), and L(z) for the external stability.
If the system has the transfer function H(z) as a nominal transfer function, that means its numerator and denominator have no common factor, then any state realisation is both controllable and observable, and vice versa. In such a case ∆(z) ≡ L(z).
In the following we shall manipulate only the polynomial L(z), but if necessary the same techniques can be applied to ∆(z).

Example 12.1.1. Study of the Internal and External Stability.


To understand what internal and external stability mean and why they are important, let us start with a computer program for a dynamic system, giving us the step response. This is a Matlab program, but it is similar to any program implemented for computer controlled systems.

Y1=2.8;Y0=2;U1=1;U0=1;           % initial conditions
t=0:70;lt=length(t);y=zeros(1,lt);
y(1)=Y0;y(2)=Y1;
for k=3:lt
   U=1;                          % unit step input
   Y=2*Y1-0.99*Y0+U-1.1*U1;      % the system recursion
   y(k)=Y;
   Y0=Y1;Y1=Y;U1=U;
end
As we understand, the first line, "Y1=2.8;Y0=2;U1=1;U0=1;", establishes the initial conditions. We can see that they are not zero. The response of this program, depicted in Fig. 12.1.2, indicates a stable response approaching the steady state value of 10, as it also does with zero initial conditions, "Y1=0;Y0=0;U1=0;U0=0;", in Fig. 12.1.2.
But if the initial conditions are "Y1=-2;Y0=2;U1=1;U0=1;" then the response is as in Fig. 12.1.3, which represents a disaster for a controlled process. Even if the initial conditions have a very small variation with respect to the first case, "Y1=2.799;Y0=2;U1=1;U0=1;", that means Y1=2.799 instead of Y1=2.8, we have an unstable response, as in Fig. 12.1.4.


[Figure 12.1.2: step responses for the initial conditions Y1=2.8;Y0=2;U1=1;U0=1; and for the zero initial conditions Y1=0;Y0=0;U1=0;U0=0; — both approach the steady state value 10. Figure 12.1.3: step response for the initial conditions Y1=-2;Y0=2;U1=1;U0=1; — the output diverges, reaching about -20000 after 70 steps.]
Figure no. 12.1.2. Figure no. 12.1.3.

[Figure 12.1.4: step response for the initial conditions Y1=2.799;Y0=2;U1=1;U0=1; — the output diverges, reaching about -90 after 120 steps.]
Figure no. 12.1.4.
As a conclusion: the forced response is stable, so the system is externally stable, but the general response, mainly the free response, is unstable, so the system is internally unstable.
All such practical aspects can be kept under control only using the theoretical approach of systems theory.
A brief explanation:
The implemented line "Y=2*Y1-0.99*Y0+U-1.1*U1;" corresponds to the discrete transfer function
H(z) = (z² − 1.1z)/(z² − 2z + 0.99) = z(z − 1.1)/[(z − 1.1)(z − 0.9)]   (12.1.8)
One pole, ξ_1 = 1.1 > 1, is unstable, but the second, ξ_2 = 0.9 < 1, is stable. As we can see, the transfer function is not a nominal one. The input-output behaviour is described by the reduced transfer function
H(z) = z/(z − 0.9).   (12.1.9)
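The pole analysis of the example can be reproduced with a short script: the roots of z² − 2z + 0.99 are 1.1 and 0.9, and the zero-initial-condition step response settles near H(1) = 10 of the reduced system (12.1.9):

```python
import math

# Poles of L(z) = z^2 - 2z + 0.99 via the quadratic formula
d = math.sqrt(2.0 ** 2 - 4 * 0.99)
z1, z2 = (2.0 + d) / 2, (2.0 - d) / 2          # the poles 1.1 and 0.9

# Step response of Y = 2*Y1 - 0.99*Y0 + U - 1.1*U1, zero initial conditions
Y1 = Y0 = U1 = 0.0
for _ in range(200):
    U = 1.0
    Y = 2 * Y1 - 0.99 * Y0 + U - 1.1 * U1
    Y0, Y1, U1 = Y1, Y, U
print(round(z1, 2), round(z2, 2), round(Y1, 3))
```

Note that even this "stable" run eventually drifts in floating point, because rounding errors excite the cancelled unstable mode 1.1^k — a numerical echo of the internal instability discussed above.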


12.2. Stability Criteria for Discrete Time Systems.

12.2.1. Necessary Stability Conditions.


The necessary conditions for an n-degree polynomial
L(z) = a_n z^n + a_{n−1} z^{n−1} + ... + a_1 z + a_0,  a_k ∈ C, a_n ≠ 0,   (12.2.1)
to be discrete time asymptotically stable (to have all its roots inside the unit circle) are:
1) L(1) > 0,   (12.2.2)
2) (−1)^n L(−1) > 0.   (12.2.3)

12.2.2. Schur-Kohn Stability Criterion.


Let us consider the discrete time system characteristic polynomial,
L(z) = a_n z^n + a_{n−1} z^{n−1} + ... + a_1 z + a_0,  a_k ∈ C, a_n ≠ 0.   (12.2.4)
The Schur-Kohn stability criterion determines the necessary and sufficient discrete time asymptotical stability conditions: all the roots inside the unit circle. To apply this criterion, n determinants D_k, of dimension 2k×2k, are computed. They are called Schur-Kohn determinants:

        | a_0       0        ...  0    | ā_n       ā_{n−1}  ...  ā_{n−k+1} |
        | a_1       a_0      ...  0    | 0         ā_n      ...  ā_{n−k+2} |
        | ...       ...           ...  | ...       ...           ...       |
D_k =   | a_{k−1}   a_{k−2}  ...  a_0  | 0         0        ...  ā_n       |   (12.2.5)
        | ā_n       0        ...  0    | a_0       a_1      ...  a_{k−1}   |
        | ā_{n−1}   ā_n      ...  0    | 0         a_0      ...  a_{k−2}   |
        | ...       ...           ...  | ...       ...           ...       |
        | ā_{n−k+1} ...      ...  ā_n  | 0         0        ...  a_0       |

We denoted by ā_k the complex conjugate of the coefficient a_k.
The necessary and sufficient stability condition is:
(−1)^k D_k > 0,  ∀k ∈ [1, n].   (12.2.6)

12.2.3. Jury Stability Criterion.


L(z) = a_n z^n + a_{n−1} z^{n−1} + ... + a_1 z + a_0,  a_k ∈ C, a_n ≠ 0   (12.2.7)
We create the Jury table (12.2.8):

 row    z^0       z^1       z^2       ...   z^{n−2}   z^{n−1}   z^n
  1     a_0       a_1       a_2       ...   a_{n−2}   a_{n−1}   a_n
  2     a_n       a_{n−1}   a_{n−2}   ...   a_2       a_1       a_0
  3     b_0       b_1       b_2       ...   b_{n−2}   b_{n−1}   0
  4     b_{n−1}   b_{n−2}   b_{n−3}   ...   b_1       b_0       0     (12.2.8)
  5     c_0       c_1       c_2       ...   c_{n−2}   0         0
  6     c_{n−2}   c_{n−3}   c_{n−4}   ...   c_0       0         0
  ...
 2n−5   p_0       p_1       p_2       p_3
 2n−4   p_3       p_2       p_1       p_0
 2n−3   q_0       q_1       q_2       0


where the table entries are evaluated based on 2×2 determinants,
b_k = | a_0  a_{n−k} ; a_n  a_k |,  k = 0, n−1   (12.2.9)
c_k = | b_0  b_{n−1−k} ; b_{n−1}  b_k |,  k = 0, n−2   (12.2.10)
q_k = | p_0  p_{3−k} ; p_3  p_k |,  k = 0, 1, 2.   (12.2.11)
The discrete time system with the characteristic polynomial (12.2.7) is asymptotically stable if and only if the necessary conditions
L(1) > 0
(−1)^n L(−1) > 0   (12.2.12)
and the inequalities
|a_0| < |a_n|
|b_0| > |b_{n−1}|
|c_0| > |c_{n−2}|
..............
|p_0| > |p_3|
|q_0| > |q_2|   (12.2.13)
are satisfied.
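For n = 2 the Jury table degenerates and the criterion reduces to three conditions; a minimal sketch (second order only, real coefficients with a_2 > 0 assumed):

```python
def jury2(a0, a1, a2):
    """Second-order Jury test: all roots of a2*z^2 + a1*z + a0 inside |z| < 1."""
    L1 = a2 + a1 + a0              # L(1) > 0
    Lm1 = a2 - a1 + a0             # (-1)^2 * L(-1) > 0
    return L1 > 0 and Lm1 > 0 and abs(a0) < abs(a2)

print(jury2(0.06, -0.5, 1.0),      # roots 0.2 and 0.3 -> stable
      jury2(0.99, -2.0, 1.0))      # roots 1.1 and 0.9 -> unstable (L(1) < 0)
```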
12.2.4. Periodicity Bands and Mappings Between Complex Planes.
A lot of methods from continuous time systems are based on a stability limit given by the imaginary axis of a complex plane. Continuous time criteria express the necessary and sufficient conditions for a polynomial to have all its roots placed on the left side of the imaginary axis.
We saw that for discrete time systems the stability limit is a circle in the z-plane. Several transformations can be utilised to transform a circle from a complex plane, in our case the z-plane, into the imaginary axis (except for some evolutions to infinity) of a new complex plane. So the continuous time criteria can be directly applied to discrete time systems.
Most utilised are the bilinear transformations "r" and "w".
The "r" transformation:
z → r:  r = (z + 1)/(z − 1)   (12.2.14)
r → z:  z = (r + 1)/(r − 1)   (12.2.15)
The "w" transformation:
z → w:  w = (z − 1)/(z + 1)   (12.2.16)
w → z:  z = (1 + w)/(1 − w)   (12.2.17)
w → r:  r = 1/w   (12.2.18)

In the sequel we shall present the "w" transformation only.
The transformations of the stability domains from s to z, and then from z to w, are illustrated in Fig. 12.2.1. In such a way the unit circle is transformed into the imaginary axis.

[Figure: the stability domains — the left half s-plane, the inside of the unit circle in the z-plane, and the left half w-plane.]
Figure no. 12.2.1.


We note that an algebraical relation from the s-plane to the z-plane exists only between the Laplace transform of a sampled signal and its z-transform.
But, because the Laplace transform of a sampled signal Y*(s) is a periodical function with the period jω_s,
Y*(s) = Y*(s + jω_s),  ω_s = 2π/T,   (12.2.19)
only one periodicity band in the s-plane is enough to represent such a sampled signal. Particularly, a stable half periodicity band SBq can be defined as
SBq = {s = σ + jω | ω ∈ (−ω_s/2 + qω_s, ω_s/2 + qω_s], σ ≤ 0, q ∈ Z}   (12.2.20)
Any of these results can be applied to a sampled transfer function H*(s) too. The fundamental stable band SB0 is presented in Fig. 12.2.2. In the definition relation (12.2.20) of an SBq, the area determined by σ = 0 is also considered, to manage the limit of stability. A periodicity band denoted Bq is defined as in (12.2.20) without restrictions on σ.
In such a way the relation
z = e^{sT}   (12.2.21)
performs a one-to-one correspondence between one selected Bq from the s-plane and the entire z-plane.
As we remember, the analytical extension of the z-transform expression Y(z) is obtained by a simple algebraical substitution,
Y(z) = Y*(s)|_{s=(1/T)·ln(z)}   (12.2.22)
for any finite s different from a pole. At the same time, to any family of poles
s_{ni} = s_i + n·(jω_s), n ∈ Z,   (12.2.23)
of Y*(s), there corresponds a unique pole of Y(z),
z_i = e^{[s_i + n·(jω_s)]T} = e^{s_i T}.   (12.2.24)
Vice versa, the entire z-plane gives the values of Y*(s) in one selected periodicity band Bq using the transformation
Y*(s) = Y(z)|_{z=e^{sT}},  s ∈ Bq, q ∈ Z   (12.2.25)
By periodicity (of both the Y*(s) values and of the poles) we can extend Y*(s) to the entire s-plane. The most utilised band is the fundamental one, for q = 0.

The frequency characteristics are obtained considering, in SB0,
s = jω,  ω ∈ (−ω_s/2, ω_s/2]
which determines, from (12.2.25),
Y*(jω) = Y(z)|_{z=e^{jωT}},  arg(z) = ωT ∈ (−π, π].   (12.2.26)
For positive frequencies only we have
ω ∈ [0, ω_s/2] = [0, πf_s]
where f_s = 1/T = ω_s/(2π) is the pure sampling frequency.
Denoting ω = 2πf, the range of the pure frequencies f, from the continuous time domain, for which the discrete time frequency characteristics are uniquely determined, is
f ∈ [0, f_s/2].   (12.2.27)
In the sequel we shall illustrate how the imaginary part of SB0 (the segments (5)-(6) = (1)-(2)) from Fig. 12.2.2 is transformed, through the z-plane, into the imaginary axis of the w-plane.
s = jω, ω ∈ (−ω_s/2, ω_s/2) ⇒ z = e^{jωT} = e^{jϕ}, ϕ ∈ (−π, π)
But
w = (z − 1)/(z + 1) = (e^{jϕ} − 1)/(e^{jϕ} + 1) = (cos ϕ − 1 + j sin ϕ)/(cos ϕ + 1 + j sin ϕ)
  = [−2 sin²(ϕ/2) + j2 sin(ϕ/2) cos(ϕ/2)] / [2 cos²(ϕ/2) + j2 sin(ϕ/2) cos(ϕ/2)]
w = tg(ϕ/2)·[−sin(ϕ/2) + j cos(ϕ/2)] / [cos(ϕ/2) + j sin(ϕ/2)] = j·tg(ϕ/2),  ϕ ∈ (−π, π) ⇒ w ∈ (−j∞, j∞).
The transformation of the other segments of SB0 is performed in a similar manner.

[Figure: the fundamental stable band SB0 (σ ≤ 0, between −jω_s/2 and jω_s/2) in the s-plane, its image — the unit disc — in the z-plane, and the left half of the w-plane; the numbered boundary points 1-6 show how the band boundary maps onto the unit circle and then onto the imaginary axis of the w-plane.]
Figure no. 12.2.2.


12.2.5. Discrete Equivalent Routh Criterion in the "w" plane.


As we saw, the inside of the unit circle of the z plane is transformed into
the left half of the w complex plane.
The discrete time stability of a polynomial L(z) can be analysed on its
image L̃(w), available for finite z ⇒ w ≠ 1, where
    L(z)|z=(1+w)/(1−w) = L̃(w)/(1 − w)^n .                          (12.2.28)
(1 − w) n
In such conditions,
    L(z) = 0 ⇔ L̃(w) = 0.
Note that, because L̃(w) comes from L(z) for finite z, we shall never have
L̃(w)|w=1 = 0. The stability condition for L(z) = 0 gives the same result as
the stability condition for L̃(w) = 0: the roots of L̃(w) must be placed in
the left half of the w plane, so the Routh or Hurwitz criterion can be
directly applied to the polynomial L̃(w).
If the polynomial L(z) is written as
    L(z) = Σ(k=0..n) a_k·z^k , a_n ≠ 0,                             (12.2.29)
then for finite z ⇒ w ≠ 1 we have
    L̃(w) = (1 − w)^n · Σ(k=0..n) a_k·((1 + w)/(1 − w))^k , a_n ≠ 0,      (12.2.30)
    L̃(w) = Σ(k=0..n) a_k·(1 + w)^k·(1 − w)^(n−k) , a_n ≠ 0, w ≠ 1,       (12.2.31)
    L̃(w) = Σ(k=0..n) ã_k·w^k , ã_n ≠ 0, w ≠ 1.                           (12.2.32)
Any continuous time algebraic stability criterion can be applied using the
coefficients ã_k , k = 0 : n.
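The passage from (12.2.29) to (12.2.32) can be sketched as code: expand the products (1 + w)^k·(1 − w)^(n−k), collect the coefficients ã_k, and apply a root condition in the w plane. The polynomial L(z) = z² − 0.3z − 0.1 below (roots 0.5 and −0.2, inside the unit circle) is a hypothetical example, not taken from the text:

```python
import numpy as np

# a_0, a_1, a_2 of the hypothetical L(z) = z^2 - 0.3 z - 0.1
a = [-0.1, -0.3, 1.0]
n = len(a) - 1

# L~(w) = sum_k a_k (1+w)^k (1-w)^(n-k), expanded term by term
# (coefficients kept in ascending powers of w)
Lw = np.zeros(n + 1)
for k, ak in enumerate(a):
    term = np.array([1.0])
    for _ in range(k):
        term = np.polynomial.polynomial.polymul(term, [1.0, 1.0])   # (1 + w)
    for _ in range(n - k):
        term = np.polynomial.polynomial.polymul(term, [1.0, -1.0])  # (1 - w)
    Lw += ak * term              # accumulates a~_0 ... a~_n

# Discrete stability of L(z) <=> all roots of L~(w) in the left half plane
roots_w = np.polynomial.polynomial.polyroots(Lw)
assert all(r.real < 0 for r in roots_w)
```

Here L̃(w) = 1.2w² + 2.2w + 0.6, whose roots −1/3 and −1.5 are exactly the images w = (z − 1)/(z + 1) of the roots 0.5 and −0.2 of L(z).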

The w transformation can also be applied to a transfer function H(z),
getting the w image, denoted for simplicity also by H(w),
    H(w) = H(z)|z=(1+w)/(1−w) .                                     (12.2.33)
Because the stability limit of H(w) is the imaginary axis of the w plane,
some specific frequency methods for continuous time systems can be directly
applied, but with some differences in their interpretation.
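One such difference in interpretation is the frequency warping that follows from w = j·tg(φ/2) above: the point jν on the imaginary axis of the w plane corresponds to the true frequency ω of H(z) through ν = tg(ωT/2). A minimal sketch, using a hypothetical H(z) = 1/(z − 0.5) and T = 1 (neither taken from the text):

```python
import cmath
import math

T = 1.0

def H_z(z):
    # hypothetical discrete time transfer function, pole at z = 0.5
    return 1.0 / (z - 0.5)

def H_w(w):
    # H(w) = H(z) evaluated at z = (1 + w)/(1 - w), as in (12.2.33)
    return H_z((1 + w) / (1 - w))

omega = 0.8                    # a continuous time test frequency
nu = math.tan(omega * T / 2)   # warped frequency: j*nu lies on the w axis
# Reading H(w) on its imaginary axis reproduces the true frequency
# response of H(z) at z = e^{j*omega*T}, only at the warped abscissa nu.
assert abs(H_w(1j * nu) - H_z(cmath.exp(1j * omega * T))) < 1e-12
```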

References.

1. Bălan T., Matematici speciale, Reprografia Universității din Craiova, 1988.
2. Belea C., Teoria sistemelor automate, vol. I, Repr. Universității din Craiova,
1971.
3. Belea C., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
4. Belea C., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
5. Călin S., Regulatoare automate, Editura Didactică și Pedagogică, București,
1976.
6. Călin S., Belea C., Sisteme automate complexe, Editura Tehnică, București, 1981.
7. Călin S., Dumitrache I., ș.a., Reglarea numerică a proceselor tehnologice,
Editura Tehnică, București, 1984.
8. Călin S., Petrescu Gh., Tăbuș I., Sisteme automate numerice, Editura Științifică
și Enciclopedică, București, 1984.
9. Csaki F., Modern control theories, Akad. Kiado, Budapest, 1972.
10. Director S., Rohrer R., Introduction to systems theory, McGraw-Hill, 1972.
11. Houpis C., Lamont G., Digital control systems, McGraw-Hill, 1992.
12. Dumitrache I., Automatizări electronice, Editura Didactică și Pedagogică,
București, 1997.
13. Ionescu V., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
14. Iserman R., Digital Control Systems, Springer Verlag, 1981.
15. Kailath T., Linear systems, Prentice Hall, 1987.
16. Kuo B. C., Sisteme automate cu eșantionare, Editura Tehnică, București, 1967.
17. Kuo B. C., Automatic control systems, Prentice Hall, 1991.
18. Marin C., Teoria sistemelor și reglare automată - Îndrumar de proiectare,
Reprografia Universității din Craiova, 1981.
19. Marin C., Petre E., Popescu D., Ionete C., Selișteanu D., Teoria sistemelor -
Probleme, Editura SITECH, Craiova, 1997.
20. Ogata K., Discrete time control systems, Prentice Hall, 1987.
21. Răsvan V., Teoria stabilității, Editura Științifică și Enciclopedică, București,
1987.
22. Stănășilă O., Analiză matematică, Ed. Didactică și Pedagogică, București, 1981.
23. Șabac I., Matematici speciale, Editura Didactică și Pedagogică, București, 1981.
24. Vinatoru M., Sisteme automate, Editura SPICON, 1997.
25. Voicu M., Tehnici de analiză a stabilității sistemelor automate, Editura Tehnică,
București, 1986.
26. Wiberg D., State Space and Linear Systems, McGraw-Hill, 1971.
27. Williamson D., Digital control and implementation, Prentice Hall, 1991.
28. * * * MATLAB - User's Guide.
28. * * * MATLAB - User's Guide.
