Constantin MARIN
LECTURES ON
SYSTEM THEORY
2008
PREFACE.
In recent years systems theory has assumed an increasingly important role
in the development and implementation of methods and algorithms not only for
technical purposes but also for a broader range of economical, biological and
social fields.
The common key to this success is the notion of system and the system-oriented
thinking of those involved in such applications.
This is a student textbook mainly dedicated to instilling such a form of
thinking in future graduates, so that they achieve a satisfactory minimal
understanding of the fundamentals of systems theory.
The material presented here has been developed from lectures given to
second-year students of the Computer Science Specialisation at the
University of Craiova.
Knowledge of algebra, differential equations, integral calculus and complex
variable functions constitutes the prerequisites for the book. The illustrative
examples included in the text are limited to electrical circuits, to match the
electrical engineering background of second-year students.
The book is written with students in mind, trying to offer a
coherent development of the subjects with many detailed explanations. The
study of the book should be complemented by the practical exercises published
in our problem book [19].
It is hoped that the book will be found useful by other students as well as
by industrial engineers who are concerned with control systems.
CONTENTS
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.
1.1. Introduction 1
1.2. Abstract Systems; Oriented Systems; Examples. 2
Example 1.2.1. DC Electrical Motor. 3
Example 1.2.2. Simple Electrical Circuit. 5
Example 1.2.3. Simple Mechanical System. 8
Example 1.2.4. Forms of the Common Abstract Base. 9
1.3. Inputs; Outputs; Input-Output Relations. 11
1.3.1. Inputs; Outputs. 11
1.3.2. Input-Output Relations. 12
Example 1.3.1. Double RC Electrical Circuit. 13
Example 1.3.2. Manufacturing Point as a Discrete Time System. 17
Example 1.3.3. RS-Memory Relay as a Logic System. 18
Example 1.3.4. Black-Box Toy as a Two-State Dynamical System. 20
1.4. System State Concept; Dynamical Systems. 22
1.4.1. General aspects. 22
Example 1.4.1. Pure Time Delay Element. 24
1.4.2. State Variable Definition. 26
Example 1.4.2. Properties of the iiss relation. 28
1.4.3. Trajectories in State Space. 29
Example 1.4.3. State Trajectories of a Second Order System. 30
1.5. Examples of Dynamical Systems. 33
1.5.1. Differential Systems with Lumped Parameters. 33
1.5.2. Time Delay Systems (Dead-Time Systems). 35
Example 1.5.2.1. Time Delay Electronic Device. 35
1.5.3. DiscreteTime Systems. 36
1.5.4. Other Types of Systems. 36
1.6. General Properties of Dynamical Systems. 37
1.6.1. Equivalence Property. 37
Example 1.6.1. Electrical RLC Circuit. 38
1.6.2. Decomposition Property. 40
1.6.3. Linearity Property. 40
Example 1.6.2. Example of Nonlinear System. 41
1.6.4. Time Invariance Property. 41
1.6.5. Controllability Property. 41
1.6.6. Observability Property. 41
1.6.7. Stability Property. 41
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).
2.1. Input-Output Description of SISO LTI. 42
Example 2.1.1. Proper System Described by Differential Equation. 46
2.2. State Space Description of SISO LTI. 49
2.3. Input-Output Description of MIMO LTI. 51
2.4. Response of Linear Time Invariant Systems. 54
2.4.1. Expression of the State Vector and Output Vector in s-domain. 54
2.4.2. Time Response of LTI from Zero Time Moment. 55
2.4.3. Properties of Transition Matrix. 56
2.4.4. Transition Matrix Evaluation. 57
2.4.5. Time Response of LTI from Nonzero Time Moment. 58
3. SYSTEM CONNECTIONS.
3.1. Connection Problem Statement. 60
3.1.1. Continuous Time Nonlinear System (CNS). 60
3.1.2. Linear Time Invariant Continuous System (LTIC). 60
3.1.3. Discrete Time Nonlinear System (DNS). 60
3.1.4. Linear Time Invariant Discrete System (LTID). 60
3.2. Serial Connection. 64
3.2.1. Serial Connection of two Subsystems. 64
3.2.2. Serial Connection of two Continuous Time Nonlinear Systems (CNS). 65
3.2.3. Serial Connection of two LTIC. Complete Representation. 66
3.2.4. Serial Connection of two LTIC. InputOutput Representation. 66
3.2.5. The controllability and observability of the serial connection. 67
3.2.5.1. State Diagrams Representation. 68
3.2.5.2. Controllability and Observability of Serial Connection. 71
3.2.5.3. Observability Property Underlined as the Possibility to Determine
the Initial State if the Output and the Input are Known. 73
3.2.5.4. Time Domain Free Response Interpretation for
an Unobservable System. 75
3.2.6. Systems Stabilisation by Serial Connection. 76
3.2.7. Steady State Serial Connection of Two Systems. 80
3.2.8. Serial Connection of Several Subsystems. 81
3.3. Parallel Connection. 82
3.4. Feedback Connection. 83
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.
4.1. Principle Diagrams and Block Diagrams. 84
4.1.1. Principle Diagrams. 84
4.1.2. Block Diagrams. 84
Example 4.1.1. Block Diagram of an Algebraical Relation. 85
Example 4.1.2. Variable's Directions in Principle Diagrams and
Block Diagrams. 87
Example 4.1.3. Block Diagram of an Integrator. 89
4.1.3. State Diagrams Represented by Block Diagrams. 89
4.2. Systems Reduction Using Block Diagrams. 92
4.2.1. Systems Reduction Problem. 92
4.2.2. Analytical Reduction. 92
4.2.3. Systems Reduction Through Block Diagrams Transformations. 93
4.2.3.1. Elementary Transformations on Block Diagrams. 93
Example 4.2.1. Representations of a Multi-Input Summing Element. 96
4.2.3.2. Transformations of a Block Diagram Area by Analytical
Equivalence. 96
4.2.3.3. Algorithm for the Reduction of Complicated Block Diagrams. 96
Example 4.2.2. Reduction of a Multivariable System. 98
4.3 Signal Flow Graphs Method (SFG). 106
4.3.1. Signal Flow Graphs Fundamentals. 106
4.3.2. Signal Flow Graphs Algebra. 107
Example 4.3.1. SFGs of one Algebraic Equation. 110
Example 4.3.2. SFG of two Algebraic Equations. 111
4.3.3. Construction of Signal Flow Graphs. 113
4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic
Equations. 113
Example 4.3.3. SFG of three Algebraic Equations. 114
4.3.3.2. Construction of SFG Starting from a Block Diagram. 115
Example 4.3.4. SFG of a Multivariable System. 115
4.4. Systems Reduction Using Signal Flow Graphs. 116
4.4.1. SFG Reduction by Elementary Transformations. 117
4.4.1.1. Elimination of a Selfloop. 117
4.4.1.2. Elimination of a Node. 118
4.4.1.3. Algorithm for SFG Reduction by Elementary Transformations. 120
4.4.2. SFG Reduction by Mason's General Formula. 121
Example 4.4.1. Reduction by Mason's Formula of a Multivariable System. 123
5. SYSTEMS REALISATION BY STATE EQUATIONS.
5.1. Problem Statement. 125
5.1.1. Controllability Criterion. 126
5.1.2. Observability Criterion. 126
5.2. First Type ID Canonical Form. 127
Example 5.2.1. First Type ID Canonical Form of a Second Order
System. 130
5.3. Second Type DI Canonical Form. 132
5.4. Jordan Canonical Form. 134
5.5 State Equations Realisation Starting from the Block Diagram 137
6. FREQUENCY DOMAIN SYSTEMS ANALYSIS.
6.1. Experimental Frequency Characteristics. 139
6.2. Relations Between Experimental Frequency Characteristics and
Transfer Function Attributes. 142
6.3. Logarithmic Frequency Characteristics. 145
6.3.1. Definition of Logarithmic Characteristics. 145
6.3.2. Asymptotic Approximations of Frequency Characteristic. 146
6.3.2.1. Asymptotic Approximations of Magnitude Frequency
Characteristic for a First Degree Complex Variable Polynomial. 146
6.3.2.2. Asymptotic Approximations of Phase Frequency Characteristic
for a First Degree Complex Variable Polynomial. 148
6.4. Elementary Frequency Characteristics. 150
6.4.1. Proportional Element. 150
6.4.2. Integral Type Element. 151
6.4.3. First Degree Polynomial Element. 152
6.4.4. Second Degree Polynomial Element with Complex Roots. 153
6.4.5. Aperiodic Element. Transfer Function with one Real Pole. 158
6.4.6. Oscillatory element. Transfer Function with two Complex Poles. 159
6.5. Frequency Characteristics for Series Connection of Systems. 162
6.5.1. General Aspects. 162
Example 6.5.1.1. Types of Factorisations. 164
6.5.2. Bode Diagrams Construction Procedures. 165
6.5.2.1. Bode Diagram Construction by Components. 165
Example 6.5.2.1. Examples of Bode Diagram Construction by
Components. 165
6.5.2.2. Direct Bode Diagram Construction. 168
7. SYSTEMS STABILITY.
7.1. Problem Statement. 170
7.2. Algebraical Stability Criteria. 171
7.2.1. Necessary Condition for Stability. 171
7.2.2. Fundamental Stability Criterion. 171
7.2.3. Hurwitz Stability Criterion. 171
7.2.4. Routh Stability Criterion. 173
7.2.4.1. Routh Table. 173
7.2.4.2. Special Cases in Routh Table. 174
Example 7.2.1. Stability Analysis of a Feedback System. 176
7.3. Frequency Stability Criteria. 177
7.3.1. Nyquist Stability Criterion. 177
7.3.2. Frequency Quality Indicators. 178
7.3.3. Frequency Characteristics of Time Delay Systems. 180
8. DISCRETE TIME SYSTEMS.
8.1. Z-Transformation. 181
8.1.1. Direct Z-Transformation. 181
8.1.1.1. Fundamental Formula. 181
8.1.1.2. Formula by Residues. 182
8.1.2. Inverse Z-Transformation. 183
8.1.2.1. Fundamental Formula. 183
8.1.2.2. Partial Fraction Expansion Method. 185
8.1.2.3. Power Series Method. 185
8.1.3. Theorems of the Z-Transformation. 186
8.1.3.1. Linearity Theorem. 186
8.1.3.2. Real Time Delay Theorem. 186
8.1.3.3. Real Time Shifting in Advance Theorem. 186
8.1.3.4. Initial Value Theorem. 186
8.1.3.5. Final Value Theorem. 186
8.1.3.6. Complex Shifting Theorem. 187
8.1.3.8. Partial Derivative Theorem. 188
8.2. Pure Discrete Time Systems (DTS). 190
8.2.1. Introduction; Example. 190
Example 8.2.1. First Order DTS Implementation. 190
8.2.2. Input-Output Description of Pure Discrete Time Systems. 193
Example 8.2.2.1. Improper First Order DTS. 193
Example 8.2.2.2. Proper Second Order DTS. 193
8.2.3. State Space Description of Discrete Time Systems. 195
9. SAMPLED DATA SYSTEMS.
9.1. Computer Controlled Systems. 197
9.2. Mathematical Model of the Sampling Process. 204
9.2.1. TimeDomain Description of the Sampling Process. 204
9.2.2. Complex Domain Description of the Sampling Process. 205
9.2.3. Shannon Sampling Theorem. 207
9.3. Sampled Data Systems Modelling. 209
9.3.1. Continuous Time Systems Response to Sampled Input Signals. 209
9.3.2. Sampler - Zero Order Holder (SH). 211
9.3.3. Continuous Time System Connected to a SH. 212
9.3.4. Mathematical Model of a Computer Controlled System. 213
9.3.5. Complex Domain Description of Sampled Data Systems. 215
10. FREQUENCY CHARACTERISTICS FOR DISCRETE TIME SYSTEMS
10.1. Frequency Characteristics Definition. 217
10.2. Relations Between Frequency Characteristics and
Attributes of ZTransfer Functions. 218
10.2.1. Frequency Characteristics of LTI Discrete Time Systems. 218
10.2.2. Frequency Characteristics of First Order Sliding Average Filter. 220
10.2.3. Frequency Characteristics of mOrder Sliding Weighted Filter. 221
10.3. Discrete Fourier Transform (DFT). 222
11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS.
11.1. Introduction. 224
11.2. Direct Methods of Discretization. 225
11.2.1. Approximation of the Derivation Operator.
Example 11.2.1. LTI Discrete Model Obtained by Direct Methods. 225
11.2.2. Approximation of the Integral Operator. 225
11.2.3. Tustin's Substitution. 226
11.2.4. Other Direct Methods of Discretization. 227
11.3. LTI Systems Discretization Using State Space Equations. 228
11.3.1. Analytical Relations. 228
11.3.2. Numerical Methods for Discretized Matrices Evaluation. 230
12. DISCRETE TIME SYSTEMS STABILITY.
12.1. Stability Problem Statement. 231
Example 12.1.1. Study of the Internal and External Stability. 232
12.2. Stability Criteria for Discrete Time Systems. 234
12.2.1. Necessary Stability Conditions. 234
12.2.2. Schur-Cohn Stability Criterion. 234
12.2.3. Jury Stability Criterion. 234
12.2.4. Periodicity Bands and Mappings Between Complex Planes. 235
12.2.5. Discrete Equivalent Routh Criterion in the "w" plane. 238
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.
1.1. Introduction
Systems theory, or systems science, is a well-defined discipline whose
goal is the study of the behaviour of systems of different types and forms within a
unitary framework of notions, built on a common abstract base.
In such a context, systems theory is a set of general methods,
techniques and specialised algorithms for solving problems such as analysis,
synthesis, control, identification and optimisation, irrespective of whether the
system to which they are applied is an electrical, mechanical, chemical,
economic, social, artistic or military one.
In systems theory it is the mathematical form of a system which is
important and not its physical aspect or its application field.
There are several definitions for the notion of system, each of them
aspiring to be as general as possible. In common usage the word "system" is a
rather nebulous one. We can mention Webster's definition:
"A system is a set of physical objects or abstract entities united
(connected or related) by different forms of interactions or interdependencies so
as to form an entirety or a whole."
Numerous examples of systems according to this definition can be given
intuitively: our planetary system, the car steering system, a system of
algebraic or differential equations, the economic system of a country.
A particular category of systems is expressed by the so-called "physical
systems", whose definition comes from thermodynamics:
"A system is a part (a fragment) of the universe for which an inside and
an outside can be delimited from a behavioural point of view."
Later on, several examples will be given according to this definition.
Systems theory is the foundation of control science, which deals with
all the conscious activities performed inside a system to accomplish a goal under
the influence of external systems. In control science three main
branches can be pointed out: automatic control, cybernetics and informatics.
Automatic control, as a branch of control science, deals
with automatic control systems.
An automatic control system, or just a control system, is a set of objects
interconnected in such a structure as to be able to elaborate command and
control decisions based on information obtained with its own resources.
There are also many other meanings for the notion of control system.
Automation means all the activities required to put automatic control
systems into practice.
1.2. Abstract Systems; Oriented Systems; Examples.
Any physical system (or physical object), as an element of the real world,
is a part (a piece) of a more general context. It is not isolated: its
interactions with the outside are performed through exchanges of information,
energy and material. These exchanges alter its environment and cause
modifications in time and space of some of its specific (characteristic) variables.
Such a representation is realised in Fig. 1.2.1.

[Figure no. 1.2.1: a physical system exchanging information, energy and material between INSIDE and OUTSIDE, with inputs u1, u2, ..., up and outputs y1, y2, ..., yr. Figure no. 1.2.2: block-diagram representation of an oriented system: a rectangle containing the system descriptor, with an incoming arrow u and an outgoing arrow y.]
The interactions of the physical system (object) with the outside are realised
through signals, the so-called terminal variables. In systems theory, the
mathematical relations between the terminal variables are what is important.
These mathematical relations define the mathematical model of the physical system.
By an abstract system one understands the mathematical model of a
physical system, or the result of a synthesis procedure.
A causal system (feasible or realisable system) is an abstract
system for which a physical model can be obtained in such a way that its
mathematical model is precisely that abstract system.
An oriented system is a system (physical or abstract) whose terminal
variables are split, based on causality principles, into two categories:
input variables and output variables.
Input variables (or just "inputs") represent the causes by which the outside
affects (informs) the inside.
Output variables (or just "outputs") represent the effects of the external
and internal causes by which the inside affects (influences or informs) the
outside. The output variables do not affect the input variables. This is the
directional property of a system: the outputs are influenced by the inputs, but not
vice versa.
When an abstract system is defined starting from a physical system
(object), first of all the outputs are defined (selected). The outputs represent
those attributes (qualities) of the physical object in which an interest exists,
taking into consideration the goal for which the abstract system is defined.
The inputs of this abstract system are all the external causes that affect the
chosen outputs.
In practice, only those inputs that have a significant influence (within
a defined precision level) on the chosen outputs are kept.
By defining the inputs and the outputs we define the border which
delimits the inside from the outside from a behavioural point of view.
Usually an input is denoted by u: a scalar if there is only one input, or a
column vector u = [u1 u2 ... up]^T if there are p input variables.
The output is usually denoted by y if there is only one output, or by a
column vector y = [y1 y2 ... yr]^T if there are r output variables.
Scalars and vectors are written with the same fonts; the difference between
them, if necessary, is mentioned explicitly.
An oriented system can be graphically represented in a block diagram, as
depicted in Fig. 1.2.2, by a rectangle which usually contains a system
descriptor: a description or the name of the system, or a symbol of its
mathematical model, for its identification. Inputs are represented by
arrows directed towards the rectangle and outputs by arrows directed away from
the rectangle.
Generally there are three main graphical representations of systems:
1. The physical diagram, or construction diagram. This can be a picture of a
physical object or a diagram illustrating how the object is built or has to be built.
2. The principle diagram, or functional diagram, is a graphical representation of a
physical system using the norms and symbols specific to the field to which the
physical system belongs, represented in such a way that the functioning
(behaviour) of that system can be understood.
3. The block diagram is a graphical representation of the mathematical relations
between the variables by which the behaviour of the system is described. Mainly,
the block diagram illustrates the abstract system. The representation is performed
using rectangles or flow graphs.
Example 1.2.1. DC Electrical Motor.
Let us consider a DC electrical motor with independent excitation.
As a physical object it has several attributes: colour, weight, rotor voltage
(armature voltage), excitation voltage (field voltage), cost price, etc. It can be
represented by a picture as in Fig. 1.2.3. This is a physical diagram: any skilled
person understands that it is a DC motor, and can identify it, but nothing else.
[Figure no. 1.2.3: physical diagram of the DC motor (nameplate "Made in Craiova, EP Tip MCC3"). Figure no. 1.2.4: oriented system S1 with inputs Ur, Ue, Cr, θext and output ω. Figure no. 1.2.5: oriented system S2 with inputs Ur, Ue, Cr, θext and outputs Ir, θint.]
We can look at the motor as an oriented object from the systems theory
point of view. In this way we shall define the inside and the outside.
1. Suppose we are interested in the angular speed ω of the motor axle. This
will be the output of the oriented system we are now defining. The inputs to this
oriented system are all the causes that affect the selected output ω, within an
accepted level of precision. To identify them, knowledge of electrical engineering is
necessary. The inputs are: the rotor voltage Ur, the excitation voltage Ue, the
resistant torque Cr and the external temperature θext. The oriented system related
to the DC motor having the angular speed ω as output, within the agreed level of
precision, is depicted in Fig. 1.2.4. The mathematical relations between ω and Ur,
Ue, Cr, θext are denoted by S1, which expresses the abstract system. This abstract
system is the mathematical model of the physical oriented object
(or system) as defined above.
2. Suppose now we are interested in two attributes of the above DC motor:
the rotor current Ir and the internal temperature θint. These two variables are
selected as outputs. The inputs are the same: Ur, Ue, Cr, θext. The resulting
oriented system is depicted in Fig. 1.2.5. The abstract system for this case is
denoted by S2.
Anyone can understand that S1 ≠ S2, even though they are related to the same
physical object, so a conclusion can be drawn: different abstract systems can be
attached to one physical object (system), depending on what we are looking for.
Example 1.2.2. Simple Electrical Circuit.
Let us consider an electrical circuit represented by the principle diagram
depicted in Fig. 1.2.6.
[Figure no. 1.2.6: principle diagram of the circuit: a controlled voltage generator with knob position α (scale 0-8 volts) producing the voltage u1, a resistor R carrying the current i = iC, a capacitor C with voltage uC = x, and a voltage amplifier of input impedance Zi (input current i2 = 0) whose output is u2. Figure no. 1.2.7: block diagram of the oriented system S1, input u = α, output y = u2, described by the state equations T·ẋ + x = K1·u, y = K2·x.]
From the above principle diagram we can understand how this physical
object behaves.
Suppose we are interested in the amplifier output voltage u2 only, so it is
selected as the output; the notation y = u2 is used and marked on the arrow
outgoing from the rectangle, as in Fig. 1.2.7. Under common usage of this
circuit we accept that the unique cause affecting the voltage u2 is the knob
position α of the voltage generator, so the input is α, denoted by u = α and
marked on the arrow incoming to the rectangle, as depicted in Fig. 1.2.7.
The abstract system attached to this oriented physical object, denoted by
S1, is expressed by the mathematical relations between y = u2 and u = α. With
elementary knowledge of electrical engineering one can write:

x = uC ;  iC = C·ẋ ;  u1 = R·i + x ;  u1 = K1·α = K1·u ;  i = iC ;  y = K2·x ;  T = RC  ⇒

(1.2.1)  S1:  T·ẋ + x = K1·u ,  y = K2·x
In (1.2.1) the abstract system is expressed by so-called "state equations".
Here the variable x at the time moment t, x(t), represents the state of the system
at that time moment. The first equation is the state equation proper and the
second is called the "output relation".
The same mathematical model S1 can be expressed by a single relation: a
differential equation in y and u,

(1.2.2)  S1:  T·ẏ + y = K1·K2·u

which can be presented as an input-output relation

(1.2.3)  S1:  R(u, y) = 0 ,  where  R(u, y) = T·ẏ + y − K1·K2·u
The three above forms of abstract system representation are called
implicit forms, or representations by equations. The time evolutions of the system
variables are solutions of these equations starting at a time moment t0, with given
initial conditions, for t ≥ t0.
The time evolution of the capacitor voltage x(t) can be obtained by integrating
(1.2.1) for t ≥ t0 with x(t0) = x0, or just from the system analysis, as

(1.2.4)  x(t) = e^(−(t−t0)/T)·x0 + (K1/T)·∫[t0,t] e^(−(t−τ)/T)·u(τ) dτ
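Relation (1.2.4) can be evaluated numerically; the sketch below uses illustrative parameter values (assumptions chosen for the example, not taken from the text) with a constant input, and checks the closed form against a direct integration of the differential equation (1.2.1):

```python
import math

# Illustrative parameters; these values are assumptions for the sketch,
# not taken from the text.
T, K1 = 0.5, 2.0        # time constant and gain of (1.2.1)
x0, t0 = 1.0, 0.0       # initial state x(t0) = x0
U = 3.0                 # constant input u(t) = U for t >= t0

def x_closed_form(t):
    """Relation (1.2.4) with the integral done analytically for u = U."""
    e = math.exp(-(t - t0) / T)
    return e * x0 + K1 * U * (1.0 - e)

def x_euler(t, n=200_000):
    """Forward-Euler integration of T*x' + x = K1*u from (t0, x0)."""
    dt = (t - t0) / n
    x = x0
    for _ in range(n):
        x += dt * (K1 * U - x) / T
    return x

print(x_closed_form(1.2), x_euler(1.2))  # the two values agree closely
```

The agreement of the two values illustrates that (1.2.4) is indeed the solution of the state equation.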
We can observe that the value of x at a time moment t, denoted x(t), depends on
four entities:
1. The current (present) time variable t at which the value of x is expressed.
2. The initial time moment t0 from which the evolution is considered.
3. An initial value x0, which is just the value of x(t) for t = t0.
This is called the initial state.
4. All the input values u(τ) on the time interval [t0, t], called the observation
interval, expressed by the so-called input segment u[t0,t], where

(1.2.5)  u[t0,t] = {(τ, u(τ)) / ∀τ ∈ [t0, t]}

Putting into evidence these four entities, any relation like (1.2.4) is written in
a concentrated form as

(1.2.6)  x(t) = ϕ(t, t0, x0, u[t0,t])

called the input-initial state-state relation, in short the iiss relation.
Also, by substituting (1.2.4) into the output relation from (1.2.1), we get the
output time evolution expression,

(1.2.7)  y(t) = K2·e^(−(t−t0)/T)·x0 + (K1·K2/T)·∫[t0,t] e^(−(t−τ)/T)·u(τ) dτ

which also depends on the four above-mentioned entities and can be considered
in a concentrated form as

(1.2.8)  y(t) = η(t, t0, x0, u[t0,t])

This is called the input-initial state-output relation, in short the iiso relation.
The time variation of the input is expressed by a function

(1.2.9)  u: T → U ,  t → u(t)

so the input segment is the graph of the restriction u|[t0,t] of the function u to the
observation interval [t0, t]. In our case the set U of the input values can be, for
example, the interval [0, 10] volts.
Someone who manages the physical object represented by the principle
diagram knows that there are some restrictions on the time evolution shape of the
function u. For example, only piecewise continuous functions, or only continuous
and differentiable functions, could be admitted.
We shall denote by Ω the set of admissible inputs,

(1.2.10)  Ω = {u / u: T → U, admitted to be applied to the system}

Our system S1 is well defined by specifying three elements: the set Ω and the
two relations ϕ and η,

(1.2.11)  S1 = {Ω, ϕ, η}

This is a so-called explicit form of abstract system representation, or the
representation by solutions.
An explicit form can be presented in the complex domain by applying the Laplace
transform to the differential equation, if it is linear with time-constant
coefficients, as in (1.2.2):
L{T·ẏ(t) + y(t)} = L{K1·K2·u(t)}  ⇔  T[sY(s) − y(0)] + Y(s) = K1·K2·U(s)  ⇒

(1.2.12)  Y(s) = [K1·K2/(Ts + 1)]·U(s) + [T/(Ts + 1)]·y(0)
We can see that the differential equation has been transformed into an algebraic
equation, simpler to manipulate. But, as in any Laplace transform, the initial
values are stipulated for t = 0, not t = t0 as we considered. This can easily be
overcome by considering that the time variable t in (1.2.12) stands for t − t0 from
(1.2.2). With this in mind, the inverse Laplace transform of (1.2.12) gives us the
relation (1.2.7), where y(0) = K2·x(0) = K2·x0 and t → t − t0.
From (1.2.12) we can denote

(1.2.13)  H(s) = K1·K2/(Ts + 1)

which is the so-called transfer function of the system. The transfer function
can generally be defined as the ratio between the Laplace transform of the output,
Y(s), and the Laplace transform of the input, U(s), under zero initial conditions,

(1.2.14)  H(s) = Y(s)/U(s) , with y(0) = 0

The system S1 can also be represented by the transfer function H(s).
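As a brief illustration (the numeric values below are assumptions for the sketch), the transfer function H(s) = K1·K2/(Ts + 1) gives, for a unit-step input under zero initial conditions, the response y(t) = K1·K2·(1 − e^(−t/T)), which settles at the DC gain H(0) = K1·K2:

```python
import math

# Hypothetical parameter values for the first-order transfer function
# H(s) = K1*K2 / (T*s + 1) discussed above.
T, K1, K2 = 0.5, 2.0, 1.5

def step_response(t):
    """y(t) = L^-1{ H(s)/s } = K1*K2*(1 - exp(-t/T)), zero initial conditions."""
    return K1 * K2 * (1.0 - math.exp(-t / T))

# After many time constants the output settles at the DC gain H(0) = K1*K2
print(step_response(10 * T))  # close to K1*K2 = 3.0
```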
Sometimes an explicit form is obtained using the so-called integral-
differential operators. Denoting by D = d/dt the differential operator, the
differential equation (1.2.2) is expressed as T·D·y(t) + y(t) = K1·K2·u(t),
from which one formally obtains

(1.2.15)  y(t) = [K1·K2/(TD + 1)]·u(t)  ⇔  y(t) = S(D)·u(t) ,  where  S(D) = K1·K2/(TD + 1)

so the system S1 can be represented by the integral-differential operator S(D).
Now suppose that, in another context, we are interested in the current i of
the physical object represented by the principle diagram from Fig. 1.2.6. The
output is now y(t) = i(t) and the input, considering the same experimental
conditions, is again u(t) = α(t). This oriented system is represented in Fig. 1.2.8.

[Figure no. 1.2.8: block diagram of the oriented system S2, input u = α, output y = i, described by the state equations T·ẋ + x = K1·u, y = −(1/R)·x + (K1/R)·u.]
The mathematical model of this oriented system is now the abstract system
S2 represented, for example, by the state equations as in Fig. 1.2.8.
Of course, any form of representation discussed above for the system S1 can
be used. Because S1 ≠ S2 we can again draw the conclusion:
"For the same physical object it is possible to define different abstract
systems, depending on the goal".
Example 1.2.3. Simple Mechanical System.
Let us consider a mechanical system whose principle diagram is
represented in Fig. 1.2.9.
[Figure no. 1.2.9: principle diagram of the mechanical system: a force f applied at point A (shift x) of the main arm, a spring of constant KP, a damping system of constant KV, and point B (shift y) on the secondary arm. Figure no. 1.2.10: block diagram of the oriented system S1, input u = f, output y, described by the state equations T·ẋ + x = K1·u, y = K2·x.]
If a force f is applied at the point A of the main arm, whose shift with
respect to a reference position is expressed by the variable x, then the point B of
the secondary arm has a shift expressed by the variable y. Against the movement
determined by f, the spring develops a resistant force proportional to x, by a
factor KP, and the damper one proportional to the derivative of x, by a factor KV.
Suppose we are interested in the shift of point B only, so the variable y is
selected to be the output of the oriented system being defined. Under
common experimental conditions, the unique cause changing y is the force f,
which is the input, denoted u = f.
The oriented system with the above-defined input and output is represented in
Fig. 1.2.10, where S1 denotes a descriptor of the abstract system. Writing
the force equilibrium equations we get

KP·x + KV·ẋ = f ;  y = K2·x

Dividing the first equation by KP and denoting T = KV/KP, K1 = 1/KP, u = f,
we get the mathematical model as state equations,

(1.2.16)  S1:  T·ẋ + x = K1·u ,  y = K2·x
This set of equations expresses the abstract object of the mechanical
system. Formally, S1 from (1.2.16) is identical to S1 from (1.2.1) of the previous
example. Even though the physical objects are different, they are characterised (for
the outputs chosen above) by the same abstract system. This abstract system is a
common base for different physical systems. Any development made for
the electrical system, relations (1.2.2)-(1.2.15), is available for the mechanical
system too. These constitute the unitary framework of notions mentioned in
the definition of systems theory.
By managing the abstract system with specific methods, expressed in one of
the forms (1.2.2)-(1.2.15), some results are obtained. These results can equally
be applied both to the electrical system and to the mechanical system. Of course, in
the first case, for example, x is the capacitor voltage, while in the second case the
meaning of x is the shift of point A.
Such a study is called a model based study. We can say that the mechanical
system is a physical model for the electrical system and vice versa, because they are
related to the same abstract system.
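The common abstract base can be sketched in a few lines of code (all numeric values are illustrative assumptions): both physical objects reduce to the same pair (T, K1) and therefore exhibit the same state evolution for the same input:

```python
import math

# The RC circuit (1.2.1) and the spring-damper system (1.2.16) share the
# abstract model T*x' + x = K1*u. The numeric values are illustrative.
def abstract_params_electrical(R, C, K1_gen):
    return R * C, K1_gen          # T = R*C, generator gain K1

def abstract_params_mechanical(KV, KP):
    return KV / KP, 1.0 / KP      # T = KV/KP, K1 = 1/KP

T_e, K1_e = abstract_params_electrical(R=2.0, C=0.25, K1_gen=0.5)  # (0.5, 0.5)
T_m, K1_m = abstract_params_mechanical(KV=1.0, KP=2.0)             # (0.5, 0.5)

def x_step(T, K1, t, U=1.0, x0=0.0):
    """State evolution (1.2.4) for a constant input u = U, with t0 = 0."""
    return math.exp(-t / T) * x0 + K1 * U * (1.0 - math.exp(-t / T))

# Same abstract parameters => identical behaviour for the same input
print(x_step(T_e, K1_e, 1.0) == x_step(T_m, K1_m, 1.0))  # True
```

This is precisely the sense in which the mechanical system is a physical model for the electrical one: any result computed on the abstract system applies to both.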
Example 1.2.4. Forms of the Common Abstract Base.
The goal of this example is to manipulate the abstract system (1.2.1), or
equivalently (1.2.16), from a mathematical point of view, using one element of the
common abstract base: the Laplace transform, in short LT. Finally the solutions
(1.2.4) and (1.2.7) will be obtained.
Now we write (1.2.1) putting into evidence the time variable t, as

(1.2.17)  T·ẋ(t) + x(t) = K1·u(t) ,  t ≥ t0 ,  x(t0) = x0
(1.2.18)  y(t) = K2·x(t)
The main problem is to get the expression of x(t), because y(t) is then obtained
by a simple substitution. As we know, the one-sided Laplace transform always
uses t0 = 0 as the initial time, and we have to obtain (1.2.4) depending on an
arbitrary t0. It is admitted that the restrictions of all functions to t ≥ 0 are
original functions.
We shall denote by

X(s) = L{x(t)} ,  U(s) = L{u(t)}

the Laplace transforms of x(t) and u(t) respectively.
We recall that the Laplace transform of the derivative of a function, admitted to
be an original function, is

L{ẋ(t)} = s·X(s) − x(0+) ,  where  x(0+) = lim(t→0, t>0) x(t)

If x(t) is continuous at t = 0 we can simply write x(0) instead of x(0+). However,
it can be proved that the state of a differential system driven by bounded inputs
is always a continuous function.
So, applying the LT to (1.2.17) we obtain

T[s·X(s) − x(0)] + X(s) = K1·U(s)

(1.2.19)  X(s) = [K1/(Ts + 1)]·U(s) + [T/(Ts + 1)]·x(0)

which gives us the expression of the state in the complex domain, but with the
initial state at t = 0.
We recall now the convolution product theorem of the LT:
If F₁(s) = L{f₁(t)} and F₂(s) = L{f₂(t)}, then
F₁(s)F₂(s) = L{∫[0,t] f₁(t − τ)f₂(τ)dτ} = L{∫[0,t] f₁(τ)f₂(t − τ)dτ}
and in the inverse form,
L⁻¹{F₁(s)F₂(s)} = ∫[0,t] f₁(t − τ)f₂(τ)dτ = ∫[0,t] f₁(τ)f₂(t − τ)dτ (1.2.20)
The inverse LT of (1.2.19) is
x(t) = L⁻¹{[K₁/(Ts + 1)]U(s)} + L⁻¹{T/(Ts + 1)}x(0). (1.2.21)
We know from tables that
L⁻¹{T/(Ts + 1)} = e^(−t/T), for t ≥ 0 (1.2.22)
L⁻¹{K₁/(Ts + 1)} = (K₁/T)e^(−t/T), for t ≥ 0 (1.2.23)
Identifying now
F₁(s) = K₁/(Ts + 1) ⇔ f₁(t) = (K₁/T)e^(−t/T) ⇒ f₁(t − τ) = (K₁/T)e^(−(t−τ)/T)
and
F₂(s) = U(s) ⇔ f₂(t) = u(t) ⇒ f₂(τ) = u(τ)
and taking into consideration (1.2.20) applied to (1.2.21), after the substitution of (1.2.22) and (1.2.23), we have
x(t) = ∫[0,t] [(K₁/T)e^(−(t−τ)/T)]u(τ)dτ + e^(−t/T)x(0), ∀t ≥ 0
which is written as
x(t) = e^(−t/T)x(0) + (K₁/T)∫[0,t] e^(−(t−τ)/T)u(τ)dτ ≜ φ(t, 0, x(0), u_[0,t]), ∀t ≥ 0. (1.2.24)
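As a quick sanity check of (1.2.24), the sketch below compares the closed-form state (convolution evaluated by the trapezoidal rule) with a direct numerical integration of Tẋ + x = K₁u. The parameter values, the step input and the initial state are illustrative assumptions, not values from the text.

```python
import math

# First-order system T*x' + x = K1*u; illustrative parameter values.
T, K1 = 2.0, 3.0
x0 = 0.5
u = lambda t: 1.0                      # unit step input for t >= 0

def x_closed(t, n=20000):
    """State from (1.2.24): x(t) = e^(-t/T)x(0) + (K1/T) * integral of
    e^(-(t-tau)/T) u(tau) over [0, t], evaluated by the trapezoidal rule."""
    h = t / n
    s = sum((0.5 if i in (0, n) else 1.0) * math.exp(-(t - i * h) / T) * u(i * h)
            for i in range(n + 1))
    return math.exp(-t / T) * x0 + (K1 / T) * h * s

def x_euler(t, n=20000):
    """Direct forward-Euler integration of T*x' = -x + K1*u."""
    h, x = t / n, x0
    for i in range(n):
        x += h * (-x + K1 * u(i * h)) / T
    return x

print(x_closed(4.0), x_euler(4.0))     # both approximate the same value
```

For a step input the exact value is x(t) = e^(−t/T)x(0) + K₁(1 − e^(−t/T)), which both approximations reproduce.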
This is the state evolution starting at the initial time moment t = 0 from the initial state x(0), and it has the form of the input-initial state-state relation (iiss).
For t = t₀, from (1.2.24) we obtain,
x(t₀) = e^(−t₀/T)x(0) + (K₁/T)∫[0,t₀] e^(−(t₀−τ)/T)u(τ)dτ ≜ φ(t₀, 0, x(0), u_[0,t₀]) (1.2.25)
Substituting x(0) from (1.2.25),
x(0) = e^(t₀/T)x(t₀) − (K₁/T)∫[0,t₀] e^(τ/T)u(τ)dτ,
into (1.2.24) we obtain
x(t) = e^(−t/T)[e^(t₀/T)x(t₀) − (K₁/T)∫[0,t₀] e^(τ/T)u(τ)dτ] + (K₁/T)∫[0,t₀] e^(−(t−τ)/T)u(τ)dτ + (K₁/T)∫[t₀,t] e^(−(t−τ)/T)u(τ)dτ
x(t) = e^(−(t−t₀)/T)x(t₀) + (K₁/T)∫[t₀,t] e^(−(t−τ)/T)u(τ)dτ ≜ φ(t, t₀, x(t₀), u_[t₀,t]) (1.2.26)
which is just (1.2.4), taking into consideration that x(t₀) = x₀.
Now from (1.2.24), (1.2.25) and (1.2.26) we observe that
x(t) = φ(t, 0, x(0), u_[0,t]) ≡ φ(t, t₀, φ(t₀, 0, x(0), u_[0,t₀]), u_[t₀,t]) (1.2.27)
where the inner term φ(t₀, 0, x(0), u_[0,t₀]) is x(t₀). This is the so-called state transition property of the iiss relation.
According to this property, the state x(t) at any time moment t, as the result of the evolution from an initial state x(0) at the time moment t = 0 with an input u_[0,t], is the same as the state obtained in the evolution of the system starting at any intermediate time moment t₀ from an initial state x(t₀) with an input u_[t₀,t], if the intermediate state x(t₀) is the result of the evolution from the same initial state x(0) at the time t = 0 with the input u_[0,t₀]. It has to be pointed out that
u_[0,t] = u_[0,t₀] ∪ u_[t₀,t] (1.2.28)
Two conclusions can be drawn from this example:
1. Any intermediate state is an initial state for the future evolution.
2. An initial state x₀ at a time moment t₀ contains all the essential information from the past evolution needed to assure the future evolution, if the input is given starting from that time moment t₀.
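The state transition property (1.2.27) can be checked numerically. The sketch below evaluates the iiss relation (1.2.26) for the first-order system with a trapezoidal-rule convolution; the parameter values and the sinusoidal input are illustrative assumptions.

```python
import math

# State-transition property (1.2.27) for T*x' + x = K1*u; the parameter
# values and the sinusoidal input are illustrative assumptions.
T, K1 = 1.5, 2.0
u = lambda t: math.sin(t)

def phi(t, t0, xt0, n=4000):
    """iiss relation (1.2.26): x(t) = e^(-(t-t0)/T) x(t0) + (K1/T) * integral
    of e^(-(t-tau)/T) u(tau) over [t0, t] (trapezoidal rule)."""
    h = (t - t0) / n
    s = sum((0.5 if i in (0, n) else 1.0)
            * math.exp(-(t - (t0 + i * h)) / T) * u(t0 + i * h)
            for i in range(n + 1))
    return math.exp(-(t - t0) / T) * xt0 + (K1 / T) * h * s

x0, t0, t1 = 0.7, 1.0, 3.0
direct = phi(t1, 0.0, x0)                 # one-step evolution 0 -> t1
via_t0 = phi(t1, t0, phi(t0, 0.0, x0))    # evolution 0 -> t0, then t0 -> t1
print(direct, via_t0)                     # the two values coincide
```

Evolving directly from t = 0 to t₁ and evolving through the intermediate state at t₀ give the same result, as (1.2.27) states.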
To obtain the relation (1.2.7), we only have to substitute x = y/K₂ in (1.2.26), getting, if we denote x(t₀) = x₀,
y(t) = K₂e^(−(t−t₀)/T)x₀ + (K₁K₂/T)∫[t₀,t] e^(−(t−τ)/T)u(τ)dτ ≜ η(t, t₀, x₀, u_[t₀,t]) (1.2.29)
This is an input-initial state-output relation (iiso).
1.3. Inputs; Outputs; InputOutput Relations.
1.3.1. Inputs; Outputs.
The time variable is denoted by the letter t for the so-called continuous-time systems and by the letter k for the so-called discrete-time systems.
The time domain T, or observation domain, is the domain of the functions describing the time evolution of the variables. For continuous-time systems T ⊆ R and for discrete-time systems T ⊆ Z. Sometimes the letter t is utilised as time variable also for discrete-time systems, understanding that t ∈ Z.
The input variable is the function
u :T→ U; t→u(t), (1.3.1)
where U is the set of input values (or the set of all inputs).
Usually U ⊆ R^p if there are p inputs expressed by real numbers.
The set of admissible inputs Ω, is the set of functions u allowed to be
applied to an oriented system.
The input segment u_[t₀,t₁] on a time interval [t₀,t₁] ⊆ T, called observation interval, is the graph of the function u on this time interval:
u_[t₀,t₁] = {(t, u(t)), ∀t ∈ [t₀,t₁]} (1.3.2)
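For a discrete observation interval, the definitions (1.3.1) and (1.3.2) can be illustrated directly; the input law u(t) = 2t + 1 below is an arbitrary illustrative choice.

```python
# Input function (1.3.1) and input segment (1.3.2) on a discrete
# observation interval; the input law is an arbitrary illustrative choice.
def u(t):
    return 2 * t + 1

def segment(u, t0, t1):
    """Input segment u_[t0,t1] = {(t, u(t)), t in [t0, t1]}, here on integer times."""
    return [(t, u(t)) for t in range(t0, t1 + 1)]

print(segment(u, 2, 5))   # [(2, 5), (3, 7), (4, 9), (5, 11)]
```

The segment is the graph of u restricted to the observation interval, exactly as in (1.3.2).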
When we say that an input is applied to a system on a time interval [t₀, t₁], we have to understand that the input variable changes in time according to the given graph u_[t₀,t₁], that is, according to the input segment. Sometimes, for easier writing, we understand by u, depending on the context, one of the following:
u - a function as in (1.3.1);
u_[t₀,t₁] - a segment as in (1.3.2) on an understood observation interval;
u(t) - the law of correspondence of the function (1.3.1), or the value of this function at a specific time moment denoted t.
All these conventions will be used for all the other variables in this textbook.
The output variable is the function
y : T → Y; t → y(t), (1.3.3)
where Y is the set of output values (or the set of all outputs). Usually Y ⊆ R^r if there are r outputs expressed by real numbers.
We denote by Γ the set of possible outputs, that is, the set of all functions y that are expected to be obtained from a physical system if inputs belonging to Ω are applied.
The input-output pair. If an input u_[t₀,t₁] is applied to a physical system, the output time response is expressed by the output segment y_[t₀,t₁], where
y_[t₀,t₁] = {(t, y(t)), ∀t ∈ [t₀, t₁]} (1.3.4)
which means that to an input corresponds an output.
The pair of segments
[u_[t₀,t₁]; y_[t₀,t₁]] = (u; y) (1.3.5)
observed on a physical system is called an input-output pair.
It is possible that for the same input u_[t₀,t₁] another output segment y^a_[t₀,t₁] is obtained, which means that the pair [u_[t₀,t₁]; y^a_[t₀,t₁]] = (u; y^a) is also an input-output pair of that system, as depicted in Fig. 1.3.0.
Figure no. 1.3.0.
In the example of the electrical device from Ex. 1.2.2. or of the mechanical system from Ex. 1.2.3., the solution of the differential equation (1.2.2) for t ≥ t₀ and x(t₀) = x₀ is
y(t) = K₂e^(−(t−t₀)/T)x₀ + (K₁K₂/T)∫[t₀,t] e^(−(t−τ)/T)u(τ)dτ = η(t, t₀, x₀, u_[t₀,t])
For the same input u_[t₀,t₁], the output also depends on the value x₀ = x(t₀), which is the voltage across the capacitor terminals C in Ex. 1.2.2. or the arm position (point A) in Ex. 1.2.3. at the time moment t₀.
1.3.2. InputOutput Relations.
The totality of input-output pairs that describe the behaviour of a physical object is just the abstract system. Instead of a specific list of input time functions and their corresponding output time functions, the abstract system is usually characterised as the class of all time functions that obey a set of mathematical equations. This is in accordance with the scientific method of hypothesising an equation and then checking that the physical object behaves in a manner similar to that predicted by the equation /2/.
Practically, an abstract system is expressed by the so-called input-output relation, which can be a differential or difference equation, graph, table or functional diagram.
A relation implicitly expressed by R(u,y) = 0, or explicitly expressed by an operator S, y = S{u}, is an input-output relation for an oriented system if:
1. Any input-output pair observed on the system satisfies this relation.
2. Any pair (u,y) which satisfies this relation is an input-output pair of that oriented system.
We have to mention that by the operatorial notation y=S{u} or just y=Su,
we understand that the operator S is applied to the input (function) u and as a
result, the output (function) y is obtained.
For example, if in the differential equation from Ex. 1.2.2., Tẋ + x = K₁u, we substitute x = y/K₂, we obtain,
Tẏ + y = K₁K₂u ⇔ R(u,y) = 0, where R(u,y) = Tẏ + y − K₁K₂u, (1.3.6)
which is an implicit input-output relation.
By denoting D = d/dt the time derivative operator, so that Dy = ẏ, we obtain
TDy(t) + y(t) − K₁K₂u(t) = 0 ⇔ (TD + 1)y(t) − K₁K₂u(t) = 0 ⇔
y(t) = [K₁K₂/(TD + 1)]u(t) ⇔ y(t) = Su(t), S = K₁K₂/(TD + 1) (1.3.7)
Here y(t) = Su(t) is an explicit input-output relation through an integral-differential operator S. This relation is expressed in the time domain, but it can be expressed in any domain if a one-to-one correspondence exists.
For example, we can express it in the s-complex domain, applying the Laplace transform to (1.3.6),
Y(s) = [K₁K₂/(Ts + 1)]U(s) + [T/(Ts + 1)]x(0) (1.3.8)
from which an operator H(s), called transfer function, is defined,
H(s) = Y(s)/U(s)|_(x(0)=0) = K₁K₂/(Ts + 1). (1.3.9)
The relation between the Laplace transform of the output Y(s) and the Laplace transform of the input U(s) which determined that output under zero initial conditions,
Y(s) = H(s)U(s), (1.3.10)
is another form of explicit input-output relation.
Example 1.3.1. Double RC Electrical Circuit.
Let us consider an electrical network obtained by the physical series connection of two simple RC circuits, whose principle diagram is represented in Fig. 1.3.1.
Figure no. 1.3.1.
Figure no. 1.3.2.
Suppose that the second circuit runs empty (no load) and the first is controlled by the voltage u across the terminals A₁, A₁', and we are interested in the voltage y across the terminals B₂, B₂'.
Because the output y is so defined, under common conditions only the voltage u affects this selected output, so the oriented system is specified as depicted in Fig. 1.3.2.
The abstract system, denoted by S, will be defined by establishing the mathematical relations between u and y.
To do this, first we observe from the principle diagram that there are 8 variables as time functions involved: u, i₁, i_C1, u_C1 = x₁, i_C2, u_C2 = x₂, i₂ and y.
The other quantities R₁, R₂, C₁, C₂ are constant in time and represent the circuit parameters, as anyone skilled in electrical engineering understands on the spot.
Because u is a cause (an input), it is a free variable, so we have to look for 7 independent equations. These equations can be written using Kirchhoff's theorems and Ohm's law:
1. i_C1 = i₁ − i₂    2. i_C2 = i₂    3. i_C1 = C₁ẋ₁    4. i_C2 = C₂ẋ₂
5. i₁ = (1/R₁)(−x₁ + u)    6. i₂ = (1/R₂)(x₁ − x₂)    7. y = x₂
We can observe that the two variables x₁ and x₂ appear with their first order derivatives, so, eliminating all the intermediate variables, a relation between u and y will be obtained as a second order differential equation. But first let us keep the variables x₁ and x₂ and their derivatives.
Denoting by T₁ = R₁C₁ and T₂ = R₂C₂ the two time constants, after some substitutions we obtain,
T₁ẋ₁ = −(1 + R₁/R₂)x₁ + (R₁/R₂)x₂ + u (1.3.11)
T₂ẋ₂ = x₁ − x₂ (1.3.12)
y = x₂ (1.3.13)
which, after dividing by T₁ and T₂ respectively, take the final form
ẋ₁ = −(1/T₁)(1 + R₁/R₂)x₁ + (1/T₁)(R₁/R₂)x₂ + (1/T₁)u(t) (1.3.14)
S: ẋ₂ = (1/T₂)x₁ − (1/T₂)x₂ (1.3.15)
y = x₂ (1.3.16)
The equations (1.3.14), (1.3.15), (1.3.16) are called state equations related to that oriented system and they constitute the abstract system S in state equation form. We can rewrite these equations in matrix form,
S: ẋ = Ax + bu (1.3.17)
y = cᵀx + du (1.3.18)
where:
A = [−(1/T₁)(1 + R₁/R₂)  (1/T₁)(R₁/R₂) ; 1/T₂  −1/T₂] ; b = [1/T₁ ; 0] ; c = [0 ; 1] ; d = 0, (1.3.19)
and generally they are called:
A - the system matrix
b - the command vector
c - the output vector
d - the direct input-output connection factor.
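The state equations (1.3.17)-(1.3.19) can be simulated directly. The sketch below uses illustrative component values (chosen so that T₁ = T₂ = RC = 1, the particular case considered later in this example) and forward-Euler integration, which is an assumption of this sketch rather than a method from the text.

```python
# State equations (1.3.17)-(1.3.19) of the double RC circuit; the component
# values are illustrative (chosen so that T1 = T2 = RC = 1).
R1, R2, C1, C2 = 1.0, 2.0, 1.0, 0.5
T1, T2 = R1 * C1, R2 * C2

A = [[-(1 + R1 / R2) / T1, (R1 / R2) / T1],
     [1 / T2,              -1 / T2]]
b = [1 / T1, 0.0]
c = [0.0, 1.0]           # y = x2
d = 0.0

def simulate(u, t_end, x1=0.0, x2=0.0, n=100000):
    """Forward-Euler integration of x' = Ax + bu, y = c^T x + d*u."""
    h = t_end / n
    for i in range(n):
        v = u(i * h)
        dx1 = A[0][0] * x1 + A[0][1] * x2 + b[0] * v
        dx2 = A[1][0] * x1 + A[1][1] * x2 + b[1] * v
        x1, x2 = x1 + h * dx1, x2 + h * dx2
    return c[0] * x1 + c[1] * x2 + d * u(t_end)

print(simulate(lambda t: 1.0, 10.0))   # step response settles near 1
```

Starting from zero initial state, the step response approaches the steady-state gain of the circuit.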
The input-output relation R(u,y) = 0, as mentioned before, can be expressed as a single differential equation in u and y. To do this we can use, for example, (1.3.14), (1.3.15), (1.3.16) or, simpler, (1.3.11), (1.3.12), (1.3.13).
Substituting x₂ from (1.3.13) in (1.3.12) and multiplying by T₁, we obtain
T₁T₂ẏ = T₁x₁ − T₁y ⇒ T₁x₁ = T₁T₂ẏ + T₁y (*)
Applying the first derivative to (*) and substituting T₁ẋ₁ from (1.3.11) and x₁ = T₂ẏ + y from (*), we obtain,
T₁T₂ÿ + T₁ẏ = −(1 + R₁/R₂)(T₂ẏ + y) + (R₁/R₂)y + u
which finally gives
T₁T₂ÿ + [T₁ + (1 + R₁/R₂)T₂]ẏ + y = u. (1.3.20)
This is a differential equation expressing the mathematical model (the abstract system) of the oriented system. It can be presented as an i-o relation
R(u,y) = 0, where R(u,y) = T₁T₂ÿ + [T₁ + (1 + R₁/R₂)T₂]ẏ + y − u. (1.3.21)
If we denote d/dt = D, we can express the i-o relation in an explicit form
y(t) = 1/{T₁T₂D² + [T₁ + T₂(1 + R₁/R₂)]D + 1} u(t) ⇒ y(t) = S(D)u(t) (1.3.22)
where S(D) is an integral-differential operator.
For simplicity, let us consider the following values for the parameters:
R₁ = R ; R₂ = 2R ; C₁ = C ; C₂ = C/2 ⇒ T₁ = T₂ = T = RC
so the differential equation (1.3.20) becomes
T²ÿ + 2.5Tẏ + y = u. (1.3.23)
We can express the i-o relation in the complex domain by using the Laplace transform. Applying the Laplace transform to (1.3.23) we get
Y(s) = L{y(t)} ; U(s) = L{u(t)} ⇒
T²[s²Y(s) − sy(0⁺) − ẏ(0⁺)] + 2.5T[sY(s) − y(0⁺)] + Y(s) = U(s) ⇒
Y(s) = [1/(T²s² + 2.5Ts + 1)]U(s) + [(T²s + 2.5T)/(T²s² + 2.5Ts + 1)]y(0⁺) + [T²/(T²s² + 2.5Ts + 1)]ẏ(0⁺) (1.3.24)
We denote by L(s) the characteristic polynomial
L(s) = T²s² + 2.5Ts + 1 (1.3.25)
so the output in the complex domain is,
Y(s) = [1/L(s)]U(s) + [(T²s + 2.5T)/L(s)]y(0⁺) + [T²/L(s)]ẏ(0⁺) (1.3.26)
As we can see, the Laplace transform of the output, Y(s), depends on the Laplace transform of the input, U(s), and on two initial conditions: y(0⁺), the value of the output, and ẏ(0⁺), the value of the time derivative of the output, both at the time moment 0⁺.
Let H(s) be
H(s) = 1/L(s) = Y(s)/U(s)|_(zero initial conditions) (1.3.27)
where H(s) is called the transfer function of the system.
The transfer function of a system is the ratio between the Laplace transform of the output and the Laplace transform of the input which determines that output under zero initial conditions, if and only if this ratio does not depend on the form of the input.
By using the inverse Laplace transform we can obtain the time response (the output response) of this system. The characteristic equation L(s) = T²s² + 2.5Ts + 1 = 0 has the roots,
s₁,₂ = [−(5/2)T ± (1/2)√(25T² − 16T²)]/(2T²) (1.3.28)
so the characteristic polynomial is presented as
L(s) = T²(s − λ₁)(s − λ₂), with λ₁ = −1/(2T) ; λ₂ = −2/T. (1.3.29)
One way to calculate the inverse Laplace transform is to use the partial fraction expansion of the rational functions from Y(s) as in (1.3.26):
H(s) = 1/[T²(s − λ₁)(s − λ₂)] ; H(s) = A/(s − λ₁) + B/(s − λ₂) ⇒ A = 2/(3T) ; B = −2/(3T)
(T²s + 2.5T)/[T²(s − λ₁)(s − λ₂)] = A₁/(s − λ₁) + B₁/(s − λ₂) ⇒ A₁ = 4/3 ; B₁ = −1/3
T²/[T²(s − λ₁)(s − λ₂)] = A₂/(s − λ₁) + B₂/(s − λ₂) ⇒ A₂ = 2T/3 ; B₂ = −2T/3
Y(s) = (2/(3T))[1/(s − λ₁) − 1/(s − λ₂)]U(s) + (1/3)[4/(s − λ₁) − 1/(s − λ₂)]y(0) + (2T/3)[1/(s − λ₁) − 1/(s − λ₂)]ẏ(0)
L⁻¹{1/(s − λ₁)} = e^(λ₁t) = α₁(t) ; L⁻¹{1/(s − λ₂)} = e^(λ₂t) = α₂(t)
L⁻¹{[1/(s − λ₁)]U(s)} = ∫[0,t] α₁(t − τ)u(τ)dτ ; L⁻¹{[1/(s − λ₂)]U(s)} = ∫[0,t] α₂(t − τ)u(τ)dτ
y(t) = (1/3)[4α₁(t) − α₂(t)]y(0) + (2T/3)[α₁(t) − α₂(t)]ẏ(0) + (2/(3T))∫[0,t] [α₁(t − τ) − α₂(t − τ)]u(τ)dτ (1.3.30)
where
α₁(t) = e^(λ₁t) = e^(−t/(2T)) (1.3.31)
α₂(t) = e^(λ₂t) = e^(−2t/T) (1.3.32)
By using the same procedure as for the first order RC circuit presented in Ex. 1.2.2., we can express this time relation depending on the initial time t₀ as:
y(t) = (1/3)[4α₁(t − t₀) − α₂(t − t₀)]y(t₀) + (2T/3)[α₁(t − t₀) − α₂(t − t₀)]ẏ(t₀) + (2/(3T))∫[t₀,t] [α₁(t − τ) − α₂(t − τ)]u(τ)dτ (1.3.33)
As we can see, the general response of the output depends on: t, t₀, the initial state x₀, and the input u_[t₀,t], where the state vector is defined as
x₁(t₀) = y(t₀) ; x₂(t₀) = ẏ(t₀) ⇒ x(t₀) = [x₁(t₀) x₂(t₀)]ᵀ = x₀ (1.3.34)
The relation (1.3.33) is an input-initial state-output relation of the form
y(t) = η(t, t₀, x₀, u_[t₀,t]) (1.3.35)
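The free response contained in (1.3.33) (the terms in y(t₀) and ẏ(t₀), with u ≡ 0 and t₀ = 0) can be verified against the differential equation (1.3.23) by finite differences; T and the initial conditions below are illustrative values.

```python
import math

# Free response from (1.3.33) with u = 0 and t0 = 0; T, y(0), y'(0) are
# illustrative values.
T, y0, yd0 = 1.0, 3.0, 9.0
a1 = lambda t: math.exp(-t / (2 * T))    # alpha1(t) = e^(lambda1*t), lambda1 = -1/(2T)
a2 = lambda t: math.exp(-2 * t / T)      # alpha2(t) = e^(lambda2*t), lambda2 = -2/T

def y(t):
    return (1/3) * (4*a1(t) - a2(t)) * y0 + (2*T/3) * (a1(t) - a2(t)) * yd0

# y(t) must satisfy T^2 y'' + 2.5 T y' + y = 0 and the initial conditions.
h, t = 1e-4, 0.8
yd  = (y(t + h) - y(t - h)) / (2 * h)          # central difference for y'
ydd = (y(t + h) - 2 * y(t) + y(t - h)) / h**2  # central difference for y''
print(T**2 * ydd + 2.5 * T * yd + y(t))        # residual close to 0
print(y(0.0))                                  # the assumed y(0) = 3
```

The residual of (1.3.23) is numerically negligible, and the formula reproduces the assumed initial values y(0) and ẏ(0).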
Example 1.3.2. Manufacturing Point as a Discrete Time System.
Let us consider a working point (manufacturing point). In this manufacturing point the work is done by a robot which manufactures semi-products. We suppose that it works in a daily cycle. Suppose that on day k the working point is supplied with u_k semi-products, from which only a fraction β_k is transformed into finished products. The working point has a store house which contains, on day k, x_k finished products. Suppose that daily (say, on day k) a fraction (1 − α_k) of the stock is delivered to other sections. Also on day k the production is reported as y_k, a fraction γ_k of the stock. We want to see what the evolution of the stock and the reported stock are for any day. It can be observed that the time variable is a discrete one, the integer number k.
Figure no. 1.3.3.
This working point can be interpreted as an oriented system having the daily report y_k as the output. The input is the daily rate of supply u_k, so the oriented system is defined as in Fig. 1.3.3., where the input and the output are strings of numbers.
The mathematical model is determined from the above description. If we denote by x_{k+1} the stock for the next day, it is composed of the remaining stock x_k − (1 − α_k)x_k = α_k x_k and the newly produced β_k u_k,
x_{k+1} = α_k x_k + β_k u_k (1.3.36)
y_k = γ_k x_k (1.3.37)
This is the abstract system S of the working point looked upon as an oriented system. These are difference equations expressing a discrete-time system.
We can determine the solution of this system of equations by using a general method, but in this case we shall proceed by integrating the difference equation step by step.
The day p+1: x_{p+1} = α_p x_p + β_p u_p | α_{k−1}···α_{p+1}
The day p+2: x_{p+2} = α_{p+1} x_{p+1} + β_{p+1} u_{p+1} | α_{k−1}···α_{p+2}
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The day k−2: x_{k−2} = α_{k−3} x_{k−3} + β_{k−3} u_{k−3} | α_{k−1}α_{k−2}
The day k−1: x_{k−1} = α_{k−2} x_{k−2} + β_{k−2} u_{k−2} | α_{k−1}
The day k: x_k = α_{k−1} x_{k−1} + β_{k−1} u_{k−1} | 1
Denoting by Φ(k, p) the discrete-time transition matrix (in this example it is a scalar),
Φ(k, p) = Π_{j=p}^{k−1} α_j = α_{k−1}α_{k−2}···α_{p+1}α_p ; Φ(k, k) = 1 (1.3.38)
and adding the above set of relations, each multiplied by the factor written on the right side, we obtain
x_k = Φ(k, p)x_p + Σ_{j=p}^{k−1} [Φ(k, j+1)β_j u_j] (1.3.39)
y_k = γ_k x_k (1.3.40)
We observe that (1.3.39) is an input-initial state-state relation of the form,
x_k = φ(k, k₀, x_{k₀}, u_[k₀,k−1]) (1.3.41)
where x_k is the state at the current time k (in our case the day index) and x_{k₀} is the initial state at the initial time moment p = k₀.
The output evolution is
y_k = γ_k Φ(k, p)x_p + Σ_{j=p}^{k−1} [γ_k Φ(k, j+1)β_j u_j] (1.3.42)
which is an input-initial state-output relation of the form,
y_k = η(k, k₀, x_{k₀}, u_[k₀,k−1]) (1.3.43)
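The closed form (1.3.39) can be checked against the step-by-step integration of (1.3.36); the daily coefficients and supplies below are made-up illustrative data.

```python
# Manufacturing-point model (1.3.36)-(1.3.39); the daily coefficients and
# supplies are made-up illustrative data.
alpha = [0.9, 0.8, 0.85, 0.95, 0.7]
beta  = [0.5, 0.6, 0.55, 0.5, 0.65]
gamma = [0.2, 0.2, 0.3, 0.25, 0.2]
u     = [10, 12, 8, 15, 11]
x_p   = 40.0                     # initial stock on day p = 0

def phi(k, p):
    """Scalar transition 'matrix' (1.3.38): Phi(k,p) = alpha_{k-1}...alpha_p, Phi(k,k) = 1."""
    prod = 1.0
    for j in range(p, k):
        prod *= alpha[j]
    return prod

def x_recursive(k):
    """Step-by-step integration of x_{k+1} = alpha_k x_k + beta_k u_k."""
    x = x_p
    for j in range(k):
        x = alpha[j] * x + beta[j] * u[j]
    return x

def x_formula(k, p=0):
    """Closed form (1.3.39): x_k = Phi(k,p) x_p + sum of Phi(k,j+1) beta_j u_j."""
    return phi(k, p) * x_p + sum(phi(k, j + 1) * beta[j] * u[j] for j in range(p, k))

k = 4
print(x_recursive(k), x_formula(k))   # the two values agree
print(gamma[k] * x_formula(k))        # reported production y_k as in (1.3.40)
```

The recursion and the transition-matrix formula produce the same stock on any day, which is exactly what the derivation of (1.3.39) asserts.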
Example 1.3.3. RSmemory Relay as a Logic System.
Let us consider a physical object represented by a principle diagram as in Fig. 1.3.4. This is a logic circuit implementing a relay-based RS-memory.
Figure no. 1.3.4.
Figure no. 1.3.5.
Figure no. 1.3.6.
Here SB represents a normally open button (the set-button) and RB a normally closed button (the reset-button).
By normal it is understood "not pressed".
When the button SB is pushed the current can run through the terminals a-b, and when the button RB is pushed the current can not run through the terminals c-d. By x are denoted the normally open contacts of the relay.
The normal status (or state) of the relay is the one when the current through the coil is zero. L is a lamp which lights when the relay is activated. The functioning of this circuit can be explained in words:
If the button RB is free, pushing the button SB a current i runs, activating the relay, whose contact x will short-circuit the terminals a-b, and the lamp is turned on. Even if the button SB is released, the lamp keeps on lighting.
1. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 1.3. Inputs; Outputs; InputOutput Relations.
18
If the button RB is pushed, the current i is cancelled and the relay becomes relaxed; the lamp L is turned off.
The variables encountered in this description, SB, RB, i, x, y, are associated with the variables s_t, r_t, i_t, x_t, y_t, called logical variables.
They represent the truth values, at the time moment t, of the propositions:
s_t : "The button SB is pushed" ⇔ "Through the terminals a-b the current can run".
r_t : "The button RB is pushed" ⇔ "Through the terminals c-d the current can not run".
i_t : "The current i runs through the relay coil".
x_t : "The relay is activated" ⇔ "The relay normally open contacts are connected".
y_t : "The lamp lights".
These logical variables can take only two values, denoted usually by the symbols 0 and 1, on a set B = {0; 1} which represent false and true.
The set B is organised as a Boolean algebra. In a Boolean algebra three fundamental binary operations are defined: conjunction "∧", disjunction "∨" and negation "¯".
Suppose we are interested in the lamp status, so the output is y(t) = y_t. This selected output depends on the status of the buttons SB, RB only (it is supposed that the supply voltage E is continuously applied) as external causes, so the input is the vector u(t) = [s_t r_t]ᵀ.
An oriented system is defined now as depicted in Fig. 1.3.5.
The mathematical relations between u(t) and y(t), defining the abstract
system S are expressed as logical equations.
The value of the logical variable i_t is given by
i_t = (s_t ∨ x_t) ∧ r̄_t. (1.3.44)
Because of the mechanical inertia, the status of the relay changes, after a small time interval ε, finite or ideally ε→0, to the value of i_t,
x_{t+ε} = i_t. (1.3.45)
Of course the status of the lamp equals the status of the relay, so
y_t = x_t (1.3.46)
Substituting (1.3.44) into (1.3.45), together with (1.3.46), we obtain the abstract system S as,
S: x_{t+ε} = (s_t ∨ x_t) ∧ r̄_t (1.3.47)
y_t = x_t (1.3.48)
To determine the output of this system, besides the two inputs s_t, r_t, we need another piece of information: the value of x_t, the state of the relay (1 if the relay is activated and 0 if the relay is not activated).
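The logic equations (1.3.47)-(1.3.48) can be simulated as a sketch, sampling the system once per ε-step; the input sequence below is an illustrative assumption.

```python
# Simulation of the relay RS-memory (1.3.47)-(1.3.48):
# x_{t+eps} = (s_t OR x_t) AND (NOT r_t),  y_t = x_t.
def step(x, s, r):
    """One eps-step of the relay state equation (1.3.47)."""
    return (s or x) and not r

def run(x0, inputs):
    """Return the output sequence y_t = x_t for a list of (s_t, r_t) pairs."""
    x, ys = x0, []
    for s, r in inputs:
        ys.append(x)          # output equals the current relay state (1.3.48)
        x = step(x, s, r)     # state update after the delay eps
    return ys

# press SB, release it, then press RB: the lamp latches on, then turns off
seq = [(True, False), (False, False), (False, False), (False, True), (False, False)]
print(run(False, seq))   # [False, True, True, True, False]
```

The run shows the memory behaviour described in the text: the lamp stays on after SB is released and is cleared only by RB.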
It does not matter how we denote this information: A (or "on") for 1 and B (or "off") for 0. If we know that the state of the relay is A, we can determine the output evolution if we know the inputs.
If we perform experiments with this physical system on a time interval [t₀,t₁], for any t₀, t₁, a set S of input-output pairs (u,y) = (u_[t₀,t₁], y_[t₀,t₁]) can be observed,
S = {(u_[t₀,t₁], y_[t₀,t₁]) observed, ∀t₀, t₁ ∈ R, ∀u_[t₀,t₁] ∈ Ω}. (1.3.49)
The set S can describe the abstract system. It can be split into two subsets depending on whether a pair (u,y) is obtained having x_{t₀} equal to 0 or 1:
S₀ = {(u,y)∈S / x_{t₀} = 0} = {(u,y)∈S / x_{t₀} = B} = {(u,y)∈S / x_{t₀} = off} (1.3.50)
S₁ = {(u,y)∈S / x_{t₀} = 1} = {(u,y)∈S / x_{t₀} = A} = {(u,y)∈S / x_{t₀} = on} (1.3.51)
It can be proved that
S₀ ∪ S₁ = S ; S₀ ∩ S₁ = ∅, (1.3.52)
as depicted in Fig. 1.3.6.
Also, inside any subset S_i the input uniquely determines the output:
∀(u, y^a) ∈ S_i, ∀(u, y^b) ∈ S_i ⇒ y^a ≡ y^b, i = 0; 1 (1.3.53)
From this we understand that the initial state is a label which parametrizes the subsets S_i ⊆ S as in (1.3.53).
Example 1.3.4. Blackbox Toy as a Two States Dynamical System.
Let us consider a blackbox, as a toy, someone received. The box is covered; nothing can be seen of what it contains inside, but it has a controllable voltage supplier with a voltage meter u(t) across the terminals A-B and a voltage meter y(t) across the terminals C-D, as depicted in Fig. 1.3.7.
Figure no. 1.3.7.
Playing with this blackbox, we register the evolution of the input u(t) and of the resulting output y(t). We are surprised that sometimes we get the input-output pair
(u_[t₀,t₁], y_[t₀,t₁] = (1/2)u_[t₀,t₁]) ⇒ y(τ) = (1/2)u(τ), ∀τ (1.3.54)
but other times we get the input-output pair
(u_[t₀,t₁], y_[t₀,t₁] = (2/3)u_[t₀,t₁]) ⇒ y(τ) = (2/3)u(τ), ∀τ. (1.3.55)
Doing all the experiments possible we have a collection of inputoutput
pairs which constitute a set S as (1.3.49).
This collection S will determine the behaviour of the blackbox. For the same applied input u(t) we can get two outputs, y(t) = (1/2)u(t) or y(t) = (2/3)u(t). Our set of input-output pairs can be split into two subsets: S₀ if they correspond to (1.3.54) and S₁ if they correspond to (1.3.55).
Of course S₀ ∪ S₁ = S ; S₀ ∩ S₁ = ∅.
If someone gave us an input u_[t₀,t₁], we would not be able to say what the output is, because we have no idea from which subset, S₀ or S₁, to select the right pair. Some information is missing.
Suppose that the box cover face has been broken so we can have a look
inside the box as in Fig. 1.3.7.
Now we can understand why the two sets of inputoutput pairs (1.3.54),
(1.3.55) were obtained.
The box can be in two states depending on the switch status: open or closed. We can define the box state by a variable x which takes two values denominated as: {off ; on} or {0 ; 1} or {A ; B}.
Now the subset S
0
can be equivalently labelled by one of the marks: "off",
"0", "A" and the subset S
1
by : "on", "1", "B" respectively.
It does not matter how the position of the switch is denoted (labelled). The
switch position will determine the state of the blackbox.
The state is equivalently expressed by one of the variables:
x ∈ {off ; on} or x ∈ {0 ; 1} or x̃ ∈ {A ; B}
If someone gives us an input u_[t₀,t₁] and an additional piece of information formulated as: "the state is on", which means x = on, or "the state is B", which means x̃ = B, etc., we can uniquely determine the output y(t) = (2/3)u(t), selecting it from the subset S₁.
With this example our intention is to point out that the system state appears as a way of parametrising the input-output pair subsets inside which one input uniquely determines one output.
Also, we wanted to point out that the state can be presented in different forms; the same input-output behaviour can have different state descriptions.
In this example the state of the system can not be changed by the input, such a system being called state uncontrollable.
1.4. System State Concept; Dynamical Systems.
1.4.1. General aspects.
As we saw in the above examples, to determine univocally the output, besides the input, some initial conditions have to be known.
across the capacitor, for the mechanical system the initial position of the arm, in
the case of double RC circuit the voltages across the two capacitors, for the
manufacturing point the initial value of the stock, for the relay based circuit the
initial status of the relay.
All this information defines the system state in the time moment from
which the input will affect the output.
The state of an abstract system is a collection of elements (the elements can be numbers) which, together with the input u(t) for all t ≥ t₀, uniquely determines the output y(t) for all t ≥ t₀.
In essence the state parametrizes the listing of inputoutput pairs.
The state is the answer to the question: "Given u(t) for t ≥ t₀ and the mathematical relationships between input and output (the abstract system), what additional information is needed to completely specify y(t) for t ≥ t₀?".
The system state at a time moment contains (includes) all the essential
information regarding the previous evolution to determine, starting with that time
moment, the output if the input is known.
A state variable denoted by the vector x(t) is the time function whose value
at any specified time is the state of the system at that time moment.
The state can be a set consisting of an infinity of numbers and in this case
the state variable is an infinite collection of time functions. However in most
cases considered, the state is a set of n numbers and correspondingly x(t) is a
nvector function of time.
The state space, denoted by X, is the set of all x(t) values.
The state representation is not unique. There can be many different ways
of expressing the relationships of input to output. For example in the case of
blackbox or of the logic circuit we can define the state as {on , off}, {A, B} and
so on. For the double RC circuit one state representation consists of the output value y(t₀) and the time derivative value of the output, ẏ(t₀).
The state of a system is related to a time moment. For example, the state x₀ at a time moment t₀ is denoted by (x₀, t₀) ≜ x(t₀).
The minimum number of state vector elements for which the output can be univocally determined, for a known (given) input, represents the system order or the system dimension. The systems from Ex. 1.2.2. or Ex. 1.2.3. are of first order because it is enough to know a single element x₀ to determine the output response, as we can see in relation (1.2.7).
Also, the discrete-time system (1.3.36), (1.3.37) from Ex. 1.3.2. is of first order because the response (1.3.42) can be uniquely determined if we know just the number x_p.
But the abstract system (1.3.17), (1.3.18) or (1.3.20) from Ex. 1.3.1. is of second order because, as we can see in relation (1.3.33) (considering for the sake of convenience the particular case T₁ = T₂ = T; the system dimension is not affected), the output y(t) is uniquely determined by a given input u_[t₀,t] if two initial conditions y(t₀), ẏ(t₀) are known, as rewritten in (1.4.1):
y(t) = (1/3)[4α₁(t − t₀) − α₂(t − t₀)]y(t₀) + (2T/3)[α₁(t − t₀) − α₂(t − t₀)]ẏ(t₀) + (2/(3T))∫[t₀,t] [α₁(t − τ) − α₂(t − τ)]u(τ)dτ (1.4.1)
These initial conditions are the output value and the output time derivative value at the time moment t₀. These two values can be selected as the components of a vector, the state vector,
x₀ = [x₁⁰ x₂⁰]ᵀ = [y(t₀) ẏ(t₀)]ᵀ
which means that at the time t₀ the state is x₀ ⇒ (t₀, x₀) = x(t₀).
If, for example, y(t₀) = 3V, ẏ(t₀) = 9V/sec, the state at the time moment t₀ is expressed by the numerical vector
x₀ = [3 9]ᵀ = [3 volts 9 volts/sec]ᵀ
so we can say that at the time moment t₀ the system is in the state [3 9]ᵀ. Because two numbers are necessary to determine uniquely the output, we can say that this system is a second order one.
The response from (1.3.33) can be arranged as:
y(t) = α₁(t − t₀)[(4/3)y(t₀) + (2T/3)ẏ(t₀)] + α₂(t − t₀)[−(1/3)y(t₀) − (2T/3)ẏ(t₀)] + (2/(3T))∫[t₀,t] [α₁(t − τ) − α₂(t − τ)]u(τ)dτ (1.4.2)
Denoting by
x̄₁⁰ = (4/3)y(t₀) + (2T/3)ẏ(t₀)
x̄₂⁰ = −(1/3)y(t₀) − (2T/3)ẏ(t₀)
the output response can be written as
y(t) = α₁(t − t₀)x̄₁⁰ + α₂(t − t₀)x̄₂⁰ + (2/(3T))∫[t₀,t] [α₁(t − τ) − α₂(t − τ)]u(τ)dτ (1.4.3)
This output response can be univocally determined if the numbers and are x
1
0
x
2
0
known, that means they can constitute the components of the vector x
0
·
x
1
0
x
2
0
]
]
]
Let us consider a concrete example for T = 1 sec. For

y(t_0) = x_1^0 = 3 V;  ẏ(t_0) = x_2^0 = 9 V/sec  ⇒

x̄_1^0 = (4/3)·3 + (2·1/3)·9 = 10 V;  x̄_2^0 = −(1/3)·3 − (2·1/3)·9 = −7 V/sec

In this form of the state vector definition we can say that at the time moment t_0 the system is in the state x̄_0 = [10 −7]^T, which is different from the state x_0 = [3 9]^T, but, applying the same input u_[t_0,t], we obtain the same output as was obtained in the case of x_0 = [3 9]^T.
The following important conclusion can be pointed out:
The same input-output behaviour of an oriented system can be obtained by defining the state vector in different ways.
If x is the state vector related to an oriented system and a square matrix T is a nonsingular one, det T ≠ 0, then the vector x̄ = Tx is also a state vector for the same oriented system. Both states x, x̄ related as above will determine the same input-output behaviour.
In the above example, the two state relationships

x̄_1 = (4/3)x_1 + (2T/3)x_2
x̄_2 = −(1/3)x_1 − (2T/3)x_2

can be written in a matrix form:

x̄ = Tx,  where T = [ 4/3  2T/3 ; −1/3  −2T/3 ],  det T = −2T/3 ≠ 0    (1.4.4)
For example, in the case of the logic circuit (Ex. 1.3.3.) or of the black box (Ex. 1.3.4.) we can define the state values as {on, off}, {A, B} and so on, as we discussed.
If the collection of numbers which defines the state is a finite one, the state is defined as a column vector: x = [x_1 x_2 . . x_n]^T.
The minimal number of elements of this vector able to uniquely determine the output defines the system order or the system dimension.
When such a collection, strictly necessary, is infinite (we can say the vector x has an infinite number of elements), then the order of the system is infinite, or the system is infinite-dimensional.
Such an infinite-dimensional system is presented in the next example by the pure time delay element.
Example 1.4.1. Pure Time Delay Element.
Let us consider a belt conveyor transporting dust fuel (dust coal for
example) utilised in a heating system represented by a principle diagram as
shown in Fig. 1.4.1. The belt moves with a speed v.
The thickness of the fuel is controlled by a mobile flap.
Suppose we are interested in the thickness at the point B, at the end of the belt, expressed by the variable y(t).
This variable will be the output of the oriented system we are defining, as in Fig. 1.4.2. The input is the thickness realised at the flap position, point A, and we shall denote it by the variable u(t).
The distance between points A and B is d. One piece of fuel passing from A to B will take a period of time τ = d/v.

Figure no. 1.4.1. (conveyor belt of speed v with a controlled flap; fuel thickness u(t) at point A, thickness y(t) at point B, distance d between A and B)

Figure no. 1.4.2. (pure time delay element: y(t) = u(t−τ); in the complex domain U(s) → e^{−τs} → Y(s); time diagrams of u(t) and y(t), with y(t_1) = u(t_1−τ))
The input-output relation is expressed by the equation

y(t) = u(t − τ)    (1.4.5)

We can read this relation as: the output at the time moment t equals the value the input u(t) had τ seconds ago. Such a dependence is illustrated in the diagram from Fig. 1.4.2. It is a so-called functional equation.
Now suppose an input u_[t_0,t] is given. Can we determine the output y(t) for any t ≥ t_0? What do we need in addition to do this? Looking at the principle diagram from Fig. 1.4.1. or at the relation (1.4.5), we understand that in addition we need to know all the thickness along the belt between the points A and B or, in other words, all the values the input u(t) had during the time interval [t_0 − τ, t_0).
This collection of information constitutes the system state at the time moment t_0 and it will be denoted by x_0.
So the state at the time moment t_0, (t_0, x_0), denoted in short as x_0 = x(t_0) = x_{t_0}, is a set containing an infinite number of elements:

x_0 = x_{t_0} = x(t_0) = { u(θ), θ ∈ [t_0 − τ, t_0) } = u_[t_0−τ, t_0)    (1.4.6)

Because of that, this system has an infinite dimension.
At any time t the state is (t, x) = x(t), defined by

x(t) = { u(θ), θ ∈ [t − τ, t) } = u_[t−τ, t)    (1.4.7)
All these intuitive observations may receive a mathematical support by applying the Laplace transform to the input-output relation (1.4.5).
We remember that

L{u(t − τ)} = L{u(t − τ)·1(t)} = e^{−τs}·[U(s) + ∫_{−τ}^{0} u(t)e^{−st}dt]    (1.4.8)

so the Laplace transform of the output is

Y(s) = e^{−τs}·U(s) + e^{−τs}·∫_{−τ}^{0} u(t)e^{−st}dt = Y_f(s) + Y_l(s)    (1.4.9)

where

Y_f(s) = e^{−τs}·U(s) ⇒ y_f(t) = η(t, 0, 0, u_[0,t])    (1.4.10)

is the forced response, which depends on the input u(t) only; of course, here it depends on the Laplace transform U(s), which contains the input values u(θ) for any θ ≥ 0. We paid for the simplicity of the Laplace transform instrument with the restriction t_0 = 0.

Y_l(s) = e^{−τs}·∫_{−τ}^{0} u(t)e^{−st}dt    (1.4.11)

is the free response, that is, the output response when the input is zero.
By zero input we must mean

u(t) ≡ 0 ∀t ≥ 0 ⇔ U(s) ≡ 0 ∀s

from the convergence domain of U(s).
The free response depends on the initial state only (here at the initial time moment t_0 = 0) and, as we can see from (1.4.11), it depends on all the values of

u(θ) ∀θ ∈ [−τ, 0) ⇔ u_[−τ, 0)

so it looks natural to choose the initial state as x_0 = x(0) = u_[−τ, 0).
Now we can interpret the free response from (1.4.11) as

y_l(t) = η(t, 0, x_0, 0_[0,t))    (1.4.12)

From (1.4.10), (1.4.12) the general response (the time image of (1.4.9)) can be expressed as an input-initial state-output relation

y(t) = y_f(t) + y_l(t) = η(t, 0, 0, u_[0,t]) + η(t, 0, x_0, 0_[0,t)) = η(t, 0, x_0, u_[0,t))    (1.4.13)
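The idea that the state of the pure time delay element is the whole stored input segment, and that the response splits into a free and a forced part, can be sketched in discrete time. The following minimal Python sketch (the delay of m samples, the history and the input values are illustrative, not from the text) keeps the last m input samples in a buffer, playing the role of x_0 = u_[t_0−τ, t_0):

```python
from collections import deque

def simulate_delay(u, history, steps):
    """Simulate y[k] = u[k - m], where the state is the buffer of the
    last m input samples (a discretised segment u over [t - tau, t))."""
    state = deque(history, maxlen=len(history))  # initial state x0
    y = []
    for k in range(steps):
        y.append(state[0])    # the oldest stored sample leaves at point B
        state.append(u[k])    # the newest sample enters at point A
    return y

# delay of m = 3 samples; initial state = fuel already on the belt
history = [7.0, 8.0, 9.0]
u = [1.0, 2.0, 3.0, 4.0, 5.0]

y_total  = simulate_delay(u, history, 5)
y_free   = simulate_delay([0.0] * 5, history, 5)   # zero input, as in (1.4.12)
y_forced = simulate_delay(u, [0.0] * 3, 5)         # zero initial state, (1.4.10)
print(y_total)                                     # [7.0, 8.0, 9.0, 1.0, 2.0]
print([f + l for f, l in zip(y_forced, y_free)])   # equals y_total, as (1.4.13)
```

For the first m steps only the initial state (the free part) is visible at the output; afterwards only the applied input (the forced part) matters, and the two responses add up exactly as in (1.4.13).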
1.4.2. State Variable Definition.
The state variable is a function

x: T → X,  t → x(t),    (1.4.14)

where X is the state space, which expresses the time evolution of the system state. The state is not a constant (fixed) one.
It can be changed during the system time evolution, so the function x(t) can be a nonconstant one.
The graphic of this function on a time interval [t_0, t_1], denoted by

x_[t_0, t_1] = { (t, x(t)), ∀t ∈ [t_0, t_1] }    (1.4.15)

is called the time state trajectory on the interval [t_0, t_1]. The state variable x(t) is an explicit function of time, but it also depends implicitly on the starting time t_0, the initial state x(t_0) = x_0 and the input u(τ), τ ∈ [t_0, t].
This functional dependency, called input-initial state-state relation (iiss relation) or just a trajectory (more precisely time-trajectory), can be written as

x(t) = φ(t, t_0, x_0, u_[t_0,t]),  x_0 = x(t_0)    (1.4.16)

A relation of the form (1.4.16) is an input-initial state-state relation (iiss relation) and expresses the state evolution of a system if the following four conditions are accomplished:
1. The uniqueness condition. For a given initial state x(t_0) = x_0 at time t_0 and a given well defined input u_[t_0,t], the state trajectory is unique. This can be expressed as: "Given the state x_0 at time t_0 and a real input u(τ) for τ ≥ t_0, a unique trajectory φ(t, t_0, x_0, u_[t_0,t]) exists for all t > t_0".
2. The consistency condition. For t = t_0 the relation (1.4.16) has to check the condition:

x(t)|_{t=t_0} = x(t_0) = φ(t_0, t_0, x_0, u_[t_0,t_0]) = x_0    (1.4.17)

Also,

∀t_1 ≥ t_0,  lim_{t→t_1, t>t_1} φ(t, t_1, x(t_1), u_[t_1,t]) = x(t_1)    (1.4.18)

which means a unique trajectory starts from each state.
3. The transition condition. Any intermediate state on a state trajectory is an initial state for the future state evolution.
For any t_2 ≥ t_0, one input u_[t_0,t_2] takes the state x(t_0) to x(t_2),

x(t_2) = φ(t_2, t_0, x(t_0), u_[t_0,t_2])

but for any intermediate time t_1, t_0 ≤ t_1 ≤ t_2, applying u_[t_0,t_1], a subset of u_[t_0,t_2], which means

u_[t_0,t_2] = u_[t_0,t_1] ∪ u_[t_1,t_2]    (1.4.19)

we get the intermediate state x(t_1),

x(t_1) = φ(t_1, t_0, x(t_0), u_[t_0,t_1])

which, acting as an initial state from t_1, will determine the same x(t_2):

x(t_2) = φ(t_2, t_0, x(t_0), u_[t_0,t_2]) = φ(t_2, t_1, φ(t_1, t_0, x(t_0), u_[t_0,t_1]), u_[t_1,t_2])    (1.4.20)

where the inner φ is x(t_1).
According to this property we can say that the input u_[t_0,t] (or u) takes the system from a state (t_0, x_0) = x(t_0) to a state (t, x) = x(t), and if a state (t_1, x_1) = x(t_1) is on that trajectory, then the corresponding segment of the input will take the system from x(t_1) to x(t).
4. The causality condition. The state x(t) at any time t, or the trajectories φ(t, t_0, x_0, u_[t_0,t]), do not depend on the future inputs u(τ) for τ > t. This condition assures the causality of the abstract system, which has to correspond to the causality of the original physical oriented system.
Example 1.4.2. Properties of the iiss relation.
Let us consider the examples Ex. 1.2.2. and Ex. 1.2.3., for which the abstract system is described by relations (1.2.1) or (1.2.16):

S_1:  T·ẋ + x = K_1·u ;  y = K_2·x    (1.4.21)
The voltage across the capacitor x, or the movement of the arm x, is the state. Its time evolution is

x(t) = e^{−(t−t_0)/T}·x_0 + (K_1/T)∫_{t_0}^{t} e^{−(t−τ)/T}·u(τ)dτ  ⇔  x(t) = φ(t, t_0, x_0, u_[t_0,t])    (1.4.22)

We can show that the relationship (1.4.22) accomplishes the four above conditions, that is, it is an iiss relation.
1. The uniqueness condition:
If x̃_0 = x_0 and ũ_[t_0,t] = u_[t_0,t], the two state trajectories are

x(t) = e^{−(t−t_0)/T}·x_0 + (K_1/T)∫_{t_0}^{t} e^{−(t−τ)/T}·u(τ)dτ
x̃(t) = e^{−(t−t_0)/T}·x̃_0 + (K_1/T)∫_{t_0}^{t} e^{−(t−τ)/T}·ũ(τ)dτ  ⇒

x(t) − x̃(t) = e^{−(t−t_0)/T}·(x_0 − x̃_0) + (K_1/T)∫_{t_0}^{t} e^{−(t−τ)/T}·[u(τ) − ũ(τ)]dτ ≡ 0  ⇒  x(t) ≡ x̃(t)

because x_0 − x̃_0 = 0 and u(τ) − ũ(τ) ≡ 0 ∀τ.
2. The consistency condition:
Substituting t = t_0 ⇒

x(t_0) = e^{−(t_0−t_0)/T}·x_0 + (K_1/T)∫_{t_0}^{t_0} e^{−(t−τ)/T}·u(τ)dτ = x_0  ⇒  x(t_0) = x_0
3. The transition condition:
For t = t_1, denoting x_0 = x(t_0), (1.4.22) becomes

x(t_1) = e^{−(t_1−t_0)/T}·x(t_0) + (K_1/T)∫_{t_0}^{t_1} e^{−(t_1−τ)/T}·u(τ)dτ

and for t = t_2 (1.4.22) is

x(t_2) = e^{−(t_2−t_0)/T}·x(t_0) + (K_1/T)∫_{t_0}^{t_2} e^{−(t_2−τ)/T}·u(τ)dτ
Because

e^{−(t_2−t_0)/T} = e^{−(t_2−t_1)/T}·e^{−(t_1−t_0)/T}  and  ∫_{t_0}^{t_2}(..)dτ = ∫_{t_0}^{t_1}(..)dτ + ∫_{t_1}^{t_2}(..)dτ

we get

x(t_2) = e^{−(t_2−t_1)/T}·e^{−(t_1−t_0)/T}·x(t_0) + (K_1/T)∫_{t_0}^{t_1} e^{−(t_2−t_1)/T}·e^{−(t_1−τ)/T}·u(τ)dτ + (K_1/T)∫_{t_1}^{t_2} e^{−(t_2−τ)/T}·u(τ)dτ

x(t_2) = e^{−(t_2−t_1)/T}·[e^{−(t_1−t_0)/T}·x(t_0) + (K_1/T)∫_{t_0}^{t_1} e^{−(t_1−τ)/T}·u(τ)dτ] + (K_1/T)∫_{t_1}^{t_2} e^{−(t_2−τ)/T}·u(τ)dτ

where the bracket is x(t_1), so

x(t_2) = e^{−(t_2−t_1)/T}·x(t_1) + (K_1/T)∫_{t_1}^{t_2} e^{−(t_2−τ)/T}·u(τ)dτ
4. The causality condition: Because in (1.4.22) u(τ) appears inside the integral from t_0 to t only, x(t) is independent of u(τ) ∀τ > t.
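The four conditions proved analytically above can also be checked numerically. A small sketch, assuming a constant input u(τ) = u_0 so that the integral in (1.4.22) takes a closed form (the function name phi and all numerical values are illustrative):

```python
import math

T, K1 = 2.0, 3.0            # illustrative parameters of (1.4.21)

def phi(t, t0, x0, u0):
    """Closed form of (1.4.22) when the input is constant, u(tau) = u0."""
    e = math.exp(-(t - t0) / T)
    return e * x0 + K1 * u0 * (1.0 - e)

t0, x0, u0 = 0.0, 5.0, 1.5
# consistency condition (1.4.17): phi(t0, t0, x0, u) = x0
assert abs(phi(t0, t0, x0, u0) - x0) < 1e-12
# transition condition (1.4.20): t0 -> t2 directly equals t0 -> t1 -> t2
t1, t2 = 1.0, 4.0
x1 = phi(t1, t0, x0, u0)
assert abs(phi(t2, t0, x0, u0) - phi(t2, t1, x1, u0)) < 1e-12
print("consistency and transition conditions hold")
```

Uniqueness and causality are built into the sketch: phi is a deterministic function of (t, t_0, x_0, u_0) and never looks at input values beyond t.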
Before, we said, as a general statement, that the input affects the state and the state influences the output. However, there are systems in which the inputs do not influence the state or some components of the state vector.
Conversely, there are systems in which the outputs, or some of the outputs, are not influenced by the state. Such systems are called uncontrollable and unobservable respectively, about which more will be said later on.
In Ex. 1.3.4., the black box system, the physical object is state uncontrollable because no admitted input can make the switch change its position. If, for example, the wire to the output were broken, then such a system would be unobservable.
A state that is both uncontrollable and unobservable cannot be detected by any experiment and has no physical meaning.
1.4.3. Trajectories in State Space.
The input-initial state-state (iiss) relation

x(t) = φ(t, t_0, x_0, u_[t_0,t])    (1.4.23)
which expresses the time-trajectory of the state is an explicit function of time.
If the vector x is an n-dimensional one, there are n time-trajectories, one for each component:

x_i(t) = φ_i(t, t_0, x_0, u_[t_0,t]),  i = 1, ..., n    (1.4.24)

These time-trajectories can be plotted, as t increases from t_0, with t as an implicit parameter in an (n+1)-dimensional space, or as n separate plots x_i(t), t ≥ t_0, i = 1, .., n. Often a plot can be made by eliminating t from the solutions (1.4.24) of the state equations, which gives just a trajectory in state space.
If we denote x_i = x_i(t), i = 1, .., n, the iiss relation (1.4.24) is written as

x_1 = φ_1(t, t_0, x_0, u_[t_0,t])
...
x_i = φ_i(t, t_0, x_0, u_[t_0,t])    (1.4.25)
...
x_n = φ_n(t, t_0, x_0, u_[t_0,t])

and eliminating t from the n above relations we determine a trajectory in state space, implicitly expressed as

F(x_1, x_2, ..., x_n, t_0, x(t_0)) = 0    (1.4.26)
where a given (known) input was supposed. Simpler expressions are obtained if the input is constant for any t.
If the state vector components are the output and its (n−1) time derivatives, the state space is called phase space and the trajectory in phase space is called phase trajectory.
The trajectory in state space can be obtained more easily directly from the state equations, a system of first order differential equations. The plot can efficiently be exploited for n = 2, in the state plane or phase plane.
For a given initial state (t_0, x_0), where we have denoted x_0 = x(t_0), only one trajectory is obtained. For different initial conditions a family of trajectories is obtained, called state portrait or phase portrait.
Because of the uniqueness condition accomplished by the iiss relation, for a given input u_[t_0,t], one and only one trajectory passes through each point in state space and exists for all finite t ≥ t_0. As a consequence of this, the state trajectories do not cross one another.
Example 1.4.3. State Trajectories of a Second Order System.
Let us consider a simple second order system,

ẋ_1 = λ_1·x_1 + u  ⇔  ẋ_1(t) = λ_1·x_1(t) + u(t)
ẋ_2 = λ_2·x_2 + u  ⇔  ẋ_2(t) = λ_2·x_2(t) + u(t)    (1.4.27)

For simplicity let u(t) ≡ 0 ∀t ⇔ 0_[t_0,t] = { (τ, u(τ)) = 0, ∀τ ∈ [t_0, t] }.
Under this hypothesis, the iiss relation is obtained by integrating the system (1.4.27),

x_1(t) = e^{λ_1(t−t_0)}·x_1(t_0) = φ_1(t, t_0, x_1(t_0), 0_[t_0,t])
x_2(t) = e^{λ_2(t−t_0)}·x_2(t_0) = φ_2(t, t_0, x_2(t_0), 0_[t_0,t])    (1.4.28)
Supposing that we have λ_1 < 0 and λ_2 > 0, the time-trajectories of the two components are plotted as in Fig. 1.4.3.
Eliminating the variable t from (1.4.28) we obtain

x_1/x_1(t_0) = e^{λ_1(t−t_0)};  x_2/x_2(t_0) = e^{λ_2(t−t_0)}  ⇒  [x_1/x_1(t_0)]^{1/λ_1} = [x_2/x_2(t_0)]^{1/λ_2}  ⇔

x_2 = x_20·[x_1/x_10]^{λ_2/λ_1}  ⇔  F(x_1, x_2, x_0) = 0    (1.4.29)
The same expression (1.4.29) can be obtained directly from the differential equations (1.4.27),

dx_1/dt = λ_1·x_1,  dx_2/dt = λ_2·x_2  ⇒  dx_2/dx_1 = (λ_2/λ_1)·(x_2/x_1)  ⇒  x_2 = x_2(t_0)·[x_1/x_1(t_0)]^{λ_2/λ_1}    (1.4.30)
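A quick numerical check (the values of λ_1, λ_2 and of the initial state are illustrative) that points of the time-trajectories (1.4.28) indeed lie on the state-space trajectory (1.4.29):

```python
import math

lam1, lam2 = -1.0, 0.5            # illustrative: lambda1 < 0, lambda2 > 0
x10, x20, t0 = 2.0, 3.0, 0.0      # illustrative initial state x(t0)

for t in [0.5, 1.0, 2.0]:
    x1 = math.exp(lam1 * (t - t0)) * x10            # time-trajectory (1.4.28)
    x2 = math.exp(lam2 * (t - t0)) * x20
    x2_from_x1 = x20 * (x1 / x10) ** (lam2 / lam1)  # state trajectory (1.4.29)
    assert abs(x2 - x2_from_x1) < 1e-12
print("time-trajectory points lie on the state trajectory (1.4.29)")
```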
Figure no. 1.4.3. (time-trajectories x_1(t), x_2(t) for λ_1 < 0, λ_2 < 0, λ_1 < λ_2, with λ_1 = −1/T_1, λ_2 = −1/T_2, plotted from several initial conditions x'_10, x''_10, x'_20, x''_20)

Figure no. 1.4.4. (the corresponding trajectories in the state plane (x_1, x_2) for λ_1 < 0, λ_2 < 0, λ_1 < λ_2)
In Fig. 1.4.5. the state portrait is shown for the case λ_1 < 0, λ_2 > 0.
The input-initial state-output relation (iiso relation) determines the output at a time t,

y(t) = η(t, t_0, x_0, u_[t_0,t])    (1.4.31)
In the case of example Ex. 1.2.1., the relation

y(t) = K_2·e^{−(t−t_0)/T}·x_0 + (K_1·K_2/T)∫_{t_0}^{t} e^{−(t−τ)/T}·u(τ)dτ

is an iiso relation. It does not accomplish the consistency property,

y(t_0) = K_2·x_0 ≠ x_0

so it cannot be an iiss relation.
Figure no. 1.4.5. (state portrait in the plane (x_1, x_2) for λ_1 < 0, λ_2 > 0, together with the time-trajectories x_1(t), x_2(t) of two points a, b)
For t_0 = t, taking into consideration that u_[t,t] = u(t), relation (1.4.31) becomes

y(t) = g(x(t), u(t), t)    (1.4.32)

This is an algebraical relation and it is also called the output relation or output equation.
By a dynamical system one can understand a set S of three elements

S = {Ω, φ, η} or S = {Ω, φ, g}    (1.4.33)

which are defined above, where:
Ω − the set of admissible inputs
φ − the input-initial state-state relation
η − the input-initial state-output relation
g − the output relation
This is called the explicit form of a dynamical system, expressed by relations (solutions) or trajectories.
Another form of dynamical system representation is the implicit form, by state equations,

S = {Ω, f, g}    (1.4.34)

whose solutions are the trajectories expressed by (1.4.33), where
f − the vector function defining a set of equations: differential, difference, logical, functional.
The solution of the equations defined by f, for a given initial state (t_0, x_0), is just the relation φ.
For example, the first order system from Ex. 1.2.2. or Ex. 1.2.3. can be represented as follows.
The explicit form, by relations (functions) or state trajectories, is:

S: {  u ∈ Ω    (Ω)
      x(t) = e^{−(t−t_0)/T}·x(t_0) + (K_1/T)∫_{t_0}^{t} e^{−(t−τ)/T}·u(τ)dτ    (φ)
      y(t) = K_2·x(t)    (g)  }    (1.4.35)
The implicit form, by state equations, is:

S: {  u ∈ Ω
      ẋ = −(1/T)·x + (K_1/T)·u,  t ≥ t_0,  x(t_0) = x_0
      y = K_2·x  }    (1.4.36)
Frequently, the state equations of a dynamical system are composed of:
1. The proper state equation, defined by f, whose solution is the state trajectory x(t), t ≥ t_0, expressed by φ.
2. The output equation or, more precisely, the output relation g.
Generally, we can say that a system is a dynamical one if the input-output dependence is not a univocal one, but the univocity can be re-established by the knowledge of the state (some additional information) at the initial time.
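The two forms can be compared numerically: integrating the implicit form (1.4.36) with a crude Euler scheme should reproduce the explicit trajectory φ of (1.4.35). A sketch assuming a constant input (all numeric values are illustrative):

```python
import math

T, K1, K2 = 0.5, 2.0, 1.0        # illustrative parameters
t0, x0, u0 = 0.0, 1.0, 4.0       # constant input u(t) = u0

# implicit form (1.4.36): integrate xdot = -(1/T)x + (K1/T)u by Euler
dt, x = 1e-4, x0
for _ in range(10000):           # from t0 = 0 up to t = 1
    x += dt * (-(1.0 / T) * x + (K1 / T) * u0)

# explicit form (1.4.35): the trajectory phi, constant-input case
e = math.exp(-(1.0 - t0) / T)
x_explicit = e * x0 + K1 * u0 * (1.0 - e)
y = K2 * x_explicit              # the output relation g

print(abs(x - x_explicit) < 1e-2)   # True: both forms agree
```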
1.5. Examples of Dynamical Systems.
1.5.1. Differential Systems with Lumped Parameters.
These systems are continuous time systems. Both the input u and the output y are time function vectors,

u ∈ Ω,  u − a (p × 1) vector,  u = [u_1, u_2, ... u_p]^T
y ∈ Γ,  y − an (r × 1) vector,  y = [y_1, y_2, ... y_r]^T    (1.5.1)
The input-output (io) relation is a set of differential equations:

F_i(y, y^(1), ..., y^(n_i), u, ..., u^(m_i), t) = 0,  i = 1, ..., r    (1.5.2)

The system dimension, or the order of the system, is n ≤ Σ_{i=1}^{r} n_i.
The standard form of the state equations of such a system is obtained by transforming the above system of differential equations into n differential equations of the first order (the Cauchy normal form), which do not contain the input time derivatives, as in (1.5.3),
ẋ(t) = f(x(t), u(t), t),  t ≥ t_0,  x(t_0) = x_0
y(t) = g(x(t), u(t), t)    (1.5.3)
u ∈ Ω

This is the implicit form of the dynamical system

S = {Ω, f, g} or S = {Ω, f, g, x}

where:
f is a function expressing a differential equation,
g expresses an algebraical relation, x is an n-vector.
In (1.5.3) the first equation is called the proper state equation and the second is called the output relation.
Here x(t) = (x_1(t), ..., x_n(t))^T is just the state of the system.
The number n of the elements of this vector is the dimension of the system or the system order.
Of course, f and g are vector functions,

f(x(t),u(t),t) = [f_1(x(t),u(t),t)  f_2(x(t),u(t),t)  . .  f_n(x(t),u(t),t)]^T
g(x(t),u(t),t) = [g_1(x(t),u(t),t)  g_2(x(t),u(t),t)  . .  g_r(x(t),u(t),t)]^T

The dimension n of the state vector x has no connection with r, the number of outputs, or p, the number of inputs.
If the function f(x,u,t) satisfies the Lipschitz conditions with respect to the variable x, then the solution x(t), x(t_0) = x_0, exists and is unique for any t ≥ t_0.
The system is called a time-invariant or autonomous system if the time variable t does not explicitly appear in f and g, and its form is:

ẋ(t) = f(x(t), u(t)),  t ≥ t_0,  x(t_0) = x_0
y(t) = g(x(t), u(t))    (1.5.4)
u ∈ Ω

written in short

ẋ = f(x, u)
y = g(x, u)    (1.5.5)
If the functions f and g are linear with respect to x and u, the system is called a continuous-time linear system.
The state equations are

S:  ẋ(t) = A(t)x(t) + B(t)u(t)
    y(t) = C(t)x(t) + D(t)u(t)    (1.5.6)

Because the matrices A, B, C, D depend on the time variable t, we call this a multivariable time-varying system: multi-input (p inputs), multi-output (r outputs).
The matrices have the following meanings:
A(t), (n×n), the system matrix;
B(t), (n×p), the input (command) matrix;
C(t), (r×n), the output matrix;
D(t), (r×p), the auxiliary matrix or the straight input-output connection matrix.
The state equations of a single variable linear system, or single-input (p=1), single-output (r=1) system, are

S:  ẋ(t) = A(t)x(t) + b(t)u(t)
    y(t) = c^T(t)x(t) + d(t)u(t)    (1.5.7)

In this case:
u(t) and y(t) are scalars,
the matrix B(t) degenerates to a column vector b(t),
the matrix C(t) degenerates to a row vector c^T(t) and
the matrix D(t) degenerates to a scalar d(t).
If all these matrices do not depend on t (are constant ones), the system is called a linear time-invariant (dynamical) system (LTI), having the form

S:  ẋ = Ax + Bu
    y = Cx + Du    (1.5.8)

We observe that in any form of the state equations the input time derivatives do not appear.
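The constant-coefficient equations (1.5.8) can be simulated in a few lines. A minimal Euler-integration sketch in plain Python (the double-integrator matrices chosen here are an illustrative example, not taken from the text):

```python
def lti_step(A, B, C, D, x, u, dt):
    """One Euler step of xdot = Ax + Bu, plus the output y = Cx + Du.
    A, B, C, D are lists of rows; x and u are lists (vectors)."""
    mul = lambda M, v: [sum(m * vi for m, vi in zip(row, v)) for row in M]
    xdot = [a + b for a, b in zip(mul(A, x), mul(B, u))]
    x_next = [xi + dt * di for xi, di in zip(x, xdot)]
    y = [a + b for a, b in zip(mul(C, x), mul(D, u))]
    return x_next, y

# illustrative double integrator: n = 2 states, p = 1 input, r = 1 output
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

x, dt = [0.0, 0.0], 0.001
for _ in range(1000):                 # simulate 1 s with u = 1
    x, y = lti_step(A, B, C, D, x, [1.0], dt)
print(round(x[1], 3))                 # velocity after 1 s of unit input: 1.0
```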
1.5.2. Time Delay Systems (Dead-Time Systems).
Somewhere in these systems at least one delay operator exists; that means in the physical systems there are elements in which the information is transmitted with a finite speed.
The state equation has the form

ẋ(t) = f(x(t), x(t − τ), u(t), t),  t ≥ t_0    (1.5.9)
y(t) = g(x(t), x(t − τ), u(t), t)    (1.5.10)

The order of time delay systems is infinite and has nothing to do with the number of elements of the vector x.
In Ex. 1.4.1. we discussed the pure time delay element as the simplest dead time system. Let us now consider a more complex example of a time delay system.
Example 1.5.2.1. Time Delay Electronic Device.
Let us consider an electronic device as depicted in the principle diagram from Fig. 1.5.1.
The pure time delay element is a device consisting of a tape based record and play structure. The two heads, the record-head and the play-head, are positioned at a distance d, so it will take the tape τ seconds to pass from one head to the other.
The device has amplifiers, so we can consider the signals as being voltages between −10 V and +10 V, for example.

Figure no. 1.5.1. (two operational amplifier stages, with resistors R_0, R_1, R_2, R_i and capacitor C_i, connected through a pure time delay element, w(t) = y(t−τ), in the loop; signals u(t), v(t), w(t), y(t))

x(t) = ∫_{t_0}^{t} ẋ(τ)dτ + x(t_0)    (1.5.11)

ẋ(t) = x(t − τ) − u(t)  −  the time delay equation    (1.5.12)

The initial state at the time t_0 is x_[t_0−τ, t_0], even if the variable x in the differential equation is a scalar one.
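Equation (1.5.12) can be simulated only if the whole segment x_[t_0−τ, t_0] is supplied, which illustrates why the state, and hence the order, is infinite even though x(t) is a scalar. A minimal Euler sketch with an illustrative constant history and zero input:

```python
# Euler simulation of the time delay equation (1.5.12):
#   xdot(t) = x(t - tau) - u(t)
# The state is the whole segment of x over the last tau seconds,
# kept as a buffer of m = tau/dt samples (illustrative numbers).
tau, dt = 1.0, 0.01
m = int(tau / dt)
buffer = [1.0] * m           # initial state: x(theta) = 1 on [t0 - tau, t0]

x = buffer[-1]
for _ in range(200):         # simulate 2 seconds with zero input u(t) = 0
    xdot = buffer[0] - 0.0   # the delayed value x(t - tau) drives xdot
    x = x + dt * xdot
    buffer = buffer[1:] + [x]
print(x)                     # grows past its initial value 1.0
```

With a positive constant history the delayed feedback keeps x growing even with zero input; changing the shape of the history (not just x(t_0)) changes the whole solution, unlike for the lumped-parameter systems of 1.5.1.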
1.5.3. Discrete-Time Systems.
The state equation is expressed by difference equations; the input u_k, the output y_k and the state x_k are strings of numbers.
The general form of the state equations is:

x_{k+1} = f(x_k, u_k, k)    (1.5.13)
y_k = g(x_k, u_k, k)    (1.5.14)

where k means the present step and k+1 means the next step.
If f and g are linear with respect to x and u, the state equations are:

x_{k+1} = A_k·x_k + B_k·u_k    (1.5.15)
y_k = C_k·x_k + D_k·u_k    (1.5.16)

This is a time varying system.
If the matrices A, B, C, D are constant with respect to the variable k, then the system is a linear discrete time invariant system.
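The recursion (1.5.13)-(1.5.16) is directly executable. A minimal scalar sketch (the numerical values of A, B, C, D and of the input string are illustrative):

```python
# Linear, time-invariant discrete-time equations
#   x[k+1] = A x[k] + B u[k],   y[k] = C x[k] + D u[k]
# for scalar state, input and output (all "matrices" are numbers here).
A, B, C, D = 0.5, 1.0, 2.0, 0.0

x, ys = 0.0, []
for u in [1.0, 1.0, 1.0, 1.0]:   # a constant input string u_k = 1
    ys.append(C * x + D * u)     # output at the present step k
    x = A * x + B * u            # state at the next step k+1
print(ys)                        # [0.0, 2.0, 3.0, 3.5]
```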
1.5.4. Other Types of Systems.
Distributed Parameter Systems.
These are described by partial differential equations. They are infinite dimensional systems.
Finite State Systems (Logical Systems or Finite Automata).
The functions f and g are logical functions.
Stochastic Systems.
All the above systems are called deterministic systems (at any time any variable is well defined). Stochastic systems, on the contrary, are studied based on the so-called probability theory and the theory of random functions.
1.6. General Properties of Dynamical Systems.
For any form of dynamical systems some general properties can be established.
1.6.1. Equivalence Property.
We have a system denoted by

S = S(Ω,f,g) = S(Ω,f,g,x)

where x is mentioned just to know how the state variable will be denoted.
Two states x_a, x_b of the system S are equivalent at the time t = t_0 (k = k_0) if, starting at the initial time from these states as initial states, the output will be the same when we apply the same input:

φ(t, t_0, x_a, u_[t_0,t]) ≡ φ(t, t_0, x_b, u_[t_0,t])
η(t, t_0, x_a, u_[t_0,t]) ≡ η(t, t_0, x_b, u_[t_0,t])

If in a system there are equivalent states, then the system is not in a reduced form; that means that the real dimension of the system is smaller than that defined by the vector x.
Two systems S and S̄,

S = S(Ω,f,g,x) and S̄ = S̄(Ω,f̄,ḡ,x̄),

are called input-output (I/O) equivalent, and we denote this S ≈ S̄, if for any state x ∈ S there exists a state x̄ ∈ S̄, (x̄ = x̄(x)), so that if the same inputs are applied to both systems, with the initial state x for the system S and x̄ for the system S̄, then the outputs are the same.
For linear time invariant differential systems (LTI), denoted by

S:  ẋ = Ax + Bu ;  y = Cx + Du ;  S = S(A, B, C, D, x)
S̄:  ẋ̄ = Āx̄ + B̄u ;  ȳ = C̄x̄ + D̄u ;  S̄ = S̄(Ā, B̄, C̄, D̄, x̄)

if there exists a square matrix T, det T ≠ 0, so that x̄ = Tx, then S ≈ S̄ and we can express

Ā = TAT^{−1},  B̄ = TB,  C̄ = CT^{−1},  D̄ = D

so the above systems are I/O equivalent.
For single-input, single-output (siso) systems,

b̄ = Tb,  c̄^T = c^T·T^{−1}

Indeed, substituting x = T^{−1}x̄ and ẋ = T^{−1}ẋ̄ we obtain

T^{−1}ẋ̄ = A·T^{−1}x̄ + Bu  ⇒  ẋ̄ = (TAT^{−1})x̄ + TBu,  so Ā = TAT^{−1} and B̄ = TB;
y = Cx + Du  ⇒  y = (CT^{−1})x̄ + Du,  so C̄ = CT^{−1} and D̄ = D.
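The relations Ā = TAT^{−1}, B̄ = TB, C̄ = CT^{−1}, D̄ = D can be checked by simulation: starting the transformed system from x̄_0 = Tx_0 must give the same output. A sketch with an illustrative second order system and a diagonal T (all the numbers below are made up for the check):

```python
def mv(M, v):  # matrix-vector product for lists of rows
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def simulate(A, b, c, x0, dt, steps, u0=1.0):
    """Euler-simulate xdot = Ax + b*u, y = c^T x for a constant input u0."""
    x = list(x0)
    for _ in range(steps):
        xdot = [a + bi * u0 for a, bi in zip(mv(A, x), b)]
        x = [xi + dt * di for xi, di in zip(x, xdot)]
    return sum(ci * xi for ci, xi in zip(c, x))   # y at the final time

# original system S = S(A, b, c, x)
A  = [[0.0, 1.0], [-2.0, -3.0]]
b  = [0.0, 1.0]
c  = [1.0, 0.0]
x0 = [1.0, 0.0]

# transformed system with T = diag(2, 5): Abar = T A T^-1, bbar = T b,
# cbar^T = c^T T^-1, and the equivalent initial state x0bar = T x0
Abar  = [[0.0, 2.0 / 5.0], [-5.0, -3.0]]
bbar  = [0.0, 5.0]
cbar  = [0.5, 0.0]
x0bar = [2.0, 0.0]

y1 = simulate(A, b, c, x0, 1e-3, 2000)
y2 = simulate(Abar, bbar, cbar, x0bar, 1e-3, 2000)
print(abs(y1 - y2) < 1e-9)   # True: the two representations are I/O equivalent
```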
Example 1.6.1. Electrical RLC Circuit.
Let us consider a series RLC circuit (terminals a, a') controlled by the voltage u, with the current i and the voltages u_R, u_L, u_C across the resistor, the inductor and the capacitor respectively.
Supposing we are interested in i, u_R, u_L, u_C, the input is u and the output is the vector

y = [i  u_R  u_L  u_C]^T,  r = 4,  p = 1
We can determine the state equations by applying the Kirchhoff theorems,

u_C = (1/C)·q
u_R = R·q̇
u_L = L·q̈
u_R + u_L + u_C = u

where i = q̇ and q̈ = di/dt.
Some variables, but not all of them, can be eliminated (those from algebraical equations only). We choose

x_1 = q,  x_2 = q̇,  where x_2 = i, q̈ = ẋ_2

and obtain

ẋ_1 = x_2
ẋ_2 = −(1/(LC))·x_1 − (R/L)·x_2 + (1/L)·u
y_1 = x_2
y_2 = R·x_2
y_3 = −(1/C)·x_1 − R·x_2 + u
y_4 = (1/C)·x_1

These are the state equations in a form where we have chosen as state components the electrical charge and the current. We can write them in a matrix form:
x = [x_1 ; x_2],  S:  ẋ = Ax + bu ;  y = Cx + du

A = [ 0  1 ; −1/(LC)  −R/L ],  b = [ 0 ; 1/L ],  C = [ 0  1 ; 0  R ; −1/C  −R ; 1/C  0 ],  d = [ 0 ; 0 ; 1 ; 0 ]
We cannot choose as state variables (state vector components), for example, x̄_1 = i, x̄_2 = u_R, because u_R = R·i (there is an algebraical relation between them), but we can choose as state variables, for example,

x̄_1 = u_C,  x̄_2 = u_R  ⇒

ẋ̄_1 = (1/(RC))·x̄_2
ẋ̄_2 = −(R/L)·x̄_1 − (R/L)·x̄_2 + (R/L)·u
y_1 = (1/R)·x̄_2
y_2 = x̄_2
y_3 = −x̄_1 − x̄_2 + u
y_4 = x̄_1
This is another representation of the oriented system,

x̄ = [x̄_1 ; x̄_2]  ⇒  S̄:  ẋ̄ = Āx̄ + b̄u ;  y = C̄x̄ + d̄u

Ā = [ 0  1/(RC) ; −R/L  −R/L ],  b̄ = [ 0 ; R/L ],  C̄ = [ 0  1/R ; 0  1 ; −1  −1 ; 1  0 ],  d̄ = [ 0 ; 0 ; 1 ; 0 ]

S = S(A, b, C, d, x),  S̄ = S̄(Ā, b̄, C̄, d̄, x̄)

The two state vectors are related by

x̄_1 = (1/C)·x_1,  x̄_2 = R·x_2  ⇒  x̄ = Tx,  T = [ 1/C  0 ; 0  R ],  det(T) = R/C ≠ 0
This means that the two abstract systems are equivalent, because for any initial equivalent states we have the same output. We can pass from one representation to the other with the next equations:

Ā = TAT^{−1},  b̄ = Tb,  C̄ = CT^{−1},  d̄ = d

applied to ẋ = Ax + bu, y = Cx + du, where

A = [ 0  1 ; −1/(LC)  −R/L ],  b = [ 0 ; 1/L ],  C = [ 0  1 ; 0  R ; −1/C  −R ; 1/C  0 ],  d = [ 0 ; 0 ; 1 ; 0 ]
Let (x_0, t_0) be given by

x_0 = [ 5 (C) ; 10 (A) ]

that is, 5 coulombs of charge and 10 amperes of current, and, for a given u_[t_0,t], we get y_[t_0,t]. The equivalent initial state of the second representation is

x̄_0 = [ (1/C)·5 ; R·10 ] = [ 1 (V) ; 1000 (V) ]  if C = 5 F and R = 100 Ω.

We can prove that, for this example, the next relations are true:

Ā = TAT^{−1},  b̄ = Tb,  C̄ = CT^{−1},  d̄ = d
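For the two RLC representations, the relation Ā = TAT^{−1} can be verified numerically. A sketch using R = 100 Ω and C = 5 F from the text and an illustrative inductance L:

```python
R, L, C = 100.0, 2.0, 5.0    # R, C from the text; L is an illustrative value

A    = [[0.0, 1.0], [-1.0 / (L * C), -R / L]]       # charge/current form
Abar = [[0.0, 1.0 / (R * C)], [-R / L, -R / L]]     # voltage form

T    = [[1.0 / C, 0.0], [0.0, R]]
Tinv = [[C, 0.0], [0.0, 1.0 / R]]    # inverse of the diagonal T

def mm(X, Y):   # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

TAT = mm(mm(T, A), Tinv)
ok = all(abs(TAT[i][j] - Abar[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)     # True: Abar = T A T^{-1}
```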
1.6.2. Decomposition Property.
For any system we can define the so-called forced response, or zero initial state response, which is the response of the system when the initial state is zero. This zero state is just the neutral element of the set X organised as a linear space. We denote the forced response by y_f, x_f.
We can also define the free response, that is, the system response when the input is the zero segment u = 0_[t_0,t] ⇔ u(τ) = 0 ∀τ; this zero means the neutral element of U organised as a linear space. By y_l and x_l the free response is denoted.
A system S has the decomposition property with respect to the state or the output if they can be expressed as

y(t) = y_l(t) + y_f(t)

and

x(t) = x_l(t) + x_f(t).
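The decomposition can be verified on the first order system used throughout this chapter, whose iiso relation is linear in the pair (x_0, u). A sketch with a constant input and illustrative numbers:

```python
import math

T, K1, K2 = 1.0, 1.0, 2.0     # illustrative parameters

def response(t, x0, u0):
    """y(t) of the first-order system, constant input u(t) = u0, t0 = 0."""
    e = math.exp(-t / T)
    return K2 * (e * x0 + K1 * u0 * (1.0 - e))

t, x0, u0 = 0.7, 3.0, 2.0
y_total  = response(t, x0, u0)
y_free   = response(t, x0, 0.0)    # zero input: the free response y_l
y_forced = response(t, 0.0, u0)    # zero initial state: the forced response y_f
print(abs(y_total - (y_free + y_forced)) < 1e-12)   # True
```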
1.6.3. Linearity Property.
A system is linear if its response with respect to state and output is a linear combination of the pair: initial state and input.

(x_a, t_0), u^a_[t_0,t] → x_a(t), y_a(t)
(x_b, t_0), u^b_[t_0,t] → x_b(t), y_b(t)

The system is linear if:

(x, t_0) = α·x_a + β·x_b  and  u_[t_0,t] = α·u^a + β·u^b  ⇒  x(t) = α·x_a(t) + β·x_b(t)  and  y(t) = α·y_a(t) + β·y_b(t),  for any scalars α, β.

If the system is expressed in an implicit form by state equations, it is linear if the two functions involved in these equations are linear with respect to the two variables x and u.
Example 1.6.2. Example of a Nonlinear System.
The system

y(t) = 2u(t) + 4

(a gain of 2 followed by the addition of the constant 4) is not linear.
We can prove this: for

y_a = 2u_a + 4,  y_b = 2u_b + 4

and the input α·u_a + β·u_b we obtain

y = 2(α·u_a + β·u_b) + 4 = 2α·u_a + 2β·u_b + 4 ≠ α·y_a + β·y_b

The response of a linear system for zero initial state and zero input is just zero.
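The superposition test of Example 1.6.2 can be carried out numerically (the input values and the choice α = β = 1 are illustrative):

```python
def g(u):               # the static system y = 2u + 4 from Example 1.6.2
    return 2.0 * u + 4.0

ua, ub, alpha, beta = 1.0, 2.0, 1.0, 1.0
lhs = g(alpha * ua + beta * ub)        # response to the combined input
rhs = alpha * g(ua) + beta * g(ub)     # combination of the two responses
print(lhs, rhs)    # 10.0 14.0 -> superposition fails, the system is not linear
```

The difference, 4(α + β) − 4, vanishes only when α + β = 1, so the system fails the linearity condition for general α, β.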
1.6.4. Time Invariance Property.
A system is time-invariant if its state and output responses do not depend on the initial time moment, when these responses are determined by the same initial state and the same translated input.
A system is a time-invariant one if in the state equations the time variable t does not appear explicitly. If a system is time-invariant, the initial time always appears through the binomial (t − t_0). We can express (t − τ) as: t − τ = (t − t_0) − (τ − t_0).
1.6.5. Controllability Property.
Let S be a dynamical system. A state x_0 at the time t_0 is controllable to a state (x_1, t_1) if there exists an admissible input u_[t_0,t_1] ⊂ Ω which transfers the state (x_0, t_0) to the state (x_1, t_1). If this property takes place for any x_0 ∈ X, the system is called completely controllable. If in addition this property takes place for any finite [t_0, t_1], the system is called totally controllable.
There are a lot of criteria for expressing this property.
1.6.6. Observability Property.
We say that the state x0 at the moment t0 is observable at a time moment
t1 ≥ t0 if this state can be uniquely determined knowing the input U_[t0,t1]
and the output Y_[t0,t1].
If this property holds for any x0 ∈ X, the system is called completely
observable. If, in addition, this property holds for any finite interval [t0, t1], the
system is called totally observable.
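Dually to controllability, for LTI systems a Kalman rank test applies: the pair (A, c) is completely observable if and only if the observability matrix [c; cA; ...; cA^(n−1)] has rank n. A minimal sketch with illustrative matrices:

```python
import numpy as np

def observability_matrix(A, c):
    """Stack [c; cA; ...; c A^(n-1)] row-wise."""
    n = A.shape[0]
    rows = [c]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.array([[0.0, 1.0], [-12.0, -7.0]])
c = np.array([[1.0, 0.0]])
Qo = observability_matrix(A, c)
print(np.linalg.matrix_rank(Qo))   # 2 => completely observable
```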
1.6.7. Stability Property.
This is one of the most important general properties of a dynamical system.
It will be studied in detail later on.
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).
The Linear Time-Invariant Differential Systems, for short LTI, are the most
studied in systems theory, mainly because the mathematical tools are easier to
apply.
Even if the majority of systems encountered in nature are nonlinear,
their behaviour can be rather well studied by linear systems under some
particular conditions, such as description around an operating point or for small
disturbances. In addition, LTI systems benefit, for their study, from the Laplace
transform, which translates the complicated operations on differential equations
into simple algebraic operations in the complex domain.
However, some properties, like controllability, observability, optimality,
are better studied in the time domain by state equations, even for LTI systems.
Mainly, Linear Time-Invariant Differential Systems means systems
described by ordinary linear differential equations with constant coefficients.
Sometimes they are also called continuous time systems.
Two categories of LTI will be distinguished:
SISO : Single Input Single Output LTI
MIMO : Multi Input Multi Output LTI.
2.1. InputOutput Description of SISO LTI.
Let us suppose that we have a time invariant system with one input and
one output.
The abstract system is expressed by an input-output relation (IOR), as a
differential equation, or by state equations.
Which of the two forms is used depends on the physical aspect, on our
knowledge regarding the physical system, and on our qualification and ability
to obtain the mathematical equations.
The block diagram of such a system is depicted in Fig. 2.1.1.

Figure no. 2.1.1. (Block diagram: input u(t), U(s); block H(s); output y(t), Y(s).)
Let us consider the input u ∈ Ω, where Ω is the set of scalar functions which
admit the Laplace transform. The output is a function y ∈ Γ, where the set Γ will be,
for these systems, a set of functions which admit the Laplace transform too.
The input-output relation (IOR) is expressed by an ordinary linear
differential equation, with constant coefficients, of order n,

(2.1.1)  Σ_{k=0}^{n} a_k·y^(k)(t) = Σ_{k=0}^{m} b_k·u^(k)(t) ,  a_n ≠ 0

where we understand:  y^(k)(t) = d^k y(t)/dt^k ;  u^(k)(t) = d^k u(t)/dt^k .
The maximum order of the output derivative will determine the order of
the system.
Three types of systems can be distinguished depending on the ratio
between m and n:
1. m<n : the system is called strictly proper (causal) system.
2. m=n : the system is called proper system.
3. m>n : the system is called improper system.
The improper systems are not physically realisable. They could represent
an ideally desired mathematical behaviour of some physical objects, but one
impossible to obtain in the real world.
For example, the system described by the IOR y(t) = du(t)/dt, that means
n = 0, m = 1, a_0 = 1, b_0 = 0, b_1 = 1, represents a derivative element. It cannot
process any input from the set Ω of admissible inputs, which can contain
functions with discontinuities or which are not differentiable.
A rigorous mathematical treatment can be performed by using the theory
of distributions, but those results are not essential for our object of study.
Because of that, our attention will concentrate on the strictly proper and
proper systems only.
The IOR (2.1.1) will be written as

(2.1.2)  Σ_{k=0}^{n} a_k·y^(k)(t) = Σ_{k=0}^{n} b_k·u^(k)(t) ,  a_n ≠ 0

If it is mentioned that b_n = 0, this means the system is a strictly proper one.
Also, if m < n we can consider that b_n = 0, ..., b_{m+1} = 0.
The input-output relation (IOR) can be very easily expressed in the complex
domain by applying the Laplace transform to the relation (2.1.2).
As we remember,

(2.1.3)  L{y^(k)(t)} = s^k·Y(s) − Σ_{i=0}^{k−1} y^(k−i−1)(0+)·s^i ,  k ≥ 1

(2.1.4)  L{u^(k)(t)} = s^k·U(s) − Σ_{i=0}^{k−1} u^(k−i−1)(0+)·s^i ,  k ≥ 1

where we have denoted by

(2.1.5)  Y(s) = L{y(t)} ,  U(s) = L{u(t)}

the Laplace transforms of the output and input respectively. For the moment the
convergence abscissas are not mentioned.
In (2.1.3), (2.1.4) the initial conditions y^(k−i−1)(0+), u^(k−i−1)(0+) are defined
as right side limits. For simplicity we shall denote them by y^(k−i−1)(0), u^(k−i−1)(0).
One obtains,

(Σ_{k=0}^{n} a_k·s^k)·Y(s) − Σ_{k=1}^{n} a_k·[Σ_{i=0}^{k−1} y^(k−i−1)(0)·s^i] = (Σ_{k=0}^{n} b_k·s^k)·U(s) − Σ_{k=1}^{n} b_k·[Σ_{i=0}^{k−1} u^(k−i−1)(0)·s^i]
from where Y(s) can be extracted as

[Σ_{k=0}^{n} a_k·s^k]·Y(s) = [Σ_{k=0}^{n} b_k·s^k]·U(s) + I(s)

(2.1.6)  Y(s) = [M(s)/L(s)]·U(s) + I(s)/L(s) = Y_f(s) + Y_l(s)
where,

(2.1.7)  M(s) = Σ_{k=0}^{n} b_k·s^k = b_n·s^n + b_{n−1}·s^{n−1} + ... + b_1·s + b_0

(2.1.8)  L(s) = Σ_{k=0}^{n} a_k·s^k = a_n·s^n + a_{n−1}·s^{n−1} + ... + a_1·s + a_0

(2.1.9)  I(s) = Σ_{k=1}^{n} a_k·[Σ_{i=0}^{k−1} y^(k−i−1)(0)·s^i] − Σ_{k=1}^{n} b_k·[Σ_{i=0}^{k−1} u^(k−i−1)(0)·s^i]
As we can observe from (2.1.6), the output appears as a sum of two
components, called the forced response y_f(t) and the free response y_l(t), so, in the
complex domain, the system response is,

(2.1.10)  Y(s) = Y_f(s) + Y_l(s)
where,

(2.1.11)  Y_f(s) = [M(s)/L(s)]·U(s) = H(s)·U(s)

is the forced response Laplace transform, which depends on the input only, and

(2.1.12)  Y_l(s) = I(s)/L(s)

is the free response Laplace transform, which depends on the initial conditions
only. If the initial conditions are zero then I(s) = 0 and Y(s) = Y_f(s).
If the input u(t) ≡ 0, ∀t ≥ 0, then U(s) = 0 and Y(s) = Y_l(s). These express
the decomposition property.
Any linear system has the decomposition property.
The forced response Y_f(s) expresses the input-output behaviour
(io response), which does not depend on the system state (because it is supposed
to be zero) or on how the system internal description is organised (how the
system state is defined).
The free response Y_l(s) expresses the initial state-output behaviour
(iso response), which does not depend on the input (because it is supposed to be
zero) but does depend on how the system internal description is organised (how the
system state is defined).
We can now define the very important notion of the transfer function (TF).
The transfer function of a system (TF), denoted by H(s), is the ratio
between the Laplace transform of the output and the Laplace transform of the
input which determined that output in zero initial conditions (z.i.c.), if this ratio
is the same for any input variation.

(2.1.13)  H(s) = [Y(s)/U(s)]|_{z.i.c.} ,  the same for ∀U(s)
In the case of the SISO LTI, as we can observe from (2.1.6) ... (2.1.11),
the transfer function is

(2.1.14)  H(s) = M(s)/L(s) = (b_n·s^n + b_{n−1}·s^{n−1} + ... + b_1·s + b_0) / (a_n·s^n + a_{n−1}·s^{n−1} + ... + a_1·s + a_0)

always a ratio of two polynomials (a rational function).
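As a quick numeric illustration, the rational function (2.1.14) can be evaluated directly from its coefficient lists. The sketch below uses the polynomials of Example 2.1.1 below purely as sample data:

```python
# Evaluating the rational transfer function (2.1.14) at a point s,
# from coefficient lists [a_0, ..., a_n] and [b_0, ..., b_n].
# A minimal sketch; the coefficients are illustrative.

def polyval_ascending(coeffs, s):
    """Evaluate c_0 + c_1 s + ... + c_n s^n."""
    return sum(c * s**k for k, c in enumerate(coeffs))

def transfer_function(b_coeffs, a_coeffs):
    return lambda s: polyval_ascending(b_coeffs, s) / polyval_ascending(a_coeffs, s)

# H(s) = (s^2 + 4s + 3) / (s^2 + 7s + 12):
H = transfer_function([3.0, 4.0, 1.0], [12.0, 7.0, 1.0])
print(H(1.0))   # (1 + 4 + 3)/(1 + 7 + 12) = 8/20 = 0.4
```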
Sometimes we denote this SISO LTI system as

(2.1.15)  S = TF{M, L} ⇔ TF{S}
There are systems to which a transfer function can be defined but it is not
a rational function, as for example systems with time delay or systems defined by
partial differential equations.
If the polynomials L(s) and M(s) have no common factor (they are
coprime), their ratio expresses the so-called nominal transfer function
(NTF).
The transfer function expresses only the input-output (io) behaviour of a
system, which is just the forced response, that means the system response in
zero initial conditions.
The same input-output behaviour can be assured by a family of transfer
functions if the numerator and the denominator have a common factor, as in

(2.1.16)  M(s) = M'(s)·P(s) ,  L(s) = L'(s)·P(s)
then

(2.1.17)  H(s) = [M'(s)·P(s)] / [L'(s)·P(s)]  ⇒  H(s) = M'(s)/L'(s)
If the two polynomials M'(s) and L'(s) are coprime,
gcd{M'(s), L'(s)} = 1,
then the last expression of H(s) is the reduced form of the transfer function
(RTF). In such a case, when M(s) and L(s) have common factors, some
properties such as controllability and/or observability are not satisfied.
The order of a system is expressed by the degree of the denominator
polynomial of the transfer function, that is n = degree{L(s)}.
It results that

(2.1.18)  degree{L'(s)} = n' < n = degree{L(s)},

so systems can have different orders for their internal description but all of them
will have the same forced response.
If common factors appear in a transfer function and are simplified, then
the io behaviour (the forced response) can be described by lower order abstract
systems, the order being the degree of the transfer function denominator
polynomial after simplification, but the iso behaviour (the free response) still
remains of the order the transfer function had before the simplification.
Example 2.1.1. Proper System Described by a Differential Equation.
Let us consider a proper system with n = 2, m = 2, described by the differential
equation,

(2.1.19)  y''(t) + 7y'(t) + 12y(t) = u''(t) + 4u'(t) + 3u(t)

whose Laplace transform in z.i.c. is

s^2·Y(s) + 7s·Y(s) + 12·Y(s) = s^2·U(s) + 4s·U(s) + 3·U(s)

from where the TF is

H(s) = [Y(s)/U(s)]|_{z.i.c.} = (s^2 + 4s + 3)/(s^2 + 7s + 12) = M(s)/L(s)  ⇒  n = 2

so we can consider the system

S = TF{M, L} = TF{s^2 + 4s + 3, s^2 + 7s + 12} = TF{(s + 1)(s + 3), (s + 4)(s + 3)}
But we observe that,

(2.1.20)  H(s) = (s + 1)(s + 3) / [(s + 4)(s + 3)] = (s + 1)/(s + 4) = M'(s)/L'(s)  ⇒  n' = 1 (order = 1)

the last ratio being the NTF.
The transfer function is related to the forced response

Y_f(s) = H(s)·U(s)

Y_f(s) = [(s + 1)(s + 3) / ((s + 4)(s + 3))]·U(s)  ⇒  Y_f(s) = [(s + 1)/(s + 4)]·U(s)

The io behaviour is of the first order even if the differential equation
(2.1.19) is of the second order. Of course, the solution of the system expressed
by the differential equation (2.1.19) depends on two initial conditions.
From this point of view the system is of the second order.
The internal behaviour of the system is represented by two state variables.
If this differential equation is represented by state equations, they will be of the
second order.
However, the time domain equivalent expression of the NTF

H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s) = [Y(s)/U(s)]|_{z.i.c.}

is a first order differential equation,

y'(t) + 4y(t) = u'(t) + u(t)

which describes only a part of the system given by (2.1.19).
Let us now consider another system S' described by the differential
equation,

(2.1.21)  y'(t) + 4y(t) = u'(t) + u(t)

whose Laplace transform in z.i.c. is

s·Y(s) + 4·Y(s) = s·U(s) + U(s)

and its transfer function is
H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s) ,  n' = 1 (order = 1)

This system can be denoted as

S' = TF{M', L'} = TF{(s + 1), (s + 4)}

which is a first order one; its general solution depends on only one initial
condition.
The forced response of S' is,

Y_f(s) = [(s + 1)/(s + 4)]·U(s)

identical with the forced response of S.
We can say that the system S' expresses only some aspects of the internal
behaviour of S, that means only the modes (the movements) that depend
on the input and which affect the output.
This is the so-called completely controllable and completely observable
part of the system S.
If M(s) and L(s) are coprime, the internal behaviour is
completely related by the io behaviour of the system, that means by the transfer
function.
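The input-output equivalence of S and S' can be checked numerically. The sketch below simulates one assumed state realisation of each (controllable canonical form, with the direct term d = 1 split off); the realisation matrices are an assumption, not given in the text. Since the forced responses are identical, the two step responses should coincide:

```python
import numpy as np

# Forward-Euler simulation, zero initial state, unit step input:
# S : H(s) = (s^2+4s+3)/(s^2+7s+12) = 1 + (-3s-9)/(s^2+7s+12)
# S': H'(s) = (s+1)/(s+4)           = 1 + (-3)/(s+4)
# The (A, b, c, d) below are assumed canonical-form realisations.

def simulate(A, b, c, d, u, dt, steps):
    x = np.zeros(A.shape[0])
    ys = []
    for _ in range(steps):
        ys.append(float(c @ x + d * u))
        x = x + dt * (A @ x + b * u)
    return np.array(ys)

dt, steps, u = 1e-3, 5000, 1.0
y2 = simulate(np.array([[0.0, 1.0], [-12.0, -7.0]]),
              np.array([0.0, 1.0]), np.array([-9.0, -3.0]), 1.0, u, dt, steps)
y1 = simulate(np.array([[-4.0]]),
              np.array([1.0]), np.array([-3.0]), 1.0, u, dt, steps)
print(np.max(np.abs(y1 - y2)))   # ~ 0: identical forced responses
```

Both responses settle at H(0) = 3/12 = 0.25, while the free responses of the two realisations would differ.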
2.2. State Space Description of SISO LTI.
Sometimes the abstract system attached to an oriented system, as depicted
in Fig. 2.1.1., can be directly determined, by using different methods, in the
form of the state equations (SS),

(2.2.1)  ẋ = Ax + bu
(2.2.2)  y = c^T·x + du

where the dimensions and names of the matrices and vectors are:
x , (n×1) : the state column vector, x = [x_1, x_2, ..., x_n]^T
A , (n×n) : the system matrix
b , (n×1) : the command column vector
c , (n×1) : the output column vector
d , (1×1) : the coefficient of the direct input-output connection.
The relation (2.2.1) is called the proper state equation and the relation
(2.2.2) is called the output equation (relation).
In the previously analysed RLC-circuit example, we obtained directly the
state equations, expressing all the mathematical relations between variables in
Cauchy normal form (a system of first order differential equations).
Such a system form can be denoted as

(2.2.3)  S = SS{A, b, c, d, x}

where we understand that it is about the relations (2.2.1) and (2.2.2). Also the
variables x, u, y have to be understood as time functions x(t), u(t), y(t) which
admit the Laplace transform.
In (2.2.3) we also mentioned the letter x just to show how we have denoted
the state vector. Because a system of the form (2.2.1) and (2.2.2) is
completely described by the matrices A, b, c, d only, the system S from (2.2.3) is
sometimes denoted as,

(2.2.4)  S = SS{A, b, c, d} ⇔ SS{S}
The abstract system (2.2.3) can also be obtained not as a procedure of
mathematical modelling of a physical oriented system but as a result of a
synthesis procedure.
Such a form, (2.2.1) and (2.2.2) or (2.2.3), expresses everything about the
system behaviour : the internal and external behaviour.
If a system of order n is given, described by state equations (SS) as
(2.2.1) and (2.2.2) or (2.2.3), then a unique transfer function (TF) H(s) can be
attached to it. This TF is

(2.2.5)  H(s) = c^T·[sI − A]^{−1}·b + d = M(s)/L(s)

which can be obtained by applying the Laplace transform to (2.2.1) and (2.2.2)
in z.i.c. The degree of L(s) is n.
If d ≠ 0, then dg{M(s)} = dg{L(s)} = n, and the system is proper.
If d = 0, then dg{M(s)} < dg{L(s)} = n, and the system is strictly proper.
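Relation (2.2.5) can be checked numerically: for a given realisation, c^T(sI − A)^(−1)b + d must agree with M(s)/L(s) at every s that is not a pole. The (A, b, c, d) below is one assumed realisation of the transfer function of Example 2.1.1 (not taken from the text):

```python
import numpy as np

# Numeric check of (2.2.5): H(s) = c^T (sI - A)^(-1) b + d equals M(s)/L(s)
# for H(s) = (s^2 + 4s + 3)/(s^2 + 7s + 12); realisation is an assumption.

A = np.array([[0.0, 1.0], [-12.0, -7.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[-9.0, -3.0]])
d = 1.0
n = A.shape[0]

def H_ss(s):
    return complex((c @ np.linalg.inv(s * np.eye(n) - A) @ b)[0, 0] + d)

def H_tf(s):
    return (s**2 + 4*s + 3) / (s**2 + 7*s + 12)

for s in (1.0, 2.0 + 1.0j, -0.5):
    assert abs(H_ss(s) - H_tf(s)) < 1e-9
print("H(s) from state space matches M(s)/L(s)")
```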
If a system is first described by a differential equation like (2.1.2) or by a
transfer function (TF) like (2.1.14) of order n, then an infinite number of
systems described by state equations (SS) as (2.2.1) and (2.2.2) or (2.2.3) can be
attached to it.
However, the TF (2.1.14) depends on at most 2n + 2 coefficients, of
which, because of the ratio, only 2n + 1 are essential.
The state equations (SS) as (2.2.1) and (2.2.2) or (2.2.3) depend, through
A, b, c, d respectively, on n·n + n + n + 1 = n^2 + 2n + 1 > 2n + 2 coefficients.
The determination of SS from TF, which is called the realisation of the
transfer function by state equations, means to determine n^2 + 2n + 1 unknown
variables from (n + 1) + (n + 1) equations obtained from the two identities in s,
(2.2.6)  nom{c^T·[sI − A]^{−1}·b + d} ≡ M(s)  and  den{c^T·[sI − A]^{−1}·b + d} ≡ L(s)

where: "nom{}" means "the numerator of {}"
"den{}" means "the denominator of {}".
We shall denote this process by

(2.2.7)  TF{S} → SS{S} ,  ⇔  TF{M, L} → SS{A, b, c, d}
If the TF H(s) of the system S is a reducible one, that means,

(2.2.8)  H(s) = M(s)/L(s) = [M'(s)·P(s)] / [L'(s)·P(s)]

then the expression obtained after simplification,

(2.2.9)  H'(s) = M'(s)/L'(s) ,  dg{L'(s)} = n' < n = dg{L(s)}

can be interpreted as the transfer function of another system S' of order n', for
which one of its state realisations is,

(2.2.10)  S' = SS{A', b', c', d', x'}

that means,

(2.2.11)  TF{S'} → SS{S'} ,  ⇔  TF{M', L'} → SS{A', b', c', d', x'}
The systems S and S' are input-output equivalent. They determine the
same forced response but not the same free response.
If H'(s) is itself a nominal transfer function (NTF), that means M'(s) and
L'(s) are coprime, then the system

(2.2.12)  S' = SS{A', b', c', d', x'}

represents the so-called input-output equivalent minimal realisation, for short
the minimal realisation, of the system

(2.2.13)  S = TF{M, L}

and we denote this by

(2.2.14)  S →(m.r.) S' ,  ⇔  TF{M, L} →(m.r.) SS{A', b', c', d', x'}

The transitions from one form of realisation to another, mentioning the
equivalence and univalence of these realisations, are presented in the diagram
in Fig. 2.2.1.
There are methods to obtain different SS from a TF which will be studied.
Some forms of SS have specific advantages that are useful for different
applications.
Figure no. 2.2.1. (Diagram of realisation transitions: the system S = SS(A, b, c, d, x) of order n corresponds, by a univoque transform, to its TF representation H(s) = M(s)/L(s), dg{L(s)} = n; similarity transforms connect S to other state realisations of order n, e.g. S_1 = SS(A_1, b_1, c_1, d_1, x_1), which are both input-output and input-state-output equivalent to S. Reduction by simplifying the common factors yields H'(s) = M'(s)/L'(s), dg{L'(s)} = n' < n, whose state realisations S' = SS(A', b', c', d', x') of order n' are input-output only equivalent to S.)
In Fig. 2.2.2. an example of two such systems is presented: a third order
system with transfer function

H(s) = (100s^2 + 2s + 1) / (10000s^3 + 300s^2 + 102s + 1) = 0.01(s^2 + 0.02s + 0.01) / [(s + 0.01)(s^2 + 0.02s + 0.01)]

together with one of its state realisations, and its first order reduced form

H'(s) = 1/(100s + 1)

with its (scalar) state realisation. The step responses of H(s) and H'(s)
coincide, while the free responses, from x(0) = [1 1 1]^T or x(0) = [1 10 1]^T for
H(s) and x(0) = 1 for H'(s), differ.

Figure no. 2.2.2. (Step responses of H(s) and H'(s) and free responses; time axis 0 ... 600 sec., amplitude 0 ... 1.)
2.3. Input-Output Description of MIMO LTI.
As mentioned in Chap. 2.1., MIMO LTI means Multi Input-Multi
Output Linear Time Invariant Systems, for short "multivariable systems".
They are systems with several inputs and several outputs, described by a
system of linear constant coefficients ordinary differential equations.
Such a system, denoted by S, with p inputs u_1, ..., u_p and r outputs
y_1, ..., y_r, is represented in a block diagram having the input and output
components explicitly represented, or considering one input, the p-column vector
u, and one output, the r-column vector y,

(2.3.1)  u = [u_1, ..., u_p]^T ,  y = [y_1, ..., y_r]^T

as depicted in Fig. 2.3.1.

Figure no. 2.3.1. (Two equivalent block diagrams of S: one with the components u_1, ..., u_p and y_1, ..., y_r drawn explicitly, and one with the (p×1) input vector u and the (r×1) output vector y.)
Usually the vectors u, y, x are denoted simply by u, y, x, without
boldface fonts, if no confusion will arise.
The input-output relation (IOR) is expressed by a system of r linear
constant coefficients ordinary differential equations, of the form,

(2.3.2)  Σ_{i=1}^{r} [Σ_{k=0}^{n_{i,j}} a^j_{k,i}·y_i^(k)(t)] = Σ_{i=1}^{p} [Σ_{k=0}^{m_{i,j}} b^j_{k,i}·u_i^(k)(t)] ,  j = 1, ..., r
This system of differential equations can be expressed in the complex
domain by applying the Laplace transform to (2.3.2). For an easier
understanding we shall consider here zero initial conditions (z.i.c.).

Σ_{i=1}^{r} [Σ_{k=0}^{n_{i,j}} a^j_{k,i}·s^k]·Y_i(s) = Σ_{i=1}^{p} [Σ_{k=0}^{m_{i,j}} b^j_{k,i}·s^k]·U_i(s)

where the bracketed polynomials are denoted L_{j,i}(s) and M_{j,i}(s) respectively, so

(2.3.3)  Σ_{i=1}^{r} L_{j,i}(s)·Y_i(s) = Σ_{i=1}^{p} M_{j,i}(s)·U_i(s) ,  j = 1, ..., r
We can define the vectors:

(2.3.4)  Y(s) = (Y_1(s), ..., Y_r(s))^T
(2.3.5)  U(s) = (U_1(s), ..., U_p(s))^T  ⇒
(2.3.6)  L(s)·Y(s) = M(s)·U(s)

where:

(2.3.7)  L(s) = {L_{j,i}(s)} , 1 ≤ j ≤ r , 1 ≤ i ≤ r , an (r×r) matrix
(2.3.8)  M(s) = {M_{j,i}(s)} , 1 ≤ j ≤ r , 1 ≤ i ≤ p , an (r×p) matrix
L(s) and M(s) are called matrices of polynomials: any component of
each matrix is a polynomial. The io behaviour of a multivariable system in z.i.c.
is expressed by

(2.3.9)  Y(s) = H(s)·U(s)

where

(2.3.10)  H(s) = L^{−1}(s)·M(s)

is called the Transfer Matrix (TM).
(2.3.11)  H(s) = [ H_11(s) ... H_1i(s) ... H_1p(s)
                   ...
                   H_j1(s) ... H_ji(s) ... H_jp(s)
                   ...
                   H_r1(s) ... H_ri(s) ... H_rp(s) ]
The j-th component of the output is given by,

(2.3.12)  Y_j(s) = Σ_{i=1}^{p} H_ji(s)·U_i(s)

H(s) is a rational matrix: each of its components is a rational function. Any
component of this rational matrix, for example H_ji, can be interpreted as a
transfer function between the input U_i and the output Y_j, that means:

(2.3.13)  H_ji(s) = Y_j(s)/U_i(s) , in zero initial conditions, with U_k(s) ≡ 0 ∀s if k ≠ i

This rational matrix is an operator if it is the same for any expression of U(s);
it does not depend on the input.
Example.
Suppose we have a system with 2 inputs and 2 outputs (p = 2, r = 2), described
by a system of 2 differential equations,

2y_1^(4) + 3y_1 + 6y_2^(1) + 3y_2 = 3u_1^(3) + u_1^(1) + 5u_2   (j = 1)
3y_1^(2) + 2y_1^(1) + 5y_1 + 8y_2 = 2u_1^(1) + 5u_1 + 3u_2^(2) + 4u_2^(1) + 5u_2   (j = 2)

whose Laplace transform in z.i.c. is,

(2s^4 + 3)·Y_1(s) + (6s + 3)·Y_2(s) = (3s^3 + s)·U_1(s) + 5·U_2(s)
(3s^2 + 2s + 5)·Y_1(s) + 8·Y_2(s) = (2s + 5)·U_1(s) + (3s^2 + 4s + 5)·U_2(s)

⇒  L(s) = [ 2s^4 + 3 , 6s + 3 ; 3s^2 + 2s + 5 , 8 ] ,  M(s) = [ 3s^3 + s , 5 ; 2s + 5 , 3s^2 + 4s + 5 ]

⇒ the TM is

H(s) = L^{−1}(s)·M(s)
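At any fixed s where det L(s) ≠ 0, the transfer matrix of this example can be evaluated numerically, which is a convenient way to spot-check the symbolic computation. A minimal sketch at the sample point s = 1:

```python
import numpy as np

# Numeric evaluation of H(s) = L^(-1)(s) M(s) from the example above,
# at the sample point s = 1 (where L(s) is invertible).

def L_of(s):
    return np.array([[2*s**4 + 3, 6*s + 3],
                     [3*s**2 + 2*s + 5, 8]], dtype=float)

def M_of(s):
    return np.array([[3*s**3 + s, 5],
                     [2*s + 5, 3*s**2 + 4*s + 5]], dtype=float)

s = 1.0
H = np.linalg.solve(L_of(s), M_of(s))   # solves L(s) H = M(s)
# H(1) = [[0.62, 1.36], [0.1, -0.2]] (hand-checked: det L(1) = -50)
print(H)
```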
For a system, we can directly obtain, by managing the mathematical
relations between the system variables, the state equations (SS),

(2.3.14)  ẋ = Ax + Bu
(2.3.15)  y = Cx + Du

where the dimensions and names of the matrices and vectors are:
x , (n×1) : the state column vector
A , (n×n) : the system matrix
B , (n×p) : the command matrix
C , (r×n) : the output matrix
D , (r×p) : the matrix of the direct input-output connection.
The order of the system is just n, the dimension of the state vector x.
The system S is denoted as,

(2.3.16)  S = SS(A, B, C, D)
If the state equations (SS) are given, then the transfer matrix (TM) is
uniquely determined by the relation,

(2.3.17)  H(s) = C·[sI − A]^{−1}·B + D

This relation expresses the transform SS → TM.
The opposite transform, TM → SS, the state realisation of a transfer
matrix, is possible but much more difficult.
Note that the order n of the system has nothing to do with the number of
inputs p and the number of outputs r. It represents the internal structure of the
system.
A SISO LTI system is a particular case of a MIMO LTI system. If the
system has one input (p = 1) and one output (r = 1) the matrices will be denoted
as:

(2.3.18)  A → A ,  B → b ,  C → c^T ,  D → d

so a SISO system, as a particular case of (2.3.16), is

(2.3.19)  S = SS(A, B, C, D) = SS(A, b, c^T, d)
2.4. Response of Linear Time Invariant Systems.
2.4.1. Expression of the State Vector and Output Vector in s-domain.
Let us consider a LTI system

(2.4.1)  S = SS(A, B, C, D, x) ,  u ∈ Ω

where Ω is given and the state is the column vector x = [x_1, ..., x_n]^T.
The state equations are

(2.4.2)  ẋ = Ax + Bu
(2.4.3)  y = Cx + Du

The order n of the system is independent of the number p of inputs and
the number r of outputs.
The system behaviour can be expressed in the s-domain by taking the
Laplace transform of the state x(t). We know that the Laplace transform of a
vector (matrix) is the vector (matrix) of the Laplace transforms,

(2.4.4)  L{x(t)} = X(s) = [X_1(s), ..., X_n(s)]^T ,  L{x_i(t)} = X_i(s)

and,

(2.4.5)  L{ẋ(t)} = s·X(s) − x(0) , where x(0) = lim_{t→0, t>0} x(t).

By applying this to (2.4.2) and denoting

L{u(t)} = U(s) = [U_1(s), ..., U_p(s)]^T

the Laplace transform of the input vector, we obtain

s·X(s) − x(0) = A·X(s) + B·U(s)  ⇒  (sI − A)·X(s) = x(0) + B·U(s)  ⇒

(2.4.6)  X(s) = (sI − A)^{−1}·x(0) + (sI − A)^{−1}·B·U(s)
If we denote

(2.4.7)  Φ(s) = (sI − A)^{−1}

then the expression of the state vector in the s-domain is,

(2.4.8)  X(s) = Φ(s)·x(0) + Φ(s)·B·U(s)

The expression Φ(s) is the Laplace transform of the so-called transition
matrix or state transition matrix Φ(t), where

(2.4.9)  Φ(s) = (sI − A)^{−1} = L{Φ(t)} ;  Φ(t) = e^{At}

From (2.4.8) we observe that the state response has two components:
- The state free response X_l(s),

(2.4.10)  X_l(s) = Φ(s)·x(0)

- The state forced response X_f(s),

(2.4.11)  X_f(s) = Φ(s)·B·U(s)

which respects the decomposition property,

(2.4.12)  X(s) = X_l(s) + X_f(s)
The complex s-domain expression of the output can be obtained by
substituting (2.4.8) into the Laplace transform of (2.4.3),

(2.4.13)  Y(s) = C·X(s) + D·U(s)

(2.4.14)  Y(s) = C·Φ(s)·x(0) + [C·Φ(s)·B + D]·U(s)

Also the output is the sum of two components:

(2.4.15)  Y(s) = Y_l(s) + Y_f(s)

- Output free response

(2.4.16)  Y_l(s) = C·Φ(s)·x(0) = Ψ(s)·x(0)

- Output forced response

(2.4.17)  Y_f(s) = [C·Φ(s)·B + D]·U(s) = H(s)·U(s)

which reveals the decomposition property.
We denoted by Ψ(s)

(2.4.18)  Ψ(s) = C·Φ(s)

the so-called base functions matrix in the complex domain, and by H(s)

(2.4.19)  H(s) = C·Φ(s)·B + D

the transfer matrix of the system. This transfer matrix is univocally determined
from the state equations.
For a SISO system the transfer function is,

(2.4.20)  H(s) = c^T·Φ(s)·b + d
2.4.2. Time Response of LTI from Zero Time Moment.
The response from the zero initial time moment t_0 = 0 can be easily
determined by using the real time convolution theorem of the Laplace transform,

(2.4.21)  L^{−1}{F_1(s)·F_2(s)} = ∫_0^t f_1(t − τ)·f_2(τ)dτ

where F_1(s) = L{f_1(t)}, F_2(s) = L{f_2(t)}.
The relation (2.4.8) can be interpreted as

X(s) = Φ(s)·x(0) + Φ(s)·B·U(s) = Φ(s)·x(0) + F_1(s)·F_2(s)

Φ(t) = L^{−1}{Φ(s)} = L^{−1}{(sI − A)^{−1}}

so, by applying the real time convolution theorem, we obtain

(2.4.22)  x(t) = Φ(t)·x(0) + ∫_0^t Φ(t − τ)·B·u(τ)dτ

which is the state response of the system; substituting it into (2.4.3) results in

(2.4.23)  y(t) = C·Φ(t)·x(0) + ∫_0^t C·Φ(t − τ)·B·u(τ)dτ + D·u(t)

the output response, both from the initial time moment t_0 = 0.
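Relation (2.4.22) can be illustrated numerically: the free term Φ(t)x(0) plus the convolution integral, evaluated by quadrature, must match a direct simulation of ẋ = Ax + Bu. The matrices and initial state below are illustrative, and e^{At} is computed by its truncated power series:

```python
import numpy as np

# Numeric illustration of (2.4.22): x(t) = Phi(t) x(0) + conv(Phi, B u).

def expm_series(M, terms=40):
    """Matrix exponential e^M via its truncated power series."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [-1.0]])
u, t, N = 1.0, 2.0, 2000                 # constant (step) input on [0, t]

# Trapezoidal rule for integral_0^t Phi(t - tau) B u dtau
taus = np.linspace(0.0, t, N + 1)
vals = np.stack([(expm_series(A * (t - tau)) @ B * u).ravel() for tau in taus])
w = np.full(N + 1, t / N)
w[0] = w[-1] = t / (2 * N)
x_conv = (expm_series(A * t) @ x0).ravel() + (w[:, None] * vals).sum(axis=0)

# Reference: forward-Euler integration of x' = A x + B u
x, steps = x0.copy(), 100000
dt = t / steps
for _ in range(steps):
    x = x + dt * (A @ x + B * u)
print(np.max(np.abs(x_conv - x.ravel())))   # small => (2.4.22) confirmed
```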
2.4.3. Properties of the Transition Matrix.
Let A be an (n×n) matrix.
1. The transition matrix Laplace transform.
The transition matrix, defined as

(2.4.24)  Φ(t) = e^{At} ,  ∀t

has the Laplace transform

(2.4.25)  L{Φ(t)} = L{e^{At}} = Φ(s) = (sI − A)^{−1}
Proof:
Because the power series

φ(x) = Σ_{k=0}^{∞} (x^k/k!)·t^k = e^{xt}

is uniformly convergent, the matrix function

Φ(t) = φ(x)|_{x→A} = Σ_{k=0}^{∞} (A^k/k!)·t^k = e^{At}

has the Laplace transform

Φ(s) = L{e^{At}} = L{Σ_{k=0}^{∞} (A^k/k!)·t^k} = s^{−1}·I + s^{−2}·A + s^{−3}·A^2 + ... = Σ_{k=0}^{∞} A^k·s^{−(k+1)}

where we applied the formula,

L{t^k/k!} = 1/s^{k+1} = s^{−(k+1)}

Because of the identity

(sI − A)·Φ(s) = (sI − A)·(s^{−1}·I + s^{−2}·A + s^{−3}·A^2 + ...) = I + s^{−1}·A − s^{−1}·A + s^{−2}·A^2 − s^{−2}·A^2 + ... = I

we have

(sI − A)·Φ(s) = I  and  Φ(s)·(sI − A) = I  ⇒  Φ(s) = (sI − A)^{−1}
2. The identity property.

(2.4.26)  Φ(0) = I , where Φ(0) = Φ(t)|_{t=0}

3. The transition property.

(2.4.27)  Φ(t_1 + t_2) = Φ(t_1)·Φ(t_2) = Φ(t_2)·Φ(t_1) ,  ∀ t_1, t_2

4. The determinant property.
The transition matrix is a nonsingular matrix.
5. The inversion property.

(2.4.28)  Φ(−t) = Φ^{−1}(t)

6. The transition matrix is the solution of the matrix differential equation,

(2.4.29)  Φ̇(t) = A·Φ(t) ,  Φ(0) = I ,  Φ̇(0) = A
2.4.4. Transition Matrix Evaluation.
To compute the transition matrix we can use different methods.
1. The direct method. We have A, so we directly evaluate

(2.4.30)  Φ(s) = (sI − A)^{−1} ,  Φ(t) = L^{−1}{Φ(s)}
2. By using the general formula of a matrix function. Let L(λ) be the
characteristic polynomial of the square matrix A,

(2.4.31)  L(λ) = det(λI − A)

and the characteristic equation,

(2.4.32)  L(λ) = 0  ⇒  L(λ) = Π_{k=1}^{N} (λ − λ_k)^{m_k} ,  λ_k ∈ C ,  Σ_{k=1}^{N} m_k = n

then the attached matrix function is,

(2.4.33)  f(A) = Σ_{k=1}^{N} [Σ_{l=0}^{m_k−1} f^(l)(λ_k)·E_kl] , where f^(l)(λ_k) = d^l f(λ)/dλ^l |_{λ=λ_k}

The matrices E_kl are called the spectral matrices of the matrix A. There are n
matrices E_kl. They are determined by solving a matrix algebraical system, using n
arbitrary independent functions f(λ). Finally, for

(2.4.34)  f(λ) = e^{λt}  →  f(A) = e^{At} ,  f^(l)(λ_k) = t^l·e^{λ_k·t}
3. The polynomial method.
Let f(λ): C → C be a continuous function which has m_k − 1 derivatives at
λ_k, that means

(2.4.35)  f^(l)(λ_k) exists ∀ l ∈ [0, m_k − 1]

and let A be an (n×n) matrix whose characteristic polynomial is

(2.4.36)  det(λI − A) = Π_{k=1}^{N} (λ − λ_k)^{m_k}

Let

(2.4.37)  p(λ) = α_{n−1}·λ^{n−1} + ... + α_1·λ + α_0

be an (n−1) degree polynomial (it has n coefficients α_i, 0 ≤ i ≤ n − 1). The
attached polynomial matrix function p(A) is

(2.4.38)  p(A) = α_{n−1}·A^{n−1} + ... + α_1·A + α_0·I

In such conditions, if the system of n equations

(2.4.39)  f^(l)(λ_k) = p^(l)(λ_k) ,  k = 1, ..., N ,  l = 0, ..., m_k − 1

is satisfied, then

(2.4.40)  f(A) = p(A)

The relation (2.4.39) expresses n conditions, that means an algebraical
system with n unknowns, the variables α_0, α_1, ..., α_{n−1}. Solving this system, the
coefficients α_0, ..., α_{n−1} are determined.
4. The numerical method.

(2.4.41)  Φ(t) ≈ Φ_p(t) = Σ_{k=0}^{p} (A^k/k!)·t^k

If the dimension of the matrix A is very big, then we may have false
convergence. It is better to express t = qτ, q ∈ N, with τ small enough, so that

(2.4.42)  Φ(t) = Φ(qτ) = (Φ(τ))^q ≈ (Φ_p(τ))^q
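A minimal sketch of the numerical method (2.4.41), (2.4.42), also verifying the properties (2.4.26)-(2.4.28) of Section 2.4.3; the matrix A is illustrative:

```python
import numpy as np

# Transition matrix by the numerical method: truncated series Phi_p(tau)
# raised to the power q, with t = q*tau (scaling and squaring).

def phi_series(A, t, p=20):
    """Phi_p(t) = sum_{k=0}^{p} A^k t^k / k!  (truncated series (2.4.41))."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, p + 1):
        term = term @ (A * t) / k
        out = out + term
    return out

def phi(A, t, q=64):
    """Phi(t) = (Phi_p(t/q))^q  (relation (2.4.42))."""
    return np.linalg.matrix_power(phi_series(A, t / q), q)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# Identity property (2.4.26): Phi(0) = I
assert np.allclose(phi(A, 0.0), np.eye(2))
# Transition property (2.4.27): Phi(t1 + t2) = Phi(t1) Phi(t2)
assert np.allclose(phi(A, 0.7), phi(A, 0.4) @ phi(A, 0.3))
# Inversion property (2.4.28): Phi(-t) = Phi(t)^(-1)
assert np.allclose(phi(A, -0.5), np.linalg.inv(phi(A, 0.5)))
print("transition matrix properties verified")
```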
2.4.5. Time Response of LTI from a Nonzero Time Moment.
This is called the general time response of a LTI system, when the initial
time moment is t_0 ≠ 0. The formula is obtained by using the transition property
of the transition matrix.
We saw that, using the Laplace transform, the state response was,

(2.4.43)  x(t) = Φ(t)·x(0) + ∫_0^t Φ(t − τ)·B·u(τ)dτ ,  ∀t ≥ 0

Substituting t = t_0 in (2.4.43), we obtain

(2.4.44)  x(t_0) = Φ(t_0)·x(0) + ∫_0^{t_0} Φ(t_0 − τ)·B·u(τ)dτ
Taking into consideration (2.4.27),

(2.4.45)  Φ(t_0 − τ) = Φ(t_0)·Φ(−τ)

and (2.4.28),

(2.4.46)  Φ(−t) = Φ^{−1}(t)

we can extract from (2.4.44) the vector x(0), multiplying (2.4.44) on the left
by Φ^{−1}(t_0) = Φ(−t_0):

x(0) = Φ(−t_0)·x(t_0) − ∫_0^{t_0} Φ(−τ)·B·u(τ)dτ

and, by substituting x(0) into the relation (2.4.43), we have,

x(t) = Φ(t − t_0)·x(t_0) − Φ(t)·∫_0^{t_0} Φ(−τ)·B·u(τ)dτ + ∫_0^{t_0} Φ(t − τ)·B·u(τ)dτ + ∫_{t_0}^{t} Φ(t − τ)·B·u(τ)dτ

Since Φ(t)·Φ(−τ) = Φ(t − τ), the first integral cancels the second, so

x(t) = Φ(t − t_0)·x(t_0) + ∫_{t_0}^{t} Φ(t − τ)·B·u(τ)dτ

which is the general time response of the state vector.
The output general time response is,

y(t) = Ψ(t − t_0)·x(t_0) + ∫_{t_0}^{t} ℵ(t − τ)·u(τ)dτ ,
with Ψ(t − t_0) = C·Φ(t − t_0) and ℵ(t − τ) = C·Φ(t − τ)·B + D·δ(t − τ),

where we have expressed,

D·u(t) = ∫_{t_0}^{t} D·δ(t − τ)·u(τ)dτ

The free response is,

y_l(t) = Ψ(t − t_0)·x(t_0) ,  Ψ(t) = L^{−1}{C·Φ(s)}

Similarly, we compute the so-called weighting matrix,

ℵ(t) = L^{−1}{C·Φ(s)·B + D} = L^{−1}{H(s)}

We can say that the transfer matrix is the Laplace transform of the impulse
matrix ℵ(t).
The impulse matrix is the output response for zero initial conditions (zero
initial state) when the input is the unit impulse:

Y(s) = H(s)·U(s)
u(t) = I_p·δ(t)  ⇒  U(s) = I_p  ⇒  Y(s) = H(s)  ⇒  y(t) = L^{−1}{H(s)} = ℵ(t)
For scalar systems the weighting matrix is called the weighting function or
impulse function (response).
In practice, some special responses are very important, like the step response
(the output response from zero initial state determined by the unit step input).
For scalar type systems, p = 1, r = 1 (1 input, 1 output),

u(t) = 1(t) = { 0 , t < 0 ; 1 , t ≥ 0 } ,  U(s) = 1/s

Y(s) = H(s)·(1/s)  ⇒  y(t) = L^{−1}{H(s)·(1/s)}
Example.

H(s) = K/(Ts + 1) ,  U(s) = (1/s)·Δu  ⇒

y(t) = Σ Res[ (K/(Ts + 1))·(Δu/s)·e^{st} ] = [K + (K/T)·(1/(−1/T))·e^{−t/T}]·Δu

with the poles s = 0, s = −1/T, so

y(t) = K·(1 − e^{−t/T})·Δu
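The closed-form step response can be checked numerically, together with the area relation T = A/y(∞) used in Fig. 2.4.1 (A being the area between the new steady state and the response). The K, T, Δu values below are illustrative:

```python
import math

# Step response of the first order system H(s) = K/(T s + 1):
# y(t) = K (1 - exp(-t/T)) * du, and the check T = A / y(inf),
# where A is the area between the new steady state and y(t).

K, T, du = 2.0, 5.0, 1.0

def y(t):
    return K * (1.0 - math.exp(-t / T)) * du

y_inf = K * du                    # final (steady state) value
# A = integral_0^inf (y_inf - y(t)) dt, midpoint rule on a long horizon
N, t_end = 100000, 50.0 * T
h = t_end / N
A = sum((y_inf - y(i * h + h / 2)) * h for i in range(N))
print(A / y_inf)                  # ~ T = 5.0
```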
Figure no. 2.4.1. (Step response of a first order system about an initial steady state: input u^a(t) with step Δu applied at the absolute time t_0^a, output y^a(t) = L^{−1}{H(s)U(s)} shifted by the initial steady state, the tangent at a point B giving the time constant T, and the area A between the new steady state y(∞) and the response.)

u_st(t_0^a) is the steady state of the input observed at the absolute time t_0^a, and

y^a(t) = y(t) + y_st^a(t_0)

At any point B on the output response of this (first order) system we draw
the tangent and obtain the time interval T.
We can also compute the area A between the new steady state and the output
response:

T = A / y(∞) ,  K = y(∞) / u(∞)
3. SYSTEM CONNECTIONS.
3.1. Connection Problem Statement.
Let S_i , i ∈ I, be a set of systems where I is a set of indexes, each S_i being considered as a subsystem,

S_i ≜ S_i(Ω_i, f_i, g_i, x_i)    (3.1.1)

where the utilised symbols express:
- Ω_i — the set of allowed inputs;
- f_i — the proper state equation;
- g_i — the output relation;
- x_i — the state vector.

Knowing the above elements one can determine the set Γ_i of the possible outputs.

These subsystems can be dynamical or nondynamical (scalor) ones, can be continuous or discrete time ones, can be logical systems, stochastic systems etc., as for example:
3.1.1. Continuous Time Nonlinear System (CNS).

S_i :  ẋ_i(t) = f_i(x_i(t), u_i(t), t) ;  x_i(t₀) = x⁰_i ,  t ≥ t₀ ∈ T ⊆ R
       y_i(t) = g_i(x_i(t), u_i(t), t) ;  Ω_i = {u_i | u_i : T → U_i ; T ⊆ R; u_i admitted}
(3.1.2)
3.1.2. Linear Time Invariant Continuous System (LTIC).
It is a particular case of CNS.

S_i :  ẋ_i(t) = A_i x_i(t) + B_i u_i(t) ;  x_i(t₀) = x⁰_i ,  t ≥ t₀ ∈ T ⊆ R
       y_i(t) = C_i x_i(t) + D_i u_i(t) ;  Ω_i = {u_i | u_i : T → U_i ; T ⊆ R; u_i admitted}
(3.1.3)
3.1.3. Discrete Time Nonlinear System (DNS).

S_i :  x^i_{k+1} = f_i(x^i_k, u^i_k, k) ;  x^i_{k₀} = x⁰_i ,  k ≥ k₀ ∈ T ⊆ Z
       y^i_k = g_i(x^i_k, u^i_k, k) ;  Ω_i = {u^i_k | u^i_k : T → U_i ; T ⊆ Z; {u^i_k} admitted}
(3.1.4)
3.1.4. Linear Time Invariant Discrete System (LTID).
It is a particular case of DNS.

S_i :  x^i_{k+1} = A_i x^i_k + B_i u^i_k ;  x^i_{k₀} = x⁰_i ,  k ≥ k₀ ∈ T ⊆ Z
       y^i_k = C_i x^i_k + D_i u^i_k ;  Ω_i = {u^i_k | u^i_k : T → U_i ; T ⊆ Z; {u^i_k} admitted}
(3.1.5)

Let us consider that a set of subsystems as defined above constitutes a family of subsystems, denoted F_I,

F_I = {S_i , i ∈ I}    (3.1.6)
3. SYSTEM CONNECTIONS. 3.1. Connection Problem Statement.
One can say that a subsystem S_i (or generally a system) is defined (or specified, or that it exists) if the elements (Ω_i ; f_i ; g_i ; x_i) are specified.

From these elements all the other attributes, as discussed in Ch. 1 and Ch. 2, can be deduced or understood. These other attributes can be:
- the system type (continuous, discrete, logical),
- the sets T_i ; U_i ; Γ_i ; Y_i , etc.

A family of subsystems F_I builds up a connection if, in addition to the elements

S_i ≜ S_i(Ω_i, f_i, g_i, x_i) ,

two other sets R_c and C_c are defined, having the following meaning:
1. R_c = the set of connection relations;
2. C_c = the set of connection conditions.
R_c represents a set of algebraical relations between the variables u_i, y_i, i ∈ I, and other new variables introduced by these relations.

These new variables can be new inputs (causes), denoted by v_j , j ∈ J, or new outputs denoted by w_l , l ∈ L, where J, L represent index sets.

C_c represents a set of conditions which must be satisfied by the input and output variables (u_i, y_i), i ∈ I, of each subsystem, and also by the new variables v_j , j ∈ J, w_l , l ∈ L, introduced through R_c.

These conditions refer to:
- The physical meaning, clarifying whether the abstract systems are mathematical models of some physical oriented systems (objects) or are pure abstract systems conceived (invented) in a theoretical synthesis procedure.
- The number of input-output components, that is whether the input and output variables are vectors or scalars.
- The properties of the entities Ω_i ; f_i ; g_i interpreted as functions or sets of functions, each of them defined on its observation domain.
The 3-tuple

S = {F_I ; R_c ; C_c} ,  F_I = {S_i , i ∈ I}    (3.1.7)

constitutes a correct connection, or a possible connection, if S has the attributes of a dynamical system to which v_j , j ∈ J, are input variables and w_l , l ∈ L, are output variables.

The system S is called the equivalent system or the interconnected system of the subsystems family F_I, a family interconnected by the pair {R_c ; C_c}.
Observation 1.
Any connection of subsystems is performed only through the input and output variables of each component subsystem, not through the state variables.

The set R_c does not contain relations in which the state vectors x_i , i ∈ I, or the components of the vectors x_i , i ∈ I, appear.

If, in some examples, the interconnection relations R_c contain components of the state vectors, we must understand that they represent components of the output vectors y_i, for example y_i^k = x_i^k, but for the sake of convenience and writing economy different symbols have not been utilised.

This is a very important problem when a subsystem S_i (an abstract one) is the mathematical model of a physical oriented system for which some state variables (some components of the state vector) are not accessible for measurements and observations, or even have no physical existence (physical meaning), and so they cannot be taken as components of the output vector.
Observation 2.
If for the interconnected system S only the behaviour with respect to the input variables (the forced response) is of interest, then each subsystem S_i can be represented by its input-output relation.

For LTI (LTIC or LTID) systems, these input-output relations are the transfer matrices (for MIMO) or transfer functions (for SISO), denoted by H_i(s) for LTIC and by H_i(z) for LTID systems.

Reciprocally, if in an interconnected structure only transfer matrices (functions) appear, we have to understand that only the input-output behaviour (the forced response) is expected or is enough for that analysis.

Obviously, if the transfer matrix (function) of the equivalent interconnected system S is given or is determined (calculated), we can deduce different forms of state equations starting from this transfer matrix (function); these are called different realisations of the transfer matrix (function) by state equations.

If this transfer matrix (function) is a rational nominal one (there are no common factors between numerators and denominator), then all these state equation realisations certainly express only the completely controllable and completely observable part of the interconnected dynamical system S; that means all the state components of these state realisations are both controllable and observable.

No one will guarantee, if this transfer matrix (function) is not rational or is a nonnominal one (there are common factors between numerators and denominator),
that in these state realisations there will appear state components of the system S which are only uncontrollable, or only unobservable, or both uncontrollable and unobservable.
Observation 3.
If for the interconnected system S only the steady-state behaviour is of interest, then each subsystem S_i can be represented by its steady-state mathematical model (static characteristics).

Notice that the steady-state equivalent model of the interconnected system S, evaluated based on the steady-state models of the subsystems S_i, has a meaning if and only if it is possible to get a steady state for the interconnected system in the allowed domains of input values. This means the interconnected system S must be asymptotically stable.

The steady-state mathematical model of the interconnected system can be obtained by using graphical methods when some subsystems are described by graphical static characteristics, experimentally determined.

Generally, the mathematical model deduction process, whichever type it would be: complete (by state equations), input-output (particularly by transfer matrices) or steady state (by static characteristics), of an interconnected system is called the connection solving process or the structure reduction process to an equivalent compact form.

Essentially, to solve a connection means to eliminate all the intermediate variables introduced by the algebraical relations from R_c or from F_I.

There are three fundamental types of connections:
1. Serial connection (cascade connection);
2. Parallel connection;
3. Feedback connection,
through which the majority of practical connections can be expressed.
3.2. SERIAL CONNECTION.
3.2.1. Serial Connection of two Subsystems.
For the beginning let us consider only two subsystems S₁, S₂ in the family F_I, represented by a block diagram as in Fig. 3.2.1.

Figure no. 3.2.1. [u = u₁ → S₁ → y₁ = u₂ → S₂ → y₂ = y, equivalent to u → S → y]

The 3-tuples (3.2.1),

F_I = {S₁, S₂} ,  I = {1; 2}
R_c = {u₂ = y₁ ; u₁ = u ; y = y₂}    (3.2.1)
C_c = {Γ₁ ⊆ Ω₂ ; Ω = Ω₁ ; Γ = Γ₂}

build up a serial connection of the two subsystems.
In a serial connection of two subsystems one of them, here S₁, has the attribute of "upstream" and the other, S₂, of "downstream". The input of the downstream subsystem is identical to the output of the upstream subsystem.

From the connection relation u₂ = y₁ one can understand that the function (u₂ : T₂ → U₂) ∈ Ω₂ is identical with any function (y₁ : T₁ → Y₁) ∈ Γ₁ which arises at the output of the subsystem S₁.

From this identity the following must be understood:
- The sets Y₁ and U₂ have the same dimensions (as Cartesian products);
- Y₁ ⊆ U₂ ;
- The variables have the same physical meaning if both S₁ and S₂ express oriented physical systems;
- u₂ and y₁ are vectors of the same size;
- T₁ ≡ T₂ .
From the connection condition Γ₁ ⊆ Ω₂ one understands that the set (class) Ω₂ of the functions allowed to represent the input variable u₂ to S₂ (Ω₂ is a very defining element of S₂) must contain all the functions y₁ from the set Γ₁ of the possible outputs of S₁.

Of course Γ₁ depends both on the equations (f₁, g₁) which define S₁ and on Ω₁ too.

For example, if Ω₂ represents a set of continuous and differentiable functions but from S₁ one can obtain output functions which have discontinuities of the first kind, the connection is not possible even if T₂ ≡ T₁, Y₁ ⊆ U₂.
3. SYSTEM CONNECTIONS. 3.2. Serial Connection.
Here, the new letters u, Ω, y, Γ can be interpreted just as simple notations, for a clearer understanding that it is about the input and the output of the new equivalent interconnected system.

If S₁, S₂, S represent symbols for operators, formally one can write

y ≜ y₂ = S₂{u₂} = S₂{y₁} = S₂{S₁{u₁}} = S₂∘S₁{u₁} ≜ S₂∘S₁{u} = S{u}    (3.2.2)

where it has been denoted

S = S₂∘S₁ = S₂S₁

understanding by "S₂∘S₁", or just S₂S₁, the result of the composition of the two operators S₂, S₁.

Many times, in connection solving problems only F_I, R_c are presented (given), as strictly necessary elements to solve formally that connection. The conditions set C_c is explicitly mentioned only when the connection existence must be proved; otherwise it is tacitly assumed that all these conditions are accomplished.
3.2.2. Serial Connection of two Continuous Time Nonlinear Systems (CNS).
Let us consider two differential systems of the form (3.1.2) serially interconnected. We determine the complete system, described by state equations, which results after connection.

F_I :  S₁ : { ẋ₁ = f₁(x₁, u₁, t) ; y₁ = g₁(x₁, u₁, t) } ;  S₂ : { ẋ₂ = f₂(x₂, u₂, t) ; y₂ = g₂(x₂, u₂, t) }    (3.2.3)

x₁ (n₁×1) ; u₁ (p₁×1) ; y₁ (r₁×1) ;  x₂ (n₂×1) ; u₂ (p₂×1) ; y₂ (r₂×1)

R_c :  u₂ = y₁ ⇒ (p₂ = r₁) ;  u = u₁ ⇒ (p = p₁) ;  y = y₂ ⇒ (r = r₂)    (3.2.4)

Obviously, x₁, y₁, u₁, x₂, u₂, y₂ are time function vectors.

By eliminating the intermediate variables u₂, y₁ and using the notations u, y for u₁, y₂ respectively, ⇒

S :  ẋ₁ = f₁(x₁, u, t)
     ẋ₂ = f₂(x₂, g₁(x₁, u, t), t)
     y  = g₂(x₂, g₁(x₁, u, t), t)

Denoting

Ψ₁(x, u, t) = Ψ̃₁(x₁, x₂, u, t) = f₁(x₁, u, t)
Ψ₂(x, u, t) = Ψ̃₂(x₁, x₂, u, t) = f₂(x₂, g₁(x₁, u, t), t)
g(x, u, t)  = g̃(x₁, x₂, u, t)  = g₂(x₂, g₁(x₁, u, t), t)

x = [ x₁ ; x₂ ]  ((n₁+n₂)×1)

the equivalent interconnected system is expressed by the equations,
S :  ẋ = f(x, u, t) ,  y = g(x, u, t) ,  f(x, u, t) = [ Ψ₁(x, u, t) ; Ψ₂(x, u, t) ]    (3.2.5)

S = S₂∘S₁

One can observe that the system order, determined for differential systems by the state vector size, is n = n₁ + n₂.
3.2.3. Serial Connection of two LTIC. Complete Representation.
Let us consider two LTIC, in short LTI, serially interconnected. One can determine the interconnected system completely represented by state equations.

S₁ : { ẋ₁ = A₁x₁ + B₁u₁ ; y₁ = C₁x₁ + D₁u₁ } ;  S₂ : { ẋ₂ = A₂x₂ + B₂u₂ ; y₂ = C₂x₂ + D₂u₂ }    (3.2.6)

x₁ (n₁×1) ; u₁ (p₁×1) ; y₁ (r₁×1) ;  x₂ (n₂×1) ; u₂ (p₂×1) ; y₂ (r₂×1)

R_c :  u₂ = y₁ ⇒ (p₂ = r₁) ;  u = u₁ ;  y = y₂ ;  p = p₁ ;  r = r₂    (3.2.7)

Eliminating the intermediate variables u₂, y₁ ⇒

ẋ₁ = A₁x₁ + B₁u
ẋ₂ = A₂x₂ + B₂[C₁x₁ + D₁u]          ẋ₁ = A₁x₁ + B₁u
y  = C₂x₂ + D₂[C₁x₁ + D₁u]     ⇒    ẋ₂ = B₂C₁x₁ + A₂x₂ + B₂D₁u
                                    y  = D₂C₁x₁ + C₂x₂ + D₂D₁u

Concatenating the two state vectors x₁, x₂ into a single vector x = [ x₁ ; x₂ ], one obtains the compact form of the interconnected system,

ẋ = Ax + Bu
y = Cx + Du    (3.2.8)

A = [ A₁  0 ; B₂C₁  A₂ ] ,  B = [ B₁ ; B₂D₁ ] ,  C = [ D₂C₁  C₂ ] ,  D = D₂D₁
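A minimal sketch of (3.2.8): the helper below (its name `series_ss` and all numeric values are ours, for illustration) builds the serially connected state matrices and checks, on random subsystems, that the resulting frequency response equals H₂(s)H₁(s) at a sample point.

```python
import numpy as np

def series_ss(A1, B1, C1, D1, A2, B2, C2, D2):
    """State matrices (3.2.8) of the serial connection S = S2 o S1, x = [x1; x2]."""
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

rng = np.random.default_rng(1)
A1, B1 = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))
C1, D1 = rng.standard_normal((1, 2)), rng.standard_normal((1, 1))
A2, B2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 1))
C2, D2 = rng.standard_normal((1, 3)), rng.standard_normal((1, 1))
A, B, C, D = series_ss(A1, B1, C1, D1, A2, B2, C2, D2)

def H(Ai, Bi, Ci, Di, s):
    # transfer matrix H(s) = C (sI - A)^{-1} B + D at one complex point s
    return Ci @ np.linalg.solve(s*np.eye(Ai.shape[0]) - Ai, Bi) + Di

s = 0.3 + 1.7j
H_series  = H(A, B, C, D, s)
H_product = H(A2, B2, C2, D2, s) @ H(A1, B1, C1, D1, s)
```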
3.2.4. Serial Connection of two LTIC. Input-Output Representation.
Let us assume we are interested only in the input-output behaviour of the serially interconnected system represented as in Fig. 3.2.2.

For nonlinear systems, as presented in Ch. 3.2.2, the explicit expression of the forced response in a general manner is practically impossible, because the decomposition property is not available for arbitrary nonlinear systems.

For the linear systems from Ch. 3.2.3 the transfer matrices can easily be determined as,

H₁(s) = C₁(sI − A₁)⁻¹B₁ + D₁ ,  H₂(s) = C₂(sI − A₂)⁻¹B₂ + D₂    (3.2.9)

The forced responses of the two systems are expressed by
S₁ : Y₁(s) = H₁(s)U₁(s) ;  S₂ : Y₂(s) = H₂(s)U₂(s)

and the connection relations are

R_c : U₂(s) ≡ Y₁(s) ; U(s) ≡ U₁(s) ; Y(s) ≡ Y₂(s) ;  p₁ = p ; r₂ = r ; p₂ = r₁ .

Figure no. 3.2.2. [U(s) = U₁(s) → H₁(s) (r₁×p₁) → Y₁(s) = U₂(s) → H₂(s) (r₂×p₂) → Y₂(s) = Y(s), equivalent to U(s) → H(s) (r×p) → Y(s)]

Y(s) ≜ Y₂(s) = H₂(s)U₂(s) = H₂(s)Y₁(s) = H₂(s)H₁(s)U₁(s) = H(s)U(s)

H(s) = H₂(s)H₁(s)    (3.2.10)

It can easily be verified that the transfer matrix (TF) of the complete system (3.2.8), obtained by serial connection, is the product of the transfer matrices from (3.2.10).
Indeed, from (3.2.8) we obtain,

H(s) = C[sI − A]⁻¹B + D = [ D₂C₁  C₂ ] [ sI−A₁  0 ; −B₂C₁  sI−A₂ ]⁻¹ [ B₁ ; B₂D₁ ] + D₂D₁ =

= [ D₂C₁  C₂ ] [ Φ₁(s)  0 ; Φ₂(s)B₂C₁Φ₁(s)  Φ₂(s) ] [ B₁ ; B₂D₁ ] + D₂D₁ =

= [ D₂C₁Φ₁(s) + C₂Φ₂(s)B₂C₁Φ₁(s)   C₂Φ₂(s) ] [ B₁ ; B₂D₁ ] + D₂D₁

H(s) = D₂C₁Φ₁(s)B₁ + C₂Φ₂(s)B₂C₁Φ₁(s)B₁ + C₂Φ₂(s)B₂D₁ + D₂D₁ =
     = C₂Φ₂(s)B₂[C₁Φ₁(s)B₁ + D₁] + D₂[C₁Φ₁(s)B₁ + D₁]

H(s) = [C₂Φ₂(s)B₂ + D₂][C₁Φ₁(s)B₁ + D₁] = H₂(s)H₁(s)

so the TF of the serial connection is the product of the components' TFs.
3.2.5. The controllability and observability of the serial connection.
The serial connection of a systems family F_I = {S_i , i ∈ I} preserves some common properties of the component systems, as for example linearity and stability.

Other properties, such as controllability and observability, may disappear in the serially interconnected system even if each component system satisfies these properties.

In the sequel we shall analyse such aspects based on a concrete example referring to the serial connection of two LTI, each of them of the first order.
3.2.5.1. State Diagrams Representation.
We consider a serial connection of two SISO systems of the first order.

S₁ : H₁(s) = K₁/(s + p₁) = Y₁(s)/U₁(s) ⇒ { ẋ₁(t) = −p₁x₁(t) + K₁u₁(t) ; y₁(t) = x₁(t) }    (3.2.11)

S₂ : H₂(s) = K₂(s + z₂)/(s + p₂) = Y₂(s)/U₂(s) ⇒ { ẋ₂(t) = −p₂x₂(t) + K₂(z₂ − p₂)u₂(t) ; y₂(t) = x₂(t) + K₂u₂(t) }    (3.2.12)

R_c : u₂(t) ≡ y₁(t) , u(t) ≡ u₁(t) , y(t) ≡ y₂(t)
C_c : Γ₁ ⊆ Ω₂ , Ω = Ω₁ , Γ = Γ₂ .    (3.2.13)

The connection, expressed by input-output relations (transfer functions), is represented in the block diagram from Fig. 3.2.3.

Figure no. 3.2.3. [U(s) = U₁(s) → H₁(s) → Y₁(s) = U₂(s) → H₂(s) → Y₂(s) = Y(s), equivalent to U(s) → H(s) = H₂(s)H₁(s) → Y(s)]
In this scalar case, the equivalent transfer function can be presented, as an algebraical expression (because of the commutativity property), also under the form,

H(s) = H₂(s)H₁(s) = H₁(s)H₂(s)

H(s) = K₁K₂(s + z₂) / [(s + p₁)(s + p₂)]    (3.2.14)

One can observe that H(s) is of the order n₁ + n₂ = 1 + 1 = 2.

The same interconnected system, but represented by state equations, can be illustrated using so-called "state diagrams", as shown in Fig. 3.2.4.

The "State Diagram" (SD) is a form of graphical representation of the state equations, both in time domain and complex domain, using block diagrams (BD) or signal flow graphs (SFG). The state diagram (SD) of LTIs contains only three types of graphical symbols: integrating elements, summing elements and proportional elements.

The summing and proportional elements, called in short summators and scalors respectively, have the same graphical representation both in time and complex domains. For the integrating elements, called in short integrators, frequently the complex domain graphical form is utilised, and the corresponding variables are denoted in time domain together, sometimes, with their complex form notation.
If, for a system given by a transfer matrix (function), we succeed to represent it by a block diagram (BD) or a signal flow graph (SFG) which contains only the three types of symbols: integrators, summators and scalors, then we can interpret that block diagram (BD) or that signal flow graph (SFG) as being a state diagram (SD). Based on it we can write (deduce), on the spot, a realisation by state equations of that transfer matrix (function).

Given the state diagrams for the systems S_i of a family F_I, the state diagram of the interconnected system can very easily be drawn by using the connection relations, and from there the state equations of the interconnected system.

In our case the state diagram (SD) means the graphical representation in the complex domain of the equations (3.2.11), which is the SD for S₁, (3.2.12), which is the SD for S₂, and (3.2.13), which contains the connection relations.

Figure no. 3.2.4. [SD of S₁: u = u₁ → K₁ → summator → 1/s (initial condition x₁(0)) → x₁ = y₁, with feedback −p₁; R_c: u₂ = y₁; SD of S₂: u₂ → K₂(z₂ − p₂) → summator → 1/s (initial condition x₂(0)) → x₂, with feedback −p₂, and y₂ = x₂ + K₂u₂ = y]
Based on this SD the state equations of the serially interconnected system are deduced,

ẋ = Ax + bu ,  y = cᵀx + du ,  x = [ x₁ ; x₂ ]    (3.2.15)

where,

A = [ −p₁  0 ; K₂(z₂−p₂)  −p₂ ] ;  b = [ K₁ ; 0 ] ;  c = [ K₂ ; 1 ] ;  d = 0

Because

X(s) = Φ(s)x(0) + Φ(s)bU(s) ;  Φ(s) = [sI − A]⁻¹

the (i, j) components Φ_ij(s) of the transition matrix can be easily determined based on relation (3.2.15).

By manipulating the SD, the matrix inverse operation is avoided. For more convenience, the SD from Fig. 3.2.4 is equivalently transformed into the SD from Fig. 3.2.5.
Figure no. 3.2.5. [Equivalent SD: u = u₁ → K₁ → 1/(s+p₁) (initial condition x₁(0)) → x₁ = y₁ = u₂; then u₂ → K₂(z₂ − p₂) → 1/(s+p₂) (initial condition x₂(0)) → x₂; output summator y = K₂u₂ + x₂ = y₂]
From (3.2.15) we obtain,

Φ_ij(s) = X_i(s)/x_j(0) |_{x_k(0)=0, k≠j, U(s)≡0}    (3.2.16)

so,

Φ₁₁(s) = X₁(s)/x₁(0) = 1/(s + p₁) ;  Φ₁₂(s) = X₁(s)/x₂(0) = 0 ;
Φ₂₁(s) = X₂(s)/x₁(0) = K₂(z₂ − p₂)/[(s + p₁)(s + p₂)] ;  Φ₂₂(s) = X₂(s)/x₂(0) = 1/(s + p₂)

and finally

Φ(s) = [ 1/(s+p₁)  0 ; K₂(z₂−p₂)/[(s+p₁)(s+p₂)]  1/(s+p₂) ]    (3.2.17)
The state response is,

X(s) = Φ(s) [ x₁(0) ; x₂(0) ] + Φ(s) [ K₁ ; 0 ] U(s)

X₁(s) = X₁l(s) + X₁f(s) = 1/(s+p₁) · x₁(0) + K₁/(s+p₁) · U(s)    (3.2.18)

X₂(s) = X₂l(s) + X₂f(s) = K₂(z₂−p₂)/[(s+p₁)(s+p₂)] · x₁(0) + 1/(s+p₂) · x₂(0) + K₁K₂(z₂−p₂)/[(s+p₁)(s+p₂)] · U(s)
It can be proved that the transfer function of the serial connection is,

H(s) = cᵀΦ(s)b + d = K₁K₂(s+z₂)/[(s+p₁)(s+p₂)] = H₁(s)H₂(s).

The expression of the output variable in the complex domain is,

Y(s) = cᵀΦ(s)x(0) + (cᵀΦ(s)b + d)U(s) = Y_l(s) + Y_f(s)

Y(s) = K₂X₁l(s) + X₂l(s) + H(s)U(s) = Y_l(s) + Y_f(s) ⇒

Y_l(s) = K₂/(s+p₁) · x₁(0) + K₂(z₂−p₂)/[(s+p₁)(s+p₂)] · x₁(0) + 1/(s+p₂) · x₂(0)    (3.2.19)

Y_f(s) = K₁K₂(s+z₂)/[(s+p₁)(s+p₂)] · U(s)    (3.2.20)
If p₁ ≠ p₂ one obtains,

y_l(t) = [ K₂(p₁−z₂)/(p₁−p₂) · x₁(0) ] e^{−p₁t} + [ K₂(z₂−p₂)/(p₁−p₂) · x₁(0) + x₂(0) ] e^{−p₂t}    (3.2.21)

When p₁ = p₂, applying the inverse Laplace transform in the same way,

y_l(t) = K₂e^{−p₁t} · x₁(0) + K₂(z₂−p₁)t·e^{−p₁t} · x₁(0) + e^{−p₁t} · x₂(0)
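The closed form (3.2.21) can be checked against y_l(t) = cᵀe^{At}x(0) computed with the matrix exponential of the realisation (3.2.15); the parameter values below are arbitrary (with p₁ ≠ p₂), for illustration only.

```python
import numpy as np
from scipy.linalg import expm

K2, z2, p1, p2 = 2.0, 3.0, 1.0, 0.5            # arbitrary values, p1 != p2
A = np.array([[-p1, 0.0], [K2*(z2 - p2), -p2]])  # A from (3.2.15)
c = np.array([K2, 1.0])                          # c^T = [K2  1]
x0 = np.array([0.7, -0.3])                       # some initial state

t = np.linspace(0.0, 4.0, 100)
# Free response via the matrix exponential: y_l(t) = c^T e^{At} x(0)
y_expm = np.array([c @ expm(A*tt) @ x0 for tt in t])
# Closed form (3.2.21)
y_closed = (K2*(p1 - z2)/(p1 - p2)*x0[0]*np.exp(-p1*t)
            + (K2*(z2 - p2)/(p1 - p2)*x0[0] + x0[1])*np.exp(-p2*t))
```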
3.2.5.2. Controllability and Observability of the Serial Connection.
Case 1: z₂ ≠ p₁ and z₂ ≠ p₂.
If the transfer function (3.2.14), obtained further to the serial connection,

H(s) = K₁K₂(s+z₂)/[(s+p₁)(s+p₂)]    (3.2.14)

is an irreducible one, that is z₂ ≠ p₁ and z₂ ≠ p₂, then the complete system S, given by the state equations (3.2.15), obtained by the serial connection of the subsystems S₁ and S₂, is completely controllable and completely observable. These properties are preserved for any realisation by state equations of the irreducible transfer function.

For the system (3.2.15) both the controllability matrix P and the observability matrix Q can be calculated as,

P = [ b ⋮ Ab ] = [ K₁  −K₁p₁ ; 0  K₁K₂(z₂−p₂) ] ;  det P = K₁²K₂(z₂−p₂)    (3.2.22)

Qᵀ = [ cᵀ ; cᵀA ] = [ K₂  1 ; −K₂p₁+K₂(z₂−p₂)  −p₂ ] ;  det Q = K₂(p₁−z₂)    (3.2.23)

In this case it can be observed that det(P) ≠ 0, which means the interconnected system is completely controllable, and det(Q) ≠ 0, which means the interconnected system is completely observable.
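These determinants can be reproduced numerically; the helper below and its parameter values are ours, for illustration of (3.2.22) and (3.2.23):

```python
import numpy as np

def ctrb_obsv_dets(K1, K2, z2, p1, p2):
    """det P and det Q for the realisation (3.2.15) of the serial connection."""
    A = np.array([[-p1, 0.0], [K2*(z2 - p2), -p2]])
    b = np.array([[K1], [0.0]])
    c = np.array([[K2, 1.0]])
    P = np.hstack([b, A @ b])        # controllability matrix [b  Ab]
    Q = np.vstack([c, c @ A])        # observability matrix  [c ; cA]
    return np.linalg.det(P), np.linalg.det(Q)

# Irreducible case: both determinants are nonzero
dP, dQ = ctrb_obsv_dets(1.0, 2.0, 3.0, 1.0, 0.5)
# Cancellation z2 = p2 kills controllability; z2 = p1 kills observability
dP0, _ = ctrb_obsv_dets(1.0, 2.0, 0.5, 1.0, 0.5)
_, dQ0 = ctrb_obsv_dets(1.0, 2.0, 1.0, 1.0, 0.5)
```

The singular cases anticipate Cases 3 and 4 discussed below.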
Case 2: z₂ ≠ p₂.
If z₂ ≠ p₂ then det P ≠ 0 and the system is completely controllable, for any relation between z₂ and p₁. Qualitatively, this property can also be put into evidence by the state diagrams from Fig. 3.2.4, or more clearly from Fig. 3.2.5, where it can be observed that the input u can modify the component x₁ and, through this, if z₂ ≠ p₂, also the component x₂ of the state vector.

This checking of the state diagram will not guarantee the property of complete controllability, but it will categorically reveal the situation when the system does not have the controllability property.
Case 3: z₂ = p₂.
In this case det(P) = 0 and the system is uncontrollable. If z₂ = p₂, certainly the component x₂ cannot be modified by the input, so categorically this state component is uncontrollable; but the component x₁ can be controlled by the input. In this example the dependence between x₁ and u is given by

X₁(s) = K₁/(s+p₁) · U(s) ,

which shows that x₁ is modified by u.

One says that a system is completely controllable, det(P) ≠ 0, if and only if all the state vector components are controllable for any values of these components in the state space.

The assessment of the full rank of the matrix P through its determinant value will not indicate which state vector components are controllable and which of them are not. More precisely, in our example, this means which of the evolutions (modes) generated by the poles (−p₁) or (−p₂) can be modified through the input.

When z₂ = p₂ the transfer function of the serial connection is of the first order

H(s) = [K₁/(s+p₁)] · [K₂(s+z₂)/(s+p₂)] = K₁K₂/(s+p₁)    (3.2.24)

even if the system still remains of the second order, this second order being put into evidence by the fact that its free response, (3.2.21), keeps depending on the two poles, that means on the mode e^{−p₁t} and on the mode e^{−p₂t}, which are different.
Case 4: z₂ ≠ p₁.
If z₂ ≠ p₁ then det Q ≠ 0 and the system is observable, for any relation between z₂ and p₂. The visual examination (inspection) of the state diagram, as in Fig. 3.2.5, will not reveal the observability property directly.

We must not misunderstand that if each state vector component affects (modifies) the output variable then certainly the system is an observable one, because this is not true.

It must be remembered that the observability property expresses the possibility to determine the initial state which existed at a time moment t₀, based on knowledge of the output and the input from that t₀ on. For LTI systems it is enough to know the output only, irrespective of the value of t₀. Because of that, for LTI systems one says: "The pair (A, C) is observable."

From Fig. 3.2.5 it can be observed that even when z₂ = p₁, which means the system is not observable, both x₁ and x₂ still modify the output variable.
3.2.5.3. Observability Property Underlined as the Possibility to Determine the Initial State if the Output and the Input are Known.
In the above analysis we saw that while z₂ = p₁ the interconnected system has det(Q) = 0, which means it is not observable, even if the output

y(t) = cᵀx(t) + du(t) = K₂·x₁(t) + 1·x₂(t) + 0·u(t)

depends on both components of the state vector.

We shall illustrate, through this example, that the observability property means the possibility to determine the initial state based on the applied input and the observed output starting from that initial time moment. In the case of LTI systems this property depends only on the output as a response to that initial state. For LTI this property does not depend on the input, confirming that it is a system property, a system structure property.

With these observations, we shall consider u(t) ≡ 0 (it is not necessary; any other input could be applied just as well), so the output is the free response, y(t) = y_l(t).

The time expression of the free response is obtained by applying the inverse Laplace transform to the relation (3.2.19), evaluated in two distinct cases: Case a: p₁ ≠ p₂ and Case b: p₁ = p₂.

Case a: p₁ ≠ p₂.
One obtains the expression (3.2.21) of the form,

y_l(t) = [ K₂(p₁−z₂)/(p₁−p₂) · x₁(0) ] e^{−p₁t} + [ K₂(z₂−p₂)/(p₁−p₂) · x₁(0) + x₂(0) ] e^{−p₂t}    (3.2.21)

or equivalently

y_l(t) = K₂/(p₁−p₂) · [ (p₁−z₂)e^{−p₁t} + (z₂−p₂)e^{−p₂t} ] · x₁(0) + [ e^{−p₂t} ] · x₂(0)    (3.2.25)
Because the goal is the determination of the state vector x(0), through its two components x₁(0) and x₂(0), a system of two equations is built up, created from (3.2.25) for two different time moments t₁ ≠ t₂,

[ K₂/(p₁−p₂)·[(p₁−z₂)e^{−p₁t₁} + (z₂−p₂)e^{−p₂t₁}]   e^{−p₂t₁} ;
  K₂/(p₁−p₂)·[(p₁−z₂)e^{−p₁t₂} + (z₂−p₂)e^{−p₂t₂}]   e^{−p₂t₂} ] · [ x₁(0) ; x₂(0) ] = [ y_l(t₁) ; y_l(t₂) ]

which can be expressed in a compressed form,

G · x(0) = [ y_l(t₁) ; y_l(t₂) ] ,
G = [ K₂/(p₁−p₂)·[(p₁−z₂)e^{−p₁t₁} + (z₂−p₂)e^{−p₂t₁}]   e^{−p₂t₁} ;
      K₂/(p₁−p₂)·[(p₁−z₂)e^{−p₁t₂} + (z₂−p₂)e^{−p₂t₂}]   e^{−p₂t₂} ]
The possibility to determine univocally the initial state x(0) is assured by the determinant of the matrix G,

det G = K₂/(p₁−p₂) · (p₁−z₂) · e^{−p₁t₁} · e^{−p₂t₂} · [ 1 − e^{−p₁(t₂−t₁)}/e^{−p₂(t₂−t₁)} ]

Because p₁ ≠ p₂ and t₁ ≠ t₂,
det G ≠ 0 ⇔ p₁ ≠ z₂ ⇔ det Q = K₂(p₁−z₂) ≠ 0

x(0) = G⁻¹ · [ y_l(t₁) ; y_l(t₂) ]
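A numerical sketch of this reconstruction, with arbitrary observable values (z₂ ≠ p₁) and two sample times t₁ ≠ t₂; the rows of G are generated directly as cᵀe^{At}, which is exactly the coefficient vector of x(0) in the free response:

```python
import numpy as np
from scipy.linalg import expm

K2, z2, p1, p2 = 2.0, 3.0, 1.0, 0.5          # z2 != p1, hence det G != 0
A = np.array([[-p1, 0.0], [K2*(z2 - p2), -p2]])  # realisation (3.2.15)
c = np.array([K2, 1.0])
x0_true = np.array([0.7, -0.3])              # the initial state to recover
t1, t2 = 0.4, 1.1

# Rows of G: coefficients of x1(0), x2(0) in y_l(t) = c^T e^{At} x(0)
G = np.vstack([c @ expm(A*t1), c @ expm(A*t2)])
y_samples = G @ x0_true                      # "measured" free response at t1, t2
x0_rec = np.linalg.solve(G, y_samples)       # x(0) = G^{-1} [y_l(t1); y_l(t2)]
```

Repeating this with z₂ = p₁ makes G singular, so x(0) cannot be recovered.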
Case b: p₁ = p₂.
The analysis is performed in an identical manner, but the time domain free response expression deduced from (3.2.19) is,

y_l(t) = K₂e^{−p₁t} · x₁(0) + K₂(z₂−p₁)t·e^{−p₁t} · x₁(0) + e^{−p₁t} · x₂(0) =

y_l(t) = K₂[1 + (z₂−p₁)t]e^{−p₁t} · x₁(0) + e^{−p₁t} · x₂(0)    (3.2.26)

The matrix G has the form,

G = [ K₂[1 + (z₂−p₁)t₁]e^{−p₁t₁}   e^{−p₁t₁} ; K₂[1 + (z₂−p₁)t₂]e^{−p₁t₂}   e^{−p₁t₂} ]

det G = K₂(p₁−z₂)(t₂−t₁)e^{−p₁(t₂+t₁)}

Also in this case,

z₂ = p₁ = p₂ ⇔ det G = 0 ⇔ det Q = 0

which means the system is not observable while z₂ = p₁.
3.2.5.4. Time Domain Free Response Interpretation for an Unobservable System.
Let us consider the situation when z₂ = p₁, so the observability condition is not accomplished.

The free response (3.2.25),

y_l(t) = [ K₂(p₁−z₂)/(p₁−p₂) · x₁(0) ] · e^{−p₁t} + [ K₂(z₂−p₂)/(p₁−p₂) · x₁(0) + x₂(0) ] · e^{−p₂t}

in this case of z₂ = p₁, is of the form

y_l(t) = [ K₂(z₂−p₂)/(p₁−p₂) · x₁(0) + x₂(0) ] · e^{−p₂t} ,  p₁ ≠ p₂, (z₂ = p₁)    (3.2.27)

and that from (3.2.26) is,

y_l(t) = [ K₂x₁(0) + x₂(0) ] · e^{−p₂t} ,  p₁ = p₂, (z₂ = p₁ = p₂)    (3.2.28)

From (3.2.27) it can be observed that the free response of this unobservable system depends on both components x₁(0) and x₂(0) of the state vector, but the output expresses the effect of the pole −p₂ only, through the mode e^{−p₂t}.

In this case one cannot determine, based on the response, separately both the components x₁(0) and x₂(0); only a linear combination of them can be determined:

K₂(z₂−p₂)/(p₁−p₂) · x₁(0) + x₂(0)

From (3.2.28) we see that this structure is maintained also when p₁ = p₂.
Therefore, if in a serial connection (3.2.13) the subsystem S_2 (3.2.12) has a zero s = −z_2 equal to the pole s = −p_1 of the subsystem S_1 (3.2.11), then the interconnected system remains of second order but it does not have the observability property.
Its forced response is of first order, depending only on the pole s = −p_2, with the transfer function,
(3.2.29)  H(s) = K_1/(s + p_1) · K_2(s + z_2)/(s + p_2) = K_1 K_2/(s + p_2)
In the transfer function of this serial connection two common factors appeared which, for the state equations realisation (3.2.15), caused the lack of the observability property of that realisation.
If the same common factors appear in the transfer function for another realisation by state equations of the same transfer function, it is possible for that realisation to lose the controllability property, to lose the observability property, or to lose both.
Generally, there exist particular realisations by state equations which explicitly preserve the controllability property, called controllable realisations (but which do not guarantee the observability property), as well as particular realisations by state equations which explicitly preserve the observability property, called observable realisations (but which do not guarantee the controllability property).
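The loss of observability under pole-zero cancellation can be checked numerically. The sketch below, a minimal illustration assuming one particular cascade realisation (not necessarily the book's (3.2.15)), builds state equations for H_1(s) = K_1/(s + p_1) followed by H_2(s) = K_2(s + z_2)/(s + p_2) and evaluates det Q, where Q = [c; cA] is the observability matrix; the numeric values of K_1, p_1, K_2, z_2, p_2 are arbitrary.

```python
import numpy as np

# Hypothetical cascade realisation: x1' = -p1*x1 + K1*u (subsystem S1),
# x2' = K2*(z2 - p2)*x1 - p2*x2, y = K2*x1 + x2 (subsystem S2 with u2 = x1).
def cascade_realisation(K1, p1, K2, z2, p2):
    A = np.array([[-p1, 0.0],
                  [K2 * (z2 - p2), -p2]])
    b = np.array([[K1], [0.0]])
    c = np.array([[K2, 1.0]])          # y = K2*x1 + x2
    return A, b, c

def obsv_det(A, c):
    # Observability matrix Q = [c; c A] for this second order system
    Q = np.vstack([c, c @ A])
    return np.linalg.det(Q)

# z2 != p1: det Q is nonzero, the realisation is observable
A, b, c = cascade_realisation(K1=1.0, p1=2.0, K2=3.0, z2=5.0, p2=4.0)
print(obsv_det(A, c))   # nonzero

# z2 == p1: the cancelled mode is unobservable, det Q = 0
A, b, c = cascade_realisation(K1=1.0, p1=2.0, K2=3.0, z2=2.0, p2=4.0)
print(obsv_det(A, c))   # 0 (up to rounding)
```

For this realisation det Q works out to K_2(p_1 − z_2), so it vanishes exactly when the zero of S_2 cancels the pole of S_1, in agreement with the det Q = 0 condition above.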
3.2.6. System Stabilisation by Serial Connection.
We saw, through the example presented above, that by serial connection it is possible for a pole of one subsystem's transfer function to be simplified (cancelled) by a zero of another subsystem's transfer function, in such a way that the cancelled pole does not appear in the interconnected transfer function.
This process of eliminating poles of some subsystems by cancellation with zeros of other subsystems is called serial compensation or cascade compensation.
Of course, since the notions involved are poles, zeros and transfer functions, it is expected that their effects appear in the forced response only.
In particular, at least from a theoretical point of view, in such a way we are able to eliminate all the poles of the system from the right half complex plane; that means a system which was unstable (because of poles located in the right half complex plane) becomes a stable one.
This process is called stabilisation by serial connection or stabilisation by serial compensation.
Stabilisation by serial compensation assures only the external stability, also known as input-output stability. We recall that there exist several concepts of stability, one of them being the external stability.
Such a procedure is strongly not recommended in practice. We support this recommendation by a concrete analysis performed on the example discussed above, described by relations (3.2.11) ... (3.2.15).
Let us consider the case where −p_1 > 0 and −p_2 < 0, namely the system S_2 is stable but S_1 is unstable. Considering in addition that −z_2 = −p_1 > 0, then S_2 is a non minimum phase stable system.
In such conditions the input-output behaviour of the serially interconnected system is also stable (external stability), as results from the transfer function (3.2.29),

(3.2.29)  H(s) = K_1/(s + p_1) · K_2(s + z_2)/(s + p_2) = K_1 K_2/(s + p_2) = Y(s)/U(s)   (zero initial conditions)

for which the forced response is determined by a first order transfer function containing only the stable pole s = −p_2.
Indeed, the forced component y_f(t) of the output, as a response to a bounded input u(t) which admits a steady state value u(∞), where

    u(∞) = lim_{s→0} s·U(s),

evolves to a steady state bounded value y_f(∞) given by

    lim_{t→∞} y_f(t) = lim_{s→0} s·Y(s) = lim_{s→0} s·[K_1 K_2/(s + p_2)]·U(s) = (K_1 K_2/p_2)·u(∞) = y_f(∞),

because the complex variable function s·H(s) = s·K_1 K_2/(s + p_2) is analytic on both the imaginary axis and the right half plane. This expresses the external stability in the meaning of bounded input bounded output (BIBO).
But from (3.2.27) and (3.2.28) we observe that, in the particular case of z_2 = p_1, the free response is also bounded and asymptotically goes to zero,

    lim_{t→∞} y_l(t) = lim_{t→∞} [K_2(z_2 − p_2)/(p_1 − p_2)·x_1(0) + x_2(0)]·e^(−p_2 t)
                     = [K_2(z_2 − p_2)/(p_1 − p_2)·x_1(0) + x_2(0)]·[lim_{t→∞} e^(−p_2 t)] = [..]·0 = 0,   for p_1 ≠ p_2,

    lim_{t→∞} y_l(t) = lim_{t→∞} [K_2·x_1(0) + x_2(0)]·e^(−p_2 t)
                     = [K_2·x_1(0) + x_2(0)]·[lim_{t→∞} e^(−p_2 t)] = [..]·0 = 0,   for p_1 = p_2,

for any bounded initial state x_1(0), x_2(0).
This means that the internal stability is assured too. However we must mention that this internal stability appeared only because the serially interconnected system is unobservable, having disconnected the unstable mode e^(−p_1 t).
Formally we can say that an unstable system S_1 can be made externally stable (bounded input bounded output, BIBO) by serial compensation, performing this by a serial connection of it with another system S_2 which must have a zero equal to the undesired pole (now the unstable pole) of the system S_1.
This kind of stabilisation by serial connection is interesting "on the paper" only, because:
1. It is very sensitive.
2. The system keeps being internally unstable.
If the relation z_2 = p_1 is not exactly realised, but instead z_2 = p_1 + ε with ε as small as possible, the response y(t) goes to infinity.
From (3.2.25) it results,

(3.2.30)  y_l(t) = −[K_2·ε/(p_1 − p_2)]·e^(−p_1 t)·x_1(0) + [K_2(z_2 − p_2)/(p_1 − p_2)·x_1(0) + x_2(0)]·e^(−p_2 t)

    lim_{t→∞} y_l(t) → ±∞   if −p_1 > 0.
The forced response is,

(3.2.31)  y_f(t) = L^(−1){ K_1 K_2(s + z_2)/[(s + p_1)(s + p_2)]·U(s) }
                 = Σ_i r_i(t)·e^(λ_i t) − [K_1 K_2·U(−p_1)/(p_1 − p_2)]·ε·e^(−p_1 t),   −p_1 > 0,

where by r_i(t) we have denoted the residues of the function Y_f(s) in the pole −p_2 and in the poles λ_i of the Laplace transform U(s) of the input. The poles of U(s) belong to the left half plane because u(t) is a bounded function.
So,

    lim_{t→∞} y_f(t) = 0 ± ∞ = ±∞.
Each component of the state vector becomes unbounded for non-zero causes (the initial state and the input).
From (3.2.18), applying the inverse Laplace transform, one obtains,
(3.2.32)  x_1(t) = e^(−p_1 t)·x_1(0) + K_1 U(−p_1)·e^(−p_1 t) + Σ_i r_i(t)·e^(λ_i t)
where now by r_i(t) we have denoted the residues of the function K_1 U(s)/(s + p_1) in the poles λ_i of the Laplace transform U(s), Re(λ_i) < 0.
We can express x_1(t) as a sum between an unstable component x_1^I(t), generated by the unstable pole −p_1 > 0, and a stable component x_1^S(t), generated by the poles λ_i,

(3.2.33)  x_1(t) = x_1^I(t) + x_1^S(t)
(3.2.34)  x_1^I(t) = [x_1(0) + K_1 U(−p_1)]·e^(−p_1 t)

(3.2.35)  lim_{t→∞} x_1^I(t) = ±∞  ⇒  lim_{t→∞} x_1(t) = ±∞

(3.2.36)  x_1^S(t) = Σ_i r_i(t)·e^(λ_i t),   lim_{t→∞} x_1^S(t) = 0
The second relation from (3.2.18) leads to,

(3.2.37)  X_2(s) = [K_2(z_2 − p_2)/(p_2 − p_1)]·[x_1(0) + K_1 U(s)]·1/(s + p_1)
                 + [K_2(z_2 − p_2)/(p_1 − p_2)]·[x_1(0) + K_1 U(s)]·1/(s + p_2) + [1/(s + p_2)]·x_2(0)
Similarly, we can express the unstable component x_2^I(t), generated by the pole (−p_1 > 0), and the stable component x_2^S(t), generated by the pole −p_2 < 0 and by the poles λ_i of the function U(s),

(3.2.38)  x_2(t) = x_2^I(t) + x_2^S(t)
(3.2.39)  x_2^I(t) = [K_2(z_2 − p_2)/(p_2 − p_1)]·[x_1(0) + K_1 U(−p_1)]·e^(−p_1 t) = −[K_2(z_2 − p_2)/(p_1 − p_2)]·x_1^I(t)

If −p_1 > 0 and z_2 = p_1, for K_2 > 0 one obtains

    x_1^I(t) → ±∞  ⇒  x_2^I(t) = −K_2·x_1^I(t) → ∓∞
Because

    x_2^S(t) = L^(−1){ [K_2(z_2 − p_2)/(p_1 − p_2)]·[x_1(0) + K_1 U(s)]·1/(s + p_2) + [1/(s + p_2)]·x_2(0) }

we have lim_{t→∞} x_2^S(t) = 0, so
(3.2.40)  lim_{t→∞} x_2(t) = −K_2·[lim_{t→∞} x_1^I(t)] + 0 = −K_2·(±∞)
The component x_2(t) is unbounded because the input u_2 = y_1 = x_1 of the system S_2 is unbounded too.
Evidently, such a situation cannot appear in physical systems. In a physical system an unstable linear mathematical model is possible, if at all, only in a bounded domain of the input and output values. At the borders of these domains a saturation phenomenon is encountered, so the linear model is no longer valid.
One physical explanation of this stabilisation performed in the system S_2 is as follows: the unstable component x_1^I(t) = y_1^I(t) of the first subsystem is transmitted to the output of S_2 through two ways, as we also can see in Fig. 3.2.5:
1. Through the path of the direct connection, by the factor K_2,
2. Through the path of the dynamic connection, with the component x_2.
The system S_2 stabilises the system S_1 if S_2 has such parameters that the component x_1^I, which is transmitted through the two ways, is finally reciprocally cancelled.
Indeed,

    y_2^I(t) = K_2·x_1^I(t) + x_2^I(t) = K_2·x_1^I(t) − [K_2(z_2 − p_2)/(p_1 − p_2)]·x_1^I(t) = K_2·[1 − (z_2 − p_2)/(p_1 − p_2)]·x_1^I(t)
If z_2 = p_1 then,

    y_2^I(t) = K_2·x_1^I(t) − [K_2(p_1 − p_2)/(p_1 − p_2)]·x_1^I(t) = K_2·x_1^I(t) − K_2·x_1^I(t) = 0,  ∀t.
Practically this is not possible because x_1^I(t) → ±∞ and

    y_2^I(t) = K_2·x_1^I(t) − K_2·x_1^I(t) → K_2·(±∞) − K_2·(±∞) = ±(∞ − ∞),

which means "the stabilised" output y_2^I(t) is the difference of two very large values which, in the limit t → ∞, becomes an indetermination of the type ∞ − ∞.
The theoretical solution of this indetermination gives a finite value for the limit,

    lim_{t→∞} y_2^I(t) = 0.

This finite limit is our interpretation of the output of the system S_2 which is, we say, serially "stabilised".
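The sensitivity argument above can be illustrated numerically by evaluating the free response formula (3.2.25) directly, once with exact cancellation z_2 = p_1 and once with a tiny mismatch ε. The numeric values of p_1, p_2, K_2, the initial state and ε below are arbitrary illustrative choices.

```python
import math

# Free response (3.2.25) of the serially compensated pair, in the book's
# notation: poles s = -p1 (here unstable, -p1 > 0) and s = -p2 (stable),
# zero s = -z2.
def y_free(t, K2, p1, p2, z2, x10, x20):
    c1 = K2 * (p1 - z2) / (p1 - p2)           # coefficient of e^{-p1 t}
    c2 = K2 * (z2 - p2) / (p1 - p2) * x10 + x20
    return c1 * x10 * math.exp(-p1 * t) + c2 * math.exp(-p2 * t)

p1, p2, K2 = -1.0, 1.0, 1.0    # pole s = -p1 = +1 is unstable

exact = y_free(40.0, K2, p1, p2, z2=p1, x10=1.0, x20=0.0)          # z2 = p1
mismatch = y_free(40.0, K2, p1, p2, z2=p1 + 1e-6, x10=1.0, x20=0.0)

print(abs(exact) < 1e-10)   # True: exact cancellation, free response decayed
print(abs(mismatch) > 1e9)  # True: tiny eps, the unstable mode e^{+t} reappears
```

A mismatch of only 10^-6 in the zero location leaves the free response at t = 40 around 10^11, which is the quantitative content of "it is very sensitive".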
3.2.7. Steady State Serial Connection of Two Systems.
A system S_i is in a so-called equilibrium state, denoted xe_i, if its state variable x_i is constant for any time moment starting with an initial time moment.
For a continuous time system S_i of the form (3.1.2),

(3.2.41)  S_i:  ẋ_i(t) = f_i(x_i(t), u_i(t), t);   y_i(t) = g_i(x_i(t), u_i(t), t);
this means,

(3.2.42)  x_i(t) = xe_i = const., ∀t ≥ t_0  ⇔  ẋ_i(t) ≡ 0, ∀t ≥ t_0
The equilibrium state is the real solution of the equation

(3.2.43)  f_i(xe_i, u_i(t), t) = 0  ⇒  xe_i = f_i^(−1)(u_i(t), t),  xe_i = const.,

possible only for some function u_i(t) = ue_i(t).
The output in the equilibrium state is,

(3.2.44)  ye_i(t) = g_i(xe_i, ue_i(t), t)
If the system is time invariant, that means

(3.2.45)  S_i:  ẋ_i(t) = f_i(x_i(t), u_i(t));   y_i(t) = g_i(x_i(t), u_i(t));
an equilibrium state

(3.2.46)  xe_i = f_i^(−1)(u_i(t)),  xe_i = const.

is possible only if the input is a time constant function,

(3.2.47)  u_i(t) = ue_i(t) = Ue_i ∈ D_u, ∀t ≥ t_0  ⇒

(3.2.46')  xe_i = f_i^(−1)(Ue_i),  xe_i = const. ∈ R^n,

and such a regime is called steady state regime.
The output in a steady state regime is

(3.2.48)  Ye_i = g_i(xe_i, Ue_i) = g_i(f_i^(−1)(Ue_i), Ue_i) = Q_i(Ue_i),  Ue_i ∈ D_u.

For the sake of convenience we shall denote the input and output variables in steady state regime by

(3.2.49)  Ye_i = Y_i,  Ue_i = U_i.
The input-output relation in steady state regime is also called static characteristic,

(3.2.50)  Y_i = Q_i(U_i),  U_i ∈ D_u.

We can say that a system is in steady state regime, starting at an initial time moment t_0, if all the variables (state, input, output) are time constant functions for ∀t ≥ t_0.
A system S_i is called a static system if there exists at least one static characteristic (3.2.50). It is possible for a system to have several equilibrium states and so several static characteristics.
For a SISO LTI system,

(3.2.51)  S_i:  ẋ_i(t) = A_i x_i(t) + b_i u_i(t);   y_i(t) = c_i^T x_i(t) + d_i u_i(t),

the static characteristic is

(3.2.52)  Y_i = Q_i(U_i) = [−c_i^T A_i^(−1) b_i + d_i]·U_i,  ∀U_i ∈ R,  if det(A_i) ≠ 0.

If det(A_i) = 0, which means the system has at least one eigenvalue equal to zero (the system is of integral type), then the static characteristic exists only for U_i = 0.
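The static characteristic (3.2.52) can be computed directly: setting ẋ = 0 in (3.2.51) gives xe = −A^(−1)·b·U and then Y = (−c^T A^(−1) b + d)·U. A small numeric sketch, with arbitrary illustrative values of A, b, c, d:

```python
import numpy as np

# Static characteristic of a SISO LTI system (equation 3.2.52); the matrices
# below are arbitrary sample data with det(A) != 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 0.0

gain = (-c @ np.linalg.inv(A) @ b)[0, 0] + d   # slope Q_i of the static characteristic

U = 2.0
xe = -np.linalg.inv(A) @ b * U                 # equilibrium state for constant input U
Y = (c @ xe)[0, 0] + d * U
print(abs(Y - gain * U) < 1e-12)   # True: Y = Q(U) is linear in U
```

For an integral-type system (det A = 0) the inversion fails, which is the numerical counterpart of the statement that the static characteristic then exists only for U = 0.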
For discrete time systems as (3.1.4), the discussion is similar.
The equilibrium state is

(3.2.53)  x_k = xe = const., ∀k ≥ k_0  ⇔  x_{k+1} ≡ x_k, ∀k ≥ k_0,

given by the equation,

(3.2.54)  xe_i = f_i(xe_i, u_i(k), k)  ⇒  xe_i = ψ_i^(−1)(u_i(k), k),  xe_i = const.
Let us now consider two nonlinear subsystems S_1, S_2, described in steady state regime by the static characteristics

(3.2.55)  S_1:  Y_1 = Q_1(U_1)

(3.2.56)  S_2:  Y_2 = Q_2(U_2)

They are serially connected through the connection relation,

(3.2.57)  R_c:  U_2 = Y_1,  U = U_1,  Y = Y_2.

The serially interconnected system has the static characteristic,

    Y = Q(U) = Q_2[Q_1(U)] = (Q_2 ∘ Q_1)(U),

obtained by simple composition of the two functions. This composition can also be performed graphically.
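The composition of static characteristics can be sketched in a few lines. The characteristics Q_1 and Q_2 below are hypothetical examples (a square law and a saturation), chosen only to show that the serial connection simply chains the two maps:

```python
# Static characteristics of two nonlinear subsystems in steady state; the
# serial connection's characteristic is the composition Q = Q2 o Q1 (3.2.57).
def Q1(U):              # hypothetical characteristic of S1: square law
    return U ** 2

def Q2(U):              # hypothetical characteristic of S2: saturation at 10
    return min(U, 10.0)

def Q(U):               # serial connection: U2 = Y1
    return Q2(Q1(U))

print(Q(2.0))   # 4.0  (below saturation)
print(Q(5.0))   # 10.0 (Q1 output 25 saturates in S2)
```

Note that, unlike the LTI SISO case, for nonlinear characteristics the order matters: Q_2 ∘ Q_1 is in general different from Q_1 ∘ Q_2.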
3.2.8. Serial Connection of Several Subsystems.
All the aspects discussed regarding the connection of two systems can be extended without difficulty to several subsystems, let us say q subsystems.
For q LTI systems described by transfer matrices H_i(s),

(3.2.58)  S_i:  Y_i(s) = H_i(s)U_i(s),  i = 1 : q,

the connection relations are

(3.2.59)  R_c:  u_{i+1} = y_i,  i = 1 : (q − 1),

the connection conditions are

(3.2.60)  C_c:  Γ_i ⊆ Ω_{i+1},  i = 1 : (q − 1).

The input-output equivalent transfer matrix is,

(3.2.61)  H(s) = H_q·H_{q−1}·...·H_1
because

    Y_q = H_q U_q,  Y_{q−1} = H_{q−1} U_{q−1},  but  U_q = Y_{q−1}  ⇒  Y_q = H_q(H_{q−1} U_{q−1}),  and so on.
For SISO systems, the succession of transfer functions in the equivalent product can be changed, that means,

(3.2.62)  H(s) = H_q H_{q−1}...H_1 = H_1...H_{q−1} H_q
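For SISO transfer functions written as ratios of polynomials, the serial connection (3.2.61) amounts to multiplying numerators and denominators. A minimal sketch, with hypothetical subsystems H_1 = 1/(s+1), H_2 = 2/(s+3), H_3 = (s+2)/(s+4), using NumPy's polynomial convolution:

```python
import numpy as np

# Serial connection of q SISO transfer functions: the equivalent transfer
# function is the product, so numerator and denominator polynomials multiply
# (np.polymul convolves coefficient lists, highest power first).
def series(tf_list):
    num, den = np.array([1.0]), np.array([1.0])
    for n, d in tf_list:
        num = np.polymul(num, n)
        den = np.polymul(den, d)
    return num, den

subsystems = [([1.0], [1.0, 1.0]),          # H1 = 1/(s+1)
              ([2.0], [1.0, 3.0]),          # H2 = 2/(s+3)
              ([1.0, 2.0], [1.0, 4.0])]     # H3 = (s+2)/(s+4)
num, den = series(subsystems)
print(num.tolist())   # [2.0, 4.0]             -> 2(s+2)
print(den.tolist())   # [1.0, 8.0, 19.0, 12.0] -> (s+1)(s+3)(s+4)
```

Since polynomial multiplication is commutative, reordering `subsystems` leaves the result unchanged, which is exactly the SISO property (3.2.62); for transfer matrices the product order in (3.2.61) must be kept.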
3.3. Parallel Connection.

    S_i:  Y_i(s) = H_i(s)U_i(s),  i = 1,...,q;

    R_c:  U_i(s) = U(s),  Y(s) = Σ_{i=1}^{q} Y_i(s),  i = 1,...,q;

    C_c:  Ω_i = Ω,  Γ_i ⊆ Γ,  ∀i = 1,...,q.

    Y(s) = Σ_{i=1}^{q} Y_i(s) = Σ_{i=1}^{q} H_i(s)U(s) = [Σ_{i=1}^{q} H_i(s)]·U(s) = H(s)U(s)
We can draw a block diagram which illustrates this connection: each of the subsystems H_1, H_2, ..., H_q receives the same input u, and their outputs are summed to give y.

Figure no. 3.3.1.
The equivalent transfer matrix is,

    H(s) = Σ_{i=1}^{q} H_i(s)
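For SISO transfer functions the parallel sum can be carried out on the polynomial coefficients: for rational H_i = n_i/d_i, the sum is (n_1·d_2 + n_2·d_1)/(d_1·d_2). A minimal sketch with hypothetical subsystems H_1 = 1/(s+1) and H_2 = 1/(s+2):

```python
import numpy as np

# Parallel connection of two SISO transfer functions: H(s) = H1(s) + H2(s).
# np.polymul multiplies polynomials, np.polyadd adds them (highest power first).
def parallel(tf1, tf2):
    n1, d1 = tf1
    n2, d2 = tf2
    num = np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1))
    den = np.polymul(d1, d2)
    return num, den

# H1 = 1/(s+1), H2 = 1/(s+2)  ->  H = (2s+3)/((s+1)(s+2))
num, den = parallel(([1.0], [1.0, 1.0]), ([1.0], [1.0, 2.0]))
print(num.tolist())   # [2.0, 3.0]
print(den.tolist())   # [1.0, 3.0, 2.0]
```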
3.4. Feedback Connection.
We shall consider the feedback connection of two subsystems,

    S_i:  Y_i(s) = H_i(s)U_i(s),  i = 1, 2;

    R_c:  E = U ± Y_2;  U_1 = E;  U_2 = Y;  Y = Y_1,

so that

    Y_1(s) = H_1(s)U_1(s),   Y_2(s) = H_2(s)U_2(s).

We can plot this set of relations as in the block diagram below. The equivalent feedback interconnected transfer matrix H(s) can be obtained by an algebraical procedure.
(Block diagram: the error E = U ± Y_2 enters H_1, producing Y = Y_1; the output Y is fed back through H_2; the whole loop is equivalent to a single block H from U(s) to Y(s).)

    Y = H_1(U ± H_2 Y) = ±H_1 H_2 Y + H_1 U  ⇒  Y = (I ∓ H_1 H_2)^(−1) H_1 U

    H = (I ∓ H_1 H_2)^(−1)·H_1,   where I ∓ H_1 H_2 is (r×r), H_1 H_2 = (r×p)(p×r), and H is (r×p).
Another way:

    E = U ± Y_2 = U ± H_2 H_1 E  ⇒  E = (I ∓ H_2 H_1)^(−1) U  ⇒  Y = H_1(I ∓ H_2 H_1)^(−1) U

    H = H_1·(I ∓ H_2 H_1)^(−1),   where I ∓ H_2 H_1 is (p×p), H_2 H_1 = (p×r)(r×p), and H is (r×p).
The equivalent transfer matrix H(s),

    Y(s) = H(s)U(s),

of the feedback interconnected system has two forms which are algebraically identical.
The first one,

    H(s) = [I ∓ H_1(s)H_2(s)]^(−1)·H_1(s),

requires an (r × r) matrix inversion, but the second,

    H(s) = H_1(s)·[I ∓ H_2(s)H_1(s)]^(−1),

requires a (p × p) matrix inversion.
In the SISO case we have a transfer function,

    H(s) = H_1(s)/[1 ∓ H_2(s)H_1(s)] = H_1(s)/[1 ∓ H_1(s)H_2(s)] = Y(s)/U(s).
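The algebraic identity of the two matrix forms can be verified numerically. The sketch below takes negative feedback, H = (I + H_1 H_2)^(−1) H_1 = H_1 (I + H_2 H_1)^(−1), with arbitrary sample matrices standing for H_1(s) (r×p) and H_2(s) (p×r) evaluated at one value of s, here r = 2, p = 3:

```python
import numpy as np

# Two forms of the negative-feedback transfer matrix; the left form inverts an
# (r x r) matrix, the right form a (p x p) matrix, yet the results coincide.
H1 = np.array([[1.0, 0.0, 2.0],
               [0.0, 1.0, 1.0]])       # r x p sample for H1(s)
H2 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])            # p x r sample for H2(s)

Ha = np.linalg.inv(np.eye(2) + H1 @ H2) @ H1   # (r x r) = (2 x 2) inversion
Hb = H1 @ np.linalg.inv(np.eye(3) + H2 @ H1)   # (p x p) = (3 x 3) inversion
print(np.allclose(Ha, Hb))   # True

# SISO special case: H = h1/(1 + h2*h1)
h1, h2 = 2.0, 0.5
print(h1 / (1.0 + h2 * h1))   # 1.0
```

When r < p the first form is cheaper, and conversely; this is the practical reason for keeping both expressions.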
4. GRAPHICAL REPRESENTATION AND REDUCTION OF
SYSTEMS.
4.1. Principle Diagrams and Block Diagrams.
Frequently, dynamical systems are represented, interpreted and manipulated using different graphical methods and techniques.
There are three main types of graphical representation of systems:
1.  Principle diagrams ;
2.  Block diagrams;
3.  Signal flow graphs.
4.1.1. Principle Diagrams.
The principle diagram is a method of representing physical systems using norms and symbols that belong to the physical system's domain, expressed so that one can understand how the physical system runs.
They are also called schematic diagrams. By such diagrams only physical objects are described. There is no mathematical model in these representations, but they contain all the specifications or the description of that physical object's (system's) configuration. A principle diagram reveals all the components in a form amenable to analysis, design and evaluation.
To be able to understand and interpret a principle diagram, knowledge and competence in the field to which that object belongs are necessary.
The same symbol can have different meanings depending on the field of application. For example, the same symbol represents a resistor for an electrical engineer but a spring for a mechanical engineer.
Starting from and using a principle diagram, an oriented system (or several oriented systems) can be specified once the output variables (outputs, for short) are selected. After that, the mathematical model, that means the abstract system attached to that oriented system, can be determined.
4.1.2. Block Diagrams.
A block diagram, in the system theory approach, is a pictorial (graphical) representation of the mathematical relations between the variables which characterise a system. Mainly, a block diagram represents the cause and effect relationship between the input and output of an oriented system. So a block diagram expresses the abstract system related to an oriented system. Block diagrams consist of unidirectional operational blocks.
If in this representation the system state, including the initial state, is involved, then the block diagram is called a state diagram (SD).
The fundamental elements of a block diagram are:
1. Oriented Lines.
Oriented lines represent the variables involved in the mathematical relationships. They are drawn as straight lines marked with arrows. For short, an oriented line is called an "arrow".
The direction of the arrow points out the cause-effect direction and has nothing to do with the direction of the flow of variables in the principle diagram.
2. Blocks.
A block, usually drawn as a rectangle, represents the mathematical operator which relates the causes (input variables) and the effects (output variables).
Inside the rectangle representing a block, a symbol of that operator is marked. However, some specific operators are represented by other geometrical figures (for example, the sum operator is usually represented by a circle).
The input variables of a block are drawn as arrows incoming to the rectangle (geometrical figure) representing that block. The output variables are drawn as arrows outgoing from the rectangle representing that block. One oriented line (arrow), that means one variable, can be an output variable for one block and an input variable for another block.
For example, an explicit relation between the variable u and the variable y is of the form,

(4.1.1)  y = F(u),

where u and y can be time functions. The symbol F( ), denoting an operator (it can be a simple function), expresses an oriented system where u is the cause and y is the effect; that means y is the output variable and u is the only input variable. We can write (4.1.1) as

    y = F(u) ⇔ y = F{u} ⇔ y = Fu

and the attached block diagram is as in Fig. 4.1.1: an arrow u entering a block F, with an outgoing arrow y.

Figure no. 4.1.1.
3. Take-off Points.
A take-off point, also called a pick-off point, graphically illustrates that the same variable, represented by an arrow, is delivered (dispersed) to several arrows. It is drawn as a dot. Several equivalent representations of a take-off point for a variable y are shown in Fig. 4.1.2.

Figure no. 4.1.2.
Example 4.1.1. Block Diagram of an Algebraical Relation.
Let us consider an algebraical relation,

(4.1.2)  R(x_1, x_2, x_3) = 0,

where the variables x_1, x_2, x_3 can be time functions, x_i = x_i(t). Let us consider that (4.1.2) is a linear relation with constant coefficients,

(4.1.3)  a_1 x_1 + a_2 x_2 + a_3 x_3 = 0
The relations (4.1.2), (4.1.3) represent a non-oriented system and cannot be represented by a block diagram.
Suppose we are interested in the variable x_1; it can be expressed, if a_1 ≠ 0, as

    x_1 = −(a_2/a_1)x_2 − (a_3/a_1)x_3  ⇔  x_1 = f(x_2, x_3)  ⇔  x_1 = F{[x_2 x_3]^T}  ⇔  x_1 = F{u},  u = [x_2 x_3]^T,

which constitutes an oriented system where x_1 is the output and x_2, x_3 are the inputs.
The block diagram representing this oriented system is as in Fig. 4.1.3: x_2 and x_3 pass through the gains a_2/a_1 and a_3/a_1 and are summed with minus signs to give x_1; the whole is equivalent to a single block F with inputs x_2, x_3 and output x_1 = f(x_2, x_3).

Figure no. 4.1.3.
Any of the variables x_2 or x_3 from (4.1.2), (4.1.3) could equally be chosen as the output variable. Now let relation (4.1.3) be of the form,

(4.1.5)  x_1 = K(b_1 x_1 + b_2 x_2 + b_3 x_3),

where a_1 = 1 − K b_1, a_2 = −K b_2, a_3 = −K b_3.
Suppose we are interested in x_1; that means x_1 is the output variable and x_2, x_3 are the input variables.
Relation (4.1.5) does not represent a unidirectional operator because x_1 depends on itself. From (4.1.5) we can delimit several unidirectional operators (blocks) by introducing some new variables, for example,

(4.1.6)  w_1 = b_2 x_2 + b_3 x_3;   w_2 = b_1 x_1;   w_3 = w_1 + w_2;   x_1 = K w_3.

Each relation from (4.1.6) represents a unidirectional operator which can be represented by a block. All the relations from (4.1.6), which together represent (4.1.5), can be represented as several interconnected blocks, as in Fig. 4.1.4.
Figure no. 4.1.4: x_2 and x_3 pass through the gains b_2 and b_3 into the sum w_1; the output x_1 passes through b_1 giving w_2; the sum w_3 = w_1 + w_2 passes through the gain K back to x_1.
This is a feedback structure containing a loop. If K and b_1 are constant coefficients, this loop is an algebraical loop, which causes many difficulties in numerical implementations.
Of course, by manipulating the block diagram from Fig. 4.1.4, or by eliminating the intermediate variables w_1, w_2, w_3 from (4.1.6), or by extracting x_1 from (4.1.5), we get the oriented system, having x_1 as output variable and x_2, x_3 as input variables, characterised by a unidirectional relation of the form

(4.1.7)  x_1 = [K b_2/(1 − K b_1)]·x_2 + [K b_3/(1 − K b_1)]·x_3 = (−a_2/a_1)x_2 + (−a_3/a_1)x_3.

Now the relation (4.1.7) can be depicted as a unidirectional block as in Fig. 4.1.3.
If

    1 − K b_1 = 0 ⇔ a_1 = 0,

then relations (4.1.3) and (4.1.5) are degenerate, that means they do not contain the variable x_1.
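The algebraic loop can also be resolved numerically. The sketch below, with arbitrary sample coefficients, compares the direct elimination (4.1.7) against a naive fixed-point sweep around the loop, which is what a careless numerical block-diagram evaluation effectively does; such a sweep only converges when |K·b_1| < 1, which is one concrete form of the "difficulties in numerical implementations" mentioned above.

```python
# The algebraic loop of (4.1.5): x1 = K*(b1*x1 + b2*x2 + b3*x3).
# Sample coefficients with |K*b1| = 0.5 < 1 so the iteration converges.
K, b1, b2, b3 = 2.0, 0.25, 1.0, 1.0
x2, x3 = 3.0, 1.0

# Direct elimination (4.1.7): x1 = K*b2/(1-K*b1)*x2 + K*b3/(1-K*b1)*x3
x1_direct = (K * b2 * x2 + K * b3 * x3) / (1.0 - K * b1)

# Naive fixed-point sweep around the loop
x1 = 0.0
for _ in range(100):
    x1 = K * (b1 * x1 + b2 * x2 + b3 * x3)

print(abs(x1 - x1_direct) < 1e-9)   # True: the sweep converged to (4.1.7)
```

With, say, K·b_1 = 2 the same sweep diverges even though (4.1.7) still gives a perfectly finite answer, so simulation tools must detect and solve algebraic loops rather than iterate them blindly.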
Example 4.1.2. Variables' Directions in Principle Diagrams and Block Diagrams.
Let us consider a physical object, as described by the principle (schematic) diagram from Fig. 4.1.5.

Figure no. 4.1.5: a) the water tank with incoming flow q1=u1 (pipe P1), outgoing flow q2=u2 (pipe P2) and level L=y; b) the water tank as a physical "block"; c) the water tank as an oriented system with inputs q1=u1, q2=u2 and output L=y.
It represents a cylindrical water tank supplied, through the pipe P1, with water at the flow rate q1=u1, and from which water drains, through the pipe P2, at the flow rate q2=u2.
As we can see, from the physical point of view, q1=u1 is an incoming flow and q2=u2 is an outgoing flow.
Suppose we are interested in the water level in the tank, denoted by L=y. In such a way the variable L=y, an attribute (a characteristic) of the physical object (water tank), is the effect we are interested in.
All the causes which affect this selected output are represented by the two flow rates q1=u1 and q2=u2. So, in the oriented system, based on causality principles, both u1 and u2 are input variables.
The corresponding block diagram, in the systems theory meaning, will have L=y as output and both u1 and u2 as inputs. It is represented by the block diagram from Fig. 4.1.5.c or by one of the block diagrams from Fig. 4.1.6.
The mathematical relationship between y and u1, u2 in the time domain,

(4.1.8)  y(t) = K·∫_{t_0}^{t} [u1(τ) − u2(τ)]dτ + y(t_0),

is represented in Fig. 4.1.6.a.
To determine the mathematical model in the complex domain we define the variables as variations with respect to a steady state, defined by:

    Y_ss = y(0),  U1_ss = u1(0) = 0,  U2_ss = u2(0) = 0.

Denoting,

    Y(s) = L{y(t) − Y_ss},  U_1(s) = L{u_1(t) − U1_ss},  U_2(s) = L{u_2(t) − U2_ss},

we have

(4.1.8')  Y(s) = (K/s)·[U_1(s) − U_2(s)],

the IO relation by components, represented in Fig. 4.1.6.b.
In matrix form the IO relation is,

(4.1.9)  Y(s) = H(s)·[U_1(s); U_2(s)] = H(s)·U(s),   H(s) = [K/s  −K/s],

which allows us to represent the system as a whole, as in Fig. 4.1.6.c.
Figure no. 4.1.6: a) time domain diagram: u_1(t) and u_2(t) enter a summer with signs + and −, the difference passes through the gain K and an integrator with initial value y(t_0), giving y(t); b) complex domain diagram: U_1(s) − U_2(s) passes through the block K/s, giving Y(s); c) the system as a single block H(s) with inputs U_1(s), U_2(s) and output Y(s).
Example 4.1.3. Block Diagram of an Integrator.
The integrator is an operator which performs, in the time domain,

(4.1.10)  x(t) = x(t_0) + ∫_{t_0}^{t} u(τ)dτ,   ẋ(t) = u(t),

whose block diagram is as in Fig. 4.1.7.a. Because the first equation from (4.1.10) can be written using the Dirac impulse as in (4.1.11),

(4.1.11)  x(t) = ∫_{t_0}^{t} x(t_0)δ(τ − t_0)dτ + ∫_{t_0}^{t} u(τ)dτ = ∫_{t_0}^{t} [u(τ) + x(t_0)δ(τ − t_0)]dτ,

we can draw the integrator in the time domain as in Fig. 4.1.7.b.
we can draw the integrator, in timedomain as in Fig. 4.1.7.b.
x(t)
u(t)
x( )
t
0
x(t)
.
∫
Time domain
x(t)
.
x(t)
u(t)
x( )
t
0
∫
(t ) t
0
δ
+
+
a) b)
1
s
x(0)
X(s)
U(s) +
+
Complex domain
L{x(t)}
•
Figure no. 4.1.7. Figure no. 4.1.8.
We can represent the integrator behaviour in the complex domain taking into consideration that

    L{ẋ(t)} = sX(s) − x(0).

We have to understand that t = (t_a − t_0), i.e. the initial time moment has to be zero when we are using the Laplace transformation.
Denoting X(s) = L{x(t)} we obtain,

(4.1.12)  X(s) = (1/s)·[L{ẋ(t)} + x(0)].

Using the integrator graphical representation (by block diagrams or signal flow graphs) together with summing and scalor operators we can represent the so-called state diagram (SD) of a system.
4.1.3. State Diagrams Represented by Block Diagrams.
State diagrams, SD for short, are the graphical representation of the state equations of a system. They can be drawn using both the block diagram and the signal flow graph methods.
An SD contains only three types of elements (operators):
1. Nondynamical (scalor type) elements, represented by ordinary scalar or vectorial functions. They can be matrix gains or scalar gains.
2. Summing operators.
3. Integrators, considering or not the initial state.
State diagrams can be used also for time-variant systems and for nonlinear systems, although in those cases the algebraical computations no longer apply. State diagrams can be represented both in the time domain and in the complex domain (s or z). They can be applied to both continuous-time and discrete-time systems.
To draw an SD, first the integrators involved in the state equations have to be represented, and then the diagram is filled in with the other two types of components.
Let us consider a first order continuous-time system described by the state equation (4.1.13),

(4.1.13)  ẋ(t) = a(t)·x(t) + b(t)·u(t);   y(t) = c(t)·x(t) + d(t)·u(t).

The corresponding SD in the time domain is depicted in Fig. 4.1.9.

Figure no. 4.1.9: u(t) passes through b(t) into a summer together with a(t)·x(t); the sum ẋ(t) enters an integrator with initial value x(0), giving x(t); the output is formed as y(t) = c(t)·x(t) + d(t)·u(t).
For linear time-invariant systems all the coefficients a, b, c, d have constant values, so we can represent the state equation (4.1.13) as

(4.1.14)  ẋ(t) = a·x(t) + b·u(t);   y(t) = c·x(t) + d·u(t).

The time domain SD of this system is identical to that from Fig. 4.1.9, except that the coefficients are constants.
The state diagram in the complex domain is identical to the time domain SD, except that the integrator is replaced by its complex equivalent from Fig. 4.1.8 and the variables are denoted by their complex equivalents, as in Fig. 4.1.10.
Sometimes, having the integrator represented in the complex domain (because in the complex domain we can perform algebraic operations), we denote the variables in the time domain, or in both the time and the complex domain, taking advantage of either form when necessary.
Figure no. 4.1.10: the complex domain SD: U(s) passes through b into a summer together with a·X(s); the sum L{ẋ(t)} and x(0) enter the complex domain form of the integrator (block 1/s), giving X(s); the output is Y(s) = c·X(s) + d·U(s).
In these state diagrams we can still see explicitly the state derivative (or its complex image), in addition to the initial state. If we are not interested in the state derivative, we can transform the internal loop into a simple block. Doing this on the above SD, we obtain:
    L{ẋ(t)} = sX(s) − x(0) = aX(s) + bU(s)

    X(s) = (1/s)·[aX(s) + x(0) + bU(s)]

(4.1.15)  X(s) = [1/(s − a)]·[x(0) + bU(s)]

Now we replace the internal loop by the transfer function 1/(s − a) with a summing element at its input, as in Fig. 4.1.11.
Figure no. 4.1.11: U(s) passes through b, is summed with x(0), and enters the block 1/(s − a), giving X(s); the output is Y(s) = c·X(s) + d·U(s).
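The reduced form (4.1.15) can be cross-checked in the time domain: for the LTI system ẋ = a·x + b·u with a step input u(t) = U_0, inverting X(s) = [x(0) + b·U_0/s]/(s − a) gives x(t) = e^(at)·x(0) + (b/a)(e^(at) − 1)·U_0 for a ≠ 0. A minimal sketch comparing this against a crude forward-Euler simulation of the state equation (the values of a, b, x(0), U_0 are arbitrary):

```python
import math

# x' = a*x + b*U0 with constant input U0; closed form from (4.1.15)
a, b, x0, U0 = -2.0, 1.0, 1.0, 3.0

def x_exact(t):
    return math.exp(a * t) * x0 + (b / a) * (math.exp(a * t) - 1.0) * U0

# Forward-Euler integration of the state equation as a reference
dt, x, t = 1e-4, x0, 0.0
while t < 1.0:
    x += dt * (a * x + b * U0)
    t += dt
print(abs(x - x_exact(1.0)) < 1e-3)   # True: simulation matches the inverse transform
```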
On this form of the SD we can very easily see the dependence of Y(s) and X(s) upon U(s) and x(0).
The state diagram of a system can be drawn also for non-scalar systems, where at least one coefficient is a matrix, of the general form as in (4.1.16),

(4.1.16)  ẋ = Ax + Bu;   y = Cx + Du,

where the matrices have the dimensions: A (n×n); B (n×r); C (p×n); D (p×r).
Because

    L{ẋ(t)} = sX(s) − x(0) = [sX_1(s) sX_2(s) ... sX_n(s)]^T − [x_1(0) x_2(0) ... x_n(0)]^T,

we have

(4.1.17)  X(s) = [(1/s)·I_n]·[L{ẋ(t)} + x(0)],

which is represented as in Fig. 4.1.12.
Sometimes, to point out that the oriented lines forward vectors, they are drawn as double lines, as in Fig. 4.1.12.
Figure no. 4.1.12: U(s) passes through B, is summed with A·X(s); the sum L{ẋ(t)} and x(0) enter the block (1/s)·I_n, giving X(s); the output is Y(s) = C·X(s) + D·U(s).
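Reducing the matrix state diagram of (4.1.16) with x(0) = 0 gives the transfer matrix Y(s) = [C(sI − A)^(−1)B + D]·U(s). A minimal sketch evaluating H(s) pointwise, with an arbitrary sample system whose poles are s = −1 and s = −2:

```python
import numpy as np

# Transfer matrix of the state-space model (4.1.16), evaluated at one s
def H(s, A, B, C, D):
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # companion form of s^2 + 3s + 2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# For this sample system H(s) = 1/((s+1)(s+2)); check at s = 1:
val = H(1.0, A, B, C, D)[0, 0]
print(abs(val - 1.0 / 6.0) < 1e-12)   # True
```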
4.2. System Reduction Using Block Diagrams.
4.2.1. System Reduction Problem.
Sometimes complex systems appear expressed as a connection of subsystems, in which connection some intermediate variables are pointed out.
The reduction of such a complex system to an equivalent structure means to determine the expression of the input-output mathematical model of that complex system as a function of the subsystems' mathematical models. This can be done by eliminating all the intermediate variables.
In the case of SISO LTI systems the equivalent transfer function has to be determined (for MIMO LTI, the equivalent transfer matrix).
In reduction processes the goal is not to solve completely the system of equations which characterises the connection (that means to determine all the unknown variables: outputs and intermediate variables), but to eliminate only the intermediate variables and to express one component (or all the components) of the output vector as a function of the input vector.
Three methods of reduction are usually utilised:
1. Analytical reduction.
2. Reduction through block diagram transformations.
3. Reduction by using the signal flow graph method.
4.2.2. Analytical Reduction.
Mainly this means to solve analytically, by using different techniques and methods, the set (system) of equations describing the dynamical system. It has the advantage of being applicable to a broader class of systems than LTI ones.
Because the coefficients of the equations are literal expressions, in particular transfer functions in the complex variable s or z, the determination of the solution becomes very cumbersome, mainly for systems with a larger number of equations. In such cases the numerical methods implemented on computers cannot be applied.
4.2.3. System Reduction Through Block Diagrams Transformations.
If a system is represented by a complex block diagram, it can be reduced to a simple equivalent form by transforming the block diagram, through manipulations, according to some rules.
If the complex system is a multi-input, multi-output one (MIMO LTI), the equivalent structure will be expressed by an equivalent transfer matrix H(s).
Through this method it is possible to determine the equivalent transfer function H_ik(s) between one input U_k(s) and one output Y_i(s), which represents a component of the transfer matrix H(s), considering the relations:
Y(s) = H(s)U(s) (4.2.1)
H_ik(s) = H_{UkYi}(s) = Y_i(s)/U_k(s) , with U_j(s) ≡ 0, ∀j ≠ k (4.2.2)
H(s) = [ H_11 ... H_1p ; ... H_ik ... ; H_r1 ... H_rp ] ; Y_i(s) = Σ_{k=1}^{p} H_ik U_k . (4.2.3)
To determine such a component H_ik(s) we have to ignore all the outputs except Y_i and to consider zero all the inputs except the input U_k.
4.2.3.1. Elementary Transformations on Block Diagrams.
For block diagram manipulation some graphical transformations can be used. They are based on the identity of the algebraic input-output relations.
We consider all the relations in the complex domain (s or z), but for the sake of convenience these variables are omitted in the following.
1. Combining blocks in cascade.
Original relation/diagram: Y = H2 ⋅ (H1 U). Equivalent relation/diagram: Y = (H2 H1) ⋅ U.
[Diagrams omitted: the chain U → H1 → H2 → Y is replaced by the single block H2H1.]
2. Combining blocks in parallel.
Original relation/diagram: Y = H1 ⋅ U ± H2 ⋅ U. Equivalent relation/diagram: Y = (H1 ± H2) ⋅ U.
[Diagrams omitted: the two parallel branches H1 and H2, summed with sign ±, are replaced by the single block H1 ± H2.]
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.2. System Reduction by Using Block Diagrams.
93
3. Eliminating a forward loop.
Original relation/diagram: Y = H1 ⋅ U ± H2 ⋅ U. Equivalent relation/diagram: Y = (H1 H2^-1 ± I) ⋅ H2 ⋅ U.
[Diagrams omitted: the parallel branches H1 and H2 are replaced by the cascade of the block H1H2^-1 ± I and the block H2.]
4. Eliminating a feedback loop.
Original relation/diagram: Y = H1 ⋅ (U ± H2 ⋅ Y). Equivalent relations/diagram:
a) Y = [(I ∓ H1 H2)^-1 H1] ⋅ U
b) Y = [H1 (I ∓ H2 H1)^-1] ⋅ U
Scalar case: Y = H1/(1 ∓ H1 H2) ⋅ U = H1/(1 ∓ H2 H1) ⋅ U.
[Diagrams omitted: the loop with forward block H1 and feedback block H2 is replaced by a single block; in the scalar case this block is H1/(1 ∓ H1H2).]
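The scalar case of the feedback-elimination rule can be checked numerically. The sketch below is not from the book: H1(s) = 1/(s + 1) and H2 = 0.5 are illustrative choices. It compares the closed-form reduction Y = H1/(1 + H1H2)·U of a negative-feedback loop with a fixed-point iteration of the loop equation Y = H1·(U − H2·Y), evaluated at one sample point s on the imaginary axis:

```python
# A sketch (illustrative transfer functions, not from the book):
# checking the scalar feedback formula Y = H1/(1 + H1*H2) * U
# for a negative-feedback loop at one evaluation point s.

def H1(s):
    return 1.0 / (s + 1.0)   # assumed forward-path transfer function

def H2(s):
    return 0.5               # assumed feedback gain

s = 0.5j                     # sample point on the imaginary axis
U = 1.0

# Closed-form reduction of the loop (Transformation 4, scalar case):
Y_formula = H1(s) * U / (1.0 + H1(s) * H2(s))

# Independent check: iterate the loop equation Y = H1*(U - H2*Y),
# which converges here because |H1(s)*H2(s)| < 1.
Y = 0.0
for _ in range(200):
    Y = H1(s) * (U - H2(s) * Y)

print(abs(Y - Y_formula) < 1e-12)   # True: iteration matches the formula
```

The iteration is only a verification device; the point of the transformation is precisely that the loop can be collapsed in one algebraic step.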
5. Removing a block from a feedback loop.
Original relation/diagram: Y = H1 ⋅ (U ± H2 ⋅ Y). Equivalent relations/diagram:
a) Y = [(I ∓ H1 H2)^-1 H1 H2] ⋅ [H2^-1 U]
b) Y = [H2^-1] ⋅ [H2 H1 (I ∓ H2 H1)^-1] ⋅ U
Scalar case: H = H1/(1 ∓ H1 H2) = [H1 H2/(1 ∓ H1 H2)] ⋅ (1/H2) = (1/H2) ⋅ [H2 H1/(1 ∓ H2 H1)].
[Diagrams omitted: the block H2 is moved out of the loop and compensated by a block H2^-1 placed before the loop (case a) or after it (case b).]
6. Moving a takeoff point ahead of a block.
Original relation/diagram: Y = H ⋅ U, with the takeoff taken after the block H. Equivalent relation/diagram: Y = H ⋅ U, with the takeoff moved ahead of the block and an additional block H inserted on the takeoff branch, so that both lines still carry Y.
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.2. System Reduction by Using Block Diagrams.
94
7. Moving a takeoff point beyond a block.
Original relation/diagram: Y = H ⋅ U, U = U, with the takeoff taken before the block H. Equivalent relations/diagram: Y = H ⋅ U, U = [H^-1 H] ⋅ U, with the takeoff moved beyond the block and an additional block H^-1 inserted on the takeoff branch.
8. Moving a summing point ahead of a block.
Original relation/diagram: Y = H ⋅ U1 ± U2. Equivalent relations/diagram: Y = H ⋅ [U1 ± H^-1 U2], with an additional block H^-1 inserted on the U2 branch.
9. Moving a summing point beyond a block.
Original relation/diagram: Y = H ⋅ [U1 ± U2]. Equivalent relations/diagram: Y = [H U1] ± [H U2], with the block H duplicated on the U2 branch.
10. Rearranging summing points.
Original relation/diagram: Y = ±U1 + [U2 ± U3]. Equivalent relations/diagram: Y = U2 + [±U1 ± U3].
Further equivalences of the same kind:
Y(s) = U1(s) + U2(s) ± U3(s) ⇔ Y(s) = (U1(s) + U2(s)) ± U3(s)
Y(s) = (U1(s) ± U2(s)) ± U3(s) ⇔ Y(s) = (U1(s) ± U3(s)) ± U2(s)
[Diagrams omitted: chains of summing points can be split, merged or reordered, as long as each input keeps its own sign.]
Example 4.2.1. Representations of a Multi-Input Summing Element.
As we mentioned, in the case of multivariable systems, to determine a component H_ik(s) of a transfer matrix H(s) we must consider zero all the components (zero inputs) of the vector U(s) except the component U_k(s), and ignore all the output vector components except the component Y_i(s).
When we apply this rule and a summing operator has several inputs among which there exist zero inputs, this summing operator will be replaced by an element (block) expressing the dependence of the summing operator output upon the inputs that are not considered to be zero.
Let us consider a two-input/one-output system as depicted in Fig. 4.2.1.a. We want to determine the two components H11, H12 of the transfer matrix H(s) = [H11(s) H12(s)] using the relation (4.2.2). Of course this is a very simple example, but the goal is only to illustrate how the summator will be transformed.
[Block diagrams omitted: a) two-input system with blocks G1, G2, input U1 entering the summing element with sign + and U2 with sign −; b) the summing element replaced by a gain-1 block when U2 = 0; c), d) the structure when U1 = 0, with the minus sign represented by a gain −1 block.]
Figure no. 4.2.1.
When we determine H11, U2 must be considered zero and the summing element will be represented, as in Fig. 4.2.1.b, by a block with gain 1.
When we determine H12, U1 must be considered zero and the summing element will be represented as in Fig. 4.2.1.c, or by a block with gain −1 as in Fig. 4.2.1.d.
4.2.3.2. Transformations of a Block Diagram Area by Analytical Equivalence.
If non-standard connections appear in some parts of a block diagram, for which we cannot directly perform a reduction, then we can mark the area containing the undesired connections by a closed contour, specifying and denoting by additional letters all the oriented lines incoming to and outgoing from this contour.
The incoming oriented lines will be considered as input variables and the outgoing oriented lines as output variables of the contour, interpreted as a separate oriented system.
On this oriented system (our contour) we can perform an analytical analysis and determine the expression of each contour output variable as a function of the contour input variables. Then these relationships are graphically represented as a block diagram which will replace the marked area in the original block diagram.
For example, suppose that a contour delimits an area with two contour input variables, denoted a1, a2, and two contour output variables, denoted b1, b2, having the Laplace transforms A1(s), A2(s), B1(s), B2(s) respectively.
This oriented contour is represented in Fig. 4.2.2.a. Suppose that, using analytical methods, we can express B1, B2 upon A1, A2 as in (4.2.4), if the dependence is a linear one.
[Block diagrams omitted: a) the marked contour with the undesired connections, inputs A1(s), A2(s) and outputs B1(s), B2(s); b) its equivalent built from the blocks G11(s), G12(s), G21(s), G22(s) and two summing points.]
Figure no. 4.2.2.
B1(s) = G11(s)A1(s) + G12(s)A2(s)
B2(s) = G21(s)A1(s) + G22(s)A2(s) (4.2.4)
These relations are now represented by a block diagram as in Fig. 4.2.2.b, which will replace the undesired connection.
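The analytical-equivalence idea can be sketched in code: probe the contour with unit inputs to identify the matrix G of a relation like (4.2.4) numerically. The internal connection below (gains h1, h2 and the equations b1 = h1·a1 + b2, b2 = h2·(a2 − b1)) is an assumed example, not taken from the text:

```python
# A sketch (assumed internal structure, not from the book) of the
# analytical-equivalence method: the marked contour is probed with unit
# inputs to obtain the matrix G of a relation of the form (4.2.4).
# Assumed non-standard internal connection:
#   b1 = h1*a1 + b2 ,   b2 = h2*(a2 - b1)

h1, h2 = 2.0, 0.5   # illustrative gains (transfer functions at a fixed s)

def contour(a1, a2):
    """Solve the internal equations for the contour outputs b1, b2."""
    # Eliminating b2:  b1 - b2 = h1*a1  and  h2*b1 + b2 = h2*a2
    b1 = (h1 * a1 + h2 * a2) / (1.0 + h2)
    b2 = b1 - h1 * a1
    return b1, b2

# Probe with A = (1, 0) and A = (0, 1): each probe yields one column of G.
g11, g21 = contour(1.0, 0.0)
g12, g22 = contour(0.0, 1.0)

# Check the superposition B = G*A for an arbitrary input pair:
a1, a2 = 3.0, -2.0
b1, b2 = contour(a1, a2)
print(abs(b1 - (g11 * a1 + g12 * a2)) < 1e-12,
      abs(b2 - (g21 * a1 + g22 * a2)) < 1e-12)   # True True
```

The same probing works for any linear contour: with p inputs, p probes recover the whole G matrix.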
4.2.3.3. Algorithm for the Reduction of Complicated Block Diagrams.
In practice block diagrams are often quite complicated, having multiple
inputs and multiple outputs, including several feedback or feedforward loops.
In the linear case the reduced system is described by a transfer matrix H(s)
whose components are separately determined.
For example, to determine the component
H_ik(s) = H_{UkYi}(s) = Y_i(s)/U_k(s) , with U_j(s) ≡ 0, ∀j ≠ k,
which relates the input U_k to the output Y_i, the following steps may be used:
1. Ignore all the outputs except Y_i.
2. Consider zero all the inputs except U_k. Sometimes we can transform the related summing points as in Fig. 4.2.1.
3. Combine and replace by its equivalent all blocks connected in cascade
(serial connections) using Transformation 1.
4. Combine and replace by its equivalent all blocks connected in parallel
using Transformation 2.
5. Eliminate all the feedback loops using Transformation 4.
6. Apply the graphical transformations 3, 5-10 to reveal the three above standard connections.
7. Solve complicated connections using analytical equivalence as in 4.2.3.2.
8. Repeat steps 3 to 7 and then select another component of the transfer
matrix.
Example 4.2.2. Reduction of a Multivariable System.
Let us consider a multivariable system described by a block diagram (BD) as in Fig. 4.2.3. In this example we shall follow in detail, for training reasons, all the steps involved in system reduction based on block diagram transformations. However, in practice and with some experience, many of the steps and drawings below could be avoided.
For easier manipulation it is recommended to attach additional variables to each takeoff point and to the output of each summing operator. In our case they are a0, ..., a8. The summing operators are denoted S1, ..., S4.
[Block diagram omitted: inputs U1 (into S1, with negative feedback from a7) and U2 (into S3), blocks H1, ..., H7, summing operators S1, ..., S4, intermediate variables a0, ..., a8, outputs Y1 and Y2.]
Figure no. 4.2.3.
It can be observed that the system has two inputs and two outputs:
U(s) = [U1(s) U2(s)]^T ; Y(s) = [Y1(s) Y2(s)]^T , (4.2.5)
with global block diagram (BD) as in Fig. 4.2.4, described by the transfer matrix H(s) containing 4 components:
H(s) = [ H11(s) H12(s) ; H21(s) H22(s) ] ; (4.2.6)
H11(s) = Y1(s)/U1(s) for U2 = 0 ; H12(s) = Y1(s)/U2(s) for U1 = 0 ;
H21(s) = Y2(s)/U1(s) for U2 = 0 ; H22(s) = Y2(s)/U2(s) for U1 = 0 .
[Block diagram omitted: the single block H(s) with inputs U1(s), U2(s) and outputs Y1(s), Y2(s).]
Figure no. 4.2.4.
This LTI system is in fact represented by a set of 11 simultaneous algebraic equations as in (4.2.7):
a0 = H1 a1 ; a1 = U1 − a7 ; a2 = a0 − a5 ; a3 = U2 + a6 ;
a4 = H3 a3 ; a5 = H5 a4 ; a6 = H2 a2 ; a7 = H7 a8 ;
a8 = H4 a3 + H6 a4 ; Y1 = a8 ; Y2 = a2 , (4.2.7)
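Eliminating the intermediate variables is exactly what a linear solver does. The sketch below uses arbitrary sample values standing in for the transfer functions H1 ... H7 at a fixed s (they are assumptions, not from the text), builds the 11 equations (4.2.7), and solves them by Gaussian elimination; superposition of the responses to (U1, U2) = (1, 0) and (0, 1) confirms the linearity exploited throughout the reduction:

```python
# A sketch (illustrative gains, not from the book): the 11 simultaneous
# equations (4.2.7) solved numerically, i.e. the intermediate variables
# a0..a8 eliminated by Gaussian elimination.

H = {k: 0.1 * k + 0.5 for k in range(1, 8)}   # sample values for H1..H7

def solve(U1, U2):
    """Return (Y1, Y2) from the linear system (4.2.7)."""
    # Unknowns x = [a0..a8, Y1, Y2]; rows follow the order of (4.2.7).
    n = 11
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    rows = [
        ({0: 1, 1: -H[1]}, 0.0),            # a0 = H1*a1
        ({1: 1, 7: 1}, U1),                 # a1 = U1 - a7
        ({2: 1, 0: -1, 5: 1}, 0.0),         # a2 = a0 - a5
        ({3: 1, 6: -1}, U2),                # a3 = U2 + a6
        ({4: 1, 3: -H[3]}, 0.0),            # a4 = H3*a3
        ({5: 1, 4: -H[5]}, 0.0),            # a5 = H5*a4
        ({6: 1, 2: -H[2]}, 0.0),            # a6 = H2*a2
        ({7: 1, 8: -H[7]}, 0.0),            # a7 = H7*a8
        ({8: 1, 3: -H[4], 4: -H[6]}, 0.0),  # a8 = H4*a3 + H6*a4
        ({9: 1, 8: -1}, 0.0),               # Y1 = a8
        ({10: 1, 2: -1}, 0.0),              # Y2 = a2
    ]
    for i, (coefs, rhs) in enumerate(rows):
        for j, c in coefs.items():
            A[i][j] = float(c)
        b[i] = rhs
    # Gaussian elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x[9], x[10]

# The system is linear, so the response to (2, 3) must be the
# superposition of the responses to (1, 0) and (0, 1):
y10 = solve(1.0, 0.0)
y01 = solve(0.0, 1.0)
y23 = solve(2.0, 3.0)
print(abs(y23[0] - (2 * y10[0] + 3 * y01[0])) < 1e-9,
      abs(y23[1] - (2 * y10[1] + 3 * y01[1])) < 1e-9)   # True True
```

With symbolic coefficients such a numeric solve is not available, which is exactly the situation the graphical reduction methods address.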
with 13 variables a0, ..., a8, Y1, Y2, U1, U2, where the two input variables U1, U2 are independent (free variables in the algebraic system). The coefficients H1, ..., H7 represent expressions of some complex functions; let us denote them as transfer functions too. To reduce the system means to eliminate all the intermediate variables a0, ..., a8, expressing Y1, Y2 upon U1 and U2. This is a rather difficult task.
a) Determination of the component H11(s) = Y1(s)/U1(s) for U2 = 0.
To do this we shall ignore Y2 and consider U2 = 0 as in Fig. 4.2.5, where Y2 is no longer drawn and, because U2 = 0, now a3 = a6 and a block with gain 1 will appear instead of the summing point S3. To put into evidence the three standard connections we intend to move, in Fig. 4.2.5, the takeoff point a4 ahead of the block H3 (as pointed out by the dashed arrow) and to arrange this new takeoff point using the equivalences from Fig. 4.1.2.
[Block diagram omitted: the diagram of Fig. 4.2.3 with U2 = 0 and Y2 removed; S3 is replaced by a gain-1 block, and a dashed arrow marks the takeoff point a4 to be moved ahead of the block H3.]
Figure no. 4.2.5.
After these operations, two structures appear clearly, marked by dashed rectangles: one standard feedback connection (with the equivalent transfer function Ha) and one parallel connection (with the equivalent transfer function Hb), as depicted in Fig. 4.2.6.
[Block diagram omitted: the feedback connection Ha built around H2 with feedback H3H5, and the parallel connection Hb built from H4 and H3H6, both inside dashed rectangles.]
Figure no. 4.2.6.
This takeoff point movement reflects the following equivalence relationships:
a5 = H5 a4 ; a4 = H3 a3 ⇒ a5 = H5 H3 a3 . (4.2.8)
The blocks marked Ha, Hb are reduced to the equivalent transfer functions
Ha = H2/(1 + H2 H3 H5) ; Hb = H4 + H3 H6 , (4.2.9)
so the block diagram from Fig. 4.2.6 becomes, as in Fig. 4.2.7, a simple feedback loop described by the transfer function
H11(s) = H1 Ha Hb / (1 + H1 Ha Hb H7) . (4.2.10)
[Block diagrams omitted: the cascade H1, Ha, Hb with feedback H7 (for U2 = 0), and its reduction to the single block H11 from U1 to Y1.]
Figure no. 4.2.7.
Substituting (4.2.9) into (4.2.10), we finally obtain the expression of H11(s):
H11(s) = (H1 H2 H3 H6 + H1 H2 H4) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) . (4.2.11)
b) Determination of the component H12(s) = Y1(s)/U2(s) for U1 = 0.
To evaluate H12(s), in the initial BD from Fig. 4.2.3 we shall consider U1 = 0 and ignore Y2, resulting in a BD as in Fig. 4.2.8.
[Block diagram omitted: the diagram of Fig. 4.2.3 with U1 = 0 and Y2 removed; the dashed arrow again marks the takeoff point a4 to be moved ahead of the block H3.]
Figure no. 4.2.8.
As discussed before we move, in Fig. 4.2.8, the takeoff point a4 ahead of the block H3. Because now U1 = 0, a1 = −a7 and a2 = a0 − a5 = H1 a1 − a5 = −H1 a7 − a5, we transfer the sign "−" of S1 to the input of S2 as in Fig. 4.2.9.
[Block diagram omitted: the diagram of Fig. 4.2.8 after moving the takeoff point a4 ahead of H3; the minus sign of S1 has been transferred to the input of S2, and the parallel connection Ha (of H4 and H3H6) is marked.]
Figure no. 4.2.9.
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. 4.2. System Reduction by Using Block Diagrams.
100
It can be observed that two simple cascade connections now appeared, together with the parallel connection denoted Ha, which is Ha(s) = H4 + H3 H6.
[Block diagram omitted: the reduced diagram with blocks H1, H2, H3H5, H7 and Ha; the annotation marks the takeoff point a3 to be moved beyond the block Ha.]
Figure no. 4.2.10.
After the takeoff point a3 has been moved beyond the block Ha, the BD looks as in Fig. 4.2.11.
[Block diagram omitted: the diagram after the takeoff move, with a compensating block 1/Ha on the takeoff branch; the parallel connection Hc (of H1H7 and H3H5/Ha) is marked.]
Figure no. 4.2.11.
A new parallel connection appeared, denoted Hc, equal to
Hc(s) = H1 H7 + H5 H3 / Ha , (4.2.12)
which determines the BD from Fig. 4.2.12.
[Block diagrams omitted: the feedback loop with forward block Ha and feedback H2Hc (for U1 = 0), and its reduction to the single block H12 from U2 to Y1.]
Figure no. 4.2.12.
The equivalent relationship from the above BD is the component H12(s):
H12(s) = Ha / (1 + Ha H2 Hc) = (H4 + H3 H6) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) . (4.2.13)
c) Determination of the component H21(s) = Y2(s)/U1(s) for U2 = 0.
To determine the transfer function H21(s) we have to consider U2 = 0 and to ignore the output Y1.
Under such conditions, the initial block diagram from Fig. 4.2.3 looks as in Fig. 4.2.13.
[Block diagram omitted: the diagram of Fig. 4.2.3 with U2 = 0 and Y1 removed; S3 is replaced by a gain-1 block, and the dashed arrow marks the takeoff point a4 to be moved ahead of H3.]
Figure no. 4.2.13.
Because U2 = 0, the relation a3 = U2 + a6 from Fig. 4.2.3 becomes a3 = a6, so the summing operator S3 is now represented by a block with gain 1, which later on will be ignored. Moving the takeoff point a4 ahead of the block H3 we get the BD from Fig. 4.2.14.
[Block diagram omitted: the diagram after moving the takeoff point a4 ahead of H3; the parallel connection Ha (of H4 and H3H6) is marked.]
Figure no. 4.2.14.
The new parallel connection Ha is replaced by Ha(s) = H4 + H3 H6, so the BD from Fig. 4.2.15 is obtained.
[Block diagram omitted: blocks H1, H2, H3H5, H7 and Ha with the two feedback loops, from U1 to Y2.]
Figure no. 4.2.15.
Redrawing the above BD we get the shape from Figure no. 4.2.15.
[Block diagram omitted: the same structure redrawn with U1 at the left and Y2 at the right; the cascade H2, H7, Ha and the inner loop around H3H5 become visible.]
Figure no. 4.2.15.
Moving the takeoff point a6 ahead of the block H2 we get the BD from Fig. 4.2.16, where a new feedback connection, denoted Hd, and a cascade connection, denoted He, appeared.
[Block diagram omitted: the feedback connection Hd built around the loop H2H3H5 and the cascade connection He formed by H2, H7 and Ha, both marked by dashed rectangles.]
Figure no. 4.2.16.
The two new connections are expressed by
He(s) = H2 H7 Ha ; Hd(s) = 1/(1 + H2 H3 H5) , (4.2.14)
so the final structure is a simple feedback connection,
[Block diagrams omitted: the loop with forward path H1, Hd and feedback He (for U2 = 0), and its reduction to the single block H21 from U1 to Y2.]
Figure no. 4.2.17.
from where we obtain H21(s) as
H21(s) = H1 Hd / (1 + H1 Hd He) = H1 / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) . (4.2.15)
d) Determination of the component H22(s) = Y2(s)/U2(s) for U1 = 0.
The component H22(s) is evaluated considering U1 = 0 and ignoring the output Y1, so the initial block diagram from Fig. 4.2.3 looks as in Fig. 4.2.18.
[Block diagram omitted: the diagram of Fig. 4.2.3 with U1 = 0 and Y1 removed.]
Figure no. 4.2.18.
To reduce this block diagram, the takeoff point a4 is moved ahead of the block H3, and the structure from Fig. 4.2.19 will result.
[Block diagram omitted: the diagram after moving the takeoff point a4 ahead of H3; the parallel connection Ha (of H4 and H3H6) is marked.]
Figure no. 4.2.19.
Now we can take the equivalence of the cascade connection between H3 and H5 and of the parallel connection Ha(s) = H4 + H3H6. After these reductions, a new cascade connection between Ha and H7 can be observed, as in Fig. 4.2.20.
[Block diagram omitted: the reduced diagram with blocks H1, H2, H3H5 and the cascade of Ha and H7, from U2 to Y2.]
Figure no. 4.2.20.
Redrawing the above BD to have the input on the left side and the output on the right side, we obtain Fig. 4.2.21.
[Block diagram omitted: from U2 to Y2, with the parallel connection Hf of the branches −H3H5 and −H1H7Ha, closed through H2.]
Figure no. 4.2.21.
Inside this BD a parallel connection is observed, which we shall denote Hf, having the expression
Hf = −(H3 H5 + H1 H7 Ha) . (4.2.16)
The last component of the transfer matrix can now be obtained immediately, solving a standard feedback loop:
H22(s) = Hf / (1 − H2 Hf) = (−H3 H5 − H1 H4 H7 − H1 H3 H6 H7) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) . (4.2.17)
Concluding, the four components of the transfer matrix are:
H11(s) = (H1 H2 H3 H6 + H1 H2 H4) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) (4.2.18)
H12(s) = (H4 + H3 H6) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) (4.2.19)
H21(s) = H1 / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) (4.2.20)
H22(s) = (−H3 H5 − H1 H4 H7 − H1 H3 H6 H7) / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7) . (4.2.21)
It can be observed that all of them have the same polynomial as denominator. This polynomial, denoted by L(s),
L(s) = 1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7 , (4.2.22)
is the determinant of the system (4.2.7).
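With arbitrary sample gains (assumptions, standing in for the transfer functions at a fixed s), the staged reductions can be checked against the final closed forms. The sketch below recomputes H11 via (4.2.9)-(4.2.10) and H22 via (4.2.16)-(4.2.17) and compares them with (4.2.18) and (4.2.21) over the common denominator L(s):

```python
# A sketch with arbitrary sample gains: checking that the staged
# reductions (Ha, Hb, Hf) agree with the final closed forms
# (4.2.18) and (4.2.21), which share the denominator L(s) of (4.2.22).

H1, H2, H3, H4, H5, H6, H7 = 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2

L = 1 + H2*H3*H5 + H1*H2*H3*H6*H7 + H1*H2*H4*H7   # (4.2.22)

# Staged reduction for H11: inner loop Ha, parallel Hb, outer loop.
Ha = H2 / (1 + H2*H3*H5)                    # (4.2.9)
Hb = H4 + H3*H6                             # (4.2.9)
H11_staged = H1*Ha*Hb / (1 + H1*Ha*Hb*H7)   # (4.2.10)

# Staged reduction for H22: parallel Hf, then a feedback loop.
Hf = -(H3*H5 + H1*H7*(H4 + H3*H6))          # (4.2.16)
H22_staged = Hf / (1 - H2*Hf)               # (4.2.17)

# Final closed forms over the common denominator L:
H11 = (H1*H2*H3*H6 + H1*H2*H4) / L          # (4.2.18)
H22 = (-H3*H5 - H1*H4*H7 - H1*H3*H6*H7) / L # (4.2.21)

print(abs(H11 - H11_staged) < 1e-12,
      abs(H22 - H22_staged) < 1e-12)        # True True
```

Such numeric spot checks are a cheap safeguard against sign or factor errors when carrying out long symbolic reductions by hand.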
4.3 Signal Flow Graphs Method (SFG).
A signal flow graph (SFG) is a graphical representation of a set of simultaneous linear equations representing a system. It graphically displays the transmission of signals through systems. Like block diagrams, SFGs illustrate cause-and-effect relationships, but with more mathematical rules.
Unfortunately, they are applied only to linear systems modelled by algebraic equations in the time domain, s-domain or z-domain. In such situations they appear as a simplified version of block diagrams. On the other hand, SFGs are easier to draw and easier to manipulate than block diagrams.
4.3.1. Signal Flow Graphs Fundamentals.
Let
x_j = t_ij x_i , or x_j = t_ij{x_i} , or x_j = t_{xixj}{x_i} (4.3.1)
be a linear mathematical relationship, where:
x_i, x_j are time functions, complex functions or numerical variables;
t_ij (or t_{xixj}) is an operator called elementary transmission function (ETF), which makes the correspondence from x_i to x_j. It is a time-domain or complex-domain operator, also called elementary transmittance or gain. We shall see how this mathematical relationship is represented in an SFG.
The fundamental elements of a signal flow graph are the node and the branch.
Node.
A node, represented in an SFG by a small dot, expresses a variable denoted by a letter, for example x_i, which will be a variable of the graph. This variable can be a numerical variable, a time function or a complex function (like Laplace transforms or z-transforms).
Usually we refer to a node by the variable associated to it. For example we can say "the node x_i", meaning that the variable x_i is associated to that node.
Elementary branch.
An elementary branch, for short a branch, between two nodes, from the node x_i to the node x_j for example, is a curved line oriented by an arrow, directly connecting the nodes x_i and x_j without passing through other nodes. It contains the information of how the node x_i directly affects the node x_j.
Branches are always unidirectional, reflecting cause-effect relationships. Any branch has two associated elements: the branch operator (gain or transmittance) and the direction. A branch is marked by a letter or a symbol, placed near the arrow, which defines the coefficient or gain or operator t_ij (t_{xixj} for example), through which the variable x_i contributes with a term to the determination of the variable x_j. This gain (coefficient, operator) between the nodes x_i and x_j, denoted by t_ij (t_{xixj}), defines the so-called elementary transmittance or elementary transmission function. It is considered to be zero in the direction opposite to that indicated by the arrow.
The signal flow graph representation of the equation (4.3.1) is shown in Fig. 4.3.1.
[SFG omitted: a branch from the node x_i to the node x_j with gain t_ij = t_{xixj}, representing x_j = t_ij x_i.]
Figure no. 4.3.1.
Notice that the branch directed from the node x_i (input node of this branch) to the node x_j (output node of this branch) expresses the dependence of x_j upon x_i but not the reverse; that means the signals travel along branches only in the direction described by the arrows of the branches.
It is very important to note that the transmittance between the two nodes x_i and x_j should be interpreted as a unidirectional (unilateral) operator (amplifier) with the gain t_ij (t_{xixj}), so that a contribution of t_ij x_i (t_{xixj}{x_i}) is delivered at the node x_j.
or x
j
·
1
t
ij
x
i
x
j
· [tx
i
x
j
]
−1
{x
i
¦
but the SFG of Fig. 4.3.1. does not imply this relationship.
If at the same time the variable x_j directly affects the variable x_i, that means there exists (is given) an independent relationship x_i = t_ji x_j (x_i = t_{xjxi}{x_j}), then the set of two simultaneous equations
x_j = t_ij x_i ; x_i = t_ji x_j or x_j = t_{xixj}{x_i} ; x_i = t_{xjxi}{x_j} (4.3.2)
is represented by an SFG as in Fig. 4.3.2. This is a so-called elementary loop between two nodes.
[SFG omitted: two branches between the nodes x_i and x_j, one with gain t_ij from x_i to x_j and one with gain t_ji from x_j to x_i, forming an elementary loop.]
Figure no. 4.3.2.
4.3.2. Signal Flow Graphs Algebra.
In SFGs the following fundamental operations, rules and notions are defined:
4.3.2.1. Addition rule. The value of a variable denoted by a node in an SFG is equal to the sum of the signals entering that node.
Example. The variable x1 depends on three variables x2, x3, x4.
This dependence is expressed as in (4.3.3) and drawn as in Fig. 4.3.3:
x1 = t21 x2 + t31 x3 + t41 x4 . (4.3.3)
If confusion may appear, we shall denote the transmission functions with indexes formed from the node names (from x2 to x1 as t_{x2x1}).
[SFG omitted: branches with gains t21, t31, t41 from the nodes x2, x3, x4 into the node x1.]
Figure no. 4.3.3.
4.3.2.2. Transmission rule. The value of a variable denoted by a node is transmitted on every branch leaving that node.
Example. Let us consider three variables x2, x3, x4 which directly depend upon the variable x1 through the relationships
x_k = t_{1k} x1 , k = 2, 3, 4. (4.3.4)
These relations are represented by an SFG as in Fig. 4.3.4.
[SFG omitted: branches with gains t12, t13, t14 from the node x1 to the nodes x2, x3, x4.]
Figure no. 4.3.4.
4.3.2.3. Multiplication rule. A cascade (series) of elementary branches can be replaced by (is equivalent to) a single branch whose transmission function is the product of the transmission functions of the components of the cascade.
This rule is valid if and only if no other branches are connected to the intermediate nodes.
Example. A cascade of three elementary branches as in the SFG from Fig. 4.3.5 is replaced by a unique equivalent branch T14, where
T14 = t12 t23 t34 = t12 (t23 (t34)) . (4.3.5)
[SFG omitted: the chain x1 → x2 → x3 → x4 with gains t12, t23, t34, equivalent to a single branch T14 from x1 to x4.]
Figure no. 4.3.5.
4.3.2.4. Parallel rule. A set of elementary branches leaving the same node and entering the same node (a parallel connection of elementary branches) can be replaced by (is equivalent to) a single branch whose transmission function is the sum of the transmission functions of the components of the parallel connection.
Example. Two parallel branches t'12, t''12 as in the SFG from Fig. 4.3.6 are equivalent to a single branch with the equivalent transmission function T12, where
T12 = t'12 + t''12 . (4.3.6)
[SFG omitted: two parallel branches t'12 and t''12 from x1 to x2, equivalent to the single branch T12.]
Figure no. 4.3.6.
4.3.2.5. Input node. An input node (source node) is a node that has only outgoing branches.
Example. The node x1 in Fig. 4.3.4 is an input node. All its branches are outgoing branches with respect to x1 and incoming branches with respect to x_k.
4.3.2.5. Output node. An output node (sink node) is a node that has only incoming branches.
Example. The node x1 in Fig. 4.3.3 is an output node. All its branches are incoming branches with respect to x1 and outgoing branches with respect to x_k.
Any node other than an input node can be related to a new node, as an output node, through a unity-gain branch. This new node is a "dummy" node.
4.3.2.6. Path. A path from one node to another node is a continuous unidirectional succession of branches (traversed in the same direction) along which no node is passed more than once.
4.3.2.7. Loop. A loop is a path that originates and terminates at the same node, in such a way that no node is passed more than once.
4.3.2.8. Self-loop. A self-loop is a loop consisting of a single branch.
4.3.2.9. Path gain. The gain of a path is the product of the gains of the branches encountered in traversing the path. Sometimes we say "path" meaning the gain of that path. For the path "k" its gain is denoted C_k.
4.3.2.10. Loop gain. The loop gain is the product of the transmission functions encountered along the loop. Sometimes we say "loop" meaning the gain of that loop. For the loop "k" its gain is denoted B_k.
4.3.2.11. Disjunctive Paths/Loops. Two paths or two loops are said to be nontouching if they have no node in common.
4.3.2.12. Equivalent transmission function. The equivalent transmission function (equivalent transmittance) between one input node u_k and one node x_i of the graph is the operator that expresses the node x_i with respect to the input node u_k, taking into consideration all the graph connections. It is denoted by a capital letter (symbol) indexed by the two node symbols, for example T_{ukxi}.
One graph node x_i, other than an input node, is expressed as a function of all the p input nodes u_j, j = 1:p, of that graph by a relation of the form
x_i = Σ_{j=1}^{p} T_{ujxi} u_j . (4.3.7)
There is a difference between the elementary transmission function (elementary transmittance) t_{ukxi} and the equivalent transmission function (equivalent transmittance) T_{ukxi}.
The determination of the equivalent gains means to solve the set of equations in which the undetermined variables are the nodes of the graph and the determined variables are the inputs: (x1, x2) unknowns and (u, x3) free variables in our previous example.
Example 4.3.1. SFGs of one Algebraic Equation.
Let us consider one algebraic equation having 4 variables x1, x2, x3, x4:
a1 x1 + a2 x2 + a3 x3 + a4 x4 = 0 . (4.3.8)
Suppose that we are interested in x1. To draw the SFG we have to express this variable as a function of the others:
x1 = −(a1^-1 a2) x2 − (a1^-1 a3) x3 − (a1^-1 a4) x4 ⇔ x1 = t'21 x2 + t'31 x3 + t'41 x4 . (4.3.9)
To draw the graph, first 4 dots are marked by the names of all 4 variables, and then they are connected by branches according to the relation (4.3.9).
The resulting SFG is illustrated in Fig. 4.3.7.
[SFG omitted: branches into the node x1 from x2, x3, x4 with gains t'21 = −a1^-1 a2, t'31 = −a1^-1 a3, t'41 = −a1^-1 a4.]
Figure no. 4.3.7.
As we can observe, the nodes x2, x3, x4 are input nodes and x1 is an output node.
To transform (4.3.8) into (4.3.9) we had to divide the equation by a1, which means multiplying by the inverse a1^-1 of the element a1. If the transmittances are operators, such an operation could be undesirable or even impossible.
To avoid this we consider a1 = 1 + (a1 − 1), obtaining
x1 + (a1 − 1) x1 + a2 x2 + a3 x3 + a4 x4 = 0 ⇔
x1 = (1 − a1) x1 − a2 x2 − a3 x3 − a4 x4 ⇔
x1 = t11 x1 + t21 x2 + t31 x3 + t41 x4 .
Now the SFG looks as in Fig. 4.3.8.
[SFG omitted: a self-loop t11 = 1 − a1 at the node x1 and branches t21 = −a2, t31 = −a3, t41 = −a4 from x2, x3, x4 into x1.]
Figure no. 4.3.8.
We can observe that in the last SFG a self-loop (t11) appeared. However, the two graphs are equivalent (they express the same algebraic equation (4.3.8)), and this illustrates how to escape an undesired self-loop.
To eliminate a self-loop t_ii, replace all the elementary transmittances t_ki by t'_ki, where
t'_ki = t_ki / (1 − t_ii) , if t_ii ≠ 1 . (4.3.10)
In this example i = 1, k = 2:4, and
t'_k1 = t_k1 / (1 − t11) = −a_k / (1 − (1 − a1)) = −a_k / a1 = −a1^-1 a_k , if t11 = 1 − a1 ≠ 1 ⇔ a1 ≠ 0.
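The self-loop elimination rule (4.3.10) can be checked numerically. In the sketch below the coefficients a1 ... a4 and the input-node values are arbitrary samples (assumptions, not from the text); the x1 obtained from the self-loop-free gains −a_k/a1 must equal the x1 solved directly from (4.3.8):

```python
# A sketch (illustrative coefficients): checking the self-loop
# elimination rule (4.3.10).  The SFG with self-loop t11 = 1 - a1 and
# the self-loop-free SFG with gains -ak/a1 must give the same x1.

a = {1: 2.0, 2: 3.0, 3: -1.0, 4: 0.5}     # sample a1..a4, with a1 != 0
x2, x3, x4 = 1.0, -2.0, 4.0               # sample input-node values

# SFG with self-loop: x1 = t11*x1 + t21*x2 + t31*x3 + t41*x4.
t11 = 1 - a[1]
tk = {k: -a[k] for k in (2, 3, 4)}
# Removing the self-loop divides every incoming gain by (1 - t11):
x1_graph = sum(tk[k] / (1 - t11) * v for k, v in ((2, x2), (3, x3), (4, x4)))

# Direct solution of a1*x1 + a2*x2 + a3*x3 + a4*x4 = 0:
x1_direct = -(a[2] * x2 + a[3] * x3 + a[4] * x4) / a[1]

print(abs(x1_graph - x1_direct) < 1e-12)   # True
```

Note that 1 − t11 = a1, so the self-loop-free gains are exactly the t'_k1 = −a_k/a1 of the graph in Fig. 4.3.7.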
Example 4.3.2. SFG of two Algebraic Equations.
Let us consider a system of two algebraic equations with numerical coefficients:
3 x1 + 4 x2 − x3 + x4 = 0
2 x1 − 5 x2 − x3 + 5 x4 = 0 . (4.3.11)
Suppose we are interested in x1 and x2, so they will be the unknown (dependent) variables, and x3, x4 will be the free (independent) variables. We shall denote them, for convenience, x3 = u1 and x4 = u2.
With these notations, (4.3.11) becomes
3 x1 + 4 x2 = u1 − u2
2 x1 − 5 x2 = u1 − 5 u2 . (4.3.12)
We can solve this system by using Cramer's rule, computing the determinant of the system D,
D = | 3 4 ; 2 −5 | = −23 , (4.3.13)
and obtaining,
4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.3. Signal Flow Graphs Method (SFG).
111
x1 = (D11 u1 + D12 u2)/D = ((−9) u1 + (25) u2)/(−23) = (9/23) u1 + (−25/23) u2 (4.3.14)
x2 = (D21 u1 + D22 u2)/D = ((1) u1 + (−13) u2)/(−23) = (−1/23) u1 + (13/23) u2 . (4.3.15)
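Relations (4.3.13)-(4.3.15) can be reproduced with exact rational arithmetic, keeping u1 and u2 symbolic by tracking their coefficients separately. A sketch using the standard-library fractions module:

```python
# A sketch re-doing (4.3.13)-(4.3.15) exactly: Cramer's rule for
# 3*x1 + 4*x2 = u1 - u2 ,  2*x1 - 5*x2 = u1 - 5*u2, with the
# right-hand sides kept as (coefficient of u1, coefficient of u2).
from fractions import Fraction as F

D = 3 * (-5) - 4 * 2                      # determinant of the system: -23
b1 = (F(1), F(-1))                        # u1 - u2
b2 = (F(1), F(-5))                        # u1 - 5*u2

# Cramer: replace the corresponding column, expand per input u_j.
x1 = tuple((b1[j] * (-5) - 4 * b2[j]) / D for j in range(2))
x2 = tuple((3 * b2[j] - 2 * b1[j]) / D for j in range(2))

# Coefficients 9/23, -25/23 for x1 and -1/23, 13/23 for x2,
# exactly as in (4.3.14)-(4.3.15).
print(x1, x2)
```

Exact rationals avoid any doubt about rounding when checking hand computations like these.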
The same results can be obtained if we express the system (4.3.12) in a matrix form,
A x = B u , x = [x1 x2]^T , u = [u1 u2]^T , (4.3.16)
where
A = [ 3 4 ; 2 −5 ] , B = [ 1 −1 ; 1 −5 ] .
If det(A) ≠ 0, then
x = A^-1 B u = H u = [ 9/23 −25/23 ; −1/23 13/23 ] u ⇒ (4.3.17)
x1 = (9/23) u1 + (−25/23) u2 = T_{u1x1} u1 + T_{u2x1} u2
x2 = (−1/23) u1 + (13/23) u2 = T_{u1x2} u1 + T_{u2x2} u2 . (4.3.18)
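The matrix route (4.3.17) can be verified the same way; the sketch below inverts the 2x2 matrix A exactly and multiplies by B:

```python
# A sketch verifying (4.3.17): H = A^-1 * B computed with exact
# rationals for A = [[3, 4], [2, -5]] and B = [[1, -1], [1, -5]].
from fractions import Fraction as F

A = [[F(3), F(4)], [F(2), F(-5)]]
B = [[F(1), F(-1)], [F(1), F(-5)]]

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # -23
# Adjugate formula for the inverse of a 2x2 matrix:
Ainv = [[A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA, A[0][0] / detA]]

H = [[sum(Ainv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

print(H == [[F(9, 23), F(-25, 23)], [F(-1, 23), F(13, 23)]])   # True
```

The rows of H are precisely the equivalent transmittances T_{ujxi} of (4.3.18).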
The same system (4.3.11), or its equivalent (4.3.12), can be represented as an SFG and solved by using SFG techniques. If the coefficients of the equations contain letters (symbolic expressions) and the system order is higher than 3, the above methods become very cumbersome and solving such a system is a very difficult task. Using the SFG methods, all these difficulties disappear.
To construct the graph, first of all the two dependent variables (x1, x2) are extracted from the system equations, irrespective of which variable from which equation.
Let this be
x1 = (−2) x1 + (−4) x2 + (1) u1 + (−1) u2
x2 = (−2) x1 + (6) x2 + (1) u1 + (−5) u2 . (4.3.19)
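The node equations of the graph are just a rearrangement of (4.3.12), so solving them must reproduce the closed-form solution (4.3.18). A sketch with arbitrary rational inputs:

```python
# A sketch: solving the dependent-node equations (equivalent to the
# system (4.3.12)) and comparing with the closed-form solution (4.3.18).
from fractions import Fraction as F

def nodes(u1, u2):
    """Solve 3*x1 + 4*x2 = u1 - u2 and 2*x1 - 5*x2 = u1 - 5*u2."""
    det = F(3 * (-5) - 4 * 2)             # -23
    r1, r2 = u1 - u2, u1 - 5 * u2
    x1 = (r1 * (-5) - 4 * r2) / det
    x2 = (3 * r2 - 2 * r1) / det
    return x1, x2

u1, u2 = F(7), F(-3)                      # arbitrary sample inputs
x1, x2 = nodes(u1, u2)
# Closed form (4.3.18): x1 = (9/23)u1 - (25/23)u2,
#                       x2 = -(1/23)u1 + (13/23)u2.
print(x1 == F(9, 23) * u1 - F(25, 23) * u2,
      x2 == -F(1, 23) * u1 + F(13, 23) * u2)   # True True
```

Any pair (u1, u2) would do; linearity guarantees the match for all inputs once the four equivalent transmittances agree.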
After marking the four dots, the future nodes, by the variables x1, x2, u1, u2, they are connected, using the SFG rules, according to the equations (4.3.19), and the SFG is obtained as in Fig. 4.3.9.
Figure no. 4.3.9. (SFG of (4.3.19): input nodes u1, u2; self-loops of gains −2 at x1 and 6 at x2; branches x2 → x1 of gain −4 and x1 → x2 of gain −2; input branches u1 → x1: 1, u2 → x1: −1, u1 → x2: 1, u2 → x2: −5.)
In the next paragraph we shall see how we can reduce this graph to the simplest form expressing the system solution (4.3.18).
4.3.3. Construction of Signal Flow Graphs.
A signal flow graph can be built starting from two sources of information:
1. Construction of SFG starting from a system of linear algebraic equations.
2. Construction of SFG starting from a block diagram.
4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic
Equations.
Suppose we have a system of n equations with m variables x_k, k = 1:m, where m > n,
(4.3.20)  Σ_{j=1..m} a_ij·x_j = 0,  i = 1:n.
Based on some reasons (physical description, causality principles, etc.), a number of n variables are selected to be the unknown (dependent) variables, for example x_k, k = 1:n. The other p = m − n variables are considered to be free (independent) variables. For the sake of convenience let us denote them as
(4.3.21)  u_j = x_{n+j},  1 ≤ j ≤ p = m − n,
and the corresponding coefficients from the i-th equation as,
(4.3.22)  b_ij = −a_{i,n+j},  1 ≤ j ≤ p = m − n.
In the case of dynamical systems they will represent the input variables.
With these notations, (4.3.20) is written as,
(4.3.23)  Σ_{j=1..n} a_ij·x_j = Σ_{j=1..p} b_ij·u_j,  i = 1:n,
or in matrix form,
(4.3.24)  Ax = Bu,  x = [x1 ... x_i ... x_n]^T,  u = [u1 ... u_j ... u_p]^T.
If the n equations are linearly independent, the determinant D,
  D = det(A) ≠ 0,
and a unique solution exists (considering that the vector u is given),
  x = A^(−1)·B·u = H·u,  where H = {H_ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ p,
whose i-th component is written as,
(4.3.25)  x_i = Σ_{j=1..p} H_ij·u_j = Σ_{j=1..p} Tu_jx_i·u_j = Σ_{j=1..p} T_ji·u_j.
We mentioned all these forms of the solution to point out the difference of notations. In the matrix algebra approach, H_ij is the component from the row i and the column j of the matrix H, while in the SFG approach Tu_jx_i (or T_ji) is the equivalent transmittance between the node u_j (or only j) and the node x_i (or only i).
The next step in the SFG construction is to express each dependent variable x_i, i = 1:n, as a function of all the dependent and independent variables from (4.3.23).
It is not important from which equation the variable x_i is withdrawn.
Denoting the elementary transmittances,
(4.3.26)  t_ji = −a_ij  if i ≠ j,   t_ii = 1 − a_ii  if i = j,   i, j ∈ [1, n];   g_ji = b_ij,
the dependent variables are expressed as,
(4.3.27)  x_i = Σ_{j=1..n} t_ji·x_j + Σ_{j=1..p} g_ji·u_j,  i = 1:n.
Now we have to draw n + p = m dots and to mark them by the names of the variables involved in (4.3.23). Then the SFG is obtained by connecting the nodes according to each equation from (4.3.27). The zero-gain branches are not drawn.
The image of one equation (i) from (4.3.27) is illustrated in Fig. 4.3.10.
Figure no. 4.3.10. (The node x_i receives branches of transmittances t_1i, t_2i, ..., t_ji, ..., t_ni from the nodes x_1, x_2, ..., x_j, ..., x_n, including the self-loop t_ii, and branches g_1i, g_2i, ..., g_ji, ..., g_pi from the input nodes u_1, u_2, ..., u_j, ..., u_p.)
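As a sketch of relations (4.3.26) and (4.3.27), the helper below (our code, not the author's) builds the elementary transmittances from A and B, assuming each dependent variable x_i is withdrawn from equation i, and checks (4.3.27) on the 2x2 example (4.3.12):

```python
from fractions import Fraction as F

def transmittances(A, B):
    """Elementary transmittances (4.3.26): t[j][i] is the gain of branch
    x_j -> x_i, g[j][i] of branch u_j -> x_i (x_i withdrawn from equation i)."""
    n, p = len(A), len(B[0])
    t = [[(1 - A[i][i]) if i == j else -A[i][j] for i in range(n)]
         for j in range(n)]
    g = [[B[i][j] for i in range(n)] for j in range(p)]
    return t, g

A = [[F(3), F(4)], [F(2), F(-5)]]      # the 2x2 example (4.3.12)
B = [[F(1), F(-1)], [F(1), F(-5)]]
t, g = transmittances(A, B)

# Check (4.3.27) at the known solution for u = (1, 0): x = (9/23, -1/23)
u, x = [F(1), F(0)], [F(9, 23), F(-1, 23)]
for i in range(2):
    rhs = sum(t[j][i]*x[j] for j in range(2)) + sum(g[j][i]*u[j] for j in range(2))
    assert rhs == x[i]                 # x_i = sum_j t_ji x_j + sum_j g_ji u_j
print("(4.3.27) holds")
```

Solving the fixed-point form (4.3.27) is algebraically the same as solving Ax = Bu, since I minus the transposed t-matrix reproduces A.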
Example 4.3.3. SFG of Three Algebraic Equations.
Let us consider a system of three algebraic equations with numerical coefficients, where the dependent and the independent variables have already been selected as in (4.3.23).
           2x1 − 3x2      = 4u1
(4.3.28)   7x1 − 2x2 + x3 = 4u2
           6x1 − 5x2      = 3u1 + u2
Now we express each dependent variable in terms of the other variables. The variable x3 can be withdrawn only from the second equation. Let us express x1 from the first equation and x2 from the third, getting,
           x1 = −x1 + 3x2 + 4u1
(4.3.29)   x2 = −6x1 + 6x2 + 3u1 + u2
           x3 = −7x1 + 2x2 + 4u2
The corresponding SFG is depicted in Fig. 4.3.11.
We could select x1 from the third equation and x2 from the first one, the same system then being represented by another SFG.
Figure no. 4.3.11. (SFG of (4.3.29): input nodes u1, u2; self-loops t11 = −1 at x1 and t22 = 6 at x2; branches x2 → x1: 3, x1 → x2: −6, x1 → x3: −7, x2 → x3: 2; input branches u1 → x1: 4, u1 → x2: 3, u2 → x2: 1, u2 → x3: 4.)
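The system (4.3.28) can also be solved directly, which gives a reference against which any graph reduction can be checked. A minimal exact Gauss-Jordan elimination (the `solve` helper is ours, not from the text):

```python
from fractions import Fraction as F

def solve(A, B):
    """Gauss-Jordan elimination with exact rationals; returns H with x = H u."""
    n = len(A)
    M = [ra[:] + rb[:] for ra, rb in zip(A, B)]   # augmented matrix [A | B]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [vr - M[r][col]*vc for vr, vc in zip(M[r], M[col])]
    return [row[n:] for row in M]                 # the block A^(-1) B

# (4.3.28) written as A x = B u
A = [[F(2), F(-3), F(0)],
     [F(7), F(-2), F(1)],
     [F(6), F(-5), F(0)]]
B = [[F(4), F(0)],
     [F(0), F(4)],
     [F(3), F(1)]]
H = solve(A, B)
print([[str(v) for v in row] for row in H])
# [['-11/8', '3/8'], ['-9/4', '1/4'], ['41/8', '15/8']]
```

The rows give x1, x2, x3 as combinations of u1, u2; the same values reappear in Ch. 4.4 when this graph is reduced.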
4.3.3.2. Construction of SFG Starting from a Block Diagram.
As discussed, a block diagram graphically represents a system of equations. As a consequence, this system can also be represented by a signal flow graph. This process can be performed directly from the block diagram according to the following algorithm:
1. Attach additional variables to each takeoff point and to the output of each summing operator. They will become nodes of the graph.
2. The input variables and the initial conditions (when the block diagram also represents the effects of initial conditions, or it is a state diagram) will be associated to input nodes. The output variables will be associated to output nodes.
3. Draw all the nodes in positions similar to the positions of the corresponding variables in the block diagram. It is recommended to avoid branch crossings.
4. For each node different from an input node, establish, based on the block diagram, the relationship expressing the value of that node in terms of the other nodes, and draw that relationship in the graph.
5. The component H_ij(s) of the transfer matrix H(s),
  H_ij(s) = Y_i(s)/U_j(s),  with U_k ≡ 0, k ≠ j,
will be represented by the equivalent transmittance TU_jY_i between the input node U_j and the node Y_i.
Example 4.3.4. SFG of a Multivariable System.
Let us consider the system from Example 4.2.2 whose block diagram, depicted in Fig. 4.2.3, is copied in Fig. 4.3.12.
Figure no. 4.3.12. (Block diagram of the multivariable system from Example 4.2.2: blocks H1 ... H7, summing junctions S1 ... S4 with their signs, inputs U1, U2, outputs Y1, Y2 and the additional variables a0 ... a8.)
This block diagram is already prepared with the additional variables we utilised in Example 4.2.2. The system contains 13 variables and is described by the 11 equations (4.2.7). We could draw the graph starting from this set of equations, but now we shall draw the graph, presented in Fig. 4.3.13, just by looking at the block diagram.
Figure no. 4.3.13. (SFG of the block diagram from Fig. 4.3.12: input nodes U1, U2, internal nodes a0 ... a8, output nodes Y1, Y2; branch gains H1 ... H7, the summing-junction signs, and unity gains.)
4.4. Systems Reduction Using State Flow Graphs.
To reduce a system to one of its canonical forms: input-output, input+initial_state-output or input+initial_state-state, means to solve the system of equations describing that system.
When the system is represented by a signal flow graph (SFG), this process is called SFG reduction, or solving the SFG, which means determining all, or only the required, equivalent transmittances between the input nodes and the output nodes.
As we mentioned in Ch. 4.3, any node different from an input node can be related to a new node, taken as an output node, through a unity-gain branch. This new node is a "dummy" node.
The nodes of the graph (different from the input nodes) x_i, i = i_1, ..., i_k, ..., i_r, in whose evaluation alone we are interested, will be related to "dummy" output nodes y_k, k = 1:r, by the relation
(4.4.1)  y_k = x_{i_k},  k = 1:r,
and represented in the graph by r additional branches with unity gains.
Example. Suppose that in Example 4.3.3 we are interested in evaluating only the variables x1 = x_{i_1} and x3 = x_{i_2}; that means r = 2. We can consider two new output nodes y1 = x1, y2 = x3, attached to the graph from Fig. 4.3.11 as in Fig. 4.4.1.
Figure no. 4.4.1. (The graph from Fig. 4.3.11 completed with two unity-gain output branches x1 → y1 and x3 → y2.)
In a reduced graph, any output variable y_k is expressed by the so-called canonical relationship of the form
(4.4.2)  y_k = Σ_{j=1..p} H_kj·u_j = Σ_{j=1..p} Tu_jy_k·u_j = Σ_{j=1..p} T_jk·u_j,
where by the symbols H_kj, Tu_jy_k, T_jk we denote the equivalent transmission functions (equivalent transmittances).
A reduced graph is represented as in Fig. 4.4.2.
Figure no. 4.4.2. (Star-shaped reduced graph: the input nodes u_1, ..., u_j, ..., u_p are connected directly to the output node y_k by branches of gains T_1k = Tu_1y_k, ..., T_jk = Tu_jy_k, ..., T_pk = Tu_py_k.)
4. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. 4.4. Systems Reduction Using State Flow Graphs.
116
Two methods can be utilised for SFG reduction:
1. SFG reduction by elementary transformations.
2. SFG reduction using Mason's formula.
4.4.1. SFG Reduction by Elementary Transformations.
Besides the transformations based on the signal flow graph algebra presented in Ch. 4.3.2, two other operations can be utilised:
Elimination of a self-loop.
Elimination of a node.
These operations can also be utilised just to simplify a graph, even if we shall afterwards apply Mason's formula.
4.4.1.1. Elimination of a Self-loop.
A self-loop appears in a graph because a node x_i is represented by an equation of the form (4.3.27),
(4.4.3)  x_i = Σ_{k=1..n} t_ki·x_k + Σ_{j=1..p} g_ji·u_j,
where x_i depends upon x_i itself through t_ii.
To eliminate the self-loop determined by this t_ii, where t_ii ≠ 1, we shall express (4.4.3) as
  x_i − t_ii·x_i = Σ_{k=1..n, k≠i} t_ki·x_k + Σ_{j=1..p} g_ji·u_j  ⇔
  x_i = Σ_{k=1..n, k≠i} [t_ki/(1 − t_ii)]·x_k + Σ_{j=1..p} [g_ji/(1 − t_ii)]·u_j  ⇔
(4.4.4)  x_i = Σ_{k=1..n, k≠i} t'_ki·x_k + Σ_{j=1..p} g'_ji·u_j.
Now the following rule can be formulated:
To eliminate an undesired self-loop t_ii attached to a node x_i, replace the nonzero elementary transmittances t_ki, ∀k, and g_ji, ∀j, of all the elementary branches entering the node x_i, by the transmittances t'_ki, ∀k, and g'_ji, ∀j, respectively, where,
(4.4.5)  t'_ki = t_ki/(1 − t_ii),  k = 1:n;   g'_ji = g_ji/(1 − t_ii),  j = 1:p.
Example. Let us consider the graph from Ex. 4.3.3 as redrawn in Fig. 4.4.1. We want to eliminate the self-loop attached to x1, whose transmittance is t11 = −1. The branches entering x1 (in short, their transmittances) are: t21 = 3 and g11 = 4. The new transmittances are,
  t'21 = t21/(1 − t11) = 3/(1 − (−1)) = 3/2;   g'11 = g11/(1 − t11) = 4/(1 − (−1)) = 4/2 = 2.
The other transmittances are not modified.
After this elimination the graph from Fig. 4.4.1 looks as in Fig. 4.4.3.
Figure no. 4.4.3. (The graph from Fig. 4.4.1 after the elimination of the self-loop at x1: the entering branches become u1 → x1 of gain 4/2 = 2 and x2 → x1 of gain 3/2; everything else, including the self-loop t22 = 6 at x2, is unchanged.)
Now, in the above graph, we want to eliminate the self-loop attached to x2, whose transmittance is t22 = 6. The branches entering x2 are: t12 = −6, g12 = 3 and g22 = 1. Then we obtain,
  t'12 = t12/(1 − t22) = −6/(1 − 6) = 6/5;   g'12 = g12/(1 − t22) = 3/(1 − 6) = −3/5;   g'22 = g22/(1 − t22) = 1/(1 − 6) = −1/5.
After this new elimination the graph from Fig. 4.4.3 looks as in Fig. 4.4.4.
Figure no. 4.4.4. (Self-loop-free graph: u1 → x1: 2; x2 → x1: 3/2; x1 → x2: 6/5; u1 → x2: −3/5; u2 → x2: −1/5; x1 → x3: −7; x2 → x3: 2; u2 → x3: 4; x1 → y1: 1; x3 → y2: 1.)
This graph could also be obtained by withdrawing the variables from the system (4.3.28) after division: the first equation by 2 and the third equation by 5.
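The self-loop elimination rule (4.4.5) is easy to mechanise. A sketch (the helper name is ours), replayed on the two eliminations above:

```python
from fractions import Fraction as F

def remove_selfloop(edges, node):
    """Rule (4.4.5): divide every branch entering `node` by 1 - t_ii,
    then drop the self-loop. `edges` maps (src, dst) -> gain."""
    t_ii = edges.pop((node, node), F(0))
    assert t_ii != 1                      # the rule requires t_ii != 1
    for (src, dst) in list(edges):
        if dst == node:
            edges[(src, dst)] /= (1 - t_ii)

# Branches of (4.3.29) entering x1 and x2 (graph of Fig. 4.4.1)
edges = {('x1', 'x1'): F(-1), ('x2', 'x1'): F(3), ('u1', 'x1'): F(4),
         ('x2', 'x2'): F(6), ('x1', 'x2'): F(-6),
         ('u1', 'x2'): F(3), ('u2', 'x2'): F(1)}
remove_selfloop(edges, 'x1')
remove_selfloop(edges, 'x2')
print(edges[('x2', 'x1')], edges[('u1', 'x1')])                       # 3/2 2
print(edges[('x1', 'x2')], edges[('u1', 'x2')], edges[('u2', 'x2')])  # 6/5 -3/5 -1/5
```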
4.4.1.2. Elimination of a Node.
Only dependent nodes can be eliminated. Eliminating a node x_i from a graph is equivalent to the elimination of the variable x_i from the system of equations. This is performed by substituting the expression of the variable x_i as a function of all the other variables except x_i, expression obtained from one equation, into all the other equations of the system. That means the node to be eliminated must not have a self-loop.
Withdrawing the variable x_i, free of self-loop, from one equation of the system (4.3.20) or (4.3.23), we obtain,
(4.4.6)  x_i = Σ_{j=1..n, j≠i} t_ji·x_j + Σ_{j=1..p} g_ji·u_j = Σ_{j=1..m, j≠i} t_ji·x_j,
where, for the sake of convenience, in the last sum we do not explicitly denote the transmittances g_ji coming from the input nodes.
One node x_k, different from an input node (that means 1 ≤ k ≤ n) and different from our x_i (k ≠ i), was drawn taking into consideration the relation,
(4.4.7)  x_k = Σ_{j=1..m, j≠i} t_jk·x_j + t_ik·x_i.
Substituting (4.4.6) into (4.4.7) results in,
  x_k = Σ_{j=1..m, j≠i} t_jk·x_j + t_ik·[Σ_{j=1..m, j≠i} t_ji·x_j] = Σ_{j=1..m, j≠i} [t_jk + t_ik·t_ji]·x_j.
Denoting,
  t'_jk = t_jk + t_ik·t_ji,  ∀k ∈ [1, n], k ≠ i,
we have
(4.4.8)  x_k = Σ_{j=1..m, j≠i} t'_jk·x_j,  ∀k ∈ [1, n], k ≠ i.
Since t_ik·t_ji = t_ji·t_ik, for ∀k ∈ [1, n], k ≠ i, and ∀j ∈ [1, m], j ≠ i, we can write
(4.4.9)  t'_jk = t_jk + t_ji·t_ik,  if t_ji·t_ik ≠ 0;
         t'_jk = t_jk,              if t_ji = 0 or t_ik = 0,
which makes the node x_i elimination rule easier to interpret (as in Fig. 4.4.5.):
Figure no. 4.4.5. (Node elimination rule: the direct branch x_j → x_k of gain t_jk and the two-branch path x_j → x_i → x_k of gain t_ji·t_ik are replaced by a single branch x_j → x_k of gain t'_jk = t_jk + t_ji·t_ik.)
To eliminate a non-input node x_i, replace all the elementary transmittances t_jk between any non-output node x_j and any non-input node x_k by a new transmittance t'_jk = t_jk + t_ji·t_ik whenever the gain of the path x_j → x_i → x_k is different from zero, and then erase all the branches related to the node x_i.
It is possible that initially t_jk = 0 (no branch directly connects x_j to x_k) and that, after the elimination of x_i, a branch appears between x_j and x_k with the transmittance t'_jk = 0 + t_ji·t_ik.
In the category of nodes x_k we must include all the "dummy" output nodes and, of course, in the category of nodes x_j we must include all the input nodes. To avoid omissions, it is recommended to note in two separate lists which nodes can appear as x_j nodes and which as x_k nodes.
We must inspect all combinations (x_j, x_k) to detect any nonzero path (constituted only by two branches) passing through x_i, even if there is no direct branch between x_j and x_k. Note that if two selections of the pairs (x_j, x_k) are, for example, (a, b) and (b, a), then they must be inspected independently.
Sometimes, for writing convenience, we shall use the notations
(4.4.10)  t_jk = tx_jx_k = t(x_j, x_k),   t'_jk = t'x_jx_k = t'(x_j, x_k),   T_jk = Tx_jx_k = T(x_j, x_k).
Example. Let us consider the graph from Fig.4.4.4. related to Ex. 4.3.3.
We want to eliminate the node x2.
As x_j-type nodes we can have: u1, u2, x1, x3.
As x_k-type nodes we can have: y1, y2, x1, x3.
Nonzero paths (constituted only by two branches) passing through x2, which will affect the previous elementary transmittances (even those equal to zero), are observed for the pairs: (u1, x1), (u1, x3), (u2, x1), (u2, x3), (x1, x1), (x1, x3). They determine,
t'(u1, x1) = 2 + (−3/5)(3/2) = 11/10,
t'(u1, x3) = 0 + (−3/5)(2) = −6/5,
t'(u2, x1) = 0 + (−1/5)(3/2) = −3/10,
t'(u2, x3) = 4 + (−1/5)(2) = 18/5,
t'(x1, x1) = 0 + (6/5)(3/2) = 9/5 (a new self-loop at x1),
t'(x1, x3) = −7 + (6/5)(2) = −23/5.
Now, in the graph from Fig. 4.4.4, we first replace the transmittances t(u1,x1), t(u1,x3), t(u2,x1), t(u2,x3), t(x1,x1), t(x1,x3) by t'(u1,x1), t'(u1,x3), t'(u2,x1), t'(u2,x3), t'(x1,x1), t'(x1,x3), and then erase all the branches related to x2.
The graph with the node x2 eliminated is illustrated in Fig. 4.4.6.
Figure no. 4.4.6. (Remaining nodes u1, u2, x1, x3, y1, y2: u1 → x1: 11/10; u2 → x1: −3/10; new self-loop at x1: 9/5; x1 → x3: −23/5; u1 → x3: −6/5; u2 → x3: 18/5; x1 → y1: 1; x3 → y2: 1.)
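The node elimination rule (4.4.9), combined with the self-loop rule (4.4.5), reduces the example completely. The sketch below (our code, not the author's) eliminates x2 from the graph of Fig. 4.4.4 and reads off the solution, which matches the exact values x1 = −(11/8)u1 + (3/8)u2 and x3 = (41/8)u1 + (15/8)u2 of system (4.3.28):

```python
from fractions import Fraction as F

def eliminate_node(edges, node):
    """Rule (4.4.9): for every two-branch path x_j -> node -> x_k add
    t_ji*t_ik to t_jk, then erase every branch touching `node`.
    The node must carry no self-loop."""
    assert (node, node) not in edges
    into = {s: g for (s, d), g in edges.items() if d == node}
    out = {d: g for (s, d), g in edges.items() if s == node}
    for (s, d) in list(edges):
        if node in (s, d):
            del edges[(s, d)]
    for j, gj in into.items():
        for k, gk in out.items():
            edges[(j, k)] = edges.get((j, k), F(0)) + gj*gk

# Graph of Fig. 4.4.4 (self-loops already removed)
edges = {('x2', 'x1'): F(3, 2), ('u1', 'x1'): F(2),
         ('x1', 'x2'): F(6, 5), ('u1', 'x2'): F(-3, 5), ('u2', 'x2'): F(-1, 5),
         ('x1', 'x3'): F(-7), ('x2', 'x3'): F(2), ('u2', 'x3'): F(4)}
eliminate_node(edges, 'x2')

# The elimination creates a self-loop 9/5 at x1; remove it per (4.4.5):
f = 1 - edges.pop(('x1', 'x1'))
for (s, d) in list(edges):
    if d == 'x1':
        edges[(s, d)] /= f

print(edges[('u1', 'x1')], edges[('u2', 'x1')])   # -11/8 3/8
x3_u1 = edges[('u1', 'x3')] + edges[('u1', 'x1')]*edges[('x1', 'x3')]
x3_u2 = edges[('u2', 'x3')] + edges[('u2', 'x1')]*edges[('x1', 'x3')]
print(x3_u1, x3_u2)                               # 41/8 15/8
```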
4.4.1.3. Algorithm for SFG Reduction by Elementary Transformations.
The reduction of an SFG by elementary transformations is obtained by performing the following steps:
1. Eliminate the parallel branches using the parallel rule.
2. Eliminate the cascades of branches using the multiplication rule.
3. Eliminate the self-loops.
4. Eliminate the intermediate nodes.
Practically, the complete reduction of an SFG up to the canonical form is a rather tedious task. Steps of the above algorithm can be utilised just to simplify the graph before applying Mason's formula.
4.4.2. SFG Reduction by Mason's General Formula.
The difficulties encountered in the algebraic manipulation of a graph are eliminated using the so-called Mason's general formula. It determines the equivalent transmittance Tu_jy_k = T(u_j, y_k) between an input node u_j and an output node y_k.
If we are interested in some intermediate nodes x_i, then we must construct "dummy" output nodes, connecting these intermediate nodes to output nodes through additional branches of unity transmittance, which perform x_i = y_i.
The value of an output variable y_i in a graph with p input nodes u_j, j = 1:p, is expressed by the relation,
(4.4.11)  y_i = Σ_{j=1..p} Tu_jy_i·u_j.
Mason's general formula is,
(4.4.12)  Tu_jy_i = [Σ_q C_q·D_q]/D,
where:
D is the determinant of the graph, equal, up to the sign, to the determinant of the algebraic system of equations. It is given by the formula,
(4.4.13)  D = 1 + Σ_{k=1..m} (−1)^k·S_pk,
where S_pk is the sum of all possible combinations of k products of nontouching loops.¹
If in the graph there is a set B of n loops, B = {B_1, ..., B_n} (by B_k we understand, and also denote, the gain of the loop B_k), then:
  S_p1 = B_1 + B_2 + ... + B_n
  S_p2 = B_1B_2 + B_1B_3 + ... + B_{n−1}B_n
  ...................................            (4.4.14)
  S_pn = B_1B_2...B_n
Any term of S_pk is different from zero if and only if every two loops B_i, B_j entering that term are nontouching with each other, that means they have no common node. We recall that the gain of a loop equals the product of the transmittances of the branches defining that loop.
C_q means the path number q from the input node u_j to the output node y_i, denoted also as C_q = C_q(u_j, y_i). The index q is utilised over all the graph's paths. By C_q we understand, and also denote, the gain of the path C_q. The gain of a path equals the product of the transmittances of the branches defining that path.
D_q is computed just like D, relation (4.4.13), but taking into consideration, for its S_pk, only the loops nontouching with the path C_q. The set of these loops is denoted by B_q. If B_q = ∅, then D_q = 1.
¹ That is, the sum of the products of each combination of k mutually nontouching (disjoint) loops.
Example. Let us consider the graph from Ex. 4.3.3 as redrawn in Fig. 4.4.7. Suppose we are interested in all the dependent variables x1, x2, x3.
To apply Mason's general formula, first we must define and draw in the graph the corresponding "dummy" output nodes which bring out the variables of interest. They are defined by the relations:
  y1 = x1;  y2 = x2;  y3 = x3.
Figure no. 4.4.7. (The graph of Fig. 4.3.11 completed with three unity-gain output branches x1 → y1, x2 → y2, x3 → y3; the loops B1 (self-loop at x1), B2 (self-loop at x2) and B3 (x1 → x2 → x1) are marked.)
We identify only three loops (n = 3), B = {B1, B2, B3}:
  B1 = −1;  B2 = 6;  B3 = (3)(−6) = −18.
B1 and B2 are nontouching loops, but (B1, B3) and (B2, B3) are touching loops.
Now we compute,
  S_p1 = B1 + B2 + B3 = (−1) + (6) + (−18) = −13
  S_p2 = B1·B2 = (−1)(6) = −6.
The other two terms, B2B3 and B3B1, do not appear in S_p2 because the loops (B3, B2) and (B3, B1) are touching. Also S_p3 = 0, because there do not exist three loops nontouching with one another. Under such conditions,
  D = 1 + Σ_{k=1..2} (−1)^k·S_pk = 1 − S_p1 + S_p2 = 1 − (−13) + (−6) = 8.
Evaluation of Tu1y1 = T(u1, y1): We identify two paths from u1 to y1:
C1 = C1(u1, y1) = (4)(1) = 4. Only B2 is nontouching with C1, so S_p1 = B2, S_p2 = 0 and D1 = 1 − B2 = 1 − 6 = −5.
C2 = C2(u1, y1) = (3)(3)(1) = 9. No loop is nontouching with C2, so D2 = 1.
We obtain,
  Tu1y1 = (C1·D1 + C2·D2)/D = [4·(−5) + 9·(1)]/8 = −11/8.
Evaluation of Tu2y1 = T(u2, y1): We identify only one path from u2 to y1:
C3 = C3(u2, y1) = (1)(3)(1) = 3. No loop is nontouching with C3, so D3 = 1.
We obtain,
  Tu2y1 = C3·D3/D = 3·(1)/8 = 3/8.
Evaluation of Tu1y2 = T(u1, y2): We identify two paths from u1 to y2:
C4 = C4(u1, y2) = (3)(1) = 3. Only B1 is nontouching with C4, so D4 = 1 − B1 = 1 − (−1) = 2.
C5 = C5(u1, y2) = (4)(−6)(1) = −24. No loop is nontouching with C5, so D5 = 1.
We obtain,
  Tu1y2 = (C4·D4 + C5·D5)/D = [(3)·(2) + (−24)·(1)]/8 = −18/8.
Evaluation of Tu2y2 = T(u2, y2): We identify one path from u2 to y2:
C6 = C6(u2, y2) = (1)(1) = 1. Only B1 is nontouching with C6, so D6 = 1 − (−1) = 2.
We obtain,
  Tu2y2 = C6·D6/D = (1)·(2)/8 = 2/8.
Evaluation of Tu1y3 = T(u1, y3): We identify four paths from u1 to y3:
C7 = C7(u1, y3) = (3)(2)(1) = 6. Only B1 is nontouching with C7, so D7 = 2.
C8 = C8(u1, y3) = (3)(3)(−7)(1) = −63. No loop is nontouching with C8, so D8 = 1.
C9 = C9(u1, y3) = (4)(−7)(1) = −28. Only B2 is nontouching with C9, so D9 = 1 − 6 = −5.
C10 = C10(u1, y3) = (4)(−6)(2)(1) = −48. No loop is nontouching with C10, so D10 = 1.
We obtain,
  Tu1y3 = (C7·D7 + C8·D8 + C9·D9 + C10·D10)/D
        = [(6)·(2) + (−63)·(1) + (−28)·(−5) + (−48)·(1)]/8 = 41/8.
Evaluation of Tu2y3 = T(u2, y3): We identify three paths from u2 to y3:
C11 = C11(u2, y3) = (4)(1) = 4. All three loops are nontouching with C11, so D11 = D = 8.
C12 = C12(u2, y3) = (1)(2)(1) = 2. Only B1 is nontouching with C12, so D12 = 2.
C13 = C13(u2, y3) = (1)(3)(−7)(1) = −21. No loop is nontouching with C13, so D13 = 1.
  Tu2y3 = (C11·D11 + C12·D12 + C13·D13)/D = [(4)·(8) + (2)·(2) + (−21)·(1)]/8 = 15/8.
Collecting the results, we write:
  y1 = x1 = Tu1y1·u1 + Tu2y1·u2 = (−11/8)·u1 + (3/8)·u2
  y2 = x2 = Tu1y2·u1 + Tu2y2·u2 = (−18/8)·u1 + (2/8)·u2
  y3 = x3 = Tu1y3·u1 + Tu2y3·u2 = (41/8)·u1 + (15/8)·u2
The same results are obtained, more easily, by pure algebraic methods, solving the system (4.3.28). The goal here was only to illustrate how to apply Mason's general formula, which brings great advantages for symbolic systems and when there are not so many paths as in this example.
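Mason's formula (4.4.12)–(4.4.14) can be implemented for small graphs by brute-force enumeration of simple loops and simple paths. The sketch below (our code, exact arithmetic) reproduces all six transmittances of the example:

```python
from fractions import Fraction as F
from itertools import combinations

def loops(edges):
    """All simple loops as (gain, node set); each loop found once,
    starting from its smallest node."""
    out, nodes = [], sorted({n for e in edges for n in e})
    for v in nodes:
        stack = [(v, F(1), frozenset({v}))]
        while stack:
            cur, gain, seen = stack.pop()
            for (s, d), g in edges.items():
                if s != cur:
                    continue
                if d == v:
                    out.append((gain*g, seen))
                elif d > v and d not in seen:
                    stack.append((d, gain*g, seen | {d}))
    return out

def paths(edges, src, dst):
    """All simple paths src -> dst as (gain, set of visited nodes)."""
    out, stack = [], [(src, F(1), frozenset({src}))]
    while stack:
        cur, gain, seen = stack.pop()
        if cur == dst:
            out.append((gain, seen))
            continue
        for (s, d), g in edges.items():
            if s == cur and d not in seen:
                stack.append((d, gain*g, seen | {d}))
    return out

def det(loop_list):
    """D = 1 + sum_k (-1)^k S_pk, per (4.4.13)-(4.4.14)."""
    D = F(1)
    for k in range(1, len(loop_list) + 1):
        for combo in combinations(loop_list, k):
            if all(a[1].isdisjoint(b[1]) for a, b in combinations(combo, 2)):
                prod = F((-1)**k)
                for gain, _ in combo:
                    prod *= gain
                D += prod
    return D

def mason(edges, src, dst):
    L = loops(edges)
    return sum((gain * det([l for l in L if l[1].isdisjoint(p)])
                for gain, p in paths(edges, src, dst)), F(0)) / det(L)

# Graph of Fig. 4.4.7: equations (4.3.29) plus dummy outputs y1, y2, y3
edges = {('x1','x1'): F(-1), ('x2','x1'): F(3), ('u1','x1'): F(4),
         ('x1','x2'): F(-6), ('x2','x2'): F(6), ('u1','x2'): F(3), ('u2','x2'): F(1),
         ('x1','x3'): F(-7), ('x2','x3'): F(2), ('u2','x3'): F(4),
         ('x1','y1'): F(1), ('x2','y2'): F(1), ('x3','y3'): F(1)}
for u in ('u1', 'u2'):
    print(u, [str(mason(edges, u, y)) for y in ('y1', 'y2', 'y3')])
# u1 ['-11/8', '-9/4', '41/8']
# u2 ['3/8', '1/4', '15/8']
```

(The fractions are printed in reduced form: −9/4 = −18/8 and 1/4 = 2/8, as in the collected results above.)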
Example 4.4.1. Reduction by Mason's Formula of a Multivariable System.
We saw in Ex. 4.2.2 how the multivariable system from Fig. 4.2.3 can be reduced using block diagram transformations. Its SFG has been drawn in Fig. 4.3.13, which we copy now in Fig. 4.4.8 in order to apply Mason's general formula for reduction purposes.
Figure no. 4.4.8. (The SFG from Fig. 4.3.13, with the loops B1, B2, B3 marked: nodes U1, U2, a0 ... a8, Y1, Y2; branch gains H1 ... H7 and unity gains.)
Three loops are identified (n = 3), B = {B1, B2, B3}:
  B1 = H2H3(−H5);  B2 = −H7H1H2H3H6;  B3 = H1H2H4(−H7).
All of them are touching loops, so now we compute,
  S_p1 = B1 + B2 + B3;  S_p2 = 0,
  D = 1 − S_p1 = 1 − [H2H3(−H5) − H7H1H2H3H6 + H1H2H4(−H7)]
  D = 1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7.
It can be observed that in three rows we determined the graph determinant, which is the same as the common denominator of the transfer matrix from relation (4.2.22).
Evaluation of Tu1y1 = T(u1, y1) = H11(s) = Hu1y1.
We identify two paths from u1 to y1:
C1 = C1(u1, y1) = H1H2H3H6. Because no loop is nontouching with C1, D1 = 1.
C2 = C2(u1, y1) = H1H2H4. Because no loop is nontouching with C2, D2 = 1. We obtain,
  Tu1y1 = Hu1y1 = H11 = (C1·D1 + C2·D2)/D
        = (H1H2H3H6 + H1H2H4)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7).
Evaluation of Tu2y1 = T(u2, y1) = H12(s) = Hu2y1.
We identify two paths from u2 to y1:
C3 = C3(u2, y1) = H4. No loop is nontouching with C3, so D3 = 1.
C4 = C4(u2, y1) = H3H6. No loop is nontouching with C4, so D4 = 1. We obtain,
  Tu2y1 = Hu2y1 = H12 = (C3·D3 + C4·D4)/D
        = (H4 + H3H6)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7).
Evaluation of Tu1y2 = T(u1, y2) = H21(s) = Hu1y2.
We identify one path from u1 to y2:
C5 = C5(u1, y2) = H1. No loop is nontouching with C5, so D5 = 1. We obtain,
  Tu1y2 = Hu1y2 = H21 = C5·D5/D = H1/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7).
Evaluation of Tu2y2 = T(u2, y2) = H22(s) = Hu2y2.
We identify three paths from u2 to y2:
C6 = C6(u2, y2) = H4(−H7)H1. No loop is nontouching with C6, so D6 = 1.
C7 = C7(u2, y2) = H3(−H5). No loop is nontouching with C7, so D7 = 1.
C8 = C8(u2, y2) = H3H6(−H7)H1. No loop is nontouching with C8, so D8 = 1. We obtain,
  Tu2y2 = Hu2y2 = H22 = (C6·D6 + C7·D7 + C8·D8)/D
  H22 = (−H1H4H7 − H3H5 − H1H3H6H7)/(1 + H2H3H5 + H1H2H3H6H7 + H1H2H4H7).
The results are identical to those obtained by block diagram transformations,
relations (4.2.18)... (4.2.21), but with much less effort.
5. SYSTEM REALISATION BY STATE EQUATIONS.
5.1. Problem Statement.
Let us suppose a SISO system, p = 1, r = 1. If a system
(5.1.1)  S = SS(A, b, c, d, x)
is given by state equations, then the transfer function H(s) can be uniquely determined as
(5.1.2)  H(s) = c^T·Φ(s)·b + d = M(s)/L(s),  Φ(s) = (sI − A)^(−1),
or, in short, S = TF(M, L).
Having the transfer function of a system, several forms of state equations can be obtained; that means we have several system realisations by state equations.
(5.1.3)  S = TF(M, L) ⇒ S = SS(A, b, c, d, x)
is one state realisation, but
(5.1.4)  S = TF(M, L) ⇒ S = SS(Ā, b̄, c̄, d̄, x̄)
is another state realisation, with the same transfer function,
(5.1.5)  H̄(s) = c̄^T·Φ̄(s)·b̄ + d̄ = H(s),  Φ̄(s) = (sI − Ā)^(−1).
The two state realisations are equivalent; that means,
  ∃T, x̄ = Tx, det T ≠ 0:
(5.1.6)  Ā = TAT^(−1),  b̄ = Tb,  c̄^T = c^T·T^(−1),  d̄ = d.
The two transfer functions H̄(s) and H(s) are identical:
  H̄(s) = c̄^T(sI − Ā)^(−1)·b̄ + d̄ = c^T·T^(−1)·(sI − TAT^(−1))^(−1)·Tb + d
        = c^T·T^(−1)·T(sI − A)^(−1)·T^(−1)·Tb + d = H(s).
(We remember that (AB)^(−1) = B^(−1)·A^(−1).)
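The invariance (5.1.5)–(5.1.6) can be checked numerically on a second-order example; the matrices and the transformation T below are our choices, not from the text:

```python
from fractions import Fraction as F

def H(A, b, c, d, s):
    """H(s) = c^T (sI - A)^(-1) b + d for a 2x2 A, per (5.1.2)."""
    M = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    y = [(b[0]*M[1][1] - M[0][1]*b[1]) / det,     # (sI - A)^(-1) b by Cramer
         (M[0][0]*b[1] - b[0]*M[1][0]) / det]
    return c[0]*y[0] + c[1]*y[1] + d

def mm(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(0), F(1)], [F(-2), F(-3)]]; b = [F(0), F(1)]; c = [F(4), F(2)]; d = F(1)

# Equivalence transformation (5.1.6) with an arbitrary invertible T
T, Tinv = [[F(1), F(1)], [F(0), F(1)]], [[F(1), F(-1)], [F(0), F(1)]]
Abar = mm(mm(T, A), Tinv)
bbar = [T[i][0]*b[0] + T[i][1]*b[1] for i in range(2)]
cbar = [c[0]*Tinv[0][j] + c[1]*Tinv[1][j] for j in range(2)]

for s in (F(1), F(2), F(5)):          # test points away from the poles -1, -2
    assert H(A, b, c, d, s) == H(Abar, bbar, cbar, d, s)
print(H(A, b, c, d, F(1)))            # 2, i.e. (1 + 5 + 6)/(1 + 3 + 2)
```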
Suppose that m = degree(M) and n = degree(L), with m ≤ n. Because L(s) has n + 1 coefficients and M(s) has m + 1 coefficients, H(s) has only n + m + 1 free parameters, due to the ratio. The state realisations have
  n² (from A) + n (from b) + n (from c) + 1 (from d) = n² + 2n + 1
parameters, so
  n² + 2n + 1 > n + m + 1,
which leads to an undetermined system of n² + 2n + 1 unknown variables from n + m + 1 equations. This explains why there are several (an infinity of) state realisations for the same transfer function.
There are several methods to determine the state equations starting from the transfer function. For multivariable systems this procedure is more complicated, but it is possible too.
Some state realisations have special forms with a minimum number of free parameters. They are called canonical forms (economical forms). Some canonical forms are important because they put into evidence system properties like controllability and observability.
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.1. Problem Statement.
125
Mainly there are two structures for canonical forms:
- Controllable structures (ID: integral-differential structures), and
- Observable structures (DI: differential-integral structures).
Now we recall the controllability and observability criteria for LTI systems:
5.1.1. Controllability Criterion.
The MIMO LTI system S,
(5.1.7)  S = SS(A, B, C, D),
is completely controllable, or we say that the pair (A, B) is a controllable pair, if and only if the so-called controllability matrix P,
(5.1.8)  P = [B  AB ... A^(n−1)B],
has maximum rank. P is an n × (np) matrix, so the maximum rank of P is n. Sometimes we can check this condition by computing
(5.1.9)  ∆P = det(P·P^T),
which must be different from zero. For a SISO LTI system S = SS(A, b, c, d), the matrix P is a square n × n matrix,
(5.1.10)  P = [b  Ab ... A^(n−1)b].
5.1.2. Observability Criterion.
The MIMO LTI system S,
(5.1.11)  S = SS(A, B, C, D),
is completely observable, or the pair (A, C) is an observable pair, if and only if the so-called observability matrix Q,
(5.1.12)  Q = [C; CA; ... ; CA^(n−1)],
has maximum rank. Q is an (rn) × n matrix, so the maximum rank of Q is n. Sometimes we can check this condition by computing
(5.1.13)  ∆Q = det(Q^T·Q),
which must be different from zero. For a SISO LTI system
(5.1.14)  S = SS(A, b, c, d),
the matrix Q is a square n × n matrix,
(5.1.15)  Q = [c^T; c^T·A; ... ; c^T·A^(n−1)].
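Both criteria reduce to a rank computation. A sketch for the SISO case (5.1.10), (5.1.15), with example matrices of our choosing (our helper names, not from the text):

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix of Fractions by Gauss elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c]*p for a, p in zip(M[i], M[r])]
        r += 1
    return r

def matvec(A, v):
    return [sum(a*x for a, x in zip(row, v)) for row in A]

def ctrb(A, b):
    """Columns b, Ab, ..., A^(n-1)b of P (5.1.10), stored as rows."""
    cols, v = [], b
    for _ in range(len(A)):
        cols.append(v)
        v = matvec(A, v)
    return cols

def obsv(A, c):
    """Rows c^T, c^T A, ..., c^T A^(n-1) of Q (5.1.15)."""
    rows, w = [], c
    for _ in range(len(A)):
        rows.append(w)
        w = [sum(w[i]*A[i][j] for i in range(len(A))) for j in range(len(A))]
    return rows

A = [[F(0), F(1)], [F(-2), F(-3)]]; b = [F(0), F(1)]; c = [F(4), F(2)]
print(rank(ctrb(A, b)), rank(obsv(A, c)))   # 2 1: controllable, not observable
```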
5.2. First Type ID Canonical Form.
Let H(s) be a transfer function of the form
(5.2.1)  H(s) = M(s)/L(s) = (b_n·s^n + ... + b_0)/(a_n·s^n + ... + a_0) = Y(s)/U(s),  a_n ≠ 0.
The state realisation of this transfer function, as the ID canonical form of the first type, can be obtained by the so-called direct programming method, according to the following algorithm:
1. Divide the numerator and the denominator of the transfer function by s^n,
(5.2.2)  H(s) = (b_n + b_{n−1}·s^(−1) + ... + b_1·s^(−(n−1)) + b_0·s^(−n)) / (a_n + a_{n−1}·s^(−1) + ... + a_1·s^(−(n−1)) + a_0·s^(−n)) = Y(s)/U(s),  a_n ≠ 0,
and express the output as
(5.2.3)  Y(s) = M(s)·[(1/L(s))·U(s)] = M(s)·W(s),
where M(s) acts like a D (derivative) operator and 1/L(s), which produces W(s), like an I (integral) operator,
(5.2.4)  Y(s) = (b_n + b_{n−1}·s^(−1) + ... + b_0·s^(−n))·W(s),  W(s) = U(s)/(a_n + a_{n−1}·s^(−1) + ... + a_1·s^(−(n−1)) + a_0·s^(−n)).
2. Denote
(5.2.5)  W(s) = U(s)/(a_n + a_{n−1}·s^(−1) + ... + a_1·s^(−(n−1)) + a_0·s^(−n))
and express W(s) as a function of U(s) and the products [s^(−k)·W(s)], k = 1:n,
  a_n·W(s) + a_{n−1}·[s^(−1)W(s)] + ... + a_1·[s^(−(n−1))W(s)] + a_0·[s^(−n)W(s)] = U(s)
(5.2.6)  W(s) = −(a_{n−1}/a_n)·[s^(−1)W(s)] − (a_{n−2}/a_n)·[s^(−2)W(s)] − ... − (a_0/a_n)·[s^(−n)W(s)] + (1/a_n)·U(s)
(the bracketed products will be denoted, in order, x_n(s), x_{n−1}(s), ..., x_1(s)).
3. Denote the products [s^(−k)·W(s)], k = 1:n, as n new variables,
(5.2.7)  X_k(s) = [s^(−(n−k+1))·W(s)],  k = 1:n, that is,
  X_1(s) = s^(−n)·W(s)
  X_2(s) = s^(−(n−1))·W(s)
  .......
  X_n(s) = s^(−1)·W(s),
so the expression of W(s) from (5.2.6) is now,
(5.2.8)  W(s) = −(a_{n−1}/a_n)·X_n(s) − (a_{n−2}/a_n)·X_{n−1}(s) − ... − (a_0/a_n)·X_1(s) + (1/a_n)·U(s).
4. Express the output Y(s) from (5.2.3) and (5.2.4) as
(5.2.9)  Y(s) = b_n·W(s) + b_{n−1}·[s^(−1)W(s)] + ... + b_0·[s^(−n)W(s)],
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.2. First Type ID Canonical Form.
127
where W(s) is substituted from (5.2.6):
(5.2.10)  Y(s) = (b_{n−1} − b_n·a_{n−1}/a_n)·[s^(−1)W(s)] + (b_{n−2} − b_n·a_{n−2}/a_n)·[s^(−2)W(s)] + ... + (b_0 − b_n·a_0/a_n)·[s^(−n)W(s)] + (b_n/a_n)·U(s),
or, from (5.2.8),
(5.2.11)  Y(s) = (b_{n−1} − b_n·a_{n−1}/a_n)·X_n(s) + (b_{n−2} − b_n·a_{n−2}/a_n)·X_{n−1}(s) + ... + (b_0 − b_n·a_0/a_n)·X_1(s) + (b_n/a_n)·U(s)
(the coefficients are those denoted c_n, c_{n−1}, ..., c_1 and d at the next step).
5. Denote
  c_1 = b_0 − b_n·a_0/a_n
  c_2 = b_1 − b_n·a_1/a_n
  .....
(5.2.12)  c_k = b_{k−1} − b_n·a_{k−1}/a_n,  k = 1:n
  .....
  c_n = b_{n−1} − b_n·a_{n−1}/a_n
and express the output Y(s) from (5.2.10) as
(5.2.13)  Y(s) = c_n·[s^(−1)W(s)] + c_{n−1}·[s^(−2)W(s)] + ... + c_1·[s^(−n)W(s)] + (b_n/a_n)·U(s),
or, from (5.2.11), as
(5.2.14)  Y(s) = c_1·X_1(s) + c_2·X_2(s) + ... + c_n·X_n(s) + (b_n/a_n)·U(s).
6. Draw, as a block diagram or signal flow graph, a cascade of n integrators and fill into this graphical representation the relations (5.2.6) or (5.2.8) and the output relations (5.2.13) or (5.2.14). The integrators can be represented without their initial conditions, or with the initial conditions if later on we want to get the free response from this diagram.
Figure no. 5.2.1. (Cascade of n integrators s^(−1): the integrator outputs are X_1(s), ..., X_n(s), in the time domain x_1(t), ..., x_n(t); feedback branches of gains −a_0/a_n, ..., −a_{n−1}/a_n close on W(s); the input enters through 1/a_n; the output Y(s) collects c_1, ..., c_n and the direct term b_n/a_n.)
7. Denote, from the right-hand side to the left-hand side, the integrator outputs by the variables X_1, X_2, ..., X_n, as in (5.2.7), both in the complex domain and in the time domain.
8. Interpret the diagram in the time domain and write the relations between the variables in the time domain,
  ẋ1 = x2
  ẋ2 = x3
  ........                                  (5.2.15)
  ẋ_{n−1} = x_n
  ẋ_n = −(a_0/a_n)·x1 − (a_1/a_n)·x2 − ... − (a_{n−2}/a_n)·x_{n−1} − (a_{n−1}/a_n)·x_n + (1/a_n)·u
(5.2.16)  y = c_1·x1 + c_2·x2 + ... + c_{n−1}·x_{n−1} + c_n·x_n + d·u
where we denoted,
(5.2.17)  d = b_n/a_n,  c_k = b_{k−1} − b_n·a_{k−1}/a_n,  k = 1:n.
9. Write the relations (5.2.15), (5.2.16), (5.2.17) in matrix form,
(5.2.18)  ẋ = A·x + b·u,  x = [x1, x2, ..., x_{n−1}, x_n]^T
(5.2.19)  y = c^T·x + d·u
where
  A = [  0    1    0   ..   0    0
         0    0    1   ..   0    0
         0    0    0   ..   0    0
         ..   ..   ..  ..   ..   ..
         0    0    0   ..   0    1
       −a_0/a_n  −a_1/a_n  −a_2/a_n  ..  −a_{n−2}/a_n  −a_{n−1}/a_n ],
  b = [0, 0, 0, .., 0, 1/a_n]^T,
  c = [b_0 − b_n·a_0/a_n,  b_1 − b_n·a_1/a_n,  b_2 − b_n·a_2/a_n,  ..,  b_{n−2} − b_n·a_{n−2}/a_n,  b_{n−1} − b_n·a_{n−1}/a_n]^T,
  d = b_n/a_n.                                  (5.2.20)
This is also called the controllable companion canonical form of the state equations.
It can be observed that if b_n = 0, that is, the system is strictly proper, then the c vector is composed of the transfer function numerator coefficients. If a_n = 1, then the last row of the matrix A is composed of the transfer function denominator coefficients with changed sign.
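The construction (5.2.20) can be sketched numerically. The following Python snippet, with NumPy, is our own illustration (the function name companion_controllable is not from the book); it builds A, b, c, d from the coefficient lists a = [a_0, ..., a_n] and b = [b_0, ..., b_n]:

```python
import numpy as np

def companion_controllable(a, b):
    """First type ID (controllable companion) canonical form, as in (5.2.20).

    a = [a0, ..., an], b = [b0, ..., bn]: denominator/numerator coefficients
    in increasing powers of s.  Returns (A, B, c, d).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a) - 1
    an, bn = a[n], b[n]
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)      # superdiagonal of ones
    A[-1, :] = -a[:n] / an          # last row: -a_k/a_n, k = 0 .. n-1
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0 / an
    c = b[:n] - bn * a[:n] / an     # c_k = b_{k-1} - b_n a_{k-1}/a_n
    d = bn / an
    return A, B, c, d

# Example 5.2.1 below: y'' + 3y' + 2y = u'' + 5u' + 6u
A, B, c, d = companion_controllable([2, 3, 1], [6, 5, 1])
print(A)          # [[ 0.  1.] [-2. -3.]]
print(B.ravel())  # [0. 1.]
print(c)          # [4. 2.]
print(d)          # 1.0
```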
Example 5.2.1. First Type ID Canonical Form of a Second Order System.
Let a system be described by the second order differential equation

ÿ + 3ẏ + 2y = ü + 5u̇ + 6u
The transfer function is

H(s) = (s^2 + 5s + 6)/(s^2 + 3s + 2) = Y(s)/U(s) = (s + 2)(s + 3)/[(s + 2)(s + 1)]

H(s) = (1 + 5s^{-1} + 6s^{-2})/(1 + 3s^{-1} + 2s^{-2})

Y(s) = (1 + 5s^{-1} + 6s^{-2})·W(s) ,   W(s) = U(s)/(1 + 3s^{-1} + 2s^{-2})

W(s) + 3[s^{-1}W(s)] + 2[s^{-2}W(s)] = U(s)

W(s) = -3[s^{-1}W(s)] - 2[s^{-2}W(s)] + U(s)      (5.2.21)

Y(s) = 1·W(s) + 5[s^{-1}W(s)] + 6[s^{-2}W(s)] =
     = 1·{-3[s^{-1}W(s)] - 2[s^{-2}W(s)] + U(s)} + 5[s^{-1}W(s)] + 6[s^{-2}W(s)]

Y(s) = (5 - 3)[s^{-1}W(s)] + (6 - 2)[s^{-2}W(s)] + U(s)

Y(s) = 2[s^{-1}W(s)] + 4[s^{-2}W(s)] + U(s)      (5.2.22)
Now the relations (5.2.21) and (5.2.22) are represented by a signal flow
graph.
(Figure no. 5.2.2: signal flow graph of (5.2.21) and (5.2.22): two integrators s^{-1} with outputs X_1(s) = x_1(t), X_2(s) = x_2(t) and initial conditions x_1(0), x_2(0); feedback loops B1 = -3s^{-1} and B2 = -2s^{-2} around the node W(s); output gains 2 and 4; a direct path of gain 1 from U(s) to Y(s).)
Figure no. 5.2.2.
Writing the relations in the time domain we obtain,

ẋ_1 = x_2
ẋ_2 = -2x_1 - 3x_2 + u
y = 4x_1 + 2x_2 + u

A = [0 1; -2 -3] ,  b = [0; 1] ,  c = [4; 2] ,  d = 1
Now we can check that this state realisation (the controllable companion canonical form) of the transfer function

H(s) = (s^2 + 5s + 6)/(s^2 + 3s + 2) = Y(s)/U(s) = (s + 2)(s + 3)/[(s + 2)(s + 1)]

is controllable but not observable.
As we can see, the transfer function has a common factor in the numerator and the denominator, so one of the two properties, controllability or observability, is lost. In our case it cannot be the controllability, since this form is controllable by construction.
P = [b  Ab] ,  b = [0; 1] ,  Ab = [0 1; -2 -3]·[0; 1] = [1; -3] ⇒

P = [0 1; 1 -3] ,  det(P) = -1 ≠ 0 . The system is controllable.
Q = [c^T; c^T A] ,  c^T = [4 2] ,  c^T A = [4 2]·[0 1; -2 -3] = [-4 -2]

Q = [4 2; -4 -2] ,  det(Q) = -8 + 8 = 0 . The system is not observable.
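The controllability and observability tests above can be reproduced numerically; this Python sketch (our own, using NumPy) recomputes det(P) and det(Q) for the realisation of Example 5.2.1:

```python
import numpy as np

# Controllable companion realisation of Example 5.2.1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[4.0, 2.0]])

P = np.hstack([b, A @ b])   # controllability matrix [b  Ab]
Q = np.vstack([c, c @ A])   # observability matrix [c^T; c^T A]
print(np.linalg.det(P))     # -1.0 -> controllable
print(np.linalg.det(Q))     # 0.0  -> not observable
```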
Operating on this signal flow graph, which is a state diagram (SD), we can get the transfer function and the two components of the free response.

D = 1 + Σ_{k≥1} (-1)^k S_pk

B_1 = -3s^{-1} ;  B_2 = -2s^{-2}
S_p1 = B_1 + B_2 = -3s^{-1} - 2s^{-2} ;  S_p2 = 0
D = 1 - S_p1 = 1 - (-3s^{-1} - 2s^{-2}) = 1 + 3s^{-1} + 2s^{-2}

H(s) = T_{uy} = (C_1·D_1 + C_2·D_2 + C_3·D_3)/D

C_1 = 1 ;  D_1 = D (the direct path U → Y does not touch the loops) ;  C_2 = 2s^{-1} ;  D_2 = 1 ;  C_3 = 4s^{-2} ;  D_3 = 1

H(s) = (1 + 3s^{-1} + 2s^{-2} + 2s^{-1} + 4s^{-2})/(1 + 3s^{-1} + 2s^{-2}) = (s^2 + 5s + 6)/(s^2 + 3s + 2)
5.3. Second Type DI Canonical Form.
This state realisation is also called the observable companion canonical form: canonical because it has a minimal number of elements different from 0 or 1, and observable because it guarantees the observability of its state vector.
It is called a DI (derivative-integral) realisation because it mainly interprets the input processing first as derivation, the result of which is then integrated. We can illustrate this DI process considering the associative property

Y(s) = H(s)U(s) = (1/L(s))·[M(s)U(s)] .      (5.3.1)
One method to get this canonical form is to start from the system differential equation,

a_n y^(n) + a_{n-1} y^(n-1) + ... + a_1 y^(1) + a_0 y^(0) = b_n u^(n) + b_{n-1} u^(n-1) + ... + b_1 u^(1) + b_0 u^(0)      (5.3.2)

where we denote

y^(k) = d^k y(t)/dt^k = D^k{y(t)} = D^k y = D{D^{k-1} y}      (5.3.3)

u^(k) = d^k u(t)/dt^k = D^k{u(t)} = D^k u = D{D^{k-1} u} .      (5.3.4)

Using the derivative symbolic operator

D{•} ≜ (d/dt){•} ,      (5.3.5)
the equation (5.3.2) can be written as,

a_n D^n y + a_{n-1} D^{n-1} y + .. + a_1 Dy + a_0 y = b_n D^n u + b_{n-1} D^{n-1} u + .. + b_1 Du + b_0 u

or arranged as

D^n[a_n y - b_n u] + D^{n-1}[a_{n-1} y - b_{n-1} u] + .. + D[a_1 y - b_1 u] + [a_0 y - b_0 u] = 0
a_0 y - b_0 u + D[a_1 y - b_1 u + D[.. + D[a_{n-1} y - b_{n-1} u + D[a_n y - b_n u]]..]] = 0      (5.3.6)

(the innermost bracket will be denoted x_1 and the successive bracketed expressions x_2, .., x_n, so that the outermost term reads a_0 y - b_0 u + ẋ_n = 0)
As we can see from (5.3.6), we denote,

x_1 = a_n y - b_n u      (5.3.7)

from where,

y = (1/a_n)x_1 + (b_n/a_n)u
x_2 = a_{n-1} y - b_{n-1} u + ẋ_1  ⇒  ẋ_1 = -(a_{n-1}/a_n)x_1 + x_2 + (b_{n-1} - b_n·a_{n-1}/a_n)u

x_3 = a_{n-2} y - b_{n-2} u + ẋ_2  ⇒  ẋ_2 = -(a_{n-2}/a_n)x_1 + x_3 + (b_{n-2} - b_n·a_{n-2}/a_n)u

..........      (5.3.8)

x_n = a_1 y - b_1 u + ẋ_{n-1}  ⇒  ẋ_{n-1} = -(a_1/a_n)x_1 + x_n + (b_1 - b_n·a_1/a_n)u

0 = a_0 y - b_0 u + ẋ_n  ⇒  ẋ_n = -(a_0/a_n)x_1 + (b_0 - b_n·a_0/a_n)u
From (5.3.7) and (5.3.8) we can write the state equations in matrix form,

ẋ = Ax + bu ,   x = [x_1, x_2, ... , x_{n-1}, x_n]^T

y = c^T x + du
where,

A = [ -a_{n-1}/a_n   1   0   ..   0   0
      -a_{n-2}/a_n   0   1   ..   0   0
      -a_{n-3}/a_n   0   0   ..   0   0
      ..             ..  ..  ..   ..  ..
      -a_1/a_n       0   0   ..   0   1
      -a_0/a_n       0   0   ..   0   0 ] ,

b = [b_{n-1} - b_n·a_{n-1}/a_n, b_{n-2} - b_n·a_{n-2}/a_n, b_{n-3} - b_n·a_{n-3}/a_n, .., b_1 - b_n·a_1/a_n, b_0 - b_n·a_0/a_n]^T ,

c = [1/a_n, 0, 0, .., 0, 0]^T ,  d = b_n/a_n
Example.
Our previous example with:

n = 2, b_2 = 1, b_1 = 5, b_0 = 6, a_2 = 1, a_1 = 3, a_0 = 2 ⇒

A = [-3 1; -2 0] ,  b = [2; 4] ,  c = [1; 0] ,  d = 1
Now we can check the controllability and observability properties.

P = [b  Ab] = [2 -2; 4 -4] ⇒ det(P) = 0 .

This realisation of the same transfer function as in the above example is not a controllable one, but

Q = [c^T; c^T A] = [1 0; -3 1] ⇒ det(Q) = 1 ≠ 0 ,

which confirms that this canonical form is an observable one.
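Both companion forms are realisations of the same H(s); a small numerical check (our own sketch, evaluating H(s) = c^T(sI - A)^{-1}b + d at a few test points) confirms this:

```python
import numpy as np

def tf_value(A, b, c, d, s):
    """Evaluate H(s) = c^T (sI - A)^{-1} b + d at a complex point s."""
    n = A.shape[0]
    return (c @ np.linalg.solve(s * np.eye(n) - A, b))[0, 0] + d

# Controllable (ID) and observable (DI) realisations of the same H(s)
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]]); b1 = np.array([[0.0], [1.0]])
c1 = np.array([[4.0, 2.0]]); d1 = 1.0
A2 = np.array([[-3.0, 1.0], [-2.0, 0.0]]); b2 = np.array([[2.0], [4.0]])
c2 = np.array([[1.0, 0.0]]); d2 = 1.0

for s in (1.0 + 0j, 2j, -0.5 + 3j):
    h = (s**2 + 5*s + 6) / (s**2 + 3*s + 2)
    print(abs(tf_value(A1, b1, c1, d1, s) - h) < 1e-9,
          abs(tf_value(A2, b2, c2, d2, s) - h) < 1e-9)   # True True
```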
5.4. Jordan Canonical Form.
This is a canonical form which points out the eigenvalues of the system.
The system matrix has a diagonal form if the eigenvalues are distinct, or a block-diagonal form if there are multiple eigenvalues. This form can be obtained by using the so-called partial fraction expansion of the transfer function. Of course, for that we have to know the roots of the denominator.
Example.

H(s) = (b_4 s^4 + ... + b_0) / [(s - λ_1)(s - λ_2)^2·((s - α)^2 + β^2)] =

     = c_0 + c_11/(s - λ_1) + c_21/(s - λ_2) + c_22/(s - λ_2)^2 + (c_31 s + c_32)/((s - α)^2 + β^2) = Y(s)/U(s)

Let us denote

H_3(s) = (c_31 s + c_32) / ((s - α)^2 + β^2)
To determine this canonical form, first we plot the output relation as a block diagram, considering for repeated roots a series of first order blocks.
Then we express the relations in the time domain, associating a state variable to the output of each first order block and, for complex roots (poles), a number of state variables equal to the number of complex poles.
(Figure no. 5.4.1: block diagram of the partial fraction expansion: a first order block 1/(s - λ_1), a chain of two blocks 1/(s - λ_2), and the block H_3; output gains C_0, C_11, C_21, C_22 sum the components Y_0, Y_1, Y_2, Y_3 into Y; the block outputs are labelled x_1^1, x_1^2, x_2^2, x_1^3, x_2^3.)
Figure no. 5.4.1.
5. SYSTEM REALISATION BY STATE EQUATIONS. 5.4. Jordan Canonical Form.
134
X_1^1 = U(s)/(s - λ_1) ⇒ sX_1^1 - λ_1 X_1^1 = U ⇒ ẋ_1^1 = λ_1 x_1^1 + u

X_1^2 = X_2^2/(s - λ_2) ⇒ ẋ_1^2 = λ_2 x_1^2 + x_2^2

X_2^2 = U(s)/(s - λ_2) ⇒ ẋ_2^2 = λ_2 x_2^2 + u

(the superscript indexes the block and the subscript the state variable inside the block)
For the blocks with complex poles we can use, as an independent problem, any state realisation method, for example the ID canonical form.
x_3 = [x_1^3; x_2^3] ,  ẋ_3 = A_3 x_3 + b_3 u ,  y_3 = c_3^T x_3 + d_3 u ,  d_3 = 0

(d_3 = 0 because H_3 is a strictly proper part: the degree of the denominator is bigger than the degree of the numerator). We have a second order system in this case.
ẋ_3 = A_3 x_3 + b_3 u

y = c_0 u + c_11 x_1^1 + c_22 x_1^2 + c_21 x_2^2 + c_32 x_1^3 + c_31 x_2^3
Construct a state vector by appending all the variables chosen as state variables.
x_1 = x_1^1 ;  x_2 = [x_1^2; x_2^2] ;  x_3 = [x_1^3; x_2^3] ⇒ x = [x_1; x_2; x_3] = [x_1^1; x_1^2; x_2^2; x_1^3; x_2^3]
Write the state equations in matrix form:

ẋ = A_J·x + B_J·u
y = C_J·x + d_J·u

A_J = blockdiag( λ_1 , [λ_2 1; 0 λ_2] , A_3 )  (Jordan blocks on the diagonal, zeros elsewhere) ;

B_J = [b_J^1; b_J^2; b_J^3] = [1; 0; 1; b_1^3; b_2^3] ;

C_J = [C_J^1  C_J^2  C_J^3] = [c_11  c_22  c_21  c_32  c_31] ;

d_J = c_0 = b_n/a_n
H_3(s) = (c_31^1 s + c_30^1) / (s^2 - 2αs + α^2 + β^2)

ẋ_3 = A_3 x_3 + b_3 u
y_3 = c_3^T x_3 + d_3 u ,  d_3 = 0

Y_3 = (c_31^1 s + c_30^1) · U(s)/((s - α)^2 + β^2)
W(s) = U(s)/((s - α)^2 + β^2) = [β^2/((s - α)^2 + β^2)]·(1/β^2)·U(s) = { (β/(s - α))^2 / [1 + (β/(s - α))^2] }·(1/β^2)·U(s)
We can interpret this relation as a feedback connection:
(Figure no. 5.4.2: feedback connection realising W(s): the input gain 1/β^2 and a negative unity feedback from x_1^3 feed two cascaded blocks β/(s - α) with outputs x_2^3 and x_1^3 = W.)
Figure no. 5.4.2.
x_1^3 = (β/(s - α))·x_2^3 ⇒ ẋ_1^3 = αx_1^3 + βx_2^3

x_2^3 = (β/(s - α))·(-x_1^3 + (1/β^2)U) ⇒ ẋ_2^3 = -βx_1^3 + αx_2^3 + (1/β)u
Y_3 = [c_31^1·(s - α + α) + c_30^1]·W(s)

W = x_1^3 = (β/(s - α))·x_2^3 ⇒ (s - α)W = βx_2^3

Y_3 = c_31^1·βx_2^3 + (c_30^1 + αc_31^1)·x_1^3 = c_3^T x_3

c_32 = c_30^1 + αc_31^1 ,  c_31 = c_31^1·β ,  Y_3 = c_3^T x_3
A_3 = [α β; -β α] ;  b_3 = [0; 1/β] ;  c_3 = [c_32; c_31]
The ID companion canonical forms are very easy to determine, but they are not robust in numerical computing for high dimensions.
The Jordan canonical form is a bit more difficult to determine, but in many cases it is preferable for numerical computation.
There are several algebraic methods for canonical form determination.
Example: the Jordan canonical form can be obtained using the modal matrix:

T = M^{-1} ,  M = [U_1 ; ... ; U_n] ,

where U_i are the eigenvectors of the matrix A (if they are independent):

AU_i = λ_i U_i
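This modal-matrix construction can be sketched numerically; the following illustration (our own, with NumPy) reuses the matrix A of Example 5.2.1, which has the distinct eigenvalues -1 and -2:

```python
import numpy as np

# Diagonalisation via the modal matrix M of eigenvectors (distinct eigenvalues)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
lam, M = np.linalg.eig(A)                  # columns of M are U_i: A U_i = lam_i U_i
T = np.linalg.inv(M)
A_jordan = T @ A @ M                       # M^{-1} A M = diag(lam)
print(np.round(A_jordan, 10))
```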
5.5 State Equations Realisation Starting from the Block Diagram.
If the system is represented by a block diagram whose blocks have some physical meaning, we can get a useful state realisation in which as many state variables as possible keep a physical meaning. Of course the matrices no longer have an economical form, but they are more robust for numerical computing. The following algorithm can be performed:
1) If the blocks are complex, factorise them into series connections as much as possible. Only the following elements should appear.
(Figure no. 5.5.1: the four elementary blocks: 1: b_0/(a_1 s + a_0); 2: (b_1 s + b_0)/(a_1 s + a_0); 3: (b_2 s^2 + b_1 s + b_0)/(a_2 s^2 + a_1 s + a_0); 4: the derivative element s.)
Figure no. 5.5.1.
2) For any such block, denote the state variables and determine the state equations, that means:
For block 1 the output can be chosen as a state variable:
ẋ = -(a_0/a_1)x + (b_0/a_1)u ,  y = x      (5.5.1)
For block 2 the output cannot be chosen as a state variable, because we do not want the time derivative of the input to appear; this transfer function is therefore split into two components:
H = (b_1 s + b_0)/(a_1 s + a_0) = b_1/a_1 + (b_0 - b_1·a_0/a_1)/(a_1 s + a_0)      (5.5.2)
Denote as state variable the output of the first order dynamical block and write the state equations:
ẋ = -(a_0/a_1)x + (1/a_1)(b_0 - b_1·a_0/a_1)u
y = x + (b_1/a_1)u      (5.5.3)
(Figure no. 5.5.2: block diagram of (5.5.3): the parallel connection of the gain b_1/a_1 and the first order block (b_0 - b_1·a_0/a_1)/(a_1 s + a_0) with output X.)
Figure no. 5.5.2.
For block 3 we can use any state realisation method (a companion canonical form, or the Jordan form pointing out the real and imaginary parts of the poles):
ẋ_1 = x_2
ẋ_2 = -(a_0/a_2)x_1 - (a_1/a_2)x_2 + (1/a_2)u      (5.5.4)
y = (b_0 - b_2·a_0/a_2)x_1 + (b_1 - b_2·a_1/a_2)x_2 + (b_2/a_2)u
For block 4 (the derivative element) we can choose the input as being x. In such a case this x will disappear if we want to get a minimal realisation (a minimum number of state variables).
(Figure no. 5.5.3: the derivative block s with input u = x and output y = ẋ.)
Figure no. 5.5.3.
Denote the inputs and outputs of the blocks by variables and write the algebraic equations between them.
Using the state equations determined above and the algebraic equations, eliminate all the intermediary variables and write the state equations in matrix form, considering as state vector the vector composed of all the components denoted in the block diagram.
(Figure no. 5.5.4 and Figure no. 5.5.5: example block diagram: a forward path of block 1, (s + 2)/(s + 4), followed by block 2, the integrator 1/s, with y = y_2; a feedback path through block 3, 1/(s + 3), plus a unity feedback from the output, so that u_1 = u - y_3 - y, u_2 = y_1, u_3 = y_1; block 1 is split, as in (5.5.2), into 1 and -2/(s + 4); the states x_1, x_2, x_3 are marked on the diagram.)
Figure no. 5.5.4. Figure no. 5.5.5.
1. ẋ_1 = -4x_1 - 2u_1 ,  y_1 = x_1 + u_1

2. X_2 = (1/s)·Y_1 ⇒ ẋ_2 = u_2 ,  u_2 = y_1

3. ẋ_3 = -3x_3 + u_3 ,  y_3 = x_3

y = x_2 ;  u_1 = u - y_3 - y ;  u_3 = y_1 ⇒

ẋ_1 = -4x_1 - 2(u - x_3 - x_2) ⇒ ẋ_1 = -4x_1 + 2x_2 + 2x_3 - 2u
ẋ_2 = x_1 + u - x_3 - x_2 ⇒ ẋ_2 = x_1 - x_2 - x_3 + u
ẋ_3 = -3x_3 + x_1 + u - x_3 - x_2 ⇒ ẋ_3 = x_1 - x_2 - 4x_3 + u

y = x_2 ,  x = [x_1; x_2; x_3] ⇒

ẋ = Ax + bu
y = c^T x

A = [-4 2 2; 1 -1 -1; 1 -1 -4] ,  b = [-2; 1; 1] ,  c = [0; 1; 0] ,  d = 0
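The matrices obtained from the diagram can be cross-checked against the block algebra; the following sketch (our own, assuming the blocks (s + 2)/(s + 4), 1/s and 1/(s + 3) as read from Fig. 5.5.5) compares c^T(sI - A)^{-1}b with the closed-loop transfer function at test points:

```python
import numpy as np

def H(A, b, c, s):
    """H(s) = c^T (sI - A)^{-1} b for the realisation built from the diagram."""
    return (c @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b))[0, 0]

A = np.array([[-4.0,  2.0,  2.0],
              [ 1.0, -1.0, -1.0],
              [ 1.0, -1.0, -4.0]])
b = np.array([[-2.0], [1.0], [1.0]])
c = np.array([[0.0, 1.0, 0.0]])

# Block algebra: y = (H1/s)*u1, u1 = u - y3 - y, y1 = s*y, y3 = y1/(s+3),
# so the feedback seen by the forward path Hd = H1/s is 1 + s/(s+3).
for s in (1.0 + 1j, 2.0 + 0j):
    H1 = (s + 2) / (s + 4)
    Hd = H1 / s
    Hcl = Hd / (1 + Hd * (1 + s / (s + 3)))
    print(abs(H(A, b, c, s) - Hcl) < 1e-9)   # True
```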
6. FREQUENCY DOMAIN SYSTEM ANALYSIS.
Frequency domain system analysis involves two main topics:
1. Experimental frequency characteristics,
2. Transfer function description in the frequency domain.
6.1. Experimental Frequency Characteristics.
In practical engineering the so-called experimental frequency characteristics, or just frequency characteristics, are frequently used. They can be obtained using some devices connected as in Fig. 6.1.1.
Let us consider a physical object with the input u_a and the output y_a. For example, this object can be an electronic amplifier, an electric motor, etc.
(Figure no. 6.1.1: a generator of sinusoidal signal drives the physical object with u_a; a two-way recorder records u_a and y_a.)
Figure no. 6.1.1.
Supposing the system is in a steady state expressed by the constant input-output pair (U_0, Y_0), we apply a sinusoidal input signal of period T,

u_a(t) = U_0 + U_m·sin ωt ,  ω = 2πf = 2π/T      (6.1.1)
We have to note that the Romanian term "pulsatie",

ω = 2πf = 2π/T ,  [ω] = sec^{-1} ,      (6.1.2)

is called in English by the general term "frequency".
The deviation of the physically applied signal u_a(t) with respect to the steady state value U_0 is

u(t) = u_a(t) - U_0 = U_m·sin(ωt) .      (6.1.3)
(Figure no. 6.1.2: records of u_a(t) and y_a(t): sinusoids of period T around the steady state values U_0 and Y_0 with amplitudes U_m and Y_m; the output crossings t_y0 are shifted from the input crossings t_u0 by Δt = t_y0 - t_u0, giving the phase φ = -ω·Δt.)
Figure no. 6.1.2.
The curves of the input-output response, obtained using a two-way recorder, look like in Fig. 6.1.2.
After a transient time period, when a so-called "permanent regime" is installed, the output response is a sinusoidal function with the same frequency ω but with another amplitude Y_m, shifted in time (with respect to the steady state values time crossings t_y0, t_u0) by a value Δt,

Δt = t_y0 - t_u0 .      (6.1.4)

y_a(t) = Y_0 + Y_m·sin[ω(t - Δt)] .      (6.1.5)

Let us interpret this time interval Δt as a phase φ with respect to the frequency ω,

φ = -ω·Δt      (6.1.6)

If Δt > 0 (the output is delayed with respect to the input, t_y0 > t_u0), the phase is negative, φ < 0, a "retard of phase"; if Δt < 0 (the output is in advance with respect to the input, t_y0 < t_u0), the phase is positive, φ > 0, an "advance of phase". The permanent response is written as,

y_a(t) = Y_0 + Y_m·sin(ωt + φ) ,  φ = -ω·Δt      (6.1.7)
Let us denote the deviation with respect to the steady state value Y_0 by,

y(t) = y_a(t) - Y_0 = Y_m·sin(ωt + φ) .      (6.1.8)
During one experiment, performed for one value of the frequency ω, we have to measure the amplitudes U_m, Y_m and the shifting time Δt. Two of the three measured values, Y_m and Δt, depend on this value of ω, and one can compute the ratio

A(ω) = Y_m/U_m      (6.1.9)

in the permanent regime at the frequency ω. We can also compute

φ(ω) = -ω·Δt      (6.1.10)

in the permanent regime at the frequency ω. Repeat this experiment for different values of ω > 0 and, if possible, for ω ≥ 0.
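The extraction of A(ω) and φ(ω) from one recorded experiment can be sketched numerically: a least-squares fit of the permanent-regime output on sin ωt and cos ωt recovers Y_m and φ. This is our own illustration, assuming for definiteness a first order object with A(ω) = 1/√(1 + ω²) and φ(ω) = -arctg ω:

```python
import numpy as np

# Simulated permanent-regime record at one frequency w
w = 2.0
Um = 1.5
A_true = 1.0 / np.sqrt(1.0 + w**2)
phi_true = -np.arctan(w)

t = np.linspace(0.0, 20.0, 4001)
u = Um * np.sin(w * t)
y = Um * A_true * np.sin(w * t + phi_true)

# Fit y ~ a*sin(wt) + b*cos(wt); then Ym = hypot(a, b), phi = atan2(b, a)
M = np.column_stack([np.sin(w * t), np.cos(w * t)])
a_c, b_c = np.linalg.lstsq(M, y, rcond=None)[0]
Ym = np.hypot(a_c, b_c)
phi = np.arctan2(b_c, a_c)
print(round(Ym / Um, 6), round(phi, 6))   # A(w) and phi(w)
```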
These two variables, A and φ, can be plotted (using the same frequency scale) for different values of ω, as in Fig. 6.1.3.
(Figure no. 6.1.3: the curves A(ω) and φ(ω) versus ω, with the measured points (ω_1, A_1, φ_1) and (ω_2, A_2, φ_2) marked.)
Figure no. 6.1.3.
The function A(ω) obtained in such a way is called the
Magnitude Characteristic (Magnitude-Frequency Characteristic, or Gain Characteristic):

A(ω) :  ω ∈ [0, ∞)      (6.1.11)

The function

φ(ω) :  ω ∈ [0, ∞)      (6.1.12)

is called the
Phase Characteristic (Phase-Angle Characteristic, Phase-Frequency Characteristic).
These experimental characteristics can completely describe all the system properties for linear systems, and some properties of non-linear systems.
In the above diagrams the frequency axis is on a linear scale, but in practice the so-called logarithmic scale is used.
Based on the magnitude and phase characteristics other characteristics can be obtained, like:
Real Frequency Characteristic:

P(ω) = A(ω)·cos[φ(ω)] ,  ω ∈ [0, ∞)      (6.1.13)

Imaginary Frequency Characteristic:

Q(ω) = A(ω)·sin[φ(ω)] ,  ω ∈ [0, ∞)      (6.1.14)

Complex Frequency Characteristic:
It is a polar representation of the pair (A(ω), φ(ω)), ω ∈ [0, ∞), or a Cartesian representation of (P(ω), Q(ω)):

P = P(ω) ;  Q = Q(ω) .      (6.1.15)

This complex characteristic is marked by an arrow showing the increasing direction of ω; it is also marked by the values of ω, as in Fig. 6.1.4.
The angle φ can be considered in the [0, 2π) circle or in other trigonometric circles like, for example, [-π, π) or (-2π, 0].
(Figure no. 6.1.4: the complex frequency characteristic in the (P, Q) plane: points at ω_1 and ω_2 with polar coordinates (A_1, φ_1), (A_2, φ_2) and Cartesian coordinates (P_1, Q_1), (P_2, Q_2).)
Figure no. 6.1.4.
6.2. Relations Between Experimental Frequency Characteristics and Transfer Function Attributes.
Let us consider a linear system with the transfer function

H(s) = Y(s)/U(s) = M(s) / [a_n·Π_{i=1}^{N} (s - λ_i)^{m_i}] ,  λ_i ∈ C ,      (6.2.1)

and an input

u(t) = u_a(t) - U_0 = U_m·sin(ωt) ⇒      (6.2.2)

U(s) = L{u(t)} = U_m·ω/(s^2 + ω^2) = ωU_m / [(s - jω)(s + jω)]      (6.2.3)
The system has N distinct poles; the pole λ_i has the order of multiplicity m_i, with Σ_{i=1}^{N} m_i = n. Among them N_1 poles are real and the others are 2N_2 complex poles, where N_1 + 2N_2 = n. The complex poles are

λ_i = σ_i + jω_i ,  σ_i = Re(λ_i) ,  e^{λ_i t} = e^{σ_i t}·e^{jω_i t} = e^{σ_i t}·[cos ω_i t + j·sin ω_i t]
The system response to this input, in zero initial conditions, is

Y(s) = L{y(t)} = L{y_a(t) - Y_0} = H(s)·U(s) .      (6.2.4)

Supposing that ω ≠ Im(λ_i), ∀i, that means that ω is not a resonant frequency, the output is:
y(t) = Σ Rez{ M(s)/[a_n·Π_{i=1}^{N}(s - λ_i)^{m_i}] · ωU_m/[(s - jω)(s + jω)] · e^{st} }

y(t) = [ Σ_{i=1}^{N_1} P_i(t)·e^{λ_i t} + Σ_{i=N_1+1}^{N_1+N_2} Ψ_i(t)·e^{σ_i t} ]·U_m + [ H(jω)·(ω/(2jω))·e^{jωt} + H(-jω)·(ω/(-2jω))·e^{-jωt} ]·U_m      (6.2.5)

The first bracket, r(t)·U_m, is the transient response; the second, y_p(t), is the permanent response.
The transient response is determined by the transfer function poles, and the permanent response is determined by the poles of the Laplace transform of the input.
The transient response will disappear when t → ∞ if the real parts of all the transfer function poles are negative, Re(λ_i) < 0.
This means that all the poles have to be located in the left half of the complex plane. This is the main stability criterion for linear systems.
The expression H(jω) is a complex number for which we can define,

A(ω) = |H(jω)| ,  φ(ω) = arg(H(jω))      (6.2.6)

H(jω) = A(ω)·e^{jφ(ω)} ,  H(-jω) = A(ω)·e^{-jφ(ω)}
In such a way:

y_p(t) = [ H(jω)·(ω/(2jω))·e^{jωt} + H(-jω)·(ω/(-2jω))·e^{-jωt} ]·U_m

y_p(t) = U_m·A(ω)·[ e^{j(ωt+φ)} - e^{-j(ωt+φ)} ]/(2j)

y_p(t) = A(ω)·U_m·sin(ωt + φ(ω)) ,  with Y_m = A(ω)·U_m ,  φ(ω) = -ωΔt ,  Δt = Δt(ω)      (6.2.7)
We can see that the permanent response is also a sinusoidal signal with the same frequency, but with a shifted phase.

A_exp(ω) = Y_m/U_m = A(ω) = |H(jω)|      (6.2.8)

That means the ratio between the experimental amplitudes, as defined in (6.1.9), is exactly the modulus of the complex expression H(jω) obtained replacing s by jω. At the same time the experimental phase, given by (6.1.10), is the argument of the same complex expression H(jω),

φ(ω) = -ωΔt|_exp = arg(H(jω))      (6.2.9)

There is a strong connection between the experimental frequency characteristics and the transfer function attributes: modulus and argument. Manipulating the transfer function attributes we can point out, in reverse order, what will be the shape of some experimental characteristics.
Having a transfer function H(s), we can determine its frequency description just by substituting (if possible) s by jω:

H(jω) = H(s)|_{s→jω}      (6.2.10)

From this theoretical expression we obtain,
Magnitude Frequency Characteristic:

A(ω) = |H(jω)|      (6.2.11)

Phase Frequency Characteristic:

φ(ω) = arg(H(jω))      (6.2.12)

Real Frequency Characteristic:

P(ω) = Re(H(jω))      (6.2.13)

Imaginary Frequency Characteristic:

Q(ω) = Im(H(jω))      (6.2.14)

We remember that,

H(jω) = A(ω)·e^{jφ(ω)} = P(ω) + jQ(ω)      (6.2.15)

A(ω) = √( P^2(ω) + Q^2(ω) )      (6.2.16)
φ(ω) = { arctg[Q(ω)/P(ω)] ,       P(ω) > 0
         arctg[Q(ω)/P(ω)] + π ,   P(ω) < 0
         π/2 ,                    P(ω) = 0, Q(ω) > 0
         -π/2 ,                   P(ω) = 0, Q(ω) < 0 } ,  φ ∈ (-π, π]      (6.2.17)

The complex frequency characteristic is the polar plot of the pair (A(ω), φ(ω)) or the Cartesian plot of the pair (P(ω), Q(ω)).
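The computation of the four characteristics from H(jω) can be sketched as follows (our own illustration, with an arbitrarily chosen first order H; the two-argument arctangent atan2 performs the quadrant case analysis of (6.2.17)):

```python
import math

def freq_attributes(H, w):
    """Return A(w), phi(w), P(w), Q(w) from H(jw)."""
    Hjw = H(1j * w)
    A = abs(Hjw)
    phi = math.atan2(Hjw.imag, Hjw.real)   # phi in (-pi, pi]
    return A, phi, Hjw.real, Hjw.imag

# Illustration (our own choice of system): H(s) = 1/(s + 1)
H = lambda s: 1.0 / (s + 1.0)
A, phi, P, Q = freq_attributes(H, 1.0)
print(round(A, 6), round(phi, 6))   # 0.707107 -0.785398
```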
6.3. Logarithmic Frequency Characteristics.
6.3.1. Definition of Logarithmic Characteristics.
To better point out the properties of a system, the frequency characteristics are plotted on the so-called logarithmic scale, with respect to frequency and amplitude.
The logarithmic scale for the variable ω is just the linear scale for the variable x = lg ω, as illustrated in Fig. 6.3.1.
(Figure no. 6.3.1: the linear scale x = lg ω versus the logarithmic scale ω = 10^x: the values ω = 0.01, 0.1, 1, 10, 100 map to x = -2, -1, 0, 1, 2, with intermediate marks lg(2) = 0.30103, lg(3) = 0.477121, lg(8) = 0.90309, lg(20) = 1 + lg(2) = 1.30103; each decade in ω covers one unit in x.)
Figure no. 6.3.1.
Example.

ω = 2 ⇒ x = lg 2 = 0.30103
ω = 3 ⇒ x = lg 3 = 0.477121
ω = 0.2 ⇒ x = lg 0.2 = -1 + lg 2 = -1 + 0.30103 = -0.69897

ω = 0 will be plotted at x = -∞.
A decade is a frequency interval [ω_1, ω_2] such that ω_2/ω_1 = 10.
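The mapping between the two scales can be checked with a few lines of Python (our own sketch):

```python
import math

# Positions x = lg(w) on the logarithmic frequency scale (as in the example)
for w in (2.0, 3.0, 0.2):
    print(w, round(math.log10(w), 5))
# 2.0 0.30103 ; 3.0 0.47712 ; 0.2 -0.69897

# A decade [w1, w2] satisfies w2/w1 = 10, i.e. exactly one unit on the x axis
w1, w2 = 0.5, 5.0
print(round(math.log10(w2) - math.log10(w1), 10))   # 1.0
```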
On the logarithmic scale we do not have a linear space and we are not able to add or perform operations as in the usual analysis, but in the linear space "X" all the mathematical operations on x which are defined in linear spaces are possible.
The values of the magnitude characteristic A(ω) can also be plotted on a logarithmic scale.
Frequently, for the values of the magnitude characteristic, a linear scale L(ω) expressed in decibels (dB) is utilised, where

L(ω) = 20·lg(A(ω)) .      (6.3.1)

If we have a value L dB, then the corresponding magnitude value A is

A = 10^{L/20} .      (6.3.2)
Decibel units are utilised, for example, to measure the sound intensity,

I_s = 20·lg(P/P_0)      (6.3.3)

I_s - sound intensity expressed in dB;
P - pressure on the ear;
P_0 - minimum pressure that produces a feeling of sound.
Bode Diagram (Bode Plot).
The Bode diagram is the pair formed by the magnitude characteristic A(ω), represented on a logarithmic scale (or L(ω), represented on a linear scale in dB), and the phase characteristic φ(ω), represented on a linear scale, both versus the same logarithmic scale in ω, as depicted in Fig. 6.3.2.
(Figure no. 6.3.2: Bode diagram: the magnitude frequency characteristic A(ω) on a logarithmic scale, equivalently L(ω) in dB on a linear scale from -40 to 60 dB, and the phase frequency characteristic φ(ω) on a linear scale from -π to π, both versus ω from 0.01 to 1000 on a logarithmic scale.)
Figure no. 6.3.2.
6.3.2. Asymptotic Approximations of Frequency Characteristics.
The magnitude and phase characteristics are approximated by their asymptotes with respect to the linear variable x. Such characteristics are called asymptotic frequency characteristics.
6.3.2.1. Asymptotic Approximations of the Magnitude Frequency Characteristic for a First Degree Complex Variable Polynomial.
Let us consider a first degree complex polynomial

H(s) = Ts + 1 .      (6.3.4)

The exact magnitude frequency characteristic of this polynomial is

H(jω) = jωT + 1 ⇒ A(ω) = √((ωT)^2 + 1) ,  L(ω) = 20·lg[A(ω)] .      (6.3.5)
It is approximated by

A(ω) = √((ωT)^2 + 1) ≈ { 1 ,   ωT < 1
                         ωT ,  ωT ≥ 1 } = A_a(ω)

and

L(ω) = 20·lg[A(ω)] ≈ { 0 ,           ωT < 1
                       20·lg[ωT] ,   ωT ≥ 1 } = L_a(ω) .      (6.3.6)
Let us call ωT the so-called normalised frequency, where ω = 2πf. Note that ω is also called, in English, frequency, which corresponds to the Romanian "pulsatie", and f is called frequency both in English and Romanian.
As 2π is a nondimensional number, both ω and f are measured in [sec]^{-1}, representing the natural frequency, and T, called the time constant, is measured in [sec]. So the normalised frequency ωT is a nondimensional number.
Frequently some items of the frequency characteristics are represented in normalised frequency. The recovery of their shape in the natural frequency is a matter of frequency scale gradation, as depicted in Fig. 6.3.3.
(Figure no. 6.3.3: the same logarithmic scale graded in normalised frequency, ωT from 0.01 to 100, and in natural frequency, ω from 0.01/T to 100/T; one decade in ωT corresponds to one decade in ω.)
Figure no. 6.3.3.
The two approximating branches from (6.3.6) are the horizontal and oblique asymptotes of the corresponding function F(x) of the variable x in the linear space X,

F(x) = L(ω)|_{ωT→10^x} = 20·lg( √(10^{2x} + 1) ) ,      (6.3.7)

lg ωT = x ⇒ ωT = 10^x .      (6.3.8)

The horizontal asymptote is,

y = lim_{x→-∞} [F(x)] = 0 .      (6.3.9)

The oblique asymptote is y = mx + n,

m = lim_{x→∞} F(x)/x = 20 ,  n = lim_{x→∞} [F(x) - mx] = 0 .      (6.3.10)

So, the two asymptotes in the linear space are

y = 20x ,  for x → ∞      (6.3.11)

y = 0 ,  for x → -∞ .      (6.3.12)
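The quality of this asymptotic approximation can be checked numerically; the maximum error, at the corner ωT = 1, is 20·lg √2 ≈ 3 dB (our own sketch):

```python
import math

def L_exact(wT):
    """Exact magnitude of Ts + 1 in dB, as in (6.3.5)."""
    return 20.0 * math.log10(math.sqrt(wT**2 + 1.0))

def L_asym(wT):
    """Asymptotic magnitude in dB, as in (6.3.6)."""
    return 0.0 if wT < 1.0 else 20.0 * math.log10(wT)

# Maximum error of the asymptotic magnitude occurs at the corner wT = 1
print(round(L_exact(1.0) - L_asym(1.0), 2))    # 3.01 dB

# One decade above the corner the oblique asymptote is already close
print(round(L_exact(10.0) - L_asym(10.0), 3))  # 0.043 dB
```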
The slopes of the asymptotes can also be expressed on the logarithmic scale as a number of decibels over a decade, dB/dec.
When the linear variable x increases by 1 unit in the linear space X = R, the variable ωT (or ω) on the logarithmic scale increases 10 times (it covers a decade):

x = lg(ωT) :  x_2 - x_1 = 1 ⇒ lg(ω_2 T) - lg(ω_1 T) = 1 ⇒ ω_2 T/ω_1 T = 10 ⇒ ω_2/ω_1 = 10 .

The slope in the linear space is

m = (y_2 - y_1)/(x_2 - x_1)

which can be interpreted as the variation

m = y_2 - y_1  if  x_2 - x_1 = 1 ⇔ ω_2 T/ω_1 T = 10 ⇔ one decade.

But

y_2 = 20·lg(A(ω_2)) = L(ω_2) ,  y_1 = 20·lg(A(ω_1)) = L(ω_1) ,

so the slope on the logarithmic scale for the magnitude characteristic is expressed as

m = y_2 - y_1 = 20·lg(A(ω_2)) - 20·lg(A(ω_1)) = L(ω_2) - L(ω_1) dB  if  ω_2 T/ω_1 T = ω_2/ω_1 = 10 ⇔ one decade,

which allows us to express the slope as m [dB/dec].
For (6.3.10) the slope is m = 20 dB/dec.
The exact and asymptotic frequency characteristics of (6.3.5), (6.3.6) are depicted in Fig. 6.3.4.
(Figure no. 6.3.4: exact and asymptotic magnitude frequency characteristics of Ts + 1 versus ωT on a logarithmic scale: the horizontal asymptote y = 0 (slope 0 dB/dec) for ωT < 1 and the oblique asymptote y = mx + n with slope m = 20 dB/dec for ωT ≥ 1; the maximum error between the exact curve and the asymptotes is 3 dB, at ωT = 1.)
Figure no. 6.3.4.
6.3.2.2. Asymptotic Approximations of the Phase Frequency Characteristic for a First Degree Complex Variable Polynomial.
Let us consider a first degree complex polynomial

H(s) = Ts + 1 .      (6.3.13)

The exact phase frequency characteristic of this polynomial is

φ(ω) = arg(jωT + 1) = arctg(ωT) .      (6.3.14)

It is approximated by

φ(ω) = arctg(ωT) ≈ { 0 ,                         ωT < 0.2
                     (ln 10/2)·lg(ωT) + π/4 ,    ωT ∈ [0.2, 5]
                     π/2 ,                       ωT > 5 } = φ_a(ω) ,  ω ∈ [0, ∞)      (6.3.15)
This approximation consists of three lines:
- the two horizontal asymptotes of the phase frequency characteristic φ(ω) in the linear space of the variable x = lg(ωT), evaluated on the function

G(x) = φ(ω)|_{ωT→10^x} = arctg(10^x)

x → -∞ ⇔ ω → 0 ⇒ G(x) → 0
x → +∞ ⇔ ω → ∞ ⇒ G(x) → π/2

- a line having the slope of φ(ω) at the particular point ωT = 1 ⇔ x = 0, a slope which is evaluated for the function G(x) by the derivative

G'(x) = 10^x·ln 10/(10^{2x} + 1) ⇒ G'(0) = ln 10/2 .

Both φ(ω) and φ_a(ω) are depicted in Fig. 6.3.5.
(Figure no. 6.3.5: exact and asymptotic phase frequency characteristics of Ts + 1 versus ωT on a logarithmic scale: the horizontal asymptotes 0 and π/2 and the oblique line y = (ln 10/2)x + π/4 between ω_1 T = 0.2 (x_1 = -0.69897) and ω_2 T = 5 (x_2 = 0.69897); the maximum error is about 6 degrees.)
Figure no. 6.3.5.
Other asymptotic approximations will be presented later, when we discuss the elementary frequency characteristics.
6.4. Elementary Frequency Characteristics.
Elementary frequency characteristics are used to draw the frequency characteristics of any transfer function. There are six such elementary frequency characteristics, coming out from the factorisation of the transfer function polynomials.
They are also called frequency characteristics of typical transfer functions, or of typical elements.
For each element A(ω), φ(ω), L(ω) are evaluated and their asymptotic counterparts are deduced, accompanied by Bode diagrams.
6.4.1. Proportional Element.

H(s) = K      (6.4.1)

A(ω) = |K|      (6.4.2)

φ(ω) = { 0 ,  K ≥ 0
         π ,  K < 0 }      (6.4.3)

L(ω) = 20·lg|K|      (6.4.4)

Its Bode diagram is depicted in Fig. 6.4.1.
Its Bode diagram is depicted in Fig. 6.4.1.
ω
ϕ( )
ω
0 0.1 1 10 100 0.01
0
−π/2
π/2
π
If K>=0
If K<0
0.1 1 10 100 0 0.01
ω
40
30
20
10
0
10
20
1
10
100
0.1
L( ) ω A( ) ω dB
20lg( K )
K
Figure no. 6.4.1.
The complex frequency characteristic is a single point in the (P, Q) plane: P(ω) = K, Q(ω) = 0, ∀ω.
6.4.2. Integral Type Element.

H(s) = 1/s^α ,  α ∈ Z      (6.4.5)

If α = 1, the system is a pure simple integrator;
if α = 2, the system is a double pure integrator;
if α = -1, the system is a pure derivative element, H(s) = s.

s → jω ⇒ H(jω) = 1/(j^α·ω^α) ,  ω > 0 ,  α ∈ Z

A(ω) = 1/ω^α ,  ω > 0      (6.4.6)

L(ω) = -20α·lg ω      (6.4.7)

φ(ω) = -α·π/2      (6.4.8)
(Figure no. 6.4.2: Bode diagrams of integral type elements for α = 1, 2, -1, -2: magnitude lines of slope -20α dB/dec, i.e. -20, -40, 20 and 40 dB/dec, and constant phases -απ/2.)
Figure no. 6.4.2.
The complex frequency characteristics in the (P, Q) plane are:
horizontal lines if α = 2k, k ∈ Z ⇒ P(ω) = (-1)^k/ω^{2k} ,  Q(ω) = 0 ;
vertical lines if α = 2k + 1, k ∈ Z ⇒ P(ω) = 0 ,  Q(ω) = -(-1)^k/ω^{2k+1} .
6.4.3. First Degree Polynomial Element.
This element has a transfer function with one real zero (PD element). A PD element means a Proportional-Derivative element.

H(s) = Ts + 1 ;  Y(s) = (Ts + 1)U(s) ;  y(t) = Tu̇ + u      (6.4.9)

H(s) = T(s + z) ,  z = 1/T

s → jω ⇒ H(jω) = 1 + j(ωT) = P(ω) + jQ(ω)

A(ω) = |H(jω)| = √(1 + (ωT)^2)      (6.4.10)

L(ω) = 20·lg √(1 + (ωT)^2)      (6.4.11)

L_a(ω) = the asymptotic approximation of the magnitude characteristic;
φ_a(ω) = the asymptotic approximation of the phase characteristic.

L_a(ω) = { 0 ,           ωT < 1
           20·lg ωT ,    ωT ≥ 1 }      (6.4.12)

φ(ω) = arctg(ωT)      (6.4.13)

φ_a(ω) = { 0 ,                         ωT < 0.2
           (ln 10/2)·lg(ωT) + π/4 ,    ωT ∈ [0.2, 5]
           π/2 ,                       ωT > 5 }      (6.4.14)

P(ω) = Re(H(jω)) = 1      (6.4.15)

Q(ω) = Im(H(jω)) = ωT      (6.4.16)

The Bode characteristics are represented in Fig. 6.4.3.
(Figure no. 6.4.3: Bode diagram of Ts + 1: asymptotic magnitude with slope +20 dB/dec above ωT = 1, maximum magnitude error 3 dB; asymptotic phase φ_a(ω) between ωT = 0.2 and ωT = 5, maximum phase error about 6 degrees.)
Figure no. 6.4.3.
6. FREQUENCY DOMAIN SYSTEM ANALYSIS. 6.4. Elementary Frequency Characteristics.
152
The complex frequency characteristic in the (P, Q) plane is depicted in Fig. 6.4.4.
(Figure no. 6.4.4: the complex frequency characteristic of Ts + 1: the vertical line P = 1 in the (P, Q) plane, starting at (1, 0) for ω = 0 and going upward for ω → ∞; at a marked point ω = ω_1, A(ω_1) is the distance to the origin and φ(ω_1) the angle of H(jω_1).)
Figure no. 6.4.4.
From the Bode and complex frequency characteristics the following observations can be drawn:
1. If the frequency increases, the output, as a sinusoidal signal in time, will be in advance with respect to the input.
2. When ω → ∞, the output will be in advance with respect to the input by a time interval of T/4.
3. For the breaking point ωT = 1 ⇔ ω = 1/T, the output, as a sinusoidal signal in time, will be T/8 in advance, which corresponds to a phase of π/4.
6.4.4. Second Degree Polynomial Element with Complex Roots.
This element has a transfer function with only two complex zeros,
(6.4.17) H(s) = T²s² + 2ξTs + 1, ξ ∈ (0, 1), 1/T = ω_n
where: ω_n is the natural frequency; ξ is the damping factor.
(6.4.18) H(jω) = (1 − (ωT)²) + j(2ξωT)
(6.4.19) P(ω) = 1 − (ωT)² ; Q(ω) = 2ξωT
(6.4.20) A(ω) = |H(jω)| = √((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.21) L(ω) = 20·lg A(ω)
(6.4.22) ϕ ∈ [−π/2, 3π/2)
(6.4.23) ϕ(ω) = { arctg(2ξTω/(1 − (Tω)²)), ωT < 1 ; π/2, ωT = 1 ; π + arctg(2ξTω/(1 − (Tω)²)), ωT > 1 }
a. The asymptotic magnitude characteristic.
The asymptotic approximations are:
(6.4.24) L(ω) = 20·lg √((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.25) x = lg(ωT) ⇒ ωT = 10^x
(6.4.26) F(x) = L(ω)|ωT=10^x = 20·lg √((1 − 10^(2x))² + 4ξ²·10^(2x))
When x → −∞ (ω → 0) ⇒ F(−∞) = 0, which means a horizontal asymptote.
When x → +∞,
m = lim(x→∞) F(x)/x = 40 ; n = lim(x→∞) [F(x) − mx] = 0,
so an oblique asymptote exists,
y = 40x.
(6.4.27) L_a(ω) = { 0, ωT < 1 ; 40·lg(ωT), ωT ≥ 1 }
(6.4.28) A(ω) ≈ A_a(ω) = { 1, ωT < 1 ; (ωT)², ωT ≥ 1 } ; L_a(ω) = 20·lg A_a(ω)
The exact frequency characteristics depend on the damping factor ξ, so there is a family of characteristics.
(6.4.29) A(ω) = √((1 − (ωT)²)² + 4ξ²(ωT)²) = √((1 − (ω/ω_n)²)² + 4ξ²(ω/ω_n)²)
We can find the resonant frequency by setting the derivative A'(ω) to zero with respect to ω:
dA(ω)/dω = 0 ⇒
(6.4.30) ω_rez = (1/T)·√(1 − 2ξ²)
A resonance frequency exists if 0 < ξ < √2/2.
The resonant magnitude, A_m = A_rez, is
(6.4.31) A_m = 2ξ·√(1 − ξ²) = A(ω_rez)
where ω_rez is the resonance frequency.
(6.4.32) A(ω_n) = A(1/T) = 2ξ
(6.4.33) L(1/T) = L(ω_n) = 20·lg(2ξ)
If ξ = 0.5 ⇒ L(1/T) = 0.
A(ω) = 1 ⇒ ω = ω_t, where ω_t is the crossing frequency,
(6.4.34) ω_t = √2·ω_rez
All these allow us to draw the asymptotic magnitude characteristic and the
family of the exact characteristics as in Fig. 6.4.5.
[Figure residue removed: asymptotic magnitude characteristic with slope +40 dB/dec above ωT = 1 and the family of exact curves A(ω), L(ω) for ξ = 0, 0 < ξ < 1/2, ξ = 1/2, 1/2 < ξ < √2/2, ξ = √2/2 and ξ > √2/2, marking A(ω_n), A_m at ω_rez·T and the crossing point ω_t·T.]
Figure no. 6.4.5.
Example: H(s) = 100s² + 2s + 1, i.e. T = 10, ξ = 0.1, ω_n = 0.1.
L(ω_n) = 20·lg(2ξ) = 20·lg(0.2) = −13.9794
ω_rez = (1/T)·√(1 − 2ξ²) = 0.1·√(1 − 2·0.01) = 0.09899
A_m = A_rez = A(ω_rez) = 2ξ·√(1 − ξ²) = 0.19899
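A minimal numerical re-derivation of these example values (plain Python; note that 20 lg 0.2 ≈ −13.98 and √0.98 ≈ 0.98995):

```python
import math

T, xi = 10.0, 0.1     # values from the example

def A(w):
    """Exact magnitude (6.4.29) of H(s) = T^2 s^2 + 2*xi*T*s + 1."""
    wT = w * T
    return math.sqrt((1 - wT ** 2) ** 2 + 4 * xi ** 2 * wT ** 2)

L_wn = 20 * math.log10(2 * xi)                  # (6.4.33)
w_rez = (1 / T) * math.sqrt(1 - 2 * xi ** 2)    # (6.4.30)
A_m = 2 * xi * math.sqrt(1 - xi ** 2)           # (6.4.31)
print(round(L_wn, 4), round(w_rez, 5))
assert abs(A(w_rez) - A_m) < 1e-12              # (6.4.31) agrees with (6.4.29)
```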
[Figure residue removed: Matlab Bode diagram of the example, magnitude L(ω) over frequency (1/sec) showing the notch near ω_rez and the value L(ω_n), and phase ϕ(ω) rising from 0 to 180 degrees.]
Figure no. 6.4.6.
The crossing frequency is obtained by solving the equation
A = √((1 − (ωT)²)² + 4ξ²(ωT)²) = 1
We note y = (ωT)² ⇒ (1 − y)² + 4ξ²y = 1 ⇒ { y = 0 ; y = 2(1 − 2ξ²) }
(ωT)² = 2(1 − 2ξ²) ⇒ ω_t = ω_c = (1/T)·√(2(1 − 2ξ²))
b. The asymptotic phase characteristic.
We said that:
(6.4.35) ϕ(ω) = { arctg(2ξωT/(1 − (ωT)²)), ωT < 1 ; π/2, ωT = 1 ; π + arctg(2ξωT/(1 − (ωT)²)), ωT > 1 }
Denoting,
(6.4.36) g(x) = arctg(2ξωT/(1 − (ωT)²))|ωT=10^x = arctg(2ξ·10^x/(1 − 10^(2x)))
we have G(x) = ϕ(ω)|ωT=10^x as a function of x, in a linear space X, with three branches,
(6.4.37) G(x) = { arctg(2ξ·10^x/(1 − 10^(2x))), x < 0 ; π/2, x = 0 ; π + arctg(2ξ·10^x/(1 − 10^(2x))), x > 0 }
and then it is approximated by three lines: two horizontal asymptotes and one oblique line.
The asymptotes:
x → −∞ ⇒ G(x) → 0
x → +∞ ⇒ G(x) → π
The oblique line: y = G'(0)·x + π/2, where G'(0) = ln 10/ξ.
The asymptotic phase ϕ_a will be approximated between two points by a straight line passing through x = 0, ϕ = π/2:
y = (ln 10/ξ)·x + π/2 = 0 ⇒ x₁ = −(π/2)·(ξ/ln 10) ⇒ ω₁T = 10^x₁
y = (ln 10/ξ)·x + π/2 = π ⇒ x₂ = (π/2)·(ξ/ln 10) ⇒ ω₂T = 10^x₂
(6.4.38) ϕ_a(ω) = { 0, ωT < ω₁T ; π/2 + (ln 10/ξ)·lg(ωT), ωT ∈ [ω₁T, ω₂T] ; π, ωT > ω₂T }
[Figure residue removed: exact phase ϕ(ω) and asymptotic phase ϕ_a(ω) versus ωT, with the oblique segment of slope ln(10)/ξ through point A at (ωT = 1, ϕ = π/2), meeting ϕ = 0 at point B (ωT = ω₁T) and ϕ = π at point D (ωT = ω₂T).]
Figure no. 6.4.7.
Algorithm for drawing the asymptotic phase:
- mark A [ωT = 1; ϕ = π/2];
- mark C [ωT = 10; ϕ = π/2 + ln 10/ξ];
- connect A and C; the intersections of this line with ϕ = 0 (point B) and ϕ = π (point D) determine ω₁T and ω₂T respectively.
The exact phase characteristic then has the segments AB and AD as semitangents.
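The construction of ω₁T and ω₂T can be sketched numerically (the helper name is illustrative; it assumes the oblique asymptote of slope ln 10/ξ per decade through the point ωT = 1, ϕ = π/2):

```python
import math

def phase_breakpoints(xi):
    """Intersections of the oblique phase asymptote (slope ln(10)/xi per
    decade, through point A: wT = 1, phi = pi/2) with phi = 0 (point B)
    and phi = pi (point D)."""
    x1 = -(math.pi / 2) * xi / math.log(10)
    x2 = +(math.pi / 2) * xi / math.log(10)
    return 10 ** x1, 10 ** x2          # omega_1*T and omega_2*T

w1T, w2T = phase_breakpoints(0.1)
print(round(w1T, 4), round(w2T, 4))    # → 0.8546 1.1701
```

Note how small a damping factor squeezes the transition: for ξ = 0.1 the whole 0-to-π phase rise is approximated within roughly a seventh of a decade.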
Example: T = 10, ξ = 0.1,
ln 10/ξ = 2.3026/0.1 = 23.026 rad = 7.33π.
The complex frequency characteristics, ∀ξ > 0, in the plane (P,Q), are obtained by eliminating ω in (6.4.19),
(6.4.39) P = 1 − Q²/(4ξ²), Q ≥ 0.
It is represented in Fig. 6.4.8.
[Figure residue removed: complex frequency characteristic of the second degree polynomial element, a parabola starting at H(j0) = 1 (ω = 0, ϕ = 0), crossing P = 0 at Q = 2ξ for ω = 1/T, and opening toward P → −∞ as ω → ∞; a generic point marks A(ω₁) and ϕ(ω₁).]
Figure no. 6.4.8.
6.4.5. Aperiodic Element. Transfer Function with one Real Pole.
(6.4.40) H(s) = 1/(Ts + 1) = (1/T)/(s + p)
The pole is −p = −1/T.
(6.4.41) H(jω) = 1/(1 + jωT) = 1/(1 + (ωT)²) − j·ωT/(1 + (ωT)²)
(6.4.42) P(ω) = 1/(1 + (ωT)²) ; Q(ω) = −ωT/(1 + (ωT)²)
(6.4.43) A(ω) = 1/√(1 + (ωT)²)
(6.4.44) L(ω) = −20·lg √(1 + (ωT)²)
(6.4.45) L_a(ω) = { 0, ωT < 1 ; −20·lg(ωT), ωT ≥ 1 }
(6.4.46) ϕ(ω) = −arctg(ωT)
(6.4.47) ϕ_a(ω) = { 0, ωT < 0.2 ; −π/4 − (ln 10/2)·lg(ωT), ωT ∈ [0.2, 5] ; −π/2, ωT > 5 }
The Bode plot is depicted in Fig. 6.4.9. We observe that the slope of the asymptotic characteristic breaks downward by 20 dB/dec at the breaking point.
[Figure residue removed: Bode plot of the aperiodic element; magnitude A(ω), L(ω) with asymptote of slope −20 dB/dec above ωT = 1 and maximum error 3 dB at the breaking point; phase ϕ(ω) from 0 to −π/2 with asymptote ϕ_a(ω) between ωT = 0.2 and ωT = 5 and maximum error about 6 degrees.]
Figure no. 6.4.9.
The complex frequency characteristic, in the (P,Q) plane, obtained by eliminating ω in (6.4.42), is expressed by the equation (6.4.48),
(6.4.48) P² + Q² − P = 0, Q ≤ 0.
It is represented in Fig. 6.4.10.
[Figure residue removed: complex frequency characteristic of the aperiodic element, the lower half of the circle of radius 1/2 centred at (1/2, 0), from H(j0) = 1 (ω = 0, ϕ = 0) through the point at ω = 1/T to the origin as ω → ∞.]
Figure no. 6.4.10.
6.4.6. Oscillatory Element. Transfer Function with two Complex Poles.
(6.4.49) H(s) = 1/(T²s² + 2ξTs + 1) = ω_n²/(s² + 2ξω_n·s + ω_n²) ; ω_n = 1/T
where ω_n is the natural frequency and ξ ∈ (0, 1) is the damping factor.
(6.4.50) H(jω) = 1/((1 − (ωT)²) + j2ξωT) = ((1 − (ωT)²) − j2ξωT)/((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.51) A(ω) = 1/√((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.52) L(ω) = −20·lg √((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.53) ϕ(ω) = { −arctg(2ξωT/(1 − (ωT)²)), ωT < 1 ; −π/2, ωT = 1 ; −π − arctg(2ξωT/(1 − (ωT)²)), ωT > 1 }
(6.4.54) P(ω) = (1 − (ωT)²)/((1 − (ωT)²)² + 4ξ²(ωT)²) ; Q(ω) = −2ξωT/((1 − (ωT)²)² + 4ξ²(ωT)²)
(6.4.55) ω_rez = ω_n·√(1 − 2ξ²), where 0 < ξ < √2/2
(6.4.56) A_m = A_rez = A(ω_rez) = 1/(2ξ·√(1 − ξ²))
(6.4.57) A(ω_n) = A(1/T) = 1/(2ξ) ; L(ω_n) = L(1/T) = −20·lg(2ξ)
(6.4.58) ω_c = ω_t = √2·ω_rez
a. The asymptotic magnitude characteristic.
(6.4.59) A(ω) ≈ A_a(ω) = { 1, ωT < 1 ; 1/(ωT)², ωT ≥ 1 }
(6.4.60) L_a(ω) = { 0, ωT < 1 ; −40·lg(ωT), ωT ≥ 1 }
[Figure residue removed: asymptotic magnitude characteristic of the oscillatory element with slope −40 dB/dec above ωT = 1 and the family of exact curves for ξ = 0, 0 < ξ < 1/2, ξ = 1/2, 1/2 < ξ < √2/2, ξ = √2/2 and ξ > √2/2, marking A(ω_n), the peak A_m at ω_rez·T and the crossing point ω_t·T.]
Figure no. 6.4.11.
b. The asymptotic phase characteristic.
(6.4.61) ϕ_a(ω) = { 0, ωT < ω₁T ; −π/2 − (ln 10/ξ)·lg(ωT), ωT ∈ [ω₁T, ω₂T] ; −π, ωT > ω₂T }
[Figure residue removed: exact phase ϕ(ω) and asymptotic phase ϕ_a(ω) of the oscillatory element versus ωT, falling from 0 to −π, with the oblique segment of slope −ln(10)/ξ through point A at (ωT = 1, ϕ = −π/2), meeting ϕ = 0 at point B (ω₁T) and ϕ = −π at point D (ω₂T).]
Figure no. 6.4.12.
The complex frequency characteristics, in the plane (P,Q), are obtained by eliminating ωT in (6.4.54). They look as in Fig. 6.4.13.
If ξ ≥ √2/2 the magnitude A decreases monotonically but, if ξ ∈ (0, √2/2), a resonance appears at ω_res, with ϕ(ω_res) = ϕ_res ∈ (−π/2, 0).
[Figure residue removed: complex frequency characteristics of the oscillatory element for 0 < ξ < √2/2 and ξ ≥ √2/2, starting at H(j0) = 1 (ω = 0), crossing the imaginary axis at Q = −1/(2ξ) for ω = 1/T, and approaching the origin as ω → ∞; for 0 < ξ < √2/2 the curve bulges to magnitude A(ω_res) at phase ϕ(ω_res) ∈ (−π/2, 0).]
Figure no. 6.4.13.
Example.
H(s) = 1/(100s² + 2s + 1) = ω_n²/(s² + 2ξω_n·s + ω_n²) ; T = 10, ω_n = 0.1
2ξT = 2 ⇔ ξ = 2/(2T) = 0.1 ; 20·lg(1/(2ξ)) = 20·lg(1/0.2) = 13.98
A_m = 1/(2ξ·√(1 − ξ²)) = 5.03 ; L_m = 20·lg(5.03) = 14.02
ω_rez = ω_n·√(1 − 2ξ²) = 0.09899
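The example values can be recomputed in a few lines of Python (a minimal sketch using (6.4.55)-(6.4.57)):

```python
import math

xi, T = 0.1, 10.0     # values from the example
w_n = 1 / T

A_m = 1 / (2 * xi * math.sqrt(1 - xi ** 2))   # (6.4.56) resonance peak
L_m = 20 * math.log10(A_m)
w_rez = w_n * math.sqrt(1 - 2 * xi ** 2)      # (6.4.55)
print(round(A_m, 4), round(L_m, 2), round(w_rez, 5))   # → 5.0252 14.02 0.09899
```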
[Figure residue removed: Matlab Bode diagram of the example; magnitude L(ω) with slope −40 dB/dec at high frequencies and the resonance peak near ω_rez, and phase ϕ(ω) falling from 0 to −180 degrees between ω₁ and ω₂.]
Figure no. 6.4.14.
6.5. Frequency Characteristics for Series Connection of Systems.
6.5.1. General Aspects.
Suppose that a system is expressed by a cascade of q transfer functions,
(6.5.1) H(s) = H₁(s)·H₂(s)···H_q(s)
whose component frequency characteristics are known,
(6.5.2) A_k(ω) = |H_k(jω)|, k = 1 : q
(6.5.3) ϕ_k(ω) = arg(H_k(jω)), k = 1 : q.
The magnitude characteristic of the series interconnected system is the product of the components' magnitudes,
(6.5.4) A(ω) = Π(k=1:q) A_k(ω),
but the logarithmic magnitude and phase characteristics of the series interconnected system are the sums of the components' characteristics,
(6.5.5) L(ω) = Σ(k=1:q) L_k(ω)
(6.5.6) ϕ(ω) = Σ(k=1:q) ϕ_k(ω)
Also the asymptotic logarithmic magnitude and phase characteristics of the series interconnected system are the sums of the components' asymptotic characteristics,
(6.5.7) L_a(ω) = Σ(k=1:q) L_k^a(ω)
(6.5.8) ϕ_a(ω) = Σ(k=1:q) ϕ_k^a(ω)
These relationships are used for plotting the characteristics of any transfer function.
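Relation (6.5.5) can be verified numerically; the sketch below uses three hypothetical components (an amplifier, an integrator and an aperiodic element) and compares the dB magnitude of their cascade with the sum of the individual dB magnitudes:

```python
import math

# Hypothetical series connection of three elementary components:
components = [lambda s: 50,                  # amplifier
              lambda s: 1 / s,               # integrator
              lambda s: 1 / (5 * s + 1)]     # aperiodic element

def H(s):
    """The cascade H1*H2*H3, per (6.5.1)."""
    prod = 1
    for Hk in components:
        prod *= Hk(s)
    return prod

def L(Hk, w):
    """Logarithmic magnitude of one factor at frequency w, in dB."""
    return 20 * math.log10(abs(Hk(complex(0, w))))

w = 0.7
assert abs(L(H, w) - sum(L(Hk, w) for Hk in components)) < 1e-9   # (6.5.5)
print("L of the product equals the sum of the component L_k")
```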
A transfer function can be factorised at the numerator and denominator by using time constants as in (6.5.9), in which the free term of any polynomial equals 1,
(6.5.9) H(s) = K·[Π(i=1:m₁)(θᵢs + 1) · Π(i=m₁+1:m₁+m₂)(θᵢ²s² + 2ζᵢθᵢs + 1)] / [s^α · Π(i=1:n₁)(Tᵢs + 1) · Π(i=n₁+1:n₁+n₂)(Tᵢ²s² + 2ξᵢTᵢs + 1)]
where n₁ + n₂ = n, m₁ + m₂ = m, α ∈ Z.
Based on such a factorisation, any complicated transfer function is interpreted as a series connection of the 6 types of elementary transfer functions,
(6.5.10) H(s) = Π(i=1:p) H_i(s) = Π(i=1:p) H_i^(k_i)(s), k_i ∈ [1, 6],
where H_i(s) = H_i^(k_i)(s) is one of the already studied elementary transfer functions, of the type indicated, if necessary, by the superscript k_i.
The time constants factorisation reveals a factor K which is called "the general amplifier factor" of the transfer function.
The general amplifier factor K can be determined before the time constants factorisation, using the formula,
(6.5.11) K = lim(s→0) s^α·H(s),
where α ∈ Z indicates the number of poles of the transfer function H(s) placed at the s-plane origin if α > 0, or the number of its zeros placed at the s-plane origin if α < 0. For particular values of α, K has the following names:
α = 0: K = K_p, the position amplifier factor;
α = 1: K = K_v, the speed amplifier factor; (6.5.12)
α = 2: K = K_a, the acceleration amplifier factor.
These general amplifier factors express the ratio between the steady state values of the output y(∞) (for K_p), of the output first derivative ẏ(∞) (for K_v), of the output second derivative ÿ(∞) (for K_a), and the steady state value of the input u(∞):
(6.5.13) K_p = lim(s→0) H(s) = lim(t→∞) y(t)/u(t) = y(∞)/u(∞)
(6.5.14) K_v = lim(s→0) s·H(s) = lim(t→∞) ẏ(t)/u(t) = ẏ(∞)/u(∞)
(6.5.15) K_a = lim(s→0) s²·H(s) = lim(t→∞) ÿ(t)/u(t) = ÿ(∞)/u(∞)
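As an illustration of (6.5.14), the limit can be approximated numerically by evaluating s·H(s) at a very small s; the transfer function below is hypothetical, chosen so that K_v is known in advance:

```python
# Hypothetical transfer function with one pole at the origin (alpha = 1):
#   H(s) = 2 / (s*(s + 1))  =>  K_v = lim_{s->0} s*H(s) = 2.
def H(s):
    return 2 / (s * (s + 1))

s = 1e-6               # a small real s approximates the limit s -> 0
K_v = s * H(s)         # (6.5.14)
print(round(K_v, 4))   # → 2.0
```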
If the transfer function is factorised by poles and zeros we have the form (6.5.16), in which the main term of any polynomial equals 1,
(6.5.16) H(s) = B·[Π(i=1:m₁)(s + zᵢ) · Π(i=m₁+1:m₁+m₂)(s² + 2ζᵢΩᵢs + Ωᵢ²)] / [s^α · Π(i=1:n₁)(s + pᵢ) · Π(i=n₁+1:n₁+n₂)(s² + 2ξᵢωᵢs + ωᵢ²)]
where
(6.5.17) zᵢ = 1/θᵢ ⇒ ζᵢ = −zᵢ are the real zeros;
(6.5.18) pᵢ = 1/Tᵢ ⇒ λᵢ = −pᵢ are the real poles.
Sometimes we also call pᵢ a "pole" and zᵢ a "zero".
The factorisation by poles and zeros of a transfer function is also called the "zpk" factorisation. Here B is only a coefficient, which has nothing to do with the system amplification.
For fast Bode characteristics drawing it is useful to consider the time constants factorisation (6.5.9) split into two main factors,
(6.5.19) H(s) = R(s)·G(s)
where
(6.5.20) R(s) = K/s^α
incorporates the essential behaviour at low frequencies, mainly for ω → 0.
We define,
(6.5.21) A_R(ω) = |R(jω)| = |K|·ω^(−α)
(6.5.22) L_R(ω) = 20·lg(A_R(ω)) = 20·lg|K| − α·20·lg(ω)
(6.5.23) ϕ_R(ω) = arg(R(jω)) = { −α·π/2, if K ≥ 0 ; −α·π/2 − π, if K < 0 }
The second factor G(s),
(6.5.24) G(s) = [Π(i=1:m₁)(θᵢs + 1) · Π(i=m₁+1:m₁+m₂)(θᵢ²s² + 2ζᵢθᵢs + 1)] / [Π(i=1:n₁)(Tᵢs + 1) · Π(i=n₁+1:n₁+n₂)(Tᵢ²s² + 2ξᵢTᵢs + 1)]
has no effect on the frequency characteristics of the series connection when ω → 0. Indeed,
(6.5.25) G(0) = 1, lim(ω→0) |G(jω)| = 1, lim(ω→0) arg(G(jω)) = 0.
Example 6.5.1.1. Types of Factorisations.
Let us consider the transfer function (6.5.26),
(6.5.26) H(s) = 10·(s + 10)(20s + 1) / [s(5s + 1)(s + 2)].
Even if it is factorised, it fits none of the above factorisation types. The factor 10 in front of the fraction tells us nothing.
Very easily we can determine, directly from H(s), the two elements of R(s): α = 1 is observed by a simple inspection, and K = K_v is evaluated using (6.5.14),
K = K_v = lim(s→0) s·H(s) = 10·(0 + 10)(0 + 1)/[(0 + 1)(0 + 2)] = 50 [y]/(sec·[u]).
which determines,
(6.5.28) L
R
(ω) · 20lg( K ) −α ⋅ 20lg(ω) · 20lg(50) − 20lg(ω)
. (6.5.29) ϕ
R
(ω) · arg(R(jω)) · −π/2
It is important to remark that H(s) has, beside the pole s=0 , other two
simple real poles and two simple real zeros, denoted using the conventions
(6.5.17), (6.5.18) as:
p
0
=0; p
1
=0.2; p
2
=2; z
1
=0.05; z
2
=10; (6.5.30)
We can write the expression of G(s) as
(6.5.31) G(s) ·
(0.1s + 1)(20s + 1)
(5s + 1)(0.5s + 1)
which accomplishes the condition G(0)=1.
We shall use these results in the next paragraph regarding the Bode
diagram construction. For the same system, . K
p
· ∞, K
a
· 0
The two types of factorisation for H(s) are:
- time constants factorisation
H(s) = 50·(0.1s + 1)(20s + 1) / [s(5s + 1)(0.5s + 1)],
- "zpk" factorisation
H(s) = 40·(s + 10)(s + 0.05) / [s(s + 0.2)(s + 2)].
As a ratio of two polynomials, useless in frequency characteristics construction, the transfer function is,
H(s) = (200s² + 2010s + 100) / (5s³ + 11s² + 2s).
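The three forms can be checked against each other numerically (a minimal sketch; any mismatch would indicate an algebra error in the factorisation):

```python
# The three forms of (6.5.26) evaluated at a few test frequencies:
def H_original(s):
    return 10 * (s + 10) * (20 * s + 1) / (s * (5 * s + 1) * (s + 2))

def H_time_constants(s):
    return 50 * (0.1 * s + 1) * (20 * s + 1) / (s * (5 * s + 1) * (0.5 * s + 1))

def H_zpk(s):
    return 40 * (s + 10) * (s + 0.05) / (s * (s + 0.2) * (s + 2))

for w in (0.01, 0.1, 1.0, 10.0):
    s = complex(0, w)
    ref = H_original(s)
    assert abs(ref - H_time_constants(s)) < 1e-9 * abs(ref)
    assert abs(ref - H_zpk(s)) < 1e-9 * abs(ref)
print("all three factorisations agree")
```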
6.5.2. Bode Diagrams Construction Procedures.
The Bode diagram, for complicated transfer functions, can be plotted rather easily by using two methods:
1. Bode diagram construction by components.
2. Direct Bode diagram construction.
6.5.2.1. Bode Diagram Construction by Components.
This method is based on using the elementary frequency characteristics (prototype characteristics) presented in Ch. 6.4 as components of a transfer function, according to the relations (6.5.1)...(6.5.8).
The following steps are recommended:
1. Realise the time constants factorisation of the transfer function.
2. Identify the elementary transfer functions of the six types.
3. Plot the asymptotic characteristics, magnitude and phase, for the prototypes.
4. The asymptotic characteristic of the transfer function results by adding, point by point, the asymptotic characteristics of the components.
5. To obtain the real characteristics, make corrections at the breaking points.
The values of the magnitude characteristics are added on the linear scale of L(ω), expressed in dB, even if finally we can use the logarithmic scale of A(ω) for their marking.
If a pole or a zero, real or complex, has order of multiplicity mᵢ, then it will be considered as determining mᵢ separate elementary frequency characteristics, even if they are all the same characteristic.
Example 6.5.2.1. Examples of Bode Diagram Construction by Components.
Let us construct the Bode diagram (Bode plot) of the transfer function analysed in Ex. 6.5.1.1, given by (6.5.26),
(6.5.26) H(s) = 10·(s + 10)(20s + 1) / [s(5s + 1)(s + 2)]
The factorised time constants form is interpreted as a series connection of 6 elementary elements,
H(s) = 50·(0.1s + 1)(20s + 1) / [s(5s + 1)(0.5s + 1)] = H₁(s)·H₂(s)·H₃(s)·H₄(s)·H₅(s)·H₆(s)
where H₁(s) = 50 (type 1); H₂(s) = 1/s (type 2); H₃(s) = 1/(5s + 1) (type 5); H₄(s) = 1/(0.5s + 1) (type 5); H₅(s) = 0.1s + 1 (type 3); H₆(s) = 20s + 1 (type 3).
[Figure residue removed: component asymptotic magnitude characteristics L₁ᵃ...L₆ᵃ and phase characteristics ϕ₂ᵃ...ϕ₆ᵃ, with breaking points at ω = 0.05, 0.2, 2 and 10, and their point-by-point sums L(ω) and ϕ(ω).]
Figure no. 6.5.1.
We denoted by Lᵢᵃ, ϕᵢᵃ the asymptotic characteristics of Hᵢ, i = 1:6.
Let us now consider a transfer function,
H(s) = (100s + 1000) / (100s² + s).
The time constants factorisation is performed as below,
H(s) = 1000·(0.1s + 1) / [s(100s + 1)] = H₁·H₂·H₃·H₄
where H₁ = 1000, H₂(s) = 1/s, H₃(s) = 0.1s + 1, H₄(s) = 1/(100s + 1).
|H₁(jω)| = 1000 ; L₁ = 20·lg 10³ = 60 dB.
H₂(s) = 1/s ⇒ H₂(jω) = 1/(jω) ;
A₂(ω) = |H₂(jω)| = 1/ω ; L₂ = 20·lg A₂(ω) = 20·lg ω^(−1) = −20·lg ω = −20x, with x = lg ω.
H₃(s) = 0.1s + 1 ; H₄(s) = 1/(100s + 1).
The Bode diagram drawn using Matlab is depicted in Fig. 6.5.2.
[Figure residue removed: Matlab Bode diagram of H(s) = (100s + 1000)/(100s² + s), magnitude L(ω) and phase ϕ(ω) over ω = 10⁻⁴ ... 10².]
Figure no. 6.5.2.
To reveal the influence of the Bode diagram shape on the time response, two transfer functions are considered,
H₁(s) = 1/(100s² + s + 1) and H₂(s) = 1/(100s² + 5s + 1).
Their Bode diagrams and step responses are depicted in Fig. 6.5.3.
[Figure residue removed: Bode diagrams and step responses of H1(s) = 1/(100s² + s + 1) and H2(s) = 1/(100s² + 5s + 1); H1, with the sharper resonance peak, shows the more oscillatory step response.]
Figure no. 6.5.3.
6.5.2.2. Direct Bode Diagram Construction.
The Bode diagrams can be directly plotted considering the following observations regarding the asymptotic behaviour:
1. A real zero ζᵢ = −zᵢ < 0 determines, at the breaking point ω = zᵢ > 0, a change of the slope of the asymptotic magnitude characteristic by +20 dB/dec (up-breaking point of 20 dB/dec).
2. A real pole λᵢ = −pᵢ < 0 determines, at the breaking point ω = pᵢ > 0, a change of the slope of the asymptotic magnitude characteristic by −20 dB/dec (down-breaking point of 20 dB/dec).
3. A complex pair of zeros, s = ζ(i1,i2), Re(ζ(i1,i2)) < 0, solutions of the equation
Tᵢ²s² + 2ξᵢTᵢs + 1 = 0, ξᵢ ∈ (0, 1), ω(n,i) = 1/Tᵢ,
determines, at the breaking point ω = ω(n,i) > 0, a change of the slope of the asymptotic magnitude characteristic by +40 dB/dec (up-breaking point of 40 dB/dec).
4. A complex pair of poles, s = λ(k1,k2), Re(λ(k1,k2)) < 0, solutions of the equation
T_k²s² + 2ξ_kT_ks + 1 = 0, ξ_k ∈ (0, 1), ω(n,k) = 1/T_k,
determines, at the breaking point ω = ω(n,k) > 0, a change of the slope of the asymptotic magnitude characteristic by −40 dB/dec (down-breaking point of 40 dB/dec).
5. For ∀ω < min(zᵢ, pᵢ, ω(n,i), ω(n,k)), ∀i, k, the asymptotic behaviour is determined only by the term R(s) from (6.5.20).
The following steps are recommended:
1. Evaluate the poles and zeros (zᵢ, pᵢ, ω(n,i), ω(n,k)), ∀i, k, and place them on the plotting sheet on a logarithmic scale. In such a way we determine the system frequency bandwidth of interest.
2. Mark each zero by a small circle and each pole by a small cross. Complex zeros/poles are marked by double circles/crosses.
3. Choose a starting frequency ω₀ inside the system frequency bandwidth.
4. Evaluate a starting point M = M(ω₀, L_R0) for the asymptotic magnitude characteristic, where L_R0 is obtained from (6.5.22),
(6.5.27) L_R0 = L_R(ω₀) = 20·lg(A_R(ω₀)) = 20·lg|K| − α·20·lg(ω₀).
5. If possible choose ω₀ = 1 ⇒ L_R0 = 20·lg|K|.
6. Draw a straight line through the point M having the slope equal to −(α·20) dB/dec, until the first breaking point abscissa is reached.
7. Keep drawing segments of straight lines between two consecutive breaking points, each with slope equal to the previous slope plus the slope change determined by the left-side breaking point.
8. The same procedure can be applied for the phase characteristic, but we must take into consideration that the asymptotic phase characteristic keeps the same slope in each frequency interval
(6.5.28) ω ∈ [ωᵢ/5, 5·ωᵢ], ωᵢ = zᵢ, pᵢ, ω_ni.
Example.
Let us draw the magnitude characteristic only, for the system of Ex. 6.5.2.1.
[Figure residue removed: direct construction of L(ω) for the example. Starting point M at ω₀ = 0.004 with L_R0 = 20·lg(50) − 1·20·lg(0.004) = 33.979 + 47.959 = 81.938 dB; initial slope S1 = −(α·20) = −20 dB/dec; at ω = 0.05 (zero) S2 = S1 + 20 = 0 dB/dec; at ω = 0.2 (pole) S3 = S2 − 20 = −20 dB/dec; at ω = 2 (pole) S4 = S3 − 20 = −40 dB/dec; at ω = 10 (zero) S5 = S4 + 20 = −20 dB/dec.]
Figure no. 6.5.4.
If we choose, for example, ω₀ = 0.004, we compute
L_R0 = L_R(ω₀) = 20·lg|K| − α·20·lg(ω₀) = 20·lg(50) − 1·20·lg(0.004) = 33.9794 + 47.9588 = 81.9382 dB
M = M(0.004, 81.94).
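The same computation in a few lines of Python, together with the slope accumulation of step 7 for the breaking points of Ex. 6.5.1.1 (zeros at 0.05 and 10, poles at 0.2 and 2); names are illustrative only:

```python
import math

K, alpha = 50, 1
w0 = 0.004
L_R0 = 20 * math.log10(abs(K)) - alpha * 20 * math.log10(w0)   # (6.5.27)
print(round(L_R0, 4))   # → 81.9382

# Slope accumulation (step 7): start at -(alpha*20) dB/dec, then add
# +20 dB/dec per real zero and -20 dB/dec per real pole at each break.
breaks = [(0.05, +20), (0.2, -20), (2.0, -20), (10.0, +20)]
slope, slopes = -(alpha * 20), []
for w, change in breaks:
    slope += change
    slopes.append(slope)
print(slopes)   # → [0, -20, -40, -20]
```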
7. SYSTEM STABILITY
7.1. Problem Statement.
The stability of a system is one of its most important properties. Intuitively, we can say that a system is stable if it remains at rest in the absence of external disturbances that affect its behaviour, and if it returns to rest once all external causes are removed. There are several definitions of system stability.
One of them is the so-called bounded input - bounded output (BIBO) stability: if the input is bounded, then the output remains bounded too.
In addition, a system is asymptotically stable if it is bounded input - bounded output stable and the output goes to a steady state. We can say that the system is stable if the transient response disappears and finally only the permanent regime settles down. This is called external stability.
There is also the so-called internal stability, which takes into account the transient response with respect to the components of the state vector. Internal stability involves a description by state equations.
Widely used, mainly for nonlinear systems, is the so-called stability in the sense of Lyapunov, which is a continuity condition of the state trajectory with respect to the initial state. There is a large theory regarding Lyapunov stability.
For linear time invariant systems the external stability is determined by the poles of the transfer function,
(7.1.1) H(s) = M(s)/L(s),
that means the roots of the denominator:
(7.1.2) L(s) = 0 ⇒ s = λᵢ,
which must be placed in the left half complex plane, Re(λᵢ) < 0.
The internal stability of a system, expressed by state equations, is determined by the eigenvalues of the system matrix A, that means the roots of the characteristic polynomial,
(7.1.3) ∆(s) = det(sI − A) = 0 ⇒ s = λᵢ.
A system can be externally stable but not internally stable.
If the system is completely observable and controllable, both types of stability are equivalent, so we shall discuss the stability of completely observable and completely controllable systems.
The transfer function denominator polynomial
(7.1.4) L(s) = a_n·s^n + a_(n−1)·s^(n−1) + ... + a₁·s + a₀
could be the characteristic polynomial
(7.1.5) ∆(s) = a_n·s^n + a_(n−1)·s^(n−1) + ... + a₁·s + a₀.
So-called stability criteria are used, which express necessary and sufficient conditions for stability.
Having a polynomial L(s) (it could be ∆(s)), we call it a stable polynomial (or Hurwitz polynomial) if the system having that polynomial as the denominator of the transfer function (or as characteristic polynomial) is stable. There are two types of criteria:
- algebraic criteria, which manage the polynomial coefficients or roots;
- frequency stability criteria, which manage the frequency characteristic of the system.
7.2. Algebraical Stability Criteria.
7.2.1. Necessary Condition for Stability.
A necessary condition for the stability of a polynomial L(s) (or ∆(s)) is that all the polynomial coefficients have the same sign and no coefficient is missing (all of them ≠ 0).
Example: L(s) = s² − 2s + 3 is an unstable polynomial.
7.2.2. Fundamental Stability Criterion.
A system having L(s) or ∆(s) as stability polynomial is stable if and only if all the poles (eigenvalues) are placed in the left half of the complex plane and, if there are poles (eigenvalues) on the imaginary axis, they have to be simple poles (or simple eigenvalues).
If λᵢ is a pole then Re(λᵢ) < 0, or, if Re(λᵢ) = 0, then λᵢ is a simple pole (eigenvalue). A system is asymptotically stable if all the poles (eigenvalues) are placed in the left half of the s-plane, that is Re(λᵢ) < 0, ∀i.
As we saw when we discussed the experimental frequency characteristics, the transient response is
(7.2.1) y_tr(t) = Σ(i=1:N) pᵢ(t)·e^((Re(λᵢ))·t)
where λᵢ are the eigenvalues of the system matrix A.
(7.2.2) If Re(λᵢ) < 0, ∀i, then lim(t→∞) y_tr(t) = 0.
There are two very important algebraic stability criteria:
- the Hurwitz stability criterion;
- the Routh stability criterion.
There are two very important algebraically stability criteria:
Hurwitz stability criterion.
Routh stability criterion.
7.2.3. Hurwitz Stability Criterion.
Let us consider a polynomial,
(7.2.3) L(s) = a_n·s^n + a_(n−1)·s^(n−1) + ... + a₁·s + a₀, a_n ≠ 0.
It is assumed that the first coefficient a_n is positive (a_n > 0). The so-called Hurwitz determinants ∆_k are constructed, where ∆_n is an n×n determinant,
(7.2.4) ∆_n = det of the n×n array:
row 1: a_(n−1) a_(n−3) a_(n−5) a_(n−7) ... 0
row 2: a_n a_(n−2) a_(n−4) a_(n−6) ... 0
row 3: 0 a_(n−1) a_(n−3) a_(n−5) ... 0
row 4: 0 a_n a_(n−2) a_(n−4) ... 0
... ... ... ... ... ...
row n: 0 0 0 ... 0 a₀
On the main diagonal we write a_(n−i), i = 1, ..., n. The subscripts decrease by two at a time toward the right side and increase toward the left side. If a subscript overtakes n or becomes negative, the element is considered zero.
∆_k is the determinant built from ∆_n using its first k rows and k columns.
The polynomial L(s), with positive main coefficient a_n, is a Hurwitz polynomial or a stable polynomial, which means all its roots lie in the left half of the s-plane, if and only if all the determinants ∆_k are positive (∆_k > 0, ∀k = 1, ..., n).
If a polynomial is stable, then
(7.2.5) L*(s) = ã₀·s^n + ã₁·s^(n−1) + ... + ã_n
is also stable, where ã_k = a_(n−k).
Example:
L(s) = s³ + 3s² + 2s + 5; n = 3;
∆₃ = det [ 3 5 0 ; 1 2 0 ; 0 3 5 ]
∆₁ = 3 > 0 ; ∆₂ = 3·2 − 5·1 = 1 > 0 ; ∆₃ = 5·∆₂ = 5 > 0,
so L(s) is a stable polynomial.
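A small computational sketch of the criterion (the matrix-building rule follows (7.2.4); `hurwitz_determinants` is a hypothetical helper name, using exact rational arithmetic):

```python
from fractions import Fraction

def hurwitz_determinants(coeffs):
    """coeffs = [a_n, ..., a_0]. Returns [D1, ..., Dn], the leading principal
    minors of the Hurwitz matrix built as in (7.2.4): entry (i, j), 1-indexed,
    is a_{n - 2j + i}, taken as zero when the subscript leaves 0..n."""
    n = len(coeffs) - 1
    a = {n - idx: Fraction(c) for idx, c in enumerate(coeffs)}
    M = [[a.get(n - 2 * (j + 1) + (i + 1), Fraction(0)) for j in range(n)]
         for i in range(n)]

    def det(m):
        # Laplace expansion along the first row -- fine for small n.
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))

    return [det([row[:k] for row in M[:k]]) for k in range(1, n + 1)]

# L(s) = s^3 + 3s^2 + 2s + 5 from the example above:
print([int(d) for d in hurwitz_determinants([1, 3, 2, 5])])   # → [3, 1, 5]
```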
7.2.4. Routh Stability Criterion.
7.2.4.1. Routh Table.
The Routh stability criterion, also called the Routh-Hurwitz criterion, is similar to the Hurwitz criterion but with the coefficients arranged in a table, called the "Routh table", which requires evaluating several determinants of maximum order 2×2 only.
Let us consider a polynomial of degree n,
(7.2.6) L(s) = a_n·s^n + a_(n−1)·s^(n−1) + ... + a₁·s + a₀, a_n ≠ 0.
The Routh table, also called "Routh's array" or "Routh's tabulation", associated to this polynomial is illustrated in Fig. 7.2.1.
row 1 (s^n): a_n, a_(n−2), a_(n−4), ...
row 2 (s^(n−1)): a_(n−1), a_(n−3), a_(n−5), ...
row 3 (s^(n−2)): r_(3,1), r_(3,2), ..., r_(3,j), ...
... ... ...
row i (s^(n−i+1)): r_(i,1), r_(i,2), ..., r_(i,j), ...
... ... ...
row n+1 (s^0): r_(n+1,1), 0, 0, ...
Figure no. 7.2.1.
It contains n + 1 rows and a number of columns marked by indexes. For each row the power of the variable s is denoted in decreasing order. At the beginning we fill in the first two rows. If the subscripts become negative, the table is filled in with 0.
The algorithm for evaluating the entries of the array is based on a 2×2 determinant; for the general element r_(i,j), i ≥ 3, it is
(7.2.7) r_(i,j) = −(1/r_(i−1,1))·det [ r_(i−2,1) r_(i−2,j+1) ; r_(i−1,1) r_(i−1,j+1) ], if r_(i−1,1) ≠ 0.
The table is continued horizontally and vertically until only zeros are obtained.
The system is asymptotically stable if and only if all the elements of the first column of the Routh table have the same sign and none of them is zero.
The number of poles (roots of the equation L(s) = 0) in the right half-plane is equal to the number of sign changes in the first column of the Routh table.
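The table-filling rule (7.2.7) can be sketched in Python (a minimal implementation that ignores the special cases treated in Section 7.2.4.2; names are illustrative):

```python
from fractions import Fraction

def routh_first_column(coeffs):
    """Build the Routh table for L(s) = a_n s^n + ... + a_0 (coeffs given from
    a_n down to a_0) and return its first column. Ignores the special cases
    (zero pivot, zero row) of Section 7.2.4.2."""
    n = len(coeffs) - 1
    table = [[Fraction(c) for c in coeffs[0::2]],
             [Fraction(c) for c in coeffs[1::2]]]
    for i in range(2, n + 1):
        up, lo = table[i - 2][:], table[i - 1][:]
        width = max(len(up), len(lo))
        up += [Fraction(0)] * (width - len(up))
        lo += [Fraction(0)] * (width - len(lo))
        # (7.2.7): r_{i,j} = -(1/r_{i-1,1}) * det [ r_{i-2,1}  r_{i-2,j+1} ]
        #                                         [ r_{i-1,1}  r_{i-1,j+1} ]
        new = [-(up[0] * lo[j + 1] - up[j + 1] * lo[0]) / lo[0]
               for j in range(width - 1)]
        table.append(new if new else [Fraction(0)])
    return [row[0] for row in table]

# (s + 1)^2 (s + 3) = s^3 + 5s^2 + 7s + 3 -- a hypothetical stable example:
col = routh_first_column([1, 5, 7, 3])
print(all(c > 0 for c in col))   # → True
```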
7.2.4.2. Special Cases in Routh Table.
a. One element of the first column of the Routh table is zero.
In this case the computation cannot be performed. There are two ways out of this case:
a.1. Replace that zero element by a letter, say ε, supposing ε > 0, and keep on computing the other elements as functions of ε. To determine the stability of the system (the sign of the first column elements), the limits lim(ε→0, ε>0) r_(i,1)(ε) have to be determined.
a.2. If we are at the beginning of filling in the Routh table, replace the polynomial by its reciprocal polynomial and build the Routh table for the reciprocal polynomial.
The polynomial
L(s) = a_n·s^n + ... + a₁·s + a₀
is replaced by
(7.2.8) L̃(s) = a₀·s^n + a₁·s^(n−1) + ... + a_(n−1)·s + a_n,
and the Routh table is constructed for this reciprocal polynomial.
b. All the elements of a row are zero.
Suppose
(7.2.9) r_(i+1,k) = 0, k = 1, 2, ...
Then this part of the table looks like,
(7.2.10) row i (s^(n−i+1)): r_(i,1), r_(i,2), r_(i,3), ... ; row i+1 (s^(n−i)): 0, 0, 0, ...
To keep on filling in the table, we construct the auxiliary polynomial using the elements of the above nonzero row; its degree equals the degree attached to that nonzero row (in this case n−i+1), and the degrees decrease by two at a time.
The auxiliary polynomial is:
(7.2.11) U(s) = r_(i,1)·s^(n−i+1) + r_(i,2)·s^(n−i−1) + r_(i,3)·s^(n−i−3) + ...
Compute the derivative of this polynomial:
(7.2.12) U'(s) = (n−i+1)·r_(i,1)·s^(n−i) + (n−i−1)·r_(i,2)·s^(n−i−2) + (n−i−3)·r_(i,3)·s^(n−i−4) + ...
and denote its coefficients by r̃_(i,1) = (n−i+1)·r_(i,1), r̃_(i,2) = (n−i−1)·r_(i,2), r̃_(i,3) = (n−i−3)·r_(i,3), ...
Use the coefficients of the derivative of the auxiliary polynomial as elements of the zero row and keep on computing the other elements.
The roots of the auxiliary polynomial are symmetrically disposed in the s-plane with respect to the origin. These roots could be a zero root, pure imaginary roots, real roots or complex roots. If the polynomial has real coefficients, then its complex roots appear as conjugated pairs.
It can be proved that a polynomial with a row of zeros in the Routh table has the auxiliary polynomial as a common factor:
(7.2.13) L(s) = L₁(s)·U(s)
The Routh criterion is useful because it asks only for many determinants of the second order, whereas the Hurwitz criterion asks for higher order determinants.
Example. Let us consider a polynomial containing two real parameters, p, ω:
L(s) = (s² + ω²)(s + p)
L(s) = s³ + p·s² + ω²·s + p·ω²
The Routh table is,
row 1 (s³): 1, ω², 0
row 2 (s²): p, p·ω², 0
row 3 (s¹): 0, 0, 0 (later replaced by 2p, 0, 0)
row 4 (s⁰): p·ω²
r_(3,1) = −(1/p)·det [ 1 ω² ; p p·ω² ] = −(1/p)·(p·ω² − p·ω²) = 0 ; r_(3,2) = −(1/p)·det [ 1 0 ; p 0 ] = 0.
We observe that all the elements of the third row, i = 3, are zero. The auxiliary polynomial U(s) is
U(s) = p s^2 + p ω^2 s^(2-2) = p (s^2 + ω^2)
L(s) = L_1(s) U(s) ;  U'(s) = 2 p s + 0

r_41 = -(1/(2p)) | p     p ω^2 |
                 | 2p    0     |  = p ω^2

We can determine L_1(s), which is the factor without symmetrical roots.
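The zero-row procedure can be sketched in code. The following is our own minimal illustration (the helper name routh_first_column and the tolerance eps are ours, not from the book; the single-zero-pivot case a. is not handled): it builds the first column of the Routh table, replacing an all-zero row by the coefficients of the derivative U'(s) of the auxiliary polynomial. Applied to L(s) = (s^2 + 1)(s + 2) = s^3 + 2s^2 + s + 2, i.e. p = 2, ω = 1:

```python
def routh_first_column(coeffs, eps=1e-12):
    """First column of the Routh table; coeffs are given highest degree first.
    An all-zero row is replaced by the derivative of the auxiliary polynomial
    built from the row above (degrees decreasing by two at a time)."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    row_a = list(coeffs[0::2]) + [0.0] * (width - len(coeffs[0::2]))
    row_b = list(coeffs[1::2]) + [0.0] * (width - len(coeffs[1::2]))
    first = [row_a[0]]
    for j in range(1, n + 1):          # row j is attached to degree n - j
        if all(abs(x) < eps for x in row_b):
            d = n - j + 1              # degree of the auxiliary polynomial
            row_b = [(d - 2 * m) * row_a[m] for m in range(width)]
        first.append(row_b[0])
        nxt = [-(row_a[0] * row_b[m + 1] - row_b[0] * row_a[m + 1]) / row_b[0]
               for m in range(width - 1)] + [0.0]
        row_a, row_b = row_b, nxt
    return first

col = routh_first_column([1, 2, 1, 2])   # L(s) = (s^2 + 1)(s + 2)
```

The first column comes out 1, 2, 4, 2: no sign changes, so no roots in the right half plane; the zero row signalled the symmetric roots ±j of U(s) = 2s^2 + 2 on the imaginary axis.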
Example 7.2.1. Stability Analysis of a Feedback System.
Suppose we have a feedback system, also called a "closed loop system", described by the block diagram from Fig. 7.2.2.

[Figure no. 7.2.2. Closed loop system: the error e = v - y drives the controller H_R(s) = k (s+1)/s, followed by the plant H_F(s) = 1/((s+2)(s+3)), with unity negative feedback from y to the summing point.]

Let us denote by H_d(s) = H_R(s) H_F(s) the open loop transfer function:
H_d(s) = k (s+1) / [s (s^2 + 5s + 6)] = M(s)/N(s)
The closed loop transfer function is
H_v(s) = Y(s)/V(s) = H_d(s) / (1 + H_d(s)) = M(s) / (N(s) + M(s)) = M(s)/L(s)
where
L(s) = s^3 + 5s^2 + 6s + ks + k = s^3 + 5s^2 + (6+k)s + k
is the characteristic polynomial of the closed loop system.
We analyse the stability using the Routh table:

       1             2      3
s^3 :  1             k+6    0
s^2 :  5             k      0
s^1 :  (4k+30)/5     0      0
s^0 :  k             0      0

r_31 = -(1/5) | 1    k+6 |
              | 5    k   |  = -(k - 5k - 30)/5 = (4k+30)/5

r_41 = -(5/(4k+30)) | 5            k |
                    | (4k+30)/5    0 |  = k

The Routh stability criterion asks for
4k + 30 > 0  and  k > 0  ⇒  k > 0
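The condition can be checked numerically with a sketch of our own (the helper name routh_ok is ours): the first column of the table is 1, 5, (4k+30)/5, k, and all its entries are positive exactly when k > 0.

```python
def routh_ok(k):
    """First column of the Routh table for L(s) = s^3 + 5 s^2 + (6+k) s + k."""
    col = [1.0, 5.0, (4 * k + 30) / 5.0, float(k)]
    return all(c > 0 for c in col)

# scan a range of integer gains: only k = 1..8 pass in [-8, 8]
stable_range = [k for k in range(-8, 9) if routh_ok(k)]
```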
7.3. Frequency Stability Criteria.
There are several criteria that give necessary and sufficient conditions for the stability of systems based on the shape of frequency characteristics.
The algebraic criteria are very sensitive because they are based on coefficients: someone has to tell us what the coefficients of the transfer function are, which is a difficult and risky problem for real systems. The complex frequency characteristic, on the other hand, can be determined experimentally, and the Nyquist criterion can be applied by analysing the shape of this characteristic without any other mathematical representation.
7.3.1. Nyquist Stability Criterion.
It is related to a closed loop system having in the open loop the transfer function H_d(s).

[Figure no. 7.3.1. Unity negative feedback around H_d(s): input v, output y; the closed loop system is H_v(s).]

The transfer function of this closed loop system, denoted H_v(s), is
H_v(s) = H_d(s) / (1 + H_d(s)) .   (7.3.1)
The following notations are utilised:
A_d(ω) = |H_d(jω)|   (7.3.2)
φ_d(ω) = arg H_d(jω)   (7.3.3)
Based on the complex frequency characteristic of H_d(jω), the stability of the closed loop system is analysed.
One very simple form of the Nyquist criterion is:
If the open loop system H_d(s) is stable, then the closed loop system having the transfer function H_v(s), (7.3.1), is asymptotically stable if and only if the complex frequency characteristic H_d(jω) leaves the critical point (-1, j0) on the left side when ω increases from 0 to ∞.
The Nyquist criterion can also be developed for systems which are open loop unstable, or for systems having open loop resonance frequencies (poles of H_d(s) on the imaginary axis). These extensions are based on the Cauchy principle of the argument.
In such a case, instead of the complex frequency characteristic H_d(jω) determined for real ω ∈ [0, ∞), the so called Nyquist characteristic H_d^N of H_d(s) is utilised.
The Nyquist characteristic H^N of a transfer function H(s) is the image, through the function H(s), of the so called Nyquist contour N. The Nyquist contour N is a closed, clockwise oriented contour in the complex plane s which contains all of the right half complex plane except the poles placed on the imaginary axis. These poles are surrounded by circles whose radius approaches zero.
If we realise the feedback connection around H_d(s) so that the Nyquist criterion is respected, then the resulting closed loop system will be an asymptotically stable one.
If the complex characteristic H_d(jω) passes through the critical point (-1, j0), then the system will be at the limit of stability. That means the transfer function H_v(s) will have simple poles on the imaginary axis and all the other poles in the left half complex plane.
In Fig. 7.3.2. the complex characteristics of three open loop systems, H_d^1(s), H_d^2(s), H_d^3(s), are represented.

[Figure no. 7.3.2. Plane Re(H_d(jω)), j·Im(H_d(jω)): the three characteristics H_d^1(jω), H_d^2(jω), H_d^3(jω) plotted for ω from 0 to ∞, the unit radius circle R = 1, the critical point (-1, j0), the crossing frequencies ω_c1, ω_c2 and the corresponding phase margins γ1, γ2, γ3.]
The complex frequency characteristic H_d^1(jω) of the system H_d^1(s) from Fig. 7.3.2. leaves the critical point (-1, j0) on the left side, which means the future closed loop system H_v^1(s) will be stable. The open loop system having the complex characteristic H_d^3(jω) will determine an unstable H_v^3(s), but the one having H_d^2(jω) will determine H_v^2(s) at the limit of stability.
7.3.2. Frequency Quality Indicators.
The closed loop system behaviour can be appreciated, in some of its aspects, using several quality indicators defined on the complex frequency characteristic of the open loop system, as specified in Fig. 7.3.2. and Fig. 7.3.3.
Similarly, these quality indicators can also be defined on the Bode characteristics of the open loop system.
Among these quality indicators we can mention:
a. Crossing frequency ω_t. The crossing frequency ω_t, denoted also by ω_c, represents the largest frequency at which the complex characteristic H_d(jω) crosses the unit radius circle.

[Figure no. 7.3.3. The plane of H_d(jω): the characteristic crosses the unit circle at ω_t = ω_c, with phase φ_d(ω_t) and phase margin γ; it crosses the negative real axis at ω_π, with magnitude A_d^π; the critical point is (-1, j0).]

It can be obtained from the equation (7.3.2):
ω_c = ω_t = max {ω : A_d(ω) = 1}   (7.3.4)
b. Phase margin γ. The phase margin γ represents the angle between the direction of the vector H_d(jω_t) and the negative real axis. According to the Nyquist stability criterion it expresses the stability reserve of the closed loop system.
It is defined by the relation
γ = π + φ_d(ω_t)   (7.3.5)
where we have considered φ_d(ω_t) = arg H_d(jω_t) on the circle (-2π, 0]. The value γ = 0 indicates a limit of stability. To have a good stability, the following condition is imposed on the quality indicator γ (performance condition):
γ ≥ γ_imp   (7.3.6)
c. Phase crossover frequency ω_π.
The phase crossover frequency ω_π represents the smallest frequency at which the complex characteristic H_d(jω) crosses the negative real axis:
ω_π = min {ω : φ_d(ω) = -π}   (7.3.7)
d. Magnitude (gain) margin A_d^π.
The magnitude margin A_d^π is the length of the vector H_d(jω_π), that means
A_d^π = A_d(ω_π)   (7.3.8)
If A_d(ω) < 1, ∀ω > ω_π, then A_d^π clearly expresses the stability reserve of the closed loop system. According to the Nyquist criterion, the performance condition is expressed by
A_d^π ≤ A_d^(π imp) .   (7.3.9)
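These indicators can be estimated numerically on any open loop characteristic. The sketch below is our own (the function name H_d and the choice k = 1 are ours) for the open loop of Example 7.2.1, H_d(s) = k(s+1)/[s(s+2)(s+3)]: a bisection on A_d(ω) - 1 locates ω_t, then γ = π + φ_d(ω_t).

```python
import cmath
import math

def H_d(w, k=1.0):
    """Open loop frequency response H_d(jw) for Example 7.2.1 (our k = 1)."""
    s = 1j * w
    return k * (s + 1) / (s * (s + 2) * (s + 3))

# |H_d(jw)| decreases monotonically for this system: bisect |H_d| = 1
lo, hi = 1e-4, 100.0
for _ in range(200):
    mid = math.sqrt(lo * hi)          # geometric bisection over the sweep
    if abs(H_d(mid)) > 1.0:
        lo = mid
    else:
        hi = mid
w_t = lo                              # crossing frequency, eq. (7.3.4)
gamma = math.pi + cmath.phase(H_d(w_t))   # phase margin in rad, eq. (7.3.5)
```

For k = 1 this gives ω_t ≈ 0.17 and γ ≈ 1.6 rad (about 91°), consistent with the stability range k > 0 found by the Routh criterion.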
7.3.3. Frequency Characteristics of Time Delay Systems.
The Nyquist criteria can be applied to time delay systems too.
For example, if the plant has a time delay τ then its transfer function is:
H_τ(s) = H(s) e^(-τs)   (7.3.10)
H_τ(jω) = H(jω) e^(-jωτ)   (7.3.11)
A_τ(ω) = |H_τ(jω)| = |H(jω)| |e^(-jωτ)| = |H(jω)| = A(ω)   (7.3.12)
φ_τ(ω) = arg(H_τ(jω)) = arg(H(jω)) - ωτ  ⇒  φ_τ(ω) = φ(ω) - ωτ   (7.3.13)
These relations express the effect of the time delay on the complex frequency characteristics, as illustrated in Fig. 7.3.4.
We can see that the complex frequency characteristic now has a spiral shape because, as the phase (a negative one) increases in absolute value, at the same time the magnitude approaches zero.
[Figure no. 7.3.4. The plane of H(jω) and H_τ(jω): the delay rotates each point H(jω_i) by the angle -ω_i τ while keeping its magnitude A(ω_i), so H_τ(jω) winds around the origin as a spiral; H_τ(0) = H(0).]
The time delay systems are still linear ones, but their transfer functions unfortunately are not ratios of polynomials.
From the analytical point of view we can compute the closed loop transfer function,
H_d^τ(s) = M(s) e^(-τs) / N(s)  ⇒  H_v^τ(s) = M(s) e^(-τs) / [N(s) + e^(-τs) M(s)] = M(s) e^(-τs) / L(s)   (7.3.14)
but the denominator L(s) = N(s) + e^(-τs) M(s) is not a polynomial and we cannot easily solve the characteristic equation L(s) = 0, so the algebraic criteria cannot be applied. Instead, the Nyquist criteria can be applied even for this type of systems.
There are several methods to escape from the factor e^(-τs). A possibility is to use the Padé approximation:
e^(-τs) ≈ (1 - (τ/2)s) / (1 + (τ/2)s)   (7.3.15)
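The quality of this first order Padé approximation on the frequency axis can be checked directly (a numeric sketch of ours, with τ = 1): both sides have unit magnitude, and the phase error -ωτ + 2·arctan(ωτ/2) stays small for small ωτ.

```python
import cmath

tau = 1.0
for w in [0.1, 0.2, 0.3]:
    exact = cmath.exp(-1j * w * tau)                     # e^(-jw*tau)
    pade = (1 - (tau / 2) * 1j * w) / (1 + (tau / 2) * 1j * w)
    # the approximation is all-pass: magnitude is preserved exactly
    assert abs(abs(pade) - 1.0) < 1e-12
    # the phase error grows with w*tau but is small near the origin
    assert abs(cmath.phase(pade) - cmath.phase(exact)) < 1e-2
```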
8. DISCRETE TIME SYSTEMS.
8.1. Z  Transformation.
Let {y_k}, k ≥ 0, be a string of numbers. These numbers can be the values of a time function y(t) for t = kT, so we denote y_k = y(kT).
If the time function y(t) has discontinuities of the first order at the time moments kT, then we consider y_k as the right side limit,
y_k = y(kT+) = lim (t→kT, t>kT) y(t)   (8.1.1)
The Z-transformation is a mapping from the set of strings {y_k}, k ≥ 0, into the complex plane called the z-plane. The result is a complex function Y(z).
We call this transformation "the direct Z-transformation". The result of this transformation, Y(z), is called the Z-transform, denoted by
Y(z) = Z{(y_k), k≥0}   (8.1.2)
In the same way, the so called "inverse Z-transformation" is defined,
(y_k), k≥0 = Z^(-1){Y(z)}   (8.1.3)
The interconnection between the two Z-transformations is represented by a diagram as in Fig. 8.1.1.

[Figure no. 8.1.1. The time variable function y(t) gives the string of numbers (y_k), k≥0, by SAMPLING (univocally); the string and the complex variable function Y(z) are related by Z{ } and Z^(-1){ } (both univocally); the string gives back a time function by COVERING (not univocally).]
8.1.1. Direct ZTransformation.
There are several ways for Ztransformation definition:
8.1.1.1. Fundamental Formula.
By definition, the Z-transformation is:
Y(z) = Z{y_k} = Σ (k=0..∞) y_k z^(-k)   (8.1.4)
This series of powers is convergent for |z| > R_c, where R_c is called the convergence radius. The property of convergence states that Z{(y_k), k≥0} exists if
lim (n→∞) Σ (k=0..n) y_k z^(-k)
exists.
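The defining series can be evaluated numerically for |z| > R_c. A sketch of our own: for y_k = 0.5^k the series sums to 1/(1 - 0.5 z^(-1)) = z/(z - 0.5), with R_c = 0.5; checked here at z = 2.

```python
z = 2.0
# partial sums of the defining series (8.1.4) for y_k = 0.5^k
partial = sum(0.5**k * z**(-k) for k in range(200))
closed = z / (z - 0.5)        # Z{0.5^k} = z/(z - 0.5), |z| > 0.5
assert abs(partial - closed) < 1e-12
```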
We can consider the Z-transformation as being applied to an original time function y(t),
Y(z) = Z{y(t)} = Σ (k=0..∞) y(kT+) z^(-k) ,  |z| > R_c = e^(σ_0 T)   (8.1.5)
where the string of numbers (y_k), k≥0, to which the Z-transformation is applied is represented by the values of this function, y_k = y(kT+).
Here σ_0 is the convergence abscissa of y(t).
is the convergence abscissa of y(t).
The process of getting the string of numbers y
k
, the values of a time
function y(t) for t=kT or ( t=kT
+
) , is called " sampling process " having the
variable T as " sampling period " .
Vice versa, a time function y(t)=y
cov
(t) , can be obtained from a string of
numbers (y
k
)
k≥0
, replacing only, for example, k by t/T.
This is one covering functions, a smooth function which passes through
the points ( k, y
k
) .
Such a process is called " uniformly covering process " :
y(t)=y
cov
(t)=y
k

k→t/T
. (8.1.6)
Example.
Suppose y_k = k/(k^2+1), k ≥ 0. We can create a time function
y(t) = y_cov(t) = (t/T) / ((t/T)^2 + 1).
This time function has a Laplace transform Y(s). Through this covering process, the string values are forced to be considered equally spaced in time, even if the string, maybe, has nothing to do with the time variable.
8.1.1.2. Formula by Residues.
The second formula for the Z-transformation, the so called "formula by residues", manipulates the Laplace transform of the genuine time function y(t), when the Z-transformation is applied to a time function, or of the covering function y(t) = y_cov(t), when the Z-transformation is applied to a pure string of numbers (y_k), k≥0:
Y(z) = Z{y(t)} = Σ (poles of Y(ξ)) Rez [ Y(ξ) · 1/(1 - z^(-1) e^(Tξ)) ]   (8.1.7)
In this formula Y(s) is the Laplace transform of y(t) or y_cov(t), and the expression Y(ξ) is obtained through a simple replacement of s by ξ, that means
Y(ξ) = Y(s)|_(s→ξ)
Examples.
1. Let y_k = 1, k ≥ 0; it could be obtained from y(t) = 1(t) or from
y(t) = y_k |_(k→t/T) = 1 ,  t ≥ 0 .
This is the so called "unit step string" or "unit step function" respectively. First we apply the fundamental formula:
Y(z) = Z{1(t)} = Z{1, k ≥ 0} = Σ (k=0..∞) 1·z^(-k) = 1/(1 - z^(-1)) = z/(z-1) , if |z^(-1)| < 1 ⇒ |z| > 1 = R_c
We can get the same result by the second method:
L{1(t)} = 1/s = Y(s) ⇒ Y(ξ) = 1/ξ ,  Re(s) > 0 = σ_0 .
Y(z) = Z{1(t)} = Σ Rez [ (1/ξ) · 1/(1 - z^(-1) e^(Tξ)) ] = 1/(1 - z^(-1)) = z/(z-1) ,  |z| > e^(σ_0 T) = 1 = R_c
2. y_k = kT , k ≥ 0 , = {0, T, 2T, 3T, ...}.
We can create a time function
y(t) = t ⇒ Y(s) = 1/s^2
Y(z) = Z{kT} = Σ (k=0..∞) kT·z^(-k) = ...
It is difficult to continue in this way, so we shall use the second formula:
Y(s) = 1/s^2 ⇒ Y(z) = Σ* Rez [ (1/ξ^2) · 1/(1 - z^(-1) e^(Tξ)) ]
= (1/(2-1)!) · d^(2-1)/dξ^(2-1) [ ξ^2 · (1/ξ^2) · 1/(1 - z^(-1) e^(Tξ)) ] |_(ξ=0)
= - [ -z^(-1) T e^(Tξ) / (1 - z^(-1) e^(Tξ))^2 ] |_(ξ=0) = T z^(-1) / (1 - z^(-1))^2 = Tz/(z-1)^2
where "*" means: poles of 1/ξ^2.
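The result Y(z) = Tz/(z-1)^2 can be cross-checked against the fundamental formula (a numeric sketch of ours, with T = 1 and z = 2):

```python
T, z = 1.0, 2.0
# partial sums of Z{kT} = sum of k*T*z^(-k), truncated far past convergence
series = sum(k * T * z**(-k) for k in range(400))
closed = T * z / (z - 1)**2   # the residue-formula result Tz/(z-1)^2
assert abs(series - closed) < 1e-9
```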
3. Let us consider now a finite string of numbers,
y_k = { y_0 = 1 , y_1 = 4 , y_2 = -5 , 0 , 0 , ... , 0 , ... } .
Directly we compute,
Y(z) = 1 + 4z^(-1) - 5z^(-2) + 0·z^(-3) + ... = (z^2 + 4z - 5) / z^2
8.1.2. Inverse ZTransformation.
There are several methods to evaluate the inverse Ztransform:
8.1.2.1. Fundamental Formula.
The most general method for obtaining the inverse of a Z-transform is the inversion integral.
Having a complex function Y(z) which is analytic in an annular domain R_1 < |z| < R_2, and Γ any simple closed curve separating R_1 from R_2, then
y_k = (1/(2πj)) ∮_Γ Y(z) z^(k-1) dz   (8.1.8)
Simply, we can say that Γ is any closed curve in the z-plane which encloses all of the finite poles of Y(z)z^(k-1).
If the function Y(z) is a rational and causal one, then the above integral is easily expressed using the theorem of residues:
y_k |_(k≥0) = Σ (poles of Y(z)z^(k-1)) Rez [ Y(z) z^(k-1) ]   (8.1.9)
The resulting string y_k can be interpreted as the values of a time function:
y_k = y(t)|_(t=kT) = y(kT).
For example, if y_k = k/(k^2+1), k ≥ 0, we can interpret it as
y_k = y(t)|_(t=kT) = (kT/T) / ((kT/T)^2 + 1) = y(kT).
Examples.
1. Y(z) = z/(z-1).
y_k = Z^(-1){Y(z)} = Σ (poles of Y(z)z^(k-1)) Rez [ (z/(z-1))·z^(k-1) ] ,  k ≥ 0 (k is a parameter).
We must check whether the number of poles is different for different values of k. In this case we can see that (z/(z-1))·z^(k-1) has one simple pole z = 1 for any k ≥ 0, so
y_k = 1 ∀ k ≥ 0 .
2. Y(z) = 1/(z-1) ⇒ y_k = Σ (poles of (1/(z-1))z^(k-1)) Rez [ (1/(z-1))·z^(k-1) ] ,  k ≥ 0
For k = 0,
y_0 = Σ Rez [ 1/((z-1)z) ] = 1 + 1/(0-1) = 0 ,
because there are two simple poles: z = 0 and z = 1.
For k ≥ 1,
y_k = Σ Rez [ z^(k-1)/(z-1) ] = 1 ,
because there is one simple pole z = 1.
3. Y(z) = (z^2+4z-5)/z^2 ⇒ y_k = Σ Rez [ ((z^2+4z-5)/z^2)·z^(k-1) ]
k = 0 :  y_0 = Rez [ (z^2+4z-5)/z^3 ] = (1/(3-1)!) · (z^2+4z-5)^((3-1)) |_(z=0) = (1/2)·2 = 1 .
k = 1 :  y_1 = Rez [ (z^2+4z-5)/z^2 ] = (1/(2-1)!) · (z^2+4z-5)^((2-1)) |_(z=0) = (1/1)·4 = 4 .
In the same way, y_2 = -5, and y_k = 0 for k ≥ 3.
We have reobtained the particular string from the above example.
8.1.2.2. Partial Fraction Expansion Method.
The expression of Y(z) is expanded into a sum of simple fractions for which the inverse Z-transform is known:
z/(z-λ) ,  z/(z-λ)^p ,  z/((z-α)^2 + β^2) .
To do this we expand Y(z)/z into a common sum of fractions.
Example.
We know that
y_k = λ^k ⇒ Y(z) = Σ (k=0..∞) λ^k z^(-k) = Σ (k=0..∞) (λz^(-1))^k = 1/(1 - λz^(-1)) = z/(z-λ)
so
Z^(-1){ z/(z-λ) } = λ^k .
If Y(z)/z has simple poles, then
Y(z)/z = Σ_i A_i/(z - λ_i) ⇒ Y(z) = Σ_i A_i·z/(z - λ_i)
Z^(-1){Y(z)} = Σ_i [ A_i · Z^(-1){ z/(z - λ_i) } ] = Σ_i A_i λ_i^k
If some fractions have complex poles, then for those fractions the fundamental inversion formula can be used.
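A numeric illustration of the method, with an example of our own choosing (not from the book): for Y(z) = z/((z - 0.5)(z - 0.2)), the expansion Y(z)/z = A_1/(z - 0.5) + A_2/(z - 0.2) gives A_1 = 1/0.3 and A_2 = -1/0.3, hence y_k = (0.5^k - 0.2^k)/0.3. The denominator z^2 - 0.7z + 0.1 implies the recurrence y_(k+2) = 0.7 y_(k+1) - 0.1 y_k, which the formula must satisfy.

```python
def y(k):
    # inverse transform of z/((z-0.5)(z-0.2)) obtained by partial fractions
    return (0.5**k - 0.2**k) / 0.3

# initial values fixed by the series expansion of Y(z)
assert abs(y(0) - 0.0) < 1e-9 and abs(y(1) - 1.0) < 1e-9
# the string satisfies the recurrence implied by the denominator
for k in range(20):
    assert abs(y(k + 2) - (0.7 * y(k + 1) - 0.1 * y(k))) < 1e-9
```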
8.1.2.3. Power Series Method.
From the definition formula (8.1.4) we can see that y_k is just the coefficient of z^(-k) in the power-series form of Y(z). By whatever method, Y(z) is expanded into a power series and we just keep the coefficients:
Y(z) = Σ (k=0..∞) c_k z^(-k) ,  c_k = y_k   (8.1.10)
For rational functions this can be done just by dividing the numerator by the denominator (long division).
Example:
Y(z) = (z^2 + 1) / (z^2 + 2z + 1) = 1 - 2z^(-1) + 4z^(-2) + ... ⇒
y_0 = 1 ;  y_1 = -2 ;  y_2 = 4 ; ...
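The long division can be mechanised (a sketch of ours; the helper name series_coeffs is hypothetical): repeatedly subtract the leading multiple of the denominator, collecting one coefficient of z^(-k) per step.

```python
def series_coeffs(num, den, n):
    """First n coefficients y_0..y_{n-1} of num(z)/den(z) expanded in powers
    of z^(-1). num and den hold coefficients from the highest degree down,
    with deg(num) <= deg(den) and den[0] != 0."""
    num = [0.0] * (len(den) - len(num)) + list(num)  # align degrees
    out = []
    for _ in range(n):
        c = num[0] / den[0]
        out.append(c)
        # subtract c*den, drop the cancelled leading term, shift one power down
        num = [a - c * b for a, b in zip(num, den)][1:] + [0.0]
    return out

coeffs = series_coeffs([1, 0, 1], [1, 2, 1], 5)   # (z^2+1)/(z^2+2z+1)
```

This reproduces the coefficients of the example above, 1, -2, 4, -6, 8, and e.g. series_coeffs([1], [1, -1], 4) gives 0, 1, 1, 1, the expansion of 1/(z-1).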
8.1.3. Theorems of the ZTransformation.
There are some properties useful for fast Ztransforms computing:
8.1.3.1. Linearity Theorem.
If {y_k^a} (or y_a(t)) and {y_k^b} (or y_b(t)) have Z-transforms
Y_a(z) = Z{y_k^a} = Z{y_a(t)} ;  Y_b(z) = Z{y_k^b} = Z{y_b(t)}
then, for any α, β real or complex:
Z{α y_k^a + β y_k^b} = α Y_a(z) + β Y_b(z)   (8.1.11)
8.1.3.2. Real Time Delay Theorem.
Z{y_(k-p)} = z^(-p) [ Y(z) + Σ (k=-p..-1) y_k z^(-k) ] ,  Y(z) = Z{y_k} ,  p ∈ N   (8.1.12)
Z{y(t - pT)} = z^(-p) [ Y(z) + Σ (k=-p..-1) y(kT) z^(-k) ] ,  Y(z) = Z{y(t)} ,  p ∈ N   (8.1.13)
If the string or the time function is an original one, then the second term does not appear.
8.1.3.3. Real Time Shifting in Advance Theorem.
Z{y_(k+p)} = z^p [ Y(z) - Σ (k=0..p-1) y_k z^(-k) ] ,  p ∈ N   (8.1.14)
Z{y(t + pT)} = z^p [ Y(z) - Σ (k=0..p-1) y(kT) z^(-k) ] ,  p ∈ N   (8.1.15)
Example.
Z{y_(k+1)} = z[Y(z) - y_0]
Z{y_(k+2)} = z^2 [ Y(z) - Σ (k=0..1) y_k z^(-k) ] = z^2 Y(z) - y_0·z^2 - y_1·z
8.1.3.4. Initial Value Theorem.
y_0 = lim (z→∞) Y(z)   (8.1.16)
We understand by y_0,
y_0 = lim (t→0, t>0) y(t) .
8.1.3.5. Final Value Theorem.
lim (k→∞) y_k = lim (k→∞) y(kT) = lim (z→1) [ (1 - z^(-1)) Y(z) ]   (8.1.17)
if the time limit exists, that is, if the function (1 - z^(-1))Y(z) has no pole on or outside the unit circle in the z-plane.
8.1.3.6. Complex Shifting Theorem.
Z{λ^k y_k} = Y(λ^(-1) z) ,  Y(z) = Z{y_k}   (8.1.18)
Proof:
Z{λ^k y_k} = Σ (k=0..∞) λ^k y_k z^(-k) = Σ (k=0..∞) y_k (λ^(-1) z)^(-k) = Y(z_1)|_(z_1 = λ^(-1) z) = Y(λ^(-1) z)
or
Z{e^(at) y(t)} = Y(e^(-aT) z) ,  where Y(z) = Z{y(t)} .
8.1.3.7. Complex derivative theorem.
Z{k^p y_k} = [-zY(z)]^[p]   (8.1.19)
where the operator [p], applied p times, means:
[-zY(z)]^[p] = -z d/dz [ [-zY(z)]^[p-1] ]
Z{t^p y(t)} = [-TzY(z)]^[p] = -Tz d/dz [ [-TzY(z)]^[p-1] ]
[-TzY(z)]^[0] = Y(z)   (8.1.20)
Examples.
1. Z{k y_k} = -z dY(z)/dz   (8.1.21)
2. Z{k^2 y_k} = -z d/dz [-zY(z)]^[1] = -z d/dz [ -z dY(z)/dz ]   (8.1.22)
3. Z{t y(t)} = -Tz dY(z)/dz   (8.1.23)
4. Z{t^2 y(t)} = -Tz d/dz [-TzY(z)]^[1] = -Tz d/dz [ -Tz dY(z)/dz ]   (8.1.24)
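Relation (8.1.21) can be checked numerically (a sketch of ours): for y_k = 0.5^k we have Y(z) = z/(z - 0.5), so -z dY/dz = 0.5z/(z - 0.5)^2, which must equal the series Σ k·0.5^k z^(-k); evaluated here at z = 2.

```python
z = 2.0
# left side: Z{k y_k} evaluated from the defining series
lhs = sum(k * 0.5**k * z**(-k) for k in range(300))
# right side: -z d/dz [z/(z-0.5)] = 0.5 z/(z-0.5)^2
rhs = 0.5 * z / (z - 0.5)**2
assert abs(lhs - rhs) < 1e-12
```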
Proof of the Complex derivative theorem.
For p = 2 we can prove:
Y(z) = Σ (k=0..∞) y(kT) z^(-k) ;  dY(z)/dz = - Σ (k=0..∞) k y(kT) z^(-k) z^(-1)
[-TzY(z)]^[1] = -Tz dY(z)/dz = Σ (k=0..∞) (kT y(kT)) z^(-k) = Z{(kT) y(kT)} = Z{t y(t)}
d/dz [ -Tz dY(z)/dz ] = - Σ (k=0..∞) k^2 T y(kT) · z^(-k-1) ;
[-TzY(z)]^[2] = -Tz d/dz [ -Tz dY(z)/dz ] = Σ (k=0..∞) k^2 T^2 y(kT) · z^(-k) = Z{t^2 y(t)}
By the operator definition we know that
[-TzY(z)]^[p] = -Tz d/dz [ [-TzY(z)]^[p-1] ] .
Suppose that
[-TzY(z)]^[p-1] = Z{t^(p-1) y(t)} = Σ (k=0..∞) (kT)^(p-1) y(kT) z^(-k) .
From the above,
[-TzY(z)]^[p] = -Tz d/dz Σ (k=0..∞) (kT)^(p-1) y(kT) z^(-k) = -Tz Σ (k=0..∞) (kT)^(p-1) y(kT) (-k) z^(-k-1)
= Σ (k=0..∞) (kT)^p y(kT) z^(-k) = Z{t^p y(t)} ,  q.e.d.
Examples.
1. Let w(t) be
w(t) = t = t·1(t) = t·y(t) , where y(t) = 1(t) .
But we know Y(z) = Z{1(t)} = z/(z-1), so
W(z) = Z{t y(t)} = -Tz d/dz ( z/(z-1) ) = -Tz · ( -1/(z-1)^2 ) = Tz/(z-1)^2 ,
the same result we obtained through other methods.
2. Z{t^2 1(t)} = Z{t·(t·1(t))} = -Tz d/dz [ Tz/(z-1)^2 ] =
= -Tz [ T(z-1)^2 - 2Tz(z-1) ] / (z-1)^4 = T^2 z (z+1) / (z-1)^3
8.1.3.8. Partial Derivative Theorem.
If y_k(α) or y(t,α) are functions derivable with respect to a parameter α, then
∂/∂α Z{y_k(α)} = Z{ ∂/∂α y_k(α) }   (8.1.25)
8.1.3.9. Real Convolution sum theorem.
Let y_a(t), y_b(t) be two original functions having
Z{y_a(t)} = Y_a(z) ;  Z{y_b(t)} = Y_b(z) .
The same y_a, y_b could be two original strings, y_k^a and y_k^b.
This theorem, one of the most important for system theory, states that the Z-transform of the so called "convolution sum of two strings" is just the algebraical product of the corresponding Z-transforms:
Z{ Σ (i=0..k) y_a(iT) y_b((k-i)T) } = Y_a(z) Y_b(z) .   (8.1.26)
Vice versa, the inverse Z-transform of a product of two Z-transforms is a convolution sum:
Z^(-1){ Y_a(z) Y_b(z) } = Σ (i=0..k) y_a(iT) y_b((k-i)T) = Σ (i=0..k) y_a((k-i)T) y_b(iT)   (8.1.27)
Proof: Let us denote
w(kT) = Σ (i=0..k) y_a(iT) y_b((k-i)T) ,
W(z) = Z{w(kT)} = Y_a(z) Y_b(z) .
According to the fundamental formula,
W(z) = Σ (k=0..∞) [ Σ (i=0..k) y_a(iT) y_b((k-i)T) ] · z^(-k) = Σ (k=0..∞) [ Σ (i=0..k) y_a(iT) y_b((k-i)T) ] · z^(-(k-i)) · z^(-i)
Denoting p = k-i, then k = 0 ⇒ p = -i ; k = ∞ ⇒ p = ∞.
Because y_a, y_b are original functions,
y_b((k-i)T) = 0 for i > k  (and for k < 0, y_a(kT) = 0, y_b(kT) = 0),
the upper limit in the inner sum can be ∞ and the lower limit starts from zero, so we can change the order of summation.
With this, the above relation can be written:
W(z) = Σ (i=0..∞) Σ (k=0..∞) [ y_a(iT) y_b((k-i)T) · z^(-(k-i)) · z^(-i) ]
     = Σ (i=0..∞) Σ (p=-i..∞) [ y_a(iT) y_b(pT) · z^(-p) · z^(-i) ]
W(z) = Σ (i=0..∞) [ y_a(iT) z^(-i) ( Σ (p=0..∞) y_b(pT) z^(-p) ) ] = Σ (i=0..∞) [ y_a(iT) z^(-i) Y_b(z) ]
     = [ Σ (i=0..∞) y_a(iT) z^(-i) ] · Y_b(z) = Y_a(z) · Y_b(z) ,  q.e.d.
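The theorem is easy to confirm numerically (our own sketch, with T = 1): convolve y_a(k) = 0.5^k with the unit step y_b(k) = 1 and compare the transform of the convolution with Y_a(z)·Y_b(z) at z = 2.

```python
z, N = 2.0, 200

ya = [0.5**k for k in range(N)]
yb = [1.0] * N
# convolution sum w_k = sum_{i=0..k} y_a(i) y_b(k-i)
w = [sum(ya[i] * yb[k - i] for i in range(k + 1)) for k in range(N)]

W = sum(w[k] * z**(-k) for k in range(N))   # Z{w_k}, truncated
Ya = z / (z - 0.5)   # Z{0.5^k}
Yb = z / (z - 1)     # Z{1}
assert abs(W - Ya * Yb) < 1e-9
```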
8.2. Pure Discrete Time Systems (DTS).
8.2.1. Introduction ; Example.
A pure discrete time system ("DTS") is an oriented system whose inputs and outputs are strings of numbers.
The input-output relation of a single-input, single-output system is a difference equation.
For multi-input, multi-output systems, the input-output relation is expressed by a set of difference equations. As an oriented system, a DTS is represented in Fig. 8.2.1.

[Figure no. 8.2.1. A pure discrete-time system maps an input string of numbers (u_k), k≥0, into an output string of numbers (y_k), k≥0; both can be vectors.]
Example 8.2.1. First Order DTS Implementation.
Let us consider a first order difference equation
y_k = a·y_(k-1) + b·u_k ,  k ≥ 1   (8.2.1)
which is an input-output relation of a DTS having the string u_k as input variable and y_k as output variable.
It can be materialised by a computer program run for k ≥ 1. Besides the coefficients a, b, which are structure parameters, we have to know the initial condition y_(k-1)|_(k=1) = y_0. This illustrates that the system is a dynamic one.
A pseudocode-like program is:

Initialise: a, b, kin=1, kmax, Y
% Here Y as a computer variable "CV" has the semantics of y_0 as a mathematical variable "MV".
For k = kin : kmax
    Read U
    % Here U as CV represents u_k as MV.
    Y = a*Y + b*U
    % The previous value of Y (y_(k-1)) multiplied by the coefficient a, plus the
    % actual value of U (u_k) multiplied by b, determines the actual value of Y (y_k).
    % This is just the relation (8.2.1).
    Write Y
    % The output is the actual value of Y (y_k).
End
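The pseudocode translates directly into Python (our own sketch; the unit step input and the values a = 0.5, b = 1, y_0 = 0 are our choices, not the book's):

```python
def simulate(a, b, u, y0=0.0):
    """Recursion y_k = a*y_{k-1} + b*u_k for k = 1, 2, ..., relation (8.2.1)."""
    y, out = y0, []
    for uk in u:
        y = a * y + b * uk
        out.append(y)          # out[i] holds y_{i+1}
    return out

ys = simulate(a=0.5, b=1.0, u=[1.0] * 50)   # unit step input, zero initial condition
```

With |a| < 1 the step response settles at b/(1 - a) = 2, illustrating the stability property this section derives analytically.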
Of course we can replace k by k+1, and relation (8.2.1) becomes equivalent to (8.2.2):
y_(k+1) - a·y_k = b·u_(k+1) ,  k ≥ 0   (8.2.2)
Running such a program we can obtain the string of numbers y_k only if the input numbers u_k are given, together with the initial value y_0, but we cannot understand the system's possibilities. Because of that, an analytical approach is necessary.
Relation (8.2.1) is more suitable for computer implementation. It is a pure recursive relation: the actual result depends on the previous results and on the actual and previous inputs.
Relation (8.2.2) is more suitable for analytical treatment because everything is defined for k ≥ 0.
Analytical approach.
This is just an application of the Z-transform. Applying the Z-transform to (8.2.2) and using (8.1.14) we obtain
z(Y(z) - y_0) - aY(z) = b[z(U(z) - u_0)] .
The Z-transform of the output string can be expressed as
Y(z) = b·(z/(z-a))·U(z) + (z/(z-a))·(y_0 - b·u_0)
where the first term is the forced term H(z)U(z) and the second is the free term:
Y(z) = Y_f(z) + Y_l(z) ,  Y_f(z) = H(z)U(z)   (8.2.3)
We observe that the output is a sum of two terms:
i. The forced term, Y_f(z), which depends on the input only (more precisely, it does not depend on the initial conditions),
Y_f(z) = H(z)U(z) ,  H(z) = b·z/(z-a)   (8.2.4)
where the operator
H(z) = Y(z)/U(z) |_(Z.I.C.)   (8.2.5)
is called the "z-transfer function".
The z-transfer function is the ratio between the z-transform of the output variable and the z-transform of the input variable which determined that output, in zero initial conditions "Z.I.C.", if and only if this ratio is the same for any input variable.
We can see that this transfer function has one pole z = a. A lot of behaviour properties depend on the transfer function poles, in our case on the value of a.
ii. The free term, Y_l(z), which depends on the initial conditions only (more precisely, it does not depend on the input variable),
Y_l(z) = (z/(z-a))·(y_0 - b·u_0) .   (8.2.6)
Here the free term looks as if it depends on two initial conditions, y_0 and u_0, even if the difference equation (8.2.2) from which we started, and its equivalent (8.2.1), is of the first order and must depend on one condition (as a minimum number of conditions).
However, we could start the computer program based on (8.2.1) with one initial condition only.
The explanation is the following: if we look at (8.2.2), we can see that for k = -1, y_0 - b·u_0 = a·y_(-1), and the free term looks like
Y_l(z) = (z/(z-a))·a·y_(-1) ,   (8.2.7)
depending on a single initial condition.
If we start integrating (8.2.2) from k = -1 (to use u_0 as the first input number), y_0 depends on the input u_0 and on the genuine initial condition y_(-1).
If we start from k = 0, the first output we can compute is y_1, which depends on y_0 and on the input u_0.
Conclusion: in the relation (8.2.6), obtained through the z-transform (shifting in advance theorem), u_0 must not be interpreted as an initial condition.
Relation (8.2.6) is only one expression of the free term.
applying the inverse ztransformation to relation (8.2.3),
(8.2.8) y
k
· Z
−1
{Y(z)¦ ·
Σ
i·0
k
h
i
u
k−i
+y
l(k)
where
(8.2.9) h
k
· Z
−1
{H(z)¦
is called " discrete weighting function " as the inverse ztransform of the
ztransfer function.
The time free response is
Y_l(z) = (z/(z-a))·(y_0 - b·u_0) ⇒
y_l(k) = Σ Rez [ (z/(z-a))·z^(k-1) ]·(y_0 - b·u_0) ⇒ y_l(k) = a^k (y_0 - b·u_0)   (8.2.10)
Expressions of the responses to different inputs can be easily calculated. Supposing that u_k is a unit step string, that is u_k = 1, k ≥ 0, U(z) = z/(z-1):
y_f(k) = Z^(-1){ b·(z/(z-a))·(z/(z-1)) } = Σ Rez [ (b·z^2/((z-a)(z-1)))·z^(k-1) ] =
= (b/(a-1))·a^(k+1) + b/(1-a) = (b/(1-a))·(1 - a^(k+1))
If |a| < 1 then lim (k→∞) a^(k+1) = 0 and
y_f(∞) = b/(1-a) ;  y_l(∞) = 0 .
This allows us to point out an important conclusion:
If the moduli of the z-transfer function poles are under unity (inside the unit circle in the z-plane), then the free response vanishes and the forced response goes to the permanent response (the permanent response is the component of the forced response determined by the poles of the input z-transform only).
In this particular step response, the input has one pole z = 1 and the permanent response is just b/(1-a), as we can see from the residues formula. This property is called the stability property.
The forced response for k → ∞ can be determined directly from the expression of Y_f(z), by using the final value theorem, without needing to compute the expression of the time response:
y_f(∞) = lim (z→1) [(1 - z^(-1)) Y_f(z)] = lim (z→1) [(1 - z^(-1)) · b·(z/(z-a))·(z/(z-1))] = b/(1-a)
Of course we have to pay something for this simplicity: the validity of the theorem has to be checked ((1 - z^(-1))Y_f(z) has to be analytic on and outside the unit circle).
8.2.2. Input Output Description of Pure Discrete Time Systems.
For the so called linear time invariant discrete-time systems, in short "LIDTS", the input-output relation is an ordinary difference equation with constant coefficients,
Σ (i=0..n) a_i y_(k+i) = Σ (i=0..m) b_i u_(k+i) ,  a_n ≠ 0 .   (8.2.11)
If: m < n, the system is a strictly proper system (strictly causal);
m = n, the system is a proper system (causal);
m > n, the system is an improper system (non-causal).
An improper system cannot be physically realised (even by a computer program) as an on-line algorithm.
Example 8.2.2.1. Improper First Order DTS.
The system described by
y_k - 2y_(k-1) = 3u_k + 7u_(k+1)   (8.2.12)
cannot be realised, because the actual value of the output y_k depends on the future value of the input, u_(k+1), which is unknown. This is not the case when the expression of u_k is known a priori for any k, but then it would not be a real physical system (an on-line or real-time algorithm).
Example 8.2.2.2. Proper Second Order DTS.
a_2 y_(k+2) + a_1 y_(k+1) + a_0 y_k = b_2 u_(k+2) + b_1 u_(k+1) + b_0 u_k ,  k ≥ 0   (8.2.13)
This difference equation can be implemented in a recursive form as follows. We can say that
y_(k+2) = -(a_1/a_2) y_(k+1) - (a_0/a_2) y_k + (b_2/a_2) u_(k+2) + ... + (b_0/a_2) u_k
If we denote k+2 = j and then replace j by k, and denote
α_1 = -a_1/a_2 ,  α_0 = -a_0/a_2 ,  β_i = b_i/a_2 ,  i = 0, 1, 2,
then (8.2.13) becomes
y_k = α_1 y_(k-1) + α_0 y_(k-2) + β_2 u_k + β_1 u_(k-1) + β_0 u_(k-2) ,  k ≥ 2   (8.2.14)
For a computer program we have to know:
α_1, α_0, β_2, β_1, β_0 ;  y_1, y_0, u_1, u_0 .
Example of a computer program:

Initialise: α_1, α_0, β_2, β_1, β_0 ; Y1, Y0, U1, U0, kin = 2, kmax
For k = kin : kmax
    Read U
    Y ← α_1*Y1 + α_0*Y0 + β_2*U + β_1*U1 + β_0*U0
    U0 = U1; U1 = U; Y0 = Y1; Y1 = Y
    % Update the stored past values
    Write Y
End
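The same update, with the state shuffling of the pseudocode, in Python (our own sketch and parameter choice, not the book's): the system y_k = 0.5 y_(k-1) + u_k, i.e. α_1 = 0.5, α_0 = β_1 = β_0 = 0, β_2 = 1, driven by a unit step from zero initial conditions, must approach 1/(1 - 0.5) = 2.

```python
def simulate2(alpha1, alpha0, beta2, beta1, beta0, u,
              y1=0.0, y0=0.0, u1=0.0, u0=0.0):
    """Recursion (8.2.14): y_k = alpha1*y_{k-1} + alpha0*y_{k-2}
                                 + beta2*u_k + beta1*u_{k-1} + beta0*u_{k-2}."""
    Y1, Y0, U1, U0 = y1, y0, u1, u0
    out = []
    for U in u:
        Y = alpha1 * Y1 + alpha0 * Y0 + beta2 * U + beta1 * U1 + beta0 * U0
        U0, U1, Y0, Y1 = U1, U, Y1, Y      # update the stored past values
        out.append(Y)
    return out

ys = simulate2(0.5, 0.0, 1.0, 0.0, 0.0, [1.0] * 60)
```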
The discrete time systems (which are linear and time invariant) can be very easily described in the z-complex plane by using the Z-transformation. Applying the Z-transformation to the difference equation (8.2.11) one can obtain
Σ (i=0..n) a_i [ z^i ( Y(z) - Σ (k=0..i-1) y_k z^(-k) ) ] = Σ (i=0..m) b_i [ z^i ( U(z) - Σ (k=0..i-1) u_k z^(-k) ) ]   (8.2.15)
(for i = 0 the inner sums do not appear).
Y(z) = H(z)U(z) + I(z)/L(z) ,  H(z) = M(z)/L(z)   (8.2.16)
where the first term is the forced component Y_f(z), the second is the free component Y_l(z), and
M(z) = b_m z^m + ... + b_1 z + b_0   (8.2.17)
L(z) = a_n z^n + ... + a_1 z + a_0   (8.2.18)
The expression H(z) is called the z-transfer function.
The z-transfer function of a discrete time system is defined as the ratio between the Z-transform of the output and the Z-transform of the input which determined that output, in zero initial conditions, if this ratio is the same for any input:
H(z) = Y(z)/U(z) |_(zero init. cond.) = M(z)/L(z)   (8.2.19)
The z-transfer function determines only the forced response. The time expression of the forced response is
y_k = Σ (i=0..k) h_i · u_(k-i)   (8.2.20)
where h_k is the inverse Z-transform of H(z), called the weighting function:
h_k = Z^(-1){H(z)} ⇔ H(z) = Z{h_k} .   (8.2.21)
The weighting function represents the system response to a unit impulse input in zero initial conditions. Because of that, the weighting function is also called the "impulse response".
The unit impulse string is defined as
u_0 = 1 ,  u_k = 0 ,  k ≥ 1 ;
U(z) = 1 ⇒ Y(z) = H(z)·1 = H(z) .
For both proper and strictly proper systems, h_k = 0, ∀k < 0.
For strictly proper systems, h_0 = 0. This reveals that the present input u_k has no contribution to the present output y_k.
Another important particular response is the so called "unit step response", as the response of a discrete time system to a unit step input in zero initial conditions. The unit step input string is defined as
u_k = 1 ,  k ≥ 0 ⇒ U(z) = z/(z-1) ,  |z| > 1 .
The unit step response at time k, denoted h_k^s, where the superscript "s" means step, can be evaluated for strictly proper systems using the formula
h_k^s = Σ* Rez [ H(z)·(z/(z-1))·z^(k-1) ]   (8.2.22)
where "*" means: poles of [ H(z)·(z/(z-1))·z^(k-1) ].
8.2.3. State Space Description of Discrete Time Systems.
Starting from a practical problem a discrete time system can be expressed
by a first order difference equation in a matrix form like,
(8.2.23)
¹
¹
'
x
k+1
· Ax
k
+Bu
k
y
k
· Cx
k
+Du
k
where the matrices are: A(n x n); B(n x p); C(r x n); D(r x p).
For :
p · 1 ⇒B →b
r · 1 ⇒C →c
T
Having a z-transfer function, the state space description can be obtained as for continuous time systems, by using the same methods.
Any canonical form from continuous time systems can also be obtained for discrete time systems with the same formulae, simply by using the variable z instead of s.
For example, the polynomial M(s) = b_m s^m + ... + b_0 , with s^m → z^m, will become M(z) = b_m z^m + ... + b_1 z + b_0.
By using the Z-transformation, the state equations (8.2.23) become,
z(X(z) − x_0) = A X(z) + B U(z)
We recall that the Z-transform of a vector is the vector of the Z-transforms. So,
x_k = [x_k^1 ... x_k^n]^T ⇒ X(z) = [X_1(z) ... X_n(z)]^T
The z-form of the first state equation (8.2.23) is,
(8.2.24) X(z) = z(zI − A)^{−1} · x_0 + (zI − A)^{−1} B U(z)
where Φ(z) = z(zI − A)^{−1} is the Z-transform of the so-called "transition matrix" Φ(k),
(8.2.25) Φ(z) = z(zI − A)^{−1} = (I − z^{−1}A)^{−1} = Z{Φ(k)}
It can be proved that,
(8.2.26) Φ(k) = Z^{−1}{Φ(z)} = Z^{−1}{(I − z^{−1}A)^{−1}} = A^k
(8.2.27) Φ(k) = A^k , Φ(0) = I
(8.2.28) X(z) = Φ(z)x_0 + (1/z)Φ(z)BU(z) ,
where we utilised the identity
(zI − A)^{−1} = (1/z)(I − z^{−1}A)^{−1} = (1/z)Φ(z)
The general time response with respect to the state vector is
(8.2.29) x_k = Φ(k)x_0 + Σ_{i=0}^{k−1} Φ(k − i − 1) B u_i , Φ(k − i − 1) = A^{k−i−1}
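The closed-form response (8.2.29) can be checked numerically against the recursion x_{k+1} = A x_k + B u_k; the two must agree at every step. NumPy is used only as a convenience, and the matrices are illustrative example values.

```python
# Check of the general time response (8.2.29):
#   x_k = A^k x_0 + sum_{i=0}^{k-1} A^{k-i-1} B u_i
# against the recursion x_{k+1} = A x_k + B u_k.
import numpy as np

A = np.array([[0.0, 1.0], [-0.1, 0.7]])
B = np.array([0.0, 1.0])
x0 = np.array([1.0, -1.0])
k = 8
u = np.ones(k)

# State after k steps of the recursion
x = x0.copy()
for j in range(k):
    x = A @ x + B * u[j]

# Closed form with Phi(k) = A^k (eq. 8.2.27)
x_closed = np.linalg.matrix_power(A, k) @ x0
for i in range(k):
    x_closed = x_closed + np.linalg.matrix_power(A, k - i - 1) @ (B * u[i])

print(np.allclose(x, x_closed))   # True: the two expressions agree
```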
The Z-transform of the output is obtained simply by applying the Z-transform to the second equation from (8.2.23) and substituting X(z) from (8.2.28), obtaining
(8.2.30) Y(z) = CΦ(z)x_0 + [(1/z)CΦ(z)B + D]·U(z)
where the second term is the forced response Y_f(z) = H(z)U(z),
where,
(8.2.31) H(z) = (1/z)CΦ(z)B + D
is the so-called "Z-transfer matrix".
For single-input single-output systems, the Z-transfer matrix is just the Z-transfer function.
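Since (1/z)Φ(z) = (zI − A)^{−1}, relation (8.2.31) reads H(z) = C(zI − A)^{−1}B + D, and it must equal the Z-transform of the weighting function, Σ_k h_k z^{−k}, where h_0 = D and h_k = C A^{k−1} B for k ≥ 1. The sketch below checks this numerically at one test point z; the matrices are assumed example values.

```python
# Numerical check of (8.2.31): H(z) = (1/z) C Phi(z) B + D = C (zI - A)^{-1} B + D
# equals the Z-transform of the weighting function h_k (impulse response).
import numpy as np

A = np.array([[0.0, 1.0], [-0.1, 0.7]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
d = 0.2
z = 2.0                             # test point outside the poles of H(z)

H_direct = c @ np.linalg.solve(z * np.eye(2) - A, b) + d

# Weighting function from the impulse response of (8.2.23): h_0 = d,
# h_k = c A^{k-1} b for k >= 1 (state after the impulse is x_1 = b).
h = [d]
x = b.copy()
for _ in range(60):
    h.append(c @ x)
    x = A @ x

H_series = sum(hk * z ** (-k) for k, hk in enumerate(h))
print(abs(H_direct - H_series) < 1e-9)   # True
```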
9. SAMPLED DATA SYSTEMS.
Sampled data systems are systems in which discrete time systems and continuous time systems interact. We shall start analysing this type of system with an example.
9.1. Computer Controlled Systems.
A process computer is a computer able to work in real time. It is a computer with analog or digital I/O interfaces, having a real time operating system able to perform data acquisition, computation and command in real time.
We can look at the process computer as a black box having some analog (continuous time) terminals. The principle diagram of a computer controlled system is depicted in Fig. 9.1.1.
[Figure 9.1.1. shows the principle diagram: the process computer, with analog-numerical converters (ANC) on input ports 5 and 6, the Numerical Control Algorithm (NCA), and a numerical-analog converter (NAC) on output port 10, connected to the controlled plant (actuator, technical plant, transducer); the signals r(t), u(t), y(t) ∈ [0, 10] V.]
Figure no. 9.1.1.
The computer has some terminals, denoted (labelled, marked) by numbers, as analog ports. For example, as depicted in Fig. 9.1.1., this control system uses two analog input signals, ports 5 and 6, and one analog output signal, port 10.
To these ports are connected:
the desired input r(t), also called "set point";
the measured variable y(t), also called "feedback variable";
the command variable u(t), also called "manipulated variable".
The controlled plant contains an actuator, a technological plant, and a transducer.
The process computer has two analog-numerical converters (ANC), also called "analog to digital converters", which convert the input variables r(t), y(t), which are time functions, into strings of numbers r_k^N, y_k^N:
(9.1.1) r_k^N = k_AN · r(kT) , r(kT) = r(t)|_{t=kT}
(9.1.2) y_k^N = k_AN · y(kT) , y(kT) = y(t)|_{t=kT}
where k_AN is the analog-numerical conversion factor.
We put k, or kT, as index for all the variables, considering that the data acquisition and all the numerical computations are very fast and performed precisely at the time moment t = kT.
The result of the numerical algorithm, denoted by w_k^N, is applied to the so-called numerical-analog converter (NAC). The NAC delivers a piecewise constant voltage:
(9.1.3) u(t) = k_NA · w_k^N , ∀t ∈ (kT, (k + 1)T]
where k_NA is the numerical-analog conversion factor.
The structure described above and depicted in Fig. 9.1.1. constitutes a so-called "closed loop system".
In this structure, the information is represented in two ways:
by numbers, or strings of numbers, inside the numerical algorithm, and
by time functions for the controlled plant.
To obtain a good behaviour of the closed loop system we have to manage numbers and time functions simultaneously.
There are different ways by which the operating system physically manages this job, that is, controls the output y(t) to stay as close as possible to the set point r(t). The simplest one is the so-called simple cycle with a constant repeating time. All the aspects regarding this loop are included into (represented by) a subroutine or a task. Each task has a name or a label number. For example, in a simple cycle the logical structure of the software can be represented by the logical diagram depicted in Fig. 9.1.2.
[Figure 9.1.2. shows the flow chart of the simple cycle: Initialize System, read some default variables, other jobs, Task no. 8, other jobs, then repeat.]
Figure no. 9.1.2.
For example, Task no. 8, regarding our computer controlled system, could be as below (in brackets are statements in REAL_TIME BASIC, as an example).
Task no. 8.
Initialise: X1, X2; a1, a2, a3, a4; b1, b2; c1, c2.  % If it is required by a flag
Read R (Let R=IN(5))       % R = r_k^N = k_AN·r(kT)
Read Y (Let Y=IN(6))       % Y = y_k^N = k_AN·y(kT)
W = c1*X1 + c2*X2          % W = w_k^N = c1·x_k^1 + c2·x_k^2
Write W ( OUT(W,10) )      % u(t) = W , ∀t ∈ (kT, (k+1)T]
E = R−Y                    % E = e_k^N = k_AN·e(kT); e(t) = r(t) − y(t)
Z = X1                     % Z = X1 = x_k^1 is kept as an additional variable to be able to compute x_{k+1}^2
X1 = a1*Z + a2*X2 + b1*E   % x_{k+1}^1 = a1·x_k^1 + a2·x_k^2 + b1·e_k^N
X2 = a3*Z + a4*X2 + b2*E   % x_{k+1}^2 = a3·x_k^1 + a4·x_k^2 + b2·e_k^N
Return.
This corresponds, for the control algorithm, to the state equations:
(9.1.4) x_{k+1} = A x_k + b_N e_k^N
(9.1.5) w_k^N = c_N^T x_k + d_N e_k^N  (d_N = 0)
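One pass of Task no. 8 can be sketched as a pure Python function implementing (9.1.4)-(9.1.5) with d_N = 0; the reads and writes of the analog ports are left out so the step is testable, and the coefficient values used in the call are placeholders for illustration only.

```python
# One pass of Task no. 8: output from the current state, then state update.
#   x_{k+1} = A x_k + b_N e_k^N,   w_k^N = c_N^T x_k   (d_N = 0)
def task8_step(R, Y, x1, x2, a1, a2, a3, a4, b1, b2, c1, c2):
    W = c1 * x1 + c2 * x2      # W = w_k^N, computed before the update
    E = R - Y                  # numerical error e_k^N
    z = x1                     # keep x_k^1 before it is overwritten
    x1 = a1 * z + a2 * x2 + b1 * E
    x2 = a3 * z + a4 * x2 + b2 * E
    return W, x1, x2

# Placeholder coefficients and measurements, for illustration only.
W, x1, x2 = task8_step(R=1.0, Y=0.25, x1=0.5, x2=-0.5,
                       a1=0.9, a2=0.1, a3=0.0, a4=0.8,
                       b1=0.2, b2=0.1, c1=1.0, c2=2.0)
print(W, x1, x2)   # -0.5  0.55  -0.325
```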
Usually in process computers there is only one ANC. Several analog signals are converted to numbers by using so-called analog multiplexers (MUX).
An analog-numerical converter ANC transforms an analog signal into a string of numbers represented on a number of bits. If the converter has p bits and the analog signal, say y, ranges between y_min and y_max, then
(9.1.6) y ∈ [y_min, y_max) , y^N ∈ [0, 2^p − 1] ⇒ k_AN = 2^p / (y_max − y_min)
The physical structure of an ANC is a matter of hardware, well defined and well known.
From the behavioural point of view, as an oriented object, in any ANC there are two types of phenomena:
 time conversion
 amplitude conversion
Time conversion expresses the conversion of a time function into a string of numbers. For example, from y(t) the string (sequence) y_k = y(kT) is obtained. This is the so-called sampling process, with the sampling period T.
The variable y_k from the string has the same dimension as y(t). If, for example, y(t) is a voltage which takes values inside a domain, then y_k is also a voltage.
This sampling process is represented in a principle diagram by a symbol as in Fig. 9.1.3.; we can denote it SPS, "sampling physical symbol".
It is not a mathematical operator; it only lets one understand that a time function is converted to a string of numbers which represents its values at some time moments t = kT, and nothing else.
Amplitude conversion equivalently expresses all the phenomena by which y_k is converted into a number y_k^N, represented in a computer on a number of bits, also called an ANC-number. It is expressed by the conversion factor k_AN, whose dimension is [k_AN] = 1/[y].
For example, if y is a voltage, [y] = volt ⇒ [k_AN] = 1/volt.
The two phenomena allow us to represent an ANC, as a whole, in a principle diagram as in Fig. 9.1.4.
[Figure 9.1.3. shows the SPS (sampling physical symbol): time function y(t) → string of numbers y(kT) = y_k. Figure 9.1.4. shows the ANC block diagram: y(t) → SPS → y(kT) → k_AN → y_k^N.]
Figure no. 9.1.3. Figure no. 9.1.4.
We modelled the amplitude conversion simply by a proportional factor k_AN, which is also a linear mathematical operator.
In fact, this process is more complicated, because y_k can take an infinite number of values in a bounded domain, but y_k^N can take only a finite number of values: from 0 to 2^p − 1 for a p-bit converter. The steady-state diagram of the amplitude conversion is depicted in Fig. 9.1.5., considering p = 2.
[Figure 9.1.5. shows the staircase characteristic y^N versus y for p = 2: the interval [y_min, y_max) is divided into 2^p quantization intervals mapped to the integer codes 0, 1, ..., 2^p − 1. Figure 9.1.6. shows the conversion noise model: y^N = k_AN·y − δ^N, with δ^N ∈ [0, 1).]
Figure no. 9.1.5. Figure no. 9.1.6.
Because y^N is an integer number and (k_AN·y) is a real one, their difference δ^N ∈ [0, 1) appears like a noise and is called "amplitude conversion noise".
Also, the "converter amplitude resolution", denoted by δ_y, can be defined as the maximum analog interval that can be represented by the same number:
(9.1.7) δ_y = 1/k_AN = (y_max − y_min)/2^p
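The conversion (9.1.6) and the resolution (9.1.7) can be sketched directly; the truncation to an integer code produces exactly the conversion noise δ^N ∈ [0, 1) discussed above. The range and input values below are illustrative assumptions.

```python
# Sketch of the ANC amplitude conversion (9.1.6)-(9.1.7): a p-bit converter
# maps y in [y_min, y_max) to an integer code y_N in {0, ..., 2^p - 1},
# with k_AN = 2^p / (y_max - y_min) and resolution delta_y = 1 / k_AN.
import math

def anc_convert(y, y_min, y_max, p):
    k_AN = 2 ** p / (y_max - y_min)
    y_N = math.floor(k_AN * (y - y_min))   # integer code (truncation noise)
    y_N = min(y_N, 2 ** p - 1)             # saturate at full scale
    return y_N, k_AN

p = 2                                      # the 2-bit case of Fig. 9.1.5
y_N, k_AN = anc_convert(3.2, 0.0, 10.0, p)
delta_y = 1 / k_AN                         # converter amplitude resolution (9.1.7)
print(y_N, delta_y)                        # 1  2.5
```

With p = 12, as mentioned below, delta_y shrinks by a factor of 2^10 and the conversion noise becomes negligible.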
If we want to represent the amplitude conversion noise, then the conversion part of an ANC can be represented as in Fig. 9.1.6. When p is large enough (p = 12, for example) this noise can be neglected.
The numerical-analog converter NAC converts a string of numbers w_k^N into a piecewise constant time function u(t), as defined in (9.1.3).
In the NAC there are also two phenomena:
 time conversion
 amplitude conversion.
In any physical NAC, the number w_k^N is kept in a numerical memory, as a combination of bits, during a sampling period. It can be interpreted as a time function w^N(t) as in (9.1.3).
This memorisation process is also called a "holding process" over a sampling period.
The memorisation command is issued only at the time moments t = kT, so the holding process is connected with a sampling process.
Together they form the so-called "sampling-hold process" and express the time conversion. The sampling-hold process is represented, in principle diagrams, by a "sample-hold physical symbol" SHPS, as depicted in Fig. 9.1.7.
It is not a mathematical operator; it only illustrates that a string of numbers w_k, sampled at the time moments t = kT, is converted into a piecewise constant time function w(t),
(9.1.8) w(t) = w_k , t ∈ (kT, (k + 1)T]
In a physical NAC, the input of the SHPS is the string of numbers w_k^N and the output of the SHPS is a time function w^N(t) whose values are numbers.
The bit combination of w^N(t) is converted into a voltage or a current by a system of current switches, performing in this way the amplitude conversion, the result being u(t).
This interpretation of the phenomena allows us to represent a NAC, in a principle diagram, by a block diagram as depicted in Fig. 9.1.8.
However, we can equivalently consider, even if in practice it may not happen like that but the results are the same, that the numbers w_k^N are first converted into a string of physical signals w_k, current or voltage, and then kept constant during a sampling period.
Now the input to the SHPS is w_k and the output is u(t). This allows us to represent a NAC by a principle diagram as depicted in Fig. 9.1.9.
Both representations from Figs. 9.1.8. and 9.1.9. express the same input-output behaviour. The last one has the advantage that it allows us to include k_NA, and also k_AN for the ANC, in the numerical control algorithm and not in the analog part of the control system.
[Figure 9.1.7. shows the SHPS (Sample-Hold Physical Symbol): string of numbers w_k → piecewise constant time function w(t). Figure 9.1.8. shows the physical NAC: w_k^N → SHPS → w^N(t) → k_NA → u(t). Figure 9.1.9. shows the equivalent NAC: w_k^N → k_NA → w_k → SHPS → u(t).]
Figure no. 9.1.7. Figure no. 9.1.8. Figure no. 9.1.9.
We saw that physically, as in the computer program, first r(t) is acquired as R, then y(t) is acquired as Y, and after that E = R − Y is computed numerically.
To get a simpler representation, but with the same input-output behaviour, we can consider that first the difference
e(t) = r(t) − y(t)
is formed as a physical signal, and then e(t) is sampled and converted into a number E, as if only one ANC were used.
This equivalence process is illustrated in Fig. 9.1.10.
[Figure 9.1.10. shows the equivalence: on the left, r(t) and y(t) are separately converted by k_AN into r_k^N (R) and y_k^N (Y) and subtracted numerically (E = R − Y); on the right, the input-output equivalent: e(t) = r(t) − y(t) is formed first, then sampled and converted by k_AN into e_k^N (E).]
Figure no. 9.1.10.
All this allows us to represent the computer controlled system from Fig. 9.1.1. by a block diagram as in Fig. 9.1.11.
This diagram contains both physical symbols, for the ANC and NAC, and mathematical symbols in the form of block diagrams: H_F(s), the summing operator, k_AN, k_NA.
So, Fig. 9.1.11. will now be interpreted as a principle diagram.
[Figure 9.1.11. shows the closed loop: the set point r(t) minus the controlled (feedback) variable y(t) gives e(t); the ANC (sampler and k_AN) produces e_k^N; the numerical control algorithm x_{k+1} = A x_k + b_N e_k^N, w_k^N = C_N x_k + d_N e_k^N produces w_k^N; the NAC (k_NA and hold) produces u(t), which drives the controlled plant H_F(s) (the fixed part of the system) with output y(t).]
Figure no. 9.1.11.
Both the k_AN, k_NA factors can be included into the control algorithm, considering
(9.1.9) e_k^N = k_AN e_k ⇒ b = b_N k_AN
w_k = k_NA w_k^N ⇒ w_k = (k_NA C_N)x_k + (k_NA d_N k_AN)e_k ⇒
(9.1.10) C = k_NA C_N ; d = k_NA d_N k_AN
k_AN · k_NA = 1 if p = q and y_max − y_min = u_max − u_min,
so we can represent the computer controlled system as a sampled data system as in Fig. 9.1.12.
This is a standard representation of such systems.
[Figure 9.1.12. shows the standard sampled data structure: the set point r(t) minus the controlled (feedback) variable y(t) gives e(t); a sampler symbol produces e_k = e(kT); the discrete time system x_{k+1} = A x_k + b e_k, w_k = C x_k + d e_k produces w_k; a sample-hold symbol produces u(t), which drives the continuous time system H_F(s) with output y(t).]
Figure No. 9.1.12.
We denote by H_F(s) the transfer function of the so-called "fixed part of the system". We suppose that it is linear.
Such a system is called a "sampled data system" (SDS). It works with strings of numbers and time functions.
The SDS feature appears because a physical plant (a continuous time system) is combined with a computer (a discrete time system).
There are many situations where sampled data systems appear: radar, chemical plants, economic models, and so on.
9.2. Mathematical Model of the Sampling Process.
9.2.1. TimeDomain Description of the Sampling Process.
We saw that for the physical sampler the input is a time function and the output is a string of numbers.
For continuous time systems the Laplace transform, Y(s) = L{y(t)}, is a very useful mathematical instrument. The Laplace transform has no physical meaning; it exists only in our minds. We cannot compute L{y_k} because it does not exist.
To be able to manage both the information expressed by a time function and the information expressed by a string of numbers, the so-called "sampled signal" was invented, defined as a string of Dirac impulses,
(9.2.1) y*(t) = Σ_{k=0}^{∞} y_k · δ(t − kT) , y_k = y(kT⁺)
where y*(t) is defined in the sense of distribution theory, over T on R.
(9.2.2) δ(t) = { 0 , t ≠ 0 ; ∞ , t = 0 } , but ∫_{−∞}^{∞} δ(t)dt = 1
We can plot the sampled signal as a string of arrows whose lengths are just the areas of the corresponding Dirac impulses, as in Fig. 9.2.1.
[Figure 9.2.1. shows y(t) and the sampled signal y*(t) as arrows at t = 0, T, 2T, ...; the area of each impulse is ∫ y(t)δ(t − kT)dt = y(kT⁺).]
Figure no. 9.2.1.
This signal y*(t) has a Laplace transform Y*(s), but it contains information only on the values y_k = y(kT⁺).
This process is expressed by an operator called the "sampling operator", denoted by the symbol { }*. For example, we write
{y(t)}* = y*(t),
where y*(t) is defined as above.
[Figure 9.2.2. shows the mathematical model of the sampling operator, with sampling period T: sampling of a time function, y(t) → y*(t) = {y(t)}*, and sampling of a string of numbers, (y_k)_{k≥0} → y*(t) = {y_k}*.]
Figure no. 9.2.2.
In a block diagram this mathematical operator is drawn by the symbol depicted in Fig. 9.2.2., where T is "the sampling period".
The input to this operator is the time function y(t), whose Laplace transform is Y(s), and the output is the sampled signal defined above, which has a Laplace transform
(9.2.3) Y*(s) = L{y*(t)}
The sampling operator can also be applied to a pure string of numbers, but in such a case it is supposed that each number of the string is related to an equally spaced time moment, y_k to the moment kT.
(9.2.4) {y_k}*_{k≥0} = Σ_{k=0}^{∞} y_k δ(t − kT) = {y_cov(t)}* , y_cov(t)|_{t=kT} = y_k
9.2.2. Complex Domain Description of the Sampling Process.
We can approach the sampling process in the complex domain by using the Laplace transform, which can also be related to continuous time systems.
The following terms are used:
T — sampling period;
f_s — sampling frequency;
ω_s = 2πf_s — sampling angular frequency.
The Laplace transform of a sampled signal is
(9.2.5) Y*(s) = L{y*(t)} = Σ_{k=0}^{∞} y_k e^{−kTs}
(9.2.6) L{δ(t − kT)} = e^{−kTs}
Denoting,
(9.2.7) z = e^{Ts} ; s = (1/T)·ln z
we observe from (9.2.5) that the Z-transform of a signal is just the Laplace transform of the sampled signal, replacing only e^{sT} by z, that is, s = (1/T)·ln z,
(9.2.8) Y*(s)|_{s=(1/T)ln z} = Σ_{k=0}^{∞} y_k z^{−k} = Z{y_k} = Z{y(t)} = Y(z)
To express Y*(s) by a complex integral, the selection property of the Dirac distribution can be utilised:
(9.2.9) y*(t) = y(t) · Σ_{k=0}^{∞} δ(t − kT) = y(t) · p(t)
(9.2.10) p(t) = Σ_{k=0}^{∞} δ(t − kT)
For t = kT, δ(t − kT) is not defined as a function, but its integral gives its area and it admits a Laplace transform. Also p(t) is a generalised function that admits a Laplace transform.
Because y*(t) from (9.2.9) is a product of two time functions (p(t) being a generalised one), both of which admit Laplace transforms, we can use the complex convolution theorem
(9.2.11) Y*(s) = L{y*(t)} = L{y(t)p(t)} = (1/2πj) ∫_{c−j∞}^{c+j∞} Y(ξ)P(s − ξ)dξ
If σ_1 is the convergence abscissa of y(t) and σ_2 is the convergence abscissa of p(t), and if c is chosen so that
(9.2.12) σ_1 < c < σ − σ_2 , σ > max(σ_1, σ_2, σ_1 + σ_2)
then the vertical line separates the singular points (poles) of Y(ξ) and P(s − ξ) in the ξ-complex plane, as in Fig. 9.2.3.
[Figure 9.2.3. shows the ξ-plane: the vertical integration line from c − j∞ to c + j∞, the contour Γ_1 closing over the left half-plane (R = ∞) around the poles of Y(ξ), and the contour Γ_2 closing over the right half-plane around the poles of P(s − ξ), located at ξ = s + jnω_s.]
Figure no. 9.2.3.
(9.2.13) P(s) = L{ Σ_{k=0}^{∞} δ(t − kT) } = Σ_{k=0}^{∞} e^{−kTs} = 1/(1 − e^{−Ts}) , Re(s) > 0
P(s − ξ) = 1/(1 − e^{−T(s−ξ)}) , Re(s) > Re(ξ)
We can say that,
(9.2.14) Y*(s) = (1/2πj) ∫_{c−j∞}^{c+j∞} Y(ξ) · 1/(1 − e^{−Ts}e^{Tξ}) dξ
By completing the vertical line with a contour Γ_1 containing the whole left-hand half-plane, and if the above integral is zero on Γ_1, then it becomes an integral on a closed contour which contains all the poles of Y(ξ), so the residues theorem can be applied.
∫_{closed contour} = ∫_{c−j∞}^{c+j∞} + ∫_{Γ_1} , ∫_{Γ_1} = 0 ⇒ Y*(s) = Σ_{poles of Y(ξ)} Res[ Y(ξ) · 1/(1 − e^{−Ts}e^{Tξ}) ]
Because the Z-transform is just the result of this change of variable in Y*(s):
(9.2.15) Y(z) = Y*(s)|_{s=(1/T)ln z} = Z{y(t)} = Σ_{poles of Y(ξ)} Res[ Y(ξ) · 1/(1 − z^{−1}e^{Tξ}) ]
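The residue formula (9.2.15) can be checked numerically on a simple signal. Assuming y(t) = e^(−at), so Y(s) = 1/(s + a), the single pole ξ = −a gives Y(z) = 1/(1 − z^(−1)e^(−aT)) = z/(z − e^(−aT)), which must equal the defining series Σ_k y(kT)z^(−k); a, T and the test point z are example values.

```python
# Numerical check of (9.2.15) for y(t) = exp(-a*t), Y(s) = 1/(s + a):
# the residue at the single pole xi = -a gives
#   Y(z) = 1 / (1 - z^{-1} * exp(-a*T)),
# which must match the series sum_k y(kT) z^{-k}.
import math

a, T, z = 1.0, 0.1, 2.0
Y_residue = 1.0 / (1.0 - (1.0 / z) * math.exp(-a * T))

# Partial sum of the defining series; the ratio exp(-a*T)/z < 1, so it converges.
Y_series = sum(math.exp(-a * k * T) * z ** (-k) for k in range(200))
print(abs(Y_residue - Y_series) < 1e-12)   # True
```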
If we complete the vertical line (c − j∞, c + j∞) with a contour Γ_2 containing the whole right half-plane, the above integral can again be computed on a closed contour by using the residues theorem. Inside this closed contour there are only the poles of P(s − ξ).
These poles are
(9.2.16) ξ_n = s + jnω_s , where ω_s = 2π/T = 2πf_s
By using the residues theorem the following formula is obtained:
(9.2.17) Y*(s) = (1/T) Σ_{n=−∞}^{+∞} Y(s + jnω_s)
Two properties of Y*(s) can be mentioned:
1) Y*(s) is a periodical function with the period jω_s, that is,
(9.2.18) Y*(s) = Y*(s + jω_s) = Y*(s + jnω_s) , ∀s
2) If Y*(s) has a pole s_k, then it also has the poles s_k + jnω_s , ∀n ∈ Z.
9.2.3. Shannon Sampling Theorem.
Suppose we have a time function y(t) from which, by sampling, the string of numbers {y_k} = {y(kT)} has been obtained.
The problem is: what condition has to be fulfilled to be able to reconstruct the continuous signal y(t) using only the sampled values y_k? This can be presented equivalently in the frequency domain.
The continuous time signal y(t) is uniquely expressed by the complex characteristic Y(jω):
(9.2.19) A_y(ω) = |Y(jω)| , φ_y(ω) = arg(Y(jω))
The string of numbers y_k is uniquely expressed by the discrete complex characteristics,
(9.2.20) Y(z)|_{z=exp(jωT)} = Y*(jω)
(9.2.21) A_y*(ω) = |Y*(jω)|
(9.2.22) φ_y*(ω) = arg(Y*(jω))
The reconstruction problem in the frequency domain can be stated as: given A_y*(ω), φ_y*(ω), what conditions have to be fulfilled so that A_y(ω), φ_y(ω) can be exactly recovered?
To analyse this, the second formula (9.2.17) of the Laplace transform of the sampled signal is utilised,
(9.2.23) Y*(s) = (1/T) Σ_{n=−∞}^{+∞} Y(s + jnω_s) , where ω_s = 2π/T
Supposing that
(9.2.24) Y(jω) = 0 for |ω| > ω_c , ω_c < ω_s/2
then,
(9.2.25) Y*(jω) = (1/T) Σ_{n=−∞}^{+∞} Y(jω + jnω_s)
The sampled signal contains an infinite number of frequency components, as represented in Fig. 9.2.4.
[Figure 9.2.4. shows the amplitude spectrum |Y(jω)|, bounded at ±ω_c, and the spectrum of the sampled signal |Y*(jω)|: replicas (1/T)·Y(jω + jnω_s) of the baseband spectrum centred at 0, ±ω_s, ±2ω_s, ...; an ideal low-pass filter with cutoff ω_s/2 selects the baseband replica.]
Figure no. 9.2.4.
If
(9.2.26) ω_s > 2ω_c , T < (1/2)·(2π/ω_c)
then by using an ideal low pass filter it is possible to obtain the amplitude-frequency characteristic of the continuous signal from the amplitude-frequency characteristic of the sampled signal. This is the so-called Shannon's theorem:
A continuous time signal y(t) with a frequency characteristic bounded at ω_c,
(9.2.27) Y(jω) = 0 , |ω| > ω_c ,
can be reconstructed from its sampled signal y_k if T < (1/2)·(2π/ω_c).
This holds just in theory, because in practice an ideal low pass filter does not exist; such a filter is not a realisable system. It is theoretical also because few signals have a bounded frequency spectrum.
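When Shannon's condition ω_s > 2ω_c is violated, two different sinusoids become indistinguishable from their samples. A minimal sketch, with example frequencies chosen for the illustration: sampling at f_s = 10 Hz, a 9 Hz sinusoid produces exactly the samples of a 1 Hz sinusoid with opposite sign, since sin(2π·9·kT) = sin(2π·9·kT − 2πk) = −sin(2π·1·kT).

```python
# Aliasing illustration: f_s = 10 Hz (T = 0.1 s) fails the condition
# f_s > 2*f for a 9 Hz sinusoid, whose samples coincide with those of a
# 1 Hz sinusoid (with opposite sign).
import math

T = 0.1                              # sampling period, f_s = 10 Hz
samples_9Hz = [math.sin(2 * math.pi * 9 * k * T) for k in range(50)]
samples_1Hz = [math.sin(2 * math.pi * 1 * k * T) for k in range(50)]

aliased = all(abs(s9 + s1) < 1e-9 for s9, s1 in zip(samples_9Hz, samples_1Hz))
print(aliased)   # True: the 9 Hz samples equal minus the 1 Hz samples
```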
9.3. Sampled Data Systems Modelling.
9.3.1. Continuous Time Systems Response to Sampled Input Signals.
Following the mathematical modelling of some physical systems, a structure like the one drawn in Fig. 9.3.1. can appear.
[Figure 9.3.1. shows a continuous time system H(s) driven by a sampled signal: u(t), U(s) → sampler (T) → u*(t) = Σ_{k=0}^{∞} u(kT)δ(t − kT), U*(s) → H(s) → y(t), Y(s) → sampler (T) → y*(t), Y*(s).]
Figure no. 9.3.1.
This is a continuous time system expressed by a transfer function H(s), whose output signal is y(t), but whose input is a sampled signal u*(t).
The Laplace transform of the output is Y(s). Sometimes we may denote
(9.3.1) U*(s) = {U(s)}* = L{u*(t)},
which means that U*(s) is the Laplace transform of the sampled signal u*(t). Then,
(9.3.2) Y(s) = H(s)U*(s)
We could compute the time response
(9.3.3) y(t) = L^{−1}{H(s)U*(s)}
but this is very difficult, because usually U*(s) contains terms of the form e^{−sT}.
Example: u(t) = 1(t) ⇒ (9.3.4) U*(s) = 1/(1 − e^{−Ts})
Because of that, we content ourselves with computing only the values of the response at the time moments t = kT⁺, giving up the values of the response between the sampling moments kT.
This giving-up process is expressed in a block diagram, which illustrates our computing intentions, by another ideal sampler connected at the continuous system output, whose output is,
(9.3.5) y*(t) = Σ_{k=0}^{∞} y(kT⁺)δ(t − kT) ; Y*(s) = L{y*(t)}
It can be proved that
(9.3.6) Y*(s) = {H(s)U*(s)}* = H*(s)U*(s) ,
where H*(s) is the so-called "sampled transfer function", which is the Laplace transform of the sampled weighting function,
(9.3.7) h(t) = L^{−1}{H(s)}
H*(s) = Σ_{k=0}^{∞} h(kT)e^{−kTs} = Σ_{poles of H(ξ)} Res[ H(ξ) · 1/(1 − e^{−Ts}e^{Tξ}) ]
(9.3.8) Y(z) = H(z)U(z)
because {Y(s)}* = {H(s)U*(s)}* = H*(s)·U*(s), where H*(s) becomes H(z) and U*(s) becomes U(z) after the substitution s = (1/T)ln z.
Based on (9.3.6) an algebra regarding sampled data systems can be developed. Some rules can be specified as follows:
(9.3.9) {αA(s) + βB(s)}* = αA*(s) + βB*(s)
(9.3.10) Z{αA(s) + βB(s)} = αA(z) + βB(z)
(9.3.11) {A(s)B*(s)}* = A*(s)B*(s)
(9.3.12) Z{A(s)B*(s)} = A(z)B(z)
(9.3.13) {A(s)B(s)}* = AB*(s) ≠ A*(s)B*(s)
(9.3.14) Z{A(s)B(s)} = AB(z) ≠ A(z)B(z)
If C(s) is a periodical function with the period jω_s,
(9.3.15) C(s) = C(s + jω_s) , ∀s ∈ C
then
(9.3.16) {A(s)C(s)}* = A*(s)C(s) and Z{A(s)C(s)} = A(z) · C(s)|_{s=(1/T)ln z}
where:
(9.3.17) A(z) = Σ_{poles of A(ξ)} Res[ A(ξ) · 1/(1 − z^{−1}e^{ξT}) ]
Example:
C(s) = 1 − e^{−Ts} ⇒ C(s + jω_s) = 1 − e^{−Ts}·e^{−j2π} = 1 − e^{−Ts} = C(s)
If we have a system with the transfer function H(s), whose discrete form is H(z), and an input U(s), whose discrete transform is U(z), in the structure depicted in Fig. 9.3.1., then the Z-transform of the output, Y(z), is:
(9.3.18) Y(z) = H(z)U(z)
Applying the inverse Z-transformation, we get,
(9.3.19) Z^{−1}{H(z)U(z)} = y(kT⁺)
If the system response
(9.3.20) y(t) = L^{−1}{H(s)U*(s)} = L^{−1}{Y(s)}
has discontinuities at the sampling times t = kT, then the inverse Z-transform of Y(z) is
(9.3.21) Z^{−1}{Y(z)} = y(kT⁺) = lim_{t→kT, t>kT} y(t)
9.3.2. Sampler - Zero Order Holder (SH).
There are many devices that have at the input a continuous function u(t) and at the output a piecewise constant function y_e(t), where
(9.3.22) y_e(t) = u(kT) , ∀t ∈ (kT, (k + 1)T]
as depicted in Fig. 9.3.2. One example is a numerical-analog converter NAC connected to a data bus, as discussed above. In such a NAC, u(t) has the meaning of a binary number representation on the data bus.
[Figure 9.3.2. shows the continuous input u(t) and the staircase output y_e(t), constant on each interval (kT, (k+1)T] at the value u(kT).]
Figure no. 9.3.2.
We can see that the information contained in the output y_e(t) represents only the input values at some time moments, called sampling time moments, kT, so a sampling process is involved.
This sampling process is expressed by its mathematical model: the ideal sampler, that is, the sampling operator whose output is the sampled signal u*(t).
The process through which u*(t) is transformed into y_e(t), a piecewise constant time function, is represented by an operator, a transfer function H_e0(s), whose expression will be determined as follows.
The sample-hold process is then represented by a block diagram, a mathematical operator, as in Fig. 9.3.3.
[Figure 9.3.3. shows the mathematical model: u(t) → ideal sampler → u*(t) → H_e0(s) → y_e(t).]
Figure no. 9.3.3.
Suppose that u(t) is a special function having u(0) = 1, u(kT) = 0, ∀k ≥ 1.
According to the definition (9.2.1),
u*(t) = u(0)δ(t) + u(T)δ(t − T) + u(2T)δ(t − 2T) + ... = 1·δ(t) = δ(t)
For this input signal, according to (9.3.22), the output y_e(t) is represented graphically in Fig. 9.3.4.
[Figure 9.3.4. shows the input u*(t) = δ(t) and the response y_e(t): a rectangular pulse of height 1 on the interval (0, T].]
Figure no. 9.3.4.
This time function is the response of the system to the unit Dirac impulse, which means it is a weighting function and its Laplace transform is a transfer function,
(9.3.23) L{h_e0(t)} = H_e0(s) = 1/s − e^{−Ts}/s ⇒ H_e0(s) = (1 − e^{−Ts})/s
This is the transfer function of the zero order holder.
9.3.3. Continuous Time System Connected to a SH.
Suppose that the output of a zero order holder drives the input of a continuous time system with the transfer function H(s), as in Fig. 9.3.5.
Because the mathematical model of the SH contains the transfer function H_e0(s), this and H(s) can be considered as a series connection, equivalent to G(s) as depicted in Fig. 9.3.6., where
(9.3.24) G(s) = H_e0(s)H(s)
[Figure 9.3.5. shows the physical picture: the SHPS (not a mathematical model, so we cannot operate with it) followed by the transfer function H(s). Figure 9.3.6. shows the mathematical model of a SH controlling a continuous system: u(t) → sampler → u*(t) → H_e0(s) → y_e0(t) → H(s) → y(t), with H_e0(s)H(s) grouped as G(s).]
Figure no. 9.3.5. Figure no. 9.3.6.
Now the behaviour, at least at the sampling time moments, can be evaluated by using the methods of SDS:
Y(s) = G(s)U*(s)
(9.3.25) Y*(s) = {Y(s)}* = {G(s)U*(s)}* = G*(s)U*(s)
(9.3.26) G*(s) = {H_e0(s)H(s)}* = {(1 − e^{−Ts}) · H(s)/s}* = (1 − e^{−Ts}) · {H(s)/s}*
because (1 − e^{−Ts}) is a periodical function.
(9.3.27) G(z) = Z{G(s)} = (1 − z^{−1}) · Z{H(s)/s}
(9.3.28) Y(z) = (1 − z^{−1}) · Z{H(s)/s} · U(z)
Application. Suppose we have applied a unit step input, U(z) = z/(z − 1), and H(s) = b/(s + a). By using the above relations we can easily determine the response of this system:
Z{H(s)/s} = Z{b/(s(s + a))} = Σ_{poles of b/(ξ(ξ+a))} Res[ b/(ξ(ξ + a)) · 1/(1 − z^{−1}e^{Tξ}) ] = (b/a) · 1/(1 − z^{−1}) − (b/a) · 1/(1 − z^{−1}e^{−aT})
If we denote λ = e^{−aT}, c = (b/a)(1 − λ), then:
Z{H(s)/s} = (b/a)[ z/(z − 1) − z/(z − λ) ] = (b/a) · (1 − λ)z / ((z − 1)(z − λ))
Y(z) = ((z − 1)/z) · (b/a)(1 − λ)z / ((z − 1)(z − λ)) · U(z) ⇒ Y(z) = c/(z − λ) · U(z) = G(z)U(z)
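The Application's result can be checked numerically: with λ = e^(−aT) and c = (b/a)(1 − λ), the recursion behind G(z) = c/(z − λ), namely y_{k+1} = λ·y_k + c·u_k, must reproduce the exact continuous-time step response of b/(s + a) at the sampling moments, y(kT) = (b/a)(1 − e^(−akT)). The values of a, b, T below are example choices.

```python
# Check of G(z) = c/(z - lambda) for H(s) = b/(s + a) behind a
# sampler -- zero order holder: the discrete recursion must hit the
# continuous step response exactly at t = kT.
import math

a, b, T = 2.0, 3.0, 0.1
lam = math.exp(-a * T)
c = (b / a) * (1 - lam)

y = 0.0
for k in range(1, 21):
    y = lam * y + c * 1.0                            # unit step input u_k = 1
    y_exact = (b / a) * (1 - math.exp(-a * k * T))   # continuous response at kT
    assert abs(y - y_exact) < 1e-12
print(y)
```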
9.3.4. Mathematical Model of a Computer Controlled System.
Let us come back to Ch. 9.1., "Computer Controlled Systems", where one analysis result was presented in Fig. 9.1.11.
The numerical control algorithm, NCA, implemented in the computer, can be represented by a z-transfer function D(z), if the NCA is LTI,
(9.3.29) D(z) = W^N(z)/E^N(z)
(9.3.30) D(z) = (1/z)C_N(I − z^{−1}A)^{−1}b_N + d_N
The block diagram from Fig. 9.1.11., with the above notation, is presented in Fig. 9.3.7.
[Figure 9.3.7. shows the loop: r(t) − y(t) = e(t) → sampler → e(kT), e_k → K_AN → e_k^N → D(z) → w_k^N → K_NA → w_k → hold → u(t) → H_F(s) → y(t).]
Figure no. 9.3.7.
For example, if the computer program of the NCA is:
Read R,Y   % W, α, β have to be initialised.
E=R−Y
Write W
W=α*W+β*E  % w_{k+1}^N = αw_k^N + βe_k^N
then we have
D(z) = β/(z − α),
but, if we have the program:
Read R,Y   % W, α, β have to be initialised.
E=R−Y
W=α*W+β*E  % w_k^N = αw_{k−1}^N + βe_k^N
Write W
then the NCA has the transfer function
D(z) = βz/(z − α)
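The two program variants differ only in where "Write W" is placed, which shifts the output by one sample: D_1(z) = β/(z − α) writes the old W (one-step delay), while D_2(z) = βz/(z − α) updates W before writing it. This can be seen directly from their impulse responses; α, β and the error sequence below are example values.

```python
# Impulse responses of the two NCA program variants.
alpha, beta = 0.5, 2.0
E = [1.0, 0.0, 0.0, 0.0, 0.0]        # unit impulse error sequence

# Variant 1: "Write W" before the update  ->  D1(z) = beta/(z - alpha)
W, out1 = 0.0, []
for e in E:
    out1.append(W)                   # the old W is written
    W = alpha * W + beta * e

# Variant 2: update first, then "Write W" ->  D2(z) = beta*z/(z - alpha)
W, out2 = 0.0, []
for e in E:
    W = alpha * W + beta * e
    out2.append(W)                   # the fresh W is written

print(out1)   # [0.0, 2.0, 1.0, 0.5, 0.25]
print(out2)   # [2.0, 1.0, 0.5, 0.25, 0.125]
```

The first response is exactly the second one delayed by a sample, matching the extra z in D_2(z).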
The two factors k_AN, k_NA, of the ANC and NAC, can be grouped with the NCA transfer function, denoting,
(9.3.31) H_R(z) = k_AN k_NA D(z)
where, considering E(z) = Z{e_k}, W(z) = Z{w_k},
(9.3.32) H_R(z) = W(z)/E(z)
is the z-transfer function of the so-called "Discrete Time Controller", DTC for short.
With this, Fig. 9.3.7. will be equivalently represented as in Fig. 9.3.8.
[Figure 9.3.8. shows the equivalent loop: r(t) − y(t) = e(t) → sampler → e_k = e(kT) → H_R(z) → w_k → hold → u(t) → H_F(s) → y(t).]
Figure no. 9.3.8.
Of course H_R(z) comes from (9.3.31), having a very clear physical meaning, but, equivalently, we can consider that it comes from an s-transfer function H_R(s) whose Z-transform is just H_R(z),
(9.3.33) H_R(z) = Z{H_R(s)} = H_R*(s)|_{s=(1/T)ln z}
where
(9.3.34) H_R*(s) = W*(s)/E*(s) = L{w*(t)}/L{e*(t)}
and e*(t), w*(t) are the sampled signals associated to the strings e_k, w_k, respectively.
With this in mind, and taking into account the mathematical model of the sampler - zero order holder pair as depicted in Fig. 9.3.3., we can represent the computer controlled system by a block diagram in the sense of system theory, a block diagram which expresses a mathematical model, as in Fig. 9.3.9.
[Figure 9.3.9. shows the mathematical model: r(t) − y(t) = e(t) → sampler (T) → e*(t) → H_R(s), i.e. H_R*(s) → w(t) → sampler (T) → w*(t) → H_e0(s) = (1 − e^{−Ts})/s → u(t) → H_F(s) → y(t), with H_e0(s)H_F(s) grouped as G(s).]
Figure no. 9.3.9.
Because now we are processing a mathematical model, we can denote by G(s) the result of the series connection of two transfer functions,
(9.3.35) G(s) = H_e0(s)H_F(s) = ((1 − e^{−Ts})/s) · H_F(s) = (1 − e^{−Ts}) · [H_F(s)/s]
With this notation, the block diagram from Fig. 9.3.9. is represented in Fig. 9.3.10., which is the standard block diagram, in the sense of system theory, of the computer controlled system.
[Figure 9.3.10. shows the standard block diagram: r(t) − y(t) = e(t) → sampler (T) → e*(t) → H_R(s) → w(t) → sampler (T) → w*(t) → G(s) → y(t).]
Figure no. 9.3.10.
Observations:
1. This is one example showing how, starting from physical phenomena, we can get continuous time systems, described by s-transfer functions, which have sampled signals at the input.
2. Now it is very easy to manipulate the structure from Fig. 9.3.10. in order to get anything we are interested in regarding the system behaviour.
9.3.5. Complex Domain Description of Sampled Data Systems.
The mathematical model of a sampled data system can be easily determined in the complex domains ("s" or "z") using the sampled data systems algebra as presented in Ch. 9.3.1.
We can present, as an example, the computer controlled system behaviour starting from the block diagram of Fig. 9.3.10. The main goal is to express the output y(t), represented in the complex domains by Y(s), Y*(s) or Y(z), as a function of the complex representations of the input.
Here the sampled data systems algebra is applied step by step, but there are more powerful techniques.
From the block diagram we can write,

Y(s) = G(s)·W*(s)    (9.3.36)
W(s) = H_R(s)·E*(s)    (9.3.37)
E(s) = R(s) − Y(s)    (9.3.38)

In the above relations both W(s) and W*(s), respectively Y(s) and Y*(s), appear. New relations are obtained by applying the sampling rules to the three relations above:
E*(s) = [E(s)]* = [R(s) − Y(s)]* = R*(s) − Y*(s)
Y*(s) = [G(s)W*(s)]* = G*(s)W*(s)
W*(s) = [H_R(s)E*(s)]* = H_R*(s)E*(s)
W* = H_R*·(R* − G*W*)

W*(s) = [H_R*(s)/(1 + H_R*(s)G*(s))]·R*(s)    (9.3.39)
Y(s) = G(s)·[H_R*(s)/(1 + H_R*(s)G*(s))]·R*(s)    (9.3.40)
Because the controlled plant is a continuous system, y(t) exists for any t, and as a consequence the Laplace transform Y(s) exists. We observe that Y(s) depends on R*(s), which means that y(t) depends on the values r_k = r(kT), as we expected from the physical behaviour.
We can compute the inverse Laplace transform of Y(s) from (9.3.40) if r(t), and as a result R*(s), is given. Unfortunately, direct computation is a very difficult task.
By using some special methods, for example:
"The modified Z-transform",
"The time approach of discrete systems",
it is possible to compute y(t) for any t ∈ R.
If we want to compute the values of y(t) only for t = kT, it is enough to compute Y*(s) and Y(z). We can get Y*(s) by applying to (9.3.40) the sampling rules from (9.3.1), (9.3.12).
Y(s) = G(s)·[H_R*(s)/(1 + H_R*(s)G*(s))·R*(s)], that is, Y(s) = A(s)·B*(s) with A(s) = G(s) and B*(s) the remaining, already sampled, factor ⇒

Y*(s) = [G*(s)H_R*(s)/(1 + G*(s)H_R*(s))]·R*(s)    (9.3.41)
where

G*(s) = [(1 − e^{−Ts})·H_F(s)/s]*
The z-transform Y(z) can be simply obtained from (9.3.41) by the substitution s = (1/T)·ln z, getting,

Y(z) = [H_R(z)G(z)/(1 + H_R(z)G(z))]·R(z)    (9.3.42)

where:
H_R(z) = k_AN · k_NA · D(z)
G(z) = (1 − z^{−1})·Z{H_F(s)/s}
We observe from (9.3.42) that, for this structure of a computer controlled system, a closed loop z-transfer function H_v(z), as the ratio between the z-transform of the output and the z-transform of the input, can be determined,

H_v(z) = Y(z)/R(z) = H_R(z)G(z)/(1 + H_R(z)G(z))    (9.3.43)
This relation can be expressed by a block diagram in the z-complex plane as in Fig. 9.3.11.
Figure no. 9.3.11. (Closed loop in the z-plane: R(z) and Y(z) form E(z), H_R(z) produces W(z), G(z) produces Y(z); the strings r_k, e_k, w_k, y_k correspond to these transforms.)
One observes the similarity between the relations and block diagrams of discrete time systems in the z-plane (for which we are able to study the behaviour at the sampling time moments only) and those of continuous time systems in the s-plane.
10. FREQUENCY CHARACTERISTICS FOR
DISCRETE TIME SYSTEMS.
10.1. Frequency Characteristics Definition.
Suppose we have a pure discrete time system with a z-transfer function H(z), as in Fig. 10.1.1.
If a sinusoidal string of numbers u_k of the form (10.1.1) is applied to the input of the system, then, after a transient period, the answer of the system is also a sinusoidal string of numbers y_k, but with another amplitude and another phase, of the form (10.1.2), as represented in Fig. 10.1.2.

u_k = U_m·sin(ωkT)    (10.1.1)
y_k = Y_m·sin(ωkT + ϕ)    (10.1.2)
Figure no. 10.1.1. (The string {u_k}_{k≥0} applied to H(z) produces the string {y_k}_{k≥0}.)
Figure no. 10.1.2. (The input string u_k, sampled from u(t) = U_m·sin(ωt), and the output string y_k, with amplitudes U_m, Y_m and a delay of λ = ϕ/(ωT) steps.)
In (10.1.1) we considered the string u_k coming from the time function
u(t) = U_m·sin(ωt),    (10.1.3)
and
u_k = u(kT),    (10.1.4)
but it could be just a string, having nothing to do with a time variable,
u_k = U_m·sin(ak).    (10.1.5)
Of course we can make the equivalence a = ωT.
Note: The amplitude values U_m, Y_m are the maximum values of the sinusoidal functions u(t), y(t), but not the maximum values of the strings u_k, y_k.
For given ω = 2πf (or a) and T = 1/f_s, we can measure U_m, Y_m and λ.
Then we can compute,
A(ω) = Y_m/U_m    (10.1.6)
ϕ(ω) = λ·(ωT)  or  ϕ(ω) = λ·a    (10.1.7)
If we repeat the experiment for different values of ω (or a), we can plot the curves A(ω), ϕ(ω) with respect to ω. These are the so-called "Magnitude frequency characteristic" and "Phase frequency characteristic", respectively, of the pure discrete time system.
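This experimental definition can be checked numerically; the test system below (the two term averaging filter discussed later in Section 10.2.2) and its parameters are hypothetical choices for illustration.

```python
import numpy as np

# Excite y_k = 0.5*u_k + 0.5*u_{k-1} with the sine string
# u_k = Um*sin(w*k*T) and estimate A(w) = Ym/Um as in (10.1.6).
T, f = 0.001, 50.0                        # fs = 1000 Hz, f = 50 Hz
w = 2*np.pi*f
k = np.arange(2000)
u = np.sin(w*k*T)                         # Um = 1
y = 0.5*u + 0.5*np.concatenate(([0.0], u[:-1]))

# Skip the (one step) transient and compare RMS amplitudes over whole
# periods: 1900 samples = 95 periods of 20 samples each.
A = np.std(y[100:]) / np.std(u[100:])
print(A)   # close to 0.9877
```

Using the RMS ratio instead of sample maxima avoids the note above: the maxima of the strings need not coincide with the maxima U_m, Y_m of the underlying sine functions.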
10.2. Relations Between Frequency Characteristics and Attributes of Z-Transfer Functions.
10.2.1. Frequency Characteristics of LTI Discrete Time Systems.
To reveal these important relations we shall compute the permanent response of H(z) to the input u_k from (10.1.1).
We can compute U(z) by using the residues relation (8.1.7),

U(z) = Z{u_k} = Z{U_m·sin ωt}

U(z) = Z{U_m·ω/[(s − jω)(s + jω)]} = (U_m/2j)·[Z{e^{jωt}} − Z{e^{−jωt}}] = (U_m/2j)·[z/(z − e^{jωT}) − z/(z − e^{−jωT})] ⇒

Z{U_m·sin ωt} = U_m·sin(ωT)·z / [(z − e^{−jωT})(z − e^{jωT})]    (10.2.1)
The permanent response, y_k^p, of any system with the z-transfer function H(z) is determined by the residues at the poles of the input z-transform U(z).
In our case there are two poles: z_1 = e^{jωT} and z_2 = e^{−jωT}.
y_k^p = Σ_{poles z_1, z_2} Rez{ H(z)·U_m·sin(ωT)·z·z^{k−1} / [(z − e^{−jωT})(z − e^{jωT})] }    (10.2.2)

y_k^p = [U_m·sin ωT/(e^{−jωT} − e^{jωT})]·[e^{−jωTk}·H(e^{−jωT}) − e^{jωTk}·H(e^{jωT})] =
= U_m·[sin ωT/(−2j·sin ωT)]·A(ω)·[e^{−j(ωkT+ϕ)} − e^{j(ωkT+ϕ)}]
where we have

H(e^{jωT}) = H(z)|_{z=e^{jωT}}    (10.2.3)

that can be expressed as

H(e^{jωT}) = A(ω)·e^{jϕ(ω)}    (10.2.4)

where it has been denoted

A(ω) = |H(e^{jωT})|    (10.2.5)
ϕ(ω) = arg(H(e^{jωT}))    (10.2.6)

Because H(z) has real coefficients,
H(e^{−jωT}) = A(ω)·e^{−jϕ(ω)}.
Finally the permanent response is,

y_k^p = U_m·A(ω)·sin(ωkT + ϕ(ω)) = Y_m·sin(ωkT + ϕ(ω))    (10.2.7)

The amplitude-frequency characteristic is:

Y_m/U_m = A(ω) = |H(e^{jωT})| = |H(z)|_{z=e^{jωT}}    (10.2.8)
ϕ(ω) = arg(H(e^{jωT}))    (10.2.9)
Because sin[ωkT + ϕ(ω)] = sin[ωT·(k + ϕ(ω)/(ωT))], we can consider the output delay in the variable k as being

λ = ϕ(ω)/(ωT)    (10.2.10)
For discrete time systems the frequency characteristics are periodical functions with the period equal to the sampling frequency ω_s = 2π/T = 2πf_s, because e^{j(ω+ω_s)T} = e^{jωT}·e^{j2π} = e^{jωT}, so

A(ω) = A(ω + ω_s),  ϕ(ω) = ϕ(ω + ω_s).    (10.2.11)
Sometimes the frequency characteristics A(ω), ϕ(ω) for discrete time systems are expressed as functions of
w - the relative frequency,
r - the relative pure frequency,
f_s - the sampling frequency,
where,

w = ωT = 2π·f/f_s,  w ∈ [0, 2π]    (10.2.12)
r = f/f_s,  r ∈ [0, 1]    (10.2.13)
f_s = 1/T    (10.2.14)
Frequency characteristics are directly related to the sampled transfer function H*(s), because

H(z) = H*(s)|_{e^{sT}=z}    (10.2.15)
H(z)|_{z=e^{jωT}} = H*(s)|_{s=jω} ⇒ H(e^{jωT}) = H*(jω)    (10.2.16)
A(ω) = |H*(jω)|,  ϕ(ω) = arg(H*(jω))    (10.2.17)
As we mentioned before, the Laplace transform of a sampled signal is a periodical function, so the sampled transfer function is a periodical function too. The frequency characteristics of discrete systems are always represented on a logarithmic scale in relative frequency only; they are periodical on a linear scale, but not on a logarithmic one.
From (10.2.5), (10.2.6) we can see that the frequency characteristics are the modulus and argument of the complex number H(z) evaluated for z = e^{jωT}.
Because |e^{jωT}| = 1 and arg(e^{jωT}) = ωT, the frequency characteristics are evaluated from H(z) considering the variable z evolving in the z-plane on the unit circle, as depicted in Fig. 10.2.1.
Figure no. 10.2.1. (The variable z = e^{jωT} on the unit circle |z| = 1 of the z-plane and its image H(e^{jωT}) in the H(z)-plane, with A(ω) = |H(e^{jωT})| and ϕ(ω) = arg{H(e^{jωT})}.)
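In code, the two characteristics follow directly from evaluating H(z) on the unit circle; the first order H(z) below is a hypothetical example.

```python
import numpy as np

# A(w) = |H(e^{jwT})| and phi(w) = arg H(e^{jwT}) for H(z) = z/(z - 0.9),
# with z running once around the unit circle (one period w_s = 2*pi/T).
T = 0.1
w = np.linspace(0.0, 2*np.pi/T, 500)
z = np.exp(1j*w*T)                     # z on the unit circle
H = z/(z - 0.9)
A, phi = np.abs(H), np.angle(H)

print(A[0])   # at w = 0, H(1) = 1/0.1 = 10, the steady state gain
```

Because the grid covers exactly one period ω_s, the first and last samples of A(ω) coincide, which is the periodicity (10.2.11) in numerical form.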
10.2.2. Frequency Characteristics of First Order Sliding Average Filter.
The result of a numerical acquisition of a continuous signal u(t) with a sampling period T is u_k. We want to filter this string of numbers, getting the filtered variable y_k, according to the relation

y_k = (u_k + u_{k−1})/2    (10.2.18)

which is a two term sliding average filter, also called a "first order sliding average filter".
By the order of a sliding filter we understand the difference between the maximum step index (k) and the minimum step index (k−1) of the input sequences involved. In this case it is k − (k−1) = 1. The number of input sequences involved, u_k, u_{k−1}, is k − (k−1) + 1 = 2.
A computer program for this filter could be:
Initialise U1
loop:
  Read U          % U = u(k)
  Y = (U + U1)/2  % y(k) = (u(k) + u(k-1))/2
  Write Y
  U1 = U          % store u(k) for the next step
This filter is expressed by a z-transfer function H(z),

Y(z) = H(z)·U(z);  H(z) = (1/2)·(1 + z^{−1}) = (z + 1)/(2z)    (10.2.19)
which expresses a first order proper discrete time system. We can evaluate,

H(e^{jωT}) = (1/2)·(1 + e^{−jωT}) = (1/2)·(1 + cos ωT − j·sin ωT)

A(ω) = (1/2)·√[(1 + cos ωT)² + sin²ωT] = (1/2)·√(2 + 2cos ωT)    (10.2.20)
ϕ(ω) = arctg[−sin ωT/(1 + cos ωT)] = −arctg[tg(ωT/2)] = −ωT/2    (10.2.21)

Suppose that the sampling frequency is f_s = 1000 Hz. With this filter, the component of frequency f = 50 Hz is attenuated by the factor

α = A(2πf) = A(100π) = (1/2)·√(2 + 2cos(100π·0.001)) = 0.9877
The Bode characteristics drawn in Matlab for (10.2.19) are depicted in Fig. 10.2.2.
Figure no. 10.2.2. (Bode diagrams of the filter: magnitude (dB) and phase (deg) versus frequency (rad/sec).)
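The magnitude (10.2.20) can be cross-checked by evaluating H(z) = (z + 1)/(2z) directly on the unit circle, at the same f = 50 Hz and f_s = 1000 Hz used above.

```python
import numpy as np

fs, f = 1000.0, 50.0
wT = 2*np.pi*f/fs
z = np.exp(1j*wT)

H = (z + 1.0)/(2.0*z)                           # H(e^{jwT}) directly
A_direct = abs(H)
A_formula = 0.5*np.sqrt(2.0 + 2.0*np.cos(wT))   # (10.2.20)

print(round(A_direct, 4), round(A_formula, 4))  # both 0.9877
print(np.angle(H))                              # -wT/2, cf. (10.2.21)
```

The phase −ωT/2 confirms the half-sample delay λ = −1/2 of this averaging filter.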
10.2.3. Frequency Characteristics of m-Order Sliding Weighted Filter.
We can use another type of sliding filter,

y_k = b_m·u_k + b_{m−1}·u_{k−1} + ... + b_0·u_{k−m} = Σ_{i=0}^{k} h_i·u_{k−i},  ∀k ≥ 0    (10.2.22)

where h_i = b_{m−i} for i = 0, ..., m and h_i = 0 for i > m.
The order of this sliding filter is m because the difference between the maximum step index (k) and the minimum step index (k−m) of the input sequences involved is m. The number of input sequences involved in the filtering process, u_k, ..., u_{k−m}, is k − (k−m) + 1 = m + 1. They determine the output through m + 1 weighting factors b_i, i = 0 : m. When

Σ_{i=0}^{m} b_i = 1

the sliding weighted filter is called an m-order average sliding filter.
If m = 1 then
y_k = b_1·u_k + b_0·u_{k−1}
and if b_1 = b_0 = 1/2, then we have the previous first order sliding average filter.
The transfer function of the m-order sliding weighted filter is,

H(z) = Σ_{k=0}^{∞} h_k·z^{−k} = Σ_{k=0}^{m} b_{m−k}·z^{−k} = (b_m·z^m + b_{m−1}·z^{m−1} + ... + b_0)/z^m    (10.2.23)
This is a proper m-order discrete time system. For its computer implementation it is recommended to express it by state equations as in (8.2.23).
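A direct time domain implementation of (10.2.22) can be sketched as follows, taking u_j = 0 for j < 0; the weights in the usage line are a hypothetical example.

```python
def sliding_weighted(b, u):
    """b = [b_0, ..., b_m]; y_k = b_m*u_k + ... + b_0*u_{k-m}."""
    m = len(b) - 1
    h = b[::-1]                       # impulse response h_i = b_{m-i}
    y = []
    for k in range(len(u)):
        y.append(sum(h[i]*u[k - i] for i in range(min(k, m) + 1)))
    return y

# With m = 1 and b_1 = b_0 = 1/2 we recover the first order average filter.
print(sliding_weighted([0.5, 0.5], [1.0, 1.0, 1.0, 1.0]))
# [0.5, 1.0, 1.0, 1.0]
```

For long strings a state equation form, as recommended above, avoids re-scanning the window at every step.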
10.3. Discrete Fourier Transform (DFT).
We saw that, having a z-transfer function H(z),

H(z) = Σ_{k=0}^{∞} h_k·z^{−k},    (10.3.1)

to evaluate the frequency characteristics of this transfer function means to compute the modulus and phase of the complex number H(z) for z = e^{jωT}, z belonging to the unit circle, with

|z| = 1,  arg(z) = ωT = 2πf·(1/f_s) = 2πr = w = ϕ ∈ [0, 2π].    (10.3.2)
H(z) → (frequency characteristic) → H(z)|_{z=e^{jωT}} = H(e^{j2πf/f_s})    (10.3.3)
To reduce the computing effort, these are evaluated only for some values of ϕ, for example only N values

ϕ = ωT = 2π·f/f_s = 2π·p/N,  p ∈ [0, N−1]    (10.3.4)

as is shown in Fig. 10.3.1.
As we perform a sampling operation on the frequency, this is equivalent to computing the frequency characteristics only for a finite number of values,

f = (p/N)·f_s,  p ∈ [0, N−1]    (10.3.5)
Figure no. 10.3.1. (The N equally spaced points ϕ = ωT = 2π·p/N, p = 0, 1, 2, ..., N−1, on the unit circle |z| = 1 of the z-plane.)
This will determine the so-called Discrete Fourier Transform (DFT),

H_F(p) = H(z)|_{z=e^{j(2π/N)p}} = H(e^{j(2π/N)p}) = Σ_{k=0}^{∞} h_k·e^{−j(2π/N)pk}    (10.3.6)
Denoting

W_N = e^{j(2π/N)},  W_N^N = 1 (an N-th root of unity),    (10.3.7)

the DFT is,

H_F(p) = Σ_{k=0}^{∞} h_k·W_N^{−pk} = H(z)|_{z=e^{j(2π/N)p}}    (10.3.8)
This can be applied also to strings of numbers. The Z-transform is,

Y(z) = Σ_{k=0}^{∞} y_k·z^{−k} ⇒

Y_F(p) = Σ_{k=0}^{∞} y_k·W_N^{−pk} = Y(z)|_{z=e^{j(2π/N)p}}    (10.3.9)
Very interesting results are obtained by using the DFT for finite strings of numbers,

{y_k}_{k≥0},  y_k = 0 for k ≥ N    (10.3.10)

For such (finite) strings of numbers, the DFT has the form,

Y_F(p) = DFT{y_k} = Σ_{k=0}^{N−1} y_k·W_N^{−pk}    (10.3.11)
Because of the periodicity of the expression W_N, and its form (10.3.7), the so-called Inverse Discrete Fourier Transform (IDFT) can be defined for finite strings of numbers, of the form,

y_k = IDFT{Y_F(p)} = (1/N)·Σ_{p=0}^{N−1} Y_F(p)·W_N^{pk}    (10.3.12)
It is very difficult to compute (10.3.11), (10.3.12) directly, but there are several methods to do it, like the algorithm known as "The Fast Fourier Transform", FFT for short, which is rather convenient if N = 2^q, q ∈ Z.
The Fast Fourier Transform is intensely utilised in the signal processing field.
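The DFT pair (10.3.11)-(10.3.12) can be verified directly against numpy's FFT; the string y_k below is hypothetical.

```python
import numpy as np

N = 8
WN = np.exp(1j*2*np.pi/N)                      # W_N from (10.3.7)
y = np.array([1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0])

# Direct DFT (10.3.11) and IDFT (10.3.12).
YF = np.array([sum(y[k]*WN**(-p*k) for k in range(N)) for p in range(N)])
y_back = np.array([sum(YF[p]*WN**(p*k) for p in range(N))
                   for k in range(N)])/N

print(np.allclose(y_back, y))          # True: the round trip is exact
print(np.allclose(YF, np.fft.fft(y)))  # True: same values as the FFT
```

The direct sums cost O(N^2) operations, while the FFT needs only O(N log N), which is why the FFT is preferred for large N = 2^q.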
Suppose that we have a string of numbers {u_k}_{k≥0} whose DFT is

U_F(p) = Σ_{k=0}^{∞} u_k·W_N^{−pk}.    (10.3.13)
If this string of numbers u_k is passed through a dynamic system with the z-transfer function H(z), given by (10.3.1), the output of such a system is

y_k = Σ_{i=0}^{k} u_i·h_{k−i} ⇔ Y(z) = H(z)·U(z),    (10.3.14)
and it can be expressed by its DFT as,

Y_F(p) = H_F(p)·U_F(p),    (10.3.15)

where
U_F(p) is the DFT of the input string of numbers u_k, (10.3.13),
Y_F(p) is the DFT of the output string of numbers y_k,
Y_F(p) = Σ_{k=0}^{∞} y_k·W_N^{−pk},
H_F(p) is, by definition, "the DFT transfer function",

H_F(p) = H(z)|_{z=e^{j(2π/N)p}} = Σ_{k=0}^{N−1} h_k·W_N^{−kp}.    (10.3.16)
If the input is a finite string
{u_k}_{k≥0},  u_k = 0 for k ≥ N,
and the system is a Finite Impulse Response (FIR) system, (10.2.22) with m = N, then it is convenient to use the FFT to evaluate the system output, both in the time domain and in the frequency domain.
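A sketch of FFT based FIR filtering. One practical detail not emphasised above: the pointwise product (10.3.15) of N-point DFTs corresponds to a circular convolution, so both strings are zero padded to at least len(u) + len(h) − 1 points before it reproduces (10.3.14). The strings below are hypothetical.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical input string
h = np.array([0.5, 0.5])               # the first order average filter

L = len(u) + len(h) - 1                # zero padded length
y_fft = np.fft.ifft(np.fft.fft(u, L)*np.fft.fft(h, L)).real

y_direct = np.convolve(u, h)           # y_k = sum_i u_i h_{k-i}, (10.3.14)
print(np.allclose(y_fft, y_direct))    # True
```

For long strings the FFT route is much cheaper than the direct convolution sum.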
11. DISCRETIZATION OF CONTINUOUS TIME SYSTEMS.
11.1. Introduction.
Having a continuous time system with the input u(t) and the output y(t), both continuous time functions, the discretization problem means the computing of a discrete-time model that will realise the best approximation of the continuous-time system.
But, as for any discrete-time system, the input to the model is a string of numbers, so we are able to supply the discrete model only with the values of the input at some time moments. The discrete time model output is a string of numbers too.
The discretization problem is: what is the best discrete time model such that the error e_k, between the output values of the continuous system at some time moments and the output string values of the discrete system corresponding to those time moments, is minimum.
A graphical image of the discretization process is presented in Fig. 11.1.1.
Figure no. 11.1.1. (The continuous system maps u(t) to y(t); the discrete system maps the string u_k to the string y_k; the error e_k is the difference between y(kT⁺) and y_k.)
This problem can be approached as a local (point based) minimisation if the criterion is selected to be

|e_k| = |y(kT⁺) − y_k| = minimum, for any integer k,    (11.1.1)
or as a global minimisation if the criterion is

Σ_{k=0}^{N} |e_k| = minimum    (11.1.2)

to get the best approximation for a finite time evolution interval.
There are several methods for systems discretization. Two main problems are encountered: how to approximate the derivation operator and how to approximate the integral operator.
For the derivation operator, the backward approximation (BA) and the forward approximation (FA) can be used, on a finite time interval T equal to the sampling period.
11.2. Direct Methods of Discretization.
11.2.1. Approximation of the derivation operator.
Backward approximation.
For a time derivable function x(t), where we denote the sampled values by
x_k = x(kT),
the backward approximation of the first derivative is

dx(t)/dt|_{t=kT} ≈ (x_k − x_{k−1})/T    (11.2.1)
Forward approximation.
For the same x(t), the forward approximation of the first derivative is

ẋ(t)|_{t=kT} ≈ (x_{k+1} − x_k)/T    (11.2.2)

Unfortunately, the latter approximation sometimes leads to unstable models.
Example 11.2.1. LTI Discrete Model Obtained by Direct Methods.
Let us consider the system described by the state equation,
ẋ = Ax + Bu.
The backward approximation gives us,

(x_k − x_{k−1})/T = A·x_k + B·u_k
x_k = (I − TA)^{−1}·x_{k−1} + T·(I − TA)^{−1}B·u_k

x_k = F·x_{k−1} + G·u_k    (11.2.3)

where F = (I − TA)^{−1}, G = T·(I − TA)^{−1}B.
The forward approximation gives us,

(x_{k+1} − x_k)/T = A·x_k + B·u_k
x_{k+1} = (I + TA)·x_k + TB·u_k

x_k = F·x_{k−1} + G·u_{k−1}    (11.2.4)

where F = (I + TA), G = TB.
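For a scalar example ẋ = −x + u (A = −1, B = 1) the two approximations give the following discrete parameters; the numbers are illustrative only.

```python
import numpy as np

A, B, T = -1.0, 1.0, 0.5

F_b = 1.0/(1.0 - T*A)        # backward: (I - TA)^{-1}
G_b = T*B/(1.0 - T*A)        # backward: T(I - TA)^{-1}B
F_f = 1.0 + T*A              # forward:  I + TA
G_f = T*B                    # forward:  TB

print(F_b, F_f, np.exp(A*T))  # 0.666..., 0.5; the exact e^{AT} = 0.6065...
# For this A, the forward model becomes unstable for T > 2 (|1 + T*A| > 1),
# while the backward model satisfies |F_b| < 1 for every T > 0.
```

This illustrates the remark above: the forward approximation can turn a stable continuous system into an unstable discrete model when T is too large.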
11.2.2. Approximation of the Integral Operator.
Let us consider that the integral operator is applied to a function u(t), giving us,

x(t) = x(t_0) + ∫_{t_0}^{t} u(τ)dτ    (11.2.5)

Suppose ∃k_0 ∈ Z, k_0·T ≥ t_0, so
x(t) = x(k_0T) + ∫_{k_0T}^{t} u(τ)dτ = x_{k_0} + ∫_{k_0T}^{t} u(τ)dτ.    (11.2.6)
The integration process is illustrated in Fig. 11.2.1.
Figure no. 11.2.1. (The integral of u(t) from k_0T to kT approximated on the sampling grid, with the samples u_{k_0} = u(k_0T), ..., u_{k−1} = u((k−1)T), u_k = u(kT).)
The integral is approximated by a sum of forward or backward rectangles or trapezoids. We denote x(kT) = x_k.
Rectangular backward integral approximation:

x_k = x_{k_0} + T·Σ_{i=k_0+1}^{k} u_i    (11.2.5)

Trapezoidal backward integral approximation:

x_k = x_{k_0} + T·Σ_{i=k_0+1}^{k} (u_i + u_{i−1})/2    (11.2.6)
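The two approximations can be compared on a simple integral; here u(t) = t on [0, 1], whose exact integral is 0.5 (an illustrative choice).

```python
T = 0.1
u = [T*i for i in range(11)]          # u_i = u(iT) = iT for i = 0..10

x_rect = T*sum(u[i] for i in range(1, 11))               # rectangles
x_trap = T*sum((u[i] + u[i-1])/2 for i in range(1, 11))  # trapezoids

print(x_rect, x_trap)   # 0.55 (rectangles overshoot),
                        # 0.5  (trapezoids are exact for a linear u(t))
```

The trapezoidal rule is the basis of Tustin's substitution presented next.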
11.2.3. Tustin's Substitution.
Based on the approximation of the integral operator, represented in the complex domain as in Fig. 11.2.2., through a sum of trapezoids, an equivalent discrete algorithm is obtained for the s-operator. Tustin's substitution is a procedure for the discretization of continuous transfer functions.
Figure no. 11.2.2. (The integrator 1/s, mapping u(t) to x(t), and its discrete equivalent (T/2)·(z+1)/(z−1), mapping U(z) to X(z).)
By using:

x_k = x_{k_0} + (T/2)·Σ_{i=k_0+1}^{k} (u_i + u_{i−1}) ⇒ x_{k−1} = x_{k_0} + (T/2)·Σ_{i=k_0+1}^{k−1} (u_i + u_{i−1})

x_k − x_{k−1} = (T/2)·(u_k + u_{k−1})    (11.2.7)
Applying the z-transformation to (11.2.7), it results

(1 − z^{−1})·X(z) = (T/2)·(1 + z^{−1})·U(z),

the z-transfer function,

X(z)/U(z) = (T/2)·(z + 1)/(z − 1)    (11.2.8)
which allows us to perform the correspondence between the s-operator and the z-operator:

1/s ↔ (T/2)·(z + 1)/(z − 1)
s ↔ (2/T)·(z − 1)/(z + 1)    (11.2.9)
For a transfer function H(s) we can obtain a z-transfer function H(z) by a simple substitution

H(z) = H(s)|_{s = (2/T)·(z−1)/(z+1)}    (11.2.10)

Equation (11.2.9) is also called the bilinear transformation. It performs a mapping from the s-plane to the z-plane which transforms the entire jω axis of the s-plane into one complete revolution of the unit circle in the z-plane.
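As an example, applying (11.2.10) to the hypothetical H(s) = 1/(s + 1) gives, after the substitution, H(z) = T(z + 1)/((T + 2)z + (T − 2)); the sketch below checks its pole and steady state gain.

```python
import numpy as np

T = 0.1
num = np.array([T, T])                 # numerator:   T z + T
den = np.array([T + 2.0, T - 2.0])     # denominator: (T + 2) z + (T - 2)

z_pole = -den[1]/den[0]                # discrete pole of the Tustin model
print(z_pole, np.exp(-T))              # 0.9048 vs e^{-T}: close for small T

gain = num.sum()/den.sum()             # H(z) at z = 1 equals H(s) at s = 0
print(gain)                            # 1.0
```

The pole (2 − T)/(2 + T) approximates the exact mapping e^{−T}, and the static gain is preserved exactly, two typical properties of the bilinear transformation.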
11.2.4. Other Direct Methods of Discretization.
Discretization by matching poles and zeros.
Sometimes the above simple difference approximations may transform a stable continuous system into an unstable discrete time system.
This method takes into consideration the relation between the s-plane and the z-plane,

z = e^{sT}    (11.2.11)

by which to a pole s = s_k in the s-plane corresponds a pole z_k = e^{s_k·T} in the z-plane, so the matching of the poles is assured. The matching of the zeros and of the steady state gain is assured by special techniques.
Discretization by z-transformation of the continuous transfer function.
Another way of getting H(z) from H(s) is to use the theory of SDS as it has been presented,

H(z) = H*(s)|_{s = (1/T)·ln z}    (11.2.12)

which is also called "impulse equivalence discretization". This method, even if not so simple, gives the best results.
We can mention other methods such as:
Discretization by matching the step response.
Discretization by substitution of the first few terms in the series for (1/T)·ln(z), matching poles and zeros.
Discretization by solution of the continuous equation over each time step.
11.3. LTI Systems Discretization Using State Space Equations.
11.3.1. Analytical Relations.
This method of discretization is based on the general solution of a continuous time system with respect to the state vector.
If the input is a piecewise constant function on time intervals of length T, as it is in any computer controlled system with a classical NAC, then the discrete time model is an exact model: e_k = 0 for any k ≥ 0.
ẋ = Ax + Bu    (11.3.1)

x(t) = Φ(t − t_0)·x(t_0) + ∫_{t_0}^{t} Φ(t − τ)·B·u(τ)dτ.    (11.3.2)
If we consider t_0 = kT and denote x_k = x(kT), we can get the system evolution inside one sampling period. However, if u(t) is a bounded function, the state of a continuous system is a continuous function with respect to the time variable: x(kT⁺) = x(kT).

x(t) = Φ(t − kT)·x_k + ∫_{kT}^{t} Φ(t − τ)·B·u(τ)dτ,  t ∈ (kT, (k+1)T]    (11.3.3)
At the end of the k-th sampling interval, t = (k+1)T, x_{k+1} = x((k+1)T),

x_{k+1} = Φ(T)·x_k + ∫_{kT}^{(k+1)T} Φ(kT + T − τ)·B·u(τ)dτ.    (11.3.4)

Because τ ∈ (kT, (k+1)T], x_{k+1} depends on the values of the input in this time interval only. Performing the substitution θ = τ − kT, θ ∈ (0, T], and u(τ) = u(θ + kT), we get the discrete time model,
x_{k+1} = Φ(T)·x_k + ∫_{0}^{T} Φ(T − θ)·B·u(θ + kT)dθ    (11.3.5)
If it is possible, we can develop u(kT + θ) in a Taylor series with respect to θ ∈ (0, T],

u(kT + θ) = u(τ) = Σ_{i=0}^{∞} u_i(kT)·θ^i/i!    (11.3.6)

u_i = u_i(kT) = ∂^i u(kT + θ)/∂θ^i |_{θ=0⁺},  i ≥ 1,  u_0 = u(kT⁺)    (11.3.7)
Substituting (11.3.6) in (11.3.5), after rearranging the terms it results,

x_{k+1} = Φ(T)·x_k + Σ_{i=0}^{∞} G_i·B·u_i,    (11.3.8)
where,

Φ(T) = e^{AT}    (11.3.9)
u_i = ∂^i u(kT + θ)/∂θ^i |_{θ=0⁺}    (11.3.10)
G_i = ∫_{0}^{T} e^{A(T−θ)}·(θ^i/i!)dθ    (11.3.11)
G_0 = ∫_{0}^{T} e^{A(T−θ)}dθ    (11.3.12)
G_0 = A^{−1}·(e^{AT} − I)    (11.3.13)
The last expression of G_0 can be used only if A is not a singular matrix, that is det(A) ≠ 0; otherwise G_0 can be computed by performing the integral with respect to all the elements of e^{A(T−θ)} from (11.3.12).
By using a method of variable separation we get a recursive formula

G_i = A^{−1}·(G_{i−1} − (T^i/i!)·I),  G_0 = A^{−1}·(e^{AT} − I)    (11.3.14)
If the continuous system has a piecewise constant input on time intervals of length T, then an exact discrete time model can be obtained.
This situation can appear when a continuous system is controlled by a computer with a zero order holder as depicted in Fig. 11.3.1., and the time evolution of the input is as in Fig. 11.3.2.
Figure no. 11.3.1. (The holder output w(t) drives the system ẋ = Ax + Bu, y = Cx + Du, producing y(t).) Figure no. 11.3.2. (The staircase input u(t), constant on each interval (kT, (k+1)T].)
From Fig. 11.3.2. we can see that

u(t) = w(kT) = w_k,  ∀t ∈ (kT, (k+1)T].    (11.3.15)

Because u(t) is constant in time on (kT, (k+1)T], its right side derivatives are zero and

u_0 = u(t) = w_k,    (11.3.16)

so from (11.3.8) we get the exact mathematical model

x_{k+1} = F·x_k + G·w_k    (11.3.17)
y_k = C·x_k + D·w_k    (11.3.18)

where,

F = e^{AT}    (11.3.19)
G = G_0·B    (11.3.20)
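The exact model (11.3.17)-(11.3.20) can be computed numerically; the second order system below is hypothetical, with a nonsingular A so that (11.3.13) applies, and e^{AT} is obtained here through the eigendecomposition of A.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
T = 0.1

evals, V = np.linalg.eig(A)                # A is diagonalizable here
F = (V @ np.diag(np.exp(evals*T)) @ np.linalg.inv(V)).real   # F = e^{AT}
G = np.linalg.solve(A, F - np.eye(2)) @ B  # G = A^{-1}(e^{AT} - I)B

print(np.round(F, 4))
print(np.round(G, 4))
```

The eigenvalues of F are e^{−0.1} and e^{−0.2}, i.e. the continuous poles mapped by z = e^{sT}, so the discrete model is stable exactly when the continuous one is.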
11.3.2. Numerical Methods for Discretized Matrices Evaluation.
If A is a singular matrix, or if it is difficult to perform the analytical computation of e^{AT}, then this matrix can be evaluated by a finite series approximation

Φ(T) ≈ Φ_N(T) = Σ_{i=0}^{N} A^i·T^i/i!    (11.3.21)
For large matrices, because of the finite word representation of the numbers in the computer, a false convergence of the series (11.3.21) can appear, that means,

Φ_N(T) ≡ Φ_{N+1}(T)    (11.3.22)

with large numerical errors. This false convergence is more evident if the sampling period T is large.
To avoid this, an integer number m is determined in such a way that,
T = m·τ,  τ small enough.    (11.3.23)
The value of τ is determined by the so-called Gershgorin theorem.
Taking into consideration the property of the transition matrix, we can write,

Φ(mτ) ≡ [Φ(τ)]^m,  m ∈ N ⇒ Φ_N(T) ≡ [Φ_N(τ)]^m    (11.3.24)

Also the matrix G_0 can be evaluated from the series
G_0 = Σ_{i=0}^{∞} A^i·T^{i+1}/(i+1)!    (11.3.25)
Good numerical results are obtained if the transition matrix, approximated by the finite series (11.3.21),

Φ_N(T) = I + (T/1!)·A + (T²/2!)·A² + ··· + (T^N/N!)·A^N    (11.3.26)

is arranged for numerical computing as,

Φ_N(T) = I + TA·(I + (TA/2)·(I + (TA/3)·(···(I + (TA/(N−1))·(I + TA/N))···)))    (11.3.27)
Denoting,

Ψ = I + (TA/2)·(I + (TA/3)·(···(I + (TA/(N−1))·(I + TA/N))···)),    (11.3.28)

we can then evaluate,

F = Φ_N(T) = I + TA·Ψ    (11.3.29)
G = T·Ψ·B    (11.3.30)

It is an easy job to create computer programmes for such matrices.
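A sketch of such a program, building the nested factor Ψ from the inside out as in (11.3.28) and then F and G; the test matrices are hypothetical.

```python
import numpy as np

def discretize_series(A, B, T, N=20):
    """F = I + T*A*Psi, G = T*Psi*B, with Psi built from the innermost
    factor I + TA/N outwards, wrapped with TA/(N-1), ..., TA/2."""
    I = np.eye(A.shape[0])
    Psi = I + (T/N)*A
    for i in range(N - 1, 1, -1):
        Psi = I + (T/i)*(A @ Psi)
    return I + T*(A @ Psi), T*(Psi @ B)

# Hypothetical test system.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
F, G = discretize_series(A, B, 0.1)
print(np.round(F, 4))
```

Note that this evaluation needs no matrix inversion at all, so it also covers the case of a singular A, where (11.3.13) is not applicable.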
12. DISCRETE TIME SYSTEMS STABILITY.
12.1. Stability Problem Statement.
Let us consider a linear time invariant discrete system described by

H(z) = M(z)/L(z),    (12.1.1)

where the denominator polynomial is

L(z) = a_n·z^n + ... + a_1·z + a_0 = a_n·Π_{i=1}^{N} (z − λ_i)^{m_i},  Σ_{i=1}^{N} m_i = n.    (12.1.2)
The system is stable if and only if the moduli of the poles are less than one, that means that all the poles are placed inside the unit circle in the z-plane.
In Fig. 12.1.1. the transfer of poles from s plane to z plane is shown.
Figure no. 12.1.1. (Poles are transferred from the s-plane to the z-plane by z_i = e^{s_i·T}; the imaginary axis of the s-plane maps onto the unit circle of the z-plane.)
If a transfer function H(s) has a pole

s = λ_i = −p_i    (12.1.3)

then its z-transform

H(z) = Z{H(s)} = H*(s)|_{s=(1/T)·ln(z)}    (12.1.4)

has a pole

z = ζ_i = e^{λ_i·T}.    (12.1.5)
If there are simple poles on the unit circle,

|z| = 1 ⇔ Re(s) = 0,    (12.1.6)

then the discrete system is at the limit of stability.
For z-transfer function stability it is not necessary, as it is for continuous systems, that all the coefficients of the denominator polynomial be different from zero and have the same sign.
Because z = e^{sT}, the left half s-plane, which is the stability half plane, is transformed into the inside of the unit circle of the z-plane, which is the stability domain of discrete systems.
There are some specific criteria, algebraical and frequency based, for the stability of discrete time systems.
The algebraical criteria are related to the z-transfer function denominator polynomial L(z), for the so-called external stability, and to the characteristic polynomial ∆(z), for the so-called internal stability, where

∆(z) = det(zI − A)    (12.1.7)

and A is the system matrix of one state realisation of the z-transfer function.
The external stability, or input-output stability, requires that all the roots of L(z) be inside the unit circle.
The internal stability asks that the roots of ∆(z) = 0 be inside the unit circle.
So for internal stability we manage the characteristic polynomial ∆(z), and L(z) for the external stability.
If the system has the transfer function H(z) as the nominal transfer function, that means its numerator and denominator have no common factor, then any state realisation is both controllable and observable, and vice versa. In such a case ∆(z) ≡ L(z).
In the following we shall manipulate only the polynomial L(z), but if necessary the same techniques can be applied to ∆(z).
Example 12.1.1. Study of the Internal and External Stability.
To understand what internal and external stability mean and what their importance is, let us start with a computer program for a dynamic system giving us the step response. This is a Matlab program, but it is similar to any program implemented for computer controlled systems.
Y1=2.8; Y0=2; U1=1; U0=1;          % initial conditions
t=0:70; lt=length(t); y=zeros(1,lt);
y(1)=Y0; y(2)=Y1;
for k=3:lt
  U=1;                             % step input u_k = 1
  Y=2*Y1-0.99*Y0+U-1.1*U1;         % y_k = 2y_{k-1}-0.99y_{k-2}+u_k-1.1u_{k-1}
  y(k)=Y;
  Y0=Y1; Y1=Y; U1=U;
end
As we understand, the first line "Y1=2.8;Y0=2;U1=1;U0=1;" establishes the initial conditions. We can see that they are not zero. The response of this program, depicted in Fig. 12.1.2., indicates a stable response approaching the steady state value of 10, as it does with zero initial conditions, "Y1=0;Y0=0;U1=0;U0=0;", also in Fig. 12.1.2.
But if the initial conditions are "Y1=2;Y0=2;U1=1;U0=1;" then the response is as in Fig. 12.1.3., which represents a disaster for a controlled process. Even if the initial conditions have a very small variation with respect to the first case, "Y1=2.799;Y0=2;U1=1;U0=1;", that means Y1=2.799 instead of Y1=2.8, we have an unstable response as in Fig. 12.1.4.
Figure no. 12.1.2. (Stable step responses approaching 10, for the initial conditions Y1=2.8;Y0=2;U1=1;U0=1; and for the zero initial conditions Y1=0;Y0=0;U1=0;U0=0;.) Figure no. 12.1.3. (Diverging response for the initial conditions Y1=2;Y0=2;U1=1;U0=1;.) Figure no. 12.1.4. (Unstable response for the initial conditions Y1=2.799;Y0=2;U1=1;U0=1;.)
As a conclusion: the forced response is stable, so the system is externally stable, but the general response, mainly the free response, is unstable, so the system is internally unstable.
All such practical aspects can be kept under control only by using the theoretical approach of systems theory.
A brief explanation:
The implemented line "Y=2*Y1-0.99*Y0+U-1.1*U1;" corresponds to the discrete transfer function

H(z) = (z² − 1.1z)/(z² − 2z + 0.99) = z(z − 1.1)/[(z − 1.1)(z − 0.9)]    (12.1.8)

One pole, ξ_1 = 1.1 > 1, is unstable, but the second, ξ_2 = 0.9 < 1, is stable. As we can see, the transfer function is not a nominal one. The input-output behaviour is described by the reduced transfer function

H(z) = z/(z − 0.9).    (12.1.9)
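A Python transcription of the program above makes the two behaviours easy to reproduce; the "magic" initial condition Y1 = 2.8 places the free response exactly on the stable reduced model.

```python
def simulate(y1, y0, u1, n=70):
    """Step response of y_k = 2y_{k-1} - 0.99y_{k-2} + u_k - 1.1u_{k-1}."""
    y = [y0, y1]
    Y0, Y1, U1 = y0, y1, u1
    for _ in range(n - 2):
        U = 1.0
        Y = 2*Y1 - 0.99*Y0 + U - 1.1*U1
        y.append(Y)
        Y0, Y1, U1 = Y1, Y, U
    return y

print(simulate(2.8, 2.0, 1.0)[-1])       # near 10: externally stable
print(abs(simulate(2.0, 2.0, 1.0)[-1]))  # huge: internally unstable
```

In the first case the coefficient of the 1.1^k mode in the free response is exactly zero; any perturbation of Y1 excites it and the response diverges.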
12.2. Stability Criteria for Discrete Time Systems.
12.2.1. Necessary Stability Conditions.
The necessary conditions for an n-degree polynomial

L(z) = a_n·z^n + a_{n−1}·z^{n−1} + ... + a_1·z + a_0,  a_k ∈ C,  a_n ≠ 0,    (12.2.1)

to be discrete time asymptotically stable (to have all the roots inside the unit circle) are:
1) L(1) > 0,    (12.2.2)
2) (−1)^n·L(−1) > 0.    (12.2.3)
12.2.2. Schur-Kohn Stability Criterion.
Let us consider the discrete time system characteristic polynomial,

L(z) = a_n·z^n + a_{n−1}·z^{n−1} + ... + a_1·z + a_0,  a_k ∈ C,  a_n ≠ 0.    (12.2.4)

The Schur-Kohn stability criterion determines the necessary and sufficient discrete time asymptotical stability conditions: all the roots inside the unit circle.
To apply this criterion, n determinants D_k of dimension 2k×2k, called Schur-Kohn determinants, are computed.
D_k = det of the 2k×2k matrix

| a_0        0          ...  0        |  ā_n        ā_{n−1}    ...  ā_{n−k+1} |
| a_1        a_0        ...  0        |  0          ā_n        ...  ā_{n−k+2} |
| ...        ...        ...  ...      |  ...        ...        ...  ...       |
| a_{k−1}    a_{k−2}    ...  a_0      |  0          0          ...  ā_n       |
| ā_n        0          ...  0        |  a_0        a_1        ...  a_{k−1}   |
| ā_{n−1}    ā_n        ...  0        |  0          a_0        ...  a_{k−2}   |
| ...        ...        ...  ...      |  ...        ...        ...  ...       |
| ā_{n−k+1}  ā_{n−k+2}  ...  ā_n      |  0          0          ...  a_0       |
    (12.2.5)
We denoted by ā_k the complex conjugate of the coefficient a_k.
The necessary and sufficient stability condition is:

(−1)^k·D_k > 0,  ∀k ∈ [1, n].    (12.2.6)
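A sketch of the criterion for real coefficients, using the determinant structure as reconstructed in (12.2.5); the two test polynomials are the denominators from Example 12.1.1.

```python
import numpy as np

def schur_cohn_stable(a):
    """a = [a_0, ..., a_n], real coefficients; True if all roots lie
    inside the unit circle, checked via (-1)^k D_k > 0 for k = 1..n."""
    n = len(a) - 1
    for k in range(1, n + 1):
        X = np.zeros((k, k))
        Z = np.zeros((k, k))
        for i in range(k):
            for j in range(k):
                if i >= j:
                    X[i, j] = a[i - j]        # lower triangle: a_0, a_1, ...
                if j >= i:
                    Z[i, j] = a[n - (j - i)]  # upper triangle: a_n, a_{n-1}, ...
        M = np.block([[X, Z], [Z.T, X.T]])    # real case: conj(a_k) = a_k
        if (-1)**k * np.linalg.det(M) <= 0:
            return False
    return True

# (z - 0.9)(z - 0.5) = z^2 - 1.4 z + 0.45 : stable
print(schur_cohn_stable([0.45, -1.4, 1.0]))   # True
# (z - 1.1)(z - 0.9) = z^2 - 2 z + 0.99 : unstable
print(schur_cohn_stable([0.99, -2.0, 1.0]))   # False
```

For k = 1 this reduces to the familiar condition |a_0| < |a_n|.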
12.2.3. Jury Stability Criterion.
Let us consider again the characteristic polynomial

L(z) = a_n·z^n + a_{n−1}·z^{n−1} + ... + a_1·z + a_0,  a_k ∈ C,  a_n ≠ 0.    (12.2.7)

We create the Jury table (12.2.8),
Row  |  z^0       z^1       z^2       ...   z^{n−2}   z^{n−1}   z^n
 1   |  a_0       a_1       a_2       ...   a_{n−2}   a_{n−1}   a_n
 2   |  a_n       a_{n−1}   a_{n−2}   ...   a_2       a_1       a_0
 3   |  b_0       b_1       b_2       ...   b_{n−2}   b_{n−1}
 4   |  b_{n−1}   b_{n−2}   b_{n−3}   ...   b_1       b_0
 5   |  c_0       c_1       c_2       ...   c_{n−2}
 6   |  c_{n−2}   c_{n−3}   c_{n−4}   ...   c_0
 ...
2n−5 |  p_0       p_1       p_2       p_3
2n−4 |  p_3       p_2       p_1       p_0
2n−3 |  q_0       q_1       q_2
    (12.2.8)
12. DISCRETE TIME SYSTEMS STABILITY. 12.2. Stability Criteria for Discrete Time Systems.
234
where the table entries are evaluated by means of 2×2 determinants:

b_k = | a_0   a_{n-k} |
      | a_n   a_k     | ,   k = 0, ..., n-1 ,   (12.2.9)

c_k = | b_0       b_{n-1-k} |
      | b_{n-1}   b_k       | ,   k = 0, ..., n-2 ,   (12.2.10)

...

q_k = | p_0   p_{3-k} |
      | p_3   p_k     | ,   k = 0, 1, 2.   (12.2.11)
The discrete time system with the characteristic polynomial (12.2.7) is asymptotically stable if and only if the necessary conditions

L(1) > 0 ,  (−1)^n L(−1) > 0 ,   (12.2.12)

and the inequalities

|a_0| < |a_n|
|b_0| > |b_{n-1}|
|c_0| > |c_{n-2}|
..............
|p_0| > |p_3|
|q_0| > |q_2|   (12.2.13)

are satisfied.
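The whole test can be sketched in a few lines of Python (our own compact variant: instead of writing both rows of every pair of the table, each reduced row b, c, ..., q is built directly from the 2×2 determinants; real coefficients are assumed):

```python
import numpy as np

def jury_stable(coeffs):
    """Jury test (12.2.8)-(12.2.13) for L(z) = a_0 + a_1 z + ... + a_n z^n.

    `coeffs` lists a_0, ..., a_n, lowest degree first; real coefficients,
    normalised so that a_n > 0.
    """
    a = np.asarray(coeffs, dtype=float)
    if a[-1] < 0:
        a = -a                                   # enforce a_n > 0
    n = len(a) - 1
    # Necessary conditions (12.2.12)
    if np.polyval(a[::-1], 1.0) <= 0:
        return False
    if (-1) ** n * np.polyval(a[::-1], -1.0) <= 0:
        return False
    # First inequality of (12.2.13)
    if abs(a[0]) >= abs(a[-1]):
        return False
    row = a
    while len(row) > 3:
        m = len(row) - 1
        # b_k = a_0 a_k - a_n a_{n-k} and its analogues, from (12.2.9)-(12.2.11)
        row = np.array([row[0] * row[k] - row[m] * row[m - k] for k in range(m)])
        if abs(row[0]) <= abs(row[-1]):
            return False
    return True

print(jury_stable([0.1, -0.7, 1.0]))    # True: roots 0.5 and 0.2
print(jury_stable([0.99, -2.0, 1.0]))   # False: roots 0.9 and 1.1
```

For n = 2 the reduction loop never runs and the test reduces to L(1) > 0, L(−1) > 0 and |a_0| < |a_2|, the well-known second-order conditions.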
12.2.4. Periodicity Bands and Mappings Between Complex Planes.
Many methods for continuous time systems are based on a stability limit given by the imaginary axis of a complex plane. Continuous time criteria express the necessary and sufficient conditions for a polynomial to have all the roots placed on the left side of the imaginary axis.

We saw that for discrete time systems the stability limit is a circle in the z plane. Several transformations can be utilised to map a circle from a complex plane, in our case the unit circle of the z plane, into the imaginary axis of a new complex plane, except for some evolutions to infinity. In this way the continuous time criteria can be directly applied to discrete time systems.

The most utilised are the bilinear transformations "r" and "w".
The "r" transformation:

z → r :   r = (z + 1) / (z − 1)   (12.2.14)
r → z :   z = (r + 1) / (r − 1)   (12.2.15)

The "w" transformation:

z → w :   w = (z − 1) / (z + 1)   (12.2.16)
w → z :   z = (1 + w) / (1 − w)   (12.2.17)
w → r :   r = 1 / w   (12.2.18)
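A small numerical illustration of the "w" transformation (12.2.16) — our own check, not from the book: points on the unit circle of the z plane land on the imaginary axis of the w plane, and points strictly inside the circle land in the left half plane.

```python
import numpy as np

phi = np.linspace(-3.0, 3.0, 25)       # avoid phi = +/-pi, where z = -1 maps to infinity

z_on = np.exp(1j * phi)                # on the unit circle
w_on = (z_on - 1) / (z_on + 1)
assert np.max(np.abs(w_on.real)) < 1e-12    # purely imaginary images

z_in = 0.5 * np.exp(1j * phi)          # strictly inside the unit circle
w_in = (z_in - 1) / (z_in + 1)
assert np.all(w_in.real < 0)           # left half w plane
print("unit circle -> imaginary axis, interior -> left half plane")
```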
In the sequel we shall present the "w" transformation only.
The transformations of the stability domains from s to z and then from z
to w are illustrated in Fig. 12.2.1. In such a way the unit circle is transformed
into the imaginary axes.
w plane
z plane s plane
Figure no. 12.2.1.
We note that an algebraical relation from the s plane to the z plane exists only between the Laplace transform of a sampled signal and its z transform.

But, because the Laplace transform Y*(s) of a sampled signal is a periodical function with the period jω_s,

Y*(s) = Y*(s + jω_s) ,   ω_s = 2π / T ,   (12.2.19)

only one periodicity band in the s plane is enough to represent such a sampled signal. Particularly, a stable half periodicity band SBq can be defined as

SBq = { s = σ + jω | ω ∈ (−ω_s/2 + qω_s , ω_s/2 + qω_s] , σ ≤ 0 } ,  q ∈ Z .   (12.2.20)

Any of these results can be applied to a sampled transfer function H*(s) too. The fundamental stable band SB0 is presented in Fig. 12.2.2. In the definition relation (12.2.20) of SBq, the area determined by σ = 0 is also considered, in order to manage the limit of stability. A periodicity band, denoted Bq, is defined as in (12.2.20) but without restrictions on σ.
In such a way the relation

z = e^{sT}   (12.2.21)

performs a one-to-one correspondence between one selected Bq from the s plane and the entire z plane.

As we remember, the analytical extension of the z-transform expression Y(z) is obtained by a simple algebraical substitution,

Y(z) = Y*(s) | s = (1/T)·ln(z) ,   (12.2.22)

for any finite s different from a pole. At the same time, to any family of poles of Y*(s),

s_i^n = s_i + n·(jω_s) ,  n ∈ Z ,   (12.2.23)

there corresponds a unique pole of Y(z),

z_i = e^{[s_i + n·(jω_s)]·T} = e^{s_i·T} .   (12.2.24)

Vice versa, the entire z plane gives the values of Y*(s) in one selected periodicity band Bq using the transformation

Y*(s) = Y(z) | z = e^{sT} ,   s ∈ Bq , q ∈ Z .   (12.2.25)

By periodicity (of both the values and the poles) we can extend Y*(s) to the entire s plane. The most utilised band for Y*(s) is the fundamental band, for q = 0.
The frequency characteristics are obtained considering in SB0

s = jω ,  ω ∈ (−ω_s/2 , ω_s/2] ,

which determines from (12.2.25)

Y*(jω) = Y(z) | z = e^{jωT} ,   arg(z) = ωT ∈ (−π , π] .   (12.2.26)

For positive frequencies only we have

ω ∈ [0 , ω_s/2] = [0 , π·f_s] ,

where f_s = 1/T = ω_s/(2π) is the pure sampling frequency.

Denoting ω = 2πf, the range of the pure frequencies f, from the continuous time domain, for which the discrete time frequency characteristics are uniquely determined, is

f ∈ [0 , f_s/2] .   (12.2.27)
In the sequel we shall illustrate how the imaginary part of SB0 (the segments (5)-(6) = (1)-(2)) from Fig. 12.2.2 is transformed, through the z plane, into the imaginary axis of the w plane.

s = jω ,  ω ∈ (−ω_s/2 , ω_s/2) ⇒ z = e^{jωT} = e^{jϕ} ,  ϕ ∈ (−π , π) .

But,

w = (z − 1)/(z + 1) ⇒ w = (e^{jϕ} − 1)/(e^{jϕ} + 1) = (cos ϕ − 1 + j·sin ϕ)/(cos ϕ + 1 + j·sin ϕ) =

  = [−2·sin²(ϕ/2) + j·2·sin(ϕ/2)·cos(ϕ/2)] / [2·cos²(ϕ/2) + j·2·sin(ϕ/2)·cos(ϕ/2)] ,

w = tg(ϕ/2) · [−sin(ϕ/2) + j·cos(ϕ/2)] / [cos(ϕ/2) + j·sin(ϕ/2)] = j·tg(ϕ/2) ,

ϕ ∈ (−π , π) ⇒ w ∈ (−j∞ , j∞) .

The transformation of the other segments of SB0 is performed in a similar manner.
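The closed-form result w = j·tg(ϕ/2) can be spot-checked numerically (our own verification sketch):

```python
import numpy as np

# For z = e^{j*phi} on the unit circle, (z - 1)/(z + 1) equals j*tan(phi/2).
for phi in (-2.5, -1.0, 0.1, 1.3, 3.0):
    z = np.exp(1j * phi)
    w = (z - 1) / (z + 1)
    assert abs(w - 1j * np.tan(phi / 2)) < 1e-9
print("w = j*tg(phi/2) confirmed for sample angles")
```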
Figure no. 12.2.2. (The fundamental stable band SB0, bounded by ±jω_s/2 and marked by the segments (1)-(6), in the s plane; its image, the unit circle through (1, j0), in the z plane; and its image, the imaginary axis, in the w plane.)
12.2.5. Discrete Equivalent Routh Criterion in the "w" plane.
As we saw, the inside of the unit circle from the z plane is transformed into the left half part of the w complex plane.

The discrete time stability of a polynomial L(z) can be analysed on its image L~(w), available for z finite ⇒ w ≠ 1, where

L(z) | z = (1+w)/(1−w)  =  L~(w) / (1 − w)^n .   (12.2.28)

In such conditions,

L(z) = 0 ⇔ L~(w) = 0 .

Note that, because L~(w) comes from L(z) for finite z, we shall never have L~(w) | w=1 = 0. The stability conditions for L(z) = 0 will give the same results as the stability conditions checked on L~(w) = 0.

That means L(z) has all its roots inside the unit circle if and only if the roots of L~(w) are placed in the left half w plane, so the Routh or Hurwitz criterion can be directly applied to the polynomial L~(w).
If the polynomial L(z) is written as

L(z) = Σ_{k=0..n} a_k·z^k ,   a_n ≠ 0 ,   (12.2.29)

then for z finite ⇒ w ≠ 1 we have

L~(w) = (1 − w)^n · Σ_{k=0..n} a_k·[(1 + w)/(1 − w)]^k ,   a_n ≠ 0 ,   (12.2.30)

L~(w) = Σ_{k=0..n} a_k·(1 + w)^k·(1 − w)^{n−k} ,   a_n ≠ 0 , w ≠ 1 ,   (12.2.31)

L~(w) = Σ_{k=0..n} a~_k·w^k ,   a~_n ≠ 0 , w ≠ 1 .   (12.2.32)

Any continuous time algebraical stability criterion can be applied using the coefficients a~_k , k = 0 : n.
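Expanding (12.2.31) numerically gives the coefficients a~_k directly. A short Python sketch (the helper name is ours):

```python
import numpy as np

def w_image_coeffs(coeffs):
    """Coefficients a~_0..a~_n of L~(w), expanded from (12.2.31).

    `coeffs` lists a_0, ..., a_n of L(z), lowest degree first; the result is
    ordered the same way. Polynomial products are done with np.convolve.
    """
    n = len(coeffs) - 1
    acc = np.zeros(n + 1)
    for k, ak in enumerate(coeffs):
        term = np.array([1.0])
        for _ in range(k):
            term = np.convolve(term, [1.0, 1.0])    # multiply by (1 + w)
        for _ in range(n - k):
            term = np.convolve(term, [1.0, -1.0])   # multiply by (1 - w)
        acc += ak * term                             # every term has degree n
    return acc

# L(z) = z^2 - 0.7 z + 0.1 (roots 0.5 and 0.2, both inside the unit circle)
print(w_image_coeffs([0.1, -0.7, 1.0]))   # ~ [0.4, 1.8, 1.8]
```

Here L~(w) = 1.8w² + 1.8w + 0.4; all coefficients have the same sign, which for a second-degree polynomial is exactly the Hurwitz condition, and indeed its roots −1/3 and −2/3 map back through z = (1 + w)/(1 − w) to 0.5 and 0.2.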
The w transformation can be applied to a transfer function H(z) as well, getting the w image denoted, for simplicity, also by H(w):

H(w) = H(z) | z = (1+w)/(1−w) .   (12.2.33)

Because the stability limit of H(w) is the imaginary axis of the w plane, some specific frequency methods for continuous time systems can be directly applied, but with some differences in their interpretation.
References.
1. Bălan T., Matematici speciale, Reprografia Universității din Craiova, 1988.
2. Belea C., Teoria sistemelor automate, vol. I, Repr. Universității din Craiova, 1971.
3. Belea C., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
4. Belea C., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
5. Călin S., Regulatoare automate, Editura Didactică și Pedagogică, București, 1976.
6. Călin S., Belea C., Sisteme automate complexe, Editura Tehnică, București, 1981.
7. Călin S., Dumitrache I., ș.a., Reglarea numerică a proceselor tehnologice, Editura Tehnică, București, 1984.
8. Călin S., Petrescu Gh., Tăbuș I., Sisteme automate numerice, Editura Științifică și Enciclopedică, București, 1984.
9. Csaki F., Modern control theories, Akad. Kiado, Budapest, 1972.
10. Director S., Rohrer R., Introduction to systems theory, McGraw-Hill, 1972.
11. Houpis C., Lamont G., Digital control systems, McGraw-Hill, 1992.
12. Dumitrache I., Automatizări electronice, Editura Didactică și Pedagogică, București, 1997.
13. Ionescu V., Teoria sistemelor, Editura Didactică și Pedagogică, București, 1985.
14. Iserman R., Digital Control Systems, Springer Verlag, 1981.
15. Kailath T., Linear systems, Prentice Hall, 1987.
16. Kuo B. C., Sisteme automate cu eșantionare, Editura Tehnică, București, 1967.
17. Kuo B. C., Automatic control systems, Prentice Hall, 1991.
18. Marin C., Teoria sistemelor și reglare automată - Îndrumar de proiectare, Reprografia Universității din Craiova, 1981.
19. Marin C., Petre E., Popescu D., Ionete C., Selișteanu D., Teoria sistemelor - Probleme, Editura SITECH, Craiova, 1997.
20. Ogata K., Discrete time control systems, Prentice Hall, 1987.
21. Răsvan V., Teoria stabilității, Editura Științifică și Enciclopedică, București, 1987.
22. Stănășilă O., Analiză matematică, Ed. Didactică și Pedagogică, București, 1981.
23. Șabac I., Matematici speciale, Editura Didactică și Pedagogică, București, 1981.
24. Vinatoru M., Sisteme automate, Editura SPICON, 1997.
25. Voicu M., Tehnici de analiză a stabilității sistemelor automate, Editura Tehnică, București, 1986.
26. Wiberg D., State Space and Linear Systems, McGraw-Hill, 1971.
27. Williamson D., Digital control and implementation, Prentice Hall, 1991.
28. *** MATLAB - User's Guide.
In such a context the systems theory is a set of general methods. The automatic control or just automatic. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. a system of algebraical or differential equations. mechanical. Introduction The systems theory or systems science is a discipline well defined. as a branch of the control science deals with automatic control systems. Cybernetics. In common usage the word " system " is a rather nebulous one. techniques and special algorithms to solve problems as analysis. performed on a common abstract base. control. social.1. whose goal is the behaviour study of different types and forms of systems within a unitary framework of notions. In the control science three main branches can be pointed out: Automatic control. 1. Automation means all the activities to put to practice automatic control systems. Numerous examples of systems according to this definition can be intuitively done: our planetary system. to elaborate. There are several definitions for the notion of system. artistic.1. Later on several examples will be done according to this definition. the economical system of a country. command and control decisions based on information got with its own resources. The system theory is the basement of the control science which deals with all the conscious activities performed inside a system to accomplish a goal under the conditions of the external systems influence. We can mention the Webster's definition: "A system is a set of physical objects or abstract entities united (connected or related) by different forms of interactions or interdependencies as to form an entirety or a whole ". 1. identification. A peculiar category of systems is expressed by so called "physical systems" whose definition comes from thermodynamics: "A system is a part (a fragment) of the universe for which one inside and one outside can be delimited from behavioural point of view". military one. Informatics. 1. economical. chemical. Introduction. 
each of them aspiring to be as general as possible. In systems theory it is the mathematical form of a system which is important and not its physical aspect or its application field. 1 . the car steering system . . is a set of objects interconnected in such a structure able to perform. An automatic control system or just a control system. There are also many other meanings for the notion of control systems. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.1. optimisation irrespective whether the system to which they are applied is electrical. synthesis.
These mathematical relations define the mathematical model of the physical system.1. When an abstract system is defined starting from a physical system (object). 2 . The outputs represent those physical object attributes (qualities) which an interest exists for. such a representation is realised. Examples. Any physical system (or physical object). energy. Abstract Systems. In Fig. The output variables do not affect the input variables. 1.1. first of all the outputs are defined (selected). in two categories: input variables and output variables.2. Material OUTSIDE INSIDE Information 1 u u2 up Energy inputs u System descriptor y1 y2 y yr outputs Figure no. as an element of the real world. Input variables (or just "inputs" ) represent the causes by which the outside affects (inform) the inside. The inputs of this abstract system are all the external causes that affect the above chosen outputs. based on causalities principles.2. By an abstract system one can understand the mathematical model of a physical system or the result of a synthesis procedure.2 The physical system (object) interactions with the outside are realised throughout some signals so called terminal variables. taking into consideration the goal the abstract system is defined for. In the systems theory. . This is the directional property of a system: the outputs are influenced by the inputs but not vice versa. feasible system or realisable system. is a part (a piece) of a more general context.2.2. Output variables (or just "outputs" ) represent the effects of the external and internal causes by which "the inside" affects ( influences or inform) the outside. 1. An oriented system is a system (physical or abstract) whose terminal variables are split. Figure no. 1. is an abstract system for which a physical model can be obtained in such a way that its mathematical model is precisely that abstract system. material. A causal system. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 
These exchanges alter its environments that cause modifications in time and space of some of its specific (characteristic) variables. 1. the mathematical relations between terminal variables are important. its interactions with the outside are performed by exchanges of information.1. 1. It is not an isolated one. Abstract Systems. Oriented Systems. Oriented Systems. Examples.
. weight. yr]T if there are r output variables. Made in Craiova EP Tip MCC3 ω Cr Ir Ur Ur Ue Ue θext Cr S1 ω Ur Ue Cr Ir θext S2 θint Figure no. any skilled people understand that it is about a DC motor..2. 1.3. is a graphical representation of a physical system using norms and symbols specific to the field to which the physical system belongs. This is a physical diagram. 1. 3. 1.2. can identify it. The output usually is denoted by y if there is only one output or by a column vector y=[y1 y2 .2. etc.1. As a physical object it has several attributes: colour. cost price. The physical diagram or construction diagram. Abstract Systems. The representation is performed by using rectangles or flow graphs. if necessary.2. for its identification. Mainly the block diagram illustrates the abstract system.4. Usually an input is denoted by u.2. is explicitly mentioned. 3 Figure no. by a rectangle which usually contains a system descriptor which can be a description of or the name of the system or a symbol for the mathematical model. Figure no. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. the difference between them. up]T if there are p input variables. excitation voltage (field voltage). The principle diagram or functional diagram. Scalars and vectors are written with the same fonts.1. Example 1. 1. Inputs are represented by arrows directed to the rectangle and outputs by arrows directed from the rectangle.2.2. Examples. Defining the inputs and outputs we are defining that border which expresses the inside and the outside from behavioural point of view. 2. This can be a picture of a physical object or a diagram illustrating how the object is built or has to be build. 1. a scalar if there is only one input. . 1. It can be represented by a picture as in Fig.5.3. Let us consider a DC electrical motor with independent excitation voltage. Generally there are three main graphical representations of systems: 1. 
The block diagram is a graphical representation of the mathematical relations between the variables by which the behaviour of the system is described. An oriented system can be graphically represented in a block diagram as it is depicted in Fig.2. Oriented Systems. or by a column vector u=[u1 u2 . rotor voltage (armature voltage). and nothing else. DC Electrical Motor. Practically are kept only those inputs that have a significant influence (into a defined precision level context) on the chosen outputs. represented in a such way to understand the functioning (behaviour) of that system.
Examples. 1. To do this. The mathematical relations between ω and Ur. Example 1.7. Controlled voltage generator 8 4 6 volts i 2 u1 α 0 R iC C i i=0 uC= x Voltage amplifier u= α u2= y S1 y= u2 K2 Zi= ì .7. Oriented Systems. The abstract system for this case is denoted by S2.6. into the agreed level of precision is depicted in Fig.2. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. Any one can understand that S1≠ S2 even they are related to the same physical object so a conclusion can be drawn: For one physical object (system) different abstract systems can be attached depending on what we are looking for. Cr. 1. Ue. S1 Tx + x = K1 u y = K2 x Figure no. The inputs are the same Ur. Abstract Systems. resistant torque Cr.2.2. 1. 1. Under a common usage of this circuit we accept that the unique cause which affects the voltage u2 is the knob position α in the voltage generator so the input is u2 and is denoted by u=α and marked on the incoming arrow to the rectangle as depicted in Fig.2. Let we consider an electrical circuit represented by a principle diagram as it is depicted in Fig. These two variables are selected as outputs. knowledge on electrical engineering is necessary. Suppose now we are interested about two attributes of the above DC motor: the rotor current Ir and the internal temperature θint. so it will be selected as output and the notation y=u2 is utilised and marked on the arrow outgoing from the rectangle as in Fig. 2. 1.4. The inputs are: rotor voltage Ur. This will be the output of the oriented system we are defining now. 4 .2. Figure no. From the above principle diagram we can understand how this physical object behaves. Simple Electrical Circuit. Cr.2. external temperature θext. excitation voltage Ue.7.2.6. Suppose we are interested about the amplifier voltage u2 only. In a such way we shall define the inside and the outside . We can look at the motor as to an oriented object from the systems theory point of view.5. into an accepted level of precision. 
θext are denoted by S1 which expresses the abstract system. 1. The inputs to this oriented system are all the causes that affect the selected output ω.1.2. Suppose we are interested about the angular speed ω of the motor axle. θext . The resulted oriented system is depicted in Fig. Now the oriented system related to the DC motor having the angular speed ω as output. Ue.2. This abstract system is the mathematical model of the physical oriented object (or system) as defined above.2. 1. 1. 1.
Example 1.2.2. Simple Electrical Circuit.
With elementary knowledge of electrical engineering one can write:

    x = uC,  i = iC,  iC = C x',  -u1 + Ri + x = 0,  u1 = K1 α = K1 u,  y = K2 x,  T = RC

which leads to T x' + x = K1 u. The abstract system attached to this oriented physical object, denoted by S1, is expressed by the mathematical relations between y = u2 and u = α:

    S1:  T x' + x = K1 u
         y = K2 x                                                   (1.2.1)

In (1.2.1) the abstract system is expressed by so-called "state equations". The first equation is the proper state equation and the second is called the "output relation". Here the variable x at the time moment t, denoted x(t), represents the state of the system at that time moment t. The time evolutions of the system variables are solutions of these equations starting at a time moment t0 with given initial conditions, for t ≥ t0.

The same mathematical model S1 can be expressed by a single relation, a differential equation in y and u:

    S1:  T y' + y = K1 K2 u                                         (1.2.2)

which can be presented as an input-output relation

    S1:  R(u, y) = 0, where R(u, y) = T y' + y - K1 K2 u            (1.2.3)

The three above forms of abstract system representation are called implicit forms, or representation by equations.

The time evolution of the capacitor voltage x(t) can be obtained by integrating (1.2.1) for t ≥ t0 with x(t0) = x0, or directly from system analysis, as

    x(t) = e^(-(t-t0)/T) x0 + (K1/T) ∫[t0,t] e^(-(t-τ)/T) u(τ) dτ   (1.2.4)

We can observe that the value of x at a time moment t, denoted x(t), depends on four entities:
1. The current (present) time variable t at which the value of x is expressed.
2. The initial time moment t0 from which the evolution is considered.
3. An initial value x0, which is just the value of x(t) for t = t0. This is called the initial state.
4. All the input values u(τ) on the time interval [t0, t], called the observation interval, which are expressed by the so-called input segment

    u[t0,t] = {(τ, u(τ)), ∀τ ∈ [t0, t]}                             (1.2.5)

Putting into evidence these four entities, any relation such as (1.2.4) is written in a concentrated form as

    x(t) = φ(t, t0, x0, u[t0,t])                                    (1.2.6)

called the input-initial state-state relation, in short the iiss relation. Also, by substituting (1.2.4) into the output relation from (1.2.1), we get the output time evolution expression

    y(t) = K2 e^(-(t-t0)/T) x0 + (K1 K2/T) ∫[t0,t] e^(-(t-τ)/T) u(τ) dτ   (1.2.7)
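The relations (1.2.4) and (1.2.7) can be evaluated in closed form when the input segment is constant. The sketch below is an illustration, not part of the textbook: the function names phi and eta and all numeric values are assumptions, and the convolution integral is replaced by its closed form for u(τ) = u0.

```python
import math

def phi(t, t0, x0, u0, T=1.0, K1=1.0):
    """iiss relation evaluated for a constant input segment u(tau) = u0.

    For constant u the integral in the state solution has the closed form
    K1*u0*(1 - exp(-(t - t0)/T)), so
    x(t) = exp(-(t - t0)/T)*x0 + K1*u0*(1 - exp(-(t - t0)/T)).
    """
    decay = math.exp(-(t - t0) / T)
    return decay * x0 + K1 * u0 * (1.0 - decay)

def eta(t, t0, x0, u0, T=1.0, K1=1.0, K2=1.0):
    """iiso relation: the output is y(t) = K2 * x(t)."""
    return K2 * phi(t, t0, x0, u0, T, K1)
```

At t = t0 the state equals the initial state, and for large t it approaches the steady value K1·u0, as the formula predicts.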
The output y(t), which also depends on the four above-mentioned entities, can be considered in a concentrated form as

    y(t) = η(t, t0, x0, u[t0,t])                                    (1.2.8)

This is called the input-initial state-output relation, in short the iiso relation.

The time variation of the input is expressed by a function

    u: T → U, t → u(t)                                              (1.2.9)

so the input segment u[t0,t] is the graph of the restriction of the function u to the observation interval [t0, t]. Someone who manages the physical object represented by the principle diagram knows that there are some restrictions on the time evolution shape of the function u: for example, only piecewise continuous functions, or only continuous and differentiable functions, could be admitted. In our case the set U of the input values can be, for example, the interval [0, 10] volts. We shall denote by Ω the set of admissible inputs,

    Ω = {u / u: T → U, admitted to be applied to the system}        (1.2.10)

Our system S1 is well defined by specifying three elements: the set Ω and the two relations φ and η,

    S1 = {Ω, φ, η}                                                  (1.2.11)

This is a so-called explicit form of abstract system representation, or the representation by solutions.

An explicit form can also be presented in the complex domain by applying the Laplace transform to the differential equation, if this is a linear one with time-constant coefficients, as in (1.2.2):

    L{T y'(t) + y(t)} = L{K1 K2 u(t)} ⇔ T[sY(s) - y(0)] + Y(s) = K1 K2 U(s) ⇒
    Y(s) = (K1 K2)/(Ts + 1) U(s) + T y(0)/(Ts + 1)                  (1.2.12)

We can see that the differential equation has been transformed into an algebraic equation, simpler for manipulations. But, as in any one-sided Laplace transform, the initial values are stipulated for t = 0, not for t = t0 as we considered. This can be easily overcome by considering that the time variable t in (1.2.12) stands for t - t0 in (1.2.7), where y(0) = K2 x(0) = K2 x0 and t → t - t0. With this in mind, the inverse Laplace transform of (1.2.12) will give us the relation (1.2.7).

From (1.2.12) we can denote

    H(s) = (K1 K2)/(Ts + 1)                                         (1.2.13)

which is the so-called transfer function of the system. The transfer function generally can be defined as the ratio between the Laplace transform of the output Y(s) and the Laplace transform of the input U(s) under zero initial conditions,

    H(s) = Y(s)/U(s),  y(0) = 0                                     (1.2.14)

The system S1 can thus be represented by the transfer function H(s) also.
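The agreement between the algebraic form (1.2.12) and the time-domain solution can be checked numerically: integrate the differential equation by a simple scheme and compare with the analytic inverse Laplace transform. The sketch below is my illustration, not the author's; Euler integration, the constant input, and all numeric values are assumptions.

```python
import math

def simulate_ode(u0, y0, T=1.0, K=1.0, t_end=5.0, dt=1e-4):
    """Euler integration of T*y' + y = K*u for a constant input u0.

    K stands for the product K1*K2; the values are illustrative only.
    """
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (K * u0 - y) / T   # y' = (K*u - y)/T
        t += dt
    return y

def analytic(u0, y0, T=1.0, K=1.0, t=5.0):
    """Inverse Laplace transform of Y(s) = K/(Ts+1) U(s) + T y(0)/(Ts+1)
    for U(s) = u0/s: y(t) = e^(-t/T) y(0) + K u0 (1 - e^(-t/T))."""
    return math.exp(-t / T) * y0 + K * u0 * (1.0 - math.exp(-t / T))
```

The two values agree to within the integration step error, which is the point of transforming the differential equation into an algebraic one.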
Sometimes an explicit form is obtained using so-called integral-differential operators. Denoting by D = d/dt the differential operator, the differential equation (1.2.2) is expressed as T D y(t) + y(t) = K1 K2 u(t), from where formally one obtains

    y(t) = (K1 K2)/(TD + 1) u(t) ⇔ y(t) = S(D) u(t), where S(D) = (K1 K2)/(TD + 1)   (1.2.15)

so the system S1 can be represented by the integral-differential operator S(D).

Now suppose that, in another context, we are interested in the current i of the physical object represented by the principle diagram from Fig. 1.2.6. The output is now y(t) = i(t) and the input, considering the same experimental conditions, is u(t) = α(t) also. This oriented system is represented in Fig. 1.2.8. The mathematical model of this oriented system is now the abstract system S2, represented, for example, by the state equations as in Fig. 1.2.9:

    S2:  T x' + x = K1 u
         y = -(1/R) x + (K1/R) u

Of course, any form of representation can be used, as discussed above for the system S1. Because S1 ≠ S2 we can draw again the conclusion: "For the same physical object it is possible to define different abstract systems, depending on the goal."

Example 1.2.3. Simple Mechanical System.
Let us consider a mechanical system whose principle diagram is represented in Fig. 1.2.10, containing a spring (factor KP) and a damping system (factor KV). If a force f is applied to the point A of the main arm, whose shift with respect to a reference position is expressed by the variable x, then the point B of the secondary arm has a shift expressed by the variable y. Against the movement determined by f, the spring develops a resistant force proportional to x, by the factor KP, and the damper one proportional to the derivative of x, by the factor KV. Suppose we are interested in the shift of the point B only, so the variable y is selected to be the output of the oriented system being defined. Under common experimental conditions,
the unique cause changing y is the force f, which is the input denoted u = f. The oriented system with the above defined input and output is represented as before.

Writing the force equilibrium equations we get

    KP x + KV x' = f,  y = K2 x

Dividing the first equation by KP and denoting T = KV/KP, K1 = 1/KP, u = f, we get the mathematical model as state equations,

    S1:  T x' + x = K1 u
         y = K2 x                                                   (1.2.16)

where by S1 is denoted a descriptor of the abstract system. This set of equations expresses the abstract object of the mechanical system. Formally, S1 from (1.2.16) is identical to S1 from (1.2.1) of the previous example. Even if we have different physical objects, they are characterised (for the above chosen outputs) by the same abstract system. Of course, in the first case x is the capacitor voltage, but in the second case the meaning of x is the shift of the point A. We can say that the mechanical system is a physical model for an electrical system and vice versa, because they are related to the same abstract system. This abstract system is a common base for different physical systems.

Any development we have done for the electrical system is available for the mechanical system too. Managing the abstract system with specific methods, some results are obtained; these results can be applied equally to the electrical system and to the mechanical system. Such a study is called a model based study. These constitute the unitary framework of notions we mentioned in the systems theory definition.

Example 1.2.4. Forms of the Common Abstract Base.
The goal of this example is to manipulate the abstract system (1.2.16) from the mathematical point of view, using one element of the common abstract base: the Laplace transform, in short LT. It is admitted that all the function restrictions for t ≥ 0 are original functions. Now we write (1.2.1) putting into evidence the time variable t, as

    T x'(t) + x(t) = K1 u(t),  x(t0) = x0,  t ≥ t0                  (1.2.17)
    y(t) = K2 x(t)                                                  (1.2.18)

The main problem is to get the expression of x(t), because y(t) is then obtained by a simple substitution. As we know, the one-sided Laplace transform always uses as initial time t0 = 0, and we have to obtain (1.2.4), which depends on an arbitrary t0. Finally the solutions (1.2.4) and (1.2.7) will be obtained. We shall denote by X(s) = L{x(t)}, U(s) = L{u(t)} the Laplace transforms of x(t) and u(t) respectively.
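The identity of the two abstract systems can be illustrated in code: one and the same state formula describes both the capacitor voltage and the arm shift; only the physical meaning of T and K1 changes. This sketch and its numeric parameter values are my assumptions, chosen so that both objects share T = 1 and K1 = 1.

```python
import math

def state_response(t, x0, u0, T, K1):
    """State of the common abstract system T*x' + x = K1*u for a
    constant input u0 (the same formula serves both physical objects)."""
    d = math.exp(-t / T)
    return d * x0 + K1 * u0 * (1.0 - d)

# Electrical circuit: T = R*C (illustrative R = 2 ohm, C = 0.5 F, so T = 1 s)
x_cap = state_response(t=2.0, x0=0.0, u0=1.0, T=2.0 * 0.5, K1=1.0)
# Mechanical system: T = KV/KP, K1 = 1/KP (illustrative KV = 1, KP = 1)
x_arm = state_response(t=2.0, x0=0.0, u0=1.0, T=1.0 / 1.0, K1=1.0 / 1.0)
```

With these parameters the capacitor voltage and the arm shift follow numerically identical trajectories, which is exactly the "common abstract base" of the example.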
We remember that the Laplace transform of the derivative of a time function, admitted to be an original function, is

    L{x'(t)} = sX(s) - x(0+), where x(0+) = lim x(t) as t → 0, t > 0        (1.2.19)

If x(t) is a continuous function at t = 0 we can simply write x(0) instead of x(0+). However, we can prove that the state of a differential system driven by bounded inputs is always a continuous function. So, applying the LT to (1.2.17) we obtain

    T[sX(s) - x(0)] + X(s) = K1 U(s) ⇒
    X(s) = K1/(Ts + 1) U(s) + T x(0)/(Ts + 1)                                (1.2.20)

which gives us the expression of the state in the complex domain, but with the initial state at t = 0. We remember now the convolution product theorem of the LT: if F1(s) = L{f1(t)} and F2(s) = L{f2(t)}, then

    F1(s) F2(s) = L{ ∫[0,t] f1(t-τ) f2(τ) dτ } = L{ ∫[0,t] f1(τ) f2(t-τ) dτ }   (1.2.21)

and, in the inverse form,

    L^-1{F1(s) F2(s)} = ∫[0,t] f1(t-τ) f2(τ) dτ = ∫[0,t] f1(τ) f2(t-τ) dτ       (1.2.22)

We know from tables that

    L^-1{ T/(Ts + 1) } = e^(-t/T),  L^-1{ K1/(Ts + 1) } = (K1/T) e^(-t/T),  for t ≥ 0

Identifying now

    F1(s) = K1/(Ts + 1) ⇔ f1(t) = (K1/T) e^(-t/T) ⇒ f1(t-τ) = (K1/T) e^(-(t-τ)/T)
    F2(s) = U(s) ⇔ f2(t) = u(t) ⇒ f2(τ) = u(τ)                               (1.2.23)

the inverse LT of (1.2.20), taking into consideration (1.2.21)-(1.2.23), is

    x(t) = ∫[0,t] (K1/T) e^(-(t-τ)/T) u(τ) dτ + e^(-t/T) x(0), ∀t ≥ 0

which is written as

    x(t) = e^(-t/T) x(0) + (K1/T) ∫[0,t] e^(-(t-τ)/T) u(τ) dτ = φ(t, 0, x(0), u[0,t])   (1.2.24)

This is the state evolution starting at the initial time moment t = 0, from the initial state x(0), and it has the form of the input-initial state-state relation (iiss).
For t = t0, from (1.2.24) we obtain

    x(t0) = e^(-t0/T) x(0) + (K1/T) ∫[0,t0] e^(-(t0-τ)/T) u(τ) dτ = φ(t0, 0, x(0), u[0,t0])   (1.2.25)

so that

    x(0) = e^(t0/T) x(t0) - (K1/T) ∫[0,t0] e^(τ/T) u(τ) dτ

Substituting this x(0) into (1.2.24) we obtain

    x(t) = e^(-(t-t0)/T) x(t0) + (K1/T) ∫[t0,t] e^(-(t-τ)/T) u(τ) dτ = φ(t, t0, x(t0), u[t0,t])   (1.2.26)

From (1.2.25) and (1.2.26) we observe that (1.2.26) is just (1.2.4), if we take into consideration that x(t0) = x0. Moreover,

    φ(t, 0, x(0), u[0,t]) ≡ φ(t, t0, φ(t0, 0, x(0), u[0,t0]), u[t0,t])       (1.2.27)

where u[0,t] = u[0,t0] ∪ u[t0,t], which is the so-called state transition property of the iiss relation. According to this property, the state at any time moment t, as the result of the evolution from an initial state x(0) at the time moment t = 0 with an input u[0,t], is the same as the state obtained in the evolution of the system starting at any intermediate time moment t0 from an initial state x(t0) with an input u[t0,t], if the intermediate state x(t0) is the result of the evolution from the same initial state x(0) at the time t = 0 with the input u[0,t0].

To obtain the relation (1.2.7) we have only to substitute x = y/K2 in (1.2.26), getting, if we denote x(t0) = x0,

    y(t) = K2 e^(-(t-t0)/T) x0 + (K1 K2/T) ∫[t0,t] e^(-(t-τ)/T) u(τ) dτ = η(t, t0, x0, u[t0,t])   (1.2.29)

This is an input-initial state-output relation (iiso).

Two conclusions can be drawn from this example:
1. An initial state x0 at a time moment t0 contains all the essential information from the past evolution, able to assure the future evolution if the input is given starting from that time moment t0.
2. Any intermediate state is an initial state for the future evolution.
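The state transition property can be verified numerically for the first order system: evolving directly from t = 0 to t gives the same state as stopping at an intermediate t0 and restarting from x(t0). The sketch below is my illustration with a constant input segment; the function name and all values are assumptions.

```python
import math

def phi(t, t0, x0, u0, T=1.0, K1=1.0):
    """iiss relation of T*x' + x = K1*u for a constant input u(tau) = u0."""
    d = math.exp(-(t - t0) / T)
    return d * x0 + K1 * u0 * (1.0 - d)

# Evolution from t = 0 directly to t = 3 ...
direct = phi(3.0, 0.0, x0=2.0, u0=1.5)
# ... equals evolution to t0 = 1 first, then from the intermediate state to t = 3.
via_t0 = phi(3.0, 1.0, phi(1.0, 0.0, 2.0, 1.5), 1.5)
```

The two results coincide up to floating point roundoff, which is the semigroup (state transition) property in action.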
1.3. Inputs; Outputs; Input-Output Relations.
1.3.1. Inputs; Outputs.

The time domain T, or observation domain, is the domain of the functions describing the time evolution of variables. For continuous-time systems T ⊆ R and for discrete-time systems T ⊆ Z. The time variable is denoted by the letter t for the so-called continuous-time systems and by the letter k for the so-called discrete-time systems. Sometimes the letter t is utilised as time variable also for discrete-time systems, with the understanding that t ∈ Z.

The input variable is the function

    u: T → U, t → u(t)                                              (1.3.1)

where U is the set of input values (or the set of all inputs). Usually U ⊆ R^p if there are p inputs expressed by real numbers.

The input segment on a time interval [t0, t1] ⊆ T, called the observation interval, is the graph of the function u on this time interval,

    u[t0,t1] = {(t, u(t)), ∀t ∈ [t0, t1]}                           (1.3.2)

When we say that an input is applied to a system on a time interval [t0, t1], we have to understand that the input variable changes in time according to the given graph u[t0,t1], that is, according to the input segment. Sometimes, for easier writing, we understand by u, depending on the context, any of the following: the function u as in (1.3.1); the value u(t) of this function at a specific time moment t; the law of correspondence t → u(t) of the function (1.3.1); or a segment as in (1.3.2) on an understood observation interval. All these conveniences will be encountered for all the other variables in this textbook.

The set of admissible inputs Ω is the set of functions u allowed to be applied to an oriented system.

The output variable is the function

    y: T → Y, t → y(t)                                              (1.3.3)

where Y is the set of output values (or the set of all outputs). Usually Y ⊆ R^r if there are r outputs expressed by real numbers.

We denote by Γ the set of possible outputs, that is, the set of all functions y that are expected to be obtained from a physical system when inputs belonging to Ω are applied. If an input u[t0,t1] is applied to a physical system, the output time response is expressed by the output segment y[t0,t1], where

    y[t0,t1] = {(t, y(t)), ∀t ∈ [t0, t1]}                           (1.3.4)

that means to an input corresponds an output.
1.3.2. Input-Output Relations.

The pair of segments (u[t0,t1], y[t0,t1]) = (u, y) observed on a physical system is called an input-output pair. It is possible that, for the same input u[t0,t1], another output segment y^a[t0,t1] may be obtained, which means that the pair [u[t0,t1], y^a[t0,t1]] = (u, y^a) is also an input-output pair of that system, as depicted in Fig. 1.3.1. The totality of the input-output pairs that describe the behaviour of a physical object is just the abstract system. Instead of a specific list of input time functions and their corresponding output time functions, the abstract system is usually characterised as the class of all time functions that obey a set of mathematical equations. This is in accord with the scientific method of hypothesising an equation and then checking that the physical object behaves in a manner similar to that predicted by the equation /2/.

Practically, an abstract system is expressed by the so-called input-output relation, which can be a differential or difference equation, a table or a functional diagram, implicitly expressed by R(u, y) = 0 or explicitly expressed by an operator S, y = S{u}. We have to mention that by the operatorial notation y = S{u}, or just y = Su, we understand that the operator S is applied to the input (function) u and, as a result, the output (function) y is obtained.

A relation R(u, y) = 0 is an input-output relation for an oriented system if:
1. Any input-output pair observed on the system checks this relation.
2. Any pair (u, y) which checks this relation is an input-output pair of that oriented system.

In the example of the electrical device from Ex. 1.2.2, the output depends also on the value x0 = x(t0) at the time moment t0, which is the voltage across the terminals of the capacitor C in Ex. 1.2.2, or the arm position (point A) in Ex. 1.2.3 of the mechanical system.

For example, if in the differential equation T x' + x = K1 u from Ex. 1.2.2 we substitute x = y/K2, the solution of the differential equation (1.2.2) for t ≥ t0 and x(t0) = x0 is

    y(t) = K2 e^(-(t-t0)/T) x0 + (K1 K2/T) ∫[t0,t] e^(-(t-τ)/T) u(τ) dτ = η(t, t0, x0, u[t0,t])   (1.3.5)
The abstract system can thus be written as

    T y' + y = K1 K2 u ⇔ R(u, y) = 0, where R(u, y) = T y' + y - K1 K2 u     (1.3.6)

which is an implicit input-output relation. By denoting D = d/dt the time derivative operator, Dy = y', we obtain

    T D y(t) + y(t) - K1 K2 u(t) = 0 ⇔ (TD + 1) y(t) - K1 K2 u(t) = 0 ⇔
    y(t) = (K1 K2)/(TD + 1) u(t) ⇔ y(t) = S u(t),  S = (K1 K2)/(TD + 1)      (1.3.7)

Here y(t) = S u(t) is an explicit input-output relation given by an integral-differential operator S. This relation is expressed in the time domain, but it can be expressed in any domain if a one-to-one correspondence exists. For example, we can express it in the s-complex domain by applying to (1.3.6) the Laplace transform,

    Y(s) = (K1 K2)/(Ts + 1) U(s) + T x(0)/(Ts + 1)                           (1.3.8)

from where an operator H(s), called the transfer function, is defined,

    H(s) = Y(s)/U(s), for x(0) = 0, = (K1 K2)/(Ts + 1)                       (1.3.9)

The relation between the Laplace transform of the output Y(s) and the Laplace transform of the input U(s) which determined that output, under zero initial conditions,

    Y(s) = H(s) U(s)                                                         (1.3.10)

is another form of explicit input-output relation.

Example 1.3.1. Double RC Electrical Circuit.
Let us consider an electrical network obtained by the physical series connection of two simple RC circuits, whose principle diagram is represented in Fig. 1.3.2. Suppose that the second circuit runs empty (i = 0) and the first is controlled by the voltage u across the terminals A1-A1', and we are interested in the voltage y across the terminals B2-B2'. Because the output y is defined, and under common conditions only the voltage u affects this selected output, the oriented system is specified as depicted in Fig. 1.3.3. The abstract system, denoted by S, will be defined by establishing the mathematical relations between u and y.

To do this, first we observe from the principle diagram that there are 8 variables as time functions involved: u, uC1 = x1, uC2 = x2, iC1, iC2, i1, i2 and y. The other variables R1, R2, C1, C2 are constant in time and represent the circuit parameters. Because u is a cause (an input), it is a free variable and we have to look for 7 independent equations. These equations can be written using Kirchhoff's theorems and Ohm's law, as any person skilled in electrical engineering understands on the spot:

    1. iC1 = C1 x1'          5. i1 = (1/R1)(-x1 + u)
    2. iC1 = i1 - i2         6. i2 = (1/R2)(x1 - x2)
    3. iC2 = i2              7. y = x2
    4. iC2 = C2 x2'

We can observe that the two variables x1 and x2 appear with their first order derivatives, so, eliminating all the intermediate variables, a relation between u and y will be obtained as a second order differential equation. But first we shall keep the variables x1 and x2 and their derivatives. After some substitutions, and denoting by T1 = R1C1 and T2 = R2C2 the two time constants, we obtain

    T1 x1' = -(1 + R1/R2) x1 + (R1/R2) x2 + u                                (1.3.11)
    T2 x2' = x1 - x2                                                         (1.3.12)
    y = x2                                                                   (1.3.13)

which, after dividing by T1 and T2 respectively, take the final form

    x1' = -(1/T1)(1 + R1/R2) x1 + (1/T1)(R1/R2) x2 + (1/T1) u(t)             (1.3.14)
    x2' = (1/T2) x1 - (1/T2) x2                                              (1.3.15)
    y = x2                                                                   (1.3.16)

The equations (1.3.14), (1.3.15), (1.3.16) are called state equations related to this oriented system, and they constitute the abstract system S in state equations form. We can rewrite these equations in matrix form,

    S:  x' = Ax + bu                                                         (1.3.17)
        y = cx + du                                                          (1.3.18)

where

    A = [ -(1/T1)(1 + R1/R2)   (1/T1)(R1/R2) ]      b = [ 1/T1 ]
        [        1/T2              -1/T2     ]          [  0   ]

    c = [0  1],  d = 0                                                       (1.3.19)

and generally they are called: A the system matrix, b the command vector, c the output vector, d the direct input-output connection factor.
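The state equations (1.3.17)-(1.3.18) can be simulated directly. The sketch below is an illustration of mine, not from the textbook: it Euler-integrates the double RC state equations for a constant input, with assumed parameter values R1 = 1, R2 = 2, C1 = 1, C2 = 0.5 (so T1 = T2 = 1, matching the simplification used later in the example).

```python
def simulate_double_rc(u0, t_end=10.0, dt=1e-3,
                       R1=1.0, R2=2.0, C1=1.0, C2=0.5):
    """Euler simulation of x' = A x + b u, y = c x + d u for the
    double RC circuit (illustrative parameter values only)."""
    T1, T2 = R1 * C1, R2 * C2
    a11 = -(1.0 + R1 / R2) / T1       # entries of the system matrix A
    a12 = (R1 / R2) / T1
    a21 = 1.0 / T2
    a22 = -1.0 / T2
    x1 = x2 = 0.0                     # zero initial state
    t = 0.0
    while t < t_end:
        dx1 = a11 * x1 + a12 * x2 + u0 / T1   # b = [1/T1, 0]^T
        dx2 = a21 * x1 + a22 * x2
        x1 += dt * dx1
        x2 += dt * dx2
        t += dt
    return x2                         # y = c x = x2,  d = 0
```

For a unit step the output settles toward u0, as expected from the unit DC gain of the circuit (both capacitors charge to the supply voltage when no current is drawn).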
The input-output relation R(u, y) = 0, as mentioned before, can be expressed as a single differential equation in u and y. To do this we can use, for example, (1.3.11)-(1.3.13) or, simpler, (1.3.14)-(1.3.16). Substituting x2 from (1.3.13) into (1.3.12) and multiplying by T1, we obtain

    T1 T2 y' = T1 x1 - T1 y ⇒ T1 x1 = T1 T2 y' + T1 y      (*)

Applying the first derivative to (*) and substituting T1 x1' from (1.3.11) and x1 = T2 y' + y from (*), we obtain

    T1 T2 y'' + T1 y' = -(1 + R1/R2)(T2 y' + y) + (R1/R2) y + u

which finally goes to

    T1 T2 y'' + [T1 + (1 + R1/R2) T2] y' + y = u                             (1.3.20)

This is a differential equation expressing the mathematical model (the abstract system) of the oriented system. It can be presented as an io relation

    R(u, y) = 0, where R(u, y) = T1 T2 y'' + [T1 + (1 + R1/R2) T2] y' + y - u   (1.3.21)

If we denote D = d/dt we can express the io relation in an explicit form,

    y(t) = 1/(T1 T2 D^2 + [T1 + T2 (1 + R1/R2)] D + 1) u(t) ⇒ y(t) = S(D) u(t)   (1.3.22)

where S(D) is an integral-differential operator.

For simplicity we shall consider the following values for the parameters: R1 = R, R2 = 2R, C1 = C, C2 = C/2 ⇒ T1 = T2 = T = RC, so the differential equation (1.3.20) becomes

    T^2 y'' + 2.5 T y' + y = u                                               (1.3.23)

We can express the io relation in a complex form by using the Laplace transform. Applying the Laplace transform to (1.3.23), with Y(s) = L{y(t)} and U(s) = L{u(t)}, we get

    T^2 [s^2 Y(s) - s y(0+) - y'(0+)] + 2.5 T [s Y(s) - y(0+)] + Y(s) = U(s)    (1.3.24)

We denote by L(s) the characteristic polynomial

    L(s) = T^2 s^2 + 2.5 T s + 1                                             (1.3.25)

so the output in the complex domain is

    Y(s) = 1/L(s) U(s) + (T^2 s + 2.5 T)/L(s) y(0+) + T^2/L(s) y'(0+)        (1.3.26)

As we can see, the Laplace transform of the output Y(s) depends on the Laplace transform of the input U(s) and on two initial conditions: y(0+), the value of the output, and y'(0+), the value of the time derivative of the output, both at the time moment 0+.
Let H(s) be

    H(s) = Y(s)/U(s), in zero initial conditions, = 1/L(s)                   (1.3.27)

where H(s) is called the transfer function of the system. The transfer function of a system is the ratio between the Laplace transform of the output and the Laplace transform of the input which determines that output, under zero initial conditions, if and only if this ratio does not depend on the form of the input.

The characteristic equation L(s) = T^2 s^2 + 2.5 T s + 1 = 0 has the roots

    λ1,2 = (-2.5 T ± sqrt(6.25 T^2 - 4 T^2)) / (2 T^2)                       (1.3.28)

that is λ1 = -1/(2T), λ2 = -2/T, so the characteristic polynomial can be presented as L(s) = T^2 (s - λ1)(s - λ2).

By using the inverse Laplace transform we can obtain the time answer (the output response) of this system. One way to calculate the inverse Laplace transform is to use the partial fraction development of the rational functions from Y(s), as in (1.3.26):

    H(s) = 1/(T^2 (s - λ1)(s - λ2)) = A1/(s - λ1) + A2/(s - λ2) ⇒ A1 = 2/(3T), A2 = -2/(3T)

    (T^2 s + 2.5 T)/L(s) = B1/(s - λ1) + B2/(s - λ2) ⇒ B1 = 4/3, B2 = -1/3

    T^2/L(s) = C1/(s - λ1) + C2/(s - λ2) ⇒ C1 = 2T/3, C2 = -2T/3             (1.3.30)

We know that

    L^-1{1/(s - λi)} = e^(λi t) = αi(t),  L^-1{1/(s - λi) U(s)} = ∫[0,t] αi(t-τ) u(τ) dτ

where

    α1(t) = e^(λ1 t) = e^(-t/(2T)),  α2(t) = e^(λ2 t) = e^(-2t/T)            (1.3.31), (1.3.32)

so the output response is

    y(t) = (1/3)[4 α1(t) - α2(t)] y(0) + (2T/3)[α1(t) - α2(t)] y'(0)
           + (2/(3T)) ∫[0,t] [α1(t-τ) - α2(t-τ)] u(τ) dτ                     (1.3.33)

As we can see, the general response by output depends on: t, t0, the initial state x0 and the input u[t0,t], where the state vector is defined as

    x1(t0) = y(t0),  x2(t0) = y'(t0) ⇒ x(t0) = [x1(t0)  x2(t0)]^T = x0       (1.3.34)

By using the same procedure as for the first order RC circuit presented in Ex. 1.2.2, we can express this time relation depending on the initial time t0 as

    y(t) = (1/3)[4 α1(t-t0) - α2(t-t0)] y(t0) + (2T/3)[α1(t-t0) - α2(t-t0)] y'(t0)
           + (2/(3T)) ∫[t0,t] [α1(t-τ) - α2(t-τ)] u(τ) dτ                    (1.3.35)

The relation (1.3.33) is an input-initial state-output relation of the form y(t) = η(t, t0, x0, u[t0,t]).
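The roots λ1 = -1/(2T), λ2 = -2/T and the modal response (1.3.33) can be checked numerically. The sketch below is my illustration: for a constant input u0 and zero initial conditions the convolution in (1.3.33) integrates in closed form, giving the step response; the function names and values are assumptions.

```python
import math

def char_poly(s, T=1.0):
    """Characteristic polynomial L(s) = T^2 s^2 + 2.5 T s + 1."""
    return T**2 * s**2 + 2.5 * T * s + 1.0

def step_response(t, u0=1.0, T=1.0):
    """Zero-initial-state step response from the modal form (1.3.33):
    integrating the convolution for constant u0 gives
    y(t) = u0 * (1 - (4*alpha1(t) - alpha2(t)) / 3)."""
    a1 = math.exp(-t / (2.0 * T))    # alpha1(t) = e^(lambda1 t)
    a2 = math.exp(-2.0 * t / T)      # alpha2(t) = e^(lambda2 t)
    return u0 * (1.0 - (4.0 * a1 - a2) / 3.0)
```

The response starts at zero, as the zero initial state requires, and tends to u0, consistent with the unit DC gain H(0) = 1.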
Example 1.3.2. Manufacturing Point as a Discrete Time System.
Let us consider a working point (manufacturing point). In this manufacturing point the work is made by a robot which manufactures semiproducts. We suppose that it works in a daily cycle. The working point has a store house which contains, in the day k, xk finite products. Suppose that in the day k the working point is supplied with uk semiproducts, from which only a percentage βk is transformed into finite products. Suppose also that daily (say, in the day k) a percentage (1 - αk) of the stock is delivered to other sections. Also, in the day k the production is reported as being yk, a percentage γk of the stock. We want to see what the evolution of the stock and the report of the stock are for any day.

The mathematical model is determined from the above description. If we denote by xk+1 the stock for the next day, it is composed of the remaining stock xk - (1 - αk)xk = αk xk and the new stock βk uk,

    xk+1 = αk xk + βk uk                                            (1.3.36)
    yk = γk xk                                                      (1.3.37)

These are difference equations expressing a discrete-time system. It can be observed that the time variable, the integer number k, is a discrete one. This is the abstract system S of the working point looked upon as an oriented system. The working point can be interpreted as an oriented system having the daily report yk as the output; the input is the daily rate of supply uk, so the oriented system S is defined as in Fig. 1.3.4, where the input and the output are strings of numbers.

We can determine the solution of this system of equations by using a general method, but in this case we shall proceed to the difference equation integration step by step:

    The day p+1 : xp+1 = αp xp + βp up              | × αk-1 αk-2 ... αp+1
    The day p+2 : xp+2 = αp+1 xp+1 + βp+1 up+1      | × αk-1 αk-2 ... αp+2
    ...
    The day k-1 : xk-1 = αk-2 xk-2 + βk-2 uk-2      | × αk-1
    The day k   : xk   = αk-1 xk-1 + βk-1 uk-1      | × 1

Denoting by Φ(k, p) the discrete-time transition matrix (in this example it is a scalar),

    Φ(k, p) = Π[j=p..k-1] αj = αk-1 αk-2 ... αp,  Φ(k, k) = 1       (1.3.38)

and adding the above set of relations, each multiplied by the factor written on its right side, we obtain

    xk = Φ(k, p) xp + Σ[j=p..k-1] Φ(k, j+1) βj uj                   (1.3.39)
    yk = γk xk                                                      (1.3.40)

We observe that (1.3.39) is an input-initial state-state relation of the form

    xk = φ(k, k0, xk0, u[k0,k-1])                                   (1.3.41)

where xk is the state at the current time k (in our case the day index) and xk0 is the initial state at the initial time moment p = k0. The output evolution is

    yk = γk Φ(k, p) xp + Σ[j=p..k-1] γk Φ(k, j+1) βj uj             (1.3.42)

which is an input-initial state-output relation of the form

    yk = η(k, k0, xk0, u[k0,k-1])                                   (1.3.43)
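The step-by-step integration and the closed form (1.3.39) can be compared directly in code. The sketch below is my illustration; the sequences of daily coefficients and supplies are invented for the test.

```python
def simulate(alpha, beta, gamma, u, x0):
    """Step-by-step integration of x_{k+1} = a_k x_k + b_k u_k,
    y_k = g_k x_k. Returns the list of states and of daily reports."""
    x = [x0]
    for k in range(len(u)):
        x.append(alpha[k] * x[-1] + beta[k] * u[k])
    y = [gamma[k] * x[k] for k in range(len(gamma))]
    return x, y

def Phi(alpha, k, p):
    """Discrete-time transition 'matrix' (a scalar here):
    the product a_{k-1} a_{k-2} ... a_p, with Phi(k, k) = 1."""
    prod = 1.0
    for j in range(p, k):
        prod *= alpha[j]
    return prod

def closed_form(alpha, beta, u, p, k, xp):
    """x_k = Phi(k,p) x_p + sum_{j=p}^{k-1} Phi(k, j+1) b_j u_j."""
    s = Phi(alpha, k, p) * xp
    for j in range(p, k):
        s += Phi(alpha, k, j + 1) * beta[j] * u[j]
    return s
```

Running both on the same data shows that the closed-form state coincides with the recursively computed one, which is exactly what the summation argument above proves.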
Example 1.3.3. RS-memory Relay as a Logic System.
Let us consider a physical object represented by a principle diagram as in Fig. 1.3.5. This is a logic circuit performing an RS-memory, relay based. Here SB represents a normally opened button (the set button) and RB a normally closed button (the reset button). By "normal" it is understood "not pressed". When the button SB is pushed, the current can run through the terminals a-b; when the button RB is pushed, the current can not run through the terminals c-d. By x are denoted the normally opened contacts of the relay. The normal status (or state) of the relay is considered to be that when the current through the coil is zero. L is a lamp which lights when the relay is activated.

The functioning of this circuit can be explained in words: if the button RB is free, pushing the button SB a current i runs, activating the relay, whose contact x will shortcut the terminals a-b, and the lamp is turned on. Even if the button SB is then released, the lamp keeps on lighting. If the button RB is pushed, the current i is cancelled, the relay becomes relaxed and the lamp L is turned off.

The variables encountered in this description, SB, RB, x, y, are associated with the logical variables st, rt, it, xt, yt, which represent the truth values, at the time moment t, of the propositions:
    st: "The button SB is pushed" ⇔ "Through the terminals a-b the current can run";
    rt: "The button RB is pushed" ⇔ "Through the terminals c-d the current can not run";
    xt: "The relay is activated" ⇔ "The relay normally opened contacts are connected";
    yt: "The lamp lights".

These logical variables can take only two values, denoted usually by the symbols 0 and 1, on a set B = {0, 1}, which represent false and true. The set B is organised as a Boolean algebra. In a Boolean algebra three fundamental binary operations are defined: conjunction "∧", disjunction "∨" and negation "¯".

The value of the logical variable it is given by

    it = (st ∨ xt) ∧ r̄t                                             (1.3.44)

Because of the mechanical inertia, the status of the relay changes, after a small time interval ε, finite or ideally ε → 0, to the value of it,

    xt+ε = it                                                       (1.3.45)

Of course the status of the lamp equals the status of the relay, so

    yt = xt                                                         (1.3.46)
Suppose we are interested in the lamp status, so the output is y(t) = yt. This selected output depends on the status of the buttons SB, RB only (it is supposed that the supply voltage E is continuously applied) as external causes, so the input is the vector u(t) = [st rt]^T. An oriented system is now defined as depicted in Fig. 1.3.5. The mathematical relations between u(t) and y(t), defining the abstract system S, are expressed as logical equations. Substituting (1.3.44) into (1.3.45), together with (1.3.46), we obtain the abstract system S as

    S:  xt+ε = (st ∨ xt) ∧ r̄t                                       (1.3.47)
        yt = xt                                                     (1.3.48)

To determine the output of this system, besides the two inputs st, rt we need another piece of information: the value of xt, the state of the relay, 1 if the relay is activated and 0 if the relay is not activated.
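The logical state equation (1.3.47) can be simulated with ordinary Boolean values. The sketch below is my illustration of the set-reset memory behaviour; the event sequence is invented.

```python
def rs_step(x, s, r):
    """One update of the relay state: x+ = (s or x) and (not r); y = x."""
    return (s or x) and (not r)

def run(events, x0=False):
    """Feed a sequence of (s, r) button states and collect the lamp output."""
    x, outputs = x0, []
    for s, r in events:
        x = rs_step(x, s, r)
        outputs.append(x)      # y_t = x_t
    return outputs
```

Pressing SB turns the lamp on; releasing SB leaves it on (the memory effect of the x-contact); pressing RB turns it off and it stays off.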
55) (u [t 0.y)∈S/ if x t 0 =off} (1.3.t 1 ] ) can be observed.t 1] . S0 ∩ S1 = ∅ . as depicted in Fig. A Voltage Supplayer R V B u(t) X D R R V y(t) C Figure no.3. y a ) ∈ S i .3.t 1 ] .4. It does not matter how we shall denote these information A (or on) for 1 and B )or off) for 0.y)∈S/ if x t 0 =A} ={(u.t 1] ∈ Ω} .y)∈S/ if x t 0 =on} (1.t 1] ) ⇒ y(τ) = 1 u(τ) ∀τ . 20 .y)=(u [t 0. Let us consider a blackbox. (1. It can be split into two subsets depending whether a pair (u.y)∈S/ if x t 0 =B} ={(u.49) The set S can describe the abstract system. we register the evolution of the input u(t) and of the resulted output y(t) . 1. Inputs.3. If we are doing experiments with this physical system on time interval [t0. y [t 0. Playing with this blackbox.3. 1. y [t 0.y)∈S/ if x t 0 =1}={(u.6. but it has a controllable voltage supplier with a voltage meter u(t) across the terminals AB and a voltage meter y(t) across the terminals CD.52) as depicted in Fig. 1. ∀t 0 .3.7. t 1 ∈ R. someone received.t1] for any t0. 3 3 Doing all the experiments possible we have a collection of inputoutput pairs which constitute a set S as (1. We are surprised that sometimes we get the inputoutput pair (u [t 0.7.t 1] )/ observed .t 1 ] = 1 u [t 0.54) 2 2 but other times we get the inputoutput pair (1.3. y b ) ∈ S i ⇒ y a ≡ y b i = 0.3. Also inside of any subset Si the input uniquely determines the output (1.t 1 ] . Blackbox Toy as a Two States Dynamical System. (1.y) is obtained having x t 0 equal to 0 or 1: S0={(u.53). InputOutput Relations. S = {(u [t 0. DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS.3.49). t1.t 1 ] = 2 u [t 0.3. (1.y)∈S/ if x t 0 =0}={(u. ∀(u. Example 1.t 1 ] .3.53) ∀(u. ∀u [t 0. nothing can be seen of what it contains inside .3. y [t 0. 1. a set S of inputoutput pairs (u. The box is covered.3. as a toy. If we know that the state of the relay is A we can determine the output evolution if we know the inputs. y [t 0.50) S1={(u. Outputs. 
From this we understand that the initial state is a label which parametrizes the subsets Si ⊆ S, as in (1.3.50) and (1.3.51), inside which one input uniquely determines one output. If someone gives us only an input u[t0,t1], we would not be able to say what the output is, because we have no idea from which subset, S0 or S1, to select the right pair. Some information is missing to us. But if someone gave us an input u[t0,t1] and an additional piece of information formulated as "the state is on", which means x=on, or as "the state is B", which means x̃=B, etc., we could uniquely determine the output, for instance y(t) = (2/3)u(t), selecting it from the subset S1.

Suppose now that the box cover face has been broken, so we can have a look inside the box as in Fig. 1.3.7.

Figure no. 1.3.7. The opened blackbox: the resistor network and the switch X which selects between the two voltage dividers.

The box can be in two states depending on the switch status: opened or closed. The switch position will determine the state of the blackbox. We can define the box state by a variable x which takes two values denoted as {off, on} or {0, 1} or {A, B}. It does not matter how the position of the switch is denoted (labelled): the state is equivalently expressed by one of the variables x ∈ {off, on}, x ∈ {0, 1} or x̃ ∈ {A, B}.

For the same applied input u(t) we can get two outputs, y(t) = (1/2)u(t) or y(t) = (2/3)u(t). Our set of input-output pairs can be split into two subsets: S0 if they correspond to (1.3.54) and S1 if they correspond to (1.3.55). Of course S0 ∪ S1 = S, S0 ∩ S1 = ∅. Now we can understand why the two sets of input-output pairs (1.3.54), (1.3.55) were obtained. The subset S0 can be equivalently labelled by one of the marks "off", "0", "B" and the subset S1 by "on", "1", "A" respectively.

In this example the state of the system cannot be changed by the input, such a system being called state uncontrollable. With this example our intention was to point out that the system state appears as a way of parametrising the input-output pair subsets inside which one input uniquely determines one output only. Also we wanted to point out that the state can be presented in different forms: the same input-output behaviour can have different state descriptions. For example, in the case of the blackbox or of the logic circuit we can define the state as {on, off}, {A, B} and so on.
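The two-state behaviour discussed above can be sketched in a few lines of code (an illustrative sketch, not part of the textbook; the function names are ours, and the gains 1/2 and 2/3 come from the two observed input-output pairs):

```python
# Two-state blackbox: a hidden switch position selects the gain.
# The input never changes the state, so the object is state uncontrollable.

GAINS = {0: 1.0 / 2.0,   # state 0 ("off" / "B"): pairs from subset S0
         1: 2.0 / 3.0}   # state 1 ("on"  / "A"): pairs from subset S1

def blackbox_output(u_samples, state):
    """Inside a subset S0 or S1 the input uniquely determines the output."""
    gain = GAINS[state]
    return [gain * u for u in u_samples]

u = [3.0, 6.0, 9.0]
y_off = blackbox_output(u, 0)   # y = u/2  -> [1.5, 3.0, 4.5]
y_on = blackbox_output(u, 1)    # y = 2u/3 -> approximately [2.0, 4.0, 6.0]
```

Without the extra piece of information `state`, the same input record is compatible with two different output records, which is exactly why the state label is needed.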
1.4. System State Concept. Dynamical Systems.

1.4.1. General aspects.

As we saw in the above examples, there can be many different ways of expressing the relationships of input to output. However, in most cases considered, beside the input, some initial conditions have to be known in order to determine the output univocally. For example, in the case of the simple RC circuit we have to know the voltage across the capacitor; in the case of the double RC circuit, the voltages across the two capacitors; for the mechanical system, the initial position of the arm; for the manufacturing point, the initial value of the stock; for the relay based circuit, the initial status of the relay. All this information defines the system state at the time moment from which the input will affect the output.

The state is the answer to the question: "Given u(t) for t ≥ t0 and the mathematical relationships between input and output (the abstract system), what additional information is needed to completely specify y(t) for t ≥ t0?".

The state of an abstract system is a collection of elements (the elements can be numbers) which, together with the input u(t) for all t ≥ t0, uniquely determines the output y(t) for all t ≥ t0. In essence the state parametrizes the listing of input-output pairs. The system state at a time moment contains (includes) all the essential information regarding the previous evolution needed to determine, for a known (given) input, the output starting with that time moment.

The state of a system is related to a time moment. For example the state x0 at a time moment t0 is denoted by (x0, t0) = x(t0). A state variable, denoted by the vector x(t), is the time function whose value at any specified time is the state of the system at that time moment. The state representation is not unique: for the double RC circuit one state representation consists of the output value y(t0) and the time derivative value of the output ẏ(t0).

In most cases the state is a set of n numbers and correspondingly x(t) is an n-vector function of time. The state space, denoted by X, is the set of all values x(t). The minimum number of state vector elements represents the system order or the system dimension. The state can also be a set consisting of an infinity of numbers, and in this case the state variable is an infinite collection of time functions. The systems from Ex. 1.2.2 and Ex. 1.2.3 are of first order
because it is enough to know a single element x0 to determine the output response, as we can see in relations (1.3.17), (1.3.18). Also the discrete time system from Ex. 1.3.2 is of first order, because the response (1.3.42) can be uniquely determined if we know just the number xp.

But the abstract system from Ex. 1.3.1 is of second order because, as we can see in relation (1.3.33) (considering for the sake of convenience the particular case T1 = T2 = T, which does not affect the system dimension), the output y(t) is uniquely determined by a given input u[t0,t] only if two initial conditions y(t0), ẏ(t0) are known. The response (1.3.33) can be arranged as

y(t) = (1/3)[4α1(t−t0) − α2(t−t0)] y(t0) + (2T/3)[α1(t−t0) − α2(t−t0)] ẏ(t0) + (2/(3T)) ∫[t0,t] [α1(t−τ) − α2(t−τ)] u(τ) dτ   (1.4.1)

These initial conditions are the output value and the output time derivative value at the time moment t0. Because two numbers are necessary to determine uniquely the output, we can say that this system is a second order one. The two values can be selected as the components of a vector,

x0 = [x0^1 ; x0^2] = [y(t0) ; ẏ(t0)]   (1.4.2)

that means at the time t0 the state is x0 ⇒ (t0, x0) = x(t0). Denoting

x̄0^1 = (4/3) y(t0) + (2T/3) ẏ(t0)
x̄0^2 = −(1/3) y(t0) − (2T/3) ẏ(t0)

the output response can also be written as

y(t) = α1(t−t0) x̄0^1 + α2(t−t0) x̄0^2 + (2/(3T)) ∫[t0,t] [α1(t−τ) − α2(t−τ)] u(τ) dτ   (1.4.3)

This output response can be univocally determined if the two numbers x̄0^1 and x̄0^2 are known, so they too can constitute the components of a state vector x̄0 = [x̄0^1 x̄0^2]T.

Let us consider a concrete example for T = 1 sec. If, for example, at the time moment t0, y(t0) = x0^1 = 3V and ẏ(t0) = x0^2 = 9V/sec, the state at the time moment t0 is expressed by the numerical vector x0 = [3 ; 9] = [3 volts ; 9 volts/sec], so we can say that at the time moment t0 the system is in the state [3 9]T.
For T = 1 sec we obtain x̄0^1 = (4/3)·3 + (2·1/3)·9 = 10V and x̄0^2 = −(1/3)·3 − (2·1/3)·9 = −7V/sec. In this form of the state vector definition we can say that at the time moment t0 the system is in the state x̄0 = [10 −7]T, which is different from the state x0 = [3 9]T; but, applying the same input u[t0,t], we obtain the same output as was obtained in the case of x0 = [3 9]T. Both states x0, x̄0 above related will determine the same input-output behaviour. The two state relationships

x̄1 = (4/3)x1 + (2T/3)x2
x̄2 = −(1/3)x1 − (2T/3)x2

can be written in a matrix form x̄ = Tx, where

T = [ 4/3   2T/3 ; −1/3  −2T/3 ] ,  det T = −2T/3 ≠ 0   (1.4.4)

The following important conclusion can be pointed out: the same input-output behaviour of an oriented system can be obtained by defining the state vector in different ways. If x is the state vector related to an oriented system and T is a nonsingular square matrix, det T ≠ 0, then the vector x̄ = Tx is also a state vector for the same oriented system. For example, in the case of the logic circuit (Ex. 1.3.3) or of the blackbox (Ex. 1.3.4) we can define the state values as {on, off}, {A, B} and so on, as we discussed.

If the amount of the collection of numbers which define the state is a finite one, the state is defined as a column vector x = [x1 x2 ... xn]T. The minimal number of elements of this vector able to determine (to precise) uniquely the output will define the system order or the system dimension. When the amount of such a collection strictly necessary is infinite (we can say the vector x has an infinite number of elements), then the order of the system is infinite, or the system is infinite-dimensional. Such an infinite-dimensional system is presented in the next example by the pure time delay element.

Example 1.4.2. Pure Time Delay Element.

Let us consider a belt conveyor transporting dust fuel (dust coal for example) utilised in a heating system, represented by a principle diagram as shown in Fig. 1.4.1. The thickness of the fuel layer is controlled by a mobile flap. The belt moves with a speed v. Suppose we are interested in the thickness at the point B at the end of the belt, expressed by the variable y(t).
The input is the thickness realised at the flap position, point A, and we shall denote it by the variable u(t). This variable will be the input of the oriented system we are defining. The distance between the points A and B is d, so one piece of fuel passing from A to B will take a period of time τ = d/v.

Figure no. 1.4.1. Belt conveyor with dust fuel (coal): the controlled flap sets the thickness u(t) at point A; the belt moves with speed v; the thickness at point B is y(t). The element is represented by the transfer e^{−τs} between U(s) and Y(s).

The input-output relation is expressed by the equation

y(t) = u(t − τ)   (1.4.5)

It is a so-called functional equation. We can read this relation as: the output at the time moment t equals the value the input u(t) had τ seconds ago. Such a dependence is illustrated in the time diagram from Fig. 1.4.2, where y(t1) = u(t1 − τ).

Now suppose an input u[t0,t] is given. Can we determine the output y(t) for any t ≥ t0? What do we need in addition to do this? Looking to the principle diagram from Fig. 1.4.1, or to the relation (1.4.5), we understand that in addition we need to know all the thicknesses along the belt between the points A and B, or in other words all the values the input u(t) had during the time interval [t0−τ, t0). This collection of information constitutes the system state at the time moment t0 and it will be denoted by x0. So the state at the time moment t0, denoted on short x0 = x(t0) = xt0, is a set containing an infinite number of elements,

x0 = xt0 = x(t0) = { u(θ), θ ∈ [t0−τ, t0) } = u[t0−τ, t0)   (1.4.6)

Because of that this system has the dimension of infinity. At any time t the state is (t, x(t)), defined by

x(t) = { u(θ), θ ∈ [t−τ, t) } = u[t−τ, t)   (1.4.7)
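As a minimal illustration (our own sketch, not from the textbook), the pure time delay and its state u[t−τ, t) can be emulated in discrete time with a buffer of input samples; the sampling step h and all the names below are assumptions:

```python
from collections import deque

def make_delay(tau, h, initial_state):
    """Pure time delay y(t) = u(t - tau), sampled with step h.
    `initial_state` is the record of the input over [t0 - tau, t0),
    i.e. it plays the role of x0 = u[t0-tau, t0)."""
    n = round(tau / h)
    assert len(initial_state) == n
    buf = deque(initial_state, maxlen=n)   # the state x(t) = u[t-tau, t)
    def step(u):
        y = buf[0]          # the value the input had tau seconds ago
        buf.append(u)       # the belt advances by one sample
        return y
    return step

delay = make_delay(tau=0.3, h=0.1, initial_state=[0.0, 0.0, 0.0])
outputs = [delay(u) for u in [1.0, 2.0, 3.0, 4.0]]   # [0.0, 0.0, 0.0, 1.0]
```

The output first reproduces the initial state and only then the applied input appears, delayed; refining h makes the buffer longer and longer, which is the discrete shadow of the infinite dimension of this system.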
4. The state variable is a function x: T→ X . DESCRIPTION AND GENERAL PROPERTIES OF SYSTEMS. 26 .4. Y l (s) = e −τs ∫ u(t)e −st dt −τ 0 (1.4. 0) . x 0 .15) is called the time state trajectory on the interval [t0.4.12) the general response ( the time image of (1.4.11) it depends on all the values of u(θ) ∀θ ∈ [−τ . 0. t1] . 0.t] ) + η(t. 1.11) is the free response as the output response when the input is zero.t) ) = η(t. The free response depends on the initial state only (here at the initial time moment t0=0 ) and as we can see from (1.4. 0. System State Concept. t1]. t→x(t). t 1] = {(t.t]. 0.14) where X is the state space .10). u [0. t 1 ] } (1. x(t)). ∀t ∈ [t 0 . 0) ⇔ u [−τ .8) (1.2. We paid for Laplace transform instrument simplicity with t0=0. 0 [0. (1. x 0 . which expresses the time evolution of the system state.4.10) is the forced response which depends on the input u(t) only. Dynamical Systems. We remember that L{u(t − τ)} = L{u(t − τ)1(t)} = e so the Laplace transform of the output is Y(s) = e −τs U(s) + e −τs ∫ u(t)e −st dt = Y f (s) + Y l (s) where Y f (s) = e −τs U(s) ⇒ y f (t) = η(t.13) 1. u [0. (1.12) From (1. u [0. 0.1.t) ) .4. 0.11) as y l (t) = η(t. It can be changed during the system time evolution. The state is not a constant (fixed) one.9) (1. The state variable x(t) is an explicit function of time but also depends implicitly on the starting time t0. the initial state x(t0)=x0 and the input u(τ) τ∈[t0. so the function x(t) can be a nonconstant one. (1.t] ) −τ 0 −τs [U(s) + ∫ u(t)e −st dt] −τ 0 (1.4. Now we can interpret the free response from (1.9) ) can be expressed as an inputinitial stateoutput relation y(t) = y f (t) + y l (t) = η(t.4. 0. 0 [0.t) ) . The graphic of this function on a time interval [t0. 0) so it looks naturally to choose the initial state as x 0 = x(0) = u [−τ . denoted by x [t 0. By zero input we must mean u(t) ≡ 0 ∀t ≥ 0 ⇔ U(s) ≡ 0∀s from the convergence domain of U(s).4. State Variable Definition.4.4.4. (1. x 0 . 
of course here depends on the Laplace transform U(s) which contains the input values u(θ) for any θ≥0.4.
This functional dependency, called input-initial state-state relation (iiss relation) or just a trajectory (more precisely time-trajectory), can be written as

x(t) = ϕ(t, t0, x0, u[t0,t]) ,  x0 = x(t0)   (1.4.16)

A relation of the form (1.4.16) is an iiss relation and expresses the state evolution of a system if the following four conditions are accomplished:

1. The consistency condition. For t = t0 the relation (1.4.16) has to check the condition

x(t)|t=t0 = x(t0) = ϕ(t0, t0, x(t0), u[t0,t0]) = x0   (1.4.17)

2. The causality condition. The state x(t) at any time t, or the trajectories ϕ(t, t0, x(t0), u[t0,t]), do not depend on the future inputs u(τ) for τ > t. This condition assures the causality of the abstract system, which has to correspond to the causality of the original physical oriented system.

3. The transition condition. Also, ∀ t1 ≥ t0,

lim[t→t1, t>t1] ϕ(t, t0, x(t0), u[t0,t]) = x(t1)   (1.4.18)

For any t2 ≥ t0, applying u[t0,t2] we obtain x(t2) = ϕ(t2, t0, x(t0), u[t0,t2]); and for any intermediate time t1, t0 ≤ t1 ≤ t2, with u[t0,t1] a subset of u[t0,t2], that means u[t0,t2] = u[t0,t1] ∪ u[t1,t2], the input will determine the same x(t2). Applying u[t0,t1] we get the intermediate state

x(t1) = ϕ(t1, t0, x(t0), u[t0,t1])   (1.4.19)

which, acting as an initial state from t1, together with the corresponding segment of the input, takes the system from x(t1) to x(t2):

x(t2) = ϕ(t2, t1, x(t1), u[t1,t2]) = ϕ(t2, t1, ϕ(t1, t0, x(t0), u[t0,t1]), u[t1,t2])   (1.4.20)

According to this property we can say that the input u[t0,t2] takes the state x(t0) to x(t2). Any intermediate state on a state trajectory is an initial state for the future state evolution.

4. The uniqueness condition. For a given initial state x(t0) = x0 at time t0 and a given well defined input u[t0,t], the state trajectory is unique. This can be expressed as: "A unique trajectory ϕ(t, t0, x0, u[t0,t]) exists for all t > t0, given the state x0 at time t0 and a real input u(τ) for τ ≥ t0". One input u[t0,t] (or u) takes the system from a state (t0,x0) = x(t0) to a state (t,x) = x(t), and if a state (t1,x1) = x(t1) is on that trajectory, then the corresponding segment of the input will take the system from x(t1) to x(t); that means a unique trajectory starts from each state.
Example 1.4.3. Properties of the iiss relation.

Shall we consider the examples Ex. 1.2.2 and Ex. 1.2.3, for which the abstract system is described by relations of the form

S1 :  Tẋ + x = K1 u ,  y = K2 x   (1.4.21)

The voltage across the capacitor, or the movement of the arm, respectively, is the state x. Its time evolution is

x(t) = e^{−(t−t0)/T} x0 + (K1/T) ∫[t0,t] e^{−(t−τ)/T} u(τ) dτ  ⇔  x(t) = ϕ(t, t0, x0, u[t0,t])   (1.4.22)

denoting x0 = x(t0). We can show that the relationship (1.4.22) accomplishes the four above conditions, that means it is an iiss relation.

1. The consistency condition. Substituting t = t0,

x(t0) = e^{−(t0−t0)/T} x0 + (K1/T) ∫[t0,t0] e^{−(t0−τ)/T} u(τ) dτ = x0 + 0 = x0

2. The causality condition. Because in (1.4.22) u(τ) appears inside the integral from t0 to t only, x(t) is irrespective of u(τ) ∀τ > t.

3. The transition condition. For t = t1, (1.4.22) becomes

x(t1) = e^{−(t1−t0)/T} x(t0) + (K1/T) ∫[t0,t1] e^{−(t1−τ)/T} u(τ) dτ

and for t = t2,

x(t2) = e^{−(t2−t0)/T} x(t0) + (K1/T) ∫[t0,t2] e^{−(t2−τ)/T} u(τ) dτ

Because e^{−(t2−t0)/T} = e^{−(t2−t1)/T}·e^{−(t1−t0)/T} and ∫[t0,t2](.)dτ = ∫[t0,t1](.)dτ + ∫[t1,t2](.)dτ, we get

x(t2) = e^{−(t2−t1)/T} [ e^{−(t1−t0)/T} x(t0) + (K1/T) ∫[t0,t1] e^{−(t1−τ)/T} u(τ) dτ ] + (K1/T) ∫[t1,t2] e^{−(t2−τ)/T} u(τ) dτ

where the bracket is exactly x(t1), so

x(t2) = e^{−(t2−t1)/T} x(t1) + (K1/T) ∫[t1,t2] e^{−(t2−τ)/T} u(τ) dτ

4. The uniqueness condition. If x0′ = x0″ and u′[t0,t] = u″[t0,t], the two state trajectories are

x′(t) = e^{−(t−t0)/T} x0′ + (K1/T) ∫[t0,t] e^{−(t−τ)/T} u′(τ) dτ
x″(t) = e^{−(t−t0)/T} x0″ + (K1/T) ∫[t0,t] e^{−(t−τ)/T} u″(τ) dτ

so

x′(t) − x″(t) = e^{−(t−t0)/T} (x0′ − x0″) + (K1/T) ∫[t0,t] e^{−(t−τ)/T} [u′(τ) − u″(τ)] dτ ≡ 0  ⇒  x′(t) ≡ x″(t)

Before, we said as a general statement that the input affects the state and the state influences the output. However there are systems in which the inputs do not influence the state or some components of the state vector; conversely, there are systems in which the outputs, or some outputs, are not influenced by the state. Such systems are called uncontrollable and unobservable respectively, about which more will be analysed later on. In Ex. 1.3.4, the blackbox system, the physical object is state uncontrollable because no admitted input can make the switch change its position. If, for example, the wire to the output were broken, then such a system would be unobservable. A state that is both uncontrollable and unobservable cannot be detected by any experiment and has no physical meaning.

1.4.3. Trajectories in State Space.

The input-initial state-state (iiss) relation

x(t) = ϕ(t, t0, x0, u[t0,t])   (1.4.23)

which expresses the time-trajectory of the state, is an explicit function of time. If the vector x is an n-dimensional one, there are n time-trajectories, one for each component,

xi(t) = ϕi(t, t0, x0, u[t0,t]) ,  i = 1, ..., n   (1.4.24)

These time-trajectories can be plotted, as t increases from t0, in an (n+1)-dimensional space or as n separate plots xi(t). Often a plot with t as an implicit parameter can be made by eliminating t from the solutions (1.4.24) of the state equations. If we denote xi = xi(t), i = 1,...,n, the iiss relation (1.4.24) is written as

x1 = ϕ1(t, t0, x0, u[t0,t])
...
xn = ϕn(t, t0, x0, u[t0,t])   (1.4.25)

and, eliminating t from the n above relations, we determine a trajectory in state space,
implicitly expressed as

F(x1, x2, ..., xn, t0, x(t0)) = 0   (1.4.26)

where a given (known) input was supposed; simpler expressions are obtained if the input is constant for any t. Because of the uniqueness condition accomplished by the iiss relation, the state trajectories do not cross one another: for a given input u[t0,t], one and only one trajectory passes through each point in state space and exists for all finite t ≥ t0. For a given initial state (t0, x0) only one trajectory is obtained; for different initial conditions a family of trajectories is obtained, called state portrait or phase portrait. The plot can efficiently be exploited for n = 2, in the state plane or phase plane. If the state vector components are the output and its (n−1) time derivatives, the state space is called phase space and the trajectory in phase space is called phase trajectory. The trajectory in state space can easier be obtained directly from the state equations, a system of first order differential equations.

Example 1.4.4. State Trajectories of a Second Order System.

Let us consider a simple second order system,

ẋ1(t) = λ1 x1(t) + u(t)
ẋ2(t) = λ2 x2(t) + u(t)   (1.4.27)

For simplicity let u(t) ≡ 0 ∀t ⇔ 0[t0,t] = { (τ, 0), ∀τ ∈ [t0,t] }. Under this hypothesis the iiss is obtained by integrating the system (1.4.27),

x1(t) = e^{λ1(t−t0)} x1(t0) = ϕ1(t, t0, x1(t0), 0[t0,t])
x2(t) = e^{λ2(t−t0)} x2(t0) = ϕ2(t, t0, x2(t0), 0[t0,t])   (1.4.28)

where we have denoted x0 = x(t0). Eliminating the variable t from (1.4.28) we obtain

x1/x1(t0) = e^{λ1(t−t0)} ,  x2/x2(t0) = e^{λ2(t−t0)}  ⇒  [x1/x1(t0)]^{1/λ1} = [x2/x2(t0)]^{1/λ2}  ⇔

x2 = x20 [x1/x10]^{λ2/λ1}  ⇔  F(x1, x2, x1(t0), x2(t0)) = 0   (1.4.29)

The same expression (1.4.29) can be obtained directly from the differential equations (1.4.27):

dx1/dt = λ1x1 ,  dx2/dt = λ2x2  ⇒  dx2/dx1 = (λ2 x2)/(λ1 x1)  ⇒  x2 = x2(t0) [x1/x1(t0)]^{λ2/λ1}   (1.4.30)

Supposing that we have λ1 < 0 and λ2 > 0, the time-trajectories are plotted through the two components as in Fig. 1.4.4.
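The claim that the free trajectory stays on the curve x2 = x2(t0)·(x1/x1(t0))^(λ2/λ1) can be checked numerically (an illustrative sketch with assumed values λ1 = −1, λ2 = 2, not from the textbook):

```python
import math

l1, l2 = -1.0, 2.0          # illustrative eigenvalues, lambda1 < 0 < lambda2
x10, x20 = 2.0, 3.0         # initial state x(t0), taking t0 = 0

def state(t):
    """iiss relation for zero input: each component evolves as e^{lambda*t}."""
    return x10 * math.exp(l1 * t), x20 * math.exp(l2 * t)

# along the whole trajectory the implicit relation F(x1, x2, x10, x20) = 0 holds
for t in [0.0, 0.5, 1.0, 2.0]:
    x1, x2 = state(t)
    assert abs(x2 - x20 * (x1 / x10) ** (l2 / l1)) < 1e-9
```

Each initial pair (x10, x20) picks out a different curve of the same family, which is exactly the state portrait.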
Figure no. 1.4.3. Time-trajectories x1(t), x2(t) (with time constants T1 = −1/λ1, T2 = −1/λ2) and the corresponding state trajectories in the state plane (x1, x2), for the case λ1 < 0, λ2 < 0, λ1 < λ2.

Figure no. 1.4.4. Time-trajectories x1(t), x2(t) and the corresponding state trajectory, for the case λ1 < 0, λ2 > 0.

In Fig. 1.4.5 the state portrait is shown for the case λ1 < 0, λ2 > 0.

Figure no. 1.4.5. State portrait for λ1 < 0, λ2 > 0.

The input-initial state-output relation (iiso relation),

y(t) = η(t, t0, x0, u[t0,t])   (1.4.31)

determines the output at a time t. In the case of the first order system from Example 1.4.3, the relation

y(t) = K2 e^{−(t−t0)/T} x0 + (K1K2/T) ∫[t0,t] e^{−(t−τ)/T} u(τ) dτ

is an iiso relation. It does not accomplish the consistency property, y(t0) = K2 x0 ≠ x0, so it cannot be an iiss relation.
For t0 = t, taking into consideration that u[t,t] = u(t), the iiso relation becomes

y(t) = g(x(t), u(t), t)   (1.4.32)

This is an algebraical relation and is called also the output relation or output equation.

Generally, we can say that a system is a dynamical one if the input-output dependence is not an univocal one, but the univocity can be reestablished by the knowledge of the state (some additional information) at the initial time.

By a dynamical system one can understand a set S of three elements

S = {Ω, ϕ, η}  or  S = {Ω, ϕ, g}   (1.4.33)

where: Ω is the set of admissible inputs, ϕ the input-initial state-state relation, η the input-initial state-output relation and g the output relation. This is called the explicit form of a dynamical system, expressed by relations (solutions) or trajectories. Another form for dynamical system representation is the implicit form by state equations,

S = {Ω, f, g}   (1.4.34)

where f is the vector function defining a set of equations: differential, difference, functional, logical. The solution of the equations defined by f, for a given initial state (t0, x0), is just the relation ϕ. Frequently, the state equations of a dynamical system are composed by:
1. The proper state equation, defined by f in (1.4.34), whose solutions are the trajectories expressed by ϕ.
2. The output equation, or more precisely the output relation g.

For example, the first order system from Ex. 1.2.2 or Ex. 1.2.3 can be represented as follows.

The explicit form, by relations (functions) or state trajectories:

u ∈ Ω   (Ω)
S :  x(t) = e^{−(t−t0)/T} x(t0) + (K1/T) ∫[t0,t] e^{−(t−τ)/T} u(τ) dτ   (ϕ)   (1.4.35)
y(t) = K2 x(t)   (g)

The implicit form, by state equations:

u ∈ Ω
S :  ẋ = −(1/T)x + (K1/T)u ,  x(t0) = x0 ,  t ≥ t0   (1.4.36)
y = K2 x
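The agreement between the explicit form (the trajectory ϕ) and the implicit form (the first order state equation) can be illustrated numerically; the values T = 1, K1 = K2 = 1, the constant input and the step size are assumptions of this sketch:

```python
import math

T, K1, K2 = 1.0, 1.0, 1.0
u = 2.0                      # constant input, chosen for illustration
x0, t0 = 0.5, 0.0

def phi(t):
    """Explicit form: x(t) = e^{-(t-t0)/T} x0 + (K1/T) * int e^{-(t-tau)/T} u dtau.
    For constant u the integral evaluates to K1*u*(1 - e^{-(t-t0)/T})."""
    return math.exp(-(t - t0) / T) * x0 + K1 * u * (1 - math.exp(-(t - t0) / T))

def euler(t, h=1e-4):
    """Implicit form: integrate x' = -(1/T)x + (K1/T)u from (t0, x0)."""
    x, tc = x0, t0
    while tc < t - 1e-12:
        x += h * (-(x / T) + (K1 / T) * u)
        tc += h
    return x

assert abs(phi(1.0) - euler(1.0)) < 1e-3   # both forms give the same trajectory
y = K2 * phi(1.0)                          # output relation y = K2 x
```

Solving the implicit form reproduces the explicit form; this is precisely the statement that the solution of the equation defined by f is the relation ϕ.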
1.5. Examples of Dynamical Systems.

1.5.1. Differential Systems with Lumped Parameters.

These systems are continuous time systems. Both the input u and the output y are time function vectors,

u ⊆ Ω ,  u — a (p×1) vector,  u = [u1 u2 ... up]T
y ⊆ Γ ,  y — an (r×1) vector,  y = [y1 y2 ... yr]T   (1.5.1)

The input-output (io) relation is a set of differential equations,

Fi(y, y^(1), ..., y^(ni), u, u^(1), ..., u^(mi), t) = 0 ,  i = 1, ..., r   (1.5.2)

The system dimension, or the order of the system, is n ≤ Σ(i=1..r) ni. The standard form of the state equations of such a system is obtained by transforming the above system of differential equations into n differential equations of the first order (the Cauchy normal form), which do not contain the input time derivatives, as in (1.5.3):

ẋ(t) = f(x(t), u(t), t) ,  x(t0) = x0 ,  t ≥ t0
y(t) = g(x(t), u(t), t)   (1.5.3)

This is the implicit form of the dynamical system S = {Ω, f, g} (or S = {Ω, f, g, x}, where x is mentioned just to indicate how the state variable is denoted). In (1.5.3) the first equation is called the proper state equation and the second is called the output relation. Here x(t) = (x1(t), ..., xn(t))T is just the state of the system; the number n of the elements of this vector is the dimension of the system or the system order. The dimension of the state vector x has no connection with r, the number of outputs, or p, the number of inputs. Both f and g are vector functions,

f(x(t),u(t),t) = [f1(x(t),u(t),t) ... fn(x(t),u(t),t)]T
g(x(t),u(t),t) = [g1(x(t),u(t),t) ... gr(x(t),u(t),t)]T

If the function f(x,u,t) satisfies the Lipschitz conditions with respect to the variable x, then the solution x(t), x(t0) = x0, exists and is unique for any t ≥ t0.

The system is called time-invariant, or autonomous, if the time variable t does not explicitly appear in f and g, and its form is

ẋ = f(x, u) ,  x(t0) = x0 ,  t ≥ t0
y = g(x, u)   (1.5.4), (1.5.5)

If the functions f and g are linear with respect to x and u, the system is called a continuous-time linear system. The state equations are

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)   (1.5.6)

Because the matrices A, B, C, D depend on the time variable t we call this a multivariable time-varying system: multi-input (p inputs) - multi-output (r outputs). The matrices have the following meaning: A(t), (n×n), the system matrix; B(t), (n×p), the input (command) matrix; C(t), (r×n), the output matrix; D(t), (r×p), the auxiliary matrix or the straight input-output connection matrix.

The state equations of a single variable linear system, single-input (p = 1) single-output (r = 1), are

ẋ(t) = A(t)x(t) + b(t)u(t)
y(t) = cT(t)x(t) + d(t)u(t)   (1.5.7)

In this case u(t) and y(t) are scalars: the matrix B(t) degenerates to a column vector b(t), the matrix C(t) degenerates to a row vector cT(t) and the matrix D(t) degenerates to a scalar d(t).

If all these matrices do not depend on t (are constant ones), the system is called a linear time-invariant (dynamical) system (LTIS), having the form

ẋ = Ax + Bu
y = Cx + Du   (1.5.8)

We observe that in any form of state equations the input time derivatives do not appear.
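A minimal simulation sketch for a linear time-invariant system ẋ = Ax + Bu, y = Cx + Du using Euler steps (the numerical matrices below are illustrative assumptions, not from the textbook):

```python
# x' = A x + B u, y = C x + D u, simulated with small Euler steps.
A = [[0.0, 1.0], [-2.0, -3.0]]   # illustrative (n = 2) system matrix
B = [[0.0], [1.0]]               # n x p input matrix, p = 1
C = [[1.0, 0.0]]                 # r x n output matrix, r = 1
D = [[0.0]]                      # r x p direct connection matrix

def simulate(x0, u_of_t, t_end, h=1e-3):
    x, t = list(x0), 0.0
    while t < t_end - 1e-12:
        u = u_of_t(t)
        dx = [sum(A[i][j] * x[j] for j in range(2)) + B[i][0] * u for i in range(2)]
        x = [x[i] + h * dx[i] for i in range(2)]
        t += h
    u = u_of_t(t)
    y = [sum(C[i][j] * x[j] for j in range(2)) + D[i][0] * u for i in range(1)]
    return x, y

x, y = simulate([0.0, 0.0], lambda t: 1.0, 2.0)
# for a constant unit input, x tends to the steady state A x + B u = 0, i.e. x = [0.5, 0]
```

The same loop works for the time-varying case by letting A, B, C, D be functions of t; dedicated solvers would replace the naive Euler step in serious work.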
1.5.2. Time Delay Systems (Dead-Time Systems).

Somewhere in these systems at least one delay operator exists, which means that in the physical systems there are elements in which the information is transmitted with a finite speed. In Ex. 1.4.2 we discussed the pure time delay element as the simplest dead time system. The pure time delay element can also be realised as a device consisting of a tape based record and play structure: the record-head and the play-head are positioned at a distance d, so it will take τ seconds for the tape to pass from one head to the other.

Let us now consider a more complex example of a time delay system.

Example 1.5.1. Time Delay Electronic Device.

Shall we consider an electronic device as depicted in the principle diagram from Fig. 1.5.1. The device has amplifiers, so we can consider the signals as being voltages between −10V and +10V, for example.

Figure no. 1.5.1. Time delay electronic device: operational amplifiers (input impedances Zi, output impedances Ze) with resistors R1, R2, R0, R, capacitor C and a pure time delay element producing w(t) = y(t−τ).

The state equation of such systems has the form

ẋ(t) = f(x(t), x(t−τ), u(t), t) ,  t ≥ t0   (1.5.9)
y(t) = g(x(t), x(t−τ), u(t), t)   (1.5.10)

The order of time delay systems is infinite and has nothing to do with the number of elements of the vector x, even if the variable x in the differential equation is a scalar one. For example, consider the time delay equation

ẋ(t) = x(t−τ) − u(t)   (1.5.11)

Because

x(t) = ∫[t0,t] ẋ(τ)dτ + x(t0)   (1.5.12)

the evolution for t ≥ t0 requires the values of x on the whole interval [t0−τ, t0]: the initial state at the time t0 is x[t0−τ, t0].
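An equation such as ẋ(t) = x(t−τ) − u(t) can be integrated by Euler steps once the whole initial function on [t0−τ, t0] is supplied, which makes the infinite-dimensional initial state concrete (an illustrative sketch; the step size and values are assumptions):

```python
def simulate_delay_system(tau, history, u_of_t, t_end, h=0.01):
    """Euler integration of x'(t) = x(t - tau) - u(t), t >= 0.
    `history` is the initial state: samples of x on [-tau, 0], length n+1."""
    n = round(tau / h)
    assert len(history) == n + 1
    xs = list(history)                    # xs[k] = x(-tau + k*h), so xs[-1] = x(0)
    t = 0.0
    while t < t_end - 1e-12:
        x_delayed = xs[len(xs) - 1 - n]   # the value x(t - tau)
        xs.append(xs[-1] + h * (x_delayed - u_of_t(t)))
        t += h
    return xs[-1]

# constant history x = 1 on [-tau, 0] and zero input: x' = 1 on [0, tau]
x_end = simulate_delay_system(tau=0.5, history=[1.0] * 51,
                              u_of_t=lambda t: 0.0, t_end=0.5)
```

Note that a single scalar x(0) would not be enough: changing any sample of the history changes the future solution, which is the practical meaning of the infinite order.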
1.5.3. Discrete-Time Systems.

The input uk, the output yk and the state xk are strings of numbers. The state equation is expressed by difference equations. The general form of the state equations is

xk+1 = f(xk, uk, k)   (1.5.13)
yk = g(xk, uk, k)   (1.5.14)

where k means the present step and k+1 means the next step. This is a time varying system. If f and g are linear with respect to x and u, the state equations are

xk+1 = Ak xk + Bk uk   (1.5.15)
yk = Ck xk + Dk uk   (1.5.16)

If the matrices A, B, C, D are constant with respect to the variable k, then the system is a linear discrete time invariant system.

1.5.4. Other Types of Systems.

A. Stochastic Systems. These systems are based on the so-called probability theory and random functions theory. All the systems above are called deterministic systems (at any time any variable is well defined).

B. Distributed Parameter Systems. They are described by partial differential equations. They are infinite dimensional systems.

C. Finite State Systems (Logical Systems or Finite Automata). The functions f and g are logical functions.
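Returning to the discrete-time case, the linear time invariant difference equations are easy to iterate directly (a sketch with assumed scalar values A = 0.5, B = C = 1, D = 0):

```python
# x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k (scalar case for brevity)
A, B, C, D = 0.5, 1.0, 1.0, 0.0

def run(x0, inputs):
    x, ys = x0, []
    for u in inputs:          # k is the present step; the update gives step k+1
        ys.append(C * x + D * u)
        x = A * x + B * u
    return x, ys

x_final, ys = run(0.0, [1.0, 1.0, 1.0])
# ys = [0.0, 1.0, 1.5]; for a constant unit input the state approaches B/(1-A) = 2
```

The time-varying form (1.5.13)-(1.5.14) is obtained by letting the coefficients depend on the loop index k.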
1.6. General Properties of Dynamical Systems.

For any form of dynamical systems some general properties can be established.

1.6.1. Equivalence Property.

We have a system denoted by S = S(Ω, f, g, x), where x is mentioned just to indicate how the state variable will be denoted.

Two states xa, xb ∈ X of the system are equivalent at the time t = t0 (k = k0) if, starting at the initial time from these states as initial states, the output will be the same whenever we apply the same input:

η(t, t0, xa, u[t0,t]) ≡ η(t, t0, xb, u[t0,t]) ,  ∀ u[t0,t]

If in a system there are equivalent states, then the system is not in a reduced form; that means that the real dimension of the system is smaller than the one defined by the vector x.

Two systems S = S(Ω, f, g, x) and S̄ = S(Ω̄, f̄, ḡ, x̄) are called input-output (I/O) equivalent, and we denote this S ≈ S̄, if for any state x there exists a state x̄ ∈ X̄ (x̄ = x̄(x)) so that, if the same inputs are applied to both systems and the initial state is x for the system S and x̄ for the system S̄, then the outputs are the same.

For linear time invariant differential systems (SLIT) denoted by

S = S(A, B, C, D) :  ẋ = Ax + Bu ,  y = Cx + Du
S̄ = S(Ā, B̄, C̄, D̄) :  dx̄/dt = Āx̄ + B̄u ,  ȳ = C̄x̄ + D̄u

if there exists a square matrix T, det T ≠ 0, so that x̄ = Tx, then S ≈ S̄ and we can express

Ā = TAT⁻¹ ,  B̄ = TB ,  C̄ = CT⁻¹ ,  D̄ = D

Indeed, substituting x = T⁻¹x̄,

dx̄/dt = Tẋ = TAx + TBu = (TAT⁻¹)x̄ + TBu ,  y = Cx + Du = (CT⁻¹)x̄ + Du

so the above systems are I/O equivalent. For single-input, single-output (siso) systems the corresponding relations are b̄ = Tb, c̄T = cT T⁻¹, d̄ = d.
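The I/O equivalence under x̄ = Tx can be checked numerically: with Ā = TAT⁻¹, b̄ = Tb, c̄T = cT T⁻¹, equivalent initial states must produce the same output for the same input (all numerical values below are illustrative assumptions, not from the textbook):

```python
# S  : x' = A x + b u, y = c x + d u
# S~ : with x~ = T x, A~ = T A T^-1, b~ = T b, c~ = c T^-1, d~ = d
A = [[0.0, 1.0], [-2.0, -3.0]]
b = [0.0, 1.0]
c = [1.0, 0.0]
d = 0.0
T = [[1.0, 1.0], [0.0, 2.0]]                 # det T = 2 != 0
Tinv = [[1.0, -0.5], [0.0, 0.5]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

Abar = mat_mul(mat_mul(T, A), Tinv)
bbar = [T[i][0] * b[0] + T[i][1] * b[1] for i in range(2)]
cbar = [c[0] * Tinv[0][j] + c[1] * Tinv[1][j] for j in range(2)]

def output(Am, bm, cm, x0, u, t_end, h=1e-3):
    """Euler-integrate the state equation and return y(t_end) = c x + d u."""
    x, t = list(x0), 0.0
    while t < t_end - 1e-12:
        dx = [sum(Am[i][j] * x[j] for j in range(2)) + bm[i] * u for i in range(2)]
        x = [x[i] + h * dx[i] for i in range(2)]
        t += h
    return cm[0] * x[0] + cm[1] * x[1] + d * u

x0 = [1.0, -1.0]
x0bar = [T[i][0] * x0[0] + T[i][1] * x0[1] for i in range(2)]   # equivalent initial state
y1 = output(A, b, c, x0, u=1.0, t_end=1.0)
y2 = output(Abar, bbar, cbar, x0bar, u=1.0, t_end=1.0)
assert abs(y1 - y2) < 1e-6
```

Because the Euler update is linear, the transformed simulation tracks x̄ = Tx step by step, so the two outputs agree up to rounding.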
Example 1.6.1. Electrical RLC Circuit.
Let us consider a series RLC circuit controlled by the voltage u applied between the terminals a-a'; the current through the circuit is i and the voltages across the elements are uR, uL, uC. Supposing we are interested in i, uR, uL, uC, the input is u and the output is the vector
y = [i  uR  uL  uC]ᵀ,   r = 4, p = 1.
We can determine the state equations by applying the Kirchhoff theorems:
uR + uL + uC = u
where
uR = Rq̇,  uL = L·di/dt = Lq̈,  uC = (1/C)q,  i = q̇.
Some variables can be eliminated (those given by algebraic equations only). Choosing as state components the electrical charge and the current,
x1 = q,  x2 = i = q̇   (so ẋ1 = x2, ẋ2 = q̈),
the state equations are:
ẋ1 = x2
ẋ2 = −(1/LC)x1 − (R/L)x2 + (1/L)u
y1 = x2
y2 = Rx2
y3 = −(1/C)x1 − Rx2 + u
y4 = (1/C)x1
We can write them into a matrix form:
ẋ = Ax + bu,  y = Cx + du,  x = [x1  x2]ᵀ
A = | 0       1    |     b = | 0   |
    | −1/LC  −R/L  |         | 1/L |

C = | 0     1  |     d = | 0 |
    | 0     R  |         | 0 |
    | −1/C  −R |         | 1 |
    | 1/C   0  |         | 0 |
We cannot choose as state variables (state vector components), for example, x1 = i and x2 = uR, because uR = Ri (there is an algebraic relation between them). But we can choose as state variables, for example,
x1 = uC,  x2 = uR:
ẋ1 = (1/RC)x2
ẋ2 = −(R/L)x1 − (R/L)x2 + (R/L)u
y1 = (1/R)x2
y2 = x2
y3 = −x1 − x2 + u
y4 = x1
This is another representation of the same oriented system,
S' = S(A', b', C', d', x'):   ẋ' = A'x' + b'u,  y = C'x' + d'u
A' = | 0     1/RC |     b' = | 0   |     C' = | 0   1/R |     d' = | 0 |
     | −R/L  −R/L |          | R/L |          | 0    1  |          | 0 |
                                              | −1  −1  |          | 1 |
                                              | 1    0  |          | 0 |
compared with the first representation
S = S(A, b, C, d, x):   ẋ = Ax + bu,  y = Cx + du
A = | 0       1    |     b = | 0   |     C = | 0     1  |     d = | 0 |
    | −1/LC  −R/L  |         | 1/L |         | 0     R  |         | 0 |
                                             | −1/C  −R |         | 1 |
                                             | 1/C   0  |         | 0 |
We can pass between the two systems with the next equations:
A' = TAT⁻¹,  b' = Tb,  C' = CT⁻¹,  d' = d
where x' = Tx,
T = | 1/C  0 |,   det(T) = R/C ≠ 0.
    | 0    R |
This means that the two abstract systems are equivalent, because for any pair of equivalent initial states we obtain the same output.
We can prove that, for this example, the next relations are true:
A' = TAT⁻¹,  b' = Tb,  C' = CT⁻¹,  d' = d.

1.6.2. Linearity Property.
A system is linear if its response with respect to state and output is a linear combination determined by the pairs (initial state, input). Let
(xa, t0), U a[t0,t] → xa(t), ya(t)
(xb, t0), U b[t0,t] → xb(t), yb(t).
The system is linear if, for the initial state and input
(x0, t0) = (αxa + βxb, t0),  U[t0,t] = αu a + βu b,  ∀α, β ∈ C
the responses are
x(t) = αxa(t) + βxb(t)
y(t) = αya(t) + βyb(t).
If the system is expressed in an implicit form by state equations, it is linear if the two functions involved in these equations are linear with respect to the two variables x and u.

1.6.3. Decomposition Property.
For any system we can define the so-called forced response, or zero initial state response, which is the response of the system when the initial state is zero. This zero state is just the neutral element of the set X organised as a linear space. We denote the forced response by yf and xf.
We can also define the free response, which is the system response when the input is the segment u[t0,t] = 0, that means u(τ) = 0 ∀τ; this zero means the neutral element of U organised as a linear space. By yl and xl are denoted the free response components.
For example, let (x0, t0) with x0 = 5 and the input U[t0,t] = 10 (A); if C = 5F and R = 100Ω, the free response is (1/C)·5 = 1 (V) and the forced response is R·10 = 1000 (V).
A system S has the decomposition property with respect to the state or the output if they can be expressed as
y(t) = yl(t) + yf(t)  and  x(t) = xl(t) + xf(t).
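For a linear system the decomposition property can be verified numerically: the response from initial state x0 under input u must equal the free response (same x0, zero input) plus the forced response (zero initial state, same input). A minimal Python sketch on a scalar discrete-time example, with illustrative values not taken from the book:

```python
# Sketch: y_total = y_free + y_forced for a linear system.
# Scalar example x[k+1] = a x[k] + b u[k], y[k] = c x[k]; values illustrative.

def response(a, b, c, x0, inputs):
    x, ys = x0, []
    for u in inputs:
        ys.append(c * x)
        x = a * x + b * u
    return ys

u = [1.0, -2.0, 0.5, 3.0]
a, b, c, x0 = 0.8, 1.0, 1.0, 2.0
y_total = response(a, b, c, x0, u)
y_free = response(a, b, c, x0, [0.0] * len(u))   # zero input (neutral element of U)
y_forced = response(a, b, c, 0.0, u)             # zero initial state (neutral element of X)
assert all(abs(yt - (yl + yf)) < 1e-12
           for yt, yl, yf in zip(y_total, y_free, y_forced))
```

The same check fails for a nonlinear system, as the next example of the text shows.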
1.6.4. Time Invariance Property.
A system is time-invariant if in the state equation the time variable t does not appear explicitly. Equivalently, a system is time-invariant if its state and output responses do not depend on the initial time moment, when these responses are determined by the same initial state and the same translated input. If a system is time-invariant, the initial time always appears through the binomial (t − t0). We can express (t, τ) like: t − τ = (t − t0) − (τ − t0).

1.6.5. Controllability Property.
Let S be a dynamical system. A state x0 at the time t0 is controllable to a state (x1, t1) if there exists an admissible input U[t0,t1] ⊂ Ω which transfers the state (x0, t0) to the state (x1, t1). If this property takes place for any x0 ∈ X, the system is called completely controllable. If, in addition, this property takes place for any finite [t0, t1], the system is called totally controllable.

1.6.6. Observability Property.
We can say that the state x0 at the moment t0 is observable at a time moment t1 ≥ t0 if this state can be uniquely determined knowing the input U[t0,t1] and the output Y[t0,t1]. If this property takes place for any x0 ∈ X, the system is called completely observable. If in addition this property takes place for any [t0, t1], the system is called totally observable.

1.6.7. Stability Property.
This is one of the most important general properties of a dynamical system. There are a lot of criteria expressing this property. It will be studied in detail later on.

Example 1.6.2. Example of Nonlinear System.
The system y(t) = 2u(t) + 4 is not linear. We can prove this: for ya = 2ua + 4, yb = 2ub + 4 we obtain
y = 2(αua + βub) + 4 = 2αua + 2βub + 4 ≠ αya + βyb.
The response of a linear system for zero initial state and zero input is just zero.
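The failed superposition of the example y = 2u + 4 can be reproduced directly. The values of ua, ub, α, β below are chosen only for illustration:

```python
# Sketch: superposition holds for y = 2u but fails for y = 2u + 4
# (Example 1.6.2); input values chosen for illustration.

def affine(u):
    return 2.0 * u + 4.0

def linear(u):
    return 2.0 * u

ua, ub, alpha, beta = 1.0, 3.0, 1.0, 1.0
u = alpha * ua + beta * ub            # u = 4

# superposition holds for the linear map ...
assert linear(u) == alpha * linear(ua) + beta * linear(ub)
# ... but not for the affine one: 2(ua + ub) + 4 != (2ua + 4) + (2ub + 4)
assert affine(u) != alpha * affine(ua) + beta * affine(ub)
```

Note that for the zero input the affine system responds with 4, not 0, which already contradicts the property stated above for linear systems.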
2. LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).

2.1. Input-Output Description of SISO LTI.

2.1.1. The Linear Time-Invariant Differential Systems.
Linear Time-Invariant Differential Systems, in short LTI, mainly means systems described by ordinary linear differential equations with constant coefficients. Sometimes they are also called continuous time systems. They are the most studied systems in systems theory, mainly because the mathematical tools are easier to be applied. Even if the majority of systems encountered in nature are nonlinear ones, their behaviour can be rather well studied by linear systems under some particular conditions, like description around a running point or for small disturbances.
In addition, LTI benefit for their study from the Laplace transform, which translates the complicated operations with differential equations into simple algebraical operations in the complex domain.
Two categories of LTI will be distinguished:
SISO: Single Input Single Output LTI;
MIMO: Multi Inputs Multi Outputs LTI.
The abstract system is expressed by an input-output relation (IOR), as a differential equation, or by state equations. The two forms depend on the physical aspect, on our knowledge regarding the physical system, on our qualification and on our ability to get the mathematical equations. However some properties, like controllability, observability, optimality, are better studied in time domain by state equations, even for LTI.

2.1.2. Input-Output Description of SISO LTI.
Let us suppose that we have a time invariant system with one input and one output. The input-output relation (IOR) is expressed by an ordinary linear differential equation with constant coefficients of the order n,
Σk=0..n ak·y(k)(t) = Σk=0..m bk·u(k)(t),  an ≠ 0    (2.1.1)
where we understand
y(k)(t) = d^k y(t)/dt^k,  u(k)(t) = d^k u(t)/dt^k.
Let us consider the input u ∈ Ω, the set of scalar functions which admit the Laplace transform. The output is a function y ∈ Γ, where the set Γ will be, for these systems, a set of functions which admit the Laplace transform too. The block diagram of such a system is as depicted in Fig. 2.1.1.

Figure no. 2.1.1. (block diagram: the input u(t), U(s) applied to the block H(s), producing the output y(t), Y(s))
Three types of systems can be distinguished depending on the ratio between m and n:
1. m < n: the system is called strictly proper (causal) system.
2. m = n: the system is called proper system.
3. m > n: the system is called improper system.
The maximum order of the output derivative determines the order of the system.
The improper systems are not physically realisable. They could represent an ideally desired mathematical behaviour of some physical objects, but impossible to be obtained in the real world. For example, the system described by the IOR y(t) = du(t)/dt, that means n = 0, a0 = 1, m = 1, b0 = 0, b1 = 1, represents a derivative element. It cannot process any input from the set Ω of the admissible inputs, which can contain functions with discontinuities or which are non-derivable. A rigorous mathematical treatment can be performed by using the distributions theory, but those results are not essential for our object of study. Because of that, our attention will concentrate on the strictly proper and proper systems only.
Also, if m < n, we can consider bn = 0, ..., bm+1 = 0 (if it is mentioned that bn = 0, this means the system is a strictly proper one), and (2.1.1) will be written as
Σk=0..n ak·y(k)(t) = Σk=0..n bk·u(k)(t),  an ≠ 0.
The input-output relation (IOR) can be very easily expressed in the complex domain by applying the Laplace transform to the relation (2.1.1). As we remember,
Y(s) = L{y(t)},  U(s) = L{u(t)}    (2.1.2)
L{y(k)(t)} = s^k·Y(s) − Σi=0..k−1 y(k−i−1)(0⁺)·s^i,  k ≥ 1    (2.1.3)
L{u(k)(t)} = s^k·U(s) − Σi=0..k−1 u(k−i−1)(0⁺)·s^i,  k ≥ 1    (2.1.4)
In (2.1.3), (2.1.4) the initial conditions are defined as right side limits y(k−i−1)(0⁺), u(k−i−1)(0⁺). For simplicity we shall denote them by y(k−i−1)(0), u(k−i−1)(0). For the moment the convergence abscissas are not mentioned. One obtains
(Σk=0..n ak·s^k)·Y(s) − Σk=1..n ak·[Σi=0..k−1 y(k−i−1)(0)·s^i] = (Σk=0..n bk·s^k)·U(s) − Σk=1..n bk·[Σi=0..k−1 u(k−i−1)(0)·s^i]
This can be rearranged as
[Σk=0..n ak·s^k]·Y(s) = [Σk=0..n bk·s^k]·U(s) + I(s)
from where Y(s) can be withdrawn as
Y(s) = M(s)/L(s)·U(s) + I(s)/L(s) = Yf(s) + Yl(s)    (2.1.6)
where
M(s) = Σk=0..n bk·s^k = bn·s^n + b(n−1)·s^(n−1) + ... + b1·s + b0    (2.1.7)
L(s) = Σk=0..n ak·s^k = an·s^n + a(n−1)·s^(n−1) + ... + a1·s + a0    (2.1.8)
I(s) = Σk=1..n ak·[Σi=0..k−1 y(k−i−1)(0)·s^i] − Σk=1..n bk·[Σi=0..k−1 u(k−i−1)(0)·s^i]    (2.1.9)
As we can observe from (2.1.6), the output appears as a sum of two components, called the forced response yf(t) and the free response yl(t), so
Y(s) = Yf(s) + Yl(s)    (2.1.10)
This expresses the decomposition property: any linear system has the decomposition property.
Yf(s) = M(s)/L(s)·U(s) = H(s)U(s)    (2.1.11)
is the forced response Laplace transform, which depends on the input only. If the initial conditions are zero, then I(s) = 0 and Y(s) = Yf(s). The forced response Yf(s) expresses the input-output behaviour (io response), which does not depend on the system state (because it is supposed to be zero) or on how the system internal description is organised (how the system state is defined).
Yl(s) = I(s)/L(s)    (2.1.12)
is the free response Laplace transform, which depends on the initial conditions only. If the input u(t) ≡ 0, ∀t ≥ 0, then U(s) = 0 and Y(s) = Yl(s). The free response Yl(s) expresses the initial state-output behaviour (iso response), which does not depend on the input (because it is supposed to be zero) but depends on how the system internal description is organised (how the system state is defined).
We can now define the very important notion of the transfer function (TF). The transfer function of a system, denoted by H(s), is the ratio between the Laplace transform of the output and the Laplace transform of the input which determined that output in zero initial conditions (z.i.c.), if this ratio is the same for any input variation:
H(s) = Y(s)/U(s) |z.i.c., the same for ∀U(s)    (2.1.13)
In the case of the SISO LTI, the transfer function is
H(s) = M(s)/L(s) = [bn·s^n + b(n−1)·s^(n−1) + ... + b1·s + b0] / [an·s^n + a(n−1)·s^(n−1) + ... + a1·s + a0]    (2.1.14)
always a ratio of two polynomials (rational function). There are systems to which a transfer function can be defined but it is not a rational function, as for example systems with time delay or systems defined by partial differential equations    (2.1.15)
The transfer function expresses only the input-output (io) behaviour of a system, which is just the forced response, that means the system response in zero initial conditions, as we can observe from (2.1.6), (2.1.11).
If the polynomials L(s) and M(s) have no common factor (they are coprime), their ratio expresses the so-called nominal transfer function (NTF). The order of a system is expressed by the degree of the denominator polynomial of the transfer function, that is n = degree{L(s)}.
The same input-output behaviour can be assured by a family of transfer functions if the nominator and the denominator have a common factor, as
M(s) = M'(s)P(s),  L(s) = L'(s)P(s)    (2.1.16)
then
H(s) = M'(s)P(s)/[L'(s)P(s)]  ⇒  H(s) = M'(s)/L'(s)    (2.1.17)
If the two polynomials M'(s) and L'(s) are coprime ones, gcd{M'(s), L'(s)} = 1, then the last expression of H(s) is the reduced form of the transfer function (RTF). It results that
degree{L'(s)} = n' < n = degree{L(s)}    (2.1.18)
If in a transfer function common factors appear which are simplified, then the io behaviour (the forced response) can be described by lower order abstract systems, the order being the polynomial degree of the transfer function denominator after simplification, but the iso behaviour (the free response) still remains of the order equal to the order the transfer function had before the simplification. So systems can have different orders for their internal description, but all of them will have the same forced response. In such a case some properties, such as controllability or/and observability, are not satisfied.
Sometimes we denote this SISO LTI system as S = TF{M, L} ⇔ TF.
Example 2.1.1. Proper System Described by Differential Equation.
Let us consider a proper system with n = 2, m = 2 described by the differential equation
y(2) + 7y(1) + 12y = u(2) + 4u(1) + 3u    (2.1.19)
whose Laplace transform in z.i.c. is
s²Y(s) + 7sY(s) + 12Y(s) = s²U(s) + 4sU(s) + 3U(s)
from where the TF is
H(s) = Y(s)/U(s) |z.i.c. = (s² + 4s + 3)/(s² + 7s + 12) = M(s)/L(s)  ⇒  n = 2    (2.1.20)
so we can consider the system
S = TF{M, L} = TF{s² + 4s + 3, s² + 7s + 12} = TF{(s + 1)(s + 3), (s + 4)(s + 3)}.
But we observe that
H(s) = (s + 1)(s + 3) / [(s + 4)(s + 3)] = (s + 1)/(s + 4) = M'(s)/L'(s)  ⇒  n' = 1 (order = 1)
the NTF. The transfer function is related to the forced response Yf(s) = H(s)U(s):
Yf(s) = (s + 1)(s + 3) / [(s + 4)(s + 3)] · U(s)  ⇒  Yf(s) = (s + 1)/(s + 4) · U(s)
The io behaviour is of the first order even if the differential equation (2.1.19) is of the second order. If this differential equation is represented by state equations, it will be of the second order: the internal behaviour of the system is represented by two state variables and, of course, the solution of the system expressed by the differential equation (2.1.19) depends on two initial conditions. From this point of view the system is of the second order.
However, the time domain equivalent expression of the NTF
H'(s) = (s + 1)/(s + 4) = Y(s)/U(s) |z.i.c. = M'(s)/L'(s)
is a first order differential equation
y(1)(t) + 4y(t) = u(1)(t) + u(t)    (2.1.21)
whose Laplace transform in z.i.c. is sY(s) + 4Y(s) = sU(s) + U(s), and which describes only a part of the system given by (2.1.19).
Let us now consider another system S', described by the differential equation
y(1)(t) + 4y(t) = u(1)(t) + u(t),  n = 1 (order = 1);
its general solution depends on only one initial condition, and its transfer function is
H'(s) = (s + 1)/(s + 4) = M'(s)/L'(s)
which is a first order one. This system can be denoted as S' = TF{M', L'} = TF{(s + 1), (s + 4)}. The forced response of S' is
Yf(s) = (s + 1)/(s + 4) · U(s)
identical with the forced response of S. We can say that the system S' expresses only some aspects of the internal behaviour of S, that means only the modes (the movements) that depend on the input and which affect the output. This is the so-called completely controllable and completely observable part of the system S. If M(s) and L(s) are prime one to each other, the internal behaviour is completely related by the io behaviour of the system, that means by the transfer function.
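The cancellation in Example 2.1.1 can be checked numerically: wherever the common factor (s + 3) is nonzero, H(s) and its nominal form take the same values, so both define the same forced response. A small illustrative sketch:

```python
# Sketch: H(s) = (s+1)(s+3)/((s+4)(s+3)) coincides with its nominal form
# H'(s) = (s+1)/(s+4) at every s where the cancelled factor is nonzero.

def H(s):
    return ((s + 1) * (s + 3)) / ((s + 4) * (s + 3))

def H_nominal(s):
    return (s + 1) / (s + 4)

for s in (0.0, 1.0, 2.5, 10.0):
    assert abs(H(s) - H_nominal(s)) < 1e-12
```

The cancelled mode e^(−3t) disappears from the forced response but can still appear in the free response of a second order realization, which is exactly the point of the example.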
2.2. State Space Description of SISO LTI.
A system of the order n can be given under the form of the state equations (SS),
ẋ = Ax + bu    (2.2.1)
y = cᵀx + du    (2.2.2)
where the matrices and vectors dimensions and names are:
x, (n×1): the state column vector, x = [x1, x2, ..., xn]ᵀ;
A, (n×n): the system matrix;
b, (n×1): the command column vector;
c, (n×1): the output column vector;
d, (1×1): the coefficient of the direct input-output connection.
The relation (2.2.1) is called the proper state equation and the relation (2.2.2) is called the output equation (relation). If d = 0, then dg{M(s)} < dg{L(s)} = n and the system is strictly proper. If d ≠ 0, then dg{M(s)} = dg{L(s)} = n and the system is proper.
Such a system form can be denoted as
S = S{A, b, c, d, x}    (2.2.3)
where we also mentioned the letter x just to see how we have denoted the state vector. Sometimes the abstract system attached to an oriented system, described by state equations (SS) as (2.2.1), (2.2.2) or (2.2.3), is denoted as
S = SS{A, b, c, d} ⇔ SS{S}    (2.2.4)
The abstract system (2.2.1), (2.2.2) or (2.2.3) can also be obtained not as a procedure of mathematical modelling of a physical oriented system but as a result of a synthesis procedure. In the previously analysed RLC circuit example, by using different methods, we have got directly the state equations expressing all the mathematical relations between variables in Cauchy normal form (a system of first order differential equations). Also, the variables x, u, y have to be understood as time functions x(t), u(t), y(t) which admit the Laplace transform.
Because a system of the form (2.2.1), (2.2.2) is completely described by the matrices A, b, c, d only, a unique transfer function (TF) H(s) can be attached to it:
H(s) = cᵀ[sI − A]⁻¹b + d = M(s)/L(s)    (2.2.5)
which can be obtained by applying the Laplace transform to (2.2.1) and (2.2.2) in z.i.c. The degree of L(s) is n. Such a form expresses everything about the system behaviour: the internal and the external behaviour.
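Relation (2.2.5) can be checked on a concrete state realization of the transfer function from Example 2.1.1, H(s) = (s² + 4s + 3)/(s² + 7s + 12). The controllable canonical form used below is a standard textbook choice of realization, not one given in this text: A = [[0, 1], [−a0, −a1]], b = [0, 1]ᵀ, d = b2/a2 and c = [b0 − a0·d, b1 − a1·d]ᵀ.

```python
# Sketch: evaluate H(s) = c^T (sI - A)^{-1} b + d, relation (2.2.5), and
# compare with the polynomial ratio. The canonical-form matrices are a
# standard assumption, not taken from the book.

def H_ss(A, b, c, d, s):
    # solve (sI - A) x = b explicitly for the 2x2 case, then c^T x + d
    m00, m01 = s - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], s - A[1][1]
    det = m00 * m11 - m01 * m10
    x0 = ( m11 * b[0] - m01 * b[1]) / det
    x1 = (-m10 * b[0] + m00 * b[1]) / det
    return c[0] * x0 + c[1] * x1 + d

A = [[0.0, 1.0], [-12.0, -7.0]]
b = [0.0, 1.0]
d = 1.0                               # b2/a2: m = n, so the system is proper
c = [3.0 - 12.0 * d, 4.0 - 7.0 * d]   # [-9, -3]

for s in (0.0, 1.0, 2.0):
    direct = (s**2 + 4*s + 3) / (s**2 + 7*s + 12)
    assert abs(H_ss(A, b, c, d, s) - direct) < 1e-12
```

Since d ≠ 0 here, dg{M(s)} = dg{L(s)} = 2, in agreement with the proper-system case described above.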
If a system is first described by a differential equation like (2.1.1) or by a transfer function (TF) like (2.1.14), then an infinite number of systems described by state equations (SS) as (2.2.1), (2.2.2) or (2.2.3) can be attached to it; we denote this process by
TF{M, L} → SS{A, b, c, d, x}    (2.2.7)
The determination of SS from TF, which is called the transfer function realisation by state equations, means to determine n² + 2n + 1 unknown variables from (n + 1) + (n + 1) equations obtained from the two identities in s,
nom{cᵀ[sI − A]⁻¹b + d} ≡ M(s)  and  den{cᵀ[sI − A]⁻¹b + d} ≡ L(s)    (2.2.6)
where "nom{}" means "the nominator of {}" and "den{}" means "the denominator of {}". Indeed, the TF (2.1.14), of the order n, depends on maximum 2n + 2 coefficients, from where only 2n + 1 are essential, while the state equations (SS) as (2.2.1) and (2.2.2) depend on n·n + n + n + 1 = n² + 2n + 1 > 2n + 2 coefficients. There are methods to obtain different SS from a TF, which will be studied, by mentioning the equivalence and univalence these realisations have.
If the TF H(s) of the system S is a reducible one,
H(s) = M(s)/L(s) = M'(s)·P(s) / [L'(s)·P(s)]    (2.2.8)
then the expression obtained after simplification,
H'(s) = M'(s)/L'(s),  dg{L'(s)} = n' < n = dg{L(s)}    (2.2.9)
that means M'(s) and L'(s) are coprime, can be interpreted as the transfer function of another system S' of the order n', for which one of its state realisations is
S' = SS{A', b', c', d', x'},  TF{M', L'} → SS{A', b', c', d', x'}    (2.2.10)
If H'(s) is itself a nominal transfer function (NTF), then the system
S' = SS{A', b', c', d', x'}    (2.2.12)
represents the so-called input-output equivalent minimal realisation, in short the minimal realisation, of the system
S ⇔ TF{M, L}    (2.2.11)
and we denote this by
S → S' m.r.    (2.2.14)
The systems S and S' are input-output equivalent: they determine the same forced response but not the same free response.
The transitions from one form of realisation to another one are presented in the diagram from Fig. 2.2.1.
They determine the same forced response but not the same free response.9) . on short the minimal realisation.r.1.2.1) and (2. However. dg{L (s)} = n < n = dg{L(s)} H (s) = L (s) can be interpreted as the transfer function of another system S' of the order n'. b.3) depend from A.r.2. d . b . c .2. because of the ratio.c. d .2. 2.2. b .2. x } (2.
5 0.LINEAR TIME INVARIANT DIFFERENTIAL SYSTEMS (LTI).08160 0.c1.. 2.12500 0 d = y1 b = x2 0 x3 0 x1 x2 x3 0.2.08000 d = c= u1 y1 0 Transfer function: u1 x1 1 H'(s)= ..a = x1 0.03000 0.08000 0.01000 b = x1 0.x') The order n'<n Univoque SS TF transform Figure no.2.01280 0.. SS representation of the system S Univoque transform TF representation Reduction of the system S by symplifying the common factors M(s) S=SS(A.2.d.2.x) The order n TF SS TF H(s)= L(s) dg{L(s)}=n TF TF H'(s)= M'(s) L'(s) dg{L'(s)}=n' SS state realisation One Similarity transforms Inputoutput and Inputstateoutput Equivalent systems Another S1=SS(A1. Step Response 1 0..2.x 1) TF The order n SS state realisation SS reduction.2.8 0.02s + 0.1 0 0 Step response of both H(s) and H'(s) Free respose of H(s) with x(0)=[1 10 1] x(0)=[1 1 1] Free respose of H'(s) with x(0)=1 100 200 300 Time (sec. an example of two such systems is presented.d1.9 0. Inputoutput only Equivalent systems SS TF transform TF SS One Univoque state realisation S'=SS(A'.2.02s + 0.6 0.) 400 500 600 Transfer function: 100 s^2 + 2 s + 1 H(s)= 10000 s^3 + 300 s^2 + 102 s + 1 Zero/pole/gain: 0.. 2.c. In Fig.3 0.01) (s^2 + 0.10240 c = y1 x1 y1 0.d'.b'.12500 0 0 A = x2 x3 0 0.4 Amplitude 0..2 0.01) x1 x2 x3 x1 0..12500 100 s + 1 Figure no.c'.06250 0 u1 u1 x1 0.01 (s^2 + 0.01) H(s)= (s+0. State Space Description of SISO LTI. Some forms of SS have specific advantages that are useful for different applications. 2.b1.7 0. 50 .1.2.b..01280 0.
2.3. Input-Output Description of MIMO LTI.
MIMO LTI means Multi Input-Multi Output Linear Time Invariant Systems, in short "Multivariable Systems". They are systems with several inputs and several outputs described by a system of linear constant coefficients ordinary differential equations. Such a system, denoted by S, with p inputs u1, ..., up and r outputs y1, ..., yr, is represented in a block diagram having the input and output components explicitly represented, or considering one input, the p-column vector u, and one output, the r-column vector y, as depicted in Fig. 2.3.1:
u = [u1, ..., up]ᵀ,  y = [y1, ..., yr]ᵀ    (2.3.1)

Figure no. 2.3.1. (block diagram: the system S with the inputs u1, ..., up and outputs y1, ..., yr, equivalently with the input vector u (p×1) and the output vector y (r×1))

As it was mentioned in Chap. 2.1, the vectors u, x, y are usually denoted simply by u, x, y, without boldface fonts, if no confusion will appear.
The input-output relation (IOR) is expressed by a system of r linear constant coefficients ordinary differential equations. For an easier understanding we shall consider here zero initial conditions (z.i.c.):
Σi=1..r Σk=0..nj,i a(j)k,i·yi(k)(t) = Σi=1..p Σk=0..mj,i b(j)k,i·ui(k)(t),  j = 1, ..., r    (2.3.2)
This system of differential equations can be expressed in the complex domain by applying the Laplace transform to (2.3.2):
Σi=1..r Σk=0..nj,i a(j)k,i·s^k·Yi(s) = Σi=1..p Σk=0..mj,i b(j)k,i·s^k·Ui(s)  ⇒
Σi=1..r Lj,i(s)·Yi(s) = Σi=1..p Mj,i(s)·Ui(s),  j = 1, ..., r    (2.3.3)
We can define the vectors
Y(s) = (Y1(s), ..., Yr(s))ᵀ,  U(s) = (U1(s), ..., Up(s))ᵀ    (2.3.4)
⇒ L(s)Y(s) = M(s)U(s)    (2.3.5)
where
L(s) = {Lj,i(s)}, 1 ≤ j ≤ r, 1 ≤ i ≤ r, an (r×r) matrix    (2.3.6)
M(s) = {Mj,i(s)}, 1 ≤ j ≤ r, 1 ≤ i ≤ p, an (r×p) matrix    (2.3.7)
L(s) and M(s) are called matrices of polynomials; any component of each matrix is a polynomial.
The io behaviour of a multivariable system in z.i.c. is expressed by
Y(s) = H(s)U(s)    (2.3.9)
where
H(s) = L⁻¹(s)M(s)    (2.3.10)
is called the Transfer Matrix (TM). H(s) is a rational matrix: each of its components is a rational function,
H(s) = | H11(s) ... H1i(s) ... H1p(s) |
       | ............................ |
       | Hj1(s) ... Hji(s) ... Hjp(s) |
       | ............................ |
       | Hr1(s) ... Hri(s) ... Hrp(s) |    (2.3.11)
This rational matrix is an operator if it is the same for any expression of U(s); it is irrespective of the input. The j-th component of the output is given by
Yj(s) = Σi=1..p Hji(s)Ui(s)    (2.3.12)
Any component of this rational matrix, for example Hji, can be interpreted as a transfer function between the input Ui and the output Yj, that means
Hji(s) = Yj(s)/Ui(s),  in zero initial conditions and with Uk(s) ≡ 0, ∀s, if k ≠ i    (2.3.13)

Example. Suppose we have a 2 inputs and 2 outputs system described by a system of 2 differential equations:
(j = 1):  2y1(4) + 3y1 + 6y2(1) + 3y2 = 3u1(3) + u1(1) + 5u2,   r = 2
(j = 2):  3y1(2) + 2y1(1) + 5y1 + 8y2 = 2u1(1) + 5u1 + 3u2(2) + 4u2(1) + 5u2,   p = 2
whose Laplace transform in z.i.c. is
(2s⁴ + 3)Y1(s) + (6s + 3)Y2(s) = (3s³ + s)U1(s) + 5U2(s)
(3s² + 2s + 5)Y1(s) + 8Y2(s) = (2s + 5)U1(s) + (3s² + 4s + 5)U2(s)
⇒ L(s) = | 2s⁴ + 3       6s + 3 |,   M(s) = | 3s³ + s   5            |
         | 3s² + 2s + 5  8      |           | 2s + 5    3s² + 4s + 5 |
The TM is H(s) = L⁻¹(s)·M(s).
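For the example above, the transfer matrix H(s) = L⁻¹(s)M(s) can be evaluated numerically at any point s where det L(s) ≠ 0. The following sketch writes out L⁻¹(s)M(s) for the 2×2 case; the choice of the sample point s = 1 is arbitrary.

```python
# Sketch: numerical evaluation of H(s) = L^{-1}(s) M(s) at one point s
# for the 2x2 polynomial matrices of the example.

def polyval(coeffs, s):
    """coeffs [c0, c1, ...] represent c0 + c1 s + c2 s^2 + ..."""
    return sum(c * s**k for k, c in enumerate(coeffs))

# entries of L(s) and M(s) from the example, lowest degree first
L = [[[3, 0, 0, 0, 2], [3, 6]],
     [[5, 2, 3],        [8]]]
M = [[[0, 1, 0, 3], [5]],
     [[5, 2],       [5, 4, 3]]]

def tm_at(s):
    Ls = [[polyval(L[i][j], s) for j in range(2)] for i in range(2)]
    Ms = [[polyval(M[i][j], s) for j in range(2)] for i in range(2)]
    det = Ls[0][0]*Ls[1][1] - Ls[0][1]*Ls[1][0]
    # H = L^{-1} M, written out explicitly for the 2x2 case
    return [[(Ls[1][1]*Ms[0][j] - Ls[0][1]*Ms[1][j]) / det for j in range(2)],
            [(Ls[0][0]*Ms[1][j] - Ls[1][0]*Ms[0][j]) / det for j in range(2)]]

H1 = tm_at(1.0)   # [[H11(1), H12(1)], [H21(1), H22(1)]]
```

At s = 1 we have L(1) = [[5, 9], [10, 8]] and M(1) = [[4, 5], [7, 12]], so each entry Hji(1) is the value of the corresponding transfer function between Ui and Yj at that point.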
For a system, by managing the mathematical relations between system variables, we can directly obtain the state equations (SS):
ẋ = Ax + Bu    (2.3.14)
y = Cx + Du    (2.3.15)
where the matrices and vectors dimensions and names are:
x, (n×1): the state column vector;
A, (n×n): the system matrix;
B, (n×p): the command matrix;
C, (r×n): the output matrix;
D, (r×p): the matrix of the direct input-output connection.
This form represents the internal structure of the system. The order of the system is just n, the number of components of the state vector x. However, the order n of the system has nothing to do with the number of inputs p and the number of outputs r.
The system S is denoted as
S = SS(A, B, C, D)    (2.3.16)
If the state equations (SS) are given, then the transfer matrix (TM) is uniquely determined by the relation
H(s) = C[sI − A]⁻¹B + D    (2.3.17)
This relation expresses the transform SS → TM. The opposite transform, TM → SS, the state realisation of a transfer matrix, is possible but much more difficult.
A SISO LTI system is a particular case of a MIMO LTI system. If the system has one input (p = 1) and one output (r = 1) the matrices will be denoted as
A → A,  B → b,  C → cᵀ,  D → d    (2.3.18)
so a SISO system, as a particular case of (2.3.16), is
S = SS(A, B, C, D) = SS(A, b, cᵀ, d)    (2.3.19)
2.4. Response of Linear Time Invariant Systems.

2.4.1. Expression of the State Vector and Output Vector in s-domain.
Let us consider a LTI system S = SS(A, B, C, D, x). The state equations are
ẋ = Ax + Bu    (2.4.1)
y = Cx + Du    (2.4.2)
u ∈ Ω    (2.4.3)
where Ω is given and the state is the column vector x = [x1, ..., xn]ᵀ. The order n of the system is irrespective of the number p of inputs and of the number r of outputs.
The system behaviour can be expressed in the s-domain by getting the Laplace transform of the state x(t). We know that the Laplace transform of a vector (matrix) is the vector (matrix) of the Laplace transforms, L{xi(t)} = Xi(s), and
L{x(t)} = X(s) = [X1(s), ..., Xn(s)]ᵀ    (2.4.4)
L{ẋ(t)} = sX(s) − x(0),  where x(0) = lim t→0, t>0 x(t)    (2.4.5)
By applying this to (2.4.1), (2.4.2) and denoting by L{u(t)} = U(s) = [U1(s), ..., Up(s)]ᵀ the Laplace transform of the input vector, we obtain
sX(s) − x(0) = AX(s) + BU(s)  ⇒  (sI − A)X(s) = x(0) + BU(s)  ⇒
X(s) = (sI − A)⁻¹x(0) + (sI − A)⁻¹BU(s)    (2.4.6)
If we denote
Φ(s) = (sI − A)⁻¹    (2.4.7)
then the expression of the state vector in s-domain is
X(s) = Φ(s)x(0) + Φ(s)BU(s)    (2.4.8)
The expression Φ(s) is the Laplace transform of the so-called transition matrix Φ(t), or state transition matrix:
Φ(t) = e^(At),  Φ(s) = (sI − A)⁻¹ = L{Φ(t)}    (2.4.9)
From (2.4.8) we observe that the state response has two components:
- the state free response Xl(s),
Xl(s) = Φ(s)x(0)    (2.4.10)
- the state forced response Xf(s),
Xf(s) = Φ(s)BU(s)    (2.4.11)
X(s) = Xl(s) + Xf(s)    (2.4.12)
which respects the decomposition property.
The complex s-domain expression of the output can be obtained by substituting (2.4.8) into the Laplace transform of (2.4.2),
Y(s) = CX(s) + DU(s)    (2.4.13)
which gives
Y(s) = CΦ(s)x(0) + [CΦ(s)B + D]U(s)    (2.4.14)
Also the output is the sum of two components,
Y(s) = Yl(s) + Yf(s)    (2.4.15)
- the output free response,
Yl(s) = CΦ(s)x(0) = Ψ(s)x(0)    (2.4.16)
- the output forced response,
Yf(s) = [CΦ(s)B + D]U(s) = H(s)U(s)    (2.4.17)
which reveals the decomposition property. We denoted by Ψ(s),
Ψ(s) = CΦ(s)    (2.4.18)
the so-called base functions matrix in complex domain, and by H(s),
H(s) = [CΦ(s)B + D]    (2.4.19)
the transfer matrix of the system. This transfer matrix is univocally determined from the state equations. For SISO systems the transfer function is
H(s) = cᵀΦ(s)b + d    (2.4.20)

2.4.2. Time Response of LTI from Zero Time Moment.
This response, from the zero initial time moment t0 = 0, can be easily determined by using the real time convolution theorem of the Laplace transform,
L⁻¹{F1(s)·F2(s)} = ∫0..t f1(t − τ)f2(τ)dτ    (2.4.21)
where F1(s) = L{f1(t)}, F2(s) = L{f2(t)}.
The relation (2.4.8) can be interpreted as
X(s) = Φ(s)x(0) + Φ(s)BU(s) = Φ(s)x(0) + F1(s)F2(s),  Φ(t) = L⁻¹{Φ(s)} = L⁻¹{(sI − A)⁻¹}
so, by applying the real time convolution theorem, we obtain
x(t) = Φ(t)x(0) + ∫0..t Φ(t − τ)Bu(τ)dτ    (2.4.22)
which is the state response of the system. Substituting it in (2.4.2) results
y(t) = CΦ(t)x(0) + ∫0..t CΦ(t − τ)Bu(τ)dτ + Du(t)    (2.4.23)
the output response, both from the initial time moment t0 = 0.
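Relation (2.4.23) can be verified numerically on a scalar example. The first order system below, with a = −2, step input u = 1 and x(0) = 1, is a hypothetical illustration, not one from the text; its transition "matrix" is Φ(t) = e^(at) and the convolution integral is approximated with a trapezoidal rule.

```python
# Sketch: y(t) = Phi(t) x(0) + integral_0^t Phi(t - tau) u(tau) dtau
# for the scalar system x' = a x + u, y = x, with u = 1 (step input).

from math import exp

a, x0, t = -2.0, 1.0, 1.5

def phi(t):
    return exp(a * t)

# trapezoidal approximation of the convolution integral, u(tau) = 1
N = 20000
h = t / N
integral = sum(0.5 * h * (phi(t - k*h) + phi(t - (k+1)*h)) for k in range(N))
y = phi(t) * x0 + integral

# closed-form solution of the same system for comparison
y_exact = exp(a*t) * x0 + (1 - exp(a*t)) / (-a)
assert abs(y - y_exact) < 1e-6
```

The first term is the free response CΦ(t)x(0) and the integral is the forced response, so the numerical check also illustrates the decomposition property in time domain.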
2.4.3. Properties of the Transition Matrix.
1. The Laplace transform. The transition matrix defined as Φ(t) = e^(At) has the Laplace transform
L{Φ(t)} = L{e^(At)} = Φ(s) = (sI − A)⁻¹    (2.4.24)
Proof: Let A be an (n×n) matrix. Because the series of powers
ϕ(x) = Σk=0..∞ x^k·t^k/k! = e^(xt)
is uniformly convergent, the matrix function
Φ(t) = ϕ(x)|x→A = Σk=0..∞ A^k·t^k/k! = e^(At)
has the Laplace transform
Φ(s) = L{e^(At)} = L{Σk=0..∞ A^k·t^k/k!} = s⁻¹I + s⁻²A + s⁻³A² + ... = Σk=0..∞ A^k·s^−(k+1)    (2.4.25)
where we applied the formula L{t^k/k!} = 1/s^(k+1) = s^−(k+1).
Because of the identity
(sI − A)Φ(s) = (sI − A)(s⁻¹I + s⁻²A + s⁻³A² + ...) = I − s⁻¹A + s⁻¹A − s⁻²A² + s⁻²A² − ... = I
we have (sI − A)Φ(s) = I and Φ(s)(sI − A) = I, so Φ(s) = (sI − A)⁻¹    (2.4.26)
2. The identity property: Φ(0) = I, where Φ(0) = Φ(t)|t=0.
3. The transition property: Φ(t1 + t2) = Φ(t1)Φ(t2) = Φ(t2)Φ(t1), ∀t1, t2    (2.4.27)
4. The differential equation property. The transition matrix is the solution of the matrix differential equation
Φ̇(t) = AΦ(t),  Φ(0) = I;  in particular Φ̇(0) = A    (2.4.28)
5. The inversion property: Φ⁻¹(t) = Φ(−t).
6. The determinant property. The transition matrix is a nonsingular matrix, ∀t    (2.4.29)
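The identity, transition and inversion properties can be checked numerically by evaluating the defining series of e^(At) with enough terms. The 2×2 matrix and time values below are illustrative only:

```python
# Sketch: numerical check of Phi(t1+t2) = Phi(t1) Phi(t2) and
# Phi(t) Phi(-t) = Phi(0) = I, using the truncated series of e^{At}.

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm(A, t, terms=40):
    # Phi(t) = I + At + (At)^2/2! + ... truncated after `terms` terms
    P = [[1.0, 0.0], [0.0, 1.0]]   # current term (A t)^k / k!, starts at I
    S = [[1.0, 0.0], [0.0, 1.0]]   # partial sum
    for k in range(1, terms):
        P = matmul(P, [[A[i][j]*t/k for j in range(2)] for i in range(2)])
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

A = [[0.0, 1.0], [-2.0, -3.0]]
t1, t2 = 0.3, 0.7
lhs = expm(A, t1 + t2)
rhs = matmul(expm(A, t1), expm(A, t2))     # transition property
idm = matmul(expm(A, t1), expm(A, -t1))    # inversion property: result is I
```

The properties hold exactly because Φ(t1) and Φ(t2) are exponentials of the same matrix A, so they commute; only the series truncation introduces a (negligible) numerical error here.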
2.4.2.4. Transition Matrix Evaluation.

To compute the transition matrix we can use different methods.

1. The direct method. We have A, so we directly evaluate
Φ(s) = (sI − A)^{-1}, Φ(t) = L^{-1}{Φ(s)}.   (2.4.30)

2. The spectral (matrix function) method. Let f(λ): C → C be a function, continuous, which has mk − 1 derivatives at each λk, that means f^{(l)}(λk) exists ∀ l ∈ [0, mk − 1]. Let L(λ) be the characteristic polynomial of the square matrix A,
L(λ) = det(λI − A)   (2.4.31)
and the characteristic equation L(λ) = 0, so that
L(λ) = Π_{k=1}^{N} (λ − λk)^{mk}, λk ∈ C.   (2.4.32)
Then the attached matrix function is
f(A) = Σ_{k=1}^{N} Σ_{l=0}^{mk−1} f^{(l)}(λk)·Ekl, where f^{(l)}(λk) = d^l f(λ)/dλ^l |_{λ=λk}.   (2.4.33)
The matrices Ekl are called the spectral matrices of the matrix A. There are n matrices Ekl. They are determined by solving a matrix algebraical system using n arbitrary independent functions f(λ).   (2.4.34)

3. The polynomial method. Let
p(λ) = α_{n−1}λ^{n−1} + ... + α1·λ + α0   (2.4.35)
be an (n−1)-degree polynomial (it has n coefficients αi, 0 ≤ i ≤ n−1) and A an (n x n) matrix whose characteristic polynomial is
det(λI − A) = Π_{k=1}^{N} (λ − λk)^{mk}, Σ_{k=1}^{N} mk = n.   (2.4.36)
The attached polynomial matrix function p(A) is
p(A) = α_{n−1}A^{n−1} + ... + α1·A + α0·I.   (2.4.37)
In such conditions, if the system of n equations
f^{(l)}(λk) = p^{(l)}(λk), k = 1, ..., N, l = 0, ..., mk−1   (2.4.39)
is satisfied, then
f(A) = p(A).   (2.4.40)
The relation (2.4.39) expresses n conditions, that means an algebraical system with n unknowns, the variables α0, α1, ..., α_{n−1}. Solving this system, the coefficients α0, ..., α_{n−1} are determined. Finally, for
f(λ) = e^{λt} ⇒ f(A) = e^{At}, with f^{(l)}(λk) = t^l e^{λk·t}.   (2.4.38)

4. The numerical method. The series is truncated,
Φ(t) ≈ Φp(t) = Σ_{k=0}^{p} A^k t^k / k!.   (2.4.41)
If the dimension of the matrix A is very big then we may have a false convergence. It is better to express t = qτ, q ∈ N, with τ small enough, so that
Φ(t) = Φ(qτ) = (Φ(τ))^q ≈ (Φp(τ))^q.   (2.4.42)
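A minimal sketch of the polynomial method for a 2x2 matrix with distinct eigenvalues: e^{At} = α1·A + α0·I, with the coefficients solved from e^{λt} = α1·λ + α0 at each eigenvalue. The matrix and its eigenvalues (−1 and −2) are my own illustrative choice; the result is cross-checked against the truncated series (2.4.41).

```python
import math

# Polynomial method (eqs. 2.4.35-2.4.40) for a 2x2 example of my own.
A = [[0.0, 1.0], [-2.0, -3.0]]
lam1, lam2 = -1.0, -2.0   # roots of det(lam*I - A) = lam^2 + 3*lam + 2

def phi_poly(t):
    # solve e^{lam*t} = alpha1*lam + alpha0 at lam1 and lam2
    a1 = (math.exp(lam1 * t) - math.exp(lam2 * t)) / (lam1 - lam2)
    a0 = math.exp(lam1 * t) - a1 * lam1
    return [[a1 * A[i][j] + (a0 if i == j else 0.0) for j in range(2)]
            for i in range(2)]

def phi_series(t, terms=30):
    # reference value from the truncated series (2.4.41)
    res = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(2))
                 for j in range(2)] for i in range(2)]
        res = [[res[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return res

P, S = phi_poly(0.8), phi_series(0.8)
print(max(abs(P[i][j] - S[i][j]) for i in range(2) for j in range(2)))  # ~0
```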
2.4.5. Time Response of LTI from Nonzero Time Moment.

We saw that, using the Laplace transform, the state response from t0 = 0 was
x(t) = Φ(t)x(0) + ∫_0^t Φ(t−τ)Bu(τ)dτ, ∀ t ≥ 0.   (2.4.43)
The formula for an initial time moment t0 ≠ 0 is obtained by using the transition property of the transition matrix. Substituting t = t0 in (2.4.43), we obtain
x(t0) = Φ(t0)x(0) + ∫_0^{t0} Φ(t0 − τ)Bu(τ)dτ.   (2.4.44)
Taking into consideration (2.4.27),
Φ(t0 − τ) = Φ(t0)Φ(−τ),   (2.4.45)
and (2.4.28), Φ(−t) = Φ^{-1}(t),   (2.4.46)
we can withdraw from (2.4.44) the vector x(0), multiplying (2.4.44) on the left side by Φ^{-1}(t0):
x(0) = Φ(−t0)x(t0) − ∫_0^{t0} Φ(−τ)Bu(τ)dτ
and, by substituting x(0) in the relation (2.4.43), we have
x(t) = Φ(t − t0)x(t0) − ∫_0^{t0} Φ(t−τ)Bu(τ)dτ + ∫_0^t Φ(t−τ)Bu(τ)dτ
x(t) = Φ(t − t0)x(t0) + ∫_{t0}^t Φ(t−τ)Bu(τ)dτ
which is the general time response of the state vector. The output general time response is
y(t) = CΦ(t − t0)x(t0) + ∫_{t0}^t [CΦ(t−τ)B + Dδ(t−τ)]u(τ)dτ
with CΦ(t − t0) = Ψ(t − t0) and CΦ(t−τ)B + Dδ(t−τ) = ℵ(t−τ), where we have expressed
Du(t) = ∫_{t0}^t Dδ(t−τ)u(τ)dτ.
This is called the general time response of a LTI. Here
Ψ(t) = L^{-1}{CΦ(s)} ⇒ yl(t) = Ψ(t − t0)x(t0).
Similarly, we compute the so-called weighting matrix (impulse matrix),
ℵ(t) = L^{-1}{CΦ(s)B + D} = L^{-1}{H(s)}.
We can say that the transfer matrix is the Laplace transform of the impulse matrix ℵ(t): Y(s) = H(s)U(s). The impulse matrix is the output response for zero initial conditions (zero initial state) when the input is the unit impulse.
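The derivation above relies on the transition property: propagating the state from 0 directly to t must agree with propagating from 0 to t0 and then from t0 to t. A scalar sketch of this consistency, with values of my own choosing and a constant input so the convolution has a closed form:

```python
import math

# Scalar system dx/dt = a*x + b_in*u with constant input u(t) = u0
# (illustrative values of my own).  Phi(t) = exp(a*t).
a, b_in, u0 = -1.5, 2.0, 1.0

def x_from(t, t0, xt0):
    # x(t) = Phi(t - t0)*x(t0) + integral_{t0}^{t} Phi(t - tau)*b_in*u0 dtau,
    # closed form of the convolution for constant u
    return math.exp(a * (t - t0)) * xt0 + \
           (b_in * u0 / -a) * (1.0 - math.exp(a * (t - t0)))

x1 = x_from(1.0, 0.0, 0.3)        # propagate 0 -> 1
direct = x_from(2.0, 0.0, 0.3)    # propagate 0 -> 2 in one step
chained = x_from(2.0, 1.0, x1)    # propagate 0 -> 1 -> 2
print(abs(direct - chained))      # ~0: the two paths coincide
```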
In practice some special responses are very important, like the step response (the output response from zero initial state determined by a unit step input):
u(t) = 1(t) = { 1, t ≥ 0; 0, t < 0 }, U(s) = 1/s
Y(s) = H(s)·(1/s) ⇒ y(t) = L^{-1}{H(s)·(1/s)}.

Example. For a first order system with a step input of amplitude ∆u,
H(s) = K/(Ts + 1), U(s) = ∆u/s ⇒
y(t) = Σ Rez{ K/(Ts+1) · (∆u/s) · e^{st} }, with residues at s = 0 and s = −1/T,
y(t) = K(1 − e^{−t/T})∆u.

Figure no. 2.4.1. Step response of a first order system: the initial steady state yst(t0), the new steady state y(∞), the input step ∆u applied at the absolute time t0a (with ya(t) = y(t) + ya(t0) relating relative and absolute variables, and ust(t0) the steady state of the input observed at t0a), and the area A between the new steady state and the output response.

To any point B on the output response of this system (first order system) we can draw the tangent and we obtain the time interval T. We can also compute the area A between the new steady state and the output response,
A = ∫_0^∞ [y(∞) − y(t)]dt = KT∆u ⇒ T = A/(K∆u), K = y(∞)/u(∞).
Another special response is the impulse response. For scalar type systems (p = 1, r = 1: one input, one output),
u(t) = δ(t) ⇒ U(s) = 1 ⇒ Y(s) = H(s) ⇒ y(t) = L^{-1}{H(s)} = ℵ(t).
For scalar systems the weighting matrix is called the weighting function or the impulse function (response).
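The area relation can be sketched numerically: for y(t) = K(1 − e^{−t/T})∆u, the area between the new steady state K∆u and the response is A = KT∆u, so T is recovered as A/(K∆u). K, T and ∆u below are illustrative values of my own.

```python
import math

# First-order step response y(t) = K*(1 - exp(-t/T))*du
# (gain, time constant and step amplitude are my own choices).
K, T, du = 2.0, 0.5, 1.5

def y(t):
    return K * (1.0 - math.exp(-t / T)) * du

# numerical area between y(inf) = K*du and y(t), over 10 time constants
h, n = 1e-4, 50000
A = sum((K * du - y(k * h)) * h for k in range(n))
T_est = A / (K * du)
print(round(T_est, 3))   # close to the true time constant T = 0.5
```

The recovered T_est matches T up to the quadrature error, which is the basis of the graphical identification method described above.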
3. SYSTEM CONNECTIONS.

3.1. Connection Problem Statement.

Let Si, i ∈ I, be a set of systems, where I is a set of indexes, each Si being considered as a subsystem,
Si = Si(Ωi, fi, gi, x^i)   (3.1.1)
where the utilised symbols express:
Ωi - the set of allowed inputs;
fi - the proper state equation;
gi - the output relation;
x^i - the state vector.
Knowing the above elements one can determine the set Γi of the possible outputs. These subsystems can be dynamical or nondynamical (scalar) ones, can be continuous or discrete time ones, can be logical systems, stochastic systems etc., as for example:

3.1.1. Continuous Time Nonlinear System (CNS).
Si: dx^i(t)/dt = fi(x^i(t), u^i(t), t), x^i(t0) = x^i_0, t ≥ t0 ∈ T ⊆ R
    y^i(t) = gi(x^i(t), u^i(t), t)
Ωi = { u^i | u^i: T → Ui, T ⊆ R, u^i admitted }   (3.1.2)

3.1.2. Discrete Time Nonlinear System (DNS).
Si: x^i_{k+1} = fi(x^i_k, u^i_k, k), x^i_{k0} = x^i_0, k ≥ k0 ∈ T ⊆ Z
    y^i_k = gi(x^i_k, u^i_k, k)
Ωi = { {u^i_k} | u^i_k: T → Ui, T ⊆ Z, {u^i_k} admitted }   (3.1.3)

3.1.3. Linear Time Invariant Continuous System (LTIC).
Si: dx^i(t)/dt = Ai x^i(t) + Bi u^i(t), x^i(t0) = x^i_0, t ≥ t0 ∈ T ⊆ R
    y^i(t) = Ci x^i(t) + Di u^i(t)
Ωi = { u^i | u^i: T → Ui, T ⊆ R, u^i admitted }   (3.1.4)
It is a particular case of CNS.

3.1.4. Linear Time Invariant Discrete System (LTID).
Si: x^i_{k+1} = Ai x^i_k + Bi u^i_k, x^i_{k0} = x^i_0, k ≥ k0 ∈ T ⊆ Z
    y^i_k = Ci x^i_k + Di u^i_k
Ωi = { {u^i_k} | u^i_k: T → Ui, T ⊆ Z, {u^i_k} admitted }   (3.1.5)
It is a particular case of DNS.

Let us consider that a set of subsystems as above defined constitutes a family of subsystems, denoted FI,
FI = {Si, i ∈ I}.   (3.1.6)
One can say that a subsystem Si (or generally a system) is defined (or specified, or that it exists) if the elements (Ωi, fi, gi, x^i) are specified. From these elements all the other attributes of each subsystem can be deduced or understood. These other attributes can be: the system type (continuous, discrete, logical), the sets Ti, Ui, Yi, Γi, the entities fi, gi interpreted as functions or sets of functions, each of them defined on its observation domain, as previously discussed, clarifying whether the abstract systems are mathematical models of some physical oriented systems (objects) or they are pure abstract systems conceived (invented) in a theoretical synthesis procedure.

A family of subsystems FI, FI = {Si, i ∈ I}, builds up a connection if, in addition to the elements Si = Si(Ωi, fi, gi, x^i), two other sets Rc and Cc are defined, having the following meaning:
1. Rc = the set of connection relations. Rc represents a set of algebraical relations between the variables u^i, y^i, i ∈ I, and the new variables introduced by these relations. These new variables can be new inputs (causes), denoted by vj, j ∈ J, or new outputs, denoted by wl, l ∈ L, where J, L represent index sets.
2. Cc = the set of connection conditions. Cc represents a set of conditions which must be satisfied by the input and output variables (u^i, y^i), and also by the new variables vj, j ∈ J, wl, l ∈ L, introduced through Rc. These conditions refer to: the physical meaning; the number of input-output components, that is whether the input and output variables are vectors or scalars; the properties of the entities Ωi, Γi, Ui, Yi.

The 3-tuple
S = {FI, Rc, Cc}   (3.1.7)
constitutes a correct connection or a possible connection. The system S is called the equivalent system or the interconnected system of the subsystems family FI, interconnected by the pair {Rc, Cc}, if S has the attributes of a dynamical system to which vj, j ∈ J, are input variables and wl, l ∈ L, are output variables.
Observation 1. Any connection of subsystems is performed only through the input and output variables of each component subsystem, but not through the state variables. The set Rc does not contain relations in which the state vectors x^i, i ∈ I, or the components of the vectors x^i appear. If, in some examples, the interconnection relations Rc contain components of the state vectors, we must understand that they represent components of the output vectors y^i, i ∈ I, for example y^i_k = x^i_k, but for the sake of convenience and writing economy different symbols have not been utilised. This is a very important problem when a subsystem Si (an abstract one) is the mathematical model of a physical oriented system for which some state variables (some components of the state vector) are not accessible for measurements and observations, or even have no physical existence (physical meaning), and so they cannot be undertaken as components of the output vector.

Observation 2. If for the interconnected system S only the behaviour with respect to the input variables (the forced response) is under interest, we have to understand that only the inputoutput behaviour is expected or is enough for that analysis, and then each subsystem Si can be represented by its inputoutput relation. For LTI (LTIC or LTID) systems these inputoutput relations are the transfer matrices (for MIMO) or the transfer functions (for SISO), denoted by Hi(s) for LTIC and by Hi(z) for LTID systems. Reciprocally, if the transfer matrix (function) of the equivalent interconnected system S is given or is determined (calculated), we can deduce different forms of the state equations starting from this transfer matrix (function), which are called different realisations of the transfer matrix (function) by state equations. No one will guarantee, if this transfer matrix (function) is not rational, or is a reducible one (there are common factors between numerator and denominator), that all the state components of these state realisations are both controllable and observable.
If this transfer matrix (function) is a rational irreducible one (there are no common factors between numerator and denominator), then all these state equation realisations certainly express only the completely controllable and completely observable part of the interconnected dynamical system S. Obviously, if in an interconnected structure only transfer matrices (functions) appear, the state equations deduced from them describe only this part of the behaviour.
Otherwise, it is possible that in these state realisations state components of the system S will appear which are only uncontrollable, or only unobservable, or both uncontrollable and unobservable.

Observation 3. If for the interconnected system S only the steady-state behaviour is under interest, then each subsystem Si can be represented by its steady-state mathematical model (static characteristics). The steady-state mathematical model of the interconnected system can be obtained by using graphical methods when some subsystems are described by graphical static characteristics, experimentally determined. Notice that the steady-state equivalent model of the interconnected system S, evaluated based on the steady-state models of the subsystems Si, has a meaning if and only if it is possible to reach a steady state for the interconnected system in the allowed domains of input values. This means the interconnected system S must be asymptotically stable.

Generally, the deduction process of the mathematical model of an interconnected system, whichever type it would be - complete (by state equations), inputoutput (particularly by transfer matrices) or steady-state (by static characteristics) - is called the connection solving process, or the structure reduction process to an equivalent compact form. Essentially, to solve a connection means to eliminate all the intermediate variables introduced by the algebraical relations from Rc or from FI.

There are three fundamental types of connections, through which the majority of practical connections can be expressed:
1. Serial connection (cascade connection);
2. Parallel connection;
3. Feedback connection.
3.2. SERIAL CONNECTION.

3.2.1. Serial Connection of Two Subsystems.

In a serial connection of two subsystems one of them, here S1, has the attribute of "upstream" and the other, S2, of "downstream". The input of the downstream subsystem is identical to the output of the upstream subsystem. For the beginning let us consider only two subsystems S1, S2, represented by a block diagram as in Fig. 3.2.1.

Figure no. 3.2.1. u^1 → S1 → y^1 = u^2 → S2 → y^2, equivalent to u → S → y.

The 3-tuples
FI = {S1, S2}, I = {1, 2}
Rc = {u^2 = y^1, u^1 = u, y = y^2}
Cc = {Γ1 ⊆ Ω2, Y1 ⊆ U2, T1 ≡ T2, Ω = Ω1, Γ = Γ2}   (3.2.1)
build up a serial connection of the two subsystems.

From the connection relation u^2 = y^1 one can understand that the function (u^2: T2 → U2) ∈ Ω2 is identical with any function (y^1: T1 → Y1) ∈ Γ1 which arises at the output of the subsystem S1. From this identity the following must be understood:
- T1 ≡ T2;
- Y1 ⊆ U2: the sets Y1 and U2 have the same dimensions (as Cartesian products), that is u^2 and y^1 are vectors of the same size;
- the variables have the same physical meaning if both S1 and S2 express oriented physical systems.

From the connection condition Γ1 ⊆ Ω2 one understands that the set (class) Ω2 of the functions allowed to represent the input variable u^2 to S2 (Ω2 is a very defining element of S2) must contain all the functions y^1 from the set Γ1 of the possible outputs from S1. Of course Γ1 depends both on the equations (f1, g1) which define S1 and on Ω1 too. For example, if Ω2 represents a set of continuous functions with continuous derivative, but from S1 one can obtain output functions which have discontinuities of the first kind, the connection is not possible even if T2 ≡ T1.
3.2.2. Serial Connection of Two Continuous Time Nonlinear Systems (CNS).

If S1, S2 represent symbols for operators, formally one can write
y = y^2 = S2{u^2} = S2{y^1} = S2{S1{u^1}} = (S2 ∘ S1){u} = S{u}   (3.2.2)
where it has been denoted S = S2 ∘ S1, understanding by "S2 ∘ S1", or just S2S1, the result of the composition of the two operators. Many times, in connection solving problems, only FI and Rc are presented (given), as strictly necessary elements to solve formally that connection. The conditions set Cc is explicitly mentioned only when the connection existence must be proved; otherwise it is tacitly assumed that all these conditions are accomplished. Here, the new letters u, y, Ω, Γ can be interpreted just as simple notations, for a more clear understanding that it is about the input and the output of the new equivalent interconnected system which will result after connection.

Let us consider two differential systems of the form (3.2.3), serially interconnected, described by state equations:
FI: S1: dx^1/dt = f1(x^1, u^1, t), y^1 = g1(x^1, u^1, t); x^1 (n1 x 1), u^1 (p1 x 1), y^1 (r1 x 1)
    S2: dx^2/dt = f2(x^2, u^2, t), y^2 = g2(x^2, u^2, t); x^2 (n2 x 1), u^2 (p2 x 1), y^2 (r2 x 1)   (3.2.3)
Rc: u^2 = y^1 ⇒ (p2 = r1), u = u^1 ⇒ (p = p1), y = y^2 ⇒ (r = r2).
We determine the complete system S. By eliminating the intermediate variables u^2, y^1, and using the notations u, y for u^1, y^2 respectively, the equivalent interconnected system is expressed by the equations
dx^1/dt = f1(x^1, u, t) = Ψ1(x^1, u, t)
dx^2/dt = f2(x^2, g1(x^1, u, t), t) = Ψ2(x^1, x^2, u, t)
y = g2(x^2, g1(x^1, u, t), t) = g~(x^1, x^2, u, t).   (3.2.4)
Obviously, concatenating the two state vectors into x = [x^1; x^2], of dimension (n1 + n2) x 1, one obtains the compact form of the interconnected system.
3.2.3. Serial Connection of Two LTIC.

Complete Representation. One can determine the interconnected system completely represented by state equations,
dx/dt = f(x, u, t) = [Ψ1(x, u, t); Ψ2(x, u, t)], y = g(x, u, t), S = S2 ∘ S1.   (3.2.5)
One can observe that the system order, determined for differential systems by the state vector size, is n = n1 + n2.

Let us consider two LTIC, on short LTI, serially interconnected:
S1: dx^1/dt = A1 x^1 + B1 u^1, y^1 = C1 x^1 + D1 u^1
S2: dx^2/dt = A2 x^2 + B2 u^2, y^2 = C2 x^2 + D2 u^2   (3.2.6)
with x^1 (n1 x 1), u^1 (p1 x 1), y^1 (r1 x 1); x^2 (n2 x 1), u^2 (p2 x 1), y^2 (r2 x 1), and
Rc: u^2 = y^1 ⇒ (p2 = r1), u = u^1, y = y^2, p = p1, r = r2.   (3.2.7)
Eliminating the intermediate variables u^2, y^1 we have
dx^1/dt = A1 x^1 + B1 u
dx^2/dt = A2 x^2 + B2[C1 x^1 + D1 u] = B2 C1 x^1 + A2 x^2 + B2 D1 u
y = C2 x^2 + D2[C1 x^1 + D1 u] = D2 C1 x^1 + C2 x^2 + D2 D1 u.
Concatenating the two state vectors x^1, x^2 to a single vector x = [x^1; x^2], one obtains the compact form of the interconnected system,
dx/dt = Ax + Bu, y = Cx + Du   (3.2.8)
A = [ A1, 0 ; B2 C1, A2 ], B = [ B1 ; B2 D1 ], C = [ D2 C1, C2 ], D = D2 D1.

Input-Output Representation. Let us assume we are interested only in the inputoutput behaviour of the serially interconnected system, represented as in Fig. 3.2.2. For nonlinear systems, as previously presented, the explicit expression of the forced response is practically impossible, because the decomposition property is not available for any nonlinear system. For linear systems the transfer matrices can easily be determined, in a general manner, as
H1(s) = C1(sI − A1)^{-1}B1 + D1
H2(s) = C2(sI − A2)^{-1}B2 + D2.   (3.2.9)
The forced responses of the two systems are expressed by
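A minimal sketch, in my own code, of building the equivalent matrices of eq. (3.2.8) from two state-space subsystems given as nested lists; the numeric subsystems are illustrative 1x1 choices of my own.

```python
# Eq. (3.2.8): equivalent (A, B, C, D) of the serial connection S = S2 o S1.

def series_connect(A1, B1, C1, D1, A2, B2, C2, D2):
    n1, n2 = len(A1), len(A2)
    def mm(X, Y):   # plain matrix product on nested lists
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]
    B2C1, B2D1 = mm(B2, C1), mm(B2, D1)
    D2C1, D2D1 = mm(D2, C1), mm(D2, D1)
    A = [A1[i] + [0.0] * n2 for i in range(n1)] + \
        [B2C1[i] + A2[i] for i in range(n2)]          # block lower triangular
    B = [B1[i] for i in range(n1)] + [B2D1[i] for i in range(n2)]
    C = [D2C1[i] + C2[i] for i in range(len(C2))]
    D = D2D1
    return A, B, C, D

# First-order SISO subsystems (scalars wrapped as 1x1 blocks, my own values):
A1, B1, C1, D1 = [[-1.0]], [[2.0]], [[1.0]], [[0.0]]
A2, B2, C2, D2 = [[-3.0]], [[1.0]], [[4.0]], [[0.5]]
A, B, C, D = series_connect(A1, B1, C1, D1, A2, B2, C2, D2)
print(A)   # [[-1.0, 0.0], [1.0, -3.0]]
```

The resulting A is block lower triangular, exactly the structure displayed in (3.2.8), with the coupling term B2·C1 in the lower-left block.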
S1: Y^1(s) = H1(s)U^1(s), S2: Y^2(s) = H2(s)U^2(s)
and the connection relations are
Rc: U^2(s) ≡ Y^1(s), U(s) ≡ U^1(s), Y(s) ≡ Y^2(s), p2 = r1, p1 = p, r2 = r.
From these we obtain
Y(s) = Y^2(s) = H2(s)U^2(s) = H2(s)Y^1(s) = H2(s)H1(s)U^1(s) = H(s)U(s)
H(s) = H2(s)H1(s).   (3.2.10)

Figure no. 3.2.2. U(s) → H1(s) (r1 x p1) → Y^1(s) = U^2(s) → H2(s) (r2 x p2) → Y(s), equivalent to U(s) → H2(s)H1(s) (r x p) → Y(s).

It can easily be verified that the transfer matrix of the complete system (3.2.8) is the product of the transfer matrices from (3.2.9). Indeed, from (3.2.8) we obtain
H(s) = C[sI − A]^{-1}B + D = [D2C1, C2] · [ sI − A1, 0 ; −B2C1, sI − A2 ]^{-1} · [ B1 ; B2D1 ] + D2D1
= [D2C1, C2] · [ Φ1(s), 0 ; Φ2(s)B2C1Φ1(s), Φ2(s) ] · [ B1 ; B2D1 ] + D2D1
= D2C1Φ1(s)B1 + C2Φ2(s)B2C1Φ1(s)B1 + C2Φ2(s)B2D1 + D2D1
= [C2Φ2(s)B2 + D2][C1Φ1(s)B1 + D1] = H2(s)H1(s),
so the transfer matrix of the serial connection is the product of the components' transfer matrices.

The serial connection of a systems family FI = {Si, i ∈ I} preserves some common properties of the component systems, as for example the linearity and the stability. Other properties, as the controllability and the observability, may disappear for the serially interconnected system even if each component system satisfies these properties. In the sequel we shall analyse such aspects based on a concrete example referring to the serial connection of two LTI, each of them of the first order.
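The identity H(s) = H2(s)H1(s) can be spot-checked numerically. Below, two first-order SISO subsystems of the same shape as the example that follows (the coefficient values are my own); the serial state equations are evaluated directly as Y(s) = c^T X(s) for U(s) = 1, x(0) = 0:

```python
# Numerical spot-check of eq. (3.2.10) for scalar subsystems (my own values).

def H1(s):
    return 2.0 / (s + 1.0)               # K1/(s + p1)

def H2(s):
    return 3.0 * (s + 0.5) / (s + 4.0)   # K2*(s + z2)/(s + p2)

def H_serial(s):
    # equivalent system evaluated as Y(s) = c^T X(s) for U(s)=1, x(0)=0,
    # with A = [[-p1, 0], [K2*(z2 - p2), -p2]], b = [K1, 0], c = [K2, 1]
    K1, p1, K2, z2, p2 = 2.0, 1.0, 3.0, 0.5, 4.0
    x1 = K1 / (s + p1)                    # X1(s)
    x2 = K2 * (z2 - p2) * x1 / (s + p2)   # X2(s)
    return K2 * x1 + x2                   # Y(s)

s = 1.7 + 0.9j
print(abs(H_serial(s) - H2(s) * H1(s)))   # ~0 at any test point
```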
In this scalar case, the equivalent transfer function can be presented (because of the commutability property) also under the form
H(s) = H2(s)H1(s) = H1(s)H2(s) = K1 K2 (s + z2) / [(s + p1)(s + p2)].   (3.2.14)
One can observe that H(s) is of the order n1 + n2 = 1 + 1 = 2.

We consider a serial connection of two SISO systems of the first order:
S1: H1(s) = Y^1(s)/U^1(s) = K1/(s + p1) ⇒ dx1/dt = −p1 x1(t) + K1 u^1(t), y^1(t) = x1(t)   (3.2.11)
S2: H2(s) = Y^2(s)/U^2(s) = K2(s + z2)/(s + p2) ⇒ dx2/dt = −p2 x2(t) + K2(z2 − p2)u^2(t), y^2(t) = x2(t) + K2 u^2(t)   (3.2.12)
The connection is
Rc: u^2(t) ≡ y^1(t), u(t) ≡ u^1(t), y(t) ≡ y^2(t)
Cc: Γ1 ⊆ Ω2, Ω = Ω1, Γ = Γ2.   (3.2.13)
The same interconnected system, expressed by inputoutput relations (transfer functions), is represented in the block diagram from Fig. 3.2.3.

Figure no. 3.2.3. U(s) → H1(s) → Y^1(s) = U^2(s) → H2(s) → Y(s), equivalent to U(s) → H(s) → Y(s), H(s) = H2(s)H1(s).

State Diagrams Representation. The same interconnected system, represented by state equations, can be illustrated using so-called "state diagrams". The "State Diagram" (SD) is a form of graphical representation of the state equations, using block diagrams (BD) or signal flow graphs (SFG). The state diagram of a LTI contains only three types of graphical symbols: integrating elements (called, on short, integrators), summing elements (summators) and proportional elements (scalors). The summing and proportional elements have the same graphical representation both in time and complex domains. For the integrating elements, frequently the complex domain graphical form is utilised and, sometimes, the corresponding variables are denoted in time domain and complex domain together.
In our case the state diagram (SD) means the graphical representation in complex domain of the equations (3.2.11), which give the SD for S1, (3.2.12), which give the SD for S2, and (3.2.13), which contains the connection relations.

Figure no. 3.2.4. State diagram of the serial connection: the SD of S1 (integrator with state x1, initial condition x1(0), feedback gain p1 and input gain K1), the connection relation u^2 = y^1, and the SD of S2 (integrator with state x2, initial condition x2(0), feedback gain p2, input gain K2(z2 − p2) and the direct path of gain K2 to the output y = y^2).

Based on this SD, the state equations of the serially interconnected system are deduced:
dx/dt = Ax + bu, y = c^T x + du, x = [x1; x2]   (3.2.15)
A = [ −p1, 0 ; K2(z2 − p2), −p2 ], b = [ K1 ; 0 ], c = [ K2 ; 1 ], d = 0,
with X(s) = Φ(s)x(0) + Φ(s)bU(s), Φ(s) = [sI − A]^{-1}.
Given the state diagrams of the systems Si of a family FI, the state diagram of the interconnected system can very easily be drawn by using the connection relations, and from here the state equations of the interconnected system. If, for a system given by a transfer matrix (function), we succeed to represent it by a block diagram (BD) or a signal flow graph (SFG) which contains only the three types of symbols - integrators, summators and scalors - then we can interpret that BD or SFG as being a state diagram (SD). Based on it we can write (deduce), on the spot, a realisation by state equations of that transfer matrix (function). For a better commodity, the SD from Fig. 3.2.4 is equivalently transformed into the SD from Fig. 3.2.5. Manipulating the SD, the (i,j) components Φij(s) of the transition matrix can easily be determined based on relation (3.2.16); in this way the matrix inverse operation is avoided.
Because
Φij(s) = Xi(s)/xj(0) | xk(0)=0, k≠j; U(s)≡0   (3.2.16)
we obtain, for our example,
Φ11(s) = 1/(s + p1), Φ12(s) = 0, Φ21(s) = K2(z2 − p2)/[(s + p1)(s + p2)], Φ22(s) = 1/(s + p2),
Φ(s) = [ 1/(s+p1), 0 ; K2(z2−p2)/[(s+p1)(s+p2)], 1/(s+p2) ].   (3.2.17)

Figure no. 3.2.5. The state diagram from Fig. 3.2.4 drawn in complex form: the integrators are replaced by the first order blocks 1/(s+p1) and 1/(s+p2), with the initial conditions x1(0), x2(0) applied as additional inputs.

The state response is X(s) = Φ(s)x(0) + Φ(s)bU(s), that is
X1(s) = X1l(s) + X1f(s) = x1(0)/(s + p1) + K1 U(s)/(s + p1)   (3.2.18)
X2(s) = X2l(s) + X2f(s) = K2(z2 − p2)x1(0)/[(s + p1)(s + p2)] + x2(0)/(s + p2) + K1K2(z2 − p2)U(s)/[(s + p1)(s + p2)].
The expression of the output variable in complex domain is
Y(s) = c^TΦ(s)x(0) + (c^TΦ(s)b + d)U(s) = Yl(s) + Yf(s)
Yl(s) = [ K2/(s + p1) + K2(z2 − p2)/[(s + p1)(s + p2)] ] x1(0) + x2(0)/(s + p2)   (3.2.19)
Yf(s) = K1K2(s + z2)U(s)/[(s + p1)(s + p2)].   (3.2.20)
It can be verified that the transfer function of the serial connection is
H(s) = c^TΦ(s)b + d = K1K2(s + z2)/[(s + p1)(s + p2)] = H1(s)H2(s).
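The closed forms in (3.2.17) can be checked against a direct 2x2 inversion of sI − A at a sample complex point; the parameter values below are my own.

```python
# Check of eq. (3.2.17): Phi(s) = (sI - A)^{-1} for the example matrix
# A = [[-p1, 0], [K2*(z2 - p2), -p2]] (numeric values are my own).
K2, z2, p1, p2 = 3.0, 0.5, 1.0, 4.0
A = [[-p1, 0.0], [K2 * (z2 - p2), -p2]]

def phi_of_s(s):
    # invert the 2x2 matrix sI - A directly
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

s = 0.6 + 1.2j
P = phi_of_s(s)
print(abs(P[0][0] - 1 / (s + p1)))                             # Phi11, ~0
print(abs(P[1][0] - K2 * (z2 - p2) / ((s + p1) * (s + p2))))   # Phi21, ~0
print(abs(P[1][1] - 1 / (s + p2)))                             # Phi22, ~0
```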
Controllability and Observability of the Serial Connection. For the system (3.2.15), both the controllability matrix P and the observability matrix Q can be calculated:
P = [b, Ab] = [ K1, −K1 p1 ; 0, K1K2(z2 − p2) ], det P = K1^2 K2(z2 − p2)
Q = [ c^T ; c^T A ] = [ K2, 1 ; −K2 p1 + K2(z2 − p2), −p2 ], det Q = K2(p1 − z2).   (3.2.22)

Case 1: z2 ≠ p1 and z2 ≠ p2. In this case det P ≠ 0, that means the interconnected system is completely controllable, and det Q ≠ 0, that means the interconnected system is completely observable. If the transfer function (3.2.14), obtained further to the serial connection,
H(s) = K1K2(s + z2)/[(s + p1)(s + p2)]   (3.2.14)
is an irreducible one, that is z2 ≠ p1 and z2 ≠ p2, then the complete system S, obtained by serial connection of the subsystems S1 and S2, is completely controllable and completely observable. These properties are preserved for any realisation by state equations of the irreducible transfer function.

Case 2: z2 ≠ p2. If z2 ≠ p2, det P ≠ 0 and the system is completely controllable, for any relation between z2 and p1. Qualitatively, this property can be put into evidence also by the state diagram from Fig. 3.2.4, or more clearly from Fig. 3.2.5, where it can be observed that the input u can modify the component x1 and, through this, also the component x2 of the state vector.

If p1 ≠ p2, applying the inverse Laplace transform to (3.2.19) one obtains the free response
yl(t) = [K2(p1 − z2)/(p1 − p2)] x1(0) e^{−p1 t} + [K2(z2 − p2)/(p1 − p2) x1(0) + x2(0)] e^{−p2 t}.   (3.2.21)
When p1 = p2, in the same way the inverse Laplace transform is applied and one obtains
yl(t) = K2 e^{−p1 t} x1(0) + K2(z2 − p1) t e^{−p1 t} x1(0) + e^{−p1 t} x2(0).
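The determinant formulas in (3.2.22) can be verified numerically; the parameter values below are my own, chosen with z2 ≠ p1 and z2 ≠ p2.

```python
# Controllability matrix P = [b, Ab] and observability matrix Q = [c^T; c^T A]
# for the example (3.2.15); numeric parameter values are my own.
K1, K2, z2, p1, p2 = 2.0, 3.0, 0.5, 1.0, 4.0
A = [[-p1, 0.0], [K2 * (z2 - p2), -p2]]
b = [K1, 0.0]
c = [K2, 1.0]

Ab = [A[0][0] * b[0] + A[0][1] * b[1], A[1][0] * b[0] + A[1][1] * b[1]]
cA = [c[0] * A[0][0] + c[1] * A[1][0], c[0] * A[0][1] + c[1] * A[1][1]]
detP = b[0] * Ab[1] - b[1] * Ab[0]   # det of [b, Ab] (columns)
detQ = c[0] * cA[1] - c[1] * cA[0]   # det of [c^T; c^T A] (rows)
print(abs(detP - K1**2 * K2 * (z2 - p2)))   # ~0
print(abs(detQ - K2 * (p1 - z2)))           # ~0
```

Setting z2 = p2 makes detP vanish (Case 3 below), and z2 = p1 makes detQ vanish (the unobservable case), matching the discussion in the text.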
Case 3: z2 = p2. If z2 = p2, certainly the component x2 cannot be modified by the input, so categorically this state component is uncontrollable, but the component x1 can be controlled by the input. In this case det P = 0 and the system is uncontrollable. In this example the dependence between x1 and u is given by
X1(s) = K1 U(s)/(s + p1),
which shows that x1 is modified by u. When z2 = p2 the transfer function of the serial connection is of the first order,
H(s) = K1K2(s + z2)/[(s + p1)(s + p2)] = K1K2/(s + p1),   (3.2.24)
even if the system still remains of the second order, this second order being put into evidence by the fact that its free response keeps depending on the two poles, that means on the mode e^{−p1 t} and on the mode e^{−p2 t}, which are different.

The assessment of the full rank of the matrix P through its determinant value will not indicate which state vector components are controllable and which of them are not; more precisely, which of the evolutions (modes) generated by the poles (−p1) or (−p2) can be modified through the input. One says that a system is completely controllable if and only if all the state vector components are controllable, for any values of these components in the state space. The visual examination (inspection) of the state diagram will not guarantee the property of complete controllability, but it will categorically reveal the situation when the system does not have the controllability property.

Case 4: z2 ≠ p1. If z2 ≠ p1, det Q ≠ 0 and the system is observable for any relation between z2 and p2. From Fig. 3.2.5 it can be observed that when z2 = p1, det Q = 0, that means the system is not observable. We must not misunderstand that if each state vector component affects (modifies) the output variable then certainly the system is observable, because this is not true: the visual inspection of the state diagram, in which both x1 and x2 still modify the output variable, will not reveal directly the observability property. It must be remembered that the observability property expresses the possibility to determine the initial state which has existed at a time moment t0, based on the knowledge of the output and of the input from that t0 on. For LTI systems it is enough to know the output only, irrespective of what the value of t0 is. Like controllability, observability is a system structure property; because of that, for LTI systems one says: the pair (A, C) is observable.
Observability Property Underlined as the Possibility to Determine the Initial State if the Output and the Input are Known.

We shall illustrate, through this example, that the observability property means the possibility to determine the initial state, based on the applied input and on the observed output, starting from that initial time moment. In the above analysis we saw that when z2 = p1 the interconnected system has det Q = 0, that means it is not observable, even if the output
y(t) = c^T x(t) + du(t) = K2·x1(t) + 1·x2(t) + 0·u(t)
depends on both components of the state vector. Because the goal is the determination of the state vector x(0), through its two components x1(0) and x2(0), we shall consider u(t) ≡ 0 (it is not necessary to apply another input, or it is indifferent what input is applied), so the output is the free response, y(t) = yl(t). For LTI systems this property does not depend on the input, confirming that it is a system property. The time expression of the free response is obtained by applying the inverse Laplace transform to the relation (3.2.19), evaluated in two distinct cases: Case a: p1 ≠ p2 and Case b: p1 = p2.

Case a: p1 ≠ p2. One obtains the expression (3.2.21), or equivalently
yl(t) = [K2/(p1 − p2)][(p1 − z2)e^{−p1 t} + (z2 − p2)e^{−p2 t}]·x1(0) + e^{−p2 t}·x2(0).   (3.2.25)
Writing (3.2.25) for two different time moments t1 ≠ t2, a system of two equations is built up,
[ yl(t1) ; yl(t2) ] = G·x(0),
G = [ [K2/(p1−p2)][(p1−z2)e^{−p1 t1} + (z2−p2)e^{−p2 t1}], e^{−p2 t1} ;
      [K2/(p1−p2)][(p1−z2)e^{−p1 t2} + (z2−p2)e^{−p2 t2}], e^{−p2 t2} ].
The possibility to determine univocally the initial state x(0) is assured by the determinant of the matrix G,
det G = [K2/(p1 − p2)](p1 − z2)·e^{−p1 t1}·e^{−p2 t2}·[1 − e^{−(p1 − p2)(t2 − t1)}].
Because p1 ≠ p2 and t1 ≠ t2,
det G ≠ 0 ⇔ p1 ≠ z2 ⇔ det Q = K2(p1 − z2) ≠ 0 ⇒ x(0) = G^{-1}·[ yl(t1) ; yl(t2) ].
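The recovery of x(0) from two output samples can be sketched numerically for the observable case z2 ≠ p1: sample yl(t) = c^T e^{At} x(0) at t1 ≠ t2, build G and solve. The matrix exponential is computed by a truncated series; all parameter values are my own.

```python
# Initial-state recovery from two free-response samples (my own values,
# chosen with z2 != p1 so that det Q != 0 and G is invertible).
K2, z2, p1, p2 = 3.0, 0.5, 1.0, 4.0
A = [[-p1, 0.0], [K2 * (z2 - p2), -p2]]
c = [K2, 1.0]
x0 = [0.7, -0.2]                     # "unknown" initial state to recover

def expA(t, terms=60):
    # truncated power series for e^{At}
    res = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(2))
                 for j in range(2)] for i in range(2)]
        res = [[res[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return res

def row(t):                          # one row of G: c^T * e^{At}
    E = expA(t)
    return [c[0] * E[0][0] + c[1] * E[1][0], c[0] * E[0][1] + c[1] * E[1][1]]

t1, t2 = 0.3, 0.9
g1, g2 = row(t1), row(t2)
y1 = g1[0] * x0[0] + g1[1] * x0[1]   # "measured" free-response samples
y2 = g2[0] * x0[0] + g2[1] * x0[1]
det = g1[0] * g2[1] - g1[1] * g2[0]  # det G != 0 because z2 != p1
x0_rec = [(y1 * g2[1] - y2 * g1[1]) / det,
          (g1[0] * y2 - g2[0] * y1) / det]
print(max(abs(x0_rec[i] - x0[i]) for i in range(2)))  # ~0
```

Repeating the experiment with z2 = p1 makes det G collapse to zero, so the same two samples no longer determine x(0), which is the unobservable situation analysed next.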
Let us consider the situation when z2 = p1, p1 ≠ p2. The free response (3.2.21) is of the form
yl(t) = [K2(p1 − z2)/(p1 − p2)·x1(0)]·e^{−p1 t} + [K2(z2 − p2)/(p1 − p2)·x1(0) + x2(0)]·e^{−p2 t}   (3.2.27)
and, in this case of z2 = p1, the first term vanishes and K2(z2 − p2)/(p1 − p2) = K2, so
yl(t) = [K2·x1(0) + x2(0)]·e^{−p2 t}, (z2 = p1, p1 ≠ p2).   (3.2.26)
In this case one cannot determine, based on the response, separately both the components x1(0) and x2(0), but only a linear combination of them, K2·x1(0) + x2(0), so the observability condition is not accomplished. The free response of this unobservable system depends on both components of the state vector, x1(0) and x2(0), but the output expresses the effect of the pole −p2 only, through the mode e^{−p2 t}. Thus z2 = p1 ⇔ det G = 0 ⇔ det Q = 0, that means the system is not observable while z2 = p1.

Case b: p1 = p2. The analysis is performed in an identical manner, but the time domain free response expression deduced from (3.2.19) is
yl(t) = K2 e^{−p1 t}·x1(0) + K2(z2 − p1)t e^{−p1 t}·x1(0) + e^{−p1 t}·x2(0) = K2[1 + (z2 − p1)t]e^{−p1 t}·x1(0) + e^{−p1 t}·x2(0).
The matrix G has the form
G = [ K2[1 + (z2 − p1)t1]e^{−p1 t1}, e^{−p1 t1} ; K2[1 + (z2 − p1)t2]e^{−p1 t2}, e^{−p1 t2} ]
det G = K2(p1 − z2)(t2 − t1)e^{−p1(t2 + t1)}.
Also in this case, z2 = p1 = p2 ⇔ det G = 0 ⇔ det Q = 0, so the observability condition is not accomplished.

Time Domain Free Response Interpretation for an Unobservable System. When z2 = p1 = p2 the free response becomes
yl(t) = [K2·x1(0) + x2(0)]·e^{−p1 t}, (z2 = p1 = p2)   (3.2.28)
and from (3.2.28) we see that the structure (3.2.26) - a single mode carrying only a linear combination of the initial state components - is maintained also when p1 = p2.
We saw, through the above example, that by serial connection it is possible for a pole of one subsystem's transfer function to be simplified (cancelled) by a zero of the transfer function of another subsystem, in such a way that the cancelled pole no longer appears in the transfer function of the interconnected system. Particularly, if in the serial connection (3.2.13) the subsystem S2 (3.2.12) has a zero s = −z2 equal to the pole s = −p1 of the subsystem S1 (3.2.11), then the interconnected system, having the transfer function

H(s) = [K1/(s + p1)]·[K2·(s + z2)/(s + p2)] = K1·K2/(s + p2)   (3.2.29)

keeps being of second order, but it no longer has the observability property. Its forced response is of first order, depending on the pole s = −p2 only. In the transfer function of this serial connection two common factors appeared which, for the state equations realisation (3.2.11)–(3.2.12), determined the lack of the observability property of that realisation. If the same transfer function is realised by other state equations, the same common factors would appear, but that realisation may instead lose the controllability property, or the observability property, or both. Generally, there exist particular realisations by state equations which explicitly preserve the controllability property, called controllable realisations (but which do not guarantee the observability property), as well as particular realisations which explicitly preserve the observability property, called observable realisations (but which do not guarantee the controllability property).

3.2.6. System Stabilisation by Serial Connection.

This process of eliminating poles of some subsystems by cancellation with zeros of other subsystems is called serial compensation or cascade compensation. In such a way we are able, at least from a theoretical point of view, to eliminate all the poles of the system from the right half complex plane; that means a system which has been unstable (because of poles located in the right half complex plane) becomes a stable one. Of course, because only notions such as poles, zeros and transfer functions are involved, it is expected that their effects appear in the forced response only.
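The cancellation in (3.2.29) can be seen numerically by comparing the roots of the numerator and denominator of the serial transfer function; a minimal sketch with illustrative parameter values:

```python
import numpy as np

K1, K2, p1, p2 = 1.0, 2.0, 3.0, 5.0
z2 = p1                                   # zero of S2 placed exactly on the pole -p1 of S1

num = K1 * K2 * np.array([1.0, z2])       # K1*K2*(s + z2)
den = np.polymul([1.0, p1], [1.0, p2])    # (s + p1)*(s + p2)

print("zeros:", np.roots(num))            # the single zero -3
print("poles:", np.roots(den))            # the poles -5 and -3
# The factor (s + p1) is common to numerator and denominator, so the
# input-output transfer function reduces to first order: K1*K2/(s + p2).
```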
Let us now consider the case where −p1 > 0 and −p2 < 0, namely the subsystem S1 is unstable but S2 is stable. Considering in addition that −z2 = −p1 > 0, S2 is a stable non-minimum phase system. In such conditions the input-output behaviour of the serially interconnected system, described by relations (3.2.11)–(3.2.15), is stable (external stability), as results also from the transfer function (3.2.29),

H(s) = Y(s)/U(s) = [K1/(s + p1)]·[K2·(s + z2)/(s + p2)] = K1·K2/(s + p2),

for which the forced response is determined by a first order transfer function containing only the stable pole s = −p2. Indeed, the forced component y_f(t) of the output, as a response to a bounded input u(t) which admits a steady state value u(∞), where u(∞) = lim_{s→0} s·U(s), evolves to a steady state bounded value y_f(∞) given by

lim_{t→∞} y_f(t) = lim_{s→0} s·Y(s) = lim_{s→0} s·[K1·K2/(s + p2)]·U(s) = (K1·K2/p2)·u(∞) = y_f(∞),

because the complex variable function s·H(s) = s·K1·K2/(s + p2) is analytic both on the imaginary axis and in the right half plane. Also the free response is bounded and asymptotically goes to zero for any bounded initial state x1(0), x2(0). Indeed, from (3.2.26),

lim_{t→∞} y_l(t) = lim_{t→∞} [K2·(z2 − p2)/(p1 − p2)·x1(0) + x2(0)]·e^{−p2·t} = [K2·(z2 − p2)/(p1 − p2)·x1(0) + x2(0)]·[lim_{t→∞} e^{−p2·t}] = [...]·0 = 0 for p1 ≠ p2,

and from (3.2.28),

lim_{t→∞} y_l(t) = lim_{t→∞} [K2·x1(0) + x2(0)]·e^{−p2·t} = [K2·x1(0) + x2(0)]·[lim_{t→∞} e^{−p2·t}] = [...]·0 = 0 for p1 = p2.

This expresses the external stability in the sense of bounded input–bounded output (BIBO) stability. We recall that there exist several concepts of stability, one of them being external stability, known also as input-output stability; stabilisation by serial compensation assures only this external stability. The process is called stabilisation by serial connection or stabilisation by serial compensation. Such a procedure is strongly not recommended in practice; we sustain this recommendation by a concrete analysis performed on the above discussed example.
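The steady state value y_f(∞) = (K1·K2/p2)·u(∞) predicted by the final value theorem can be cross-checked by simulating the compensated first order system K1·K2/(s + p2); a minimal Euler-integration sketch with illustrative values:

```python
K1, K2, p2 = 1.0, 2.0, 5.0     # compensated transfer function K1*K2/(s + p2)
u_inf = 1.0                    # unit step input, so u(inf) = 1

# Euler integration of y' = -p2*y + K1*K2*u
dt, T = 1e-4, 5.0
y = 0.0
for _ in range(int(T / dt)):
    y += dt * (-p2 * y + K1 * K2 * u_inf)

print(round(y, 4))             # 0.4, equal to (K1*K2/p2)*u(inf)
```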
Formally we can say that an unstable system S1, described by relations (3.2.11), can be made externally stable (bounded input–bounded output, BIBO) by serial compensation, performing this by a serial connection of it with another system S2 which must have a zero equal to the undesired pole (here the unstable pole) of the system S1. However, we must mention that this external stability appears only because the serially interconnected system is unobservable, the unstable mode e^{−p1·t} being disconnected from the output. This kind of stabilisation by serial connection is interesting "on paper" only, because:

1. It is very sensitive. If the relation z2 = p1 is not exactly realised, but instead z2 = p1 + ε with ε as small as possible, then from (3.2.25) it results

y_l(t) = [K2·ε/(p2 − p1)]·e^{−p1·t}·x1(0) + [K2·(z2 − p2)/(p1 − p2)·x1(0) + x2(0)]·e^{−p2·t}   (3.2.30)

lim_{t→∞} y_l(t) → ±∞ if −p1 > 0.

Similarly, the forced response is

y_f(t) = L^{−1}{ K1·K2·(s + z2)/[(s + p1)(s + p2)]·U(s) } = Σ_i r_i(t)·e^{λi·t} + [K1·K2·U(−p1)/(p2 − p1)]·ε·e^{−p1·t}, −p1 > 0   (3.2.31)

where by r_i(t) we have denoted the residues of the function Y_f(s) in the pole −p2 and in the poles λi of the Laplace transform U(s) of the input. The poles of U(s) belong to the left half plane because u(t) is a bounded function. So lim_{t→∞} y_f(t) = 0 ± ∞ = ±∞: the response y(t) goes to infinity.

2. The system keeps being internally unstable. Each component of the state vector becomes unbounded for non-zero causes (the initial state and the input). From (3.2.18), applying the inverse Laplace transform, one obtains

x1(t) = e^{−p1·t}·x1(0) + K1·U(−p1)·e^{−p1·t} + Σ_i r_i(t)·e^{λi·t}   (3.2.32)

where now by r_i(t) we have denoted the residues of the function K1·U(s)/(s + p1) in the poles λi of the Laplace transform U(s), with Re(λi) < 0. We can express x1(t) as the sum of an unstable component x1^I(t), generated by the unstable pole −p1 > 0, and a stable component x1^S(t), generated by the poles λi:

x1(t) = x1^I(t) + x1^S(t)   (3.2.33)
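The sensitivity of this "paper" stabilisation can be illustrated by simulating the free response with an exact and a slightly mismatched compensation zero. The sketch below uses the state realisation of the serial connection assumed earlier (ẋ1 = −p1·x1, ẋ2 = −p2·x2 + K2·(z2 − p2)·x1, y = K2·x1 + x2) and illustrative values with the unstable pole −p1 = 1 > 0:

```python
p1, p2, K2 = -1.0, 1.0, 1.0    # -p1 = +1 is the unstable pole of S1

def y_at(T, z2, dt=1e-3):
    """Free response y(T) of the serial connection, Euler-integrated."""
    x1, x2 = 1.0, 0.0          # initial state
    for _ in range(int(T / dt)):
        x1, x2 = (x1 + dt * (-p1 * x1),
                  x2 + dt * (-p2 * x2 + K2 * (z2 - p2) * x1))
    return K2 * x1 + x2

print(y_at(10.0, z2=p1))         # exact cancellation: tiny (the stable mode e^{-t})
print(y_at(10.0, z2=p1 + 1e-3))  # mismatch eps = 1e-3: large, growing like (eps/2)*e^t
```

With the exact zero the two huge internal components cancel in the output, while a mismatch of only 10^-3 makes the output diverge: the state x1 grows like e^t in both cases.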
x1^I(t) = [x1(0) + K1·U(−p1)]·e^{−p1·t}   (3.2.34)

x1^S(t) = Σ_i r_i(t)·e^{λi·t}   (3.2.35)

lim_{t→∞} x1^I(t) = ±∞,  lim_{t→∞} x1^S(t) = 0  ⇒  lim_{t→∞} x1(t) = ±∞   (3.2.36)
The second relation from (3.2.18) leads to

X2(s) = [K2·(z2 − p2)/(p2 − p1)]·[x1(0) + K1·U(s)]·1/(s + p1) + [K2·(z2 − p2)/(p1 − p2)]·[x1(0) + K1·U(s)]·1/(s + p2) + x2(0)/(s + p2)   (3.2.37)

Similarly, we can express x2(t) as the sum of an unstable component x2^I(t), generated by the pole (−p1 > 0), and a stable component x2^S(t), generated by the pole −p2 < 0 and by the poles λi of the function U(s):

x2(t) = x2^I(t) + x2^S(t)   (3.2.38)

x2^I(t) = [K2·(z2 − p2)/(p2 − p1)]·[x1(0) + K1·U(−p1)]·e^{−p1·t} = [K2·(z2 − p2)/(p2 − p1)]·x1^I(t)   (3.2.39)

If −p1 > 0 and z2 = p1, then K2·(z2 − p2)/(p2 − p1) = −K2, so for K2 > 0, x1^I(t) → ±∞ ⇒ x2^I(t) = −K2·x1^I(t) → ∓∞. Because

x2^S(t) = L^{−1}{ [K2·(z2 − p2)/(p1 − p2)]·[x1(0) + K1·U(s)]·1/(s + p2) + x2(0)/(s + p2) },  lim_{t→∞} x2^S(t) = 0,

it results that

lim_{t→∞} x2(t) = −K2·lim_{t→∞} x1^I(t) + 0 = −K2·(±∞)   (3.2.40)

The component x2(t) is unbounded because the input u2 = y1 = x1 of the system S2 is unbounded too. Evidently, such a situation cannot appear in physical systems: in a physical system an unstable linear mathematical model is possible only in a bounded domain of the input and output values, and at the borders of these domains a saturation phenomenon is encountered, so the linear model is no longer valid.

One physical explanation of the stabilisation performed in the system S2 is the following: the unstable component of the first subsystem, x1^I(t) = y1^I(t), is transmitted to the output of S2 through two paths, as we can also see in Fig. 3.2.5:
1. through the direct connection path, by the factor K2;
2. through the dynamic connection path, with the component x2.
The system S2 stabilises the system S1 if S2 has such parameters that the component x1^I transmitted through the two paths is finally cancelled. Indeed,

y2^I(t) = K2·x1^I(t) + x2^I(t) = K2·x1^I(t) − K2·(z2 − p2)/(p1 − p2)·x1^I(t) = K2·[1 − (z2 − p2)/(p1 − p2)]·x1^I(t).

If z2 = p1 then

y2^I(t) = K2·x1^I(t) − K2·(p1 − p2)/(p1 − p2)·x1^I(t) = K2·x1^I(t) − K2·x1^I(t) = 0, ∀t.

Practically this is not possible because x1^I(t) → ±∞ and

y2^I(t) = K2·x1^I(t) − K2·x1^I(t) → K2·(±∞) − K2·(±∞) = ±(∞ − ∞),

which means that "the stabilised" output y2^I(t) is the difference of two very large values which, at the limit t → ∞, becomes an indetermination of the type ∞ − ∞. The theoretical solution of this indetermination gives a finite value for the limit, lim_{t→∞} y2^I(t) = 0. This finite limit is our interpretation of the serially "stabilised" output of the system S2.

3.2.7. Steady State Serial Connection of Two Systems.

A system Si is in a so-called equilibrium state, denoted xie, if its state variable xi is constant for any time moment starting from an initial time moment. For a continuous time system Si of the form (3.1.2),

Si: ẋi(t) = fi(xi(t), ui(t), t);  yi(t) = gi(xi(t), ui(t), t)   (3.2.41)

this means

xi(t) = xie = const., ∀t ≥ t0  ⇔  ẋi(t) ≡ 0, ∀t ≥ t0   (3.2.42)

The equilibrium state is the real solution of the equation

fi(xie, ui(t), t) = 0  ⇒  xie = fi^{−1}(ui(t), t), xie = const.   (3.2.43)

possible only for some functions ui(t) = uie(t). The output in the equilibrium state is

yie(t) = gi(xie, uie(t), t)   (3.2.44)

If the system is time invariant, that means

Si: ẋi(t) = fi(xi(t), ui(t));  yi(t) = gi(xi(t), ui(t))   (3.2.45)

an equilibrium state

xie = fi^{−1}(ui(t)), xie = const.   (3.2.46)

is possible only if the input is a time constant function,
ui(t) = uie(t) = Uie ∈ Du, ∀t ≥ t0  ⇒  xie = fi^{−1}(Uie), xie = const. ∈ R^n   (3.2.47)

and such a regime is called a steady state regime. The output in a steady state regime is

Yie = gi(xie, Uie) = gi(fi^{−1}(Uie), Uie) = Qi(Uie), Uie ∈ Du   (3.2.48)

For the sake of convenience we shall denote the input and output variables in steady state regime by

Yie = Yi, Uie = Ui   (3.2.49)

The input-output relation in steady state regime is also called static characteristic,

Yi = Qi(Ui), Ui ∈ Du   (3.2.50)

We can say that a system is in steady state regime, starting from an initial time moment t0, if all the variables (state, input, output) are time constant functions for ∀t ≥ t0. A system Si is called a static system if there exists at least one static characteristic (3.2.50). It is possible for a system to have several equilibrium states and therefore several static characteristics.

For SISO LTI systems,

Si: ẋi(t) = Ai·xi(t) + bi·ui(t);  yi(t) = ci^T·xi(t) + di·ui(t)   (3.2.51)

the static characteristic is

Yi = Qi(Ui) = [−ci^T·Ai^{−1}·bi + di]·Ui, ∀Ui ∈ R, if det(Ai) ≠ 0   (3.2.52)

If det(Ai) = 0, which means the system has at least one eigenvalue equal to zero (the system is of integral type), then the static characteristic exists only for Ui = 0.

For discrete time systems as in (3.1.4) the discussion is similar. The equilibrium state is

xk = xe = const., ∀k ≥ k0  ⇔  xk+1 ≡ xk, ∀k ≥ k0   (3.2.53)

given by the equation

xie = fi(xie, uik, k)  ⇒  xie = ψi^{−1}(uik, k), xie = const.   (3.2.54)

Let us now consider two nonlinear subsystems S1, S2, described in steady state regime by the static characteristics

S1: Y1 = Q1(U1)   (3.2.55)
S2: Y2 = Q2(U2)   (3.2.56)

They are serially connected through the connection relations

Rc: U2 = Y1, U = U1, Y = Y2   (3.2.57)

The serially interconnected system has the static characteristic

Y = Q(U) = Q2[Q1(U)] = (Q2 ∘ Q1)(U)

obtained by simple composition of the two functions. This composition can also be performed graphically.
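The static characteristic (3.2.52) of a SISO LTI system can be evaluated numerically; the matrices below are an illustrative example, and the gain −c^T·A^{−1}·b + d is exactly the DC gain H(0) of the system's transfer function:

```python
import numpy as np

A = np.array([[-3.0, 0.0],
              [ 2.0, -5.0]])
b = np.array([1.0, 0.0])
c = np.array([2.0, 1.0])
d = 0.0

assert np.linalg.det(A) != 0           # (3.2.52) requires det(A) != 0
gain = -c @ np.linalg.solve(A, b) + d  # -c^T A^{-1} b + d
print(gain)                            # 0.8, so Y = 0.8*U for any constant input U
```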
3.2.8. Serial Connection of Several Subsystems.

All the aspects discussed regarding the connection of two subsystems can be extended without difficulty to several subsystems, let us say q subsystems. For q LTI systems described by transfer matrices Hi(s),

Si: Yi(s) = Hi(s)·Ui(s), i = 1:q   (3.2.58)

the connection relations are

Rc: u^{i+1} = y^i, i = 1:(q − 1)   (3.2.59)

and the connection conditions are

Cc: Γi ⊆ Ωi+1, i = 1:(q − 1)   (3.2.60)

The equivalent input-output transfer matrix is

H(s) = Hq·Hq−1·…·H1   (3.2.61)

because Yq = Hq·Uq, Yq−1 = Hq−1·Uq−1, but Uq = Yq−1, so Yq = Hq·(Hq−1·Uq−1), and so on. For SISO systems the order of the transfer functions in the equivalent product can be changed, that means

H(s) = Hq·Hq−1·…·H1 = H1·…·Hq−1·Hq   (3.2.62)
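For SISO subsystems given by numerator/denominator polynomial coefficients, the equivalent serial transfer function (3.2.61) is obtained by multiplying numerators and denominators; a minimal sketch with three illustrative subsystems:

```python
import numpy as np

# Serial connection of q SISO subsystems: H = Hq * ... * H1.
subsystems = [
    (np.array([2.0]), np.array([1.0, 3.0])),        # H1 = 2/(s+3)
    (np.array([1.0, 1.0]), np.array([1.0, 5.0])),   # H2 = (s+1)/(s+5)
    (np.array([4.0]), np.array([1.0, 0.5])),        # H3 = 4/(s+0.5)
]

num, den = np.array([1.0]), np.array([1.0])
for ni, di in subsystems:
    num = np.polymul(num, ni)
    den = np.polymul(den, di)

print("num:", num)   # [8. 8.]            -> 8*(s+1)
print("den:", den)   # [1. 8.5 19. 7.5]   -> (s+3)(s+5)(s+0.5)
```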
3.3. Parallel Connection.

For q subsystems connected in parallel,

Si: Yi(s) = Hi(s)·Ui(s), i = 1:q

the connection relations are

Rc: Ui(s) = U(s), i = 1:q;  Y(s) = Σ_{i=1}^{q} Yi(s)

and the connection conditions are

Cc: Ωi = Ω, ∀i;  Γi ⊆ Γ.

Then

Y(s) = Σ_{i=1}^{q} Yi(s) = Σ_{i=1}^{q} Hi(s)·U(s) = [Σ_{i=1}^{q} Hi(s)]·U(s) = H(s)·U(s),

so the equivalent transfer matrix is

H(s) = Σ_{i=1}^{q} Hi(s)

We can draw a block diagram which illustrates this connection (Figure 3.3.1: the common input u drives the blocks H1, H2, …, Hq, and their outputs y1, y2, …, yq are summed to give y).
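The parallel equivalent H = H1 + … + Hq for SISO subsystems in numerator/denominator form is obtained by bringing the terms to a common denominator; a minimal sketch for q = 2 with illustrative subsystems:

```python
import numpy as np

# H1 = 2/(s+3), H2 = 1/(s+5); H = H1 + H2 = (n1*d2 + n2*d1)/(d1*d2)
n1, d1 = np.array([2.0]), np.array([1.0, 3.0])
n2, d2 = np.array([1.0]), np.array([1.0, 5.0])

num = np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1))
den = np.polymul(d1, d2)

print("num:", num)   # [ 3. 13.]   -> 3s + 13
print("den:", den)   # [ 1.  8. 15.] -> s^2 + 8s + 15
```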
3.4. Feedback Connection.

We shall consider the feedback connection of two subsystems, S1 on the direct path and S2 on the feedback path,

S1: Y1(s) = H1(s)·U1(s);  S2: Y2(s) = H2(s)·U2(s)

with the connection relations

Rc: E = U ± Y2;  U1 = E;  U2 = Y1;  Y = Y1.

We can plot this set of relations as a block diagram (Figure 3.4.1: U enters a summing junction together with ±Y2; the error E = U1 drives H1, whose output Y1 = Y is fed back through H2).

The equivalent transfer matrix H(s), with Y(s) = H(s)·U(s), of the feedback interconnected system can be obtained by an algebraical procedure:

Y = H1·(U ± H2·Y) = ±H1·H2·Y + H1·U  ⇒  Y = (I ∓ H1·H2)^{−1}·H1·U  ⇒  H = (I ∓ H1·H2)^{−1}·H1

where, with H1 of dimension (r × p) and H2 of dimension (p × r), the product H1·H2 is (r × r). Another way:

E = U ± H2·H1·E  ⇒  E = (I ∓ H2·H1)^{−1}·U  ⇒  Y = H1·(I ∓ H2·H1)^{−1}·U  ⇒  H = H1·(I ∓ H2·H1)^{−1}

where H2·H1 is (p × p). The two forms of the equivalent transfer matrix are algebraically identical, but the first, H = (I ∓ H1·H2)^{−1}·H1, requires an (r × r) matrix inversion, while the second, H = H1·(I ∓ H2·H1)^{−1}, requires a (p × p) matrix inversion.

In the SISO case we have the transfer function

H(s) = Y(s)/U(s) = H1(s)/[1 ∓ H1(s)·H2(s)] = H1(s)/[1 ∓ H2(s)·H1(s)].
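The algebraic identity of the two forms of H (the "push-through" rule) can be checked numerically on constant gain matrices with the dimensions above (H1 is r×p, H2 is p×r); since the identity holds at every value of s, a constant-matrix check is representative. The random values are scaled so the inverses are well conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)
H1 = 0.5 * rng.standard_normal((2, 3))   # r x p
H2 = 0.5 * rng.standard_normal((3, 2))   # p x r

# Negative feedback: (I + H1 H2)^{-1} H1  versus  H1 (I + H2 H1)^{-1}
left  = np.linalg.inv(np.eye(2) + H1 @ H2) @ H1   # (r x r) = (2 x 2) inversion
right = H1 @ np.linalg.inv(np.eye(3) + H2 @ H1)   # (p x p) = (3 x 3) inversion

print(np.allclose(left, right))   # True: the two forms are identical
```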
4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.

4.1. Principle Diagrams and Block Diagrams.

Frequently, in the system theory approach, dynamical systems are represented, interpreted and manipulated using different graphical methods and techniques. There are three main types of graphical representation of systems:
1. Principle diagrams;
2. Block diagrams;
3. Signal flow graphs.

4.1.1. Principle Diagrams.

The principle diagram is a method of representing physical systems by using the norms and symbols that belong to the physical system's domain, expressed in such a way that one can understand how the physical system operates. Principle diagrams describe physical objects only; they are also called schematic diagrams. To be able to understand and interpret a principle diagram, knowledge of and competence in the field to which that object belongs are necessary. The same symbol can have different meanings depending on the field of application: for example, the same symbol can represent a resistor for an electrical engineer but a spring for a mechanical engineer. There is no mathematical model in these representations, but they contain all the specifications or descriptions of the configuration of that physical object (system).

Starting from and using a principle diagram, an oriented system (or several oriented systems) can be specified if the output variables (outputs, for short) are selected. After that, the mathematical model, that means the abstract system attached to that oriented system, can be determined.

4.1.2. Block Diagrams.

A block diagram is a pictorial (graphical) representation of the mathematical relations between the variables which characterise a system. Mainly, a block diagram represents the cause-and-effect relationship between the input and the output of an oriented system; so a block diagram expresses the abstract system related to an oriented system. It reveals all its components in a form amenable to analysis, design and evaluation. Block diagrams consist of unidirectional operational blocks. If in this representation the system state, including the initial state, is involved, then the block diagram is called a state diagram (SD).

The fundamental elements of a block diagram are:
1. Blocks. A block, usually drawn as a rectangle, represents the mathematical operator which relates the causes (input variables) to the effects (output variables). Inside the rectangle a symbol of that operator is marked. However, some specific operators are represented by other geometrical figures (for example, the sum operator is usually represented by a circle). The input variables of a block are drawn as arrows incoming to the geometrical figure representing that block; the output variables are drawn as arrows outgoing from it.

For example, an explicit relation between the variable u and the variable y is of the form

y = F(u)   (4.1.1)

where u and y can be time functions. The symbol F( ), denoting an operator (it can be a simple function), expresses an oriented system where u is the cause and y is the effect; that means y is the output variable and u is the only input variable. We can write (4.1.1) as

y = F(u) ⇔ y = F{u} ⇔ y = Fu

and the attached block diagram is as in Fig. 4.1.1 (a block marked F, with the input arrow u and the output arrow y).

2. Oriented Lines. Oriented lines represent the variables involved in the mathematical relationships. They are drawn as straight lines marked with arrows; for short, an oriented line is called an "arrow". The direction of the arrow points out the cause-effect direction and has nothing to do with the direction of the flow of physical quantities in the principle diagram.

3. Take-off Points. A take-off point, called also a pick-off point, is drawn as a dot. It graphically illustrates that the same variable, which can be an output variable for one block, is delivered (dispersed) to several arrows, becoming an input variable for other blocks. Several equivalent representations of a take-off point are shown in Fig. 4.1.2.
Example 4.1.1. Block Diagram of an Algebraical Relation.

Let us consider an algebraical relation

R(x1, x2, x3) = 0   (4.1.2)

where the variables x1, x2, x3 can be time functions, xi = xi(t). Let us consider that (4.1.2) is a linear relation with constant coefficients,

a1·x1 + a2·x2 + a3·x3 = 0   (4.1.3)

The relations (4.1.2), (4.1.3) represent a non-oriented system and cannot be represented by a block diagram. Any of the variables x1, x2, x3 from (4.1.3) could be chosen as an output variable. Suppose we are interested in the variable x1; it can be expressed, if a1 ≠ 0, as

x1 = (−a2/a1)·x2 + (−a3/a1)·x3  ⇔  x1 = f(x2, x3)  ⇔  x1 = F{u}, u = [x2 x3]^T   (4.1.4)

which constitutes an oriented system where x1 is the output and x2, x3 are the inputs. The block diagram representing this oriented system is as in Fig. 4.1.3.

Now let relation (4.1.3) be of the form

x1 = K·(b1·x1 + b2·x2 + b3·x3)   (4.1.5)

where a1 = 1 − K·b1, a2 = −K·b2, a3 = −K·b3. Relation (4.1.5) does not represent a unidirectional operator, because x1 depends on itself. From (4.1.5) we can delimit several unidirectional operators (blocks) by introducing some new variables, as for example

w1 = b2·x2 + b3·x3;  w2 = b1·x1;  w3 = w1 + w2;  x1 = K·w3   (4.1.6)

Each relation from (4.1.6) represents a unidirectional operator which can be represented by a block, so relation (4.1.5) can be represented as several interconnected blocks as in Fig. 4.1.4. This is a feedback structure containing a loop; this loop is an algebraical loop, which causes many difficulties in numerical implementations.

Manipulating the block diagram from Fig. 4.1.4, eliminating the intermediate variables w1, w2, w3 from (4.1.6), or withdrawing x1 from (4.1.5), we get the oriented system characterised by the unidirectional relation

x1 = [K·b2/(1 − K·b1)]·x2 + [K·b3/(1 − K·b1)]·x3 = (−a2/a1)·x2 + (−a3/a1)·x3   (4.1.7)

Now the relation (4.1.7) can be depicted as a unidirectional block as in Fig. 4.1.5, having x1 as the output variable and x2, x3 as input variables. If 1 − K·b1 = 0 ⇔ a1 = 0, then relations (4.1.3) and (4.1.5) are degenerate, that means they do not contain the variable x1.

Example 4.1.2. Variables' Directions in Principle Diagrams and Block Diagrams.

Let us consider a physical object as described by the principle (schematic) diagram from Fig. 4.1.6. It represents a cylindrical water tank supplied, through the pipe P1, with water of flow rate q1=u1, and from which water drains, through the pipe P2, with flow rate q2=u2. Of course, from a physical point of view, q1=u1 means an incoming flow and q2=u2 means an outgoing flow. (Figure 4.1.6 shows the water tank both as a physical "block" and as an oriented system with inputs q1=u1, q2=u2 and output L=y.)
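Returning to Example 4.1.1, the algebraical loop in (4.1.5) can be resolved either by the closed form (4.1.7) or, as many simulators do, by fixed-point iteration of the loop; a minimal sketch with illustrative coefficients (the iteration converges here only because |K·b1| < 1):

```python
K, b1, b2, b3 = 2.0, 0.2, 1.0, 0.5
x2, x3 = 3.0, 1.0

# Closed form (4.1.7): x1 = K*b2/(1-K*b1)*x2 + K*b3/(1-K*b1)*x3
x1_closed = (K * b2 * x2 + K * b3 * x3) / (1.0 - K * b1)

# Fixed-point iteration of the loop x1 = K*(b1*x1 + b2*x2 + b3*x3)
x1 = 0.0
for _ in range(100):
    x1 = K * (b1 * x1 + b2 * x2 + b3 * x3)

print(x1_closed, x1)   # both approximately 11.6667
```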
Suppose we are interested in the water level in the tank, denoted L=y. It is an attribute (a characteristic) of the physical object (the water tank), an effect we are interested in. All the causes which affect this selected output are represented by the two flow rates q1=u1 and q2=u2, so, based on causality principles, in the oriented system both u1 and u2 are input variables. The oriented system, in the systems theory meaning, has L=y as output and both u1 and u2 as inputs.

The mathematical relationship between y and u1, u2 in the time domain is

y(t) = K·∫_{t0}^{t} [u1(τ) − u2(τ)]dτ + y(t0)

It is represented by the block diagram from Fig. 4.1.6.a. To determine the mathematical model in the complex domain we define the variables as variations with respect to a steady state, defined by Yss = y(0), U1ss = u1(0) = 0, U2ss = u2(0) = 0:

Y(s) = L{y(t) − Yss}, U1(s) = L{u1(t) − U1ss}, U2(s) = L{u2(t) − U2ss},

so we have

Y(s) = (K/s)·[U1(s) − U2(s)]   (4.1.8)

represented in Fig. 4.1.6.b. In matrix form the input-output relation is

Y(s) = H(s)·U(s), H(s) = [K/s  −K/s], U(s) = [U1(s); U2(s)]   (4.1.9)

which allows us to represent the system as a whole, as in Fig. 4.1.6.c.
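The integrator model (4.1.8) of the tank can be simulated directly in the time domain; a minimal Euler sketch with illustrative values (K depends on the tank cross-section, which is not specified in the text):

```python
K = 0.5            # gain of the level integrator; illustrative value
y = 1.0            # initial level y(t0)
u1, u2 = 2.0, 1.2  # constant inflow and outflow rates

# Euler integration of y' = K*(u1 - u2)
dt, T = 1e-3, 10.0
for _ in range(int(T / dt)):
    y += dt * K * (u1 - u2)

print(y)   # y(10) = 1 + 0.5*0.8*10 = 5.0
```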
4.1.3. Block Diagram of an Integrator.

The integrator is an operator which performs, in the time domain,

ẋ(t) = u(t)  ⇔  x(t) = x(t0) + ∫_{t0}^{t} u(τ)dτ   (4.1.10)

whose block diagram is as in Fig. 4.1.7.a. The first equation from (4.1.10) can be written using the Dirac impulse as

x(t) = ∫_{t0}^{t} x(t0)·δ(τ − t0)dτ + ∫_{t0}^{t} u(τ)dτ = ∫_{t0}^{t} [u(τ) + x(t0)·δ(τ − t0)]dτ   (4.1.11)

so, using (4.1.11), we can draw the integrator, considering or not the initial state, in the time domain as in Fig. 4.1.7. We have to understand that t means (ta − t0), that is, the initial time moment has to be zero when the Laplace transformation is used, but this is of no consequence for the algebraical computation.

We can represent the integrator behaviour in the complex domain taking into consideration that L{ẋ(t)} = sX(s) − x(0). Denoting X(s) = L{x(t)}, we obtain

X(s) = (1/s)·[L{ẋ(t)} + x(0)]   (4.1.12)

as represented in Fig. 4.1.7.b.

4.1.4. State Diagrams Represented by Block Diagrams.

A state diagram (SD, for short) is the graphical representation of the state equations of a system. State diagrams can be drawn using both the block diagram and the signal flow graph methods; they can be applied to continuous-time and to discrete-time systems, represented either in the time domain or in the complex domain (s or z); and they can be used also for time-variant systems and for nonlinear systems.

An SD contains only three types of elements (operators):
1. Non-dynamical (scalor type) elements, represented by ordinary scalar or vectorial functions; they can be matrix gains or scalar gains.
2. Summing operators.
3. Integrators, considering or not the initial state.
To draw an SD, first the integrators involved in the state equations are represented, and then the diagram is filled in with the other two types of components.

Example 4.1.3. Let us consider a first order continuous-time system described by the state equations

ẋ(t) = a(t)·x(t) + b(t)·u(t);  y(t) = c(t)·x(t) + d(t)·u(t)   (4.1.13)

The corresponding SD in the time domain is depicted in Fig. 4.1.9. For linear time-invariant systems all the coefficients a, b, c, d have constant values, so we can represent (4.1.13) as

ẋ(t) = a·x(t) + b·u(t);  y(t) = c·x(t) + d·u(t)   (4.1.14)

The time domain SD of this system is identical with that from Fig. 4.1.9, except that the coefficients are constants. The state diagram in the complex domain is identical with the time domain SD, except the integrator, which is replaced by its complex domain equivalent from Fig. 4.1.8, the variables being denoted by their complex images as in Fig. 4.1.10. Sometimes, having the integrator represented in the complex domain (because in the complex domain we can perform algebraic operations), we denote the variables in the time domain, or in both domains.

In these state diagrams we can still see explicitly the state derivative, in addition to the initial state. If we are not interested in the state derivative, we can transform the internal loop into a simple block, withdrawing advantages when necessary. To do this, from the above SD, because

L{ẋ(t)} = sX(s) − x(0) = a·X(s) + b·U(s)  ⇒  X(s) = (1/s)·[a·X(s) + x(0) + b·U(s)],

we obtain

X(s) = [1/(s − a)]·[x(0) + b·U(s)]   (4.1.15)

Now we can replace the internal loop by the transfer function 1/(s − a) with a summing element at its input, as in Fig. 4.1.11. On this form of the SD we can see very easily the dependence of Y(s) and X(s) upon U(s) and x(0).

The state diagram of a system can be drawn also for non-scalar systems of the general form

ẋ = A·x + B·u;  y = C·x + D·u   (4.1.16)

where the matrices have the dimensions A(n×n), B(n×r), C(p×n), D(p×r). Because

L{ẋ(t)} = sX(s) − x(0) = [sX1(s) … sXn(s)]^T − [x1(0) … xn(0)]^T,

we have

X(s) = [(1/s)·In]·[L{ẋ(t)} + x(0)]   (4.1.17)

which is represented as in Fig. 4.1.12. Sometimes, to point out that the oriented lines carry vectors, they are drawn with double lines, as in Fig. 4.1.12.
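The reduced complex domain form of the state diagram, X(s) = (sI − A)^{−1}·[x(0) + B·U(s)], Y(s) = C·X(s) + D·U(s), can be evaluated numerically at any fixed value of s; the matrices and the value s = 2 below are illustrative:

```python
import numpy as np

A = np.array([[-3.0, 0.0],
              [ 2.0, -5.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[2.0, 1.0]])
D = np.array([[0.0]])
x0 = np.array([[1.0], [-1.0]])

s = 2.0
U = 1.0 / s                              # unit step input, U(s) = 1/s
Phi = np.linalg.inv(s * np.eye(2) - A)   # (sI - A)^{-1}
X = Phi @ (x0 + B * U)
Y = C @ X + D * U
print(Y)   # Y(2) = 19/35, approximately 0.5429
```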
4.2. System Reduction Using Block Diagrams.

4.2.1. System Reduction Problem.

Sometimes complex systems appear expressed as a connection of subsystems, in which some intermediate variables are pointed out. The reduction of such a complex system to an equivalent structure means to determine the expression of the input-output mathematical model of that complex system as a function of the subsystems' mathematical models. This can be done by eliminating all the intermediate variables. In the case of SISO LTI systems the equivalent transfer function has to be determined (for MIMO LTI systems, the equivalent transfer matrix).

In reduction processes the goal is not to solve completely the system of equations which characterises the connection (that means to determine all the unknown variables, outputs and intermediate variables), but to eliminate only the intermediate variables and to express one component (or all the components) of the output vector as a function of the input vector.

Three methods of reduction are usually utilised:
1. Analytical reduction.
2. Reduction through block diagram transformations.
3. Reduction by using the signal flow graph method.

4.2.2. Analytical Reduction.

Mainly this means to solve analytically, by using different techniques and methods, the set (system) of equations describing the dynamical system. It has the advantage of being applicable to a broader class of systems than linear time-invariant ones only. Because the coefficients of the equations are expressions of some literals, particularly transfer functions in the complex variable s or z, the numerical methods implemented on computers cannot be applied, and the determination of the solution becomes very cumbersome, mainly for systems with a larger number of equations.
4.4.2) H ik (s) = H Uik (s) = i U (s)≡0 .2. ∀j≠k U k (s) j H 11 . 4. (4.1) Y (s) Y (4.2. Y(s)=H(s)U(s) (4. For block diagram manipulation some graphical transformations can be used.. We consider all the relations in complex domain s or z. through manipulations.. H rp To determine such a component H ik (s) we have to ignore all the other outputs except Y i and to consider zero all the inputs except the input U k . but for seek of convenience in the following these variables are omitted. Equivalent relation/diagram. multioutput one (MIMO LTI) the equivalent structure will be expressed by an equivalent transfer matrix H(s). 1. Throghout this method it is possible to determene the equivalent transfer function between one input U k (s) and one output Y i (s) which represents the component H ik (s) of the transfer matrix H(s).2. Elementary Transformations on Block Diagrams. Combining blocks in parallel.3. If the complex system is a multiinput. Y = (H 1 ± H 2 ) ⋅ U Y = H1 ⋅ U ± H2 ⋅ U U + ± H1 H2 Y U H1±H2 Y 93 .1. 4.. Equivalent relation/diagram. System Reduction Through Block Diagrams Transformations. H 1p n H(s) = : H ik : .3) k=1 H r1 . System Reduction by Using Block Diagrams. Original relation/diagram.2.3. Combining blocks in cascade. Original relation/diagram. Y = (H 2 H 1 ) ⋅ U Y = H 2 ⋅ (H 1 U) U H1 H2 Y U H2 H1 Y 2. Y i (s) = Σ H ik U k . If a system is represented by a complex block diagram it can be reduced to a simple equivalent form by transforming. the block diagram according to some rules.2. considering the relations..2. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS. They are based on the identity of the algebaical inputoutput relations.
3. Eliminating a forward loop.
   Original relation/diagram:    Y = H1·U ± H2·U
   Equivalent relation/diagram:  Y = (H1 H2^-1 ± I)·H2·U

4. Eliminating a feedback loop (negative feedback shown; for positive feedback the sign in the denominator changes).
   Original relation/diagram:    Y = H1·(U − H2·Y)
   Equivalent relations/diagram: a) Y = [(I + H1 H2)^-1 H1]·U
                                 b) Y = [H1 (I + H2 H1)^-1]·U
   Scalar case:  Y = H1/(1 + H1 H2) · U = H1/(1 + H2 H1) · U

5. Removing a block from a feedback loop.
   Original relation/diagram:    Y = H1·(U − H2·Y)
   Equivalent relations/diagram: a) Y = [(I + H1 H2)^-1 H1 H2]·[H2^-1 U]
                                 b) Y = [H2^-1]·[H2 H1 (I + H2 H1)^-1]·U
   Scalar case:  H = H1/(1 + H1 H2) = (H1 H2/(1 + H1 H2))·(1/H2) = (1/H2)·(H2 H1/(1 + H2 H1))

6. Moving a takeoff point ahead of a block.
   Original relation/diagram:    Y = H·U, with the takeoff point placed at the block output Y.
   Equivalent relation/diagram:  the takeoff point is placed at the input U and an additional block H is inserted on the takeoff branch, so that the branch still carries Y = H·U.
7. Moving a takeoff point beyond a block.
   Original relation/diagram:    Y = H·U, with the takeoff point placed at the input U.
   Equivalent relation/diagram:  the takeoff point is placed at the output Y and an additional block H^-1 is inserted on the takeoff branch, so that the branch still carries U = [H^-1 H]·U.

8. Moving a summing point ahead of a block.
   Original relation/diagram:    Y = H·U1 ± U2
   Equivalent relation/diagram:  Y = H·[U1 ± H^-1 U2]

9. Moving a summing point beyond a block.
   Original relation/diagram:    Y = H·[U1 ± U2]
   Equivalent relation/diagram:  Y = [H U1] ± [H U2]

10. Rearranging summing points.
   Original relation/diagram:    Y = ±U1 + [U2 ± U3]
   Equivalent relation/diagram:  Y = U2 + [±U1 ± U3]
   Other equivalent groupings:
      Y(s) = U1(s) + U2(s) ± U3(s) = (U1(s) + U2(s)) ± U3(s)
      Y(s) = (U1(s) ± U2(s)) ± U3(s) = (U1(s) ± U3(s)) ± U2(s)
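The algebraic identities behind these transformations can be verified mechanically. Below is a minimal sketch in Python using sympy; the block gains H, H1, H2 are kept symbolic, and sympy itself is our tooling choice, not something the text prescribes:

```python
import sympy as sp

H, H1, H2, U, U1, U2, Y = sp.symbols('H H1 H2 U U1 U2 Y')

# Transformations 1-2: a cascade is a product, a parallel connection a sum
assert sp.expand(H2 * (H1 * U) - (H2 * H1) * U) == 0
assert sp.expand(H1 * U + H2 * U - (H1 + H2) * U) == 0

# Transformation 4: eliminating a (negative) feedback loop  Y = H1*(U - H2*Y)
Yfb = sp.solve(sp.Eq(Y, H1 * (U - H2 * Y)), Y)[0]
assert sp.simplify(Yfb - H1 * U / (1 + H1 * H2)) == 0

# Transformation 5: removing a block from the feedback loop, scalar identity
assert sp.simplify(H1 / (1 + H1 * H2)
                   - (H1 * H2 / (1 + H1 * H2)) * (1 / H2)) == 0

# Transformations 8-9: moving a summing point ahead of / beyond a block
assert sp.expand(H * (U1 + U2 / H) - (H * U1 + U2)) == 0
assert sp.expand(H * (U1 - U2) - (H * U1 - H * U2)) == 0
```

Each assertion states that the original and the equivalent relation are the same algebraic expression, which is exactly what the graphical transformation claims.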
4.2.2. Transformations of a Block Diagram Area by Analytical Equivalence.

If in some parts of a block diagram nonstandard connections appear, for which we cannot perform a direct reduction, then we can mark the area containing the undesired connections by a closed contour, specifying and denoting, by additional letters, all the oriented lines incoming to and outgoing from this contour. The incoming oriented lines will be considered as input variables and the outgoing oriented lines as output variables of the contour, interpreted as a separate oriented system. On this oriented system (our contour) we can perform an analytical analysis and determine the expression of each contour output variable as a function of the contour input variables. Then these relationships are graphically represented as a block diagram which will replace the marked area of the original block diagram.

As we mentioned, in the case of multivariable systems, to determine the component Hik(s) of a transfer matrix H(s) we must consider zero all the components Uj(s) (zero-inputs) of the vector U(s) except the component Uk(s), and ignore all the output vector components except the component Yi(s). When we apply this rule and a summing operator has several inputs among which zero-inputs exist, this summing operator is replaced by an element (block) expressing the dependence of the summing operator output upon the inputs that are not considered to be zero.

Let us consider a two-inputs/one-output system as depicted in Fig. 4.2.1.a, with the blocks G1 and G2. We want to determine the two components H11, H12 of the transfer matrix H(s) = [H11(s) H12(s)] using the relation (4.2.2).

Figure no. 4.2.1. Representations of a Multi-Inputs Summing Element (a: original diagram with the blocks G1, G2; b, c, d: equivalent representations when one input is considered zero).

When we determine H11,
U2 must be considered zero and the summing element will be represented as in Fig. 4.2.1.b, by a block with gain 1, or as in Fig. 4.2.1.d. When we determine H12, U1 must be considered zero and the summing element will be represented as in Fig. 4.2.1.c. Of course this is a very simple example, but the goal is only to illustrate how the summing element is transformed.

Example 4.2.1. Suppose that a contour delimits an area with two contour input variables, denoted a1, a2, having the Laplace transforms A1(s), A2(s), and two contour output variables, denoted b1, b2, with the Laplace transforms B1(s), B2(s) respectively. Suppose also that, using analytical methods, we can express B1, B2 upon A1, A2 as in (4.2.4), if the dependence is a linear one. This oriented contour is represented in Fig. 4.2.2.a.
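For a linear contour, the relations (4.2.4) are just a matrix product B = G·A. A small sympy sketch (the Gij are symbolic placeholders for whatever transmittances the contour analysis yields; the code is illustrative, not part of the text):

```python
import sympy as sp

A1, A2 = sp.symbols('A1 A2')                         # contour inputs
G11, G12, G21, G22 = sp.symbols('G11 G12 G21 G22')   # contour transmittances

G = sp.Matrix([[G11, G12], [G21, G22]])
B = G * sp.Matrix([A1, A2])                          # contour outputs B1, B2

# these are exactly the relations (4.2.4)
assert sp.expand(B[0] - (G11 * A1 + G12 * A2)) == 0
assert sp.expand(B[1] - (G21 * A1 + G22 * A2)) == 0
```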
  B1(s) = G11(s)A1(s) + G12(s)A2(s)
  B2(s) = G21(s)A1(s) + G22(s)A2(s)                               (4.2.4)

These relations are now represented by a block diagram as in Fig. 4.2.2.b, which will replace the undesired connection.

4.2.3. Algorithm for the Reduction of Complicated Block Diagrams.

In practice block diagrams are often quite complicated, including several feedback or feedforward loops, having multiple inputs and multiple outputs. In the linear case the reduced system is described by a transfer matrix H(s) whose components are determined separately. For example, to determine the component

  Hik(s) = H^Yi_Uk(s) = Yi(s)/Uk(s) ,  Uj(s) ≡ 0, ∀j ≠ k

which relates the input Uk to the output Yi, the following steps may be used:
1. Consider zero all the inputs except Uk.
2. Ignore all the outputs except Yi.
3. Combine and replace by its equivalent all blocks connected in cascade (serial connections), using Transformation 1.
4. Combine and replace by its equivalent all blocks connected in parallel, using Transformation 2.
5. Eliminate all the feedback loops, using Transformation 4.
6. Apply the graphical transformations 3, 5 : 10 to reveal the three above standard connections.
7. Solve complicated connections using analytical equivalence, as in 4.2.2. Sometimes we can transform the related summing points as in Fig. 4.2.2.b.
8. Repeat steps 3 to 7, and then select another component of the transfer matrix.
Figure no. 4.2.2. a) Closed contour delimiting the undesired connections, with the contour inputs A1(s), A2(s) and the contour outputs B1(s), B2(s); b) Block diagram of the relations (4.2.4), built with the blocks G11(s), G12(s), G21(s), G22(s) and two summing points, which replaces the undesired connection.
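As an illustration of steps 3 and 5 of the algorithm, consider a hypothetical single-loop structure: a block H1 in the forward path with negative feedback H2, followed in cascade by a block H3. This toy structure is our own example, not one from the text:

```python
import sympy as sp

U, Y, w = sp.symbols('U Y w')        # w: intermediate variable at the loop output
H1, H2, H3 = sp.symbols('H1 H2 H3')

# step 5: eliminate the feedback loop  w = H1*(U - H2*w)  (Transformation 4)
w_sol = sp.solve(sp.Eq(w, H1 * (U - H2 * w)), w)[0]

# step 3: combine the remaining cascade  Y = H3*w  (Transformation 1)
Y_sol = sp.simplify(H3 * w_sol)
assert sp.simplify(Y_sol - H3 * H1 * U / (1 + H1 * H2)) == 0
```

The intermediate variable w is eliminated, and the equivalent transfer function H3 H1/(1 + H1 H2) relates U directly to Y, which is the goal of the reduction.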
Example 4.2.2. Reduction of a Multivariable System.

Let us consider a multivariable system described by a block diagram (BD) as in Fig. 4.2.4. In this example we shall follow in detail, for training reasons, all the steps involved in system reduction based on block diagram transformations. However, in practice and with some experience, many of the steps and drawings below can be avoided. For easier manipulation it is recommended to attach additional variables to each takeoff point and to the output of each summing operator. In our case they are a0 : a8. The summing operators are denoted S1 : S4.

It can be observed that the system has two inputs and two outputs,

  U(s) = [U1(s); U2(s)] ,  Y(s) = [Y1(s); Y2(s)]                  (4.2.5)

with the global block diagram (BD) as in Fig. 4.2.3, described by the transfer matrix H(s) containing 4 components,

  H(s) = | H11(s)  H12(s) |                                       (4.2.6)
         | H21(s)  H22(s) |

  H11(s) = Y1(s)/U1(s), U2=0      H12(s) = Y1(s)/U2(s), U1=0
  H21(s) = Y2(s)/U1(s), U2=0      H22(s) = Y2(s)/U2(s), U1=0

Figure no. 4.2.3. Global block diagram of the two-input, two-output system.

This LTI system is in fact represented by a set of 11 simultaneous algebraical equations as in (4.2.7):

  a1 = U1 − a7      a2 = a0 − a5      a3 = U2 + a6
  a0 = H1 a1        a4 = H3 a3        a5 = H5 a4
  a6 = H2 a2        a7 = H7 a8        a8 = H4 a3 + H6 a4
  Y1 = a8           Y2 = a2                                       (4.2.7)

Figure no. 4.2.4. Detailed block diagram of the multivariable system, with the summing operators S1 : S4, the blocks H1 : H7 and the additional variables a0 : a8.
The set (4.2.7) contains 13 variables: a0 : a8, Y1, Y2, U1, U2, where the two input variables U1, U2 are independent (free variables of the algebraical system). To reduce the system means to eliminate all the intermediate variables a0 : a8, expressing Y1, Y2 upon U1 and U2. This is a rather difficult task. The coefficients H1 : H7 represent expressions of some complex functions; let us denote them as transfer functions too.

a) Determination of the component H11(s) = Y1(s)/U1(s), U2 = 0.

To do this we shall ignore Y2 and consider U2 = 0, as in Fig. 4.2.5, where now Y2 is not drawn and, because U2 = 0, we have a3 = a6, so a block with gain 1 appears instead of the summing point S3.

Figure no. 4.2.5. Block diagram for the determination of H11(s): Y2 ignored, U2 = 0, S3 replaced by a gain-1 block.

To put into evidence the standard connections we intend to move the takeoff point a4 ahead of the block H3 (as pointed out by the dashed arrow) and to rearrange this new takeoff point. This takeoff point movement reflects the following equivalence relationships:

  a5 = H5 a4 ,  a4 = H3 a3   =>   a5 = H5 H3 a3                   (4.2.8)

After these operations, two structures appear very clearly, marked by dashed rectangles in Fig. 4.2.6: one standard feedback connection (with the equivalent transfer function Ha) and one parallel connection (with the equivalent transfer function Hb).

Figure no. 4.2.6. Block diagram with the feedback connection Ha and the parallel connection Hb marked by dashed rectangles.
The blocks marked Ha, Hb are reduced to the equivalent transfer functions

  Ha = H2/(1 + H2 H5 H3)                                          (4.2.9)

  Hb = H4 + H3 H6

so the block diagram from Fig. 4.2.6 becomes as in Fig. 4.2.7: a simple feedback loop described by the transfer function

  H11(s) = H1 Ha Hb / (1 + H1 Ha Hb H7)                           (4.2.10)

Figure no. 4.2.7. Reduced feedback loop with the forward blocks H1, Ha, Hb and the feedback block H7.

Substituting (4.2.9) into (4.2.10), finally we obtain the expression of H11(s):

  H11(s) = (H1 H2 H3 H6 + H1 H2 H4) /
           (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)          (4.2.11)

Figure no. 4.2.8. Equivalent single block H11 relating U1 to Y1 for U2 = 0.

b) Determination of the component H12(s) = Y1(s)/U2(s), U1 = 0.

To evaluate H12(s) we shall consider U1 = 0 and ignore Y2, resulting in a BD as in Fig. 4.2.9. Because now U1 = 0, we have a1 = −a7 and a2 = a0 − a5 = H1 a1 − a5 = −H1 a7 − a5; we transfer the sign "−" of S1 to the input of S2. As discussed before, we move the takeoff point a4 ahead of the block H3.

Figure no. 4.2.9. Block diagram for the determination of H12(s): U1 = 0, Y2 ignored, the "−" sign of S1 transferred to the input of S2.
After the takeoff point a3 has been moved beyond the block Ha, the BD looks like in Fig. 4.2.10. It can be observed that now two simple cascade connections appear, together with the parallel connection denoted Ha, which is

  Ha(s) = H4 + H3 H6                                              (4.2.12)

Figure no. 4.2.10. Block diagram with the takeoff point a3 moved beyond the block Ha.

A new parallel connection appears, denoted Hc, equal to

  Hc(s) = H1 H7 + H5 H3 / Ha

which determines the BD from Fig. 4.2.11: a standard feedback connection, redrawn as in Fig. 4.2.12.

Figure no. 4.2.11. Block diagram with the parallel connection Hc.
Figure no. 4.2.12. Final feedback structure for H12: forward block Ha, feedback blocks H2 and Hc.

The equivalent relationship from the above BD is the component H12(s):

  H12(s) = Ha / (1 + Ha H2 Hc)
         = (H4 + H3 H6) /
           (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)          (4.2.13)
c) Determination of the component H21(s) = Y2(s)/U1(s), U2 = 0.

To determine the transfer function H21(s) we have to consider U2 = 0 and to ignore the output Y1. Under such conditions the relation a3 = U2 + a6 from Fig. 4.2.4 becomes a3 = a6, so the summing operator S3 is now represented by a block with gain 1, and later on it will be ignored. The initial block diagram from Fig. 4.2.4 thus looks like in Fig. 4.2.13.

Figure no. 4.2.13. Block diagram for the determination of H21(s): U2 = 0, Y1 ignored, S3 replaced by a gain-1 block.

Moving the takeoff point a4 ahead of the block H3 we get the BD from Fig. 4.2.14.

Figure no. 4.2.14. Block diagram with the takeoff point a4 moved ahead of the block H3.

The new parallel connection Ha is replaced by

  Ha(s) = H4 + H3 H6

and the BD from Fig. 4.2.15 is obtained.

Figure no. 4.2.15. Block diagram with the parallel connection replaced by the equivalent block Ha.
Moving the takeoff point a6 ahead of the block H2 we get the BD from Fig. 4.2.16, where a new feedback connection, denoted Hd, and a cascade connection, denoted He, appear. The two new connections are expressed by

  Hd(s) = 1/(1 + H2 H3 H5) ,  He(s) = H2 H7 Ha                    (4.2.14)

Figure no. 4.2.16. Block diagram with the feedback connection Hd and the cascade connection He.

Redrawing the above BD we get the shape from Fig. 4.2.17, so the final structure is a simple feedback connection,

Figure no. 4.2.17. Final feedback structure for H21: forward blocks H1, Hd, feedback block He.

from where we obtain H21(s) as

  H21(s) = H1 Hd / (1 + H1 Hd He)
         = H1 / (1 + H2 H3 H5 + H1 H2 H3 H6 H7 + H1 H2 H4 H7)     (4.2.15)
4.2. so the initial block diagram from Fig. H3 H .2. Determination of the component : H 22 (s) = Y 2 (s) U 2 (s) U 1=0 The component H22(s) is evaluated considering U1= 0 and ignoring the output Y1 .18.4.19.2. 104 . looks like in Fig.2. 4.2.2. 4. S1 _ a1 a0 + S2 _ Y2 a2 U2 H2 a6 + + S3 H1 a3 a5 a7 H5. After these reductions. Now we can get the equivalence of the cascade connection between H3 and H5 and the parallel connection Ha(s) = H4 + H3 H6 .18.4.2. 4. a new cascade connections between Ha and H7 can be observed as in Fig.3.20. To reduce this block diagram the takeoff point a4 is moved ahead of the block H3 and will result the structure from Fig. 4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.2.20. d). Ha 7 Figure no. System Reduction by Using Block Diagrams. 4. 4. S1 _ S2 _ a5 a7 Y2 a2 U2 H2 a6 H4 + a3 a1 H1 a0 + + S3 H3 a4 + + S4 a8 H6 H5 H 7 Figure no.19. S1 _ S2 Y2 a2 U2 H2 a6 H4 a3 a1 H1 a0 + + +S 3 H3 _ a5 H3 a4 + H6 + Ha S4 a8 H5 a7 H7 Figure no.
21. (4. 105 . having the expression. 4. (4.20) −H 3 H 5 − H 1 H 4 H 7 − H 1 H 3 H 6 H 7 . 4. denoted by L(s) (4. This polynomial.19) (4.22) L(s) = 1 + H 2 H 3 H 5 + H 1 H 2 H 3 H 6 H 7 + H 1 H 2 H 4 H 7 is the determinant of the system (4.16) H f = −(H 3 H 5 + H 1 H 7 H a ) The last component of the transfer matrix can now obtained immediately. and we shall denote it as Hf. H5 H3 U2 S1 + + _ _ Y2 H .17) 1 − H2 Hf 1 + H 2 H 3 H 5 + H1 H2 H3 H6 H 7 + H 1 H2 H4 H7 Concluding.2. Redrawing the above BD to have the input to the left side and the output to the right side we obtain.2.18) (4. GRAPHICAL REPRESENTATION AND REDUCTION OF SYSTEMS.2. Inside this BD a parallel connection is observed.2. . Ha 7 H2 H1 Hf Figure no. System Reduction by Using Block Diagrams. the four components of the transfer matrix are: H1 H2 H 3 H 6 + H1 H2 H4 H 11 (s) = 1 + H2 H3 H5 + H 1 H 2 H3 H6 H7 + H 1 H 2 H 4 H 7 H 12 (s) = H 21 (s) = H 22 (s) = H 4 + H 3 H6 1 + H2 H3 H5 + H 1 H 2 H3 H6 H7 + H 1 H 2 H 4 H 7 H1 1 + H2 H3 H5 + H 1 H 2 H3 H6 H7 + H 1 H 2 H 4 H 7 (4.2.21.2. Fig. Hf −H 3 H 5 − H 1 H 4 H 7 − H 1 H 3 H 6 H 7 H 22 (s) = = (4.2.2.7).2. 4. solving a standard feedback loop.4.2.21) 1 + H2 H3 H5 + H 1 H 2 H3 H6 H7 + H 1 H 2 H 4 H 7 It can be observed that all of them have the same polynomial to denominator.2.
4.3. Signal Flow Graphs Method (SFG).

Like block diagrams, SFGs illustrate the cause and effect relationships, but with more mathematical rules. SFGs are easier to draw and easier to manipulate than block diagrams. Unfortunately, they are applied only to linear systems modelled by algebraic equations in the time domain, s-domain or z-domain. In such a situation they appear as a simplified version of block diagrams.

4.3.1. Signal Flow Graphs Fundamentals.

The fundamental elements of a signal flow graph are the node and the branch.

A node, represented in a SFG by a small dot, expresses a variable denoted by a letter, for example xi. This variable can be a numerical variable, a time function or a complex function (like Laplace transforms or z-transforms). Usually we refer to a node by the variable associated to it. For example we can say "the node xi", meaning that to that node the variable xi is associated.

A branch connects two nodes, xi and xj for example. Any branch has associated two elements: the branch operator (gain or transmittance) and the direction. Branches are always unidirectional, reflecting cause-effect relationships. A branch is marked by a letter or a symbol, tij (or t^xj_xi) for example, placed near the arrow, which defines the coefficient, the gain or the operator; it is a time domain or complex domain operator. It contains the information how the node xi directly affects the node xj: through this branch the variable xi contributes with a term to the determination of the variable xj. An elementary branch, in short a branch, from the node xi to the node xj, defines the so-called elementary transmittance or elementary transmission function (ETF), which makes the correspondence from xi to xj. It is a curve line oriented by an arrow,
directly connecting the nodes xi and xj without passing through other nodes. The signal is considered to be zero on a direction opposite to that indicated by the arrow.

A signal flow graph (SFG) is a graphical representation of a set of simultaneous linear equations representing a system. It graphically displays the transmission of signals through the system.

Let

  xj = tij xi   or   xj = t^xj_xi {xi}                            (4.3.1)

be a linear mathematical relationship, where tij (t^xj_xi), called also the elementary transmittance or gain, makes the correspondence from xi to xj. We shall see how this mathematical relationship is represented in a SFG.
1. 4. 4.3. 4. 4. It is very important to note that the transmittance between the two nodes xi and xj should be interpreted as an unidirectional ( unilateral) operator (amplifier) xj xj with the gain tij ( t x i ) so that a contribution of tijxi (t x i x i ) is delivered at node xij.2.2. does not imply this relationship. 107 . Notice that branch directing from the node xi (input node of this branch) to the node xj (output node of this branch) expresses the dependence of xj upon xi but not the reverse. xi t ij xj xj =t ij xi xi t xji x xj xj = t xi {xi} xj Figure no. that x means it exists (is given) an independent relationship x i = t ji x j ( x i = t x ij {x j } ) then the set of two simultaneously equations xj x j = t ij x i x j = t x i {x i } or (4. t ij xi t xji x t ji xi = t ji xj xj =t ij xi xj xi txij x j xi = t xi xj xj x xj = txi xi j x Figure no. If in the same time the variable xj directly affects the variable xi .1.4.3.3.3.3. The signal flow graph representation of the equation (4.1) is shown in Fig.1) as xj xj = 1 xi or x j = [t x i ] −1 {x i } t ij but the SFG of Fig.3. Signal Flow Graphs Method (SFG).3. This is a so called elementary loop between two nodes.1. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS. We can algebraically write the relation (4. that means the signals travel along branches only in the direction described by the arrows of the branches. 4.2) x x i = t ji x j x i = t x ij {x j } is represented by a SFG as in Fig.3.3. 4.
4.3.2. Signal Flow Graphs Algebra.

In SFGs the following fundamental operations, rules and notions are defined:

4.3.2.1. Addition rule. The value of a variable in a SFG, denoted by a node, is equal to the sum of the signals entering that node.

Example. The variable x1 depends on three variables x2, x3, x4. This dependence is expressed as in (4.3.3) and drawn as in Fig. 4.3.3:

  x1 = t21 x2 + t31 x3 + t41 x4                                   (4.3.3)

Figure no. 4.3.3. Addition rule: the branches t21, t31, t41 entering the node x1.

4.3.2.2. Transmission rule. The value of a variable denoted by a node is transmitted on every branch leaving that node.

Example. Let us consider three variables x2, x3, x4 which directly depend upon the variable x1 through the relationships

  xk = t1k x1 ,  k = 2 : 4                                        (4.3.4)

These relations are represented by a SFG as in Fig. 4.3.4. If there is a risk of confusion, we denote the transmission functions with indexes formed with the node names (from x2 to x1: t^x1_x2).

Figure no. 4.3.4. Transmission rule: the branches t12, t13, t14 leaving the node x1.

4.3.2.3. Multiplication rule. A cascade (a series) of elementary branches can be replaced by (is equivalent with) a single branch whose transmission function is the product of the transmission functions of the components of this cascade. This rule is valid if and only if no other branches are connected to the intermediate nodes.

Example. A cascade of three elementary branches, as in the SFG from Fig. 4.3.5, is replaced by a unique equivalent branch T14, where

  T14 = t12 t23 t34 ,  i.e.  x4 = t34 (t23 (t12 (x1)))            (4.3.5)

Figure no. 4.3.5. Multiplication rule: the cascade x1 -> x2 -> x3 -> x4 replaced by the equivalent branch T14.
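The addition and multiplication rules can be demonstrated directly on the examples above. A sympy sketch (the tij are symbolic gains; the code is our own illustration, not from the text):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')
t21, t31, t41, t12, t23, t34 = sp.symbols('t21 t31 t41 t12 t23 t34')

# addition rule (4.3.3): a node value is the sum of entering signals
x1_val = t21*x2 + t31*x3 + t41*x4

# multiplication rule (4.3.5): cascade x1 -> x2 -> x3 -> x4
x4_val = t34 * (t23 * (t12 * x1))
T14 = t12 * t23 * t34
assert sp.expand(x4_val - T14 * x1) == 0   # equivalent single branch
```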
4.3.2.4. Parallel rule. A set of elementary branches leaving the same node and entering the same node (a parallel connection of elementary branches) can be replaced by (is equivalent with) a single branch whose transmission function is the sum of the transmission functions of the components of this parallel connection.

Example. Two parallel branches t'12, t''12 between the nodes x1 and x2, as in the SFG from Fig. 4.3.6, are equivalent with a single branch with the equivalent transmission function T12, where

  T12 = t'12 + t''12                                              (4.3.6)

Figure no. 4.3.6. Parallel rule: the branches t'12 and t''12 replaced by the equivalent branch T12.

4.3.2.5. Input node. An input node (source node) is a node that has only outgoing branches. The node x1 in Fig. 4.3.4 is an input node.

4.3.2.6. Output node. An output node (sink node) is a node that has only incoming branches. The node x1 in Fig. 4.3.3 is an output node. Any node, different of an input node, can be related to a new node, as an output node, through a unity gain branch. This new node is a "dummy" node.
4.3.2.7. Path. A path from one node to another node is a continuous unidirectional succession of branches (traversed in the same direction) along which no node is passed more than once.

4.3.2.8. Path gain. The gain of a path is the product of the gains of the branches encountered in traversing the path. For the path "k" its gain is denoted Ck. Sometimes we say "path" thinking of the gain of that path.

4.3.2.9. Loop. A loop is a path that originates and terminates at the same node in such a way that no node is passed more than once.

4.3.2.10. Loop gain. The loop gain is the product of the transmission functions encountered along that loop. For the loop "k" its gain is denoted Bk. Sometimes we say "loop" thinking of the gain of that loop.

4.3.2.11. Selfloop. A selfloop is a loop consisting of a single branch.
4.3.2.12. Equivalent transmission function. Any node xi of the graph, different of an input node, is expressed as a function of all the p input nodes uj, j = 1 : p, of that graph by a relation of the form

  xi = Σ(j=1:p) T^xi_uj uj                                        (4.3.7)

There is a difference between the elementary transmission function (elementary transmittance) t^xi_uk and the equivalent transmission function (equivalent transmittance) T^xi_uk. The latter is denoted by a capital letter (symbol) indexed by the two node symbols. The determination of the equivalent gains means to solve the set of equations where the undetermined variables are nodes in the graph and the determined variables are the inputs.

4.3.2.13. Disjunctive paths/loops. Two paths or two loops are said to be nontouching if they have no node in common.

Example 4.3.1. SFG of One Algebraic Equation. Let us consider one algebraical equation having 4 variables x1, x2, x3, x4,

  a1 x1 + a2 x2 + a3 x3 + a4 x4 = 0                               (4.3.8)

Suppose that we are interested in x1, so x1 will be the unknown (dependent) variable and x2, x3, x4 the free (independent) variables. To draw the SFG we have to express this variable, first, as a function of the others:

  x1 = −(a1^-1 a2) x2 − (a1^-1 a3) x3 − (a1^-1 a4) x4   <=>
  x1 = t'21 x2 + t'31 x3 + t'41 x4 ,  t'k1 = −a1^-1 ak            (4.3.9)

To transform (4.3.8) into (4.3.9) we had to divide the equation by a1, that means to multiply by the inverse a1^-1 of the element a1. If the transmittances are operators, such an operation could be undesirable or not possible.

To draw the graph, first 4 dots are marked by the names of all 4 variables, and then they are connected by branches according to the relation (4.3.9). The resulting SFG is illustrated in Fig. 4.3.7. As we can observe, the nodes x2, x3, x4 are input nodes and x1 is an output node.

Figure no. 4.3.7. SFG of the equation (4.3.8), with the branches t'21 = −a1^-1 a2, t'31 = −a1^-1 a3, t'41 = −a1^-1 a4 entering the node x1.
The equivalent transmission function (equivalent transmittance) between one input node uk and one node xi of the graph, for example T^xi_uk, is the operator that expresses that node xi with respect to the input node uk, taking into consideration all the graph connections.
(4.8)) and this illustrate how to escape of undesired selfloop. 3 4 D= = −23 (4. SFG of two Algebraic Equations.4.3. 4.a1 x1 t21= – a2 x2 x4 x3 ⇔ t 41= – a4 Figure no. t t ki = ki if t ii ≠ 1 (4.11) becomes 3x 1 + 4x 2 = u 1 − u 2 (4.2.3. 111 .3. 3x 1 + 4x 2 − x 3 + x 4 = 0 (4. We can observe that in the last SFG a selfloop (t11) appeared. k=2:4. replace all the elementary transmittances tki by t ki where.3. x 1 + (a 1 − 1)x 1 + a 2 x 2 + a 3 x 3 + a 4 x 4 = 0 ⇔ x 1 = (1 − a 1 )x 1 − a 2 x 2 − a 3 x 3 − a 4 x 4 x 1 = t 11 x 1 + t 21 x 2 + t 31 x 3 + t 41 x 4 Now the SFG looks like in Fig.3. computing the determinant of the system D.3.12) 2x 1 − 5x 2 = u 1 − 5u 2 We can solve this system by using the Kramer's method. Signal Flow Graphs Method (SFG). However the two graphs are equivalent (they express the same algebraic equation (4. To avoid this we consider a1=1+(a11).8.13) 2 −5 and obtaining. and t −a k −a t k1 = k1 = = a k = −a −1 ⋅ a k if t 11 = 1 − a 1 ≠ 1 ⇔ a 1 ≠ 0 1 1 1 − t 11 1 − (1 − a 1 ) Example 4.3.8. obtaining. t 31= – a3 t11= 1.11) 2x 1 − 5x 2 − x 3 + 5x 4 = 0 Suppose we are interested on x1 and x2 so they will be unknown (dependent) variables and x3. With this notations. Let us consider a system of two algebraic equations with numerical coefficients. 4.3. To eliminate a selfloop tii . 4. We shall denote them. x4 will be free (independent) variables.3. for convenience. as x3=u1 and x4=u2.3. GRAPHICAL REPRESENTATION AND MANIPULATION OF SYSTEMS.10) 1 − t ii In this example i=1.
and obtaining

  x1 = (D11 u1 + D12 u2)/D = ((−9) u1 + 25 u2)/(−23)
     = (9/23) u1 + (−25/23) u2                                    (4.3.14)
  x2 = (D21 u1 + D22 u2)/D = (u1 − 13 u2)/(−23)
     = (−1/23) u1 + (13/23) u2                                    (4.3.15)

The same results can be obtained if we express the system (4.3.12) in a matrix form,

  A x = B u ,  x = [x1 x2]^T ,  u = [u1 u2]^T                     (4.3.16)

where

  A = | 3   4 |      B = | 1  −1 |
      | 2  −5 | ,        | 1  −5 |

If det(A) ≠ 0, then

  x = A^-1 B u = H u = |  9/23  −25/23 | u                        (4.3.17)
                       | −1/23   13/23 |

  x1 = (9/23) u1 + (−25/23) u2 = T^x1_u1 u1 + T^x1_u2 u2
  x2 = (−1/23) u1 + (13/23) u2 = T^x2_u1 u1 + T^x2_u2 u2          (4.3.18)

If the coefficients of the equations contain letters (symbolic expressions) and the system order is higher than 3, the above methods cannot be implemented on computers, and to solve such a system is a very difficult task. Using the methods of SFG all these difficulties disappear: the system (4.3.11), or its equivalent (4.3.12), can be represented as a SFG and solved by using the SFG techniques.

To construct the graph, first of all the two dependent variables (x1, x2) are withdrawn from the system equations, irrespective which variable from which equation. Let this be

  x1 = (−2) x1 + (−4) x2 + (1) u1 + (−1) u2
  x2 = (−2) x1 + (6) x2 + (1) u1 + (−5) u2                        (4.3.19)

After marking the four dots, the future nodes, by the variables x1, x2, u1, u2, they are connected, according to the equations (4.3.19), using SFG rules; the SFG is obtained as in Fig. 4.3.9. In the next paragraph we shall see how we can reduce this graph to the simplest form, expressing the system solution (4.3.18).

Figure no. 4.3.9. SFG of the equations (4.3.19), with selfloops at the nodes x1 and x2 and the input nodes u1, u2.
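The numbers in (4.3.13) through (4.3.17) are easy to confirm with exact rational arithmetic. A sketch using only the Python standard library (the 2x2 inverse formula is standard; the code itself is our own check, not part of the text):

```python
from fractions import Fraction as F

# system (4.3.12) in matrix form: A x = B u
A = [[F(3), F(4)], [F(2), F(-5)]]
B = [[F(1), F(-1)], [F(1), F(-5)]]

detA = A[0][0]*A[1][1] - A[0][1]*A[1][0]
assert detA == -23                                    # determinant (4.3.13)

# H = A^(-1) B, computed with the 2x2 inverse formula
Ainv = [[ A[1][1]/detA, -A[0][1]/detA],
        [-A[1][0]/detA,  A[0][0]/detA]]
H = [[sum(Ainv[i][k]*B[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
assert H == [[F(9, 23), F(-25, 23)],
             [F(-1, 23), F(13, 23)]]                  # matches (4.3.17)
```

The entries of H are exactly the equivalent transmittances T^xi_uj of the graph in Fig. 4.3.9.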
4.3.3. Construction of Signal Flow Graphs.

A signal flow graph can be built starting from two sources of information:
1. a system of linear algebraic equations;
2. a block diagram.

4.3.3.1. Construction of SFG Starting from a System of Linear Algebraic Equations.

Suppose we have a system of n equations with m > n variables xk, k = 1 : m,

  Σ(j=1:m) aij xj = 0 ,  i = 1 : n                                (4.3.20)

Based on some reasons (physical description, causality principles, etc.) a number of n variables are selected to be the unknown (dependent) variables, xk, k = 1 : n. The other p = m − n variables are considered to be free (independent) variables. For the sake of convenience let us denote them

  uj = x(n+j) ,  1 ≤ j ≤ p = m − n                                (4.3.21)

In the case of dynamical systems they will represent the input variables. The corresponding coefficients from the i-th equation are denoted

  bij = −a(i,n+j) ,  1 ≤ j ≤ p                                    (4.3.22)

With these notations, (4.3.20) is written as

  Σ(j=1:n) aij xj = Σ(j=1:p) bij uj ,  i = 1 : n                  (4.3.23)

or in a matrix form,

  A x = B u ,  x = [x1 ... xn]^T ,  u = [u1 ... up]^T             (4.3.24)

If the n equations are linearly independent, the determinant D = det(A) ≠ 0 and a unique solution exists (considering that the vector u is given),

  x = A^-1 B u = H u ,  where H = {Hij}, 1 ≤ i ≤ n, 1 ≤ j ≤ p

whose i-th component is written as

  xi = Σ(j=1:p) Hij uj = Σ(j=1:p) T^xi_uj uj                      (4.3.25)

We mention all these forms of the solution to point out the difference of notations: in the matrix algebra approach, Hij is the component from row i and column j of the matrix H; in the SFG approach, T^xi_uj (or Tji) is the equivalent transmittance between the node uj and the node xi.
The next step in SFG construction is to express each dependent variable xi, i = 1 : n, as a function of all the dependent and independent variables from (4.3.23). It is not important from which equation the variable xi is withdrawn.
Denoting the elementary transmittances

  tji = −aij ,       i ≠ j
  tii = 1 − aii ,    i = j ,       i, j ∈ [1, n]                  (4.3.26)
  gji = bij ,        i ∈ [1, n], j ∈ [1, p]

the dependent variables are expressed as

  xi = Σ(j=1:n) tji xj + Σ(j=1:p) gji uj ,  i = 1 : n             (4.3.27)

Now we have to draw n + p = m dots and to mark them by the names of the variables involved in (4.3.27). Then the SFG is obtained connecting the nodes according to each equation from (4.3.27). The zero-gain branches are not drawn. The image of one equation (i) from (4.3.27) is illustrated in Fig. 4.3.10.

Figure no. 4.3.10. SFG image of one equation (i) from (4.3.27): the branches t1i, ..., tni (including the selfloop tii) and g1i, ..., gpi entering the node xi.

Example 4.3.3. SFG of Three Algebraic Equations. Let us consider a system of three algebraic equations with numerical coefficients,

  2 x1 − 3 x2       = 4 u1
  7 x1 − 2 x2 + x3  = 4 u2                                        (4.3.28)
  6 x1 − 5 x2       = 3 u1 + u2

Now we express each dependent variable upon the other variables. Let us express x1 from the first equation and x2 from the third; the variable x3 can be withdrawn only from the second equation:

  x1 = −x1 + 3 x2 + 4 u1
  x2 = −6 x1 + 6 x2 + 3 u1 + u2                                   (4.3.29)
  x3 = −7 x1 + 2 x2 + 4 u2

The corresponding SFG is depicted in Fig. 4.3.11. We could as well select x1 from the third equation and x2 from the first one, the same system being then represented by another SFG.

Figure no. 4.3.11. SFG of the equations (4.3.29), with selfloops at the nodes x1 and x2.
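That the node equations (4.3.29) carry the same information as the original system (4.3.28) can be verified by solving both forms. A sympy sketch (our own check, not from the text):

```python
import sympy as sp

x1, x2, x3, u1, u2 = sp.symbols('x1 x2 x3 u1 u2')

# original system (4.3.28)
system = [sp.Eq(2*x1 - 3*x2, 4*u1),
          sp.Eq(7*x1 - 2*x2 + x3, 4*u2),
          sp.Eq(6*x1 - 5*x2, 3*u1 + u2)]

# node equations (4.3.29): each dependent variable expressed upon the others
nodes = [sp.Eq(x1, -x1 + 3*x2 + 4*u1),
         sp.Eq(x2, -6*x1 + 6*x2 + 3*u1 + u2),
         sp.Eq(x3, -7*x1 + 2*x2 + 4*u2)]

# both forms have the same unique solution in x1, x2, x3
sol_sys = sp.solve(system, [x1, x2, x3], dict=True)[0]
sol_nod = sp.solve(nodes, [x1, x2, x3], dict=True)[0]
assert all(sp.simplify(sol_sys[v] - sol_nod[v]) == 0 for v in (x1, x2, x3))
```

The same check would succeed for any other valid choice of which variable is withdrawn from which equation, since all such SFGs represent the same system.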
4.3.3.2. Construction of SFG Starting from a Block Diagram.
As discussed, a block diagram graphically represents a system of equations. As a consequence, such a system can also be represented by a signal flow graph. This process can be performed directly, just looking at the block diagram, according to the following algorithm:
1. Attach additional variables to each take-off point and to the output of each summing operator. Together with the inputs and outputs they will become the nodes of the graph.
2. Draw all the nodes in positions similar to the positions of the corresponding variables in the block diagram. The input variables and the initial conditions (when the block diagram represents also the effects of the initial conditions, or it is a state diagram) will be associated to input nodes. The output variables will be associated to output nodes. It is recommended to avoid branch crossings.
3. For each node, different of an input node, establish, based on the block diagram, the relationship expressing the value of that node upon other nodes, and draw that relationship in the graph.

Example 4.3.2. SFG of a Multivariable System.
Let us consider the system from Example 4.2.2, whose block diagram is copied in Fig. 4.3.12. This system contains 13 variables and is described by 11 equations (4.3.7). The component H_ij(s) = Y_i(s)/U_j(s), U_k ≡ 0, k ≠ j, of the transfer matrix H(s) will be represented by the equivalent transmittance T^{Y_i}_{U_j} between the input node U_j and the node Y_i. We could draw the graph starting from this set of equations, but now we shall draw the graph just looking at the block diagram. This block diagram is already prepared with the additional variables a_0, ..., a_8 we utilised in Example 4.2.2. The resulting SFG is shown in Fig. 4.3.13.

Figure no. 4.3.12. (Block diagram of the multivariable system: summing operators S1, ..., S4, blocks H1, ..., H7, inputs U1, U2, outputs Y1, Y2, additional variables a_0, ..., a_8.)

Figure no. 4.3.13. (The corresponding SFG: nodes U1, U2, a_0, ..., a_8, Y1, Y2, connected by branches of gains H1, ..., H7 and ±1.)
4.4. Systems Reduction Using Signal Flow Graphs.
To reduce a system to one of its canonical forms: input→output, input+initial_state→output or input+initial_state→state, means to solve the system of equations describing that system. When the system is represented by a signal flow graph, this process is called SFG reduction, or solving the SFG, which means to determine all or only the required equivalent transmittances between input nodes and output nodes.
As we mentioned, any node of the graph x_i, different of an input node, whose evaluation we are interested in, can be related through a unity gain branch to a new node,
    y_k = x_{i_k} ,  k = 1:r ,  i ∈ {i_1, ..., i_r} ,   (4.4.1)
represented in the graph by additional r branches with unity gains. This new node is a "dummy" output node.
In a reduced graph, any output variable y_k, k = 1:r, is expressed by the so-called canonical relationship of the form
    y_k = Σ_{j=1}^{p} H_kj·u_j = Σ_{j=1}^{p} T^{y_k}_{u_j}·u_j ,   (4.4.2)
where by the symbols H_kj, T^{y_k}_{u_j} we denote the equivalent transmission functions (equivalent transmittances). A reduced graph is represented as in Fig. 4.4.2.
Example. Suppose that in Example 4.3.1 we are interested to evaluate only the variables x_{i_1} = x_1 and x_{i_2} = x_3, that means r = 2. We consider two new output nodes y_1 = x_1, y_2 = x_3, attached through unity gain branches to the graph from Fig. 4.3.11, as in Fig. 4.4.1.

Figure no. 4.4.1. (The graph of Fig. 4.3.11 completed with the dummy output nodes y_1 = x_1 and y_2 = x_3 through unity gain branches.)

Figure no. 4.4.2. (A reduced graph: direct branches T^{y_k}_{u_1}, ..., T^{y_k}_{u_p} from the input nodes u_1, ..., u_p to the output node y_k.)
4.4.1. SFG Reduction by Elementary Transformations.
Two methods can be utilised for SFG reduction:
1. SFG reduction by elementary transformations;
2. SFG reduction using Mason's formula.
Beside the transformations based on the signal flow graphs algebra presented in Ch. 4.3.2, two other operations can be utilised: elimination of a self-loop and elimination of a node. These operations can be utilised also only to simplify a graph, even if afterwards we shall apply Mason's formula.

Elimination of a Self-loop.
A self-loop appears in a graph because a node x_i is represented by an equation of the form (4.3.27),
    x_i = Σ_{k=1}^{n} t_ki·x_k + Σ_{j=1}^{p} g_ji·u_j ,   (4.4.3)
where x_i depends upon x_i itself through t_ii. To eliminate the self-loop determined by this t_ii, where t_ii ≠ 1, we express (4.4.3) as
    x_i − t_ii·x_i = Σ_{k=1,k≠i}^{n} t_ki·x_k + Σ_{j=1}^{p} g_ji·u_j  ⇔
    x_i = Σ_{k=1,k≠i}^{n} [t_ki/(1−t_ii)]·x_k + Σ_{j=1}^{p} [g_ji/(1−t_ii)]·u_j .   (4.4.4)
Now the following rule can be formulated: to eliminate an undesired self-loop t_ii attached to a node x_i, replace the nonzero elementary transmittances t_ki, ∀k, and g_ji, ∀j, of all the elementary branches entering the node x_i, by
    t'_ki = t_ki/(1−t_ii) ,  k = 1:n, k ≠ i ;  g'_ji = g_ji/(1−t_ii) ,  j = 1:p .   (4.4.5)
Example. Let us consider the graph from Example 4.3.1, as completed in Fig. 4.4.1. We want to eliminate the self-loop attached to x_1, whose transmittance is t_11 = −1. The entering branches (in short, transmittances) to x_1 are t_21 = 3 and g_11 = 4. The new transmittances are
    t'_21 = 3/(1−(−1)) = 3/2 ,  g'_11 = 4/(1−(−1)) = 4/2 = 2 .
The other transmittances are not modified.
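The self-loop rule (4.4.5) can be sketched in Python. Below is a minimal illustration (not from the text) on a dictionary representation of the graph of Fig. 4.4.1, mapping each source node to its outgoing branches:

```python
from fractions import Fraction as F

def eliminate_self_loop(graph, node):
    """Drop the self-loop at `node` and rescale every branch entering it
    by 1/(1 - t_ii), as in rule (4.4.5). `graph` maps source -> {target: gain}."""
    t_ii = graph.get(node, {}).pop(node, F(0))
    assert t_ii != 1, "the rule requires t_ii != 1"
    for branches in graph.values():
        if node in branches:
            branches[node] /= (1 - t_ii)

# Branches of the SFG of Example 4.3.1 (Fig. 4.4.1)
g = {
    'x1': {'x1': F(-1), 'x2': F(-6), 'x3': F(-7), 'y1': F(1)},
    'x2': {'x2': F(6), 'x1': F(3), 'x3': F(2)},
    'u1': {'x1': F(4), 'x2': F(3)},
    'u2': {'x2': F(1), 'x3': F(4)},
    'x3': {'y2': F(1)},
}
eliminate_self_loop(g, 'x1')
print(g['x2']['x1'], g['u1']['x1'])  # the rescaled entering branches 3/2 and 2
```

Only the branches entering x_1 change; the self-loop disappears from the dictionary, matching the graph of Fig. 4.4.3.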
After this elimination the graph from Fig. 4.4.1 looks like in Fig. 4.4.3. Now, in the above graph, we want to eliminate the self-loop attached to x_2, whose transmittance is t_22 = 6. The entering branches to x_2 are t_12 = −6, g_12 = 3 and g_22 = 1. The new transmittances are
    t'_12 = −6/(1−6) = 6/5 ,  g'_12 = 3/(1−6) = −3/5 ,  g'_22 = 1/(1−6) = −1/5 .
After this new elimination the graph from Fig. 4.4.3 looks like in Fig. 4.4.4. This graph could be obtained directly if we would express the variables from the system (4.3.28) by division: the first equation divided by 2 and the third equation divided by −5.

Figure no. 4.4.3. (The graph of Fig. 4.4.1 after the elimination of the self-loop at x_1: g_11 = 2, t_21 = 3/2.)

Figure no. 4.4.4. (The graph after the elimination of both self-loops: g_11 = 2, t_21 = 3/2, t_12 = 6/5, g_12 = −3/5, g_22 = −1/5.)

Elimination of a Node.
Eliminating a node x_i from a graph is equivalent to the elimination of the variable x_i from the system of equations. This is performed by substituting the expression of the variable x_i, obtained from one equation of the system (4.3.20) or (4.3.23), into all the other equations of the system. Only dependent nodes can be eliminated. That means the node to be eliminated must not have a self-loop: the expression of x_i must be free of self-loop. In what follows, for the sake of convenience, we do not explicitly denote the transmittances g_ji coming from input nodes; they are treated as ordinary branches. The node x_i is expressed, as a function of all the other variables except x_i, by
    x_i = Σ_{j=1,j≠i}^{m} t_ji·x_j .   (4.4.6)
One node x_k, different of an input node (that means 1 ≤ k ≤ n) and different of our x_i (k ≠ i), is expressed, from one equation of the system, as
    x_k = Σ_{j=1,j≠i}^{m} t_jk·x_j + t_ik·x_i .   (4.4.7)
Substituting (4.4.6) into (4.4.7) we obtain
    x_k = Σ_{j=1,j≠i}^{m} t_jk·x_j + t_ik·[Σ_{j=1,j≠i}^{m} t_ji·x_j] = Σ_{j=1,j≠i}^{m} [t_jk + t_ji·t_ik]·x_j .
Denoting
    t'_jk = t_jk + t_ji·t_ik ,  ∀j ∈ [1, m], j ≠ i ,  ∀k ∈ [1, n], k ≠ i ,   (4.4.8)
the result is
    x_k = Σ_{j=1,j≠i}^{m} t'_jk·x_j ,  ∀k ∈ [1, n], k ≠ i .   (4.4.9)
We can also write
    t'_jk = t_jk + t_ji·t_ik , if t_ji·t_ik ≠ 0 ;  t'_jk = t_jk , if t_ji = 0 or t_ik = 0 ,   (4.4.10)
which is easier to interpret as the node elimination rule (as in Fig. 4.4.5): to eliminate a non-input node x_i, replace all the elementary transmittances between any non-output node x_j and any non-input node x_k by a new transmittance t'_jk = t_jk + t_ji·t_ik if the gain of the path x_j → x_i → x_k is different of zero, and then erase all the branches related to the node x_i.

Figure no. 4.4.5. (Node elimination: the two-branch path x_j → x_i → x_k of gain t_ji·t_ik is replaced by the direct branch t'_jk = t_jk + t_ji·t_ik.)

It is possible that initially t_jk = 0 (no branch directly connects x_j to x_k) and, after the elimination of x_i, a branch appears between x_j and x_k with the transmittance t'_jk = 0 + t_ji·t_ik, even if there was no direct branch between x_j and x_k. We must inspect all combinations (x_j, x_k) to detect a nonzero path (constituted only by two branches) passing through x_i. Note that if two selections of the pairs (x_j, x_k) are, for example, (a, b) and (b, a), then they must be inspected independently. To avoid some omissions, it is recommended to note in two separate lists which nodes can appear as x_j nodes and which as x_k nodes. In the category of x_k nodes we must include all the "dummy" output nodes and, of course, in the category of x_j nodes we must include all input nodes. Sometimes, for writing convenience, we shall use the notations t_jk = t(x_j, x_k) and T_jk = T(x_j, x_k).
Example. Let us consider the graph from Fig. 4.4.4, related to Example 4.3.1. We want to eliminate the node x_2. As x_j type nodes can appear u_1, u_2, x_1 (the sources of the branches entering x_2); as x_k type nodes can appear x_1, x_3 (the targets of the branches leaving x_2), besides the dummy output nodes y_1, y_2. Nonzero paths (constituted only by two branches) passing through x_2, that will affect the previous elementary transmittances (even if they are zero), are observed for the pairs (u_1, x_1), (u_2, x_1), (x_1, x_1), (u_1, x_3), (u_2, x_3), (x_1, x_3). First we replace the corresponding transmittances,
    t'(u_1, x_1) = 2 + (−3/5)(3/2) = 11/10
    t'(u_2, x_1) = 0 + (−1/5)(3/2) = −3/10
    t'(x_1, x_1) = 0 + (6/5)(3/2) = 9/5  (a new self-loop at x_1)
    t'(u_1, x_3) = 0 + (−3/5)(2) = −6/5
    t'(u_2, x_3) = 4 + (−1/5)(2) = 18/5
    t'(x_1, x_3) = −7 + (6/5)(2) = −23/5
and then erase all the branches related to x_2. The node-x_2 reduced graph is illustrated in Fig. 4.4.6.

Figure no. 4.4.6. (The graph after the elimination of the node x_2, containing the new self-loop 9/5 at x_1.)

Algorithm for SFG Reduction by Elementary Transformations.
The reduction of a SFG by elementary transformations is obtained performing the following steps:
1. Eliminate the parallel branches using the parallel rule.
2. Eliminate the cascades of branches using the multiplication rule.
3. Eliminate the self-loops.
4. Eliminate the intermediate nodes.
Practically, the complete reduction of a SFG up to the canonical form is a rather tedious task. For this reason, steps of the above algorithm are often utilised only to simplify the graph before applying Mason's formula.
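The whole chain of elementary transformations above can be sketched in Python. This is a minimal illustration (not from the text) starting from the graph of Fig. 4.4.4, with both self-loops already eliminated; the final dictionary holds the equivalent transmittances from u_1, u_2 to y_1 = x_1 and y_2 = x_3:

```python
from fractions import Fraction as F

def eliminate_self_loop(g, node):
    """Drop the self-loop at `node`; rescale every branch entering it (rule 4.4.5)."""
    t_ii = g.get(node, {}).pop(node, F(0))
    for branches in g.values():
        if node in branches:
            branches[node] /= (1 - t_ii)

def eliminate_node(g, node):
    """Apply t'_jk = t_jk + t_ji*t_ik for every two-branch path j -> node -> k (rule 4.4.8)."""
    outgoing = g.pop(node, {})
    for branches in g.values():
        t_ji = branches.pop(node, None)
        if t_ji is not None:
            for k, t_ik in outgoing.items():
                branches[k] = branches.get(k, F(0)) + t_ji * t_ik

# Graph of Fig. 4.4.4: source -> {target: gain}
g = {
    'u1': {'x1': F(2), 'x2': F(-3, 5)},
    'u2': {'x2': F(-1, 5), 'x3': F(4)},
    'x1': {'x2': F(6, 5), 'x3': F(-7), 'y1': F(1)},
    'x2': {'x1': F(3, 2), 'x3': F(2)},
    'x3': {'y2': F(1)},
}
eliminate_node(g, 'x2')        # creates the self-loop 9/5 at x1
eliminate_self_loop(g, 'x1')
eliminate_node(g, 'x1')
eliminate_node(g, 'x3')
print(g['u1'], g['u2'])
```

The result is y_1 = (−11/8)u_1 + (3/8)u_2 and y_2 = (41/8)u_1 + (15/8)u_2, the same values obtained by solving the system (4.3.28) algebraically.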
4.4.2. SFG Reduction by Mason's General Formula.
The difficulties encountered in the algebraic manipulations of a graph are eliminated using the so-called Mason's general formula. The value of an output variable y_i in a graph with p input nodes, j = 1:p, is expressed by the relation
    y_i = Σ_{j=1}^{p} T^{y_i}_{u_j}·u_j .   (4.4.11)
Mason's general formula determines the equivalent transmittance T^{y_i}_{u_j} = T(u_j, y_i) between an input node u_j and an output node y_i:
    T^{y_i}_{u_j} = [Σ_q C_q·D_q] / D ,   (4.4.12)
where:
D is the determinant of the graph; it equals the determinant of the algebraic system of equations multiplied by ±1. It is given by the formula
    D = 1 + Σ_{k=1}^{m} (−1)^k·S_pk ,   (4.4.13)
where S_pk is the sum of all possible combinations of k products of nontouching loops(1). If in the graph there is a set B of n loops, B = {B_1, ..., B_n} (by B_k we understand and also denote the gain of the loop B_k), then:
    S_p1 = B_1 + B_2 + ... + B_n
    S_p2 = B_1B_2 + B_1B_3 + ... + B_{n−1}B_n   (4.4.14)
    ...
    S_pn = B_1B_2...B_n .
Any term of S_pk is different of zero if and only if any two loops B_i, B_j entering that term are nontouching one to each other, that means they have no common node. We remember that the gain of a loop equals the product of the transmittances of the branches defining that loop.
C_q is the path number q from the input node u_j to the output node y_i, denoted also C_q = C_q(u_j, y_i). By C_q we understand and also denote the gain of the path C_q; the gain of a path equals the product of the transmittances of the branches defining that path. The index q is utilised for all the graph's paths from u_j to y_i.
D_q is computed just like D, relation (4.4.13), but taking into consideration for its S_pk only the loops nontouching with the path C_q. The set of these loops is denoted by B_q. If B_q = {∅}, then D_q = 1.
If we are interested about some intermediate nodes x_i, then we must construct "dummy" output nodes, connecting these intermediate nodes to output nodes through additional branches of unity transmittance which perform y_i = x_i.
---
(1) The sum of the products of k nontouching (disjoint) loops.
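Relations (4.4.12)-(4.4.14) can be sketched directly in code. Below is a minimal illustration (not from the text); it assumes each loop and each path is given as a pair (set of touched nodes, gain), and it is applied to the graph of Fig. 4.4.1:

```python
from fractions import Fraction as F
from itertools import combinations

def determinant(loops):
    """Graph determinant (4.4.13): loops is a list of (node_set, gain) pairs."""
    D = F(1)
    for k in range(1, len(loops) + 1):
        s_pk = F(0)
        for combo in combinations(loops, k):
            sets = [c[0] for c in combo]
            # keep only combinations of pairwise nontouching loops
            if all(a.isdisjoint(b) for a, b in combinations(sets, 2)):
                prod = F(1)
                for _, gain in combo:
                    prod *= gain
                s_pk += prod
        D += (-1) ** k * s_pk
    return D

def transmittance(paths, loops):
    """Mason's formula (4.4.12): paths is a list of (node_set, gain) pairs."""
    D = determinant(loops)
    num = F(0)
    for nodes, gain in paths:
        nontouching = [l for l in loops if l[0].isdisjoint(nodes)]
        num += gain * determinant(nontouching)   # gain * D_q
    return num / D

# Loops of the graph in Fig. 4.4.1 (Example 4.3.1)
loops = [({'x1'}, F(-1)),          # B1: self-loop at x1
         ({'x2'}, F(6)),           # B2: self-loop at x2
         ({'x1', 'x2'}, F(-18))]   # B3: x1 -> x2 -> x1, gain (-6)*3

# Paths u1 -> y1: u1->x1->y1 (gain 4) and u1->x2->x1->y1 (gain 3*3)
paths_u1_y1 = [({'u1', 'x1', 'y1'}, F(4)), ({'u1', 'x2', 'x1', 'y1'}, F(9))]
print(determinant(loops), transmittance(paths_u1_y1, loops))
```

The output reproduces D = 8 and T^{y_1}_{u_1} = −11/8, the values computed by hand in the next example.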
Example. Let us consider the graph from Example 4.3.1, Fig. 4.3.11. Suppose we are interested about all the dependent variables x_1, x_2, x_3. To apply Mason's general formula, first we must define and draw in the graph the corresponding "dummy" output nodes which get out the variables of interest. They are defined by the relations y_1 = x_1, y_2 = x_2, y_3 = x_3, as redrawn in Fig. 4.4.7.

Figure no. 4.4.7. (The graph of Fig. 4.3.11 completed with the dummy output nodes y_1 = x_1, y_2 = x_2, y_3 = x_3; the loops B_1, B_2, B_3 are marked.)

We identify only three loops (n = 3), B = {B_1, B_2, B_3}:
    B_1 = −1 (self-loop at x_1) ,  B_2 = 6 (self-loop at x_2) ,  B_3 = (3)(−6) = −18 (loop x_1 → x_2 → x_1) .
B_1 and B_2 are nontouching loops, but (B_1, B_3) and (B_2, B_3) are touching loops. Now we compute
    S_p1 = B_1 + B_2 + B_3 = (−1) + (6) + (−18) = −13
    S_p2 = B_1·B_2 = (−1)(6) = −6 .
The other two terms B_2B_3, B_3B_1 do not appear in S_p2 because the loops (B_2, B_3) and (B_3, B_1) are touching. Also S_p3 = 0 because there do not exist three loops nontouching each to other. So,
    D = 1 + Σ_{k=1}^{2} (−1)^k·S_pk = 1 − S_p1 + S_p2 = 1 − (−13) + (−6) = 8 .
Evaluation of T^{y_1}_{u_1} = T(u_1, y_1): We identify two paths from u_1 to y_1:
    C_1 = C_1(u_1, y_1) = (4)(1) = 4. Because only B_2 is nontouching with C_1, D_1 = 1 − B_2 = 1 − 6 = −5.
    C_2 = C_2(u_1, y_1) = (3)(3)(1) = 9. Because no loop is nontouching with C_2, D_2 = 1.
We obtain T^{y_1}_{u_1} = (C_1D_1 + C_2D_2)/D = (4·(−5) + 9·1)/8 = −11/8.
Evaluation of T^{y_1}_{u_2} = T(u_2, y_1): We identify only one path from u_2 to y_1:
    C_3 = C_3(u_2, y_1) = (1)(3)(1) = 3. Because no loop is nontouching with C_3, D_3 = 1.
We obtain T^{y_1}_{u_2} = C_3D_3/D = 3/8.
Evaluation of T^{y_2}_{u_1} = T(u_1, y_2): We identify two paths from u_1 to y_2:
    C_4 = C_4(u_1, y_2) = (3)(1) = 3 ,  D_4 = 1 − B_1 = 1 − (−1) = 2 ;
    C_5 = C_5(u_1, y_2) = (4)(−6)(1) = −24 ,  D_5 = 1 .
We obtain T^{y_2}_{u_1} = (C_4D_4 + C_5D_5)/D = (3·2 + (−24)·1)/8 = −18/8.
Evaluation of T^{y_2}_{u_2} = T(u_2, y_2): We identify one path from u_2 to y_2:
    C_6 = C_6(u_2, y_2) = (1)(1) = 1 ,  D_6 = 1 − B_1 = 2 .
We obtain T^{y_2}_{u_2} = C_6D_6/D = (1·2)/8 = 2/8.
Evaluation of T^{y_3}_{u_1} = T(u_1, y_3): We identify four paths from u_1 to y_3:
    C_7 = (3)(2)(1) = 6 ,  D_7 = 1 − B_1 = 2 ;
    C_8 = (3)(3)(−7)(1) = −63 ,  D_8 = 1 ;
    C_9 = (4)(−6)(2)(1) = −48 ,  D_9 = 1 ;
    C_10 = (4)(−7)(1) = −28 ,  D_10 = 1 − B_2 = −5 .
We obtain
    T^{y_3}_{u_1} = (C_7D_7 + C_8D_8 + C_9D_9 + C_10D_10)/D = (6·2 + (−63)·1 + (−48)·1 + (−28)·(−5))/8 = 41/8 .
Evaluation of T^{y_3}_{u_2} = T(u_2, y_3): We identify three paths from u_2 to y_3:
    C_11 = (4)(1) = 4; all three loops are nontouching with C_11, so D_11 = D = 8 ;
    C_12 = (1)(2)(1) = 2 ,  D_12 = 1 − B_1 = 2 ;
    C_13 = (1)(3)(−7)(1) = −21 ,  D_13 = 1 .
We obtain
    T^{y_3}_{u_2} = (C_11D_11 + C_12D_12 + C_13D_13)/D = (4·8 + 2·2 + (−21)·1)/8 = 15/8 .
Collecting the results, we write:
    y_1 = x_1 = T^{y_1}_{u_1}·u_1 + T^{y_1}_{u_2}·u_2 = (−11/8)·u_1 + (3/8)·u_2
    y_2 = x_2 = (−18/8)·u_1 + (2/8)·u_2
    y_3 = x_3 = (41/8)·u_1 + (15/8)·u_2 .
The same results are obtained, easier, by pure algebraic methods, solving the system (4.3.28). The goal here was only to illustrate how to apply Mason's general formula, which shows its advantages for symbolic systems, where there are not so many paths as in this example.

Example 4.4.2. Reduction by Mason's Formula of a Multivariable System.
We saw in Example 4.2.2 how a multivariable system can be reduced using block diagram transformations. Its SFG has been drawn in Fig. 4.3.13, which we copy now in Fig. 4.4.8.

Figure no. 4.4.8. (The SFG of the multivariable system of Example 4.2.2, with the loops B_1, B_2, B_3 marked.)
Three loops are identified (n = 3), B = {B_1, B_2, B_3}:
    B_1 = H_2H_3(−H_5) ;  B_2 = −H_7H_1H_2H_3H_6 ;  B_3 = H_1H_2H_4(−H_7) .
All of them are touching loops, so now we compute
    S_p1 = B_1 + B_2 + B_3 ;  S_p2 = 0
    D = 1 − S_p1 = 1 − [−H_2H_3H_5 − H_1H_2H_3H_6H_7 − H_1H_2H_4H_7]
    D = 1 + H_2H_3H_5 + H_1H_2H_3H_6H_7 + H_1H_2H_4H_7 .
It can be observed that in 3 rows we determined the graph determinant, which is the same with the common denominator of the transfer matrix from relation (4.2.22).
Evaluation of T^{y_1}_{u_1} = T(u_1, y_1) = H_11(s): We identify two paths from u_1 to y_1:
    C_1 = H_1H_2H_3H_6. Because no loop is nontouching with C_1, D_1 = 1.
    C_2 = H_1H_2H_4. Because no loop is nontouching with C_2, D_2 = 1.
We obtain
    H_11 = (C_1D_1 + C_2D_2)/D = (H_1H_2H_3H_6 + H_1H_2H_4) / (1 + H_2H_3H_5 + H_1H_2H_3H_6H_7 + H_1H_2H_4H_7) .
Evaluation of T^{y_1}_{u_2} = T(u_2, y_1) = H_12(s): We identify two paths from u_2 to y_1:
    C_3 = H_4 ,  D_3 = 1 ;  C_4 = H_3H_6 ,  D_4 = 1 .
We obtain
    H_12 = (C_3D_3 + C_4D_4)/D = (H_4 + H_3H_6) / (1 + H_2H_3H_5 + H_1H_2H_3H_6H_7 + H_1H_2H_4H_7) .
Evaluation of T^{y_2}_{u_1} = T(u_1, y_2) = H_21(s): We identify one path from u_1 to y_2:
    C_5 = H_1 ,  D_5 = 1 .
We obtain
    H_21 = C_5D_5/D = H_1 / (1 + H_2H_3H_5 + H_1H_2H_3H_6H_7 + H_1H_2H_4H_7) .
Evaluation of T^{y_2}_{u_2} = T(u_2, y_2) = H_22(s): We identify three paths from u_2 to y_2:
    C_6 = H_4(−H_7)H_1 ,  D_6 = 1 ;  C_7 = H_3(−H_5) ,  D_7 = 1 ;  C_8 = H_3H_6(−H_7)H_1 ,  D_8 = 1 .
We obtain
    H_22 = (C_6D_6 + C_7D_7 + C_8D_8)/D = (−H_4H_7H_1 − H_3H_5 − H_3H_6H_7H_1) / (1 + H_2H_3H_5 + H_1H_2H_3H_6H_7 + H_1H_2H_4H_7) .
The results are identical to those obtained by block diagram transformations, relations (4.2.18)...(4.2.21), but with much less effort.
5. SYSTEM REALISATION BY STATE EQUATIONS.
5.1. Problem Statement.
Let us suppose a SISO system, p = 1, r = 1. If a system
    S = SS(A, b, c, d, x)   (5.1.1)
is given by state equations, then the transfer function H(s) can be uniquely determined as
    H(s) = c^T·Φ(s)·b + d = M(s)/L(s) ,  Φ(s) = (sI − A)^(−1) ,   (5.1.2)
in short, S = TF(M, L). Having the transfer function of a system, several forms of state equations can be obtained, that means we have several system realisations by state equations:
    S = TF(M, L) ⇒ S = SS(A, b, c, d, x)   (5.1.3)
is one state realisation, but
    S = TF(M, L) ⇒ S = SS(Ā, b̄, c̄, d̄, x̄)   (5.1.4)
is another state realisation, with the same transfer function,
    H̄(s) = c̄^T·Φ̄(s)·b̄ + d̄ = H(s) ,  Φ̄(s) = (sI − Ā)^(−1) .   (5.1.5)
The two state realisations are equivalent, that means
    ∃T, x̄ = Tx, det T ≠ 0 :  Ā = TAT^(−1) ,  b̄ = Tb ,  c̄^T = c^T·T^(−1) ,  d̄ = d .   (5.1.6)
The two transfer functions H(s) and H̄(s) are identical:
    H̄(s) = c̄^T(sI − Ā)^(−1)b̄ + d̄ = c^T·T^(−1)(sI − TAT^(−1))^(−1)·Tb + d = c^T·T^(−1)·T(sI − A)^(−1)·T^(−1)·Tb + d = H(s) .
(We remember that (AB)^(−1) = B^(−1)·A^(−1).)
Suppose that m = degree(M) and n = degree(L), m ≤ n. Because L(s) has n+1 coefficients and M(s) has m+1 coefficients, H(s) has only n+m+1 free parameters, due to the ratio. A state realisation has n^2 (from A) + n (from b) + n (from c) + 1 (from d) = n^2+2n+1 parameters, so n^2+2n+1 > n+m+1, which conducts to an undetermined system of n^2+2n+1 unknown variables from n+m+1 equations. This explains why there are several (an infinity of) state realisations for the same transfer function.
There are several methods to determine the state equations starting from the transfer function. For multivariable systems this procedure is more complicated, but it is possible too. Some state realisations have special forms with a minimum number of free parameters; they are called canonical forms (economical forms). Some canonical forms are important because they put into evidence some system properties, like controllability and observability.
Mainly there are two structures for canonical forms:
 - controllable structures (ID: integral-differential structures), and
 - observable structures (DI: differential-integral structures).
Now we recall the controllability and observability criteria for LTI systems.

5.1.1. Controllability Criterion.
The MIMO LTI system S,
    S = SS(A, B, C, D) ,   (5.1.7)
is completely controllable, or we say that the pair (A, B) is a controllable pair, if and only if the so-called controllability matrix P,
    P = [B  AB  ...  A^(n−1)B] ,   (5.1.8)
has maximum rank. P is an (n × np) matrix; the maximum rank of P is n. Sometimes we can check this condition by computing
    Δ_P = det(P·P^T) ,   (5.1.9)
which must be different of zero. For a SISO LTI system S = SS(A, b, c, d), the matrix P is a square n × n matrix,
    P = [b  Ab  ...  A^(n−1)b] .   (5.1.10)

5.1.2. Observability Criterion.
The MIMO LTI system S,
    S = SS(A, B, C, D) ,   (5.1.11)
is completely observable, or the pair (A, C) is an observable pair, if and only if the so-called observability matrix Q,
    Q = [ C
          CA
          ...
          CA^(n−1) ] ,   (5.1.12)
has maximum rank. Q is an (rn × n) matrix; the maximum rank of Q is n. Sometimes we can check this condition by computing
    Δ_Q = det(Q^T·Q) ,   (5.1.13)
which must be different of zero. For a SISO LTI system
    S = SS(A, b, c, d)   (5.1.14)
the matrix Q is a square n × n matrix,
    Q = [ c^T
          c^T·A
          ...
          c^T·A^(n−1) ] .   (5.1.15)
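For a second order SISO system the two criteria reduce to two determinant tests. A minimal Python sketch (not from the text; the numeric matrices are those of the companion realisation of H(s) = (s^2+5s+6)/(s^2+3s+2) used in Example 5.2.1):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def controllability_det(A, b):
    """det [b  Ab] for a second order SISO system, relation (5.1.10)."""
    Ab = mat_vec(A, b)
    return det2([[b[0], Ab[0]], [b[1], Ab[1]]])

def observability_det(A, c):
    """det [c^T ; c^T A] for a second order SISO system, relation (5.1.15)."""
    cA = [sum(c[i] * A[i][j] for i in range(2)) for j in range(2)]
    return det2([c, cA])

A = [[0, 1], [-2, -3]]
b = [0, 1]
c = [4, 2]
print(controllability_det(A, b))  # nonzero -> controllable
print(observability_det(A, c))    # zero -> not observable
```

The first determinant is −1 ≠ 0 (controllable pair) and the second is 0 (unobservable pair), anticipating the discussion of Example 5.2.1.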
5.2. First Type ID Canonical Form.
Let H(s) be a transfer function of the form
    H(s) = M(s)/L(s) = (b_n·s^n + ... + b_0)/(a_n·s^n + ... + a_0) = Y(s)/U(s) ,  a_n ≠ 0 .   (5.2.1)
The state realisation of this transfer function, as ID canonical form of the first type, can be obtained by the so-called direct programming method, according to the following algorithm:
1. Divide by s^n the nominator and the denominator of the transfer function,
    H(s) = (b_n + b_{n−1}s^(−1) + ... + b_1s^(−(n−1)) + b_0s^(−n)) / (a_n + a_{n−1}s^(−1) + ... + a_1s^(−(n−1)) + a_0s^(−n)) = Y(s)/U(s) ,  a_n ≠ 0 ,   (5.2.2)
and express the output as
    Y(s) = M(s) · [1/L(s)] · U(s) ,   (5.2.3)
where M(s) acts like a D (derivative) operator and 1/L(s) like an I (integral) operator:
    Y(s) = (b_n + b_{n−1}s^(−1) + ... + b_0s^(−n)) · U(s)/(a_n + a_{n−1}s^(−1) + ... + a_1s^(−(n−1)) + a_0s^(−n)) .   (5.2.4)
2. Denote
    W(s) = U(s)/(a_n + a_{n−1}s^(−1) + ... + a_1s^(−(n−1)) + a_0s^(−n))   (5.2.5)
and express W(s) as a function of U(s) and the products [s^(−k)W(s)], k = 1:n,
    a_n[W(s)] + a_{n−1}[s^(−1)W(s)] + ... + a_1[s^(−(n−1))W(s)] + a_0[s^(−n)W(s)] = U(s)
    W(s) = −(a_{n−1}/a_n)[s^(−1)W(s)] − (a_{n−2}/a_n)[s^(−2)W(s)] − ... − (a_0/a_n)[s^(−n)W(s)] + (1/a_n)U(s) .   (5.2.6)
3. Denote the products [s^(−k)W(s)], k = 1:n, as n new variables,
    X_k(s) = [s^(−(n−k+1))W(s)] ,  k = 1:n ,   (5.2.7)
that is
    X_1(s) = s^(−n)W(s) ,  X_2(s) = s^(−(n−1))W(s) , ... ,  X_n(s) = s^(−1)W(s) ,
so the expression of W(s) from (5.2.6) is now
    W(s) = −(a_{n−1}/a_n)·X_n(s) − (a_{n−2}/a_n)·X_{n−1}(s) − ... − (a_0/a_n)·X_1(s) + (1/a_n)·U(s) .   (5.2.8)
4. Express the output Y(s) from (5.2.3) and (5.2.4) as
    Y(s) = b_n·W(s) + b_{n−1}[s^(−1)W(s)] + ... + b_0[s^(−n)W(s)] .   (5.2.9)
5. Substitute W(s) from (5.2.8) into (5.2.9), where it multiplies b_n, getting
    Y(s) = (b_{n−1} − b_n·a_{n−1}/a_n)[s^(−1)W(s)] + (b_{n−2} − b_n·a_{n−2}/a_n)[s^(−2)W(s)] + ... + (b_0 − b_n·a_0/a_n)[s^(−n)W(s)] + (b_n/a_n)·U(s) .   (5.2.10)
Denote
    c_k = b_{k−1} − b_n·(a_{k−1}/a_n) ,  k = 1:n ,  d = b_n/a_n ,   (5.2.11)
that is c_1 = b_0 − b_n·a_0/a_n , c_2 = b_1 − b_n·a_1/a_n , ... , c_n = b_{n−1} − b_n·a_{n−1}/a_n , and express the output Y(s) from (5.2.10) as
    Y(s) = c_n·[s^(−1)W(s)] + c_{n−1}·[s^(−2)W(s)] + ... + c_1·[s^(−n)W(s)] + (b_n/a_n)·U(s)   (5.2.13)
or, with the notations (5.2.7),
    Y(s) = c_1·X_1(s) + c_2·X_2(s) + ... + c_n·X_n(s) + (b_n/a_n)·U(s) .   (5.2.14)
6. Draw, as a block diagram or a state flow graph, a cascade of n integrators and fill in this graphical representation the relations (5.2.8) and the output relations (5.2.13) or (5.2.14). The integrators can be represented without the initial conditions, or with their initial conditions if later on we want to get the free response from this diagram.

Figure no. 5.2.1. (Block diagram of the first type ID canonical form: a cascade of n integrators s^(−1) whose outputs are X_n(s), ..., X_1(s); the feedback coefficients a_{n−1}/a_n, ..., a_0/a_n close on the summing junction producing W(s), and the output Y(s) sums the terms c_k·X_k(s) and (b_n/a_n)·U(s).)
7. Denote, from the right hand side to the left hand side, the integrators' outputs by the variables X_1, X_2, ..., X_n, both in complex domain and in time domain.
8. Interpret the diagram in time domain and write the relations between variables in time domain:
    ẋ_1 = x_2 ,  ẋ_2 = x_3 , ... ,  ẋ_{n−1} = x_n
    ẋ_n = −(a_0/a_n)x_1 − (a_1/a_n)x_2 − ... − (a_{n−2}/a_n)x_{n−1} − (a_{n−1}/a_n)x_n + (1/a_n)u   (5.2.15)
    y = c_1x_1 + c_2x_2 + ... + c_{n−1}x_{n−1} + c_nx_n + d·u ,   (5.2.16)
where we denoted c_k = b_{k−1} − b_n·a_{k−1}/a_n, k = 1:n, and d = b_n/a_n.
9. Write in matrix form the relations (5.2.15), (5.2.16),
    ẋ = Ax + bu ,  x = [x_1 x_2 ... x_n]^T
    y = c^T·x + d·u ,   (5.2.17)
where
    A = [ 0        1        0   ...  0
          0        0        1   ...  0
          ...
          0        0        0   ...  1
          −a_0/a_n −a_1/a_n −a_2/a_n ... −a_{n−1}/a_n ] ,
    b = [ 0  0  ...  0  1/a_n ]^T ,
    c = [ b_0 − b_n·a_0/a_n ,  b_1 − b_n·a_1/a_n ,  ... ,  b_{n−1} − b_n·a_{n−1}/a_n ]^T ,  d = b_n/a_n .   (5.2.18)
This is called also the controllable companion canonical form of the state equations. It can be observed that if a_n = 1, then the last row of the matrix A is composed of the transfer function denominator coefficients with changed sign; if, in addition, b_n = 0, that is the system is strictly proper, then the c vector is composed of the transfer function nominator coefficients.
Example 5.2.1. First Type ID Canonical Form of a Second Order System.
Let a system be described by the second order differential equation
    ÿ + 3ẏ + 2y = ü + 5u̇ + 6u .
The transfer function is
    H(s) = Y(s)/U(s) = (s^2 + 5s + 6)/(s^2 + 3s + 2) = [(s+2)(s+3)] / [(s+2)(s+1)]
    H(s) = (1 + 5s^(−1) + 6s^(−2)) / (1 + 3s^(−1) + 2s^(−2)) .
Then
    Y(s) = (1 + 5s^(−1) + 6s^(−2)) · U(s)/(1 + 3s^(−1) + 2s^(−2)) ,  W(s) = U(s)/(1 + 3s^(−1) + 2s^(−2))
    W(s) + 3[s^(−1)W(s)] + 2[s^(−2)W(s)] = U(s)  ⇒  W(s) = −3[s^(−1)W(s)] − 2[s^(−2)W(s)] + U(s)   (5.2.21)
    Y(s) = 1·W(s) + 5[s^(−1)W(s)] + 6[s^(−2)W(s)]
         = 1·{−3[s^(−1)W(s)] − 2[s^(−2)W(s)] + U(s)} + 5[s^(−1)W(s)] + 6[s^(−2)W(s)]
         = (5 − 3)[s^(−1)W(s)] + (6 − 2)[s^(−2)W(s)] + U(s)
    Y(s) = 2[s^(−1)W(s)] + 4[s^(−2)W(s)] + U(s) .   (5.2.22)
Now the relations (5.2.21) and (5.2.22) are represented by a signal flow graph in Fig. 5.2.2.

Figure no. 5.2.2. (Signal flow graph of the second order example: two integrators s^(−1) with the state variables x_2 and x_1 = ∫x_2, feedback branches −3 and −2 (the loops B_1, B_2), output branches 2 and 4, the direct branch 1, and the initial conditions x_1(0), x_2(0).)

Writing the relations in time domain we obtain
    ẋ_1 = x_2 ,  ẋ_2 = −2x_1 − 3x_2 + u
    y = 4x_1 + 2x_2 + u
    A = [ 0 1 ; −2 −3 ] ,  b = [ 0 ; 1 ] ,  c = [ 4 ; 2 ] ,  d = 1 .
Now we can check that this state realisation (the controllable companion canonical form) of the transfer function
    H(s) = (s^2 + 5s + 6)/(s^2 + 3s + 2) = [(s+2)(s+3)]/[(s+2)(s+1)]
is controllable but not observable. As we can see, the transfer function has common factors in the nominator and the denominator, and in such conditions one of the two properties, controllability or observability, is lost. In our case it cannot be the controllability:
    b = [0 ; 1] ,  Ab = [0 1 ; −2 −3]·[0 ; 1] = [1 ; −3]  ⇒  P = [b Ab] = [0 1 ; 1 −3] ,  det(P) = −1 ≠ 0 .
The system is controllable.
    c^T = [4 2] ,  c^T·A = [4 2]·[0 1 ; −2 −3] = [−4 −2]  ⇒  Q = [4 2 ; −4 −2] ,  det(Q) = −8 + 8 = 0 .
The system is not observable.
Operating on the signal flow graph from Fig. 5.2.2, which is a state diagram (SD), we can get the transfer function and the two components of the free response. Using Mason's formula,
    B_1 = −3s^(−1) ,  B_2 = −2s^(−2) ,  S_p1 = B_1 + B_2 = −3s^(−1) − 2s^(−2) ,  S_p2 = 0
    D = 1 − S_p1 = 1 + 3s^(−1) + 2s^(−2) .
The three paths from u to y are: C_1 = 1 (the direct branch, nontouching with both loops, so D_1 = D), C_2 = 2s^(−1) with D_2 = 1, and C_3 = 4s^(−2) with D_3 = 1. Then
    H(s) = T^y_u = (C_1D_1 + C_2D_2 + C_3D_3)/D = (1 + 3s^(−1) + 2s^(−2) + 2s^(−1) + 4s^(−2))/(1 + 3s^(−1) + 2s^(−2))
         = (1 + 5s^(−1) + 6s^(−2))/(1 + 3s^(−1) + 2s^(−2)) = (s^2 + 5s + 6)/(s^2 + 3s + 2) .
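Relation (5.1.2) can also be checked numerically for this realisation. A minimal sketch (not from the text), evaluating H(s) = c^T(sI − A)^(−1)b + d at sample points with the explicit 2×2 inverse and exact arithmetic:

```python
from fractions import Fraction as F

# Companion realisation of H(s) = (s^2+5s+6)/(s^2+3s+2) from Example 5.2.1
A = [[F(0), F(1)], [F(-2), F(-3)]]
b = [F(0), F(1)]
c = [F(4), F(2)]
d = F(1)

def transfer_value(s):
    """H(s) = c^T (sI - A)^{-1} b + d, using the explicit 2x2 inverse."""
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    # (sI - A)^{-1} b via the adjugate
    phi_b = [(m[1][1] * b[0] - m[0][1] * b[1]) / det,
             (-m[1][0] * b[0] + m[0][0] * b[1]) / det]
    return c[0] * phi_b[0] + c[1] * phi_b[1] + d

s = F(1)
print(transfer_value(s))  # should equal (1+5+6)/(1+3+2)
```

At every test point the resolvent value agrees with the polynomial ratio (s^2+5s+6)/(s^2+3s+2), confirming (5.1.2) for this realisation.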
5.3. Second Type DI Canonical Form.
This state realisation is called also the observable companion canonical form: canonical because it has a minimum number of nonstandard elements (0 or 1), and observable because it assures the observability of its state vector. It is called a DI (derivative-integral) realisation because mainly it interprets the input processing first as a derivative, and the result of this is then integrated. We can illustrate this DI process considering the associative property
    Y(s) = H(s)U(s) = (1/L(s))·[M(s)U(s)] .   (5.3.1)
One method to get this canonical form is to start from the system differential equation,
    a_n·y^(n) + a_{n−1}·y^(n−1) + ... + a_1·y^(1) + a_0·y^(0) = b_n·u^(n) + b_{n−1}·u^(n−1) + ... + b_1·u^(1) + b_0·u^(0) ,   (5.3.2)
where we denote
    y^(k) = d^k y(t)/dt^k = D^k{y(t)} = D{D^(k−1)y} ,   (5.3.3)
    u^(k) = d^k u(t)/dt^k = D^k{u(t)} = D{D^(k−1)u} ,   (5.3.4)
using the derivative symbolic operator
    D{•} = d{•}/dt .   (5.3.5)
The equation (5.3.2) can be written as
    a_nD^ny + a_{n−1}D^(n−1)y + ... + a_1Dy + a_0y = b_nD^nu + b_{n−1}D^(n−1)u + ... + b_1Du + b_0u
or arranged as
    D^n[a_ny − b_nu] + D^(n−1)[a_{n−1}y − b_{n−1}u] + ... + D[a_1y − b_1u] + [a_0y − b_0u] = 0 ,
that is, with nested brackets,
    a_0y − b_0u + D[a_1y − b_1u + D[... + D[a_{n−1}y − b_{n−1}u + D[a_ny − b_nu]]...]] = 0 .   (5.3.6)
As we can see from (5.3.6), we can denote the innermost bracket by
    x_1 = a_ny − b_nu ,   (5.3.7)
from where
    y = (1/a_n)·x_1 + (b_n/a_n)·u ,
and then, bracket by bracket, x_{k+1} = ẋ_k + a_{n−k}y − b_{n−k}u, k = 1 : n−1, which gives
    x_2 = ẋ_1 + a_{n−1}y − b_{n−1}u  ⇒  ẋ_1 = −(a_{n−1}/a_n)x_1 + x_2 + (b_{n−1} − b_n·a_{n−1}/a_n)u
    x_3 = ẋ_2 + a_{n−2}y − b_{n−2}u  ⇒  ẋ_2 = −(a_{n−2}/a_n)x_1 + x_3 + (b_{n−2} − b_n·a_{n−2}/a_n)u   (5.3.8)
    ...
    x_n = ẋ_{n−1} + a_1y − b_1u  ⇒  ẋ_{n−1} = −(a_1/a_n)x_1 + x_n + (b_1 − b_n·a_1/a_n)u .
Finally, the outermost bracket of (5.3.6) gives
    0 = a_0y − b_0u + ẋ_n  ⇒  ẋ_n = −(a_0/a_n)x_1 + (b_0 − b_n·a_0/a_n)u .
From (5.3.7) and (5.3.8) we can write the state equations in matrix form,
    ẋ = Ax + bu ,  x = [x_1 x_2 ... x_n]^T ,  y = c^T·x + d·u ,
where
    A = [ −a_{n−1}/a_n  1  0  ...  0
          −a_{n−2}/a_n  0  1  ...  0
          ...
          −a_1/a_n      0  0  ...  1
          −a_0/a_n      0  0  ...  0 ] ,
    b = [ b_{n−1} − b_n·a_{n−1}/a_n
          b_{n−2} − b_n·a_{n−2}/a_n
          ...
          b_0 − b_n·a_0/a_n ] ,
    c = [ 1/a_n  0  ...  0 ]^T ,  d = b_n/a_n .
Example. Our previous example, with n = 2, a_2 = 1, a_1 = 3, a_0 = 2, b_2 = 1, b_1 = 5, b_0 = 6, gives
    A = [ −3 1 ; −2 0 ] ,  b = [ 2 ; 4 ] ,  c = [ 1 ; 0 ] ,  d = 1 .
Now we can check the controllability and the observability properties:
    Ab = [ −2 ; −4 ]  ⇒  P = [ 2 −2 ; 4 −4 ]  ⇒  det(P) = 0 ,
but
    Q = [ 1 0 ; −3 1 ]  ⇒  det(Q) = 1 ≠ 0 ,
which confirms that this realisation of the same transfer function as in the above example is not a controllable one, but it is an observable one.
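The two companion realisations of this chapter describe the same input-output behaviour. A minimal sketch (not from the text) checking that the ID (controllable) and DI (observable) realisations of H(s) = (s^2+5s+6)/(s^2+3s+2) yield identical transfer function values:

```python
from fractions import Fraction as F

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def H_value(A, b, c, d, s):
    """c^T (sI - A)^{-1} b + d for a 2x2 A, exact arithmetic."""
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = det2(m)
    phi_b = [(m[1][1] * b[0] - m[0][1] * b[1]) / det,
             (-m[1][0] * b[0] + m[0][0] * b[1]) / det]
    return c[0] * phi_b[0] + c[1] * phi_b[1] + d

# Controllable (ID) and observable (DI) realisations of the same H(s)
Ac, bc, cc, dc = [[F(0), F(1)], [F(-2), F(-3)]], [F(0), F(1)], [F(4), F(2)], F(1)
Ao, bo, co, do = [[F(-3), F(1)], [F(-2), F(0)]], [F(2), F(4)], [F(1), F(0)], F(1)
for s in (F(1), F(3), F(-5)):
    assert H_value(Ac, bc, cc, dc, s) == H_value(Ao, bo, co, do, s)
print("both realisations give the same transfer function values")
```

The equality at several sample points illustrates relation (5.1.5); an exhaustive proof would of course compare the rational functions themselves.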
5.4. Jordan Canonical Form.

This is a canonical form which points out the eigenvalues of the system. The system matrix has a diagonal form if the eigenvalues are distinct, or a block-diagonal form if there are multiple eigenvalues. This form can be obtained by using the so-called partial fraction development of the transfer function; of course, for that we have to know the roots of the denominator.

Example.
H(s) = (b_4·s^4 + ... + b_0) / [(s − λ_1)·(s − λ_2)^2·((s − α)^2 + β^2)]
Y(s)/U(s) = c_0 + c_11/(s − λ_1) + c_21/(s − λ_2) + c_22/(s − λ_2)^2 + (c_31·s + c_32)/((s − α)^2 + β^2) .
Let us denote
H_3(s) = (c_31·s + c_32)/((s − α)^2 + β^2) .
To determine this canonical form we first plot the output relation as a block diagram, considering for repeated roots a series of first order blocks. Then we express the relations in time domain, attaching a state component to the output of any first order block and, for the complex roots (poles), a number of components equal to the number of complex poles.

Figure no. 5.4.1. Block diagram of the partial fraction development: the block 1/(s−λ_1), two blocks 1/(s−λ_2) in series for the double pole, the block H_3, and the output summation with the gains c_0, c_11, c_21, c_22.
We have a second order system for the block with complex poles in this case. For the simple real pole,
x¹_1 = (1/(s−λ_1))·U(s) ⇒ s·x¹_1 − λ_1·x¹_1 = U ⇒ ẋ¹_1 = λ_1·x¹_1 + u .
For the double real pole,
x²_1 = (1/(s−λ_2))·x²_2 ,  x²_2 = (1/(s−λ_2))·U(s) ⇒ ẋ²_1 = λ_2·x²_1 + x²_2 ,  ẋ²_2 = λ_2·x²_2 + u .
For the block with complex poles we can use, as an independent problem, any method for state realisation, like the DI canonical form:
ẋ³ = A_3·x³ + b_3·u ,  y_3 = c_3^T·x³ + d_3·u ,  d_3 = 0
(d_3 = 0 because H_3 is a strictly proper part: the degree of its denominator is bigger than the degree of its numerator). The output is
y = c_0·u + c_11·x¹_1 + c_22·x²_1 + c_21·x²_2 + c_32·x³_1 + c_31·x³_2 .
Construct a state vector appending all the variables chosen as state variables,
x = [x¹_1, x²_1, x²_2, x³_1, x³_2]^T
and write the state equations in matrix form:
ẋ = A_J·x + B_J·u ,  y = C_J·x + d_J·u
A_J = [ λ_1  0  0  0  0 ;  0  λ_2  1  0  0 ;  0  0  λ_2  0  0 ;  0  0  0  [A_3] ]   (Jordan blocks)
B_J = [ b¹_J ; b²_J ; b³_J ] = [ 1 ;  0 ;  1 ;  b³_1 ;  b³_2 ]
C_J = [ C¹_J  C²_J  C³_J ] = [ c_11  c_22  c_21  c_32  c_31 ]
d_J = c_0 = b_n/a_n .
For H_3 we can point out the real and imaginary parts of the complex poles. With
H_3(s) = (c¹_31·s + c¹_30)/((s − α)^2 + β^2) = (c¹_31·s + c¹_30)/(s^2 − 2αs + α^2 + β^2)
denote W(s) = U(s)/((s − α)^2 + β^2). We can interpret this relation as a feedback connection,
W(s) = (1/β^2)·[β^2/((s − α)^2 + β^2)]·U(s) ,
built with two blocks β/(s−α) as in Figure no. 5.4.2. Denoting
x³_1 = (β/(s−α))·x³_2 ⇒ ẋ³_1 = α·x³_1 + β·x³_2
x³_2 = (β/(s−α))·(−x³_1 + (1/β^2)·U) ⇒ ẋ³_2 = −β·x³_1 + α·x³_2 + (1/β)·u
we have W = x³_1 and (s − α)W = β·x³_2, so
Y_3 = [c¹_31·(s − α + α) + c¹_30]·W(s) = c¹_31·β·x³_2 + (c¹_30 + α·c¹_31)·x³_1 = c_3^T·x³
A_3 = [ α  β ;  −β  α ] ,  b_3 = [ 0 ;  1/β ] ,  c_3 = [ c_32 ;  c_31 ] ,
c_32 = c¹_30 + α·c¹_31 ,  c_31 = c¹_31·β .

Figure no. 5.4.2. Feedback realisation of H_3 with two blocks β/(s−α).

The companion canonical forms (DI) are very easy to determine, but they are not robust regarding numerical computing for high dimensions. The Jordan canonical form is a bit more difficult to determine, but in many cases it is indicated for numerical computation. There are several algebraic methods for canonical form determination. For example, the Jordan canonical form can be obtained using the modal matrix: T = M^(−1), M = [U_1, ..., U_n], A·U_i = λ_i·U_i, where U_i are the eigenvectors, if they are independent, of the matrix A.
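The modal-matrix route can be sketched for a 2×2 case with distinct eigenvalues (plain Python; the matrix A is borrowed from the companion-form example above, and the eigenvector formula used is specific to that matrix):

```python
# Modal-matrix sketch: T = M^{-1}, M = [U1 U2], A·Ui = λi·Ui,
# brings A to the diagonal form A_J = M^{-1} A M = diag(λ1, λ2).
A = [[-3.0, 1.0],
     [-2.0, 0.0]]            # characteristic polynomial λ^2 + 3λ + 2

tr  = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = (tr*tr - 4*det) ** 0.5
lam1, lam2 = (tr + disc)/2, (tr - disc)/2   # -1 and -2

# For this particular A, an eigenvector of λ is Ui = [1, λ + 3]:
M = [[1.0, 1.0],
     [lam1 + 3.0, lam2 + 3.0]]

dM = M[0][0]*M[1][1] - M[0][1]*M[1][0]
T = [[ M[1][1]/dM, -M[0][1]/dM],            # T = M^{-1}
     [-M[1][0]/dM,  M[0][0]/dM]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A_J = mul(T, mul(A, M))      # diag(λ1, λ2), off-diagonal terms vanish
print(A_J)
```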
5.5. State Equations Realisation Starting from the Block Diagram.

If the system is represented by a block diagram whose blocks have some physical meaning, we can get a useful state realisation in which as many state variables as possible keep a physical meaning. The following algorithm can be performed:
1) If blocks are complex, make series factorisations as much as possible.
2) For any such block denote the state variables and determine the state equations.

Figure no. 5.5.1. Four typical blocks: 1) b_0/(a_1·s + a_0); 2) (b_1·s + b_0)/(a_1·s + a_0); 3) (b_2·s^2 + b_1·s + b_0)/(a_2·s^2 + a_1·s + a_0); 4) 1/s.

For block 1 the output can be chosen as a state variable:
ẋ = −(a_0/a_1)·x + (b_0/a_1)·u ,  y = x .     (5.5.1)
For block 2 the output cannot be chosen as a state variable, because we do not want to have the input time derivative; the transfer function is split in two components,
H = (b_1·s + b_0)/(a_1·s + a_0) = b_1/a_1 + (b_0 − b_1·a_0/a_1)/(a_1·s + a_0) .     (5.5.2)
Denote as state variable the output of the first order dynamical block and write the state equation:
ẋ = −(a_0/a_1)·x + (1/a_1)·(b_0 − b_1·a_0/a_1)·u ,  y = x + (b_1/a_1)·u .     (5.5.3)

Figure no. 5.5.2. Realisation of block 2 as the parallel connection of the gain b_1/a_1 and the first order block.
For block 3 we can use any state realisation method (a canonical form, or the Jordan form pointing out the imaginary or real parts of the poles):
ẋ_1 = x_2
ẋ_2 = −(a_0/a_2)·x_1 − (a_1/a_2)·x_2 + (1/a_2)·u     (5.5.4)
y = (b_0 − b_2·a_0/a_2)·x_1 + (b_1 − b_2·a_1/a_2)·x_2 + (b_2/a_2)·u .
For block 4 we can choose the input as being ẋ:
u = ẋ ,  y = x .
In such a case this x will disappear if we want to get a minimal realisation (minimum number of state variables).
2. Denote the inputs and outputs of the blocks by variables and write the algebraic equations between them.
3. Using the above determined state equations and the algebraic equations, eliminate all the intermediary variables and write the state equations in matrix form, considering as state vector the vector composed of all the components denoted in the block diagram.

Example (Figure no. 5.5.5): the block (s+2)/(s+4) with input u_1 = u − y_3 − y, followed by an integrator 1/s whose output is y, and the feedback block 1/(s+3) with input u_3 = y_1 and output y_3. The block equations are
ẋ_1 = −4x_1 − 2u_1 ,  y_1 = x_1 + u_1
ẋ_2 = y_1 ,  y = x_2
ẋ_3 = −3x_3 + u_3 ,  u_3 = y_1 ,  y_3 = x_3 .
Eliminating the intermediary variables:
ẋ_1 = −4x_1 − 2(u − x_3 − x_2) ⇒ ẋ_1 = −4x_1 + 2x_2 + 2x_3 − 2u
ẋ_2 = x_1 + u − x_3 − x_2 ⇒ ẋ_2 = x_1 − x_2 − x_3 + u
ẋ_3 = −3x_3 + x_1 + u − x_3 − x_2 ⇒ ẋ_3 = x_1 − x_2 − 4x_3 + u
y = x_2 ,
so, with x = [x_1 ; x_2 ; x_3],
ẋ = Ax + bu ,  y = c^T·x
A = [ −4  2  2 ;  1  −1  −1 ;  1  −1  −4 ] ,  b = [ −2 ;  1 ;  1 ] ,  c = [ 0 ;  1 ;  0 ] ,  d = 0 .
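The resulting matrices can be validated against the block diagram itself: the state-model transfer function c^T·(sI − A)^(−1)·b must equal the closed-loop expression at any test point. A self-contained sketch (plain Python, with an arbitrarily chosen test point s0 = 1 + j):

```python
# Cross-check of the block-diagram example: state model vs. closed loop.
A = [[-4.0, 2.0, 2.0],
     [ 1.0,-1.0,-1.0],
     [ 1.0,-1.0,-4.0]]
b = [-2.0, 1.0, 1.0]
c = [ 0.0, 1.0, 0.0]

def solve3(M, v):
    # Gaussian elimination without pivoting (adequate at this test point)
    M = [row[:] for row in M]; v = v[:]
    for i in range(3):
        for r in range(i + 1, 3):
            f = M[r][i]/M[i][i]
            for k in range(i, 3):
                M[r][k] -= f*M[i][k]
            v[r] -= f*v[i]
    x = [0]*3
    for i in (2, 1, 0):
        x[i] = (v[i] - sum(M[i][k]*x[k] for k in range(i + 1, 3)))/M[i][i]
    return x

s0 = 1.0 + 1.0j
sIA = [[(s0 - A[i][j]) if i == j else -A[i][j] for j in range(3)]
       for i in range(3)]
x = solve3(sIA, b)
H_ss = sum(c[i]*x[i] for i in range(3))      # c^T (s0 I - A)^{-1} b

# Closed loop read directly off the diagram: u1 = u - y3 - y
G = (s0 + 2)/(s0 + 4)                        # forward block (s+2)/(s+4)
y1 = G/(1 + G*(1/s0 + 1/(s0 + 3)))           # y1 for u = 1
H_bd = y1/s0                                 # y = y1/s

print(abs(H_ss - H_bd))                      # should be ~0
```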
6. FREQUENCY DOMAIN SYSTEM ANALYSIS.

Frequency domain system analysis involves two main topics:
1. Experimental frequency characteristics.
2. Transfer function descriptions in the frequency domain.

6.1. Experimental Frequency Characteristics.

In practical engineering the so-called experimental frequency characteristics, or just frequency characteristics, are frequently used. They can be obtained using some devices connected as in Fig. 6.1.1.

Figure no. 6.1.1. Experimental setup: generator of sinusoidal signal → physical object → two-way recorder.

Let us consider a physical object with the input ua and the output ya. For example, this object can be an electronic amplifier, an electric motor, etc. Supposing the system is in a steady state expressed by the constant input-output pair (U_0, Y_0), we apply a sinusoidal input signal,
ua(t) = U_0 + Um·sin(ωt) ,     (6.1.1)
where
ω = 2πf = 2π/T ,  [ω] = sec^(−1) .     (6.1.2)
We have to note that ω corresponds to the Romanian term "pulsatie"; T is the period of the signal, and in English ω is also covered by the general term "frequency". The deviation of the physically applied signal ua(t) with respect to the steady state value U_0 is
u(t) = ua(t) − U_0 = Um·sin(ωt) .     (6.1.3)

Figure no. 6.1.2. Input and output records: ua(t) of amplitude Um around U_0; ya(t) of amplitude Ym around Y_0, shifted by Δt = t_y0 − t_u0, with φ = −ω·Δt.
The curves of the input-output response, obtained using a two-way recorder and performed for one value of the frequency ω, look like in Fig. 6.1.2. After a transient time period, when a so-called "permanent regime" is installed, the output response is a sinusoidal function with the same frequency ω, but with another amplitude Ym, and shifted in time (with respect to the steady state value time crossings t_y0, t_u0) by a value Δt,
Δt = t_y0 − t_u0 .     (6.1.4)
The permanent response is written as
ya(t) = Y_0 + Ym·sin[ω(t − Δt)] .     (6.1.5)
Let us interpret this time interval Δt by a phase φ with respect to the frequency ω as
φ = −ω·Δt .     (6.1.6)
If the value of the shifting time Δt > 0 (the output is delayed with respect to the input, t_y0 > t_u0) the phase is negative, φ < 0, a "retard of phase"; if Δt < 0 (the output is in advance with respect to the input, t_y0 < t_u0) the phase is positive, φ > 0, an "advance of phase". So,
ya(t) = Y_0 + Ym·sin(ωt + φ) .     (6.1.7)
Let us denote the deviation with respect to the steady state value Y_0 by
y(t) = ya(t) − Y_0 = Ym·sin(ωt + φ) .     (6.1.8)
During one experiment, performed for one value of the frequency ω, we have to measure the amplitudes Um, Ym and the shifting time Δt. Two of the three measured values, Ym and Δt, depend on this value of ω, and one can compute the ratio
A(ω) = Ym/Um     (6.1.9)
in the permanent regime at the frequency ω. We can also compute
φ(ω) = −ω·Δt     (6.1.10)
in the permanent regime at the frequency ω. Repeat this experiment for different values of ω > 0 and, if possible, for ω ≥ 0. The two variables A and φ can then be plotted (on the same ω scale) for different values of ω, as in Fig. 6.1.3.

Figure no. 6.1.3. The measured values A_1 = A(ω_1), A_2 = A(ω_2) and φ_1 = φ(ω_1), φ_2 = φ(ω_2) plotted versus ω.
The function
A(ω) ,  ω ∈ [0, ∞)     (6.1.11)
obtained in such a way is called the Magnitude Characteristic (Magnitude-Frequency Characteristic, or Gain Characteristic). The function
φ(ω) ,  ω ∈ [0, ∞)     (6.1.12)
is called the Phase Characteristic (Phase-Angle Characteristic, or Phase-Frequency Characteristic). The φ angle can be considered in the [0, 2π) circle or in other trigonometric circles, for example (−π, π] or (−2π, 0].
Based on the magnitude and phase characteristics other characteristics can be obtained, like:
Real Frequency Characteristic: P(ω) = A(ω)·cos[φ(ω)] ;     (6.1.13)
Imaginary Frequency Characteristic: Q(ω) = A(ω)·sin[φ(ω)] .     (6.1.14)
Complex Frequency Characteristic: a polar representation of the pair (A(ω), φ(ω)), ω ∈ [0, ∞), or a Cartesian representation of the pair (P(ω), Q(ω)) .     (6.1.15)
This complex characteristic is marked by an arrow showing the increasing direction of ω; it is also marked by the values of ω, as in Fig. 6.1.4.

Figure no. 6.1.4. Complex frequency characteristic in the (P, Q) plane, with the points (P_1, Q_1), (P_2, Q_2) marked by ω_1, ω_2 and by A_1, A_2, φ_1, φ_2.

These experimental characteristics can completely describe all the system properties for linear systems, and some properties of nonlinear systems. In the above diagrams the frequency axis is on a linear scale, but in practice the so-called logarithmic scale is used.
6.2. Relations Between Experimental Frequency Characteristics and Transfer Function Attributes.

Let us consider a linear system with the transfer function
H(s) = Y(s)/U(s) = M(s) / [a_n·Π_(i=1:N)(s − λ_i)^(m_i)] ,  Σ m_i = n ,     (6.2.1)
and an input
u(t) = ua(t) − U_0 = Um·sin(ωt) ⇒ U(s) = L{u(t)} = Um·ω/(s^2 + ω^2) = Um·ω/[(s − jω)(s + jω)] .     (6.2.2), (6.2.3)
The system has N distinct poles λ_i ∈ C (R), where the pole λ_i has the order of multiplicity m_i. Among them N_1 poles are real and the other 2N_2, i = N_1+1 : N_1+2N_2, are complex. The complex poles are λ_i = σ_i + jω_i, σ_i = Re(λ_i), and
e^(λ_i·t) = e^(σ_i·t)·(cos ω_i·t + j·sin ω_i·t) .
The system response, in zero initial conditions, to this input is
Y(s) = L{ya(t) − Y_0} = H(s)·U(s) .     (6.2.4)
Supposing that ω ≠ Im(λ_i) ∀i, that means that ω is not a resonant frequency, the output is
y(t) = Σ Rez{ M(s)/[a_n·Π(s − λ_i)^(m_i)] · ω·Um/[(s − jω)(s + jω)] · e^(st) }
y(t) = [ Σ_(i=1:N_1) P_i(t)·e^(λ_i·t) + Σ_(i=N_1+1:N_1+N_2) Ψ_i(t)·e^(σ_i·t) ]·Um + [ H(jω)·e^(jωt)/(2j) + H(−jω)·e^(−jωt)/(−2j) ]·Um     (6.2.5)
where the first bracket is the transient response and the second bracket is the permanent response y_p(t). The transient response is determined by the transfer function poles, and the permanent response is determined by the poles of the Laplace transform of the input.
The transient response will disappear when t → ∞ if the real parts of all the transfer function poles are negative, Re(λ_i) < 0. This means that all the poles are located in the left half of the complex plane; this is the main stability criterion for linear systems.
The expression H(jω) is a complex number for which we can define
H(jω) = A(ω)·e^(jφ(ω)) ,  A(ω) = |H(jω)| ,  φ(ω) = arg(H(jω)) ,  H(−jω) = A(ω)·e^(−jφ(ω)) .
In such a way,
y_p(t) = [ H(jω)·e^(jωt)/(2j) + H(−jω)·e^(−jωt)/(−2j) ]·Um
y_p(t) = Um·A(ω)·[ e^(j(ωt+φ)) − e^(−j(ωt+φ)) ]/(2j)
y_p(t) = A(ω)·Um·sin(ωt + φ(ω)) = Ym·sin(ωt + φ(ω)) ,  Δt = Δt(ω) .     (6.2.6), (6.2.7)
We can see that the permanent response is also a sinusoidal signal with the same frequency, but with a shifted phase.
There is a strong connection between the experimental frequency characteristics and the transfer function attributes: modulus and argument. Having a transfer function H(s), we can determine its frequency description just substituting (if possible) s by jω,
H(jω) = H(s)|_(s→jω) .     (6.2.10)
From this theoretical expression we obtain
A_exp(ω) = Ym/Um = A(ω) = |H(jω)| ,     (6.2.8)
that means the ratio between the experimental amplitudes, as defined in (6.1.9), is exactly the modulus of the complex expression H(jω) obtained replacing s by jω. At the same time the experimental phase,
φ_exp(ω) = −ω·Δt_exp = φ(ω) = arg(H(jω)) ,     (6.2.9)
is the argument of the same complex expression H(jω). Manipulating the transfer function attributes we can point out, in a reverse order, what the shape of the experimental characteristics will be.
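The substitution s → jω can be tried on a small hypothetical example, H(s) = 1/(Ts + 1) with T = 1; at ωT = 1 the modulus and argument come out as 1/√2 and −π/4:

```python
import cmath, math

def H(s, T=1.0):
    return 1.0/(T*s + 1.0)     # hypothetical first-order system

w = 1.0
Hjw = H(1j*w)
A = abs(Hjw)                   # A(ω)  = |H(jω)|
phi = cmath.phase(Hjw)         # φ(ω) = arg(H(jω))

print(A, phi)                  # 1/√2 ≈ 0.7071 and -π/4 ≈ -0.7854
```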
So the frequency characteristics can be expressed directly from the transfer function:
Magnitude Frequency Characteristic: A(ω) = |H(jω)| ;     (6.2.11)
Phase Frequency Characteristic: φ(ω) = arg(H(jω)) ;     (6.2.12)
Real Frequency Characteristic: P(ω) = Re(H(jω)) ;     (6.2.13)
Imaginary Frequency Characteristic: Q(ω) = Im(H(jω)) .     (6.2.14)
We remember that
H(jω) = A(ω)·e^(jφ(ω)) = P(ω) + jQ(ω) ,  A(ω) = sqrt(P^2(ω) + Q^2(ω))     (6.2.15)
φ(ω) = arctg(Q(ω)/P(ω)) ,            P(ω) > 0
φ(ω) = arctg(Q(ω)/P(ω)) + π ,        P(ω) < 0     (6.2.16)
φ(ω) = π/2 ,   P(ω) = 0, Q(ω) > 0
φ(ω) = −π/2 ,  P(ω) = 0, Q(ω) < 0 ,  with φ ∈ (−π, π] .     (6.2.17)
The complex frequency characteristic is the polar plot of the pair (A(ω), φ(ω)) or the Cartesian plot of the pair (P(ω), Q(ω)).
6.3. Logarithmic Frequency Characteristics.

6.3.1. Definition of Logarithmic Characteristics.

To point out better the properties of a system, the frequency characteristics are frequently plotted on so-called logarithmic scales, with respect to frequency and amplitude. The values of the magnitude characteristic A(ω) can be plotted on a logarithmic scale or, frequently, on a linear scale for
L(ω) = 20·lg(A(ω)) ,     (6.3.1)
expressed in decibels (dB). If we have a value L dB, the corresponding magnitude value A is
A = 10^(L/20) .     (6.3.2)
The units of decibels are utilised, for example, to measure the sound intensity,
I_s = 20·lg(P/P_0) ,     (6.3.3)
I_s - sound intensity expressed in dB; P - pressure on the ear; P_0 - minimum pressure to produce a feeling of sound.
The logarithmic scale for the variable ω is just the linear scale for the variable x = lg ω, as is illustrated in Fig. 6.3.1. Examples:
ω = 2 ⇒ x = lg 2 = 0.30103
ω = 3 ⇒ x = lg 3 = 0.477121
ω = 0.2 ⇒ x = lg 0.2 = −1 + lg 2 = −1 + 0.30103 = −0.69897
ω = 0 will be plotted at x = −∞ .
A decade is a frequency interval [ω_1, ω_2] such that ω_2 = 10·ω_1. In the logarithmic scale we do not have a linear space and we are not able to add or to perform operations as in the usual analysis, but in the linear space "X", which is defined on a linear scale, all the mathematical operations on x are possible.

Figure no. 6.3.1. The logarithmic scale ω = 10^x against the linear scale x = lg ω, with one decade marked (lg 2 = 0.30103, lg 3 = 0.477121, lg 8 = 0.90309, lg 20 = 1.30103).
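The conversions (6.3.1) and (6.3.2) form a simple round trip, sketched below on an arbitrary value:

```python
import math

# L = 20·lg(A) and back: A = 10^(L/20).
A = 100.0
L = 20*math.log10(A)       # 40.0 dB
A_back = 10**(L/20)        # 100.0 again

print(L, A_back)
```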
6.3.2. Asymptotic Approximations of Frequency Characteristics.

The Bode diagram (Bode plot) is the pair of the magnitude characteristic, represented on a logarithmic scale A(ω) or on a linear scale in dB L(ω), and the phase characteristic φ(ω), represented on a linear scale, both versus the same logarithmic scale in ω.

Figure no. 6.3.2. Bode diagram grid: magnitude frequency characteristic (A(ω) on logarithmic scale, L(ω) in dB) and phase frequency characteristic φ(ω), both versus ω on a logarithmic scale.

The magnitude and phase characteristics are approximated by their asymptotes with respect to the linear variable x. Such characteristics are called asymptotic frequency characteristics.

6.3.2.1. Asymptotic Approximation of the Magnitude Frequency Characteristic for a First Degree Complex Variable Polynomial.

Let us consider a first degree complex variable polynomial
H(s) = Ts + 1 .     (6.3.4)
The exact magnitude frequency characteristic of this polynomial is
H(jω) = jωT + 1 ⇒ A(ω) = sqrt((ωT)^2 + 1) ,  L(ω) = 20·lg[A(ω)] .     (6.3.5)
It is approximated by
A(ω) = sqrt((ωT)^2 + 1) ≈ A_a(ω) = { 1 ,  ωT < 1 ;   ωT ,  ωT ≥ 1 }     (6.3.6)
L(ω) = 20·lg[A(ω)] ≈ L_a(ω) = { 0 ,  ωT < 1 ;   20·lg(ωT) ,  ωT ≥ 1 } .     (6.3.7)
Let us denote by ωT the so-called normalised frequency. As 2π is a nondimensional number, both ω and f are measured in [sec]^(−1), where ω = 2πf, and T, called time constant, is measured in [sec]; so the normalised frequency ωT is a nondimensional number. Note that ω is also called in English "frequency", which corresponds to the Romanian "pulsatie", while f is called frequency both in English and Romanian. Frequently some items of the frequency characteristics are represented in normalised frequency; recovering their shape in the natural frequency is a matter of frequency scale gradation, as depicted in Fig. 6.3.3.

Figure no. 6.3.3. The same graduation expressed in normalised frequency ωT (0.01, 0.1, 1, 2, 3, 8, 10, 20, 100) and in natural frequency ω (0.01/T, 0.1/T, 1/T, ..., 100/T).

In the linear space of
x = lg(ωT) ⇒ ωT = 10^x     (6.3.8)
the exact characteristic becomes
F(x) = L(ω)|_(ωT→10^x) = 20·lg( sqrt(10^(2x) + 1) ) .     (6.3.9)
The two approximating branches from (6.3.7) are the horizontal and the oblique asymptotes of this function F(x) of the variable x in the linear space X. The horizontal asymptote is
y = lim_(x→−∞) F(x) = 0 , for x → −∞ .     (6.3.10)
The oblique asymptote is y = mx + n, for x → ∞, where
m = lim_(x→∞) F(x)/x = 20 ,  n = lim_(x→∞) [F(x) − mx] = 0 ,     (6.3.11)
so
y = 20x , for x → ∞ .     (6.3.12)
The slopes of the asymptotes can be expressed in logarithmic scale as a number of decibels over a decade, dB/dec. When the linear variable x increases with 1 unit in the linear space X = R, the variable ωT (or ω) in logarithmic scale increases 10 times (it covers a decade):
x_2 − x_1 = 1 ⇒ lg(ω_2·T) − lg(ω_1·T) = 1 ⇒ ω_2·T/(ω_1·T) = 10 ⇒ ω_2 = 10·ω_1 .
The slope in the linear space is
m = (y_2 − y_1)/(x_2 − x_1) ,
which can be interpreted as the variation m = y_2 − y_1 if x_2 − x_1 = 1 ⇔ ω_2·T/(ω_1·T) = 10 ⇔ one decade. But
y_2 = 20·lg(A(ω_2)) = L(ω_2) ,  y_1 = 20·lg(A(ω_1)) = L(ω_1) ,
so the slope in logarithmic scale for the magnitude characteristic is expressed as
m = L(ω_2) − L(ω_1) [dB/dec] , if ω_2/ω_1 = 10 (one decade).
For (6.3.12) the slope is m = 20 dB/dec. The exact and asymptotic frequency characteristics of (6.3.6) are depicted in Fig. 6.3.4.

Figure no. 6.3.4. Exact magnitude characteristic of H(s) = Ts + 1 with its horizontal asymptote (slope 0 dB/dec) and oblique asymptote y = mx + n (slope m = 20 dB/dec); the maximum error, 3 dB, appears at the breaking point ωT = 1.

6.3.2.2. Asymptotic Approximation of the Phase Frequency Characteristic for a First Degree Complex Variable Polynomial.

Let us consider the first degree complex variable polynomial
H(s) = Ts + 1 .     (6.3.13)
The exact phase frequency characteristic of this polynomial is
φ(ω) = arg(jωT + 1) = arctg(ωT) .     (6.3.14)
It is approximated by
φ(ω) = arctg(ωT) ≈ φ_a(ω) = { 0 ,  ωT < 0.2 ;   (ln 10 / 2)·lg(ωT) + π/4 ,  ωT ∈ [0.2, 5] ;   π/2 ,  ωT > 5 } .     (6.3.15)
This approximation means three lines, which are:
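The 3 dB maximum error quoted for the magnitude asymptotes can be reproduced directly (a sketch with a hypothetical T = 1): at the breaking point ωT = 1 the exact characteristic gives 20·lg√2 ≈ 3.01 dB, while the asymptotes give 0.

```python
import math

T = 1.0   # hypothetical time constant

def L_exact(w):          # L(ω) = 20·lg sqrt((ωT)^2 + 1)
    return 20*math.log10(math.sqrt((w*T)**2 + 1))

def L_asym(w):           # 0 below the break, 20·lg(ωT) above it
    return 0.0 if w*T < 1 else 20*math.log10(w*T)

err = L_exact(1.0/T) - L_asym(1.0/T)   # gap at the break frequency
print(round(err, 2))                   # 3.01 dB
```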
charact. 6.when we discuss elementary frequency characteristics. 6.0 = 0 . 6.3.69897 ω T=5 2 x2 .2 x2 = 0.3. .x1 ωT 2 = 1 1 ω1T Horizontal asymptote Figure no. 2 10 + 1 Both ϕ(ω ) and ϕ a (ω ) are depicted in Fig. 10 1 100 2 x ωT ω T=10 Logarithmic scale x = ωT) Linear scale lg( x Oblique line y = (ln10/2)x + π/4 Max. .6. Other asymptotic approximations will be presented later. 1 0 5 .01 2 −π/4 −π/2 0. .5.3.0.A line having the slope of ϕ(ω ) in the particular point ωT = 1 ⇔ x = 0 slope which is evaluated for the function G(x) by the derivative xln G (x) = 10 2x 10 ⇒ G (0) = ln 10 .2 1 Exacte phase freq. error 6 degree Horizontal asymptote 0 −∞ 45 90 .1 0. ϕ( ω ) ϕa (ω) 180 degree 135 90 45 0 ϕ( ω ) ϕa (ω) π rad 3π/ 4 π/2 π/4 0 0.69897 ω1T=0.5. FREQUENCY DOMAIN SYSTEM ANALYSIS. x1 = . LogarithmicFrequency Characteristics.The two horizontal asymptotes of the phase frequency chracteristic ϕ(ω ) in the linear space of the variable x = lg(ω T) evaluated to the function G(x) = ϕ(ω ) ωT→10 x = arctg(10x) x → −∞ ⇔ ω → 0 ⇒ G(x) → 0 x → +∞ ⇔ ω → ∞ ⇒ G(x) → π 2 . 149 .
6.4. Elementary Frequency Characteristics.

Elementary frequency characteristics are used to draw frequency characteristics for any transfer function. They are also called frequency characteristics of typical transfer functions, or of typical elements. There are six such elementary frequency characteristics, coming out from the factorisation of the transfer function polynomials. For each element A(ω), φ(ω), L(ω) are evaluated and their asymptotic counterparts are deduced, accompanied by Bode diagrams.

6.4.1. Proportional Element.

H(s) = K     (6.4.1)
A(ω) = |K|     (6.4.2)
φ(ω) = 0 , K ≥ 0 ;  φ(ω) = π , K < 0     (6.4.3)
L(ω) = 20·lg|K| .     (6.4.4)
Its Bode diagram is depicted in Fig. 6.4.1. The complex frequency characteristic is a point, placed in the (P, Q) plane at P(ω) = K, Q(ω) = 0, ∀ω.

Figure no. 6.4.1. Bode diagram of the proportional element: constant magnitude L(ω) = 20·lg|K| and constant phase (0 if K ≥ 0, π if K < 0).
6.4.2. Integral Type Element.

H(s) = 1/s^α ,  α ∈ Z .     (6.4.5)
If α = 1 the system is a pure simple integrator; if α = 2, a double pure integrator; if α = −1, a pure derivative H(s) = s. With s → jω,
H(jω) = 1/(jω)^α ⇒ A(ω) = 1/ω^α ,  ω > 0     (6.4.6)
L(ω) = −20·α·lg ω     (6.4.7)
φ(ω) = −α·π/2 .     (6.4.8)
The magnitude characteristics are straight lines of slope −20·α dB/dec, as in Fig. 6.4.2.

Figure no. 6.4.2. Bode diagram of the integral type element for α = ±1 (slopes ∓20 dB/dec) and α = ±2 (slopes ∓40 dB/dec), with the constant phases −α·π/2.

The complex frequency characteristics, in the (P, Q) plane, are:
horizontal lines if α = 2k, k ∈ Z ⇒ P(ω) = (−1)^k/ω^(2k) ,  Q(ω) = 0 ;
vertical lines if α = 2k + 1, k ∈ Z ⇒ P(ω) = 0 ,  Q(ω) = −(−1)^k/ω^(2k+1) .
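The −20·α dB/dec slope and the constant phase −α·π/2 can be checked numerically on the simple integrator (α = 1), a minimal sketch:

```python
import cmath, math

alpha = 1                      # simple integrator H(s) = 1/s
def H(s):
    return 1.0/s**alpha

# dB magnitudes one decade apart:
L1 = 20*math.log10(abs(H(1j*1.0)))
L2 = 20*math.log10(abs(H(1j*10.0)))

print(L2 - L1)                 # -20 dB over one decade
print(cmath.phase(H(1j*10.0))) # -π/2, independent of ω
```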
6.4.3. First Degree Polynomial Element (PD Element).

A PD element means a Proportional-Derivative element,
y(t) = T·u̇ + u ⇒ Y(s) = (Ts + 1)·U(s) ⇒ H(s) = Ts + 1 = T(s + z) ,  z = 1/T .     (6.4.9)
This element has a transfer function with one real zero. With s → jω, H(jω) = 1 + j(ωT) = P(ω) + jQ(ω):
A(ω) = |H(jω)| = sqrt(1 + (ωT)^2)     (6.4.10)
L(ω) = 20·lg( sqrt(1 + (ωT)^2) )     (6.4.11)
L_a(ω) = { 0 ,  ωT < 1 ;   20·lg(ωT) ,  ωT ≥ 1 }     (6.4.12)
φ(ω) = arctg(ωT)     (6.4.13)
φ_a(ω) = { 0 ,  ωT < 0.2 ;   (ln 10 / 2)·lg(ωT) + π/4 ,  ωT ∈ [0.2, 5] ;   π/2 ,  ωT > 5 }     (6.4.14)
P(ω) = Re(H(jω)) = 1     (6.4.15)
Q(ω) = Im(H(jω)) = ωT ,     (6.4.16)
where L_a(ω) and φ_a(ω) are the asymptotic approximations of the magnitude and phase characteristics. The Bode characteristics are represented in Fig. 6.4.3.

Figure no. 6.4.3. Bode diagram of the PD element: asymptotic magnitude with slope +20 dB/dec above ωT = 1 (maximum error 3 dB) and asymptotic phase (maximum error about 6 degrees).
4.4. ωT = 1 ϕ(ω ) = 2 π + arctg 2ξTω . If the frequency increases the output as a time sinusoidal signal will be in advance in respect to input.25 ϕ(ω 1) 0. ξ .17) n T where: ω n . FREQUENCY DOMAIN SYSTEM ANALYSIS.Q) is depicted in Fig.4. 1 =ω H(s) = T 2 s 2 + 2ξTs + 1 .4.4. Elementary Frequency Characteristics.the damping factor 2 H(jω ) = (1 − (ω T) ) + j(2ξω T) (6.5 Α(ω 1 ) 0 0 0.23) . For breaking point ω T = 1 ⇔ ω = 1/T the output. 6.6.4. 1) . 6. 2. From Bode and complex frequency characteristics the following observations can be obtained: 1.the natural frequency. (6.21) (6.4. This element has a transfer function with only two complex zeros.4. which corresponds to a phase of π/4. Second Degree Plynomial Element with Complex Roots. ξ ∈ (0. 3π ) 2 2 arctg 2ξTω .5 H(jω1 ) ω =0 1 P Figure no.18) P(ω ) = 1 − (ω T) 2 (6.4.4. ω T > 1 1−(Tω) 2 153 (6.4. Q 1 ω→∞ ω =1/T=z ω=ω1 0. When ω → ∞ then the output will be with a time interval of T/4 in advance with respect to the input.20) (6.4. 6. 3. in the plane (P.19) Q(ω ) = 2ξω T A(ω ) = H(jω ) = (1 − (ω T) 2 ) 2 + 4ξ 2 (Tω ) 2 L(ω ) = 20 lg A(ω ) ϕ ∈ [− π. The complex frequency characteristic.22) (6. ωT < 1 1−(Tω) 2 π . as a time sinusoidal signal will be T/8 in advance . 6.4.4.
a. The asymptotic magnitude characteristic. With
L(ω) = 20·lg( sqrt((1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2) )     (6.4.24)
x = lg(ωT) ⇒ ωT = 10^x     (6.4.25)
F(x) = L(ω)|_(ωT=10^x) = 20·lg( sqrt((1 − 10^(2x))^2 + 4ξ^2·10^(2x)) ) .     (6.4.26)
When x → −∞ (ω → 0), F(−∞) = 0, that means a horizontal asymptote. When x → +∞,
m = lim_(x→∞) F(x)/x = 40 ,  n = lim_(x→∞) [F(x) − mx] = 0 ,
so an oblique asymptote y = 40x exists. The asymptotic approximations are
A(ω) ≈ A_a(ω) = { 1 ,  ωT < 1 ;   (ωT)^2 ,  ωT ≥ 1 }     (6.4.27)
L_a(ω) = { 0 ,  ωT < 1 ;   40·lg(ωT) ,  ωT ≥ 1 } = 20·lg(A_a(ω)) .     (6.4.28)
The exact frequency characteristics depend on the damping factor ξ, so there is a family of characteristics. At the natural frequency,
A(ω_n) = A(1/T) = 2ξ ;     (6.4.29)
if ξ = 0.5 then L(1/T) = 0. We can obtain a resonance frequency if 0 < ξ < sqrt(2)/2, setting to zero the derivative of A(ω) with respect to ω:
dA(ω)/dω = 0 ⇒     (6.4.30)
ω_rez = (1/T)·sqrt(1 − 2ξ^2) ,     (6.4.31)
where ω_rez is the resonance frequency. The resonant magnitude, A_m = A_rez, is
A_m = 2ξ·sqrt(1 − ξ^2) = A(ω_rez) ,     (6.4.32)
a minimum of the magnitude. Also,
L(1/T) = L(ω_n) = 20·lg(2ξ)     (6.4.33)
and the crossing frequency ω_t, where A(ω_t) = 1, is
ω_t = sqrt(2)·ω_rez .     (6.4.34)
All these allow us to draw the asymptotic magnitude characteristic and the family of the exact characteristics as in Fig. 6.4.5.
Figure no. 6.4.5. Family of exact magnitude characteristics of the second degree polynomial element for several ξ (ξ = 0, 0 < ξ < 1/2, ξ = 1/2, 1/2 < ξ < √2/2, ξ > √2/2), together with the asymptotic characteristic of slope +40 dB/dec; the points A(ω_n), A_m, ω_rez·T and ω_t·T are marked.

Example.
H(s) = 100·s^2 + 2·s + 1 = T^2·s^2 + 2ξTs + 1 ,  T = 10 ,  2ξT = 2 ⇒ ξ = 0.1 ,  ω_n = 0.1
L(ω_n) = 20·lg(2ξ) = 20·lg(0.2) ≈ −13.98 dB
ω_rez = (1/T)·sqrt(1 − 2ξ^2) = 0.1·sqrt(0.98) ≈ 0.099
A_m = A_rez = 2ξ·sqrt(1 − ξ^2) = 0.19899 .

Figure no. 6.4.6. Bode diagram of H(s) = 100s² + 2s + 1: the magnitude has the minimum L_m around ω_rez ≈ 0.099 and then rises with +40 dB/dec; the phase goes from 0 to 180 degrees.
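The numbers of the example can be reproduced with a few lines (a sketch; the tolerances are mine):

```python
import math

T, xi = 10.0, 0.1                        # H(s) = 100 s^2 + 2 s + 1

w_n   = 1.0/T
L_wn  = 20*math.log10(2*xi)              # L(ω_n) = 20·lg(2ξ) ≈ -13.98 dB
w_rez = w_n*math.sqrt(1 - 2*xi**2)       # ≈ 0.099, the magnitude minimum
A_m   = 2*xi*math.sqrt(1 - xi**2)        # A(ω_rez) ≈ 0.19899

print(L_wn, w_rez, A_m)
```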
The crossing frequency is obtained solving the equation
A(ω) = sqrt( (1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2 ) = 1 .
We note y = (ωT)^2 ⇒ (1 − y)^2 + 4ξ^2·y = 1 ⇒ y = 2(1 − 2ξ^2) ,
(ωT)^2 = 2(1 − 2ξ^2) ⇒ ω_t = ω_c = (sqrt(2)/T)·sqrt(1 − 2ξ^2) = sqrt(2)·ω_rez .

b. The asymptotic phase characteristic. We said that
φ(ω) = { arctg(2ξωT/(1 − (ωT)^2)) ,  ωT < 1 ;   π/2 ,  ωT = 1 ;   π + arctg(2ξωT/(1 − (ωT)^2)) ,  ωT > 1 } .
Denoting, with ωT = 10^x,
G(x) = φ(ω)|_(ωT→10^x) = { arctg(2ξ·10^x/(1 − 10^(2x))) ,  x < 0 ;   π/2 ,  x = 0 ;   π + arctg(2ξ·10^x/(1 − 10^(2x))) ,  x > 0 }     (6.4.35)-(6.4.37)
we have a function of x, in a linear space X, with three branches. The asymptotes: x → −∞ ⇒ G(x) → 0; x → +∞ ⇒ G(x) → π. The oblique line through x = 0 is y = G'(0)·x + π/2, with G'(0) = ln 10/ξ. The phase is then approximated by three lines, two horizontal asymptotes and one oblique line:
φ_a(ω) = { 0 ,  ωT < ω_1·T ;   π/2 + (ln 10/ξ)·lg(ωT) ,  ωT ∈ [ω_1·T, ω_2·T] ;   π ,  ωT > ω_2·T }     (6.4.38)
where
y = (ln 10/ξ)·x + π/2 = 0 ⇒ x_1 = −π·ξ/(2·ln 10) ⇒ ω_1·T = 10^(x_1)
y = (ln 10/ξ)·x + π/2 = π ⇒ x_2 = π·ξ/(2·ln 10) ⇒ ω_2·T = 10^(x_2) .
Algorithm for asymptotic phase drawing:
- mark A [(ωT) = 1, φ = π/2];
- mark C [(ωT) = 10, φ = π/2 + ln 10/ξ];
- connect A and C, and determine ω_1T and ω_2T at the intersections with φ = 0 (point B) and φ = π (point D) respectively.
So the exact phase characteristic will have as semitangents the segments AB and AD.
Example: T = 10, ξ = 0.1 ⇒ ln 10/ξ = 2.3026/0.1 = 23.026 rad = 7.33π .

Figure no. 6.4.7. Asymptotic phase characteristic φ_a(ω) with the construction points A, B, C, D and the oblique line of slope ln(10)/ξ.

The complex frequency characteristic, in the (P, Q) plane, is obtained eliminating ω in (6.4.19)-(6.4.20):
P = 1 − Q^2/(4ξ^2) ,  Q ≥ 0 ,  ∀ξ > 0 ,     (6.4.39)
a parabola represented in Fig. 6.4.8, starting from (1, 0) at ω = 0 and crossing the imaginary axis at Q = 2ξ for ωT = 1.

Figure no. 6.4.8. Complex frequency characteristic of the second degree polynomial element: the parabola P = 1 − Q²/(4ξ²), Q ≥ 0.
6.4.5. Transfer Function with one Real Pole. Aperiodic Element.

H(s) = 1/(Ts + 1) = (1/T)/(s + p) .  The pole is −p = −1/T .     (6.4.40)
H(jω) = 1/(1 + jωT) = 1/(1 + (ωT)^2) − j·ωT/(1 + (ωT)^2)     (6.4.41)
P(ω) = 1/(1 + (ωT)^2) ,  Q(ω) = −ωT/(1 + (ωT)^2)     (6.4.42)
A(ω) = 1/sqrt(1 + (ωT)^2)     (6.4.43)
L(ω) = −20·lg( sqrt(1 + (ωT)^2) )     (6.4.44)
L_a(ω) = { 0 ,  ωT < 1 ;   −20·lg(ωT) ,  ωT ≥ 1 }     (6.4.45)
φ(ω) = −arctg(ωT)     (6.4.46)
φ_a(ω) = { 0 ,  ωT < 0.2 ;   −π/4 − (ln 10 / 2)·lg(ωT) ,  ωT ∈ [0.2, 5] ;   −π/2 ,  ωT > 5 } .     (6.4.47)
The Bode plot is depicted in Fig. 6.4.9. We observe that at ωT = 1 the asymptotic magnitude characteristic has a break down of 20 dB/dec regarding the slope.

Figure no. 6.4.9. Bode diagram of the aperiodic element: asymptotic magnitude with slope −20 dB/dec above ωT = 1 (maximum error 3 dB) and asymptotic phase from 0 to −π/2 (maximum error about 6 degrees).
The complex frequency characteristic of the aperiodic element, in the (P, Q) plane, obtained by eliminating ω in (6.4.42), is expressed by the equation
P^2 + Q^2 − P = 0 ,  Q ≤ 0 ,     (6.4.48)
a semicircle represented in Fig. 6.4.10.

Figure no. 6.4.10. Complex frequency characteristic of the aperiodic element: the semicircle P² + Q² − P = 0, Q ≤ 0, from (1, 0) at ω = 0 towards (0, 0) as ω → ∞, passing through (1/2, −1/2) at ω = 1/T.

6.4.6. Transfer Function with two Complex Poles. Oscillatory Element.

H(s) = 1/(T^2·s^2 + 2ξTs + 1) = ω_n^2/(s^2 + 2ξω_n·s + ω_n^2)     (6.4.49)
where ω_n = 1/T is the natural frequency and ξ ∈ (0, 1) is the damping factor.
H(jω) = 1/[(1 − (ωT)^2) + j2ξωT]     (6.4.50)
A(ω) = 1/sqrt( (1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2 )     (6.4.51)
L(ω) = −20·lg( sqrt((1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2) )     (6.4.52)
P(ω) = (1 − (ωT)^2)/[ (1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2 ]     (6.4.53)
Q(ω) = −2ξωT/[ (1 − (ωT)^2)^2 + 4ξ^2·(ωT)^2 ]     (6.4.54)
φ(ω) = { −arctg(2ξωT/(1 − (ωT)^2)) ,  ωT < 1 ;   −π/2 ,  ωT = 1 ;   −π − arctg(2ξωT/(1 − (ωT)^2)) ,  ωT > 1 }     (6.4.55)
ω_rez = ω_n·sqrt(1 − 2ξ^2) , where 0 < ξ < sqrt(2)/2     (6.4.56)
A_m = A_rez = A(ω_rez) = 1/(2ξ·sqrt(1 − ξ^2))     (6.4.57)
A(ω_n) = A(1/T) = 1/(2ξ) ,  L(ω_n) = L(1/T) = −20·lg(2ξ) ,  ω_c = ω_t = sqrt(2)·ω_rez .     (6.4.58)
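The resonance relations (6.4.56)-(6.4.57) can be verified numerically; a minimal sketch with hypothetical values T = 1, ξ = 0.2:

```python
import math

T, xi = 1.0, 0.2

def A(w):      # magnitude of H(jω) for H(s) = 1/(T^2 s^2 + 2ξT s + 1)
    return 1.0/math.sqrt((1 - (w*T)**2)**2 + 4*xi**2*(w*T)**2)

w_n   = 1.0/T
w_rez = w_n*math.sqrt(1 - 2*xi**2)          # (6.4.56)
A_m   = 1.0/(2*xi*math.sqrt(1 - xi**2))     # (6.4.57)

print(abs(A(w_rez) - A_m) < 1e-12)                        # peak value matches
print(A(w_rez) > A(0.9*w_rez), A(w_rez) > A(1.1*w_rez))   # and it is a maximum
```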
a. The asymptotic magnitude characteristic:

La(ω) = { 0, ωT < 1 ;  −40·lg(ωT), ωT ≥ 1 }   (6.4.59)

A(ω) ≈ Aa(ω) = { 1, ωT < 1 ;  1/(ωT)^2, ωT ≥ 1 }   (6.4.60)

[Figure no. 6.4.11. Magnitude characteristics L(ω) of the oscillatory element for several values of ξ (ξ = 0, 0 < ξ < 1/2, ξ = 1/2, 1/2 < ξ < sqrt(2)/2, ξ ≥ sqrt(2)/2), showing the resonance peak Am at ω_rez·T, the value A(ωn), the crossover abscissa ωt·T and the asymptote of slope −40 dB/dec.]

b. The asymptotic phase characteristic:

ϕa(ω) = { 0, ωT < ω1·T ;  −π/2 − (ln(10)/ξ)·lg(ωT), ωT ∈ [ω1·T, ω2·T] ;  −π, ωT > ω2·T }   (6.4.61)

The middle segment is the tangent to ϕ(ω) at ωT = 1, with slope −ln(10)/ξ rad/decade; it reaches 0 and −π at the corner abscissas ω1·T = e^(−πξ/2) and ω2·T = e^(πξ/2) respectively.

[Figure no. 6.4.12. Asymptotic phase characteristic ϕa(ω) of the oscillatory element: segments A-B (ϕa = 0), B-C (slope −ln(10)/ξ rad/decade) and C-D (ϕa = −π), with corners at ω1·T and ω2·T.]
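If the middle segment of the asymptotic phase characteristic is taken as the tangent at ωT = 1, with slope −ln(10)/ξ rad/decade, its corner abscissas work out to ω1·T = e^(−πξ/2) and ω2·T = e^(πξ/2). A short Python check of this construction (an illustration, not from the book; ξ = 0.3 is arbitrary):

```python
import math

xi = 0.3

def phi_a(wT):
    # straight line (against lg(wT)) through (wT = 1, -pi/2)
    # with slope -ln(10)/xi rad per decade, clipped at 0 and -pi
    w1T, w2T = math.exp(-math.pi * xi / 2), math.exp(math.pi * xi / 2)
    if wT < w1T:
        return 0.0
    if wT > w2T:
        return -math.pi
    return -math.pi / 2 - (math.log(10) / xi) * math.log10(wT)

# the line hits 0 and -pi exactly at the claimed corner frequencies
assert abs(phi_a(math.exp(-math.pi * xi / 2))) < 1e-12
assert abs(phi_a(math.exp(math.pi * xi / 2)) + math.pi) < 1e-12
print(round(phi_a(1.0), 4))   # -> -1.5708
```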
The complex frequency characteristics in the (P,Q) plane are obtained by eliminating ωT in (6.4.52), (6.4.53). They look as in Fig. 6.4.13. If ξ ≥ sqrt(2)/2 the magnitude A has only decreasing values but, if ξ ∈ (0, sqrt(2)/2), a resonance appears at ω_res and ϕ(ω_res) = ϕ_res ∈ (−π/2, 0).

[Figure no. 6.4.13. Complex frequency characteristics of the oscillatory element: all curves start at P = 1 for ω = 0, cross the imaginary axis at Q = −1/(2ξ) for ω = 1/T and approach the origin for ω → ∞; for 0 < ξ < sqrt(2)/2 the curve bulges away from the origin up to ω = ω_res, with ϕ(ω_res) ∈ (−π/2, 0).]

Example. Consider

H(s) = 1/(100s^2 + 2s + 1) = 1/(T^2·s^2 + 2ξTs + 1) ⇒ T = 10 , ωn = 1/T = 0.1

2ξT = 2 ⇔ ξ = 0.1

Am = 1/[2ξ·sqrt(1 − ξ^2)] = 1/[0.2·sqrt(0.99)] = 5.02 ,  Lm = 20·lg(5.02) = 14.02 dB

ω_rez = ωn·sqrt(1 − 2ξ^2) = 0.1·sqrt(0.98) = 0.099

L(ωn) = −20·lg(2ξ) = −20·lg(0.2) = 13.98 dB

[Figure no. 6.4.14. Bode diagram of H(s) = 1/(100s^2 + 2s + 1): the magnitude L(ω) with the resonance peak Lm = 14.02 dB near ω_rez = 0.099 and L(ωn) = 13.98 dB, and the phase ϕ(ω) dropping from 0 towards −180 degrees around ωn = 0.1.]
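The numbers of this example can be reproduced with a few lines of Python (an illustration, not part of the original text):

```python
import math

# parameters read off H(s) = 1/(100 s^2 + 2 s + 1) = 1/(T^2 s^2 + 2*xi*T*s + 1)
T = math.sqrt(100)          # T = 10, so wn = 1/T = 0.1
xi = 2 / (2 * T)            # 2*xi*T = 2  =>  xi = 0.1

A_m = 1 / (2 * xi * math.sqrt(1 - xi**2))   # resonance peak
L_m = 20 * math.log10(A_m)                  # peak in dB
w_rez = (1 / T) * math.sqrt(1 - 2 * xi**2)  # resonance frequency
L_wn = -20 * math.log10(2 * xi)             # L at the natural frequency

print(round(A_m, 2), round(L_m, 2), round(w_rez, 3), round(L_wn, 2))
# -> 5.03 14.02 0.099 13.98
```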
6.5. Frequency Characteristics for Series Connection of Systems.

6.5.1. General Aspects.

Suppose that a system is expressed by a cascade of q transfer functions,

H(s) = H1(s)·H2(s)·...·Hq(s)   (6.5.1)

whose frequency characteristics are known,

Ak(ω) = |Hk(jω)| , k = 1:q   (6.5.2)
ϕk(ω) = arg(Hk(jω)) , k = 1:q   (6.5.3)

The magnitude characteristic of the series interconnected system is the product of the components' magnitudes,

A(ω) = Π k=1..q Ak(ω)   (6.5.4)

but the logarithmic magnitude and the phase characteristics of the series interconnected system are the sums of the components' characteristics,

L(ω) = Σ k=1..q Lk(ω)   (6.5.5)
ϕ(ω) = Σ k=1..q ϕk(ω)   (6.5.6)

Also the asymptotic logarithmic magnitude and phase characteristics of the series interconnected system are the sums of the components' asymptotic characteristics,

La(ω) = Σ k=1..q Lak(ω)   (6.5.7)
ϕa(ω) = Σ k=1..q ϕak(ω)   (6.5.8)

These relationships are used for plotting the characteristics of any transfer function.

Types of Factorisations.

A transfer function can be factorised at the numerator and denominator by using time constants as in (6.5.9), in which the free term of any polynomial equals 1,

H(s) = K·[Π i=1..m1 (θi·s + 1)]·[Π i=m1+1..m1+m2 (θi^2·s^2 + 2ζiθi·s + 1)] / { s^α·[Π i=1..n1 (Ti·s + 1)]·[Π i=n1+1..n1+n2 (Ti^2·s^2 + 2ξiTi·s + 1)] }   (6.5.9)

where α ∈ Z, m1 + m2 = m, n1 + n2 = n.

Based on such a factorisation, any complicated transfer function is interpreted as a series connection of the 6 types of elementary transfer functions,

H(s) = Π Hi(s) = Π Hi^ki(s) , ki ∈ [1, 6]   (6.5.10)

where Hi(s) = Hi^ki(s) is one of the already studied elementary transfer functions, whose type is indicated by the superscript ki.
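Relations (6.5.4)-(6.5.6) are easy to verify numerically. The sketch below (Python, with two arbitrarily chosen elementary factors) checks that magnitudes multiply while dB values and phases add:

```python
import cmath
import math

# two elementary factors connected in series (arbitrary example values)
def H1(w):
    return 1 / (5j * w + 1)      # aperiodic element, T = 5

def H2(w):
    return 0.1j * w + 1          # first-order zero, theta = 0.1

w = 3.0
A_series = abs(H1(w) * H2(w))
L_series = 20 * math.log10(A_series)

# eq. (6.5.4): product of magnitudes
assert abs(A_series - abs(H1(w)) * abs(H2(w))) < 1e-12
# eq. (6.5.5): sum of dB values
L_sum = 20 * math.log10(abs(H1(w))) + 20 * math.log10(abs(H2(w)))
assert abs(L_series - L_sum) < 1e-9
# eq. (6.5.6): sum of phases
phi_sum = cmath.phase(H1(w)) + cmath.phase(H2(w))
assert abs(cmath.phase(H1(w) * H2(w)) - phi_sum) < 1e-12
print("ok")
```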
The time constants factorisation reveals a factor K which is called "the general amplifier factor" of the transfer function. The exponent α indicates the number of poles of H(s) placed in the s-plane origin if α > 0, or the number of zeros of H(s) placed in the s-plane origin if α < 0.

The general amplifier factor K can be determined before the time constants factorisation, using the formula

K = lim s→0 [s^α · H(s)]   (6.5.11)

For particular values of α, K has the following names:
α = 0: K = Kp, the position amplifier factor;
α = 1: K = Kv, the speed amplifier factor;
α = 2: K = Ka, the acceleration amplifier factor.   (6.5.12)

These general amplifier factors express the ratio between the steady state values of the output y(∞) (for Kp), of the output first derivative y'(∞) (for Kv), of the output second derivative y''(∞) (for Ka), and the steady state value of the input u(∞):

Kp = lim s→0 H(s) = lim t→∞ y(t)/u(t) = y(∞)/u(∞)   (6.5.13)
Kv = lim s→0 s·H(s) = lim t→∞ y'(t)/u(t) = y'(∞)/u(∞)   (6.5.14)
Ka = lim s→0 s^2·H(s) = lim t→∞ y''(t)/u(t) = y''(∞)/u(∞)   (6.5.15)

A transfer function can also be factorised by poles and zeros, as in (6.5.16), in which the main term of any polynomial equals 1,

H(s) = B·[Π i=1..m1 (s + zi)]·[Π i=m1+1..m1+m2 (s^2 + 2ζiΩi·s + Ωi^2)] / { s^α·[Π i=1..n1 (s + pi)]·[Π i=n1+1..n1+n2 (s^2 + 2ξiωi·s + ωi^2)] }   (6.5.16)

where

zi = 1/θi ⇒ ζi = −zi are the real zeros,   (6.5.17)
pi = 1/Ti ⇒ λi = −pi are the real poles.   (6.5.18)

Sometimes we also call pi a "pole" and zi a "zero". The factorisation by poles and zeros of a transfer function is also called the "zpk" factorisation. Here B is only a coefficient which has nothing to do with the general amplifier factor of the system.

For fast Bode characteristics drawing it is useful to consider the time constants factorisation (6.5.9) split into two main factors,

H(s) = R(s)·G(s)   (6.5.19)

where

R(s) = K/s^α   (6.5.20)

incorporates the essential behaviour for low frequencies, mainly for ω → 0.
We define, for R(s),

AR(ω) = |R(jω)| = |K|·ω^(−α)   (6.5.21)
LR(ω) = 20·lg(AR(ω)) = 20·lg|K| − α·20·lg(ω)   (6.5.22)
ϕR(ω) = arg(R(jω)) = { −α·π/2, if K ≥ 0 ;  −α·π/2 − π, if K < 0 }   (6.5.23)

The second factor, denoted using the conventions of (6.5.9) as

G(s) = [Π i=1..m1 (θi·s + 1)]·[Π i=m1+1..m1+m2 (θi^2·s^2 + 2ζiθi·s + 1)] / { [Π i=1..n1 (Ti·s + 1)]·[Π i=n1+1..n1+n2 (Ti^2·s^2 + 2ξiTi·s + 1)] }   (6.5.24)

has no effect on the frequency characteristics of the series connection when ω → 0. Indeed, G(0) = 1 and

lim ω→0 |G(jω)| = 1 ,  lim ω→0 arg(G(jω)) = 0   (6.5.25)

We shall use these results in the next paragraph regarding the Bode diagram construction.

Example 6.5.1. Let us consider the transfer function

H(s) = 10·(s + 10)(20s + 1) / [s(5s + 1)(s + 2)]   (6.5.26)

Even if it is already a factorised one, it fits none of the above factorisation types. The factor 10 in front of the fraction tells us, by itself, nothing.

It is important to remark that H(s) has, beside the pole s = 0, two other simple real poles and two simple real zeros, denoted using the conventions (6.5.17), (6.5.18) as:

p0 = 0 , p1 = 0.2 , p2 = 2 ;  z1 = 0.05 , z2 = 10   (6.5.30)

Very easily we can determine, directly from H(s), the two elements of R(s): α = 1, observed by a simple inspection, so Kp = ∞ and Ka = 0, and K = Kv is evaluated using (6.5.14),

K = Kv = lim s→0 s·H(s) = 10·(0 + 10)(0 + 1) / [(0 + 1)(0 + 2)] = 50 [y]/(sec·[u])

For this transfer function the factor R(s) is

R(s) = K/s^α = 50/s   (6.5.27)

LR(ω) = 20·lg|K| − α·20·lg(ω) = 20·lg(50) − 20·lg(ω)   (6.5.28)
ϕR(ω) = arg(R(jω)) = −π/2   (6.5.29)

We can write the expression of G(s) as

G(s) = (0.1s + 1)(20s + 1) / [(5s + 1)(0.5s + 1)]   (6.5.31)

which accomplishes the condition G(0) = 1.
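The limit K = lim s→0 s^α·H(s) can also be checked numerically by evaluating s^α·H(s) at a very small real s. A Python sketch for Example 6.5.1 (an illustration, not from the book):

```python
# K = lim_{s->0} s^alpha * H(s), evaluated for Example 6.5.1 with alpha = 1
def H(s):
    return 10 * (s + 10) * (20 * s + 1) / (s * (5 * s + 1) * (s + 2))

alpha = 1
s = 1e-9                     # s -> 0 along the positive real axis
Kv = (s ** alpha) * H(s)     # the pole in the origin is cancelled by s^alpha
print(round(Kv, 4))          # -> 50.0
```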
3. 6.2)(s + 2) As a ratio of two polynomials. s(s + 0.4.5. has the order of multiplicity mi. The following steps are recommended: 1. s(5s + 1)(0. Plot the asymptotic characteristics.5.1. given by the transfer function (6. If a pole or a zero. The Bode diagram.1. Bode Diagrams Construction Procedures. To obtain the real characteristics make correction in the breaking points.5. 6.. Directly Bode Diagram Construction. for complicated transfer functions. presented in Ch.5. 2 H(s) = 200s + 2010s + 100 . for prototypes. The asymptotic characteristic of the transfer function results by adding point by point the asymptotic characteristics of the components. even if finally we can use the logarithmic scale of A(ω ) for their marking.05) H(s) = 40 . (6. FREQUENCY DOMAIN SYSTEM ANALYSIS. can be plotted rather easy by using two methods: 1.6.5.. The values of magnitude characteristics are added on a linear scale of L(ω ) expressed in dB. (s + 10)(20s + 1) H(s) = 10 (6. the transfer function is.2. Let us construct the Bode diagram (Bode plot) of the transfer function analysed in Ex.5.5s + 1) .2. Bode Diagram Construction by Components.1.5. Bode Diagram Construction by Components.5. as components of a transfer function according to the relations (6. Fre quency Characteristics for Series Connection of Systems . This method is based on using the Elementary frequency characteristics (prototype characteristics). 2.. 5. Examples of Bode Diagram Construction by Components. 2..2.1. Identify elementary transfer functions of the six types. Realise the time constants transfer function factorisation.1s + 1)(20s + 1) H(s) = 50 . magnitude and phase. real or complex. useless in frequency characteristics construction."zpk" factorisation (s + 10)(s + 0.26).8).1).5. 6. 5s 3 + 11s 2 + 2s + 0 6. Example 6. .time constants factorisation (0. 6. 
then it will be considered as determining m i separate e lementary frequency characteristics even if it is about on the same characteristic.26) s(5s + 1)(s + 2) 165 . 4.
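All three forms above describe the same system, so they must agree at every point of the complex plane. A quick Python consistency check (an illustration, not from the book):

```python
# the three forms of H(s) from Example 6.5.1 must agree at any test point
def H_factored(s):
    return 10 * (s + 10) * (20 * s + 1) / (s * (5 * s + 1) * (s + 2))

def H_timeconst(s):
    return 50 * (0.1 * s + 1) * (20 * s + 1) / (s * (5 * s + 1) * (0.5 * s + 1))

def H_zpk(s):
    return 40 * (s + 10) * (s + 0.05) / (s * (s + 0.2) * (s + 2))

def H_poly(s):
    return (200 * s**2 + 2010 * s + 100) / (5 * s**3 + 11 * s**2 + 2 * s)

for s in (0.5j, 2j, 1 + 1j, 7.3):
    ref = H_factored(s)
    assert abs(H_timeconst(s) - ref) < 1e-9 * abs(ref)
    assert abs(H_zpk(s) - ref) < 1e-9 * abs(ref)
    assert abs(H_poly(s) - ref) < 1e-9 * abs(ref)
print("all factorisations agree")
```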
The factorised time constants form is interpreted as a series connection of 6 elementary elements,

H(s) = 50·(0.1s + 1)(20s + 1) / [s(5s + 1)(0.5s + 1)] = H1(s)·H2(s)·H3(s)·H4(s)·H5(s)·H6(s)

where

H1(s) = H1^1(s) = 50 ;  H2(s) = H2^2(s) = 1/s ;  H3(s) = H3^3(s) = 1/(5s + 1) ;
H4(s) = H4^4(s) = 0.1s + 1 ;  H5(s) = H5^3(s) = 1/(0.5s + 1) ;  H6(s) = H6^4(s) = 20s + 1

We denoted by La_i, ϕa_i the asymptotic characteristics of Hi, i = 1:6.

[Figure no. 6.5.1. Asymptotic magnitude characteristics La_1, ..., La_6 and asymptotic phase characteristics ϕa_1, ..., ϕa_6 of the six components, together with their sums; magnitude breaking points at ω = 0.05, 0.2, 2, 10 and phase asymptote corners at 0.05/5, 0.05·5, 2/5, 2·5.]
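According to (6.5.7), the asymptotic magnitude of H(s) is the sum of the six components' asymptotes. A Python sketch of this bookkeeping (an illustration, not from the book; only first-order prototypes are needed here):

```python
import math

# asymptotic magnitudes (in dB) of the elementary components of
# H(s) = 50*(0.1s+1)(20s+1) / [s(5s+1)(0.5s+1)]
def La_gain(w):
    return 20 * math.log10(50)                               # H1 = 50

def La_integr(w):
    return -20 * math.log10(w)                               # H2 = 1/s

def La_lag(w, T):
    return 0.0 if w * T < 1 else -20 * math.log10(w * T)     # 1/(Ts+1)

def La_lead(w, th):
    return 0.0 if w * th < 1 else 20 * math.log10(w * th)    # (th*s+1)

def La_total(w):
    # eq. (6.5.7): the asymptotic characteristics add up
    return (La_gain(w) + La_integr(w) + La_lag(w, 5) + La_lead(w, 0.1)
            + La_lag(w, 0.5) + La_lead(w, 20))

# below the lowest breaking point (0.05) only R(s) = 50/s matters
w = 0.01
assert abs(La_total(w) - (20 * math.log10(50) - 20 * math.log10(w))) < 1e-12
print(round(La_total(w), 2))   # -> 73.98
```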
Let us consider now a transfer function,

H(s) = (100s + 1000) / (100s^2 + s) = 100·(s + 10) / [s(100s + 1)]

The time constants factorisation is performed as below:

H(s) = 1000 · (0.1s + 1) · (1/s) · 1/(100s + 1) = H1(s)·H2(s)·H3(s)·H4(s)

where H1(s) = 1000, H2(s) = 1/s, H3(s) = 0.1s + 1, H4(s) = 1/(100s + 1), with the time constant T1 = 100 and the zero z1 = 10.

H1(jω) = 1000 ,  L1 = 20·lg(10^3) = 60 dB
A2(ω) = |H2(jω)| = |1/(jω)| = 1/ω ,  L2 = 20·lg(A2(ω)) = 20·lg(ω^(−1)) = −20·lg(ω) = −20·x , x = lg(ω)

The Bode diagram drawn using Matlab is depicted in Fig. 6.5.2.

[Figure no. 6.5.2. Bode diagram, magnitude L(ω) and phase ϕ(ω), of H(s) = (100s + 1000)/(100s^2 + s), drawn using Matlab.]

To reveal the influence of the Bode diagram shape on the time response, two transfer functions are considered,

H1(s) = 1/(100s^2 + s + 1) and H2(s) = 1/(100s^2 + 5s + 1)

Their Bode diagrams and step responses are depicted in Fig. 6.5.3.
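The two systems share T = 10 (ωn = 0.1) and differ only in the damping factor, ξ1 = 0.05 versus ξ2 = 0.25, so the weakly damped H1 should show a much higher resonance peak (and, correspondingly, a more oscillatory step response). A Python sketch of the comparison (an illustration, not from the book):

```python
import math

# H(s) = 1/(T^2 s^2 + 2*xi*T*s + 1) with T = 10 in both cases
T = 10.0
xi1 = 1 / (2 * T)   # from 100s^2 +  s + 1 : 2*xi*T = 1  =>  xi = 0.05
xi2 = 5 / (2 * T)   # from 100s^2 + 5s + 1 : 2*xi*T = 5  =>  xi = 0.25

def peak(xi):
    # resonance peak of |H(jw)| for xi < sqrt(2)/2
    return 1 / (2 * xi * math.sqrt(1 - xi**2))

# the smaller the damping, the higher the resonance peak
print(round(peak(xi1), 2), round(peak(xi2), 2))   # -> 10.01 2.07
```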
[Figure no. 6.5.3. Bode diagrams and step responses of H1(s) = 1/(100s^2 + s + 1) and H2(s) = 1/(100s^2 + 5s + 1): H1, with the smaller damping factor, shows a higher resonance peak in the magnitude characteristic and a more oscillatory step response.]

6.5.2.2. Direct Bode Diagram Construction.

The Bode diagrams can be directly plotted considering the following observations regarding the asymptotic behaviour:

1. A real pole λi = −pi < 0 determines, in the breaking point ω = pi > 0, a change of the asymptotic magnitude characteristic slope by −20 dB/dec (down-breaking point of 20 dB/dec).
2. A real zero ζi = −zi < 0 determines, in the breaking point ω = zi > 0, a change of the asymptotic magnitude characteristic slope by +20 dB/dec (up-breaking point of 20 dB/dec).
3. A complex pair of poles s = λ_k1, λ_k2, with Re(λ_k1, λ_k2) < 0, solutions of the equation Tk^2·s^2 + 2ξkTk·s + 1 = 0, ξk ∈ (0, 1), determines, in the breaking point ω = ωn,k = 1/Tk > 0, a change of the asymptotic magnitude characteristic slope by −40 dB/dec (down-breaking point of 40 dB/dec).
4. A complex pair of zeros s = ζ_i1, ζ_i2, with Re(ζ_i1, ζ_i2) < 0, solutions of the equation θi^2·s^2 + 2ζiθi·s + 1 = 0, ζi ∈ (0, 1), determines, in the breaking point ω = ωn,i = 1/θi > 0, a change of the asymptotic magnitude characteristic slope by +40 dB/dec (up-breaking point of 40 dB/dec).
5. For ω < min(zi, pi, ωn,i, ωn,k), ∀i, k, the asymptotic behaviour is determined only by the term R(s) from (6.5.20).
The following steps are recommended:
1. Evaluate the poles and zeros pi, zi, ωn,i, ωn,k and place them on the plotting sheet on a logarithmic scale, marking each zero by a small circle and each pole by a small cross. In such a way we determine the system frequency bandwidth of interest.
2. Choose a starting frequency ω0 inside the system frequency bandwidth, below the first breaking point, and mark the point M of coordinates (ω0, LR(ω0)), where LR(ω0) = 20·lg|K| − α·20·lg(ω0).
3. Draw a straight line through the point M having the slope equal to −(α·20) dB/dec until the first breaking point abscissa is reached.
4. Keep drawing segments of straight lines between two consecutive breaking points, each with the slope equal to the previous slope plus the change

[Figure. Direct construction of the asymptotic magnitude characteristic for the transfer function of Ex. 6.5.1: starting from the point M with slope S1 = −(α·20) = −20 dB/dec, the slope changes at the breaking points ω = 0.05, 0.2, 2, 10 to S2 = S1 + 20 = 0 dB/dec, S3 = S2 − 20 = −20 dB/dec, S4 = S3 − 20 = −40 dB/dec, S5 = S4 + 20 = −20 dB/dec.]
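The slope bookkeeping used in the direct construction can be sketched in Python (an illustration for the transfer function of Ex. 6.5.1, not from the book):

```python
# direct construction of the asymptotic slopes for
# H(s) = 50*(0.1s+1)(20s+1) / [s(5s+1)(0.5s+1)]  (Example 6.5.1)
alpha = 1
breaks = sorted([(0.05, +20),   # zero at 0.05 -> up-break of 20 dB/dec
                 (10.0, +20),   # zero at 10
                 (0.2, -20),    # pole at 0.2  -> down-break of 20 dB/dec
                 (2.0, -20)])   # pole at 2

slope = -20 * alpha             # starting slope, given by R(s) = K/s^alpha
slopes = [slope]
for w, change in breaks:        # each breaking point adds its slope change
    slope += change
    slopes.append(slope)
print(slopes)                   # -> [-20, 0, -20, -40, -20]
```

The resulting sequence of segment slopes matches the values S1, ..., S5 read off the constructed characteristic.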