
CHAPTER 2
MATHEMATICAL BACKGROUND
2.1 INTRODUCTION:
In order to analyze a dynamic system, an accurate
mathematical model that describes the system completely must be
determined. Because the system under consideration is dynamic in
nature, the descriptive equations are usually differential equations.
Furthermore, if these equations can be linearized, the Laplace
transform can be used to simplify the method of solution.
In this chapter the mathematical background needed for the
study of control theory is presented. This background includes
differential equations, complex-variable theory, the Laplace
transform, and matrix theory.

2.2 DIFFERENTIAL EQUATIONS


2.2.1 LINEAR ORDINARY DIFFERENTIAL EQUATIONS
Differential equations generally involve derivatives and
integrals of the dependent variables with respect to the independent
variable. For example, an RLC (resistance-inductance-capacitance)
network can be represented by the differential equation:

R i(t) + L di(t)/dt + (1/C) ∫ i(t) dt = e(t)        (2.1)

where R is the resistance, L the inductance, C the capacitance, i(t) the
current in the network, and e(t) the applied voltage. In this case, e(t) is the
forcing function, t the independent variable, and i(t) the dependent variable.
Equation (2.1) is referred to as a second-order differential equation;
it is also referred to as an integro-differential equation, since an
integral is involved.


In general, the differential equation of an nth-order system is
written as

d^n y(t)/dt^n + a_{n-1} d^{n-1} y(t)/dt^{n-1} + ... + a_1 dy(t)/dt + a_0 y(t) = f(t)        (2.2)

which is known as a linear ordinary differential equation if the
coefficients a0, a1, ..., a_{n-1} are not functions of the dependent variable y(t).
Dynamic systems that are composed of linear time-invariant
lumped-parameter components may be described by linear time-invariant
(constant-parameter) differential equations. Such systems
are called linear time-invariant (or linear constant-coefficient)
systems. Differential equations whose parameters are functions of
time are denoted linear time-varying differential equations, and systems
that are represented by them are called linear time-varying systems.
An example of a time-varying control system is a spacecraft control
system, whose mass changes due to fuel consumption.
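As a concrete illustration (not from the text), a linear constant-coefficient equation of the form of Eq. (2.2) can be solved symbolically. The sketch below uses sympy with a hypothetical second-order example and zero initial conditions:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Hypothetical second-order linear time-invariant ODE in the form of Eq. (2.2):
#   y'' + 3 y' + 2 y = 1   (constant forcing f(t) = 1)
ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), 1)

# Solve with zero initial conditions y(0) = 0, y'(0) = 0.
sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
print(sol.rhs)  # transient exponentials plus the steady-state value 1/2
```

The solution exhibits both parts discussed later in Section 2.4: decaying exponentials (the transient) and a constant term (the steady state).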

2.2.2 Nonlinear Differential Equations


Many physical systems are nonlinear and must be described by
nonlinear differential equations. If a differential equation contains
terms which are higher powers, products, or transcendental functions
of the dependent variables, it is nonlinear. Such terms include (dy/dt)²,
x(dy/dt), and sin x, respectively.
Examples of nonlinear differential equations are:
d²x/dt² + (dx/dt)² + x = A sin ωt

d²x/dt² + (x² − 1) dx/dt + x = 0

(d²y/dt²) cos x + sin 2x = 0

where y and x are the dependent variables and t is the independent
variable.
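The second example above (the Van der Pol equation) has no closed-form solution, but it can be integrated numerically. A sketch using scipy, with an arbitrarily chosen initial condition not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The Van der Pol equation x'' + (x^2 - 1) x' + x = 0, rewritten as two
# first-order equations with x1 = x and x2 = dx/dt.
def van_der_pol(t, state):
    x1, x2 = state
    return [x2, -(x1**2 - 1) * x2 - x1]

# Initial condition chosen arbitrarily for illustration.
sol = solve_ivp(van_der_pol, (0.0, 20.0), [0.5, 0.0], max_step=0.05)
print(sol.y[0, -1])  # the trajectory settles onto a limit cycle
```

Rewriting a higher-order equation as coupled first-order equations is exactly the decomposition formalized in the next subsection.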

2.2.3 First-Order Differential Equations: State Equations
In general, an nth-order differential equation can be
decomposed into n first-order differential equations. For the
differential equation represented by Eq. (2.1), if we let

x1(t) = ∫ i(t) dt        (2.3)

and

x2(t) = dx1(t)/dt = i(t)        (2.4)

then Eq. (2.1) is decomposed into two first-order differential
equations:

dx1(t)/dt = x2(t)        (2.5)

dx2(t)/dt = −(1/LC) x1(t) − (R/L) x2(t) + (1/L) e(t)        (2.6)
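The two state equations (2.5) and (2.6) can be integrated numerically. The sketch below assumes illustrative component values (R = 1 Ω, L = 1 H, C = 0.5 F) and a unit-step applied voltage, none of which come from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# State equations (2.5)-(2.6) for the RLC network, with assumed values.
R, L, C = 1.0, 1.0, 0.5

def rlc(t, x):
    x1, x2 = x               # x1 = integral of i(t), x2 = i(t)
    e = 1.0                  # unit-step applied voltage
    dx1 = x2                                    # Eq. (2.5)
    dx2 = -x1 / (L * C) - (R / L) * x2 + e / L  # Eq. (2.6)
    return [dx1, dx2]

sol = solve_ivp(rlc, (0.0, 20.0), [0.0, 0.0], max_step=0.05)
# At steady state the capacitor blocks DC: the current x2 decays to zero
# and the charge x1 approaches C * e = 0.5.
print(sol.y[0, -1], sol.y[1, -1])
```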
Equation (2.2) can be decomposed into n first-order differential
equations in a similar manner, if we define

x1(t) = y(t)
x2(t) = dy(t)/dt        (2.7)
⋮
xn(t) = d^{n−1} y(t)/dt^{n−1}

Then the n first-order differential equations are obtained as:

dx1(t)/dt = x2(t)
dx2(t)/dt = x3(t)        (2.8)
⋮
dxn(t)/dt = −a0 x1(t) − a1 x2(t) − ... − a_{n−2} x_{n−1}(t) − a_{n−1} xn(t) + f(t)

In control system theory, the set of first-order differential
equations of Eq. (2.8) is called the state equations, and x1, x2, ..., xn
are called the state variables.

State concepts
The state of a system is a mathematical structure containing a set of
n variables x1(t), x2(t), ..., xn(t), called the state variables, such that
the initial values xi(t0) of this set and the system inputs uj(t) are
sufficient to uniquely describe the system's future response for t ≥ t0.
There is a minimum set of state variables which is required to
represent the system accurately. The m inputs u1(t), u2(t), ..., um(t)
are deterministic; i.e., they have specific values for all values of time t ≥ t0.
Generally the initial starting time t0 is taken to be zero. The
state variables need not be physically observable and measurable
quantities; they may be purely mathematical quantities. As a
consequence of the definition of state, the following additional
definitions are generated.
State vector. The set of state variables forms the components
of the n-dimensional state vector x(t); that is,


x(t) = [x1(t), x2(t), ..., xn(t)]^T = [x1, x2, ..., xn]^T = x        (2.9)
The state-equation representation of the system consists of n
first-order differential equations. When all the inputs uj(t) to a given
system are specified for t ≥ t0, the resulting state vector uniquely
determines the system behavior for any t ≥ t0.
Output Equations. An output of a system is a variable that can be
measured, but state variables do not always satisfy this requirement.
For example, in an electrical motor the armature current, rotor speed, and
displacement may be measured physically, and all of these variables
qualify as output variables; on the other hand, the magnetic flux can be
regarded as a state variable, but it cannot be measured directly during
operation and therefore does not qualify as an output of the motor.
In general, output variables can be expressed as an algebraic
combination of the state variables.
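The idea that outputs are algebraic combinations of the states is often written compactly as y = C x. A minimal sketch with hypothetical numbers (the states and matrix below are invented for illustration):

```python
import numpy as np

# Hypothetical third-order system: states x1 (position), x2 (speed),
# x3 (magnetic flux); only position and speed are measurable outputs.
x = np.array([0.2, 1.5, 0.07])   # state vector at some instant

# Output matrix selecting the measurable combinations, y = C x.
C = np.array([[1.0, 0.0, 0.0],   # y1 = x1
              [0.0, 1.0, 0.0]])  # y2 = x2

y = C @ x
print(y)  # [0.2 1.5]
```

The flux x3 stays in the state vector but never appears in y, mirroring the motor example above.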

2.3 COMPLEX VARIABLES CONCEPT


2.3.1 Complex Variables
A complex number has a real part and an imaginary
part, both of which are constant. If the real part and/or
imaginary part are variables, the complex number is called a complex
variable. Graphically, the complex variable s can be represented by a
real component measured along the σ-axis in the horizontal
direction and an imaginary component measured along the
vertical jω-axis. Figure (2.1) shows the complex s-plane, in which

any arbitrary point s = s1 is defined by the coordinates σ = σ1 and
ω = ω1, or simply s1 = σ1 + jω1.

[Figure: the s-plane, with an arbitrary point s1 = σ1 + jω1 marked at σ = σ1, ω = ω1.]

Fig. 2.1 The complex s-plane.


2.3.2 Complex Functions
A complex function G(s), a function of s, has a real part and an
imaginary part; that is,

G(s) = Gx + jGy        (2.10)

where Gx and Gy are the real part and the imaginary part of G(s),
respectively. The magnitude of G(s) is √(Gx² + Gy²), and the angle θ of
G(s) is tan⁻¹(Gy/Gx), measured counterclockwise from the real axis. If
for every value of s there is only one corresponding value of G(s) in
the G(s)-plane, G(s) is said to be a single-valued function. Complex
functions encountered in linear control systems are single-valued
functions of s and are uniquely determined for a given value of s.
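Magnitude and angle are easy to evaluate numerically with Python's built-in complex type. A sketch using G(s) = 1/(s + 1) (the function examined later in this section) at an arbitrarily chosen point:

```python
import cmath

# G(s) = 1/(s + 1), evaluated at an arbitrary point of the s-plane.
def G(s):
    return 1 / (s + 1)

s1 = complex(1, 1)           # s1 = 1 + j1
val = G(s1)                  # G(s1) = 1/(2 + j) = 0.4 - 0.2j
magnitude = abs(val)         # |G| = sqrt(Gx^2 + Gy^2)
angle = cmath.phase(val)     # theta = atan2(Gy, Gx), measured from the real axis
print(magnitude, angle)
```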

2.3.3 Analytic Function


A function G(s) of the complex variable s is called an analytic
function in a region of the s-plane if the function and all its derivatives
exist in that region. The derivative of an analytic function G(s) is
given by:

dG(s)/ds = lim_{Δs→0} [G(s + Δs) − G(s)]/Δs = lim_{Δs→0} ΔG/Δs

The value of the derivative is independent of the choice of the path.


Since Δs = Δσ +jΔω, Δs can approach zero along an infinite number
of different paths. It can be shown that if the derivatives along two
particular paths (Δs = Δσ and Δs = jΔω), are equal, then the derivative
is unique for any other path Δs = Δσ +jΔω.
For the path Δs = Δσ (a path parallel to the real axis),

dG(s)/ds = lim_{Δσ→0} [ΔGx/Δσ + j ΔGy/Δσ] = ∂Gx/∂σ + j ∂Gy/∂σ

For the other particular path Δs = jΔω (a path parallel to the imaginary axis),

dG(s)/ds = lim_{Δω→0} [ΔGx/(jΔω) + j ΔGy/(jΔω)] = ∂Gy/∂ω − j ∂Gx/∂ω

If these two values of the derivative are equal, then

∂Gx/∂σ + j ∂Gy/∂σ = ∂Gy/∂ω − j ∂Gx/∂ω

Therefore the derivative dG(s)/ds is uniquely determined if

∂Gx/∂σ = ∂Gy/∂ω   and   ∂Gy/∂σ = −∂Gx/∂ω        (2.11)

These two conditions are known as the Cauchy-Riemann conditions. If these
conditions are satisfied, the function G(s) is analytic.
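The two conditions in Eq. (2.11) can be checked symbolically. A sketch using sympy, applied to G(s) = 1/(s + 1) with s = σ + jω (the example function used in the text):

```python
import sympy as sp

# Symbolic check of the Cauchy-Riemann conditions for G(s) = 1/(s + 1).
sigma, omega = sp.symbols('sigma omega', real=True)
G = 1 / (sigma + sp.I * omega + 1)
Gx, Gy = sp.re(G), sp.im(G)

cr1 = sp.simplify(sp.diff(Gx, sigma) - sp.diff(Gy, omega))   # should be 0
cr2 = sp.simplify(sp.diff(Gy, sigma) + sp.diff(Gx, omega))   # should be 0
print(cr1, cr2)
```

Both expressions reduce to zero wherever s ≠ −1, matching the worked example that follows.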
For instance, given the function

G(s) = 1/(s + 1),

then

G(σ + jω) = 1/[(1 + σ) + jω] · [(1 + σ) − jω]/[(1 + σ) − jω] = Gx + jGy

Hence

Gx = (1 + σ)/[(1 + σ)² + ω²]   and   Gy = −ω/[(1 + σ)² + ω²]


Since

∂Gx/∂σ = ∂Gy/∂ω = [ω² − (σ + 1)²]/[(1 + σ)² + ω²]²

and

∂Gy/∂σ = −∂Gx/∂ω = 2ω(σ + 1)/[(1 + σ)² + ω²]²
2

So G(s) satisfies the Cauchy-Riemann conditions except at s = −1
(namely, σ = −1, ω = 0).
Hence G(s) = 1/(s + 1) is analytic in the entire s-plane except at s = −1.
The derivative dG(s)/ds, except at s = −1, is found to be

dG(s)/ds = ∂Gx/∂σ + j ∂Gy/∂σ = ∂Gy/∂ω − j ∂Gx/∂ω
         = −1/(σ + jω + 1)²
         = −1/(s + 1)²

Note that the derivative of an analytic function can be obtained simply


by differentiating G (s) with respect to s.
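The path-independence of the derivative can also be checked numerically: a small step along the real axis and a small step along the imaginary axis should give the same value, −1/(s + 1)². A finite-difference sketch at an arbitrarily chosen test point:

```python
# Finite-difference check that the derivative of G(s) = 1/(s + 1) is the
# same along the paths delta_s = delta_sigma and delta_s = j*delta_omega.
def G(s):
    return 1 / (s + 1)

s0 = complex(0.5, 2.0)       # arbitrary test point, away from s = -1
h = 1e-6

d_real = (G(s0 + h) - G(s0)) / h              # step along the real axis
d_imag = (G(s0 + 1j * h) - G(s0)) / (1j * h)  # step along the imaginary axis
exact = -1 / (s0 + 1) ** 2

print(abs(d_real - exact), abs(d_imag - exact))  # both near zero
```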

2.3.4 Singularities and Poles of a Function
The points in the s-plane at which the function or its
derivatives do not exist are called singular points; a pole is the most
common type of singularity. If a function G(s) is analytic and single-valued
in the neighborhood of si, it is said to have a pole of order r at s = si
if the limit

lim_{s→si} [(s − si)^r G(s)]

has a finite, nonzero value.

If r = 1, the pole is called a simple pole. If r = 2, 3, ..., the pole is
called a second-order pole, a third-order pole, etc. As an example,
the function

G(s) = (s + 3)/[s (s + 2)(s + 4)²]        (2.12)

has a pole of order 2 at s = −4 and simple poles at s = 0 and s = −2. It can
be said that the function G(s) is analytic in the s-plane except at these poles.

2.3.5 ZEROS OF A FUNCTION
If the function G(s) is analytic at s = si, it is said to have a zero
of order r at s = si if the limit lim_{s→si} [(s − si)^{−r} G(s)] has a finite,
nonzero value. The function given by Eq. (2.12) has a zero at s = −3. If the
points at infinity are included, G(s) has the same number of poles as
zeros. The function G(s) given by Eq. (2.12) has three zeros at infinity in
addition to the finite zero at s = −3, since

lim_{s→∞} G(s) = lim_{s→∞} 1/s³ = 0

Therefore, the function has a total of four poles and four zeros in the entire
s-plane, including infinity.
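The finite poles and zeros of Eq. (2.12) can be recovered numerically as polynomial roots. A sketch using numpy:

```python
import numpy as np

# Poles and zeros of G(s) = (s + 3) / [s (s + 2)(s + 4)^2] from Eq. (2.12),
# obtained as roots of the numerator and denominator polynomials.
num = np.poly([-3.0])                    # coefficients of (s + 3)
den = np.poly([0.0, -2.0, -4.0, -4.0])   # coefficients of s (s + 2)(s + 4)^2

zeros = np.roots(num)
poles = np.roots(den)
print(np.sort(zeros.real))   # finite zero at s = -3
print(np.sort(poles.real))   # poles at s = 0, s = -2, and a double pole at s = -4
```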

2.4 LAPLACE TRANSFORM


The Laplace transform is one of the mathematical tools used for the
solution of linear ordinary differential equations. In comparison with
the classical method of solving linear differential equations, the Laplace
transform has the following advantages:
1- The transient and steady-state components of the solution are
obtained simultaneously.
2- The Laplace transform converts the differential equation into an
algebraic equation in s, so the work involved in the solution is
simple algebra. The final solution is obtained by taking the
inverse Laplace transform.
3- The use of a table of transforms reduces the labor required.

2.4.1 Definition of the Laplace Transform


Given the function f(t) that satisfies the condition

∫₀^∞ |f(t)| e^(−σt) dt < ∞        (2.13)

for some finite real σ, the Laplace transform of f(t) is defined as

F(s) = ∫₀^∞ f(t) e^(−st) dt        (2.14)

or

F(s) = £[f(t)]        (2.15)
The variable s is referred to as the Laplace operator, which is a
complex variable; that is, s = σ + jω. The defining equation, Eq. (2.14),
is also known as the one-sided Laplace transform, as the integration
is evaluated from 0 to ∞. This simply means that all information
contained in f(t) prior to t = 0 is ignored or considered to be zero. This
assumption does not place any serious limitation on the applications of
the Laplace transform to linear system problems, since in the usual
time-domain studies, the time reference is often chosen at the instant
t = 0. Furthermore, for a physical system when an input is applied at t = 0,
the response of the system does not start sooner than t = 0; that is,
response does not precede excitation.
Strictly, the "one-sided" Laplace transform should be defined from
t = 0⁻ to t = ∞. The symbol 0⁻ implies that the limit t → 0 is taken from
the left side of t = 0. This limit process will take care of situations in
which the function f(t) has a jump discontinuity or an impulse at t = 0.


For the subjects treated in this text, the defining equation of the Laplace
transform in Eq. (2.14) is almost never used in problem solving, since
the transform expressions are either given or can be found from the
Laplace transform table. Thus, the fine point of using 0⁻ or 0⁺ never
needs to be addressed. For simplicity, we shall, in general and without
further justification, use t = 0 or t = t0 as the initial time in all
subsequent chapters.

In what follows we derive Laplace transforms of a few


commonly encountered functions.
1- The unit step function
Let u(t) be a unit step function defined to have a
constant value of unity for t > 0 and a zero value for t < 0:

u(t) = 0 for t < 0
u(t) = 1 for t > 0        (2.16)

Then the Laplace transform of u(t) is

£[u(t)] = ∫₀^∞ u(t) e^(−st) dt = [−(1/s) e^(−st)]₀^∞ = 1/s        (2.17)
2- The exponential function
Consider the exponential function

f(t) = e^(−at),  t ≥ 0        (2.18)

where a is a constant. The Laplace transform of f(t) is written

F(s) = ∫₀^∞ e^(−at) e^(−st) dt = [−e^(−(s+a)t)/(s + a)]₀^∞ = 1/(s + a)        (2.19)


3- The unit ramp function
The Laplace transform of the unit ramp function, defined as

r(t) = 0 for t < 0
r(t) = t for t ≥ 0        (2.20)

can be determined from

£[r(t)] = ∫₀^∞ t e^(−st) dt

Integrating by parts,

£[r(t)] = [−t e^(−st)/s]₀^∞ + (1/s) ∫₀^∞ e^(−st) dt
        = 0 + (1/s)(1/s)
        = 1/s²        (2.21)
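The three transforms derived so far can be verified symbolically. A sketch using sympy's `laplace_transform`:

```python
import sympy as sp

# Checking L{u(t)} = 1/s, L{e^(-a t)} = 1/(s + a), and L{t} = 1/s^2.
t, s, a = sp.symbols('t s a', positive=True)

F_step = sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True)
F_exp = sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True)
F_ramp = sp.laplace_transform(t, t, s, noconds=True)
print(F_step, F_exp, F_ramp)
```

Each result matches Eqs. (2.17), (2.19), and (2.21), which is the "table of transforms" advantage mentioned at the start of this section.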

4- The Laplace transforms of sin ωt and cos ωt
One of the Euler equations,

e^(jωt) = cos ωt + j sin ωt

can be applied here. Since

£[e^(jωt)] = £[cos ωt] + j £[sin ωt]

and

£[e^(jωt)] = ∫₀^∞ e^(jωt) e^(−st) dt = [−e^(−(s−jω)t)/(s − jω)]₀^∞
           = 1/(s − jω) = (s + jω)/(s² + ω²)

equating real and imaginary parts gives

£[cos ωt] = s/(s² + ω²)   and   £[sin ωt] = ω/(s² + ω²)