Copyright Information

June 2005
Second edition
Intended for use with Mathematica 4 or 5
Software and manual: Igor Bakshee
Product managers: Yezabel Dooley and Kristin Kummer
Project managers: Julienne Davison, Julia Guelfi, and Jennifer Peterson
Editor: Jan Progen
Proofreading: Richard Martin and Emilie Finn
Software quality assurance: Jay Hawkins, Cindie Strater, Angela Thelen, Rachelle Bergmann, and Shiho Inui
Package design: Jeremy Davis, Megan Gillette, and Kara Wilson
Includes Electrical Engineering Plots package by Steve Adams, Jeffrey Adams, and John M. Novak
Special thanks to John M. Novak, Leszek Sczaniecki, Todd Gayley, Roger Germundsson, Hans Hoelzer, Neil Munro,
David Smith, and Daniil Sarkissian
Published by Wolfram Research, Inc., 100 Trade Center Drive, Champaign, Illinois 61820-7237, USA
phone: +1-217-398-0700; fax: +1-217-398-0747; email: info@wolfram.com; web: www.wolfram.com
Copyright © 1996–2005 Wolfram Research, Inc.
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form
or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission
of Wolfram Research, Inc.
Wolfram Research, Inc. is the holder of the copyright to the Control System Professional software and documentation
("Product") described in this document, including without limitation such aspects of the Product as its code, structure,
sequence, organization, "look and feel", programming language, and compilation of command names. Use of the
Product, unless pursuant to the terms of a license granted by Wolfram Research, Inc. or as otherwise authorized by
law, is an infringement of the copyright.
Wolfram Research, Inc. makes no representations, express or implied, with respect to this Product, including
without limitations, any implied warranties of merchantability, interoperability, or fitness for a particular purpose,
all of which are expressly disclaimed. Users should be aware that included in the terms and conditions under
which Wolfram Research, Inc. is willing to license the Product is a provision that Wolfram Research, Inc. and its
distribution licensees, distributors, and dealers shall in no event be liable for any indirect, incidental or consequen-
tial damages, and that liability for direct damages shall be limited to the amount of the purchase price paid for the
Product.
In addition to the foregoing, users should recognize that all complex software systems and their documentation
contain errors and omissions. Wolfram Research, Inc. shall not be responsible under any circumstances for
providing information on or corrections to errors and omissions discovered at any time in this document or the
package software it describes, whether or not they are aware of the errors or omissions. Wolfram Research, Inc.
does not recommend the use of the software described in this document for applications in which errors or
omissions could threaten life, injury, or significant loss.
Mathematica, MathLink, and MathSource are registered trademarks of Wolfram Research, Inc. All other trademarks used
herein are the property of their respective owners. Mathematica is not associated with Mathematica Policy Research,
Inc. or MathTech, Inc.
#T4070
New in Version 2
Version 2 of Control System Professional carries numerous internal changes that simplify the
addition of new packages to the Control System Professional Suite while maintaining the integ-
rity of a single Mathematica application. Version 2 also includes several new algorithmic and
interface features. Except as noted in the next section, the new version is fully compatible with
Version 1.
Intuitive, traditional typesetting of control objects in the StateSpace and Transfer
Function forms; editable control objects; easy switching between automatic display
of the results in the traditional and standard forms.
Support for constructing StateSpace realizations from TransferFunction
objects in different target forms.
Support for different methods of reducing StateSpace objects to TransferFunc
tion objects.
Support for analog simulations in StateResponse, OutputResponse, and
SimulationPlot.
Faster discrete-time domain simulations.
Simplified syntax for polling inputs of multi-input systems with the same input
signal in time-domain simulation functions.
Support for interconnections of multi-input, multi-output TransferFunction
systems with SeriesConnect, ParallelConnect, and FeedbackConnect.
Support for recovering the transformation matrix used to construct realizations of
special forms (such as KalmanControllableForm or InternallyBalanced
Form).
Faster ordered Schur decomposition.
Performance optimization through internal caching. Developers can use the same
mechanism to provide advanced services to their users.
Fully indexed online documentation.
Incompatible Changes between Version 1 and Version 2
Thirty-eight new functions, options, and symbols have been added, some of whose
names may conflict with names already being used.
ReviewForm has been superseded by the built-in Mathematica function Tradition
alForm.
MinimalRealization[transferfunction] no longer provides an interface to PoleZe
roCancel and uniformly returns the StateSpace object.
The option InternallyBalanced of the function DominantSubsystem is now
obsolete. DominantSubsystem can automatically determine if the input system is
an output of the function InternallyBalancedForm.
The syntax StateFeedbackGains[a, b, poles] is no longer supported; the first
argument to StateFeedbackGains must be the StateSpace object. Example:
StateFeedbackGains[StateSpace[a, b], poles].
The option Iterations of the function StateFeedbackGains has been
superseded by the built-in Mathematica option MaxIterations.
Symbol J, a default generic name for a summation index in a symbolic sum returned
by StateResponse and related functions, has been superseded by the built-in
Mathematica symbol K.
By default, the random orthogonal complements are no longer selected using the
randomized algorithm. To revert to the behavior in the previous version, you can set
$RandomOrthogonalComplement = True.
Table of Contents
1. Getting Started ................................................................................................................ 1
1.1 Using the Application for the First Time .............................................................. 1
1.2 The Structure of the Application ............................................................................ 2
1.3 The Control Objects .................................................................................................. 3
1.4 Traditional Notations ............................................................................................... 5
1.5 The Control Format .................................................................................................. 7
1.6 The Notation for the Imaginary Unit .................................................................... 9
1.7 Numericalizing for Speed ....................................................................................... 9
1.8 Evaluating Examples in This Guide ...................................................................... 10
2. Introduction: Extending Mathematica to Solve Control Problems ................... 11
3. Description of Dynamic Systems ............................................................................... 27
3.1 Transfer Function Representations ........................................................................ 27
3.2 State-Space Representations ................................................................................... 33
3.3 Continuous-Time versus Discrete-Time Systems ................................................ 39
3.4 The Traditional Notations ....................................................................................... 41
3.5 Discrete-Time Models of Continuous-Time Systems .......................................... 44
3.5.1 The Conversion Methods ................................................................................ 48
3.6 Discrete-Time Models of Systems with Delay ..................................................... 53
3.7 Continuous-Time Models of Discrete-Time Systems .......................................... 54
4. Time-Domain Response ............................................................................................... 57
4.1 Symbolic Approach .................................................................................................. 57
4.2 Simulating System Behavior ................................................................................... 65
4.3 Step, Impulse, and Other Responses ..................................................................... 74
5. Classical Methods of Control Theory ....................................................................... 80
5.1 Root Loci .................................................................................................................... 80
5.2 The Bode Plot ............................................................................................................ 85
5.2.1 The Basic Function .......................................................................................... 85
5.2.2 Gain and Phase Margins ................................................................................. 89
5.3 The Nyquist Plot ....................................................................................................... 93
5.4 The Nichols Plot ....................................................................................................... 95
5.5 The Singular-Value Plot .......................................................................................... 96
6. System Interconnections ............................................................................................... 98
6.1 Elementary Interconnections .................................................................................. 98
6.1.1 Connecting in Series ....................................................................................... 98
6.1.2 Connecting in Parallel ..................................................................................... 103
6.1.3 Closing Feedback Loop ................................................................................... 105
6.2 Arbitrary Interconnections ...................................................................................... 110
6.3 State Feedback .......................................................................................................... 114
6.4 Manipulating a System's Contents ........................................................................ 115
6.5 Using Interconnecting Functions for Controller Design .................................... 119
7. Controllability and Observability ............................................................................. 122
7.1 Tests for Controllability and Observability .......................................................... 122
7.2 Controllability and Observability Constructs ...................................................... 126
7.3 Dual System .............................................................................................................. 130
8. Realizations ...................................................................................................................... 132
8.1 Irreducible (Minimal) Realizations ........................................................................ 133
8.2 Kalman Canonical Forms ........................................................................................ 136
8.3 Jordan Canonical (Modal) Form ............................................................................ 138
8.4 Internally Balanced Realizations ............................................................................ 140
8.5 Dominant Subsystem ............................................................................................... 142
8.6 Pole-Zero Cancellation ............................................................................................ 144
8.7 Similarity Transformation ....................................................................................... 146
8.8 Recovering the Transformation Matrix ................................................................. 148
9. Feedback Control Systems Design ............................................................................ 150
9.1 Pole Assignment with State Feedback .................................................................. 150
9.1.1 Ackermann's Formula .................................................................................... 154
9.1.2 Robust Pole Assignment ................................................................................. 158
9.2 State Reconstruction ................................................................................................. 161
10. Optimal Control Systems Design ............................................................................ 165
10.1 Linear Quadratic Regulator .................................................................................. 166
10.2 Optimal Output Regulator .................................................................................... 171
10.3 Riccati Equations .................................................................................................... 172
10.4 Discrete Regulator by Emulation of Continuous Design ................................. 178
10.5 Optimal Estimation ................................................................................................ 181
10.6 Discrete Estimator by Emulation of Continuous Design ................................. 184
10.7 Kalman Estimator ................................................................................................... 185
10.8 Optimal Controller ................................................................................................. 192
11. Nonlinear Control Systems ....................................................................................... 197
11.1 Local Linearization of Nonlinear Systems .......................................................... 197
vi Control System Professional
11.2 Rational Polynomial Approximations ................................................................. 202
12. Miscellaneous ................................................................................................................ 204
12.1 Ordered Schur Decomposition ............................................................................. 204
12.2 Lyapunov Equations .............................................................................................. 206
12.3 Rank of Matrix ........................................................................................................ 210
12.4 Part Count and Consistency Check ..................................................................... 210
12.5 Displaying Graphics Array Objects Together .................................................... 211
12.6 Systems with Random Elements .......................................................................... 211
References .............................................................................................................................. 214
Index ....................................................................................................................................... 215
Table of Contents vii
1. Getting Started
Control System Professional is a collection of Mathematica programs that extend Mathematica to
solve a wide range of control system problems. Both classical and modern approaches are
supported for continuous-time (analog) and discrete-time (sampled) systems. This guide
describes in detail the new data types introduced in Control System Professional and the new
functions that operate on these data types. Also given are a number of examples of how to use
the Control System Professional functions together with the rest of Mathematica. It is beyond the
scope of this guide to address those innumerable control problems that could be solved
simply with step-by-step application of the usual Mathematica functionality. Nor can the guide
be considered an introduction to control systems theory. Although an attempt was made to
put the new functions into the relevant control theory context, the guide is by no means a
substitute for standard texts such as the ones listed in the References.
The many illustrations, solved examples, and other included features should make it possible
for the interested reader to tackle most of the problems just after reading the corresponding
parts of this guide. However, the guide is definitely not an introduction to Mathematica itself.
To gain the most from this application package, the reader is advised to consult the standard
Mathematica reference by Stephen Wolfram, The Mathematica Book, 4th Edition (Wolfram
Media/Cambridge University Press, 1999).
1.1 Using the Application for the First Time
Control System Professional is one of many available Mathematica applications and is normally
installed in a separate directory, ControlSystems, in parallel to other applications. If this
has been done at the installation stage, the application package should be visible to Mathemat-
ica without further effort on your part. Then, to make all the functionality of the application
package available at once, you simply load the Kernel/init.m package with the Get or
Needs command.
« This makes Control System Professional available.
In[1]:= ControlSystems`
If the previous command causes an error message, it is probably due to a nonstandard loca-
tion of the application on your system, and you will have to check that the directory enclosing
the ControlSystems directory is included in your $Path variable. Commands such as
AppendTo[$Path, theDirectoryControlSystemsIsIn]
can be used to inform Mathematica of how to find the application. You may want to add this
command to your init.m file to have it executed automatically at the outset of any of your
Mathematica sessions. The installation card that came with Control System Professional contains
the detailed instructions on the installation procedure.
1.2 The Structure of the Application
Control System Professional consists of this guide and accompanying packages. The entire guide
is provided as Mathematica notebooks that are accessible from your Help Browser. The note-
books are located in the Documentation directory. The palette included with Control System
Professional can be found in the FrontEnd/Palettes directory.
The following packages are located in the main ControlSystems directory. The packages
listed in the first part of the table typically correspond to separate sections of this guide and
can be loaded into your Mathematica session independently (with or without prior loading of
the Kernel/init.m package). Supplemental packages are listed in the second part of the
table.
2 Control System Professional
Common.m control systems objects and conversion between them
Conversions.m conversion between
continuous-time and discrete-time domains
Simulations.m investigating systembehavior in the time domain
Plots.m classical methods—root locus and frequency response
Connections.m systeminterconnections and manipulating systemcontents
Properties.m controllability and observability properties
Realizations.m equivalent and reduced representations of the same system
PoleAssignment.m pole assignment using state feedback
LQdesign.m optimal linear quadratic design of control systems
Linearization.m local linearization of nonlinear systems
Lyapunov.m Lyapunov equations solver
Riccati.m Riccati equations solver
SolversCommon.m common definitions for the
Lyapunov and Riccati equations solvers
SchurOrdered.m ordered Schur decomposition
CycleOptions.m options handling routine
EEPlotsExtensions.m extensions to the Electrical
Engineering Examples plotting routines
Kernel/init.m initialization file
Packages within Control System Professional .
1.3 The Control Objects
Most Control System Professional functions operate on special data types, or control objects, that
contain the available information of the control system. These are TransferFunction,
StateSpace, and ZeroPoleGain. The control objects are freely convertible from one to
another and are easy to pass from one function to another. You can think of control objects as
"active wrappers". On the one hand, they are containers, or wrappers, that conveniently
combine the information about the system in one Mathematica expression. On the other hand,
they work like functions when one is applied to another.
1. Getting Started 3
« Let us create an integrator system in the transfer function form.
In[2]:= TransferFunction ]s,
1

s
|
Out[2]= TransferFunction|s, ¦¦
1

s
¦¦|
« We find a state-space realization of the transfer function object by applying the
StateSpace head to it. In this case, the resultant state-space system contains very
simple matrices A, B, and C. The percentage mark, % refers as usual to the result of
the preceding computation.
In[3]:= StateSpace[% ]
Out[3]= StateSpace|{{0¦¦,{{1¦¦,{{1¦¦]
No special functions are needed to convert one control object to another. Simply apply
the desired head to the object you wish to convert.
Converting between control objects.
Along with the structural information about the system, control objects may contain a refer-
ence to the domain (continuous-time or discrete-time) the system is in and/or the period at
which the (discrete-time) system was sampled.
« This finds a discrete-time approximation to the integrator system. Notice that the
result is still the TransferFunction object, in which the discrete-time domain is
indicated by the option Sampled. For the convention for using internal variable in
TransferFunction, refer to Section 3.1 ff.
In[4]:= ToDiscreteTime]TransferFunction ]s,
1

s
|, Sampled Period[T]|
Out[4]= TransferFunction|s, ¦¦
T

1s
¦¦,SampledPeriod|T]|
« This converts the discrete-time object back to continuous time.
In[5]:= ToContinuousTime[%]
Out[5]= TransferFunction|s, ¦¦
1

s
¦¦|
4 Control System Professional
By default, the system is assumed to be in the continuous-time domain if the Sampled option
is not supplied. You can reverse this if you are mainly dealing with the discrete-time systems.
For a detailed description of the control objects, see Chapter 3.
1.4 Traditional Notations
When using the notebook front end, you will often find it useful to represent the control
objects in their traditional typeset form. That can be done either by applying the Mathematica
function TraditionalForm or by selecting an expression that contains one or more control
objects and executing the menu command Cell Convert To TraditionalForm (or the corre-
sponding keyboard shortcut, as described in the documentation for your copy of Mathematica).
The Control Format palette provided in Control System Professional allows you to switch
between automatic display of results in the traditional form and standard Mathematica output.
« This is a single-input, two-output TransferFunction object in Traditional
Form. Since the object is believed to be in the continuous-time domain, the variable
is used. The superscripted letter distinguishes the result from a regular matrix.
In[6]:=
1
\
.
.system TransferFunction]s, ]]
1

s
¦,]
1

s Α
¦¦|
\
!
.
.//TraditionalForm
Out[6]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
1

1

+ Α
\
!
.
.
.
.
.
.
.
.
.
.
.

« This is the TraditionalForm of the discretized object. It displays using the
variable . The subscript gives the value of the sampling period.
In[7]:= ToDiscreteTime[%, Sampled Period [2]]//Simplify //TraditionalForm
Out[7]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
2

÷ 1
1 ÷ :
2 Α

Α ÷:
2 Α
Α
\
!
.
.
.
.
.
.
.
.
.
.
.
.
2

1. Getting Started 5
« This is a possible state-space realization of the above system in TraditionalForm.
The superscripted letter identifies the StateSpace object, while the small
subscripted bullet character denotes the continuous-time domain.
In[8]:= StateSpace[system]//TraditionalForm
Out[8]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0
0 ÷Α 1
Α 1 0
0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Additionally, Control System Professional provides the function EquationForm that allows
you to display the StateSpace objects as the familiar state-space equations. These have the
conventional form for both continuous-time and discrete-time systems. Note that Equation
Form disregards the value of the sampling period.
« This represents the above StateSpace object as a pair of matrix state-space
equations.
In[9]:= % //EquationForm
Out[9]//EquationForm=

=
l
\
.
..
0 1
0 ÷Α
\
!
.
.. +
l
\
.
..
0
1
\
!
.
..
=
l
\
.
..
Α 1
0 1
\
!
.
..
« For the discretized system, the state-space equations are displayed as difference
rather than differential equations.
In[10]:= ToDiscreteTime[%, Sampled Period [Τ]]//Simplify //EquationForm
Out[10]//EquationForm=
( + 1) =
l
\
.
.
.
.
.
.
.
1
1 ÷ :
÷Α Τ

Α
0 :
÷Α Τ
\
!
.
.
.
.
.
.
.
() +
l
\
.
.
.
.
.
.
.
.
.
.
.
Α Τ +:
÷ΑΤ
÷ 1

Α
2
1 ÷ :
÷Α Τ

Α
\
!
.
.
.
.
.
.
.
.
.
.
.
()
() =
l
\
.
..
Α 1
0 1
\
!
.
.. ()
6 Control System Professional
« Here is the same system in TraditionalForm.
In[11]:= % //TraditionalForm
Out[11]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1
1 ÷:
÷Α Τ

Α
ΑΤ + :
÷Α Τ
÷ 1

Α
2
0 :
÷Α Τ
1 ÷ :
÷ΑΤ

Α
Α 1 0
0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Τ

Both TraditionalForm and EquationForm provide convenient formatting. Neither
changes the internal representation of the objects. In this respect, the functions behave much
like OutputForm or MatrixForm (and all other members of the $OutputForms list).
« Despite different formatting, the previous result is still the StateSpace object.
In[12]:= %
Out[12]= StateSpace|¦¦1,
1
Α Τ

Α
¦, {0,
Α Τ
¦¦,
¦¦
1
Α Τ
Α Τ

Α
2
¦,¦
1
Α Τ

Α
¦¦,{{Α,1¦,{0, 1¦¦, SampledPeriod|Τ]|
Typically, you can freely copy, paste, and edit the typeset representations of control objects.
When editing, however, exercise caution to prevent destruction of the invisible tags that allow
an unambiguous interpretation of the object in typeset form. As a rule of thumb, you will
typically find it safe to select exactly the part of expression that you want to edit or to drag
across the entire object and choose Edit Copy when you want to copy the object as a whole
(or, better yet, copy the entire cell that contains the control object).
1.5 The Control Format
SetControlFormat|] display control objects and
matrices in TraditionalForm
SetStandardFormat|] restore the standard Mathematica output format
Two output modes.
Instead of applying TraditionalForm to every control object individually, you can switch
to displaying control objects in traditional form automatically. This can be done by issuing the
1. Getting Started 7
command SetControlFormat[] or by simply clicking the "Control Format" button in the
Control Format palette, which is available under the Palettes submenu of the File menu (see
Figure 1.1). Additionally, SetControlFormat[] turns on the TraditionalForm display
for all matrices and some expressions that involve control objects and matrices. You can revert
to the standard Mathematica output format by issuing the command SetStandardFormat[]
or by clicking the "Standard Format" button in the Control Format palette. The SetControl
Format function sets an appropriate value for the built-in global variable $PrePrint. Set
StandardFormat restores the previous setting of that variable, if any.
Figure 1.1. Setting the display of control objects in TraditionalForm and matrices in MatrixForm.
In this guide we routinely use the control format, but switch to the standard one as needed to
highlight the underlying standard-form representation of the object in question. As the two
formats look quite different and can be easily distinguished, no notice is given in the text to
identify them. If you reevaluate the online documentation and your results appear in the
different format, simply switch to the appropriate format using the Control Format palette or
convert the individual cell to the appropriate format using the menu choice or the correspond-
ing keyboard shortcut.
« At this point, we switch to the control format. As a result, the matrix displays in the
traditional form and so does the StateSpace object.
In[13]:= Table[ij, {i, 2}, {j, 3}]
Out[13]=
l
\
.
..
2 3 4
3 4 5
\
!
.
..
In[14]:= StateSpace[{{0}},{{1}},{{1}}]
Out[14]=
l
\
.
..
0 1
1 0
\
!
.
..

8 Control System Professional
1.6 The Notation for the Imaginary Unit
Mathematica uses the letter I (i in the notebook front end) for the imaginary unit

÷1 , which
is not the standard notation in the control literature. However, it is quite easy to set things up
differently. Recall that the expression + I, for example, is a shortcut for Complex[ , ].
Therefore, to change the appearance of complex numbers, it is sufficient to change the format-
ting rule for Complex, as it is done in the following few lines. You may want to add an analo-
gous definition to your init.m file if you prefer an alternative to the built-in notation.
« This changes the way complex numbers appear on the screen.
In[15]:= (
Unprotect[Complex];
Format[Complex[x_, y_]]:x " " y;
Protect[Complex];
)
« From now on, appears as the letter .
In[16]:= 23
¸------
1
Out[16]= 23
1.7 Numericalizing for Speed
A word of common sense is in order before you start working with the application. As is the
case with many built-in Mathematica routines, the Control System Professional functions, unless
stated otherwise in this guide, accept both exact and inexact input (or a mix of the two) and
handle them appropriately, attempting to give an exact answer to a problem involving an
exact input. You do not have to worry about choosing the right algorithm; it is done in trans-
parent fashion. However, you do have to realize that the algorithms invoked with the two
types of input can be quite different, as can the computing time, with the exact computation
sometimes being considerably more time consuming. You are therefore advised, whenever
possible, to make sure that at least a part of the input contains an inexact expression to pre-
vent running into a long exact calculation unnecessarily.
1. Getting Started 9

1
2 3 2 3
« Consider, for example, a random 4×4 exact matrix.
In[17]:= Array[Random[Integer]&, {4, 4}]
Out[17]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1 0 0 0
0 0 0 1
1 0 1 0
1 1 0 1
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
« These are its exact eigenvalues together with the time taken by the CPU to compute
the result.
In[18]:= Eigenvalues[%]//Timing
Out[18]= ¦0.1 Second, ¦1, 1,
1

2
¹1

5 |,
1

2
¹1

5 |¦¦
« If an inexact result suffices for the particular purpose, considerable savings in
computing time can be realized just by using inexact input at the outset. (The double
percentage mark, %%, is the familiar reference to the result of the next-to-previous
computation.)
In[19]:= Eigenvalues[N[%%]]//Timing
Out[19]= {0. Second, {1.61803, 1., 1., 0.618034¦¦
1.8 Evaluating Examples in This Guide
When evaluating examples in this guide on your computer system, you may sometimes find
that your results differ somewhat from the documentation. Differences in symbolic expres-
sions may come from different heuristic rules employed in different versions of Mathematica;
they usually vanish after a simplification. Small numerical residuals may be different because
of the different machine arithmetic. Further, some numerical results may be different if a
numerical algorithm, also based on the machine arithmetic, selects another possible value
from the universe of equivalent values. In such cases, you can usually confirm that you are
still getting a correct result by testing some property of the resulting system (e.g., eigenvalues
of the closed loop system after a pole placement). Finally, the concrete values after random
distortions will, of course, be different in your experiments.
10 Control System Professional
2. Introduction: Extending Mathematica to Solve Control
Problems
In this chapter, using the classical example of controlling an inverted pendulum, we learn
how to formulate a control problem in Mathematica, solve the problem using the Control System
Professional functionality, and analyze the results using the standard Mathematica functions.
We will see that, by being seamlessly incorporated with the rest of Mathematica, this applica-
tion package provides a convenient environment for solving typical control engineering
problems. Additional solved examples will be given in later chapters when individual func-
tions are discussed.
The inverted pendulum shown in Figure 2.1 is a massive rod mounted on a cart that moves in
a horizontal direction in such a way that the rod remains vertical. The vertical position of the
rod is unstable, and the cart exerts a force to provide the attitude control of the pendulum.
This or a similar model is considered in many textbooks on control systems. We will follow
Brogan (1991), pp. 590–92.
Θ
x
c
X
Figure 2.1. Inverted pendulum.
Let us obtain a mathematical model for the system. Assume that the length of the pendulum is
L, and its mass and the moment of inertia about its center of gravity are m and J , respectively.
The mass of the cart is M. Then, summing the forces applied to the pendulum in horizontal
and vertical directions (Figure 2.2a), we have
(2.1)
F
x
mx

c
(2.2)
F
y
m g my

c
where F
x
and F
y
are the components of the reaction force at the support point, and
x
c
X Ll 2sinΘ and y
c
Ll 2cosΘ are the horizontal and vertical displacements of the
center of gravity of the pendulum; x
c
depends on the horizontal displacement X of the cart.
Summing all the moments around the center of gravity of the pendulum gives the dynamical
equation
(2.3)
F
y
L

2
sinΘ
F
x
L

2
cosΘ JΘ

where J mL
2
l 12, which corresponds to the case of uniform mass distribution along the
pendulum.
Θ
F
y
mg
F
x
Mg
F
y
F
x
f
x
f
y _
2
f
y _
2
(a) (b)
Figure 2.2. Forces applied to the rod (a) and the cart (b) of the pendulum.
Finally, for the cart we have (see Figure 2.2b)
(2.4)
f
x
F
x
MX

12 Control System Professional
where f
x
is the input force applied to the wheels.
Translating the model into Mathematica is straightforward.
This is the first equation. We will keep it as eq1 for future reference.
In[1]:= eq1=Fx == m xc
··
[t]
Out[1]= Fx==m xc
··
|t]
The next equation translates almost verbatim as well.
In[2]:= eq2=Fy - m g == m yc
··
[t]
Out[2]= Fy-g m==m yc
··
|t]
This is the dynamical equation.
In[3]:= eq3=
1
----
2
Fy L Sin[Θ[t]]-
1
----
2
Fx L Cos[Θ[t]]==J Θ
··
[t]
Out[3]= -
1
----
2
Fx L Cos|Θ|t]]+
1
----
2
Fy L Sin|Θ|t]]==J Θ
··
|t]
This gives the definition for the moment of inertia J.
In[4]:= J =
m L
2
-----------
12
Out[4]=
L
2
m
----------
12
Here is the last equation.
In[5]:= eq4 =fx-Fx ==M X
··
[t]
Out[5]= fx-Fx==M X
··
|t]
We now define the horizontal displacement xc of the pendulum through the
displacement of the cart X and the angle Θ. As the two depend on time t, then so
does xc. The pattern notation t_ on the left-hand side makes the formula work for
any expression t.
In[6]:= xc[t_]=X[t]+
1
----
2
L Sin[Θ[t]]
Out[6]=
1
----
2
L Sin|Θ|t]]+X|t]
2. Introduction: Extending Mathematica to Solve Control Problems 13
Here is the corresponding assignment to the vertical displacement yc.
In[7]:= yc[t_]=
1
----
2
L Cos[Θ[t]]
Out[7]=
1
----
2
L Cos|Θ|t]]
Notice that we have defined eq1 … eq4 as logical equations (using the double equation mark
==) and the expressions for xc and yc as assignments to these symbols (using the single =). We
then have the benefit of not solving the differential equation against X
··
|t], but simply elimi-
nating it algebraically along with the other variables we no longer need.
Eliminate takes the list of equations and the list of variables to eliminate. The
result is a nonlinear differential equation between the input force fx and the angular
displacement Θ and its first and second derivatives. It is solvable for Θ
··
|t].
In[8]:= Eliminate[{eq1, eq2, eq3, eq4},{Fx, Fy, X
··
[t]}]
Out[8]= L m 6 g m Sin|Θ|t]]+6 g M Sin|Θ|t]]-3 L m Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
-
L m Θ
··
|t]-L M Θ
··
|t]-3 L M Cos|Θ|t]]
2
Θ
··
|t]-
3 L m Sin|Θ|t]]
2
Θ
··
|t]-3 L M Sin|Θ|t]]
2
Θ
··
|t])==6 fx L m Cos|Θ|t]]
Solve returns a list of rules that give generic solutions to the input equation.
In[9]:= Solve[%, Θ
··
[t]]
Out[9]= {{Θ
··
|t]--3 2 fx Cos|Θ|t]]-2 g m Sin|Θ|t]]-
2 g M Sin|Θ|t]]+L m Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
))/
L m+M+3 M Cos|Θ|t]]
2
+3 m Sin|Θ|t]]
2
+3 M Sin|Θ|t]]
2
))¦¦
We have, in fact, just a single rule and we extract it from the lists.
In[10]:= sln =% #1, 1]
Out[10]= Θ
··
|t]--3 2 fx Cos|Θ|t]]-2 g m Sin|Θ|t]]-
2 g M Sin|Θ|t]]+L m Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
))/
L m+M+3 M Cos|Θ|t]]
2
+3 m Sin|Θ|t]]
2
+3 M Sin|Θ|t]]
2
))
14 Control System Professional
As our next step, we create a state-space model of the system and linearize it for small pertur-
bations near the equilibrium position Θ 0. Then, based on the linearized model, we design
the state feedback controller that attempts to keep the pendulum in equilibrium. Finally, we
carry out several simulations of the actual nonlinear system governed by the controller and
see what such a controller can and cannot do.
The nonlinear state-space model of the system will be presented in the form
(2.5)
x

f(x, u)
y h(x, u)
where Θ and Θ

constitute the state vector x, f
x
is the only component of the input vector u,
and Θ makes up the output vector y.
This creates the state vector in Mathematica.
In[11]:= x ={Θ[t],Θ
·
[t]};
This sets the input and output vectors.
In[12]:= u ={fx};y ={Θ[t]};
To obtain f and h in Eq. (2.5), we observe that their Mathematica equivalents f and h are
simply the derivative D[x, t] and the output vector y expressed via the state and input
variables.
The expression for the derivative contains an undesirable variable, Θ
··
|t], which is
among neither state nor input variables.
In[13]:= D[x, t]
Out[13]= {Θ
·
|t], Θ
··
|t]¦
The replacement rule stored as sln helps to get rid of Θ
··
|t].
In[14]:= f =% /.sln
Out[14]= {Θ
·
|t], -3 2 fx Cos|Θ|t]]-2 g m Sin|Θ|t]]-
2 g M Sin|Θ|t]]+L m Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
))/
L m+M+3 M Cos|Θ|t]]
2
+3 m Sin|Θ|t]]
2
+3 M Sin|Θ|t]]
2
))¦
2. Introduction: Extending Mathematica to Solve Control Problems 15
The expression for function h is trivial.
In[15]:= h =y
Out[15]= {Θ|t]¦
So far we have used the built-in Mathematica functions. Now it's time to make accessible the
library of functions provided in Control System Professional.
This loads the application.
In[16]:= <<ControlSystems`
For most Control System Professional functions, the input state-space model must be linear.
Therefore, our first task will be to linearize the model, that is, represent it in the form
(2.6)
x

Ax Bu
y Cx Du
This is the purpose of the function Linearize, which, given the nonlinear functions f and h
and the lists of state and input variables, supplied together with values at the nominal point
(the point in the vicinity of which the linearization will take place), returns the control object
StateSpace[a, b, c, d], where matrices a, b, c, and d are the coefficients A, B, C, and D
in Eq. (2.6).
This performs the linearization.
In[17]:= ss =Linearize[f, h , {{Θ[t],0},{Θ
·
[t],0}}, {{fx, 0}}]
Out[17]= StateSpace|¦{0, 1¦,¦-
3 -2 g m-2 g M)
---------------- ---------------- -------
L m+4 M)
, 0¦¦,
¦{0¦,¦-
6
---------------- --------
L m+4 M)
¦¦,{{1, 0¦¦, {{0¦¦|
Mapping the built-in Mathematica function Factor onto components of the
state-space object simplifies the result somewhat. (Here /@ is a shortcut for the Map
command.)
In[18]:= Factor /®%
Out[18]= StateSpace|¦{0, 1¦,¦
6 g m+M)
---------------- --------
L m+4 M)
, 0¦¦, ¦{0¦,¦-
6
---------------- --------
L m+4 M)
¦¦, {{1, 0¦¦,{{0¦¦|
16 Control System Professional
TraditionalForm often gives a more compact representation for control objects.
In[19]:= TraditionalForm[%]
Out[19]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0
6 g (m M)

L (m 4 M)
0
6

L (m 4 M)
1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.

Now let us design a state feedback controller that will stabilize the pendulum in a vertical
position near the nominal point. One way to do this is to place the poles of the closed-loop
system at some points p
1
and p
2
on the left-hand side of the complex plane.
In this particular case, Ackermann's formula (see Section 9.1) is used. The result is a
matrix comprising the feedback gains.
In[20]:= k =StateFeedbackGains[ss, {p1, p2}]
Out[20]= ¦¦
1
----
6
-6 g m-6 g M-L m p1 p2-4 L M p1 p2),
1
----
6
L m+4 M) p1+p2)¦¦
Note that we were able to obtain a symbolic solution to this problem and thus see immedi-
ately that, for example, only the first gain depends on g and so would be affected should our
pendulum get sent to Mars (and the change would be linear in g). We also see that the first
gain depends on the product of pole values, the second gain on their sum, and so on.
To check if the pole assignment has been performed correctly, we can find the poles of the
closed-loop system, that is, the eigenvalues of the matrix A BK.
This extracts the matrices from their StateSpace wrapper.
In[21]:= a =ss#1];b =ss#2];
We see that the eigenvalues of the closed-loop system are indeed as required.
In[22]:= Eigenvalues[a-b.k]
Out[22]= {p1, p2¦
With Control System Professional, we can also design the state feedback using the optimal
linear-quadratic (LQ) regulator (see Chapter 10). This approach is more computationally
intensive, so it is advisable to work with inexact numeric input. For convenience in presenting
results, we switch to the control print display (Section 1.5).
2. Introduction: Extending Mathematica to Solve Control Problems 17
This is the particular set of numeric values (all in SI) we will use.
In[23]:= numericValues ={m -2., M -8., L -1., g -9.8, p1--4., p2--5.}
Out[23]= {m-2., M-8., L-1., g-9.8, p1--4., p2--5.¦
Here our system is numericalized.
In[24]:= nss =ss /.numericValues
Out[24]=
l
\
.
.
.
.
.
.
.
..
0 1 0
17.2941 0 0.176471
1 0 0
\
!
.
.
.
.
.
.
.
..

Let Q and R be identity matrices.
In[25]:= Q =IdentityMatrix[2]
Out[25]=
l
\
.
..
1 0
0 1
\
!
.
..
In[26]:= R ={{1}}
Out[26]= ( 1 )
LQRegulatorGains solves the Riccati equations and returns the corresponding
gain matrix.
In[27]:= LQRegulatorGains[nss, Q, R ]
Out[27]= ( 196.005 47.1422 )
Here are the poles our system will possess when we close the loop.
In[28]:= Eigenvalues[a-b.% /.numericValues]
Out[28]= {-4.24526, -4.07396¦
Let us make some simulations of the linearized system as well as the original, nonlinear
system stabilized with one of the controllers we have designed—say the one obtained with
Ackermann's formula. We start with the linearized system and compute the transient
response of the system for the initial values of Θ(0) of 0.5, 1, and 1.2, assuming in all cases that
Θ

(0) 0. The same initial conditions will then be used for the nonlinear system, and the results
will be compared.
18 Control System Professional
Here is the list of initial conditions for Θ.
In[29]:= Θ
0
={.5, 1., 1.2}
Out[29]= {0.5, 1., 1.2¦
This is the linearized system after the closing state feedback. The function State
FeedbackConnect is described in Chapter 6 together with other utilities for
interconnecting systems.
In[30]:= StateFeedbackConnect[ss, k]//Simplify
Out[30]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0
p1 p2 p1 p2
6

L (m 4 M)
1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.

To compute how the initial condition in Θ decays in the absence of an input signal, we can use
OutputResponse, which is one of the functions defined in Chapter 4.
In this particular case, the input arguments to OutputResponse are the system to
be analyzed, the input signal (which is 0 for all t), the time variable t, and the initial
conditions for the state variables supplied as an option. The initial value for Θ is
denoted as angle.
In[31]:= OutputResponse[%, 0, t, InitialConditions -{angle, 0}]
Out[31]= ¦
angle c
p2 t
p1-c
p1 t
p2)
-------------------------------- ---------------- ----------
p1-p2
¦
2. Introduction: Extending Mathematica to Solve Control Problems 19
Here is the plot of the previous function for the chosen values Θ
0
. We store it as
plot for future reference.
In[32]:= plot =Plot[Evaluate[% /.numericValues /.angle -Θ
0
],
{t, 0, 4}, PlotStyle -RGBColor[1, 0, 0]];
1 2 3 4
0.2
0.4
0.6
0.8
1
1.2
The case of actual nonlinear system stabilized with the linear controller is more interesting,
but requires some work on our part. We note that when the control loop is closed, the input
variable—the force f
x
applied by the motor of the cart—tracks changes in state variables Θ(t)
and Θ

(t).
First we prepare the input rules. As we have only one input, there is only one rule in
the list.
In[33]:= feedbackRules =Thread [u --k.x]/.numericValues
Out[33]= {fx-211.333 Θ|t]+51. Θ
·
|t]¦
Recall that we store the description of our nonlinear system as sln.
In[34]:= sln
Out[34]= Θ
··
|t]--3 2 fx Cos|Θ|t]]-2 g m Sin|Θ|t]]-
2 g M Sin|Θ|t]]+L m Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
))/
L m+M+3 M Cos|Θ|t]]
2
+3 m Sin|Θ|t]]
2
+3 M Sin|Θ|t]]
2
))
20 Control System Professional
Now we numericalize the rule, substitute the feedback rules, and, to convert the rule
to an equation, apply the head Equal to it (@@ is the shorthand form of the Apply
function). The resultant differential equation is labeled de.
In[35]:= de =Equal®®sln /.numericValues /.feedbackRules
Out[35]= Θ
··
|t]==-3. -196. Sin|Θ|t]]+2. Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
+
2 Cos|Θ|t]] 211.333 Θ|t]+51. Θ
·
|t])))/
10.+24. Cos|Θ|t]]
2
+30. Sin|Θ|t]]
2
)
This solves the differential equation with the initial conditions for every value in the
list Θ
0
one by one and returns a list of solutions. The time t is assumed to vary from
0 to 4 seconds.
In[36]:= e
0
=NDSolve[{de, Θ[0]==#, Θ
·
[0]==0},{Θ},{t, 0, 4}]& /®Θ
0
Out[36]= {{{Θ-InterpolatingFunction|{{0., 4.¦¦,·>]¦¦,
{{Θ-InterpolatingFunction|{{0., 4.¦¦,·>]¦¦,
{{Θ-InterpolatingFunction|{{0., 4.¦¦,·>]¦¦¦
In several graphs that follow, we show the results for Θ(0) 0 as a solid line, for
Θ(0) 1 as a dashed-dotted one, and for Θ(0) 1.2 as a dashed line. This changes the
Plot options to reflect that convention and adjusts a few other nonautomatic values
for plot options.
In[37]:= SetOptions[Plot, PlotStyle -{Thickness[.001],
Dashing[{.025, .0075, .0075, .0075}],Dashing[{.01, .01}]},Frame -
{Automatic, Automatic, None, None},FrameLabel-{"Time (s)", None}];
The results for Θ are now presented graphically. We can see that the controller
succeeds in driving the pendulum to its equilibrium position for all three initial
displacements. The plot is stored as plot1.
In[38]:= plot1=Plot[Evaluate[Θ[t]/.e
0
],{t, 0, 4},PlotLabel-"Θ (radian)"];
0 1 2 3 4
Time s)
0
0.2
0.4
0.6
0.8
1
1.2
Θ radian)
2. Introduction: Extending Mathematica to Solve Control Problems 21
We can also see that, once the angle Θ[t] has come to zero, the derivative Θ
·
|t]
vanishes as well. This means that the pendulum is not about to oscillate around its
equilibrium position, at least not when driven from the displacements we are
considering for now.
In[39]:= Plot[Evaluate[Θ
·
[t]/.e
0
], {t, 0, 4}, PlotLabel-"Θ' (radian/s)"];
0 1 2 3 4
Time s)
-1.2
-1
-0.8
-0.6
-0.4
-0.2
0
Θ' radian/s)
Here is the plot of input force versus time.
In[40]:= Plot[Evaluate[fx /.feedbackRules /.e
0
],{t, 0, 4},
PlotLabel-"Input Force (Newton)", PlotRange -All];
0 1 2 3 4
Time s)
0
50
100
150
200
250
Input Force Newton)
22 Control System Professional
Finally, we compare the graphs of Θ for the nonlinear and linear systems and see
that the only case of smallest initial displacement is treated adequately by the linear
model.
In[41]:= Show[plot1, plot];
0 1 2 3 4
Time s)
0
0.2
0.4
0.6
0.8
1
1.2
Θ radian)
The transient responses suggest that our linear feedback is not sufficiently prompt in reacting
to moderate and large initial displacements Θ(0), and that may cause problems for still larger
angles. The case Θ(0) 1.2 rad is almost critical. Indeed, for a slightly larger displacement,
Θ(0) 1.25 rad, the system becomes hard to control.
We solve the same equation for another set of initial conditions.
In[42]:= Θ
Big
=NDSolve[{de, Θ[0]==1.25, Θ
·
[0]==0},{Θ},{t, 0, 4},MaxSteps -1000]
Out[42]= {{Θ-InterpolatingFunction|{{0., 4.¦¦,·>]¦¦
In the following graphs, we will plot the results for Θ(0) 1.25 as a solid line and one
of our previous curves (namely, Θ(0) 1.2) as a dashed line. This sets the new
options.
In[43]:= SetOptions[Plot, PlotStyle -{Thickness[.001],Dashing[{.01, .01}]}];
2. Introduction: Extending Mathematica to Solve Control Problems 23
We find that the pendulum still could be driven from Θ(0) 1.25 to Θ 0, but now it
oscillates badly around the equilibrium point.
In[44]:= Plot[Evaluate[Θ[t]/.{Θ
Big
, e
0
#3]}], {t, 0, 4}, PlotLabel-"Θ (radian)"];
0 1 2 3 4
Time s)
-6
-4
-2
0
2
4
6
Θ radian)
Of course, the cart in our particular model of the pendulum (as shown in Figure 2.1) would
not allow the pendulum to rotate in circles, but, for the sake of argument, we will assume that
it would.
The variations in Θ

become more complex and far more intense.
In[45]:= Plot[Evaluate[Θ
·
[t]/.{Θ
Big
, e
0
#3]}],
{t, 0, 4},PlotLabel-"Θ' (radian/s)"];
0 1 2 3 4
Time s)
-30
-20
-10
0
10
20
30
Θ' radian/s)
24 Control System Professional
This is the force the motor must exert to maintain the process.
In[46]:= Plot[Evaluate[fx /.feedbackRules /.{Θ
Big
, e
0
#3]}],
{t, 0, 4},PlotLabel-"Input Force (Newton)", PlotRange -All];
0 1 2 3 4
Time s)
-2000
-1000
0
1000
2000
Input Force Newton)
The real actuator may not be up to the task. If the maximum force the motor can provide is,
say, 1000 N, and the feedback saturates at that limit, the controller fails to balance the pendu-
lum.
To model this situation, we create a clip function.
In[47]:= clip =If[Abs[#]s1000., #, Sign [#] 1000.]&
Out[47]= If|Abs|#1]1000., #1, Sign|#1] 1000.]&
Here is how it works: everything beyond the interval from 1000 to 1000 gets cut
off.
In[48]:= clip /®{-1001, -999, 999, 1001}
Out[48]= {-1000., -999, 999, 1000.¦
We use clip to saturate the feedback.
In[49]:= feedbackLtd =MapAt[clip, feedbackRules, {1, 2}]
Out[49]= {fx-If|Abs|211.333 Θ|t]+51. Θ
·
|t]]1000.,
211.333 Θ|t]+51. Θ
·
|t],Sign|211.333 Θ|t]+51. Θ
·
|t]] 1000.]¦
2. Introduction: Extending Mathematica to Solve Control Problems 25
This is the new differential equation for Θ under the saturated feedback.
In[50]:= de1=Equal®®sln /.numericValues /.feedbackLtd
Out[50]= Θ
··
|t]==
-3. 2 Cos|Θ|t]] If|Abs|211.333 Θ|t]+51. Θ
·
|t]]1000., 211.333 Θ|t]+
51. Θ
·
|t],Sign|211.333 Θ|t]+51. Θ
·
|t]] 1000.]-
196. Sin|Θ|t]]+2. Cos|Θ|t]] Sin|Θ|t]] Θ
·
|t]
2
))/
10.+24. Cos|Θ|t]]
2
+30. Sin|Θ|t]]
2
)
This solves it.
In[51]:= Θ
Big1
=NDSolve[{de1, Θ[0]==1.25, Θ
·
[0]==0},{Θ},{t, 0, 4},MaxSteps -1000]
Out[51]= {{Θ-InterpolatingFunction|{{0., 4.¦¦,·>]¦¦
Finally, we plot the state response—for Θ as a solid line and for Θ

as a dashed one. It
is clear that the controller fails to return the pendulum to its equilibrium position.
In[52]:= Plot[Evaluate[{Θ[t],Θ
·
[t]}/.Θ
Big1
],{t, 0, 4},
PlotStyle -{Thickness[.001], {Dashing[{.05, .01}],Thickness[.001]}},
PlotRange -All, PlotLabel-"State Response"];
0 1 2 3 4
Time s)
-40
-30
-20
-10
0
10
20
State Response
26 Control System Professional
3. Description of Dynamic Systems
Control System Professional deals with state-space and transfer function models of
continuous-time (analog) and discrete-time (sampled) systems. The transfer function represen-
tations can be in rational polynomial or zero-pole-gain form. This chapter introduces the
available data types and the means to convert between them.
3.1 Transfer Function Representations
The basic form for representing transfer function matrices is the TransferFunction data
structure. TransferFunction in many respects behaves like Function, the built-in Mathe-
matica pure function. Like Function, TransferFunction can have a formal parameter,
"variable", and is capable of operating on any Mathematica expression. Unlike Function,
TransferFunction currently accepts at most one formal parameter and does not accept the
list of attributes, but it does accept options.
TransferFunction|m] a transfer function as a rational
polynomial matrix m of a formal
parameter # that behaves as a pure function
TransferFunction|var, m] a transfer function of a formal parameter var
Transfer function representation as a pure function.
« Load the application.
In[1]:= <<ControlSystems`
« This is the transfer function representation of a two-input, one-output system. The
TransferFunction object comprises a variable and a matrix in that variable.
In[2]:= TransferFunction ]var, ]]
2
------------------
1+var
2
,
var
----------------
var-1
¦¦|
Out[2]= TransferFunction|var, ¦¦
2

1var
2
,
var

1var
¦¦|
« Supplying the variable to this function leads to a matrix in that variable.
In[3]:= % [s]
Out[3]= ¦¦
2

1s
2
,
s

1s
¦¦
« If a particular numeric frequency is supplied, the value of the transfer matrix at that
frequency is obtained.
In[4]:= %%[r 10.]
Out[4]= {{0.020202, 0.9900990.0990099 ¦¦
« We can name the internal variable whatever we please or drop it altogether and still
have a mathematically identical object.
In[5]:= TransferFunction ]]]
1
----
#
¦¦|
Out[5]= TransferFunction|¦¦
1

#1
¦¦|
« Without the formal parameter, TransferFunction behaves as a pure function in
#.
In[6]:= % [s]
Out[6]= ¦¦
1

s
¦¦
TransferFunction deals with multiple-input, multiple-output (MIMO) systems. Scalar, or
single-input, single-output (SISO), transfer functions are "upgraded" to the matrix representa-
tion automatically.
« A scalar transfer function is represented as a 1×1 transfer matrix.
In[7]:= TransferFunction ]
1
----
#
|
Out[7]= TransferFunction|¦¦
1

#1
¦¦|
It is often useful to factor the elements of a transfer matrix so that the zeros and poles of
individual elements become apparent. The factored form can be obtained by using the func-
tion FactorRational. The opposite function is ExpandRational, which expands the
individual numerators and denominators of transfer matrix elements.
28 Control System Professional
FactorRational|transferfunction]
represent the transferfunction object in factored form
ExpandRational|transferfunction]
represent the transferfunction
object in expanded form
Converting transfer function objects to standard forms.
« This is some rational polynomial transfer function.
In[8]:= TransferFunction ]s, ]]
2
-------------
1+s
2
,
6 -5 s +s
2
---------------- ---------
s
4
-1
¦¦|
Out[8]= TransferFunction|s, ¦¦
2

1s
2
,
65 ss
2

1s
4
¦¦|
« This factors the numerators and denominators of all elements.
In[9]:= FactorRational[%]
Out[9]= TransferFunction|s, ¦¦
2

s) s)
,
3s) 2s)

1s) s) s) 1s)
¦¦|
« This expands the numerators and denominators of the previous result.
In[10]:= ExpandRational[%]
Out[10]= TransferFunction|s, ¦¦
2

1s
2
,
65 ss
2

1s
4
¦¦|
Both FactorRational and ExpandRational can be viewed as means to transformation
arbitrary TransferFunction objects into their standard forms, whereas TransferFunc
tion itself behaves merely as a wrapper with respect to polynomial expressions.
« By itself, TransferFunction does not change polynomial expressions in a transfer
function matrix.
In[11]:= TransferFunction]s,
1
----
s
+
1
-----------
s -1
|
Out[11]= TransferFunction|s, ¦¦
1

1s

1

s
¦¦|
3. Description of Dynamic Systems 29
« However, ExpandRational and FactorRational do convert arbitrary rational
polynomial matrices to the standard forms.
In[12]:= ExpandRational[%]
Out[12]= TransferFunction|s, ¦¦
12 s

ss
2
¦¦|
In[13]:= FactorRational[%%]
Out[13]= TransferFunction|s, ¦¦
2
1

2
s)

1s) s
¦¦|
The coefficients of the factored form of the individual transfer matrix elements (i.e., zeros,
poles, and gains) can be stored using the special data structure ZeroPoleGain. To allow
complete restoration of the transfer function object from its zero-pole-gain equivalent, Zero
PoleGain may optionally contain the variable used in the transfer function.
ZeroPoleGain|zeros, poles, gains]
a collection of matrices representing the zeros,
poles, and gains of the elements of a transfer matrix
ZeroPoleGain|var, zeros, poles, gains]
use variable var in the
descendent TransferFunction objects
ZeroPoleGain data structure.
Structurally, both zeros and poles are matrices of vectors of the corresponding coefficients,
whereas gains is just a matrix of coefficients. All three of these matrices and the parent transfer
matrix have the same dimensions down to the second level.
« Here is another transfer function.
In[14]:= tf =TransferFunction ]s, ]]
2
-------------
1+s
2
,
s
-----------
s -1
¦¦|
Out[14]= TransferFunction|s, ¦¦
2

1s
2
,
s

1s
¦¦|
30 Control System Professional
« This picks up its zeros, poles, and gains. Notice that there are no finite zeros in the
first element of the transfer matrix, so the corresponding list of zeros is empty.
In[15]:= ZeroPoleGain [%]
Out[15]= ZeroPoleGain|s, {{{¦, {0¦¦¦, {{{, ¦,{1¦¦¦,{{2, 1¦¦]
« Applying TransferFunction to the ZeroPoleGain object brings out the transfer
function in its factored form.
In[16]:= TransferFunction[% ]
Out[16]= TransferFunction|s, ¦¦
2

s) s)
,
s

1s
¦¦|
« The same result can be arrived at directly.
In[17]:= FactorRational[tf]
Out[17]= TransferFunction|s, ¦¦
2

s) s)
,
s

1s
¦¦|
Like TransferFunction objects, ZeroPoleGain objects need not have a named variable. If
a ZeroPoleGain object does not have one, neither will the descendent transfer function.
« This 1×1 system does not have finite zeros, has a single pole at the origin, and has
a unit gain. Therefore, it represents an ideal integrator.
In[18]:= ZeroPoleGain [{{{}}},{{{0}}},{{1}}]
Out[18]= ZeroPoleGain|{{{¦¦¦, {{{0¦¦¦,{{1¦¦]
« Here is its transfer function. As no variable was used in the parent ZeroPoleGain,
none appears here.
In[19]:= TransferFunction[% ]
Out[19]= TransferFunction|¦¦
1

#1
¦¦|
It is worth emphasizing that the variable in the TransferFunction and ZeroPoleGain
objects is a formal parameter and has nothing to do with distinguishing Laplace and
z -transform domains (which is what the Sampled option is for; see Section 3.3). The variable
may simply be omitted or easily renamed if desired using the built-in Mathematica functions.
Again, renaming does not change the domain of the object; to transform the system from one
domain to another, use either ToDiscreteTime (Section 3.5) or ToContinuousTime (Sec-
3. Description of Dynamic Systems 31
tion 3.7), whichever is appropriate. Note, however, that interpretation of the TransferFunc
tion object in TraditionalForm obeys a different convention and does take the variable
into account (see Section 3.4).
« This is a transfer function in the variable var.
In[20]:= TransferFunction]var,
1
------------------
1+var
2
|
Out[20]= TransferFunction|var, ¦¦
1

1var
2
¦¦|
« This changes the variable to z. No change of domain has occurred.
In[21]:= % /.var -z
Out[21]= TransferFunction|z, ¦¦
1

1z
2
¦¦|
Two additional utility functions, Zeros and Poles, return zeros and poles of transfer func-
tions for control objects. In the case of ZeroPoleGain objects, these functions simply extract
the relevant parts from the data structure.
Zeros|system] gives the matrix of zeros of the
transfer function corresponding to system
Poles|system] gives the matrix of poles of the
transfer function corresponding to system
Computing zeros and poles separately.
Both TransferFunction and ZeroPoleGain, as well as the head StateSpace described
in Section 3.2, work as "active wrappers". This means that, on the one hand, they keep system
components together allowing the system as a whole to be conveniently passed from one
function to another. They can also be used for conversion between the data structures when
one wrapper is applied over another.
32 Control System Professional
TransferFunction|statespace]
find the TransferFunction object that
corresponds to the StateSpace object statespace
TransferFunction|statespace, ReductionMethod method]
use the specified method for conversion
Converting from state-space to transfer function representation.
The straightforward, but computationally expensive, way of finding the transfer function
matrix of a state-space realization is based on the formula
(3.1) H(s) = C (s I ÷ A)
÷1
B + D
The method involves computing the inverse of the matrix (s I ÷ A)
÷1
(the so-called resolvent
matrix) and is accessible by setting the option value ReductionMethodInverse. A better
alternative is often to scan the determinant expansion formula for a single-input, single output
system over all possible input-output pairs of a multi-input, multi-output system (cf. Kailath
(1980), Appendix A)
(3.2) h
ij
(s) =
| s I ÷ A + b
j
c
i
| + (d
ij
÷ 1) | s I ÷ A|

| s I ÷ A |
Here the notation | | stands for the determinant of the matrix, b
j
and c
i
are the column- and
row-vector components of the matrices B and C that correspond to the given input-output
pair, d
ij
is the corresponding scalar part of the matrix D, and h
ij
is the same for the transfer
function matrix H. This method is available under the option value DeterminantExpan
sion, which is the default value of the option ReductionMethod. Although the above
formulas refer to the continuous-time case, the implemented algorithms are equally applicable
to discrete-time systems.
For conversion and most other purposes, the elements of the transfer matrix must be rational
polynomials. However, nonrational polynomial terms can also be handled in some cases,
notably in some frequency response functions (Chapter 5) and system interconnection func-
tions (Chapter 6).
3.2 State-Space Representations
Continuous-time state-space systems (Figure 3.1)
3. Description of Dynamic Systems 33
(3.3)
x

(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and discrete-time state-space systems (Figure 3.2)
(3.4)
x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k) + Du(k)
are represented by the StateSpace data structure; this guide refers to component matrices
using the same symbols, A, B, C, and D. Here A is the state (or evolution) matrix, B is the
input (or control) matrix, C is the output (or observation) matrix, and D is the direct transmission
(or feedthrough) matrix. Keeping the same notation for continuous- and discrete-time systems
is common in the literature and allows sharing many concepts and algorithms that are essen-
tially identical. (It is also common to use F, G, H, and J instead of A, B, C, and D for
continuous-time systems and and instead of A and B in the discrete-time case.)
To distinguish between the two types of systems, the option Sampled is introduced in
Section 3.3.
StateSpace|a, b] state-space object comprising two matrices, a and b
StateSpace|a, b, c] state-space object comprising matrices a,
b, and c, which assumes that the
direct transmission term (matrix d) is zero
StateSpace|a, b, c, d] state-space object comprising matrices a, b, c, and d
State-space data structure.
B

u(t) y(t) x(t) x(t)
.
C
D

A
Figure 3.1. State-space model of a continuous-time system.

34 Control System Professional
B

u(k) y(k)
x(k) x(k+1)
C
D

A
Delay
Figure 3.2. State-space model of a discrete-time system.
The StateSpace objects can contain as few as two matrices a and b, but for most functions, at
least matrix c and optionally d are required too. If matrices c and d are absent, no assumptions
are made about them; if only matrix d is absent, it is routinely assumed to be a zero matrix.
The truncated StateSpace[a, b] representation is, of course, of limited value, but contains
all the necessary information, for instance, for state feedback regulator design (Section 9.1).
« Here is a second-order SISO system with no direct transition term.
In[22]:= StateSpace[{{a
1
, a
2
}, {a
3
, a
4
}},{{b
1
},{b
2
}},{{c
1
, c
2
}}]
Out[22]= StateSpace|{{a
1
, a
2
¦,{a
3
, a
4
¦¦, {{b
1
¦, {b
2
¦¦,{{c
1
, c
2
¦¦]
« This is the same system with a zero direct transition term added.
In[23]:= StateSpace[{{a
1
, a
2
}, {a
3
, a
4
}},{{b
1
},{b
2
}},{{c
1
, c
2
}},{{0}}]
Out[23]= StateSpace|{{a
1
, a
2
¦,{a
3
, a
4
¦¦, {{b
1
¦, {b
2
¦¦,{{c
1
, c
2
¦¦,{{0¦¦]
« The two systems yield the same transfer function and so correspond to the same
physical system.
In[24]:= SameQ ®®TransferFunction /®{%, %%}
Out[24]= True
StateSpace|transferfunction]
the StateSpace realization of the
TransferFunction object transferfunction
StateSpace|transferfunction, TargetForm form]
the realization of the specified form
Obtaining state-space realizations of transfer functions.
3. Description of Dynamic Systems 35
The state-space representation can be obtained directly from the differential equations of the
system (difference equations in the case of a discrete-time system) or from the transfer matrix.
In the latter case, the target form of the state-space model can be specified by the option
TargetForm, with the default value ControllableCompanion that corresponds to the
controllable companion form, constructed by inspection. The adopted definition of the controlla-
ble companion form is due to, for example, Gopal (1993). (In the literature, this form is some-
times referred to as the controllable canonical form.) For the strictly proper transfer matrix
(3.5)
H(s) =
Β
1
s
n÷1
+ Β
2
s
n÷2
+ + Β
n

s
n
+ Α
1
s
n÷1
+ +Α
n
of size q× p, where Β
i
are of that size as well, p and q being the number of inputs and outputs,
respectively, the controllable companion realization is assumed to be
(3.6)
x

=
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 I 0 0
0 0 I 0

0 0 0 I
÷Α
n
I ÷Α
n÷1
I ÷Α
n÷2
I ÷Α
1
I
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
++
x +
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0
0

0
I
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
++
u
y
= |
Β
n
Β
n÷1
Β
n÷2
Β
1
| x + Du
where 0 and I are zero and identity matrices of size p× p and the matrix D is zero. The dimen-
sion of the system is n p. If the transfer matrix H(s) is proper (but not strictly proper), the
same representation of the strictly proper part of H(s) holds while the matrix D can be found
as
(3.7) D = H(o)
Similarly, the observable companion realization is
(3.8)
x

=
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 0 0 ÷Α
n
I
I 0 0 ÷Α
n÷1
I
0 I 0 ÷Α
n÷2
I

0 0 I ÷Α
1
I
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
++
x +
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Β
n
Β
n÷1
Β
n÷2

Β
1
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
u
y = |
0 0 0 I
| x + Du
36 Control System Professional
where 0 and I are zero and the identity matrices of size q ×q and the dimension of the system
is n q . This form can be obtained using the option TargetForm ÷ ObservableCompanion.
Note that the controllable and observable companion forms may not be of minimal order and,
as a rule, are ill-conditioned. Chapter 8 describes the methods to transform the model further.
« Here is a transfer function.
In[25]:= tf =TransferFunction ]s, ]]
1
-----------
s -a
,
1
-----------
s -b
¦¦|
Out[25]= TransferFunction|s, ¦¦
1

as
,
1

bs
¦¦|
« This produces the controllable companion realization.
In[26]:= StateSpace[%]
Out[26]= StateSpace|{{0, 0, 1, 0¦, {0, 0, 0, 1¦,{a b, 0, ab, 0¦,{0, a b, 0, ab¦¦,
{{0, 0¦,{0, 0¦,{1, 0¦,{0, 1¦¦,{{b, a, 1, 1¦¦]
« The structure of the state-space system is more transparent with the use of Tradi
tionalForm (Section 3.4). Clearly, this is not the minimal-order model.
In[27]:= TraditionalForm[%]
Out[27]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 1 0 0 0
0 0 0 1 0 0
÷a b 0 a + b 0 1 0
0 ÷a b 0 a + b 0 1
÷b ÷a 1 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

3. Description of Dynamic Systems 37
« This reduces the order of the system using the function MinimalRealization
described in Section 8.1. To obtain a simpler result, we assert that none of the
symbolic variables are complex.
In[28]:= MinimalRealization[%, ComplexVariables -None]//Simplify //
TraditionalForm
Out[28]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a 0 ÷
1

1 +
1

b
2
b
0
0 b 0 ÷
1

1 +
1

a
2
a
÷

1 +
1

b
2
b ÷

1 +
1

a
2
a 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This produces the observable companion realization.
In[29]:= StateSpace[tf, TargetForm -ObservableCompanion]//TraditionalForm
Out[29]//TraditionalForm=
l
\
.
.
.
.
.
.
.
..
0 ÷a b ÷b ÷a
1 a + b 1 1
0 1 0 0
\
!
.
.
.
.
.
.
.
..

« It is easy to demonstrate that both realizations correspond to the initial system.
In[30]:= TransferFunction[s, #]& /®{%, %%}//Simplify
Out[30]= ¦TransferFunction|s, ¦¦
1

as
,
1

bs
¦¦|,
TransferFunction|s, ¦¦
1

as
,
1

bs
¦¦|¦
By itself, StateSpace does not check the consistency of component matrices; however, the
consistency is the prerequisite for many other functions to operate in a meaningful way.
Consistency here means that matrix a must be square, n×n, where n is the number of states;
matrix b must be n× p, where p is the number of inputs; matrix c must be q ×n, where q is the
number of outputs; and matrix d, if used, must be q × p. When entering state-space objects
manually, these requirements should be kept in mind. The function ConsistentQ checks
that they are satisfied.
StateSpace, as well as other control objects, accepts options, which could, if so desired,
come wrapped in lists. Therefore, caution should be exercised to not confuse an empty list of
38 Control System Professional
options with an empty matrix in a control object's description. For this reason, parsing an
empty list as a valid list of options is disallowed for all control objects as well as in other
functions in Control System Professional where this may cause a confusion.
3.3 Continuous-Time versus Discrete-Time Systems
The way to to specify whether the TransferFunction, StateSpace, and ZeroPoleGain
objects to the continuous-time or discrete-time domain is to set the option Sampled to either
False, True, or Period[value]. Once set, the option remains a part of the data structure.
This allows the same set of functions to operate on both types of systems implementing an
object-oriented paradigm, according to which the method "knows" how to deal with the
object.
As with all other Mathematica options, setting of the option Sampled is not mandatory. If a
function does not find the option in a particular control object, it relies on the global variable
$Sampled to make the decision when necessary. The default value of $Sampled is False.
Changing the global domain specification makes sense if you deal primarily with the
discrete-time systems or, even more restrictively, with systems sampled primarily at one rate.
If this is the case, you may want to change the variable $Sampled in your Mathematica session
and/or include the corresponding line in your init.m file. It should be emphasized that
relying on the global variable, although convenient, may cause confusion if you save the
results of your Mathematica session to a file and later read it in after changing that variable, or
if you send your file to a colleague who prefers another global value. A useful precaution,
therefore, is to save the value of $Sampled together with your data or use the Sampled
option explicitly in all your data structures.
3. Description of Dynamic Systems 39
Sampled ÷ False attributes the systemto the continuous-time domain
Sampled ÷ True attributes the systemto the discrete-time domain
Sampled ÷ Period|T] attributes the system to the continuous- or
discrete-time domain depending on the value of T;
a zero value refers to the continuous-time domain,
and everything else refers to the discrete-time domain
$Sampled global variable that determines the
domain in which a system is assumed to
be in the absence of the Sampled option
ContinuousTimeQ|system] test if system is in the continuous-time domain
DiscreteTimeQ|system] test if system is in the discrete-time domain
SamplingPeriod|system] find the sampling period of system
Options, global variables, and test functions pertaining to domain specification.
« There is no domain-specifying option in this StateSpace object, so it is attributed
to continuous-time by default.
In[31]:= ContinuousTimeQ[StateSpace[a, b, c]]
Out[31]= True
« Now the system is explicitly set to be in the discrete-time domain.
In[32]:= ContinuousTimeQ[StateSpace[a, b, c, Sampled -True]]
Out[32]= False
« This is another way to attribute the system to the discrete-time domain—by using
Period[T] with a nonzero value of T.
In[33]:= ContinuousTimeQ[StateSpace[a, b, c, Sampled -Period [T]]]
Out[33]= False
« This changes the default domain. From now on the system will be assumed to be
discrete-time if not specified otherwise with the Sampled option (until $Sampled is
changed back).
In[34]:= $Sampled =Period[1]
Out[34]= Period|1]
40 Control System Professional
« The same system as before is now considered to be in the discrete-time domain.
In[35]:= ContinuousTimeQ[StateSpace[a, b, c]]
Out[35]= False
« This restores the default behavior.
In[36]:= $Sampled =False
Out[36]= False
3.4 The Traditional Notations
TraditionalForm|system] traditional form of the
state-space or transfer-function system
Traditional representation of the control objects.
In the notebook front end, you can display and manipulate control objects in their traditional
typeset form. The TransferFunction and StateSpace objects are represented as transfer
function matrices and a block matrices
l
\
.
..
A B
C D
\
!
.
.., correspondingly. By convention, control
objects are distinguished from regular matrices by their superscripts, which are the script
letters for TransferFunction and for StateSpace.
The objects can also have a subscript that indicates the time domain or the sampling period.
The default subscripts for continuous-time and discrete-time objects are • and (the Mathemat-
ica characters \[Bullet] and \[EmptyUpTriangle]). By setting the values of global
variables $ContinuousTimeToken and $DiscreteTimeToken, you can choose a different
notation. The subscript is typically omitted if the domain can otherwise be unambiguously
determined from the contents of the control object.
The traditional form of the TransferFunction object uses the variable (the Mathematica
character \[ScriptS]) to represent the complex variable of the Laplace-transform domain
and the variable (\[ScriptZ]) for the z-transform domain. You can choose different
symbols by setting the global variables $ContinuousTimeComplexPlaneVariable and
$DiscreteTimeComplexPlaneVariable.
3. Description of Dynamic Systems 41
Note that contrary to the standard representation of the TransferFunction object, which
does not require a formal variable (nor does it take the variable into account for the time
domain identification purposes), interpretation of the TransferFunction object in Tradi
tionalForm is based on the domain variable. However, should the variable in the body of
the TransferFunction point to the domain that is different from the one indicated by the
subscript of the control object, the domain is determined by the value of the subscript.
$ContinuousTimeToken the token of the continuous-time domain in the
subscript of the control objects in TraditionalForm
$DiscreteTimeToken the token of the discrete-time domain
• the default value of $ContinuousTimeToken
the default value of $DiscreteTimeToken
$ContinuousTimeComplexPlaneVariable
the complex variable in the TraditionalForm
representation of continuous-time control objects
$DiscreteTimeComplexPlaneVariable
the discrete-time variable
the default value of
$ContinuousTimeComplexPlaneVariable
the default value of
$DiscreteTimeComplexPlaneVariable
Customizing TraditionalForm of control objects.
« Here is a discrete-time state-space object.
In[37]:= StateSpace[{{a
1
, a
2
}, {a
3
, a
4
}},{{b
1
},{b
2
}},{{c
1
, c
2
}},Sampled -True]
Out[37]= StateSpace|{{a
1
, a
2
¦,{a
3
, a
4
¦¦, {{b
1
¦, {b
2
¦¦,{{c
1
, c
2
¦¦,SampledTrue]
« This is its TraditionalForm representation. The discrete-time domain is indicated
by the small triangle.
In[38]:= TraditionalForm[%]
Out[38]//TraditionalForm=
l
\
.
.
.
.
.
.
.
..
a
1
a
2
b
1
a
3
a
4
b
2
c
1
c
2
0
\
!
.
.
.
.
.
.
.
..

42 Control System Professional
« This is a continuous-time TransferFunction object.
In[39]:= TransferFunction]
1
----
#
|
Out[39]= TransferFunction|¦¦
1

#1
¦¦|
« This is its TraditionalForm representation. The variable indicates the
continuous-time domain.
In[40]:= TraditionalForm[%]
Out[40]//TraditionalForm=
|
1

|

« Here we copy the previous output cell and paste it into the input cell. The expression
is interpreted as a continuous-time object because of the variable .
In[41]:= ContinuousTimeQ]|
1
---

)

|
Out[41]= True
EquationForm|statespace] the state-space system
statespace in the formof matrix equations
Representing StateSpace objects as state-space equations.
Several options allow you to customize the appearance of a state-space system in Equation
Form. By default, the state, input, and output variables are, correspondingly, (the Mathemat-
ica character \[ScriptX]), (\[ScriptU]) and (\[ScriptY]). The default time variables
for the continuous-time and discrete-time systems are (\[ScriptT]) and (\[ScriptK]),
respectively.
option name default value
StateVariables the state variables to use in equations
InputVariables the input variables
OutputVariables the output variables
TimeVariable Automatic the time variable
Specifying the variables for the StateSpace objects in EquationForm.
3. Description of Dynamic Systems 43
« Here is a state-space system.
In[42]:= StateSpace[{{a
1
, a
2
}, {a
3
, a
4
}},{{b
1
},{b
2
}},{{c
1
, c
2
}}]
Out[42]= StateSpace|{{a
1
, a
2
¦,{a
3
, a
4
¦¦, {{b
1
¦, {b
2
¦¦,{{c
1
, c
2
¦¦]
« These are the corresponding state-space equations.
In[43]:= EquationForm[%]
Out[43]//EquationForm=

=
l
\
.
..
a
1
a
2
a
3
a
4
\
!
.
.. +
l
\
.
..
b
1
b
2
\
!
.
..
= ( c
1
c
2 )
« This uses the specified variables to represent the system.
In[44]:= EquationForm[%, StateVariables -{Θ,Θ

}, InputVariables -∆]
Out[44]//EquationForm=
l
\
.
.
.
..
Θ

Θ

\
!
.
.
.
..
=
l
\
.
..
a
1
a
2
a
3
a
4
\
!
.
..
l
\
.
.
.
.
Θ
Θ

\
!
.
.
.
.
+
l
\
.
..
b
1
b
2
\
!
.
.. ∆
= ( c
1
c
2 )
l
\
.
.
.
.
Θ
Θ

\
!
.
.
.
.
3.5 Discrete-Time Models of Continuous-Time Systems
If an analog system is to be analyzed in the discrete-time domain, the problem arises of find-
ing a discrete-time representation of the system such that the output variables sampled at
times t
0
, t
1
, …, t
k
, … suitably approximate the ones of the original system. A possible way to
make such a conversion is to apply the function ToDiscreteTime to a continuous-time
system. ToDiscreteTime operates on all control objects. Note that the built-in functions
LaplaceTransform and ZTransform can, in principle, perform similar tasks. However, the
procedures in Control System Professional use the state-space approach and so the conversion
is, as a rule, more efficient.
44 Control System Professional
ToDiscreteTime|system] find the discrete-time approximation
of continuous-time system sampled
with the default sampling period
ToDiscreteTime|system, Sampled ÷ Period|T]]
find the discrete-time approximation of
continuous-time system sampled with period T
$SamplingPeriod default numeric value of sampling period
Finding discrete-time equivalents of continuous-time systems.
« Consider a simple continuous-time system.
In[45]:= tf =TransferFunction ]s,
a
-----------
s +a
|
Out[45]= TransferFunction|s, ¦¦
a

as
¦¦|
« This is a possible discrete-time approximation. As noted previously, the formal
internal variable (in this case s) does not indicate the domain, but the Sampled
option does.
In[46]:= hz =ToDiscreteTime[% , Sampled -Period [T]]//Simplify
Out[46]= TransferFunction|s, ¦¦
1
a T

1
a T
s
¦¦,SampledPeriod|T]|
« Should the need arise, it is easy to rename the formal variable.
In[47]:= % /.s -z
Out[47]= TransferFunction|z, ¦¦
1
a T

1
a T
z
¦¦,SampledPeriod|T]|
« Consider a state-space continuous-time system.
In[48]:= (cont =StateSpace[{{0, 1},{0, -1}},{{0},{1}},{{1, 0}}])//
TraditionalForm
Out[48]//TraditionalForm=
l
\
.
.
.
.
.
.
.
..
0 1 0
0 ÷1 1
1 0 0
\
!
.
.
.
.
.
.
.
..

3. Description of Dynamic Systems 45
« This is its discrete-time approximation. The result is still a StateSpace object that is
in the discrete-time domain as indicated by the option Sampled.
In[49]:= disc =ToDiscreteTime[%, Sampled -Period [Τ]]
Out[49]= StateSpace|{{1, 1
Τ
¦, {0,
Τ
¦¦,
{{1
Τ
Τ¦,{1
Τ
¦¦, {{1, 0¦¦,SampledPeriod|Τ]]
« Here is the same system in TraditionalForm.
In[50]:= TraditionalForm[%]
Out[50]//TraditionalForm=
l
\
.
.
.
.
.
.
.
..
1 1 ÷ :
÷Τ
Τ + :
÷Τ
÷ 1
0 :
÷Τ
1 ÷ :
÷Τ
1 0 0
\
!
.
.
.
.
.
.
.
..
Τ

The default value for the option Sampled in ToDiscreteTime is the global variable $Sam
plingPeriod. This variable also provides a fallback where some functions retreat in situa-
tions when the sampling period does not evaluate to a number, but the numeric value is
needed to perform the task (for example, to simulate the transient behavior of a system). Thus,
$SamplingPeriod must never be set to anything that has no numeric value. If it is desirable
for ToDiscreteTime to use a symbolic sampling period T by default, this can be easily
achieved by using the standard Mathematica mechanism, say with a command
SetOptions|ToDiscreteTime, Sampled Period|T]].
It is worth emphasizing that the "conversion" from an analog to a sampled system is merely
an approximation, the quality of which depends on the method used and the sampling period.
The time-domain response functions described in Chapter 4 can make the difference
between the original and approximated system readily apparent.
« This is the analog output response of the original system to the sinusoidal input
signal.
In[51]:= OutputResponse[cont, Sin[t],t]//Simplify
Out[51]= ¦
1

2
2
t
Cos|t]Sin|t])¦
46 Control System Professional
« Here is a piece of that curve on the time interval from 0 to 10 seconds.
In[52]:= Plot[%, {t, 0, 10}];
2 4 6 8 10
0.25
0.5
0.75
1
1.25
1.5
« This plots the simulated response of the discrete-time system for the sampling
period Τ of 0.5 seconds over the same time interval. For reference, we include the
analog response from the previous plot (structurally, the line is the first element of
the Graphics object returned by the previous command and is extracted with
First).
In[53]:= SimulationPlot[disc /.Τ-.5, Sin[t], {t, 0, 10},
PlotJoined -False, PlotStyle -PointSize[.015],
PlotLabel-"Output Response", Epilog -First[%]];
2 4 6 8 10
0.25
0.5
0.75
1
1.25
1.5
Output Response
We can see that the sampled values systematically lag behind corresponding points on the
analog curve, which is typical for the default ZeroOrderHold method. Choosing smaller
values for Τ can make the lag less noticeable.
3. Description of Dynamic Systems 47
3.5.1 The Conversion Methods
The conversion from the continuous-time domain to the discrete-time domain can be per-
formed using the following methods (see Franklin et al. (1990), Section 4). The method can be
selected with the Method option.
• Hold equivalence methods—zero- and first-order (triangle) hold, implemented with
ZeroOrderHold and FirstOrderHold, respectively
• Numerical integration methods—forward and backward rectangular rules,
implemented with ForwardRectangularRule and BackwardRectangular
Rule, respectively, and bilinear (Tustin) transformation with and without
prewarping, implemented with BilinearTransform
• Zero-pole mapping, implemented with ZeroPoleMapping
option name default value
Method ZeroOrderHold method to use for transformation
CriticalFrequency Automatic critical frequency, if any,
at which to make comparisons
Options related to continuous- to discrete-time conversion.
« This is the result of conversion with the forward rule of the transfer function system
tf defined earlier in this chapter.
In[54]:= ToDiscreteTime[tf, Sampled -Period [T],Method -ForwardRectangularRule]
Out[54]= TransferFunction|s, ¦¦
a T

1sa T
¦¦,SampledPeriod|T]|
« This is the conversion of the same transfer function using the backward rule.
In[55]:= ToDiscreteTime[tf, Sampled -Period [T],
Method -BackwardRectangularRule]//Simplify
Out[55]= TransferFunction|s, ¦¦
a s T

1sa s T
¦¦,SampledPeriod|T]|
48 Control System Professional
« This conversion uses the bilinear transformation.
In[56]:= ToDiscreteTime[tf, Sampled -Period [T],
Method -BilinearTransform]//Simplify
Out[56]= TransferFunction|s, ¦¦
a 1s) T

2a Ts 2a T)
¦¦, SampledPeriod|T]|
« This uses the zero-pole mapping.
In[57]:= ToDiscreteTime[tf, Sampled -Period [T],Method -ZeroPoleMapping]
Out[57]= TransferFunction|s, ¦¦
1
a T

a T
s
¦¦,SampledPeriod|T]|
« This is the first-order-hold equivalent of the transfer function H(s) = 1l s
2
.
In[58]:= ToDiscreteTime]TransferFunction ]s,
1
------
s
2
|,
Sampled -Period [T],Method -FirstOrderHold|//Simplify
Out[58]= TransferFunction|s, ¦¦
14 ss
2
) T
2

6 1s)
2
¦¦,SampledPeriod|T]|
« Yet another example, which uses a lag network, illustrates the effect of the option
CriticalFrequency on accuracy of the conversion.
In[59]:= lag =TransferFunction]s,
a s +1
---------------
b s +1
|
Out[59]= TransferFunction|s, ¦¦
1a s

1b s
¦¦|
3. Description of Dynamic Systems 49
« Here is the Bode plot for the network for some set of parameters.
In[60]:= lagplot =BodePlot[lag /.{a -1, b -5}];
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-40
-30
-20
-10
P
h
a
s
e

d
e
g
)
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-14
-12
-10
-8
-6
-4
-2
0
M
a
g
n
i
t
u
d
e

d
B
)
« This is the network after the bilinear transformation.
In[61]:= dlag =ToDiscreteTime[lag,
Sampled -Period [T],Method -BilinearTransform] //Simplify
Out[61]= TransferFunction|s, ¦¦
2 a 1s)1s) T

2 b 1s)1s) T
¦¦,SampledPeriod|T]|
50 Control System Professional
« This computes the Bode plots for continuous and sampled lag networks—and
displays them together using a utility function DisplayTogetherGraphicsArray
(see Section 12.5). The responses coincide for the low frequencies but differ
somewhat near the Nyquist frequency.
In[62]:= DisplayTogetherGraphicsArray[
lagplot, BodePlot[dlag /.{a -1, b -5, T -1},
PlotStyle -{RGBColor[1, 0, 0],Dashing[{.02}]}]];
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-40
-30
-20
-10
P
h
a
s
e

d
e
g
)
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-14
-12
-10
-8
-6
-4
-2
0
M
a
g
n
i
t
u
d
e

d
B
)
« We again use BilinearTransform, this time with frequency prewarping at some
critical frequency Ω
c
.
In[63]:= ToDiscreteTime[lag, Sampled -Period [T],
Method -BilinearTransform, CriticalFrequency -Ω
c
]//Simplify
Out[63]= TransferFunction|s, ¦¦
a 1s) Ω
c
1s) Tan|
T Ωc

2
]

b 1s) Ω
c
1s) Tan|
T Ωc

2
]
¦¦,SampledPeriod|T]|
3. Description of Dynamic Systems 51
« We have achieved a perfect match at that frequency at the expense of less accurate
behavior at other frequencies.
In[64]:= DisplayTogetherGraphicsArray[
lagplot, BodePlot[% /.{a -1, b -5, T -1, Ω
c
-2},
PlotStyle -{RGBColor[1, 0, 0],Dashing[{.02}]}]];
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-40
-30
-20
-10
P
h
a
s
e

d
e
g
)
0.01 0.05 0.1 0.51 5 10
Frequency Rad/Second)
-14
-12
-10
-8
-6
-4
-2
0
M
a
g
n
i
t
u
d
e

d
B
)
Except for zero-pole mapping, which naturally operates on ZeroPoleGain objects, the
conversion from the continuous- to the discrete-time domain is implemented with state-space
algorithms. Therefore, the transformation to and from StateSpace objects can be avoided if
the system is represented in StateSpace in the first place.
« This is our lag system as a StateSpace object.
In[65]:= (lag1=StateSpace[lag]//Simplify)//TraditionalForm
Out[65]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
..
÷
1

b
1
b ÷ a

b
2
a

b
\
!
.
.
.
.
.
.
.
.
.
.
..

52 Control System Professional
« This converts the system to discrete time using the bilinear transformation.
In[66]:= ToDiscreteTime[lag1, Sampled -Period [T], Method -BilinearTransform]//
Simplify //TraditionalForm
Out[66]//TraditionalForm=
l
\
.
.
.
.
.
.
.
.
.
.
.
..
2 b ÷ T

2 b + T
2 b

2 b + T
2 (b ÷ a) T

b (2 b + T)
2 a + T

2 b + T
\
!
.
.
.
.
.
.
.
.
.
.
.
..
T

3.6 Discrete-Time Models of Systems with Delay
ToDiscreteTime can also find the discrete-time approximation of analog state-space sys-
tems of the form
(3.9)
x

(t) = Ax(t) + Bu(t ÷Λ)
y(t) = Cx(t)
which corresponds to systems with time delay Λ. The systems can be represented as conven-
tional StateSpace objects with a Delay option added. The conversion implements the
modified z -transform algorithm described in Franklin et al. (1990), Section 2.4.4.
Currently, ToDiscreteTime is the only function that takes the Delay option into consider-
ation and, even more restrictively, it handles the only case of numerical values of the ratio
Λl T
s
, where T
s
is the sampling period. Negative delay is equivalent to prediction of the
system's behavior and can be used as long as the delay is not longer than the sampling period.
option name default value
Delay 0 time delay in the state-space model
Option in StateSpace to introduce a delay in control.
« Here is a simple state-space system with delay of Λ.
In[67]:= ss =StateSpace[{{1}},{{1}},{{1}},Delay -Λ]
Out[67]= StateSpace|{{1¦¦,{{1¦¦,{{1¦¦,DelayΛ]
3. Description of Dynamic Systems 53
« This is its discrete-time approximation when the delay equals the sampling period.
In[68]:= ToDiscreteTime[ss /.Λ-Τ,Sampled -Period [Τ]]
Out[68]= StateSpace|{{
Τ
, 1
Τ
¦,{0, 0¦¦,{{0¦,{1¦¦,{{1, 0¦¦,SampledPeriod|Τ]]
« If the delay increases, then so does the dimension of the state space.
In[69]:= ToDiscreteTime[ss /.Λ-2 Τ,Sampled -Period[Τ]]
Out[69]= StateSpace|{{
Τ
, 1
Τ
, 0¦,{0, 0, 1¦, {0, 0, 0¦¦,
{{0¦,{0¦,{1¦¦, {{1, 0, 0¦¦,SampledPeriod|Τ]]
« This is an example of the transformation of a system with a negative delay.
In[70]:= ToDiscreteTime]ss /.Λ--
Τ
----
2
, Sampled -Period [Τ]|
Out[70]= StateSpace|{{
Τ
¦¦,{{
Τ/2
1
Τ/2
)
Τ
1
Τ/2
)¦¦,
{{1¦¦,{{1
Τ/2
¦¦, SampledPeriod|Τ]]
3.7 Continuous-Time Models of Discrete-Time Systems
The function ToContinuousTime, when applied to discrete-time objects, converts them to
the continuous-time domain. In many respects ToContinuousTime behaves as an inverse
function to ToDiscreteTime and applies inverse algorithms to those described previously in
Section 3.5. Again, the methods can be chosen through the Method option, which accepts the
same values. The conversion attempts to find the slowest possible continuous-time model the
outputs of which would match the ones of the discrete-time model at the sample time.
ToContinuousTime|system] find the continuous-time model
of the discrete-time object system
Finding continuous-time equivalents of discrete-time systems.
« Here a continuous-time system from an earlier example is converted to the
discrete-time domain.
In[71]:= ToDiscreteTime[
StateSpace[{{0, 1},{0, -1}},{{0}, {1}}, {{1, 0}}], Sampled -Period [T]]
Out[71]= StateSpace|{{1, 1
T
¦, {0,
T
¦¦,
{{1
T
T¦,{1
T
¦¦, {{1, 0¦¦,SampledPeriod|T]]
54 Control System Professional
« This brings the system back to continuous time.
In[72]:= ToContinuousTime[%]
Out[72]= StateSpace|¦¦0,
Log|
T
]

T
¦,¦0,
Log|
T
]

T
¦¦,
¦¦
TLog|
T
]

T
¦,¦
Log|
T
]

T
¦¦, {{1, 0¦¦|
« As our sampling period T is real-valued, we can simplify the result and see that the
system is the same as the one we started with.
In[73]:= ComplexExpand /®%
Out[73]= StateSpace|{{0, 1¦,{0, 1¦¦, {{0¦,{1¦¦,{{1, 0¦¦]
« Here is the lag system used earlier.
In[74]:= lag1 = StateSpace]]]-
1
----
b
¦¦,{{1}},]]
-a +b
--------------
b
2
¦¦,]]
a
----
b
¦¦|
Out[74]= StateSpace|¦¦
1

b
¦¦,{{1¦¦,¦¦
ab

b
2
¦¦, ¦¦
a

b
¦¦|
« This converts the system to the discrete-time domain using the bilinear
transformation with frequency prewarping.
In[75]:= ToDiscreteTime[%, Sampled -Period [T],
Method -BilinearTransform, CriticalFrequency -Ω
c
]//Simplify
Out[75]= StateSpace|¦¦
b Ω
c
Tan|
T Ωc

2
]

b Ω
c
Tan|
T Ωc

2
]
¦¦,¦¦
b Ω
c

b Ω
c
Tan|
T Ωc

2
]
¦¦,
¦¦
2 ab) Tan|
T Ωc

2
]

b b Ω
c
Tan|
T Ωc

2
])
¦¦,¦¦
a Ω
c
Tan|
T Ωc

2
]

b Ω
c
Tan|
T Ωc

2
]
¦¦,SampledPeriod|T]|
« This brings the system back to continuous time.
In[76]:= ToContinuousTime[%, Method -BilinearTransform,
CriticalFrequency -Ω
c
]//Simplify
Out[76]= StateSpace|¦¦
1

b
¦¦,{{1¦¦,¦¦
ab

b
2
¦¦, ¦¦
a

b
¦¦|
3. Description of Dynamic Systems 55
« This is the discrete-time approximation of the lag system obtained with the
first-order hold.
In[77]:= ToDiscreteTime[lag1, Sampled -Period [T],
Method -FirstOrderHold]//Simplify
Out[77]= StateSpace|¦¦

T

b ¦¦, ¦¦
b
2

2 T

b ¹1
T

b |
2

T
¦¦,
¦¦
ab

b
2
¦¦,¦¦
aab)

\

1
b

\

1

T

b
`

T
`

b
¦¦, SampledPeriod|T]|
« This converts the result back to continuous time.
In[78]:= ToContinuousTime[%, Method -FirstOrderHold]//Simplify
Out[78]= StateSpace|¦¦
1

b
¦¦,{{1¦¦,¦¦
ab

b
2
¦¦, ¦¦
a

b
¦¦|
Note that in the case of the FirstOrderHold method, ToContinuousTime cannot use the
inverse of the state-space algorithm implemented in ToDiscreteTime and, as an exception,
resorts to the conversion using transfer functions, which is less efficient.
56 Control System Professional
4. Time-Domain Response
Control System Professional provides the means to analyze linear systems in both time and
frequency domains. This chapter deals with the time dependencies of the state and output
vectors. Refer to Chapter 5 for the description of frequency-domain analysis tools. Two
approaches, symbolic and simulation-based, are implemented and will be introduced in the
following sections. The functions StateResponse and OutputResponse, which compute
the state and output responses, are capable of performing both operations and choose their
mode depending on the supplied input. SimulationPlot, on the other hand, always uses
the simulation approach.
4.1 Symbolic Approach
For the continuous-time system
(4.1) x

(t) = Ax(t) + Bu(t)
StateResponse, when supplied with the symbolic input u(t), attempts to find the solution
(4.2) x(t) = e
(t÷t
0
) A
x(t
0
) +
¹
t
0
t
e
(t÷Τ) A
B(Τ) u(Τ) dΤ
The first term in this equation represents the zero-input response (also called the free, natural,
unforced, or homogeneous response) and the second is the zero-state (or particular, forced)
response. Once the state response x(t) is found, the output response y(t) can computed
directly from the equation
(4.3) y(t) = C(t) x(t) + D(t) u(t)
OutputResponse, therefore, calls StateResponse first and then proceeds according to
Eq. (4.3). The preceding solution is valid for constant-coefficient matrices A, whereas matrices
B, C, and D may be time dependent.
The state response for the discrete-time system
(4.4) x(k + 1) = Ax(k) + Bu(k)
is computed according to
(4.5) x(k) = A
k
x(0) + `
j=1
k
A
k÷j
B(j ÷1) u(j ÷ 1)
and the output response is found by
(4.6) y(k) = Cx(k) + Du(k)
Again, matrices B, C, and D may be time dependent, but matrix A may not.
StateResponse|system, u, var]
compute the state response of system to the input
signals u given as functions of the time variable var
OutputResponse|system, u, var]
compute the output response
OutputResponse|c, d, x, u]
compute the output response
given the system matrices c and d,
state response x, and input u
OutputResponse|c, x] compute the output
response assuming matrix d is zero
State and output responses to an input function.
For both functions, the input system can be supplied in either state-space or transfer function
form; however, the computation is always carried out in the state-space form and the transfer
functions are converted to state-space form first. Because matrices C and D are not needed to
compute the state response, their presence in the input system for StateResponse is
optional.
« Load the application.
In[1]:= <<ControlSystems`
58 Control System Professional
« Consider a transfer function with a simple pole.
In[2]:= TransferFunction ]s,
1
-----------
s +1
|
Out[2]= |
1

+ 1
|

« This finds the output response to the sinusoidal input function sin(10 t) . We can see
that the first exponential term (the natural response) will vanish, leaving only the
harmonic (forced) signal in the output.
In[3]:= OutputResponse[%, Sin [10 t],t]
Out[3]= ¦
10 c
-t
---------------
101
+
1
---------
101
-10 Cos|10 t]+Sin|10 t])¦
« The built-in Mathematica function Plot plots the result on some time interval.
In[4]:= Plot[% , {t, 0, 10}];
2 4 6 8 10
-0.1
-0.05
0.05
0.1
0.15
The zero-input response in Eq. (4.2) or Eq. (4.5) depends on the initial conditions on the state
vector x(t
0
) or x(0), respectively. By default, StateResponse and OutputResponse
assume zero initial conditions. If this is not the case, the initial conditions can be supplied
using the option InitialConditions. The typical format is InitialConditions ÷
vector, where vector must have the same length as matrix A. Also acceptable is InitialCondi
tions ÷ value, which will cause all state variables to have the same initial value.
Using the option ControlInputs you can effectively cycle through the subsystems that
correspond to the specified list of inputs. For ControlInputs ÷ Automatic, StateRe
sponse and OutputResponse apply the input function to all inputs in turn. Another option,
ResponseVariable, allows you to name the internal variable if the state or output response
should contain unevaluated integral(s) or sum(s). The default value for this option is Auto
4. Time-Domain Response 59
matic, which corresponds to the internal integration variable Tau or summation index K for
the continuous- and discrete-time cases, correspondingly.
option name default value
InitialConditions 0 initial state conditions
ControlInputs Automatic inputs to use in turn
ResponseVariable Automatic internal variable to
use in the integral or sum
Specifying initial conditions and the response variable in StateResponse and OutputResponse.
As an example consider the simple production and inventory control model from Brogan
(1991):
x

=
.
.
.
.
.
.
.
.
..
÷1 ÷k
1 0
.
.
+
+
+
+
+
+
++
x +
.
.
.
.
.
.
.
.
..
k 0
0 ÷1
.
.
+
+
+
+
+
+
++

.
.
.
.
.
.
.
.
..
c
u
2
.
.
+
+
+
+
+
+
++
which can be represented schematically as shown in Figure 4.1. The model assumes that u
1
(t)
and u
2
(t) are the scheduled production rate and the sales rate, respectively, x
1
(t) and x
2
(t)
represent the actual production rate and inventory level, and c is the desired inventory level.

k

_
y(t)
u
2
(t)
u
1
(t)
c
x
1
(t)

_

_

x
2
(t)
Figure 4.1. Simple production and inventory control system.
« Here is the production and inventory control model.
In[5]:= StateSpace[{{-1, -k}, {1, 0}},{{k, 0}, {0, -1}}]
Out[5]=
l
\
.
..
÷1 ÷k k 0
1 0 0 ÷1
\
!
.
..

60 Control System Professional
Initially the system was in equilibrium, with the production rate x
1
(0) equal to the sales rate
and x
2
(0) = c. We denote x
1
(0) as x10. At t = 0, sales increase by 10 percent.
« This finds the state response for the particular value of k =
3

16
. The initial conditions
are specified using the option InitialConditions.
In[6]:= StateResponse]% /.k -
3
-------
16
, {c, 1.1 x10},
t, InitialConditions -{x10, c}|//Simplify
Out[6]= {1.1+2.05 c
-3 t/4
-2.15 c
-t/4
) x10,
1. c+-5.86667-2.73333 c
-3 t/4
+8.6 c
-t/4
) x10¦
« This plots the results for particular values of the initial production rate x
1
(0) and
inventory level c . To distinguish the graphs for production rate and for inventory
level, we set PlotStyle for them differently. The first is plotted as a solid line and
the second as a dashed one.
In[7]:= Plot[Evaluate[% /.{x10 -1, c -6}], {t, 0, 50},
PlotStyle -{Thickness[.005], Dashing[{.05, .01}]},
PlotRange -All, PlotLabel-"State Response"];
10 20 30 40 50
1
2
3
4
5
6
State Response
We can see that (within this model) keeping initial inventory relatively large allows us to
stabilize production rate at a new level.
Note that if multiple input signals are supplied to StateResponse and OutputResponse,
the number of signals must be equal to the number of inputs. For multi-input systems, an
input signal must be supplied as a vector of functions, but for single-input systems, a function
without the List wrapping is also acceptable (in the simulation mode, the same rule of
correspondence between the number of input signals and the number of inputs applies—see
Section 4.2). If only the response from one or several inputs or outputs must be studied, the
function Subsystem (or DeleteSubsystem) may be used to select the subsystem of interest.
4. Time-Domain Response 61
For further examples when the same input signal is to be applied to all inputs in turn, see
Section 4.3.
« Here is a two-output system.
In[8]:= TransferFunction ]s, ]]
1
----
s
¦,]
1
---------------- -----------
s
2
+2 s +10
¦¦|
Out[8]=
l
\
.
.
.
.
.
.
.
.
.
.
.
1

1

2
+2 + 10
\
!
.
.
.
.
.
.
.
.
.
.
.

« This selects the subsystem associated with the second output and computes the
output response to a delayed step function.
In[9]:= resp =OutputResponse[Subsystem[%, All, 2], UnitStep[t-1],t]
Out[9]= ¦|
1
------
60
-
f
------
60
|
3+3 f)-1+2 f) c
-1-3 f) -1+t)
-2+f) c
-1+3 f) -1+t)
) UnitStep|-1+t]¦
« We can use ComplexExpand to reduce the complex exponentials to the
trigonometric functions.
In[10]:= ComplexExpand[resp]
Out[10]= ¦
1
------
10
UnitStep|-1+t]-
1
------
10
c
1-t
Cos|3 -1+t)] UnitStep|-1+t]-
1
------
30
c
1-t
Sin|3 -1+t)] UnitStep|-1+t]¦
« This simplifies the result.
In[11]:= % //Simplify
Out[11]= ¦
1
------
30
c
-t
3 c
t
-3 c Cos|3-3 t]+c Sin|3-3 t]) UnitStep|-1+t]¦
62 Control System Professional
« This is a plot of the response.
In[12]:= Plot[%, {t, 0, 7},PlotRange -All];
1 2 3 4 5 6 7
0.02
0.04
0.06
0.08
0.1
0.12
« Here is a discrete-time system.
In[13]:= system =StateSpace]]]
1
----
2
,
1
----
4
¦,]
1
----
4
,
1
----
2
¦¦,
DiagonalMatrix[{1, 1}], {{1, 2}},Sampled -Period [1]|
Out[13]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1

2
1

4
1 0
1

4
1

2
0 1
1 2 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1

« In this input vector, the components are a ramp function and a decaying exponential
e
÷t
both sampled at a unit-time interval.
In[14]:= inputs ={k, e
-k
}
Out[14]= {k, c
-k
¦
4. Time-Domain Response 63
« This is the state response for the particular set of initial conditions. Since k is a real
valued parameter, the expression can be simplified with ComplexExpand.
In[15]:= StateResponse[system, inputs, k, InitialConditions -{-1, 2}]//
ComplexExpand //Simplify
Out[15]= ¦
1
-------------------------------- --------------
9 -4+c) -4+3 c)

2
-1-2 k
c
-k
9 2
3+2 k
c
2
-3 47+5 2
5+2 k
) c
2+k
+55 3 c)
2+k
+9 4 c)
2+k
k-
32 c
1+k
-10+3
4+k
-5 4
2+k
+3 2
3+2 k
k)-
16 c
k
11+5 2
5+2 k
-17 3
2+k
-3 4
2+k
k))),
1
-------------------------------- --------------
9 -4+c) -4+3 c)

¹2
-1-2 k
c
-k
¹9 2
5+2 k
c-9 4
2+k
c
2
+55 3 c)
2+k
+3 c
2+k
¹47-2
7+2 k
+3 2
3+2 k
k|+
16 c
k
¹11-2
7+2 k
+17 3
2+k
+3 2
3+2 k
k|-32 c
1+k
10+3
4+k
-4
3+k
+3 4
1+k
k)||¦
« Once the state response vector is available, the output response can be found.
In[16]:= OutputResponse[{{1, 2}},% ]//Simplify
Out[16]= ¦
1
-------------------------------- --------------
9 -4+c) -4+3 c)

2
-1-2 k
c
-k
9 4
3+k
c-27 2
3+2 k
c
2
+3 c
2+k
47-13 2
5+2 k
+55 3
2+k
+3 2
5+2 k
k)+16 c
k
11-13 2
5+2 k
+17 3
3+k
+3 2
5+2 k
k)-32 c
1+k
10+3
5+k
-13 4
2+k
+3 4
2+k
k)))¦
« The same result can be obtained directly.
In[17]:= OutputResponse[system, inputs, k, InitialConditions -{-1, 2}]//
ComplexExpand //Simplify
Out[17]= ¦
1
-------------------------------- --------------
9 -4+c) -4+3 c)

2
-1-2 k
c
-k
9 4
3+k
c-27 2
3+2 k
c
2
+3 c
2+k
47-13 2
5+2 k
+55 3
2+k
+3 2
5+2 k
k)+16 c
k
11-13 2
5+2 k
+17 3
3+k
+3 2
5+2 k
k)-32 c
1+k
10+3
5+k
-13 4
2+k
+3 4
2+k
k)))¦
64 Control System Professional
4.2 Simulating System Behavior
Once input signals are supplied to StateResponse or OutputResponse in the form of a
function in the specified variable, as described in Section 4.1, a symbolic solution based on Eq.
(4.2) or Eq. (4.5) is attempted. There are, however, situations when the symbolic solution is
either impossible or just too time consuming. In such cases, StateResponse and OutputRe
sponse can be used to carry out a simulation of the state and output responses. For continuous-
time systems, the simulation is based on the approximate numerical solution of the underly-
ing differential equations using the built-in function NDSolve and is invoked when the range
for the time-domain variable in the form of {t, 0, tmax}, or simply {t, tmax}, is given instead of
the variable t. The result then appears in terms of InterpolatingFunction objects. This
regime is referred to as the analog simulation.
For discrete-time systems, the same input syntax causes the input functions to be sampled on
a uniform time grid. The grid starts at time t = 0 (or k = 0). The discrete simulation then pro-
ceeds in a straightforward manner according to Eq. (4.4) starting from the initial conditions for
k = 0 and then iterating for k = 1, 2, … . Instead of supplying the input functions and the
range for the time variable, you can give the input sequences explicitly. Note that if you
supply the input signals in the form of input sequences for the continuous-time system, the
system is first converted to discrete time, and then the simulation is performed. To facilitate
the conversion, the response functions accept the options pertinent to ToDiscreteTime,
notably Sampled may be used to set the sampling period, and Method to choose the conver-
sion method.
When input signals are discretized, the input sequences u must represent a matrix each row of
which corresponds to a signal at one input, and the number of rows must be equal to the
number of inputs. For single-input systems, the matrix may be reduced to a vector.
4. Time-Domain Response 65
StateResponse|system, u, ¦t, tmax¦]
find the approximation of the state response for
the time variable t in the range from 0 to tmax
OutputResponse|system, u, ¦t, tmax¦]
find the approximation of the output
response for the specified period of time
StateResponse|system, u]
find the state response
to the discrete input sequences u
OutputResponse|system, u]
find the output response
to the specified input sequences
OutputResponse|c, d, x, u]
find the output response for the system
matrices c and d from the simulated
state response x and input sequences u
OutputResponse|c, x] find the output response when d is a zero matrix
State and output responses in the simulation mode.
Let us simulate the output of the T-bridge network in Figure 4.2 to a square-wave signal.
R
R
C
2
C
1
V
o
V
i
Figure 4.2. A T-bridge.
66 Control System Professional
« This is the transfer function for the T-bridge network.
In[18]:= bridge =TransferFunction ]s,
r
2
c
1
c
2
s
2
+2 r c
2
s +1
-------------------------------- -------------------------------- ---------
r
2
c
1
c
2
s
2
+(r c
1
+2 r c
2
) s +1
|
Out[18]=
l
\
.
..
r
2
c
1
c
2

2
+ 2 r c
2
+ 1

r
2
c
1
c
2

2
+ (r c
1
+ 2 r c
2
) + 1
\
!
.
..

« Here is the transfer function for a given set of parameters.
In[19]:= bridge1=% /.]r -1, c
1
-10, c
2
-
1
-------
10
¦
Out[19]=
l
\
.
.
.
..

2
+

5
+ 1

2
+
51

5
+ 1
\
!
.
.
.
..

« This utility function creates a square wave with period Τ.
In[20]:= square[t_, Τ_]:=Sign]Sin]
2 Π t
-------------
Τ
||
« Supplying the input signal together with both the time variable and the duration of
time period causes OutputResponse to perform in the simulation mode. The
output is a list of simulated responses. In this case, there is only one element in the
list since we are dealing with a single-output system.
In[21]:= outs =OutputResponse[bridge1, square[t, 1],{t, 3}]
Out[21]= {Sign|Sin|2 Π t]]-10 InterpolatingFunction|{{0., 3.¦¦, ·>]|t]¦
« This plots the input (the solid line) and the output (the dashed line) signals.
In[22]:= plot =Plot[Evaluate[Flatten[{square[t, 1], % }]],{t, 0, 3},
PlotStyle -{Thickness[.0075], Dashing[{.05, .01}]}];
0.5 1 1.5 2 2.5 3
-2
-1
1
2
4. Time-Domain Response 67
« This produces the output of the circuit using the discrete simulation with the
sampling period Τ. Note that the sampling period is supplied to OutputResponse
to specify the distance between points in the input vector (produced by the Table
function); this is necessary to convert the system to discrete time properly. The
output is a list of simulated responses. In this case, there is only one vector in the list.
In[23]:= Τ=.1;
In[24]:= OutputResponse[bridge1,
Table[square[t, 1],{t, 0, 3, Τ}],Sampled -Period[Τ]]
Out[24]= {{0., 1., 0.374159, 0.152404, 0.0777489, 0.0566053, -1.94511,
-0.688127, -0.236818, -0.0788588, -0.0276691, 1.9847, 0.73131,
0.281026, 0.12316, 0.0717262, -1.94101, -0.688021, -0.238155,
-0.0807077, -0.029691, 1.98263, 0.729233, 0.278961, 0.121112,
0.0696968, -1.94302, -0.690011, -0.240126, -0.082659, -0.031623¦¦
« The result can be drawn with the built-in Mathematica function ListPlot
(MultipleListPlot for multiple outputs). Here, however, we use a utility
function, SimulationPlot, described later in this section.
In[25]:= SimulationPlot[%, Sampled -Period [Τ],PlotStyle -Hue[0]];
0.5 1 1.5 2 2.5 3
-2
-1
1
2
68 Control System Professional
« This compares the results of the analog and discrete simulations. Clearly, the chosen
sampling period is too large to accurately approximate the behavior of the circuit.
In[26]:= Show[plot, % ];
0.5 1 1.5 2 2.5 3
-2
-1
1
2
SimulationPlot|v] plot the vector or list of vectors v
SimulationPlot|system, u, {t, tmax¦]
plot the simulated output response of system to
the input u as a function of t from 0 to tmax
SimulationPlot|system, u]
simulate the output response
to the discrete input sequences u
Output response simulation plots.
SimulationPlot provides a convenient way to compute and plot the output response of a
system with a single function call. The syntax for SimulationPlot closely resembles the one
for OutputResponse. As with OutputResponse, SimulationPlot produces the analog
or discrete simulation depending on the type of input system, unless the input signal is
specifically given as input sequences, in which case the discrete simulation takes place. You
can also force the discrete simulation of a continuous system by giving the option Sampled to
SimulationPlot. The option is ignored if the input signal is already sampled. For the
purpose of simulation, the sampling period cannot take symbolic values.
4. Time-Domain Response 69
As an exception, simulation of the response of continuous-time systems to the Dirac delta
function is always performed in the discrete-simulation mode. To carry out analog simulation,
you can mimic the Dirac delta function with a continuous impulse of small width. See the
impulse response examples in Section 4.3. Note that, for a continuous-time system, the
impulse response is simulated as a decay of some initial state conditions in the absence of the
input signal. The initial conditions are determined by the components of the matrix B.
To facilitate computation of the output response and conversion to discrete-time, if necessary,
SimulationPlot takes the options for OutputResponse and ToDiscreteTime. Addition-
ally, SimulationPlot accepts the options for the built-in plotting functions Plot, List
Plot, or MultipleListPlot and routes the options to appropriate functions.
« This is an analog simulation of the output response of the bridge circuit to a square-
wave input signal for another set of parameters of the bridge.
In[27]:= SimulationPlot[bridge /.{r -1, c
1
-10, c
2
-1},square[t, 1], {t, 3}];
0.5 1 1.5 2 2.5 3
-1
-0.5
0.5
1
70 Control System Professional
« This is a discrete simulation for the same parameters. The input system is discretized
using the specified sampling period.
In[28]:= SimulationPlot[bridge /.{r -1, c
1
-10, c
2
-1},
square[t, 1], {t, 3},Sampled -Period [.1]];
0.5 1 1.5 2 2.5 3
-1
-0.5
0.5
1
« This is a square pulse function.
In[29]:= UnitStep[(t -1/2) (1-t)]
Out[29]= UnitStep|1-t) |-
1
----
2
+t||
« This simulates the behavior of the bridge for yet another set of parameters. To
ensure the correct result of NDSolve, which is called internally to compute the
output response, we limit the value of MaxRelativeStepSize.
In[30]:= SimulationPlot[bridge /.{r -1, c
1
-1, c
2
-1},
% , {t, 5}, MaxRelativeStepSize -.1];
1 2 3 4 5
-0.2
0.2
0.4
0.6
0.8
1
4. Time-Domain Response 71
« This converts a particular bridge circuit to the discrete-time domain.
In[31]:= ToDiscreteTime[bridge /.{r -1, c
1
-10, c
2
-1},Sampled -Period [.2]]
Out[31]= TransferFunction|s,
{{0.980382-0.0563042 f)-s) 0.980382+0.0563042 f)-s))/
0.800931-s) 0.982142-s))¦¦,Sampled-Period|0.2]]
« Here is the response of this circuit to random noise. Note that SimulationPlot
picks up the sampling period from the input system.
In[32]:= SimulationPlot[%, Table[Random[]-.5, {100}]];
5 10 15 20
-0.6
-0.4
-0.2
0.2
0.4
0.6
Although SimulationPlot generates only output response curves, it is a simple matter to get
it to plot the state response as well. This can be done by adding an appropriate matrix C to the
StateSpace object composed of only matrices A and B. In the rest of the section we give an
example.
Let us consider the linearized state-space system for the depth control problem of a submarine
(see Figure 4.3) for small angles Θ and constant velocity v = 25 ft/s as described in Dorf (1992).
We will assume that the state variables are x
1
= Θ, x
2
= Θ

, and x
3
= Α, where Α is the angle of
attack and the input variable u is ∆
s
, the deflection of the stern plane. We will compute the
state response of the system to a step command for the stern plane of 0.3 deg from zero initial
conditions using the discrete-time approximation with sampling period 2 seconds.

72 Control System Professional
Center of gravity
Velocity v
Θ
Α

s
Figure 4.3. Depth control of a submarine.
« Here is the two-matrix state-space model.
In[33]:= StateSpace[{{0, 1, 0},{-.0071, -.111, .12},{0, .07, -.3}},
{{0}, {-.095}, {.072}}]
Out[33]=
l
\
.
.
.
.
.
.
.
.
0 1 0 0
÷0.0071 ÷0.111 0.12 ÷0.095
0 0.07 ÷0.3 0.072
\
!
.
.
.
.
.
.
.
.

« To compute the state response, we insert a diagonal matrix C containing some
weighting coefficients.
In[34]:= Insert[%, DiagonalMatrix[{.01, 1, 1}], -1]
Out[34]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 1 0 0
÷0.0071 ÷0.111 0.12 ÷0.095
0 0.07 ÷0.3 0.072
0.01 0 0 0
0 1 0 0
0 0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This finds the discrete-time approximation of the system.
In[35]:= ToDiscreteTime[%, Sampled -Period [2]]
Out[35]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0.986793 1.79374 0.183892 ÷0.167272
÷0.0127356 0.800561 0.160081 ÷0.157165
÷0.000761621 0.0933806 0.559313 0.0986635
0.01 0 0 0
0 1 0 0
0 0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
2

4. Time-Domain Response 73
« This plots the state response. We numericalize the input function to avoid supplying
Degree (°) in symbolic form. Note that SimulationPlot automatically picks up
as many points as needed for the discrete simulation to cover the specified time
frame.
In[36]:= SimulationPlot[%, N [.3 °],{t, 150},
PlotLabel-"State Response of Submarine"];
20 40 60 80 100 120 140
-0.002
-0.0015
-0.001
-0.0005
0.0005
0.001
State Response of Submarine
4.3 Step, Impulse, and Other Responses
Using the general time-domain response functions defined in this chapter, it is easy to investi-
gate typical problems such as step, impulse, and ramp responses. This section presents some
examples.
« Let us create the second-order system with the natural frequency Ω and damping
ratio Ζ.
In[37]:= s2[Ω_, Ζ_]=TransferFunction ]s,

2
---------------- ------------------
s
2
+2 Ζ Ω s +Ω
2
|
Out[37]=
l
\
.
..

2

2
+ 2 Ζ Ω +Ω
2
\
!
.
..

« Here is the symbolic form of the step response for the critically damped case.
In[38]:= OutputResponse[s2[Ω
n
, 1], UnitStep[t], t]
Out[38]= {c
-t Ωn
-1+c
t Ωn
-t Ω
n
) UnitStep|t]¦
74 Control System Professional
« This is the analog simulation of the step response for the system with the natural
frequency equal to unity and a particular value of Ζ from the underdamped case
region.
In[39]:= SimulationPlot[s2[1, .1], UnitStep[t],{t, 50},
PlotLabel-"Underdamped Second -Order System", PlotRange -All];
10 20 30 40 50
0.25
0.5
0.75
1
1.25
1.5
1.75
Underdamped Second-Order System
« This plots the response for various values of the damping ratio Ζ. We can see that the
response changes from pure oscillation for Ζ= 0 (undamped case) to the exponential
for Ζ = 1.5 (overdamped case).
In[40]:= Plot3D[Evaluate[OutputResponse[s2[1, Ζ],1, t]#1]],
{t, 0, 20}, {Ζ,0, 1.5},PlotLabel-"Second -Order System",
AxesLabel-{t, "Ζ×10", "y(t)"},ViewPoint -{-0.920, 3.110, 1.750},
PlotRange -All, PlotPoints -50];
Second-Order System
0
5
10
15
20
t
0
0.5
1
1.5
Ζx10
0
0.5
1
1.5
2
yt)
5
10
15
20
0
0.5
1
Ζx10
4. Time-Domain Response 75
« Here is a third-order system. In the next inputs, we compute the step response for a
few values of the parameter Β defined as Β = pl (ΖΩ
n
) for the particular case of
Ζ = 1l 2.
In[41]:= s3[Ω_, Ζ_, p_]=TransferFunction ]s,

2
p
-------------------------------- ---------------- ---------
(s
2
+2 Ζ Ω s +Ω
2
) (s +p)
|
Out[41]=
l
\
.
..
p Ω
2

(p + ) (
2
+ 2 Ζ Ω + Ω
2
)
\
!
.
..

« For Β = o, the system degenerates to the second-order one.
In[42]:= MapAt[Limit[#, p -«]&, %, 2]
Out[42]=
l
\
.
..

2

2
+ 2 Ζ Ω +Ω
2
\
!
.
..

« This is its step response.
In[43]:= OutputResponse]% /.]Ω-1, Ζ-
1
----
2
¦, UnitStep[t],t|//ComplexExpand //
Simplify
Out[43]= ¦-
1
----
3
c
-t/2

\

-3 c
t/2
+3 Cos|

3 t
-------------
2
|+

3 Sin|

3 t
-------------
2
|
`

UnitStep|t]¦
« This is the step response for Β = 2.
In[44]:= OutputResponse]s3]1,
1
----
2
, 1|, UnitStep[t],t|//ComplexExpand //Simplify
Out[44]= ¦

\

1-c
-t
-
2 c
-t/2
Sin|

3 t
---------
2
|
---------------- ---------------- --------

3
`

UnitStep|t]¦
« Finally, here is the step response for Β = 1.
In[45]:= OutputResponse]s3]1,
1
----
2
,
1
----
2
|, UnitStep[t], t|//ComplexExpand //Simplify
Out[45]= ¦
1
----
3
c
-t/2

\

-4+3 c
t/2
+Cos|

3 t
-------------
2
|-

3 Sin|

3 t
-------------
2
|
`

UnitStep|t]¦
76 Control System Professional
« This combines the three previous results. The solid, dashed, and dashed-dotted lines
represent the respective curves for Β = 1, 2, and o.
In[46]:= Plot[Evaluate[Join[%, %%, %%% ]],{t, 0, 8},PlotStyle -
{Thickness[.005], Dashing[{.03}],Dashing[{.05, .02, .01, .02}]},
PlotLabel-"Step Response"];
2 4 6 8
0.2
0.4
0.6
0.8
1
Step Response
« This is the simulation of a ramp response of the third-order system for a particular
set of parameters. To visualize the steady-state error, we include the dashed straight
line.
In[47]:= SimulationPlot[s3[1., .5, 1.],t, {t, 0, 20},
Epilog -{Dashing[{0.025}],Line[{{0, 0},{20, 20}}]},
PlotLabel-"Ramp Response"];
5 10 15 20
2.5
5
7.5
10
12.5
15
17.5
Ramp Response
« Consider a two-input, second-order system.
In[48]:= system =StateSpace[{{0, 2},{-1, -1}},{{1, 1},{1, 0}}, {{1, -2}}]
Out[48]=
l
\
.
.
.
.
.
.
.
..
0 2 1 1
÷1 ÷1 1 0
1 ÷2 0 0
\
!
.
.
.
.
.
.
.
..

4. Time-Domain Response 77
« This applies the impulse function to all inputs in turn and simplifies the result.
In[49]:= OutputResponse[%, DiracDelta[t], t]//ComplexExpand //Simplify
Out[49]= ¦¦
1
----
7
c
-
t
---
2

\

11

7 Sin|

7 t
-------------
2
|-7 Cos|

7 t
-------------
2
|
`

UnitStep|t]¦,
¦
1
----
7
c
-
t
---
2

\

7 Cos|

7 t
-------------
2
|+5

7 Sin|

7 t
-------------
2
|
`

UnitStep|t]¦¦
« Here is the plot of the impulse responses. Note that we have joined the lists in the
previous result to plot all curves in one graph. The impulse responses for the first
and second inputs are shown as a solid and a dashed line, respectively.
In[50]:= Plot[Evaluate[Join ®®%], {t, 0, 10},PlotRange -All,
PlotStyle -{Thickness[.005],Dashing[{.05, .02}]},
PlotLabel-"Impulse Response"];
2 4 6 8 10
-1
-0.5
0.5
1
1.5
2
Impulse Response
78 Control System Professional
« This is the simulated (rather than computed symbolically) impulse response for the
first input. As mentioned in Section 4.2, SimulationPlot performs the discrete
simulation for the input signal in a form of the Dirac delta function.
In[51]:= SimulationPlot[system, DiracDelta[t],
{t, 10}, ControlInputs -1, Sampled -Period [.25],
PlotLabel-"Discrete Simulation of Impulse Response"];
2 4 6 8 10
-1
-0.5
0.5
1
1.5
2
Discrete Simulation of Impulse Response
« To perform the analog simulation, we can introduce an approximation of the Dirac
delta function by an impulse with a finite width. In myDiracDelta, the parameter Τ
determines the offset of the impulse along the time axis and Α determines the width
of the impulse.
In[52]:= myDiracDelta[t_, Τ_: 0, Α_: 100.]:=
Α
----------------
e
Α (t-Τ)
UnitStep[t-Τ]
In[53]:= SimulationPlot[system, myDiracDelta[t],{t, 10},ControlInputs -1,
PlotPoints -50, PlotLabel-"Analog Simulation of Impulse Response"];
2 4 6 8 10
-0.5
0.5
1
1.5
2
Analog Simulation of Impulse Response
4. Time-Domain Response 79
5. Classical Methods of Control Theory
This chapter introduces several classical control theory tools available in Control System Profes-
sional. Some of the functions described here are routines adapted from the Mathematica Applica-
tions Library package Electrical Engineering Examples; the proper extensions have been added
to allow handling of the control objects. If you already have Electrical Engineering Examples
installed, all the functionality you are accustomed to should still be available.
5.1 Root Loci
RootLocusPlotsystem, k, kmin, kmax
generate a root locus plot for system
when parameter k varies from kmin to kmax
Plotting the root loci.
RootLocusPlot graphs the roots of the characteristic equation (the root loci) while some
parameter, typically (but not necessarily) the gain, varies. The input system may be in
state-space or transfer function form and may or may not depend on the parameter k. In the
latter case, the parameter is assumed to be the gain of the open-loop system. Both discrete-
and continuous-time systems can be analyzed using this function. As in the Electrical Engineer-
ing Examples, RootLocusPlot (as well as BodePlot, NyquistPlot, and NicholsPlot)
also accepts the body of the transfer function object as the input system.
« Load the application.
In[1]:= <<ControlSystems`
« Here is a double-integrator system.
In[2]:= TransferFunction ]s,
s +1
-----------
s
2
|
Out[2]= |
+ 1

2
|

« This plots the root loci. Note that because the system does not contain the parameter
k, it is assumed to be the gain.
In[3]:= RootLocusPlot[%, {k, 0, 10}];
-8 -6 -4 -2
Re
-1
-0.5
0.5
1
Im
RootLocusPlot accepts the options for ListPlot as well as several specific options that
define the appearances of poles and zeros, and the number of points at which to sample the
range for k.
option name default value
PoleStyle Automatic styles for poles
ZeroStyle Automatic styles for zeros
PlotPoints 10 number of times the parameter is sampled
Options specific to RootLocusPlot.
As an example of using RootLocusPlot in feedback design, consider the system shown in
Figure 5.1, for which we find coefficients k such that the damping ratio of the dominant
closed-loop poles is 0.4 (cf. Ogata (1990)). The input or reference signal is denoted as r, the
output or controlled signal is c, and the actuating or error signal is e.
r e c
G(s) =
20
H(s) = 1 +ks

s(s+ 1)( s + 4)
Figure 5.1. Example system for root locus construction.
5. Classical Methods of Control Theory 81
« This describes the block with the transfer function G(s) .
In[4]:= g =TransferFunction]s,
20
---------------- ---------------- -----
s (s +1) (s+4)
|
Out[4]= |
20

( + 1) ( + 4)
|

« This defines the transfer function H(s) .
In[5]:= h =TransferFunction[s, 1+k s]
Out[5]= ( k + 1 )

« The open-loop system is formed by the serial connection of the blocks.
In[6]:= SeriesConnect[g, h ]
Out[6]=
l
\
.
.
20 (k + 1)

( + 1) ( + 4)
\
!
.
.

The suitable values of k correspond to intersections of the root locus curve with the straight
line depicting the locus of poles of the generic second-order system
H(s) =

n
2

s
2
+ 2 ΖΩ
n
s + Ω
n
2
for the chosen value of Ζ, where Ω
n
is the natural frequency and Ζ is the damping ratio.
82 Control System Professional
« This plots root loci together with the line for the chosen value of Ζ. The natural
frequency Ω has been picked quite arbitrarily, just to display the line in some
bounds. Note that in this case parameter k is not the gain. As we can see, the line
intersects the root locus branch somewhere between 5 and 6 points, and then again
between points 14 and 15.
In[7]:= RootLocusPlot]%, {k, 0, 2}, PlotPoints -20, PoleStyle -PointSize[.005],
Epilog -Line]]{0, 0},]-Ζ Ω,Ω
¸¯¯¯¯¯¯¯¯¯¯¯¯
1-Ζ
2
¦/.{Ω-10, Ζ-.4}¦|,
PlotLabel-"Root Loci"|;
-4 -3 -2 -1
Re
-6
-4
-2
2
4
6
Im
Root Loci
« Roughly estimating that the intersections are 0.2 and 0.5 off the respective points, we
get some estimates of parameter k that provide the required damping ratio.
In[8]:= kmax =2;points =20;
kmax
---------------- ---------
points -1
(#-1)& /®{5.2, 14.5}
Out[8]= 0.442105, 1.42105
« Here are the corresponding expressions for the closed-loop system.
In[9]:= results =FeedbackConnect[g, h ]/.List/®Thread [k -% ]//Simplify
Out[9]= ||
20

3
+ 5
2
+ 12.8421 +20
|

, |
20

3
+ 5
2
+ 32.4211 + 20
|

|
« This simulates the step responses for the chosen values of k. The first system exhibits
a faster response with reasonable overshoot compared to the second system.
In[10]:= SimulationPlot[#, UnitStep[t],{t, 5},
PlotRange -All, PlotLabel-"Step Response"]& /®results;
5. Classical Methods of Control Theory 83
1 2 3 4 5
0.2
0.4
0.6
0.8
1
Step Response
1 2 3 4 5
0.2
0.4
0.6
0.8
Step Response
RootLocusAnimation complements RootLocusPlot and provides information on the
direction of the evolution of root loci. The syntax for the two function is identical; the only
exception is that the option PlotPoints is meaningless for the animation and is replaced by
Frames, which determines the number of frames the animation should contain.
RootLocusAnimationsystem, k, kmin, kmax
generate a sequence of root locus plots for system
with parameter k running from kmin to kmax
Animating the root loci.
84 Control System Professional
5.2 The Bode Plot
5.2.1 The Basic Function
The Bode plot is the first example of the frequency response analysis tools that we consider in
this chapter. By frequency response, we mean the response to purely sinusoidal signals. Bode
Plot depicts, in the form of GraphicsArray, the frequency dependence of the magnitude
and phase response. The magnitude response is the ratio of the span of the output sinusoid to
the input one, and the phase response is the phase difference between them. The magnitude
and phase dependencies are plotted in the double-logarithmic and semilogarithmic scales,
respectively.
BodePlotsystem generate a Bode plot for system
BodePlotsystem, w, wmin, wmax
generate a Bode plot for
frequency w running from wmin to wmax
Creating the Bode plots.
Like all other plotting routines in Control System Professional, BodePlot accepts a system in
state-space or transfer function form in the continuous- or discrete-time domain. Specifying
the frequency range is optional. If it is not supplied, BodePlot will try to determine a suitable
range automatically. Also optional is the naming of the frequency variable in supplying the
desired frequency range; {w, wmin, wmax¦ works as well as {wmin, wmax¦. This should come
as no surprise if you recall that the variable in TransferFunction is an internal parameter
and does not have any meaning outside the scope of the TransferFunction object.
« Consider again the second-order system.
In[11]:= s2[Ω_, Ζ_]=TransferFunction ]s,

2
---------------- ------------------
s
2
+2 Ζ Ω s +Ω
2
|;
5. Classical Methods of Control Theory 85
« This is the Bode plot for the unity natural frequency and damping ratio. The
frequency range is not specified and is determined automatically.
In[12]:= BodePlot[s2[1, 1]];
0.1 0.2 0.5 1 2 5 10
Frequency RadSecond
-150
-125
-100
-75
-50
-25
P
h
a
s
e

d
e
g

0.10.2 0.5 1 2 5 10
Frequency RadSecond
-40
-30
-20
-10
0
M
a
g
n
i
t
u
d
e

d
B

« This gives the Bode plot over a wider frequency range.
In[13]:= BodePlot[s2[1, 1],{.01, 100}];
0.01 0.1 1 10 100
Frequency RadSecond
-175
-150
-125
-100
-75
-50
-25
0
P
h
a
s
e

d
e
g

0.01 0.1 1 10 100
Frequency RadSecond
-80
-60
-40
-20
0
M
a
g
n
i
t
u
d
e

d
B

By default, BodePlot tries to unwrap the phase in order to present a smooth phase curve
should branch cuts be detected. The value Automatic for the option PhaseRange typically
86 Control System Professional
(unless the gain and phase margins are to be plotted—see below) corresponds to such behav-
ior. PhaseRange allows also to shift the graph to another phase range which must be speci-
fied in the form PhaseRange ÷ {min, max¦, with the new limits min and max given in degrees.
Very rapid changes in phase dependence may cause errors in phase unwrapping. The safest
(and fastest) choice is PhaseRange ÷ {-180, 180}, in which case no additional processing
of the phase is performed.
Another way to eliminate possible errors in phase unwrapping is to increase the number of
points at which the transfer function is sampled by changing the option PlotPoints; this is
also useful if rapid changes in the magnitude or phase curve are not plotted satisfactorily
otherwise. For some plots, you may wish to use linear spacing between samples instead of the
default logarithmic spacing. This can be done by using PlotSampling ÷ LinearSpacing.
The option PhaseRange is in no way a substitute for the standard Mathematica plotting
option PlotRange and, in fact, works quite differently. If, for example, a plot does not have
any part in the range specified in PlotRange, nothing will appear in the graph. On the other
hand, PhaseRange will bring the graph into the specified range based on the principle that
all 360 ° intervals are physically equivalent.
option name default value
PhaseRange Automatic the range for plotting the phase curve
PlotPoints 30 how many points
of the graph are required
PlotSampling LogSpacing distribute points evenly on
the logarithmic or linear scale
Margins False whether to compute and
show gain and phase margins
MarginStyle Automatic a list of lists of graphics
primitives to use for each margins
Options specific to BodePlot.
BodePlot also accepts options pertinent to ListPlot and GraphicsArray. Because a plot
consists of two subplots, certain options—namely, AspectRatio, AxesLabel, AxesOrigin,
FrameLabel, Epilog, Prolog, FrameTicks, GridLines, Ticks, and PlotRange—are
handled in a nonstandard fashion. Given a list of two items as input, they will apply the first
item to the magnitude plot and the second to the phase plot. Other options are applied equally
to both plots or to GraphicsArray itself.
BodePlot operates on MIMO systems by grouping the graphs related to each input and
5. Classical Methods of Control Theory 87
plotting as many graphs as there are inputs. You may specify options for each input differ-
ently by wrapping option values into additional lists. By default, the graphs related to different
outputs in one plot are distinguished by the Hue value. This can be changed using the stan-
dard PlotStyle option.
« Here is a single-input, multiple-output system, of which each input-output pair is a
second-order system with a different value of damping ratio.
In[14]:= TransferFunction]s, Table]]
1
---------------- ------------
s
2
+2 s Ζ+1
¦, {Ζ,.05, .8, .25}||
Out[14]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1

2
+ 0.1 + 1
1

2
+ 0.6 + 1
1

2
+ 1.1 + 1
1

2
+ 1.6 + 1
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This is its Bode plot.
In[15]:= BodePlot[%, {.1, 10},
PlotStyle -{Dashing[{.05, .01, .01, .01}],Dashing[{.05, .01}],
Dashing[{.05}], Thickness[.001]},PlotPoints -200];
0.10.2 0.5 1 2 5 10
Frequency RadSecond
-175
-150
-125
-100
-75
-50
-25
0
P
h
a
s
e

d
e
g

0.10.2 0.5 1 2 5 10
Frequency RadSecond
-40
-30
-20
-10
0
10
20
M
a
g
n
i
t
u
d
e

d
B

« Of course, you can always supplement the results from BodePlot with diagrams
obtained using the built-in Mathematica functions. This plots the gain versus
88 Control System Professional
frequency and damping ratio for the generic second-order system. To make a more
attractive graphic, we picked up some coefficients for the x and y axes.
In[16]:= ParametricPlot3D[
Evaluate[{10 Log[10, Ω],Ζ 50, 20 Log [10, Abs[s2[1, Ζ][r Ω]#1, 1]]]}],
{Ω,.005, 5},{Ζ, 0, 1.5},PlotPoints -{40, 20},
AxesLabel-{"10 Log[10, Ω]", "Ζ×50", "Gain (dB)"}, Boxed -False];
-10
-5
0
5
10 Log10, Ω
0
20
40
60
Ζ50
-20
0
20
Gain dB
10
-5
0
0
20
40
60
Ζ50
5.2.2 Gain and Phase Margins
Important insight into the stability of the closed-loop system can be achieved by analyzing the
gain and phase margins of the system before closing the loop. The margins can be computed
using GainPhaseMargins and displayed on the Bode plot using the option Margins, which
accepts either the result of GainPhaseMargins (if it was computed prior to the call to Bode
Plot) or True to compute the margins transparently. The MarginStyle option can be used
to specify the graphics primitive for margins in a manner similar to the PlotStyle option. If
Margins is set to False, the margins are not computed (the default).
5. Classical Methods of Control Theory 89
GainPhaseMarginssystem compute gain and phase margins for system
BodePlotsystem, Margins ÷ margins
show precomputed gain and
phase margins on the Bode plot
BodePlotsystem, Margins ÷ True
compute and show gain and phase margins
Gain and phase margins.
As an exercise, let us select gain k for the system shown in Figure 5.2 such that the phase
margin is greater than 30 ° and the gain margin is greater than 10 dB (Brogan (1991)).

k(s+ 50)
1
r e c

s(s+ 10)
s + 20
Figure 5.2. Example system for gain and phase margin selection.
« This defines the plant.
In[17]:= g =TransferFunction ]s,
k (s +50)
---------------- --------
s (s +10)
|
Out[17]=
l
\
.
.
k ( + 50)

( + 10)
\
!
.
.

« This is the regulator.
In[18]:= h =TransferFunction ]s,
1
--------------
s +20
|
Out[18]= |
1

+ 20
|

90 Control System Professional
« Here is the open-loop system formed from the above blocks.
In[19]:= SeriesConnect[g, h ]
Out[19]=
l
\
.
.
k ( + 50)

( + 10) ( + 20)
\
!
.
.

« This is the Bode plot for a particular value of k.
In[20]:= BodePlot[% /.k -4.];
0.1 1 10 100 1000
Frequency RadSecond
-180
-160
-140
-120
-100
P
h
a
s
e

d
e
g

0.1 1 10 100 1000
Frequency RadSecond
-100
-80
-60
-40
-20
0
20
M
a
g
n
i
t
u
d
e

d
B

We observe that at Ω = 10 the phase is around ÷150 °, which would give the desired phase
margin of 30 ° if we increase the gain by 24 dB, the value by which the current gain at this
frequency is less than zero. That would lead to a gain margin somewhat more than 10 dB at
Ω = 24, the current crossover frequency, which would satisfy the specification.
5. Classical Methods of Control Theory 91
« This is the Bode plot for the increased gain, with the gain and phase margins
displayed. We can see that the specifications are met.
In[21]:= BodePlot[%% /.k -4. 10
24/20
, Margins -True];
0.1 1 10 100 1000
Frequency RadSecond
-180
-160
-140
-120
-100
P
h
a
s
e

d
e
g

0.1 1 10 100 1000
Frequency RadSecond
-80
-60
-40
-20
0
20
40
M
a
g
n
i
t
u
d
e

d
B

Gain and phase margins are determined by finding the crossover points of the frequency
response, which can be done either analytically or from the interpolation of the response
curve computed on a frequency grid. The method can be selected using the option Method,
which accepts values Analytic or Interpolation. The freedom of choosing the method is
restricted to continuous-time systems; for discrete-time systems, the margins are always
computed via interpolation.
option name default value
Method Automatic whether the margins should be
computed analytically or by interpolation
Options specific to GainPhaseMargins.
When margins are to be plotted, the Automatic value for PhaseRange corresponds to the
phase range of {-360, 0} degrees. Using the option PhaseRange, you may set the phase
range differently.
92 Control System Professional
5.3 The Nyquist Plot
The Nyquist plot allows us to gain insight into the stability of the closed-loop system by
analyzing the contour of the frequency response function on the complex plane.
NyquistPlotsystem generate a Nyquist plot for system
NyquistPlotsystem, w, wmin, wmax
generate a Nyquist plot for
frequency w running from wmin to wmax
Creating the Nyquist plots.
If no frequency range is specified, NyquistPlot will try to plot the graph for both negative
and positive frequencies (a task that can be avoided by plotting only half of the curve and
then reflecting it over the real axis).
Note that jumps may appear in the Nyquist plot when the frequency response changes rap-
idly. That may be corrected by choosing a higher value for the PlotPoints option.
« Consider an open-loop transfer function.
In[22]:= TransferFunction]s,
1
-------------------
(s +1)
2
|
Out[22]=
l
\
.
..
1

( + 1)
2
\
!
.
..

5. Classical Methods of Control Theory 93
« This is the corresponding Nyquist plot. The contour does not encircle the (÷1, 0)
point and the transfer function does not have unstable poles; therefore, the
closed-loop system will be stable.
In[23]:= NyquistPlot[%];
0.2 0.4 0.6 0.8 1
Re
-0.6
-0.4
-0.2
0.2
0.4
0.6
Im
Like the other frequency response plotting functions, NyquistPlot is capable of handling
nonpolynomial transfer functions. Note, however, that such systems must be supplied in
transfer function form and the desired frequency range must be specified, since conversion
from the state space works for linear systems only and so does the routine that determines
the default frequency range.
« Here is a system consisting of a transport lag e
÷Τ s
and first-order lag 1l (1 + T s) , for
the unit values of Τ and T.
In[24]:= TransferFunction]s,
e
-s
-----------
1+s
|
Out[24]= |
:
÷

+ 1
|

94 Control System Professional
« This is the Nyquist plot for this system.
In[25]:= NyquistPlot[%, {.01, 20}, PlotPoints -500];
-0.4 -0.2 0.2 0.4 0.6 0.8 1
Re
-0.8
-0.6
-0.4
-0.2
0.2
Im
Like BodePlot, NyquistPlot accepts the PlotPoints and PlotSampling options, as
well as options pertinent to ListPlot.
5.4 The Nichols Plot
NicholsPlot gives yet another way to depict frequency response information—by plotting
the magnitude versus phase curve in the semilogarithmic scale.
NicholsPlotsystem generate a Nichols plot for system
NicholsPlotsystem, w, wmin, wmax
generate a Nichols plot for
frequency w running from wmin to wmax
Creating the Nichols plots.
« Here is an open-loop transfer function for the spacecraft attitude control system,
compensated with a PID controller from Franklin et al. (1991).
In[26]:= TransferFunction]s,
.05 (10 s +1) (s+.005) .9 2
-------------------------------- ---------------- ---------------- -----
s s
2
(s +2)
|
Out[26]=
l
\
.
.
0.09 ( + 0.005) (10 + 1)

3
( + 2)
\
!
.
.

5. Classical Methods of Control Theory 95
« This is the Nichols plot for the system.
In[27]:= NicholsPlot[%];
-180 -170 -160 -150 -140 -130 -120
Phase degrees
-80
-60
-40
-20
0
G
a
i
n

d
B

NicholsPlot accepts the options PlotPoints, PlotSampling, and PhaseRange, as does
BodePlot, as well as the options pertinent to ListPlot.
5.5 The Singular-Value Plot
For MIMO systems, BodePlot returns a total of p × q individual Bode plots, and conse-
quently, the amount of information quickly grows with the number of inputs p and outputs q .
SingularValuePlot provides a means to generalize this information by generating a plot
of the frequency dependence of singular values of the transfer matrix evaluated at different
values of s = j Ω (or z = e
j Ω T
for discrete-time systems, T being the sampling period).
SingularValuePlotsystem
generate a singular-value plot for system
SingularValuePlotsystem, w, wmin, wmax
generate a singular-value plot for
frequency w running from wmin to wmax
Creating the singular-value plots.
96 Control System Professional
« Here is a mixing tank system (see Section 10.1).
In[28]:= StateSpace[{{-.01, 0}, {0, -.02}},
{{1, 1},{-.004, .002}},{{.01, 0},{0, 1}}]
Out[28]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷0.01 0 1 1
0 ÷0.02 ÷0.004 0.002
0.01 0 0 0
0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This plots the frequency dependence of the singular values.
In[29]:= SingularValuePlot[%];
0.001 0.005 0.01 0.050.1 0.5 1
Frequency RadSecond
-40
-30
-20
-10
0
S
i
n
g
u
l
a
r
V
a
l
u
e
s

d
B

5. Classical Methods of Control Theory 97
6. System Interconnections
This chapter introduces the tools needed to construct a composite system based on a given
system topology and descriptions of the blocks. SeriesConnect, ParallelConnect,
FeedbackConnect, and StateFeedbackConnect perform elementary interconnections;
GenericConnect handles these as well as more convoluted cases. The auxiliary functions
Subsystem, DeleteSubsystem, and MergeSystems provide a means of manipulating
system contents other than interconnections. All the functions accept (and operate analo-
gously on) continuous- and discrete-time objects, but the systems to be connected must be in
the same domain (and sampled at the same rate, if discrete-time) to produce a meaningful
result. It is, therefore, the responsibility of the user to perform the necessary conversions prior
to calling the interconnecting functions.
The interconnection procedures do not necessarily result in minimal state models, so some
model reduction techniques (see Chapter 8) may be appropriate.
6.1 Elementary Interconnections
Elementary interconnections include serial (cascade), parallel, and feedback interconnections.
The subsystems can be supplied in either state-space or transfer function form. If of two
subsystems one is in state space, the other is transformed to that form too, and a state-space
object is returned. If both systems are entered as transfer functions, this is the form of the
resultant system. If the transfer functions of the blocks are defined using differently named
variables, the first variable encountered is used in the result. In elementary and other intercon-
nections, inputs and outputs are referred to by their index in the input or output vectors.
6.1.1 Connecting in Series
SeriesConnect finds the description of an aggregate system composed of two subsystems
connected in series, as shown in Figure 6.1. The aggregate has inputs of system i , outputs of
system j , and states of both systems, x
.
.
.
.
.
.
.
.
.
.
x
i
x
j
.
.
+
+
+
+
+
+
+
+
. Note that not all outputs of the first system
should necessarily be connected to inputs of the second system.

u
i
System i System j
y
i
u
j
y
j
Figure 6.1. Series interconnection.
SeriesConnect|system
1
, system
2
]
find the aggregate systemby connecting all outputs
of system
1
to corresponding inputs of system
2
SeriesConnect|system
1
, system
2
, {o, i¦]
connect the output o of
system
1
to the input i of system
2
SeriesConnect|system
1
, system
2
, {{o
1
, i
1
¦, {o
2
, i
2
¦, …¦]
connect outputs o
k
of system
1
to inputs i
k
of system
2
Series connection of two systems.
Load the application.
In[1]:= <<ControlSystems`
Here are two systems in state-space form.
In[2]:= ss1=StateSpace[{{a
11
, a
12
},{a
21
, a
22
}},
{{b
11
, b
12
},{b
21
, b
22
}},{{c
11
, c
12
}, {c
21
, c
22
}},{{d
11
, d
12
},{d
21
, d
22
}}]
Out[2]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
b
11
b
12
a
21
a
22
b
21
b
22
c
11
c
12
d
11
d
12
c
21
c
22
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[3]:= ss2=StateSpace[{{A
11
, A
12
},{A
21
, A
22
}},
{{B
1
, 0},{0, B
2
}},{{C
11
, C
12
}},{{D
11
, D
12
}}]
Out[3]=
l
\
.
.
.
.
.
.
.
..
A
11
A
12
B
1
0
A
21
A
22
0 B
2
C
11
C
12
D
11
D
12
\
!
.
.
.
.
.
.
.
..

6. System Interconnections 99
This connects the systems in series.
In[4]:= SeriesConnect[ss1, ss2]
Out[4]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
11
a
12
0 0 b
11
b
12
a
21
a
22
0 0 b
21
b
22
B
1
c
11
B
1
c
12
A
11
A
12
B
1
d
11
B
1
d
12
B
2
c
21
B
2
c
22
A
21
A
22
B
2
d
21
B
2
d
22
c
11
D
11
c
21
D
12
c
12
D
11
c
22
D
12
C
11
C
12
d
11
D
11
d
21
D
12
d
12
D
11
d
22
D
12
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

The result is, of course, the familiar StateSpace object.
In[5]:= % //StandardForm
Out[5]//StandardForm=
StateSpace|{{a
11
, a
12
, 0, 0¦,{a
21
, a
22
, 0, 0¦,
{B
1
c
11
, B
1
c
12
, A
11
, A
12
¦, {B
2
c
21
, B
2
c
22
, A
21
, A
22
¦¦,
{{b
11
, b
12
¦,{b
21
, b
22
¦,{B
1
d
11
, B
1
d
12
¦,{B
2
d
21
, B
2
d
22
¦¦,
{{c
11
D
11
c
21
D
12
, c
12
D
11
c
22
D
12
, C
11
, C
12
¦¦,{{d
11
D
11
d
21
D
12
, d
12
D
11
d
22
D
12
¦¦]
Now the second output of the first system is connected to the first input of the
second system. Another output and input remain loose.
In[6]:= SeriesConnect[ss1, ss2, {2, 1}]
Out[6]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
0 0 b
11
b
12
a
21
a
22
0 0 b
21
b
22
B
1
c
21
B
1
c
22
A
11
A
12
B
1
d
21
B
1
d
22
0 0 A
21
A
22
0 0
c
21
D
11
c
22
D
11
C
11
C
12
d
21
D
11
d
22
D
11
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Now all the outputs and inputs are connected again, but this time in the reverse
order.
In[7]:= SeriesConnect[ss1, ss2, {{2, 1}, {1, 2}}]
Out[7]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
0 0 b
11
b
12
a
21
a
22
0 0 b
21
b
22
B
1
c
21
B
1
c
22
A
11
A
12
B
1
d
21
B
1
d
22
B
2
c
11
B
2
c
12
A
21
A
22
B
2
d
11
B
2
d
12
c
21
D
11
c
11
D
12
c
22
D
11
c
12
D
12
C
11
C
12
d
21
D
11
d
11
D
12
d
22
D
11
d
12
D
12
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

100 Control System Professional
Here is an integrator given in transfer function form.
In[8]:= integrator =TransferFunction]s,
1
----
s
|
Out[8]= |
1

|

This attaches the integrator to the first output of system ss1. Notice that SeriesCon
nect returns the result in state-space form as soon as one of the input systems is
given in that form.
In[9]:= SeriesConnect[ss1, integrator, {1, 1}]
Out[9]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
0 b
11
b
12
a
21
a
22
0 b
21
b
22
c
11
c
12
0 d
11
d
12
0 0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Here both input systems are in transfer function form, and so is the result.
In[10]:= SeriesConnect[integrator, integrator]
Out[10]= |
1

2
|

This is a slightly more complicated connection—the integrator is connected to the
third output of the transfer function system tf.
In[11]:= tf =TransferFunction ]s,
]]
s
---------------- ---------------- ------
(s+1)
2
(s +2)
2
,
s
-------------------
(s+2)
2
¦,]-
s
-------------------
(s +2)
2
, -
s
-------------------
(s +2)
2
¦,]
1
-------------------
(s +2)
2
,
s
-------------------
(s +1)
2
¦¦|
Out[11]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

( 1)
2
( 2)
2


( 2)
2

( 2)
2



( 2)
2
1

( 2)
2


( 1)
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

In[12]:= SeriesConnect[tf, integrator, {3, 1}]
Out[12]=
l
\
.
..
1

( 2)
2
1

( 1)
2
\
!
.
..

6. System Interconnections 101
SeriesConnect can cascade only two systems. However, the Mathematica language makes
extensions to any number of systems fairly straightforward.
Here is a function that connects any number of matching subsystems in series.
In[13]:= cascade[s__]:=Fold[SeriesConnect, First[{s}],Rest[{s}]]
Using the new function, this command cascades a set of four abstract systems.
Notice that the aggregate remains partially unevaluated until the systems are
specified.
In[14]:= cascade[s1, s2, s3, s4]
Out[14]= SeriesConnect|SeriesConnect|SeriesConnect|s1, s2], s3],s4]
This built-in function reveals the structure of the cascade.
In[15]:= TreeForm[%]
Out[15]//TreeForm=
SeriesConnect||
SeriesConnect||
SeriesConnect|s1, s2]
, s3|
, s4|
We prepare a set of first-order systems with simple poles to be used instead of the
abstract systems.
In[16]:= Table]TransferFunction]s,
1
-----------
s +i
|, {i, 0, 3}|
Out[16]= ||
1

|

, |
1

1
|

, |
1

2
|

, |
1

3
|

|
Now the cascade can be found in closed form. The cascade is shown in Figure 6.2.
In[17]:= %% /.Thread [{s1, s2, s3, s4}-% ]
Out[17]= |
1

( 1) ( 2) ( 3)
|

u y 1
s
1
s 1
1
s 2
1
s 3
Figure 6.2. Cascade of first-order systems.
102 Control System Professional
6.1.2 Connecting in Parallel
The parallel interconnection of subsystems according to Figure 6.3 can be accomplished with
the function ParallelConnect. The subsystems may or may not have some shared inputs
and some summed outputs. The aggregate has these shared inputs and summed outputs as
well as other inputs and outputs of the components. The states of the aggregate come from
both subsystems. As with all elementary interconnection functions, the systems may be in
either state-space or transfer function form, or both.

u
j
y
j
y
i
y
u
i
u
System i
System j

Figure 6.3. Parallel interconnection.
ParallelConnect|system
1
, system
2
]
connect all inputs of system
1
with all inputs of
system
2
and sumall corresponding outputs
ParallelConnect|system
1
, system
2
, {i
1
, i
2
¦, {o
1
, o
2
¦]
connect the input i
1
of system
1
with the input i
2
of
system
2
and sumoutputs o
1
of system
1
and o
2
of system
2
ParallelConnect|system
1
, system
2
,
{{i
11
, i
21
¦, {i
12
, i
22
¦, …¦, {{o
11
, o
21
¦, {o
12
, o
22
¦, …¦]
connect the inputs i
1k
of system
1

with the inputs i
2k
of system
2
and sumoutputs
o
1k
of system
1
with outputs o
2k
of system
2
Parallel connection of two systems.
6. System Interconnections 103
These are two state-space systems defined earlier.
In[18]:= ss1=StateSpace[{{a
11
, a
12
},{a
21
, a
22
}},
{{b
11
, b
12
},{b
21
, b
22
}}, {{c
11
, c
12
},{c
21
, c
22
}},{{d
11
, d
12
}, {d
21
, d
22
}}]
Out[18]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
b
11
b
12
a
21
a
22
b
21
b
22
c
11
c
12
d
11
d
12
c
21
c
22
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[19]:= ss2=StateSpace[{{A
11
, A
12
},{A
21
, A
22
}},
{{B
1
, 0},{0, B
2
}},{{C
11
, C
12
}}, {{D
11
, D
12
}}]
Out[19]=
l
\
.
.
.
.
.
.
.
..
A
11
A
12
B
1
0
A
21
A
22
0 B
2
C
11
C
12
D
11
D
12
\
!
.
.
.
.
.
.
.
..

This connects correspondingly numbered inputs of the subsystems and adds the first
output of ss2 to both outputs of ss1.
In[20]:= ParallelConnect[ss1, ss2, {{1, 1}, {2, 2}},{{1, 1},{2, 1}}]
Out[20]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
11
a
12
0 0 b
11
b
12
a
21
a
22
0 0 b
21
b
22
0 0 A
11
A
12
B
1
0
0 0 A
21
A
22
0 B
2
c
11
c
12
C
11
C
12
d
11
D
11
d
12
D
12
c
21
c
22
C
11
C
12
d
21
D
11
d
22
D
12
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

This takes two systems represented by the same transfer function tf and connects
all inputs and sums all outputs in the criss-cross order.
In[21]:= tf =TransferFunction ]s,
]]
s
---------------- ---------------- ------
(s+1)
2
(s +2)
2
,
s
-------------------
(s+2)
2
¦,]-
s
-------------------
(s +2)
2
, -
s
-------------------
(s +2)
2
¦,]
1
-------------------
(s +2)
2
,
s
-------------------
(s +1)
2
¦¦|
Out[21]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

( 1)
2
( 2)
2


( 2)
2

( 2)
2



( 2)
2
1

( 2)
2


( 1)
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

104 Control System Professional
In[22]:= ParallelConnect[tf, tf,
{{1, 2}, {2, 1}}, {{1, 3}, {2, 2}, {3, 1}}]//Simplify
Out[22]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
(
2
4 5)

( 1)
2
( 2)
2
1

( 2)
2

2

( 2)
2

2

( 2)
2
1

( 2)
2
(
2
4 5)

( 1)
2
( 2)
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

ParallelConnect can be used to sum the outputs or just to connect the inputs without
summing the outputs. Examples of such usage are given in Section 10.7.
6.1.3 Closing Feedback Loop
Feedback interconnections of the types shown in Figure 6.4 are performed with the function
FeedbackConnect. The function may just close the loop (Figure 6.4a), or may include a
second system (typically a controller) in the feedback (Figure 6.4b). Either negative (default),
positive, or mixed feedback can be formed. The connection specifications may be omitted if all
corresponding inputs and outputs are to be used. The inputs and outputs of the aggregate are
the ones of the first system. The states come from all subsystems.

u
j
y
j
y
i u
i
u
i
u
System i
System j
y
y
i
u
System y
(a) (b)

Figure 6.4. Feedback interconnections.
6. System Interconnections 105
FeedbackConnect|system]
feed all the outputs of system back to the
corresponding inputs with a negative sign
FeedbackConnect|system, type]
form either negative or
positive feedback depending on type
FeedbackConnect|system, {o, i¦]
feed the output o of system back to the input i
FeedbackConnect|system, {o, i¦, type]
use type for the connection
FeedbackConnect|system, {{o
1
, i
1
¦, {o
2
, i
2
¦, …¦]
connect several outputs o
k
and inputs i
k
FeedbackConnect|system, {{o
1
, i
1
¦, {o
2
, i
2
¦, …¦, type]
use type for all connections
FeedbackConnect|system, {{o
1
, i
1
, type
1
¦, {o
2
, i
2
, type
2
¦, …¦]
use type
k
when connecting output o
k
to input i
k
Closing the feedback loop.
106 Control System Professional
FeedbackConnect|system
1
, system
2
]
put system
2
in the negative feedback loop for system
1
FeedbackConnect|system
1
, system
2
, type]
create feedback specified by type
FeedbackConnect|system
1
, system
2
, {o
1
, o
2
, …¦, {i
1
, i
2
, …¦]
close the negative feedback loop for system
1

with system
2
by connecting outputs o
k
of
system
1
to sequentially numbered inputs of
system
2
and sequentially numbered outputs of
system
2
to inputs i
k
of system
1
FeedbackConnect|system
1
, system
2
, {o
1
, o
2
, …¦, {i
1
, i
2
, …¦, type]
use type for all connections
FeedbackConnect|system
1
, system
2
,
{o
1
, o
2
, …¦, {{i
1
, type
1
¦, {i
2
, type
2
¦, …¦]
use type
k
when connecting input i
k
Feedback interconnections with second system.
The type descriptor for FeedbackConnect should be one of the reserved words Positive or
Negative.
Consider a second-order system with two inputs and two outputs.
In[23]:= ss =StateSpace[DiagonalMatrix[{a
1
, a
2
}],DiagonalMatrix[{b
1
, b
2
}],
DiagonalMatrix[{c
1
, c
2
}],DiagonalMatrix[{d
1
, d
2
}]]
Out[23]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 b
1
0
0 a
2
0 b
2
c
1
0 d
1
0
0 c
2
0 d
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

6. System Interconnections 107
This forms the negative feedback by connecting corresponding outputs and inputs.
In[24]:= FeedbackConnect[ss]
Out[24]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1

b
1
c
1
(d
2
1)

d
2
d
1
d
1
d
2
1
0 b
1

b
1
d
1
(d
2
1)

d
2
d
1
d
1
d
2
1
0
0 a
2

b
2
c
2
(d
1
1)

d
2
d
1
d
1
d
2
1
0 b
2

b
2
(d
1
1) d
2

d
2
d
1
d
1
d
2
1
c
1

c
1
d
1
(d
2
1)

d
2
d
1
d
1
d
2
1
0 d
1

d
1
2
(d
2
1)

d
2
d
1
d
1
d
2
1
0
0 c
2

c
2
(d
1
1) d
2

d
2
d
1
d
1
d
2
1
0 d
2

(d
1
1) d
2
2

d
2
d
1
d
1
d
2
1
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

We can use built-in Mathematica functions to simplify the result.
In[25]:= Together /®%
Out[25]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
d
1
a
1
a
1
b
1
c
1

d
1
1
0
b
1

d
1
1
0
0
d
2
a
2
a
2
b
2
c
2

d
2
1
0
b
2

d
2
1
c
1

d
1
1
0
d
1

d
1
1
0
0
c
2

d
2
1
0
d
2

d
2
1
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

This forms the closed loop for the first two inputs and outputs of the transfer
function tf.
In[26]:= tf =TransferFunction ]s,
]]
s
---------------- ---------------- ------
(s+1)
2
(s +2)
2
,
s
-------------------
(s+2)
2
¦,]-
s
-------------------
(s +2)
2
, -
s
-------------------
(s +2)
2
¦,]
1
-------------------
(s +2)
2
,
s
-------------------
(s +1)
2
¦¦|
Out[26]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

( 1)
2
( 2)
2


( 2)
2

( 2)
2



( 2)
2
1

( 2)
2


( 1)
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

108 Control System Professional
In[27]:= FeedbackConnect[tf, {{1, 1}, {2, 2}}]//Simplify
Out[27]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
(
2
2)

5
7
4
22
3
34
2
28 8
( 1)
2
( 2)

5
7
4
22
3
34
2
28 8

( 1)
2
( 2)

5
7
4
22
3
34
2
28 8

(
3
3
2
5 2)

5
7
4
22
3
34
2
28 8
2
4
9
3
15
2
11 4

( 2) (
5
7
4
22
3
34
2
28 8)
(
6
10
5
40
4
85
3
102
2
64 15)

( 1)
2
( 2) (
5
7
4
22
3
34
2
28 8)
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

Now we plug the system ss in the positive feedback loop for ss1 (assuming, for
simplicity, that there is no direct transmission term in ss).
In[28]:= ss1=StateSpace[{{a
11
, a
12
},{a
21
, a
22
}},
{{b
11
, b
12
},{b
21
, b
22
}}, {{c
11
, c
12
},{c
21
, c
22
}},{{d
11
, d
12
}, {d
21
, d
22
}}]
Out[28]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
b
11
b
12
a
21
a
22
b
21
b
22
c
11
c
12
d
11
d
12
c
21
c
22
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[29]:= FeedbackConnect[ss1, ss /.d
_
-0, Positive]
Out[29]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
11
a
12
b
11
c
1
b
12
c
2
b
11
b
12
a
21
a
22
b
21
c
1
b
22
c
2
b
21
b
22
b
1
c
11
b
1
c
12
a
1
b
1
c
1
d
11
b
1
c
2
d
12
b
1
d
11
b
1
d
12
b
2
c
21
b
2
c
22
b
2
c
1
d
21
a
2
b
2
c
2
d
22
b
2
d
21
b
2
d
22
c
11
c
12
c
1
d
11
c
2
d
12
d
11
d
12
c
21
c
22
c
1
d
21
c
2
d
22
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

This connects ss1 and ss according to Figure 6.5 (under the same simplifying
assumption).
1
1
2
1
2
2
1
2
u
ss1
ss
y

Figure 6.5. Example of feedback interconnection.
6. System Interconnections 109
In[30]:= FeedbackConnect[ss1, ss /.d
_
-0, {1, 2},{{2, Positive},{1, Negative}}]
Out[30]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
11
a
12
b
12
c
1
b
11
c
2
b
11
b
12
a
21
a
22
b
22
c
1
b
21
c
2
b
21
b
22
b
1
c
11
b
1
c
12
a
1
b
1
c
1
d
12
b
1
c
2
d
11
b
1
d
11
b
1
d
12
b
2
c
21
b
2
c
22
b
2
c
1
d
22
a
2
b
2
c
2
d
21
b
2
d
21
b
2
d
22
c
11
c
12
c
1
d
12
c
2
d
11
d
11
d
12
c
21
c
22
c
1
d
22
c
2
d
21
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

The connection functions described in this section accept transfer functions in arbitrary (rather
than rational polynomial) form as long as no type conversion or subsystem selection has to be
made.
This is the serial connection of two systems described by the transfer functions g[s]
and h[s].
In[31]:= SeriesConnect[TransferFunction[s, g[s]],TransferFunction[s, h [s]]]
Out[31]= ( g() h() )

Here are the same systems connected in parallel.
In[32]:= ParallelConnect[TransferFunction [s, g [s]],TransferFunction [s, h [s]]]
Out[32]= ( g() h() )

This is the negative feedback connection.
In[33]:= FeedbackConnect[TransferFunction [s, g [s]],TransferFunction [s, h [s]]]
Out[33]=
l
\
.
.
g()

g() h() 1
\
!
.
.

6.2 Arbitrary Interconnections
Systems more complex than the types described in the previous section can be constructed
using GenericConnect. As with the elementary interconnecting functions, subsystems in
GenericConnect can be supplied in either state-space or transfer function form. The result
will be a state-space object if at least one system is in the StateSpace form. Note that, unlike
elementary interconnections, GenericConnect always uses the state-space algorithm so it is
advantageous to supply all input systems in the StateSpace form from the outset.
110 Control System Professional
GenericConnect|system , system , …, connections, ins, outs]
construct a composite systemfromblocks system
k

using the connection specification connections so
that the aggregate has inputs ins and outputs outs
Building a complex system.
A necessary step in preparing the connections specification for GenericConnect is number-
ing all the inputs and all the outputs of the subsystems according to the order of the systems
in the argument list (which can be arbitrary). The elements of the specification connections can
be supplied as (1) a list of integers in the form {i, o
1
, o
2
, … ¦ indicating that input i gets its
signal from summed outputs o
k
, (2) a list in the form {i, {o
1
, type
1
¦ , {o
2
, type
2
¦ , … ¦ that
allows the sign (negative or positive) for output o
k
to be set differently according to type
k
, or
(3) a mix of the two. The type specification can be one of the reserved words Negative or
Positive, with the default value depending on the option DefaultInputPort.
As an example, consider connecting the three systems according to the block diagram in
Figure 6.6. The subsystems were numbered in some order, and then the inputs and outputs
were numbered sequentially in the order of the subsystems. In this way, the two inputs of
system 2 (ss2) receive numbers 3 and 4, and the only output of this subsystem becomes
output 3. The interconnections are then constructed using these numbers. Input 3 receives its
signal from outputs 2 and 4, the latter taken with a negative sign, and input 5 is connected to
output 3. The aggregate has three external inputs, 1, 2, and 4, and two external outputs, 1
and 3.

1
1
2
1
2
ss
y

u
3
3
4
4
ss2
s
s Α
2
3
5
Figure 6.6. Example of an arbitrary interconnection.
6. System Interconnections 111
1 2
These are the components.
In[34]:= ss = StateSpace[{{a
1
, 0},{0, a
2
}},
{{b
1
, 0},{0, b
2
}},{{c
1
, 0}, {0, c
2
}}, {{d
1
, 0},{0, d
2
}}]
Out[34]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 b
1
0
0 a
2
0 b
2
c
1
0 d
1
0
0 c
2
0 d
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[35]:= ss2=StateSpace[{{A
11
, A
12
},{A
21
, A
22
}},
{{B
1
, 0},{0, B
2
}},{{C
11
, C
12
}}, {{D
11
, D
12
}}]
Out[35]=
l
\
.
.
.
.
.
.
.
..
A
11
A
12
B
1
0
A
21
A
22
0 B
2
C
11
C
12
D
11
D
12
\
!
.
.
.
.
.
.
.
..

This creates the aggregate.
In[36]:= GenericConnect]ss, ss2, TransferFunction]s,
s
-----------
s +a
|,
{{3, 2, {4, Negative}}, {5, 3}},{1, 2, 4},{1, 3}|
Out[36]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1
0 0 0 0 b
1
0 0
0 a
2
0 0 0 0 b
2
0
0
B
1
c
2

D
11
1
A
11

B
1
C
11

D
11
1
A
12

B
1
C
12

D
11
1
a B
1

D
11
1
0
B
1
d
2

D
11
1

B
1
D
12

D
11
1
0 0 A
21
A
22
0 0 0 B
2
0
c
2
D
11

D
11
1
C
11

D
11
1
C
12

D
11
1
a D
11

D
11
1
a 0
d
2
D
11

D
11
1
D
12

D
11
1
c
1
0 0 0 0 d
1
0 0
0
c
2
D
11

D
11
1
C
11

D
11
1
C
12

D
11
1
a D
11

D
11
1
0
d
2
D
11

D
11
1
D
12

D
11
1
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

The algorithm implemented in GenericConnect (in which signals originate necessarily at
the outputs of the blocks) imposes a limitation on the block diagram that GenericConnect
can compute directly. The limitation is easy to overcome, however, by the addition of dummy
blocks with a unit gain. For example, to construct a system containing the feed-forward and
feedback paths as shown in Figure 6.7a, dummy blocks may be added as shown in Figure 6.7b.
112 Control System Professional
Β
Α
y

u
1 2 3 4
5
1 2
3 4
5
1 2 4
5
3

(a)
(b)
Β
Α
1 1
y

u

Figure 6.7. Example of interconnection with dummy blocks added.
This computes the aggregate system shown in Figure 6.7. The output is a Transfer
Function object since all input systems are supplied as transfer functions. Note that
the variable from the first TransferFunction object, if any, is used in the result.
Since, in this case, the first TransferFunction does not have a named variable,
the result is the pure-function object. To emphasize the structure of the result we
turn off the Control Format display.
In[37]:= GenericConnect]TransferFunction [1],TransferFunction [b],
TransferFunction ]s,
1
----
s
|, TransferFunction[1], TransferFunction[a],
{{2, 1},{3, 2, {5, Negative}}, {4, 1, 3},{5, 4}}, {1},{4}|
Out[37]= TransferFunction|¦¦
b#1

a#1
¦¦|
Unless instructed otherwise, GenericConnect assumes that the outputs o
k
in the connection
specification {i, o
1
, o
2
, … ¦ come to the summing input i with negative or positive signs as
determined by the option DefaultInputPort. The default value of DefaultInputPort is
Positive, which means that the outputs should be summed uninverted.
option name default value
DefaultInputPort Positive default port for interconnections
Option to GenericConnect.
6. System Interconnections 113
6.3 State Feedback
When the state variables become known (either through measurement or estimation), it is
sometimes desirable to feed their values back to the inputs via a controller to change the
dynamics of the system in a certain way (see Chapter 9). Figure 6.8 shows such a connection
for a continuous-time system; the diagram for a discrete-time system is structurally similar.
This type of connection can be formed using StateFeedbackConnect.

B

u y
x x
.
C
D

A
System
Controller
Figure 6.8. State feedback schematic.
StateFeedbackConnect|system, controller]
feed all states of system back to its inputs
via controller forming negative feedback
StateFeedbackConnect|system, controller, connections]
use connections to form the state feedback
State feedback connections.
To form state feedback, must be in state-space form; controller can be supplied in
either state-space or transfer function form, or simply as a gain matrix (such as that returned
by StateFeedbackGains, for example); connections can be specified using the same format
as for FeedbackConnect.
114 Control System Professional
system
6.4 Manipulating a System's Contents
Subsystem and DeleteSubsystem can be used to select or delete a desired part of the
system; Subsystem can also rearrange the order of inputs, outputs, or states. The system can
be in either state-space or transfer function form (although manipulating state contents is
possible for state-space objects only). The element specifications can be either vectors of
integers corresponding to indices or the reserved words All or None.
Subsystem|system, inputs]
select the part of system
associated with the specified inputs
Subsystem|system, inputs, outputs]
select the subsystemwith
the specified inputs and outputs
Subsystem|system, inputs, outputs, states]
select the subsystem with the specified inputs,
outputs, and states
Selecting a part of the system.
Consider a state-space system.
In[38]:= ss =StateSpace[DiagonalMatrix[{a
1
, a
2
, a
3
}], DiagonalMatrix[{b
1
, b
2
, b
3
}],
DiagonalMatrix[{c
1
, c
2
, c
3
}],DiagonalMatrix[{d
1
, d
2
, d
3
}]]
Out[38]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1
0 0 b
1
0 0
0 a
2
0 0 b
2
0
0 0 a
3
0 0 b
3
c
1
0 0 d
1
0 0
0 c
2
0 0 d
2
0
0 0 c
3
0 0 d
3
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

6. System Interconnections 115
This picks the subsystem that has only the first and third inputs.
In[39]:= Subsystem[ss, {1, 3}]
Out[39]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1
0 0 b
1
0
0 a
2
0 0 0
0 0 a
3
0 b
3
c
1
0 0 d
1
0
0 c
2
0 0 0
0 0 c
3
0 d
3
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

This swaps the second and third inputs.
In[40]:= Subsystem[ss, {1, 3, 2}]
Out[40]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1
0 0 b
1
0 0
0 a
2
0 0 0 b
2
0 0 a
3
0 b
3
0
c
1
0 0 d
1
0 0
0 c
2
0 0 0 d
2
0 0 c
3
0 d
3
0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

This selects the subsystem that has all inputs, the first output, and the first and third
states of the original system ss.
In[41]:= Subsystem[ss, All, {1}, {1, 3}]
Out[41]=
l
\
.
.
.
.
.
.
.
..
a
1
0 b
1
0 0
0 a
3
0 0 b
3
c
1
0 d
1
0 0
\
!
.
.
.
.
.
.
.
..

DeleteSubsystem is complementary to Subsystem and has similar syntax.
116 Control System Professional
DeleteSubsystem|system, inputs]
delete the part of system
associated with the specified inputs
DeleteSubsystem|system, inputs, outputs]
delete the specified inputs and outputs
DeleteSubsystem|system, inputs, outputs, states]
delete the specified inputs, outputs, and states
Deleting a part of the system.
Here is a transfer function of a system with two inputs and three outputs.
In[42]:= TransferFunction[Array[f, {3, 2}]]
Out[42]=
l
\
.
.
.
.
.
.
.
.
f (1, 1) f (1, 2)
f (2, 1) f (2, 2)
f (3, 1) f (3, 2)
\
!
.
.
.
.
.
.
.
.

This deletes the first and third outputs, leaving all the inputs intact.
In[43]:= DeleteSubsystem[%, None, {1, 3}]
Out[43]= ( f (2, 1) f (2, 2) )

MergeSystems merges several systems into one by appending their inputs and outputs (and,
for state-space systems, states). The result is in state-space form if at least one of the systems is
in this form.
MergeSystems system
1
, system
2
, …
merge the systems system
i
Merging several systems.
Here are two state-space systems.
In[44]:= ss1=StateSpace[{{a
11
, a
12
},{a
21
, a
22
}},
{{b
11
, b
12
},{b
21
, b
22
}}, {{c
1
, c
2
}},{{d
1
, d
2
}}]
Out[44]=
l
\
.
.
.
.
.
.
.
..
a
11
a
12
b
11
b
12
a
21
a
22
b
21
b
22
c
1
c
2
d
1
d
2
\
!
.
.
.
.
.
.
.
..

6. System Interconnections 117
In[45]:= ss2=StateSpace[{{A
11
, A
12
},{A
21
, A
22
}},
{{B
1
},{B
2
}},{{C
11
, C
12
},{C
21
, C
22
}}, {{D
1
},{D
2
}}]
Out[45]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
A
11
A
12
B
1
A
21
A
22
B
2
C
11
C
12
D
1
C
21
C
22
D
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

This merges them into one. The aggregate has all the inputs, outputs, and states of
its components.
In[46]:= MergeSystems[ss1, ss2]
Out[46]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
11
a
12
0 0 b
11
b
12
0
a
21
a
22
0 0 b
21
b
22
0
0 0 A
11
A
12
0 0 B
1
0 0 A
21
A
22
0 0 B
2
c
1
c
2
0 0 d
1
d
2
0
0 0 C
11
C
12
0 0 D
1
0 0 C
21
C
22
0 0 D
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Here are two transfer functions for the two-input, one-output and the one-input,
two-output systems, respectively.
In[47]:= tf1=TransferFunction ]s, ]]
1
----
s
,
1
-----------
s +1
¦¦|
Out[47]= |
1

1

1
|

In[48]:= tf2=TransferFunction ]s, ]]
1
-----------
s +2
¦,]
1
-----------
s +3
¦¦|
Out[48]=
l
\
.
.
.
.
.
.
.
.
.
.
.
1

2
1

3
\
!
.
.
.
.
.
.
.
.
.
.
.

118 Control System Professional
This merges the two transfer functions into one.
In[49]:= MergeSystems[tf1, tf2]
Out[49]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1

1

1
0
0 0
1

2
0 0
1

3
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

6.5 Using Interconnecting Functions for Controller Design
Due to their symbolic capabilities, the interconnecting functions described in this chapter can
be especially useful for design purposes. In a typical scenario, the designer chooses the struc-
ture of the controller and then determines the particular values of its parameters to meet the
specifications. Several design functions are described in details later in Chapters 9 and 10. The
example given here illustrates that the ready-made set of design tools can be easily expanded
and outlines possible steps in that process.
We consider a double-integrator model of the satellite control system and design a PID
(proportional-integral-derivative) controller that would place the poles of the closed-loop
system in some predefined positions (cf. Section 9.1). The block diagram of a PID controller
connected to the system is shown in Figure 6.9. Only the propagation of the reference signal r
to the controlled output c is taken into account. A typical system would also have a distur-
bance input, not shown on the diagram. The prefilter can be used to further correct the dynam-
ics of the system, for example, by eliminating unwanted zeros from the closed-loop transfer
function. Note that the derivative part of the controller includes the term 1 l (Τs 1), with a
presumably small time constant Τ. Otherwise that part would not be physically realizable.
6. System Interconnections 119
Prefilter

System
PID Controller
K
p
c r
K
i
s
K
d
s
(Τs 1)

Figure 6.9. PID controller example.
Here is a double integrator plant.
In[50]:= plant =StateSpace[{{0, 1},{0, 0}},{{0},{1}}, {{1, 0}}]
Out[50]=
l
\
.
.
.
.
.
.
.
..
0 1 0
0 0 1
1 0 0
\
!
.
.
.
.
.
.
.
..

This describes the PID controller.
In[51]:= pid =TransferFunction]s, k
p
+
k
i
------
s
+
k
d
s
---------------
1+Τ s
|
Out[51]= |
k
d

Τ 1

k
i

k
p
|

This connects the controller to the plant, closes the feedback loop, and simplifies the
result.
In[52]:= SeriesConnect[pid, plant]//FeedbackConnect //Simplify
Out[52]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 1 0 0 0
0
1

Τ
1 0 1
0 0 0 1 0
k
i

Τ
k
i

k
d

Τ
2

k
d
Τ k
p

Τ
0
k
d

Τ
k
p
0 0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

120 Control System Professional
This finds the transfer function of the closed-loop system.
In[53]:= TransferFunction[s, %]//ExpandRational
Out[53]=
l
\
.
..
k
d

2
Τ k
p

2
Τ k
i
k
p
k
i

Τ
4

3
k
d

2
Τ k
p

2
Τ k
i
k
p
k
i
\
!
.
..

Here is the denominator of the transfer function as a polynomial in the variable s.
In[54]:= Denominator[%[s]]#1, 1]
Out[54]= s
3
s
4
Τs
2
k
d
k
i
s Τ k
i
s k
p
s
2
Τ k
p
This makes the polynomial monic.
In[55]:= d
1
=Expand ]
%
----
Τ
|
Out[55]= s
4

s
3

Τ

s
2
k
d

Τ
s k
i

k
i

Τ
s
2
k
p

s k
p

Τ
Suppose now that the closed-loop system with desired dynamics has poles at p
1
,
p
2
, p
3
, and p
4
. This is the denominator of the corresponding transfer function.
In[56]:= d
2
=(s+p
1
) (s +p
2
) (s +p
3
) (s+p
4
)
Out[56]= sp
1
) sp
2
) sp
3
) sp
4
)
To find the unknown parameters k
i
, k
d
, k
p
, and Τ we equate the coefficients of
powers of s in the denominators and solve the resultant system of equations.
In[57]:= Solve[CoefficientList[d
1
-d
2
, s]=0, {k
i
, k
d
, k
p
, Τ}]//Simplify
Out[57]= ¦¦k
d

p
1
p
2
p
3
) p
1
p
2
p
4
) p
1
p
3
p
4
) p
2
p
3
p
4
)

p
1
p
2
p
3
p
4
)
3
,
k
i

p
1
p
2
p
3
p
4

p
1
p
2
p
3
p
4
, k
p
p
2
p
3
p
4
p
2
p
3
p
4
)p
1
2
p
3
p
4
p
2
p
3
p
4
))
p
1
p
2
2
p
3
p
4
)p
3
p
4
p
3
p
4
)p
2
p
3
2
3 p
3
p
4
p
4
2
)))/
p
1
p
2
p
3
p
4
)
2
, Τ
1

p
1
p
2
p
3
p
4
¦¦
This is a simplified result for negligible Τ.
In[58]:= % /.(k_ -expr_)¬k -Limit[expr, p
1
--«]
Out[58]= {{k
d
p
2
p
3
p
4
, k
i
p
2
p
3
p
4
, k
p
p
3
p
4
p
2
p
3
p
4
),Τ0¦¦
6. System Interconnections 121
7. Controllability and Observability
This chapter describes the tools related to controllability and observability of state-space
systems, including the test functions themselves and other necessary constructs.
7.1 Tests for Controllability and Observability
A linear system is said to be completely controllable if, for all initial times t
0
and all initial states
x(t
0
), there exists some input function (or sequence for discrete systems) that drives the state
vector to any final state x(t
1
) at some finite time t
1
> t
0
. Controllability of the system is deter-
mined by matrices A and B. The function Controllable performs the test.
Another kind of controllability may be useful from a practical perspective, namely, complete
output controllability, which is defined as the ability to drive the output vector to the origin in
finite time. This property involves all matrices A, B, C, and D. The test can be done using the
function OutputControllable.
Analogously, a linear system is said to be completely observable if, for all initial times t
0
, the
state vector x(t
0
) can be determined from the output function (or sequence) y(t
1
), defined
over a finite time t
1
> t
0
. Consequently, observability involves the matrices A and C. Observ
able performs the test.
Controllablestatespace test if the system statespace is controllable
OutputControllablestatespace
test if the system
statespace is output controllable
Observablestatespace test if the system statespace is observable
Testing controllability and observability properties.
« Load the application.
In[1]:= ControlSystems`
« It is easy to show that this system is neither controllable nor output controllable.
In[2]:= ss StateSpace0, 1, 1, 2,1,1,1, 0,1, 1
Out[2]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 1
÷1 ÷2 ÷1
1 0 0
1 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This tests the controllability.
In[3]:= Controllabless
Out[3]= False
« And here is the test for output controllability.
In[4]:= OutputControllabless
Out[4]= False
« On the other hand, the system is completely observable.
In[5]:= Observabless
Out[5]= True
If for some reason the controllability and observability tests cannot be evaluated to True or
False, they are returned partially unevaluated. In this sense, the tests behave more like the
Mathematica built-in functions Positive, Negative, or Equal rather than the *Q functions,
which always return True or False.
« Controllability and observability tests do not necessarily return True or False.
In[6]:= Controllableundefinedsystem
Out[6]= Controllableundefinedsystem
7. Controllability and Observability 123
option name default value
ControllabilityTest Automatic test to apply to find if
the systemis controllable
ObservabilityTest Automatic test to apply to find if
the systemis observable
Options specific to controllability and observability test functions.
Controllability and observability can be determined by testing if the controllability or observ-
ability matrix has rank n, where n is the dimension of the state space, or if the controllability
or observability Gramian is singular. The tests can be chosen through the options Controlla
bilityTest and ObservabilityTest. Both accept a pure function or a list of pure func-
tions to be applied in turn until one gives True or False or the list is exhausted. By default,
both first try to see if the controllability or observability information for the system is available
(see below in this section) and then proceed with the matrix test and further with the Gramian
test as needed. To disable this feature, simply select one desired method.
124 Control System Professional
ControllabilityTest FullRankControllabilityMatrix
test if the controllability matrix is
full rank in order to test controllability
ControllabilityTest NonSingularControllabilityGramian
test if the controllability Gramian is
nonsingular in order to test controllability
ControllabilityTest test
use the pure function test
ControllabilityTest test
1
, test
2
, …
try test
i
in turn until one succeeds
ObservabilityTest FullRankObservabilityMatrix
test if the observability matrix is
full rank in order to test observability
ObservabilityTest NonSingularObservabilityGramian
test if the observability Gramian is
nonsingular in order to test observability
ObservabilityTest test
use the pure function test
ObservabilityTest test
1
, test
2
, …
try test
i
in turn until one succeeds
Selecting controllability and observability tests.
Controllability and observability can also be inferred from the structure of certain special-type
realizations, such as the Kalman controllable or observable forms (see Section 8.2). In fact, in
some cases the size of the controllable or observable subspace is computed internally when
arriving at the realization of the special form. This information can be retrieved with the
functions ControllableSpaceSize and ObservableSpaceSize and then used for
alternative controllability and observability tests.
In their turn, ControllableSpaceSize and ObservableSpaceSize take the option
ReductionMethod that allows you to specify the method to use for those functions. The
7. Controllability and Observability 125

option operates analogously in ControllableSubsystem and ObservableSubsystem
and is described in Section 8.1.
ControllableSpaceSizestatespace
the size of the controllable
subspace of the state-space system
ObservableSpaceSizestatespace
the size of the observable subspace
The controllable and observable subspace sizes.
« This determines the size of the controllable subspace.
In[7]:= ControllableSpaceSizess
Out[7]= 1
« As the size of the controllable subspace is less than the number of states, the system
is not controllable.
In[8]:= % CountStatesss
Out[8]= False
7.2 Controllability and Observability Constructs
The controllability matrix of a linear system is defined as
(7.1) = [ B AB A
2
B A
n÷1
B ]
and can be obtained with ControllabilityMatrix. The output controllability matrix
(7.2) = [ CB CAB CA
2
B CA
n÷1
B D]
can be found with OutputControllabilityMatrix. The observability matrix
126 Control System Professional
(7.3) =
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
C
CA
CA
2

CA
n÷1
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
is obtainable with ObservabilityMatrix.
ControllabilityMatrixstatespace
returns the controllability
matrix for the system statespace
OutputControllabilityMatrixstatespace
returns the output controllability matrix
ObservabilityMatrixstatespace
returns the observability matrix
Finding controllability and observability matrices.
« This is a simple test system.
In[9]:= system
StateSpace3, 1, 0,5, 0, 1, 3, 0, 0,1,2,3, 1, 0, 0
Out[9]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷3 1 0 1
÷5 0 1 2
÷3 0 0 3
1 0 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This is its controllability matrix.
In[10]:= ControllabilityMatrixsystem
Out[10]=
l
\
.
.
.
.
.
.
.
.
1 ÷1 1
2 ÷2 2
3 ÷3 3
\
!
.
.
.
.
.
.
.
.
7. Controllability and Observability 127
« Clearly, this is not a full-rank matrix and, therefore, the system is not completely
controllable.
In[11]:= Rank%
Out[11]= 1
« We can come to this conclusion directly.
In[12]:= Controllablesystem
Out[12]= False
« This is the observability matrix for the system.
In[13]:= ObservabilityMatrixsystem
Out[13]=
l
\
.
.
.
.
.
.
.
.
1 0 0
÷3 1 0
4 ÷3 1
\
!
.
.
.
.
.
.
.
.
« This is a full-rank matrix, so the system is completely observable.
In[14]:= Rank%
Out[14]= 3
« Again, we can come to this conclusion directly.
In[15]:= Observablesystem
Out[15]= True
Equally important in controllability and observability studies are the corresponding Grami-
ans, which may be defined as (see, e.g., Moore (1981))
(7.4) W
c
2
=
¹
0
o
e

BB
T
e
A
T
Τ

(the controllability Gramian) and
(7.5) W
o
2
=
¹
0
o
e
A
T
Τ
C
T
Ce


(the observability Gramian). Examples of using the Gramians can be found later in Section 8.4.
128 Control System Professional
ControllabilityGramianstatespace
find the controllability
Gramian for the system statespace
ObservabilityGramianstatespace
find the observability Gramian
Finding controllability and observability Gramians.
« This is a mixing tank system (described in Section 10.1).
In[16]:= StateSpace0.9512, 0,0, 0.9048,4.88, 4.88, 0.019, 0.0095,
0.01, 0,0, 1,Sampled Period 5.
Out[16]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0.9512 0 4.88 4.88
0 0.9048 ÷0.019 0.0095
0.01 0 0 0
0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
5.

« Here is its controllability Gramian.
In[17]:= ControllabilityGramian %
Out[17]=
l
\
.
..
500.205 ÷0.332677
÷0.332677 0.00248846
\
!
.
..
ControllabilityGramian and ObservabilityGramian rely on the function Lyapunov
Solve (see Section 12.2) to solve Lyapunov equations regarding W
c
2
and W
o
2
:
(7.6) AW
c
2
+W
c
2
A
T
= ÷B B
T
(7.7) A
T
W
o
2
+ W
o
2
A = ÷C
T
C
Consequently, these functions accept the same options as LyapunovSolve does. For
discrete-time systems, ControllabilityGramian and ObservabilityGramian call
DiscreteLyapunovSolve appropriately.
The Gramian functions sometimes must reconstruct the full orthogonal basis of the vector
space given a few orthogonal vectors belonging to the basis—a problem that may have more
than one solution. In such cases, the choice between equivalent vectors can be made ran-
domly. You may enable this feature by setting the option RandomOrthogonalComplement
to True. The same mechanism is used by several other functions, for example, Kalman trans-
7. Controllability and Observability 129
form functions and the KNVD algorithm in StateFeedbackGains. To have them all utilize
the randomized algorithm, you may change the global variable $RandomOrthogonalComple
ment to True. Of course, reseeding the random number generator with some number (using
the Mathematica built-in function SeedRandom), while employing the randomized algorithm,
provides a random, yet reproducible solution.
RandomOrthogonalComplement ÷ True
use the randomized algorithmfor constructing
an orthogonal basis in a particular function
$RandomOrthogonalComplement True
use the randomized algorithmin all
functions that construct orthogonal bases
Enabling the random choice between equivalent orthogonal vectors.
7.3 Dual System
The function DualSystem returns the system that is dual to the input system. It is useful
because observability tests are often implemented as controllability tests of the system dual to
the one in question.
DualSystemstatespace a system that is dual to the system statespace
Finding the dual system.
« This is a system in state-space form.
In[18]:= StateSpace0, 1,a1, a2, 1, 1, 1, 0,1, 1
Out[18]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 1
a1 a2 ÷1
1 0 0
1 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

130 Control System Professional
« This is its dual system.
In[19]:= DualSystem%
Out[19]=
l
\
.
.
.
.
.
.
.
..
0 Conjugate(a1) 1 1
1 Conjugate(a2) 0 1
1 ÷1 0 0
\
!
.
.
.
.
.
.
.
..

By default, DualSystem assumes that all symbolic entries in state-space representation can be
complex. The user may disable this feature by setting the option ComplexVariables to
None, in which case the built-in function ComplexExpand will be applied to the result.
ComplexVariables accepts also a list of variables which may be complex. In this case, all
other variables are assumed to be real valued.
« If we assume that none of the symbolic entries are complex, a simpler result is
returned.
In[20]:= DualSystem%%, ComplexVariables None
Out[20]=
l
\
.
.
.
.
.
.
.
..
0 a1 1 1
1 a2 0 1
1 ÷1 0 0
\
!
.
.
.
.
.
.
.
..

« Finally, here is the dual system if only a1 may be complex.
In[21]:= DualSystem%%%, ComplexVariables a1
Out[21]=
l
\
.
.
.
.
.
.
.
..
0 Re(a1) ÷i Im(a1) 1 1
1 a2 0 1
1 ÷1 0 0
\
!
.
.
.
.
.
.
.
..

option name default value
ComplexVariables All which symbolic entries
in state-space representation
should be assumed to be complex
Option to DualSystem.
7. Controllability and Observability 131
8. Realizations
A realization is any pair of equations
(8.1)
x

(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
or the corresponding quadruple of the matrices ¦A, B, C, D¦ , or, for the purposes of this
guide, the corresponding state-space object StateSpace[a, b, c, d] that leads to the required
input-output relations for the system (usually specified as a transfer function matrix H(s)). A
realization can also refer to a discrete-time state-space realization (with the obvious change
from the differential to difference in the equations). Because there can be innumerable ways to
satisfy the input-output relations, a physical system can have an infinite number of realiza-
tions. In this chapter, several means of converting between different realizations are described,
including means of obtaining realizations of a smaller order. All the functions operate on
continuous- and discrete-time objects.
The first group of functions represents the means to convert between different types of realiza-
tions. By convention, their names end with Form. For example, to convert a system to Kalman
controllable form, the function KalmanControllableForm would be applied to the system.
The list of all forms available in Control System Professional can be obtained with ?Control
Systems`*`*Form.
The other group comprises functions that select a subsystem (typically a subspace) possessing
certain properties. These functions end with Subsystem (e.g., ControllableSubsystem or
DominantSubsystem). The exception to this convention is MinimalRealization, used for
computing a minimal realization, which represents the intersection of the controllable and
observable subspaces; therefore, this function does effectively select a subsystem. Also intro-
duced is SimilarityTransform, a function that transforms between equivalent realizations
of the same system.
8.1 Irreducible (Minimal) Realizations
The internal structure of a system may allow some of the integrators (or delay elements) to be
shared by several input-output pairs and still result in the same transfer matrix. The system
that realizes the maximum possible degree of sharing (and, consequently, the smallest possi-
ble dimension of the associated state space) is called the irreducible (or minimal) realization
(see, e.g., Brogan (1991), Section 12.4). The function MinimalRealization tries to find such
a realization.
MinimalRealizationsystem
find a minimal realization for system
Finding the irreducible (minimal) realization.
The input system for MinimalRealization can be in either state-space or transfer function
form; the resultant system is always a state-space one. For SISO transfer function systems,
MinimalRealization constructs a state-space realization after an attempt to cancel com-
mon pole-zero pairs (the underlying function, PoleZeroCancel, can also be accessed
directly, see Section 8.6). Otherwise, MinimalRealization constructs a state-space realiza-
tion first and then uses the functions ControllableSubsystem and ObservableSub
system consecutively to select first the controllable and then the observable subspaces. The
result is therefore a subsystem that is both completely observable and controllable. See
Section 8.2 for more on the definitions of the controllable and observable subspaces. In con-
trast, DominantSubsystem eliminates weakly controllable and observable modes (see Section
8.5).
ControllableSubsystemstatespace
select the controllable subspace of statespace
ObservableSubsystemstatespace
select the observable subspace of statespace
Selecting controllable and observable subspaces.
« Load the application.
In[1]:= ControlSystems`
8. Realizations 133
« Consider a third-order state-space system with two inputs and two outputs. The first
mode is uncontrollable, and the second one is unobservable. These modes, then,
have no effect on the input-output relations and so can be dropped without
changing the transfer function matrix.
In[2]:= ss StateSpaceDiagonalMatrixa
1
, a
2
, a
3
,
0, 0,b
1
, 0,0, b
2
,c
1
, 0, 0,0, 0, c
3
, d
11
, d
12
,d
21
, d
22

Out[2]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 0 0 0
0 a
2
0 b
1
0
0 0 a
3
0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« We can verify that this system is not controllable.
In[3]:= Controllabless
Out[3]= False
« Neither is it observable.
In[4]:= Observabless
Out[4]= False
« This selects the controllable subspace.
In[5]:= ControllableSubsystemss
Out[5]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
2
0 b
1
0
0 a
3
0 b
2
0 0 d
11
d
12
0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This selects the observable subspace.
In[6]:= ObservableSubsystemss
Out[6]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 0 0
0 a
3
0 b
2
c
1
0 d
11
d
12
0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

134 Control System Professional
« By selecting the observable subspace of the controllable subspace, we arrive at a
minimal realization.
In[7]:= ObservableSubsystem%%
Out[7]=
l
\
.
.
.
.
.
.
.
..
a
3
0 b
2
0 d
11
d
12
c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
..

« The same result can be obtained directly.
In[8]:= MinimalRealizationss
Out[8]=
l
\
.
.
.
.
.
.
.
..
a
3
0 b
2
0 d
11
d
12
c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
..

« The minimal realization is both controllable and observable.
In[9]:= Controllable%Observable%True
Out[9]= True
The method that ControllableSubsystem and ObservableSubsystem use to reduce the
dimension of the system can be chosen through the option ReductionMethod. Reduction
Method ÷ Kalman specifies that the Kalman decomposition is to be used. With the default
option value Automatic, the functions first try to use the structural information about the
input system that might be already available. Additionally, ControllableSubsystem takes,
and passes along, the options specific to the method it employs. ObservableSubsystem
takes both the options for ControllableSubsystem and DualSystem. MinimalRealiza
tion, as an interface function, inherits options from its constituents ControllableSub
system and ObservableSubsystem or PoleZeroCancel, whichever is applicable.
option name default value
ReductionMethod Automatic method to use to reduce
the dimension of the system
Specifying the reduction method.
8. Realizations 135
8.2 Kalman Canonical Forms
The state equations in Eq. (8.1) can be transformed to the Kalman controllable canonical form
(see, e.g., Brogan 1991, Section 11.7)
(8.2)
.
.
.
.
.
.
.
.
.
..
w

1
w

2
.
.
+
+
+
+
+
+
+
++
=
.
.
.
.
.
.
.
.
.
.
.
.
T
1
T
AT
1
T
1
T
AT
2
0 T
2
T
AT
2
.
.
+
+
+
+
+
+
+
+
+
+
.
.
.
.
.
.
.
.
..
w
1
w
2
.
.
+
+
+
+
+
+
++
+
.
.
.
.
.
.
.
.
.
.
.
T
1
T
B
0
.
.
+
+
+
+
+
+
+
+
+
u
y = [ CT
1
CT
2 ]
.
.
.
.
.
.
.
.
..
w
1
w
2
.
.
+
+
+
+
+
+
++
+ D u
using a similarity transformation x = Tw, where the orthogonal transformation matrix
T = [ T
1
T
2 ] is constructed and partitioned in such a way that T
1
represents the subspace
spanned by the columns of the controllability matrix and T
2
is the subspace orthogonal to T
1
.
The space vector w is partitioned into two corresponding parts, too: w =
.
.
.
.
.
.
.
.
.
.
w
1
w
2
.
.
+
+
+
+
+
+
+
+
. It is seen from
Eq. (8.2) that state variables w
2
are uncontrollable, since there is no way to change w
2
either
directly through input u or indirectly through w
1
coupling.
Similarly, in the Kalman observable canonical form
(8.3)
.
.
.
.
.
.
.
.
.
..
v

1
v

2
.
.
+
+
+
+
+
+
+
++
=
.
.
.
.
.
.
.
.
.
.
.
.
V
1
T
AV
1
0
V
2
T
AV
1
V
2
T
AV
2
.
.
+
+
+
+
+
+
+
+
+
+
.
.
.
.
.
.
.
.
..
v
1
v
2
.
.
+
+
+
+
+
+
++
+
.
.
.
.
.
.
.
.
.
.
.
.
V
1
T
B
V
2
T
B
.
.
+
+
+
+
+
+
+
+
+
+
u
y = [ CV
1
0 ]
.
.
.
.
.
.
.
.
..
v
1
v
2
.
.
+
+
+
+
+
+
++
+ D u ,
the state space is divided into observable v
1
and unobservable v
2
subspaces. Kalman controlla-
ble and observable canonical forms can be arrived at by using the functions KalmanControl
lableForm and KalmanObservableForm.
KalmanControllableFormstatespace
find the Kalman controllable
canonical form of the system statespace
KalmanObservableFormstatespace
find the Kalman observable
canonical form of the system statespace
Finding Kalman canonical forms.
136 Control System Professional
« This is an example system defined previously.
In[10]:= ss StateSpaceDiagonalMatrixa
1
, a
2
, a
3
,
0, 0, b
1
, 0,0, b
2
,c
1
, 0, 0, 0, 0, c
3
,d
11
, d
12
,d
21
, d
22

Out[10]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 0 0 0
0 a
2
0 b
1
0
0 0 a
3
0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This is the Kalman controllable canonical form of an example system defined
previously. We can see that the uncontrollable mode (the third one) is moved to the
end of the state vector.
In[11]:= KalmanControllableFormss
Out[11]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
2
0 0 b
1
0
0 a
3
0 0 b
2
0 0 a
1
0 0
0 0 c
1
d
11
d
12
0 c
3
0 d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This is the Kalman observable canonical form of the same system. Now the
unobservable mode (the second one) is moved to the end.
In[12]:= KalmanObservableFormss
Out[12]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
a
1
0 0 0 0
0 a
3
0 0 b
2
0 0 a
2
b
1
0
c
1
0 0 d
11
d
12
0 c
3
0 d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

The decomposition into controllable and uncontrollable (or observable and unobservable)
subspaces can be performed using several methods that are accessible through the option
DecompositionMethod. The default Automatic value for this option invokes the built-in
function RowReduce for exact systems and SingularValues for inexact ones. Other avail-
able option values are QRDecomposition and NullSpace. Like all other functions in Control
System Professional, the functions related to Kalman canonical forms accept the options belong-
ing to the employed method and pass them on to their destinations. Kalman
decomposition-related functions also accept the option RandomOrthogonalComplement.
8. Realizations 137
option name default value
De c ompos i t i onMe t hod Aut oma t i c method to perform
the decomposition
Specifying the decomposition method.
KalmanControllableForm and KalmanObservableForm are similar to the functions
ControllableSubsystem and ObservableSubsystem described in Section 8.1. However,
the two former functions merely rearrange the order of variables in the state vector, whereas
the latter two select controllable or observable subspaces of the given system. Similarly to
ObservableSubsystem, KalmanObservableForm takes the options of KalmanControlla
bleForm as well as the options of DualSystem.
8.3 Jordan Canonical (Modal) Form
JordanCanonicalFormstatespace
find the Jordan canonical
form of the system statespace
Finding the Jordan canonical form.
« Let us create a diagonal matrix.
In[13]:= m DiagonalMatrix1, 2, 3
Out[13]=
l
\
.
.
.
.
.
.
.
.
÷1 0 0
0 ÷2 0
0 0 ÷3
\
!
.
.
.
.
.
.
.
.
« Here is some nonsingular matrix.
In[14]:= t 0, 1, 0,1, 0, 1,1, 1, 0
Out[14]=
l
\
.
.
.
.
.
.
.
.
0 1 0
÷1 0 ÷1
1 ÷1 0
\
!
.
.
.
.
.
.
.
.
138 Control System Professional
« This creates a matrix with the predefined set of eigenvalues.
In[15]:= Inverset.m.t
Out[15]=
l
\
.
.
.
.
.
.
.
.
÷3 2 0
0 ÷1 0
1 ÷2 ÷2
\
!
.
.
.
.
.
.
.
.
« We use the previous matrix as matrix A in our test state-space object ss.
In[16]:= ss StateSpaceDiagonalMatrixa
1
, a
2
, a
3
,
0, 0, b
1
, 0,0, b
2
,c
1
, 0, 0, 0, 0, c
3
,d
11
, d
12
,d
21
, d
22

Out[16]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
a
1
0 0 0 0
0 a
2
0 b
1
0
0 0 a
3
0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[17]:= ReplacePartss, %%, 1
Out[17]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
÷3 2 0 0 0
0 ÷1 0 b
1
0
1 ÷2 ÷2 0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This finds the Jordan canonical form of the preceding system.
In[18]:= JordanCanonicalForm%
Out[18]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
÷3 0 0 b
1
0
0 ÷2 0 0 b
2
0 0 ÷1 ÷b
1
0
÷c
1
0 ÷c
1
d
11
d
12
c
3
c
3
c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

In the case of an exact input system, JordanCanonicalForm relies on the built-in function
JordanDecomposition; otherwise the eigenvalue decomposition is used. The latter method
may lead to significant numerical errors if eigenvalues happen to be multiple.
8. Realizations 139
8.4 Internally Balanced Realizations
The Kalman minimal realization algorithm may result in structurally unstable models. In such
cases another model reduction technique, based on the internally balanced realization, may be
applied (Moore (1981)). The controllable and observable realization is said to be internally
balanced if its controllability and observability Gramians are represented by the same
(positive definite) diagonal matrix. InternallyBalancedForm attempts to construct such a
realization.
InternallyBalancedFormstatespace
find the internally balanced
realization of the system statespace
Finding the internally balanced form.
« Consider a SISO system in its controllable canonical form (see Example 1 in the
above-cited paper of Moore (1981)). Evidently, the realization is poorly balanced.
In[19]:= original
StateSpace0, 1, 0, 0,0, 0, 1, 0,0, 0, 0, 1, 50, 79, 33, 5,
0,0,0,1,50, 15, 1, 0N
Out[19]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0. 1. 0. 0. 0.
0. 0. 1. 0. 0.
0. 0. 0. 1. 0.
÷50. ÷79. ÷33. ÷5. 1.
50. 15. 1. 0. 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This finds the internally balanced realization.
In[20]:= balanced InternallyBalancedForm%
Out[20]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
÷0.518282 1.45031 0.391098 0.350054 0.772915
÷1.45031 ÷2.19539 ÷4.75334 ÷1.21805 0.804697
0.391098 4.75334 ÷0.629679 ÷1.19628 ÷0.337348
÷0.350054 ÷1.21805 1.19628 ÷1.65665 0.252316
0.772915 ÷0.804697 ÷0.337348 ÷0.252316 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

140 Control System Professional
« We can verify that the realization has equal and diagonal controllability and
observability Gramians. The built-in function Chop rounds small numerical errors
down to the exact zeros.
In[21]:= ControllabilityGramian %Chop
Out[21]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0.576324 0 0 0
0 0.147477 0 0
0 0 0.0903667 0
0 0 0 0.0192144
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
In[22]:= ObservabilityGramian%%Chop
Out[22]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0.576324 0 0 0
0 0.147477 0 0
0 0 0.0903667 0
0 0 0 0.0192144
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
« Finally, we can see that the two forms are in fact different realizations of the same
transfer function.
In[23]:= TransferFunctions, originalExpandRationalChop
Out[23]= |
1.
2
+ 15. + 50.

4
+ 5.
3
+ 33.
2
+ 79. + 50.
|

In[24]:= TransferFunctions, balancedExpandRationalChop
Out[24]= |
1.
2
+ 15. + 50.

4
+ 5.
3
+ 33.
2
+ 79. + 50.
|

Similarly to KalmanControllableForm and related functions, InternallyBalanced
Form takes the option DecompositionMethod. The default Automatic value for this
option invokes the built-in function Eigensystem for the continuous-time systems and
SingularValues for the discrete-time systems, in the most important case of inexact sys-
tems. For exact systems, the Eigensystem-based solution will be attempted.
8. Realizations 141
8.5 Dominant Subsystem
The controllability and observability Gramians in the example in the preceding section reveal
the modes that are relatively small and, therefore, contribute little to the input-output character-
istic. This suggests a way of reducing the order of the system by eliminating these weak
modes. What is meant by "relatively small" is, of course, a matter of convention and will be
addressed in the context of the option RejectionLevel later in this section.
Schematically, the system breaks up into dominant and weak subsystems connected as shown
in Figure 8.1. DominantSubsystem can be used to select the dominant part.

y u
Dominant subsystem
Reduced model is obtained
by severing these connections
Weak subsystem

Figure 8.1. Model reduction based on decomposing the system into its dominant and weak parts.
DominantSubsystemsystem
find the dominant
subsystem of the state-space system
Model reduction by selection of the dominant subsystem.
142 Control System Professional
« A reduced-order model of the system original can be found by selecting the
dominant subsystem.
In[25]:= reduced DominantSubsystemoriginal
Out[25]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷0.592249 1.19293 0.643875 0.826229
÷1.19293 ÷1.29982 ÷5.6329 0.619183
0.643875 5.6329 ÷1.49353 ÷0.519548
0.826229 ÷0.619183 ÷0.519548 ÷0.0384288
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Similarly to the function MinimalRealization, DominantSubsystem takes the option
ReductionMethod. With the default Automatic value for this option, DominantSub
system may use the structural information about the input system if it is available (if, for
example, the system is already in the internally balanced form). Otherwise, the function starts
from constructing the internally balanced realization and then selects the dominant part, as set
by the option RejectionLevel. With ReductionMethod ÷ InternallyBalancedForm,
DominantSubsystem always uses the internally balanced form to select the dominant part.
DominantSubsystem inherits options from InternallyBalancedForm.
option name default value
RejectionLevel 0.1 if the relative value of a diagonal element
(compared with the biggest element)
of the Gramians is equal
to or less than this value,
the corresponding mode is to be dropped
The option specific to DominantSubsystem.
If the default value of RejectionLevel is not appropriate for the particular case, the user
may find the internally balanced realization "manually" and then choose a suitable value for
RejectionLevel.
« Because we do have the internally balanced realization of the system original, we
use it to find the second-order reduced model.
In[26]:= reduced2DominantSubsystembalanced, RejectionLevel.2
Out[26]=
l
\
.
.
.
.
.
.
.
..
÷0.314668 3.62134 0.602247
÷3.62134 ÷22.5446 2.57868
0.602247 ÷2.57868 0.142305
\
!
.
.
.
.
.
.
.
..

8. Realizations 143
« This creates the Bode plots for the three systems and displays them at once. The
solid line is for the initial system, and the dashed and dashed-dotted ones are for the
third- and second-order reduced models, respectively. Note that in the first plot we
set up a plot range sufficient to display all three graphs, and in the second plot we
tweak PlotPoints for the phase unwrapping mechanism to operate correctly.
In[27]:= DisplayTogetherGraphicsArray
BodePlotoriginal, PlotRange All, 0, 600,
BodePlotreduced, PlotStyle Dashing0.02,PlotPoints 75,
BodePlotreduced2, PlotStyle Dashing0.05, 0.015, 0.005, 0.015;
0.1 0.51 5 10 50100
Frequency RadSecond
-500
-400
-300
-200
-100
0
P
h
a
s
e

d
e
g

0.1 0.51 5 10 50100
Frequency RadSecond
-80
-60
-40
-20
0
M
a
g
n
i
t
u
d
e

d
B

As we can see, in the reduced models the low-frequency behavior is preserved while the
high-frequency (fast) states are eliminated.
8.6 Pole-Zero Cancellation
PoleZeroCanceltf cancel the common pole-zero
pairs in the transfer function tf
Canceling common pole-zero pairs.
For the model reduction purposes, PoleZeroCancel in its current implementation is most
useful for SISO systems.
144 Control System Professional
« Here is a z-plane transfer function for a SISO system.
In[28]:= TransferFunctionz,
z .5

.5 1.5 z z
2
, Sampled True
Out[28]= |
÷ 0.5

2
÷ 1.5 + 0.5
|

« This is the transfer function after the pole-zero pair at z = 0.5 is canceled.
In[29]:= PoleZeroCancel%
Out[29]= |
1

÷ 1.
|

Note that if the common factors coincide within the precision of the elements, they can be
canceled without calling PoleZeroCancel—for example, just by factoring the elements of
the transfer matrix. Therefore, PoleZeroCancel is most useful when the match is not exact.
In fact, the Tolerance option allows cancellation of the factors within any desired difference.
« The common pair in the preceding transfer function cancel in the factored form, too.
In[30]:= FactorRational%%
Out[30]= |
1

÷ 1.
|

« Here is another transfer function.
In[31]:= TransferFunctions,
s .5

s .4

Out[31]= |
+ 0.5

+ 0.4
|

« Tolerance allows cancellation of pairs with a significant difference. The pole at
s = 0.4 and the zero at s = 0.5 cancel so long as Tolerance is set to 0.1.
In[32]:= PoleZeroCancel%, Tolerance .1
Out[32]= ( 1 )

8. Realizations 145
option name default value
Tolerance Automatic maximum difference between
a pole and a zero for which they
are considered a common pair
Option to PoleZeroCancel.
8.7 Similarity Transformation
There can be an infinite number of realizations of a physical system that correspond to the
system's representations in different bases of state space. The transformation x

(t) = Tx(t)
from basis x to another basis, x

, may be performed with any nonsingular matrix T. In the new
basis, the state equations from Eq. (8.1) become
(8.4)
x

(t) = A

x

(t) + B

u(t)
y(t) = C

x

(t) + Du(t)
where
(8.5)
A

= TAT
÷1
B

= TB
C

= CT
÷1
The result can be obtained using the function SimilarityTransform.
SimilarityTransformstatespace, m
transform the system statespace with the matrix m
Finding the similarity transformation.
146 Control System Professional
« Consider an earlier system.
In[33]:= system StateSpace3, 2, 0,0, 1, 0, 1, 2, 2,
0, 0, b
1
, 0,0, b
2
,c
1
, 0, 0, 0, 0, c
3
,d
11
, d
12
,d
21
, d
22

Out[33]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷3 2 0 0 0
0 ÷1 0 b
1
0
1 ÷2 ÷2 0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« Here its eigenvectors are arranged as columns of the matrix.
In[34]:= t EigenvectorsFirst%Transpose
Out[34]=
l
\
.
.
.
.
.
.
.
.
÷1 0 ÷1
0 0 ÷1
1 1 1
\
!
.
.
.
.
.
.
.
.
« A similarity transformation based on the inverse of that matrix can be used to
represent the system in its canonical form.
In[35]:= SimilarityTransformsystem, Inverset
Out[35]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷3 0 0 b
1
0
0 ÷2 0 0 b
2
0 0 ÷1 ÷b
1
0
÷c
1
0 ÷c
1
d
11
d
12
c
3
c
3
c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

To avoid a double inversion of the matrix T in Eq. (8.5), either T or its inverse T
÷1
can be
supplied as an input argument to SimilarityTransform. In the latter case, the option
InvertedTransformMatrix must be set to True. Another way to look at this option is that
it allows backward transformation from basis x

to x.
8. Realizations 147
« This performs the transformation, assuming that matrix t is the inverse of the
transformation matrix.
In[36]:= SimilarityTransformsystem, t, InvertedTransformMatrix True
Out[36]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷3 0 0 b
1
0
0 ÷2 0 0 b
2
0 0 ÷1 ÷b
1
0
÷c
1
0 ÷c
1
d
11
d
12
c
3
c
3
c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Further performance gain can be achieved if the transformation matrix is known to be orthogo-
nal, in which case the transpose of the matrix can be used instead of its inverse.
option name default value
InvertedTransformMatrix False whether the input
matrix is already inverted
OrthogonalTransformMatrix Automatic whether the input matrix
is known to be orthogonal
Options to SimilarityTransform.
8.8 Recovering the Transformation Matrix
The similarity transformation matrix, which has been used internally to arrive at the realiza-
tions of special forms introduced in this chapter, can be retrieved using the function Trans
formationMatrix.
TransformationMatrixoriginal, transformed
the similarity transformation
matrix between two realizations
Recovering the similarity transformation.
148 Control System Professional
« This is another simple system and its Kalman controllable realization.
In[37]:= system StateSpace3, 0, 0,0, 1, 0,1, 2, 2,
0, 0, b
1
, 0,0, b
2
,c
1
, 0, 0, 0, 0, c
3
,d
11
, d
12
,d
21
, d
22

Out[37]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷3 0 0 0 0
0 ÷1 0 b
1
0
1 ÷2 ÷2 0 b
2
c
1
0 0 d
11
d
12
0 0 c
3
d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[38]:= KalmanControllableFormsystem
Out[38]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷1 0 0 b
1
0
÷2 ÷2 1 0 b
2
0 0 ÷3 0 0
0 0 c
1
d
11
d
12
0 c
3
0 d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This gives the transformation matrix between the two realizations.
In[39]:= TransformationMatrixsystem, %
Out[39]=
l
\
.
.
.
.
.
.
.
.
0 1 0
0 0 1
1 0 0
\
!
.
.
.
.
.
.
.
.
« Indeed, the similarity transformation of the original system with this matrix, which
is known to be orthogonal, brings about the Kalman controllable realization.
In[40]:= SimilarityTransformsystem, % , OrthogonalTransformMatrix True
Out[40]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
÷1 0 0 b
1
0
÷2 ÷2 1 0 b
2
0 0 ÷3 0 0
0 0 c
1
d
11
d
12
0 c
3
0 d
21
d
22
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

8. Realizations 149
9. Feedback Control Systems Design
Feeding a weighted part of the state or output variables back to the input is a method often
used to correct the behavior of control systems. This chapter describes the tools provided in
Control System Professional to design such feedback schemes by enforcing the desired pole
location on the complex plane. Another approach, using the optimal control technique, is
discussed in Chapter 10. The methods described here are applicable to continuous- or
discrete-time systems as long as the proper values of the poles—on the s - or z -plane—are
supplied.
9.1 Pole Assignment with State Feedback
By closing the loop of an n
th
-order system
(9.1) x

= Ax +Bu
with the state feedback control u = ÷Kx, one forces the system's poles, that is, the eigenvalues
of matrix A÷BK, to assume the new positions
(9.2) Λ
1
, Λ
2
, …, Λ
n
The problem of finding the matrix K that yields the desired locations of poles Λ
i
is often
referred to as the pole assignment or pole placement problem. The function StateFeedback
Gains attempts to find its solution, assuming that the input system is completely controllable.
So long as the algorithm does not require knowledge of matrices C and D, they may be
omitted in the state-space description of the input system.
StateFeedbackGainsstatespace, poles
find the feedback gain matrix that places poles
of the state-space system statespace into poles
State feedback design.
As an example, we consider a model of the single-axis satellite control system depicted in
Figure 9.1. The transfer function for the system is H(s) = Θ(s) l u (s) = 1 l s
2
, where Θ is the angle
of the satellite axis with respect to a reference, u = M l J is the normalized input variable, M is
the control torque applied by the thruster, and J is the moment of inertia about the center of
mass (Franklin et al. (1990)).

Θ
Reference
Figure 9.1. One-degree-of-freedom model of satellite attitude control.
« Load the application.
In[1]:= <<ControlSystems`
« This is the transfer function for the satellite control system.
In[2]:= TransferFunction ]s,
1

s
2
|
Out[2]= |
1

2
|

« This finds a discrete-time realization of the system for sampling period T.
In[3]:= ToDiscreteTime[StateSpace[%],Sampled Period [T]]
Out[3]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
1 T
T
2

2
0 1 T
1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
T

« This designs the discrete-time controller that would place the poles of this system
into the desired locations z
1
and z
2
in the complex plane.
In[4]:= StateFeedbackGains[%, {z1, z2}]//Simplify
Out[4]= (
(z1÷1) (z2÷1)

T
2
÷
z2 z1+z1+z2÷3

2 T
)
9. Feedback Control Systems Design 151
« This is the controller gain matrix for the particular case z = 0.8 ± 0.2 j and sampling
period T = 0.1 seconds.
In[5]:= % /.{z1.8 .2 , z2.8 .2 , T 0.1}//Chop
Out[5]= ( 8. 3.6 )
Feedback design is prone to significant numerical errors, especially for high-order or weakly
controllable systems. StateFeedbackGains has several options that help to avoid meaning-
less results. There are also some options pertaining to the method being applied.
option name default value
Method Automatic method to use
VerifyPoles Automatic whether to check
resulting closed-loop poles
against the ones required
AdmissibleError 0.01 relative error in pole
position considered admissible
Some of the options for StateFeedbackGains.
The option Method determines what method to use to compute the feedback gains. In the
base package, two methods are available: Ackermann and KNVD. The option value Auto
matic imposes the following division of labor: KNVD is called for inexact input and for a
special case of exact input, namely when the number of inputs is equal to the number of
states; Ackermann is used otherwise.
Using the option VerifyPoles, it is possible to check if the poles of the closed-loop system
are indeed (close to) the required ones. For the option value Automatic, the check is made
only if the input is inexact. The value must be set to True to perform the check on exact input
or input containing symbolic expressions. The option can be set to False to save some comput-
ing time.
The option AdmissibleError relates to the option VerifyPoles and specifies the relative
error in the location of poles that is deemed admissible. The value in no way affects the result
of the computation. If the required accuracy has not been reached, it is up to the user to
choose the appropriate strategy. Traditionally, the case would require changing the method of
computation, reconsidering the requirement for accuracy, or even reformulating the problem.
In Mathematica, one may also use a built-in mechanism for manipulating the precision, which
may resolve some problems of this sort.
152 Control System Professional
« Consider a hypothetical eighth-order system.
In[6]:= n 8;
In[7]:= a DiagonalMatrix[Range[n]]//N
Out[7]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
1. 0. 0. 0. 0. 0. 0. 0.
0. 2. 0. 0. 0. 0. 0. 0.
0. 0. 3. 0. 0. 0. 0. 0.
0. 0. 0. 4. 0. 0. 0. 0.
0. 0. 0. 0. 5. 0. 0. 0.
0. 0. 0. 0. 0. 6. 0. 0.
0. 0. 0. 0. 0. 0. 7. 0.
0. 0. 0. 0. 0. 0. 0. 8.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
In[8]:= b Table[{1.},{n}]
Out[8]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
1.
1.
1.
1.
1.
1.
1.
1.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
« Suppose this is the list of the desired pole locations after the loop is closed.
In[9]:= poles Range[n , 1]
Out[9]= 8, 7, 6, 5, 4, 3, 2, 1
« Let us try to place the poles using Ackermann's formula (defined shortly) while
making sure that the target is not missed by more than 0.0001 percent. We are
warned that the goal has not been achieved. (Since we are not particularly interested
in the concrete values of the gains, we suppress the output by placing the semicolon
in the end of the input statement.)
In[10]:= StateFeedbackGains[a, b, poles,
Method Ackermann, AdmissibleError .000001];
StateFeedbackGains::bpl : Warning: Pole location
may deviate from the required one by more than 0.0001`%
9. Feedback Control Systems Design 153
« On our machine, the machine-precision numbers are 16 digits long.
In[11]:= $MachinePrecision
Out[11]= 16
« This increases the precision of our parameters.
In[12]:= {a, b, poles}SetPrecision [{a, b, poles}, 25];
« By increasing the precision of the input, we force the feedback computation to be
done with higher precision. Now we are presented with no warning.
In[13]:= StateFeedbackGains[a, b, poles,
Method Ackermann, AdmissibleError .000001];
« We double-check the result. The achieved accuracy is even greater than required.
In[14]:= Max[Eigenvalues[ab.% ]poles]
Out[14]= 5.4410
21
9.1.1 Ackermann's Formula
According to Ackermann's formula, the feedback matrix K that places the poles of a
single-input system x

= Ax +Bu into the positions Λ
1
, Λ
2
, …, Λ
n can be found as
(9.3) K = e
n
C
÷1
Φ(A)
where
e
n
= [ 0 0 0 1 ]
is the unit vector of length n,
C = [ B AB A
2
B A
n÷1
B]
is the controllability matrix, and
Φ (A) = A
n
+ Α
1
A
n÷1

2
A
n÷2
+ + Α
n÷1
A+Α
n
I
154 Control System Professional
is the characteristic polynomial of matrix A. Here I is the identity matrix, and the coefficients
Α
i
are such that Λ
1
, Λ
2
, …, Λ
n
are the roots of the polynomial:
(s ÷Λ
1
) (s ÷ Λ
2
) (s ÷ Λ
n
) = s
n
+ Α
1
s
n÷1
+ Α
2
s
n÷2
+ + Α
n÷1
s + Α
n
The function StateFeedbackGains with the option Method ÷ Ackermann implements this
algorithm.
Method ÷ Ackermann compute the feedback matrix using Ackermann' s formula
State feedback design using Ackermann's formula.
The Ackermann method, besides being useful for single-input systems, may also find applica-
tion if an attempt is to be made to control a multi-input system through a single input. The
option ControlInput ÷ Automatic is used in such cases to find the "best" control using the
condition number of the corresponding controllability matrix as a criterion. It is also possible
to specify the control input explicitly.
option name default value
ControlInput Automatic which input to choose as control
Option specific to the Ackermann method.
9. Feedback Control Systems Design 155
Figure 9.2. F-8 aircraft in flight. Photograph by Dryden Flight Research Center, NASA.
As an example we consider an approximate model of the lateral dynamics of an F-8 aircraft
(Figure 9.2) linearized about a particular set of flight conditions and reproduced after Brogan
(1991). The state and input vectors in the model are
x =
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
p
r
Β
Φ
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
and u =
.
.
.
.
.
.
.
.
..

a

r
.
.
+
+
+
+
+
+
++
where p, r , Β, and Φ are the roll and yaw rates and the sideslip and roll angles, respectively,
and ∆
a
and ∆
r
are the aileron and rudder deflections. Figure 9.3 introduces the nomenclature.

156 Control System Professional
Φ
Rudder
Ailerons
Top view
Front view
Β

r
Figure 9.3. Aircraft schematic.
« This is the state-space model of the aircraft.
In[15]:= aircraft
StateSpace[{{10, 0, 10, 0},{0, .7, 9, 0},{0, 1, .7, 0},{1, 0, 0, 0}},
{{20, 2.8},{0, 3.13},{0, 0}, {0, 0}}]
Out[15]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷10 0 ÷10 0 20 2.8
0 ÷0.7 9 0 0 ÷3.13
0 ÷1 ÷0.7 0 0 0
1 0 0 0 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« Here are the closed-loop poles we wish the system to have.
In[16]:= poles {10, 8, 5, 2};
StateFeedbackGains may be asked to determine whether it is possible to control the
aircraft using only one of the inputs if, say, a malfunction prevents manipulation of the other.
If such input exists, the feedback gain matrix will contain a nonzero row corresponding to this
input.
9. Feedback Control Systems Design 157
« StateFeedbackGains finds that the system is better controlled through the
second input (i.e., the rudder deflection) and returns the corresponding feedback
gains.
In[17]:= StateFeedbackGains[aircraft, poles, Method Ackermann]
Out[17]=
l
\
.
..
0 0 0 0
÷16.9205 ÷19.4816 25.6083 ÷169.205
\
!
.
..
« The attempt to control the aircraft from only the first input fails, and we are
presented with messages suggesting that the trouble possibly stems from the system
being uncontrollable.
In[18]:= StateFeedbackGains[aircraft, poles, Method Ackermann, ControlInput 1]
LinearSolve::nosol :
Linear equation encountered which has no solution.
StateFeedbackGains::nos :
Cannot find feedback gain matrix. The system may not be controllable.
Out[18]= StateFeedbackGains
StateSpace10, 0, 10, 0, 0, 0.7, 9, 0,0, 1, 0.7, 0,1, 0, 0, 0,
20, 2.8,0, 3.13,0, 0, 0, 0,
10, 8, 5, 2,MethodAckermann, ControlInput1
« Indeed, the system is not controllable from its first input; in other words, the aircraft
cannot be controlled by only the aileron deflections (at least not within the linearized
model).
In[19]:= Controllable[Subsystem[aircraft, 1]]
Out[19]= False
9.1.2 Robust Pole Assignment
Ackermann's formula does not provide for multi-input control and, even for a single-input
case, the resulting matrix can be very badly conditioned. The robust algorithm by Kautsky et
al. (1985), which is accessible via the method KNVD, often offers a better alternative. In the case
where the number of states is greater than the number of control inputs, the algorithm uses
the additional degrees of freedom to find the solution that is as insensitive to perturbations as
possible.
158 Control System Professional
Method ÷ KNVD compute the feedback matrix using
the Kautsky-Nichols-Van Dooren algorithm
State feedback design using the robust method by Kautsky et al.
The algorithm depends heavily on numerical quantities of the system in question and so is
implemented for inexact input parameters only. However, for the particular case where the
number of inputs is equal to the number of states and matrix B is not singular, the solution
may be found simply from K = B
÷1
(A÷A) (where is the diagonal matrix determined by the
eigenvalues Λ
1
, Λ
2
, …, Λ
n ), and is attempted for exact and symbolic arguments, too.
In the KNVD regime, StateFeedbackGains accepts the option RandomOrthogonalComple
ment, similarly to other functions that need to reconstruct the basis of the vector space given
an insufficient number of orthogonal components.
The "robustness" of the system designed using the KNVD method may improve after several
iterations, the number of which can be set via the option MaxIterations. Note, however,
that the implemented algorithm is not guaranteed to converge.
option name default value
MaxIterations 1 how many iterations to try
Option specific to the KNVD method.
In the rest of this section, the solutions obtained with the Ackermann and KNVD methods are
compared in terms of their robustness for the aircraft model.
« This finds the feedback via Ackermann's formula—reproducing the result we
already have.
In[20]:= kA StateFeedbackGains[aircraft, poles, Method Ackermann]
Out[20]=
l
\
.
..
0 0 0 0
÷16.9205 ÷19.4816 25.6083 ÷169.205
\
!
.
..
« Now we obtain the feedback using the KNVD algorithm with a single iteration.
In[21]:= k1StateFeedbackGains[aircraft, poles, Method KNVD, MaxIterations 1]
Out[21]=
l
\
.
.
.
0.4 0.250479 ÷0.347476 4.
÷4.90531 ×10
÷14
÷1.78914 ÷1.08946 ÷4.27485 ×10
÷13
\
!
.
.
.
9. Feedback Control Systems Design 159
« This is the solution using the same method after 10 iterations.
In[22]:= k10
StateFeedbackGains[aircraft, poles, Method KNVD, MaxIterations 10]
Out[22]=
l
\
.
..
0.31088 0.516493 ÷1.13904 2.94829
÷0.982774 ÷3.23775 2.026 ÷6.18559
\
!
.
..
To measure the robustness of the solutions, we distort the feedback matrices to some extent
(as if due to noise in the line) and see how that will affect the locations of the poles of the
closed-loop system. In practice, we would prefer the noise to have as little effect as possible.
« This is a utility function that distorts all numeric values in expr to some degree, up
to the maximum relative error err.
In[23]:= distort[expr_, err_]:
expr /.x_?NumberQ x (1Random[Real, {err, err}])
« We form the list comprising all three feedback matrices we have computed so far
and distort them as much as 1 percent.
In[24]:= dks distort[{kA, k1, k10}, .01];
« These are the three groups of eigenvalues of the corresponding distorted closed-loop
systems.
In[25]:= Eigenvalues[StateFeedbackConnect[aircraft, #]#1]]& /dks
Out[25]= 8.978591.21033 ,8.978591.21033 ,4.98785, 1.96418,
10.39, 7.68141, 5.06312, 1.97541,
9.90304, 8.05419, 5.09136, 2.00677
« Finally, these are the maximum relative errors introduced by the distortion.
In[26]:= Max]Abs]
# poles

poles
||& /%
Out[26]= 0.194556, 0.0398239, 0.0182728
The last value in the list—and the smallest one—is the solution found after 10 iterations with
the KNVD method. This solution indeed looks robust—the relative error in pole location is
about as big as the imposed distortion of the feedback matrix. Comparing the second and the
third values, we see that the robustness has somewhat improved with iterations. On the other
end, the solution obtained with Ackermann's formula (the first element in the list) is not stable
against the noise.
160 Control System Professional
9.2 State Reconstruction
The prerequisite for pole assignment using state feedback is knowledge of the state variables.
So far in this chapter we have assumed that this prerequisite was somehow met. Now we
address the problem of how that can be done. In the trivial case of a square nonsingular
matrix C, the state vector can simply be computed from the input and output vectors as
x = C
÷1
(y ÷ Du); otherwise an attempt can be made to reconstruct the state vector by forming
a device called an estimator (or observer), with which the approximation x

to the state vector x
can be obtained as
(9.4)
x

= (A÷ LC) x

+(B÷ LD) u + Ly ,
where L is the estimator gain matrix. Figure 9.4 presents such a device in block diagram form
( Brogan 1991). As seen from the diagram, the estimator is driven by the difference
between the actual output measurement y and the "expected" output Cx

, provided that the
direct transmission term Du is taken into account.

System
Observer
L
C
B D
A

y u
State estimate
x=Ax +Bu
y=Cx +Du
.
.
x
^
x
^
Figure 9.4. Continuous-time estimator schematic.
9. Feedback Control Systems Design 161
System
Observer
L
C
B D
A

y(k) u(k) x(k+1)
State estimat
Delay
x(k+1) =Ax(k) +Bu(k)
y(k) =Cx(k) +Du(k)
^
x(k)
^
Figure 9.5. Discrete-time estimator schematic.
If the initial system is completely observable, it is possible to choose gains L to be such that
the eigenvalues of A
c
= A÷ LC assume any desired location, thereby controlling the rate at
which x

follows x. This reduces the problem of finding the estimator gains to one of finding
the (transposed) controller gains for the dual system
(9.5) A
c
T
= A
T
÷ C
T
L
T
Therefore, the function StateFeedbackGains can be employed for the purpose. Alterna-
tively, one may use the function EstimatorGains, which performs the procedure in one
step. EstimatorGains does not introduce any new options of its own, but accepts and
passes on the options for StateFeedbackGains and DualSystem.
EstimatorGainssystem, poles
find the estimator gain matrix for system
that places poles of the estimator at poles
Estimator design.
The preceding diagram and equations refer to continuous-time systems. The estimator for a
discrete-time system is determined by an equation similar to Eq. (9.4):
(9.6) x

(k + 1) = (A÷ LC) x

(k) + (B÷ LD) u(k) + Ly (k)
162 Control System Professional


The corresponding block diagram is presented in Figure 9.5. EstimatorGains handles both
continuous- and discrete-time cases.
Another linearized state-space model of an inverted pendulum, reproduced after Gopal
(1993), will be used in the rest of this section to illustrate obtaining the estimator gain matrix.
The model assumes that the state and input vectors are, respectively,
x =
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
X
X

Θ
Θ

.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
and u = [f
x
] ,
where Θ is the angular displacement of the pendulum, X is the horizontal position of the cart,
and f
x
is the external force applied to the wheels. The model further assumes that the only
variable available for measurement is X, making it, therefore, the only output variable:
« Here is the state-space model.
In[27]:= StateSpace]
]{0, 1, 0, 0}, ]0, 0,
3 g m

4 m 7 M
, 0¦,{0, 0, 0, 1},]0, 0,
6 g (m M )

4 L m 7 L M
, 0¦¦,
]{0}, ]
7

4 m 7 M
¦,{0},]
6

4 L m 7 L M
¦¦,{{1, 0, 0, 0}}, {{0}}|
Out[27]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 1 0 0 0
0 0 ÷
3 g m

4 m+ 7 M
0
7

4 m+ 7 M
0 0 0 1 0
0 0
6 g (m+ M)

4 L m+ 7 L M
0 ÷
6

4 L m+7 L M
1 0 0 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

Because we know that all symbolic variables in the input system are real-valued, we may set
the option ComplexVariables correspondingly to get a simpler result. This option will be
picked up by the function DualSystem. At this point, we will not specify the desired poles
for the estimator, but use a generic list of four symbolic values, . Therefore,
the expression will be obtained in its general form.
9. Feedback Control Systems Design 163
y XXXX .
{p
1
, p
2
, p
3
, p
4
}
« This finds the estimator for the system and somewhat simplifies the result.
In[28]:= EstimatorGains[%, {p1, p2, p3, p4},ComplexVariables None]//Apart
Out[28]= p1p2p3p4,

1

L 4 m7 M
6 g m4 L p1 p2 m4 L p1 p3 m4 L p2 p3 m6 g M
7 L M p1 p27 L M p1 p37 L M p2 p3p1p2p3 p4,

1

3 g L m
6 g m p16 g M p14 L m p2 p3 p17 L M p2 p3 p16 g m p2
6 g M p26 g m p36 g M p3
1

3 g L m
6 g m4 L p1 p2 m4 L p1 p3 m
4 L p2 p3 m6 g M7 L M p1 p27 L M p1 p37 L M p2 p3 p4,

1

L
2
m 4 m7 M
2 mM 6 g m4 L p1 p2 m4 L p1 p3 m4 L p2 p3 m
6 g M7 L M p1 p27 L M p1 p37 L M p2 p3
1

3 g L m
6 g m p16 g M p14 L m p2 p3 p17 L M p2 p3 p1
6 g m p26 g M p26 g m p36 g M p3 p4
« This is the result for a particular set of numerical values.
In[29]:= % /.{m .15, M 1, L 2, g 9.81, p12, p22, p32,p4 3}//
Chop
Out[29]= 9,35.4532, 153.358, 323.456
164 Control System Professional
10. Optimal Control Systems Design
Given the constraints U on control functions u(t) that form the set of admissible controls
u(t) e U for all t e [t
0
, t
1
] and the constraints X on the state trajectories x(t) that form the set
of admissible trajectories x(t) e X for all t e [t
0
, t
1
] , the optimal control problem is to find an
admissible control function u(t) that forces the continuous-time system
(10.1) x

= f(x(t), u(t), t)
to follow an admissible trajectory x(t) while minimizing the performance criterion
(10.2)
J = S(x(t
1
), t
1
) +
¹
t
0
t
1
L(x(t), u(t), t) dt
If the solution to the optimal control problem can be found in the form
(10.3) u(t) = u(x(t), t)
then the control is said to exist in the closed-loop form, and Eq. (10.3) is referred to as the
optimal control law.
In Eq. (10.2), the function S is the cost associated with error in the terminal state at time t
1
,
and L penalizes for transient state errors and control effort. In the particular case of quadratic
cost functions,
(10.4) S = x
T
(t
1
) Mx(t
1
)
and
(10.5) L = x
T
(t) Qx(t) + u
T
(t) Ru(t)
or (in the form that includes the cross-term P)
(10.6) L = x
T
(t) Qx(t) + u
T
(t) Ru(t) + 2 u
T
(t) Px(t)
where the desired state is assumed to be x = 0.
Matrices M, Q, and R must be square; M and Q must have a length equal to the number of
states; and R must correspond in dimension to the number of inputs. Additionally, to ensure
that the solution is unique and finite, matrices M and Q must be positive semidefinite and
matrix R must be positive definite. The cross-term problem in Eq. (10.6) is reducible to the one
in Eq. (10.5), and so matrix P must be of a form that brings about suitable Q and R.
The components of the matrices reflect the emphasis the designer places on corresponding
errors. For instance, if R is a diagonal matrix, a relatively larger value of R
i i
means that more
control effort will be allotted to regulate input u
i
. In a sense, the art of choosing the elements
of Q, R, and M is similar to the art of selecting the proper pole locations in the feedback
design via pole assignment.
The optimal control problem can be restated similarly for discrete-time control systems
(10.7) x(k + 1) = f(x(k), u(k))
and the performance criterion is
(10.8) J = S(x(N)) +`
k=0
N÷1
L(x(k), u(k))
Control System Professional addresses both continuous- and discrete-time problems.
10.1 Linear Quadratic Regulator
In the case of the linear system
(10.9) x

= Ax +Bu
or
(10.10) x(k + 1) = Ax(k) + Bu(k)
and quadratic cost functions, the optimal control problem is said to be the linear-quadratic (LQ)
optimal control problem. Further, for constant-coefficient matrices A and B and terminal time
infinitely far in the future (meaning, of course, that the operating time is sufficiently long
compared to the time constants of the system), the problem is referred to as the infinite-horizon
or infinite-time-to-go problem. In this case, the control law in Eq. (10.3) simplifies to
(10.11) u = ÷Kx
with a constant-coefficient feedback gain matrix K. Note that the penalty function S for
terminal constraint in Eq. (10.2) and Eq. (10.8) is not an issue for the infinite-horizon problem.
The function LQRegulatorGains attempts to find the matrix K for this particular case. It
recognizes the type of system supplied to its input—continuous- versus discrete-time—and
166 Control System Professional
acts accordingly. As is the case with the pole assignment problem, furnishing matrices C and
D in the state-space description is optional.
LQRegulatorGainsstatespace, q, r
find the optimal feedback gains for the
system statespace and the quadratic cost
function defined by weighting matrices q and r
LQRegulatorGainsstatespace, q, r, p
find the optimal feedback for the case where the
quadratic cost function contains the cross-term p
Linear quadratic regulator design.
As an example, we design an optimal regulator for the mixing tank shown in Figure 10.1. The
tank (Gopal (1993)) implements concentration control of a chemical mixture with inflows
through two regulated valves at rates Q
1
and Q
2
and concentrations C
1
and C
2
, respectively.
Outflow is at the rate Q with concentration C. The volume of the liquid in the tank is V. The
state, input, and output vectors are assumed to be as follows:
x =
.
.
.
.
.
.
.
.
..
V
C
.
.
+
+
+
+
+
+
++
, u =
.
.
.
.
.
.
.
.
..
Q
1
Q
2
.
.
+
+
+
+
+
+
++
, and y =
.
.
.
.
.
.
.
.
..
Q
C
.
.
+
+
+
+
+
+
++
The design will be carried out in the discrete-time domain.

h
Q
1
, C
1
Q
2
, C
2
Q , C
Tank
Figure 10.1. Mixing tank schematic.
10. Optimal Control Systems Design 167
« Load the application.
In[1]:= ControlSystems`
« This is a state-space representation of the mixing tank system for a particular set of
parameters.
In[2]:= tank StateSpace[{{0.01, 0},{0, 0.02}},
{{1, 1}, {0.004, 0.002}},{{0.01, 0},{0, 1}}]
Out[2]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷0.01 0 1 1
0 ÷0.02 ÷0.004 0.002
0.01 0 0 0
0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This is a discrete-time approximation of the system sampled with a period of 5
seconds.
In[3]:= tankd ToDiscreteTime[%, Sampled Period [5]]
Out[3]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0.951229 0. 4.87706 4.87706
0. 0.904837 ÷0.0190325 0.00951626
0.01 0 0 0
0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
5

« This assigns some values to the weighting matrices Q and R.
In[4]:= q DiagonalMatrix[{.01, 100}]
Out[4]=
l
\
.
..
0.01 0
0 100
\
!
.
..
In[5]:= r DiagonalMatrix[{2, .5}]
Out[5]=
l
\
.
..
2 0
0 0.5
\
!
.
..
« This finds the optimal gain matrix.
In[6]:= LQRegulatorGains[tankd, q , r]
Out[6]=
l
\
.
..
0.0223257 ÷3.30934
0.0778205 3.81983
\
!
.
..
168 Control System Professional
« To check the regulator in action, we plug it into the state feedback. This is the device
after the loop is closed.
In[7]:= StateFeedbackConnect[tankd , % ]
Out[7]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0.462811 ÷2.48971 4.87706 4.87706
÷0.000315645 0.805502 ÷0.0190325 0.00951626
0.01 0. 0. 0.
0. 1. 0. 0.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
5

« This simulates the output response of the closed-loop system to an impulse signal at
the second input; in other words, it shows how the outflow rate and the
concentration react to a short, sudden increase in the flow rate of the second
chemical. The outflow rate and the concentration are shown as the dash-dotted and
dotted lines, respectively.
In[8]:= SimulationPlot[%, {0, DiscreteDelta[t]}, {t, 200},PlotStyle
{Dashing[{.1, .01, .01, .01}],Dashing[{.01}]},PlotRange All];
50 100 150 200
0.01
0.02
0.03
0.04
10. Optimal Control Systems Design 169
« For comparison's sake, we plot the same graph for the original system. The outflow
rate and the concentration are now shown as the solid and dashed lines.
In[9]:= SimulationPlot[tankd , {0, DiscreteDelta[t]},
{t, 200},PlotStyle {Thickness[.001],Dashing[{.05}]}];
50 100 150 200
0.01
0.02
0.03
0.04
« Finally, we combine the two plots in one graph, which shows clearly that the
regulated system has a significantly faster response.
In[10]:= Show[%, %%,
PlotLabel"Regulated Vs. Original System", PlotRange All];
50 100 150 200
0.01
0.02
0.03
0.04
Regulated Vs. Original System
A successful regulator design is possible only for a stabilizable system. Otherwise, LQRegula
torGains does not return any solution.
« Here are matrices A and B for a system that is not stabilizable (because it is both
unstable and uncontrollable).
In[11]:= aa |
3 1
0 2
j;
In[12]:= bb |
1
0
j;
170 Control System Professional
« We assign equal weight to all inputs and states.
In[13]:= q DiagonalMatrix[{1., 1.}];r {{1.}};
« The attempt to build the regulator fails, as it should.
In[14]:= LQRegulatorGains[StateSpace[aa, bb],q , r]
LinearSolve::nosol :
Linear equation encountered which has no solution.
Out[14]= LQRegulatorGains
StateSpace3, 1, 0, 2, 1, 0,1., 0, 0, 1.,1.
« This modifies matrix A so that the system is stable (though still uncontrollable).
In[15]:= aa |
3 1
0 2
j;
« Now the regulator can be built.
In[16]:= LQRegulatorGains[StateSpace[aa, bb],q , r]
Out[16]= ( 0.162278 0.0314353 )
The optimal regulator design is based on solving the appropriate algebraic Riccati equation.
The corresponding functions—RiccatiSolve and DiscreteRiccatiSolve—can be
accessed directly, if so desired, and are described in Section 10.3. LQRegulatorGains shares
with the Riccati equation solvers the limitations imposed on the input arguments and accepts
the options accepted by the Riccati solvers.
10.2 Optimal Output Regulator
Closely related to the function LQRegulatorGains, which optimizes a system's behavior
with regard to state variables, is the function LQOutputRegulatorGains, which finds the
optimal solution to the output regulator problem. The cost function for this case contains
outputs y instead of states x (cf. Eq. 10.5).
(10.12) L = y
T
(t) Qy(t) + u
T
(t) Ru(t)
LQOutputRegulatorGains calls LQRegulatorGains and accepts the same options, the
syntactic difference between the two being that the output regulator function, obviously, does
10. Optimal Control Systems Design 171
require matrix C (and matrix D, if necessary) in the state-space description of the input sys-
tem.
LQOutputRegulatorGainsstatespace, q, r
find the optimal feedback
matrix for the output regulator problem
Linear quadratic output regulator design.
« Reconsider the mixing tank problem from Figure 10.1. We wish to design the output
regulator with matrices Q and R as given.
In[17]:= tank StateSpace[{{0.01, 0}, {0, 0.02}},
{{1, 1}, {0.004, 0.002}}, {{0.01, 0},{0, 1}}]
Out[17]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷0.01 0 1 1
0 ÷0.02 ÷0.004 0.002
0.01 0 0 0
0 1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

In[18]:= q DiagonalMatrix[{10, .01}]
Out[18]=
l
\
.
..
10 0
0 0.01
\
!
.
..
In[19]:= r DiagonalMatrix[{2, .2}]
Out[19]=
l
\
.
..
2 0
0 0.2
\
!
.
..
« This solves the problem.
In[20]:= LQOutputRegulatorGains[tank, q , r]
Out[20]=
l
\
.
..
0.00589458 ÷0.000624227
0.0589383 0.00125691
\
!
.
..
10.3 Riccati Equations
Finding the optimal control for a continuous-time linear system with a quadratic cost function
involves solving the differential Riccati equation, which, for the case of the infinite-horizon
problem, simplifies to the algebraic Riccati equation (ARE)
172 Control System Professional
(10.13) A
T
W+ WA÷ WBR
÷1
B
T
W+ Q = 0
The discrete-time case requires solution of the discrete algebraic Riccati equation (DARE)
(10.14)
W= Q+ A
T
WA÷ A
T
WB[R+ B
T
WB]
÷1
B
T
WA
(see, for example, Brogan (1991), Section 14). These are equations in an unknown matrix W
that could be viewed as systems of coupled quadratic equations regarding the unknown
components W
i j
and as such could be attempted using the built-in Mathematica functions
Solve and NSolve. Control System Professional provides also two more specialized functions
that can be more efficient than the general-purpose solvers.
RiccatiSolvea, b, q, r solve the algebraic Riccati equation
DiscreteRiccatiSolvea, b, q, r
solve the discrete algebraic Riccati equation
Functions for solving Riccati equations.
« Consider again the double-integrator system.
In[20]:= StateSpace]TransferFunction]s,
1

s
2
||
Out[20]=
l
\
.
.
.
.
.
.
.
..
0 1 0
0 0 1
1 0 0
\
!
.
.
.
.
.
.
.
..

« This extracts matrices A and B.
In[21]:= aa % #1];bb % #2];
« Let all the weights be equal.
In[22]:= q DiagonalMatrix[{1, 1}];r {{1}};
« This is the solution to the corresponding ARE.
In[23]:= RiccatiSolve[aa, bb, q , r]
Out[23]=
l
\
.
.
.
..

3 1
1

3
\
!
.
.
.
..
10. Optimal Control Systems Design 173
« Consider now another second-order system.
In[24]:= StateSpace]TransferFunction]s,
1

s (s .5)
||
Out[24]=
l
\
.
.
.
.
.
.
.
..
0 1. 0
0 ÷0.5 1
1 0 0
\
!
.
.
.
.
.
.
.
..

« This time we find its discrete-time equivalent for a sampling period of,
say, 1 second.
In[25]:= ss ToDiscreteTime[% , Sampled Period [1]]
Out[25]=
l
\
.
.
.
.
.
.
.
..
1. 0.786939 0.426123
0. 0.606531 0.786939
1 0 0
\
!
.
.
.
.
.
.
.
..
1

« This extracts matrices A and B.
In[26]:= aa ss#1];bb ss#2];
« This solves the DARE.
In[27]:= DiscreteRiccatiSolve[aa, bb, q , r]
Out[27]=
l
\
.
..
2.40153 1.05792
1.05792 2.09705
\
!
.
..
« We can use the result to find the optimal feedback gains.
In[28]:= Inverse[Transpose[bb].%.bbr].Transpose[bb].% .aa
Out[28]= ( 0.538833 0.794026 )
« We could also arrive at this result by using LQRegulatorGains directly.
In[29]:= LQRegulatorGains[ss, q, r]
Out[29]= ( 0.538833 0.794026 )
For yet another example, let us design a controller for the roll attitude of a missile
(Figure 10.2). The controller must, by using the hydraulic-powered ailerons, keep roll attitude
Φ close to zero while staying within the physical limits of aileron deflection ∆ and aileron
deflection rate ∆

(Bryson and Ho (1969)). In the state-space description, it is assumed that the
174 Control System Professional
system has one input, which accepts the command signal to aileron actuators u = ∆

so that
u = [∆

] and the state vector is
x =
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

Φ

Φ
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
++
The performance index we choose is
J =
1

2

¹
0
o
l
\
.
.
.
.
Φ
2

Φ
0
2
+

2


0
2
+
u
2

u
0
2
\
!
.
.
.
.
d t
resulting in matrices Q and R as shown below. Φ
0
and ∆
0
are the maximum desired values of
Φ and ∆, and u
0
is the maximum available value of u.

Φ
Ailerons
Top view Front view

Figure 10.2. Roll attitude control of a missile.
« This is the A matrix for the model. Q and Τ are the aileron effectiveness and roll-time
constant, respectively.
In[30]:= aa
1
\
.
.
.
.
.
.
.
.
.
.
.
0 0 0
Q

Τ

1

Τ
0
0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
;
10. Optimal Control Systems Design 175
« This is matrix B.
In[31]:= bb
1
\
.
.
.
.
..
1
0
0
\
!
.
.
.
.
..
;
« These are the matrices Q and R for the performance index.
In[32]:= q DiagonalMatrix]]
1


0
2
, 0,
1

Φ
0
2
¦|
Out[32]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
..
1


0
2
0 0
0 0 0
0 0
1

Φ
0
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
..
In[33]:= r ]]
1

u
0
2
¦¦
Out[33]= |
1

u
0
2 ]
« We choose a set of numeric values for our parameters.
In[34]:= ]Τ1, Q 10, u
0
Π, ∆
0

Π

12.
, Φ
0

Π

180.
¦
Out[34]= Τ1, Q10, u
0
Π, ∆
0
0.261799, Φ
0
0.0174533
« We then reassign the matrices using these values.
In[35]:= {aa, bb, q , r}{aa, bb, q , r}/.% ;
« Applying RiccatiSolve to this list solves the corresponding Riccati equation.
In[36]:= RiccatiSolve %
Out[36]=
l
\
.
.
.
.
.
.
.
.
2.72757 2.94179 18.2378
2.94179 6.38971 49.0962
18.2378 49.0962 578.619
\
!
.
.
.
.
.
.
.
.
« We can use the solution to find the optimal feedback matrix.
In[37]:= Inverse[r].Transpose[bb].%
Out[37]= ( 26.92 29.0343 180. )
176 Control System Professional
« Again, the result is the same as the one obtained with LQRegulatorGains.
In[38]:= LQRegulatorGains[StateSpace[aa, bb],q , r]
Out[38]= ( 26.92 29.0343 180. )
RiccatiSolve and DiscreteRiccatiSolve work by finding the eigensystem of the
corresponding Hamiltonian matrix. Alternatively, you may use the Schur decomposition
method, which has advantages for systems with multiple or near-multiple eigenvalues of the
Hamiltonian (Laub 1979). The method is accessible by selecting the option SolveMethod ÷
SchurDecomposition (which is available for RiccatiSolve only). If this method is
chosen, RiccatiSolve does not accept infinite-precision input since neither does the built-in
function SchurDecomposition. The default Automatic setting of the option selects the
Eigendecomposition or SchurDecomposition method depending on whether the input
matrices are exact or not (for DiscreteRiccatiSolve, Eigendecomposition is chosen
in either case).
option name default value
SolveMethod Automatic method to find the basis of the eigenspace
Option specific to RiccatiSolve.
Solving Riccati equations involves sorting eigenvalues of the Hamiltonian (which is required
to separate stable and unstable eigenvalues); therefore, the eigenvalues should not contain
symbolic parameters. If symbolic parameters are used, a warning is issued and LQRegulator
Gains returns unevaluated. This, of course, does not mean that the system cannot be solved
symbolically; however, such a case requires human intervention in the selection of stable
poles.
10. Optimal Control Systems Design 177
10.4 Discrete Regulator by Emulation of Continuous Design
If the optimal regulator has been designed for a continuous system so that matrices Q and R
(and possibly P) are constructed to meet the design specifications, this knowledge can be
applied to find a discrete equivalent of the optimal regulator. The procedure starts with
computation of the discrete equivalent of the continuous cost function and then the discrete
regulator is designed based on the discrete equivalent cost (see Franklin et al. (1990),
Section 9.4.4). This algorithm is implemented in DiscreteLQRegulatorGains. Its syntax
closely resembles that for LQRegulatorGains, and it accepts the same set of options. In
addition, DiscreteLQRegulatorGains accepts the options related to the function ToDis
creteTime.
DiscreteLQRegulatorGainsstatespace, q, r
design the discrete emulation of the
continuous optimal regulator for the system
statespace and weighting matrices q and r
DiscreteLQRegulatorGainsstatespace, q, r, p, t
design the emulation for a cost function
containing the cross-weighting matrix p
Finding the discrete equivalent of a continuous optimal regulator.
« This is the model for the satellite system.
In[39]:= satellite StateSpace]TransferFunction ]
1

#
2
||
Out[39]=
l
\
.
.
.
.
.
.
.
..
0 1 0
0 0 1
1 0 0
\
!
.
.
.
.
.
.
.
..

« Following an example in Franklin et al. (1990), we assume that these are the matrices
Q and R that lead to satisfactory behavior of the closed-loop system.
In[40]:= q {{1, 0},{0, 0}};r {{0.01}};
178 Control System Professional
« This is the optimal regulator matrix for the continuous case.
In[41]:= LQRegulatorGains[satellite, q , r]
Out[41]= ( 10. 4.47214 )
« This is matrix A for the corresponding closed-loop system.
In[42]:= StateFeedbackConnect[satellite, % ]#1]
Out[42]=
l
\
.
..
0. 1.
÷10. ÷4.47214
\
!
.
..
« These are the poles of the system. We want the discrete emulation to have poles
close to these.
In[43]:= poles Eigenvalues[% ]
Out[43]= 2.236072.23607 ,2.236072.23607
« First we decide on the sampling period. To find a suitable time scale, we compute
the characteristic polynomial of the closed-loop system.
In[44]:= CharacteristicPolynomial[%%, s]//Expand //Chop
Out[44]= 10.4.47214 ss
2
« Comparing the preceding result with its representation via natural frequency Ω
n

and damping coefficient Ζ , Ω
n
2
+ 2 ΖΩ
n
s + s
2
, we find the characteristic time
t
n
= 2 Πl Ω
n
.
In[45]:= tn


¸-------------------------------- ------------------
Coefficient[%, s, 0]
Out[45]= 1.98692
« We choose a sampling period several times smaller than the characteristic time.
In[46]:= ts
tn

6
Out[46]= 0.331153
10. Optimal Control Systems Design 179
« This is the discrete regulator that emulates the continuous one.
In[47]:= DiscreteLQRegulatorGains[satellite, q , r, Sampled Period [ts]]
Out[47]= ( 4.82977 3.10798 )
« These are the poles of the corresponding discrete-time closed-loop system. The poles
are in the z-plane.
In[48]:= Eigenvalues[StateFeedbackConnect[
ToDiscreteTime[satellite, Sampled Period [ts]],% ]//First]
Out[48]= 0.352980.333181 ,0.352980.333181
« This maps the poles back to the s -plane.
In[49]:=
Log[%]

ts
Out[49]= 2.182682.2846 ,2.182682.2846
« As we can see, the poles of the emulation system match the ones found by the
continuous design within a few percentage points.
In[50]:= Abs]
% poles

poles
|
Out[50]= 0.0228161, 0.0228161
The function DiscreteLQRegulatorGains should not be confused with LQRegulator
Gains, which, given a discrete-time StateSpace object as input, designs a discrete regulator.
The difference between the two functions is that the former generates a discrete object from a
continuous-type input, whereas the latter makes no type conversion; given continuous- or
discrete-type input, it returns output of the same type. To reinforce the difference, Discrete
LQRegulatorGains generates an error message if a meaningless attempt is made to feed it a
discrete-time StateSpace object.
180 Control System Professional
10.5 Optimal Estimation
Section 9.2 introduced the device called the estimator (or observer) and the function Estima
torGains, which computes the gain matrix for the device. Input and output measurements
were assumed to be known precisely so the problem could be referred to as the deterministic
state reconstruction. Consider now a linear system whose state vector is subject to some
random disturbances w(t), called the process noise, and whose output measurements are
contaminated with noise v(t), called the measurement noise
(10.15)
x

(t) = Ax(t) + Bu(t) +B
w
w(t)
y(t) = Cx(t) + Du(t) + D
w
u(t) + v(t)
The noise processes are assumed to have flat spectra (white noise), zero mean values
E ¦w(t)¦ = 0
E ¦v(t)¦ = 0
and covariance matrices Q and R
E ¦w(t) w
T
(Τ)¦ = Q∆ (t ÷Τ)
E ¦v(t) v
T
(Τ)¦ = R∆ (t ÷ Τ)
Here E ¦x¦ denotes the mean of random variable x. The two noises may further be assumed to
be mutually uncorrelated,
E ¦v(t) w
T
(Τ)¦ = 0
or if they are correlated, then their cross-covariance matrix is P:
E ¦v(t) w
T
(Τ)¦ = P∆(t ÷ Τ)
If the observer with the same structure as in Figure 9.4 (Figure 9.5 for the discrete-time case) is
applied to find the state estimates from noisy measurements, and the dual algorithm to the
one used by the linear quadratic regulator is used to find the estimator gain matrix L, then the
observer provides the least-square unbiased estimation for the state vector and is called the
Kalman filter (or Kalman estimator). As with the infinite-horizon problem, one can consider the
steady-state constant-gain solution to the optimal estimation problem that is arrived at when
both process and measurement noises are stationary (at least in the wide sense) and the
10. Optimal Control Systems Design 181
estimator operates for a sufficiently long time. The algorithm is implemented in the function
LQEstimatorGains. The corresponding block diagrams are given in Section 10.7, where the
KalmanEstimator function is introduced.
If, in addition, the noise terms have Gaussian distributions, then LQEstimatorGains finds
the solution to the so-called linear quadratic Gaussian (LQG) problem. In this case, the estima-
tion not only is optimal in the least-squares sense, but also satisfies the most-likelihood require-
ments.
Real processes never have (nor could have) absolutely flat spectra (i.e., be absolutely uncorre-
lated in time). At high spectral frequencies, the spectrum bends downwards, whereas at low
frequencies it usually has a significant 1 l f
Γ
component. It is the responsibility of the user to
decide if the white-noise approximation is applicable to the particular case.
LQEstimatorGainsstatespace, q, r
find the optimal estimator matrix for the
system statespace and process and measurement
noises with covariance matrices q and r,
assuming that all inputs of the systemare stochastic
LQEstimatorGainsstatespace, q, r, p
find the estimator gains for correlated process and
measurement noises with cross-covariance matrix p
LQEstimatorGainsstatespace, q, r, dinputs
or LQEstimatorGainsstatespace, q, r, p, dinputs
find the estimator gains if the
system has deterministic inputs dinputs
in addition to the stochastic ones
Optimal estimator design.
The function LQEstimatorGains relies on LQRegulatorGains (and, consequently, on the
Riccati equation solvers) and, therefore, accepts the same set of options and involves similar
restrictions on the input arguments.
Consider a servomechanism for the azimuth control of an antenna shown in Figure 10.3. The
system (cf. Gopal (1993)) has the state vector
x =
.
.
.
.
.
.
.
.
.
.
.
Θ
Θ

.
.
+
+
+
+
+
+
+
+
+
182 Control System Professional
and input and output vectors
u =
.
.
.
.
.
.
.
.
..
V
w
.
.
+
+
+
+
+
+
++
and y = [Θ]
where Θ is the angular position of the antenna, V is the input voltage applied to the servo
motor, and w is the disturbing torque acting on the motor's shaft. In the following examples
we will find the continuous and discrete Kalman estimators. The input w(t) and output v(t)
noise terms will be assumed to be white, mutually uncorrelated noises with zero-mean values.

Θ
Figure 10.3. Antenna schematic.
« Here is a state-space realization of the antenna mechanism.
In[51]:= antenna StateSpace[{{0, 1}, {0, 5}},{{0, 0},{1, 0.1}}, {{1, 0}}]
Out[51]=
l
\
.
.
.
.
.
.
.
..
0 1 0 0
0 ÷5 1 0.1
1 0 0 0
\
!
.
.
.
.
.
.
.
..

« This defines the noise variances.
In[52]:= q {{100}};r {{1}};
« This finds the stationary Kalman gains achieved after an observation of sufficient
length. The first input in our antenna system is the only deterministic input, which
is specified by the fourth argument to LQEstimatorGains.
In[53]:= LQEstimatorGains[antenna, q , r, {1}]
Out[53]=
l
\
.
..
0.196152
0.0192379
\
!
.
..
10. Optimal Control Systems Design 183
« This is a discrete-time approximation to antenna for some sampling period.
In[54]:= antennad ToDiscreteTime[antenna, Sampled Period[.1]]
Out[54]=
l
\
.
.
.
.
.
.
.
..
1. 0.0786939 0.00426123 0.000426123
0. 0.606531 0.0786939 0.00786939
1 0 0 0
\
!
.
.
.
.
.
.
.
..
0.1

« Now we let both noise terms have the same intensity.
In[55]:= qd rd {{10}};
« This finds the stationary Kalman gain matrix for the discrete-time system.
In[56]:= LQEstimatorGains[antennad, qd, rd, {1}]
Out[56]=
l
\
.
..
0.00199394
0.0000203033
\
!
.
..
Like most other functions in Control System Professional, LQEstimatorGains accepts both
continuous- and discrete-time objects and chooses the appropriate algorithm accordingly.
10.6 Discrete Estimator by Emulation of Continuous Design
The function DiscreteLQEstimatorGains is similar in purpose to DiscreteLQRegula
torGains (see Section 10.4); it is applicable when the optimal estimator has been designed
for a continuous system and you need to create a discrete equivalent. DiscreteLQEstima
torGains first finds the discrete equivalents of the covariance matrices for process and
measurement noises using the algorithm described in Franklin et al. (1990), Section 9.5.3. Then
it converts the continuous system to a discrete one, and finally performs the discrete estimator
design using LQEstimatorGains. DiscreteLQEstimatorGains accepts the same options
as LQEstimatorGains and shares the same restrictions. In addition, it accepts the options
related to ToDiscreteTime.
184 Control System Professional
DiscreteLQEstimatorGainsstatespace, q, r
design the discrete emulation of the continuous
optimal estimator for the systemdefined by the state÷
space object statespace and noise
covariance matrices q and r
DiscreteLQEstimatorGainsstatespace, q, r, dinputs
design the emulation if the
system has deterministic inputs dinputs
Finding the discrete equivalent of the continuous optimal estimator.
« This finds the discrete estimator gains for the continuous system antenna.
In[57]:= antenna StateSpace[{{0, 1}, {0, 5}},{{0, 0},{1, 0.1}}, {{1, 0}}]
Out[57]=
l
\
.
.
.
.
.
.
.
..
0 1 0 0
0 ÷5 1 0.1
1 0 0 0
\
!
.
.
.
.
.
.
.
..

In[58]:= DiscreteLQEstimatorGains[antenna, q , r, {1},Sampled Period [.1]]
Out[58]=
l
\
.
..
0.0194241
0.00190346
\
!
.
..
10.7 Kalman Estimator
Once the gain matrix L for the Kalman estimator has been found, the estimator can be con-
structed as a state-space object using KalmanEstimator. The block diagram of the device is
shown in Figures 10.4 and 10.5 for continuous and discrete cases, respectively. Notice that the
estimator outputs the estimates for both output and state variables of the system, and there-
fore can be used either as an estimator per se (for example, to form a controller, as described in
the following section) or as a filter (see the example later in this section).

10. Optimal Control Systems Design 185
Kalman Estimator
L
B
D D
A C

.
State
estimate
Output estimate

Deterministic inputs
Stochastic inputs
Sensor
outputs
System
u
B

y
w
x
.
x
C

B
w
A
D
w
x
^
y
^
x
^
Figure 10.4. Continuous Kalman estimator.

u
B

y
w
C C Delay Delay

Kalman Estimator System
Stochastic inputs
Deterministic inputs
A
B
B
w

L
A
D
w

D D D
C
x
_
State estimate
Output
estimate
Sensor
outputs
x
^
y
^
Figure 10.5. Discrete Kalman estimator.
186 Control System Professional
KalmanEstimatorstatespace, gains
design the Kalman estimator
for the system statespace assuming
that the estimator gain matrix is gains,
all outputs of the systemare sensor outputs,
and all inputs are stochastic inputs
KalmanEstimatorstatespace, gains, sensors
use the sensor outputs
specified by the vector sensors
KalmanEstimatorstatespace, gains, sensors, dinputs
use dinputs as additional deterministic inputs
Kalman estimator design.
« Consider a two-input, two-output system.
In[59]:= StateSpace[{{a}},{{b
1
, b
2
}}, {{c
1
},{c
2
}},{{d
11
, d
12
},{d
21
, d
22
}}]
Out[59]=
l
\
.
.
.
.
.
.
.
..
a b
1
b
2
c
1
d
11
d
12
c
2
d
21
d
22
\
!
.
.
.
.
.
.
.
..

« Let the estimator gain matrix be symbolic.
In[60]:= ll = {{l}};
« This finds the state-space representation for the Kalman estimator assuming that the
first output of the system is the sensor output and only the first of the inputs is
deterministic. The result agrees with the block diagram in Figure 10.4.
In[61]:= KalmanEstimator[%%, ll, {1}, {1}]
Out[61]=
l
\
.
.
.
.
.
.
.
..
a ÷ l c
1
b
1
÷ l d
11
l
c
1
d
11
0
1 0 0
\
!
.
.
.
.
.
.
.
..

10. Optimal Control Systems Design 187
« Now we create another system that has the same state-space matrices as the
previous one, but is in the discrete-time domain.
In[62]:= StateSpace[{{a}},{{b
1
, b
2
}},
{{c
1
},{c
2
}},{{d
11
, d
12
}, {d
21
, d
22
}},Sampled True]
Out[62]=
l
\
.
.
.
.
.
.
.
..
a b
1
b
2
c
1
d
11
d
12
c
2
d
21
d
22
\
!
.
.
.
.
.
.
.
..

« KalmanEstimator recognizes the discrete-time data object and, consequently,
computes the discrete-time estimator. The result satisfies the block diagram in
Figure 10.5.
In[63]:= KalmanEstimator[%, ll, {1}, {1}]
Out[63]=
l
\
.
.
.
.
.
.
.
.
.
a ÷ a l c
1
b
1
÷ a l d
11
a l
c
1
÷l c
1
2
d
11
÷ l c
1
d
11
l c
1
1 ÷ l c
1
÷l d
11
l
\
!
.
.
.
.
.
.
.
.
.

Because KalmanEstimator provides estimates for output signals, too, it can be used as a
Kalman filter (Figure 10.6). In the rest of this section, we design and try out the Kalman filter,
which extracts the useful signal from additive Gaussian noise that masks the angular measure-
ments for the antenna servomechanism (see Section 10.5).

Kalman
filter u y
Inputs Sensor outputs
Filtered outputs
Stochastic linear system
y
^
Figure 10.6. Kalman filter connected to a system.
« This is again our antenna system.
In[64]:= antenna
Out[64]=
l
\
.
.
.
.
.
.
.
..
0 1 0 0
0 ÷5 1 0.1
1 0 0 0
\
!
.
.
.
.
.
.
.
..

188 Control System Professional
« Recall that the variance of the process noise is q, and the variance of the
measurement noise is r. We now make the measurement noise far more intense and
will try to filter it out with the Kalman filter.
In[65]:= q {{1}};r {{100}};
« This is the estimator gain matrix.
In[66]:= llLQEstimatorGains[antenna, q , r, {1}]
Out[66]=
l
\
.
..
0.0019996
1.9992 ×10
÷6
\
!
.
..
Notice that because the process noise was made relatively small and the measurement noise
was relatively large compared with the noise in the example in Section 10.5, the relatively
smaller gains in the Kalman estimator were achieved. Consequently, the Kalman filter will
rely more on re-creating the output signal from the deterministic input using the known
system dynamics than on actually processing the noisy sensor output, which is the optimal
strategy for noisy measurements.
« This is a new object, antenna1, that incorporates another stochastic input through
which the measurement noise adds directly to the output. This is done by using the
parallel connection with no input connected. The expanded system has three inputs
and one output.
In[67]:= antenna1ParallelConnect[antenna, TransferFunction [1],{}, {1, 1}]
Out[67]=
l
\
.
.
.
.
.
.
.
..
0 1 0 0 0
0 ÷5 1 0.1 0
1 0 0 0 1
\
!
.
.
.
.
.
.
.
..

« This finds the Kalman estimator. Note that only the first output of the estimator
gives the filtered output of the system (the rest of the outputs give estimates for the
state vector; see Figure 10.4).
In[68]:= estimator KalmanEstimator[%, ll, {1},{1}]
Out[68]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷0.0019996 1. 0. 0.0019996
÷1.9992×10
÷6
÷5. 1. 1.9992×10
÷6
1. 0. 0. 0.
1. 0. 0. 0.
0. 1. 0. 0.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

10. Optimal Control Systems Design 189
« This picks the subsystem we are currently interested in. It contains all the inputs and
the first output of the estimator.
In[69]:= filter Subsystem[estimator, All, {1}]
Out[69]=
l
\
.
.
.
.
.
.
.
.
.
÷0.0019996 1. 0. 0.0019996
÷1.9992×10
÷6
÷5. 1. 1.9992×10
÷6
1. 0. 0. 0.
\
!
.
.
.
.
.
.
.
.
.

« Then, we connect the expanded system antenna1 and the Kalman filter according
to the diagram in Figure 10.7. First we connect the inputs. In the figure, the inputs
and outputs are numbered as they appear after this stage.
In[70]:= ParallelConnect[antenna1, filter, {1, 1}, {}]
Out[70]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0 0 0 0 0 0
0 ÷5 0 0 1 0.1 0 0
0 0 ÷0.0019996 1. 0. 0 0 0.0019996
0 0 ÷1.9992×10
÷6
÷5. 1. 0 0 1.9992×10
÷6
1 0 0 0 0 0 1 0
0 0 1. 0. 0. 0 0 0.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

antenna1
filter
zeros
u
w
v
y
y
1
2
3
1
1
4
2
^

Figure 10.7. Kalman filter connection example.
190 Control System Professional
« Finally, we close the feedback loop by connecting the first output of the preceding
system with its fourth input (which actually is the first input of the filter). In the
composite, the first output corresponds to the output of the system and the second
one to the filtered output.
In[71]:= composite FeedbackConnect[%, {1, 4, Positive}]
Out[71]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0. 1. 0. 0. 0. 0. 0. 0.
0. ÷5. 0. 0. 1. 0.1 0. 0.
0.0019996 0. ÷0.0019996 1. 0. 0. 0.0019996 0.0019996
1.9992 ×10
÷6
0. ÷1.9992 ×10
÷6
÷5. 1. 0. 1.9992×10
÷6
1.9992 ×10
÷6
1. 0. 0. 0. 0. 0. 1. 0.
0. 0. 1. 0. 0. 0. 0. 0.
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« To create the normally distributed noise, we will load this standard package.
In[72]:= Needs["Statistics`ContinuousDistributions`"]
« Let the length of our simulation sequences be n.
In[73]:= n 100;
« This creates the process noise vector that has the Gaussian distribution with zero
mean and standard deviation as required.
In[74]:= w Table]Random]NormalDistribution]0,
¸¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
q #1, 1] ||,{n}|;
« This is the measurement noise vector.
In[75]:= v Table]Random]NormalDistribution]0,
¸¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
r#1, 1] ||,{n}|;
« This is the sinusoidal input signal.
In[76]:= u Sin ]
2

20
Π Range[n]|//N ;
« We intend to supply no external signal to the summing input 4. This prepares a
dummy zero signal for this input.
In[77]:= zeros Table[0, {n }];
« This is where the simulation is performed. The result is a list of output vectors.
In[78]:= {y1, y2}OutputResponse[composite, {u, w , v, zeros}];
10. Optimal Control Systems Design 191
« We wish to see how the filter suppresses the measurement noise added to the
system output signal y1. The "original" signal (before the addition of noise v) is
denoted as y10.
In[79]:= y10 y1v;
« This plots the signals. We can see that the Kalman filter has quite successfully
restored the original signal from the noise.
In[80]:= MultipleListPlot[y10, y1, y2,
PlotRange {10, 10},PlotJoined True, SymbolShape None,
PlotLabel"Signals prior to and after the Kalman Filter",
PlotLegend {"Original", "Noise Added", "Filtered"}];
20 40 60 80 100
-10
-7.5
-5
-2.5
2.5
5
7.5
10
Signals prior to and after the Kalman Filter
Filtered
Noise Added
Original
10.8 Optimal Controller
The optimal controller design for the stochastic systems is based on the separation principle,
which states that if the optimal estimate for the state vector in the presence of noise is avail-
able, the optimal control law can be obtained as if there were no noise in the system (see, e.g.,
Gopal 1993, Section 12.5). The idea is implemented in the function Controller. Again, if
the process and measurement noise have a Gaussian distribution, Controller forms the
LQG controller.
To use this function you must first determine both the estimator gain matrix L (using, say, the
optimal procedure in LQEstimatorGains) and the regulator matrix K (with, say, LQRegula
torGains). Figures 10.8 and 10.9 show the structure of the controller for continuous- and
discrete-time systems, respectively. Note that only the part of the system related to the sensor
outputs is shown in these diagrams. For the discrete-time case, the controller is based on the
current estimator (as opposed to the predictor estimator (see Franklin et al. 1990, Section 6.3.4)),
and thus can be called the current controller.
192 Control System Professional
Controllerstatespace, egains, cgains
design the controller for the system statespace,
if the estimator gain matrix is egains
and the controller gain matrix is cgains,
assuming that all outputs of the systemare
sensor outputs and all inputs are control inputs
Controllerstatespace, egains, cgains, sensors
use the sensor outputs
specified by the vector sensors
Controllerstatespace, egains, cgains, sensors, dinputs
use dinputs as additional deterministic inputs,
assuming that the rest of the inputs are control inputs
Controllerstatespace, egains, cgains, sensors, dinputs, controls
use controls as the control inputs
Controller design.
Once constructed, the controller can be connected to the system according to the block dia-
gram shown in Figure 10.10. It is a state-space object whose inputs are "additional determinis-
tic inputs" and the sensor outputs of the system to be controlled. The outputs of the controller
are typically connected to control inputs of the system to close the negative feedback loop.
Inside the controller, the feedback loop for control inputs is already closed.

10. Optimal Control Systems Design 193
u
2
u
1
B
2
B
1

y
v w
x
.
x
C
1

Controller
System
Control inputs
Stochastic inputs
Additional deterministic inputs
L
B
2
D
12
D
12
B
1
D
11
D
1w
B
w

A
K
C
1
A
D
11
.
x
^
x
^
y
^
Figure 10.8. Continuous-time controller.

u
2
u
1
B
2
B
1

y
w
C
1
Delay Delay

Controller System
Control inputs
Stochastic inputs
Additional deterministic inputs
A
B
2
D
12
D
12
B
1
D
11
B
w

x
_
K
C
1
L
A
D
11
v
D
1w
x
^
Figure 10.9. Discrete-time controller based on a current estimator.

194 Control System Professional
u
y
Controller
Stochastic linear system
Control inputs Sensor outputs

x
^
Kalman
estimator
Control gains for
deterministic problem
K
Additional deterministic inputs
Figure 10.10. Controller design and connection.
« Consider a three-input, two-output system. Suppose that the first input is the control
input and the second and third are, respectively, deterministic and stochastic inputs.
Suppose further that we wish to use first output of the system to close the feedback
loop. Therefore, estimator and controller gain matrices should both be 1×1.
In[81]:= StateSpace[{{a}},{{b
1
, b
2
, b
3
}},
{{c
1
},{c
2
}},{{d
11
, d
12
, d
13
}, {d
21
, d
22
, d
23
}}]
Out[81]=
l
\
.
.
.
.
.
.
.
..
a b
1
b
2
b
3
c
1
d
11
d
12
d
13
c
2
d
21
d
22
d
23
\
!
.
.
.
.
.
.
.
..

« This sets the estimator and controller gains.
In[82]:= ll{{l}};kk {{k}};
« This designs the controller. We can see that the interconnections correspond to the
diagram in Figure 10.8.
In[83]:= Controller[%%, ll, kk, {1},{2}, {1}]
Out[83]=
l
\
.
..
a ÷ l c
1
÷ k (b
1
÷ l d
11
) b
2
÷ l d
12
l
k 0 0
\
!
.
..

10. Optimal Control Systems Design 195
« This is the discrete-time system with the same state-space matrices as in the
preceding example.
In[84]:= StateSpace[{{a}},{{b
1
, b
2
, b
3
}},{{c
1
},{c
2
}},
{{d
11
, d
12
, d
13
},{d
21
, d
22
, d
23
}}, Sampled True]
Out[84]=
l
\
.
.
.
.
.
.
.
..
a b
1
b
2
b
3
c
1
d
11
d
12
d
13
c
2
d
21
d
22
d
23
\
!
.
.
.
.
.
.
.
..

« The discrete-time controller is designed (cf. Figure 10.9).
In[85]:= Controller[%, ll, kk, {1}, {2},{1}]//Simplify
Out[85]=
l
\
.
.
.
.
.
.
.
.
.
.
.
..
(a ÷ k b
1
) (l c
1
÷ 1)

k l d
11
÷ 1
b
2
(k l d
11
÷ 1) + l (a ÷ k b
1
) d
12

k l d
11
÷ 1
k l b
1
÷ a l

k l d
11
÷ 1
k (l c
1
÷ 1)

k l d
11
÷ 1
k l d
12

k l d
11
÷ 1
k l

1 ÷k l d
11
\
!
.
.
.
.
.
.
.
.
.
.
.
..

« As yet another example, consider again the discrete-time model for the familiar
satellite control system (see Section 9.1). The sampling period is T.
In[86]:= satellite ToDiscreteTime[
StateSpace[{{0, 1}, {0, 0}},{{0},{1}},{{1, 0}}],Sampled Period [T]]
Out[86]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
1 T
T
2

2
0 1 T
1 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
T

« This sets the estimator and controller gains symbolically.
In[87]:= kk {{k
1
, k
2
}};ll{{l
1
},{l
2
}};
« This finds the controller.
In[88]:= Controller[satellite, ll, kk]
Out[88]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
÷
1

2
k
1
T
2
÷
1

2
(÷k
1
l
1
÷ k
2
l
2
) T
2
÷ l
2
T ÷ l
1
+ 1 T ÷
T
2
k
2

2
1

2
(÷k
1
l
1
÷ k
2
l
2
) T
2
+ l
2
T + l
1
÷T k
1
÷ l
2
÷ T (÷k
1
l
1
÷ k
2
l
2
) 1 ÷ T k
2
l
2
+ T (÷k
1
l
1
÷ k
2
l
2
)
÷l
1
k
1
+ k
1
÷ k
2
l
2
k
2
k
1
l
1
+ k
2
l
2
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
T

196 Control System Professional
11. Nonlinear Control Systems
Although systematic treatment of nonlinear control problems is currently beyond the scope of
Control System Professional, it does provide the linearization tools that may allow construction
of a linear model describing the behavior of the nonlinear system in the vicinity of some
operating point. For reasonably smooth nonlinearities, this approach may provide a useful
insight into the properties of the system, if not a suitable approximation.
11.1 Local Linearization of Nonlinear Systems
For a nonlinear state-space system of the form
(11.1)
x

= f(x, u)
y = h(x, u)
the locally linearized model is given by
(11.2)
x

= Ax

+Bu

y

= Cx

+Du

where x

, y

, and u

are small deviations,
x

(t) = x(t) ÷ x
n
(t)
y

(t) = y(t) ÷ y
n
(t)
u

(t) = u(t) ÷ u
n
(t)
from some known solution x
n
, y
n
, and u
n
to the original nonlinear Eq. (11.1), which we refer
to as the nominal solution. The coefficients A, B, C, and D are the Jacobian matrices evaluated
on the nominal solution:
(11.3)
A =
of

ox
.
.
.
|
.
.
.
n
, B =
of

ou
.
.
.
|
.
.
.
n
,
C =
oh

ox
.
.
.
|
.
.
.
n
, D =
oh

ou
.
.
.
|
.
.
.
n
where the Jacobian og l ot for a vector of functions
g(t) =
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
g
1
(t
1
, t
2
, …, t
n
)
g
2
(t
1
, t
2
, …, t
n
)

g
m
(t
1
, t
2
, …, t
n
)
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
in variables t
1
, t
2
, …, t
n
is
og

ot
=
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
o g
1

ot
1
o g
1

ot
2

o g
1

ot
n
o g
2

ot
1
o g
2

ot
2

o g
2

ot
n

o g
m

ot
1
o g
m

ot
2

o g
m

ot
n
.
.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
The function Linearize attempts to find such an approximation.
Linearize|f , {x, x
n
¦, {u, u
n
¦]
find a linearized state-space
approximation to the function f in the
state variable x and input variable u in
the vicinity of the operating point x
n
, u
n
Linearize|f , {{x
1
, x
1 n
¦, {x
2
, x
2 n
¦, …¦, {{u
1
, u
1 n
¦, {u
2
, u
2 n
¦, …¦]
find a linearized state-space approximation
to a vector of functions f in the state
variables x
i
and input variables u
i

in the vicinity of the operating point x
i n
, u
i n
Linearize|f , h, {{x
1
, x
1 n
¦, {x
2
, x
2 n
¦, …¦, {{u
1
, u
1 n
¦, {u
2
, u
2 n
¦, …¦]
find a linearized state-space
approximation to the vectors f and h
Local linearization.
Linearize accepts one or both of the function vectors f and h and returns, respectively, the
short (containing just matrices A and B) or full (containing all four matrices) state-space
description. Desired options for the resulting state-space object may be supplied to Linear
ize or inserted later.
As an example, we consider the linearization of the one-dimensional magnetic ball suspension
198 Control System Professional
system shown in Figure 11.1 (after Kuo (1991)). The system attempts to control the vertical
position l(t) of the steel ball through input voltage e(t). The state variables are chosen as
x
1
(t) = l(t) , x
2
(t) = l

(t) , and x
3
(t) = i(t). The only input variable is u(t) = e(t) .

Mg
i
_
l
l
Steel ball
Electromagnet

i
_
L R
e
2
Figure 11.1. Magnetic ball levitation system.
« Load the application.
In[1]:= ControlSystems`
« This is vector f corresponding to the nonlinear state equation x

= f(x, u) .
In[2]:= f ]x2, g
x3
2

M x1
,
R x3

L

e

L
¦
Out[2]= ¦x2, g
x3
2

M x1
,
e

L

R x3

L
¦
« The output vector contains a single variable. Linearize does not require additional
list wrapping in such a case.
In[3]:= h x1
Out[3]= x1
11. Nonlinear Control Systems 199
« This linearizes the state equation near some nominal point with the coordinates x10,
x20, x30, and e0. Since only one vector (f) is supplied, the resulting state-space
object contains only two matrices, A and B.
In[4]:= ballLinearize[f, {{x1, x10}, {x2, x20},{x3, x30}},{e, e0}]
Out[4]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0 0
x30
2

M x10
2
0 ÷
2 x30

M x10
0
0 0 ÷
R

L
1

L
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« This finds the full (four-matrix) state-space description.
In[5]:= Linearize[f, h , {{x1, x10}, {x2, x20}, {x3, x30}},{e, e0}]
Out[5]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0 0
x30
2

M x10
2
0 ÷
2 x30

M x10
0
0 0 ÷
R

L
1

L
1 0 0 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

« As soon as the linearized state-space model is found, we may apply other Control
System Professional functions. For this experiment, let the nominal point be the
equilibrium position of the ball at some coordinate l0.
In[6]:= ]x10 l0, x20 0, x30
¸---------------
M g l0 ¦
Out[6]= {x10l0, x200, x30

g l0 M ¦
« This is the state-space description near the equilibrium.
In[7]:= ballball/.%
Out[7]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 1 0 0
g

l0
0 ÷
2

g l0 M

l0 M
0
0 0 ÷
R

L
1

L
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

200 Control System Professional
« We now find the state feedback gain matrix that places the poles of the system into
some position p1, p2, p3.
In[8]:= StateFeedbackGains[ball, {p1, p2, p3}]
Out[8]= |
L M (g p1+l0 p2 p3 p1+g p2+g p3)

2

g l0 M
÷
L M (g+l0 p1 p2+l0 p1 p3+l0 p2 p3)

2

g l0 M
÷L p1 ÷ L p2 ÷ L p3 ÷ R
]
« This is the system after the feedback loop is closed.
In[9]:= StateFeedbackConnect[ball, % ]//Simplify
Out[9]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 1 0 0
g

l0
0 ÷
2 g

g l0 M
0
÷
M (l0 p1 p2 p3 + g (p1 + p2 + p3))

2

g l0 M
M (g + l0 (p2 p3 + p1 (p2 + p3)))

2

g l0 M
p1 + p2 + p3
1

L
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This is the closed-loop system for a particular set of numerical parameters after
dropping insignificant numerical errors.
In[10]:= % /.]l0 .1, M
10

10
3
, L 10., R 100, g 9.8, p11, p21, p31¦//
Chop
Out[10]=
l
\
.
.
.
.
.
.
.
.
0 1 0 0
98. 0 ÷197.99 0
1.49503 0.515178 ÷3 0.1
\
!
.
.
.
.
.
.
.
.

« Let us investigate the dynamics of the state variables. One way to do this is to insert
an identity matrix as matrix C into the previous (short) state-space object to feed all
states through to the outputs. The matrix can be directly inserted at the last position
in the state-space object.
In[11]:= % //StandardForm
Out[11]//StandardForm=
StateSpace|
{{0, 1, 0¦, {98., 0, 197.99¦,{1.49503, 0.515178, 3¦¦,{{0¦,{0¦, {0.1¦¦]
11. Nonlinear Control Systems 201
In[12]:= Insert[%, IdentityMatrix[3],1]
Out[12]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
0 1 0 0
98. 0 ÷197.99 0
1.49503 0.515178 ÷3 0.1
1 0 0 0
0 1 0 0
0 0 1 0
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..

« This gives the state response to a step function, that is, it shows how the state
variables change if the input voltage applied to the electromagnet suddenly changes
by some value (÷10 mV in the plot). The coordinate and velocity of the ball are
shown as a solid and dashed line, respectively, and the current in the magnet is
shown as the dashed-dotted line.
In[13]:= SimulationPlot[%, .01, {t, 5},Sampled Period [.1], PlotStyle
{Thickness[.001],Dashing[{.04, .02}],Dashing[{.05, .01, .01, .01}]},
PlotLabel"Step Response"];
1 2 3 4 5
0.02
0.04
0.06
0.08
0.1
Step Response
11.2 Rational Polynomial Approximations
Another approach to obtaining the linear time-invariant (LTI) approximation to a nonlinear
system involves the approximation of a nonlinear transfer function by a polynomial ratio.
Corresponding functions are provided with standard Mathematica packages and represent
Padé, economized rational, general rational, and minimax approximations. The first two are in
the context Calculus`Pade`, and the others are in NumericalMath`Approxima-
tions`. Note that the built-in function InterpolatingPolynomial may be useful for
some approximations, too.
202 Control System Professional
« Here is the transfer function describing some ideal heat exchanger.
In[14]:= h
1

(10 s 1) (50 s 1)
Out[14]=
1

110 s) 150 s)
« The temperature sensor for the exchanger is located so that its reading is delayed a
few seconds, which introduces the delay term.
In[15]:= delay Exp[a s]
Out[15]=
a s
« This loads the necessary package.
In[16]:= Needs["Calculus`Pade`"]
« We use the Padé approximation to represent the delay as a polynomial ratio of the
order 2l 2.
In[17]:= Pade[delay, {s, 0, 2, 2}]
Out[17]=
1
a s

2

a
2
s
2

12

1
a s

2

a
2
s
2

12
« This generates an object suitable for analysis with Control System Professional.
In[18]:= TransferFunction[s, % h ]
Out[18]=
l
\
.
.
.
.
.
.
a
2

2

12
÷
a

2
+ 1

(10 +1) (50 +1) |
a
2

2

12
+
a

2
+ 1]
\
!
.
.
.
.
.
.

11. Nonlinear Control Systems 203
12. Miscellaneous
This chapter covers several mathematical and utility functions. The functions SchurDecompo
sitionOrdered, LyapunovSolve, Rank, and DisplayTogetherGraphicsArray are
included with Control System Professional but are useful well outside its scope. Other utility
functions provide the means to determine the structure of systems (CountInputs, CountOut
puts, and CountStates) and to check on the structural consistency (ConsistentQ). Also
described is the function to create systems with random elements (RandomSystem).
12.1 Ordered Schur Decomposition
The function SchurDecompositionOrdered is an extension to the built-in function Schur
Decomposition; it shares the same syntax and accepts the same options, with certain addi-
tions and exceptions.
Similar to the Schur decomposition of matrix m, the ordered Schur decomposition finds a
unitary matrix q and triangular matrix t, such that q t q
H
gives m, where H denotes Hermi-
tian transpose, with matrix t being such that the eigenvalues of m appear on the main diago-
nal of t . In addition, the ordered Schur decomposition makes the eigenvalues appear on the
diagonal in the prescribed order. The guidelines for selecting the ordering function in Schur
DecompositionOrdered are the same as those for the built-in function Sort.
SchurDecompositionOrderedm
find the Schur decomposition in
which eigenvalues of matrix m appear
on the diagonal in canonical order
SchurDecompositionOrderedm, pred
use the function pred to determine
whether pairs of eigenvalues are in order
Ordered Schur decomposition.
« Load the application.
In[1]:= ControlSystems`
« Here is a 3×3 matrix with random elements.
In[2]:= m Table[Random[],{3},{3}]
Out[2]=
l
\
.
.
.
.
.
.
.
.
0.786599 0.848339 0.612827
0.398038 0.238012 0.517232
0.238887 0.194399 0.893263
\
!
.
.
.
.
.
.
.
.
« This finds its Schur decomposition using the built-in Mathematica function.
In[3]:= {q, t}SchurDecomposition[m];
« The eigenvalues on the main diagonal do not follow any particular order.
In[4]:= t
Out[4]=
l
\
.
.
.
.
.
.
.
.
1.5415 0.425219 0.422634
0. ÷0.12446 0.295623
0. 0. 0.500837
\
!
.
.
.
.
.
.
.
.
« This finds the ordered Schur decomposition of the same matrix.
In[5]:= {qo, to}SchurDecompositionOrdered[m];
« Now the diagonal elements appear in canonical order.
In[6]:= to //Chop
Out[6]=
l
\
.
.
.
.
.
.
.
.
÷0.12446 0.343931 ÷0.309219
0 0.500837 ÷0.482617
0 0 1.5415
\
!
.
.
.
.
.
.
.
.
« The decomposition is still valid.
In[7]:= qo.to.Conjugate[Transpose[qo]] m //Chop
Out[7]=
l
\
.
.
.
.
.
.
.
.
0 0 0
0 0 0
0 0 0
\
!
.
.
.
.
.
.
.
.
« This is the ordered Schur decomposition in which the eigenvalues residing in the
right half of the complex plane go first.
In[8]:= {qo, to}SchurDecompositionOrdered[m, Re[#]0 &];
12. Miscellaneous 205
« This is the corresponding matrix t.
In[9]:= to //Chop
Out[9]=
l
\
.
.
.
.
.
.
.
.
1.5415 0.563829 0.203783
0 0.500837 ÷0.295623
0 0 ÷0.12446
\
!
.
.
.
.
.
.
.
.
« This sorts the eigenvalues in descending order of their real parts. For this particular
matrix m, the result is the same as the previous one.
In[10]:= {qo, to}SchurDecompositionOrdered[m, Re[#2]Re[#1]&];
In[11]:= to //Chop
Out[11]=
l
\
.
.
.
.
.
.
.
.
1.5415 0.563829 0.203783
0 0.500837 ÷0.295623
0 0 ÷0.12446
\
!
.
.
.
.
.
.
.
.
The function SchurDecompositionOrdered accepts the option Pivoting, just as SchurDe
composition does. However, the option RealBlockForm in SchurDecompositionOr
dered accepts only the default value False.
option name default value
RealBlockForm False whether complex eigenvalues
of the real input matrix should
be returned as real blocks
Option value specific to SchurDecompositionOrdered.
12.2 Lyapunov Equations
The function LyapunovSolve attempts to find the solution X to the Lyapunov equation
(12.1) XA+ BX = C
a special class of linear matrix equations that occurs in many branches of control theory, such
as stability analysis, optimal control, and response of a linear system to white noise (see, e.g.,
Brogan (1991)). Typically, A and B are square matrices with dimensions m×m and n×n. To be
consistent, matrices C and X should have dimensions n×m. An important particular case of
the Lyapunov equation,
206 Control System Professional
(12.2) XA A
T
X C
involves only matrices A and C, in which case all matrices are square with the same dimen-
sions.
For discrete-time systems, the discrete Lyapunov equation
(12.3) X = AXA
T
+ C
arises. The solution can be found with DiscreteLyapunovSolve.
LyapunovSolvea, c solve the matrix Lyapunov equation x.a +
Transpose[a].x = c for matrix x
LyapunovSolvea, b, c solve the Lyapunov equation x.a + b.x = c
DiscreteLyapunovSolvea, c
solve the discrete Lyapunov equation x =
a.x.Transpose[a] + c
Functions for solving Lyapunov equations.
« Here is matrix a.
In[12]:= a
1
\
.
.
.
.
1

2
1
1 1
\
!
.
.
.
.
;
« Here is matrix c.
In[13]:= c |
q1 0
0 q2
j;
« This solves the discrete Lyapunov equation and simplifies the result.
In[14]:= DiscreteLyapunovSolve[a, c]//Simplify
Out[14]=
l
\
.
.
.
.
..
2 q1 +
3 q2

2
÷q1 ÷
5 q2

4
÷q1 ÷
5 q2

4
3 q1

2
+
19 q2

8
\
!
.
.
.
.
..
« We may verify that the solution indeed satisfies the discrete Lyapunov equation.
In[15]:= a.% .Transpose[a]c % //Simplify
Out[15]= True
12. Miscellaneous 207
Consider now an example of design of a feedback controller that uses Lyapunov's method to
approximate the minimum-time response system for a given system (see Brogan (1991),
Section 10.8). The method is applicable to systems stable at least in Lyapunov's sense. To find
the Lyapunov function V (x) = x
T
Px, we will first solve the Lyapunov equation,
A
T
P +PA = ÷Q, where Q is the identity matrix with proper dimensions. Knowing V (x), one
can find V

(x) = x
T
Px + x
T
Px

= ÷x
T
x +2 u
T
B
T
Px. To make V

(x) as negative as possible, and
thereby obtain the fastest response, we will compute the input signal as
u =
÷B
T
Px(t)

|B
T
Px(t)|
« These are matrices A and B for a state-space system, which we assume to be
continuous-time.
In[16]:= a |
3 2
1 1
j;
In[17]:= b |
1 1
0 1
j;
« This solves the Lyapunov equation. The matrix Q is assumed to be the identity
matrix 2×2.
In[18]:= p LyapunovSolve[a, IdentityMatrix[2]]
Out[18]=
l
\
.
.
.
.
.
.
.
.
.
.
.
7

40
÷
1

40
÷
1

40
9

20
\
!
.
.
.
.
.
.
.
.
.
.
.
« This computes the control law.
In[19]:= bpx Transpose[b].p.{x1, x2};
In[20]:= u
bpx

¸-------------------
bpx.bpx
//Simplify
Out[20]=
7 x1x2


113 x1
2
318 x1 x2362 x2
2
,
8 x119 x2


113 x1
2
318 x1 x2362 x2
2

Despite the fact that our system is linear, the minimum time response control is not.
208 Control System Professional
« This plot shows the first control signal as a function of the state variables.
In[21]:= Plot3D[u#1],{x1, 2, 2},{x2, 2, 2},PlotPoints 40, Boxed False,
AxesLabel{"x
1
", "x
2
", "u
1
"},ViewPoint {3.076, 0.000, 1.080},
PlotLabel"Minimum time response control"];
Minimum time response control
-2
-1
0
1
2
x
1
-2 -1 0 1 2
x
2
-1
-0.5
0
0.5
1
u
1
-
-
« This is the second component of the control signal.
In[22]:= Plot3D[u#2],{x1, 2, 2},{x2, 2, 2},PlotPoints 40, Boxed False,
AxesLabel{"x
1
", "x
2
", "u
2
"},ViewPoint {2.141, 2.209, 1.080},
PlotLabel"Minimum time response control"];
Minimum time response control
-2
-1
0
1
2
x
1
-2
-1
0
1
2
x
2
-1
-0.5
0
0.5
1
u
2
-2
-1
0
1
x
1
LyapunovSolve and DiscreteLyapunovSolve use the direct method (via the built-in
function Solve) or the eigenvalue decomposition method. The method is set using the option
SolveMethod, which correspondingly accepts the values DirectSolve and Eigendecompo
sition. If this option's value is Automatic, these methods are tried in turn until one suc-
ceeds or all are tried.
12. Miscellaneous 209
option name default value
SolveMethod Automatic method to solve the equation
Option specific to Lyapunov equation solvers.
12.3 Rank of Matrix
Rankm find the rank of matrix m
Determining the rank of matrix.
For inexact numerical matrices m, Rank counts the nonzero singular values, as obtained
through the built-in function SingularValues. For exact and symbolic matrices, the differ-
ence between the number of columns of m and the length of the null space, as obtained
through NullSpace, is used to determine the rank. Correspondingly, Rank accepts the
options pertinent to SingularValues or NullSpace and passes them along to these func-
tions.
12.4 Part Count and Consistency Check
CountInputs, CountOutputs, and CountStates can be used to determine the number of
inputs, outputs, or states of the system. They are useful in cases when systems change as the
result of structural operations (say, merging or connecting with others). Alternatively, for a
system entered manually, you may wish to check that no mistakes have been made and that
the matrices you entered can really represent a system. The following functions help with
these chores.
CountInputssystem find the number of inputs of system
CountOutputssystem find the number of outputs of system
CountStatessystem find the number of states of system
ConsistentQsystem determine if the elements of system
have dimensions consistent with each other
Checking on the system parameters.
210 Control System Professional
12.5 Displaying Graphics Array Objects Together
The function DisplayTogetherGraphicsArray displays multiple GraphicsArray
objects as one such object and is similar to the function DisplayTogether from the standard
package Graphics`Graphics`. Input arguments can be either GraphicsArray objects or
any other Mathematica commands that result in such objects (e.g., BodePlot). The function
also accepts options pertinent to GraphicsArray. All input GraphicsArray objects must
have the same dimensions.
DisplayTogetherGraphicsArrayarray
1
, array
2
, …, opts
combine the graphics arrays
array
i
in a GraphicsArray object
Displaying GraphicsArray objects together.
12.6 Systems with Random Elements
Random yet stable (at least in the sense of Lyapunov) systems can be generated using the
function RandomSystem in conjunction with the desired system type—StateSpace, Trans
ferFunction, or ZeroPoleGain. RandomSystem[args] is used in place of the actual
system contents. Systems with random parameters could be useful for numerical experiments,
checking the design concepts, etc.
typeRandomSystemargs create a random system of type type
typevar, RandomSystemargs
use the variable var in the body of
the random transfer function system
RandomSystem randomfirst-order single-input,
single-output (SISO) system
RandomSystemn n
th
-order SISO system
RandomSystemn, i, o n
th
-order system with i inputs and o outputs
Generating a system with random elements.
12. Miscellaneous 211
« This creates a random first-order transfer function in variable s.
In[23]:= TransferFunction[s, RandomSystem[]]
Out[23]= |
0.400733

+ 2.40162
|

« This is a second-order single-input, two-output state-space system.
In[24]:= StateSpace[RandomSystem[2, 1, 2]]
Out[24]=
l
\
.
.
.
.
.
.
.
.
.
.
.
.
.
.
÷1.93888 ÷0.00387746 1.82357
0.344929 ÷2.59669 ÷3.01261
3.76923 0.228547 ÷3.05476
÷3.05821 3.55686 4.87885
\
!
.
.
.
.
.
.
.
.
.
.
.
.
.
.

RandomSystem works by creating random matrices of zeros, poles, and gains (for transfer
function systems) or by creating a block diagonal matrix with suitable eigenvalues and then
performing linear transformation on that matrix to form matrix A (for state-space systems).
Almost all of the following options listed below (with the exception of Exact) accept either
one value (to be applied to both numerators and denominators or to the eigenvalues) or a list
of two values to create the numerators and denominators according to different rules.
212 Control System Professional
option name default value
Exact False whether to generate an
infinite-precision system
ComplexRootProbability 0.33,0.5 probability of complex roots
RealRootProbability 0.33,Automatic probability of
real-valued roots
SpecialPoints Automatic special point locations
SpecialPointProbability 0.05,0.1 probability of special points
MultipleRootProbability 0.1,0.2 probability of multiple roots
ImaginaryToRealRatio Automatic mean imaginarylreal
ratio for complex roots
MappingFunction Automatic function(s) to map
over randomroots
ExactConversionFunction Automatic additional function(s) to
map over roots to create
an infinite-precision system
Options to RandomSystem.
If RealRootProbability is set to Automatic, then all available root positions that remain
after the creation of complex roots and special points will be filled with real roots. This is
usually what you want in the denominator of a transfer function to ensure that the system
has the required order, but this is not necessarily the case for the numerators. Setting Real
RootProbability ÷ Automatic will generate a strictly proper transfer function.
The option SpecialPoints allows the location(s) of some special roots to be specified (to
create an integrator, for example). The probability of such points in numerators and denomina-
tors is set by the option SpecialPointProbability.
12. Miscellaneous 213
References
Brogan, William L. Modern Control Theory, 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1991.
Dorf, Richard C. Modern Control Systems, Reading, MA: Addison-Wesley, 1992.
Franklin, Gene F., J. David Powell, and Abbas Emami-Naeini. Feedback Control of Dynamic
Systems, 2nd ed. Reading, MA: Addison-Wesley, 1991.
Franklin, Gene F, J. David Powell, and Michael L. Workman. Digital Control of Dynamic Systems,
2nd ed. Reading, MA: Addison-Wesley, 1990.
Gopal, M. Modern Control System Theory, 2nd ed. New York, NY: John Wiley & Sons, 1993.
Kailath, Thomas. Linear Systems, Englewood Cliffs, NJ: Prentice Hall, 1980.
Kautsky, J., N. K. Nichols, and P. Van Dooren. Robust pole assignment in linear state feed-
back. International Journal of Control, 41, no. 5, 1985, pp. 1129–1155.
Kuo, Benjamin C. Automatic Control Systems, 6th ed. Englewood Cliffs, NJ: Prentice Hall, 1991.
Laub, Alan J. A Schur method for solving algebraic Riccati equations. IEEE Transactions on
Automatic Control, AC-24, Dec. 1979, pp. 913–921.
Levine, William S. The Control Handbook, Boca Raton, FL: CRC Press, 1996.
Moore, Bruce C. Principal component analysis in linear systems: Controllability, observability,
and model reduction. IEEE Transactions on Automatic Control, AC-26, Feb. 1981, pp. 17–32.
Ogata, Katsuhiko. Modern Control Engineering, Englewood Cliffs, NJ: Prentice Hall, 1990.
Index
1 f noise, 182

Ackermann, 152, 155
robustness of solutions, 159
Ackermann's formula, 17
Ackermann, 154
Active wrappers, 3, 32
Actuators, 25, 175
A/D conversion, 44
Admissible controls, 165
Admissible error, AdmissibleError, 152
Admissible trajectories, 165
AdmissibleError, 152
Ailerons, 156
effectiveness of, 175
of a missile, 175
roll-time constant of, 175
Aircraft example, 156
Algebraic Riccati equation, 172
discrete, 173
All, in DeleteSubsystem, 115
in Subsystem, 115
Amplifiers, 112
Analog simulation, 65
Analog systems, 33
Analog-to-digital converters, 44
Analytic, 92
Angle of attack, of submarine, 72
Antenna example, 182
discrete estimator for, 185
Kalman filter for, 188
ARE, 172
Attitude control, of satellite, 151
of spacecraft, 95
Azimuth control, of antenna, 182
Backward rectangular rule, Backward
RectangularRule, 48
BackwardRectangularRule, 48
Bilinear transformation,
BilinearTransform, 48
BilinearTransform, 48, 55
Block diagrams, 98
Bode plot, BodePlot, 85
BodePlot, 50, 85
collecting several plots in one, 144
displaying several objects together, 211
gain and phase margins in, 89
phase unwrapping in, 87
Bridged-T network, 66

Calculus`Pade`, 202
Canonical forms, controllable, 140
controllable companion, 36
Jordan, 138
Kalman controllable, 136
Kalman observable, 136
modal, 138
observable companion, 36
Cascade compensation, 119
Cascade connection, SeriesConnect, 98
Center of gravity, of pendulum, 12
Center of mass, of satellite, 151
Characteristic equation, 80
Characteristic polynomial, 154, 179
Chemical mixture, control of, 167
Classical control, 80
Closed loop, construction of, 105
Companion realizations, controllable, 36
observable, 36
Compensator, 95
Complex exponentials, reducing to
trigonometric functions, 62
Complex numbers, notation for, 9
Complex plane, s- vs. z-plane, 4
ComplexRootProbability, 213
ComplexVariables, 38, 131
Composite systems, 98
Concentration control, 167
Condition number, of controllability matrix,
155
Consistency check, ConsistentQ, 210
ConsistentQ, 38, 210
Continuous-time systems, 33, 39, 54
Continuous-time to discrete-time conversion,
4, 44
Continuous-time vs. discrete-time systems,
39
ContinuousTimeQ, 40
Control design, optimal, 165
using pole assignment, 150
Control effort, 165
Control Format palette, 7
Control input, selection in single-input
algorithms, ControlInput, 155
Control inputs, for time-domain response,
ControlInputs, 60
Control matrix, 34
Control objects, 3
constructing from ODEs, 16
continuous-time vs. discrete-time, 4, 33, 39
converting between, 4, 31–32, 35
default domain of, 39
domain identification of, 4, 32, 39, 45
traditional notations for, 41
ControlInput, 155
ControlInputs, 60
Controllability, 122
216 Control System Professional
Controllability Gramian, 140, 142
ControllabilityGramian, 124, 128
Controllability matrix, 136, 154
ControllabilityMatrix, 124, 126
Controllability test, Controllable, 122
ControllabilityGramian, 129
ControllabilityMatrix, 126
ControllabilityTest, 124
Controllable, 122, 134
Controllable canonical form, 140
Controllable companion realization,
ControllableCompanion, 36
Controllable states, 136
Controllable subspace, size of,
ControllableSpaceSize, 125
ControllableCompanion, 36
ControllableSpaceSize, 125
ControllableSubsystem, 132–133
vs. KalmanControllableForm, 138
Controlled signal, 81
Controller, current, 192
design of, 119, 150
in output feedback, 105
in state feedback, 114
optimal, 192
Controller, 192
Controller design, 15
Correlation, between process and
measurement noises, 181
Cost function, 165, 171
equivalent, 178, 184
quadratic, 166
CountInputs, 210
CountOutputs, 210
CountStates, 210
Covariance matrix, 181
CriticalFrequency, 48
Critically damped, 74
Cross-covariance matrix, 181
Crossover frequency, 91
Current estimator, 192
D/A conversion, 54
Damping ratio, 74, 82, 88, 179
of dominant poles, 81
of third-order system, 76
DARE, 173
DecompositionMethod, in
InternallyBalancedForm, 141
in KalmanControllableForm, 137
in KalmanObservableForm, 137
DefaultInputPort, 113
Deflection, of ailerons, 175
Deflection rate, of ailerons, 175
Delay, 53
in heat exchanger, 203
Padé approximation of, 203
DeleteSubsystem, 98, 115–116
Depth control, of submarine, 72
Derivative controller, 119
Determinant expansion formula,
DeterminantExpansion, 33
DeterminantExpansion, 33
Deterministic, inputs, 182, 185, 187, 193
state reconstruction, 181
Index 217
Differential equations, StateSpace model
from, 15
Digital systems, 34
Digital-to-analog converters, 54
Dirac delta function, approximating in
analog simulation, 79
DiracDelta, in impulse response
simulation, 70, 78
DiracDelta, in impulse response
simulation, 70, 78
Direct transmission matrix, 34
DirectSolve, as value of SolveMethod,
209
Discrete emulation of continuous design, 178,
184
Discrete simulation, 65
Discrete-time systems, 34, 39, 44
Discrete-time to continuous-time conversion,
4
DiscreteDelta, in impulse response
simulation, 169
DiscreteLQRegulatorGains, 178, 184
DiscreteLyapunovSolve, 129, 207
DiscreteRiccatiSolve, 171, 173
DiscreteTimeQ, 40
Displacement, of pendulum, 12
DisplayTogether, vs.
DisplayTogetherGraphicsArray,
211
DisplayTogetherGraphicsArray, 52,
144, 211
Domain identification of control objects, 4, 39
Dominant subsystem,
DominantSubsystem, 142
DominantSubsystem, 132–133, 142
Double integrator, 80, 151
PID controller for, 120
Dryden Flight Research Center, 156
Dual systems, DualSystem, 130
DualSystem, 130, 135
Dynamic systems, 27

Economized rational approximation, 202
Effectiveness, of ailerons, 175
Eigendecomposition, as value of
SolveMethod, 209
Eigensystem, as value of
DecompositionMethod, 141
Eigenvalues, and poles, 17, 150, 179
in SchurDecomposition, 204
multiple, 139, 177
of closed-loop system, 17, 150
of Hamiltonian matrix, 177
specifying order of, 204
Eigenvectors, 147
Electrical Engineering Examples, plotting
routines from, 80
Emulation, of continuous design, 178, 184
EquationForm, 6, 43
Equilibrium, of magnetic ball, 200
of pendulum, 15
Equivalent cost, 178, 184
Error signal, 81
Estimator, 161
current, 192
discrete by emulation of continuous, 184
gain matrix for, EstimatorGains, 161
Kalman, 182, 185
optimal, 181
predictor, 192
EstimatorGains, 162
Evolution matrix, 34
Exact, 213
ExactConversionFunction, 213
Expanding transfer functions,
ExpandRational, 28
218 Control System Professional
ExpandRational, 28

F-8 aircraft example, 156
Factoring transfer functions,
FactorRational, 28
FactorRational, 28, 145
pole-zero cancellation in, 145
Feed-forward path, 112
Feedback, negative, 105
path, 112
positive, 105
Feedback connection, FeedbackConnect,
105
Feedback controller design, 15, 150, 201
using Lyapunov's method, 208
Feedback loop, forming with
FeedbackConnect, 105
FeedbackConnect, 83, 98, 105, 191
Feedthrough matrix, 34
First-order hold, FirstOrderHold, 48
First-order lag, 94
FirstOrderHold, 48, 56
Flicker noise, 182
Flight control example, 156
Forced response, 57
Forward rectangular rule,
ForwardRectangularRule, 48
ForwardRectangularRule, 48
Frames, 84
Free response, 57
Frequency prewarping,
CriticalFrequency, 48, 55
Frequency response, 85, 93, 95–96
FullRankControllabilityMatrix, 125
FullRankObservabilityMatrix, 125
Gain band, 96
Gain margin, GainPhaseMargins, 89
GainPhaseMargins, 89
Gaussian distribution, 182, 192
NormalDistribution, 191
Gaussian noise, 188
General rational approximation, 202
GenericConnect, 98, 110
Gramian, 124, 128
GraphicsArray, displaying several objects
together, 211
Hamiltonian matrix, 177
Heat exchanger example, 203
Hold equivalence methods, 48
Homogeneous response, 57
Horizon, infinite, 166
Imaginary unit, notation for, 9
ImaginaryToRealRatio, 213
Impulse response, 74
simulating with DiracDelta, 70
simulating with DiscreteDelta, 169
simulation of, 70
Infinite-horizon problem, 166, 182
Infinite-time-to-go problem, 166
Inflow control, 167
Inflow rate, 167
Initial conditions, InitialConditions, 60
InitialConditions, 19, 60
Index 219
Input matrix, 34
Inputs, count of, 210
deterministic, 182, 185, 193
stochastic, 182, 187
InputVariables, 43
Installation, 1
Integral controller, 119
Integrator, 4, 80, 101
Interconnections, 98
arbitrary, 110
elementary, 98
Internally balanced realizations,
InternallyBalancedForm, 140
InternallyBalancedForm, 140
InterpolatingFunction, in time-domain
simulations, 65
InterpolatingPolynomial, 202
Interpolation, as value of
GainPhaseMargins, 92
Inventory control, 60
Inventory level, control of, 60
Inverted pendulum example, 11, 163
controller for, 15
optimal controller for, 17
InvertedTransformMatrix, 147
Irreducible realization,
MinimalRealization, 133

Jacobian matrix, 197
Jordan canonical form,
JordanCanonicalForm, 138
JordanCanonicalForm, 138

Kalman, 135
Kalman canonical forms, 136
Kalman decomposition, Kalman, 135
Kalman estimator, 182
KalmanEstimator, 185
Kalman filter, 182, 185
example of, 188
Kalman gain matrix, 184
KalmanControllableForm, 132, 136, 149
KalmanEstimator, 182, 185
KalmanObservableForm, 136
Kautsky-Nichols-Van Dooren algorithm,
KNVD, 158
KNVD, 152, 158
Lag system, 49, 52, 94
Laplace transform, LaplaceTransform, 44
LaplaceTransform, 44
Levitation system example, 199
Linear quadratic Gaussian problem, 182
Linear quadratic regulator, 17
LQRegulatorGains, 166
Linearization, 16, 197
Linearize, 197
Linearize, 16, 198
LinearSpacing, 87
LogSpacing, 87
LQ regulator, 17
LQEstimatorGains, 182, 184, 192
LQG controller, Controller, 192
LQG problem, LQEstimatorGains, 182
LQOutputRegulatorGains, 171
LQRegulatorGains, 17–18, 167, 174, 177,
179, 182, 192
220 Control System Professional
vs. DiscreteLQRegulatorGains, 180
vs. LQOutputRegulatorGains, 172
LTI systems, 27, 202
Lyapunov equations, 206
continuous, 206
discrete, 207
for controllability and observability
Gramians, 129
Lyapunov function, 208
LyapunovSolve, 129, 206

Magnetic ball suspension example, 199
Magnitude response, 85
MappingFunction, 213
Margins, 87
MarginStyle, 87, 89
MaxIterations, in
StateFeedbackGains, 159
Measurement noise, 181, 191–192
MergeSystems, 98, 117
Method, in Rank, 210
in StateFeedbackGains, 151
in ToContinuousTime, 54
in ToDiscreteTime, 48
MIMO systems, 28
frequency response of, 96
Minimal realization,
MinimalRealization, 133
MinimalRealization, 38, 132–133
Minimax approximation, 202
Minimum-time response, 208
Missile example, 175
Mixing tank example, 167
controllability Gramian of, 129
LQ regulator for, 168
output regulator for, 172
singular value plot for, 97
Modal realization, JordanCanonicalForm,
138
Model reduction, DominantSubsystem, 142
MinimalRealization, 133
PoleZeroCancel, 144
Modern control, 27
Moment of inertia, of pendulum, 12
Monic polynomial, 121
Most likelihood, 182
Multiple-input, multiple-output (MIMO)
systems, 28
minimal realization of, 133
MultipleRootProbability, 213
Multivariable control, 28
NASA, Dryden Flight Research Center, 156
Natural frequency, 74, 82, 179
of third-order system, 76
Natural response, 57
NDSolve, in time-domain simulations, 65
Negative, in FeedbackConnect, 107
in GenericConnect, 111
Negative feedback, 105
Nichols plot, NicholsPlot, 95
NicholsPlot, 95
Noise, 1 f , 182
additive, 188
covariance matrix of, 181, 184
Index 221
Gaussian, 182
high-frequency cutoff of, 182
measurement, 181
process, 181
spectrum of, 181
stationary, 182
white, 181
Nominal solution, 197
None, in DeleteSubsystem, 115
in Subsystem, 115
Nonlinear state-space models, 15
linearization of, 16, 197
simulation of, 21
Nonlinear systems, 197
local linearization of, 197
rational polynomial approximations for, 202
Nonpolynomial transfer functions, 94
NonSingularControllabilityGramian,
125
NonSingularObservabilityGramian,
125
Normal distribution, 182
NormalDistribution, in simulations, 191
NullSpace, as value of
DecompositionMethod, 137
in Rank, 210
Numerical errors, control of, 152
Numerical integration methods, 48
NumericalMath`Approximations`, 202
Nyquist frequency, 51
Nyquist plot, NyquistPlot, 93
NyquistPlot, 93

Observability, 122
Observability Gramian, 140, 142
ObservabilityGramian, 124, 128
Observability matrix, 136
ObservabilityMatrix, 124, 126
Observability test, Observable, 122
ObservabilityGramian, 129
ObservabilityMatrix, 126
ObservabilityTest, 124
Observable, 122, 134
Observable canonical form, 36
Observable companion realization,
ObservableCompanion, 36
Observable states, 136
Observable subspace, size of,
ObservableSpaceSize, 125
ObservableCompanion, 36
ObservableSpaceSize, 125
ObservableSubsystem, 133
vs. KalmanObservableForm, 138
Observation matrix, 34
Observer, 161
Kalman, 182
optimal, 181
Optimal control, 165
Optimal controller, 192
Optimal estimation, 181
Ordered Schur decomposition,
SchurDecompositionOrdered, 204
Orthogonal basis, 137, 159
reconstruction of, 130
Orthogonal complement, 130, 137, 159
OrthogonalTransformMatrix, 148–149
222 Control System Professional
Outflow control, 167
Outflow rate, 167
Output controllability matrix,
OutputControllabilityMatrix, 126
Output matrix, 34
Output regulator, optimal,
LQOutputRegulatorGains, 171
Output response, OutputResponse, 57
OutputControllabilityMatrix, 126
OutputControllable, 122
OutputResponse, 19, 57–58
dummy variable in, 60
initial conditions in, 19, 60
number of input signals in, 60, 65
polling inputs in, 60
simulations with, 65
symbolic solution, 57
Outputs, count of, 210
OutputVariables, 43
Overdamped response, 75
Overshoot, 83

Padé approximation, 202
Parallel connection, ParallelConnect, 103
ParallelConnect, 98, 103, 110, 190
for adding inputs to system, 189
Particular response, 57
Penalty function, 165
Pendulum, inverted, 11, 163
Performance criterion, 165
of a missile, 175
Performance index, 165
of a missile, 175
Period, 40
Phase band, 96
Phase margin, GainPhaseMargins, 89
Phase response, 85
Phase unwrapping, adjusting PlotPoints
for, 144
PhaseRange, 87
PhaseRange, 87, 92, 96
PID controller, 95, 119
Pivoting, in
SchurDecompositionOrdered, 206
Plant, 27
PlotPoints, 81, 84, 87, 93, 95–96, 144
PlotSampling, 87, 95–96
Pole assignment, 17
controlling numerical errors in, 152
robust, 158
StateFeedbackGains, 150
using Ackermann's formula, 154
Pole placement, 17
StateFeedbackGains, 150
Pole-zero cancellation, PoleZeroCancel,
144
Poles, 18, 150
multiple, 213
of closed-loop system, 17
of transfer function, Poles, 32
of transfer function, ZeroPoleGain, 30
Poles, 32
PoleStyle, 81
PoleZeroCancel, 133, 144
Polynomial approximations, 202
Index 223
Positive, in FeedbackConnect, 107, 191
in GenericConnect, 111
Positive definite, 166
Positive feedback, 105
Positive semidefinite, 166
Predictor estimator, 192
Prewarping, 55
CriticalFrequency, 48, 55
Process noise, 181, 191–192
Production and inventory control model, 60
Production rate, control of, 60
Proper transfer function, state-space
realization of, 36
Proportional controller, 119

QRDecomposition, as value of
DecompositionMethod, 137
Quadratic cost function, 166

Ramp response, 77
Random systems, RandomSystem, 211
RandomOrthogonalComplement, 130
in KalmanControllableForm, 137
in KalmanObservableForm, 137
in StateFeedbackGains, 159
RandomSystem, 211
Rank, of matrix, Rank, 210
Rational polynomial approximations, 202
Rational polynomials, as
TransferFunction objects, 27
RealBlockForm, in
SchurDecompositionOrdered, 206
Realizations, 132
converting between, 132
RealRootProbability, 213
Reduced-order model, 133, 140
ReductionMethod, in
ControllableSpaceSize, 126
in ControllableSubsystem, 135
in DominantSubsystem, 143
in MinimalRealization, 135
in ObservableSpaceSize, 126
in ObservableSubsystem, 135
in TransferFunction, 33
Reference signal, 81
Regulator, linear quadratic,
LQRegulatorGains, 166
output, 171
RejectionLevel, 142–143
Resolvent matrix, 33
Response, in frequency domain, 85
in time domain, 57
minimum-time, 208
ResponseVariable, 60
Riccati equations, 18, 171–172, 182
RiccatiSolve, 171, 173
Robust pole assignment, KNVD, 158
Roll angle, 156
Roll attitude control, of a missile, 175
Roll rate, 156
Roll time constant, of ailerons, 175
Root loci, evolution of, 84
RootLocusPlot, 80
RootLocusAnimation, 84
RootLocusPlot, 80
RowReduce, as value of
DecompositionMethod, 137
224 Control System Professional
Rudder, 156

Sampled, 4, 39
Sampling, 44
Sampling period, choice of, 179
SamplingPeriod, 40
Sampling rate, 40
SamplingPeriod, 40
Satellite attitude control example, 151
controller for, 196
PID controller for, 119
Schur decomposition, ordered,
SchurDecompositionOrdered, 204
SchurDecomposition, as value of
SolveMethod, 177
vs. SchurDecompositionOrdered, 204
SchurDecompositionOrdered, 204
Second-order system, frequency response of,
85, 88
impulse response of, 78
root loci of, 82
sampling rate for, 179
step response of, 74
Separation principle, 192
Serial connection, SeriesConnect, 98
Series compensation, 119
SeriesConnect, 82, 91, 98, 110
Servo mechanism, 188
Servo motor, 183
SetControlFormat, 7
SetStandardFormat, 7
Sideslip angle, 156
Similarity transformation, for Kalman forms,
136
SimilarityTransform, 146
SimilarityTransform, 132, 146, 149
Simulation, analog, 65
discrete, 65
SimulationPlot, 57, 69, 83, 169
plotting state response with, 72
Single-input, single-output (SISO) systems, 28
minimal realization of, 133
Singular-value plot, SingularValuePlot,
96
SingularValuePlot, 96
SingularValues, as value of
DecompositionMethod, 137, 141
in Rank, 210
SISO systems, 28
Solution, nominal, 197
SolveMethod, in
DiscreteRiccatiSolve, 177
in RiccatiSolve, 177
Spacecraft, attitude control of, 95
SpecialPointProbability, 213
SpecialPoints, 213
Spectrum, of noise, 181
Stabilizable system, 170
Stable system, 171
in Lyapunov's sense, 208
State equations, 33
nonlinear, 15, 197
State estimation, deterministic, 161
optimal, 181
stochastic, 181
State feedback, StateFeedbackConnect,
114
StateFeedbackGains, 150
Index 225
State matrix, 34
State reconstruction, deterministic, 161, 181
optimal, 181
stochastic, 181
State response, plotting with
SimulationPlot, 72, 201
StateResponse, 57
State trajectories, 165
State-space data structure, StateSpace, 33
State-space models, nonlinear, 15, 197
State-space realizations, 132
State-space systems, 33
StateFeedbackConnect, 19, 98, 114, 169,
179–180, 201
StateFeedbackGains, 17, 150, 201
StateResponse, 57–58
dummy variable in, 60
initial conditions in, 60
number of input signals in, 60, 65
polling inputs in, 60
simulations with, 65
symbolic solution, 57
States, count of, 210
StateSpace, 3, 34
traditional notations for, 6, 41
with RandomSystem, 211
StateVariables, 43
Stationary noise, in wide sense, 182
Steady-state error, 77
Step response, 74, 202
Stern plane, deflection of, 72
Stochastic, inputs, 182, 187
state reconstruction, 181
system, 181, 192
Strictly proper transfer function, and
RandomSystem, 213
state-space realization of, 36
Submarine, depth control of, 72
Subspace, controllable, 125
observable, 125
Subsystem, 98, 115, 132
Suspension system example, 199
SV plot, SingularValuePlot, 96
System, analog, 33
composite, 98
continuous-time, 33
digital, 34
discrete-time, 34
dual, 130
dynamic, 27
nonlinear, 197
random, 211
state-space realization of, 33
stochastic, 181
time-varying, 57
T-bridge network, 66
TargetForm, 35
Temperature sensor, 203
Terminal state error, 165
Third-order system, step response of, 76
Time-domain response, 57
Time-domain simulation, 65
226 Control System Professional
Time-varying systems, response of, 57
TimeVariable, 43
ToContinuousTime, 4, 54
ToDiscreteTime, 4, 44, 151, 168, 174, 178,
180, 184
Tolerance, in PoleZeroCancel, 145
Torque, 183
Traditional notations, 5, 41
TraditionalForm, of control objects, 5, 41
Transfer function matrix, data structure for,
TransferFunction, 27
Transfer matrix, data structure for,
TransferFunction, 27
TransferFunction, 3, 27
as a pure function object, 27
expanding, 28
factoring, 28
nonpolynomial, 94
traditional notations for, 5, 41
variable in, 32
with RandomSystem, 211
Transformation matrix, recovering of,
TransformationMatrix, 148
TransformationMatrix, 148
Transient response, 57
Transient state error, 165
Transport lag, 94
Triangle hold, FirstOrderHold, 48, 56
Tustin transformation,
BilinearTransform, 48, 55

Uncontrollable states, 136, 158, 171
Undamped response, 75
Underdamped response, 75
Unforced response, 57
Unobservable, states, 136
VerifyPoles, 152
Weak modes, 142
Weak subsystem, 142
Yaw rate, 156
z-transform, modified, 53
ZTransform, 44
Zero-input response, 57
Zero-order hold, ZeroOrderHold, 48
Zero-pole mapping, ZeroPoleMapping, 48
Zero-pole-gain data structure,
ZeroPoleGain, 30
Zero-state response, 57
ZeroOrderHold, 48
ZeroPoleGain, 3, 30
variable in, 32
with RandomSystem, 211
ZeroPoleMapping, 48
Zeros, of transfer function, ZeroPoleGain,
30
of transfer function, Zeros, 32
Zeros, 32
ZeroStyle, 81
ZTransform, 44
$ContinuousTimeComplexPlane
Variable, 41
$ContinuousTimeToken, 41
$DiscreteTimeComplexPlaneVariable,
41
Index 227
$DiscreteTimeToken , 41
$RandomOrthogonalComplement, 130
$Sampled, 39
$SamplingPeriod, 45


[Bullet] ( • ), in control objects, 41
[EmptyUpTriangle] (), in control
objects, 41
[ScriptCapitalS] (.), in control objects,
41
[ScriptCapitalT] (¯), in control objects,
41
[ScriptK] (|), in control objects, 43
[ScriptS] (), in control objects, 41
[ScriptT] (:), in control objects, 43
[ScriptU] (.), in control objects, 43
[ScriptX] («), in control objects, 43
[ScriptY] (}), in control objects, 43
[ScriptZ] (z), in control objects, 41
228 Control System Professional

Copyright Information
June 2005 Second edition Intended for use with Mathematica 4 or 5 Software and manual: Igor Bakshee Product managers: Yezabel Dooley and Kristin Kummer Project managers: Julienne Davison, Julia Guelfi, and Jennifer Peterson Editor: Jan Progen Proofreading: Richard Martin and Emilie Finn Software quality assurance: Jay Hawkins, Cindie Strater, Angela Thelen, Rachelle Bergmann, and Shiho Inui Package design: Jeremy Davis, Megan Gillette, and Kara Wilson Includes Electrical Engineering Plots package by Steve Adams, Jeffrey Adams, and John M. Novak Special thanks to John M. Novak, Leszek Sczaniecki, Todd Gayley, Roger Germundsson, Hans Hoelzer, Neil Munro, David Smith, and Daniil Sarkissian Published by Wolfram Research, Inc., 100 Trade Center Drive, Champaign, Illinois 61820-7237, USA phone: +1-217-398-0700; fax: +1-217-398-0747; email: info@wolfram.com; web: www.wolfram.com Copyright © 1996–2005 Wolfram Research, Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of Wolfram Research, Inc. Wolfram Research, Inc. is the holder of the copyright to the Control System Professional software and documentation ("Product") described in this document, including without limitation such aspects of the Product as its code, structure, sequence, organization, "look and feel", programming language, and compilation of command names. Use of the Product, unless pursuant to the terms of a license granted by Wolfram Research, Inc. or as otherwise authorized by law, is an infringement of the copyright. Wolfram Research, Inc. makes no representations, express or implied, with respect to this Product, including without limitations, any implied warranties of merchantability, interoperability, or fitness for a particular purpose, all of which are expressly disclaimed. Users should be aware that included in the terms and conditions under which Wolfram Research, Inc. is willing to license the Product is a provision that Wolfram Research, Inc. and its distribution licensees, distributors, and dealers shall in no event be liable for any indirect, incidental or consequential damages, and that liability for direct damages shall be limited to the amount of the purchase price paid for the Product. In addition to the foregoing, users should recognize that all complex software systems and their documentation contain errors and omissions. Wolfram Research, Inc. shall not be responsible under any circumstances for providing information on or corrections to errors and omissions discovered at any time in this document or the package software it describes, whether or not they are aware of the errors or omissions. Wolfram Research, Inc. does not recommend the use of the software described in this document for applications in which errors or omissions could threaten life, injury, or significant loss. Mathematica, MathLink, and MathSource are registered trademarks of Wolfram Research, Inc. All other trademarks used herein are the property of their respective owners. Mathematica is not associated with Mathematica Policy Research, Inc. or MathTech, Inc. #T4070

New in Version 2
Version 2 of Control System Professional carries numerous internal changes that simplify the addition of new packages to the Control System Professional Suite while maintaining the integrity of a single Mathematica application. Version 2 also includes several new algorithmic and interface features. Except as noted in the next section, the new version is fully compatible with Version 1. Intuitive, traditional typesetting of control objects in the StateSpace and Transfer Function forms; editable control objects; easy switching between automatic display of the results in the traditional and standard forms. Support for constructing StateSpace realizations from TransferFunction objects in different target forms. Support for different methods of reducing StateSpace objects to TransferFunc tion objects. Support for analog simulations in StateResponse, OutputResponse, and SimulationPlot. Faster discrete-time domain simulations. Simplified syntax for polling inputs of multi-input systems with the same input signal in time-domain simulation functions. Support for interconnections of multi-input, multi-output TransferFunction systems with SeriesConnect, ParallelConnect, and FeedbackConnect. Support for recovering the transformation matrix used to construct realizations of special forms (such as KalmanControllableForm or InternallyBalanced Form). Faster ordered Schur decomposition. Performance optimization through internal caching. Developers can use the same mechanism to provide advanced services to their users. Fully indexed online documentation.

Incompatible Changes between Version 1 and Version 2
Thirty-eight new functions, options, and symbols have been added, some of whose names may conflict with names already being used. ReviewForm has been superseded by the built-in Mathematica function Tradition alForm. MinimalRealization[transferfunction] no longer provides an interface to PoleZe roCancel and uniformly returns the StateSpace object. The option InternallyBalanced of the function DominantSubsystem is now obsolete. DominantSubsystem can automatically determine if the input system is an output of the function InternallyBalancedForm. The syntax StateFeedbackGains[a, b, poles] is no longer supported; the first argument to StateFeedbackGains must be the StateSpace object. Example: StateFeedbackGains[StateSpace[a, b], poles]. The option Iterations of the function StateFeedbackGains has been superseded by the built-in Mathematica option MaxIterations. Symbol J, a default generic name for a summation index in a symbolic sum returned by StateResponse and related functions, has been superseded by the built-in Mathematica symbol K. By default, the random orthogonal complements are no longer selected using the randomized algorithm. To revert to the behavior in the previous version, you can set $RandomOrthogonalComplement = True.

........ 1................................. Impulse.... 3....................6 Discrete-Time Models of Systems with Delay .......... Getting Started .............................................. 4. and Other Responses .................................................................... 5.............................................................................2... 1..........1 The Conversion Methods ............1 Symbolic Approach ......................... 27 33 39 41 44 48 53 54 57 57 65 74 80 80 85 85 89 93 95 4..............................7 Numericalizing for Speed .................2 The Structure of the Application ........................................................ 1......................................................................................3 Continuous-Time versus Discrete-Time Systems ......................................4 The Traditional Notations ................................3 The Nyquist Plot ................. Description of Dynamic Systems ................................................................. 3......................1 Transfer Function Representations ................. .......Table of Contents 1.2 Gain and Phase Margins .......................7 Continuous-Time Models of Discrete-Time Systems ............................................... 5........................................................................................................................ 1................3 The Control Objects ............ 1..............................................5 Discrete-Time Models of Continuous-Time Systems .........1 Using the Application for the First Time ................ Introduction: Extending Mathematica to Solve Control Problems ...............................................................................................2 State-Space Representations .................................................... 5............ 5........................... 3........................ 3...................................................................1 Root Loci ........... 11 3.................5................................................................... 4.................... 27 3.............................................................. Classical Methods of Control Theory ..................................................3 Step........................................... 5...................... 3.................. 5........................................................... 1 1 2 3 5 7 9 9 10 2.................1 The Basic Function ....................................................................................2 The Bode Plot ..............................................................................................................................6 The Notation for the Imaginary Unit ...4 Traditional Notations ........................................................................................................................... 1................................. 1...................................................................... 4.................2........................................................4 The Nichols Plot ..... 5.................. 3....................... 1............................ Time-Domain Response ................................................................................................................................... 3.............8 Evaluating Examples in This Guide ..................2 Simulating System Behavior .............5 The Control Format ......

..................................5 Using Interconnecting Functions for Controller Design .....1..........8 Optimal Controller ... Optimal Control Systems Design ....................... Nonlinear Control Systems ........................................... 6.......................... 10........................................................................................................................................... System Interconnections ........... 8. 10..................................................................4 Manipulating a System's Contents .................. 10..................................................................1 Elementary Interconnections .................................... 130 8.........2 Optimal Output Regulator ............................................................... 6............................................................................ 132 8......... 10........ 9.......................................................................................3 Dual System ...............................................1.........................................6 Pole-Zero Cancellation ...........................2 Robust Pole Assignment ...........3 Closing Feedback Loop ....1 Tests for Controllability and Observability .......................1 Ackermann's Formula ...................................................................................... Controllability and Observability ..1......... 6............ 165 11...........6 Discrete Estimator by Emulation of Continuous Design ........................................................................................... 96 98 98 98 103 105 110 114 115 119 6.....................................3 Jordan Canonical (Modal) Form ........ 8....5 The Singular-Value Plot ............................................................................ Feedback Control Systems Design ......................................................... 9...... 150 10............ 9.................................................... 6.......1 Irreducible (Minimal) Realizations .......... 126 7..........................................................7 Kalman Estimator ..... 8..................................... 133 136 138 140 142 144 146 148 150 154 158 161 166 171 172 178 181 184 185 192 9...........................7 Similarity Transformation ........ 8.........................................................................................................................................................................................................................................................1 Linear Quadratic Regulator .......................1.....................................................................5 Dominant Subsystem ............................................................................2 Connecting in Parallel .............................................................................................................................vi Control System Professional 5...4 Internally Balanced Realizations ...........1 Connecting in Series ................................................8 Recovering the Transformation Matrix .....5 Optimal Estimation .................................. 6....... 10........ 122 7........................................................................................................................................................... Realizations .3 Riccati Equations ............................................1.................... 8.......2 Kalman Canonical Forms ..... 10.3 State Feedback ........................................................................................................ 197 .............. 7....................................................................................... 9..................................................................... 8............. 10. 122 7..............................................................................1 Local Linearization of Nonlinear Systems .............................2 Controllability and Observability Constructs .................2 State Reconstruction ...............1 Pole Assignment with State Feedback ......... 6.....2 Arbitrary Interconnections ................................. 6......................................................... 8......4 Discrete Regulator by Emulation of Continuous Design ............................................ 10...................................................................................... 197 11..................................................................................................................... 6........

............................................................................4 Part Count and Consistency Check .....................................................................................................................1 Ordered Schur Decomposition ..........................................3 Rank of Matrix ...............2 Rational Polynomial Approximations ..................................................... 12.......................................................................................................................................... 204 12............ 202 12....................5 Displaying Graphics Array Objects Together ......................... 12... 214 Index .............................2 Lyapunov Equations ................................. 215 ............................................................ 12................................................. 12................................................................................................... 204 206 210 210 211 211 References ................6 Systems with Random Elements .............................. 12............................................................................... Miscellaneous .Table of Contents vii 11....................

.

If this has been done at the installation stage. Nor can the guide be considered an introduction to control systems theory. 1999). This makes Control System Professional available. Then. 1. to make all the functionality of the application package available at once. Getting Started Control System Professional is a collection of Mathematica programs that extend Mathematica to solve a wide range of control system problems. ControlSystems. Also given are a number of examples of how to use the Control System Professional functions together with the rest of Mathematica. solved examples. To gain the most from this application package. the application package should be visible to Mathematica without further effort on your part. In[1]:= ControlSystems` . the guide is definitely not an introduction to Mathematica itself. in parallel to other applications.m package with the Get or Needs command. The Mathematica Book. The many illustrations. and other included features should make it possible for the interested reader to tackle most of the problems just after reading the corresponding parts of this guide. Although an attempt was made to put the new functions into the relevant control theory context.1. Both classical and modern approaches are supported for continuous-time (analog) and discrete-time (sampled) systems. This guide describes in detail the new data types introduced in Control System Professional and the new functions that operate on these data types. However. It is beyond the scope of this guide to address those innumerable control problems that could be solved simply with step-by-step application of the usual Mathematica functionality. the reader is advised to consult the standard Mathematica reference by Stephen Wolfram.1 Using the Application for the First Time Control System Professional is one of many available Mathematica applications and is normally installed in a separate directory. 4th Edition (Wolfram Media/Cambridge University Press. the guide is by no means a substitute for standard texts such as the ones listed in the References. you simply load the Kernel/init.

theDirectoryControlSystemsIsIn] can be used to inform Mathematica of how to find the application. it is probably due to a nonstandard location of the application on your system.2 The Structure of the Application Control System Professional consists of this guide and accompanying packages. Supplemental packages are listed in the second part of the table.m package). 1.m file to have it executed automatically at the outset of any of your Mathematica sessions. The palette included with Control System Professional can be found in the FrontEnd/Palettes directory. and you will have to check that the directory enclosing the ControlSystems directory is included in your $Path variable. You may want to add this command to your init. . The notebooks are located in the Documentation directory. Commands such as AppendTo[$Path.2 Control System Professional If the previous command causes an error message. The installation card that came with Control System Professional contains the detailed instructions on the installation procedure. The packages listed in the first part of the table typically correspond to separate sections of this guide and can be loaded into your Mathematica session independently (with or without prior loading of the Kernel/init. The following packages are located in the main ControlSystems directory. The entire guide is provided as Mathematica notebooks that are accessible from your Help Browser.

m EEPlotsExtensions. On the one hand.m Packages within Control System Professional .m Simulations. that contain the available information of the control system.m Connections. On the other hand.m Riccati. or control objects. .1.m Linearization.m PoleAssignment.m SolversCommon. they are containers. or wrappers.m Kernel init.m Realizations. they work like functions when one is applied to another.m Plots.m Conversions. StateSpace. that conveniently combine the information about the system in one Mathematica expression. You can think of control objects as "active wrappers". control systems objects and conversion between them conversion between continuous-time and discrete-time domains investigating system behavior in the time domain classical methods—root locus and frequency response system interconnections and manipulating system contents controllability and observability properties equivalent and reduced representations of the same system pole assignment using state feedback optimal linear quadratic design of control systems local linearization of nonlinear systems Lyapunov equations solver Riccati equations solver common definitions for the Lyapunov and Riccati equations solvers ordered Schur decomposition options handling routine extensions to the Electrical Engineering Examples plotting routines initialization file 1. and ZeroPoleGain. Getting Started 3 Common.m Properties.3 The Control Objects Most Control System Professional functions operate on special data types.m Lyapunov.m LQdesign. These are TransferFunction .m SchurOrdered.m CycleOptions. The control objects are freely convertible from one to another and are easy to pass from one function to another.

In[3]:= StateSpace % Out[3]= StateSpace 0 . refer to Section 3. the resultant state-space system contains very simple matrices A . 1 s . Sampled This converts the discrete-time object back to continuous time. Notice that the result is still the TransferFunction object. and C . 1 s Out[2]= TransferFunction s. Converting between control objects. in which the discrete-time domain is indicated by the option Sampled. In[4]:= ToDiscreteTime TransferFunction s. 1 s We find a state-space realization of the transfer function object by applying the StateSpace head to it.1 ff. Simply apply the desired head to the object you wish to convert. In[5]:= ToContinuousTime % Out[5]= TransferFunction s. Along with the structural information about the system. B .4 Control System Professional Let us create an integrator system in the transfer function form. control objects may contain a reference to the domain (continuous-time or discrete-time) the system is in and/or the period at which the (discrete-time) system was sampled. 1 . 1 s . This finds a discrete-time approximation to the integrator system. For the convention for using internal variable in TransferFunction. Sampled Period T Period T Out[4]= TransferFunction s. T 1 s . In[2]:= TransferFunction s. % refers as usual to the result of the preceding computation. In this case. 1 No special functions are needed to convert one control object to another. The percentage mark.

This is a single-input. 1 s Α TraditionalForm Out[6]//TraditionalForm= 1 1 Α This is the TraditionalForm of the discretized object. In[7]:= ToDiscreteTime % . the variable is used. see Chapter 3. You can reverse this if you are mainly dealing with the discrete-time systems. Sampled Out[7]//TraditionalForm= Period 2 Simplify TraditionalForm 2 1 1 Α 2Α 2Α Α 2 . The superscripted letter distinguishes the result from a regular matrix.1. 1. The subscript gives the value of the sampling period. For a detailed description of the control objects. two-output TransferFunction object in Traditional Form. That can be done either by applying the Mathematica function TraditionalForm or by selecting an expression that contains one or more control objects and executing the menu command Cell Convert To TraditionalForm (or the corresponding keyboard shortcut. as described in the documentation for your copy of Mathematica). 1 s . Getting Started 5 By default. Since the object is believed to be in the continuous-time domain. you will often find it useful to represent the control objects in their traditional typeset form. In[6]:= system TransferFunction s. The Control Format palette provided in Control System Professional allows you to switch between automatic display of results in the traditional form and standard Mathematica output.4 Traditional Notations When using the notebook front end. It displays using the variable . the system is assumed to be in the continuous-time domain if the Sampled option is not supplied.

while the small subscripted bullet character denotes the continuous-time domain. In[10]:= ToDiscreteTime % . Note that Equation Form disregards the value of the sampling period. The superscripted letter identifies the StateSpace object. In[8]:= StateSpace system Out[8]//TraditionalForm= TraditionalForm 0 0 Α 0 1 Α 1 1 0 1 0 0 • Additionally. This represents the above StateSpace object as a pair of matrix state-space equations. Control System Professional provides the function EquationForm that allows you to display the StateSpace objects as the familiar state-space equations. the state-space equations are displayed as difference rather than differential equations. Sampled Out[10]//EquationForm= Period Τ ΑΤ Simplify EquationForm 1 1 0 1 Α ΑΤ ΑΤ 1 Α 1 Α2 ΑΤ ΑΤ Α 1 0 1 . These have the conventional form for both continuous-time and discrete-time systems.6 Control System Professional This is a possible state-space realization of the above system in TraditionalForm. In[9]:= % EquationForm 0 0 1 Α 0 1 Out[9]//EquationForm= Α 1 0 1 For the discretized system.

1 . the functions behave much like OutputForm or MatrixForm (and all other members of the $OutputForms list). better yet. you will typically find it safe to select exactly the part of expression that you want to edit or to drag across the entire object and choose Edit Copy when you want to copy the object as a whole (or. 1 Typically. paste. copy the entire cell that contains the control object).5 The Control Format SetControlFormat SetStandardFormat Two output modes. display control objects and matrices in TraditionalForm restore the standard Mathematica output format Instead of applying TraditionalForm to every control object individually. Α. In[11]:= % TraditionalForm ΑΤ Out[11]//TraditionalForm= 1 0 Α 0 1 Α ΑΤ 1 Α 0 0 ΑΤ 1 Α2 ΑΤ ΑΤ 1 1 Τ Both TraditionalForm and EquationForm provide convenient formatting. ΑΤ 1 . As a rule of thumb. 0. Α 1 ΑΤ . . ΑΤ ΑΤ . Getting Started 7 Here is the same system in TraditionalForm. When editing. the previous result is still the StateSpace object.1. and edit the typeset representations of control objects. exercise caution to prevent destruction of the invisible tags that allow an unambiguous interpretation of the object in typeset form. In[12]:= % Out[12]= StateSpace 1 ΑΤ 1. you can freely copy. 0. you can switch to displaying control objects in traditional form automatically. Despite different formatting. Sampled Period Τ Α2 Α . 1. In this respect. however. This can be done by issuing the . Neither changes the internal representation of the objects.

1). i. The SetControl Format function sets an appropriate value for the built-in global variable $PrePrint. but switch to the standard one as needed to highlight the underlying standard-form representation of the object in question. As the two formats look quite different and can be easily distinguished. You can revert to the standard Mathematica output format by issuing the command SetStandardFormat[] or by clicking the "Standard Format" button in the Control Format palette. As a result. j. Set StandardFormat restores the previous setting of that variable. simply switch to the appropriate format using the Control Format palette or convert the individual cell to the appropriate format using the menu choice or the corresponding keyboard shortcut. Figure 1.8 Control System Professional command SetControlFormat[] or by simply clicking the "Control Format" button in the Control Format palette. no notice is given in the text to identify them.1. we switch to the control format. Additionally. Setting the display of control objects in TraditionalForm and matrices in MatrixForm. 2 . the matrix displays in the traditional form and so does the StateSpace object. 1 . if any. SetControlFormat[] turns on the TraditionalForm display for all matrices and some expressions that involve control objects and matrices. 3 Out[13]= 2 3 4 3 4 5 0 . In this guide we routinely use the control format. If you reevaluate the online documentation and your results appear in the different format. which is available under the Palettes submenu of the File menu (see Figure 1. At this point. In[13]:= Table i j. 1 In[14]:= StateSpace 0 1 Out[14]= 1 0 • .

which is not the standard notation in the control literature. attempting to give an exact answer to a problem involving an exact input. it is sufficient to change the formatting rule for Complex. From now on. unless stated otherwise in this guide. with the exact computation sometimes being considerably more time consuming. for example. y_ Protect Complex . Getting Started 9 1. In[15]:= Unprotect Complex .m file if you prefer an alternative to the built-in notation. This changes the way complex numbers appear on the screen. whenever possible. the Control System Professional functions.1.6 The Notation for the Imaginary Unit Mathematica uses the letter I ( in the notebook front end) for the imaginary unit 1 . to change the appearance of complex numbers. you do have to realize that the algorithms invoked with the two types of input can be quite different. it is quite easy to set things up differently. it is done in transparent fashion.7 Numericalizing for Speed A word of common sense is in order before you start working with the application. However. to make sure that at least a part of the input contains an inexact expression to prevent running into a long exact calculation unnecessarily. . However. Recall that the expression 2 + 3 I. 3 ]. In[16]:= 2 3 Out[16]= 1 appears as the letter . You are therefore advised. You do not have to worry about choosing the right algorithm. accept both exact and inexact input (or a mix of the two) and handle them appropriately. As is the case with many built-in Mathematica routines. as can the computing time. Therefore. as it is done in the following few lines. : x " " y. 1 2 3 1. Format Complex x_. You may want to add an analogous definition to your init. is a shortcut for Co mplex[ 2.

4 Out[17]= 1 0 1 1 0 0 0 1 0 0 1 0 0 1 0 1 These are its exact eigenvalues together with the time taken by the CPU to compute the result. In such cases. 1.) In[19]:= Eigenvalues N %% Out[19]= Timing 0. eigenvalues of the closed loop system after a pole placement). for example. they usually vanish after a simplification. Second. is the familiar reference to the result of the next-to-previous computation. %%. 1. of course.. In[17]:= Array Random Integer &. 4. 1. a random 4 4 exact matrix. 1. some numerical results may be different if a numerical algorithm. . Small numerical residuals may be different because of the different machine arithmetic.g. Differences in symbolic expressions may come from different heuristic rules employed in different versions of Mathematica. the concrete values after random distortions will.. 0.10 Control System Professional Consider. Further.618034 1. you can usually confirm that you are still getting a correct result by testing some property of the resulting system (e. 1.61803. Finally. selects another possible value from the universe of equivalent values. be different in your experiments. In[18]:= Eigenvalues % Out[18]= Timing 1 2 1 5 . considerable savings in computing time can be realized just by using inexact input at the outset.. 1 2 1 5 0. also based on the machine arithmetic.1 Second. you may sometimes find that your results differ somewhat from the documentation. If an inexact result suffices for the particular purpose. (The double percentage mark.8 Evaluating Examples in This Guide When evaluating examples in this guide on your computer system.

using the classical example of controlling an inverted pendulum. The inverted pendulum shown in Figure 2. Introduction: Extending Mathematica to Solve Control Problems In this chapter.2. and the cart exerts a force to provide the attitude control of the pendulum. by being seamlessly incorporated with the rest of Mathematica. Additional solved examples will be given in later chapters when individual functions are discussed. solve the problem using the Control System Professional functionality. . The vertical position of the rod is unstable. This or a similar model is considered in many textbooks on control systems. we learn how to formulate a control problem in Mathematica. pp. Inverted pendulum.1 is a massive rod mounted on a cart that moves in a horizontal direction in such a way that the rod remains vertical. and analyze the results using the standard Mathematica functions. 590–92. Θ xc X Figure 2. We will follow Brogan (1991). We will see that.1. this application package provides a convenient environment for solving typical control engineering problems.

and its mass and the moment of inertia about its center of gravity are m and J . which corresponds to the case of uniform mass distribution along the pendulum. Summing all the moments around the center of gravity of the pendulum gives the dynamical equation Fy L 2 Fx L cos Θ 2 sin Θ JΘ (2. Fy mg Fy fx f _y 2 b Finally. Forces applied to the rod (a) and the cart (b) of the pendulum. Assume that the length of the pendulum is L . summing the forces applied to the pendulum in horizontal and vertical directions (Figure 2. Then. and xc X L 2 sin Θ and yc L 2 cos Θ are the horizontal and vertical displacements of the center of gravity of the pendulum.12 Control System Professional Let us obtain a mathematical model for the system. The mass of the cart is M . we have (2. for the cart we have (see Figure 2. Θ Fx f _y 2 Mg Fx a Figure 2. xc depends on the horizontal displacement X of the cart.2a).2b) (2. respectively.2) Fx Fy m xc mg where Fx and Fy are the components of the reaction force at the support point.4) fx Fx MX .2.3) where J m L2 12 .1) m yc (2.

This is the first equation. In[2]:= eq2 Out[2]= Fy mg m yc m yc t t Fy g m This is the dynamical equation. Translating the model into Mathematica is straightforward. In[3]:= eq3 1 2 Fy L Sin Θ t 1 2 Fx L Cos Θ t JΘ JΘ t t Out[3]= 1 Fx L Cos Θ t 2 1 Fy L Sin Θ t 2 This gives the definition for the moment of inertia J. We will keep it as eq1 for future reference. In[1]:= eq1 Out[1]= Fx m xc m xc t t Fx The next equation translates almost verbatim as well. Introduction: Extending Mathematica to Solve Control Problems 13 where fx is the input force applied to the wheels. In[4]:= J m L2 12 Out[4]= L2 m 12 Here is the last equation. then so does xc. In[5]:= eq4 Out[5]= fx Fx MX M X t t fx Fx We now define the horizontal displacement xc of the pendulum through the displacement of the cart X and the angle Θ. In[6]:= xc t_ X t 1 2 L Sin Θ t Out[6]= 1 L Sin Θ t 2 X t . The pattern notation t_ on the left-hand side makes the formula work for any expression t.2. As the two depend on time t.

just a single rule and we extract it from the lists. In[8]:= Eliminate Out[8]= eq1. eq3. 1 t 3 2 fx Cos Θ t 2 g m Sin Θ t 2 g M Sin Θ t L m Cos Θ t Sin Θ t Θ t 2 L m M 3 M Cos Θ t 2 3 m Sin Θ t 2 3 M Sin Θ t Θ 2 . Eliminate takes the list of equations and the list of variables to eliminate. Fy. It is solvable for Θ t . eq2.14 Control System Professional Here is the corresponding assignment to the vertical displacement yc. Θ Out[9]= t Θ t 3 2 fx Cos Θ t 2 g m Sin Θ t 2 g M Sin Θ t L m Cos Θ t Sin Θ t Θ t 2 L m M 3 M Cos Θ t 2 3 m Sin Θ t 2 3 M Sin Θ t 2 We have. In[9]:= Solve % . We then have the benefit of not solving the differential equation against X t . in fact. but simply eliminating it algebraically along with the other variables we no longer need. In[7]:= yc t_ 1 2 L Cos Θ t Out[7]= 1 L Cos Θ t 2 Notice that we have defined eq1 … eq4 as logical equations (using the double equation mark ==) and the expressions for xc and yc as assignments to these symbols (using the single =). The result is a nonlinear differential equation between the input force fx and the angular displacement Θ and its first and second derivatives. In[10]:= sln Out[10]= % 1. X t Sin Θ t Θ t 2 L m 6 g m Sin Θ t 6 g M Sin Θ t 3 L m Cos Θ t L m Θ t L M Θ t 3 L M Cos Θ t 2 Θ t 3 L m Sin Θ t 2 Θ t 3 L M Sin Θ t 2 Θ t 6 fx L m Cos Θ t Solve returns a list of rules that give generic solutions to the input equation. eq4 . Fx.

This sets the input and output vectors. In[12]:= u fx . fx is the only component of the input vector u . (2. 3 2 fx Cos Θ t 2 g m Sin Θ t 2 g M Sin Θ t L m Cos Θ t Sin Θ t Θ t 2 L m M 3 M Cos Θ t 2 3 m Sin Θ t 2 3 M Sin Θ t 2 . Introduction: Extending Mathematica to Solve Control Problems 15 As our next step. The nonlinear state-space model of the system will be presented in the form x y f x. we design the state feedback controller that attempts to keep the pendulum in equilibrium. Θ among neither state nor input variables.5). we observe that their Mathematica equivalents f and h are simply the derivative D[x. In[11]:= x Θ t . u (2. we create a state-space model of the system and linearize it for small perturbations near the equilibrium position Θ 0. To obtain f and h in Eq. sln Θ t . y Θ t . which is Θ t . based on the linearized model. t Out[13]= t . u h x.5) where Θ and Θ constitute the state vector x . In[13]:= D x. Then. This creates the state vector in Mathematica. we carry out several simulations of the actual nonlinear system governed by the controller and see what such a controller can and cannot do. % . and Θ makes up the output vector y .Θ t The replacement rule stored as sln helps to get rid of Θ In[14]:= f Out[14]= t . Finally. t] and the output vector y expressed via the state and input variables.2. The expression for the derivative contains an undesirable variable.Θ t .

6g m M . returns the control object StateSpace[a. d]. 0 . This performs the linearization. Θ t . 1. which. b. (2.6) This is the purpose of the function Linearize. 0 . supplied together with values at the nominal point (the point in the vicinity of which the linearization will take place). 1 . the input state-space model must be linear. In[15]:= h Out[15]= y Θ t So far we have used the built-in Mathematica functions. Θ t . 0 StateSpace 0 . . (Here /@ is a shortcut for the Map command. This loads the application. 6 L m 4M Mapping the built-in Mathematica function Factor onto components of the state-space object simplifies the result somewhat.16 Control System Professional The expression for function h is trivial. 1 . our first task will be to linearize the model. Therefore. . In[17]:= ss Out[17]= Linearize f. h .) In[18]:= Factor Out[18]= % 0. Now it's time to make accessible the library of functions provided in Control System Professional. fx. C . 0 . In[16]:= ControlSystems` For most Control System Professional functions. 6 L m 4M . 0 StateSpace . given the nonlinear functions f and h and the lists of state and input variables. and D in Eq. where matrices a.0 . B. c. c.6). and d are the coefficients A. represent it in the form x y Ax Cx Bu Du (2.0 3 2gm 2gM . b. that is. 0.0 L m 4M . 0 .0 L m 4M 1.

1 L m 4M 6 p1 p2 Note that we were able to obtain a symbolic solution to this problem and thus see immediately that. p1. In this particular case. Introduction: Extending Mathematica to Solve Control Problems 17 TraditionalForm often gives a more compact representation for control objects. The result is a matrix comprising the feedback gains. and so on. so it is advisable to work with inexact numeric input. This approach is more computationally intensive.2. In[22]:= Eigenvalues a Out[22]= b. only the first gain depends on g and so would be affected should our pendulum get sent to Mars (and the change would be linear in g ). we can also design the state feedback using the optimal linear-quadratic (LQ) regulator (see Chapter 10).1) is used. In[21]:= a ss 1 . the eigenvalues of the matrix A B K. For convenience in presenting results. One way to do this is to place the poles of the closed-loop system at some points p1 and p2 on the left-hand side of the complex plane. p2 With Control System Professional. This extracts the matrices from their StateSpace wrapper. that is. In[19]:= TraditionalForm % Out[19]//TraditionalForm= 0 6g m M L m 4M 1 1 0 0 0 6 L m 4M 0 • Now let us design a state feedback controller that will stabilize the pendulum in a vertical position near the nominal point. b ss 2 . for example.5). . the second gain on their sum. In[20]:= k Out[20]= StateFeedbackGains ss. We see that the eigenvalues of the closed-loop system are indeed as required. To check if the pole assignment has been performed correctly.k p1. Ackermann's formula (see Section 9. We also see that the first gain depends on the product of pole values. p2 1 6 6 g m 6 g M L m p1 p2 4 L M p1 p2 . we can find the poles of the closed-loop system. we switch to the control print display (Section 1.

assuming in all cases that Θ 0 0 . R Out[27]= 196. L 1. numericValues 4.07396 Let us make some simulations of the linearized system as well as the original.005 47. M 8. The same initial conditions will then be used for the nonlinear system. numericValues 0 0. In[27]:= LQRegulatorGains nss..1422 Here are the poles our system will possess when we close the loop.. p1 5. In[23]:= numericValues Out[23]= m 2. Q.8. g 9.2941 0 1 0 Let Q and R be identity matrices. .2.8... p2 9. m 2.176471 0 • Out[24]= 0 1 17.24526.5. and 1. In[25]:= Q Out[25]= IdentityMatrix 2 1 0 0 1 1 1 In[26]:= R Out[26]= LQRegulatorGains solves the Riccati equations and returns the corresponding gain matrix. p1 Here our system is numericalized. g 4. 4. L 1. In[28]:= Eigenvalues a Out[28]= b.. M 8. and the results will be compared. We start with the linearized system and compute the transient response of the system for the initial values of Θ 0 of 0. 4. 1..% . nonlinear system stabilized with one of the controllers we have designed—say the one obtained with Ackermann's formula.18 Control System Professional This is the particular set of numeric values (all in SI) we will use. p2 5... In[24]:= nss ss .

In[29]:= Θ0 Out[29]= .5.5. t. Introduction: Extending Mathematica to Solve Control Problems 19 Here is the list of initial conditions for Θ. 0 angle p2 t p1 p1 p2 p1 t p2 . 1. we can use OutputResponse. 1.2. which is one of the functions defined in Chapter 4. k Simplify 0 Out[30]= 1 p2 0 L m p1 p2 p1 1 0 6 4M 0 • To compute how the initial condition in Θ decays in the absence of an input signal. The initial value for Θ is denoted as angle. the time variable t. The function State FeedbackConnect is described in Chapter 6 together with other utilities for interconnecting systems. 1..2 This is the linearized system after the closing state feedback. the input signal (which is 0 for all t). In[30]:= StateFeedbackConnect ss. 0. and the initial conditions for the state variables supplied as an option.2 0. 1.. InitialConditions Out[31]= angle. the input arguments to OutputResponse are the system to be analyzed. In[31]:= OutputResponse % . In this particular case.

8 0.6 0. 1. First we prepare the input rules.2 1 0. As we have only one input.333 Θ t Recall that we store the description of our nonlinear system as sln. In[33]:= feedbackRules Out[33]= Thread u 51. We store it as plot for future reference. 0.4 0. 0.20 Control System Professional Here is the plot of the previous function for the chosen values Θ0 . 0 . Θ0 . the input variable—the force fx applied by the motor of the cart—tracks changes in state variables Θ t and Θ t .2 1 2 3 4 The case of actual nonlinear system stabilized with the linear controller is more interesting. 4 . numericValues fx 211. We note that when the control loop is closed. PlotStyle RGBColor 1. angle t. numericValues .x . In[34]:= sln Out[34]= Θ t 3 2 fx Cos Θ t 2 g m Sin Θ t 2 g M Sin Θ t L m Cos Θ t Sin Θ t Θ t 2 L m M 3 M Cos Θ t 2 3 m Sin Θ t 2 3 M Sin Θ t 2 . but requires some work on our part. there is only one rule in the list. In[32]:= plot Plot Evaluate % . Θ t k.

In[35]:= de Out[35]= Equal t sln . numericValues . . to convert the rule to an equation. In[36]:= Out[36]= 0 NDSolve Θ Θ Θ de. Θ . This changes the Plot options to reflect that convention and adjusts a few other nonautomatic values for plot options. t. The plot is stored as plot1.0075. 0 . 0. 4 . .001 . Cos Θ t 2 30. None . 0.0075 . 4 0. Θ 0 0 . PlotStyle Thickness .4 0. In[38]:= plot1 Plot Evaluate Θ t Θ radian . .. Θ t 10. Sin Θ t 2.025.8 0. substitute the feedback rules. apply the head Equal to it (@@ is the shorthand form of the Apply function). Dashing Automatic. The time t is assumed to vary from 0 to 4 seconds. 1. t.6 0. for Θ 0 1 as a dashed-dotted one.01.333 Θ t 51. . 4. None. None . Automatic. & Θ0 InterpolatingFunction InterpolatingFunction InterpolatingFunction In several graphs that follow.2. The resultant differential equation is labeled de.. Introduction: Extending Mathematica to Solve Control Problems 21 Now we numericalize the rule.2 1 0.01 . 0. Cos Θ t 2 Cos Θ t 211. 24. Θ 0 #. and. 4. we show the results for Θ 0 0 as a solid line. Frame "Time s ". Sin Θ t 2 This solves the differential equation with the initial conditions for every value in the list Θ0 one by one and returns a list of solutions. In[37]:= SetOptions Plot.2 as a dashed line.2 0 0 1 2 Time s 3 4 . PlotLabel "Θ radian " . 0. FrameLabel . feedbackRules Sin Θ t Θ t 2 Θ 3. . 196.0075. 4. . Dashing .. . . and for Θ 0 1. We can see that the controller succeeds in driving the pendulum to its equilibrium position for all three initial displacements. The results for Θ are now presented graphically. .

22 Control System Professional We can also see that. 0. PlotLabel "Input Force 0 . radian s Here is the plot of input force versus time. 4 . PlotRange All . the derivative Θ t vanishes as well. 0.2 0 1 2 Time s 3 4 t . t.6 -0. feedbackRules .2 -0. 4 .4 -0. In[40]:= Plot Evaluate fx . Input Force 250 200 150 100 50 0 0 1 Newton 2 Time s 3 4 . 0 . once the angle Θ[t] has come to zero. t. Newton ".8 -1 -1. at least not when driven from the displacements we are considering for now. This means that the pendulum is not about to oscillate around its equilibrium position. PlotLabel "Θ' radian s " . In[39]:= Plot Evaluate Θ Θ' 0 -0.

we will plot the results for Θ 0 1.25.25 as a solid line and one of our previous curves (namely. 0. Θ 0 1. We solve the same equation for another set of initial conditions. . 4.8 0.2 rad is almost critical. we compare the graphs of Θ for the nonlinear and linear systems and see that the only case of smallest initial displacement is treated adequately by the linear model. plot .01 .2 1 0.. 0 . This sets the new options. 1000 Θ InterpolatingFunction In the following graphs.6 0. and that may cause problems for still larger angles. In[42]:= ΘBig Out[42]= NDSolve de.4 0. t. MaxSteps . Θ 0 1. the system becomes hard to control.01. The case Θ 0 1.2. PlotStyle Thickness . for a slightly larger displacement. 4 . Θ . Introduction: Extending Mathematica to Solve Control Problems 23 Finally. . Θ 0 0.2 ) as a dashed line. Indeed.25 rad. Dashing . In[43]:= SetOptions Plot. Θ 1.001 . Θ 0 1.2 0 0 1 2 Time s 3 4 radian The transient responses suggest that our linear feedback is not sufficiently prompt in reacting to moderate and large initial displacements Θ 0 . In[41]:= Show plot1.

PlotLabel radian Of course. 0. we will assume that it would.25 to Θ 0 . t. but. 4 . for the sake of argument. 0 3 . 0 3 . the cart in our particular model of the pendulum (as shown in Figure 2.1) would not allow the pendulum to rotate in circles. 4 . . but now it "Θ radian " . t. ΘBig . radian s . 0. ΘBig .24 Control System Professional We find that the pendulum still could be driven from Θ 0 oscillates badly around the equilibrium point. In[44]:= Plot Evaluate Θ t Θ 6 4 2 0 -2 -4 -6 0 1 2 Time s 3 4 1. The variations in Θ become more complex and far more intense. In[45]:= Plot Evaluate Θ Θ' 30 20 10 0 -10 -20 -30 0 1 2 Time s 3 4 t . PlotLabel "Θ' radian s " .

211. t. #1. 4 . In[46]:= Plot Evaluate fx . Sign 211. To model this situation.. the controller fails to balance the pendulum. In[48]:= clip Out[48]= 1001. 0 3 Newton ".333 Θ t 51. & Here is how it works: everything beyond the interval from 1000 to 1000 gets cut off. 1000 N. #. 999. In[49]:= feedbackLtd Out[49]= MapAt clip. 2 Time s 3 4 The real actuator may not be up to the task. 1001 1000.2..333 Θ t 51. In[47]:= clip Out[47]= If Abs # 1000.333 Θ t 1000. Θ t fx If Abs 211. feedbackRules . and the feedback saturates at that limit. we create a clip function. PlotRange All .. PlotLabel Input Force 2000 1000 0 -1000 -2000 0 1 "Input Force Newton . If the maximum force the motor can provide is. 999. 999. 999.. & If Abs #1 1000. Sign #1 1000. . 1. say. 0. Θ t 1000. feedbackRules. 1000. Sign # 1000. Θ t . ΘBig . 2 51. We use clip to saturate the feedback. Introduction: Extending Mathematica to Solve Control Problems 25 This is the force the motor must exert to maintain the process.

In[51]:= ΘBig1 Out[51]= NDSolve de1. 20 10 0 -10 -20 -30 -40 0 1 2 Time s 3 4 . numericValues . MaxSteps 1000 Θ InterpolatingFunction 0. Dashing .05.Θ t .333 Θ t 51. Sin Θ t 2 This solves it.001 . Cos Θ t 30. In[50]:= de1 Out[50]= Equal sln . 4 . State Response . Θ 0 1. Θ . 0. Θ t .01 All. Finally. Sign 211. It is clear that the controller fails to return the pendulum to its equilibrium position. Sin Θ t 2. Θ t 1000. t. ΘBig1 . In[52]:= Plot Evaluate PlotStyle PlotRange Θ t .333 Θ t 51. Θ 0 .25. Thickness . we plot the state response—for Θ as a solid line and for Θ as a dashed one. 211.. t. PlotLabel "State Response" . Θ t 1000. 4. 4 . Thickness .001 . 0 .26 Control System Professional This is the new differential equation for Θ under the saturated feedback. 196.. 2 Cos Θ t If Abs 211. Cos Θ t Sin Θ t Θ t 2 2 10. .333 Θ t 51. 24. feedbackLtd Θ t 3. 0.

In[2]:= TransferFunction var. 3. 2 var . The TransferFunction object comprises a variable and a matrix in that variable. m Transfer function representation as a pure function.3. TransferFunction currently accepts at most one formal parameter and does not accept the list of attributes. Description of Dynamic Systems Control System Professional deals with state-space and transfer function models of continuous-time (analog) and discrete-time (sampled) systems. 1 var2 1 var . TransferFunction in many respects behaves like Function. the built-in Mathematica pure function. This chapter introduces the available data types and the means to convert between them. Unlike Function.1 Transfer Function Representations The basic form for representing transfer function matrices is the TransferFunction data structure. var var 1 Out[2]= TransferFunction var. Like Function. In[1]:= ControlSystems` This is the transfer function representation of a two-input. The transfer function representations can be in rational polynomial or zero-pole-gain form. 2 1 var2 . "variable". Load the application. one-output system. and is capable of operating on any Mathematica expression. TransferFunction can have a formal parameter. TransferFunction m a transfer function as a rational polynomial matrix m of a formal parameter # that behaves as a pure function a transfer function of a formal parameter var TransferFunction var. but it does accept options.

0. The factored form can be obtained by using the function FactorRational. TransferFunction behaves as a pure function in #. multiple-output (MIMO) systems. The opposite function is ExpandRational. 1 s2 1 s If a particular numeric frequency is supplied. In[3]:= % s Out[3]= 2 s . In[5]:= TransferFunction 1 # Out[5]= TransferFunction 1 #1 Without the formal parameter. In[6]:= % s Out[6]= 1 s TransferFunction deals with multiple-input. the value of the transfer matrix at that frequency is obtained. In[7]:= TransferFunction 1 # Out[7]= TransferFunction 1 #1 It is often useful to factor the elements of a transfer matrix so that the zeros and poles of individual elements become apparent. which expands the individual numerators and denominators of transfer matrix elements. A scalar transfer function is represented as a 1 1 transfer matrix. Scalar. In[4]:= %% Out[4]= 10. single-output (SISO).990099 0. . or single-input. transfer functions are "upgraded" to the matrix representation automatically.28 Control System Professional Supplying the variable to this function leads to a matrix in that variable. 0.020202.0990099 We can name the internal variable whatever we please or drop it altogether and still have a mathematically identical object.

By itself. whereas TransferFunc tion itself behaves merely as a wrapper with respect to polynomial expressions. Description of Dynamic Systems 29 FactorRational transferfunction represent the transferfunction object in factored form ExpandRational transferfunction represent the transferfunction object in expanded form Converting transfer function objects to standard forms. TransferFunction does not change polynomial expressions in a transfer function matrix. 1 s 1 s 1 1 1 s 1 s Out[11]= TransferFunction s. 1 s2 1 s4 This factors the numerators and denominators of all elements. This is some rational polynomial transfer function. In[9]:= FactorRational % Out[9]= TransferFunction s. 2 6 5 s s2 .3. 3 s 1 s s 2 s s 1 s This expands the numerators and denominators of the previous result. 2 1 s2 . In[10]:= ExpandRational % Out[10]= TransferFunction s. 1 s2 1 s4 Both FactorRational and ExpandRational can be viewed as means to transformation arbitrary TransferFunction objects into their standard forms. In[11]:= TransferFunction s. 2 s s . 6 5 s s2 s4 1 Out[8]= TransferFunction s. In[8]:= TransferFunction s. 2 6 5 s s2 . .

2 1 s2 . To allow complete restoration of the transfer function object from its zero-pole-gain equivalent. Here is another transfer function. gains a collection of matrices representing the zeros. ZeroPoleGain zeros. zeros. poles. ExpandRational and FactorRational do convert arbitrary rational polynomial matrices to the standard forms.e. Structurally. zeros. both zeros and poles are matrices of vectors of the corresponding coefficients. poles.30 Control System Professional However. s s 1 Out[14]= TransferFunction s. In[12]:= ExpandRational % Out[12]= TransferFunction s. poles.. In[14]:= tf TransferFunction s. Zero PoleGain may optionally contain the variable used in the transfer function. All three of these matrices and the parent transfer matrix have the same dimensions down to the second level. 2 s 1 s s 1 2 The coefficients of the factored form of the individual transfer matrix elements (i. whereas gains is just a matrix of coefficients. poles. 1 s2 1 s . gains use variable var in the descendent TransferFunction objects ZeroPoleGain data structure. 1 2s s s2 In[13]:= FactorRational %% Out[13]= TransferFunction s. 2 s . and gains) can be stored using the special data structure ZeroPoleGain. and gains of the elements of a transfer matrix ZeroPoleGain var.

3. Description of Dynamic Systems

31

This picks up its zeros, poles, and gains. Notice that there are no finite zeros in the first element of the transfer matrix, so the corresponding list of zeros is empty.
In[15]:= ZeroPoleGain % Out[15]=

ZeroPoleGain s,

, 0

,

,

, 1

,

2, 1

Applying TransferFunction to the ZeroPoleGain object brings out the transfer function in its factored form.
In[16]:= TransferFunction % Out[16]=

TransferFunction s,

2 s s

,

s 1 s

The same result can be arrived at directly.
In[17]:= FactorRational tf Out[17]=

TransferFunction s,

2 s s

,

s 1 s

Like TransferFunction objects, ZeroPoleGain objects need not have a named variable. If a ZeroPoleGain object does not have one, neither will the descendent transfer function.
This 1 1 system does not have finite zeros, has a single pole at the origin, and has a unit gain. Therefore, it represents an ideal integrator.
In[18]:= ZeroPoleGain Out[18]=

, ,

0 0

, ,

1 1

ZeroPoleGain

Here is its transfer function. As no variable was used in the parent ZeroPoleGain, none appears here.
In[19]:= TransferFunction % Out[19]=

TransferFunction

1 #1

It is worth emphasizing that the variable in the TransferFunction and ZeroPoleGain objects is a formal parameter and has nothing to do with distinguishing Laplace and z -transform domains (which is what the Sampled option is for; see Section 3.3). The variable may simply be omitted or easily renamed if desired using the built-in Mathematica functions. Again, renaming does not change the domain of the object; to transform the system from one domain to another, use either ToDiscreteTime (Section 3.5) or ToContinuousTime (Sec-

32

Control System Professional

tion 3.7), whichever is appropriate. Note, however, that interpretation of the TransferFunc tion object in TraditionalForm obeys a different convention and does take the variable into account (see Section 3.4).
This is a transfer function in the variable var.
In[20]:= TransferFunction var,

1 1 var2 1 1 var2

Out[20]=

TransferFunction var,

This changes the variable to z. No change of domain has occurred.
In[21]:= % Out[21]=

. var

z 1 1 z2

TransferFunction z,

Two additional utility functions, Zeros and Poles, return zeros and poles of transfer functions for control objects. In the case of ZeroPoleGain objects, these functions simply extract the relevant parts from the data structure.

Zeros system Poles system

gives the matrix of zeros of the transfer function corresponding to system gives the matrix of poles of the transfer function corresponding to system

Computing zeros and poles separately.

Both TransferFunction and ZeroPoleGain, as well as the head StateSpace described in Section 3.2, work as "active wrappers". This means that, on the one hand, they keep system components together allowing the system as a whole to be conveniently passed from one function to another. They can also be used for conversion between the data structures when one wrapper is applied over another.

3. Description of Dynamic Systems

33

TransferFunction statespace find the TransferFunction object that corresponds to the StateSpace object statespace TransferFunction statespace, ReductionMethod method

use the specified method for conversion
Converting from state-space to transfer function representation.

The straightforward, but computationally expensive, way of finding the transfer function matrix of a state-space realization is based on the formula Hs C sI A
1

B

D

(3.1)

The method involves computing the inverse of the matrix s I A 1 (the so-called resolvent matrix) and is accessible by setting the option value ReductionMethod Inverse . A better alternative is often to scan the determinant expansion formula for a single-input, single output system over all possible input-output pairs of a multi-input, multi-output system (cf. Kailath (1980), Appendix A) sI A b j ci sI dij A 1 sI A

hij s

(3.2)

Here the notation stands for the determinant of the matrix, b j and ci are the column- and row-vector components of the matrices B and C that correspond to the given input-output pair, dij is the corresponding scalar part of the matrix D, and hij is the same for the transfer function matrix H. This method is available under the option value DeterminantExpan sion, which is the default value of the option ReductionMethod. Although the above formulas refer to the continuous-time case, the implemented algorithms are equally applicable to discrete-time systems. For conversion and most other purposes, the elements of the transfer matrix must be rational polynomials. However, nonrational polynomial terms can also be handled in some cases, notably in some frequency response functions (Chapter 5) and system interconnection functions (Chapter 6).

3.2 State-Space Representations
Continuous-time state-space systems (Figure 3.1)

34

Control System Professional

x t y t

Ax t Cx t

Bu t Du t

(3.3)

and discrete-time state-space systems (Figure 3.2) x k 1 y k Ax k Cx k Bu k Du k

(3.4)

are represented by the StateSpace data structure; this guide refers to component matrices using the same symbols, A , B, C , and D . Here A is the state (or evolution) matrix, B is the input (or control) matrix, C is the output (or observation) matrix, and D is the direct transmission (or feedthrough) matrix. Keeping the same notation for continuous- and discrete-time systems is common in the literature and allows sharing many concepts and algorithms that are essentially identical. (It is also common to use F, G, H, and J instead of A , B, C , and D for continuous-time systems and and instead of A and B in the discrete-time case.) To distinguish between the two types of systems, the option Sampled is introduced in Section 3.3.

StateSpace a, b StateSpace a, b, c

state-space object comprising two matrices, a and b state-space object comprising matrices a, b, and c, which assumes that the direct transmission term matrix d is zero state-space object comprising matrices a, b, c, and d

StateSpace a, b, c, d
State-space data structure.

D

ut B

. xt

xt C

yt

A
Figure 3.1. State-space model of a continuous-time system.

. . c1 . b2 b1 . for state feedback regulator design (Section 9. The truncated StateSpace[a. The StateSpace objects can contain as few as two matrices a and b. b1 . State-space model of a discrete-time system. Description of Dynamic Systems 35 D uk B xk 1 Delay xk yk C A Figure 3. If matrices c and d are absent. a3 . b1 . Here is a second-order SISO system with no direct transition term. a4 .2. c2 . if only matrix d is absent. c1 .1). c2 c1 . a3 . a2 . In[23]:= StateSpace Out[23]= a1 . no assumptions are made about them. but contains all the necessary information. In[24]:= SameQ Out[24]= TransferFunction % . but for most functions. a2 . In[22]:= StateSpace Out[22]= a1 . a2 . . a4 a1 . %% True StateSpace transferfunction the StateSpace realization of the TransferFunction object transferfunction StateSpace transferfunction. TargetForm form the realization of the specified form Obtaining state-space realizations of transfer functions. a3 . . at least matrix c and optionally d are required too. a4 .3. it is routinely assumed to be a zero matrix. of limited value. b2 . c2 c1 . b] representation is. . b2 b1 . of course. . 0 0 StateSpace The two systems yield the same transfer function and so correspond to the same physical system. a2 . a3 . b2 . for instance. c2 StateSpace This is the same system with a zero direct transition term added. a4 a1 .

If the transfer matrix H s is proper (but not strictly proper). (In the literature. The adopted definition of the controllable companion form is due to.7) Similarly. for example. The dimension of the system is n p. this form is sometimes referred to as the controllable canonical form.5) of size q p .8) 0 I x . constructed by inspection.6) Αn 2 I Βn Β1 x where 0 and I are zero and identity matrices of size p p and the matrix D is zero. In the latter case. the observable companion realization is 0 0 I 0 0 I 0 0 y 0 0 0 0 0 I Αn I Αn 1 I Αn 2 I x Α1 I Du Βn Βn 1 Βn Β1 2 x u (3. the controllable companion realization is assumed to be 0 0 x 0 Αn I y Βn Β n 1 I 0 0 Αn 1 0 I 0 I 2 0 0 x I Α1 I Du 0 0 u 0 I (3. where Βi are of that size as well. the target form of the state-space model can be specified by the option TargetForm. the same representation of the strictly proper part of H s holds while the matrix D can be found as D H (3. p and q being the number of inputs and outputs. with the default value ControllableCompanion that corresponds to the controllable companion form.36 Control System Professional The state-space representation can be obtained directly from the differential equations of the system (difference equations in the case of a discrete-time system) or from the transfer matrix. Gopal (1993). respectively.) For the strictly proper transfer matrix Β 1 sn sn 1 H s Β 2 sn Α1 sn 1 2 Βn Αn (3.

The structure of the state-space system is more transparent with the use of Tradi tionalForm (Section 3. 0. a b. 0. a b. Here is a transfer function. 1 . a b. 1 1 . In[27]:= TraditionalForm % Out[27]//TraditionalForm= 0 0 ab 0 b 0 1 0 0 0 1 0 a b 0 ab 0 a b a 1 1 0 0 1 0 0 0 0 1 • 0 0 . In[25]:= tf TransferFunction s. a b 0. 1. this is not the minimal-order model. 0.3. 0 . 0. 1. 0. 1 s b Out[25]= TransferFunction s.4). 0. Clearly. Note that the controllable and observable companion forms may not be of minimal order and. 1 s a . 0. 0 . b. a. Chapter 8 describes the methods to transform the model further. 0 . 0. In[26]:= StateSpace % Out[26]= StateSpace 0. as a rule. 1 . 0. a s b s This produces the controllable companion realization. This form can be obtained using the option TargetForm ObservableCompanion . 0 . are ill-conditioned. 1. Description of Dynamic Systems 37 where 0 and I are zero and the identity matrices of size q q and the dimension of the system is n q . 0 . 1 .

we assert that none of the symbolic variables are complex. Therefore. Consistency here means that matrix a must be square. The function ConsistentQ checks that they are satisfied. In[28]:= MinimalRealization % . if so desired. n n . In[30]:= TransferFunction s. StateSpace does not check the consistency of component matrices. matrix b must be n p . accepts options. TransferFunction s. StateSpace. In[29]:= StateSpace tf. which could. these requirements should be kept in mind. if used. where q is the number of outputs. the consistency is the prerequisite for many other functions to operate in a meaningful way. # & Out[30]= % . When entering state-space objects manually. TransferFunction s. a s b s Simplify . To obtain a simpler result. as well as other control objects. ComplexVariables None Simplify TraditionalForm Out[28]//TraditionalForm= a 0 1 1 1 b2 0 b 1 1 1 a2 0 b 0 a 1 1 b b2 1 1 a a2 0 0 • This produces the observable companion realization. must be q p . %% 1 1 . come wrapped in lists.38 Control System Professional This reduces the order of the system using the function MinimalRealization described in Section 8. caution should be exercised to not confuse an empty list of .1. however. matrix c must be q n . where n is the number of states. TargetForm Out[29]//TraditionalForm= ObservableCompanion TraditionalForm 0 ab 1 a b 0 1 b 1 0 a 1 0 • It is easy to demonstrate that both realizations correspond to the initial system. where p is the number of inputs. By itself. and matrix d. a s b s 1 1 .

True. setting of the option Sampled is not mandatory. 3. or if you send your file to a colleague who prefers another global value.m file.3. It should be emphasized that relying on the global variable. although convenient. according to which the method "knows" how to deal with the object. and ZeroPoleGain objects to the continuous-time or discrete-time domain is to set the option Sampled to either False. it relies on the global variable $Sampled to make the decision when necessary. A useful precaution. parsing an empty list as a valid list of options is disallowed for all control objects as well as in other functions in Control System Professional where this may cause a confusion. If a function does not find the option in a particular control object. The default value of $Sampled is False. with systems sampled primarily at one rate. If this is the case. even more restrictively.3 Continuous-Time versus Discrete-Time Systems The way to to specify whether the TransferFunction . Description of Dynamic Systems 39 options with an empty matrix in a control object's description. . or Period[value]. Changing the global domain specification makes sense if you deal primarily with the discrete-time systems or. may cause confusion if you save the results of your Mathematica session to a file and later read it in after changing that variable. For this reason. Once set. As with all other Mathematica options. is to save the value of $Sampled together with your data or use the Sampled option explicitly in all your data structures. StateSpace. therefore. This allows the same set of functions to operate on both types of systems implementing an object-oriented paradigm. the option remains a part of the data structure. you may want to change the variable $Sampled in your Mathematica session and/or include the corresponding line in your init.

and test functions pertaining to domain specification. so it is attributed to continuous-time by default.40 Control System Professional Sampled Sampled Sampled False True Period T attributes the system to the continuous-time domain attributes the system to the discrete-time domain attributes the system to the continuous. Sampled Out[32]= True False This is another way to attribute the system to the discrete-time domain—by using Period[T] with a nonzero value of T. b. From now on the system will be assumed to be discrete-time if not specified otherwise with the Sampled option (until $Sampled is changed back). global variables. a zero value refers to the continuous-time domain.or discrete-time domain depending on the value of T. c Out[31]= True Now the system is explicitly set to be in the discrete-time domain. In[33]:= ContinuousTimeQ StateSpace a. There is no domain-specifying option in this StateSpace object. In[32]:= ContinuousTimeQ StateSpace a. c. b. and everything else refers to the discrete-time domain global variable that determines the domain in which a system is assumed to be in the absence of the Sampled option test if system is in the continuous-time domain test if system is in the discrete-time domain find the sampling period of system $Sampled ContinuousTimeQ system DiscreteTimeQ system SamplingPeriod system Options. Sampled Out[33]= Period T False This changes the default domain. In[31]:= ContinuousTimeQ StateSpace a. c. In[34]:= $Sampled Out[34]= Period 1 Period 1 . b.

The traditional form of the TransferFunction object uses the variable (the Mathematica character [ScriptS]) to represent the complex variable of the Laplace-transform domain and the variable ( [ScriptZ]) for the z-transform domain. In[35]:= ContinuousTimeQ StateSpace a. control function matrices and a block matrices C D objects are distinguished from regular matrices by their superscripts. The default subscripts for continuous-time and discrete-time objects are • and (the Mathematica characters [Bullet] and [EmptyUpTriangle]). The subscript is typically omitted if the domain can otherwise be unambiguously determined from the contents of the control object. which are the script letters for TransferFunction and for StateSpace.3. you can choose a different notation. c Out[35]= False This restores the default behavior. In[36]:= $Sampled Out[36]= False False 3. Description of Dynamic Systems 41 The same system as before is now considered to be in the discrete-time domain. b. The objects can also have a subscript that indicates the time domain or the sampling period.4 The Traditional Notations TraditionalForm system traditional form of the state-space or transfer-function system Traditional representation of the control objects. By setting the values of global variables $ContinuousTimeToken and $DiscreteTimeToken. correspondingly. In the notebook front end. You can choose different symbols by setting the global variables $ContinuousTimeComplexPlaneVariable and $DiscreteTimeComplexPlaneVariable. The TransferFunction and StateSpace objects are represented as transfer A B . By convention. you can display and manipulate control objects in their traditional typeset form. .

However. a2 . b2 b1 . b1 .42 Control System Professional Note that contrary to the standard representation of the TransferFunction object. The discrete-time domain is indicated by the small triangle. . . b2 . the domain is determined by the value of the subscript. Sampled . Sampled True True StateSpace This is its TraditionalForm representation. c2 . c1 . a4 a1 . a4 . interpretation of the TransferFunction object in Tradi tionalForm is based on the domain variable. a3 . should the variable in the body of the TransferFunction point to the domain that is different from the one indicated by the subscript of the control object. Here is a discrete-time state-space object. a3 . In[38]:= TraditionalForm % Out[38]//TraditionalForm= a1 a2 b1 a3 a4 b2 c1 c2 0 . $ContinuousTimeToken $DiscreteTimeToken • the token of the continuous-time domain in the subscript of the control objects in TraditionalForm the token of the discrete-time domain the default value of $ContinuousTimeToken the default value of $DiscreteTimeToken $ContinuousTimeComplexPlaneVariable the complex variable in the TraditionalForm representation of continuous-time control objects $DiscreteTimeComplexPlaneVariable the discrete-time variable the default value of $ContinuousTimeComplexPlaneVariable the default value of $DiscreteTimeComplexPlaneVariable Customizing TraditionalForm of control objects. a2 . c2 c1 . which does not require a formal variable (nor does it take the variable into account for the time domain identification purposes). In[37]:= StateSpace Out[37]= a1 .

. ( [ScriptU]) and ( [ScriptY]). In[41]:= ContinuousTimeQ Out[41]= 1 True EquationForm statespace the state-space system statespace in the form of matrix equations Representing StateSpace objects as state-space equations. Several options allow you to customize the appearance of a state-space system in Equation Form. the state. correspondingly. respectively. The default time variables for the continuous-time and discrete-time systems are ( [ScriptT]) and ( [ScriptK]). Description of Dynamic Systems 43 This is a continuous-time TransferFunction object. input. By default. In[39]:= TransferFunction 1 # Out[39]= TransferFunction 1 #1 This is its TraditionalForm representation. option name default value StateVariables InputVariables OutputVariables TimeVariable Automatic the state variables to use in equations the input variables the output variables the time variable Specifying the variables for the StateSpace objects in EquationForm. In[40]:= TraditionalForm % Out[40]//TraditionalForm= 1 Here we copy the previous output cell and paste it into the input cell. and output variables are. The expression is interpreted as a continuous-time object because of the variable . (the Mathematica character [ScriptX]). The variable indicates the continuous-time domain.3.

However. perform similar tasks. b2 . . more efficient. the problem arises of finding a discrete-time representation of the system such that the output variables sampled at times t0 . In[43]:= EquationForm % Out[43]//EquationForm= a1 a2 a3 a4 c1 c 2 b1 b2 This uses the specified variables to represent the system. b1 . a2 . InputVariables ∆ Θ Θ a1 a2 a3 a4 c1 c2 Θ Θ Θ Θ b1 ∆ b2 3. c2 StateSpace These are the corresponding state-space equations. a3 . a2 . In[44]:= EquationForm % . . …. In[42]:= StateSpace Out[42]= a1 .44 Control System Professional Here is a state-space system. a3 . … suitably approximate the ones of the original system. in principle. Note that the built-in functions LaplaceTransform and ZTransform can. StateVariables Out[44]//EquationForm= Θ. Θ . the procedures in Control System Professional use the state-space approach and so the conversion is. . b2 b1 . A possible way to make such a conversion is to apply the function ToDiscreteTime to a continuous-time system. c1 . ToDiscreteTime operates on all control objects. c2 c1 . a4 a1 . tk .5 Discrete-Time Models of Continuous-Time Systems If an analog system is to be analyzed in the discrete-time domain. t1 . a4 . as a rule.

1 . Sampled find the discrete-time approximation of continuous-time system sampled with period T $SamplingPeriod default numeric value of sampling period Finding discrete-time equivalents of continuous-time systems.3. 1. 0 . As noted previously. In[47]:= % Out[47]= .s z 1 1 aT aT TransferFunction z. 0 Out[48]//TraditionalForm= . 1 . 1 . Sampled Period T Consider a state-space continuous-time system. In[46]:= hz Out[46]= ToDiscreteTime % . the formal internal variable (in this case s) does not indicate the domain. a a s a TransferFunction s. In[45]:= tf Out[45]= TransferFunction s. Consider a simple continuous-time system. Should the need arise. Sampled Simplify Period T TransferFunction s. it is easy to rename the formal variable. 0. z . a s This is a possible discrete-time approximation. Sampled 1 1 aT aT Period T s . Description of Dynamic Systems 45 ToDiscreteTime system find the discrete-time approximation of continuous-time system sampled with the default sampling period Period T ToDiscreteTime system. but the Sampled option does. In[48]:= cont StateSpace TraditionalForm 0 0 1 1 0 1 1 0 0 • 0.

This is the analog output response of the original system to the sinusoidal input signal. 0. 1 Τ 1 Τ . Sin t . If it is desirable for ToDiscreteTime to use a symbolic sampling period T by default. the quality of which depends on the method used and the sampling period. In[50]:= TraditionalForm % Out[50]//TraditionalForm= 1 1 Τ 0 1 0 Τ Τ 1 Τ Τ 1 0 Τ The default value for the option Sampled in ToDiscreteTime is the global variable $Sam plingPeriod. t Out[51]= Simplify 1 2 2 t Cos t Sin t . This variable also provides a fallback where some functions retreat in situations when the sampling period does not evaluate to a number. It is worth emphasizing that the "conversion" from an analog to a sampled system is merely an approximation. 1 . 0 Here is the same system in TraditionalForm. Sampled Period Τ StateSpace 1. $SamplingPeriod must never be set to anything that has no numeric value. say with a command SetOptions ToDiscreteTime. In[49]:= disc Out[49]= ToDiscreteTime % . The time-domain response functions described in Chapter 4 can make the difference between the original and approximated system readily apparent.46 Control System Professional This is its discrete-time approximation. 1. to simulate the transient behavior of a system). this can be easily achieved by using the standard Mathematica mechanism. The result is still a StateSpace object that is in the discrete-time domain as indicated by the option Sampled. Thus. but the numeric value is needed to perform the task (for example. Sampled Period T . . In[51]:= OutputResponse cont. Τ . Sampled Τ Τ Period Τ .

Sin t .5. we include the analog response from the previous plot (structurally. 10 . 0. Choosing smaller values for Τ can make the lag less noticeable. For reference. PlotLabel "Output Response".5 0. Epilog First % .015 .75 0. t.25 1 0. t. 0. Τ .5 1. Description of Dynamic Systems 47 Here is a piece of that curve on the time interval from 0 to 10 seconds. the line is the first element of the Graphics object returned by the previous command and is extracted with First). 1.25 2 4 6 8 10 We can see that the sampled values systematically lag behind corresponding points on the analog curve.3. 10 . In[52]:= Plot % . PlotJoined False. In[53]:= SimulationPlot disc .25 2 4 6 8 10 This plots the simulated response of the discrete-time system for the sampling period Τ of 0.25 1 0.5 seconds over the same time interval.75 0.5 1.5 0. . PlotStyle PointSize . which is typical for the default ZeroOrderHold method. Output Response 1.

In[55]:= ToDiscreteTime tf.and first-order (triangle) hold. aT 1 s aT Period T This is the conversion of the same transfer function using the backward rule.48 Control System Professional 3. respectively. . and bilinear (Tustin) transformation with and without prewarping. BackwardRectangularRule Simplify asT 1 s asT . Sampled Out[54]= Period T . implemented with ForwardRectangularRule and BackwardRectangular Rule. if any. In[54]:= ToDiscreteTime tf.to discrete-time conversion. implemented with ZeroPoleMapping option name default value Method CriticalFrequency ZeroOrderHold Automatic method to use for transformation critical frequency. (1990). The method can be selected with the Method option.1 The Conversion Methods The conversion from the continuous-time domain to the discrete-time domain can be performed using the following methods (see Franklin et al. respectively • Numerical integration methods—forward and backward rectangular rules. Method . This is the result of conversion with the forward rule of the transfer function system tf defined earlier in this chapter.5. Sampled ForwardRectangularRule TransferFunction s. • Hold equivalence methods—zero. implemented with BilinearTransform • Zero-pole mapping. Section 4). implemented with ZeroOrderHold and FirstOrderHold. at which to make comparisons Options related to continuous. Sampled Method Out[55]= Period T . Sampled Period T TransferFunction s.

3. In[57]:= ToDiscreteTime tf. s2 FirstOrderHold . as 1 bs 1 Out[59]= TransferFunction s. Method 1 . Simplify . Description of Dynamic Systems 49 This conversion uses the bilinear transformation. Sampled Out[57]= Period T . Sampled Out[58]= Period T . In[59]:= lag TransferFunction s. which uses a lag network. a 1 s T 2 aT s 2 aT This uses the zero-pole mapping. 1 s2 . 1 4 s s2 T2 6 1 s 2 Yet another example. Sampled Method Out[56]= BilinearTransform Period T . illustrates the effect of the option CriticalFrequency on accuracy of the conversion. Sampled Simplify Period T TransferFunction s. Sampled Period T TransferFunction s. Method aT ZeroPoleMapping TransferFunction s. 1 as 1 bs . Sampled Period T This is the first-order-hold equivalent of the transfer function H s In[58]:= ToDiscreteTime TransferFunction s. In[56]:= ToDiscreteTime tf. 1 aT s .

1 Frequency 0.5 1 5 10 Rad Second This is the network after the bilinear transformation.1 Frequency 1.01 0. b 5 . Sampled Simplify Period T Out[61]= TransferFunction s. .50 Control System Professional Here is the Bode plot for the network for some set of parameters.51 5 10 Rad Second deg Phase -10 -20 -30 -40 0. Magnitude dB 0. In[60]:= lagplot BodePlot lag . In[61]:= dlag ToDiscreteTime lag .01 0. a 0 -2 -4 -6 -8 -10 -12 -14 0. Sampled Period T .05 0.05 0. Method 2a 2b 1 s 1 s BilinearTransform 1 s T 1 s T .

CriticalFrequency a b 1 s Ωc 1 s Ωc 1 s Tan 1 s Tan T Ωc 2 T Ωc 2 Ωc Simplify . a 1. PlotStyle RGBColor 1.05 0. In[63]:= ToDiscreteTime lag . . Description of Dynamic Systems 51 This computes the Bode plots for continuous and sampled lag networks—and displays them together using a utility function DisplayTogetherGraphicsArray (see Section 12.3.1 Frequency 0. In[62]:= DisplayTogetherGraphicsArray lagplot. Dashing . Sampled Method Out[63]= Period T . BodePlot dlag . Magnitude 0. BilinearTransform.05 0.51 5 10 Rad Second deg Phase -10 -20 -30 -40 0.01 0. The responses coincide for the low frequencies but differ somewhat near the Nyquist frequency. 0 .1 Frequency dB .02 0 -2 -4 -6 -8 -10 -12 -14 0. this time with frequency prewarping at some critical frequency Ωc . b 5.01 0.5). 0.5 1 5 10 Rad Second We again use BilinearTransform. T 1 . Sampled Period T TransferFunction s.

01 0.52 Control System Professional We have achieved a perfect match at that frequency at the expense of less accurate behavior at other frequencies.1 Frequency 0.05 0.01 0. Therefore. the conversion from the continuous. Ωc 2 . T 1. 0. In[64]:= DisplayTogetherGraphicsArray lagplot.5 1 5 10 Rad Second Except for zero-pole mapping. Dashing . This is our lag system as a StateSpace object.02 0 -2 -4 -6 -8 -10 -12 -14 0. Magnitude 0. b 5.05 0. a 1.1 Frequency dB .to the discrete-time domain is implemented with state-space algorithms. 0 .51 5 10 Rad Second deg Phase -10 -20 -30 -40 0. BodePlot % . In[65]:= lag1 StateSpace lag Simplify TraditionalForm Out[65]//TraditionalForm= 1 b b a b2 1 a b • . which naturally operates on ZeroPoleGain objects. PlotStyle RGBColor 1. the transformation to and from StateSpace objects can be avoided if the system is represented in StateSpace in the first place.

even more restrictively. . ToDiscreteTime is the only function that takes the Delay option into consideration and. Delay . Description of Dynamic Systems 53 This converts the system to discrete time using the bilinear transformation. Here is a simple state-space system with delay of Λ. 1 1 . The conversion implements the modified z -transform algorithm described in Franklin et al. Currently. 1 1 . Delay Λ Λ StateSpace . In[66]:= ToDiscreteTime lag1. Method BilinearTransform Simplify TraditionalForm Out[66]//TraditionalForm= 2b T 2b T 2 b a T b 2b T 2b 2b T 2a T 2b T T 3. In[67]:= ss Out[67]= StateSpace 1 1 .4. Sampled Period T . The systems can be represented as conventional StateSpace objects with a Delay option added.3. (1990).6 Discrete-Time Models of Systems with Delay ToDiscreteTime can also find the discrete-time approximation of analog state-space systems of the form x t y t Ax t Cx t Bu t Λ (3. where Ts is the sampling period. it handles the only case of numerical values of the ratio Λ Ts .4. . Negative delay is equivalent to prediction of the system's behavior and can be used as long as the delay is not longer than the sampling period. Section 2.9) which corresponds to systems with time delay Λ. option name default value Delay 0 time delay in the state-space model Option in StateSpace to introduce a delay in control.

In[70]:= ToDiscreteTime ss . 0 . 1 Τ . 1. 0. Here a continuous-time system from an earlier example is converted to the discrete-time domain. ToContinuousTime system find the continuous-time model of the discrete-time object system Finding continuous-time equivalents of discrete-time systems. Sampled Period Τ . 0. In[69]:= ToDiscreteTime ss . converts them to the continuous-time domain. 1 . 1 Τ . . the methods can be chosen through the Method option. . Λ Out[70]= Τ 2 Τ 2 . Sampled Period Τ StateSpace Τ . then so does the dimension of the state space. when applied to discrete-time objects. 0 . 1 T T . Τ 1 Τ 2 . 0 . 0 . Sampled Τ Period Τ 0 . The conversion attempts to find the slowest possible continuous-time model the outputs of which would match the ones of the discrete-time model at the sample time. Sampled Period Τ StateSpace 0 . Sampled Period T StateSpace 1. 1. 0 .5. 1. 1 . 0. In[71]:= ToDiscreteTime StateSpace Out[71]= 0. 3. 1 . 1. If the delay increases. In[68]:= ToDiscreteTime ss . 1 . 0 . Λ Out[68]= Τ. 0 . 1 T 1 T . 1 . which accepts the same values. Λ Out[69]= 2 Τ. 0.54 Control System Professional This is its discrete-time approximation when the delay equals the sampling period. 0. 0 . 0. Sampled Period Τ 1 Τ 2 StateSpace 1 . Sampled Period T .7 Continuous-Time Models of Discrete-Time Systems The function ToContinuousTime . T . 1 Τ Τ 2 . 1 . In many respects ToContinuousTime behaves as an inverse function to ToDiscreteTime and applies inverse algorithms to those described previously in Section 3. 0 . Sampled Period Τ This is an example of the transformation of a system with a negative delay. 0. Again. 0.

1 . BilinearTransform. a b StateSpace 1 b . Sampled Method Out[75]= Period T . . Method CriticalFrequency Out[76]= Ωc 1 . In[72]:= ToContinuousTime % Out[72]= StateSpace T Log T T 0. 0. In[73]:= ComplexExpand Out[73]= % 0. In[76]:= ToContinuousTime % . . a Ωc Tan b Ωc Tan . a b . Log T Log T . As our sampling period T is real-valued. T Log T 1. 1 . b Ωc b Ωc Tan T Ωc 2 T Ωc 2 T Ωc 2 Ω 2 a b Tan T 2 c T Ωc b b Ωc Tan 2 . . Simplify StateSpace . we can simplify the result and see that the system is the same as the one we started with. CriticalFrequency b Ωc Tan b Ωc Tan T Ωc 2 T Ωc 2 Ωc . 1 . BilinearTransform. Description of Dynamic Systems 55 This brings the system back to continuous time. 1 b . 1. a b Out[74]= StateSpace 1 This converts the system to the discrete-time domain using the bilinear transformation with frequency prewarping. 0.3. 0 T . 0 . Sampled Period T This brings the system back to continuous time. In[74]:= lag1 StateSpace 1 b . a b b2 a b b2 . T . 0 StateSpace Here is the lag system used earlier. 1 . In[75]:= ToDiscreteTime % . Simplify a b b2 .

as an exception. which is less efficient. b2 2T b 2 . In[78]:= ToContinuousTime % . In[77]:= ToDiscreteTime lag1. resorts to the conversion using transfer functions. Sampled Method FirstOrderHold T b Period T .56 Control System Professional This is the discrete-time approximation of the lag system obtained with the first-order hold. Sampled Period T This converts the result back to continuous time. Note that in the case of the FirstOrderHold method. Method Out[78]= FirstOrderHold a b b2 . . a b 1 b . Simplify 1 T T b T b Out[77]= StateSpace . ToContinuousTime cannot use the inverse of the state-space algorithm implemented in ToDiscreteTime and. b 1 T a a b b2 . 1 . a b Simplify StateSpace 1 b .

4. when supplied with the symbolic input u t . the output response y t can computed directly from the equation yt Ct xt Dt ut (4. symbolic and simulation-based. forced) response. The preceding solution is valid for constant-coefficient matrices A . calls StateResponse first and then proceeds according to Eq. on the other hand. or homogeneous response) and the second is the zero-state (or particular. Time-Domain Response Control System Professional provides the means to analyze linear systems in both time and frequency domains. natural. are implemented and will be introduced in the following sections.1) StateResponse. and D may be time dependent. This chapter deals with the time dependencies of the state and output vectors. (4. Refer to Chapter 5 for the description of frequency-domain analysis tools. Once the state response x t is found. attempts to find the solution t Τ A xt et t0 A x t0 t0 et BΤ uΤ Τ (4. The state response for the discrete-time system xk 1 Ax k Bu k (4. whereas matrices B . therefore. which compute the state and output responses. C .3) OutputResponse.2) The first term in this equation represents the zero-input response (also called the free. The functions StateResponse and OutputResponse. Two approaches.4) . SimulationPlot.3). are capable of performing both operations and choose their mode depending on the supplied input. always uses the simulation approach.1 Symbolic Approach For the continuous-time system xt Ax t Bu t (4. unforced. 4.

the computation is always carried out in the state-space form and the transfer functions are converted to state-space form first. Because matrices C and D are not needed to compute the state response. u. however. state response x.6) Again. d. the input system can be supplied in either state-space or transfer function form. compute the output response assuming matrix d is zero For both functions.58 Control System Professional is computed according to xk Ak x 0 k j 1 Ak j B j 1 u j 1 (4. x State and output responses to an input function. In[1]:= ControlSystems` . x. var compute the state response of system to the input signals u given as functions of the time variable var OutputResponse system. and D may be time dependent. matrices B. var compute the output response OutputResponse c. Load the application. C . but matrix A may not. StateResponse system.5) and the output response is found by yk Cx k Du k (4. their presence in the input system for StateResponse is optional. and input u OutputResponse c. u. u compute the output response given the system matrices c and d.

the initial conditions can be supplied using the option InitialConditions .05 -0. In[2]:= TransferFunction s. The default value for this option is Auto . For ControlInputs Automatic.1 0.15 0. t Out[3]= 10 t 101 1 101 10 Cos 10 t Sin 10 t The built-in Mathematica function Plot plots the result on some time interval. By default. 0. StateResponse and OutputResponse assume zero initial conditions. (4. We can see that the first exponential term (the natural response) will vanish. Time-Domain Response 59 Consider a transfer function with a simple pole. Using the option ControlInputs you can effectively cycle through the subsystems that correspond to the specified list of inputs. 10 . Sin 10 t . StateRe sponse and OutputResponse apply the input function to all inputs in turn. In[4]:= Plot % .5) depends on the initial conditions on the state vector x t0 or x 0 . ResponseVariable . (4. t. Another option. leaving only the harmonic (forced) signal in the output.1 4 6 8 10 The zero-input response in Eq. The typical format is InitialConditions vector. Also acceptable is InitialCondi tions value.2) or Eq. allows you to name the internal variable if the state or output response should contain unevaluated integral(s) or sum(s).4. 0. where vector must have the same length as matrix A . which will cause all state variables to have the same initial value. If this is not the case.05 2 -0. In[3]:= OutputResponse % . 1 s 1 Out[2]= 1 1 This finds the output response to the sinusoidal input function sin 10 t . respectively.

0 0 1 . k. x1 t and x2 t represent the actual production rate and inventory level. and c is the desired inventory level. 1. k . 1 Out[5]= 1 1 k k 0 0 • . which corresponds to the internal integration variable Tau or summation index K for the continuous. The model assumes that u1 t and u2 t are the scheduled production rate and the sales rate.and discrete-time cases. As an example consider the simple production and inventory control model from Brogan (1991): 1 1 k x 0 k 0 0 1 c u2 x which can be represented schematically as shown in Figure 4.1. 0. Here is the production and inventory control model. In[5]:= StateSpace 1.60 Control System Professional matic. 0 . correspondingly. u2 t _ yt c _ u1 t k _ x1 t x2 t Figure 4. option name default value InitialConditions ControlInputs ResponseVariable 0 Automatic Automatic initial state conditions inputs to use in turn internal variable to use in the integral or sum Specifying initial conditions and the response variable in StateResponse and OutputResponse.1. Simple production and inventory control system. respectively.

1 x10 . the number of signals must be equal to the number of inputs. a function without the List wrapping is also acceptable (in the simulation mode. x10 1. In[6]:= StateResponse % 3 16 . with the production rate x1 0 equal to the sales rate and x2 0 c . In[7]:= Plot Evaluate % PlotStyle PlotRange 6 5 4 3 2 1 10 .1 2. This finds the state response for the particular value of k are specified using the option InitialConditions. The initial conditions 3 . The first is plotted as a solid line and the second as a dashed one. .k 4 Out[6]= 1. We denote x1 0 as x10. c 5. For multi-input systems. an input signal must be supplied as a vector of functions. InitialConditions x10.01 All.05 3 t 4 2. c Simplify . 50 . To distinguish the graphs for production rate and for inventory level. the same rule of correspondence between the number of input signals and the number of inputs applies—see Section 4.2).15 t 1. c 6 . Thickness . 16 t. t. but for single-input systems.6 t 4 x10 This plots the results for particular values of the initial production rate x1 0 and inventory level c . At t 0.73333 3t 4 x10. Time-Domain Response 61 Initially the system was in equilibrium. we set PlotStyle for them differently. 1. .005 . the function Subsystem (or DeleteSubsystem) may be used to select the subsystem of interest.05. Note that if multiple input signals are supplied to StateResponse and OutputResponse. State Response 20 30 40 50 We can see that (within this model) keeping initial inventory relatively large allows us to stabilize production rate at a new level. PlotLabel "State Response" . .4. Dashing . c. 8.86667 2. If only the response from one or several inputs or outputs must be studied. 0. sales increase by 10 percent.

62 Control System Professional For further examples when the same input signal is to be applied to all inputs in turn. All. t 1 60 60 3 3 1 2 1 3 1 t 2 1 3 1 t UnitStep 1 t We can use ComplexExpand to reduce the complex exponentials to the trigonometric functions.3. In[9]:= resp Out[9]= OutputResponse Subsystem % . 1 s . Here is a two-output system. 2 . UnitStep t 1 . 1 s2 2 s 10 1 Out[8]= 2 1 2 10 This selects the subsystem associated with the second output and computes the output response to a delayed step function. In[8]:= TransferFunction s. see Section 4. In[10]:= ComplexExpand resp Out[10]= 1 1 1 t UnitStep 1 t Cos 3 1 t 10 10 1 1 t Sin 3 1 t UnitStep 1 t 30 This simplifies the result. UnitStep 1 t In[11]:= % Out[11]= Simplify 1 30 t 3 t 3 Cos 3 3 t Sin 3 3 t UnitStep 1 t .

In[12]:= Plot % . the components are a ramp function and a decaying exponential e t both sampled at a unit-time interval. 1 . Sampled Period 1 Out[13]= 1 In this input vector. k k. 0. In[14]:= inputs Out[14]= k.02 1 2 3 4 5 6 7 Here is a discrete-time system. t. 4 1. 2 . In[13]:= system StateSpace 1 .12 0. 1 . Time-Domain Response 63 This is a plot of the response.4. 7 .1 0.06 0. PlotRange All . 0. 2 . 1 DiagonalMatrix 1 2 1 4 1 1 1 0 4 1 0 1 2 2 0 0 2 4 1.08 0.04 0. k . 1 .

% Simplify 1 9 2 4 1 2k 4 3 9 43 k 27 23 2 k 2 3 2 k 47 13 25 2 k 55 32 k 3 25 2 k k 16 5 2k 11 13 2 17 33 k 3 25 2 k k 32 1 k 10 35 k 13 42 k 3 42 k k k k The same result can be obtained directly. inputs. k. the expression can be simplified with ComplexExpand. InitialConditions 1. 2 ComplexExpand Out[17]= Simplify 1 9 2 4 1 2k 4 3 9 43 k 27 23 2 k 2 3 2 k 47 13 25 2 k 55 32 k 3 25 2 k k 16 5 2k 11 13 2 17 33 k 3 25 2 k k 32 1 k 10 35 k 13 42 k 3 42 k k k k . 2 . 2 ComplexExpand Out[15]= Simplify 1 9 2 4 1 2k k 1 k 32 16 2 1 2k 4 3 2 9 23 2 k 2 3 47 5 25 2 k k 4 k 2 k 3 2k 10 3 54 32 k 11 5 25 9 25 2k 7 2k 2k k 55 3 . InitialConditions 1. In[15]:= StateResponse system. In[17]:= OutputResponse system. In[16]:= OutputResponse Out[16]= 1. Since k is a real valued parameter. k. 2 k 9 4 2 k k 17 32 9 42 17 3 k 2 k k 3 42 k k 2 k 1 9 3 1 k 4 2 k 4 3 47 27 2k k k 2 55 3 32 3 2k 3 23 3 k 2k k k 16 11 2 k 32 10 3 4 k 4 34 1 k Once the state response vector is available. inputs. the output response can be found.64 Control System Professional This is the state response for the particular set of initial conditions.

The result then appears in terms of InterpolatingFunction objects. the system is first converted to discrete time. the same input syntax causes the input functions to be sampled on a uniform time grid. (4. The grid starts at time t 0 (or k 0 ). This regime is referred to as the analog simulation. StateResponse and OutputRe sponse can be used to carry out a simulation of the state and output responses. and the number of rows must be equal to the number of inputs. however. For single-input systems. Note that if you supply the input signals in the form of input sequences for the continuous-time system. . There are.4) starting from the initial conditions for k 0 and then iterating for k 1.1. (4. notably Sampled may be used to set the sampling period. … . Time-Domain Response 65 4. 0. is given instead of the variable t. tmax}. and then the simulation is performed. (4. 2. When input signals are discretized. In such cases. For continuoustime systems. Instead of supplying the input functions and the range for the time variable.2) or Eq. To facilitate the conversion. For discrete-time systems. the input sequences u must represent a matrix each row of which corresponds to a signal at one input. The discrete simulation then proceeds in a straightforward manner according to Eq. as described in Section 4. or simply {t. tmax}.5) is attempted. the matrix may be reduced to a vector. the simulation is based on the approximate numerical solution of the underlying differential equations using the built-in function NDSolve and is invoked when the range for the time-domain variable in the form of {t. the response functions accept the options pertinent to ToDiscreteTime. a symbolic solution based on Eq. you can give the input sequences explicitly. situations when the symbolic solution is either impossible or just too time consuming.4. and Method to choose the conversion method.2 Simulating System Behavior Once input signals are supplied to StateResponse or OutputResponse in the form of a function in the specified variable.

Let us simulate the output of the T-bridge network in Figure 4. u. tmax find the approximation of the state response for the time variable t in the range from 0 to tmax OutputResponse system. t. u find the output response for the system matrices c and d from the simulated state response x and input sequences u OutputResponse c.2. u find the state response to the discrete input sequences u OutputResponse system. . u find the output response to the specified input sequences OutputResponse c. t. d. C2 R Vi C1 R Vo Figure 4. x.66 Control System Professional StateResponse system. x find the output response when d is a zero matrix State and output responses in the simulation mode. A T-bridge.2 to a square-wave signal. u. tmax find the approximation of the output response for the specified period of time StateResponse system.

3 . PlotStyle Thickness . c2 1 10 Out[19]= 2 5 51 5 This utility function creates a square wave with period Τ.. The output is a list of simulated responses. In this case. 2 1 0. t Sign Sin 2 Π t This plots the input (the solid line) and the output (the dashed line) signals.01 . 1 . . t. In[20]:= square t_. r2 r 2 c1 c2 s 2 2 r c2 s 1 c1 c2 s 2 r c1 2 r c2 s 1 Out[18]= r2 r2 c1 c2 2 2 r c2 1 c1 c2 2 r c1 2 r c2 1 Here is the transfer function for a given set of parameters. % . 0.0075 .5 3 -2 . t. square t. 1 . In[21]:= outs Out[21]= OutputResponse bridge1. c1 10.05.5 2 2. Τ_ : Sign Sin 2Π t Τ Supplying the input signal together with both the time variable and the duration of time period causes OutputResponse to perform in the simulation mode. 3 10 InterpolatingFunction 0. Dashing . .4. In[19]:= bridge1 2 % . 3. Time-Domain Response 67 This is the transfer function for the T-bridge network. there is only one element in the list since we are dealing with a single-output system.5 -1 1 1. In[22]:= plot Plot Evaluate Flatten square t. r 1 1 1. In[18]:= bridge TransferFunction s.

SimulationPlot. 1..029691. 1. 0..5 3 -2 . Table square t.031623 The result can be drawn with the built-in Mathematica function ListPlot (MultipleListPlot for multiple outputs).9847. there is only one vector in the list.0717262. In[25]:= SimulationPlot % . described later in this section.0777489. In this case. t. PlotStyle Hue 0 .152404. In[23]:= Τ .278961. 1.94101. 0.5 2 2.0276691.73131.374159. 0.082659.729233. 0. however.238155. we use a utility function. 0. 0. 0. 0. 0.0566053. 0.0788588.1. Sampled Period Τ 0.281026. 0.94302. 0.236818.688021. 1. Τ Out[24]= . 0. 0. 3. Sampled 2 Period Τ . 1 . 0.688127. 0. 0.98263. 1 0. 1.0807077. 0. 0. 0. 0.121112.5 -1 1 1. 0. In[24]:= OutputResponse bridge1. 1. Note that the sampling period is supplied to OutputResponse to specify the distance between points in the input vector (produced by the Table function). this is necessary to convert the system to discrete time properly.690011. The output is a list of simulated responses. Here.68 Control System Professional This produces the output of the circuit using the discrete simulation with the sampling period Τ.0696968.240126. 0.12316. 0.94511. 0.

For the purpose of simulation.5 3 -2 SimulationPlot v plot the vector or list of vectors v SimulationPlot system. t.4. % . Clearly. u simulate the output response to the discrete input sequences u Output response simulation plots.5 -1 1 1. in which case the discrete simulation takes place. SimulationPlot provides a convenient way to compute and plot the output response of a system with a single function call.5 2 2. The syntax for SimulationPlot closely resembles the one for OutputResponse. . u. SimulationPlot produces the analog or discrete simulation depending on the type of input system. Time-Domain Response 69 This compares the results of the analog and discrete simulations. 2 1 0. You can also force the discrete simulation of a continuous system by giving the option Sampled to SimulationPlot. As with OutputResponse. In[26]:= Show plot. the chosen sampling period is too large to accurately approximate the behavior of the circuit. unless the input signal is specifically given as input sequences. The option is ignored if the input signal is already sampled. the sampling period cannot take symbolic values. tmax plot the simulated output response of system to the input u as a function of t from 0 to tmax SimulationPlot system.

square t. List Plot.5 -1 1 1. t. 1 0.5 3 . 3 . The initial conditions are determined by the components of the matrix B . c1 10. you can mimic the Dirac delta function with a continuous impulse of small width. To facilitate computation of the output response and conversion to discrete-time. c2 1 . In[27]:= SimulationPlot bridge . simulation of the response of continuous-time systems to the Dirac delta function is always performed in the discrete-simulation mode.5 0.5 -0.5 2 2. the impulse response is simulated as a decay of some initial state conditions in the absence of the input signal. r 1. if necessary. See the impulse response examples in Section 4. Additionally.70 Control System Professional As an exception. SimulationPlot accepts the options for the built-in plotting functions Plot. 1 . SimulationPlot takes the options for OutputResponse and ToDiscreteTime.3. Note that. This is an analog simulation of the output response of the bridge circuit to a squarewave input signal for another set of parameters of the bridge. for a continuous-time system. To carry out analog simulation. or MultipleListPlot and routes the options to appropriate functions.

To ensure the correct result of NDSolve.1 . 1 0.4 0. 1 0.8 0. c1 10. 1 .4. 1 . c2 1 . 5 . c2 % . Sampled Period .5 -1 1 1.5 3 This is a square pulse function. In[29]:= UnitStep Out[29]= t 1 2 1 t 1 t 1 t 2 UnitStep This simulates the behavior of the bridge for yet another set of parameters.5 2 2. t. In[30]:= SimulationPlot bridge . c1 1.6 0.2 2 3 4 5 . square t. Time-Domain Response 71 This is a discrete simulation for the same parameters. r 1. MaxRelativeStepSize . In[28]:= SimulationPlot bridge .2 1 -0. The input system is discretized using the specified sampling period. 3 . which is called internally to compute the output response. r 1.1 .5 -0. we limit the value of MaxRelativeStepSize.5 0. t.

4 -0. c2 1 .980382 0. it is a simple matter to get it to plot the state response as well. where Α is the angle of attack and the input variable u is ∆s .72 Control System Professional This converts a particular bridge circuit to the discrete-time domain.6 0.3 deg from zero initial conditions using the discrete-time approximation with sampling period 2 seconds. 0.2 5 -0.3) for small angles Θ and constant velocity v 25 ft/s as described in Dorf (1992). In the rest of the section we give an example. and x3 Α.2 -0. Let us consider the linearized state-space system for the depth control problem of a submarine (see Figure 4. .4 0. Table Random . Note that SimulationPlot picks up the sampling period from the input system. the deflection of the stern plane.980382 0. r Out[31]= 1. c1 10. In[32]:= SimulationPlot % . This can be done by adding an appropriate matrix C to the StateSpace object composed of only matrices A and B . 0. Sampled Period .800931 s 0. We will compute the state response of the system to a step command for the stern plane of 0. Sampled Period 0.5.0563042 . We will assume that the state variables are x1 Θ. x2 Θ .0563042 s 0. 100 .6 10 15 20 Although SimulationPlot generates only output response curves.2 s Here is the response of this circuit to random noise.982142 s 0.2 TransferFunction s. In[31]:= ToDiscreteTime bridge .

0.0933806 0.0071. 0 0.3 .79374 0.12 0.111 0. 1 0 0. In[33]:= StateSpace 0 .072 1 0.3.559313 Out[35]= 0.01 0 0 0 1 0 0 0 1 2 .095 . . 1.160081 0.3 . . DiagonalMatrix .0071 0 Out[34]= 1 0. 1 .07 0 0.01 0 0 • This finds the discrete-time approximation of the system. In[35]:= ToDiscreteTime % .3 0 0 1 0 0. .072 0 0 0 0.072 • To compute the state response.4.12 0.0071 0 0. Time-Domain Response 73 Α Θ Velocity v Center of gravity ∆s Figure 4.12 . In[34]:= Insert % . Here is the two-matrix state-space model.07 0 1 0 0 0.167272 0. 0 .095 0.111. Out[33]= 0 0.0986635 0 0 0 0.095 0. we insert a diagonal matrix C containing some weighting coefficients.986793 1. 1.183892 0.000761621 0.157165 0. Depth control of a submarine. .01. Sampled Period 2 0.800561 0.111 0.07.0127356 0. . .

State Response of Submarine 0.0015 -0. Impulse. In[38]:= OutputResponse s2 Ωn .001 -0. In[37]:= s2 Ω_. and ramp responses. 1 . impulse. We numericalize the input function to avoid supplying Degree (°) in symbolic form. it is easy to investigate typical problems such as step. In[36]:= SimulationPlot % . Ζ_ TransferFunction s. This section presents some examples. Note that SimulationPlot automatically picks up as many points as needed for the discrete simulation to cover the specified time frame. UnitStep t .3 ° .74 Control System Professional This plots the state response. N .0005 -0.002 40 60 80 100 120 140 4. and Other Responses Using the general time-domain response functions defined in this chapter. 150 . t.001 0. Let us create the second-order system with the natural frequency Ω and damping ratio Ζ.3 Step. Ω2 s 2 2 Ζ Ω s Ω2 Out[37]= 2 Ω2 2Ζ Ω Ω2 Here is the symbolic form of the step response for the critically damped case. t Out[38]= t Ωn 1 t Ωn t Ωn UnitStep t . PlotLabel "State Response of Submarine" .0005 20 -0.

t.5 2 1.75 1.750 . In[39]:= SimulationPlot s2 1.4. .5 0. 0. 1. Second Order System 0 Ζ 10 1 1. PlotLabel 1.5 0 20 15 10 5 t 0 .920. t 1 .5 y t 1 0. 0. PlotRange All. "Ζ 10". 1. "y t " . 1.5 .5 1. We can see that the response changes from pure oscillation for Ζ= 0 (undamped case) to the exponential for Ζ = 1. 3. UnitStep t . 50 . ViewPoint 0. Ζ .75 0. PlotPoints 50 . Ζ. PlotRange All . In[40]:= Plot3D Evaluate OutputResponse s2 1. Underdamped Second Order System 20 30 40 50 This plots the response for various values of the damping ratio Ζ. PlotLabel "Second Order System".110.25 10 "Underdamped Second Order System". Time-Domain Response 75 This is the analog simulation of the step response for the system with the natural frequency equal to unity and a particular value of Ζ from the underdamped case region.5 (overdamped case). AxesLabel t. 20 .5 0. t.1 .25 1 0.

UnitStep t .76 Control System Professional Here is a third-order system. Ζ_. ComplexExpand Simplify 1 2 . p Out[42]= 2 Ω2 2Ζ Ω Ω2 This is its step response. In[43]:= OutputResponse % . % . p_ TransferFunction s. UnitStep t . In[41]:= s3 Ω_. 1. 2. . Ζ 1 2 . Ω 1. &. t 3 t 2 Out[45]= 1 3 t 2 4 3 t 2 Cos 3 t 2 3 Sin UnitStep t . the system degenerates to the second-order one. s2 Ω2 p 2 Ζ Ω s Ω2 s p Out[41]= p For Β p Ω2 2 2ΖΩ Ω2 . UnitStep t . 1 . t ComplexExpand Simplify Out[43]= 1 3 t 2 3 t 2 3 Cos 3 t 2 3 Sin 3 t 2 UnitStep t This is the step response for Β In[44]:= OutputResponse s3 1. 1 2 . In the next inputs. 2 In[42]:= MapAt Limit #. here is the step response for Β In[45]:= OutputResponse s3 1. t ComplexExpand Simplify 1 2 Out[44]= 1 t 2 t 2 Sin 3 3 t 2 UnitStep t Finally. we compute the step response for a few values of the parameter Β defined as Β p ΖΩn for the particular case of Ζ 1 2.

t. Dashing . 2 . 2 . 0.8 0. and . t. Ramp Response 17. 0.02 PlotLabel "Step Response" .025 . . Time-Domain Response 77 This combines the three previous results. 1 . 1. 1. 8 . 1.02. we include the dashed straight line. .03 . %%% . The solid.4.. Step Response 1 .5 15 12. and dashed-dotted lines represent the respective curves for Β 1 . . Consider a two-input. 20. 1. PlotStyle Thickness . second-order system. dashed. .5 5 10 15 20 0. To visualize the steady-state error.05. 20 . 20 . Epilog Dashing 0. 0 .6 0. 2 Out[48]= 0 1 1 2 1 1 1 1 0 2 0 0 • .2 2 4 6 8 This is the simulation of a ramp response of the third-order system for a particular set of parameters. %% . 1.5 5 2. 1 .5.005 . Line PlotLabel "Ramp Response" . 0 . t. . In[46]:= Plot Evaluate Join % . In[48]:= system StateSpace 0. In[47]:= SimulationPlot s3 1. Dashing .4 0. 0.01.5 10 7.

005 . "Impulse Response" .5 1 0. respectively. t Out[49]= ComplexExpand UnitStep t UnitStep t . 0. In[49]:= OutputResponse % .78 Control System Professional This applies the impulse function to all inputs in turn and simplifies the result. In[50]:= Plot Evaluate Join PlotStyle PlotLabel % .5 2 -0. DiracDelta t . Note that we have joined the lists in the previous result to plot all curves in one graph.02 . PlotRange All. Impulse Response 2 1. Simplify 1 7 1 7 t 2 11 7 Sin 7 t 2 7 t 2 5 7 Cos 7 Sin 7 t 2 7 t 2 t 2 7 Cos Here is the plot of the impulse responses. Dashing .05. t. 10 .5 -1 4 6 8 10 . The impulse responses for the first and second inputs are shown as a solid and a dashed line. . Thickness .

ControlInputs PlotPoints 50. Sampled Period . t. DiracDelta t . In[51]:= SimulationPlot system. : Α Α t Τ UnitStep t Τ In[53]:= SimulationPlot system.5 2 -0. Analog Simulation of Impulse Response 2 1.5 1 0. Τ_: 0. t. 10 . the parameter Τ determines the offset of the impulse along the time axis and Α determines the width of the impulse. we can introduce an approximation of the Dirac delta function by an impulse with a finite width. ControlInputs 1.5 4 6 8 10 . In[52]:= myDiracDelta t_. Α_: 100. myDiracDelta t . 10 . PlotLabel 1.5 1 0.4.25 .5 -1 4 6 8 10 To perform the analog simulation. "Analog Simulation of Impulse Response" . As mentioned in Section 4. SimulationPlot performs the discrete simulation for the input signal in a form of the Dirac delta function. PlotLabel "Discrete Simulation of Impulse Response" . Time-Domain Response 79 This is the simulated (rather than computed symbolically) impulse response for the first input.2. Discrete Simulation of Impulse Response 2 1. In myDiracDelta.5 2 -0.

The input system may be in state-space or transfer function form and may or may not depend on the parameter k. Load the application. 5. and NicholsPlot) also accepts the body of the transfer function object as the input system.1 Root Loci RootLocusPlot system. NyquistPlot. s 1 s2 Out[2]= 1 2 . If you already have Electrical Engineering Examples installed. In[1]:= ControlSystems` Here is a double-integrator system. Both discreteand continuous-time systems can be analyzed using this function. kmax generate a root locus plot for system when parameter k varies from kmin to kmax Plotting the root loci. As in the Electrical Engineering Examples. typically (but not necessarily) the gain. the proper extensions have been added to allow handling of the control objects. k. varies. the parameter is assumed to be the gain of the open-loop system. Classical Methods of Control Theory This chapter introduces several classical control theory tools available in Control System Professional.5. all the functionality you are accustomed to should still be available. RootLocusPlot (as well as BodePlot. Some of the functions described here are routines adapted from the Mathematica Applications Library package Electrical Engineering Examples. In the latter case. kmin. In[2]:= TransferFunction s. RootLocusPlot graphs the roots of the characteristic equation (the root loci) while some parameter.

and the actuating or error signal is e. 10 .5. Note that because the system does not contain the parameter k. Im 1 0. 0. for which we find coefficients k such that the damping ratio of the dominant closed-loop poles is 0.1. As an example of using RootLocusPlot in feedback design. k. it is assumed to be the gain. r e Gs 20 ss 1 s c 4 Hs 1 ks Figure 5.5 Re -8 -6 -4 -2 -0. . option name default value PoleStyle ZeroStyle PlotPoints Automatic Automatic 10 styles for poles styles for zeros number of times the parameter is sampled Options specific to RootLocusPlot.4 (cf. Classical Methods of Control Theory 81 This plots the root loci. In[3]:= RootLocusPlot % . consider the system shown in Figure 5. the output or controlled signal is c.1.5 -1 RootLocusPlot accepts the options for ListPlot as well as several specific options that define the appearances of poles and zeros. Ogata (1990)). The input or reference signal is denoted as r. Example system for root locus construction. and the number of points at which to sample the range for k.

20 1 20 s s 1 s 4 Out[4]= 4 This defines the transfer function H s . 1 k s k 1 The open-loop system is formed by the serial connection of the blocks. . h Out[6]= 20 k 1 1 4 The suitable values of k correspond to intersections of the root locus curve with the straight line depicting the locus of poles of the generic second-order system Hs Ω2 n s2 2 ΖΩn s Ω2 n for the chosen value of Ζ . In[6]:= SeriesConnect g . In[4]:= g TransferFunction s. where Ωn is the natural frequency and Ζ is the damping ratio. In[5]:= h Out[5]= TransferFunction s.82 Control System Professional This describes the block with the transfer function G s .

PlotLabel "Step Response" & results. Ω 1 Ζ2 . 1. Ζ Ω. 2 . points 20. and then again between points 14 and 15. we get some estimates of parameter k that provide the required damping ratio.005 . The first system exhibits a faster response with reasonable overshoot compared to the second system. In[9]:= results Out[9]= FeedbackConnect g . h 20 12. kmax points 1 # 1 & 5. just to display the line in some bounds. PlotRange All.5. In[10]:= SimulationPlot #. 0.4211 Thread k % Simplify 3 5 2 20 3 5 2 20 This simulates the step responses for the chosen values of k. Ζ PointSize . Classical Methods of Control Theory 83 This plots root loci together with the line for the chosen value of Ζ. Ω 10. PlotPoints 20. 14. 0 .5 off the respective points. Epilog Line 0. 5 .42105 Here are the corresponding expressions for the closed-loop system. The natural frequency Ω has been picked quite arbitrarily.4 PlotLabel "Root Loci" .2. the line intersects the root locus branch somewhere between 5 and 6 points. List 20 32. Root Loci Im 6 4 2 -4 -3 -2 -1 -2 -4 -6 Re Roughly estimating that the intersections are 0. In[8]:= kmax 2.2 and 0. Note that in this case parameter k is not the gain.5 Out[8]= 0. . As we can see. k. UnitStep t . t.442105. PoleStyle .8421 . In[7]:= RootLocusPlot % . . .

4 0.2 1 2 3 4 5 RootLocusAnimation complements RootLocusPlot and provides information on the direction of the evolution of root loci. .8 0.2 1 2 3 4 5 Step Response 0.6 0. The syntax for the two function is identical.4 0. k. RootLocusAnimation system. the only exception is that the option PlotPoints is meaningless for the animation and is replaced by Frames.8 0. kmax generate a sequence of root locus plots for system with parameter k running from kmin to kmax Animating the root loci. which determines the number of frames the animation should contain.84 Control System Professional Step Response 1 0. kmin.6 0.

The magnitude response is the ratio of the span of the output sinusoid to the input one. By frequency response. we mean the response to purely sinusoidal signals. BodePlot accepts a system in state-space or transfer function form in the continuous. Consider again the second-order system. and the phase response is the phase difference between them. Also optional is the naming of the frequency variable in supplying the desired frequency range.2 The Bode Plot 5. in the form of GraphicsArray. If it is not supplied. w.2. {w. the frequency dependence of the magnitude and phase response. wmax . Like all other plotting routines in Control System Professional. Ζ_ TransferFunction s. respectively. Bode Plot depicts. In[11]:= s2 Ω_. wmin. BodePlot system generate a Bode plot for system BodePlot system.1 The Basic Function The Bode plot is the first example of the frequency response analysis tools that we consider in this chapter. Ω2 s2 2 Ζ Ω s Ω2 . wmin. BodePlot will try to determine a suitable range automatically. Specifying the frequency range is optional. wmax generate a Bode plot for frequency w running from wmin to wmax Creating the Bode plots. This should come as no surprise if you recall that the variable in TransferFunction is an internal parameter and does not have any meaning outside the scope of the TransferFunction object. wmax works as well as {wmin. The magnitude and phase dependencies are plotted in the double-logarithmic and semilogarithmic scales.5.or discrete-time domain. . Classical Methods of Control Theory 85 5.

2 0.5 1 2 5 10 Frequency Rad Second This gives the Bode plot over a wider frequency range. In[12]:= BodePlot s2 1. The frequency range is not specified and is determined automatically. 1 .1 1 10 100 Frequency Rad Second 0 -25 -50 -75 -100 -125 -150 -175 0. Phase deg dB -25 -50 -75 -100 -125 -150 0.2 0.1 0. 1 0 -10 Magnitude -20 -30 -40 0. 100 0 -20 Magnitude -40 -60 -80 0. BodePlot tries to unwrap the phase in order to present a smooth phase curve should branch cuts be detected.01 0.1 1 10 100 Frequency Rad Second . In[13]:= BodePlot s2 1. .01.5 1 2 5 10 Frequency Rad Second . By default.86 Control System Professional This is the Bode plot for the unity natural frequency and damping ratio.01 0. The value Automatic for the option PhaseRange typically Phase deg dB .1 0.

they will apply the first item to the magnitude plot and the second to the phase plot.5. you may wish to use linear spacing between samples instead of the default logarithmic spacing. for example. AspectRatio. Another way to eliminate possible errors in phase unwrapping is to increase the number of points at which the transfer function is sampled by changing the option PlotPoints. AxesLabel. Because a plot consists of two subplots. max . certain options—namely. PhaseRange will bring the graph into the specified range based on the principle that all 360 ° intervals are physically equivalent. This can be done by using PlotSampling LinearSpacing. FrameTicks. a plot does not have any part in the range specified in PlotRange. 180}. AxesOrigin. and PlotRange—are handled in a nonstandard fashion. BodePlot operates on MIMO systems by grouping the graphs related to each input and . Ticks. Epilog. Classical Methods of Control Theory 87 (unless the gain and phase margins are to be plotted—see below) corresponds to such behavior. with the new limits min and max given in degrees. this is also useful if rapid changes in the magnitude or phase curve are not plotted satisfactorily otherwise. works quite differently. For some plots. option name default value PhaseRange PlotPoints PlotSampling Margins MarginStyle Automatic 30 LogSpacing False Automatic the range for plotting the phase curve how many points of the graph are required distribute points evenly on the logarithmic or linear scale whether to compute and show gain and phase margins a list of lists of graphics primitives to use for each margins Options specific to BodePlot. FrameLabel. GridLines. The option PhaseRange is in no way a substitute for the standard Mathematica plotting option PlotRange and. PhaseRange allows also to shift the graph to another phase range which must be specified in the form PhaseRange {min. Very rapid changes in phase dependence may cause errors in phase unwrapping. On the other hand. in fact. The safest (and fastest) choice is PhaseRange {-180. in which case no additional processing of the phase is performed. Prolog. Other options are applied equally to both plots or to GraphicsArray itself. nothing will appear in the graph. Given a list of two items as input. BodePlot also accepts options pertinent to ListPlot and GraphicsArray. If.

01 . . . 20 10 0 -10 -20 -30 -40 0. This plots the gain versus Phase deg Magnitude . Table 1 s2 2 s Ζ 1 .6 1 1 1 1 This is its Bode plot.88 Control System Professional plotting as many graphs as there are inputs. .1 1 0. Here is a single-input. Thickness .01. PlotStyle Dashing dB Dashing . . .8. This can be changed using the standard PlotStyle option. Ζ. In[15]:= BodePlot % .1 0.1.1 0. multiple-output system.25 2 2 Out[14]= 2 2 1 0. . . of which each input-output pair is a second-order system with a different value of damping ratio. By default.05. you can always supplement the results from BodePlot with diagrams obtained using the built-in Mathematica functions.05 .001 . .5 1 2 5 10 Frequency Rad Second Of course. In[14]:= TransferFunction s.2 0.2 0. 10 .05.01 .1 1 1. You may specify options for each input differently by wrapping option values into additional lists. the graphs related to different outputs in one plot are distinguished by the Hue value. Dashing .5 1 2 5 10 Frequency Rad Second 0 -25 -50 -75 -100 -125 -150 -175 0. .01.05. PlotPoints 200 .6 1 1.

1. 20 . Ω 0 5 5. 60 Ζ 50 20 0 20 40 Gain dB 0 -20 -10-5 10 10 Log 10. "Ζ 50". Ζ. Ω . Ζ 50. To make a more attractive graphic. In[16]:= ParametricPlot3D Evaluate 10 Log 10.5.005. Ω ". PlotPoints 40. Abs s2 1.5 . AxesLabel "10 Log 10. we picked up some coefficients for the x and y axes. The margins can be computed using GainPhaseMargins and displayed on the Bode plot using the option Margins. 20 Log 10. The MarginStyle option can be used to specify the graphics primitive for margins in a manner similar to the PlotStyle option. If Margins is set to False. the margins are not computed (the default).2. Classical Methods of Control Theory 89 frequency and damping ratio for the generic second-order system. Boxed False . Ζ Ω 1.2 Gain and Phase Margins Important insight into the stability of the closed-loop system can be achieved by analyzing the gain and phase margins of the system before closing the loop. 5 . which accepts either the result of GainPhaseMargins (if it was computed prior to the call to Bode Plot) or True to compute the margins transparently. . "Gain dB " . 1 . 0. Ω. .

Margins True compute and show gain and phase margins Gain and phase margins. Example system for gain and phase margin selection. In[17]:= g TransferFunction s. 1 20 1 s 20 Out[18]= .90 Control System Professional GainPhaseMargins system BodePlot system.2. Margins compute gain and phase margins for system margins show precomputed gain and phase margins on the Bode plot BodePlot system. r e k s s s 50 10 c 1 s 20 Figure 5. As an exercise. This defines the plant. let us select gain k for the system shown in Figure 5. In[18]:= h TransferFunction s.2 such that the phase margin is greater than 30 ° and the gain margin is greater than 10 dB (Brogan (1991)). k 50 10 k s 50 s s 10 Out[17]= This is the regulator.

In[19]:= SeriesConnect g .k 4.5. the value by which the current gain at this frequency is less than zero. which would give the desired phase margin of 30 ° if we increase the gain by 24 dB. In[20]:= BodePlot % . . h Out[19]= k 10 50 20 This is the Bode plot for a particular value of k. which would satisfy the specification. 20 0 -20 -40 -60 -80 -100 0. That would lead to a gain margin somewhat more than 10 dB at Ω 24.1 1 10 100 1000 Frequency Rad Second -100 deg Phase Magnitude dB -120 -140 -160 -180 0.1 1 10 100 1000 Frequency Rad Second We observe that at Ω 10 the phase is around 150 ° . Classical Methods of Control Theory 91 Here is the open-loop system formed from the above blocks. the current crossover frequency. .

which can be done either analytically or from the interpolation of the response curve computed on a frequency grid. which accepts values Analytic or Interpolation. default value Method Automatic whether the margins should be computed analytically or by interpolation Options specific to GainPhaseMargins.1 1 10 100 1000 Frequency Rad Second Gain and phase margins are determined by finding the crossover points of the frequency response. you may set the phase range differently. In[21]:= BodePlot %% . Using the option PhaseRange.1 1 10 100 1000 Frequency Rad Second -100 deg Phase option name Magnitude dB -120 -140 -160 -180 0. the margins are always computed via interpolation. 0} degrees. The method can be selected using the option Method. When margins are to be plotted. 40 20 0 -20 -40 -60 -80 0. We can see that the specifications are met. . Margins True . 1024 20 . with the gain and phase margins displayed. The freedom of choosing the method is restricted to continuous-time systems.k 4. for discrete-time systems. the Automatic value for PhaseRange corresponds to the phase range of {-360.92 Control System Professional This is the Bode plot for the increased gain.

NyquistPlot system generate a Nyquist plot for system NyquistPlot system. Note that jumps may appear in the Nyquist plot when the frequency response changes rapidly. In[22]:= TransferFunction s. If no frequency range is specified. 1 s 1 2 1 Out[22]= 1 2 . wmax generate a Nyquist plot for frequency w running from wmin to wmax Creating the Nyquist plots. w.5.3 The Nyquist Plot The Nyquist plot allows us to gain insight into the stability of the closed-loop system by analyzing the contour of the frequency response function on the complex plane. Consider an open-loop transfer function. wmin. Classical Methods of Control Theory 93 5. That may be corrected by choosing a higher value for the PlotPoints option. NyquistPlot will try to plot the graph for both negative and positive frequencies (a task that can be avoided by plotting only half of the curve and then reflecting it over the real axis).

2 0.6 Like the other frequency response plotting functions. that such systems must be supplied in transfer function form and the desired frequency range must be specified.4 -0.8 1 Re -0. Im 0. 1 s Out[24]= 1 . however.4 0. since conversion from the state space works for linear systems only and so does the routine that determines the default frequency range. NyquistPlot is capable of handling nonpolynomial transfer functions.94 Control System Professional This is the corresponding Nyquist plot. Here is a system consisting of a transport lag e the unit values of Τ and T . the closed-loop system will be stable.6 0.2 0. The contour does not encircle the 1. s Τs and first-order lag 1 1 T s . Note. 0 point and the transfer function does not have unstable poles.6 0. therefore.4 0. for In[24]:= TransferFunction s.2 -0. In[23]:= NyquistPlot % .

9 2 s s2 s 2 Out[26]= 0. PlotPoints Im 0.4 -0.4 The Nichols Plot NicholsPlot gives yet another way to depict frequency response information—by plotting the magnitude versus phase curve in the semilogarithmic scale. . In[26]:= TransferFunction s.2 0. Here is an open-loop transfer function for the spacecraft attitude control system.6 -0.05 10 s 1 s .2 Re 500 .5. 5. In[25]:= NyquistPlot % . wmin. 20 .8 0.6 0.8 1 Like BodePlot. wmax generate a Nichols plot for frequency w running from wmin to wmax Creating the Nichols plots.4 0.2 -0. -0. .005 10 2 1 . as well as options pertinent to ListPlot.09 3 0. NicholsPlot system generate a Nichols plot for system NicholsPlot system.2 -0.005 . Classical Methods of Control Theory 95 This is the Nyquist plot for this system. NyquistPlot accepts the PlotPoints and PlotSampling options.4 -0. w. compensated with a PID controller from Franklin et al.01. (1991).

T being the sampling period). In[27]:= NicholsPlot % . as well as the options pertinent to ListPlot. SingularValuePlot system generate a singular-value plot for system SingularValuePlot system. 5. w. PlotSampling. wmax generate a singular-value plot for frequency w running from wmin to wmax Creating the singular-value plots. as does BodePlot. BodePlot returns a total of p q individual Bode plots. and PhaseRange. SingularValuePlot provides a means to generalize this information by generating a plot of the frequency dependence of singular values of the transfer matrix evaluated at different values of s j Ω (or z e j Ω T for discrete-time systems. . wmin. and consequently. 0 dB Gain -20 -40 -60 -80 -180 -170 -160 -150 -140 -130 -120 Phase degrees NicholsPlot accepts the options PlotPoints. the amount of information quickly grows with the number of inputs p and outputs q .96 Control System Professional This is the Nichols plot for the system.5 The Singular-Value Plot For MIMO systems.

. . 0. 0 .02 .5. 1 1 1 0.02 0 1 0. 0.050.005.01.01 0 • This plots the frequency dependence of the singular values. In[28]:= StateSpace 1.01.002 . 1 .01 0 Out[28]= . 0.004.5 1 .001 0. dB Singular Values 0 -10 -20 -30 -40 0. 0 .1 Rad Second 0.1). .01 0 Frequency 0.004 0. Classical Methods of Control Theory 97 Here is a mixing tank system (see Section 10. In[29]:= SingularValuePlot % . .002 0 0 0 0 0 0.

The interconnection procedures do not necessarily result in minimal state models. SeriesConnect. The subsystems can be supplied in either state-space or transfer function form. The auxiliary functions Subsystem. and states of both systems.1 Connecting in Series SeriesConnect finds the description of an aggregate system composed of two subsystems connected in series.6. ParallelConnect . Note that not all outputs of the first system system j . If both systems are entered as transfer functions. It is. and MergeSystems provide a means of manipulating system contents other than interconnections. 6. If of two subsystems one is in state space. In elementary and other interconnections. DeleteSubsystem. The aggregate has inputs of system i .and discrete-time objects.1 Elementary Interconnections Elementary interconnections include serial (cascade). therefore. outputs of xi . so some model reduction techniques (see Chapter 8) may be appropriate.1. the responsibility of the user to perform the necessary conversions prior to calling the interconnecting functions. 6. and feedback interconnections. the first variable encountered is used in the result. x xj should necessarily be connected to inputs of the second system. and StateFeedbackConnect perform elementary interconnections. parallel. as shown in Figure 6. GenericConnect handles these as well as more convoluted cases.1. . if discrete-time) to produce a meaningful result. FeedbackConnect. but the systems to be connected must be in the same domain (and sampled at the same rate. All the functions accept (and operate analogously on) continuous. the other is transformed to that form too. inputs and outputs are referred to by their index in the input or output vectors. this is the form of the resultant system. System Interconnections This chapter introduces the tools needed to construct a composite system based on a given system topology and descriptions of the blocks. If the transfer functions of the blocks are defined using differently named variables. and a state-space object is returned.

d11 . o. system2 . c22 . B2 A12 A22 B1 0 0 B2 • A11 . yi uj System j yj SeriesConnect system1 . d21 . C11 . a21 . A12 . system2 find the aggregate system by connecting all outputs of system1 to corresponding inputs of system2 SeriesConnect system1 . Series interconnection. b12 . b22 . b11 . In[2]:= ss1 StateSpace a11 . c11 . c12 . C12 . A22 . i1 . b21 . a22 . system2 . Load the application. .1. System Interconnections 99 ui System i Figure 6. A21 . o2 . i2 .6. d22 Out[2]= a11 a12 b11 b12 a21 a22 b21 b22 c11 c12 d11 d12 c21 c22 d21 d22 • In[3]:= ss2 StateSpace B1 . 0 . i connect the output o of system1 to the input i of system2 SeriesConnect system1 . D12 Out[3]= A11 A21 C11 C12 D11 D12 . c21 . d12 . In[1]:= ControlSystems` Here are two systems in state-space form. 0. … connect outputs ok of system1 to inputs ik of system2 Series connection of two systems. D11 . a12 . o1 .

B2 d21 . b22 . d12 D11 Now the second output of the first system is connected to the first input of the second system. b12 . b11 . ss2 Out[4]= a11 a21 B1 c11 B2 c21 c11 D11 a12 a22 B1 c12 B2 c22 0 0 A11 A21 0 0 A12 A22 b11 b21 B1 d11 B2 d21 b12 b22 B1 d12 B2 d22 d22 D12 • c21 D12 c12 D11 c22 D12 C11 C12 d11 D11 d21 D12 d12 D11 The result is. 2 0 0 A11 A21 0 0 A12 A22 b11 b21 B1 d21 B2 d11 b12 b22 B1 d22 B2 d12 d12 D12 • Out[7]= a11 a21 B1 c21 B2 c11 c21 D11 a12 a22 B1 c22 B2 c12 c11 D12 c22 D11 c12 D12 C11 C12 d21 D11 d11 D12 d22 D11 . ss2. 1 d22 D12 Out[6]= a11 a21 B1 c21 0 a12 a22 B1 c22 0 0 0 0 0 A11 A12 A21 A22 b11 b21 B1 d21 0 b12 b22 B1 d22 0 • c21 D11 c22 D11 C11 C12 d21 D11 d22 D11 Now all the outputs and inputs are connected again. 2. C12 . a21 . A11 . B2 d22 . the familiar StateSpace object. In[5]:= % StandardForm Out[5]//StandardForm= StateSpace a11 . A22 . B1 c11 . a12 . In[4]:= SeriesConnect ss1. ss2. a22 . A21 . Another output and input remain loose. 0.100 Control System Professional This connects the systems in series. d11 D11 d21 D12 . 2. B1 d12 . B2 c21 . B1 d11 . In[6]:= SeriesConnect ss1. B1 c12 . 1 . b21 . 0. 1. c12 D11 c22 D12 . of course. C11 . A12 . but this time in the reverse order. B2 c22 . In[7]:= SeriesConnect ss1. 0 . 0 . c11 D11 c21 D12 .

3. and so is the result. In[10]:= SeriesConnect integrator. 1 1 Out[12]= 1 2 2 1 2 . integrator Out[10]= 1 2 This is a slightly more complicated connection—the integrator is connected to the third output of the transfer function system tf. In[9]:= SeriesConnect ss1. s s 1 2 s 2 2 1 Out[11]= 2 2 2 2 2 1 2 2 1 2 2 In[12]:= SeriesConnect tf. integrator. 1 Out[9]= a11 a12 0 b11 b12 a21 a22 0 b21 b22 c11 c12 0 d11 d12 0 0 1 0 0 • Here both input systems are in transfer function form. s s 2 2 . 1 s Out[8]= 1 This attaches the integrator to the first output of system ss1. s s 2 2 2 . In[8]:= integrator TransferFunction s. 1 s 2 2 . In[11]:= tf TransferFunction s. integrator.6. System Interconnections 101 Here is an integrator given in transfer function form. s s 1 2 2 2 . Notice that SeriesCon nect returns the result in state-space form as soon as one of the input systems is given in that form. s s 2 2 . 1.

s3. However. s3 . s4 Out[14]= SeriesConnect SeriesConnect SeriesConnect s1. The cascade is shown in Figure 6.2.2. Here is a function that connects any number of matching subsystems in series. s2 . 1 3 Now the cascade can be found in closed form. s2 . In[16]:= Table TransferFunction s. 3 Out[16]= 1 .102 Control System Professional SeriesConnect can cascade only two systems. In[13]:= cascade s__ : Fold SeriesConnect. In[15]:= TreeForm % Out[15]//TreeForm= SeriesConnect SeriesConnect SeriesConnect s1. In[14]:= cascade s1. s2. 0. the Mathematica language makes extensions to any number of systems fairly straightforward. s2. s3 . s3. First s . s4 This built-in function reveals the structure of the cascade. Thread 1 1 2 s1. 1 2 . In[17]:= %% . this command cascades a set of four abstract systems. 1 1 . Notice that the aggregate remains partially unevaluated until the systems are specified. s4 We prepare a set of first-order systems with simple poles to be used instead of the abstract systems. . s4 % Out[17]= 3 u 1 s s 1 1 s 1 2 s 1 3 y Figure 6. Cascade of first-order systems. i. Rest s Using the new function. 1 s i .

The states of the aggregate come from both subsystems. o12 . or both.2 Connecting in Parallel The parallel interconnection of subsystems according to Figure 6. i1 . i21 . o22 .3 can be accomplished with the function ParallelConnect. System Interconnections 103 6. system2 .6. Parallel interconnection. o11 . The aggregate has these shared inputs and summed outputs as well as other inputs and outputs of the components.3. the systems may be in either state-space or transfer function form.1. i22 . u ui uj System i yi yj System j y Figure 6. o1 . ParallelConnect system1 . The subsystems may or may not have some shared inputs and some summed outputs. i2 . … . system2 . o21 . … connect the inputs i1 k of system1 with the inputs i2 k of system2 and sum outputs o1 k of system1 with outputs o2 k of system2 Parallel connection of two systems. As with all elementary interconnection functions. i12 . system2 connect all inputs of system1 with all inputs of system2 and sum all corresponding outputs ParallelConnect system1 . . o2 connect the input i1 of system1 with the input i2 of system2 and sum outputs o1 of system1 and o2 of system2 ParallelConnect system1 . i11 .

b22 . In[20]:= ParallelConnect ss1. 1. A12 . s s 1 2 1 Out[21]= 2 2 2 2 2 1 2 2 1 2 2 . 1 Out[20]= a11 a12 0 0 a21 a22 0 0 0 0 A11 A12 0 0 A21 A22 c11 c12 C11 C12 d11 c21 c22 C11 C12 d21 b11 b21 B1 0 D11 d12 D11 d22 • This takes two systems represented by the same transfer function tf and connects all inputs and sums all outputs in the criss-cross order. 1 . B2 A12 A22 B1 0 0 B2 • A11 . 1 . c11 . 2. . In[21]:= tf TransferFunction s. d22 Out[18]= a11 a12 b11 b12 a21 a22 b21 b22 c11 c12 d11 d12 c21 c22 d21 d22 • In[19]:= ss2 StateSpace B1 . In[18]:= ss1 StateSpace a11 . 0 . C11 . s 2 2 2 s s 2 2 2 . 0. c22 . 1. a22 . d21 . 1 s 2 2 . A21 . a21 . D12 Out[19]= A11 A21 C11 C12 D11 D12 This connects correspondingly numbered inputs of the subsystems and adds the first output of ss2 to both outputs of ss1. 2. d12 . d11 . A22 . b11 . b21 . D11 . C12 . b12 . s s 1 2 2 . 2 b12 b22 0 B2 D12 D12 . s s 2 2 . ss2. c12 . s s 2 2 .104 Control System Professional These are two state-space systems defined earlier. c21 . a12 .

1. 3 . 3.4b). The connection specifications may be omitted if all corresponding inputs and outputs are to be used. Either negative (default). positive. tf. 2 . Examples of such usage are given in Section 10. 1.7. 2. or may include a second system (typically a controller) in the feedback (Figure 6. 6. The states come from all subsystems.4. The inputs and outputs of the aggregate are the ones of the first system. 2 .4a).1. 1 1 Simplify 4 2 5 2 2 1 Out[22]= 2 2 1 2 2 2 2 2 2 4 2 2 2 1 2 5 2 2 ParallelConnect can be used to sum the outputs or just to connect the inputs without summing the outputs. . u ui a System yi y u ui b yj System i yi uj y System j Figure 6. System Interconnections 105 In[22]:= ParallelConnect tf.4 are performed with the function FeedbackConnect.3 Closing Feedback Loop Feedback interconnections of the types shown in Figure 6.6. Feedback interconnections. or mixed feedback can be formed. 2. 1 2 . The function may just close the loop (Figure 6.

i1 . o2 . o1 . i2 . type use type for all connections FeedbackConnect system. type use type for the connection FeedbackConnect system. i . o1 . o1 . o2 . i feed the output o of system back to the input i FeedbackConnect system. o. … . type2 . type1 . . … connect several outputs ok and inputs ik FeedbackConnect system. i2 . i1 . o. type form either negative or positive feedback depending on type FeedbackConnect system. o2 . … use typek when connecting output ok to input ik Closing the feedback loop. i1 . i2 .106 Control System Professional FeedbackConnect system feed all the outputs of system back to the corresponding inputs with a negative sign FeedbackConnect system.

In[23]:= ss StateSpace DiagonalMatrix a1 . … . type use type for all connections FeedbackConnect system1 . … use typek when connecting input ik Feedback interconnections with second system. … close the negative feedback loop for system1 with system2 by connecting outputs ok of system1 to sequentially numbered inputs of system2 and sequentially numbered outputs of system2 to inputs ik of system1 FeedbackConnect system1 . i2 . i2 . system2 . system2 . Out[23]= a1 0 b1 0 0 a2 0 b2 c1 0 d1 0 0 c2 0 d 2 • . … . … . type1 . d2 b1 . Consider a second-order system with two inputs and two outputs. type2 . system2 . i1 . a2 . o1 . system2 put system2 in the negative feedback loop for system1 FeedbackConnect system1 . i2 . c2 . o2 . System Interconnections 107 FeedbackConnect system1 . The type descriptor for FeedbackConnect should be one of the reserved words Positive or Negative. DiagonalMatrix DiagonalMatrix c1 . i1 . i1 .6. DiagonalMatrix d1 . o1 . type create feedback specified by type FeedbackConnect system1 . o1 . b2 . o2 . … . system2 . o2 .

s s 2 2 .108 Control System Professional This forms the negative feedback by connecting corresponding outputs and inputs. s s 1 2 s 2 2 1 Out[26]= 2 2 2 2 2 1 2 2 1 2 2 . 1 s 2 2 . s s 1 2 2 2 . s s 2 2 2 . s s 2 2 . In[24]:= FeedbackConnect ss a1 b1 c1 d2 1 d2 d1 d1 d2 1 0 Out[24]= c1 c1 d1 d2 1 d2 d1 d1 d2 1 0 c2 a2 0 b2 c2 d1 1 d2 d1 d1 d2 1 0 c2 d1 1 d 2 d2 d1 d1 d2 1 d1 b1 b 1 d1 d 2 1 d2 d1 d1 d2 1 0 2 d1 d2 d2 d1 d1 0 b2 b2 d 1 1 d 2 d2 d1 d1 d2 1 0 d2 d1 d 2 d1 2 1 d2 d1 d2 1 d2 1 0 1 • We can use built-in Mathematica functions to simplify the result. In[25]:= Together % 0 d2 a2 a2 b2 c2 d2 1 0 c2 d2 1 b1 d1 1 0 d1 d1 1 0 0 b2 d2 1 0 d2 d2 1 d1 a1 a1 b1 c1 d1 1 0 Out[25]= c1 d1 1 0 • This forms the closed loop for the first two inputs and outputs of the transfer function tf. In[26]:= tf TransferFunction s.

b22 . d21 . 2 Simplify 1 2 2 3 2 34 5 34 3 7 5 4 22 4 34 2 3 2 28 2 2 8 8 4 28 6 5 7 5 4 22 3 4 3 Out[27]= 1 7 2 2 5 4 3 22 3 2 22 9 3 4 34 2 28 7 5 7 15 22 3 11 34 2 10 2 40 5 4 85 4 28 8 2 2 28 8 102 2 64 3 2 15 8 8 1 2 7 22 34 2 28 Now we plug the system ss in the positive feedback loop for ss1 (assuming. c11 .5 (under the same simplifying assumption). for simplicity. d_ 0. d12 . a22 . System Interconnections 109 In[27]:= FeedbackConnect tf. d11 . c22 . d22 Out[28]= a11 a12 b11 b12 a21 a22 b21 b22 c11 c12 d11 d12 c21 c22 d21 d22 • In[29]:= FeedbackConnect ss1. b11 . 2. a21 . Positive Out[29]= a11 a12 b11 c1 b12 c2 b11 b12 a21 a22 b21 c1 b22 c2 b21 b22 b1 c11 b1 c12 a1 b1 c1 d11 b1 c2 d12 b1 d11 b1 d12 b2 c21 b2 c22 b2 c1 d21 a2 b2 c2 d22 b2 d21 b2 d22 c11 c21 c12 c22 c1 d11 c1 d21 c2 d12 c2 d22 d11 d21 d12 d22 • This connects ss1 and ss according to Figure 6. b12 . 1 u 2 1 ss 2 2 ss1 2 1 1 y Figure 6.6. b21 . In[28]:= ss1 StateSpace a11 . 2 5 1. . c21 . that there is no direct transmission term in ss). a12 . 1 . c12 .5. Example of feedback interconnection. ss .

g s . d_ 0. h s g h Here are the same systems connected in parallel. ss . This is the serial connection of two systems described by the transfer functions g[s] and h[s]. TransferFunction s. GenericConnect always uses the state-space algorithm so it is advantageous to supply all input systems in the StateSpace form from the outset. In[32]:= ParallelConnect TransferFunction s. h s g h This is the negative feedback connection. 1. Note that. Negative Out[30]= a11 a12 b12 c1 b11 c2 b11 b12 a21 a22 b22 c1 b21 c2 b21 b22 b1 c11 b1 c12 a1 b1 c1 d12 b1 c2 d11 b1 d11 b1 d12 b2 c21 b2 c22 b2 c1 d22 a2 b2 c2 d21 b2 d21 b2 d22 c11 c21 c12 c22 c1 d12 c1 d22 c2 d11 c2 d21 d11 d21 d12 d22 • The connection functions described in this section accept transfer functions in arbitrary (rather than rational polynomial) form as long as no type conversion or subsystem selection has to be made. In[33]:= FeedbackConnect TransferFunction s. subsystems in GenericConnect can be supplied in either state-space or transfer function form. Positive . TransferFunction s. As with the elementary interconnecting functions. h s Out[33]= g g h 1 6. unlike elementary interconnections.2 Arbitrary Interconnections Systems more complex than the types described in the previous section can be constructed using GenericConnect. The result will be a state-space object if at least one system is in the StateSpace form.110 Control System Professional In[30]:= FeedbackConnect ss1. In[31]:= SeriesConnect TransferFunction s. 2. . 1. 2 . g s Out[31]= . TransferFunction s. g s Out[32]= .

2. Example of an arbitrary interconnection. type1 . the two inputs of system 2 (ss2) receive numbers 3 and 4. the latter taken with a negative sign. … indicating that input i gets its signal from summed outputs ok . and two external outputs. …. In this way. o1 . {o2 . and then the inputs and outputs were numbered sequentially in the order of the subsystems. system 2 . o2 . 1.6.6. and input 5 is connected to output 3. and the only output of this subsystem becomes output 3. A necessary step in preparing the connections specification for GenericConnect is numbering all the inputs and all the outputs of the subsystems according to the order of the systems in the argument list (which can be arbitrary). The subsystems were numbered in some order. type2 . The aggregate has three external inputs. (2) a list in the form {i. The elements of the specification connections can be supplied as (1) a list of integers in the form {i. . outs construct a composite system from blocks systemk using the connection specification connections so that the aggregate has inputs ins and outputs outs Building a complex system. and 4 . 1 1 1 u ss 2 2 3 2 ss2 4 4 3 3 y s Α 5 s Figure 6. As an example. System Interconnections 111 GenericConnect system 1 .6. … that allows the sign (negative or positive) for output ok to be set differently according to typek . ins. The interconnections are then constructed using these numbers. {o1 . The type specification can be one of the reserved words Negative or Positive. or (3) a mix of the two. 1 and 3 . connections. consider connecting the three systems according to the block diagram in Figure 6. with the default value depending on the option DefaultInputPort. Input 3 receives its signal from outputs 2 and 4.

D11 . d2 Out[34]= a1 0 b1 0 0 a2 0 b2 c1 0 d1 0 0 c2 0 d 2 • In[35]:= ss2 StateSpace B1 . 5. 4 . For example. c1 . TransferFunction s. 0. 2.7b. A12 . The limitation is easy to overcome. D12 Out[35]= A11 A21 C11 C12 D11 D12 This creates the aggregate. In[36]:= GenericConnect ss. 1. b2 a1 . 1. to construct a system containing the feed-forward and feedback paths as shown in Figure 6. C12 . 0 . A21 .112 Control System Professional These are the components. s . 0. 2. however. B2 A12 A22 B1 0 0 B2 • A11 . Negative a1 0 0 Out[36]= . 0 . d1 . 3 0 b1 0 0 a B1 0 D11 1 0 0 a D11 a 0 D11 1 0 d1 a D11 0 D11 1 0 b2 B1 d2 D11 1 0 d2 D11 D11 1 0 d2 D11 D11 1 0 0 B1 D12 D11 1 B2 D12 D11 1 0 D12 D11 1 0 0 c1 0 0 a2 B1 c2 D11 1 0 c2 D11 D11 1 0 c2 D11 D11 1 0 0 B1 C11 A11 D11 1 A21 C11 D11 1 0 C11 D11 1 0 0 B1 C12 A12 D11 1 A22 C12 D11 1 0 C12 D11 1 • The algorithm implemented in GenericConnect (in which signals originate necessarily at the outputs of the blocks) imposes a limitation on the block diagram that GenericConnect can compute directly. . In[34]:= ss StateSpace b1 . 0 . dummy blocks may be added as shown in Figure 6. C11 . by the addition of dummy blocks with a unit gain. . 0. 0 . 0. ss2. a2 . 0. 3. A22 .7a. 3 s a . 0 . 4. c2 . .

the first TransferFunction does not have a named variable. The output is a Transfer Function object since all input systems are supplied as transfer functions. .7. In[37]:= GenericConnect TransferFunction 1 . 2. 5 Α 5 This computes the aggregate system shown in Figure 6. the result is the pure-function object. s 2. Negative . is used in the result. 4. 5. which means that the outputs should be summed uninverted. 3. Example of interconnection with dummy blocks added. Positive default port for interconnections . option name default value DefaultInputPort Option to GenericConnect. 4 . 1 . The default value of DefaultInputPort is Positive. TransferFunction 1 . 1. TransferFunction a . GenericConnect assumes that the outputs ok in the connection specification {i. if any. 4 b #1 a #1 1 Out[37]= TransferFunction Unless instructed otherwise.6. Note that the variable from the first TransferFunction object. TransferFunction s. System Interconnections 113 u Β y a Α u 1 1 2 2 1 1 Β 2 3 3 3 4 4 y 1 4 5 b Figure 6. 5. o1 . o2 . To emphasize the structure of the result we turn off the Control Format display. 3 . TransferFunction b . 1 . in this case.7. Since. … come to the summing input i with negative or positive signs as determined by the option DefaultInputPort .

. controller.8 shows such a connection for a continuous-time system. for example). system must be in state-space form. connections can be specified using the same format as for FeedbackConnect. x B u x C y A Controller Figure 6. the diagram for a discrete-time system is structurally similar. connections use connections to form the state feedback State feedback connections. This type of connection can be formed using StateFeedbackConnect.8. State feedback schematic.114 Control System Professional 6. StateFeedbackConnect system. controller can be supplied in either state-space or transfer function form.3 State Feedback When the state variables become known (either through measurement or estimation). it is sometimes desirable to feed their values back to the inputs via a controller to change the dynamics of the system in a certain way (see Chapter 9). To form state feedback. System D . Figure 6. or simply as a gain matrix (such as that returned by StateFeedbackGains. controller feed all states of system back to its inputs via controller forming negative feedback StateFeedbackConnect system.

6. inputs. DiagonalMatrix d1 . The system can be in either state-space or transfer function form (although manipulating state contents is possible for state-space objects only). outputs. Subsystem can also rearrange the order of inputs. and states Selecting a part of the system. inputs. Consider a state-space system. inputs select the part of system associated with the specified inputs Subsystem system.4 Manipulating a System's Contents Subsystem and DeleteSubsystem can be used to select or delete a desired part of the system. The element specifications can be either vectors of integers corresponding to indices or the reserved words All or None. b2 . b3 . outputs. a2 . c2 . d3 b1 . states select the subsystem with the specified inputs. or states. In[38]:= ss StateSpace DiagonalMatrix a1 . DiagonalMatrix DiagonalMatrix c1 . Out[38]= a1 0 0 b1 0 0 0 a 2 0 0 b2 0 0 0 a3 0 0 b 3 c1 0 0 d1 0 0 0 c 2 0 0 d2 0 0 0 c3 0 0 d 3 • . c3 . System Interconnections 115 6. outputs. a3 . outputs select the subsystem with the specified inputs and outputs Subsystem system. Subsystem system. d2 .

1. . In[41]:= Subsystem ss. 3 a1 0 0 b1 0 0 a2 0 0 0 0 0 a3 0 b3 Out[39]= c1 0 0 d1 0 0 c2 0 0 0 0 0 c3 0 d3 • This swaps the second and third inputs. the first output. In[39]:= Subsystem ss. and the first and third states of the original system ss. All. 1. 3 Out[41]= a1 0 b1 0 0 0 a3 0 0 b3 c1 0 d1 0 0 • DeleteSubsystem is complementary to Subsystem and has similar syntax. 2 Out[40]= a1 0 0 b1 0 0 0 a2 0 0 0 b2 0 0 a3 0 b 3 0 c1 0 0 d 1 0 0 0 c2 0 0 0 d 2 0 0 c3 0 d 3 0 • This selects the subsystem that has all inputs. 1 . In[40]:= Subsystem ss. 1. 3.116 Control System Professional This picks the subsystem that has only the first and third inputs.

1. System Interconnections 117 DeleteSubsystem system. outputs. b21 . inputs delete the part of system associated with the specified inputs DeleteSubsystem system. 2 • MergeSystems merges several systems into one by appending their inputs and outputs (and. Here is a transfer function of a system with two inputs and three outputs. 1 f 2. . In[42]:= TransferFunction Array f. system2 . b22 . In[43]:= DeleteSubsystem % . outputs delete the specified inputs and outputs DeleteSubsystem system. 2 f 3. 1 f 1. states). The result is in state-space form if at least one of the systems is in this form. 2 f 2. leaving all the inputs intact. outputs. c2 . 2 • This deletes the first and third outputs. for state-space systems. and states Deleting a part of the system. 1 f 3. In[44]:= ss1 StateSpace a11 . d1 . … merge the systems systemi Merging several systems. c1 .6. d2 Out[44]= a11 a12 b11 b12 a21 a22 b21 b22 c1 c2 d1 d2 • . a21 . a22 b11 . 1 f 2. None. 2 Out[42]= f 1. 3 Out[43]= f 2. inputs. inputs. MergeSystems system1 . a12 . b12 . 3. Here are two state-space systems. states delete the specified inputs.

1 1 TransferFunction s. A21 . In[46]:= MergeSystems ss1. ss2 Out[46]= a11 a12 0 0 b11 b12 0 a21 a22 0 0 b21 b22 0 0 0 A11 A12 0 0 B1 0 0 A21 A22 0 0 B2 c1 0 0 c2 0 0 0 0 C11 C12 C21 C22 d1 0 0 d2 0 0 0 D1 D2 • Here are two transfer functions for the two-input. outputs. C21 . respectively. 1 s 3 1 Out[48]= 2 1 3 . D1 . and states of its components. In[47]:= tf1 TransferFunction s. A12 A22 B1 B2 A11 . A12 .118 Control System Professional In[45]:= ss2 StateSpace B1 . 1 s . two-output systems. C22 . 1 s 1 Out[47]= 1 In[48]:= tf2 1 s 2 . one-output and the one-input. C12 . A22 . The aggregate has all the inputs. B2 . C11 . D2 Out[45]= A11 A21 C11 C12 D1 C21 C22 D2 • This merges them into one.

9. Otherwise that part would not be physically realizable. In[49]:= MergeSystems tf1. for example. . by eliminating unwanted zeros from the closed-loop transfer function. We consider a double-integrator model of the satellite control system and design a PID (proportional-integral-derivative) controller that would place the poles of the closed-loop system in some predefined positions (cf. In a typical scenario.1). The prefilter can be used to further correct the dynamics of the system. The example given here illustrates that the ready-made set of design tools can be easily expanded and outlines possible steps in that process.5 Using Interconnecting Functions for Controller Design Due to their symbolic capabilities. the interconnecting functions described in this chapter can be especially useful for design purposes. not shown on the diagram.6. Several design functions are described in details later in Chapters 9 and 10. System Interconnections 119 This merges the two transfer functions into one. Note that the derivative part of the controller includes the term 1 Τ s 1 . The block diagram of a PID controller connected to the system is shown in Figure 6. Section 9. Only the propagation of the reference signal r to the controlled output c is taken into account. with a presumably small time constant Τ . A typical system would also have a disturbance input. tf2 1 Out[49]= 1 1 0 0 0 1 2 1 3 0 0 6. the designer chooses the structure of the controller and then determines the particular values of its parameters to meet the specifications.

1 . 0 . plant FeedbackConnect 0 1 0 Simplify 0 0 Out[52]= 0 ki Τ 0 1 1 Τ 0 kd ki Τ2 0 0 1 0 kd Τ kp Τ 1 0 0 1 0 0 kd Τ kp 0 • . and simplifies the result. In[51]:= pid TransferFunction s. closes the feedback loop. 0 . PID controller example. Here is a double integrator plant. 1 . In[50]:= plant StateSpace 0. 1. 0 Out[50]= 0 1 0 0 0 1 1 0 0 • This describes the PID controller.120 Control System Professional PID Controller Kp r Prefilter Ki s Kd s Τs 1 System c Figure 6. 0.9. In[52]:= SeriesConnect pid . kp kd ki 1 ki s kd s 1 Τs Out[51]= Τ kp This connects the controller to the plant.

System Interconnections 121 This finds the transfer function of the closed-loop system. kp . kd . p2 . In[58]:= % Out[58]= . In[57]:= Solve CoefficientList d1 Out[57]= d2 . k p . s 0. Τ p2 p3 p4 . kp p3 p4 p2 p3 p4 . p1 p2 p3 p4 . This is the denominator of the corresponding transfer function. 1 s2 Τ kp s3 s4 Τ s2 kd ki s Τ ki s kp This makes the polynomial monic. ki .6. Τ 0 p2 p3 p4 . In[53]:= TransferFunction s. kp p2 p3 p4 p2 p3 p4 p2 p3 p4 p2 p3 p4 1 p1 p2 p3 p4 2 2 p1 p2 p3 p4 p3 p4 p3 p4 p2 p3 3 p3 p4 p2 4 1 p1 p2 p3 p4 2 . In[55]:= d1 Expand s3 Τ % Τ s ki ki Τ s2 kp s kp Τ Out[55]= s4 s2 kd Τ Suppose now that the closed-loop system with desired dynamics has poles at p1 . % ExpandRational ki kp Out[53]= kd Τ 4 3 2 Τ kp kd 2 Τ ki kp Τ k p 2 Τ ki 2 ki Here is the denominator of the transfer function as a polynomial in the variable s. ki . kd . and Τ we equate the coefficients of powers of s in the denominators and solve the resultant system of equations. p3 . k_ kd expr_ k Limit expr. Simplify kd ki p1 p2 p3 p1 p2 p4 p1 p3 p4 p1 p2 p3 p4 3 p1 p2 p3 p4 . In[54]:= Denominator % s Out[54]= 1. Τ p1 p2 p3 p4 This is a simplified result for negligible Τ. In[56]:= d2 Out[56]= s p1 s p2 s p3 s p4 s p1 s p2 s p3 s p4 To find the unknown parameters ki . and p 4 .

The test can be done using the function OutputControllable. Controllability of the system is determined by matrices A and B . namely. there exists some input function (or sequence for discrete systems) that drives the state vector to any final state x t1 at some finite time t1 t0 . and D .1 Tests for Controllability and Observability A linear system is said to be completely controllable if. Analogously. The function Controllable performs the test. B. 7. observability involves the matrices A and C. C. complete output controllability. a linear system is said to be completely observable if. test if the system statespace is observable Load the application. Another kind of controllability may be useful from a practical perspective. which is defined as the ability to drive the output vector to the origin in finite time. defined over a finite time t1 t0 . including the test functions themselves and other necessary constructs.7. This property involves all matrices A. Controllability and Observability This chapter describes the tools related to controllability and observability of state-space systems. the state vector x t0 can be determined from the output function (or sequence) y t1 . Controllable statespace test if the system statespace is controllable OutputControllable statespace test if the system statespace is output controllable Observable statespace Testing controllability and observability properties. for all initial times t0 . Consequently. In[1]:= ControlSystems` . for all initial times t0 and all initial states x t0 . Observ able performs the test.

1. they are returned partially unevaluated. 1 . Negative. which always return True or False. Controllability and Observability 123 It is easy to show that this system is neither controllable nor output controllable. In[2]:= ss StateSpace 0 1 1 2 0 1 1 1 0 0 0. or Equal rather than the *Q functions. the tests behave more like the Mathematica built-in functions Positive. 1. 0 . the system is completely observable. 1 . In[5]:= Observable ss Out[5]= True If for some reason the controllability and observability tests cannot be evaluated to True or False. Controllability and observability tests do not necessarily return True or False. In[4]:= OutputControllable ss Out[4]= False On the other hand.7. 1 Out[2]= 1 1 • This tests the controllability. In[6]:= Controllable undefinedsystem Out[6]= Controllable undefinedsystem . In this sense. 1 . 2 . 1. In[3]:= Controllable ss Out[3]= False And here is the test for output controllability.

simply select one desired method. The tests can be chosen through the options Controlla bilityTest and ObservabilityTest . By default.124 Control System Professional option name default value ControllabilityTest ObservabilityTest Automatic Automatic test to apply to find if the system is controllable test to apply to find if the system is observable Options specific to controllability and observability test functions. where n is the dimension of the state space. Both accept a pure function or a list of pure functions to be applied in turn until one gives True or False or the list is exhausted. or if the controllability or observability Gramian is singular. Controllability and observability can be determined by testing if the controllability or observability matrix has rank n. To disable this feature. . both first try to see if the controllability or observability information for the system is available (see below in this section) and then proceed with the matrix test and further with the Gramian test as needed.

Controllability and Observability 125 ControllabilityTest FullRankControllabilityMatrix test if the controllability matrix is full rank in order to test controllability NonSingularControllabilityGramian test if the controllability Gramian is nonsingular in order to test controllability ControllabilityTest ControllabilityTest test use the pure function test ControllabilityTest test1 . test2 . Controllability and observability can also be inferred from the structure of certain special-type realizations. in some cases the size of the controllable or observable subspace is computed internally when arriving at the realization of the special form. ControllableSpaceSize and ObservableSpaceSize take the option ReductionMethod that allows you to specify the method to use for those functions. In fact. This information can be retrieved with the functions ControllableSpaceSize and ObservableSpaceSize and then used for alternative controllability and observability tests.2). The . … try testi in turn until one succeeds ObservabilityTest FullRankObservabilityMatrix test if the observability matrix is full rank in order to test observability ObservabilityTest NonSingularObservabilityGramian test if the observability Gramian is nonsingular in order to test observability ObservabilityTest test use the pure function test ObservabilityTest test1 .7. In their turn. such as the Kalman controllable or observable forms (see Section 8. … try testi in turn until one succeeds Selecting controllability and observability tests. test2 .

126

Control System Professional

option operates analogously in ControllableSubsystem and ObservableSubsystem and is described in Section 8.1.

ControllableSpaceSize statespace the size of the controllable subspace of the state-space system ObservableSpaceSize statespace the size of the observable subspace
The controllable and observable subspace sizes.

This determines the size of the controllable subspace.
In[7]:= ControllableSpaceSize ss Out[7]=

1

As the size of the controllable subspace is less than the number of states, the system is not controllable.
In[8]:= % Out[8]=

CountStates ss

False

7.2 Controllability and Observability Constructs
The controllability matrix of a linear system is defined as B AB A2 B An 1 B (7.1)

and can be obtained with ControllabilityMatrix. The output controllability matrix CB CAB CA2 B CAn 1 B D (7.2)

can be found with OutputControllabilityMatrix. The observability matrix

7. Controllability and Observability

127

C CA CA CA
2

(7.3)

n 1

is obtainable with ObservabilityMatrix.

ControllabilityMatrix statespace returns the controllability matrix for the system statespace OutputControllabilityMatrix statespace returns the output controllability matrix ObservabilityMatrix statespace returns the observability matrix
Finding controllability and observability matrices.

This is a simple test system.
In[9]:= system

StateSpace 3 1 0 1 5 0 1 2 3 0 0 3 1 0 0 0

3, 1, 0 ,

5, 0, 1 ,

3, 0, 0

,

1 , 2 , 3

,

1, 0, 0

Out[9]=

This is its controllability matrix.
In[10]:= ControllabilityMatrix system

Out[10]=

1 2 3

1 1 2 2 3 3

128

Control System Professional

Clearly, this is not a full-rank matrix and, therefore, the system is not completely controllable.
In[11]:= Rank % Out[11]=

1

We can come to this conclusion directly.
In[12]:= Controllable system Out[12]=

False

This is the observability matrix for the system.
In[13]:= ObservabilityMatrix system

Out[13]=

1 3 4

0 0 1 0 3 1

This is a full-rank matrix, so the system is completely observable.
In[14]:= Rank % Out[14]=

3

Again, we can come to this conclusion directly.
In[15]:= Observable system Out[15]=

True

Equally important in controllability and observability studies are the corresponding Gramians, which may be defined as (see, e.g., Moore (1981)) W2 c
0
T eA Τ B BT eA Τ dΤ

(7.4)

(the controllability Gramian) and W2 o
0
T eA Τ CT C eA Τ dΤ

(7.5)

(the observability Gramian). Examples of using the Gramians can be found later in Section 8.4.

7. Controllability and Observability

129

ControllabilityGramian statespace find the controllability Gramian for the system statespace ObservabilityGramian statespace find the observability Gramian
Finding controllability and observability Gramians.

This is a mixing tank system (described in Section 10.1).
In[16]:= StateSpace

0.9512, 0 , 0, 0.9048 , 4.88, 4.88 , 0.01, 0 , 0, 1 , Sampled Period 5. 0 0.9048 0 1 4.88 0.019 0 0 4.88 0.0095 0 0

0.019, 0.0095

,

Out[16]=

0.9512 0 0.01 0

5.

Here is its controllability Gramian.
In[17]:= ControllabilityGramian % Out[17]=

500.205 0.332677

0.332677 0.00248846

ControllabilityGramian and ObservabilityGramian rely on the function Lyapunov Solve (see Section 12.2) to solve Lyapunov equations regarding W2 and W2 : c o A W2 c AT W2 o W2 AT c W2 A o B BT CT C (7.6) (7.7)

Consequently, these functions accept the same options as LyapunovSolve does. For discrete-time systems, ControllabilityGramian and ObservabilityGramian call DiscreteLyapunovSolve appropriately. The Gramian functions sometimes must reconstruct the full orthogonal basis of the vector space given a few orthogonal vectors belonging to the basis—a problem that may have more than one solution. In such cases, the choice between equivalent vectors can be made randomly. You may enable this feature by setting the option RandomOrthogonalComplement to True. The same mechanism is used by several other functions, for example, Kalman trans-

To have them all utilize the randomized algorithm. In[18]:= StateSpace 0. yet reproducible solution. RandomOrthogonalComplement True use the randomized algorithm for constructing an orthogonal basis in a particular function $RandomOrthogonalComplement True use the randomized algorithm in all functions that construct orthogonal bases Enabling the random choice between equivalent orthogonal vectors. 1. a1. 1 . a2 . you may change the global variable $RandomOrthogonalComple ment to True. 0 . 1. Of course. It is useful because observability tests are often implemented as controllability tests of the system dual to the one in question. 1 0 1 a1 a2 Out[18]= 1 1 0 0 1 1 0 1 • . a system that is dual to the system statespace This is a system in state-space form. reseeding the random number generator with some number (using the Mathematica built-in function SeedRandom). DualSystem statespace Finding the dual system.130 Control System Professional form functions and the KNVD algorithm in StateFeedbackGains .3 Dual System The function DualSystem returns the system that is dual to the input system. provides a random. while employing the randomized algorithm. 1 . 7. 1 .

ComplexVariables a1 Out[21]= 0 Re a1 Im a1 1 a2 1 1 1 1 0 1 0 0 • option name default value ComplexVariables All which symbolic entries in state-space representation should be assumed to be complex Option to DualSystem. DualSystem assumes that all symbolic entries in state-space representation can be complex. In[21]:= DualSystem %%% . in which case the built-in function ComplexExpand will be applied to the result. ComplexVariables None Out[20]= 0 a1 1 1 1 a2 0 1 1 1 0 0 • Finally. here is the dual system if only a1 may be complex. all other variables are assumed to be real valued. In[20]:= DualSystem %% . . ComplexVariables accepts also a list of variables which may be complex. The user may disable this feature by setting the option ComplexVariables to None. a simpler result is returned. In[19]:= DualSystem % Out[19]= 0 Conjugate a1 1 Conjugate a2 1 1 1 1 0 1 0 0 • By default. If we assume that none of the symbolic entries are complex. Controllability and Observability 131 This is its dual system. In this case.7.

therefore. used for computing a minimal realization. c.. The other group comprises functions that select a subsystem (typically a subspace) possessing certain properties. C . D . These functions end with Subsystem (e. The exception to this convention is MinimalRealization . a function that transforms between equivalent realizations of the same system. or. to convert a system to Kalman controllable form. which represents the intersection of the controllable and observable subspaces. By convention. the function KalmanControllableForm would be applied to the system. a physical system can have an infinite number of realizations.and discrete-time objects. For example. several means of converting between different realizations are described. In this chapter. Also introduced is SimilarityTransform .1) or the corresponding quadruple of the matrices A. b. A realization can also refer to a discrete-time state-space realization (with the obvious change from the differential to difference in the equations). this function does effectively select a subsystem. B . including means of obtaining realizations of a smaller order. The first group of functions represents the means to convert between different types of realizations. d] that leads to the required input-output relations for the system (usually specified as a transfer function matrix H s ). The list of all forms available in Control System Professional can be obtained with ?Control Systems`*`*Form. Because there can be innumerable ways to satisfy the input-output relations. Realizations A realization is any pair of equations x t y t Ax t Cx t Bu t Du t (8. the corresponding state-space object StateSpace[a. ControllableSubsystem or DominantSubsystem ). . for the purposes of this guide. their names end with Form. All the functions operate on continuous.g.8.

Brogan (1991).2 for more on the definitions of the controllable and observable subspaces.6).. For SISO transfer function systems.g. e. PoleZeroCancel. In contrast. The input system for MinimalRealization can be in either state-space or transfer function form. The result is therefore a subsystem that is both completely observable and controllable. consequently.4). the smallest possible dimension of the associated state space) is called the irreducible (or minimal) realization (see. In[1]:= ControlSystems` . DominantSubsystem eliminates weakly controllable and observable modes (see Section 8. See Section 8.1 Irreducible (Minimal) Realizations The internal structure of a system may allow some of the integrators (or delay elements) to be shared by several input-output pairs and still result in the same transfer matrix. ControllableSubsystem statespace select the controllable subspace of statespace ObservableSubsystem statespace select the observable subspace of statespace Selecting controllable and observable subspaces. can also be accessed directly. Otherwise. the resultant system is always a state-space one. Load the application. see Section 8. Section 12. The system that realizes the maximum possible degree of sharing (and. MinimalRealization constructs a state-space realization after an attempt to cancel common pole-zero pairs (the underlying function.5). Realizations 133 8.8. The function MinimalRealization tries to find such a realization. MinimalRealization constructs a state-space realization first and then uses the functions ControllableSubsystem and ObservableSub system consecutively to select first the controllable and then the observable subspaces. MinimalRealization system find a minimal realization for system Finding the irreducible (minimal) realization.

b1 . In[5]:= ControllableSubsystem ss Out[5]= a2 0 0 a3 0 0 b1 0 0 b2 0 d11 d12 c3 d21 d22 • This selects the observable subspace. and the second one is unobservable. 0 . d22 Out[2]= a1 0 0 0 a2 0 0 0 a3 c1 0 0 0 0 d11 d12 c3 d21 d22 • We can verify that this system is not controllable. 0. a3 . d12 .134 Control System Professional Consider a third-order state-space system with two inputs and two outputs. a2 . These modes. 0 . 0 . d21 . In[3]:= Controllable ss Out[3]= False Neither is it observable. In[2]:= ss StateSpace DiagonalMatrix a1 . In[4]:= Observable ss Out[4]= False This selects the controllable subspace. b2 . 0. The first mode is uncontrollable. 0. then. 0. In[6]:= ObservableSubsystem ss Out[6]= a1 0 0 a3 0 0 0 b2 c1 0 d11 d12 0 c3 d21 d22 • . have no effect on the input-output relations and so can be dropped without changing the transfer function matrix. c3 0 b1 0 0 0 b2 . d11 . c1 . 0.

whichever is applicable. Additionally. inherits options from its constituents ControllableSub system and ObservableSubsystem or PoleZeroCancel. In[7]:= ObservableSubsystem %% a3 Out[7]= 0 b2 0 d11 d12 c3 d21 d22 • The same result can be obtained directly. as an interface function. . In[9]:= Controllable % Out[9]= Observable % True True The method that ControllableSubsystem and ObservableSubsystem use to reduce the dimension of the system can be chosen through the option ReductionMethod. With the default option value Automatic. MinimalRealiza tion. the options specific to the method it employs. ObservableSubsystem takes both the options for ControllableSubsystem and DualSystem. the functions first try to use the structural information about the input system that might be already available. option name default value ReductionMethod Automatic method to use to reduce the dimension of the system Specifying the reduction method. ControllableSubsystem takes. Realizations 135 By selecting the observable subspace of the controllable subspace. In[8]:= MinimalRealization ss a3 Out[8]= 0 b2 0 d11 d12 c3 d21 d22 • The minimal realization is both controllable and observable. and passes along. we arrive at a minimal realization. Reduction Method Kalman specifies that the Kalman decomposition is to be used.8.

2) that state variables w2 are uncontrollable. (8. Brogan 1991. the state space is divided into observable v1 and unobservable v2 subspaces. .2) u CT1 CT2 using a similarity transformation x T w . too: w w2 Eq.. (8. Section 11.1) can be transformed to the Kalman controllable canonical form (see. Similarly. KalmanControllableForm statespace find the Kalman controllable canonical form of the system statespace KalmanObservableForm statespace find the Kalman observable canonical form of the system statespace Finding Kalman canonical forms. Kalman controllable and observable canonical forms can be arrived at by using the functions KalmanControl lableForm and KalmanObservableForm. w1 .2 Kalman Canonical Forms The state equations in Eq. It is seen from The space vector w is partitioned into two corresponding parts.3) u.136 Control System Professional 8. e. since there is no way to change w2 either directly through input u or indirectly through w1 coupling. in the Kalman observable canonical form v1 v2 y VT AV1 1 0 v1 v2 v1 v2 VT B 1 VT B 2 D VT AV1 VT AV2 2 2 CV1 0 u (8.7) w1 w2 y TT AT1 TT AT2 1 1 0 TT AT2 2 w1 w2 w1 w2 TT B 1 0 D u (8. where the orthogonal transformation matrix T1 T2 is constructed and partitioned in such a way that T1 represents the subspace T spanned by the columns of the controllability matrix and T2 is the subspace orthogonal to T1 .g.

the functions related to Kalman canonical forms accept the options belonging to the employed method and pass them on to their destinations.8. a2 . c1 . b2 . a3 . 0. In[10]:= ss StateSpace DiagonalMatrix a1 . In[12]:= KalmanObservableForm ss Out[12]= a1 0 0 0 a3 0 0 0 a2 c1 0 0 c3 0 0 0 0 b1 0 b2 0 d11 d12 d21 d22 • The decomposition into controllable and uncontrollable (or observable and unobservable) subspaces can be performed using several methods that are accessible through the option DecompositionMethod . Like all other functions in Control System Professional. d11 . 0. Now the unobservable mode (the second one) is moved to the end. 0 . . d22 Out[10]= a1 0 0 0 a2 0 0 0 a3 c1 0 0 0 0 d11 d12 c3 d21 d22 • This is the Kalman controllable canonical form of an example system defined previously. Realizations 137 This is an example system defined previously. 0 . Other available option values are QRDecomposition and NullSpace. c3 0 b1 0 0 0 b2 . d12 . In[11]:= KalmanControllableForm ss Out[11]= a2 0 0 0 a3 0 0 0 a1 0 0 b1 0 0 0 b2 0 0 c1 d11 d12 c3 0 d21 d22 • This is the Kalman observable canonical form of the same system. 0. d21 . 0. Kalman decomposition-related functions also accept the option RandomOrthogonalComplement. We can see that the uncontrollable mode (the third one) is moved to the end of the state vector. 0 . b1 . The default Automatic value for this option invokes the built-in function RowReduce for exact systems and SingularValues for inexact ones. 0.

0 . In[14]:= t 0. However. KalmanControllableForm and KalmanObservableForm are similar to the functions ControllableSubsystem and ObservableSubsystem described in Section 8. Let us create a diagonal matrix. whereas the latter two select controllable or observable subspaces of the given system. the two former functions merely rearrange the order of variables in the state vector. In[13]:= m DiagonalMatrix 1 0 0 0 2 0 0 0 3 1. Similarly to ObservableSubsystem . 1 . 1.3 Jordan Canonical (Modal) Form JordanCanonicalForm statespace find the Jordan canonical form of the system statespace Finding the Jordan canonical form.138 Control System Professional option name default value DecompositionMethod Automatic method to perform the decomposition Specifying the decomposition method. 3 Out[13]= Here is some nonsingular matrix. 1.1. 0 1 1 1 0 1 0 1 0 1. KalmanObservableForm takes the options of KalmanControlla bleForm as well as the options of DualSystem. 8. 0 Out[14]= . 2. 0. 1.

a3 . In[16]:= ss StateSpace DiagonalMatrix a1 . c3 . b2 . 0. 0 . 0 .t Out[15]= 3 0 1 2 1 2 0 0 2 We use the previous matrix as matrix A in our test state-space object ss. In[18]:= JordanCanonicalForm % Out[18]= 3 0 0 c1 c3 0 2 0 0 c3 0 0 1 c1 c3 b1 0 b1 d11 d21 0 b2 0 d12 d22 • In the case of an exact input system. %% . otherwise the eigenvalue decomposition is used. d11 . 0. 0. The latter method may lead to significant numerical errors if eigenvalues happen to be multiple. 0. a2 . Realizations 139 This creates a matrix with the predefined set of eigenvalues. . 0. In[15]:= Inverse t . 1 Out[17]= 3 0 1 c1 0 2 1 2 0 0 0 0 0 b1 2 0 0 c3 0 0 b2 d11 d12 d21 d22 • This finds the Jordan canonical form of the preceding system. c1 . JordanCanonicalForm relies on the built-in function JordanDecomposition .8. d22 Out[16]= a1 0 0 0 0 0 a2 0 b1 0 0 0 a3 0 b2 c1 0 0 d11 d12 0 0 c3 d21 d22 • In[17]:= ReplacePart ss. d21 .m. 0 . b1 . d12 .

1.804697 0. may be applied (Moore (1981)). 50. 33. 1 .140 Control System Professional 8. 15. 50.75334 0. 0 N 0. 0.772915 0. In[19]:= original StateSpace 0. Consider a SISO system in its controllable canonical form (see Example 1 in the above-cited paper of Moore (1981)).350054 1. 79. 5 . 0.772915 . In such cases another model reduction technique.65665 0. 15. 0. 0.391098 0. Evidently.337348 0. 0 . 0 .391098 4. • 50.75334 1. 0. 0. 0. the realization is poorly balanced.45031 0. 0.45031 2. 0. 5. 0.19628 0.19628 1. 0. 1 .629679 1. 0. 0. 0.337348 0. based on the internally balanced realization. 1. 0 .4 Internally Balanced Realizations The Kalman minimal realization algorithm may result in structurally unstable models. 1. 1. 1.19539 4.252316 0.804697 0.350054 0.21805 1. 79. 50. 0.518282 1. 0. 33. The controllable and observable realization is said to be internally balanced if its controllability and observability Gramians are represented by the same (positive definite) diagonal matrix. InternallyBalancedForm statespace find the internally balanced realization of the system statespace Finding the internally balanced form. In[20]:= balanced InternallyBalancedForm % 1. InternallyBalancedForm attempts to construct such a realization.21805 0. 1. 0 . 0 This finds the internally balanced realization.252316 0 • Out[20]= 0. 1. 0 . 0. Out[19]= 0. 1.

0903667 0 In[22]:= ObservabilityGramian %% Out[22]= 0. The default Automatic value for this option invokes the built-in function Eigensystem for the continuous-time systems and SingularValues for the discrete-time systems. The built-in function Chop rounds small numerical errors down to the exact zeros. 79.576324 0 0 0 0 0.147477 0 0 0 0 0. we can see that the two forms are in fact different realizations of the same transfer function. original ExpandRational Chop Out[23]= 4 1. For exact systems.8. 33.0903667 0 Finally.576324 0 0 0 0 0.0192144 Chop 0 0 0 0.147477 0 0 0 0 0. Similarly to KalmanControllableForm and related functions. 33. InternallyBalanced Form takes the option DecompositionMethod . Realizations 141 We can verify that the realization has equal and diagonal controllability and observability Gramians. In[23]:= TransferFunction s. 2 50. 3 2 15. ExpandRational Chop In[24]:= TransferFunction s. 79. . balanced Out[24]= 4 1. In[21]:= ControllabilityGramian % Chop 0 0 0 0. 3 2 15. 5. 5.0192144 Out[21]= 0. 50. in the most important case of inexact systems. 2 50. 50. the Eigensystem-based solution will be attempted.

What is meant by "relatively small" is.142 Control System Professional 8.1. . DominantSubsystem system find the dominant subsystem of the state-space system Model reduction by selection of the dominant subsystem. of course. the system breaks up into dominant and weak subsystems connected as shown in Figure 8. u Dominant subsystem y Reduced model is obtained by severing these connections Weak subsystem Figure 8. This suggests a way of reducing the order of the system by eliminating these weak modes. contribute little to the input-output characteristic.5 Dominant Subsystem The controllability and observability Gramians in the example in the preceding section reveal the modes that are relatively small and. Model reduction based on decomposing the system into its dominant and weak parts. Schematically. therefore. DominantSubsystem can be used to select the dominant part. a matter of convention and will be addressed in the context of the option RejectionLevel later in this section.1.

592249 1.602247 22.142305 • . DominantSub system may use the structural information about the input system if it is available (if.314668 3. Realizations 143 A reduced-order model of the system original can be found by selecting the dominant subsystem. option name default value RejectionLevel 0.57868 2.643875 0. If the default value of RejectionLevel is not appropriate for the particular case.62134 0. as set by the option RejectionLevel.1 if the relative value of a diagonal element compared with the biggest element of the Gramians is equal to or less than this value. DominantSubsystem always uses the internally balanced form to select the dominant part.62134 0.602247 . DominantSubsystem inherits options from InternallyBalancedForm.519548 0.643875 5.6329 0. the corresponding mode is to be dropped The option specific to DominantSubsystem. Otherwise.0384288 Out[25]= 0. the system is already in the internally balanced form).826229 • Similarly to the function MinimalRealization.6329 1. we use it to find the second-order reduced model. Because we do have the internally balanced realization of the system original. With the default Automatic value for this option. In[26]:= reduced2 DominantSubsystem balanced .19293 0.19293 1.29982 5.826229 0.619183 0. In[25]:= reduced DominantSubsystem original 1.57868 0. for example. With ReductionMethod InternallyBalancedForm . the user may find the internally balanced realization "manually" and then choose a suitable value for RejectionLevel.5446 2.619183 0.8. the function starts from constructing the internally balanced realization and then selects the dominant part.49353 0. RejectionLevel 3. DominantSubsystem takes the option ReductionMethod.519548 0.2 Out[26]= 0.

PlotRange All. and the dashed and dashed-dotted ones are for the third. BodePlot reduced2.005. 0.1 0. The solid line is for the initial system.144 Control System Professional This creates the Bode plots for the three systems and displays them at once. 0.02 . respectively. 600 .05. Note that in the first plot we set up a plot range sufficient to display all three graphs.51 Frequency 5 10 50 100 Rad Second 5 10 50100 Rad Second .1 0. PlotStyle Dashing 0.015. As we can see.6 Pole-Zero Cancellation PoleZeroCancel tf cancel the common pole-zero pairs in the transfer function tf Canceling common pole-zero pairs. PlotStyle Dashing 0. 0. in the reduced models the low-frequency behavior is preserved while the high-frequency (fast) states are eliminated.015 dB 0 -20 Magnitude -40 -60 -80 0. and in the second plot we tweak PlotPoints for the phase unwrapping mechanism to operate correctly. 8.51 Frequency 0 -100 -200 -300 -400 -500 0. For the model reduction purposes. BodePlot reduced .and second-order reduced models. 0. Phase deg . In[27]:= DisplayTogetherGraphicsArray BodePlot original. PlotPoints 75 . PoleZeroCancel in its current implementation is most useful for SISO systems.

Sampled True Out[28]= 0. just by factoring the elements of the transfer matrix.5 0. too.5 cancel so long as Tolerance is set to 0. s . Tolerance Out[32]= . In[30]:= FactorRational %% Out[30]= 1 1.5 2 1.4 and the zero at s 0. they can be canceled without calling PoleZeroCancel—for example.5 . PoleZeroCancel is most useful when the match is not exact. In[32]:= PoleZeroCancel % .5 1.8.5 s .5 0. This is the transfer function after the pole-zero pair at z In[29]:= PoleZeroCancel % Out[29]= 1 1. Note that if the common factors coincide within the precision of the elements. z .4 Tolerance allows cancellation of pairs with a significant difference. The pole at s 0.1 .4 Out[31]= 0. In[28]:= TransferFunction z. Here is another transfer function. In fact. The common pair in the preceding transfer function cancel in the factored form.5 0.5 is canceled. the Tolerance option allows cancellation of the factors within any desired difference. Realizations 145 Here is a z-plane transfer function for a SISO system.5 z z 2 .1 1 • . Therefore. In[31]:= TransferFunction s.

SimilarityTransform statespace. x . the state equations from Eq. 8. In the new basis.146 Control System Professional option name default value Tolerance Automatic maximum difference between a pole and a zero for which they are considered a common pair Option to PoleZeroCancel. The transformation x t T x t from basis x to another basis.1) become x t y t where A B C TA T TB CT 1 1 Ax t Cx t Bu t Du t (8. m transform the system statespace with the matrix m Finding the similarity transformation. .5) The result can be obtained using the function SimilarityTransform. may be performed with any nonsingular matrix T .4) (8.7 Similarity Transformation There can be an infinite number of realizations of a physical system that correspond to the system's representations in different bases of state space. (8.

either T or its inverse T 1 can be supplied as an input argument to SimilarityTransform . d21 . 0. In[33]:= system StateSpace 3.8. In[35]:= SimilarityTransform system. d11 . b1 . c3 . 1. 0. 0. the option InvertedTransformMatrix must be set to True. In the latter case. 2.5). 0 . b2 . d12 . In[34]:= t Eigenvectors First % 1 0 0 0 1 1 1 1 1 Transpose Out[34]= A similarity transformation based on the inverse of that matrix can be used to represent the system in its canonical form. 0. 0 . 0. c1 . . Realizations 147 Consider an earlier system. 0 . Another way to look at this option is that it allows backward transformation from basis x to x. d22 2 1 2 0 0 0 0 0 b1 2 0 0 c3 0 0 b2 Out[33]= 3 0 1 c1 0 d11 d12 d21 d22 • Here its eigenvectors are arranged as columns of the matrix. (8. Inverse t Out[35]= 3 0 0 c1 c3 0 2 0 0 c3 0 0 1 c1 c3 b1 0 b1 d11 d21 0 b2 0 d12 d22 • To avoid a double inversion of the matrix T in Eq. 2. 0 . 1. 0. 2 . 0 .

in which case the transpose of the matrix can be used instead of its inverse. . 8. t. InvertedTransformMatrix True Out[36]= 3 0 0 c1 c3 0 2 0 0 c3 0 0 1 c1 c3 b1 0 b1 d11 d21 0 b2 0 d12 d22 • Further performance gain can be achieved if the transformation matrix is known to be orthogonal. which has been used internally to arrive at the realizations of special forms introduced in this chapter. TransformationMatrix original. assuming that matrix t is the inverse of the transformation matrix.8 Recovering the Transformation Matrix The similarity transformation matrix. transformed the similarity transformation matrix between two realizations Recovering the similarity transformation. option name default value InvertedTransformMatrix OrthogonalTransformMatrix False Automatic whether the input matrix is already inverted whether the input matrix is known to be orthogonal Options to SimilarityTransform.148 Control System Professional This performs the transformation. In[36]:= SimilarityTransform system. can be retrieved using the function Trans formationMatrix.

OrthogonalTransformMatrix True Out[40]= 1 2 0 0 0 0 2 0 0 c3 0 b1 1 0 3 0 c1 0 0 b2 0 d11 d12 d21 d22 • . % .8. which is known to be orthogonal. 0. 1. c1 . d12 . b2 . d11 . b1 . 0. % Out[39]= 0 1 0 0 0 1 1 0 0 Indeed. 0. d22 0 1 2 0 0 0 0 0 b1 2 0 0 c3 0 0 b2 Out[37]= 3 0 1 c1 0 d11 d12 d21 d22 • In[38]:= KalmanControllableForm system Out[38]= 1 2 0 0 0 0 2 0 0 c3 0 b1 1 0 3 0 c1 0 0 b2 0 d11 d12 d21 d22 • This gives the transformation matrix between the two realizations. 1. 0 . 0. brings about the Kalman controllable realization. 0 . In[37]:= system StateSpace 3. 0. 0 . Realizations 149 This is another simple system and its Kalman controllable realization. In[39]:= TransformationMatrix system. 0. 0. c3 . In[40]:= SimilarityTransform system. 2 . the similarity transformation of the original system with this matrix. 0 . 0 . d21 . 2.

M is the control torque applied by the thruster. poles find the feedback gain matrix that places poles of the state-space system statespace into poles State feedback design.1.9. u M J is the normalized input variable. Feedback Control Systems Design Feeding a weighted part of the state or output variables back to the input is a method often used to correct the behavior of control systems. This chapter describes the tools provided in Control System Professional to design such feedback schemes by enforcing the desired pole location on the complex plane. The methods described here are applicable to continuous. one forces the system's poles. 9. Λ2 .or z -plane—are supplied. StateFeedbackGains statespace. where Θ is the angle of the satellite axis with respect to a reference. So long as the algorithm does not require knowledge of matrices C and D . is discussed in Chapter 10. The function StateFeedback Gains attempts to find its solution. we consider a model of the single-axis satellite control system depicted in Figure 9. …. they may be omitted in the state-space description of the input system. As an example. Λn (9.1 Pole Assignment with State Feedback By closing the loop of an nth -order system x Ax Bu (9.or discrete-time systems as long as the proper values of the poles—on the s . Another approach.2) The problem of finding the matrix K that yields the desired locations of poles Λi is often referred to as the pole assignment or pole placement problem. to assume the new positions Λ1 . assuming that the input system is completely controllable. that is. using the optimal control technique. and J is the moment of inertia about the center of . the eigenvalues of matrix A B K . The transfer function for the system is H s Θ s u s 1 s2 .1) with the state feedback control u Kx.

Θ Reference Figure 9. 1 s2 Out[2]= 1 2 This finds a discrete-time realization of the system for sampling period T . In[4]:= StateFeedbackGains % . In[1]:= <<ControlSystems` This is the transfer function for the satellite control system. z1. (1990)). One-degree-of-freedom model of satellite attitude control. Load the application. Sampled Period T 1 T Out[3]= 0 1 1 0 T2 2 T 0 T This designs the discrete-time controller that would place the poles of this system into the desired locations z1 and z2 in the complex plane. In[2]:= TransferFunction s. In[3]:= ToDiscreteTime StateSpace % . Feedback Control Systems Design 151 mass (Franklin et al. z2 Out[4]= z1 1 z2 1 T2 z2 z1 z1 z2 3 2T Simplify .1.9.

the check is made only if the input is inexact. reconsidering the requirement for accuracy.2 . In the base package. z1 8.8 . The value must be set to True to perform the check on exact input or input containing symbolic expressions. In Mathematica. which may resolve some problems of this sort.1 seconds. one may also use a built-in mechanism for manipulating the precision. . The option Method determines what method to use to compute the feedback gains. Using the option VerifyPoles. There are also some options pertaining to the method being applied. For the option value Automatic. the case would require changing the method of computation.6 . it is possible to check if the poles of the closed-loop system are indeed (close to) the required ones. The option can be set to False to save some computing time. StateFeedbackGains has several options that help to avoid meaningless results. T 0. The value in no way affects the result of the computation. namely when the number of inputs is equal to the number of states. The option AdmissibleError relates to the option VerifyPoles and specifies the relative error in the location of poles that is deemed admissible.8 .01 Some of the options for StateFeedbackGains.2 . two methods are available: Ackermann and KNVD. or even reformulating the problem.152 Control System Professional This is the controller gain matrix for the particular case z period T 0. option name default value Method VerifyPoles Automatic Automatic method to use whether to check resulting closed-loop poles against the ones required relative error in pole position considered admissible AdmissibleError 0.2 j and sampling . In[5]:= % Out[5]= 0. 3. especially for high-order or weakly controllable systems. it is up to the user to choose the appropriate strategy. The option value Auto matic imposes the following division of labor: KNVD is called for inexact input and for a special case of exact input. Ackermann is used otherwise.1 Chop Feedback design is prone to significant numerical errors.8 0. Traditionally. z2 . If the required accuracy has not been reached.

3.9. 0. In[9]:= poles Out[9]= Range n. 0. 0. 0. 1. 0. 0. 0. 1 Let us try to place the poles using Ackermann's formula (defined shortly) while making sure that the target is not missed by more than 0. 0. 0. 5. 0. 8. 0. 0. 0. N Out[7]= In[8]:= b Table 1. 0. 0. 0. 0. we suppress the output by placing the semicolon in the end of the input statement.000001 . 0. 4. In[6]:= n In[7]:= a 8. 2. 0. 0.0001 percent. 0. 0. 0. 0. 0. . 0. 0. 0. 0. 0. 0. 1 8. 6. DiagonalMatrix Range n 1. 0. 1. 0. 0. StateFeedbackGains::bpl : Warning: Pole location may deviate from the required one by more than 0. 0. 0. 7. 0. 0. 0. 0. AdmissibleError . 0. 0. b. (Since we are not particularly interested in the concrete values of the gains. 7.) In[10]:= StateFeedbackGains a. 0. 0. Method Ackermann . 0. 0. 1.0001`% . 0. 1. 0. 1. poles. 2. 0. Feedback Control Systems Design 153 Consider a hypothetical eighth-order system. n Out[8]= Suppose this is the list of the desired pole locations after the loop is closed. 5. 1. 0. 0. 6. 0. 0. We are warned that the goal has not been achieved. 0. 0. 1. 4. 3. 0. 1.

Method Ackermann .154 Control System Professional On our machine. In[12]:= a. …. In[14]:= Max Eigenvalues a Out[14]= b. poles SetPrecision a. b. 25 .% poles 5. poles .44 10 21 9. We double-check the result. By increasing the precision of the input. In[13]:= StateFeedbackGains a. In[11]:= $MachinePrecision Out[11]= 16 This increases the precision of our parameters.3) is the unit vector of length n . the machine-precision numbers are 16 digits long. Now we are presented with no warning. and Φ A An Α1 An 1 Α2 An 2 Αn 1 A Αn I . b.1.000001 . C B A B A2 B An 1 B is the controllability matrix. the feedback matrix K that places the poles of a single-input system x A x B u into the positions Λ1 . Λn can be found as K where en 0 0 0 1 en C 1 ΦA (9. Λ2 . we force the feedback computation to be done with higher precision. The achieved accuracy is even greater than required. poles.1 Ackermann's Formula According to Ackermann's formula. AdmissibleError . b.

The Ackermann method. … . Here I is the identity matrix. besides being useful for single-input systems. It is also possible to specify the control input explicitly. . and the coefficients Αi are such that Λ1 . may also find application if an attempt is to be made to control a multi-input system through a single input. Λn are the roots of the polynomial: s Λ1 s Λ2 s Λn sn Α1 sn 1 Α2 sn 2 Αn 1 s Αn The function StateFeedbackGains with the option Method algorithm. The option ControlInput Automatic is used in such cases to find the "best" control using the condition number of the corresponding controllability matrix as a criterion. Ackermann implements this Method Ackermann compute the feedback matrix using Ackermann' s formula State feedback design using Ackermann's formula. option name default value ControlInput Automatic which input to choose as control Option specific to the Ackermann method.9. Λ2 . Feedback Control Systems Design 155 is the characteristic polynomial of matrix A .

As an example we consider an approximate model of the lateral dynamics of an F-8 aircraft (Figure 9. Figure 9. r . and Φ are the roll and yaw rates and the sideslip and roll angles.2) linearized about a particular set of flight conditions and reproduced after Brogan (1991). Β .2. NASA.3 introduces the nomenclature. F-8 aircraft in flight. The state and input vectors in the model are p r Β Φ x and u ∆a ∆r where p. .156 Control System Professional Figure 9. Photograph by Dryden Flight Research Center. and ∆a and ∆r are the aileron and rudder deflections. respectively.

0. . 1.13 . 8. If such input exists. 0 .7.7. 9. .13 0 0 . 0.9. 2 . a malfunction prevents manipulation of the other. 0 . 10. 0. Aircraft schematic. In[16]:= poles 10. 0 10 0 0 1 0 0. 0 . 0. This is the state-space model of the aircraft. 0. 0. 0. the feedback gain matrix will contain a nonzero row corresponding to this input. 2.8 3. 0 20. 5. say. Feedback Control Systems Design 157 Rudder ∆r Φ Front view Top view Ailerons Β Figure 9. 0 . . StateFeedbackGains may be asked to determine whether it is possible to control the aircraft using only one of the inputs if. 3. Out[15]= • Here are the closed-loop poles we wish the system to have.8 .7 0 0 20 0 0 0 0 0 0 2. 1. In[15]:= aircraft StateSpace 10.3. 0.7 1 0 10 9 0.

9205 0 0 19. (1985). 0 .7. 0. The system may not be controllable. 9.2 Robust Pole Assignment Ackermann's formula does not provide for multi-input control and. which is accessible via the method KNVD.158 Control System Professional StateFeedbackGains finds that the system is better controlled through the second input (i. . 0 . Method Out[17]= Ackermann 0 16. 3. 0.6083 0 169. Method Ackermann . Out[18]= StateFeedbackGains StateSpace 10. 10. In[17]:= StateFeedbackGains aircraft. 0.1. 0. 0 . In the case where the number of states is greater than the number of control inputs. 10. the algorithm uses the additional degrees of freedom to find the solution that is as insensitive to perturbations as possible. 1. 0. Indeed. 0 20.205 The attempt to control the aircraft from only the first input fails. ControlInput 1 LinearSolve::nosol : Linear equation encountered which has no solution. 0 . 0 . StateFeedbackGains::nos : Cannot find feedback gain matrix. poles. 0. 1 Out[19]= False 9.8 . even for a single-input case. 0. 1.e. 0. in other words.13 . 2 . the aircraft cannot be controlled by only the aileron deflections (at least not within the linearized model). 0.4816 25. poles. often offers a better alternative. 8. In[19]:= Controllable Subsystem aircraft.7. and we are presented with messages suggesting that the trouble possibly stems from the system being uncontrollable. The robust algorithm by Kautsky et al. the resulting matrix can be very badly conditioned.. the system is not controllable from its first input. Method Ackermann. the rudder deflection) and returns the corresponding feedback gains. 0. 2. 5. In[18]:= StateFeedbackGains aircraft. ControlInput 1 .

similarly to other functions that need to reconstruct the basis of the vector space given an insufficient number of orthogonal components.347476 1. Feedback Control Systems Design 159 Method KNVD compute the feedback matrix using the Kautsky-Nichols-Van Dooren algorithm State feedback design using the robust method by Kautsky et al. 1 how many iterations to try In the rest of this section. and is attempted for exact and symbolic arguments.9205 0 0 19.27485 10 13 Out[21]= 4. the number of which can be set via the option MaxIterations. Λ2 . for the particular case where the number of inputs is equal to the number of states and matrix B is not singular. MaxIterations 1 0. that the implemented algorithm is not guaranteed to converge. The "robustness" of the system designed using the KNVD method may improve after several iterations. however. Method 0. too. Λn ). The algorithm depends heavily on numerical quantities of the system in question and so is implemented for inexact input parameters only. However. Method 0 16.78914 . ….6083 0 169.08946 4. This finds the feedback via Ackermann's formula—reproducing the result we already have. the solutions obtained with the Ackermann and KNVD methods are compared in terms of their robustness for the aircraft model. 4. the solution (where is the diagonal matrix determined by the may be found simply from K B 1 A eigenvalues Λ1 . In[21]:= k1 StateFeedbackGains aircraft. In[20]:= kA Out[20]= StateFeedbackGains aircraft.4 0. option name default value MaxIterations Option specific to the KNVD method.9.90531 10 1.205 Ackermann Now we obtain the feedback using the KNVD algorithm with a single iteration. In the KNVD regime. poles.250479 14 KNVD. Note.4816 25. poles. StateFeedbackGains accepts the option RandomOrthogonalComple ment.

.97541 .90304.98785. k1.21033 .97859 1. 5. In[25]:= Eigenvalues StateFeedbackConnect aircraft. err. This is a utility function that distorts all numeric values in expr to some degree. # Out[25]= 1 & dks 8. In[22]:= k10 StateFeedbackGains aircraft. 8.21033 . In[24]:= dks distort kA.31088 0.68141. up to the maximum relative error err. err_ : expr . These are the three groups of eigenvalues of the corresponding distorted closed-loop systems. 5. In[23]:= distort expr_. these are the maximum relative errors introduced by the distortion.160 Control System Professional This is the solution using the same method after 10 iterations. 10. we distort the feedback matrices to some extent (as if due to noise in the line) and see how that will affect the locations of the poles of the closed-loop system. 1.39. the solution obtained with Ackermann's formula (the first element in the list) is not stable against the noise.94829 6. 8.18559 To measure the robustness of the solutions. 9.00677 Finally.194556. 4. Method Out[22]= KNVD. 0. 2. 0. poles.96418 . we would prefer the noise to have as little effect as possible.01 . 1.09136. . In[26]:= Max Abs # poles poles & % Out[26]= 0.13904 2. x_ ? NumberQ x 1 Random Real.06312.026 2. On the other end.516493 3. In practice.97859 1.982774 0.05419. MaxIterations 10 0. we see that the robustness has somewhat improved with iterations. 7. Comparing the second and the third values. k10 .0398239.0182728 The last value in the list—and the smallest one—is the solution found after 10 iterations with the KNVD method. err We form the list comprising all three feedback matrices we have computed so far and distort them as much as 1 percent. This solution indeed looks robust—the relative error in pole location is about as big as the imposed distortion of the feedback matrix.23775 1.

otherwise an attempt can be made to reconstruct the state vector by forming a device called an estimator (or observer).9. Now we address the problem of how that can be done. Feedback Control Systems Design 161 9. Continuous-time estimator schematic. Figure 9. provided that the direct transmission term D u is taken into account. with which the approximation x to the state vector x can be obtained as x A LC x B LD u Ly . As seen from the diagram. . x Ax Bu y Cx Du y ^ x A C Figure 9. So far in this chapter we have assumed that this prerequisite was somehow met.4. In the trivial case of a square nonsingular matrix C . the state vector can simply be computed from the input and output vectors as x C 1 y D u .2 State Reconstruction The prerequisite for pole assignment using state feedback is knowledge of the state variables. Observer D B . ^ x L State estimate u System .4 presents such a device in block diagram form ( Brogan 1991). (9. the estimator is driven by the difference between the actual output measurement y and the "expected" output C x .4) where L is the estimator gain matrix.

5) Therefore. The preceding diagram and equations refer to continuous-time systems. the function StateFeedbackGains can be employed for the purpose. one may use the function EstimatorGains. it is possible to choose gains L to be such that the eigenvalues of Ac A L C assume any desired location. poles find the estimator gain matrix for system that places poles of the estimator at poles Estimator design. but accepts and passes on the options for StateFeedbackGains and DualSystem. This reduces the problem of finding the estimator gains to one of finding the (transposed) controller gains for the dual system AT c AT CT L T (9. EstimatorGains does not introduce any new options of its own.6) . Discrete-time estimator schematic.4): x k 1 A LC x k B LD u k Ly k  (9.162 Control System Professional Observer D B System uk xk 1 yk Ax k Bu k Cx k Du k yk L ^ xk 1 ^ xk Delay State estimat A C Figure 9. The estimator for a discrete-time system is determined by an equation similar to Eq. EstimatorGains system. If the initial system is completely observable. Alternatively. which performs the procedure in one step. thereby controlling the rate at which x follows x .5. (9.

EstimatorGains handles both continuous. Therefore. will be used in the rest of this section to illustrate obtaining the estimator gain matrix.5. reproduced after Gopal (1993). 6 . 1. In[27]:= StateSpace 0. p2. the only output variable: y X and u fx . 1.9. making it. This option will be picked up by the function DualSystem. 0 7 4 m 7M .and discrete-time cases. 0 0 1 0 0 4m 0 7 4 L m 7L M 0 0 0 0 1 0 0 3gm 4m 7M 0 6g m M 4Lm 7LM 0 7M 0 6 4Lm 7LM 0 • Because we know that all symbolic variables in the input system are real-valued. 0 1 0 0 Out[27]= 3g m 4 m 7M . 0. . 0 . 0 . 0. 0. The model assumes that the state and input vectors are. respectively. At this point. the expression will be obtained in its general form. . we may set the option ComplexVariables correspondingly to get a simpler result. p3. 0. Here is the state-space model. Feedback Control Systems Design 163 The corresponding block diagram is presented in Figure 9. 0. therefore. 0. The model further assumes that the only variable available for measurement is X . 0. but use a generic list of four symbolic values.0 . p4} . and fx is the external force applied to the wheels. 0 . 0 . {p1. X is the horizontal position of the cart. X x X Θ Θ where Θ is the angular displacement of the pendulum. 0 6g m M . 0. Another linearized state-space model of an inverted pendulum. 4 L m 7LM . 1 . we will not specify the desired poles for the estimator. 0. 0.

15. p2. p1 2. 153. g 9. 1 6 g m 4 L p1 p2 m 4 L p1 p3 m 4 L p2 p3 m 6 g M L 4m 7M 7 L M p1 p2 7 L M p1 p3 7 L M p2 p3 p1 p2 p3 p4 . p3 2 . p4 . M 1. In[28]:= EstimatorGains % . p2 2 . L 2. p4 3 Out[29]= 9 . 1 3gLm 6 g m p1 6 g M p1 4 L m p2 p3 p1 7 L M p2 p3 p1 6 g m p2 1 6 g m 4 L p1 p2 m 4 L p1 p3 m 3gLm 4 L p2 p3 m 6 g M 7 L M p1 p2 7 L M p1 p3 7 L M p2 p3 p4 . m Chop . p1.358 . ComplexVariables Out[28]= None Apart p1 p2 p3 p4 . 1 2 m M 6 g m 4 L p1 p2 m 4 L p1 p3 m 4 L p2 p3 m L2 m 4 m 7 M 6 g M 7 L M p1 p2 7 L M p1 p3 7 L M p2 p3 1 6 g m p1 6 g M p1 4 L m p2 p3 p1 7 L M p2 p3 p1 3gLm 6 g m p2 6 g M p2 6 g m p3 6 g M p3 p4 6 g M p2 6 g m p3 6 g M p3 This is the result for a particular set of numerical values. p3. In[29]:= % .4532 . 323.456 .81.164 Control System Professional This finds the estimator for the system and somewhat simplifies the result. 35.

t (10. (10. Q . S and L xT t Q x t uT t R u t (10. t 1 t01 t Lxt. Additionally. t1 . In Eq. the optimal control problem is to find an admissible control function u t that forces the continuous-time system x f xt. to ensure that the solution is unique and finite.t t (10.10. matrices M and Q must be positive semidefinite and . t1 and the constraints X on the state trajectories x t that form the set of admissible trajectories x t X for all t t0 .3) is referred to as the optimal control law. Optimal Control Systems Design Given the constraints U on control functions u t that form the set of admissible controls u t U for all t t0 .3) then the control is said to exist in the closed-loop form.2).t (10. the function S is the cost associated with error in the terminal state at time t1 . and L penalizes for transient state errors and control effort.ut.2) If the solution to the optimal control problem can be found in the form ut uxt.6) where the desired state is assumed to be x Matrices M .1) to follow an admissible trajectory x t while minimizing the performance criterion J S x t1 . In the particular case of quadratic cost functions. (10. and R must be square.5) xT t1 Mx t1 (10. and R must correspond in dimension to the number of inputs.4) or (in the form that includes the cross-term P) L xT t Q x t uT t R u t 2 uT t Px t 0.ut. M and Q must have a length equal to the number of states. (10. and Eq.

166

Control System Professional

matrix R must be positive definite. The cross-term problem in Eq. (10.6) is reducible to the one in Eq. (10.5), and so matrix P must be of a form that brings about suitable Q and R . The components of the matrices reflect the emphasis the designer places on corresponding errors. For instance, if R is a diagonal matrix, a relatively larger value of Ri i means that more control effort will be allotted to regulate input ui . In a sense, the art of choosing the elements of Q, R, and M is similar to the art of selecting the proper pole locations in the feedback design via pole assignment. The optimal control problem can be restated similarly for discrete-time control systems x k 1 f x k,u k (10.7)

and the performance criterion is J Sx N
N 1 k 0

Lx k,u k

(10.8)

Control System Professional addresses both continuous- and discrete-time problems.

10.1 Linear Quadratic Regulator
In the case of the linear system x or x k 1 Ax k Bu k (10.10) Ax Bu (10.9)

and quadratic cost functions, the optimal control problem is said to be the linear-quadratic (LQ) optimal control problem. Further, for constant-coefficient matrices A and B and terminal time infinitely far in the future (meaning, of course, that the operating time is sufficiently long compared to the time constants of the system), the problem is referred to as the infinite-horizon or infinite-time-to-go problem. In this case, the control law in Eq. (10.3) simplifies to u Kx (10.11)

with a constant-coefficient feedback gain matrix K. Note that the penalty function S for terminal constraint in Eq. (10.2) and Eq. (10.8) is not an issue for the infinite-horizon problem. The function LQRegulatorGains attempts to find the matrix K for this particular case. It recognizes the type of system supplied to its input—continuous- versus discrete-time—and

10. Optimal Control Systems Design

167

acts accordingly. As is the case with the pole assignment problem, furnishing matrices C and D in the state-space description is optional.

LQRegulatorGains statespace, q, r find the optimal feedback gains for the system statespace and the quadratic cost function defined by weighting matrices q and r LQRegulatorGains statespace, q, r, p find the optimal feedback for the case where the quadratic cost function contains the cross-term p
Linear quadratic regulator design.

As an example, we design an optimal regulator for the mixing tank shown in Figure 10.1. The tank (Gopal (1993)) implements concentration control of a chemical mixture with inflows through two regulated valves at rates Q1 and Q2 and concentrations C1 and C2 , respectively. Outflow is at the rate Q with concentration C . The volume of the liquid in the tank is V . The state, input, and output vectors are assumed to be as follows: x V , u C Q1 , and y Q2 Q C

The design will be carried out in the discrete-time domain.

Q1 , C1

Q2 , C2

Tank h

Q, C

Figure 10.1. Mixing tank schematic.

168

Control System Professional

Load the application.
In[1]:=

ControlSystems` This is a state-space representation of the mixing tank system for a particular set of parameters.

In[2]:= tank

StateSpace 0.01, 0 , 0, 0.02 , 1, 1 , 0.004, 0.002 , 0.01, 0 , 0, 1 0 0.02 0 1 1 1 0.004 0.002 0 0 0 0

0.01 0
Out[2]=

0.01 0

This is a discrete-time approximation of the system sampled with a period of 5 seconds.
In[3]:= tankd

ToDiscreteTime % , Sampled 0. 0.904837 0 1 4.87706 0.0190325 0 0

Period 5

Out[3]=

0.951229 0. 0.01 0

4.87706 0.00951626 0 0

5

This assigns some values to the weighting matrices Q and R .
In[4]:= q Out[4]=

DiagonalMatrix 0.01 0 0 100 DiagonalMatrix 2 0 0 0.5

.01, 100

In[5]:= r Out[5]=

2, .5

This finds the optimal gain matrix.
In[6]:= LQRegulatorGains tankd , q , r Out[6]=

0.0223257 0.0778205

3.30934 3.81983

10. Optimal Control Systems Design

169

To check the regulator in action, we plug it into the state feedback. This is the device after the loop is closed.
In[7]:= StateFeedbackConnect tankd , %

0.462811 0.000315645
Out[7]=

2.48971 0.805502 0. 1.

4.87706 4.87706 0.0190325 0.00951626 0. 0. 0. 0.

0.01 0.

5

This simulates the output response of the closed-loop system to an impulse signal at the second input; in other words, it shows how the outflow rate and the concentration react to a short, sudden increase in the flow rate of the second chemical. The outflow rate and the concentration are shown as the dash-dotted and dotted lines, respectively.
In[8]:= SimulationPlot % , 0, DiscreteDelta t

Dashing

.1, .01, .01, .01

, t, 200 , PlotStyle , Dashing .01 , PlotRange All ;

0.04 0.03 0.02 0.01

50

100

150

200

PlotLabel "Regulated Vs. Regulated Vs. 0 In[12]:= bb .03 0.001 . DiscreteDelta t t.04 0. In[11]:= aa 3 1 . 0. Otherwise. Thickness .03 0. Dashing . which shows clearly that the regulated system has a significantly faster response.02 0.05 . In[9]:= SimulationPlot tankd . we plot the same graph for the original system. we combine the two plots in one graph.02 0. PlotRange All .01 50 100 150 200 A successful regulator design is possible only for a stabilizable system. 0. In[10]:= Show % . Original System".01 50 100 150 200 Finally.170 Control System Professional For comparison's sake. Here are matrices A and B for a system that is not stabilizable (because it is both unstable and uncontrollable). Original System 0. 0 2 1 .04 0. %% . LQRegula torGains does not return any solution. The outflow rate and the concentration are now shown as the solid and dashed lines. PlotStyle . 200 .

LQRegulatorGains shares with the Riccati equation solvers the limitations imposed on the input arguments and accepts the options accepted by the Riccati solvers. q . The cost function for this case contains outputs y instead of states x (cf. 10. 0. q . This modifies matrix A so that the system is stable (though still uncontrollable).12) LQOutputRegulatorGains calls LQRegulatorGains and accepts the same options. 1. 2 Now the regulator can be built. 0 . 1 . does . is the function LQOutputRegulatorGains . Eq.0314353 The optimal regulator design is based on solving the appropriate algebraic Riccati equation. In[13]:= q DiagonalMatrix 1.5).r 1. 1.. 0. which optimizes a system's behavior with regard to state variables. In[16]:= LQRegulatorGains StateSpace aa. Optimal Control Systems Design 171 We assign equal weight to all inputs and states. the syntactic difference between the two being that the output regulator function. The corresponding functions—RiccatiSolve and DiscreteRiccatiSolve —can be accessed directly. which finds the optimal solution to the output regulator problem. bb .10. r Out[16]= 0.3. 1. 10. and are described in Section 10. 2 . . Out[14]= LQRegulatorGains StateSpace 3.. 1 . . r LinearSolve::nosol : Linear equation encountered which has no solution. bb . In[15]:= aa 3 0 1 . 1.162278 0. In[14]:= LQRegulatorGains StateSpace aa. . The attempt to build the regulator fails. L yT t Q y t uT t R u t (10. as it should. obviously. 0 .2 Optimal Output Regulator Closely related to the function LQRegulatorGains. if so desired.

LQOutputRegulatorGains statespace. r Out[20]= 0.3 Riccati Equations Finding the optimal control for a continuous-time linear system with a quadratic cost function involves solving the differential Riccati equation. 0 . In[17]:= tank StateSpace 0.00125691 10.1. . 0. In[20]:= LQOutputRegulatorGains tank. .004. 1. 0 .00589458 0.02 0 1 1 1 0. q .01.2 This solves the problem.0589383 0.01. r find the optimal feedback matrix for the output regulator problem Linear quadratic output regulator design.01 DiagonalMatrix 2 0 0 0. 0.172 Control System Professional require matrix C (and matrix D. 0.000624227 0.2 10.004 0.002 . if necessary) in the state-space description of the input system. simplifies to the algebraic Riccati equation (ARE) .02 . We wish to design the output regulator with matrices Q and R as given. 1 . q. Reconsider the mixing tank problem from Figure 10. which. 0.01 In[19]:= r Out[19]= 2.002 0 0 0 0 0. for the case of the infinite-horizon problem.01 0 • In[18]:= q Out[18]= DiagonalMatrix 10 0 0 0. 0.01 0 Out[17]= 0. 1 0 0. 0.

Brogan (1991). Let all the weights be equal. In[21]:= aa % 1 . These are equations in an unknown matrix W that could be viewed as systems of coupled quadratic equations regarding the unknown components Wi j and as such could be attempted using the built-in Mathematica functions Solve and NSolve. 1 s2 Out[20]= 0 1 0 0 0 1 1 0 0 • This extracts matrices A and B. 1 . q.13) The discrete-time case requires solution of the discrete algebraic Riccati equation (DARE) W Q AT W A AT W B R BT W B 1 BT W A (10. In[20]:= StateSpace TransferFunction s.10. r solve the algebraic Riccati equation DiscreteRiccatiSolve a. bb. q. Control System Professional provides also two more specialized functions that can be more efficient than the general-purpose solvers. b. r solve the discrete algebraic Riccati equation Functions for solving Riccati equations. for example. r Out[23]= 3 1 1 3 . In[22]:= q DiagonalMatrix 1. q . In[23]:= RiccatiSolve aa. Consider again the double-integrator system.14) (see.r 1 . This is the solution to the corresponding ARE. b. RiccatiSolve a. Section 14). Optimal Control Systems Design 173 AT W WA W B R 1 BT W Q 0 (10. bb % 2 .

174 Control System Professional Consider now another second-order system.% .786939 0.Transpose bb . In the state-space description. In[27]:= DiscreteRiccatiSolve aa. 0 0. r Out[27]= 2.2). Sampled Period 1 Out[25]= 1.bb Out[28]= r . let us design a controller for the roll attitude of a missile (Figure 10. This solves the DARE.786939 1 0 0 1 This extracts matrices A and B.5 1 0 0 • This time we find its discrete-time equivalent for a sampling period of. r Out[29]= 0.% . by using the hydraulic-powered ailerons.40153 1. 1 s s . q . say.794026 We could also arrive at this result by using LQRegulatorGains directly. 0.426123 0.05792 1. In[24]:= StateSpace TransferFunction s. keep roll attitude Φ close to zero while staying within the physical limits of aileron deflection ∆ and aileron deflection rate ∆ (Bryson and Ho (1969)). q .794026 For yet another example.09705 We can use the result to find the optimal feedback gains.aa 0.606531 0.05792 2. In[28]:= Inverse Transpose bb . In[29]:= LQRegulatorGains ss. 1 second. it is assumed that the . 0. bb.538833 0.5 Out[24]= 0 0 1 1. In[26]:= aa ss 1 . The controller must. bb ss 2 . In[25]:= ss ToDiscreteTime % .538833 0.

0 Q Τ 0 0 0 1 0 . Φ0 and ∆0 are the maximum desired values of Φ and ∆. which accepts the command signal to aileron actuators u u ∆ and the state vector is ∆ x Φ Φ The performance index we choose is J 1 2 Φ2 0 ∆ so that ∆2 ∆2 0 u2 u2 0 Φ2 0 dt resulting in matrices Q and R as shown below.2. Optimal Control Systems Design 175 system has one input. Roll attitude control of a missile. Τ 1 0 In[30]:= aa .10. respectively. Q and Τ are the aileron effectiveness and roll-time constant. Top view This is the A matrix for the model. and u0 is the maximum available value of u . Φ ∆ Front view Ailerons Figure 10.

0174533 0. Φ0 We then reassign the matrices using these values. Φ0 Π 180.% Out[37]= 26.92 29.619 We can use the solution to find the optimal feedback matrix. ∆0 Π. In[34]:= Out[34]= Τ Τ 1.176 Control System Professional This is matrix B .261799. Applying RiccatiSolve to this list solves the corresponding Riccati equation.72757 2. q . In[37]:= Inverse r . In[32]:= q 1 ∆2 0 DiagonalMatrix 0 0 0 1 u0 2 0 0 1 Φ2 0 1 ∆0 2 . r . u0 10. 0. In[31]:= bb 1 0 . r aa.2378 2. In[35]:= aa.0962 578. Q 10. u0 Π. bb.%. . In[36]:= RiccatiSolve % Out[36]= 2. Q 1.94179 6.Transpose bb .38971 49. 0 These are the matrices Q and R for the performance index. ∆0 Π 12. .94179 18. bb. q .0343 180. 0.0962 18. 1 Φ0 2 Out[32]= 0 0 In[33]:= r 1 u2 0 Out[33]= We choose a set of numeric values for our parameters.2378 49.

however. . This. the result is the same as the one obtained with LQRegulatorGains. the eigenvalues should not contain symbolic parameters. therefore. you may use the Schur decomposition method. does not mean that the system cannot be solved symbolically. The method is accessible by selecting the option SolveMethod SchurDecomposition (which is available for RiccatiSolve only).10. RiccatiSolve does not accept infinite-precision input since neither does the built-in function SchurDecomposition . Eigendecomposition is chosen in either case). of course. q . RiccatiSolve and DiscreteRiccatiSolve work by finding the eigensystem of the corresponding Hamiltonian matrix. If symbolic parameters are used. In[38]:= LQRegulatorGains StateSpace aa. which has advantages for systems with multiple or near-multiple eigenvalues of the Hamiltonian (Laub 1979). bb . such a case requires human intervention in the selection of stable poles. a warning is issued and LQRegulator Gains returns unevaluated. r Out[38]= 26.0343 180. The default Automatic setting of the option selects the Eigendecomposition or SchurDecomposition method depending on whether the input matrices are exact or not (for DiscreteRiccatiSolve . option name default value SolveMethod Automatic method to find the basis of the eigenspace Option specific to RiccatiSolve. If this method is chosen. Optimal Control Systems Design 177 Again.92 29. Alternatively. Solving Riccati equations involves sorting eigenvalues of the Hamiltonian (which is required to separate stable and unstable eigenvalues).

In addition. p. (1990). This algorithm is implemented in DiscreteLQRegulatorGains . r design the discrete emulation of the continuous optimal regulator for the system statespace and weighting matrices q and r DiscreteLQRegulatorGains statespace. this knowledge can be applied to find a discrete equivalent of the optimal regulator. In[40]:= q 1. q. .01 .4 Discrete Regulator by Emulation of Continuous Design If the optimal regulator has been designed for a continuous system so that matrices Q and R (and possibly P) are constructed to meet the design specifications. 0. DiscreteLQRegulatorGains accepts the options related to the function ToDis creteTime. t design the emulation for a cost function containing the cross-weighting matrix p Finding the discrete equivalent of a continuous optimal regulator.4). 0 . we assume that these are the matrices Q and R that lead to satisfactory behavior of the closed-loop system. (1990). q. Section 9. DiscreteLQRegulatorGains statespace. This is the model for the satellite system. 0 . The procedure starts with computation of the discrete equivalent of the continuous cost function and then the discrete regulator is designed based on the discrete equivalent cost (see Franklin et al. r. and it accepts the same set of options. In[39]:= satellite StateSpace TransferFunction 1 #2 Out[39]= 0 1 0 0 0 1 1 0 0 • Following an example in Franklin et al.4.r 0. Its syntax closely resembles that for LQRegulatorGains .178 Control System Professional 10.

10. Optimal Control Systems Design

179

This is the optimal regulator matrix for the continuous case.
In[41]:= LQRegulatorGains satellite, q , r Out[41]=

10. 4.47214

This is matrix A for the corresponding closed-loop system.
In[42]:= StateFeedbackConnect satellite, % Out[42]=

1

0. 10.

1. 4.47214

These are the poles of the system. We want the discrete emulation to have poles close to these.
In[43]:= poles Out[43]=

Eigenvalues %

2.23607 2.23607 , 2.23607 2.23607

First we decide on the sampling period. To find a suitable time scale, we compute the characteristic polynomial of the closed-loop system.
In[44]:= CharacteristicPolynomial %% , s Out[44]=

Expand

Chop

10. 4.47214 s s2 Comparing the preceding result with its representation via natural frequency Ωn and damping coefficient Ζ , Ω2 2 ΖΩn s s2 , we find the characteristic time n tn 2 Π Ωn .

In[45]:= tn

2Π Coefficient % , s, 0

Out[45]=

1.98692

We choose a sampling period several times smaller than the characteristic time.
In[46]:= ts Out[46]=

tn 6

0.331153

180

Control System Professional

This is the discrete regulator that emulates the continuous one.
In[47]:= DiscreteLQRegulatorGains satellite, q , r, Sampled Out[47]=

Period ts

4.82977 3.10798

These are the poles of the corresponding discrete-time closed-loop system. The poles are in the z -plane.
In[48]:= Eigenvalues StateFeedbackConnect

ToDiscreteTime satellite, Sampled
Out[48]=

Period ts

,%

First

0.35298 0.333181 , 0.35298 0.333181

This maps the poles back to the s -plane. Log %
In[49]:=

ts
Out[49]=

2.18268 2.2846 , 2.18268 2.2846

As we can see, the poles of the emulation system match the ones found by the continuous design within a few percentage points.
In[50]:= Abs

%

poles poles

Out[50]=

0.0228161, 0.0228161

The function DiscreteLQRegulatorGains should not be confused with LQRegulator Gains, which, given a discrete-time StateSpace object as input, designs a discrete regulator. The difference between the two functions is that the former generates a discrete object from a continuous-type input, whereas the latter makes no type conversion; given continuous- or discrete-type input, it returns output of the same type. To reinforce the difference, Discrete LQRegulatorGains generates an error message if a meaningless attempt is made to feed it a discrete-time StateSpace object.

10. Optimal Control Systems Design

181

10.5 Optimal Estimation
Section 9.2 introduced the device called the estimator (or observer) and the function Estima torGains, which computes the gain matrix for the device. Input and output measurements were assumed to be known precisely so the problem could be referred to as the deterministic state reconstruction. Consider now a linear system whose state vector is subject to some random disturbances w t , called the process noise, and whose output measurements are contaminated with noise v t , called the measurement noise x t y t Ax t Cx t Bu t Du t Bw w t Dw u t v t

(10.15)

The noise processes are assumed to have flat spectra (white noise), zero mean values E w t E v t 0 0

and covariance matrices Q and R E w t wT Τ E v t v Τ
T

Q∆ t R∆ t

Τ Τ

Here E x denotes the mean of random variable x . The two noises may further be assumed to be mutually uncorrelated, E v t wT Τ 0

or if they are correlated, then their cross-covariance matrix is P: E v t wT Τ P∆ t Τ

If the observer with the same structure as in Figure 9.4 (Figure 9.5 for the discrete-time case) is applied to find the state estimates from noisy measurements, and the dual algorithm to the one used by the linear quadratic regulator is used to find the estimator gain matrix L , then the observer provides the least-square unbiased estimation for the state vector and is called the Kalman filter (or Kalman estimator). As with the infinite-horizon problem, one can consider the steady-state constant-gain solution to the optimal estimation problem that is arrived at when both process and measurement noises are stationary (at least in the wide sense) and the

182

Control System Professional

estimator operates for a sufficiently long time. The algorithm is implemented in the function LQEstimatorGains . The corresponding block diagrams are given in Section 10.7, where the KalmanEstimator function is introduced. If, in addition, the noise terms have Gaussian distributions, then LQEstimatorGains finds the solution to the so-called linear quadratic Gaussian (LQG) problem. In this case, the estimation not only is optimal in the least-squares sense, but also satisfies the most-likelihood requirements. Real processes never have (nor could have) absolutely flat spectra (i.e., be absolutely uncorrelated in time). At high spectral frequencies, the spectrum bends downwards, whereas at low frequencies it usually has a significant 1 f Γ component. It is the responsibility of the user to decide if the white-noise approximation is applicable to the particular case.

LQEstimatorGains statespace, q, r find the optimal estimator matrix for the system statespace and process and measurement noises with covariance matrices q and r, assuming that all inputs of the system are stochastic LQEstimatorGains statespace, q, r, p find the estimator gains for correlated process and measurement noises with cross-covariance matrix p LQEstimatorGains statespace, q, r, dinputs or LQEstimatorGains statespace, q, r, p, dinputs find the estimator gains if the system has deterministic inputs dinputs in addition to the stochastic ones
Optimal estimator design.

The function LQEstimatorGains relies on LQRegulatorGains (and, consequently, on the Riccati equation solvers) and, therefore, accepts the same set of options and involves similar restrictions on the input arguments. Consider a servomechanism for the azimuth control of an antenna shown in Figure 10.3. The system (cf. Gopal (1993)) has the state vector Θ Θ

x

Antenna schematic. 0. and w is the disturbing torque acting on the motor's shaft.1 0 0 0 • This defines the noise variances. which is specified by the fourth argument to LQEstimatorGains. In the following examples we will find the continuous and discrete Kalman estimators. 0 . Optimal Control Systems Design 183 and input and output vectors u V w and y Θ where Θ is the angular position of the antenna. 0. 1. V is the input voltage applied to the servo motor. 5 . r.196152 0.10. 1.0192379 .1 . q .r 1 . Θ Figure 10. The first input in our antenna system is the only deterministic input. This finds the stationary Kalman gains achieved after an observation of sufficient length. 1 Out[53]= 0. In[52]:= q 100 . In[53]:= LQEstimatorGains antenna. 0. Here is a state-space realization of the antenna mechanism.3. 0 Out[51]= 0 0 1 1 0 0 5 1 0. 1 . mutually uncorrelated noises with zero-mean values. The input w t and output v t noise terms will be assumed to be white. In[51]:= antenna StateSpace 0.

6 Discrete Estimator by Emulation of Continuous Design The function DiscreteLQEstimatorGains is similar in purpose to DiscreteLQRegula torGains (see Section 10. 1 Out[56]= 0. rd . In[55]:= qd rd 10 . 10.0786939 0.4).000426123 0. 0.184 Control System Professional This is a discrete-time approximation to antenna for some sampling period. and finally performs the discrete estimator design using LQEstimatorGains .and discrete-time objects and chooses the appropriate algorithm accordingly. DiscreteLQEstima torGains first finds the discrete equivalents of the covariance matrices for process and measurement noises using the algorithm described in Franklin et al. This finds the stationary Kalman gain matrix for the discrete-time system. Sampled Period . LQEstimatorGains accepts both continuous. DiscreteLQEstimatorGains accepts the same options as LQEstimatorGains and shares the same restrictions.3. (1990). it is applicable when the optimal estimator has been designed for a continuous system and you need to create a discrete equivalent.5. it accepts the options related to ToDiscreteTime. .00199394 0. 0. In[54]:= antennad ToDiscreteTime antenna. Then it converts the continuous system to a discrete one. In[56]:= LQEstimatorGains antennad . Section 9.00786939 1 0 0 0 0.0000203033 Like most other functions in Control System Professional.00426123 0. In addition.1 Now we let both noise terms have the same intensity. qd .0786939 0.606531 0.1 Out[54]= 1.

dinputs design the emulation if the system has deterministic inputs dinputs Finding the discrete equivalent of the continuous optimal estimator. In[57]:= antenna StateSpace 0. . r design the discrete emulation of the continuous optimal estimator for the system defined by the state space object statespace and noise covariance matrices q and r DiscreteLQEstimatorGains statespace.0194241 0. 0. Sampled Out[58]= Period . 1 .1 . and therefore can be used either as an estimator per se (for example. q. to form a controller. r. 0. as described in the following section) or as a filter (see the example later in this section). 0 .10.1 0 0 0 • In[58]:= DiscreteLQEstimatorGains antenna. 1 . Optimal Control Systems Design 185 DiscreteLQEstimatorGains statespace.1 0. This finds the discrete estimator gains for the continuous system antenna. 0.5 for continuous and discrete cases. Notice that the estimator outputs the estimates for both output and state variables of the system. respectively. r.00190346 10.7 Kalman Estimator Once the gain matrix L for the Kalman estimator has been found. q.4 and 10. 1. q . 5 . 0 Out[57]= 0 0 1 1 0 0 5 1 0. the estimator can be constructed as a state-space object using KalmanEstimator. 1. The block diagram of the device is shown in Figures 10.

Discrete Kalman estimator.5.4. Output estimate ^ ^ x x State estimate C A Bw w Stochastic inputs Figure 10.186 Control System Professional Deterministic inputs u B A . Dw Deterministic inputs u System Kalman Estimator B A D D _ x C Delay B D ^ y y Delay C Sensor outputs Dw Output estimate C ^ x L Bw w Stochastic inputs Figure 10. x System Kalman Estimator D D B ^ y x C y L Sensor outputs . A State estimate . Continuous Kalman estimator.

and all inputs are stochastic inputs KalmanEstimator statespace. all outputs of the system are sensor outputs. b1 . b2 . 1 a Out[61]= l c1 b1 c1 1 l d11 l d11 0 0 0 • . sensors use the sensor outputs specified by the vector sensors KalmanEstimator statespace. The result agrees with the block diagram in Figure 10. d11 . c2 . Consider a two-input. In[59]:= StateSpace a .10. sensors. gains. gains design the Kalman estimator for the system statespace assuming that the estimator gain matrix is gains. In[61]:= KalmanEstimator %% . 1 . In[60]:= ll = {{l}}. d22 a Out[59]= b1 b2 c1 d11 d12 c2 d21 d22 • Let the estimator gain matrix be symbolic. c1 .4. Optimal Control Systems Design 187 KalmanEstimator statespace. dinputs use dinputs as additional deterministic inputs Kalman estimator design. d21 . gains. This finds the state-space representation for the Kalman estimator assuming that the first output of the system is the sensor output and only the first of the inputs is deterministic. two-output system. ll. d12 .

188 Control System Professional Now we create another system that has the same state-space matrices as the previous one. In[63]:= KalmanEstimator % .5.1 0 0 0 • . we design and try out the Kalman filter. too. b1 . but is in the discrete-time domain. 1 . d11 . c1 . it can be used as a Kalman filter (Figure 10. 1 a Out[63]= a l c1 c2 1 b1 d11 a l d11 al c1 l 1 l c1 l c1 d11 l c1 l d11 l Because KalmanEstimator provides estimates for output signals. d21 . d22 . ll. In the rest of this section. In[62]:= StateSpace a . b2 . In[64]:= antenna Out[64]= 0 0 1 1 0 0 5 1 0. Sampled True b1 b2 c1 d11 d12 c2 d21 d22 KalmanEstimator recognizes the discrete-time data object and. c2 a Out[62]= . computes the discrete-time estimator. Kalman filter connected to a system. which extracts the useful signal from additive Gaussian noise that masks the angular measurements for the antenna servomechanism (see Section 10. The result satisfies the block diagram in Figure 10.5). u Stochastic linear system Inputs Figure 10. y Sensor outputs Kalman filter ^ y Filtered outputs This is again our antenna system.6. consequently. d12 .6).

This is the estimator gain matrix. 1. 0. 1.0019996 Out[66]= 1. 0. Consequently. 1 1. r. In[67]:= antenna1 ParallelConnect antenna. This is a new object. 0. 1. The expanded system has three inputs and one output. In[66]:= ll LQEstimatorGains antenna. antenna1. 1.10. 1 0. 5.0019996 Out[68]= 1. 0. see Figure 10. which is the optimal strategy for noisy measurements. the relatively smaller gains in the Kalman estimator were achieved. 1 Out[67]= 0 0 1 1 0 0 0 5 1 0. 1. 6 • . This is done by using the parallel connection with no input connected. We now make the measurement noise far more intense and will try to filter it out with the Kalman filter. Note that only the first output of the estimator gives the filtered output of the system (the rest of the outputs give estimates for the state vector. In[68]:= estimator KalmanEstimator % . 0. 0. 0.9992 10 0.r 100 . In[65]:= q 1 .9992 10 6 Notice that because the process noise was made relatively small and the measurement noise was relatively large compared with the noise in the example in Section 10. 0. . 0. Optimal Control Systems Design 189 Recall that the variance of the process noise is q. 0.9992 10 1. TransferFunction 1 .1 0 0 0 0 1 • This finds the Kalman estimator. q . that incorporates another stochastic input through which the measurement noise adds directly to the output.4).0019996 6 0. the Kalman filter will rely more on re-creating the output signal from the deterministic input using the known system dynamics than on actually processing the noisy sensor output.5. and the variance of the measurement noise is r. ll. 1 .

5. 0.0019996 Out[69]= 1. Kalman filter connection example. the inputs and outputs are numbered as they appear after this stage. 1. 0 0 1. In[69]:= filter Subsystem estimator. 0 0 0 0 0. In the figure.0019996 1. 1. .9992 10 0 1. In[70]:= ParallelConnect antenna1. filter. 1 . 6 0 1 0 • y 1 u 1 1 4 filter 2 3 2 ^ y w v zeros antenna1 Figure 10. 1.9992 10 1 0 0 0. 0. 0.9992 10 1. 0 0 0 Out[70]= 1 5 0 0 0 0 0 0 0. 1. It contains all the inputs and the first output of the estimator. 1 1.0019996 5.0019996 6 0.1 0 0 1. 0.7.9992 10 0. we connect the expanded system antenna1 and the Kalman filter according to the diagram in Figure 10. First we connect the inputs. 6 0 0 0 0 0 0 1 0. All. 6 • Then. 0 0 0. 0.7. 0.190 Control System Professional This picks the subsystem we are currently interested in.

r 1. 0. In the composite. Positive 1. 0.9992 10 0. 0. The result is a list of output vectors. v. 0. 1. Optimal Control Systems Design 191 Finally. In[74]:= w Table Random NormalDistribution 0.0019996 1. 0. In[72]:= Needs "Statistics`ContinuousDistributions`" Let the length of our simulation sequences be n. 1. y2 OutputResponse composite.9992 10 1. 0. 0. This is the sinusoidal input signal. 0.0019996 1. the first output corresponds to the output of the system and the second one to the filtered output. 0. This is where the simulation is performed. 0.1 0. 0. 0. 0. This is the measurement noise vector. 0. u. 6 0. We intend to supply no external signal to the summing input 4 . 0. we will load this standard package. 0. 1 . 1. 4. 1. 6 0. In[71]:= composite FeedbackConnect % .0019996 Out[71]= 0. 0. zeros . In[78]:= y1. 0. n . • To create the normally distributed noise. 1.10.9992 10 0. 6 1. 0.9992 10 1. 0. q 1. In[75]:= v Table Random NormalDistribution 0. 6 0. 0. n . . In[73]:= n 100.0019996 5. 1. 0. we close the feedback loop by connecting the first output of the preceding system with its fourth input (which actually is the first input of the filter). In[77]:= zeros Table 0. 0. This prepares a dummy zero signal for this input. 0. 0. 1 . n . w . In[76]:= u Sin 2 20 Π Range n N. 0. 5. This creates the process noise vector that has the Gaussian distribution with zero mean and standard deviation as required. 0. 0.

. .8 and 10. e. y1. LQRegula torGains). which states that if the optimal estimate for the state vector in the presence of noise is available. SymbolShape None. the optimal control law can be obtained as if there were no noise in the system (see. 1990.5 -10 20 40 60 80 100 Original Noise Added Filtered 10. PlotLegend "Original". 10 .9 show the structure of the controller for continuous. Controller forms the LQG controller. Gopal 1993.and discrete-time systems. The idea is implemented in the function Controller. and thus can be called the current controller. the optimal procedure in LQEstimatorGains) and the regulator matrix K (with.4)).3.5 -5 -7. the controller is based on the current estimator (as opposed to the predictor estimator (see Franklin et al. In[79]:= y10 y1 v. Section 6. The "original" signal (before the addition of noise v) is denoted as y10.192 Control System Professional We wish to see how the filter suppresses the measurement noise added to the system output signal y1. We can see that the Kalman filter has quite successfully restored the original signal from the noise. PlotRange 10. Figures 10.5). Section 12.5 5 2. if the process and measurement noise have a Gaussian distribution. Again. To use this function you must first determine both the estimator gain matrix L (using. This plots the signals. say. PlotLabel "Signals prior to and after the Kalman Filter". For the discrete-time case. "Noise Added ". PlotJoined True. Note that only the part of the system related to the sensor outputs is shown in these diagrams. Signals prior to and after the Kalman Filter 10 7. "Filtered " . In[80]:= MultipleListPlot y10.g.8 Optimal Controller The optimal controller design for the stochastic systems is based on the separation principle. respectively. y2.5 -2. say.

Once constructed. egains. egains. Inside the controller. sensors. the feedback loop for control inputs is already closed. It is a state-space object whose inputs are "additional deterministic inputs" and the sensor outputs of the system to be controlled. egains. dinputs. if the estimator gain matrix is egains and the controller gain matrix is cgains. Optimal Control Systems Design 193 Controller statespace. assuming that the rest of the inputs are control inputs Controller statespace. egains. sensors. sensors use the sensor outputs specified by the vector sensors Controller statespace. controls use controls as the control inputs Controller design.10. cgains. the controller can be connected to the system according to the block diagram shown in Figure 10. The outputs of the controller are typically connected to control inputs of the system to close the negative feedback loop. cgains design the controller for the system statespace. assuming that all outputs of the system are sensor outputs and all inputs are control inputs Controller statespace. dinputs use dinputs as additional deterministic inputs. .10. cgains. cgains.

Additional deterministic inputs u2 B2 A System Controller D 12 D 12 _ x C1 Delay B2 y Delay C1 L B1 Bw D 1w D 11 D 11 K u1 Control inputs w Stochastic inputs v ^ x A B1 Figure 10.8. ^ x B1 A ^ x ^ y B1 Bw D 1w D 11 D 11 C1 u1 K Control inputs w Stochastic inputs v Figure 10.9.194 Control System Professional Additional deterministic inputs u2 B2 . Discrete-time controller based on a current estimator. . x A System Controller D 12 D 12 B2 x y C1 L . Continuous-time controller.

We can see that the interconnections correspond to the diagram in Figure 10. d22 . Suppose further that we wish to use first output of the system to close the feedback loop. b2 . d21 . This designs the controller. 1 a Out[83]= l c1 k b1 k l d11 b2 l d12 l 0 0 • . Suppose that the first input is the control input and the second and third are. b3 c1 . d23 b1 b2 c1 d11 d12 d13 c2 d21 d22 d23 • This sets the estimator and controller gains.10. In[81]:= StateSpace a . b3 . d13 . 2 . Controller design and connection. d12 .10. In[82]:= ll l . b1 . Consider a three-input. estimator and controller gain matrices should both be 1 1 . Therefore. deterministic and stochastic inputs. Optimal Control Systems Design 195 Additional deterministic inputs u Control inputs Stochastic linear system Sensor outputs y Control gains for deterministic problem K ^ x Kalman estimator Controller Figure 10. d11 . two-output system. kk. ll. respectively.8. c2 a Out[81]= . In[83]:= Controller %% . kk k . 1 .

In[85]:= Controller % . In[84]:= StateSpace a . 0 . consider again the discrete-time model for the familiar satellite control system (see Section 9. d11 . d13 . Sampled True b1 b2 b3 a Out[84]= c1 d11 d12 d13 c2 d21 d22 d23 The discrete-time controller is designed (cf. d21 . d23 . In[88]:= Controller satellite. Sampled Period T 1 T Out[86]= 0 1 1 0 This sets the estimator and controller gains symbolically. ll. c2 . kk. 0 T2 2 T 0 T . d22 . 1 . b3 . This finds the controller. b2 . 0 . The sampling period is T. 2 .196 Control System Professional This is the discrete-time system with the same state-space matrices as in the preceding example. 1. l2 . d12 .1). Figure 10. ll l1 . 1 Simplify k l b1 a l k l d11 1 kl 1 k l d11 a Out[85]= k b 1 l c1 1 k l d11 1 k l c1 1 k l d11 1 b2 k l d11 1 l a k b1 d12 k l d11 1 k l d12 k l d11 1 As yet another example.9). b1 . 1 . 0. ll. kk Out[88]= 1 k T2 2 1 1 2 T k1 k1 l1 k2 l2 T 2 l2 T l1 1 T l2 T k1 l1 k2 l2 l1 k1 k1 k2 l2 T 2 k2 2 1 T k2 k2 1 2 k1 l1 l2 k2 l2 T 2 l2 T l1 T k1 l1 k2 l2 k1 l1 k2 l2 T . k2 . In[87]:= kk k1 . c1 . In[86]:= satellite ToDiscreteTime StateSpace 0. 1 .

3) where the Jacobian g t for a vector of functions . (11. For reasonably smooth nonlinearities. y . it does provide the linearization tools that may allow construction of a linear model describing the behavior of the nonlinear system in the vicinity of some operating point. u h x. C . if not a suitable approximation.1). The coefficients A . x t y t u t x t y t u t xn t yn t un t from some known solution xn . and u are small deviations. x n h . B. u n h u n A C B D (11. 11.1) the locally linearized model is given by x y Ax Cx Bu Du (11. x n f . u (11.11. and D are the Jacobian matrices evaluated on the nominal solution: f . yn . which we refer to as the nominal solution.2) where x . and un to the original nonlinear Eq. this approach may provide a useful insight into the properties of the system.1 Local Linearization of Nonlinear Systems For a nonlinear state-space system of the form x y f x. Nonlinear Control Systems Although systematic treatment of nonlinear control problems is currently beyond the scope of Control System Professional.

u1 n . tn g2 t1 . the short (containing just matrices A and B ) or full (containing all four matrices) state-space description. … find a linearized state-space approximation to a vector of functions f in the state variables xi and input variables ui in the vicinity of the operating point xi n . respectively. As an example. u1 n . tn in variables t1 . u2 . u1 . … . t2 .198 Control System Professional g t g1 t1 . u2 . we consider the linearization of the one-dimensional magnetic ball suspension . x2 n . …. tn gm t1 . t2 . t2 . x2 n . …. un find a linearized state-space approximation to the function f in the state variable x and input variable u in the vicinity of the operating point xn . u1 . x1 n . un Linearize f . t2 . Linearize accepts one or both of the function vectors f and h and returns. ui n Linearize f . h. xn . … . tn is g1 t1 g2 t1 gm t1 g1 t2 g2 t2 gm t2 g1 tn g2 tn gm tn g t The function Linearize attempts to find such an approximation. …. x. Desired options for the resulting state-space object may be supplied to Linear ize or inserted later. x1 n . x1 . …. … find a linearized state-space approximation to the vectors f and h Local linearization. u2 n . x2 . Linearize f . u2 n . u. x1 . x2 .

and x3 t i t . M x1 L The output vector contains a single variable. In[2]:= f x2.11.1 (after Kuo (1991)). u .1. x2 t l t . Linearize does not require additional list wrapping in such a case. g x32 M x1 . Magnetic ball levitation system. R x3 L R x3 L e L Out[2]= x32 e . g x2. In[1]:= ControlSystems` This is vector f corresponding to the nonlinear state equation x f x. i R L e _ l i2 _ l Electromagnet Steel ball Mg Figure 11. Nonlinear Control Systems 199 system shown in Figure 11. The state variables are chosen as x1 t l t . The system attempts to control the vertical position l t of the steel ball through input voltage e t . Load the application. The only input variable is u t e t . In[3]:= h Out[3]= x1 x1 .

200 Control System Professional This linearizes the state equation near some nominal point with the coordinates x10. For this experiment. let the nominal point be the equilibrium position of the ball at some coordinate l0. x3. x30 . A and B . x10 . x1. In[7]:= ball ball . x3. x20 0. x2. e0 0 x302 Out[4]= M x102 0 • This finds the full (four-matrix) state-space description. x20. e. In[6]:= Out[6]= x10 x10 l0. the resulting state-space object contains only two matrices. x30. x20 l0. In[5]:= Linearize f. x2. x10 . 1 0 0 0 2 x30 M x10 R L 0 0 1 L x1. x30 0 0 1 L 0 . % 1 0 0 2 0 g l0 M l0 M R L 0 0 1 L 0 Out[7]= g l0 0 • . x30 M g l0 g l0 M This is the state-space description near the equilibrium. e0 0 x30 Out[5]= 2 2 1 0 0 0 0 2 x30 M x10 R L 0 M x10 0 1 • As soon as the linearized state-space model is found. In[4]:= ball Linearize f. x30 0. Since only one vector (f) is supplied. and e0. x20 . h . x20 . we may apply other Control System Professional functions. e.

0 . 0 . 0.. p1 1. In[11]:= % StandardForm Out[11]//StandardForm= StateSpace 0.49503 0..11. 3 . 0. 98.8. 0. 1. One way to do this is to insert an identity matrix as matrix C into the previous (short) state-space object to feed all states through to the outputs. % Simplify 1 0 0 2g g l0 M p1 p2 p3 0 0 p3 1 L Out[9]= 0 g l0 M l0 p1 p2 p3 2 g p1 g l0 M p2 p3 M g 2 l0 p2 p3 p1 p2 g l0 M • This is the closed-loop system for a particular set of numerical parameters after dropping insignificant numerical errors.1 Out[10]= • Let us investigate the dynamics of the state variables.515178.49503. 0 1. Nonlinear Control Systems 201 We now find the state feedback gain matrix that places the poles of the system into some position p1. p3 1 Chop 0 1 98. In[9]:= StateFeedbackConnect ball. M 10 103 . In[8]:= StateFeedbackGains ball. R 100.99 . The matrix can be directly inserted at the last position in the state-space object.99 0 3 0. p3. In[10]:= % .1 .515178 0 0 197. p2. g 9. p1. p2. p2 1 . 197.L 10. p3 Out[8]= L M g p1 l0 p2 p3 p1 g p2 g p3 2 g l0 M L M g l0 p1 p2 l0 p1 p3 l0 p2 p3 2 g l0 M L p1 L p2 L p3 R This is the system after the feedback loop is closed.1. 0 . 1. l0 .

Note that the built-in function InterpolatingPolynomial may be useful for some approximations. general rational. PlotStyle . economized rational. The first two are in the context Calculus`Pade` .515178 0 1 0 0 197.05.1 0. and the others are in NumericalMath`Approxima- tions`.001 .1 0 0 0 • This gives the state response to a step function. it shows how the state variables change if the input voltage applied to the electromagnet suddenly changes by some value ( 10 mV in the plot). . In[13]:= SimulationPlot % . IdentityMatrix 3 . .04 0. 1 Out[12]= 0 98.02 PlotLabel "Step Response" . Sampled Thickness . and the current in the magnet is shown as the dashed-dotted line. 5 .04.2 Rational Polynomial Approximations Another approach to obtaining the linear time-invariant (LTI) approximation to a nonlinear system involves the approximation of a nonlinear transfer function by a polynomial ratio.01. and minimax approximations.49503 1 0 0 1 0 0.06 0. Dashing .01. . . too.202 Control System Professional In[12]:= Insert % . t. respectively.01 . Dashing . Corresponding functions are provided with standard Mathematica packages and represent Padé. Step Response 0. . that is. 11.02 1 2 3 4 5 Period . 1. The coordinate and velocity of the ball are shown as a solid and dashed line.01.08 0.1 .99 3 0 0 1 0 0 0. .

In[17]:= Pade delay. s. In[15]:= delay Out[15]= as Exp as This loads the necessary package. % h a2 2 12 a 2 1 a2 2 12 a 2 Out[18]= 10 1 50 1 1 .11. 2 1 Out[17]= 1 as 2 as 2 a2 s 2 12 a2 s 2 12 This generates an object suitable for analysis with Control System Professional. In[18]:= TransferFunction s. which introduces the delay term. In[14]:= h 1 10 s 1 50 s 1 Out[14]= 1 1 10 s 1 50 s The temperature sensor for the exchanger is located so that its reading is delayed a few seconds. 2. Nonlinear Control Systems 203 Here is the transfer function describing some ideal heat exchanger. 0. In[16]:= Needs "Calculus`Pade`" We use the Padé approximation to represent the delay as a polynomial ratio of the order 2 2.

Load the application. and DisplayTogetherGraphicsArray are included with Control System Professional but are useful well outside its scope. the ordered Schur decomposition finds a unitary matrix q and triangular matrix t. Similar to the Schur decomposition of matrix m . Miscellaneous This chapter covers several mathematical and utility functions. it shares the same syntax and accepts the same options. with matrix t being such that the eigenvalues of m appear on the main diagonal of t .1 Ordered Schur Decomposition The function SchurDecompositionOrdered is an extension to the built-in function Schur Decomposition. 12. such that q t qH gives m . and CountStates) and to check on the structural consistency (ConsistentQ). SchurDecompositionOrdered m find the Schur decomposition in which eigenvalues of matrix m appear on the diagonal in canonical order SchurDecompositionOrdered m. with certain additions and exceptions.12. The functions SchurDecompo sitionOrdered. The guidelines for selecting the ordering function in Schur DecompositionOrdered are the same as those for the built-in function Sort. Also described is the function to create systems with random elements (RandomSystem). In[1]:= ControlSystems` . CountOut puts. In addition. where H denotes Hermitian transpose. LyapunovSolve. Other utility functions provide the means to determine the structure of systems (CountInputs. pred use the function pred to determine whether pairs of eigenvalues are in order Ordered Schur decomposition. Rank. the ordered Schur decomposition makes the eigenvalues appear on the diagonal in the prescribed order.

343931 0 0.786599 0. 3 Out[2]= 0.517232 0.612827 0. In[3]:= q.12446 0.425219 0.12. The eigenvalues on the main diagonal do not follow any particular order. 0. In[6]:= to Chop 0.482617 1.194399 0.t SchurDecomposition m .500837 0 0 0. to SchurDecompositionOrdered m .422634 0.238887 0.295623 0.5415 Out[6]= The decomposition is still valid.Conjugate Transpose qo m Chop Out[7]= 0 0 0 0 0 0 0 0 0 This is the ordered Schur decomposition in which the eigenvalues residing in the right half of the complex plane go first. In[2]:= m Table Random . In[8]:= qo. Miscellaneous 205 Here is a 3 3 matrix with random elements.309219 0.893263 This finds its Schur decomposition using the built-in Mathematica function. 3 .to. In[4]:= t Out[4]= 1.398038 0. Now the diagonal elements appear in canonical order. to SchurDecompositionOrdered m. In[5]:= qo. In[7]:= qo. .12446 0. 0. Re # 0& .5415 0. 0.500837 This finds the ordered Schur decomposition of the same matrix.848339 0.238012 0.

option name default value RealBlockForm False whether complex eigenvalues of the real input matrix should be returned as real blocks Option value specific to SchurDecompositionOrdered. Typically. such as stability analysis. In[10]:= qo.. 12. optimal control.500837 0 0 This sorts the eigenvalues in descending order of their real parts. A and B are square matrices with dimensions m m and n n .5415 0.206 Control System Professional This is the corresponding matrix t. An important particular case of the Lyapunov equation.563829 0 0. In[11]:= to Out[11]= 1. just as SchurDe composition does. Re #2 Re #1 & . matrices C and X should have dimensions n m .12446 The function SchurDecompositionOrdered accepts the option Pivoting. . For this particular matrix m.203783 0.563829 0 0.295623 0. the option RealBlockForm in SchurDecompositionOr dered accepts only the default value False.2 Lyapunov Equations The function LyapunovSolve attempts to find the solution X to the Lyapunov equation XA BX C (12. and response of a linear system to white noise (see. In[9]:= to Chop 0. To be consistent. Brogan (1991)). e. to Chop SchurDecompositionOrdered m.295623 0.12446 Out[9]= 1. However. the result is the same as the previous one.1) a special class of linear matrix equations that occurs in many branches of control theory.g.203783 0.500837 0 0 0.5415 0.

For discrete-time systems.2) involves only matrices A and C . b.a Transpose a .% . c solve the discrete Lyapunov equation x a. c Simplify 2 q1 Out[14]= q1 3 q2 2 5 q2 4 q1 3 q1 2 5 q2 4 19 q2 8 We may verify that the solution indeed satisfies the discrete Lyapunov equation. In[13]:= c q1 0 .x c DiscreteLyapunovSolve a. 1 In[12]:= a 1 1 2 . The solution can be found with DiscreteLyapunovSolve. In[15]:= a. In[14]:= DiscreteLyapunovSolve a. 1 Here is matrix c.x c for matrix x solve the Lyapunov equation x. c solve the matrix Lyapunov equation x. Here is matrix a.Transpose a c Functions for solving Lyapunov equations.12. c LyapunovSolve a.Transpose a Out[15]= c % Simplify True . Miscellaneous 207 XA AT X C (12.a b.3) arises.x. LyapunovSolve a. in which case all matrices are square with the same dimensions. the discrete Lyapunov equation X A X AT C (12. 0 q2 This solves the discrete Lyapunov equation and simplifies the result.

1 In[17]:= b This solves the Lyapunov equation.bpx Simplify 7 x1 x2 . Section 10. x2 . In[16]:= a 3 1 1 0 2 . we will compute the input signal as BT P x t BTP x t u These are matrices A and B for a state-space system. 318 x1 x2 362 x22 8 x1 19 x2 113 x1 318 x1 x2 362 x22 2 In[20]:= u Out[20]= 113 x1 2 Despite the fact that our system is linear. and can find V x xT P x xT P x thereby obtain the fastest response. T Q . x1. . The method is applicable to systems stable at least in Lyapunov's sense. The matrix Q is assumed to be the identity matrix 2 2. bpx bpx. one A P PA xT x 2 uTBT P x. 1 1 .p. which we assume to be continuous-time. To make V x as negative as possible.8). IdentityMatrix 2 7 40 1 40 1 40 9 20 Out[18]= This computes the control law. where Q is the identity matrix with proper dimensions. we will first solve the Lyapunov equation.208 Control System Professional Consider now an example of design of a feedback controller that uses Lyapunov's method to approximate the minimum-time response system for a given system (see Brogan (1991). In[18]:= p LyapunovSolve a. In[19]:= bpx Transpose b . To find the Lyapunov function V x xT P x. the minimum time response control is not. Knowing V x .

0. If this option's value is Automatic.5 0 u1 -0. these methods are tried in turn until one succeeds or all are tried. The method is set using the option SolveMethod. 2 . 1. PlotPoints AxesLabel PlotLabel 40.076. Minimum time response control 1 0. x1.209. 2. 2 .5 0 u 2 -0.5 -2 -1 0 x2 1 -1 2 This is the second component of the control signal. "Minimum time response control" . x1 1 2 Minimum time response control -2 -1 0 1 0.12.5 -2 -1 0 x2 1 2 1 0 x1 -1 -1 -2 LyapunovSolve and DiscreteLyapunovSolve use the direct method (via the built-in function Solve) or the eigenvalue decomposition method. 2. 2. Boxed False. Miscellaneous 209 This plot shows the first control signal as a function of the state variables. "x1 ". "Minimum time response control" . ViewPoint 2. "x2 ". "x1 ". which correspondingly accepts the values DirectSolve and Eigendecompo sition. PlotPoints AxesLabel PlotLabel 40.000. "u1 " . "x2 ". "u2 " . In[22]:= Plot3D u 2 . 2. Boxed False. x1. 2 .141.080 . x2. x2.080 . 2 . In[21]:= Plot3D u 1 . 2. 1. . ViewPoint 3.

. for a system entered manually. as obtained through NullSpace. For exact and symbolic matrices. merging or connecting with others). the difference between the number of columns of m and the length of the null space. Correspondingly. you may wish to check that no mistakes have been made and that the matrices you entered can really represent a system. and CountStates can be used to determine the number of inputs. The following functions help with these chores. Rank counts the nonzero singular values. find the number of inputs of system find the number of outputs of system find the number of states of system determine if the elements of system have dimensions consistent with each other CountInputs system CountOutputs system CountStates system ConsistentQ system Checking on the system parameters.3 Rank of Matrix Rank m Determining the rank of matrix. find the rank of matrix m For inexact numerical matrices m. Alternatively.210 Control System Professional option name default value SolveMethod Automatic method to solve the equation Option specific to Lyapunov equation solvers. 12. Rank accepts the options pertinent to SingularValues or NullSpace and passes them along to these functions. or states of the system. They are useful in cases when systems change as the result of structural operations (say. is used to determine the rank. outputs. CountOutputs. 12. as obtained through the built-in function SingularValues.4 Part Count and Consistency Check CountInputs.

All input GraphicsArray objects must have the same dimensions.g. single-output SISO system nth-order SISO system nth-order system with i inputs and o outputs . opts combine the graphics arrays arrayi in a GraphicsArray object Displaying GraphicsArray objects together.12. o Generating a system with random elements. array2 . RandomSystem[args] is used in place of the actual system contents. 12. i. or ZeroPoleGain. RandomSystem args create a random system of type type use the variable var in the body of the random transfer function system RandomSystem RandomSystem n RandomSystem n. Systems with random parameters could be useful for numerical experiments. random first-order single-input. BodePlot). checking the design concepts.5 Displaying Graphics Array Objects Together The function DisplayTogetherGraphicsArray displays multiple GraphicsArray objects as one such object and is similar to the function DisplayTogether from the standard package Graphics`Graphics` . ….6 Systems with Random Elements Random yet stable (at least in the sense of Lyapunov) systems can be generated using the function RandomSystem in conjunction with the desired system type—StateSpace.. Trans ferFunction. The function also accepts options pertinent to GraphicsArray. Miscellaneous 211 12. Input arguments can be either GraphicsArray objects or any other Mathematica commands that result in such objects (e. DisplayTogetherGraphicsArray array1 . type RandomSystem args type var. etc.

40162 This is a second-order single-input.344929 Out[24]= 0. In[24]:= StateSpace RandomSystem 2.01261 3.87885 3.82357 3. .76923 3. 1. and gains (for transfer function systems) or by creating a block diagonal matrix with suitable eigenvalues and then performing linear transformation on that matrix to form matrix A (for state-space systems).400733 2.93888 0. poles. two-output state-space system. Almost all of the following options listed below (with the exception of Exact) accept either one value (to be applied to both numerators and denominators or to the eigenvalues) or a list of two values to create the numerators and denominators according to different rules.00387746 2.05476 4.05821 • RandomSystem works by creating random matrices of zeros.55686 1. 2 1. RandomSystem Out[23]= 0. In[23]:= TransferFunction s.59669 0.212 Control System Professional This creates a random first-order transfer function in variable s.228547 3.

Miscellaneous 213 option name default value Exact ComplexRootProbability RealRootProbability SpecialPoints SpecialPointProbability MultipleRootProbability ImaginaryToRealRatio MappingFunction ExactConversionFunction False 0.33. Setting Real RootProbability Automatic will generate a strictly proper transfer function.Automatic Automatic 0.2 Automatic Automatic Automatic whether to generate an infinite-precision system probability of complex roots probability of real-valued roots special point locations probability of special points probability of multiple roots mean imaginary real ratio for complex roots function s to map over random roots additional function s to map over roots to create an infinite-precision system Options to RandomSystem.0.1 0. The probability of such points in numerators and denominators is set by the option SpecialPointProbability. then all available root positions that remain after the creation of complex roots and special points will be filled with real roots.12. The option SpecialPoints allows the location(s) of some special roots to be specified (to create an integrator.33.0.0. If RealRootProbability is set to Automatic. This is usually what you want in the denominator of a transfer function to ensure that the system has the required order.1.5 0. for example). .05. but this is not necessarily the case for the numerators.

Gene F. William L. Levine. MA: Addison-Wesley. NJ: Prentice Hall. Feedback Control of Dynamic Systems. Dec. pp. K. M. The Control Handbook. 3rd ed. Laub. Workman. . Englewood Cliffs. 6th ed. IEEE Transactions on Automatic Control. NJ: Prentice Hall. 5. A Schur method for solving algebraic Riccati equations. AC-26. Bruce C. 913–921. MA: Addison-Wesley. observability. 1990. 17–32. and Abbas Emami-Naeini. 1981. 1992. Feb. Nichols. 2nd ed. Kuo. Englewood Cliffs. Modern Control Engineering. Englewood Cliffs. pp. Dorf. Modern Control System Theory. Ogata. David Powell. NJ: Prentice Hall. Principal component analysis in linear systems: Controllability. Franklin. William S. 1985. MA: Addison-Wesley. J. New York. Van Dooren. International Journal of Control. Franklin. 1980. 1996. Benjamin C. 1991.References Brogan. Reading. Modern Control Theory. no. N. 1979. Moore. J. Boca Raton. Reading. Richard C. 2nd ed. 1993. Digital Control of Dynamic Systems. Automatic Control Systems. 41. J. NY: John Wiley & Sons. Gopal. Modern Control Systems. FL: CRC Press. 1990. Robust pole assignment in linear state feedback. AC-24. pp.. Linear Systems. Gene F. Kailath. and model reduction. Englewood Cliffs. 1129–1155. Katsuhiko. Reading. 2nd ed. 1991. and Michael L. Kautsky. Alan J. NJ: Prentice Hall.. Thomas. David Powell. IEEE Transactions on Automatic Control. and P. 1991.

48 BackwardRectangularRule. 175 roll-time constant of. 48. 182 Ackermann. 159 Ackermann's formula. of antenna. 173 All. 175 A/D conversion. 185 Kalman filter for. 172 Attitude control. 152 Ailerons. 95 Azimuth control. 3. 48 BilinearTransform. 182 Backward rectangular rule. 175 Aircraft example. 165 AdmissibleError. AdmissibleError. 17 Ackermann. 92 Angle of attack. 188 ARE. of submarine. 151 of spacecraft. 152. 32 Actuators. 112 Analog simulation. 55 . in DeleteSubsystem. 152 Admissible trajectories. 154 Active wrappers. Backward RectangularRule. 155 robustness of solutions. BilinearTransform. 65 Analog systems. 115 in Subsystem. 48 Bilinear transformation. 33 Analog-to-digital converters. 25. 175 of a missile. of satellite. 165 Admissible error. 156 effectiveness of. 172 discrete.Index 1 f noise. 156 Algebraic Riccati equation. 44 Analytic. 72 Antenna example. 115 Amplifiers. 182 discrete estimator for. 44 Admissible controls.

89 phase unwrapping in. 12 Center of mass. for time-domain response. 151 Characteristic equation. 44 Continuous-time vs. 167 Classical control. 50. 9 Complex plane. 33. 4. 131 Composite systems. 140 controllable companion. z-plane. ConsistentQ. 39 converting between. 36 observable.vs. 213 ComplexVariables. 122 . 138 observable companion. 105 Companion realizations. 45 traditional notations for. 4. 210 Continuous-time systems. 154. 60 Controllability. 3 constructing from ODEs. 38. 167 Condition number. SeriesConnect. optimal. 165 using pole assignment. 80 Characteristic polynomial. controllable. 210 ConsistentQ. discrete-time systems. selection in single-input algorithms. 39 ContinuousTimeQ. of pendulum. 36 Jordan. 66 Calculus`Pade`. 202 Canonical forms. 155 ControlInputs. 4 ComplexRootProbability. 155 Control inputs. reducing to trigonometric functions. 33. of satellite. 34 Control objects.216 Control System Professional Block diagrams. 98 Center of gravity. ControlInput. 136 Kalman observable. 98 Bode plot. ControlInputs. 85 collecting several plots in one. 39. 150 Control effort. 98 Concentration control. 38. notation for. 144 displaying several objects together. of controllability matrix. 165 Control Format palette. 4. 119 Cascade connection. 39 domain identification of. controllable. 95 Complex exponentials. 31–32. 85 BodePlot. 36 Cascade compensation. 138 Kalman controllable. 179 Chemical mixture. 41 ControlInput. construction of. 87 Bridged-T network. 36 Compensator. 54 Continuous-time to discrete-time conversion. 211 gain and phase margins in. 62 Complex numbers. 155 Consistency check. 136 modal. 80 Closed loop. 35 default domain of. BodePlot. 4. 32. 7 Control input. 16 continuous-time vs. s. 39. discrete-time. 40 Control design. 60 Control matrix. control of.

181 Cost function. 134 Controllable canonical form. 210 CountOutputs. 54 Damping ratio. 178. 72 Derivative controller. 210 CountStates. 124 Controllable. DeterminantExpansion. 82. 140 Controllable companion realization. 192 Controller design. 36 ControllableSpaceSize. 181 Crossover frequency. 193 state reconstruction. 125 ControllableCompanion. 126 ControllabilityTest. of ailerons. 53 in heat exchanger. 182. 136 Controllable subspace. 98. ControllableCompanion. 36 Controllable states. Controllable. 184 quadratic. 203 DeleteSubsystem. 141 in KalmanControllableForm. 33 DeterminantExpansion. 136. 115–116 Depth control. 185. 113 Deflection. 128 Controllability matrix. 91 Current estimator. 187. 132–133 vs. 137 in KalmanObservableForm. 129 ControllabilityMatrix. 179 of dominant poles. 175 Delay. 124. 124. 181 CriticalFrequency. 154 ControllabilityMatrix. 33 Deterministic. size of. 125 ControllableSubsystem. 105 in state feedback. 166 CountInputs. 122 ControllabilityGramian. 15 Correlation. 76 DARE. in InternallyBalancedForm. ControllableSpaceSize. 181 . of submarine.Index 217 Controllability Gramian. 203 Padé approximation of. 171 equivalent. 150 in output feedback. 74 Cross-covariance matrix. 175 Deflection rate. 173 DecompositionMethod. 142 ControllabilityGramian. 140. 48 Critically damped. inputs. 81 Controller. 119 Determinant expansion formula. 114 optimal. 88. 81 of third-order system. 192 Controller. current. 119. 126 Controllability test. 165. between process and measurement noises. 192 D/A conversion. KalmanControllableForm. 138 Controlled signal. 192 design of. 137 DefaultInputPort. 210 Covariance matrix. 122. 74. of ailerons.

184 gain matrix for. 185 optimal. 161 current. 181 predictor. of continuous design. plotting routines from. 70. 162 Evolution matrix. 52. StateSpace model from. 130. 204 multiple. 6. 39 Dominant subsystem. 12 DisplayTogether. 135 Dynamic systems. 120 Dryden Flight Research Center. 171. 147 Electrical Engineering Examples. 200 of pendulum. 129. of magnetic ball. 80 Emulation. 209 Eigensystem. 43 Equilibrium. 178. 44 Discrete-time to continuous-time conversion. 4 DiscreteDelta. 211 DisplayTogetherGraphicsArray. 78 Direct transmission matrix. 192 discrete by emulation of continuous. 139. 184 DiscreteLyapunovSolve. as value of SolveMethod. 17. 141 Eigenvalues. 17. 151 PID controller for. in impulse response simulation.218 Control System Professional Differential equations. 178. 40 Displacement. 142 DominantSubsystem. 202 Effectiveness. 211 Domain identification of control objects. 34 DirectSolve. and poles. 142 Double integrator. 184 Error signal. 81 Estimator. 34 Digital-to-analog converters. 78 DiracDelta. as value of SolveMethod. 34. in impulse response simulation. as value of DecompositionMethod. DominantSubsystem. 80. ExpandRational. 132–133. 177 of closed-loop system. of ailerons. 34 Exact. 179 in SchurDecomposition. 213 Expanding transfer functions. 156 Dual systems. in impulse response simulation. DisplayTogetherGraphicsArray. 79 DiracDelta. of pendulum. 144. 184 Discrete simulation. 178. DualSystem. 213 ExactConversionFunction. 169 DiscreteLQRegulatorGains. 184 EquationForm. vs. 28 . 15 Equivalent cost. EstimatorGains. 161 Kalman. 70. 65 Discrete-time systems. 207 DiscreteRiccatiSolve. 130 DualSystem. approximating in analog simulation. 173 DiscreteTimeQ. 175 Eigendecomposition. 182. 15 Digital systems. 54 Dirac delta function. 150. 204 Eigenvectors. 209 Discrete emulation of continuous design. 150 of Hamiltonian matrix. 177 specifying order of. 192 EstimatorGains. 4. 39. 178. 27 Economized rational approximation.

211 Hamiltonian matrix. 203 Hold equivalence methods. displaying several objects together. 93. 56 Flicker noise. 84 Free response. 125 Gain band. 85. 57 Frequency prewarping. FeedbackConnect. 89 GainPhaseMargins. 98. 177 Heat exchanger example. 28 F-8 aircraft example. InitialConditions. 48 Homogeneous response.Index 219 ExpandRational. FactorRational. 48 ForwardRectangularRule. 70 Infinite-horizon problem. 70 simulating with DiscreteDelta. forming with FeedbackConnect. infinite. 57 Horizon. 191 Feedthrough matrix. 112 positive. 28 FactorRational. 28. 15. 128 GraphicsArray. 166 Imaginary unit. 169 simulation of. 182 Flight control example. 48. 182 Infinite-time-to-go problem. 60 . 110 Gramian. 34 First-order hold. 94 FirstOrderHold. 124. 48 Frames. 112 Feedback. 9 ImaginaryToRealRatio. 167 Initial conditions. 125 FullRankObservabilityMatrix. 167 Inflow rate. 150. 55 Frequency response. 105 path. 201 using Lyapunov's method. 192 NormalDistribution. 60 InitialConditions. 95–96 FullRankControllabilityMatrix. 96 Gain margin. 48 First-order lag. 208 Feedback loop. GainPhaseMargins. 57 Forward rectangular rule. 19. 105 Feedback controller design. 182. 145 Feed-forward path. ForwardRectangularRule. 145 pole-zero cancellation in. 105 FeedbackConnect. 166 Inflow control. CriticalFrequency. 166. negative. 191 Gaussian noise. 156 Forced response. 89 Gaussian distribution. 83. FirstOrderHold. 188 General rational approximation. 105 Feedback connection. 98. 213 Impulse response. 105. 74 simulating with DiracDelta. 202 GenericConnect. 48. notation for. 156 Factoring transfer functions.

Controller. 135 Kalman estimator. 167. LQEstimatorGains. 197 Linearize. 44 Levitation system example. 185 KalmanObservableForm. LaplaceTransform. 17 LQEstimatorGains. 92 Inventory control. 152. KNVD. InternallyBalancedForm. 1 Integral controller. 11. 149 KalmanEstimator. 192 LQG problem. 182. 202 Interpolation. 60 Inventory level. 60 Inverted pendulum example. 184. 34 Inputs. 98 Internally balanced realizations. 80. 140 InterpolatingFunction. 182. JordanCanonicalForm. 197 Linearize. 65 InterpolatingPolynomial. 158 Lag system. 16. 15 optimal controller for. 182. 44 LaplaceTransform. 49. 193 stochastic. 87 LQ regulator. 182. 199 Linear quadratic Gaussian problem. 185. 17 LQRegulatorGains. 198 LinearSpacing. 133 Jacobian matrix. count of. 184 KalmanControllableForm. 16. 185 Kalman filter. 158 KNVD. control of. 210 deterministic. 136.220 Control System Professional Input matrix. Kalman. 177. 136 Kautsky-Nichols-Van Dooren algorithm. 4. 17 InvertedTransformMatrix. 138 Kalman. 182. 192 LQG controller. 182 KalmanEstimator. 187 InputVariables. in time-domain simulations. as value of GainPhaseMargins. 87 LogSpacing. 147 Irreducible realization. MinimalRealization. 140 InternallyBalancedForm. 166 Linearization. 182 Linear quadratic regulator. 52. 119 Integrator. 179. 98 arbitrary. 163 controller for. 192 . 132. 182 LQOutputRegulatorGains. 188 Kalman gain matrix. 17–18. 94 Laplace transform. 110 elementary. 136 Kalman decomposition. 171 LQRegulatorGains. 43 Installation. 135 Kalman canonical forms. 185 example of. 197 Jordan canonical form. 174. 138 JordanCanonicalForm. 182. 101 Interconnections.

97 Modal realization. 121 Most likelihood. LQOutputRegulatorGains. 184 . 208 Missile example. 179 of third-order system. 76 Natural response. 87. 156 Natural frequency. 213 Multivariable control. 133 PoleZeroCancel. 117 Method. 129 LQ regulator for. 181. 182 Multiple-input. 168 output regulator for. 38. 202 Lyapunov equations. 199 Magnitude response. 129 Lyapunov function. 138 Model reduction. 1 f . JordanCanonicalForm. 28 minimal realization of. 95 NicholsPlot. 180 vs. DiscreteLQRegulatorGains. 133 MultipleRootProbability. 206 discrete. 151 in ToContinuousTime. in time-domain simulations. 98. 202 Minimum-time response. 206 Magnetic ball suspension example. 82. 65 Negative. 28 NASA. 167 controllability Gramian of. 107 in GenericConnect. 28 frequency response of. of pendulum. 87 MarginStyle. 27 Moment of inertia. DominantSubsystem. 191–192 MergeSystems. 172 LTI systems. 175 Mixing tank example. 57 NDSolve. 95 Noise. 188 covariance matrix of. 133 MinimalRealization. 206 continuous. Dryden Flight Research Center. in Rank. 48 MIMO systems.Index 221 vs. 96 Minimal realization. MinimalRealization. 213 Margins. in StateFeedbackGains. multiple-output (MIMO) systems. 207 for controllability and observability Gramians. 12 Monic polynomial. in FeedbackConnect. 27. 132–133 Minimax approximation. 74. 111 Negative feedback. 105 Nichols plot. 159 Measurement noise. 54 in ToDiscreteTime. 181. 210 in StateFeedbackGains. 142 MinimalRealization. 144 Modern control. 182 additive. 129. 172 singular value plot for. 208 LyapunovSolve. NicholsPlot. 89 MaxIterations. 85 MappingFunction.

124. 159 reconstruction of. NyquistPlot. 93 Observability. 148–149 . control of. 181 stationary. 136 ObservabilityMatrix. 130. ObservableSpaceSize. 21 Nonlinear systems. 181 Nominal solution. 140. 197 rational polynomial approximations for. 152 Numerical integration methods. 161 Kalman. 34 Observer. 142 ObservabilityGramian. 36 Observable states. 48 NumericalMath`Approximations`. 137 in Rank. 182 measurement. 192 Optimal estimation. 16.222 Control System Professional Gaussian. 197 local linearization of. 181 spectrum of. 129 ObservabilityMatrix. 181 Optimal control. 204 Orthogonal basis. 124 Observable. 122 Observability Gramian. 122. 191 NullSpace. 182 NormalDistribution. 137. 202 Nyquist frequency. ObservableCompanion. 134 Observable canonical form. in simulations. 126 Observability test. KalmanObservableForm. 125 ObservableCompanion. 182 white. 126 ObservabilityTest. 133 vs. 136 Observable subspace. 94 NonSingularControllabilityGramian. 181 Ordered Schur decomposition. 15 linearization of. 124. as value of DecompositionMethod. 128 Observability matrix. 115 in Subsystem. 36 Observable companion realization. in DeleteSubsystem. 137. 202 Nonpolynomial transfer functions. 197 simulation of. 93 NyquistPlot. 51 Nyquist plot. 165 Optimal controller. 125 Normal distribution. size of. 182 optimal. 130 Orthogonal complement. 125 ObservableSubsystem. 125 NonSingularObservabilityGramian. 36 ObservableSpaceSize. 210 Numerical errors. SchurDecompositionOrdered. 181 process. 182 high-frequency cutoff of. Observable. 115 Nonlinear state-space models. 122 ObservabilityGramian. 159 OrthogonalTransformMatrix. 197 None. 138 Observation matrix.

167 Outflow rate. 92. PoleZeroCancel. 144 polling inputs in. 32 Pendulum. 32 of a missile. 150 for adding inputs to system. 65 Pole assignment. 150 Overdamped response. OutputResponse. 75 using Ackermann's formula. 57 PhaseRange. 60 Plant. 122 Pivoting. 83 Pole placement. 163 of transfer function. count of. 213 Particular response. 85 Output regulator. 110. 158 OutputVariables. 87 OutputControllabilityMatrix. 93. 95–96. 144 Output response. 81. 167 Period. 57 controlling numerical errors in. 60 PlotSampling. 126 PhaseRange. 152 Outputs. 95. 17 StateFeedbackGains. 175 Outflow control. ZeroPoleGain. 65 PlotPoints. 95–96 simulations with. 18. 81 Performance index. 98. 154 Overshoot. 27 number of input signals in. 87. 60 SchurDecompositionOrdered. 89 Output matrix. 96 PID controller. adjusting PlotPoints LQOutputRegulatorGains.Index 223 of a missile. 171 for. 87. 57 of closed-loop system. 190 Poles. Poles. 150 Padé approximation. 60. 189 multiple. Parallel connection. inverted. in OutputResponse. 11. 165 Poles. 30 Performance criterion. 144 Polynomial approximations. Phase unwrapping. 175 PoleStyle. Phase band. 57–58 dummy variable in. 133. 84. 202 Pole-zero cancellation. 202 . 19. 17 symbolic solution. 19. 119 OutputControllable. 165 of transfer function. 103 144 ParallelConnect. optimal. ParallelConnect. GainPhaseMargins. 17 Penalty function. 43 StateFeedbackGains. 206 initial conditions in. 96 OutputControllabilityMatrix. 126 Phase margin. 87. 40 Output controllability matrix. 165 PoleZeroCancel. 210 robust. 103. 34 Phase response.

208 ResponseVariable. 77 Random systems. as TransferFunction objects. linear quadratic. 137 in KalmanObservableForm. 33 Response. 57 minimum-time. 130 in KalmanControllableForm. 140 ReductionMethod. 133. RandomSystem. 27 RealBlockForm. 210 Rational polynomial approximations. state-space realization of. 18. 166 Ramp response.224 Control System Professional Positive. 132 converting between. 156 Roll attitude control. 158 Roll angle. 126 in ControllableSubsystem. 80 RootLocusAnimation. 85 in time domain. 166 output. 202 Rational polynomials. 181. 60 Proper transfer function. in frequency domain. 182 RiccatiSolve. 166 Predictor estimator. 191–192 Production and inventory control model. of a missile. 33 Reference signal. of matrix. 211 Rank. in FeedbackConnect. 173 Robust pole assignment. 84 RootLocusPlot. 175 Root loci. in SchurDecompositionOrdered. 60 Riccati equations. 137 in StateFeedbackGains. of ailerons. in ControllableSpaceSize. 156 Roll time constant. 84 RootLocusPlot. 55 CriticalFrequency. control of. 191 in GenericConnect. 137 Quadratic cost function. 80 RowReduce. 48. 81 Regulator. 137 . 60 Production rate. 211 RandomOrthogonalComplement. 142–143 Resolvent matrix. LQRegulatorGains. 159 RandomSystem. as value of DecompositionMethod. 135 in TransferFunction. 55 Process noise. 111 Positive definite. 132 RealRootProbability. evolution of. 119 QRDecomposition. 213 Reduced-order model. 143 in MinimalRealization. 192 Prewarping. 166 Positive feedback. 105 Positive semidefinite. 206 Realizations. 107. Rank. 36 Proportional controller. 135 in DominantSubsystem. 135 in ObservableSpaceSize. KNVD. 171. 171 RejectionLevel. 126 in ObservableSubsystem. 171–172. 175 Roll rate. as value of DecompositionMethod.

119 Schur decomposition. nominal. of noise. choice of. 196 PID controller for. 98 Series compensation. 197 SolveMethod. 96 SingularValues. 204 Second-order system. 88 impulse response of. 40 Sampling rate. SchurDecompositionOrdered. 95 SpecialPointProbability. 69. 98. 82. 44 Sampling period. 188 Servo motor. 169 plotting state response with. 133 Singular-value plot. 132. StateFeedbackConnect. 4. 177 vs. 7 SetStandardFormat. SchurDecompositionOrdered. as value of DecompositionMethod. 197 State estimation. 65 discrete. 57. 91. 65 SimulationPlot. 151 controller for. 15. analog. 7 Sideslip angle. 179 step response of. 170 Stable system. SingularValuePlot. in DiscreteRiccatiSolve. 213 SpecialPoints. 137. 141 in Rank. ordered. 149 Simulation. 28 minimal realization of. 208 State equations. 78 root loci of. 156 Sampled. 146. 161 optimal. 72 Single-input. 150 . 146 SimilarityTransform. for Kalman forms. frequency response of. 33 nonlinear. 204 SchurDecomposition. 156 Similarity transformation. 213 Spectrum. 181 State feedback. 210 SISO systems. 40 Satellite attitude control example. attitude control of. 85. 192 Serial connection. 181 stochastic. 119 SeriesConnect. 82 sampling rate for. as value of SolveMethod. 83. 39 Sampling. 136 SimilarityTransform. 110 Servo mechanism. 96 SingularValuePlot. 177 Spacecraft. 204 SchurDecompositionOrdered. single-output (SISO) systems. 181 Stabilizable system. deterministic. SeriesConnect. 183 SetControlFormat. 114 StateFeedbackGains. 179 SamplingPeriod. 40 SamplingPeriod. 28 Solution. 74 Separation principle.Index 225 Rudder. 177 in RiccatiSolve. 171 in Lyapunov's sense.

197 random. 187 state reconstruction. 60 initial conditions in. 34 discrete-time. 197 State-space realizations. 27 nonlinear. deterministic. 72. deflection of. nonlinear. 96 System. analog. 211 StateVariables. 161. 19. 65 polling inputs in. 181 optimal. 179–180. SingularValuePlot. 57 Time-domain simulation. 181 system. 201 StateFeedbackGains. 72 Stochastic. 98. 57 States. 33 State-space models. 132 Suspension system example. 3. 132 State-space systems. inputs. step response of. 181 time-varying. 213 state-space realization of. 211 state-space realization of. 181. 65 symbolic solution. 57–58 dummy variable in. 60 number of input signals in. 33 composite. 60 simulations with. 181 State response. 57 T-bridge network. 33 stochastic. controllable. 115. in wide sense.226 Control System Professional State matrix. 35 Temperature sensor. 33 StateFeedbackConnect. 74. 60. 98 continuous-time. 125 Subsystem. 199 SV plot. 125 observable. and RandomSystem. 114. 72 Subspace. 182 Steady-state error. 57 State trajectories. 182. 165 Third-order system. count of. 130 dynamic. 41 with RandomSystem. 201 StateResponse. 34 State reconstruction. 6. 36 Submarine. 98. depth control of. 202 Stern plane. 165 State-space data structure. StateSpace. 181 stochastic. plotting with SimulationPlot. 34 dual. 17. 33 digital. 201 StateResponse. 169. 43 Stationary noise. 150. 34 traditional notations for. 65 . 203 Terminal state error. 76 Time-domain response. 192 Strictly proper transfer function. 66 TargetForm. 210 StateSpace. 15. 77 Step response.

178. 57 Transient state error. recovering of. 75 Underdamped response. 48 Zeros. of transfer function. states. 183 Traditional notations. 211 ZeroPoleMapping. 27 as a pure function object. data structure for. 5. 48 Zero-pole mapping. 184 Tolerance. 48 ZeroPoleGain. 43 ToContinuousTime. 142 Yaw rate. of control objects. 4. 152 Weak modes. 145 Torque. 136. ZeroPoleGain. 211 Transformation matrix. 44. 55 Uncontrollable states. 41 $ContinuousTimeToken. 165 Transport lag. 28 nonpolynomial. 32 with RandomSystem. in PoleZeroCancel. TransformationMatrix. 5. TransferFunction. FirstOrderHold. 41 TraditionalForm. ZeroPoleMapping. data structure for.Index 227 Time-varying systems. ZeroPoleGain. 54 ToDiscreteTime. 57 TimeVariable. 56 Tustin transformation. 57 Unobservable. 41 . 44 Zero-input response. 28 factoring. 136 VerifyPoles. 168. 81 ZTransform. 94 traditional notations for. 94 Triangle hold. BilinearTransform. 41 variable in. 151. modified. 41 Transfer function matrix. 53 ZTransform. 27 Transfer matrix. 27 expanding. 171 Undamped response. Zeros. 30 Zero-state response. 3. 148 TransformationMatrix. 32 Zeros. 57 ZeroOrderHold. 180. 4. 57 Zero-order hold. 3. 32 ZeroStyle. 48. 156 z-transform. 44 $ContinuousTimeComplexPlane Variable. 27 TransferFunction. 30 variable in. 5. ZeroOrderHold. 142 Weak subsystem. 32 with RandomSystem. 148 Transient response. TransferFunction. 41 $DiscreteTimeComplexPlaneVariable. 174. 48 Zero-pole-gain data structure. 48. 75 Unforced response. 158. 30 of transfer function. response of.

43 [ScriptX] ( ). 41 [ScriptCapitalT] ( ). in control objects.228 Control System Professional $DiscreteTimeToken . 43 [ScriptS] ( ). 39 $SamplingPeriod. in control objects. in control objects. 43 [ScriptU] ( ). in control objects. in control objects. in control objects. 45 [Bullet] ( • ). in control objects. in control objects. 41 [ScriptK] ( ). 41 $RandomOrthogonalComplement. 41 [EmptyUpTriangle] ( ). 41 [ScriptCapitalS] ( ). in control objects. in control objects. 43 [ScriptZ] ( ). 43 [ScriptY] ( ). in control objects. 41 [ScriptT] ( ). 130 $Sampled. 41 .

Sign up to vote on this title
UsefulNot useful