Discrete-Time Markov Jump Linear Systems
Series Editors
J. Gani
Stochastic Analysis Group, CMA
Australian National University
Canberra ACT 0200
Australia
C.C. Heyde
Stochastic Analysis Group, CMA
Australian National University
Canberra ACT 0200
Australia
P. Jagers
Mathematical Statistics
Chalmers University of Technology
SE-412 96 Göteborg
Sweden
T.G. Kurtz
Department of Mathematics
University of Wisconsin
480 Lincoln Drive
Madison, WI 53706
USA
Mathematics Subject Classification (2000): 93E11, 93E15, 93E20, 93B36, 93C05, 93C55, 60J10, 60J75
British Library Cataloguing in Publication Data
Costa, O. L. V.
Discrete-time Markov jump linear systems. (Probability and its applications)
1. Jump processes 2. Markov processes 3. Discrete-time systems 4. Linear systems
I. Title II. Fragoso, M. D. III. Marques, Ricardo Paulino
519.233
ISBN 1-85233-761-3
Library of Congress CataloginginPublication Data
Costa, Oswaldo Luiz do Valle.
Discrete-time Markov jump linear systems / O.L.V. Costa, M.D. Fragoso, and R.P. Marques.
p. cm. (Probability and its applications)
Includes bibliographical references and index.
ISBN 1-85233-761-3 (acid-free paper)
1. Stochastic control theory. 2. Stochastic systems. 3. Linear systems. 4. Control theory.
5. Markov processes. I. Fragoso, M.D. (Marcelo Dutra) II. Marques, Ricardo Paulino.
III. Title. IV. Probability and its applications (Springer-Verlag)
QA402.37.C67 2005
003.76 dc22
2004059023
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under
the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any
form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.
ISBN 1-85233-761-3
Springer is a part of Springer Science+Business Media
springeronline.com
© Springer-Verlag London Limited 2005
Printed in the United States of America
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific
statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information contained
in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.
Typesetting: Camera-ready by author
12/3830543210 Printed on acid-free paper SPIN 10937914
Preface
Secondly, it was our intention to write a book that would contribute to opening the way for further research on MJLS.
Our treatment is theoretically oriented, although some illustrative examples are included in the book. The reader is assumed to have some background in stochastic processes and modern analysis. Although the book is primarily intended for students and practitioners of control theory, it may also be a valuable reference for those in fields such as communication engineering and economics. Moreover, we believe that the book should be suitable for certain advanced courses or seminars.
The first chapter presents the class of MJLS via some application-oriented examples, with motivating remarks and an outline of the problems. Chapter 2 provides the bare essentials of background. Stability for MJLS is treated in Chapter 3. Chapter 4 deals with optimal control (quadratic and H2). Chapter 5 considers the filtering problem, while Chapter 6 treats quadratic optimal control with partial information. Chapter 7 deals with the H∞ control of MJLS. Design techniques, some simulations, and examples are considered in Chapter 8. Finally, the associated coupled algebraic Riccati equations and some auxiliary results are considered in Appendices A, B, and C.
This book could not have been written without direct and indirect assistance from many sources. We are very grateful to our colleagues from the Laboratory of Automation and Control (LAC/USP) at the University of São Paulo, and from the National Laboratory for Scientific Computing (LNCC/MCT). We had the good fortune of interacting with a number of special people. We seize this opportunity to express our gratitude to our colleagues and research partners, especially to Profs. C.E. de Souza, J.B.R. do Val, J.C. Geromel, E.M. Hemerly, E.K. Boukas, M.H. Terra, and F. Dufour. Many thanks go also to our former PhD students. We acknowledge with great pleasure the efficiency and support of Stephanie Harding, our contact at Springer. We wish to express our appreciation for the continued support of the Brazilian National Research Council (CNPq), under grants 472920/03-0, 520169/97-2, and 304866/03-0, and the Research Council of the State of São Paulo (FAPESP), under grant 03/06736-7. We also acknowledge the support of PRONEX, grant 015/98, and IMAGIMB.
We (OLVC and MDF) were fortunate to have met Prof. C.S. Kubrusly at the very beginning of our scientific careers. His enthusiasm, intellectual integrity, and friendship were important ingredients in making us continue. To him we owe a special debt of gratitude.
Last, but not least, we are very grateful to our families for their continuing
and unwavering support. To them we dedicate this book.
São Paulo, Brazil
Petrópolis, Brazil
São Paulo, Brazil
O.L.V. Costa
M.D. Fragoso
R.P. Marques
Contents

2 Background Material .......................................... 15
  2.1 Some Basics .............................................. 15
  2.2 Auxiliary Results ........................................ 18
  2.3 Probabilistic Space ...................................... 20
  2.4 Linear System Theory ..................................... 21
      2.4.1 Stability and the Lyapunov Equation ............... 21
      2.4.2 Controllability and Observability ................. 23
      2.4.3 The Algebraic Riccati Equation and the
            Linear-Quadratic Regulator ........................ 26
  2.5 Linear Matrix Inequalities ............................... 27

3 On Stability ................................................. 29
  3.1 Outline of the Chapter ................................... 29
  3.2 Main Operators ........................................... 30
  3.3 MSS: The Homogeneous Case ................................ 36
      3.3.1 Main Result ....................................... 36
      3.3.2 Examples .......................................... 37
      3.3.3 Proof of Theorem 3.9 .............................. 41
      3.3.4 Easy to Check Conditions for Mean Square Stability 45
  3.4 MSS: The Nonhomogeneous Case ............................. 48
      3.4.1 Main Results ...................................... 48
      3.4.2 Wide Sense Stationary Input Sequence .............. 49
      3.4.3 The ℓ2-disturbance Case ........................... 55
4 Optimal Control
  4.1 Outline of the Chapter
  4.2 The Finite Horizon Quadratic Optimal Control Problem
      4.2.1 Problem Statement
      4.2.2 The Optimal Control Law
  4.3 Infinite Horizon Quadratic Optimal Control Problems
      4.3.1 Definition of the Problems
      4.3.2 The Markov Jump Linear Quadratic Regulator Problem
      4.3.3 The Long Run Average Cost
  4.4 The H2-control Problem
      4.4.1 Preliminaries and the H2-norm
      4.4.2 The H2-norm and the Grammians
      4.4.3 An Alternative Definition for the H2-control Problem
      4.4.4 Connection Between the CARE and the H2-control Problem
  4.5 Quadratic Control with Stochastic ℓ2-input
      4.5.1 Preliminaries
      4.5.2 Auxiliary Result
      4.5.3 The Optimal Control Law
      4.5.4 An Application to a Failure Prone Manufacturing System
  4.6 Historical Remarks
5 Filtering ................................................... 101
  5.1 Outline of the Chapter .................................. 101
  5.2 Finite Horizon Filtering with θ(k) Known ................ 102
  5.3 Infinite Horizon Filtering with θ(k) Known .............. 109
  5.4 Optimal Linear Filter with θ(k) Unknown ................. 113
      5.4.1 Preliminaries ..................................... 113
      5.4.2 Optimal Linear Filter ............................. 114
      5.4.3 Stationary Linear Filter .......................... 117
  5.5 Robust Linear Filter with θ(k) Unknown .................. 119
      5.5.1 Preliminaries ..................................... 119
      5.5.2 Problem Formulation ............................... 119
7 H∞ Control .................................................. 143
  7.1 Outline of the Chapter .................................. 143
  7.2 The MJLS H∞-like Control Problem ........................ 144
      7.2.1 The General Problem ............................... 144
      7.2.2 H∞ Main Result .................................... 145
  7.3 Proof of Theorem 7.3 .................................... 148
      7.3.1 Sufficient Condition .............................. 148
      7.3.2 Necessary Condition ............................... 151
  7.4 Recursive Algorithm for the H∞-control CARE ............. 162
  7.5 Historical Remarks ...................................... 166
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Notation and Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
1
Markov Jump Linear Systems
1.1 Introduction
Most control systems are based on a mathematical model of the process to be controlled. This model should be able to describe the process behavior with relative accuracy, so that a controller whose design is based on the information it provides performs accordingly when implemented in the real
process. As pointed out by M. Kac in [148], "Models are, for the most part, caricatures of reality, but if they are good, then, like good caricatures, they portray, though perhaps in a distorted manner, some of the features of the real world." This reflects, in part, the fact that to have more representative models for real systems, we have to characterize the uncertainties adequately.
Many processes may be well described, for example, by time-invariant linear models, but a large number of them are subject to uncertain changes in their dynamics and demand a more complex approach. If such a change is an abrupt one, having only a small influence on the system behavior, classical sensitivity analysis may provide an adequate assessment of the effects. On the other hand, when the variations caused by the changes significantly alter the dynamic behavior of the system, a stochastic model that gives a quantitative indication of the relative likelihood of the various possible scenarios would be preferable. Over the last decades, several different classes of models that take into account possible different scenarios have been proposed and studied, with varying degrees of success.
To illustrate this situation, consider a dynamical system that is, at a certain moment, well described by a model G1. Suppose that this system is subject to abrupt changes that cause it to be described, after a certain amount of time, by a different model, say G2. More generally, we can imagine that the system is subject to a series of possible qualitative changes that make it switch, over time, among a countable set of models, for example {G1, G2, . . . , GN}. We can associate each of these models with an operation mode of the system (or just mode) and will say that the system jumps from one mode to another or that there are transitions between them.
The next question that arises concerns the jumps. What hypotheses, if any at all, have to be made about them? It would be desirable to make none, but that would strongly restrict any results that might be inferred. We will assume in this book that the jumps evolve stochastically according to a Markov chain; that is, given that at a certain instant k the system lies in mode i, we know the jump probability to each of the other modes, and also the probability of remaining in mode i (these probabilities depend only on the current operation mode). Notice that we assume only that the jump probabilities are known: in general, we do not know a priori when, if ever, jumps will occur.
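The mode process just described is straightforward to simulate. A minimal sketch follows, assuming a hypothetical two-mode transition matrix (none is given at this point in the text):

```python
import random

# Hypothetical transition probability matrix for a chain with two
# operation modes (mode indices are 0-based here).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def next_mode(current, P, rng=random):
    """Sample the next operation mode: row `current` of P gives the jump
    probabilities, which depend only on the current mode."""
    r, acc = rng.random(), 0.0
    for j, p in enumerate(P[current]):
        acc += p
        if r < acc:
            return j
    return len(P[current]) - 1  # guard against floating-point round-off

random.seed(0)
theta = [0]                     # start in mode 0
for k in range(20):
    theta.append(next_mode(theta[-1], P))
```

Each row of P sums to one, so the sampled path visits only valid modes; when, or whether, a jump occurs remains random, exactly as in the discussion above.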
We will restrict ourselves in this book to the case in which all operation modes are discrete-time, time-invariant linear models. With these assumptions we will be able to construct a coherent body of theory, develop the basic concepts of control and filtering, and present controller and filter design procedures for this class of systems, known in the international literature as discrete-time Markov jump linear systems (MJLS for short). The Markov state (or operation mode) will be denoted throughout the book by θ(k).
Another main question that arises is whether or not the current operation mode θ(k) is known at each time k. Although in engineering problems the operation modes are often not available, there are enough cases where the knowledge of random changes in system structure is directly available to make these applications of great interest. This is the case, for instance, of a nonlinear plant for which there are a countable number of operating points, each of them characterized by a corresponding linearized model, with the abrupt changes representing the dynamics of the system moving from one operating point to another. In many situations it is possible to monitor these changes in the operating conditions of the process through appropriate sensors. In a deterministic formulation, an adaptive controller that changes its parameters in response to the monitored operating conditions of the process is
termed a gain scheduling controller (see [9], Chapter 9); that is, it is a linear feedback controller whose parameters are changed as a function of operating conditions in a preprogrammed way. Several examples of this kind of controller are presented in [9], and they could also be seen as examples for the optimal control problem of systems subject to abrupt dynamic changes, with the operation mode representing the monitored operating condition, and transitions between the models following a Markov chain. For instance, in ship steering ([9]), the ship dynamics change with the ship's speed, which is not known a priori, but can be measured by appropriate speed sensors. The autopilot for this system can be improved by taking these changes into account. Examples related to control of pH in a chemical reactor, combustion control of a boiler, fuel-air control in a car engine, and flight control systems are also presented in [9] as examples of gain scheduling controllers; they could be recast in a stochastic framework by introducing the probabilities of transition between the models, and serve as examples of the optimal control of MJLS with the operation mode available to the controller.
Another example of control of a system modeled by an MJLS, with θ(k) representing abrupt environmental changes measured by sensors located on the plant, would be the control of a solar-powered boiler [208]. The boiler flow rate is strongly dependent upon the received insolation and, as a result of this abrupt variability, several linearized models are required to characterize the evolution of the boiler when clouds interfere with the sun's rays. The control law described in [208] makes use of state feedback and a measurement of θ(k) through the use of flux sensors on the receiver panels. A numerical example of control dependent on θ(k) for MJLS using Samuelson's multiplier-accelerator macroeconomic model can be found in [27], [28] and [89]. In this example θ(k) denotes the situation of a country's economy during period k, represented by "normal", "boom" and "slump". A control law for the governmental expenditure was derived.
The above examples, which will be discussed in greater detail in Chapter 8, illustrate situations in which the abrupt changes occur due to variation of the operating point of a nonlinear plant, environmental disturbances, or economic periods of a country. Another possibility would be changes due to random failures/repairs of the process, with θ(k) in this case indicating the nature of any failure. For θ(k) to be available, an appropriate failure detector (see [190], [218]) would be used in conjunction with a control reconfiguration given by the optimal control of an MJLS.
Unless otherwise stated, we will assume throughout the book that the current operation mode θ(k) is known at each time k. As seen in the above discussion, the hypothesis that θ(k) is available is based on the fact that the operating condition of the plant can be monitored either by direct inspection or through some kind of sensor or failure detector. Some of the examples mentioned above will be examined under the theory and design techniques developed here.
use a feedback control based on (k) instead of (k)? Conditions for this will
be presented in Subsection 3.5.2. In Section 5.4 the problem of linear filtering for the case in which θ(k) is not known will be examined. Tracing a parallel with the standard Kalman filtering theory, a linear filter for the state variable is developed. The case in which there are uncertainties on the operation mode parameters is considered in Section 5.5.
by G1 and G2 give us only a hint of the specific behavior of the system. Since any sequence of operation modes is essentially stochastic, we cannot know a priori the system trajectories, but as will be seen throughout this book, much information can be obtained from this kind of structure.
For the sake of illustration, consider one of the possible sequences of operation modes for this system when k = 0, 1, . . . , 20 and the initial state of the Markov chain is θ(0) = 1. For this time span and initial condition, the Markov chain may present 2^20 = 1 048 576 different realizations. The most probable is

{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1},

while the least probable is

{1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1}.

Notice that the probability of occurrence of the former is 1 in 1253, while the probability of occurrence of the latter is merely 1 in 1.62 × 10^9. Figure 1.1 presents the trajectory of the system for a randomly chosen realization of the Markov chain, given by

{1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 2},

when the initial state is set as x(0) = 1.
Fig. 1.1. A randomly picked trajectory for the system of Example 1.1 (x(k) versus k; the plot distinguishes mode 1, mode 2, and the transitions).
Different realizations of the Markov chain lead to a multitude of very dissimilar trajectories, as sketched in Figure 1.2. The thick lines surrounding the gray area on the graph are the extreme trajectories; all 1 048 574 remaining trajectories lie within them. The thin line is the trajectory of Figure 1.1. One may notice that some trajectories are unstable while others tend to zero as k increases. Stability concepts for this kind of system will be discussed in detail in Chapter 3.
Fig. 1.2. Trajectories for Example 1.1 (x(k) versus k). The gray area contains all possible trajectories. The thin line in the middle is the trajectory of Figure 1.1.
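The probabilities quoted in Example 1.1 can be checked with a short computation. The system data are not reproduced in this excerpt, so the transition matrix below is an assumption chosen to be consistent with the quoted figures of 1 in 1253 and 1 in 1.62 × 10^9:

```python
# The matrices of Example 1.1 are not reproduced here; this transition
# matrix is an assumption consistent with the probabilities in the text.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def sequence_probability(modes, P):
    """Probability of observing the mode sequence `modes`, conditional
    on the initial mode modes[0] (modes are 1-based, as in the text)."""
    prob = 1.0
    for i, j in zip(modes, modes[1:]):
        prob *= P[i - 1][j - 1]
    return prob

most_probable = [1] * 21              # all twenty transitions stay in mode 1
least_probable = [1, 2] * 10 + [1]    # alternate between the two modes

print(round(1 / sequence_probability(most_probable, P)))     # 1253
print(f"{1 / sequence_probability(least_probable, P):.3g}")  # about 1.62e9
```

The probability of a given path is just the product of the twenty one-step transition probabilities along it, which is why the all-ones path dominates.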
Example 1.2. To better illustrate when MJLS should be used, we will consider the solar energy plant described in [208] and already mentioned in Section 1.1. This same example will be treated in greater detail in Chapter 8. It consists of a set of adjustable mirrors, capable of focusing sunlight on a tower that contains a boiler, through which water flows, as sketched in Figure 1.3.
The power transferred to the boiler depends on the atmospheric conditions, more specifically on whether it is a sunny or a cloudy day. With clear skies, the boiler receives more solar energy and so we should operate with a greater flow than under cloudy conditions. Clearly, the process dynamics are different for each of these conditions. It is very easy to assess the current weather conditions, but their prediction is certainly a more complex problem, and one that can only be solved in probabilistic terms.
Given adequate historical data, the atmospheric conditions can be modeled as a Markov chain with two states: 1) sunny; and 2) cloudy. This is a reasonable model: we can assume that the current state is known and that we do not know how the weather will behave in the future, but that the probabilities of it being sunny or cloudy in the immediate future are known. Also, these probabilities will depend on whether it is sunny or cloudy now. Given that the process dynamics behave differently according to the current state of this Markov chain, considering a model like the one in Section 1.1 seems natural.
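In practice, such a two-state weather chain would be estimated from the historical data mentioned above. A minimal sketch of the maximum-likelihood estimate follows; the daily record below is invented for illustration, not taken from [208]:

```python
from collections import Counter

def estimate_transition_matrix(history, states=("sunny", "cloudy")):
    """Maximum-likelihood estimate of a transition matrix: count the
    observed one-step transitions and normalize each row."""
    counts = Counter(zip(history, history[1:]))
    P = []
    for s in states:
        total = sum(counts[(s, t)] for t in states)
        P.append([counts[(s, t)] / total for t in states])
    return P

# Hypothetical daily weather record (illustrative only).
history = (["sunny"] * 5 + ["cloudy"] * 2 + ["sunny"] * 3
           + ["cloudy"] * 1 + ["sunny"] * 4)
P = estimate_transition_matrix(history)
```

Each row of the estimated matrix sums to one; the sunny row, for instance, is the fraction of sunny days followed by another sunny day versus a cloudy one.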
Since the current state of the Markov chain is known, we could consider three approaches to control the water flow through the boiler.
1. One single control law, with the differences in the dynamic behavior due to the changes in the operation mode treated as disturbances or as model uncertainties. This would be a kind of Standard Robust Control approach.
2. Two control laws, one for each operation mode. When the operation mode changes, the control law used also changes. Each of the control laws is independently designed to stabilize the system and produce the best performance while the system is in the corresponding operation mode. No information regarding the transition probabilities is taken into account. This could be called a Multimodel approach.
3. Two control laws. The same as above, except that both control laws are designed considering the Markov nature of the jumps between operation modes. This will be the approach adopted throughout this book; for now let us call it the Markov jump approach.
When using the first approach, one would expect poorer performance, especially when the system dynamics for each operation mode differ significantly. Also, due to the random switching between operation modes, the boiler is markedly a time-variant system, and this fact alone can compromise the stability guarantees of most of the standard design techniques.
The second approach has the advantage of, at least at first analysis, presenting better performance, but the stability issues still remain. Regarding performance, there is a very important question: with the plant+controller dynamics changing from time to time, would this approach give, in the long run, the best results? Or, rephrasing the question, would the best controllers for each operation mode, when combined, result in the best controller for the overall system? The answer to this question is: not necessarily.
Assuming that the sequence of weather conditions is compatible with the
Markov chain, a Markov jump approach would generate a controller that could
guarantee stability in a special sense and could also optimize the expected
performance, as will be seen later. We should stress the term expected, for the
sequence of future weather conditions is intrinsically random.
Whenever the problem can be reasonably stated as a Markov one, and theoretical guarantees regarding stability, performance, etc., are a must, the Markov jump approach should be considered. In many situations, even if the switching between operation modes does not rigidly follow a Markov chain, it is still possible to estimate a likely chain based on historical data with good results, as is done in the original reference [208] of the boiler example.
Notice that the Markov jump approach is usually associated with an expected (in the probabilistic sense) performance. This implies that for a specific unfavorable sequence of operation modes the performance may be very poor, but in the long run, or over a great number of sequences, it is likely to present the best possible results.
Fig. 1.3. A solar energy plant (labels: boiler, mirror).
Finally, we refer to [17] and [207], where readers can find other applications of MJLS in problems such as tracking a maneuvering aircraft, automatic target recognition, decoding of signals transmitted across a wireless communication link, inter alia.
x(0) = x0, θ(0) = θ0.

As usual, x(k) represents the state variable of the system, u(k) the control variable, w(k) the noise sequence acting on the system, y(k) the measurable variable available to the controller, and z(k) the output of the system. The specification of the actors of G will depend upon the kind of problem we shall be dealing with (optimal control, H2 control, H∞ control, etc.) and will be more rigorously characterized in the next chapters. For the moment it suffices to know that all matrices have appropriate dimensions and that θ(k) stands for the state of a Markov chain taking values in a finite set N = {1, . . . , N}. The initial distribution for θ(0) is denoted by π = {π1, . . . , πN}, and the transition probability matrix of the Markov chain θ(k) by P = [pij]. Besides the additive stochastic perturbation w(k) and the uncertainties associated with θ(k), we will also consider in some chapters parametric uncertainties acting on the matrices of the system.
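A minimal sketch of how such a system evolves follows. Since the equations of G are not reproduced in this excerpt, we assume the homogeneous scalar form x(k+1) = a(θ(k)) x(k), with illustrative dynamics and transition probabilities, and estimate by Monte Carlo the second moment E[x(k)²] on which the mean square stability concept of Chapter 3 is built:

```python
import random

# Illustrative scalar "matrices", one per mode, and a hypothetical
# transition probability matrix (modes 1 and 2).
a = {1: 0.5, 2: 1.1}
P = {1: {1: 0.7, 2: 0.3},
     2: {1: 0.4, 2: 0.6}}

def simulate(x0, theta0, horizon, rng):
    """One realization of x(k+1) = a[theta(k)] * x(k) with Markov jumps."""
    x, theta, path = x0, theta0, [x0]
    for _ in range(horizon):
        x = a[theta] * x                      # linear dynamics of current mode
        modes = list(P[theta])
        theta = rng.choices(modes, weights=[P[theta][j] for j in modes])[0]
        path.append(x)
    return path

# Monte Carlo estimate of the second moment E[x(k)^2].
rng = random.Random(7)
K, runs = 20, 2000
second_moment = [0.0] * (K + 1)
for _ in range(runs):
    path = simulate(1.0, 1, K, rng)
    for k, xk in enumerate(path):
        second_moment[k] += xk * xk / runs
```

With these illustrative numbers the estimated second moment decays, even though mode 2 alone is unstable; it is exactly this averaged behavior that the mean square stability theory of Chapter 3 makes precise.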
Generically speaking, we shall be dealing with variants of G, defined according to the specific types of problems presented throughout the book. Unless otherwise stated, θ(k) will be assumed to be directly accessible. The problems and respective chapters are the following:
1. Stability (Chapter 3)
In this case we will consider just the first equation of G in (1.3), with w(k) either a sequence of independent identically distributed second order random variables or an ℓ2 sequence of random variables. Initially we will consider a homogeneous version of this system, with u(k) = 0 and w(k) = 0, and present the key concept of mean square stability for MJLS. In particular we will introduce some important operators related to the second moment of the state variable x(k), which will be used throughout the entire book. The nonhomogeneous case is studied in the sequel. When discussing stabilizability and detectability we will consider the control law u(k) and output y(k), and the case in
3. Filtering (Chapter 5)
First we will assume that the jump variable θ(k) is available, and it is desired to design a Markov jump linear filter for the state variable x(k). The solution for this problem is obtained again through a set of filtering coupled difference (finite horizon) and algebraic (infinite horizon) Riccati equations. The second situation is when the jump variable θ(k) is not available to the controller, and again it is desired to design a linear filter for the state variable x(k). Once again the solution is derived through difference and algebraic Riccati-like equations. The case with uncertainties on the parameters of the system will also be analyzed by using convex optimization.
4. Quadratic Optimal Control with Partial Information (Chapter 6)
We will consider in this chapter the linear quadratic optimal control problem for the case in which the controller has access only to the output variable y(k) (besides the jump variable θ(k)). Tracing a parallel with the standard LQG control problem, we will obtain a result that resembles the separation principle for the optimal control of linear systems. A Markov jump linear controller is designed from two sets of coupled difference (for the finite horizon case) and algebraic (for the H2-control case) Riccati equations, one associated with the control problem, and the other associated with the filtering problem.
5. The H∞-control Problem (Chapter 7)
This chapter deals with an H∞-like theory for MJLS, following an approach based on a worst-case design problem. We will analyze the special case of state feedback, tracing a parallel with the time domain formulation used for studying the standard H∞ theory. In particular, a recursive algorithm for the H∞-control CARE is deduced.
6. Design Techniques and Examples (Chapter 8)
This chapter presents and discusses some applications of the theoretical results introduced earlier. It also presents design-oriented techniques based on convex optimization for control problems of MJLS with parametric uncertainties on the matrices of the system. This final chapter is intended to conclude the book, assembling some problems and the tools to solve them.
7. Coupled Algebraic Riccati Equations (Appendix A)
As mentioned before, the control and filtering problems posed in this book are solved through sets of coupled difference and algebraic Riccati equations. In this appendix we will study the asymptotic behavior of these coupled Riccati difference equations, and some properties of the corresponding stationary solution, which will satisfy a set of coupled algebraic Riccati equations. Necessary and sufficient conditions for the existence of a stabilizing solution for the coupled algebraic Riccati equations will be established.
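As an illustration of such a recursion, the sketch below iterates a control-type set of coupled Riccati difference equations for a hypothetical scalar two-mode system. The form of the recursion (with coupling term E_i(X) = Σ_j p_ij X_j) follows the standard MJLS literature, and all numerical data are assumptions for illustration, not taken from the text:

```python
# Scalar two-mode illustration of iterating coupled Riccati difference
# equations of the control type:
#   X_i <- A_i E_i(X) A_i + Q_i
#          - A_i E_i(X) B_i (B_i E_i(X) B_i + R_i)^{-1} B_i E_i(X) A_i,
# with coupling E_i(X) = sum_j p_ij X_j. All data below are illustrative.
A = {1: 1.2, 2: 0.5}
B = {1: 1.0, 2: 1.0}
Q = {1: 1.0, 2: 1.0}
R = {1: 1.0, 2: 1.0}
P = {1: {1: 0.7, 2: 0.3}, 2: {1: 0.4, 2: 0.6}}

def riccati_step(X):
    newX = {}
    for i in A:
        E = sum(P[i][j] * X[j] for j in X)            # coupling term E_i(X)
        gain = (B[i] * E * A[i]) / (B[i] * E * B[i] + R[i])
        newX[i] = A[i] * E * A[i] + Q[i] - A[i] * E * B[i] * gain
    return newX

X = {1: 0.0, 2: 0.0}
for _ in range(200):            # iterate until (numerically) stationary
    X = riccati_step(X)
```

Under stabilizability-type conditions the iterates converge to the stationary solution of the coupled algebraic Riccati equations; here the iteration settles quickly because the scalar closed loops are well inside the unit disk.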
Appendices B and C present some auxiliary results for Chapters 5 and 6.
Throughout the next chapters, these and other issues will be discussed in detail and, we hope, the distinguishing particularities of MJLS, as well as the differences (and similarities) in relation to the classical case, will be made clear.
the study in [105]. Loosely, the idea is to use a discrete-time change of measure technique (a discrete-time version of Girsanov's theorem) in such a way that in the new probability space (the "fictitious world"), the problem can be treated via well-known results for i.i.d. random variables. Besides the methodology, the models and topics considered here differ from those in [105] in many aspects. For instance, in order to go deeper into the special structure of MJLS, we analyze the case with complete observations. This allows us, for instance, to devise a mean square stability theory which parallels that for the linear case. On the other hand, the HMM approach adds to the APV in the sense that the latter has been largely wedded to the complete observation scenario.
[123], [125], [127], [128], [132], [137], [143], [144], [145], [146], [147], [163], [166],
[167], [169], [170], [171], [172], [173], [176], [177], [178], [188], [191], [192], [197],
[198], [199], [210], [211], [215], [223], [226], [227]. In addition, there is by now
a growing conviction that MJLS provide a model of wide applicability (see,
e.g., [10] and [202]). For instance, it is said in [202] that the results achieved
by MJLS, when applied to the synthesis problem of wing deployment of an
uncrewed air vehicle, were quite encouraging. The evidence in favor of such a
proposition has been amassing rapidly over the last decades. We mention [4],
[10], [16], [23], [27], [47], [89], [134], [135], [150], [164], [168], [174], [175], [189],
[202], and [208] as works dealing with applications of this class (see also [17],
[207], and references therein).
2
Background Material
This chapter consists primarily of some background material, with the selection of topics being dictated by our later needs. Some facts and structural
concepts of the linear case have a marked parallel in MJLS, so they are included here in order to facilitate the comparison. In Section 2.1 we introduce
the notation, norms, and spaces that are appropriate for our approach. Next,
in Section 2.2, we present some important auxiliary results that will be used
throughout the book. In Section 2.3 we discuss some issues on the probability
space for the underlying model. In Sections 2.4 and 2.5, we recall some basic
facts regarding linear systems and linear matrix inequalities.
\[
\|V\|_{\max} \triangleq \max_{i=1,\dots,N}\|V_i\|, \qquad
\|V\|_1 \triangleq \sum_{i=1}^{N}\operatorname{tr}\big((V_i V_i^*)^{1/2}\big), \qquad
\|V\|_2 \triangleq \Big(\sum_{i=1}^{N}\operatorname{tr}(V_i^* V_i)\Big)^{1/2}. \tag{2.2}
\]
We shall omit the subscripts 1, 2, max whenever the definition of a specific norm does not affect the result being considered. It is easy to verify
that H^{n,m} equipped with any of the above norms is a Banach space and,
in fact, (\|\cdot\|_2, H^{n,m}) is a Hilbert space, with the inner product given, for
V = (V_1, \dots, V_N) and S = (S_1, \dots, S_N) in H^{n,m}, by
\[
\langle V; S\rangle \triangleq \sum_{i=1}^{N}\operatorname{tr}(V_i^* S_i). \tag{2.3}
\]
For T \in B(H^n), set
\[
\|T\|_1 \triangleq \sup_{0 \neq V \in H^{n}} \frac{\|T(V)\|_1}{\|V\|_1}, \qquad
\|T\|_2 \triangleq \sup_{0 \neq V \in H^{n}} \frac{\|T(V)\|_2}{\|V\|_2}.
\]
Again, we shall omit the subscripts 1, 2 whenever the definition of the specific norm does not matter to the problem under consideration. For V =
(V_1, \dots, V_N) \in H^{n,m} we write V^* = (V_1^*, \dots, V_N^*) \in H^{m,n} and say that
V \in H^n is hermitian if V = V^*. We set
\[
H^{n*} \triangleq \{V = (V_1, \dots, V_N) \in H^{n};\ V_i = V_i^*,\ i \in N\}
\]
and
\[
H^{n+} \triangleq \{V = (V_1, \dots, V_N) \in H^{n*};\ V_i \geq 0,\ i \in N\}.
\]
For V_i \in B(C^m, C^n) with columns v_{i1}, \dots, v_{im}, the vectorization operator \varphi stacks the columns,
\[
\varphi(V_i) \triangleq \begin{bmatrix} v_{i1} \\ \vdots \\ v_{im} \end{bmatrix} \in C^{nm},
\]
and for V = (V_1, \dots, V_N) we set \hat{\varphi}(V) \triangleq [\varphi(V_1)^* \cdots \varphi(V_N)^*]^* \in C^{Nnm}.
With the Kronecker product L \otimes K \in B(C^{n^2}) defined in the usual way for
any L, K \in B(C^n), the following properties hold (see [43]):
\[
(L \otimes K)^* = L^* \otimes K^*, \tag{2.4a}
\]
\[
\varphi(LKH) = (H' \otimes L)\varphi(K) \quad \text{for any } H \in B(C^n). \tag{2.4b}
\]
Remark 2.3. It is well known that if W \in B(C^n)^+ then there exists a unique
W^{1/2} \in B(C^n)^+ such that W = (W^{1/2})^2. The absolute value of W \in B(C^n),
denoted by |W|, is defined as |W| \triangleq (W^*W)^{1/2}. As shown in [216], p. 170,
there exists a unitary matrix U \in B(C^n) (that is, U^{-1} = U^*) such that
\[
W = U|W| \quad (\text{or } |W| = U^{-1}W = U^*W), \tag{2.5}
\]
and \|W\| = \||W|\|.
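The factorization in (2.5) can be checked numerically. Below is a small sketch (the random test matrix and the eigendecomposition route to the square root are illustrative choices, not from the text):

```python
import numpy as np

# |W| = (W^* W)^{1/2} and W = U|W| for a unitary U, as in Remark 2.3.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# |W| via the spectral decomposition of the positive matrix W^* W.
evals, evecs = np.linalg.eigh(W.conj().T @ W)
absW = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.conj().T

# U = W |W|^{-1}  (W is invertible here, so |W| is invertible).
U = W @ np.linalg.inv(absW)

assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
assert np.allclose(U @ absW, W)                # W = U |W|
assert np.isclose(np.linalg.norm(W, 2), np.linalg.norm(absW, 2))  # ||W|| = |||W|||
```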
Remark 2.4. For any W \in B(C^n) there exist W^j, j = 1, 2, 3, 4, such that
W^j \geq 0 and \|W^j\| \leq \|W\| for j = 1, 2, 3, 4, and
\[
W = (W^1 - W^2) + \sqrt{-1}\,(W^3 - W^4).
\]
Indeed, we can write W = V^1 + \sqrt{-1}\,V^2, where
\[
V^1 = \tfrac{1}{2}(W + W^*), \qquad V^2 = \tfrac{1}{2\sqrt{-1}}(W - W^*).
\]
Since V^1 and V^2 are self-adjoint (that is, V^{i*} = V^i, i = 1, 2), and every self-adjoint element in B(C^n) can be decomposed into positive and negative parts
(see [181], p. 464), we have that there exist W^i \in B(C^n)^+, i = 1, 2, 3, 4, such
that
\[
V^1 = W^1 - W^2, \qquad V^2 = W^3 - W^4.
\]
Therefore for any S = (S_1, \dots, S_N) \in H^n, we can find S^j \in H^{n+}, j = 1, 2, 3, 4,
such that \|S^j\|_1 \leq \|S\|_1 and
\[
S = (S^1 - S^2) + \sqrt{-1}\,(S^3 - S^4).
\]
Thus for any Z \in B(H^n),
\[
Z^k(S) = Z^k(S^1) - Z^k(S^2) + \sqrt{-1}\big(Z^k(S^3) - Z^k(S^4)\big),
\]
so that
\[
\|Z^k(S)\|_1 \leq \sum_{i=1}^{4}\|Z^k(S^i)\|_1. \tag{2.6}
\]
The result now follows easily from Lemma 1 and Remark 4 in [156], after
noticing that H^n is a finite-dimensional complex Banach space.
The next result is an immediate adaptation of Lemma 1 in [158].
Proposition 2.6. Let Z \in B(H^n). If r_\sigma(Z) < 1 then there exists a unique
V \in H^n such that
\[
V = Z(V) + S
\]
for any S \in H^n. Moreover,
\[
V = \hat{\varphi}^{-1}\big((I_{Nn^2} - [Z])^{-1}\hat{\varphi}(S)\big) = \sum_{k=0}^{\infty} Z^k(S),
\]
where [Z] denotes the matrix representation of Z.
\[
V = Z(V) + S,
\]
with S \geq \tilde{S}\ (S > \tilde{S}), then V \geq \tilde{V}\ (V > \tilde{V}).
Proof. Straightforward from Proposition 2.6.
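In finite dimensions Proposition 2.6 is just the Neumann series for (I - Z)^{-1}. A minimal numerical sketch (Z and S are arbitrary example data, rescaled so that the spectral radius is below one):

```python
import numpy as np

# V = Z(V) + S has the unique solution V = sum_k Z^k(S) when r_sigma(Z) < 1.
rng = np.random.default_rng(1)
Z = rng.standard_normal((4, 4))
Z *= 0.9 / max(abs(np.linalg.eigvals(Z)))   # force r_sigma(Z) = 0.9 < 1
S = rng.standard_normal(4)

V_direct = np.linalg.solve(np.eye(4) - Z, S)      # V = (I - Z)^{-1} S
V_series = sum(np.linalg.matrix_power(Z, k) @ S for k in range(400))

assert np.allclose(V_direct, Z @ V_direct + S)    # fixed point of V = Z(V) + S
assert np.allclose(V_direct, V_series)            # Neumann series agrees
```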
\[
\sum_{k=0}^{\infty}\sup_{\ell \geq 0}\|z(k+\ell) - z(k)\| < \infty.
\]
where N_k are copies of N, \Omega denotes the product space \prod_{k \in T} N_k, and T represents
the discrete-time set, being \{\dots, -1, 0, 1, \dots\} when the process starts from
-\infty, or \{0, 1, \dots\} when the process starts from 0. Set also T_k = \{i \in T;\ i \leq k\}
for each k \in T, and
\[
F_k \triangleq \Big\{\prod_{l \in T} S_l;\ S_l \in F_l \text{ for } l \in T_k \text{ and } S_l = N_l \text{ for } l = k+1, k+2, \dots\Big\}.
\]
chain taking values in N and with transition probability matrix P = [p_{ij}]. The
initial distribution for \theta(0) is denoted by \pi(0) = \{\pi_1(0), \dots, \pi_N(0)\}.
We set C^m = L_2(\Omega, F, P; C^m), the Hilbert space of all second-order C^m-valued F-measurable random variables with inner product given by \langle x; y\rangle =
E(x^*y) for all x, y \in C^m, where E(\cdot) stands for the expectation of the
underlying scalar-valued random variables, and norm denoted by \|\cdot\|_2. Set
\ell^2(C^m) = \bigoplus_{k \in T} C^m, the direct sum of countably infinite copies of C^m, which is
again a Hilbert space.
\[
x(k+1) = f(x(k)) \tag{2.7}
\]
and
\[
x(k+1) = Ax(k), \tag{2.8}
\]
with k \in \{0, 1, 2, \dots\}, x(k) \in C^n, f : C^n \to C^n and A \in B(C^n). A sequence
x(0), x(1), \dots generated according to (2.7) or (2.8) is called a trajectory of
the system. The second equation is a particular case of the first one and is of
greater interest to us (thus we shall not be concerned with regularity hypotheses
on f in (2.7)). It defines what we call a discrete-time homogeneous linear
time-invariant system. For more information on dynamic systems or proofs of
the results presented in this section, the reader may refer to one of the many
works on the theme, like [48], [165] and [213].
First we recall that a point x_e \in C^n is called an equilibrium point of System
(2.7) if f(x_e) = x_e. In particular, x_e = 0 is an equilibrium point of System
(2.8). The following definitions apply to Systems (2.7) and (2.8).
Definition 2.10 (Lyapunov Stability). An equilibrium point x_e is said to
be stable in the sense of Lyapunov if for each \epsilon > 0 there exists \delta > 0 such
that \|x(k) - x_e\| < \epsilon for all k \geq 0 whenever \|x(0) - x_e\| < \delta.
Definition 2.11 (Asymptotic Stability). An equilibrium point is said to
be asymptotically stable if it is stable in the sense of Lyapunov and there
exists \delta > 0 such that whenever \|x(0) - x_e\| < \delta we have that x(k) \to x_e as k
increases. It is globally asymptotically stable if it is asymptotically stable and
x(k) \to x_e as k increases for any x(0) in the state space.
The definition above simply states that the equilibrium point is stable
if, given any spherical region surrounding the equilibrium point, we can find
another spherical region surrounding the equilibrium point such that trajectories starting inside this second region do not leave the first one. Besides, if the
trajectories also converge to this equilibrium point, then it is asymptotically
stable.
Definition 2.12 (Lyapunov Function). Let x_e be an equilibrium point for
System (2.7). A positive function \nu : \Lambda \to R, where \Lambda \subseteq C^n is such that x_e \in
\Lambda, is said to be a Lyapunov function for System (2.7) and equilibrium point
x_e if
1. \nu(\cdot) is continuous,
2. \nu(x_e) < \nu(x) for every x \in \Lambda such that x \neq x_e,
3. \Delta\nu(x) \triangleq \nu(f(x)) - \nu(x) \leq 0 for all x \in \Lambda.
With this we can proceed to the Lyapunov Theorem. A proof of this result
can be found in [165].
Theorem 2.13 (Lyapunov Theorem). If there exists a Lyapunov function
\nu(x) for System (2.7) and x_e, then the equilibrium point is stable in the sense
of Lyapunov. Moreover, if \Delta\nu(x) < 0 for all x \neq x_e, then it is asymptotically
stable. Furthermore, if \nu is defined on the entire state space and \nu(x) goes to
infinity as any component of x gets arbitrarily large in magnitude, then the
equilibrium point x_e is globally asymptotically stable.
In particular, for System (2.8) and x_e = 0, the quadratic function \nu(x) = x^*Vx
with V > 0 is a Lyapunov function ensuring asymptotic stability whenever
\[
V - A^*VA > 0. \tag{2.11}
\]
Consider now the controlled system
\[
x(k+1) = Ax(k) + Bu(k), \tag{2.12}
\]
with u(k) \in C^m and B \in B(C^m, C^n).
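The Lyapunov condition (2.11) can be turned into a computation: for a stable A, solving V - A^*VA = I by vectorization yields a positive-definite V. A sketch for real example data (the solver rests on the vec/Kronecker identity (2.4b)):

```python
import numpy as np

# x(k+1) = Ax(k) is asymptotically stable iff V - A^*VA > 0 for some V > 0;
# such a V solves the Lyapunov equation V - A^*VA = I (real A here).
def lyapunov_solution(A):
    n = A.shape[0]
    # vec(A' V A) = (A' (x) A') vec(V), so (I - A' (x) A') vec(V) = vec(I)
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, np.eye(n).flatten()).reshape(n, n)

A = np.array([[0.5, 1.0], [0.0, 0.3]])      # r_sigma(A) = 0.5 < 1
V = lyapunov_solution(A)
assert np.allclose(V - A.T @ V @ A, np.eye(2))          # Lyapunov equation holds
assert np.all(np.linalg.eigvalsh((V + V.T) / 2) > 0)    # V > 0
```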
The following definition establishes more precisely the concept of controllability. Although not treated here, a concept akin to controllability is the
reachability of a system. In more general situations these concepts may differ,
but in the present case they are equivalent, and therefore we will only use the
term controllability.
Definition 2.15 (Controllability). The pair (A, B) is said to be controllable if, for any x(0) and any given final state x_f, there exists a finite positive
integer T and a sequence of inputs u(0), u(1), \dots, u(T-1) that, applied to
System (2.12), yields x(T) = x_f.
One can establish if a given system is controllable using the following
theorem, which also lists some classical results (see [48], p. 288).
Theorem 2.16. The following assertions are equivalent.
1. The pair (A, B) is controllable.
2. The following n \times nm matrix (called a controllability matrix) has rank n:
\[
\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}.
\]
3. The controllability Grammian \sum_{i=0}^{n-1} A^i BB^*(A^*)^i is positive definite.
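The rank test of Theorem 2.16 is directly computable; a minimal sketch with hypothetical example pairs:

```python
import numpy as np

# (A, B) is controllable iff [B  AB ... A^{n-1}B] has rank n.
def controllable(A, B):
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, i) @ B for i in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

A = np.array([[1.0, 1.0], [0.0, 1.0]])
assert controllable(A, np.array([[0.0], [1.0]]))      # input reaches both states
assert not controllable(A, np.array([[1.0], [0.0]]))  # input misses second state
```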
\[
x(k+1) = Ax(k), \tag{2.13a}
\]
\[
y(k) = Lx(k). \tag{2.13b}
\]
2. The following observability matrix has rank n:
\[
\begin{bmatrix} L \\ LA \\ \vdots \\ LA^{n-1} \end{bmatrix}.
\]
3. The observability Grammian S_o \in B(C^n) given by
\[
S_o(k) = \sum_{i=0}^{k}(A^*)^i L^* L A^i
\]
\[
J(x_0, u) = \sum_{k=0}^{T-1} E\big(\|Cx(k)\|^2 + \|Du(k)\|^2\big) + E\big(x(T)^*Vx(T)\big), \tag{2.14}
\]
\[
F(k) = -\big(B^*X_T(k+1)B + D^*D\big)^{-1}B^*X_T(k+1)A, \tag{2.16a}
\]
\[
X_T(k) = C^*C + A^*X_T(k+1)A - A^*X_T(k+1)B\big(B^*X_T(k+1)B + D^*D\big)^{-1}B^*X_T(k+1)A, \tag{2.16b}
\]
\[
X_T(T) = V.
\]
Equation (2.16b) is called the difference Riccati equation. Another related
problem is the infinite horizon linear quadratic regulator problem, in which it
is desired to minimize the cost
\[
J(x_0, u) = \sum_{k=0}^{\infty} E\big(\|Cx(k)\|^2 + \|Du(k)\|^2\big). \tag{2.17}
\]
The optimal control law is linear,
\[
u(k) = F(X)x(k), \tag{2.18}
\]
where
\[
F(X) = -\big(B^*XB + D^*D\big)^{-1}B^*XA \tag{2.19}
\]
and X is a positive-semidefinite solution W of the algebraic Riccati equation
\[
W = C^*C + A^*WA - A^*WB\big(B^*WB + D^*D\big)^{-1}B^*WA. \tag{2.20}
\]
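Iterating the difference Riccati equation (2.16b) backwards from zero is a standard way to approximate the stabilizing solution of (2.20) and the gain (2.19). A sketch under example matrices (the iteration count is an arbitrary choice):

```python
import numpy as np

# Backward iteration of (2.16b) until it settles on a fixed point W of (2.20),
# then the LQR gain F(W) of (2.19); A, B, C, D are illustrative data.
A = np.array([[1.2, 0.5], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)            # state weight ||Cx||^2
D = np.array([[1.0]])    # control weight ||Du||^2

W = np.zeros((2, 2))
for _ in range(500):
    G = np.linalg.inv(B.T @ W @ B + D.T @ D)
    W = C.T @ C + A.T @ W @ A - A.T @ W @ B @ G @ B.T @ W @ A

F = -np.linalg.solve(B.T @ W @ B + D.T @ D, B.T @ W @ A)   # gain (2.19)
assert max(abs(np.linalg.eigvals(A + B @ F))) < 1.0        # closed loop stable
assert np.allclose(W, C.T @ C + A.T @ W @ A
                   - A.T @ W @ B @ np.linalg.inv(B.T @ W @ B + D.T @ D) @ B.T @ W @ A)
```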
3. (AA^\dagger)^* = AA^\dagger,
4. (A^\dagger A)^* = A^\dagger A.
For more on this subject, see [49]. The Schur complements presented below
are used to convert quadratic equations into larger dimension linear ones and
vice versa.
Lemma 2.23 (Schur complements). (From [195].) Consider a hermitian
matrix Q such that
\[
Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^* & Q_{22} \end{bmatrix}.
\]
1. Q > 0 if and only if
\[
Q_{22} > 0 \quad \text{and} \quad Q_{11} - Q_{12}Q_{22}^{-1}Q_{12}^* > 0,
\]
or
\[
Q_{11} > 0 \quad \text{and} \quad Q_{22} - Q_{12}^*Q_{11}^{-1}Q_{12} > 0.
\]
2. Q \geq 0 if and only if
\[
Q_{22} \geq 0, \quad Q_{12} = Q_{12}Q_{22}^\dagger Q_{22} \quad \text{and} \quad Q_{11} - Q_{12}Q_{22}^\dagger Q_{12}^* \geq 0,
\]
or
\[
Q_{11} \geq 0, \quad Q_{12} = Q_{11}Q_{11}^\dagger Q_{12} \quad \text{and} \quad Q_{22} - Q_{12}^*Q_{11}^\dagger Q_{12} \geq 0.
\]
A linear matrix inequality (LMI) has the form
\[
F(x) \triangleq F_0 + \sum_{i=1}^{m} x_i F_i > 0, \tag{2.21}
\]
where x_i are the variables and the hermitian matrices F_i \in B(R^n) for i =
0, 1, \dots, m are known.
LMI (2.21) is referred to as a strict LMI. Also of interest are the nonstrict
LMIs, where F(x) \geq 0. From the practical point of view, LMIs are usually
presented as
\[
f(X_1, \dots, X_N) < g(X_1, \dots, X_N), \tag{2.22}
\]
where f and g are affine functions of the unknown matrices X_1, \dots, X_N.
For example, from the Lyapunov equation, the stability of System (2.8) is
equivalent to the existence of a V > 0 satisfying the LMI (2.11). Quadratic
forms can usually be converted to affine ones using the Schur complements.
Therefore we will make no distinctions between (2.21) and (2.22), quadratic
and affine forms, or between a set of LMIs and a single one, and will refer to all
of them simply as LMIs. For more on LMIs the reader is referred to [7], [42],
or any of the many works on the subject.
3
On Stability
advantages in doing this (we hope!) is that, for instance, we can state the
results for the homogeneous case in a more general setting (without requiring,
e.g., that the Markov chain is ergodic) and also it avoids at the beginning the
heavy expressions of the nonhomogeneous case.
It will be shown in Section 3.3 that MSS is equivalent to the spectral radius
of an augmented matrix being less than one, or to the existence of a solution
to a set of coupled Lyapunov equations. The first criterion (spectral radius)
will show clearly the connection between MSS and the probability of visits
to the unstable modes, translating the intuitive idea that unstable operation
modes do not necessarily compromise the global stability of the system. In
fact, as will be shown through some examples, the stability of all modes of
operation is neither necessary nor sufficient for global stability of the system.
For the case of one single operation mode (no jumps in the parameters) these
criteria reduce to the well-known stability results for discrete-time linear
systems. It is also shown that the Lyapunov equations can be written down
in four equivalent forms, and each of these forms provides an easy-to-check
sufficient condition.
We will also consider the nonhomogeneous case in Section 3.4. For the
case in which the system is driven by a second order widesense stationary
random sequence, it is proved that MSS is equivalent to asymptotic wide sense
stationary stability, a result that, we believe, gives a rather complete picture
for the MSS of discrete-time MJLS. For the case in which the inputs are \ell_2-stochastic signals, it will be shown that MSS is equivalent to the discrete-time
MJLS being a bounded linear operator that maps \ell_2-stochastic input signals
into \ell_2-stochastic output signals. This result will be particularly useful for the
H∞ control problem, to be studied in Chapter 7.
Some necessary and sufficient conditions for mean square stabilizability
and detectability, as well as a study of mean square stabilizability for the case
in which the Markov parameter is only partially known, are carried out in
Section 3.5.
With relation to almost sure convergence (ASC), we will consider in Section 3.6 the noise-free case and obtain sufficient conditions in terms of the
norms of some matrices and limit probabilities of a Markov chain constructed
from the original one. We also present an application of this result to the
Markovian version of the adaptive filtering algorithm proposed in [25], and
obtain a very easy-to-check condition for ASC.
\[
E(x(k)) = \sum_{i=1}^{N} E\big(x(k)\mathbf{1}_{\{\theta(k)=i\}}\big)
\]
and
\[
E(x(k)x(k)^*) = \sum_{i=1}^{N} E\big(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}}\big).
\]
Define
\[
q(k) \triangleq \begin{bmatrix} q_1(k) \\ \vdots \\ q_N(k) \end{bmatrix} \in C^{Nn}, \tag{3.3a}
\]
\[
q_i(k) \triangleq E\big(x(k)\mathbf{1}_{\{\theta(k)=i\}}\big) \in C^n, \tag{3.3b}
\]
\[
Q_i(k) \triangleq E\big(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}}\big) \in B(C^n)^+, \tag{3.3c}
\]
\[
Q(k) \triangleq (Q_1(k), \dots, Q_N(k)) \in H^{n+}, \tag{3.3d}
\]
and
\[
\mu(k) \triangleq E(x(k)) = \sum_{i=1}^{N} q_i(k) \in C^n, \tag{3.4}
\]
\[
\mathcal{Q}(k) \triangleq E(x(k)x(k)^*) = \sum_{i=1}^{N} Q_i(k) \in B(C^n)^+. \tag{3.5}
\]
\[
q_j(k+1) = \sum_{i=1}^{N} p_{ij}A_i q_i(k), \qquad
Q_j(k+1) = \sum_{i=1}^{N} p_{ij}A_i Q_i(k)A_i^*.
\]
Note also that
\[
\|x(k)\|_2^2 = E(\|x(k)\|^2) = \sum_{i=1}^{N} E\big(\|x(k)\|^2\mathbf{1}_{\{\theta(k)=i\}}\big)
= \sum_{i=1}^{N} E\big(\operatorname{tr}(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}})\big)
\]
\[
= \sum_{i=1}^{N}\operatorname{tr}\big(E(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}})\big)
= \sum_{i=1}^{N}\operatorname{tr}(Q_i(k)) = \operatorname{tr}\Big(\sum_{i=1}^{N} Q_i(k)\Big)
\leq n\Big\|\sum_{i=1}^{N} Q_i(k)\Big\| \leq n\sum_{i=1}^{N}\|Q_i(k)\| = n\|Q(k)\|_1.
\]
As mentioned before, instead of dealing directly with the state x(k), which
is not Markov, we couch the systems in the Markovian framework via the
augmented state (x(k), (k)), as in Proposition 3.1. In this case the following
operators play an important role in our approach. We set
\[
E(\cdot) \triangleq (E_1(\cdot), \dots, E_N(\cdot)) \in B(H^n),
\]
\[
L(\cdot) \triangleq (L_1(\cdot), \dots, L_N(\cdot)) \in B(H^n),
\]
\[
T(\cdot) \triangleq (T_1(\cdot), \dots, T_N(\cdot)) \in B(H^n),
\]
\[
J(\cdot) \triangleq (J_1(\cdot), \dots, J_N(\cdot)) \in B(H^n),
\]
\[
V(\cdot) \triangleq (V_1(\cdot), \dots, V_N(\cdot)) \in B(H^n),
\]
as follows. For V = (V_1, \dots, V_N) \in H^n and i, j \in N,
\[
E_i(V) \triangleq \sum_{j=1}^{N} p_{ij}V_j \in B(C^n), \tag{3.6}
\]
\[
T_j(V) \triangleq \sum_{i=1}^{N} p_{ij}A_i V_i A_i^* \in B(C^n), \tag{3.7}
\]
\[
L_i(V) \triangleq A_i^* E_i(V) A_i \in B(C^n), \tag{3.8}
\]
\[
V_j(V) \triangleq \sum_{i=1}^{N} p_{ij}A_j V_i A_j^* \in B(C^n), \tag{3.9}
\]
\[
J_i(V) \triangleq \sum_{j=1}^{N} p_{ij}A_j^* V_j A_j \in B(C^n). \tag{3.10}
\]
From Proposition 3.1 we have a connection between the operator T and the
second moment in (3.3d) as follows:
Q(k + 1) = T (Q(k)).
(3.11)
Indeed, for V, S \in H^n,
\[
\langle T(V); S\rangle = \sum_{j=1}^{N}\operatorname{tr}\big(T_j(V)^* S_j\big)
= \sum_{j=1}^{N}\sum_{i=1}^{N} p_{ij}\operatorname{tr}\big(A_i V_i^* A_i^* S_j\big)
= \sum_{i=1}^{N}\sum_{j=1}^{N} p_{ij}\operatorname{tr}\big(V_i^* A_i^* S_j A_i\big)
\]
\[
= \sum_{i=1}^{N}\operatorname{tr}\Big(V_i^* A_i^*\Big(\sum_{j=1}^{N} p_{ij}S_j\Big)A_i\Big)
= \sum_{i=1}^{N}\operatorname{tr}\big(V_i^* A_i^* E_i(S) A_i\big)
= \sum_{i=1}^{N}\operatorname{tr}\big(V_i^* L_i(S)\big) = \langle V; L(S)\rangle,
\]
and similarly,
\[
\langle V(V); S\rangle = \langle V; J(S)\rangle.
\]
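The adjoint identity ⟨T(V); S⟩ = ⟨V; L(S)⟩ can be verified numerically for random real data; a sketch (sizes and matrices are arbitrary example data):

```python
import numpy as np

# Check <T(V); S> = <V; L(S)> for the operators (3.7)-(3.8), with the
# inner product (2.3); real random data, so adjoints are transposes.
rng = np.random.default_rng(2)
N_modes, n = 3, 2
P = rng.random((N_modes, N_modes)); P /= P.sum(axis=1, keepdims=True)
A = [rng.standard_normal((n, n)) for _ in range(N_modes)]
V = [rng.standard_normal((n, n)) for _ in range(N_modes)]
S = [rng.standard_normal((n, n)) for _ in range(N_modes)]

def T(V):   # T_j(V) = sum_i p_ij A_i V_i A_i^*
    return [sum(P[i, j] * A[i] @ V[i] @ A[i].T for i in range(N_modes))
            for j in range(N_modes)]

def L(S):   # L_i(S) = A_i^* (sum_j p_ij S_j) A_i
    return [A[i].T @ sum(P[i, j] * S[j] for j in range(N_modes)) @ A[i]
            for i in range(N_modes)]

inner = lambda X, Y: sum(np.trace(x.T @ y) for x, y in zip(X, Y))
assert np.isclose(inner(T(V), S), inner(V, L(S)))
```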
For M_i \in B(C^n), i \in N, we set diag[M_i] the Nn \times Nn block diagonal matrix
formed with M_i in the diagonal and zero elsewhere, that is,
\[
\operatorname{diag}[M_i] \triangleq \begin{bmatrix} M_1 & & 0 \\ & \ddots & \\ 0 & & M_N \end{bmatrix}. \tag{3.12a}
\]
Define also
\[
B \triangleq (P' \otimes I_n)\operatorname{diag}[A_i] \in B(C^{Nn}), \tag{3.12b}
\]
\[
\mathcal{C} \triangleq P' \otimes I_{n^2} \in B(C^{Nn^2}), \qquad \mathcal{N} \triangleq \operatorname{diag}[\bar{A}_i \otimes A_i] \in B(C^{Nn^2}), \tag{3.12c}
\]
\[
A_1 \triangleq \mathcal{C}\mathcal{N}, \quad A_2 \triangleq \mathcal{N}^*\mathcal{C}^*, \quad A_3 \triangleq \mathcal{N}\mathcal{C}, \quad A_4 \triangleq \mathcal{C}^*\mathcal{N}^*. \tag{3.12d}
\]
From Proposition 3.1 we have that matrix B and the first moment in (3.3b)
are related as follows:
\[
q(k+1) = Bq(k). \tag{3.13}
\]
Remark 3.3. Since A_1 = A_2^* and A_3 = A_4^*, it is immediate that r_\sigma(A_1) = r_\sigma(A_2)
and r_\sigma(A_3) = r_\sigma(A_4). Moreover, since r_\sigma(\mathcal{C}\mathcal{N}) = r_\sigma(\mathcal{N}\mathcal{C}), we get r_\sigma(A_1) =
r_\sigma(A_3). Summing up, we have r_\sigma(A_1) = r_\sigma(A_2) = r_\sigma(A_3) = r_\sigma(A_4).
The following result is germane to our MSS approach. It gives the connection
between the operators T , L, V, J and A1 , A2 , A3 , A4 , allowing a treatment
which reconciles, to some extent, with that of the linear case, i.e., gives stability conditions in terms of the spectral radius of an augmented matrix in the
spirit, mutatis mutandis, of the one for the linear case.
Proposition 3.4. For Q \in H^n,
\[
\hat{\varphi}(T(Q)) = A_1\hat{\varphi}(Q), \quad
\hat{\varphi}(L(Q)) = A_2\hat{\varphi}(Q), \quad
\hat{\varphi}(V(Q)) = A_3\hat{\varphi}(Q), \quad
\hat{\varphi}(J(Q)) = A_4\hat{\varphi}(Q).
\]
Proof. Immediate application of the definitions in Section 2.1, (2.4) and (3.12).
Remark 3.5. From Remark 2.2, Remark 3.3 and Proposition 3.4 it is immediate that
\[
r_\sigma(A_1) = r_\sigma(A_2) = r_\sigma(A_3) = r_\sigma(A_4) = r_\sigma(T) = r_\sigma(L) = r_\sigma(V) = r_\sigma(J).
\]
We conclude this section with the following result, which guarantees that
stability of the second moment operator implies stability of the rst moment
operator.
Proposition 3.6. If r (A1 ) < 1 then r (B) < 1.
Proof. Let \{e_i;\ i = 1, \dots, Nn\} and \{v_i;\ i = 1, \dots, n\} be the canonical orthonormal bases for C^{Nn} and C^n respectively. Fix arbitrarily \alpha, \ell, 1 \leq \alpha \leq n,
\ell \in N, and consider the homogeneous System (3.1) with initial conditions
x(0) = v_\alpha, P(\theta(0) = \ell) = 1. Then q(0) = e_\zeta, where \zeta = \alpha + (\ell-1)n. From Proposition
3.1,
\[
\|B^k e_\zeta\|^2 = \|q(k)\|^2 = \sum_{i=1}^{N}\|q_i(k)\|^2 = \sum_{i=1}^{N}\big\|E\big(x(k)\mathbf{1}_{\{\theta(k)=i\}}\big)\big\|^2
\leq \sum_{i=1}^{N} E\big(\|x(k)\|^2\mathbf{1}_{\{\theta(k)=i\}}\big) = \sum_{i=1}^{N}\operatorname{tr}(Q_i(k)),
\]
with
\[
\hat{\varphi}(Q(k)) = A_1^k\hat{\varphi}(Q(0)),
\]
where Q_\ell(0) = v_\alpha v_\alpha^* and Q_i(0) = 0 for i \neq \ell.
Thus, r_\sigma(A_1) < 1 implies that
Q(k) \to 0 as k \to \infty and thus B^k e_\zeta \to 0 as k \to \infty. Since \alpha and \ell
were arbitrarily chosen, it follows that B^k e_i \to 0 as k \to \infty for each
i = 1, \dots, Nn. Then B^k q \to 0 as k \to \infty for every q \in C^{Nn}. Hence we
conclude that r_\sigma(B) < 1.
Remark 3.7. It can be easily checked that r_\sigma(B) < 1 does not imply that
r_\sigma(A_1) < 1. Indeed, consider n = 1, N = 2, p_{11} = p_{22} = 1/2, A_1 = 0.7, A_2 =
1.25. In this case r_\sigma(B) = 0.975 and r_\sigma(A_1) = 1.02625.
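The numbers in Remark 3.7 can be reproduced from the definitions in (3.12); a sketch (scalar modes, so all Kronecker products are trivial):

```python
import numpy as np

# n = 1, N = 2, p11 = p22 = 1/2, mode gains 0.7 and 1.25.
# First moment: B = (P' (x) I_n) diag[A_i];
# second moment: A_1 = (P' (x) I_{n^2}) diag[A_i (x) A_i]  (real scalar case).
P = np.array([[0.5, 0.5], [0.5, 0.5]])
a = [0.7, 1.25]
B = P.T @ np.diag(a)                          # first-moment matrix
calA1 = P.T @ np.diag([ai**2 for ai in a])    # second-moment matrix

r = lambda M: max(abs(np.linalg.eigvals(M)))
assert np.isclose(r(B), 0.975)        # first moment stable ...
assert np.isclose(r(calA1), 1.02625)  # ... yet the system is not MSS
```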
5. For some \beta \geq 1, 0 < \zeta < 1, we have for all x_0 \in C_0^n and all \theta_0 \in \Theta_0,
\[
E(\|x(k)\|^2) \leq \beta\zeta^k\|x_0\|_2^2, \quad k = 0, 1, \dots.
\]
6. For all x_0 \in C_0^n and all \theta_0 \in \Theta_0,
\[
\sum_{k=0}^{\infty} E(\|x(k)\|^2) < \infty. \tag{3.16}
\]
\[
P = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}.
\]
We obtain
\[
A_1 = \frac{1}{2}\begin{bmatrix} 16/9 & 1/9 \\ 16/9 & 1/9 \end{bmatrix}, \qquad
r_\sigma(A_1) = \frac{17}{18} < 1,
\]
and so the system is MSS. Suppose now that we have a different transition
probability matrix, say
\[
\tilde{P} = \begin{bmatrix} 0.9 & 0.1 \\ 0.9 & 0.1 \end{bmatrix},
\]
so that the system will most likely stay longer in mode 1, which is unstable.
Then
\[
A_1 = \begin{bmatrix} 144/90 & 1/10 \\ 16/90 & 1/90 \end{bmatrix}, \qquad
r_\sigma(A_1) = \frac{145}{90} > 1,
\]
and the system is no longer MSS. This evinces a connection between MSS
and the probability of visits to the unstable modes, which is translated in the
expression for A_1 given in (3.12d).
Example 3.15. Consider the above example again with probability transition
matrix given by
\[
P = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}.
\]
For the coupled Lyapunov equations in the form V - T(V) = S with S_1 = S_2 = 1/18,
we obtain
\[
V_j - \frac{1}{2}\Big(\frac{16}{9}V_1 + \frac{1}{9}V_2\Big) = \frac{1}{18}, \quad j = 1, 2,
\]
which has the positive-definite solution V_1 = V_2 = 1. If we use the equivalent
forms, V - L(V) = S or V - V(V) = S, for the Lyapunov equations with
S_1 = 1/9 and S_2 = 4/9, we get
\[
V_1 - \frac{1}{2}\cdot\frac{16}{9}(V_1 + V_2) = \frac{1}{9},
\]
\[
V_2 - \frac{1}{2}\cdot\frac{1}{9}(V_1 + V_2) = \frac{4}{9}.
\]
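The two scalar coupled Lyapunov systems of this example can be solved as 2 × 2 linear equations; a numerical sketch (the equations are restated in the comments):

```python
import numpy as np

# Example data: a_1^2 = 16/9, a_2^2 = 1/9, p_ij = 1/2.
a2 = np.array([16/9, 1/9])

# V_j - (1/2)(16/9 V_1 + 1/9 V_2) = 1/18, j = 1, 2
M = np.eye(2) - 0.5 * np.vstack([a2, a2])
V = np.linalg.solve(M, np.array([1/18, 1/18]))
assert np.allclose(V, [1.0, 1.0])          # positive-definite solution V = (1, 1)

# V_i - (1/2) a_i^2 (V_1 + V_2) = S_i with S = (1/9, 4/9)
M2 = np.eye(2) - 0.5 * np.outer(a2, np.ones(2))
V2 = np.linalg.solve(M2, np.array([1/9, 4/9]))
assert np.allclose(V2, [9.0, 1.0])         # positive solution V = (9, 1)
```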
Example 3.17 (A Non-MSS System with Stable Modes). Consider a system
with two operation modes, defined by matrices
\[
A_1 = \begin{bmatrix} 0 & 2 \\ 0 & 0.5 \end{bmatrix} \quad \text{and} \quad
A_2 = \begin{bmatrix} 0.5 & 0 \\ 2 & 0 \end{bmatrix},
\]
and the transition probability matrix
\[
P = \begin{bmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{bmatrix}.
\]
Note that both modes are stable. Curiously, the augmented matrix A_1 of (3.12d)
satisfies r_\sigma(A_1) = 2.125 > 1, which means that the system is not MSS. A brief
analysis of the trajectories for each mode is useful to clarify the matter. First
consider only trajectories
for mode 1. For initial conditions given by
\[
x(0) = \begin{bmatrix} x_{10} \\ x_{20} \end{bmatrix},
\]
the trajectories are given by
\[
x(k) = \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}
= \begin{bmatrix} 2(0.5)^{k-1}x_{20} \\ 0.5(0.5)^{k-1}x_{20} \end{bmatrix}
\quad \text{for } k = 1, 2, \dots.
\]
So, with the exception of the point given by x(0), the whole trajectory lies
along the line given by x1 (k) = 4x2 (k) for any initial condition. This means
that if, in a given time, the state is not in this line, mode 1 dynamics will
transfer it to the line in one time step and it will remain there thereafter. For
mode 2, it can be easily shown that the trajectories are given by
\[
x(k) = \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}
= \begin{bmatrix} 0.5(0.5)^{k-1}x_{10} \\ 2(0.5)^{k-1}x_{10} \end{bmatrix}
\quad \text{for } k = 1, 2, \dots.
\]
Similarly to mode 1, if the state is not in the line x1 (k) = x2 (k)/4, mode 2
dynamics will transfer it to the line in one time step. The equations for the
trajectories also show that the transitions make the state switch between these
two lines. Notice that transitions from mode 1 to mode 2 cause the state to
move away from the origin in the direction of component x2 , while transitions
from mode 2 to mode 1 do the same with respect to component x1 . Figure
3.1.a shows the trajectory of the system with mode 1 dynamics only, for a
given initial condition while Figure 3.1.b shows the same for mode 2. Figure
3.2 shows the trajectory for a possible sequence of switchings between the two
modes, evincing the instability of the system.
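The value rσ(A1) = 2.125 quoted in Example 3.17 can be reproduced from (3.12d) (real case, so A_i ⊗ A_i appears in place of the conjugated factor); a sketch:

```python
import numpy as np

# Both modes are Schur stable, yet the augmented second-moment matrix
# calA1 = (P' (x) I_4) diag[A_i (x) A_i] has spectral radius 2.125 > 1.
A1 = np.array([[0.0, 2.0], [0.0, 0.5]])
A2 = np.array([[0.5, 0.0], [2.0, 0.0]])
P = np.array([[0.5, 0.5], [0.5, 0.5]])

r = lambda M: max(abs(np.linalg.eigvals(M)))
assert r(A1) < 1 and r(A2) < 1          # each mode is stable

N = np.block([[np.kron(A1, A1), np.zeros((4, 4))],
              [np.zeros((4, 4)), np.kron(A2, A2)]])
calA1 = np.kron(P.T, np.eye(4)) @ N
assert np.isclose(r(calA1), 2.125)      # the MJLS is not MSS
```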
Example 3.18 (An MSS System with Unstable Modes). Consider the following
system:
\[
A_1 = \begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad
A_2 = \begin{bmatrix} 0 & 1 \\ 0 & 2 \end{bmatrix}
\]
Fig. 3.1. Trajectories of the system in the (x1(k), x2(k)) plane from the initial condition x0: (a) mode 1 dynamics only; (b) mode 2 dynamics only
Fig. 3.2. One of the possible trajectories for the Markov system. Note that the
trajectory tends to move away from the origin. The time k = 1, \dots, 20 is presented
on the z-axis. The gray lines are trajectory projections.
square stability of the system. MSS depends upon a balance between transition
probability of the Markov chain and the operation modes.
3.3.3 Proof of Theorem 3.9
We first establish our main results related to the coupled Lyapunov equations,
which give us four equivalent forms of writing them.
Theorem 3.19. If there exists V Hn+ , V > 0, such that
V = T (V ) + S
(3.17)
for some S Hn+ , S > 0, then r (A1 ) < 1. The result also holds if we replace
T by L, V or J .
Proof. Consider the homogeneous system
\[
Y(k+1) = L(Y(k)), \quad Y(0) \in H^{n+}. \tag{3.18}
\]
Clearly Y(k) \in H^{n+} for k = 0, 1, \dots. Define the function \nu from H^{n+} to R as: for
Y \in H^{n+},
\[
\nu(Y) \triangleq \langle V; Y\rangle = \sum_{j=1}^{N}\operatorname{tr}(V_j Y_j)
= \sum_{j=1}^{N}\operatorname{tr}\big(V_j^{1/2}Y_j V_j^{1/2}\big) \geq 0.
\]
For P = (P_1, \dots, P_N) \in H^{n+}, set
\[
c_0(P) \triangleq \min_{\substack{1 \leq i \leq n \\ 1 \leq j \leq N}}\lambda_i(P_j) \geq 0, \qquad
c_1(P) \triangleq \max_{\substack{1 \leq i \leq n \\ 1 \leq j \leq N}}\lambda_i(P_j) \geq 0,
\]
where \lambda_i(P_j), i = 1, \dots, n, denote the eigenvalues of P_j. Then
\[
c_0(V)\Big(\sum_{j=1}^{N}\sum_{i=1}^{n}\lambda_i(Y_j)\Big) \leq \nu(Y)
\leq c_1(V)\Big(\sum_{j=1}^{N}\sum_{i=1}^{n}\lambda_i(Y_j)\Big). \tag{3.19}
\]
Note that
\[
\|Y\|_2^2 = \sum_{j=1}^{N}\operatorname{tr}\big(Y_j^2\big) = \sum_{j=1}^{N}\sum_{i=1}^{n}\lambda_i(Y_j)^2.
\]
Along the trajectories of (3.18) we obtain, with V as in (3.17),
\[
\Delta\nu(Y(k)) \triangleq \nu(Y(k+1)) - \nu(Y(k))
\leq -c_0(S)\sum_{j=1}^{N}\sum_{i=1}^{n}\lambda_i(Y_j(k)) < 0
\]
whenever Y(k) \neq 0. Therefore we have shown that (3.18) is asymptotically
stable and thus L^k(Y) \to 0 as k \to \infty for all Y \in H^{n+}, which yields, from
Proposition 2.5 and Remark 3.5, that r_\sigma(L) = r_\sigma(A_1) < 1. The result also
holds replacing T by L, V or J, bearing in mind Proposition 3.2, Remark 3.5,
and their relations to A_1, \dots, A_4 as in Proposition 3.4.
The following result is an immediate consequence of Proposition 2.6 (recalling that r (A1 ) = r (T )).
Proposition 3.20. If r_\sigma(A_1) < 1 then there exists a unique V \in H^n such
that
\[
V = T(V) + S
\]
for any S \in H^n. Moreover,
1. V = \hat{\varphi}^{-1}\big((I_{Nn^2} - A_1)^{-1}\hat{\varphi}(S)\big) = \sum_{k=0}^{\infty} T^k(S),
2. S = S^* \Rightarrow V = V^*,
3. S \geq 0 \Rightarrow V \geq 0,
4. S > 0 \Rightarrow V > 0.
1. r_\sigma(A_1) < 1.
2. For any given S \in H^{n+}, S > 0, there exists a unique V \in H^{n+}, V > 0,
such that V = T(V) + S. Moreover,
\[
V = \hat{\varphi}^{-1}\big((I_{Nn^2} - A_1)^{-1}\hat{\varphi}(S)\big) = \sum_{k=0}^{\infty} T^k(S).
\]
\[
\sum_{i=1}^{N} Q_i(k) = \sum_{i=1}^{N} T_i^k(Q(0)), \tag{3.20}
\]
and
\[
\sum_{i=1}^{N} T_i^k(Q(0)) \to 0 \quad \text{as } k \to \infty.
\]
\[
E(\|x(k)\|^2) \leq \beta\zeta^k\|x_0\|_2^2, \quad k = 0, 1, \dots. \tag{3.21}
\]
Proof. If System (3.1) is MSS then from Proposition 3.23, r_\sigma(T) < 1 and
therefore, according to Proposition 2.5, for some \beta_0 \geq 1 and 0 < \zeta < 1, we
have \|T^k\|_1 \leq \beta_0\zeta^k for all k = 0, 1, \dots. Therefore, recalling from Proposition
3.1 that Q(k+1) = T(Q(k)), where Q(k) = (Q_1(k), \dots, Q_N(k)) \in H^{n+} and
Q_i(k) = E(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}}), we get
\[
E(\|x(k)\|^2) = E\big(\operatorname{tr}(x(k)x(k)^*)\big)
= \sum_{i=1}^{N}\operatorname{tr}\big(E(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}})\big)
= \sum_{i=1}^{N}\operatorname{tr}(Q_i(k))
\]
\[
\leq n\sum_{i=1}^{N}\|Q_i(k)\| = n\|Q(k)\|_1 \leq n\|T^k\|_1\|Q(0)\|_1 \leq n\beta_0\zeta^k\|x_0\|_2^2,
\]
since
\[
\|Q(0)\|_1 = \sum_{i=1}^{N}\|Q_i(0)\| = \sum_{i=1}^{N}\big\|E(x(0)x(0)^*\mathbf{1}_{\{\theta(0)=i\}})\big\|
\leq E(\|x(0)\|^2) = \|x_0\|_2^2,
\]
showing (3.21) with \beta = n\beta_0. On the other hand, if (3.21) is satisfied, then clearly (3.16)
holds for all x_0 \in C_0^n and all \theta_0 \in \Theta_0, and from Proposition 3.24 we have that
System (3.1) is MSS, completing the proof of the proposition.
We present now the proof of Theorem 3.9.
Proof. The equivalence among (1), (2), (3), (4), (5), and (6) follows from
Corollary 3.21, and Propositions 3.22, 3.23, 3.24, and 3.25.
\[
V_1 - \sum_{j=1}^{N} p_j A_j^* V_1 A_j > 0
\]
implies MSS, so that (2) implies (1). On the other hand, MSS implies that
r_\sigma(A_1) < 1, which implies in turn, from Theorem 3.9 and Proposition 3.20,
that there exists a unique V \in H^{n+} satisfying
\[
V = J(V) + S, \tag{3.22}
\]
where
\[
V = \sum_{k=0}^{\infty} J^k(S). \tag{3.23}
\]
Since p_{ij} = p_j for all i, j \in N, we have for any U = (U_1, \dots, U_N) \in H^n that
\[
J_i(U) = \sum_{j=1}^{N} p_j A_j^* U_j A_j, \quad i \in N, \tag{3.24}
\]
does not depend on i, so that the components of V in (3.23) coincide and, from (3.22),
satisfy
\[
V_1 - \sum_{j=1}^{N} p_j A_j^* V_1 A_j = S_1, \tag{3.25}
\]
From Theorem 3.9 we can derive some sufficient conditions for MSS that are
easier to check.
Corollary 3.28. If for real numbers \delta_i > 0, i \in N, one of the conditions below
is satisfied:
1. r_\sigma\Big(\sum_{i=1}^{N} p_{ij}\delta_i A_i A_i^*\Big) < \delta_j, \quad j = 1, \dots, N,
2. r_\sigma\Big(\sum_{j=1}^{N} p_{ij}\delta_j A_j^* A_j\Big) < \delta_i, \quad i = 1, \dots, N,
then model (3.1) is MSS. Besides, these conditions are weaker than those in
Corollary 3.27.
Proof. Suppose that condition (1) is satisfied and take V_i = \delta_i I > 0, i \in
N, V = (V_1, \dots, V_N). Following the same arguments as in the proof of the
previous corollary, we have
\[
V_j - T_j(V) = \delta_j I - \sum_{i=1}^{N} p_{ij}\delta_i A_i A_i^* > 0, \quad j \in N,
\]
and the model (3.1) is MSS. The proof under condition (2) is similar. Note that
if the conditions of Corollary 3.27 are satisfied then for each j \in N,
\[
r_\sigma\Big(\sum_{i=1}^{N} p_{ij}\delta_i A_i A_i^*\Big)
\leq \Big\|\sum_{i=1}^{N} p_{ij}\delta_i A_i A_i^*\Big\|
\leq \sum_{i=1}^{N} p_{ij}\delta_i\|A_i A_i^*\|
= \sum_{i=1}^{N} p_{ij}\delta_i\,r_\sigma(A_i A_i^*) < \delta_j,
\]
which implies condition (1). Similarly we can show that condition (4) of Corollary 3.27 implies condition (2) above.
Let us illustrate these conditions through some examples.
Example 3.29. This example shows that conditions (1) and (2) of Corollary
3.28 are not equivalent. Indeed, consider n = 2, p_{ij} = p_j \in (0, 1), i, j = 1, 2,
and
\[
A_1 = \begin{bmatrix} 0 & 0 \\ a_1 & 0 \end{bmatrix} \quad \text{and} \quad
A_2 = \begin{bmatrix} a_2 & 0 \\ 0 & 0 \end{bmatrix},
\]
with a_1, a_2 real numbers. Condition (1) will be satisfied if, for some \delta_i > 0,
i = 1, 2, we have
\[
r_\sigma\Big(\begin{bmatrix} \delta_2 a_2^2 & 0 \\ 0 & \delta_1 a_1^2 \end{bmatrix}p_j\Big)
= \max\{\delta_1 a_1^2, \delta_2 a_2^2\}p_j < \delta_j, \quad j = 1, 2,
\]
or, equivalently, if p_1 a_1^2 < 1, p_2 a_2^2 < 1. On the other hand, condition (2) will
be satisfied if for some \delta_i > 0, i = 1, 2, we have
\[
r_\sigma\Big(\begin{bmatrix} \delta_1 p_1 a_1^2 + \delta_2 p_2 a_2^2 & 0 \\ 0 & 0 \end{bmatrix}\Big)
= \delta_1 p_1 a_1^2 + \delta_2 p_2 a_2^2 < \delta_i, \quad i = 1, 2. \tag{3.26}
\]
Note that if (3.26) is satisfied then p_1 a_1^2 < 1 and p_2 a_2^2 < 1. Consider p_1 =
p_2 = 1/2, a_1^2 = a_2^2 = 4/3. For this choice, p_1 a_1^2 < 1, p_2 a_2^2 < 1, but (3.26) will
not be satisfied for any \delta_i > 0, i = 1, 2. Therefore conditions (1) and (2) of
Corollary 3.28 are not equivalent. Note that in this example, condition (4) of
Corollary 3.27 will be the same as (3.26), and there will be a positive-definite
solution for
\[
V - p_1 A_1^* V A_1 - p_2 A_2^* V A_2 = I
\]
if and only if a_2^2 p_2 < 1, and thus from Corollary 3.26 this is a necessary and
sufficient condition for MSS.
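For this example the augmented matrix of (3.12d) can be formed explicitly; its spectral radius turns out to equal p2 a2², matching the necessary and sufficient MSS condition just stated (a numerical sketch; the test values are illustrative):

```python
import numpy as np

# Example 3.29 data: p_ij = p_j, A_1 = [[0,0],[a1,0]], A_2 = [[a2,0],[0,0]].
def r_sigma(p1, p2, a1, a2):
    P = np.array([[p1, p2], [p1, p2]])
    M1 = np.array([[0.0, 0.0], [a1, 0.0]])
    M2 = np.array([[a2, 0.0], [0.0, 0.0]])
    N = np.block([[np.kron(M1, M1), np.zeros((4, 4))],
                  [np.zeros((4, 4)), np.kron(M2, M2)]])
    A = np.kron(P.T, np.eye(4)) @ N       # augmented matrix of (3.12d)
    return max(abs(np.linalg.eigvals(A)))

# p_i = 1/2, a_i^2 = 4/3: p_2 a_2^2 = 2/3 < 1, so the system is MSS.
assert r_sigma(0.5, 0.5, np.sqrt(4/3), np.sqrt(4/3)) < 1
# Violating p_2 a_2^2 < 1 destroys MSS; the radius equals p_2 a_2^2 here.
assert np.isclose(r_sigma(0.5, 0.5, np.sqrt(4/3), np.sqrt(2.2)), 0.5 * 2.2)
```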
Example 3.30. Consider Example 3.29 with a different A_1, that is,
\[
A_1 = \begin{bmatrix} 0 & a_1 \\ 0 & 0 \end{bmatrix}.
\]
Repeating the same arguments above, we get that condition (2) of Corollary
3.28 will be satisfied if p_1 a_1^2 < 1 and p_2 a_2^2 < 1, while condition (1) will be the
same as (3.26). This shows that we cannot say that either of the conditions of
Corollary 3.28 is stronger than the other. Again in this problem we have
that MSS is equivalent to a_2^2 p_2 < 1.
\[
\sum_{i=1}^{N} p_{ij}\pi_i = \pi_j, \qquad \sum_{i=1}^{N}\pi_i = 1, \tag{3.28}
\]
and, for some \beta \geq 1 and 0 < \zeta < 1,
\[
|\pi_i(k) - \pi_i| \leq \beta\zeta^k,
\]
where \pi_i(k) \triangleq P(\theta(k) = i).
\[
U_i(k, l) \triangleq E\big(x(k+l)x(k)^*\mathbf{1}_{\{\theta(k+l)=i\}}\big) \in B(C^n), \qquad
U(k, l) \triangleq \sum_{i=1}^{N} U_i(k, l). \tag{3.29}
\]
We consider here the first scenario regarding additive disturbance: the one
characterized by a second-order independent wide-sense stationary sequence
of random variables. We will prove Theorem 3.33 in this subsection, unifying
the results for the homogeneous and nonhomogeneous cases. We make the
following definitions. For
\[
q = \begin{bmatrix} q_1 \\ \vdots \\ q_N \end{bmatrix} \in C^{Nn},
\]
define
\[
\psi(k) \triangleq \begin{bmatrix} \psi_1(k) \\ \vdots \\ \psi_N(k) \end{bmatrix} \in C^{Nn}, \tag{3.30}
\]
where
\[
\psi_j(k) \triangleq \sum_{i=1}^{N} p_{ij}G_i E(w(k))\pi_i(k) \in C^n, \tag{3.31}
\]
and
\[
R(k, q) \triangleq (R_1(k, q), \dots, R_N(k, q)) \in H^n, \tag{3.32}
\]
where
\[
R_j(k, q) \triangleq \sum_{i=1}^{N} p_{ij}\Big(A_i q_i E(w(k))^* G_i^* + G_i E(w(k))q_i^* A_i^*
+ G_i E\big(w(k)w(k)^*\big)G_i^*\pi_i(k)\Big). \tag{3.33}
\]
In what follows recall the definition of q(k), Q(k), and U(k, l) in (3.3) and
(3.29).
Proposition 3.35. For every k = 0, 1, \dots, l = 1, 2, \dots and j \in N,
1. q(k+1) = Bq(k) + \psi(k),
2. Q(k+1) = T(Q(k)) + R(k, q(k)),
3. U_j(k, l) = \sum_{i=1}^{N} p_{ij}A_i U_i(k, l-1)
+ \sum_{i=1}^{N} p_{ij}G_i E(w(k+l-1))\Big(\sum_{m=1}^{N} p_{mi}^{l-1}q_m(k)\Big)^*,
where p_{mi}^{l-1} = P(\theta(l-1) = i \mid \theta(0) = m).
li
Proof. From the hypothesis that x0 and {(k); k T} are independent of
{w(k); k T}, it easily follows that x(k) is independent of w(l) for every
l k. Therefore
qj (k + 1)
=
i=1
i=1
i=1
N
i=1
pij i qi (k) +
N
i=1
pij Gi i (k)
Similarly,
\[
Q_j(k+1) = \sum_{i=1}^{N} p_{ij}A_i Q_i(k)A_i^* + R_j(k, q(k)),
\]
and
\[
U_j(k, l) = \sum_{i=1}^{N} p_{ij}A_i U_i(k, l-1)
+ \sum_{i=1}^{N} p_{ij}G_i E(w(k+l-1))\Big(\sum_{m=1}^{N} p_{mi}^{l-1}q_m(k)\Big)^*,
\]
showing items 2 and 3.
Define now
\[
\psi \triangleq \begin{bmatrix} \psi_1 \\ \vdots \\ \psi_N \end{bmatrix} \in C^{Nn}, \tag{3.34}
\]
where
\[
\psi_j \triangleq \sum_{i=1}^{N} p_{ij}G_i E(w(0))\pi_i \in C^n, \tag{3.35}
\]
and
\[
R(q) \triangleq (R_1(q), \dots, R_N(q)) \in H^n, \tag{3.36}
\]
where
\[
R_j(q) \triangleq \sum_{i=1}^{N} p_{ij}\Big(A_i q_i E(w(0))^* G_i^* + G_i E(w(0))q_i^* A_i^*
+ G_i E\big(w(0)w(0)^*\big)G_i^*\pi_i\Big). \tag{3.37}
\]
Consider the recursions
\[
v(k+1) = Bv(k) + \psi(k), \tag{3.38}
\]
\[
Z(k+1) = T(Z(k)) + R(k, v(k)), \tag{3.39}
\]
and the stationary equations
\[
q = Bq + \psi, \tag{3.40}
\]
\[
Q = T(Q) + R(q). \tag{3.41}
\]
Moreover,
\[
q = (I - B)^{-1}\psi, \tag{3.42}
\]
\[
Q = (I - T)^{-1}(R(q)) = \hat{\varphi}^{-1}\big((I - A_1)^{-1}\hat{\varphi}(R(q))\big), \tag{3.43}
\]
with Q \in H^{n+}. Furthermore, for any v(0) \in C^{Nn} and Z(0) = (Z_1(0), \dots, Z_N(0)) \in
H^n, we have that v(k) and Z(k) given by (3.38) and (3.39) satisfy
\[
v(k) \to q \quad \text{and} \quad Z(k) \to Q \quad \text{as } k \to \infty.
\]
Proof. Since r_\sigma(T) = r_\sigma(A_1) < 1, we have from Proposition 3.6 that r_\sigma(B) <
1. Let us show that \psi(k) is a convergent sequence. Recall
that by hypothesis the Markov chain is ergodic (see Assumption 3.31) and
therefore for some \beta \geq 1 and 0 < \zeta < 1,
\[
|\pi_i - \pi_i(k)| \leq \beta\zeta^k. \tag{3.44}
\]
Hence, since |\pi_i(k+l) - \pi_i(k)| \leq |\pi_i(k+l) - \pi_i| + |\pi_i - \pi_i(k)| \leq 2\beta\zeta^k,
\[
\sum_{k=0}^{\infty}\sup_{l \geq 0}\|\psi(k+l) - \psi(k)\|
\leq \max_{1 \leq j \leq N}\|G_j\|\,\|E(w(0))\|\sum_{k=0}^{\infty}\sum_{i=1}^{N} 2\beta\zeta^k
\leq \max_{1 \leq j \leq N}\|G_j\|\,\|E(w(0))\|\,\frac{2N\beta}{1-\zeta} < \infty.
\]
Since v(k) \to q, we have R(k, v(k)) \to R(q) as k \to \infty, and it follows from Proposition 2.9 that
\[
Z(k) \to Q = \hat{\varphi}^{-1}\big((I - A_1)^{-1}\hat{\varphi}(R(q))\big).
\]
Define
\[
\mu \triangleq \sum_{i=1}^{N} q_i, \tag{3.45}
\]
where
\[
q = \begin{bmatrix} q_1 \\ \vdots \\ q_N \end{bmatrix} = (I - B)^{-1}\psi, \tag{3.46}
\]
\[
\mathcal{Q} \triangleq \sum_{i=1}^{N} Q_i, \tag{3.47}
\]
where
\[
Q = (I - T)^{-1}(R(q)) = \hat{\varphi}^{-1}\big((I - A_1)^{-1}\hat{\varphi}(R(q))\big), \tag{3.48}
\]
and
\[
U(l) \triangleq \sum_{i=1}^{N} U_i(l), \tag{3.49}
\]
where U(0) = Q and
\[
U_j(l+1) = \sum_{i=1}^{N} p_{ij}A_i U_i(l)
+ \sum_{i=1}^{N} p_{ij}G_i E(w(0))\Big(\sum_{m=1}^{N} p_{mi}^{l}q_m\Big)^*. \tag{3.50}
\]
\[
U_j(k, 1) \to \sum_{i=1}^{N} p_{ij}A_i Q_i + \sum_{i=1}^{N} p_{ij}G_i E(w(0))q_i^* \quad \text{as } k \to \infty.
\]
Suppose that the induction hypothesis holds for l, that is, U(k, l) \to U(l) as
k \to \infty, for some U(l) \in H^n. Then, similarly,
\[
U_j(k, l+1) \to \sum_{i=1}^{N} p_{ij}A_i U_i(l)
+ \sum_{i=1}^{N} p_{ij}G_i E(w(0))\Big(\sum_{m=1}^{N} p_{mi}^{l}q_m\Big)^* \quad \text{as } k \to \infty.
\]
Consider now \mathcal{Q}(k) = \sum_{i=1}^{N} Q_i(k). Thus,
\[
Q_i(k) = T_i^k(Q(0)) + \sum_{j=0}^{k-1} T_i^{k-j-1}\big(R(j, q(j))\big), \tag{3.51}
\]
and
\[
\sum_{i=1}^{N} T_i^k(Q(0)) \to 0 \quad \text{as } k \to \infty.
\]
This shows that T^k(Q(0)) \to 0 as k \to \infty. By choosing suitable initial conditions x_0 and \theta_0, we have that any element in H^{n+} can be written as Q(0) so
that, from Proposition 2.6, r_\sigma(T) < 1.
Note that
\[
x(k) = A_{\theta(k-1)}\cdots A_{\theta(0)}x(0)
+ \sum_{l=0}^{k-1} A_{\theta(k-1)}\cdots A_{\theta(l+1)}G_{\theta(l)}w(l).
\]
Set
\[
W_i(l) \triangleq E\big(w(l)w(l)^*\mathbf{1}_{\{\theta(l)=i\}}\big), \qquad
W(l) \triangleq (W_1(l), \dots, W_N(l)) \in H^{r+},
\]
\[
GW(l)G^* \triangleq (G_1 W_1(l)G_1^*, \dots, G_N W_N(l)G_N^*) \in H^{n+}.
\]
From Proposition 3.1, we get
\[
\big\|A_{\theta(k-1)}\cdots A_{\theta(l+1)}G_{\theta(l)}w(l)\big\|_2^2
\leq n\big\|T^{k-l-1}(GW(l)G^*)\big\|_1
\leq n\|T^{k-l-1}\|_1\|GW(l)G^*\|_1
\leq n\,G_{\max}^2\|T^{k-l-1}\|_1\|w(l)\|_2^2,
\]
where G_{\max} \triangleq \max_{1 \leq j \leq N}\|G_j\|, since
\[
\|W(l)\|_1 = \sum_{i=1}^{N}\|W_i(l)\|
\leq \sum_{i=1}^{N} E\big(\|w(l)\|^2\mathbf{1}_{\{\theta(l)=i\}}\big)
= E(\|w(l)\|^2) = \|w(l)\|_2^2.
\]
Similarly,
\[
\big\|A_{\theta(k-1)}\cdots A_{\theta(0)}x(0)\big\|_2^2 \leq n\|T^k\|_1\|x(0)\|_2^2.
\]
From Theorem 3.9, there exist 0 < \zeta < 1 and \beta \geq 1 such that \|T^k\|_1 \leq \beta\zeta^k,
and therefore,
\[
\|x(k)\|_2 \leq \sum_{l=0}^{k}\alpha_{k-l}b_l, \quad k = 0, 1, \dots,
\]
where \alpha_l \triangleq (n\beta)^{1/2}\max\{1, G_{\max}\}\zeta^{l/2}, b_0 \triangleq \|x(0)\|_2 and b_l \triangleq \|w(l-1)\|_2 for
l \geq 1. Set a \triangleq (\alpha_0, \alpha_1, \dots) and b \triangleq (b_0, b_1, \dots). Since a \in \ell_1 (that is, \sum_{l=0}^{\infty}|\alpha_l| < \infty)
and b \in \ell_2 (that is, \sum_{l=0}^{\infty}|b_l|^2 < \infty), it follows that the convolution c \triangleq a * b =
(c_0, c_1, \dots), c_k \triangleq \sum_{l=0}^{k}\alpha_{k-l}b_l, lies itself in \ell_2 with \|c\|_2 \leq \|a\|_1\|b\|_2 (cf. [103],
p. 529). Hence,
\[
\|x\|_2 = \Big(\sum_{k=0}^{\infty} E(\|x(k)\|^2)\Big)^{1/2}
\leq \Big(\sum_{k=0}^{\infty} c_k^2\Big)^{1/2} = \|c\|_2 < \infty.
\]
Finally,
\[
\|Q(k)\|_1 = \sum_{i=1}^{N}\big\|E\big(x(k)x(k)^*\mathbf{1}_{\{\theta(k)=i\}}\big)\big\|
\leq \sum_{i=1}^{N} E\big(\|x(k)\|^2\mathbf{1}_{\{\theta(k)=i\}}\big) = E(\|x(k)\|^2),
\]
and thus
\[
\sum_{k=0}^{\infty}\|T^k(V)\|_1 \leq \sum_{k=0}^{\infty} E(\|x(k)\|^2) < \infty
\]
for the system with initial conditions x(0) = x_0, \theta(0) = \theta_0.
In the following we will present the definition of these properties and also
tests, based on convex programming, to establish both stabilizability and detectability for a given system. The reader should keep in mind that these
concepts in the Markovian context are akin to their deterministic equivalents,
and that stabilizability and detectability for the deterministic case could be
considered as special cases of the definitions given below (see Definitions 2.17
and 2.20).
Definition 3.40 (Mean Square Stabilizability). Let A = (A_1, \dots, A_N) \in
H^n, B = (B_1, \dots, B_N) \in H^{m,n}. We say that the pair (A, B) is mean
square stabilizable if there is F = (F_1, \dots, F_N) \in H^{n,m} such that when
u(k) = F_{\theta(k)}x(k), System (3.52) is MSS. In this case, F is said to stabilize
the pair (A, B).
Definition 3.41 (Mean Square Detectability). Let C = (C_1, \dots, C_N) \in
H^{n,s}. We say that the pair (C, A) is mean square detectable if there is H =
(H_1, \dots, H_N) \in H^{s,n} such that r_\sigma(T) < 1, with T as in (3.7) and A_i replaced
by A_i + H_iC_i for i \in N.
We now introduce tests to establish detectability and stabilizability for a
given system. These tests are based on the resolution of feasibility problems,
with restrictions given in terms of LMIs.
Proposition 3.42 (Mean Square Stabilizability Test). The pair (A, B) is
mean square stabilizable if and only if there are W_1 = (W_{11}, \dots, W_{N1}) \in H^{n+},
W_2 = (W_{12}, \dots, W_{N2}) \in H^{n,m} and W_3 = (W_{13}, \dots, W_{N3}) \in H^{m+} such that,
for all j \in N,
\[
\sum_{i=1}^{N} p_{ij}\big(A_i W_{i1}A_i^* + B_i W_{i2}A_i^* + A_i W_{i2}^* B_i^* + B_i W_{i3}B_i^*\big) - W_{j1} < 0, \tag{3.53a}
\]
\[
\begin{bmatrix} W_{j1} & W_{j2}^* \\ W_{j2} & W_{j3} \end{bmatrix} \geq 0, \tag{3.53b}
\]
\[
W_{j1} > 0. \tag{3.53c}
\]
In this case, F = (F_1, \dots, F_N) with F_i = W_{i2}W_{i1}^{-1} stabilizes (A, B).
Proof. Suppose first that the LMIs (3.53) are feasible and set F_i = W_{i2}W_{i1}^{-1},
X_i = W_{i1}. From (3.53b) and the Schur complement (Lemma 2.23), W_{i3} \geq
W_{i2}W_{i1}^{-1}W_{i2}^*, so that
\[
-X_j + \sum_{i=1}^{N} p_{ij}(A_i + B_iF_i)X_i(A_i + B_iF_i)^* < 0 \tag{3.54}
\]
holds, since
\[
-W_{j1} + \sum_{i=1}^{N} p_{ij}(A_i + B_iF_i)W_{i1}(A_i + B_iF_i)^*
= -W_{j1} + \sum_{i=1}^{N} p_{ij}\big(A_i W_{i1}A_i^* + B_i W_{i2}A_i^* + A_i W_{i2}^* B_i^* + B_i W_{i2}W_{i1}^{-1}W_{i2}^* B_i^*\big)
\]
\[
\leq -W_{j1} + \sum_{i=1}^{N} p_{ij}\big(A_i W_{i1}A_i^* + B_i W_{i2}A_i^* + A_i W_{i2}^* B_i^* + B_i W_{i3}B_i^*\big) < 0
\]
for all j \in N. From Theorem 3.9 we have that F stabilizes (A, B) in the mean
square sense (see Definition 3.40). So it is clear that (A, B) is mean square
stabilizable.
\[
x(k+1) = \big(A_{\theta(k)} + B_{\theta(k)}F_{\hat{\theta}(k)}\big)x(k), \tag{3.55}
\]
and a question that immediately arises is whether mean square stability will be
preserved. To answer this question we have to make some assumptions about
the estimate \hat{\theta}(k) of \theta(k). We assume that
\[
P(\hat{\theta}(k) = s \mid \hat{F}_k) = P(\hat{\theta}(k) = s \mid \theta(k)) = \rho_{\theta(k)s}, \quad s \in N,
\]
with
\[
\rho_{is} \geq 0 \quad \text{and} \quad \sum_{s=1}^{N}\rho_{is} = 1 \quad \text{for all } i, s \in N,
\]
where \hat{F}_0 = F_0 and, for k = 1, \dots, \hat{F}_k is the \sigma-field generated by the random variables \{x(0), \theta(0), \hat{\theta}(0), \dots, x(k), \theta(k), \hat{\theta}(k-1)\}. Therefore
\[
P(\hat{\theta}(k) = i \mid \theta(k) = i) = \rho_{ii} \geq p \quad \text{for } i = 1, \dots, N,
\]
and consequently

P( θ̂(k) ≠ i | θ(k) = i ) = Σ_{s=1, s≠i}^N α_is = 1 − α_ii ≤ 1 − p, for i ∈ N.

Define also

c_0 ≜ max{ ‖A_i + B_i F_s‖² ; i, s ∈ N, i ≠ s, α_is ≠ 0 }.
From the fact that F stabilizes (A, B) we have that there exist β ≥ 1 and
0 < ζ < 1 such that (see Theorem 3.9) ‖T^k‖ ≤ β ζ^k, k = 0, 1, .... The following
lemma provides a condition on the lower bound p of the probability of correct
estimation of θ(k) which guarantees mean square stability of (3.55).

Lemma 3.44. Assume that F stabilizes (A, B). If p > 1 − (1 − ζ)/(β c_0), with
β, ζ and c_0 as above, then (3.55) is mean square stable.
Proof. Set Q(k) = (Q_1(k), ..., Q_N(k)) ∈ ℍ^{n+} where (see (3.3d))

Q_i(k) = E( x(k) x(k)' 1_{{θ(k)=i}} ).

Conditioning on F̂_k and using the Markov property together with P(θ̂(k) = s | F̂_k) = α_{θ(k)s}, we get

Q_j(k+1) = E( (A_{θ(k)} + B_{θ(k)} F_{θ̂(k)}) x(k) x(k)' (A_{θ(k)} + B_{θ(k)} F_{θ̂(k)})' 1_{{θ(k+1)=j}} )
 = Σ_{i=1}^N p_ij Σ_{s=1}^N α_is (A_i + B_i F_s) Q_i(k) (A_i + B_i F_s)'
 ≤ T_j(Q(k)) + S_j(Q(k)),   (3.56)

where T is as in (3.7) with Γ_i = A_i + B_i F_i (recall that α_ii ≤ 1) and, for V = (V_1, ..., V_N) ∈ ℍ^{n+},

S_j(V) ≜ Σ_{i=1}^N p_ij Σ_{s=1, s≠i}^N α_is (A_i + B_i F_s) V_i (A_i + B_i F_s)'.

Then

‖S‖₁ = sup_{‖V‖₁=1} { ‖S(V)‖₁ }
 = sup_{‖V‖₁=1} Σ_{j=1}^N ‖ Σ_{i=1}^N p_ij Σ_{s=1, s≠i}^N α_is (A_i + B_i F_s) V_i (A_i + B_i F_s)' ‖
 ≤ sup_{‖V‖₁=1} Σ_{j=1}^N Σ_{i=1}^N p_ij Σ_{s=1, s≠i}^N α_is ‖A_i + B_i F_s‖² ‖V_i‖
 ≤ c_0 sup_{‖V‖₁=1} Σ_{i=1}^N ( Σ_{s=1, s≠i}^N α_is ) ‖V_i‖ ( Σ_{j=1}^N p_ij )
 ≤ c_0 (1 − p) sup_{‖V‖₁=1} Σ_{i=1}^N ‖V_i‖ = c_0 (1 − p) sup_{‖V‖₁=1} ‖V‖₁ = c_0 (1 − p).   (3.57)
Define now V(k) = (V_1(k), ..., V_N(k)) ∈ ℍ^{n+} by V(0) = Q(0) and V(k+1) = T(V(k)) + S(V(k)), so that, from (3.56), 0 ≤ Q_j(k) ≤ V_j(k) for all j ∈ N and

V(k) = T^k(V(0)) + Σ_{ℓ=0}^{k−1} T^{k−1−ℓ}( S(V(ℓ)) ),

so that

‖V(k)‖₁ ≤ ‖T^k‖₁ ‖V(0)‖₁ + Σ_{ℓ=0}^{k−1} ‖T^{k−1−ℓ}‖₁ ‖S‖₁ ‖V(ℓ)‖₁
 ≤ β { ζ^k ‖V(0)‖₁ + Σ_{ℓ=0}^{k−1} ζ^{k−1−ℓ} ‖S‖₁ ‖V(ℓ)‖₁ }.

An induction argument based on this bound shows that ‖V(k)‖₁ ≤ β ( ζ + β‖S‖₁ )^k ‖V(0)‖₁, and from (3.57) and the hypothesis p > 1 − (1 − ζ)/(β c_0) we have ζ + β‖S‖₁ ≤ ζ + β c_0 (1 − p) < 1. Since

E( ‖x(k)‖² ) = tr( Σ_{i=1}^N Q_i(k) ) ≤ tr( Σ_{i=1}^N V_i(k) ),

it follows that

Σ_{k=0}^∞ E( ‖x(k)‖² ) ≤ a ‖x_0‖²

for some a ≥ 0, showing that (3.55) is mean square stable.
Throughout the remainder of this section we shall assume that the Markov
chain {θ(k); k = 0, 1, ...} is ergodic, as in Assumption 3.31. For ϑ a positive
integer, define the stochastic process {η(k)}, taking values in N^ϑ (the ϑ-fold
product space of N), as

η(k) ≜ ( θ(ϑ(k+1) − 1), ..., θ(ϑk) ) ∈ N^ϑ, k = 0, 1, ....

It follows that {η(k)} is a Markov chain and, for ν, μ ∈ N^ϑ, μ = (i_1, ..., i_ϑ),
ν = (j_1, ..., j_ϑ), the transition probabilities for η(k) are

P( η(k+1) = ν | η(k) = μ ) = p_{i_1 j_ϑ} p_{j_ϑ j_{ϑ−1}} ⋯ p_{j_2 j_1}.

We write π_ν, ν ∈ N^ϑ, for the stationary probabilities of {η(k)}. Define for every ν = (i_1, ..., i_ϑ) ∈ N^ϑ,

Λ_ν ≜ Λ_{i_1} ⋯ Λ_{i_ϑ}

and

x̃(k) ≜ x(ϑk).

Thus,

x̃(k+1) = x(ϑ(k+1)) = Λ_{θ(ϑ(k+1)−1)} ⋯ Λ_{θ(ϑk)} x̃(k) = Λ_{η(k)} x̃(k).   (3.59)
In what follows, ‖·‖ will denote any norm in ℂ^n and, for M ∈ B(ℂ^n, ℂ^m), ‖M‖
denotes the induced uniform norm in B(ℂ^n, ℂ^m). For instance, for P ∈ B(ℂ^n),
P > 0, we define ‖x‖ in Example 3.49 below as ‖x‖² = x*Px. From (3.59),

‖x̃(k+1)‖ ≤ Π_{i=0}^k ‖Λ_{η(i)}‖ ‖x(0)‖.   (3.60)
Thus, for x(0) ≠ 0,

(1/(k+1)) ln( ‖x̃(k+1)‖ / ‖x(0)‖ ) ≤ (1/(k+1)) Σ_{i=0}^k ln ‖Λ_{η(i)}‖.

From the hypothesis that the Markov chain {θ(k)} is ergodic it follows that
{η(k)} is a regenerative process, and from Theorem 5.10 of [194], we have

(1/(k+1)) Σ_{i=0}^k ln ‖Λ_{η(i)}‖ → Σ_{ν∈N^ϑ} π_ν ln ‖Λ_ν‖ = ln( Π_{ν∈N^ϑ} ‖Λ_ν‖^{π_ν} )  w.p.1,

so that x̃(k) → 0 as k → ∞ w.p.1 whenever

Π_{ν∈N^ϑ} ‖Λ_ν‖^{π_ν} < 1.   (3.61)
65
Example 3.48. We can see that the system in Example 3.29 is always ASC.
Indeed, Λ_2 Λ_1 = 0 and, for ϑ = 2 and ν = (2, 1), we have π_ν > 0, so that
‖Λ_ν‖^{π_ν} = 0 and the product in (3.61) is equal to zero.
Example 3.49. By making an appropriate choice of the norm in ℂ^n we can
force the inequality in (3.61) to hold. Consider n = 2, N = 2, ϑ = 1, π_1 = π_2 = 1/2,
and

Λ_1 = [ 1 1 ; 0 1 ],  Λ_2 = [ 1/2 0 ; 0 2/3 ].

Using the Euclidean norm we obtain ‖Λ_1‖ = 1.618, ‖Λ_2‖ = 2/3, so that
‖Λ_1‖ ‖Λ_2‖ = 1.078 > 1 and (3.61) fails. Defining

P = [ 1/25 0 ; 0 1 ]

and taking ‖x‖² = x*Px, we obtain ‖Λ_1‖_P = 1.105 and ‖Λ_2‖_P = 2/3, so that
‖Λ_1‖_P ‖Λ_2‖_P = 0.737 < 1 and (3.61) holds.
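The norm computations of Example 3.49 are easy to reproduce numerically; the induced norm under ‖x‖²_P = x'Px equals the spectral norm of P^{1/2} Λ P^{−1/2}. A sketch (the matrices and the weight P = diag(1/25, 1) are as reconstructed here):

```python
import numpy as np

# Matrices of Example 3.49 (as reconstructed).
L1 = np.array([[1.0, 1.0], [0.0, 1.0]])
L2 = np.array([[0.5, 0.0], [0.0, 2.0 / 3.0]])

def p_norm(M, P):
    # Induced norm of M under ||x||_P^2 = x' P x equals ||P^{1/2} M P^{-1/2}||_2.
    Ph = np.sqrt(P)          # valid since P is diagonal here
    return np.linalg.norm(Ph @ M @ np.linalg.inv(Ph), 2)

euclid = np.linalg.norm(L1, 2) * np.linalg.norm(L2, 2)    # ~1.078 > 1
P = np.diag([1.0 / 25.0, 1.0])
weighted = p_norm(L1, P) * p_norm(L2, P)                  # ~0.737 < 1
```

The weighting shrinks the off-diagonal contribution of Λ_1 enough to make the product of norms contractive.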
Proof. We have

x̃(k+1)* P x̃(k+1) = x̃(k)* Λ_{η(k)}* P Λ_{η(k)} x̃(k) ≤ ‖Λ_{η(k)}‖²_P ( x̃(k)* P x̃(k) ),

and thus for x(0) ≠ 0,

( x̃(k+1)* P x̃(k+1) ) / ( x̃(0)* P x̃(0) ) ≤ Π_{i=0}^k ‖Λ_{η(i)}‖²_P.

Repeating the same reasoning of the previous theorem, we conclude that
x̃(k) → 0 as k → ∞ w.p.1. Note that for ℓ ∈ {1, ..., ϑ},

x(ϑk + ℓ) = Λ_{θ(ϑk+ℓ−1)} ⋯ Λ_{θ(ϑk)} x(ϑk),

so that ‖x(ϑk + ℓ)‖ ≤ c ‖x̃(k)‖ for some constant c not depending on k and ℓ,
which shows that x(k) → 0 as k → ∞ w.p.1.
e(k+1) = ( I − γ u(k) u(k)' / ( u(k)' u(k) ) ) e(k) = Λ_{θ(k)} e(k),

where u(k) = u_{θ(k)}, γ ∈ (0, 2), and

Λ_i ≜ I − γ u_i u_i' / ( u_i' u_i ), for i ∈ N.
Note first that

Λ_i u_i = u_i − γ u_i ( u_i' u_i ) / ( u_i' u_i ) = (1 − γ) u_i,

while Λ_i x = x whenever u_i' x = 0. Given x ≠ 0, let y ≜ Λ_{i_{p+1}} ⋯ Λ_{i_ϑ} x and
suppose u_{i_p}' y ≠ 0. Let {v_1, ..., v_n} be an orthonormal basis of R^n with
v_1 = u_{i_p} / ‖u_{i_p}‖, so that for j = 2, ..., n,

v_j' Λ_{i_p} y = v_j' y,

and

v_1' Λ_{i_p} y = u_{i_p}' Λ_{i_p} y / ‖u_{i_p}‖ = (1 − γ) u_{i_p}' y / ‖u_{i_p}‖.

Then

‖Λ_{i_p} y‖² = Σ_{j=1}^n ( v_j' Λ_{i_p} y )²
 = ( (1 − γ) u_{i_p}' y / ‖u_{i_p}‖ )² + Σ_{j=2}^n ( v_j' y )²
 < ( u_{i_p}' y / ‖u_{i_p}‖ )² + Σ_{j=2}^n ( v_j' y )² = ‖y‖²   (3.62)
since u_{i_p}' y ≠ 0 and γ ∈ (0, 2). From (3.62) and recalling that ‖Λ_i‖ ≤ 1 for
i ∈ N, we have

‖Λ_{i_1} ⋯ Λ_{i_{p−1}} Λ_{i_p} y‖ ≤ ( Π_{j=1}^{p−1} ‖Λ_{i_j}‖ ) ‖Λ_{i_p} y‖ ≤ ‖Λ_{i_p} y‖
 < ‖y‖ = ‖Λ_{i_{p+1}} ⋯ Λ_{i_ϑ} x‖ ≤ ( Π_{j=p+1}^ϑ ‖Λ_{i_j}‖ ) ‖x‖ ≤ ‖x‖,

and we conclude that ‖Λ_{i_1} ⋯ Λ_{i_ϑ} x‖ < ‖x‖ for all x ≠ 0 in R^n (such an index p exists whenever the vectors along the path generate R^n). Recalling that
the maximum of a continuous function in a compact set is reached at some
point of this set, we get

‖Λ_{i_1} ⋯ Λ_{i_ϑ}‖ = max_{‖x‖=1, x∈R^n} ‖Λ_{i_1} ⋯ Λ_{i_ϑ} x‖ < 1.

Clearly, for a path ν = (i_1, ..., i_ϑ) along which the set {u_{i_1}, ..., u_{i_ϑ}} generates R^n,
Lemma 3.52 gives

‖Λ_{i_1} ⋯ Λ_{i_ϑ}‖ < 1.   (3.63)

Moreover,

π_ν ≥ π_{i_1} p_{i_1 i_2} ⋯ p_{i_{ϑ−1} i_ϑ} > 0.   (3.64)
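The conclusion of Lemma 3.52 is easy to probe numerically: for γ ∈ (0, 2), each Λ_i = I − γ u_i u_i'/(u_i' u_i) has unit norm, yet the product over a set of u_i that spans R^n is a strict contraction. A small sketch (the vectors are our illustrative choice):

```python
import numpy as np

gamma = 0.5

def Lam(u):
    # Lambda_i = I - gamma * u u' / (u' u) for a column vector u.
    u = u.reshape(-1, 1)
    return np.eye(len(u)) - gamma * (u @ u.T) / float(u.T @ u)

u1, u2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])   # together they span R^2
L1, L2 = Lam(u1), Lam(u2)
n1 = np.linalg.norm(L1, 2)         # = 1 (directions orthogonal to u1 are untouched)
n2 = np.linalg.norm(L2, 2)         # = 1
prod = np.linalg.norm(L1 @ L2, 2)  # < 1, as Lemma 3.52 asserts
```

Each individual factor is not contractive, so the strict decrease really comes from the spanning property of the pair (u1, u2).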
in [67] and [119] that mean square and stochastic stability (ℓ2-stability) are no
longer equivalent (see also [118]). A fresh look at detectability questions
for discrete-time MJLS was given recently in [54] and [56] (see [57] for the
infinite countable case and [55] for the continuous-time case).
Almost sure stability for MJLS is examined, for instance, in [66], [93], [107],
[108], [161] and [169]. A historical account of earlier works can be found in
[107]. This is a topic which certainly deserves further study. Regarding other
issues such as robust stability (including the case with delay) the readers are
referred, for instance, to [20], [35], [36], [61], [166], [191] and [215].
4
Optimal Control
This chapter is devoted to the study of the quadratic optimal control problems
of Markov Jump Linear Systems (MJLS) for the case in which the jump
variable θ(k) as well as the state variable x(k) are available to the controller.
Both the finite horizon and the infinite horizon cases are considered, and the
optimal controllers are derived from a set of coupled difference Riccati equations
for the former problem, and from the mean square stabilizing solution of a set of
coupled algebraic Riccati equations (CARE) for the latter problem.
average cost function, a problem which is very similar to the linear regulator problem considered in Subsection 4.3.2. The same stochastic input is
considered in Section 4.4, but with the cost function being the H2 norm of the
system. This problem is called the H2 control problem and some of its aspects,
especially the connection with the CARE and a certain convex programming
problem, are addressed in Section 4.4. We generalize in Section 4.5 the linear
quadratic regulator problem presented in Subsection 4.3.2 by considering a
MJLS subject to an ℓ2 stochastic input w = {w(k); k = 0, 1, ...} ∈ C^r. In this
case it is shown that the optimal control law at time k has a feedback term
given by the stabilizing solution of the CARE as well as a term characterized by the projection of the ℓ2 stochastic inputs and Markov chain onto the
filtration at time k. We illustrate the application of these results through an
example for the optimal control of a manufacturing system subject to random breakdowns. We analyze the simple case of just one item in production,
although the technique could be easily extended to the more complex case of
several items being manufactured. Finally we mention that when restricted
to the case with no jumps, all the results presented here coincide with those
known in the general literature (see, for instance, [151], [204]).
As aforementioned, all the quadratic optimal control problems presented
in this chapter are related to a set of coupled algebraic (difference for the
finite horizon problem) Riccati equations. These equations are analyzed in
Appendix A where, among other results, necessary and sufficient conditions
for existence and uniqueness of the mean square stabilizing solution for the
CARE are provided. These results can be seen as a generalization of those
analyzed in [67], [143], [144], [146], [147], [176], [177] and [178] using different
approaches.
where A(k) = (A_1(k), ..., A_N(k)) ∈ ℍ^n, B(k) = (B_1(k), ..., B_N(k)) ∈ ℍ^{m,n},
M(k) = (M_1(k), ..., M_N(k)) ∈ ℍ^{r,n}, C(k) = (C_1(k), ..., C_N(k)) ∈ ℍ^{n,q},
D(k) = (D_1(k), ..., D_N(k)) ∈ ℍ^{m,q}, and w = {w(k); k ∈ {0, ..., T−1}}
represents a noise sequence satisfying

E( w(k) w(k)' 1_{{θ(k)=i}} ) = Ψ_i(k),  E( w(0) x(0)' 1_{{θ(0)=i}} ) = 0.   (4.2)

For V = (V_1, ..., V_N) ∈ ℍ^n we define, as before,

E_i(V, k) ≜ Σ_{j=1}^N p_ij(k) V_j.   (4.3)
We assume that the state variable and operation modes (x(k) and θ(k) respectively) are known at each time k and, as in (3.3),

E( x(0) x(0)' 1_{{θ(0)=i}} ) = Q_i(0),   (4.4)
C_i(k)' D_i(k) = 0,   (4.5)
D_i(k)' D_i(k) > 0,   (4.6)

for all i ∈ N, k ∈ {0, ..., T−1}. Notice that (4.6) means that all components
of the control variable will be penalized in the cost function. As pointed out
in Remark 4.1 below, there is no loss of generality in assuming (4.5).
We assume that for any measurable functions f and g,

E( f(θ(k)) g(θ(k+1)) | G_k ) = E( f(θ(k)) | G_k ) Σ_{j=1}^N p_{θ(k)j}(k) g(j),   (4.7)

and

E( w(k) x(k)' 1_{{θ(k)=i}} ) = 0,   (4.8)
E( w(k) u(k)' 1_{{θ(k)=i}} ) = 0.   (4.9)
Notice that we do not assume here the usual wide sense white noise sequence
assumption for {w(k); k = 0, ..., T}, since it would not be suitable for the partial
observation case, to be analyzed in Chapter 6. What will in fact be required
for the partial observation case, and also suitable for the complete observation
case, are conditions (4.7), (4.8) and (4.9).
The quadratic cost associated to system G with an admissible control
law u = (u(0), ..., u(T−1)) and initial conditions (x_0, θ_0) is denoted by
J(θ_0, x_0, u), and is given by
J(θ_0, x_0, u) ≜ Σ_{k=0}^{T−1} E( ‖C_{θ(k)}(k) x(k)‖² + ‖D_{θ(k)}(k) u(k)‖² ) + E( x(T)' V_{θ(T)} x(T) ),   (4.10)

where V = (V_1, ..., V_N) ∈ ℍ^{n+} weights the final state. To minimize J(θ_0, x_0, u) we proceed by dynamic programming, considering for each ℓ ∈ {0, ..., T−1} the intermediate cost

J(θ(ℓ), x(ℓ), ℓ, u^ℓ) ≜ E( Σ_{k=ℓ}^{T−1} ( ‖C_{θ(k)}(k) x(k)‖² + ‖D_{θ(k)}(k) u(k)‖² ) + x(T)' V_{θ(T)} x(T) | F_ℓ ),   (4.13)
where the control law u^ℓ = (u(ℓ), ..., u(T−1)) and the respective state sequence
(x(ℓ), ..., x(T)) are such that, for each ℓ ≤ k ≤ T−1, (4.8) and (4.9) hold, and
u(k) is G_k-measurable. We denote this set of admissible controllers by U_T(ℓ),
and the optimal cost, usually known in the literature as the value function,
by J̄(θ_ℓ, x_ℓ, ℓ). The intermediate problem is thus to optimize the performance
of the system over the last T − ℓ stages, starting at the point x(ℓ) = x_ℓ
and mode θ(ℓ) = i. As in the stochastic linear regulator problem (see, for
instance, [80]) we apply dynamic programming to obtain, by induction on
the intermediate problems, the solution of minimizing J(θ_0, x_0, u). For i ∈ N
and k = T−1, ..., 0, define the following recursive coupled Riccati difference
equations X(k) = (X_1(k), ..., X_N(k)) ∈ ℍ^{n+}, with X_i(T) = V_i:

X_i(k) ≜ A_i(k)' E_i(X(k+1), k) A_i(k) − A_i(k)' E_i(X(k+1), k) B_i(k)
 ( D_i(k)' D_i(k) + B_i(k)' E_i(X(k+1), k) B_i(k) )^{-1}
 B_i(k)' E_i(X(k+1), k) A_i(k) + C_i(k)' C_i(k),   (4.14)

and set

W(i, x, k) ≜ x' X_i(k) x + γ(k),   (4.15)

where γ(T) ≜ 0 and

γ(k) ≜ Σ_{t=k}^{T−1} Σ_{i=1}^N π_i(t) tr( M_i(t) Ψ_i(t) M_i(t)' E_i(X(t+1), t) ).
The following theorem, which is based on the results in [115], taking into
account (4.2), (4.7), (4.8), (4.9), and bearing in mind the Markov property
for {θ(k)}, shows that J̄(i, x, k) = W(i, x, k).
Theorem 4.2. For the intermediate stochastic control problems described in
(4.13) the optimal control law û = (û(ℓ), ..., û(T−1)) is given by

û(k) = F_{θ(k)}(k) x(k),   (4.16)

where F_i(k) ≜ −( D_i(k)' D_i(k) + B_i(k)' E_i(X(k+1), k) B_i(k) )^{-1} B_i(k)' E_i(X(k+1), k) A_i(k). Moreover,

J̄(θ_0, x_0) = E( J̄(θ(0), x(0), 0) ) = E( W(θ(0), x(0), 0) ),

that is,

J̄(θ_0, x_0) = Σ_{i=1}^N tr( Q_i(0) X_i(0) ) + Σ_{k=0}^{T−1} Σ_{i=1}^N π_i(k) tr( M_i(k) Ψ_i(k) M_i(k)' E_i(X(k+1), k) ).   (4.17)
Proof. Notice that for any u ∈ U_T(ℓ) we have, from the Markov property,
(4.7), (4.14) and (4.15), a completion of squares for
E( W(θ(k+1), x(k+1), k+1) | G_k ) − W(θ(k), x(k), k), in which, from (4.2),

E( tr( M_{θ(k)}(k) w(k) w(k)' M_{θ(k)}(k)' E_{θ(k)}(X(k+1), k) ) )
 = Σ_{i=1}^N π_i(k) tr( M_i(k) Ψ_i(k) M_i(k)' E_i(X(k+1), k) ),   (4.19)

while, from (4.8) and (4.9), the cross terms between the noise and x(k), u(k)
vanish:

E( x(k)' A_{θ(k)}(k)' E_{θ(k)}(X(k+1), k) M_{θ(k)}(k) w(k) ) = 0,   (4.20)
E( u(k)' B_{θ(k)}(k)' E_{θ(k)}(X(k+1), k) M_{θ(k)}(k) w(k) ) = 0.   (4.21)

Writing R_i(k) ≜ D_i(k)' D_i(k) + B_i(k)' E_i(X(k+1), k) B_i(k), summing from k = ℓ to T−1 and taking conditional expectations, we obtain

J(θ(ℓ), x(ℓ), ℓ, u^ℓ) = W(θ(ℓ), x(ℓ), ℓ) + Σ_{k=ℓ}^{T−1} E( ‖R_{θ(k)}(k)^{1/2} ( u(k) − F_{θ(k)}(k) x(k) )‖² | F_ℓ ).   (4.22)

Taking the minimum over u ∈ U_T(ℓ) in (4.22) we obtain that the optimal control law is given by (4.16), so that the second term on the right hand
side of (4.22) equals zero and J̄(θ_ℓ, x_ℓ, ℓ) = J(θ(ℓ), x(ℓ), ℓ, û) =
W(θ(ℓ), x(ℓ), ℓ). In particular, for ℓ = 0 we obtain that

J̄(θ_0, x_0) = E( x(0)' X_{θ(0)}(0) x(0) ) + γ(0),

which, after some manipulation, leads to (4.17), proving the desired result.
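The backward recursion (4.14), together with the gains of (4.16), is straightforward to implement. A minimal sketch (function names ours; the data are taken time-invariant for brevity):

```python
import numpy as np

def riccati_recursion(A, B, C, D, P, V, T):
    # Backward coupled Riccati difference equations (4.14) with terminal X(T) = V,
    # returning X(0..T) and the time-varying gains F(k) of (4.16).
    # A, B, C, D: lists of per-mode matrices; P: N x N transition matrix.
    N = len(A)
    X = [list(V)]          # X[0] holds the most recently computed stage
    F = []
    for _ in range(T):
        Xn = X[0]
        E = [sum(P[i, j] * Xn[j] for j in range(N)) for i in range(N)]
        Fk, Xk = [], []
        for i in range(N):
            R = D[i].T @ D[i] + B[i].T @ E[i] @ B[i]
            Fi = -np.linalg.solve(R, B[i].T @ E[i] @ A[i])
            Fk.append(Fi)
            # A'EA - A'EB R^{-1} B'EA + C'C, written using Fi = -R^{-1} B'EA:
            Xk.append(A[i].T @ E[i] @ A[i] + C[i].T @ C[i] + A[i].T @ E[i] @ B[i] @ Fi)
        X.insert(0, Xk)
        F.insert(0, Fk)
    return X, F

# Deterministic special case (N = 1, p11 = 1): reduces to the classical DARE.
X, F = riccati_recursion([np.array([[2.0]])], [np.array([[1.0]])],
                         [np.array([[1.0]])], [np.array([[1.0]])],
                         np.array([[1.0]]), [np.zeros((1, 1))], T=40)
```

For the scalar data above, X(0) approaches the DARE fixed point 2 + √5 and the gain approaches −(1 + √5)/2, giving a stable closed loop.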
Remark 4.3. For the case in which A_i, B_i, C_i, D_i, p_ij in (4.1) are time-invariant,
the control coupled Riccati difference equations (4.14) lead to the following
control coupled algebraic Riccati equations (CARE):

X_i = A_i' E_i(X) A_i − A_i' E_i(X) B_i ( D_i' D_i + B_i' E_i(X) B_i )^{-1} B_i' E_i(X) A_i + C_i' C_i,   (4.23)

and respective gains:

F_i(X) ≜ −( D_i' D_i + B_i' E_i(X) B_i )^{-1} B_i' E_i(X) A_i.   (4.24)
with, as in the finite horizon case,

C_i' D_i = 0,   (4.26)
D_i' D_i > 0,   (4.27)

for all i ∈ N. The control variable u = (u(0), ...) is such that u(k) is G_k-measurable for each k = 0, 1, ... and

lim_{k→∞} E( ‖x(k)‖² ) = 0.   (4.28)

We denote this class of admissible controllers by U. For the noiseless case the cost to be minimized is

J(θ_0, x_0, u) ≜ Σ_{k=0}^∞ E( ‖z(k)‖² ),   (4.29)

and the optimal cost is J̄(θ_0, x_0) ≜ inf_{u∈U} J(θ_0, x_0, u).   (4.30)
This is the Markov jump version of the well-known LQR (linear quadratic
regulator), a classical control problem discussed in Chapter 2. The cost
functional (4.29) will also be considered in Section 4.5 for the more general case in which the state equation includes an additive ℓ2 stochastic input
w = {w(k); k ∈ T} ∈ C^r. These results will be used in the min-max problem
associated to the H∞ control problem, to be presented in Chapter 7.
For the case in which the state equation includes an additive wide sense
white noise sequence (that is, when w = {w(k); k ∈ T} is such that E(w(k)) =
0, E(w(k)w(k)') = I, E(w(k)w(l)') = 0 for k ≠ l) independent of the Markov chain
{θ(k)} and initial state value x_0, a cost function as in (4.29) might lead to
an unbounded value. So in this case two cost functions will be considered.
The first one is the long run average cost function J_av(θ_0, x_0, u), defined for
arbitrary u ∈ U as

J_av(θ_0, x_0, u) ≜ lim sup_{t→∞} (1/t) Σ_{k=0}^{t−1} E( ‖z(k)‖² ),   (4.31)

with optimal average cost J̄_av(θ_0, x_0) ≜ inf_{u∈U} J_av(θ_0, x_0, u).   (4.32)
The second one, which is somewhat equivalent to the long run average cost
criterion, is the H2 norm of system G. For this criterion we consider control
laws in the form of a state feedback u(k) = F_{θ(k)} x(k). To make sure that (4.28)
is satisfied, we define (see Definition 3.40)

F ≜ { F = (F_1, ..., F_N) ∈ ℍ^{m,n}; F stabilizes (A, B) in the mean square sense },
which contains all controllers that stabilize system G in the mean square sense.
When the input u applied to system G is given by u(k) = F_{θ(k)} x(k), we will
denote the resulting closed-loop system by G_F, and define the objective function as

‖G_F‖₂² ≜ lim_{k→∞} E( ‖z(k)‖² ),   (4.33)

with the associated H2 control problem

min { ‖G_F‖₂² ; F ∈ F }.   (4.34)
These problems will be considered in Subsection 4.3.3 and Section 4.4, respectively.
As will be made clear in the next sections, the solutions of all the control problems posed above are closely related to the mean square stabilizing
solution for the CARE, defined next.
Definition 4.4 (Control CARE). We say that X = (X_1, ..., X_N) ∈ ℍ^{n+}
is the mean square stabilizing solution for the control CARE (4.35) if it satisfies, for i ∈ N,

X_i = A_i' E_i(X) A_i − A_i' E_i(X) B_i ( D_i' D_i + B_i' E_i(X) B_i )^{-1} B_i' E_i(X) A_i + C_i' C_i   (4.35)

and r_σ(T) < 1, where T(·) = (T_1(·), ..., T_N(·)) is defined as in (3.7) with
Γ_i = A_i + B_i F_i(X) and F_i(X) as in

F_i(X) ≜ −( D_i' D_i + B_i' E_i(X) B_i )^{-1} B_i' E_i(X) A_i   (4.36)

for i ∈ N.
It will be convenient to define, for V = (V_1, ..., V_N) ∈ ℍ^{n+}, R(V) =
(R_1(V), ..., R_N(V)) as follows:

R_i(V) ≜ D_i' D_i + B_i' E_i(V) B_i > 0.   (4.37)

Necessary and sufficient conditions for the existence of the mean square stabilizing solution for the control CARE and other related results are presented
in Appendix A.
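When the stabilizing solution of (4.35) exists, it can often be approximated numerically by iterating the Riccati recursion until convergence and then reading the gains (4.36) off the limit. A sketch under that assumption (function names ours):

```python
import numpy as np

def solve_care(A, B, C, D, P, iters=200):
    # Iterate X_i <- A_i'E_i(X)A_i - A_i'E_i(X)B_i R_i(X)^{-1} B_i'E_i(X)A_i + C_i'C_i,
    # as in (4.35), with R_i(X) = D_i'D_i + B_i'E_i(X)B_i as in (4.37).
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
        X = [A[i].T @ E[i] @ A[i] + C[i].T @ C[i]
             - A[i].T @ E[i] @ B[i] @ np.linalg.solve(
                 D[i].T @ D[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
             for i in range(N)]
    E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
    F = [-np.linalg.solve(D[i].T @ D[i] + B[i].T @ E[i] @ B[i],
                          B[i].T @ E[i] @ A[i]) for i in range(N)]   # gains (4.36)
    return X, F

# Deterministic case N = 1, p11 = 1: the classical DARE with A = 2, B = C = D = 1,
# whose positive solution is 2 + sqrt(5) with gain -(1 + sqrt(5))/2.
X, F = solve_care([np.array([[2.0]])], [np.array([[1.0]])],
                  [np.array([[1.0]])], [np.array([[1.0]])], np.array([[1.0]]))
```

Convergence of this plain fixed-point iteration is not guaranteed in general; Appendix A gives the precise existence conditions.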
4.3.2 The Markov Jump Linear Quadratic Regulator Problem
We consider in this subsection model (4.25) without additive noise (that is,
w(k) = 0 for all k = 0, 1, ...). We show next that if there is the mean square
stabilizing solution X ∈ ℍ^{n+} for the CARE (4.35), then the control law given
by

û(k) = F_{θ(k)}(X) x(k)   (4.38)

is an optimal solution for the quadratic optimal control problem posed in
Subsection 4.3.1, with cost function given by (4.29).
Indeed, using the CARE (4.35) one obtains, for any u ∈ U and k = 0, 1, ...,

E( x(k)' X_{θ(k)} x(k) − x(k+1)' X_{θ(k+1)} x(k+1) + ‖R_{θ(k)}(X)^{1/2} ( u(k) − F_{θ(k)}(X) x(k) )‖² )
 = E( x(k)' C_{θ(k)}' C_{θ(k)} x(k) + u(k)' D_{θ(k)}' D_{θ(k)} u(k) ).

Taking the sum from 0 to ∞ and noticing that E( x(k)' X_{θ(k)} x(k) ) goes to 0
as k goes to infinity, from the hypothesis that u ∈ U, it follows that

J(θ_0, x_0, u) = Σ_{k=0}^∞ E( ‖R_{θ(k)}(X)^{1/2} ( u(k) − F_{θ(k)}(X) x(k) )‖² ) + E( x(0)' X_{θ(0)} x(0) ),

completing the proof of the theorem.
Proof. From the fact that the solution X is mean square stabilizing, we have
that (4.28) is satisfied and thus û ∈ U. Consider (4.10) with V = X. From
(4.17) in Theorem 4.2 we have for any u = (u(0), ...) ∈ U that

Σ_{k=0}^{T−1} E( ‖z(k)‖² ) + E( x(T)' X_{θ(T)} x(T) ) ≥ Σ_{i=1}^N tr( Q_i(0) X_i ) + Σ_{k=0}^{T−1} Σ_{i=1}^N π_i(k) tr( G_i G_i' E_i(X) ),   (4.39)

since in this case X(k) = X for all k = 0, 1, .... Notice that we have equality
in (4.39) when the control law is given by û(k) as in (4.38). Dividing (4.39) by
T and taking the limit as T goes to ∞, we obtain from the ergodic Assumption
3.31 for the Markov chain that

J_av(θ_0, x_0, u) = lim sup_{T→∞} (1/T) Σ_{k=0}^{T−1} E( ‖z(k)‖² )
 ≥ lim sup_{T→∞} (1/T) Σ_{k=0}^{T−1} Σ_{i=1}^N π_i(k) tr{ G_i' E_i(X) G_i }
 = Σ_{i=1}^N π_i tr{ G_i' E_i(X) G_i },
‖G_F‖₂² ≜ Σ_{s=1}^r ‖z_s‖₂²,

where z_s represents the output sequence (z(0), z(1), ...) given by (4.25) when:
1. x(0) = 0 and the input sequence is given by w = (w(0), w(1), ...) with
w(0) = e_s, w(k) = 0 for k > 0, where {e_1, ..., e_r} forms a basis for C^r,
and
2. θ(0) = i with probability π_i, where {π_i; i ∈ N} is such that π_i > 0 for i ∈ N.
It is easy to see that the definition above is equivalent to considering system G_F
without inputs, and with initial conditions given by x(0) = G_j e_s and θ(0) = j
with probability π_j for j ∈ N and s = 1, ..., r. Notice that the equivalence
between Definition 4.7 and the one presented in (4.33) will be established
in Subsection 4.4.3. When we restrict ourselves to the so-called deterministic
case (N = 1 and p_11 = 1), both definitions reduce to the usual H2 norm.
4.4.2 The H2 Norm and the Grammians
There is a direct relation between the H2 norm and the solutions of the observability and controllability Grammians. This relation is particularly important,
for it will allow us to obtain the main result of this section.
For any F ∈ F, let O(F) = (O_1(F), ..., O_N(F)) ∈ ℍ^{n+} be defined as

O_i(F) ≜ C_i' C_i + F_i' D_i' D_i F_i.

Also for any F ∈ F, let So(F) = (So_1(F), ..., So_N(F)) ∈ ℍ^{n+} and Sc(F) =
(Sc_1(F), ..., Sc_N(F)) ∈ ℍ^{n+} be the unique solutions of the observability and
controllability Grammians respectively (see Proposition 3.20), given by
So_i(F) = (A_i + B_i F_i)' E_i(So(F)) (A_i + B_i F_i) + O_i(F)
 = L_i(So(F)) + O_i(F)   (4.40)

for i ∈ N, and

Sc_j(F) = Σ_{i=1}^N p_ij (A_i + B_i F_i) Sc_i(F) (A_i + B_i F_i)' + Σ_{i=1}^N p_ij π_i G_i G_i'
 = T_j(Sc(F)) + Σ_{i=1}^N p_ij π_i G_i G_i'   (4.41)

for j ∈ N, where L and T are defined as in (3.8) and (3.7) respectively with
Γ_i = A_i + B_i F_i. Since O(F) ≥ 0 and G_j G_j' ≥ 0 for j ∈ N, we have that
So(F) ≥ 0 and Sc(F) ≥ 0.
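Both Grammians can be computed by fixed-point iteration (the closed-loop operators are contractions in the MSS case), and Proposition 4.8 below then provides two independent ways of evaluating ‖G_F‖₂². A sketch, with scalar test data of our own choosing:

```python
import numpy as np

def h2_via_gramians(Acl, O, G, P, pi, iters=300):
    # So_i = Acl_i' E_i(So) Acl_i + O_i                       (4.40)
    # Sc_j = sum_i p_ij (Acl_i Sc_i Acl_i' + pi_i G_i G_i')   (4.41)
    N, n = len(Acl), Acl[0].shape[0]
    So = [np.zeros((n, n)) for _ in range(N)]
    Sc = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        E = [sum(P[i, j] * So[j] for j in range(N)) for i in range(N)]
        So = [Acl[i].T @ E[i] @ Acl[i] + O[i] for i in range(N)]
        Sc = [sum(P[i, j] * (Acl[i] @ Sc[i] @ Acl[i].T + pi[i] * G[i] @ G[i].T)
                  for i in range(N)) for j in range(N)]
    E = [sum(P[i, j] * So[j] for j in range(N)) for i in range(N)]
    via_so = sum(pi[i] * np.trace(G[i].T @ E[i] @ G[i]) for i in range(N))
    via_sc = sum(np.trace(Sc[j] @ O[j]) for j in range(N))
    return via_so, via_sc

one = np.array([[1.0]])
via_so, via_sc = h2_via_gramians([np.array([[0.3]]), np.array([[0.4]])],
                                 [one, one], [one, one],
                                 np.array([[0.5, 0.5], [0.5, 0.5]]), [0.5, 0.5])
```

For this scalar two-mode example both evaluations agree (the common value works out to 8/7), which is exactly the content of Proposition 4.8.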
With (4.40) and (4.41) we are ready to present the next result, adapted
from [65] (see Definition 4.7), which establishes a connection between the
H2 norm and the Grammians.

Proposition 4.8. For any F ∈ F,

‖G_F‖₂² = Σ_{i=1}^N π_i tr( G_i' E_i(So(F)) G_i ) = Σ_{j=1}^N tr( Sc_j(F) O_j(F) ).
Proof. Notice that for w(0) = e_s, θ(0) = i with probability π_i, and w(k) = 0
for k ≥ 1, we have from the observability Grammian that

E( z_s(k)' z_s(k) ) = E( x(k)' O_{θ(k)}(F) x(k) ) = E( x(k)' ( So_{θ(k)}(F) − L_{θ(k)}(So(F)) ) x(k) ).

Since from MSS E(x(k)' x(k)) goes to zero as k goes to infinity and x(1) =
G_{θ(0)} e_s, we have that

‖z_s‖₂² = Σ_{k=1}^∞ E( z_s(k)' z_s(k) ) = Σ_{i=1}^N π_i e_s' G_i' E_i(So(F)) G_i e_s

and

‖G_F‖₂² = Σ_{s=1}^r ‖z_s‖₂² = Σ_{i=1}^N π_i tr( G_i' E_i(So(F)) G_i ).   (4.42)
For the second equality, note from (4.42), (4.41) and (4.40) that

‖G_F‖₂² = Σ_{i=1}^N π_i tr( G_i G_i' Σ_{j=1}^N p_ij So_j(F) )
 = Σ_{j=1}^N tr( ( Sc_j(F) − T_j(Sc(F)) ) So_j(F) )
 = Σ_{j=1}^N tr( Sc_j(F) So_j(F) ) − Σ_{i=1}^N tr( Sc_i(F) L_i(So(F)) )
 = Σ_{j=1}^N tr( Sc_j(F) So_j(F) ) − Σ_{i=1}^N tr( Sc_i(F) ( So_i(F) − O_i(F) ) )
 = Σ_{j=1}^N tr( Sc_j(F) O_j(F) )
 = Σ_{j=1}^N tr( [ C_j' C_j 0 ; 0 D_j' D_j ] [ Sc_j(F) Sc_j(F) F_j' ; F_j Sc_j(F) F_j Sc_j(F) F_j' ] ),

completing the proof.
for i ∈ N. Moreover, since the closed loop system is MSS, we have that Q(k) → Q
as k → ∞ (see Chapter 3, Proposition 3.36), where Q = (Q_1, ..., Q_N) is the unique solution
of the controllability Grammian (4.41), that is, Q = Sc(F). Notice that

E( ‖z(k)‖² ) = E( tr( z(k) z(k)' ) )
 = Σ_{i=1}^N tr( [ C_i' C_i 0 ; 0 D_i' D_i ] [ Q_i(k) Q_i(k) F_i' ; F_i Q_i(k) F_i Q_i(k) F_i' ] )
 → Σ_{i=1}^N tr( [ C_i' C_i 0 ; 0 D_i' D_i ] [ Q_i Q_i F_i' ; F_i Q_i F_i Q_i F_i' ] ) = ‖G_F‖₂²   (4.43)

as k → ∞, and thus from Proposition 4.8 we have the equivalence between Definition 4.7
and the one presented in (4.33).
4.4.4 Connection Between the CARE and the H2 Control Problem
In this subsection we will establish the equivalence of:
(i) finding the stabilizing solution for the quadratic CARE (4.35);
(ii) obtaining a stabilizing controller F that minimizes the H2 norm of system
G_F, as given by Definition 4.7 or Proposition 4.8, and
(iii) obtaining the optimal solution of a certain convex programming problem,
which will be defined in the following.
We assume in this subsection that G_i G_i' > 0 for each i ∈ N, so that from
(4.41) we have that Sc(F) > 0, and that all matrices are real. We define the
two following sets:

𝒲 ≜ { W = (W_1, ..., W_N) ∈ ℍ^{n+m}; for j ∈ N,
 W_j = [ W_{j1} W_{j2}' ; W_{j2} W_{j3} ] ≥ 0, W_{j1} > 0,
 Σ_{i=1}^N p_ij ( A_i W_{i1} A_i' + B_i W_{i2} A_i' + A_i W_{i2}' B_i' + B_i W_{i3} B_i' + π_i G_i G_i' ) − W_{j1} ≤ 0 }

and

𝒲* ≜ { W = (W_1, ..., W_N) ∈ 𝒲; for j ∈ N,
 W_{j3} = W_{j2} W_{j1}^{-1} W_{j2}',
 Σ_{i=1}^N p_ij ( A_i W_{i1} A_i' + B_i W_{i2} A_i' + A_i W_{i2}' B_i' + B_i W_{i3} B_i' + π_i G_i G_i' ) − W_{j1} = 0 }.

Note that, unlike 𝒲*, 𝒲 is a convex set. The cost function associated with the
optimization problem mentioned in (iii) above is defined as

Φ(W) ≜ Σ_{j=1}^N tr( [ C_j' C_j 0 ; 0 D_j' D_j ] [ W_{j1} W_{j2}' ; W_{j2} W_{j3} ] ).
We introduce now two mappings, which will be used later on to connect the
solutions of (i), (ii) and (iii). For any F = (F_1, ..., F_N) ∈ F, let Sc(F) =
(Sc_1(F), ..., Sc_N(F)) > 0 be as in (4.41). Define

K(F) ≜ ( [ Sc_1(F) Sc_1(F) F_1' ; F_1 Sc_1(F) F_1 Sc_1(F) F_1' ], ..., [ Sc_N(F) Sc_N(F) F_N' ; F_N Sc_N(F) F_N Sc_N(F) F_N' ] ).

From the Schur complement (see Lemma 2.23) and (4.41) it is immediate that
K(F) ∈ 𝒲*. Define also, for W ∈ 𝒲,

Y(W) ≜ ( W_{12} W_{11}^{-1}, ..., W_{N2} W_{N1}^{-1} ).

For W ∈ 𝒲 we have W_{i3} ≥ W_{i2} W_{i1}^{-1} W_{i2}', so that

Σ_{i=1}^N p_ij ( (A_i + B_i Y_i(W)) W_{i1} (A_i + B_i Y_i(W))' + π_i G_i G_i' ) − W_{j1}
 ≤ Σ_{i=1}^N p_ij ( A_i W_{i1} A_i' + B_i W_{i2} A_i' + A_i W_{i2}' B_i' + B_i W_{i3} B_i' + π_i G_i G_i' ) − W_{j1} ≤ 0   (4.44)

and therefore, from Theorem 3.9, Y(W) ∈ F. Then we have defined the mappings K : F → 𝒲* and Y : 𝒲 → F. The following proposition relates them.
Proposition 4.9. The following assertions hold:
1. Y∘K = I on F,
2. K∘Y = I on 𝒲*,
3. K∘Y(W) ≤ W for any W ∈ 𝒲,
where we recall that I is the identity operator.
Proof. Assertions 1 and 2 follow directly from the definitions of K and Y and
the uniqueness of the solution of (4.41). For assertion 3, fix W ∈ 𝒲 and note
from (4.44) and (4.41) that

Σ_{i=1}^N p_ij (A_i + B_i Y_i(W)) ( W_{i1} − Sc_i(Y(W)) ) (A_i + B_i Y_i(W))' − ( W_{j1} − Sc_j(Y(W)) ) ≤ 0

for j ∈ N. Thus, from Theorem 3.9 and (4.44), Sc_j(Y(W)) ≤ W_{j1} for j ∈ N.
If we prove that

W_i − (K∘Y)_i(W) = [ W_{i1} − Sc_i(Y(W))   W_{i2}' − Sc_i(Y(W)) W_{i1}^{-1} W_{i2}' ;
 W_{i2} − W_{i2} W_{i1}^{-1} Sc_i(Y(W))   W_{i3} − W_{i2} W_{i1}^{-1} Sc_i(Y(W)) W_{i1}^{-1} W_{i2}' ] ≥ 0

for i ∈ N, then we will have the desired result. As Sc_i(Y(W)) ≤ W_{i1} we have,
from the Schur complement and noticing that

W_{i2}' − Sc_i(Y(W)) W_{i1}^{-1} W_{i2}' = ( W_{i1} − Sc_i(Y(W)) ) W_{i1}^{-1} W_{i2}',

that it suffices to verify

W_{i3} − W_{i2} W_{i1}^{-1} Sc_i(Y(W)) W_{i1}^{-1} W_{i2}' − W_{i2} W_{i1}^{-1} ( W_{i1} − Sc_i(Y(W)) ) W_{i1}^{-1} W_{i2}'
 = W_{i3} − W_{i2} W_{i1}^{-1} W_{i2}' ≥ 0,

which holds since W ∈ 𝒲.
The next theorem presents the main result of this section, which establishes
the equivalence of problems (i), (ii) and (iii) and the connection between their
solutions.

Theorem 4.10. The following assertions are equivalent:
1. There exists the mean square stabilizing solution X = (X_1, ..., X_N) for
the CARE given by (4.35).
2. There exists F = (F_1, ..., F_N) ∈ F such that
‖G_F‖₂ = min{ ‖G_K‖₂ ; K ∈ F }.
3. There exists W = (W_1, ..., W_N) ∈ 𝒲 such that
Φ(W) = min{ Φ(V) ; V ∈ 𝒲 }.
Moreover:
4. If X satisfies 1), then F(X) satisfies 2) and K(F(X)) satisfies 3), where
F(X) is as in (4.24).
5. If F satisfies 2), then K(F) satisfies 3) and So(F) satisfies 1).
6. If W satisfies 3), then So(Y(W)) satisfies 1), where Y(W) satisfies 2).
Proof. The second part of the proof (assertions 4), 5) and 6)) follows immediately from the first part (equivalence of 1), 2) and 3)).
2) ⟺ 3): From Propositions 4.8 and 4.9 it is immediate that

min{ ‖G_K‖₂² ; K ∈ F } = min{ Φ(V) ; V ∈ 𝒲* },

and since 𝒲* ⊂ 𝒲,

min{ Φ(V) ; V ∈ 𝒲* } ≥ min{ Φ(V) ; V ∈ 𝒲 }.

On the other hand, from Proposition 4.9, K∘Y(V) ≤ V for V ∈ 𝒲, with K∘Y(V) ∈ 𝒲*.
Therefore, since Φ(K∘Y(V)) ≤ Φ(V), we have

min{ Φ(V) ; V ∈ 𝒲* } ≤ min{ Φ(V) ; V ∈ 𝒲 },

showing that

min{ ‖G_K‖₂² ; K ∈ F } = min{ Φ(V) ; V ∈ 𝒲 },

which proves the equivalence between 2) and 3).
1) ⟹ 2): Suppose that X is the mean square stabilizing solution of the
CARE (4.35). It is easy to show that X satisfies

X_i − (A_i + B_i F_i(X))' E_i(X) (A_i + B_i F_i(X)) = O_i(F(X)),

and from the uniqueness of the solution of this equation (see Proposition 3.20),
So(F(X)) = X. For any K ∈ F and So(K) as in (4.40) we have, according to
Lemma A.7, that

( So_i(K) − X_i ) − (A_i + B_i K_i)' E_i( So(K) − X ) (A_i + B_i K_i)
 = ( K_i − F_i(X) )' R_i(X) ( K_i − F_i(X) ) ≥ 0,   (4.45)

so that So_i(K) ≥ X_i for each i ∈ N and, from Proposition 4.8,

‖G_K‖₂² = Σ_{i=1}^N π_i tr( G_i' E_i(So(K)) G_i ) ≥ Σ_{i=1}^N π_i tr( G_i' E_i(X) G_i ) = ‖G_{F(X)}‖₂²,
so that F(X) attains the minimum in 2). The converse implication, that the
existence of a minimizing F in 2) yields the stabilizing solution in 1), is obtained
by considering the Grammians associated with a minimizing sequence of
controllers, and relies on the results of Appendix A.
J(θ_0, x_0, w, u) ≜ Σ_{k=0}^∞ E( ‖C_{θ(k)} x(k)‖² + ‖D_{θ(k)} u(k)‖² ),   (4.46)

and the problem is to obtain

inf_{u∈U} J(θ_0, x_0, w, u).   (4.47)
Remark 4.11. The assumption that ‖w‖₂ < ∞ can be relaxed if we consider
the discounted case, which is a particular case of (4.25), (4.46) and (4.47).
Indeed, in this case, for α ∈ (0, 1),

J_α(θ_0, x_0, w, u) = Σ_{k=0}^∞ E( α^k ( ‖C_{θ(k)} x(k)‖² + ‖D_{θ(k)} u(k)‖² ) ),

and defining x_α(k) = α^{k/2} x(k), u_α(k) = α^{k/2} u(k), w_α(k) = α^{k/2} w(k), we get
that

J_α(θ_0, x_0, w, u) = Σ_{k=0}^∞ E( ‖C_{θ(k)} x_α(k)‖² + ‖D_{θ(k)} u_α(k)‖² ).
Set

Γ_i = A_i + B_i F_i(X)

for each i ∈ N, let Φ(k, l) ≜ Γ_{θ(k−1)} ⋯ Γ_{θ(l)} for k > l, with Φ(l, l) = I, and define
r = (r(0), r(1), ...) as

r(k) ≜ Σ_{ℓ=0}^∞ E( Φ(k+ℓ, k)' X_{θ(k+ℓ+1)} G_{θ(k+ℓ)} w(k+ℓ) | F_k )   (4.48)
 = Σ_{ℓ=0}^∞ E( Φ(k+ℓ, k)' X_{θ(k+ℓ+1)} v(k+ℓ+1) | F_k ),   (4.49)

where v(k+1) ≜ G_{θ(k)} w(k).
Define

r̂(k) ≜ −( D_{θ(k)}' D_{θ(k)} + B_{θ(k)}' E_{θ(k)}(X) B_{θ(k)} )^{-1} B_{θ(k)}' r(k)

and set û ≜ (û(0), û(1), ...) and x̂ ≜ (x̂(0), x̂(1), ...) as:

û(k) ≜ F_{θ(k)}(X) x̂(k) + r̂(k),   (4.50)
x̂(k+1) ≜ A_{θ(k)} x̂(k) + B_{θ(k)} û(k) + G_{θ(k)} w(k)
 = ( A_{θ(k)} + B_{θ(k)} F_{θ(k)}(X) ) x̂(k) + B_{θ(k)} r̂(k) + G_{θ(k)} w(k),
x̂(0) ≜ x_0.

Clearly, from Lemma 4.12, r̂ = (r̂(0), ...) ∈ C^m and, from the fact that r_σ(T) < 1,
it follows from Theorem 3.34 that x̂ ∈ C^n and û ∈ C^m. Define now
λ ≜ (λ(0), λ(1), ...) as

λ(k+1) ≜ ( I + E_{θ(k)}(X) B_{θ(k)} ( D_{θ(k)}' D_{θ(k)} )^{-1} B_{θ(k)}' )^{-1} ( E_{θ(k)}(X) A_{θ(k)} x̂(k) + r(k) ).

Since (see [6], p. 349)

( D_{θ(k)}' D_{θ(k)} )^{-1} B_{θ(k)}' ( I + E_{θ(k)}(X) B_{θ(k)} ( D_{θ(k)}' D_{θ(k)} )^{-1} B_{θ(k)}' )^{-1}
 = ( D_{θ(k)}' D_{θ(k)} + B_{θ(k)}' E_{θ(k)}(X) B_{θ(k)} )^{-1} B_{θ(k)}',

we have that

û(k) = −( D_{θ(k)}' D_{θ(k)} )^{-1} B_{θ(k)}' λ(k+1).   (4.51)
A computation using (4.50) and the CARE (4.35) shows that

A_{θ(k)}' λ(k+1) = ( X_{θ(k)} − C_{θ(k)}' C_{θ(k)} ) x̂(k) + ( A_{θ(k)} + B_{θ(k)} F_{θ(k)}(X) )' r(k),   (4.52)

while, from (4.48),

r(k) = E_{θ(k)}(X) G_{θ(k)} w(k) + E( ( A_{θ(k+1)} + B_{θ(k+1)} F_{θ(k+1)}(X) )' r(k+1) | F_k ).   (4.53)

From (4.52), (4.53) and the fact that x̂(k) is F_{k−1}-measurable, it follows that

E( A_{θ(k)}' λ(k+1) | F_{k−1} )
 = E( ( X_{θ(k)} − C_{θ(k)}' C_{θ(k)} ) x̂(k) | F_{k−1} ) + E( ( A_{θ(k)} + B_{θ(k)} F_{θ(k)}(X) )' r(k) | F_{k−1} )
 = E_{θ(k−1)}(X) x̂(k) − E_{θ(k−1)}(C'C) x̂(k) + r(k−1) − E_{θ(k−1)}(X) G_{θ(k−1)} w(k−1),

which, after some algebraic manipulations using (4.51) and the definition of λ(k), equals

λ(k) − E_{θ(k−1)}(C'C) x̂(k),

completing the proof.
Using Lemma 4.13 and summation by parts, for any u ∈ U we obtain

Σ_{k=0}^∞ E( x̂(k)' C_{θ(k)}' C_{θ(k)} x̂(k) − w(k)' G_{θ(k)}' λ(k+1) − u(k)' B_{θ(k)}' λ(k+1) ) = E( x_0' λ(0) ),

since |E( x(k)' λ(k) )| ≤ ‖x(k)‖₂ ‖λ(k)‖₂ → 0 as k → ∞. Thus, recalling (4.51),
collecting the terms 2 Re( w(k)' G_{θ(k)}' λ(k+1) ) and completing squares, we arrive at

J(θ_0, x_0, w, u) − J(θ_0, x_0, w, û)
 = Σ_{k=0}^∞ E( ‖C_{θ(k)} ( x(k) − x̂(k) )‖² + ‖D_{θ(k)} ( u(k) − û(k) )‖² ) ≥ 0.

From the expression above it is clear that the minimum of (4.47) is achieved
at u = û.
4.5.4 An Application to a Failure-Prone Manufacturing System
Let us consider a manufacturing system producing a single commodity. The
demand will be represented by a sequence of independent and identically distributed positive second order random variables {w(k); k = 0, 1, ...}, with
mean E(w(k)) = ρ > 0. It is desired to make the production meet the demand. The manufacturing system is subject to occasional breakdowns, and
therefore can be in two possible states: working and out of order. The transition between these two states satisfies a two-state discrete-time Markov chain.
State 1 means that the manufacturing system is out of order, whereas state
2 means that it is working. Let x(k) denote the inventory of the commodity
at time k with initial value x_0, u(k) the total production at time k, and w(k)
the demand at time k. We have that the inventory at time k + 1 will satisfy

x(k+1) = x(k) + b_{θ(k)} u(k) − w(k),  x(0) = x_0,

where b_1 = 0 and b_2 = 1. Here the transition between the states will be defined
by p_11 ∈ (0, 1) and p_22 ∈ (0, 1). Our goal is to control the production u(k) at
time k so as to minimize the expected discounted cost
J(θ_0, x_0, w, u) = Σ_{k=0}^∞ E( α^k ( m x(k)² + u(k)² ) ),

where m > 0 and α ∈ (0, 1). Following Remark 4.11, we have that the above
problem can be rewritten as

x_α(k+1) = α^{1/2} x_α(k) + α^{1/2} b_{θ(k)} u_α(k) − α^{1/2} w_α(k),   (4.54)

where x_α(k) = α^{k/2} x(k), u_α(k) = α^{k/2} u(k) and w_α(k) = α^{k/2} w(k). Clearly
w_α = (w_α(0), w_α(1), ...) satisfies ‖w_α‖₂ < ∞. In this case the cost to
be minimized is given by

J_α(θ_0, x_0, w, u) = Σ_{k=0}^∞ E( m x_α(k)² + u_α(k)² ).   (4.55)
It is easy to verify that the system given by (4.54) and (4.55) satisfies the
requirements of mean square stabilizability and detectability, so that Theorem
4.14 can be applied. Writing X+ = (X_1+, X_2+) for the solution of the set of two
coupled Riccati equations, we have that it is the unique positive solution of

X_1+ = ( m + α p_12 X_2+ ) / ( 1 − α p_11 ),   (4.56a)
X_2+ = m + α ( p_21 X_1+ + p_22 X_2+ ) − α² ( p_21 X_1+ + p_22 X_2+ )² / ( 1 + α ( p_21 X_1+ + p_22 X_2+ ) ),   (4.56b)

and

F_1(X+) = 0,  −1 < F_2(X+) = −α E_2(X+) / ( 1 + α E_2(X+) ) < 0,   (4.57)

where E_2(X+) = p_21 X_1+ + p_22 X_2+.
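Equations (4.56)-(4.57) are easy to solve by successive substitution. A sketch, with illustrative cost parameters (the values m = 1 and α = 0.9 are our assumptions) and the transition probabilities of Example 4.16:

```python
import numpy as np

# Assumed parameters (illustrative) and the transition matrix of Example 4.16.
m, alpha = 1.0, 0.9
p11, p12 = 0.1, 0.9
p21, p22 = 0.05, 0.95

# Gauss-Seidel style substitution for the coupled equations (4.56).
X1, X2 = 0.0, 0.0
for _ in range(2000):
    X1 = (m + alpha * p12 * X2) / (1.0 - alpha * p11)
    lam = alpha * (p21 * X1 + p22 * X2)        # alpha * E_2(X)
    X2 = m + lam - lam**2 / (1.0 + lam)        # = m + lam / (1 + lam)

E2 = p21 * X1 + p22 * X2
F2 = -alpha * E2 / (1.0 + alpha * E2)          # gain (4.57), lies in (-1, 0)
```

The iteration is a contraction here (the map's slope at the fixed point is well below one), so a few hundred sweeps already give machine-precision solutions.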
The correction term in (4.50) can be computed in closed form from (4.48). Writing

r̄_i ≜ Σ_{j=1}^∞ α^{j/2} E( Π_{ℓ=1}^{j−1} ( 1 + b_{θ(ℓ)} F_{θ(ℓ)}(X+) ) α^{1/2} X_{θ(j)}+ | θ(0) = i ),  i = 1, 2,   (4.58)

the pair (r̄_1, r̄_2) satisfies the linear system

( r̄_1 , r̄_2 )' = ρ T^{-1} V,   (4.59)

where

T = [ 1 − α p_11   −α p_12 ( 1 + F_2(X+) ) ; −α p_21   1 − α p_22 ( 1 + F_2(X+) ) ],
V = [ p_11 E_1(X+) + p_12 ( 1 + F_2(X+) ) E_2(X+) ; p_21 E_1(X+) + p_22 ( 1 + F_2(X+) ) E_2(X+) ].

From (4.56), (4.57), and (4.59) we obtain X_1+, X_2+, F_2(X+), r̄_1 and r̄_2. From
(4.58) we get

r̂(k) = α^{1/2} α^{(k+1)/2} ( E_2(X+) w(k) + r̄_2 ) / ( 1 + α E_2(X+) ) 1_{{θ(k)=2}}.
Undoing the scaling,

û(k) = û_α(k) α^{−k/2} = F_2(X+) x̂(k) + α^{−k/2} r̂(k),

that is,

û(k) = F_2(X+) ( x̂(k) − w(k) ) + c 1_{{θ(k)=2}},   (4.60)

where

c ≜ r̄_2 / ( 1 + α E_2(X+) )

and

x̂(k+1) = x̂(k) + b_{θ(k)} û(k) − w(k).   (4.61)
Notice that û(k) will always be positive if x_0 ≤ w(0) − c / F_2(X+) (recall that
F_2(X+) < 0). Indeed, we have the following result.

Proposition 4.15. If x_0 ≤ w(0) − c / F_2(X+), then

x̂(k) ≤ w(k) − c / F_2(X+)

for all k = 0, 1, ....
Proof. Applying induction on k, we have that the result holds for k = 0 by
hypothesis. Suppose now that it holds for k. Then, by the induction hypothesis,

0 ≤ b_{θ(k)} û(k) ≤ F_2(X+) ( x̂(k) − w(k) ) + c,

so that

x̂(k+1) = x̂(k) + b_{θ(k)} û(k) − w(k)
 ≤ x̂(k) + F_2(X+) ( x̂(k) − w(k) ) + c − w(k)
 = ( 1 + F_2(X+) ) ( x̂(k) − w(k) ) + c
 ≤ ( 1 + F_2(X+) ) ( −c / F_2(X+) ) + c = −c / F_2(X+)
 ≤ w(k+1) − c / F_2(X+),

since w(k+1) ≥ 0. Thus (4.60) and (4.61) give the optimal control and the optimal inventory even
under the restriction that u(k) ≥ 0 for all k = 0, 1, ..., provided that

x_0 ≤ w(0) − c / F_2(X+).
Example 4.16. Consider a manufacturing system producing a single commodity as described above, with inventory x(k) satisfying x(k+1) = x(k) +
b_{θ(k)} u(k) − w(k). The initial amount of the commodity is assumed to be x(0) =
x_0 = 0. The demand is represented by the sequence of i.i.d. positive second
order random variables {w(k); k = 0, 1, ...}, with mean E(w(k)) = ρ = 1.
{θ(k); k = 0, 1, ...} represents a Markov chain taking values in {1, 2}, where
state 1 means that the manufacturing system is out of order, with b_1 = 0,
and state 2 means that the manufacturing system is working, with b_2 = 1.
The transition probability matrix P of the Markov chain {θ(k); k = 0, 1, ...}
is given by

P = [ 0.1 0.9 ; 0.05 0.95 ],

and it is desired to find the optimal production {û(k) ≥ 0; k = 0, 1, ...} which
minimizes a discounted expected cost of the form Σ_{k=0}^∞ E( α^k ( m x(k)² + u(k)² ) ).
more than a decade, grew to a fairly substantial number of papers in the last
two decades. Without any intention of being exhaustive, we mention, for instance, [2], [28], [52], [53], [56], [62], [64], [65], [78], [115], [137], [144], [147],
[171], [176], [206] and [219] as a representative sample (see also [67] and [117]
for the infinite countable case and [22] for the robust continuous-time case).
Some issues regarding the associated coupled Riccati equation, including the
continuous-time case, can be found in [1], [59], [60], [84], [90], [91], [122] and
[192]. Quadratic optimal control problems of MJLS subject to constraints on
the state and control variables are considered in [75]. An iterative Monte Carlo
technique for deriving the optimal control of the infinite horizon linear regulator problem of MJLS, for the case in which the transition probability matrix of
the Markov chain is not known, is analyzed in [76]. Several results on control
problems for jump linear systems are presented in the recent book [77]. Other
topics that deserve attention in this scenario are, for instance, those related
to a risk-sensitive (see [217]) approach for the optimal control of MJLS.
5
Filtering
Filtering problems are of interest not only because of their wide range of applications but also for being the main step in studying control problems with partial observations of the state variable. There is nowadays a huge body of theory on this subject, with the celebrated Kalman filter as one of its great achievements. This chapter is devoted to the study of filtering problems for MJLS. Two situations are considered: the first one deals with the class of Markov jump filters, which will be essential in devising a separation principle in Chapter 6. In this case the jump variable is assumed accessible. In the second situation the jump variable is not accessible, and we derive the minimum linear mean square error filter and analyze the associated stationary filter. The solutions will be obtained in terms of Riccati difference equations and algebraic Riccati equations.
for the stationary Riccati filter equation and, moreover, this solution is the limit of the error covariance matrix of the LMMSE. This result is suitable for designing a time-invariant stable suboptimal LMMSE filter for MJLS. As shown in Section 5.5, this stationary filter can also be derived from an LMI optimization problem. The advantage of this formulation is that through the LMI approach we can consider uncertainties in the parameters of the system. In Subsections 8.4.1 and 8.4.2 we present some numerical examples comparing the LMMSE filter with the IMM filter (see [32]), which is another suboptimal filter used to alleviate the numerical difficulties mentioned above.
We assume that

G_i(k)H_i(k)' = 0   (5.2)

and

H_i(k)H_i(k)' > 0.   (5.3)

Notice that condition (5.3) makes sure that all components of the output, and all their linear combinations, are noisy. As we are going to see in Remark 5.1, there is no loss of generality in assuming (5.2).
We consider dynamic Markov jump filters G_K given by

G_K : { x̂(k+1) = Â_{θ(k)}(k)x̂(k) + B̂_{θ(k)}(k)y(k),
        u(k) = Ĉ_{θ(k)}(k)x̂(k),   (5.4)
        x̂(0) = x̂_0. }
The reason for choosing this kind of filter is that it depends just on θ(k) (rather than on the entire past history of modes θ(0), . . . , θ(k)), so that the closed loop system is again an MJLS. In particular, as will be seen in Remark 5.6, time-invariant parameters can be considered as an approximation for the optimal solution.
Therefore, in the optimal filtering problem, we want to find Â_i(k), B̂_i(k), Ĉ_i(k) in (5.4), with x̂_0 deterministic, so as to minimize the cost

Σ_{k=1}^{T} E(‖v(k)‖²)

where

v(k) = x(k) − x̂(k).   (5.5)
As in Remark 5.1, set

Ã_i(k) ≜ A_i(k) − G_i(k)H_i(k)'(H_i(k)H_i(k)')^{-1}L_i(k),   (5.7)

G̃_i(k) ≜ G_i(k)(I − H_i(k)'(H_i(k)H_i(k)')^{-1}H_i(k)),  B̃_i(k) ≜ G_i(k)H_i(k)'(H_i(k)H_i(k)')^{-1},   (5.8)

so that the system can be rewritten as

G : { x(k+1) = Ã_{θ(k)}(k)x(k) + G̃_{θ(k)}(k)w(k) + B̃_{θ(k)}(k)y(k) + B_{θ(k)}(k)u(k),
      y(k) = L_{θ(k)}(k)x(k) + H_{θ(k)}(k)w(k). }   (5.9)
Therefore the optimal filtering problem in this case is to find Â_i(k), B̂_i(k), Ĉ_i(k) in

G_K : { x̂(k+1) = Â_{θ(k)}(k)x̂(k) + (B̂_{θ(k)}(k) + B̃_{θ(k)}(k))y(k),
        u(k) = Ĉ_{θ(k)}(k)x̂(k),   (5.10)
        x̂(0) = x̂_0, }

with x̂_0 deterministic, such that it minimizes Σ_{k=1}^{T} E(‖v(k)‖²) with v(k) as in (5.5), replacing A_i(k) and G_i(k) by Ã_i(k) and G̃_i(k) as in (5.7) and (5.8) respectively. When we take the difference between x(k) in (5.1) and x̂(k) in (5.10), the term B̃_{θ(k)}(k)y(k) vanishes, and we return to the original problem. Notice that in this case condition (5.2) would be satisfied.
Remark 5.2. It is well known that, for the case in which (y(k), θ(k)) are available, the best linear estimator of x(k) is derived from the Kalman filter for time-varying systems (see [52], [80]), since all the values of the mode of operation are known at time k. Indeed, the recursive equation for the error covariance matrix Z(k) and the gain of the filter K(k) would be as follows:

K(k) = A_{θ(k)}(k)Z(k)L_{θ(k)}(k)'(L_{θ(k)}(k)Z(k)L_{θ(k)}(k)' + H_{θ(k)}(k)H_{θ(k)}(k)')^{-1},

Z(k+1) = A_{θ(k)}(k)Z(k)A_{θ(k)}(k)' + G_{θ(k)}(k)G_{θ(k)}(k)' − K(k)(L_{θ(k)}(k)Z(k)L_{θ(k)}(k)' + H_{θ(k)}(k)H_{θ(k)}(k)')K(k)'.   (5.11)
Y_j(k+1) = Σ_{i∈J(k)} p_ij(k)[A_i(k)Y_i(k)A_i(k)' − A_i(k)Y_i(k)L_i(k)'(H_i(k)H_i(k)'π_i(k) + L_i(k)Y_i(k)L_i(k)')^{-1}L_i(k)Y_i(k)A_i(k)' + π_i(k)G_i(k)G_i(k)'],   (5.13)

Y_i(0) = π_i(0)(Q_0 − μ_0μ_0'),

and

M_i(k) = { −A_i(k)Y_i(k)L_i(k)'(H_i(k)H_i(k)'π_i(k) + L_i(k)Y_i(k)L_i(k)')^{-1}, for i ∈ J(k);
           0, for i ∉ J(k). }   (5.14)
The error associated with the estimator given in (5.12) is defined by

x̃_e(k) ≜ x(k) − x̂_e(k),   (5.15)

with initial condition

x̃_e(0) = x_0 − E(x_0) = x(0) − μ_0.   (5.16)
Using the identity (A.2), it follows that Y_j(k+1) in (5.13) can also be written as

Y_j(k+1) = Σ_{i∈J(k)} p_ij(k)[(A_i(k) + M_i(k)L_i(k))Y_i(k)(A_i(k) + M_i(k)L_i(k))' + π_i(k)G_i(k)G_i(k)' + π_i(k)M_i(k)H_i(k)H_i(k)'M_i(k)'].   (5.17)

In what follows we show that

Y_i(k) = E(x̃_e(k)x̃_e(k)'1_{θ(k)=i}),  i = 1, . . . , N.   (5.18)
From the recursions (5.19) and (5.20) for x̃_e(k) and x̂_e(k) we have that, for j ∈ J(k+1),

E(x̃_e(k+1)x̂_e(k+1)'1_{θ(k+1)=j})
= Σ_{i∈J(k)} p_ij(k)[(A_i(k) + M_i(k)L_i(k))E(x̃_e(k)y(k)'1_{θ(k)=i}) + (G_i(k) + M_i(k)H_i(k))E(w(k)y(k)'1_{θ(k)=i})]B̂_i(k)'.

Notice that

y(k) = L_{θ(k)}(k)x(k) + H_{θ(k)}(k)w(k) = L_{θ(k)}(k)(x̃_e(k) + x̂_e(k)) + H_{θ(k)}(k)w(k),

and from the induction hypothesis on k we have that, for i ∈ J(k),

E(x̃_e(k)y(k)'1_{θ(k)=i}) = E(x̃_e(k)x̃_e(k)'1_{θ(k)=i})L_i(k)' + E(x̃_e(k)x̂_e(k)'1_{θ(k)=i})L_i(k)' + E(x̃_e(k)w(k)'1_{θ(k)=i})H_i(k)' = Y_i(k)L_i(k)'.

Similarly,

E(w(k)y(k)'1_{θ(k)=i}) = E(w(k)x̃_e(k)'1_{θ(k)=i})L_i(k)' + E(w(k)x̂_e(k)'1_{θ(k)=i})L_i(k)' + E(w(k)w(k)'1_{θ(k)=i})H_i(k)' = π_i(k)H_i(k)'.

It follows, using G_i(k)H_i(k)' = 0 and the definition of M_i(k) in (5.14), that

E(x̃_e(k+1)x̂_e(k+1)'1_{θ(k+1)=j})
= Σ_{i∈J(k)} p_ij(k)[A_i(k)Y_i(k)L_i(k)' + M_i(k)(L_i(k)Y_i(k)L_i(k)' + π_i(k)H_i(k)H_i(k)')]B̂_i(k)'
= Σ_{i∈J(k)} p_ij(k)[A_i(k)Y_i(k)L_i(k)' − A_i(k)Y_i(k)L_i(k)']B̂_i(k)'
= 0.

Similarly,

E(x̃_e(k+1)x̂(k+1)'1_{θ(k+1)=j}) = 0,

completing the induction argument.
Theorem 5.4. Let v(k) and (Y_1(k), . . . , Y_N(k)) be as in (5.5) and (5.13) respectively. Then for every k = 0, 1, . . .,

E(‖v(k)‖²) ≥ Σ_{i=1}^N tr(Y_i(k)).

Proof. Recalling from Theorem 5.3 that

E(x̃_e(k)x̂_e(k)'1_{θ(k)=i}) = 0,  E(x̃_e(k)x̂(k)'1_{θ(k)=i}) = 0,

and from (5.20) that

E(x̃_e(k)x̃_e(k)'1_{θ(k)=i}) = Y_i(k),
we have

E(‖v(k)‖²) = E(‖x(k) − x̂_e(k) + x̂_e(k) − x̂(k)‖²)
= Σ_{i=1}^N E(‖(x̃_e(k) + (x̂_e(k) − x̂(k)))1_{θ(k)=i}‖²)
= Σ_{i=1}^N tr(E((x̃_e(k) + (x̂_e(k) − x̂(k)))(x̃_e(k) + (x̂_e(k) − x̂(k)))'1_{θ(k)=i}))
= Σ_{i=1}^N tr(E(x̃_e(k)x̃_e(k)'1_{θ(k)=i})) + Σ_{i=1}^N tr(E((x̂_e(k) − x̂(k))(x̂_e(k) − x̂(k))'1_{θ(k)=i}))
≥ Σ_{i=1}^N tr(Y_i(k)),

since the cross terms satisfy E(x̃_e(k)(x̂_e(k) − x̂(k))'1_{θ(k)=i}) = E(x̃_e(k)x̂_e(k)'1_{θ(k)=i}) − E(x̃_e(k)x̂(k)'1_{θ(k)=i}) = 0 by Theorem 5.3.
The next theorem is straightforward from Theorem 5.4, and shows that the solution for the optimal filtering problem can be obtained from the filtering Riccati recursive equations Y(k) = (Y_1(k), . . . , Y_N(k)) as in (5.13) and gains M(k) = (M_1(k), . . . , M_N(k)) as in (5.14):

Theorem 5.5. An optimal solution for the filtering problem posed above is:

Â_i^{op}(k) = A_i(k) + M_i(k)L_i(k) + B_i(k)Ĉ_i(k);  B̂_i^{op}(k) = −M_i(k);   (5.21)

with Ĉ_i(k) arbitrary, and the optimal cost is

Σ_{k=1}^{T} Σ_{i=1}^N tr(Y_i(k)).
Remark 5.6. For the case in which A_i, G_i, L_i, H_i and p_ij in (5.1) are time-invariant and {θ(k)} satisfies the ergodic Assumption 3.31, so that π_i(k) converges to π_i > 0 as k goes to infinity, the filtering coupled Riccati difference equations (5.13) lead to the following filtering coupled algebraic Riccati equations and respective gains:

Y_j = Σ_{i=1}^N p_ij[A_iY_iA_i' − A_iY_iL_i'(H_iH_i'π_i + L_iY_iL_i')^{-1}L_iY_iA_i' + π_iG_iG_i'],   (5.22)

M_i = −A_iY_iL_i'(H_iH_i'π_i + L_iY_iL_i')^{-1}.   (5.23)

Convergence to the stationary state is often rapid, so that the optimal filter (5.21) can be approximated by the time-invariant Markov jump filter

x̂(k+1) = (A_{θ(k)} + M_{θ(k)}L_{θ(k)})x̂(k) − M_{θ(k)}y(k),

which only requires us to keep in memory the gains M = (M_1, . . . , M_N).
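The stationary solution of (5.22) and the gains (5.23) can be approximated by iterating the coupled Riccati difference equations until convergence; a minimal sketch with hypothetical two-mode data:

```python
import numpy as np

# Hypothetical two-mode data (illustrative only, not from the text).
N = 2
A = [np.array([[0.6, 0.2], [0.0, 0.5]]), np.array([[0.4, 0.1], [0.3, 0.6]])]
G = [0.5 * np.eye(2), 0.3 * np.eye(2)]
L = [np.array([[1.0, 0.0]]), np.array([[1.0, 1.0]])]
H = [np.array([[0.3]]), np.array([[0.5]])]
P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2/3, 1/3])   # stationary distribution of P (pi = pi P)

# Iterate the coupled filtering Riccati difference equations to a fixed point.
Y = [np.eye(2) for _ in range(N)]
for _ in range(500):
    Y = [sum(P[i, j] * (A[i] @ Y[i] @ A[i].T
                        - A[i] @ Y[i] @ L[i].T
                          @ np.linalg.inv(H[i] @ H[i].T * pi[i]
                                          + L[i] @ Y[i] @ L[i].T)
                          @ L[i] @ Y[i] @ A[i].T
                        + pi[i] * G[i] @ G[i].T) for i in range(N))
         for j in range(N)]

# Stationary gains as in (5.23).
M = [-A[i] @ Y[i] @ L[i].T
     @ np.linalg.inv(H[i] @ H[i].T * pi[i] + L[i] @ Y[i] @ L[i].T)
     for i in range(N)]
print(all(np.allclose(Yi, Yi.T) for Yi in Y))  # -> True (each Y_i symmetric)
```

Only the N gain matrices M_i need to be stored to run the time-invariant filter.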
G_v : { x(k+1) = A_{θ(k)}x(k) + B_{θ(k)}u(k) + G_{θ(k)}w(k),
        y(k) = L_{θ(k)}x(k) + H_{θ(k)}w(k),   (5.24)
        v(k) = F̃_{θ(k)}x(k) − R_{θ(k)}^{1/2}u(k), }

where

F̃_i = R_i^{1/2}F_i.

The output and operation modes (y(k) and θ(k) respectively) are known at each time k. The noise {w(k); k ∈ T} and the Markov chain {θ(k); k ∈ T} are independent sequences, and the initial condition (x_0, θ_0) is such that x_0 and θ_0 are independent random variables with E(x_0) = μ_0 and E(x_0x_0') = Q_0. The Markov chain {θ(k)} has time-invariant transition probabilities p_ij, and satisfies the ergodic Assumption 3.31, so that π_i(k) converges to π_i > 0 for each i ∈ N as k goes to infinity.
We assume that

H_iH_i' > 0   (5.25)

and, without loss of generality (see Remark 5.1), that

G_iH_i' = 0.   (5.26)

We consider dynamic Markov jump filters G_K given by
G_K : { x̂(k+1) = Â_{θ(k)}x̂(k) + B̂_{θ(k)}y(k),
        u(k) = Ĉ_{θ(k)}x̂(k),   (5.27)
        x̂(0) = x̂_0. }

As in the previous section, the reason for choosing this kind of filter is that it depends just on θ(k) (rather than on the entire past history of modes θ(0), . . . , θ(k)), so that the closed loop system is again a (time-invariant) MJLS.
From (5.24) and (5.27) we have that the closed loop system is

[x(k+1); x̂(k+1)] = [A_{θ(k)}  B_{θ(k)}Ĉ_{θ(k)};  B̂_{θ(k)}L_{θ(k)}  Â_{θ(k)}][x(k); x̂(k)] + [G_{θ(k)};  B̂_{θ(k)}H_{θ(k)}]w(k),

v(k) = R_{θ(k)}^{1/2}[F_{θ(k)}  −Ĉ_{θ(k)}][x(k); x̂(k)].   (5.28)
Writing

Ā_i ≜ [A_i  B_iĈ_i;  B̂_iL_i  Â_i],  Ḡ_i ≜ [G_i;  B̂_iH_i],  C̄_i ≜ R_i^{1/2}[F_i  −Ĉ_i],  ṽ(k) ≜ [x(k); x̂(k)],   (5.29)

we have from (5.28) that the Markov jump closed loop system G_cl is given by

G_cl : { ṽ(k+1) = Ā_{θ(k)}ṽ(k) + Ḡ_{θ(k)}w(k),
         v(k) = C̄_{θ(k)}ṽ(k), }   (5.30)

with ṽ(k) of dimension n_cl. We say that the controller G_K given by (5.27) is admissible if the closed loop MJLS G_cl (5.30) is MSS.
Therefore, in the infinite horizon optimal filtering problem, we want to find Â_i, B̂_i, Ĉ_i in (5.27), with x̂_0 deterministic, such that the closed loop system (5.28) is MSS and minimizes

lim_{k→∞} E(‖v(k)‖²).

The solution of the infinite horizon filtering problem posed above is closely related to the mean square stabilizing solution for the filtering coupled algebraic Riccati equations, defined next.

Definition 5.7 (Filtering CARE). We say that Y = (Y_1, . . . , Y_N) ∈ H^{n+} is the mean square stabilizing solution for the filtering CARE if it satisfies, for each j ∈ N,

Y_j = Σ_{i=1}^N p_ij[A_iY_iA_i' − A_iY_iL_i'(H_iH_i'π_i + L_iY_iL_i')^{-1}L_iY_iA_i' + π_iG_iG_i']   (5.31)

with r_σ(T) < 1, where T(·) = (T_1(·), . . . , T_N(·)) is defined as in (3.7) with Γ_i = A_i + M_i(Y)L_i and M_i(Y) given by

M_i(Y) ≜ −A_iY_iL_i'(H_iH_i'π_i + L_iY_iL_i')^{-1}   (5.32)

for each i ∈ N.
As mentioned before, in Appendix A conditions for the existence of stabilizing solutions for the filtering (and control) CARE are presented in terms of the concepts of mean square stabilizability and mean square detectability. Let M = (M_1, . . . , M_N) be as in (5.32) (for simplicity we drop from now on the dependence on Y). We have the following theorem, which shows that the solution for the optimal filtering problem can be obtained from the mean square stabilizing solution of the filtering CARE (5.31) and the gains M = (M_1, . . . , M_N) in (5.32). Recall the definition of the H_2 optimal control problem in Section 4.4 and (4.34).
Theorem 5.8. An optimal solution for the filtering problem posed above is:

Â_i^{op} = A_i + M_iL_i + B_iF_i;  B̂_i^{op} = −M_i;  Ĉ_i^{op} = F_i,   (5.33)

and the optimal cost is

Σ_{i=1}^N tr(R_i^{1/2}F_iY_iF_i'R_i^{1/2}).   (5.34)
and thus,

x^{op}(k+1) = [A_{θ(k)} + B_{θ(k)}F_{θ(k)}]x^{op}(k) − B_{θ(k)}F_{θ(k)}e^{op}(k) + G_{θ(k)}w(k)
e^{op}(k+1) = [A_{θ(k)} + M_{θ(k)}L_{θ(k)}]e^{op}(k) + [G_{θ(k)} + M_{θ(k)}H_{θ(k)}]w(k),

that is,

[x^{op}(k+1); e^{op}(k+1)] = [A_{θ(k)} + B_{θ(k)}F_{θ(k)}  −B_{θ(k)}F_{θ(k)};  0  A_{θ(k)} + M_{θ(k)}L_{θ(k)}][x^{op}(k); e^{op}(k)] + [G_{θ(k)};  G_{θ(k)} + M_{θ(k)}H_{θ(k)}]w(k).
P_j(k+1) = Σ_{i=1}^N p_ij[(A_i + M_iL_i)P_i(k)(A_i + M_iL_i)' + π_i(k)(G_iG_i' + M_iH_iH_i'M_i')],

so that, as k → ∞,

lim_{k→∞} E(‖v(k)‖²) = Σ_{i=1}^N tr(F̃_iY_iF̃_i') = Σ_{i=1}^N tr(R_i^{1/2}F_iY_iF_i'R_i^{1/2}).
Consider now any (Â, B̂, Ĉ) such that the closed loop system (5.30) is MSS. From (4.43) and Theorem 5.4,

E(‖v(k)‖²) = Σ_{i=1}^N tr(C̄_iP_i(k)C̄_i') ≥ Σ_{i=1}^N tr(F̃_iY_i(k)F̃_i'),  i ∈ N,

with Y(k) = (Y_1(k), . . . , Y_N(k)) as in (5.13). From Proposition 3.35 in Chapter 3,

P_j(k+1) = Σ_{i=1}^N p_ij[Ā_iP_i(k)Ā_i' + π_i(k)Ḡ_iḠ_i'],
so that

lim_{k→∞} E(‖v(k)‖²) = Σ_{i=1}^N tr(C̄_iP_iC̄_i') ≥ lim_{k→∞} Σ_{i=1}^N tr(F̃_iY_i(k)F̃_i') = Σ_{i=1}^N tr(F̃_iY_iF̃_i'),

completing the proof.
added with an additional term that depends on the second moment matrix of the state variable. This extra term would be zero for the case with no jumps. Conditions to guarantee the convergence of the error covariance matrix to the stationary solution of an Nn-dimensional algebraic Riccati equation are also derived, as well as the stability of the stationary filter. These results allow us to design a time-invariant (fixed gain matrix), stable, suboptimal LMMSE filter for MJLS.
In this section it will be convenient to introduce the following notation. For any sequence of second order random vectors {r(k)} we define the centered random vector r^c(k) as r(k) − E(r(k)), r̂(k|t) the best affine estimator of r(k) given {y(0), . . . , y(t)}, and r̃(k|t) = r(k) − r̂(k|t). Similarly r̂^c(k|t) is the best linear estimator of r^c(k) given {y^c(0), . . . , y^c(t)} and r̃^c(k|t) = r^c(k) − r̂^c(k|t). It is well known (cf. [80], p. 109) that

r̂(k|t) = r̂^c(k|t) + E(r(k))   (5.35)

and, in particular, r̃^c(k|t) = r̃(k|t). We shall denote by L(y^k) the linear subspace spanned by y^k ≜ (y(k)' ⋯ y(0)')' (see [80]); that is, a random variable r ∈ L(y^k) if r = Σ_{i=0}^{k} α(i)'y(i) for some α(i) ∈ R^m, i = 0, . . . , k.
We recall that for second order random vectors r and s taking values in R^n, the inner product ⟨·;·⟩ is defined as

⟨r; s⟩ = E(s'r),

and therefore we say that r and s are orthogonal if ⟨r; s⟩ = E(s'r) = 0. For t ≤ k, the best linear estimator r̂^c(k|t) = (r̂_1^c(k|t), . . . , r̂_n^c(k|t))' of the random vector r^c(k) = (r_1^c(k), . . . , r_n^c(k))' is the projection of r^c(k) onto the subspace L((y^c)^t) and satisfies the following properties (cf. [80], pp. 108 and 113):

1. r̂_j^c(k|t) ∈ L((y^c)^t), j = 1, . . . , n;
2. r̃_j(k|t) is orthogonal to L((y^c)^t), j = 1, . . . , n;
3. if cov((y^c)^t) is nonsingular then

r̂^c(k|t) = E(r^c(k)((y^c)^t)') cov((y^c)^t)^{-1}(y^c)^t.   (5.36)
(5.37)
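Properties 1–2 above (projection onto the span of the observations, with a residual orthogonal to them) are easy to illustrate by Monte Carlo; in the sketch below (synthetic data, not from the text) the coefficients solve the normal equations, so the sample cross-moments between residual and observations vanish:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic centered observations y and target r = linear map of y plus noise.
n_samples = 200_000
y = rng.standard_normal((n_samples, 3))
r = y @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n_samples)

# Projection coefficients: alpha = cov(y)^{-1} E(y r)  (cf. (5.36)).
alpha = np.linalg.solve(y.T @ y / n_samples, y.T @ r / n_samples)
residual = r - y @ alpha

# <residual; y_j> = E(y_j residual) should vanish for every component j.
print(np.max(np.abs(y.T @ residual / n_samples)) < 1e-6)  # -> True
```

The orthogonality is exact by construction of the normal equations, up to floating-point error.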
Define

z_j(k) ≜ x(k)1_{θ(k)=j} ∈ R^n,  z(k) ≜ [z_1(k); ⋮; z_N(k)] ∈ R^{Nn},

and recall from (3.3) that q(k) = E(z(k)). Define also ẑ(k|k−1) as the projection of z(k) onto L(y^{k−1}) and

z̃(k|k−1) ≜ z(k) − ẑ(k|k−1).
The second-moment matrices associated to the above variables are

Q_i(k) ≜ E(z_i(k)z_i(k)') ∈ B(R^n), i ∈ N,
Z(k) ≜ E(z(k)z(k)') = diag[Q_i(k)] ∈ B(R^{Nn}),
Ẑ(k|l) ≜ E(ẑ(k|l)ẑ(k|l)') ∈ B(R^{Nn}), 0 ≤ l ≤ k,
Z̃(k|l) ≜ E(z̃(k|l)z̃(k|l)') ∈ B(R^{Nn}), 0 ≤ l ≤ k.
We consider the following augmented matrices:

A(k) ≜ [p_{11}(k)A_1 ⋯ p_{N1}(k)A_N; ⋮ ⋱ ⋮; p_{1N}(k)A_1 ⋯ p_{NN}(k)A_N] ∈ B(R^{Nn}),   (5.39)

H(k) ≜ [H_1π_1(k)^{1/2} ⋯ H_Nπ_N(k)^{1/2}] ∈ B(R^{Nr}, R^p),   (5.40)

L ≜ [L_1 ⋯ L_N] ∈ B(R^{Nn}, R^p),   (5.41)

G(k) ≜ diag[[(p_{1j}(k)π_1(k))^{1/2}G_1 ⋯ (p_{Nj}(k)π_N(k))^{1/2}G_N]] ∈ B(R^{N²r}, R^{Nn}).   (5.42)
We recall that we still assume that conditions (5.2) and (5.3) hold. We present now the main result of [58], derived from geometric arguments as in [80].

Theorem 5.9. Consider the system represented by (5.38). Then the LMMSE x̂(k|k) is given by

x̂(k|k) = Σ_{i=1}^N ẑ_i(k|k),   (5.43)

where

ẑ(k|k) = ẑ(k|k−1) + Z̃(k|k−1)L(k)'(L(k)Z̃(k|k−1)L(k)' + H(k)H(k)')^{-1}(y(k) − L(k)ẑ(k|k−1)),   (5.44)

ẑ(k|k−1) = A(k−1)ẑ(k−1|k−1), k ≥ 1,   (5.45)

ẑ(0|−1) = q(0) = [μ_0π_1(0); ⋮; μ_0π_N(0)].
The positive semidefinite matrices Z̃(k|k−1) ∈ B(R^{Nn}) are obtained from

Z̃(k|k−1) = Z(k) − Ẑ(k|k−1),   (5.46)

where Z(k) = diag[Q_i(k)] with

Q_j(k+1) = Σ_{i=1}^N p_ij A_iQ_i(k)A_i' + Σ_{i=1}^N p_ij π_i(k)G_iG_i',  Q_j(0) = Q_0π_j(0), j ∈ N,   (5.47)

and Ẑ(k|k−1) is given by the recursive equations

Ẑ(k|k) = Ẑ(k|k−1) + Z̃(k|k−1)L(k)'(L(k)Z̃(k|k−1)L(k)' + H(k)H(k)')^{-1}L(k)Z̃(k|k−1),   (5.48)

Ẑ(k|k−1) = A(k−1)Ẑ(k−1|k−1)A(k−1)',   (5.49)

Ẑ(0|−1) = q(0)q(0)'.
The inverse in (5.44) is well defined since for each k = 0, 1, . . . there exists θ(k) ∈ N such that π_{θ(k)}(k) > 0 and, from condition (5.3),

L(k)Z̃(k|k−1)L(k)' + H(k)H(k)' ≥ H(k)H(k)' = Σ_{i=1}^N π_i(k)H_iH_i' > 0.

Moreover, the recursion for Z̃(k|k−1) involves the additional term

V(Q(k), k) ≜ diag[Σ_{i=1}^N p_ij A_iQ_i(k)A_i'] − A(k)(diag[Q_i(k)])A(k)',   (5.50)

and, by conditioning on {θ(k) = i}, it can be checked that v'V(Q(k), k)v ≥ 0 for every v, so that

V(Q(k), k) ≥ 0,   (5.51)

where Q(k) = (Q_1(k), . . . , Q_N(k)) ∈ H^{n+} are given by the recursive equation (5.47).
We redefine the matrices A, H(k), L, G(k) defined in (5.39), (5.40), (5.41) and (5.42) as

A = [p_{11}A_1 ⋯ p_{N1}A_N; ⋮ ⋱ ⋮; p_{1N}A_1 ⋯ p_{NN}A_N] ∈ B(R^{Nn}),   (5.52)

H(k) = [H_1π_1(k)^{1/2} ⋯ H_Nπ_N(k)^{1/2}] ∈ B(R^{Nr}, R^p),   (5.53)

L = [L_1 ⋯ L_N] ∈ B(R^{Nn}, R^p),   (5.54)

G(k) = diag[[(p_{1j}π_1(k))^{1/2}G_1 ⋯ (p_{Nj}π_N(k))^{1/2}G_N]] ∈ B(R^{N²r}, R^{Nn}),   (5.55)
and define

H ≜ [H_1π_1^{1/2} ⋯ H_Nπ_N^{1/2}] ∈ B(R^{Nr}, R^p),   (5.56)

G ≜ diag[[(p_{1j}π_1)^{1/2}G_1 ⋯ (p_{Nj}π_N)^{1/2}G_N]] ∈ B(R^{N²r}, R^{Nn}).   (5.57)

Since we are considering the time-invariant case, we have that the operator V defined in (5.50) is also time-invariant, and therefore we can drop the time dependence. From MSS and ergodicity of the Markov chain, and from Proposition 3.36 in Chapter 3, Q(k) → Q as k → ∞, where Q = (Q_1, . . . , Q_N) ∈ H^{n+} is the unique solution that satisfies:

Q_j = Σ_{i=1}^N p_ij(A_iQ_iA_i' + π_iG_iG_i'), j ∈ N.   (5.58)
i=1
1). The idea of the proof is rst to show that there exists a unique positive
semidenite solution P for the algebraic Riccati equation, and then prove
that for some positive integer > 0, there exists lower and upper bounds,
) + k + 1), such that it squeezes
R(k) and P (k + ) respectively, for Z(k
)
asymptotically Z(k + k + 1) to P .
Theorem 5.12. Suppose that the Markov chain {(k)} is ergodic and that
System (5.38) is MSS. Consider the algebraic Riccati equation given by:
Z = AZA + GG AZL (LZL + HH )1 LZA + V(Q)
(5.60)
x(k+1) = A_{θ(k)}x(k) + G_{θ(k)}w(k)
y(k) = L_{θ(k)}x(k) + H_{θ(k)}w(k)   (5.61)
v(k) = Jx(k),

where we assume the same assumptions as in Subsection 5.4.3, including that the Markov chain {θ(k)} is ergodic, and that System (5.61) is MSS. Here y(k) is the output variable, which is the only information available from the evolution of the system, since we assume that the jump variable θ(k) is not
known. We wish to design a dynamic estimator v̂(k) for v(k) of the following form:

ẑ(k+1) = A_f ẑ(k) + B_f y(k)
v̂(k) = J_f ẑ(k),   (5.62)

so that, combining (5.61) and (5.62), the closed loop system is

x(k+1) = A_{θ(k)}x(k) + G_{θ(k)}w(k)
ẑ(k+1) = B_fL_{θ(k)}x(k) + A_f ẑ(k) + B_fH_{θ(k)}w(k)   (5.63)
e(k) = v(k) − v̂(k).

Define

U_i(k) ≜ E(z_i(k)ẑ(k)'),  U(k) ≜ [U_1(k); ⋮; U_N(k)],   (5.64)

and

Θ_i ≜ [p_{i1}(1 − p_{i1})  −p_{i1}p_{i2}  ⋯  −p_{i1}p_{iN};
       −p_{i1}p_{i2}  p_{i2}(1 − p_{i2})  ⋯  −p_{i2}p_{iN};
       ⋮  ⋮  ⋱  ⋮;
       −p_{i1}p_{iN}  −p_{i2}p_{iN}  ⋯  p_{iN}(1 − p_{iN})],  i ∈ N.
From (5.47),

Q_j(k+1) = Σ_{i=1}^N p_ij A_iQ_i(k)A_i' + Σ_{i=1}^N π_i(k)p_ij G_iG_i',   (5.65)

so that

Z(k+1) = AZ(k)A' + V(Q(k)) + G(k)G(k)',   (5.66)

where

V(Q) = diag[Σ_{i=1}^N p_ij A_iQ_iA_i'] − A(diag[Q_i])A'

= [Σ_{i=1}^N p_{i1}A_iQ_iA_i'  ⋯  0; ⋮ ⋱ ⋮; 0  ⋯  Σ_{i=1}^N p_{iN}A_iQ_iA_i'] − [Σ_{i=1}^N p_{i1}²A_iQ_iA_i'  ⋯  Σ_{i=1}^N p_{i1}p_{iN}A_iQ_iA_i'; ⋮ ⋱ ⋮; Σ_{i=1}^N p_{i1}p_{iN}A_iQ_iA_i'  ⋯  Σ_{i=1}^N p_{iN}²A_iQ_iA_i']

= Σ_{i=1}^N [p_{i1}(1 − p_{i1})I  ⋯  −p_{i1}p_{iN}I; ⋮ ⋱ ⋮; −p_{i1}p_{iN}I  ⋯  p_{iN}(1 − p_{iN})I](I_N ⊗ A_iQ_iA_i')

= Σ_{i=1}^N Θ_i ⊗ (A_iQ_iA_i'),

and given that Θ_i ≥ 0, the square root matrix Θ_i^{1/2} ≥ 0 (so that Θ_i = Θ_i^{1/2}Θ_i^{1/2}) is well defined. It follows that (recall that ⊗ represents the Kronecker product, and dg[Q_i] denotes diag(Q_i, . . . , Q_i) = I_N ⊗ Q_i)

V(Q) = Σ_{i=1}^N (Θ_i^{1/2} ⊗ A_i) dg[Q_i] (Θ_i^{1/2} ⊗ A_i)'.

Therefore, writing D_i ≜ (Θ_i^{1/2} ⊗ A_i), we obtain

Z(k+1) = AZ(k)A' + Σ_{i=1}^N D_i dg[Q_i(k)]D_i' + G(k)G(k)'.   (5.67)
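The Kronecker-product identity behind (5.67) — that the extra term equals a sum of terms Θ_i ⊗ (A_iQ_iA_i'), with Θ_i built from the i-th row p_i· of the transition matrix as diag(p_i·) − p_i·p_i·' — can be verified numerically with random data:

```python
import numpy as np

rng = np.random.default_rng(2)

N, n = 2, 2
P = np.array([[0.7, 0.3], [0.4, 0.6]])
A = [rng.standard_normal((n, n)) for _ in range(N)]
Q = []
for _ in range(N):
    M = rng.standard_normal((n, n))
    Q.append(M @ M.T)                      # random PSD Q_i

AQ = [A[i] @ Q[i] @ A[i].T for i in range(N)]

# Left-hand side: sum_i Theta_i kron (A_i Q_i A_i').
V = np.zeros((N * n, N * n))
for i in range(N):
    Theta = np.diag(P[i]) - np.outer(P[i], P[i])
    V += np.kron(Theta, AQ[i])

# Right-hand side: diag_j[sum_i p_ij A_i Q_i A_i'] - A_aug diag[Q_i] A_aug'.
A_aug = np.block([[P[i, j] * A[i] for i in range(N)] for j in range(N)])
D1 = np.zeros((N * n, N * n))
for j in range(N):
    D1[j*n:(j+1)*n, j*n:(j+1)*n] = sum(P[i, j] * AQ[i] for i in range(N))
Qdiag = np.block([[Q[0], np.zeros((n, n))], [np.zeros((n, n)), Q[1]]])
rhs = D1 - A_aug @ Qdiag @ A_aug.T

print(np.allclose(V, rhs))  # -> True
```

Block (j, l) of both sides equals Σ_i (δ_{jl}p_{ij} − p_{ij}p_{il})A_iQ_iA_i', which is why the two expressions agree.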
Defining

S(k) ≜ E([z(k); ẑ(k)][z(k); ẑ(k)]') = [Z(k)  U(k);  U(k)'  Ẑ(k)],   (5.68)

we have from (5.63) and (5.67) the recursive equation

S(k+1) = [A  0;  B_fL  A_f]S(k)[A  0;  B_fL  A_f]' + Σ_{i=1}^N [D_i; 0] dg[Q_i(k)] [D_i; 0]' + [G(k); 0][G(k); 0]' + [0; B_fH(k)][0; B_fH(k)]'.   (5.69)

Moreover, S(k) converges as k → ∞ to

S = [Z  U;  U'  Ẑ],  Z = diag[Q_i] = [Q_1  0  ⋯  0;  0  ⋱  0;  0  ⋯  Q_N],
which satisfies

S = [A  0;  B_fL  A_f]S[A  0;  B_fL  A_f]' + Σ_{i=1}^N [D_i; 0] dg[Q_i][D_i; 0]' + [G; 0][G; 0]' + [0; B_fH][0; B_fH]',   (5.70)

with

V = [X  R;  R'  X̂],  X = diag[X_i] = [X_1  0  ⋯  0;  ⋮  ⋱  ⋮;  0  ⋯  X_N].   (5.71)

Furthermore, if V satisfies

V ≥ [A  0;  B_fL  A_f]V[A  0;  B_fL  A_f]' + Σ_{i=1}^N [D_i; 0] dg[X_i][D_i; 0]' + [G; 0][G; 0]' + [0; B_fH][0; B_fH]',   (5.72)

then V ≥ S.
Since x(k) = Σ_{i=1}^N z_i(k), the stationary estimation error can be written as

lim_{k→∞} E(‖Σ_{i=1}^N Jz_i(k) − J_f ẑ(k)‖²) = lim_{k→∞} E(‖[J ⋯ J  −J_f][z(k); ẑ(k)]‖²)
= tr([J ⋯ J  −J_f] (lim_{k→∞} S(k)) [J ⋯ J  −J_f]')
= tr([J ⋯ J  −J_f] S [J ⋯ J  −J_f]'),   (5.73)

where the last equality follows from Proposition 5.15 and

S = [Z  U;  U'  Ẑ],  Z = diag[Q_i],

which satisfies, according to Proposition 5.15, the equation

S = [A  0;  B_fL  A_f]S[A  0;  B_fL  A_f]' + Σ_{i=1}^N [D_i; 0] dg[Q_i][D_i; 0]' + [G; 0][G; 0]' + [0; B_fH][0; B_fH]'.   (5.74)

In the following section we shall formulate this problem as an LMI optimization problem.
The LMI optimization problem consists of minimizing tr(W) subject to

S̃ = [Z  U;  U'  Ẑ] > 0,  Z = diag[Q_i], i ∈ N,   (5.75)

[W  [J ⋯ J  −J_f]S̃;  S̃[J ⋯ J  −J_f]'  S̃] > 0,   (5.76)

and, writing Ā ≜ [A  0;  B_fL  A_f], D̄_i ≜ [D_i; 0], Ḡ ≜ [G; 0] and H̄ ≜ [0; B_fH],

[S̃  ĀS̃  D̄_1 dg[Q_1]  ⋯  D̄_N dg[Q_N]  Ḡ  H̄;
 S̃Ā'  S̃  0  ⋯  0  0  0;
 dg[Q_1]D̄_1'  0  dg[Q_1]  ⋯  0  0  0;
 ⋮  ⋮  ⋮  ⋱  ⋮  ⋮  ⋮;
 dg[Q_N]D̄_N'  0  0  ⋯  dg[Q_N]  0  0;
 Ḡ'  0  0  ⋯  0  I  0;
 H̄'  0  0  ⋯  0  0  I] > 0.   (5.77)

Indeed, suppose that S̃ > 0, W > 0 and (A_f, B_f, J_f) satisfy (5.75), (5.76) and (5.77). Then, from the Schur complement,
(5.77). Then from the Schur complement,
J J Jf
> tr J J Jf S
W
(5.78)
and
>
S
N
A 0 A 0
Di
S
+
dg[Qi ] Di 0
Bf L Af
Bf L Af
0
i=1
0
G
0 H Bf .
G 0 +
+
Bf H
0
(5.79)
From Proposition B.4 we have that System (5.63) is MSS. From Proposition 5.15 we know that S(k) → S as k → ∞, and (5.73) and (5.74) hold. Furthermore, from Proposition 5.15, we have that S̃ ≥ S. Since we want to minimize tr(W), it is clear that for (A_f, B_f, J_f) fixed, the best solution would be, in the limit, S, W satisfying (5.78) and (5.79) with equality. However, as will be shown next, it is more convenient to work with the strict inequality restrictions (5.75)–(5.77).
Theorem 5.16. The problem of finding S, W and (A_f, B_f, J_f) that minimize tr(W) subject to (5.75)–(5.77) is equivalent to:

min tr(W)

over the variables X = diag[X_i], i ∈ N, and Y, W, R, F, J_aux, subject to (writing J̄ ≜ [J ⋯ J])

[X  X  (J̄ − J_aux)';
 X  Y  J̄';
 J̄ − J_aux  J̄  W] > 0,   (5.80)

[X  X  0  ⋯  0  0  A'X  A'Y + L'F' + R';
 X  Y  0  ⋯  0  0  A'X  A'Y + L'F';
 0  0  dg[X_1]  ⋯  0  0  D_1'X  D_1'Y;
 ⋮  ⋮  ⋮  ⋱  ⋮  ⋮  ⋮  ⋮;
 0  0  0  ⋯  dg[X_N]  0  D_N'X  D_N'Y;
 0  0  0  ⋯  0  I  G'X  G'Y + H'F';
 XA  XA  XD_1  ⋯  XD_N  XG  X  X;
 YA + FL + R  YA + FL  YD_1  ⋯  YD_N  YG + FH  X  Y] > 0,   (5.81)

with the filter matrices recovered from the optimal solution as

B_f = V^{-1}F,  J_f = J_aux(U − X)^{-1}.   (5.82)
The next results show that the stationary mean square error obtained by the filter derived from the optimal solution of the LMI formulation coincides with the one obtained from the associated filtering ARE derived in Theorem 5.12. Let us show first that any solution of the LMI problem will lead to an expected stationary error greater than the one obtained from the filter generated by the ARE approach (5.60). Recall from Chap. 3 that r_σ(T) < 1 implies that there exists a unique solution Q = (Q_1, . . . , Q_N) ∈ H^n satisfying (5.58).
The above proposition shows that any feasible solution of the LMI problem will lead to filters such that the stationary mean square error will be greater than the stationary mean square error obtained from the filtering ARE. Next let us show that approximations of the filtering ARE (5.60) and (5.58) lead to a feasible solution for the LMI optimization problem (5.80) and (5.81). This shows that the optimal solution of the LMI will lead to a filter with associated stationary mean square error smaller than the one obtained from any filter derived from approximations of the ARE approach. The combination of these two results shows that the filter obtained from the optimal LMI solution and from the ARE approach will lead to the same stationary mean square error.

Indeed, let us show that a feasible solution to (5.80) and (5.81) can be obtained from an approximation of (5.58) and (5.60). Let Q^ε = (Q_1^ε, . . . , Q_N^ε) be the unique solution satisfying

Q_j^ε = T_j(Q^ε) + Σ_{i=1}^N p_ij π_iG_iG_i' + ε²I,   (5.83)

where Z^ε = diag[Q_i^ε].
Proposition 5.18. Consider P^ε, the unique positive definite solution of the ARE (in V)

V = AVA' + GG' − AVL'(LVL' + HH')^{-1}LVA' + V(Q^ε) + ε²I,   (5.84)

and construct from (P^ε)^{-1} the LMI variables X = diag[X_i], Y, R and F.
Consider matrices in the convex combinations

A = Σ_{j=1}^κ λ_jA_j,  L = Σ_{j=1}^κ λ_jL_j,  G = Σ_{j=1}^κ λ_jG_j,  H = Σ_{j=1}^κ λ_jH_j,   (5.85)

with λ_j ≥ 0, Σ_{j=1}^κ λ_j = 1, and, similarly, D_i = Σ_{j=1}^κ λ_jD_{ji}, i ∈ N.   (5.86)

Suppose that X = diag[X_i], Y, W, R, F and J_aux satisfy
[X  X  (J̄ − J_aux)';
 X  Y  J̄';
 J̄ − J_aux  J̄  W] > 0,   (5.87)
and, for j = 1, . . . , κ,

[X  X  0  ⋯  0  0  A_j'X  A_j'Y + L_j'F' + R';
 X  Y  0  ⋯  0  0  A_j'X  A_j'Y + L_j'F';
 0  0  dg[X_1]  ⋯  0  0  D_{j1}'X  D_{j1}'Y;
 ⋮  ⋮  ⋮  ⋱  ⋮  ⋮  ⋮  ⋮;
 0  0  0  ⋯  dg[X_N]  0  D_{jN}'X  D_{jN}'Y;
 0  0  0  ⋯  0  I  G_j'X  G_j'Y + H_j'F';
 XA_j  XA_j  XD_{j1}  ⋯  XD_{jN}  XG_j  X  X;
 YA_j + FL_j + R  YA_j + FL_j  YD_{j1}  ⋯  YD_{jN}  YG_j + FH_j  X  Y] > 0.   (5.88)
Then, for the filter given as in (5.82), we have that System (5.63) is MSS and lim_{k→∞} E(‖e(k)‖²) ≤ tr(W).
Proof. Since (5.88) holds for each j = 1, . . . , κ, and the matrices A, L, G, H are as in (5.85), we have (by taking the sum of (5.88) multiplied by λ_j over j from 1 to κ) that (5.88) also holds for A, L, G, H, D_i. By taking the
can, in addition, contemplate uncertainties in the parameters through, for instance, an LMI approach. This robust version was analyzed in [69] and is also described in this book. See also [86], [87], [88], [94], [95], [100], [102], [104], and [166], which include sampling algorithms, H_∞, nonlinear, and robust filters for MJLS.
6
Quadratic Optimal Control with Partial
Information
The LQG control problem is one of the most popular in the control community. In the case with partial observations, the celebrated separation principle establishes that the solution of the quadratic optimal control problem for linear systems is similar to the complete observation case, except for the fact that the state variable is substituted by its estimate derived from the Kalman filter. Thus the solution of the LQG problem can be obtained from the Riccati equations associated to the filtering and control problems. The main purpose of this chapter is to trace a parallel with the standard LQG theory and study the quadratic optimal control problem of MJLS when the jump variable is available to the controller. It is shown that a result similar to the separation principle can be derived, with the solution obtained from two sets of coupled Riccati equations, one associated to the filtering problem, and the other to the control problem.
MJLS. When there is only one mode of operation our results coincide with the traditional separation principle for the LQG control of discrete-time linear systems.

As pointed out in [146] and Chapter 5, the optimal state estimator for this case, when the input {w(k); k ∈ T} is a Gaussian white noise sequence, is a Kalman filter (see Remark 5.2 and (5.11)) for a time-varying system. Therefore the gains for the LQG optimal controller will be sample path dependent (see the optimal controller presented in [52], which is based on this filter). In order to get around the sample path dependence, the authors in [146] propose a Markov jump filter (that is, a filter that depends just on the present value of the Markov parameter), based on an a posteriori estimate of the jump parameters. Notice however that no proof of optimality for this class of Markov filters is presented in [146], since the authors are mainly interested in the steady-state convergence properties of the filter. In this chapter, by restricting ourselves to a class of dynamic Markov jump controllers, we present the results derived in [73] and [74] to obtain a proof of optimality, following an approach similar to the standard theory for the Kalman filter and LQG control (see, for instance, [14] and [80]) to develop a separation principle for MJLS. A key point in our formulation is the introduction of the indicator function for the Markov parameter in the orthogonality between the estimation error and the state estimate, presented in Theorem 5.3. This generalizes the standard case, in which there is only one mode of operation, so that the indicator function in this case would be always equal to one. The introduction of the indicator function for the Markov parameter in Theorem 5.3 is essential for obtaining the separation principle, presented in Section 6.2 for the finite horizon case, and in Section 6.3 for the infinite horizon case.
J(θ_0, x_0, u) = Σ_{k=0}^{T−1} E(‖C_{θ(k)}(k)x(k)‖² + ‖D_{θ(k)}(k)u(k)‖²) + E(x(T)'V_{θ(T)}x(T))   (6.3)

= Σ_{k=0}^{T−1} [Σ_{i=1}^N tr(C_i(k)'C_i(k)E(x(k)x(k)'1_{θ(k)=i})) + E(‖D_{θ(k)}(k)u(k)‖²)] + E(x(T)'V_{θ(T)}x(T)),   (6.4)
and for any control law u = (u(0), . . . , u(T−1)) given by (6.2), we have from (5.15) that x(k) = x̃_e(k) + x̂_e(k), and from (5.20) and Theorem 5.3,

E(x(k)x(k)'1_{θ(k)=i}) = E(x̃_e(k)x̃_e(k)'1_{θ(k)=i}) + E(x̂_e(k)x̂_e(k)'1_{θ(k)=i}) = Y_i(k) + E(x̂_e(k)x̂_e(k)'1_{θ(k)=i}),

and, in particular,

E(x(T)'V_{θ(T)}x(T)) = Σ_{i=1}^N tr(V_iY_i(T)) + E(x̂_e(T)'V_{θ(T)}x̂_e(T)),
and thus

J(θ_0, x_0, u) = Σ_{k=0}^{T−1} E(‖ẑ_e(k)‖²) + E(x̂_e(T)'V_{θ(T)}x̂_e(T)) + Σ_{k=0}^{T−1} Σ_{i=1}^N tr(C_i(k)'C_i(k)Y_i(k)) + Σ_{i=1}^N tr(V_iY_i(T)),   (6.5)

where ẑ_e(k)' ≜ [(C_{θ(k)}(k)x̂_e(k))'  (D_{θ(k)}(k)u(k))'], and where the terms with Y_i(k) do not depend on the control variable u. Therefore minimizing (6.5) is equivalent to minimizing

J_e(u) = Σ_{k=0}^{T−1} E(‖ẑ_e(k)‖²) + E(x̂_e(T)'V_{θ(T)}x̂_e(T))

subject to

x̂_e(k+1) = A_{θ(k)}(k)x̂_e(k) + B_{θ(k)}(k)u(k) − M_{θ(k)}(k)ε(k),  x̂_e(0) = E(x_0) = μ_0,

where

ε(k) = y(k) − L_{θ(k)}(k)x̂_e(k) = L_{θ(k)}(k)x̃_e(k) + H_{θ(k)}(k)w(k)
and u(k) is given by (6.2). Let us show now that {ε(k); k ∈ T} satisfies (4.2), (4.7), (4.8) and (4.9). Set

Ξ_i(k) ≜ L_i(k)Y_i(k)L_i(k)' + H_i(k)H_i(k)'π_i(k).

Indeed, we have from Theorem 5.3 and the fact that {w(k); k ∈ T} is a wide sense white noise sequence independent of the Markov chain {θ(k); k ∈ T} and of the initial condition x(0), that

E(ε(k)ε(k)'1_{θ(k)=i}) = E((L_i(k)x̃_e(k) + H_i(k)w(k))(L_i(k)x̃_e(k) + H_i(k)w(k))'1_{θ(k)=i}) = Ξ_i(k),

E(ε(k)x̂_e(k)'1_{θ(k)=i}) = L_i(k)E(x̃_e(k)x̂_e(k)'1_{θ(k)=i}) + H_i(k)E(w(k)x̂_e(k)'1_{θ(k)=i}) = 0,

E(ε(k)u(k)'1_{θ(k)=i}) = L_i(k)E(x̃_e(k)x̂(k)'1_{θ(k)=i})Ĉ_i(k)' + H_i(k)E(w(k)x̂(k)'1_{θ(k)=i})Ĉ_i(k)' = 0,

for k ∈ T. Moreover, for any measurable functions f and g, we have from the fact that ε(k) and θ(k) are G_k-measurable that

E(f(θ(k))g(θ(k+1)) | G_k) = f(θ(k)) Σ_{j=1}^N p_{θ(k)j}(k)g(j).
Thus the results of Section 4.2 can be applied and we have the following theorem.

Theorem 6.1 (A Separation Principle for MJLS). An optimal solution for the control problem posed in Subsection 6.2.1 is obtained from (4.14) and (5.13). The gains (4.15) and (5.14) lead to the following optimal solution:

Â_i^{op}(k) = A_i(k) + M_i(k)L_i(k) + B_i(k)F_i(k),
B̂_i^{op}(k) = −M_i(k),
Ĉ_i^{op}(k) = F_i(k),

and the optimal cost is

J(θ_0, x_0) = Σ_{i=1}^N π_i(0)μ_0'X_i(0)μ_0 + Σ_{k=0}^{T−1} Σ_{i=1}^N tr(C_i(k)Y_i(k)C_i(k)') + Σ_{i=1}^N tr(V_iY_i(T)) + Σ_{k=0}^{T−1} Σ_{i=1}^N tr(M_i(k)Ξ_i(k)M_i(k)'E_i(X(k+1), k)).
Remark 6.2. Notice that the choice of the Markov jump structure for the filter as in (5.4) was crucial to obtain the orthogonality derived in Theorem 5.3, and the separation principle presented here. Other choices for the structure of the filter, with more information on the Markov chain, would lead to other notions of orthogonality and separation principles. Therefore the separation principle presented here is a direct consequence of the choice of the Markov structure for the filter.

Remark 6.3. From Remarks 4.3 and 5.6 we have that a time-invariant Markov controller approximation for the optimal filter in Theorem 6.1 would be given by the steady-state solutions (4.23) and (5.22), with gains (4.24) and (5.23), provided that the convergence conditions presented in Appendix A are satisfied.
Writing

Ā_i ≜ [A_i  B_iĈ_i;  B̂_iL_i  Â_i],  Ḡ_i ≜ [G_i;  B̂_iH_i],  C̄_i ≜ [C_i  D_iĈ_i],  ṽ(k) ≜ [x(k); x̂(k)],   (6.9)
we have from (6.8) that the closed loop MJLS G_cl is given by

G_cl : { ṽ(k+1) = Ā_{θ(k)}ṽ(k) + Ḡ_{θ(k)}w(k),
         z(k) = C̄_{θ(k)}ṽ(k), }   (6.10)

with ṽ(k) of dimension n_cl. We say that the controller G_K given by (6.7) is admissible if the closed loop MJLS G_cl (6.10) is MSS according to Definition 3.8 in Chapter 3.
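Admissibility (MSS of the closed loop) can be checked numerically: MSS of an MJLS is equivalent to the spectral radius of an augmented matrix assembled from p_{ij} and the Kronecker products Ā_i ⊗ Ā_i being smaller than one (cf. Chapter 3). A sketch with hypothetical two-mode data:

```python
import numpy as np

# Hypothetical two-mode closed-loop matrices and transition matrix.
N = 2
A = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.3, 0.0], [0.1, 0.2]])]
P = np.array([[0.9, 0.1], [0.3, 0.7]])

# Augmented matrix: block row j, block column i equals p_{ij} (A_i kron A_i).
cal_A = np.block([[P[i, j] * np.kron(A[i], A[i]) for i in range(N)]
                  for j in range(N)])
r = max(abs(np.linalg.eigvals(cal_A)))
print(r < 1.0)  # -> True: this closed loop is MSS
```

Here both mode matrices are strongly stable, so the augmented spectral radius is well below one; replacing one mode with an unstable matrix may or may not destroy MSS, depending on how often the chain visits it.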
6.3.2 Definition of the H_2 Control Problem

According to Definition 4.7, the H_2 norm of the closed loop system G_cl given by (6.10) with ṽ(0) = 0, denoted by ‖G‖_2, is defined as:

‖G‖²_2 = Σ_{s=1}^r ‖z_s‖²_2,   (6.11)

‖z_s‖²_2 = Σ_{k=1}^∞ E(‖z_s(k)‖²),   (6.12)

and z_s = (z_s(0), z_s(1), . . .) represents the output sequence when the input w is given by:

a) w = (w(0), w(1), . . .), w(0) = e_s, w(k) = 0 for each k > 0, with e_s ∈ R^r the unitary vector formed by 1 at the sth position, 0 elsewhere, and
b) θ(0) = i with probability π_i.

Here we have modified Definition 4.7 by considering the initial distribution for the Markov chain as μ_i = π_i. Since the system (6.10) is MSS we have from Theorem 3.34 in Chapter 3 that the norms ‖G‖²_2 and ‖z_s‖²_2 in (6.11) and (6.12) are finite. As mentioned before, for the deterministic case (with N = 1 and p_11 = 1), the definition above coincides with the usual H_2 norm.
According to the results in Subsection 4.4.2, the H_2 norm can be computed from the unique solutions of the discrete coupled Grammians of observability and controllability. Let S = (S_1, . . . , S_N) ∈ H^n and P = (P_1, . . . , P_N) ∈ H^n be the unique solutions (see Proposition 3.20) of the observability and controllability Grammians

S_i = Ā_i'E_i(S)Ā_i + C̄_i'C̄_i,  i ∈ N,   (6.13)

P_j = Σ_{i=1}^N p_ij[Ā_iP_iĀ_i' + π_iḠ_iḠ_i'],  j ∈ N.   (6.14)
‖G‖²_2 = Σ_{i=1}^N Σ_{j=1}^N π_i p_ij tr(Ḡ_i'S_jḠ_i) = Σ_{i=1}^N tr(C̄_iP_iC̄_i').   (6.15)

Consider now P(k) = (P_1(k), . . . , P_N(k)) given by

P_j(k+1) = Σ_{i=1}^N p_ij[Ā_iP_i(k)Ā_i' + π_i(k)Ḡ_iḠ_i'],  j ∈ N.   (6.16)
Moreover, since the closed loop system is MSS, we have that P(k) → P as k → ∞ (see Proposition 3.36 in Chapter 3), where P = (P_1, . . . , P_N) is the unique solution (see Proposition 3.20) of the controllability Grammian (6.14). Notice that

E(‖z(k)‖²) = E(tr(z(k)z(k)')) = tr(E(C̄_{θ(k)}ṽ(k)ṽ(k)'C̄_{θ(k)}')) = Σ_{i=1}^N tr(C̄_iP_i(k)C̄_i') → Σ_{i=1}^N tr(C̄_iP_iC̄_i') = ‖G‖²_2,

and thus from Proposition 6.4 an alternative formulation of the H_2 control problem is to find (Â, B̂, Ĉ) in system (6.7), where Â = (Â_1, . . . , Â_N), B̂ = (B̂_1, . . . , B̂_N), Ĉ = (Ĉ_1, . . . , Ĉ_N), such that the closed loop MJLS G_cl (6.10) is MSS and minimizes lim_{k→∞} ‖z(k)‖²_2.
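The characterization (6.15) suggests computing the H_2 norm by iterating the controllability Grammian (6.16) to its fixed point; a sketch with hypothetical two-mode closed-loop data:

```python
import numpy as np

# Hypothetical closed-loop data (illustrative only).
N = 2
Abar = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.2, 0.0], [0.1, 0.3]])]
Gbar = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
Cbar = [np.array([[1.0, 0.0]]), np.array([[1.0, 1.0]])]
P_tr = np.array([[0.6, 0.4], [0.3, 0.7]])
pi = np.array([3/7, 4/7])   # stationary distribution of P_tr

# Iterate the controllability Grammian recursion (6.16) to its fixed point.
P = [np.zeros((2, 2)) for _ in range(N)]
for _ in range(500):
    P = [sum(P_tr[i, j] * (Abar[i] @ P[i] @ Abar[i].T
                           + pi[i] * Gbar[i] @ Gbar[i].T) for i in range(N))
         for j in range(N)]

# Squared H2 norm via (6.15).
h2_sq = sum(float(np.trace(Cbar[i] @ P[i] @ Cbar[i].T)) for i in range(N))
print(h2_sq > 0)  # -> True
```

The observability-Grammian expression in (6.15) would give the same value and can serve as a consistency check.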
G_c : { x(k+1) = [A_{θ(k)} + B_{θ(k)}F_{θ(k)}]x(k) + G_{θ(k)}w(k),
        z(k) = [C_{θ(k)} + D_{θ(k)}F_{θ(k)}]x(k), }   (6.17)

G_U : { x(k+1) = [A_{θ(k)} + B_{θ(k)}F_{θ(k)}]x(k) + B_{θ(k)}R_{θ(k)}^{-1/2}v(k),
        z(k) = [C_{θ(k)} + D_{θ(k)}F_{θ(k)}]x(k) + D_{θ(k)}R_{θ(k)}^{-1/2}v(k), }   (6.18)

where ν(k) ≜ u(k) − F_{θ(k)}x(k) and v(k) = R_{θ(k)}^{1/2}ν(k), v = {v(k); k ∈ T}. Notice that system G_c does not depend on the control u(k), and that

z(k) = G_c(w)(k) + G_U(v)(k).
The next theorem establishes the principle of separation for the H_2 control of MJLS. In what follows we recall that ‖G‖_2 represents the H_2 norm of (6.6) under a control law of the form (6.7).

Theorem 6.5. Consider System (6.6) and Markov jump mean square stabilizing controllers as in (6.7). Suppose that there exist the mean square stabilizing solutions Y = (Y_1, . . . , Y_N) and X = (X_1, . . . , X_N) for the filtering and control CARE as in (5.31) and (4.35) respectively, and let M = (M_1, . . . , M_N) and F = (F_1, . . . , F_N) be the associated gains. Then

min ‖G‖²_2 = Σ_{i=1}^N π_i tr(G_i'E_i(X)G_i) + Σ_{i=1}^N tr(R_i^{1/2}F_iY_iF_i'R_i^{1/2}).
Proof. Consider the closed loop system (6.8) and (6.10), rewritten below for convenience, with the output v(k) = R_{θ(k)}^{1/2}ν(k) = R_{θ(k)}^{1/2}(u(k) − F_{θ(k)}x(k)):

ṽ(k+1) = Ā_{θ(k)}ṽ(k) + Ḡ_{θ(k)}w(k),  v(k) = R_{θ(k)}^{1/2}[−F_{θ(k)}  Ĉ_{θ(k)}]ṽ(k).   (6.19)

Then

‖G‖²_2 = Σ_{s=1}^r ‖G(w_s)‖²_2,

where

w_s(k) = { e_s, k = 0;  0, k ≠ 0, }

and e_s is a vector with 1 at the sth position and zero elsewhere. Notice now that, from (C.11) in Proposition C.3,
the cross terms between G_c(w_s) and the remaining part of the response have zero expectation, and since

G_v(w)(t) = R_{θ(t)}^{1/2}[−F_{θ(t)}  Ĉ_{θ(t)}] Σ_{l=0}^{t−1} Ā_{θ(t−1)} ⋯ Ā_{θ(l+1)}Ḡ_{θ(l)}w(l),

we have that

G_v(w_s)(t) = { R_{θ(t)}^{1/2}[−F_{θ(t)}  Ĉ_{θ(t)}]Ā_{θ(t−1)} ⋯ Ā_{θ(1)}Ḡ_{θ(0)}e_s,  t > 0;  0,  t ≤ 0. }

Thus,

‖G‖²_2 = ‖G_c‖²_2 + ‖G_v‖²_2,   (6.20)

where G_c, G_v, and G_K are as in (6.17), (6.19) (or (5.24)), and (6.7) respectively. But the solution of min_{G_K} ‖G_v‖²_2 is as in Theorem 5.8. Therefore, from

‖G_c‖²_2 = Σ_{i=1}^N π_i tr(G_i'E_i(X)G_i)   (6.21)

and

min_{G_K} ‖G_v‖²_2 = Σ_{i=1}^N tr(R_i^{1/2}F_iY_iF_i'R_i^{1/2}),   (6.22)

the result follows.
Remark 6.6. Notice that, as for the deterministic case [195] and [229], we have from (6.20) that the H2 norm can be written as the sum of two H2 norms: the first one does not depend on the control u and has value given by (6.21), and the second one is equivalent to the infinite horizon optimal filtering problem, and has optimal value given by (6.22).
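Combining (6.20), (6.21) and (6.22), the separation described in the remark can be summarized in a single display (same notation as Theorem 6.5):

```latex
\min_{G_K}\|G\|_2^2
  \;=\; \|G_c\|_2^2 \;+\; \min_{G_K}\|G_v\|_2^2
  \;=\; \sum_{i=1}^{N}\pi_i\,\mathrm{tr}\!\left(G_i'\,\mathcal{E}_i(X)\,G_i\right)
  \;+\; \sum_{i=1}^{N}\mathrm{tr}\!\left(R_i^{1/2}F_iY_iF_i'R_i^{1/2}\right).
```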
7

H∞ Control

In this chapter we consider an H∞-like theory for MJLS. We follow an approach based on a worst-case design problem, tracing a parallel with the time domain formulation used for studying the standard H∞ theory. We analyze the special case of state feedback, which we believe gives the essential tools for further studies in the MJLS scenario. A recursive algorithm for the H∞ control CARE is included.
required to tackle the MJLS H∞ control problem, giving the interested readers the essential tools to go further on this subject. In the last section of this chapter we give a brief historical overview of the H∞ theory, emphasizing the key contributions to the theory, and suggesting further references for those interested in this subject.

The chapter content is as follows. In Section 7.2 the H∞ control problem for MJLS is precisely defined, together with the main characterization result. The proofs of sufficiency and necessity are presented in Section 7.3. Section 7.4 presents a numerical technique for obtaining the desired solution for the CARE.
(7.1)

where T = {0, 1, ...} and, as before, x = {x(k); k ∈ T} represents the n-dimensional state vector, u = {u(k); k ∈ T} the m-dimensional control sequence in C^m, w = {w(k); k ∈ T} an r-dimensional disturbance sequence in C^r, y = {y(k); k ∈ T} the p-dimensional sequence of measurable variables, and z = {z(k); k ∈ T} the q-dimensional output sequence. Unless stated otherwise, the spaces used throughout this chapter are defined as in Chapter 2 (for instance, C^m is defined as in Section 2.3).

In the spirit of the deterministic case, we can view w as any unknown finite-energy stochastic disturbance signal which adversely affects the to-be-controlled output z from the desired value 0. The general idea is to attenuate the effect of the disturbances in z via a control action u based on a dynamic feedback. In addition, the feedback control has to be chosen in such a way that the closed loop system Gcl is stabilized in an appropriate notion of stability. The effect of the disturbances on the to-be-controlled output z is then described by a perturbation operator Gcl : w → z, which (for zero initial state) maps any finite energy disturbance signal w (the ℓ2 norm of the signal is finite) into the corresponding finite energy output signal z of the closed loop system. The size of this linear operator is measured by the induced norm ‖Gcl‖, and the larger this norm is, the larger is the effect of the unknown disturbance w on the to-be-controlled output z, i.e., ‖Gcl‖ measures the influence of the disturbances in the worst case scenario. The problem is to determine whether or not, for a given γ > 0, there exists a stabilizing controller achieving ‖Gcl‖ < γ. Moreover, we want to know how such controllers, if they exist, can be constructed.
The standard H∞ design leads to controllers of the worst-case type in the sense that emphasis is focused on minimizing the effect of the disturbances which produce the largest effect on the system output.

In this chapter we shall consider just the case in which y(k) = x(k), and we shall look for state-feedback controllers, that is, controllers of the form u(k) = F_{θ(k)} x(k). For this case we shall denote the closed loop map Gcl by Z_F.

Remark 7.1. In the deterministic linear continuous-time case, the norm ‖Z_F‖ is given by the H∞ norm (which is a norm in the Hardy space H∞, i.e. a space which consists of all functions that are analytic and bounded in the open right half complex plane) of the associated real rational transfer function in the space H∞. This is the reason why the control theory dealing with this problem is known as H∞ control theory. In our context, we use this terminology just to refer to the disturbance attenuation aspect of the problem.

Remark 7.2. Notice that our model allows for unknown stochastic instead of deterministic disturbance signals. This is, in general, a necessary extension of the deterministic case if the underlying system itself is random. This has to do with the fact that, seen in the context of a differential game as a min-max problem (worst-case design), the disturbance that maximizes the problem is given in terms of the state (or output) of the system and therefore is stochastic (restricted to deterministic disturbances, the maximum would not be achieved).
(7.2)

where we assume that both x(k) and θ(k) are known at each time k, and again without loss of generality that conditions (4.26) and (4.27) hold (later on we assume, for notational simplicity, that D'_i D_i = I for each i ∈ N). Our problem can be roughly formulated as follows: design a linear state-feedback control u(k) = F_{θ(k)} x(k) + q(k) such that the closed loop system Z_F : w → z has the following properties: (i) the feedback control F : x → u has to be chosen in such a way that the closed loop system Gcl is mean square stable, and (ii) the operator Z_F is dissipative for some given attenuation level γ > 0 (we can guarantee a prescribed bound γ for the norm of the operator Z_F); i.e., the H∞ problem here can be viewed as a worst-case design problem in the sense that one seeks to guarantee a bound on the ℓ2-norm of the signal z independently of which ℓ2 disturbance signal actually occurs.
Before going into the technicalities of the main result, we need to introduce some basic elements which will be essential in the treatment that we shall be adopting here. First notice that, from (7.2) and the definition of the probabilistic space in Section 2.3, we have that x_k = (x(0), ..., x(k)) ∈ C_k^n for every k ∈ T. For (x0, w, q) ∈ C_0^n × C^r × C^m, θ0 ∈ Θ0, and F = (F1, ..., FN) ∈ H^{m,n}, define the linear operator X_F(θ0, .) from C_0^n × C^r × C^m to C^n as

    X_F(θ0, x0, w, q) ≜ x = (x0, x(1), ...)   (7.3)

and

    q = (q(0), q(1), ...) ∈ C^m.   (7.4)

We need also to define the operators X_F^0(θ0, .) and Z_F^0(θ0, .) in B(C^r, C^n) and B(C^r, C^q), respectively, as:

    X_F^0(θ0, w) ≜ X_F(θ0, 0, w, 0) and Z_F^0(θ0, w) ≜ Z_F(θ0, 0, w, 0).   (7.5)
that is,

    ‖Z_F^0(θ0, w)‖₂² = Σ_{k=0}^{∞} E(‖C_{θ(k)} x(k) + D_{θ(k)} u(k)‖²) < γ² Σ_{k=0}^{∞} E(‖w(k)‖²) = γ² ‖w‖₂².
Theorem 7.3. Suppose that (C, A) is mean square detectable and consider γ > 0 fixed. Then there exists F = (F1, ..., FN) such that it stabilizes (A, B) in the mean square sense, and such that

    sup_{θ0 ∈ Θ0} ‖Z_F^0(θ0, .)‖ < γ,

if and only if there exists P = (P1, ..., PN) ∈ H^{n+} such that:

1) for all i ∈ N,

    P_i = C'_iC_i + A'_iE_i(P)A_i − A'_iE_i(P) [B_i  γ⁻¹G_i] ( [I 0; 0 −I] + [B_i  γ⁻¹G_i]'E_i(P)[B_i  γ⁻¹G_i] )⁻¹ [B_i  γ⁻¹G_i]'E_i(P)A_i
        = C'_iC_i + (A_i + B_iF_i + γ⁻¹G_iU_i)'E_i(P)(A_i + B_iF_i + γ⁻¹G_iU_i) + F'_iF_i − U'_iU_i,

where

    F_i = −(I + B'_iE_i(P)B_i)⁻¹ B'_iE_i(P)(A_i + γ⁻¹G_iU_i),
    U_i = (I − γ⁻²G'_iE_i(P)G_i)⁻¹ (γ⁻¹G'_i)E_i(P)(A_i + B_iF_i);

2) I − γ⁻²G'_iE_i(P)G_i > 0 for all i ∈ N; and

3) rσ(T) < 1, where T is defined as in (3.7) with Γ_i = A_i + B_iF_i + γ⁻¹G_iU_i.
That is,

    F_i = −(I + B'_iE_i(P)B_i − γ⁻²B'_iE_i(P)G_i(I − γ⁻²G'_iE_i(P)G_i)⁻¹G'_iE_i(P)B_i)⁻¹
            (B'_i(I + γ⁻²E_i(P)G_i(I − γ⁻²G'_iE_i(P)G_i)⁻¹G'_i)E_i(P)A_i)
        = −(I + B'_iE_i(P)[I − γ⁻²G_iG'_iE_i(P)]⁻¹B_i)⁻¹ (B'_iE_i(P)[I − γ⁻²G_iG'_iE_i(P)]⁻¹A_i),

    U_i = (I − γ⁻²G'_iE_i(P)G_i + γ⁻²G'_iE_i(P)B_i(I + B'_iE_i(P)B_i)⁻¹B'_iE_i(P)G_i)⁻¹
            (γ⁻¹G'_i(I − E_i(P)B_i(I + B'_iE_i(P)B_i)⁻¹B'_i)E_i(P)A_i)
        = (I − γ⁻²G'_iE_i(P)[I + B_iB'_iE_i(P)]⁻¹G_i)⁻¹ (γ⁻¹G'_iE_i(P)[I + B_iB'_iE_i(P)]⁻¹A_i).
Expanding the expression in 1) we obtain, equivalently,

    P_i = C'_iC_i + F'_iF_i + U'_i(I − γ⁻²G'_iE_i(P)G_i)U_i + (A_i + B_iF_i)'E_i(P)(A_i + B_iF_i).   (7.6)
Recall from 3) that rσ(T) = rσ(L) < 1, where L and T are defined as in (3.8) and (3.7) respectively, with Γ_i = A_i + B_iF_i + γ⁻¹G_iU_i, which can also be written as

    Γ_i = A_i + B_iF_i + γ⁻¹G_iU_i = A_i + [B_i  γ⁻¹G_i] [F_i; U_i].
(7.7)
    r(k) = (I − γ⁻²G'_{θ(k)}E_{θ(k)}(P)G_{θ(k)}) U_{θ(k)} x(k).   (7.8)

Since

    x(k+1) = (A_{θ(k)} + B_{θ(k)}F_{θ(k)}) x(k) + G_{θ(k)} w(k),  x(0) = 0,

we get that ‖P_{θ(k+1)}^{1/2} x(k+1)‖₂² = E(x(k+1)'P_{θ(k+1)}x(k+1)), and, telescoping,

    Σ_{k=0}^{ℓ−1} (‖P_{θ(k+1)}^{1/2} x(k+1)‖₂² − ‖P_{θ(k)}^{1/2} x(k)‖₂²) = ‖P_{θ(ℓ)}^{1/2} x(ℓ)‖₂² ≤ ‖P‖_max ‖x(ℓ)‖₂² → 0,

so that

    0 = Σ_{k=0}^{∞} E(x(k)'P_{θ(k)}x(k) − x(k+1)'P_{θ(k+1)}x(k+1)),

showing (7.7).
Define the operator W̃(θ0, .) ∈ B(C^r) as

    W̃(θ0, .) ≜ γ⁻¹ U X_F^0(θ0, .) − I.

In the next proposition and in the proof of Lemma 7.8 we shall drop, for notational simplicity, the dependence of the operators on θ0.

Proposition 7.7. Suppose that 1), 2) and 3) of Theorem 7.3 hold. Then W̃ is invertible.
Proof. For w ∈ C^r, define the operator Ỹ as Ỹ(w) = (ỹ(0), ỹ(1), ...), where

    ỹ(k+1) = (A_{θ(k)} + B_{θ(k)}F_{θ(k)} + γ⁻¹G_{θ(k)}U_{θ(k)}) ỹ(k) − G_{θ(k)} w(k),  ỹ(0) = 0,

and define W̃inv(w) ≜ s̃ = (s̃(0), s̃(1), ...), where s̃(k) = γ⁻¹U_{θ(k)} ỹ(k) − w(k), k ∈ T. From these definitions it is easy to verify that Ỹ(w) = X_F^0(s̃) and Ỹ(w̃) = X_F^0(w) for w̃ = W̃(w). Let us show that

    W̃ W̃inv = W̃inv W̃ = I.

Indeed,

    W̃ W̃inv(w) = W̃(s̃) = γ⁻¹U X_F^0(s̃) − s̃ = γ⁻¹U Ỹ(w) − (γ⁻¹U Ỹ(w) − w) = w,

and

    W̃inv W̃(w) = W̃inv(w̃) = γ⁻¹U Ỹ(w̃) − w̃ = γ⁻¹U X_F^0(w) − (γ⁻¹U X_F^0(w) − w) = w,

showing that W̃⁻¹ = W̃inv ∈ B(C^r).
Proof. Consider δ > 0 as in 1), and γ1 > (δ/γ) such that ‖W̃(θ0, .)⁻¹‖ ≤ 1/γ1 for every θ0 ∈ Θ0. Since

    ‖w‖₂ = ‖W̃⁻¹W̃(w)‖₂ ≤ (1/γ1) ‖W̃(w)‖₂,

we conclude that

    Σ_{k=0}^{∞} E(‖(I − γ⁻²G'_{θ(k)}E_{θ(k)}(P)G_{θ(k)})^{1/2} (γ⁻¹U_{θ(k)}x(k) − w(k))‖²)
        ≥ δ² Σ_{k=0}^{∞} E(‖γ⁻¹U_{θ(k)}x(k) − w(k)‖²)
        = δ² Σ_{k=0}^{∞} E(‖W̃(w)(k)‖²)
        = δ² ‖W̃(w)‖₂²
        ≥ δ² γ1² ‖w‖₂²,

so that

    ‖Z_F^0(θ0, w)‖₂² ≤ γ²‖w‖₂² − δ²γ1²‖w‖₂² = (γ² − δ²γ1²)‖w‖₂² < γ²‖w‖₂²,

proving the desired result.
Lemma 7.9. Suppose that (C, A) is mean square detectable, there exists K = (K1, ..., KN) ∈ H^{n,m} that stabilizes (A, B), and

    sup_{θ0 ∈ Θ0} ‖Z_K^0(θ0, .)‖ < γ

for some γ > 0. Then there exists P = (P1, ..., PN) ∈ H^{n+} satisfying conditions 1), 2) and 3) of Theorem 7.3.

From Corollary A.16, the fact that (C, A) is mean square detectable, and (A, B) is mean square stabilizable, there exists a unique X = (X1, ..., XN) ∈ H^{n+} such that

    X_i = C'_iC_i + A'_iE_i(X)A_i − A'_iE_i(X)B_i(I + B'_iE_i(X)B_i)⁻¹B'_iE_i(X)A_i
        = C'_iC_i + (A_i + B_iF_i(X))'E_i(X)(A_i + B_iF_i(X)) + F_i(X)'F_i(X),

where F(X) = (F1(X), ..., FN(X)) (defined as in (4.36) with D'_iD_i = I) stabilizes (A, B). For any (x0, w, q) ∈ C_0^n × C^r × C^m, θ0 ∈ Θ0, w = (w(0), ...), q = (q(0), ...), we set (as in (7.3) and (7.4))

    x = (x(0), ...) = X_{F(X)}(θ0, x0, w, q),
    z = (z(0), ...) = Z_{F(X)}(θ0, x0, w, q),
and

    J_γ(θ0, x0, w, q) = Σ_{k=0}^{∞} E(‖C_{θ(k)}x(k)‖² + ‖u(k)‖² − γ²‖w(k)‖²)
        = Σ_{k=0}^{∞} E(‖Q_{θ(k)}^{1/2}x(k)‖² + ‖F_{θ(k)}(X)x(k) + q(k)‖² − γ²‖w(k)‖²)
        = ‖Z_{F(X)}(θ0, x0, w, q)‖₂² − γ²‖w‖₂²,

and we want to solve the following minimax problem:

    Ĵ_γ(θ0, x0) = sup_{w ∈ C^r} inf_{q ∈ C^m} J_γ(θ0, x0, w, q).   (7.10)
Define R(θ0, w) ≜ r = (r(0), r(1), ...), where

    r(k) = E( Σ_{j=0}^{∞} ( Π_{l=k+j}^{k+1} (A_{θ(l)} + B_{θ(l)}F_{θ(l)}(X)) )' X_{θ(k+j+1)} G_{θ(k+j)} w(k+j) | F_k )

for k ≥ 0, and Q(θ0, w) ≜ q̃ = (q̃(0), q̃(1), ...), where

    q̃(k) ≜ −(I + B'_{θ(k)} E_{θ(k)}(X) B_{θ(k)})⁻¹ B'_{θ(k)} r(k),  k ≥ 0.   (7.11)

As seen in Lemma 4.12, Theorem 4.14 and (4.50), we have that R(θ0, .) ∈ B(C^r, C^n), Q(θ0, .) ∈ B(C^r, C^m), and

    J̃_γ(θ0, x0, w) ≜ J_γ(θ0, x0, w, q̃) = inf_{q ∈ C^m} J_γ(θ0, x0, w, q).   (7.12)
We need the following result, due to Yakubovich [222], also presented in [151].

Lemma 7.10. Consider H a Hilbert space and a quadratic form J(ξ) = ⟨Sξ; ξ⟩, ξ ∈ H, with S ∈ B(H) self-adjoint. Let M0 be a closed subspace of H and M a translation of M0 by an element m ∈ H (i.e., M = M0 + m). If

    inf_{ξ ∈ M0, ξ ≠ 0} ⟨Sξ; ξ⟩ / ⟨ξ; ξ⟩ > 0,

then there exists a unique ξ̂ ∈ M such that J(ξ̂) = inf_{ξ ∈ M} J(ξ).
Note from (7.12) that, for any q ∈ C^m and every (x0, w) ∈ C_0^n × C^r,

    ‖Z̃(θ0, x0, w)‖₂² − γ²‖w‖₂² ≤ ‖Z_{F(X)}(θ0, x0, w, q)‖₂² − γ²‖w‖₂².   (7.13)

From Lemma 7.9, there exists ε > 0 such that for all w ∈ C^r,

    sup_{θ0 ∈ Θ0} ‖Z_K^0(θ0, w)‖₂² / ‖w‖₂² < γ² − ε²,   (7.14)

and thus, from (7.13) and (7.14), for every w ∈ C^r and arbitrary θ0 ∈ Θ0,

    ‖Z̃(θ0, 0, w)‖₂² − γ²‖w‖₂² ≤ (γ² − ε²)‖w‖₂² − γ²‖w‖₂² = −ε²‖w‖₂²,

that is,

    inf_{w ∈ C^r} (γ²‖w‖₂² − ‖Z̃(θ0, 0, w)‖₂²) / ‖w‖₂² ≥ ε².   (7.15)
(7.16)

so that

    inf_{ξ ∈ M0} ⟨Sξ; ξ⟩/⟨ξ; ξ⟩ = inf_{w ∈ C^r} (γ²‖w‖₂² − ‖Z̃(θ0, 0, w)‖₂²)/‖w‖₂² ≥ ε² > 0,
and invoking Lemma 7.10, we obtain that there exists a unique element ŵ ∈ M such that

    J̃_γ(θ0, ŵ) = inf_{ξ ∈ M} J̃_γ(θ0, ξ).

Writing ŵ = W(θ0, x0) with W(θ0, .) ∈ B(C_0^n, C^r), we define the operators X̂(θ0, .) and Ẑ(θ0, .)   (7.17)
as:

    X̂(θ0, x0) ≜ X̃(θ0, x0, W(θ0, x0)) = X_{F(X)}(θ0, x0, W(θ0, x0), Q(θ0, W(θ0, x0))) = (x0, x̂(1), ...) = x̂,
    Ẑ(θ0, x0) ≜ Z̃(θ0, x0, W(θ0, x0)) = Z_{F(X)}(θ0, x0, W(θ0, x0), Q(θ0, W(θ0, x0))) = (ẑ(0), ẑ(1), ...) = ẑ,

so that

    Ĵ_γ(θ0, x0) = sup_{w ∈ C^r} inf_{q ∈ C^m} J_γ(θ0, x0, w, q) = ‖Ẑ(θ0, x0)‖₂² − γ²‖W(θ0, x0)‖₂²
        = ⟨Ẑ(θ0, x0); Ẑ(θ0, x0)⟩ − γ²⟨W(θ0, x0); W(θ0, x0)⟩
        = ⟨(Ẑ*Ẑ − γ²W*W)(θ0, x0); x0⟩ = ⟨B(θ0, x0); x0⟩,

where B(θ0, .) ∈ B(C_0^n) is defined as B(θ0, .) = (Ẑ*Ẑ − γ²W*W)(θ0, .). Since for any x0 ∈ C_0^n,

    Ĵ_γ(θ0, x0) = sup_{w ∈ C^r} inf_{q ∈ C^m} J_γ(θ0, x0, w, q)
        = sup_{w ∈ C^r} { ‖Z̃(θ0, x0, w)‖₂² − γ²‖w‖₂² } ≥ ‖Z̃(θ0, x0, 0)‖₂² ≥ 0,
we define the finite horizon problem

    Ĵ_γ^k(θ0, x0) = sup_{w ∈ C^{k,r}} inf_{q ∈ C^{k,m}} J_γ^k(θ0, x0, w, q)   (7.18)
        = sup_{w ∈ C^{k,r}} inf_{q ∈ C^{k,m}} { Σ_{l=0}^{k−1} E(‖Q_{θ(l)}^{1/2}x(l)‖² + ‖F_{θ(l)}(X)x(l) + q(l)‖² − γ²‖w(l)‖²) + E(‖X_{θ(k)}^{1/2}x(k)‖²) }.   (7.20)
We shall now obtain a solution for (7.18) in a recursive way. Define the sequences P^k = (P_1^k, ..., P_N^k), P_i^k ∈ B(C^n), F^k = (F_1^k, ..., F_N^k), F_i^k ∈ B(C^n, C^m), and U^k = (U_1^k, ..., U_N^k), U_i^k ∈ B(C^n, C^r), as:

    P^0 = (P_1^0, ..., P_N^0) ≜ X = (X_1, ..., X_N)

(where X is the mean square stabilizing solution of the CARE (4.35)), and

    P_i^{k+1} ≜ C'_iC_i + A'_iE_i(P^k)A_i − A'_iE_i(P^k)[B_i  γ⁻¹G_i] ( [I 0; 0 −I] + [B_i  γ⁻¹G_i]'E_i(P^k)[B_i  γ⁻¹G_i] )⁻¹ [B_i  γ⁻¹G_i]'E_i(P^k)A_i
        = C'_iC_i + (A_i + B_iF_i^{k+1} + γ⁻¹G_iU_i^{k+1})'E_i(P^k)(A_i + B_iF_i^{k+1} + γ⁻¹G_iU_i^{k+1}) + (F_i^{k+1})'F_i^{k+1} − (U_i^{k+1})'U_i^{k+1},

where

    F_i^{k+1} ≜ −(I + B'_iE_i(P^k)B_i − γ⁻²B'_iE_i(P^k)G_i(I − γ⁻²G'_iE_i(P^k)G_i)⁻¹G'_iE_i(P^k)B_i)⁻¹
            (B'_i(I + γ⁻²E_i(P^k)G_i(I − γ⁻²G'_iE_i(P^k)G_i)⁻¹G'_i)E_i(P^k)A_i),

    U_i^{k+1} ≜ (I − γ⁻²G'_iE_i(P^k)G_i + γ⁻²G'_iE_i(P^k)B_i(I + B'_iE_i(P^k)B_i)⁻¹B'_iE_i(P^k)G_i)⁻¹
            (γ⁻¹G'_i(I − E_i(P^k)B_i(I + B'_iE_i(P^k)B_i)⁻¹B'_i)E_i(P^k)A_i).
The existence of the above inverses will be established in the proof of the proposition below. First we define, for V = (V1, ..., VN) ∈ H^{n+} such that I − γ⁻²G'_iE_i(V)G_i > 0, i ∈ N,

    R(V) ≜ (R1(V), ..., RN(V)),
    D(V) ≜ (D1(V), ..., DN(V)),
    M(V) ≜ (M1(V), ..., MN(V)),
    U(V) ≜ (U1(V), ..., UN(V)),

as

    R_i(V) ≜ C'_iC_i + A'_iE_i(V)A_i − A'_iE_i(V)[B_i  γ⁻¹G_i] ( [I 0; 0 −I] + [B_i  γ⁻¹G_i]'E_i(V)[B_i  γ⁻¹G_i] )⁻¹ [B_i  γ⁻¹G_i]'E_i(V)A_i,

    M_i(V) ≜ I + B'_iE_i(V)B_i,

    D_i(V) ≜ γ² (I − γ⁻²G'_iE_i(V)G_i + γ⁻²G'_iE_i(V)B_i(I + B'_iE_i(V)B_i)⁻¹B'_iE_i(V)G_i),

and

    U_i(V) ≜ G'_i(I − E_i(V)B_i(I + B'_iE_i(V)B_i)⁻¹B'_i)E_i(V)A_i   (7.21)

for i ∈ N.
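The recursion above lends itself directly to computation. The sketch below is our code, not the book's: it performs one step P^k → P^{k+1} using the block form with the indefinite weight diag(I, −I) for [B_i  γ⁻¹G_i], and assumes the notation E_i(P) = Σ_j p_ij P_j. In practice one iterates until the increment is small, checking I − γ⁻²G'_iE_i(P^k)G_i > 0 at each step.

```python
# Sketch of one step of the H-infinity coupled Riccati recursion (Section 7.4).
# Assumed notation: Pr[i][j] = p_ij, A/B/G per-mode matrices, CtC[i] = C_i'C_i.
import numpy as np

def riccati_step(Plist, A, B, G, CtC, Pr, gamma):
    """One step P^k -> P^{k+1} of the recursion, for all modes."""
    N = len(Plist)
    out = []
    for i in range(N):
        Ei = sum(Pr[i][j] * Plist[j] for j in range(N))   # E_i(P^k)
        V = np.hstack([B[i], G[i] / gamma])               # [B_i  gamma^{-1} G_i]
        m, r = B[i].shape[1], G[i].shape[1]
        J = np.block([[np.eye(m), np.zeros((m, r))],
                      [np.zeros((r, m)), -np.eye(r)]])    # diag(I, -I)
        K = np.linalg.solve(J + V.T @ Ei @ V, V.T @ Ei @ A[i])
        out.append(CtC[i] + A[i].T @ Ei @ A[i] - A[i].T @ Ei @ V @ K)
    return out
```

As γ → ∞ the G-dependent blocks vanish and the step reduces to the standard H2 CARE iteration of Chapter 4, which gives a convenient way to test an implementation.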
Proposition 7.12. Consider θ0 ∈ Θ0 fixed. Under the hypothesis of Lemma 7.9, for each k ≥ 0, we have that

1. P^k = (P_1^k, ..., P_N^k) ∈ H^{n+}.
2. γ²I − G'_iE_i(P^k)G_i ≥ ε²I for all i ∈ N and ε > 0 as in (7.16).
3. Ĵ_γ^k(θ0, x0) = J_γ^k(θ0, x0, ŵ^k, q̂^k) = E(x'_0 P^k_{θ0} x_0), where

    ŵ^k = (ŵ^k(0), ..., ŵ^k(k−1), 0, 0, ...),
    q̂^k = (q̂^k(0), ..., q̂^k(k−1), 0, 0, ...),

with

    ŵ^k(l) = γ⁻¹ U^{k−l}_{θ(l)} x̂^k(l),
    q̂^k(l) = (F^{k−l}_{θ(l)} − F_{θ(l)}(X)) x̂^k(l),  l = 0, ..., k−1,

and

    x̂^k = (x̂^k(0), x̂^k(1), ...) = X_{F(X)}(θ0, x0, ŵ^k, q̂^k).
Proof. Let us apply induction on k. From Theorem 4.5, we have that 1) and 3) are clearly true for k = 0. Let us prove now that 2) is satisfied for k = 0. Fix θ0 = i ∈ N and consider w(0) ∈ C^r, w = (w(0), 0, ...) ∈ C^r. Then, from (7.15),

    γ²‖w‖₂² − ‖Z̃(i, 0, w)‖₂² ≥ ε²‖w‖₂².

But

    ‖Z̃(i, 0, w)‖₂² = ‖Z_{F(X)}(θ(1), G_i w(0), 0, 0)‖₂² = w(0)' G'_i E_i(X) G_i w(0)

and ‖w‖₂² = E(‖w(0)‖²), and since i and w(0) are arbitrary, the result is proved for k = 0. Suppose now that the proposition holds for k. For x0 ∈ C_0^n and θ0 ∈ Θ0, we define

    ν^{k+1}(θ0, x0) = sup_{w0 ∈ C_0^r} inf_{q0 ∈ C_0^m} E( ‖C_{θ0}x(0)‖² + ‖F_{θ0}(X)x(0) + q(0)‖² − γ²‖w(0)‖² + x(1)'P^k_{θ(1)}x(1) )   (7.22)

where

    x(1) = A_{θ0}x_0 + B_{θ0}u_0 + G_{θ0}w_0 and u_0 = F_{θ0}(X)x_0 + q_0.
Completing the squares in (7.22) (the required inverses exist by the induction hypothesis), define

    ŵ(V, i, x0) = D_i(V)⁻¹ U_i(V) x0,
    û_0(V, i, x0, w0) = M_i(V)⁻¹ (B'_iE_i(V)A_i x0 + B'_iE_i(V)G_i w0),

which shows that the solution of (7.22) is given by

    u_0 = F_{θ0}(X)x_0 + q_0 = −û_0(P^k, θ0, x_0, w_0)
        = −(I + B'_{θ0}E_{θ0}(P^k)B_{θ0})⁻¹ (B'_{θ0}E_{θ0}(P^k)A_{θ0}x_0 + B'_{θ0}E_{θ0}(P^k)G_{θ0}w_0),
    w_0 = γ⁻¹ U^{k+1}_{θ0} x_0,

and therefore,

    q_0 = −F_{θ0}(X)x_0 − (I + B'_{θ0}E_{θ0}(P^k)B_{θ0})⁻¹ B'_{θ0}E_{θ0}(P^k)(A_{θ0} + γ⁻¹G_{θ0}U^{k+1}_{θ0})x_0
        = (F^{k+1}_{θ0} − F_{θ0}(X))x_0.

We have from (7.21), (7.22), (7.23) and the above that

    ν^{k+1}(θ0, x0) = E(x'_0 P^{k+1}_{θ0} x_0) ≥ 0,
and thus P_i^{k+1} ≥ 0, showing 1). Notice now that, by definition, Ĵ_γ^{k+1}(θ0, x0) ≤ ν^{k+1}(θ0, x0). On the other hand, consider, for any q ∈ C^{k+1,m}, w(q) = (w(q)(0), ..., w(q)(k), 0, 0, ...) ∈ C^{k+1,r} as

    w(q)(l) = γ⁻¹ U^{k+1−l}_{θ(l)} x(q)(l),  l = 0, ..., k,

with

    x(q) = (x(q)(0), x(q)(1), ...) = X_{F(X)}(θ0, x0, w(q), q).

We get that

    ν^{k+1}(θ0, x0) = inf_{q ∈ C^{k+1,m}} { Σ_{l=0}^{k} E(‖C_{θ(l)}x(q)(l)‖² + ‖F_{θ(l)}(X)x(q)(l) + q(l)‖² − γ²‖w(q)(l)‖²) + E(‖X^{1/2}_{θ(k+1)}x(k+1)‖²) }.

Taking the supremum over w ∈ C^{k+1,r} we get from (7.20) that ν^{k+1}(θ0, x0) ≤ Ĵ_γ^{k+1}(θ0, x0), showing 3), that is,

    Ĵ_γ^{k+1}(θ0, x0) = ν^{k+1}(θ0, x0) = E(x'_0 P^{k+1}_{θ0} x_0).
Finally, let us show 2). Consider w(0) = w_0 ∈ C^r and, for l = 1, ..., k+1,

    w(l) = ŵ^{k+1}(l−1),  q(0) = 0,  q(l) = q̂^{k+1}(l−1) = (F^{k+2−l}_{θ(l)} − F_{θ(l)}(X)) x(l).

Then

    ‖Z̃(θ(1), G_i w_0, ŵ^{k+1})‖₂² − γ²‖ŵ^{k+1}‖₂² = E(w'_0 G'_i P^{k+1}_{θ(1)} G_i w_0) − γ²‖w_0‖²

and

    ‖Z̃(i, 0, w)‖₂² − γ²‖w‖₂² = ‖Z̃(θ(1), G_i w_0, ŵ^{k+1})‖₂² − γ²‖ŵ^{k+1}‖₂² − γ²‖w_0‖²,

so that, from (7.15), γ²‖w_0‖² − w'_0 G'_i E_i(P^{k+1}) G_i w_0 ≥ ε²‖w_0‖², and since i and w_0 are arbitrary, 2) follows.
We can now proceed to the proof of Lemma 7.9.
Proof (of Lemma 7.9). Let us show that P_i as defined above satisfies 1), 2) and 3) of Theorem 7.3. Since for every x0 ∈ C^n and i ∈ N,

    x'_0 P_i^k x_0 = Ĵ_γ^k(i, x_0) → Ĵ_γ(i, x_0) = x'_0 P_i x_0 as k → ∞,

we get that P_i^k → P_i as k → ∞, and thus P ∈ H^{n+}. Taking the limit as k → ∞ in Proposition 7.12, we get that P satisfies 1) and 2) of Theorem 7.3. Moreover, from the uniqueness of ŵ established in Proposition 7.11, and the fact that ŵ^k is a maximizing sequence for Ĵ_γ(θ0, x0) (see (7.10)), we can conclude, using the same arguments as in the proof of Proposition 3 in [209], that ŵ^k → ŵ as k → ∞. Continuity of X̃(θ0, x0, .) implies that X̃(θ0, x0, ŵ^k) → X̃(θ0, x0, ŵ) as k → ∞, and thus

    x̂^k = (x̂^k(0), x̂^k(1), ...) = X̃(θ0, x0, ŵ^k) → X̃(θ0, x0, ŵ) = (x̂(0), x̂(1), ...) = x̂.

Therefore, for each l = 0, 1, ...,

    ŵ^k(l) = γ⁻¹ U^{k−l}_{θ(l)} x̂^k(l) → γ⁻¹ U_{θ(l)} x̂(l) as k → ∞.

Similarly,

    Q(θ0, ŵ^k) = q̂^k = (q̂^k(0), q̂^k(1), ...) → Q(θ0, ŵ) = q̂ = (q̂(0), q̂(1), ...) as k → ∞,

so that for each l = 0, 1, ...,

    q̂^k(l) = (F^{k−l}_{θ(l)} − F_{θ(l)}(X)) x̂^k(l) → (F_{θ(l)} − F_{θ(l)}(X)) x̂(l) as k → ∞,

and thus

    X̂(θ0, x0) = (x0, x̂(1), x̂(2), ...)

where

    x̂(k+1) = (A_{θ(k)} + B_{θ(k)}F_{θ(k)} + γ⁻¹G_{θ(k)}U_{θ(k)}) x̂(k),  x̂(0) = x0, θ(0) = θ0.

Since X̂(θ0, .) ∈ B(C_0^n, C^n) (as seen in (7.17)), we get that X̂(θ0, x0) ∈ C^n for any θ0 ∈ Θ0 and x0 ∈ C_0^n, which implies, from Theorem 3.34, that rσ(L) < 1.
Remark 7.13. The condition that (C, A) is mean square detectable (together with mean square stabilizability of (A, B)) is only used to guarantee the existence of the mean square stabilizing solution to the CARE (see Corollary A.16)

    X_i = C'_iC_i + A'_iE_i(X)A_i − A'_iE_i(X)B_i(I + B'_iE_i(X)B_i)⁻¹B'_iE_i(X)A_i.

In view of Theorem A.20 and Remark A.22, (C, A) mean square detectability could be replaced by the following condition. For each i ∈ N, one of the conditions below is satisfied:

1. (C_i, √p_ii A_i) has no unobservable modes inside the closed unitary complex disk, or
2. (C_i, √p_ii A_i) has no unobservable modes over the unitary complex disk, 0 is not an unobservable mode of (C_i, √p_ii A_i), and the quantity defined for mode i in Theorem A.20 is finite.
Since P satisfies 1) of Theorem 7.3, we have R_i(P) = P_i for each i ∈ N. Moreover, for any admissible (w, u),

    Σ_{k=0}^{∞} E(x(k)'P_{θ(k)}x(k) − x(k+1)'P_{θ(k+1)}x(k+1))
        = Σ_{k=0}^{∞} E( E(x(k)'P_{θ(k)}x(k) − x(k+1)'P_{θ(k+1)}x(k+1) | F_k) )
        = Σ_{k=0}^{∞} E(x(k)'P_{θ(k)}x(k) − x(k+1)'E_{θ(k)}(P)x(k+1))
        = Σ_{k=0}^{∞} E(‖C_{θ(k)}x(k)‖² + ‖u(k)‖² − γ²‖w(k)‖²)
          − Σ_{k=0}^{∞} E(‖M_{θ(k)}(P)^{1/2}(u(k) + û_0(P, θ(k), x(k), w(k)))‖²)
          + Σ_{k=0}^{∞} E(‖D_{θ(k)}(P)^{1/2}(w(k) − ŵ(P, θ(k), x(k)))‖²),

and thus

    Σ_{k=0}^{∞} E(‖C_{θ(k)}x(k)‖² + ‖u(k)‖² − γ²‖w(k)‖²)
        = E(x'_0 P_{θ0} x_0)
          + Σ_{k=0}^{∞} E(‖M_{θ(k)}(P)^{1/2}(u(k) + û_0(P, θ(k), x(k), w(k)))‖²)
          − Σ_{k=0}^{∞} E(‖D_{θ(k)}(P)^{1/2}(w(k) − ŵ(P, θ(k), x(k)))‖²).   (7.24)

The maximizing disturbance is ŵ(k) = γ⁻¹ U_{θ(k)} x̂(k), where

    x̂(k+1) = (A_{θ(k)} + B_{θ(k)}F_{θ(k)} + γ⁻¹G_{θ(k)}U_{θ(k)}) x̂(k),  x̂(0) = x0, θ(0) = θ0.   (7.25)
Consider

    w^ℓ = (w^ℓ(0), ..., w^ℓ(ℓ−1), 0, ...),  w^ℓ(k) = D_{θ(k)}(P^{ℓ−k−1})⁻¹ U_{θ(k)}(P^{ℓ−k−1}) x(k),
    q^ℓ = (q^ℓ(0), q^ℓ(1), ...),

for k ≤ ℓ. Set w̃^ℓ = (ŵ(0), ..., ŵ(ℓ−1), 0, ..., 0), that is, w̃^ℓ(k) = ŵ(k) for k = 0, ..., ℓ−1. From the fact that rσ(T) < 1 (condition 3) of Theorem 7.3), we can find a > 0, 0 < b < 1, such that ‖T^k‖ ≤ a b^k. Moreover, we have from Proposition 3.1 that

    E(‖ŵ(k)‖²) = E(‖γ⁻¹U_{θ(k)}x̂(k)‖²) ≤ c2 ‖T^k‖ ‖x0‖₂² ≤ c3 b^k ‖x0‖₂²

and

    ‖ŵ − w̃^ℓ‖₂² = Σ_{k=ℓ}^{∞} E(‖ŵ(k)‖²) ≤ (c3/(1−b)) b^ℓ ‖x0‖₂².   (7.26)

Similarly we can show that ‖Z_{F(X)}(θ0, x0, ŵ, Q(θ0, ŵ))‖₂² ≤ c'3 ‖x0‖₂² for some appropriate constant c'3 > 0. From (7.24) with x0 = 0, we have that

    0 = sup_{w ∈ C^r} { ‖Z_{F(X)}(θ0, 0, w, Q(θ0, w))‖₂² − γ²‖w‖₂² }
      ≥ ‖Z_{F(X)}(θ0, 0, ŵ − w̃^ℓ, Q(θ0, ŵ − w̃^ℓ))‖₂² − γ²‖ŵ − w̃^ℓ‖₂².   (7.27)
Therefore, using (7.26), (7.27) and the Cauchy-Schwarz inequality,

    γ²‖w̃^ℓ‖₂² ≥ ‖Z_{F(X)}(i, x0, ŵ, Q(i, ŵ))‖₂² − γ²‖ŵ‖₂²
        − 2 |⟨Z_{F(X)}(i, x0, ŵ, Q(i, ŵ)); Z_{F(X)}(i, 0, ŵ − w̃^ℓ, Q(i, ŵ − w̃^ℓ))⟩|
        − ‖Z_{F(X)}(i, 0, ŵ − w̃^ℓ, Q(i, ŵ − w̃^ℓ))‖₂²
        ≥ x'_0 P_i x_0 − c4 ‖x0‖₂ ‖ŵ − w̃^ℓ‖₂
        ≥ x'_0 P_i x_0 − c5 ‖x0‖₂² b^{ℓ/2},

which shows that x'_0 P_i^ℓ x_0 → x'_0 P_i x_0 as ℓ → ∞.
8

Design Techniques and Examples

This chapter presents and discusses some applications of the theoretical results introduced earlier. Also, some design-oriented techniques, especially those making use of linear matrix inequalities, are presented here. This final chapter is intended to conclude the book, assembling some problems in the Markov jump context and the tools to solve them.
Fig. 8.1. Solar One energy plant. Courtesy of the National Renewable Energy Laboratory; photo credit Sandia National Laboratory

The stochastic nature of this control problem arises because the system dynamics is heavily dependent on the instantaneous insolation. Cloud movement over the heliostats can cause sudden changes in insolation and can be treated, for practical purposes, as a stochastic process.

From insolation data collected at the site, it was established that the mean duration of a cloudy period was approximately 2.3 min, while the mean interval of direct insolation was 4.3 min. Based on this information, two operation modes were defined: (1) sunny; and (2) cloudy.
With these modes and the mean interval durations associated to them, a transition probability matrix was obtained, which is, for a sample time of 6 s,

    P = [ 0.9767  0.0233 ]
        [ 0.0435  0.9565 ].   (8.1)
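The entries of (8.1) can be checked against the reported mean durations: with a 6 s sample time, the mean sojourn time in a mode with staying probability p is 6/(1 − p) seconds (geometric holding time). A quick check in plain Python:

```python
# Sanity check of the transition matrix (8.1): mean sojourn times should
# match the reported ~4.3 min of sun (mode 1) and ~2.3 min of cloud (mode 2).
P = [[0.9767, 0.0233],
     [0.0435, 0.9565]]

T_s = 6.0  # sampling time in seconds

def mean_sojourn_minutes(p_stay, t_s=T_s):
    """Mean sojourn of a geometric holding time, in minutes: t_s/(1-p)/60."""
    return t_s / (1.0 - p_stay) / 60.0

sunny = mean_sojourn_minutes(P[0][0])   # ~4.29 min of direct insolation
cloudy = mean_sojourn_minutes(P[1][1])  # ~2.30 min of cloud cover
print(round(sunny, 2), round(cloudy, 2))
```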
The thermal receiver is described by the following simplified version of (4.25), given by

    G = { x(k+1) = A_{θ(k)} x(k) + B_{θ(k)} u(k)
          z(k)   = C_{θ(k)} x(k) + D_{θ(k)} u(k) }

with the parameters given in Table 8.1 for θ(k) ∈ {1, 2}.
Table 8.1. Parameters for the solar thermal receiver

                       Sunny             Cloudy
    Plant              A1 = 0.8353       A2 = 0.9646
                       B1 = 0.0915       B2 = 0.0982
    Weights            C'1C1 = 0.0355    C'2C2 = 0.0355
                       D'1D1 = 1         D'2D2 = 1

Table 8.2. Optimal control for the solar thermal receiver

                       Sunny             Cloudy
    CARE solution      X1 = 0.1295       X2 = 0.3603
    Optimal controller F1 = −0.0103      F2 = −0.0331
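For this scalar two-mode system the coupled CARE of Chapter 4 can also be solved by a simple fixed-point iteration. The sketch below is our code and stopping scheme, not the book's convex-programming approach; it reproduces the values of Table 8.2.

```python
# Fixed-point iteration for the coupled control CARE, run on the scalar
# solar-receiver data of Table 8.1 (E_i(X) = sum_j p_ij X_j).
N = 2
P = [[0.9767, 0.0233], [0.0435, 0.9565]]
A = [0.8353, 0.9646]
B = [0.0915, 0.0982]
CtC = [0.0355, 0.0355]   # C_i'C_i
DtD = [1.0, 1.0]         # D_i'D_i

X = [0.0, 0.0]
for _ in range(2000):
    E = [sum(P[i][j] * X[j] for j in range(N)) for i in range(N)]  # E_i(X)
    X = [CtC[i] + A[i] ** 2 * E[i]
         - (A[i] * E[i] * B[i]) ** 2 / (DtD[i] + B[i] ** 2 * E[i])
         for i in range(N)]

E = [sum(P[i][j] * X[j] for j in range(N)) for i in range(N)]
F = [-B[i] * E[i] * A[i] / (DtD[i] + B[i] ** 2 * E[i]) for i in range(N)]
print([round(x, 4) for x in X])  # X ~ [0.1295, 0.3603], as in Table 8.2
print([round(f, 4) for f in F])  # F ~ [-0.0103, -0.0331]
```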
Monte Carlo simulations of the closed loop system are presented in Figure 8.2. The figure contains 2000 possible trajectories for initial condition x(0) = 1. The thick line in the figure is the expected trajectory. As can be seen, an unfavorable sequence of states in the Markov chain may lead to a poor performance (the upper trajectories in the figure), but no other controller with the same structure will present better expected performance.
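A minimal version of this experiment can be sketched as follows (our code; gains taken from Table 8.2 and chain from (8.1), with a fixed seed for reproducibility):

```python
# Monte Carlo sketch of the closed-loop solar receiver, in the spirit of
# Figure 8.2: simulate x(k+1) = (A_i + B_i F_i) x(k) along sampled Markov
# chain paths and average over 2000 runs.
import random

P = [[0.9767, 0.0233], [0.0435, 0.9565]]
A = [0.8353, 0.9646]
B = [0.0915, 0.0982]
F = [-0.0103, -0.0331]                      # gains from Table 8.2
Acl = [A[i] + B[i] * F[i] for i in range(2)]

random.seed(0)

def one_path(steps=200, x0=1.0, mode0=0):
    x, mode, traj = x0, mode0, []
    for _ in range(steps):
        traj.append(x)
        x = Acl[mode] * x
        mode = 0 if random.random() < P[mode][0] else 1
    return traj

paths = [one_path() for _ in range(2000)]
mean_traj = [sum(p[k] for p in paths) / len(paths) for k in range(200)]
print(mean_traj[0], mean_traj[-1])  # starts at 1.0, decays toward 0
```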
8.1.2 Optimal Policy for the National Income with a Multiplier-Accelerator Model

Samuelson's multiplier-accelerator model, published in 1939 [196], is possibly the first dynamic model based on economic theories to address the problem of income determination and the business cycle. A very interesting application of MJLS to economic modeling employing the multiplier-accelerator model is presented in [28] and here is slightly adapted to fit our framework.
Fig. 8.2. Monte Carlo simulations for the solar thermal receiver with optimal control (x(k) versus time [min])
Table 8.3 describes the operation modes adopted for the model: Norm (national income in mid-range), Boom, and Slump.
Table 8.4. Parameters for the multiplier-accelerator model   (8.3)

             Norm                        Boom                          Slump
    Plant    A1 = [ 0        1     ]     A2 = [ 0         1      ]     A3 = [ 0        1     ]
                  [ 2.5      3.2   ]          [ 43.7      45.4   ]          [ 5.3      5.2   ]
             B1 = [ 0; 1 ]               B2 = [ 0; 1 ]                 B3 = [ 0; 1 ]
    Weights  C'1C1 = [ 3.6   3.8  ]      C'2C2 = [ 10   3 ]            C'3C3 = [ 5    4.5 ]
                     [ 3.8   4.87 ]              [ 3    8 ]                    [ 4.5  4.5 ]
             D'1D1 = 2.6                 D'2D2 = 1.165                 D'3D3 = 1.111
The optimal policy, like in the thermal solar receiver example, can be determined by solving a convex programming problem (see Appendix A, Problem A.11 and Theorem A.12), which yields the results of Table 8.5.
Table 8.5. Optimal control for the multiplier-accelerator model

                        Norm                          Boom                          Slump
    CARE solution       X1 = [ 18.6616  18.9560 ]     X2 = [ 30.8818  21.6010 ]     X3 = [ 35.4175  38.6129 ]
                             [ 18.9560  28.1085 ]          [ 21.6010  36.2739 ]          [ 38.6129  36.2739 ]
    Optimal controller  F1 = [ 2.3172  2.3317 ]       F2 = [ 4.1684  3.7131 ]       F3 = [ 5.1657  5.7933 ]
Fig. 8.3. Monte Carlo simulations for the multiplier-accelerator model with optimal policy (x2(k) versus time)
chain and the system states were perfectly known. In Chapter 6 a framework was developed that could account for partial noisy observations of the state vector. It was shown there that we could design independently the optimal controller and an optimal filter, benefiting from a separation principle.

Here we will assume that the thermal receiver is subject to noise as described in Chapter 6. We will use the same controller of Subsection 8.1.1 in series with a stationary Markov filter. System and control data are the ones given in Tables 8.1 and 8.2 respectively. Table 8.6 presents the parameters for this filter.
Table 8.6. Optimal filter for the solar thermal receiver

                      Sunny              Cloudy
    Noise             G1 = [ 0.05  0 ]   G2 = [ 0.03  0 ]
                      H1 = [ 0  0.2 ]    H2 = [ 0  0.1 ]
    CARE solution     Y1 = 0.0040        Y2 = 0.0012
    Optimal filter    M1 = 0.1113        M2 = 0.2436
Monte Carlo simulations of the closed loop system, similar to those presented in Figure 8.2, are presented in Figure 8.4. The figure contains 2000 possible trajectories for initial condition x(0) = 1, where the thick line is the expected trajectory.

Fig. 8.4. Monte Carlo simulations for the solar thermal receiver with partial observations
There are enough reasons that contribute to make LMI techniques such an attractive tool for design. To mention a few:

1. There are efficient algorithms to solve LMIs. Once a problem is expressed as an LMI, it is usually simple to get a numerical solution to it. In an area still as widely unexplored as MJLS, a general straightforward way to solve general (and eventually new) problems is highly valuable.
2. Some control and filtering problems involving MJLS can only be solved, with our current knowledge on the area, via LMI approximations.
3. The LMI framework allows for the convenient introduction of several useful enhancements in control or filter design, like the inclusion of robustness, uncertainties, special structures for the filters or controllers, restrictions on variables, etc., that would be very difficult, or even impossible, to account for using other techniques.

In this section we present a small sample of basic LMI tricks and tools involving problems considered earlier in this book or complementing them. A comprehensive description of MJLS design problems benefiting from LMI techniques would be far beyond the scope of this book, and necessarily incomplete, for the literature on the subject grows wider every year.

Since there is no perfect design technique, the reader must beware of intrinsic limitations on the use of LMIs, in control and filter design in general and, specifically, when concerning MJLS.

First, LMI techniques are just a tool to obtain solutions for some control and filtering problems. They do not give further insight or improve knowledge on the problems themselves.

Second, many problems are not intrinsically convex, so their description as LMIs involves some degree of approximation. The effects are usually the necessity of using sometimes unnatural or not very useful models in order to keep the problems convex or, which may be more serious, conservativeness of the attained solution. In the following this will be put in evidence for the presented problems.

Even considering these (and other) disadvantages, LMI techniques are still very attractive design tools, and for many problems the current state-of-the-art approach is to use LMIs. Also, for some of them, it is the only known effective way of obtaining a solution.
8.2.1 Robust H2 Control

The results presented earlier in Chapter 4 regarding the H2 control of MJLS, which lead to the equivalence of this control problem to a convex programming problem, can be modified in order to include uncertainties, both in the transition probability matrix and in the system matrices. This is possible by including these uncertainties as restrictions in the associated optimization problem.
In order to keep the convexity of the optimization problem, the uncertainties must be described in an adequate manner, with structures that can be inserted in the problem formulation without affecting its basic properties. Therefore for this section we will consider the following version of System (1.3) with uncertainties,

    G = { x(k+1) = (A_{θ(k)} + ΔA_{θ(k)}) x(k) + (B_{θ(k)} + ΔB_{θ(k)}) u(k) + G_{θ(k)} w(k)
          z(k)   = C_{θ(k)} x(k) + D_{θ(k)} u(k)
          x0 ∈ C_0^n, θ(0) = θ0 ∈ Θ0 }   (8.4)

for k = 0, 1, ..., where the uncertainties given by ΔA_i, ΔB_i satisfy the following norm-bounded conditions for i ∈ N and appropriate matrices A_{li}, A_{ri}, B_{li}, B_{ri}:

    ΔA_i = A_{li} Δ_i A_{ri},  ΔB_i = B_{li} Δ_i B_{ri},  Δ'_i Δ_i ≤ I.
The use of norm-bounded expressions is a convenient way to include relatively realistic and reasonable uncertainties while still keeping the problem convex.
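A small numerical illustration of why this structure is convenient (the factor matrices below are made-up placeholders, not data from the book): every admissible ΔA_i = A_{li} Δ_i A_{ri} with Δ'_i Δ_i ≤ I has spectral norm at most ‖A_{li}‖ ‖A_{ri}‖, so the perturbation is uniformly bounded.

```python
# Norm-bounded uncertainty sketch: sampled Delta with ||Delta|| <= 1 always
# gives ||Al @ Delta @ Ar|| <= ||Al|| * ||Ar||. Al and Ar are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
Al = np.array([[0.2], [0.1]])   # hypothetical left factor (2x1)
Ar = np.array([[0.5, 0.3]])     # hypothetical right factor (1x2)
bound = np.linalg.norm(Al, 2) * np.linalg.norm(Ar, 2)

worst = 0.0
for _ in range(200):
    delta = rng.uniform(-1.0, 1.0, size=(1, 1))  # scalar uncertainty, |delta| <= 1
    dA = Al @ delta @ Ar                          # a sampled Delta A
    worst = max(worst, np.linalg.norm(dA, 2))

print(worst <= bound + 1e-12)  # True: the perturbation never exceeds the bound
```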
Also, the transition probability matrix P associated with the Markov chain is assumed to be not exactly known, but to belong to a polytope defined by

    P = { P; P = Σ_{t=1}^{κ} λ_t P^t, λ_t ≥ 0, Σ_{t=1}^{κ} λ_t = 1 }.   (8.5)
The cost function is written, as in Chapter 4, in terms of W = (W1, ..., WN),

    λ(W) ≜ Σ_{i=1}^{N} tr( [ C'_iC_i  0 ; 0  D'_iD_i ] [ W_{i1}  W_{i2} ; W'_{i2}  W_{i3} ] ),

while the restrictions given by the convex set are redefined. The new restrictions are expressed in terms of a new convex set Ω, given by

    Ω = ∩_{t=1}^{κ} Ω^t,
    Ω^t = { W = (W1, ..., WN); for i ∈ N, W_i = [ W_{i1}  W_{i2} ; W'_{i2}  W_{i3} ] ≥ 0, W_{i1} > 0, H_i^t(W) ≥ 0 },

where for i, j ∈ N,

    M_i(W) = [ A_{ri} W_{i1} A'_{ri}    A_{ri} W_{i2} B'_{ri} ]
             [ B_{ri} W'_{i2} A'_{ri}   B_{ri} W_{i3} B'_{ri} ],

    Z_j^t(W) = Σ_{i=1}^{N} p^t_{ij} (A_i W_{i1} A'_i + B_i W'_{i2} A'_i + A_i W_{i2} B'_i + B_i W_{i3} B'_i + π_i G_i G'_i),

and H_j^t(W) is the block matrix

    H_j^t(W) = [ W_{j1} − Z_j^t(W)   X^t_{1j}(W)    ...   X^t_{Nj}(W)  ]
               [ X^t_{1j}(W)'        I − M_1(W)     ...   0            ]
               [ ...                 ...            ...   ...          ]
               [ X^t_{Nj}(W)'        0              ...   I − M_N(W)   ],

with all matrices assumed real.
Proposition 8.1 presents an auxiliary result that will be needed in Theorem 8.2, given in the following, which is the main result in this subsection.

Proposition 8.1. Suppose that W ∈ Ω. Then for j ∈ N,

    W_{j1} ≥ Σ_{i=1}^{N} p_{ij} ( (A_i + ΔA_i + (B_i + ΔB_i)F_i) W_{i1} (A_i + ΔA_i + (B_i + ΔB_i)F_i)' + π_i G_i G'_i ),

where P = [p_{ij}] ∈ P, ΔA_i = A_{li}Δ_iA_{ri}, ΔB_i = B_{li}Δ_iB_{ri}, Δ_i are such that Δ'_iΔ_i ≤ I, and F_i = W'_{i2} W_{i1}⁻¹ for i ∈ N.
Proof. If W ∈ Ω then W ∈ Ω^t for each t = 1, ..., κ. From the Schur complement (see Lemma 2.23), H_j^t(W) ≥ 0 if and only if

    diag(I − M_1(W), ..., I − M_N(W)) ≥ 0   (8.6)

and

    W_{j1} − Z_j^t(W) ≥ [ X^t_{1j}(W) ... X^t_{Nj}(W) ] diag(I − M_1(W), ..., I − M_N(W))^{†} [ X^t_{1j}(W) ... X^t_{Nj}(W) ]',   (8.7)

together with the corresponding range condition for the generalized inverse.
Moreover,

    Z_j^t(W) = Σ_{i=1}^{N} p^t_{ij} (A_i W_{i1} A'_i + B_i W'_{i2} A'_i + A_i W_{i2} B'_i + B_i W_{i3} B'_i + π_i G_i G'_i).   (8.8)

Writing the uncertain terms through the factorizations ΔA_i = A_{li}Δ_iA_{ri}, ΔB_i = B_{li}Δ_iB_{ri} and the square root of the block matrix

    [ A_{ri} W_{i1} A'_{ri}    A_{ri} W_{i2} B'_{ri} ]
    [ B_{ri} W'_{i2} A'_{ri}   B_{ri} W_{i3} B'_{ri} ],   (8.9)

and using the properties of the generalized inverse, one obtains a bound (8.12) on the cross terms between the nominal and the uncertain parts, for i ∈ N.
From (8.8) and (8.12), and recalling that W_{i3} ≥ W'_{i2} W_{i1}⁻¹ W_{i2} for i ∈ N, we get

    W_{j1} ≥ Σ_{i=1}^{N} p^t_{ij} ( (A_i + ΔA_i + (B_i + ΔB_i)F_i) W_{i1} (A_i + ΔA_i + (B_i + ΔB_i)F_i)' + π_i G_i G'_i )   (8.13)

for j ∈ N. Since p_{ij} = Σ_{t=1}^{κ} λ_t p^t_{ij} for some λ_t ≥ 0 with Σ_{t=1}^{κ} λ_t = 1, and (8.13) is satisfied for every t = 1, ..., κ (recall that W ∈ ∩_{t=1}^{κ} Ω^t), we have, after multiplying by λ_t and taking the sum over t = 1, ..., κ, the desired result.
Notice that for the case in which there are uncertainties neither on A nor on B, nor on the transition probability P (that is, A_{li} = 0, B_{li} = 0, A_{ri} = 0, B_{ri} = 0, P = {P}), the restriction H_j^1(W) ≥ 0 reduces to

    0 ≤ W_{j1} − Z_j^1(W) = W_{j1} − Σ_{i=1}^{N} p_{ij} (A_i W_{i1} A'_i + B_i W'_{i2} A'_i + A_i W_{i2} B'_i + B_i W_{i3} B'_i + π_i G_i G'_i).
Theorem 8.2. Suppose that W ∈ Ω. Then for F = (F1, ..., FN) with F_i = W'_{i2} W_{i1}⁻¹, i ∈ N, we have that System G_F is MSS and

    ‖G_F‖₂² ≤ λ(W).

Proof. From Proposition 8.1 we have, for j ∈ N,

    W_{j1} ≥ Σ_{i=1}^{N} p_{ij} ( (A_i + ΔA_i + (B_i + ΔB_i)F_i) W_{i1} (A_i + ΔA_i + (B_i + ΔB_i)F_i)' + π_i G_i G'_i ).

Let us write W1 = (W_{11}, ..., W_{N1}). From Proposition 8.1 we have W1 ≥ T(W1), and from Theorem 3.9 we get that System G_F is MSS. From Proposition 4.9 we have KY(W) ≤ W, and finally from Proposition 4.8,

    ‖G_F‖₂² = λ(KY(W)) ≤ λ(W),

completing the proof of the theorem.
Example 8.3 (including robustness on H2 control design). This simple example, borrowed from [72], illustrates how the LMI framework presented above can be used to include reasonable uncertainties in the control design. Consider the system with parameters given by Table 8.7.

Table 8.7. Parameters for the robust H2 control design
Mode 1: A1 = [0 1; 2.2308+ 2.5462+], B1 = [0; 1], C1 = [1.5049 1.0709; 1.0709 1.6160; 0 0], D1 = [0; 0; 1.6125], G1 = I2
Mode 2: A2 = [0 1; 38.9103+ 2.5462+], B2 = [0; 1], C2 = [10.2036 10.3952; 10.3952 11.2819; 0 0], D2 = [0; 0; 1.0794], G2 = I2
Mode 3: A3 = [0 1; 4.6384+ 4.7455+], B3 = [0; 1], C3 = [1.7335 1.2255; 1.2255 1.6639; 0 0], D3 = [0; 0; 1.0540], G3 = I2
Since this problem does not consider uncertainties and therefore is in the
framework of Chapter 4, the solution above is nonconservative.
P1 = [0.51 0.25 0.24; 0.14 0.55 0.31; 0.10 0.18 0.72]
P2 = [0.83 0.09 0.08; 0.46 0.39 0.15; 0.42 0.02 0.56]
P3 = [0.50 0.25 0.25; 0.20 0.50 0.30; 0.30 0.30 0.40]
P4 = [1 0 0; 0 1 0; 0 0 1]
Notice that the transition probability matrix used in Case 1 also belongs to this polytope. The robust controller is given by
F1 = [2.2167 1.7979]
F2 = [38.8861 39.1083]
F3 = [4.6279 5.4857]
which yield the results shown in Table 8.7 for the system given by (8.4). The robust controller obtained is
F1 = [2.2281 2.4440]
F2 = [38.8998 39.7265]
F3 = [4.6360 4.8930]
Due to the higher degree of uncertainty, the H2 norm is much higher than in the previous cases.
8.2.2 Robust Mixed H2/H∞ Control
The robust control for the H2 control problem, presented in the previous subsection, was obtained as an extension of the results of Chapter 4. This subsection presents an approximation for the mixed H2/H∞ control problem, also based on convex programming techniques, but with an independent development.
The results presented here can be seen, when restricted to the deterministic case, as analogous to those of [130].
We consider the following version of System (1.3):

x(k+1) = A_{θ(k)} x(k) + B_{θ(k)} u(k) + G w(k)
z(k) = C_{θ(k)} x(k) + D_{θ(k)} u(k)
x(0) = 0, θ(0) = θ0
which is slightly different from the one considered in Subsection 4.3.1. Here we will not consider uncertainties on the system matrices, as in Subsection 8.2.1, and, most importantly, in the present case matrix G is assumed constant over the values taken by the Markov chain. This restriction is embedded in the development of the proof of Theorem 8.5, as will be seen in this subsection. We will also assume (see Remark 4.1) that
1. D_i' D_i > 0 and
2. C_i' D_i = 0 for i ∈ N.
As before, we will denote by G_F the system above when u(k) = F_{θ(k)} x(k). As in the previous subsection all matrices will be assumed real. The following proposition, presented in [59], will be useful for further developments.
Proposition 8.4. Suppose (C, A) is mean square detectable and X = (X_1, …, X_N) ∈ Hn and F = (F_1, …, F_N) ∈ Hn,m satisfy (8.15). Then F stabilizes (A, B) in the mean square sense.
Proof. From the hypothesis that (C, A) is mean square detectable (see Definition 3.41), there is J = (J_1, …, J_N) ∈ Hq,n such that r_σ(L) < 1 for the operator L defined as in (3.7) with Γ_i = A_i + J_i C_i. We define

B̃_i = [B_i (D_i' D_i)^{-1/2}   J_i]

and

K̃_i = [0 ; C_i].

Clearly B̃_i K̃_i = J_i C_i. Define also

F̃_i = [(D_i' D_i)^{1/2} F_i ; 0]

so that B̃_i F̃_i = B_i F_i. Then

K̃_i + F̃_i = [(D_i' D_i)^{1/2} F_i ; C_i]
Theorem 8.5. Suppose that X = (X_1, …, X_N) ∈ Hn+ and F = (F_1, …, F_N) ∈ Hn,m satisfy (8.16) for some γ > 0. Then System G_F is mean square stable,

sup_{θ0 ∈ N} ‖Z F(θ0, ·)‖ ≤ γ,  and  ‖G_F‖_2^2 ≤ Σ_{i=1}^N μ_i tr{G' X_i G}.
Proof. Comparing (8.15) and (8.16), it is immediate from Proposition 8.4 that F stabilizes (A, B) in the mean square sense. Set Γ_i = A_i + B_i F_i and O_i(F) = C_i' C_i + F_i' D_i' D_i F_i for i ∈ N. From Theorem 3.34, x = (0, x(1), …) ∈ C^n for every w = (w(0), w(1), …) ∈ C^r. So we get from (8.16), after completing the squares and using E(X_{θ(k+1)} | F_k) = E_{θ(k)}(X), that

E(x(k+1)' X_{θ(k+1)} x(k+1)) − E(x(k)' X_{θ(k)} x(k)) + ‖z(k)‖_2^2 ≤ γ^2 ‖w(k)‖_2^2.

Notice that this argument is valid only if matrix G does not depend on θ(k). Taking the sum from k = 0 to ∞, and recalling that x(0) = 0 and that ‖x(k)‖_2^2 → 0 as k → ∞, we get for z = (z(0), z(1), …),

‖z‖_2^2 ≤ γ^2 ‖w‖_2^2,

so that

sup_{θ0 ∈ N} ‖Z F(θ0, ·)‖ ≤ γ.
Moreover,

‖G_F‖_2^2 = Σ_{i=1}^N μ_i tr{G' E_i(S^o) G},

where S^o = (S^o_1, …, S^o_N) satisfies

S^o_i = Γ_i' E_i(S^o) Γ_i + O_i(F),

so that

‖G_F‖_2^2 = Σ_{i=1}^N μ_i tr{G' E_i(S^o) G} ≤ Σ_{i=1}^N μ_i tr{G' E_i(X) G}.
The theorem above suggests the following approach to solve the mixed H2/H∞ control problem: for γ > 0 fixed, find X = (X_1, …, X_N) ≥ 0 and F = (F_1, …, F_N) that minimize Σ_{i=1}^N μ_i tr{G' E_i(X) G} subject to (8.16) and such that the closed-loop system is mean square stable.
As will be shown with the next results, indeed a convex approximation for this problem can be obtained, and the resulting LMI optimization problem has an adequate structure for the inclusion of uncertainties. We will consider uncertainties in the transition probability matrix P in the same manner as in Subsection 8.2.1, that is, we will assume that P is not exactly known, but that P ∈ P, where P is as given in (8.5). We define now the following problem:
Problem 8.6. Set ζ = γ^2. Find real 0 < X = (X_1, …, X_N) ∈ Hn, 0 < Q = (Q_1, …, Q_N) ∈ Hn, 0 < L = (L_1, …, L_N) ∈ Hn and Y = (Y_1, …, Y_N) ∈ Hn,m such that

λ = min Σ_{i=1}^N μ_i tr{G' E_i(X) G}   (8.17a)
subject to

[ Q_i              Q_i A_i' + Y_i' B_i'   Q_i C_i'   Y_i' D_i'   G
  A_i Q_i + B_i Y_i   L_i                 0          0           0
  C_i Q_i             0                   I          0           0
  D_i Y_i             0                   0          I           0
  G'                  0                   0          0           ζ I ] ≥ 0  for i ∈ N   (8.17b)

[ L_i                    (p^t_{i1})^{1/2} L_i   ⋯   (p^t_{iN})^{1/2} L_i
  (p^t_{i1})^{1/2} L_i    Q_1                        0
  ⋮                                       ⋱
  (p^t_{iN})^{1/2} L_i    0                          Q_N ] ≥ 0  for i ∈ N, t = 1, …, κ   (8.17c)

[ X_i   I
  I     Q_i ] ≥ 0  for i ∈ N.   (8.17d)
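The LMI (8.17d) encodes, via the Schur complement (Lemma 2.23), the constraint X_i ≥ Q_i^{-1}. A quick numerical check of this fact, with hypothetical matrices:

```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])      # Q > 0
X = np.linalg.inv(Q) + 0.1 * np.eye(2)      # X > Q^{-1} by construction

# Block matrix [X I; I Q]: the Schur complement of the Q-block is X - Q^{-1}
M = np.block([[X, np.eye(2)], [np.eye(2), Q]])
eigs = np.linalg.eigvalsh(M)
print(eigs.min() >= 0)   # True: the block matrix is positive semidefinite
```

Conversely, taking X with an eigenvalue below those of Q^{-1} makes the smallest eigenvalue of the block matrix negative.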
The following theorem makes the connection between Optimization Problem 8.6 and the control problem.
Theorem 8.7. Suppose Problem 8.6 has a solution given by (X, Q, L, Y) and set F = (F_1, …, F_N) ∈ Hn,m with F_i = Y_i Q_i^{-1} for i ∈ N. Then System G_F is mean square stable,

‖G_F‖_2 ≤ λ^{1/2}

and

sup_{θ0 ∈ N} ‖Z F(θ0, ·)‖ ≤ γ

for every P ∈ P.
Proof. First of all notice that (8.17b)–(8.17d) are equivalent (from the Schur complement, Lemma 2.23) to

Q_i − Q_i (A_i + B_i F_i)' L_i^{-1} (A_i + B_i F_i) Q_i − Q_i (C_i + D_i F_i)' (C_i + D_i F_i) Q_i − ζ^{-1} G G' ≥ 0   (8.18a)

L_i ≥ L_i (Σ_{j=1}^N p^t_{ij} Q_j^{-1}) L_i  for t = 1, …, κ   (8.18b)

X_i ≥ Q_i^{-1}   (8.18c)

so that, since Σ_{j=1}^N p_{ij} Q_j^{-1} ≤ Σ_{j=1}^N p_{ij} X_j = E_i(X), pre- and post-multiplying (8.18a) by Q_i^{-1} yields

(A_i + B_i F_i)' L_i^{-1} (A_i + B_i F_i) + (C_i + D_i F_i)' (C_i + D_i F_i) + ζ^{-1} Q_i^{-1} G G' Q_i^{-1} ≤ Q_i^{-1}.
Notice that the theorem above presents only a sufficient condition, and so any solution to the control problem obtained with it might be a conservative one, although mean square stability is guaranteed. Also, if there is no solution to the optimization problem, it does not mean that there is no solution to the control problem.
8.2.3 Robust H∞ Control
The Optimization Problem 8.6 considered in the previous subsection can be directly adapted to the H∞ case. In fact, given the results above, it is a simpler problem, as presented below.
Problem 8.8. Find ζ ∈ R+, 0 < Q = (Q_1, …, Q_N) ∈ Hn real, 0 < L = (L_1, …, L_N) ∈ Hn real, and Y = (Y_1, …, Y_N) ∈ Hn,m real, such that

ζ* = min {ζ}   (8.19a)

subject to

[ Q_i              Q_i A_i' + Y_i' B_i'   Q_i C_i'   Y_i' D_i'   G
  A_i Q_i + B_i Y_i   L_i                 0          0           0
  C_i Q_i             0                   I          0           0
  D_i Y_i             0                   0          I           0
  G'                  0                   0          0           ζ I ] ≥ 0  for i ∈ N   (8.19b)

[ L_i                    (p^t_{i1})^{1/2} L_i   ⋯   (p^t_{iN})^{1/2} L_i
  (p^t_{i1})^{1/2} L_i    Q_1                        0
  ⋮                                       ⋱
  (p^t_{iN})^{1/2} L_i    0                          Q_N ] ≥ 0  for i ∈ N, t = 1, …, κ.   (8.19c)
Since we are not concerned with the value of the H2 norm, X was withdrawn from the problem and so we have no equivalent to the LMI (8.17d) of Problem 8.6. It is easy to see that this problem also leads to a mean square stabilizing solution (just notice that only LMIs (8.17b) and (8.17c) are related to stability in Problem 8.6).
Although the solution might be conservative, as in the previous subsection, here we are concerned with finding a minimal upper bound for the disturbance gain, while in Chapter 7 we were only interested in finding a solution for a fixed value of the upper bound. Also, using this LMI approach, it is possible to include uncertainties in the problem, as seen.
Linearization points (q1, q2, q3) for the operation modes.
The transition probability matrix has the block structure

P = [ PAAA   Pf      P0
      P0     PAPuA   Ps
      P0     Ps      PAPlA ].   (8.20)

The matrix was partitioned into 8 × 8 blocks according to the possible configurations. The blocks are given by
PAAA =
[ 0.89 0.10 0    0    0    0    0    0
  0.10 0.79 0.10 0    0    0    0    0
  0    0.10 0.79 0.10 0    0    0    0
  0    0    0.10 0.79 0.10 0    0    0
  0    0    0    0.10 0.79 0.10 0    0
  0    0    0    0    0.10 0.79 0.10 0
  0    0    0    0    0    0.10 0.79 0.10
  0    0    0    0    0    0    0.10 0.89 ]

Pf = 0.01 I (the 8 × 8 identity matrix scaled by 0.01), P0 = 0 (the 8 × 8 zero matrix), and

PAPuA =
[ 0.78 0.20 0    0    0    0    0    0
  0.10 0.78 0.10 0    0    0    0    0
  0    0.10 0.78 0.10 0    0    0    0
  0    0    0.10 0.78 0.10 0    0    0
  0    0    0    0.10 0.78 0.10 0    0
  0    0    0    0    0.10 0.78 0.10 0
  0    0    0    0    0    0.10 0.78 0.10
  0    0    0    0    0    0    0.20 0.78 ]

with

PAPlA = PAPuA

and

Ps = 2 Pf.
The blocks are arranged such that PAAA presents the transition probabilities between operation modes for normal operation and Pf presents the probability of faults occurring while in normal operation. If a fault occurs, the system may only move to configuration APuA, and then it may move between APuA and APlA according to the probabilities given by Ps.
Rigorously speaking, the switching between APuA and APlA is not intrinsically stochastic, but since it is quite complex to assess this operation sequence a priori on a general basis, it is not at all unreasonable to assume it is a Markov chain.
The models for the 24 operation modes are presented in Tables 8.9, 8.10 and 8.11. It is assumed that Gi = Bi for i = 1, …, 24. The system state is formed by the angular position plus the angular velocity of each joint.
The obtained H∞ controller is presented in Table 8.12. For this controller, γ = 27.6.
Figure 8.6 presents the system response to a 20° step on the set-point for the three joints. On t = 4 s, a fault on joint 2 is artificially induced, changing the configuration from AAA to APuA. The associated sequence of the Markov chain is presented in Figure 8.7.
In Figure 8.8, the estimated applied torque on each joint is shown. [200] reports that the performance of this controller compares favorably with other control designs employed in similar experiments.
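A closed-loop simulation of the kind shown in Figures 8.6–8.8 amounts to sampling the Markov chain and applying the mode-dependent dynamics at each step. A minimal sketch with hypothetical two-mode data (not the robot model of Tables 8.9–8.12):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode closed-loop matrices A_i + B_i F_i (illustrative only)
Acl = [np.array([[0.9, 0.1], [0.0, 0.8]]),
       np.array([[0.7, 0.0], [0.2, 0.6]])]
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # transition matrix [p_ij]

x = np.array([1.0, 0.0])
theta = 0
traj = [x.copy()]
modes = [theta]
for k in range(50):
    x = Acl[theta] @ x                        # switched closed-loop dynamics
    theta = rng.choice(2, p=P[theta])         # sample next Markov state
    traj.append(x.copy())
    modes.append(theta)

print(np.linalg.norm(traj[-1]))  # state decays when the closed loop is MSS
```

Here both modes are stable, so every sample path of the state decays; for an MJLS only the mean square behavior is guaranteed in general.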
Fig. 8.6. Joint position [°] versus time [s] for joints 1, 2 and 3.
Table 8.9. Model parameters A_i, B_i, C_i and D_i for operation modes 1 to 8.
Table 8.10. Model parameters A_i, B_i, C_i and D_i for operation modes 9 to 16.
Table 8.11. Model parameters A_i, B_i, C_i and D_i for operation modes 17 to 24.
Table 8.12. H∞ controller gains F_i for the 24 operation modes.
Fig. 8.7. Markov chain state versus time [s].
Fig. 8.8. Estimated applied torque [Nm] versus time [s] for joints 1, 2 and 3.
parison, we also present simulations of the IMM filter [32], which is a time-variant suboptimal filter for MJLS.
8.4.1 Stationary LMMSE Filter
In order to implement the stationary LMMSE filter, we solve the system of linear equations (5.58) with unique solution Qi, i ∈ N, we plug Q = (Q1, …, QN) into (5.60) and solve the corresponding ARE to obtain P. The stationary LMMSE estimator x̂(k|k) is given by (5.43), where ẑ(k|k) satisfies the time-invariant recursive equations (5.45) replacing Z(k|k−1) by P. Notice that all these calculations can be performed off-line. Consider a scalar MJLS described by the following equations,

x(k + 1) = a_{θ(k)} x(k) + g_{θ(k)} w(k)
y(k) = l_{θ(k)} x(k) + h_{θ(k)} w(k)   (8.21)

with x(0) Gaussian with mean 10 and variance 10, θ(k) ∈ {1, 2}, {w(k)} an independent noise sequence, and π1(0) = π2(0) = 0.5. Table 8.13 presents 6 cases with different parameters ai, gi, li, hi and pij. For each case, 4000 Monte Carlo simulations were performed and both filters (stationary LMMSE and IMM) were compared under the same conditions. The results obtained for
Table 8.13. Simulation parameters

Case  p11    p22   a1     a2     g1       g2       l1   l2   h1       h2
1     0.975  0.95  0.995  0.99   [0.1 0]  [0.1 0]  1.0  1.0  [0 5.0]  [0 5.0]
2     0.995  0.99  0.995  0.995  [0.5 0]  [0.5 0]  1.0  0.8  [0 0.8]  [0 0.8]
3     0.975  0.95  0.995  0.995  [0.1 0]  [5.0 0]  1.0  1.0  [0 1.0]  [0 1.0]
4     0.975  0.95  0.995  0.25   [1.0 0]  [1.0 0]  1.0  1.0  [0 1.0]  [0 1.0]
5     0.975  0.95  0.995  0.25   [0.1 0]  [0.1 0]  1.0  1.0  [0 5.0]  [0 5.0]
6     0.975  0.95  0.995  0.25   [0.1 0]  [5.0 0]  1.0  1.0  [0 5.0]  [0 5.0]
the above configuration are in Figure 8.9, showing the square root of the mean square error (rms) for each of the 6 cases studied with three types of noise distribution: normal, uniform and exponential. Both IMM and stationary LMMSE filter results are presented in the figure.
Fig. 8.9. Comparison between IMM (solid line) and stationary LMMSE (dashed line) filters: rms error versus k for cases 1 to 6 under normal, uniform and exponential noise.
For the robust filter, the vertices of the polytope are built from the transition probabilities and the extreme values of the uncertain parameters:

A_t = [ p11 a1   p12 a2
        p21 a1   p22 a2 ],  t = 1, …, 4,

with (a1, a2) running over the four combinations of the extreme values a1^1, a1^2 and a2^1, a2^2 listed below.
Case  p11    p22   a1     a2     g1       g2       l1   l2   h1       h2
1     0.975  0.95  0.995  0.99   [0.1 0]  [0.1 0]  1.0  1.0  [0 5.0]  [0 5.0]
2     0.975  0.95  0.5    0.199  [1.0 0]  [1.0 0]  1.0  1.0  [0 1.0]  [0 1.0]

a1    a2    a1^1  a2^1  a1^2  a2^2
0.95  0.95  0.3   0.95  0.3   0.95
Figure 8.10 shows the mean square root errors for 4000 Monte Carlo simulations.
There are some important differences between the IMM filter and the robust LMMSE filter. The LMI-based design method of the latter may produce conservative filters, with poorer performance than the IMM filter. On the other hand, the robust LMMSE filter is a time-invariant mean square stable filter, therefore not requiring on-line calculations for its implementation, while the IMM filter is time-variant and provides no stability guarantees.
Fig. 8.10. Comparison between IMM (dashed line) and robust LMMSE (solid line) filters: rms error versus k for cases 1 to 4.
A
Coupled Algebraic Riccati Equations
The results in this appendix are concerned with coupled Riccati difference equations and the associated coupled algebraic Riccati equations, which are used throughout this book. We deal, essentially, with the existence of solutions and asymptotic convergence. Regarding the existence of solutions, we are particularly interested in maximal and stabilizing solutions. The appeal of the maximal solution has to do with the fact that it can be obtained numerically via a certain LMI optimization problem. Although in control and filtering applications the interest lies essentially in the stabilizing solution, it is shown that the two concepts of solution coincide whenever the stabilizing solution exists.
We recall now the definitions of (mean square) stabilizability and detectability (see Definitions 3.40 and 3.41 respectively), conveniently modified for our purposes.
Definition A.1. Let A = (A_1, …, A_N) ∈ Hn, B = (B_1, …, B_N) ∈ Hm,n. We say that (A, B, s) is stabilizable if there is F = (F_1, …, F_N) ∈ Hn,m such that r_σ(T) < 1 when Γ_i = A_i + B_i F_i in (3.7) for i ∈ N. In this case, F is said to stabilize (A, B, s).
Definition A.2. Let L = (L_1, …, L_N) ∈ Hn,p. We say that (s, L, A) is detectable if there is M = (M_1, …, M_N) ∈ Hp,n such that r_σ(T) < 1 when Γ_i = A_i + M_i L_i in (3.7) for i ∈ N. In this case, M is said to stabilize (s, L, A).
Therefore (A, B, s) stabilizability is equivalent, when s_ij = p_ij, to mean square stabilizability according to Definition 3.40. Similarly, (s, L, A) detectability is equivalent, when s_ij = p_ij, to mean square detectability according to Definition 3.41. The next proposition shows the duality between these two concepts.
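The condition r_σ(T) < 1 in Definitions A.1 and A.2 can be tested numerically: the operator T_j(V) = Σ_i s_ij Γ_i V_i Γ_i' has a matrix representation built from Kronecker products, as sketched below with hypothetical two-mode data:

```python
import numpy as np

# Mean square stability test: matrix representation of the operator
# T(V)_j = sum_i s_ij Gamma_i V_i Gamma_i' via vec(G V G') = (G kron G) vec(V).
Gamma = [np.array([[0.6, 0.2], [0.0, 0.5]]),
         np.array([[0.4, 0.0], [0.1, 0.7]])]
S = np.array([[0.9, 0.1], [0.2, 0.8]])      # s_ij (here a transition matrix)

N, n = len(Gamma), Gamma[0].shape[0]
# Block (j, i) of the N*n^2 x N*n^2 matrix is s_ij * (Gamma_i kron Gamma_i)
M = np.zeros((N * n * n, N * n * n))
for j in range(N):
    for i in range(N):
        M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = S[i, j] * np.kron(Gamma[i], Gamma[i])

r_sigma = max(abs(np.linalg.eigvals(M)))
print(r_sigma < 1)   # True: this closed loop is mean square stable
```

The same construction with Γ_i = A_i + M_i L_i tests detectability-type conditions; the spectral radius of M equals r_σ(T).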
Proposition A.3. (A, B, s) is stabilizable if and only if (s', B', A') is detectable.
Proof. From Definition A.1, there exists F = (F_1, …, F_N) ∈ Hn,m such that r_σ(T) < 1 when Γ_i = A_i + B_i F_i in (3.7) for each i ∈ N. From (3.9), the operator V is defined, for V = (V_1, …, V_N) ∈ Hn, as

V_i(V) = (A_i' + F_i' B_i') (Σ_{j=1}^N s_ij V_j) (A_i' + F_i' B_i')'
W ≜ {(X, λ) ∈ Hn × R^N ; λ_i ≥ 0, i ∈ N, and B_i' E_i(X) B_i + λ_i D_i' D_i is nonsingular for each i ∈ N}.
Denition A.4. We dene
F(., .) : W Hm ,
X (., .) : W Hn ,
Y(., .) : W Hn ,
in the following way; for (X, ) W, = (1 , . . . , N ), with i > 0
for each i N, X = (X1 , . . . , XN ), and F = (F1 , . . . , FN ) Hm,n ,
F(X, ) = (F1 (X, ), . . . , FN (X, )), X (X, ) = (X1 (X, ), . . . , XN (X, ))
and Y(X, ) = (Y1 (X, ), . . . , YN (X, )) are dened as
Fi (X, ) (Bi Ei (X)Bi + i Di Di )1 Bi Ei (X)Ai ,
Xi (X, ) Ai Ei (X)Ai + i Ci Ci
*
+1
Ai Ei (X)Bi i Di Di + Bi Ei (X)Bi
Bi Ei (X)Ai ,
and
Yi (X, )
;
sij Aj Xj Aj + j Cj Cj
j=1
*
+1
<
Aj Xj Bj j Dj Dj + Bj Xj Bj
Bj Xj Aj
(A.2)
(A.3)
Consider the coupled Riccati difference equations defined recursively for k = 0, 1, … as

X(k + 1) = X(X(k), λ(k)),   (A.3)

where X(0) ∈ Hn+, together with the associated coupled algebraic Riccati equation

X = X(X).   (A.4)

Notice that from the identity (A.2) it is easy to check by induction on k that X(k) ∈ Hn+ for all k = 0, 1, … .
Associated to the recursive equation (A.3), we have another set of coupled Riccati difference equations defined recursively for k = 0, 1, … as

Y(k + 1) = Y(Y(k), λ(k)),   (A.5)

with Y(0) ∈ Hn+. As before, it is easy to check from the identity (A.2) that Y(k) ∈ Hn+ for all k = 0, 1, … . We will also be interested in studying the asymptotic behavior of Y(k) to a Y ∈ Hn+ satisfying

Y = Y(Y).   (A.6)
Suppose now that 2) holds. For any X(0) ∈ Hn+, set Y(0) = E(X(0)) ∈ Hn+. Then according to 1), lim_{k→∞} Y(k) = Y ∈ Hn+ with Y satisfying Y = Y(Y). It is easy to verify that

X_i(k + 1) = A_i' Y_i(k) A_i + λ_i(k) C_i' C_i
− A_i' Y_i(k) B_i (λ_i(k) D_i' D_i + B_i' Y_i(k) B_i)^{-1} B_i' Y_i(k) A_i,
Y_i(k) = E_i(X(k)),

and taking the limit as k → ∞ we conclude that lim_{k→∞} X(k) = X ∈ Hn+, with X = X(X).
Finally, suppose that X ∈ Hn+ satisfies X = X(X). Then by defining Y = E(X) ∈ Hn+ we get that Y = Y(Y), and if F(X) stabilizes (A, B, s) then clearly K(Y) stabilizes (A, B, s). On the other hand, if Y ∈ Hn+ satisfies Y = Y(Y) then, by defining X ∈ Hn+ as

X_i = A_i' Y_i A_i + λ_i C_i' C_i − A_i' Y_i B_i (λ_i D_i' D_i + B_i' Y_i B_i)^{-1} B_i' Y_i A_i
    = (A_i + B_i F_i(Y))' Y_i (A_i + B_i F_i(Y)) + λ_i C_i' C_i + λ_i F_i(Y)' D_i' D_i F_i(Y),

we conclude that Y = E(X) and thus X = X(X). Clearly if K(Y) stabilizes (A, B, s) then F(X) stabilizes (A, B, s).
We can see now how (A.3), (A.4), (A.5) and (A.6) are related to the control and filtering Riccati equations (4.14), (4.35), (5.13) and (5.31). For the control equations (4.14) and (4.35) we just have to make s_ij = p_ij, λ_i(k) = 1, λ_i = 1 in (A.3) and (A.4) respectively. We have that the filtering equations (5.13) (with J(k) = N and p_ij, A, L, G, H time-invariant) and (5.31) can be seen as dual of (A.5) and (A.6) respectively when s_ij = p_ij, λ_i(k) = π_i(k), λ_i = π_i, after we make the following correspondence between the two problems:

s ↔ s'
A_i ↔ A_i'
B_i ↔ L_i'
C_i ↔ G_i'
D_i ↔ H_i'.   (A.7)

This means that if we take the coupled Riccati equations (A.5) and (A.6) with s_ij = p_ij, λ_i(k) = π_i(k), λ_i = π_i, and relabel s_ij, A_i, B_i, C_i and D_i by respectively s_ji, A_i', L_i', G_i', and H_i', we get precisely (5.13) (with J(k) = N and p_ij, A, L, G, H time-invariant) and (5.31). The same relabelling applied to K_i(Y) produces M_i(Y) in (5.32). Thus the coupled Riccati equations (A.5) and (A.6), and (5.13) and (5.31), are the same in all but notation. From Proposition A.5 we have that the facts deduced from (A.3) and (A.4) can also be applied to the filtering coupled Riccati equations (5.13) and (5.31).
Notice that to have the assumption λ_i(k) > 0 satisfied we could iterate (5.13) up to a time k0 such that π_i(k) > 0 for all k > k0 (such k0 exists since, by the ergodic assumption (Assumption 3.31 in Chapter 3) made in Chapters 4 and 6 for the infinite horizon problems, π_i(k) → π_i > 0 exponentially fast for each i ∈ N), and consider λ_i(k) = π_i(k + k0), and Y(k0) from (5.13) as the initial condition for (A.5).
In the next sections we study the control coupled Riccati equations (A.3) and (A.4). As mentioned before, from Proposition A.5, the results also hold for the coupled Riccati equations (A.5) and (A.6). As seen in (A.7), the dual of (A.5) and (A.6) leads to the filtering equations (5.13) and (5.31), and therefore the results derived for the control coupled Riccati equations can also be used for the filtering coupled Riccati equations.
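The control recursion (A.3) with λ_i(k) = 1 can be sketched numerically as a fixed-point iteration, with the gains recovered via (A.1); the two-mode data below are hypothetical:

```python
import numpy as np

# Hypothetical two-mode data; lambda_i = 1 corresponds to the control equations
A = [np.array([[1.05, 0.2], [0.0, 0.9]]), np.array([[0.9, 0.0], [0.2, 0.95]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.5]])]
C = [np.eye(2), np.eye(2)]
D = [np.array([[1.0]]), np.array([[1.0]])]
P = np.array([[0.7, 0.3], [0.4, 0.6]])   # s_ij = p_ij

N = len(A)

def Eop(X, i):
    return sum(P[i, j] * X[j] for j in range(N))

def riccati_step(X):
    """One application of X_i(X) in (A.2) with lambda_i = 1."""
    Xn = []
    for i in range(N):
        E = Eop(X, i)
        R = D[i].T @ D[i] + B[i].T @ E @ B[i]
        Xn.append(A[i].T @ E @ A[i] + C[i].T @ C[i]
                  - A[i].T @ E @ B[i] @ np.linalg.solve(R, B[i].T @ E @ A[i]))
    return Xn

X = [np.zeros((2, 2)) for _ in range(N)]
for _ in range(300):
    X = riccati_step(X)

# Gains F_i(X) of (A.1) at the (approximate) fixed point
F = [-np.linalg.solve(D[i].T @ D[i] + B[i].T @ Eop(X, i) @ B[i],
                      B[i].T @ Eop(X, i) @ A[i]) for i in range(N)]
print([Xi.shape for Xi in X], [Fi.shape for Fi in F])
```

Starting from X(0) = 0, the iterates are monotone nondecreasing, and they converge whenever (A, B, s) is stabilizable and the detectability-type conditions of this appendix hold.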
Lemma A.7. Suppose that X ∈ L and that, for some F̂ = (F̂_1, …, F̂_N) ∈ Hn,m, X̂ = (X̂_1, …, X̂_N) ∈ Hn satisfies for i ∈ N

X̂_i − (A_i + B_i F̂_i)' E_i(X̂)(A_i + B_i F̂_i) = O_i(F̂).

1. Then, for i ∈ N,

(X̂_i − X_i) − (A_i + B_i F̂_i)' E_i(X̂ − X)(A_i + B_i F̂_i)
= X_i(X) − X_i + (F̂_i − F_i(X))' R_i(X)(F̂_i − F_i(X)).   (A.8a)

2. Moreover, if X̂ ∈ L then, for i ∈ N,

(X̂_i − X_i) − (A_i + B_i F_i(X̂))' E_i(X̂ − X)(A_i + B_i F_i(X̂))
= X_i(X) − X_i + (F_i(X̂) − F_i(X))' R_i(X)(F_i(X̂) − F_i(X))
+ (F̂_i − F_i(X̂))' R_i(X̂)(F̂_i − F_i(X̂)).   (A.8b)

3. Furthermore, if Ŷ = (Ŷ_1, …, Ŷ_N) ∈ Hn satisfies, for i ∈ N,

Ŷ_i − (A_i + B_i F_i(X̂))' E_i(Ŷ)(A_i + B_i F_i(X̂)) = O_i(F(X̂)),   (A.8c)

then for i ∈ N,

(X̂_i − Ŷ_i) − (A_i + B_i F_i(X̂))' E_i(X̂ − Ŷ)(A_i + B_i F_i(X̂))
= (F̂_i − F_i(X̂))' R_i(X̂)(F̂_i − F_i(X̂)).   (A.8d)

For the next lemma, set Γ_i = A_i + B_i F_i, i ∈ N, for some F = (F_1, …, F_N) ∈ Hn,m. Consider also the operator L̃(·) = (L̃_1(·), …, L̃_N(·)) ∈ B(Hn) defined as

L̃_i(·) ≜ Λ_i' E_i(·) Λ_i, i ∈ N,   (A.9)

where Λ_i = A_i + B_i K_i, i ∈ N, for some K = (K_1, …, K_N) ∈ Hn,m.
The next lemma provides a crucial result for the development of this appendix.
Lemma A.8. Let L and L̃ be as defined in (3.8) and (A.9), with Γ_i = A_i + B_i F_i and Λ_i = A_i + B_i K_i, i ∈ N. Suppose that r_σ(L) < 1 and that, for some X = (X_1, …, X_N) ≥ 0 and some scalar δ > 0, condition (A.10) holds. Then r_σ(L̃) < 1.

Proof. Set T̃ = L̃' (the adjoint of L̃), so that for any V = (V_1, …, V_N) ∈ Hn,

T̃_j(V) = Σ_{i=1}^N s_ij Λ_i V_i Λ_i', j ∈ N.   (A.11)
Since Λ_i = Γ_i + B_i(K_i − F_i), we have for any V = (V_1, …, V_N) ≥ 0 and ξ > 0,

Σ_{i=1}^N s_ij Λ_i V_i Λ_i' ≤ (1 + ξ^2) Σ_{i=1}^N s_ij Γ_i V_i Γ_i'
+ (1 + 1/ξ^2) Σ_{i=1}^N s_ij B_i (K_i − F_i) V_i (K_i − F_i)' B_i'
= (1 + ξ^2) T_j(V) + (1 + 1/ξ^2) Q_j(V)   (A.13)

with Q_j(V) ≜ Σ_{i=1}^N s_ij B_i (K_i − F_i) V_i (K_i − F_i)' B_i'. Set

T̂(·) = (1 + ξ^2) T(·)  and  Q̂(·) = (1 + 1/ξ^2) Q(·).
Then for t = 0, 1, 2, …,

Z(t) − Y(t) ≥ 0.   (A.15)

Since r_σ(T) < 1 we can choose ξ > 0 such that ρ ≜ (1 + ξ^2) r_σ(T) < 1, and thus for some a ≥ 1, ‖T̂^t‖_1 ≤ a ρ^t for all t. Writing Z(t) = T̂^t(Y(0)) + Σ_{s=0}^{t−1} T̂^{t−1−s}(Q̂(Y(s))), it follows that

Σ_{t=0}^∞ ‖Z(t)‖_1 ≤ (a/(1 − ρ)) (‖Y(0)‖_1 + Σ_{s=0}^∞ ‖Q̂(Y(s))‖_1) < ∞

provided Σ_{s=0}^∞ ‖Q̂(Y(s))‖_1 < ∞. Therefore, from (A.14) and (A.15), for any Y(0) = (Y_1(0), …, Y_N(0)) ≥ 0,

Σ_{t=0}^∞ ‖T̃^t(Y(0))‖_1 = Σ_{t=0}^∞ ‖Y(t)‖_1 ≤ Σ_{t=0}^∞ ‖Z(t)‖_1 < ∞.
Indeed, for a suitable constant c_0 > 0 depending only on ξ, B_max = max_i ‖B_i‖ and the constant in (A.10), we have

Σ_{s=0}^τ ‖Q̂(Y(s))‖_1 ≤ c_0 Σ_{s=0}^τ ⟨Y(s); X − L(X)⟩
= c_0 Σ_{s=0}^τ {⟨Y(s); X⟩ − ⟨Y(s); L(X)⟩}
= c_0 Σ_{s=0}^τ {⟨Y(s); X⟩ − ⟨T(Y(s)); X⟩}
= c_0 Σ_{s=0}^τ {⟨Y(s); X⟩ − ⟨Y(s + 1); X⟩}
= c_0 {⟨Y(0); X⟩ − ⟨Y(τ + 1); X⟩} ≤ c_0 ⟨Y(0); X⟩ < ∞.
1. $X^l \in \mathbb{H}^{n+}$; (A.16a)
2. $X^0 \geq X^1 \geq \cdots \geq X^l \geq \tilde{X}$ for any arbitrary $\tilde{X} \in \mathbb{M}$; (A.16b)
3. $r_\sigma(\mathcal{L}^l) < 1$, where $\mathcal{L}_i^l(\cdot) \triangleq A_i^{l*} \mathcal{E}_i(\cdot) A_i^l$ with $A_i^l \triangleq A_i + B_i F_i^l$, $F_i^l \triangleq F_i(X^{l-1})$, for $l = 1, 2, \ldots$; (A.16c)
4. $X_i^l - A_i^{l*} \mathcal{E}_i(X^l) A_i^l = \mathcal{O}_i(F^l)$, $i \in \mathbb{N}$. (A.16d)

Moreover there exists $X^+ = (X_1^+, \ldots, X_N^+) \in \mathbb{H}^{n+}$ such that $X^+ = \mathcal{X}(X^+)$, $\tilde{X} \leq X^+$ for any $\tilde{X} \in \mathbb{M}$, and $X^l \to X^+$ as $l \to \infty$. Furthermore $r_\sigma(\mathcal{L}^+) \leq 1$, where $\mathcal{L}^+(\cdot) = (\mathcal{L}_1^+(\cdot), \ldots, \mathcal{L}_N^+(\cdot))$ is defined as $\mathcal{L}_i^+(\cdot) \triangleq A_i^{+*} \mathcal{E}_i(\cdot) A_i^+$ for $i \in \mathbb{N}$, with
$$F_i^+ = F_i(X^+), \qquad A_i^+ = A_i + B_i F_i^+.$$
Proof. Let us apply induction on $l$ to show that (A.16) is satisfied. Consider an arbitrary $\tilde{X} \in \mathbb{M}$. By the hypothesis that $(\mathbb{A}, \mathbb{B}, s)$ is stabilizable we can find $F^0$ such that $r_\sigma(\mathcal{L}^0) < 1$, where $\mathcal{L}^0(\cdot) = (\mathcal{L}_1^0(\cdot), \ldots, \mathcal{L}_N^0(\cdot))$ and $\mathcal{L}_i^0(\cdot) = A_i^{0*} \mathcal{E}_i(\cdot) A_i^0$ with $A_i^0 = A_i + B_i F_i^0$. Thus, from Proposition 3.20, there exists a unique $X^0 = (X_1^0, \ldots, X_N^0) \in \mathbb{H}^n$ solution of
$$X_i^0 - A_i^{0*} \mathcal{E}_i(X^0) A_i^0 = \mathcal{O}_i(F^0), \quad i \in \mathbb{N}.$$
From Lemma A.7,
$$(X_i^0 - \tilde{X}_i) - A_i^{0*} \mathcal{E}_i(X^0 - \tilde{X}) A_i^0 = \mathcal{X}_i(\tilde{X}) - \tilde{X}_i + (F_i^0 - F_i(\tilde{X}))^* R_i(\tilde{X}) (F_i^0 - F_i(\tilde{X})) \geq 0,$$
and since $r_\sigma(\mathcal{L}^0) < 1$, Proposition 3.20 yields $X^0 \geq \tilde{X}$. Suppose now, by induction, that $\tilde{X} \leq X^{k-1} \leq \cdots \leq X^0$ and $r_\sigma(\mathcal{L}^{k-1}) < 1$, and set $F_i^k \triangleq F_i(X^{k-1})$, $A_i^k \triangleq A_i + B_i F_i^k$.
Condition (A.10) of Lemma A.8 holds (with $\Lambda_i = A_i^{k-1}$, $\tilde{\Lambda}_i = A_i^k$ and $X = X^{k-1}$), and from Lemma A.8, $r_\sigma(\mathcal{L}^k) < 1$. Let $X^k \in \mathbb{H}^{n+}$ be the unique solution of (see Proposition 3.20, and recall that $\mathcal{O}(F^l) \in \mathbb{H}^{n+}$)
$$X_i^k - A_i^{k*} \mathcal{E}_i(X^k) A_i^k = \mathcal{O}_i(F^k), \quad i \in \mathbb{N}.$$
From Lemma A.7 again,
$$(X_i^k - \tilde{X}_i) - A_i^{k*} \mathcal{E}_i(X^k - \tilde{X}) A_i^k = \mathcal{X}_i(\tilde{X}) - \tilde{X}_i + (F_i^k - F_i(\tilde{X}))^* R_i(\tilde{X}) (F_i^k - F_i(\tilde{X})) \geq 0,$$
and since $r_\sigma(\mathcal{L}^k) < 1$, we get from Proposition 3.20 that $X^k \geq \tilde{X}$. Equation (A.8d) in Lemma A.7 yields
$$(X_i^{k-1} - X_i^k) - A_i^{k*} \mathcal{E}_i(X^{k-1} - X^k) A_i^k = (F_i^k - F_i^{k-1})^* R_i^{k-1} (F_i^k - F_i^{k-1})$$
for $i \in \mathbb{N}$, which shows, from the fact that $r_\sigma(\mathcal{L}^k) < 1$, that $(F_i^k - F_i^{k-1})^* R_i^{k-1} (F_i^k - F_i^{k-1})$ is positive semidefinite for each $i \in \mathbb{N}$ and, by Proposition 3.20, that $X^k \leq X^{k-1}$, so that $\tilde{X} \leq X^k \leq X^{k-1}$. This completes the induction argument for (A.16). Since $\{X^l\}_{l=0}^{\infty}$ is a decreasing sequence with $X^l \geq \tilde{X}$ for all $l = 0, 1, \ldots$, there exists $X^+ \in \mathbb{H}^{n+}$ such that (see [216], p. 79) $X^l \to X^+$ as $l \to \infty$. Clearly $X^+ \geq \tilde{X}$. Moreover, substituting $F_i^l = F_i(X^{l-1})$ into (A.16d) and taking the limit as $l \to \infty$, we get that for $i \in \mathbb{N}$,
$$0 = X_i^+ - (A_i + B_i F_i(X^+))^* \mathcal{E}_i(X^+)(A_i + B_i F_i(X^+)) - \mathcal{O}_i(\mathcal{F}(X^+)).$$
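The iteration $X^l = \mathcal{X}(X^{l-1})$ underlying the proof can be run numerically. Below is a minimal scalar sketch of a fixed-point iteration for a coupled Riccati map of the generic form $\mathcal{X}_i(X) = A_i^* \mathcal{E}_i(X) A_i + C_i^* C_i - A_i^* \mathcal{E}_i(X) B_i (R_i + B_i^* \mathcal{E}_i(X) B_i)^{-1} B_i^* \mathcal{E}_i(X) A_i$, with $\mathcal{E}_i(X) = \sum_j s_{ij} X_j$; the data are illustrative and the iteration is started from zero rather than from the $X^0$ of the theorem:

```python
import numpy as np

def riccati_map(X, A, B, C, R, S):
    # One application of the coupled scalar Riccati map; E_i(X) = sum_j s_ij X_j.
    E = S @ X
    return A**2 * E + C**2 - (A * E * B)**2 / (R + B**2 * E)

A = np.array([0.9, 1.2])      # scalar mode dynamics (illustrative)
B = np.array([1.0, 1.0])
C = np.array([1.0, 1.0])
R = np.array([1.0, 1.0])
S = np.array([[0.8, 0.2],
              [0.3, 0.7]])

X = np.zeros(2)
for _ in range(200):
    X = riccati_map(X, A, B, C, R, S)

residual = np.max(np.abs(riccati_map(X, A, B, C, R, S) - X))
print(X, residual)  # converged fixed point of the coupled CARE
```

For this data the iterates increase monotonically from zero and settle at a positive fixed point, mirroring the monotone-convergence argument of the theorem (with the opposite monotonicity direction).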
$$\max \sum_{i=1}^N \operatorname{tr}(X_i) \tag{A.17a}$$
subject to
$$\begin{bmatrix} -X_i + A_i^* \mathcal{E}_i(X) A_i + C_i^* C_i & A_i^* \mathcal{E}_i(X) B_i \\ B_i^* \mathcal{E}_i(X) A_i & B_i^* \mathcal{E}_i(X) B_i + D_i^* D_i \end{bmatrix} \geq 0, \tag{A.17b}$$
$$B_i^* \mathcal{E}_i(X) B_i + D_i^* D_i > 0. \tag{A.17c}$$
$$\operatorname{tr}(X_1^+ - \hat{X}_1) + \cdots + \operatorname{tr}(X_N^+ - \hat{X}_N) \leq 0.$$
This, together with the fact that $X_1^+ - \hat{X}_1 \geq 0, \ldots, X_N^+ - \hat{X}_N \geq 0$, can only hold if
$$X_1^+ = \hat{X}_1, \ldots, X_N^+ = \hat{X}_N.$$
Theorem A.12 presents a connection between the solution of Problem A.11 and the maximal solution of the CARE (A.4): they coincide. Thus Problem A.11 provides a viable numerical route to the maximal solution of the CARE (A.4).
$$(\tilde{X}_i - X_i) - \tilde{A}_i^* \mathcal{E}_i(\tilde{X} - X) \tilde{A}_i = Z_i(X) + (\tilde{F}_i - F_i)^* R_i (\tilde{F}_i - F_i) \tag{A.18}$$
for $i \in \mathbb{N}$. From the fact that $Z_i(X) \geq 0$ and $R_i > 0$, $i \in \mathbb{N}$, we can find $\gamma > 0$ such that for $i \in \mathbb{N}$,
$$Z_i(X) + (\tilde{F}_i - F_i)^* R_i (\tilde{F}_i - F_i) \geq \gamma \big( Z_i(X) + (\tilde{F}_i - F_i)^* (\tilde{F}_i - F_i) \big). \tag{A.19}$$
From Definition A.2 and the detectability hypothesis, we can find $H = (H_1, \ldots, H_N)$ such that $r_\sigma(\mathcal{L}) < 1$, where $\mathcal{L}(\cdot) = (\mathcal{L}_1(\cdot), \ldots, \mathcal{L}_N(\cdot))$ is as in (3.8) with $\Lambda_i = A_i + B_i F_i + H_i Z_i(X)^{1/2}$, $i \in \mathbb{N}$. Define $\tilde{\Gamma} = (\tilde{\Gamma}_1, \ldots, \tilde{\Gamma}_N) \in \mathbb{H}^{n,n+m}$, $\Gamma = (\Gamma_1, \ldots, \Gamma_N) \in \mathbb{H}^{n,n+m}$, and $\Delta = (\Delta_1, \ldots, \Delta_N) \in \mathbb{H}^{n+m,n}$ as
$$\tilde{\Gamma}_i \triangleq \begin{bmatrix} 0 \\ \tilde{F}_i \end{bmatrix}, \qquad \Gamma_i \triangleq \begin{bmatrix} Z_i(X)^{1/2} \\ F_i \end{bmatrix}, \qquad \Delta_i \triangleq \begin{bmatrix} H_i & B_i \end{bmatrix}.$$
Then it is easy to verify that for $i \in \mathbb{N}$,
$$A_i + \Delta_i \Gamma_i = A_i + B_i F_i + H_i Z_i(X)^{1/2} = \Lambda_i, \qquad A_i + \Delta_i \tilde{\Gamma}_i = A_i + B_i \tilde{F}_i = \tilde{A}_i,$$
and
$$(\Gamma_i - \tilde{\Gamma}_i)^* (\Gamma_i - \tilde{\Gamma}_i) = Z_i(X) + (\tilde{F}_i - F_i)^* (\tilde{F}_i - F_i),$$
so that, from (A.18) and (A.19),
$$(\tilde{X}_i - X_i) - \tilde{A}_i^* \mathcal{E}_i(\tilde{X} - X) \tilde{A}_i \geq \gamma\, (\Gamma_i - \tilde{\Gamma}_i)^* (\Gamma_i - \tilde{\Gamma}_i).$$
Setting $\tilde{\mathcal{L}}(\cdot) = (\tilde{\mathcal{L}}_1(\cdot), \ldots, \tilde{\mathcal{L}}_N(\cdot))$, $\tilde{\mathcal{L}}_i(\cdot) = \tilde{A}_i^* \mathcal{E}_i(\cdot) \tilde{A}_i$ for $i \in \mathbb{N}$, and recalling that $\tilde{X} - X \geq 0$, we get from Lemma A.8 that $r_\sigma(\tilde{\mathcal{L}}) < 1$. $\square$

Moreover, from the fact that $Z_i(0) = C_i^* C_i$, detectability of $(s, Z(0)^{1/2}, \mathbb{A})$ coincides with that of $(s, \mathbb{C}, \mathbb{A})$, and the result follows after taking $X = 0$ in Theorem A.15 and noticing that $\mathcal{F}(0) = 0$.
For $V = (V_1, \ldots, V_N) \in \mathbb{H}^n$, define $\mathcal{A}(V) = (\mathcal{A}_1(V), \ldots, \mathcal{A}_N(V))$ by
$$\mathcal{A}_i(V) \triangleq \sum_{k=0}^{\infty} (s_{ii}^{1/2} \Lambda_i)^k\, \Lambda_i \Big( \sum_{j=1, j \neq i}^N s_{ij} V_j \Big) \Lambda_i^*\, (s_{ii}^{1/2} \Lambda_i^*)^k. \tag{A.20}$$
It is clear that, since $r_\sigma(s_{ii}^{1/2} \Lambda_i) < 1$ for each $i \in \mathbb{N}$, $\mathcal{A} \in \mathbb{B}(\mathbb{H}^n)$. Moreover $\mathcal{A}$ maps $\mathbb{H}^{n+}$ into $\mathbb{H}^{n+}$.
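Assuming (A.20) has the series form reconstructed above, $W = \mathcal{A}_i(V)$ solves the Stein (discrete Lyapunov) equation $W = s_{ii} \Lambda_i W \Lambda_i^* + \Lambda_i M \Lambda_i^*$ with $M = \sum_{j \neq i} s_{ij} V_j$, which gives a practical way to evaluate the infinite series. A small numerical sketch (data illustrative):

```python
import numpy as np

def A_i(V, S, Lam, i, terms=200):
    # Truncated series A_i(V) = sum_k (sqrt(s_ii) Lam_i)^k Lam_i M Lam_i' (sqrt(s_ii) Lam_i')^k,
    # with M = sum_{j != i} s_ij V_j  (form assumed from (A.20)).
    n = Lam[i].shape[0]
    M = sum(S[i, j] * V[j] for j in range(len(V)) if j != i)
    W = np.zeros((n, n))
    P = np.eye(n)                      # running power (sqrt(s_ii) Lam_i)^k
    for _ in range(terms):
        W += P @ Lam[i] @ M @ Lam[i].T @ P.T
        P = np.sqrt(S[i, i]) * Lam[i] @ P
    return W

S = np.array([[0.8, 0.2], [0.3, 0.7]])
Lam = [np.array([[0.6, 0.1], [0.0, 0.5]]), np.array([[0.7, 0.0], [0.2, 0.4]])]
V = [np.eye(2), np.eye(2)]

W = A_i(V, S, Lam, 0)
# W should solve the Stein equation W = s_00 Lam_0 W Lam_0' + Lam_0 M Lam_0'
M = S[0, 1] * V[1]
residual = np.max(np.abs(W - (S[0, 0] * Lam[0] @ W @ Lam[0].T + Lam[0] @ M @ Lam[0].T)))
print(residual)
```

Since $r_\sigma(s_{ii}^{1/2} \Lambda_i) < 1$ here, the truncated series converges geometrically and the residual of the Stein equation is negligible.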
In what follows it will be convenient to consider the following norm $\|\cdot\|_1$ in $\mathbb{H}^n$ (see [216], p. 173). For $V = (V_1, \ldots, V_N) \in \mathbb{H}^n$,
$$\|V\|_1 = \sum_{i=1}^N \operatorname{tr}\big( (V_i^* V_i)^{1/2} \big),$$
which reduces to $\sum_{i=1}^N \operatorname{tr}(V_i)$ whenever each $V_i \geq 0$.
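Numerically, this is a sum of nuclear norms (sums of singular values) over the blocks; as noted, for a positive semidefinite block it is just the trace. A quick check with illustrative matrices:

```python
import numpy as np

def norm1(V):
    # ||V||_1 = sum_i tr((V_i' V_i)^{1/2}) = sum of singular values of each block
    return sum(np.linalg.svd(Vi, compute_uv=False).sum() for Vi in V)

V = [np.array([[2.0, 1.0], [1.0, 2.0]]),   # PSD: nuclear norm = trace = 4
     np.array([[0.0, 3.0], [0.0, 0.0]])]   # not PSD: nuclear norm = 3, trace = 0
print(norm1(V))   # close to 7.0
```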
In particular, for $V(k) = (V_1(k), \ldots, V_N(k)) \geq 0$ and $S_i(t) \triangleq \sum_{k=1}^t V_i(k)$,
$$\Big\| \sum_{k=1}^t V(k) \Big\|_1 = \sum_{i=1}^N \operatorname{tr}(S_i(t)) = \sum_{i=1}^N \operatorname{tr}\Big( \sum_{k=1}^t V_i(k) \Big) = \sum_{k=1}^t \sum_{i=1}^N \operatorname{tr}(V_i(k)) = \sum_{k=1}^t \|V(k)\|_1. \tag{A.21}$$
Proof. Let us show first that 2) and 3) are equivalent. Indeed, if 2) holds then from Proposition 2.5 we have that 3) also holds. On the other hand, suppose that 3) holds. From continuity of the norm, and recalling that $\mathcal{A}$ maps $\mathbb{H}^{n+}$ into $\mathbb{H}^{n+}$, so that $\mathcal{A}^k(L) \in \mathbb{H}^{n+}$ for all $k = 0, 1, \ldots$, it follows from (A.21) that
$$\lim_{t \to \infty} \Big\| \sum_{k=0}^t \mathcal{A}^k(L) \Big\|_1 = \lim_{t \to \infty} \sum_{k=0}^t \big\| \mathcal{A}^k(L) \big\|_1.$$
Since $L > 0$, we can find, for each $H \in \mathbb{H}^{n+}$, some $\beta > 0$ such that $0 \leq H \leq \beta L$. Linearity of the operator $\mathcal{A}$ yields, for each $k = 0, 1, \ldots$,
$$0 \leq \mathcal{A}^k(H) \leq \beta\, \mathcal{A}^k(L),$$
and thus
$$\lim_{t \to \infty} \sum_{k=0}^t \big\| \mathcal{A}^k(H) \big\|_1 \leq \beta \lim_{t \to \infty} \sum_{k=0}^t \big\| \mathcal{A}^k(L) \big\|_1 < \infty,$$
and from Proposition 2.5, $r_\sigma(\mathcal{A}) < 1$. Let us show now that 1) and 2) are equivalent. Suppose that 2) holds and consider $V = (V_1, \ldots, V_N) \in \mathbb{H}^{n+}$, $V_i > 0$ for each $i \in \mathbb{N}$. Define $\mathcal{S}(H) = (\mathcal{S}_1(H), \ldots, \mathcal{S}_N(H))$ for $H = (H_1, \ldots, H_N) \in \mathbb{H}^n$ as $\mathcal{S}(H) = L + \mathcal{A}(H)$, where $L = (L_1, \ldots, L_N) \in \mathbb{H}^{n+}$ and
$$L_i = \sum_{k=0}^{\infty} (s_{ii}^{1/2} \Lambda_i)^k V_i (s_{ii}^{1/2} \Lambda_i^*)^k.$$
By induction, it follows that $\mathcal{S}^t(H) = \mathcal{A}^t(H) + \sum_{k=0}^{t-1} \mathcal{A}^{t-1-k}(L)$, and from the hypothesis that $r_\sigma(\mathcal{A}) < 1$ we have (cf. Proposition 2.6) that there exists a unique $Q \in \mathbb{H}^n$ such that $\mathcal{S}(Q) = Q = \sum_{k=0}^{\infty} \mathcal{A}^k(L) = \lim_{t \to \infty} \mathcal{S}^t(H)$, and hence $Q \geq L > 0$. Thus for each $i \in \mathbb{N}$,
$$Q_i = \mathcal{S}_i(Q) = \sum_{k=0}^{\infty} (s_{ii}^{1/2} \Lambda_i)^k \Big[ V_i + \Lambda_i \Big( \sum_{j \neq i} s_{ij} Q_j \Big) \Lambda_i^* \Big] (s_{ii}^{1/2} \Lambda_i^*)^k.$$
221
From the above Lyapunov equation and standard results in linear systems
1/2
(see Theorem 2.14) it follows that r (sii i ) < 1 for each i N. Moreover,
for i N,
Qi =
.
/
1/2
1/2
(sii i )k Vi + i
sij Qj i (sii i )k = Li + Ai (Q)
k=0
j =i
where
Li =
1/2
1/2
i N.
k=0
Suppose that $\lambda \in \mathbb{C}$, with $|\lambda| \geq 1$, and $x \neq 0$ are such that
$$s_{ii}^{1/2} A_i^+ x = \lambda x.$$
From (A.18) and setting $F^+ = \mathcal{F}(X^+)$, $F = \mathcal{F}(X)$, we get that
$$0 = x^* \big( (X_i^+ - X_i) - s_{ii} A_i^{+*} (X_i^+ - X_i) A_i^+ \big) x = x^* A_i^{+*} \Big( \sum_{j \neq i} s_{ij} (X_j^+ - X_j) \Big) A_i^+ x + x^* Z_i(X) x + x^* (F_i^+ - F_i)^* R_i(X) (F_i^+ - F_i) x,$$
and since each term on the right-hand side is nonnegative,
$$x^* A_i^{+*} \Big( \sum_{j \neq i} s_{ij} (X_j^+ - X_j) \Big) A_i^+ x = 0, \qquad x^* Z_i(X) x = 0, \qquad x^* (F_i^+ - F_i)^* R_i(X) (F_i^+ - F_i) x = 0.$$
Suppose that for each $i \in \mathbb{N}$ one of the conditions below is satisfied:

1. $(Z_i(X)^{1/2},\ s_{ii}^{1/2}(A_i + B_i F_i(X)))$ has no unobservable modes inside the closed unit disk; or
2. $(Z_i(X)^{1/2},\ s_{ii}^{1/2}(A_i + B_i F_i(X)))$ has no unobservable modes on the unit circle, $0$ is not an unobservable mode of $(Z_i(X)^{1/2},\ A_i + B_i F_i(X))$, and
$$\tau(i) \triangleq \min\{ T \geq 1;\ s_{ij}^T > 0 \text{ for some } j \in \Theta \} < \infty,$$
where $\Theta \triangleq \{ \ell \in \mathbb{N};\ \ell \text{ satisfies condition 1)} \}$.

Then $X^+$, the maximal solution of the CARE (A.4) in $\mathbb{M}$, is also the mean square stabilizing solution. Moreover, $X^+ - X > 0$.
Proof. From the hypothesis made, Theorem A.10 and Lemma A.19, it is clear that there is a maximal solution $X^+ \in \mathbb{M}$ and that $r_\sigma(s_{ii}^{1/2} A_i^+) < 1$ for each $i \in \mathbb{N}$. Notice now that
$$(X_i^+ - X_i) - s_{ii} A_i^{+*} (X_i^+ - X_i) A_i^+ = A_i^{+*} \Big( \sum_{j \neq i} s_{ij} (X_j^+ - X_j) \Big) A_i^+ + \bar{Z}_i^* \bar{Z}_i,$$
where
$$\bar{Z}_i \triangleq \begin{bmatrix} Z_i(X)^{1/2} \\ R_i(X)^{1/2} (F_i^+ - F_i) \end{bmatrix}, \qquad \bar{Z}_i^* \bar{Z}_i = Z_i(X) + (F_i^+ - F_i)^* R_i(X) (F_i^+ - F_i).$$
This implies that
$$(X^+ - X) = H + \mathcal{A}(X^+ - X) \geq H, \tag{A.22}$$
where
$$H_i = \sum_{k=0}^{\infty} (s_{ii}^{1/2} A_i^{+*})^k\, \bar{Z}_i^* \bar{Z}_i\, (s_{ii}^{1/2} A_i^+)^k \geq 0. \tag{A.23}$$
Iterating (A.22),
$$X^+ - X = \mathcal{A}^{t+1}(X^+ - X) + \sum_{k=0}^{t} \mathcal{A}^k(H),$$
which shows that
$$X^+ - X \geq \sum_{k=0}^{\infty} \mathcal{A}^k(H) \geq 0. \tag{A.24}$$
If for some integer $\tau \geq 0$,
$$\sum_{k=0}^{\tau} \mathcal{A}^k(H) > 0, \tag{A.25}$$
then
$$0 < \sum_{k=0}^{\tau} \mathcal{A}^k(H) \leq \sum_{k=0}^{\infty} \mathcal{A}^k(H) \leq X^+ - X,$$
and from Lemma A.18 we get $r_\sigma(\mathcal{L}^+) < 1$. It remains to prove that (A.25) holds for some integer $\tau$. We shall show first that for any $i \in \Theta$, $H_i > 0$. From (A.23), it is enough to show that $(\bar{Z}_i, s_{ii}^{1/2} A_i^+)$ is observable (see Theorem 2.19). By contradiction, suppose that $\lambda \in \mathbb{C}$ is an unobservable mode of $(\bar{Z}_i, s_{ii}^{1/2} A_i^+)$, that is, for some $x \neq 0$ in $\mathbb{C}^n$,
$$\bar{Z}_i x = 0, \qquad s_{ii}^{1/2} A_i^+ x = \lambda x.$$
Then
$$Z_i(X)^{1/2} x = 0, \qquad R_i(X)^{1/2} (F_i^+ - F_i) x = 0,$$
so that
$$s_{ii}^{1/2} A_i^+ x = s_{ii}^{1/2} (A_i + B_i F_i^+) x = s_{ii}^{1/2} (A_i + B_i F_i) x = \lambda x, \qquad Z_i(X)^{1/2} x = 0,$$
that is, $\lambda$ is an unobservable mode of $(Z_i(X)^{1/2}, s_{ii}^{1/2}(A_i + B_i F_i))$. Since $r_\sigma(s_{ii}^{1/2}(A_i + B_i F_i^+)) < 1$, we must have $|\lambda| < 1$, which is a contradiction with 1).
Suppose now that $i$ satisfies condition 2). Set $T = \tau(i)$. Since $s_{ij}^T > 0$ for some $j \in \Theta$ with $T$ finite, and $T$ is the minimal integer with this property, we can find a sequence of distinct elements $\{i_0, i_1, \ldots, i_{T-1}, i_T\}$, $i_0 = i$, $i_T = j$, such that $s_{i i_1} s_{i_1 i_2} \cdots s_{i_{T-1} j} > 0$ and each $i_k$, $k = 0, \ldots, T-1$, satisfies condition 2) (otherwise $T$ would not be minimal). Let us show by induction that
$$H_j > 0,$$
$$H_{i_{T-1}} + \mathcal{A}_{i_{T-1}}(H) > 0,$$
$$\vdots$$
$$H_i + \mathcal{A}_i(H) + \cdots + \mathcal{A}_i^T(H) > 0.$$
As seen in 1), $H_j > 0$. Suppose that $H_{i_k} + \mathcal{A}_{i_k}(H) + \cdots + \mathcal{A}_{i_k}^{T-k}(H) > 0$, and suppose, by contradiction, that $x \neq 0$ satisfies $x^* ( H_{i_{k-1}} + \mathcal{A}_{i_{k-1}}(H) + \cdots + \mathcal{A}_{i_{k-1}}^{T-k+1}(H) ) x = 0$. In particular,
$$0 = x^* \sum_{s=0}^{T-k} (s_{i_{k-1} i_{k-1}}^{1/2} A_{i_{k-1}}^{+*})^s\, A_{i_{k-1}}^{+*} \Big( \sum_{l \neq i_{k-1}} s_{i_{k-1} l} \big( H_l + \cdots + \mathcal{A}_l^{T-k}(H) \big) \Big) A_{i_{k-1}}^+\, (s_{i_{k-1} i_{k-1}}^{1/2} A_{i_{k-1}}^+)^s\, x \geq x^* A_{i_{k-1}}^{+*} \Big( \sum_{l \neq i_{k-1}} s_{i_{k-1} l} \big( H_l + \cdots + \mathcal{A}_l^{T-k}(H) \big) \Big) A_{i_{k-1}}^+ x \geq 0,$$
and since $s_{i_{k-1} i_k} > 0$, $i_{k-1} \neq i_k$, and $H_{i_k} + \cdots + \mathcal{A}_{i_k}^{T-k}(H) > 0$, we conclude that $A_{i_{k-1}}^+ x = 0$. Notice now that $H_{i_{k-1}} x = 0$ implies (see (A.23)) that $\bar{Z}_{i_{k-1}} x = 0$, and thus $Z_{i_{k-1}}(X) x = 0$ and $F_{i_{k-1}}^+ x = F_{i_{k-1}} x$. Therefore
$$A_{i_{k-1}}^+ x = (A_{i_{k-1}} + B_{i_{k-1}} F_{i_{k-1}}^+) x = (A_{i_{k-1}} + B_{i_{k-1}} F_{i_{k-1}}) x = 0, \qquad Z_{i_{k-1}}(X)^{1/2} x = 0,$$
which implies that $0$ is an unobservable mode of $(Z_{i_{k-1}}(X)^{1/2},\ A_{i_{k-1}} + B_{i_{k-1}} F_{i_{k-1}})$, contradicting hypothesis 2) of the theorem. Therefore (A.25) is satisfied for $\tau = \max\{\tau(i);\ i \notin \Theta\}$. Finally, notice from (A.24) and (A.25) that
$$X^+ - X \geq \sum_{s=0}^{\tau} \mathcal{A}^s(H) > 0. \qquad \square$$
Corollary A.21. Suppose that $(\mathbb{A}, \mathbb{B}, s)$ is stabilizable. Suppose also that for some $X \in \mathbb{M}$ and each $i \in \mathbb{N}$, one of the conditions below is satisfied:
$$0 = x^* (A_i + B_i \hat{F}_i)^* \Big( \sum_{j \neq i} s_{ij} (\hat{X}_j - X_j) \Big) (A_i + B_i \hat{F}_i) x,$$
so that
$$x^* (A_i + B_i \hat{F}_i)^* \Big( \sum_{j \neq i} s_{ij} (\hat{X}_j - X_j) \Big) (A_i + B_i \hat{F}_i) x = 0, \qquad x^* Z_i(X) x = 0, \qquad x^* (\hat{F}_i - F_i)^* R_i(X) (\hat{F}_i - F_i) x = 0,$$
and therefore
$$s_{ii}^{1/2} (A_i + B_i \hat{F}_i) x = s_{ii}^{1/2} (A_i + B_i F_i) x = \lambda x, \qquad Z_i(X)^{1/2} x = 0,$$
which shows that $\lambda$ is an unobservable mode of $(Z_i(X)^{1/2}, s_{ii}^{1/2}(A_i + B_i F_i))$, contradicting (1) or (2). $\square$
Remark A.22. As in Corollary A.16, by taking $X = 0$ and using (2.5), the above results can be written in terms of the unobservable modes of each pair $(C_i, A_i)$, $i \in \mathbb{N}$.
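Unobservable modes of a pair $(C_i, A_i)$, as in the remark above, can be located numerically with the PBH rank test: $\lambda$ is an unobservable mode if and only if $\operatorname{rank}\begin{bmatrix} \lambda I - A \\ C \end{bmatrix} < n$. A small sketch with illustrative data:

```python
import numpy as np

def unobservable_modes(C, A, tol=1e-9):
    # PBH test: eigenvalue lam of A is unobservable iff rank([lam I - A; C]) < n
    n = A.shape[0]
    modes = []
    for lam in np.linalg.eigvals(A):
        M = np.vstack([lam * np.eye(n) - A, C])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            modes.append(lam)
    return modes

A = np.array([[0.5, 0.0],
              [0.0, 2.0]])
C = np.array([[1.0, 0.0]])           # second state never observed
print(unobservable_modes(C, A))      # the mode at 2.0 is unobservable
```

Here the unstable eigenvalue $2.0$ is unobservable, so no condition of the form 1) above could hold for this pair.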
$$P_i(k+1) = (A_i + B_i F_i(\hat{X}))^* \mathcal{E}_i(P(k)) (A_i + B_i F_i(\hat{X})) + \mathcal{O}_i(\mathcal{F}(\hat{X}), \pi(k)), \qquad P(0) = X(0).$$
It is immediate to check that $\pi_i(k) > 0$ for each $i$ and $k$, and from (A.1) that $\lim_{k \to \infty} \pi_i(k) = \pi_i$ exponentially fast. We have the following result, showing the desired convergence.
Proposition A.23. Suppose that the stabilizing solution $\hat{X} \in \mathbb{H}^{n+}$ for (A.4) exists and is the unique solution of (A.4) over $\mathbb{H}^{n+}$. Then $X(k)$ defined in (A.3) converges to $\hat{X}$ as $k$ goes to infinity whenever $X(0) \in \mathbb{H}^{n+}$.
Proof. Define $V(k) = (V_1(k), \ldots, V_N(k))$ as
$$V(k+1) = \mathcal{X}(V(k), \pi(k)), \qquad V(0) = 0.$$
Let us show by induction on $k$ that
$$0 \leq V(k) \leq V(k+1), \tag{A.26}$$
$$V(k) \leq X(k) \leq P(k). \tag{A.27}$$
For (A.26), the result is clearly true for $k = 1$. Suppose it holds for $k$, that is, $0 \leq V(k-1) \leq V(k)$. Then $R_i(V(k), \pi(k)) > 0$ and, from (A.2),
$$V_i(k+1) = (A_i + B_i F_i(V(k), \pi(k)))^* \mathcal{E}_i(V(k)) (A_i + B_i F_i(V(k), \pi(k))) + \mathcal{O}_i(F_i(V(k), \pi(k)), \pi(k)),$$
$$V_i(k) = (A_i + B_i F_i(V(k), \pi(k)))^* \mathcal{E}_i(V(k-1)) (A_i + B_i F_i(V(k), \pi(k))) + \mathcal{O}_i(F_i(V(k), \pi(k)), \pi(k-1)) - (F_i(V(k), \pi(k)) - F_i(V(k-1), \pi(k-1)))^* R_i(V(k-1), \pi(k-1)) (F_i(V(k), \pi(k)) - F_i(V(k-1), \pi(k-1))).$$
$$S_i = (A_i + B_i F_i(\hat{X}))^* \mathcal{E}_i(S) (A_i + B_i F_i(\hat{X})) + \mathcal{O}_i(F_i(\hat{X})). \tag{A.28}$$
B

Auxiliary Results for the Linear Filtering Problem with θ(k) Unknown

In this appendix we present the proofs of some of the results of Sections 5.4 and 5.5.
(B.1)
(B.2)
we get that
$$y^c(k) = L(k)\, z^c(k) + H_{\theta(k)}(k)\, w(k) \tag{B.3}$$
and
$$\cdots = E\big( w(k)^* \big)\, E\big( H_{\theta(k)}(k)^* \,\big|\, (y^c)^{k-1} \big) = 0. \tag{B.4}$$
(B.5)
(B.6)
$$E\big( \tilde{y}(k|k-1)\, \tilde{y}(k|k-1)^* \big) = L(k)\, \tilde{Z}(k|k-1)\, L(k)^* + H(k) H(k)^* > 0, \tag{B.7}$$
$$E\big( z^c(k)\, \tilde{y}(k|k-1)^* \big) = E\big( (\tilde{z}(k|k-1) + \hat{z}^c(k|k-1))\, \tilde{y}(k|k-1)^* \big) = \tilde{Z}(k|k-1)\, L(k)^*. \tag{B.8}$$
(B.9)
where it can easily be shown that (5.3) implies that $\operatorname{cov}((y^c)^{k-1}) > 0$ for all $k \geq 1$; in the second equality above we used the fact that $G_{\theta(k-1)}(k-1)\, w(k-1)$ and $L((y^c)^{k-1})$ are orthogonal (same reasoning as in (B.4) above). From (5.37),
$$\hat{z}^c(k|k) = \hat{z}^c(k|k-1) + E\big( z^c(k)\, \tilde{y}(k|k-1)^* \big)\, E\big( \tilde{y}(k|k-1)\, \tilde{y}(k|k-1)^* \big)^{-1}\, \tilde{y}(k|k-1) \tag{B.10}$$
and (B.5)–(B.10) lead to
$$\hat{z}^c(k|k) = \hat{z}^c(k|k-1) + \tilde{Z}(k|k-1)\, L(k)^* \big( L(k)\, \tilde{Z}(k|k-1)\, L(k)^* + H(k) H(k)^* \big)^{-1} \big( y^c(k) - L(k)\, \hat{z}^c(k|k-1) \big) \tag{B.11}$$
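The update (B.11) has the familiar Kalman measurement-update form: a gain $K = \tilde{Z} L^*(L \tilde{Z} L^* + H H^*)^{-1}$, a state correction by the innovation, and a covariance decrease $K L \tilde{Z}$. A minimal numerical sketch of this single step (matrices illustrative):

```python
import numpy as np

def measurement_update(z_pred, Z_pred, y, L, H):
    # One step in the form of (B.11): K = Z L'(L Z L' + H H')^{-1}
    Sm = L @ Z_pred @ L.T + H @ H.T          # innovation covariance, > 0
    K = Z_pred @ L.T @ np.linalg.inv(Sm)
    z_filt = z_pred + K @ (y - L @ z_pred)
    Z_filt = Z_pred - K @ L @ Z_pred
    return z_filt, Z_filt

L = np.array([[1.0, 0.0]])
H = np.array([[0.5]])
z_pred = np.array([1.0, 0.0])
Z_pred = np.eye(2)
z_filt, Z_filt = measurement_update(z_pred, Z_pred, np.array([1.2]), L, H)
print(np.trace(Z_filt) <= np.trace(Z_pred))   # uncertainty never increases
```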
and
(B.12)
Equation (5.45) now follows from (B.10) and (B.11) after noting from (B.1) that
$$y^c(k) - L(k)\, \hat{z}^c(k|k-1) = y(k) - L(k)\, q(k) - L(k) \big( \hat{z}(k|k-1) - q(k) \big) = y(k) - L(k)\, \hat{z}(k|k-1)$$
and that
$$\hat{z}(k|k-1) = \hat{z}^c(k|k-1) + q(k),$$
E(zj (k + 1)) = E(E(zj (k + 1)Fk )) =
i=1
(B.14)
(B.15)
where
$$M(k+1, j) = \begin{bmatrix} m_1(k+1, j) & \cdots & m_N(k+1, j) \end{bmatrix}, \qquad m_i(k+1, j) = \big( 1_{\{\theta(k+1)=j\}} - p_{ij}(k) \big) A_i(k)\, 1_{\{\theta(k)=i\}},$$
$$M(k+1) = \begin{bmatrix} M(k+1, 1) \\ \vdots \\ M(k+1, N) \end{bmatrix}, \qquad \varpi(k) = \begin{bmatrix} 1_{\{\theta(k+1)=1\}}\, G_{\theta(k)}(k)\, w(k) \\ \vdots \\ 1_{\{\theta(k+1)=N\}}\, G_{\theta(k)}(k)\, w(k) \end{bmatrix}.$$
From (5.45) and (B.13), we obtain that
$$\hat{z}(k+1|k) = A(k)\, \hat{z}(k|k-1) + \mathcal{T}(k) L(k)\, \tilde{z}(k|k-1) + \mathcal{T}(k)\, H_{\theta(k)}(k)\, w(k), \tag{B.16}$$
and thus, from (B.15) and (B.16), we get that
$$\tilde{z}(k+1|k) = (A(k) + \mathcal{T}(k) L(k))\, \tilde{z}(k|k-1) + M(k+1)\, z(k) + \mathcal{T}(k)\, H_{\theta(k)}(k)\, w(k) + \varpi(k). \tag{B.17}$$
Therefore, from (B.17), the recursive equation for $\tilde{Z}(k|k-1)$ is given by
$$\tilde{Z}(k+1|k) = (A(k) + \mathcal{T}(k) L(k))\, \tilde{Z}(k|k-1)\, (A(k) + \mathcal{T}(k) L(k))^* + E\big( M(k+1)\, z(k) z(k)^*\, M(k+1)^* \big) + E\big( \varpi(k) \varpi(k)^* \big) + \mathcal{T}(k)\, E\big( H_{\theta(k)}(k)\, w(k) w(k)^*\, H_{\theta(k)}(k)^* \big)\, \mathcal{T}(k)^*. \tag{B.18}$$
(B.19)
$$E\big( \varpi(k) \varpi(k)^* \big) = \mathbf{G}(k)\, \mathbf{G}(k)^*, \tag{B.20}$$
(B.21)
for all $i \in \mathbb{N}$ (since $\pi_i(k) \to \pi_i > 0$, such a number exists). Define
$$\underline{\pi}_i(k) \triangleq \inf_{\tau \geq k} \pi_i(\tau).$$
Obviously
$$\pi_i(k + \tau) \geq \underline{\pi}_i(k) \geq \underline{\pi}_i(k-1), \quad \tau \geq 0, \ k = 1, 2, \ldots, \ i \in \mathbb{N}, \tag{B.22}$$
and $\underline{\pi}_i(k) \to \pi_i$ exponentially fast. Define now $\breve{Q}(k) \triangleq (\breve{Q}_1(k), \ldots, \breve{Q}_N(k)) \in \mathbb{H}^{n+}$
by
$$\breve{Q}_j(k+1) = \sum_{i=1}^N p_{ij} \big( A_i\, \breve{Q}_i(k)\, A_i^* + \underline{\pi}_i(k)\, G_i G_i^* \big).$$
In the next lemma recall that $Q = (Q_1, \ldots, Q_N) \in \mathbb{H}^{n+}$ is the unique solution that satisfies (5.58).
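The recursion for $\breve{Q}(k)$ can be propagated directly; when the mode probabilities have converged and the associated operator is stable, the iterates settle at a stationary value, as in (5.58). A scalar sketch with illustrative data and constant $\pi$:

```python
import numpy as np

def propagate_Q(Q, P, A, G, pi):
    # Q_j(k+1) = sum_i p_ij (A_i Q_i(k) A_i' + pi_i G_i G_i')   (scalar modes)
    return P.T @ (A**2 * Q + pi * G**2)

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])        # transition matrix p_ij
A = np.array([0.6, 0.5])          # scalar A_i
G = np.array([1.0, 0.8])
pi = np.array([0.6, 0.4])         # stationary distribution of P

Q = np.zeros(2)
for _ in range(300):
    Q = propagate_Q(Q, P, A, G, pi)

residual = np.max(np.abs(propagate_Q(Q, P, A, G, pi) - Q))
print(Q, residual)
```

With $|A_i| < 1$ the iteration is a contraction, so the residual at the fixed point is negligible; this fixed point plays the role of the stationary $Q$ above.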
$$Q(k + \tau) \geq \breve{Q}(k) \geq \breve{Q}(k-1). \tag{B.23}$$
For $k = 0$ we have $Q(\tau) \geq 0 = \breve{Q}(0)$ and $\breve{Q}(1) \geq 0 = \breve{Q}(0)$. Suppose that (B.23) holds for $k$. Then from (B.22) and (B.23) we have that
$$Q_j(k + 1 + \tau) = \sum_{i=1}^N p_{ij} \big( A_i\, Q_i(k + \tau)\, A_i^* + \pi_i(k + \tau)\, G_i G_i^* \big) \geq \sum_{i=1}^N p_{ij} \big( A_i\, \breve{Q}_i(k)\, A_i^* + \underline{\pi}_i(k)\, G_i G_i^* \big) = \breve{Q}_j(k+1)$$
and similarly
$$\breve{Q}_j(k+1) \geq \sum_{i=1}^N p_{ij} \big( A_i\, \breve{Q}_i(k-1)\, A_i^* + \underline{\pi}_i(k-1)\, G_i G_i^* \big) = \breve{Q}_j(k),$$
completing the induction.
where $S(k) \triangleq A\, R(k)\, L^* \big( L\, R(k)\, L^* + H(k) H(k)^* \big)^{-1}$.
(B.24)

Using that $(\mathcal{T}(k+\tau) - S(k)) \big( L R(k) L^* + H(k) H(k)^* \big) (\mathcal{T}(k+\tau) - S(k))^* \geq 0$, we get
$$\tilde{Z}(k + \tau + 1 | k + \tau) \geq (A + \mathcal{T}(k+\tau) L)\, \tilde{Z}(k + \tau | k - 1 + \tau)\, (A + \mathcal{T}(k+\tau) L)^* + \mathcal{V}(Q(k+\tau)) + \operatorname{diag}\Big[ \sum_{i=1}^N \pi_i(k+\tau)\, p_{ij}\, G_i G_i^* \Big] \geq (A + S(k) L)\, R(k)\, (A + S(k) L)^* + \mathcal{V}(\breve{Q}(k)) + \operatorname{diag}\Big[ \sum_{i=1}^N \underline{\pi}_i(k)\, p_{ij}\, G_i G_i^* \Big] + S(k)\, H(k) H(k)^*\, S(k)^* = R(k+1),$$
and since $R(0) = 0 \leq R(1)$, the induction argument for (B.24) is completed.
Proof of Theorem 5.12.

Proof. From the MSS of (5.38), we have from Proposition 3.6 in Chapter 3 that $r_\sigma(\mathcal{A}) < 1$, and thus, according to standard results for algebraic Riccati equations, there exists a unique positive semidefinite solution $P \in \mathbb{B}(\mathbb{R}^{Nn})$ to (5.60); moreover $r_\sigma(A + \mathcal{T}(P) L) < 1$ (see [48]). Furthermore $P$ satisfies
(B.25)
Define $P(0) = \tilde{Z}(0|-1)$ and
$$P(k+1) = (A + \mathcal{T}(P) L)\, P(k)\, (A + \mathcal{T}(P) L)^* + \mathcal{V}(Q(k)) + \mathbf{G}(k)\mathbf{G}(k)^* + \mathcal{T}(P)\, H(k) H(k)^*\, \mathcal{T}(P)^*. \tag{B.26}$$
Let us show by induction on $k$ that $P(k) \geq \tilde{Z}(k|k-1)$. Since
$$\tilde{Z}(k+1|k) = \mathcal{V}(Q(k)) + \mathbf{G}(k)\mathbf{G}(k)^* + (A + \mathcal{T}(P) L)\, \tilde{Z}(k|k-1)\, (A + \mathcal{T}(P) L)^* + \mathcal{T}(P)\, H(k) H(k)^*\, \mathcal{T}(P)^* - (\mathcal{T}(k) - \mathcal{T}(P)) \big( L\, \tilde{Z}(k|k-1)\, L^* + H(k) H(k)^* \big) (\mathcal{T}(k) - \mathcal{T}(P))^*, \tag{B.27}$$
(B.29)
(B.30)
and $P(k) \to P$. From (B.30) and (B.24) it follows that $0 \leq R(k) \leq R(k+1) \leq P(k+1+\tau)$, and thus we can conclude that $R(k) \to R$ as $k \to \infty$ for some $R \geq 0$. Moreover, from the fact that $\underline{\pi}_i(k) \to \pi_i$ and $\breve{Q}(k) \to Q$, we have that $R$ satisfies (5.60). From uniqueness of the positive semidefinite solution to (5.60) we can conclude that $R = P$. From (B.30) and (B.24), $R(k) \leq \tilde{Z}(k + \tau | k - 1 + \tau) \leq P(k + \tau)$, and since $R(k) \to P$ and $P(k) \to P$ as $k \to \infty$, we get that $\tilde{Z}(k|k-1) \to P$. $\square$
$$\sum_{j=1}^N p_{ij} v_j^2 - \Big( \sum_{j=1}^N p_{ij} v_j \Big)^2 = E_i\big( V(\theta(1))^2 \big) - \big( E_i V(\theta(1)) \big)^2 \geq 0,$$
where $V(j) = v_j$ and $E_i(\cdot)$ denotes the expected value when $\theta(0) = i$. This shows the desired result.
Proof of Proposition 5.14.

Proof. By straightforward calculations from (5.63), and noting that $B_f L(k)\, x(k) = B_f L\, z(k)$, we have that
$$\breve{Z}(k+1) = E\big( \hat{z}(k+1)\, \hat{z}(k+1)^* \big) = E\big( (B_f L z(k) + A_f \hat{z}(k) + B_f H_{\theta(k)}(k) w(k)) (B_f L z(k) + A_f \hat{z}(k) + B_f H_{\theta(k)}(k) w(k))^* \big) = B_f L\, E(z(k) z(k)^*)\, L^* B_f^* + A_f\, E(\hat{z}(k) \hat{z}(k)^*)\, A_f^* + B_f L\, E(z(k) \hat{z}(k)^*)\, A_f^* + A_f\, E(\hat{z}(k) z(k)^*)\, L^* B_f^* + B_f \sum_{i=1}^N \pi_i(k) H_i H_i^*\, B_f^* = \begin{bmatrix} B_f L & A_f \end{bmatrix} \begin{bmatrix} Z(k) & U(k) \\ U(k)^* & \breve{Z}(k) \end{bmatrix} \begin{bmatrix} L^* B_f^* \\ A_f^* \end{bmatrix} + B_f\, H(k) H(k)^*\, B_f^*. \tag{B.31}$$
Similarly,
$$U_j(k+1) = E\big( z(k+1, j)\, \hat{z}(k+1)^* \big) = E\Big( 1_{\{\theta(k+1)=j\}} \big( A_{\theta(k)}(k)\, x(k) + G_{\theta(k)}(k)\, w(k) \big) \big( B_f L z(k) + A_f \hat{z}(k) + B_f H_{\theta(k)}(k) w(k) \big)^* \Big) = \sum_{i=1}^N p_{ij} \big( A_i Q_i(k) L_i^* B_f^* + A_i U_i(k) A_f^* + \pi_i(k) G_i H_i^* B_f^* \big) = \sum_{i=1}^N p_{ij} A_i Q_i(k) L_i^* B_f^* + \sum_{i=1}^N p_{ij} A_i U_i(k) A_f^* \tag{B.32}$$
so that
$$U_j(k+1) = \begin{bmatrix} p_{1j} A_1 & \cdots & p_{Nj} A_N \end{bmatrix} \begin{bmatrix} Q_1(k) & & 0 \\ & \ddots & \\ 0 & & Q_N(k) \end{bmatrix} L^* B_f^* + \begin{bmatrix} p_{1j} A_1 & \cdots & p_{Nj} A_N \end{bmatrix} \begin{bmatrix} U_1(k) \\ \vdots \\ U_N(k) \end{bmatrix} A_f^*,$$
that is,
$$U(k+1) = \begin{bmatrix} \mathcal{A} & 0 \end{bmatrix} \begin{bmatrix} Z(k) & U(k) \\ U(k)^* & \breve{Z}(k) \end{bmatrix} \begin{bmatrix} L^* B_f^* \\ A_f^* \end{bmatrix}. \tag{B.33}$$
The next results present necessary and sufficient conditions for System (5.63) to be MSS.

Proposition B.3. System (5.63) is MSS if and only if $r_\sigma(\mathcal{T}) < 1$ and $r_\sigma(A_f) < 1$.

Proof. If System (5.63) is MSS then, for any initial condition $x_e(0)$, $\theta(0)$,
$$E\|x_e(k)\|^2 = E\|x(k)\|^2 + E\|\hat{z}(k)\|^2 \to 0 \text{ as } k \to \infty,$$
that is, $E\|x(k)\|^2 \to 0$ and $E\|\hat{z}(k)\|^2 \to 0$. From Theorem 3.9 in Chapter 3, $r_\sigma(\mathcal{T}) < 1$. Consider now an initial condition $x_e(0) = \begin{bmatrix} 0 \\ \hat{z}(0) \end{bmatrix}$. Then clearly $\hat{z}(k) = A_f^k\, \hat{z}(0)$, and since $\hat{z}(k) \to 0$ for any initial condition $\hat{z}(0)$, it follows that $r_\sigma(A_f) < 1$.
Suppose now that $r_\sigma(\mathcal{T}) < 1$ and $r_\sigma(A_f) < 1$. From Theorem 3.9 in Chapter 3, $r_\sigma(\mathcal{T}) < 1$ implies that $E\|x(k)\|^2 \leq a b^k$ for some $a > 0$, $0 < b < 1$. From (5.63),
$$\hat{z}(k) = A_f^k\, \hat{z}(0) + \sum_{t=0}^{k-1} A_f^{k-1-t} B_f L_{\theta(t)}\, x(t),$$
so that
$$\big( E\|\hat{z}(k)\|^2 \big)^{1/2} \leq \|A_f^k\| \big( E\|\hat{z}(0)\|^2 \big)^{1/2} + \|B_f\|\, L_{\max} \sum_{t=0}^{k-1} \|A_f^{k-1-t}\| \big( E\|x(t)\|^2 \big)^{1/2}.$$
Choosing $\bar{a} > 0$ and $b^{1/2} \leq \bar{b} < 1$ such that $\|A_f^k\| \leq \bar{a}\, \bar{b}^k$, we get for some $\bar{a}_0 > 0$
$$\big( E\|\hat{z}(k)\|^2 \big)^{1/2} \leq \bar{a}_0 \Big( \bar{b}^k + \sum_{t=0}^{k-1} \bar{b}^{k-1-t} b^{t/2} \Big) \leq \bar{a}_0\, \bar{b}^k \Big( 1 + \frac{k}{\bar{b}} \Big) \to 0 \text{ as } k \to \infty,$$
showing that
$$E\|x_e(k)\|^2 = E\|x(k)\|^2 + E\|\hat{z}(k)\|^2 \to 0 \text{ as } k \to \infty. \qquad \square$$
Proposition B.4. System (5.63) is MSS if and only if there exists
$$S = \begin{bmatrix} Z & U \\ U^* & \breve{Z} \end{bmatrix} > 0, \qquad Z = \begin{bmatrix} Q_1 & & 0 \\ & \ddots & \\ 0 & & Q_N \end{bmatrix},$$
such that
$$S - \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix} S \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix}^* - \sum_{i=1}^N \begin{bmatrix} \mathcal{D}_i \\ 0 \end{bmatrix} \operatorname{dg}[Q_i] \begin{bmatrix} \mathcal{D}_i \\ 0 \end{bmatrix}^* > 0. \tag{B.34}$$
Proof. Consider the operator $\breve{T} \in \mathbb{B}(\mathbb{H}^{(N+1)n})$ defined as follows: for $\breve{V} = (\breve{V}_1, \ldots, \breve{V}_N) \in \mathbb{H}^{(N+1)n}$, $\breve{T}(\breve{V}) \triangleq (\breve{T}_1(\breve{V}), \ldots, \breve{T}_N(\breve{V}))$ is given by
$$\breve{T}_j(\breve{V}) \triangleq \sum_{i=1}^N p_{ij}\, \breve{A}_i\, \breve{V}_i\, \breve{A}_i^*, \quad j \in \mathbb{N}, \tag{B.35}$$
where
$$\breve{A}_i \triangleq \begin{bmatrix} A_i & 0 \\ B_f L_i & A_f \end{bmatrix}. \tag{B.36}$$
and $\breve{S}(k) \triangleq (\breve{S}_1(k), \ldots, \breve{S}_N(k)) \in \mathbb{H}^{(N+1)n}$. From Proposition 3.1 in Chapter 3, $\breve{S}(k+1) = \breve{T}(\breve{S}(k))$, and System (5.63) is MSS if and only if there exists $\breve{S} = (\breve{S}_1, \ldots, \breve{S}_N) \in \mathbb{H}^{(N+1)n}$, $\breve{S}_i > 0$, $i \in \mathbb{N}$, such that (see Theorem 3.9 in Chapter 3)
$$\breve{S}_j - \breve{T}_j(\breve{S}) > 0, \quad j \in \mathbb{N}. \tag{B.37}$$
Setting
$$Z \triangleq \operatorname{diag}[Q_i], \qquad U \triangleq \begin{bmatrix} U_1 \\ \vdots \\ U_N \end{bmatrix}, \qquad \breve{Z} \triangleq \sum_{j=1}^N \breve{Z}_j, \qquad S \triangleq \begin{bmatrix} Z & U \\ U^* & \breve{Z} \end{bmatrix},$$
we have
$$\sum_{j=1}^N \breve{Z}_j > \sum_{j=1}^N U_j\, Q_j^{-1}\, U_j^* = U^* Z^{-1} U.$$
$$\breve{S}_j(k+1) = \breve{T}_j(\breve{S}(k)) + \sum_{i=1}^N p_{ij}\, \pi_i(k) \begin{bmatrix} G_i G_i^* & 0 \\ 0 & B_f H_i H_i^* B_f^* \end{bmatrix},$$
and $\breve{S}_j(k) \to \breve{S}_j$ as $k \to \infty$, with $\breve{S} = (\breve{S}_1, \ldots, \breve{S}_N)$ satisfying
$$\breve{S}_j = \breve{T}_j(\breve{S}) + \sum_{i=1}^N p_{ij}\, \pi_i \begin{bmatrix} G_i G_i^* & 0 \\ 0 & B_f H_i H_i^* B_f^* \end{bmatrix}. \tag{B.38}$$
Note that
$$\breve{S}_i(k) = \begin{bmatrix} Q_i(k) & U_i(k) \\ U_i(k)^* & \breve{Q}_i(k) \end{bmatrix} \to \begin{bmatrix} Q_i & U_i \\ U_i^* & \breve{Q}_i \end{bmatrix} \text{ as } k \to \infty,$$
where $\breve{Q}_i(k) = E\big( \hat{z}(k)\, \hat{z}(k)^*\, 1_{\{\theta(k)=i\}} \big)$. Moreover
$$\breve{Z}(k) = \sum_{i=1}^N \breve{Q}_i(k) \to \sum_{i=1}^N \breve{Q}_i = \breve{Z}.$$
Defining
$$Z = \begin{bmatrix} Q_1 & & 0 \\ & \ddots & \\ 0 & & Q_N \end{bmatrix}, \qquad U = \begin{bmatrix} U_1 \\ \vdots \\ U_N \end{bmatrix}, \qquad S = \begin{bmatrix} Z & U \\ U^* & \breve{Z} \end{bmatrix},$$
$$Q_j = \sum_{i=1}^N p_{ij}\, A_i Q_i A_i^* + \sum_{i=1}^N p_{ij}\, \pi_i\, G_i G_i^*. \tag{B.39}$$
(V S) =
(B.40)
From Proposition 3.6 in Chapter 3, $r_\sigma(\mathcal{T}) < 1$ implies that $r_\sigma(\mathcal{A}) < 1$, and thus
$$r_\sigma \left( \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix} \right) < 1.$$
Since
$$X_j \geq \sum_{i=1}^N p_{ij}\, A_i X_i A_i^* + \sum_{i=1}^N p_{ij}\, \pi_i\, G_i G_i^*, \tag{B.41}$$
subtracting (B.39) from (B.41) it follows that $(X_j - Q_j) \geq \sum_{i=1}^N p_{ij}\, A_i (X_i - Q_i) A_i^*$, that is, $(X - Q) \geq \mathcal{T}(X - Q)$ where $X = (X_1, \ldots, X_N)$. This implies that $X \geq Q$. Using this fact, we conclude that
$$(V - S) \geq \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix} (V - S) \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix}^*,$$
and since $r_\sigma \left( \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix} \right) < 1$, it follows that $V - S \geq 0$. $\square$
Proof of Theorem 5.16.

Proof. For $(A_f, B_f, J_f)$ fixed, consider $S$, $W$ satisfying (5.75)–(5.77). Without loss of generality, suppose further that $U$ is nonsingular (if not, redefine $U$ as $U + \epsilon I$ so that it is nonsingular). As in [129], define
$$S^{-1} = \begin{bmatrix} Y & V \\ V^* & \breve{Y} \end{bmatrix} > 0,$$
where $Y > 0$ and $\breve{Y} > 0$ are $Nn \times Nn$. We have that
$$Z Y + U V^* = I, \tag{B.42}$$
$$U^* Y + \breve{Z} V^* = 0, \tag{B.43}$$
and from (B.42) and (B.43), $Y^{-1} = Z + U V^* Y^{-1} = Z - U \breve{Z}^{-1} U^* < Z$, so that $V^* = U^{-1}(I - Z Y) = U^{-1}(Y^{-1} - Z) Y$, implying that $V$ is nonsingular.
Define the nonsingular $2Nn \times 2Nn$ matrix
$$\mathbb{T} = \begin{bmatrix} Y & Z^{-1} \\ V^* & 0 \end{bmatrix}$$
and the nonsingular $(2Nn + p) \times (2Nn + p)$ and $(5Nn + Nr(N+1)) \times (5Nn + Nr(N+1))$ block-diagonal matrices
$$\mathbb{T}_1 = \operatorname{diag}\big( \mathbb{T},\ I \big), \qquad \mathbb{T}_2 = \operatorname{diag}\big( \mathbb{T},\ \operatorname{dg}[Q_1^{-1}], \ldots, \operatorname{dg}[Q_N^{-1}],\ I,\ \mathbb{T} \big).$$
Then
$$\mathbb{T}^* S\, \mathbb{T} = \begin{bmatrix} Y & X \\ X & X \end{bmatrix}, \qquad X = Z^{-1} = \operatorname{diag}[X_i], \quad X_i = Q_i^{-1}, \ i \in \mathbb{N},$$
with $J_{aux}$ defined through the corresponding transformation of $\begin{bmatrix} J & \cdots & J & J_f \end{bmatrix}$ (involving $Z^{-1} U J_f$), and
$$\mathbb{T}^* S \begin{bmatrix} \mathcal{A} & 0 \\ B_f L & A_f \end{bmatrix} \mathbb{T} = \begin{bmatrix} Z^{-1} \mathcal{A} & Z^{-1} \mathcal{A} \\ Y \mathcal{A} + V B_f L + V A_f U^* Z^{-1} & Y \mathcal{A} + V B_f L \end{bmatrix} = \begin{bmatrix} X \mathcal{A} & X \mathcal{A} \\ Y \mathcal{A} + F L + R & Y \mathcal{A} + F L \end{bmatrix},$$
where $F \triangleq V B_f$ and $R = V A_f U^* Z^{-1} = V A_f U^* X$. With these similarity transformations, (5.76) becomes (5.80) and (5.77) becomes (5.81). Similarly, by applying the inverse transformations, (5.80) becomes (5.76) and (5.81) becomes (5.77). $\square$
Proof of Proposition 5.17.

$$\cdots = \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} \tilde{Z}(k|k-1) \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big) + E\Big\| \begin{bmatrix} J & \cdots & J \end{bmatrix} \hat{z}(k|k-1) - J_f\, \hat{z}(k) \Big\|^2 \geq \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} \tilde{Z}(k|k-1) \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big) \to \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} P \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big) \text{ as } k \to \infty,$$
since, as shown in Theorem 5.12, $\tilde{Z}(k|k-1) \to P$. This shows that
$$\operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J & J_f \end{bmatrix} S \begin{bmatrix} J & \cdots & J & J_f \end{bmatrix}^* \Big) \geq \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} P \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big).$$
On the other hand, consider
the stationary gain
$$\mathcal{T}(P) = A P L^* (L P L^* + H H^*)^{-1}.$$
Then
$$E\big( \| e_{op}(k) \|^2 \big) = \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} \tilde{Z}(k) \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big),$$
where
$$\tilde{Z}(k) = E\big( (z(k) - \hat{z}_{op}(k)) (z(k) - \hat{z}_{op}(k))^* \big),$$
which satisfies (see Theorem 5.12)
$$\tilde{Z}(k+1) = (A + \mathcal{T}(P) L)\, \tilde{Z}(k)\, (A + \mathcal{T}(P) L)^* + \mathcal{V}(Q(k)) + \mathbf{G}(k)\mathbf{G}(k)^* + \mathcal{T}(P)\, H(k) H(k)^*\, \mathcal{T}(P)^*.$$
As shown in Theorem 5.12, $\tilde{Z}(k) \to P$ as $k \to \infty$, and thus
$$\lim_{k \to \infty} E\big( \| e_{op}(k) \|^2 \big) = \operatorname{tr}\Big( \begin{bmatrix} J & \cdots & J \end{bmatrix} P \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \Big).$$
From the Schur complement (see Lemma 2.23), we have that (5.80) is equivalent to
$$\begin{bmatrix} X & X \\ X & Y \end{bmatrix} > \begin{bmatrix} \begin{bmatrix} J & \cdots & J \end{bmatrix} & J_{aux} \end{bmatrix}^* W^{-1} \begin{bmatrix} \begin{bmatrix} J & \cdots & J \end{bmatrix} & J_{aux} \end{bmatrix},$$
that is,
$$\begin{bmatrix} X - X Y^{-1} X & \big( J_{aux} - \begin{bmatrix} J & \cdots & J \end{bmatrix} (I - Y^{-1} X) \big)^* \\ J_{aux} - \begin{bmatrix} J & \cdots & J \end{bmatrix} (I - Y^{-1} X) & W - \begin{bmatrix} J & \cdots & J \end{bmatrix} Y^{-1} \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \end{bmatrix} > 0,$$
and choosing
$$J_{aux} = \begin{bmatrix} J & \cdots & J \end{bmatrix} (I - Y^{-1} X), \tag{B.44}$$
(5.80) becomes
$$\begin{bmatrix} X - X Y^{-1} X & 0 \\ 0 & W - \begin{bmatrix} J & \cdots & J \end{bmatrix} Y^{-1} \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \end{bmatrix} > 0,$$
that is, $X - X Y^{-1} X > 0$ and
$$W - \begin{bmatrix} J & \cdots & J \end{bmatrix} Y^{-1} \begin{bmatrix} J & \cdots & J \end{bmatrix}^* > 0, \qquad \text{equivalently} \qquad \begin{bmatrix} Y & \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \\ \begin{bmatrix} J & \cdots & J \end{bmatrix} & W \end{bmatrix} > 0.$$
The first inequality follows from (5.81). Therefore (5.80), with the choice $J_{aux} = \begin{bmatrix} J & \cdots & J \end{bmatrix} (I - Y^{-1} X)$, can be rewritten as
$$\begin{bmatrix} Y & \begin{bmatrix} J & \cdots & J \end{bmatrix}^* \\ \begin{bmatrix} J & \cdots & J \end{bmatrix} & W \end{bmatrix} > 0.$$
With the choice $U = U^* = (X^{-1} - Y^{-1}) > 0$, we obtain that (5.82) becomes
$$V = Y (Y^{-1} - X^{-1}) (X^{-1} - Y^{-1})^{-1} = -Y,$$
$$A_f = (-Y)^{-1} R \big( (X^{-1} - Y^{-1}) X \big)^{-1} = -Y^{-1} R (I - Y^{-1} X)^{-1},$$
$$B_f = -Y^{-1} F,$$
$$J_f = J_{aux} \big( (X^{-1} - Y^{-1}) X \big)^{-1} = J_{aux} (I - Y^{-1} X)^{-1} = \begin{bmatrix} J & \cdots & J \end{bmatrix}.$$
Finally, define the nonsingular matrix
$$\mathbb{T}_3 = \operatorname{diag}\left( \begin{bmatrix} X^{-1} & X^{-1} \\ 0 & Y^{-1} \end{bmatrix},\ \operatorname{dg}[X_1^{-1}], \ldots, \operatorname{dg}[X_N^{-1}],\ I,\ \begin{bmatrix} X^{-1} & X^{-1} \\ 0 & Y^{-1} \end{bmatrix} \right).$$
We then have, choosing $R = (Y \mathcal{A} + F L)(I - Y^{-1} X)$, that (5.81) becomes
$$\begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{12}^* & \Lambda_{22} \end{bmatrix} > 0, \tag{B.45}$$
where
$$\Lambda_{11} = \begin{bmatrix} X^{-1} - Y^{-1} & X^{-1} - Y^{-1} \\ X^{-1} - Y^{-1} & X^{-1} \end{bmatrix}, \qquad \Lambda_{22} = \operatorname{diag}\big( \operatorname{dg}[X_1^{-1}], \ldots, \operatorname{dg}[X_N^{-1}],\ I,\ \Lambda_{11} \big),$$
and
$$\Lambda_{12} = \begin{bmatrix} \mathcal{D}_1 \operatorname{dg}[X_1^{-1}] & \cdots & \mathcal{D}_N \operatorname{dg}[X_N^{-1}] & \mathbf{G} & \mathcal{A} & 0 \\ 0 & \cdots & 0 & -Y^{-1} F \mathbf{H} & -Y^{-1} F L & \mathcal{A} + Y^{-1} F L \end{bmatrix}.$$
Therefore the LMI (5.81) can be rewritten as in (B.45). Consider $J_{aux}$ chosen as in (B.44) and the choice of parameters for the LMIs (5.80) and (5.81) as in Proposition 5.18. Notice that the extra terms $\epsilon I$ and $2\epsilon I$ in (5.84) and (5.83) ensure that $P > 0$ and $Z > 0$. Subtracting (5.84) from (5.83), we get that
$$(Z - P) = \mathcal{A} (Z - P) \mathcal{A}^* + \mathcal{T}(P) \big( L P L^* + \mathbf{H}\mathbf{H}^* \big) \mathcal{T}(P)^* + \epsilon I \tag{B.46}$$
since
$$\mathcal{A} P L^* \big( L P L^* + \mathbf{H}\mathbf{H}^* \big)^{-1} L P \mathcal{A}^* = \mathcal{T}(P) \big( L P L^* + \mathbf{H}\mathbf{H}^* \big) \mathcal{T}(P)^*.$$
From $r_\sigma(\mathcal{A}) < 1$, it follows that $Z - P > 0$ (and thus $X^{-1} - Y^{-1} > 0$). Expanding
$$\begin{bmatrix} \mathcal{A} & 0 \\ \mathcal{T}(P) L & \mathcal{A} + \mathcal{T}(P) L \end{bmatrix} \begin{bmatrix} Z - P & Z - P \\ Z - P & Z \end{bmatrix} \begin{bmatrix} \mathcal{A} & 0 \\ \mathcal{T}(P) L & \mathcal{A} + \mathcal{T}(P) L \end{bmatrix}^*$$
and adding the terms $\begin{bmatrix} I \\ I \end{bmatrix} \epsilon I \begin{bmatrix} I \\ I \end{bmatrix}^*$ and $\begin{bmatrix} 0 \\ \mathcal{T}(P)\mathbf{H} \end{bmatrix} \begin{bmatrix} 0 \\ \mathcal{T}(P)\mathbf{H} \end{bmatrix}^*$ as in (B.47)–(B.49), we see that (B.45) is satisfied. Finally notice that
$$A_f = -Y^{-1} R (I - Y^{-1} X)^{-1} = \mathcal{A} + \mathcal{T}(P) L, \qquad B_f = \mathcal{T}(P), \qquad J_f = \begin{bmatrix} J & \cdots & J \end{bmatrix}. \qquad \square$$
C

Auxiliary Results for the H2-Control Problem

$$x(k) = \sum_{l=-\infty}^{k-1} A_{\theta(k-1)} \cdots A_{\theta(l+1)} B_{\theta(l)}\, w(l), \tag{C.1}$$
$$z(k) = C_{\theta(k)} \Big\{ \sum_{l=-\infty}^{k-1} A_{\theta(k-1)} \cdots A_{\theta(l+1)} B_{\theta(l)}\, w(l) \Big\} + D_{\theta(k)}\, w(k). \tag{C.2}$$
Thus
$$\| z(k) \| \leq \sum_{l=-\infty}^{k-1} \big\| C_{\theta(k)} A_{\theta(k-1)} \cdots A_{\theta(l+1)} B_{\theta(l)}\, w(l) \big\| + \big\| D_{\theta(k)}\, w(k) \big\| \leq C_{\max} \sum_{l=-\infty}^{k-1} \big\| A_{\theta(k-1)} \cdots A_{\theta(l+1)} B_{\theta(l)}\, w(l) \big\| + D_{\max}\, \| w(k) \|. \tag{C.3}$$
$$c_k = \sum_{l} a_{k-l}\, b_l.$$
Since $a = (a_0, a_1, \ldots) \in \ell_1$ (that is, $\|a\|_1 = \sum_{k=0}^{\infty} |a_k| < \infty$) and $b \in \ell_2$ (that is, $\|b\|_2 = (\sum_k |b_k|^2)^{1/2} < \infty$), it follows that the convolution $c = a * b = (c_0, c_1, \ldots)$, $c_k = \sum_l a_{k-l} b_l$, lies itself in $\ell_2$ with $\|c\|_2 \leq \|a\|_1 \|b\|_2$ (cf. [103], p. 529). Hence,
$$\|z\|_2 = \Big( \sum_{k=-\infty}^{\infty} E\big( \|z(k)\|^2 \big) \Big)^{1/2} \leq \|c\|_2 \leq \|a\|_1 \|b\|_2 < \infty.$$
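The $\ell_1$–$\ell_2$ convolution bound invoked here ($\|a * b\|_2 \leq \|a\|_1 \|b\|_2$, a case of Young's inequality) is easy to confirm numerically (sequences illustrative):

```python
import numpy as np

a = np.array([1.0, 0.5, 0.25, 0.125])      # summable "kernel" (l1)
b = np.array([0.3, -1.2, 0.7, 0.05, 2.0])  # finite-energy input (l2)

c = np.convolve(a, b)                      # c_k = sum_l a_{k-l} b_l
lhs = np.linalg.norm(c)                    # ||c||_2
rhs = np.abs(a).sum() * np.linalg.norm(b)  # ||a||_1 ||b||_2
print(lhs <= rhs)
```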
$$\mathcal{A}(k, k) = I, \qquad \mathcal{A}(s, t) \triangleq A_{\theta(s)} \cdots A_{\theta(t+1)}, \quad s \geq t, \tag{C.4}$$
so that
$$x(s+1) = \sum_{t=-\infty}^{s} \mathcal{A}(s, t)\, B_{\theta(t)}\, w(t).$$
We will need the adjoint operator $\mathcal{G}^*$ of $\mathcal{G}$, which is such that for any $w \in \mathcal{C}^m$ and any $v \in \mathcal{C}^q$ we have
$$\langle \mathcal{G}(w); v \rangle = \langle w; \mathcal{G}^*(v) \rangle. \tag{C.5}$$
Now
$$\langle \mathcal{G}(w); v \rangle = \sum_{k=-\infty}^{\infty} E\big( z(k)^* v(k) \big) = E\Big( \sum_{k=-\infty}^{\infty} \Big\{ \sum_{l=-\infty}^{k-1} w(l)^* B_{\theta(l)}^* \mathcal{A}(k-1, l)^* C_{\theta(k)}^* + w(k)^* D_{\theta(k)}^* \Big\} v(k) \Big) = \sum_{l=-\infty}^{\infty} E\Big( w(l)^* \Big\{ \sum_{k=l+1}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k-1, l)^* C_{\theta(k)}^* v(k) \mid \mathcal{F}_l \big) + D_{\theta(l)}^* v(l) \Big\} \Big) = \langle w; \mathcal{G}^*(v) \rangle,$$
and therefore
$$\mathcal{G}^*(v)(l) = \sum_{k=l+1}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k-1, l)^* C_{\theta(k)}^* v(k) \mid \mathcal{F}_l \big) + D_{\theta(l)}^* v(l). \tag{C.6}$$
Proposition C.2. Suppose that System (C.1) is MSS and that $X = (X_1, X_2, \ldots, X_N) \in \mathbb{H}^n$, $X_i = X_i^*$, satisfies for each $i \in \mathbb{N}$
$$A_i^* \mathcal{E}_i(X) A_i - X_i + C_i^* C_i = 0, \tag{C.7}$$
$$D_i^* C_i + B_i^* \mathcal{E}_i(X) A_i = 0. \tag{C.8}$$
Then
$$\mathcal{G}^* \mathcal{G}(w)(k) = \big( D_{\theta(k)}^* D_{\theta(k)} + B_{\theta(k)}^* \mathcal{E}_{\theta(k)}(X) B_{\theta(k)} \big)\, w(k). \tag{C.9}$$

Proof. From (C.6),
$$\mathcal{G}^* \mathcal{G}(w)(l) = \sum_{k=l+1}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k-1, l)^* C_{\theta(k)}^* C_{\theta(k)}\, x(k) \mid \mathcal{F}_l \big) + \sum_{k=l+1}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k-1, l)^* C_{\theta(k)}^* D_{\theta(k)}\, w(k) \mid \mathcal{F}_l \big) + D_{\theta(l)}^* C_{\theta(l)}\, x(l) + D_{\theta(l)}^* D_{\theta(l)}\, w(l).$$
Using (C.7), $C_i^* C_i = X_i - A_i^* \mathcal{E}_i(X) A_i$, we get
$$\sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l, l)^* C_{\theta(k+l+1)}^* C_{\theta(k+l+1)}\, x(k+l+1) \mid \mathcal{F}_l \big) = \sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l, l)^* \big( X_{\theta(k+l+1)} - A_{\theta(k+l+1)}^* X_{\theta(k+l+2)} A_{\theta(k+l+1)} \big)\, x(k+l+1) \mid \mathcal{F}_l \big).$$
Since $x(k+l+2) = A_{\theta(k+l+1)}\, x(k+l+1) + B_{\theta(k+l+1)}\, w(k+l+1)$, each summand telescopes as
$$\mathcal{A}(k+l, l)^* X_{\theta(k+l+1)}\, x(k+l+1) - \mathcal{A}(k+l+1, l)^* X_{\theta(k+l+2)}\, x(k+l+2) + \mathcal{A}(k+l+1, l)^* X_{\theta(k+l+2)}\, B_{\theta(k+l+1)}\, w(k+l+1),$$
so that
$$\sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l, l)^* C_{\theta(k+l+1)}^* C_{\theta(k+l+1)}\, x(k+l+1) \mid \mathcal{F}_l \big) = E\big( B_{\theta(l)}^* X_{\theta(l+1)}\, x(l+1) \mid \mathcal{F}_l \big) + \sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l+1, l)^* X_{\theta(k+l+2)}\, B_{\theta(k+l+1)}\, w(k+l+1) \mid \mathcal{F}_l \big).$$
Moreover, from (C.8),
$$\sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l, l)^* C_{\theta(k+l+1)}^* D_{\theta(k+l+1)}\, w(k+l+1) \mid \mathcal{F}_l \big) = -\sum_{k=0}^{\infty} E\big( B_{\theta(l)}^* \mathcal{A}(k+l+1, l)^* X_{\theta(k+l+2)}\, B_{\theta(k+l+1)}\, w(k+l+1) \mid \mathcal{F}_l \big)$$
and
$$D_{\theta(l)}^* C_{\theta(l)}\, x(l) = E\big( D_{\theta(l)}^* C_{\theta(l)}\, x(l) \mid \mathcal{F}_l \big) = -E\big( B_{\theta(l)}^* X_{\theta(l+1)}\, A_{\theta(l)}\, x(l) \mid \mathcal{F}_l \big).$$
Noticing that
$$E\big( B_{\theta(l)}^* X_{\theta(l+1)} \{ x(l+1) - A_{\theta(l)}\, x(l) \} \mid \mathcal{F}_l \big) = B_{\theta(l)}^* \mathcal{E}_{\theta(l)}(X)\, B_{\theta(l)}\, w(l),$$
we conclude, after adding up all the previous equations, that
$$\mathcal{G}^* \mathcal{G}(w)(l) = \big( D_{\theta(l)}^* D_{\theta(l)} + B_{\theta(l)}^* \mathcal{E}_{\theta(l)}(X)\, B_{\theta(l)} \big)\, w(l). \qquad \square$$
$$\mathcal{G}_U^* \mathcal{G}_c(w)(t) = R_{\theta(t)}^{1/2} B_{\theta(t)}^* \sum_{l=t}^{\infty} E\big( \tilde{\mathcal{A}}(l, t)^* \mathcal{E}_{\theta(l)}(X)\, G_{\theta(l)}\, w(l) \mid \mathcal{F}_t \big). \tag{C.11}$$
Proof. From the control CARE (4.35) we have that
$$\tilde{A}_i^* \mathcal{E}_i(X)\, \tilde{A}_i - X_i + \tilde{C}_i^* \tilde{C}_i = 0 \tag{C.12}$$
and
$$R_i^{1/2} F_i + R_i^{-1/2} B_i^* \mathcal{E}_i(X) A_i = R_i^{-1/2} \big( R_i F_i + B_i^* \mathcal{E}_i(X) A_i \big) = 0. \tag{C.13}$$
1/2
*
+
) 1, l) C
)
E G(k) A(k
(k)F
l
(k)
k=l+1
and
)(k)
GU (v)(k) = C
k1
l=
and thus
+
1/2
1/2
)
A(k 1, l) B(l) R(l) v(l) + D(k) R(k) v(k)
254
$$\mathcal{G}_c^* \mathcal{G}_U(v)(l) = \sum_{k=0}^{\infty} E\Big( G_{\theta(l)}^* \tilde{\mathcal{A}}(k+l, l)^* \tilde{C}_{\theta(k+l+1)}^* \tilde{C}_{\theta(k+l+1)} \sum_{t=-\infty}^{k+l} \tilde{\mathcal{A}}(k+l, t)\, B_{\theta(t)}\, R_{\theta(t)}^{1/2}\, v(t) \,\Big|\, \mathcal{F}_l \Big) + \cdots,$$
where the omitted terms involve $D_{\theta(k+l+1)} R_{\theta(k+l+1)}^{1/2} v(k+l+1)$ and are handled via (C.13). Using (C.12), $\tilde{C}_i^* \tilde{C}_i = X_i - \tilde{A}_i^* \mathcal{E}_i(X) \tilde{A}_i$, the summands become
$$E\Big( G_{\theta(l)}^* \tilde{\mathcal{A}}(k+l, l)^* \big( X_{\theta(k+l+1)} - \tilde{A}_{\theta(k+l+1)}^* X_{\theta(k+l+2)} \tilde{A}_{\theta(k+l+1)} \big) \sum_{t=-\infty}^{k+l} \tilde{\mathcal{A}}(k+l, t)\, B_{\theta(t)}\, R_{\theta(t)}^{1/2}\, v(t) \,\Big|\, \mathcal{F}_l \Big)$$
and, telescoping the differences $\tilde{\mathcal{A}}(k+l, l)^* X_{\theta(k+l+1)} \tilde{\mathcal{A}}(k+l, t) - \tilde{\mathcal{A}}(k+l+1, l)^* X_{\theta(k+l+2)} \tilde{\mathcal{A}}(k+l+1, t)$ and collecting the boundary term,
$$\mathcal{G}_c^* \mathcal{G}_U(v)(l) = E\Big( G_{\theta(l)}^* X_{\theta(l+1)} \sum_{t=-\infty}^{l} \tilde{\mathcal{A}}(l, t)\, B_{\theta(t)}\, R_{\theta(t)}^{1/2}\, v(t) \,\Big|\, \mathcal{F}_l \Big) = G_{\theta(l)}^* \mathcal{E}_{\theta(l)}(X) \sum_{t=-\infty}^{l} \tilde{\mathcal{A}}(l, t)\, B_{\theta(t)}\, R_{\theta(t)}^{1/2}\, v(t).$$
Therefore,
$$\langle \mathcal{G}_c^* \mathcal{G}_U(v); w \rangle = \sum_{l=-\infty}^{\infty} E\big( (\mathcal{G}_c^* \mathcal{G}_U(v)(l))^*\, w(l) \big) = E\Big( \sum_{l=-\infty}^{\infty} \sum_{t=-\infty}^{l} v(t)^* R_{\theta(t)}^{1/2} B_{\theta(t)}^* \tilde{\mathcal{A}}(l, t)^* \mathcal{E}_{\theta(l)}(X)\, G_{\theta(l)}\, w(l) \Big) = E\Big( \sum_{t=-\infty}^{\infty} v(t)^* R_{\theta(t)}^{1/2} B_{\theta(t)}^* \sum_{l=t}^{\infty} E\big( \tilde{\mathcal{A}}(l, t)^* \mathcal{E}_{\theta(l)}(X)\, G_{\theta(l)}\, w(l) \mid \mathcal{F}_t \big) \Big) = \langle v; \mathcal{G}_U^* \mathcal{G}_c(w) \rangle,$$
and therefore
$$\mathcal{G}_U^* \mathcal{G}_c(w)(t) = R_{\theta(t)}^{1/2} B_{\theta(t)}^* \sum_{l=t}^{\infty} E\big( \tilde{\mathcal{A}}(l, t)^* \mathcal{E}_{\theta(l)}(X)\, G_{\theta(l)}\, w(l) \mid \mathcal{F}_t \big),$$
which is (C.11). $\square$
References
9. K. J. Åström and B. Wittenmark. Adaptive Control. Addison-Wesley, 1989.
10. M. Athans. Command and control (C2) theory: A challenge to control science. IEEE Transactions on Automatic Control, 32:286–293, 1987.
11. R. El Azouzi, M. Abbad, and E. Altman. Perturbation of multivariable linear quadratic systems with jump parameters. IEEE Transactions on Automatic Control, 46:1666–1671, 2001.
12. T. Basar and P. Bernhard. H∞-Optimal Control and Related Minimax Design Problems. Birkhäuser, 1990.
13. J. Baczynski, M. D. Fragoso, and E. P. Lopes. On a discrete time linear jump stochastic dynamic game. Int. J. of Syst. Science, 32:979–988, 2001.
14. A. V. Balakrishnan. Kalman Filtering Theory. Optimization Software, Publications Division, 1987.
15. Y. Bar-Shalom. Tracking methods in a multitarget environment. IEEE Transactions on Automatic Control, 23:618–626, 1978.
16. Y. Bar-Shalom and K. Birmiwal. Variable dimension filter for maneuvering target tracking. IEEE Transactions on Automatic Control, 18:621–628, 1982.
17. Y. Bar-Shalom and X. R. Li. Estimation and Tracking: Principles, Techniques and Software. Artech House, 1993.
18. G. K. Basak, A. Bisi, and M. K. Ghosh. Stability of a random diffusion with linear drift. Journal of Mathematical Analysis and Applications, 202:604–622, 1996.
19. A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM Press, 2001.
20. K. Benjelloun and E. K. Boukas. Mean square stochastic stability of linear time-delay systems with Markovian jump parameters. IEEE Transactions on Automatic Control, 43:1456–1460, 1998.
21. A. Bensoussan. Stochastic Control of Partially Observable Systems. Cambridge University Press, 1992.
22. F. Bernard, F. Dufour, and P. Bertrand. On the JLQ problem with uncertainty. IEEE Transactions on Automatic Control, 42:869–872, 1997.
23. T. Bielecki and P. R. Kumar. Necessary and sufficient conditions for a zero inventory policy to be optimal in an unreliable manufacturing system. In Proceedings of the 25th IEEE Conference on Decision and Control, pages 248–250, Athens, 1986.
24. P. Billingsley. Probability and Measure. John Wiley, 1986.
25. R. R. Bitmead and B. O. Anderson. Lyapunov techniques for the exponential stability of linear difference equations with random coefficients. IEEE Transactions on Automatic Control, 25:782–787, 1980.
26. S. Bittanti, A. J. Laub, and J. C. Willems. The Riccati Equation. Springer-Verlag, 1991.
27. W. P. Blair and D. D. Sworder. Continuous-time regulation of a class of econometric models. IEEE Transactions on Systems, Man and Cybernetics, SMC-5:341–346, 1975.
28. W. P. Blair and D. D. Sworder. Feedback control of a class of linear discrete systems with jump parameters and quadratic cost criteria. International Journal of Control, 21:833–844, 1975.
29. G. Blankenship. Stability of linear differential equations with random coefficients. IEEE Transactions on Automatic Control, 22:834–838, 1977.
30. H. A. P. Blom. An efficient filter for abruptly changing systems. In Proceedings of the 23rd IEEE Conference on Decision and Control, pages 656–658, Las Vegas, NV, 1984.
31. H. A. P. Blom. Continuous-discrete filtering for the systems with Markovian switching coefficients and simultaneous jump. In Proceedings of the 21st Asilomar Conference on Signals, Systems and Computers, pages 244–248, Pacific Grove, 1987.
32. H. A. P. Blom and Y. Bar-Shalom. The interacting multiple model algorithm for systems with Markovian jumping parameters. IEEE Transactions on Automatic Control, 33:780–783, 1988.
33. S. Bohacek and E. Jonckheere. Nonlinear tracking over compact sets with linear dynamically varying H∞ control. SIAM Journal on Control and Optimization, 40:1042–1071, 2001.
70. O. L. V. Costa and S. Guerra. Stationary filter for linear minimum mean square error estimator of discrete-time Markovian jump systems. IEEE Transactions on Automatic Control, 47:1351–1356, 2002.
71. O. L. V. Costa and R. P. Marques. Mixed H2/H∞ control of discrete-time Markovian jump linear systems. IEEE Transactions on Automatic Control, 43:95–100, 1998.
72. O. L. V. Costa and R. P. Marques. Robust H2 control for discrete-time Markovian jump linear systems. International Journal of Control, 73:11–21, 2000.
73. O. L. V. Costa and E. F. Tuesta. Finite horizon quadratic optimal control and a separation principle for Markovian jump linear systems. IEEE Transactions on Automatic Control, 48:1836–1842, 2003.
74. O. L. V. Costa and E. F. Tuesta. H2 control and the separation principle for discrete-time Markovian jump linear systems. Mathematics of Control, Signals and Systems, 16:320–350, 2004.
75. O. L. V. Costa, E. O. Assumpção, E. K. Boukas, and R. P. Marques. Constrained quadratic state feedback control of discrete-time Markovian jump linear systems. Automatica, 35:617–626, 1999.
76. O. L. V. Costa and J. C. C. Aya. Monte Carlo TD(λ)-methods for the optimal control of discrete-time Markovian jump linear systems. Automatica, 38:217–225, 2002.
77. A. Czornik. On Control Problems for Jump Linear Systems. Wydawnictwo Politechniki Slaskiej, 2003.
78. A. Czornik and A. Swierniak. On the discrete JLQ and JLQG problems. Nonlinear Analysis, 47:423–434, 2001.
79. M. H. A. Davis. Linear Estimation and Stochastic Control. Chapman and Hall, 1977.
80. M. H. A. Davis and R. B. Vinter. Stochastic Modelling and Control. Chapman and Hall, 1985.
81. M. H. A. Davis and S. I. Marcus. An introduction to nonlinear filtering. In M. Hazewinkel and J. C. Willems, editors, Stochastic Systems: The Mathematics of Filtering and Identification and Applications, volume 2, pages 53–75. D. Reidel Publishing Company, 1981.
82. D. P. de Farias, J. C. Geromel, and J. B. R. do Val. A note on the robust control of Markov jump linear uncertain systems. Optimal Control Applications & Methods, 23:105–112, 2002.
83. D. P. de Farias, J. C. Geromel, J. B. R. do Val, and O. L. V. Costa. Output feedback control of Markov jump linear systems in continuous-time. IEEE Transactions on Automatic Control, 45:944–949, 2000.
84. C. E. de Souza and M. D. Fragoso. On the existence of a maximal solution for generalized algebraic Riccati equations arising in stochastic control. Systems and Control Letters, 14:233–239, 1990.
85. C. E. de Souza and M. D. Fragoso. H∞ control for linear systems with Markovian jumping parameters. Control Theory and Advanced Technology, 9:457–466, 1993.
86. C. E. de Souza and M. D. Fragoso. H∞ filtering for Markovian jump linear systems. International Journal of Systems Science, 33:909–915, 2002.
87. C. E. de Souza and M. D. Fragoso. Robust H∞ filtering for uncertain Markovian jump linear systems. International Journal of Robust and Nonlinear Control, 12:435–446, 2002.
107. Y. Fang. A new general sufficient condition for almost sure stability of jump linear systems. IEEE Transactions on Automatic Control, 42:378–382, 1997.
108. Y. Fang, K. A. Loparo, and X. Feng. Almost sure and moment stability of jump linear systems. International Journal of Control, 59:1281–1307, 1994.
109. X. Feng, K. A. Loparo, Y. Ji, and H. J. Chizeck. Stochastic stability properties of jump linear systems. IEEE Transactions on Automatic Control, 37:38–53, 1992.
110. W. H. Fleming and W. N. MacEneaney. Risk sensitive optimal control and differential games. In K. S. Lawrence, editor, Stochastic Theory and Adaptive Control, volume 184 of Lecture Notes in Control and Information Science, pages 185–197. Springer-Verlag, 1992.
111. W. H. Fleming. Future Directions in Control Theory: A Mathematical Perspective. SIAM, Philadelphia, 1988.
112. W. H. Fleming and R. W. Rishel. Deterministic and Stochastic Optimal Control. Springer-Verlag, 1975.
113. J. J. Florentin. Optimal control of continuous-time Markov stochastic systems. Journal of Electronics and Control, 10:473–488, 1961.
114. M. D. Fragoso. On a partially observable LQG problem for systems with Markovian jumping parameters. Systems and Control Letters, 10:349–356, 1988.
115. M. D. Fragoso. On a discrete-time jump LQG problem. International Journal of Systems Science, 20:2539–2545, 1989.
116. M. D. Fragoso. A small random perturbation analysis of a partially observable LQG problem with jumping parameter. IMA Journal of Mathematical Control and Information, 7:293–305, 1990.
117. M. D. Fragoso and J. Baczynski. Optimal control for continuous time LQ problems with infinite Markov jump parameters. SIAM Journal on Control and Optimization, 40:270–297, 2001.
118. M. D. Fragoso and J. Baczynski. Lyapunov coupled equations for continuous-time infinite Markov jump linear systems. Journal of Mathematical Analysis and Applications, 274:319–335, 2002.
119. M. D. Fragoso and J. Baczynski. Stochastic versus mean square stability in continuous-time linear infinite Markov jump parameters. Stochastic Analysis and Applications, 20:347–356, 2002.
120. M. D. Fragoso and O. L. V. Costa. Unified approach for mean square stability of continuous-time linear systems with Markovian jumping parameters and additive disturbances. In Proceedings of the 39th Conference on Decision and Control, pages 2361–2366, Sydney, Australia, 2000.
121. M. D. Fragoso and O. L. V. Costa. Mean square stabilizability of continuous-time linear systems with partial information on the Markovian jumping parameters. Stochastic Analysis and Applications, 22:99–111, 2004.
122. M. D. Fragoso, O. L. V. Costa, and C. E. de Souza. A new approach to linearly perturbed Riccati equations arising in stochastic control. Applied Mathematics and Optimization, 37:99–126, 1998.
123. M. D. Fragoso, J. B. R. do Val, and D. L. Pinto Junior. Jump linear H∞ control: the discrete-time case. Control Theory and Advanced Technology, 10:1459–1474, 1995.
124. M. D. Fragoso, E. C. S. Nascimento, and J. Baczynski. H∞ control for continuous-time linear systems with infinite Markov jump parameters via semigroup. In Proceedings of the 39th Conference on Decision and Control, pages 1160–1165, Sydney, Australia, 2001.
125. M. D. Fragoso and E. M. Hemerly. Optimal control for a class of noisy linear systems with Markovian jumping parameters and quadratic cost. International Journal of Systems Science, 22:2553–3561, 1991.
126. B. A. Francis. A Course in H∞ Control Theory, volume 88 of Lecture Notes in Control and Information Sciences. Springer-Verlag, 1987.
127. Z. Gajic and R. Losada. Monotonicity of algebraic Lyapunov iterations for optimal control of jump parameter linear systems. Systems and Control Letters, 41:175–181, 2000.
128. J. Gao, B. Huang, and Z. Wang. LMI-based robust H∞ control of uncertain linear jump systems with time-delay. Automatica, 37:1141–1146, 2001.
129. J. C. Geromel. Optimal linear filtering under parameter uncertainty. IEEE Transactions on Signal Processing, 47:168–175, 1999.
130. J. C. Geromel, P. L. D. Peres, and S. R. Souza. A convex approach to the mixed H2/H∞ control problem for discrete-time uncertain systems. SIAM Journal on Control and Optimization, 33:1816–1833, 1995.
131. L. El Ghaoui and S. I. Niculescu, editors. Advances in Linear Matrix Inequality Methods in Control. SIAM Press, 1999.
132. M. K. Ghosh, A. Arapostathis, and S. I. Marcus. A note on an LQG regulator with Markovian switching and pathwise average cost. IEEE Transactions on Automatic Control, 40:1919–1921, 1995.
133. K. Glover and J. C. Doyle. State-space formulae for stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity. Systems and Control Letters, 11:167–172, 1988.
134. W. S. Gray and O. González. Modelling electromagnetic disturbances in closed-loop computer controlled flight systems. In Proceedings of the 1998 American Control Conference, pages 359–364, Philadelphia, PA, 1998.
135. W. S. Gray, O. R. González, and M. Doğan. Stability analysis of digital linear flight controllers subject to electromagnetic disturbances. IEEE Transactions on Aerospace and Electronic Systems, 36:1204–1218, 2000.
136. M. Green and D. J. N. Limebeer. Linear Robust Control. Prentice Hall, 1995.
137. B. E. Griffiths and K. A. Loparo. Optimal control of jump linear quadratic Gaussian systems. International Journal of Control, 42:791–819, 1985.
138. M. T. Hadidi and C. S. Schwartz. Linear recursive state estimators under uncertain observations. IEEE Transactions on Automatic Control, 24:944–948, 1979.
139. R. Z. Hasminskii. Stochastic Stability of Differential Equations. Sijthoff and Noordhoff, 1980.
140. J. W. Helton and M. R. James. Extending H∞ Control to Nonlinear Systems. Frontiers in Applied Mathematics, SIAM, 1999.
141. A. Isidori and A. Astolfi. Disturbance attenuation and H∞ control via measurement feedback in nonlinear systems. IEEE Transactions on Automatic Control, 37:1283–1293, 1992.
142. M. R. James. Asymptotic analysis of nonlinear stochastic risk-sensitive control and differential games. Mathematics of Control, Signals and Systems, 5:401–417, 1991.
143. Y. Ji and H. J. Chizeck. Controllability, observability and discrete-time Markovian jump linear quadratic control. International Journal of Control, 48:481–498, 1988.
144. Y. Ji and H. J. Chizeck. Optimal quadratic control of jump linear systems with separately controlled transition probabilities. International Journal of Control, 49:481–491, 1989.
145. Y. Ji and H. J. Chizeck. Controllability, observability and continuous-time Markovian jump linear quadratic control. IEEE Transactions on Automatic Control, 35:777–788, 1990.
146. Y. Ji and H. J. Chizeck. Jump linear quadratic Gaussian control: steady state solution and testable conditions. Control Theory and Advanced Technology, 6:289–319, 1990.
147. Y. Ji, H. J. Chizeck, X. Feng, and K. A. Loparo. Stability and control of discrete-time jump linear systems. Control Theory and Advanced Technology, 7:247–270, 1991.
148. M. Kac. Discrete Thoughts: Essays on Mathematics, Science and Philosophy. Birkhäuser, 1992.
149. G. Kallianpur. Stochastic Filtering Theory. Springer-Verlag, 1980.
150. T. Kazangey and D. D. Sworder. Effective federal policies for regulating residential housing. In Proceedings of the Summer Computer Simulation Conference, pages 1120–1128, 1971.
151. B. V. Keulen, M. Peters, and R. Curtain. H∞ control with state feedback: The infinite dimensional case. Journal of Mathematical Systems, Estimation and Control, 3:1–39, 1993.
152. M. Khambaghi, R. Malhamé, and M. Perrier. White water and broke recirculation policies in paper mills via Markovian jump linear quadratic control. In Proceedings of the American Control Conference, pages 738–743, Philadelphia, 1998.
153. P. P. Khargonekar and M. A. Rotea. Mixed H2/H∞ control: a convex optimization approach. IEEE Transactions on Automatic Control, 36:824–837, 1991.
154. F. Kozin. A survey of stability of stochastic systems. Automatica, 5:95–112, 1969.
155. N. N. Krasovskii and E. A. Lidskii. Analytical design of controllers in systems with random attributes I, II, III. Automation and Remote Control, 22:1021–1025, 1141–1146, 1289–1294, 1961.
156. C. S. Kubrusly. Mean square stability for discrete bounded linear systems in Hilbert spaces. SIAM Journal on Control and Optimization, 23:19–29, 1985.
157. C. S. Kubrusly. On discrete stochastic bilinear systems stability. Journal of Mathematical Analysis and Applications, 113:36–58, 1986.
158. C. S. Kubrusly and O. L. V. Costa. Mean square stability conditions for discrete stochastic bilinear systems. IEEE Transactions on Automatic Control, 36:1082–1087, 1985.
159. H. Kushner. Stochastic Stability and Control. Academic Press, New York, 1967.
160. A. Leizarowitz. Estimates and exact expressions for Lyapunov exponents of stochastic linear differential equations. Stochastics, 24:335–356, 1988.
161. Z. G. Li, Y. C. Soh, and C. Y. Wen. Sufficient conditions for almost sure stability of jump linear systems. IEEE Transactions on Automatic Control, 45:1325–1329, 2000.
162. D. J. N. Limebeer, B. D. O. Anderson, P. P. Khargonekar, and M. Green. A game theoretic approach to H∞ control for time varying systems. SIAM Journal on Control and Optimization, 30:262–283, 1992.
163. K. A. Loparo. Stochastic stability of coupled linear systems: A survey of methods and results. Stochastic Analysis and Applications, 2:193–228, 1984.
164. K. A. Loparo, M. R. Buchner, and K. Vasudeva. Leak detection in an experimental heat exchanger process: a multiple model approach. IEEE Transactions on Automatic Control, 36:167–177, 1991.
165. D. G. Luenberger. Introduction to Dynamic Systems: Theory, Models & Applications. John Wiley & Sons, 1979.
166. M. S. Mahmoud and P. Shi. Robust Kalman filtering for continuous time-lag systems with Markovian jump parameters. IEEE Transactions on Circuits and Systems, 50:98–105, 2003.
167. M. S. Mahmoud and P. Shi. Robust stability, stabilization and H∞ control of time-delay systems with Markovian jump parameters. International Journal of Robust and Nonlinear Control, 13:755–784, 2003.
168. R. Malhamé and C. Y. Chong. Electric load model synthesis by diffusion approximation of a high order hybrid state stochastic system. IEEE Transactions on Automatic Control, 30:854–860, 1985.
169. M. Mariton. Almost sure and moments stability of jump linear systems. Systems and Control Letters, 30:1145–1147, 1985.
170. M. Mariton. On controllability of linear systems with stochastic jump parameters. IEEE Transactions on Automatic Control, 31:680–683, 1986.
171. M. Mariton. Jump Linear Systems in Automatic Control. Marcel Dekker, 1990.
172. M. Mariton and P. Bertrand. Output feedback for a class of linear systems with jump parameters. IEEE Transactions on Automatic Control, 30:898–900, 1985.
173. M. Mariton and P. Bertrand. Robust jump linear quadratic control: A mode stabilizing solution. IEEE Transactions on Automatic Control, 30:1145–1147, 1985.
174. M. Mariton and P. Bertrand. Reliable flight control systems: Components placement and feedback synthesis. In Proceedings of the 10th IFAC World Congress, pages 150–154, Munich, 1987.
175. R. Morales-Menendez, N. de Freitas, and D. Poole. Estimation and control of industrial processes with particle filters. In Proceedings of the 2003 American Control Conference, pages 579–584, Denver, 2003.
176. T. Morozan. Optimal stationary control for dynamic systems with Markov perturbations. Stochastic Analysis and Applications, 1:299–325, 1983.
177. T. Morozan. Stabilization of some stochastic discrete-time control systems. Stochastic Analysis and Applications, 1:89–116, 1983.
178. T. Morozan. Stability and control for linear systems with jump Markov perturbations. Stochastic Analysis and Applications, 13:91–110, 1995.
179. A. S. Morse. Control Using Logic-Based Switching. Springer-Verlag, London, 1997.
180. N. E. Nahi. Optimal recursive estimation with uncertain observation. IEEE Transactions on Information Theory, 15:457–462, 1969.
181. A. W. Naylor and G. R. Sell. Linear Operator Theory in Engineering and Science. Springer-Verlag, second edition, 1982.
182. Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM Press, 1994.
183. K. Ogata. Discrete-Time Control Systems. Prentice Hall, 2nd edition, 1995.
184. J. Oostveen and H. Zwart. Solving the infinite-dimensional discrete-time algebraic Riccati equation using the extended symplectic pencil. Internal Report W-9511, University of Groningen, Department of Mathematics, 1995.
185. G. Pan and Y. Bar-Shalom. Stabilization of jump linear Gaussian systems without mode observations. International Journal of Control, 64:631–661, 1996.
186. Z. Pan and T. Basar. Zero-sum differential games with random structures and applications in H∞ control of jump linear systems. In 6th International Symposium on Dynamic Games and Applications, pages 466–480, Quebec, 1994.
187. Z. Pan and T. Basar. H∞ control of large scale jump linear systems via averaging and aggregation. International Journal of Control, 72:866–881, 1999.
188. B. Park and W. H. Kwon. Robust one-step receding horizon control of discrete-time Markovian jump uncertain systems. Automatica, 38:1229–1235, 2002.
189. S. Patilkulkarni, W. S. Gray, H. Herencia-Zapana, and O. R. González. On the stability of jump linear systems driven by finite-state machines with Markovian inputs. In Proceedings of the 2004 American Control Conference, 2004.
190. R. Patton, P. Frank, and R. Clark, editors. Fault Diagnosis in Dynamic Systems. Prentice Hall, 1989.
191. M. A. Rami and L. El Ghaoui. Robust stabilization of jump linear systems using linear matrix inequalities. In IFAC Symposium on Robust Control Design, pages 148–151, Rio de Janeiro, 1994.
192. M. A. Rami and L. El Ghaoui. LMI optimization for nonstandard Riccati equations arising in stochastic control. IEEE Transactions on Automatic Control, 41:1666–1671, 1996.
193. A. C. Ran and R. Vreugdenhil. Existence and comparison theorems for the algebraic Riccati equations for continuous and discrete-time systems. Linear Algebra and its Applications, 99:63–83, 1988.
194. S. Ross. Applied Probability Models with Optimization Applications. Holden-Day, 1970.
195. A. Saberi, P. Sannuti, and B. M. Chen. H2 Optimal Control. Prentice-Hall, 1995.
196. P. A. Samuelson. Interactions between the multiplier analysis and the principle of acceleration. Review of Economic Statistics, 21:75–85, 1939.
197. P. Shi and E. K. Boukas. H∞ control for Markovian jumping linear systems with parametric uncertainty. Journal of Optimization Theory and Applications, 95:75–99, 1997.
198. P. Shi, E. K. Boukas, and R. K. Agarwal. Control of Markovian jump discrete-time systems with norm bounded uncertainty and unknown delay. IEEE Transactions on Automatic Control, 44:2139–2144, 1999.
199. P. Shi, E. K. Boukas, and R. K. Agarwal. Robust control for Markovian jumping discrete-time systems. International Journal of Systems Science, 30:787–797, 1999.
200. A. A. G. Siqueira and M. H. Terra. Nonlinear and Markovian H∞ controls of underactuated manipulators. IEEE Transactions on Control Systems Technology, to appear.
201. E. D. Sontag. Mathematical Control Theory. Springer-Verlag, 1990.
202. A. Stoica and I. Yaesh. Jump Markovian-based control of wing deployment for an uncrewed air vehicle. Journal of Guidance, 25:407–411, 2002.
203. A. A. Stoorvogel. The H∞ Control Problem: A State Space Approach. Prentice-Hall, New York, 1992.
Notation
As a general rule, lowercase Greek and Roman letters are used for vector and scalar variables and functions, while uppercase Greek and Roman letters are used for matrix variables and functions, as well as for sequences of matrices. Sets and spaces are denoted by blackboard uppercase characters (such as R, C) and operators by calligraphic characters (such as L, T).
Sometimes it is not possible or convenient to adhere completely to this rule, but the exceptions should be clear from their specific context.
The following lists present the main symbols and general notation used throughout the book, each followed by a brief description and the page number of its definition or first appearance.
Symbol  Description  Page
∅  Empty set.  222
‖·‖  Any norm.  16
‖·‖1  1-norm.  16
‖·‖1  Induced 1-norm.  16
‖·‖1t  1t-norm.  219
‖·‖2  Euclidean norm.  16
‖·‖2  Induced Euclidean norm.  16
‖·‖max  max-norm.  16
‖GF‖2^2  H2 cost for system GF; also denotes the H2 norm.  80, 83
   Optimal H2 cost for system GF.  80
⊕  Direct sum of subspaces.  21
⊗  Kronecker product.  17
(·)*  Conjugate transpose.  15
(·)′  Transpose.  15
1{·}  Indicator function.  31
   N sequence of system matrices.  31
Symbol  Description  Page
   N sequence of uncertainties.  175
   N sequence of uncertainties.  175
   N sequence of uncertainties.  175
γ  Upper bound for H∞ control.  146
Θ0  The set of all F0-measurable variables in N.  21
θ(k)  Markov state at time k.  4, 59
θ0 = θ(0)     63
μ(k) = E(x(k))     31
   Estimation error.  134
πi(k), πi  Probabilities.  48
   N sequence of distributions.  9
   A set.  58
   Lyapunov function.  22
   Linear operator.  17
   Linear operator.  17
   A set related with H2 control.  86
   A set related with H2 control.  86
   A set related with robust H2 control.  175
Ω  Sample space.  20
   A set.  20
A  N sequence of system matrices.  8
Â  N sequence of filter matrices.  103
B  N sequence of system matrices.  8
B̂  N sequence of filter matrices.  103
C  N sequence of system matrices.  8
Ĉ  N sequence of filter matrices.  103
Cm  A Hilbert space.  21
D  N sequence of system matrices.  8
diag[·]  Block diagonal matrix.  34
dg[·]  Block diagonal matrix.  119
E(·)  Expected value.  21
F  N sequence.  57
G  N sequence of system matrices.  8
H  N sequence of system matrices.  8
I, In  Order n identity matrix.  16
i  Index (usually operation mode).  2
j  Index (usually operation mode).  4
k  Time.  2
Symbol  Description  Page
M  Riccati gain (filter).  108
N  Number of operation modes.  9
P  Transition probability matrix.  4
pij  Probability of transition from mode i to mode j.  4
Q(k)  N sequence of second moment matrices at time k.  31
Qi(k)  Second moment of x(k) at mode i.  31
q(k)  Vector of first moments at time k.  31
qi(k)  First moment of x(k) at mode i.  31
Re  Real part.  94
rσ(·)  Spectral radius.  15
S(k)  A second moment matrix.  121
S  A stationary second moment matrix.  122
Sc  Controllability Grammian.  24, 83
So  Observability Grammian.  25, 83
s = {sij}  Positive numbers (a generalization of pij).  203
tr(·)  Trace operator.  16
U(k, l)  N sequence.  49
Uj(k, l)  A second moment matrix of x at times k, l and mode j.  49
u(k)  Control variable at time k.  8
ū(k)  An optimal control law.  80
v(k)  Filtering error at time k.  103
X  Solution of the control CARE.  78
X̂  Stabilizing solution of the control CARE.  226
XF(·)  A linear operator.  146
XF0(·)  A special case of XF(·).  146
Xl  An N sequence.  213
X+  Maximal solution of the control CARE.  213
x(k)  System state at time k.  4
x0 = x(0)     103
xe(k)  Augmented state.  120
x̂(k)  Estimated state at time k.  104
x̃e(k)  Estimation error at time k.  105
w(k)  Additive disturbance at time k.  8
Y  Solution to the filtering CARE.  108
y(k)  Measured variable at time k.  8
ZF  A linear operator.  146
ZF0(·)  A special case of ZF(·).  146
z(k)  System output at time k.  8
Symbol  Description  Page
   A vector.  114
zs  An output sequence.  83
B(X, Y)  Banach space of X into Y.  15
B(X) = B(X, X)     15
Cn  n-dimensional complex Euclidean space.  15
F  Set of stabilizing controllers.  79
Hn,m  Linear space.  16
Hn = Hn,n  Linear space.  17
Hn+  Cone of positive semidefinite N matrices.  17
L  A set.  208
M  A set.  208
N  Set of operation modes.  9
   Fold product space of N.  63
Nk  Copies of N.  20
P  Polytope of transition probability matrices.  175
Q(k)  Second moment of x(k).  31
Q0 = Q(0)     15
Rn     15
X     15
Y     20
T     49
Tk = {i ∈ T; i ≤ k}     204
U(k, l)     34
W     34
A1, ..., A4     34
Ckm     21
E(·)     33
F(X)     78
F(·)     205
F(·, ·)     205
G     2
Gc     139
Gcl     110
GF     80
GK     133
GU     139
Gv     140
Symbol  Description  Page
I(·)  Identity operator.  15
J(·)  A linear operator.  33
K(·)  A mapping.  87
L(·)  A linear operator.  33
L+(·)  A linear operator.  213
N  A matrix.  34
O(·)  An operator.  83
P(·)  Probability.  4, 20
T(·)  A linear operator.  33
U  Set of admissible controllers.  79
Uc  Set of admissible controllers.  73
V(·)  A linear operator.  33
V0(·)  A linear operator.  150
W(·)  A mapping.  205
X(·)  A mapping.  205
X(·, ·)  A mapping.  87
Y(·)  A mapping.  205
Y(·)  A mapping.  205
Y(·, ·)  A mapping.  217
Z(·)  A mapping (H∞ Riccati equation).  157
F  σ-field.  20
Fk  σ-field.  20
Gk  σ-field.  73
   σ-field.  20
J  Cost.  26
JT  Cost.  26
J(θ0, x0, u)  Expected cost.  74
J̄(θ0, x0)  Minimal expected cost.  74
Jav(θ0, x0, u)  Long run average cost.  79
J̄av(θ0, x0)  Optimal long run average cost.  79
M(·)  Affine operator (H∞ Riccati equation).  157
N  σ-field.  20
R(·)  A mapping (H∞ Riccati equation).  157
U(·)  A mapping (H∞ Riccati equation).  157
V(·, k)  A linear operator.  116
Abbreviation  Description  Page
APV  Analytical Point of View.  12
ARE  Algebraic Riccati Equation.  27
ASC  Almost Sure Convergence.  63
AWSS  Asymptotically Wide Sense Stationary.  49
CARE  Coupled Algebraic Riccati Equations.  78
DC  Direct Current.  189
HMM  Hidden Markov Model.  12
i.i.d.  Independent and Identically Distributed.  13
IMM  Interacting Multiple Models.  102
JLQ  Jump Linear Quadratic.  13
LMI  Linear Matrix Inequality.  27
LMMSE  Linear Minimal Mean Square Error.  115
LQG  Linear Quadratic Gaussian.  131
LQR  Linear Quadratic Regulator.  26
LTI  Linear Time-Invariant.  21
MJLS  Markov Jump Linear System(s).  1
MSS  Mean Square Stability.  36
MM  Multiple Models.  12
MWe  Electrical power in MW.  167
rms  Root Mean Square.  198
SS  Stochastic Stability.  37
w.p.1  With Probability One.  65
Index
finite horizon, 77
infinite horizon, 80
stabilizing solution, 208, 216
existence, 217
unicity, 216
Cauchy summable sequence, 19
Controllability, 24
Grammian, 24
matrix, 24
Coupled algebraic Riccati equations, see
CARE
Coupled Lyapunov equation, 41
Coupled Riccati difference equations, 205
Detectability, 204
for LTI systems, 26
for MJLS, 57
test, 59
Diagonal matrix, 34
Difference Riccati equation, 26
discrete-time homogeneous linear time-invariant system, 22
Duality, 204
CARE, 203
Equilibrium point, 22
Ergodic assumption, 48
Filtering
applied to a solar thermal receiver, 171
CARE, 110
examples, 197–200
H∞ control
History, 166
main result, 147
necessity, 151
sufficiency, 148
parallel with LTI case, 145, 148
problem, 146
recursive algorithm, 162
stabilizability condition, 162
HMM, 12
Homeomorphic, uniformly, 17
IMM filter, 200
Indicator function, 16, 31
Inner product
in Hn,m, 16
in ℓ2(Cm), 21
Inverse
generalized, 27
Kalman filter, 104, 131
Kronecker product, 17
Linear filtering, see Filtering
Linear matrix inequalities, see LMI
Linear quadratic Gaussian, see LQG
Linear-quadratic regulator, see LQR
LMI, 27–28
advantages, 174
definition, 28
detectability test, 59
disadvantages, 174
maximal solution for the CARE, 215
mixed H2/H∞ control, 182
robust control, 173–188
robust filtering formulation, 124
robust H∞ control, 187
stabilizability test, 57
uncertainties
on the system, 174
transition probability matrix, 175
Long run average cost, see Quadratic
optimal control
LQG, 131
LQR, 26–27
for LTI systems, 26
for MJLS, 80
Lyapunov equation
for LTI systems, 23
for MJLS, 42
Lyapunov function, 22
for LTI systems, 23
for MJLS, 41
Lyapunov stability, 22
Lyapunov theorem, 22
Marginal propensity to save, 170
Markov chain, 2
Markov jump linear system, see MJLS
Mean square detectability, see
Detectability
Mean square stability, see MSS
Mean square stabilizability, see
Stabilizability
Mixed H2/H∞ control, 182
MJLS
comparison with LTI case, 11–12
examples, 48
filtering, 9
general structure, 8
H2 control, 82
H∞ control, 10, 143
History, 13–14
mixed H2/H∞ control, 182
optimal control, 9
partial information, 10
robust H2 control, 174
robust H∞ control, 187
separation principle, 132, 135
and filter structure, 136
stability, 9
Mode, see Operation mode
Models
Markov, see MJLS
philosophy, 1
stochastic, 2
Monte Carlo simulations
Multiplier-accelerator model, 171
solar thermal receiver, 169, 173
Moore-Penrose inverse, 27
MSS
a stable system with unstable modes,
39
advantages, 29
an unstable system with stable
modes, 39
definition, 36
easy to check conditions, 45, 46
examples, 37–41
History, 69–70
main result, 36
nonhomogeneous case, 48–57
spectral radius condition, 43
with additive disturbance
ℓ2 sequence, 55
wide sense stationary sequence, 49
Multiplier, 170
Multiplier-accelerator model, 3, 169
description, 169
Monte Carlo simulations, 171
optimal policy, 169
National income, 170
Norm
1-norm, 16
2-norm, 16
equivalence, 16
H2 norm, 83
induced 1-norm, 16
induced 2-norm, 16
max-norm, 16
standard, 15
Norm bounded uncertainties, 175
Observability, 25
Grammian, 25
matrix, 25
Operation mode, 2
Operator
hermitian, 17
positive, 17
Optimal control, see Quadratic optimal
control
Optimal filtering, see Filtering
Optimal H∞ control, 188
algorithm, 188
an application, 189
Partial information
History, 141
Polytope
of transition probability matrices, 175
Positive operator, 17
Probabilistic space, 20–21
Probability measure, 20
Quadratic optimal control
infinite horizon
with ℓ2 input, 90–99
Solar thermal receiver
model, 168
Monte Carlo simulations, 169, 173
optimal control, 167
partial information, 171
Spectral radius, 15
Stability
asymptotic, 22
global asymptotic, 22
Lyapunov, 22
mean square, see MSS
with probability one, 63–66
Stabilizability, 204
for LTI systems, 25
for MJLS, 57
partial information, 59
test, 57
State feedback, 24
Stochastic basis, 20
System
homogeneous, 22
timeinvariant, 22
Trace operator, 16
Transition probability matrix, 4
UarmII manipulator, 189
Markov model, 192
Uniformly homeomorphic, 17