
6.241 Dynamic Systems and Control


Lecture 5: Matrix Perturbations
Readings: DDV, Chapter 5

Emilio Frazzoli
Aeronautics and Astronautics

Massachusetts Institute of Technology

February 16, 2011


Outline

Matrix Perturbations


Introduction

Important issues in engineering, and in systems and control science in particular, concern the sensitivity of computations, solution algorithms, and design methods to uncertainty in the input parameters.

For example: what is the smallest perturbation (e.g., measured in the 2-norm) that makes a matrix singular? What is the impact of uncertainty in the data on the solution of a least-squares problem? Etc.


Additive Perturbation

Theorem (Additive Perturbation)


Let $A \in \mathbb{C}^{m \times n}$ be a matrix with full column rank ($= n$). Then
$$\min \left\{ \|\Delta\|_2 \;:\; \Delta \in \mathbb{C}^{m \times n},\ A + \Delta \text{ has rank} < n \right\} = \sigma_{\min}(A).$$

Proof:
If $A + \Delta$ has rank $< n$, then there exists $x$, with $\|x\|_2 = 1$, such that $(A + \Delta)x = 0$, i.e., $\Delta x = -Ax$.
In terms of norms, $\|\Delta\|_2 = \|\Delta\|_2 \|x\|_2 \geq \|\Delta x\|_2 = \|Ax\|_2 \geq \sigma_{\min}(A)$.
To prove that the bound is tight, let us construct a $\Delta$ that achieves it.
Choose $\Delta = -\sigma_{\min}\, u_{\min} v_{\min}^*$. Clearly, $\|\Delta\|_2 = \sigma_{\min}$.
$$(A + \Delta)v_{\min} = \left( \sum_{i=1}^{n} \sigma_i u_i v_i^* \right) v_{\min} - \sigma_{\min}\, u_{\min} v_{\min}^* v_{\min} = \sigma_{\min} u_{\min} - \sigma_{\min} u_{\min} = 0.$$
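The construction in the proof is easy to check numerically. Below is a minimal NumPy sketch (not part of the original slides; the random test matrix is arbitrary) that forms $\Delta = -\sigma_{\min} u_{\min} v_{\min}^*$ from the SVD and verifies that $\|\Delta\|_2 = \sigma_{\min}(A)$ and that $A + \Delta$ loses rank:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))          # full column rank (with probability 1)

U, s, Vh = np.linalg.svd(A)              # A = U @ diag(s) @ Vh
u_min, v_min = U[:, -1], Vh[-1, :]       # singular vectors for sigma_min
sigma_min = s[-1]

Delta = -sigma_min * np.outer(u_min, v_min)   # rank-one perturbation from the proof

print(np.linalg.norm(Delta, 2))          # equals sigma_min
print(sigma_min)
print(np.linalg.matrix_rank(A + Delta))  # 3 < 4: the rank has dropped
```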


Multiplicative Perturbation

Theorem (Small Gain)


Let $A \in \mathbb{C}^{m \times n}$:
$$\min \left\{ \|\Delta\|_2 \;:\; \Delta \in \mathbb{C}^{n \times m},\ (I - A\Delta) \text{ is singular} \right\} = \frac{1}{\sigma_{\max}(A)},$$
i.e., $(I - A\Delta)$ is non-singular if $\|A\|_2 \|\Delta\|_2 < 1$.

Proof:
If $I - A\Delta$ is singular, then there exists $x \neq 0$ such that $(I - A\Delta)x = 0$, i.e., $x = A\Delta x$.
Hence, $\|x\|_2 = \|A\Delta x\|_2 \leq \|A\|_2\, \|\Delta\|_2\, \|x\|_2 = \sigma_{\max}(A)\, \|\Delta\|_2\, \|x\|_2$,
that is, $\|\Delta\|_2 \geq \dfrac{1}{\sigma_{\max}(A)}$.
To show that the bound is tight, choose $\Delta = \dfrac{1}{\sigma_{\max}(A)}\, v_{\max} u_{\max}^*$. Clearly $\|\Delta\|_2 = 1/\sigma_{\max}(A)$; pick $x = u_{\max}$.
Then, $(I - A\Delta)x = u_{\max} - \dfrac{1}{\sigma_{\max}(A)}\, A v_{\max}\, u_{\max}^* u_{\max} = u_{\max} - u_{\max} = 0$, so $I - A\Delta$ is indeed singular.
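Again, the rank-one construction can be checked numerically. A minimal NumPy sketch (not from the slides; the random test matrix is arbitrary) forms $\Delta = v_{\max} u_{\max}^* / \sigma_{\max}(A)$ and verifies that $I - A\Delta$ becomes singular:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A)
u_max, v_max = U[:, 0], Vh[0, :]
sigma_max = s[0]

Delta = np.outer(v_max, u_max) / sigma_max    # n x m, with ||Delta||_2 = 1/sigma_max

M = np.eye(A.shape[0]) - A @ Delta            # I - A Delta (m x m)
print(np.linalg.norm(Delta, 2), 1 / sigma_max)         # equal
print(np.min(np.linalg.svd(M, compute_uv=False)))       # ~0: M is singular
```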


Perturbations measured in the Frobenius norm


A useful inequality: $\|A\|_F \geq \|A\|_2$, for any $A \in \mathbb{C}^{m \times n}$:
$$\|A\|_F^2 = \operatorname{Trace}(A^* A) = \sum_{i=1}^{n} \sigma_i^2 \;\geq\; \sigma_{\max}^2 = \|A\|_2^2.$$

Note: a rank-one matrix $A_1 = uv^* \neq 0$ has only one non-zero singular value. Hence, its Frobenius norm is equal to its induced 2-norm.

Since the matrices $\Delta$ used in the proofs of the perturbation bounds were both rank-one, the results extend to the Frobenius-norm case:

Theorem (Additive Perturbation)
$$\min \left\{ \|\Delta\|_F \;:\; \Delta \in \mathbb{C}^{m \times n},\ A + \Delta \text{ has rank} < n \right\} = \sigma_{\min}(A).$$

Theorem (Small Gain)
$$\min \left\{ \|\Delta\|_F \;:\; \Delta \in \mathbb{C}^{n \times m},\ (I - A\Delta) \text{ is singular} \right\} = \frac{1}{\sigma_{\max}(A)}.$$
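A quick NumPy check of the inequality, and of the rank-one case where the two norms coincide (illustrative only, not part of the slides; the test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

# ||A||_F >= ||A||_2 for a generic matrix
print(np.linalg.norm(A, 'fro') >= np.linalg.norm(A, 2))               # True

# for a rank-one matrix u v^T, Frobenius norm equals induced 2-norm
u, v = rng.standard_normal(5), rng.standard_normal(3)
A1 = np.outer(u, v)
print(np.isclose(np.linalg.norm(A1, 'fro'), np.linalg.norm(A1, 2)))   # True
```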

Total Least Squares

In the least squares estimation problem, we considered an inconsistent system of equations $y = Ax$ (where $A$ has more rows than columns).

In order to compute a solution, we introduced a notion of measurement error $e = y - Ax$, and looked for a solution that is compatible with the smallest measurement error.

A more general model (total least squares) also considers a notion of modeling error, i.e., it looks for a solution $x$ of
$$y = (A + \Delta)x + e,$$
that minimizes $\|\Delta\|_F^2 + \|e\|_2^2 = \big\|[\Delta,\; e]\big\|_F^2$.


Total Least Squares Solution


Rewrite the problem in block matrix form:
$$\min_{\Delta,\, e} \; \|\Delta\|_F^2 + \|e\|_2^2 \quad \text{such that} \quad \Big( \big[\,A \;\; y\,\big] + \big[\,\Delta \;\; {-e}\,\big] \Big) \begin{bmatrix} x \\ -1 \end{bmatrix} = 0,$$
i.e., writing $\hat A = [\,A \;\; y\,]$, $\hat\Delta = [\,\Delta \;\; {-e}\,]$, and $\hat x = [\,x^* \;\; {-1}\,]^*$,
$$\min_{\hat\Delta}\; \|\hat\Delta\|_F \quad \text{such that} \quad (\hat A + \hat\Delta)\, \hat x = 0.$$

For this problem to have a valid solution, $\hat A + \hat\Delta$ must be singular, i.e., rank-deficient (since $\hat x \neq 0$).

This is an additive perturbation problem, in the Frobenius norm... we know the smallest such perturbation is $\hat\Delta = -\sigma_{\min}(\hat A)\, u_{\min} v_{\min}^*$.

The total least squares solution is obtained from $v_{\min}$, rescaled so that the last entry is equal to $-1$, i.e., $\begin{bmatrix} x \\ -1 \end{bmatrix} \propto v_{\min}$.
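As an illustration (not part of the original slides), this recipe can be implemented directly with an SVD of $[A \;\; y]$. The data below are synthetic and the noise levels arbitrary; the ordinary least-squares solution is computed only for comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 2
x_true = np.array([1.0, -2.0])

A_true = rng.standard_normal((m, n))
y_true = A_true @ x_true

# observed data: both A and y are corrupted (errors-in-variables setting)
A = A_true + 0.01 * rng.standard_normal((m, n))
y = y_true + 0.01 * rng.standard_normal(m)

A_hat = np.column_stack([A, y])               # [A  y]
_, _, Vh = np.linalg.svd(A_hat)
v_min = Vh[-1, :]                             # right singular vector for sigma_min

x_tls = -v_min[:n] / v_min[n]                 # rescale so the last entry is -1
x_ls  = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary least squares, for comparison

print(x_tls)
print(x_ls)
```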


Conditioning of Matrix Inversion

Consider the matrix
$$A = \begin{bmatrix} 100 & 100 \\ 100.2 & 100 \end{bmatrix}. \qquad \text{Its inverse is } A^{-1} = \begin{bmatrix} -5 & 5 \\ 5.01 & -5 \end{bmatrix}.$$
Consider the perturbed matrix
$$A + \Delta A = \begin{bmatrix} 100 & 100 \\ 100.1 & 100 \end{bmatrix}. \qquad \text{Its inverse is } (A + \Delta A)^{-1} = \begin{bmatrix} -10 & 10 \\ 10.01 & -10 \end{bmatrix}.$$
A 0.1% change in one of the entries of $A$ results in a 100% change in the entries of $A^{-1}$! The same sensitivity appears in the solution of linear systems of the form $Ax = y$.
Under what conditions does this happen? That is, under what conditions is the inverse of a matrix extremely sensitive to small perturbations in the elements of the matrix?
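This example is easy to reproduce (a minimal NumPy sketch, not part of the slides):

```python
import numpy as np

A  = np.array([[100.0, 100.0], [100.2, 100.0]])
Ap = np.array([[100.0, 100.0], [100.1, 100.0]])   # one entry changed by 0.1%

print(np.linalg.inv(A))    # [[-5,    5  ], [ 5.01,  -5]]
print(np.linalg.inv(Ap))   # [[-10,  10  ], [10.01, -10]]
```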


Condition number
Differentiate $A^{-1} A = I$: we get $d(A^{-1})\, A + A^{-1}\, dA = 0$.

Rearranging, $d(A^{-1}) = -A^{-1}\, dA\, A^{-1}$, and taking norms:
$$\|d(A^{-1})\| = \|A^{-1}\, dA\, A^{-1}\| \;\leq\; \|A^{-1}\|^2\, \|dA\|.$$
That is,
$$\frac{\|d(A^{-1})\|}{\|A^{-1}\|} \;\leq\; \|A^{-1}\|\,\|A\|\; \frac{\|dA\|}{\|A\|}.$$

The quantity $K(A) = \|A^{-1}\|\,\|A\|$, called the condition number of the matrix $A$, gives a bound on the relative change in $A^{-1}$ induced by a relative perturbation in $A$.

If we are considering the induced 2-norm, $K(A) = \|A^{-1}\|_2\, \|A\|_2 = \sigma_{\max}(A)/\sigma_{\min}(A)$.
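For the matrix of the previous slide, the condition number can be computed directly from the singular values (a NumPy sketch, not part of the slides):

```python
import numpy as np

A = np.array([[100.0, 100.0], [100.2, 100.0]])

s = np.linalg.svd(A, compute_uv=False)
K = s[0] / s[-1]                     # sigma_max / sigma_min
print(K)                             # ~2e3: A is badly conditioned
print(np.linalg.cond(A, 2))          # same value via NumPy's built-in
```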


MIT OpenCourseWare
http://ocw.mit.edu

6.241J / 16.338J Dynamic Systems and Control


Spring 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
