
4. Linear Matrix Inequalities


• Linear Matrix Inequalities
• Convexity
• LMIs as polynomial inequalities
• Intersection
• Matrix variables
• Schur complements
• Semidefinite programming problems

Linear Matrix Inequalities

Numerical properties
Linear matrix inequalities can be solved easily and efficiently by numerical methods.
• The computation is fast and reliable.
• We can find feasible solutions and optimal solutions.
• If no solution exists, this is easy to detect and demonstrate.

Control applications
There is a wide variety of applications to control problems.
• Optimal control
• Gain selection
• Stability margin computation
• Robust synthesis

Linear Matrix Inequalities


A linear matrix inequality (LMI) has the form

F0 + x1F1 + x2F2 + · · · + xmFm < 0

where F0, F1, . . . , Fm are given symmetric matrices, x ∈ R^m is the variable, and the inequality means the matrix on the left is negative definite.

Example
The inequality
$$\begin{bmatrix} x_1 - 3 & x_1 + x_2 & -1 \\ x_1 + x_2 & x_2 - 4 & 0 \\ -1 & 0 & x_1 \end{bmatrix} < 0$$
is an LMI in two variables.

We can write this in the above form as

$$\begin{bmatrix} -3 & 0 & -1 \\ 0 & -4 & 0 \\ -1 & 0 & 0 \end{bmatrix} + x_1 \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} + x_2 \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} < 0$$

Numerical Example
Consider the LMI
$$\begin{bmatrix} x_1 - 3 & x_1 + x_2 & -1 \\ x_1 + x_2 & x_2 - 4 & 0 \\ -1 & 0 & x_1 \end{bmatrix} < 0$$

With x = (−2, 2) we have
$$\begin{bmatrix} -5 & 0 & -1 \\ 0 & -2 & 0 \\ -1 & 0 & -2 \end{bmatrix} < 0$$

which is negative definite, by the Schur complement (taken with respect to the (3,3) entry, which is −2 < 0):

$$\begin{bmatrix} -5 & 0 & -1 \\ 0 & -2 & 0 \\ -1 & 0 & -2 \end{bmatrix} < 0 \quad\Longleftrightarrow\quad \begin{bmatrix} -5 & 0 \\ 0 & -2 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} -1 \\ 0 \end{bmatrix}\begin{bmatrix} -1 & 0 \end{bmatrix} < 0$$
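As a quick sanity check (a minimal sketch using NumPy, not part of the original notes), we can build the matrix at x = (−2, 2), confirm that its eigenvalues are negative, and confirm that the Schur complement above is also negative definite.

```python
import numpy as np

# LMI data from the earlier example: F(x) = F0 + x1*F1 + x2*F2
F0 = np.array([[-3., 0., -1.], [0., -4., 0.], [-1., 0., 0.]])
F1 = np.array([[1., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
F2 = np.array([[0., 1., 0.], [1., 1., 0.], [0., 0., 0.]])

x = np.array([-2., 2.])
F = F0 + x[0] * F1 + x[1] * F2        # equals [[-5, 0, -1], [0, -2, 0], [-1, 0, -2]]

print(np.linalg.eigvalsh(F))          # all eigenvalues negative, so F < 0

# Schur complement with respect to the (3,3) entry (which is -2 < 0)
A, B, C = F[:2, :2], F[:2, 2:], F[2:, 2:]
S = A - B @ np.linalg.inv(C) @ B.T    # equals [[-5, 0], [0, -2]] + (1/2)*[[1, 0], [0, 0]]
print(np.linalg.eigvalsh(S))          # also negative definite
```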

Feasibility problems
We often write LMIs in the form
G(x) < 0
where G : R^m → S^n is an affine function.

Objective
Find x ∈ R^m which satisfies the inequality. Such a choice of x is called feasible.

Notes
• Recall F is affine means

F(λx + (1 − λ)y) = λF(x) + (1 − λ)F(y) for all λ ∈ R

• An alternative way to write LMIs is


F (x) < Q
where F is linear.
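For illustration (not part of the original notes), a feasible point for the running example can be computed with the CVXPY Python package, assuming it is installed; since solvers handle non-strict inequalities, the strict inequality is approximated with a small margin.

```python
import cvxpy as cp
import numpy as np

F0 = np.array([[-3., 0., -1.], [0., -4., 0.], [-1., 0., 0.]])
F1 = np.array([[1., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
F2 = np.array([[0., 1., 0.], [1., 1., 0.], [0., 0., 0.]])

x = cp.Variable(2)
G = F0 + x[0] * F1 + x[1] * F2               # affine, matrix-valued function of x

# Feasibility problem: any objective will do, here we minimize 0
prob = cp.Problem(cp.Minimize(0), [G << -1e-3 * np.eye(3)])
prob.solve()

print(prob.status)   # 'optimal' means a feasible x was found
print(x.value)       # one feasible point
```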

Convexity
The feasible set
$$C = \{\, x \in \mathbb{R}^m \mid F(x) < 0 \,\}$$
is convex.

Proof
We need to show

x1, x2 ∈ C, λ ∈ [0, 1] =⇒ λx1 + (1 − λ)x2 ∈ C

Since F is affine,

F(λx1 + (1 − λ)x2) = λF(x1) + (1 − λ)F(x2) < 0

because a convex combination of negative definite matrices is again negative definite.

LMIs as polynomial inequalities


The feasible set is defined by polynomial inequalities. One way to see this is to make use
of the following result

Theorem
Suppose P ∈ S^n. Let Pk be the k × k submatrix of P consisting of the first k rows
and columns (the k-th leading principal submatrix). Then

P > 0 ⇐⇒ det(Pk) > 0 for k = 1, . . . , n



Example: LMIs as polynomial inequalities


 
$$\begin{bmatrix} 3 - x_1 & -(x_1 + x_2) & 1 \\ -(x_1 + x_2) & 4 - x_2 & 0 \\ 1 & 0 & -x_1 \end{bmatrix} > 0$$
is equivalent to
$$\begin{aligned} 3 - x_1 &> 0 && \text{(C)} \\ (3 - x_1)(4 - x_2) - (x_1 + x_2)^2 &> 0 && \text{(A)} \\ -x_1\bigl((3 - x_1)(4 - x_2) - (x_1 + x_2)^2\bigr) - (4 - x_2) &> 0 && \text{(B)} \end{aligned}$$

[Figure: regions of the (x1, x2)-plane where the inequalities (A), (B), and (C) hold.]
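The three inequalities are the leading principal minors (determinants of the top-left k × k blocks). A short SymPy sketch (not from the original notes) reproduces them symbolically:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
M = sp.Matrix([[3 - x1, -(x1 + x2), 1],
               [-(x1 + x2), 4 - x2, 0],
               [1, 0, -x1]])

# determinant of the top-left k x k submatrix, for k = 1, 2, 3
for k in range(1, 4):
    print(k, sp.expand(M[:k, :k].det()))
# M > 0 holds exactly when all three of these polynomials are positive.
```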

Linear Inequalities
The linear constraints
$$a_1^T x < b_1, \quad a_2^T x < b_2, \quad \ldots, \quad a_n^T x < b_n$$
can be expressed as the LMI
$$\begin{bmatrix} a_1^T x - b_1 & & & \\ & a_2^T x - b_2 & & \\ & & \ddots & \\ & & & a_n^T x - b_n \end{bmatrix} < 0$$
since a diagonal matrix is negative definite exactly when all of its diagonal entries are negative.

Properties of LMIs
• Intersection: If
G(x) < 0 and H(x) < 0
are LMIs, then so is
$$\begin{bmatrix} G(x) & 0 \\ 0 & H(x) \end{bmatrix} < 0$$

A point x ∈ R^m is feasible for the combined LMI if and only if it is feasible
for both of the original LMIs.

• Scaling: For α > 0 we have


G(x) < 0 ⇐⇒ αG(x) < 0

• Congruence: Suppose G : R^m → S^n, and R ∈ R^{n×p} has ker R = {0}. Then

G(x) < 0 =⇒ R^T G(x) R < 0

with equivalence when R is square, and hence invertible.

Example of Intersections
The intersection of the two LMIs
$$\begin{bmatrix} 2x_1 + x_2 + 2 & 0 \\ 0 & -x_1 - 5 \end{bmatrix} < 0$$
and
$$\begin{bmatrix} x_1 - 3 & x_1 + x_2 & -1 \\ x_1 + x_2 & x_2 - 4 & 0 \\ -1 & 0 & x_1 \end{bmatrix} < 0$$

is given by
$$\begin{bmatrix} x_1 - 3 & x_1 + x_2 & -1 & 0 & 0 \\ x_1 + x_2 & x_2 - 4 & 0 & 0 & 0 \\ -1 & 0 & x_1 & 0 & 0 \\ 0 & 0 & 0 & 2x_1 + x_2 + 2 & 0 \\ 0 & 0 & 0 & 0 & -x_1 - 5 \end{bmatrix} < 0$$

LMIs with Matrix Variables


The inequality
$$\begin{bmatrix} A^T Y + Y^T A & Y^T B \\ B^T Y & -I \end{bmatrix} < 0$$
is a linear matrix inequality in the variable Y. Here
A ∈ R^{n×n}, B ∈ R^{n×m}, Y ∈ R^{n×n}

This form is accepted by some software, such as the LMI Control Toolbox in Matlab. It
occurs commonly in control.

To convert this to standard form, write


 
x1 xn+1 . . .
 x2 
Y (x) =  .
 
. 
xn x2n . . . xn2

 T T T

A Y (x) + Y (x)A Y (x)B
Then is affine in x.
B T Y (x) −I
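In practice the scalar parameterization is rarely written out by hand; LMI software accepts matrix variables directly. As an illustration (not part of the original notes, which mention the Matlab LMI Control Toolbox), here is a minimal sketch using the CVXPY Python package, with randomly generated A and B assumed for demonstration:

```python
import cvxpy as cp
import numpy as np

np.random.seed(0)
n, m = 4, 2
A = np.random.randn(n, n)       # example data, assumed for illustration
B = np.random.randn(n, m)

Y = cp.Variable((n, n))         # matrix variable, not assumed symmetric

# Assemble the block LMI in Y
M = cp.bmat([[A.T @ Y + Y.T @ A, Y.T @ B],
             [B.T @ Y,           -np.eye(m)]])

# Enforce M < 0 with a small margin (solvers work with non-strict inequalities)
prob = cp.Problem(cp.Minimize(0), [M << -1e-6 * np.eye(n + m)])
prob.solve()

print(prob.status)              # 'optimal' if a feasible Y was found
```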

Schur Complements
Recall the Schur complement result
$$\begin{bmatrix} A & B \\ B^* & C \end{bmatrix} < 0 \quad\Longleftrightarrow\quad A < 0 \ \text{ and } \ C - B^* A^{-1} B < 0$$

We can use this to convert quadratic constraints to LMIs.

Example
The ellipsoid constraint
$$x^T P^{-1} x < 1$$
where P > 0, is equivalent to
$$\begin{bmatrix} -P & x \\ x^T & -1 \end{bmatrix} < 0$$
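A quick numerical check of this equivalence (a sketch using NumPy, not from the original notes): for a random P > 0 and a few sample points x, the scalar test and the block-matrix test agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q = rng.standard_normal((n, n))
P = Q @ Q.T + n * np.eye(n)                  # a random positive definite P

for _ in range(5):
    x = rng.standard_normal(n)
    inside = x @ np.linalg.solve(P, x) < 1   # x^T P^{-1} x < 1 ?
    M = np.block([[-P, x[:, None]],
                  [x[None, :], -np.ones((1, 1))]])
    nd = np.all(np.linalg.eigvalsh(M) < 0)   # block matrix negative definite ?
    print(inside, nd)                        # the two answers always match
```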

Semidefinite Program
A semidefinite program is an optimization problem of the form
minimize c^T x
subject to F0 + x1F1 + · · · + xmFm ≤ 0

[Figure: the feasible set where G(x) ≤ 0, the optimal point xopt, and the objective direction −c.]
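For concreteness (an illustrative sketch, not part of the original notes), the running 3×3 example becomes an SDP once an objective is added; here c = (1, 1) is an assumed example objective, and the problem is solved with the CVXPY Python package.

```python
import cvxpy as cp
import numpy as np

F0 = np.array([[-3., 0., -1.], [0., -4., 0.], [-1., 0., 0.]])
F1 = np.array([[1., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
F2 = np.array([[0., 1., 0.], [1., 1., 0.], [0., 0., 0.]])
c = np.array([1., 1.])                       # example objective, chosen for illustration

x = cp.Variable(2)
lmi = [F0 + x[0] * F1 + x[1] * F2 << 0]      # the LMI constraint (negative semidefinite)
prob = cp.Problem(cp.Minimize(c @ x), lmi)
prob.solve()

print(prob.status, prob.value)
print(x.value)                               # the optimal point
```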

Some Important SDPs


Maximum-Eigenvalue Minimization

minimize λ
subject to G(x) − λI ≤ 0

Here G is an affine function.


Note that if µ is an eigenvalue of G(x) then µ − λ is an eigenvalue of G(x) − λI, so the constraint holds exactly when λ is at least the largest eigenvalue of G(x).
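A minimal sketch of this eigenvalue-minimization problem (not part of the original notes), again with CVXPY and with G taken to be the affine function from the running example:

```python
import cvxpy as cp
import numpy as np

F0 = np.array([[-3., 0., -1.], [0., -4., 0.], [-1., 0., 0.]])
F1 = np.array([[1., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
F2 = np.array([[0., 1., 0.], [1., 1., 0.], [0., 0., 0.]])

x = cp.Variable(2)
lam = cp.Variable()
G = F0 + x[0] * F1 + x[1] * F2

# minimize lambda subject to G(x) - lambda*I <= 0, i.e. lambda >= lambda_max(G(x))
prob = cp.Problem(cp.Minimize(lam), [G - lam * np.eye(3) << 0])
prob.solve()

print(prob.value)    # the minimal achievable maximum eigenvalue
print(x.value)
```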

Linear Programming

minimize c^T x
subject to Ax ⪯ b

Here ⪯ denotes componentwise inequality.
