
Stephen Boyd

Lecture 15

Symmetric matrices, quadratic forms, matrix norm, and SVD

eigenvectors of symmetric matrices

quadratic forms

inequalities for quadratic forms

positive semidefinite matrices

norm of a matrix

singular value decomposition

15-1

suppose A ∈ R^{n×n} is symmetric, i.e., A = A^T

fact: the eigenvalues of A are real

to see this, suppose Av = λv, v ≠ 0, v ∈ C^n

then

v̄^T A v = v̄^T (Av) = λ v̄^T v = λ ∑_{i=1}^n |v_i|²

but also

v̄^T A v = (A v̄)^T v = (λ̄ v̄)^T v = λ̄ ∑_{i=1}^n |v_i|²

(here v̄ denotes the complex conjugate of v), so λ = λ̄, i.e., λ is real

15-2

fact: there is a set of orthonormal eigenvectors of A, i.e., q_1, ..., q_n s.t.

A q_i = λ_i q_i,   q_i^T q_j = δ_ij

in matrix form: there is an orthogonal Q s.t.

Q^{-1} A Q = Q^T A Q = Λ = diag(λ_1, ..., λ_n)

hence

A = Q Λ Q^T = ∑_{i=1}^n λ_i q_i q_i^T

15-3
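The spectral decomposition above is easy to spot-check numerically. A minimal sketch in Python (NumPy is an assumption here, not part of the lecture), using an arbitrary random symmetric matrix:

```python
import numpy as np

# Numerical check of the spectral theorem via numpy.linalg.eigh,
# which is designed for symmetric/Hermitian matrices.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                       # symmetrize so that A = A^T

lam, Q = np.linalg.eigh(A)              # real eigenvalues, orthonormal eigenvectors

# Q is orthogonal: Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(4))

# A = Q diag(lam) Q^T = sum_i lam_i q_i q_i^T
A_rebuilt = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(4))
assert np.allclose(A, A_rebuilt)
```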

Interpretations

A = Q Λ Q^T

[block diagram: x → Q^T → Q^T x → Λ → Λ Q^T x → Q → Ax]

resolve into q_i coordinates

scale coordinates by λ_i

reconstitute with basis q_i

15-4

or, geometrically,

rotate by Q^T

diagonal real scale ("dilation") by Λ

rotate back by Q

decomposition

A = ∑_{i=1}^n λ_i q_i q_i^T

expresses A as a linear combination of 1-dimensional projections

15-5

example:

A = [ -1/2  3/2 ; 3/2  -1/2 ] = Q diag(1, -2) Q^T,  where Q = (1/√2) [ 1  1 ; 1  -1 ]

[figure: x is resolved into components q_1 q_1^T x and q_2 q_2^T x, these are scaled by λ_1 = 1 and λ_2 = -2 to give λ_1 q_1 q_1^T x and λ_2 q_2 q_2^T x, and their sum is Ax]

15-6

suppose v_1, ..., v_n is a set of linearly independent eigenvectors of A:

A v_i = λ_i v_i,   ‖v_i‖ = 1

then we have

v_i^T (A v_j) = λ_j v_i^T v_j = (A v_i)^T v_j = λ_i v_i^T v_j

so (λ_i - λ_j) v_i^T v_j = 0

for i ≠ j, λ_i ≠ λ_j, hence v_i^T v_j = 0

in this case we can say: the eigenvectors are orthogonal

in the general case (λ_i not distinct) we must say: the eigenvectors can be chosen to be orthogonal

15-7

Example: RC circuit

[figure: resistive circuit with capacitors c_1, ..., c_n attached at nodes with voltages v_1, ..., v_n and currents i_1, ..., i_n]

c_k v̇_k = -i_k,   i = G v

where G = G^T ∈ R^{n×n} is the conductance matrix of the resistive circuit

thus v̇ = -C^{-1} G v, where C = diag(c_1, ..., c_n)

note -C^{-1} G is not symmetric

15-8

use state x_i = √(c_i) v_i, so x = C^{1/2} v and

ẋ = C^{1/2} v̇ = -C^{-1/2} G C^{-1/2} x

where C^{1/2} = diag(√c_1, ..., √c_n)

we conclude:

eigenvalues λ_1, ..., λ_n of -C^{-1/2} G C^{-1/2} (hence, of -C^{-1} G) are real

eigenvectors q_i (in x_i coordinates) can be chosen orthogonal

eigenvectors in voltage coordinates, s_i = C^{-1/2} q_i, satisfy

-C^{-1} G s_i = λ_i s_i,   s_i^T C s_j = δ_ij

15-9
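A small numerical sketch of this change of coordinates; the 2-node values of C and G below are made-up illustrations, not from the slides:

```python
import numpy as np

# -C^{-1} G is not symmetric, but -C^{-1/2} G C^{-1/2} is; since the two
# matrices are similar, this shows -C^{-1} G has real eigenvalues.
C = np.diag([1.0, 3.0])                     # hypothetical capacitances c1, c2
G = np.array([[2.0, -1.0], [-1.0, 2.0]])    # conductance matrix, G = G^T

Ci = np.linalg.inv(C)
Chalf_inv = np.diag(1 / np.sqrt(np.diag(C)))

M1 = -Ci @ G                     # system matrix in v coordinates: not symmetric
M2 = -Chalf_inv @ G @ Chalf_inv  # in x coordinates: symmetric

assert not np.allclose(M1, M1.T)
assert np.allclose(M2, M2.T)

# similar matrices share eigenvalues, which here are real
e1 = np.sort(np.linalg.eigvals(M1).real)
e2 = np.sort(np.linalg.eigvalsh(M2))
assert np.allclose(e1, e2)
```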

Quadratic forms

a function f : R^n → R of the form

f(x) = x^T A x = ∑_{i,j=1}^n A_{ij} x_i x_j

is called a quadratic form

in a quadratic form we may as well assume A = A^T since

x^T A x = x^T ((A + A^T)/2) x

((A + A^T)/2 is called the symmetric part of A)

uniqueness: if x^T A x = x^T B x for all x ∈ R^n and A = A^T, B = B^T, then A = B

15-10
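The claim that x^T A x depends only on the symmetric part of A is easy to spot-check; a sketch with arbitrary random A and x (NumPy assumed):

```python
import numpy as np

# x^T A x equals x^T ((A + A^T)/2) x for every x, because the
# antisymmetric part of A contributes nothing to the quadratic form.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # generally not symmetric
Asym = (A + A.T) / 2                     # symmetric part of A

for _ in range(5):
    x = rng.standard_normal(3)
    assert np.isclose(x @ A @ x, x @ Asym @ x)
```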

Examples

‖Bx‖² = x^T B^T B x

∑_{i=1}^{n-1} (x_{i+1} - x_i)²

‖Fx‖² - ‖Gx‖²

sets defined by quadratic forms:

{ x | f(x) = a } is called a quadratic surface

{ x | f(x) ≤ a } is called a quadratic region

15-11

suppose A = A^T, A = Q Λ Q^T with eigenvalues sorted so λ_1 ≥ ⋯ ≥ λ_n, then

x^T A x = x^T Q Λ Q^T x
        = (Q^T x)^T Λ (Q^T x)
        = ∑_{i=1}^n λ_i (q_i^T x)²
        ≤ λ_1 ∑_{i=1}^n (q_i^T x)²
        = λ_1 ‖x‖²

i.e., we have x^T A x ≤ λ_1 x^T x

15-12

similarly we have x^T A x ≥ λ_n x^T x, so altogether

λ_n x^T x ≤ x^T A x ≤ λ_1 x^T x

(λ_1 is sometimes written λ_max(A), and λ_n as λ_min(A))

the inequalities are tight: q_1^T A q_1 = λ_1 ‖q_1‖² and q_n^T A q_n = λ_n ‖q_n‖²

15-13
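These bounds and their tightness can be checked numerically; a sketch on an arbitrary random symmetric matrix (NumPy assumed):

```python
import numpy as np

# Check lambda_n ||x||^2 <= x^T A x <= lambda_1 ||x||^2 on random vectors,
# and that the extreme eigenvectors achieve the bounds.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

lam, Q = np.linalg.eigh(A)       # eigh returns eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]

for _ in range(10):
    x = rng.standard_normal(5)
    f = x @ A @ x
    assert lam_min * (x @ x) - 1e-9 <= f <= lam_max * (x @ x) + 1e-9

# tightness at the extreme eigenvectors
q_max, q_min = Q[:, -1], Q[:, 0]
assert np.isclose(q_max @ A @ q_max, lam_max)
assert np.isclose(q_min @ A @ q_min, lam_min)
```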

suppose A = A^T ∈ R^{n×n}

we say A is positive semidefinite if x^T A x ≥ 0 for all x

denoted A ≥ 0 (and sometimes A ⪰ 0)

A ≥ 0 if and only if λ_min(A) ≥ 0, i.e., all eigenvalues are nonnegative

not the same as A_ij ≥ 0 for all i, j

we say A is positive definite if x^T A x > 0 for all x ≠ 0

denoted A > 0

A > 0 if and only if λ_min(A) > 0, i.e., all eigenvalues are positive

15-14
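Two quick numerical illustrations of these definitions, on small hand-picked matrices (not from the slides): B^T B is always positive semidefinite, and entrywise nonnegativity is not the same as A ≥ 0:

```python
import numpy as np

# (1) A = B^T B satisfies x^T A x = ||Bx||^2 >= 0, so lambda_min(A) >= 0.
B = np.array([[1.0, 2.0], [3.0, 4.0]])
A = B.T @ B
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)

# (2) all entries of M are nonnegative, yet M is indefinite:
# eigenvalues of [[1, 2], [2, 1]] are 3 and -1.
M = np.array([[1.0, 2.0], [2.0, 1.0]])
assert np.all(M >= 0)
assert np.linalg.eigvalsh(M)[0] < 0      # not positive semidefinite
```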

Matrix inequalities

we say A is negative semidefinite if -A ≥ 0

we say A is negative definite if -A > 0

otherwise, we say A is indefinite

matrix inequality: if B = B^T ∈ R^{n×n}, we say A ≥ B if A - B ≥ 0, and A < B if B - A > 0, etc.

for example:

A ≥ 0 means A is positive semidefinite

A > B means x^T A x > x^T B x for all x ≠ 0

15-15

many properties that you'd guess hold actually do, e.g.:

if A ≥ B and C ≥ D, then A + C ≥ B + D

if B ≤ 0 then A + B ≤ A

if A ≥ 0 and α ≥ 0, then αA ≥ 0

A² ≥ 0

matrix inequality is only a partial order: it is possible that A ≰ B and B ≰ A

15-16

Ellipsoids

if A = A^T > 0, the set

E = { x | x^T A x ≤ 1 }

is an ellipsoid in R^n, centered at 0

[figure: ellipsoid E with semiaxes s_1 and s_2]

15-17

the semiaxes are given by s_i = λ_i^{-1/2} q_i, i.e.:

eigenvectors determine directions of semiaxes

eigenvalues determine lengths of semiaxes

note:

in direction q_1, x^T A x is large, hence ellipsoid is thin in direction q_1

in direction q_n, x^T A x is small, hence ellipsoid is fat in direction q_n

15-18

Gain of a matrix in a direction

suppose A ∈ R^{m×n} (not necessarily square or symmetric)

for x ∈ R^n, ‖Ax‖/‖x‖ gives the amplification factor or gain of A in the direction x

obviously, gain varies with direction of input x

questions:

what is maximum gain of A (and corresponding maximum gain direction)?

what is minimum gain of A (and corresponding minimum gain direction)?

how does gain of A vary with direction?

15-19

Matrix norm

the maximum gain

max_{x≠0} ‖Ax‖/‖x‖

is called the matrix norm or spectral norm of A and is denoted ‖A‖

max_{x≠0} ‖Ax‖²/‖x‖² = max_{x≠0} (x^T A^T A x)/‖x‖² = λ_max(A^T A)

so we have ‖A‖ = √(λ_max(A^T A))

similarly the minimum gain is given by

min_{x≠0} ‖Ax‖/‖x‖ = √(λ_min(A^T A))

15-20

note that:

A^T A ∈ R^{n×n} is symmetric and A^T A ≥ 0, so λ_min, λ_max ≥ 0

max gain input direction is x = q_1, eigenvector of A^T A associated with λ_max

min gain input direction is x = q_n, eigenvector of A^T A associated with λ_min

15-21

example: A = [ 1 2 ; 3 4 ; 5 6 ]

A^T A = [ 35 44 ; 44 56 ]
      = [ 0.620 0.785 ; 0.785 -0.620 ] [ 90.7 0 ; 0 0.265 ] [ 0.620 0.785 ; 0.785 -0.620 ]^T

then ‖A‖ = √(λ_max(A^T A)) = 9.53:

‖ [ 0.620 ; 0.785 ] ‖ = 1,   ‖ A [ 0.620 ; 0.785 ] ‖ = ‖ [ 2.18 ; 4.99 ; 7.78 ] ‖ = 9.53

15-22

min gain is √(λ_min(A^T A)) = 0.514:

‖ [ 0.785 ; -0.620 ] ‖ = 1,   ‖ A [ 0.785 ; -0.620 ] ‖ = ‖ [ -0.46 ; -0.14 ; 0.18 ] ‖ = 0.514

hence, for all x ≠ 0, we have

0.514 ≤ ‖Ax‖/‖x‖ ≤ 9.53

15-23
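The numbers in this example are easy to reproduce; a sketch that recomputes the maximum and minimum gains from the eigenvalues of A^T A (NumPy assumed):

```python
import numpy as np

# The 3x2 matrix from the example; its gains are the square roots of the
# eigenvalues of A^T A, which is 2x2.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

lam = np.linalg.eigvalsh(A.T @ A)        # ascending: approx [0.265, 90.7]
gain_min, gain_max = np.sqrt(lam[0]), np.sqrt(lam[-1])

assert np.isclose(gain_max, 9.53, atol=0.01)     # ||A||
assert np.isclose(gain_min, 0.514, atol=0.01)    # minimum gain

# the spectral norm agrees with NumPy's matrix 2-norm
assert np.isclose(np.linalg.norm(A, 2), gain_max)
```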

Properties of matrix norm

consistent with vector norm: matrix norm of a ∈ R^{n×1} is √(λ_max(a^T a)) = √(a^T a)

scaling: ‖aA‖ = |a| ‖A‖

triangle inequality: ‖A + B‖ ≤ ‖A‖ + ‖B‖

definiteness: ‖A‖ = 0 ⇔ A = 0

norm of product: ‖AB‖ ≤ ‖A‖ ‖B‖

15-24

Singular value decomposition

the singular value decomposition (SVD) of A is

A = U Σ V^T

where:

A ∈ R^{m×n}, Rank(A) = r

U ∈ R^{m×r}, U^T U = I

V ∈ R^{n×r}, V^T V = I

Σ = diag(σ_1, ..., σ_r), where σ_1 ≥ ⋯ ≥ σ_r > 0

15-25

we can also write

A = U Σ V^T = ∑_{i=1}^r σ_i u_i v_i^T

where:

σ_i are the (nonzero) singular values of A

v_i are the right or input singular vectors of A

u_i are the left or output singular vectors of A

15-26

A^T A = (U Σ V^T)^T (U Σ V^T) = V Σ² V^T

hence:

v_i are eigenvectors of A^T A (corresponding to nonzero eigenvalues)

σ_i = √(λ_i(A^T A)) (and λ_i(A^T A) = 0 for i > r)

‖A‖ = σ_1

15-27

similarly,

A A^T = (U Σ V^T)(U Σ V^T)^T = U Σ² U^T

hence:

u_i are eigenvectors of A A^T (corresponding to nonzero eigenvalues)

σ_i = √(λ_i(A A^T)) (and λ_i(A A^T) = 0 for i > r)

u_1, ..., u_r are an orthonormal basis for range(A)

v_1, ..., v_r are an orthonormal basis for N(A)^⊥

15-28
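These identities can be verified numerically on the 3×2 matrix from the matrix-norm example (NumPy assumed):

```python
import numpy as np

# SVD identities: A = U diag(s) V^T, sigma_i = sqrt(lambda_i(A^T A)),
# and ||A|| = sigma_1.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD, s descending
assert np.allclose(A, U @ np.diag(s) @ Vt)

# singular values are square roots of the eigenvalues of A^T A,
# sorted in decreasing order
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s, np.sqrt(lam))

# matrix norm equals the largest singular value
assert np.isclose(np.linalg.norm(A, 2), s[0])
```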

Interpretations

A = U Σ V^T = ∑_{i=1}^r σ_i u_i v_i^T

[block diagram: x → V^T → V^T x → Σ → Σ V^T x → U → Ax]

compute coefficients of x along input directions v_1, ..., v_r

scale coefficients by σ_i

reconstitute along output directions u_1, ..., u_r

difference with eigenvalue decomposition for symmetric A: input and output directions are different

15-29

v_1 is the most sensitive (highest gain) input direction

u_1 is the highest gain output direction

A v_1 = σ_1 u_1

15-30

example: consider A ∈ R^{4×4} with Σ = diag(10, 7, 0.1, 0.05)

input components along directions v_1 and v_2 are amplified (by about 10) and come out mostly along the plane spanned by u_1, u_2

input components along directions v_3 and v_4 are attenuated (by about 10)

‖Ax‖/‖x‖ can range between 10 and 0.05

A is nonsingular

for some applications you might say A is effectively rank 2

15-31
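One way to make "effectively rank 2" concrete: truncate the SVD after the two large singular values; the resulting rank-2 matrix differs from A by only σ_3 = 0.1 in matrix norm. A sketch, where the orthogonal U and V are random illustrative choices (NumPy assumed):

```python
import numpy as np

# Build A with the slide's singular values diag(10, 7, 0.1, 0.05),
# using arbitrary orthogonal U and V from QR factorizations.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
s = np.array([10.0, 7.0, 0.1, 0.05])
A = U @ np.diag(s) @ V.T

# keep only the two dominant terms sigma_i u_i v_i^T
A2 = U[:, :2] @ np.diag(s[:2]) @ V[:, :2].T

assert np.linalg.matrix_rank(A2) == 2
# the truncation error in matrix norm is exactly sigma_3 = 0.1
assert np.isclose(np.linalg.norm(A - A2, 2), 0.1)
```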

example with A ∈ R^{2×2}, σ_1 = 1, σ_2 = 0.5:

resolve x along v_1, v_2: v_1^T x = 0.5, v_2^T x = 0.6, i.e., x = 0.5 v_1 + 0.6 v_2

now form Ax = (v_1^T x) σ_1 u_1 + (v_2^T x) σ_2 u_2 = (0.5)(1) u_1 + (0.6)(0.5) u_2

[figure: x resolved along v_1, v_2 on the left; Ax formed along u_1, u_2 on the right]

15-32
