
Problem 1

Consider the function

    ν(x) = Σ_{i=1}^n |x_i|.

(a) Show that ν(x) is a vector norm.

(b) Construct the matrix norm induced by ν.

(c) Find the minimum M_1 and M_2 and the maximum m_1 and m_2 such that

    m_1 ν(x) ≤ ‖x‖_2 ≤ M_1 ν(x)   and   m_2 ν(x) ≤ ‖x‖_∞ ≤ M_2 ν(x)

are satisfied for any vector x.

(d) For which vectors (if any) do the above inequalities hold with equality?

Solution

(a) We verify the three norm properties.

(i) Clearly if x = 0 then ν(x) = 0. If x ≠ 0, then there exists an i such that x_i ≠ 0, which implies that |x_i| > 0, so ν(x) > 0.

(ii) For all α ∈ ℝ we have

    ν(αx) = Σ_{i=1}^n |α x_i| = Σ_{i=1}^n |α| |x_i| = |α| ν(x).

(iii) We have that

    ν(x + y) = Σ_{i=1}^n |x_i + y_i| ≤ Σ_{i=1}^n (|x_i| + |y_i|) = Σ_{i=1}^n |x_i| + Σ_{i=1}^n |y_i| = ν(x) + ν(y).

(b) Let a_j ∈ ℝ^m be the jth column of the matrix A ∈ ℝ^{m×n}. We construct the induced matrix norm ‖A‖_ν as follows:

    ‖A‖_ν = max_{ν(x)=1} ν(Ax)
          = max_{ν(x)=1} ν( Σ_{j=1}^n x_j a_j )
          ≤ max_{ν(x)=1} Σ_{j=1}^n |x_j| ν(a_j)
          ≤ ( max_j ν(a_j) ) · max_{ν(x)=1} Σ_{j=1}^n |x_j|
          = max_j ν(a_j),

since Σ_{j=1}^n |x_j| = ν(x) = 1 on the constraint set.

CME 302, Solutions Problem Set #1, October 24, 2007

Suppose that the kth column of A, a_k, has the maximum ν-norm. Then this upper bound is achieved when x = e_k. Here e_k is the kth column of the identity matrix, the vector of all 0s with a 1 in the kth position. Thus, the induced matrix ν-norm is the maximum absolute column sum:

    ‖A‖_ν = max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|.

(c)

(i) m_1 ν(x) ≤ ‖x‖_2. Let x̂ be the vector such that x̂_i = |x_i| for i = 1, …, n, and let e be the vector of all ones. Then, by the Cauchy–Schwarz inequality,

    ν(x) = x̂ᵀe ≤ ‖x̂‖_2 ‖e‖_2.

But ‖e‖_2 = √n and ‖x̂‖_2 = ‖x‖_2, so ν(x) ≤ √n ‖x‖_2. Thus,

    (1/√n) ν(x) ≤ ‖x‖_2.

To prove that m_1 = 1/√n is the maximum constant, take x = e; then the above bound is tight.

(ii) ‖x‖_2 ≤ M_1 ν(x). Observe that

    ν(x)² = ( Σ_{i=1}^n |x_i| )² = Σ_{i=1}^n |x_i|² + Σ_{i≠j} |x_i| |x_j|.

The rightmost term, Σ_{i≠j} |x_i| |x_j|, is greater than or equal to zero, and thus

    ‖x‖_2² = Σ_{i=1}^n |x_i|² ≤ ν(x)².

Since both are norms and thus nonnegative quantities, this implies that ‖x‖_2 ≤ ν(x). Thus,

    ‖x‖_2 ≤ 1 · ν(x).

To prove that M_1 = 1 is the minimum constant, take x = e_k; then the above bound is tight.

(iii) m_2 ν(x) ≤ ‖x‖_∞. We have that

    ν(x) = Σ_{i=1}^n |x_i| ≤ n max_i |x_i| = n ‖x‖_∞.


Thus,

    (1/n) ν(x) ≤ ‖x‖_∞.

To prove that m_2 = 1/n is the maximum constant, take x to be a vector with x_1 = x_2 = ⋯ = x_n; then the above bound is tight.

(iv) ‖x‖_∞ ≤ M_2 ν(x). We have that

    ‖x‖_∞ = max_i |x_i| ≤ Σ_{i=1}^n |x_i| = ν(x).

Thus, ‖x‖_∞ ≤ 1 · ν(x). To prove that M_2 = 1 is the minimum constant, take x = e_k; then the above bound is tight.

(d) (i) For x = e we have n^{-1/2} ν(x) = ‖x‖_2.

(ii) For x = α e_k with α ∈ ℝ and k = 1, …, n we have ‖x‖_2 = ν(x).

(iii) For a vector x with x_1 = x_2 = ⋯ = x_n, n^{-1} ν(x) = ‖x‖_∞.

(iv) For x = α e_k with α ∈ ℝ and k = 1, …, n we have ‖x‖_∞ = ν(x).
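The two-sided bounds of part (c) and the equality cases of part (d) are easy to confirm numerically. A small sketch (the variable names and random test vector are illustrative, not from the original):

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
x = rng.standard_normal(n)
nu = np.abs(x).sum()            # nu(x)
two = np.linalg.norm(x, 2)      # ||x||_2
inf = np.abs(x).max()           # ||x||_inf

# Part (c): (1/sqrt(n)) nu(x) <= ||x||_2 <= nu(x)
assert nu / np.sqrt(n) <= two <= nu
# Part (c): (1/n) nu(x) <= ||x||_inf <= nu(x)
assert nu / n <= inf <= nu

# Part (d): the lower 2-norm bound is tight at x = e (all ones) ...
e = np.ones(n)
assert np.isclose(np.abs(e).sum() / np.sqrt(n), np.linalg.norm(e, 2))
# ... and the upper bounds are tight at x = alpha * e_k.
ek = np.zeros(n)
ek[2] = -3.0
assert np.isclose(np.abs(ek).sum(), np.linalg.norm(ek, 2))
assert np.isclose(np.abs(ek).sum(), np.abs(ek).max())
```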

Problem 2

Consider the weighted 2-norm

    ‖x‖_w = ( Σ_{i=1}^n w_i |x_i|² )^{1/2},

where w_i > 0 for all i.

(a) Show that ‖·‖_w is a norm for any set of positive numbers w_i.

(b) Give an explicit expression for the matrix norm induced by ‖·‖_w.

Solution

(a) First note that we can express ‖·‖_w in terms of ‖·‖_2. We have that

    ‖x‖_w² = Σ_{i=1}^n w_i |x_i|².

Because the w_i's are all positive we can take their square roots and thus move them inside the square:

    ‖x‖_w² = Σ_{i=1}^n | w_i^{1/2} x_i |².

Let W be the diagonal matrix W = diag(w_1^{1/2}, …, w_n^{1/2}). Then

    ‖x‖_w² = ‖Wx‖_2².

It follows from the inherited properties of the 2-norm that ‖·‖_w is a norm:

(i) If x = 0 then ‖x‖_w = 0. To see that ‖x‖_w = 0 only if x = 0, note that since w_i > 0 for all i = 1, …, n, the matrix W has full rank. Therefore Wx = 0 only if x = 0. Thus, ‖x‖_w = 0 if and only if x = 0.

(ii) For all α ∈ ℝ,

    ‖αx‖_w = ‖W(αx)‖_2 = |α| ‖Wx‖_2 = |α| ‖x‖_w.

(iii) We have that

    ‖x + y‖_w = ‖W(x + y)‖_2 = ‖Wx + Wy‖_2 ≤ ‖Wx‖_2 + ‖Wy‖_2 = ‖x‖_w + ‖y‖_w.

(b) Let A ∈ ℝ^{n×n}. The induced w-norm of A is given by

    ‖A‖_w = max_{x≠0} ‖Ax‖_w / ‖x‖_w = max_{x≠0} ‖WAx‖_2 / ‖Wx‖_2.

Since all the w_i's are positive, W⁻¹ exists; thus

    ‖A‖_w = max_{x≠0} ‖WAW⁻¹Wx‖_2 / ‖Wx‖_2.

Let y = Wx, so that x = W⁻¹y:

    ‖A‖_w = max_{W⁻¹y≠0} ‖WAW⁻¹y‖_2 / ‖y‖_2.

Since W⁻¹ has full rank, W⁻¹y = 0 implies that y = 0. So

    ‖A‖_w = max_{y≠0} ‖WAW⁻¹y‖_2 / ‖y‖_2 = ‖WAW⁻¹‖_2.

Thus,

    ‖A‖_w = ‖WAW⁻¹‖_2.
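The identity ‖A‖_w = ‖WAW⁻¹‖_2 can be checked numerically: the ratio ‖Ax‖_w/‖x‖_w should never exceed it, and the supremum should be attained at x = W⁻¹v with v the top right singular vector of WAW⁻¹. A sketch under those assumptions (the helper `norm_w` and the random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
w = rng.uniform(0.5, 2.0, n)          # positive weights w_i > 0
W = np.diag(np.sqrt(w))               # W = diag(w_i^{1/2})
A = rng.standard_normal((n, n))

def norm_w(x):
    # ||x||_w = (sum_i w_i |x_i|^2)^{1/2} = ||W x||_2
    return np.linalg.norm(W @ x)

A_w = np.linalg.norm(W @ A @ np.linalg.inv(W), 2)   # claimed ||A||_w

# The ratio ||Ax||_w / ||x||_w never exceeds ||W A W^{-1}||_2 ...
for _ in range(100):
    x = rng.standard_normal(n)
    assert norm_w(A @ x) <= A_w * norm_w(x) + 1e-10

# ... and the supremum is attained at x = W^{-1} v, where v is the
# top right singular vector of W A W^{-1}.
v = np.linalg.svd(W @ A @ np.linalg.inv(W))[2][0]
x_star = np.linalg.solve(W, v)
assert np.isclose(norm_w(A @ x_star), A_w * norm_w(x_star))
```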


Problem 3

A norm ‖·‖ is said to be unitarily invariant if ‖A‖ = ‖UAVᵀ‖ whenever UᵀU = I and VᵀV = I.

(a) Show that ‖·‖_2 and ‖·‖_F are unitarily invariant.

(b) Let A be an m × n matrix and let p = min{m, n}. Show that

    ‖A‖_F = (σ_1² + ⋯ + σ_p²)^{1/2},

where σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_p > 0 are the singular values of A.
**

Solution

(a)

**(i) From the deﬁnition of the matrix 2-norm we have UAV T
**

2

= max

x

2 =1

UAV T x

2

**Let z = AV T x. Since U is an orthogonal matrix for the vector 2-norm Uz z 2. = max
**

x

2 =1

2

=

AV T x

2

**Let y = V T x. Again, since V T is orthogonal, the constraint x to the constraint y 2 = V T x 2 = x 2 = 1. So = max
**

y

2 =1

2

= 1 is equivalent

Ay

2 F

2

= A

2

**(ii) We can express the Frobenius norm as A UAV T Since U T U = I we have
**

2 F

= tr(AT A). Thus,

= tr(V AT U T UAV T )

**= tr(V AT AV T ) Using the fact that tr(AB) = tr(BA) = tr(V T V AT A) Since V T V = I we arrive at the result = tr(AT A) = A
**

2 F

. 5


(b) Using the singular value decomposition we can represent every matrix A ∈ ℝ^{m×n} as A = UΣVᵀ, where U ∈ ℝ^{m×m} with UᵀU = I, V ∈ ℝ^{n×n} with VᵀV = I, and Σ ∈ ℝ^{m×n} with Σ = diag(σ_1, …, σ_p). Observe that since rank(A) ≤ min(m, n), the number of singular values is p = min(m, n). Thus we have

    ‖A‖_F = ‖UΣVᵀ‖_F.

Using the result that ‖·‖_F is unitarily invariant, we have that

    ‖A‖_F = ‖Σ‖_F.

Applying the definition of the Frobenius norm, and noting the structure of Σ, we arrive at the result:

    ‖A‖_F = (σ_1² + ⋯ + σ_p²)^{1/2}.
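Both parts of this problem can be confirmed numerically; a minimal sketch, using random orthogonal factors obtained from QR factorizations (the test matrix and factors are illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))

# (a) Unitary invariance of the 2-norm and Frobenius norm:
# random orthogonal U, V from QR factorizations of Gaussian matrices.
U = np.linalg.qr(rng.standard_normal((5, 5)))[0]
V = np.linalg.qr(rng.standard_normal((3, 3)))[0]
for ord_ in (2, 'fro'):
    assert np.isclose(np.linalg.norm(U @ A @ V.T, ord_),
                      np.linalg.norm(A, ord_))

# (b) ||A||_F = (sigma_1^2 + ... + sigma_p^2)^{1/2} with p = min(m, n).
sigma = np.linalg.svd(A, compute_uv=False)   # p = 3 singular values
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(sigma**2)))
```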

Problem 4

    Our hero is the intrepid, yet sensitive matrix A. Our villain is E, who keeps perturbing A. When A is perturbed he puts on a crumpled hat: Ã = A + E.
    —G. W. STEWART and JI-GUANG SUN, Matrix Perturbation Theory (1990), via N. HIGHAM, Accuracy and Stability of Numerical Algorithms (2002)

Let A be a square nonsingular matrix and let x⋆ be the true solution of Ax⋆ = b for some given vector b. Suppose we have an approximate solution x ≈ x⋆. Let r = b − Ax be the associated residual, and define the rank-one matrix

    E = rxᵀ / ‖x‖².

(The 2-norm is used throughout this question.)

(a) Show that x is the exact solution of Ãx = b, where Ã = A + E.

(b) Show that ‖E‖ = ‖r‖ / ‖x‖. (We call this a backward error for x.)

(c) Suppose x is the exact solution of (A + F)x = b for some other matrix F. Show that ‖F‖ ≥ ‖E‖. (This means that E is an optimal backward error.)

Solution


(a) We have that

    Ãx = Ax + Ex = Ax + r (xᵀx) / ‖x‖² = Ax + r = Ax + b − Ax = b,

since xᵀx = ‖x‖².

(b) From the definition of the matrix 2-norm we have that

    ‖E‖ = max_{y≠0} ‖rxᵀy‖ / ( ‖x‖² ‖y‖ ).

Using the relation xᵀy = ‖x‖ ‖y‖ cos θ, where θ is the angle between the vectors x and y, we have

    ‖E‖ = max_{y≠0} ‖x‖ ‖y‖ |cos θ| ‖r‖ / ( ‖x‖² ‖y‖ ).

Observing that this quantity is maximized when y = x (so that |cos θ| = 1), we obtain the desired result:

    ‖E‖ = ‖x‖² ‖r‖ / ( ‖x‖² ‖x‖ ) = ‖r‖ / ‖x‖.

(c) Let us assume that we do not have the trivial solution x = 0. We can manipulate (A + F)x = b to find

    Fx = b − Ax = r.

We don't know anything about F except that the matrix-vector product Fx = r. Taking norms we have that

    ‖r‖ = ‖Fx‖ ≤ ‖F‖ ‖x‖.

Since x ≠ 0 we have that ‖x‖ ≠ 0. Dividing by ‖x‖ in the relation above we obtain

    ‖E‖ = ‖r‖ / ‖x‖ ≤ ‖F‖.

Thus, ‖F‖ ≥ ‖E‖.
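Parts (a) and (b) admit a direct numerical check: build E from a perturbed solution and verify that it repairs the system exactly and has norm ‖r‖/‖x‖. A sketch with an illustrative random system (not from the original solution):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
x_star = rng.standard_normal(n)
b = A @ x_star
x = x_star + 1e-3 * rng.standard_normal(n)   # an approximate solution

r = b - A @ x
E = np.outer(r, x) / np.dot(x, x)            # E = r x^T / ||x||_2^2

# (a) x solves the perturbed system (A + E) x = b exactly.
assert np.allclose((A + E) @ x, b)

# (b) ||E||_2 = ||r||_2 / ||x||_2, the backward error.
assert np.isclose(np.linalg.norm(E, 2),
                  np.linalg.norm(r) / np.linalg.norm(x))
```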
