
Uploaded by Siddharth Kurella

A concise review of Multivariable Calculus and Linear Algebra.


Part 1. Multivariable Calculus

1. Parametric Equations

Suppose x = f(t) and y = g(t). Then the variable t is called a parameter and the equations are called parametric equations.

As t varies, the point (x, y) varies and traces out a curve called a parametric curve.

The slope of a parametric curve is

dy/dx = (dy/dt)/(dx/dt)   (1)

Proof: dy/dt = (dy/dx)(dx/dt) (Chain Rule)

Given x(t) and y(t), the area under the curve is

A = ∫_a^b y dx = ∫_α^β y(t) x′(t) dt   (2)

where t increases from α to β.

The length of a curve of the form y = f(x) is defined as

L = ∫_a^b √(1 + (dy/dx)²) dx   (3)

Substituting equation 1 into 3, one can find the length of a parametric curve:

L = ∫_α^β √((dx/dt)² + (dy/dt)²) dt   (4)

The surface area of a parametric curve rotated about the x- or y-axis is

x-axis: S = ∫ 2πy ds ;  y-axis: S = ∫ 2πx ds   (5)

where ds = √((dx/dt)² + (dy/dt)²) dt
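Formula (4) lends itself to a quick numerical check. The sketch below (plain Python with the midpoint rule; the function names are illustrative, not from the text) approximates the length of a unit circle, which should come out to 2π.

```python
import math

def parametric_arc_length(dx_dt, dy_dt, a, b, n=10000):
    """Approximate L = integral of sqrt((dx/dt)^2 + (dy/dt)^2) dt (equation 4), midpoint rule."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        total += math.sqrt(dx_dt(t) ** 2 + dy_dt(t) ** 2)
    return total * h

# Unit circle: x = cos t, y = sin t, 0 <= t <= 2*pi, so the length is 2*pi
L = parametric_arc_length(lambda t: -math.sin(t), lambda t: math.cos(t), 0.0, 2 * math.pi)
```

With x = cos t and y = sin t the integrand is identically 1, so the rule recovers 2π essentially exactly.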

MCLA Concise Review

2. Polar Coordinates

The Cartesian coordinates (x, y) of a polar point (r, θ) are defined by

x = r cos θ ;  y = r sin θ   (6)

Therefore r² = x² + y² and tan(θ) = y/x

Figure 1. Polar Coordinate Representation

The graph of a polar equation r = f(θ) consists of all points P that have at least one polar representation (r, θ) whose coordinates satisfy the equation.

To find a tangent line to a polar curve r = f(θ), θ is regarded as a parameter and the parametric equations are written:

x = r cos(θ) = f(θ) cos(θ) and y = r sin(θ) = f(θ) sin(θ)

To find the slope, use equation 1 in terms of polar coordinates:

dy/dx = (dy/dθ)/(dx/dθ) = ((dr/dθ) sin(θ) + r cos(θ)) / ((dr/dθ) cos(θ) − r sin(θ))   (6)

The area of a sector of a circle is

A = (1/2) r² θ   (7)

Considering infinitesimally small sectors of circles, the area enclosed by a polar curve between two angles is

A = ∫_a^b (1/2) r² dθ   (8)

To find the length of a polar curve, simplify (dx/dθ)² + (dy/dθ)²:

dx/dθ = (dr/dθ) cos(θ) − r sin(θ) ;  dy/dθ = (dr/dθ) sin(θ) + r cos(θ)

(dx/dθ)² + (dy/dθ)² = ((dr/dθ) cos(θ) − r sin(θ))² + ((dr/dθ) sin(θ) + r cos(θ))² = r² + (dr/dθ)²

Therefore, plugging into equation 4,

L = ∫_a^b √(r² + (dr/dθ)²) dθ   (9)


3. Sequences and Series

A sequence can be thought of as a list of numbers written in a definite order.

If a sequence approaches a certain value it is called convergent; otherwise it is divergent.

A sequence is increasing if every term after the first is greater than the preceding term, and decreasing if every term after the first is less than the preceding term.

A sequence is monotonic if it is either increasing or decreasing.

A sequence is bounded above if there exists a number M such that a_n ≤ M for all n ≥ 1, and bounded below if there exists a number m such that a_n ≥ m for all n ≥ 1.

Theorem: Every bounded, monotonic sequence is convergent.

A series is the sum of the terms of an infinite sequence and is denoted by

Σ_{n=1}^{∞} a_n = a_1 + a_2 + a_3 + ⋯ + a_n + ⋯

Test for Divergence: If lim_{n→∞} a_n does not exist, or lim_{n→∞} a_n ≠ 0, then the series Σ_{n=1}^{∞} a_n is divergent. (If the series is convergent, then lim_{n→∞} a_n = 0; the converse is false.)

The Integral Test: If f is a continuous, positive, decreasing function on [1, ∞) and a_n = f(n), then the series Σ_{n=1}^{∞} a_n is convergent if and only if the improper integral ∫_1^∞ f(x) dx is convergent.

P-Series: Σ_{n=1}^{∞} 1/n^p is convergent if p > 1 and divergent if p ≤ 1; proved by the integral test.

The Comparison Test: Suppose that Σ a_n and Σ b_n are series with positive terms.

(i) If Σ b_n is convergent and a_n ≤ b_n for all n, then Σ a_n is also convergent.

(ii) If Σ b_n is divergent and a_n ≥ b_n for all n, then Σ a_n is also divergent.

An alternating series is a series whose terms are alternately positive and negative. Example: Σ_{n=1}^{∞} (−1)^n / n

If the alternating series Σ_{n=1}^{∞} (−1)^{n−1} b_n satisfies (i) b_{n+1} ≤ b_n for all n and (ii) lim_{n→∞} b_n = 0, then the alternating series is convergent.

A series Σ a_n is absolutely convergent if the series of absolute values Σ |a_n| is convergent, and is called conditionally convergent if it is convergent but not absolutely convergent.

The Ratio Test: Let lim_{n→∞} |a_{n+1} / a_n| = L.

(i) If L < 1, the series Σ_{n=1}^{∞} a_n is absolutely convergent and therefore convergent.
(ii) If L > 1 or L = ∞, the series Σ_{n=1}^{∞} a_n is divergent.
(iii) If L = 1, the ratio test is inconclusive.
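As a rough illustration of the Ratio Test, one can estimate L by evaluating |a_{n+1}/a_n| at a large index; this is a heuristic check, not a proof of the limit (the index 60 below is an arbitrary choice).

```python
def ratio_test_limit(a, n_large=60):
    """Estimate L = lim |a_{n+1} / a_n| by evaluating the ratio at one large index."""
    return abs(a(n_large + 1) / a(n_large))

# a_n = n / 2^n: the ratio is (n + 1) / (2n) -> 1/2 < 1, so the series converges absolutely.
L = ratio_test_limit(lambda n: n / 2 ** n)
```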


The Root Test: Let lim_{n→∞} ⁿ√|a_n| = L.

(i) If L < 1, the series Σ_{n=1}^{∞} a_n is absolutely convergent and therefore convergent.
(ii) If L > 1 or L = ∞, the series Σ_{n=1}^{∞} a_n is divergent.
(iii) If L = 1, the root test is inconclusive.

A power series is a series of the form

Σ_{n=0}^{∞} c_n (x − a)^n = c_0 + c_1 (x − a) + c_2 (x − a)² + ⋯

Theorem: For a power series Σ_{n=0}^{∞} c_n (x − a)^n there are only three possibilities:

(i) The series converges only when x = a.
(ii) The series converges for all x.
(iii) There is a positive number R, called the radius of convergence, such that the series converges if |x − a| < R and diverges if |x − a| > R.

The interval of convergence of a power series is the interval consisting of all values of x for which the series converges.

A Taylor Series centered at a is given by:

f(x) = Σ_{n=0}^{∞} (f^(n)(a) / n!) (x − a)^n

where c_n = f^(n)(a) / n! in the power series form.

The special case where a = 0 is called a Maclaurin Series.

Important Maclaurin Series:

1/(1 − x) = Σ_{n=0}^{∞} x^n

e^x = Σ_{n=0}^{∞} x^n / n!

sin(x) = Σ_{n=0}^{∞} (−1)^n x^(2n+1) / (2n + 1)!

cos(x) = Σ_{n=0}^{∞} (−1)^n x^(2n) / (2n)!

tan⁻¹(x) = Σ_{n=0}^{∞} (−1)^n x^(2n+1) / (2n + 1)

The above series can be derived by differentiating and integrating other series.
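The Maclaurin series for sin x can be checked against the library value; a minimal sketch in Python (the function name and term count are illustrative choices):

```python
import math

def maclaurin_sin(x, terms=10):
    """Partial sum of sin x = sum over n of (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

approx = maclaurin_sin(1.0)
```

Ten terms already carry the partial sum far below double-precision rounding error at x = 1.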

4. Vectors

A vector is a quantity that has both a magnitude and a direction

Vector Addition: To add two vectors geometrically, place the tail of the second vector at the head of the first, and draw a line from the tail of the first to the head of the second.

Scalar Multiplication: The scalar multiple of a vector v is cv, whose length is |c| times the length of v and whose direction is the same as v if c > 0 and opposite if c < 0.

The magnitude of a vector is its length and is represented by either |v| or ‖v‖. In terms of its components, the magnitude of a vector is

|a| = √(a_1² + a_2² + ⋯ + a_n²)

where n is the number of dimensions the vector is in.

Vectors can also be added algebraically by adding their components. For example, if u = [u_1, u_2] and v = [v_1, v_2], then u + v = [u_1 + v_1, u_2 + v_2].

The standard basis vectors i, j, k can be used to express any vector in V_3:

i = ⟨1, 0, 0⟩ ;  j = ⟨0, 1, 0⟩ ;  k = ⟨0, 0, 1⟩

For example, a = ⟨a_1, a_2, a_3⟩ = ⟨a_1, 0, 0⟩ + ⟨0, a_2, 0⟩ + ⟨0, 0, a_3⟩ = a_1 i + a_2 j + a_3 k

A unit vector is a vector whose length is 1, such as i, j, and k. The unit vector that has the same direction as another vector a is u = (1/|a|) a.

The dot product of two vectors a = ⟨a_1, a_2, a_3⟩ and b = ⟨b_1, b_2, b_3⟩ is given by

a · b = a_1 b_1 + a_2 b_2 + a_3 b_3

which gives a scalar.

The dot product can also be expressed by using the law of cosines:

a · b = |a| |b| cos(θ)   (10)

Theorem: From the above definition, two vectors a and b are orthogonal if and only if a · b = 0.

The scalar projection and vector projection of a vector b onto another vector a are defined as

comp_a b = (a · b) / |a|  and  proj_a b = ((a · b) / |a|²) a   (11)

The cross product of two vectors a = ⟨a_1, a_2, a_3⟩ and b = ⟨b_1, b_2, b_3⟩ is given by

a × b = ⟨a_2 b_3 − a_3 b_2, a_3 b_1 − a_1 b_3, a_1 b_2 − a_2 b_1⟩

which results in a vector.

An easier way to view the cross product is to use determinants:

a × b = | i    j    k   |
        | a_1  a_2  a_3 |
        | b_1  b_2  b_3 |   (12)

The magnitude of the cross product is also expressed as

|a × b| = |a| |b| sin(θ)   (13)

The length of the cross product a × b is equal to the area of the parallelogram determined by a and b.

Properties of Dot Products: Suppose a, b, and c are vectors and k is a scalar.

a · a = |a|²
a · (b + c) = a · b + a · c
a · b = b · a
(k a) · b = k (a · b) = a · (k b)
0 · a = 0

Properties of Cross Products: Suppose a, b, and c are vectors and k is a scalar.

a × b = −(b × a)
(k a) × b = k (a × b) = a × (k b)
a × (b + c) = a × b + a × c
a · (b × c) = (a × b) · c
a × (b × c) = (a · c) b − (a · b) c
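A small sketch makes the dot- and cross-product formulas concrete; it verifies that a × b is orthogonal to both factors, as the properties above imply (plain Python; the vectors chosen are arbitrary examples):

```python
def dot(a, b):
    """Dot product a . b = sum of componentwise products (a scalar)."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product a x b = <a2 b3 - a3 b2, a3 b1 - a1 b3, a1 b2 - a2 b1> (a vector)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
c = cross(a, b)  # (-3.0, 6.0, -3.0), orthogonal to both a and b
```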


5. Vector Functions

A vector function is a function whose domain is a set of real numbers and whose range is a set of vectors:

r(t) = ⟨f(t), g(t), h(t)⟩ = f(t) i + g(t) j + h(t) k

The components of a vector function can be considered parametric equations that can be used to sketch the curve.

The derivative of a vector function is defined as follows:

dr/dt = r′(t) = ⟨f′(t), g′(t), h′(t)⟩ = f′(t) i + g′(t) j + h′(t) k

The integral of a vector function is defined as follows:

∫_a^b r(t) dt = (∫_a^b f(t) dt) i + (∫_a^b g(t) dt) j + (∫_a^b h(t) dt) k

The length of a vector function can be determined by using equation 4 in three dimensions:

L = ∫_a^b √([f′(t)]² + [g′(t)]² + [h′(t)]²) dt  where r(t) = ⟨f(t), g(t), h(t)⟩

or, in more compact form, L = ∫_a^b |r′(t)| dt   (14)

The unit tangent vector indicates the direction of the curve at a certain point and is given by

T(t) = r′(t) / |r′(t)|   (15)

The curvature of a curve at a given point is a measure of how quickly the curve changes direction at that point:

κ = |dT/ds|   (16)

It is, however, easier to compute curvature in terms of the parameter t instead of s:

dT/dt = (dT/ds)(ds/dt), and therefore κ = |dT/dt| / |ds/dt|

So κ(t) = |T′(t)| / |r′(t)|   (17)

The unit normal vector is orthogonal to the unit tangent vector and points in the direction that the curve is curving towards:

N(t) = T′(t) / |T′(t)|   (18)

The binormal vector can be found by using the right-hand rule on the tangent and normal vectors:

B(t) = T(t) × N(t)   (19)
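Equations (15)-(17) can be exercised numerically. The sketch below estimates κ for a circle of radius 5 using finite differences (the step size h is an arbitrary choice); for a circle of radius R the curvature is 1/R everywhere, so the result should be close to 0.2.

```python
import math

def curvature(r, t, h=1e-4):
    """kappa(t) = |T'(t)| / |r'(t)| (equation 17), with central finite differences."""
    def rp(t):  # r'(t), componentwise central difference
        return [(r(t + h)[i] - r(t - h)[i]) / (2 * h) for i in range(3)]
    def unit_tangent(t):  # T(t) = r'(t) / |r'(t)|
        v = rp(t)
        m = math.sqrt(sum(c * c for c in v))
        return [c / m for c in v]
    Tp = [(unit_tangent(t + h)[i] - unit_tangent(t - h)[i]) / (2 * h) for i in range(3)]
    return math.sqrt(sum(c * c for c in Tp)) / math.sqrt(sum(c * c for c in rp(t)))

# Circle of radius 5 in the xy-plane: curvature is 1/5 at every t
kappa = curvature(lambda t: (5 * math.cos(t), 5 * math.sin(t), 0.0), 0.7)
```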


Figure 2. Tangent, Normal, and Binormal Vectors

6. Cylindrical and Spherical Coordinates

Apart from the commonly used Cartesian coordinates, cylindrical coordinates and spherical coordinates are used in special cases to solve certain problems more easily.

In a cylindrical coordinate system, a point P is represented by (r, θ, z), where r and θ are the polar coordinates of the projection of P onto the xy-plane and z is the directed distance from the point to the xy-plane.

Using geometry, the conversions between cylindrical and Cartesian coordinates are:

x = r cos θ ;  y = r sin θ ;  z = z  |  r² = x² + y² ;  tan θ = y/x ;  z = z

In a spherical coordinate system, a point P is represented by (ρ, θ, φ), where ρ is the distance from the origin to P, θ is the angle that the projection of P onto the xy-plane makes with the x-axis, and φ is the angle that the line from the origin to P makes with the positive z-axis; ρ ≥ 0 and 0 ≤ φ ≤ π.

The conversions between spherical and Cartesian coordinates are:

x = ρ sin φ cos θ ;  y = ρ sin φ sin θ ;  z = ρ cos φ  |  ρ² = x² + y² + z²
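The spherical conversions can be sketched as a pair of inverse functions (Python; assuming ρ > 0 and 0 < φ < π so the inverse is well defined):

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    """x = rho sin(phi) cos(theta), y = rho sin(phi) sin(theta), z = rho cos(phi)."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def cartesian_to_spherical(x, y, z):
    """Inverse map: rho from the distance formula, theta via atan2, phi via acos."""
    rho = math.sqrt(x * x + y * y + z * z)
    return (rho, math.atan2(y, x), math.acos(z / rho))

p = spherical_to_cartesian(2.0, math.pi / 3, math.pi / 4)
back = cartesian_to_spherical(*p)  # recovers (2, pi/3, pi/4)
```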

7. Partial Derivatives

Suppose f is a function of two variables x and y, and we let only x vary while keeping y fixed at a constant b. Then we are really considering a function of a single variable x, namely g(x) = f(x, b). If g has a derivative at a, then we call it the partial derivative of f with respect to x at (a, b).

Therefore: f_x(a, b) = g′(a) where g(x) = f(x, b)

Partial Derivative Notations:

f_x(x, y) = f_x = ∂f/∂x = D_x f

f_y(x, y) = f_y = ∂f/∂y = D_y f

To find a partial derivative with respect to a variable, treat the other variables as constants and differentiate the function with respect to that variable. Note: partial derivatives can also be found for functions of more than two variables.

Higher derivatives can also be applied to partial derivatives:

(f_x)_x = f_xx = ∂/∂x (∂f/∂x) = ∂²f/∂x²
(f_x)_y = f_xy = ∂/∂y (∂f/∂x) = ∂²f/∂y∂x
(f_y)_x = f_yx = ∂/∂x (∂f/∂y) = ∂²f/∂x∂y
(f_y)_y = f_yy = ∂/∂y (∂f/∂y) = ∂²f/∂y²

Partial derivatives can help to compute the tangent plane to a surface, which can be used to perform linear approximations.

The equation of the tangent plane to the surface z = f(x, y) at the point P(x_0, y_0, z_0) is

z − z_0 = f_x(x_0, y_0)(x − x_0) + f_y(x_0, y_0)(y − y_0)

This can be used to find approximate values of the function at points near the one used for the tangent plane approximation.

The chain rule can also be applied to derivatives of functions of more than one variable. Suppose that z = f(x, y) is differentiable and x = g(t) and y = h(t). Then:

dz/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)   (20)

If x = g(s, t) and y = h(s, t), then:

∂z/∂s = (∂z/∂x)(∂x/∂s) + (∂z/∂y)(∂y/∂s)  and  ∂z/∂t = (∂z/∂x)(∂x/∂t) + (∂z/∂y)(∂y/∂t)   (21)

Example: Write out the chain rule for the case where w = f(x, y, z, t) and x = x(u, v), y = y(u, v), z = z(u, v), and t = t(u, v):

∂w/∂u = (∂w/∂x)(∂x/∂u) + (∂w/∂y)(∂y/∂u) + (∂w/∂z)(∂z/∂u) + (∂w/∂t)(∂t/∂u)

∂w/∂v = (∂w/∂x)(∂x/∂v) + (∂w/∂y)(∂y/∂v) + (∂w/∂z)(∂z/∂v) + (∂w/∂t)(∂t/∂v)

The directional derivative of f at (x_0, y_0) in the direction of a unit vector u = ⟨a, b⟩ is

D_u f(x_0, y_0) = lim_{h→0} [f(x_0 + ha, y_0 + hb) − f(x_0, y_0)] / h   (22)


If u = i then D_i f = f_x, and if u = j then D_j f = f_y. Therefore the partial derivatives of f with respect to x and y are special cases of the directional derivative. An easier way to express the directional derivative in the direction of any unit vector u = ⟨a, b⟩ is

D_u f(x, y) = f_x(x, y) a + f_y(x, y) b, or more simply D_u f(x, y) = ⟨f_x(x, y), f_y(x, y)⟩ · u

The term ⟨f_x(x, y), f_y(x, y)⟩ is used commonly; it is called the gradient vector and is denoted by ∇f(x, y). The gradient vector represents the direction of fastest increase of f.

It is easy to see that D_u f(x, y) = ∇f(x, y) · u   (23)

The directional derivative and gradient can be applied in three dimensions as well:

∇f = (∂f/∂x) i + (∂f/∂y) j + (∂f/∂z) k  and  D_u f(x, y, z) = ∇f(x, y, z) · u
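Equation (23) can be checked with a finite-difference gradient. The sketch below uses f(x, y) = x²y, for which ∇f(1, 2) = (4, 1) exactly (the function and step size are illustrative choices):

```python
import math

def gradient(f, x, y, h=1e-6):
    """Central-difference approximation of the gradient (f_x, f_y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (fx, fy)

def directional_derivative(f, x, y, u):
    """D_u f = grad f . u for a unit vector u (equation 23)."""
    gx, gy = gradient(f, x, y)
    return gx * u[0] + gy * u[1]

f = lambda x, y: x ** 2 * y              # f_x = 2xy, f_y = x^2
u = (1 / math.sqrt(2), 1 / math.sqrt(2)) # unit vector along <1, 1>
D = directional_derivative(f, 1.0, 2.0, u)  # exact value: (4 + 1) / sqrt(2)
```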

Just as with single-variable functions, the maxima and minima of multivariable functions can also be found.

Theorem: If f has a local maximum or minimum at (a, b) and the first-order partial derivatives exist there, then f_x(a, b) = 0 and f_y(a, b) = 0. The points at which f_x(a, b) = 0 and f_y(a, b) = 0 are called critical points.

Second Derivatives Test: Suppose (a, b) is a critical point. Let

D = | f_xx  f_xy |
    | f_yx  f_yy | = f_xx f_yy − (f_xy)²

(a) If D > 0 and f_xx(a, b) > 0, then f(a, b) is a local minimum.
(b) If D > 0 and f_xx(a, b) < 0, then f(a, b) is a local maximum.
(c) If D < 0, then f(a, b) is neither a local minimum nor a local maximum but a saddle point.
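The Second Derivatives Test translates directly into a small classifier (a sketch; it takes the second partials already evaluated at the critical point as inputs):

```python
def classify_critical_point(fxx, fyy, fxy):
    """Second Derivatives Test with D = fxx * fyy - fxy^2 at a critical point."""
    D = fxx * fyy - fxy ** 2
    if D > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"

# f(x, y) = x^2 - y^2 at (0, 0): fxx = 2, fyy = -2, fxy = 0 -> saddle point
kind = classify_critical_point(2.0, -2.0, 0.0)
```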

8. Multiple Integrals

The concept of an integral can be applied to multiple dimensions. Double integrals are integrals of two-variable functions and represent the volume under the surface. Riemann sums can be used to estimate these integrals by adding up rectangular prisms instead of rectangles.

Figure 4. Riemann Sum of a 3-Dimensional function

Double integrals can be solved one integral at a time as an iterated integral:

∫_c^d ∫_a^b f(x, y) dx dy = ∫_c^d [ ∫_a^b f(x, y) dx ] dy

Theorem: If R is the rectangle {(x, y) | a ≤ x ≤ b, c ≤ y ≤ d}, then

∫∫_R f(x, y) dA = ∫_c^d ∫_a^b f(x, y) dx dy = ∫_a^b ∫_c^d f(x, y) dy dx


Theorem: ∫∫_R g(x) h(y) dA = (∫_a^b g(x) dx)(∫_c^d h(y) dy), where R = [a, b] × [c, d] is a rectangle.

Although these calculations may seem simple, the regions being integrated over are not always rectangles. A plane region D is said to be Type I if it lies between the graphs of two continuous functions of x. Similarly, it is said to be Type II if it lies between two continuous functions of y.

Figure 5. Type I and Type II Regions

If f is continuous on a Type I region D such that D = {(x, y) | a ≤ x ≤ b, g_1(x) ≤ y ≤ g_2(x)}, then

∫∫_D f(x, y) dA = ∫_a^b ∫_{g_1(x)}^{g_2(x)} f(x, y) dy dx   (24)

If f is continuous on a Type II region D such that D = {(x, y) | c ≤ y ≤ d, h_1(y) ≤ x ≤ h_2(y)}, then

∫∫_D f(x, y) dA = ∫_c^d ∫_{h_1(y)}^{h_2(y)} f(x, y) dx dy   (24)

Example: Evaluate ∫∫_D (x + 2y) dA where D is the region bounded by the parabolas y = 2x² and y = 1 + x².

From the graph, the region D can be described as D = {(x, y) | −1 ≤ x ≤ 1, 2x² ≤ y ≤ 1 + x²}

Figure 6. y = 2x² and y = 1 + x²

Therefore

∫∫_D (x + 2y) dA = ∫_{−1}^{1} ∫_{2x²}^{1+x²} (x + 2y) dy dx = ∫_{−1}^{1} [xy + y²]_{y=2x²}^{y=1+x²} dx = 32/15

Therefore the volume formed by z = x + 2y over region D is 32/15.
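The example above can be verified with a midpoint-rule Riemann sum over the Type I region; the result should approach 32/15 ≈ 2.133 (a sketch, not an efficient quadrature; the grid size is an arbitrary choice):

```python
def double_integral_type1(f, a, b, g1, g2, n=400):
    """Midpoint-rule approximation of the Type I iterated integral in equation (24)."""
    hx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * hx
        lo, hi = g1(x), g2(x)
        hy = (hi - lo) / n
        for j in range(n):
            y = lo + (j + 0.5) * hy
            total += f(x, y) * hy
    return total * hx

# Region between y = 2x^2 and y = 1 + x^2 for -1 <= x <= 1; exact value 32/15
V = double_integral_type1(lambda x, y: x + 2 * y, -1.0, 1.0,
                          lambda x: 2 * x ** 2, lambda x: 1 + x ** 2)
```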


Triple integrals are used similarly to double integrals:

∫∫∫_E f(x, y, z) dV = ∫∫_D [ ∫_{u_1(x,y)}^{u_2(x,y)} f(x, y, z) dz ] dA

A(D) = ∫∫_D dA  and  V(E) = ∫∫∫_E dV

Multiple integrals can also be applied to regions defined by polar, cylindrical, and spherical coordinates.

If f is continuous on a polar region of the form D = {(r, θ) | α ≤ θ ≤ β, h_1(θ) ≤ r ≤ h_2(θ)}, then

∫∫_D f(x, y) dA = ∫_α^β ∫_{h_1(θ)}^{h_2(θ)} f(r cos θ, r sin θ) r dr dθ   (25)

Figure 7. D = {(r, θ) | α ≤ θ ≤ β, h_1(θ) ≤ r ≤ h_2(θ)}


Example: Use a double integral to find the area enclosed by one loop of the four-leaved rose r = cos(2θ).

Figure 8. r = cos(2θ)

A(D) = ∫∫_D dA = ∫_{−π/4}^{π/4} ∫_0^{cos(2θ)} r dr dθ = ∫_{−π/4}^{π/4} [r²/2]_0^{cos(2θ)} dθ = π/8

The formula for triple integration in cylindrical coordinates:

∫∫∫_E f(x, y, z) dV = ∫_α^β ∫_{h_1(θ)}^{h_2(θ)} ∫_{u_1(r cos θ, r sin θ)}^{u_2(r cos θ, r sin θ)} f(r cos θ, r sin θ, z) r dz dr dθ   (26)

Figure 9. Triple Integral in Cylindrical Coordinates


Example: Evaluate

∫_{−2}^{2} ∫_{−√(4−x²)}^{√(4−x²)} ∫_{√(x²+y²)}^{2} (x² + y²) dz dy dx

This iterated integral is a triple integral over the solid region

E = {(x, y, z) | −2 ≤ x ≤ 2, −√(4 − x²) ≤ y ≤ √(4 − x²), √(x² + y²) ≤ z ≤ 2}

Using cylindrical coordinates and the fact that r = √(x² + y²), the solid region can be represented as

E = {(r, θ, z) | 0 ≤ θ ≤ 2π, 0 ≤ r ≤ 2, r ≤ z ≤ 2}

Therefore

∫_{−2}^{2} ∫_{−√(4−x²)}^{√(4−x²)} ∫_{√(x²+y²)}^{2} (x² + y²) dz dy dx = ∫_0^{2π} ∫_0^2 ∫_r^2 r² · r dz dr dθ = 16π/5


The formula for triple integration in spherical coordinates:

∫∫∫_E f(x, y, z) dV = ∫_c^d ∫_α^β ∫_a^b f(ρ sin φ cos θ, ρ sin φ sin θ, ρ cos φ) ρ² sin φ dρ dθ dφ   (27)

where E is given by E = {(ρ, θ, φ) | a ≤ ρ ≤ b, α ≤ θ ≤ β, c ≤ φ ≤ d}

Figure 11. Spherical volume element: dV = ρ² sin φ dρ dθ dφ

When using spherical coordinates for triple integrals, use the fact that ρ = √(x² + y² + z²) to simplify the integral.

To determine limits for triple integrals in spherical or cylindrical coordinates use a process similar

to the one below:

Figure 12. Process for Triple Integrals in Spherical Coordinates

9. Change of Variables in Multiple Integrals

Consider the position vector r(u, v) = x(u, v) i + y(u, v) j for a new set of variables u and v, where x = x(u, v) and y = y(u, v). Then

∫∫_R f(x, y) dx dy = ∫∫_S f(x(u, v), y(u, v)) |∂(x, y)/∂(u, v)| du dv   (28)

where the Jacobian is

∂(x, y)/∂(u, v) = | ∂x/∂u  ∂x/∂v |
                  | ∂y/∂u  ∂y/∂v |

Proof (sketch):

dA = |(∂r/∂u) du × (∂r/∂v) dv| = |∂r/∂u × ∂r/∂v| du dv

In Cartesian coordinates, dA = dx dy. Therefore dx dy = |∂r/∂u × ∂r/∂v| du dv, where |∂(x, y)/∂(u, v)| = |∂r/∂u × ∂r/∂v|.


Similarly, in three dimensions:

∫∫∫_R f(x, y, z) dx dy dz = ∫∫∫_S f(x(u, v, w), y(u, v, w), z(u, v, w)) |∂(x, y, z)/∂(u, v, w)| du dv dw   (29)

where

∂(x, y, z)/∂(u, v, w) = | ∂x/∂u  ∂x/∂v  ∂x/∂w |
                        | ∂y/∂u  ∂y/∂v  ∂y/∂w |
                        | ∂z/∂u  ∂z/∂v  ∂z/∂w |

Example: Use formula 29 to derive the formula for triple integration in spherical coordinates.

x = ρ sin φ cos θ ;  y = ρ sin φ sin θ ;  z = ρ cos φ

∂(x, y, z)/∂(ρ, θ, φ) = | ∂x/∂ρ  ∂x/∂θ  ∂x/∂φ |
                        | ∂y/∂ρ  ∂y/∂θ  ∂y/∂φ |
                        | ∂z/∂ρ  ∂z/∂θ  ∂z/∂φ |

= | sin φ cos θ   −ρ sin φ sin θ   ρ cos φ cos θ |
  | sin φ sin θ    ρ sin φ cos θ   ρ cos φ sin θ |
  | cos φ          0              −ρ sin φ       | = −ρ² sin φ

so the absolute value of the Jacobian is ρ² sin φ.

10. Vector Calculus

A vector field on two-dimensional space is a function F that assigns to each point (x, y) a two-dimensional vector F(x, y). F can be written in terms of its component functions P and Q as follows:

F(x, y) = P(x, y) i + Q(x, y) j   (30)

Similarly, for three-dimensional vector fields:

F(x, y, z) = P(x, y, z) i + Q(x, y, z) j + R(x, y, z) k   (31)

Figure 13. Vector Fields: (a) two-dimensional; (b) three-dimensional

It is important to note that the gradient is also a vector field, called the gradient vector field:

∇f(x, y) = f_x(x, y) i + f_y(x, y) j

If f is defined on a smooth curve C in two dimensions, then the line integral of f along C is

∫_C f(x, y) ds  where ds = √((dx/dt)² + (dy/dt)²) dt   (32)

A line integral of a scalar function can be interpreted as the area of the "curtain" between the curve C and the surface z = f(x, y).


Line integrals can also be taken with respect to x and y:

∫_C f(x, y) dx = ∫_a^b f(x(t), y(t)) x′(t) dt   (33)

∫_C f(x, y) dy = ∫_a^b f(x(t), y(t)) y′(t) dt   (34)

Suppose that the curve C is given by a vector function r(t), a ≤ t ≤ b. Then:

∫_C F · dr = ∫_a^b F(r(t)) · r′(t) dt = ∫_C F · T ds   (35)

Line integrals are treated similarly in three-dimensional space.

Figure 14. Line Integrals

∫_C F · dr = ∫_C P dx + Q dy + R dz

Theorem: Let C be a smooth curve given by the vector function r(t), a ≤ t ≤ b. Then

∫_C ∇f · dr = f(r(b)) − f(r(a))   (36)

Proof:

∫_C ∇f · dr = ∫_a^b ∇f(r(t)) · r′(t) dt = ∫_a^b [(∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt)] dt

= ∫_a^b d/dt [f(r(t))] dt (Chain Rule)

= f(r(b)) − f(r(a)) (Fundamental Theorem of Calculus)

Theorem: ∫_C F · dr is independent of path in D if and only if ∫_C F · dr = 0 for every closed path C in D.

Theorem: If ∫_C F · dr is independent of path in D, then F is a conservative vector field on D; that is, there exists a function f such that ∇f = F.

Theorem: If F(x, y) = P(x, y) i + Q(x, y) j is a conservative vector field, then

∂P/∂y = ∂Q/∂x   (37)

Proof:

F = ∇f, so P i + Q j = (∂f/∂x) i + (∂f/∂y) j

Therefore P = ∂f/∂x and Q = ∂f/∂y, and

∂P/∂y = ∂²f/∂y∂x = ∂²f/∂x∂y = ∂Q/∂x (Clairaut's Theorem)

Green's Theorem: Let C be a positively oriented, simple closed curve in the plane and let D be the region bounded by C. Then

∮_C P dx + Q dy = ∫∫_D (∂Q/∂x − ∂P/∂y) dA   (38)
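Green's Theorem can be sanity-checked numerically. With P = −y and Q = x, the right-hand side of (38) over the unit disk is ∫∫ 2 dA = 2π; the sketch below evaluates the left-hand side around the unit circle (a heuristic check with the midpoint rule; names are illustrative):

```python
import math

def line_integral_circle(P, Q, R=1.0, n=20000):
    """Approximate the closed line integral of P dx + Q dy around x = R cos t, y = R sin t."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = R * math.cos(t), R * math.sin(t)
        dx, dy = -R * math.sin(t), R * math.cos(t)  # x'(t), y'(t)
        total += (P(x, y) * dx + Q(x, y) * dy) * h
    return total

# P = -y, Q = x on the unit circle: both sides of Green's Theorem equal 2*pi
I = line_integral_circle(lambda x, y: -y, lambda x, y: x)
```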

If F = P i + Q j + R k, then the curl of F is

∇ × F = curl F = | i     j     k     |
                 | ∂/∂x  ∂/∂y  ∂/∂z |
                 | P     Q     R     |

If F = P i + Q j + R k, then the divergence of F is

div F = ∇ · F = ∂P/∂x + ∂Q/∂y + ∂R/∂z

            Acts on   Gives
Gradient    Scalars   Vectors
Divergence  Vectors   Scalars
Curl        Vectors   Vectors

Similar to line integrals, surface integrals can be thought of as the volume of the space carved out by a surface over a region:

∫∫_S f(x, y, z) dS = ∫∫_D f(x, y, g(x, y)) √((∂z/∂x)² + (∂z/∂y)² + 1) dA = ∫∫_D f(r(u, v)) |r_u × r_v| dA   (39)

The surface integral of a continuous vector field F over S is called the flux of F across S:

∫∫_S F · dS = ∫∫_D F · (r_u × r_v) dA

Stokes' Theorem: Let S be an oriented, piecewise-smooth surface bounded by a simple, closed boundary curve C with positive orientation. Let F be a vector field whose components have continuous partial derivatives. Then

∮_C F · dr = ∫∫_S curl F · dS   (40)


Proof (sketch): Write F · dr = P dx + Q dy + R dz and expand each term over a small patch of S. The cross terms pair up into the signed area elements dA_i, dA_j, dA_k (the projections of dS onto the coordinate planes):

F · dr → (∂R/∂y − ∂Q/∂z) dA_i + (∂P/∂z − ∂R/∂x) dA_j + (∂Q/∂x − ∂P/∂y) dA_k = curl F · dS

Summing these contributions over the surface gives ∮_C F · dr = ∫∫_S curl F · dS.

Example: Use Stokes' Theorem to evaluate ∮_C F · dr where F(x, y, z) = −y² i + x j + z² k and C is the curve of intersection of the plane y + z = 4 and the cylinder x² + y² = 1.

Figure 15. Intersection of y + z = 4 and x² + y² = 1

curl F = | i     j     k     |
         | ∂/∂x  ∂/∂y  ∂/∂z |
         | −y²   x     z²    | = (1 + 2y) k

∮_C F · dr = ∫∫_S curl F · dS = ∫∫_D (1 + 2y) dA = ∫_0^{2π} ∫_0^1 (1 + 2r sin θ) r dr dθ = π

Divergence Theorem: Let E be a simple solid region and let S be the boundary surface of E, given with positive (outward) orientation. Let F be a vector field whose component functions have continuous partial derivatives. Then

∫∫_S F · dS = ∫∫∫_E div F dV   (41)


Proof (sketch): Decompose the flux through the boundary into its components along i, j, and k. For a small box, the net flux of the x-component through the two faces normal to i is (∂F_x/∂x) dx dy dz, and similarly for the y- and z-components. Summing these contributions over the region,

∫∫_S F · dS = ∫∫∫_E (∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z) dV = ∫∫∫_E (∇ · F) dV

Example: Find the flux of the vector field F = z i + y j + x k over the sphere B of radius a centered at the origin.

div F = ∂z/∂x + ∂y/∂y + ∂x/∂z = 0 + 1 + 0 = 1

∫∫_S F · dS = ∫∫∫_B div F dV = ∫∫∫_B 1 dV = V(B) = (4/3)πa³

11. Second-Order Differential Equations

A second-order linear differential equation has the form

P(x) d²y/dx² + Q(x) dy/dx + R(x) y = G(x)   (42)

The cases where G(x) = 0 are called homogeneous linear equations, and the cases where G(x) ≠ 0 are called non-homogeneous linear equations.

Theorem: If y_1(x) and y_2(x) are both solutions of the linear homogeneous equation 42 and c_1 and c_2 are any constants, then the function

y(x) = c_1 y_1(x) + c_2 y_2(x)

is also a solution of equation 42.

Proof:

Since y_1 and y_2 are solutions,

P(x) y_1″ + Q(x) y_1′ + R(x) y_1 = 0  and  P(x) y_2″ + Q(x) y_2′ + R(x) y_2 = 0

Then, for y = c_1 y_1 + c_2 y_2:

P(x) y″ + Q(x) y′ + R(x) y = P(x)(c_1 y_1 + c_2 y_2)″ + Q(x)(c_1 y_1 + c_2 y_2)′ + R(x)(c_1 y_1 + c_2 y_2)

= P(x)(c_1 y_1″ + c_2 y_2″) + Q(x)(c_1 y_1′ + c_2 y_2′) + R(x)(c_1 y_1 + c_2 y_2)

= c_1 [P(x) y_1″ + Q(x) y_1′ + R(x) y_1] + c_2 [P(x) y_2″ + Q(x) y_2′ + R(x) y_2]

= c_1 (0) + c_2 (0) = 0


The function y(x) = c_1 y_1(x) + c_2 y_2(x) is also called the general solution.

If the second-order linear equation has constant coefficients, the auxiliary equation (or characteristic equation) of ay″ + by′ + cy = 0 is

ar² + br + c = 0

The roots of this equation, r_1 and r_2, are used to find the general solution.

Case 1: b² − 4ac > 0 (distinct real roots r_1, r_2)

y = c_1 e^(r_1 x) + c_2 e^(r_2 x)

Case 2: b² − 4ac = 0 (repeated real root r)

y = c_1 e^(rx) + c_2 x e^(rx)

Case 3: b² − 4ac < 0 (complex roots α ± βi)

y = e^(αx) (c_1 cos βx + c_2 sin βx)

On the other hand, the general solution of a nonhomogeneous differential equation can be written as

y(x) = y_p(x) + y_c(x)

where y_p is a particular solution and y_c is the general solution of the homogeneous form of the equation.

Procedure to find the particular solution (the method of undetermined coefficients): substitute a trial function y_p(x) of the same form as G(x) (for example, a polynomial of the same degree when G is a polynomial) into the differential equation and determine the coefficients.

Example: Solve y″ + 4y = e^(3x)

y_c = c_1 cos 2x + c_2 sin 2x (using the method above to find the homogeneous solution)

For the particular solution we try y_p = A e^(3x). Then y_p′ = 3A e^(3x) and y_p″ = 9A e^(3x).

Therefore 9A e^(3x) + 4(A e^(3x)) = e^(3x), so 13A e^(3x) = e^(3x) and A = 1/13:

y_p(x) = (1/13) e^(3x)

The general solution is thus y = y_c + y_p:

y(x) = (1/13) e^(3x) + c_1 cos 2x + c_2 sin 2x

If initial conditions are provided, the constants c_1 and c_2 can be found.
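The worked example can be cross-checked with a numerical integrator. With initial conditions y(0) = 1/13 and y′(0) = 3/13, the general solution reduces to y = e^(3x)/13 exactly, so an RK4 solution of y″ = e^(3x) − 4y should match e³/13 at x = 1 (a sketch; the step count is an arbitrary choice):

```python
import math

def rk4_second_order(f, x0, y0, yp0, x_end, n=2000):
    """Integrate y'' = f(x, y, y') with classical RK4 on the first-order system (y, y')."""
    h = (x_end - x0) / n
    x, y, v = x0, y0, yp0
    for _ in range(n):
        def F(x, s):  # derivative of the state s = (y, y')
            return (s[1], f(x, s[0], s[1]))
        s = (y, v)
        k1 = F(x, s)
        k2 = F(x + h / 2, (s[0] + h / 2 * k1[0], s[1] + h / 2 * k1[1]))
        k3 = F(x + h / 2, (s[0] + h / 2 * k2[0], s[1] + h / 2 * k2[1]))
        k4 = F(x + h, (s[0] + h * k3[0], s[1] + h * k3[1]))
        y = s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v = s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

# y'' + 4y = e^{3x}, y(0) = 1/13, y'(0) = 3/13  ->  exact solution y = e^{3x}/13
y1 = rk4_second_order(lambda x, y, yp: math.exp(3 * x) - 4 * y, 0.0, 1 / 13, 3 / 13, 1.0)
```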

Part 2. Linear Algebra

12. Linear Combinations

A linear combination is a vector equal to a sum of scalar multiples of other vectors. In mathematics, this is:

v = c_1 v_1 + c_2 v_2 + c_3 v_3 + …

The scalars c_1, c_2, c_3, etc. are called the coefficients of the linear combination.

Using linear combinations, one can create a new coordinate grid expressing all points as linear

combinations of two initial vectors.

13. Vector Inequalities


The Triangle Inequality states the following: for all vectors u and v in Rⁿ,

‖u + v‖ ≤ ‖u‖ + ‖v‖

Proof: This inequality will be proven by proving that the square of the left-hand side is less than or equal to the square of the right-hand side.

‖u + v‖² = (u + v) · (u + v) = u · u + 2(u · v) + v · v
≤ ‖u‖² + 2|u · v| + ‖v‖² (absolute value is non-negative)
≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖² (Cauchy-Schwarz Inequality)
= (‖u‖ + ‖v‖)²

Since the squares of both sides satisfy the inequality, and both sides are non-negative, the inequality must be true.

The Cauchy-Schwarz Inequality states the following: for all vectors u and v in Rⁿ,

|u · v| ≤ ‖u‖‖v‖

Proof: |u · v| = ‖u‖‖v‖ |cos θ|. Since |cos θ| ≤ 1, the inequality is true, as ‖u‖‖v‖ |cos θ| ≤ ‖u‖‖v‖.
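Both inequalities are easy to spot-check on random vectors (a sketch; the small tolerance guards against floating-point rounding, and the seed is an arbitrary choice):

```python
import math
import random

def norm(v):
    """Euclidean norm ||v||."""
    return math.sqrt(sum(c * c for c in v))

def dot(u, v):
    """Dot product u . v."""
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
ok = True
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(4)]
    v = [random.uniform(-1, 1) for _ in range(4)]
    cauchy = abs(dot(u, v)) <= norm(u) * norm(v) + 1e-12
    triangle = norm([a + b for a, b in zip(u, v)]) <= norm(u) + norm(v) + 1e-12
    ok = ok and cauchy and triangle
```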

14. Lines and Planes

In R², the normal form of the equation of a line ℓ is:

n · (x − p) = 0 or n · x = n · p

where p is a specific point on the line and n, which is not 0, is a normal vector for the line ℓ.

The general form of the equation of a line is:

ax + by = c

where a normal vector n for the line is equal to [a, b].

The vector form of the equation of a line in R² or R³ is:

x = p + t d

where p is a specific point on the line and d, which is not 0, is a direction vector for the line ℓ.

By taking each component of the vectors, the parametric equations of a line are obtained.

The normal form of the equation of a plane P in R³ is:

n · (x − p) = 0 or n · x = n · p

where p is a specific point on P and n, which is not 0, is a normal vector for P.

The vector form of the equation of a plane in R³ is:

x = p + s u + t v

where p is a point on P and u and v are direction vectors for the plane P (nonzero, parallel to P, but not parallel to each other).


The equations that result from each component of the vectors are called the parametric equations of the plane.

15. Systems of Linear Equations and Matrices

A linear equation in n variables is an equation of the following form:

a_1 x_1 + a_2 x_2 + a_3 x_3 + … + a_n x_n = b

where a_1, …, a_n are the coefficients and b is the constant term.

The coefficients and the constant term must be constants.

A solution of a linear equation is a vector whose components satisfy the equation (substituting each x_i with the corresponding component).

A system of linear equations is a finite set of linear equations with the same variables. A solution of the system is a vector that is a solution of all linear equations in the system. The solution set is the set of all solutions for the system. Finding the solution set is called solving the system.

A system of linear equations with real coefficients has either:

A unique solution (consistent system)
Infinitely many solutions (consistent system)
No solutions (inconsistent system)

Two linear systems are equivalent if their solution sets are the same.

Linear systems are generally solved by utilizing Gaussian elimination. Take the linear system:

ax + by = m
cx + dy = n

Solving this using Gaussian elimination involves creating an augmented matrix

[ a  b | m ]
[ c  d | n ]

and putting it in row echelon form. Using back substitution from there, all the variables can be solved for, and the solution vector [x, y] can be found.
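The procedure just described can be sketched directly. The implementation below adds partial pivoting (a standard numerical safeguard not discussed in the text) and solves the 2 × 2 system 2x + y = 5, x + 3y = 10:

```python
def solve_gaussian(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting and back substitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Swap in the row with the largest pivot to avoid dividing by tiny numbers
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate entries below the pivot (row reduction toward echelon form)
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10 have the unique solution x = 1, y = 3
x = solve_gaussian([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```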

A matrix is in row echelon form if it follows these guidelines:

(1) Rows consisting of zeros only are at the bottom
(2) The first nonzero element in each nonzero row (called the leading entry) is in a column to the left of any leading entries below it

This is an example of a matrix in row echelon form:

[ 2  4  1 ]
[ 0  1  2 ]
[ 0  0  0 ]

Elementary row operations are used to put matrices into row echelon form. The following are elementary row operations:

Interchange two rows
Multiply a row by a nonzero constant
Add a multiple of one row to another

The process of putting a matrix in row echelon form is called row reduction.

Matrices are considered row equivalent if a series of elementary row operations can convert one matrix into the other; equivalently, if they share a row echelon form, since the row operations can be reversed to recover the original matrix.

The rank of a matrix is the number of nonzero rows in its row echelon form.

The Rank Theorem: Let A be the coefficient matrix of a system of linear equations in n variables. If the system is consistent (that is, it has at least one solution), then:

number of free variables = n − rank(A)

In other words, for every variable there must be another independent equation in order to obtain a single solution.

To simplify back substitution, reduced row echelon form can be used. A matrix is in reduced row echelon form if:

(1) It is in row echelon form
(2) The leading entry in each nonzero row is a 1 (called a leading 1)
(3) Each column containing a leading 1 has zeros everywhere else

Gauss-Jordan elimination is similar to Gaussian elimination, except that instead of stopping at row echelon form, it proceeds to reduced row echelon form.

Homogeneous systems of linear equations are systems where the constant term is zero in each

equation. Their augmented matrix has the form [A | 0].

Homogeneous systems have at least one solution (the trivial solution, x = 0).

If a homogeneous system has fewer equations than variables, it must have infinitely many
solutions.

16. Spanning Sets

Theorem:

A system of linear equations with augmented matrix [A | b] is consistent if and only if b is a linear
combination of the columns of A.

The span of a set of vectors {v1, v2, v3, ..., vk} is the set of all linear combinations of that set.

If the span is equal to R^n, the set is referred to as a spanning set for R^n.


17. Linear Independence

A set of vectors {v1, v2, v3, ..., vk} is linearly dependent if scalars c1, ..., ck exist such that:

c1 v1 + c2 v2 + ... + ck vk = 0

and at least one of the scalars is not 0.

Additionally, a set of vectors is linearly dependent if and only if one of the vectors can be expressed

as a linear combination of the others.

If a set of vectors is not linearly dependent, they are said to be linearly independent.

Theorem:

If there is a set of m row vectors in a matrix A, the set of row vectors is linearly dependent if and
only if rank(A) < m.

Therefore, any set of m vectors in R^n is linearly dependent if m > n.

18. Matrices and Matrix Algebra

A matrix is a rectangular array of numbers. These numbers are called entries or elements.

This is an example of a matrix:

[ a b c ]
[ d e f ]
[ g h i ]

The size of a matrix is represented as the number of rows (m) × the number of columns (n).

The above matrix has a size of 3 × 3.

A 1 × m matrix is called a row matrix (and is also a row vector).

An n × 1 matrix is called a column matrix (and is also a column vector).

The diagonal entries of a matrix are those whose row index and column index are the same.
Examples are a11, a22, a33, ..., ann.

If the matrix has the same number of rows as it does columns, it is a square matrix.

A square matrix with zero non-diagonal entries is a diagonal matrix.

A diagonal matrix with all diagonal entries the same is a scalar matrix.

Lastly, if the scalar on the diagonal is 1, it is an identity matrix.

Two matrices are equal if their size and corresponding entries are the same.

Adding matrices is as simple as adding each corresponding entry to each other.

Scalar multiplication is just as easy: multiply every entry in the matrix by the scalar.

Multiplying two matrices is more complex. If C = AB, where A has size m × n and B
has size n × r, then the size of C is m × r.

Each element c_ij in C is equal to A_i · b_j, the dot product of the ith row of A with the jth
column of B.

Another way to write this is:

c_ij = Σ_{k=1}^{n} a_ik b_kj
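The entry-by-entry formula translates directly into code. A minimal Python sketch (the function name and example matrices are ours):

```python
def mat_mul(A, B):
    """Multiply an m x n matrix A by an n x r matrix B.
    Entry c_ij is the sum over k of a_ik * b_kj."""
    m, n, r = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]          # 2 x 2
B = [[5, 6, 7],
     [8, 9, 10]]      # 2 x 3
print(mat_mul(A, B))  # [[21, 24, 27], [47, 54, 61]], a 2 x 3 result
```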

Matrices can be divided into submatrices by partitioning them into blocks.


Take, for example, the following matrix:

[ 1 0 0 | 2 1 ]
[ 0 1 0 | 1 3 ]
[ 0 0 1 | 4 0 ]
[ 0 0 0 | 1 7 ]
[ 0 0 0 | 7 2 ]

It can be partitioned in block form as

[ I B ]
[ O C ]

where I is the 3 × 3 identity, B is 3 × 2, O is the 2 × 3 zero matrix, and C is a
2 × 2 matrix.

Just like with scalar numbers, a matrix power A^k is equal to the matrix A multiplied by itself
k times.

If A is a square matrix, and r and s are non-negative integers, then the following is true:

(1) A^r A^s = A^(r+s)
(2) (A^r)^s = A^(rs)

The transpose A^T of a matrix A is obtained by interchanging the rows and columns of the matrix.
Therefore, (A^T)_ij = A_ji.

A matrix is symmetric if its transpose is equal to itself.

Algebraic Properties of Matrix Addition and Scalar Multiplication:

If A, B, and C are matrices of the same size and c and d are scalars, then the following is true:

A + B = B + A (Commutativity)
(A + B) + C = A + (B + C) (Associativity)
A + O = A
A + (−A) = O
c(A + B) = cA + cB (Distributivity)
(c + d)A = cA + dA (Distributivity)
c(dA) = (cd)A
1A = A

Matrices can form linear combinations in the same way vectors do.

Similarly, the concept of linear independence also applies.

Lastly, just as we define the span of a set of vectors, the span of a set of matrices is the set of all
linear combinations of the matrices.

Properties of Matrix Multiplication:

If A, B, and C are matrices of the appropriate size such that the operations can be performed,
and k is a scalar, then the following is true:

A(BC) = (AB)C (Associativity)
A(B + C) = AB + AC (Left Distributivity)
(A + B)C = AC + BC (Right Distributivity)
k(AB) = (kA)B = A(kB)
I_m A = A = A I_n (if A is m × n)

Transpose Properties:

If A and B are matrices of the appropriate size so that the operations can be performed, and k
is a scalar, then the following is true:

(A^T)^T = A
(A + B)^T = A^T + B^T
(kA)^T = k(A^T)
(AB)^T = B^T A^T
(A^r)^T = (A^T)^r for all non-negative integers r

If A is a square matrix, then A + A^T is symmetric.

For any matrix A, both AA^T and A^T A are symmetric.

19. The Inverse of a Matrix

If A is a square n × n matrix, its inverse (when one exists) is the unique n × n matrix A^-1
satisfying:

AA^-1 = I = A^-1 A

If this matrix A^-1 exists, A is invertible.

If A is a matrix of the form

[ a b ]
[ c d ]

then, provided ad − bc ≠ 0,

A^-1 = 1/(ad − bc) · [ d −b ]
                     [ −c  a ]

If A is invertible, then A^-1 is itself invertible, and (A^-1)^-1 = A.
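The 2 × 2 formula can be checked with a small Python sketch. The function name and the guard for ad − bc = 0 are our additions:

```python
def inverse_2x2(A):
    """Invert [[a, b], [c, d]] via A^-1 = 1/(ad - bc) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("not invertible: ad - bc = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2([[4, 7], [2, 6]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```

Multiplying the result back against [[4, 7], [2, 6]] returns the identity matrix, confirming the formula.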

If A is invertible and c is a nonzero scalar, then cA is invertible and (cA)^-1 = (1/c) A^-1.

If A and B are invertible and have the same size, then AB is invertible and
(AB)^-1 = B^-1 A^-1.

If A is invertible, then so is A^T, and (A^T)^-1 = (A^-1)^T.

If A is invertible, then A^n is invertible for all non-negative integers n, and (A^n)^-1 = (A^-1)^n.

If A is invertible, we can define A^-n as (A^-1)^n = (A^n)^-1. Therefore, all properties of matrix
powers hold for negative powers, provided that the matrix is invertible.

Elementary matrices are those that can be obtained by performing a single elementary row

operation on an identity matrix.

Performing a row operation can then be expressed by left multiplying an elementary matrix to the

original matrix, provided it is of correct size.

All elementary matrices are invertible, and the inverse of an elementary matrix is itself an
elementary matrix, corresponding to the reverse of the original row operation.

If A is a square matrix, and a series of elementary row operations can reduce it to I, the same
series of row operations changes I into A^-1.

Using this, we can compute the inverse using Gaussian elimination. This is done by reducing

[ A | I ]  to  [ I | A^-1 ].
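The [A | I] reduction can be sketched as a short Python function. This is a minimal version with simplified pivot selection; the function name and example matrix are ours:

```python
def inverse(A):
    """Compute A^-1 by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    # Build [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: swap up a row with a nonzero entry in this column.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the leading entry is 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate the column everywhere else (Gauss-Jordan style).
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half is now A^-1.
    return [row[n:] for row in aug]

print(inverse([[2.0, 0.0], [0.0, 4.0]]))  # [[0.5, 0.0], [0.0, 0.25]]
```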


If AB = I or BA = I (with A and B square), then A is invertible and B = A^-1.

20. LU Factorization

Just as we can factor natural numbers, such as 10 = 2 × 5, we can factor matrices as a product of
other matrices. This is called a matrix factorization.

If A is a square matrix that can be reduced to row echelon form without interchanging any rows,
then A has an LU factorization.

An LU factorization is a matrix factorization of the form:

A = LU

where L is a unit lower triangular matrix and U is upper triangular.

If U is the upper triangular matrix obtained when A is put into row echelon form, then L is the
product of the inverses of the elementary matrices representing each row operation.

L can also be obtained from the multipliers of the row operations of the form R_i − kR_j: each
multiplier k becomes the (i, j) entry of the L matrix.

The row operations must be done in top to bottom, left to right order for this process to apply.
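The multiplier bookkeeping can be sketched in Python. This is a minimal version that assumes no row interchanges are needed; the function name and example are ours:

```python
def lu(A):
    """LU factorization without row interchanges: A = L U, with L unit lower
    triangular holding the multipliers and U the row echelon result."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        for r in range(col + 1, n):
            k = U[r][col] / U[col][col]   # multiplier in R_r <- R_r - k R_col
            L[r][col] = k                 # k becomes the (r, col) entry of L
            U[r] = [a - k * b for a, b in zip(U[r], U[col])]
    return L, U

L, U = lu([[2.0, 1.0], [6.0, 8.0]])
print(L)  # [[1.0, 0.0], [3.0, 1.0]]
print(U)  # [[2.0, 1.0], [0.0, 5.0]]
```

Multiplying L by U reproduces the original matrix, as the factorization requires.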

A matrix P obtained by permuting the rows of the identity matrix (for example, by interchanging
two rows) is called a permutation matrix.

The transpose of a permutation matrix is its inverse.

If A is a square matrix, then it has a factorization A = P^T LU, where P is a permutation matrix,
and L and U are as defined above.

Every square matrix has a P^T LU factorization.

21. Subspaces, Dimensions and Basis

A subspace of R^n is a collection of vectors that contains the zero vector 0, is closed under
addition, and is closed under scalar multiplication.

Therefore, for any set of vectors in a subspace, all linear combinations of those vectors are in the

same subspace.

For this reason, the span of a set of vectors is itself a subspace of R^n.

The subspace formed by the span of a set of vectors is the subspace spanned by that set.

The row space is the subspace spanned by the rows of a matrix, and the column space, by the

columns.

Matrices that are row equivalent have the same row space.

A basis for a subspace is a set of vectors that spans it, and is linearly independent.

The standard basis is a basis that consists of the standard unit vectors. An example would be
e1, e2, ..., en for the space R^n.


The solution set of an equation of the form Ax = 0 is referred to as the null space of the matrix A,
and is a subspace of R^n, where n is the number of columns of the matrix.

For any system of linear equations of the form Ax = b, there are either no solutions, one unique
solution, or infinitely many solutions.

Two bases for a subspace must have the same number of vectors.

This leads to the denition of dimension, which is the number of vectors in the basis for a

subspace.

The row and column spaces of a matrix must have the same dimension.

We can now redefine rank as the dimension of a matrix's row/column space.

A matrix's transpose has the same rank as the matrix itself.

The nullity of a matrix is defined as the dimension of its null space.

rank(A) + nullity(A) = n, where n is the number of columns.

Using the notion of basis, one can write any vector in a subspace as a linear combination of its
basis vectors. The coefficients of the linear combination form the coordinates of the vector with
respect to the basis.

22. Linear Transformations

A transformation T : R^n → R^m is a linear transformation if

T(u + v) = T(u) + T(v)

and

T(cv) = cT(v).

A matrix transformation of the form T_A(x) = Ax is a linear transformation as well.

All linear transformations can be expressed as matrix transformations, where the jth column of
the standard matrix is the image of the jth standard unit vector.
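Building the standard matrix column by column from the images of the unit vectors can be sketched as follows. The function name and the rotation example are our own:

```python
def standard_matrix(T, n):
    """Standard matrix of a linear transformation T: R^n -> R^m.
    Column j is T(e_j), the image of the jth standard unit vector."""
    cols = []
    for j in range(n):
        e = [0] * n
        e[j] = 1
        cols.append(T(e))
    m = len(cols[0])
    # Assemble the list of columns into rows.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# Rotation of the plane by 90 degrees: (x, y) -> (-y, x).
print(standard_matrix(lambda v: [-v[1], v[0]], 2))  # [[0, -1], [1, 0]]
```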

Composite transformations arise when one transformation is applied after another. Their
standard matrices are related in that the matrix of the composite is the product of the matrices,
in the same order.

Two transformations are inverses of each other if both composites (in either order) yield the
identity transformation.

The matrix of an inverse transformation is the inverse of the original matrix.

23. Determinant of a Matrix

Only square matrices have determinants.

The determinant of a square matrix is:

det A = Σ_{j=1}^{n} a_ij C_ij (expansion along row i)

or

det A = Σ_{i=1}^{n} a_ij C_ij (expansion along column j)

where C_ij is the (i, j) cofactor.

The cofactor is defined as C_ij = (−1)^(i+j) det A_ij, where A_ij is the matrix A with the ith row
and jth column omitted.

The determinant of a 1 × 1 matrix is the value of its single entry.

For square matrices A and B of the same size,

det(AB) = (det A)(det B).

A matrix is invertible if and only if its determinant is nonzero.

For any square matrix A:

det A = det A^T

and

det(A^-1) = 1 / det A (if the matrix is invertible)

The inverse of any invertible matrix is:

A^-1 = (1 / det A) adj A

The adjoint matrix adj A is the transpose of the cofactor matrix.

24. Eigenvalues and Eigenvectors

An eigenvalue is a scalar λ such that:

Ax = λx

for some nonzero vector x, which is called an eigenvector of the matrix A.

All eigenvectors corresponding to an eigenvalue (together with the zero vector) form an eigenspace.

The eigenvalues of a matrix A are the solutions to the characteristic equation:

det(A − λI) = 0

The algebraic multiplicity of an eigenvalue is the number of times it is a root in the

characteristic equation.

The geometric multiplicity is the dimension of its eigenspace.

Since the determinant of a triangular matrix is equal to the product of the diagonal entries, the

eigenvalues of a triangular matrix are the values on its diagonal.
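For a 2 × 2 matrix, det(A − λI) = 0 expands to λ² − (a + d)λ + (ad − bc) = 0, which the quadratic formula solves directly. A small sketch assuming real eigenvalues (names are ours); the triangular example confirms that the eigenvalues are the diagonal entries:

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix from det(A - lambda I) = 0,
    i.e. lambda^2 - (trace)lambda + det = 0, via the quadratic formula."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    r = math.sqrt(disc)
    return [(tr + r) / 2, (tr - r) / 2]

# Triangular example: eigenvalues are the diagonal entries.
print(eigenvalues_2x2([[3.0, 5.0], [0.0, 1.0]]))  # [3.0, 1.0]
```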

If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n for any integer n (negative powers
requiring A to be invertible), with the same corresponding eigenvector.

If a vector x = c1 v1 + c2 v2 + ... can be expressed as a linear combination of eigenvectors of A
(with eigenvalues λ1, λ2, ...), the following is true:

A^k x = c1 λ1^k v1 + c2 λ2^k v2 + ...

The eigenvectors corresponding to distinct eigenvalues are linearly independent.

25. Similarity of Matrices

If an invertible matrix P exists so that P^-1 AP = B, then A is similar to B, written A ∼ B.

This is similarity of matrices.


Similarity is transitive.

The following is true of similar matrices:

Their determinants are the same.

They are either both invertible or not invertible.

Their rank is the same.

Their characteristic polynomial, and, therefore, their eigenvalues, are the same.

A matrix is diagonalizable if it is similar to a diagonal matrix.

From this, an n × n matrix is diagonalizable if and only if it has n linearly independent
eigenvectors; in particular, having n distinct eigenvalues guarantees this.

If a matrix is diagonalizable, its diagonal matrix D has the eigenvalues as its entries, and the
columns of its P matrix are the corresponding eigenvectors, in the same order.

26. Orthogonality

A set of vectors is an orthogonal set if every vector is orthogonal (its dot product is zero) to every

other vector in the set.

An orthogonal set of nonzero vectors is linearly independent.

An orthogonal basis is a basis that is an orthogonal set.

For any vector x in a subspace with orthogonal basis v1, v2, ..., we can write

x = c1 v1 + c2 v2 + ...

where:

c_i = (x · v_i) / (v_i · v_i)

An orthonormal set is an orthogonal set of unit vectors.

An orthonormal basis is defined similarly.

The columns of a square matrix Q form an orthonormal set if and only if

Q^T Q = I

and such a Q is called an orthogonal matrix.

For every orthogonal matrix Q:

Q^-1 = Q^T
‖Qx‖ = ‖x‖
Qx · Qy = x · y
Q^-1 is orthogonal
det Q = ±1
The absolute value of each of its eigenvalues is 1.
Q1 Q2 is orthogonal if Q1 and Q2 are.


A vector is orthogonal to a subspace if it is orthogonal to every vector inside of the subspace

(which is equivalent to being orthogonal to each of its basis vectors).

The set of all vectors orthogonal to a subspace is its orthogonal complement.

For a matrix A:

(row(A))⊥ = null(A)

and

(col(A))⊥ = null(A^T)

These four subspaces are called the fundamental subspaces of A.

The orthogonal projection of a vector onto a subspace is the sum of the projections of the vector

onto each of the orthogonal basis vectors.

The component of the vector orthogonal to the subspace is the difference between the vector
and its projection.

The Gram-Schmidt Process: This is a process that takes a basis for a subspace and produces an
orthogonal one. It is done by taking each basis vector in turn and subtracting from it its
projections onto the previously produced basis vectors (keeping only the component
perpendicular to them).
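The process can be sketched in a few lines of Python. This minimal version (names are ours) returns an orthogonal, not yet normalized, basis:

```python
def gram_schmidt(vectors):
    """Turn a basis into an orthogonal basis: from each vector, subtract its
    projections onto the previously produced basis vectors."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    basis = []
    for v in vectors:
        w = v[:]
        for u in basis:
            coeff = dot(v, u) / dot(u, u)          # projection coefficient
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

print(gram_schmidt([[1.0, 1.0], [0.0, 1.0]]))  # [[1.0, 1.0], [-0.5, 0.5]]
```

The second output vector is orthogonal to the first: their dot product is −0.5 + 0.5 = 0.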

This leads to the QR factorization.

The QR factorization can be done with any matrix with linearly independent columns.

It is factored as A = QR, where Q is a matrix with orthonormal columns and R is an invertible
upper triangular matrix.

A matrix is orthogonally diagonalizable if the diagonalizing matrix is orthogonal.

Orthogonally diagonalizable matrices are symmetric, and vice versa.

Distinct eigenvalues of a symmetric matrix have orthogonal eigenvectors.

Based on the Spectral Theorem, any real symmetric matrix A can be written as A = QDQ^T, and
since the diagonal entries of D are the eigenvalues λi of A (with corresponding orthonormal
eigenvectors qi), the matrix A can be written as:

A = λ1 q1 q1^T + λ2 q2 q2^T + ...

This is called the spectral decomposition of A.

27. Vector Spaces

A vector space is a set where addition and scalar multiplication are defined in some fashion, and
the following axioms hold for all vectors u, v, and w and for all scalars c and d in the set:

(1) It is closed under addition.

(2) It is commutative for addition.

(3) It is associative for addition.

(4) A zero vector exists, and is the additive identity.

(5) For each vector u, there is an opposite −u such that their sum is 0.

(6) It is closed under scalar multiplication.

(7) It is distributive such that the scalar can be distributed to the sum of vectors.


(8) It is distributive such that the vector can be distributed to the sum of scalars.

(9) c(du) = (cd)u

(10) 1u u

The operations of addition and scalar multiplication can be defined in any way so that the axioms
are fulfilled, and the vectors in the set may be anything so that the axioms are fulfilled as well.

For any vector u and scalar c in a vector space, the following are true as well:

0u = 0
c0 = 0
(−1)u = −u
If cu = 0, either c = 0 or u = 0.

A subset is called a subspace of a vector space if it is also a vector space with the same scalars
and the same definitions of addition and scalar multiplication.

Following from this, if V is a vector space, and W is a subset (and is not empty) of V , W is a

subspace of V if and only if W is closed under addition and scalar multiplication.

A subspace has the same zero vector as its containing space(s).

The span of a set of vectors in a vector space is the smallest subspace containing those vectors.

Linear combinations, linear independence, and bases are defined in the same way as for vectors
in R^n.

For a basis of a vector space: any set with more vectors than the basis is linearly dependent, and

with fewer, it cannot span the vector space.

Vector spaces are finite-dimensional if they have a basis with finitely many vectors; otherwise,
they are infinite-dimensional.

28. Change of Basis

The change-of-basis matrix from B to C, denoted P_{C←B}, is the matrix whose columns are the
coordinate vectors of the original basis vectors with respect to the new basis.

The matrix P_{C←B} has the following properties:

P_{C←B} [x]_B = [x]_C
P_{C←B} is the unique matrix with the above property.
P_{C←B} is invertible, and its inverse is P_{B←C}.

By expressing the basis vectors of two bases B and C in terms of a third basis (call these
expressions B and C), row reduction of [C | B] yields [I | P_{C←B}].

29. Kernel and Range

The kernel of a linear transformation is the set of all vectors that the transformation maps to the
zero vector.

The range is the set of all vectors that are images of vectors under the transformation.


The rank of a linear transformation is the dimension of its range.

The nullity of a linear transformation is the dimension of its kernel.

The sum of the rank and the nullity is equal to the dimension of the domain (the original vector
space) of the transformation.

A transformation is called one-to-one if distinct vectors have distinct images. If the range of a
linear transformation is equal to the codomain, it is onto.

A linear transformation is one-to-one if and only if its kernel is {0}.

If the dimension of both spaces is the same for a linear transformation, either it is one-to-one and

onto, or neither.

A linear transformation is invertible if and only if it is one-to-one and onto.

A linear transformation that is both one-to-one and onto is an isomorphism.

30. Inner Products

An inner product is an operation on a vector space that assigns a real number to every pair of

vectors so that the following are true for any vectors u, v, and w, and for any scalar c:

(1) ⟨u, v⟩ = ⟨v, u⟩
(2) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(3) ⟨cu, v⟩ = c⟨u, v⟩
(4) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 only when u = 0

The length or norm of a vector v is ‖v‖ = √⟨v, v⟩.

The distance between two vectors is the norm of their difference.

Two vectors are orthogonal if their inner product is zero.

31. Norms

A norm on a vector space assigns to each vector a real number ‖v‖ so that the following are true:

(1) The norm of a vector is always greater than or equal to zero, and is zero only when the
vector is the zero vector.
(2) ‖cv‖ = |c| ‖v‖
(3) The norm of the sum of two vectors is less than or equal to the sum of the norms of those
two vectors (the triangle inequality).

The sum norm, also called the 1-norm, is the sum of the absolute values of the vector's components.

The ∞-norm or max norm is the greatest of the absolute values of the components of a vector.

A distance function is defined as:

distance(u, v) = ‖u − v‖

A matrix norm associates each square matrix A with a real number so that the following is true,
in addition to the vector norm conditions (with matrices instead):

‖AB‖ ≤ ‖A‖ ‖B‖


32. Least Squares Approximation

The best approximation to a vector in a finite-dimensional subspace is its projection onto the
subspace.

If A is an m × n matrix and b is in R^m, a least squares solution of Ax = b is a vector x̄ in R^n
such that:

‖b − A x̄‖ ≤ ‖b − A x‖

for all x in R^n.

The least squares solution x̄ is unique if A^T A is invertible. If so, the following is true:

x̄ = (A^T A)^-1 A^T b
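The normal-equations formula can be illustrated by fitting a line through three points. This is a hedged sketch for a two-column A, using the 2 × 2 inverse formula; all names and the example data are ours:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def least_squares(A, b):
    """Least squares solution of Ax = b via the normal equations
    x = (A^T A)^-1 A^T b, for a 2-column A (2x2 inverse formula)."""
    At = transpose(A)
    AtA = mat_mul(At, A)                      # 2 x 2
    Atb = mat_mul(At, [[v] for v in b])       # 2 x 1
    (p, q), (r, s) = AtA
    det = p * s - q * r                       # assumes A^T A is invertible
    inv = [[s / det, -q / det], [-r / det, p / det]]
    return [row[0] for row in mat_mul(inv, Atb)]

# Fit y = c0 + c1 t to the points (0, 1), (1, 3), (2, 5): exactly y = 1 + 2t.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
print(least_squares(A, [1.0, 3.0, 5.0]))  # approximately [1.0, 2.0]
```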

The least squares solution of a QR-factorized matrix is:

x̄ = R^-1 Q^T b

The pseudoinverse of a matrix A with linearly independent columns is:

A^+ = (A^T A)^-1 A^T

The pseudoinverse has the following properties:

A A^+ A = A
A^+ A A^+ = A^+
A A^+ and A^+ A are symmetric.

33. Singular Value Decomposition

The singular values of a matrix A are the square roots of the eigenvalues of A^T A. They are
denoted by σ1 ≥ σ2 ≥ ... ≥ σn.

The singular value decomposition is a way of factoring an m × n matrix A into the form

A = U Σ V^T

where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and Σ is an m × n
matrix of the block form

[ D O ]
[ O O ]

where D is a diagonal matrix whose entries are the nonzero singular values of A.

V is constructed from the eigenvectors of A^T A, so that each column is the eigenvector
corresponding to that singular value.

U is constructed from the eigenvectors of AA^T in the same way.

The matrix A can then be written as:

A = σ1 u1 v1^T + σ2 u2 v2^T + ...

(similar to the Spectral Theorem)


34. Fundamental Theorem of Invertible Matrices

Each of the following statements implies the others:

A is invertible, where A is an n × n matrix.
Ax = b has a unique solution for every b in R^n.
Ax = 0 has only the trivial solution.
The reduced row echelon form of A is I.
A is a product of elementary matrices.
rank(A) = n.
nullity(A) = 0.
The column vectors of A are linearly independent.
The column vectors of A span R^n.
The column vectors of A form a basis for R^n.
The row vectors of A are linearly independent.
The row vectors of A span R^n.
The row vectors of A form a basis for R^n.
det A ≠ 0.
0 is not an eigenvalue of A.
T is invertible, where T : V → W is the linear transformation whose matrix with respect to
bases B and C is A.
T is one-to-one.
T is onto.
ker(T) = {0}
range(T) = W
0 is not a singular value of A.
