
Financial Engineering and Risk Management

Review of vectors

Martin Haugh and Garud Iyengar


Columbia University
Industrial Engineering and Operations Research
Real numbers and vectors
We will denote the set of real numbers by R
Vectors are finite collections of real numbers
Vectors come in two varieties

Row vectors: v = [ v1 v2 . . . vn ]

                  [ w1 ]
Column vectors: w = [ w2 ]
                  [ .. ]
                  [ wn ]

By default, vectors are column vectors

The set of all vectors with n components is denoted by R^n

Linear independence
A vector w is linearly dependent on v1 , v2 if

w = α1 v1 + α2 v2 for some α1 , α2 ∈ R
Example: (2, 6, 4)^T = 2 (1, 1, 0)^T + 4 (0, 1, 1)^T
Other names: linear combination, linear span

A set V = {v1 , . . . , vm } is linearly independent if no vi is linearly
dependent on the others, {vj : j ≠ i}

Basis
Every w ∈ R^n is a linear combination of the linearly independent set

B = { e1 , e2 , . . . , en }, where ei has a 1 in component i and 0 elsewhere:

e1 = (1, 0, . . . , 0)^T,  e2 = (0, 1, . . . , 0)^T,  . . . ,  en = (0, 0, . . . , 1)^T

w = w1 e1 + w2 e2 + . . . + wn en

Basis ≡ any linearly independent set that spans the entire space
Any basis for R^n has exactly n elements

Norms
A function ρ(v) of a vector v is called a norm if
ρ(v) ≥ 0 and ρ(v) = 0 implies v = 0
ρ(αv) = |α| ρ(v) for all α ∈ R
ρ(v1 + v2 ) ≤ ρ(v1 ) + ρ(v2 ) (triangle inequality)
ρ generalizes the notion of “length”

Examples:

ℓ2 norm: ‖x‖2 = sqrt( Σ_{i=1}^n |xi|² ) ... usual length
ℓ1 norm: ‖x‖1 = Σ_{i=1}^n |xi|
ℓ∞ norm: ‖x‖∞ = max_{1≤i≤n} |xi|
ℓp norm, 1 ≤ p < ∞: ‖x‖p = ( Σ_{i=1}^n |xi|^p )^(1/p)
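These definitions are easy to check numerically. A minimal NumPy sketch, using an arbitrary example vector x = (3, −4):

```python
import numpy as np

x = np.array([3.0, -4.0])   # arbitrary example vector

# l2 (Euclidean) norm: square root of the sum of squared components
l2 = np.sqrt(np.sum(np.abs(x) ** 2))
# l1 norm: sum of absolute values
l1 = np.sum(np.abs(x))
# l-infinity norm: largest absolute component
linf = np.max(np.abs(x))
# general lp norm, here with p = 3
lp = np.sum(np.abs(x) ** 3) ** (1 / 3)

# NumPy's built-in vector norms agree with the definitions above
assert np.isclose(l2, np.linalg.norm(x, 2))      # 5.0
assert np.isclose(l1, np.linalg.norm(x, 1))      # 7.0
assert np.isclose(linf, np.linalg.norm(x, np.inf))
assert np.isclose(lp, np.linalg.norm(x, 3))
```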

Inner product
The inner product (or dot product) of two vectors v, w ∈ R^n is defined as

v · w = Σ_{i=1}^n vi wi

The ℓ2 norm satisfies ‖v‖2 = sqrt(v · v)
The angle θ between two vectors v and w is given by

cos(θ) = (v · w) / (‖v‖2 ‖w‖2)

Will show later: v · w = v^T w = product of v transpose and w
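A quick numerical illustration of these identities, with two arbitrary vectors:

```python
import numpy as np

v = np.array([1.0, 0.0])   # arbitrary example vectors
w = np.array([1.0, 1.0])

# inner product: v . w = sum_i v_i w_i
dot = float(v @ w)
assert dot == sum(vi * wi for vi, wi in zip(v, w))

# l2 norm via the inner product: ||v||_2 = sqrt(v . v)
assert np.isclose(np.sqrt(v @ v), np.linalg.norm(v, 2))

# angle between v and w: cos(theta) = v . w / (||v||_2 ||w||_2)
cos_theta = dot / (np.linalg.norm(v, 2) * np.linalg.norm(w, 2))
theta = np.arccos(cos_theta)   # 45 degrees between these two vectors
assert np.isclose(theta, np.pi / 4)
```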

Financial Engineering and Risk Management
Review of matrices

Matrices
Matrices are rectangular arrays of real numbers
Examples:

A = [ 2 3 7 ]
    [ 1 6 5 ]   : 2 × 3 matrix

B = [ 2 3 7 ]   : 1 × 3 matrix ≡ row vector

A = [ a11 a12 . . . a1n ]
    [ a21 a22 . . . a2n ]
    [ ..   ..   ..   .. ]
    [ am1 am2 . . . amn ]   : m × n matrix ... R^(m×n)

I = [ 1 0 . . . 0 ]
    [ 0 1 . . . 0 ]
    [ ..  ..  ..  ]
    [ 0 0 . . . 1 ]   ... n × n identity matrix

Vectors are clearly also matrices

Matrix Operations: Transpose
Transpose: for A ∈ R^(m×d), the transpose A^T ∈ R^(d×m) has entries (A^T)ij = Aji

     [ a11 a12 . . . a1d ]          [ a11 a21 . . . am1 ]
A =  [ a21 a22 . . . a2d ]    A^T = [ a12 a22 . . . am2 ]
     [ ..   ..   ..   .. ]          [ ..   ..   ..   .. ]
     [ am1 am2 . . . amd ]          [ a1d a2d . . . amd ]   ∈ R^(d×m)

Transpose of a row vector is a column vector

Example:

A = [ 2 3 7 ]                           [ 2 1 ]
    [ 1 6 5 ]  : 2 × 3 matrix ... A^T = [ 3 6 ]  : 3 × 2 matrix
                                        [ 7 5 ]

    [ 2 ]
v = [ 6 ]  : column vector ... v^T = [ 2 6 4 ]  : row vector
    [ 4 ]

Matrix Operations: Multiplication
Multiplication: A ∈ R^(m×d), B ∈ R^(d×p), then C = AB ∈ R^(m×p)

cij = ai1 b1j + ai2 b2j + . . . + aid bdj = Σ_{k=1}^d aik bkj   (row i of A times column j of B)

Row vector v ∈ R^(1×d) times column vector w ∈ R^(d×1) is a scalar.

Identity times any matrix: A In = Im A = A for A ∈ R^(m×n)

Examples:

[ 2 3 7 ] [ 2 ]   [ 2(2) + 3(6) + 7(4) ]   [ 50 ]
[ 1 6 5 ] [ 6 ] = [ 1(2) + 6(6) + 5(4) ] = [ 58 ]
          [ 4 ]

ℓ2 norm: for v = (1, −2)^T, ‖v‖2 = sqrt(1² + (−2)²) = sqrt(v^T v) = sqrt(5)

inner product: v · w = v^T w
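The 2 × 3 example above can be reproduced in NumPy, which also confirms the shapes involved:

```python
import numpy as np

A = np.array([[2, 3, 7],
              [1, 6, 5]])          # 2 x 3 matrix
v = np.array([2, 6, 4])            # length-3 vector (a 3 x 1 column)

# entry i of A @ v is the inner product of row i of A with v
y = A @ v
assert list(y) == [50, 58]         # [2*2+3*6+7*4, 1*2+6*6+5*4]

# A is 2 x 3, so A^T is 3 x 2
assert A.T.shape == (3, 2)

# inner product as a matrix product: v . w = v^T w (a scalar)
w = np.array([1, -2, 0])
assert v @ w == np.dot(v, w)
```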

Linear functions
A function f : R^d → R^m is linear if

f(αx + βy) = αf(x) + βf(y) for all α, β ∈ R, x, y ∈ R^d

A function f is linear if and only if f(x) = Ax for some matrix A ∈ R^(m×d)

Examples

                                [ x1 ]
f : R^3 → R:   f(x) = [ 2 3 4 ] [ x2 ] = 2x1 + 3x2 + 4x3
                                [ x3 ]

                      [ 2 3 4 ] [ x1 ]   [ 2x1 + 3x2 + 4x3 ]
f : R^3 → R^2: f(x) = [ 1 0 2 ] [ x2 ] = [ x1 + 2x3        ]
                                [ x3 ]

Linear constraints define sets of vectors that satisfy linear relationships


Linear equality: {x : Ax = b} ... line, plane, etc.
Linear inequality: {x : Ax ≤ b} ... half-space
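A short NumPy check of the second example above and of the linearity property (the test vectors are arbitrary):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [1, 0, 2]])

def f(x):
    """Linear map f: R^3 -> R^2 given by f(x) = Ax."""
    return A @ x

x = np.array([1.0, 2.0, 3.0])      # arbitrary test vectors
y = np.array([0.0, 1.0, -1.0])

# matches the component formula (2x1 + 3x2 + 4x3, x1 + 2x3)
assert np.allclose(f(x), [20.0, 7.0])

# linearity: f(a x + b y) = a f(x) + b f(y)
a, b = 2.5, -1.0
assert np.allclose(f(a * x + b * y), a * f(x) + b * f(y))
```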

Rank of a matrix
column rank of A ∈ R^(m×d) = number of linearly independent columns
range(A) = {y : y = Ax for some x}
column rank of A = size of basis for range(A)
column rank of A = m ⇒ range(A) = R^m

row rank of A = number of linearly independent rows

Fact: row rank = column rank ≤ min{m, d}

Example:

A = [ 1 2 3 ]
    [ 2 4 6 ] ,  rank = 1,  range(A) = { λ (1, 2)^T : λ ∈ R }

A ∈ R^(n×n) and rank(A) = n ⇒ A invertible, i.e. there exists A^(−1) ∈ R^(n×n) with

A^(−1) A = A A^(−1) = I
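NumPy can confirm the rank example and the invertibility fact (the 2 × 2 matrix below is an arbitrary full-rank example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])
# the second row is twice the first, so there is
# only one linearly independent row (and column)
assert np.linalg.matrix_rank(A) == 1

# a square matrix of full rank is invertible
B = np.array([[2.0, 3.0],
              [1.0, 6.0]])          # arbitrary, rank 2
assert np.linalg.matrix_rank(B) == 2
B_inv = np.linalg.inv(B)
# B^{-1} B = B B^{-1} = I
assert np.allclose(B_inv @ B, np.eye(2))
assert np.allclose(B @ B_inv, np.eye(2))
```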

Financial Engineering and Risk Management
Review of linear optimization

Hedging problem
d assets
Prices at time t = 0: p ∈ R^d

Market in m possible states at time t = 1


Price of asset j in state i = Sij

     [ S1j ]                             [ S11 S12 . . . S1d ]
Sj = [ S2j ]    S = [ S1 S2 . . . Sd ] = [ S21 S22 . . . S2d ]
     [ ..  ]                             [ ..   ..   ..   .. ]
     [ Smj ]                             [ Sm1 Sm2 . . . Smd ]   ∈ R^(m×d)

Hedge an obligation X ∈ R^m
Have to pay Xi if state i occurs
Buy/short sell θ = (θ1 , . . . , θd)^T shares to cover the obligation

Hedging problem (contd)
Position θ ∈ R^d purchased at time t = 0
θj = number of shares of asset j purchased, j = 1, . . . , d
Cost of the position θ: Σ_{j=1}^d pj θj = p^T θ

Payoff from liquidating the position at time t = 1

payoff yi in state i: yi = Σ_{j=1}^d Sij θj
Stacking payoffs for all states: y = Sθ
Viewing the payoff vector y: y ∈ range(S)

                       [ θ1 ]
y = [ S1 S2 . . . Sd ] [ θ2 ] = Σ_{j=1}^d θj Sj
                       [ .. ]
                       [ θd ]

Payoff y hedges X if y ≥ X.

Hedging problem (contd)
Optimization problem:

min_θ  Σ_{j=1}^d pj θj                                  (≡ p^T θ)
subject to  Σ_{j=1}^d Sij θj ≥ Xi,  i = 1, . . . , m    (≡ Sθ ≥ X)

Features of this optimization problem


Linear objective function: p^T θ
Linear inequality constraints: Sθ ≥ X

Example of a linear program


Linear objective function: either a min/max
Linear inequality and equality constraints
max/min_x  c^T x
subject to  Aeq x = beq
            Ain x ≤ bin
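As an illustration, the hedging LP min p^T θ subject to Sθ ≥ X can be handed to an off-the-shelf solver. A sketch using SciPy's `linprog`; the two-asset, two-state market data below are made up for the example:

```python
import numpy as np
from scipy.optimize import linprog

p = np.array([1.0, 1.0])            # hypothetical asset prices at t = 0
S = np.array([[1.0, 2.0],           # S[i, j] = price of asset j in state i
              [1.0, 0.5]])
X = np.array([2.0, 2.0])            # obligation to cover in each state

# min p^T theta  s.t.  S theta >= X, rewritten as -S theta <= -X;
# theta may be negative (short selling), so bounds are unrestricted
res = linprog(c=p, A_ub=-S, b_ub=-X, bounds=[(None, None)] * 2)
assert res.success
theta = res.x

assert np.all(S @ theta >= X - 1e-8)   # the payoff hedges the obligation
cost = float(p @ theta)                # minimal cost of the hedge
```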

Linear programming duality
Linear program
    P = min_x  c^T x
        subject to  Ax ≥ b
Dual linear program
    D = max_u  b^T u
        subject to  A^T u = c
                    u ≥ 0

Theorem.
Weak Duality: P ≥ D
Bound: if x is feasible for P and u is feasible for D, then c^T x ≥ P ≥ D ≥ b^T u
Strong Duality: Suppose P or D is finite. Then P = D.
Dual of the dual is the primal (original) problem

More duality results
Here is another primal-dual pair

min_x  c^T x                 max_u  b^T u
subject to  Ax = b     =     subject to  A^T u = c

General idea for constructing duals

P = min{ c^T x : Ax ≥ b }
  ≥ min{ c^T x − u^T (Ax − b) : Ax ≥ b }     for all u ≥ 0
  ≥ b^T u + min{ (c − A^T u)^T x : x ∈ R^n }
  = { b^T u   if A^T u = c
    { −∞      otherwise

Taking the best such bound: P ≥ max{ b^T u : A^T u = c, u ≥ 0 }

Lagrangian relaxation: dualize constraints and relax them!
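The weak-duality bound can be checked numerically on a small instance. The sketch below uses a hypothetical two-variable LP min{p^T θ : Sθ ≥ X}: a nonnegative u with S^T u = p is a dual certificate, and matching objective values certify optimality:

```python
import numpy as np

p = np.array([1.0, 1.0])           # hypothetical LP data: min p^T theta
S = np.array([[1.0, 2.0],          # subject to S theta >= X
              [1.0, 0.5]])
X = np.array([2.0, 2.0])

# primal guess: make every constraint bind, S theta = X
theta = np.linalg.solve(S, X)
assert np.all(S @ theta >= X - 1e-9)       # primal feasible

# dual candidate: solve S^T u = p and check u >= 0
u = np.linalg.solve(S.T, p)
assert np.all(u >= 0)                      # dual feasible

# weak duality: p^T theta >= X^T u for any feasible pair;
# equality certifies that both are optimal (strong duality)
assert np.isclose(p @ theta, X @ u)
```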

Financial Engineering and Risk Management
Review of nonlinear optimization

Unconstrained nonlinear optimization
Optimization problem
min_{x∈R^n}  f(x)
Categorization of minimum points
x* is a global minimum if f(y) ≥ f(x*) for all y
x*_loc is a local minimum if f(y) ≥ f(x*_loc) for all y such that ‖y − x*_loc‖ ≤ r, for some r > 0

Sufficient condition for local min

gradient ∇f(x) = ( ∂f/∂x1 , . . . , ∂f/∂xn )^T = 0 : local stationarity

Hessian ∇²f(x), the n × n matrix with entries [∇²f(x)]ij = ∂²f/∂xi∂xj, positive definite

Gradient condition is sufficient if the function f (x) is convex.

Unconstrained nonlinear optimization
Optimization problem

min_{x∈R^2}  x1² + 3 x1 x2 + x2³

Gradient

∇f(x) = [ 2x1 + 3x2  ]
        [ 3x1 + 3x2² ]  = 0   ⇒   x = 0  or  x = (−9/4, 3/2)^T

Hessian at x: H = [ 2  3   ]
                  [ 3  6x2 ]

x = 0:             H = [ 2 3 ]
                       [ 3 0 ] .  Indefinite. Not a local minimum.

x = (−9/4, 3/2)^T: H = [ 2 3 ]
                       [ 3 9 ] .  Positive definite. Local minimum.
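The two stationary points can be classified numerically via the eigenvalues of the Hessian; a small NumPy sketch of the example above:

```python
import numpy as np

# f(x) = x1^2 + 3 x1 x2 + x2^3
def grad(x):
    return np.array([2 * x[0] + 3 * x[1], 3 * x[0] + 3 * x[1] ** 2])

def hess(x):
    return np.array([[2.0, 3.0],
                     [3.0, 6 * x[1]]])

x0 = np.array([0.0, 0.0])
x1 = np.array([-9 / 4, 3 / 2])

# both points are stationary: the gradient vanishes
assert np.allclose(grad(x0), 0) and np.allclose(grad(x1), 0)

# eigenvalues of the Hessian classify the stationary point
eig0 = np.linalg.eigvalsh(hess(x0))
assert eig0.min() < 0 < eig0.max()   # indefinite: a saddle, not a minimum

eig1 = np.linalg.eigvalsh(hess(x1))
assert eig1.min() > 0                # positive definite: a local minimum
```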

Lagrangian method
Constrained optimization problem
max_{x∈R^2}  2 ln(1 + x1) + 4 ln(1 + x2)
s.t.  x1 + x2 = 12

Convex problem. But constraints make the problem hard to solve.


Form a Lagrangian function

L(x, v) = 2 ln(1 + x1 ) + 4 ln(1 + x2 ) − v(x1 + x2 − 12)

Compute the stationary points of the Lagrangian as a function of v

∇L(x, v) = ( 2/(1 + x1) − v ,  4/(1 + x2) − v ) = 0   ⇒   x1 = 2/v − 1,  x2 = 4/v − 1

Substituting in the constraint x1 + x2 = 12, we get

6/v − 2 = 12   ⇒   6/v = 14   ⇒   v = 3/7   ⇒   x = ( 11/3 , 25/3 )^T
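The solution can be verified numerically; a small sketch checking the constraint and the stationarity conditions:

```python
import numpy as np

# max 2 ln(1 + x1) + 4 ln(1 + x2)  s.t.  x1 + x2 = 12
# stationarity of L gives x1 = 2/v - 1 and x2 = 4/v - 1,
# and the constraint pins down the multiplier v = 3/7
v = 3 / 7
x1, x2 = 2 / v - 1, 4 / v - 1

assert np.isclose(x1 + x2, 12)                 # constraint holds
assert np.isclose(x1, 11 / 3) and np.isclose(x2, 25 / 3)

# at the solution, each component of the objective's gradient
# equals the multiplier: 2/(1 + x1) = 4/(1 + x2) = v
assert np.isclose(2 / (1 + x1), v)
assert np.isclose(4 / (1 + x2), v)
```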

Portfolio Selection
Optimization problem
max_x  µ^T x − λ x^T V x
s.t.  1^T x = 1
Constraints make the problem hard!

Lagrangian function
L(x, v) = µ^T x − λ x^T V x − v (1^T x − 1)

Solve for the maximum value with no constraints

∇x L(x, v) = µ − 2λ V x − v 1 = 0   ⇒   x = (1 / (2λ)) · V^(−1) (µ − v 1)

Solve for v from the constraint

1^T x = 1   ⇒   1^T V^(−1) (µ − v 1) = 2λ   ⇒   v = (1^T V^(−1) µ − 2λ) / (1^T V^(−1) 1)
Substitute back in the expression for x
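Substituting back gives a closed-form portfolio, which the sketch below evaluates on made-up inputs (the mean vector µ, covariance V, and risk aversion λ are hypothetical):

```python
import numpy as np

mu = np.array([0.10, 0.05])         # hypothetical mean returns
V = np.array([[0.04, 0.01],         # hypothetical covariance matrix
              [0.01, 0.02]])
lam = 2.0                           # risk-aversion parameter lambda
one = np.ones(2)

V_inv = np.linalg.inv(V)
# multiplier v from the budget constraint 1^T x = 1
v = (one @ V_inv @ mu - 2 * lam) / (one @ V_inv @ one)
# optimal portfolio x = (1 / (2 lambda)) V^{-1} (mu - v 1)
x = V_inv @ (mu - v * one) / (2 * lam)

assert np.isclose(one @ x, 1.0)     # weights sum to one
# stationarity of the Lagrangian: mu - 2 lam V x - v 1 = 0
assert np.allclose(mu - 2 * lam * (V @ x) - v * one, 0)
```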
