1 Vector spaces
  Operations on vectors
  Vector subspaces
2 Basis of a vector space
3 Markov chains

Rn space
Rn = {(x1 , x2 , . . . , xn ) : x1 , x2 , . . . , xn ∈ R} .
Each element in Rn is called a vector of dimension n.
Vector operations
Let u = (x1 , x2 , . . . , xn ) and v = (y1 , y2 , . . . , yn ) be two vectors in Rn and let α be a real number. Then
u + v = (x1 + y1 , x2 + y2 , . . . , xn + yn ) (Addition)
αu = (αx1 , αx2 , . . . , αxn ) (Scalar multiplication)
Rn is a vector space
The addition and the scalar multiplication have the following properties:
u + v = v + u
(u + v) + w = u + (v + w)
u + 0 = 0 + u = u
u + (−u) = 0
α(u + v) = αu + αv
(α + β)u = αu + βu
(αβ)u = α(βu)
1 · u = u
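The componentwise operations, and a spot-check of a few of the axioms above, can be sketched with NumPy (the vectors and scalars here are illustrative values, not from the text; a numeric check is not a proof of the axioms):

```python
import numpy as np

# Two vectors in R^3 and two scalars (illustrative values).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
alpha, beta = 2.0, -1.0

# Componentwise addition and scalar multiplication.
assert np.allclose(u + v, [5.0, 7.0, 9.0])
assert np.allclose(alpha * u, [2.0, 4.0, 6.0])

# Spot-check a few of the vector-space properties.
assert np.allclose(u + v, v + u)                        # commutativity
assert np.allclose(alpha * (u + v), alpha*u + alpha*v)  # distributivity over vectors
assert np.allclose((alpha + beta) * u, alpha*u + beta*u)  # distributivity over scalars
```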
Vector subspaces
Definition
A subspace of the vector space Rn is a subset H of Rn that has three properties:
1 The zero vector O = (0, 0, . . . , 0) is in H.
2 H is closed under vector addition. That is, for each u and v in H, the sum u + v is in H.
3 H is closed under multiplication by scalars. That is, for each u ∈ H and each scalar
α ∈ R, the vector αu ∈ H.
Example: The set {(x, y, 0) : x, y ∈ R} is a vector subspace of R3 .
Remark: The set R2 is a vector space. But R2 is not a vector subspace of R3 because R2 is not a subset of R3 .
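For the set H = {(x, y, 0) : x, y ∈ R} from the example, the three subspace properties can be spot-checked numerically (a minimal sketch with illustrative vectors; checking particular vectors illustrates, but does not prove, closure):

```python
import numpy as np

def in_H(w):
    """Membership test for H = {(x, y, 0) : x, y in R}, a subset of R^3."""
    return bool(np.isclose(w[2], 0.0))

# Two illustrative elements of H.
u = np.array([1.0, 5.0, 0.0])
v = np.array([-2.0, 3.0, 0.0])

assert in_H(np.zeros(3))   # 1. the zero vector is in H
assert in_H(u + v)         # 2. closed under addition (for this pair)
assert in_H(7.0 * u)       # 3. closed under scalar multiplication
```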
Remark: For convenience, from now on, we will write vectors as columns.
First example
Linear combination
We have w = −3u + v, so w is a linear combination of the vectors u and v. However, there are no scalars c, d such that w′ = cu + dv, so w′ cannot be written as a linear combination of u and v. (The vectors u, v, w, w′ were given in a figure on the original slide.)
Definition
Let {u1 , u2 , . . . , uk } be a family of vectors in Rn .
Any vector of the form α1 u1 + α2 u2 + · · · + αk uk , for some α1 , . . . , αk ∈ R, is called a
linear combination of u1 , u2 , . . . , uk .
Denote Span(u1 , . . . , uk ) := {α1 u1 + α2 u2 + · · · + αk uk : α1 , . . . , αk ∈ R}. Then
Span(u1 , . . . , uk ) is a vector subspace of Rn . This space is called the subspace spanned by
u1 , . . . , uk .
Proposition
If v is a linear combination of u1 , . . . , uk then Span(u1 , . . . , uk , v) = Span(u1 , . . . , uk ).
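Deciding whether a vector lies in Span(u1 , . . . , uk ), and finding the coefficients when it does, amounts to solving a linear system. A sketch with NumPy, using illustrative vectors not taken from the text (least squares finds the best candidate coefficients; the residual check then decides membership):

```python
import numpy as np

# Columns of A are the spanning vectors u1 = (1,0,1), u2 = (0,1,1).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def combination_coeffs(A, v, tol=1e-10):
    """Return alphas with A @ alphas = v if v is in the span, else None."""
    alphas, *_ = np.linalg.lstsq(A, v, rcond=None)
    return alphas if np.allclose(A @ alphas, v, atol=tol) else None

v = np.array([2.0, 3.0, 5.0])   # equals 2*u1 + 3*u2, so v is in the span
w = np.array([1.0, 1.0, 0.0])   # inconsistent system, so w is not

assert np.allclose(combination_coeffs(A, v), [2.0, 3.0])
assert combination_coeffs(A, w) is None
```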
Faculty of Economic Mathematics (UEL) Chapter 2: Vector spaces 7 / 19
Basis of a vector space
Exercises
Exercise 1: Let u = (1, 1, 0)ᵀ and v = (−1, 0, 2)ᵀ be vectors in R3 . Determine Span(u, v) and prove that it is a vector subspace of R3 .
Exercise 2: In R3 , given the following vectors:
v1 = (1, 0, 0)ᵀ, v2 = (0, 1, 0)ᵀ, v3 = (0, 0, 1)ᵀ, v4 = (2, 3, 5)ᵀ, O = (0, 0, 0)ᵀ.
Determine a linear combination (if any) of
1 v1 via v2 , v3 , v4 .
2 v2 via v1 , v3 , v4 .
3 v3 via v1 , v2 , v4 .
4 v4 via v1 , v2 , v3 .
5 O via v1 , v2 , v3 .
Linear independence
Proposition
The vectors v1 , v2 , . . . , vm are linearly independent if and only if O has a unique expression as a linear combination of v1 , v2 , . . . , vm , i.e., the vector equation x1 v1 + x2 v2 + · · · + xm vm = O has a unique solution. That unique expression is O = 0 · v1 + 0 · v2 + · · · + 0 · vm .
Proposition
A system of vectors v1 , v2 , . . . , vm in Rn is linearly independent if and only if the rank of the matrix [v1 v2 · · · vm ] equals m, the number of vectors.
Example: The vectors u1 = (1, 1, 1)ᵀ, u2 = (0, 1, 1)ᵀ, and u3 = (0, 0, 1)ᵀ are linearly independent because

                           ⎡1 0 0⎤
    rank [u1 u2 u3] = rank ⎢1 1 0⎥ = 3.
                           ⎣1 1 1⎦
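The rank test is easy to run numerically. A sketch with NumPy, using the vectors u1 = (1, 1, 1)ᵀ, u2 = (0, 1, 1)ᵀ, u3 = (0, 0, 1)ᵀ from the example, plus a dependent family for contrast:

```python
import numpy as np

# The example's vectors, placed as columns of a matrix.
u1 = np.array([1.0, 1.0, 1.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = np.array([0.0, 0.0, 1.0])
A = np.column_stack([u1, u2, u3])

# rank equals the number of vectors  =>  linearly independent
assert np.linalg.matrix_rank(A) == 3

# Replacing u3 by u1 + u2 makes the family dependent: rank drops below 3.
B = np.column_stack([u1, u2, u1 + u2])
assert np.linalg.matrix_rank(B) == 2
```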
Basis
Definition
In a vector space V, a system of vectors v1 , . . . , vn is a basis of V if V = Span(v1 , . . . , vn ) and v1 , . . . , vn are linearly independent.
Example: The family of vectors e1 = (1, 0, 0)ᵀ, e2 = (0, 1, 0)ᵀ, and e3 = (0, 0, 1)ᵀ is a basis of R3 .
Definition
Let V be a vector space and suppose that V has a basis v1 , . . . , vn . Then the dimension of V is defined as dim V = n.
Example: dim R3 = 3 and, in general, dim Rn = n.
Coordinates
Let V be a vector space with a basis {v1 , . . . , vn }. Then V = Span(v1 , . . . , vn ). Therefore, for every v ∈ V, there are unique a1 , a2 , . . . , an ∈ R such that
v = a1 v1 + a2 v2 + · · · + an vn .
Definition
The n-tuple (a1 , a2 , . . . , an ) is called the coordinates of v with respect to the basis {v1 , . . . , vn }.
Notation: If we denote the basis {v1 , . . . , vn } by b, then the coordinates of v with respect to b are denoted by [v]b . In other words,
[v]b = (a1 , a2 , . . . , an ).
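Finding the coordinates of v with respect to a basis b means solving the linear system whose coefficient matrix has the basis vectors as columns. A sketch with NumPy, using an illustrative basis of R3 (not the standard e1, e2, e3):

```python
import numpy as np

# Basis b = {v1, v2, v3} of R^3, as columns (illustrative vectors).
b = np.column_stack([[1.0, 1.0, 1.0],   # v1
                     [0.0, 1.0, 1.0],   # v2
                     [0.0, 0.0, 1.0]])  # v3
v = np.array([2.0, 5.0, 9.0])

# The coordinates a satisfy b @ a = v, so solve the linear system.
a = np.linalg.solve(b, v)

assert np.allclose(a, [2.0, 3.0, 4.0])  # v = 2*v1 + 3*v2 + 4*v3
assert np.allclose(b @ a, v)
```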
Markov chains
Markov chains are used as mathematical models of a wide variety of situations in biology, business, chemistry, engineering, physics, and elsewhere.
Definition
A vector with non-negative entries that add up to 1 is called a probability vector. A stochastic matrix is a square matrix whose columns are probability vectors. A Markov chain is a sequence of probability vectors x0 , x1 , x2 , . . . together with a stochastic matrix P, such that
x1 = Px0 , x2 = Px1 , x3 = Px2 , . . .
or, equivalently,
xk+1 = Pxk for k = 0, 1, 2, . . .
xk is often called a state vector.
Reference: p. 255 of the 2nd textbook
Example
Suppose that each year 5% of the city population moves to the suburbs, and 3% of the suburban population moves to the city. Since 95% of city residents stay and 97% of suburbanites stay, this situation can be modelled by the migration matrix

        ⎡0.95 0.03⎤
    M = ⎣0.05 0.97⎦ .

Suppose the 2014 population of the region is 600,000 in the city and 400,000 in the suburbs (or 60% in the city and 40% in the suburbs): x0 = (0.60, 0.40)ᵀ. Then the distributions of the population in 2015 and 2016 are

    x1 = M · x0 = (0.582, 0.418)ᵀ,
    x2 = M · x1 ≈ (0.565, 0.435)ᵀ.
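The two-step computation above can be reproduced directly. A sketch with NumPy; the matrix M is the one implied by the stated rates (95% of city residents stay, 5% leave; 97% of suburbanites stay, 3% leave):

```python
import numpy as np

# Migration matrix implied by the stated rates; columns sum to 1.
M = np.array([[0.95, 0.03],
              [0.05, 0.97]])
x0 = np.array([0.60, 0.40])   # 2014 distribution (city, suburbs)

x1 = M @ x0                   # 2015 distribution
x2 = M @ x1                   # 2016 distribution

assert np.allclose(x1, [0.582, 0.418])
assert np.allclose(x2, [0.565, 0.435], atol=1e-3)
```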
Example
Let

        ⎡0.5 0.2 0.3⎤            ⎡1⎤
    P = ⎢0.3 0.8 0.3⎥ and  x0 =  ⎢0⎥ .
        ⎣0.2 0.0 0.4⎦            ⎣0⎦

Consider a system whose state is described by the Markov chain xk+1 = Pxk for k = 0, 1, . . .. Then for k large enough, one may show that xk ≈ (0.3, 0.6, 0.1)ᵀ.
Definition
If P is a stochastic matrix, then a steady-state vector (or equilibrium vector) for P is a
probability vector q such that Pq = q.
Remark: a steady-state vector can be found by solving (P − I)x = O and scaling a nonzero solution so that its entries sum to 1.
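One numerical way to carry out this computation is to take a nullspace vector of P − I and normalize it to a probability vector. A sketch with NumPy, applied to the 2×2 migration matrix from the earlier example (the SVD route is one of several options; an eigenvector computation would also work):

```python
import numpy as np

def steady_state(P):
    """Solve (P - I) q = O and scale q to a probability vector."""
    n = P.shape[0]
    # The right singular vector for the smallest singular value of P - I
    # spans its (one-dimensional) nullspace.
    _, _, Vt = np.linalg.svd(P - np.eye(n))
    q = Vt[-1]
    return q / q.sum()   # normalization also fixes the overall sign

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])
q = steady_state(M)

assert np.allclose(M @ q, q)           # Pq = q: q is a steady-state vector
assert np.allclose(q, [0.375, 0.625])  # entries are (3/8, 5/8)
```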
Review
Requirement: definitions and formulas, examples, examples with applications (video if necessary).
Key terms:
- Limit
- Derivatives
- Derived functions
- Marginal values
- Second-order derivatives
- Concave and convex
- Law of diminishing marginal utility
- Average productivity of labor
- Marginal product of labor
- Law of diminishing marginal productivity
- Marginal revenue/cost
- Marginal propensity to consume/save
- Price elasticity of demand and supply
- Elastic/inelastic/unit elastic
- Average product of labor
- Maximum/minimum (local) point
- Optimization of average product of labor