(a) Assume that $\lambda \in \mathbb{F}$ is an eigenvalue of the matrix $A$ with corresponding eigenvector $\vec{x} \in \mathbb{F}^n$. Then, by definition, we have that $A\vec{x} = \lambda\vec{x}$. Then:
$$(I_n - A)\vec{x} = I_n\vec{x} - A\vec{x} = \vec{x} - \lambda\vec{x} \quad \text{(by the properties of matrix-vector multiplication)}$$
$$= \vec{x}(1 - \lambda) = (1 - \lambda)\vec{x} \quad \text{(by the properties of scalar multiplication)},$$
which, by definition, means that $(1 - \lambda)$ is an eigenvalue of the matrix $(I_n - A)$ with corresponding eigenvector $\vec{x}$.
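This claim is easy to sanity-check numerically; the sketch below uses a hypothetical diagonal matrix and eigenpair chosen only for illustration.

```python
# Sanity check: if A x = lam * x, then (I - A) x = (1 - lam) * x.
# The matrix A and eigenpair (lam, x) below are illustrative choices,
# not taken from the assignment.
A = [[3, 0],
     [0, 5]]
x = [1, 0]      # eigenvector of A
lam = 3         # corresponding eigenvalue: A x = 3 x

def matvec(M, v):
    """Multiply a 2x2 matrix by a length-2 vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

# Form I - A entrywise.
I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]

lhs = matvec(I_minus_A, x)          # (I - A) x
rhs = [(1 - lam)*xi for xi in x]    # (1 - lam) x
assert lhs == rhs                   # (1 - lam) is an eigenvalue of I - A
```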
(b) Assume that $\mu \in \mathbb{F}$ is an eigenvalue of the matrix $I_n - A$ with corresponding eigenvector $\vec{x} \in \mathbb{F}^n$. Then, by definition, we have that $(I_n - A)\vec{x} = \mu\vec{x}$. Then:
$$(I_n - A)\vec{x} = \mu\vec{x}$$
$$\implies I_n\vec{x} - A\vec{x} = \mu\vec{x} \quad \text{(by the properties of matrix-vector multiplication)}$$
$$\implies A\vec{x} = \vec{x} - \mu\vec{x}$$
$$\therefore A\vec{x} = (1 - \mu)\vec{x} \quad \text{(by the properties of scalar multiplication)},$$
with $(1 - \mu) \in \mathbb{F}$. Thus, by definition, $(1 - \mu)$ is an eigenvalue of the matrix $A$. Setting $\lambda = (1 - \mu) \in \mathbb{F}$, there exists an eigenvalue $\lambda$ of $A$ with $(1 - \mu) = \lambda$, i.e. $\mu = 1 - \lambda$.
MATH136: Linear Algebra 1 Winter 2023
Written Assignment 4 Solutions
$$\det(A - \lambda I_2) = \det\begin{pmatrix} 7-\lambda & -10 \\ 1 & -\lambda \end{pmatrix} = -\lambda(7 - \lambda) + 10 = \lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5).$$
Therefore, the eigenvalues of $A$ are $\lambda_1 = 2$ and $\lambda_2 = 5$.
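As a quick check, both roots can be substituted back into the characteristic polynomial $\lambda^2 - 7\lambda + 10$:

```python
# Characteristic polynomial of A = [[7, -10], [1, 0]]:
# det(A - lam*I) = -lam*(7 - lam) + 10 = lam**2 - 7*lam + 10
def char_poly(lam):
    return lam**2 - 7*lam + 10

assert char_poly(2) == 0   # lambda_1 = 2 is a root
assert char_poly(5) == 0   # lambda_2 = 5 is a root
```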
Now, let us find the eigenspace corresponding to each eigenvalue.
$\lambda_1 = 2$:
$$A - 2I_2 = \begin{pmatrix} 5 & -10 \\ 1 & -2 \end{pmatrix} \sim \begin{pmatrix} 1 & -2 \\ 0 & 0 \end{pmatrix}.$$
Therefore, $E_2 = \operatorname{Null}\begin{pmatrix} 1 & -2 \\ 0 & 0 \end{pmatrix} = \operatorname{Span}\left\{\begin{pmatrix} 2 \\ 1 \end{pmatrix}\right\}$.
$\lambda_2 = 5$:
$$A - 5I_2 = \begin{pmatrix} 2 & -10 \\ 1 & -5 \end{pmatrix} \sim \begin{pmatrix} 1 & -5 \\ 0 & 0 \end{pmatrix}.$$
Therefore, $E_5 = \operatorname{Null}\begin{pmatrix} 1 & -5 \\ 0 & 0 \end{pmatrix} = \operatorname{Span}\left\{\begin{pmatrix} 5 \\ 1 \end{pmatrix}\right\}$.
Take $\vec{v}_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix} \in E_2$ and $\vec{v}_2 = \begin{pmatrix} 5 \\ 1 \end{pmatrix} \in E_5$ and construct the matrices $P = [\vec{v}_1 \mid \vec{v}_2] = \begin{pmatrix} 2 & 5 \\ 1 & 1 \end{pmatrix}$ and $D = \operatorname{diag}(\lambda_1, \lambda_2) = \operatorname{diag}(2, 5)$. As $\det(P) = -3 \neq 0$, we have that $P$ is invertible, and also see that
$$A = PDP^{-1} = \begin{pmatrix} 2 & 5 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & 5 \end{pmatrix}\begin{pmatrix} -1/3 & 5/3 \\ 1/3 & -2/3 \end{pmatrix} = \begin{pmatrix} 7 & -10 \\ 1 & 0 \end{pmatrix}$$
as required.
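The factorization can be verified with exact rational arithmetic (a small sketch using Python's `fractions`):

```python
from fractions import Fraction as F

# Verify A = P * D * P^{-1} for the matrices found above.
P     = [[F(2), F(5)], [F(1), F(1)]]
D     = [[F(2), F(0)], [F(0), F(5)]]
P_inv = [[F(-1, 3), F(5, 3)], [F(1, 3), F(-2, 3)]]

def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(P, D), P_inv)
assert A == [[F(7), F(-10)], [F(1), F(0)]]   # recovers A exactly
```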
(c) First, let us find the inverse of the matrix $P$ introduced in part (b). To do so, we solve the following system:
$$\left(\begin{array}{cc|cc} 2 & 5 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & 0 & -1/3 & 5/3 \\ 0 & 1 & 1/3 & -2/3 \end{array}\right),$$
where $\begin{pmatrix} -1/3 & 5/3 \\ 1/3 & -2/3 \end{pmatrix}$ is the inverse of $P$.
Knowing that $A = PDP^{-1}$ by part (b), we get that $A^n = PD^nP^{-1}$ for all $n \in \mathbb{N}$ by the properties of diagonalization. Therefore, $A^n\vec{v} = PD^nP^{-1}\vec{v} = \begin{pmatrix} a_{n+1} \\ a_n \end{pmatrix}$ by part (a).
Thus, we get:
$$A^n\vec{v} = PD^nP^{-1}\vec{v} = \begin{pmatrix} 2 & 5 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 2^n & 0 \\ 0 & 5^n \end{pmatrix}\begin{pmatrix} -1/3 & 5/3 \\ 1/3 & -2/3 \end{pmatrix}\begin{pmatrix} 26 \\ 7 \end{pmatrix}$$
$$= \begin{pmatrix} 2 & 5 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 2^n & 0 \\ 0 & 5^n \end{pmatrix}\begin{pmatrix} 3 \\ 4 \end{pmatrix}$$
$$= \begin{pmatrix} 2 & 5 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 3 \cdot 2^n \\ 4 \cdot 5^n \end{pmatrix} \quad \text{(by the properties of matrix-vector multiplication)}$$
$$= \begin{pmatrix} 3 \cdot 2^{n+1} + 4 \cdot 5^{n+1} \\ 3 \cdot 2^n + 4 \cdot 5^n \end{pmatrix} = \begin{pmatrix} a_{n+1} \\ a_n \end{pmatrix}.$$
Therefore, since the two vectors are equal componentwise, we conclude that $a_n = 3 \cdot 2^n + 4 \cdot 5^n$.
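The closed form can also be checked against the recurrence encoded by the companion matrix $A = \begin{pmatrix} 7 & -10 \\ 1 & 0 \end{pmatrix}$, namely $a_{n+1} = 7a_n - 10a_{n-1}$, with initial values $a_0 = 7$, $a_1 = 26$ read off $\vec{v} = (26, 7)$. (The recurrence and initial values are inferred here from part (a), which is not reproduced in this excerpt.)

```python
# Check the closed form a_n = 3*2**n + 4*5**n against the recurrence
# a_{n+1} = 7*a_n - 10*a_{n-1}, with a_0 = 7 and a_1 = 26.
def closed_form(n):
    return 3 * 2**n + 4 * 5**n

a = [7, 26]                          # a_0, a_1
for n in range(1, 10):
    a.append(7 * a[n] - 10 * a[n - 1])

assert all(a[n] == closed_form(n) for n in range(len(a)))
```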
$\therefore \vec{0} \in S_W$.
Secondly, take $\vec{x}, \vec{y} \in S_W$. That is, $\vec{x}, \vec{y} \in U$ and $T(\vec{x}), T(\vec{y}) \in W$. Let $c \in \mathbb{F}$. By the properties of subspaces, we get that $(c\vec{x} + \vec{y}) \in U$. Similarly, we have that $cT(\vec{x}) + T(\vec{y}) \in W$. As the transformation is linear, we can say that $T(c\vec{x} + \vec{y}) = cT(\vec{x}) + T(\vec{y}) \in W$. Combining the two results, we get that $(c\vec{x} + \vec{y}) \in S_W$.
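The closure step can be exercised on a concrete instance; the map $T(\vec{x}) = A\vec{x}$ and the subspace $W$ below are hypothetical choices for illustration only, whereas the proof above holds for arbitrary $U$, $W$, and linear $T$.

```python
# Illustrative closure check: T(v) = A v with U = R^2 and
# W = span{(1, 1)} (so w is in W iff w[0] == w[1]).
# A, W, and the test vectors are hypothetical choices.
A = [[2, 0],
     [1, 1]]

def T(v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def in_SW(v):
    w = T(v)
    return w[0] == w[1]          # T(v) lies in W = span{(1, 1)}

x, y = [1, 1], [2, 2]            # both lie in S_W
assert in_SW(x) and in_SW(y)

c = 3
combo = [c*x[0] + y[0], c*x[1] + y[1]]
assert in_SW(combo)              # c*x + y stays in S_W
```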
Q4. Let $A \in M_{m \times n}(\mathbb{F})$, $\{\vec{b}_1, \ldots, \vec{b}_k\} \subseteq \mathbb{F}^m$, and let $\vec{x}_i$ be a solution to $A\vec{x} = \vec{b}_i$ for $i = 1, \ldots, k$.
(a) Assume that $\{\vec{b}_1, \ldots, \vec{b}_k\}$ is linearly independent. Equivalently, for $c_1, \ldots, c_k \in \mathbb{F}$, we have that $c_1\vec{b}_1 + \cdots + c_k\vec{b}_k = \vec{0} \implies c_1 = \cdots = c_k = 0$. Now suppose that $d_1\vec{x}_1 + \cdots + d_k\vec{x}_k = \vec{0}$ for some $d_1, \ldots, d_k \in \mathbb{F}$. Applying $A$ to both sides and knowing that $A\vec{x}_i = \vec{b}_i$, we have
$$\vec{0} = A\vec{0} = A(d_1\vec{x}_1 + \cdots + d_k\vec{x}_k) = d_1(A\vec{x}_1) + \cdots + d_k(A\vec{x}_k) = d_1\vec{b}_1 + \cdots + d_k\vec{b}_k$$
by the properties of matrix-vector multiplication. By the linear independence of $\{\vec{b}_1, \ldots, \vec{b}_k\}$, we can conclude that $d_1 = \cdots = d_k = 0$.
Therefore, by definition, the set $\{\vec{x}_1, \ldots, \vec{x}_k\}$ is also linearly independent.
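For a concrete sanity check of this transfer of independence, the sketch below uses a hypothetical invertible $A$ (so that solutions $\vec{x}_i = A^{-1}\vec{b}_i$ exist) and tests independence of two vectors in $\mathbb{F}^2$ via a determinant.

```python
from fractions import Fraction as F

# Illustrative 2x2 instance: A invertible, b1 and b2 independent.
# (The choices of A, b1, b2 are hypothetical.)
A_inv = [[F(1), F(-1)],
         [F(0), F(1)]]          # inverse of A = [[1, 1], [0, 1]]
b1, b2 = [F(1), F(0)], [F(0), F(1)]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

x1, x2 = matvec(A_inv, b1), matvec(A_inv, b2)   # A x_i = b_i

def det2(u, v):
    """Two vectors in F^2 are independent iff this determinant is nonzero."""
    return u[0]*v[1] - u[1]*v[0]

assert det2(b1, b2) != 0    # the b's are independent
assert det2(x1, x2) != 0    # hence the x's are independent
```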
(b) Assume that $\{\vec{x}_1, \ldots, \vec{x}_k\}$ is linearly independent and that $\operatorname{rank}(A) = n$. Suppose, for contradiction, that $\{\vec{b}_1, \ldots, \vec{b}_k\}$ is linearly dependent; then there exist $c_1, \ldots, c_k \in \mathbb{F}$, not all zero (say $c_1 \neq 0$), with $c_1\vec{b}_1 + \cdots + c_k\vec{b}_k = \vec{0}$. Then:
$$\implies A\vec{x}_1 = \frac{-c_2}{c_1}A\vec{x}_2 + \cdots + \frac{-c_k}{c_1}A\vec{x}_k \quad (A\vec{x}_i = \vec{b}_i)$$
$$\implies A\left(-\vec{x}_1 + \frac{-c_2}{c_1}\vec{x}_2 + \cdots + \frac{-c_k}{c_1}\vec{x}_k\right) = \vec{0} \quad \text{(by the properties of matrix-vector multiplication)}.$$
As $\operatorname{rank}(A) = n$, by the system-rank theorem, the above homogeneous equation has the unique solution
$$-\vec{x}_1 + \frac{-c_2}{c_1}\vec{x}_2 + \cdots + \frac{-c_k}{c_1}\vec{x}_k = \vec{0}.$$
Note that, as the coefficient of $\vec{x}_1$ is $-1 \neq 0$, we can already see that the set $\{\vec{x}_1, \ldots, \vec{x}_k\}$ is NOT linearly independent, by definition. This contradicts our assumption of $\{\vec{x}_1, \ldots, \vec{x}_k\}$ being linearly independent, and therefore, by contradiction, the set $\{\vec{b}_1, \ldots, \vec{b}_k\}$ must be linearly independent.
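The contrapositive used here can also be checked numerically; the sketch below uses a hypothetical invertible $A$ (so $\operatorname{rank}(A) = n$) and a dependent pair of right-hand sides.

```python
from fractions import Fraction as F

# Illustrative check: with rank(A) = n (here A invertible, a hypothetical
# choice), a dependence among the b_i forces the same dependence among
# the corresponding solutions x_i.
A_inv = [[F(1), F(-1)],
         [F(0), F(1)]]          # inverse of A = [[1, 1], [0, 1]]
b1 = [F(1), F(2)]
b2 = [F(2), F(4)]               # b2 = 2*b1: the b's are dependent

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

x1, x2 = matvec(A_inv, b1), matvec(A_inv, b2)   # A x_i = b_i
assert x2 == [2*t for t in x1]  # x2 = 2*x1: the same dependence appears
```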