Department of Mathematics
M K Bhavnagar University
For M.Sc. (Mathematics) Sem-2
Paper No.: 108
Linear Algebra
Instructor: Dr. P. I. Andharia
Syllabus: Unit-3
• Eigen Values and Eigen Vectors
• Characteristic Polynomials
• Cayley-Hamilton Theorem
• Minimal Polynomials
• Triangulation, Diagonalization
• Rational Canonical Form, Jordan Canonical Form
• Inner Product Spaces
Definition: Eigen Values and Eigen Vectors
Let V be a finite dimensional vector space over ℝ and we fix a basis
B = {v₁, v₂, … , vₙ}. Let T : V → V be a linear map [We use the same symbol
T to denote the matrix (T)_B]. We say a real number λ is an Eigen
value of T, if there exists a non-zero vector v ∈ V such that Tv = λv.
Any such non-zero vector v ∈ V with Tv = λv is called an Eigen vector
of T corresponding to the Eigen value λ.
Working Rule to find Eigen Values and corresponding Eigen Vectors:
Suppose λ is an Eigen value of a matrix A and v is a corresponding Eigen
vector. Then, Av = λv ⇒ (A − λI)v = θ ⇒ det(A − λI) = 0.
The equation det(A − λI) = 0 is called the characteristic equation and
det(A − λI) is called the characteristic polynomial of the matrix A.
Solve the equation det(A − λI) = 0 to find the Eigen values of A.
Note: (1) There may be more than one Eigen vector corresponding to
the same Eigen value.
(2) Eigen values and Eigen vectors are also known as characteristic
values and characteristic vectors, respectively.
Example:
Find Eigen values and corresponding Eigen vectors of
    ⎛0 0 2⎞
A = ⎜0 2 0⎟.
    ⎝2 0 3⎠
Solution:
The characteristic equation of A is given by det(A − λI) = 0, i.e.
| −λ   0    2  |
|  0  2−λ   0  | = 0.
|  2   0   3−λ |
On expanding along the first row we get,
−λ[(2 − λ)(3 − λ) − 0] + 2[0 − 2(2 − λ)] = 0
⇒ −λ(2 − λ)(3 − λ) − 4(2 − λ) = 0
⇒ (2 − λ)[−λ(3 − λ) − 4] = 0
⇒ (2 − λ)(λ² − 3λ − 4) = 0
⇒ (2 − λ)(λ − 4)(λ + 1) = 0
⇒ λ = −1, 2, 4 are the Eigen values of A.
To find the Eigen vector corresponding to the Eigen value λ = −1:
Let v = (x, y, z) be the required Eigen vector. Then Av = −v gives
(2z, 2y, 2x + 3z) = (−x, −y, −z)
⇒ x + 2z = 0, 3y = 0, 2x + 4z = 0
Solving these, we get x = −2z, y = 0.
Thus, the required Eigen vector corresponding to λ = −1 is v = (−2, 0, 1).
Similarly, to find the Eigen vector corresponding to the Eigen value λ = 2:
Let v = (x, y, z) be the required Eigen vector. Then Av = 2v gives
(2z, 2y, 2x + 3z) = (2x, 2y, 2z)
⇒ 2z − 2x = 0, 2x + z = 0
Solving these, we get z = x and 3x = 0; hence x = 0, z = 0.
Thus, the required Eigen vector corresponding to λ = 2 is v = (0, 1, 0).
Finally, to find the Eigen vector corresponding to the Eigen value λ = 4:
Let v = (x, y, z) be the required Eigen vector. Then Av = 4v gives
(2z, 2y, 2x + 3z) = (4x, 4y, 4z)
⇒ z − 2x = 0, −2y = 0, 2x − z = 0
Solving these, we get y = 0, z = 2x.
Thus, the required Eigen vector corresponding to λ = 4 is v = (1, 0, 2).
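The eigenvalues and eigenvectors computed above can be checked numerically. The following is a small sketch in Python using NumPy (an illustration, not part of the original notes); note that np.linalg.eig returns unit-norm eigenvectors, which agree with ours only up to scaling.

```python
import numpy as np

# The matrix A from the worked example
A = np.array([[0, 0, 2],
              [0, 2, 0],
              [2, 0, 3]], dtype=float)

# np.linalg.eig returns eigenvalues and unit-norm eigenvectors (as columns)
eigvals, eigvecs = np.linalg.eig(A)

# The defining relation Av = lambda*v holds for every pair
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# The eigenvalues are -1, 2, 4, as found by hand
assert sorted(round(float(x), 6) for x in eigvals) == [-1.0, 2.0, 4.0]
```

Since an eigenvector is determined only up to a non-zero scalar multiple, the columns of eigvecs are normalized versions of (−2, 0, 1), (0, 1, 0) and (1, 0, 2).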
Theorem:
Let V be a vector space and let T : V → V be a linear map. Let
v₁, v₂, … , vₙ be non-zero Eigen vectors of T corresponding to the Eigen
values λ₁, λ₂, … , λₙ of T, with λᵢ ≠ λⱼ for i ≠ j, 1 ≤ i, j ≤ n.
Then {v₁, v₂, … , vₙ} is a linearly independent set.
Proof:
We will prove this result by mathematical induction on n.
Clearly, for n = 1, since v₁ ≠ θ, {v₁} is linearly independent.
For n = 2, let v₁, v₂ be two Eigen vectors of T corresponding to the Eigen
values λ₁, λ₂ respectively, where λ₁ ≠ λ₂. Then,
Tv₁ = λ₁v₁ and Tv₂ = λ₂v₂.
Suppose, for contradiction, that {v₁, v₂} is linearly dependent, so that
v₁ = αv₂ for some scalar α ≠ 0.
∴ Tv₁ = λ₁v₁ ⇒ T(αv₂) = λ₁v₁
⇒ αT(v₂) = λ₁v₁ ∵ T is linear
⇒ αλ₂v₂ = λ₁v₁
⇒ λ₂(αv₂) = λ₁v₁
⇒ λ₂v₁ = λ₁v₁
⇒ (λ₂ − λ₁)v₁ = θ
⇒ λ₂ − λ₁ = 0 ∵ v₁ ≠ θ
⇒ λ₂ = λ₁,
which is a contradiction to λ₁ ≠ λ₂. Hence {v₁, v₂} is linearly independent.
Now assume the result holds for any n − 1 Eigen vectors corresponding to
distinct Eigen values, and suppose, for contradiction, that {v₁, v₂, … , vₙ}
is linearly dependent. Then, since {v₁, v₂, … , vₙ₋₁} is linearly
independent, vₙ can be written as
vₙ = ∑ᵢ₌₁ⁿ⁻¹ αᵢvᵢ,
where the αᵢ are not all zero.
Now, Tvₙ = λₙvₙ ⇒ T(∑ᵢ₌₁ⁿ⁻¹ αᵢvᵢ) = λₙ ∑ᵢ₌₁ⁿ⁻¹ αᵢvᵢ
⇒ ∑ᵢ₌₁ⁿ⁻¹ αᵢT(vᵢ) = ∑ᵢ₌₁ⁿ⁻¹ λₙαᵢvᵢ ∵ T is linear
⇒ ∑ᵢ₌₁ⁿ⁻¹ αᵢλᵢvᵢ = ∑ᵢ₌₁ⁿ⁻¹ λₙαᵢvᵢ
⇒ ∑ᵢ₌₁ⁿ⁻¹ αᵢ(λᵢ − λₙ)vᵢ = θ
But {v₁, v₂, … , vₙ₋₁} is linearly independent, therefore
αᵢ(λᵢ − λₙ) = 0, ∀ 1 ≤ i ≤ n − 1.
Since the αᵢ's are not all zero, say αⱼ ≠ 0, we have λⱼ − λₙ = 0 for some j.
This is a contradiction to λ₁, λ₂, … , λₙ being distinct.
Hence, our assumption is wrong and {v₁, v₂, … , vₙ} is linearly
independent. This completes the proof.
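For the example matrix treated earlier, the three eigenvectors found for the distinct eigenvalues −1, 2, 4 illustrate this theorem. A quick numerical check (a Python/NumPy sketch, reusing the vectors from that example):

```python
import numpy as np

# Eigenvectors of A = [[0,0,2],[0,2,0],[2,0,3]] for eigenvalues -1, 2, 4
V = np.array([[-2, 0, 1],
              [ 0, 1, 0],
              [ 1, 0, 2]], dtype=float)

# n vectors in R^n are linearly independent iff the matrix having
# them as rows has full rank
rank = np.linalg.matrix_rank(V)
assert rank == 3   # full rank: the set is linearly independent
```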
Theorem: Cayley-Hamilton Theorem
Every square matrix satisfies its characteristic equation.
Proof:
Let A be an n × n matrix. Let
p(λ) = λⁿ + aₙ₋₁λⁿ⁻¹ + … + a₁λ + a₀
be the characteristic polynomial of A. Then we have to prove that
p(A) = Aⁿ + aₙ₋₁Aⁿ⁻¹ + … + a₁A + a₀I = 0.
Recall that for any n × n matrix B,
B adj(B) = det(B) I
(when B is non-singular this is the identity B⁻¹ = (1/det B) adj(B)).
Take B = A − λI, then
(A − λI) adj(A − λI) = det(A − λI) I = p(λ) I … … (1)
Now, adj(A − λI) is a square matrix whose entries are determinants of
(n − 1)-square submatrices of (A − λI). Hence, adj(A − λI) is a matrix
whose entries are polynomials in λ of degree at most n − 1.
∴ adj(A − λI) = Bₙ₋₁λⁿ⁻¹ + … + B₁λ + B₀,
where the Bᵢ are matrices with real entries. So, from (1),
(A − λI)(Bₙ₋₁λⁿ⁻¹ + … + B₁λ + B₀)
= (λⁿ + aₙ₋₁λⁿ⁻¹ + … + a₁λ + a₀)I.
Comparing the coefficients of like powers of λ, we get
−Bₙ₋₁ = I
ABₙ₋₁ − Bₙ₋₂ = aₙ₋₁I
ABₙ₋₂ − Bₙ₋₃ = aₙ₋₂I
… … … … … … … … ..
AB₁ − B₀ = a₁I
AB₀ = a₀I
Multiplying the first of these equations by Aⁿ, the second by Aⁿ⁻¹, … ,
the second last by A and the last one by I, and adding them, the left
hand sides telescope to the zero matrix and we get the desired result
Aⁿ + aₙ₋₁Aⁿ⁻¹ + … + a₁A + a₀I = 0.
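The theorem is easy to verify numerically for the example matrix used earlier. A short Python/NumPy sketch (np.poly returns the coefficients of the characteristic polynomial, highest power first):

```python
import numpy as np

A = np.array([[0, 0, 2],
              [0, 2, 0],
              [2, 0, 3]], dtype=float)

# Coefficients [1, a_{n-1}, ..., a_1, a_0] of the characteristic polynomial
coeffs = np.poly(A)

# Evaluate p(A) = A^n + a_{n-1}A^{n-1} + ... + a_1*A + a_0*I by Horner's scheme
n = A.shape[0]
p_A = np.zeros_like(A)
for c in coeffs:
    p_A = p_A @ A + c * np.eye(n)

# Cayley-Hamilton: p(A) is the zero matrix
assert np.allclose(p_A, np.zeros_like(A))
```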
Definition:
Two matrices A and B of order n × n are said to be similar matrices if
there exists a non-singular matrix P of order n × n such that
B = P⁻¹AP.
Theorem:
Similar matrices have the same characteristic polynomial.
Proof:
Let A and B be two similar matrices of order n × n. Then there exists a
non-singular n × n matrix P such that B = P⁻¹AP.
Now, we know that the characteristic polynomial of A is det(A − λI) and
that of B is det(B − λI).
det(B − λI) = det(P⁻¹AP − λI)
= det(P⁻¹AP − λP⁻¹P)
= det(P⁻¹AP − P⁻¹(λI)P)
= det(P⁻¹(A − λI)P)
= det(P⁻¹) det(A − λI) det(P)
= det(P)⁻¹ det(A − λI) det(P)
= (1/det(P)) det(P) det(A − λI)
= det(A − λI)
Thus, similar matrices have the same characteristic polynomials.
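A numerical illustration of this theorem (a Python/NumPy sketch; a random matrix P is generically non-singular, which we assume here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # generically non-singular
B = np.linalg.inv(P) @ A @ P         # B is similar to A

# Similar matrices have the same characteristic polynomial coefficients
assert np.allclose(np.poly(A), np.poly(B))
```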
Definitions:
(1) A linear transformation T on a vector space V is called triangulable
if there exists a basis for V such that the matrix of T relative to that
basis is an upper triangular matrix.
(2) A linear transformation T on a vector space V is called
diagonalizable if there exists a basis for V such that the matrix of T
relative to that basis is a diagonal matrix.
(3) A polynomial f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀ is
called a monic polynomial if aₙ = 1.
(4) A monic polynomial m(x) of minimal degree such that m(A) = 0 is
called the minimal polynomial of a matrix A.
(5) The companion matrix to a monic polynomial
f(x) = xⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀
is the n × n square matrix
             ⎛0 0 0 … 0 −a₀  ⎞
             ⎜1 0 0 … 0 −a₁  ⎟
C(f) or Cf = ⎜0 1 0 … 0 −a₂  ⎟
             ⎜0 0 1 … 0 −a₃  ⎟
             ⎜⋮ ⋮ ⋮ ⋱ ⋱ ⋮    ⎟
             ⎝0 0 0 … 1 −aₙ₋₁⎠
(6) If A is an m × m matrix and B is an n × n matrix then the direct sum
of A and B, denoted by A ⨁ B, is the matrix of order (m + n) × (m + n)
given by
A ⨁ B = ⎛A 0⎞
         ⎝0 B⎠ , where 0 is a zero matrix.
(7) (Result) Every n × n matrix A is similar to a direct sum of companion
matrices C(f₁) ⨁ C(f₂) ⨁ … ⨁ C(fₖ), where the fᵢ(x) are monic
polynomials with fᵢ(x) | fᵢ₊₁(x), 1 ≤ i ≤ k − 1. In this case
f₁(x), f₂(x), … , fₖ(x) are called the invariant factors of A.
(8) A rational canonical form of an n × n matrix A is a matrix R which is
a direct sum of companion matrices C(f₁) ⨁ C(f₂) ⨁ … ⨁ C(fₖ),
where the fᵢ(x) are monic polynomials with fᵢ(x) | fᵢ₊₁(x), 1 ≤ i ≤ k − 1.
In this case f₁(x), f₂(x), … , fₖ(x) are called the invariant factors.
So, R = C(f₁) ⨁ C(f₂) ⨁ … ⨁ C(fₖ).
(9) A 1 × 1 Jordan block is a matrix J(λ, 1) = (λ). An n × n Jordan block
is a matrix
          ⎛λ 0 0 … 0 0⎞
          ⎜1 λ 0 … 0 0⎟
J(λ, n) = ⎜0 1 λ … 0 0⎟
          ⎜⋮ ⋮ ⋮ ⋱ ⋱ ⋮⎟
          ⎜0 0 0 … λ 0⎟
          ⎝0 0 0 … 1 λ⎠
(10) A Jordan canonical form is a direct sum of Jordan blocks.
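The constructions in definitions (5), (6) and (9) are easy to realize concretely. Here is a Python/NumPy sketch; the helper names companion, direct_sum and jordan_block are ours, chosen for this illustration:

```python
import numpy as np

def companion(a):
    """Companion matrix C(f) of f(x) = x^n + a[n-1]x^(n-1) + ... + a[0]."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # 1's on the subdiagonal
    C[:, -1] = -np.asarray(a)       # last column: -a_0, ..., -a_{n-1}
    return C

def direct_sum(A, B):
    """Direct sum A ⊕ B: block-diagonal matrix of order (m+n) x (m+n)."""
    m, n = A.shape[0], B.shape[0]
    return np.block([[A, np.zeros((m, n))],
                     [np.zeros((n, m)), B]])

def jordan_block(lam, n):
    """n x n Jordan block J(lam, n): lam on the diagonal, 1's just below."""
    return lam * np.eye(n) + np.eye(n, k=-1)

# f(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); a = (a_0, a_1, a_2)
C = companion([-6, 11, -6])
# The characteristic polynomial of C(f) recovers f itself
assert np.allclose(np.poly(C), [1, -6, 11, -6])

S = direct_sum(C, jordan_block(5.0, 2))   # a 5 x 5 block-diagonal matrix
assert S.shape == (5, 5)
assert np.isclose(S[4, 3], 1.0) and np.isclose(S[4, 4], 5.0)
```

The first assertion checks the standard fact that a companion matrix has its defining polynomial as characteristic polynomial, which is what makes the rational canonical form work.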
Paper No.- 108
Paper Title: Linear Algebra
Hardik M Pandya
Department of Mathematics,
M. K. Bhavnagar University, Bhavnagar
Unit – 3
Inner product space
Norm
Metric
Inner product space
Definition:
Let 𝑉 be a vector space. A function (or a map) 〈, 〉: 𝑉 ×
𝑉 → ℝ is said to be an inner product on 𝑉 if it satisfies
the following conditions:
For 𝑥, 𝑦, 𝑧 ∈ 𝑉 and 𝛼 ∈ ℝ
(i) 〈𝑥, 𝑥 〉 ≥ 0 and 〈𝑥, 𝑥 〉 = 0 ⟺ 𝑥 = 𝜃𝑉
(ii) 〈𝑥, 𝑦〉 = 〈𝑦, 𝑥 〉
(iii) 〈𝑥 + 𝑦, 𝑧〉 = 〈𝑥, 𝑧〉 + 〈𝑦, 𝑧〉
(iv) 〈𝛼𝑥, 𝑦〉 = 𝛼 〈𝑥, 𝑦〉
An ordered pair (𝑉, 〈, 〉) is called an Inner Product Space.
For simplicity we say, 𝑉 is an inner product space.
Example:
Show that the dot product 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦 = ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖 , for
𝑥 = (𝑥1 , 𝑥2 , … … , 𝑥𝑛 ), 𝑦 = (𝑦1 , 𝑦2 , … … , 𝑦𝑛 ) ∈ ℝ𝑛 , is an inner
product on ℝ𝑛 .
Solution:
Let 𝑥, 𝑦, 𝑧 ∈ ℝ𝑛 and 𝛼 ∈ ℝ.
(i) 〈𝑥, 𝑥 〉 = ∑𝑛𝑖=1 𝑥𝑖 ² ≥ 0, and 〈𝑥, 𝑥 〉 = 0 ⟺ 𝑥𝑖 = 0 for each 𝑖 ⟺ 𝑥 = 𝜃𝑉
(ii) 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦
= ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖
= ∑𝑛𝑖=1 𝑦𝑖 𝑥𝑖
=𝑦∙𝑥
= 〈𝑦, 𝑥 〉
(iii) 𝑥 + 𝑦 = (𝑥1 + 𝑦1 , 𝑥2 + 𝑦2 , … … , 𝑥𝑛 + 𝑦𝑛 )
∴ 〈𝑥 + 𝑦, 𝑧〉 = ∑𝑛𝑖=1 (𝑥𝑖 + 𝑦𝑖 )𝑧𝑖
= (𝑥1 + 𝑦1 )𝑧1 + (𝑥2 + 𝑦2 )𝑧2 + ⋯ + (𝑥𝑛 + 𝑦𝑛 )𝑧𝑛
= 𝑥1 𝑧1 + 𝑦1 𝑧1 + 𝑥2 𝑧2 + 𝑦2 𝑧2 + ⋯ + 𝑥𝑛 𝑧𝑛 + 𝑦𝑛 𝑧𝑛
= (𝑥1 𝑧1 + 𝑥2 𝑧2 + ⋯ + 𝑥𝑛 𝑧𝑛 )
+ (𝑦1 𝑧1 + 𝑦2 𝑧2 + ⋯ + 𝑦𝑛 𝑧𝑛 )
= ∑𝑛𝑖=1 𝑥𝑖 𝑧𝑖 + ∑𝑛𝑖=1 𝑦𝑖 𝑧𝑖
= 〈𝑥, 𝑧〉 + 〈𝑦, 𝑧〉
(iv) 𝛼𝑥 = 𝛼 (𝑥1 , 𝑥2 , … … , 𝑥𝑛 ) = (𝛼𝑥1 , 𝛼𝑥2 , … … , 𝛼𝑥𝑛 )
∴ 〈𝛼𝑥, 𝑦〉 = ∑𝑛𝑖=1 𝛼𝑥𝑖 𝑦𝑖
= 𝛼𝑥1 𝑦1 + 𝛼𝑥2 𝑦2 + … … … + 𝛼𝑥𝑛 𝑦𝑛
= 𝛼 (𝑥1 𝑦1 + 𝑥2 𝑦2 + … … … + 𝑥𝑛 𝑦𝑛 )
= 𝛼 ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖
= 𝛼 〈𝑥, 𝑦〉
So, 〈, 〉 satisfies all the conditions of inner product.
Hence it is an inner product on ℝ𝑛 .
Thus, dot product on ℝ𝑛 is an inner product on ℝ𝑛 .
Note: Now onwards, if not specified, the inner product
on ℝ𝑛 is taken as dot product.
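The four axioms can also be spot-checked numerically for the dot product. A Python/NumPy sketch with randomly chosen vectors (illustrative only; a numerical check is not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 4))   # three vectors in R^4
alpha = 2.5

ip = np.dot   # the dot product as the inner product on R^n

assert ip(x, x) >= 0                                   # (i) positivity
assert np.isclose(ip(x, y), ip(y, x))                  # (ii) symmetry
assert np.isclose(ip(x + y, z), ip(x, z) + ip(y, z))   # (iii) additivity
assert np.isclose(ip(alpha * x, y), alpha * ip(x, y))  # (iv) homogeneity
```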
Norm
Definition
Let 𝑉 be an inner product space and 𝑣 ∈ 𝑉. The norm of
𝑣 is denoted by ‖𝑣 ‖ and defined as ‖𝑣 ‖ = √〈𝑣, 𝑣 〉, the
positive square root of the non-negative number 〈𝑣, 𝑣 〉.
Remarks:
[1] ‖𝑣 ‖2 = 〈𝑣, 𝑣 〉
[2] The norm of a vector in ℝ𝑛 is the length of the
vector.
[3] A vector with norm 1 is called unit vector.
Example 4
Find norm of the following vectors:
(i) 𝑣 = (4, 0, – 3) ∈ ℝ3
(ii) 𝑤 = (1, – 2, 6, 3) ∈ ℝ4
Solution:
(i) 〈𝑣, 𝑣 〉 = 𝑣 ∙ 𝑣 = (4, 0, – 3) ∙ (4, 0, – 3)
= 4² + 0² + (– 3)²
= 16 + 0 + 9 = 25
∴ ‖𝑣 ‖ = √〈𝑣, 𝑣 〉 = √25, the positive square root.
∴ ‖𝑣 ‖ = 5.
(ii) 〈𝑤, 𝑤〉 = 𝑤 ∙ 𝑤
= (1, – 2, 6, 3) ∙ (1, – 2, 6, 3)
= 1² + (– 2)² + 6² + 3²
= 1 + 4 + 36 + 9 = 50
∴ ‖𝑤‖ = √〈𝑤, 𝑤〉 = √50, the positive square root.
∴ ‖𝑤‖ = 5√2.
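Example 4 can be confirmed with NumPy, whose np.linalg.norm computes exactly ‖𝑣‖ = √〈𝑣, 𝑣〉 for the dot-product inner product (a sketch, not part of the original notes):

```python
import numpy as np

v = np.array([4.0, 0.0, -3.0])
w = np.array([1.0, -2.0, 6.0, 3.0])

# ||v|| = sqrt(<v, v>) agrees with np.linalg.norm
assert np.isclose(np.sqrt(np.dot(v, v)), np.linalg.norm(v))
assert np.isclose(np.linalg.norm(v), 5.0)              # part (i)
assert np.isclose(np.linalg.norm(w), 5 * np.sqrt(2))   # part (ii)
```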
Theorem 1
Let V be an inner product space. Then for every 𝑥 ∈ 𝑉
and 𝛼 ∈ ℝ we have
(i) ‖𝑥 ‖ ≥ 0 and ‖𝑥 ‖ = 0 ⟺ 𝑥 = 𝜃𝑉
(ii) ‖𝛼𝑥 ‖ = |𝛼 |‖𝑥 ‖.
Proof
Let 𝑥 ∈ 𝑉 and 𝛼 ∈ ℝ.
(i) Since the norm is a positive square root of a non-
negative number, it is clear that ‖𝑥 ‖ ≥ 0.
Now ‖𝑥 ‖ = 0 ⟺ ‖𝑥 ‖2 = 0
⟺ 〈𝑥, 𝑥 〉 = 0
⟺ 𝑥 = 𝜃𝑉
(ii) ‖𝛼𝑥 ‖2 = 〈𝛼𝑥, 𝛼𝑥 〉
= 𝛼 〈𝑥, 𝛼𝑥 〉
= 𝛼 2 〈𝑥, 𝑥 〉
= 𝛼 2 ‖𝑥 ‖2
Taking positive square root on both the sides, we get
‖𝛼𝑥 ‖ = |𝛼 |‖𝑥 ‖.
Remark:
Using result (ii) of Theorem 1,
‖𝑥 − 𝑦‖ = ‖(−1)(𝑦 − 𝑥 )‖
= |−1|‖𝑦 − 𝑥 ‖
= ‖𝑦 − 𝑥 ‖.
Theorem 2 Cauchy – Schwarz inequality
Let 𝑉 be an inner product space. If 𝑥, 𝑦 ∈ 𝑉 then
|〈𝑥, 𝑦〉| ≤ ‖𝑥 ‖‖𝑦‖.
Further, equality holds if and only if the set {𝑥, 𝑦} is
linearly dependent.
Proof:
If 𝑥 = 𝜃𝑉 or 𝑦 = 𝜃𝑉 then 〈𝑥, 𝑦〉 = 0 and ‖𝑥 ‖‖𝑦‖ = 0.
∴ |〈𝑥, 𝑦〉| = ‖𝑥 ‖‖𝑦‖
Also one of 𝑥, 𝑦 is 𝜃𝑉 , therefore the set {𝑥, 𝑦} is linearly
dependent.
If 𝑥 ≠ 𝜃𝑉 and 𝑦 ≠ 𝜃𝑉 then we can let
𝑢 = 𝑥/‖𝑥 ‖ and 𝑣 = 𝑦/‖𝑦‖.
∴ ‖𝑢‖ = 1 and ‖𝑣 ‖ = 1 (1)
∴ ‖𝑢‖‖𝑣 ‖ = 1
Now,
〈𝑢 − 𝑣, 𝑢 − 𝑣 〉 = 〈𝑢, 𝑢 − 𝑣 〉 + 〈−𝑣, 𝑢 − 𝑣 〉
= 〈𝑢, 𝑢〉 + 〈𝑢, −𝑣 〉 + 〈−𝑣, 𝑢〉 + 〈−𝑣, −𝑣 〉
= 〈𝑢, 𝑢〉 − 〈𝑢, 𝑣 〉 − 〈𝑣, 𝑢〉 + 〈𝑣, 𝑣 〉
= 〈𝑢, 𝑢〉 − 〈𝑢, 𝑣 〉 − 〈𝑢, 𝑣 〉 + 〈𝑣, 𝑣 〉
= ‖𝑢‖2 − 2〈𝑢, 𝑣 〉 + ‖𝑣 ‖2
= 2 − 2〈𝑢, 𝑣 〉 by using Equation (1)
= 2(1 − 〈𝑢, 𝑣 〉) (2)
And we know that 〈𝑢 − 𝑣, 𝑢 − 𝑣 〉 ≥ 0
∴ 2(1 − 〈𝑢, 𝑣 〉) ≥ 0 ⟹ 1 − 〈𝑢, 𝑣 〉 ≥ 0
⟹ 1 ≥ 〈𝑢, 𝑣 〉
⟹ 〈𝑢, 𝑣 〉 ≤ 1 (3)
Similarly,
〈𝑢 + 𝑣, 𝑢 + 𝑣 〉 = 〈𝑢, 𝑢〉 + 〈𝑢, 𝑣 〉 + 〈𝑣, 𝑢〉 + 〈𝑣, 𝑣 〉
= ‖𝑢‖2 + 2〈𝑢, 𝑣 〉 + ‖𝑣 ‖2
= 2 + 2〈𝑢, 𝑣 〉 by using Equation (1)
= 2(1 + 〈𝑢, 𝑣 〉) (4)
And we know that 〈𝑢 + 𝑣, 𝑢 + 𝑣 〉 ≥ 0
∴ 2(1 + 〈𝑢, 𝑣 〉) ≥ 0 ⟹ 1 + 〈𝑢, 𝑣 〉 ≥ 0
⟹ 1 ≥ −〈𝑢, 𝑣 〉
⟹ −〈𝑢, 𝑣 〉 ≤ 1 (5)
From the Equations (3) and (5), we get
|〈𝑢, 𝑣 〉| ≤ 1
∴ |〈𝑥/‖𝑥 ‖, 𝑦/‖𝑦‖〉| ≤ 1
∴ (1/(‖𝑥 ‖‖𝑦‖))|〈𝑥, 𝑦〉| ≤ 1
∴ |〈𝑥, 𝑦〉| ≤ ‖𝑥 ‖‖𝑦‖
Further, suppose equality holds, that is
|〈𝑥, 𝑦〉| = ‖𝑥 ‖‖𝑦‖
⟺ (1/(‖𝑥 ‖‖𝑦‖))|〈𝑥, 𝑦〉| = 1
⟺ |〈𝑥/‖𝑥 ‖, 𝑦/‖𝑦‖〉| = 1
⟺ |〈𝑢, 𝑣 〉| = 1
⟺ 〈𝑢, 𝑣 〉 = 1 𝑜𝑟 – 〈𝑢, 𝑣 〉 = 1
⟺ 1 − 〈𝑢, 𝑣 〉 = 0 𝑜𝑟 1 + 〈𝑢, 𝑣 〉 = 0
⟺ 2(1 − 〈𝑢, 𝑣 〉) = 0 𝑜𝑟 2(1 + 〈𝑢, 𝑣 〉) = 0
⟺ 〈𝑢 − 𝑣, 𝑢 − 𝑣 〉 = 0 𝑜𝑟 〈𝑢 + 𝑣, 𝑢 + 𝑣 〉 = 0
by using Equations (2) and (4)
⟺ 𝑢 − 𝑣 = 𝜃𝑉 𝑜𝑟 𝑢 + 𝑣 = 𝜃𝑉
⟺ 𝑢 = ±𝑣
⟺ 𝑥/‖𝑥 ‖ = ± 𝑦/‖𝑦‖
⟺ 𝑥 = ± (‖𝑥 ‖/‖𝑦‖) 𝑦
⟺ the set {𝑥, 𝑦} is linearly dependent.
Thus, equality holds if and only if {𝑥, 𝑦} is linearly dependent.
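Both the inequality and its equality case can be observed numerically. A Python/NumPy sketch (the factor −3 is an arbitrary choice to build a linearly dependent pair):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal((2, 5))

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y|| for arbitrary vectors
assert abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y)

# Equality when {x, y} is linearly dependent, e.g. y = -3x
y_dep = -3 * x
assert np.isclose(abs(np.dot(x, y_dep)),
                  np.linalg.norm(x) * np.linalg.norm(y_dep))
```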
Theorem 3
Let 𝑉 be an inner product space and let 𝑥, 𝑦, 𝑧 ∈ 𝑉. Then
(i) ‖𝑥 + 𝑦‖ ≤ ‖𝑥 ‖ + ‖𝑦‖ (Triangle inequality)
(ii) |‖𝑥 ‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖
(iii) 𝑑 (𝑥, 𝑧) ≤ 𝑑 (𝑥, 𝑦) + 𝑑 (𝑦, 𝑧), where 𝑑 (𝑥, 𝑦) = ‖𝑥 − 𝑦‖.
Proof:
First, ‖𝑥 + 𝑦‖² = 〈𝑥 + 𝑦, 𝑥 + 𝑦〉
= 〈𝑥, 𝑥 〉 + 〈𝑥, 𝑦〉 + 〈𝑦, 𝑥 〉 + 〈𝑦, 𝑦〉
= ‖𝑥 ‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖² (1)
And ‖𝑥 − 𝑦‖² = 〈𝑥 − 𝑦, 𝑥 − 𝑦〉
= 〈𝑥, 𝑥 〉 − 〈𝑥, 𝑦〉 − 〈𝑦, 𝑥 〉 + 〈𝑦, 𝑦〉
= ‖𝑥 ‖² − 2〈𝑥, 𝑦〉 + ‖𝑦‖² (2)
(i) ‖𝑥 + 𝑦‖ ≤ ‖𝑥 ‖ + ‖𝑦‖:
Now, ‖𝑥 + 𝑦‖² = ‖𝑥 ‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
by using Equation (1)
≤ ‖𝑥 ‖² + 2|〈𝑥, 𝑦〉| + ‖𝑦‖²
≤ ‖𝑥 ‖² + 2‖𝑥 ‖‖𝑦‖ + ‖𝑦‖²
(using Cauchy – Schwarz inequality)
= (‖𝑥 ‖ + ‖𝑦‖)²
∴ ‖𝑥 + 𝑦‖² ≤ (‖𝑥 ‖ + ‖𝑦‖)²
Taking positive square root on both the sides, we get
‖𝑥 + 𝑦‖ ≤ ‖𝑥 ‖ + ‖𝑦‖
(ii) |‖𝑥 ‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖:
Now, ‖𝑥 ‖ = ‖(𝑥 − 𝑦) + 𝑦‖
≤ ‖𝑥 − 𝑦‖ + ‖𝑦‖ (by Triangle inequality)
∴ ‖𝑥 ‖ − ‖𝑦‖ ≤ ‖𝑥 − 𝑦‖ (3)
‖𝑦‖ = ‖(𝑦 − 𝑥 ) + 𝑥 ‖
≤ ‖𝑦 − 𝑥 ‖ + ‖𝑥 ‖ (by Triangle inequality)
= ‖𝑥 − 𝑦‖ + ‖𝑥 ‖
∴ ‖𝑦‖ ≤ ‖𝑥 − 𝑦‖ + ‖𝑥 ‖
∴ ‖𝑦‖ − ‖𝑥 ‖ ≤ ‖𝑥 − 𝑦‖
∴ −( ‖𝑥 ‖ − ‖𝑦‖) ≤ ‖𝑥 − 𝑦‖ (4)
From the Equations (3) and (4), we get |‖𝑥 ‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖.
(iii) 𝑑 (𝑥, 𝑧) = ‖𝑥 − 𝑧‖
= ‖(𝑥 − 𝑦) + (𝑦 − 𝑧)‖
≤ ‖𝑥 − 𝑦‖ + ‖𝑦 − 𝑧‖
(by Triangle inequality)
= 𝑑 (𝑥, 𝑦) + 𝑑 (𝑦, 𝑧)
∴ 𝑑 (𝑥, 𝑧) ≤ 𝑑 (𝑥, 𝑦) + 𝑑(𝑦, 𝑧)
Example 6
Let 𝑉 be an inner product space. For a metric 𝑑 defined
as 𝑑 (𝑥, 𝑦) = ‖𝑥 − 𝑦‖, ∀𝑥, 𝑦 ∈ 𝑉, show that 𝑑 (𝑥 + 𝑧, 𝑦 +
𝑧) = 𝑑(𝑥, 𝑦).
Solution:
𝑑 (𝑥 + 𝑧, 𝑦 + 𝑧) = ‖(𝑥 + 𝑧) − (𝑦 + 𝑧)‖
= ‖𝑥 + 𝑧 − 𝑦 − 𝑧‖
= ‖𝑥 − 𝑦‖
= 𝑑(𝑥, 𝑦)
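Both this translation invariance and the triangle inequality for 𝑑 can be spot-checked numerically. A Python/NumPy sketch with random vectors:

```python
import numpy as np

def d(x, y):
    """Metric induced by the norm: d(x, y) = ||x - y||."""
    return np.linalg.norm(x - y)

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 4))

# Translation invariance: d(x + z, y + z) = d(x, y)
assert np.isclose(d(x + z, y + z), d(x, y))

# Triangle inequality: d(x, z) <= d(x, y) + d(y, z)
assert d(x, z) <= d(x, y) + d(y, z)
```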