Determinants
Axiomatic Approach
In what follows we shall use the symbol K to denote the set of
scalars. Thus K is either the set R of real numbers or the set C
of complex numbers. Recall that the rows of an n × n matrix
A = (aij) are denoted by A1, . . . , An. In the axiomatic approach,
we think of det A as a function D(A1, . . . , An) of the rows of A.
The basic properties we require of this function are as follows.
1. [Multilinearity] D is linear in each row, i.e., for any
i = 1, . . . , n, any α, β ∈ K, and any row vectors Ai, A'i,
D(A1, . . . , αAi + βA'i, . . . , An) = α D(A1, . . . , Ai, . . . , An) + β D(A1, . . . , A'i, . . . , An).
Determinant function
Adjoints, Inverses, and Cramer's Rule

The adjoint of a matrix A, denoted adj(A), is defined to be the
transpose of the cofactor matrix of A.
Lemma
For any A ∈ Mn(K), A · adj(A) = adj(A) · A = (det A) I_n.
Corollary
If det A ≠ 0, then A is invertible and A^{-1} = (det A)^{-1} (adj A).
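The lemma can be checked numerically. The following sketch (not from the lecture; the function names `det`, `adjugate`, `matmul` are mine) computes the adjoint of a small integer matrix by Laplace expansion and verifies A · adj(A) = (det A) I:

```python
# Illustrative sketch: verify A * adj(A) = (det A) * I in exact integer arithmetic.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def adjugate(M):
    """Transpose of the cofactor matrix of M."""
    n = len(M)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j+1:] for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
d = det(A)
P = matmul(A, adjugate(A))
# P should equal (det A) * I, in agreement with the lemma
assert all(P[i][j] == (d if i == j else 0) for i in range(3) for j in range(3))
```

Since the entries are integers, the check is exact rather than approximate.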
Cramer's Rule

Let A ∈ Mn(K) with det A ≠ 0 and let b ∈ K^n. For j = 1, . . . , n,
let Mj denote the matrix obtained from A by replacing its j-th
column by b. Then the unique solution x of Ax = b is given by

x_j = (det Mj) / (det A) for j = 1, . . . , n.

Sketch of Proof: Since det A ≠ 0, A is invertible and the system
Ax = b has the unique solution x = A^{-1}b. From the last
corollary, A^{-1} = (det A)^{-1} adj(A), and so we see that

x_j = (1/det A) Σ_{k=1}^n adj(A)_{jk} b_k = (1/det A) Σ_{k=1}^n b_k cof_{kj}(A) = (1/det A) det Mj,

where the last step follows from the expansion of det Mj along
the j-th column.
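A minimal sketch of Cramer's rule in code (not from the lecture; the helper `cramer` is mine), using exact rational arithmetic so the quotients det Mj / det A come out exactly:

```python
# Illustrative sketch: solve Ax = b by Cramer's rule, x_j = det(M_j) / det(A),
# where M_j is A with its j-th column replaced by b.
from fractions import Fraction

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def cramer(A, b):
    d = det(A)
    n = len(A)
    xs = []
    for j in range(n):
        # build M_j: column j of A replaced by b
        Mj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        xs.append(Fraction(det(Mj), d))
    return xs

A = [[2, 1], [1, 3]]
b = [5, 10]
x = cramer(A, b)
# sanity check: the computed x really solves Ax = b
assert all(sum(A[i][j] * x[j] for j in range(2)) == b[i] for i in range(2))
```

Cramer's rule is mainly of theoretical interest; for large n, elimination (next section) is far cheaper.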
GEM Applied to Compute the Determinant

One of the most efficient ways of calculating determinants is to
use a variant of GEM. First observe the effect of each elementary
row operation on the determinant of a square matrix A of size n × n:

I. A → A' by Ri ↔ Rj ⇒ det A' = − det A
II. A → A' by Ri → cRi ⇒ det A' = c det A
III. A → A' by Ri → Ri + cRj ⇒ det A' = det A
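The rules above can be turned directly into an algorithm: reduce A to upper triangular form using only operations I and III, track the sign flips from the swaps, and multiply the diagonal. A sketch (not from the lecture; `det_gem` is my name):

```python
# Illustrative sketch: det A via Gaussian elimination. Only rule I (swap,
# which flips the sign) and rule III (which leaves det unchanged) are used,
# so det A = sign * (product of the diagonal of the triangular result).
from fractions import Fraction

def det_gem(A):
    M = [[Fraction(x) for x in row] for row in A]
    n = len(M)
    sign = 1
    for col in range(n):
        # find a pivot; a row swap flips the sign (rule I)
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)        # a zero column => det A = 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        # rule III: clearing entries below the pivot does not change det
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    prod = Fraction(1)
    for i in range(n):
        prod *= M[i][i]               # det of triangular = product of diagonal
    return sign * prod

assert det_gem([[0, 2], [3, 4]]) == -6
assert det_gem([[2, 0, 1], [1, 3, -1], [0, 5, 4]]) == 39
```

This costs O(n³) operations, versus n! terms for the permutation expansion.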
More on computations of determinants

We already used the following fact for upper triangular matrices.
Fact: If A is a (lower or upper) triangular square matrix, then
det A is the product of its diagonal entries.
A proof of this is easily obtained using the recursive formula
(Laplace expansion) or the permutation expansion. More
generally, for block triangular matrices, we have the following:
Theorem
If M = [ A B ; 0 D ] is a square matrix in block upper triangular
form, where A, B, D are submatrices of appropriate sizes, then
det M = (det A)(det D).
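A quick numerical check of the block triangular theorem (a sketch of mine, not from the lecture), assembling M = [A B; 0 D] from 2 × 2 blocks and comparing determinants:

```python
# Illustrative check: det [A B; 0 D] = det(A) * det(D) for a 4x4 example.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

A = [[1, 2], [3, 4]]      # top-left block
B = [[5, 6], [7, 8]]      # top-right block (arbitrary; it does not matter)
D = [[2, 1], [1, 1]]      # bottom-right block
zero = [[0, 0], [0, 0]]

# assemble M = [[A, B], [0, D]] as a single 4x4 matrix
M = [A[i] + B[i] for i in range(2)] + [zero[i] + D[i] for i in range(2)]

assert det(M) == det(A) * det(D)   # the block triangular theorem
```

Note that B plays no role in the value of det M.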
3. Vector Spaces
Definition
A nonempty set V of objects (called elements or vectors) is
called a vector space over K if the following axioms are
satisfied:
I. Closure axioms:
1. (closure under addition) For every x, y ∈ V there is a
unique x + y ∈ V.
2. (closure under scalar multiplication) For every x ∈ V and
scalar α ∈ K there is a unique element αx ∈ V.
II. Axioms for addition:
3. (commutative law) x + y = y + x for all x, y ∈ V.
4. (associative law) x + (y + z) = (x + y) + z for all
x, y, z ∈ V.
5. (existence of zero element) There exists a unique element
0 in V such that x + 0 = 0 + x = x for all x ∈ V.
Definition
(contd...)
6. (existence of inverses or negatives) For each x ∈ V there exists a
unique element, written −x, such that x + (−x) = 0.
III. Axioms for multiplication by scalars:
7. (associativity) For all α, β ∈ K and x ∈ V,
α(βx) = (αβ)x.
8. (distributive law for addition in V) For all x, y ∈ V and
α ∈ K,
α(x + y) = αx + αy.
9. (distributive law for addition in K) For all α, β ∈ K and
x ∈ V,
(α + β)x = αx + βx.
10. (existence of identity for multiplication) For all x ∈ V,
1x = x.
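The axioms above can be exercised concretely. The following sketch (mine, not from the lecture) takes V = R³ with componentwise operations and spot-checks several axioms on sample vectors; the values are chosen so all arithmetic is exact in floating point:

```python
# Illustrative sketch: spot-checking vector space axioms for V = R^3.

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def smul(alpha, x):
    return tuple(alpha * a for a in x)

x, y, z = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (0.0, 7.0, -1.0)
alpha, beta = 2.5, -3.0
zero = (0.0, 0.0, 0.0)

assert add(x, y) == add(y, x)                                 # axiom 3
assert add(x, add(y, z)) == add(add(x, y), z)                 # axiom 4
assert add(x, zero) == x                                      # axiom 5
assert add(x, smul(-1, x)) == zero                            # axiom 6
assert smul(alpha, smul(beta, x)) == smul(alpha * beta, x)    # axiom 7
assert smul(alpha, add(x, y)) == add(smul(alpha, x), smul(alpha, y))  # axiom 8
```

Of course, a finite check on samples is not a proof; for R³ the axioms follow from the field axioms of R applied componentwise.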
Remark
The elements of the field K are called scalars. Depending upon
whether we take K = R or C in the definition above, we get a real
vector space or a complex vector space. The multiplication will
also be referred to as scalar multiplication.
Example:
1. V = R with usual addition and multiplication.
2. V = C with usual addition of complex numbers, and with
multiplication by real numbers as the scalar multiplication. This
makes C a real vector space. C is also a complex vector
space, with usual addition and with scalar multiplication being
multiplication of complex numbers.
3. R^n = {(a1, a2, . . . , an) : a1, . . . , an ∈ R}. We refer to R^n as
n-dimensional Euclidean space. It is a real vector space.
Likewise C^n is a complex vector space.
4. Let S be any set and F(S, K) denote the set of all functions
from S to K. Given any two functions f1, f2 : S → K, and a
scalar α ∈ K, we define f1 + f2 and αf1 pointwise by
(f1 + f2)(s) = f1(s) + f2(s) and (αf1)(s) = α f1(s) for all s ∈ S.

... Indeed, we have
. . . ⊂ C^{r+1}(U) ⊂ C^r(U) ⊂ . . . ⊂ C^1(U).

7. Let t be an indeterminate. The set
K[t] = {a0 + a1 t + . . . + an t^n : a0, a1, . . . , an ∈ K}
of polynomials in t with coefficients in K is a vector space over K
under the usual addition and scalar multiplication of polynomials.
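The pointwise operations on F(S, K) can be sketched in code (mine, not from the lecture), with functions represented as callables and the new function built by evaluating the old ones:

```python
# Illustrative sketch: pointwise addition and scalar multiplication in F(S, K).

def f_add(f1, f2):
    # (f1 + f2)(s) = f1(s) + f2(s)
    return lambda s: f1(s) + f2(s)

def f_smul(alpha, f1):
    # (alpha * f1)(s) = alpha * f1(s)
    return lambda s: alpha * f1(s)

f = lambda s: s * s      # f(s) = s^2
g = lambda s: 3 * s      # g(s) = 3s

h = f_add(f, g)          # h(s) = s^2 + 3s
k = f_smul(2, f)         # k(s) = 2 s^2

assert h(4) == 28
assert k(5) == 50
```

The point is that the sum and scalar multiple are again functions from S to K, so F(S, K) satisfies the closure axioms.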
Subspaces and Linear Span

Definition
A nonempty subset W of a vector space V is called a subspace
of V if it is a vector space under the operations in V.
Theorem
A nonempty subset W of a vector space V is a subspace of V
if W satisfies the two closure axioms.
Proof: Suppose that W satisfies the closure axioms. We
just need to prove the existence of inverses and of the zero
element. Let x ∈ W. By distributivity,
0x = (0 + 0)x = 0x + 0x.
Adding the inverse of 0x (in V) to both sides gives 0x = 0; since
0x ∈ W by closure, the zero element lies in W. Similarly,
x + (−1)x = (1 + (−1))x = 0x = 0, so −x = (−1)x ∈ W by closure.
Examples
1. R is a subspace of the real vector space C. But it is not a
subspace of the complex vector space C.
2. C^r[a, b] is a subspace of the vector space C^s[a, b] for
s < r. All of them are subspaces of F([a, b]; R).
3. Mm,n(R) is a subspace of the real vector space Mm,n(C).
4. The set of points on the x-axis forms a subspace of the
plane. More generally, the set of points on a line passing
through the origin is a subspace of R^2. Likewise, the set of
real solutions of a1x1 + . . . + anxn = 0 forms a subspace of
R^n. It is called a hyperplane.
More generally, the set of solutions of a homogeneous
system of linear equations in n variables forms a subspace
of K^n. In other words, if A ∈ Mm,n(K), then the set
{x ∈ K^n : Ax = 0} is a subspace of K^n. It is called the null
space of A.
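The null space example can be checked directly: if Ax = 0 and Ay = 0, then A(x + y) = 0 and A(αx) = 0. A sketch of mine (not from the lecture), with a 2 × 3 matrix whose null space is spanned by (1, 1, 1):

```python
# Illustrative check: solutions of Ax = 0 are closed under addition and
# scalar multiplication, i.e. the null space is a subspace of K^n.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, -1, 0], [0, 1, -1]]   # null space = multiples of (1, 1, 1)
x = [1, 1, 1]
y = [2, 2, 2]

assert matvec(A, x) == [0, 0]
assert matvec(A, y) == [0, 0]

s = [xi + yi for xi, yi in zip(x, y)]   # x + y
t = [5 * xi for xi in x]                # 5x
assert matvec(A, s) == [0, 0]           # closure under addition
assert matvec(A, t) == [0, 0]           # closure under scalar multiplication
```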
Linear Span of a set in a Vector Space

Definition
Let S be a subset of a vector space V. The linear span of S is
the subset
L(S) = { Σ_{i=1}^n c_i x_i : x_1, . . . , x_n ∈ S and c_1, . . . , c_n are scalars }.
We set L(∅) = {0} by convention. A typical element Σ_{i=1}^n c_i x_i of
L(S) is called a linear combination of the x_i's. Thus L(S) is the set
of all finite linear combinations of elements of S. In case
V = L(S), we say that S spans V or generates V.
Proposition
The smallest subspace of V containing S is L(S).
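For a finite S ⊂ R^n, membership in L(S) can be tested computationally: v ∈ L(S) iff appending v to S does not increase the rank. A sketch of mine (not from the lecture; `rank` and `in_span` are my names), using exact rational elimination:

```python
# Illustrative sketch: v is in L(S) iff rank(S + [v]) == rank(S).
from fractions import Fraction

def rank(rows):
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_span(S, v):
    return rank(S + [v]) == rank(S)

S = [[1, 0, 1], [0, 1, 1]]
assert in_span(S, [2, 3, 5])        # 2*(1,0,1) + 3*(0,1,1) = (2,3,5)
assert not in_span(S, [0, 0, 1])    # no combination matches the last entry
```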
Remark
(i) Different sets may span the same subspace. For example,
{(1, 0), (0, 1)} and {(1, 1), (1, −1)} both span R^2.
Linear Dependence

Definition
Let V be a vector space. A subset S of V is called linearly
dependent (L.D.) if there exist distinct elements v1, . . . , vn ∈ S
and αi ∈ K, not all zero, such that
Σ_{i=1}^n α_i v_i = 0. (∗)
Otherwise S is called linearly independent (L.I.).
Remark
(i) Thus a relation such as (∗) with distinct vi's holds in a linearly
independent set S iff all the scalars αi = 0.
(ii) Any subset which contains a L.D. set is again L.D.
(iii) The singleton set {0} is L.D. in every vector space.
(iv) Any subset of a L.I. set is L.I.
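For finitely many vectors in R^n, linear independence is equivalent to the matrix with those vectors as rows having full row rank. A sketch of mine (not from the lecture; `rank` and `is_li` are my names):

```python
# Illustrative sketch: v1, ..., vk are linearly independent iff the matrix
# with rows v1, ..., vk has rank k (exact Gaussian elimination).
from fractions import Fraction

def rank(rows):
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_li(vectors):
    return rank(vectors) == len(vectors)

assert is_li([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # standard basis vectors e_i
assert not is_li([[1, 2], [2, 4]])                # 2*(1,2) - (2,4) = 0
assert not is_li([[0, 0, 0]])                     # the singleton {0} is L.D.
```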
Examples
(i) The set {ei = (0, . . . , 1, . . . , 0) : 1 ≤ i ≤ n} is L.I. in K^n.
This can be shown easily by taking the dot product of ei with
a relation of the type (∗).
(ii) The set S = {1, t, t^2, . . . , t^n, . . . } is L.I. in K[t]. This follows
from the definition of a polynomial (!). Alternatively, if we
think of the polynomial functions (from K to K) defined by the
polynomials, then linear independence can be proved by
evaluating a dependence relation, as well as its derivatives
of sufficiently high orders, at t = 0.
(iii) In the space C[a, b], for a < b ∈ R, consider the set
S = {1, cos^2 t, sin^2 t}. The familiar formula
cos^2 t + sin^2 t = 1 tells us that S is linearly dependent.
What about the set {1, cos t, sin t}?
(iv) If Eij denotes the m × n matrix with 1 in the (i, j)-th position and
0 elsewhere, then the set {Eij : i = 1, . . . , m, j = 1, . . . , n}
is linearly independent in the vector space Mm,n(K).
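Example (iii) can be illustrated in code (a sketch of mine, not from the lecture). The dependence is just the identity cos²t + sin²t − 1 = 0; for independence of {1, cos t, sin t}, evaluating a relation a + b cos t + c sin t = 0 at t = 0, π/2, π yields a linear system whose coefficient matrix is invertible, forcing a = b = c = 0:

```python
# Illustrative sketch for example (iii).
import math

# Dependence: 1*(-1) + 1*cos^2 t + 1*sin^2 t = 0 holds for every t.
for k in range(10):
    t = 0.7 * k
    assert abs(math.cos(t) ** 2 + math.sin(t) ** 2 - 1) < 1e-12

# Independence of {1, cos t, sin t}: evaluating a + b cos t + c sin t at
# t = 0, pi/2, pi gives a+b = 0, a+c = 0, a-b = 0, with coefficient matrix:
M = [[1, 1, 0], [1, 0, 1], [1, -1, 0]]
# Its determinant is nonzero, so the only solution is a = b = c = 0.
det3 = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
        - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
        + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert det3 != 0
```

Choosing sample points and checking an invertible evaluation matrix is a general trick for proving independence of functions.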