Based on author Siavash Shahshahani's extensive teaching experience, this volume presents a thorough, rigorous course on the theory of differentiable manifolds. Geared toward advanced undergraduates and graduate students in mathematics, the treatment's prerequisites include a strong background in undergraduate mathematics, including multivariable calculus, linear algebra, elementary abstract algebra, and point set topology. More than 200 exercises offer students ample opportunity to gauge their skills and gain additional insights.

The four-part treatment begins with a single chapter devoted to the tensor algebra of linear spaces and their mappings. Part II brings in neighboring points to explore integrating vector fields, Lie bracket, exterior derivative, and Lie derivative. Part III, involving manifolds and vector bundles, develops the main body of the course. The final chapter provides a glimpse into geometric structures by introducing connections on the tangent bundle as a tool to implant the second derivative and the derivative of vector fields on the base manifold. Relevant historical and philosophical asides enhance the mathematical text, and helpful Appendixes offer supplementary material.

Publisher: Dover Publications · Released: Mar 23, 2017 · ISBN: 9780486820828 · Format: book


**Part I **

**Pointwise **


Let *V* be a finite dimensional vector space over a field *F*. The set of linear mappings *V*→*F* will be denoted by *V**. This set will be endowed with the structure of a vector space over *F*. Let *α* and *β* be elements of *V** and *r* an element of *F*; then we define *α*+*β* and *rα* by

(*α*+*β*)(x) = *α*(x) + *β*(x),  (*rα*)(x) = *r* *α*(x),

where *x* is an arbitrary element of *V*. With these operations, *V** becomes a vector space over *F* and will be called the *dual space* to *V*. Suppose (e_1, . . . , e_n) is a basis for *V*. We define elements e^1, . . . , e^n of *V** by their values on e_j, *j*=1, . . . , *n*, as follows:

e^i(e_j) = δ^i_j,

where δ^i_j denotes the value 1 or 0 depending on whether *i*=*j* or *i*≠*j*. Note that any element *α* of *V** can be written as a linear combination of e^1, . . . , e^n. In fact,

*α* = Σ_i *α*(e_i) e^i,

since the value of both sides on an arbitrary basis element e_j is the same. Further, {e^1, . . . , e^n} is a linearly independent set, for if Σ_i r_i e^i = 0, evaluating both sides on the basis element e_j yields r_j = 0. Therefore, the ordered set (e^1, . . . , e^n) is a basis for *V**, called the *dual basis* for *V** relative to (e_1, . . . , e_n). Thus *V** has the same dimension as *V*.
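Concretely, if the basis vectors of F^n are taken as the columns of an invertible matrix E, then the coordinate rows of the dual basis functionals are the rows of E⁻¹, since E⁻¹E = I restates the defining property e^i(e_j) = δ^i_j. A minimal numpy sketch (the basis below is an arbitrary invertible choice, not taken from the text):

```python
import numpy as np

# Basis vectors of R^3 as the columns of E (an arbitrary invertible choice).
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis functionals are the rows of E^{-1}:
# row i of E_inv applied to column j of E gives delta^i_j.
E_inv = np.linalg.inv(E)

delta = E_inv @ E  # should be the identity matrix, i.e. e^i(e_j) = delta^i_j
assert np.allclose(delta, np.eye(3))
```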

By repeating the operation of taking duals, one can look at (*V**)*, the so-called *double dual* of *V*, usually denoted by *V***. The double dual will then have the same dimension as the original space *V*, and since all linear spaces of the same dimension over a given field are isomorphic, there are isomorphisms between *V*, *V** and *V***. But in the case of *V* and *V***, there is a distinguished **natural isomorphism**, denoted by I_V : *V*→*V*** and defined by

(I_V(v))(*α*) = *α*(v),  v∈*V*, *α*∈*V**.

It follows from (1.1) and (1.2) that I_V(v) is indeed linear, i.e., it is a member of *V***. That I_V is linear follows from the linearity of *α*. To show that I_V is an isomorphism, it suffices to show that its kernel is {0}, since the domain and target linear spaces are finite dimensional of the same dimension. But *α*(v)=0 for all *α* in *V** implies that v=0, and the isomorphism is established. Note that the definition of I_V was independent of the specific nature of the linear space *V* or the choice of basis for it. In fact, one can state the following general assertion.

**1. Theorem** *For any basis* (e_1, . . . , e_n) *of V*, (I_V(e_1), . . . , I_V(e_n)) *is the dual basis in V*** *relative to the basis* (e^1, . . . , e^n) *for V**.

PROOF. We must show

(I_V(e_i))(e^j) = δ_i^j,  *i*, *j* = 1, . . . , *n*.

This is a consequence of the definitions of I_V and of the dual basis, since (I_V(e_i))(e^j) = e^j(e_i) = δ_i^j.

By virtue of the natural isomorphism I_V, the space *V*** is often identified with *V*. Under this identification, I_V(e_i) is identified with e_i, so that (e_1, . . . , e_n) becomes the dual basis for *V*** relative to (e^1, . . . , e^n).

Let V_1, . . . , V_p and *W* be vector spaces over a field. A map *α* : V_1×⋯×V_p→*W* is called ***p*-linear** provided that, upon fixing any *p*−1 of the arguments, the resulting map in the remaining argument is linear. Many products in elementary mathematics are of this nature.

**2. Examples **

(a) Let *V* be a vector space over a field *F*. Regard *F* as a one-dimensional vector space over *F*. Then the product *F*×*V*→*V* given by

(*r*, *v*)↦*rv *

is 2-linear (*bilinear*).

(b) Let *F *be a field. Then the *p*-fold product *F*× ⋯ ×*F*→*F *given by

(*r*1, . . . , *rp*)↦*r*1⋯*rp *

is *p*-linear.

(c) Let *V* be a real vector space. Then any inner product *V*×*V*→**R** is another example of a bilinear mapping. In general, let *β* : *V*×*V*→*F* be bilinear and consider a basis (e_1, . . . , e_n) for *V*. The *n*×*n* matrix B=[β_ij], where β_ij = *β*(e_i, e_j), determines *β* completely as

*β*(x, y) = Σ_{i,j} β_ij x^i y^j,  where x = Σ_i x^i e_i and y = Σ_j y^j e_j.

If B is a symmetric matrix with positive eigenvalues, then *β* is an inner product. Conversely, any inner product on *V* is obtained in this manner.
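In coordinates, the matrix representation says *β*(x, y) = xᵀBy. A small numpy sketch; the symmetric positive-definite matrix and the vectors below are arbitrary illustrations, not taken from the text:

```python
import numpy as np

# An arbitrary symmetric matrix with positive eigenvalues, so beta is an inner product.
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def beta(x, y):
    """beta(x, y) = sum_ij B_ij x^i y^j = x^T B y."""
    return x @ B @ y

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

assert np.isclose(beta(x, y), beta(y, x))  # symmetric
assert beta(x, x) > 0                      # positive on this example
```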

(d) For a vector space *V *over a field *F*, the *evaluation pairing V*×*V**→*F*, given by (*v*, *α*)↦*α*(*v*), is bilinear.

(e) Let *F* be a field and V_1, . . . , V_p; W_1, . . . , W_q be vector spaces over *F*. Suppose *p*-linear and *q*-linear maps *α* : V_1×⋯×V_p→*F* and *β* : W_1×⋯×W_q→*F* are given. Then the *tensor product*

*α *⊗ *β *: *V1 *× ⋯ × *Vp *× *W*1 × ⋯ × *Wq *→ *F *

is defined by

(*α*⊗*β*)(v_1, . . . , v_p, w_1, . . . , w_q) = *α*(v_1, . . . , v_p) *β*(w_1, . . . , w_q).

Note that *α*⊗*β *is a (*p*+*q*)-linear mapping. Further, it follows from the associativity of the product operation in the field *F *that ⊗ is associative, hence the product *α*1⊗⋯⊗*αk *is unambiguously defined by induction.
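The definition transcribes directly: *α*⊗*β* multiplies the values of the two maps on the split argument list. A sketch with two covectors on R² (the maps and vectors are arbitrary illustrations):

```python
import numpy as np

def tensor_product(alpha, p, beta, q):
    """Return the (p+q)-linear map
    (alpha ⊗ beta)(v_1..v_p, w_1..w_q) = alpha(v_1..v_p) * beta(w_1..w_q)."""
    def product(*args):
        assert len(args) == p + q
        return alpha(*args[:p]) * beta(*args[p:])
    return product

# Two 1-linear maps (covectors) on R^2.
alpha = lambda v: v[0] + 2 * v[1]
beta = lambda w: 3 * w[0] - w[1]

ab = tensor_product(alpha, 1, beta, 1)
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# ab is bilinear; check linearity in the first slot on this example:
assert np.isclose(ab(u + v, v), ab(u, v) + ab(v, v))
```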

In what follows, *V *will be a finite dimensional vector space over a field *F*. The *n*-fold product *V*×⋯×*V *will be denoted by *Vn*.

**3. Definition **

(a) A *p*-linear map V^p→*F* will be called a **covariant p-tensor**, or a (*p*, 0)-tensor on *V*.

(b) A *q*-linear map (*V**)^q→*F* will be called a **contravariant q-tensor**, or a (0, *q*)-tensor on *V*.

(c) A (*p*+*q*)-linear map V^p×(*V**)^q→*F* will be called a **mixed (p, q)-tensor**, or a (*p*, *q*)-tensor on *V*.

**4. Examples** An element of *V** is a covariant 1-tensor on *V*. In view of the natural isomorphism I_V, any member of *V* may be regarded as a contravariant tensor on *V*. The evaluation pairing (Example 2d) is a (1, 1)-tensor on *V*. Inner products are examples of covariant 2-tensors.

We use the symbols L^p(V), L_q(V) and L^p_q(V), respectively, to denote the sets of (*p*, 0)-, (0, *q*)- and (*p*, *q*)-tensors on *V*. Under functional addition and multiplication by elements of the field *F*, each of these becomes a vector space over *F*. The dimensions of these spaces are, respectively, n^p, n^q and n^{p+q}, as the following will imply.

**5. Basis for the Space of Tensors** *Let* (e_1, . . . , e_n) *be a basis for V. Then the following are basis elements for the spaces of tensors*.

(a) *For* L^p(V): e^{i_1}⊗⋯⊗e^{i_p}, 1 ≤ i_1, . . . , i_p ≤ n.

(b) *For* L_q(V): e_{j_1}⊗⋯⊗e_{j_q}, 1 ≤ j_1, . . . , j_q ≤ n (elements of *V* being regarded as elements of *V*** via I_V).

(c) *For* L^p_q(V): e^{i_1}⊗⋯⊗e^{i_p}⊗e_{j_1}⊗⋯⊗e_{j_q}, 1 ≤ i_1, . . . , i_p, j_1, . . . , j_q ≤ n.

PROOF. Note that by virtue of Example 2e, the displayed tensors are actually elements of the stated spaces. We prove the third case, which includes the other two. To show linear independence, suppose that

Σ c^{j_1⋯j_q}_{i_1⋯i_p} e^{i_1}⊗⋯⊗e^{i_p}⊗e_{j_1}⊗⋯⊗e_{j_q} = 0.

By applying the two sides to (e_{i_1}, . . . , e_{i_p}, e^{j_1}, . . . , e^{j_q}), we obtain c^{j_1⋯j_q}_{i_1⋯i_p} = 0. Further, any *α*∈L^p_q(V) can be written as

*α* = Σ *α*(e_{i_1}, . . . , e_{i_p}, e^{j_1}, . . . , e^{j_q}) e^{i_1}⊗⋯⊗e^{i_p}⊗e_{j_1}⊗⋯⊗e_{j_q},

which can be verified by applying both sides to (e_{i_1}, . . . , e_{i_p}, e^{j_1}, . . . , e^{j_q}).

By convention, we let *L*⁰*V*=*L*0*V*=*F*.

**6. Change of Basis **

The bases introduced above for the spaces of tensors, as well as the resulting components of the tensors, depend on the original choice of basis for the linear space. We are now going to investigate how a linear change of basis for the space affects the value of tensor components. We take *V* to be an *n*-dimensional vector space over *F*. It will be convenient to write *n*×*n* matrices with entries from *F* as A=[a^i_j], where the superscript denotes the row index and the subscript indicates the column of the matrix entry. Suppose two bases B=(e_1, . . . , e_n) and B̄=(ē_1, . . . , ē_n) are given for *V*, related as

ē_j = Σ_i a^i_j e_i.

Thus the components of ē_j with respect to the basis B are the entries of the *j*th column of A. Corresponding to B and B̄, we have the dual bases B*=(e^1, . . . , e^n) and B̄*=(ē^1, . . . , ē^n) for *V**. We will first investigate the linear relationship between these two bases. We write

ē^i = Σ_j b^j_i e^j,  B = [b^j_i].

Therefore, the components of ē^i with respect to the basis B* are the entries of the *i*th column of B. To identify B, we note that

δ^i_j = ē^i(ē_j) = Σ_k b^k_i a^k_j.

Therefore, the matrix B is the inverse of the transpose of the matrix A:

B⁻¹ = Aᵀ.
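The relation B⁻¹ = Aᵀ is easy to check numerically: taking the old basis to be the standard basis of R³, the new basis vectors are the columns of A, and the coefficient matrix of the new dual basis is (Aᵀ)⁻¹. A sketch with an arbitrary invertible A:

```python
import numpy as np

# Take (e_1, e_2, e_3) to be the standard basis of R^3, so the new basis
# vectors ebar_j are simply the columns of an (arbitrary) invertible A.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

Bmat = np.linalg.inv(A.T)  # B^{-1} = A^T, so B = (A^T)^{-1}

# Column i of Bmat holds the coefficients of ebar^i in the old dual basis,
# so the pairing ebar^i(ebar_j) = sum_k Bmat[k, i] * A[k, j] must be delta^i_j.
pairing = Bmat.T @ A
assert np.allclose(pairing, np.eye(3))
```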

Now let *α* be a (*p*, *q*)-tensor on *V*. With respect to the above bases, the following two representations for *α* are obtained:

*α* = Σ α^{j_1⋯j_q}_{i_1⋯i_p} e^{i_1}⊗⋯⊗e^{i_p}⊗e_{j_1}⊗⋯⊗e_{j_q} = Σ ᾱ^{l_1⋯l_q}_{k_1⋯k_p} ē^{k_1}⊗⋯⊗ē^{k_p}⊗ē_{l_1}⊗⋯⊗ē_{l_q},

where ᾱ^{l_1⋯l_q}_{k_1⋯k_p} = *α*(ē_{k_1}, . . . , ē_{k_p}, ē^{l_1}, . . . , ē^{l_q}). Using the relations between the two pairs of bases, this is equal to

*α*(Σ a^{i_1}_{k_1} e_{i_1}, . . . , Σ a^{i_p}_{k_p} e_{i_p}, Σ b^{j_1}_{l_1} e^{j_1}, . . . , Σ b^{j_q}_{l_q} e^{j_q}).

Thus we have obtained the desired formula for the change of tensor components under a linear change of basis:

ᾱ^{l_1⋯l_q}_{k_1⋯k_p} = Σ a^{i_1}_{k_1}⋯a^{i_p}_{k_p} b^{j_1}_{l_1}⋯b^{j_q}_{l_q} α^{j_1⋯j_q}_{i_1⋯i_p}.

In classical treatments, a (*p*, *q*)-tensor is often defined as a system of components which transform under a linear change of basis according to this formula.

(a) **Special case (p=1, q=0)** For a covariant 1-tensor

*α* = Σ_i α_i e^i = Σ_k ᾱ_k ē^k,

we obtain

ᾱ_k = Σ_i a^i_k α_i.

(b) **Special case (p=0, q=1)** Consider a contravariant 1-tensor, or by virtue of the natural isomorphism I_V, an element *x* of *V*,

x = Σ_j x^j e_j = Σ_l x̄^l ē_l.

In this case, we have

x̄^l = Σ_j b^j_l x^j.

**7. Functoriality **

Let *V* and *W* be vector spaces over a field *F*, and suppose *f* : *V*→*W* is a linear map. For each non-negative integer *p*, a map L^p f : L^p W→L^p V is defined as follows.

If *p*=0, L⁰f is the identity map of *F*. For *p* > 0, suppose *α*∈L^p W and v_1, . . . , v_p∈*V*; then

((L^p f)(*α*))(v_1, . . . , v_p) = *α*(*f*(v_1), . . . , *f*(v_p)).

That (L^p f)(*α*) ∈ L^p V follows from the linearity of *f* and the fact that *α* ∈ L^p W. The linearity of L^p f follows from the definition of linear space operations in the space of tensors. The following two properties are straightforward consequences of the definition and establish L^p as a *contravariant functor*.

(a) *For any vector space V and any non-negative integer p*,

L^p(id_V) = id_{L^p V}.

(b) *For linear maps f* : *V* → *W and g* : *U* → *V*, *and any non-negative integer p*,

L^p(*f*∘*g*) = L^p g ∘ L^p f.

Of course, L¹V = *V**. The induced linear map L¹f is denoted by *f**. Note that by definition, L^q(V*) = L_q V. For a linear map *f* : *V*→*W*, we denote L^q(*f**) by L_q f. The following properties follow from (a) and (b) above and are summarized by saying that L_q is a *covariant functor*.

(c) *For any vector space V and non-negative integer q*,

L_q(id_V) = id_{L_q V}.

(d) *For linear maps f* : *V* → *W and g* : *U* → *V*, *and any non-negative integer q*,

L_q(*f*∘*g*) = L_q f ∘ L_q g.

The so-called *anti-symmetric tensors *are among the most powerful tools in the study of geometric structures. As we shall see in the following section, these are closely related to the concepts of volume and orientation in the case of real vector spaces.

We recall some elementary facts about the group S_n of permutations on *n* symbols {1, . . . , *n*}. A **transposition** is a permutation that exchanges two symbols and leaves the other symbols fixed. Any permutation can be written as a composition of transpositions; while such a representation is not unique, the parity of the number of transpositions is the same for every representation of a given permutation σ, and the *sign* ε(σ) is defined to be +1 or −1 according as this number is even or odd.

**8. Definition** Let *V* be a finite-dimensional linear space over a field *F* of characteristic 0 and let *p* be a natural number. An element *α*∈L^p V is called **anti-symmetric (or alternating)** if for every σ∈S_p and all u_1, . . . , u_p in *V*,

*α*(u_{σ(1)}, . . . , u_{σ(p)}) = ε(σ) *α*(u_1, . . . , u_p).

We denote the set of anti-symmetric elements of *LpV *by ⋀*pV*. By convention, ⋀⁰*V*=*L*⁰*V*=*F*.

**9. Elementary Properties of Anti-symmetric Tensors **

(a) A tensor *α*∈L^p V is anti-symmetric if and only if for each transposition τ and any u_1, . . . , u_p *in V*,

*α*(u_{τ(1)}, . . . , u_{τ(p)}) = −*α*(u_1, . . . , u_p).

PROOF. The statement follows from the facts that ε(τ) = −1 for a transposition τ, that any permutation is a composition of transpositions, and that ε is multiplicative on compositions.

(b) A tensor *α*∈*LpV is anti-symmetric if and only if it has the property that α(u1 , . . . , up)=0 whenever ui=uj for i≠j*.

PROOF. Suppose the property holds and *u*1 , . . . , *up *are elements of *V*. Taking *i*<*j*, and expanding *α*(*u*1 , . . . , *ui *+ *uj *, . . . , *ui *+ *uj *, . . . , *up*) we obtain

0 = *α*(u_1, . . . , u_i, . . . , u_i, . . . , u_p) + *α*(u_1, . . . , u_i, . . . , u_j, . . . , u_p) + *α*(u_1, . . . , u_j, . . . , u_i, . . . , u_p) + *α*(u_1, . . . , u_j, . . . , u_j, . . . , u_p)

The first and the last term above vanish by the property, and the result follows.

Conversely, suppose that for *i*<*j*, we have *ui*=*uj*=*u *and consider the transposition that switches *i *and *j*. Applying (a) we obtain

*α*(*u*1, . . . , *ui*, . . . , *uj*, . . . , *up*) = −*α*(*u*1, . . . , *uj*, . . . , *ui*, . . . , *up*)

Since the field characteristic was assumed to be 0, we have 1 ≠ − 1, and

*α*(*u*1, . . . , *u *, . . . , *u *, . . . , *up*) = 0

(c) *Let α*∈⋀*pV*. *lf *{*u*1 , . . . , *up*}⊂ *V is linearly dependent, then α*(*u*1 , . . . , *up*)=0.

PROOF. We write one u_i as a linear combination of the rest and expand by *p*-linearity; each resulting term has two equal arguments and therefore vanishes by 9b.

**10. Basis for ⋀^p V**

Let (e_1, . . . , e_n) be a basis for *V* and consider an element *α*∈⋀^p V. If *p*>*n*, then *α*=0 by 9c above. For 0<*p*≤*n*, anti-symmetry implies that *α* is completely determined by its values on *p*-tuples (e_{i_1}, . . . , e_{i_p}), where i_1<⋯<i_p. Therefore, to define an element of ⋀^p V, it suffices to specify its values on such *p*-tuples. For each multi-index i_1<⋯<i_p, we define an element e^{i_1⋯i_p} of ⋀^p V by giving its value as

e^{i_1⋯i_p}(e_{j_1}, . . . , e_{j_p}) = det[δ^{i_k}_{j_l}].

There are (n choose p) such elements in ⋀^p V. One may extend the definition to an arbitrary multi-superscript (i_1⋯i_p) by setting e^{i_1⋯i_p}=0 if there is repetition in superscripts, and by multiplying by ε(σ), where σ is the permutation that arranges i_1, . . . , i_p in increasing order.

We can now state and prove a couple of very useful propositions.

(a) *Let* (e_1, . . . , e_n) *be a basis for V. Then for* 0<*p*≤*n*, *a basis for* ⋀^p V *is given by the elements* e^{i_1⋯i_p}, *where* i_1<⋯<i_p. *Further*, dim ⋀⁰V=1 *and* dim ⋀^p V=0 *for p*>*n*.

PROOF. By earlier convention, ⋀⁰V = L⁰V is the underlying field. The case *p*>*n* was treated at the beginning of the previous paragraph. For 0<*p*≤*n*, suppose that

Σ_{i_1<⋯<i_p} c_{i_1⋯i_p} e^{i_1⋯i_p} = 0.

Applying both sides to (e_{j_1}, . . . , e_{j_p}), where j_1<⋯<j_p, yields c_{j_1⋯j_p}=0, and linear independence is established. Further, we have the representation

*α* = Σ_{j_1<⋯<j_p} *α*(e_{j_1}, . . . , e_{j_p}) e^{j_1⋯j_p}

for *α*∈⋀^p V, which can be verified by evaluating both sides on the *p*-tuples (e_{j_1}, . . . , e_{j_p}), j_1<⋯<j_p.

Let dim *V*=*n*. Any non-zero element of ⋀*nV *is called a *volume element *for *V *and serves as a basis for this one-dimensional linear space. The following amplifies 9c in the case *p*=*n*.

(b) *Let *dim *V*=*n and ω be a volume element for V. Then a subset *{*u*1 *, . . . , un*}⊂*V is linearly dependent if and only if ω*(*u*1 , . . . , *un*)=0.

PROOF. As shown in 9c, linear dependence of {u_1, . . . , u_n} implies that *ω*(u_1, . . . , u_n)=0. Conversely, if {u_1, . . . , u_n} is linearly independent, then it is a basis for the *n*-dimensional space *V*. Therefore, *ω*(u_1, . . . , u_n)=0 would imply that *ω* vanishes on any *n*-tuple of elements of *V*, i.e., *ω*=0, contrary to the assumption that *ω* is a volume element.

**11. Functoriality **

Let *f* : *V*→*W* be a linear map. We recall the definition of L^p f : L^p W→L^p V in (1.16). If *α*∈⋀^p W, it follows that L^p f(*α*)∈⋀^p V. Thus, denoting the restriction of L^p f to ⋀^p W by ⋀^p f, we obtain a linear map

⋀^p f : ⋀^p W → ⋀^p V

by

(⋀^p f(*α*))(u_1, . . . , u_p) = *α*(*f*(u_1), . . . , *f*(u_p)).

For *p* > *n* = min{dim *V*, dim *W*}, ⋀^p f is the zero map, since the domain or the target is then the zero space. From Subsection 7, we obtain the following by restriction.

(a) *For any linear space V and any non-negative integer p*,

⋀^p(id_V) = id_{⋀^p V}.

(b) *For linear maps f* : *V*→*W and g* : *U*→*V, and any non-negative integer p*,

⋀^p(*f*∘*g*) = ⋀^p g ∘ ⋀^p f.

Note that ⋀¹*V *=*L*¹ *V *and ⋀¹*f*=*L*¹*f*=*f**.

**12. Determinants **

An important consequence of the one-dimensionality of ⋀^n V is that for a linear map *f* : *V*→*V*, the induced linear map ⋀^n f : ⋀^n V→⋀^n V is multiplication by a (fixed) element of the field. This element we call the *determinant* of *f* and denote it by det *f*. Thus,

⋀^n f(*ω*) = (det *f*) *ω*  for all *ω*∈⋀^n V.

In the next section on real vector spaces we will give an incisive geometric interpretation of the determinant, but for now we concentrate on developing the formal algebraic properties of the concept.

**13. Elementary Properties of the Determinant **

(a) We have

det(id_V) = 1,  det(*f*∘*g*) = (det *f*)(det *g*).

These are consequences of (1.25) and (1.26).

(b) *A linear map f* : *V*→*V is invertible if and only if* det *f*≠0. *In this case*, det(*f*⁻¹) = (det *f*)⁻¹.

PROOF. The second statement is a consequence of 13a. For the first, let (e_1, . . . , e_n) be a basis for *V* and *ω* a volume element. By the definition of the determinant,

(det *f*) *ω*(e_1, . . . , e_n) = (⋀^n f(*ω*))(e_1, . . . , e_n) = *ω*(*f*(e_1), . . . , *f*(e_n)).

By 10b, we have *ω*(e_1, . . . , e_n)≠0; therefore det *f*≠0 if and only if the set {*f*(e_1), . . . , *f*(e_n)} is linearly independent, i.e., if and only if *f* is invertible.

(c) **Expansion of the Determinant **

*Suppose that the matrix of a linear map f* : *V*→*V relative to a basis for V is* A=[a^i_j]; *then*

det *f* = Σ_{σ∈S_n} ε(σ) a^{σ(1)}_1 a^{σ(2)}_2 ⋯ a^{σ(n)}_n.   (1.29)

PROOF. Let (e_1, . . . , e_n) be a basis for *V*, and consider the volume element e^{1⋯n} for *V* (see (1.22) for the definition of e^{1⋯n}). Thus e^{1⋯n}(e_1, . . . , e_n)=1. Now using (1.28),

det *f* = (det *f*) e^{1⋯n}(e_1, . . . , e_n) = e^{1⋯n}(*f*(e_1), . . . , *f*(e_n)) = Σ_{i_1, . . . , i_n} a^{i_1}_1⋯a^{i_n}_n e^{1⋯n}(e_{i_1}, . . . , e_{i_n}).

If there is any repetition among i_1, . . . , i_n, we get e^{1⋯n}(e_{i_1}, . . . , e_{i_n})=0; otherwise (e_{i_1}, . . . , e_{i_n}) represents a permutation σ of (e_1, . . . , e_n), and e^{1⋯n}(e_{i_1}, . . . , e_{i_n})=ε(σ), which yields (1.29).

Note that as σ ranges over S_n in the sum (1.29), σ⁻¹ also ranges over S_n. Moreover, ε(σ)=ε(σ⁻¹); therefore one may also write

det *f* = Σ_{σ∈S_n} ε(σ) a^1_{σ(1)} a^2_{σ(2)} ⋯ a^n_{σ(n)}.   (1.30)

An interpretation of this result is that the determinant of the transpose of a matrix is equal to the determinant of the original matrix. Equivalently, in (1.29) the products of matrix entries are picked consecutively from columns 1 to *n*, while in (1.30) the products are taken consecutively from rows 1 to *n*. All familiar formulas about the expansion of the determinant according to column or row follow from (1.29) and (1.30). For these facts and a generalization, see Exercise 1.6 at the end of the chapter.
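Both permutation sums (1.29) and (1.30) can be checked against a numerical determinant; the 3×3 matrix below is an arbitrary example:

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of indices 0..n-1,
    computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_by_columns(a):
    """Formula (1.29): sum over sigma of eps(sigma) * a[sigma(1),1] ... a[sigma(n),n]."""
    n = a.shape[0]
    return sum(sign(s) * np.prod([a[s[j], j] for j in range(n)])
               for s in permutations(range(n)))

def det_by_rows(a):
    """Formula (1.30): sum over sigma of eps(sigma) * a[1,sigma(1)] ... a[n,sigma(n)]."""
    n = a.shape[0]
    return sum(sign(s) * np.prod([a[j, s[j]] for j in range(n)])
               for s in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

assert np.isclose(det_by_columns(A), np.linalg.det(A))
assert np.isclose(det_by_rows(A), np.linalg.det(A))  # det(A^T) = det(A)
```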

In this section we take *V*=**R**^n with the standard basis (e_1, . . . , e_n); a vector will be represented as x=(x^1, . . . , x^n). We will first try to find an interpretation for the elements e^{i_1⋯i_p} of ⋀^p(**R**^n). Let us look at the cases *p*=1, 2, 3, where x, y and z denote elements of **R**^n:

e^i(x) = x^i,   (1.31)

e^{ij}(x, y) = det [[x^i, y^i], [x^j, y^j]] = x^i y^j − x^j y^i,   (1.32)

e^{ijk}(x, y, z) = det [[x^i, y^i, z^i], [x^j, y^j, z^j], [x^k, y^k, z^k]].   (1.33)

From elementary analytic geometry, we know that the absolute values of the right-hand sides of the above have, respectively, the following interpretations: the length of the projection of x on the *i*-axis, the area of the projection of the parallelogram determined by x and y on the (*i*, *j*)-plane, and the volume of the projection of the parallelepiped determined by x, y and z on the (*i*, *j*, *k*)-space. Further, the signs of the above have the following meaning. In (1.31), x^i is positive or negative depending on whether the projection of x points in the same direction as e_i or against it. In (1.32), the determinant is positive or negative depending on whether the projection of the ordered pair (x, y) is right-handed or left-handed relative to the ordered pair (e_i, e_j). Likewise, the sign of the determinant in (1.33) signifies whether the projection of the ordered triple (x, y, z) on (*i*, *j*, *k*)-space has the same or opposite handedness as the ordered triple (e_i, e_j, e_k). (See Figure 1.)

Based on the above intuition, we generalize the notions of volume and orientation to arbitrary real linear spaces. Let *V* be a real linear space of dimension *n*, and consider two ordered bases B=(e_1, . . . , e_n) and B̄=(ē_1, . . . , ē_n) for this space. There is a unique linear map *f* : *V*→*V* with *f*(e_j)=ē_j, *j*=1, . . . , *n*. *f* is invertible as it carries a basis to a basis, so det *f*≠0. We say that B̄ has the *same orientation* as B if and only if det *f*>0. This is an equivalence relation and breaks up the set of ordered bases for *V* into two classes, each called an *orientation* for *V*. An equivalent approach is the following. For each ordered basis B=(e_1, . . . , e_n), consider the corresponding volume element e^{1⋯n} as in (1.22); the volume element corresponding to B̄ is a positive multiple of the one corresponding to B if and only if B̄ has the same orientation as B, since the determinant of the relating linear map is then positive. Thus, the non-zero elements of the one-dimensional space ⋀^n V (i.e., the volume elements) break up into two classes, each signifying one of the two orientations of *V*.

Continuing as above with the real linear space *V* of dimension *n*, we consider a volume element *ω* on *V*. Let (a_1, . . . , a_n) be an ordered *n*-tuple of elements of *V*. We define the ***n*-dimensional parallelepiped determined** by (a_1, . . . , a_n) to be the set

P(a_1, . . . , a_n) = {t_1 a_1 + ⋯ + t_n a_n : 0 ≤ t_i ≤ 1}.

FIGURE 1. Orientation

The **volume** (**relative** to *ω*) of the parallelepiped P(a_1, . . . , a_n) is defined to be |*ω*(a_1, . . . , a_n)|.

This definition is compatible with the discussion at the beginning of the section. We let (u_1, . . . , u_n) be a basis for *V* with *ω*(u_1, . . . , u_n) = 1, i.e., we take P(u_1, . . . , u_n) to be a unit parallelepiped relative to *ω*. Consider the linear map *f* : *V*→*V* that sends u_j to a_j for each *j*=1, . . . , *n*. Then

|*ω*(a_1, . . . , a_n)| = |*ω*(*f*(u_1), . . . , *f*(u_n))| = |det *f*| *ω*(u_1, . . . , u_n) = |det *f*|.

Note that the matrix of *f *relative to the basis (*u*1 , . . . , *un*) has *a*1 , . . . , *an *as columns.
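In R^n with the standard volume element e^{1⋯n}, this says the volume of the parallelepiped spanned by a_1, . . . , a_n is the absolute value of the determinant of the matrix having these vectors as columns. A quick numerical illustration (the spanning vectors are arbitrary):

```python
import numpy as np

# Spanning vectors of a parallelepiped in R^3, as columns of a matrix.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([1.0, 2.0, 0.0])
a3 = np.array([0.0, 1.0, 3.0])
M = np.column_stack([a1, a2, a3])

# Volume relative to the standard volume element = |det M|.
volume = abs(np.linalg.det(M))
# The matrix is triangular, so the volume is the product 1 * 2 * 3 = 6.
assert np.isclose(volume, 6.0)
```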

In Example 2e, we defined the tensor product *α*⊗*β *of *α*∈*LpV *and *β*∈*LqV *to be an element of *Lp+qV*. It is not the case, however, that if *α*∈⋀*pV *and *β*∈⋀*qV*, then *α*⊗*β *is anti-symmetric, i.e., it is an element of ⋀*p*+*qV*. Here we modify ⊗ by a sort of anti-symmetric averaging

to obtain an anti-symmetric tensor.

Let *α*∈⋀^p V and *β*∈⋀^q V. Then the *wedge product* *α*∧*β*∈⋀^{p+q} V is defined by

(*α*∧*β*)(u_1, . . . , u_{p+q}) = (1/(p! q!)) Σ_{σ∈S_{p+q}} ε(σ) *α*(u_{σ(1)}, . . . , u_{σ(p)}) *β*(u_{σ(p+1)}, . . . , u_{σ(p+q)}).

As we shall soon see, the choice of coefficient before the summation above makes the associative law come out true for the wedge product and allows for computational simplifications.

**14. Elementary Properties of the Wedge Product **

(a) *If α*∈⋀*pV and β*∈⋀*qV*, then *α*∧*β*∈⋀*p+qV*.

PROOF. Let ρ∈S_{p+q}. Then, substituting in the definition,

(*α*∧*β*)(u_{ρ(1)}, . . . , u_{ρ(p+q)}) = (1/(p! q!)) Σ_{σ∈S_{p+q}} ε(σ) *α*(u_{σρ(1)}, . . . , u_{σρ(p)}) *β*(u_{σρ(p+1)}, . . . , u_{σρ(p+q)}),

where σρ denotes the composite permutation *i*↦ρ(σ(*i*)). For a fixed ρ∈S_{p+q}, the element σρ takes on all the values in the group S_{p+q} as σ does, so we may consider the above summation as running over τ=σρ. Since ε(σ)=ε(τ)ε(ρ), it follows that

(*α*∧*β*)(u_{ρ(1)}, . . . , u_{ρ(p+q)}) = ε(ρ) (*α*∧*β*)(u_1, . . . , u_{p+q}).

(b) *The wedge product is a bilinear map* ⋀^p V × ⋀^q V → ⋀^{p+q} V.

PROOF. This is an immediate consequence of the definition of the wedge product, each term of the defining sum being bilinear in (*α*, *β*).

(c) *The wedge product is associative*.

PROOF. We take *α*∈⋀^p V, *β*∈⋀^q V, *γ*∈⋀^r V and u_1, . . . , u_{p+q+r} in *V*. Then the value of ((*α*∧*β*)∧*γ*)(u_1, . . . , u_{p+q+r}) is equal to

(1/((p+q)! r!)) Σ_{σ∈S_{p+q+r}} ε(σ) (*α*∧*β*)(u_{σ(1)}, . . . , u_{σ(p+q)}) *γ*(u_{σ(p+q+1)}, . . . , u_{σ(p+q+r)}).

For each σ∈S_{p+q+r}, we let S_σ be the subgroup of S_{p+q+r} consisting of permutations that leave each of σ(*p*+*q*+1) to σ(*p*+*q*+*r*) fixed. This is isomorphic to S_{p+q}. The expression inside the summation above is then equal to

(1/(p! q!)) Σ_{ρ∈S_σ} ε(ρ) *α*(u_{ρσ(1)}, . . . , u_{ρσ(p)}) *β*(u_{ρσ(p+1)}, . . . , u_{ρσ(p+q)}) *γ*(u_{σ(p+q+1)}, . . . , u_{σ(p+q+r)}).

Now for given σ, τ∈S_{p+q+r} we consider the equation ρσ=τ, where ρ∈S_σ. For *i*>(*p*+*q*), the definition of S_σ implies that τ(*i*)=σ(*i*), so for a given τ, the number of σ's that can satisfy this equation is (*p*+*q*)!. On the other hand, for given τ and σ, a unique ρ satisfies this equation; therefore, since ε(ρ)ε(σ)=ε(τ), the value of the double summation above is

(p+q)! Σ_{τ∈S_{p+q+r}} ε(τ) *α*(u_{τ(1)}, . . . , u_{τ(p)}) *β*(u_{τ(p+1)}, . . . , u_{τ(p+q)}) *γ*(u_{τ(p+q+1)}, . . . , u_{τ(p+q+r)}).

It follows then that ((*α*∧*β*)∧*γ*)(u_1, . . . , u_{p+q+r}) is equal to

(1/(p! q! r!)) Σ_{τ∈S_{p+q+r}} ε(τ) *α*(u_{τ(1)}, . . . , u_{τ(p)}) *β*(u_{τ(p+1)}, . . . , u_{τ(p+q)}) *γ*(u_{τ(p+q+1)}, . . . , u_{τ(p+q+r)}).

The associativity of multiplication in the field, together with the symmetric form of this expression, shows that the alternative grouping (*α*∧(*β*∧*γ*))(u_1, . . . , u_{p+q+r}) leads to the same value, and associativity is established.

As an outcome of the above calculation, the expression *α*∧*β*∧*γ* is meaningful and the following formula holds:

(*α*∧*β*∧*γ*)(u_1, . . . , u_{p+q+r}) = (1/(p! q! r!)) Σ_{σ∈S_{p+q+r}} ε(σ) *α*(u_{σ(1)}, . . . , u_{σ(p)}) *β*(u_{σ(p+1)}, . . . , u_{σ(p+q)}) *γ*(u_{σ(p+q+1)}, . . . , u_{σ(p+q+r)}).

Inductively, the expression α_1∧⋯∧α_k is unambiguously defined for α_i∈⋀^{p_i} V, *i*=1, . . . , *k*, and the analogue of the above formula holds:

(d) *Let* α_i∈⋀^{p_i} V, *i*=1, . . . , *k*, *and* u_j∈*V*, *j*=1, . . . , p_1+⋯+p_k; *then*

(α_1∧⋯∧α_k)(u_1, . . . , u_{p_1+⋯+p_k}) = (1/(p_1!⋯p_k!)) Σ_{σ∈S_{p_1+⋯+p_k}} ε(σ) α_1(u_{σ(1)}, . . . , u_{σ(p_1)}) ⋯ α_k(u_{σ(p_1+⋯+p_{k−1}+1)}, . . . , u_{σ(p_1+⋯+p_k)}).

(e) *Let* α^i∈*V** *for i*=1, . . . , *k*; *then*

(α^1∧⋯∧α^k)(u_1, . . . , u_k) = det[α^i(u_j)].

PROOF. This is the special case of (d) for p_1=⋯=p_k=1.

(f) For *α*∈⋀^p V and *β*∈⋀^q V, one has

*α*∧*β* = (−1)^{pq} *β*∧*α*.

PROOF. By bilinearity, it suffices to verify this on wedge products of dual basis elements. Note from the definition of the wedge product that e^i∧e^j = −e^j∧e^i. Therefore, in order to transform

e^{i_1}∧⋯∧e^{i_p}∧e^{j_1}∧⋯∧e^{j_q}

into

e^{j_1}∧⋯∧e^{j_q}∧e^{i_1}∧⋯∧e^{i_p},

each of e^{j_1}, . . . , e^{j_q} must be moved, in order, *p* places to the left by making consecutive transpositions. Since there are *q* of them, it takes *pq* transpositions in all, each contributing a factor of −1.

(g) *If p is odd and α*∈⋀*pV*, *then α*⋀*α*=0.

(h) *Let* (e_1, . . . , e_n) *be a basis for V with dual basis* (e^1, . . . , e^n). *Then*

e^{i_1}∧⋯∧e^{i_p} = e^{i_1⋯i_p}.

PROOF. The two sides have the same effect on any *p*-tuple (e_{j_1}, . . . , e_{j_p}) of basis elements, by 14e and the definition of e^{i_1⋯i_p}.

Let *f* : *V*→*W* be a linear map. Recalling the induced linear maps ⋀^p f : ⋀^p W→⋀^p V, it is a routine matter to check that the induced maps preserve the wedge product.

(i) *Let f *: *V*→*W be a linear map*, *α*∈⋀*pW and β*∈⋀*qW*. *Then *

⋀*p*+*q f *(*α *∧ *β*) = ⋀*p f *(*α*) ∧ ⋀*q f *(*β*)

PROOF. Let u_1, . . . , u_{p+q} be elements of *V*. Then by definition of the induced map, (⋀^{p+q} f(*α*∧*β*))(u_1, . . . , u_{p+q}) is equal to (*α*∧*β*)(*f*(u_1), . . . , *f*(u_{p+q})), which in turn is equal to

(1/(p! q!)) Σ_{σ∈S_{p+q}} ε(σ) *α*(*f*(u_{σ(1)}), . . . , *f*(u_{σ(p)})) *β*(*f*(u_{σ(p+1)}), . . . , *f*(u_{σ(p+q)})) = (⋀^p f(*α*) ∧ ⋀^q f(*β*))(u_1, . . . , u_{p+q}).

(j) **Graded Exterior Algebra** ⋀*V

Let dim *V*=*n*. We assemble the direct sum of the spaces ⋀^p V, *p*=0, 1, . . . , *n*, as one linear space:

⋀*V = ⋀⁰V ⊕ ⋀¹V ⊕ ⋯ ⊕ ⋀^n V.

Thus an element of ⋀*V is an (*n*+1)-tuple (α_0, α_1, . . . , α_n), or a formal sum α_0+α_1+⋯+α_n, where α_i is an element of ⋀^i V. α_i is called an element of degree *i* in ⋀*V. Now the wedge product is extended to a product

∧ : ⋀**V *× ⋀**V *→ ⋀**V *

by stipulating that the distributive laws

*α *∧ (*β *+ *γ*) = (*α *∧ *β*) + (*α *∧ *γ*)

(*α *+ *β*) ∧ *γ *= (*α *∧ *γ*) + (*β *∧ *γ*)

hold. These are of course consistent with the bilinearity of ∧ as in (b). With the operations + and ∧, ⋀**V *is known as the *(graded) exterior algebra *of *V*.

For a linear map *f* : *V*→*W*, the induced linear maps ⋀^p f give rise to a linear, ∧-preserving map ⋀*W→⋀*V, which is denoted by *f**. It is also customary to denote all ⋀^p f by *f**, a convention we will henceforth adopt, unless there is danger of confusion.

**15. Interior Product or Contraction **

Let *V* be a linear space over a field *F* and let *x*∈*V*. As the final topic in this section, we consider a method for reducing the degree of an anti-symmetric tensor, known as **contraction by x** or **interior product with x**. We define an operator i_x : ⋀^p V → ⋀^{p−1} V for *p*≥1 by

(i_x *α*)(u_1, . . . , u_{p−1}) = *α*(x, u_1, . . . , u_{p−1}),

and we set i_x = 0 on ⋀⁰V.

We will now state and prove the main properties of contraction.

(a) *If α*∈⋀^p V, *then* i_x *α*∈⋀^{p−1} V.

PROOF. (*p*−1)-linearity and anti-symmetry of i_x *α* follow from the corresponding properties of *α* in its last *p*−1 arguments.

(b) i_x *α is linear with respect to x, i.e.,* i_{x+y}=i_x+i_y *for x and y in V, and* i_{rx}=r i_x *for x in V and r in F*.

PROOF. This is linearity with respect to the first argument of *α*.

(c) *For x and y in V*, i_x∘i_y = −i_y∘i_x; *therefore* i_x∘i_x = 0.

PROOF. For *p*<2, both sides are zero. Otherwise,

(i_x∘i_y *α*)(u_1, . . . , u_{p−2}) = *α*(y, x, u_1, . . . , u_{p−2}).

The exchange of *x* and *y* changes the sign of the right-hand side by the anti-symmetry of *α*, which proves the first claim; taking *y*=*x* then gives i_x∘i_x = −i_x∘i_x, hence i_x∘i_x = 0.
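For a 2-tensor the anticommutation is immediate to check numerically: with the contraction acting as (i_x α)(u) = α(x, u), we get i_x i_y α = α(y, x) = −α(x, y) = −i_y i_x α. A sketch using an arbitrary anti-symmetric 2-tensor on R³:

```python
import numpy as np

# An anti-symmetric 2-tensor on R^3: alpha(u, v) = u^T M v with M = -M^T.
M = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])

def alpha(u, v):
    return u @ M @ v

def contract(x, form):
    """Contraction: (i_x form)(u_1, ..., u_{p-1}) = form(x, u_1, ..., u_{p-1})."""
    return lambda *us: form(x, *us)

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])

# i_x i_y alpha and i_y i_x alpha are 0-tensors, i.e. scalars:
assert np.isclose(contract(x, contract(y, alpha))(), -contract(y, contract(x, alpha))())
assert np.isclose(contract(x, contract(x, alpha))(), 0.0)
```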

(d) **Basic Example **

*Let* (e_1, . . . , e_n) *be a basis for V. Then*

i_{e_k}(e^{i_1⋯i_p}) = 0 if *k*∉{i_1, . . . , i_p}, and i_{e_k}(e^{i_1⋯i_p}) = (−1)^{v−1} e^{i_1⋯î_v⋯i_p} if *k*=i_v,   (1.39)

*where the symbol* ^ *always indicates the deletion of the term beneath it*.

PROOF. We check that the two sides have the same effect on an arbitrary ordered (*p*−1)-tuple (e_{j_2}, . . . , e_{j_p}): by 14e and (h), the value of the left-hand side is the determinant of the matrix [e^{i_µ}(e_{j_λ})], where j_1=*k*. If *k* is not one of {i_1, . . . , i_p}, then using 14e we see that the first column of the matrix consists of zeros, so the determinant is zero. Now suppose *k*=i_v; then the first column has a single 1 in the *v*th place, and expanding along this column yields the sign (−1)^{v−1}. If the subscript set {j_2, . . . , j_p} is not the same as {i_1, . . . , i_{v−1}, i_{v+1}, . . . , i_p}, both sides vanish; otherwise the remaining minor equals e^{i_1⋯î_v⋯i_p}(e_{j_2}, . . . , e_{j_p}), and the two sides agree.

(e) *Let α*∈⋀^p V, *β*∈⋀^q V *and x*∈*V*. *Then*

i_x(*α*∧*β*) = (i_x *α*)∧*β* + (−1)^p *α*∧(i_x *β*).   (1.40)

PROOF. We take an ordered basis (e_1, . . . , e_n) for *V*. Because of the bilinearity of the wedge product and the linearity of i_x with respect to *x*, it suffices to consider the case where *x*=e_k, *α*=e^{i_1⋯i_p} and *β*=e^{j_1⋯j_q}, where i_1<⋯<i_p and j_1<⋯<j_q. We consider four cases:

*Case 1: k*∉{*i*1 , . . . , *ip*}⋃{*j*1 , . . . , *jq*}.

In this case, i_x *α*=0, i_x *β*=0 and i_x(*α*∧*β*)=0, all by the first case of (1.39).

*Case 2*: *k*=*iµ*∈{*i*1 , . . . , *ip*}, but *k*∉{*j*1 , . . . , *jq*}.

Here again by (1.39), i_x *β*=0, and both sides of (1.40) reduce to (i_x *α*)∧*β*.

*Case 3: k*∉{i_1, . . . , i_p}, but *k*=j_v∈{j_1, . . . , j_q}.

This is similar to Case 2, except that an extra (−1)*p *appears.

*Case 4: k*=*iµ*∈{*i*1 , . . . , *ip*}, and *k*=*jv*∈{*j*1 , . . . , *jq*}.

Here we have *α*∧*β*=0, so the left-hand side of (1.40) is zero. On the other hand, by (1.39), the two terms on the right-hand side are, up to sign, wedge products of the same dual basis elements. It takes (*p*−µ+*v*−1) transpositions to bring the two terms to a common form, so they appear with opposite signs and cancel, and the right-hand side is zero as well.

**1.1 **Let V be a vector space and *V** its dual. Show there is no isomorphism *V*→*V** that maps every basis to its dual.

**1.2 **Let *V *1 and *V2 *be vector spaces with ordered bases B1 and B2, respectively, and let A be the matrix of a linear map *f:V*1→*V2 *with respect to these bases. Show that the matrix of the induced linear map *f **:*V*2*→*V*1* with respect to the dual bases is the transpose of A.

**1.3 **Let *V *be a vector space and *α*¹ , . . . , *αk∈V**. Show that {*α*¹ , . . . , *αk*} is linearly dependent if and only if *α*¹∧⋯∧*αk*=0.

**1.4 **For anti-symmetric tensors *α* and *β* on a vector space *V*, we define

[*α*,*β*]=*α*∧*β*−*β*∧*α *

If *α*, *β *and *γ *are anti-symmetric tensors on *V*, show that [[*α*,*β*],*γ]*=0.

**1.5 **Let B be an ordered basis for the *n*-dimensional vector space *V*. Denote the basis constructed in 10a for ⋀^p V by B_p (you may choose any fixed order for the basis elements of B_p).

(a) For *n*=3, let A be the matrix of a linear map *f* : *V*→*V* with respect to B. Describe the matrices of ⋀^p f with respect to B_p for *p*=0, 1, 2, 3.

(b) Do the same for arbitrary *n *and *p*.

**1.6 **(Laplace Expansion) Let *V *be a vector space with basis (*e*1 , . . . , *en*) and suppose *f*: *V*→*V *is a linear map.

(a) Using the formula 14i for *p *= 1 and *q = n*−1, obtain the expansion formula for determinant in terms of the first row (column).

(b) For arbitrary *p *and *q *with *p*+*q*=*n*, obtain a more general formula.

**1.7 **Denote the vector space of 2×2 matrices with entries from the field *F* by M₂(*F*). Consider a fixed element M∈M₂(*F*) and denote the linear map X↦MX from M₂(*F*) to itself by *f*.

(a) Show that det *f*=(det M)².

(b) Compute the matrices of ⋀*pf *relative to the bases described in 10a for all *p*.

**1.8 **Let dim *V*=*n*, 0≤*p*≤*n *and suppose that *f *: *V*→*V *is a linear map. Show that

**1.9 **We denote by τ*p *the trace of the linear map ⋀*p f*, where *f *: *V*→*V *is a linear map, and dim *V*=*n*. Identify *τ*0, *τ*1 and *τn*. Show that

**1.10 **Let *V *be a finite-dimensional vector space over a field *F *and suppose that *β*:*V*×*V*→*F *is a bilinear map. We define *β♭*:*V*→*V* *by

(*β♭*(*u*))(*v*) = *β*(*v*,*u*), *u*,*v*∈*V *

(a) Prove that in fact *β♭*(*u*)∈*V** and that *β♭ *is linear.

(b) Show that *β*♭ is an isomorphism if *β* is an inner product.

(c) Let B=(*e*1 , . . . , *en*) be a basis for *V*. Prove that
