
Introduction to Classical Mechanics

Linear spaces and vectors


Summer Propaedeutic Course
2020

Prof. Oscar Rosas-Ortiz

1 Linear Spaces
Definition 1. A linear or vector space V is a tetrad {V, F, +, ·}, with V a nonempty
set (the underlying set), F a scalar field, "+" a mapping V × V → V called addition, and
"·" a mapping F × V → V called multiplication by a scalar.¹ These operations satisfy
the following conditions:

i) v1 + v2 = v2 + v1 for all v1 , v2 ∈ V .

ii) v1 + (v2 + v3 ) = (v1 + v2 ) + v3 for all v1 , v2 , v3 ∈ V .

iii) There exists a unique element 0 ∈ V , called the neutral additive element, such that
0 + v = v for every v ∈ V .

iv) Associated with each v ∈ V is a unique element −v ∈ V , called the additive inverse,
such that v + (−v) = 0.

v) α(βv) = (αβ)v for all α, β ∈ F and all v ∈ V .

vi) 1v = v for all v ∈ V .

vii) 0v = 0 for all v ∈ V , where the 0 on the left is the zero scalar and the 0 on the right is the neutral additive element.

viii) α(v1 + v2 ) = αv1 + αv2 for all α ∈ F and all v1 , v2 ∈ V .

ix) (α + β)v = αv + βv for all α, β ∈ F and all v ∈ V .

The elements of V are called vectors and the elements of the field F are called scalars. 
• If F = R we say that V is a vector space over the field of real numbers (or real vector
space for short).
• If F = C we say that V is a vector space over the field of complex numbers (or
complex vector space for short).
• If V = R^n (with F = R) we say that V is a real vector space of dimension n.
• If V = C^n (with F = C) we say that V is a complex vector space of dimension n.
¹ Formally we write α · v = v · α for all α ∈ F and all v ∈ V . However, for simplicity and to avoid
confusion, hereafter for the multiplication by a scalar we write α · v ≡ αv = vα.
NOTE: The elements of vector spaces are denoted in diverse forms, in agreement with the
context in which they are studied. They may be written as letters crowned by an arrow
~a, ~b, ~c, by boldface letters a, b, c, or by kets |a⟩, |b⟩, |c⟩, among other options. Notation
is not relevant as long as there is no ambiguity, so, avoiding confusion between vectors and
scalars, you are free to use any symbol to denote the elements of a linear space. For
present purposes, dealing with either V = R^n or V = C^n, we shall use ~a (and occasionally
a) to denote the n-tuple (with n entries):

    ~a = (a_1, a_2, . . . , a_n),    a_k ∈ F.

Example: The sets R^n and C^n are by themselves vector spaces (over R and C, respectively).

Homework. Prove the above statement.

Example: Let V = L^p[0, T], 1 ≤ p < ∞, the set of all real (or complex) valued functions
v defined on [0, T] such that

    ∫₀ᵀ |v(t)|^p dt < ∞,        (1)

where the integral is the Lebesgue integral. We may define vector addition and scalar
multiplication as follows. For any v_1, v_2 ∈ L^p[0, T] we write

    (v_1 + v_2)(t) = v_1(t) + v_2(t),    for all t ∈ [0, T].        (2)

Also, for any α ∈ F and v ∈ L^p[0, T],

    (αv)(t) = α(v(t)),    for all t ∈ [0, T].        (3)

Homework. If v1 and v2 are two elements of Lp [0, T ] that differ only on a set of measure
zero, then they are still different points in the linear space. Show that Lp [0, T ] is a vector
space with the vector addition and scalar multiplication defined above.

Definition 2. Let V be a linear space and {v_k}_{k∈I} a set of elements of V. Given scalars
{α_k}_{k∈I}, α_k ∈ F, the expression

    v = Σ_{k∈I} α_k v_k = α_1 v_1 + α_2 v_2 + · · ·

is called a linear combination of the vectors v_1, v_2, . . . The scalars α_1, α_2, . . . are called the
coefficients of the linear combination.

Definition 3. Let V be a linear space and {v_k}_{k=1}^n a set of elements of V. We say that
this set spans V if for every v ∈ V there exists a set of scalars {α_k}_{k=1}^n, α_k ∈ F, such that

    v = Σ_{k=1}^n α_k v_k.
Definition 4. Let V be a linear space and W ⊆ V. If W satisfies the properties
i) Given v, w ∈ W then v + w ∈ W.
ii) For any v ∈ W and α ∈ F then αv ∈ W.
iii) The neutral additive element 0 ∈ V is also in W.
then W is by itself a vector space, and we say that it is a linear subspace of V.

Proposition 1. The set of all linear combinations of {v_k}_{k∈I}, v_k ∈ V, is a linear
subspace of V.
Proof: Homework.

Definition 5. The set of vectors {v_k}_{k∈I} is linearly independent if and only if

    Σ_{k∈I} α_k v_k = 0

implies α_k = 0 for all k ∈ I. Otherwise the set is linearly dependent.

Example. Let V = R^n and consider the vectors ê_k ∈ V defined as the n-tuples:

    ê_1 = (1, 0, 0, . . . , 0),  ê_2 = (0, 1, 0, . . . , 0),  . . . ,  ê_n = (0, 0, 0, . . . , 1).

Then

    ~v = Σ_{k=1}^n α_k ê_k = (α_1, α_2, . . . , α_n) = ~0 ≡ (0, 0, . . . , 0)  ⇒  α_k = 0 ∀k.
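The rank criterion gives a quick numerical test of linear independence: vectors stacked as rows are independent exactly when the matrix rank equals their number. A minimal sketch with numpy, where the helper name `linearly_independent` is ours, not the text's:

```python
import numpy as np

def linearly_independent(vectors):
    # Stack the candidate vectors as rows; they are linearly independent
    # exactly when the rank of the matrix equals the number of vectors.
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

# The standard vectors e_1, e_2, e_3 of R^3 are independent:
e = np.eye(3)
print(linearly_independent(e))                           # True
# Appending e_1 + e_2 makes the set dependent:
print(linearly_independent([e[0], e[1], e[0] + e[1]]))   # False
```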

Definition 6. If the vectors v_k ∈ V, k = 1, 2, . . . , n, are linearly independent and span V
we say that the set {v_k}_{k=1}^n is a basis of V. We also say that the vectors v_k form a basis
of V.

Definition 7. Let V be a vector space and {vk }nk=1 a basis of V. Then any v ∈ V can be
written as a linear combination v = α1 v1 + α2 v2 + · · · + αn vn . We say that (α1 , α2 , . . . , αn )
are the coordinates of v with respect to the basis vk and that αk is the kth-coordinate.
Example. ~v = Σ_{k=1}^n α_k ê_k = (α_1, α_2, . . . , α_n).

Example. Find the coordinates of ~v = (0, 1) with respect to the vectors ~a = (1, 1) and
~b = (−1, 2). Solution: ~v = α~a + β~b implies α = β = 1/3.
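The coordinates in this example can be recovered numerically by solving a linear system whose columns are the basis vectors; a sketch with numpy:

```python
import numpy as np

# Columns of B are the basis vectors; solving B @ coeffs = v gives the
# coordinates of v with respect to that basis.
a = np.array([1.0, 1.0])
b = np.array([-1.0, 2.0])
v = np.array([0.0, 1.0])

B = np.column_stack([a, b])
alpha, beta = np.linalg.solve(B, v)
print(alpha, beta)   # both ≈ 1/3, matching the example above
```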

Theorem 1. Let V be a vector space. Assume that the set {v_k}_{k=1}^m spans V. Let
w_1, w_2, . . . , w_n be elements of V with n > m. Then the vectors w_1, w_2, . . . , w_n are
linearly dependent.
Proof: Let us look for a set of scalars α_k, k = 1, 2, . . . , n, not all zero, such that

    α_1 w_1 + α_2 w_2 + · · · + α_n w_n = 0.

As {v_k}_{k=1}^m spans V, for every w_ℓ, ℓ = 1, 2, . . . , n, we have

    w_ℓ = β_{ℓ,1} v_1 + β_{ℓ,2} v_2 + · · · + β_{ℓ,m} v_m.

Therefore

    α_1 w_1 + α_2 w_2 + · · · + α_n w_n = α_1 (β_{1,1} v_1 + β_{1,2} v_2 + · · · + β_{1,m} v_m)
                                        + α_2 (β_{2,1} v_1 + β_{2,2} v_2 + · · · + β_{2,m} v_m)
                                        + · · ·
                                        + α_n (β_{n,1} v_1 + β_{n,2} v_2 + · · · + β_{n,m} v_m).

It is thus sufficient that the coefficient of each v_k vanishes, which gives the homogeneous
system

    α_1 β_{1,1} + α_2 β_{2,1} + · · · + α_n β_{n,1} = 0
    · · ·
    α_1 β_{1,m} + α_2 β_{2,m} + · · · + α_n β_{n,m} = 0.

Since n > m, this system has more unknowns than equations and therefore admits a
nontrivial solution. Hence, the vectors w_1, w_2, . . . , w_n are linearly dependent.
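Theorem 1 can be illustrated numerically: any three vectors built as combinations of two spanning vectors have matrix rank at most 2, hence are linearly dependent. A sketch with numpy and illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two vectors v1, v2 span at most a 2-dimensional subspace of R^3.
v = rng.standard_normal((2, 3))
# Build three vectors w_l = sum_k beta_{l,k} v_k, each in span{v1, v2}.
beta = rng.standard_normal((3, 2))
w = beta @ v
# Their rank is at most 2 < 3, so w_1, w_2, w_3 are linearly dependent:
print(np.linalg.matrix_rank(w))   # at most 2
```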

Theorem 2. Let V be a vector space. Let {v_k}_{k=1}^n and {w_k}_{k=1}^m be two different bases
of V. Then n = m.
Proof. Homework.

Definition 8. Let V be a vector space. Consider the set {v_k}_{k=1}^n of elements of V and a
nonnegative integer r such that r ≤ n. We say that {v_k}_{k=1}^r is a maximal subset
of linearly independent elements if (i) the vectors v_1, v_2, . . . , v_r are linearly independent
and (ii) given any v_ℓ with ℓ > r, the elements v_ℓ, v_1, v_2, . . . , v_r are linearly dependent.

Theorem 3. Assume that the vectors {v_k}_{k=1}^n span the linear space V and let {v_k}_{k=1}^r be
a maximal subset of linearly independent elements with r ≤ n. Then {v_k}_{k=1}^r is a basis
of V.
Proof. Homework.

Definition 9. Let v_1, v_2, . . . , v_n be linearly independent elements of a vector space V. We
say that they form a maximal set of linearly independent elements of V if, given any w ∈ V,
the elements w, v_1, v_2, . . . , v_n are linearly dependent.

Theorem 4. Let V be a linear space and {v_k}_{k=1}^n be a maximal set of linearly independent
elements of V. Then {v_k}_{k=1}^n is a basis of V.
Proof. Homework.

Theorem 5. Let V be a linear space. Assume that a given basis has n elements and
another one has m elements. Then n = m.
Proof. Homework.

Definition 10. Let V be a vector space with a basis of n elements. We say that n is the
dimension of V. If V includes only the neutral additive element then V has no basis
and we say that its dimension is equal to zero.

Theorem 6. Let V be a linear space of dimension n. Let v_1, v_2, . . . , v_n be linearly
independent elements of V. Then v_1, v_2, . . . , v_n form a basis of V.
Proof. Homework.

Theorem 7. Let V be a linear space of dimension n and W a linear subspace of V
whose dimension is equal to n. Then W = V.
Proof. Homework.

Definition 11. A set S of elements of a linear space V is said to be a Hamel basis of V
if (1) S is linearly independent and (2) V = span{S}.

NOTE: The Hamel basis is the natural concept of basis for spaces that have linear structure
only. The above definitions and properties of vector spaces, including the notion of
a "basis", are indeed built on the concept of a Hamel basis.

Example. In R2 , a set S containing any two non-collinear vectors is a Hamel basis for
the plane.

NOTE: As we have seen in Theorem 5, all Hamel bases of a linear space V contain the
same number of elements. This property allows us to distinguish between finite- and infinite-
dimensional linear spaces. For if you recall that two sets have the same cardinal number
provided they can be put into a one-to-one correspondence with one another, then the
following results are quite natural:

Theorem 8. If V1 and V2 are Hamel bases for a linear space V, then V1 and V2 have the
same cardinal number.

Definition 12. The cardinal number of any Hamel basis of a linear space V is said to be
the dimension of V. We denote the dimension of V by Dim(V).
As you can see, the above results are a refinement of Theorem 5 and Definition 10,
respectively. We are now able to talk about infinite-dimensional spaces, according to the
cardinal number of any Hamel basis of the vector space we are dealing with.
We can go a step further by recalling the notion of isomorphism:

Definition 13. The linear spaces V and W over the same scalar field F are said to be
isomorphic if there exists a one-to-one linear mapping φ of V onto W. The mapping φ is
then said to be an isomorphism of V onto W.
It is useful to realize that there is an important constraint for two spaces to be iso-
morphic:

Theorem 9. If V and W are linear spaces over the same scalar field, then V and W are
isomorphic if and only if Dim(V) = Dim(W).
Proof. Homework.
An important result is obtained when V is either R^n or C^n. It deals with the identification
of ~v ∈ V with the n-tuple (v_1, v_2, . . . , v_n), and is given by:

Corollary T9. If V is a finite-dimensional linear space over the scalar field F, where
Dim(V) = n, then V is isomorphic to F^n, the linear space made up of ordered n-tuples
of scalars.
In other words, all n-dimensional real vector spaces are isomorphic to Rn , and all
n-dimensional complex vector spaces are isomorphic to Cn .

2 Normed and Metric Spaces


Definition 14. A real-valued function || · ||, defined on a vector space V, is said to be a
norm on V if, for any v, w ∈ V, and any α ∈ F, the following properties are true

i) ||v|| ≥ 0 (positivity)

ii) ||v + w|| ≤ ||v|| + ||w|| (triangle inequality)

iii) ||αv|| = |α| ||v|| (homogeneity)

iv) ||v|| = 0 if and only if v = 0 (positive definiteness)

The number ||v|| is referred to as the norm of, or length of v ∈ V.

Example. The Euclidean (or canonical) length of any vector ~x = (x_1, x_2) denoting a
point in the real Euclidean plane R^2 is given by

    ||~x|| = √(x_1² + x_2²).        (4)

It may be shown that the above length satisfies the conditions of Definition 14, so it is a
norm in R^2.

Definition 15. A normed linear space is a pair (V, || · ||), where V is a linear space and
|| · || is a norm defined on V. When no confusion is possible we shall write V for simplicity.
Definition 16. A metric space is a pair (X, d), where X is a set called the underlying set,
and d(x, y) is a real-valued function, called the metric, defined for x, y ∈ X and satisfying
the following axioms. For all x, y, z ∈ X:
i) d(x, y) ≥ 0 and d(x, x) = 0 (positive)
ii) If d(x, y) = 0 then x = y (strictly positive)
iii) d(x, y) = d(y, x) (symmetry)
iv) d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)

Example. Function (4), defined in the previous example, generates a distance function
or metric, which makes the canonical length the archetypal example of a norm for a
linear space. To be precise, the Euclidean (or canonical) distance between two points
~x = (x_1, x_2) and ~y = (y_1, y_2) is given by

    ||~x − ~y|| = √((x_1 − y_1)² + (x_2 − y_2)²).        (5)

Example. The Euclidean plane R2 equipped with the canonical distance (5) is a metric
space.
Notice that any normed linear space (V, || · ||) can always be equipped with at least
one distance d(x, y) (the canonical one, d(x, y) = ||x − y||). In this form, a normed linear
space V is also a metric linear space (under the canonical distance).
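The canonical distance d(x, y) = ||x − y|| can be spot-checked against the metric axioms of Definition 16; a minimal numpy sketch with illustrative random points:

```python
import numpy as np

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 2))    # three sample points in R^2

# Canonical distance induced by the Euclidean norm: d(p, q) = ||p - q||.
d = lambda p, q: np.linalg.norm(p - q)

print(d(x, x) == 0.0)                    # True  (d(x, x) = 0)
print(np.isclose(d(x, y), d(y, x)))      # True  (symmetry)
print(d(x, y) <= d(x, z) + d(z, y))      # True  (triangle inequality)
```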

Comment. Remark that the notion of distance is very important in physics, par-
ticularly in Newtonian mechanics. It provides the mathematical structure of the
theory with a "rule" to measure relations between the positions of material bodies
(and physical systems in general).
We have seen that such relations are fundamental to study motion in the Newtonian
picture, even though Newtonian space is absolute! On the other hand, the notion of
distance is a very geometric concept (recall Descartes) which can be associated with
the properties of space by itself, with no reference to any material body (Newton).
Concerning vectors, at the present stage you can realize that these are immediately
useful to localize different points (positions) in space. In this form, the relationships
between the positions of the material bodies under study can be expressed in terms
of relationships between the vectors that localize such positions (position vectors).
In such a picture the mathematical properties of vectors refer to the geometric
properties of space, and they must be such that the (physical) laws of motion
obeyed by material bodies are correctly represented. Otherwise, any mathematical
model constructed for describing the properties of space and motion would not be
useful.
3 Inner Product Spaces
Definition 17. Let V be a linear space. An inner product on V is a mapping that
associates to each ordered pair of vectors v, w an element of the field, denoted (v, w), that
satisfies the following properties. For any v, w, z ∈ V and any α ∈ F:
i) Additivity: (v + w, z) = (v, z) + (w, z)
ii) Homogeneity: (v, αw) = α(v, w)
iii) Conjugate symmetry: (v, w) = (w, v)∗
iv) Positive definiteness: (v, v) > 0 for v ≠ 0.
Hereafter z̄ means the complex conjugate of z ∈ C (we also use z∗ to denote complex
conjugation). Other notations for the inner product are, for instance, ~v · ~w, v · w, and ⟨v|w⟩.

Comment. The notion of inner product is very important in the mathematical
structure of any physical theory. It permits us to introduce a way of comparing (in
geometrical terms) physical properties that are represented by vectors.
For instance, assume that ~v ∈ R^3 represents either the velocity, the linear momen-
tum, or the angular momentum of a given system. One may wonder about "how
much" this vector resembles a concrete vector quantity ~u that is used as a standard.
The answer is easily obtained by calculating the inner product between ~v and ~u.
We may write ~v · ~u, which is in this case a real number (the field is R), and say that
this number is a measure of "how much" the vector ~v resembles the standard ~u.
If ~v · ~u = 0 then there is no relationship between ~v and ~u. In this case the description
of ~v can be done without any reference to ~u and we say that ~v represents a property
that is completely independent of the standard ~u. Geometrically we say that ~v
cannot be projected onto ~u.
For ~v · ~u ≠ 0 the vector ~v can be projected onto ~u (and vice versa). In other words,
the property represented by ~v is connected to the properties of ~u.

NOTE that (0, 0) = (0, w) = (w, 0) = 0 for all w ∈ V.

Lemma D17. If (v, w) = 0 for all w ∈ V, then v = 0.

Example. In C^n we may introduce the rule (called the canonical inner product on C^n):

    ~a · ~b = a_1∗ b_1 + a_2∗ b_2 + · · · + a_n∗ b_n,  so that  ~a · ~a = |a_1|² + |a_2|² + · · · + |a_n|².        (6)
Homework: Verify that the above rule satisfies the properties of Definition 17.
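As a numerical aside, numpy's `vdot` implements exactly this convention (it conjugates its first argument, matching homogeneity in the second slot); the vectors below are illustrative, not from the text:

```python
import numpy as np

a = np.array([1 + 1j, 2 - 1j])
b = np.array([3 + 0j, 1j])

# np.vdot conjugates its FIRST argument: sum_k conj(a_k) * b_k.
print(np.vdot(a, b))          # (2-1j)
print(np.vdot(a, a).real)     # |1+1j|^2 + |2-1j|^2 = 2 + 5 = 7.0
# Conjugate symmetry: (a, b) = conj((b, a)).
print(np.vdot(b, a) == np.conj(np.vdot(a, b)))   # True
```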
Example. Consider the vectors ê_1, ê_2, ê_3 ∈ R^3. We have the inner products
ê_j · ê_k = δ_{jk}, with δ_{jk} the Kronecker delta. That is, ê_1 cannot be projected onto either ê_2
or ê_3 (cyclically), which means that the properties of ê_1 are completely independent of
the properties of either ê_2 or ê_3 (cyclically).

Example. Consider the vectors ê_k ∈ C^n. For any ~v ∈ C^n we have ê_k · ~v = v_k and
~v · ê_k = v_k∗ (Homework: Verify this rule). That is, if v_k ≠ 0 then the properties of
~v are connected with the properties of ê_k (equivalently, ~v can be projected onto ê_k and
vice versa). Otherwise, the description of ~v is independent of the properties of ê_k.

Definition 18. An inner product space is the pair (V, (·, ·)), where V is a linear space
and (·, ·) an inner product defined on V.

Example. The linear space Cn equipped with the canonical inner product (6) is an inner
product space.

NOTE that any inner product space can be equipped with a norm

    ||v|| = √(v, v),        (7)

called the canonical norm, such that V is a normed space (and thus, it is also a metric
space).
Homework: Verify that the above rule satisfies the properties of Definition 14.

Example. Using the canonical inner product (6) we have the canonical norm on C^n:

    ||~v|| = √(|v_1|² + |v_2|² + · · · + |v_n|²).        (8)

Example. The canonical norm of the vectors ê_k ∈ C^n is given by ||ê_k|| = 1. That is, the
vectors ê_k ∈ C^n have norm equal to unity and we say that they are unitary (unit) vectors.

Lemma D18. (Schwarz Inequality) Let (v, w) be an inner product on a linear space
V. Then

    |(v, w)| ≤ ||v|| ||w||.        (9)

Proof. Homework.

Theorem 10. (Parallelogram Law) Let X be an inner product space, then for all
x and y in X we have

||x + y||2 + ||x − y||2 = 2||x||2 + 2||y||2 .

Proof. Homework.
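A quick numerical spot-check of the parallelogram law for the Euclidean (inner-product) norm, with illustrative random vectors (a check, not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(4), rng.standard_normal(4)

# ||x + y||^2 + ||x - y||^2 should equal 2||x||^2 + 2||y||^2.
lhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
rhs = 2 * np.linalg.norm(x)**2 + 2 * np.linalg.norm(y)**2
print(np.isclose(lhs, rhs))   # True for a norm coming from an inner product
```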
Definition 19. Two vectors v and w in an inner product space are said to be orthogonal
if (v, w) = 0.

Theorem 11. (Pythagorean Theorem). If v and w are orthogonal in an inner
product space X, then

    ||v + w||² = ||v||² + ||w||².

Proof. Homework.

Definition 20. Let X be an inner product space. The set of elements {v_k}_{k∈I} in X is
said to be orthogonal if (v_k, v_ℓ) = 0 for k ≠ ℓ.

Definition 21. Let X be an inner product space. The set of elements {v_k}_{k∈I} in X is
said to be orthonormal if (v_k, v_ℓ) = δ_{k,ℓ}.

Theorem 12. Any orthonormal set {v_k}_{k∈I} in an inner product space X is linearly
independent.
Proof. Homework.

Definition 22. An orthonormal set {v_k}_{k∈I} in an inner product space X is maximal if
there is no unit vector w_0 in X such that {v_k}_{k∈I} ∪ {w_0} is an orthonormal set.

Lemma D22. An orthonormal set {v_k}_{k∈I} in an inner product space X is maximal if
and only if (w, v_k) = 0 for all k implies that w = 0.

Definition 23. A maximal orthonormal set {v_k}_{k∈I} in an inner product space X is
referred to as an orthonormal basis for X.

Homework. Consider the linear space R^3. Verify the following properties:

1) ~a · ~b = ||~a|| ||~b|| cos θ
2) ||~a × ~b|| = ||~a|| ||~b|| sin θ
3) ~a × ~b = −~b × ~a
4) ~a × (~b + ~c) = ~a × ~b + ~a × ~c
5) ~a × (~b × ~c) = (~a · ~c)~b − (~a · ~b)~c

Hint: use the relation ~a × ~b = ~w, with w_k = ε_{ijk} a_i b_j, where ε_{ijk} is the Levi-Civita symbol

    ε_{ijk} = +1 if (i, j, k) is an even permutation of (1, 2, 3),
              −1 if (i, j, k) is an odd permutation of (1, 2, 3),
               0 otherwise,

with ε_{ikl} ε_{ikn} = 2δ_{ln}, and ε_{ikl} ε_{imn} = δ_{km} δ_{ln} − δ_{kn} δ_{lm}.
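The hint can be checked numerically by building the Levi-Civita symbol explicitly; the sketch below (numpy, illustrative random vectors) verifies the component formula, property 5 (the BAC-CAB rule), and the first contraction identity:

```python
import numpy as np

# Levi-Civita symbol on three (0-indexed) indices:
# +1 for even, -1 for odd permutations of (0, 1, 2), 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutation
    eps[i, k, j] = -1.0   # swapping the last two indices makes it odd

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))

# (a x b)_k = eps_{ijk} a_i b_j, as in the hint:
cross = np.einsum('ijk,i,j->k', eps, a, b)
print(np.allclose(cross, np.cross(a, b)))        # True

# Property 5: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
print(np.allclose(lhs, rhs))                     # True

# Contraction identity: eps_{ikl} eps_{ikn} = 2 delta_{ln}
print(np.allclose(np.einsum('ikl,ikn->ln', eps, eps), 2 * np.eye(3)))  # True
```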
Comment. We have two different notions of basis for linear spaces: the Hamel basis
and the orthonormal basis.
The concept of a Hamel basis is purely algebraic and is particularly useful (and
intuitive) for linear combinations of finitely (or denumerably) many elements. Although it
may be applied to study infinite-dimensional linear spaces, in practical situations
other (more appropriate) kinds of bases are preferred.
The orthonormal basis is a very geometric concept that can also be expressed in
algebraic (and useful) terms. As we have seen, these bases permit us to identify the
(possible) independence of a given vector from a (set of) standard(s). Thus, in
contrast to the Hamel bases, the orthonormal bases represent a powerful
and versatile tool to describe not only the motion but the general physical
properties of the systems studied in physics, particularly in Newtonian mechanics.
The notion of independence in the study of a given property represented by a
vector v is therefore translated to the notion of orthogonality (a very geometric
one), and then to the notion of linear independence! Thus, a fundamental physical
property (the independence among different properties of a given physical system)
is directly connected to a geometrical property of space (the linear independence of
a set of vectors that are used to represent physical states)!
The latter is a very impressive connection between physics and geometry (and
algebra) that was anticipated by Galileo, delineated by Descartes, formalized by
Newton and improved by Einstein.
4 Description of mechanical particles
The position of a mechanical particle (particle for short) is given by a (position) vector
in R^3:

    ~r = Σ_{k=1}^3 ê_k x_k = ê_1 x_1 + ê_2 x_2 + ê_3 x_3  (≡ ê_k x_k, Einstein notation)
      ≡ r = (x_1, x_2, x_3),  x_k ∈ R.        (10)

In general the coordinates x_k are time-dependent, x_k = x_k(t), with t a real parameter that
represents time. Position is measured in length units; we write [~r] = [L]. In the MKS
system we have [~r] = m.
The velocity is a vector that results from the time-derivative of position:

    ~v = (d/dt)~r = (d/dt) Σ_{k=1}^3 ê_k x_k = Σ_{k=1}^3 ê_k ẋ_k ≡ ~ṙ.        (11)

Velocity is measured in units of length over units of time; we write [~v] = [L][T]⁻¹. In the
MKS system we have [~v] = m s⁻¹.
The acceleration is a vector that results from the time-derivative of velocity:

    ~a = (d/dt)~v = (d²/dt²)~r.        (12)

Acceleration is measured in units of length over units of squared time; we write [~a] =
[L][T]⁻². In the MKS system we have [~a] = m s⁻².
In particular, if the position does not depend on time, ~r ≠ ~r(t), we have no motion:

    ~r ≠ ~r(t),  ~v = ~0,  ~a = ~0,

so that the particle is at rest.


On the other hand, let us assume that the coordinates x_k depend linearly on t (in
particular, x_k ∈ C¹(R)). Then:

    ~r = ~r(t),  ~v = (d/dt)~r  (with ~v ≠ ~v(t)),  ~a = ~0.

Thus, the particle moves with constant velocity (recall: the latter means fixed magni-
tude and fixed direction) and describes a straight-line trajectory.
Now let us assume that the coordinates x_k depend quadratically on t (in particular,
x_k ∈ C²(R)). Then:

    ~r = ~r(t),  ~v = (d/dt)~r  (with ~v = ~v(t)),  ~a = (d²/dt²)~r  (with ~a ≠ ~a(t)).

In this case the particle moves with constant acceleration (recall: the latter means
fixed magnitude and fixed direction).
For constant acceleration the components a_k do not depend on the parameter t. After
a time-integration one gets

    ∫_{t₀}^{t} ~a dt = ∫_{t₀}^{t} (d~v/dt) dt = ~v|_{t₀}^{t} = ~v(t) − ~v(t₀),  and  ∫_{t₀}^{t} ~a dt = ~a t|_{t₀}^{t} = ~a(t − t₀).        (13)
That is,

    ~v(t) − ~v₀ = ~a(t − t₀),        (14)

where ~v₀ = ~v(t₀). In other words, for constant acceleration the variation of velocity is
linear in time and along the direction of the acceleration. A second time-integration yields

    ~r(t) − ~r₀ = ½ ~a(t² − t₀²) + (~v₀ − ~a t₀)(t − t₀),        (15)

with ~r₀ = ~r(t₀). That is, for constant acceleration the variation of position is quadratic
in time.
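Equations (14) and (15) can be spot-checked numerically; the sketch below compares the expanded form (15) with the standard form r₀ + v₀(t − t₀) + ½a(t − t₀)², using illustrative values:

```python
import numpy as np

# Constant acceleration in R^3; all numerical values are illustrative.
a  = np.array([0.0, 0.0, -9.8])     # e.g. free fall
v0 = np.array([1.0, 2.0, 3.0])
r0 = np.array([0.0, 0.0, 10.0])
t0, t = 1.0, 3.0

# Eq. (15) as written, and the equivalent standard kinematic form:
r_expanded = r0 + 0.5 * a * (t**2 - t0**2) + (v0 - a * t0) * (t - t0)
r_standard = r0 + v0 * (t - t0) + 0.5 * a * (t - t0)**2
print(np.allclose(r_expanded, r_standard))   # True
```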

Summary 1: (1) To describe accelerated particles the position-vector ~r must be at least
C²(R); if ~r depends at most quadratically on t, this corresponds to uniformly accelerated
motion. (2) Motion in a straight line requires a position-vector ~r ∈ C¹(R) depending
linearly on t. (3) The resting state requires a position-vector that is independent of the
parameter t.

Homework. Use the above results to describe motion in one dimension (for instance, free-
falling bodies) and the parabolic shot (two and three dimensions). Recover the expressions
for average velocity and average acceleration. Discuss the meaning of speed and
its connection with velocity. Discuss the difference between displacement and distance.
Is it possible to get average velocity equal to zero but average speed different from zero?
Justify your answer.

The mass is a scalar quantity that measures the amount of matter; it refers to the prop-
erty of the particle which resists acceleration (inertial mass) as well as to the property
of the particle which determines how strongly it will be pulled by a gravitational field
(gravitational mass). The units of mass are denoted by [M]; in the MKS system we
have [M] = kg. In general, the mass is parameterized by t and may be a function of any of
the physical quantities that characterize the dynamical state of the particle.
The linear momentum is a vector that results from the multiplication of velocity
by mass:

    ~p = m~v = m (d/dt)~r.        (16)

Linear momentum is measured in units of velocity times units of mass; we write [~p] =
[L][M][T]⁻¹. In the MKS system we have [~p] = kg m s⁻¹.
For instance, considering a time-dependent mass, the time-variations of the linear
momentum are given by

    (d/dt)~p = ṁ~v + m~a.        (17)
What about a position-dependent mass?
The angular momentum is a vector that results from the vector multiplication of
position with linear momentum:

    ~L = ~r × ~p = m ~r × ~v,    L_ℓ = ε_{kjℓ} r_k p_j,  k, j, ℓ ∈ {1, 2, 3}.        (18)

Angular momentum is measured in units of position times units of linear momentum
(usually referred to as units of action); we write [~L] = [M][L]²[T]⁻¹. In the MKS
system we have [~L] = kg m² s⁻¹.

Homework. Show that the time-variation of the angular momentum is given by the
expression

    (d/dt)~L = ṁ ~r × ~v + m ~r × ~a.        (19)
Forces are vectors parameterized by t that may be functions of any of the physical
quantities that characterize the dynamical state of the particle. They are measured in
units of mass times units of acceleration; we write [F~] = [M][L][T]⁻². In the MKS system
we have [F~] = N.
The torque is a vector defined by a force F~ that is applied at a given point of a
material body. The point of application is localized by a position-vector (lever arm) ~b.
Mathematically, the torque is expressed as the vector multiplication

    ~τ = ~b × F~,    τ_ℓ = ε_{kjℓ} b_k F_j,  k, j, ℓ ∈ {1, 2, 3}.        (20)

Torque is measured in units of length times units of force; we write [~τ] = [M][L]²[T]⁻².
In the MKS system we have [~τ] = N m.
When dealing with rigid bodies the lever arm ~b is usually taken to be a constant vector.
However, in a more general situation it is nothing but a position-vector, so it is parameter-
ized by t. Then, in general, Eq. (20) reads ~τ = ~r × F~.
Considering the mass m as the proportionality factor between acceleration and applied
forces, the variation of angular momentum (19) may be rewritten as follows:

    (d/dt)~L = ṁ ~r × ~v + ~τ ≡ (d/dt)(ln m) ~L + ~τ.        (21)

Clearly, for constant mass the time-variation of the angular momentum is equal to the
torque.
In turn (considering the mass m as the proportionality factor between acceleration
and applied forces), the time-variation of the linear momentum for time-dependent mass
(17) can be rewritten as

    (d/dt)~p = ṁ~v + F~.        (22)

From the above result, if m = const then

    (d/dt)~p = F~  (valid for m = const only).        (23)
What about a position-dependent mass?
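Equation (17), and hence (22), can be spot-checked with a finite difference for a hypothetical mass law m(t); all numerical values below are illustrative assumptions:

```python
import numpy as np

# Hypothetical time-dependent mass and constant acceleration; check
# dp/dt = m_dot v + m a (Eq. (17)) by a central finite difference.
m  = lambda t: 2.0 + 0.1 * t            # assumed mass law, m_dot = 0.1
a  = np.array([0.0, 0.0, -9.8])         # constant acceleration
v0 = np.array([1.0, 0.0, 0.0])

v = lambda t: v0 + a * t                # velocity for constant acceleration
p = lambda t: m(t) * v(t)               # linear momentum p = m v

t, h = 2.0, 1e-6
dp_numeric  = (p(t + h) - p(t - h)) / (2 * h)
dp_analytic = 0.1 * v(t) + m(t) * a
print(np.allclose(dp_numeric, dp_analytic))   # True
```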

REMARK. The description of the position-vector and all its derived quantities is not
restricted to the conventional cartesian basis of R^3 defined by the unitary vectors ê_k, with
~r = (x, y, z). If we take another basis of R^3, say ê_{ξ_k} with ~r = (ξ_1, ξ_2, ξ_3), we know
that all 3-dimensional real linear spaces are isomorphic to R^3, so that there is an isomor-
phism permitting the transformation of the coordinates (ξ_1, ξ_2, ξ_3) into the cartesian ones
(x, y, z), and vice versa. As an example consider the relationship between the cartesian
coordinates (x, y, z) and the spherical ones (r, θ, φ) given by

x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ, (24)

where
r ≥ 0, 0 ≤ φ < 2π, 0 ≤ θ ≤ π. (25)
Formally we write

x = x(r, θ, φ), y = y(r, θ, φ), z = z(r, θ, φ), (26)

so that ~r = ~r(r, θ, φ). As originally we had ~r = ~r(x, y, z) = x ê_1 + y ê_2 + z ê_3, to get
appropriate unitary vectors in the new representation we first introduce the scale factors

    h_α = ||∂~r/∂α||,  α = r, θ, φ.        (27)

That is,
hr = 1, hθ = r, hφ = r sin θ. (28)
Then, the unitary vectors

    ê_α = (1/h_α) ∂~r/∂α,  α = r, θ, φ,        (29)
are tangent to the path defined by ~r in the direction α. Explicitly,

    ê_r = sin θ cos φ ê_1 + sin θ sin φ ê_2 + cos θ ê_3,
    ê_θ = cos θ cos φ ê_1 + cos θ sin φ ê_2 − sin θ ê_3,        (30)
    ê_φ = − sin φ ê_1 + cos φ ê_2.
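The orthonormality of {ê_r, ê_θ, ê_φ} can be spot-checked numerically at a sample point (a check, not a proof); a numpy sketch with illustrative angles:

```python
import numpy as np

# Unit vectors of Eq. (30) at a sample point (illustrative angles);
# stacked as rows, orthonormality means E @ E.T equals the identity.
theta, phi = 0.7, 1.9

e_r     = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
e_theta = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
e_phi   = np.array([-np.sin(phi), np.cos(phi), 0.0])

E = np.vstack([e_r, e_theta, e_phi])
print(np.allclose(E @ E.T, np.eye(3)))   # True: the set is orthonormal
```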

Homework. (a) Show that {êr , êθ , êφ } is an orthonormal set in R3 . (b) Solve the system
(30) for the cartesian unitary vectors and express the vector ~r in the basis {êr , êθ , êφ }. (c)
Determine the scale factors and unitary vectors for cylindrical coordinates. (d) Express
the position vector ~r in the basis of cylindrical unitary vectors.
