
Math 20900 Calculus 2 & Math 20901 Multivariable Calculus Lecture Notes

October 5, 2009
These lecture notes are unlikely to contain: material spoken verbally in class but not written on the board, examples presented in class, remarks that motivate the mathematical material, and explanations of standard mathematical notation. The ordering of material might differ slightly from that presented in class. On the other hand, the notes contain material that, being either elementary or tedious, was not presented in class. Note that such material might be used later in class and you are expected to know it. These lecture notes might be modified as the course progresses. Please bring any mistakes to my attention.

Contents
1 Introduction to R^n
  1.1 R^n as an inner product space
  1.2 Functions R^n → R^m
2 Differential calculus in R^n
  2.1 Differential operators
  2.2 Parametrised curves and surfaces
3 Integral calculus in R^n
  3.1 On one-dimensional domains
  3.2 On two-dimensional domains
  3.3 On three-dimensional domains


1 Introduction to R^n

1.1 R^n as an inner product space

1.1.1 R^n as a vector space

Definition 1.1.1 (R^n). R^n := {(x1, x2, ..., xn) | x1, x2, ..., xn ∈ R}.

Definition 1.1.2 (Projections and components). For i = 1, 2, ..., n, the (n) projections are R^n ∋ (x1, x2, ..., xn) ↦ xi ∈ R. For x ∈ R^n we also denote π_i(x) by x_i. This is the i-th component of x.

Remark 1.1.3 (Embeddings). For m > n the function defined by R^n ∋ x ↦ (x1, x2, ..., xn, 0, ..., 0) ∈ R^m (where the last m − n components are zero) is an embedding of R^n into R^m.

Definition 1.1.4 (Addition on R^n). Addition is defined component-wise: ∀x, y ∈ R^n, x + y ∈ R^n is defined by ∀i = 1, 2, ..., n, (x + y)_i = x_i + y_i.

Definition 1.1.5 (Scalar multiplication on R^n). Scalar multiplication is defined component-wise: for α ∈ R and x ∈ R^n, αx ∈ R^n is defined by ∀i = 1, 2, ..., n, (αx)_i = α x_i.

Remark 1.1.6 (Properties of addition and scalar multiplication on R^n). Addition on R^n is commutative, associative and possesses an additive identity, namely 0 ∈ R^n; every x ∈ R^n possesses an additive inverse denoted −x. Scalar multiplication on R^n is associative (i.e. ∀α, β ∈ R, ∀x ∈ R^n, (αβ)x = α(βx)) and possesses an identity, namely 1 (thus ∀x ∈ R^n, 1x = x). Addition and scalar multiplication distribute over each other: ∀α, β ∈ R, ∀x, y ∈ R^n, α(x + y) = αx + αy and (α + β)x = αx + βx. These properties follow from the corresponding properties of addition and multiplication on R. Observe that the additive inverse of x ∈ R^n is (−1)x.

Definition 1.1.7 (Basis vectors). For i = 1, 2, ..., n, e_i ∈ R^n is defined by π_i(e_i) = 1 and π_j(e_i) = 0 ∀j ≠ i = 1, 2, ..., n. In R^2 and R^3, e_1 and e_2 are also denoted by î and ĵ respectively. In R^3, e_3 is also denoted by k̂.

Remark 1.1.8. Observe that for x ∈ R^n,
  x = (x1, x2, ..., xn) = x1 (1, 0, ..., 0) + x2 (0, 1, 0, ..., 0) + · · · + xn (0, ..., 0, 1) = x1 e_1 + x2 e_2 + · · · + xn e_n.

1.1.2 R^n as an inner product space

Definition 1.1.9 (Euclidean inner product). For x, y ∈ R^n, ⟨x, y⟩ := Σ_{i=1}^n x_i y_i ∈ R.

Remark 1.1.10 (Properties of the inner product). The inner product is bi-linear, symmetric (i.e. ⟨x, y⟩ = ⟨y, x⟩), positive (i.e. ⟨x, x⟩ ≥ 0) and definite (i.e. ⟨x, x⟩ = 0 =⇒ x = 0).

Remark 1.1.11 (Inner product and projections). For i = 1, 2, ..., n, π_i(·) = ⟨e_i, ·⟩.

Definition 1.1.12 (Orthogonality). x, y ∈ R^n are orthogonal, x ⊥ y, if ⟨x, y⟩ = 0. The orthogonal complement of x ∈ R^n is {x}^⊥ := {y ∈ R^n | ⟨x, y⟩ = 0}.

Definition 1.1.13 (Orthogonal vector). For x ∈ R^2, x^⊥ := (−x2, x1).
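The definitions above are directly computable. The following is a minimal sketch in plain Python (the function names `inner` and `perp` are mine, not from the notes) of the Euclidean inner product of Definition 1.1.9 and the orthogonal vector of Definition 1.1.13; the final check illustrates Definition 1.1.12, that x and x^⊥ are orthogonal.

```python
def inner(x, y):
    # <x, y> := sum_i x_i * y_i  (Definition 1.1.9)
    return sum(xi * yi for xi, yi in zip(x, y))

def perp(x):
    # x_perp := (-x2, x1) for x in R^2  (Definition 1.1.13)
    return (-x[1], x[0])

x = (3.0, 4.0)
assert inner(x, perp(x)) == 0.0  # x is orthogonal to x_perp
```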

1.1.3 R^n as a normed space

Definition 1.1.14 (Euclidean norm). ‖·‖ : R^n → R_+ is defined by ∀x ∈ R^n, ‖x‖ := √⟨x, x⟩.

Remark 1.1.15. ‖x‖ = √⟨x, x⟩ = ‖(x1, x2, ..., xn)‖ = √(x1^2 + x2^2 + · · · + xn^2). Moreover, for i = 1, 2, ..., n, ‖e_i‖ = 1.

Remark 1.1.16 (Properties of the norm). The norm is positive (i.e. ∀x ∈ R^n, ‖x‖ ≥ 0), definite (i.e. ‖x‖ = 0 =⇒ x = 0), positively homogeneous (i.e. ∀α ∈ R, ∀x ∈ R^n, ‖αx‖ = |α| ‖x‖) and satisfies ∀x, y ∈ R^n,
  ‖x + y‖ ≤ ‖x‖ + ‖y‖   (Triangle inequality),
  ⟨x, y⟩ ≤ ‖x‖ ‖y‖      (Cauchy–Schwarz inequality).

Definition (Unit vector). x ∈ R^n is a unit vector or normalised if ‖x‖ = 1.

Definition 1.1.17 (Bounded sets). S ⊂ R^n is bounded if there exists B ≥ 0 such that ∀x ∈ S, ‖x‖ ≤ B.

1.1.4 The cross product in R^2 and R^3

Definition 1.1.18 (Cross product in R^2 and R^3). For x, y ∈ R^2, x × y := x1 y2 − x2 y1 ∈ R is the cross product or vector product of x and y. For x, y ∈ R^3, x × y := (x2 y3 − x3 y2, x3 y1 − x1 y3, x1 y2 − x2 y1) ∈ R^3 is the cross product or vector product of x and y.

Remark 1.1.19. The cross product in R^2 results in a scalar (i.e. a real number) while the cross product in R^3 results in another vector in R^3. These definitions are related: (x1, x2, 0) × (y1, y2, 0) = (0, 0, x1 y2 − x2 y1).

1.2 Functions R^n → R^m

1.2.1 Operations on functions R^n → R^m

Definition 1.2.2 (Bounded functions). f : R^n → R^m is bounded if there exists C ∈ R such that ∀x ∈ R^n, ‖f(x)‖ ≤ C.

Definition 1.2.3 (Addition of functions R^n → R^m). Given f : R^n → R^m and g : R^n → R^m we define f + g : R^n → R^m by ∀x ∈ R^n, (f + g)(x) = f(x) + g(x).

Definition 1.2.4 (Scalar multiplication of functions R^n → R^m). Given f : R^n → R^m and α ∈ R we define αf : R^n → R^m by ∀x ∈ R^n, (αf)(x) = α f(x).

Definition 1.2.5 (Composition of functions). Let f : R^n → R^m and g : R^m → R^l. Then g ∘ f : R^n → R^l is defined by ∀x ∈ R^n, (g ∘ f)(x) := g(f(x)).

Definition 1.2.6 (Components of a function). Let f : R^n → R^m. We define the components of f to be the functions f_j := π_j ∘ f : R^n → R, j = 1, 2, ..., m. Thus for x ∈ R^n, f(x) = ((f(x))_1, (f(x))_2, ..., (f(x))_m) = (f_1(x), f_2(x), ..., f_m(x)).

Definition. For f : R^n → R^2 we define f^⊥ : R^n → R^2 by f^⊥(x) := (f(x))^⊥.

Definition 1.2.7 (Sections of a function). Let f : R^n → R^m and x, v ∈ R^n with ‖v‖ = 1. The section of f at x along v is the function R ∋ t ↦ f(x + tv) ∈ R^m.
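As a quick numerical sketch (the example vectors are mine, not from the notes), the following computes the norm of Definition 1.1.14 and the R^3 cross product of Definition 1.1.18, and checks the Cauchy–Schwarz inequality of Remark 1.1.16 together with the familiar fact that x × y is orthogonal to both factors.

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # ||x|| := sqrt(<x, x>)  (Definition 1.1.14)
    return math.sqrt(inner(x, x))

def cross3(x, y):
    # Definition 1.1.18, the R^3 case
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

x, y = (1.0, 2.0, 2.0), (2.0, -1.0, 0.0)
# Cauchy-Schwarz: <x, y> <= ||x|| ||y||
assert inner(x, y) <= norm(x) * norm(y)
# the cross product is orthogonal to both of its factors
z = cross3(x, y)
assert inner(z, x) == 0.0 and inner(z, y) == 0.0
```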

1.2.2 Linear functions

Definition 1.2.8 (Linear functions). f : R^n → R^m is linear if ∀α ∈ R, ∀x, y ∈ R^n,
  f(x + y) = f(x) + f(y)   (additivity),
  f(αx) = α f(x)           (1-homogeneity).

Example 1.2.9. Projections are linear.

Remark 1.2.10. If f : R^n → R^m is linear then f(0) = 0, but the converse is false.

We have a complete characterisation of linear functions:

Theorem 1.2.11 (Representation of linear functions). f : R^n → R^m is linear iff ∃! M_f ∈ R^{m×n} such that ∀x ∈ R^n, f(x) = M_f x. We say that M_f is the matrix associated with f.

Linearity is stable under addition, scalar multiplication, composition and projection:

Lemma 1.2.12 (Linearity is preserved by addition). Let f, g : R^n → R^m be linear. Then f + g : R^n → R^m is also linear. Moreover M_{f+g} = M_f + M_g.

Lemma 1.2.13 (Linearity is preserved by scalar multiplication). Let f : R^n → R^m be linear and α ∈ R. Then αf : R^n → R^m is also linear. Moreover M_{αf} = α M_f.

Lemma 1.2.14 (Linearity is preserved by composition). Let f : R^n → R^m and g : R^m → R^l be linear. Then g ∘ f : R^n → R^l is also linear. Moreover M_{g∘f} = M_g M_f.

Lemma 1.2.15 (Linearity implies linearity of component functions). Let f : R^n → R^m be linear. Then f_j : R^n → R, j = 1, 2, ..., m are also linear. Moreover M_{f_j} is the j-th row of M_f.

Converse 1.2.16 (Linearity of component functions implies linearity). Let f_j : R^n → R, j = 1, 2, ..., m be linear. Then f : R^n → R^m is also linear. Moreover M_{f_j} is the j-th row of M_f.

1.2.3 Affine functions

Definition 1.2.17 (Affine functions). f : R^n → R^m is affine if ∃ f̃ : R^n → R^m linear and c ∈ R^m such that ∀x ∈ R^n, f(x) = f̃(x) + c.

Example 1.2.18. Constant functions and linear functions are affine.

We have a complete characterisation of affine functions:

Theorem 1.2.19 (Representation of affine functions). f : R^n → R^m is affine iff ∃! M_f ∈ R^{m×n}, c_f ∈ R^m such that ∀x ∈ R^n, f(x) = M_f x + c_f.

Affineness is stable under addition, scalar multiplication, composition and projection:

Lemma 1.2.20 (Affineness is preserved by addition). Let f, g : R^n → R^m be affine. Then f + g : R^n → R^m is also affine. Moreover M_{f+g} = M_f + M_g and c_{f+g} = c_f + c_g.

Lemma 1.2.21 (Affineness is preserved by scalar multiplication). Let f : R^n → R^m be affine and α ∈ R. Then αf : R^n → R^m is also affine. Moreover M_{αf} = α M_f and c_{αf} = α c_f.

Lemma 1.2.22 (Affineness is preserved by composition). Let f : R^n → R^m and g : R^m → R^l be affine. Then g ∘ f : R^n → R^l is also affine. Moreover M_{g∘f} = M_g M_f and c_{g∘f} = M_g c_f + c_g.

Lemma 1.2.23 (Affineness implies affineness of component functions). Let f : R^n → R^m be affine. Then f_j : R^n → R, j = 1, 2, ..., m are also affine. Moreover M_{f_j} is the j-th row of M_f and c_{f_j} = (c_f)_j.

Converse 1.2.24 (Affineness of component functions implies affineness). Let f_j : R^n → R, j = 1, 2, ..., m be affine. Then f : R^n → R^m is also affine. Moreover M_{f_j} is the j-th row of M_f and c_{f_j} = (c_f)_j.
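The representation theorems can be exercised numerically. The sketch below (the example map is hypothetical, not from the notes) recovers c_f = f(0) and the columns M_f e_i = f(e_i) − c_f of the matrix of an affine map f : R^2 → R^2, in the spirit of Theorems 1.2.11 and 1.2.19.

```python
# a hypothetical affine map R^2 -> R^2: f(x) = M x + c with
# M = [[2, 1], [1, -3]] and c = (5, -1)
def f(x):
    return (2*x[0] + x[1] + 5, x[0] - 3*x[1] - 1)

c_f = f((0, 0))                      # c_f = f(0)
cols = []
for i in range(2):
    e = [0, 0]
    e[i] = 1                         # basis vector e_i
    fe = f(e)
    cols.append(tuple(fe[j] - c_f[j] for j in range(2)))  # i-th column of M_f

M_f = list(zip(*cols))               # rows of M_f
assert M_f == [(2, 1), (1, -3)] and c_f == (5, -1)
```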

1.2.4 Continuous functions

Definition 1.2.25 (Continuous functions). f : R^n → R^m is
1. continuous at xo ∈ R^n if ∀ε > 0 ∃δ > 0 ∀x ∈ R^n, ‖x − xo‖ < δ =⇒ ‖f(x) − f(xo)‖ < ε;
2. continuous if it is continuous at all xo ∈ R^n;
3. discontinuous at xo ∈ R^n if it is not continuous at xo;
4. discontinuous if ∃xo ∈ R^n at which it is discontinuous.

Example 1.2.26. Affine functions are continuous.

Continuity is stable under addition, scalar multiplication, composition and projection:

Lemma 1.2.27 (Continuity is preserved by addition). Let f, g : R^n → R^m be continuous. Then f + g : R^n → R^m is also continuous.

Lemma 1.2.28 (Continuity is preserved by scalar multiplication). Let f : R^n → R^m be continuous and α ∈ R. Then αf : R^n → R^m is also continuous.

Lemma 1.2.29 (Continuity is preserved by composition). Let f : R^n → R^m and g : R^m → R^l be continuous. Then g ∘ f : R^n → R^l is also continuous.

Lemma 1.2.30 (Continuity implies continuity of component functions). Let f : R^n → R^m be continuous. Then f_j : R^n → R, j = 1, 2, ..., m are also continuous.

Converse 1.2.31 (Continuity of component functions implies continuity). Let f_j : R^n → R, j = 1, 2, ..., m be continuous. Then f : R^n → R^m is also continuous.

Notation 1.2.32. We denote the set of all continuous functions from R^n to R^m by C(R^n, R^m).

2 Differential calculus in R^n

2.1 Differential operators

2.1.1 The derivative

Definition 2.1.1 (Differentiable functions and derivatives). f : R^n → R^m is differentiable at xo ∈ R^n if ∃ Df(xo) ∈ R^{m×n} such that ∀ε > 0 ∃δ > 0 ∀x ∈ R^n,
  ‖x − xo‖ < δ =⇒ ‖f(x) − (Df(xo)(x − xo) + f(xo))‖ < ε ‖x − xo‖.
f : R^n → R^m is differentiable if it is differentiable at all xo ∈ R^n. Df(xo) is the derivative of f at xo and Df : R^n → R^{m×n} is the derivative of f.

Remark 2.1.2. If f : R^n → R^m is differentiable at xo ∈ R^n, the affine approximation to f at xo is R^n ∋ x ↦ Df(xo)(x − xo) + f(xo) ∈ R^m. Affine functions are differentiable; indeed, they are their own affine approximation.

Differentiability is stable under addition, scalar multiplication, composition and projection:

Lemma 2.1.3 (Differentiability is preserved by addition). Let f, g : R^n → R^m be differentiable at x ∈ R^n. Then f + g : R^n → R^m is also differentiable at x ∈ R^n. Moreover D(f + g)(x) = Df(x) + Dg(x).

Lemma 2.1.4 (Differentiability is preserved by scalar multiplication). Let f : R^n → R^m be differentiable at x ∈ R^n and α ∈ R. Then αf : R^n → R^m is also differentiable at x ∈ R^n. Moreover D(αf)(x) = α Df(x).
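Definition 2.1.1 says the error of the affine approximation vanishes faster than ‖x − xo‖. A one-variable numerical sketch (the example f(x) = x^2 is an assumption of mine, not from the notes): with Df(xo) = 2 xo, the approximation error at xo + h is exactly h^2, which is indeed < ε h for small h.

```python
# f(x) = x^2 in one variable; its derivative at xo is 2*xo
f = lambda x: x * x
xo, Dfxo = 1.5, 3.0

for h in (1e-1, 1e-2, 1e-3):
    # error of the affine approximation Df(xo)*h + f(xo) at x = xo + h
    err = abs(f(xo + h) - (Dfxo * h + f(xo)))
    # here the error equals h^2 (up to rounding), i.e. it is o(h)
    assert err <= 1.1 * h * h
```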

Theorem 2.1.5 (Chain rule). Let f : R^n → R^m and g : R^m → R^l be differentiable at x ∈ R^n and f(x) respectively. Then g ∘ f : R^n → R^l is also differentiable at x and D(g ∘ f)(x) = Dg(f(x)) Df(x).

Lemma 2.1.6 (Differentiability implies differentiability of component functions). Let f : R^n → R^m be differentiable at x ∈ R^n. Then f_j : R^n → R, j = 1, 2, ..., m are also differentiable at x. Moreover Df_j(x) is the j-th row of Df(x).

Converse 2.1.7 (Differentiability of component functions implies differentiability). Let f_j : R^n → R, j = 1, 2, ..., m be differentiable at x ∈ R^n. Then f : R^n → R^m is also differentiable at x. Moreover Df_j(x) is the j-th row of Df(x).

Definition 2.1.8 (Directional and partial derivatives). Let f : R^n → R^m and x, v ∈ R^n with ‖v‖ = 1. The directional derivative of f in the direction v at x is the derivative at 0 of the section of f at x along v:
  D_v f(x) := d/dt f(x + tv) |_{t=0},
whenever it exists. For i = 1, 2, ..., n, D_i f(x) := D_{e_i} f(x) is the i-th partial derivative of f.

Theorem 2.1.9. Let f : R^n → R^m be differentiable at x ∈ R^n. Then D_v f(x) exists ∀v ∈ R^n with ‖v‖ = 1. Moreover D_v f(x) = Df(x) v. In particular, for i = 1, 2, ..., n, D_i f(x) exists and is the i-th column of Df(x); thus, if f : R^n → R^m is differentiable at x ∈ R^n then (Df)_{i,j} = D_j f_i(x).

Remark 2.1.10. Existence of all directional derivatives of a function at a point does not imply that the function is differentiable at that point.

Remark 2.1.11. This is illustrated by the function

  R^2 ∋ x ↦ { 1 if x1 x2 = 0,
            { 0 else,

both of whose partial derivatives exist at 0, even though the function is not differentiable there.

Theorem 2.1.12 (Differentiability implies continuity). Let f : R^n → R^m be differentiable at x ∈ R^n. Then f is continuous at x.

Theorem 2.1.13 (Sufficient condition for differentiability). Let all partial derivatives of f : R^n → R^m exist and be continuous at x ∈ R^n. Then f is differentiable at x.

Notation 2.1.14. C^1(R^n, R^m) is the set of all functions from R^n to R^m all of whose partial derivatives exist and are continuous. (By Theorem 2.1.13 these functions are differentiable.)

2.1.2 The curl

Definition 2.1.15 (Curl).
1. Let f : R^2 → R^2. Then, whenever it exists, curl f : R^2 → R, defined by curl f := D_1 f_2 − D_2 f_1, is the curl of f.
2. Let f : R^3 → R^3. Then, whenever it exists, curl f : R^3 → R^3, defined by curl f := (D_2 f_3 − D_3 f_2, D_3 f_1 − D_1 f_3, D_1 f_2 − D_2 f_1), is the curl of f.

Remark 2.1.16. The curl is defined only for functions R^2 → R^2 and R^3 → R^3. The curl of a R^2-valued function on R^2 is a R-valued function on R^2, but the curl of a R^3-valued function on R^3 is another R^3-valued function on R^3. Compare Definition 2.1.15 with Definition 1.1.18.

Example 2.1.17. Find the curl of R^3 ∋ x ↦ (x1 x2, x2 x3, x3 x1) ∈ R^3.

Solution. We see that f_1(x1, x2, x3) = x1 x2, f_2(x1, x2, x3) = x2 x3 and f_3(x1, x2, x3) = x3 x1. (By Theorem 2.1.13 these functions are differentiable.) Then
  D_2 f_3(x1, x2, x3) = 0,  D_3 f_2(x1, x2, x3) = x2,
  D_3 f_1(x1, x2, x3) = 0,  D_1 f_3(x1, x2, x3) = x3,
  D_1 f_2(x1, x2, x3) = 0,  D_2 f_1(x1, x2, x3) = x1.
Thus
  curl f(x) = (D_2 f_3 − D_3 f_2, D_3 f_1 − D_1 f_3, D_1 f_2 − D_2 f_1) = (0 − x2, 0 − x3, 0 − x1) = −(x2, x3, x1).
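Example 2.1.17 can be checked numerically. The sketch below (mine, not from the notes; the step size h is an assumption) approximates each partial derivative D_j f_i by a central difference and assembles the R^3 curl of Definition 2.1.15, recovering −(x2, x3, x1).

```python
def f(x):
    # the function of Example 2.1.17
    return (x[0]*x[1], x[1]*x[2], x[2]*x[0])

def D(j, g, x, h=1e-6):
    # central-difference approximation of the j-th partial derivative of scalar g
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (g(xp) - g(xm)) / (2 * h)

def curl3(f, x):
    fi = lambda i: (lambda y: f(y)[i])        # i-th component function of f
    return (D(1, fi(2), x) - D(2, fi(1), x),  # D2 f3 - D3 f2
            D(2, fi(0), x) - D(0, fi(2), x),  # D3 f1 - D1 f3
            D(0, fi(1), x) - D(1, fi(0), x))  # D1 f2 - D2 f1

x = (0.3, 0.7, -1.2)
expected = (-x[1], -x[2], -x[0])              # -(x2, x3, x1)
assert all(abs(a - b) < 1e-6 for a, b in zip(curl3(f, x), expected))
```

Central differences are exact here up to rounding because each component of f is bilinear in the coordinates.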

2.1.3 The divergence

Definition 2.1.18 (Divergence). For f : R^n → R^n, wherever it exists, div f : R^n → R, defined by div f := D_1 f_1 + D_2 f_2 + · · · + D_n f_n, is the divergence of f.

Remark 2.1.19. Note that the divergence is defined only for functions R^n → R^n and the divergence of a function is a R-valued function on R^n. If f is differentiable then div f = trace(Df). (The trace of a square matrix is the sum of its diagonal elements.)

Example 2.1.20. Find the divergence of the identity function on R^3, i.e. the function R^3 ∋ x ↦ x ∈ R^3.

Solution. Writing the function as R^3 ∋ (x1, x2, x3) ↦ (x1, x2, x3) ∈ R^3, we see that id_1(x1, x2, x3) = x1, id_2(x1, x2, x3) = x2 and id_3(x1, x2, x3) = x3. Thus D_1 id_1(x1, x2, x3) = 1, D_2 id_2(x1, x2, x3) = 1 and D_3 id_3(x1, x2, x3) = 1, and so div id(x) = D_1 id_1(x) + D_2 id_2(x) + D_3 id_3(x) = 1 + 1 + 1 = 3.

2.1.4 The derivative, curl and divergence as differential operators

Definition 2.1.21 (Differential operators).
1. D : C^1(R^n, R^m) → C(R^n, R^{m×n}) is the function that maps a function in C^1(R^n, R^m) to its derivative.
2. curl : C^1(R^2, R^2) → C(R^2, R) is the function that maps a function in C^1(R^2, R^2) to its curl, and curl : C^1(R^3, R^3) → C(R^3, R^3) is the function that maps a function in C^1(R^3, R^3) to its curl.
3. div : C^1(R^n, R^n) → C(R^n, R) is the function that maps a function in C^1(R^n, R^n) to its divergence.

Remark 2.1.22. These operators are linear. E.g. for f, g ∈ C^1(R^n, R^m) and α ∈ R, D(f + g) = Df + Dg and D(αf) = α Df.

Definition 2.1.23 (Second derivative). C^2(R^n, R) is the set of all functions from R^n to R whose derivative exists and is in C^1(R^n, R^{1×n}). D^2 : C^2(R^n, R) → C(R^n, R^{n×n}) is defined by D^2 f := D(Df).

Remark 2.1.24. For f ∈ C^2(R^n, R), D^2 f is symmetric, i.e. (D^2 f)_{i,j} = (D^2 f)_{j,i}, ∀i, j = 1, 2, ..., n. This is because D_i D_j f = D_j D_i f.

Theorem 2.1.25.
1. curl ∘ D = 0 on C^2(R^2, R).
2. div ∘ curl = 0 on C^2(R^3, R^3).

Definition 2.1.26 (Laplacian and harmonic functions). The Laplacian, ∆ : C^2(R^n, R) → C(R^n, R), is defined by ∆ = div ∘ D ≡ Σ_{i=1}^n D_i^2. f ∈ C^2(R^n, R) is harmonic if ∆f = 0.

Remark 2.1.27. The differential operators in Definitions 2.1.21 and 2.1.26 are linear.
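Both Example 2.1.20 and Definition 2.1.26 lend themselves to a finite-difference check. The sketch below (mine, not from the notes; the step sizes are assumptions) approximates div f = D_1 f_1 + · · · + D_n f_n and ∆f = Σ D_i^2 f, confirming that the identity on R^3 has divergence 3 and that the assumed example f(x, y) = x^2 − y^2 is harmonic.

```python
def D(j, g, x, h=1e-5):
    # central-difference approximation of the j-th partial derivative of scalar g
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (g(xp) - g(xm)) / (2 * h)

def div(f, x):
    # div f := D1 f1 + ... + Dn fn  (Definition 2.1.18)
    return sum(D(i, lambda y, i=i: f(y)[i], x) for i in range(len(x)))

identity = lambda x: tuple(x)
assert abs(div(identity, (0.2, -0.5, 1.0)) - 3.0) < 1e-8   # Example 2.1.20

def laplacian(f, x, h=1e-4):
    # Laplacian as the sum of second central differences D_i^2 f
    out = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out += (f(xp) - 2 * f(x) + f(xm)) / (h * h)
    return out

g = lambda x: x[0]**2 - x[1]**2          # an assumed harmonic function on R^2
assert abs(laplacian(g, (0.3, 0.8))) < 1e-5
```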

2.2 Parametrised curves and surfaces

2.2.1 Differentiable functions R → R^n: Parametrised curves

Definition 2.2.1 (Paths, curves and parametrisations). A path in R^n is a C^1 function [a, b] → R^n. A curve in R^n is the image of a path in R^n. A path parametrises its curve.

Definition 2.2.2 (Simple, self-intersecting and closed curves). A curve in R^n is
1. simple if it can be parametrised by a path p ∈ C^1([a, b], R^n) such that p is injective on [a, b]; else it is self-intersecting;
2. closed if it can be parametrised by p ∈ C^1([a, b], R^n) such that p(a) = p(b); else p(a) and p(b) are the end points of the curve.

Remark 2.2.3 (Interiors of simple closed curves). Let C ⊂ R^2 be a simple closed curve in R^2. Jordan's theorem provides a definition of int(C), the interior of C, which coincides with our intuitive understanding and proves that all simple closed curves in R^2 have interiors.

Remark 2.2.4. Simple closed curves in R^2 can be oriented either clockwise or anti-clockwise.

2.2.2 Differentiable functions R^2 → R^3: Parametrised surfaces in R^3

Remark 2.2.5 (Surfaces that are graphs of functions). Let g ∈ C^1(Ω ⊂ R^2, R). Then the graph of g is the surface in R^3 parametrised by Ω ∋ x ↦ (x, g(x)) ∈ R^3. The normal to this surface at (x, g(x)) is ±(D_1 g, D_2 g, −1). The normal that points upwards is (−D_1 g, −D_2 g, 1) and the normal that points downwards is (D_1 g, D_2 g, −1).

3 Integral calculus in R^n

Remark. Below we define integral operators on (one-dimensional) curves in R^n, (two-dimensional) subsets of R^2 and surfaces in R^3, and (three-dimensional) subsets of R^3. All these operators are linear (linearity of integration).

3.1 Integration on one-dimensional domains

3.1.1 Integration on curves in R^n

Definition 3.1.1. Let C ⊂ R^n be a curve parametrised by p ∈ C^1([a, b], R^n) and f ∈ C(C, R^n). We define
  ∫_C f · ds := ∫_a^b ⟨f(p(t)), Dp(t)⟩ dt.
This integral is independent of the parametrisation of C up to sign.

Theorem 3.1.2 (Fundamental theorem of line integrals). Let C ⊂ R^n be a curve parametrised by p ∈ C^1([a, b], R^n) and f ∈ C^1(R^n, R). Then
  ∫_C Df · ds = f(p(b)) − f(p(a)).   (1)

Remark 3.1.3. Note that the right hand side of (1) depends only on f evaluated at the end points of C. Thus ∫_C Df · ds depends on C only through its end points.

Corollary 3.1.4 (to Theorem 3.1.2). Let C ⊂ R^n be a simple closed curve and f ∈ C^1(R^n, R). Then
  ∫_C Df · ds = 0.
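The fundamental theorem of line integrals can be verified numerically. In the sketch below (the example f(x, y) = x y, the circular path and the midpoint Riemann sum are all assumptions of mine, not from the notes), the sum approximating ∫_a^b ⟨Df(p(t)), Dp(t)⟩ dt matches f(p(b)) − f(p(a)) = 1/2.

```python
import math

f  = lambda x, y: x * y
Df = lambda x, y: (y, x)                    # gradient of f
p  = lambda t: (math.cos(t), math.sin(t))   # a path along the unit circle
Dp = lambda t: (-math.sin(t), math.cos(t))

a, b, N = 0.0, math.pi / 4, 20000
integral = 0.0
for k in range(N):
    t = a + (k + 0.5) * (b - a) / N         # midpoint rule
    g, v = Df(*p(t)), Dp(t)
    integral += (g[0] * v[0] + g[1] * v[1]) * (b - a) / N

exact = f(*p(b)) - f(*p(a))                 # = cos(pi/4) * sin(pi/4) = 1/2
assert abs(integral - exact) < 1e-6
```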

3.2 Integration on two-dimensional domains

3.2.1 Integration on R^2

Theorem 3.2.1 (Green's theorem). Let D ⊂ R^2 be elementary and f ∈ C^1(R^2, R^2). Then
  ∬_D curl(f) dA = ∫_{∂D} f · ds,
where ∂D is oriented anti-clockwise.

3.2.2 Integration on surfaces in R^3

Definition 3.2.2. Let D ⊂ R^2 be elementary and S ⊂ R^3 be parametrised by p ∈ C^1(D, R^3). Let f ∈ C(S, R^3). We define
  ∬_S f · dS := ∬_D ⟨f ∘ p, n ∘ p⟩ dA
(this integral is independent of the parametrisation of S up to sign).

Remark 3.2.3. Green's theorem is the two-dimensional version of Stokes' theorem (Theorem 3.2.5).

Definition 3.2.4 (Boundary of a surface). Let D ⊂ R^2 be elementary and S ⊂ R^3 be the graph of a function g ∈ C^1(D, R). Then ∂S, the boundary of S, is defined by ∂S := {(x, g(x)) | x ∈ ∂D}.

Theorem 3.2.5 (Stokes' theorem). Let D ⊂ R^2 be elementary, S ⊂ R^3 be the graph of a function g ∈ C^1(D, R) and f ∈ C^1(R^3, R^3). Then
  ∬_S curl(f) · dS = ∫_{∂S} f · ds,   (2)
where S and ∂S are consistently oriented: imagine walking along ∂S so that S is to your left; then the (direction of the) normal to S should be picked so as to point upwards relative to you (i.e. from your feet to your head).

Remark 3.2.6. Note that the right hand side of (2) depends only on f integrated over ∂S. Thus ∬_S curl(f) · dS depends on S only through its boundary ∂S.

3.3 Integration on three-dimensional domains

Theorem 3.3.1 (The divergence theorem or Gauss' theorem). Let V ⊂ R^3 be elementary and f ∈ C^1(R^3, R^3). Then
  ∭_V div(f) dV = ∬_{∂V} f · dS,
where the normal to ∂V is taken to point outwards.
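Green's theorem can also be verified numerically. In the sketch below (the field f(x, y) = (0, x) and the unit disc D are assumed examples of mine, not from the notes), curl f = D_1 f_2 − D_2 f_1 = 1, so ∬_D curl(f) dA is the area π of D; a Riemann sum for ∫_{∂D} f · ds over the anti-clockwise unit circle should agree.

```python
import math

N = 10000
a, b = 0.0, 2 * math.pi
boundary = 0.0
for k in range(N):
    t = a + (k + 0.5) * (b - a) / N
    x, y = math.cos(t), math.sin(t)         # p(t): anti-clockwise unit circle
    fx, fy = 0.0, x                         # f(p(t)) with f(x, y) = (0, x)
    dx, dy = -math.sin(t), math.cos(t)      # Dp(t)
    boundary += (fx * dx + fy * dy) * (b - a) / N

# integral of curl f = 1 over the unit disc is its area, pi
assert abs(boundary - math.pi) < 1e-6
```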