Multivariable Advanced Calculus

Kenneth Kuttler

February 7, 2016
Contents

1 Introduction

2 Some Fundamental Concepts
2.1 Set Theory
2.1.1 Basic Definitions
2.1.2 The Schroder Bernstein Theorem
2.1.3 Equivalence Relations
2.2 lim sup And lim inf
2.3 Double Series

3 Basic Linear Algebra
3.1 Algebra in Fn, Vector Spaces
3.2 Subspaces Spans And Bases
3.3 Linear Transformations
3.4 Block Multiplication Of Matrices
3.5 Determinants
3.5.1 The Determinant Of A Matrix
3.5.2 The Determinant Of A Linear Transformation
3.6 Eigenvalues And Eigenvectors Of Linear Transformations
3.7 Exercises
3.8 Inner Product And Normed Linear Spaces
3.8.1 The Inner Product In Fn
3.8.2 General Inner Product Spaces
3.8.3 Normed Vector Spaces
3.8.4 The p Norms
3.8.5 Orthonormal Bases
3.8.6 The Adjoint Of A Linear Transformation
3.8.7 Schur’s Theorem
3.9 Polar Decompositions
3.10 Exercises

4 Sequences
4.1 Vector Valued Sequences And Their Limits
4.2 Sequential Compactness
4.3 Closed And Open Sets
4.4 Cauchy Sequences And Completeness
4.5 Shrinking Diameters
4.6 Exercises

5 Continuous Functions
5.1 Continuity And The Limit Of A Sequence
5.2 The Extreme Values Theorem
5.3 Connected Sets
5.4 Uniform Continuity
5.5 Sequences And Series Of Functions
5.6 Polynomials
5.7 Sequences Of Polynomials, Weierstrass Approximation
5.7.1 The Tietze Extension Theorem
5.8 The Operator Norm
5.9 Ascoli Arzela Theorem
5.10 Exercises

6 The Derivative
6.1 Limits Of A Function
6.2 Basic Definitions
6.3 The Chain Rule
6.4 The Matrix Of The Derivative
6.5 A Mean Value Inequality
6.6 Existence Of The Derivative, C¹ Functions
6.7 Higher Order Derivatives
6.8 Cᵏ Functions
6.8.1 Some Standard Notation
6.9 The Derivative And The Cartesian Product
6.10 Mixed Partial Derivatives
6.11 Implicit Function Theorem
6.11.1 More Derivatives
6.11.2 The Case Of Rn
6.12 Taylor’s Formula
6.12.1 Second Derivative Test
6.13 The Method Of Lagrange Multipliers
6.14 Exercises

7 Measures And Measurable Functions
7.1 Compact Sets
7.2 An Outer Measure On P(R)
7.3 General Outer Measures And Measures
7.3.1 Measures And Measure Spaces
7.4 The Borel Sets, Regular Measures
7.4.1 Definition of Regular Measures
7.4.2 The Borel Sets
7.4.3 Borel Sets And Regularity
7.5 Measures And Outer Measures
7.5.1 Measures From Outer Measures
7.5.2 Completion Of Measure Spaces
7.6 One Dimensional Lebesgue Stieltjes Measure
7.7 Measurable Functions
7.8 Exercises

8 The Abstract Lebesgue Integral
8.1 Definition For Nonnegative Measurable Functions
8.1.1 Riemann Integrals For Decreasing Functions
8.1.2 The Lebesgue Integral For Nonnegative Functions
8.2 The Lebesgue Integral For Nonnegative Simple Functions
8.3 The Monotone Convergence Theorem
8.4 Other Definitions
8.5 Fatou’s Lemma
8.6 The Righteous Algebraic Desires Of The Lebesgue Integral
8.7 The Lebesgue Integral, L¹
8.8 Approximation With Simple Functions
8.9 The Dominated Convergence Theorem
8.10 Approximation With Cc(Y)
8.11 The One Dimensional Lebesgue Integral
8.12 Exercises

9 The Lebesgue Integral For Functions Of p Variables
9.1 π Systems
9.2 p Dimensional Lebesgue Measure And Integrals
9.2.1 Iterated Integrals
9.2.2 p Dimensional Lebesgue Measure And Integrals
9.2.3 Fubini’s Theorem
9.3 Exercises
9.4 Lebesgue Measure On Rp
9.5 Mollifiers
9.6 The Vitali Covering Theorem
9.7 Vitali Coverings
9.8 Change Of Variables For Linear Maps
9.9 Change Of Variables For C¹ Functions
9.10 Change Of Variables For Mappings Which Are Not One To One
9.11 Spherical Coordinates In p Dimensions
9.12 Brouwer Fixed Point Theorem
9.13 Exercises

10 Degree Theory, An Introduction
10.1 Preliminary Results
10.2 Definitions And Elementary Properties
10.2.1 The Degree For C² (Ω; Rn)
10.2.2 Definition Of The Degree For Continuous Functions
10.3 Borsuk’s Theorem
10.4 Applications
10.5 The Product Formula
10.6 Integration And The Degree
10.7 Exercises

11 Integration Of Differential Forms
11.1 Manifolds
11.2 Some Important Measure Theory
11.2.1 Eggoroff’s Theorem
11.2.2 The Vitali Convergence Theorem
11.3 The Binet Cauchy Formula
11.4 The Area Measure On A Manifold
11.5 Integration Of Differential Forms On Manifolds
11.5.1 The Derivative Of A Differential Form
11.6 Stoke’s Theorem And The Orientation Of ∂Ω
11.7 Green’s Theorem, An Example
11.7.1 An Oriented Manifold
11.7.2 Green’s Theorem
11.8 The Divergence Theorem
11.9 Spherical Coordinates
11.10 Exercises

12 The Laplace And Poisson Equations
12.1 Balls
12.2 Poisson’s Problem
12.2.1 Poisson’s Problem For A Ball
12.2.2 Does It Work In Case f = 0?
12.2.3 The Case Where f ≠ 0, Poisson’s Equation
12.3 Properties Of Harmonic Functions
12.4 Laplace’s Equation For General Sets
12.4.1 Properties Of Subharmonic Functions
12.4.2 Poisson’s Problem Again

13 The Jordan Curve Theorem

14 Line Integrals
14.1 Basic Properties
14.1.1 Length
14.1.2 Orientation
14.2 The Line Integral
14.3 Simple Closed Rectifiable Curves
14.3.1 The Jordan Curve Theorem
14.3.2 Orientation And Green’s Formula
14.4 Stoke’s Theorem
14.5 Interpretation And Review
14.5.1 The Geometric Description Of The Cross Product
14.5.2 The Box Product, Triple Product
14.5.3 A Proof Of The Distributive Law For The Cross Product
14.5.4 The Coordinate Description Of The Cross Product
14.5.5 The Integral Over A Two Dimensional Surface
14.6 Introduction To Complex Analysis
14.6.1 Basic Theorems, The Cauchy Riemann Equations
14.6.2 Contour Integrals
14.6.3 The Cauchy Integral
14.6.4 The Cauchy Goursat Theorem
14.7 Exercises

15 Hausdorff Measures
15.1 Definition Of Hausdorff Measures
15.1.1 Properties Of Hausdorff Measure
15.1.2 Hn And mn
15.2 Technical Considerations*
15.2.1 Steiner Symmetrization*
15.2.2 The Isodiametric Inequality*
15.2.3 The Proper Value Of β(n)
15.2.4 A Formula For α(n)
15.3 Hausdorff Measure And Linear Transformations


Copyright © 2007
Chapter 1

Introduction

This book is directed to people who have a good understanding of the concepts of one
variable calculus including the notions of limit of a sequence and completeness of R. It
develops multivariable advanced calculus.
In order to do multivariable calculus correctly, you must first understand some linear
algebra. Therefore, a condensed course in linear algebra is presented first, emphasizing
those topics in linear algebra which are useful in analysis, not those topics which are
primarily dependent on row operations.
Many topics could be presented in greater generality than I have chosen to do. I have
also attempted to feature calculus, not topology although there are many interesting
topics from topology. This means I introduce the topology as it is needed rather than
using the possibly more efficient practice of placing it right at the beginning in more
generality than will be needed. I think it might make the topological concepts more
memorable by linking them in this way to other concepts.
After the chapter on the n dimensional Lebesgue integral, you can choose between a very general treatment of integration of differential forms based on degree theory in Chapters 10 and 11, or an independent path through a proof of a general version of Green’s theorem in the plane, leading to a very good version of Stoke’s theorem for a two dimensional surface, by following Chapters 12 and 13. This
approach also leads naturally to contour integrals and complex analysis. I got this idea
from reading Apostol’s advanced calculus book. Finally, there is an introduction to
Hausdorff measures and the area formula in the last chapter.
I have avoided many advanced topics like the Radon Nikodym theorem, represen-
tation theorems, function spaces, and differentiation theory. It seems to me these are
topics for a more advanced course in real analysis. I chose to feature the Lebesgue
integral because I have gone through the theory of the Riemann integral for a function
of n variables and ended up thinking it was too fussy and that the extra abstraction of
the Lebesgue integral was worthwhile in order to avoid this fussiness. Also, it seemed
to me that this book should be in some sense “more advanced” than my calculus book
which does contain in an appendix all this fussy theory.

Chapter 2

Some Fundamental Concepts

2.1 Set Theory


2.1.1 Basic Definitions
A set is a collection of things called elements of the set. For example, the set of integers is the collection of signed whole numbers such as 1, 2, −4, etc. This set, whose existence will be assumed, is denoted by Z. Other sets could be the set of people in a family or the set of donuts in a display case at the store. Sometimes braces { } are used to specify a set by listing the things which are in the set between the braces. For example, the set of integers between −1 and 2, including these numbers, could be denoted as {−1, 0, 1, 2}. The notation signifying that x is an element of a set S is written as x ∈ S.
Thus, 1 ∈ {−1, 0, 1, 2, 3}. Here are some axioms about sets. Axioms are statements
which are accepted, not proved.

1. Two sets are equal if and only if they have the same elements.

2. To every set, A, and to every condition S (x) there corresponds a set, B, whose
elements are exactly those elements x of A for which S (x) holds.

3. For every collection of sets there exists a set that contains all the elements that
belong to at least one set of the given collection.

4. The Cartesian product of a nonempty family of nonempty sets is nonempty.

5. If A is a set there exists a set, P (A) such that P (A) is the set of all subsets of A.
This is called the power set.

These axioms are referred to as the axiom of extension, axiom of specification, axiom
of unions, axiom of choice, and axiom of powers respectively.
It seems fairly clear you should want to believe in the axiom of extension. It is
merely saying, for example, that {1, 2, 3} = {2, 3, 1} since these two sets have the same
elements in them. Similarly, it would seem you should be able to specify a new set from
a given set using some “condition” which can be used as a test to determine whether
the element in question is in the set. For example, the set of all integers which are
multiples of 2. This set could be specified as follows.

{x ∈ Z : x = 2y for some y ∈ Z} .

In this notation, the colon is read as “such that” and in this case the condition is being
a multiple of 2.
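To make the idea concrete, the same specification can be imitated on a computer for a finite slice of Z; the following Python sketch (an illustration added here, not part of the original text) builds exactly the set of even integers between −10 and 10.

```python
# The axiom of specification, imitated on a finite stand-in for Z:
# B = {x in A : S(x)}, where the condition S(x) is "x = 2y for some integer y".
A = range(-10, 11)                    # a finite slice of Z
B = {x for x in A if x % 2 == 0}      # {x in A : x is a multiple of 2}
print(sorted(B))                      # [-10, -8, -6, ..., 6, 8, 10]
```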


Another example of political interest could be the set of all judges who are not
judicial activists. I think you can see this last is not a very precise condition since
there is no way to determine to everyone’s satisfaction whether a given judge is an
activist. Also, just because something is grammatically correct does not mean
it makes any sense. For example consider the following nonsense.

S = {x ∈ set of dogs : it is colder in the mountains than in the winter} .

So what is a condition?
We will leave these sorts of considerations and assume our conditions make sense.
The axiom of unions states that for any collection of sets, there is a set consisting of all
the elements in each of the sets in the collection. Of course this is also open to further
consideration. What is a collection? Maybe it would be better to say “set of sets” or,
given a set whose elements are sets there exists a set whose elements consist of exactly
those things which are elements of at least one of these sets. If S is such a set whose
elements are sets,
∪ {A : A ∈ S} or ∪ S
signify this union.
Something is in the Cartesian product of a set or “family” of sets if it consists of
a single thing taken from each set in the family. Thus (1, 2, 3) ∈ {1, 4, .2} × {1, 2, 7} ×
{4, 3, 7, 9} because it consists of exactly one element from each of the sets which are
separated by ×. Also, this is the notation for the Cartesian product of finitely many
sets. If S is a set whose elements are sets,

∏_{A∈S} A

signifies the Cartesian product.


The Cartesian product is the set of choice functions, a choice function being a func-
tion which selects exactly one element of each set of S. You may think the axiom of
choice, stating that the Cartesian product of a nonempty family of nonempty sets is
nonempty, is innocuous but there was a time when many mathematicians were ready
to throw it out because it implies things which are very hard to believe, things which
never happen without the axiom of choice.
A is a subset of B, written A ⊆ B, if every element of A is also an element of B.
This can also be written as B ⊇ A. A is a proper subset of B, written A ⊂ B or B ⊃ A
if A is a subset of B but A is not equal to B, A ̸= B. A ∩ B denotes the intersection of
the two sets A and B and it means the set of elements of A which are also elements of
B. The axiom of specification shows this is a set. The empty set is the set which has
no elements in it, denoted as ∅. A ∪ B denotes the union of the two sets A and B and
it means the set of all elements which are in either of the sets. It is a set because of the
axiom of unions.
The complement of a set, (the set of things which are not in the given set ) must be
taken with respect to a given set called the universal set which is a set which contains
the one whose complement is being taken. Thus, the complement of A, denoted as A^C (or more precisely as X \ A) is a set obtained from using the axiom of specification to write

A^C ≡ {x ∈ X : x ∉ A}.

The symbol ∉ means: “is not an element of”. Note the axiom of specification takes
place relative to a given set. Without this universal set it makes no sense to use the
axiom of specification to obtain the complement.
Words such as “all” or “there exists” are called quantifiers and they must be under-
stood relative to some given set. For example, the set of all integers larger than 3. Or
there exists an integer larger than 7. Such statements have to do with a given set, in
this case the integers. Failure to have a reference set when quantifiers are used turns
out to be illogical even though such usage may be grammatically correct. Quantifiers
are used often enough that there are symbols for them. The symbol ∀ is read as “for
all” or “for every” and the symbol ∃ is read as “there exists”. Thus ∀∀∃∃ could mean
for every upside down A there exists a backwards E.
DeMorgan’s laws are very useful in mathematics. Let S be a set of sets each of which is contained in some universal set, U. Then

∪ {A^C : A ∈ S} = (∩ {A : A ∈ S})^C

and

∩ {A^C : A ∈ S} = (∪ {A : A ∈ S})^C.

These laws follow directly from the definitions. Also following directly from the definitions are the identities: Let S be a set of sets; then

B ∪ ∪ {A : A ∈ S} = ∪ {B ∪ A : A ∈ S}

and

B ∩ ∪ {A : A ∈ S} = ∪ {B ∩ A : A ∈ S}.
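These identities are easy to test on small finite sets. The following Python sketch (an added illustration; the particular sets U, S, and B are arbitrary choices made here) checks both DeMorgan laws and both distributive identities.

```python
# Check DeMorgan's laws and the two distributive identities on small finite sets.
U = set(range(10))                      # universal set
S = [{1, 2, 3}, {2, 3, 4}, {3, 5, 7}]   # a collection of sets
B = {0, 2, 3, 9}

union_all = set().union(*S)             # ∪ {A : A ∈ S}
inter_all = set.intersection(*S)        # ∩ {A : A ∈ S}

# DeMorgan: ∪ {A^C} = (∩ {A})^C   and   ∩ {A^C} = (∪ {A})^C
assert set().union(*[U - A for A in S]) == U - inter_all
assert set.intersection(*[U - A for A in S]) == U - union_all

# Distributive identities from the text
assert B | union_all == set().union(*[B | A for A in S])
assert B & union_all == set().union(*[B & A for A in S])
print("all identities hold for this example")
```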

Unfortunately, there is no single universal set which can be used for all sets. Here is
why: Suppose there were. Call it S. Then you could consider A the set of all elements
of S which are not elements of themselves, this from the axiom of specification. If A
is an element of itself, then it fails to qualify for inclusion in A. Therefore, it must not
be an element of itself. However, if this is so, it qualifies for inclusion in A so it is an
element of itself and so this can’t be true either. Thus the most basic of conditions you
could imagine, that of being an element of, is meaningless and so allowing such a set
causes the whole theory to be meaningless. The solution is to not allow a universal set.
As mentioned by Halmos in Naive set theory, “Nothing contains everything”. Always
beware of statements involving quantifiers wherever they occur, even this one. This little
observation described above is due to Bertrand Russell and is called Russell’s paradox.

2.1.2 The Schroder Bernstein Theorem


It is very important to be able to compare the size of sets in a rational way. The most
useful theorem in this context is the Schroder Bernstein theorem which is the main
result to be presented in this section. The Cartesian product is discussed above. The
next definition reviews this and defines the concept of a function.

Definition 2.1.1 Let X and Y be sets.

X × Y ≡ {(x, y) : x ∈ X and y ∈ Y }

A relation is defined to be a subset of X × Y . A function, f, also called a mapping, is a


relation which has the property that if (x, y) and (x, y₁) are both elements of f, then y = y₁. The domain of f is defined as

D (f ) ≡ {x : (x, y) ∈ f } ,

written as f : D (f ) → Y .

It is probably safe to say that most people do not think of functions as a type of
relation which is a subset of the Cartesian product of two sets. A function is like a
machine which takes inputs, x and makes them into a unique output, f (x). Of course,
that is what the above definition says with more precision. An ordered pair, (x, y)
which is an element of the function or mapping has an input, x and a unique output,
y, denoted as f(x), while the name of the function is f. The word “mapping” is often a noun meaning function. However, it is also a verb, as in “f is mapping A to B”. That which
a function is thought of as doing is also referred to using the word “maps” as in: f maps
X to Y . However, a set of functions may be called a set of maps so this word might
also be used as the plural of a noun. There is no help for it. You just have to suffer
with this nonsense.
The following theorem which is interesting for its own sake will be used to prove the
Schroder Bernstein theorem.

Theorem 2.1.2 Let f : X → Y and g : Y → X be two functions. Then there


exist sets A, B, C, D, such that

A ∪ B = X, C ∪ D = Y, A ∩ B = ∅, C ∩ D = ∅,

f (A) = C, g (D) = B.

The following picture illustrates the conclusion of this theorem.

[Diagram: X is partitioned into A and B, and Y into C and D; f carries A onto C = f(A) and g carries D onto B = g(D).]

Proof: Consider the empty set, ∅ ⊆ X. If y ∈ Y \ f(∅), then g(y) ∉ ∅ because ∅ has no elements. Also, if A, B, C, and D are as described above, A also would have this same property that the empty set has. However, A is probably larger. Therefore, say A₀ ⊆ X satisfies P if whenever y ∈ Y \ f(A₀), g(y) ∉ A₀. Let

𝒜 ≡ {A₀ ⊆ X : A₀ satisfies P}.

Let A = ∪𝒜. If y ∈ Y \ f(A), then for each A₀ ∈ 𝒜, y ∈ Y \ f(A₀) and so g(y) ∉ A₀. Since g(y) ∉ A₀ for all A₀ ∈ 𝒜, it follows g(y) ∉ A. Hence A satisfies P and is the largest subset of X which does so. Now define

C ≡ f(A), D ≡ Y \ C, B ≡ X \ A.

It only remains to verify that g(D) = B. Suppose x ∈ B = X \ A. Then A ∪ {x} does not satisfy P and so there exists y ∈ Y \ f(A ∪ {x}) ⊆ D such that g(y) ∈ A ∪ {x}. But y ∉ f(A) and so, since A satisfies P, it follows g(y) ∉ A. Hence g(y) = x and so x ∈ g(D). This proves the theorem. 

Theorem 2.1.3 (Schroder Bernstein) If f : X → Y and g : Y → X are one to


one, then there exists h : X → Y which is one to one and onto.

Proof: Let A, B, C, D be the sets of Theorem 2.1.2 and define

h(x) ≡ f(x) if x ∈ A, and h(x) ≡ g⁻¹(x) if x ∈ B.

Then h is the desired one to one and onto mapping.


Recall that the Cartesian product may be considered as the collection of choice
functions.

Definition 2.1.4 Let I be a set and let Xᵢ be a set for each i ∈ I. f is a choice function, written as

f ∈ ∏_{i∈I} Xᵢ,

if f(i) ∈ Xᵢ for each i ∈ I.

The axiom of choice says that if Xᵢ ≠ ∅ for each i ∈ I, for I a set, then

∏_{i∈I} Xᵢ ≠ ∅.

Sometimes the two functions, f and g are onto but not one to one. It turns out that
with the axiom of choice, a similar conclusion to the above may be obtained.

Corollary 2.1.5 If f : X → Y is onto and g : Y → X is onto, then there exists


h : X → Y which is one to one and onto.

Proof: For each y ∈ Y, f⁻¹(y) ≡ {x ∈ X : f(x) = y} ≠ ∅. Therefore, by the axiom of choice, there exists f₀⁻¹ ∈ ∏_{y∈Y} f⁻¹(y), which is the same as saying that for each y ∈ Y, f₀⁻¹(y) ∈ f⁻¹(y). Similarly, there exists g₀⁻¹(x) ∈ g⁻¹(x) for all x ∈ X. Then f₀⁻¹ is one to one because if f₀⁻¹(y₁) = f₀⁻¹(y₂), then

y₁ = f(f₀⁻¹(y₁)) = f(f₀⁻¹(y₂)) = y₂.

Similarly g₀⁻¹ is one to one. Therefore, by the Schroder Bernstein theorem, there exists h : X → Y which is one to one and onto.

Definition 2.1.6 A set S is finite if there exists a natural number n and a map θ which maps {1, · · · , n} one to one and onto S. S is infinite if it is not finite. A set S is called countable if there exists a map θ mapping N one to one and onto S. (When θ maps a set A to a set B, this will be written as θ : A → B in the future.) Here N ≡ {1, 2, · · · }, the natural numbers. S is at most countable if there exists a map θ : N → S which is onto.

The property of being at most countable is often referred to as being countable


because the question of interest is normally whether one can list all elements of the set,
designating a first, second, third etc. in such a way as to give each element of the set a
natural number. The possibility that a single element of the set may be counted more
than once is often not important.

Theorem 2.1.7 If X and Y are both at most countable, then X × Y is also at


most countable. If either X or Y is countable, then X × Y is also countable.

Proof: It is given that there exists a mapping η : N → X which is onto. Define


η (i) ≡ xi and consider X as the set {x1 , x2 , x3 , · · · }. Similarly, consider Y as the set
{y1 , y2 , y3 , · · · }. It follows the elements of X×Y are included in the following rectangular
array.

(x₁, y₁)  (x₁, y₂)  (x₁, y₃)  · · ·   ← Those which have x₁ in first slot.
(x₂, y₁)  (x₂, y₂)  (x₂, y₃)  · · ·   ← Those which have x₂ in first slot.
(x₃, y₁)  (x₃, y₂)  (x₃, y₃)  · · ·   ← Those which have x₃ in first slot.
    ⋮         ⋮         ⋮

Follow a path through this array as follows.

(x₁, y₁) → (x₁, y₂) → (x₂, y₁) → (x₃, y₁) → (x₂, y₂) → (x₁, y₃) → (x₁, y₄) → · · ·

Thus the first element of X × Y is (x1 , y1 ), the second element of X × Y is (x1 , y2 ), the
third element of X × Y is (x2 , y1 ) etc. This assigns a number from N to each element
of X × Y. Thus X × Y is at most countable.
It remains to show the last claim. Suppose without loss of generality that X is
countable. Then there exists α : N → X which is one to one and onto. Let β : X×Y → N
be defined by β((x, y)) ≡ α⁻¹(x). Thus β is onto N. By the first part there exists a
function from N onto X × Y . Therefore, by Corollary 2.1.5, there exists a one to one
and onto mapping from X × Y to N. This proves the theorem. 
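The zig-zag path is easy to turn into an explicit enumeration. The Python sketch below (an added illustration, with the pairs represented by their index positions) lists the first few elements of N × N in exactly the order described in the proof.

```python
def enumerate_pairs(limit):
    """Return index pairs (i, j) with i, j >= 1, following the zig-zag path in
    the proof of Theorem 2.1.7: (1,1), (1,2), (2,1), (3,1), (2,2), (1,3), ..."""
    pairs = []
    d = 2                                    # i + j is constant on each diagonal
    while len(pairs) < limit:
        diag = [(i, d - i) for i in range(1, d)]
        if d % 2 == 0:                       # alternate the sweep direction
            diag.reverse()
        pairs.extend(diag)
        d += 1
    return pairs[:limit]

print(enumerate_pairs(6))
# [(1, 1), (1, 2), (2, 1), (3, 1), (2, 2), (1, 3)]
```

Every pair receives a position in the list, which is precisely the assignment of a natural number to each element of X × Y described above.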

Theorem 2.1.8 If X and Y are at most countable, then X ∪ Y is at most


countable. If either X or Y is countable, then X ∪ Y is countable.

Proof: As in the preceding theorem,

X = {x1 , x2 , x3 , · · · }

and
Y = {y1 , y2 , y3 , · · · } .
Consider the following array consisting of X ∪ Y and path through it.

x₁ → x₂ → y₁ → y₂ → x₃ → · · ·

Thus the first element of X ∪ Y is x₁, the second is x₂, the third is y₁, the fourth is y₂
etc.
Consider the second claim. By the first part, there is a map from N onto X ∪ Y. Suppose without loss of generality that X is countable and α : N → X is one to one and onto. Then define β(y) ≡ 1 for all y ∈ Y, and β(x) ≡ α⁻¹(x) for all x ∈ X. Thus β maps X ∪ Y onto N and this shows there exist two onto maps, one mapping X ∪ Y onto N and the other mapping N onto X ∪ Y. Then Corollary 2.1.5 yields the conclusion. This proves the theorem. 

2.1.3 Equivalence Relations


There are many ways to compare elements of a set other than to say two elements are
equal or the same. For example, in the set of people let two people be equivalent if they
have the same weight. This would not be saying they were the same person, just that
they weighed the same. Often such relations involve considering one characteristic of
the elements of a set and then saying the two elements are equivalent if they are the
same as far as the given characteristic is concerned.

Definition 2.1.9 Let S be a set. ∼ is an equivalence relation on S if it satisfies


the following axioms.

1. x ∼ x for all x ∈ S. (Reflexive)

2. If x ∼ y then y ∼ x. (Symmetric)

3. If x ∼ y and y ∼ z, then x ∼ z. (Transitive)

Definition 2.1.10 [x] denotes the set of all elements of S which are equivalent
to x and [x] is called the equivalence class determined by x or just the equivalence class
of x.

With the above definition one can prove the following simple theorem.

Theorem 2.1.11 Let ∼ be an equivalence relation defined on a set S and let H denote the set of equivalence classes. Then if [x] and [y] are two of these equivalence classes, either x ∼ y and [x] = [y], or it is not true that x ∼ y and [x] ∩ [y] = ∅.
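As a concrete example (added here for illustration), the relation x ∼ y meaning that x − y is a multiple of 3 is reflexive, symmetric, and transitive, and the following Python sketch splits a finite set of integers into the resulting equivalence classes.

```python
# The equivalence relation x ~ y  iff  x - y is a multiple of 3,
# restricted to a finite set S of integers.
S = range(-5, 6)
classes = {}
for x in S:
    classes.setdefault(x % 3, []).append(x)   # x % 3 labels the class [x]

for label, members in sorted(classes.items()):
    print(label, members)
# 0 [-3, 0, 3]
# 1 [-5, -2, 1, 4]
# 2 [-4, -1, 2, 5]
# Distinct classes are disjoint, as Theorem 2.1.11 predicts, and together they cover S.
```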

2.2 lim sup And lim inf


It is assumed in all that is done that R is complete. There are two ways to describe
completeness of R. One is to say that every bounded set has a least upper bound and a
greatest lower bound. The other is to say that every Cauchy sequence converges. These
two equivalent notions of completeness will be taken as given.
The symbol, F will mean either R or C. The symbol [−∞, ∞] will mean all real
numbers along with +∞ and −∞ which are points which we pretend are at the right
and left ends of the real line respectively. The inclusion of these make believe points
makes the statement of certain theorems less troublesome.

Definition 2.2.1 For A ⊆ [−∞, ∞] , A ̸= ∅ sup A is defined as the least upper


bound in case A is bounded above by a real number and equals ∞ if A is not bounded
above. Similarly inf A is defined to equal the greatest lower bound in case A is bounded
below by a real number and equals −∞ in case A is not bounded below.

Lemma 2.2.2 If {Aₙ} is an increasing sequence in [−∞, ∞], then

sup {Aₙ} = lim_{n→∞} Aₙ.

Similarly, if {Aₙ} is decreasing, then

inf {Aₙ} = lim_{n→∞} Aₙ.

Proof: Let sup ({An : n ∈ N}) = r. In the first case, suppose r < ∞. Then letting
ε > 0 be given, there exists n such that An ∈ (r − ε, r]. Since {An } is increasing, it
follows if m > n, then r − ε < An ≤ Am ≤ r and so limn→∞ An = r as claimed. In
the case where r = ∞, then if a is a real number, there exists n such that An > a.
Since {Ak } is increasing, it follows that if m > n, Am > a. But this is what is meant
by limn→∞ An = ∞. The other case is that r = −∞. But in this case, An = −∞ for all
n and so limn→∞ An = −∞. The case where An is decreasing is entirely similar. This
proves the lemma. 
Sometimes the limit of a sequence does not exist. For example, if aₙ = (−1)ⁿ, then lim_{n→∞} aₙ does not exist. This is because the terms of the sequence are a distance
of 1 apart. Therefore there can’t exist a single number such that all the terms of the
sequence are ultimately within 1/4 of that number. The nice thing about lim sup and
lim inf is that they always exist. First here is a simple lemma and definition.

Definition 2.2.3 Denote by [−∞, ∞] the real line along with symbols ∞ and −∞. It is understood that ∞ is larger than every real number and −∞ is smaller than every real number. Then if {Aₙ} is an increasing sequence of points of [−∞, ∞], lim_{n→∞} Aₙ equals ∞ if the only upper bound of the set {Aₙ} is ∞. If {Aₙ} is bounded above by a real number, then lim_{n→∞} Aₙ is defined in the usual way and equals the least upper bound of {Aₙ}. If {Aₙ} is a decreasing sequence of points of [−∞, ∞], lim_{n→∞} Aₙ equals −∞ if the only lower bound of the sequence {Aₙ} is −∞. If {Aₙ} is bounded below by a real number, then lim_{n→∞} Aₙ is defined in the usual way and equals the greatest lower bound of {Aₙ}. More simply, if {Aₙ} is increasing,

lim_{n→∞} Aₙ = sup {Aₙ}

and if {Aₙ} is decreasing then

lim_{n→∞} Aₙ = inf {Aₙ}.

Lemma 2.2.4 Let {aₙ} be a sequence of real numbers and let Uₙ ≡ sup {aₖ : k ≥ n}. Then {Uₙ} is a decreasing sequence. Also if Lₙ ≡ inf {aₖ : k ≥ n}, then {Lₙ} is an increasing sequence. Therefore, lim_{n→∞} Lₙ and lim_{n→∞} Uₙ both exist.

Proof: Let Wₙ be an upper bound for {aₖ : k ≥ n}. Then since these sets are getting smaller, it follows that for m < n, Wₘ is an upper bound for {aₖ : k ≥ n}. In particular if Wₘ = Uₘ, then Uₘ is an upper bound for {aₖ : k ≥ n} and so Uₘ is at least as large as Uₙ, the least upper bound for {aₖ : k ≥ n}. The claim that {Lₙ} is increasing is similar. This proves the lemma. 
From the lemma, the following definition makes sense.

Definition 2.2.5 Let {aₙ} be any sequence of points of [−∞, ∞]. Then

lim sup_{n→∞} aₙ ≡ lim_{n→∞} sup {aₖ : k ≥ n},

lim inf_{n→∞} aₙ ≡ lim_{n→∞} inf {aₖ : k ≥ n}.
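The definition can be watched numerically. In the Python sketch below (an added illustration, using the sequence aₙ = (−1)ⁿ(1 + 1/n) and a finite truncation of each tail), the tail suprema Uₙ decrease toward 1 and the tail infima Lₙ increase toward −1, so lim sup aₙ = 1 and lim inf aₙ = −1 even though lim aₙ does not exist.

```python
# a_n = (-1)^n * (1 + 1/n), n = 1, 2, ..., N  (finite truncation of the tails)
N = 2000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

def U(n):                      # U_n = sup {a_k : k >= n}, truncated at N
    return max(a[n - 1:])

def L(n):                      # L_n = inf {a_k : k >= n}, truncated at N
    return min(a[n - 1:])

for n in (1, 10, 100, 1000):
    print(n, round(U(n), 4), round(L(n), 4))
# U_n decreases toward 1 and L_n increases toward -1, so
# lim sup a_n = 1 and lim inf a_n = -1, while lim a_n does not exist.
```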

Theorem 2.2.6 Suppose {aₙ} is a sequence of real numbers and that lim sup_{n→∞} aₙ and lim inf_{n→∞} aₙ are both real numbers. Then lim_{n→∞} aₙ exists if and only if

lim inf_{n→∞} aₙ = lim sup_{n→∞} aₙ,

and in this case,

lim_{n→∞} aₙ = lim inf_{n→∞} aₙ = lim sup_{n→∞} aₙ.
Proof: First note that

sup {aₖ : k ≥ n} ≥ inf {aₖ : k ≥ n}

and so from Theorem 4.1.7,

lim sup_{n→∞} aₙ ≡ lim_{n→∞} sup {aₖ : k ≥ n} ≥ lim_{n→∞} inf {aₖ : k ≥ n} ≡ lim inf_{n→∞} aₙ.

Suppose first that lim_{n→∞} aₙ exists and is a real number. Then by Theorem 4.4.3 {aₙ} is a Cauchy sequence. Therefore, if ε > 0 is given, there exists N such that if m, n ≥ N, then

|aₙ − aₘ| < ε/3.

From the definition of sup {aₖ : k ≥ N}, there exists n₁ ≥ N such that

sup {aₖ : k ≥ N} ≤ a_{n₁} + ε/3.

Similarly, there exists n₂ ≥ N such that

inf {aₖ : k ≥ N} ≥ a_{n₂} − ε/3.

It follows that

sup {aₖ : k ≥ N} − inf {aₖ : k ≥ N} ≤ |a_{n₁} − a_{n₂}| + 2ε/3 < ε.

Since the sequence {sup {aₖ : k ≥ N}}_{N=1}^∞ is decreasing and {inf {aₖ : k ≥ N}}_{N=1}^∞ is increasing, it follows from Theorem 4.1.7 that

0 ≤ lim_{N→∞} sup {aₖ : k ≥ N} − lim_{N→∞} inf {aₖ : k ≥ N} ≤ ε.

Since ε is arbitrary, this shows

lim_{N→∞} sup {aₖ : k ≥ N} = lim_{N→∞} inf {aₖ : k ≥ N}.    (2.1)

Next suppose 2.1 holds. Then

lim_{N→∞} (sup {aₖ : k ≥ N} − inf {aₖ : k ≥ N}) = 0.

Since sup {aₖ : k ≥ N} ≥ inf {aₖ : k ≥ N}, it follows that for every ε > 0, there exists N such that

sup {aₖ : k ≥ N} − inf {aₖ : k ≥ N} < ε.

Thus if m, n > N, then

|aₘ − aₙ| < ε,

which means {aₙ} is a Cauchy sequence. Since R is complete, it follows that lim_{n→∞} aₙ ≡ a exists. By the squeezing theorem, it follows

a = lim inf_{n→∞} aₙ = lim sup_{n→∞} aₙ.

This proves the theorem. 


With the above theorem, here is how to define the limit of a sequence of points in
[−∞, ∞].
Definition 2.2.7 Let {aₙ} be a sequence of points of [−∞, ∞]. Then lim_{n→∞} aₙ exists exactly when

lim inf_{n→∞} aₙ = lim sup_{n→∞} aₙ

and in this case

lim_{n→∞} aₙ ≡ lim inf_{n→∞} aₙ = lim sup_{n→∞} aₙ.

The significance of lim sup and lim inf, in addition to what was just discussed, is
contained in the following theorem which follows quickly from the definition.

Theorem 2.2.8 Suppose {aₙ} is a sequence of points of [−∞, ∞]. Let

λ = lim sup_{n→∞} aₙ.

Then if b > λ, it follows there exists N such that whenever n ≥ N,

aₙ ≤ b.

If c < λ, then aₙ > c for infinitely many values of n. Let

γ = lim inf_{n→∞} aₙ.

Then if d < γ, it follows there exists N such that whenever n ≥ N,

aₙ ≥ d.

If e > γ, it follows aₙ < e for infinitely many values of n.

The proof of this theorem is left as an exercise for you. It follows directly from the
definition and it is the sort of thing you must do yourself. Here is one other simple
proposition.

Proposition 2.2.9 Let lim_{n→∞} aₙ = a > 0. Then

lim sup_{n→∞} aₙbₙ = a lim sup_{n→∞} bₙ.

Proof: This follows from the definition. Let λₙ = sup {aₖbₖ : k ≥ n}. For all n large enough, aₙ > a − ε where ε is small enough that a − ε > 0. Therefore,

λₙ ≥ sup {bₖ : k ≥ n} (a − ε)

for all n large enough. Then

lim sup_{n→∞} aₙbₙ = lim_{n→∞} λₙ ≥ lim_{n→∞} (sup {bₖ : k ≥ n} (a − ε)) = (a − ε) lim sup_{n→∞} bₙ.

Similar reasoning shows

lim sup_{n→∞} aₙbₙ ≤ (a + ε) lim sup_{n→∞} bₙ.

Now since ε > 0 is arbitrary, the conclusion follows.



2.3 Double Series

Sometimes it is required to consider double series which are of the form

∑_{k=m}^∞ ∑_{j=m}^∞ a_{jk} ≡ ∑_{k=m}^∞ ( ∑_{j=m}^∞ a_{jk} ).

In other words, first sum on j yielding something which depends on k and then sum these. The major consideration for these double series is the question of when

∑_{k=m}^∞ ∑_{j=m}^∞ a_{jk} = ∑_{j=m}^∞ ∑_{k=m}^∞ a_{jk}.

In other words, when does it make no difference which subscript is summed over first?
In the case of finite sums there is no issue here. You can always write


∑_{k=m}^M ∑_{j=m}^N a_{jk} = ∑_{j=m}^N ∑_{k=m}^M a_{jk}

because addition is commutative. However, there are limits involved with infinite sums
and the interchange in order of summation involves taking limits in a different order.
Therefore, it is not always true that it is permissible to interchange the two sums. A
general rule of thumb is this: If something involves changing the order in which two
limits are taken, you may not do it without agonizing over the question. In general,
limits foul up algebra and also introduce things which are counter intuitive. Here is an
example. This example is a little technical. It is placed here just to prove conclusively
there is a question which needs to be considered.

Example 2.3.1 Consider the following picture which depicts some of the ordered pairs
(m, n) where m, n are positive integers.
0 0 0 0 0 c 0 -c

0 0 0 0 c 0 -c 0

0 0 0 c 0 -c 0 0

0 0 c 0 -c 0 0 0

0 c 0 -c 0 0 0 0

b 0 -c 0 0 0 0 0

0 a 0 0 0 0 0 0

The numbers next to the point are the values of a_{mn}. You see a_{nn} = 0 for all n, a₂₁ = a, a₁₂ = b, a_{mn} = c for (m, n) on the line y = 1 + x whenever m > 1, and a_{mn} = −c for all (m, n) on the line y = x − 1 whenever m > 2.
Then ∑_{m=1}^∞ a_{mn} = a if n = 1, ∑_{m=1}^∞ a_{mn} = b − c if n = 2, and if n > 2, ∑_{m=1}^∞ a_{mn} = 0. Therefore,

∑_{n=1}^∞ ∑_{m=1}^∞ a_{mn} = a + b − c.

Next observe that ∑_{n=1}^∞ a_{mn} = b if m = 1, ∑_{n=1}^∞ a_{mn} = a + c if m = 2, and ∑_{n=1}^∞ a_{mn} = 0 if m > 2. Therefore,

∑_{m=1}^∞ ∑_{n=1}^∞ a_{mn} = b + a + c

and so the two sums are different. Moreover, you can see that by assigning different values of a, b, and c, you can get an example for any two different numbers desired.
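The two iterated sums can be checked numerically, since for each fixed index the inner series has only finitely many nonzero terms and can therefore be computed exactly. The Python sketch below (an added illustration, with sample values of a, b, c chosen here) reproduces a + b − c for one order of summation and a + b + c for the other.

```python
# Entries of the array in Example 2.3.1 (a_{mn} with m, n >= 1).
def entry(m, n, a, b, c):
    if (m, n) == (2, 1):
        return a
    if (m, n) == (1, 2):
        return b
    if m > 1 and n == m + 1:      # the line y = 1 + x
        return c
    if m > 2 and n == m - 1:      # the line y = x - 1
        return -c
    return 0.0

a, b, c, N = 1.0, 2.0, 5.0, 100

# Sum over m first (inner), then over n (outer).  For fixed n the inner series
# has only finitely many nonzero terms, so summing m up to n + 2 is exact.
sum_m_then_n = sum(sum(entry(m, n, a, b, c) for m in range(1, n + 3))
                   for n in range(1, N + 1))

# Sum over n first (inner), then over m (outer).
sum_n_then_m = sum(sum(entry(m, n, a, b, c) for n in range(1, m + 3))
                   for m in range(1, N + 1))

print(sum_m_then_n, a + b - c)   # -2.0  -2.0
print(sum_n_then_m, a + b + c)   #  8.0   8.0
```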
It turns out that if aij ≥ 0 for all i, j, then you can always interchange the order
of summation. This is shown next and is based on the following lemma. First, some
notation should be discussed.

Definition 2.3.2 Let f (a, b) ∈ [−∞, ∞] for a ∈ A and b ∈ B where A, B


are sets which means that f (a, b) is either a number, ∞, or −∞. The symbol, +∞
is interpreted as a point out at the end of the number line which is larger than every
real number. Of course there is no such number. That is why it is called ∞. The
symbol, −∞ is interpreted similarly. Then supa∈A f (a, b) means sup (Sb ) where Sb ≡
{f (a, b) : a ∈ A} .

Unlike limits, you can take the sup in different orders.


Lemma 2.3.3 Let f(a, b) ∈ [−∞, ∞] for a ∈ A and b ∈ B where A, B are sets. Then

sup_{a∈A} sup_{b∈B} f(a, b) = sup_{b∈B} sup_{a∈A} f(a, b).

Proof: Note that for all a, b, f(a, b) ≤ sup_{b∈B} sup_{a∈A} f(a, b) and therefore, for all a, sup_{b∈B} f(a, b) ≤ sup_{b∈B} sup_{a∈A} f(a, b). Therefore,

sup_{a∈A} sup_{b∈B} f(a, b) ≤ sup_{b∈B} sup_{a∈A} f(a, b).

Repeat the same argument interchanging a and b, to get the conclusion of the lemma.

Theorem 2.3.4 Let a_{ij} ≥ 0. Then

∑_{i=1}^∞ ∑_{j=1}^∞ a_{ij} = ∑_{j=1}^∞ ∑_{i=1}^∞ a_{ij}.

Proof: First note there is no trouble in defining these sums because the a_{ij} are all nonnegative. If a sum diverges, it only diverges to ∞ and so ∞ is the value of the sum. Next note that

∑_{j=r}^∞ ∑_{i=r}^∞ a_{ij} ≥ sup_n ∑_{j=r}^∞ ∑_{i=r}^n a_{ij}

because for all j,

∑_{i=r}^∞ a_{ij} ≥ ∑_{i=r}^n a_{ij}.

Therefore,

∑_{j=r}^∞ ∑_{i=r}^∞ a_{ij} ≥ sup_n ∑_{j=r}^∞ ∑_{i=r}^n a_{ij} = sup_n lim_{m→∞} ∑_{j=r}^m ∑_{i=r}^n a_{ij}
= sup_n lim_{m→∞} ∑_{i=r}^n ∑_{j=r}^m a_{ij} = sup_n ∑_{i=r}^n ∑_{j=r}^∞ a_{ij}
= lim_{n→∞} ∑_{i=r}^n ∑_{j=r}^∞ a_{ij} = ∑_{i=r}^∞ ∑_{j=r}^∞ a_{ij}.

Interchanging the i and j in the above argument proves the theorem.


Chapter 3

Basic Linear Algebra

All the topics of one variable calculus generalize to calculus of several variables, in which the functions can have values in m dimensional space and depend on more than one variable.
The notation, Cn refers to the collection of ordered lists of n complex numbers. Since
every real number is also a complex number, this simply generalizes the usual notion
of Rn , the collection of all ordered lists of n real numbers. In order to avoid worrying
about whether it is real or complex numbers which are being referred to, the symbol F
will be used. If it is not clear, always pick C.

Definition 3.0.5 Define

Fn ≡ {(x1 , · · · , xn ) : xj ∈ F for j = 1, · · · , n} .

(x1 , · · · , xn ) = (y1 , · · · , yn ) if and only if for all j = 1, · · · , n, xj = yj . When

(x1 , · · · , xn ) ∈ Fn ,

it is conventional to denote (x1 , · · · , xn ) by the single bold face letter, x. The numbers,
xj are called the coordinates. The set

{(0, · · · , 0, t, 0, · · · , 0) : t ∈ F}

for t in the ith slot is called the ith coordinate axis. The point 0 ≡ (0, · · · , 0) is called
the origin.

Thus (1, 2, 4i) ∈ F3 and (2, 1, 4i) ∈ F3 but (1, 2, 4i) ̸= (2, 1, 4i) because, even though
the same numbers are involved, they don’t match up. In particular, the first entries are
not equal.
The geometric significance of Rn for n ≤ 3 has been encountered already in calculus
or in precalculus. Here is a short review. First consider the case when n = 1. Then
from the definition, R1 = R. Recall that R is identified with the points of a line. Look
at the number line again. Observe that this amounts to identifying a point on this line
with a real number. In other words a real number determines where you are on this line.
Now suppose n = 2 and consider two lines which intersect each other at right angles as
shown in the following picture.


[Figure: two perpendicular coordinate axes, with the point (2, 6) plotted 2 units to the right and 6 up, and the point (−8, 3) plotted 8 units to the left and 3 up.]

Notice how you can identify a point shown in the plane with the ordered pair, (2, 6) .
You go to the right a distance of 2 and then up a distance of 6. Similarly, you can identify
another point in the plane with the ordered pair (−8, 3) . Go to the left a distance of 8
and then up a distance of 3. The reason you go to the left is that there is a − sign on the
eight. From this reasoning, every ordered pair determines a unique point in the plane.
Conversely, taking a point in the plane, you could draw two lines through the point,
one vertical and the other horizontal and determine unique points, x1 on the horizontal
line in the above picture and x2 on the vertical line in the above picture, such that
the point of interest is identified with the ordered pair, (x1 , x2 ) . In short, points in the
plane can be identified with ordered pairs similar to the way that points on the real
line are identified with real numbers. Now suppose n = 3. As just explained, the first
two coordinates determine a point in a plane. Letting the third component determine
how far up or down you go, depending on whether this number is positive or negative,
this determines a point in space. Thus, (1, 4, −5) would mean to determine the point
in the plane that goes with (1, 4) and then to go below this plane a distance of 5 to
obtain a unique point in space. You see that the ordered triples correspond to points in
space just as the ordered pairs correspond to points in a plane and single real numbers
correspond to points on a line.
You can’t stop here and say that you are only interested in n ≤ 3. What if you were
interested in the motion of two objects? You would need three coordinates to describe
where the first object is and you would need another three coordinates to describe
where the other object is located. Therefore, you would need to be considering R6 . If
the two objects moved around, you would need a time coordinate as well. As another
example, consider a hot object which is cooling and suppose you want the temperature
of this object. How many coordinates would be needed? You would need one for the
temperature, three for the position of the point in the object and one more for the
time. Thus you would need to be considering R5 . Many other examples can be given.
Sometimes n is very large. This is often the case in applications to business when they
are trying to maximize profit subject to constraints. It also occurs in numerical analysis
when people try to solve hard problems on a computer.
There are other ways to identify points in space with three numbers but the one
presented is the most basic. In this case, the coordinates are known as Cartesian
coordinates after Descartes¹ who invented this idea in the first half of the seventeenth
century. I will often not bother to draw a distinction between the point in n dimensional
space and its Cartesian coordinates.
The geometric significance of Cn for n > 1 is not available because each copy of C
corresponds to the plane or R2 .

¹ René Descartes (1596-1650) is often credited with inventing analytic geometry although it seems

the ideas were actually known much earlier. He was interested in many different subjects, physiology,
chemistry, and physics being some of them. He also wrote a large book in which he tried to explain
the book of Genesis scientifically. Descartes ended up dying in Sweden.

3.1 Algebra in Fn , Vector Spaces


There are two algebraic operations done with elements of Fn . One is addition and the
other is multiplication by numbers, called scalars. In the case of Cn the scalars are
complex numbers while in the case of Rn the only allowed scalars are real numbers.
Thus, the scalars always come from F in either case.

Definition 3.1.1 If x ∈ Fn and a ∈ F, also called a scalar, then ax ∈ Fn is defined by

ax = a (x₁, · · · , xₙ) ≡ (ax₁, · · · , axₙ).    (3.1)

This is known as scalar multiplication. If x, y ∈ Fn then x + y ∈ Fn and is defined by

x + y = (x₁, · · · , xₙ) + (y₁, · · · , yₙ) ≡ (x₁ + y₁, · · · , xₙ + yₙ).    (3.2)

The points in Fn are also referred to as vectors.

With this definition, the algebraic properties satisfy the conclusions of the following
theorem. These conclusions are called the vector space axioms. Any time you have a
set and a field of scalars satisfying the axioms of the following theorem, it is called a
vector space.

Theorem 3.1.2 For v, w ∈ Fn and α, β scalars, (real numbers), the following


hold.
v + w = w + v, (3.3)
the commutative law of addition,

(v + w) + z = v+ (w + z) , (3.4)

the associative law for addition,


v + 0 = v, (3.5)
the existence of an additive identity,

v+ (−v) = 0, (3.6)

the existence of an additive inverse, Also

α (v + w) = αv+αw, (3.7)

(α + β) v =αv+βv, (3.8)
α (βv) = αβ (v) , (3.9)
1v = v. (3.10)
In the above 0 = (0, · · · , 0).

You should verify these properties all hold. For example, consider 3.7

α (v + w) = α (v1 + w1 , · · · , vn + wn )
= (α (v1 + w1 ) , · · · , α (vn + wn ))
= (αv1 + αw1 , · · · , αvn + αwn )
= (αv1 , · · · , αvn ) + (αw1 , · · · , αwn )
= αv + αw.

As usual subtraction is defined as x − y ≡ x+ (−y) .
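The componentwise definitions translate directly into code. The short Python sketch below (an added illustration, using tuples of complex numbers for vectors in Fn with F = C) implements 3.1 and 3.2 and spot-checks the distributive law 3.7 verified above.

```python
# Componentwise operations on F^n, here with F = C and n = 3.
def scalar_mul(a, x):                 # a(x1, ..., xn) = (a x1, ..., a xn)
    return tuple(a * xj for xj in x)

def add(x, y):                        # (x + y)_j = x_j + y_j
    return tuple(xj + yj for xj, yj in zip(x, y))

v = (1, 2, 4j)
w = (2, 1, -1 + 1j)
alpha = 3 - 2j

# Spot-check the distributive law alpha(v + w) = alpha v + alpha w  (3.7).
print(scalar_mul(alpha, add(v, w)) ==
      add(scalar_mul(alpha, v), scalar_mul(alpha, w)))   # True
```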



3.2 Subspaces Spans And Bases


The concept of linear combination is fundamental in all of linear algebra.

Definition 3.2.1 Let {x₁, · · · , xₚ} be vectors in a vector space Y having the field of scalars F. A linear combination is any expression of the form

∑_{i=1}^p cᵢxᵢ

where the cᵢ are scalars. The set of all linear combinations of these vectors is called span (x₁, · · · , xₚ). If V ⊆ Y, then V is called a subspace if whenever α, β are scalars and u and v are vectors of V, it follows αu + βv ∈ V. That is, it is “closed under the algebraic operations of vector addition and scalar multiplication” and is therefore a vector space. A linear combination of vectors is said to be trivial if all the scalars in the linear combination equal zero. A set of vectors is said to be linearly independent if the only linear combination of these vectors which equals the zero vector is the trivial linear combination. Thus {x₁, · · · , xₚ} is called linearly independent if whenever

∑_{k=1}^p cₖxₖ = 0

it follows that all the scalars cₖ equal zero. A set of vectors {x₁, · · · , xₚ} is called linearly dependent if it is not linearly independent. Thus the set of vectors is linearly dependent if there exist scalars cᵢ, i = 1, · · · , p, not all zero, such that ∑_{k=1}^p cₖxₖ = 0.

Lemma 3.2.2 A set of vectors {x1 , · · · , xp } is linearly independent if and only if


none of the vectors can be obtained as a linear combination of the others.

Proof: Suppose first that {x₁, · · · , xₚ} is linearly independent. If

xₖ = ∑_{j≠k} cⱼxⱼ,

then

0 = 1xₖ + ∑_{j≠k} (−cⱼ) xⱼ,

a nontrivial linear combination, contrary to assumption. This shows that if the set is linearly independent, then none of the vectors is a linear combination of the others.
Now suppose no vector is a linear combination of the others. Is {x₁, · · · , xₚ} linearly independent? If it is not, there exist scalars cᵢ, not all zero, such that

∑_{i=1}^p cᵢxᵢ = 0.

Say cₖ ≠ 0. Then you can solve for xₖ as

xₖ = ∑_{j≠k} (−cⱼ/cₖ) xⱼ,

contrary to assumption. This proves the lemma. 
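In Fn this criterion can be tested numerically: a finite list of vectors is linearly independent exactly when the matrix having those vectors as columns has rank equal to the number of vectors. The Python sketch below (an added illustration using NumPy, not part of the original text) applies this test to two small examples.

```python
import numpy as np

def is_independent(vectors):
    """Return True if the given vectors (sequences of equal length) are
    linearly independent, i.e. only the trivial combination gives zero."""
    A = np.column_stack(vectors)              # vectors as columns of a matrix
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_independent([(1, 0, 0), (0, 1, 0), (1, 1, 1)]))   # True
print(is_independent([(1, 2, 3), (2, 4, 6), (0, 1, 0)]))   # False: (2,4,6) = 2*(1,2,3)
```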


The following is called the exchange theorem.

Theorem 3.2.3 Let {x1 , · · · , xr } be a linearly independent set of vectors such


that each xi is in the span {y1 , · · · , ys } . Then r ≤ s.

Proof: Define span {y1 , · · · , ys } ≡ V, it follows there exist scalars, c1 , · · · , cs such


that
∑s
x1 = ci yi . (3.11)
i=1

Not all of these scalars can equal zero because if this were the case, it would follow
that x1∑= 0 and so {x1 , · · · , xr } would not be linearly independent. Indeed, if x1 = 0,
r
1x1 + i=2 0xi = x1 = 0 and so there would exist a nontrivial linear combination of
the vectors {x1 , · · · , xr } which equals zero.
Say ck ̸= 0. Then solve (3.11) for yk and obtain
 
s-1 vectors here
z }| {
yk ∈ span x1 , y1 , · · · , yk−1 , yk+1 , · · · , ys  .

Define {z1 , · · · , zs−1 } by

{z1 , · · · , zs−1 } ≡ {y1 , · · · , yk−1 , yk+1 , · · · , ys }

Therefore, span {x1 , z1 , · · · , zs−1 } = V because if v ∈ V, there exist constants c1 , · · · , cs


such that

s−1
v= ci zi + cs yk .
i=1

Now replace the yk in the above with a linear combination of the vectors, {x1 , z1 , · · · , zs−1 }
to obtain v ∈ span {x1 , z1 , · · · , zs−1 } . The vector yk , in the list {y1 , · · · , ys } , has now
been replaced with the vector x1 and the resulting modified list of vectors has the same
span as the original list of vectors, {y1 , · · · , ys } .
Now suppose that r > s and that span {x1 , · · · , xl , z1 , · · · , zp } = V where the
vectors, z1 , · · · , zp are each taken from the set, {y1 , · · · , ys } and l + p = s. This
has now been done for l = 1 above. Then since r > s, it follows that l ≤ s < r
and so l + 1 ≤ r. Therefore, xl+1 is a vector not in the list, {x1 , · · · , xl } and since
span {x1 , · · · , xl , z1 , · · · , zp } = V there exist scalars, ci and dj such that


$$x_{l+1} = \sum_{i=1}^{l} c_i x_i + \sum_{j=1}^{p} d_j z_j. \tag{3.12}$$

Now not all the dj can equal zero because if this were so, it would follow that {x1 , · · · , xr }
would be a linearly dependent set because one of the vectors would equal a linear com-
bination of the others. Therefore, (3.12) can be solved for one of the zi , say zk , in terms
of xl+1 and the other zi and, just as in the above argument, replace that zk with xl+1
to obtain
$$\operatorname{span}\Bigl( x_1 , \cdots , x_l , x_{l+1} , \overbrace{z_1 , \cdots , z_{k-1} , z_{k+1} , \cdots , z_p}^{p-1 \text{ vectors here}} \Bigr) = V.$$

Continue this way, eventually obtaining

span (x1 , · · · , xs ) = V.

But then xr ∈ span {x1 , · · · , xs } contrary to the assumption that {x1 , · · · , xr } is linearly
independent. Therefore, r ≤ s as claimed.
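To see a single exchange step concretely, suppose y1 = (1, 0, 0), y2 = (0, 1, 0) and
x1 = (1, 1, 0) in R3 . Then x1 = 1 · y1 + 1 · y2 and, solving for y1 , one obtains y1 = x1 − y2 ,
so that span (x1 , y2 ) = span (y1 , y2 ); the vector y1 has been exchanged for x1 without
changing the span.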
Here is another proof in case you didn’t like the above proof.