
# SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

1. Integration
1.1 Integration by parts

You are familiar with simple methods for integration such as substitution or looking up anti-derivatives in a table.
But there are many instances where you will need another approach. In this section we will look at a very powerful technique
known as integration by parts. It is based upon the product rule for derivatives, which for the functions f(x) and g(x)
you should recall to be
$$\frac{d(fg)}{dx} = g\frac{df}{dx} + f\frac{dg}{dx}$$
Now integrate both sides,
$$\int \frac{d(fg)}{dx}\,dx = \int g\frac{df}{dx}\,dx + \int f\frac{dg}{dx}\,dx$$
But integration is the inverse of differentiation, thus we have
$$fg = \int g\frac{df}{dx}\,dx + \int f\frac{dg}{dx}\,dx$$
which we can re-arrange to
$$\int f\frac{dg}{dx}\,dx = fg - \int g\frac{df}{dx}\,dx$$
Thus we have converted one integral into another. The hope is that the second integral is easier than the first. This will
depend on the choices we make for f and dg/dx.

Example 1.1
$$I = \int x \exp(x)\,dx$$

We have to split the integrand x exp(x) into two pieces, f and dg/dx. Choose
$$f(x) = x \qquad \frac{df}{dx} = 1$$
$$\frac{dg}{dx} = \exp(x) \qquad g(x) = \exp(x)$$
Then
$$I = \int x \exp(x)\,dx = fg - \int g\frac{df}{dx}\,dx$$
$$= x\exp(x) - \int 1\cdot\exp(x)\,dx$$
$$= x\exp(x) - \exp(x) + C$$
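The result is easy to check numerically. Here is a short Python sketch (not part of the original notes; the helper names are ours) comparing a midpoint-rule estimate of the definite integral over [0, 1] against the anti-derivative found above:

```python
import math

def F(x):
    # anti-derivative found by integration by parts: x*e^x - e^x
    return x * math.exp(x) - math.exp(x)

def midpoint_integral(f, a, b, n=10000):
    # simple midpoint-rule estimate of the definite integral of f on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = midpoint_integral(lambda x: x * math.exp(x), 0.0, 1.0)
exact = F(1.0) - F(0.0)   # equals 1 exactly
print(numeric, exact)
```

The two values agree to within the accuracy of the midpoint rule.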

Example 1.2
Use by-parts integration to find the indefinite integral
$$I = \int x\cos(x)\,dx$$

Choose
$$f(x) = x \qquad \frac{df}{dx} = 1$$
$$\frac{dg}{dx} = \cos(x) \qquad g(x) = \sin(x)$$
and thus
$$I = \int x\cos(x)\,dx = x\sin(x) - \int 1\cdot\sin(x)\,dx$$
$$= x\sin(x) + \cos(x) + C$$
Example 1.3
Use by-parts integration to find the indefinite integral
$$I = \int x\log(x)\,dx$$

Choose
$$f(x) = x \qquad \frac{df}{dx} = 1$$
$$\frac{dg}{dx} = \log(x) \qquad g(x) = \,?$$
We don't know immediately the anti-derivative for log(x), so we try another split. This time we choose
$$f(x) = \log(x) \qquad \frac{df}{dx} = \frac{1}{x}$$
$$\frac{dg}{dx} = x \qquad g(x) = \tfrac{1}{2}x^2$$
and this leads to
$$I = \int x\log(x)\,dx = \frac{1}{2}x^2\log(x) - \int \frac{1}{2}x^2\,\frac{1}{x}\,dx$$
$$= \frac{1}{2}x^2\log(x) - \frac{1}{4}x^2 + C$$
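Again the answer is easy to verify numerically (a Python sketch of ours, not part of the notes): compare a midpoint-rule value of the integral over [1, 2] with the anti-derivative just found.

```python
import math

def F(x):
    # anti-derivative from the by-parts calculation: (1/2)x^2 log(x) - (1/4)x^2
    return 0.5 * x**2 * math.log(x) - 0.25 * x**2

def midpoint_integral(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = midpoint_integral(lambda x: x * math.log(x), 1.0, 2.0)
exact = F(2.0) - F(1.0)   # 2*log(2) - 3/4
print(numeric, exact)
```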

Example 1.4
Use by-parts integration to find the indefinite integral
$$I = \int \frac{\log(x)}{x}\,dx$$

2. Hyperbolic functions
2.1 Hyperbolic functions

Do you remember the time when you first encountered the sine and cosine functions? That would have been in early
secondary school when you were studying trigonometry. These functions proved very useful when faced with problems to
do with triangles. You may have been surprised when (many years later) you found that those same functions also proved
useful when solving some integration problems. Here is a classic example.

## Example 2.1 Integration requiring trigonometric functions

Evaluate the following anti-derivative
$$I = \int \frac{1}{\sqrt{1-x^2}}\,dx$$
We will use a substitution, x(u) = sin(u), then dx/du = cos(u), and then it follows
$$I = \int \frac{1}{\sqrt{1-x^2}}\,dx = \int \frac{1}{\sqrt{1-\sin^2(u)}}\,\frac{dx}{du}\,du = \int \frac{1}{\cos(u)}\cos(u)\,du = \int du = u + C$$
for an arbitrary constant. Since x(u) = sin(u) then u(x) = sin⁻¹(x) and thus
$$I = \int \frac{1}{\sqrt{1-x^2}}\,dx = \sin^{-1}(x) + C$$
for an arbitrary integration constant C.

This example was very simple and contained nothing new. But if we had been given the following integral
$$I = \int \frac{1}{\sqrt{1+x^2}}\,dx$$
and continued to use a substitution based on simple sine and cosine functions then we would find the game to be rather
drawn out. As you can easily verify, the correct substitution is x(u) = tan(u) and the integration leads to
$$\int \frac{1}{\sqrt{1+x^2}}\,dx = \log_e\left(x + \sqrt{1+x^2}\right) + C$$
for an arbitrary integration constant C.

Example 2.2
Verify the above integration.

This situation is not all that satisfactory as it involves a series of tedious substitutions and takes far more work than the
first example. Can we do a better job? Yes, but it involves a trick where we define new functions, known as hyperbolic
functions, to do exactly that job.
For the moment we will leave behind the issue of integration and focus on this new class of functions. Later we will return
to our integrals to show how easy the job can be.

## 2.1.1 Hyperbolic functions

The hyperbolic functions are rather easy to define. It all begins with this pair of functions: sinh(u), known as hyperbolic
sine and pronounced "shine", and cosh(u), known as hyperbolic cosine and pronounced "cosh". They are defined by
$$\sinh(u) = \frac{1}{2}\left(e^u - e^{-u}\right) \qquad \text{and} \qquad \cosh(u) = \frac{1}{2}\left(e^u + e^{-u}\right) \qquad \text{for } |u| < \infty$$
These functions bear names similar to the sine and cosine functions for the simple reason that they share properties similar to
those of sin(θ) and cos(θ) (as we will soon see).

The above definitions for sinh(u) and cosh(u) are really all you need to know; everything else about hyperbolic functions
follows from these two definitions. Of course, it does not hurt to commit to memory some of the equations we are about
to present.

Here are a few elementary properties of sinh(u) and cosh(u). You can easily verify that the derivatives are
$$\frac{d}{du}\cosh(u) = \sinh(u)$$
$$\frac{d}{du}\sinh(u) = \cosh(u).$$

Here is a more detailed list of properties (which of course you will verify, by using the above definitions).
Properties of Hyperbolic functions. Pt.1
$$\cosh^2(u) - \sinh^2(u) = 1$$
$$\cosh(u+v) = \cosh(u)\cosh(v) + \sinh(u)\sinh(v)$$
$$\sinh(u+v) = \sinh(u)\cosh(v) + \sinh(v)\cosh(u)$$
$$2\cosh^2(u) = 1 + \cosh(2u)$$
$$2\sinh^2(u) = -1 + \cosh(2u)$$
$$\frac{d}{du}\cosh(u) = \sinh(u)$$
$$\frac{d}{du}\sinh(u) = \cosh(u)$$
[Figure: graphs of cosh(x) and sinh(x) for −3 ≤ x ≤ 3.]
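Since everything follows from the exponential definitions, the identities above can be spot-checked numerically with Python's math module (this check is ours, not part of the notes):

```python
import math

u, v = 0.7, -1.3

# fundamental identity: cosh^2(u) - sinh^2(u) = 1
id1 = math.cosh(u)**2 - math.sinh(u)**2
# addition formula for cosh: should be zero
add = math.cosh(u + v) - (math.cosh(u)*math.cosh(v) + math.sinh(u)*math.sinh(v))
# double-angle form: 2 sinh^2(u) = -1 + cosh(2u), so this should be zero
dbl = 2*math.sinh(u)**2 - (math.cosh(2*u) - 1)
# derivative of sinh by a central difference: should match cosh(u)
h = 1e-6
deriv_err = (math.sinh(u + h) - math.sinh(u - h)) / (2*h) - math.cosh(u)

print(id1, add, dbl, deriv_err)
```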
This looks very pretty and reminds us (well it should remind us) of remarkably similar properties for the sine and cosine
functions. Now recall the promise we gave earlier, that these hyperbolic functions would make our life with certain integrals
much easier. So let us return to the integral from earlier in this chapter. Using the same layout and similar sentences here
is how we would complete the integral using our new found friends.
Example 2.3 Integration requiring hyperbolic functions
Evaluate the following anti-derivative
$$I = \int \frac{1}{\sqrt{1+x^2}}\,dx$$
We will use a substitution, x(u) = sinh(u), then dx/du = cosh(u), and then it follows
$$I = \int \frac{1}{\sqrt{1+x^2}}\,dx = \int \frac{1}{\sqrt{1+\sinh^2(u)}}\,\frac{dx}{du}\,du = \int \frac{1}{\cosh(u)}\cosh(u)\,du = \int du = u + C$$
for an arbitrary constant. Since x(u) = sinh(u) then u(x) = sinh⁻¹(x), and thus
$$I = \int \frac{1}{\sqrt{1+x^2}}\,dx = \sinh^{-1}(x) + C$$
for an arbitrary integration constant C.
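A quick numeric check of this example (our own Python sketch): a midpoint-rule estimate of ∫₀¹ dx/√(1+x²) should agree with sinh⁻¹(1) = log(1 + √2).

```python
import math

def midpoint_integral(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = midpoint_integral(lambda x: 1.0 / math.sqrt(1.0 + x*x), 0.0, 1.0)
exact = math.asinh(1.0)                          # inverse hyperbolic sine
closed_form = math.log(1.0 + math.sqrt(2.0))     # log(x + sqrt(1+x^2)) at x = 1
print(numeric, exact, closed_form)
```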

## 2.1.2 More hyperbolic functions

You might be wondering if there are hyperbolic equivalents to the familiar trigonometric functions: tangent, secant, cosecant
and cotangent. Good question, and yes, indeed there are equivalents: tanh(u), cotanh(u), sech(u) and cosech(u).
The following table provides some basic facts (which again you should verify).
Properties of Hyperbolic functions. Pt.2
$$\tanh(u) = \frac{\sinh(u)}{\cosh(u)}$$
$$\mathrm{cotanh}(u) = \frac{\cosh(u)}{\sinh(u)}$$
$$\mathrm{sech}(u) = \frac{1}{\cosh(u)}$$
$$\mathrm{cosech}(u) = \frac{1}{\sinh(u)}$$
$$\mathrm{sech}^2(u) + \tanh^2(u) = 1$$
$$\frac{d}{du}\tanh(u) = \mathrm{sech}^2(u)$$
$$\frac{d}{du}\mathrm{cotanh}(u) = -\mathrm{cosech}^2(u)$$

3. Improper integrals
3.1 Improper Integrals

You may think integration and definite integrals are a simple matter of cranking-the-handle (the standard routine: anti-
derivative, evaluate at the limits, job done). But there are surprises in store; for example, look closely at the following
incorrect calculation
$$I = \int_{-1}^{1} \frac{dx}{x^2} = \left[-\frac{1}{x}\right]_{-1}^{1} = -2$$
This must be wrong. Why? Look at the integrand: it's a positive function, so its integral should also be positive.
And yet we got an answer of −2.

The cause of this problem is the nasty infinity lurking in the integral. Notice that the integrand is singular at x = 0 and
the domain of the integration includes this singular point. Integrals such as these require special care and we will see
shortly how we can properly handle such chaps.

When a definite integral contains an infinity, either in the integrand or on the limits, we say that we have an improper
integral. All other integrals are proper integrals.

Improper integrals must be treated with care if we are to avoid the nonsense of the previous example.

## Example 3.1 Improper and proper integrals

Which of the following are improper integrals? Give reasons for your choice.
$$\text{(a)}\; I = \int_0^1 \frac{dx}{\sqrt{x}} \qquad \text{(b)}\; I = \int_{-1}^2 \frac{dx}{x+2} \qquad \text{(c)}\; I = \int_0^\infty \frac{dx}{1+x^2}$$
$$\text{(d)}\; I = \int_{-1}^1 \cos(x^2)\,dx \qquad \text{(e)}\; I = \int_{-1}^1 \frac{dx}{x^2} \qquad \text{(f)}\; I = \int_0^{\pi/2} \tan(x)\,dx$$
3.1.1 A standard strategy

The basic idea is to construct a new proper integral in which the singular points have been cut out from the original
improper integral. The new proper integral is easy to do (since there are no infinities we can proceed as usual for a definite
integral). But how does this answer help us say something about our original improper integral? Well, when we build the
new integral, let's call it I(ε), we control the cut-out (of the singular points) by a parameter ε. We choose I(ε) in such a
way that we can recover the original improper integral by taking a suitable limit on ε. Here are some simple examples.

Example 3.2
$$I = \int_0^1 \frac{dx}{\sqrt{x}}$$
$$I(\epsilon) = \int_\epsilon^1 \frac{dx}{\sqrt{x}} \qquad \epsilon > 0$$

Since this is a proper integral for ε > 0 we can evaluate it directly,
$$I(\epsilon) = \int_\epsilon^1 \frac{dx}{\sqrt{x}} = \left[2\sqrt{x}\,\right]_\epsilon^1 = 2 - 2\sqrt{\epsilon}$$

Next we evaluate the limit as ε → 0⁺
$$\lim_{\epsilon\to 0^+} I(\epsilon) = \lim_{\epsilon\to 0^+}\left(2 - 2\sqrt{\epsilon}\right) = 2$$
As this answer is well defined (i.e. finite and independent of the way the limit is approached) we are justified in defining
this to be the value of the improper integral,
$$I = \int_0^1 \frac{dx}{\sqrt{x}} = \lim_{\epsilon\to 0^+} \int_\epsilon^1 \frac{dx}{\sqrt{x}} = 2$$

In this case we say we have a convergent improper integral. Had we not got a finite answer we would say that we had a
divergent improper integral.
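The cut-off construction is easy to watch numerically (a Python sketch of ours): I(ε) = 2 − 2√ε creeps up towards the limiting value 2 as ε shrinks.

```python
import math

def I(eps):
    # proper integral of 1/sqrt(x) over [eps, 1], via the anti-derivative 2*sqrt(x)
    return 2.0 - 2.0 * math.sqrt(eps)

values = [I(10.0**-k) for k in (1, 2, 4, 6, 8)]
print(values)   # approaches 2 from below
```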

Example 3.3
$$I = \int_0^1 \frac{dx}{x^2}$$
$$I(\epsilon) = \int_\epsilon^1 \frac{dx}{x^2} \qquad \epsilon > 0$$

which we can easily evaluate,
$$I(\epsilon) = \frac{1}{\epsilon} - 1$$

and thus
$$\lim_{\epsilon\to 0^+} I(\epsilon) = \infty$$

This is not finite, so this time we say that the improper integral is a divergent improper integral.

Example 3.4
$$I = \int_{-1}^1 \frac{dx}{x^3}$$

This time we have an improper integral because the integrand is singular inside the region of integration. We create our
related proper integral by cutting out the singular point. Thus we define two separate proper integrals,
$$I_1(\epsilon_1) = \int_{-1}^{-\epsilon_1} \frac{dx}{x^3} \qquad \epsilon_1 > 0$$
$$I_2(\epsilon_2) = \int_{\epsilon_2}^{1} \frac{dx}{x^3} \qquad \epsilon_2 > 0$$

If both I₁ and I₂ converge (i.e. have finite values) we say that I also converges, with the value
$$I = \lim_{\epsilon_1\to 0^+} I_1(\epsilon_1) + \lim_{\epsilon_2\to 0^+} I_2(\epsilon_2)$$
But for our case
$$I_1(\epsilon_1) = \frac{1}{2}\left(1 - \frac{1}{\epsilon_1^2}\right) \qquad \lim_{\epsilon_1\to 0^+} I_1(\epsilon_1) = -\infty$$
$$I_2(\epsilon_2) = \frac{1}{2}\left(-1 + \frac{1}{\epsilon_2^2}\right) \qquad \lim_{\epsilon_2\to 0^+} I_2(\epsilon_2) = +\infty$$

Thus neither I₁ nor I₂ converges and thus I is a divergent improper integral.

This may seem easy (it is) but it does require some care as the next example shows.

Example 3.5
Suppose we chose I₁ and I₂ as before but we linked the two cut-off parameters so that
$$\frac{1}{\epsilon_2^2} = \frac{1}{\epsilon_1^2} + 4$$
Then we would find that
$$I_1(\epsilon_1) + I_2(\epsilon_2) = \frac{1}{2}\left(\frac{1}{\epsilon_2^2} - \frac{1}{\epsilon_1^2}\right) = 2$$
$$\lim_{\epsilon_1\to 0^+}\left(I_1 + I_2\right) = 2$$

Yet had we instead set ε₁ = ε₂ we would have found that
$$\lim_{\epsilon\to 0^+}\left(I_1 + I_2\right) = 0$$

How can this be? The answer is that in computing I₁ + I₂ we are eventually trying to make sense of ∞ − ∞. Depending
on how we approach the limit we can get any answer we like for ∞ − ∞.

Consequently, when we say that an integral is divergent we mean that either its value is infinity or that it has no single
well defined value.

Example 3.6
Use your new-found knowledge to finally make sense of the following: is it a convergent improper integral?
$$I = \int_{-1}^1 \frac{dx}{x^2}$$
3.2 Comparison Test for Improper Integrals

Here is an apparently simple integral and yet we will run into a wee problem.

Example 3.7
$$I = \int_2^\infty e^{-x^2}\,dx$$
This time the infinity is in the upper limit, so we cut it off at a finite value λ,
$$I(\lambda) = \int_2^\lambda e^{-x^2}\,dx$$
and, provided that the limit exists, we would write
$$I = \lim_{\lambda\to\infty} I(\lambda)$$

The trouble is we do not have a simple anti-derivative for $e^{-x^2}$. The trick here is to look at a simpler (improper) integral
for which we can find a simple anti-derivative.

Note that
$$0 < e^{-x^2} < e^{-x} \qquad \text{for } 2 < x < \infty$$
Now integrate,
$$0 < \int_2^\lambda e^{-x^2}\,dx < \int_2^\lambda e^{-x}\,dx$$
and the last integral on the right is easy to do (that's one reason why we chose $e^{-x}$),
$$0 < \int_2^\lambda e^{-x^2}\,dx < \int_2^\lambda e^{-x}\,dx = e^{-2} - e^{-\lambda}$$

Our next step is to take the limit as λ → ∞,
$$0 < \lim_{\lambda\to\infty} \int_2^\lambda e^{-x^2}\,dx < \lim_{\lambda\to\infty}\left(e^{-2} - e^{-\lambda}\right) = e^{-2}$$

The limit exists and is finite, so we have our final answer: $I = \int_2^\infty e^{-x^2}\,dx$ is convergent.
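The comparison bound can be confirmed numerically (our Python sketch, not part of the notes): a midpoint-rule value of ∫₂^λ e^(−x²) dx stays below the bound e^(−2), and it agrees with the known closed-form tail (√π/2)·erfc(2) available through math.erfc.

```python
import math

def midpoint_integral(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# lambda = 10 is effectively infinity here: the tail beyond 10 is ~e^(-100)
numeric = midpoint_integral(lambda x: math.exp(-x*x), 2.0, 10.0)
bound = math.exp(-2.0)
exact = 0.5 * math.sqrt(math.pi) * math.erfc(2.0)
print(numeric, bound, exact)
```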

Example 3.8
$$I = \int_0^1 \frac{e^x}{x}\,dx \qquad \text{and} \qquad I(\epsilon) = \int_\epsilon^1 \frac{e^x}{x}\,dx$$

Again we do not have a simple anti-derivative for $e^x/x$, so we study a related integral
$$J = \int_0^1 \frac{1}{x}\,dx \qquad \text{and} \qquad J(\epsilon) = \int_\epsilon^1 \frac{1}{x}\,dx$$
For each integral the appropriate limit is ε → 0⁺.
Now we proceed as follows,
$$0 < \frac{1}{x} < \frac{e^x}{x} \qquad \text{for } 0 < x < 1$$
$$0 < \int_\epsilon^1 \frac{1}{x}\,dx < \int_\epsilon^1 \frac{e^x}{x}\,dx$$
Taking the limit ε → 0⁺ on both sides, and noting that $\int_\epsilon^1 dx/x = -\log\epsilon \to \infty$, we find
$$0 < \infty \le \lim_{\epsilon\to 0^+} I(\epsilon)$$

Thus we conclude that $I = \int_0^1 e^x/x\,dx$ is divergent.

Example 3.9
$$I = \int_0^1 \frac{e^x}{x}\,dx \qquad \text{and} \qquad I(\epsilon) = \int_\epsilon^1 \frac{e^x}{x}\,dx$$

Suppose (mistakenly) we thought that this integral converged. We might set out to prove this by starting with
$$0 < \frac{e^x}{x} < \frac{3}{x} \qquad \text{for } 0 < x < 1$$

then we would leap into the now familiar steps,
$$0 < \int_\epsilon^1 \frac{e^x}{x}\,dx < \int_\epsilon^1 \frac{3}{x}\,dx = 3\left(\log 1 - \log\epsilon\right)$$
$$0 < \lim_{\epsilon\to 0^+} \int_\epsilon^1 \frac{e^x}{x}\,dx < \lim_{\epsilon\to 0^+} 3\left(\log 1 - \log\epsilon\right) = \infty$$
$$0 < \lim_{\epsilon\to 0^+} I(\epsilon) < \infty$$

This last line tells us nothing. Though we set out to prove convergence we actually proved nothing. Thus either we were
wrong in supposing that the integral converged or we made a bad choice for the test function 3/x. We know from the
previous example that in fact this integral is divergent.

## 3.3 The General Strategy

Suppose we have
$$I = \int_0^\infty f(x)\,dx \qquad \text{with } f(x) > 0$$
Then we have two cases to consider.

v Test for convergence. If you can find c(x) such that
  (1) 0 < f(x) < c(x), and
  (2) $\lim_{\lambda\to\infty} \int_0^\lambda c(x)\,dx$ is finite,
then I is convergent.

v Test for divergence. If you can find d(x) such that
  (1) 0 < d(x) < f(x), and
  (2) $\lim_{\lambda\to\infty} \int_0^\lambda d(x)\,dx = \infty$,
then I is divergent.

We generally try to choose the test function (c(x) or d(x)) so that it has a simple anti-derivative.

A strategy similar to the above would apply for integrals like $I = \int_0^1 f(x)\,dx$.

Example 3.10
Re-write the above strategy for the case $I = \int_0^1 f(x)\,dx$.

4. Series.
4.1 Convergence and divergence of series

The main issue with most infinite series is whether or not the series converges. Of secondary importance is what the sum
of that series might be, assuming it to be a convergent series.
Consider the infinite series
$$S = a_0 + a_1 + a_2 + \cdots + a_n + \cdots = \sum_{k=0}^{\infty} a_k$$
and let $S_n = a_0 + a_1 + a_2 + \cdots + a_n$ be the partial sum for S, then

v Convergence. The infinite series converges when $\lim_{n\to\infty} S_n$ exists and is finite.
v Divergence. In all other cases we say that the series diverges.

## 4.1.1 Zero tail?

This is as simple as it gets. If the aₙ do not vanish as n → ∞ then the infinite series diverges. This should be obvious:
if the tail does not diminish to zero then we must be adding on a finite term at the end of the series and hence the series
cannot settle down to one fixed number.
This condition, that aₙ → 0 as n → ∞ for the series to converge, is known as a necessary condition.
Note that this condition tells us nothing about the convergence of the series when aₙ → 0 as n → ∞.

4.1.2 The ratio test

This test first asks you to compute the limit
$$L = \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right|$$
then we have

v Convergence: When L < 1.
v Divergence: When L > 1.
v Indeterminate: When L = 1.

Example 4.1
Use the ratio test to show that the geometric series
$$S = \sum_{n=0}^{\infty} s^n$$
is convergent when 0 < s < 1.

Example 4.2
Use the ratio test to show that the infinite series
$$S = \sum_{n=0}^{\infty} \frac{2^n}{n^2+1}$$
is a divergent series.

Example 4.3
What does the ratio test tell you about the Harmonic series $\sum_{n=1}^{\infty} \frac{1}{n}$?
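You can get a feel for the ratio-test limit L by evaluating |a₍ₙ₊₁₎/aₙ| at one large n (a rough Python sketch of ours; the helper name is made up):

```python
def ratio_at(a, n):
    # |a(n+1)/a(n)| at one large n, as a stand-in for the limit L
    return abs(a(n + 1) / a(n))

# geometric series a_n = s^n with s = 1/2: L = 1/2 < 1, convergent
L_geom = ratio_at(lambda n: 0.5**n, 50)
# a_n = 2^n/(n^2 + 1): L -> 2 > 1, divergent
L_div = ratio_at(lambda n: 2.0**n / (n*n + 1), 500)
# harmonic series a_n = 1/n: L -> 1, the test is indeterminate
L_harm = ratio_at(lambda n: 1.0/n, 10**6)
print(L_geom, L_div, L_harm)
```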

## 4.2 Simple power series

Example 4.4 Motivational Example
Compare e^x and some nth-degree polynomials.

Here are some typical examples of what are known as power series
$$f(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots$$
$$g(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$$
$$h(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + (-1)^n \frac{x^{2n}}{(2n)!} + \cdots$$

Each power series is a function of one variable, in this case x, and so they are also referred to as a power series in x.
There are two obvious questions to ask:

v For which values of x does the series converge?
v If the series converges, what value does it converge to?

The first question is a simple extension of the ideas we developed in the previous lectures, with the one exception that the
convergence of the series may now depend upon the choice of x.

The second question is generally much harder to answer. We will find, in the next lecture, that it is easier to start with
a known function and to then build a power series that has the same values as the function (for values of x for which the
power series converges). By this method (Taylor series) we will see that the three power series above are representations
of the functions f(x) = 1/(1 − x), g(x) = e^x and h(x) = cos(x).
4.3 The general power series

A power series in x around the point x = a is always of the form
$$a_0 + a_1(x-a) + a_2(x-a)^2 + \cdots + a_n(x-a)^n + \cdots = \sum_{n=0}^{\infty} a_n(x-a)^n$$

The point x = a is often said to be the point around which the power series is based.

In a previous lecture it was claimed that
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots$$
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$$
$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + (-1)^n \frac{x^{2n}}{(2n)!} + \cdots$$

5. Taylor series
5.1 Maclaurin series

Suppose we have a function f(x) and suppose we wish to re-express it as a power series. That is, we ask if it is possible to
find the coefficients aₙ such that
$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n + \cdots = \sum_{n=0}^{\infty} a_n x^n$$
is valid (for values of x for which the series converges).

Let's just suppose that such an expansion is possible. How might we compute the aₙ? There is a very neat trick which we
will use. Note that if we evaluate both sides of the equation at x = 0 we get
$$f(0) = a_0$$
That's the first step. Now for a₁ we first differentiate both sides of the equation for f(x), then put x = 0. The result is
$$\frac{df}{dx}(0) = a_1$$
And we follow the same steps for all subsequent aₙ. Here is a summary of the first 4 steps.
$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots \qquad f(0) = a_0$$
$$f'(x) = a_1 + 2a_2 x + 3a_3 x^2 + \cdots \qquad f'(0) = a_1$$
$$f''(x) = 2a_2 + 6a_3 x + \cdots \qquad f''(0) = 2a_2$$
$$f'''(x) = 6a_3 + \cdots \qquad f'''(0) = 6a_3$$
A power series developed in this way is known as a Maclaurin series. Here is a general formula for computing a Maclaurin
series.
Maclaurin Series

Let f(x) be an infinitely differentiable function at x = 0. Then
$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n + \cdots$$
with
$$a_n = \frac{1}{n!}\frac{d^n f}{dx^n}(0)$$
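The formula aₙ = f⁽ⁿ⁾(0)/n! is easy to use in code. For f(x) = eˣ every derivative at 0 equals 1, so aₙ = 1/n!, and summing the series reproduces eˣ (a Python sketch of ours, not part of the notes):

```python
import math

def maclaurin_exp(x, N=25):
    # partial sum of the Maclaurin series of e^x: a_n = f^(n)(0)/n! = 1/n!
    return sum(x**n / math.factorial(n) for n in range(N + 1))

approx = maclaurin_exp(1.0)
print(approx, math.e)
```

With 26 terms the remaining tail is of order 1/26!, far below machine precision.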

Example 5.1
Compute the Maclaurin series for loge (1 + x).
Example 5.2
Compute the Maclaurin series for sin(x).

## 5.2 Taylor series

For a Maclaurin series we are required to compute the function and all its derivatives at x = 0. But many functions are
singular at x = 0, so what should we do in such cases?
Simple: choose a different point around which to build the power series. Recall that the general power series for f(x) is
of the form
$$f(x) = a_0 + a_1(x-c) + a_2(x-c)^2 + \cdots + a_n(x-c)^n + \cdots = \sum_{n=0}^{\infty} a_n(x-c)^n$$

We can compute the aₙ much as we did in the Maclaurin series, with the one exception that now we evaluate the function
and its derivatives at x = c.

Taylor Series
$$f(x) = a_0 + a_1(x-c) + a_2(x-c)^2 + \cdots + a_n(x-c)^n + \cdots$$
with
$$a_n = \frac{1}{n!}\frac{d^n f}{dx^n}(c).$$

Example 5.3
Compute the Taylor series for log_e(x) around x = 1.

Example 5.4
Compute the Taylor series for sin(x) around x = π/2.

5.3 Uniqueness

Is it possible to have two different power series for the one function? That is, is it possible to have
$$f(x) = a_0 + a_1(x-c) + a_2(x-c)^2 + a_3(x-c)^3 + \cdots + a_n(x-c)^n + \cdots$$
and
$$f(x) = b_0 + b_1(x-c) + b_2(x-c)^2 + b_3(x-c)^3 + \cdots + b_n(x-c)^n + \cdots$$
where the aₙ and bₙ are different?

The simple answer is no. The coefficients of a Taylor series are unique.

What is the use of this fact? It means that regardless of how we happen to compute a power series we will always obtain
the same results.

Example 5.5
Use the Taylor series
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$$
to compute a power series for $e^{-x}$. Compare your result with the Taylor series for $e^{-x}$.

Example 5.6
Show how the Taylor series for $\frac{1}{1-x}$ can be used to obtain the Taylor series for $\frac{1}{(1-x)^2}$.

## 5.4 Radius of convergence

If a series converges only for x in the interval |x c| < R, then the radius of convergence is defined to be R.

## Example 5.7 : Finite radius of convergence

Consider the power series
$$f(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots = \sum_{n=0}^{\infty} x^n$$

This is the geometric series with common ratio x. We already know that this series converges when |x| < 1 and thus its
radius of convergence is 1.

Example 5.8
Use the ratio test to confirm the previous claim.

Example 5.9
Does the series converge for x = 1? Does it converge for x = −1? (These are minor dot-the-i-cross-the-t type questions.)
Example 5.10 : Infinite radius of convergence
Find the radius of convergence for the series
$$g(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$

## Example 5.11 : Zero radius of convergence

Find the radius of convergence for the series
$$Q(x) = 1 + x + 2!\,x^2 + 3!\,x^3 + \cdots + n!\,x^n + \cdots = \sum_{n=0}^{\infty} n!\,x^n$$

To compute the radius of convergence R for a power series of the form $\sum_{n=0}^{\infty} a_n(x-c)^n$ you can use the terms in the
power series to define a new sequence $b_n(x) = a_n(x-c)^n$ and then solve the inequality
$$\lim_{n\to\infty}\left|\frac{b_{n+1}}{b_n}\right| < 1$$
This inequality reduces to the form |x − c| < R, which gives the radius of convergence R.

Example 5.12
Find the radius of convergence for the series $f(x) = \sum_{n=1}^{\infty} \frac{(x+1)^n}{n\,2^n}$.

To find the radius of convergence, we let $b_n = \frac{(x+1)^n}{n\,2^n}$ then solve the inequality
$$\lim_{n\to\infty}\left|\frac{(x+1)^{n+1}}{(n+1)\,2^{n+1}} \cdot \frac{n\,2^n}{(x+1)^n}\right| < 1$$
$$\lim_{n\to\infty}\left|(x+1)\,\frac{n}{n+1}\,\frac{1}{2}\right| < 1$$
$$\frac{1}{2}\,|x+1| \lim_{n\to\infty}\frac{n}{n+1} < 1$$
$$\frac{1}{2}\,|x+1| < 1$$
$$|x+1| < 2.$$

Hence, the radius of convergence is 2 and, from the last inequality, we can conclude that the series representation of f(x)
converges on the interval −3 < x < 1.
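The ratio calculation in Example 5.12 can be mirrored numerically (our sketch): at large n the ratio |b₍ₙ₊₁₎/bₙ| is close to |x + 1|/2, which sits below 1 inside −3 < x < 1 and above 1 outside.

```python
def term(x, n):
    # b_n = (x+1)^n / (n * 2^n)
    return (x + 1.0)**n / (n * 2.0**n)

def ratio(x, n=200):
    # |b_(n+1) / b_n| at one large n; the limit is |x+1|/2
    return abs(term(x, n + 1) / term(x, n))

inside = ratio(0.5)    # |x+1|/2 = 0.75 < 1
edgeish = ratio(-2.9)  # |x+1|/2 = 0.95 < 1, still inside the interval
outside = ratio(2.0)   # |x+1|/2 = 1.5 > 1
print(inside, edgeish, outside)
```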

Example 5.13
Find the radius of convergence for the series $f(x) = \sum_{n=1}^{\infty} (-1)^{n+1}\,\frac{x^n}{n}$.

Example 5.14
Find the radius of convergence for the series $f(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n}}\,(x-1)^n$.

## 6. Applications of Taylor Series

6.1 Using Taylor series to calculate limits

In your many and varied journeys in the world of mathematics you may have found statements like
$$\lim_{x\to 0}\left(\frac{\sin(x)}{x}\right) = 1$$
and
$$\lim_{x\to 1}\left(\frac{\log_e(x)}{1-x}\right) = -1$$
and you may have been inclined to wonder how such statements can be proved (you do like to know these things, don't
you?). Our job in this section is to develop a systematic method by which such hairy computations can be done with modest
effort. But first a clear warning: the following computations apply only to the troublesome indeterminate form 0/0.
If the calculation that troubles you is not of the form 0/0 then the following methods will give the wrong answer. Be very
careful!

The functions in both of the above examples are of the form f(x)/g(x). Our road to freedom from the gloomy prison of 0/0 is to
expand f(x) and g(x) as a Taylor series around the point in question. The limits are then easy to apply. Let's see this in
action!

Example 6.1
Consider the limit
$$\lim_{x\to 0}\left(\frac{\sin(x)}{x}\right).$$
Here we have
$$f(x) = \sin(x) = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots$$
$$g(x) = x$$
In this case the Taylor series for g(x) was rather easy, but that isn't always the case. Thus we have
$$\frac{f(x)}{g(x)} = \frac{x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots}{x} = 1 - \frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots$$
and this can be substituted into our expression for the limit,
$$\lim_{x\to 0}\left(\frac{\sin(x)}{x}\right) = \lim_{x\to 0}\left(1 - \frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\right) = 1$$

Example 6.2
For the second limit
$$\lim_{x\to 1}\left(\frac{\log_e(x)}{1-x}\right)$$
we must employ a Taylor series around x = 1. Thus we have
$$f(x) = \log_e(x) = (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \cdots$$
$$g(x) = 1 - x = -(x-1)$$
and so
$$\lim_{x\to 1}\left(\frac{\log_e(x)}{1-x}\right) = \lim_{x\to 1}\left(\frac{(x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \cdots}{-(x-1)}\right)$$
$$= \lim_{x\to 1}\left(-1 + \frac{1}{2}(x-1) - \frac{1}{3}(x-1)^2 + \cdots\right) = -1$$

This is not all that hard, is it? Here is a slightly trickier example.

Example 6.3
Consider the limit
$$\lim_{x\to 0}\left(\frac{1-\cos(x)}{\sin(x^2)}\right).$$
Once again we build the appropriate Taylor series (in this case around x = 0),
$$f(x) = 1 - \cos(x) = \frac{1}{2!}x^2 - \frac{1}{4!}x^4 + \frac{1}{6!}x^6 - \cdots$$
$$g(x) = \sin\left(x^2\right) = x^2 - \frac{1}{3!}x^6 + \frac{1}{5!}x^{10} - \cdots$$
and so
$$\lim_{x\to 0}\left(\frac{1-\cos(x)}{\sin(x^2)}\right) = \lim_{x\to 0}\left(\frac{\frac{1}{2!}x^2 - \frac{1}{4!}x^4 + \frac{1}{6!}x^6 - \cdots}{x^2 - \frac{1}{3!}x^6 + \frac{1}{5!}x^{10} - \cdots}\right)$$
$$= \lim_{x\to 0}\left(\frac{\frac{1}{2!} - \frac{1}{4!}x^2 + \frac{1}{6!}x^4 - \cdots}{1 - \frac{1}{3!}x^4 + \frac{1}{5!}x^8 - \cdots}\right) = \frac{1}{2}.$$

By now the picture should be clear: a suitable pair of Taylor series can make short work of a troublesome 0/0 arising from
expressions of the form f(x)/g(x).
Note: It is possible to adapt our methods to cases such as ∞/∞.

## 6.2 l'Hôpital's rule

Though the above method works very well it can be a bit tedious. You may have noticed that our final answers depended
only on the leading terms in the Taylor series and yet we calculated the whole of the Taylor series. This looks like an
unnecessary extra burden. Can we achieve the same result but with less effort? Most certainly, and here is how we do it.

l'Hôpital's rule for the form 0/0

If $\lim_{x\to a} f(x) = 0$ and $\lim_{x\to a} g(x) = 0$ then
$$\lim_{x\to a}\left(\frac{f(x)}{g(x)}\right) = \lim_{x\to a}\left(\frac{f'(x)}{g'(x)}\right)$$
provided the limit exists. This rule can be applied recursively whenever the right hand side leads to 0/0.

Here is an outline of the proof. We start by writing out the Taylor series for f(x) and g(x) around x = a (while noting
that f(a) = g(a) = 0),
$$f(x) = f'(a)(x-a) + \frac{1}{2!}f''(a)(x-a)^2 + \frac{1}{3!}f'''(a)(x-a)^3 + \cdots$$
$$g(x) = g'(a)(x-a) + \frac{1}{2!}g''(a)(x-a)^2 + \frac{1}{3!}g'''(a)(x-a)^3 + \cdots$$
then
$$\frac{f(x)}{g(x)} = \frac{f'(a)(x-a) + \frac{1}{2!}f''(a)(x-a)^2 + \frac{1}{3!}f'''(a)(x-a)^3 + \cdots}{g'(a)(x-a) + \frac{1}{2!}g''(a)(x-a)^2 + \frac{1}{3!}g'''(a)(x-a)^3 + \cdots}$$
$$= \frac{f'(a) + \frac{1}{2!}f''(a)(x-a) + \frac{1}{3!}f'''(a)(x-a)^2 + \cdots}{g'(a) + \frac{1}{2!}g''(a)(x-a) + \frac{1}{3!}g'''(a)(x-a)^2 + \cdots}$$
If we assume that g'(a) ≠ 0 then it follows that
$$\lim_{x\to a}\left(\frac{f(x)}{g(x)}\right) = \frac{f'(a)}{g'(a)}.$$
This is not exactly l'Hôpital's rule but it gives you an idea of how it was constructed. With a little more care you can
extend this argument to recover the full statement of l'Hôpital's rule (you need to consider cases where g'(a) = 0).
Example 6.4
Use l'Hôpital's rule to verify the limits:
$$\lim_{x\to 0}\left(\frac{\sin(x)}{x}\right) = 1, \qquad \lim_{x\to 1}\left(\frac{\log_e(x)}{1-x}\right) = -1 \qquad \text{and} \qquad \lim_{x\to 0}\left(\frac{1-\cos(x)}{\sin(x^2)}\right) = \frac{1}{2}.$$
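All three limits can be probed numerically by evaluating each quotient just to either side of the limit point, never at the point itself (a Python sketch of ours; the helper name is made up):

```python
import math

def numeric_limit(f, a, h=1e-5):
    # average of f just left and right of x = a; f is never evaluated at a
    return 0.5 * (f(a + h) + f(a - h))

l1 = numeric_limit(lambda x: math.sin(x) / x, 0.0)
l2 = numeric_limit(lambda x: math.log(x) / (1.0 - x), 1.0)
l3 = numeric_limit(lambda x: (1.0 - math.cos(x)) / math.sin(x*x), 0.0)
print(l1, l2, l3)   # close to 1, -1 and 1/2
```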

We mentioned earlier that the tricks of this section could not only help us make sense of expressions like 0/0 but also
expressions like ∞/∞. Without going into the proofs we will just state the variation of l'Hôpital's rule for cases such as
this: just do it! Yes, you can apply l'Hôpital's rule in the same manner as before. Here it is.

l'Hôpital's rule for the form ∞/∞

If $\lim_{x\to a} f(x) = \infty$ and $\lim_{x\to a} g(x) = \infty$ then
$$\lim_{x\to a}\left(\frac{f(x)}{g(x)}\right) = \lim_{x\to a}\left(\frac{f'(x)}{g'(x)}\right)$$
provided the limit exists. This rule can be applied recursively whenever the right hand side leads to ∞/∞.
6.3 Approximating values of functions

We know that many functions can be written as a Taylor series, including, for example
$$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots$$
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots$$
$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + (-1)^n\frac{x^{2n}}{(2n)!} + \cdots$$
$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots + (-1)^n\frac{x^{2n+1}}{(2n+1)!} + \cdots$$

Part of our reason for writing functions in this form was that it would allow us to compute values for the functions (given
a value for x).

But each such series is an infinite series and so it may take a while to compute every term! What do we do? Clearly we
have to use a finite series. Our plan then is to truncate the infinite series at some point hoping that the terms we leave off
contribute very little to the overall sum.

Consider the typical Taylor series around x = 0,
$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n + \cdots = \sum_{k=0}^{\infty} a_k x^k.$$
We can approximate the infinite series by its partial sums. Thus if we define the Taylor polynomial by
$$P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = \sum_{k=0}^{n} a_k x^k$$
we can expect each Pₙ(x) to be an approximation to f(x) (and only for values of x for which the infinite series converges).

The only question that we really need to ask is: How good is the approximation? Here are some examples.

## Example 6.5 : Taylor polynomials for cos(x)

The first four (distinct) Taylor polynomials for cos(x) are
$$P_0(x) = 1$$
$$P_2(x) = 1 - \frac{x^2}{2!}$$
$$P_4(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!}$$
$$P_6(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!}$$

and this is what they look like:

[Figure: cos(x) and its Taylor polynomials P₀(x), P₂(x), P₄(x), P₆(x) on −6 ≤ x ≤ 6.]
Example 6.6
Why were the other Taylor polynomials P1 , P3 , P5 not listed?

Example 6.7
Using the above Taylor polynomials, estimate cos(0.1).

We observe that

v The best approximation is P₆(x).

So the lesson is this: build the Taylor polynomials in the region where you wish to approximate the function.
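Example 6.7 can be done in a few lines of Python (our own sketch): evaluate each Taylor polynomial at x = 0.1 and watch the error against math.cos shrink.

```python
import math

def P(x, n):
    # Taylor polynomial of cos about 0, keeping powers up to x^n
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n // 2 + 1))

x = 0.1
errors = [abs(P(x, n) - math.cos(x)) for n in (0, 2, 4, 6)]
print(errors)   # each error is smaller than the one before
```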

Given that the Taylor series of a function f(x) constructed at x = c is
$$f(x) = \sum_{n=0}^{\infty} \frac{(x-c)^n}{n!}\,\frac{d^n f}{dx^n}(c),$$
you may ask: can we extract a derivative of f at x = c from the Taylor series?

The answer is yes.

Example 6.8
Consider the function $f(x) = x\,e^{-2x}$. We wish to find $f^{(9)}(0)$.
The Taylor series of $e^x$ at c = 0 is
$$e^x = \sum_{n=0}^{\infty} \frac{1}{n!}\,x^n,$$
then we have
$$x\,e^{-2x} = \sum_{n=0}^{\infty} \frac{(-2)^n}{n!}\,x^{n+1} = \sum_{m=1}^{\infty} \frac{(-2)^{m-1}}{(m-1)!}\,x^m.$$

Thus for all m ≥ 1 the coefficient of $(x-0)^m$ is
$$\frac{(-2)^{m-1}}{(m-1)!}.$$
We have that for the Taylor series of a function f(x) at c = 0 the coefficient of $x^m$ is
$$\frac{f^{(m)}(0)}{m!}.$$
Therefore, we have
$$\frac{f^{(m)}(0)}{m!} = \frac{(-2)^{m-1}}{(m-1)!}$$
then
$$f^{(m)}(0) = (-2)^{m-1}\,m.$$
Hence, for m = 9, we have
$$f^{(9)}(0) = 2^8\,(9).$$
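The coefficient bookkeeping in Example 6.8 can be automated (a Python sketch of ours): build the Maclaurin coefficients of e^(−2x), shift them by one power to account for the factor x, and read off f⁽⁹⁾(0).

```python
import math

# Maclaurin coefficients of e^(-2x): c_n = (-2)^n / n!
c = [(-2.0)**n / math.factorial(n) for n in range(12)]

# multiplying by x shifts every coefficient up one power,
# so the coefficient of x^9 in x*e^(-2x) is c_8
coef_x9 = c[8]
f9_at_0 = math.factorial(9) * coef_x9   # f^(9)(0) = 9! * (coefficient of x^9)
print(f9_at_0)   # (-2)^8 * 9 = 2304
```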
Example 6.9
Consider the function $f(x) = \cosh(x^2)$. We wish to find $f^{(42)}(0)$.
The Taylor series of cosh(x) at c = 0 is
$$\cosh(x) = \sum_{n=0}^{\infty} \frac{1}{(2n)!}\,x^{2n},$$
then we have
$$\cosh\left(x^2\right) = \sum_{n=0}^{\infty} \frac{1}{(2n)!}\,x^{4n}.$$

Thus the coefficient of $x^m$ is $\frac{1}{(2n)!}$ when m = 4n for some integer n ≥ 0, and zero otherwise.
We have that for the Taylor series of a function f(x) at c = 0 the coefficient of $x^m$ is
$$\frac{f^{(m)}(0)}{m!}.$$
Since m = 42 is not a multiple of 4, the coefficient of $x^{42}$ is zero, and hence
$$f^{(42)}(0) = 0.$$

7. Vectors in 3-dimensions
7.1 Vector cross product

This is another way to multiply vectors. Start with v = v₁i + v₂j + v₃k and w = w₁i + w₂j + w₃k. Then we define the
cross product v × w by
$$v \times w = (v_2 w_3 - v_3 w_2)\,i + (v_3 w_1 - v_1 w_3)\,j + (v_1 w_2 - v_2 w_1)\,k.$$
From this definition we observe

v v × w is a vector
v v × w = −w × v
v v × v = 0
v (αv) × w = α(v × w)
v (u + v) × w = u × w + v × w
v (v × w) · v = (v × w) · w = 0

Example 7.1
Verify all of the above.

Example 7.2
Given v = i + 2j + 7k and w = 2i + 3j + 5k compute v × w, and its dot product with each of v and w.
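The componentwise definition translates directly into code. A small Python sketch (ours) that also carries out Example 7.2:

```python
def cross(v, w):
    # componentwise definition of the cross product
    v1, v2, v3 = v
    w1, w2, w3 = w
    return (v2*w3 - v3*w2, v3*w1 - v1*w3, v1*w2 - v2*w1)

def dot(v, w):
    return sum(a*b for a, b in zip(v, w))

v = (1, 2, 7)
w = (2, 3, 5)
vxw = cross(v, w)
print(vxw, dot(vxw, v), dot(vxw, w))   # both dot products are 0
```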
7.1.1 Interpreting the cross product

We know that v × w is a vector and we know how to compute it. But can we describe this vector? First, we need a vector,
so let's assume that v × w ≠ 0. Then what can we say about the direction and length of v × w?

Without loss of generality, assume that v is in the direction of i and assume that w is a vector in the first quadrant of the
xy-plane, making an angle θ with v. Then, we can write the two vectors as
$$v = |v|\,i + 0\,j + 0\,k$$
$$w = |w|\cos(\theta)\,i + |w|\sin(\theta)\,j + 0\,k$$

Now calculating the cross product of v and w gives
$$v \times w = 0\,i + 0\,j + |v|\,|w|\sin(\theta)\,k.$$

We now observe:

v The vector v × w is perpendicular to both v and w.
v The length of the vector v × w is |v||w| sin(θ).

Example 7.3
Show that |v w| also equals the area of the parallelogram formed by v and w.

Let v = v₁i + v₂j + v₃k and w = w₁i + w₂j + w₃k. Then the dot product of v and w is defined by
$$v \cdot w = v_1 w_1 + v_2 w_2 + v_3 w_3$$
and the cross product by
$$v \times w = (v_2 w_3 - v_3 w_2)\,i + (v_3 w_1 - v_1 w_3)\,j + (v_1 w_2 - v_2 w_1)\,k.$$


## 8. Three-Dimensional Euclidean Geometry. Lines.

8.1 Lines in 3-dimensional space

Through any pair of distinct points we can always construct a straight line. These lines are normally drawn to be infinitely
long in both directions.

Example 8.1
Find all points on the line joining (x, y, z) = (2, 4, 0) and (x, y, z) = (2, 4, 7)

Example 8.2
Find all points on the line joining (x, y, z) = (2, 0, 0) and (x, y, z) = (2, 4, 7)

The standard description of a line uses the parametric equations
$$x(t) = a + pt\,,\qquad y(t) = b + qt\,,\qquad z(t) = c + rt$$
where t is a parameter (it selects each point on the line) and the numbers a, b, c, p, q, r are computed from the coordinates
of two points on the line. (There are other ways to write an equation for a line.)

How do we compute a, b, c, p, q, r? First put t = 0, then x = a, y = b, z = c. That is (a, b, c) are the coordinates of one
point on the line and so a, b, c are known. Next, put t = 1, then x = a + p, y = b + q, z = c + r. Take this to be the second
point on the line, and thus solve for p, q, r.

A common interpretation is that (a, b, c) are the coordinates of one (any) point on the line and (p, q, r) are the components
of a (any) vector parallel to the line.

Example 8.3
Find the equation of the line joining the two points (x, y, z) = (1, 7, 3) and (x, y, z) = (2, 0, 3).
Example 8.4
Show that a line may also be expressed as
$$\frac{x-a}{p} = \frac{y-b}{q} = \frac{z-c}{r}$$
provided p ≠ 0, q ≠ 0 and r ≠ 0. This is known as the Symmetric Form of the equation for a straight line.

Example 8.5
In some cases you may find a small problem with the form suggested in the previous example. What is that problem and
how would you deal with it?

Example 8.6
Determine if the line defined by the points (x, y, z) = (1, 0, 1) and (x, y, z) = (1, 2, 0) intersects with the line defined by the
points (x, y, z) = (3, 1, 0) and (x, y, z) = (1, 2, 5).

Example 8.7
Is the line defined by the points (x, y, z) = (3, 7, 1) and (x, y, z) = (2, 2, 1) parallel to the line defined by the points
(x, y, z) = (1, 4, 1) and (x, y, z) = (0, 5, 1)?

Example 8.8
Is the line defined by the points (x, y, z) = (3, 7, 1) and (x, y, z) = (2, 2, 1) parallel to the line defined by the points
(x, y, z) = (1, 4, 1) and (x, y, z) = (2, 23, 5)?
8.2 Vector equation of a line

The parametric equations of a line are

x(t) = a + pt , y(t) = b + qt , z(t) = c + rt
Note that

(a, b, c) = the vector to one point on the line
(p, q, r) = the vector from the first point to the second point on the line
          = a vector parallel to the line

Let's put d = (a, b, c), v = (p, q, r) and r(t) = (x(t), y(t), z(t)); then

r(t) = d + t v

This is known as the vector equation of a line.

Example 8.9
Write down the vector equation of the line that passes through the points (x, y, z) = (1, 2, 7) and (x, y, z) = (2, 3, 4).

Example 8.10
Write down the vector equation of the line that passes through the points (x, y, z) = (2, 3, 7) and (x, y, z) = (4, 1, 2).

Example 8.11
Find the shortest distance between the pair of lines described in the two previous examples. Hint : Find any vector that
joins a point from one line to the other and then compute the scalar projection of this vector onto the vector orthogonal
to both lines (it helps to draw a diagram).
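The hint in Example 8.11 can be sketched numerically. This is our own illustration of the stated method (the helper names are ours, not the notes'), tested on two lines whose distance is obvious by eye.

```python
# A sketch of the hint: n = v1 x v2 is orthogonal to both lines, and the
# shortest distance is the scalar projection onto n of any vector joining
# the two lines. (Our own helper names; not from the notes.)
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def distance_between_lines(d1, v1, d2, v2):
    n = cross(v1, v2)                          # orthogonal to both lines
    w = tuple(a - b for a, b in zip(d2, d1))   # joins a point on each line
    return abs(dot(w, n)) / dot(n, n) ** 0.5   # scalar projection onto n

# The x-axis and the line through (0, 0, 1) parallel to the y-axis are 1 apart.
dist = distance_between_lines((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0))
```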

## 9. Three-Dimensional Euclidean Geometry. Planes.

9.1 Planes in 3-dimensional space

A plane in 3-dimensional space is a flat 2-dimensional surface. The standard equation for a plane in 3-d is
ax + by + cz = d
where a, b, c and d are some bunch of numbers that identify this plane from all other planes. (There are other ways to write
an equation for a plane, as we shall see).

Example 9.1
Sketch each of the planes z = 1, y = 3 and x = 1.

## 9.1.1 Constructing the equation of a plane

A plane is uniquely determined by any three points (provided not all three points are contained on a line). Recall that a line is fully determined by any pair of points on the line.
Lets find the equation of the plane that passes through the three points (x, y, z) = (1, 0, 0), (x, y, z) = (0, 3, 0) and
(x, y, z) = (0, 0, 2). Our game is to compute a, b, c and d. We do this by substituting each point into the above equation,

1st point: a·1 + b·0 + c·0 = d
2nd point: a·0 + b·3 + c·0 = d
3rd point: a·0 + b·0 + c·2 = d

Now we have a slight problem: we are trying to compute 4 numbers, a, b, c, d, but we only have 3 equations. We have to make an arbitrary choice for one of the 4 numbers a, b, c, d. Let's set d = 6. Then we find from the above that a = 6, b = 2 and c = 3. Thus the equation of the plane is

6x + 2y + 3z = 6
Example 9.2
What equation do you get if you choose d = 1 in the previous example? What happens if you choose d = 0?

Example 9.3
Find an equation of the plane that passes through the three points (x, y, z) = (1, 0, 0), (x, y, z) = (1, 2, 0) and (x, y, z) =
(2, 1, 5).

Recall the parametric equations of a line:

x(t) = a + pt

y(t) = b + qt

z(t) = c + rt

A line is 1-dimensional so its points can be selected by a single parameter t.

However, a plane is 2-dimensional and so we need two parameters (say u and v) to select each point. Thus it's no surprise that every plane can also be described by the following equations

x(u, v) = a + pu + lv

y(u, v) = b + qu + mv

z(u, v) = c + ru + nv
Now we have 9 parameters: a, b, c, p, q, r, l, m and n. These can be computed from the coordinates of three (distinct) points on the plane. For the first point put (u, v) = (0, 0), for the second put (u, v) = (1, 0) and for the final point put (u, v) = (0, 1). Then solve for a through to n (it's easy!).
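The recipe just described can be sketched directly. This is our own illustration (the helper names are hypothetical, not from the notes): with (u, v) = (0, 0), (1, 0), (0, 1) the nine parameters can simply be read off.

```python
# Our own sketch: read off the nine parameters from three points P0, P1, P2
# by substituting (u, v) = (0,0), (1,0) and (0,1) into the parametric form.
def plane_parameters(P0, P1, P2):
    a, b, c = P0                                              # (u, v) = (0, 0) gives P0
    p, q, r = (P1[0] - P0[0], P1[1] - P0[1], P1[2] - P0[2])   # (u, v) = (1, 0) gives P1
    l, m, n = (P2[0] - P0[0], P2[1] - P0[1], P2[2] - P0[2])   # (u, v) = (0, 1) gives P2
    return (a, b, c), (p, q, r), (l, m, n)

def plane_point(params, u, v):
    (a, b, c), (p, q, r), (l, m, n) = params
    return (a + p * u + l * v, b + q * u + m * v, c + r * u + n * v)

# The plane through (1, 0, 0), (0, 3, 0) and (0, 0, 2) from the text above.
params = plane_parameters((1, 0, 0), (0, 3, 0), (0, 0, 2))
```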

Example 9.4
Find the parametric equations of the plane that passes through the three points (x, y, z) = (1, 0, 0), (x, y, z) = (1, 2, 0)
and (x, y, z) = (2, 1, 5).

Example 9.5
Show that the parametric equations found in the previous example describe exactly the same plane as found in Example 9.3 (Hint: substitute the answers from Example 9.4 into the equation found in Example 9.3).

Example 9.6
Find the parametric equations of the plane that passes through the three points (x, y, z) = (1, 2, 1), (x, y, z) = (1, 2, 3)
and (x, y, z) = (2, 1, 5).

Example 9.7
Repeat the previous example but with points re-arranged as (x, y, z) = (1, 2, 1), (x, y, z) = (2, 1, 5) and (x, y, z) =
(1, 2, 3). You will find that the parametric equations look different yet you know they describe the same plane. If you did
not know this last fact, how would you prove that the two sets of parametric equations describe the same plane?

The Cartesian equation for a plane is

ax + by + cz = d

for some bunch of numbers a, b, c and d. We will now re-express this in a vector form.

Suppose we know one point on the plane, say (x, y, z) = (x0, y0, z0). Then

ax0 + by0 + cz0 = d

Subtracting this from ax + by + cz = d gives

a(x − x0) + b(y − y0) + c(z − z0) = 0

This is an equivalent form of the above equation.

Now suppose we have two more points on the plane, (x1, y1, z1) and (x2, y2, z2). Then

a(x1 − x0) + b(y1 − y0) + c(z1 − z0) = 0

a(x2 − x0) + b(y2 − y0) + c(z2 − z0) = 0

Put x10 = (x1 − x0, y1 − y0, z1 − z0) and x20 = (x2 − x0, y2 − y0, z2 − z0). Notice that both of these vectors lie in the plane and that

(a, b, c) · x10 = (a, b, c) · x20 = 0

What does this tell us? Simply that both vectors are orthogonal to the vector (a, b, c). Thus we must have that (a, b, c) is a normal vector to the plane.

Now let's put

n = (a, b, c) = the normal vector to the plane
d = (x0, y0, z0) = one (any) point on the plane
r = (x, y, z) = a typical point on the plane

Then we have

n · (r − d) = 0

This is the vector equation of a plane.
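The normal vector n can be computed as the cross product of two vectors lying in the plane (x10 and x20 above). This is our own sketch, using the plane through (1, 0, 0), (0, 3, 0) and (0, 0, 2) constructed earlier; the cross product reproduces the coefficients (a, b, c) = (6, 2, 3) found there.

```python
# Our own sketch: build n as the cross product of two in-plane vectors,
# then check the vector equation n . (r - d) = 0 for each defining point.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The plane through (1, 0, 0), (0, 3, 0) and (0, 0, 2) from the text.
d_point = (1, 0, 0)
n = cross(sub((0, 3, 0), d_point), sub((0, 0, 2), d_point))
residuals = [dot(n, sub(r, d_point))
             for r in [(1, 0, 0), (0, 3, 0), (0, 0, 2)]]
```

Here n comes out as (6, 2, 3), matching the equation 6x + 2y + 3z = 6, and every residual n · (r − d) is zero.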
Example 9.8
Find the vector equation of the plane that contains the points (x, y, z) = (1, 2, 7), (x, y, z) = (2, 3, 4) and (x, y, z) =
(1, 2, 1).

Example 9.9
Re-express the previous result in the form ax + by + cz = d.

Example 9.10
Find the shortest distance between the pair of planes 2x + 3y − 4z = 2 and 4x + 6y − 8z = 3.

## 10. Parametric curves in 3-dimensions

10.1 Parametric curves

Here is a very simple example of what we call a parametric description of a curve,

x(t) = 7t − 3, y(t) = 5t + 3, z(t) = 3t − 4

which could also be written as the vector equation

r(t) = (7t − 3) i + (5t + 3) j + (3t − 4) k.

We instantly recognise this as being the parametric representation for a straight line (it was an instant recognition, was it not?).
We can define a curve in 3-dimensional space parametrically by treating the position vector r as a function of some parameter; in the previous example this parameter is t. We take the three numbers (x(t), y(t), z(t)) to be some point in a 3-dimensional space with corresponding position vector r(t) = x(t) i + y(t) j + z(t) k. As we allow the parameter t to vary (smoothly) we expect the point to trace out a (possibly smooth) curve in that 3-dimensional space.

Example 10.1
The vector equation
r(t) = 3 sin(t) i + 2 cos(t) j + tk
has the parametric equations
x(t) = 3 sin(t) , y(t) = 2 cos(t) , z(t) = t.
Notice that we can rewrite the first two parametric equations as

sin(t) = x(t)/3 , cos(t) = y(t)/2

and given the Pythagorean identity sin²(t) + cos²(t) = 1 this gives

x²/9 + y²/4 = 1

which is the equation of an ellipse in the xy-plane.
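The identity can be checked numerically. This is our own quick check, not part of the notes: sample the curve at many values of t and confirm x²/9 + y²/4 = 1 at each.

```python
import math

# Our own numerical check that the curve of Example 10.1 satisfies
# x^2/9 + y^2/4 = 1 at every sampled parameter value.
max_residual = 0.0
for k in range(360):
    t = 2 * math.pi * k / 360
    x, y = 3 * math.sin(t), 2 * math.cos(t)
    max_residual = max(max_residual, abs(x**2 / 9 + y**2 / 4 - 1))
```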
Example 10.2
Another possible parametric representation is

r(t) = t i + (t² + 2t − 1) j + 3 k.
In each case we have what we call a parametric representation of a curve. Some of the questions we might like to ask about
such parametric equations are

- What use can we make of these parametric equations?

- Are there other parametric equations that represent the same curve?

A common interpretation of the parametric equations is that they record the history of a point particle moving in space.
It comes as no surprise then that the parameter in the equations is often chosen to be t, for time. But do not think that this is universal: there is nothing magical in the choice of t as the parameter, you can use any symbol that you like. For example, here is a popular parametric description of the unit circle in the xy-plane with the centre at the origin

x(θ) = cos(θ) , y(θ) = sin(θ)

or equivalently,

r(θ) = cos(θ) i + sin(θ) j + 0 k.

In this instance θ is the parameter and as θ progresses from 0 to 2π the point (x(θ), y(θ)) traces out one complete revolution of the unit circle. If we allow θ to take on values from 0 to 3π/2 we would only see three-quarters of the unit circle, while for θ taking on values from 0 to 6π we would get three revolutions of the unit circle. This example shows you that the allowed domain of values for the parameter is an important aspect of the description of the curve.
So, keep in mind that when we say (x(t), y(t), z(t)) is a parametric description of a curve we should also specify the domain of allowed values for the parameter t (or θ or whatever parameter we choose).

If someone draws a curve for you (in the sand, on the blackboard or on your generic tablet) you might wonder if there exists
a unique parametric description of that curve. The answer is most certainly not, there are many ways to write parametric
equations for a given curve.

Example 10.3
Show that the parametric equations

x(v) = cos(v − π/2) and y(v) = sin(v + π/2) for 0 ≤ v < 2π

trace out the same curve, the unit circle, as x(u) = cos(u) and y(u) = sin(u) for 0 ≤ u < 2π.

Note, however, these two parameterisations do not start at the same point or even have the same orientation:

- The parameterisation r(v) = x(v) i + y(v) j starts at (x, y) = (0, 1) for v = 0 and the particle point will move clockwise around the unit circle as v increases from 0 to 2π.

- The parameterisation r(u) = x(u) i + y(u) j starts at (x, y) = (1, 0) for u = 0 and the particle point will move counter-clockwise around the unit circle as u increases from 0 to 2π.

So, keep in mind that if you want specific start and end points, or you want a specific orientation for your curve, you will
need to select your parameterisation carefully. This will be important for your other units including ENG2005/ENG2006.

One of the easiest ways to see what the curve looks like is to plot some points obtained by choosing a range of values for the parameter. This is best done using a computer and here is a simple example, commonly known as a helix.

x(t) = cos(t)
y(t) = sin(t)
z(t) = t/(6π)
0 ≤ t < 6π

[Figure: a helix winding around the z-axis, with x and y between −1 and 1 and z rising from 0 to 1.]
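The "plot some points" step can be sketched as follows. This is our own illustration: we sample the helix at 200 parameter values; any plotting tool (Matlab, matplotlib, and so on) could then draw the resulting points.

```python
import math

# Our own sketch of sampling the helix: 200 points with 0 <= t < 6*pi.
N = 200
ts = [6 * math.pi * k / N for k in range(N)]
points = [(math.cos(t), math.sin(t), t / (6 * math.pi)) for t in ts]
```

Every sampled point sits on the unit cylinder x² + y² = 1 with z climbing steadily from 0 towards 1, which is exactly the helix pictured above.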

Of course we can also use parametric forms to construct curves in 2-dimensions, such as in this pair of examples.

x(t) = 2 cos(t) − 3 sin(t)
y(t) = 3 sin(t) + 2 cos(t)
0 ≤ t < 2π

[Figure: the resulting ellipse in the xy-plane, spanning roughly −4 to 4 in both x and y.]

x(t) = t³
y(t) = t²
−1 < t < 1

[Figure: the resulting curve in the xy-plane, with a cusp at the origin and y running from 0 to 1.]

This last example is notable for the nasty kink at (x, y) = (0, 0) despite the fact there is nothing particularly alarming about the simple functions x(t) = t³ and y(t) = t². This kind of behaviour, where the parametric equations are smooth functions and yet the curve possesses kinks, is something to be aware of but we shall not make much of a fuss about such things at this introductory level (you will see more on this issue of smoothness in later units).

## 10.2 Tangent vectors and lines

Okay, suppose we are given the three functions x(t), y(t) and z(t). Then it is a simple matter to compute their first derivatives, x′(t), y′(t) and z′(t). What do we make of these? Previously we interpreted (x(t), y(t), z(t)) as describing a curve in three-dimensional space. What then do the derivatives (x′(t), y′(t), z′(t)) tell us about the curve? Quite simply, they give us a vector that is tangent to the curve and pointing in the direction of increasing t.
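This can be checked numerically. Our own sketch below approximates the tangent vector of the curve from Example 10.1 by a central difference and compares it with the exact derivatives (3 cos(t), −2 sin(t), 1).

```python
import math

# Our own numerical check: a central-difference approximation to
# (x'(t), y'(t), z'(t)) for r(t) = (3 sin t, 2 cos t, t), compared
# with the exact derivatives (3 cos t, -2 sin t, 1).
def r(t):
    return (3 * math.sin(t), 2 * math.cos(t), t)

def approx_tangent(t, dt=1e-6):
    ahead, behind = r(t + dt), r(t - dt)
    return tuple((a - b) / (2 * dt) for a, b in zip(ahead, behind))

t0 = 0.7
numeric = approx_tangent(t0)
exact = (3 * math.cos(t0), -2 * math.sin(t0), 1.0)
err = max(abs(a - b) for a, b in zip(numeric, exact))
```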

Example 10.4
Compute the tangent vector to the curve defined by

x(t) = sin(t) , y(t) = cos(t) , 0 < t < 2π

Example 10.5
Prove that the vector obtained in the previous example is indeed tangent to the curve.

How do we prove this statement, that the derivatives give us a tangent vector? It is quite easy. Start by writing r(t) = (x(t), y(t), z(t)), which we interpret as the position vector to a point on the curve, and then turn to the basic definition of a derivative,

d r(t)/dt = lim_{Δt→0} [ (x(t + Δt), y(t + Δt), z(t + Δt)) − (x(t), y(t), z(t)) ] / Δt

          = lim_{Δt→0} ( (x(t + Δt) − x(t))/Δt , (y(t + Δt) − y(t))/Δt , (z(t + Δt) − z(t))/Δt )

          = ( dx/dt , dy/dt , dz/dt )

In the first line we see that we have two points, one at t the other at t + Δt. Importantly, both points are on the curve. Their difference is a short vector that is close to the curve. Clearly (not an ideal way to prove something but one I trust you will accept) this vector remains close to the curve for all t and will be tangent to the curve in the limit Δt → 0.

Once we have the tangent vector we can easily construct a tangent line to the curve at any chosen point. That is we build
a new straight line that glances off the curve at a chosen point. The tangent line and the original curve meet at one point
and have parallel tangent vectors at that point.

Example 10.6
Construct the tangent line to the curve defined by

## 10.3 Normal planes

There is another object that we can construct from our little curves. Pick any point on the curve and compute a tangent
vector at that point. Now we can easily build a plane that has that vector as its normal vector. Thus we can easily
construct the plane that cuts through this curve at the given point. Here is a simple example.
Example 10.7
Find the equation of a plane normal to the curve

r(t) = (t² − t + 1) i + sin(t) j + t k

at the point given by t = 0.
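A sketch of Example 10.7, our own and using the curve as printed above: the tangent vector r′(0) serves as the normal n of the plane, and the plane is then n · r = n · r(0).

```python
import math

# Our own sketch for Example 10.7: the tangent vector at t = 0 is the
# normal of the plane cutting the curve there.
def r(t):
    return (t * t - t + 1, math.sin(t), t)

def r_dot(t):
    return (2 * t - 1, math.cos(t), 1.0)   # differentiate each component

n = r_dot(0.0)                              # normal vector of the plane
p0 = r(0.0)                                 # the point on the curve at t = 0
d = sum(a * b for a, b in zip(n, p0))       # plane: n . r = d
```

With the curve as printed, n = (−1, 1, 1) and d = −1, so the plane is −x + y + z = −1, i.e. x − y − z = 1.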


## 11. Parametric representations of surfaces

11.1 Surfaces

A very common application of a function of two variables is to describe a surface in 3-dimensional space. How so? you
might ask. The idea is that we take the value of the function to describe the height of the surface above the xy-plane. If
we use standard Cartesian coordinates then such a surface could be described by the equation

z = f (x, y)

This surface has a height z units above each point (x, y) in the xy-plane.

The equation z = f (x, y) describes the surface explicitly as a height function over a plane and thus we say that the surface
is given in explicit form.

A surface such as z = f (x, y) is also often called the graph of the function f (analogous to y = F (x) is the graph of F ).

Here are some simple examples. A very good exercise is to try to convince yourself that the following images are correct
(i.e. that they do represent the given equation).
z = x² + y²
1 = x² + y² − z²
z = cos(3√(x² + y²)) exp(−2(x² + y²))
z = √(1 + y² − x²)
z = xy exp(−(x² + y²))
1 = x + y + z

## 11.2 Alternative forms

We might ask are there any other ways in which we can describe a surface? We should be clear that (in this unit) when we
say surface we are talking about a 2-dimensional surface in our familiar 3-dimensional space. With that in mind, consider
the equation
g(x, y, z) = 0
What do we make of this equation? Well, after some algebra we might be able to re-arrange the above equation into the
familiar form
z = f (x, y)
for some function f . In this form we see that we have a surface, and thus the previous equation g(x, y, z) = 0 also describes
a surface. When the surface is described by an equation of the form g(x, y, z) = 0 we say that the surface is given in
implicit form.

Consider all of the points in R³ (i.e. all possible (x, y, z) points). If we now introduce the equation g(x, y, z) = 0 we are forced to consider only those (x, y, z) values that satisfy this constraint. We could do so by, for example, arbitrarily choosing (x, y) and using the equation (in the form z = f(x, y)) to compute z. Or we could choose say (y, z) and use the equation g(x, y, z) = 0 to compute x. Whichever road we travel it is clear that we are free to choose just two of the (x, y, z) with the third constrained by the equation.

Now consider some simple surface and let's suppose we are able to drape a sheet of graph paper over the surface. We can use this graph paper to select individual points on the surface (well, as far as the graph paper covers the surface). Suppose we label the axes of the graph paper by the symbols s and t. Then each point on the surface is described by a unique pair of values (s, t). This makes sense: we are dealing with a 2-dimensional surface and so we expect we would need 2 numbers, (s, t), to describe each point on the surface. The parameters (s, t) are often referred to as (local) coordinates on the surface.

How does this picture fit in with our previous description of a surface, as an equation of the form g(x, y, z) = 0? Pick any point on the surface. This point will have both (x, y, z) and (s, t) coordinates. That means that we can describe the point in terms of either (s, t) or (x, y, z). As we move around the surface all of these coordinates will vary. So given (s, t) we should be able to compute the corresponding (x, y, z) values. That is, we should be able to find functions P(s, t), Q(s, t) and R(s, t) such that

x = P(s, t) , y = Q(s, t) , z = R(s, t)

The above equations describe the surface in parametric form.

Example 11.1
Identify (i.e. describe) the surface given by the equations
x = 2s + 3t + 1, y = s − 4t + 2, z = s + 2t − 1
Hint: Try to combine the three equations into one equation involving x, y and z but not s and t.

## 11.3 Parametric representations of surfaces

Consider a 2-dimensional surface in 3-dimensional space expressed in the explicit form as a function:

z = f(x, y) ,

or in the implicit form as an equation:

g(x, y, z) = 0.

Example 11.2
A plane in 3-dimensional space can be expressed as

(a) a function z of two independent variables x and y:

z := f(x, y) = αx + βy + γ ,

(b) an equation of three variables x, y, and z:

g(x, y, z) := ax + by + cz + d = 0.

Surfaces which can be expressed in the form of a function z = f(x, y) are rather restrictive because of the uniqueness of the image of a point in the domain of a function, while surfaces that are expressed in the form of an equation g(x, y, z) = 0 are more diverse in nature because they may be multi-valued.
Compare an upper hemisphere expressed as a function z = √(1 − x² − y²)

with that of a sphere represented as an equation x² + y² + z² = 1.

We need two independent variables to cover a 2-dimensional space in the parametric variables, so a 2-dimensional surface
in 3-dimensional space can be represented parametrically as the position vector of a function of two independent variables
s and t
r(s, t) = x(s, t) i + y(s, t) j + z(s, t) k,
with some bounds defining the domain for the parameters s and t.

Example 11.3
A plane can be represented parametrically as r(s, t) = r0 + s u + t v, where r0 = x0 i + y0 j + z0 k is the position vector of a given point on the plane, and u and v are two independent vectors (that is, u and v are not in the same direction) parallel to the plane.
Example 11.4
Consider a simple surface represented by a function z = f(x, y). Then we could choose the parametric variables x = s and y = t. The surface is then defined in terms of the position vector:

r(s, t) = s i + t j + f(s, t) k.

We still need to define the bounds of the surface in terms of the parameters s and t (that is, specifying x0 ≤ x ≤ x1 and y0 ≤ y ≤ y1).
The plane 2x + 3y + 4z + 5 = 0 can be represented as:

r(s, t) = s i + t j − ((2s + 3t + 5)/4) k

Again, we still need to define the bounds for the surface.

Example 11.5
A right-circular cylinder of unit height can be represented by the equation:

x² + y² = a², such that 0 ≤ z ≤ 1.

Note: this represents only the wall of the cylinder, not the top and bottom circular disks.
We want to define two parametric variables to help us describe this cylindrical surface. For one of the parametric variables, let z = t. The remaining parametric variable is most easily defined with a polar coordinate for angle. Define a circle of radius a: x = a cos(s), y = a sin(s), with 0 ≤ s < 2π. Parametrically, the cylindrical surface of unit height can be represented by the position vector:

r(s, t) = a cos(s) i + a sin(s) j + t k, where 0 ≤ s < 2π and 0 ≤ t ≤ 1.

Note that these surface representations are not unique. As with many of these problems requiring a surface parameterisation, the best representation will depend on the nature of the problem that needs to be solved.
Example 11.6
Consider another parametric representation of the right-circular cylinder of unit height described by the equation:

x² + y² = a², such that 0 ≤ z ≤ 1.

If we define z = t and x = s then the new parametric representation is

r(s, t) = s i ± √(a² − s²) j + t k, where −a ≤ s ≤ a and 0 ≤ t ≤ 1 (one sign for each half of the wall).

Example 11.7
A sphere of radius a centred at the origin when expressed as an equation is:

x² + y² + z² = a².

Note: This represents only the spherical surface, not the volume of the ball contained by this surface.

We can use spherical polar coordinates:

x = r sin(θ) cos(φ)
y = r sin(θ) sin(φ)
z = r cos(θ)

If we define our parametric variables as s = θ and t = φ then the surface representation is:

r(s, t) = a sin(s) cos(t) i + a sin(s) sin(t) j + a cos(s) k, where 0 ≤ s ≤ π and 0 ≤ t < 2π

where the radius r = |r| is fixed to length a.
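As a quick check, our own and not part of the notes: every point produced by this parameterisation should sit at distance a from the origin.

```python
import math

# Our own numerical check of the spherical parameterisation: each sampled
# point must lie at distance a from the origin.
a = 2.0
worst = 0.0
for i in range(1, 20):
    for j in range(40):
        s = math.pi * i / 20            # 0 <= s <= pi
        t = 2 * math.pi * j / 40        # 0 <= t < 2*pi
        x = a * math.sin(s) * math.cos(t)
        y = a * math.sin(s) * math.sin(t)
        z = a * math.cos(s)
        worst = max(worst, abs(math.sqrt(x * x + y * y + z * z) - a))
```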

Again, there are many ways of representing this surface. The sphere of radius a centred at the origin could be done in, say, Cartesian coordinates. Furthermore, there are even other ways to represent the sphere of radius a centred at the origin using θ and φ as parameters; however, the roles of θ and φ will be different to the spherical polar coordinates parametric representation given above.

Example 11.8
In many engineering texts, the roles of θ and φ are reversed.
In this case, s = φ and t = θ, and the sphere of radius a centred at the origin has parametric representation

r(s, t) = a sin(t) cos(s) i + a sin(t) sin(s) j + a cos(t) k, where 0 ≤ s < 2π and 0 ≤ t ≤ π.

Example 11.9
The sphere of radius a centred at the origin in Matlab has the physical geographer's parametric representation, where s represents the longitude and t represents the latitude on the spherical surface:

r(s, t) = a cos(s) cos(t) i + a sin(s) cos(t) j + a sin(t) k, where 0 ≤ s < 2π and −π/2 ≤ t ≤ π/2.

Example 11.10
Consider an inverted right circular cone of height h and radius a. We are only interested in the surface of the cone. We
can define this surface by the function:
z = −(h/a) √(x² + y²) + h, where 0 ≤ z ≤ h.
How do we represent this parametrically?

If we stay with Cartesian coordinates and choose parameters x = s and y = t then this leads to the parametric representation:

r(s, t) = s i + t j + ( −(h/a) √(s² + t²) + h ) k

with the domain defined as s² + t² ≤ a². However, it is messy to determine the domain for the parameters s and t this way. The inequality s² + t² ≤ a² suggests that it will be much better to use polar coordinates.
Let us return to using polar coordinates (r, θ):

x = r cos(θ)
y = r sin(θ)
z = −(h/a) r + h

## 11.4 Coordinate vectors in other coordinate systems

In Cartesian coordinates we have the three coordinate vectors i, j and k which have the properties that

- each two vectors are orthogonal, that is, i · j = 0, j · k = 0 and i · k = 0, and

- the vector i points in the direction of increasing x-values, the vector j points in the direction of increasing y-values and the vector k points in the direction of increasing z-values.

When we work in cylindrical or spherical coordinates do we have coordinate vectors with the same properties? If so, can
we relate them back to Cartesian coordinates?

The answer to both questions is: yes.

11.4.1 Cylindrical coordinates

Here we have

x = R cos(θ) , y = R sin(θ) and z = z

where R = √(x² + y²). Note that R represents the distance from the cylinder axis to the cylinder surface. The cylindrical coordinate vectors are

eR = cos(θ) i + sin(θ) j + 0 k
eθ = −sin(θ) i + cos(θ) j + 0 k
ez = 0 i + 0 j + k

where eR points in the direction of increasing R-values, eθ points in the direction of increasing θ-values and ez points in the direction of increasing z-values.

## 11.4.2 Spherical coordinates

Here we have

x = r sin(θ) cos(φ) , y = r sin(θ) sin(φ) and z = r cos(θ)

where r = √(x² + y² + z²). Note that r represents the distance from the origin to the spherical surface. The spherical coordinate vectors are

er = sin(θ) cos(φ) i + sin(θ) sin(φ) j + cos(θ) k
eθ = cos(θ) cos(φ) i + cos(θ) sin(φ) j − sin(θ) k
eφ = −sin(φ) i + cos(φ) j + 0 k

where er points in the direction of increasing r-values, eθ points in the direction of increasing θ-values and eφ points in the direction of increasing φ-values.

## 12. Linear systems of equations

12.1 Examples of linear systems

## 12.1.1 Bags of coins

We have three bags with a mixture of gold, silver and copper coins. We are given the following information:

Bag 1 contains 10 gold, 3 silver, 1 copper and weighs 60 g
Bag 2 contains 5 gold, 1 silver, 2 copper and weighs 30 g
Bag 3 contains 3 gold, 2 silver, 4 copper and weighs 25 g

The question is: what are the respective weights of the gold, silver and copper coins?

Let G, S and C denote the weight of each of the gold, silver and copper coins. Then we have the system of equations

10G + 3S + C = 60
5G + S + 2C = 30
3G + 2S + 4C = 25
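Solving this system (by the elimination method developed in the following sections, or otherwise) gives G = 5 g, S = 3 g and C = 1 g. As a sketch, we can at least verify that claim by substitution:

```python
# Verify by substitution that (G, S, C) = (5, 3, 1) grams satisfies all
# three bag equations. (The solution itself comes from the elimination
# method of the following sections.)
G, S, C = 5, 3, 1
bag1 = 10 * G + 3 * S + 1 * C   # should weigh 60 g
bag2 = 5 * G + 1 * S + 2 * C    # should weigh 30 g
bag3 = 3 * G + 2 * S + 4 * C    # should weigh 25 g
```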

## 12.1.2 Silly puzzles

John and Mary's ages add to 75 years. When John was half his present age he was twice as old as Mary. How old are they?

We have just two equations,

J + M = 75
(1/2) J − 2M = 0
12.1.3 Intersections of planes

It's easy to imagine three planes in space. Is it possible that they share one point in common? Here are the equations for three such planes:

3x + 7y − 2z = 0
6x + 16y − 3z = −1
3x + 9y + 3z = 3

Can we solve this system for (x, y, z)?

In all of the above examples we need to unscramble the set of linear equations to extract the unknowns (e.g. G, S, C etc.).

We start with the previous example

3x + 7y − 2z = 0    (1)
6x + 16y − 3z = −1  (2)
3x + 9y + 3z = 3    (3)

Suppose by some process we were able to rearrange these equations into the following form

3x + 7y − 2z = 0  (1)
2y + z = −1       (2)′
4z = 4            (3)′′

Then we could solve (3)′′ for z

(3)′′  4z = 4  ⇒  z = 1

and then substitute into (2)′ to solve for y

(2)′  2y + 1 = −1  ⇒  y = −1

and substitute into (1) to solve for x

(1)  3x − 7 − 2 = 0  ⇒  x = 3

The question is: how do we get the modified equations (1), (2)′ and (3)′′?
The general trick is to take suitable combinations of the equations so that we can eliminate various terms. The trick is applied as many times as we need to turn the original equations into the simple form like (1), (2)′ and (3)′′.
Let's start with the first pair of the original equations

3x + 7y − 2z = 0    (1)
6x + 16y − 3z = −1  (2)

We can eliminate the 6x in equation (2) by replacing equation (2) with (2) − 2(1),

0x + (16 − 14)y + (−3 + 4)z = −1  (2)′

that is,

2y + z = −1  (2)′

Likewise, for the 3x term in equation (3) we replace equation (3) with (3) − (1),

2y + 5z = 3  (3)′

At this point our system of equations is

3x + 7y − 2z = 0  (1)
2y + z = −1       (2)′
2y + 5z = 3       (3)′

The last step is to eliminate the 2y term in the last equation. We do this by replacing equation (3)′ with (3)′ − (2)′

4z = 4  (3)′′

So finally we arrive at the system of equations

3x + 7y − 2z = 0  (1)
2y + z = −1       (2)′
4z = 4            (3)′′

which, as before, we solve to find z = 1, y = −1 and x = 3.

The procedure we just went through is known as a reduction to upper triangular form and we used elementary row
operations to do so. We then solved for the unknowns by back substitution.

This procedure is applicable to any system of linear equations (though beware, for some systems the back substitution method requires special care; we'll see examples later).

The general strategy is to eliminate all terms below the main diagonal, working column by column from left to right.
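The general strategy can be sketched in a few lines of code. This is a minimal illustration of ours (not part of the notes): eliminate below the diagonal column by column, then back-substitute. It assumes no zero ever appears on the diagonal; the exceptions are treated later.

```python
# A minimal sketch (ours) of reduction to upper triangular form followed
# by back substitution. No row swaps, so it assumes no zero appears on
# the diagonal during elimination (see the later "exceptions" section).
def solve(A, b):
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = b[:]
    for col in range(n):                          # forward elimination
        for row in range(col + 1, n):
            m = A[row][col] / A[col][col]
            A[row] = [x - m * y for x, y in zip(A[row], A[col])]
            b[row] -= m * b[col]
    x = [0.0] * n                                 # back substitution
    for row in reversed(range(n)):
        s = sum(A[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / A[row][row]
    return x

# The worked system above (signs as we read them there).
solution = solve([[3, 7, -2], [6, 16, -3], [3, 9, 3]], [0, -1, 3])
```

Applied to the worked system, it reproduces x = 3, y = −1, z = 1.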

## 12.3 Lines and planes

In previous lectures we saw how we could construct the equations for lines and planes. Now we can answer some simple questions.

How do we compute the intersection between a line and a plane? Can we be sure that they do intersect? And what about
the intersection of a pair or more of planes?

The general approach to all of these questions is simply to write down equations for each of the lines and planes and then
to search for a common point (i.e. a consistent solution to the system of equations).
Example 12.1
Find the intersection of the plane y = 0 with the plane 2x + 3y − 4z = 1.

Example 12.2
Find the intersection of the line x(t) = 1 + 3t, y(t) = 3 − 2t, z(t) = 1 − t with the plane 2x + 3y − 4z = 1.

Example 12.3
Find the intersection of the three planes 2x + 3y − z = 1, x − y = 2 and x = 1.

In general, three planes may intersect at a single point or along a common line or even not at all.

Here are some examples (there are others) of how planes may (or may not) intersect:

[Figures: no point of intersection; one point of intersection; intersection in a common line.]

Example 12.4
What other examples can you draw of intersecting planes?

## 13. Gaussian Elimination

13.1 Gaussian elimination and back-substitution

## Example 13.1 : Typical layout

2x + 3y + z = 10    (1)
x + 2y + 2z = 10    (2)′ = 2(2) − (1)
4x + 8y + 11z = 49  (3)′ = (3) − 2(1)

2x + 3y + z = 10    (1)
y + 3z = 10         (2)′
2y + 9z = 29        (3)′′ = (3)′ − 2(2)′

2x + 3y + z = 10    (1)
y + 3z = 10         (2)′
3z = 9              (3)′′

Now we solve this system using back-substitution: z = 3, y = 1, x = 2.

Note how we record the next set of row-operations on each equation. This makes it much easier for someone else to see
what you are doing and it also helps you track down any arithmetic errors.
13.2 Gaussian elimination

In the previous example we found

2x + 3y + z = 10  (1)
y + 3z = 10       (2)′
3z = 9            (3)′′

Why stop there? We can apply more row-operations to eliminate terms above the diagonal. This does not involve back-
substitution. This method is known as Gaussian elimination. Take note of the difference!

Example 13.2
Continue from the previous example and use row-operations to eliminate the terms above the diagonal. Hence solve the
system of equations.

(a) Use row-operations to eliminate all terms below the main diagonal (reduction to upper triangular form).

(b) Use further row-operations to eliminate all terms above the diagonal.

(c) If possible, re-scale each equation so that each diagonal element = 1.

(d) The right hand side is now the solution of the system of equations.

If you bail out after step (a) you are doing Gaussian elimination with back-substitution (this is usually the easier option).
13.3 Exceptions

## Example 13.3 : A zero on the diagonal

2x + y + 2z + w = 2    (1)
2x + y − z + 2w = 1    (2)′ = (2) − (1)
x − 2y + z − w = −2    (3)′ = 2(3) − (1)
x + 3y − z + 2w = 2    (4)′ = 2(4) − (1)

2x + y + 2z + w = 2    (1)
0y − 3z + w = −1       (2)′′ = (3)′ (swap)
−5y + 0z − 3w = −6     (3)′′ = (2)′ (swap)
5y − 4z + 3w = 2       (4)′

The zero on the diagonal in the second equation is a serious problem: it means we cannot use that row to eliminate the elements below the diagonal term. Hence we swap the second row with any other lower row so that we get a non-zero term on the diagonal. Then we proceed as usual. The result is w = 2, z = 1, y = 0 and x = −1.

Example 13.4
Complete the above example.
Example 13.5 : A consistent and under-determined system
Suppose we start with three equations and we wind up with

2x + 3y − z = 1  (1)
−5y + 5z = −1    (2)′
0z = 0           (3)′′

The last equation tells us nothing! We can't solve it for any of x, y and z. We really only have 2 equations, not 3. That is, 2 equations for 3 unknowns. This is an under-determined system.
We solve the system by choosing any number for one of the unknowns. Say we put z = λ where λ is any number (our choice). Then we can leap back into the equations and use back-substitution.
The result is a one-parameter family of solutions

x = 1/5 − λ , y = 1/5 + λ , z = λ

Since we found a solution we say that the system is consistent.

Example 13.6 : An inconsistent system

Had we started with

2x + 3y − z = 1  (1)
x − y + 2z = 0   (2)
3x + 2y + z = 0  (3)

we would have arrived at

2x + 3y − z = 1  (1)
−5y + 5z = −1    (2)′
0z = −2          (3)′′

This last equation makes no sense as there are no finite values for z such that 0z = −2 and thus we say that this system is inconsistent and that the system has no solution.

14. Matrices
14.1 Matrices

When we use row-operations on systems of equations such as

3x + 2y - z = 3
x - y + z = -1
2x + y - z = 0

the x, y, z just hang around. All the action occurs on the coefficients and the right hand side. To assist in the bookkeeping we introduce a new notation, matrices,

[ 3  2 -1 ] [ x ]   [  3 ]
[ 1 -1  1 ] [ y ] = [ -1 ]
[ 2  1 -1 ] [ z ]   [  0 ]

Each [ ] is a matrix,

[ 3  2 -1 ]
[ 1 -1  1 ]
[ 2  1 -1 ]

is a square 3 x 3 matrix, while

[ x ]        [  3 ]
[ y ]  and   [ -1 ]
[ z ]        [  0 ]

are 3 x 1 matrices (also called column vectors).

We can recover the original system of equations by defining a rule for multiplying matrices,

                  [ e ]
                  [ f ]
[ a  b  c  d  . ] [ g ] = [ i ]
                  [ h ]
                  [ . ]

where the entry i is the row multiplied into the column,

i = a e + b f + c g + d h + ...

Example 14.1
Write the above system of equations in matrix form.

[ 3  2 -1 ] [ x ]   [ 3x + 2y - 1z ]
[ 1 -1  1 ] [ y ] = [ 1x - 1y + 1z ]
[ 2  1 -1 ] [ z ]   [ 2x + 1y - 1z ]

Example 14.2
Compute

[ 2 3 ] [ 1 7 ]        [ 1 7 ] [ 2 3 ]
[ 4 1 ] [ 0 2 ]  and   [ 0 2 ] [ 4 1 ]

Note that we can only multiply matrices that fit together. That is, if A and B are a pair of matrices then in order that
AB makes sense we must have the number of columns of A equal to the number of rows of B.
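The row-into-column rule, and the fact that AB and BA generally differ, can be checked with a short sketch (the entries of the Example 14.2 matrices are taken as printed, which is an assumption):

```python
def matmul(A, B):
    # entry (i, k) of AB is row i of A multiplied into column k of B
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 3], [4, 1]]   # the matrices of Example 14.2, as printed
B = [[1, 7], [0, 2]]
AB = matmul(A, B)
BA = matmul(B, A)
```

Comparing `AB` and `BA` shows immediately that matrix multiplication is not commutative.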

Example 14.3
Does the following make sense?

[ 2 3 ]
[ 4 1 ] [ 1 7 ]
[ 4 1 ] [ 0 2 ]
14.1.1 Notation

We use capital letters to represent matrices,

    [ 3  2 -1 ]       [ x ]       [  3 ]
A = [ 1 -1  1 ] , X = [ y ] , B = [ -1 ]
    [ 2  1 -1 ]       [ z ]       [  0 ]

and our previous system of equations can then be written as

AX = B

Entries within a matrix are denoted by subscripted lowercase letters. Thus for the matrix B above we have b1 = 3, b2 = -1 and b3 = 0 while for the matrix A we have

    [ 3  2 -1 ]   [ a11 a12 a13 ]
A = [ 1 -1  1 ] = [ a21 a22 a23 ]
    [ 2  1 -1 ]   [ a31 a32 a33 ]

aij = the entry in row i and column j of A

To remind us that A is a square matrix with elements aij we sometimes write A = [aij ].

14.1.2 Operations on matrices

• Equality: A = B only when all entries in A equal those in B.

• Multiplication by a number: λA = λ times each entry of A

• Multiplication of matrices: each entry of the product is a row of the first matrix multiplied into a column of the second,

            [ ? ]
[ ? ? ? ? ] [ ? ] = [ ? ]
            [ ? ]
            [ ? ]

• Transpose: swap rows with columns, denoted by a superscript T,

[ 1 2 7 ]T   [ 1 0 ]
[ 0 3 4 ]  = [ 2 3 ]
             [ 7 4 ]

• The Identity matrix:

    [ 1 0 0 0 . ]
    [ 0 1 0 0 . ]
I = [ 0 0 1 0 . ]
    [ 0 0 0 1 . ]
    [ . . . . . ]

For any square matrix A we have IA = AI = A.
• The Zero matrix: A matrix full of zeroes!
• Symmetric matrices: Any matrix A for which A = AT.
• Skew-symmetric matrices: Any matrix A for which A = -AT. Sometimes also called anti-symmetric.
14.1.4 Properties of matrices

• AB ≠ BA (in general)
• (AB)C = A(BC)
• (AT)T = A
• (AB)T = BT AT
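The transpose property (AB)T = BT AT is easy to spot-check numerically. The sketch below uses the 2 x 2 matrices of Example 14.2 (entries as printed, an assumption):

```python
def matmul(A, B):
    # row-into-column multiplication; zip(*B) yields the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    # swap rows with columns
    return [list(col) for col in zip(*A)]

A = [[2, 3], [4, 1]]
B = [[1, 7], [0, 2]]
lhs = transpose(matmul(A, B))                   # (AB)^T
rhs = matmul(transpose(B), transpose(A))        # B^T A^T
```

Note the order reversal on the right hand side: transposing swaps which matrix acts first.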

14.1.5 Notation

For the system of equations

3x + 2y - z = 1
x - y + z = 4
2x + y - z = 1

we call

[ 3  2 -1 ]
[ 1 -1  1 ]
[ 2  1 -1 ]

the coefficient matrix and

[ 3  2 -1 | 1 ]
[ 1 -1  1 | 4 ]
[ 2  1 -1 | 1 ]

the augmented matrix.
When we do row-operations on a system we are manipulating the augmented matrix. But each incarnation represents a system of equations with the same original values for x, y and z. Thus if A and A' are two augmented matrices for the same system, then we write

A ~ A'

The squiggle means that even though A and A' are not the same matrices, they do give us the same values for x, y and z.
Example 14.4
Solve the system of equations
3x + 2y - z = 1
x - y + z = 4
2x + y - z = 1
using matrix notation.
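A sketch of Example 14.4 done by row-reducing the augmented matrix. The coefficient signs follow the matrix A used throughout this chapter, and the right hand side (1, 4, 1) is taken as printed, so treat the exact numbers as an assumption:

```python
# Row-reduce the augmented matrix [A | B], then back-substitute.
aug = [[3.0, 2.0, -1.0, 1.0],
       [1.0, -1.0, 1.0, 4.0],
       [2.0, 1.0, -1.0, 1.0]]
n = 3
for k in range(n):
    p = max(range(k, n), key=lambda r: abs(aug[r][k]))  # partial pivoting
    aug[k], aug[p] = aug[p], aug[k]
    for r in range(k + 1, n):
        f = aug[r][k] / aug[k][k]
        aug[r] = [v - f * w for v, w in zip(aug[r], aug[k])]

x = [0.0] * n
for k in range(n - 1, -1, -1):   # back-substitution
    s = sum(aug[k][c] * x[c] for c in range(k + 1, n))
    x[k] = (aug[k][n] - s) / aug[k][k]
```

Every intermediate augmented matrix along the way represents the same solution, which is exactly what the squiggle notation A ~ A' expresses.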

15. Inverses of Square Matrices

15.1 Matrix inverse

Suppose we have a system of equations

[ a b ] [ x ]   [ u ]
[ c d ] [ y ] = [ v ]

and that we write it in the matrix form

AX = B

Can we find another matrix, call it A^(-1), such that A^(-1) A = I?

If so, then we have

A^(-1) A X = A^(-1) B   and so   X = A^(-1) B
Thus we have found the solution of the original system of equations.

 1  
1 a b 1 d b
A = =
c d ad bc c a

• Use row-operations to reduce A to the identity matrix.

• Apply exactly the same row-operations to a matrix set initially to the identity.

• The final matrix is the inverse of A.

• Crack open the champagne.

Example 15.1
Find the inverse for

A = [ 1 7 ]
    [ 3 4 ]
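The 2 x 2 inverse formula can be checked directly on the matrix of Example 15.1 (entries as printed, an assumption):

```python
def inverse_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix has no inverse")
    # the formula: swap the diagonal, negate the off-diagonal, divide by det
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[1, 7], [3, 4]]        # here det = 4 - 21 = -17
Ainv = inverse_2x2(A)
# confirm A times Ainv gives the identity
prod = [[sum(A[i][j] * Ainv[j][k] for j in range(2)) for k in range(2)]
        for i in range(2)]
```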
Note that not all matrices will have an inverse. For example, if

A = [ a b ]
    [ c d ]

then

             1      [  d -b ]
A^(-1) =  -------  [ -c  a ]
          ad - bc

and for this to be possible we must have ad - bc ≠ 0.

We call this magic number the determinant of A. If it is zero then A does not have an inverse.

The question is: is there a similar rule for an N x N matrix? That is, a rule which can identify those matrices that have an inverse.
15.2 Determinants

The definition is a bit involved; here it is.

• For a 2 x 2 matrix A = [ a b ; c d ] define det A = ad - bc.

• For an N x N matrix A create a sub-matrix Sij of A by deleting row i and column j.

• Then define

det A = a11 det S11 - a12 det S12 + a13 det S13 - ... ± a1N det S1N

Thus to compute det A you have to compute a chain of determinants, from (N - 1) x (N - 1) determinants all the way down to 2 x 2 determinants. This is tedious and very prone to arithmetic errors!

15.2.1 Notation

We often write det A = |A|.

Example 15.2
Compute the determinant of

A = [ 1 7 2 ]
    [ 3 4 5 ]
    [ 6 0 9 ]
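The recursive definition of det A (expand about the first row) translates almost word for word into code. The matrices of Examples 15.2 and 15.4 are taken as printed, which is an assumption:

```python
def det(A):
    # expansion about the first row, exactly as in the definition above
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # sub-matrix: delete row 1 and column j+1
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

A = [[1, 7, 2], [3, 4, 5], [6, 0, 9]]   # Example 15.2, as printed
```

Each call strips one row and column, so the recursion bottoms out after N levels; this is exactly the tedious chain of smaller determinants described above.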

We can also expand the determinant about any row or column provided we observe the following pattern of signs.

[ + - + - . ]
[ - + - + . ]
[ + - + - . ]
[ - + - + . ]
[ . . . . . ]

Example 15.3
By expanding about the second row compute the determinant of

A = [ 1 7 2 ]
    [ 3 4 5 ]
    [ 6 0 9 ]

Example 15.4
Compute the determinant of

A = [ 1 2 7 ]
    [ 0 0 3 ]
    [ 1 2 1 ]
15.3 Inverse using determinants

• Select a row i and column j of A.

• Compute

(-1)^(i+j) det Sij / det A

• Store this at row j and column i in the inverse matrix.

• Repeat for all other entries in A.

That is, if

A = [ aij ]

then

A^(-1) = (1/det A) [ (-1)^(i+j) det Sji ]

This method for the inverse works but it is rather tedious.

The best way is to compute the inverse by Gaussian elimination, i.e. [A|I] [I|A1 ].
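A sketch of the [A|I] to [I|A^(-1)] procedure: row-reduce the augmented matrix until the left half is the identity, then read the inverse off the right half. The 2 x 2 matrix of Example 15.1 (entries as printed) is used as a check:

```python
def inverse(A):
    n = len(A)
    # augment A with the identity: [A | I]
    M = [list(row) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # partial pivoting
        if abs(M[p][k]) < 1e-12:
            raise ValueError("matrix is not invertible")
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]          # scale pivot row to 1
        for r in range(n):
            if r != k:                            # clear the rest of column k
                f = M[r][k]
                M[r] = [v - f * w for v, w in zip(M[r], M[k])]
    return [row[n:] for row in M]                 # right half is the inverse

A = [[1.0, 7.0], [3.0, 4.0]]
Ainv = inverse(A)
prod = [[sum(A[i][j] * Ainv[j][k] for j in range(2)) for k in range(2)]
        for i in range(2)]
```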

15.4 Vector cross products

The rule for a vector cross product can be conveniently expressed as a determinant. Thus if v = vx i + vy j + vz k and w = wx i + wy j + wz k then

        | i  j  k  |
v x w = | vx vy vz |
        | wx wy wz |
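Expanding that symbolic determinant about its first row gives the three components of v x w; a minimal sketch, with a sanity check that i x j = k:

```python
def cross(v, w):
    # expand the determinant | i j k ; vx vy vz ; wx wy wz | about row 1
    vx, vy, vz = v
    wx, wy, wz = w
    return (vy * wz - vz * wy,    # i component
            vz * wx - vx * wz,    # j component (note the sign flip)
            vx * wy - vy * wx)    # k component
```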

16. Eigenvalues and eigenvectors

16.1 Introduction

Okay, it's late in the afternoon, we're feeling a little sleepy and we need something to get our minds fired up. So we play a little game. We start with this simple 3 x 3 matrix

R = [ 1 2 0 ]
    [ 2 1 0 ]
    [ 0 0 3 ]

and when we apply R to the vector v = [0, 0, 1]T we observe the curious fact that the vector remains unchanged apart from an overall scaling by 3. That is,

Rv = 3v
Now we are wide awake and ready to play this game at full speed. Questions that come to mind would (should) include,

• Can we find vectors like v but with a different scaling?

This is a simple example of what is known as an eigenvector equation. The key feature is that the action of the matrix on
the vector produces a new vector that is parallel to the original vector (and in our case, it also happens to be 3 times as
long).
Eigenvalues and eigenvectors

If A is a square matrix and v is a non-zero column vector satisfying the matrix equation

Av = λv

then we say that the matrix A has eigenvalue λ with corresponding eigenvector v.

For the example of the 3 x 3 matrix given above we have an eigenvalue equal to 3 and a corresponding eigenvector of the form v = [0, 0, 1]T.

Example 16.1
Show that v = [8, 1]T is an eigenvector of the matrix

A = [ 6 -16 ]
    [ 1  -4 ]

Example 16.2
The matrix in example 16.1 has a second eigenvector, this time with the eigenvalue -2. Find that eigenvector.

Example 16.3
Let v1 and v2 be two eigenvectors of some matrix. Is it possible to choose α and β so that αv1 + βv2 is also an eigenvector?

• How many eigenvalues can a matrix have?

• How do we compute the eigenvalues?

• Is this just pretty mathematics or is there a point to this game?

Good questions indeed. Let's see what we make of them. We will start with the issue of constructing the eigenvalues (assuming, for the moment, that they exist).

16.2 Eigenvalues

Given an N x N matrix, our game here is to find the values of λ, if any, that allow the equation

Av = λv

to have non-zero solutions for v, that is, v ≠ 0. Assuming this is the case, then re-arrange the equation to

(A - λI) v = 0

where I is the N x N identity matrix. Since we are chasing non-zero solutions for v we must have the determinant of A - λI equal to zero. That is, we require that det(A - λI) = 0. This gives a polynomial equation in terms of λ.

Characteristic equation

The eigenvalues of an N x N matrix A are the solutions of the polynomial equation

det(A - λI) = 0

This is called the characteristic equation of A. If A is an N x N matrix, then this equation will be a polynomial of degree N in λ. The eigenvalues may be real distinct, real repeated or complex numbers.
Example 16.4
Compute both eigenvalues of

A = [ 6 -16 ]
    [ 1  -4 ]
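For any 2 x 2 matrix the characteristic equation is λ² - (trace)λ + det = 0, so the eigenvalues come straight from the quadratic formula. The signs of the Example 16.1 matrix are my reconstruction (an assumption), chosen so that v = [8, 1]T really is an eigenvector:

```python
import math

def eigenvalues_2x2(M):
    (a, b), (c, d) = M
    # det(A - lambda I) = lambda^2 - (a + d) lambda + (ad - bc) = 0
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4 * det
    root = math.sqrt(disc)          # this sketch assumes real eigenvalues
    return (tr + root) / 2, (tr - root) / 2

A = [[6, -16], [1, -4]]             # signs reconstructed, an assumption
lam1, lam2 = eigenvalues_2x2(A)
```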
We can now answer the previous question: How many eigenvalues can we find for a given matrix? If A is an N x N matrix then the characteristic equation will be a polynomial of degree N and so we can expect at most N distinct eigenvalues (one for each root). The key word here is distinct: it is possible that the characteristic equation has repeated roots. In such cases we will find fewer than N (distinct) eigenvalues, as shown in the following example.

Example 16.5
Show that the matrix

A = [ 1 3 ]
    [ 0 1 ]

has only one eigenvalue.

Example 16.6
Look carefully at the previous matrix. It describes a shear along the x-axis. Use this fact to argue that the matrix can have only one eigenvalue. This is a pure geometrical argument; you should not need to do any calculations.

Example 16.7 : A characteristic equation

Show that the characteristic equation for the matrix

A = [  3  2  1 ]
    [  3  4 -1 ]
    [ -1 -1  3 ]

is given by

λ³ - 10λ² + 27λ - 18 = 0
Example 16.8 : The eigenvalues
Show that the eigenvalues of example 16.7 are λ1 = 1, λ2 = 3 and λ3 = 6.

Example 16.9 : The eigenvector corresponding to λ1

We now know that the matrix

A = [  3  2  1 ]
    [  3  4 -1 ]
    [ -1 -1  3 ]

has an eigenvalue equal to 1. How do we compute the corresponding eigenvector? We return to the eigenvector equation (A - λI) v = 0 with λ = λ1 = 1, that is,

[  2  2  1 ] [ a ]   [ 0 ]
[  3  3 -1 ] [ b ] = [ 0 ]
[ -1 -1  2 ] [ c ]   [ 0 ]

in which v = [a, b, c]T is the eigenvector. Our game now is to solve this matrix equation for a, b and c. This we can do using Gaussian elimination. After the first stage, where we eliminate the lower triangular part, we obtain

[ -1 -1  2 ] [ a ]   [ 0 ]
[  0  0  5 ] [ b ] = [ 0 ]
[  0  0  0 ] [ c ]   [ 0 ]

Note that the last row is full of zeros. Are we surprised? No. Why not? Well, since we were told that the matrix A has λ = 1 as an eigenvalue we also know that det(A - (1)I) = 0, which in turn tells us that at least one of the rows of A - (1)I must be a (hidden) linear combination of the other rows (and Gaussian elimination reveals that hidden combination). So seeing a row of zeros is confirmation that we have det(A - (1)I) = 0. Now let's return to the matter of solving the matrix equation. Using back-substitution we find that every solution is of the form

[ a ]     [  1 ]
[ b ] = α [ -1 ]
[ c ]     [  0 ]

where α is any number. We can set α = 1 and this will give us a typical eigenvector for the eigenvalue λ1 = 1,

     [  1 ]
v1 = [ -1 ]
     [  0 ]

All other eigenvectors, for this eigenvalue, are parallel to this eigenvector (differing only in length). Is that what we expected, that there would be an infinite set of eigenvectors for a given eigenvalue? Yes, just look back at the definition, Av = λv. If v is a solution of this equation then so too is αv. This is exactly what we have just found.

Example 16.10 : The eigenvector corresponding to λ2

Now let's find the eigenvector corresponding to λ = λ2 = 3. We start with (A - (3)I) v = 0, that is,

[  0  2  1 ] [ a ]   [ 0 ]
[  3  1 -1 ] [ b ] = [ 0 ]
[ -1 -1  0 ] [ c ]   [ 0 ]

After performing Gaussian elimination we find

[ -1 -1  0 ] [ a ]   [ 0 ]
[  0 -2 -1 ] [ b ] = [ 0 ]
[  0  0  0 ] [ c ]   [ 0 ]

Using back-substitution we find that every solution is of the form

[ a ]     [  1 ]
[ b ] = α [ -1 ]
[ c ]     [  2 ]

where α is any number. We can set α = 1 and this will give us a typical eigenvector for the eigenvalue λ2 = 3,

     [  1 ]
v2 = [ -1 ] .
     [  2 ]
Example 16.11 : The eigenvector corresponding to λ3
Now let's find the eigenvector corresponding to λ = λ3 = 6. We start with (A - (6)I) v = 0, that is,

[ -3  2  1 ] [ a ]   [ 0 ]
[  3 -2 -1 ] [ b ] = [ 0 ]
[ -1 -1 -3 ] [ c ]   [ 0 ]

and after performing Gaussian elimination we find

[ -1 -1  -3 ] [ a ]   [ 0 ]
[  0 -5 -10 ] [ b ] = [ 0 ]
[  0  0   0 ] [ c ]   [ 0 ]

Using back-substitution we find that every solution is of the form

[ a ]     [  1 ]
[ b ] = α [  2 ]
[ c ]     [ -1 ]

where α is any number. We can set α = 1 and this will give us a typical eigenvector for the eigenvalue λ3 = 6,

     [  1 ]
v3 = [  2 ] .
     [ -1 ]
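As a numerical check of Av = λv for all three eigenpairs of this 3 x 3 example. The minus signs in A are my reconstruction of the printed matrix, so treat them as an assumption of this sketch:

```python
def matvec(A, v):
    # multiply each row of A into the column vector v
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Matrix of Examples 16.7-16.11, signs reconstructed (an assumption)
A = [[3, 2, 1],
     [3, 4, -1],
     [-1, -1, 3]]

pairs = {1: [1, -1, 0],    # lambda_1 = 1, v1
         3: [1, -1, 2],    # lambda_2 = 3, v2
         6: [1, 2, -1]}    # lambda_3 = 6, v3

checks = {lam: matvec(A, v) == [lam * x for x in v]
          for lam, v in pairs.items()}
```

If any entry of `checks` were False, the corresponding pair would not satisfy the eigenvector equation.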

Note: As the eigenvalues and eigenvectors exercises will show, it is possible for an N x N matrix to have repeated eigenvalues or even complex eigenvalues. The question of how to find the corresponding eigenvectors for repeated or complex eigenvalues will be addressed in ENG2005.

17. Introduction to ODEs

17.1 Motivation

The mathematical description of the real world is most commonly expressed in equations that involve not just a function
f (x) but also some of its derivatives. These equations are known as ordinary differential equations (commonly abbreviated
as ODEs). Here are some typical examples.

• Newtonian gravity       m d²r(t)/dt² = -GMm/r²

• Population growth       dN(t)/dt = λN(t)

• Hanging chain           d²y(x)/dx² = λ²y(x)

• Electrical currents     L dI(t)/dt + R I(t) = E sin(ωt)

The challenge for us is to find the functions that are solutions to these equations. The problem is that there is no systematic way to solve an ODE; thus we are forced to look at a range of strategies. This will be our game for the next few lectures. We will identify broad classes of ODEs and develop particular strategies for each class.

17.2 Definitions

Here are some terms commonly used in discussions on ODEs.

• Order : The order of an ODE is the order of the highest derivative in the ODE.

• Linear : The ODE only contains terms linear in the function and its derivatives.

• Non-linear : Any ODE that is not a linear ODE.

• Linear homogeneous : A linear ODE that allows y = 0 as a solution.

• Dependent variable : The solution of the ODE. Usually y.

• Independent variable : The variable that the solution of the ODE depends on. Usually x or t.

• Boundary conditions : A set of conditions that selects a unique solution of the ODE. Essential for numerical work.

• Initial value problem : An ODE with boundary conditions given at a single point. Usually found in time dependent problems.

• Boundary value problem : An ODE with boundary conditions specified at more than one point. Common in engineering problems.

Here are some typical ODEs (some of which we will solve in later lectures).

Linear first order homogeneous

cos(x) dy/dx + sin(x) y(x) = 0

Linear first order non-homogeneous

cos(x) dy/dx + sin(x) y(x) = e^(2x)

Non-linear second order

d²y/dx² + (dy/dx)² + y(x) = 0

Initial value problem

dN/dt = 2N(t) ,   N(0) = 123

Boundary value problem

d²y/dx² + 2(dy/dx)² - y(x) = 0 ,   y(0) = 0 ,   y(1) = ...

17.3 Solution strategies

There are at least three different approaches to solving ODEs and initial/boundary value problems.

• Graphical : This uses a graphical means, where the values of dy/dx are interpreted as a direction field, to trace out a particular solution of the ODE. Primarily used for initial value problems.

• Numerical : Here we use a computer to solve the ODE. This is a very powerful approach as it allows us to tackle ODEs not amenable to any other approach. Used primarily for initial and boundary value problems.

• Analytical : A full frontal assault with all the mathematical machinery we can muster. This approach is essential if you need to find the full general solution of the ODE.

In this unit we will confine our attention to the last strategy, leaving numerical and graphical methods for another day (no point over-indulging in these nice treats).

So let's get this show on the road with a simple example.

Example 17.1
Find all functions y(x) which obey

0 = dy/dx + 2x

First we rewrite the ODE as

dy/dx = -2x

then we integrate both sides with respect to x

∫ (dy/dx) dx = -2 ∫ x dx

But

∫ (dy/dx) dx = ∫ dy = y(x) - C

for any function y(x), where C is an arbitrary constant. Thus we have found that

y(x) = C - x²

is a solution of the ODE for any choice of constant C. All solutions of the ODE must be of this form (for a suitable choice of C).
Example 17.2
Find all functions y(x) such that

0 = dy/dx + 2xy

If we proceed as before we might arrive at

∫ (dy/dx) dx = -2 ∫ xy dx

The left hand side is easy to evaluate but the right hand side is problematic: we cannot easily compute its anti-derivative (we don't yet know y(x)). So we need a different approach. This time we shuffle the y onto the left hand side,

∫ (1/y)(dy/dx) dx = -2 ∫ x dx

But

∫ (1/y)(dy/dx) dx = ∫ (1/y) dy = C + log y

thus we find

log y = C - x²   and so   y(x) = A e^(-x²)

We succeeded in this example because we were able to shuffle all x terms to one side of the equation and all y terms to the other. This is an example of a separable equation. We shall meet these equations again in later lectures.

In both of these examples we found that one constant of integration popped up. This means that we found not one solution but a whole family, each member having a different value for C. This family of solutions is often called the general solution of the ODE. The role of boundary conditions (if given) is to allow a single member of the family to be chosen.
17.4 General and particular solutions

Each time we take an anti-derivative, one constant of integration pops up. For a first order ODE we will need one
anti-derivative and thus one constant of integration. But for, say, a third order equation, we will need to apply three
anti-derivatives, each providing one constant of integration. What is the point of this discussion? It is the key to spotting
when you have found all solutions of the ODE. This is what you need to know.

General solution of an ODE

If y(x) is a solution of an nth order ODE and if y(x) contains n independent integration constants then y(x) is the
general solution of the ODE. Every solution of the ODE will be found in this family.

Particular solution of an ODE

If y(x) is a solution of an nth order ODE and if y(x) contains no free constants, then y(x) is a particular solution
of the ODE.

Such solutions usually arise after the boundary conditions have been applied to the general solution.

18. Separable first order ODEs

18.1 Separable equations

In an earlier example we solved

dy/dx = -x/y

by first rearranging the equation so that y appeared only on the left hand side while x appeared only on the right hand side. Thus we found

∫ y dy = - ∫ x dx

and upon completing the integral for both sides we found

y²(x) = C - x²

This approach is known as separation of variables. It can only be applied to those ODEs that allow us to shuffle the x and y terms onto separate sides of the ODE.
Separation of variables

If an ODE can be written in the form

dy/dx = f(x)/g(y)

then the ODE is said to be separable and its solution may be found from

∫ g(y) dy = ∫ f(x) dx

Example 18.1
Show that the ODE

e^x 2y dy/dx = 1

is separable. Hence solve the ODE.

Example 18.2
Show that

sin(x) dy/dx + y² = cos(x)

is not separable.
Example 18.3
The number of bacteria in a colony is believed to grow according to the ODE

dN/dt = 2N

where N(t) is the number of bacteria at time t. Given that N = 20 initially, find N at later times.
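Separating variables in dN/dt = 2N gives N(t) = 20 e^(2t) once the initial condition N(0) = 20 is applied. The sketch below checks that this family really satisfies the ODE, using a central-difference estimate of the derivative:

```python
import math

# Solution of Example 18.3 by separation of variables: N(t) = N0 * exp(2t)
def N(t, N0=20.0, k=2.0):
    return N0 * math.exp(k * t)

# numerical check that dN/dt = 2 N(t) at a sample time
t, h = 0.7, 1e-6
deriv = (N(t + h) - N(t - h)) / (2 * h)   # central difference
```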

Example 18.4 : Newton's law of cooling

This is a simple model of how the temperature of a warm body changes with time.

The rate of change of the body's temperature is proportional to the difference between the ambient and body temperatures. Write down a differential equation that represents this model and then solve the ODE.

Example 18.5
Use the substitution u(x) = x² + y(x) to reduce the non-separable ODE

du/dx = u/x - 3x

to a separable ODE. Hence obtain the general solution for u(x).

18.2 First order linear ODEs

dy/dx + P(x) y = Q(x)

Example 18.6
Given

dy/dx + (1/x) y = 0

find y(x).

Separating the variables,

dy/y = -(1/x) dx

which gives

y(x) = C/x

Note that the above ODE has y(x) = 0 as a particular solution.

Whenever a linear ODE has y(x) = 0 as a solution we say that the ODE is homogeneous.

Example 18.7
Show that y(x) = x is a particular solution of

dy/dx + (1/x) y = 2

We call it a particular solution because it does not contain an arbitrary constant of integration.

This ODE looks very much like the previous example with the one small change that Q(x) = 2 rather than Q(x) = 0. We can expect that the general solution will be similar to the solution found in the previous example.
Example 18.8
Show that

y(x) = C/x + x

is the general solution of the ODE in the previous example.

Thus we have solved the ODE by a two step process, first by solving the homogeneous equation and second by finding any particular solution.

Suppose that yh(x) is the general solution of the homogeneous equation

dyh/dx + P(x) yh = 0

and suppose that yp(x) is any particular solution of

dy/dx + P(x) y = Q(x)

Then the general solution of the previous ODE is

y(x) = yh(x) + yp(x)

Note, in some books yh(x) is written as yc(x) and is known as the complementary solution.

Though this above procedure sounds easy we still have two problems: finding the homogeneous solution and finding a particular solution.

18.2.1 Finding the homogeneous solution

The homogeneous equation

dy/dx + P(x) y = 0

is separable,

dy/y = -P(x) dx

which we can integrate, with the result

y(x) = C e^(-∫P(x) dx)

Remember that this y(x) will be used as yh(x), the homogeneous solution of the non-homogeneous ODE.
Example 18.9
Verify the above solution for y(x)

18.2.2 Finding a particular solution

This usually involves some inspired guess work. The general idea is to look at Q(x) and then guess a class of functions for yp(x) that might be a solution of the ODE. If you include a few free parameters you may be able to find a particular solution; any particular solution will do.

19. The integrating factor

19.1 The Integrating factor

Example 19.1 : Easy

Use an inspired guess to find a particular solution of

dy/dx + 3y = sin(x)

Example 19.2 : Harder

Use an inspired guess to find a particular solution of

dy/dx + (1 + 3x) y = 3e^x

The main advantage of this method of inspired guessing (better known as the method of undetermined coefficients) is that it is easy to apply. The main disadvantage is that it is not systematic: it involves an element of guess work in finding the particular solution.

We begin by noticing that for any function I(x),

(1/I) d(Iy)/dx = dy/dx + y (1/I) dI/dx

The right hand side looks similar to the left hand side of our generic first order linear ODE. We can make it exactly the same by choosing I(x) such that

P(x) = (1/I) dI/dx

This is a separable ODE for I(x), with the particular solution

I(x) = e^(∫P(x) dx)

So why are we doing this? Because once we know I(x) our original ODE may be re-written as

(1/I) d(Iy)/dx = Q(x)

We can now integrate this,

d(Iy)/dx = I(x) Q(x)

∫ (d(Iy)/dx) dx = ∫ I(x) Q(x) dx

I(x) y(x) = ∫ I(x) Q(x) dx

y(x) = (1/I(x)) ∫ I(x) Q(x) dx

The great advantage with this method is that it works every time! No guessing!

The general solution of

dy/dx + P(x) y = Q(x)

is

y(x) = (1/I(x)) ∫ I(x) Q(x) dx

where the integrating factor I(x) is given by

I(x) = e^(∫P(x) dx)
Example 19.3
Find the general solution of

dy/dx + (1/x) y = 2

Here we have P(x) = 1/x and Q(x) = 2.
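With P(x) = 1/x the integrating factor is I(x) = e^(∫(1/x)dx) = x, so the recipe gives y(x) = (1/x) ∫ 2x dx = x + C/x. A numerical check that this family really satisfies the ODE (C = 3 is an arbitrary sample choice, an assumption of this sketch):

```python
# Candidate general solution of dy/dx + y/x = 2 via the integrating factor
def y(x, C=3.0):
    return x + C / x

def dydx(x, C=3.0, h=1e-6):
    # central-difference estimate of the derivative
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

x0 = 2.0
# residual of the ODE dy/dx + (1/x) y - 2, should be ~ 0
residual = dydx(x0) + y(x0) / x0 - 2.0
```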


20. Solutions to first order ODEs: A numerical method

20.1 Euler's method

Comparatively few differential equations can be integrated exactly to give a solution written in terms of elementary functions. Thus, to determine the form of the solutions of any other differential equations, we need to use a numerical method to calculate approximate solutions. Here we will, using Taylor series, derive a numerical method which is applicable to first order differential equations of the form

dy/dx = f(x, y)

where f is a function of both variables x and y. We wish to seek a function y(x) which will satisfy this differential equation for all x > a, given an initial point x = a at which the value y(a) is specified.

Example 20.1
From the theory of linear differential equations we can show the equation

dy/dx = x + y - 1 ,   x > 0

has the general solution

y(x) = C e^x - x

for any real number C.

To determine a unique solution for this problem, we need to use a specified value y0 at an initial point x = a. The value
y(a) = y0 is known as the initial condition.
Example 20.2
The differential equation

dy/dx = x + y - 1 ,   x > 0

with the initial condition y(0) = 1 has C = 1 and then the exact solution to this initial value problem is

y(x) = e^x - x   for all x > 0.

Example 20.3
The differential equation

dy/dx = x + y - 1 ,   x > 1

with the initial condition y(1) = 3 has C = 4e^(-1) and then the exact solution to this initial value problem is

y(x) = 4e^(x-1) - x   for all x > 1.

In general, the choice of initial condition y(a) = y0 is determined by the particular problem being solved.

How would we attempt to find an approximate solution to an initial value ordinary differential equation problem?

Euler's method is the simplest, and least accurate, of the many numerical methods that could be considered, but it does illustrate the general principle of the finite difference numerical methods you may see in other units.

The first feature of finite difference methods is that they can only approximate values of the solutions at a finite number of points, typically a sequence of points xn = a + nΔx for n = 0, 1, 2, . . . , N separated by a constant stepsize Δx. The particular choice of Δx depends on how accurate we wish the approximate solution to be; the smaller the value of Δx the more accurate the approximate solution will be.
The second feature of finite difference methods is that they follow a marching procedure, moving from the known value of y at x0 to find an approximate value of y at x1, then moving from that approximate value of y at x1 to find an approximate value of y at x2, and so on. Thus given the initial condition y0 = y(x0) = y(a) we use the differential equation

dy/dx (x0) = f(x0, y(x0))

evaluated at that point to help us determine an approximate value y1 for y(x1). One way of doing this is to note that the differential equation tells us the slope of the curve y(x) at x0, while we can estimate the slope between (x, y) = (x0, y(x0)) and (x, y) = (x1, y(x1)) by the gradient formula

m = (y(x1) - y(x0)) / Δx

for small Δx = x1 - x0. Combining these two results gives

f(x0, y(x0)) ≈ (y(x1) - y(x0)) / Δx

and therefore,

y(x1) ≈ y(x0) + Δx f(x0, y(x0)) .

The right-hand side of this equation can be used to define an approximate value y1 for the exact solution y at x1 using

y1 = y0 + Δx f(x0, y0) .

Having determined an approximate value y1 for y(x1), we now can proceed in a similar manner to find an approximate value y2 for y(x2) using

y2 = y1 + Δx f(x1, y1) .

The same process can be used indefinitely, leading to a sequence of approximate values yn for y(xn) given by the recurrence relation

yn+1 = yn + Δx f(xn, yn)   for n = 0, 1, . . . , N - 1.
Algorithm for Euler's method

Given a differential equation dy/dx = f(x, y) for x > a, an initial value y(a), a number of steps N and a final value b of x:

Set Δx to (b - a)/N
Set x to a
Set y to y(a)
For n from 1 to N do
    Set xL to x
    Set yL to y
    Set x to xL + Δx
    Set y to yL + Δx f(xL, yL)

then the y value at the nth step is an approximate value of y(a + nΔx).
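The algorithm above can be written directly in code; the sketch below reproduces the Δx = 0.25 value y(1) ≈ 1.4414 of Example 20.4:

```python
def euler(f, a, b, ya, N):
    """Approximate y on [a, b] for dy/dx = f(x, y), y(a) = ya, with N steps."""
    dx = (b - a) / N
    x, y = a, ya
    history = [(x, y)]
    for _ in range(N):
        y = y + dx * f(x, y)    # y_{n+1} = y_n + dx * f(x_n, y_n)
        x = x + dx
        history.append((x, y))
    return history

# Example 20.4: dy/dx = x + y - 1, y(0) = 1, exact solution y = e^x - x
f = lambda x, y: x + y - 1
steps = euler(f, 0.0, 1.0, 1.0, 4)      # dx = 0.25
approx = steps[-1][1]                   # approximation to y(1)
```

Halving Δx (doubling N) roughly halves the error at x = 1, consistent with the O(Δx) global truncation error discussed next.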

The magnitude of the error in Euler's method can be estimated by using a Taylor series expansion of y(x). For example, at x1 = x0 + Δx the exact solution y(x1) can be written in the Taylor series form

y(x1) = y(x0 + Δx) = y(x0) + Δx dy/dx(x0) + ((Δx)²/2) d²y/dx²(x0) + ...

and using the differential equation this becomes

y(x1) = y(x0) + Δx f(x0, y0) + ((Δx)²/2) d²y/dx²(x0) + ...

From the definition of the approximate value y1 it follows that the error |y1 - y(x1)| after one step (local truncation error) is of O((Δx)²). If a similar error occurs over each of the succeeding steps then at the fixed value of x = b, reached after N steps, the error (global truncation error) will be of order N(Δx)² = Δx (b - a), which is O(Δx). The global truncation error is the most significant error measure since it takes into account that extra steps are required to reach a fixed value of x as Δx is decreased.

Example 20.4
Consider the differential equation

dy/dx = x + y - 1 ,   x > 0

with the initial condition y(0) = 1 on the interval [0, 1].
Recall, we stated the exact solution for this initial value problem is

y(x) = e^x - x.

Euler's method for Δx = 0.5 gives

x     y
0.0   1
0.5   1
1.0   1.25

The error in the approximation at the fixed value of x = 1 is

|y2 - y(1)| = |1.25 - 1.7183| = 0.4683

Euler's method for Δx = 0.25 gives

x     y
0.0   1
0.25  1
0.5   1.0625
0.75  1.2031
1.0   1.4414

The error in the approximation at the fixed value of x = 1 is

|y4 - y(1)| = |1.4414 - 1.7183| = 0.2769

which is roughly half the error found for Δx = 0.5.

Lastly, if we plot the sequence of approximate values for Δx = 0.5, Δx = 0.25 and Δx = 0.125 it is clear that the approximation at x = 1 improves as we decrease Δx.

21. Homogeneous second order ODEs

21.1 Second order linear ODEs

The most general second order linear ODE is

P(x) d²y/dx² + Q(x) dy/dx + R(x) y = S(x) .

Such a beast is not easy to solve. So we are going to make life easy for ourselves by assuming P(x), Q(x), R(x) and S(x) are constants. Thus we will be studying the reduced class of linear second order ODEs of the form

a d²y/dx² + b dy/dx + c y = S(x)

where a, b, and c are constants.

No prizes for guessing that these are called constant coefficient equations.

We will consider two separate cases, the homogeneous equation where S(x) = 0 and the non-homogeneous equation where S(x) ≠ 0.

21.2 Homogeneous equations

Here we are trying to find all functions y(x) that are solutions of

a d²y/dx² + b dy/dx + c y = 0

Let's take a guess; let's try a solution of the form

y(x) = e^(λx)

We introduce the parameter λ as something to juggle in the hope that y(x) can be made to be a solution of the ODE. First we need the derivatives,

dy/dx = λ e^(λx)   and   d²y/dx² = λ² e^(λx)

Then we substitute this into the ODE

a λ² e^(λx) + b λ e^(λx) + c e^(λx) = 0
(a λ² + b λ + c) e^(λx) = 0
a λ² + b λ + c = 0 .

So we have a quadratic equation for λ; its two solutions are

λ1 = (-b + √(b² - 4ac)) / 2a   and   λ2 = (-b - √(b² - 4ac)) / 2a

Let's assume for the moment that λ1 ≠ λ2 and that they are both real numbers.

What does this all mean? Simply that we have found two distinct solutions of the ODE,

y1(x) = e^(λ1 x)   and   y2(x) = e^(λ2 x)

Now we can use two of the properties of the ODE, one, that it is linear and two, that it is homogeneous, to declare that

y(x) = A e^(λ1 x) + B e^(λ2 x)

is also a solution of the ODE for any choice of constants A and B.

Example 21.1
Prove the previous claim, that y(x) is a solution of the linear homogeneous ODE.

And now comes the great moment of enlightenment: the y(x) just given contains two arbitrary constants and, as the general solution of a second order ODE must contain two arbitrary constants, we now realise that y(x) above is the general solution.

Example 21.2 : Real and distinct roots

Find the general solution of

d²y/dx² + dy/dx - 6y = 0

First we solve the quadratic

λ² + λ - 6 = 0

for λ. This gives λ1 = 2 and λ2 = -3 and thus

y(x) = A e^(2x) + B e^(-3x)

The quadratic equation

a λ² + b λ + c = 0

arising from the guess y(x) = e^(λx) is known as the characteristic equation for the ODE.
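The whole recipe for real distinct roots can be checked numerically: solve the characteristic quadratic, form y = A e^(λ1 x) + B e^(λ2 x), and verify that it satisfies the ODE of Example 21.2. The values A = 1.5, B = -0.5 are an arbitrary sample choice (an assumption of this sketch):

```python
import math

# Example 21.2: y'' + y' - 6y = 0, characteristic roots 2 and -3
def y(x, A=1.5, B=-0.5):
    return A * math.exp(2 * x) + B * math.exp(-3 * x)

def first_deriv(g, x, h=1e-4):
    return (g(x + h) - g(x - h)) / (2 * h)

def second_deriv(g, x, h=1e-4):
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

x0 = 0.3
# residual of y'' + y' - 6y, should vanish (up to finite-difference error)
residual = second_deriv(y, x0) + first_deriv(y, x0) - 6 * y(x0)
```

Because the ODE is linear and homogeneous, the residual stays (numerically) zero for every choice of A and B, which is exactly why two arbitrary constants survive into the general solution.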

We have already studied one case where the two roots are real and distinct. Now we shall look at some examples where
the roots are neither real nor distinct.

## Example 21.3 : Complex roots

Find the general solution of

d2 y dy
2
2 + 5y = 0
dx dx

## First we solve the quadratic

2 2 + 5 = 0
for . This gives 1 = 1 2i and 2 = 1 + 2i. These are distinct but they are complex. Thats not a mistake just a venture
into slightly unfamiliar territory. The full solution is still given by

## for arbitrary constants A and B.

This is a perfectly correct mathematical expression and it is the solution of the ODE. However, in cases where the solution of the ODE is to be used in a real-world problem, we would expect y(x) to be a real-valued function of the real variable x. In such cases we must therefore allow both A and B to be complex numbers (in fact complex conjugates of each other). This is getting a bit messy, so it's common practice to re-write the general solution as follows. Using Euler's formula e^(±2ix) = cos(2x) ± i sin(2x),

y(x) = e^x ((A + B) cos(2x) + (−iA + iB) sin(2x))

Now A + B and −iA + iB are constants, so let's just replace them with a new C and a new D; that is, we write

y(x) = e^x (C cos(2x) + D sin(2x))

for arbitrary constants C and D.

This is the general solution of the ODE, written in a form suitable for use with real numbers.
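As a quick sanity check, the real form of the solution can be substituted back into the ODE of Example 21.3; below the derivatives are approximated by central differences so no hand differentiation is needed (the constants C, D and the sample points are arbitrary choices, not from the notes).

```python
import math

C, D = 2.0, -1.0   # arbitrary constants for illustration

def y(x):
    # proposed real-valued general solution of y'' - 2 y' + 5 y = 0
    return math.exp(x) * (C * math.cos(2 * x) + D * math.sin(2 * x))

def residual(x, h=1e-4):
    # approximate y' and y'' by central differences, then form the ODE residual
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return d2 - 2 * d1 + 5 * y(x)

for x in [0.0, 0.7, 1.5]:
    assert abs(residual(x)) < 1e-3, residual(x)
print("y = e^x (C cos 2x + D sin 2x) satisfies y'' - 2y' + 5y = 0")
```

The residual is only approximately zero because of the finite-difference approximation, hence the loose tolerance.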

## Example 21.4 : Equal roots

Find the general solution of

d²y/dx² + 2 dy/dx + y = 0

The characteristic equation λ² + 2λ + 1 = 0 has the repeated root

λ₁ = λ₂ = −1

If we now claimed that

y(x) = A e^(λ₁x) + B e^(λ₂x) = A e^(−x) + B e^(−x)

was the general solution we would be fooling ourselves. Why? Because in this case the two integration constants combine
into one

y(x) = A e^(−x) + B e^(−x)
     = (A + B) e^(−x)
     = C e^(−x)

where C = A + B. We need two independent constants in order to have a general solution.

Try a solution of the form

y(x) = (A + Bx) e^(−x)

This does have two independent constants and you can show that this is a solution of the ODE for any choice of A and B.
Thus it must also be the general solution.
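That claim is easy to check with exact derivatives, computed by hand with the product rule (A, B arbitrary):

```python
import math

A, B = 0.3, 4.0    # arbitrary constants

def y(x):
    return (A + B * x) * math.exp(-x)

def dy(x):
    # product rule: d/dx[(A + Bx) e^(-x)] = (B - A - Bx) e^(-x)
    return (B - A - B * x) * math.exp(-x)

def d2y(x):
    # differentiating again gives (A - 2B + Bx) e^(-x)
    return (A - 2 * B + B * x) * math.exp(-x)

# the residual y'' + 2 y' + y cancels identically
for x in [-2.0, 0.0, 1.0, 3.0]:
    assert abs(d2y(x) + 2 * dy(x) + y(x)) < 1e-12
print("(A + Bx) e^(-x) satisfies y'' + 2y' + y = 0 for any A, B")
```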

The upshot of all of this is that when solving the general linear second order homogeneous ODE we have three cases to consider: real and distinct roots, complex roots, and equal roots. The recipe to apply in each case is listed in the following table.
Constant coefficient 2nd order homogeneous ODEs

For the ODE

a d²y/dx² + b dy/dx + cy = 0

first solve the quadratic

aλ² + bλ + c = 0

for λ. Let the two roots be λ₁ and λ₂. Then for the general solution of the previous ODE there are three cases.

Case 1 : λ₁, λ₂ real, λ₁ ≠ λ₂       y(x) = A e^(λ₁x) + B e^(λ₂x)

Case 2 : λ₁,₂ = α ± iβ (complex)     y(x) = e^(αx) (C cos(βx) + D sin(βx))

Case 3 : λ₁ = λ₂ = λ                 y(x) = (A + Bx) e^(λx)


## 22. Non-Homogeneous Second order ODEs.

22.1 Non-homogeneous equations

This is what the typical non-homogeneous linear constant coefficient second order ordinary differential equation (phew!)
looks like

a d²y/dx² + b dy/dx + cy = S(x)

where a, b, c are constants and S(x) ≠ 0 is some given function. This differs from the homogeneous case only in that here we have S(x) ≠ 0.

Our solution strategy is very similar to that which we used on the general linear first order equation. There we wrote the
general solution as

y(x) = yh(x) + yp(x)

where yh is the general solution of the homogeneous equation and yp (x) is any particular solution of the ODE.

We will use this same strategy for solving our non-homogeneous 2nd order ODE.

Example 22.1
Find the general solution of

d²y/dx² + dy/dx − 6y = 1 + 2x
This proceeds in three steps, first, solve the homogeneous problem, second, find a particular solution and third, add the
two solutions together.
Step 1 : The homogeneous solution
Here we must find the general solution of

d²yh/dx² + dyh/dx − 6yh = 0

This is the ODE of Example 21.2, with general solution yh(x) = A e^(2x) + B e^(−3x).

Step 2 : The particular solution

Here we have to find any solution of the original ODE. Since the right hand side is a polynomial we try a guess of the form

yp (x) = a + bx

where a and b are numbers (which we have to compute).

Substitute this into the left hand side of the ODE and we find

d²(a + bx)/dx² + d(a + bx)/dx − 6(a + bx) = 1 + 2x

b − 6a − 6bx = 1 + 2x
This must be true for all x and so we must have

b − 6a = 1 and −6b = 2

from which we get b = −1/3 and a = −2/9 and thus

yp(x) = −2/9 − (1/3)x

Note: finding a particular solution by this guessing method is often called the method of undetermined coefficients.

Step 3 : The general solution

This is the easy bit:

y(x) = yh(x) + yp(x) = A e^(2x) + B e^(−3x) − 2/9 − (1/3)x
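The three steps can be verified together by substituting y = yh + yp back into the non-homogeneous ODE, with exact derivatives worked out term by term (A, B arbitrary):

```python
import math

A, B = 1.0, -2.0   # arbitrary constants in the homogeneous part

def y(x):
    return A * math.exp(2 * x) + B * math.exp(-3 * x) - 2.0 / 9.0 - x / 3.0

def dy(x):
    return 2 * A * math.exp(2 * x) - 3 * B * math.exp(-3 * x) - 1.0 / 3.0

def d2y(x):
    return 4 * A * math.exp(2 * x) + 9 * B * math.exp(-3 * x)

# y'' + y' - 6y should reproduce the right hand side 1 + 2x
for x in [0.0, 0.4, 1.0]:
    lhs = d2y(x) + dy(x) - 6 * y(x)
    assert abs(lhs - (1 + 2 * x)) < 1e-9
print("y = yh + yp reproduces the right hand side 1 + 2x")
```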

## 22.2 Undetermined coefficients

How do we choose a workable guess for the particular solution? Simply by inspecting the terms in S(x), the right hand
side of the ODE.
Here are some examples,

S(x) = (a + bx + cx² + ··· + dxⁿ) e^(kx)   try yp(x) = (e + fx + gx² + ··· + hxⁿ) e^(kx)

S(x) = (a sin(bx) + c cos(bx)) e^(kx)   try yp(x) = (e cos(bx) + f sin(bx)) e^(kx)

Example 22.2
What guesses would you make for each of the following?
S(x) = 2 + 7x²
S(x) = sin(2x) e^(3x)
S(x) = 2x + 3x³ + sin(4x) − 2x e^(3x)

22.3 Exceptions

Without exception there are always exceptions!

If S(x) contains terms that are solutions of the corresponding homogeneous equation then in forming the guess for the
particular solution you should multiply that term by x (and by x2 if the term corresponded to a repeated root of the
characteristic equation).
Example 22.3
Find the general solution for

d²y/dx² + dy/dx − 6y = e^(2x)

The homogeneous solution is

yh(x) = A e^(2x) + B e^(−3x)

and thus we see that our right hand side contains a piece of the homogeneous solution. The guess for the particular solution
would then be

yp(x) = (a + bx) e^(2x)

Now solve for a and b.
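Carrying out that calculation (my own working, not given in the notes): substituting yp = (a + bx) e^(2x) makes both a and the x-terms cancel, leaving 5b = 1, so b = 1/5, with a arbitrary because a e^(2x) belongs to the homogeneous solution. A finite-difference check:

```python
import math

# Hand working (own calculation, not from the notes):
#   yp  = (a + b x) e^(2x)
#   yp' = (2a + b + 2bx) e^(2x)
#   yp''= (4a + 4b + 4bx) e^(2x)
#   yp'' + yp' - 6 yp = 5b e^(2x)   (both a and x cancel)
# so 5b = 1 gives b = 1/5, and a is arbitrary.
a, b = 0.0, 1.0 / 5.0

def yp(x):
    return (a + b * x) * math.exp(2 * x)

def residual(x, h=1e-4):
    # central-difference check of yp'' + yp' - 6 yp - e^(2x)
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return d2 + d1 - 6 * yp(x) - math.exp(2 * x)

for x in [0.0, 0.5, 1.0]:
    assert abs(residual(x)) < 1e-3
print("yp = (x/5) e^(2x) is a particular solution")
```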


## 23. Applications of Differential Equations

23.1 Applications of ODEs

In the past few lectures we studied, in detail, various techniques for solving a wide variety of differential equations. What
we did not do is ask why we would want to solve those equations in the first place. A simple (but rather weak) answer
is that it is a nice intellectual challenge. A far better answer is that these ODEs arise naturally in the study of a vast
array of physical problems, such as population dynamics, the spread of infectious diseases, the cooling of warm bodies, the
swinging motion of a pendulum and the motion of planets. In this lecture we shall look at some of these applications.

In each of the following examples we will not spend time computing the solution of the ODE this is left as an exercise
for the (lucky) student!

## 23.2 Newton's law of cooling

Newton's law of cooling states that the rate of change of the temperature of a body is directly proportional to the temperature difference between the body and its surrounding environment. Let the temperature of the body be T and let Ta be that of the surrounding environment (the ambient temperature). Then Newton's law of cooling is expressed in mathematical terms as

dT/dt = −k(T − Ta)

## where k is some constant.

This is a simple non-homogeneous first order linear differential equation. Its general solution is

T(t) = Ta + A e^(−kt)
To apply this equation to a specific example we would need information that allows us to assign numerical values to the
three parameters, Ta , k, and A.

## Example 23.1 : A murder scene

We can use Newton's law of cooling to estimate the time of death at a murder scene. Suppose the temperature of the body has been measured at 30 deg C. The normal body temperature is 37 deg C. So the question is: how long does it take for the body to cool from 37 deg C to 30 deg C? To answer this we need values for Ta, k, and A. Suppose the room temperature was 20 deg C and thus Ta = 20. For k we need to draw upon previous experiments (how?). These show that a body left to cool in a 20 deg C room will drop from 37 deg C to 35 deg C in 2 hours. Substitute this into the above equation and we
have

T(0) = 37 = 20 + A e⁰
T(2) = 35 = 20 + A e^(−2k)

Two equations in two unknowns, A and k. These are easy to solve, leading to

 
A = 17 and k = (1/2) logₑ(17/15) ≈ 0.06258

Thus
T(t) = 20 + 17 e^(−0.06258t)

Now for the time of the murder. Put T(t) = 30 and solve for t:

30 = 20 + 17 e^(−0.06258t)   ⟹   t = −(1/0.06258) logₑ(10/17) ≈ 8.5 hours
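The arithmetic of this example is easy to reproduce:

```python
import math

Ta = 20.0                        # ambient temperature (deg C)
A = 37.0 - Ta                    # from T(0) = 37 = 20 + A, so A = 17
k = 0.5 * math.log(17.0 / 15.0)  # from T(2) = 35 = 20 + 17 e^(-2k)

def T(t):
    return Ta + A * math.exp(-k * t)

# sanity checks against the calibration data
assert abs(T(0) - 37.0) < 1e-9
assert abs(T(2) - 35.0) < 1e-9

# time for the body to cool from 37 deg C to 30 deg C
t_death = math.log(17.0 / 10.0) / k
print(round(k, 5), round(t_death, 2))   # approximately 0.06258 and 8.48 hours
```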

## 23.3 Pollution in swimming pools

Swimming pools should contain just two things: people and pure water. Yet all too often the water is not pure. One way of cleaning the pool would be to pump in fresh water (at one point in the pool) while extracting the polluted water (at some other point in the pool). Suppose we assume that the pool's water remains thoroughly mixed (despite one entry and exit point) and that the volume of water remains constant. Can we predict how the level of pollution changes with time?

Suppose at time t there is y(t) kg of pollutant in the pool and that the volume of the pool is V litres. Suppose also that pure water is flowing in at the rate ρ litres/min and, since the volume remains constant, the outflow rate is also ρ litres/min.

Now we will set up a differential equation that describes how y(t) changes with time.

Consider a small time interval, from t to t + Δt, where Δt is a small number. In that interval ρΔt litres of polluted water was extracted. How much pollutant did this carry? As the water is uniformly mixed, we conclude that the density of the pollutant in the extracted water is the same as that in the pool. The density in the pool is y/V kg/L and thus the amount of pollutant carried away was (y/V)(ρΔt). In the same small time interval no new pollutants were added to the pool. Thus any change in y(t) occurs solely from the flow of pollutants out of the pool. We thus have

y(t + Δt) − y(t) = −(y/V) ρΔt

This can be reduced to a differential equation by dividing through by Δt and then taking the limit as Δt → 0. The result is

dy/dt = −(ρ/V) y

The general solution is

y(t) = y(0) e^(−ρt/V)

Example 23.2
Suppose the water pumps could empty the pool in one day. How long would it take to halve the level of pollution?
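One reading of this exercise (an interpretation, since the notes leave it open): "the pumps could empty the pool in one day" taken to mean ρ × (1 day) = V, i.e. ρ/V = 1 per day. Then:

```python
import math

# Assumption: "empty the pool in one day" means rho * (1 day) = V,
# i.e. rho / V = 1 per day (an interpretation, not stated in the notes).
rho_over_V = 1.0               # per day

def y(t, y0=1.0):
    # pollution level after t days, starting from y0
    return y0 * math.exp(-rho_over_V * t)

# halving time: y0 e^(-t) = y0 / 2  =>  t = ln 2 days
t_half = math.log(2.0) / rho_over_V
assert abs(y(t_half) - 0.5) < 1e-12
print(round(t_half, 3), "days =", round(t_half * 24, 1), "hours")
```

Under that interpretation the pollution halves in ln 2 ≈ 0.693 days, roughly 16.6 hours.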

## 23.4 Newtonian mechanics

The original application of ODEs was made by Newton (in his early twenties, in the 1660s) in the study of how things move. He formulated a set of laws, Newton's laws of motion, one of which states that the net force acting on a body equals the mass of the body times the body's acceleration.

Let F be the force and let r(t) be the position vector of the body. Then the body's velocity and acceleration are defined by

v(t) = dr/dt

a(t) = dv/dt = d²r/dt²

Then Newton's (second) law of motion may be written as

m d²r/dt² = F

If we know the force acting on the object then we can treat this as a second order ODE for the particle's position r(t). The usual method of solving this ODE is to write r(t) = x(t)i + y(t)j + z(t)k and to re-write the above ODE as three separate ODEs, one each for x(t), y(t) and z(t).

m d²x/dt² = Fx

m d²y/dt² = Fy

m d²z/dt² = Fz

where Fx , Fy , Fz are the components of the force in the directions of the (x, y, z) axes, F = Fx i + Fy j + Fz k.

## Example 23.3 : Planetary motion

Newton also put forward a theory of gravitation: there exists a universal force of gravity, applicable to every lump of matter in the universe, such that for any pair of objects the force felt by each object is given by

F = G m₁ m₂ / r²

where m₁ and m₂ are the (gravitational) masses of the respective bodies, r is the distance between the two bodies and G is a constant (known as the Newtonian gravitational constant, found by experiment to be 6.673 × 10⁻¹¹ N m²/kg²). The force is directed along the line connecting the two objects.

Consider the motion of the Earth around the Sun. Each body will feel a force of gravity acting to pull the two together. Each body will move due to the action of the force imposed upon it by the gravitational pull of its partner. However, as the Sun is far more massive than the Earth, the Sun will, to a very good approximation, remain stationary while the Earth goes about its business.

v The Earth orbits the Sun in the z = 0 plane.

Let r(t) = x(t)i + y(t)j be the position vector of the Earth. The force acting on the Earth due to the gravitational pull of
the Sun is then given by

F = −(GMm/r²) r̂

where r̂ is a unit vector parallel to r, M is the mass of the Sun and m is the mass of the Earth. The minus sign shows that the force is pulling the Earth toward the Sun. The unit vector is easy to compute: r̂ = (x i + y j)/r. Thus we have, finally,
m d²x/dt² = −(GMm/r²)(x/r)

m d²y/dt² = −(GMm/r²)(y/r)

This is a non-linear coupled system of ODEs. Such systems are not easy to solve, so we resort to simpler approximations (in other Maths subjects!).

## Example 23.4 : Simple Harmonic Motion

Many physical systems display an oscillatory behaviour, such as a swinging pendulum or a hanging weight attached to a spring. It seems reasonable then to expect the sine and cosine functions to appear in the description of these systems. So what type of differential equation might we expect to see for such oscillatory systems? Simply those ODEs that have the sine and cosine functions as typical solutions. We saw in previous lectures that the ODE

d²y/dt² = −k²y

has

y(t) = A cos(kt) + B sin(kt)

as its general solution. This is the classic example of what is called simple harmonic motion. Both the swinging pendulum and the weighted spring are described (actually, approximated) by the above simple harmonic equation.

## 24. The Laplace Transform

24.1 What can the Laplace transform do?

Laplace transforms can be used to assist in solving differential equations by:

v transforming the differential equation into a simpler problem;

v solving the simpler problem;
v transforming the solution back to obtain the solution of the original problem.

They are most commonly used for time-dependent problems where the state of a system is known at some initial time
t = 0, say, and we want to examine the behaviour of the system for a later time t > 0.
In this unit we will use them to solve ordinary differential equations in time, such as those that arise from circuit theory
in electronics or from mass-transfer and reaction processes in chemical applications. In practice, however, they can also
be used to solve partial differential equations, such as those which will be seen in ENG2091/ENG2005 (for example, the
heat diffusion equation).
In this unit we will mostly consider Laplace transforms as a function of a real variable, but in practice engineers and
applied mathematicians often use them in terms of a complex-valued variable. The latter is made use of in some of the
complex analysis techniques covered in ENG2092/ENG2006.

## 24.2 Definition of the Laplace transform

For appropriate functions f(t) which are defined for all t ≥ 0, the Laplace transform of f is the function F(s) such that

F(s) = ∫₀^∞ f(t) e^(−st) dt

whenever that integral exists. In this unit we will usually treat s as a real-valued variable.
Notes:
v It is traditional to denote the Laplace transform of any function by the corresponding capital letter, for example the
Laplace transform of another function g(t) would usually be written as G(s).

v Notice that F is a function of a new variable s. Effectively we are changing from f in terms of the time domain
variable t to F in terms of the Laplace domain variable s.

v The transformed function F need not exist for every real value of s, in fact often the integral does not exist for s < 0.

v Sometimes we refer to the process of taking the Laplace transform of f as L{f} or L{f(t)}, using the script letter L to denote the transform operation.

v Taking a Laplace transform is an invertible process, and if F = L{f} then we refer to f as the inverse Laplace transform of F, sometimes written as f = L⁻¹{F}.

v Books can differ slightly with this notation, for example, compare James with Kreyszig.

The Laplace transforms of a lot of common functions can be tabulated and used without the need to actually evaluate any integrals every time; we will see some of these over the next few lectures.

There are also a number of useful properties of the Laplace transform process which can help us determine the Laplace transforms of more complicated functions, also without needing to evaluate any additional integrals. For example, we will see that it is possible to express the Laplace transform L{f′} of the derivative f′(t) very simply in terms of the transform F = L{f} of f(t).

## 24.3 Some simple Laplace transforms

Example 24.1
From the definition above, the Laplace transform of the constant function f(t) = 1 for all t ≥ 0 is

L{1} = ∫₀^∞ 1 · e^(−st) dt

Although s is a variable here, since the value of the integral depends upon it, when the integral is being evaluated we treat
s as a fixed constant. We only vary s once we have the answer.
First, notice that for any fixed value of s > 0 the integrand is an exponentially decreasing function that tends to zero for large values of t, and so the improper integral exists. Once we know it exists, the improper integral can be evaluated using the anti-derivative of e^(−st), with

∫₀^∞ 1 · e^(−st) dt = lim_{τ→∞} ∫₀^τ 1 · e^(−st) dt   for 0 < τ < ∞

                    = lim_{τ→∞} [−(1/s) e^(−st)]₀^τ

                    = lim_{τ→∞} −(1/s)(e^(−sτ) − 1)

                    = 1/s   for any fixed value of s > 0.

The Laplace transform of the function f(t) = 1 is therefore

L{1} = 1/s   for s > 0.

In a similar way it can be shown that for any constant a the exponential function f(t) = e^(at) for all t ≥ 0 has Laplace transform

L{e^(at)} = 1/(s − a)   for s > a.
Notice that when a = 0 this reduces to the result above for f (t) = 1. (It is always wise to cross-check!)
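Both results can be cross-checked by brute-force numerical integration of the defining integral. The quadrature routine below is a crude truncated trapezoidal rule, written purely for illustration (the truncation point T and step count n are arbitrary choices):

```python
import math

def laplace_numeric(f, s, T=60.0, n=100000):
    # crude trapezoidal approximation of F(s) = integral_0^inf f(t) e^(-st) dt,
    # truncated at t = T (valid only when the tail beyond T is negligible)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# L{1} = 1/s
for s in [0.5, 1.0, 3.0]:
    assert abs(laplace_numeric(lambda t: 1.0, s) - 1.0 / s) < 1e-3

# L{e^(at)} = 1/(s - a) for s > a
a = 1.0
assert abs(laplace_numeric(lambda t: math.exp(a * t), 2.5) - 1.0 / (2.5 - a)) < 1e-3
print("numerical quadrature agrees with L{1} = 1/s and L{e^at} = 1/(s-a)")
```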
24.4 Linearity of Laplace transforms

The collection of Laplace transform pairs, or corresponding functions f(t) and F(s) = L{f(t)}, can be expanded considerably by using some simple properties of the Laplace transform process.
The simplest property is the linearity of the transform process. If the functions f(t) and g(t) are defined for t ≥ 0 and have Laplace transforms L{f} and L{g}, then from the definition

L{f + g} = ∫₀^∞ (f(t) + g(t)) e^(−st) dt

         = ∫₀^∞ f(t) e^(−st) dt + ∫₀^∞ g(t) e^(−st) dt

         = L{f} + L{g}
using the linearity property of integrals. The process that we use to prove this property is also important, and will be
useful for demonstrating other properties of Laplace transforms.
Similarly, if c is any real constant then

L{cf} = ∫₀^∞ (c f(t)) e^(−st) dt

      = c ∫₀^∞ f(t) e^(−st) dt

      = c L{f} .

Combining these, we obtain the general linearity property for any constants a and b:

L{a f(t) + b g(t)} = a L{f(t)} + b L{g(t)} .

For example, this can be used with the results earlier to determine the Laplace transforms of the hyperbolic functions sinh(λt) and cosh(λt) for constant λ, as well as transient functions like f(t) = 1 − e^(−t).
24.5 What sort of functions have Laplace transforms?

For a function f(t) which is defined for t ≥ 0 to have a Laplace transform, the integral

F(s) = ∫₀^∞ f(t) e^(−st) dt

must exist for at least some values of s. This means that f must be integrable for all t ≥ 0, and must also not grow so rapidly as t → ∞ that the improper integral does not have a finite limit for any s.

Sufficient conditions for F (s) to exist in most engineering applications are that f must:

v be piecewise continuous, so that f is continuous except at a finite number of finite jumps over the domain t ≥ 0; and

v have sub-exponential growth, so that |f(t)| ≤ M e^(λt) for some constants M and λ.

Example 24.2
The function f(t) = e^(at) for any constant a is both continuous and sub-exponential (with M = 1 and λ = a).

Example 24.3
The unit step function u(t) that will be used in a later lecture is both piecewise continuous and sub-exponential (with M = 1 and λ = 0, for example).

Example 24.4
There are no constants M and λ for which f(t) = exp(t²) can be bounded by |f(t)| ≤ M e^(λt) for all values of t ≥ 0, and hence its improper integral over [0, ∞) does not exist for any real value of s. As a result, the function f(t) = exp(t²) does not have a Laplace transform.
Example 24.5
The function f(t) = 1/(1 − t) does not have a Laplace transform, in this case because f(t) is not integrable near t = 1 and so F(s) does not exist for any real value of s.

Note, however, that some functions that do not satisfy the sufficient conditions above can still have Laplace transforms; for example, later we will find the transform of f(t) = 1/√t, even though it is not continuous at t = 0.

## 25. Inverting Laplace transforms

25.1 Reversing the process - finding inverse Laplace transforms

As mentioned previously, Laplace transforms can be used to assist in solving differential equations by:

v transforming the differential equation into a simpler problem;

v solving the simpler problem;

v transforming the solution of that back into the solution of the original problem.

This solution procedure is only effective if we can perform the final step, which involves inverting the Laplace transform
process, in a straightforward manner. This means that having found the transform G(s) = L{g}, say, of the solution we
want to recover the unknown function g(t) as simply as possible.

This can be done using either:

(a) an inversion formula based on integration in the complex plane; or

(b) inspection and manipulation, along with a table of known transforms and their properties.

In this unit we mostly follow the simpler approach based on a table of known transforms and properties, rather than use
integrals in the complex plane to evaluate the inverse transforms.

The tables-based approach requires that we rearrange a given transform F (s) into an equivalent combination of known
transforms that are all listed on our table. Typically, this also requires using some known properties of Laplace transforms
- including the linearity property in the previous lecture.
Example 25.1
If the Laplace transform of our desired solution f (t) were
F(s) = 1/(s(s + 1))   for s > 0

then we can use a partial fraction expansion to write this as

F(s) = 1/s − 1/(s + 1)

The reason for rearranging F(s) into that form is that the two transforms

F₁(s) = 1/s and F₂(s) = 1/(s + 1)

were seen in the previous lecture to arise from transforming f₁(t) = 1 and f₂(t) = e^(−t) respectively, with

L{1} = 1/s and L{e^(at)} = 1/(s − a), so that L{e^(−t)} = 1/(s + 1).

As a result, F(s) can be written in the form F(s) = L{1} − L{e^(−t)} and using the linearity property we have that F(s) = L{1 − e^(−t)}, and hence f(t) = 1 − e^(−t) for all t ≥ 0.
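The whole chain (partial fractions, then term-by-term inversion) can be checked numerically by transforming the claimed inverse f(t) = 1 − e^(−t) forward again; the quadrature routine is a crude illustrative sketch:

```python
import math

def F(s):
    return 1.0 / (s * (s + 1.0))

def F_partial(s):
    return 1.0 / s - 1.0 / (s + 1.0)

# the partial fraction expansion agrees with the original transform
for s in [0.3, 1.0, 2.7, 10.0]:
    assert abs(F(s) - F_partial(s)) < 1e-12

def laplace_numeric(f, s, T=60.0, n=100000):
    # crude trapezoidal approximation of integral_0^T f(t) e^(-st) dt
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# forward check of the claimed inverse f(t) = 1 - e^(-t)
s = 1.5
approx = laplace_numeric(lambda t: 1.0 - math.exp(-t), s)
assert abs(approx - F(s)) < 1e-3
print("L{1 - e^(-t)} matches 1/(s(s+1))")
```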
The key steps to the tables-based inversion process are to:

v establish a table of known Laplace transforms and properties; and

v manipulate a given transform so that all of its terms can be inverted using entries on the table.

Note: In practice two functions can have minor differences but still have the same Laplace transform, for example if they
differ only at a single point then the values of their integrals are not affected. The inversion process therefore cannot be
absolutely precise about values of f at jump discontinuities.
25.2 Laplace transforms of powers

It was seen in earlier lectures on ordinary differential equations that positive integer powers of t, such as t, t², t³, ..., often appear in solutions of differential equations. We therefore need to include their Laplace transforms in our table so that we can identify such terms during the inversion process.

Example 25.2
When f(t) = t (the ramp function) we can use integration by parts to deduce that

L{t} = ∫₀^∞ t e^(−st) dt

     = lim_{τ→∞} ∫₀^τ t e^(−st) dt   for 0 < τ

     = lim_{τ→∞} ( [−(t/s) e^(−st)]₀^τ + (1/s) ∫₀^τ e^(−st) dt )

     = lim_{τ→∞} ( −(τ/s) e^(−sτ) + [−(1/s²) e^(−st)]₀^τ )

     = lim_{τ→∞} ( −(τ/s) e^(−sτ) − (1/s²)(e^(−sτ) − 1) )

     = 1/s² .

More generally it can be shown that

L{tⁿ} = n! / s^(n+1)

for any non-negative integer n, where n! is the factorial of n. (Recall that n! = 1 × 2 × 3 × ··· × (n−1) × n.)

For powers of t that are not positive integers this result can be generalised to the form

L{t^α} = Γ(α + 1) / s^(α+1)   for any value of α > −1

where Γ is known as the Gamma function. This is the extension of the factorial to non-integer values (with Γ(n + 1) = n! for integers n) and it has Γ(α + 1) = αΓ(α) for all α. Also Γ(1/2) = √π, for example.
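Both the integer and non-integer cases can be checked against a crude numerical quadrature; Python's `math.gamma` supplies Γ (the truncation parameters are arbitrary illustrative choices):

```python
import math

def laplace_numeric(f, s, T=60.0, n=100000):
    # trapezoidal approximation of integral_0^T f(t) e^(-st) dt
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
# L{t^n} = n! / s^(n+1) for n = 0, 1, 2, 3
for n in range(4):
    exact = math.factorial(n) / s ** (n + 1)
    assert abs(laplace_numeric(lambda t, n=n: t ** n, s) - exact) < 1e-3

# non-integer power: L{t^(1/2)} = Gamma(3/2) / s^(3/2)
exact = math.gamma(1.5) / s ** 1.5
assert abs(laplace_numeric(lambda t: math.sqrt(t), s) - exact) < 1e-3
print("L{t^alpha} = Gamma(alpha + 1) / s^(alpha + 1) checks out numerically")
```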

## 25.3 The s-shifting property

The number of known transforms can be extended by recognising a simple property of the transform process: if F(s) is the Laplace transform of f(t) then

F(s) = ∫₀^∞ f(t) e^(−st) dt

and replacing s in this by (s − a), for any constant a, and using the index laws gives that

F(s − a) = ∫₀^∞ f(t) e^(−(s−a)t) dt

         = ∫₀^∞ (f(t) e^(at)) e^(−st) dt

         = L{f(t) e^(at)} .


This result, that

F(s) = L{f(t)} implies that F(s − a) = L{f(t) e^(at)},

is often known as the s-shifting property, and it can both help us calculate new Laplace transforms and help identify
inverse transforms.
Graphically and analytically, the s-shifting property implies that a shift in the graph of the function F to the right by an amount a, or replacing F(s) by F(s − a), corresponds to multiplying the original function f by the exponential e^(at), with f(t) replaced by f(t) e^(at).
As before, the key technique here is to be able to spot a known transform that has been s-shifted.

Example 25.3
Notice the relationship between L{1} = 1/s and L{e^(at)} = 1/(s − a) that were seen in the previous lecture: the second is just the first with s shifted to s − a.
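A numerical sketch of the s-shifting property, using F(s) = L{t} = 1/s² (the choices of a, s and the quadrature parameters are arbitrary):

```python
import math

def laplace_numeric(f, s, T=60.0, n=100000):
    # trapezoidal approximation of integral_0^T f(t) e^(-st) dt
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

a, s = 1.0, 3.0

# F(s) = L{t} = 1/s^2, so the s-shifting property predicts
# L{t e^(at)} = F(s - a) = 1/(s - a)^2
shifted = laplace_numeric(lambda t: t * math.exp(a * t), s)
assert abs(shifted - 1.0 / (s - a) ** 2) < 1e-3

# and L{1} = 1/s shifts to L{e^(at)} = 1/(s - a), as in Example 25.3
assert abs(laplace_numeric(lambda t: math.exp(a * t), s) - 1.0 / (s - a)) < 1e-3
print("s-shifting property verified numerically")
```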

## 25.4 A preliminary table of some Laplace transforms

Based on results to date, we can start writing a table for use with Laplace transform problems:

f(t)                         L{f} = F(s) = ∫₀^∞ f(t) e^(−st) dt

1                            1/s                  for s > 0

e^(at)                       1/(s − a)            for s > a

sinh(λt)                     λ/(s² − λ²)          for s > |λ|

cosh(λt)                     s/(s² − λ²)          for s > |λ|

tⁿ for integer n ≥ 0         n!/s^(n+1)           for s > 0

t^α for α > −1               Γ(α + 1)/s^(α+1)     for s > 0

f(t) e^(at)                  F(s − a)

## 26. Laplace transforms of derivatives

26.1 Laplace transforms of first-order derivatives

The Laplace transform L{f′} of the derivative f′(t) of a given differentiable function f(t) is given by

L{f′} = ∫₀^∞ f′(t) e^(−st) dt

whenever that integral exists. It turns out that this expression can also be written in terms of the Laplace transform F(s) = L{f} of f(t) itself. To see this, use integration by parts on the expression above, with

∫₀^∞ f′(t) e^(−st) dt = lim_{τ→∞} ∫₀^τ f′(t) e^(−st) dt   for 0 < τ

                      = lim_{τ→∞} ( [f(t) e^(−st)]₀^τ − ∫₀^τ f(t)(−s) e^(−st) dt )

                      = lim_{τ→∞} ( f(τ) e^(−sτ) − f(0) + s ∫₀^τ f(t) e^(−st) dt )

                      = s ∫₀^∞ f(t) e^(−st) dt − f(0)

                      = s F(s) − f(0)

so that

L{f′} = s F(s) − f(0)   where F(s) = L{f} .

In terms of Laplace transforms, the differentiation operation is replaced by an algebraic operation. This powerful result is
the basis of using Laplace transforms to help solve differential equations.
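This derivative property can be sanity-checked numerically for a concrete f; below f(t) = e^(at) with an arbitrary a (a sketch, not part of the notes):

```python
import math

def laplace_numeric(f, s, T=60.0, n=100000):
    # trapezoidal approximation of integral_0^T f(t) e^(-st) dt
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

a, s = -0.5, 2.0
f = lambda t: math.exp(a * t)        # f(t) = e^(at), so f'(t) = a e^(at)
fprime = lambda t: a * math.exp(a * t)

F = laplace_numeric(f, s)
lhs = laplace_numeric(fprime, s)     # L{f'} computed directly
rhs = s * F - f(0.0)                 # s F(s) - f(0) from the property
assert abs(lhs - rhs) < 1e-3
print("L{f'} = s F(s) - f(0) verified for f = e^(at)")
```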
26.2 Initial-value problems for first-order linear ordinary differential equations

To illustrate the application of Laplace transforms to linear differential equations, consider the problem where some unknown
function y(t) satisfies the first-order initial-value problem

dy/dt + 2y = 2   with initial condition y(0) = 2.

You learned how to solve this in previous lectures, but alternatively we can use Laplace transforms and seek the transform Y(s) = L{y(t)} of the solution. To find Y, take the Laplace transform of the differential equation using the derivative property, so that

L{dy/dt} + L{2y} = L{2}

which gives

(sY(s) − y(0)) + 2Y(s) = 2/s

and then applying the initial condition y(0) = 2 becomes

(sY(s) − 2) + 2Y(s) = 2/s,

and hence

Y(s) = 2(s + 1) / (s(s + 2)) .

Using partial fractions, the Laplace transform Y of the solution y can be written as

Y(s) = 1/s + 1/(s + 2)

and inverting using our table gives that y(t) = 1 + e^(−2t). Yet no differentiation or integration was involved!
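The recovered solution is easily verified against the original initial-value problem, with derivatives computed by hand:

```python
import math

def y(t):
    # solution obtained via Laplace transforms
    return 1.0 + math.exp(-2.0 * t)

def dy(t):
    return -2.0 * math.exp(-2.0 * t)

assert abs(y(0.0) - 2.0) < 1e-12               # initial condition y(0) = 2
for t in [0.0, 0.5, 1.0, 4.0]:
    assert abs(dy(t) + 2.0 * y(t) - 2.0) < 1e-12   # dy/dt + 2y = 2
print("y(t) = 1 + e^(-2t) solves the initial-value problem")
```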
26.3 Laplace transforms of higher-order derivatives

The technique used above for a first-order differential equation can be extended to higher-order differential equations, but
first we need to calculate the Laplace transforms of higher-order derivatives.

Example 26.1
To determine L{f″} we can use the property L{f′} = sF(s) − f(0) recursively, by applying it to f″ and then to f′. This gives that

L{f″} = s L{f′} − f′(0)

      = s (sF(s) − f(0)) − f′(0)

      = s² F(s) − s f(0) − f′(0)

or that

L{f″} = s² L{f} − s f(0) − f′(0)

In the next lecture this will be used to assist in solving problems involving second-order differential equations.

The same recursive process can be used to determine L{f‴}, L{f⁽⁴⁾} and so on in terms of L{f}.
## 26.4 Transforms of sine and cosine functions

When solving second-order differential equations, the sine and cosine functions often arise, so we need to add those to our
table of known transforms. One way to do this is to use the Euler formula

e^(iωt) = cos(ωt) + i sin(ωt)

for any real constant ω, where i = √(−1).

From the definition of the Laplace transform we obtain that

L{e^(iωt)} = ∫₀^∞ e^(iωt) e^(−st) dt

           = ∫₀^∞ e^((iω−s)t) dt

           = lim_{τ→∞} ∫₀^τ e^((iω−s)t) dt   for 0 < τ

           = lim_{τ→∞} [ (1/(iω − s)) e^((iω−s)t) ]₀^τ

           = lim_{τ→∞} (1/(iω − s)) (e^((iω−s)τ) − 1)

           = 1/(s − iω)

           = (s + iω)/(s² + ω²)

           = s/(s² + ω²) + i ω/(s² + ω²)
Since

L{e^(iωt)} = L{cos(ωt) + i sin(ωt)}

           = L{cos(ωt)} + i L{sin(ωt)}

it follows from the real and imaginary parts that

L{cos(ωt)} = s/(s² + ω²)   and   L{sin(ωt)} = ω/(s² + ω²)
There are other ways to determine the same two results, for example directly from the definition by integration by parts (twice), or instead by solving the differential equation f″ + ω²f = 0 with the appropriate initial conditions on f for the cosine and sine solutions, respectively.

In combination with the s-shifting, this allows us to invert transforms with any quadratic denominator.

## 26.5 Damped oscillations

The sine and cosine functions are used to describe harmonic oscillations, such as occur with a frictionless pendulum or
an electrical circuit with no resistance. In reality there is usually some form of damping that decreases the energy of the
system over time and eventually leads to no motion or current. Typically, such behaviour might be represented in terms
of the functions
e^(at) cos(ωt) and e^(at) sin(ωt)

where a is a negative parameter, so that both functions tend to zero as t becomes large.

We can calculate the Laplace transforms of these functions using our known results.

Example 26.2
If we write f(t) = cos(ωt) then F(s) = s/(s² + ω²), and from the s-shifting property we have that

L{e^(at) cos(ωt)} = L{e^(at) f(t)}

                  = F(s − a)

                  = (s − a)/((s − a)² + ω²)
Example 26.3

If we write g(t) = sin(ωt) then G(s) = ω/(s² + ω²), and from the s-shifting property we have that

L{e^(at) sin(ωt)} = L{e^(at) g(t)}

                  = G(s − a)

                  = ω/((s − a)² + ω²)

These results will not be included on our table of Laplace transforms as they can be derived easily from the other results. However, notice that the denominator always has complex-valued roots s = a ± iω. This is important as it will enable us to invert partial fraction expansions that involve an irreducible quadratic factor in the denominator.

## 27. Applications to differential equations

27.1 Using partial fractions

The solutions of ordinary differential equations often involve exponential and/or circular functions, so their Laplace transforms will often involve partial fractions. A proper rational function

R(s) = P(s)/Q(s)

is a ratio of polynomials in which the degree of the numerator P (s) is less than the degree of the denominator Q(s). All
proper rational functions can be re-written by expressing R(s) as the sum of simpler rational functions of degree one or
two, called partial fractions, which are easy to invert.

Example 27.1
We can write
(5s − 3)/(s² − 2s − 3) = 3/(s − 3) + 2/(s + 1) .

Before we can do this, we need to know how to determine the partial fraction expansion of any proper rational function.

Note: If the original expression is not a proper rational function, then algebraic long division must be performed first.

## 27.2 Steps for determining partial fraction expansions

Step 1 is to write the denominator Q(s) in terms of linear and/or irreducible quadratic factors.
Step 2 is to write the required rational function P(s)/Q(s) as the sum of partial fractions. Here we use the following forms:

Type of factor in Q(s)                        Corresponding partial fraction term(s)

as + b (linear)                               A/(as + b)

(as + b)^k for some integer k                 A/(as + b) + B/(as + b)² + C/(as + b)³ + ··· + K/(as + b)^k

as² + bs + c (irreducible quadratic)          (As + B)/(as² + bs + c)

(as² + bs + c)^k (irreducible quadratic)      (As + B)/(as² + bs + c) + (Cs + D)/(as² + bs + c)² + ··· + (Ks + L)/(as² + bs + c)^k

Step 3 is to equate numerators over a common denominator, multiplying out the factors and either (a) collecting terms
with like powers of s or (b) evaluating at an appropriate number of values of s.

Step 4 is to solve the resulting equations for the required constants A, B, C, . . .

This can be done by using traditional simultaneous equation techniques, for example.
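As a worked instance of Steps 2 to 4 for the expansion quoted in Example 27.1, here is option (b) of Step 3 (evaluating at convenient values of s) carried out as plain arithmetic:

```python
# Partial fractions for (5s - 3)/((s - 3)(s + 1)) = A/(s - 3) + B/(s + 1).
# Equating numerators gives 5s - 3 = A(s + 1) + B(s - 3); now evaluate
# at values of s that kill one unknown at a time.
A = (5 * 3 - 3) / (3 + 1)        # set s = 3, which kills the B term
B = (5 * (-1) - 3) / (-1 - 3)    # set s = -1, which kills the A term
assert A == 3.0 and B == 2.0

# double-check the identity at several other values of s
for s in [0.0, 1.0, 7.5]:
    lhs = (5 * s - 3) / ((s - 3) * (s + 1))
    rhs = A / (s - 3) + B / (s + 1)
    assert abs(lhs - rhs) < 1e-12
print("A = 3, B = 2 as in Example 27.1")
```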

## 27.3 Second-order initial-value problems for linear ODEs

In the previous lecture we saw how to solve a first-order linear differential equation for y(t) by taking the Laplace transform
of the differential equation itself, and using the transform of derivative property to determine an expression for Y (s) =
L{y(t)}. The same approach can be used for initial-value problems involving second-order ordinary differential equations
with constant-coefficients.
Example 27.2
Consider the problem
d²y/dt² − 2 dy/dt − 3y = 0   where y(0) = 5 and dy/dt(0) = 7.

Taking Laplace transforms of the differential equation, and writing Y(s) = L{y}, we obtain that

(s²Y(s) − s y(0) − y′(0)) − 2 (sY(s) − y(0)) − 3Y(s) = 0

collecting like terms,

(s² − 2s − 3) Y(s) = (s − 2) y(0) + y′(0)

and applying the initial conditions,

(s² − 2s − 3) Y(s) = 5s − 3,

that is,

Y(s) = (5s − 3)/(s² − 2s − 3) .

## Using the partial fraction expansion noted earlier, it follows that

3 2
Y (s) = +
s3 s+1
and, using our table to invert this, that the solution is y(t) = 3e3t + 2et for t 0.
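The Laplace-transform answer can be cross-checked directly against sympy's ODE solver. This check is an addition to the notes and assumes the sympy library; `dsolve` with the `ics` argument handles the initial conditions.

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
y = sp.Function('y')

# Initial-value problem from Example 27.2
ode = sp.Eq(y(t).diff(t, 2) - 2*y(t).diff(t) - 3*y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 5, y(t).diff(t).subs(t, 0): 7})

# Should agree with the Laplace-transform answer 3 e^{3t} + 2 e^{-t}
assert sp.simplify(sol.rhs - (3*sp.exp(3*t) + 2*sp.exp(-t))) == 0
```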

This same process works for a variety of applications.

27.4 Application to circuit theory

For an electrical circuit with an inductance L, a resistance R and a capacitance C in series, and an applied voltage $v_i(t)$, the charge $q(t)$ on the capacitor satisfies the ordinary differential equation
$$L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + \frac{1}{C}q = v_i(t).$$

Taking Laplace transforms of this, and writing $Q(s) = \mathcal{L}\{q(t)\}$ and $V_i(s) = \mathcal{L}\{v_i(t)\}$, then
$$L\left(s^2 Q(s) - s\,q(0) - q'(0)\right) + R\left(sQ(s) - q(0)\right) + \frac{1}{C}Q(s) = V_i(s)$$
and hence
$$\left(Ls^2 + Rs + \frac{1}{C}\right)Q(s) = V_i(s) + \left[(Ls+R)\,q(0) + L\,q'(0)\right].$$

The square-bracketed term on the right-hand side arises from the initial conditions $q(0)$ and $i(0) = q'(0)$.

Example 27.3
If there is no resistance ($R = 0$), no initial charge ($q(0) = 0$), no initial current ($q'(0) = 0$) and a constant voltage $v_i(t) = e_0$ is switched on for $t > 0$, then $V_i(s) = \dfrac{e_0}{s}$ and
$$\left(Ls^2 + \frac{1}{C}\right)Q(s) = \frac{e_0}{s}$$
and hence
$$Q(s) = \frac{Ce_0}{s\left(CLs^2+1\right)} = \frac{Ce_0}{s} - \frac{Ce_0\, s}{s^2 + \frac{1}{CL}}.$$
The second term has an irreducible quadratic denominator, and has the form of the cosine transform seen in the previous lecture, so the solution is
$$q(t) = Ce_0\left(1 - \cos(\omega t)\right) \quad\text{with frequency } \omega = \sqrt{\frac{1}{CL}}.$$
This solution is purely oscillatory.

## 27.5 Application to mechanical vibrations

Consider a body of mass m which is suspended by a spring of spring constant k, and with a damping force that is
proportional to the speed of the body. If y(t) is the displacement of this body away from its equilibrium position then
Newton's second law of motion gives
$$m\frac{d^2y}{dt^2} + c\frac{dy}{dt} + ky = 0$$
where c is a constant (which determines the strength of the damping force).

Example 27.4
We might displace the body by $y(0) = d$ and release it from rest (so $y'(0) = 0$); we then seek $y(t)$ for $t > 0$.

This initial-value problem can be solved using the same process as for the previous applications, by finding the Laplace transform $Y(s) = \mathcal{L}\{y(t)\}$ that satisfies the transform of the DE, namely
$$m\left(s^2Y(s) - s\,y(0) - y'(0)\right) + c\left(sY(s) - y(0)\right) + kY(s) = 0.$$
Using the initial conditions,
$$\left(ms^2 + cs + k\right)Y(s) = (ms + c)\,d.$$

If the body has mass 1 kilogram and is displaced by 1 metre on a spring which has spring constant 25 kg/s² and the strength of the damping force is 6 kg/s, then we have $m = 1$, $d = 1$, $k = 25$ and $c = 6$, respectively, and we obtain
$$Y(s) = \frac{s+6}{s^2+6s+25} = \frac{s+3}{(s+3)^2+16} + \frac{3}{(s+3)^2+16}.$$

Inverting this, the damped oscillatory solution is
$$y(t) = e^{-3t}\left(\cos(4t) + \frac{3}{4}\sin(4t)\right) \quad\text{for } t > 0.$$

## 27.6 Mixing liquids between two tanks

Consider two equal-sized tanks T1 and T2 in which a particular chemical is mixed in water so it has a uniform concentration $x_1$ and $x_2$, respectively. A proportion $k > 0$ of both tanks (for example, 2% for $k = 0.02$) is transferred between the tanks per unit time in order to mix their contents.

Conservation of mass yields two coupled first-order linear ODEs for $x_1(t)$ and $x_2(t)$ as functions of time t, with
$$\frac{dx_1}{dt} = -kx_1 + kx_2 \quad\text{and}\quad \frac{dx_2}{dt} = kx_1 - kx_2 \quad\text{for } t > 0.$$
Example 27.5
We might have x1 (0) = 0 and x2 (0) = 1 initially, and then seek x1 (t) and x2 (t) for t > 0.

This can be solved using exactly the same process as earlier, by seeking the Laplace transforms $X_1(s) = \mathcal{L}\{x_1(t)\}$ and $X_2(s) = \mathcal{L}\{x_2(t)\}$. Taking transforms of each ODE gives that $X_1$ and $X_2$ satisfy
$$\left(sX_1(s) - x_1(0)\right) = -kX_1(s) + kX_2(s) \quad\text{and}\quad \left(sX_2(s) - x_2(0)\right) = kX_1(s) - kX_2(s),$$
and using the initial conditions yields two coupled linear algebraic equations for $X_1$ and $X_2$, with
$$(s+k)\,X_1(s) = kX_2(s) \quad\text{and}\quad (s+k)\,X_2(s) = kX_1(s) + 1.$$

The first equation implies that $X_2(s) = \frac{s+k}{k}X_1(s)$ and substituting into the second equation gives
$$X_1(s) = \frac{k}{(s+k)^2 - k^2} = \frac{k}{s(s+2k)} = \frac{1}{2}\left(\frac{1}{s} - \frac{1}{s+2k}\right)$$
and hence
$$X_2(s) = \frac{1}{2}\left(\frac{1}{s} + \frac{1}{s+2k}\right).$$

By inversion,
$$x_1(t) = \frac{1}{2}\left(1 - e^{-2kt}\right) \quad\text{and}\quad x_2(t) = \frac{1}{2}\left(1 + e^{-2kt}\right) \quad\text{for } t > 0,$$
so both concentrations approach $\frac{1}{2}$ for large time t.
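The closed-form concentrations can be checked symbolically: they should satisfy both ODEs, the initial conditions, and the large-time limit. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

t, k = sp.symbols('t k', positive=True)

# Closed-form solutions obtained above by Laplace transforms
x1 = sp.Rational(1, 2)*(1 - sp.exp(-2*k*t))
x2 = sp.Rational(1, 2)*(1 + sp.exp(-2*k*t))

# They satisfy both ODEs and the initial conditions
assert sp.simplify(x1.diff(t) - (-k*x1 + k*x2)) == 0
assert sp.simplify(x2.diff(t) - (k*x1 - k*x2)) == 0
assert x1.subs(t, 0) == 0 and x2.subs(t, 0) == 1

# And both concentrations tend to 1/2 as t -> oo
assert sp.limit(x1, t, sp.oo) == sp.Rational(1, 2)
```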

## 28. Step functions and t-shifting

28.1 Other properties of Laplace transforms

The derivative property of the Laplace transform can also be inverted by considering the transform of
$$g(t) = \int_0^t f(\tau)\,d\tau$$
in terms of the transform $F(s)$ of $f(t)$. Since $g'(t) = f(t)$ and $g(0) = 0$, we can use the transform of derivative property to deduce that
$$F(s) = \mathcal{L}\{g'(t)\} = s\mathcal{L}\{g(t)\} - g(0) = s\mathcal{L}\{g(t)\}.$$

This means that we can eliminate an s in the denominator during the inversion process by using the transform of integral property
$$\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\} = \frac{1}{s}F(s).$$

Another useful result can be obtained by differentiating a Laplace transform with respect to s, so
$$\frac{d}{ds}F(s) = \frac{d}{ds}\int_0^\infty f(t)\,e^{-st}\,dt = \int_0^\infty f(t)\left(-t\,e^{-st}\right)dt = -\mathcal{L}\{t\,f(t)\}.$$
This yields the derivative of transform property
$$\mathcal{L}\{t\,f(t)\} = -\frac{d}{ds}F(s),$$
which can be used to help find the Laplace transform of functions that involve powers of t times another function. In particular, this result enables the earlier result
$$\mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}}$$
to be deduced by differentiating $\mathcal{L}\{1\} = \frac{1}{s}$ with respect to s repeatedly, n times. Another transform that can be obtained from this property is
$$\mathcal{L}\{t\sin(\omega t)\} = -\frac{d}{ds}\mathcal{L}\{\sin(\omega t)\} = -\frac{d}{ds}\left(\frac{\omega}{s^2+\omega^2}\right) = \frac{2\omega s}{\left(s^2+\omega^2\right)^2}.$$
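The derivative-of-transform property can be verified with sympy's built-in `laplace_transform`. This check is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# F(s) = omega/(s^2 + omega^2)
F = sp.laplace_transform(sp.sin(w*t), t, s, noconds=True)

# Derivative-of-transform property: L{t f(t)} = -dF/ds
lhs = sp.laplace_transform(t*sp.sin(w*t), t, s, noconds=True)
rhs = -sp.diff(F, s)

assert sp.simplify(lhs - rhs) == 0
assert sp.simplify(lhs - 2*w*s/(s**2 + w**2)**2) == 0
```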

## 28.2 The unit step function

When we first introduced the Laplace transform it was noted that transforms can be found for functions with a finite number of finite jump discontinuities. In engineering, such a jump can correspond to flipping a switch in an electrical circuit or applying an instantaneous displacement in a mechanical system. One of the advantages of Laplace transforms is that they can handle jump discontinuities relatively easily, including those which can occur in solutions of differential equations.

Jump discontinuities of functions can be represented mathematically in terms of the unit step function $u(t)$, which is defined as
$$u(t) = \begin{cases} 0 & \text{if } t < 0 \\ 1 & \text{if } t \ge 0 \end{cases}$$

This is sometimes known as the Heaviside function (after the engineer Oliver Heaviside, who pioneered the operational methods behind Laplace transforms in the 19th century).

Step functions are often used in combination with a displacement in time, so that the jump from zero to one occurs at $t = a$, for some $a \ge 0$. This can be expressed in terms of the unit step function u as
$$u(t-a) = \begin{cases} 0 & \text{if } t < a \\ 1 & \text{if } t \ge a \end{cases}$$
and the Laplace transform of $u(t-a)$ is given by
$$\mathcal{L}\{u(t-a)\} = \int_a^\infty 1\cdot e^{-st}\,dt = \lim_{\beta\to\infty}\int_a^\beta e^{-st}\,dt = \lim_{\beta\to\infty}\left[-\frac{1}{s}e^{-st}\right]_a^\beta = \lim_{\beta\to\infty}\left(-\frac{1}{s}e^{-s\beta} + \frac{1}{s}e^{-sa}\right) = \frac{1}{s}e^{-sa}.$$
Based on this unit step function a set of more complicated discontinuous functions can be constructed.

Example 28.1
Displaced unit step functions can switch a quantity on at $t = a$, and then off again at $t = b$, where $b > a > 0$. This can be written as
$$h(t) = u(t-a) - u(t-b)$$
and it represents a "top hat" function which has a value of one over the interval $[a, b)$ and a value of zero otherwise.
To obtain the Laplace transform $H(s)$ of this function we use the linearity property, which gives
$$H(s) = \mathcal{L}\{u(t-a) - u(t-b)\} = \mathcal{L}\{u(t-a)\} - \mathcal{L}\{u(t-b)\} = \frac{e^{-sa} - e^{-sb}}{s}.$$
This top hat function is sometimes used to turn on and off the right-hand side (forcing) term in a differential equation.
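The top hat transform can be confirmed with sympy, whose `Heaviside` function plays the role of $u(t)$. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = sp.symbols('a b', positive=True)

# Top hat: on at t = a, off at t = b
h = sp.Heaviside(t - a) - sp.Heaviside(t - b)

H = sp.laplace_transform(h, t, s, noconds=True)

# Should match (e^{-sa} - e^{-sb})/s from the linearity argument above
assert sp.simplify(H - (sp.exp(-s*a) - sp.exp(-s*b))/s) == 0
```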

## 28.3 The t-shifting property

The displaced unit step function u(t a) can also be used in combination with more complicated functions that are
switched on and off.
Example 28.2
A function $f(t)$ that is defined for $t \ge 0$ can be displaced to a new starting time $t = a$ by using
$$u(t-a)\,f(t-a) = \begin{cases} 0 & \text{if } t < a \\ f(t-a) & \text{if } t \ge a \end{cases}$$

The Laplace transform of this function is then given by
$$\mathcal{L}\{u(t-a)\,f(t-a)\} = \int_a^\infty f(t-a)\,e^{-st}\,dt$$
which can be evaluated using the substitution $t' = t - a$ to give
$$\int_a^\infty f(t-a)\,e^{-st}\,dt = \int_0^\infty f(t')\,e^{-s(t'+a)}\,dt' = e^{-sa}\int_0^\infty f(t')\,e^{-st'}\,dt' = e^{-sa}F(s).$$
As a result,
$$\mathcal{L}\{u(t-a)\,f(t-a)\} = e^{-sa}F(s)$$
in terms of $F(s) = \mathcal{L}\{f\}$. So a delay of length a in time, or t-shifting, corresponds to multiplication of the transform by the exponential function $e^{-sa}$.
Compare that with the s-shifting property!

## 28.4 An application of t-shifting

Consider an RC circuit which initially has no charge q and no current i. An applied voltage $v_i(t)$ is switched on to a constant value $e_0$ at the time $t = a > 0$ and then switched off again at the time $t = b > a$. The differential equation governing this system is
$$R\frac{dq}{dt} + \frac{1}{C}q = e_0\left(u(t-a) - u(t-b)\right)$$
where $q(t)$ is the charge on the capacitor. Taking Laplace transforms of the DE gives
$$R\left(sQ(s) - q(0)\right) + \frac{1}{C}Q(s) = e_0\left(\frac{1}{s}e^{-as} - \frac{1}{s}e^{-bs}\right)$$
and using the initial condition $q(0) = 0$ gives
$$(RCs+1)\,Q(s) = Ce_0\left(\frac{1}{s}e^{-as} - \frac{1}{s}e^{-bs}\right)$$
or
$$Q(s) = Ce_0\left(\frac{e^{-as} - e^{-bs}}{s(RCs+1)}\right).$$
From the partial fraction expansion
$$\frac{1}{s(RCs+1)} = \frac{1}{s} - \frac{1}{s+\frac{1}{RC}} = \frac{1}{s} - \frac{1}{s+\alpha} \quad\text{where } \alpha = \frac{1}{RC},$$
$Q(s)$ can be written as
$$Q(s) = Ce_0\left(\frac{1}{s}e^{-as} - \frac{1}{s+\alpha}e^{-as} - \frac{1}{s}e^{-bs} + \frac{1}{s+\alpha}e^{-bs}\right).$$
Inverting using our table of known transforms, including the t-shifting property, gives the solution
$$q(t) = Ce_0\left[u(t-a)\left(1 - e^{-\alpha(t-a)}\right) - u(t-b)\left(1 - e^{-\alpha(t-b)}\right)\right].$$

Another way of expressing this solution is to split up the time period into three intervals, corresponding to the three values of the top hat function:
$$q(t) = Ce_0\begin{cases} 0 & \text{if } 0 \le t < a \\ 1 - e^{-\alpha(t-a)} & \text{if } a \le t < b \\ e^{-\alpha(t-a)}\left(e^{\alpha(b-a)} - 1\right) & \text{if } t \ge b \end{cases}$$

The corresponding current $i(t) = q'(t)$ into the capacitor is then given by
$$i(t) = \frac{e_0}{R}\begin{cases} 0 & \text{if } 0 \le t < a \\ e^{-\alpha(t-a)} & \text{if } a \le t < b \\ -e^{-\alpha(t-a)}\left(e^{\alpha(b-a)} - 1\right) & \text{if } t \ge b \end{cases}$$
which is positive for the second interval and negative for the third interval, with jumps at both $t = a$ and $t = b$.
Notice also that both $i(t)$ and $q(t)$ tend to zero for large times $t \to \infty$, so the system eventually returns to its original uncharged state.
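The piecewise solution can be sanity-checked numerically by integrating the ODE directly and comparing with the closed form. This sketch is an addition to the notes; the parameter values (R = 1, C = 0.5, e0 = 2, a = 1, b = 3) are chosen purely for illustration.

```python
import math

# Illustrative parameter values (assumed, not from the notes)
R, C, e0, a, b = 1.0, 0.5, 2.0, 1.0, 3.0
alpha = 1.0/(R*C)

def q_exact(t):
    # Closed-form solution derived above via t-shifting
    q = 0.0
    if t >= a:
        q += C*e0*(1.0 - math.exp(-alpha*(t - a)))
    if t >= b:
        q -= C*e0*(1.0 - math.exp(-alpha*(t - b)))
    return q

# Integrate R q' + q/C = e0 (u(t-a) - u(t-b)) with forward Euler
dt, t, q = 1e-5, 0.0, 0.0
while t < 5.0:
    v = e0 if a <= t < b else 0.0
    q += dt*(v - q/C)/R
    t += dt

# After the integration, q should match the closed form at t = 5
assert abs(q - q_exact(5.0)) < 1e-3
```

A crude step size suffices here because the circuit decays toward its equilibrium between the switching times.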

## 29. Impulses and Delta functions

29.1 Impulses and delta functions

In some applications it is instructive to consider how the system responds to an impulse, or a short, sharp forcing. For
example, we might hit a stationary mass on a spring with a hammer over a very short period of time, to accelerate it to a
finite velocity, or we apply a sudden large-but-short burst of voltage to a circuit in order to charge a capacitor quickly.

Mathematically, an impulse that is applied at some time $t = a$, where $a > 0$, can be modelled in terms of a (Dirac) delta function $\delta(t-a)$. This is an unusual type of function and it has the properties that:

- $\delta(t-a) = 0$ for all $t \ne a$, and
- its integral is equal to one over any interval that includes $t = a$; in particular
$$\int_0^\infty \delta(t-a)\,dt = 1.$$

In engineering, the delta function $\delta(t-a)$ is also sometimes called the unit impulse function.

Notice that $\delta(t-a)$ does not have a specific value at $t = a$, so it cannot be graphed or evaluated in the usual way. One way to envision $\delta(t-a)$ is as the limit as $\varepsilon \to 0$ of a sequence of functions that have typical width $\varepsilon$ and typical height $\frac{1}{\varepsilon}$ near $t = a$, for example top hat or bell-shaped functions.
Example 29.1
Consider a mass moving along at a constant velocity $v(t) = v_0$ (with zero acceleration $a(t)$) that is given a short, sharp acceleration of $\Delta v$ times $\delta(t-1)$ at the time $t = 1$. Therefore
$$v(t) = v_0 + \int_0^t a(\tau)\,d\tau = v_0 + \int_0^t \Delta v\,\delta(\tau - 1)\,d\tau$$
and $v(t) = v_0$ for $t < 1$. Once $t > 1$, however, the integral jumps in value and $v(t) = v_0 + \Delta v$ for $t > 1$.

Since $\delta(t-a) = 0$ for $t \ne a$, the delta function also has the so-called sifting property, which enables it to pick out values of the integrand of an integral, with
$$\int_0^\infty g(t)\,\delta(t-a)\,dt = g(a) \quad\text{for any function } g(t).$$

This property allows us to determine the Laplace transform of the delta function $\delta(t-a)$, since
$$\mathcal{L}\{\delta(t-a)\} = \int_0^\infty \delta(t-a)\,e^{-st}\,dt = e^{-sa} \quad\text{(using } g(t) = e^{-st} \text{ here)}.$$

It follows that
$$\mathcal{L}\{\delta(t-a)\} = e^{-sa} \quad\text{for any } a > 0.$$
The form of this Laplace transform is similar to that for the unit step function $u(t-a)$ introduced in the previous lecture, where we saw that $\mathcal{L}\{u(t-a)\} = \frac{1}{s}e^{-sa}$. In fact, the delta (or unit impulse) function $\delta(t-a)$ can be considered to be the derivative of the unit step function, with
$$\frac{d}{dt}u(t-a) = \delta(t-a).$$
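The sifting property, and the transform it produces, can be checked with sympy's `DiracDelta`. This sketch is an addition to the notes; the value a = 3/2 is an arbitrary choice of positive a.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a = sp.Rational(3, 2)   # any a > 0 will do

# Sifting: the delta picks out the integrand's value at t = a
g = sp.cos(t)
val = sp.integrate(g*sp.DiracDelta(t - a), (t, 0, sp.oo))
assert val == sp.cos(a)

# With g(t) = e^{-s t} this gives the Laplace transform of delta(t - a)
s = sp.symbols('s', positive=True)
L = sp.integrate(sp.exp(-s*t)*sp.DiracDelta(t - a), (t, 0, sp.oo))
assert L == sp.exp(-s*a)
```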

29.2 Convolution

Laplace transforms are simple to use and manipulate because they have the linearity property

$$\mathcal{L}\{a f(t) + b g(t)\} = a\mathcal{L}\{f(t)\} + b\mathcal{L}\{g(t)\} \quad\text{for any constants } a, b.$$

However, it is not uncommon to assume, incorrectly, that they also satisfy a similar property for multiplication, $\mathcal{L}\{f(t)\,g(t)\} = \mathcal{L}\{f(t)\}\,\mathcal{L}\{g(t)\}$. A product of transforms $F(s)\,G(s)$ can be inverted, but the answer is not usually equal to $f(t)$ times $g(t)$!
Nevertheless, it is possible to express the inverse transform of $F(s)\,G(s)$ in terms of $f(t)$ and $g(t)$. To do that, we need to introduce a special operation on two functions f and g known as the convolution $(f * g)$, defined by the integral
$$(f * g)(t) = \int_0^t f(\tau)\,g(t-\tau)\,d\tau.$$

It can then be shown that
$$\mathcal{L}\{f * g\} = \mathcal{L}\{f\}\,\mathcal{L}\{g\} = F(s)\,G(s).$$
Example 29.2
As an example of evaluating $(f * g)$, consider when $f(t) = t$ and $g(t) = t$, so that
$$(f * g)(t) = \int_0^t f(\tau)\,g(t-\tau)\,d\tau = \int_0^t \tau\,(t-\tau)\,d\tau = \left[\frac{1}{2}t\tau^2 - \frac{1}{3}\tau^3\right]_0^t = \frac{1}{6}t^3.$$
Notice that this is also the inverse transform of
$$F(s)\,G(s) = \left(\frac{1}{s^2}\right)\left(\frac{1}{s^2}\right) = \frac{1}{s^4} = \frac{1}{6}\left(\frac{3!}{s^4}\right).$$
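Both halves of the example, the convolution integral and the convolution theorem, can be verified with sympy. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# (f * g)(t) with f(t) = t and g(t) = t
conv = sp.integrate(tau*(t - tau), (tau, 0, t))
assert sp.simplify(conv - t**3/6) == 0

# The convolution theorem: its transform is F(s) G(s) = 1/s^4
assert sp.simplify(sp.laplace_transform(conv, t, s, noconds=True) - 1/s**4) == 0
```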

Note that the convolution operator here is not the multiplication operator, but it does have the same commutative property, $f * g = g * f$. (Can you show this from its definition?)
Evaluating a convolution can be a little messy, but sometimes it can be quicker than using other ways of inverting transforms, such as by partial fractions. You may use the convolution more extensively in your engineering units.
It is also very important to remember that
$$\mathcal{L}\{f(t)\,g(t)\} \ne F(s)\,G(s) \quad\text{in general.}$$
29.3 A table of additional Laplace transforms

In addition to the initial table at the end of section 25.4, we have the following transforms and properties.

| $f(t)$ | $\mathcal{L}\{f\} = F(s) = \int_0^\infty f(t)\,e^{-st}\,dt$ |
|---|---|
| $\dfrac{df}{dt}$ | $sF(s) - f(0)$ |
| $\dfrac{d^2f}{dt^2}$ | $s^2F(s) - sf(0) - \dfrac{df}{dt}(0)$ |
| $\sin(\omega t)$ | $\dfrac{\omega}{s^2+\omega^2}$ |
| $\cos(\omega t)$ | $\dfrac{s}{s^2+\omega^2}$ |
| $t\,f(t)$ | $-\dfrac{d}{ds}F(s)$ |
| $u(t-a)\,f(t-a)$ | $e^{-sa}F(s)$ for $s > 0$ |
| $\int_0^t f(\tau)\,d\tau$ | $\dfrac{1}{s}F(s)$ |
| $(f*g)(t) = \int_0^t f(\tau)\,g(t-\tau)\,d\tau$ | $F(s)\,G(s)$ |

## 30. Table of Laplace Transforms

Table of Laplace Transforms

| $f(t)$ | $\mathcal{L}\{f\} = F(s) = \int_0^\infty f(t)\,e^{-st}\,dt$ |
|---|---|
| $1$ | $\dfrac{1}{s}$ for $s > 0$ |
| $e^{at}$ | $\dfrac{1}{s-a}$ for $s > a$ |
| $\sinh(\omega t)$ | $\dfrac{\omega}{s^2-\omega^2}$ for $s > \lvert\omega\rvert$ |
| $\cosh(\omega t)$ | $\dfrac{s}{s^2-\omega^2}$ for $s > \lvert\omega\rvert$ |
| $\sin(\omega t)$ | $\dfrac{\omega}{s^2+\omega^2}$ |
| $\cos(\omega t)$ | $\dfrac{s}{s^2+\omega^2}$ |
| $t^n$ for integer $n \ge 0$ | $\dfrac{n!}{s^{n+1}}$ for $s > 0$ |
| $t^\alpha$ for $\alpha > -1$ | $\dfrac{\Gamma(\alpha+1)}{s^{\alpha+1}}$ for $s > 0$ |
| $\delta(t-a)$ | $e^{-as}$ |
| $e^{at}f(t)$ | $F(s-a)$ |
| $u(t-a)\,f(t-a)$ | $e^{-sa}F(s)$ for $s > 0$ |
| $\dfrac{df}{dt}$ | $sF(s) - f(0)$ |

## 31. Functions of Several Variables

31.1 Introduction

We are all familiar with simple functions such as $y = \sin(x)$. And we all know the answers (don't we?) to questions such as: What is its derivative?

In this series of lectures we are going to up the ante by exploring similar questions for functions similar to z = cos(xy).
This is just one example of what we call functions of several variables. Though we will focus on functions that involve
three variables (usually x, y and z) the lessons learnt here will be applicable to functions of any number of variables.

31.2 Definition

A function f of two variables $(x, y)$ is a single-valued mapping of a subset of $\mathbb{R}^2$ into a subset of $\mathbb{R}$.

What does this mean? Simply that for any allowed value of x and y we can compute a single value for f (x, y). In a sense
f is a process for converting pairs of numbers (x and y) into a single number f .

The notation $\mathbb{R}^2$ means all possible choices of x and y, that is, all points in the xy-plane. The symbol $\mathbb{R}$ denotes all real numbers (for example, all points on the real line). The use of the word subset in the above definition is simply to remind us that functions have an allowed domain (i.e. a subset of $\mathbb{R}^2$) and a corresponding range (i.e. a subset of $\mathbb{R}$).

Notice that we are restricting ourselves to real variables, that is the functions value and its arguments (x, y) are all real
numbers. This game gets very exciting and somewhat tricky when we enter the world of complex numbers. Such adventures
await you in later year mathematics (not surprisingly this area is known as Complex Analysis).
31.3 Notation

Here is a simple function of two variables:
$$f(x, y) = \sin(x + y)$$
We can choose the domain to be $\mathbb{R}^2$ and then the range will be the closed set $[-1, +1]$. Another common way of writing all of this is
$$f : (x, y) \in \mathbb{R}^2 \mapsto \sin(x + y) \in [-1, 1]$$
This notation identifies the function as f, the domain as $\mathbb{R}^2$, the range as $[-1, 1]$ and, most importantly, the rule that $(x, y)$ is mapped to $\sin(x + y)$. For this subject we will stick with the former notation.

You should also note that there is nothing sacred about the symbols x, y and f. We are free to choose whatever symbols take our fancy; for example, we could concoct the function
$$w(u, v) = \log_e(u - v)$$

Example 31.1
What would be a sensible choice of domain for the previous function?

## 32. Partial derivatives

32.1 First derivatives

We all know and love the familiar definition of the derivative of a function of one variable,
$$\frac{df}{dx} = \lim_{\Delta x \to 0}\frac{f(x + \Delta x) - f(x)}{\Delta x}.$$

The natural question to ask is: Is there a similar rule for functions of more than one variable? The answer is yes (surprised?) and we will develop the necessary formulas by a simple generalisation of the above definition.

Okay, let's suppose we have a simple function, say $f(x, y)$. Suppose for the moment that we pick a particular value of y, say $y = 3$. Then only x is allowed to vary and in effect we now have a function of just one variable. Thus we can apply the above definition for a derivative, which we write as
$$\frac{\partial f}{\partial x} = \lim_{\Delta x \to 0}\frac{f(x + \Delta x, y) - f(x, y)}{\Delta x}.$$
Notice the use of the symbol $\frac{\partial}{\partial x}$ rather than $\frac{d}{dx}$. This is to remind us that in computing this derivative all other variables are held constant (which in this instance is just y).

Of course, we could play the same game again but with x held constant, which leads to a derivative in y,
$$\frac{\partial f}{\partial y} = \lim_{\Delta y \to 0}\frac{f(x, y + \Delta y) - f(x, y)}{\Delta y}.$$

Each of these derivatives, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$, is known as a partial derivative of f, while the derivative of a function of one variable is often called an ordinary derivative.

You might think that we would now need to invent new rules for the (partial) derivatives of products, quotients and so on. But our definition of partial derivatives is built upon the definition of an ordinary derivative of a function of one variable. Thus all the familiar rules carry over without modification. For example, the product rule for partial derivatives is
$$\frac{\partial}{\partial x}\left(f(x,y)\,g(x,y)\right) = g(x,y)\frac{\partial f}{\partial x} + f(x,y)\frac{\partial g}{\partial x}$$
$$\frac{\partial}{\partial y}\left(f(x,y)\,g(x,y)\right) = g(x,y)\frac{\partial f}{\partial y} + f(x,y)\frac{\partial g}{\partial y}$$

Computing partial derivatives is no more complicated than computing ordinary derivatives.

Example 32.1
If $f(x, y) = \sin(x)\cos(y)$ then
$$\frac{\partial f}{\partial x} = \frac{\partial}{\partial x}\left(\sin(x)\cos(y)\right) = \cos(y)\frac{\partial}{\partial x}\left(\sin(x)\right) + \sin(x)\frac{\partial}{\partial x}\left(\cos(y)\right) = \cos(y)\cos(x).$$

Example 32.2
If $g(x, y, z) = e^{-x^2-y^2-z^2}$ then
$$\frac{\partial g}{\partial z} = \frac{\partial}{\partial z}\left(e^{-x^2-y^2-z^2}\right) = e^{-x^2-y^2-z^2}\,\frac{\partial}{\partial z}\left(-x^2-y^2-z^2\right) = -2z\,e^{-x^2-y^2-z^2}.$$
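Both examples can be reproduced with sympy, which applies the hold-everything-else-constant rule automatically via `diff`. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Example 32.1: partial derivative in x treats y as a constant
f = sp.sin(x)*sp.cos(y)
assert sp.diff(f, x) == sp.cos(x)*sp.cos(y)

# Example 32.2: the chain rule carries over unchanged
g = sp.exp(-x**2 - y**2 - z**2)
assert sp.simplify(sp.diff(g, z) + 2*z*g) == 0
```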
32.2 Higher derivatives

The result of a partial derivative is another function of one or more variables. We are thus at liberty to take another
derivative, generating yet another function. Clearly we can repeat this any number of times (though possibly subject to
some technical limitations as noted below, see Exceptions).

Example 32.3
Let $f(x, y) = \sin(x)\sin(y)$. Then we can define $g(x, y)$ as the partial derivative of f with respect to x, that is,
$$g(x, y) = \frac{\partial f}{\partial x} = \frac{\partial}{\partial x}\left(\sin(x)\sin(y)\right) = \cos(x)\sin(y)$$
and then define $h(x, y)$ as the partial derivative of g with respect to x, that is,
$$h(x, y) = \frac{\partial g}{\partial x} = \frac{\partial}{\partial x}\left(\cos(x)\sin(y)\right) = -\sin(x)\sin(y)$$

Example 32.4
Continuing from the previous example, compute $\frac{\partial g}{\partial y}$.
32.3 Notation

From the above example we see that $h(x, y)$ was computed as follows:
$$h(x, y) = \frac{\partial g}{\partial x} = \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial x}\right)$$
This is often written as
$$h(x, y) = \frac{\partial^2 f}{\partial x^2}$$

Now consider the case where we construct the function $m(x, y)$ by taking the partial derivative of $g(x, y)$ with respect to y, that is,
$$m(x, y) = \frac{\partial g}{\partial y} = \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right)$$
and this is normally written as
$$m(x, y) = \frac{\partial^2 f}{\partial y\,\partial x}$$
Note the order on the bottom line; you should read this from right to left. It tells you to take a partial derivative in x first, then a partial derivative in y.
It's now a short leap to cases where we might take, say, five partial derivatives, such as
$$P(x, y) = \frac{\partial^5 f}{\partial x\,\partial y\,\partial y\,\partial x\,\partial x}$$

Partial derivatives that involve more than one of the independent variables are known as mixed partial derivatives.
Example 32.5
Given $f(x, y) = 3x^2 + 2xy$, compute $\frac{\partial^2 f}{\partial x\,\partial y}$ and $\frac{\partial^2 f}{\partial y\,\partial x}$. Notice anything?

## Order of partial derivatives does not matter

In general, if f is a twice-differentiable function, then the order in which its mixed partial derivatives are calculated does not matter. Each ordering will yield the same function. For a function of two variables this means
$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}$$

This is not immediately obvious but it can be proved (it's a theorem!) and it is a very useful result.
Note: For most multivariable functions we use in applications and modelling we do find $\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x}$. However, there are some functions for which this equality does not hold true, as they fail specific assumptions in the theorem alluded to above.

Example 32.6
Use the above theorem to show that
$$P(x, y) = \frac{\partial^5 Q}{\partial x\,\partial y\,\partial y\,\partial x\,\partial x} = \frac{\partial^5 Q}{\partial y\,\partial y\,\partial x\,\partial x\,\partial x} = \frac{\partial^5 Q}{\partial x\,\partial x\,\partial x\,\partial y\,\partial y}$$

This allows us to simplify our notation: all we need do is record how many of each type of partial derivative are required, thus the above can be written as
$$P(x, y) = \frac{\partial^5 Q}{\partial x^3\,\partial y^2} = \frac{\partial^5 Q}{\partial y^2\,\partial x^3}$$
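The equality of mixed partials is easy to demonstrate with sympy. This sketch is an addition to the notes; the smooth function chosen for Q is an arbitrary illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Example 32.5: the order of mixed partial derivatives does not matter
f = 3*x**2 + 2*x*y
assert sp.diff(f, x, y) == sp.diff(f, y, x) == 2

# Five derivatives in any order (the pattern of Example 32.6)
Q = sp.sin(x)*sp.exp(y)          # any smooth function will do
d1 = sp.diff(Q, x, x, x, y, y)
d2 = sp.diff(Q, y, y, x, x, x)
assert sp.simplify(d1 - d2) == 0
```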
32.4 Exceptions: when derivatives do not exist

In earlier lectures we noted that at the very least a function must be continuous if it is to have a meaningful derivative.
When we take successive derivatives we may need to revisit the question of continuity for each new function that we create.

If a function fails to be continuous at some point then we most certainly can not take its derivative at that point.

Example 32.7
Consider the function
$$f(x) = \begin{cases} 0 & \text{if } -\infty < x \le 0 \\ 3x^2 & \text{if } 0 < x < \infty \end{cases}$$

It is easy to see that something interesting might happen at $x = 0$. It's also not hard to see that the function is continuous over its whole domain, and thus we can compute its derivative everywhere, leading to
$$\frac{df}{dx} = \begin{cases} 0 & \text{if } -\infty < x \le 0 \\ 6x & \text{if } 0 < x < \infty \end{cases}$$

This too is continuous and we thus attempt to compute its derivative,
$$\frac{d^2f}{dx^2} = \begin{cases} 0 & \text{if } -\infty < x \le 0 \\ 6 & \text{if } 0 < x < \infty \end{cases}$$
Now we notice that this second derivative is not continuous at $x = 0$. We thus can not take any more derivatives at $x = 0$.
Our chain of differentiation has come to an end.

We began with a continuous function $f(x)$ and we were able to compute only its first two derivatives over the domain $x \in \mathbb{R}$. We say that the function is twice differentiable over $\mathbb{R}$. This is also often abbreviated by saying f is $C^2$ over $\mathbb{R}$, or writing $f \in C^2(\mathbb{R})$. The symbol C reminds us that we are talking about continuity and the superscript 2 tells us how many derivatives we can apply before we encounter a non-continuous function. The clause "over $\mathbb{R}$" just reminds us that the domain of the function is the set of real numbers $(-\infty, \infty)$.

We should always keep in mind that a function may only possess a finite number of derivatives before we encounter a discontinuity. The tell-tale signs to watch out for are sharp edges, holes or singularities in the graph of the function.

## 33. Gradient vectors and directional derivatives

33.1 Gradient and Directional Derivative

Given any differentiable function of several variables we can compute each of its first partial derivatives. Let's do something outside the square. We will assemble these partial derivatives as a vector, which we will denote by $\nabla f$. So for a function $f(x, y)$ of two variables we define
$$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$$
The vector $\nabla f$ is known as the gradient of f and is often pronounced "grad of f".

This may be pretty, but what use is it? If we look back at the formula for the chain rule we see that we can write it out as a vector dot-product,
$$\frac{df}{ds} = \frac{\partial f}{\partial x}\frac{dx}{ds} + \frac{\partial f}{\partial y}\frac{dy}{ds} = \left(\frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}\right)\cdot\left(\frac{dx}{ds}\mathbf{i} + \frac{dy}{ds}\mathbf{j}\right) = (\nabla f)\cdot\left(\frac{dx}{ds}\mathbf{i} + \frac{dy}{ds}\mathbf{j}\right).$$

What do we make of the vector $\frac{dx}{ds}\mathbf{i} + \frac{dy}{ds}\mathbf{j}$ in this equation? It's not hard to see that it is a tangent vector to the curve $\mathbf{r}(s) = x(s)\,\mathbf{i} + y(s)\,\mathbf{j}$. And if we choose the parameter s to be distance along the curve then we also see that it's a unit vector.

Example 33.1
Prove the last pair of statements: that the vector is a tangent vector and that it is a unit vector.

It is customary to denote the tangent vector by $\hat{\mathbf{u}}$. With the above definitions we can now re-write the equation for a directional derivative as follows:
$$\frac{df}{ds} = \hat{\mathbf{u}}\cdot\nabla f$$
Isn't that neat? The number $\frac{df}{ds}$ that we calculate in this process is known as the directional derivative of f in the direction $\hat{\mathbf{u}}$.

Yet another variation on the notation is to include the tangent vector as a subscript on $\nabla$. Thus we also have
$$\frac{df}{ds} = \nabla_{\hat{\mathbf{u}}} f$$

Directional derivative

The directional derivative $\frac{df}{ds}$ of a function f in the direction $\hat{\mathbf{u}}$ is given by
$$\frac{df}{ds} = \hat{\mathbf{u}}\cdot\nabla f = \nabla_{\hat{\mathbf{u}}} f$$
where the gradient $\nabla f$ is defined by
$$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$$
and $\hat{\mathbf{u}}$ is a unit vector, $\hat{\mathbf{u}}\cdot\hat{\mathbf{u}} = 1$.

Example 33.2
Given $f(x, y) = \sin(x)\cos(y)$, compute the directional derivative of f in the direction $\hat{\mathbf{u}} = \frac{1}{\sqrt{2}}(\mathbf{i} + \mathbf{j})$.

Example 33.3
Given $\nabla f = 2x\,\mathbf{i} + 2y\,\mathbf{j}$ and $x(s) = s\cos(0.1)$, $y(s) = s\sin(0.1)$, compute $\frac{df}{ds}$ at $s = 1$.
Example 33.4
Given $f(x, y) = (xy)^2$ and the vector $\mathbf{v} = 2\mathbf{i} + 7\mathbf{j}$, compute the directional derivative at $(x, y) = (1, 1)$. Hint: Is $\mathbf{v}$ a unit vector?
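A computation in the style of Example 33.4 can be sketched with sympy: build the gradient, normalise the direction vector (the hint matters!), and take the dot product. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

x, y = sp.symbols('x y')

# f(x, y) = (xy)^2 with direction v = 2i + 7j at the point (1, 1)
f = (x*y)**2
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

v = sp.Matrix([2, 7])
u = v / v.norm()                  # v is not a unit vector, so normalise first

grad_P = grad.subs({x: 1, y: 1})  # gradient (2, 2) at the point (1, 1)
df_ds = (u.T*grad_P)[0]

assert sp.simplify(df_ds - 18/sp.sqrt(53)) == 0
```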

We began this discussion by restricting a function of many variables to be a function of one variable. We achieved this by choosing a path such as $x = x(s)$, $y = y(s)$. We might ask: does the value of $\frac{df}{ds}$ depend on the choice of the path? That is, we could imagine many different paths all sharing the one point, call it P, in common. Amongst these different paths might we get different answers for $\frac{df}{ds}$?

This is a very good question. To answer it let's look at the directional derivative in the form
$$\frac{df}{ds} = \hat{\mathbf{u}}\cdot\nabla f$$
First we note that $\nabla f$ depends only on the values of $(x, y)$ at P. It knows nothing about the curves passing through P. That information is contained solely in the vector $\hat{\mathbf{u}}$. Thus if a family of curves passing through P share the same $\hat{\mathbf{u}}$ then we most certainly will get the same value of $\frac{df}{ds}$ for each member of that family. But what class of curves share the same $\hat{\mathbf{u}}$ at P? Clearly they are all tangent to each other at P. None of the curves cross any other curve at P.
At this point we can dispense with the curves and retain just the tangent vector $\hat{\mathbf{u}}$ at P. All that we require to compute $\frac{df}{ds}$ is the direction we wish to head in, $\hat{\mathbf{u}}$, and the gradient vector, $\nabla f$, at P. Choose a different $\hat{\mathbf{u}}$ and you will get a different answer for $\frac{df}{ds}$. In each case $\frac{df}{ds}$ measures how rapidly f is changing in the direction of $\hat{\mathbf{u}}$.

## 33.2 The gradient vector in cylindrical and spherical coordinates

Can we find the gradient vector in other coordinate systems? Yes. However, deriving the gradient vector in another coordinate system requires some ENG2005/ENG2006 knowledge. For now, we will only show you, not derive, the gradient vectors for the two non-Cartesian coordinate systems we use most often in our applications and modelling: cylindrical and spherical coordinates.
Recall that in section 11.4 we saw the parameterisation for cylindrical surfaces and spherical surfaces. If we vary the radii for these systems we can parameterise cylindrical volumes and ball volumes. (Recall that "sphere" only refers to the surface, while "ball" refers to the volume enclosed by the sphere surface.)

## 33.2.1 Gradient vector in cylindrical coordinates

Here we have
$$x = R\cos(\theta), \quad y = R\sin(\theta) \quad\text{and}\quad z = z$$
where $R = \sqrt{x^2 + y^2}$ represents the distance from the cylinder axis to the cylindrical surface.
Recall that the cylindrical coordinate vectors are
$$\mathbf{e}_R = \cos(\theta)\,\mathbf{i} + \sin(\theta)\,\mathbf{j} + 0\,\mathbf{k}$$
$$\mathbf{e}_\theta = -\sin(\theta)\,\mathbf{i} + \cos(\theta)\,\mathbf{j} + 0\,\mathbf{k}$$
$$\mathbf{e}_z = 0\,\mathbf{i} + 0\,\mathbf{j} + \mathbf{k}$$
where $\mathbf{e}_R$ points in the direction of increasing R-values, $\mathbf{e}_\theta$ points in the direction of increasing $\theta$-values and $\mathbf{e}_z$ points in the direction of increasing z-values.

The gradient vector of a function $f(R, \theta, z)$ is
$$\nabla f = \frac{\partial f}{\partial R}\mathbf{e}_R + \frac{1}{R}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{\partial f}{\partial z}\mathbf{e}_z.$$

## 33.2.2 Gradient vector in spherical coordinates

Here we have
$$x = r\sin(\theta)\cos(\phi), \quad y = r\sin(\theta)\sin(\phi) \quad\text{and}\quad z = r\cos(\theta)$$
where $r = \sqrt{x^2 + y^2 + z^2}$ represents the distance from the origin to the spherical surface.

The gradient vector of $f(r, \theta, \phi)$ is
$$\nabla f = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r\sin(\theta)}\frac{\partial f}{\partial \phi}\mathbf{e}_\phi.$$
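Even without the ENG2005/ENG2006 derivation, the cylindrical formula can be spot-checked against the Cartesian gradient for a sample field. This sketch is an addition to the notes and assumes the sympy library; the field $f = x^2 + y^2 + z$ (which is $R^2 + z$ in cylindrical coordinates) is an arbitrary choice.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
R, theta = sp.symbols('R theta', positive=True)

# A sample scalar field, written in Cartesian and in cylindrical coordinates
f_cart = x**2 + y**2 + z          # equals R^2 + z
f_cyl = R**2 + z

# Cartesian gradient, evaluated on the cylindrical parameterisation
sub = {x: R*sp.cos(theta), y: R*sp.sin(theta)}
grad_cart = sp.Matrix([sp.diff(f_cart, v) for v in (x, y, z)]).subs(sub)

# Cylindrical gradient formula: f_R e_R + (1/R) f_theta e_theta + f_z e_z
eR = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])
eTh = sp.Matrix([-sp.sin(theta), sp.cos(theta), 0])
ez = sp.Matrix([0, 0, 1])
grad_cyl = sp.diff(f_cyl, R)*eR + (sp.diff(f_cyl, theta)/R)*eTh + sp.diff(f_cyl, z)*ez

assert sp.simplify(grad_cart - grad_cyl) == sp.zeros(3, 1)
```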

## 34. Tangent planes and linear approximations

34.1 Tangent planes

For functions of one variable we found that a tangent line provides a useful means of approximating the function. It is
natural to ask how we might generalise this idea to functions of several variables.

Constructing a tangent line for a function of a single variable, $f = f(x)$, is quite simple. Let's just remind ourselves how we might do this. First we compute the function's value f and its derivative $\frac{df}{dx}$ at some chosen point. We then construct a straight line with these values at the chosen point.

Example 34.1
Construct the tangent line to $f = \sin(x)$ at $x = \frac{\pi}{4}$.

Notice that the tangent line is a linear function. Not surprisingly, for functions of several variables we will be constructing a
linear function which shares particular properties with the original function, in particular the functions value and gradient
at the chosen point.

Let's be specific. Suppose we have a function $f = f(x, y)$ of two variables and suppose we choose some point, say $x = a$, $y = b$. Let's call this point P. At P we can evaluate f and all the first partial derivatives, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$. Now we want to construct a new function, call it $\tilde{f} = \tilde{f}(x, y)$, that shares these same numbers at P. What conditions, apart from being linear, do we want to impose on $\tilde{f}$? Clearly we require
$$\tilde{f}_P = f_P, \quad \left(\frac{\partial \tilde{f}}{\partial x}\right)_P = \left(\frac{\partial f}{\partial x}\right)_P, \quad \left(\frac{\partial \tilde{f}}{\partial y}\right)_P = \left(\frac{\partial f}{\partial y}\right)_P$$

As we want $\tilde{f}$ to be a linear function we could propose a function of the form
$$\tilde{f}(x, y) = C + Ax + By$$
We would need to carefully choose the numbers A, B, C so that we meet the above conditions. However, it is easier (and mathematically equivalent) to choose
$$\tilde{f}(x, y) = C + A(x - a) + B(y - b)$$
In this form we find
$$C = f_P, \quad A = \left(\frac{\partial f}{\partial x}\right)_P, \quad B = \left(\frac{\partial f}{\partial y}\right)_P$$
and thus we have
$$\tilde{f}(x, y) = f_P + \left(\frac{\partial f}{\partial x}\right)_P (x - a) + \left(\frac{\partial f}{\partial y}\right)_P (y - b)$$

This describes the tangent plane to the function $f = f(x, y)$ at the point $(x, y) = (a, b)$.

Example 34.2
Prove that A, B, C are as stated.

In terms of $\nabla f$ we can write the tangent plane in the following form
$$\tilde{f}(\mathbf{r}) = f_P + (\mathbf{r} - \mathbf{r}_P)\cdot(\nabla f)_P$$
where $\mathbf{r} = x\,\mathbf{i} + y\,\mathbf{j}$. This is a nice compact formula and it makes the transition to more variables $(x, y, z)$ trivial.

Example 34.3
Compute the tangent plane to the function $f(x, y) = \sin(x)\sin(y)$ at $(x, y) = \left(\frac{\pi}{4}, \frac{\pi}{4}\right)$.

The Tangent Plane

Let $f = f(x, y)$ be a differentiable function. The tangent plane to f at the point P is given by
$$\tilde{f}(x, y) = f_P + \left(\frac{\partial f}{\partial x}\right)_P (x - a) + \left(\frac{\partial f}{\partial y}\right)_P (y - b)$$
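The tangent-plane recipe in the box can be carried out with sympy for the function of Example 34.3. This sketch is an addition to the notes and assumes the sympy library.

```python
import sympy as sp

x, y = sp.symbols('x y')
a = b = sp.pi/4

# Tangent plane to f(x, y) = sin(x) sin(y) at (pi/4, pi/4)
f = sp.sin(x)*sp.sin(y)
fP = f.subs({x: a, y: b})
fx = sp.diff(f, x).subs({x: a, y: b})
fy = sp.diff(f, y).subs({x: a, y: b})

plane = fP + fx*(x - a) + fy*(y - b)

# f_P = 1/2 and both slopes equal 1/2 at this point
assert fP == sp.Rational(1, 2) and fx == fy == sp.Rational(1, 2)
# The plane matches f and its x-derivative at P
assert sp.simplify(plane.subs({x: a, y: b}) - fP) == 0
assert sp.diff(plane, x) == fx
```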

## 34.2 Linear approximations

We have done the hard work; now it's time to enjoy the fruits of our labour. We can use the tangent plane as a way to estimate the original function in a region close to the chosen point. This is very similar to how we used a tangent line in approximations for functions of one variable.

Example 34.4
Use the result of the previous example to estimate $\sin(x)\sin(y)$ at $(x, y) = \left(\frac{5\pi}{16}, \frac{5\pi}{16}\right)$.

Example 34.5
Would it make sense to use the same tangent plane as in the previous example to estimate f (5, 4)?

The bright and curious might now ask two very interesting questions: how large is the error in the approximation, and how
can we build better approximations?

The answers to these questions take us far beyond this subject, but here is a very rough guide. Suppose you are estimating
f at some point a distance ρ away from P, that is, ρ² = (x - a)² + (y - b)². Then the error between f(x, y) and the
tangent-plane estimate will be proportional to ρ². The proportionality factor will depend on the second derivatives of f (after all, this is what we left out
in building the tangent plane). The upshot is that the error grows quickly as you move away from P but also, each time
you halve the distance from P you will reduce the error by a factor of four.
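The quoted error scaling can be tested numerically. This Python sketch (my construction, not part of the notes) measures the tangent-plane error for f(x, y) = sin(x) sin(y) at two distances from P and confirms that halving the distance roughly quarters the error.

```python
import math

# Error of the tangent-plane approximation at distance h from P = (pi/4, pi/4),
# moving along the x direction only.
a = b = math.pi / 4
f = lambda x, y: math.sin(x) * math.sin(y)
fP = f(a, b)
fx = math.cos(a) * math.sin(b)   # (df/dx) at P

def error(h):
    # |f - tangent plane| a distance h from P along x
    return abs(f(a + h, b) - (fP + fx * h))

ratio = error(0.1) / error(0.05)
print(ratio)   # roughly 4: halving the distance quarters the error
```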

The answer to the second question, whether there are better approximations than a tangent plane, is most certainly yes. The
key idea is to force the approximation to match higher derivatives of the original function. This leads to higher order
polynomials in x and y. Such constructions are known as Taylor series in many variables. We will revisit Taylor series later
in the course but only in the context of functions of a single variable.

## 35. Maxima and minima

35.1 Maxima and minima

Suppose you run a commercial business and that by some means you have formulated the following formula for the profit
of one of your lines of business

    f(x, y) = 4 - x² - y²

Clearly the profit f depends on two variables x and y. Sound business practice suggests that you would like to maximise
your profits. In mathematical terms this means finding the values of (x, y) such that f is a maximum. A simple plot of the
graph of f shows us that the maximum occurs at (x, y) = (0, 0). For other functions we might not be so lucky and thus
we need some systematic way of computing the points (x, y) at which f is maximised.

You will have met (in previous years) similar problems for the case of a function of one variable. And from that you
may expect that for the present problem we will be making a statement about the derivatives of f in order that we have a
maximum (i.e. that the derivatives should be zero). Let's make this precise.

Let's denote the (as yet unknown) point at which the function is a maximum by P. Now if we have a maximum at this
point then moving in any direction from this point should see the function decrease. That is, the directional derivative must
be non-positive in every direction from P, thus we must have

    df/ds = û · (∇f)_P ≤ 0

for every choice of û. Let's assume (for the moment) that (∇f)_P ≠ 0. Then we should be able to compute λ > 0 so that
û = λ(∇f)_P is a unit vector. If you now substitute this into the above you will find

    λ (∇f)_P · (∇f)_P ≤ 0.

Look carefully at the left hand side. Each term is positive (recall that v · v = |v|²) yet the right hand side is either zero or
negative. Thus this equation does not make sense and we have to reject our only assumption, that is (∇f)_P ≠ 0.

We have thus found that if f is to have a maximum at P then we must have

    (∇f)_P = 0.

This is a vector equation and thus each component of ∇f is zero at P, that is

    (∂f/∂x)_P = 0   and   (∂f/∂y)_P = 0.

It is from these equations that we would compute the (x, y) coordinates of P.

Of course, we could have posed the related question of finding the points at which a function is minimised. The mathematics
would be much the same save for a change in words (maximum to minimum) and a corresponding change in signs.
The end result is the same: the gradient ∇f must vanish at P.

Example 35.1
Find the points at which f(x, y) = 4 - x² - y² attains its maximum.
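Example 35.1 can also be sketched symbolically; the use of sympy here is my addition, not the notes' method. We solve the pair of equations ∂f/∂x = 0 and ∂f/∂y = 0.

```python
import sympy as sp

# Find where the gradient of f(x, y) = 4 - x**2 - y**2 vanishes.
x, y = sp.symbols('x y', real=True)
f = 4 - x**2 - y**2

# Solve df/dx = 0 and df/dy = 0 simultaneously.
critical = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(critical)   # the only critical point is (0, 0)
```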

When we solve the equations

    (∇f)_P = 0

we might get more than one point P. What do we make of these points? Some of them might correspond to minima
while others might correspond to maxima of f. Does this exhaust all possibilities? No, there may be some points which
cannot be classified as either a minimum or a maximum of f. The three options are shown in the following graphs.

[Graphs: a typical local minimum, a typical local maximum, a typical saddle point.]

A typical case might consist of any number of points like the above. It is for this reason that each point is referred to as a
local maximum or a local minimum.

35.3 Notation

Rather than continually having to qualify the point as corresponding to a minimum, a maximum or a saddle point of f we
commonly lump these into the one term local extrema.

Note that when we talk of minima, maxima and extrema we are talking about the (x, y) points at which the function has a
local minimum, maximum or extremum respectively.
35.4 Maxima, minima or saddle point?

You may recall that for a function of one variable, f = f(x), its extrema could be characterised simply by evaluating
the sign of the second derivative. There is a similar test that we can apply to functions of two variables; it is summarised
in the following box.

 2
2f 2f 2f
    
D(P ) =
x2 P y 2 P xy P

## then we have the following classification for P

 
2f
A local minima when D(P ) > 0 and x2
>0
 P
2f
A local maxima when D(P ) > 0 and x2
<0
P
A Saddle point when D(P ) < 0

Note: If (f )P = 0 and D(P ) = 0 then the second derivatives test gives no information: The function f could have a local
maximum or local minimum at P , or P could be a saddle point.

Example 35.2
Use the second derivative test to show that the point (x, y) = (0, 0) is a maximum point for the function f(x, y) = 4 - x² - y².

Example 35.3
Given the function f(x, y) = x e^(-x² - y²), find the extrema points and use the second derivatives test to classify each
extremum point.
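The classification in Example 35.2 can be checked with a short symbolic computation (sympy usage is my addition): compute D(P) and the sign of ∂²f/∂x² at the critical point.

```python
import sympy as sp

# Second derivatives test for f(x, y) = 4 - x**2 - y**2 at (0, 0).
x, y = sp.symbols('x y', real=True)
f = 4 - x**2 - y**2

fxx = sp.diff(f, x, 2)
fyy = sp.diff(f, y, 2)
fxy = sp.diff(f, x, y)

P = {x: 0, y: 0}
D = (fxx * fyy - fxy**2).subs(P)   # D(P) = (-2)(-2) - 0**2 = 4
print(D, fxx.subs(P))              # D > 0 and fxx < 0, so (0, 0) is a local maximum
```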

ENG1005 Exercises
The following exercise questions are provided to assist with reinforcing the facts and practicing the skills covered in the
lectures in this unit. When writing out your solutions to these problems it is advised that you include your full working,
including concise explanations of your reasoning and the correct use of mathematical symbols.

The six exercise sets, a selection of practice exercises for the material covered in lectures, follow, one set for each
of the six major topic areas. You may find that you can only complete a small selection of these during support classes.
The best approach is to attempt all of the relevant questions in the exercise sets related to the previous week's lectures for
yourself before your support class and then ask for help if you are having trouble with specific questions, or having any
other difficulties, during your support class. Assistance is available in your support class, at the Mathematics Learning
Centre, or by approaching the lecturers.

Answers for most of the exercises are provided following each exercise set but they do not describe how to complete the
questions - further assistance on details of how to undertake and complete a problem is available on a one-to-one basis
during each support class.

## Single Variable Calculus Exercises

Integration by substitution

1. Find each of the following indefinite integrals using integration by substitution:

(a) ∫ x³ cos(x⁴) dx          (b) ∫ x/√(3x² + 1) dx

(c) ∫ sin(x) e^cos(x) dx     (d) ∫ 2x e^(3x²) dx

(e) ∫ eˣ/(2 - eˣ) dx         (f) ∫ 1/(x log_e(x)) dx

Integration by parts

2. Evaluate each of the following using integration by parts.

(a) ∫ x cos(x) dx            (b) ∫ x eˣ dx

(c) ∫ y √(y + 1) dy          (d) ∫ x² log_e(x) dx

(e) ∫ sin²(θ) dθ             (f) ∫ cos²(θ) dθ

(g) ∫ sin(θ) cos(θ) dθ       (h) ∫ θ sin²(θ) dθ

3. Use integration by parts twice to find ∫ eˣ sin(x) dx.

4. Use integration by parts twice to find ∫ eˣ cos(x) dx.
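A handy way to check any by-parts answer is to differentiate it and compare with the integrand. A sympy sketch (my addition) for question 3:

```python
import sympy as sp

# Differentiate the claimed antiderivative of exercise 3 and compare
# with the integrand e^x sin(x).
x = sp.symbols('x')
candidate = sp.exp(x) / 2 * (sp.sin(x) - sp.cos(x))   # claimed result for q.3

difference = sp.simplify(sp.diff(candidate, x) - sp.exp(x) * sp.sin(x))
print(difference)   # 0 confirms the antiderivative
```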

5. Use a substitution and an integration by parts to evaluate each of the following

(a) ∫ (3x - 7) sin(5x + 2) dx      (b) ∫ cos(x) sin(x) e^cos(x) dx

(c) ∫ e^(2x) cos(eˣ) dx            (d) ∫ e^√x dx

6. Spot the error in the following calculation.

We wish to compute ∫ dx/x. For this we will use integration by parts with u = 1/x and dv = dx. This gives us du = -dx/x²
and v = x. Thus using ∫ u dv = uv - ∫ v du we find

    ∫ dx/x = 1 + ∫ dx/x

and thus 0 = 1. (If this answer does not cause you serious grief then a career in accountancy beckons.)

v Exercise set 8.8.5: Questions 110, 111.

Hyperbolic functions

7. Evaluate each of the following:

(a) sinh(log_e(2))   (b) tanh(0)   (c) cosh(3)

(d) sinh⁻¹(1)   (e) cosh⁻¹(1)   (f) tanh⁻¹(1)

8. If tanh(x) = 4/5 then find the value of the other five hyperbolic functions at x.

9. Use the definition of the hyperbolic functions to show the following:

(c) cosh(x + y) = cosh(x) cosh(y) + sinh(x) sinh(y).

(d) tanh(log_e(x)) = (x² - 1)/(x² + 1).

(e) (cosh(x) + sinh(x))² = cosh(2x) + sinh(2x).

10. Find the first derivative with respect to the independent variable for the following functions:

(a) f(x) = sinh(4x).

(b) g(t) = cosh(t) sinh(t).

(c) h(r) = (1 - cosh(r))/(1 + cosh(r)).

(d) F(θ) = tanh(e^θ).

(e) y = sinh⁻¹(√x). (Hint: Apply implicit differentiation to sinh(y) = √x.)

11. Use appropriate hyperbolic function substitutions to evaluate the following indefinite integrals:

(a) ∫ 1/√(9 + x²) dx

(b) ∫ 1/√(x² - 16) dx

(c) ∫ 1/(25 - x²) dx

v Exercise set 8.3.13: Questions 40,41
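Answers to substitution integrals like question 11(a) can also be verified by differentiation. A sympy sketch (my addition):

```python
import sympy as sp

# d/dx asinh(x/3) should reproduce the integrand 1/sqrt(9 + x**2).
x = sp.symbols('x', real=True)
antiderivative = sp.asinh(x / 3)

difference = sp.diff(antiderivative, x) - 1 / sp.sqrt(9 + x**2)
# evaluate at a sample point; exact simplification also gives zero
print(float(difference.subs(x, 2)))
```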

Improper integrals

12. Decide which of the following improper integrals will converge and which will diverge.

(a) ∫₀¹ 1/x dx              (b) ∫₀¹ 1/x^(1/4) dx

(c) ∫₀¹ 1/y⁴ dy             (d) ∫₀^∞ e^(-2x) dx

(e) ∫₀^∞ 1/(1 + θ²) dθ      (f) ∫₀¹ 1/(1 - x²) dx

(g) ∫₀² 1/(x(x + 2)) dx     (h) ∫₀² 1/(x(x - 2)) dx
Comparison test for Improper integrals

13. Use a suitable comparison function to decide which of the following integrals will converge and which will diverge.

(a) ∫₀¹ eˣ/x dx             (b) ∫₀¹ 1/(1 - x^(1/4)) dx

(c) ∫₀¹ e⁻ʸ/y⁴ dy           (d) ∫₀^∞ sin²(x) e^(-2x) dx

(e) ∫₀^∞ e^(-θ)/(1 + θ²) dθ   (f) ∫₀¹ 1/(x(1 - x²)) dx
James G., Modern Engineering Mathematics (5th ed.) 2015.:

## Sequences and series

14. Find the limit, if it exists, for each of the following sequences

(a) 1, -1/2, +1/3, -1/4, ..., (-1)ⁿ/(n + 1), ...

(b) 1/2, 2/3, 3/4, ..., (n + 1)/(n + 2), ...

(c) aₙ = 1/(n + 1), n ≥ 0

(d) aₙ = 1/(n + 2) - 1/(n + 1), n ≥ 0

(e) aₙ = 1 + 1/(n + 1) for n even, aₙ = 1 - 1/(n + 1) for n odd

(f) aₙ = e⁻ⁿ for n ≥ 100, aₙ = eⁿ for 0 ≤ n < 100

(g) aₙ = sin(nπ/4) (Hint: Write out the first few terms.)
15. Consider the sequence defined by

    aₙ₊₁ = aₙ + (1/2)^(n+1),   n ≥ 0

with a₀ = 1.

(a) Write out the first few terms a₀, ..., a₄.

(c) Generalize this result to express aₙ₊₁ in terms of (1/2)aₙ.

(d) Can you express aₙ as a sum Σ_{k=0}ⁿ bₖ for some set of bₖ?

(e) Suppose the limit lim_{n→∞} aₙ exists. Use the result of (c) to deduce the limit.

(f) Determine the values of α for which the sequence aₙ₊₁ = aₙ + αⁿ converges.

16. In the Fibonacci sequence each new number is generated as the sum of the two previous numbers, for example,
0, 1, 1, 2, 3, 5, 8, 13, 21, ... The general term in the Fibonacci sequence is often written as Fₙ, with Fₙ = Fₙ₋₁ + Fₙ₋₂.
Show that if we construct the new sequence Gₙ = Fₙ/Fₙ₋₁ then

    lim_{n→∞} Gₙ = (1 + √5)/2.

17. Given a series Σ_{n=1}^∞ aₙ with aₙ ≥ 0 for all n, the Integral Test says that if we can define f(n) = aₙ where f is a continuous
and positive function on [1, ∞) then

v if ∫₁^∞ f(x) dx is convergent then Σ_{n=1}^∞ aₙ is convergent.

v if ∫₁^∞ f(x) dx is divergent then Σ_{n=1}^∞ aₙ is divergent.

Consider the infinite series Σ_{n=1}^∞ 1/nᵖ.

Use the integral test to determine for what values of p this series is convergent and for what values of p this series is
divergent. Note that the p = 1 case is the harmonic series.
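The two cases of the p-series can be illustrated directly (sympy usage is my addition): the improper integral is finite for p = 2 and infinite for p = 1.

```python
import sympy as sp

# Integral test for the p-series: integrate x**(-p) from 1 to infinity.
x = sp.symbols('x', positive=True)

converges = sp.integrate(x**(-2), (x, 1, sp.oo))   # p = 2 > 1, finite
diverges = sp.integrate(x**(-1), (x, 1, sp.oo))    # p = 1, the harmonic case
print(converges, diverges)
```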
James G., Modern Engineering Mathematics (5th ed.) 2015.:

v Exercise set 7.2.3: Questions 1,2,4,5,12,13

v Exercise set 7.3.4: Questions 19,21,22,24
v Exercise set 7.6.4: Questions 41,44
v Exercise set 9.4.4: Questions 8-17

The Ratio Test

18. Use the ratio test to examine the convergence of the following series.

(a) Σ_{n=0}^∞ α⁻ⁿ, where |α| > 1      (b) Σ_{n=0}^∞ xⁿ/(n + 1), where |x| < 1

(c) Σ_{n=0}^∞ n^(1-n)                 (d) Σ_{n=0}^∞ n³/e^(n+2)

19. What does the ratio test tell you about the convergence of

    Σ_{n=0}^∞ 1/(n + 1)².

Can you establish the convergence of this series by some other method?

20. The Starship USS Enterprise is being pursued by a Klingon warship. The dilithium crystals couldn't handle the warp speed
and so it would appear that Captain Kirk and his crew are about to become as one with the inter-galactic dust cloud.

Spock: Captain, the enemy are 10 light years away and are closing fast.

Kirk: But Spock, by the time they travel the 10 light years we will have travelled a further 5 light years. And when
they travel those 5 light years we will have moved ahead by a further 2.5 light years, and so on forever. Spock,
they will never capture us!

Spock: I must inform the captain that he has made a serious error of logic.

What was Kirk's mistake? How far will Kirk's ship travel before being caught?
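Kirk's distances form a geometric series with ratio 1/2, so the partial sums settle to a finite value. A quick numerical sketch (my addition, using the 10, 5, 2.5, ... light year figures from the question):

```python
# Sum the distances the Klingons cover in successive cycles: 10, 5, 2.5, ...
total = 0.0
gap = 10.0           # light years to cover in the current cycle
for _ in range(60):  # partial sums rapidly approach the limit
    total += gap
    gap /= 2

print(total)         # approaches 10 / (1 - 1/2) = 20 light years
```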

Power series

21. Find the radius of convergence for each of the following power series.

(a) f(x) = Σ_{n=0}^∞ n xⁿ/3ⁿ          (b) g(x) = Σ_{n=0}^∞ xⁿ/(3ⁿ n!)

(c) h(x) = Σ_{n=0}^∞ n² xⁿ            (d) p(x) = Σ_{n=0}^∞ x^(2n)/log_e(1 + n)

(e) q(x) = Σ_{n=0}^∞ n! (x - 1)ⁿ/(2ⁿ nⁿ)      (f) r(x) = Σ_{n=0}^∞ (1 + n)ⁿ xⁿ
Maclaurin Series

22. Find the first 4 non-zero terms in the Maclaurin series for each of the following functions.

(a) f(x) = cos(x)         (b) f(x) = sin(2x)

(c) f(x) = log_e(1 + x)   (d) f(x) = 1/(1 + x²)

(e) f(x) = tan⁻¹(x)       (f) f(x) = √(1 - x²)
23. Use the previous results to obtain the first 2 non-zero terms in the Maclaurin series for the following functions.

(a) f(x) = cos(x) sin(2x)     (b) f(x) = log_e(1 + x²)

(c) f(x) = 1/(1 + cos²(x))    (d) f(x) = arctan(arctan(x))

Taylor Series

24. Compute the Taylor series, about the given point, for each of the following functions.

(a) f(x) = 1/x, a = 1            (b) f(x) = √x, a = 1

(c) f(x) = eˣ, a = -1            (d) f(x) = log_e(x), a = 2

25. (a) Compute the Taylor series for e⁻ˣ.

(b) Hence write down the Taylor series for e^(-x²).

(c) Use the above to obtain an infinite series for the function

    s(x) = ∫₀ˣ e^(-u²) du

26. (a) Compute the Taylor series, around x = 0, for log(1 + x) and log(1 - x).

(b) Hence obtain a Taylor series for f(x) = log((1 + x)/(1 - x)).

(c) Compute the radius of convergence for the Taylor series in part (b).

(d) Show that the function defined by y(x) = (1 + x)/(1 - x) has a unique inverse for almost all values of y.

(e) Use the above results to obtain a power series for log(y) valid for 1 < |y| < ∞.

l'Hôpital's rule

27. Use l'Hôpital's rule to verify the following limits

(a) -2 = lim_{x→-1} (x² - 1)/(x + 1)       (b) 4/5 = lim_{x→0} sin(4x)/sin(5x)

(c) -1/π² = lim_{x→1} (1 - x + log(x))/(1 + cos(πx))       (d) 0 = lim_{x→∞} log(log(x))/x

(e) 1/4 = lim_{x→0} x/tan⁻¹(4x)            (f) 0 = lim_{x→∞} e⁻ˣ log(x)

28. Prove that for any n > 0

    0 = lim_{x→∞} xⁿ e⁻ˣ

29. Prove that for any n > 0

    0 = lim_{x→∞} x⁻ⁿ log(x)

v Exercise set 9.4.4: Questions 19


## Single Variable Calculus Exercise Answers

Integration by substitution

1. (a) ∫ x³ cos(x⁴) dx = (1/4) sin(x⁴) + C

(b) ∫ x/√(3x² + 1) dx = (1/3)√(3x² + 1) + C

(c) ∫ sin(x) e^cos(x) dx = -e^cos(x) + C

(d) ∫ 2x e^(3x²) dx = (1/3) e^(3x²) + C

(e) ∫ eˣ/(2 - eˣ) dx = -log_e(|2 - eˣ|) + C

(f) ∫ 1/(x log_e(x)) dx = log_e|log_e(x)| + C

for arbitrary constant C.

Integration by parts
2. (a) ∫ x cos(x) dx = cos(x) + x sin(x) + C

(b) ∫ x eˣ dx = x eˣ - eˣ + C

(c) ∫ y √(y + 1) dy = (2/3) y (y + 1)^(3/2) - (4/15) (y + 1)^(5/2) + C

(d) ∫ x² log_e(x) dx = (x³/3) log_e(x) - x³/9 + C

(e) ∫ sin²(θ) dθ = (1/2)(θ - cos(θ) sin(θ)) + C

(f) ∫ cos²(θ) dθ = (1/2)(θ + cos(θ) sin(θ)) + C

(g) ∫ sin(θ) cos(θ) dθ = (1/2) sin²(θ) + C

(h) ∫ θ sin²(θ) dθ = -(1/2) θ cos(θ) sin(θ) + (1/4) sin²(θ) + θ²/4 + C

for arbitrary constant C.

3. ∫ eˣ sin(x) dx = (eˣ/2)(sin(x) - cos(x)) + C for arbitrary constant C.

4. ∫ eˣ cos(x) dx = (eˣ/2)(sin(x) + cos(x)) + C for arbitrary constant C.

5. (a) ∫ (3x - 7) sin(5x + 2) dx = (3/25) sin(5x + 2) + (1/5)(7 - 3x) cos(5x + 2) + C

(b) ∫ cos(x) sin(x) e^cos(x) dx = e^cos(x) (1 - cos(x)) + C

(c) ∫ e^(2x) cos(eˣ) dx = cos(eˣ) + eˣ sin(eˣ) + C

(d) ∫ e^√x dx = 2e^√x (√x - 1) + C

for arbitrary constant C.

6. Did we forget an integration constant? (And so, with the natural order restored, fears of a career in accountancy fade from
view.)
Hyperbolic functions

7. (a) sinh(log_e(2)) = 3/4

(b) tanh(0) = 0

(c) cosh(3) = (e³ + e⁻³)/2 ≈ 10.0677

(d) sinh⁻¹(1) = log_e(1 + √2) ≈ 0.8814

(e) cosh⁻¹(1) = 0

(f) Since tanh⁻¹(x) → ∞ as x → 1⁻, tanh⁻¹(1) is undefined.

8. sinh(x) = 4/3, cosh(x) = 5/3, coth(x) = 5/4, sech(x) = 3/5, cosech(x) = 3/4.

9. Use the definition of the hyperbolic functions to show the identities.

10. Find the first derivative with respect to the independent variable for the following functions:

(a) df/dx = 4 cosh(4x).

(b) dg/dt = cosh²(t) + sinh²(t) = cosh(2t).

(c) dh/dr = -2 sinh(r)/(cosh(r) + 1)².

(d) dF/dθ = e^θ sech²(e^θ).

(e) dy/dx = 1/(2√x √(x + 1)).

11. (a) ∫ 1/√(9 + x²) dx = sinh⁻¹(x/3) + C

(b) ∫ 1/√(x² - 16) dx = cosh⁻¹(x/4) + C

(c) ∫ 1/(25 - x²) dx = (1/5) tanh⁻¹(x/5) + C

for arbitrary constant C.
Improper integrals

12. (a) ∫₀¹ 1/x dx diverges          (b) ∫₀¹ 1/x^(1/4) dx converges to 4/3

(c) ∫₀¹ 1/y⁴ dy diverges             (d) ∫₀^∞ e^(-2x) dx converges to 1/2

(e) ∫₀^∞ 1/(1 + θ²) dθ converges to π/2      (f) ∫₀¹ 1/(1 - x²) dx diverges

(g) ∫₀² 1/(x(x + 2)) dx diverges     (h) ∫₀² 1/(x(x - 2)) dx diverges
Comparison test for Improper integrals
13. (a) ∫₀¹ eˣ/x dx diverges, use 1/x < eˣ/x over 0 < x < 1

(b) ∫₀¹ 1/(1 - x^(1/4)) dx diverges, use x < x^(1/4) over 0 < x < 1

(c) ∫₀¹ e⁻ʸ/y⁴ dy diverges, use 1/(3y⁴) < e⁻ʸ/y⁴ over 0 < y < 1

(d) ∫₀^∞ e^(-2x) sin²(x) dx converges, use sin²(x) e^(-2x) < e^(-2x) over 0 < x < ∞

(e) ∫₀^∞ e^(-θ)/(1 + θ²) dθ converges, use e^(-θ)/(1 + θ²) < 1/(1 + θ²) over 0 < θ < ∞

(f) ∫₀¹ 1/(x(1 - x²)) dx diverges, use 1/x < 1/(x(1 - x²)) over 0 < x < 1

Sequences

14. (a) 0, (b) 1, (c) 0, (d) 0, (e) 1, (f) 0, (g) Limit does not exist.

15. This is the geometric series. It converges for |α| < 1.

16. Show that lim_{n→∞} Gₙ = (1 + √5)/2.

Series

17. Σ_{n=1}^∞ 1/nᵖ diverges for p ≤ 1 while Σ_{n=1}^∞ 1/nᵖ converges for p > 1.

18. (a) converges, (b) converges, (c) converges, (d) converges

19. Converges. Note that comparing it to Σ_{n=1}^∞ 1/n², which converges by the answer to question 17, also suggests it should converge.

20. Clearly the fast ship must catch the slow ship in a finite time. Yet Kirk has put an argument which shows that his slow
ship will still be ahead of the fast ship after each cycle (a cycle ends when the fast ship just passes the location occupied
by the slow ship at the start of the cycle). Each cycle takes a finite amount of time. The total elapsed time is the sum of
the times for each cycle. Kirk's error was to assume that the time taken for an infinite number of cycles must be infinite.
We know that this is wrong: an infinite series may well converge to a finite number.

Given the information in the question we can see that the fast ship is initially 10 light years behind the slow ship and that
it is traveling twice as fast as the slow ship. Suppose the fast ship is traveling at v light years per year. The distance
traveled by the fast ship decreases by a factor of 2 in each cycle. Hence the time interval for each cycle also decreases by a
factor of 2 in each cycle. The total time taken will then be

    Time = (10 + 5 + 2.5 + 1.25 + ...)/v
         = (10/v)(1 + 1/2 + 1/4 + 1/8 + ...)
         = (10/v) · 1/(1 - 1/2)
         = 10/(v/2)

We expect that this must be the time taken for the fast ship to catch the slow ship. The fast ship is traveling at speed v while
the slow ship is traveling at speed v/2. Thus the fast ship is approaching the slow ship at a speed v/2 and it is initially 10
light years behind. Hence it will take the Klingons 10/(v/2) years to catch Kirk's starship.

Power series

21. (a) R = 3, (b) unbounded (infinite) radius, (c) R = 1, (d) R = 1,
(e) Using lim_{n→∞} (1 + x/n)ⁿ = eˣ then R = 2e, (f) R = 0 (only converges at x = 0)
Maclaurin Series

22. (a) cos(x) = 1 - (1/2)x² + (1/24)x⁴ - (1/720)x⁶ + ...

(b) sin(2x) = 2x - (4/3)x³ + (4/15)x⁵ - (8/315)x⁷ + ...

(c) log_e(1 + x) = x - (1/2)x² + (1/3)x³ - (1/4)x⁴ + ...

(d) 1/(1 + x²) = 1 - x² + x⁴ - x⁶ + ...

(e) tan⁻¹(x) = x - (1/3)x³ + (1/5)x⁵ - (1/7)x⁷ + ...

(f) √(1 - x²) = 1 - (1/2)x² - (1/8)x⁴ - (1/16)x⁶ + ...

23. (a) cos(x) sin(2x) = 2x - (7/3)x³ + ...

(b) log_e(1 + x²) = x² - (1/2)x⁴ + ...

(c) 1/(1 + cos²(x)) = 1/2 + (1/4)x² + ...

(d) tan⁻¹(tan⁻¹(x)) = x - (2/3)x³ + ...

Taylor Series

24. (a) 1/x = 1 - (x - 1) + (x - 1)² - (x - 1)³ + (x - 1)⁴ + ...

(b) √x = 1 + (1/2)(x - 1) - (1/8)(x - 1)² + (1/16)(x - 1)³ + ...

(c) eˣ = e⁻¹ (1 + (x + 1) + (1/2)(x + 1)² + (1/6)(x + 1)³ + ...)

(d) log_e(x) = log_e(2) + (1/2)(x - 2) - (1/8)(x - 2)² + (1/24)(x - 2)³ + ...

25. (a) e⁻ˣ = 1 - x + (1/2)x² - (1/6)x³ + (1/24)x⁴ + ...

(b) e^(-x²) = 1 - x² + (1/2)x⁴ - (1/6)x⁶ + (1/24)x⁸ + ...

(c) s(x) = ∫₀ˣ e^(-u²) du = x - (1/3)x³ + (1/10)x⁵ - (1/42)x⁷ + (1/216)x⁹ + ...

26. (a) log_e(1 + x) = x - (1/2)x² + (1/3)x³ - (1/4)x⁴ + ... = Σ_{n=1}^∞ ((-1)^(n+1)/n) xⁿ

log_e(1 - x) = -x - (1/2)x² - (1/3)x³ - (1/4)x⁴ + ... = -Σ_{n=1}^∞ (1/n) xⁿ.

(b) log_e((1 + x)/(1 - x)) = 2x + 2(1/3)x³ + 2(1/5)x⁵ + ... = 2 Σ_{n=1}^∞ (1/(2n - 1)) x^(2n-1).

(c) R = 1.

(d) x = (y - 1)/(y + 1) for y ≠ -1.

(e) log_e(y) = 2 Σ_{n=1}^∞ (1/(2n - 1)) x^(2n-1), with x = (y - 1)/(y + 1).

l'Hôpital's rule

27. (a) lim_{x→-1} (x² - 1)/(x + 1) = -2       (b) lim_{x→0} sin(4x)/sin(5x) = 4/5

(c) lim_{x→1} (1 - x + log_e(x))/(1 + cos(πx)) = -1/π²       (d) lim_{x→∞} log_e(log_e(x))/x = 0

(e) lim_{x→0} x/tan⁻¹(4x) = 1/4                (f) lim_{x→∞} e⁻ˣ log_e(x) = 0

28. Re-write as lim_{x→∞} (xⁿ/eˣ) = 0 then apply l'Hôpital's rule.

29. Re-write as lim_{x→∞} (log_e(x)/xⁿ) = 0 then apply l'Hôpital's rule.

## Coordinate Geometry and Vectors Exercises

Vectors, dot product, cross product

1. Find all the vectors whose tips and tails are among the three points with coordinates (x, y, z) = (2, 2, 3), (x, y, z) = (3, 2, 1)
and (x, y, z) = (0, 1, 4).

2. Let v = 3i + 2j - 2k. How long is 2v? Find a unit vector (a vector of length 1) in the direction of v.

3. For each pair of vectors given below, calculate the vector dot product and the angle between the vectors.

(a) v = 3i + 2j - 2k and w = i - 2j - k.

(c) v = 2i + 2k and w = -3i - 2j.

4. Given the two vectors v = cos(α)i + sin(α)j and w = cos(β)i + sin(β)j, use the dot product to derive the trigonometric
identity

    cos(α - β) = cos(α)cos(β) + sin(α)sin(β).

5. Use the dot product to determine which two of the following vectors are perpendicular to one another: u = 3i + 2j - 2k,
v = i + 2j - 2k, w = 2i - j + 2k.

6. For each pair of vectors given below, calculate the vector cross product. Assuming that the vectors define a parallelogram,
calculate the area of the parallelogram.

(a) v = 3i + 2j - 2k, w = i - 2j - k.

(c) v = 2i + 2k, w = -3i - 2j.

7. Calculate the volume of the parallelepiped defined by the three vectors u = 3i + 2j - 2k, v = i + 2j - 2k, w = 2i - j + 2k.

8. Verify that v × w = -(w × v).
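Question 8's anti-symmetry can be checked numerically. This numpy sketch (my addition) uses the vectors of question 6(a), read as v = 3i + 2j - 2k and w = i - 2j - k (a sign reconstruction consistent with the quoted area √101):

```python
import numpy as np

# Cross product anti-symmetry: v x w = -(w x v).
v = np.array([3.0, 2.0, -2.0])
w = np.array([1.0, -2.0, -1.0])

vw = np.cross(v, w)
wv = np.cross(w, v)
print(vw, wv)   # the two results are negatives of each other
```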

9. Consider the points (x, y, z) = (1, 2, -1) and (x, y, z) = (2, 0, 3).

(a) Find a vector equation of the line through these points in parametric form.

(b) Find the distance between this line and the point (x, y, z) = (1, 0, 1). (Hint: Use the parametric form of the equation
and the dot product.)

10. Find an equation of the plane that passes through the points (x, y, z) = (1, 2, 1), (x, y, z) = (2, 0, 1) and (x, y, z) =
(1, 1, 0).

11. Consider a plane defined by the equation 3x + 4y - z = 2 and a line defined by the following vector equation (in parametric
form)

    x(t) = 2 - 2t,   y(t) = -1 + 3t,   z(t) = -t   for t ∈ ℝ.

(a) Find the point where the line intersects the plane. (Hint: Substitute the parametric form into the equation of the
plane.)
(b) Find a normal vector to the plane.
(c) Find the angle at which the line intersects the plane. (Hint: Use the dot product.)

12. Find the distance between the parallel planes defined by the equations
2x - y + 3z = 4 and 2x - y + 3z = -24.

13. Consider two planes defined by the equations 3x + 4y - z = 2 and 2x + y + 2z = 6.

(a) Find where the planes intersect the x, y and z axes.

(b) Find normal vectors for the planes.
(c) Find an equation of the line defined by the intersection of these planes. (Hint: Use the normal vectors to define the
direction of the line.)
(d) Find the angle between these two planes.

14. Find the minimum distance between the two lines defined by

    r(t) = (i + j - 2k) + t(i - 3j + 2k)   for t ∈ ℝ

and

    r(s) = (0i + j + 2k) + s(3i - 2j - k)   for s ∈ ℝ.

(Hint: Use scalar projection as demonstrated in the lecture notes. Alternatively, define the lines within parallel planes and
then go back to problem 12.)
James G., Modern Engineering Mathematics (5th ed.) 2015.:

v Exercise set 4.3.3: Questions 66-69,72

v Exercise set 4.3.4: Questions 73,77-81
Curve and surface parameterisations

15. Consider the curve

    r(t) = (t + 1)i + ((1/2)t - 1/2)j + 0k   with -1 ≤ t ≤ 1.

(a) Verify that r(t) represents a segment of a straight line.

(b) Find the Cartesian coordinates for the end-points of this line segment.

(c) Derive a new parametric representation of this line segment using a parametric variable s defined as s = (1/2)t + 1/2.

(d) Find the domain of the parameter s necessary to move between the two original end-points.

(a) Circle in the xy-plane, of radius 3, centre (x, y, z) = (4, 6, 0).

(b) Straight line passing through the two points (x, y, z) = (2, 0, 4) and (x, y, z) = (3, 0, 9).

(c) Circle formed by intersecting the elliptical cylinder (1/2)x² + y² = 1 with the plane z = y.

(a) r(t) = (4 + 6 cos(t))i + 5j + (4 + 6 sin(t))k.

(b) r(t) = ti + (1/t)j + 0k.

(c) r(t) = √(cos(t)) i + √(sin(t)) j + 0k.

(d) r(t) = (2 + ρ cos(4t))i + (6 + ρ sin(4t))j + 2tk for fixed ρ > 0.

(c) Parabolic cylinder z = 3y². Hint: It may help to read this as z = 0x + 3y².

(d) Elliptic cone z = √(9x² + y²).

19. Determine an implicit representation as an equation z = f(x, y) or g(x, y, z) = 0 for each of the following surface parametric
representations:

(a) Plane r(s, t) = si + tj + (s + 2t - 4)k where s ∈ ℝ and t ∈ ℝ.

(b) Elliptic paraboloid r(s, t) = 3s cos(t) i + 4s sin(t) j + s²k where s ≥ 0 and 0 ≤ t < 2π.

(c) Cone r(s, t) = t cos(s) i + t sin(s) j + ctk for some fixed c > 0 and where 0 ≤ s < 2π, t ≥ 0.

(d) Helicoid r(s, t) = t cos(s) i + t sin(s) j + sk for s ≥ 0 and 0 ≤ t < 2π.

20. The coordinate vectors in cylindrical coordinates given in subsection 33.2.1 are

    e_R = cos(θ) i + sin(θ) j + 0k
    e_θ = -sin(θ) i + cos(θ) j + 0k
    e_z = 0i + 0j + k
(a) Calculate the length of each coordinate vector, that is, calculate |e_R|, |e_θ| and |e_z|. What does this imply about the
three coordinate vectors?

(b) Calculate the dot products: e_R · e_θ, e_θ · e_z and e_R · e_z. What does this imply about the three coordinate vectors?

(c) Calculate the cross products: e_R × e_θ, e_θ × e_z and e_z × e_R. (Note the order of each pair of vectors.) What does this
imply about the three coordinate vectors?

21. The coordinate vectors in spherical coordinates given in subsection 11.4.2 are

    e_r = sin(φ)cos(θ) i + sin(φ)sin(θ) j + cos(φ) k
    e_φ = cos(φ)cos(θ) i + cos(φ)sin(θ) j - sin(φ) k
    e_θ = -sin(θ) i + cos(θ) j + 0k

(a) Calculate the length of each coordinate vector, that is, calculate |e_r|, |e_φ| and |e_θ|. What does this imply about the
three coordinate vectors?

(b) Calculate the dot products: e_r · e_φ, e_φ · e_θ and e_r · e_θ. What does this imply about the three coordinate vectors?

(c) Calculate the cross products: e_r × e_φ, e_φ × e_θ and e_θ × e_r. (Note the order of each pair of vectors.) What does this
imply about the three coordinate vectors?

## Coordinate Geometry and Vectors Exercise Answers

Vectors, dot product, cross product

## 1. i 4j + 2k, i + 4j 2k, 2i j + 7k, 2i + j 7k, 3i + 3j + 5k, 3i 3j 5k.

You could also have 0 = 0i + 0j + 0k if the start and end point are the same point.
2. |2v| = 2√17,  v/|v| = (1/√17)(3i + 2j - 2k).
 
3. (a) v · w = 1 and θ = cos⁻¹(1/(√6 √17)) ≈ 1.4716 radians.

(b) v · w = -10 and θ = cos⁻¹(-10/(√17 √24)) ≈ 2.0887 radians.

(c) v · w = -6 and θ = cos⁻¹(-6/(√8 √13)) ≈ 2.1998 radians.
4. Use the dot product to derive the trigonometric identity cos(α - β) = cos(α)cos(β) + sin(α)sin(β).

5. u and w.

6. (a) v × w = -6i + j - 8k and |v × w| = √101 units².

(b) v × w = 6i + 16j + 4k and |v × w| = 2√77 units².

(c) v × w = 4i - 6j - 4k and |v × w| = 2√17 units².

7. (u × v) · w = 4 units³.

8. Verify that v × w = -(w × v).
Lines and planes

## 9. (a) x (t) = 1 + t, y (t) = 2 2t and z (t) = 1 + 4t for t R.

(b) (2/7)√14 units.
10. 2x + y + 7z = 3

11. (a) (x, y, z) = (2, -1, 0).

(b) n = 3i + 4j - k.

(c) θ ≈ 0.37567 radians.

12. √56 units.

13. Consider two planes defined by the equations 3x + 4y - z = 2 and 2x + y + 2z = 6.

2
, (x, y, z) = 0, 12 , 0 and (x, y, z) = (0, 0, 2).
 
(a) (x, y, z) = 3
, 0, 0
(b) n1 = 3i + 4j k and n2 = 2i + j + 2k.
(c) r (t) = (2i + 2j + 0k) + t (9i 4j + 11k) for t R.
4
 
1
(d) = cos 1.835 radians.
3 26

14. 3 units.
Curve and surface parameterisations

15. (a) Given x = t + 1 then t = x - 1 and then y = (1/2)t - 1/2 becomes y = (1/2)x - 1.
(b) t = -1 corresponds to (x, y, z) = (0, -1, 0).
t = +1 corresponds to (x, y, z) = (2, 0, 0).
(c) r(s) = (0i - j + 0k) + s(2i + j + 0k).
(d) 0 ≤ s ≤ 1.

16. (a) r(t) = (4 + 3 cos(t))i + (6 + 3 sin(t))j + 0k for 0 ≤ t < 2π.

(b) r(t) = (2 - t)i + 0j + (4 + t)k for t ∈ ℝ.

(c) r(t) = √2 cos(t)i + sin(t)j + sin(t)k for 0 ≤ t < 2π.

17. (a) Circle in the y = 5 plane, of radius 6, centre (x, y, z) = (4, 5, 4).
(b) Hyperbola xy = 1.
(c) Lamé curve x⁴ + y⁴ = 1.
(d) Helix on a cylinder of radius ρ (the axis of the cylinder is the z-axis).

## 18. (a) r (s, t) = si + (8 2s 5t) j + tk for s R and t R.

(b) r (s, t) = (1 + sin (s) cos (t)) i + sin (s) sin (t) j + (2 + cos (s)) k for 0 s and 0 t < 2.
(c) r (s, t) = si + tj + 3t2 k for s R and t R.
(d) r (s, t) = s cos (t) i + 3s sin (t) j + 3sk for s 0 and 0 t < 2.
19. (a) x + 2y z = 4.

1 1
(b) z = x2 + y 2 .
9 16
cp 2
(c) z = x + y2.
a
 
1 1 2xy
(d) z = sin or y = x tan(z).
2 x2 + y 2

20. (a) |e_R| = 1, |e_θ| = 1 and |e_z| = 1. The cylindrical coordinate vectors are unit vectors.
(b) e_R · e_θ = 0, e_θ · e_z = 0 and e_R · e_z = 0. The cylindrical coordinate vectors are orthogonal to each other.
(c) e_R × e_θ = e_z, e_θ × e_z = e_R and e_z × e_R = e_θ. The cylindrical coordinate vectors form a right handed coordinate
system, like i, j and k in Cartesian coordinates.

21. (a) |e_r| = 1, |e_φ| = 1 and |e_θ| = 1. The spherical coordinate vectors are unit vectors.
(b) e_r · e_φ = 0, e_φ · e_θ = 0 and e_r · e_θ = 0. The spherical coordinate vectors are orthogonal to each other.
(c) e_r × e_φ = e_θ, e_φ × e_θ = e_r and e_θ × e_r = e_φ. The spherical coordinate vectors form a right handed coordinate
system, like i, j and k in Cartesian coordinates.

## Matrix Algebra Exercises

Row operations and linear systems

1. Solve each of the following systems of equations using Gaussian elimination with back-substitution. Be sure to record the
details of each row-operation (for example, as a note on each row of the form (2) → 2(2) - 3(1)).

(a) J + M = 75, J - 4M = 0        (b) x + y = 5, 2x + 3y = 1

(c) x + 2y - z = 6, 2x + 5y - z = 13, x + 3y - 3z = 4

(d) x + 2y - z = 6, x + 2y + 2z = 3, 2x + 5y - z = 13

(e) 2x + 3y - z = 4, x + y + 3z = 1, x + 2y - z = 3

Under-determined systems

2. Use Gaussian elimination with back-substitution to find all possible solutions for the following system of equations

x + 2y - z = 6
x + 3y = 7
2x + 5y - z = 13
3. Find all possible solutions for the system of equations

x + 2y - z = 6

(Hint: You have one equation but three unknowns. You will need to introduce two free parameters.)

Matrices

4. Evaluate each of the following matrix operations.

1 1 2 1 1 1 2 1
(a) 2 (b)
1 4 3 1 1 4 3 1

2 1
1 1 3
(c) 3 1

1 4 2
1 2

5. Rewrite the systems of linear equations for parts (a), (b) and (c) of question 1 in matrix form. Hence, write down
the coefficient and augmented matrices for those systems of linear equations.

6. Repeat the row-operations part of (d) and (e) in question 1 using matrix notation.

Matrix inverses

7. Compute the inverse A⁻¹ of the following matrices

    (a) A = [ 1  1 ]      (b) A = [ 2  3 -1 ]
            [ 1 -4 ]              [ 1  1  3 ]
                                  [ 1  2 -1 ]

Verify that A⁻¹A = I and AA⁻¹ = I.

8. Use the results of the previous question to solve the systems of equations of (a) and (e) in question 1.

v Exercise set 5.4.1: Questions 52,53,58

Matrix determinants

9. Compute the determinant for the coefficient matrix in question 2. What do you observe?

10. For the matrix

    A = [ 2  3 -1 ]
        [ 1  1  3 ]
        [ 1  2 -1 ]

compute the determinant twice, first by expanding about the top row and second by expanding about the second column.
11. Given
1 1 2 1
A= , B=
1 4 3 1
compute det(A), det(B) and det(AB). Verify that det(AB) = det(A)det(B).

12. Compute the following determinants using expansions about any suitable row or column.

1 2 3 4 3 2

(a) det 3 2 2 (b) det 1 7 8

0 9 8 3 9 3

1 2 3 2 1 5 1 3

1 3 2 3 2 1 7 5

(c) det (d) det
4 0 5 0 1 2 1 0

1 2 1 2 3 1 0 1

13. Recompute the determinants in the previous question this time using row operations (that is, Gaussian elimination).

14. Which of the following statements are true? Which are false?

(a) If A is a 3 × 3 matrix with a zero determinant, then one row of A must be a multiple of some other row.

(b) Even if two rows of a square matrix are equal, the determinant of that matrix may be non-zero.

(c) If any two columns of a square matrix are equal then the determinant of that matrix is zero.

(d) For any pair of n × n matrices, A and B, we always have det(A + B) = det(A) + det(B).

(e) Let A be a 3 × 3 matrix. Then det(7A) = 7³ det(A).

(f) If A⁻¹ exists, then det(A⁻¹) = det(A).

15. Given

    A = [[1, k], [0, 1]]

compute A², A³ and hence write down Aⁿ for n > 1.
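The pattern asked for here can be confirmed by repeated multiplication. A minimal sketch, with k = 2 chosen arbitrarily for the test:

```python
# Sketch: confirm A^n = [[1, n k], [0, 1]] by repeated multiplication.

def matmul2(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

k = 2                      # arbitrary test value
A = [[1, k], [0, 1]]
P = [[1, 0], [0, 1]]       # running product, starts at the identity
for n in range(1, 6):
    P = matmul2(P, A)
    assert P == [[1, n * k], [0, 1]]
print(P)  # prints: [[1, 10], [0, 1]]
```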

16. Assume that A is a square matrix with an inverse A⁻¹. Prove that

    det(A⁻¹) = 1/det(A)
17. Let

    A = [[5, 2], [2, 1]]

Show that

    A² − 6A + I = 0

where I is the 2 × 2 identity matrix. Use this result to compute A⁻¹.
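The identity above rearranges to A(6I − A) = I, which is exactly why it yields the inverse. A quick numerical confirmation (an addition, not from the sheet):

```python
# Sketch: verify A^2 - 6A + I = 0 for the printed matrix, and that the
# rearrangement A^(-1) = 6I - A really is the inverse.

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[5, 2], [2, 1]]
I2 = [[1, 0], [0, 1]]
A2 = matmul2(A, A)
Z = [[A2[i][j] - 6 * A[i][j] + I2[i][j] for j in range(2)] for i in range(2)]
Ainv = [[6 * I2[i][j] - A[i][j] for j in range(2)] for i in range(2)]
print(Z, matmul2(Ainv, A))  # prints: [[0, 0], [0, 0]] [[1, 0], [0, 1]]
```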

18. Consider the following pair of matrices

11 18 7 3 1 12

A= a 6 3, B = b 1 5

3 5 2 2 1 6

Compute the values of a and b so that A is the inverse of B while B is the inverse of A.
19. Here is a 2 × 2 matrix equation

    [[a, b], [c, d]] = [[e, f], [g, h]] [[p, q], [r, s]]

Show that this is equivalent to the following sets of equations

    [a, c]ᵀ = p [e, g]ᵀ + r [f, h]ᵀ

and

    [b, d]ᵀ = q [e, g]ᵀ + s [f, h]ᵀ

20. Use the result of the previous question to show that if the original 2 × 2 matrix equation is written as A = EP then the
columns of A are linear combinations of the columns of E.

21. Following on from the previous two questions, show that the rows of A can be written as linear combinations of the rows
of P .

Exercise set 4.2.13: Questions 57-59

Matrix operations

22. Suppose you are given a matrix of the form

    R(θ) = [[cos(θ), −sin(θ)], [sin(θ), cos(θ)]]

Consider now the unit vector v = [1, 0]ᵀ in a two-dimensional plane. Compute R(θ)v. Repeat your computations this time
using w = [0, 1]ᵀ. What do you observe? Try thinking in terms of pictures: look at the pair of vectors before and after the
action of R(θ).

23. You may have recognised the two vectors in the previous question to be the familiar basis vectors for a two-dimensional
space, i.e., i and j. We can express any vector as a linear combination of i and j, that is,

    u = ai + bj

for some numbers a and b. Given what you learnt from the previous question, what do you think will be the result of R(θ)u?
Your answer can be given in simple geometrical terms (e.g., in pictures).

24. Give reasons why you expect R(θ + φ) = R(θ)R(φ). Hence deduce that

    cos(θ + φ) = cos(θ) cos(φ) − sin(θ) sin(φ)

    sin(θ + φ) = sin(θ) cos(φ) + cos(θ) sin(φ)

25. Give reasons why you expect R(θ)R(φ) = R(φ)R(θ). Hence prove that the rotation matrices R(θ) and R(φ) commute.

26. Show that det(R(θ)) = +1.

27. Given the above form for R(θ), write down, without doing any computations, the inverse of R(θ).
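Questions 22–26 can all be sanity-checked numerically. A minimal Python sketch (an addition to the sheet) rotates the basis vector i and tests the composition law and the determinant; the angles 0.7 and 0.4 are arbitrary:

```python
# Sketch: rotation matrices numerically - R(theta) applied to i, the
# composition law R(theta + phi) = R(theta) R(phi), and det R = 1.

import math

def R(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec2(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

theta, phi = 0.7, 0.4
v = matvec2(R(theta), [1.0, 0.0])     # i rotates to [cos(theta), sin(theta)]
C = matmul2(R(theta), R(phi))
D = R(theta + phi)
err = max(abs(C[i][j] - D[i][j]) for i in range(2) for j in range(2))
det = R(theta)[0][0] * R(theta)[1][1] - R(theta)[0][1] * R(theta)[1][0]
```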
Eigenvectors and eigenvalues

A square matrix A has an eigenvector v with eigenvalue λ provided

    Av = λv

The vector v would normally be written as a column vector. Its transpose vᵀ is a row vector. The eigenvalues are found by
solving the characteristic equation

    det(A − λI) = 0

28. Compute the eigenvalues and eigenvectors of the following matrices.

    (a) A = [[4, −2], [5, −3]]    (b) A = [[6, −1], [3, 2]]

    (c) A = [[5, −3], [3, −1]]

29. Given that one eigenvalue is λ = −4, compute the remaining eigenvalues of the following matrix.

    A = [[−1, 3, 3√2], [3, −1, 3√2], [3√2, 3√2, 2]]
30. Given that one eigenvalue is λ = 4, compute the remaining eigenvalues of the following matrix.

    A = [[3, −1, 3√2], [−1, 3, 3√2], [3√2, 3√2, 2]]

Compute the corresponding eigenvectors for all three eigenvalues. Verify that the eigenvectors are mutually orthogonal
(that is, v₁ᵀv₂ = 0, v₁ᵀv₃ = 0 and v₂ᵀv₃ = 0).

31. Suppose the matrix A has eigenvector v with corresponding eigenvalue λ. Show that v is an eigenvector of Aⁿ. What is
its corresponding eigenvalue?

32. If λ, v are an eigenvalue-eigenvector pair for A, then show that αv is also an eigenvector of A for any real number α ≠ 0.

33. Suppose the matrix A has eigenvalue λ with corresponding eigenvector v. Deduce an eigenvalue and corresponding eigen-
vector of R⁻¹AR, where R is a non-singular matrix.

34. Let A be any matrix of any shape. Show that AᵀA is a symmetric square matrix.
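The claim in question 31 — that Av = λv implies Aⁿv = λⁿv — is easy to confirm numerically. This sketch uses the assumed reconstruction of the matrix from question 28(b), for which v = [1, 1]ᵀ is an eigenvector with eigenvalue 5; treat the entries as an assumption.

```python
# Sketch for question 31: if A v = lam v then applying A repeatedly
# multiplies v by lam each time.  Matrix entries are assumed.

def matvec2(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]

A = [[6, -1], [3, 2]]      # assumed form of question 28(b)
lam, v = 5, [1, 1]
Av = matvec2(A, v)         # should equal lam * v
AAv = matvec2(A, Av)       # should equal lam^2 * v
print(Av, AAv)  # prints: [5, 5] [25, 25]
```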

Exercise set 5.7.8: Question 104

SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Matrix Algebra Exercise Answers

Row operations and linear systems

## 1. (a) J = 60, M = 15. (b) x = 14, y = 9.

(c) x = 7, y = 0, z = 1. (d) x = 1, y = 2, z = 1.
(e) x = 1, y = 2, z = 0.

Under-determined systems

3. Solution is x(u, v) = u − 2v + 6, y(u, v) = v, z(u, v) = u where u, v ∈ R are parameters.

Matrices

0 3 5 0
4. (a) (b)
1 9 10 5

8 6
(c)
8 1

1 1 1 1 75 1 1 1 1 5
5. (a) and (b) and
1 4 1 4 0 2 3 2 3 1

1 2 1 1 2 1 6

(c) 2 5 1 and 2 5 1 13

1 3 3 1 3 3 4

6. Should be easy.

Matrix inverses

7 1 10
1 4 1 1
7. (a) A1 = (b) A1 = 4 1 7

5 1 1 3

1 1 1

8. Use the results of the previous question to solve the system of equations of (a) and (e).

Matrix determinants

9. The determinant is zero, which indicates that there is either no solution or infinitely many solutions to the system of
equations.

10. det(A) = 3.

## 11. det(A) = 5, det(B) = 5 and det(AB) = 25.

12. (a) det [[1, 2, 3], [3, 2, 2], [0, 9, 8]] = 31    (b) det [[4, 3, 2], [1, 7, 8], [3, 9, 3]] = −165

    (c) det [[1, 2, 3, 2], [1, 3, 2, 3], [4, 0, 5, 0], [1, 2, 1, 2]] = 0    (d) det [[1, 5, 1, 3], [2, 1, 7, 5], [1, 2, 1, 0], [3, 1, 0, 1]] = 162

## 13. Recompute the determinants in the previous question.

14. (a) False, (b) False, (c) True, (d) False, (e) True, (f) False.

15. Compute A² and A³ and note the pattern.

    Aⁿ = [[1, nk], [0, 1]]

16. Use det(A⁻¹) det(A) = det(A⁻¹A) = det(I) = 1.

17. A⁻¹ = 6I − A = [[1, −2], [−2, 5]].

19. Show the equivalence.

20. Show that the columns of A are linear combinations of the columns of E.

21. Show that the rows of A can be written as linear combinations of the rows of P .

Matrix operations

22. Each of the vectors will have been rotated about the origin by the angle θ in a counterclockwise direction.

23. The rotation observed in the previous question also applies to the general vector u. Thus R(θ) is often referred to as a
rotation matrix. Matrices like this (and their 3-dimensional counterparts) are used extensively in computer graphics.

24. Any object rotated first by φ and then by θ could equally have been subject to a single rotation by θ + φ. The resulting
objects must be identical. Hence R(θ + φ) = R(θ)R(φ).

25. Regardless of the order in which the rotations are applied, the net rotation will be the same. Thus R(θ)R(φ) =
R(φ)R(θ). Equally, you could have started by writing θ + φ = φ + θ, then R(θ + φ) = R(φ + θ), and so R(θ)R(φ) = R(φ)R(θ).

26. det(R(θ)) = det [[cos(θ), −sin(θ)], [sin(θ), cos(θ)]] = cos²(θ) + sin²(θ) = 1.

28. (a) λ₁ = −1 and λ₂ = 2. (b) λ₁ = 3 and λ₂ = 5.

    (c) λ₁ = λ₂ = 2.

29. λ₁ = −4, λ₂ = −4 and λ₃ = 8.

30. λ₁ = 4, λ₂ = −4 and λ₃ = 8, with corresponding eigenvectors

    v₁ = [1, −1, 0]ᵀ, v₂ = [1, 1, −√2]ᵀ and v₃ = [1, 1, √2]ᵀ.

33. The matrix R⁻¹AR will have λ as an eigenvalue with eigenvector R⁻¹v.

34. Use (PQ)ᵀ = QᵀPᵀ and (Aᵀ)ᵀ = A to show that (AᵀA)ᵀ = AᵀA. Hence AᵀA is symmetric.
SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Ordinary Differential Equations Exercises

Introduction to ODEs

Separable first order ODEs

1. Find the general solution for each of the following separable ODEs.

    (a) dy/dx = 2xy    (b) y dy/dx + sin(x) = 0
    (c) sin(x) dy/dx + y cos(x) = 2 cos(x)    (d) dy/dx = −y/x
James G., Modern Engineering Mathematics (5th ed.), 2015: Exercise set 10.5.6, Questions 18, 20.

Non-separable first order ODEs

2. Find the general solution for each of the following homogeneous ODEs.

    (a) dy/dx + y = 0     (b) dy/dx − y = 0

    (c) dy/dx + 2y = 0    (d) dy/dx − 2y = 0

3. Find the particular solution for each of the following ODEs.

    (a) dy/dx + y = 1             (b) dy/dx + 2y = 2 + 3x

    (c) dy/dx − y = e^(2x)        (d) dy/dx − y = e^x

    (e) dy/dx + 2y = cos(2x)      (f) dy/dx − 2y = 1 + 2x − sin(x)

4. Given the solutions in questions 2 and 3, determine the general solution for each of the ODEs.

    (a) dy/dx + y = 1             (b) dy/dx + 2y = 2 + 3x

    (c) dy/dx − y = e^(2x)        (d) dy/dx − y = e^x

    (e) dy/dx + 2y = cos(2x)      (f) dy/dx − 2y = 1 + 2x − sin(x)
Integrating factors

5. Use an integrating factor to find the general solution for each of the following ODEs.

    (a) dy/dx + 2y = 2x                    (b) dy/dx + (2/x)y = 1

    (c) dy/dx + cos(x) y = 3 cos(x)        (d) sin(x) dy/dx + cos(x) y = tan(x)

Exercise set 10.5.11: Questions 31-35

Euler's method

6. For the differential equation dy/dx = y with y(0) = 1 on the interval [0, 1], use Euler's method to determine an approximate
solution.

(iii) Using ten steps of ∆x = 0.1.

Then

(iv) given the exact solution y_exact(x) = e^x, calculate the absolute error |y_exact − y_approx| for each of the approximate
solutions, found above, at each point and

(v) on one graph, plot the three approximate solutions and the exact solution.

7. For the differential equation dy/dx = x − y with y(0) = 1 on the interval [0, 1], use Euler's method to determine an approximate
solution.

(iii) Using ten steps of ∆x = 0.1.

Then

(iv) given the exact solution y_exact(x) = 2e^(−x) + x − 1, calculate the absolute error |y_exact − y_approx| for each of the approximate
solutions, found above, at each point and

(v) on one graph, plot the three approximate solutions and the exact solution.

8. For the differential equation dy/dx = 2xy − x with y(0) = 0 on the interval [0, 1], use Euler's method to determine an
approximate solution.

(iii) Using ten steps of ∆x = 0.1.

Then

(iv) given the exact solution y_exact(x) = 1/2 − (1/2)e^(x²), calculate the absolute error |y_exact − y_approx| for each of the approximate
solutions, found above, at each point and

(v) on one graph, plot the three approximate solutions and the exact solution.
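Euler's method itself is only a few lines of code. The sketch below (an addition to the sheet) runs question 6 with ten steps of size 0.1 and compares the endpoint against the exact value e; the step update y ← y + h f(x, y) is the whole method.

```python
# Sketch of Euler's method for question 6: dy/dx = y, y(0) = 1,
# ten steps of size 0.1 across [0, 1], compared with the exact e^x.

import math

def euler(f, x, y, h, nsteps):
    ys = [y]
    for _ in range(nsteps):
        y = y + h * f(x, y)   # Euler update
        x = x + h
        ys.append(y)
    return ys

ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
err = abs(math.e - ys[-1])    # Euler underestimates e = 2.71828...
```

For dy/dx = y each step multiplies y by 1.1, so the final value is exactly 1.1¹⁰ ≈ 2.5937, an absolute error of about 0.1245.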

Second order homogeneous ODEs

9. Find the general solution for each of the following ODEs:

    (a) d²y/dx² + dy/dx − 2y = 0        (b) d²y/dx² − 9y = 0

    (c) d²y/dx² + 2 dy/dx + 2y = 0      (d) d²y/dx² + 6 dy/dx + 10y = 0

Second order non-homogeneous ODEs

10. Find the particular solution for each of the following ODEs:

    (a) d²y/dx² + dy/dx − 2y = 1 − x        (b) d²y/dx² − 9y = e^(3x)

    (c) d²y/dx² + 2 dy/dx + 2y = sin(x)     (d) d²y/dx² + 6 dy/dx + 10y = e^(2x) cos(x)

11. Given the solutions in questions 9 and 10, determine the general solution for each of the following ODEs.

    (a) d²y/dx² + dy/dx − 2y = 1 − x        (b) d²y/dx² − 9y = e^(3x)

    (c) d²y/dx² + 2 dy/dx + 2y = sin(x)     (d) d²y/dx² + 6 dy/dx + 10y = e^(2x) cos(x)

Boundary value problems

12. Given the general solutions in question 11, solve the following boundary value problems.

    (a) d²y/dx² + dy/dx − 2y = 1 − x, y(0) = 0 and dy/dx(0) = 0

    (b) d²y/dx² − 9y = e^(3x), y(0) = 0 and y(1) = 1

    (c) d²y/dx² + 2 dy/dx + 2y = sin(x), y(0) = 1 and y(π/2) = 1

    (d) d²y/dx² + 6 dy/dx + 10y = e^(2x) cos(x), y(0) = 1 and dy/dx(0) = 0

13. Solve the boundary value problem:

    d²y/dx² + (1/4)y = e^(−x), y(0) = 0, y(π) = 0.

14. Solve the boundary value problem:

    d³y/dx³ + d²y/dx² + 3 dy/dx − 5y = x(1 − x), y(0) = 1, dy/dx(0) = 0, d²y/dx²(0) = 0.
SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Ordinary Differential Equations Exercise Answers

Separable first order ODEs

1. (a) y(x) = Ce^(x²)              (b) y(x) = ±√(2 cos(x) + C)

   (c) y(x) = 2 + C/sin(x)         (d) y(x) = C/x

for arbitrary constant C.

2. (a) y_h(x) = Ce^(−x)     (b) y_h(x) = Ce^x

   (c) y_h(x) = Ce^(−2x)    (d) y_h(x) = Ce^(2x)

for arbitrary constant C.

3. (a) y_p(x) = 1                                    (b) y_p(x) = 1/4 + 3x/2

   (c) y_p(x) = e^(2x)                               (d) y_p(x) = xe^x

   (e) y_p(x) = (1/4) cos(2x) + (1/4) sin(2x)        (f) y_p(x) = −1 − x + (1/5) cos(x) + (2/5) sin(x)
4. The solutions are given as a linear combination of the solution of the homogeneous ODE and the particular solution, that
is, y(x) = y_h(x) + y_p(x).

   (a) y(x) = 1 + Ce^(−x)                                   (b) y(x) = 1/4 + 3x/2 + Ce^(−2x)

   (c) y(x) = e^(2x) + Ce^x                                 (d) y(x) = xe^x + Ce^x

   (e) y(x) = (1/4) cos(2x) + (1/4) sin(2x) + Ce^(−2x)      (f) y(x) = −1 − x + (1/5) cos(x) + (2/5) sin(x) + Ce^(2x)

for arbitrary constant C.

Integrating factors

5. (a) y(x) = x − 1/2 + Ce^(−2x)        (b) y(x) = x/3 + C/x²

   (c) y(x) = 3 + Ce^(−sin(x))          (d) y(x) = (C − log_e(cos(x)))/sin(x)

for arbitrary constant C.

Euler's method

6. dy/dx = y with y(0) = 1 on the interval [0, 1].

7. dy/dx = x − y with y(0) = 1 on the interval [0, 1].

8. dy/dx = 2xy − x with y(0) = 0 on the interval [0, 1].

Second order homogeneous ODEs

9. (a) y_h(x) = C1 e^(−2x) + C2 e^x                  (b) y_h(x) = C1 e^(3x) + C2 e^(−3x)

   (c) y_h(x) = e^(−x)(C1 cos(x) + C2 sin(x))        (d) y_h(x) = e^(−3x)(C1 cos(x) + C2 sin(x))

Second order non-homogeneous ODEs

10. (a) Trying y_p(x) = Ax + B gives y_p(x) = x/2 − 1/4

    (b) Trying y_p(x) = Axe^(3x) gives y_p(x) = (1/6)xe^(3x)

    (c) Trying y_p(x) = A cos(x) + B sin(x) gives y_p(x) = −(2/5) cos(x) + (1/5) sin(x)

    (d) Trying y_p(x) = e^(2x)(A cos(x) + B sin(x)) gives y_p(x) = e^(2x)((1/29) cos(x) + (2/145) sin(x))
11. (a) y(x) = C1 e^(−2x) + C2 e^x + x/2 − 1/4

    (b) y(x) = C1 e^(3x) + C2 e^(−3x) + (1/6)xe^(3x)

    (c) y(x) = e^(−x)(C1 cos(x) + C2 sin(x)) − (2/5) cos(x) + (1/5) sin(x)

    (d) y(x) = e^(−3x)(C1 cos(x) + C2 sin(x)) + e^(2x)((1/29) cos(x) + (2/145) sin(x))

for arbitrary constants C1 and C2.

Boundary value problems

12. (a) y(x) = (1/4)e^(−2x) + x/2 − 1/4

    (b) y(x) = (e³ − 6)/(6(e⁶ − 1)) (e^(3−3x) − e^(3+3x)) + (1/6)xe^(3x)
    (c) y(x) = (7/5)e^(−x) cos(x) + (4/5)e^(π/2 − x) sin(x) − (2/5) cos(x) + (1/5) sin(x)
 
    (d) y(x) = e^(−3x)((28/29) cos(x) + (408/145) sin(x)) + e^(2x)((1/29) cos(x) + (2/145) sin(x))
4 x 4 x 4
13. y(x) = cos e sin + ex .
5 2 5 2 5
1 99 x 9 x 1 1 13
14. y(x) = ex + e cos (2x) e sin (2x) + x2 + x + .
2 250 125 5 25 125
SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Laplace Transforms Exercises

Laplace Transforms

1. Using known Laplace transforms, determine the Laplace transforms for each of the following functions, simplifying
your answers:

    (a) f(t) = 1 − e^(−t)       (b) f(t) = 1 − 2e^(−t) + e^(−2t)

    (c) f(t) = e^t sinh(t)      (d) f(t) = sinh(t) cosh(t)

    (e) f(t) = e^(a+bt) for constants a, b    (f) f(t) = a + be^(ct) for constants a, b, c

For what range of values of s do each of these transforms exist?
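A transform table entry can always be sanity-checked by doing the defining integral numerically. The sketch below (an addition to the sheet) approximates L{f}(s) = ∫₀^∞ f(t)e^(−st) dt by trapezoidal integration on a truncated interval and compares it with the closed form for part (a), 1/(s(s + 1)); the truncation length T = 40 is an arbitrary choice that makes the tail negligible.

```python
# Numerical sanity check: trapezoidal approximation of the Laplace
# transform integral on [0, T], compared with the exact 1/(s(s+1)).

import math

def laplace_numeric(f, s, T=40.0, n=200000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

s = 2.0
approx = laplace_numeric(lambda t: 1.0 - math.exp(-t), s)
exact = 1.0 / (s * (s + 1.0))   # = 1/6 for s = 2
```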

2. Use the definition of the Laplace transform (in terms of an integral) to determine the Laplace transform of each of the
following functions, where f(t) = 0 apart from at the values specified:

    (a) f(t) = 1 for 0 ≤ t ≤ 1           (b) f(t) = t for 0 ≤ t ≤ 1

    (c) f(t) = 1 − t for 0 ≤ t ≤ 1       (d) f(t) = b for 0 ≤ t ≤ a

    (e) f(t) = (b/a)t for 0 ≤ t ≤ a      (f) f(t) = b(1 − t/a) for 0 ≤ t ≤ a

In each case sketch f(t). For what range of values of s do each of the transforms exist?

3. For which of the following functions do their Laplace transforms exist? Give reasons.

    (a) f(t) = exp(−t²)        (b) f(t) = exp((1/2)t²)      (c) f(t) = sinh(t²)

    (d) f(t) = exp(exp(t))     (e) f(t) = exp(−exp(t))      (f) f(t) = 1/t

    (g) f(t) = 1/(t + 1)       (h) f(t) = 1/(t − 1)²        (i) f(t) = |sin(t)|

4. Use the Taylor series

    e^t = 1 + t + (1/2!)t² + ⋯ + (1/n!)tⁿ + ⋯

to show for all t > 0 that

    tⁿ ≤ n! e^t for any fixed integer n ≥ 0

and hence confirm that f(t) = tⁿ has subexponential growth when t is large, for any integer n.

5. Use the definition of the Laplace transform L{f(t)} = F(s) to show that

    L{f(at)} = (1/a) F(s/a) when a > 0.

Given that L{e^t} = 1/(s − 1), use the property above to verify that L{e^(at)} = 1/(s − a).

James G., Modern Engineering Mathematics (5th ed.), 2015: Exercise set 11.2.6, Questions 1 and 3.

Inverting Laplace Transforms

6. Use partial fractions to invert each of the following Laplace transforms:

    (a) F(s) = 1/(s(1 − s))          (b) F(s) = 1/(1 − s²)

    (c) F(s) = 2s/(1 − s²)           (d) F(s) = 5/(s² + s − 6)

    (e) F(s) = (as + b)/(s² + 3s + 2) for constants a, b    (f) F(s) = 1/(s(s² − 1))
7. Use integration by parts to show that L{tⁿ} = (n/s) L{tⁿ⁻¹} when n is a positive integer. Ensure that any limits that arise
are evaluated carefully.

Use that L{1} = 1/s to deduce that L{tⁿ} = n!/sⁿ⁺¹.
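The formula L{tⁿ} = n!/sⁿ⁺¹ can be checked numerically in the same way as any other table entry. A minimal sketch (an addition to the sheet) for n = 3 and s = 2, where the exact value is 3!/2⁴ = 0.375:

```python
# Numerical check of L{t^n} = n!/s^(n+1) for n = 3, s = 2.
# Trapezoidal integration of t^3 e^(-2t) on a truncated interval.

import math

def laplace_numeric(f, s, T=60.0, n=300000):
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

approx = laplace_numeric(lambda t: t ** 3, 2.0)
exact = math.factorial(3) / 2.0 ** 4   # = 0.375
```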
8. Use the known value for L{tⁿ} to determine the Laplace transforms of:

    (a) f(t) = 1 + t                      (b) f(t) = (1 + t)²

    (c) f(t) = (1/2)(t + 1)(t − 1)        (d) f(t) = 1 + t + ⋯ + (1/n!)tⁿ for any positive integer n

9. Given that Γ(1/2) = √π and Γ(α + 1) = αΓ(α), determine:

    (a) L{t^(−1/2)}    (b) Γ(3/2) and hence L{t^(1/2)}    (c) L{t^(3/2)}
10. Use the s-shifting property to determine the Laplace transforms of:

    (a) f(t) = te^t         (b) f(t) = te^(−t)      (c) f(t) = t²e^t

    (d) f(t) = t³e^(2t)     (e) f(t) = tⁿe^(−t)     (f) f(t) = t sinh(t)

11. Invert the following Laplace transforms:

    (a) F(s) = (s − 1)/s²             (b) F(s) = (1 − 2s + s²)/s³

    (c) F(s) = 1/(1 + s)²             (d) F(s) = s/(1 + s)² using (a)

    (e) F(s) = (as + b)/(s + c)² for any constants a, b, c
James G., Modern Engineering Mathematics (5th ed.), 2015.

Laplace Transforms of derivatives

12. Show that the known Laplace transform of f(t) = tⁿ satisfies the derivative property

    L{df/dt} = sF(s) − f(0) = L{n tⁿ⁻¹}.

Repeat using f(t) = e^t, f(t) = te^t (see 10(a)) and f(t) = (1/2)(1 + t)² (see 8(b)).
13. Determine the Laplace transform Y(s) of the solution y(t) of the following initial-value problems:

    (a) dy/dt + y = 2 when y(0) = 1           (b) dy/dt − y = e^(−t) when y(0) = 1

    (c) dy/dt + y = e^(−t) when y(0) = 1      (d) dy/dt + y = t when y(0) = 1

Invert Y(s) and hence determine y(t) in each case.

14. Use the known values of L{sin(ωt)} and L{cos(ωt)}, along with other properties of circular functions and Laplace transforms,
to determine the transforms of each of the following functions:

    (a) f(t) = cos(2t)    (b) f(t) = sin²(t) (use an appropriate double-angle formula)

    (c) f(t) = e^(−t) cos(2t)    (d) f(t) = e^(2t) sin(3t)

15. Use direct integration, using integration by parts, to determine L{te^(it)} and hence determine the values of L{t sin(t)} and
L{t cos(t)}.

James G., Modern Engineering Mathematics (5th ed.), 2015.

Applications to differential equations

16. Write Q(s) = s² + 2s + 5 in the form (s + a)² + ω² and hence determine the inverse transform of:

    (a) F(s) = 1/(s² + 2s + 5)        (b) F(s) = (s + 1)/(s² + 2s + 5)

    (c) F(s) = s/(s² + 2s + 5)        (d) F(s) = (bs + c)/(s² + 2s + 5) for any constants b, c

17. Write each of the following as partial-fraction expansions and determine their inverse transforms:

    (a) F(s) = 2/(s(s² − 1))          (b) F(s) = (2s + 1)/(s²(s + 1))

    (c) F(s) = s/(s² + 2s + 2)        (d) F(s) = (2s − 1)/((s + 2)(s² + 1))

    (e) F(s) = (s² − 2s + 6)/(s³ − s² + 4s − 4)

18. Solve each of the following initial-value problems using Laplace transforms:

    (a) d²y/dt² + 5 dy/dt + 6y = 0 with y(0) = 0 and dy/dt(0) = 1

    (b) d²y/dt² + 2 dy/dt + 5y = 0 with y(0) = 0 and dy/dt(0) = 1

    (c) d²y/dt² + y = 1 with y(0) = 0 and dy/dt(0) = 0

    (d) d²y/dt² + 2 dy/dt + 5y = 5 with y(0) = 0 and dy/dt(0) = 0

    (e) d²y/dt² + 3 dy/dt + 2y = 2t + 1 with y(0) = 1 and dy/dt(0) = 0

19. Use the derivative-of-transform property to determine Laplace transforms of each of the following:

    (a) f(t) = te^t           (b) f(t) = t sinh(ωt)

    (c) f(t) = t cos(ωt)      (d) f(t) = t² exp(ωt)

20. A harmonic oscillator is excited at a different frequency from its natural mode, so that

    d²y/dt² + y = sin(ωt) when ω ≠ 1.

Assuming that y(0) = 0 and dy/dt(0) = 0, show that the Laplace transform of the solution is

    Y(s) = ω/((s² + ω²)(s² + 1))

and hence show that

    y(t) = (1/(1 − ω²)) (sin(ωt) − ω sin(t)).

The near resonance case occurs when ω ≈ 1. How close to ω = 1 does the excitation frequency ω = 1 + ε need to be for the
size of the sin(t) part of the response in y(t) to be about 100 times the forcing amplitude? What happens when ω = 1
exactly?

21. A harmonic system is said to resonate when it is forced at its natural frequency, for example when

    d²y/dt² + y = sin(t), assuming that y(0) = 0 and dy/dt(0) = −1/2.

Find the Laplace transform Y(s) of the solution and hence determine y(t) for t > 0. Deduce that max{|y|} over each
period will always increase with time.

Step functions and t-shifting

22. Use t-shifting to determine the inverse Laplace transforms of each of the following:

    (a) F(s) = e^(−s)/s              (b) F(s) = e^(−2s)/s²

    (c) F(s) = e^(−s)/(1 + s²)       (d) F(s) = 2e^(−4s)/(s(s + 2))

in terms of the unit step function u(t). Sketch each of the inverse transforms as a function of t ≥ 0.

23. Using the appropriate unit step functions, solve the initial-value problem

    d²y/dt² + y = { 1 if π < t < 2π
                  { 0 otherwise

with the initial conditions y(0) = 1 and dy/dt(0) = 0. Compare the form of y(t) for 0 < t < π with that for t > 2π. What is
the overall outcome of the temporary forcing? What would happen to the final value if the forcing had been for π < t < 3π
instead of π < t < 2π?
Impulses and delta functions

24. Demonstrate, using two simple functions such as f(t) = t and g(t) = e^t, that the transform of a product f(t)g(t) is not
necessarily equal to the product of the transforms of f and g. Find two functions f and g for which it does happen to be
true.
SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Laplace Transforms Exercise Answers

Laplace Transforms

1. (a) L{1 − e^(−t)} = 1/(s(s + 1)) for s > 0        (b) L{1 − 2e^(−t) + e^(−2t)} = 2/(s(s + 1)(s + 2)) for s > 0

   (c) L{e^t sinh(t)} = 1/(s(s − 2)) for s > 2       (d) L{sinh(t) cosh(t)} = 1/(s² − 4) for s > 2

   (e) L{e^(a+bt)} = e^a/(s − b) for s > b           (f) L{a + be^(ct)} = ((a + b)s − ac)/(s(s − c)) for s > c

2. Each of the following is true for all s ≠ 0:

   (a) L{f(t)} = (1 − e^(−s))/s                      (b) L{f(t)} = (1 − (s + 1)e^(−s))/s²

   (c) L{f(t)} = (s − 1 + e^(−s))/s²                 (d) L{f(t)} = b(1 − e^(−as))/s

   (e) L{f(t)} = b(1 − (as + 1)e^(−as))/(as²)        (f) L{f(t)} = b(as − 1 + e^(−as))/(as²)

3. The transforms exist for:

   (a) f(t) = exp(−t²), (e) f(t) = exp(−exp(t)), (g) f(t) = 1/(t + 1) and (i) f(t) = |sin(t)|.
Inverting Laplace Transforms

6. (a) f(t) = 1 − e^t                                (b) f(t) = (1/2)(e^(−t) − e^t)

   (c) f(t) = −(e^t + e^(−t))                        (d) f(t) = e^(2t) − e^(−3t)

   (e) f(t) = (b − a)e^(−t) + (2a − b)e^(−2t)        (f) f(t) = (1/2)(e^t + e^(−t)) − 1
8. (a) L{1 + t} = (s + 1)/s²                         (b) L{(1 + t)²} = (s² + 2s + 2)/s³

   (c) L{(1/2)(t + 1)(t − 1)} = (2 − s²)/(2s³)       (d) L{1 + t + ⋯ + (1/n!)tⁿ} = (1 + s + ⋯ + sⁿ)/sⁿ⁺¹
9. (a) L{t^(−1/2)} = √(π/s)

   (b) Γ(3/2) = (1/2)√π so L{t^(1/2)} = (1/2)√(π/s³)

   (c) Γ(5/2) = (3/4)√π so L{t^(3/2)} = (3/4)√(π/s⁵)
10. (a) L{te^t} = 1/(s − 1)²        (b) L{te^(−t)} = 1/(s + 1)²        (c) L{t²e^t} = 2/(s − 1)³

    (d) L{t³e^(2t)} = 6/(s − 2)⁴    (e) L{tⁿe^(−t)} = n!/(s + 1)ⁿ⁺¹    (f) L{t sinh(t)} = 2s/(s² − 1)²

11. (a) f(t) = 1 − t                (b) f(t) = (1/2)(t² − 4t + 2)

Laplace Transforms of derivatives

13. (a) y(t) = 2 − e^(−t)           (b) y(t) = (3/2)e^t − (1/2)e^(−t)

    (c) y(t) = (1 + t)e^(−t)        (d) y(t) = t − 1 + 2e^(−t)

14. (a) L{cos(2t)} = s/(s² + 4)                      (b) L{sin²(t)} = 2/(s(s² + 4))

    (c) L{e^(−t) cos(2t)} = (s + 1)/(s² + 2s + 5)    (d) L{e^(2t) sin(3t)} = 3/(s² − 4s + 13)

15. L{te^(it)} = 1/(s − i)², so L{t cos(t)} = (s² − 1)/(s² + 1)² and L{t sin(t)} = 2s/(s² + 1)².
Applications to differential equations

16. (a) f(t) = (1/2)e^(−t) sin(2t)                   (b) f(t) = e^(−t) cos(2t)

    (c) f(t) = e^(−t)(cos(2t) − (1/2) sin(2t))       (d) f(t) = e^(−t)(b cos(2t) + ((c − b)/2) sin(2t))

17. (a) f(t) = 2 cosh(t) − 2                         (b) f(t) = t + 1 − e^(−t)

    (c) f(t) = e^(−t)(cos(t) − sin(t))               (d) f(t) = cos(t) − e^(−2t)

    (e) f(t) = e^t − sin(2t)

18. (a) y(t) = e^(−2t) − e^(−3t)                     (b) y(t) = (1/2)e^(−t) sin(2t)

    (c) y(t) = 1 − cos(t)                            (d) y(t) = 1 − e^(−t)(cos(2t) + (1/2) sin(2t))

    (e) y(t) = t − 1 + 3e^(−t) − e^(−2t)

19. (a) L{te^t} = 1/(s − 1)²                         (b) L{t sinh(ωt)} = 2ωs/(s² − ω²)²

    (c) L{t cos(ωt)} = (s² − ω²)/(s² + ω²)²          (d) L{t² exp(ωt)} = 2/(s − ω)³

20. 1/(1 − ω²) ≈ 100 requires ω ≈ 0.995; this solution is undefined if ω = 1, but see below.

21. y(t) = −(1/2) t cos(t), for which |y| varies between ±(1/2)t over each period in t, that is, it amplifies.

22. (c) f(t) = sin(t − 1) u(t − 1)                   (d) f(t) = (1 − e^(−2(t−4))) u(t − 4)

23. y(t) = cos(t) + (1 + cos(t)) u(t − π) − (1 − cos(t)) u(t − 2π); cos(t) versus 3 cos(t); cos(t) for t > 3π.
SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Multivariable Calculus Exercises

Limits

1. At which points in R² are the following two variable functions discontinuous (if any)?

    (a) f(x, y) = tan(x + y)                    (b) g(x, y) = (x − y)²/(x + y)²

    (c) h(u, v) = (1 + u + u²)/(1 + v + v²)     (d) p(u, v) = exp(−u² − v²)

2. Evaluate each of the following limits (if they exist). To check the limit at a point (x, y) = (a, b), compare

    - the limit along the y = b line,

    - the limit along the x = a line, and

    - the limit along any straight line y = mx + c through that point (x, y) = (a, b) (for finite, non-zero constant m).

If you find the same value for all three cases then the limit may be that value.
If one of these three cases does not agree with the other two or is undefined then the limit does not exist.

    (a) lim as (x, y) → (0, 0) of sin(x + y)/(x + y)             (b) lim as (x, y) → (1, 1) of (x + y − 1)²/(x − y + 1)²

    (c) lim as (x, y) → (1, 0) of (x² − y² − 1)/(x² + y² − 1)    (d) lim as (x, y) → (0, 0) of (1 − exp(−x²y²))/(xy)
Partial Derivatives

3. Evaluate the first partial derivatives for each of the following functions

    (a) f(x, y) = cos(x) cos(y)                    (b) f(x, y) = sin(xy)

    (c) f(x, y) = log_e(1 + x)/log_e(1 + y)        (d) f(x, y) = (x + y)/(x − y)

    (e) f(x, y) = xy                               (f) f(u, v) = uv(1 − u² − v²)

   
4. For the function f(x, y) = y² sin(x) verify that

    ∂/∂x (∂f/∂y) = ∂/∂y (∂f/∂x).
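The equality of the mixed partials can also be seen numerically. This sketch (an addition to the sheet) approximates both orders of differentiation with nested central differences; both should agree with the exact mixed partial 2y cos(x).

```python
# Sketch for question 4: nested central differences for the mixed
# partials of f(x, y) = y^2 sin(x); both orders should agree.

import math

def f(x, y):
    return y * y * math.sin(x)

def d_dx(g, x, y, h=1e-4):
    return (g(x + h, y) - g(x - h, y)) / (2.0 * h)

def d_dy(g, x, y, h=1e-4):
    return (g(x, y + h) - g(x, y - h)) / (2.0 * h)

x0, y0 = 1.0, 2.0
fxy = d_dx(lambda x, y: d_dy(f, x, y), x0, y0)   # d/dx of df/dy
fyx = d_dy(lambda x, y: d_dx(f, x, y), x0, y0)   # d/dy of df/dx
```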

Gradient vectors and directional derivatives

5. Compute the directional derivative for each of the following functions in the stated direction. Be sure that you use a unit
vector!

    (a) f(x, y) = 2x + 3y at (x, y) = (1, 2) in the direction v = (1/5)(3i + 4j)

    (b) g(x, y) = sin(x) cos(y) at (x, y) = (π/4, π/4) in the direction v = (1/√2)(i + j)

    (e) r(x, y, z) = z exp(2xy) at (x, y, z) = (1, 1, 1) in the direction of v = i − 3j + 2k

    (f) w(x, y, z) = √(1 − x² − y² − z²) at (x, y, z) = (1/2, 1/2, 1/2) in the direction of v = 2i − j + k

6. (a) Find the gradient vector for the function g(x, y, z) = x² + y² − 1.

   (b) Rewrite the function g(x, y, z) = x² + y² − 1 as a function of cylindrical coordinates, that is, g(R, θ, z).

   (c) Find the gradient vector for the function g(R, θ, z).

7. (a) Find the gradient vector for the function g(x, y, z) = x² + y² + z² − 1.

   (b) Rewrite the function g(x, y, z) = x² + y² + z² − 1 as a function of spherical coordinates, that is, g(r, θ, φ).

   (c) Find the gradient vector for the function g(r, θ, φ).

8. Find the gradient vector for the function g(R, θ, z) = a cos(θ) + cos(θ)/R in cylindrical coordinates for constant a > 0.

James G., Modern Engineering Mathematics (5th ed.), 2015: Exercise set 9.6.4, Question 46.

Tangent planes

9. Compute the tangent plane approximation f̃ for each of the following functions at the stated point.

    (a) f(x, y) = 2x + 3y at (x, y) = (1, 2)

    (b) g(x, y) = sin(x) cos(y) at (x, y) = (π/4, π/4)

    (e) r(x, y, z) = z exp(2xy) at (x, y, z) = (1, 1, 1)

    (f) w(x, y, z) = √(1 − x² − y² − z²) at (x, y, z) = (1/2, 1/2, 1/2)

10. Use the result from the previous question to estimate the function at the stated points. Compare your estimate with that
given by a calculator.

    (a) f(x, y) at (x, y) = (1.1, 1.9)

    (b) g(x, y) at (x, y) = (3π/16, 5π/16)

    (f) w(x, y, z) at (x, y, z) = (0.6, 0.4, 0.6)
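For part (a) the estimate can be coded directly. Because f(x, y) = 2x + 3y is already linear, its tangent plane at (1, 2) reproduces it exactly, which is why the estimate and the calculator value coincide at 7.9 (a useful sanity check on the method, an addition to the sheet):

```python
# Sketch for question 10(a): tangent-plane estimate of f(x, y) = 2x + 3y
# at (1.1, 1.9).  For a linear function the tangent plane is exact.

def f(x, y):
    return 2.0 * x + 3.0 * y

def f_tangent(x, y):
    # f(1, 2) + f_x (x - 1) + f_y (y - 2) with f_x = 2, f_y = 3
    return 8.0 + 2.0 * (x - 1.0) + 3.0 * (y - 2.0)

est, direct = f_tangent(1.1, 1.9), f(1.1, 1.9)
```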

11. This is more a question on theory than a pure number question. It is thus not examinable.

Consider a function f = f(x, y) and its tangent plane approximation f̃ at some point P. Both of these may be drawn as
surfaces in 3-dimensional space. You might ask: how can I compute the normal vector to the surface for f at the point
P? And that is exactly what we will do in this question.

Construct f̃ at P (that is, write down the standard formula for f̃). Draw this as a surface in the 3-dimensional space. This
surface is a flat plane tangent to the surface for f at P (hence the name, tangent plane).

Given your equation for the plane, write down a 3-vector normal to this plane. Hence deduce the normal to the surface for
the function f = f(x, y) at P.

12. Generalise your result from the previous question to surfaces of the form g(x, y, z) = 0. This question is also a non-
examinable extension. But it is fun! (agreed?).
Maxima and Minima

13. Find all of the extrema (if any) for each of the following functions (you do not need to characterise the extrema).

    (a) f(x, y) = 4 − x² − y²

    (b) g(x, y) = xy exp(−x² − y²)

    (c) h(x, y) = x − x³ + y²

Exercise set 9.6.4: Questions 79-81, 86.

SCHOOL OF MATHEMATICAL SCIENCES

ENG1005

Engineering mathematics

Multivariable Calculus Exercise Answers

Limits

1. (a) {(x, y) : x + y = ±π/2, ±3π/2, ±5π/2, ...}    (b) {(x, y) : x + y = 0}

2. (a) 1, (b) 1, (c) Undefined, (d) 0

Partial Derivatives

3. (a) ∂f/∂x = −sin(x) cos(y) and ∂f/∂y = −cos(x) sin(y)

   (b) ∂f/∂x = y cos(xy) and ∂f/∂y = x cos(xy)

   (c) ∂f/∂x = 1/((1 + x) log_e(1 + y)) and ∂f/∂y = −log_e(1 + x)/((1 + y) log_e²(1 + y))

   (d) ∂f/∂x = −2y/(x − y)² and ∂f/∂y = 2x/(x − y)²

   (e) ∂f/∂x = y and ∂f/∂y = x

   (f) ∂f/∂u = v(1 − 3u² − v²) and ∂f/∂v = u(1 − u² − 3v²)

4. For the function f(x, y) = y² sin(x) verify that ∂/∂x(∂f/∂y) = ∂/∂y(∂f/∂x).

Gradient vectors and directional derivatives

5. (a) 18/5, (b) 0, (c) 0, (d) 35/√14, (e) −2e²/√14, (f) −2/√6

6. (a) ∇g(x, y, z) = 2x i + 2y j + 0 k.

   (b) g(R, θ, z) = R² − 1.

   (c) ∇g(R, θ, z) = 2R e_R + 0 e_θ + 0 e_z. Observe this vector points out radially from the axis of the cylinder
   x² + y² = 1 and is normal (perpendicular) to the cylinder surface.

7. (a) ∇g(x, y, z) = 2x i + 2y j + 2z k.

   (b) g(r, θ, φ) = r² − 1.

   (c) ∇g(r, θ, φ) = 2r e_r + 0 e_θ + 0 e_φ. Observe this vector points out radially from the origin and is normal
   (perpendicular) to the spherical surface x² + y² + z² = 1.

8. ∇g(R, θ, z) = −(cos(θ)/R²) e_R − (a sin(θ)/R + sin(θ)/R²) e_θ + 0 e_z.
Tangent planes

9. (a) f̃(x, y) = 8 + 2(x − 1) + 3(y − 2)

   (b) g̃(x, y) = 1/2 + (1/2)(x − π/4) − (1/2)(y − π/4)

   (c) h̃(x, y, z) = log_e(2) + (x − 1) + (z − 1)

   (d) q̃(x, y, z) = 5 − 9(y − 1) + 8(z − 2)

   (f) w̃(x, y, z) = 1/2 − (x − 1/2) − (y − 1/2) − (z − 1/2)

10. The calculator's answer is in brackets.

    (a) 7.9 (7.900), (b) 0.304 (0.2397), (c) 0.393 (0.3784), (d) 3.7 (3.267),
    (e) -0.14 (-0.1613), (f) 0.4 (0.3464)

   
11. The vector

    N = (∂f/∂x) i + (∂f/∂y) j − k

is normal to the surface.

12. For a surface written in the form g(x, y, z) = 0 the vector

    N = ∇g = (∂g/∂x) i + (∂g/∂y) j + (∂g/∂z) k

is normal to the surface.

13. (a) (x, y) = (0, 0)

    (b) (x, y) = (0, 0), (x, y) = ±(1/√2, 1/√2) and (x, y) = ±(1/√2, −1/√2)

    (c) (x, y) = (1/√3, 0) and (x, y) = (−1/√3, 0)