You are on page 1of 185

Lecture Notes

Engineering Mathematics

Dr. Elhassen Ai Ahmed


Table of Contents

Part ( 1 ) : Linear Algebra ......................................................................................................................... 3


1.1 - Review of Matrix Algebra ......................................................................................................................... 3
1.2 - Linear System of Equations......................................................................................................................... 6
1.2.1 - Elementary row operations ERO .......................................................................................................... 8
1.2.2 - Gauss elimination method .................................................................................................................... 9
1.2.3 - Row echelon form matrix ................................................................................................................... 10
1.2.4 - The rank of matrix............................................................................................................................. 14
1.2.5 - Existence and uniqueness of solutions of linear systems ...................................................................... 16
1.3 - Matrix Eigenvalue Analysis ...................................................................................................................... 19
1.4 - Diagonalization Power and Exponential Matrices .................................................................................... 24
Exercises (1)..................................................................................................................................................... 27
Part ( 2 ) : Differential Equations ............................................................................................................ 30
2.1 - Review of Basic Concepts ........................................................................................................................ 31
2.2 - Second Order Ordinary Differential Equations.......................................................................................... 33
2.2.1 - Superposition principle or linearity principle....................................................................................... 33
2.2.2 - Homogeneous linear ODEs with constant coefficients ........................................................................ 34
2.2.3 - Non Homogeneous linear ODEs with constant coefficients ................................................................ 36
2.2.4 - Euler – Cauchy OD equation ............................................................................................................. 41
2.2.5 - Linear ODEs with variable Coefficients ............................................................................................. 42
2.3 - Higher Order Ordinary Differential Equations .......................................................................................... 43
Exercise (2) ...................................................................................................................................................... 45
Part ( 3 ) : Power Series Solutions of ODE ............................................................................................ 47
3.1 - Review of Power Series ............................................................................................................................ 47
3.2 - Series Solutions of 2nd Order Linear DEs .................................................................................................. 51
3.2.1 - Solution about ordinary point............................................................................................................. 52
3.2.2 - Legendre’s Differential Equation ....................................................................................................... 56
3.2.3 - Solution about regular singular point (Frobenius method) ................................................................... 57
3.2.4 - Bessel’s Differential Equation ........................................................................................................... 61
Exercise (3) ...................................................................................................................................................... 67
Part ( 4 ) : Solution of ODEs by Laplace Transform ................................................................................. 68
4.1 - Introduction .............................................................................................................................................. 68
4.2 - Laplace Transform .................................................................................................................................... 70
4.2.1 - Existence and uniqueness of Laplace transforms ................................................................................. 72
4.2.2 - Properties of the Laplace transform..................................................................................................... 74

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 1


4.2.3 - Evaluation of inverse transforms (partial fraction method).................................................................. 77
4.2.4 - Discontinuous and Periodic Functions ............................................................................................... 78
4.3 - Laplace Method Algorithm for ODE Solution ........................................................................................... 84
4.4 - Convolution Method Solution of IVP Differential Equations .................................................................... 98
4.5 - Laplace Method Solution of Integral and Integro-Differential Equations .................................................... 99
4.6 - Differential Equations with variable Coefficients..................................................................................... 100
Exercise (4) .......................................................................................................................................... 102
Part ( 5 ) : Solution of A system of ODEs............................................................................................... 105
5.1 - Introduction ............................................................................................................................................ 105
5.2 - Conversion of an nth-Order ODE to a System ......................................................................................... 106
5.3 - The Substitution or Elimination Method .................................................................................................. 109
5.4 - Matrix Solutions of Linear System of ODEs – (Homogeneous Case) ...................................................... 111
5.5 - Matrix Solutions of Linear System of ODEs (Non-homogeneous Case) ................................................... 118
5.5.1 - Variation of parameters .................................................................................................................... 118
5.5.2 - Solution by diagonalization .............................................................................................................. 119
5.5.3 - Solutions by Laplace transform......................................................................................................... 121
Exercises ( 5 ) ................................................................................................................................................. 124
Part ( 6 ) : Boundary Value Problems and Fourier Analysis ................................................................... 125
6.1 - Boundary Value Problems....................................................................................................................... 125
6. 2 - Eigen Value Problems ............................................................................................................................ 127
6.3 - Fourier Series ......................................................................................................................................... 129
6.4 - Fourier Cosine and Sine Series ................................................................................................................ 138
Exercises ( 6 ) ................................................................................................................................................. 141
Part ( 7 ) : Partial Differential Equations ............................................................................................... 143
7.1 - Introduction ............................................................................................................................................ 143
7.2 - Classification of 1st Order PD Equations................................................................................................. 144
7.3 - Classification of 2nd Order PD Equations ................................................................................................. 145
7.4 - Initial and Boundary Conditions ............................................................................................................ 147
7.5 - Solution of a PDE ................................................................................................................................... 149
7.6 - Separation of Variables ........................................................................................................................... 149
7.6.1 - The heat equation ............................................................................................................................. 150
7.6.2 - The Laplace equation ....................................................................................................................... 158
7.6.3 - The wave equation ........................................................................................................................... 166
7.7 - Solution by Laplace Transform ............................................................................................................... 171
Exercise ( 7 ) .................................................................................................................................................. 178

Laplace Transform Tables .............................................................................................................................. 180

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 2


Part ( 1 ) : Linear Algebra
______________________________________________________________________________

1.1 - Review of Matrix Algebra


Knowledge of matrices is essential for understanding the solution of linear algebraic
equations. Matrices provide a concise way to represent and manipulate linear
algebraic equations.
A matrix is a rectangular array of elements. The elements can be symbolic expressions
or numbers. Matrix [A] is denoted by

11 12. . . 1
⎡ 21 22 2 ⎤
[ ] = ⎢⎢ . . . . .


⎢ . . ⎥
⎣ 1 2 . . ⎦

A horizontal set of elements is called a row and a vertical set is called a column. The
first subscript i always designates the number of the row in which the element lies.
The second subscript j designates the column.
Each matrix has rows and columns and this defines the size of the matrix. If a matrix
[A] has m rows and n columns, the size of the matrix is denoted by (m×n). The
matrix [A] may also be denoted by [A]m×n to show that [A] is a matrix with m rows and
n columns.
Each entry in the matrix is called the entry or element of the matrix and is denoted by
aij where i is the row number and j is the column number of the element. The set m
×
x n of matrices with real number entries is denoted ℝ . The set of m x n matrices
×
with complex entries is ℂ .
If the number of rows (m) of a matrix is equal to the number of columns (n) of the
matrix, (m = n), it is called a square matrix. The entries a11, a22, . . . ann are called the
diagonal elements of a square matrix. Sometimes the diagonal of the matrix is also
called the principal or main of the matrix.
An identity matrix is a diagonal matrix where all elements on the main diagonal are
equal to 1 and the other elements in the matrix are zeros

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 3


A vector is a matrix with only one row or column. Its entries are called the
components of the vector.

Matrix operation
Addition and subtraction : Two matrices [A] and [B] can be added or subtracted only
if they are the same size and the result is given by
[ ] = [ ] ± [ ] ℎ = ±
[ ] ± [ ] = [ ] ± [ ] Commutative law of addition
[A]+ ([B]+ [C]) = ([A]+ [B])+ [C] Associative law of addition
Multiplication : Two matrices [A] and [B] can be multiplied only if the number of
columns of [A] is equal to the number of rows of [B] to give
[ ] × =[ ] × [ ] ×

If [A] is a n × n matrix and k is a real number, then the scalar product of k and [A] is
another matrix [B], where bij = k aij .
Associative law of multiplication : If [A], [B] and [C] are m× n, n × p and p × r size
matrices, respectively, then [A]([B][C]) = ([A][B])[C]
and the resulting matrix size on both sides is m× r.
Commutative law of multiplication : [A] [B] ≠ [B] [A].
A and B are said commute If [A] [B] = [B] [A].
Distributive law: If [A] and [B] are m× n size matrices, and [C] and [D] are n × p size
matrices
[A]([C]+ [D]) = [A][C]+ [A][D] and ([A]+ [B])[C] = [A][C]+ [B][C]
and the resulting matrix size on both sides is m× p.
Linearity [A] ( α[B]+ β[C] ) = α[A][B]+ β[A][C]
Transpose of a matrix :
Let [A] be a m x n matrix. Then [B] is the transpose of the [A] if bji = aij for all i and j.
That is, the ith row and the jth column element of [A] is the jth row and ith column
element of [B]. Note, [B] would be a n×m matrix. The transpose of [A] is denoted by
[A]T. Note, the transpose of a row vector is a column vector and the transpose of a

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 4


column vector is a row vector. Also, note that the transpose of a transpose of a
matrix is the matrix itself.
Trace of a square matrix: the sum of diagonal elements Tr =∑
×

Determinant of a matrix
A determinant of a square matrix is a single unique real number corresponding to a
matrix. For a matrix [A], determinant is denoted by |A| or det(A). So do not use [A]
and |A| interchangeably. If [A] and [B] are square matrices of same size, then
det (AB) = det (A) det (B). A matrix A is said to be a singular if det(A) = 0. It is called
non-singular if det(A) ≠ 0.
Determinant theorems : Let [A] be a n×n matrix.
1. If a row or a column in a n×n matrix [A] is zero, then det (A) =0
2. If a row is proportional to another row, then det(A) = 0.
3. If a column is proportional to another column, then det (A) = 0
4. If a column or row is multiplied by k to result in matrix [B]. Then det(B)=k det(A).
5. Since det(In) = 1, where In is the n×n identity matrix
6. If B is obtained from A by interchanging two rows then det(B) = − det(A),
7. B is obtained from A by multiplying a row by c then det(B) = c det(A),
8. Let A ( upper matrix form, lower matrix form or diagonal form ) then its
determinant is the product of the diagonal elements.
Inverse of a matrix
The inverse of a square matrix [A], if existing, is denoted by [A]-1 such that
[A][A] −1 = [I] = [A] −1 [A]
In other words, let [A] be a square matrix. If [B] is another square matrix of same size
such that [B][A] = [I], then [B] is the inverse of [A]. [A] is then called to be invertible or
non-singular. If [A]-1 does not exist, [A] is called to be non invertible or singular.
Special types of matrices
There are a number of special forms of square matrices that are important and should
be noted:

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 5


Asymmetric matrix is one where the rows equal the columns. that is, aij = aji for all i.s
and j.s. that is AT = A

Skew symmetric matrix is one where the rows equal the columns. that is, aij = - aji for
all i.s and j.s. AT = - A

Zero matrix: A matrix whose all entries are zero is called a zero matrix
An upper triangular matrix is one where all the elements below the main diagonal
are zero, and A lower triangular matrix is one where all elements above the main
diagonal are zero. A diagonal matrix is a square matrix where all elements off the
main diagonal are equal to zero.
Orthogonal matrix if the transpose gives the inverse of the matrix AT = A-1
Diagonally dominant matrix : A general n x n matrix A = (aij) is row diagonally
dominant if

| |≥ | |> for at least one i,

That is, for each row, the absolute value of the diagonal element is greater than or equal to the
sum of the absolute values of the rest of the elements of that row, and that the inequality is
strictly greater than for at least one row. Diagonally dominant matrices are important in
ensuring convergence in iterative schemes of solving simultaneous linear equations.
Tridiagonal matrices: A tridiagonal matrix is a square matrix in which all elements not
on the following are zero - the major diagonal, the diagonal above the major diagonal,
and the diagonal below the major diagonal.
A banded matrix has all elements equal to zero, with the exception of a band
centered on the main diagonal:

1.2 - Linear System of Equations


Recall that in two dimensions a line in a rectangular xy-coordinate system can be
represented by an equation of the form
+ = ( 0 )

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 6


and in three dimensions a plane in a rectangular xyz-coordinate system can be
represented by an equation of the form
+ + = ( , 0 )
These are examples of “linear equations,” the first being a linear equation in the
variables x and y and the second a linear equation in the variables x, y, and z. More
generally, we define a linear equation in the n variables to be one that can be
expressed in the form
+ + … + =
where : a's and b are constants, and the a's are not all zero.

A finite set of linear equations is called a system of linear equations or, more briefly, a
linear system. The variables are called unknowns. For a general set of “m” linear
equations and “n” unknowns.

where the a's are constant coefficients, the b's are constants, the x's are unknowns, n
is the number of unknowns and m is the number of equations.
The system is called linear because each variable x appears in the first power only, just
as in the equation of a straight line. a11, … , amn are given numbers, called the
coefficients of the system. b1, … , bm on the right are also given numbers. If all the bj
are zero, then thesystem is called a homogeneous system. If at least one bj is not
zero, then it is called a nonhomogeneous system.
A solution of the system is a set of numbers x1, … , xn that satisfies all the m
equations. A solution vector is a vector x whose components form a solution of the

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 7


system. If the system is homogeneous, it always has at least the trivial solution x1 = 0,
… , xn = 0.

Let a system with : m the number of equations and n the number of unknowns
IF m < n the system is Underdetermined system
IF m = n the system is Determined system
IF m > n the system is over-determined system
Matrix form of linear systems
The system can be rewritten in the matrix form as :
. . . 1
1
⎡ ⎤ ⎡ 2⎤ ⎡ ⎤
⎢ . ⎢ 2⎥

. . . . ⎥⎥ ⎢⎢ . ⎥⎥ = ⎢ . ⎥
⎢ . . ⎥⎢ . ⎥ ⎢ . ⎥
⎣ . . ⎦⎣ ⎦ ⎣ ⎦

Denoting the matrices by [A], [X ], and [B], the system of equation is


[A] [X ]=[B]
where [A] is called the coefficient matrix, [B] is called the right hand side vector and
[X ] is called the solution vector.
Sometimes [A] [X ]=[B] systems of equations is written in the augmented form. That
. . . 1⎤

⎢ . . .
2⎥
[ | ] = ⎢ . . . ⎥
⎢ . . . ⎥
. .
⎣ ⎦

Note that the augmented matrix [ | ] determines the system completely because it contains
all the given numbers appearing in the system.

The system of linear equations is homogeneous if the vector of the constants [b] = 0, that is a
zero vector. And the system is non-homogeneous if [b] is not zero.

1.2.1 - Elementary row operations ERO


The set of equation operations ERO on the equations does not alter the solution set
of the system. The three basic operations are summarized as follows:
 Interchange : Interchange two equations ( rows ). ↔
 Scaling : Multiply an equation ( row )by a non-zero number. →

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 8


 Replacement : Replace an equation ( row ) by the sum of this equation ( row )
and another equation ( row ) multiplied by a number. → +

Clearly, the interchange of two equations does not alter the solution set. Neither does their
addition because we can undo it by a corresponding subtraction. Similarly for their
multiplication, which we can undo by multiplying the new equation by 1/a (since a ≠ 0),
producing the original equation.
Equivalent Linear Systems : Let [A b] and [C d] be augmented matrices of two linear
systems. Then the two linear systems are said to be equivalent if [C d] can be obtained
from [A b] by application of a finite number of elementary row operations.
systems having the same solution sets are often called equivalent systems. But note well that
we are dealing with row operations. No column operations on the augmented matrix are
permitted in this context because they would generally alter the solution set.

1.2.2 - Gauss elimination method


The Gaussian elimination method is a procedure for solving a linear system Ax = b (consisting of
m equations in n unknowns) by bringing the augmented matrix

to an upper triangular form

by application of elementary row operations. This elimination process is also called the forward
elimination method. This new system can be solved by a technique called backward-
substitution (forward-substitution), the unknowns are found starting from the bottom (the top)
of the system.

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 9


The Gauss Jordon elimination is carried out by reducing the original system to an equivalent
system of a diagonal matrix form, the unknowns are directly calculated.

Example 1.8 :
Solve the following linear system by Gauss elimination method.
x + y + z = 3, x + 2y + 2z = 5, 3x + 4y + 4z = 11

then

The Gauss Elimination method starts with the augmented matrix [A b]


1 1 1 3
1 2 2 5
3 4 4 11
and proceeds as follows:
1. Replace 2nd equation by 2nd equation minus the 1st equation
2. Replace 3rd equation by 3rd equation minus 3 times 1st equation.
1 1 13
0 1 12
0 1 12
3. Replace 3rd equation by 3rd equation minus the 2nd equation.
1 1 13
0 1 12
0 0 00
Thus, the solution set is

with z arbitrary. In other words, the system has infinite number of solutions.


1.2.3 - Row echelon form matrix
The original system of m equations in n unknowns has augmented matrix [A | b]. This
is to be row reduced to matrix [R | f]. The two systems Ax = b and Rx = f are
equivalent: if either one has a solution, so does the other, and the solutions are

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 10


identical. The elimination method introduced in the previous section reduces the
augmented matrix to a good matrix ( meaning the corresponding equations are easy
to solve). Two of these matrices are matrices in either row-echelon form ( REF).
A rectangular matrix is said to be in row-echelon form ( REF ) if it has the following
three characterizations:
 All rows consisting entirely of zeros are at the bottom.
 The leading entry in each non-zero row is 1 and is located in a column to the right of
the leading entry of the row above it.
 All entries in a column below a leading entry are zero.
The matrix is said to be in reduced row-echelon form if in addition to the above, the
matrix has the following additional characterization:
 Each leading 1 is the only nonzero entry in its column.
From the definition above, note that a matrix in row-echelon form has zeros below
each leading 1, whereas a matrix in reduced row-echelon form has zeros both above
and below each leading 1.
There are three facts about row echelon forms and reduced row echelon forms that
are important to know but we will not prove:
1. Every matrix has a unique reduced row echelon form; that is, regardless of whether
you use Gauss-Jordan elimination or some other sequence of elementary row
operations, the same reduced row echelon form will result in the end.
2. Row echelon forms are not unique; that is, different sequences of elementary row
operations can result in different row echelon forms.
3. Although row echelon forms are not unique, all row echelon forms of a matrix A
have the same number of zero rows, and the leading 1's always occur in the same
positions in the row echelon forms of A. Those are callled the pivot positions of A. A
column that contains a pivot position is called a pivot column of A.
Example 1.10 :
Apply ERO to transform the following matrix into REF then RREF
0 3 −6 6 4 −5
3 −7 8 −5 8 9
3 −9 12 −9 6 15

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 11


Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 12
Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 13
1.2.4 - The rank of matrix
The rank of a matrix A as the maximum number of linearly independent row vectors of A. the
maximum number of linearly independent row vectors of a matrix does not change if we
change the order of rows or multiply a row by a nonzero c or take a linear combination by
adding a multiple of a row to another row. This shows that rank is invariant under elementary
row operations.
The rank r of a matrix A also equals the maximum number of linearly independent column
vectors of A. Hence A and its transpose AT have the same rank.
Also, the number of nonzero rows in the row-reduced coefficient matrix R is called the rank of
R and also the rank of A.
( ) ≤ ( , )
The null space of matrix A written as Nul A and it is the set of all solutions of the homogeneous
equation Ax = 0 and written in a notation form as :
Nul A = { : is in R and Ax = 0}

The Column Space of a Matrix Another important subspace associated with a matrix is its
column space. Unlike the null space, the column space is defined explicitly via linear
combinations. The column space of an m×n matrix A, written as Col A, is the set of all linear
combinations of the columns of A

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 14


The Row Space of matrix If A is an m×n matrix, each row of A has n entries and thus can be
identified with a vector in Rn. The set of all linear combinations of the row vectors is called the
row space of A and is denoted by Row A. Each row has n entries, so Row A is a subspace of Rn.
Since the rows of A are identified with the columns of AT, we could also write Col AT in place of
Row A.

Example 1.11 :
Find the spanning set for the null space of a given system ?
The first step is to find the general solution of the system Ax = 0 in terms of free variables by
reducing the system [A 0 ]
−3 6 −1 1 −7 0
1 −2 2 3 −1 0
2 −4 5 8 −4 0
By ERO the system converted to rref
1 −2 0 −1 3 0
0 0 1 2 −2 0
0 0 0 0 0 0
Then : the free variables are x2, x4 and x5
x1 = 2x2 + x4 – 3x5 and x3 = -2x4 + 2x5
expressing the general solution into a linear combinations of vectors weighing free variables
2 + −3 2 1 −3
⎧ ⎫ ⎧ ⎫ ⎧ ⎫ ⎧ ⎫ ⎧ ⎫
⎪ ⎪ ⎪ ⎪ ⎪1⎪ ⎪0⎪ ⎪0⎪
= −2 +2 = 0 + −2 + 2
⎨ ⎬ ⎨ ⎬ ⎨0⎬ ⎨1⎬ ⎨0⎬
⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪
⎩ ⎭ ⎩ ⎭ ⎩0⎭ ⎩0⎭ ⎩1⎭

= + +

Then every linear combination of the vectors u, v and w are elemnets of the null
space of the system. Thus {u, v, w} spanning for Nul A

Example 1.12 :
Determine whether b is in the column space of A and if so, express b as a linear
combination of the column vectors of A:

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 15


The coefficient matrix Ax = b is

The augmented matrix for the linear system that corresponds to the system is

We reduce this matrix to the Reduced Row Echelon Form:

Therefore, the system has no solution (i.e. the system is inconsistent). Since the
equation Ax = b has no solution, therefore b is not in the column space of A.

1.2.5 - Existence and uniqueness of solutions of linear systems


Rank gives complete information about existence, uniqueness, and general structure of the
solution set of linear systems as follows:

Consider a linear system : Ax = b


where A is m × n matrix, and x, b are vectors of orders n × 1, and m × 1, respectively.
Suppose that rank (A) = r(A) and rank([A b]) = r(A b ). Then:
 The system has no solution If r(A) ≠ r(A b), the linear system has no solu on. the system is
inconsistent
 The system has a solution If r(A) = r(A b) then the linear system is consistent.
i. One solution : If r = n ( the number of unknowns or columns ) then the solution set
contains a unique vector x0 satisfying A x0 = b.

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 16


ii. Infinite number of solutions If r < n then the solution set has the form
{x0 + k1u1 + k2u2 + · · · + kn−r un−r : ki ∈ ℝ, 1 ≤ i ≤ n − r}

Note : Consistent system of equations can only have a unique solution or infinite solutions
AND cannot have a finite (more than one but not infinite) number of solutions.

Homogeneous Linear System

always has the trivial solution x1 =0,…, xn =0. Nontrivial solutions exist if and only if rank If
rank(A) < n. if rank(A) = r < n, these solutions, together with x = 0 form a vector space of
dimension (n – r) called the solution space of the system.
In particular, if x(1) and x(2) are solution vectors of the system, then = ( ) + ( )

with any scalars c1 and c2 is a solution vector of the system (This does not hold for
nonhomogeneous systems. Also, the term solution space is used for homogeneous systems
only.)
The solution space of the homogeneous system is also called the null space of A because
Ax = 0 for every x in the solution space. Its dimension is called the nullity of A.
Rank ( A ) + Nullity ( A ) = n ( the number of unknowns)
Nonhomogeneous Linear System
If a nonhomogeneous linear system is consistent, then all of its solutions are obtained in the
form :
= +
Where:
is any (fixed or particular ) solution of of the system and
all the solutions of the corresponding homogeneous system.

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 17


Example 1.13 :
Consider a linear system Ax = b. the size of the system ( 6 x 7 ), where m=6 is the number of
equations and n=7 the number of unknowns. Suppose the application of elimination method
by ( Gauss Jordon ) has reduced the augmented matrix [A b] to RREF :
0 2 −1 0 0 2 8
⎡0 1 3 0 0 5 1⎤
⎢ ⎥
[ ⋮ ] = ⎢0 0 0 0 1 0 −1 2⎥
⎢0 0 0 0 0 1 1 4 ⎥
⎢0 0 0 0 0 0 0 0⎥
⎣0 0 0 0 0 0 0 0⎦
Observations :
The rank of the Coefficient matrix [A] = 4
The rank of the Coefficient matrix [A : b] = 4
Since Rank(A) = Rank(a:b), then the system has a solution
Since Rank(A) < n , then the system has infinite number of solutions
Since the rank is 4 then there is 4 basic variables.
The leading terms appear in columns 1, 2, 5 and 6. Thus, the variables x1, x2, x5 and x6 are the
basic variables.
The remaining variables are free variables x3, x4 and x7 are free variables = n – r = 3
The solution will be :
Assume that : x3 = t, x4 = s and x7 = v Where { (t , s , v) ∈ ℝ }
x6 = 4 - x7 , x5 = 2 + x7 , x2 = 1 - x3 - 3x4 - 5x7 and x1 = 8 - 2x3 + x4 - 2x7
Then the solution set can be put in the form:
8−2 + −2 8 −2 1 −2
⎧⎡ ⎤ ⎡ ⎤ ⎡ 1⎤ ⎡−1⎤ ⎡−3⎤ ⎡−5⎤ ⎫
⎪⎢
⎪ ⎥ ⎢1 − − 3 − 5 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎪

⎢ ⎥ ⎢ ⎥ ⎢ 0⎥ ⎢1⎥ ⎢0⎥ ⎢0⎥
x = ⎢ ⎥ = = ⎢ 0⎥ + ⎢ 0 ⎥+ ⎢ 1 ⎥+ 0
⎢ ⎥ ∶ ∀ , , ∈ ℝ
⎨⎢ ⎥ ⎢ 2+ ⎥ ⎢ 2⎥ ⎢0⎥ ⎢0⎥ ⎢1⎥ ⎬

⎪⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢0⎥ ⎢0⎥ ⎢−1⎥ ⎪

4− 4
⎩⎣ ⎦ ⎣ ⎦ ⎣ 0⎦ ⎣0⎦ ⎣0⎦ ⎣1⎦ ⎭

x is called the general solution of the inhomogeneous system


Observe that the general solution to the inhomogeneous system worked out here can be
written in the form xI = x0 + xH
Where xH is the general solution of the corresponding homogeneous system

Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 18


1.3 - Matrix Eigenvalue Analysis
Consider the operator box illustrated in the figure, where the input to the operator
box is the n × 1 nonzero column vector x = col(x1, x2, . . . , xn) and the output from the
operator box is the n × 1 column vector y = Ax, where A is an n × n nonzero constant
matrix. The operator box is said to transform the nonzero column vector x into the
column vector y by matrix multiplication.

If there are special nonzero column vectors x such that the output y is proportional to
the input x, then these special vectors are called eigenvectors and the proportionality
constants are called eigenvalues. If the output y is proportional to the nonzero input
x, then the equation y = Ax = λx must be satisfied, where λ is the scalar proportionality
constant. If the equation Ax = λx has nonzero solutions, then one can write

(A − λI) x = 0

Cramer's rule states that in order for this last equation to have a nonzero solution it
is required that the determinant of the coefficient matrix of the unknowns x1, x2, . . . , xn
be zero. This requires that

det(A − λI) = 0


Solving this equation for the values of λ gives the eigenvalues (λ1, λ2, . . . , λn)
associated with the matrix A. Hence the eigenvalues of A are simply the roots of the
characteristic polynomial and are also known as the characteristic roots of A. The set
of distinct eigenvalues, denoted by σ (A) , is called the spectrum of A.
Once the eigenvalues of a matrix A have been found, substituting an eigenvalue λ
into the corresponding homogeneous system enables one to solve for the
corresponding eigenvector. We can find the eigenvectors by Gaussian elimination.

Applications
Eigenvectors and eigenvalues have many important applications. In aeronautical
engineering, eigenvalues may determine whether the flow over a wing is laminar or turbulent.
In electrical engineering they may determine the frequency response of an amplifier or the
reliability of a national power system. In structural mechanics eigenvalues may determine
whether an automobile is too noisy or whether a building will collapse in an earthquake.
In probability they may determine the rate of convergence of a Markov process. In ecology
they may determine whether a food web will settle into a steady equilibrium. In numerical
analysis they may determine whether a discretization of a differential equation will get the right
answer or how fast a conjugate gradient iteration will converge.

Eigen space
The set of all solutions of (A - λI)x = 0 is just the null space of the matrix A - λI. So this
set is a subspace of Rn and is called the eigenspace of A corresponding to λ. The
eigenspace consists of the zero vector and all the eigenvectors corresponding to λ .

Additional Properties Involving Eigenvalues and Eigenvectors


The following are some additional properties and definitions relating to eigenvalues
and eigenvectors of an n×n square matrix A. The properties are given without proof.
1. If the n eigenvalues λ1, λ2, . . . , λn of A are all distinct, then there exists n-linearly
independent eigenvectors.
2. The sum of the eigenvalues of A is ∑ λi = trace(A) = ∑ aii

3. The product of the eigenvalues of A is ∏ λi = det(A)


4. The eigenvalues of the transposed matrix AT are the same as those of the matrix A.
5. If an eigenvalue repeats itself, then the characteristic equation is said to have a
multiple root. In such cases there may or may not exist n linearly independent
eigenvectors.
6. If A is a symmetric matrix and λi is an eigenvalue of multiplicity r , then there are
r linearly independent eigenvectors.
7. An n × n square matrix is similar to a diagonal matrix if it has n independent eigenvectors.
8. The set of all eigenvalues of A is called the spectrum of the matrix A.
9. The largest (in absolute value) eigenvalue of the matrix A is called the spectral radius of A.
10. If A is a real symmetric matrix, then all eigenvalues are real.
11. If A is a real skew symmetric matrix, then all eigenvalues are imaginary.
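Several of these properties are easy to check numerically. The sketch below (assuming NumPy, on an arbitrary symmetric example matrix) illustrates properties 2, 3 and 10.

```python
import numpy as np

# Hypothetical symmetric 3x3 example: the eigenvalues sum to the trace
# (property 2), multiply to the determinant (property 3) and are real
# because the matrix is real symmetric (property 10).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam = np.linalg.eigvalsh(A)          # eigvalsh: for symmetric matrices
assert np.isclose(lam.sum(), np.trace(A))
assert np.isclose(lam.prod(), np.linalg.det(A))
assert np.all(np.isreal(lam))
```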

Example 1.14 :
Find the eigen values and the corresponding eigen spaces for the matrix

A = ⎡  2  −4 ⎤
    ⎣ −1  −1 ⎦

We first seek all scalars λ so that Ax = λx :

⎡  2  −4 ⎤ ⎡ x1 ⎤ = λ ⎡ x1 ⎤
⎣ −1  −1 ⎦ ⎣ x2 ⎦     ⎣ x2 ⎦

⎛ ⎡  2  −4 ⎤ − λ ⎡ 1 0 ⎤ ⎞ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎝ ⎣ −1  −1 ⎦     ⎣ 0 1 ⎦ ⎠ ⎣ x2 ⎦   ⎣ 0 ⎦

⎡ 2−λ   −4  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1   −1−λ ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

The above system of linear equations has nontrivial solutions precisely when

det ⎡ 2−λ   −4  ⎤ = 0
    ⎣ −1   −1−λ ⎦

det ⎡ 2−λ   −4  ⎤ = (2 − λ)(−1 − λ) − 4 = λ² − λ − 6 = 0
    ⎣ −1   −1−λ ⎦

λ² − λ − 6 = (λ + 2)(λ − 3) = 0
Then the eigen values are : λ = −2 and λ = 3
The spectrum is σ(A) = {−2, 3}
Let's find the eigenvectors corresponding to λ1 = 3.
The system will be : (A − 3I)x = 0

⎡ 2−3   −4  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1   −1−3 ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

⎡ −1 −4 ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1 −4 ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

The solution of the linear system will be x2 = t , x1 = −4t, which is the parametric form.

The eigen vectors corresponding to λ1 = 3 are the nonzero multiples of col(−4, 1), and the set

{ t col(−4, 1) : t ∈ ℝ } is called the eigen space for λ1 = 3

Repeating this process with λ2 = −2, we find that x1 = x2.

The eigen vectors corresponding to λ2 = −2 are the nonzero multiples of col(1, 1), and the set

{ t col(1, 1) : t ∈ ℝ } is called the eigen space for λ2 = −2

Example 1.15 :
Find the eigen values and the corresponding eigen spaces for the matrix

A = ⎡  5   8   16 ⎤
    ⎢  4   1    8 ⎥
    ⎣ −4  −4  −11 ⎦

det(A − λI) = det ⎡ 5−λ   8     16   ⎤ = −(λ − 1)(λ + 3)² = 0
                  ⎢  4   1−λ     8   ⎥
                  ⎣ −4   −4   −11−λ  ⎦

Then the eigen values are λ1 = 1 , λ2 = λ3 = −3

The spectrum is σ(A) = {1, −3, −3}



The eigen vector corresponding to λ1 = 1 must satisfy the system (A − I)x = 0 :

⎡  4   8   16 ⎤ ⎡ x1 ⎤   ⎡ 0 ⎤
⎢  4   0    8 ⎥ ⎢ x2 ⎥ = ⎢ 0 ⎥   by Gauss elimination we get : x1 = −2x3 , x2 = −x3
⎣ −4  −4  −12 ⎦ ⎣ x3 ⎦   ⎣ 0 ⎦

The eigen vectors corresponding to λ1 = 1 are the nonzero multiples of col(−2, −1, 1), and the set

{ t col(−2, −1, 1) : t ∈ ℝ } is called the eigen space for λ1 = 1

The eigen vectors corresponding to λ2 = λ3 = −3 must satisfy the system (A + 3I)x = 0 :

⎡  8   8   16 ⎤ ⎡ x1 ⎤   ⎡ 0 ⎤
⎢  4   4    8 ⎥ ⎢ x2 ⎥ = ⎢ 0 ⎥   by Gauss elimination we get : x1 = −x2 − 2x3
⎣ −4  −4   −8 ⎦ ⎣ x3 ⎦   ⎣ 0 ⎦

Setting x2 = s and x3 = t, the eigen vectors corresponding to λ = −3 are the nonzero vectors of the form

s col(−1, 1, 0) + t col(−2, 0, 1)

The eigen space is two dimensional and its basis is col(−1, 1, 0) , col(−2, 0, 1) :

{ s col(−1, 1, 0) + t col(−2, 0, 1) : s, t ∈ ℝ } is called the eigen space for λ = −3
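SymPy reports both the algebraic and the geometric multiplicity directly, which confirms the two-dimensional eigenspace found above (a sketch, assuming the (3,3) entry of A is −11 as used in the worked determinant):

```python
import sympy as sp

# Matrix of Example 1.15; lambda = -3 should carry a 2-dimensional eigenspace.
A = sp.Matrix([[5, 8, 16],
               [4, 1, 8],
               [-4, -4, -11]])
evs = A.eigenvects()   # list of (eigenvalue, algebraic multiplicity, basis)
info = {val: (mult, len(basis)) for val, mult, basis in evs}
assert info[1] == (1, 1)    # simple eigenvalue lambda = 1
assert info[-3] == (2, 2)   # algebraic and geometric multiplicity both 2
```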

Example 1.16 :
Determine the eigenvalues and eigenvectors of A = ⎡ 2 −1 ⎤
                                                  ⎣ 5 −2 ⎦

⎡ 2−λ   −1  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣  5   −2−λ ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

The above system of linear equations has nontrivial solutions precisely when



det ⎡ 2−λ   −1  ⎤ = (2 − λ)(−2 − λ) + 5 = λ² + 1 = 0
    ⎣  5   −2−λ ⎦

The spectrum is σ(A) = {i , −i}
Notice that the eigen values are complex conjugates of each other. Now find the
eigen spaces. Let's find the eigenvectors corresponding to λ1 = −i.
The system will be : (A + iI)x = 0

⎡ 2+i   −1  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣  5   −2+i ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

The augmented matrix will be reduced using Gauss elimination:

⎡ 2+i   −1  │ 0 ⎤   divide R1 by (2+i) :   ⎡ 1  −(2−i)/5 │ 0 ⎤
⎣  5   −2+i │ 0 ⎦                          ⎣ 5    −2+i   │ 0 ⎦

R2 = R2 − 5R1 :   ⎡ 1  −(2−i)/5 │ 0 ⎤
                  ⎣ 0      0    │ 0 ⎦

The solution of the linear system will be x2 = t , x1 = ((2 − i)/5) t

The eigen vectors corresponding to λ1 = −i are the nonzero multiples of col((2 − i)/5, 1),
which can be put in the form col(2 − i, 5). Similarly ( work out the details ? )
the eigen vectors corresponding to λ2 = i are the multiples of col(2 + i, 5)
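Numerically the complex pair of Example 1.16 comes out as expected (an illustrative check with NumPy):

```python
import numpy as np

# Matrix of Example 1.16; its characteristic polynomial is lambda^2 + 1,
# so the eigenvalues are the complex conjugate pair +i and -i.
A = np.array([[2.0, -1.0],
              [5.0, -2.0]])
lam = np.linalg.eig(A)[0]
assert np.allclose(sorted(lam, key=lambda z: z.imag), [-1j, 1j])
```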

1.4 - Diagonalization Power and Exponential Matrices

Diagonalization of an n × n matrix
Let the n × n matrix A have n eigenvalues λ1, λ2, . . . , λn, not all of which need be distinct, and
let there be n corresponding linearly independent eigenvectors x1, x2, . . . , xn, so that
A xi = λi xi
Then define the matrix P to be the n × n matrix in which the ith column is the eigenvector
xi , with i = 1, 2, . . . , n, so that in partitioned form
P = [ x1 x2 … xn ]
and let D be the n × n diagonal matrix

D = ⎡ λ1 ⋯ 0  ⎤
    ⎢ ⋮  ⋱ ⋮  ⎥
    ⎣ 0  ⋯ λn ⎦

i.e. the matrix whose diagonal entries are the eigenvalues of the matrix A and whose
other entries are all zero.
Then
D = P-1 A P
In such a case we call A diagonalizable and say that P diagonalizes A.

General Remarks About Diagonalization


(i) An n × n matrix can be diagonalized provided it possesses n linearly independent
eigenvectors.
(ii) A symmetric matrix can always be diagonalized.
(iii) The diagonalizing matrix for a real n × n matrix A may contain complex elements. This is
because although the characteristic polynomial of A has real coefficients, its zeros either will be
real or will occur in complex conjugate pairs.
(iv) A diagonalizing matrix is not unique, because its form depends on the order in which the
eigenvectors of A are used to form its columns.
A useful consequence of the diagonalized form of a matrix is that it enables it to be raised to a
positive integral power with the minimum of effort. This property will be used later when the
matrix exponential is introduced.

Power : Similarly it can be used to find any power of a matrix :


A = P D P-1
It can also easily be proved that :
An = P Dn P-1
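The power formula can be checked on the matrix of Example 1.14 (a sketch using NumPy):

```python
import numpy as np

# A^n = P D^n P^{-1}, demonstrated for n = 5 on a diagonalizable matrix.
A = np.array([[2.0, -4.0],
              [-1.0, -1.0]])   # matrix of Example 1.14
lam, P = np.linalg.eig(A)      # columns of P are eigenvectors
A5 = P @ np.diag(lam**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```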
Exponential matrix : the exponential matrix eAt is defined by the series

eAt = I + At + (At)²/2! + ⋯ + (At)ⁿ/n! + ⋯



And it can be evaluated by
eAt = P eDt P-1
The exponential matrix eAt is an important matrix used for solving systems of linear
differential equations. Some of its properties are:
e^0 = I ,  e^(A(t+s)) = e^(At) e^(As) ,  ( e^(At) )^(−1) = e^(−At)
(d/dt) [ e^(At) ] = A e^(At) = e^(At) A
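The formula eAt = P eDt P-1 can be compared against the defining series directly. The sketch below uses a hypothetical 2 × 2 matrix and truncates the series after enough terms for it to converge.

```python
import numpy as np

# e^{At} via diagonalization versus the truncated defining series.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # hypothetical example; eigenvalues -1, -2
t = 0.5
lam, P = np.linalg.eig(A)
expm_diag = (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)).real

# Series I + At + (At)^2/2! + ... truncated after 24 terms.
expm_series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 25):
    expm_series = expm_series + term
    term = term @ (A * t) / k
assert np.allclose(expm_diag, expm_series)
```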
Symmetric Matrices.
A real square matrix A is said to be symmetric if transposition leaves it unchanged, i.e.
AT = A. If A is a real symmetric n × n matrix, then
• its eigenvalues, λ1, . . . , λn, are all real;
• the corresponding eigenvectors x1, x2, . . . , xn are linearly independent.
Hence in particular, all real-symmetric matrices are diagonalizable.



Exercises (1)
1. space, and find the associated scalar potential f .
2. Solve the linear system : y + z = 2, 2x + 3z = 5, x + y + z = 3.
3. Solve the following linear system by Gauss elimination method.
x + y + z = 3, x + 2y + 2z = 5, 3x + 4y + 4z = 11
4. Solve the following linear system by Gauss elimination method.
x + y + z = 3, x + 2y + 2z = 5, 3x + 4y + 4z = 12
5. Describe the solution of the following simple linear system:
10 x – 3y – 2z =0
6. If A = ⎡  0  −2 ⎤ , u = ⎡ 1 ⎤ and v = ⎡ −1 ⎤ . Examine if the vectors u and v are
          ⎣ −4   2 ⎦       ⎣ 1 ⎦         ⎣  1 ⎦
eigen vectors of A ?
7. Determine the existence and the solutions for the system

8. Find the eigen values and eigenvectors of the matrix


A = ⎡ 1 −3 3 ⎤
    ⎢ 3 −5 3 ⎥
    ⎣ 6 −6 4 ⎦
9. If A = ⎡  5   8  16 ⎤ has an eigen value of λ = 2. Find its corresponding eigen space?
          ⎢  4   1   8 ⎥
          ⎣ −4  −4  11 ⎦
10. Determine whether b is in the column space of A and if so, express b as a linear
combination of the column vectors of A:

11. Find bases for the row space, the column space and the null space of the matrix



12. Show that the matrix is diagonalizable then find the 5th power of A?

13. Find the rank and nullity of the 3×5 matrix M, then find the solutions (if any) to
Mx = 0

14. Find out if the vectors given are linearly dependent


u = ⎡ 2 ⎤ , v = ⎡  0 ⎤ , w = ⎡ 4 ⎤
    ⎢ 3 ⎥       ⎢  1 ⎥       ⎢ 8 ⎥
    ⎣ 4 ⎦       ⎣ −4 ⎦       ⎣ 0 ⎦
15. Find all possibilities for the solution set of the linear system according to different choices
of the parameter a:

16. Show that the matrix is diagonalizable find the 5th power of A?

17. Consider the linear system :

 Using Gauss Jordon elimination put the system in RREF.


 Find the solution set of the system ?



18. Solve the following system of linear equation by LU factorizing with Doolittle's method

3x + 5y + 2z = 8
8y + 2z = −7
6x + 2y + 8z = 26
19. Construct a system of linear equations and determine the unknowns in the circuit
below ?



Part ( 2 ) : Differential Equations
______________________________________________________________________________

The laws of physics are generally written down as differential equations. Therefore, all of
science and engineering use differential equations to some degree. Understanding differential
equations is essential to understanding almost anything you will study in your science and
engineering classes. Many physical laws and relations appear mathematically in the form of
such equations (figure below).



2.1 - Review of Basic Concepts
A Differential equation : An equation containing the derivatives of one or more unknown
functions (or dependent variables), with respect to one or more independent variables.
Classification of differential equations
Classification by type :
An ordinary differential equation (ODE) is an equation relating
an independent variable, e.g. t or x, a dependent variable, y, and one or more
derivatives of y with respect to t or x. The equation may also contain y itself, known
functions of x or t, and constants. The most general ODE has the form

F( x, y, y′, … , y⁽ⁿ⁾ ) = 0

As in calculus, y′ = dy/dx , y″ = d²y/dx² , … , y⁽ⁿ⁾ = dⁿy/dxⁿ.
In a partial differential equation there is more than one independent variable and the
derivatives are therefore partial derivatives; for example, the temperature T = f(x, t)
of a thin bar obeys a PDE such as the heat equation ∂T/∂t = c² ∂²T/∂x².
PDEs have important engineering applications, but they are more complicated than ODEs.
A system of ordinary differential equations is two or more equations involving the
derivatives of two or more unknown functions of a single independent variable.
If differential equations contain two or more dependent variables and one
independent variable, then the set of equations is called a system of differential
equations.

Two or more dependent variables and two or more independent variables give a
system of partial differential equations (rarely seen).
Classification by order :
An ODE is said to be of order n if the nth derivative of the unknown function y is the
highest derivative of y in the equation. The concept of order gives a useful
classification into ODEs of first order, second order, and so on.



Classification by Linearity :
An nth-order ordinary differential equation is said to be linear if it takes the form

an(x) y⁽ⁿ⁾ + an−1(x) y⁽ⁿ⁻¹⁾ + ⋯ + a1(x) y′ + a0(x) y = g(x)

Two properties of a linear ODE are as follows:


• The dependent variable y and all its derivatives are of the first degree, that is, the
power of each term involving y is 1.
• The coefficients a0, a1, . . . , an of y and its derivatives depend at most on the
independent variable x.
A nonlinear ordinary differential equation is simply one that is not linear. Nonlinear
functions of the dependent variable or its derivatives, such as sin y or e^y, cannot appear
in a linear equation.
A linear equation is called homogeneous if g(x) = 0, and inhomogeneous if g(x) ≠ 0.
Concept of solution
A function y = h(x) is called a solution of a given ODE on some interval I if h(x)
is defined and differentiable throughout the interval and satisfies the equation
for all x in I.
For example : the ODE y′ = cos x can be solved directly using calculus by
integrating both sides:

y(x) = sin x + c

Where c is an arbitrary constant. This is a family of solutions: each value of c
gives a particular solution, so such an ODE has a solution that contains an arbitrary constant c.
The “general solution” of a differential equation is the most general algebraic
relationship between the dependent and independent variables which satisfies the
differential equation. It will contain one or more arbitrary constants (the number of
these constants being equal to the order of the equation).
A “particular solution” (or “particular integral”) is a solution which contains no
arbitrary constants. Particular solutions are usually the result of applying initial
and/or boundary condition to a general solution.



The solution can be explicit, y = f(x), or in implicit form, f(x, y) = 0.
Sometimes an implicit solution can be converted to explicit form; if this is not
possible, a graph of the contour lines (direction fields) of the implicit solution
function can help to understand the solution.

2.2 - Second Order Ordinary Differential Equations


Generally, the second-order linear differential equation takes the form
y″ + p(x) y′ + q(x) y = r(x)

In case r(x) ≠ 0, the equation is called nonhomogeneous; otherwise it is homogeneous.

The functions p (x) and q(x) are called the coefficients of the ODE.
Trivial Solution: For the homogeneous equation above, note that the function y(x) = 0
always satisfies the given equation, regardless of what p(x) and q(x) are. This constant
zero solution is called the trivial solution of such an equation.

2.2.1 - Superposition principle or linearity principle


Linear ODEs have a rich solution structure. For the homogeneous equation the
foundation of this structure is the superposition principle or linearity principle, which
says that we can obtain further solutions from given ones by adding them or by
multiplying them by arbitrary constants.
If y1 and y2 are solutions of the homogeneous linear ODE on some interval, then any
linear combination of them is again a solution:

y = c1 y1 + c2 y2

Where c1 and c2 are arbitrary constants.
Then, according to the Existence and Uniqueness Theorem, for any pair of initial
conditions y(x0) = y0 and y′(x0) = y′0 there must exist uniquely a corresponding pair of
coefficients C1 and C2 that satisfies the system of (algebraic) equations
y0 = C1 y1(x0) + C2 y2(x0)
y′0 = C1 y1′(x0) + C2 y2′(x0)

From linear algebra, we know that for the above system to always have a unique
solution (C1, C2) for any initial values y0 and y′0, the coefficient matrix of the system
must be invertible, or, equivalently, the determinant of the coefficient matrix must be
nonzero. That is
det ⎡ y1(x0)  y2(x0)  ⎤ ≠ 0
    ⎣ y1′(x0) y2′(x0) ⎦
This determinant above is called the Wronskian or the Wronski determinant. It is a
function of x as well, denoted W(y1, y2)(x), and is given by the expression
W(y1, y2)(x) = y1 y2′ − y2 y1′
Formally, if W(y1,y2)(x) ≠ 0, then the functions y1, y2 are said to be linearly
independent. Else they are called linearly dependent.
Suppose y1 and y2 are two linearly independent solutions of a second order
homogeneous linear equation
y″ + p(x) y′ + q(x) y = 0.
That is, y1 and y2 both satisfy the equation, and W(y1, y2)(x) ≠ 0. Then (and only then)
their linear combination y = C1 y1 + C2 y2 forms a general solution of the differential
equation. Therefore, a pair of such linearly independent solutions y1 and y2 is called a
set of fundamental solutions, because they are essentially the basic building blocks of
all particular solutions of the equation.
CAUTION! Don’t forget that this highly important theorem holds for homogeneous
linear ODEs only; it does not hold for non-homogeneous linear or nonlinear ODEs.
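The Wronskian test is easy to carry out symbolically. The sketch below (assuming SymPy) checks the classic fundamental pair cos x, sin x of y″ + y = 0, and a dependent pair for contrast.

```python
import sympy as sp

# W(y1, y2) = y1*y2' - y2*y1' for the fundamental pair of y'' + y = 0.
x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
assert W == 1   # nonzero everywhere -> linearly independent

# By contrast, x and 2x are dependent: their Wronskian vanishes identically.
assert sp.simplify(x * sp.diff(2*x, x) - 2*x * sp.diff(x, x)) == 0
```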

2.2.2 - Homogeneous linear ODEs with constant coefficients


An equation of the form

a y″ + b y′ + c y = 0

Where a, b and c are constants, is called a linear second order differential equation with
constant coefficients.

Let :
if y = e^(λx) then y′ = λ e^(λx) and y″ = λ² e^(λx)
Substituting back
a λ² e^(λx) + b λ e^(λx) + c e^(λx) = 0
( a λ² + b λ + c ) e^(λx) = 0

Thus y = e^(λx) is a solution of the given equation if and only if λ is a root of the equation
a λ² + b λ + c = 0
It is called the auxiliary equation, and since the equation is a quadratic, the roots may
be obtained either by factorizing or by using the quadratic formula. Since, in the
auxiliary equation, a, b and c are real values, then the equation may have three cases
If the roots of the auxiliary equation are:
(i) real and different, say λ1 = α and λ2 = β, then the general solution is
y = A e^(αx) + B e^(βx)
(ii) real and equal, say λ = α twice, then the general solution is
y = ( A + Bx ) e^(αx)
(iii) complex, say λ = α ± iβ, then the general solution is

y = e^(αx) ( A cos βx + B sin βx )
Given initial and/or boundary conditions, constants A and B, may be determined and
the particular solution of the differential equation obtained.
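The three cases can be cross-checked with a computer algebra system; the sample ODEs below are hypothetical and chosen so that each case occurs once.

```python
import sympy as sp

# One hypothetical sample ODE per root case of the auxiliary equation.
x = sp.symbols('x')
y = sp.Function('y')
cases = [
    sp.Eq(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), 0),  # (i)   lambda = 1, 2
    sp.Eq(y(x).diff(x, 2) - 2*y(x).diff(x) + y(x), 0),    # (ii)  lambda = 1 twice
    sp.Eq(y(x).diff(x, 2) + 4*y(x), 0),                   # (iii) lambda = +/- 2i
]
for ode in cases:
    sol = sp.dsolve(ode, y(x))          # general solution with C1, C2
    assert sp.checkodesol(ode, sol)[0]  # it satisfies the ODE
```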
Reduction of order method
It happens quite often that one solution can be found by inspection or in some other
way. Then a second linearly independent solution can be obtained by solving a
first-order ODE. The general 2nd order linear homogeneous equation in standard
form is
y″ + p(x) y′ + q(x) y = 0
The first solution y1 is already known and the second solution is sought as
y2 = u y1
y2′ = u′ y1 + u y1′
y2″ = u″ y1 + u′ y1′ + u′ y1′ + u y1″
y2″ = u″ y1 + 2 u′ y1′ + u y1″
Substitute into the DE
u″ y1 + 2 u′ y1′ + u y1″ + p(x)( u′ y1 + u y1′ ) + q(x) u y1 = 0
Collecting u″, u′ and u separately

u″ y1 + u′ ( 2 y1′ + p y1 ) + u ( y1″ + p y1′ + q y1 ) = 0



Since y1 solves the homogeneous equation, the last bracket vanishes, leaving

u″ y1 + u′ ( 2 y1′ + p y1 ) = 0

u″ + ( ( 2 y1′ + p y1 ) / y1 ) u′ = 0

Setting w = u′ , w′ = u″ , this 2nd order differential equation with missing u-term is
transformed to the 1st order equation

w′ + ( 2 y1′/y1 + p ) w = 0

w′/w = −( 2 y1′/y1 + p ) ; integrating, ln w = −2 ln y1 − ∫ p dx

Then
w = u′ = (1/y1²) exp( −∫ p dx )

The second solution will be

y2 = y1 ∫ (1/y1²) exp( −∫ p dx ) dx
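As a concrete check of reduction of order: for y″ − 2y′ + y = 0 with the known solution y1 = e^x (a standard double-root example), the method should reproduce y2 = x e^x.

```python
import sympy as sp

# Reduction of order for y'' - 2y' + y = 0 (p = -2, known y1 = exp(x)).
x = sp.symbols('x')
p = -2
y1 = sp.exp(x)
# y2 = y1 * Integral( (1/y1**2) * exp(-Integral(p dx)) dx )
y2 = y1 * sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x)
assert sp.simplify(y2 - x*sp.exp(x)) == 0   # second solution is x*e^x
```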

2.2.3 - Non Homogeneous linear ODEs with constant coefficients


The equation of the form
a y″ + b y′ + c y = r(x)
A general solution of the non-homogeneous ODE on an open interval I is a solution of the form
y(x) = yh(x) + yp(x)
yh(x) is the general solution of the homogeneous differential equation, containing
arbitrary constants, and it is called the complementary function (C.F.)
yp(x) is a solution of the non-homogeneous DE which can be determined without
containing any arbitrary constants, and it is called the particular integral (P.I.)
The particular solution is obtained by assigning specific values to the arbitrary
constants in yh(x)
Method of undetermined coefficients
This method is suitable for linear ODEs with constant coefficients a , b and c. In this
method first we assume that the particular integral is of certain form with some
coefficients. Then substituting the value of this particular integral in the given



equation and comparing the coefficients, we get the value of these “undetermined”
coefficients.
Choice Rules for the Method of Undetermined Coefficients
a y″ + b y′ + c y = r(x)
(a) Basic Rule. If r(x) in the above linear ODE is one of the functions in the first
column of the table given below, choose yp(x) in the same line and determine its
undetermined coefficients by substituting yp(x) and its derivatives into the ODE.
(b) Modification Rule. If a term in your choice for yp(x) happens to be a solution of the
homogeneous ODE corresponding to the above linear ODE, multiply this term by x (or
x2 if this solution corresponds to a double root of the characteristic equation of the
homogeneous ODE).
(c) Sum Rule. If r(x) is a sum of functions in the first column of the table given below,
choose for yp(x) the sum of the functions in the corresponding lines of the second
column.

Table of functions to try for PI

The solution procedure


(i) Rewrite the given differential equation as
y″ + a y′ + b y = r(x)
(ii) Substitute m for the derivative operator D, and solve the auxiliary equation to find the roots
m² + a m + b = 0



(iii) Obtain the complementary function, which is achieved using the same
procedure as the solution of homogeneous 2nd order ODEs
(iv) To determine the particular integral, firstly assume a particular integral
which is suggested by f (x), but which contains undetermined coefficients.
(v) Substitute the suggested P.I. into the differential equation and equate
relevant coefficients to find the constants introduced.
(vi) The general solution is given by ( )= ( )+ ( )
(vii) Given boundary conditions, arbitrary constants in the C.F. may be
determined and the particular solution of the differential equation
obtained.
Variation of parameters method ( Lagrange Method )
The linear non-homogeneous ODE of second order in the
standard form is
y″ + p(x) y′ + q(x) y = r(x)
The two linearly independent solutions y1(x) and y2(x) of the associated homogeneous
equation are known:
yh(x) = A y1(x) + B y2(x)
Finding yp(x) by the method of variation of parameters requires that the arbitrary
constants A and B be replaced by functions u(x) and v(x):
yp(x) = u(x) y1(x) + v(x) y2(x)
Lagrange found these functions, which satisfy the DE, as

u(x) = −∫ ( y2(x) r(x) / W(x) ) dx        v(x) = ∫ ( y1(x) r(x) / W(x) ) dx

Where W(x) is the Wronski determinant
W(x) = y1 y2′ − y2 y1′
Then the solution of the non homogeneous DE is written as
y(x) = yh(x) + yp(x)

y(x) = ( A − ∫ ( y2 r / W ) dx ) y1(x) + ( B + ∫ ( y1 r / W ) dx ) y2(x)



CAUTION! The solution formula for the P.I. is obtained under the assumption that the
ODE is written in standard form or can be put in the standard form.

Example 2.6 :
Solve y″ − 4y = 8x² − 2x
The general solution will be
y(x) = yh(x) + yp(x)
To find yh(x) :
the associated homogeneous equation is
y″ − 4y = 0 , with auxiliary equation λ² − 4 = 0
Then the roots are real and distinct, λ1 = −2 , λ2 = 2, and the solution will be
yh(x) = A e^(2x) + B e^(−2x)
To find yp(x) :
Because r(x) = 8x² − 2x is a polynomial, from the table we choose
yp(x) = ax² + bx + c
Differentiating :
yp′(x) = 2ax + b
yp″(x) = 2a
Substituting into the ODE :
y″ − 4y = 8x² − 2x
2a − 4( ax² + bx + c ) = 8x² − 2x
−4a x² − 4b x + ( 2a − 4c ) = 8x² − 2x
By comparison
−4a = 8 → a = −2
−4b = −2 → b = 1/2
2a − 4c = 0 → c = a/2 = −1
Then
yp(x) = −2x² + (1/2) x − 1
2
Then the general solution will be



y(x) = yh(x) + yp(x)
y(x) = A e^(2x) + B e^(−2x) − 2x² + (1/2) x − 1
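Back-substituting the particular integral of Example 2.6 is a one-line check in SymPy (illustrative):

```python
import sympy as sp

# yp = -2x^2 + x/2 - 1 must satisfy y'' - 4y = 8x^2 - 2x.
x = sp.symbols('x')
yp = -2*x**2 + sp.Rational(1, 2)*x - 1
residual = sp.simplify(yp.diff(x, 2) - 4*yp - (8*x**2 - 2*x))
assert residual == 0
```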
Example 2.7 :
Solve : y″ + y = sec x
The solution of the corresponding homogeneous equation is
yh(x) = A cos x + B sin x
which can be put in the form
yh(x) = A y1(x) + B y2(x)
The solutions are linearly independent; this can be shown by determining the Wronski
determinant

W(x) = det ⎡ y1(x)  y2(x)  ⎤ ≠ 0
           ⎣ y1′(x) y2′(x) ⎦

W(x) = det ⎡  cos x  sin x ⎤ = cos x · cos x − ( −sin x ) · sin x = 1
           ⎣ −sin x  cos x ⎦
Then
yp(x) = u(x) y1(x) + v(x) y2(x)

u(x) = −∫ ( y2(x) r(x) / W(x) ) dx = −∫ sin x sec x dx = ln|cos x|

v(x) = ∫ ( y1(x) r(x) / W(x) ) dx = ∫ cos x sec x dx = x

The integration constants are already included in the general solution.
Then
yp(x) = u(x) y1(x) + v(x) y2(x)
yp(x) = ln|cos x| cos x + x sin x
Then the solution of the non homogeneous equation is
y(x) = yh(x) + yp(x)
y(x) = ( A + ln|cos x| ) cos x + ( B + x ) sin x
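The particular solution of Example 2.7 can likewise be verified by back-substitution (a sketch; log(cos x) stands in for ln|cos x| on an interval where cos x > 0):

```python
import sympy as sp

# yp = ln(cos x) cos x + x sin x must satisfy y'' + y = sec x.
x = sp.symbols('x')
yp = sp.log(sp.cos(x)) * sp.cos(x) + x * sp.sin(x)
residual = sp.simplify(yp.diff(x, 2) + yp - sp.sec(x))
assert residual == 0
```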



2.2.4 - Euler – Cauchy OD equation
Euler–Cauchy equations are ODEs of the form
x² y″ + a x y′ + b y = 0 , x ≠ 0
with given constants a and b and unknown function y(x). A change of variables will
convert Euler's equation to a constant coefficient linear second order homogeneous
equation, which we can always solve.
Let : y = x^m , then y′ = m x^(m−1) and y″ = m(m − 1) x^(m−2)
Substituting back in the original equation
x² m(m − 1) x^(m−2) + a x m x^(m−1) + b x^m = 0

( m² + (a − 1) m + b ) x^m = 0
Note that y = x^m was a rather natural choice because we have obtained a common
factor x^m. Dropping it, we have the auxiliary equation
m² + (a − 1) m + b = 0
Hence y = xm is a solution of the DE if and only if m is a root of the auxiliary
equation. There will be three cases :
Case I : Two equal real roots, m twice
The general solution will be in the form

y = ( A + B ln x ) x^m , x > 0
Case II : Two different real roots m1 and m2
The general solution will be in the form

y = A x^(m1) + B x^(m2)
Case III : Complex roots
The roots are complex conjugates, m1,2 = α ± iβ ; then the general solution will be
y = x^α ( A cos(β ln x) + B sin(β ln x) ) , x > 0

Example 2.8 :
Solve
x² y″ − 6x y′ + 10y = 6x³
The general solution will be
y(x) = yh(x) + yp(x)



The homogeneous form of the given problem is x² y″ − 6x y′ + 10y = 0
This is an Euler equation with a = −6 and b = 10 : m² + (a − 1) m + b = 0
And the characteristic equation will be : m² − 7m + 10 = 0
Its roots are
m1 = 5 , m2 = 2
yh(x) = A x^(m1) + B x^(m2) = A x⁵ + B x²
To find yp(x) :
Because r(x) = 6x³ is a polynomial, from the table we choose
yp(x) = a x³ + b x² + c x + d
Differentiating :
yp′(x) = 3a x² + 2b x + c
yp″(x) = 6a x + 2b
Substituting in the ODE
x² y″ − 6x y′ + 10y = 6x³
x²( 6a x + 2b ) − 6x( 3a x² + 2b x + c ) + 10( a x³ + b x² + c x + d ) = 6x³
( 6 − 18 + 10 ) a x³ + ( 2 − 12 + 10 ) b x² + ( −6 + 10 ) c x + 10 d = 6x³

It is clear that b = c = d = 0 (the x² coefficient vanishes identically, so any b would simply
be absorbed into the homogeneous term B x²), and (−2) a = 6 → a = −3, then

yp(x) = −3x³
And the general solution will be
y(x) = yh(x) + yp(x) = A x⁵ + B x² − 3x³
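Each term of the general solution of Example 2.8 can be verified by substitution into the Euler–Cauchy equation:

```python
import sympy as sp

# y = A x^5 + B x^2 - 3x^3 must satisfy x^2 y'' - 6x y' + 10y = 6x^3
# for every choice of the constants A and B.
x, A, B = sp.symbols('x A B')
y = A*x**5 + B*x**2 - 3*x**3
residual = sp.expand(x**2*y.diff(x, 2) - 6*x*y.diff(x) + 10*y - 6*x**3)
assert residual == 0
```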

2.2.5 - Linear ODEs with variable Coefficients


An equation of the form
p(x) y″ + q(x) y′ + r(x) y = g(x)
In the most general case we can't do anything, but in some special cases we can, if
we already know the general solution to the homogeneous equation:
p(x) y″ + q(x) y′ + r(x) y = 0
The following is the most well known equations that are encountered in engineering
applications :
Bessel's equation :      x² y″ + x y′ + ( x² − n² ) y = 0
Laguerre's equation :    x y″ + ( 1 − x ) y′ + n y = 0
Hermite's equation :     y″ − 2x y′ + 2n y = 0
Legendre's equation :    ( 1 − x² ) y″ − 2x y′ + n(n + 1) y = 0
Chebyshev's equation :   ( 1 − x² ) y″ − x y′ + n² y = 0
Airy's equation :        y″ − x y = 0
Euler–Cauchy equation :  x² y″ + a x y′ + b y = 0
We will look into the solution of some of these equations.
If the solution of the homogeneous ODE is known, then by suitable selection of the P.I.
the solution of the inhomogeneous ODE may be achieved.
Equations of variable coefficients with missing y-term
If the y-term (that is, the dependent variable term) is missing in a second order linear
equation, which then takes the form
p(x) y″ + q(x) y′ = r(x)
then the equation can be readily converted into a first order linear equation by the
simple substitution
u = y′ , u′ = y″
Then the DE becomes
p(x) u′ + q(x) u = r(x)
and is solved using the integrating factor method to evaluate u(x).
Finally integrate to find the solution of the 2nd order ODE

y(x) = ∫ u(x) dx
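A short worked sketch of the missing-y method on the hypothetical example x y″ + y′ = 4x:

```python
import sympy as sp

# Substituting u = y' turns x*y'' + y' = 4x into the first-order linear
# equation x*u' + u = 4x, whose general solution is u = 2x + C/x.
x, C, D = sp.symbols('x C D')
u = 2*x + C/x
assert sp.simplify(x*u.diff(x) + u - 4*x) == 0
# Integrating u recovers y = x**2 + C*log(x) + D, which solves the 2nd order ODE.
y = sp.integrate(u, x) + D
assert sp.simplify(x*y.diff(x, 2) + y.diff(x) - 4*x) == 0
```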

2.3 - Higher Order Ordinary Differential Equations


Generally, the higher-order linear differential equation takes the form
an(x) y⁽ⁿ⁾ + an−1(x) y⁽ⁿ⁻¹⁾ + ⋯ + a1(x) y′ + a0(x) y = r(x)
In case r(x) = 0, the equation is called homogeneous; otherwise it is nonhomogeneous.
The basic results about linear ODEs of higher order are essentially the same as for second order
equations, with 2 replaced by n. For higher order constant coefficient ODEs, the methods
developed for 2nd ODE are used.



Example 2.9 :
Solve y‴ − 7y′ + 6y = x² − x + 1
The general solution will be
y(x) = yh(x) + yp(x)
To find yh(x) : the associated homogeneous equation has the auxiliary equation
λ³ − 7λ + 6 = 0
Then the roots are real and distinct, λ1 = 1 , λ2 = 2 , λ3 = −3, and the solution will be
yh(x) = A e^x + B e^(2x) + C e^(−3x)
To find yp(x) : Because r(x) = x² − x + 1 is a polynomial, from the table we choose
yp(x) = a x² + b x + c
yp′(x) = 2a x + b
yp″(x) = 2a
yp‴(x) = 0
Substituting into the ODE :
y‴ − 7y′ + 6y = x² − x + 1
0 − 7( 2a x + b ) + 6( a x² + b x + c ) = x² − x + 1
( 6a ) x² + ( −14a + 6b ) x + ( −7b + 6c ) = x² − x + 1
By comparison
6a = 1 → a = 1/6
−14a + 6b = −1 → b = ( −1 + 14/6 ) / 6 = 2/9
−7b + 6c = 1 → c = ( 1 + 14/9 ) / 6 = 23/54
Then
yp(x) = (1/6) x² + (2/9) x + 23/54
Then the general solution will be
y(x) = yh(x) + yp(x)
y(x) = A e^x + B e^(2x) + C e^(−3x) + (1/6) x² + (2/9) x + 23/54
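Back-substituting the particular solution of Example 2.9 confirms the coefficients 1/6, 2/9 and 23/54:

```python
import sympy as sp

# yp must satisfy y''' - 7y' + 6y = x^2 - x + 1.
x = sp.symbols('x')
yp = sp.Rational(1, 6)*x**2 + sp.Rational(2, 9)*x + sp.Rational(23, 54)
residual = sp.simplify(yp.diff(x, 3) - 7*yp.diff(x) + 6*yp - (x**2 - x + 1))
assert residual == 0
```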



Exercise (2)
1. Classify the following first order ordinary differential equations. DO NOT SOLVE!
The classification relates to the method of solution, i.e. ( separable,
homogeneous, linear, Bernoulli and exact ). More than one method may
work.
a) (1 + x ) dy = (xy + cos x)dx
b) dy = xye dx
c) (xe + sin y) dy + ye dx = 0
d) y − x y′ = 2xy
e) y + = ty
f) t y y + = t y
g) 4 ′− ′− = cos
h) 2y + y y ′ = x
2. Find a general solution. Show the steps of derivation. Check your answer by
substitution.
i. 9 +4 =0

ii. = + tan

iii. + =2

iv. ( +3 ) + ( +3 ) =0
v. +3 + = 0

3. Solve the IVP. Show the steps of derivation, beginning with the general solution.
i. +5 = 0 , (0) = 1
ii. 2 + (4 + 3 ) = 0 , (0.2) = −1.5
iii. + 4 = 8 , (1) = 2
4. Solve the IVP. Show the steps of derivation, beginning with the general solution.
+ + 0.25 = 0 , (0) = 3 (0) = −3.5
9 − 24 + 16 = 0 , (0) = 3 (0) = 3
+2 +3 + = 0 , (0) = 3 , (0) = −3 (0) = 47
5. Solve
i. −2 +3 = + cos
ii. − − 2 = sin 2
iii. + +4 + 4 = −4 −6
6. The ODE

L d²i/dt² + R di/dt + (1/C) i = 0
represents a current i flowing in an electrical circuit containing resistance R, inductance L


and capacitance C connected in series. If R=200 Ohms, L =0.20 Henry and C =20×10−6
Farads, solve the equation for I given the Initial conditions that when t =0, i=0 and i'= 100
7. Analysis of an RLC-circuit gives the DE

L d²I/dt² + R dI/dt + (1/C) I = E′(t)

If the EMF source gives E(t) = E0 sin ωt , find the general solution?


If circuit data are :
= 11 Ω , = 0.1 , = 10 , = 60 = 110
Find the current in the circuit if the initial current and capacitor charge are zero?
8. Solve the boundary value problem
y″ + y = 0
With the boundary condition :
i. y(0) = 1 and y(π/2) = 0
ii. y(0) = 0 and y(π) = 2
9. Solve the boundary value problem
y″ + 4y = cos x
With the boundary condition : y’(0) = 0 and y’(π) = 0
10. In the problem, either solve the given boundary value problem or else show that it has no
solution.
i. y″ + y = 0 With the boundary condition : y(0) = 0 and y(L) = 0
ii. y″ + 4y = sin x With the boundary condition : y(0) = 0 and y(π) = 0
11. Find the eigenvalues and eigenfunctions of the given boundary value problem. Assume
that all eigenvalues are real.



Part ( 3 ) : Power Series Solutions of ODE
______________________________________________________________________________

Many differential equations can’t be solved explicitly in terms of finite combinations


of simple familiar functions. This is true even for a simple-looking equation like

y″ − x y = 0   ( Airy's equation )

Second order ordinary differential equations that cannot be solved by analytical
methods, i.e. those involving variable coefficients, can often be solved in the form of
an infinite series of powers of the variable.
The power series method is the standard method for solving linear ODEs with
variable coefficients. It gives solutions in the form of power series. These series can
be used for computing values, graphing curves, proving formulas, and exploring
properties of solutions.

3.1 - Review of Power Series


An important procedure in Mathematics is to represent a given function as an infinite
series of terms involving simpler or otherwise more appropriate functions. Thus, if f is
the given function, it may be represented as power series expansion of the form
A power series about a point x0 is an expression of the form

∑_{n=0}^∞ a_n (x − x0)^n = a0 + a1 (x − x0) + a2 (x − x0)² + ⋯

If x0 = 0 , the power series becomes

f(x) = a0 + a1 x + a2 x² + ⋯ = ∑_{n=0}^∞ a_n x^n

Analytic functions
A function is analytic at x0 if it is infinitely differentiable in a neighborhood of x0
and can be represented there by the sum of a power series ∑_{n=0}^∞ a_n (x − x0)^n with
positive radius of convergence. The fact that power series behave nicely under addition,
multiplication, differentiation, and integration accounts for their usefulness.



Uniqueness of a power series representation : a function f(x) cannot be represented by
two different power series with the same center. Hence if a function f(x) can be
represented by a power series with a given center, this representation is unique.
Power series convergence
For any power series ∑_{n=0}^∞ a_n (x − x0)^n there is a number R ∈ [0, ∞] (R may be
zero or infinite) such that:
• The power series converges for all x such that |x − x0| < R
• The power series diverges for all x such that |x − x0| > R

The quantity R is called the radius of convergence.


The radius of convergence can be determined from the coefficients of the series by

R = lim_{n→∞} | a_n / a_{n+1} |

Convergence for all x (R = ∞) is the best possible case, convergence in some finite interval is
the usual case, and convergence only at the center point (R = 0) is useless. The set of values of x
for which the series converges is called the interval of convergence.
Convergence tests :

Ratio test : If a_n ≠ 0 for n = 1, 2, … then  L = lim_{n→∞} | a_{n+1} / a_n |

Root test : For the series ∑ a_n ,  L = lim_{n→∞} |a_n|^{1/n}

Then
If L < 1.0 the series is absolutely convergent
If L = 1.0 the test fails
If L > 1.0 the series is divergent
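As a quick numerical illustration (my own sketch, not from the notes), the ratio formula R = lim |a_n / a_{n+1}| can be approximated by truncating the limit at a large n; for the series ∑ n xⁿ, whose coefficients are a_n = n, this reproduces R = 1:

```python
# Approximate the radius of convergence R = lim |a_n / a_{n+1}|
# for the power series sum n*x^n, whose coefficients are a_n = n.
def radius_ratio(a, n):
    """Ratio-test estimate of R using the n-th and (n+1)-th coefficients."""
    return abs(a(n) / a(n + 1))

a = lambda n: n            # coefficients of sum n*x^n
est = radius_ratio(a, 10**6)
print(est)                 # approaches R = 1 as n grows
```

The same helper works for any coefficient rule passed in as a function.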



Radius of convergence
R is the radius of convergence of the power series, and the interval of radius R
centered at x0 is called the interval of convergence. The interval of convergence
may be open, closed, or half-open, depending on the particular series. The interval of
absolute convergence is x0 − R < x < x0 + R, of length 2R on the real line.
For this important practical task the radius of convergence can be determined by the
ratio test ( Cauchy formula ), which is given by

R = 1/L = lim_{n→∞} | a_n / a_{n+1} |   if the limit exists


The ratio test formula will not help if the limit doesn't exist; the alternative is
the root test :

R = 1/L = 1 / ( lim_{n→∞} |a_n|^{1/n} )   if the limit exists

Power series operations
Given two power series y1(x) and y2(x) which converge in |x − x0| < R1 and |x − x0| < R2
respectively,

y1(x) = ∑_{n=0}^∞ a_n (x − x0)^n     y2(x) = ∑_{n=0}^∞ b_n (x − x0)^n

Termwise addition or subtraction of two power series with radii of convergence R1
and R2 yields a power series with radius of convergence at least equal to the smaller
of (R1, R2):

(y1 ± y2)(x) = ∑_{n=0}^∞ (a_n ± b_n) (x − x0)^n

Differentiation and integration of series


A power series

y(x) = ∑_{n=0}^∞ a_n (x − x0)^n

which converges in |x − x0| < R can be differentiated term by term,

y'(x) = ∑_{n=1}^∞ n a_n (x − x0)^{n−1}

and integrated term by term,

∫ y(x) dx = ∑_{n=0}^∞ a_n (x − x0)^{n+1} / (n + 1) + C

and both resulting series converge in the same interval as ∑_{n=0}^∞ a_n (x − x0)^n.



Addition and Multiplication of series
Given two power series y1(x) and y2(x) which converge in |x − x0| < R1 and |x − x0| < R2
respectively,

y1(x) = ∑_{n=0}^∞ a_n (x − x0)^n     y2(x) = ∑_{n=0}^∞ b_n (x − x0)^n

Then their addition will be

(y1 + y2)(x) = ∑_{n=0}^∞ (a_n + b_n) (x − x0)^n

The radius of convergence will be |x − x0| < min (R1, R2).

And their multiplication ( the Cauchy product ) will be

y1(x) y2(x) = ∑_{n=0}^∞ ( ∑_{k=0}^n a_k b_{n−k} ) (x − x0)^n

Shifting indices :
Suppose the power series after differentiating twice has the form

∑_{n=2}^∞ n (n − 1) a_n (x − x0)^{n−2}

The series index needs to be shifted back to start at 0. Let m = n − 2, then n = m + 2,
and substituting in the series it will be in the form

∑_{m=0}^∞ (m + 2)(m + 1) a_{m+2} (x − x0)^m

Another case is a series of the form

∑_{n=1}^∞ n a_n (x − x0)^n

Here the series doesn't need shifting, since the n = 0 term vanishes:

∑_{n=1}^∞ n a_n (x − x0)^n = ∑_{n=0}^∞ n a_n (x − x0)^n
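A quick numerical sanity check (my own illustration, not from the notes): for any truncated coefficient list, the shifted and unshifted forms of the second-derivative series agree term by term:

```python
# Verify the index shift: sum_{n>=2} n(n-1) a_n x^(n-2)
# equals sum_{m>=0} (m+2)(m+1) a_{m+2} x^m for a truncated series.
a = [1.0, 0.5, -2.0, 3.0, 0.25, -1.5]   # arbitrary sample coefficients
x = 0.7

unshifted = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, len(a)))
shifted = sum((m + 2) * (m + 1) * a[m + 2] * x**m for m in range(len(a) - 2))
print(unshifted, shifted)   # identical values
```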



3.2 - Series Solutions of 2nd Order Linear DEs
Consider a homogeneous equation

P(x) y'' + Q(x) y' + R(x) y = 0

in which the functions P(x), Q(x) and R(x) are real analytic functions in an interval ( I )
about the base point x0.
If P(x0) ≠ 0 then the point x0 is an ordinary point. Otherwise, x0 is a singular point.
If the limits

lim_{x→x0} (x − x0) Q(x)/P(x) ,   lim_{x→x0} (x − x0)² R(x)/P(x)

exist and are finite then the singular point is regular. Otherwise x0 is irregular, that is
at least one of the above limits doesn't exist.

Example 3.1 :
Determine the power series solution of the differential equation:
y'' + xy' + 2y = 0
using the Leibniz–Maclaurin method, given the initial conditions that at x = 0 , y = 1 and
y' = 2.
Each term is differentiated n times, which gives:
y'' differentiated n times becomes y^(n+2)
xy' differentiated n times ( use Leibniz theorem with v = x and u = y' ):

(x y')^(n) = x y^(n+1) + n y^(n) + 0

y differentiated n times becomes y^(n)
Substituting back into the DE, we get

y^(n+2) + x y^(n+1) + n y^(n) + 2 y^(n) = 0

y^(n+2) + x y^(n+1) + (n + 2) y^(n) = 0

The recurrence relation at x = 0 will be

y^(n+2)(0) + (n + 2) y^(n)(0) = 0

y^(n+2)(0) = −(n + 2) y^(n)(0)

Substituting n = 0, 1, 2, 3, … will produce a set of relationships between the various coefficients,
with at x = 0 , y = 1 and y' = 2 :

For n = 0 :  y''(0) = −2 y(0)                For n = 1 :  y'''(0) = −3 y'(0)
For n = 2 :  y''''(0) = −4 y''(0) = 8 y(0)   For n = 3 :  y'''''(0) = −5 y'''(0) = 15 y'(0)

Substitute in the Maclaurin series

y(x) = y(0) + x y'(0) + (x²/2!) y''(0) + (x³/3!) y'''(0) + (x⁴/4!) y''''(0) + (x⁵/5!) y'''''(0) + …

y(x) = y(0) + x y'(0) − 2 y(0) x²/2! − 3 y'(0) x³/3! + 8 y(0) x⁴/4! + 15 y'(0) x⁵/5! + …

The solution of the DE will be

y(x) = y(0) [ 1 − 2 x²/2! + 8 x⁴/4! + ⋯ ] + y'(0) [ x − 3 x³/3! + 15 x⁵/5! + ⋯ ]

with y(0) = 1 and y'(0) = 2 for the given initial conditions.

3.2.1 - Solution about ordinary point


For an ordinary point we may assume a Taylor expansion

y(x) = ∑_{n=0}^∞ a_n (x − x0)^n

Then, for given constants a0, a1 there is a unique solution y(x) to the initial value
problem

P(x) y'' + Q(x) y' + R(x) y = 0 ,   y(x0) = a0 ,   y'(x0) = a1

Moreover, the solution y(x) can be represented by the power series

y(x) = ∑_{n=0}^∞ a_n (x − x0)^n = a0 + a1 (x − x0) + a2 (x − x0)² + ⋯

Let us note that the initial conditions appear in the solution as

a0 = y(x0) ,   a1 = y'(x0)

Example 3.2 :
Consider the differential equation
y'' + y = 0 ,   y(0) = 1 ,   y'(0) = 0

The solution y(x) can be written as

y(x) = ∑_{n=0}^∞ a_n x^n = a0 + a1 x + a2 x² + ⋯

Differentiating and substituting back into the DE:

y'(x) = ∑_{n=1}^∞ n a_n x^{n−1}

y''(x) = ∑_{n=2}^∞ n (n − 1) a_n x^{n−2}

Then

∑_{n=2}^∞ n (n − 1) a_n x^{n−2} + ∑_{n=0}^∞ a_n x^n = 0

Let us re-write this in terms of the same powers of x by shifting indices.

For the first term :
n − 2 = m  →  n = m + 2
For the second term :
n = m
Substituting back in the DE:

∑_{m=0}^∞ (m + 2)(m + 1) a_{m+2} x^m + ∑_{m=0}^∞ a_m x^m = 0

For the same indices of the summation sign and the same powers of x:

∑_{m=0}^∞ [ (m + 2)(m + 1) a_{m+2} + a_m ] x^m = 0

(m + 2)(m + 1) a_{m+2} + a_m = 0
Which can be put in the form

a_{m+2} = − a_m / ( (m + 2)(m + 1) ) ,   m = 0, 1, 2, …

This is called the recursion relation.

If a0 and a1 are known, which are the initial conditions of the given IVP, this
equation allows us to determine the remaining coefficients recursively by putting m in
succession as :

m   The relation              The constants
0   (2)(1) a2 + a0 = 0        a2 = −a0/2 = −a0/2!
1   (3)(2) a3 + a1 = 0        a3 = −a1/6 = −a1/3!
2   (4)(3) a4 + a2 = 0        a4 = −a2/12 = a0/24 = a0/4!
3   (5)(4) a5 + a3 = 0        a5 = −a3/20 = a1/120 = a1/5!
4   (6)(5) a6 + a4 = 0        a6 = −a4/30 = −a0/720 = −a0/6!
5   (7)(6) a7 + a5 = 0        a7 = −a5/42 = −a1/5040 = −a1/7!

Substitute the constants into the solution, we get :

y(x) = a0 + a1 x + a2 x² + ⋯

y(x) = a0 + a1 x − a0 x²/2! − a1 x³/3! + a0 x⁴/4! + a1 x⁵/5! − a0 x⁶/6! − a1 x⁷/7! + ⋯

y(x) = a0 ( 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯ ) + a1 ( x − x³/3! + x⁵/5! − x⁷/7! + ⋯ )

which are the Maclaurin series for cos x and sin x with base point x0 = 0. Then the
solution of the given 2nd order IVP is
y(x) = a0 cos x + a1 sin x
And the initial conditions give a0 = 1 and a1 = 0.
Substitute the constants into the solution, we get
y(x) = cos x
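A short numeric check of this result (my own sketch): iterating the recursion a_{m+2} = −a_m/((m+2)(m+1)) with a0 = 1, a1 = 0 and summing the truncated series reproduces cos x:

```python
import math

# Sum the power-series solution of y'' + y = 0 built from the
# recursion a_{m+2} = -a_m / ((m+2)(m+1)), with a0 = 1, a1 = 0.
def series_solution(x, terms=20):
    a = [1.0, 0.0]                       # initial conditions a0 = y(0), a1 = y'(0)
    for m in range(terms - 2):
        a.append(-a[m] / ((m + 2) * (m + 1)))
    return sum(c * x**k for k, c in enumerate(a))

print(series_solution(1.0), math.cos(1.0))   # both ~0.5403023
```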
Example 3.3 :
Consider the differential equation  y'' + x y' + y = 0
It is clear that x = 0 is an ordinary point. We can use the power series method:

y = ∑_{n=0}^∞ a_n x^n ,  y' = ∑_{n=1}^∞ n a_n x^{n−1} ,  y'' = ∑_{n=2}^∞ n (n − 1) a_n x^{n−2}

Insert back into the DE:

∑_{n=2}^∞ n (n − 1) a_n x^{n−2} + x ∑_{n=1}^∞ n a_n x^{n−1} + ∑_{n=0}^∞ a_n x^n = 0

∑_{n=2}^∞ n (n − 1) a_n x^{n−2} + ∑_{n=1}^∞ n a_n x^n + ∑_{n=0}^∞ a_n x^n = 0

Let us re-write this in terms of the same powers of x:

n − 2 = m  →  n = m + 2

∑_{m=0}^∞ (m + 2)(m + 1) a_{m+2} x^m + ∑_{m=1}^∞ m a_m x^m + ∑_{m=0}^∞ a_m x^m = 0

Now we can express the DE in a single sigma, but the sums should start with the same
index m = 1. Therefore the m = 0 terms should be written separately:

2 a2 + a0 + ∑_{m=1}^∞ [ (m + 2)(m + 1) a_{m+2} + m a_m + a_m ] x^m = 0

( 2 a2 + a0 ) + ∑_{m=1}^∞ [ (m + 2)(m + 1) a_{m+2} + (m + 1) a_m ] x^m = 0

Then
( 2 a2 + a0 ) = 0   gives   a2 = − a0/2

(m + 2)(m + 1) a_{m+2} + (m + 1) a_m = 0

a_{m+2} = − (m + 1) a_m / ( (m + 2)(m + 1) ) = − a_m / (m + 2)

This is the recursion relation. If a0 and a1 are known, which are the initial
conditions of the given IVP, this equation allows us to determine the remaining
coefficients recursively by putting m in succession as :

m   The relation        The constants
0   a2 = −a0/2          a2 = −a0/2
1   a3 = −a1/3          a3 = −a1/3
2   a4 = −a2/4          a4 = a0/8
3   a5 = −a3/5          a5 = a1/15
4   a6 = −a4/6          a6 = −a0/48

Substitute into the solution:

y(x) = a0 ( 1 − x²/2 + x⁴/8 − x⁶/48 + ⋯ ) + a1 ( x − x³/3 + x⁵/15 + ⋯ )
3.2.2 - Legendre’s Differential Equation


(1 − x²) y'' − 2x y' + n (n + 1) y = 0 ,   |x| < 1
Solving at the ordinary point x0 = 0.
Solving the Legendre equation by power series at x0 = 0:

y(x) = ∑_{m=0}^∞ a_m x^m

Differentiate and substitute back into Legendre's equation, and let n(n+1) = K:

y'(x) = ∑_{m=1}^∞ m a_m x^{m−1} ,   y''(x) = ∑_{m=2}^∞ m (m − 1) a_m x^{m−2}

(1 − x²) ∑_{m=2}^∞ m (m − 1) a_m x^{m−2} − 2x ∑_{m=1}^∞ m a_m x^{m−1} + K ∑_{m=0}^∞ a_m x^m = 0

∑_{m=2}^∞ m (m − 1) a_m x^{m−2} − ∑_{m=2}^∞ m (m − 1) a_m x^m − ∑_{m=1}^∞ 2 m a_m x^m + ∑_{m=0}^∞ K a_m x^m = 0

The powers of x need to be the same; only the first term needs to be shifted:

∑_{m=2}^∞ m (m − 1) a_m x^{m−2} = ∑_{m=0}^∞ (m + 2)(m + 1) a_{m+2} x^m

∑_{m=2}^∞ m (m − 1) a_m x^m = ∑_{m=0}^∞ m (m − 1) a_m x^m

∑_{m=1}^∞ 2 m a_m x^m = ∑_{m=0}^∞ 2 m a_m x^m

∑_{m=0}^∞ (m + 2)(m + 1) a_{m+2} x^m − ∑_{m=0}^∞ m (m − 1) a_m x^m − ∑_{m=0}^∞ 2 m a_m x^m + ∑_{m=0}^∞ K a_m x^m = 0

Now we can express the DE in a single sigma:

∑_{m=0}^∞ [ (m + 2)(m + 1) a_{m+2} − m (m − 1) a_m − 2 m a_m + K a_m ] x^m = 0

(m + 2)(m + 1) a_{m+2} − [ m (m − 1) + 2m − K ] a_m = 0

a_{m+2} = [ m (m − 1) + 2m − K ] a_m / ( (m + 2)(m + 1) )

a_{m+2} = [ m (m + 1) − n (n + 1) ] a_m / ( (m + 2)(m + 1) )

Which can be put in the form

a_{m+2} = − (n − m)(n + m + 1) a_m / ( (m + 2)(m + 1) )

Note that this is a second order recursion (relating a_{m+2} to a_m), thus there are two
undetermined constants a0 and a1 giving two independent series. We get

m   The relation                         The constants
0   a2 = − n(n+1) a0 / 2                 a2 = − n(n+1) a0 / 2!
1   a3 = − (n−1)(n+2) a1 / (3×2)         a3 = − (n−1)(n+2) a1 / 3!
2   a4 = − (n−2)(n+3) a2 / (4×3)         a4 = n(n+1)(n−2)(n+3) a0 / 4!
3   a5 = − (n−3)(n+4) a3 / (5×4)         a5 = (n−1)(n+2)(n−3)(n+4) a1 / 5!

y(x) = a0 [ 1 − n(n+1) x²/2! + n(n+1)(n−2)(n+3) x⁴/4! + ⋯ ]
     + a1 [ x − (n−1)(n+2) x³/3! + (n−1)(n+2)(n−3)(n+4) x⁵/5! + ⋯ ]
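One useful consequence of the recursion (my own check, not worked in the notes): when n is a non-negative integer, one of the two series terminates and gives a polynomial. For n = 2 the even series stops after x², recovering a multiple of the Legendre polynomial P2(x) = (3x² − 1)/2:

```python
from fractions import Fraction

# Legendre recursion a_{m+2} = -(n-m)(n+m+1) a_m / ((m+2)(m+1)).
def legendre_even_coeffs(n, a0=Fraction(1), terms=6):
    a = [a0, Fraction(0)]                     # even series: a1 = 0
    for m in range(terms - 2):
        a.append(Fraction(-(n - m) * (n + m + 1), (m + 2) * (m + 1)) * a[m])
    return a

a = legendre_even_coeffs(2)
print(a)   # coefficients 1, 0, -3, 0, 0, 0 -> the series terminates: 1 - 3x^2
```

Scaling by −1/2 gives exactly P2(x).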

3.2.3 - Solution about regular singular point (Frobenius method)


For a regular singular point a straight power ( Taylor ) expansion fails. Instead one
should try a Frobenius series. The DE takes the form
y'' + P(x) y' + Q(x) y = 0
Suppose x0 = 0 is a regular singular point of the DE, so that the equation can be written as

y'' + ( b(x)/x ) y' + ( c(x)/x² ) y = 0

with b(x) and c(x) analytic at x0 = 0.

Then testing :

lim_{x→x0} (x − x0) P(x) = b(x0) ,   lim_{x→x0} (x − x0)² Q(x) = c(x0)

If these limits exist, the equation can be represented by a power series and solved by the
Frobenius method.
The differential equation has a Frobenius solution in the form of a Frobenius series as

y(x) = x^c ∑_{m=0}^∞ a_m x^m = ∑_{m=0}^∞ a_m x^{m+c} ,   a0 ≠ 0

where the exponent c may be any (real or complex) number, c is chosen so that
a0 ≠ 0, and the series has an interval of convergence R.

y'(x) = ∑_{m=0}^∞ (m + c) a_m x^{m+c−1} ,   a0 ≠ 0

y''(x) = ∑_{m=0}^∞ (m + c)(m + c − 1) a_m x^{m+c−2} ,   a0 ≠ 0

The corresponding equation of the smallest power will be in the form :

[ c (c − 1) + b(0) c + c(0) ] a0 = 0

Since by assumption a0 ≠ 0, the expression in the brackets must be zero. This gives
c (c − 1) + b(0) c + c(0) = 0
This important quadratic equation is called the indicial equation of the ODE. The
Frobenius method yields a basis of solutions of the DE based on the roots of the indicial
equation. There are three cases:
Case 1. Distinct roots not differing by an integer.
If c1 > c2 and ( c1 − c2 ) is not a positive integer, then there are two linearly
independent Frobenius solutions

y1(x) = x^{c1} ∑_{m=0}^∞ a_m x^m ,   y2(x) = x^{c2} ∑_{m=0}^∞ a*_m x^m

With a0 ≠ 0 and a*0 ≠ 0, the solutions are valid within R.

Case 2. A double root.
If ( c1 = c2 = c ) , then there is a Frobenius solution

y1(x) = x^c ∑_{m=0}^∞ a_m x^m ,   y2(x) = y1(x) ln x + x^c ∑_{m=1}^∞ a*_m x^m

With a0 ≠ 0, the solutions are valid within R. These solutions are
linearly independent.
Case 3. Roots differing by an integer.
If c1 > c2 and ( c1 − c2 ) is a positive integer, then there are two linearly
independent Frobenius solutions

y1(x) = x^{c1} ∑_{m=0}^∞ a_m x^m ,   y2(x) = k y1(x) ln x + x^{c2} ∑_{m=0}^∞ a*_m x^m

With a0 ≠ 0 and a*0 ≠ 0 ( the constant k may turn out to be zero ), the solutions are valid within R.

Note that :
 In all cases y1(x) and y2(x) are solutions, and since they are linearly independent
the general solution is y(x) = C1 y1(x) + C2 y2(x).
 Case 1 includes complex conjugate roots too.

Example 3.5 :
Determine, using the Frobenius method, the general power series solution of the
differential equation:
3x y'' + y' − y = 0 , where x0 = 0 is a regular singular point .
Determination of the indicial equation : Assume that the solution will be

y = ∑_{m=0}^∞ a_m x^{m+c} ,  y' = ∑ (m + c) a_m x^{m+c−1} ,  y'' = ∑ (m + c)(m + c − 1) a_m x^{m+c−2}

3 ∑ (m + c)(m + c − 1) a_m x^{m+c−1} + ∑ (m + c) a_m x^{m+c−1} − ∑ a_m x^{m+c} = 0

∑ [ 3 (m + c)(m + c − 1) + (m + c) ] a_m x^{m+c−1} − ∑ a_m x^{m+c} = 0

The equation of the smallest power will be the indicial equation :

[ 3 c (c − 1) + c ] a0 = 0   →   c (3c − 2) = 0

The roots are c1 = 2/3 and c2 = 0.

The difference is not a positive integer; the solution will be in case (1):

y1(x) = x^{2/3} ∑_{m=0}^∞ a_m x^m ,   y2(x) = ∑_{m=0}^∞ a*_m x^m

With a0 ≠ 0 and a*0 ≠ 0, the solutions are valid within R.

Solution evaluation
Let the solution be of the form

y(x) = ∑_{m=0}^∞ a_m x^{m+c} ,   a0 ≠ 0

y'(x) = ∑_{m=0}^∞ (m + c) a_m x^{m+c−1}

y''(x) = ∑_{m=0}^∞ (m + c)(m + c − 1) a_m x^{m+c−2}

Substituting into each term of the given DE:

3x ∑ (m + c)(m + c − 1) a_m x^{m+c−2} + ∑ (m + c) a_m x^{m+c−1} − ∑ a_m x^{m+c} = 0

∑ 3 (m + c)(m + c − 1) a_m x^{m+c−1} + ∑ (m + c) a_m x^{m+c−1} − ∑ a_m x^{m+c} = 0

∑ [ 3 (m + c)(m + c − 1) + (m + c) ] a_m x^{m+c−1} − ∑ a_m x^{m+c} = 0

Another way of determining the indicial equation : equating the coefficient of the
minimum power to zero,
[ 3 (m + c)(m + c − 1) + (m + c) ] a_m = 0
At m = 0:
[ 3 c (c − 1) + c ] a0 = 0
[ 3 c² − 2 c ] a0 = 0
The roots are c1 = 2/3 and c2 = 0.
The first solution at c = 0:

∑ m (3m − 2) a_m x^{m−1} − ∑ a_m x^m = 0

Equating terms of the same power, for the second term put m → m − 1:

∑ m (3m − 2) a_m x^{m−1} − ∑ a_{m−1} x^{m−1} = 0

Then

∑_{m=1}^∞ [ m (3m − 2) a_m − a_{m−1} ] x^{m−1} = 0

Then

a_m = a_{m−1} / ( m (3m − 2) )

m   The relation             The constants
1   a1 = a0 / (1 × 1)        a1 = a0
2   a2 = a1 / (2 × 4)        a2 = a0 / 8
3   a3 = a2 / (3 × 7)        a3 = a0 / (8 × 21)
4   a4 = a3 / (4 × 10)       a4 = a0 / (8 × 21 × 40)
5   a5 = a4 / (5 × 13)       a5 = a0 / (8 × 21 × 40 × 65)

Substitute back in the main solution

y = x^c { a0 + a1 x + a2 x² + ⋯ + a_m x^m + ⋯ } , and with c = 0 , x^c = 1 :

y = a0 + a1 x + a2 x² + a3 x³ + a4 x⁴ + ⋯

y(x) = a0 + a0 x + (a0/8) x² + (a0/(21×8)) x³ + (a0/(4×10×21×8)) x⁴ + ⋯

y(x) = a0 [ 1 + x + x²/8 + x³/(21×8) + x⁴/(4×10×21×8) + ⋯ ]

3.2.4 - Bessel’s Differential Equation


One of the most important differential equations in applied mathematics is Bessel’s
equation and is of the form:
x² y'' + x y' + ( x² − v² ) y = 0
where
v is a real constant ≥ 0. This equation, which has applications in electric fields,
vibrations and heat conduction, may be solved using Frobenius’ method.



We transform the DE into standard form as

y'' + (1/x) y' + ( (x² − v²)/x² ) y = 0

The functions b(x) = 1 and c(x) = x² − v² are both analytic at x0 = 0. Then it can be
solved using the Frobenius method.
Determination of the indicial equation :
Assume that the solution will be

y = ∑ a_m x^{m+c} ,  y' = ∑ (m + c) a_m x^{m+c−1} ,  y'' = ∑ (m + c)(m + c − 1) a_m x^{m+c−2}

∑ (m + c)(m + c − 1) a_m x^{m+c} + ∑ (m + c) a_m x^{m+c} + ( x² − v² ) ∑ a_m x^{m+c} = 0

∑ (m + c)(m + c − 1) a_m x^{m+c} + ∑ (m + c) a_m x^{m+c} + ∑ a_m x^{m+c+2} − v² ∑ a_m x^{m+c} = 0

∑ a_m x^{m+c+2} + ∑ [ (m + c)(m + c − 1) + (m + c) − v² ] a_m x^{m+c} = 0

The equation of the smallest power will be the indicial equation :

[ c (c − 1) + c − v² ] a0 = 0   →   c² − v² = 0

The roots are c1 = v and c2 = −v ,  c1 − c2 = 2v.
The first solution at c = v :

y(x) = ∑_{m=0}^∞ a_m x^{m+c} ,   a0 ≠ 0

y'(x) = ∑_{m=0}^∞ (m + c) a_m x^{m+c−1}

y''(x) = ∑_{m=0}^∞ (m + c)(m + c − 1) a_m x^{m+c−2}

Substitute in the DE:

∑ (m + c)(m + c − 1) a_m x^{m+c} + ∑ (m + c) a_m x^{m+c} + ∑ a_m x^{m+c+2} − v² ∑ a_m x^{m+c} = 0

Substituting c = v:

∑ (m + v)(m + v − 1) a_m x^{m+v} + ∑ (m + v) a_m x^{m+v} + ∑ a_m x^{m+v+2} − v² ∑ a_m x^{m+v} = 0

By replacing m by (m − 2) in the third term to give the corresponding power x^{m+v},
which occurs in all terms, i.e.

∑ (m + v)(m + v − 1) a_m x^{m+v} + ∑ (m + v) a_m x^{m+v} + ∑ a_{m−2} x^{m+v} − v² ∑ a_m x^{m+v} = 0

∑ [ (m + v)(m + v − 1) + (m + v) − v² ] a_m x^{m+v} + ∑ a_{m−2} x^{m+v} = 0

To find the recursion relation the sums should start with the same index, hence the
coefficients of m = 0 and m = 1 are taken out of the sigma:

( 1 + 2v ) a1 x^{1+v} + ∑_{m=2}^∞ [ m (m + 2v) a_m + a_{m−2} ] x^{m+v} = 0

The solution must satisfy the equation

( 1 + 2v ) a1 = 0

Then a1 = 0, giving that all the odd coefficients will be zero, and m will be even.
And
m (m + 2v) a_m + a_{m−2} = 0 ;  putting m = 2k where k = 1, 2, 3, …
Then the recursion relation will be

a_{2k} = − a_{2k−2} / ( 2k (2k + 2v) ) = − a_{2k−2} / ( 2² k (v + k) )

For k = 1

a2 = − a0 / ( 2² (v + 1) )

For k = 2

a4 = − a2 / ( 2² · 2 (v + 2) ) = a0 / ( 2⁴ 2! (v + 1)(v + 2) )

For k = 3

a6 = − a4 / ( 2² · 3 (v + 3) ) = − a0 / ( 2⁶ 3! (v + 1)(v + 2)(v + 3) )

For k = 4

a8 = − a6 / ( 2² · 4 (v + 4) ) = a0 / ( 2⁸ 4! (v + 1)(v + 2)(v + 3)(v + 4) )

In general

a_{2k} = (−1)^k a0 / ( 2^{2k} k! (v + 1)(v + 2) …… (v + k) )

Then the solution will be

y1(x) = a0 ∑_{k=0}^∞ (−1)^k x^{2k+v} / ( 2^{2k} k! (v + 1)(v + 2) …… (v + k) )

The constant a0 is assumed to be ( for integer order v = n )

a0 = 1 / ( 2^n n! )

Substituting in the solution:

J_n(x) = ∑_{k=0}^∞ (−1)^k x^{2k+n} / ( 2^{2k+n} k! (n + k)! )

Using the gamma function:

J_v(x) = ∑_{k=0}^∞ (−1)^k x^{2k+v} / ( 2^{2k+v} k! Γ(v + k + 1) )

which is called the Bessel function of the first kind of order v.

The Bessel function of order n = 0 will be

J₀(x) = 1 − x²/(2² (1!)²) + x⁴/(2⁴ (2!)²) − x⁶/(2⁶ (3!)²) + ⋯

which is similar to the cosine function series.
The Bessel function of order n = 1 will be

J₁(x) = x/2 − x³/(2³ 1! 2!) + x⁵/(2⁵ 2! 3!) − x⁷/(2⁷ 3! 4!) + ⋯

which is similar to the sine function series.
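Truncating the J₀ series gives accurate values quickly; this small check (my own, using the reference value J₀(1) ≈ 0.7651976866) sums the series directly:

```python
import math

# Partial sum of J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2.
def J0(x, terms=15):
    return sum((-1)**k * (x / 2)**(2 * k) / math.factorial(k)**2
               for k in range(terms))

print(J0(1.0))   # ~0.7651976866, the known value of J0 at x = 1
```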

The second solution of Bessel function


Because Bessel’s equation is second-order DE, there is a second solution that is
linearly independent from Jν(x). The indicial equation of Bessel function has the
roots c1 = v and c2 = −v ; the form that a second solution will take depends
on the difference

c1 − c2 = v − (−v) = 2v
Case 1 : If 2ν is not an integer :
Then Jν and J−ν are linearly independent. In this case, the general solution of Bessel’s
equation is
y(x) = C1 J_v(x) + C2 J_−v(x)
Where

J_v(x) = ∑_{k=0}^∞ (−1)^k x^{2k+v} / ( 2^{2k+v} k! Γ(v + k + 1) )

J_−v(x) = ∑_{k=0}^∞ (−1)^k x^{2k−v} / ( 2^{2k−v} k! Γ(k − v + 1) )

Case 2 : If ν = 0 :
That is, the roots are equal: c1 = c2 = 0.

y1(x) = J₀(x) = ∑_{k=0}^∞ (−1)^k x^{2k} / ( 2^{2k} (k!)² )

y2(x) = J₀(x) ln x − ∑_{k=1}^∞ (−1)^k H(k) x^{2k} / ( 2^{2k} (k!)² )

Where H(k) is the harmonic sum

H(k) = 1 + 1/2 + 1/3 + ⋯ + 1/k = ∑_{m=1}^k 1/m

y(x) = C1 J₀(x) + C2 y2(x)
Case 3 : If 2ν is an integer :
If 2ν is an odd positive integer, say 2ν = 2n + 1 for some nonnegative integer n, then
ν = (2n + 1)/2 and Jν and J−ν are again linearly independent. In this case, the general
solution of Bessel's equation is
y(x) = C1 J_{(2n+1)/2}(x) + C2 J_{−(2n+1)/2}(x)
For n = 0

y(x) = C1 J_{1/2}(x) + C2 J_{−1/2}(x)

y(x) = √( 2/(πx) ) ( C1 sin x + C2 cos x )

If ν is an integer ( ≥ 0 ) then Jν and J−ν are solutions of Bessel's equation, but are
linearly dependent. Hence the second solution needs to be found, which leads
to the Bessel function of the second kind. The Bessel function of the second kind of order
n is given by ( for n = 0 )

Y₀(x) = (2/π) [ J₀(x) ( ln(x/2) + γ ) + ∑_{k=1}^∞ (−1)^{k+1} H(k) x^{2k} / ( 2^{2k} (k!)² ) ]

Euler constant γ ≈ 0.57722

In general the Bessel function of the 2nd kind is defined as

Y_v(x) = ( J_v(x) cos(vπ) − J_−v(x) ) / sin(vπ)

The general solution will be
y(x) = C1 J_v(x) + C2 Y_v(x)



Exercise (3)
1. Solve the ODE y' + 2xy = 0, then prove the result by the power series
method?

2. Identify and classify the singular points in the following DEs


a) ′ +3 + 2 = 0
b) ′ +3 + 2 = 0
c) ( − 1) ′ + + = 0
d) ′ + + (1 − ) = 0

3. Solve by power series method


+ =0
2 − + (1 − ) = 0
( − 1) +3 + =0

4. Solve the Legendre DE with n = 0 ?

5. Find the general solution of Airy's equation :


y'' − xy = 0 about x = 0
y'' − xy = 0 about x = 1

6. Use power series method to solve the IVPs


( − 1 ) − + = 0 ∶ (0) = 2, (0) = 6
−2 + = 0 ∶ (0) = 0, (0) = 1

7. Use power series method to solve


− 4 ′ − 4 =

8. Write the general solution in terms of Bessel function


+ + =0

9. Consider the Chebyshev differential equation

(1 − t²) y'' − t y' + n² y = 0
i. Identify the ordinary and singular points for the DE?
ii. Solve by power series around xo = 0, write the first six terms?



Part ( 4 ) : Solution of ODEs by Laplace Transform
______________________________________________________________________________

4.1 - Introduction
In some situations, a difficult problem can be transformed into an easier problem,
whose solution can be transformed back into the solution of the original problem.
For example, an integrating factor can sometimes be found to transform a non-exact
first order first degree ordinary differential equation into an exact ODE. Similarly the
Bernoulli ODE, which, upon a transformation of the dependent variable, becomes a
linear ODE. Also, Laplace transform will convert an initial value problem into a much
easier algebra problem.
Linear and nonlinear Systems
When the system is linear, the superposition principle can be applied. This important fact is the
reason that the techniques of linear-system analysis have been so well developed. The
superposition principle can be stated as follows. If the input/output relation of a system is :
x1(t) → y1(t) and x2(t) → y2(t) ,  then  a1 x1(t) + a2 x2(t) → a1 y1(t) + a2 y2(t)
Then the system is linear. So, a system is said to be nonlinear if this equation is not valid.
Time-varying and time-invariant systems
A system is said to be time-invariant if a time shift in the input signal causes the same time shift
in the output signal. If y(t) is the output corresponding to input x(t), a time-invariant system will
have y(t − t0) as the output when x(t − t0) is the input. So, the rule used to compute the system
output does not depend on the time at which the input is applied. On the other hand, if the
output corresponding to input x(t − t0) is not equal to y(t − t0), we call this system time-variant
or time-varying.
A time-invariant differential equation is a differential equation in which none of its
coefficients depend on the independent time variable, t.

+ + = ( )

A system characterized by such a differential equation is said to be a linear time-


invariant system.
A time-variable differential equation is a differential equation with one or more of its
coefficients being functions of time t, e.g.

a(t) d²y/dt² + b(t) dy/dt + c(t) y = x(t)

Since the differential equation is linear and with variable coefficients, a system
characterized by such a model is said to be a linear time-variant system.
The main goal in the analysis of systems is to find the system response ( system
output – solution of the ODE ) due to external ( system inputs x(t) ) and internal
(system initial conditions) forces. It is known from elementary theory of differential
equations that the solution of a linear differential equation has two additive
components: the homogenous and particular solutions. The homogenous solution is
contributed by the initial conditions and the particular solution comes from the
forcing function. In engineering, the homogenous solution is also called the system
natural response, and the particular solution is called the system forced response.

A Laplace transform will convert an initial value problem into a much easier algebra
problem. The solution of the original problem is then the inverse Laplace transform
of the solution to the algebra problem.

Uses of Laplace transforms include:


1) the solution of some ordinary differential equations
2) the solution of partial differential equations
3) the solution of some integro-differential equations, such as

y(t) = g(t) + ∫₀ᵗ h(t − x) y(x) dx



The Laplace transform method has two main advantages over the methods discussed
in mathematically :
 Initial value problems are solved without first determining a general solution.
 The use of the Heaviside and Dirac’s delta functions make the method powerful
for problems with inputs (driving forces) that have discontinuities or represent
short impulses or complicated periodic functions.
The foundation of Laplace theory is based on Lerch's cancellation law :

ℒ{ y(t) } = ℒ{ f(t) }   implies that   y(t) = f(t)

In differential equation applications, y(t) is the sought-after unknown while f(t) is an explicit
expression taken from integral tables.
Having obtained expressions for the Laplace transforms of derivatives with known all
initial conditions, we are now in a position to use Laplace transform methods to solve

ordinary linear differential equations.

4.2 - Laplace Transform


The concept of transformation : an integral of the form

∫_a^b k(s, t) f(t) dt

is called an integral transform of f(t); the function f(t) is transformed from the t-space into
another space (s). The function k(s, t) is called the kernel of the transform, and the
parameter s, which is independent of t, belongs to some domain on the real line or in the
complex plane. Choosing different kernels and different values of a and b, different
integral transforms are introduced, for example the Laplace, Fourier, Hankel and Mellin
transforms.

We are particularly interested in an integral transform where the interval of
integration is the unbounded interval [0, ∞). If f(t) is defined on this interval, then the
improper integral is

∫₀^∞ k(s, t) f(t) dt = lim_{b→∞} ∫₀^b k(s, t) f(t) dt



If the limit exists, then we say that the integral exists or is convergent; if the limit does
not exist, the integral does not exist and is divergent. The limit will, in general, exist
for only certain values of the variable s.
When it exists, the Laplace transform F(s) of a real function f(t) with domain of
definition 0 ≤ t < ∞ is defined as the integral transform with the kernel k(t, s) = e^{−st},
the interval of integration 0 ≤ t < ∞, and s a complex variable such that Re s > c for
some nonnegative constant c, so that

ℒ{ f(t) } = F(s) = ∫₀^∞ e^{−st} f(t) dt

The inverse Laplace transform will be

If ℒ{ f(t) } = F(s)  then  f(t) = ℒ⁻¹{ F(s) }

This correspondence between the functions F(s) and f(t) is called the inverse Laplace
transformation, f(t) being the inverse transform of F(s).

Before proceeding, there are a few observations relating to the definition worthy of
comment.
a) The symbol ℒ denotes the Laplace transform operator; when it operates on a
function f (t), it transforms it into a function F(s) of the complex variable s. We
say the operator transforms the function f (t) in the t domain (usually called the
time domain) into the function F(s) in the s domain (usually called the complex
frequency domain, or simply the frequency domain). It is usual to refer to f (t)
and F(s) as a Laplace transform pair, written as { f (t), F(s)}.
b) Because the upper limit in the integral is infinite, the domain of integration is
infinite. Thus the integral is an example of an improper integral, and hence the
limit must exist so the integral is convergent.
c) Because the lower limit in the integral is zero, it follows that when taking the
Laplace transform, the behavior of f(t) for negative values of t is ignored or
suppressed. This means that F(s) contains information on the behavior of f(t)
only for t>0, so that the Laplace transform is not a suitable tool for investigating



problems in which values of f(t) for t<0 are relevant. In most engineering
applications this does not cause any problems, since we are then concerned
with physical systems for which the functions we are dealing with vary with
time t. An attribute of physical realizable systems is that they are non-
anticipatory in the sense that there is no output (or response) until an input (or
excitation) is applied. Because of this causal relationship between the input and
output, we define a function f(t) to be causal if f(t) = 0 for t < 0.
d) If the behaviour of f(t) for t < 0 is of interest then we need to use the
alternative two-sided or bilateral Laplace transform of the function f(t),
defined by

ℒ_B{ f(t) } = ∫_{−∞}^∞ e^{−st} f(t) dt

The Laplace transform defined with lower limit zero, is sometimes referred to
as the one-sided or unilateral Laplace transform of the function f (t). In this
course we shall concern ourselves only with the latter transform, and refer to
it simply as the Laplace transform of the function f (t).

4.2.1 - Existence and uniqueness of Laplace transforms


Sufficient conditions guaranteeing the existence of Laplace transform of a function f(t)
are that f be piecewise continuous on [0,∞) and that f be of exponential order for t>T.
Recall that a function f is piecewise continuous on [0, ∞) if, in any interval 0 ≤ a ≤ t ≤
b, there are at most a finite number of points tk, k = 1, 2, . . . , n (tk-1 < tk,) at which f
has finite discontinuities and is continuous on each open interval (tk-1 , tk ) as shown in
figure



The concept of exponential order is defined in the following manner : A function f is
said to be of exponential order if there exist constants c, M > 0, and T > 0 such that

| f(t) | < M e^{ct}   for   t > T

That is, if f is an increasing function, then the condition simply states that the graph of f(t) on
the interval (T, ∞) does not grow faster than the graph of the exponential function M e^{ct}, where
c is a positive constant, as shown in the figure.

Equivalently, a function is said to be of exponential order α if

lim_{t→∞} e^{−αt} | f(t) |   exists and is finite.

Then we conclude that :

If f(t) is piecewise continuous on [0, ∞) and of exponential order c, then
ℒ{ f(t) } = F(s) exists for all values of s > c.
Uniqueness : If the Laplace transform of a given function exists, it is uniquely determined.
Conversely, it can be shown that if two functions (both defined on the positive real axis) have
the same transform, these functions cannot differ over an interval of positive length. Hence we
may say that the inverse of a given transform is essentially unique. In particular, if two
continuous functions have the same transform, they are completely identical.

Example 4.1
Using the Laplace transform definition determine the Laplace transform of the ramp function
f(t) = t ?

ℒ{ t } = ∫₀^∞ t e^{−st} dt

Integrating by parts,

ℒ{ t } = [ − t e^{−st}/s − e^{−st}/s² ]₀^∞

Taking s = σ + jω, where σ and ω are real numbers, then

lim_{t→∞} t e^{−st} = lim_{t→∞} t e^{−σt} e^{−jωt} = 0   for σ = Re(s) > 0

lim_{t→∞} e^{−st}/s² = 0   for Re(s) > 0

Then
the Laplace transform of the function f(t) = t only exists if Re(s) > 0, and it is

ℒ{ t } = ∫₀^∞ t e^{−st} dt = 1/s² ,   Re(s) > 0
4.2.2 - Properties of the Laplace transform


we consider some of the properties of the Laplace transform that will enable us to
find further transform pairs { f (t ), F(s)} without having to compute them directly
using the definition.

The linearity property

A fundamental property of the Laplace transform is its linearity, which may be stated
as follows: if f(t) and g(t) are functions having Laplace transforms and a and b are
any constants, then

ℒ{ a f(t) + b g(t) } = a ℒ{ f(t) } + b ℒ{ g(t) } = a F(s) + b G(s)

The first shift ( s-shifting ) property

Commonly referred to as the exponential modulation theorem. If f(t) is a function that has a
Laplace transform F(s) with Re(s) > σ, then

ℒ{ e^{at} f(t) } = ∫₀^∞ e^{−st} e^{at} f(t) dt = ∫₀^∞ e^{−(s−a)t} f(t) dt = F(s − a)



Laplace transform of functions multiplied and divided by t

ℒ{ t f(t) } = − d/ds [ F(s) ]

ℒ{ f(t)/t } = ∫_s^∞ F(u) du ,  providing the limit lim_{t→0} f(t)/t exists.

The frequency differentiation and integration property

ℒ{ t^n f(t) } = (−1)^n d^n/ds^n F(s)

ℒ{ f(t)/t } = ∫_s^∞ F(u) du ,  providing the limit lim_{t→0} f(t)/t exists.

The second shift ( t-shifting ) property

If f(t) is a function that has a Laplace transform F(s) with Re(s) > σ, then the shifted
function is obtained by replacing t by (t − a):

f(t − a) u(t − a) = { 0 for t < a ;  f(t − a) for t ≥ a }

Then
ℒ{ f(t − a) u(t − a) } = ∫₀^∞ e^{−st} f(t − a) u(t − a) dt = e^{−as} F(s)  where F(s) = ℒ{ f(t) }

Laplace Transform of the Derivative of Any Order

Let f, f', … , f^(n−1) be continuous for all t ≥ 0 and have Laplace transforms.
Furthermore, let f^(n) be piecewise continuous on every finite interval t ≥ 0. Then the
transform of f^(n) satisfies

ℒ{ f^(n)(t) } = s^n ℒ(f) − s^{n−1} f(0) − s^{n−2} f'(0) − ⋯ − f^{(n−1)}(0)

Such as
ℒ{ f'(t) } = s ℒ(f) − f(0)   and   ℒ{ f''(t) } = s² ℒ(f) − s f(0) − f'(0)

Laplace Transform of Integral

Let F(s) denote the transform of a function f(t) which is piecewise continuous for
t ≥ 0 and satisfies a growth restriction. Then, for s > 0 and t > 0,

ℒ{ ∫₀ᵗ f(τ) dτ } = (1/s) F(s) ,   ∫₀ᵗ f(τ) dτ = ℒ⁻¹{ (1/s) F(s) }

The scaling property

Let ℒ{ f(t) } = F(s). Then if k > 0,

ℒ{ f(kt) } = (1/k) F(s/k)

Limiting Theorems
The initial- and final-value theorems are two useful theorems that enable us to predict system
behaviour as t → 0 and t → ∞ without actually inverting Laplace transforms.
Initial value theorem
If f(t) and f'(t) are both Laplace transformable and lim_{s→∞} sF(s) exists, then

lim_{t→0⁺} f(t) = f(0⁺) = lim_{s→∞} s F(s)

It is important to recognize that the initial-value theorem does not give the initial value f(0−)
used when determining the Laplace transform, but rather gives the value of f(t) as t → 0⁺.
Final value theorem
If f(t) and f'(t) are both Laplace transformable and lim_{s→0} sF(s) exists, then

lim_{t→∞} f(t) = lim_{s→0} s F(s)

The final-value theorem provides a useful vehicle for determining a system’s steady state gain
(SSG) and the steady-state errors, or offsets, in feedback control systems, both of which are
important features in control system design. The SSG of a stable system is the system’s steady-
state response, that is the response as t → ∞, to a unit step input.
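Both theorems can be sanity-checked numerically (my own sketch, not from the notes) with f(t) = 1 − e^{−t}, whose transform is F(s) = 1/(s(s+1)): sF(s) → 0 = f(0⁺) as s → ∞, and sF(s) → 1 = f(∞) as s → 0:

```python
# Initial/final value theorem check for f(t) = 1 - e^(-t),
# F(s) = 1/(s(s+1)), so s*F(s) = 1/(s+1).
sF = lambda s: 1.0 / (s + 1.0)

initial = sF(1e9)    # s -> infinity : ~0 = f(0+)
final = sF(1e-9)     # s -> 0       : ~1 = f(infinity)
print(initial, final)
```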

Convolution Theorem

Convolution has to do with the multiplication of transforms. Addition of transforms poses no problem, by the linearity principle. Multiplication of transforms occurs frequently in connection with ODEs, integral equations, and elsewhere: we usually know ℒ(f) and ℒ(g) and would like to know the function whose transform is the product ℒ(f)ℒ(g). Note that, in general,

ℒ(fg) ≠ ℒ(f)ℒ(g)



If two functions f and g satisfy the assumptions of the existence theorem (piecewise continuity and a growth restriction), so that their transforms F and G exist, then the product H = FG is the transform of h given by the convolution

(f ∗ g)(t) = ∫₀^t f(τ) g(t − τ) dτ

ℒ{ ∫₀^t f(τ) g(t − τ) dτ } = ℒ{ (f ∗ g)(t) } = F(s) G(s)

or

ℒ⁻¹{ F(s) G(s) } = (f ∗ g)(t)
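The convolution theorem can be verified on a concrete pair. The sketch below assumes sympy; the pair f(t) = t, g(t) = e^t is my own choice, for which (f ∗ g)(t) = e^t − t − 1.

```python
# Verify L{(f*g)(t)} = F(s) G(s) for f(t) = t, g(t) = e^t.
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
f = t
g = sp.exp(t)

# Convolution integral (f*g)(t) = int_0^t f(tau) g(t - tau) dtau
h = sp.integrate(f.subs(t, tau) * g.subs(t, t - tau), (tau, 0, t))

H = sp.laplace_transform(h, t, s, noconds=True)
FG = (sp.laplace_transform(f, t, s, noconds=True) *
      sp.laplace_transform(g, t, s, noconds=True))     # (1/s^2)(1/(s-1))

assert sp.simplify(H - FG) == 0
```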

4.2.3 - Evaluation of inverse transforms (partial fraction method)


The most obvious way of finding the inverse transform is to make use of a table of
transforms. But mostly it is first necessary to carry out some algebraic manipulation
on F(s). The method of partial fractions transforms a function of the form a(s)/b(s),
where both a(s) and b(s) are polynomials in s, into the sum of other fractions such
that the denominator of each new fraction is either a first-degree or a quadratic
polynomial raised to some power. The method requires only that the degree of a(s)
be less than the degree of b(s) (if this is not the case, first perform long division, and
consider the remainder term) and b(s) be factored into the product of distinct linear
and quadratic polynomials raised to various powers. The method is carried out as
follows:
1. To each factor of b(s) of the form (s − a)^m, assign a sum of m fractions of the form

A₁/(s − a) + A₂/(s − a)² + ⋯ + A_m/(s − a)^m

2. To each factor of b(s) of the form (s² + bs + c)^p, assign a sum of p fractions of the form

(B₁s + C₁)/(s² + bs + c) + (B₂s + C₂)/(s² + bs + c)² + ⋯ + (B_p s + C_p)/(s² + bs + c)^p
3. Set the original fraction a(s)/b(s) equal to the sum of all these partial fractions.
Clear the resulting equation of fractions and arrange the terms in decreasing
powers of s.
4. Equate the coefficients of corresponding powers of s and solve the resulting
equations for the undetermined coefficients.

Example 4.2 :
Find the Laplace inverse of

Y(s) = (2s − 1)/( (s² + 4s + 3)(s − 1) ) = (2s − 1)/( (s + 1)(s + 3)(s − 1) )

To evaluate the inverse LT, use partial fractions:

(2s − 1)/( (s + 1)(s + 3)(s − 1) ) = A/(s + 1) + B/(s + 3) + C/(s − 1)

2s − 1 = A(s − 1)(s + 3) + B(s + 1)(s − 1) + C(s + 1)(s + 3)

Put s = 1 then : 1 = 8C then C = 1/8
Put s = −1 then : −3 = −4A then A = 3/4
Put s = −3 then : −7 = 8B then B = −7/8

Y(s) = 3/( 4(s + 1) ) − 7/( 8(s + 3) ) + 1/( 8(s − 1) )

Taking the inverse Laplace,

y(t) = (3/4) ℒ⁻¹{ 1/(s + 1) } − (7/8) ℒ⁻¹{ 1/(s + 3) } + (1/8) ℒ⁻¹{ 1/(s − 1) }

Using Laplace tables,

y(t) = (3/4)e^(−t) − (7/8)e^(−3t) + (1/8)e^t
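The decomposition and the inverse transform above can be checked mechanically. The sketch below assumes sympy is available.

```python
# Check the partial fractions and inverse transform of Example 4.2.
import sympy as sp

t, s = sp.symbols('t s')
Y = (2*s - 1) / ((s + 1)*(s + 3)*(s - 1))

# Coefficients found above: A = 3/4, B = -7/8, C = 1/8
target = (sp.Rational(3, 4)/(s + 1) - sp.Rational(7, 8)/(s + 3)
          + sp.Rational(1, 8)/(s - 1))
assert sp.simplify(sp.apart(Y, s) - target) == 0

# Inverse transform (sympy attaches Heaviside(t) factors to causal terms)
y = sp.inverse_laplace_transform(Y, s, t)
y_expected = (sp.Rational(3, 4)*sp.exp(-t) - sp.Rational(7, 8)*sp.exp(-3*t)
              + sp.Rational(1, 8)*sp.exp(t))
assert sp.simplify(y.subs(sp.Heaviside(t), 1) - y_expected) == 0
```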

4.2.4 - Discontinuous and Periodic Functions

The Heaviside step function


The unit step function or Heaviside function H(t-a) is 0 for t < a, has a jump of size 1
at t = a (where we can leave it undefined), and is 1 for t > a, in a formula:



H(t) = H(t − a) = { 0, t < a ; 1, t > a }   (a ≥ 0)

Then Laplace transform is

ℒ{ H(t − a) } = ∫₀^∞ e^(−st) H(t − a) dt = ∫ₐ^∞ e^(−st) dt = e^(−as)/s

If a = 0 (the step at the origin), then

ℒ{ H(t) } = 1/s

since H(t) = 1 for t > 0.
The unit step function can be used in different cases as illustrated in the cases below :
 The pulse (window) function H(t − a) − H(t − b) with a < b. Pulses are used to turn a
signal off until time t = a and then to turn it on until time t = b, after which it is
switched off again.
f(t)[ H(t − a) − H(t − b) ] = { 0, t < a ; f(t), a < t < b ; 0, t > b }

Where : Π_(a,b)(t) = H(t − a) − H(t − b)



 The step function has a useful property: multiplying an ordinary function f(t) by the step function H(t) changes it into a causal function; e.g. if f(t) = sin t then sin t · H(t) is causal (defined only for positive t), as shown in Fig A.
 H(t − a) can be used to turn a signal (function) off until time t = a and then to turn it on:

f(t)H(t − a) = { 0, t < a ; f(t), t > a }

ℒ{ f(t)H(t − a) } = e^(−as) ℒ{ f(t + a) }
 The unit step function can be used to translate the function along the t-axis through a time a, giving f(t − a)H(t − a), as shown in Fig C.

(A) f(t) = 5 sin t (B) f(t) H(t-2) (C) f(t-2) H(t-2)


 Figure below shows the effect of many unit step functions, three of them in (A) and infinitely many in (B) when continued periodically to the right; this is the effect of a rectifier that clips off the negative half-waves of a sinusoidal voltage.

The second shift (time shifting or t-shifting) property


If f(t) is a function that has a Laplace transform F(s), then the shifted function, obtained by replacing t by (t − a) with a > 0, is

f(t − a)H(t − a) = { 0, t < a ; f(t − a), t ≥ a }

Then

ℒ{ f(t − a)H(t − a) } = ∫₀^∞ e^(−st) f(t − a)H(t − a) dt = e^(−as) F(s), where F(s) = ℒ{ f(t) }

ℒ⁻¹{ e^(−as) F(s) } = f(t − a)H(t − a)
Example 4.3 :
Express the function in terms of the Heaviside step function:

f(t) = { t³, 0 ≤ t < 2 ; 5, 2 ≤ t < 20 ; e^(−t), 20 ≤ t }

The function f(t) in 'English':
– All functions start off.
– At t = 0 we turn on t³ :  t³ H(t)
– At t = 2 we turn off t³ and turn on 5 :  − t³ H(t − 2) + 5 H(t − 2)
– At t = 20 we turn off 5 and turn on e^(−t) :  − 5 H(t − 20) + e^(−t) H(t − 20)
– (We never turn off e^(−t))
Then the function expressed with Heaviside functions is

f(t) = t³ H(t) − t³ H(t − 2) + 5 H(t − 2) − 5 H(t − 20) + e^(−t) H(t − 20)

Note : H(t) = 1 for t ≥ 0.
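The Heaviside expression can be sanity-checked by sampling one point inside each interval. The sketch below assumes sympy.

```python
# Check the step-function form of Example 4.3 at one point per interval.
import sympy as sp

t = sp.symbols('t')
H = sp.Heaviside
f = (t**3*H(t) - t**3*H(t - 2) + 5*H(t - 2)
     - 5*H(t - 20) + sp.exp(-t)*H(t - 20))

assert f.subs(t, 1) == 1              # t^3 branch at t = 1
assert f.subs(t, 10) == 5             # constant branch on [2, 20)
assert f.subs(t, 25) == sp.exp(-25)   # e^{-t} branch for t >= 20
```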
The impulse delta ( Dirac) function
The delta function can be considered to be the limit of a rectangular “pulse” of height h and
width 1/h in the limit as h→∞. Thus the area of the graph representing the pulse remains
constant at 1 as h→∞, while its height increases to infinity and its width decreases to zero. The
graphical representation of such a pulse f (t) = δ(t − a) located at t = a, before proceeding to
the limit, is shown in Fig.



We adopt the following definition of the delta function in terms of the unit step function.

The delta function located at t = a, denoted δ(t − a), is defined as the limit

δ(t − a) = lim_(h→0) (1/h)[ H(t − a) − H(t − a − h) ]

Then it satisfies, formally,

δ(t − a) = { ∞, t = a ; 0, otherwise }
The operational property of the delta function, usually called its filtering or sifting property, can be given as follows: let f(t) be defined and integrable over all intervals contained within 0 ≤ t < ∞, and let it be continuous in a neighborhood of a. Then for a ≥ 0

∫₀^∞ f(t) δ(t − a) dt = f(a)

A purely formal derivation of the Laplace transform of the delta function proceeds as follows.
By definition,

ℒ{ δ(t − a) } = ∫₀^∞ e^(−st) δ(t − a) dt

An application of the filtering property reduces this to


ℒ{ δ(t − a) } = e^(−as)
And if a = 0, then
ℒ{ δ(t) } = 1
It can also be proved by taking the Laplace transform of the definition:

ℒ{ δ(t − a) } = lim_(h→0) (1/h)[ ℒ{ H(t − a) } − ℒ{ H(t − a − h) } ]

ℒ{ δ(t − a) } = lim_(h→0) (1/h)[ e^(−as)/s − e^(−(a+h)s)/s ]

ℒ{ δ(t − a) } = e^(−as) lim_(h→0) (1 − e^(−hs))/(hs) = e^(−as)

Hence the transform of f(t) = δ(t − a) by this limit is

ℒ{ δ(t − a) } = e^(−as)

Impulse Response of a Linear Time-Invariant System h(t)

The impulse response h(t) is defined as the response of the system to a unit impulse δ(t) applied at time t = 0 when all the initial conditions are zero.

Periodic function
In many engineering applications one frequently encounters periodic functions f(t) = f(t + T) that exhibit discontinuous behaviour. Examples of typical periodic functions of practical importance are shown in Figure below. Such periodic functions may be represented as infinite series of terms involving step functions; once expressed in such a form their Laplace transforms can be evaluated. Alternatively, for a function of period T,

ℒ{ f(t) } = 1/(1 − e^(−sT)) ∫₀^T e^(−st) f(t) dt

Example 4.4 :
Evaluate the Laplace transform of the periodic square wave with period T = 2a, so that f(t + 2a) = f(t):

f(t) = { K, 0 ≤ t ≤ a ; −K, a ≤ t ≤ 2a }

The LT is

ℒ{ f(t) } = 1/(1 − e^(−2as)) [ ∫₀^a K e^(−st) dt − ∫ₐ^(2a) K e^(−st) dt ]

ℒ{ f(t) } = 1/(1 − e^(−2as)) [ K(1 − e^(−as))/s − K(e^(−as) − e^(−2as))/s ]

ℒ{ f(t) } = K(1 − 2e^(−as) + e^(−2as)) / ( s(1 − e^(−2as)) )

F(s) = K(1 − e^(−as))² / ( s(1 − e^(−as))(1 + e^(−as)) )

since 1 − e^(−2as) = (1 − e^(−as))(1 + e^(−as)), so

F(s) = K(1 − e^(−as)) / ( s(1 + e^(−as)) ) = (K/s) tanh(as/2)
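The closed form above can be checked numerically against the period-integral formula. The sketch below assumes sympy; the sample values of K, a and s are my own choices.

```python
# Check (1/(1 - e^{-2as})) * (transform of one period) == (K/s) tanh(as/2).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, K = sp.symbols('a K', positive=True)

one_period = (sp.integrate(K*sp.exp(-s*t), (t, 0, a))
              - sp.integrate(K*sp.exp(-s*t), (t, a, 2*a)))
F = one_period / (1 - sp.exp(-2*a*s))

# Numerical comparison at sample values a = 1, K = 3, s = 7/10
val = (F - (K/s)*sp.tanh(a*s/2)).subs({a: 1, K: 3, s: sp.Rational(7, 10)})
assert abs(sp.N(val)) < 1e-12
```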

4.3 - Laplace Method Algorithm for ODE Solution


Our goal is to show how Laplace transforms can be used to solve initial value
problems for linear differential equations. Recall that we have already studied ways of
solving such initial value problems, but the LT method has the following advantages :
1. Solving a nonhomogeneous ODE does not require first solving the
homogeneous ODE.
2. Initial values are automatically taken care of.



3. Complicated inputs x(t) (right sides of linear ODEs) can be handled very
efficiently, as we show in the next sections.
To solve an initial value problem:
a) Take the Laplace transform of both sides of the equation.
b) Use the properties of the Laplace transform and the initial conditions to obtain
an equation for the Laplace transform of the solution.
c) Determine the inverse Laplace transform of the solution by using a suitable
method (such as partial fractions) in combination with the table.
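The three steps (a)-(c) can be carried out symbolically. The sketch below assumes sympy; the demonstration IVP y′ + 2y = 1, y(0) = 0 is my own choice, not one of the worked examples.

```python
# Steps (a)-(c) applied to y' + 2y = 1 with y(0) = 0.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# (a) Transform both sides: L{y'} = s Y - y(0), L{1} = 1/s
eq = sp.Eq(s*Y - 0 + 2*Y, 1/s)

# (b) Solve the algebraic (subsidiary) equation for Y(s)
Ys = sp.solve(eq, Y)[0]          # 1/(s*(s + 2))

# (c) Invert (sympy uses partial fractions / tables internally)
y = sp.inverse_laplace_transform(Ys, s, t)

assert sp.simplify(y.subs(sp.Heaviside(t), 1) - (1 - sp.exp(-2*t))/2) == 0
```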

Laplace Transform of the Derivative of Any Order

Let f, f′, … , f⁽ⁿ⁻¹⁾ be continuous for all t ≥ 0 and have Laplace transforms. Furthermore, let f⁽ⁿ⁾ be piecewise continuous on every finite interval for t ≥ 0. Then the transform of f⁽ⁿ⁾ satisfies

ℒ{ f⁽ⁿ⁾ } = sⁿ ℒ(f) − sⁿ⁻¹ f(0) − sⁿ⁻² f′(0) − ⋯ − s f⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0)

Such as

ℒ{ f′(t) } = s ℒ(f) − f(0)

ℒ{ f″(t) } = s² ℒ(f) − s f(0) − f′(0)

Consider the general second-order linear differential equation

y″ + a y′ + b y = x(t)

subjected to the initial conditions

y(0) = k₀ , y′(0) = k₁

where a and b are constants. Here x(t) is the given input (driving force) applied to the mechanical or electrical system and y(t) is the output (response to the input) to be obtained. In Laplace's method we do three steps:
Step 1 : Setting up the subsidiary equation. This is an algebraic equation for the Laplace transform of the DE:

ℒ{y″} + a ℒ{y′} + b ℒ{y} = ℒ{x(t)}

{ s²Y(s) − s y(0) − y′(0) } + a{ sY(s) − y(0) } + bY(s) = X(s)

Y(s){ s² + as + b } = k₀(s + a) + k₁ + X(s)

Step 2 : Solution of the subsidiary equation by algebra. We divide by (s² + as + b) and use the so-called transfer function

Q(s) = 1/(s² + as + b)

Y(s) = [ k₀(s + a) + k₁ ] Q(s) + X(s) Q(s)

With zero initial conditions, Y(s) = X(s) Q(s).
Note that Q depends neither on x(t) nor on the initial conditions (but only on a and b).
Step 3 : Inversion of Y to obtain y. We reduce the subsidiary equation (usually by partial fractions as in calculus) to a sum of terms whose inverses can be found from the tables.

Example 4.5 :
Find the complete solution of the initial value problem

y″ − y = t , y(0) = 1 , y′(0) = 1

Step 1 : Taking the LT of the DE,

ℒ{y″} = s²Y(s) − s y(0) − y′(0) = s²Y(s) − s − 1

ℒ{y} = Y(s)

ℒ{t} = 1/s²

Y(s){ s² − 1 } − { s + 1 } = 1/s²

Step 2 : The transfer function

Q(s) = 1/(s² − 1)

Y(s) = (s + 1)Q(s) + X(s)Q(s)

Y(s) = (s + 1)/(s² − 1) + 1/( s²(s² − 1) ) = 1/(s − 1) + 1/( s²(s² − 1) )

Step 3 : Taking the Laplace inverse.
By partial fractions (in powers of s²),

1/( s²(s² − 1) ) = A/s² + B/(s² − 1)

1 = A(s² − 1) + B s²

s² = 0 → 1 = A(−1) → A = −1
s² = 1 → 1 = B(1) → B = 1

Y(s) = 1/(s − 1) − 1/s² + 1/(s² − 1)

y(t) = ℒ⁻¹{ 1/(s − 1) } + ℒ⁻¹{ 1/(s² − 1) − 1/s² } = e^t + sinh t − t
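The result of Example 4.5 can be checked by substituting back into the differential equation. The sketch below assumes sympy.

```python
# Verify y(t) = e^t + sinh(t) - t solves y'' - y = t, y(0) = 1, y'(0) = 1.
import sympy as sp

t = sp.symbols('t')
y = sp.exp(t) + sp.sinh(t) - t

assert sp.simplify(sp.diff(y, t, 2) - y - t) == 0    # the ODE
assert y.subs(t, 0) == 1                             # y(0) = 1
assert sp.diff(y, t).subs(t, 0) == 1                 # y'(0) = 1
```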

Example 4.6 :
Solve the initial value problem : y″ + 4y′ + 3y = e^t with y(0) = 0 and y′(0) = 2.
Taking the LT of the DE,

ℒ{y″} = s²Y(s) − s y(0) − y′(0) = s²Y(s) − 2

ℒ{y′} = sY(s) − y(0) = sY(s)

ℒ{y} = Y(s)

ℒ{e^t} = 1/(s − 1)

s²Y(s) − 2 + 4sY(s) + 3Y(s) = 1/(s − 1)

[ s² + 4s + 3 ] Y(s) = 1/(s − 1) + 2 = (2s − 1)/(s − 1)

Y(s) = (2s − 1)/( (s² + 4s + 3)(s − 1) ) = (2s − 1)/( (s + 1)(s + 3)(s − 1) )

To evaluate the inverse LT, use partial fractions:

(2s − 1)/( (s + 1)(s + 3)(s − 1) ) = A/(s + 1) + B/(s + 3) + C/(s − 1)

2s − 1 = A(s − 1)(s + 3) + B(s + 1)(s − 1) + C(s + 1)(s + 3)

Put s = 1 then : 1 = 8C then C = 1/8
Put s = −1 then : −3 = −4A then A = 3/4
Put s = −3 then : −7 = 8B then B = −7/8

Y(s) = 3/( 4(s + 1) ) − 7/( 8(s + 3) ) + 1/( 8(s − 1) )

Taking the inverse Laplace,

y(t) = (3/4) ℒ⁻¹{ 1/(s + 1) } − (7/8) ℒ⁻¹{ 1/(s + 3) } + (1/8) ℒ⁻¹{ 1/(s − 1) }

Using Laplace tables,

y(t) = (3/4)e^(−t) − (7/8)e^(−3t) + (1/8)e^t
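This result can be cross-checked against sympy's ODE solver. The sketch below assumes sympy and the initial conditions y(0) = 0, y′(0) = 2 reconstructed above.

```python
# Cross-check y'' + 4y' + 3y = e^t, y(0) = 0, y'(0) = 2 with dsolve.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(t).diff(t, 2) + 4*y(t).diff(t) + 3*y(t), sp.exp(t)),
                y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 2})

expected = (sp.Rational(3, 4)*sp.exp(-t) - sp.Rational(7, 8)*sp.exp(-3*t)
            + sp.Rational(1, 8)*sp.exp(t))
assert sp.simplify(sol.rhs - expected) == 0
```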

Example 4.7 :
Find the complete solution of the initial value problem

d²y/dt² + 4y = f(t) = { 0, 0 ≤ t ≤ 3 ; t, t > 3 } ;  y(0) = y′(0) = 0

The function f(t) can be expressed in terms of the Heaviside function as

f(t) = t H(t − 3) = (t − 3)H(t − 3) + 3H(t − 3)

y″ + 4y = (t − 3)H(t − 3) + 3H(t − 3)

Taking the LT of the DE,

{ s²Y(s) − s y(0) − y′(0) } + 4Y(s) = e^(−3s)/s² + 3e^(−3s)/s

Y(s){ s² + 4 } = e^(−3s)/s² + 3e^(−3s)/s

Y(s) = e^(−3s)/( s²(s² + 4) ) + 3e^(−3s)/( s(s² + 4) )

Y(s) = e^(−3s) (3s + 1)/( s²(s² + 4) )

Using partial fractions,

(3s + 1)/( s²(s² + 4) ) = A/s + B/s² + (Cs + D)/(s² + 4)

A = 3/4 , B = 1/4 , C = −3/4 , D = −1/4

(3s + 1)/( s²(s² + 4) ) = (3/4)(1/s) + (1/4)(1/s²) − (3/4) s/(s² + 4) − (1/8) · 2/(s² + 4)

ℒ⁻¹{ (3s + 1)/( s²(s² + 4) ) } = 3/4 + (1/4)t − (3/4) cos 2t − (1/8) sin 2t

Therefore, using the second shift theorem,

ℒ⁻¹{ e^(−as) F(s) } = f(t − a)H(t − a)

Therefore

y(t) = [ 3/4 + (1/4)(t − 3) − (3/4) cos 2(t − 3) − (1/8) sin 2(t − 3) ] H(t − 3)

or equivalently

y(t) = { 0, 0 ≤ t ≤ 3 ; t/4 − (3/4) cos 2(t − 3) − (1/8) sin 2(t − 3), t ≥ 3 }

Example 4.8 :
Solve

y″ + 3y′ + 2y = 40( t² + t + 1 ) H(t − 2)

The initial conditions will be : y′(0) = 5 and y(0) = 5.
Taking the Laplace transform:

ℒ{y″} = s²Y(s) − s y(0) − y′(0) = s²Y(s) − 5s − 5

ℒ{y′} = sY(s) − y(0) = sY(s) − 5

ℒ{y} = Y(s)

ℒ{ ( t² + t + 1 ) H(t − 2) } = ???

By using the second shift rule,

ℒ{ f(t − a)H(t − a) } = e^(−as) F(s)

the function f(t) must be put in the form f(t − a):

t² + t + 1 → [ (t − 2)² + 4t − 4 + t + 1 ]
t² + t + 1 → [ (t − 2)² + 5t − 3 ]
t² + t + 1 → [ (t − 2)² + 5(t − 2) + 10 − 3 ] = [ (t − 2)² + 5(t − 2) + 7 ]

ℒ{ [ (t − 2)² + 5(t − 2) + 7 ] H(t − 2) } = e^(−2s) ( 2/s³ + 5/s² + 7/s )

Substituting back into the differential equation,

[ s²Y(s) − 5s − 5 ] + 3[ sY(s) − 5 ] + 2Y(s) = 40 e^(−2s) ( 2/s³ + 5/s² + 7/s )

Y(s)[ s² + 3s + 2 ] − (5s + 20) = 40 e^(−2s) (7s² + 5s + 2)/s³

Y(s) = 40 e^(−2s) (7s² + 5s + 2)/( s³(s² + 3s + 2) ) + (5s + 20)/(s² + 3s + 2)

Taking the Laplace inverse to evaluate y(t), start with the second term.

Using partial fractions,

Y₁(s) = (5s + 20)/(s² + 3s + 2) = (5s + 20)/( (s + 1)(s + 2) ) = A/(s + 1) + B/(s + 2)

5s + 20 = A(s + 2) + B(s + 1)

Put s = −1 : 15 = A → A = 15
Put s = −2 : 10 = −B → B = −10

Then

(5s + 20)/(s² + 3s + 2) = 15/(s + 1) − 10/(s + 2)

y₁(t) = ℒ⁻¹{ Y₁(s) } = 15e^(−t) − 10e^(−2t)

For the first term,

Y₂(s) = 40 e^(−2s) (7s² + 5s + 2)/( s³(s² + 3s + 2) )

(7s² + 5s + 2)/( s³(s + 1)(s + 2) ) = A/s + B/s² + C/s³ + D/(s + 1) + E/(s + 2)

7s² + 5s + 2 = A s²(s + 1)(s + 2) + B s(s + 1)(s + 2) + C(s + 1)(s + 2) + D s³(s + 2) + E s³(s + 1)

Put s = 0 : 2 = C(1)(2) → C = 1
Put s = −1 : 4 = −D → D = −4
Put s = −2 : 20 = 8E → E = 5/2

Comparing the coefficients of s⁴ and s³:

s⁴ → 0 = A + D + E = A − 4 + 5/2 → A = 3/2
s³ → 0 = 3A + B + 2D + E = 9/2 + B − 8 + 5/2 → B = 1

To check the results:

s² → 7 = 2A + 3B + C = 3 + 3 + 1
s → 5 = 2B + 3C = 2 + 3
s⁰ → 2 = 2C = 2

(7s² + 5s + 2)/( s³(s + 1)(s + 2) ) = (3/2)/s + 1/s² + 1/s³ − 4/(s + 1) + (5/2)/(s + 2)

y₂(t) = ℒ⁻¹{ Y₂(s) } = 40 [ 3/2 + (t − 2) + (t − 2)²/2 − 4e^(−(t−2)) + (5/2)e^(−2(t−2)) ] H(t − 2)

The total solution, by the superposition principle, will be

y(t) = y₁(t) + y₂(t) = 15e^(−t) − 10e^(−2t) + 40 [ 3/2 + (t − 2) + (t − 2)²/2 − 4e^(−(t−2)) + (5/2)e^(−2(t−2)) ] H(t − 2)

This result can be simplified further.

Example 4.9 :
In the RC circuit shown here, there is no charge on the capacitor and no current flowing at the time t = 0. The input voltage Ein is a constant E₀ during the time t₁ < t < t₂ and is zero at all other times. Find the output voltage Eout for this circuit.

Mathematically the input signal can be expressed as :

Ein = { E₀, t₁ < t < t₂ ; 0, otherwise }

The input signal is a pulse function which can be expressed in terms of the Heaviside function as

Ein = E₀[ H(t − t₁) − H(t − t₂) ]

The initial conditions are :

i(0) = 0 and q(0) = 0, where i = dq/dt

The differential equation of the system will be

R dq/dt + q/C = Ein

R dq/dt + q/C = E₀[ H(t − t₁) − H(t − t₂) ]

Taking the LT of the DE,

R[ sQ − 0 ] + Q/C = (E₀/s)[ e^(−t₁s) − e^(−t₂s) ]

Q(s) = (E₀/R) [ e^(−t₁s) − e^(−t₂s) ] / ( s( s + 1/(RC) ) )

Using the partial fractions,

1/( s( s + 1/(RC) ) ) = RC [ 1/s − 1/( s + 1/(RC) ) ]

Then

ℒ⁻¹{ (E₀/R) / ( s( s + 1/(RC) ) ) } = E₀C [ 1 − e^(−t/(RC)) ]

Therefore, using the second shift theorem,

ℒ⁻¹{ (E₀/R) e^(−t₁s) / ( s( s + 1/(RC) ) ) } = E₀C [ 1 − e^(−(t−t₁)/(RC)) ] H(t − t₁)

ℒ⁻¹{ (E₀/R) e^(−t₂s) / ( s( s + 1/(RC) ) ) } = E₀C [ 1 − e^(−(t−t₂)/(RC)) ] H(t − t₂)

Then

q(t) = E₀C [ 1 − e^(−(t−t₁)/(RC)) ] H(t − t₁) − E₀C [ 1 − e^(−(t−t₂)/(RC)) ] H(t − t₂)

and

Eout = q(t)/C = E₀ [ 1 − e^(−(t−t₁)/(RC)) ] H(t − t₁) − E₀ [ 1 − e^(−(t−t₂)/(RC)) ] H(t − t₂)

Suppose the circuit has the numerical data:

R = 250 000 Ω , C = 1 × 10⁻⁶ F , E₀ = 10 V , t₁ = 2 s , t₂ = 3 s  (so 1/(RC) = 4 s⁻¹)

The solution will be :

q(t) = 10⁻⁵ { [ 1 − e^(−4(t−2)) ] H(t − 2) − [ 1 − e^(−4(t−3)) ] H(t − 3) } C
Example 4.10 :
In an LRC circuit with L = 1 H, R = 8 Ω and C = 1/15 F, the capacitor initially carries a charge of 1 C and no currents are flowing. There is no external voltage source. At time t = 2 s, a power surge instantaneously applies an impulse of 4δ(t − 2) to the system. Describe the charge of the capacitor over time.

L d²q/dt² + R dq/dt + q/C = 4δ(t − 2)

q″ + 8q′ + 15q = 4δ(t − 2)

The initial conditions will be : i(0) = q′(0) = 0 and q(0) = 1.
Taking the LT,

s²Q − s + 8sQ − 8 + 15Q = 4e^(−2s)

Q( s² + 8s + 15 ) − (s + 8) = 4e^(−2s)

Q = (s + 8)/( s² + 8s + 15 ) + 4e^(−2s)/( s² + 8s + 15 )

Q = (s + 8)/( (s + 3)(s + 5) ) + 4e^(−2s)/( (s + 3)(s + 5) )

Using partial fraction decompositions :

(s + 8)/( (s + 3)(s + 5) ) = A/(s + 3) + B/(s + 5)
s + 8 = A(s + 5) + B(s + 3)
s = −3 → 5 = A(2) → A = 5/2
s = −5 → 3 = B(−2) → B = −3/2

4/( (s + 3)(s + 5) ) = A/(s + 3) + B/(s + 5)
4 = A(s + 5) + B(s + 3)
s = −3 → 4 = A(2) → A = 2
s = −5 → 4 = B(−2) → B = −2

Q = (5/2) · 1/(s + 3) − (3/2) · 1/(s + 5) + e^(−2s) [ 2/(s + 3) − 2/(s + 5) ]

Take the inverse Laplace to evaluate q(t), the charge of the capacitor:

q(t) = (5/2)e^(−3t) − (3/2)e^(−5t) + 2[ e^(−3(t−2)) − e^(−5(t−2)) ] H(t − 2)


Example 4.11 :
Consider an LR circuit with E(t) being a unit square wave of period 2T:

E(t) = { 1, 0 ≤ t ≤ T ; 0, T ≤ t ≤ 2T }

To determine its current i(t) with i(0) = 0, we solve the following IVP:

L di/dt + R i = E(t)

Taking the Laplace transform on both sides, we get

( Ls + R ) I(s) = ℒ{ E(t) }

( Ls + R ) I(s) = 1/(1 − e^(−2Ts)) ∫₀^T e^(−st) dt

Evaluate the integral:

∫₀^T e^(−st) dt = (1 − e^(−Ts))/s

( Ls + R ) I(s) = (1 − e^(−Ts)) / ( s(1 − e^(−2Ts)) ) = 1 / ( s(1 + e^(−Ts)) )

I(s) = 1 / ( s(Ls + R)(1 + e^(−Ts)) ) = (1/L) · 1 / ( s(s + R/L)(1 + e^(−Ts)) )

Using the partial fractions,

1/( s(s + R/L) ) = A/s + B/(s + R/L)

1 = A(s + R/L) + Bs

Put s = 0 : A = L/R, and put s = −R/L : B = −L/R. Hence

(1/L) · 1/( s(s + R/L) ) = (1/R) [ 1/s − 1/(s + R/L) ]

Using the binomial theorem (a geometric series),

1/(1 + e^(−Ts)) = 1 − e^(−Ts) + e^(−2Ts) − e^(−3Ts) + ⋯

I(s) = (1/R) [ 1/s − 1/(s + R/L) ] ( 1 − e^(−Ts) + e^(−2Ts) − ⋯ )

Taking the inverse Laplace transform term by term,

ℒ⁻¹{ (1/R)[ 1/s − 1/(s + R/L) ] } = (1/R)( 1 − e^(−Rt/L) )

ℒ⁻¹{ (1/R)[ 1/s − 1/(s + R/L) ] e^(−Ts) } = (1/R)( 1 − e^(−R(t−T)/L) ) H(t − T)

ℒ⁻¹{ (1/R)[ 1/s − 1/(s + R/L) ] e^(−2Ts) } = (1/R)( 1 − e^(−R(t−2T)/L) ) H(t − 2T)

And so on:

i(t) = (1/R) [ ( 1 − e^(−Rt/L) ) − ( 1 − e^(−R(t−T)/L) ) H(t − T) + ( 1 − e^(−R(t−2T)/L) ) H(t − 2T) − ⋯ ]

which can be put into a series form as

i(t) = (1/R) Σₙ₌₀^∞ (−1)ⁿ ( 1 − e^(−R(t−nT)/L) ) H(t − nT)

Example 4.12 :
In an undamped mass–spring system, resonance occurs if the frequency of the driving force equals the natural frequency of the system. Then the model is

y″ + ω₀² y = K sin ω₀t

where ω₀² = k/m, k is the spring constant, and m is the mass of the body attached to the spring. We assume y(0) = 0 and y′(0) = 0, for simplicity. Then the subsidiary equation is

[ s² + ω₀² ] Y(s) = K ω₀/( s² + ω₀² )

Y(s) = K ω₀/( s² + ω₀² )²

Calculating the inverse Laplace transform,

ℒ⁻¹{ Y(s) } = ℒ⁻¹{ K ω₀/( s² + ω₀² )² } = K ω₀ ℒ⁻¹{ 1/( s² + ω₀² ) · 1/( s² + ω₀² ) }

y(t) = K ω₀ [ ℒ⁻¹{ 1/( s² + ω₀² ) } ∗ ℒ⁻¹{ 1/( s² + ω₀² ) } ] = K ω₀ ( sin ω₀t/ω₀ ) ∗ ( sin ω₀t/ω₀ )

y(t) = (K/ω₀) ∫₀^t sin ω₀τ sin ω₀(t − τ) dτ = ( K/(2ω₀²) ) ( sin ω₀t − ω₀ t cos ω₀t )

Example 4.13 :
Using convolution, determine the response of the system modeled by

y″ + 3y′ + 2y = g(t)

where g(t) = 1 if 1 < t < 2 and zero elsewhere, with zero initial conditions.
This is a system with an input (a driving force) that acts for some time only. Taking the Laplace transform, we get

Y(s)/G(s) = Q(s) , which gives Y(s) = Q(s) G(s)

Then the convolution theorem will be applied to evaluate the response:

y(t) = ℒ⁻¹{ Q(s) G(s) } = ℒ⁻¹{ Q(s) } ∗ ℒ⁻¹{ G(s) }

The transfer function will be

Q(s) = 1/( s² + 3s + 2 ) = 1/( (s + 1)(s + 2) ) = A/(s + 1) + B/(s + 2)

By partial fractions A = 1 and B = −1:

Q(s) = 1/(s + 1) − 1/(s + 2) and q(t) = ℒ⁻¹{ Q(s) } = e^(−t) − e^(−2t)

Using the convolution integral,

y(t) = ∫₀^t q(t − τ) g(τ) dτ = ∫₀^t [ e^(−(t−τ)) − e^(−2(t−τ)) ] g(τ) dτ

Since the forcing input signal is defined only between 1 < t < 2, the integration effectively runs over 1 < τ < min(t, 2), and the result can be written with step functions as

y(t) = [ 1/2 − e^(−(t−1)) + (1/2) e^(−2(t−1)) ] H(t − 1) − [ 1/2 − e^(−(t−2)) + (1/2) e^(−2(t−2)) ] H(t − 2)

4.4 - Convolution Method Solution of IVP Differential Equations

Given the initial value problem in the form

y″ + a y′ + b y = f(t)

subjected to the initial conditions

y(0) = k₀ , y′(0) = k₁

the solution of the initial value problem will be

y(t) = u(t) + v(t)

where u(t) is the solution to the homogeneous equation with the given initial conditions:

u″ + a u′ + b u = 0

and v(t) = (h ∗ f)(t) is the zero-initial-condition response to the forcing, where h(t) has the Laplace transform

H(s) = 1/( s² + a s + b )
Example 4.14 :
Solve y″ + y = tan t
with y(0) = 1 and y′(0) = 0. Taking the LT of the homogeneous part with these initial conditions,

s²U(s) − s + U(s) = 0

U(s) = s/( s² + 1 )

u(t) = cos t

H(s) = 1/( s² + 1 )

h(t) = sin t

Then the solution will be

y(t) = u(t) + (h ∗ f)(t)

y(t) = cos t + ∫₀^t tan τ sin(t − τ) dτ


4.5 - Laplace Method Solution of Integral and Integro-Differential Equations
The first case involves a special type of equation called an integral equation, and the second an integro-differential equation. An equation of the form

y(t) = f(t) + λ ∫₀^t K(t, τ) y(τ) dτ

is called a Volterra integral equation, where λ is a parameter and K(t, τ) is called the kernel of the integral equation. Equations of this type are often associated with the solution of initial value problems. The Laplace transform is well suited to the solution of such integral equations when the kernel K(t, τ) has a special form that depends on t and τ only through the difference t − τ, because then K(t, τ) = K(t − τ) and the integral becomes a convolution integral. Equations involving both the integral of an unknown function and its derivative are called integro-differential equations. These equations occur in many applications of mathematics; an example of an integro-differential equation arises when considering the R–L–C circuit.

Example 4.15 :
Solve

y″ + y = ∫₀^t y(τ) sin(t − τ) dτ

with y(0) = 1 and y′(0) = 0. Taking the LT of both sides of the equation (the right side is a convolution),

s²Y(s) − s + Y(s) = Y(s)/( s² + 1 )

( s² + 1 − 1/(s² + 1) ) Y(s) = s

Y(s) = s( s² + 1 )/( (s² + 1)² − 1 ) = s( s² + 1 )/( s²( s² + 2 ) )

Y(s) = ( s² + 1 )/( s( s² + 2 ) ) = 1/(2s) + s/( 2( s² + 2 ) )

y(t) = (1/2)( 1 + cos √2 t )



4.6 - Differential Equations with Variable Coefficients
If a differential equation has polynomial coefficients, that is, variable coefficients, these coefficients in general will be functions of t. We can use the Laplace transform property called frequency differentiation. Let f(t) have Laplace transform F(s), and assume that F(s) is differentiable. Then the s-differentiation rule is

ℒ{ t f(t) } = −F′(s) = −dF/ds

ℒ{ t² f(t) } = F″(s) = d²F/ds²

ℒ{ tⁿ f(t) } = (−1)ⁿ dⁿF/dsⁿ
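The s-differentiation rule can be confirmed on a sample function. The sketch below assumes sympy; the function f(t) = sin 2t is my own choice.

```python
# Check L{t f(t)} = -F'(s) for f(t) = sin(2t), F(s) = 2/(s^2 + 4).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(2*t)
F = sp.laplace_transform(f, t, s, noconds=True)

lhs = sp.laplace_transform(t*f, t, s, noconds=True)   # L{t sin(2t)}
assert sp.simplify(lhs + sp.diff(F, s)) == 0          # lhs == -F'(s)
```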

Example 4.16 :
Solve

y″ + 2t y′ − 4y = 1 with zero initial conditions

Taking the LT of both sides of the equation,

ℒ{y″} = s²Y(s) − s y(0) − y′(0) = s²Y(s)

ℒ{t y′} = −(d/ds)[ sY(s) − y(0) ] = −sY′(s) − Y(s)

ℒ{y} = Y(s)

ℒ{1} = 1/s

Substituting back in the DE :

s²Y(s) − 2sY′(s) − 2Y(s) − 4Y(s) = 1/s

( s² − 6 )Y(s) − 2sY′(s) = 1/s

Rearranging,

Y′(s) − ( (s² − 6)/(2s) ) Y(s) = −1/(2s²)

This is a linear first-order differential equation in s, solved by an integrating factor:

Y′ + P(s) Y = Q(s)

μ(s) = e^(∫P(s) ds) = e^(∫ −(s² − 6)/(2s) ds) = s³ e^(−s²/4)

Multiply the ODE by μ(s) and integrate:

( s³ e^(−s²/4) Y(s) )′ = −(s/2) e^(−s²/4)

s³ e^(−s²/4) Y(s) = e^(−s²/4) + C

Y(s) = 1/s³ + C e^(s²/4)/s³

In order to have Y(s) → 0 as s tends to infinity, C must be zero; then

Y(s) = 1/s³

Taking the inverse Laplace transform we get y(t) = t²/2.
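The answer can be verified directly in the time domain. The sketch below assumes sympy and the equation y″ + 2t y′ − 4y = 1 as reconstructed above.

```python
# Verify y(t) = t^2/2 solves y'' + 2t y' - 4y = 1 with zero initial conditions.
import sympy as sp

t = sp.symbols('t')
y = t**2/2

residual = sp.diff(y, t, 2) + 2*t*sp.diff(y, t) - 4*y - 1
assert sp.simplify(residual) == 0
assert y.subs(t, 0) == 0 and sp.diff(y, t).subs(t, 0) == 0
```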



Exercise (4)
Part A :
1. Evaluate the Laplace transform using its definition :
i. ℒ{ }
ii. ℒ {sin }
iii. ℒ{ }
iv. ℒ {sinh }
v. ℒ { ( − )}
2. Evaluate the Laplace transform of the following problems
i. ℒ{ }
ii. ℒ{ sin }
iii. ℒ { ( )}
where f(t) = { 0, t < 2 ; t + 1, t ≥ 2 }
iv. ℒ { − 4 + 5 + 3 sin 2 }
v. ℒ { sin 2 }
3. Write the following function using unit step function and find its transform
0 0 < < 1
( ) = 1 < < ⁄2
2
cos ≥ ⁄2

<4
( ) =
1
+ 2 sin − ≥4
12 3
4. Find the Laplace transform of the periodic function
f(t) = { 3, 0 < t < 2 ; 0, 2 < t < 4 }
f(t+4) = f(t)
5. Find the Laplace inverse of the following functions :
i. ( ) =

ii. ( ) =

iii. ( ) =

iv. ( ) =
( )( )


v. ( ) =
( )( )



6. Find the Laplace inverse of the following functions :
F(s) = 1/( s²(s − 4) )
Use both methods: partial fractions and the convolution theorem.

Part B :
1. Find the solution of the initial value problem
+ 5 + 6 = (0) = 2 (0) = 1
+ = sin 2 (0) = 2 (0) = 1
2. Compute the solution of the following differential equations:
i. y'' + 3y' + 2y = r(t), where r(t) = 1 if 0 < t < 1 and r(t) = 0, t > 1, with zero
initial conditions.
ii. y'' + y' = r(t), where r(t) = t if 0 < t < 1 and r(t) = 0, t > 1, with zero initial
conditions
iii. y'' + 9y = r(t), where r(t) = 8 sin t if 0 < t < π and r(t) = 0, t > π,
with initial conditions: y(0) = 0 and y'(0) = 4.
3. y′′ + 5y′ + 6y = x(t) where x (t) is the pulse function
x(t) = { 3, 0 ≤ t < 2 ; 0, t ≥ 2 }
and subject to the initial conditions y(0) = 0 and y′(0) = 2.
4. Find the complete solution of the initial value problem

d2y 0  0  t  3
 4 y  f t    ; y 0   y 0  0 .
dt 2
t  t  3
5. Solve :
a) y″ + 2y′ + 2y = δ(t − 3) and y″ + 3y′ + 2y = 1 + δ(t − 4)
The I Cs are : y'(0)=0 and y(0)=0
b) Determine the impulse response of the linear system whose response y(t) to an input

x(t) is determined by the differential equation : y′′ + 5y′ + 6y = 5 ( )

6. Solve the initial value problem


y″ − 2y′ − 8y = f(t) with initial conditions y′(0) = 0 and y(0) = 1
y″ − y = f(t) with initial conditions y′(0) = −4 and y(0) = 2



And f(t) is any arbitrary function.
7. y″ + 3y′ + 2y = δ(t − 1) with initial conditions y′(0) = 0 and y(0) = 0
8. Given the equation of motion of a system by
y″ + 5y′ + 4y = 3δ(t − 2) with initial conditions y′(0) = −2 and y(0) = 2
Determine an expression of the displacement y in terms of t
9. Solve the Volterra integral equation :

( ) − (1 + ) ( − ) = 1 − sinh

10. Solve the following DE


ʹʹ
+ + = 0 with zero initial condition

11. Find the solution of the initial value problem :


+ 4 = ( ) ; (0) = (0) = 0
The forcing function is the ramp loading function, which is given by

12. Determine the response of the system modeled by


+ = ( ) with y(0) =1 and the input function defined by :
0 0< ≤1
( ) =
1<
13. Using convolution formula, determine the response of the system modeled by
− = with zero initial condition
14. Find the impulse response for the following linear system described by DE
+ +( + ) = ( )

15. Use the convolution theorem to show that the solution to the initial value problem
+ = ( )
with y(0) = 0 and y'(0) = 0 is



Part ( 5 ) : Solution of A system of ODEs
______________________________________________________________________________

5.1 - Introduction
Simultaneous ordinary differential equations involve two or more equations that contain
derivatives of two or more unknown functions of a single independent variable.
A system of ordinary differential equations of the first order can be written as:

y₁′ = f₁(t, y₁, y₂, … , yₙ)
y₂′ = f₂(t, y₁, y₂, … , yₙ)
⋮
yₙ′ = fₙ(t, y₁, y₂, … , yₙ)

where each equation gives the first derivative of one of the n unknown functions y₁, … , yₙ as a mapping f₁, … , fₙ depending on the independent variable t and on the unknown functions.
Linear systems of ODEs are of practical importance in various applications. We will
apply matrices to the solution of a system of n linear differential equations in n
unknown functions:
y₁′(t) = a₁₁(t)y₁(t) + a₁₂(t)y₂(t) + ⋯ + a₁ₙ(t)yₙ(t) + g₁(t)
y₂′(t) = a₂₁(t)y₁(t) + a₂₂(t)y₂(t) + ⋯ + a₂ₙ(t)yₙ(t) + g₂(t)
⋮
yₙ′(t) = aₙ₁(t)y₁(t) + aₙ₂(t)y₂(t) + ⋯ + aₙₙ(t)yₙ(t) + gₙ(t)
The system is said to be homogeneous when all the functions gi (t) are zero, and to
be nonhomogeneous when at least one of them is nonzero. It is a linear system
because it is linear in the functions y1(t), y2(t), . . . , yn(t) and their derivatives, and it is
a variable coefficient system whenever at least one of the coefficients aij(t) is a
function of t; otherwise, it becomes a constant coefficient system. An initial value
problem for system involves seeking a solution of the system such that at t = t0 the
variables y1(t), y2(t), . . . , yn(t) satisfy the initial conditions.
y₁(t₀) = k₁ , y₂(t₀) = k₂ , … , yₙ(t₀) = kₙ
where k1, k2, . . . , kn are given constants.



The functions aij(t) are continuous and the gj(t) are piecewise continuous on some interval J. (We write J instead of I because we need I to denote the unit matrix.) Define the matrices

Y′(t) = [ y₁′(t), y₂′(t), … , yₙ′(t) ]ᵀ , A(t) = [ aij(t) ] (n × n) , Y(t) = [ y₁(t), … , yₙ(t) ]ᵀ , G(t) = [ g₁(t), … , gₙ(t) ]ᵀ

With this notation, the system of linear differential equations is

[ Y′(t) ] = [ A(t) ][ Y(t) ] + [ G(t) ]

or simply as

Y′ = A Y + G

We will refer to this as a linear system. This system is homogeneous if G(t) is the n × 1 zero matrix; otherwise the system is nonhomogeneous. We have an initial value problem for this linear system if the solution is specified at some value t = t₀. The initial condition of a linear system is then represented by the vector

Y(t₀) = K = [ k₁, k₂, … , kₙ ]ᵀ
A solution to a system of differential equations is a set of differentiable functions that satisfies each equation on some interval J.
Before we start our discussion of systems of linear differential equations, we first observe that every ordinary differential equation of order n can be written as a system of n first-order ordinary differential equations; hence we restrict our study to the solution of systems of differential equations of the first order.

5.2 - Conversion of an nth-Order ODE to a System


Any nth-order ODE of the general form can be converted to a system of n first-order ODEs. This is practically and theoretically important: practically because it permits the study and solution of single ODEs by methods for systems, and theoretically because it opens a way of including the theory of higher-order ODEs into that of first-order systems. This conversion is another reason for the importance of systems, in addition to their use as models in various basic applications. The idea of the conversion is simple and straightforward, as follows:

An nth-order ODE in standard form with constant coefficients,

y⁽ⁿ⁾ + aₙ₋₁ y⁽ⁿ⁻¹⁾ + ⋯ + a₂ y″ + a₁ y′ + a₀ y = x(t)

can be converted to a system of n first-order ODEs by setting

z₁ = y , z₂ = y′ , z₃ = y″ , … , zₙ = y⁽ⁿ⁻¹⁾

The system of first-order DEs will be

z₁′ = y′ = z₂
z₂′ = y″ = z₃
⋮
zₙ₋₁′ = y⁽ⁿ⁻¹⁾ = zₙ
zₙ′ = y⁽ⁿ⁾ = −a₀ z₁ − a₁ z₂ − ⋯ − aₙ₋₂ zₙ₋₁ − aₙ₋₁ zₙ + x(t)

or in matrix form

[ z₁′ ]     [  0    1    0   …   0      0    ] [ z₁ ]     [  0  ]
[ z₂′ ]     [  0    0    1   …   0      0    ] [ z₂ ]     [  0  ]
[  ⋮  ]  =  [  ⋮    ⋮    ⋮        ⋮      ⋮    ] [  ⋮  ]  +  [  ⋮  ]
[ zₙ₋₁′ ]   [  0    0    0   …   0      1    ] [ zₙ₋₁ ]   [  0  ]
[ zₙ′ ]     [ −a₀  −a₁  −a₂  …  −aₙ₋₂  −aₙ₋₁ ] [ zₙ ]     [ x(t) ]

Example 5.1 :
Convert the initial value problem

y″ + 3y′ + 2y = 0 with y(0) = 1 and y′(0) = 3

into a linear system of 1st-order DEs and put it in matrix form.
Let

z₁ = y , z₂ = y′

This gives the system of 1st-order ODEs :

z₁′ = z₂
z₂′ = −3z₂ − 2z₁

In matrix form

[ z₁′ ]   [  0   1 ] [ z₁ ]   [ 0 ]
[ z₂′ ] = [ −2  −3 ] [ z₂ ] + [ 0 ]

Since the vector G = 0, this is a homogeneous system, with initial conditions in the form

[ z₁(0) ]   [ 1 ]
[ z₂(0) ] = [ 3 ]
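The converted system can be integrated numerically and compared with the exact solution y = 5e^(−t) − 4e^(−2t) of this IVP. The sketch below is plain Python; the RK4 helper is my own illustration, not part of the notes.

```python
# Integrate the first-order system of Example 5.1 with classic RK4
# and compare with the exact solution y = 5 e^{-t} - 4 e^{-2t}.
import math

def rk4_system(f, z0, t0, t1, n):
    """Classic 4th-order Runge-Kutta for z' = f(t, z), z a list."""
    h = (t1 - t0) / n
    t, z = t0, list(z0)
    for _ in range(n):
        k1 = f(t, z)
        k2 = f(t + h/2, [zi + h/2*ki for zi, ki in zip(z, k1)])
        k3 = f(t + h/2, [zi + h/2*ki for zi, ki in zip(z, k2)])
        k4 = f(t + h, [zi + h*ki for zi, ki in zip(z, k3)])
        z = [zi + h/6*(a + 2*b + 2*c + d)
             for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]
        t += h
    return z

# z1 = y, z2 = y'; z1' = z2, z2' = -2 z1 - 3 z2
f = lambda t, z: [z[1], -2*z[0] - 3*z[1]]
y1, _ = rk4_system(f, [1.0, 3.0], 0.0, 1.0, 1000)

exact = 5*math.exp(-1.0) - 4*math.exp(-2.0)
assert abs(y1 - exact) < 1e-9
```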

Example 5.2 :
Consider the system of 2nd-order DEs

y″ + y − z = 0
z″ + y′ = sin t

Considering new dependent variables, let

x₁ = y , x₂ = z , x₃ = y′ , x₄ = z′

Then

y″ = −y + z
z″ = −y′ + sin t

The equivalent system after conversion will be four 1st-order ODEs:

x₁′ = x₃
x₂′ = x₄
x₃′ = −x₁ + x₂
x₄′ = −x₃ + sin t

In matrix form :

[ x₁′ ]   [  0  0  1  0 ] [ x₁ ]   [   0   ]
[ x₂′ ]   [  0  0  0  1 ] [ x₂ ]   [   0   ]
[ x₃′ ] = [ −1  1  0  0 ] [ x₃ ] + [   0   ]
[ x₄′ ]   [  0  0 −1  0 ] [ x₄ ]   [ sin t ]


5.3 - The Substitution or Elimination Method
A second-order DE can be converted to a system of two first-order DEs, and conversely a system of two 1st-order ODEs can be converted back into a second-order ODE. The methods discussed for the solution of second-order ordinary linear differential equations with constant coefficients may therefore be used for cases of two first-order differential equations which must be satisfied simultaneously. The technique will be illustrated by the following examples:

Example 5.3 :
Determine the general solutions for y and z in the case when

5y′ − 2z′ + 4y − z = e^(−t)
y′ + 8y − 3z = 5e^(−t)

First, we eliminate one of the dependent variables from the two equations; in this case, we eliminate z. From the second equation :

z = (1/3)( y′ + 8y − 5e^(−t) )

Differentiating,

z′ = (1/3)( y″ + 8y′ + 5e^(−t) )

Substituting in the first equation we get

5y′ − (2/3)( y″ + 8y′ + 5e^(−t) ) + 4y − (1/3)( y′ + 8y − 5e^(−t) ) = e^(−t)

Rearranging,

−(2/3)y″ − (2/3)y′ + (4/3)y = (8/3)e^(−t)

or

y″ + y′ − 2y = −4e^(−t)

which is a 2nd-order ODE that can be solved (solve to find y(t), then z(t) by substitution).
The general solution will be :

y(t) = 2e^(−t) + C₁e^t + C₂e^(−2t)
z(t) = 3e^(−t) + 3C₁e^t + 2C₂e^(−2t)
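The general solution can be verified by substituting it back into both original equations. The sketch below assumes sympy; C₁ and C₂ remain free symbols.

```python
# Verify the general solution of Example 5.3 satisfies both equations.
import sympy as sp

t = sp.symbols('t')
C1, C2 = sp.symbols('C1 C2')

y = 2*sp.exp(-t) + C1*sp.exp(t) + C2*sp.exp(-2*t)
z = 3*sp.exp(-t) + 3*C1*sp.exp(t) + 2*C2*sp.exp(-2*t)

eq1 = 5*sp.diff(y, t) - 2*sp.diff(z, t) + 4*y - z - sp.exp(-t)
eq2 = sp.diff(y, t) + 8*y - 3*z - 5*sp.exp(-t)
assert sp.simplify(eq1) == 0 and sp.simplify(eq2) == 0
```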



Example 5.4 :
Determine the general solutions for the system

y1′ = 2y1 − y2 + 4 − t²
y2′ = −y1 + 2y2 + 1

with the initial conditions

y1(0) = 1 , y2(0) = 0

It is a non-homogeneous system of 1st order ODEs with constant coefficients.
We can use the same procedure as in the previous example, but instead we proceed as follows.
Differentiate the first DE:

y1″ = 2y1′ − y2′ − 2t

Substituting for y2′ from the second DE:

y1″ = 2y1′ − (−y1 + 2y2 + 1) − 2t
y1″ = 2y1′ + y1 − 2y2 − 1 − 2t

Then, substituting for y2 from the first DE:

y2 = −y1′ + 2y1 + 4 − t²

So, using this result to eliminate y2 from the second order equation for y1:

y1″ = 2y1′ + y1 − 2(−y1′ + 2y1 + 4 − t²) − 1 − 2t

Rearranging

y1″ = 4y1′ − 3y1 + 2t² − 9 − 2t

Or

y1″ − 4y1′ + 3y1 = 2t² − 2t − 9

This is a second order non-homogeneous DE which can be solved by undetermined
coefficients (work out the solution); the result will be:

y1(t) = C1 e^{3t} + C2 e^{t} − 53/27 + (10/9) t + (2/3) t²

It now remains for us to find y2, and this is accomplished by substituting for y1 in the
first equation, after differentiation of the y1 solution. The result will be

y2(t) = −C1 e^{3t} + C2 e^{t} − 28/27 + (8/9) t + (1/3) t²

To solve the initial value problem, C1 and C2 must be chosen such that the given
initial conditions are satisfied:

y1(0) = 1 , y2(0) = 0

Setting t = 0 in the general solution and using these initial conditions, we find that C1
and C2 must satisfy the equations

1 = C1 + C2 − 53/27
0 = −C1 + C2 − 28/27

Solving the equations we get

C1 = 26/27
C2 = 2

Thus, the required solution of the initial value problem is

y1(t) = (26/27) e^{3t} + 2 e^{t} − 53/27 + (10/9) t + (2/3) t²
y2(t) = −(26/27) e^{3t} + 2 e^{t} − 28/27 + (8/9) t + (1/3) t²
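The initial value solution can again be verified by direct substitution; the short Python sketch below (an editorial check, not part of the notes) confirms both differential equations and both initial conditions:

```python
import math

# Solution of the IVP of Example 5.4 and its exact derivatives.
def y1(t):
    return (26/27)*math.exp(3*t) + 2*math.exp(t) - 53/27 + (10/9)*t + (2/3)*t*t

def y2(t):
    return -(26/27)*math.exp(3*t) + 2*math.exp(t) - 28/27 + (8/9)*t + (1/3)*t*t

def d1(t):  # exact y1'(t)
    return 3*(26/27)*math.exp(3*t) + 2*math.exp(t) + 10/9 + (4/3)*t

def d2(t):  # exact y2'(t)
    return -3*(26/27)*math.exp(3*t) + 2*math.exp(t) + 8/9 + (2/3)*t

# initial conditions y1(0) = 1, y2(0) = 0
assert abs(y1(0) - 1) < 1e-12 and abs(y2(0)) < 1e-12

# both differential equations hold at sample points
for t in (0.0, 0.3, 1.1):
    assert abs(d1(t) - (2*y1(t) - y2(t) + 4 - t*t)) < 1e-9
    assert abs(d2(t) - (-y1(t) + 2*y2(t) + 1)) < 1e-9
```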

5.4 - Matrix Solutions of Linear System of ODEs – (Homogeneous Case)

The representation of the initial value problem of a linear homogeneous system
of first order ODEs will be:

Y′ = A(t) Y with Y(t0) = Y0

Let y1, y2, . . . , yn be n linearly independent solutions to the homogeneous system on
the interval J, where A(t) is an (n×n) matrix function continuous on J. Then every
solution of the system on J can be expressed in the form

Y(t) = c1 y1(t) + c2 y2(t) + ⋯ + cn yn(t)

where c1, . . . , cn are constants.
Linearly independent solutions of a homogeneous system
A set of solutions {y1, y2, . . . , yn} that are linearly independent on J is called a
fundamental solution set for the system. The linear combination above is referred to as
the general solution of the system.
An (n×n) matrix Φ(t) of linearly independent solutions to the homogeneous linear
system is called a fundamental matrix or the state transition matrix, which is defined
as the matrix whose columns are linearly independent solutions of the homogeneous
system. A necessary and sufficient condition for the matrix of solutions Φ(t) to be a
fundamental matrix is that det(Φ(t)) ≠ 0 for some (or any) t.
The general solution vector of the homogeneous system of equations has the form

Y(t) = Φ(t) C

where Φ(t) is any fundamental matrix and C is a constant vector.

We now restrict our discussion to homogeneous first-order systems with constant
coefficients, that is, the elements of the coefficient matrix are all constants, independent
of t. The idea behind the so-called Eigenvalue Method is the following observation:
We already know that a single first order ODE of the form y′ = k y has the solution

y = C e^{kt}

So let us try a solution for the system of the form

Y = x e^{λt}

Substituting in the main system equation

Y′ = A Y

we get

λ x e^{λt} = A x e^{λt}

Since e^{λt} is never zero, we can always divide both sides by e^{λt} and get the
eigenvalue problem

A x = λ x
(A − λI) x = 0

where λ is an eigenvalue of A and x is a corresponding eigenvector.
We assume that A has a linearly independent set of n eigenvectors. This holds in most
applications, in particular if A is symmetric or skew-symmetric or has n different
eigenvalues.
Just as for a second order homogeneous linear equation, there are three
possibilities, depending on the number and the type of eigenvalues the coefficient
matrix A has. The possibilities are :
 Distinct real eigenvalues
 A repeated eigenvalue
 Complex conjugate eigenvalues
Let us consider the different cases for a (2×2) system :
Case 1: P(λ) = 0 has two distinct real solutions λ1 and λ2 :
The corresponding eigenvectors for the eigenvalues, respectively, are

x(1) = [ x11 ; x21 ] and x(2) = [ x12 ; x22 ]

Then the general solution :

Y(t) = C1 Y1(t) + C2 Y2(t)

Or

Y(t) = C1 x(1) e^{λ1 t} + C2 x(2) e^{λ2 t}

Or in matrix form

[ y1(t) ; y2(t) ] = C1 [ x11 ; x21 ] e^{λ1 t} + C2 [ x12 ; x22 ] e^{λ2 t}

Which can be put in the general form as :

y1(t) = C1 x11 e^{λ1 t} + C2 x12 e^{λ2 t}
y2(t) = C1 x21 e^{λ1 t} + C2 x22 e^{λ2 t}

Or in terms of the fundamental matrix as

Y(t) = Φ(t) C , Φ(t) = [ x11 e^{λ1 t}  x12 e^{λ2 t} ; x21 e^{λ1 t}  x22 e^{λ2 t} ]

Case 2: P(λ) = 0 has a repeated eigenvalue, that is :

P(λ) = (λ − λ0)²

That is, λ0 is the eigenvalue with multiplicity 2.
(1) λ0 has two linearly independent eigenvectors:

x(1) = [ x11 ; x21 ] and x(2) = [ x12 ; x22 ]

Then the general solution :

Y(t) = e^{λ0 t} ( C1 x(1) + C2 x(2) )

Or in terms of the fundamental matrix as

Y(t) = Φ(t) C , Φ(t) = [ x11 e^{λ0 t}  x12 e^{λ0 t} ; x21 e^{λ0 t}  x22 e^{λ0 t} ]

(2) λ0 has only one associated eigenvector x:

Then a second (generalized) vector η can be determined from the equation

(A − λ0 I) η = x

Then the general solution :

Y(t) = e^{λ0 t} [ C1 x + C2 ( t x + η ) ]

Or in terms of the fundamental matrix as

Y(t) = Φ(t) C , Φ(t) = [ x1 e^{λ0 t}  (t x1 + η1) e^{λ0 t} ; x2 e^{λ0 t}  (t x2 + η2) e^{λ0 t} ]

where x = [ x1 ; x2 ] and η = [ η1 ; η2 ].

Case 3: P(λ) = 0 has two conjugate complex solutions α + iβ and α − iβ :

The corresponding eigenvector for the eigenvalue α + iβ is

x = a + i b , where a = Re(x) and b = Im(x)

Then the general solution :

Y(t) = e^{αt} [ C1 ( a cos(βt) − b sin(βt) ) + C2 ( a sin(βt) + b cos(βt) ) ]

Or in terms of the fundamental matrix as

Y(t) = Φ(t) C , Φ(t) = e^{αt} [ a cos(βt) − b sin(βt)   a sin(βt) + b cos(βt) ]

where the two bracketed vectors are the columns of Φ(t).
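Which of the three cases applies is decided by the discriminant of the characteristic polynomial P(λ) = λ² − (tr A)λ + det A. The short Python sketch below (the helper name eigen_case is illustrative, not from the notes) classifies a 2×2 matrix accordingly:

```python
import cmath

# Classify the eigenvalue cases of a 2x2 matrix A = [[a, b], [c, d]]
# from its characteristic polynomial P(lam) = lam^2 - tr*lam + det.
def eigen_case(a, b, c, d):
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det
    lam1 = (tr + cmath.sqrt(disc)) / 2
    lam2 = (tr - cmath.sqrt(disc)) / 2
    if disc > 0:
        return "distinct real", lam1, lam2
    if disc == 0:
        return "repeated", lam1, lam2
    return "complex conjugate", lam1, lam2

case1 = eigen_case(1, -3, 1, 5)    # matrix of Example 5.5 below
case2 = eigen_case(3, 0, 0, 3)     # repeated eigenvalue lam0 = 3
case3 = eigen_case(0, 1, -1, 0)    # matrix of Example 5.6 below
```

For case1 the roots are real and distinct, for case2 repeated, and for case3 a conjugate pair, matching Cases 1 to 3 above.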

Example 5.5 :
Find the general solution of the following system of DEs

y1′ = y1 − 3y2
y2′ = y1 + 5y2

Solution steps :
Step 1 : putting the system in the matrix form

Y′ = [ 1  −3 ; 1  5 ] Y

Step 2 : The characteristic polynomial and the eigenvalues determination

det(A − λI) = | 1−λ  −3 ; 1  5−λ | = (λ − 1)(λ − 5) + 3 = λ² − 6λ + 8 = 0

Then the eigenvalues are λ1 = 2 and λ2 = 4.
Then the eigenvectors are calculated as :
λ1 = 2 : The equation should be

[ 1  −3 ; 1  5 ] [ x1 ; x2 ] = 2 [ x1 ; x2 ]

Solving the system we get x1 = −3 x2.
Then the eigenvector is

x(1) = [ −3 ; 1 ]

λ2 = 4 : The equation should be

[ 1  −3 ; 1  5 ] [ x1 ; x2 ] = 4 [ x1 ; x2 ]

Solving the system we get x1 = −x2.
Then the eigenvector is

x(2) = [ −1 ; 1 ]

Step 3 : Write down the general solution to the system:

Y(t) = C1 Y1(t) + C2 Y2(t) = C1 [ −3 ; 1 ] e^{2t} + C2 [ −1 ; 1 ] e^{4t}

Or in the form

y1(t) = −3 C1 e^{2t} − C2 e^{4t}
y2(t) = C1 e^{2t} + C2 e^{4t}
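The eigenpairs can be confirmed by checking A x = λ x componentwise; a minimal Python check (an editorial addition) for the matrix of this example:

```python
# Verify the eigenpairs of A = [[1, -3], [1, 5]] found in Example 5.5
# by checking A x = lam x componentwise (all integer arithmetic).
A = [[1, -3], [1, 5]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

pairs = [(2, [-3, 1]), (4, [-1, 1])]
for lam, x in pairs:
    Ax = matvec(A, x)
    assert Ax == [lam*x[0], lam*x[1]]
```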

Example 5.6 :
Find the general solution of the following system of DEs

x′ = y
y′ = −x

Solution :

A = [ 0  1 ; −1  0 ]

det(A − λI) = | −λ  1 ; −1  −λ | = λ² + 1 = 0

Then the eigenvalues are λ1 = i and λ2 = −i.
Then the eigenvectors are calculated as :
λ1 = i : solving (A − iI)x = 0 we get x2 = i x1. Then the eigenvector is

x(1) = [ 1 ; i ]

λ2 = −i : the eigenvector is the conjugate of the previous vector

x(2) = [ 1 ; −i ]

The general solution to the system:

Y = C1 [ 1 ; i ] e^{it} + C2 [ 1 ; −i ] e^{−it}

But we want real-valued solutions, so we replace the complex-valued solutions by the
real and imaginary parts of e^{it} [ 1 ; i ] = [ cos t + i sin t ; −sin t + i cos t ]:

Y = C1 [ cos t ; −sin t ] + C2 [ sin t ; cos t ]

Or

x(t) = C1 cos t + C2 sin t
y(t) = −C1 sin t + C2 cos t
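A direct check (editorial sketch) that the real-valued general solution satisfies x′ = y and y′ = −x for arbitrary constants:

```python
import math

# Real-valued general solution of x' = y, y' = -x from Example 5.6:
#   x(t) = C1 cos t + C2 sin t,  y(t) = -C1 sin t + C2 cos t
def x(t, C1, C2): return C1*math.cos(t) + C2*math.sin(t)
def y(t, C1, C2): return -C1*math.sin(t) + C2*math.cos(t)

# exact derivatives
def xp(t, C1, C2): return -C1*math.sin(t) + C2*math.cos(t)
def yp(t, C1, C2): return -C1*math.cos(t) - C2*math.sin(t)

for t in (0.0, 0.8, 2.5):
    assert abs(xp(t, 1.5, -2.0) - y(t, 1.5, -2.0)) < 1e-12   # x' = y
    assert abs(yp(t, 1.5, -2.0) + x(t, 1.5, -2.0)) < 1e-12   # y' = -x
```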

Example 5.7 :
Solve the following system of 1st order ODEs

y1′ = y1 − y2 + 2y3
y2′ = 3y1 + 4y3
y3′ = 2y1 + y2

given the initial condition vector Y(0) = [ 0 ; 0 ; 1 ].
Step 1 : putting the system in the matrix form

Y′ = A Y

A = [ 1  −1  2 ; 3  0  4 ; 2  1  0 ]

Step 2 : The eigenvalues and eigenvectors determination ( work it out )
The eigenvalues are λ1 = 1.303, λ2 = −2.303 and λ3 = 2, and the corresponding
eigenvectors are

[ −0.172 ; 0.891 ; 0.42 ] , [ −0.557 ; −0.467 ; 0.687 ] , [ 0 ; 0.894 ; 0.447 ]

Step 3 : Write down the general solution to the system:

Y = C1 [ −0.172 ; 0.891 ; 0.42 ] e^{1.303t} + C2 [ −0.557 ; −0.467 ; 0.687 ] e^{−2.303t} + C3 [ 0 ; 0.894 ; 0.447 ] e^{2t}

In terms of components, the general solution is

y1(t) = −0.172 C1 e^{1.303t} − 0.557 C2 e^{−2.303t}
y2(t) = 0.891 C1 e^{1.303t} − 0.467 C2 e^{−2.303t} + 0.894 C3 e^{2t}
y3(t) = 0.42 C1 e^{1.303t} + 0.687 C2 e^{−2.303t} + 0.447 C3 e^{2t}

Step 4 : Plug in the initial conditions and solve the system for the arbitrary constants

−0.172 C1 − 0.557 C2 = 0
0.891 C1 − 0.467 C2 + 0.894 C3 = 0
0.42 C1 + 0.687 C2 + 0.447 C3 = 1

Solving the system we get the constants C1 = −3.228, C2 = 0.997 and C3 = 3.738.
The final solution will be

y1(t) = 0.555 [ e^{1.303t} − e^{−2.303t} ]
y2(t) = −2.876 e^{1.303t} − 0.466 e^{−2.303t} + 3.342 e^{2t}
y3(t) = −1.356 e^{1.303t} + 0.685 e^{−2.303t} + 1.671 e^{2t}



5.5 - Matrix Solutions of Linear System of ODEs (Non-homogeneous Case)
A nonhomogeneous system of first order linear differential equations with constant
coefficients can be written

Y′ = A Y + G

where G is the input signal vector.
Its general solution can be expressed as Y(t) = Ψh(t) + Ψp(t)
where :
Ψh(t) = Φ(t) C is the solution of the homogeneous system,
Ψp(t) is the particular solution to be determined,
Φ(t) is a fundamental matrix for the homogeneous system.

5.5.1 - Variation of parameters

Variation of parameters for systems follows the same line of reasoning as variation of
parameters for second order linear differential equations. Then we can assume that
the particular solution has the form

Ψp(t) = Φ(t) U(t)

where U(t) is an n×1 vector to be determined. The proposed particular solution should
satisfy the original system; substituting into the nonhomogeneous system we obtain:

( Φ(t) U(t) )′ = A [ Φ(t) U(t) ] + G

Differentiating and rearranging (dropping t for simplicity)

Φ′ U + Φ U′ = A (Φ U) + G = (AΦ) U + G

Since Φ is a solution for the homogeneous system, then:

Φ′ = AΦ and Φ′ U = (AΦ) U

Then the equation becomes

Φ U′ = G

Premultiplying by the inverse of the fundamental matrix Φ we obtain

U′ = Φ⁻¹ G

Then the U(t) vector is determined by integrating both sides

U(t) = ∫ Φ⁻¹(t) G(t) dt

in which we integrate a matrix by integrating each element of the matrix. Once we have U(t),
we have the general solution of the nonhomogeneous system of 1st order DEs as :

Y(t) = Ψh(t) + Ψp(t) = Φ(t) C + Φ(t) ∫ Φ⁻¹(t) G(t) dt
Example 5.10 :
Solve the system

Y′ = [ 1  −10 ; −1  4 ] Y + [ t ; 1 ]

Determine the fundamental matrix by calculating the eigenvalues and eigenvectors of A,
which are: the eigenvalues λ1 = −1 and λ2 = 6, with corresponding eigenvectors

[ 5 ; 1 ] and [ −2 ; 1 ]

Then the fundamental matrix is

Φ(t) = [ 5e^{−t}  −2e^{6t} ; e^{−t}  e^{6t} ]

Calculation of the inverse of Φ(t), with det Φ(t) = 5e^{5t} + 2e^{5t} = 7e^{5t}:

Φ⁻¹(t) = (1/7) [ e^{t}  2e^{t} ; −e^{−6t}  5e^{−6t} ]

Then

Φ⁻¹(t) G(t) = (1/7) [ (t + 2) e^{t} ; (5 − t) e^{−6t} ]

Integrating each element individually we obtain

U(t) = [ ((t + 1)/7) e^{t} ; ( (1/42) t − 29/252 ) e^{−6t} ]

The general solution of the nonhomogeneous system is

Y(t) = Φ(t) C + Φ(t) U(t)

Y(t) = [ 5e^{−t}  −2e^{6t} ; e^{−t}  e^{6t} ] C + [ (2/3) t + 17/18 ; (1/6) t + 1/36 ]
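As a sanity check (an editorial sketch), the particular solution produced by variation of parameters can be substituted back into the nonhomogeneous system; since it is linear in t, its derivative is constant:

```python
# Check the particular solution of Example 5.10,
#   Yp(t) = [(2/3)t + 17/18, (1/6)t + 1/36],
# against  Y' = [[1, -10], [-1, 4]] Y + [t, 1].
def yp_vec(t):
    return [(2/3)*t + 17/18, (1/6)*t + 1/36]

def yp_deriv(t):
    return [2/3, 1/6]   # derivative of the linear particular solution

for t in (0.0, 1.0, -2.0):
    v = yp_vec(t)
    rhs = [1*v[0] - 10*v[1] + t, -1*v[0] + 4*v[1] + 1]
    lhs = yp_deriv(t)
    assert abs(lhs[0] - rhs[0]) < 1e-12
    assert abs(lhs[1] - rhs[1]) < 1e-12
```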

5.5.2 - Solution by diagonalization

If A is a diagonalizable matrix of real numbers, then we can solve the system

Y′ = A Y + G

by the change of variables Y = X Z, where X diagonalizes A.
Substituting in the system of DEs

Y′ = (X Z)′ = X Z′ = A X Z + G
X Z′ = (A X) Z + G

Premultiplying by X⁻¹ we obtain

Z′ = (X⁻¹ A X) Z + X⁻¹ G
Z′ = D Z + X⁻¹ G

This is an uncoupled system of first order differential equations; solve
each of these first-order linear differential equations individually.

Example 5.11 :
Solve the system

Y′ = [ 3  3 ; 1  5 ] Y + [ 8 ; 4e^{t} ]

The eigenvalues of A are 2 and 6, with eigenvectors, respectively:

[ −3 ; 1 ] and [ 1 ; 1 ]

X = [ −3  1 ; 1  1 ] , X⁻¹ = [ −1/4  1/4 ; 1/4  3/4 ] , D = X⁻¹ A X = [ 2  0 ; 0  6 ]

With Y = X Z:

[ z1′ ; z2′ ] = [ 2  0 ; 0  6 ] [ z1 ; z2 ] + [ −1/4  1/4 ; 1/4  3/4 ] [ 8 ; 4e^{t} ]

[ z1′ ; z2′ ] = [ 2z1 − 2 + e^{t} ; 6z2 + 2 + 3e^{t} ]

Solving each equation individually

Z = [ C1 e^{2t} − e^{t} + 1 ; C2 e^{6t} − (3/5) e^{t} − 1/3 ]

Y(t) = X Z(t) = [ −3  1 ; 1  1 ] [ C1 e^{2t} − e^{t} + 1 ; C2 e^{6t} − (3/5) e^{t} − 1/3 ]

y1(t) = −3 C1 e^{2t} + C2 e^{6t} + (12/5) e^{t} − 10/3
y2(t) = C1 e^{2t} + C2 e^{6t} − (8/5) e^{t} + 2/3
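The uncoupled solution can again be checked by substitution into the original coupled system (an editorial sketch; the constants are arbitrary):

```python
import math

# Check the general solution of Example 5.11 against the system
#   y1' = 3 y1 + 3 y2 + 8,   y2' = y1 + 5 y2 + 4 e^t
C1, C2 = 0.7, -1.2   # arbitrary constants

def y1(t): return -3*C1*math.exp(2*t) + C2*math.exp(6*t) + (12/5)*math.exp(t) - 10/3
def y2(t): return  C1*math.exp(2*t) + C2*math.exp(6*t) - (8/5)*math.exp(t) + 2/3
def d1(t): return -6*C1*math.exp(2*t) + 6*C2*math.exp(6*t) + (12/5)*math.exp(t)
def d2(t): return  2*C1*math.exp(2*t) + 6*C2*math.exp(6*t) - (8/5)*math.exp(t)

for t in (0.0, 0.4, 1.0):
    assert abs(d1(t) - (3*y1(t) + 3*y2(t) + 8)) < 1e-8
    assert abs(d2(t) - (y1(t) + 5*y2(t) + 4*math.exp(t))) < 1e-8
```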



5.5.3 - Solutions by Laplace transform
Physical systems, such as circuits with multiple loops, may be modeled by systems of
linear differential equations. The Laplace transform can be used to solve initial value
problems for systems of linear first order differential equations by introducing the
Laplace transform of each dependent variable that is involved, solving the resulting
algebraic equations for each transformed dependent variable, and then inverting the
results. We consider a first-order linear system with constant coefficients.

Homogeneous Linear System of ODEs

Consider the system of 1st order DEs given by

Y′ = A Y

with the initial conditions vector

Y(0) = k

Taking the Laplace transform of the system

s Y(s) − Y(0) = A Y(s)
(sI − A) Y(s) = Y(0)

Premultiplying both sides by (sI − A)⁻¹ we obtain

Y(s) = (sI − A)⁻¹ Y(0)

Noting that (sI − A)⁻¹ can be expanded by a power series as

(sI − A)⁻¹ = I/s + A/s² + A²/s³ + ⋯

Taking the inverse Laplace transform

Y(t) = ℒ⁻¹[ (sI − A)⁻¹ ] Y(0)

Φ(t) = ℒ⁻¹[ (sI − A)⁻¹ ] = I + A t + A² t²/2! + A³ t³/3! + ⋯ = e^{At}

Then

Y(t) = e^{At} Y(0) or Y(t) = Φ(t) Y(0)

Φ(t) = e^{At} is the state transition matrix.

Example 5.12 :
Determine the state transition matrix Φ(t) of the system

x1′ = x2 and x2′ = −2x1 − 3x2

A = [ 0  1 ; −2  −3 ]

The state transition matrix of the system can be obtained from

Φ(t) = e^{At} = ℒ⁻¹[ (sI − A)⁻¹ ]

which can be evaluated by the exponential matrix method or by Laplace transform:

(sI − A) = [ s  0 ; 0  s ] − [ 0  1 ; −2  −3 ] = [ s  −1 ; 2  s+3 ]

(sI − A)⁻¹ = 1/( (s + 1)(s + 2) ) [ s+3  1 ; −2  s ]
           = [ (s+3)/((s+1)(s+2))  1/((s+1)(s+2)) ; −2/((s+1)(s+2))  s/((s+1)(s+2)) ]

Using partial fractions for each term individually, then evaluating ℒ⁻¹, we obtain

Φ(t) = ℒ⁻¹[ (sI − A)⁻¹ ] = [ 2e^{−t} − e^{−2t}   e^{−t} − e^{−2t} ; −2e^{−t} + 2e^{−2t}   −e^{−t} + 2e^{−2t} ]
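The two routes to Φ(t) mentioned above, the exponential matrix series and the Laplace transform, can be compared numerically. The sketch below (editorial, standard library only) sums the series I + At + A²t²/2! + ⋯ and checks it against the closed form just derived:

```python
import math

# Compare the power-series state transition matrix with the closed form
# obtained by Laplace transform in Example 5.12, for A = [[0, 1], [-2, -3]].
A = [[0.0, 1.0], [-2.0, -3.0]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=30):
    P = [[1.0, 0.0], [0.0, 1.0]]   # current term A^k t^k / k!
    S = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    for k in range(1, terms):
        P = matmul(P, [[a*t/k for a in row] for row in A])
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

def phi_closed(t):
    e1, e2 = math.exp(-t), math.exp(-2*t)
    return [[2*e1 - e2, e1 - e2],
            [-2*e1 + 2*e2, -e1 + 2*e2]]

t = 0.5
S, P = expm_series(A, t), phi_closed(t)
assert all(abs(S[i][j] - P[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Note also Φ(0) = I, as the series form makes obvious.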
Non-homogeneous Linear System of ODEs
Consider the system of 1st order DEs given by

Y′ = A Y + G(t)

with the initial conditions vector

Y(0) = k

Taking the Laplace transform of the system

s Y(s) − Y(0) = A Y(s) + G(s)
(sI − A) Y(s) = Y(0) + G(s)

Pre-multiplying both sides by (sI − A)⁻¹ and taking the inverse Laplace transform, we obtain

Y(t) = ℒ⁻¹{ (sI − A)⁻¹ [ Y(0) + G(s) ] }
Example 5.13 :
Solve the initial value problem

x′ − 2x + y = sin t
y′ + 2x − y = 1

with x(0) = 1 , y(0) = −1.

Putting the system in matrix form

[ x ; y ]′ = [ 2  −1 ; −2  1 ] [ x ; y ] + [ sin t ; 1 ]

The solution is given by the relation

Y(t) = ℒ⁻¹{ (sI − A)⁻¹ [ Y(0) + G(s) ] }

(sI − A) = [ s  0 ; 0  s ] − [ 2  −1 ; −2  1 ] = [ s−2  1 ; 2  s−1 ]

Y(0) = [ x(0) ; y(0) ] = [ 1 ; −1 ]

G(s) = [ ℒ(sin t) ; ℒ(1) ] = [ 1/(s² + 1) ; 1/s ]

Then, since det(sI − A) = (s − 2)(s − 1) − 2 = s(s − 3),

(sI − A)⁻¹ = 1/( s(s − 3) ) [ s−1  −1 ; −2  s−2 ]

And

Y(0) + G(s) = [ 1 + 1/(s² + 1) ; −1 + 1/s ] = [ (s² + 2)/(s² + 1) ; (1 − s)/s ]

Then

[ X(s) ; Y(s) ] = (sI − A)⁻¹ [ (s² + 2)/(s² + 1) ; (1 − s)/s ]

After performing the product

X(s) = (s − 1)(s³ + s² + 2s + 1) / ( s²(s² + 1)(s − 3) )
Y(s) = −( s⁴ − s³ + 3s² + s + 2 ) / ( s²(s² + 1)(s − 3) )

The inverse transforms can be determined by partial fractions and the result will be

x(t) = 4/9 + (1/3) t − (1/5) sin t − (2/5) cos t + (43/45) e^{3t}
y(t) = 5/9 + (2/3) t + (1/5) sin t − (3/5) cos t − (43/45) e^{3t}
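The Laplace-transform result can be verified by substitution (an editorial sketch): the closed forms satisfy both initial conditions and both differential equations:

```python
import math

# Check the solution of Example 5.13:
#   x' = 2x - y + sin t,  y' = -2x + y + 1,  x(0) = 1, y(0) = -1
def x(t):
    return 4/9 + t/3 - math.sin(t)/5 - 2*math.cos(t)/5 + (43/45)*math.exp(3*t)

def y(t):
    return 5/9 + 2*t/3 + math.sin(t)/5 - 3*math.cos(t)/5 - (43/45)*math.exp(3*t)

def xp(t):  # exact x'(t)
    return 1/3 - math.cos(t)/5 + 2*math.sin(t)/5 + 3*(43/45)*math.exp(3*t)

def yp(t):  # exact y'(t)
    return 2/3 + math.cos(t)/5 + 3*math.sin(t)/5 - 3*(43/45)*math.exp(3*t)

assert abs(x(0) - 1) < 1e-12 and abs(y(0) + 1) < 1e-12
for t in (0.0, 0.7, 1.5):
    assert abs(xp(t) - (2*x(t) - y(t) + math.sin(t))) < 1e-8
    assert abs(yp(t) - (-2*x(t) + y(t) + 1)) < 1e-8
```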



Exercises ( 5 )
1. Determine the general solutions by elimination

+ 2 − = 1 +
+ + 2 = 3
Subjected to initial conditions y(0) = 5/2 and z(0) = -1/2

2. Solve the system :


= +
= +
= +
3. Solve the system

Y′ = [ 0  1 ; −5  −4 ] Y and initial conditions [ y1(0) ; y2(0) ] = [ 1 ; −1 ]

4. Solve the initial value problem


a. + + = sin 2 and + − = 1 With (0) = 0 (0) = 0
b. − +3 = 1 + and + − = 2 With (0) = 2 (0) = −2

5. Solve the initial value problem system


′+ − =1
− + =1
′+ − =0
With (0) = 1 (0) = 0 (0) = 1

6. Consider the differential equation u'' + 0.25u' + 2u = 3 sin t, with initial conditions
u(0) = 2, u'(0) = −2. Transform this problem into an equivalent one for a system of
first order equations. Then solve the system.



Part ( 6 ) : Boundary Value Problems and Fourier Analysis
______________________________________________________________________________

6.1 - Boundary Value Problems


When solving partial differential equations it will often be necessary to approximate functions
by series of orthogonal functions. One way to obtain an orthogonal family of functions is by
solving a particular type of boundary value problem for a second-order linear ordinary
differential equation. Up until now you have dealt only with initial-value problems (IVPs), where
x(t0) and x′(t0) are both given at the same value of t.
While initial value problems require determination of the function subject to specified values of
it, and perhaps of its derivatives, at one end of the domain (typically t = 0), in BVPs involving
second order ODEs the function values are specified at the two ends of the solution domain
(typically x = 0 and x = L). Many problems in engineering and science can be formulated as
BVPs. In contrast with initial value problems, boundary value problems often have multiple
solutions or even fail to have a solution.
Here we will consider the boundary conditions, which are the values of the dependent variable
y or y’ specified at two different points.
A typical example of two-point boundary value problems is

y″ + p(x) y′ + q(x) y = g(x)

with the boundary conditions y(a) = y0 and y(b) = y1, OR y′(a) = y0 and y′(b) = y1
To solve the boundary value problem, we need to find a function y = f(x) that satisfies the
differential equation. If g(x) is identically equal to zero and y0 = y1 = 0, then the above problem
is called homogeneous boundary value problem. Otherwise, it is nonhomogeneous.

BVP solution : For a differential equation with constant coefficients, the roots of the
characteristic polynomial lead to one of the following situations. If the roots r are real, then the boundary
value problem has a unique solution for all y0, y1 ∈ ℝ. But if the roots form a complex
conjugate pair, then the solution of the boundary value problem belongs to only one of the
following three possibilities: (a) There exists a unique solution; (b) There exists infinitely many
solutions; and (c) There exists no solution.



Example 6.1 :

y″ + 4y = 0

The corresponding characteristic equation will be :

r² + 4 = 0 → r = ± 2i

y = c1 cos 2x + c2 sin 2x

Case 1 : y(0) = 1 , y(π/4) = −1
Using the boundary conditions we get

1 = y(0) = c1
−1 = y(π/4) = c2
→ [ 1  0 ; 0  1 ] [ c1 ; c2 ] = [ 1 ; −1 ]

The linear system above has the unique solution c1 = 1 and c2 = −1. Hence, the boundary value
problem above has the unique solution

y(x) = cos(2x) − sin(2x)

Case 2 : y(0) = 1 , y(π/2) = −1
Using the boundary conditions we get

1 = y(0) = c1
−1 = y(π/2) = −c1
→ [ 1  0 ; −1  0 ] [ c1 ; c2 ] = [ 1 ; −1 ]

The linear system above has infinitely many solutions: c1 = 1 satisfies both equations, while c2 is left arbitrary.
Hence, the boundary value problem above has infinitely many solutions given by

y(x) = cos(2x) + c2 sin(2x) , ∀ c2 ∈ ℝ

Case 3 : y(0) = 1 , y(π/2) = 1
Using the boundary conditions we get

1 = y(0) = c1
1 = y(π/2) = −c1

From the equations above we see that there is no solution for c1; hence there is no solution for
the boundary value problem above



6.2 - Eigenvalue Problems
The purpose of this chapter is to develop tools required to solve these equations. In this section
we consider the following problems, where λ is a real number and L > 0:
1. y″ + λy = 0 : y(0) = 0 , y(L) = 0
2. y″ + λy = 0 : y(0) = 0 , y′(L) = 0
3. y″ + λy = 0 : y′(0) = 0 , y(L) = 0
4. y″ + λy = 0 : y′(0) = 0 , y′(L) = 0
5. y″ + λy = 0 : y(−L) = y(L) , y′(−L) = y′(L)

In each problem the conditions following the differential equation are called boundary
conditions. Note that the boundary conditions in Problem 5, unlike those in Problems 1-4, don’t
require that y or y’ be zero at the boundary points, but only that y have the same value at
x = ±L, and that y’ have the same value at x = ±L. We say that the boundary conditions in
Problem 5 are periodic.
Obviously, y ≡ 0 (the trivial solution) is a solution of Problems 1-5 for any value of λ. For most
values of λ, there are no other solutions.

The interesting question is this: for which values of λ does the problem have nontrivial solutions, and what are they?


A value of λ for which the problem has a nontrivial solution is an eigenvalue of the problem,
and the nontrivial solutions are λ-eigenfunctions, or eigenfunctions associated with λ. Note that
a nonzero constant multiple of a λ-eigenfunction is again a λ-eigenfunction. Problems 1-5 are
called eigenvalue problems. Solving an eigenvalue problem means finding all its eigenvalues
and associated eigenfunctions.

Example 6.2 :
Solve the eigenvalue problem

y″ + 3y′ + (2 + λ)y = 0 with y(0) = 0 , y(1) = 0

The characteristic equation of the DE is

m² + 3m + (2 + λ) = 0

The roots are

m1,2 = ( −3 ± √(1 − 4λ) ) / 2

If λ < 1/4 then m1 and m2 are real and distinct, so the general solution of the differential
equation is

y(x) = A e^{m1 x} + B e^{m2 x}

Using the boundary conditions

0 = A + B → B = −A
0 = A e^{m1} + B e^{m2}
0 = A ( e^{m1} − e^{m2} )

Since e^{m1} − e^{m2} ≠ 0, the system has only the trivial solution. Therefore no λ < 1/4 is an
eigenvalue of the problem.

If λ = 1/4 then m1 = m2 = −3/2 are real and equal, so the general solution of the differential
equation is

y(x) = ( A + B x ) e^{−3x/2}

The boundary condition y(0) = 0 requires that A = 0, so

y(x) = B x e^{−3x/2}

and the boundary condition y(1) = 0 requires that B = 0. Therefore λ = 1/4 is not an eigenvalue of
the differential equation.
If λ > 1/4 then m1 and m2 are complex conjugates:

m1,2 = −3/2 ± iω

where

ω = √(4λ − 1)/2 → λ = (1 + 4ω²)/4

so the general solution of the differential equation is

y(x) = e^{−3x/2} ( A cos ωx + B sin ωx )

The boundary condition y(0) = 0 requires that A = 0, and the solution will be

y(x) = B e^{−3x/2} sin ωx

which holds with B ≠ 0 if and only if ω = nπ, where n is a positive integer, n = 1, 2, …. Then the
eigenvalues are

λn = (1 + 4ω²)/4 = (1 + 4n²π²)/4 , n = 1, 2, 3 ….

And the associated eigenfunctions are

yn(x) = e^{−3x/2} sin nπx
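The eigenpairs can be checked by substituting the eigenfunctions back into the differential equation with exact derivatives; the residual should vanish for every n. A small Python sketch (editorial addition):

```python
import math

# For lam_n = (1 + 4 n^2 pi^2)/4, verify that y_n(x) = exp(-3x/2) sin(n pi x)
# satisfies y'' + 3y' + (2 + lam) y = 0 and the boundary conditions.
def residual(n, x):
    lam = (1 + 4*(n*math.pi)**2) / 4
    s, c = math.sin(n*math.pi*x), math.cos(n*math.pi*x)
    e = math.exp(-1.5*x)
    y  = e*s
    y1 = e*(-1.5*s + n*math.pi*c)                       # first derivative
    y2 = e*((2.25 - (n*math.pi)**2)*s - 3*n*math.pi*c)  # second derivative
    return y2 + 3*y1 + (2 + lam)*y

for n in (1, 2, 5):
    assert abs(math.exp(-1.5)*math.sin(n*math.pi)) < 1e-12   # y(1) = 0
    for x in (0.2, 0.5, 0.9):
        assert abs(residual(n, x)) < 1e-9
```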



6.3 - Fourier Series
Before learning to solve partial differential equations, it is necessary to know how to
approximate arbitrary functions by infinite series, using special families of functions. This
process is called Fourier approximation, and the families of functions we use are called
orthogonal families.

Suppose S = { φn(x) } is an orthogonal set on an interval [a, b] and f is a function
defined on the same interval; then we can formally expand f in an orthogonal series as

f(x) = Σ_{n=0}^{∞} cn φn(x) = c0 φ0(x) + c1 φ1(x) + c2 φ2(x) + ⋯

where the coefficients cn are determined by using the inner product concept.
The orthogonal set of trigonometric functions is

S = { 1 , cos(πx/L) , cos(2πx/L) , cos(3πx/L) , … , sin(πx/L) , sin(2πx/L) , sin(3πx/L) , … }

Suppose that f is a function defined on the interval (−L, L) that can be expanded in an
orthogonal series consisting of the trigonometric functions in the orthogonal set, that is,

f(x) ~ a0/2 + Σ_{n=1}^{∞} [ an cos(nπx/L) + bn sin(nπx/L) ]

It is called the Trigonometric Fourier Series for f(x) on the interval [−L, L]. Note that each
function in the set S has period 2L; that is, φn(x + 2L) = φn(x) for all x; therefore, if f(x) is
represented by its Trigonometric Fourier Series, it will be a periodic function with period 2L.
The coefficients a's and b's, called the Fourier coefficients, are given by the Euler
formulae

a0 = (1/L) ∫_{−L}^{L} f(x) dx

an = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx : n = 1, 2, …

bn = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx : n = 1, 2, 3, …



Here the “ ~ ” means “has the Fourier series”. We have not yet said whether the series
converges. For now let us assume that the series converges uniformly, so we can
replace the ~ with an = as follows :

f(x) = a0/2 + Σ_{n=1}^{∞} [ an cos(nπx/L) + bn sin(nπx/L) ]

Even and Odd Functions


It is important to notice that the Fourier series representation of f(x) contains two infinite sums,
one of even functions (the cosines) and the other of odd functions (the sines). It will be recalled
that a function f(x) defined in the interval −L ≤ x ≤ L may be even or odd. If f and g are both
odd functions, or both even functions, then the product f(t)g(t) is even; and if one is even and
the other is odd then the product is odd.
A function f is even on [−L, L] if its graph on [−L, 0] is the reflection across the vertical axis of the
graph on [0, L]. This happens when f(−x) = f(x) for 0 < x ≤ L. For example, x^{2n} and cos(nπx/L) are
even on [−L, L] for any positive integer n. Its Fourier series reduces to a Fourier cosine
series

f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/L)

And the Fourier coefficients will be

an = (2/L) ∫_{0}^{L} f(x) cos(nπx/L) dx , n = 0, 1, 2, … , bn = 0

since for an even function

∫_{−L}^{L} f(x) dx = 2 ∫_{0}^{L} f(x) dx


A function f is odd on [−L, L] if its graph on [−L, 0) is the reflection through the origin of the
graph on (0, L]. This means that f is odd when f(−x) = −f(x) for 0 < x ≤ L. For example, x^{2n+1} and
sin(nπx/L) are odd on [−L, L] for any positive integer n. Its Fourier series reduces to a Fourier
sine series

f(x) = Σ_{n=1}^{∞} bn sin(nπx/L)

And the Fourier coefficients will be

bn = (2/L) ∫_{0}^{L} f(x) sin(nπx/L) dx , n = 1, 2, … , an = 0

Most functions are neither even nor odd, but any function in an interval −L ≤ x ≤ L can
be expressed as the sum of an even function and an odd function defined over the
interval.

Fourier Series Convergence


Before stating conditions under which a Fourier series converges, we need to review
the one-sided limits

f(x+) = lim_{h→0+} f(x + h) , f(x−) = lim_{h→0+} f(x − h)

called, respectively, the right- and left-hand limits of f at x.


Piecewise continuous functions
A function f is said to be piecewise continuous on a closed interval [a, b] if there are



 a finite number of points x1 < x2 < . . . < xn in [a,b] at which f has a finite (or
jump) discontinuity, and
 f is continuous on each open interval (xk, xk+1).
As a consequence of this definition, the one-sided limits f(x+) and f(x-) must exist at
every x satisfying a < x < b. The limits f(a+) and f(b-) must also exist but it is not
required that f be continuous or even defined at either a or b.

Note f is piecewise smooth on [a, b] if f is piecewise continuous and f ’ exists and is


continuous at all but perhaps finitely many points of (a, b).

Piecewise smooth functions


A function f is said to be piecewise smooth on [a, b] if:
 f has at most finitely many points of discontinuity in (a, b);
 f’ exists and is continuous except possibly at finitely many points in (a, b);
 ( +) = lim → ( ) ( +) = → ( ) ≤ ≤
 ( −) = lim → ( ) ( −) = → ( ) ≤ ≤
Since f and f’ are required to be continuous at all but finitely many points in [a, b],
f(x0+) = f(x0−) and f’(x0+) = f’(x0−) for all but finitely many values of x0 in (a, b). f is said
to have a jump discontinuity at x0 if f(x0+) ≠ f(x0−).
Conditions for Convergence
Let f and f′ be piecewise continuous on the interval [−L, L], except possibly at a
finite number of internal points x1, x2, . . . , at each point xn of which the function has a finite
jump discontinuity f(xn+) − f(xn−). Furthermore, let the left- and right-hand derivatives f′(xn−)
and f′(xn+) exist for n = 1, 2, . . . . Then at points of continuity of f(x) its Fourier series converges
to f(x), and at each point of discontinuity it converges pointwise to

(1/2) [ f(xn−) + f(xn+) ] , n = 1, 2, …

If, in addition, f(x) has a right-hand derivative f′(−L+) at the left end point of the
interval and a left-hand derivative f′(L−) at the right end point of the interval, then at x
= ±L the Fourier series converges pointwise to

(1/2) [ f(−L+) + f(L−) ]
A function f(t) is said to be periodic if its image values are repeated at regular
intervals in its domain. Thus the graph of a periodic function can be divided into
‘vertical strips’ that are replicas of each other, as illustrated in Figure below. The
interval between two successive replicas is called the period of the function.

We therefore say that a function f(t) is periodic with period T if, for all its domain values t,

f(t + mT) = f(t) , m any integer

We define the frequency of a periodic function to be the reciprocal of its period, and the
angular frequency as

f = 1/T , ω = 2πf = 2π/T

The smallest positive period is often called the fundamental period. Familiar periodic
functions are the cosine, sine, tangent, and cotangent. Examples of functions that are
not periodic are ln x, e^x, x^m, to mention just a few. Furthermore, if f(t) and g(t)
have period T, then a f(t) + b g(t) with any constants a and b also has the period T.



Example 6.6 :
Let f(x) = x − x² for −π ≤ x ≤ π. Compute the Fourier series

f(x) = a0/2 + Σ_{n=1}^{∞} ( an cos nx + bn sin nx )

The constants are calculated :

a0 = (1/π) ∫_{−π}^{π} (x − x²) dx = −(2/3) π²

an = (1/π) ∫_{−π}^{π} (x − x²) cos nx dx = −(4/n²) cos nπ = −(4/n²) (−1)^n = (4/n²) (−1)^{n+1}

bn = (1/π) ∫_{−π}^{π} (x − x²) sin nx dx = −(2/n) cos nπ = −(2/n) (−1)^n = (2/n) (−1)^{n+1}

Then the Fourier series is

f(x) = −π²/3 + Σ_{n=1}^{∞} [ (−1)^{n+1} (4/n²) cos nx + (−1)^{n+1} (2/n) sin nx ]

Now we can examine the relationship between this series and f(x).
f′(x) = 1 − 2x is continuous for all x, hence f is piecewise smooth on [−π, π]. For −π < x < π, the
Fourier series converges to x − x². At both π and −π, the Fourier series converges to

(1/2) [ f(π−) + f(−π+) ] = (1/2) [ (π − π²) + (−π − π²) ] = −π²
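The Euler formulas can be checked numerically for this example. The sketch below (editorial; it assumes a simple midpoint-rule quadrature, though any quadrature would do) reproduces the closed-form coefficients:

```python
import math

# Numerically check the Fourier coefficients of f(x) = x - x^2 on [-pi, pi]
# against a_0 = -2 pi^2/3, a_n = (-1)^(n+1) 4/n^2, b_n = (-1)^(n+1) 2/n.
def f(x): return x - x*x

def quad(g, a, b, m=20000):
    # composite midpoint rule
    h = (b - a)/m
    return h*sum(g(a + (k + 0.5)*h) for k in range(m))

a0 = quad(f, -math.pi, math.pi)/math.pi
assert abs(a0 + 2*math.pi**2/3) < 1e-4

for n in (1, 2, 3):
    an = quad(lambda x: f(x)*math.cos(n*x), -math.pi, math.pi)/math.pi
    bn = quad(lambda x: f(x)*math.sin(n*x), -math.pi, math.pi)/math.pi
    assert abs(an - (-1)**(n+1)*4/n**2) < 1e-4
    assert abs(bn - (-1)**(n+1)*2/n) < 1e-4
```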

Example 6.7 :
Let f(x) = π/2 + x for −π ≤ x ≤ 0 and f(x) = π/2 − x for 0 ≤ x ≤ π.

The periodic extension of f(x) is the "triangular wave". In this example the extended function is
continuous for all x. One finds

f(x) = a0/2 + Σ_{n=1}^{∞} ( an cos nx + bn sin nx )

The constants are calculated :

a0 = (1/π) [ ∫_{−π}^{0} (π/2 + x) dx + ∫_{0}^{π} (π/2 − x) dx ] = 0

an = (1/π) [ ∫_{−π}^{0} (π/2 + x) cos nx dx + ∫_{0}^{π} (π/2 − x) cos nx dx ] = (2/(πn²)) (1 − cos nπ)

bn = (1/π) [ ∫_{−π}^{0} (π/2 + x) sin nx dx + ∫_{0}^{π} (π/2 − x) sin nx dx ] = 0

Hence

f(x) = (2/π) Σ_{n=1}^{∞} (1/n²) (1 − cos nπ) cos nx

f(x) = (4/π) [ cos x + (1/9) cos 3x + (1/25) cos 5x + ⋯ ]

Since there are no jumps, one must expect convergence everywhere. It should, however, be
noted that at the corners (where f′(x) has a jump), the convergence is poorer than elsewhere.

Example 6.8 :
Find the Fourier series representation of f(x) = x + 1 for −1 ≤ x ≤ 1.

f(x) = a0/2 + Σ_{n=1}^{∞} ( an cos nπx + bn sin nπx )

In this case L = 1, so using integration we find the constants :

a0 = ∫_{−1}^{1} (x + 1) dx = 2

an = ∫_{−1}^{1} (x + 1) cos nπx dx = 0

bn = ∫_{−1}^{1} (x + 1) sin nπx dx = −(2/(nπ)) cos nπ = (2/(nπ)) (−1)^{n+1}

Substituting these coefficients gives the Fourier series representation

f(x) = 1 + (2/π) Σ_{n=1}^{∞} ( (−1)^{n+1} / n ) sin nπx , −1 ≤ x ≤ 1.



A graph of the partial sum approximation S10(x) to f(x) is shown in the figure.

Example 6.9 :
Find the Fourier coefficients of the periodic function f(t) given in the figure below. The formula is

f(t) = −k for −π < t < 0 , f(t) = k for 0 < t < π , and f(t + 2π) = f(t)

Note : Functions of this kind occur as external forces acting on mechanical systems,
electromotive forces in electric circuits, etc. (The value of f(t) at a single point does
not affect the integral; hence we can leave f(t) undefined at t = 0, ±π, …)
The Fourier series is

f(t) = a0/2 + Σ_{n=1}^{∞} ( an cos nt + bn sin nt )

Since the function is an odd function, then

a0 = 0 and an = 0 ( work it out ? )

bn = (1/π) ∫_{−π}^{π} f(t) sin nt dt = (1/π) [ ∫_{−π}^{0} (−k) sin nt dt + ∫_{0}^{π} k sin nt dt ]

bn = (2k/(nπ)) (1 − cos nπ)

Then the Fourier series is

f(t) = Σ_{n=1}^{∞} (2k/(nπ)) (1 − cos nπ) sin nt

Since cos nπ = −1 for odd n and +1 for even n, we have (1 − cos nπ) = 2 for odd n and 0 for even n, so

f(t) = (4k/π) [ sin t + (1/3) sin 3t + (1/5) sin 5t + ⋯ ]

Since the solution is an infinite series, it is clearly not possible to plot a graph of the
result. However, by considering finite partial sums, it is possible to plot graphs of
approximations to the series. Denoting the sum of the first N terms in the infinite
series by fN(t), that is

fN(t) = (4k/π) Σ_{n=1}^{N} sin( (2n − 1) t ) / (2n − 1)

The graphs of fN(t) for N = 1, 2, 3 and 20 are as shown in the figure below.

It can be seen that at points where f(t) is continuous the approximation of f(t) by fN(t)
improves as N increases, confirming that the series converges to f(t) at all such points.



It can also be seen that at points of discontinuity of f(t), which occur at t = ±nπ, n =
0, 1, 2, . . ., the series converges to the mean value of the discontinuity, which in this
particular example is (1/2)(−k + k) = 0. As a consequence, the equality sign in the
solution needs to be considered carefully.
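This convergence behaviour can be observed numerically with the partial sums fN(t); the sketch below (editorial, with k = 1) checks a point of continuity and the jump at t = 0:

```python
import math

# Partial sums f_N(t) = (4k/pi) * sum sin((2n-1)t)/(2n-1) of the square-wave
# series from Example 6.9, checked at a point of continuity (t = pi/2, where
# f = k) and at the jump (t = 0, where the series gives the mean value 0).
def fN(t, N, k=1.0):
    return (4*k/math.pi)*sum(math.sin((2*n - 1)*t)/(2*n - 1)
                             for n in range(1, N + 1))

assert fN(0.0, 200) == 0.0                            # mean value at the jump
assert abs(fN(math.pi/2, 200, k=1.0) - 1.0) < 1e-2    # converges to k = 1
```

Near the jump itself the partial sums overshoot (the Gibbs phenomenon), which is why the convergence there is only to the mean value.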

6.4 - Fourier Cosine and Sine Series


When solving partial differential equations, it is often necessary to approximate a
given function f on a half-interval [0, L] by a trigonometric series which contains just
sine functions, or just cosine functions. Suppose you are given a function f(t) defined
on [0, L] and want to approximate it on that half-interval by a series of the form

f(t) ~ Σ_{n=1}^{∞} bn sin(nπt/L)

Since it does not matter what the series converges to on [−L, 0], we can assume that
the function f is extended as an odd function on [−L, L]. Then its full Fourier Series will
contain only sine terms (the coefficients an are all zero if f is odd).
For t ∈ [0, L] the series

f(t) = Σ_{n=1}^{∞} bn sin(nπt/L)

where

bn = (1/L) ∫_{−L}^{L} fo(t) sin(nπt/L) dt = (2/L) ∫_{0}^{L} f(t) sin(nπt/L) dt , n = 1, 2, ….

will converge to f(t) as desired. Note that we never have to define f(t) on [−L, 0], but
just assume that f is odd. The series of sine terms is called a Fourier Sine Series for f(t)
on [0, L]. Similarly, let f(t) be piecewise continuous on the interval [0, L]. The Fourier
cosine series of f(t) on [0, L] is

f(t) = a0/2 + Σ_{n=1}^{∞} an cos(nπt/L)

where

an = (2/L) ∫_{0}^{L} f(t) cos(nπt/L) dt , n = 0, 1, 2, ….

The trigonometric series in the cosine series is just the Fourier series for fe(t), the even
2L-periodic extension of f(t), and that in the sine series is the Fourier series for fo(t), the
odd 2L-periodic extension of f(t). These are called half-range expansions for f(t).

Example 6.10 :
Expand : f(x) = x², 0 < x < L,
(a) in a cosine series (b) in a sine series (c) in a Fourier series.

SOLUTION The graph of the function is given in the figure.

(a) We have

a0 = (2/L) ∫_{0}^{L} x² dx = (2/3) L²

an = (2/L) ∫_{0}^{L} x² cos(nπx/L) dx = (4L²/(n²π²)) (−1)^n

where integration by parts was used twice in the evaluation of an.

f(x) = L²/3 + (4L²/π²) Σ_{n=1}^{∞} ( (−1)^n / n² ) cos(nπx/L)

(b) In this case we must again integrate by parts twice:

bn = (2/L) ∫_{0}^{L} x² sin(nπx/L) dx = (2L²/(nπ)) (−1)^{n+1} + (4L²/(n³π³)) [ (−1)^n − 1 ]

f(x) = Σ_{n=1}^{∞} { (2L²/(nπ)) (−1)^{n+1} + (4L²/(n³π³)) [ (−1)^n − 1 ] } sin(nπx/L)

(c) Here the period is T = L, so the half-period is L/2 and the series contains cos(2nπx/L)
and sin(2nπx/L). We have

a0 = (2/L) ∫_{0}^{L} x² dx = (2/3) L²

an = (2/L) ∫_{0}^{L} x² cos(2nπx/L) dx = L²/(n²π²)

bn = (2/L) ∫_{0}^{L} x² sin(2nπx/L) dx = −L²/(nπ)

f(x) = L²/3 + Σ_{n=1}^{∞} [ (L²/(n²π²)) cos(2nπx/L) − (L²/(nπ)) sin(2nπx/L) ]

The series (a), (b), and (c) converge to the 2L-periodic even extension of f, the 2L-
periodic odd extension of f, and the L-periodic extension of f, respectively. The graphs
of these periodic extensions are shown in the figure.



Exercises ( 6 )
1. Solve the boundary value problem
+ =0
With the boundary condition :
 y(0) = 1 and y(π/2) = 0
 y(0) = 0 and y(π) = 2
2. Solve the boundary value problem
+ 4 = cos
With the boundary condition : y’(0) = 0 and y’(π) = 0
3. In the problem, either solve the given boundary value problem or else show that it has no
solution.
+ =0 With the boundary condition : y(0) = 0 and y(L) = 0
+ 4 = sin With the boundary condition : y(0) = 0 and y(π) = 0

4. In the problem, find the eigenvalues and eigenfunctions of the given boundary value problem.
Assume that all eigenvalues are real.
− + = 0 ℎ (1) = 0 ( ) = 0 > 1

5. Solve the Sturm–Liouville problem


y″ + λy = 0
With the boundary condition : y(0) +y’(0) = 0 and y(1) +3y’(1) = 0

6. Solve the eigen value problem


+2 + + = 0 (0) = 0 (1) = 0

7. Let f ( x ) = π − x . Represent f(x) by a Fourier series over the interval −π < x < π .

8. Find the Fourier series representation of f(x) = x on the interval −2 ≤ x ≤ 2. Then test the convergence of the function and its Fourier representation.

9. Compute the Fourier series for
$$f(x) = \begin{cases} -1, & -\pi < x < 0 \\ \phantom{-}1, & 0 < x \le \pi \end{cases}$$
To which function does the Fourier series converge?

10. A sinusoidal voltage E sin ωt, where t is time, is passed through a half-wave rectifier that clips the negative portion of the wave. Find the Fourier series of the resulting periodic function.
11. Obtain the complex form of the Fourier series of the sawtooth function f(t) defined by
$$f(t) = t, \quad -\pi \le t \le \pi, \qquad f(t + 2\pi) = f(t)$$
Schematically plot the discrete amplitude and phase spectra for the function.
12. Find the Fourier series expansions of the function assuming both odd and even expansions
of the function given by the figure

13. Find the Fourier series of x4 on [−1, 1].


14. Find the Fourier series representation of f (x) = |x| in the interval −L ≤ x ≤ L.
15. Find the complex Fourier series representation of

16. Find the Fourier transforms of the functions
$$f(x) = e^{-|x|} \qquad\text{and}\qquad f(x) = \begin{cases} 1, & |x| < a \\ 0, & |x| > a \end{cases}$$



Part ( 7 ) : Partial Differential Equations
______________________________________________________________________________

7.1 - Introduction
A partial differential equation (PDE) : A differential equation that contains, in
addition to the dependent variable and the independent variables, one or more
partial derivatives of the dependent variable . The key defining property of a partial
differential equation (PDE) is that there is more than one independent variable x, y, . .
. . There is a dependent variable that is an unknown function of these variables u(x, y,
. . . ). We will often denote its derivatives by subscripts; thus ∂u/∂x = ux , and so on.
In general, it may be written in the form
$$F\left(x, y, \ldots, u, u_x, u_y, \ldots, u_{xx}, u_{xy}, \ldots\right) = 0$$
involving several independent variables x, y, . . ., an unknown function u of these variables, and the partial derivatives $u_x, u_y, \ldots, u_{xx}, u_{xy}, \ldots$ of the function. Subscripts on dependent variables denote differentiations, e.g.,
$$u_x = \frac{\partial u}{\partial x}, \qquad u_{xy} = \frac{\partial^2 u}{\partial y\,\partial x}, \quad \ldots$$

Some examples of PDEs (all of which occur in physical theory) are:

1. $u_t + u_x = 0$ ( transport )
2. $u_t + x\,u_x = 0$ ( transport )
3. $u_t + u\,u_x = 0$ ( shock wave )
4. $u_{xx} + u_{yy} = 0$ ( Laplace equation )
5. $u_{tt} - u_{xx} + u^3 = 0$ ( wave with interaction )
6. $u_t + u\,u_x + u_{xxx} = 0$ ( dispersive wave )
7. $u_{tt} + u_{xxxx} = 0$ ( vibrating bar )
8. $i\,u_t - u_{xx} = 0$ ( quantum mechanics )



7.2 - Classification of 1st Order PD Equations
The Linear First Order PDE for u(x, y)
A linear first order PDE for the unknown function u(x, y) can always be written as
$$p(x, y)\,u_x + q(x, y)\,u_y = r(x, y)\,u + s(x, y)$$
where p(x, y), q(x, y), r (x, y), and s(x, y) are arbitrary functions of x and y, and the term s(x, y)
that does not multiply u, ux, or uy is called the nonhomogeneous term. The PDE is called
homogeneous when s(x, y) = 0. When, as often happens, the functions p, q, and r are constants,
the PDE becomes a constant coefficient equation. The equation is called linear because u, ux,
and uy all occur linearly (with degree 1) in each term. A typical linear first order PDE is
$$u_x + u_y = u + 2$$
The Semilinear First Order PDE for u(x, y)
A semilinear first order PDE is slightly more complicated than a linear first order equation
because it is of the form
$$p(x, y)\,u_x + q(x, y)\,u_y = f(x, y, u)$$
where f is an arbitrary nonlinear function of u. A typical example of a semilinear first order PDE is
$$u_x + (1 + y)\,u_y = (1 + x + u^2)$$

The Quasilinear First Order PDE


A quasilinear first order PDE is one that can be written in the form
$$p(x, y, u)\,u_x + q(x, y, u)\,u_y = f(x, y, u)$$
where the functions p and q may or may not depend on x and y, but at least one of them
depends on the undifferentiated function u. When f is present it may or may not depend on all
of x, y, and u, though the presence or absence of f does not alter the quasilinear nature of the
equation. A typical quasilinear first order PDE is
$$u\,u_x + u_y = x$$
The definitions of linearity and quasilinearity extend quite naturally to PDEs of all orders. A PDE
of any order is linear if the unknown function u and all its derivatives only appear linearly (to
degree 1), so a general linear second order PDE for the unknown function u(x, y) can be written
$$a(x,y)\,u_{xx} + b(x,y)\,u_{xy} + c(x,y)\,u_{yy} + d(x,y)\,u_x + e(x,y)\,u_y + f(x,y)\,u = g(x,y)$$



Analogously, a PDE of order n is said to be quasilinear when its partial derivatives of order n
occur linearly in the equation, but combinations of u and some of its derivatives up to order n−1
occur as coefficients of the nth order partial derivatives. A general quasilinear second order
PDE for the unknown function u(x, y) can be written
$$A(x,y,u,u_x,u_y)\,u_{xx} + B(x,y,u,u_x,u_y)\,u_{xy} + C(x,y,u,u_x,u_y)\,u_{yy} = F(x,y,u,u_x,u_y)$$

7.3 - Classification of 2nd Order PD Equations


We shall be primarily concerned with linear second-order partial differential
equations, which frequently arise in problems of mathematical physics. If we let u
denote the dependent variable and let x and y denote the independent variables,
then the general form of a linear second-order partial differential equation is given
by

$$A\,u_{xx} + B\,u_{xy} + C\,u_{yy} + D\,u_x + E\,u_y + F\,u = G$$

where the coefficients A, B, C, . . . , G are functions of x and y. When G(x, y) = 0, the equation is said to be homogeneous; otherwise, it is nonhomogeneous.

A linear second-order partial differential equation in two independent variables with constant
coefficients can be classified as one of three types. This classification depends only on the
coefficients of the second-order derivatives. For the equation to be of second order, A, B, and C
cannot all be zero. Define its discriminant to be $B^2 - 4AC$. The properties and behavior of its solution depend largely on its type, as classified below.
If $B^2 - 4AC > 0$, then the equation is called hyperbolic.
If $B^2 - 4AC = 0$, then the equation is called parabolic.
If $B^2 - 4AC < 0$, then the equation is called elliptic.
In general, elliptic equations describe processes in equilibrium, while hyperbolic and parabolic equations model processes that evolve over time.
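As a small illustration (not part of the notes), the discriminant test can be coded directly; the three model equations below use c = k = 1:

```python
def classify(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + (lower-order terms) = G
    by the sign of the discriminant B^2 - 4AC."""
    d = B ** 2 - 4 * A * C
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

# wave equation u_tt - c^2 u_xx = 0 (c = 1): A = 1, B = 0, C = -1
print(classify(1, 0, -1))   # hyperbolic
# heat equation u_t = k u_xx (k = 1): u_xx is the only second derivative
print(classify(1, 0, 0))    # parabolic
# Laplace equation u_xx + u_yy = 0: A = C = 1, B = 0
print(classify(1, 0, 1))    # elliptic
```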



Applications
In this section we list several physical applications and the PDE used to model them.
a) The heat equation (parabolic)
$$\frac{\partial u}{\partial t} = \alpha^2\frac{\partial^2 u}{\partial x^2}$$
is found in the following applications:


1. Conduction of heat in bars and solids
2. Diffusion of concentration of liquid or gaseous substance in physical chemistry
3. Diffusion of neutrons in atomic piles
4. Diffusion of vorticity in viscous fluid flow
5. Telegraphic transmission in cables of low inductance or capacitance
6. Equalization of charge in electromagnetic theory.
7. Long wavelength electromagnetic waves in a highly conducting medium
b) Laplace’s/Poisson’s equation (elliptic)
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \begin{cases} 0 & \text{(Laplace)} \\ f(x, y) & \text{(Poisson)} \end{cases}$$
is found in the following examples:
1. Steady state temperature
2. Steady state electric field (voltage)
3. Inviscid fluid flow
4. Gravitational field.
c) Wave equation (hyperbolic)
$$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}$$
appears in the following applications:


1. Linearized supersonic airflow
2. Sound waves in a tube or a pipe
3. Longitudinal vibrations of a bar
4. Torsional oscillations of a rod
5. Vibration of a flexible string
6. Transmission of electricity along an insulated low-resistance cable



7.4 - Initial and Boundary Conditions
The PDEs classified in sections 7.2 and 7.3 are special cases of the general linear PDE for an unknown function u(x, y) of the two independent variables x and y, though sometimes with y replaced by t.

Initial conditions
In the case of a linear first order PDE it will be seen later that in principle a general solution can
be found, though usually only the solution of a specific problem is required. In order to specify
such a problem for a first order PDE, the auxiliary condition that identifies the problem uniquely
involves prescribing the value the solution u is required to attain along a line in D. An auxiliary
condition of this nature is called a Cauchy condition, and the problem of finding the solution of
a PDE in D that satisfies a Cauchy condition is called a Cauchy problem for the PDE.
Since solutions of the heat and wave equations depend on time t, we can prescribe what happens at t = 0; that is,
we can give initial conditions (IC). If f(x) denotes the initial temperature distribution throughout
the rod, then a solution u(x,t) must satisfy the single initial condition u(x, 0) = f(x), 0 < x < L. On
the other hand, for a vibrating string we can specify its initial displacement (or shape) f(x) as
well as its initial velocity g(x). In mathematical terms we seek a function u(x, t) that satisfies the equation and the two initial conditions $u(x, 0) = f(x)$ and $u_t(x, 0) = g(x)$.

Boundary condition
The additional conditions may be imposed on spatial boundaries belonging to a region D where
the solution is required, and when this is done the conditions are called boundary conditions.
A typical boundary condition for a second order PDE defined in a rectangle could be that the
solution is required to assume specified values on the sides of the rectangle. If time is involved,
it is necessary to specify how the solution starts, and a condition of this type is called an initial
condition. Problems requiring initial and boundary conditions are called initial boundary value
problems (IBVPs).
Physical problems whose solution is governed by a 2nd order linear PDE of this type are
formulated in some region D of the (x, y)-plane on the boundary Γ of which suitable auxiliary



conditions, called boundary conditions, are imposed that serve to identify a particular problem.
The most important types of boundary conditions are as follows
(a) The specification of the functional form to be taken by the solution u(x, y) on the boundary Γ,
by requiring that
$$u(x, y) = \varphi(x, y) \quad\text{for } (x, y) \text{ on } \Gamma$$
where φ(x, y) is a given function. A boundary condition of this type is called a Dirichlet
condition.
(b)The specification of the functional form to be taken by the derivative of the solution u(x, y)
normal to the boundary Γ, by requiring that

$$\frac{\partial u}{\partial n}(x, y) = \psi(x, y) \quad\text{for } (x, y) \text{ on } \Gamma$$

where ψ(x, y) is a given function and ∂/∂n is the directional derivative normal to the boundary
Γ. A boundary condition of this type is called a Neumann condition.
(c) The specification of the functional form to be taken by a linear combination of a Dirichlet
condition and a Neumann condition by the solution u(x, y) on the boundary Γ, by requiring that

$$a(x, y)\,u(x, y) + b(x, y)\,\frac{\partial u}{\partial n}(x, y) = c(x, y) \quad\text{for } (x, y) \text{ on } \Gamma$$

where a(x, y), b(x, y), and c(x, y) are given functions. A boundary condition of this type is called
a mixed condition, and sometimes either a Robin condition or a boundary condition of the
third kind. When c(x, y) = 0, this condition is called a homogeneous mixed condition.
(d)The specification on Γ of the functional form to be taken by both the solution u(x, y) and its
derivative normal to the boundary, by requiring that

$$u(x, y) = \varphi(x, y) \quad\text{and}\quad \frac{\partial u}{\partial n}(x, y) = \psi(x, y) \quad\text{for } (x, y) \text{ on } \Gamma$$

where φ(x, y) and ψ(x, y) are given functions and ∂/∂n is the directional derivative normal to
the boundary Γ. Boundary conditions of this type are called Cauchy conditions for a second
order PDE. When the solution u is a function of a space variable x and the time t, and Cauchy
conditions are specified when t = 0, so that Γ becomes the x-axis and

$$u(x, 0) = \varphi(x) \quad\text{and}\quad u_t(x, 0) = \psi(x)$$

the Cauchy conditions are usually called initial conditions for a second order PDE.



7.5 - Solution of a PDE
A classical solution of a PDE defined in some region D of the (x,y)-plane is a real function u with
the property that all of its partial derivatives that occur in the PDE are defined and continuous
throughout D, and when the function is substituted into the PDE it satisfies the equation
identically.
The solution u = u(x, y) of any PDE in a region D of the (x, y)-plane where the PDE is defined can
be represented in the form of a surface above D called an integral surface. For most PDEs it is
impossible to find a general solution. Although in principle a general solution of a linear first
order PDE can be found, unlike the general solution of a linear first order ordinary differential
equation (ODE) that contains an arbitrary constant, the general solution of a linear first order
PDE contains an arbitrary function.
A particular solution of a partial differential equation is one that doesn’t contain arbitrary functions or constants.

Superposition principle

If the functions $u_i$, i = 1, 2, … are separately solutions of a linear homogeneous partial differential equation, then the series
$$u = \sum_{i=1}^{\infty} c_i\,u_i$$
is also a solution of the partial differential equation, provided that the derivatives appearing in the PDE can be obtained by term-by-term differentiation of the series.

7.6 - Separation of Variables


We shall introduce one of the most common and elementary methods, called the
method of separation of variables, for solving initial boundary-value problems. The
class of problems for which this method is applicable contains a wide range of
problems of mathematical physics, applied mathematics, and engineering science.
We now describe the method of separation of variables and examine the conditions
of applicability of the method to problems which involve second-order partial
differential equations in two independent variables.



Note that if the partial differential equation has mixed derivatives, the first step is to eliminate them by introducing new coordinates, called characteristic coordinates, and transform the partial differential equation to a canonical form
$$a(\xi, \eta)\,u_{\xi\xi} + c(\xi, \eta)\,u_{\eta\eta} + d(\xi, \eta)\,u_{\xi} + e(\xi, \eta)\,u_{\eta} + f(\xi, \eta)\,u = 0$$
The idea is to think of a solution, say u(x, y), to a partial differential equation as being a linear combination of simple component functions un(x, y), n = 0, 1, 2, …, which also satisfy the differential equation and certain boundary conditions. (This is a reasonable assumption provided the partial differential equation and the boundary conditions are linear.) To determine a component solution, we assume a separable solution of the form:
$$u(x, y) = X(x)\,Y(y)$$
where X and Y are, respectively, functions of x and of y alone, and are twice continuously
differentiable. Substituting this form for a solution into the partial differential equation and
using the boundary conditions leads, in many circumstances, to two ordinary differential
equations for the unknown functions X(x) and Y(y).

7.6.1 - The heat equation


The heat equation describes the temperature of a material as a function of time and space. The equation contains partial derivatives of both time and space variables. We solve this equation using the separation of variables method, which transforms the partial differential equation into a set of infinitely many ordinary differential equations.
The Initial-Boundary Value Problem : Consider a solid bar like the one sketched in the figure below. Let u be the temperature in that bar. Assume that u depends on time t and only one space coordinate x, so the temperature is a function u(x, t). This assumption simplifies the mathematical problem we are about to solve, and it is not unrealistic: this situation exists in the real world. One needs only to thermally insulate all horizontal surfaces of the bar and provide initial and boundary conditions that do not depend on either y or z. Besides assuming that u depends only on t and x, we also assume that the temperature of the bar is held constant on the surfaces x = 0 and x = L, with values u(0, t) = 0 and u(L, t) = 0.



The one-space-dimensional heat equation for the temperature function u is the partial differential equation below. The temperature distribution of the rod is given by the solution of the initial boundary-value problem
$$\frac{\partial u}{\partial t} = \alpha^2\frac{\partial^2 u}{\partial x^2}, \qquad 0 < x < L, \quad t > 0$$
subjected to the initial and boundary conditions:
$$u(0, t) = 0, \quad t \ge 0$$
$$u(L, t) = 0, \quad t \ge 0$$
$$u(x, 0) = f(x), \quad 0 \le x \le L$$
The boundary conditions are Dirichlet conditions.
Note that the equation must be linear and for the time being also homogeneous (no
heat sources or sinks). The boundary conditions must also be linear and
homogeneous.
Step 1 : By the method of separation of variables, we assume a solution in the form
$$u(x, t) = X(x)\,T(t)$$
that is, the solution can be written as a product of a function of x and a function of t. Differentiating and substituting in the heat equation:
$$u_t = X\,T', \qquad u_{xx} = X''\,T$$
$$X\,T' = \alpha^2 X''\,T \quad\Longrightarrow\quad \frac{1}{\alpha^2}\frac{T'}{T} = \frac{X''}{X}$$
whenever XT ≠ 0. Since the left side of the equation is independent of x and the right side is independent of t, we must have
$$\frac{1}{\alpha^2}\frac{T'}{T} = \frac{X''}{X} = -\lambda^2$$
where λ is a separation constant.


Thus we will have two ordinary differential equations:
$$X'' + \lambda^2 X = 0$$
$$T' + \alpha^2\lambda^2 T = 0$$
Step 2 : Separate the boundary conditions.
$$u(0, t) = X(0)\,T(t) = 0, \quad t \ge 0$$
Since $T(t) \ne 0$ for all $t \ge 0$, we then have
$$X(0) = 0 \quad\text{and}\quad X(L) = 0$$
Step 3 : Solution of the eigenvalue problem
$$X'' + \lambda^2 X = 0$$
with boundary conditions X(0) = 0 and X(L) = 0. We look for values of λ which give us nontrivial solutions. This is a regular Sturm–Liouville problem that can be solved to find the eigenvalues and eigenfunctions.
The general solution in this case is of the form
$$X(x) = A\cos\lambda x + B\sin\lambda x$$
From the condition X(0) = 0, we obtain A = 0. The condition X(L) = 0 gives
$$B\sin\lambda L = 0$$
If B = 0, the solution is trivial. For nontrivial solutions, B ≠ 0; hence $\sin\lambda L = 0$. This equation is satisfied when
$$\lambda_n = \frac{n\pi}{L}, \qquad n = 1, 2, \ldots$$
Then the solution will be
$$X_n(x) = B_n\sin\frac{n\pi x}{L}$$

Step 4 : Solution for T(t). For any given n, we get a solution Tn(t) of
$$T' + \alpha^2\lambda_n^2 T = 0$$
$$\frac{T'}{T} = -\alpha^2\lambda_n^2$$
solved by integration to give
$$T_n(t) = C_n\,e^{-\alpha^2\lambda_n^2 t} = C_n\,e^{-(n\pi\alpha/L)^2 t}$$
Step 5 : We now combine the eigenvalues and their eigenfunctions. Hence, the nontrivial solutions of the heat equation which satisfy the two boundary conditions are
$$u_n(x, t) = X_n(x)\,T_n(t) = A_n\sin\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$
where n = 1, 2, 3, … and $\lambda_n = \dfrac{n\pi}{L}$.
Since the PDE is linear and homogeneous, by the superposition principle the sum of them is also a solution. This gives the formal solution
$$u(x, t) = \sum_{n=1}^{\infty} u_n(x, t) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$

Step 6 : Find An from the initial condition. This solution satisfies the PDE and the boundary conditions. To find An, we must use the initial condition
$$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}$$
We conclude that the An must be the Fourier sine coefficients for the odd periodic extension of f(x),
$$A_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx$$
We conclude that a solution of the boundary-value problem described by the heat partial differential equation with the given boundary and initial conditions is given by
$$u(x, t) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$

Discussion on solutions:
• Harmonic oscillation in x, exponential decay in t.
• The speed of decay depends on λn = nπ/L: faster decay for larger n, meaning the high-frequency components are ’killed’ quickly. After a while, what remains in the solution are the terms with small n.
• As t → ∞, we have u(x, t) → 0 for all x. This is called the asymptotic solution or steady state of the heat equation.

In the special case when the initial temperature is u(x, 0) = 100, L = π, and α = 1, you should verify that
$$A_n = \frac{2}{\pi}\int_0^{\pi} 100\sin nx\,dx = \frac{200}{\pi}\,\frac{1 - (-1)^n}{n}$$
and the solution will be
$$u(x, t) = \frac{200}{\pi}\sum_{n=1}^{\infty}\frac{1 - (-1)^n}{n}\,\sin nx\;e^{-n^2 t}$$
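A short sketch of this result in code (assuming nothing beyond the formula just derived) confirms that the partial sums reproduce the initial temperature 100 at t = 0 and decay in time:

```python
import math

def u(x, t, terms=200):
    # partial sum of u(x,t) = (200/pi) * sum (1-(-1)^n)/n * sin(nx) * exp(-n^2 t)
    s = 0.0
    for n in range(1, terms + 1):
        s += (200 / math.pi) * (1 - (-1) ** n) / n * math.sin(n * x) * math.exp(-n ** 2 * t)
    return s

print(u(math.pi / 2, 0.0))   # close to the initial temperature 100
print(u(math.pi / 2, 1.0))   # dominated by the n = 1 mode, about (400/pi) e^{-1}
```

The boundary values u(0, t) and u(π, t) are exactly zero term by term, and the temperature at any point decreases monotonically toward the steady state.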

Example 7.5 : Sinusoidal initial temperature.

Find the solution to the boundary value problem for the one-dimensional heat equation with the initial temperature
$$f(x) = 100\sin\frac{\pi x}{80}$$
The solution
$$u(x, t) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$
must satisfy the initial condition. Substituting t = 0:
$$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L} = 100\sin\frac{\pi x}{80}$$
By comparison, n = 1 and L = 80:
$$A_1 = 100, \qquad A_n = 0 \quad (n \ge 2)$$
$$u(x, t) = 100\sin\frac{\pi x}{80}\,e^{-(\pi\alpha/80)^2 t}$$
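A minimal numerical sketch of this single-mode solution, with α = 1 assumed for illustration (the example leaves α general):

```python
import math

L, alpha, U0 = 80.0, 1.0, 100.0   # bar length, diffusivity (assumed 1), peak temperature

def u(x, t):
    # single surviving mode: 100 sin(pi x/80) exp(-(pi alpha/80)^2 t)
    return U0 * math.sin(math.pi * x / L) * math.exp(-(math.pi * alpha / L) ** 2 * t)

print(u(40, 0))     # midpoint at t = 0: the full 100 degrees
print(u(40, 1000))  # the same point later: decayed
```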



Example 7.6 : Triangular initial temperature in a bar.
Find the solution to the boundary value problem for the one-dimensional heat equation with the triangular initial temperature
$$f(x) = \begin{cases} x, & 0 < x < L/2 \\ L - x, & L/2 < x < L \end{cases}$$
The solution
$$u(x, t) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$
satisfies the PDE and the boundary conditions. To find An, we must use the initial condition
$$u(x, 0) = f(x) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi x}{L}$$
We conclude that the An must be the Fourier sine coefficients for the odd periodic extension of f(x),
$$A_n = \frac{2}{L}\int_0^L f(x)\sin\frac{n\pi x}{L}\,dx = \frac{2}{L}\left[\int_0^{L/2} x\sin\frac{n\pi x}{L}\,dx + \int_{L/2}^{L}(L - x)\sin\frac{n\pi x}{L}\,dx\right]$$
Integrating by parts gives
$$A_n = \frac{4L}{n^2\pi^2}\sin\frac{n\pi}{2} = \begin{cases} \dfrac{4L}{n^2\pi^2}, & n = 1, 5, 9, \ldots \\ \dfrac{-4L}{n^2\pi^2}, & n = 3, 7, 11, \ldots \\ 0, & n \text{ even} \end{cases}$$
and the solution is
$$u(x, t) = \frac{4L}{\pi^2}\left[\sin\frac{\pi x}{L}\,e^{-(\pi\alpha/L)^2 t} - \frac{1}{9}\sin\frac{3\pi x}{L}\,e^{-(3\pi\alpha/L)^2 t} + \cdots\right]$$
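As a check on the integration by parts (with L = 1 as an arbitrary test value), the closed form for An can be compared with direct numerical integration of the Fourier sine formula:

```python
import math

L = 1.0

def f(x):
    # triangular initial temperature
    return x if x <= L / 2 else L - x

def An_numeric(n, steps=20000):
    # A_n = (2/L) * integral_0^L f(x) sin(n pi x/L) dx by the midpoint rule
    h = L / steps
    total = sum(f((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / L)
                for k in range(steps))
    return (2 / L) * total * h

def An_closed(n):
    # closed form obtained by integration by parts: (4L/(n pi)^2) sin(n pi/2)
    return 4 * L / (n * math.pi) ** 2 * math.sin(n * math.pi / 2)

for n in range(1, 6):
    print(n, An_numeric(n), An_closed(n))
```

The even coefficients vanish and the odd ones alternate in sign, matching the case analysis above.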



Example 7.7 : Insulated Ends
Suppose the bar has insulated ends, hence no energy is lost across the ends. The
temperature distribution is modeled by the initial-boundary value problem

= =

Subjected to the initial and boundary conditions :


$$u_x(0, t) = 0, \quad t \ge 0$$
$$u_x(L, t) = 0, \quad t \ge 0$$
$$u(x, 0) = f(x), \quad 0 \le x \le L$$

Separation of variables yields two ordinary differential equations,
$$X'' + \lambda^2 X = 0$$
$$T' + \alpha^2\lambda^2 T = 0$$
The insulation conditions give us
$$X'(0) = 0 \quad\text{and}\quad X'(L) = 0$$
The solution of the eigenvalue problem with these boundary conditions, following the same procedure, gives the eigenvalues and their corresponding eigenfunctions
$$\lambda_n = \frac{n\pi}{L}, \qquad X_n(x) = \cos\frac{n\pi x}{L}, \qquad n = 0, 1, 2, \ldots$$

Solution for T(t). For any given n, we get a solution Tn(t) of
$$T' + \alpha^2\lambda_n^2 T = 0, \qquad \frac{T'}{T} = -\alpha^2\lambda_n^2$$
solved by integration to give, for n = 0, T0(t) = constant, and for n = 1, 2, …,
$$T_n(t) = C_n\,e^{-(n\pi\alpha/L)^2 t}$$
or constant multiples of this function. We now have a function




$$u_n(x, t) = X_n(x)\,T_n(t) = A_n\cos\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$

where n = 1, 2, 3, … and $\lambda_n = \dfrac{n\pi}{L}$.

Since the PDE is linear and homogeneous, by the superposition principle the sum of them is also a solution. This gives the formal solution
$$u(x, t) = \sum_{n=0}^{\infty} u_n(x, t)$$
$$u(x, t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} A_n\cos\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$
Then
$$u(x, 0) = \frac{A_0}{2} + \sum_{n=1}^{\infty} A_n\cos\frac{n\pi x}{L} = f(x)$$
so choose the An’s to be the Fourier cosine coefficients of f on [0, L]:
$$A_n = \frac{2}{L}\int_0^L f(x)\cos\frac{n\pi x}{L}\,dx$$

Suppose the ends of the bar are insulated and the left half of the bar is initially at constant temperature A while the right half is initially at temperature zero. Then
$$f(x) = \begin{cases} A, & 0 \le x \le L/2 \\ 0, & L/2 < x \le L \end{cases}$$
Then
$$A_0 = \frac{2}{L}\int_0^{L/2} A\,dx = A$$
and, for n = 1, 2, …,
$$A_n = \frac{2}{L}\int_0^{L/2} A\cos\frac{n\pi x}{L}\,dx = \frac{2A}{n\pi}\sin\frac{n\pi}{2}$$
Thus the solution will be



$$u(x, t) = \frac{A}{2} + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sin\frac{n\pi}{2}\,\cos\frac{n\pi x}{L}\,e^{-(n\pi\alpha/L)^2 t}$$

Since sin(nπ/2) = 0 if n is even and alternates between +1 and −1 for odd n, we can retain only odd n in this summation to write the solution as

$$u(x, t) = \frac{A}{2} + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{2n - 1}\cos\frac{(2n - 1)\pi x}{L}\,e^{-((2n-1)\pi\alpha/L)^2 t}$$
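A numerical sketch of this solution, with L, α, and A set to illustrative values, shows the two halves relaxing toward the average temperature A/2, which is conserved because no heat crosses the insulated ends:

```python
import math

L, alpha, A = 1.0, 1.0, 10.0   # illustrative bar length, diffusivity, hot-half temperature

def u(x, t, terms=400):
    # A/2 + (2A/pi) * sum (1/n) sin(n pi/2) cos(n pi x/L) exp(-(n pi alpha/L)^2 t)
    s = A / 2
    for n in range(1, terms + 1):
        s += (2 * A / math.pi) / n * math.sin(n * math.pi / 2) \
             * math.cos(n * math.pi * x / L) * math.exp(-((n * math.pi * alpha / L) ** 2) * t)
    return s

print(u(0.25, 0.0))   # inside the initially hot half: near A = 10
print(u(0.75, 0.0))   # inside the initially cold half: near 0
print(u(0.5, 2.0))    # long-time limit: the average A/2 = 5
```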

7.6.2 - The Laplace equation


The Laplace equation in two and three dimensions characterizes a large group of physical
problems that are independent of the time, and for this reason they are usually called steady
state problems, the most important of which are in the description of gravitational fields,
diffusion, electric current flow, magnetism, and elasticity. Moreover, a solution of Laplace’s
equation can also be interpreted as a steady-state temperature distribution. Solutions of
Laplace’s equation are also called potential functions because of the role played by the
gravitational potential that determines the gravitational force acting on a body and the electric
potential in space caused by a potential distribution on electrically conducting walls present in,
and possibly bounding, the space. The Laplace equation in two dimensions is

$$\frac{\partial^2\varphi}{\partial x^2} + \frac{\partial^2\varphi}{\partial y^2} = 0$$

As the Laplace equation can be written as Δφ = 0, the symbol Δ is called the Laplacian operator in two dimensions, and Δφ is called the Laplacian of φ. Consequently, a function φ will be harmonic if its Laplacian is zero.
The simplest boundary value problems for the Laplace equation involve specifying either φ on
the boundary or the derivative of φ normal to the boundary usually denoted by ∂φ/∂n. The
specification of φ on the boundary is called a Dirichlet boundary condition, and the
requirement that φ satisfy the Laplace equation and a Dirichlet boundary condition is called a
Dirichlet boundary value problem for the harmonic function φ. The specification of ∂φ/∂n on
the boundary of R is called a Neumann boundary condition, and the requirement that φ satisfy



both the Laplace equation and a Neumann boundary condition is called a Neumann boundary
value problem for the harmonic function φ. Dirichlet and Neumann boundary value problems
are also known as boundary value problems of the first and second kind, respectively.

We begin by considering the rectangle $R = \{(x, y) : 0 \le x \le a,\; 0 \le y \le b\}$, solving boundary value problems for Laplace’s equation with the following possible boundary conditions:
$$(1 - \alpha)\,u(x, 0) + \alpha\,u_y(x, 0) = f_0(x), \quad 0 \le x \le a$$
$$(1 - \beta)\,u(x, b) + \beta\,u_y(x, b) = f_1(x), \quad 0 \le x \le a$$
$$(1 - \gamma)\,u(0, y) + \gamma\,u_x(0, y) = g_0(y), \quad 0 \le y \le b$$
$$(1 - \delta)\,u(a, y) + \delta\,u_x(a, y) = g_1(y), \quad 0 \le y \le b$$
where α, β, γ, δ can each be either 0 or 1; thus, there are 16 possibilities. Let BVP(α, β, γ, δ; f0, f1, g0, g1) denote the problem of finding a solution of Laplace’s equation that satisfies these conditions.
This is a Dirichlet problem if α = β = γ = δ = 0, or a Neumann problem if α = β = γ = δ = 1. The other 14 problems are mixed.
Therefore we concentrate on problems where only one of the functions f0, f1, g0, g1 isn’t identically zero. There are 64 (count them!) problems of this form. Each has homogeneous boundary conditions on three sides of the rectangle, and a nonhomogeneous boundary condition on the fourth.

Dirichlet Problem for a Rectangle


The region D exerts a great influence on our ability to explicitly solve a Dirichlet problem, or
even whether a solution exists. Some regions admit solutions by Fourier methods.
Let D be the solid rectangle consisting of points (x, y) with 0 ≤ x ≤ a, 0≤ y ≤ b. We will solve the
Dirichlet problem for D.
This problem can be solved by separation of variables if the boundary data is nonzero on only
one side of D. We will illustrate this for the case that this is the upper horizontal side of D.
The problem in this case is
Consider the potential problem in a rectangular region R : { 0 < x < a , 0 < y < b }



$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad (x, y) \in R$$

subjected to the boundary conditions:

$$u(x, 0) = 0, \qquad u(x, b) = f(x), \qquad u(0, y) = u(a, y) = 0$$

By the method of separation of variables, we assume a separable solution of the form
$$u(x, y) = X(x)\,Y(y)$$
Then
$$X''Y + XY'' = 0$$
$$\frac{X''}{X} = -\frac{Y''}{Y} = -\lambda^2$$
Thus,
$$X'' + \lambda^2 X = 0$$
$$Y'' - \lambda^2 Y = 0$$
For nontrivial solutions of the problem, only λ² > 0 gives an acceptable solution. The general solution in this case is of the form
$$X(x) = A\cos\lambda x + B\sin\lambda x$$
with the boundary conditions
$$X(0) = X(a) = 0$$
Application of the boundary conditions then yields A = 0 and, for nontrivial solutions, B ≠ 0; hence $\sin\lambda a = 0$. This equation is satisfied when
$$\lambda_n = \frac{n\pi}{a}, \qquad n = 1, 2, \ldots$$



Then the solution will be
$$X_n(x) = B_n\sin\frac{n\pi x}{a}$$
The solution of the Y equation is clearly
$$Y(y) = C\,e^{\lambda y} + D\,e^{-\lambda y}$$
and the boundary condition
$$Y(0) = 0$$
then yields D = −C. Substituting back in the solution we get
$$Y(y) = C\left(e^{\lambda y} - e^{-\lambda y}\right)$$
or
$$Y(y) = 2C\sinh\lambda y, \qquad Y_n(y) = C_n\sinh\frac{n\pi y}{a}$$
Hence, the nontrivial solutions of the Laplace equation satisfy the three homogeneous boundary conditions, and since the PDE is linear and homogeneous, by the superposition principle the solution will be the sum of all solutions; the general solution will be
$$u(x, y) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi x}{a}\,\sinh\frac{n\pi y}{a}$$
The coefficients an can be evaluated using the fourth boundary condition
$$u(x, b) = f(x)$$
$$u(x, b) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi x}{a}\,\sinh\frac{n\pi b}{a} = f(x)$$
This corresponds to a Fourier sine series, with the coefficients calculated by the relation
$$a_n\sinh\frac{n\pi b}{a} = \frac{2}{a}\int_0^a f(x)\sin\frac{n\pi x}{a}\,dx$$
Then
$$a_n = \frac{2}{a\,\sinh\frac{n\pi b}{a}}\int_0^a f(x)\sin\frac{n\pi x}{a}\,dx$$

With this choice of coefficients, the solution can be written
$$u(x, y) = \sum_{n=1}^{\infty}\left[\frac{2}{a\,\sinh\frac{n\pi b}{a}}\int_0^a f(\xi)\sin\frac{n\pi\xi}{a}\,d\xi\right]\sin\frac{n\pi x}{a}\,\sinh\frac{n\pi y}{a}$$
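To check the formula, one can pick boundary data f(x) = sin(πx/a), for which only the n = 1 term survives and the exact solution sin(πx/a) sinh(πy/a)/sinh(πb/a) is known (a = b = 1 below are illustrative choices):

```python
import math

a, b = 1.0, 1.0   # illustrative rectangle dimensions

def f(x):
    # boundary data on the top side y = b, chosen so only the n = 1 term survives
    return math.sin(math.pi * x / a)

def an(n, steps=4000):
    # a_n = 2/(a sinh(n pi b/a)) * integral_0^a f(x) sin(n pi x/a) dx (midpoint rule)
    h = a / steps
    integral = sum(f((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / a)
                   for k in range(steps)) * h
    return 2 / (a * math.sinh(n * math.pi * b / a)) * integral

def u(x, y, terms=10):
    return sum(an(n) * math.sin(n * math.pi * x / a) * math.sinh(n * math.pi * y / a)
               for n in range(1, terms + 1))

# for this f the exact solution is sin(pi x/a) sinh(pi y/a) / sinh(pi b/a)
exact = math.sin(math.pi * 0.3) * math.sinh(math.pi * 0.6) / math.sinh(math.pi)
print(u(0.3, 0.6), exact)
```

The series also vanishes identically on the bottom side y = 0 and reproduces f on the top side y = b.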

A Neumann Problem for a Rectangle


Consider the potential problem in a rectangular region R : { 0 < x < a , 0 < y < b }

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \qquad (x, y) \in R$$

subjected to the boundary conditions:

$$u_y(x, 0) = u_y(x, b) = 0, \quad 0 < x < a$$
$$u_x(0, y) = 0, \quad u_x(a, y) = g(y), \quad 0 < y < b$$

By the method of separation of variables, we assume a separable solution of the form $u(x, y) = X(x)\,Y(y)$. Then
$$X''Y + XY'' = 0$$
$$\frac{X''}{X} = -\frac{Y''}{Y} = \lambda^2$$
Thus,
$$X'' - \lambda^2 X = 0 \quad\text{with}\quad X'(0) = 0$$
$$Y'' + \lambda^2 Y = 0 \quad\text{with}\quad Y'(0) = 0 \text{ and } Y'(b) = 0$$
This Sturm–Liouville problem for Y has eigenvalues and eigenfunctions. The general solution in this case is of the form
$$Y(y) = A\cos\lambda y + B\sin\lambda y$$
$$Y'(y) = -A\lambda\sin\lambda y + B\lambda\cos\lambda y$$
Application of the boundary conditions then yields B = 0 and, for nontrivial solutions, A ≠ 0; hence $\sin\lambda b = 0$. This equation is satisfied when the eigenvalues are
$$\lambda_n = \frac{n\pi}{b}, \qquad n = 0, 1, 2, \ldots$$
Or the eigen values are



And their eigenfunctions
$$Y_n(y) = \cos\frac{n\pi y}{b}, \qquad n = 0, 1, 2, \ldots$$

The solution of the X equation is clearly
$$X(x) = C\,e^{\lambda x} + D\,e^{-\lambda x}$$
This problem for X has only a boundary condition at x = 0, so we apply it directly:
$$X'(x) = C\lambda\,e^{\lambda x} - D\lambda\,e^{-\lambda x}$$
$$X'(0) = \lambda(C - D) = 0$$
so C = D. This means that X(x) must have the form
$$X(x) = C\left(e^{\lambda x} + e^{-\lambda x}\right)$$
or
$$X(x) = 2C\cosh\lambda x$$
For n = 0, we get X0(x) = constant. For n = 1, 2, …,
$$X_n(x) = \cosh\frac{n\pi x}{b}$$
or constant multiples of this function. We now have a function
$$u_n(x, y) = X_n(x)\,Y_n(y) = A_n\cosh\frac{n\pi x}{b}\,\cos\frac{n\pi y}{b}$$
where n = 1, 2, 3, … and $\lambda_n = \dfrac{n\pi}{b}$.


We have used the zero boundary conditions on the top, bottom, and left sides of the rectangle. To satisfy the last boundary condition (on the right side) attempt a superposition
$$u(x, y) = A_0 + \sum_{n=1}^{\infty} A_n\cosh\frac{n\pi x}{b}\,\cos\frac{n\pi y}{b}$$
$$\frac{\partial u}{\partial x}(a, y) = g(y) = \sum_{n=1}^{\infty} A_n\,\frac{n\pi}{b}\sinh\frac{n\pi a}{b}\,\cos\frac{n\pi y}{b}$$

This is a Fourier cosine expansion of g(y) on [0, b].



Because this expansion contains no constant term, the mean of g must vanish:
$$\frac{1}{b}\int_0^b g(y)\,dy = 0$$
and we would have a contradiction if this integral were not zero; in that event, the problem would have no solution.
For the other coefficients in this cosine expansion, we have
$$A_n\,\frac{n\pi}{b}\sinh\frac{n\pi a}{b} = \frac{2}{b}\int_0^b g(y)\cos\frac{n\pi y}{b}\,dy$$
$$A_n = \frac{2}{n\pi\,\sinh\frac{n\pi a}{b}}\int_0^b g(y)\cos\frac{n\pi y}{b}\,dy$$
And the solution of the Neumann problem is
$$u(x, y) = A_0 + \sum_{n=1}^{\infty} A_n\cosh\frac{n\pi x}{b}\,\cos\frac{n\pi y}{b}$$
with A0 an arbitrary constant.
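As a sketch (with illustrative dimensions and the data g(y) = cos(πy/b), which satisfies the compatibility condition), the coefficients can be computed numerically and the boundary derivative checked by finite differences:

```python
import math

a, b = 2.0, 1.0   # illustrative rectangle dimensions

def g(y):
    # Neumann data on x = a; its mean over [0, b] is zero, so the
    # compatibility condition (1/b) * integral of g dy = 0 is satisfied
    return math.cos(math.pi * y / b)

def mean_g(steps=4000):
    h = b / steps
    return sum(g((k + 0.5) * h) for k in range(steps)) * h / b

def An(n, steps=4000):
    # A_n (n pi/b) sinh(n pi a/b) = (2/b) * integral_0^b g(y) cos(n pi y/b) dy
    h = b / steps
    integral = sum(g((k + 0.5) * h) * math.cos(n * math.pi * (k + 0.5) * h / b)
                   for k in range(steps)) * h
    return (2 / b) * integral / ((n * math.pi / b) * math.sinh(n * math.pi * a / b))

def u(x, y, terms=5):
    # A_0 is arbitrary in a Neumann problem; take A_0 = 0
    return sum(An(n) * math.cosh(n * math.pi * x / b) * math.cos(n * math.pi * y / b)
               for n in range(1, terms + 1))

# check the normal derivative at x = a against g by a finite difference
h = 1e-6
dudx = (u(a, 0.25) - u(a - h, 0.25)) / h
print(mean_g(), dudx, g(0.25))
```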

Example 7.8 :
Find the steady state temperature distribution T(x, y) in the uniform slab of metal shown in the figure, given that no heat sources are present in the slab and the temperatures on the boundaries are as specified below. T satisfies
$$\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} = 0$$
subjected to the boundary conditions:
$$T(x, 0) = 0, \quad T(x, a) = 0, \quad 0 < x < \infty, \qquad T(0, y) = f(y)$$
where f(y) is a bounded function. State any additional condition that must be imposed on T(x, y) for the solution to be physically possible.



As the metal is uniform and there are no heat sources present, it follows that the steady state temperature must be a solution of the Laplace equation. The sides of the slab are parallel to the coordinate axes, and the equation is homogeneous, so we may separate variables by setting
$$T(x, y) = X(x)\,Y(y)$$
Then
$$X''Y + XY'' = 0$$
$$\frac{X''}{X} = -\frac{Y''}{Y} = \lambda^2$$
Thus,
$$X'' - \lambda^2 X = 0, \qquad Y'' + \lambda^2 Y = 0$$
For nontrivial solutions of the problem, only λ² > 0 gives an acceptable solution. The general solution of the Y equation is of the form
$$Y(y) = A\cos\lambda y + B\sin\lambda y$$
with the boundary conditions
$$Y(0) = Y(a) = 0$$
Application of the boundary conditions then yields A = 0 and, for nontrivial solutions, B ≠ 0; hence $\sin\lambda a = 0$. This equation is satisfied when
$$\lambda_n = \frac{n\pi}{a}, \qquad n = 1, 2, \ldots$$
Then the solution will be
$$Y_n(y) = B_n\sin\frac{n\pi y}{a}$$

The solution of the X equation is clearly
$$X_n(x) = C_n\,e^{-n\pi x/a} + D_n\,e^{n\pi x/a}$$
To make further progress it is now necessary to recognize that when no sources are present in the metal, and a finite temperature is imposed along the boundary x = 0, 0 < y < a, a physically possible temperature distribution is one that must be bounded throughout the metal. This being so, we must set the coefficients Dn = 0 to remove the terms exp(nπx/a) that would otherwise become infinite as x → ∞; then
$$X_n(x) = C_n\,e^{-n\pi x/a}$$
Then the solution will be
$$T_n(x, y) = a_n\sin\frac{n\pi y}{a}\,e^{-n\pi x/a}$$
Hence we seek a solution in the form
$$T(x, y) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi y}{a}\,e^{-n\pi x/a}$$
If we set x = 0 in this summation and use the boundary condition T(0, y) = f(y), this reduces to
$$T(0, y) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi y}{a} = f(y)$$
from which it follows in the usual manner, by Fourier series, that
$$a_n = \frac{2}{a}\int_0^a f(y)\sin\frac{n\pi y}{a}\,dy$$

7.6.3 - The wave equation


As a first example, we shall consider the problem of a vibrating string of constant tension T∗ and density ρ, with c² = T∗/ρ, stretched along the axis from 0 to L and fixed at its end points:
$$\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}, \qquad 0 < x < L, \quad t > 0$$
subjected to the initial and boundary conditions:
$$u(0, t) = 0, \quad t \ge 0$$
$$u(L, t) = 0, \quad t \ge 0$$
$$u(x, 0) = f(x), \quad 0 \le x \le L$$
$$u_t(x, 0) = g(x), \quad 0 \le x \le L$$
where f and g are the initial displacement and initial velocity respectively.
By the method of separation of variables, we assume a separable solution of the form
$$u(x, t) = X(x)\,T(t)$$
Differentiating and substituting in the wave equation:
$$\frac{1}{c^2}\frac{T''}{T} = \frac{X''}{X}$$
whenever XT ≠ 0. Since the left side of the equation is independent of x and the right side is independent of t, we must have
$$\frac{1}{c^2}\frac{T''}{T} = \frac{X''}{X} = -\lambda^2$$
where (−λ²) is a separation constant. Thus,
$$X'' + \lambda^2 X = 0$$
$$T'' + c^2\lambda^2 T = 0$$
We now separate the boundary conditions.
$$u(0, t) = X(0)\,T(t) = 0, \quad t \ge 0$$
Since $T(t) \ne 0$ for all $t \ge 0$, we then have
$$X(0) = 0 \quad\text{and}\quad X(L) = 0$$
To determine X(x) we first solve
$$X'' + \lambda^2 X = 0$$
with boundary conditions X(0) = 0 and X(L) = 0. We look for values of λ² which give us nontrivial solutions. We consider three possible cases:
$$\lambda^2 < 0, \qquad \lambda^2 = 0, \qquad \lambda^2 > 0$$
Case 1. λ² < 0
Writing λ² = −μ² with μ > 0, the general solution in this case is of the form
$$X(x) = A\,e^{\mu x} + B\,e^{-\mu x}$$
where A and B are arbitrary constants. To satisfy the boundary conditions, we must have
$$A + B = 0$$
$$A\,e^{\mu L} + B\,e^{-\mu L} = 0$$
Consequently, A and B must both be zero, and hence the general solution X(x) is identically zero. The solution is trivial and hence of no interest.
Case 2. λ² = 0
Here, the general solution is
$$X(x) = A + Bx$$
Applying the boundary conditions, we have A = 0 and A + BL = 0. Hence A = B = 0. The solution is thus identically zero.
Case 3. >
The general solution in this case is of the form
    X(x) = A cos(√λ x) + B sin(√λ x)

From the condition X(0) = 0, we obtain A = 0. The condition X(L) = 0 gives

    B sin(√λ L) = 0

If B = 0, the solution is trivial. For nontrivial solutions B ≠ 0; hence sin(√λ L) = 0.
This equation is satisfied when √λ L = nπ , n = 1, 2, …, or

    λ_n = (nπ/L)²

Then the solution will be

    X_n(x) = B_n sin(nπx/L)

The second DE will be

    T″ + c²λ_n T = 0

The general solution in this case is of the form

    T_n(t) = C_n cos(nπct/L) + D_n sin(nπct/L)

Then the solution will be

    u_n(x, t) = X_n(x) T_n(t)

Let

    u_n(x, t) = [a_n cos(nπct/L) + b_n sin(nπct/L)] sin(nπx/L)        n = 1, 2, …

Here the λ_n are eigenvalues and the u_n(x, t) are eigenfunctions. The set of eigenvalues
{λ₁, λ₂, …} is called the spectrum. T(t) gives the change of amplitude in t, a harmonic
oscillation, and different n give different motions. These are called modes.
Since the PDE is linear and homogeneous, by the superposition principle the solution will be
the sum of all solutions u_n; the general solution of the PDE will be

    u(x, t) = Σ_{n=1}^∞ [a_n cos(nπct/L) + b_n sin(nπct/L)] sin(nπx/L)

By applying the initial conditions

    u(x, 0) = f(x)        0 ≤ x ≤ L

    u(x, 0) = Σ_{n=1}^∞ a_n sin(nπx/L) = f(x)



For the initial velocity, differentiate the series with respect to t. We have

    u_t(x, t) = Σ_{n=1}^∞ (nπc/L) [−a_n sin(nπct/L) + b_n cos(nπct/L)] sin(nπx/L)

    u_t(x, 0) = g(x)        0 ≤ x ≤ L

    u_t(x, 0) = Σ_{n=1}^∞ b_n (nπc/L) sin(nπx/L) = g(x)

These equations will be satisfied if f(x) and g(x) can be represented by Fourier sine series:

    a_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx

    b_n = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx

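The coefficient formulas can be sanity-checked numerically. The sketch below (the values L = 2, c = 1 and data g(x) = sin(3πx/L) are assumptions for illustration) uses orthogonality of the sine family: only b₃ = L/(3πc) should survive.

```python
import numpy as np
from scipy.integrate import quad

# Assumed illustrative data: L = 2, c = 1, initial velocity g(x) = sin(3*pi*x/L)
L, c = 2.0, 1.0
g = lambda x: np.sin(3 * np.pi * x / L)

def b(n):
    # b_n = 2/(n*pi*c) * integral_0^L g(x) sin(n*pi*x/L) dx
    val, _ = quad(lambda x: g(x) * np.sin(n * np.pi * x / L), 0, L)
    return 2.0 / (n * np.pi * c) * val

# Orthogonality: only the n = 3 coefficient is nonzero
print([round(b(n), 6) for n in range(1, 6)])
```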
Example 7.9 : Zero Initial Displacement


We will solve the initial-boundary value problem for the displacement of the string if there is an
initial velocity but no initial displacement. The problem is

    ∂²u/∂t² = c² ∂²u/∂x²        0 < x < L ,  t > 0

subject to the initial and boundary conditions:

    u(0, t) = 0 ,  u(L, t) = 0        t ≥ 0
    u(x, 0) = 0
    u_t(x, 0) = g(x)        0 ≤ x ≤ L

Using separation of variables, the problem splits into two ODEs.
The eigenvalue problem

    X″ + λX = 0        with the homogeneous conditions X(0) = 0 ,  X(L) = 0

has eigenvalues λ_n = (nπ/L)², and the solutions are X_n(x) = B_n sin(nπx/L).

The second problem is

    T″ + c²λ_n T = 0

but now the zero initial displacement gives us u(x, 0) = X(x)T(0) = 0, so T(0) = 0. Solutions
of this problem for T(t) have the form

    T_n(t) = sin(nπct/L)

The solutions of the problem will be of the form

    u_n(x, t) = X_n(x) T_n(t) = sin(nπx/L) sin(nπct/L)        n = 1, 2, …

These satisfy the wave equation, the boundary conditions, and the initial condition
u(x, 0) = 0. To satisfy the initial velocity condition, we will generally (depending on g)
need a superposition

    u(x, t) = Σ_{n=1}^∞ C_n sin(nπx/L) sin(nπct/L)

We must choose the C_n's to satisfy the initial velocity condition. Differentiating,

    u_t(x, t) = Σ_{n=1}^∞ C_n (nπc/L) sin(nπx/L) cos(nπct/L)

Using the initial condition,

    u_t(x, 0) = Σ_{n=1}^∞ C_n (nπc/L) sin(nπx/L) = g(x)

To find C_n we use the Fourier sine expansion:

    C_n = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx

Suppose the string is released from its horizontal position with an initial velocity given
by g(x) = x(1 + cos(πx/L)). First compute the integral

    C_n = (2/(nπc)) ∫₀^L x(1 + cos(πx/L)) sin(nπx/L) dx
        = 3L²/(2cπ²)                        n = 1
        = 2L²(−1)ⁿ/(cπ²n²(n² − 1))          n = 2, 3, …

Then the solution will be

    u(x, t) = (3L²/(2cπ²)) sin(πx/L) sin(πct/L)
              + Σ_{n=2}^∞ [2L²(−1)ⁿ/(cπ²n²(n² − 1))] sin(nπx/L) sin(nπct/L)

Then with c = 1 and L = π, this solution is

    u(x, t) = (3/2) sin x sin t + Σ_{n=2}^∞ [2(−1)ⁿ/(n²(n² − 1))] sin nx sin nt

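The piecewise formula for C_n can be verified by direct quadrature; the script below is a numerical sketch for the case c = 1, L = π.

```python
import numpy as np
from scipy.integrate import quad

# Check of the closed-form coefficients for g(x) = x*(1 + cos(pi*x/L)),
# with c = 1 and L = pi as in the worked example.
c, L = 1.0, np.pi
g = lambda x: x * (1 + np.cos(np.pi * x / L))

def C_numeric(n):
    # C_n = 2/(n*pi*c) * integral_0^L g(x) sin(n*pi*x/L) dx
    val, _ = quad(lambda x: g(x) * np.sin(n * np.pi * x / L), 0, L)
    return 2.0 / (n * np.pi * c) * val

def C_closed(n):
    # C_1 = 3/2 ;  C_n = 2(-1)^n / (n^2 (n^2 - 1)) for n >= 2
    return 1.5 if n == 1 else 2 * (-1) ** n / (n ** 2 * (n ** 2 - 1))

print([round(C_numeric(n) - C_closed(n), 10) for n in range(1, 6)])
```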


7.7 - Solution by Laplace Transform
We have already illustrated the effective use of Laplace transforms in solving ordinary
differential equations. The transform replaces a differential equation in y(t) with an algebraic
equation in its transform Y(s). It is then a matter of finding the inverse transform of Y(s),
either by partial fractions and/or tables. Laplace transforms also provide a potent technique
for solving partial differential equations. When the transform is applied to the variable t in a
partial differential equation for a function u(x, t), the result is an ordinary differential
equation for the transform U(x, s). The ordinary differential equation is solved for U(x, s),
which is then inverted to yield u(x, t).
The major advantage of the Laplace transform, as with any integral transform, is that it
reduces a partial differential equation to an ordinary differential equation; its applicability
is restricted to linear partial differential equations.
Given a function u(x, t) defined for all t > 0 and assumed to be bounded, we can apply the
Laplace transform in t, considering x as a parameter. The transform is defined as

    ℒ{u(x, t)} = ∫₀^∞ e^(−st) u(x, t) dt ≡ U(x, s)

In applications to PDEs we need the following:

    ℒ{u_t(x, t)} = s U(x, s) − u(x, 0)
    ℒ{u_tt(x, t)} = s² U(x, s) − s u(x, 0) − u_t(x, 0)
    ℒ{u_x(x, t)} = dU(x, s)/dx
    ℒ{u_xx(x, t)} = d²U(x, s)/dx²

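These operational rules can be checked symbolically. The following SymPy sketch verifies the first rule on an assumed sample function u(x, t) = e^(−t) sin x.

```python
import sympy as sp

# Symbolic sketch: verify  L{u_t} = s*U(x,s) - u(x,0)
# on the assumed sample function u(x,t) = exp(-t)*sin(x).
x, t = sp.symbols("x t", positive=True)
s = sp.symbols("s", positive=True)

u = sp.exp(-t) * sp.sin(x)
U = sp.laplace_transform(u, t, s, noconds=True)              # sin(x)/(s + 1)
lhs = sp.laplace_transform(sp.diff(u, t), t, s, noconds=True)
rhs = s * U - u.subs(t, 0)

print(sp.simplify(lhs - rhs))   # 0
```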
Example 7.12 : Heat equation by Laplace transform


We consider a semi-infinite insulated bar which is initially at a constant temperature u₀;
the end x = 0 is then held at zero temperature. The parabolic PDE is

    ∂u/∂t = c² ∂²u/∂x²        x > 0 ,  t > 0

subject to the initial and boundary conditions:

    u(x, 0) = u₀        0 ≤ x < ∞



    u(0, t) = 0        t ≥ 0
    u(x, t) remains bounded as x → ∞ ,  t ≥ 0

Take the LT of the PDE:

    ℒ{u_t} = c² ℒ{u_xx}

    s U(x, s) − u(x, 0) = c² d²U/dx²

    s U(x, s) − u₀ = c² d²U/dx²

    d²U(x, s)/dx² − (s/c²) U(x, s) = −u₀/c²

This is a second-order linear nonhomogeneous ODE, so

    U(x, s) = U_h(x, s) + U_p(x, s)

Homogeneous solution:

    m² − s/c² = 0  →  m = ∓√s/c

    U_h(x, s) = A e^(√s x/c) + B e^(−√s x/c)
Particular solution: try U_p = K (a constant); then

    −(s/c²) K = −u₀/c²  →  K = u₀/s

Then

    U(x, s) = U_h(x, s) + U_p(x, s) = A e^(√s x/c) + B e^(−√s x/c) + u₀/s

Evidently A = 0 from the third condition (boundedness as x → ∞), so

    U(x, s) = B e^(−√s x/c) + u₀/s

Since u(0, t) = 0 we have U(0, s) = 0, then

    B = −u₀/s

And so

    U(x, s) = u₀/s − (u₀/s) e^(−x√s/c) = u₀ (1 − e^(−x√s/c))/s


Taking the Laplace inverse,

    u(x, t) = u₀ ℒ⁻¹{(1 − e^(−x√s/c))/s}

    u(x, t) = u₀ [1 − erfc(x/(2c√t))] = u₀ erf(x/(2c√t))
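The closed-form answer can be checked symbolically. The SymPy sketch below assumes u₀ = 1 and c = 1 and verifies that the erf solution satisfies the heat equation and the boundary condition at x = 0.

```python
import sympy as sp

# Symbolic sketch (assumed values u0 = 1, c = 1): check that
# u(x,t) = erf(x/(2*sqrt(t))) satisfies u_t = u_xx and u(0,t) = 0.
x, t = sp.symbols("x t", positive=True)

u = sp.erf(x / (2 * sp.sqrt(t)))
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))

print(residual)         # 0
print(u.subs(x, 0))     # 0  (boundary condition at x = 0)
```

As t → 0⁺ with x > 0 fixed, erf(x/(2√t)) → 1, which recovers the initial condition u(x, 0) = u₀.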
Example 7.13 :
Consider the parabolic PDE

    ∂u/∂t = c² ∂²u/∂x²        0 < x < 1 ,  t > 0

subject to the initial and boundary conditions:

    u(0, t) = 1        t ≥ 0
    u(1, t) = 1        t ≥ 0
    u(x, 0) = 1 + sin πx        0 ≤ x ≤ 1

Take the LT of the PDE:

    ℒ{u_t} = c² ℒ{u_xx}

    s U(x, s) − u(x, 0) = c² d²U/dx²

    s U(x, s) − (1 + sin πx) = c² d²U/dx²

    d²U(x, s)/dx² − (s/c²) U(x, s) = −(1 + sin πx)/c²

This is a second-order linear nonhomogeneous ODE, so

    U(x, s) = U_h(x, s) + U_p(x, s)

Homogeneous solution:

    m² − s/c² = 0  →  m = ∓√s/c

    U_h(x, s) = A e^(√s x/c) + B e^(−√s x/c)

Particular solution: try

    U_p = a + b sin πx ,  U_p′ = bπ cos πx ,  U_p″ = −bπ² sin πx

Substituting,

    −bπ² sin πx − (s/c²)(a + b sin πx) = −(1 + sin πx)/c²



    −(s/c²) a − (π² + s/c²) b sin πx = −(1 + sin πx)/c²

    a = 1/s ,  b = 1/(s + c²π²)

Then

    U_p(x, s) = 1/s + (1/(s + c²π²)) sin πx

The solution will be

    U(x, s) = A e^(√s x/c) + B e^(−√s x/c) + 1/s + (1/(s + c²π²)) sin πx
The Laplace transform of the BCs:

    u(0, t) = 1  →  U(0, s) = 1/s
    u(1, t) = 1  →  U(1, s) = 1/s

From the first condition,

    1/s = A + B + 1/s  →  A = −B

From the second condition, since sin π = 0,

    1/s = A e^(√s/c) + B e^(−√s/c) + 1/s

    0 = A (e^(√s/c) − e^(−√s/c))  →  A = 0 ,  B = 0

Then the solution will be

    U(x, s) = 1/s + (1/(s + c²π²)) sin πx

Taking the Laplace inverse,

    u(x, t) = 1 + e^(−c²π²t) sin πx

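A symbolic check of the result (with c = 1 assumed for the sketch):

```python
import sympy as sp

# Symbolic sketch (c = 1 assumed): check that u = 1 + exp(-pi**2*t)*sin(pi*x)
# satisfies u_t = u_xx and the initial/boundary data of Example 7.13.
x, t = sp.symbols("x t", positive=True)

u = 1 + sp.exp(-sp.pi**2 * t) * sp.sin(sp.pi * x)
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))

print(residual)                        # 0
print(u.subs(x, 0), u.subs(x, 1))     # boundary values, both 1
print(sp.simplify(u.subs(t, 0)))      # 1 + sin(pi*x), the initial condition
```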
Example 7.14 :

    ∂²u/∂t² = ∂²u/∂x² + sin πx        0 < x < 1 ,  t > 0

subject to the initial and boundary conditions:

    u(0, t) = u(1, t) = 0        t ≥ 0
    u(x, 0) = 0 ,  u_t(x, 0) = 0        0 ≤ x ≤ 1

Take the LT of the PDE:

    ℒ{u_tt} = s² U(x, s) − s u(x, 0) − u_t(x, 0) = s² U(x, s)

    ℒ{u_xx} = d²U/dx²

    ℒ{sin πx} = (1/s) sin πx

Then

    d²U(x, s)/dx² − s² U(x, s) = −(1/s) sin πx

This is a second-order linear nonhomogeneous ODE, so

    U(x, s) = U_h(x, s) + U_p(x, s)

Homogeneous solution:

    m² − s² = 0  →  m = ∓s

    U_h(x, s) = A e^(sx) + B e^(−sx)

Particular solution: try

    U_p = a cos πx + b sin πx
    U_p′ = −aπ sin πx + bπ cos πx
    U_p″ = −aπ² cos πx − bπ² sin πx

Therefore:

    −π²(a cos πx + b sin πx) − s²(a cos πx + b sin πx) = −(1/s) sin πx

    (s² + π²) a cos πx + (s² + π²) b sin πx = (1/s) sin πx

    a = 0 ,  b = 1/(s(s² + π²))

Then

    U_p(x, s) = (1/(s(s² + π²))) sin πx
The solution will be

    U(x, s) = A e^(sx) + B e^(−sx) + (1/(s(s² + π²))) sin πx

Applying the boundary conditions:

    U(0, s) = A + B = 0
    U(1, s) = A e^(s) + B e^(−s) = 0

Solving the two relations gives A = B = 0.
The solution will be

    U(x, s) = (1/(s(s² + π²))) sin πx
Finally, applying the inverse Laplace transform,

    u(x, t) = ℒ⁻¹{U(x, s)} = sin πx · ℒ⁻¹{1/(s(s² + π²))}

Using partial fractions,

    1/(s(s² + π²)) = (1/π²)[1/s − s/(s² + π²)]

    ℒ⁻¹{1/(s(s² + π²))} = (1/π²)(1 − cos πt)

Then the solution will be

    u(x, t) = (1/π²)(1 − cos πt) sin πx

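A symbolic check that this u satisfies the forced wave equation and the zero initial data:

```python
import sympy as sp

# Symbolic sketch: check that u = (1 - cos(pi*t))*sin(pi*x)/pi**2 satisfies
# u_tt = u_xx + sin(pi*x) together with u(x,0) = u_t(x,0) = 0.
x, t = sp.symbols("x t", positive=True)

u = (1 - sp.cos(sp.pi * t)) * sp.sin(sp.pi * x) / sp.pi**2
residual = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.sin(sp.pi * x))

print(residual)                                  # 0
print(u.subs(t, 0), sp.diff(u, t).subs(t, 0))    # 0 0  (initial conditions)
```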
Example 7.15 :
Solve the wave equation

    ∂²u/∂t² = c² ∂²u/∂x²        x > 0 ,  t > 0

for a semi-infinite string by Laplace transforms, subject to the initial and boundary conditions:

    (a) u(x, 0) = 0 ,  x ≥ 0    (string initially undisturbed);
    (b) u_t(x, 0) = x e^(−x/a) ,  x ≥ 0    (string given an initial velocity);
    (c) u(0, t) = 0 ,  t ≥ 0    (string held at x = 0);
    (d) u(x, t) → 0 as x → ∞ ,  t ≥ 0    (string held at infinity).

Taking Laplace transforms on both sides of the wave equation,

    c² d²U(x, s)/dx² = s² U(x, s) − s u(x, 0) − u_t(x, 0)

    c² d²U(x, s)/dx² = s² U(x, s) − x e^(−x/a)

or

    d²U(x, s)/dx² − (s/c)² U(x, s) = −(x/c²) e^(−x/a)
we obtain a solution of the differential equation as (solve it)

    U(x, s) = A e^(sx/c) + B e^(−sx/c) + [ x/(s² − c²/a²) − (2c²/a)/(s² − c²/a²)² ] e^(−x/a)

Transforming the given boundary conditions (c) and (d), we have U(0, s) = 0 and U(x, s) → 0 as x
→ ∞, which can be used to determine A and B. From the second condition A = 0, and the first
condition then gives

    B = (2c²/a)/(s² − c²/a²)²

so that

    U(x, s) = (2c²/a) e^(−sx/c)/(s² − c²/a²)²
              + [ x/(s² − c²/a²) − (2c²/a)/(s² − c²/a²)² ] e^(−x/a)

Let α = c/a. Then

    U(x, s) = x e^(−x/a)/(s² − α²) + 2cα e^(−sx/c)/(s² − α²)² − 2cα e^(−x/a)/(s² − α²)²
Fortunately, in this case these transforms can be inverted from tables of Laplace transforms.
Using the second shift theorem together with the Laplace transform pairs

    ℒ⁻¹{α/(s² − α²)} = sinh αt

    ℒ⁻¹{2α³/(s² − α²)²} = αt cosh αt − sinh αt

the solution will be

    u(x, t) = e^(−x/a) [ (x/α) sinh αt − (c/α²)(αt cosh αt − sinh αt) ]
              + (c/α²) H(t − x/c) [ α(t − x/c) cosh α(t − x/c) − sinh α(t − x/c) ]

where H is the Heaviside unit step function.

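The second transform pair can be verified symbolically; the SymPy sketch below checks it for the assumed numeric value α = 2.

```python
import sympy as sp

# Symbolic sketch: verify the inversion pair used above,
#   L{ a*t*cosh(a*t) - sinh(a*t) } = 2*a**3 / (s**2 - a**2)**2 ,
# for the assumed value a = 2 (so 2*a**3 = 16).
t = sp.symbols("t", positive=True)
s = sp.symbols("s", positive=True)

F = sp.laplace_transform(2 * t * sp.cosh(2 * t) - sp.sinh(2 * t), t, s, noconds=True)
print(sp.simplify(F - 16 / (s**2 - 4) ** 2))   # 0
```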


Exercise ( 7 )
1. Find the solution of the heat equation

    ∂u/∂t = c² ∂²u/∂x²        0 < x < L ,  t > 0

subject to the initial and boundary conditions:

    u(0, t) = 0        t ≥ 0
    u(L, t) = 0        t ≥ 0
    u(x, 0) = x(L − x)        0 ≤ x ≤ L
2. Solve Laplace's equation

    ∂²u/∂x² + ∂²u/∂y² = 0        0 ≤ x ≤ a ,  0 ≤ y ≤ b

subject to:

    u = 0 when x = 0, i.e. u(0, y) = 0 for 0 ≤ y ≤ b
    u = 0 when x = a, i.e. u(a, y) = 0 for 0 ≤ y ≤ b
    u = 0 when y = b, i.e. u(x, b) = 0 for 0 ≤ x ≤ a
    u = f(x) when y = 0, i.e. u(x, 0) = f(x) for 0 ≤ x ≤ a
3. Solve the potential problem

    ∂²u/∂x² + ∂²u/∂y² = 0        on the region {0 < x < π ,  0 < y < 1}

subject to the mixed boundary conditions

    u_x(0, y) = 0 ,  u_x(π, y) = 0
    u(x, 0) = cos x ,  u(x, 1) = 0
4. A metal bar, insulated along its sides, is 1 m long. It is initially at a room
temperature of 15 °C and at time t = 0 the ends are placed into ice at 0 °C. Find
an expression for the temperature at a point P at a distance x m from one end
at any time t seconds after t = 0.
5. Write the d'Alembert solution for the one-dimensional wave equation problem

    ∂²u/∂t² = c² ∂²u/∂x²        −∞ < x < ∞ ,  0 < t < ∞

with u(x, 0) = f(x) ,  u_t(x, 0) = g(x) :

 c = 1, f(x) = x², g(x) = −x
 c = 4, f(x) = x² − 2x, g(x) = cos(x)
 c = 12, f(x) = −5x + x², g(x) = 3

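For reference when working item 5, d'Alembert's formula u(x, t) = ½[f(x + ct) + f(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds can be sketched as code. The helper below is an illustration (not from the notes), checked on the first data set, for which the formula works out to u = x² + t² − xt.

```python
from scipy.integrate import quad

# Illustrative helper (an assumption, not from the notes): d'Alembert's
# solution u(x,t) = (f(x+ct) + f(x-ct))/2 + (1/(2c)) * int_{x-ct}^{x+ct} g(s) ds
def dalembert(f, g, c, x, t):
    avg = 0.5 * (f(x + c * t) + f(x - c * t))
    integral, _ = quad(g, x - c * t, x + c * t)
    return avg + integral / (2.0 * c)

# First data set: c = 1, f(x) = x**2, g(x) = -x, whose closed form
# reduces to u = x**2 + t**2 - x*t.
u = dalembert(lambda x: x**2, lambda x: -x, 1.0, x=0.7, t=1.3)
print(u, 0.7**2 + 1.3**2 - 0.7 * 1.3)
```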


6. Solve by separation of variables the one-dimensional wave equation problem

    ∂²u/∂t² = 1.44 ∂²u/∂x²

subject to the initial and boundary conditions:

    u(0, t) = 0 ,  u(L, t) = 0        t ≥ 0
    u(x, 0) = 0
    u_t(x, 0) = sin(ηx)        0 ≤ x ≤ L

where η is a non-integer positive number.

7. Consider the 1st-order PDE:

    ∂u/∂x + ∂u/∂t = 0

subject to the conditions

    u(0, t) = t ,  u(x, 0) = 0

Solve by:
    i. Separation of variables
    ii. Laplace transform
8. Solve by Laplace transform the heat equation

    ∂u/∂t = c² ∂²u/∂x²

subject to the initial and boundary conditions:

    u(0, t) = cos t ,  u(L, t) = 0        t ≥ 0
    u(x, 0) = 0        0 ≤ x ≤ L
9. Solve by Laplace transform the wave equation

    ∂²u/∂t² = c² ∂²u/∂x²        x > 0 ,  t > 0

subject to the initial and boundary conditions:

    u(0, t) = 0        t ≥ 0
    u(x, 0) = 0 ,  u_t(x, 0) = 1 − H(x − 1)        x ≥ 0

where H is the unit step function.
10. Solve by Laplace transform:

    ∂u/∂x + ∂u/∂t = x        x > 0 ,  t > 0

subject to the initial and boundary conditions:

    u(x, 0) = 0        x ≥ 0
    u(0, t) = 0        t ≥ 0
Laplace Transform Tables
