Engineering Mathematics
[A] = ⎡ a11 a12 . . . a1n ⎤
      ⎢ a21 a22 . . . a2n ⎥
      ⎢  .   .        .   ⎥
      ⎣ am1 am2 . . . amn ⎦
A horizontal set of elements is called a row and a vertical set is called a column. The
first subscript i always designates the number of the row in which the element lies.
The second subscript j designates the column.
Each matrix has rows and columns and this defines the size of the matrix. If a matrix
[A] has m rows and n columns, the size of the matrix is denoted by (m×n). The
matrix [A] may also be denoted by [A]m×n to show that [A] is a matrix with m rows and
n columns.
Each entry in the matrix is called an element of the matrix and is denoted by aij, where i is the row number and j is the column number of the element. The set of m×n matrices with real number entries is denoted ℝ^(m×n). The set of m×n matrices with complex entries is ℂ^(m×n).
If the number of rows (m) of a matrix is equal to the number of columns (n) of the
matrix, (m = n), it is called a square matrix. The entries a11, a22, . . . ann are called the
diagonal elements of a square matrix. This diagonal is also called the principal or main diagonal of the matrix.
An identity matrix is a diagonal matrix where all elements on the main diagonal are
equal to 1 and all the other elements in the matrix are zeros.
Matrix operations
Addition and subtraction : Two matrices [A] and [B] can be added or subtracted only
if they are the same size and the result is given by
[C] = [A] ± [B]   where  cij = aij ± bij
[A] + [B] = [B] + [A]  Commutative law of addition
[A]+ ([B]+ [C]) = ([A]+ [B])+ [C] Associative law of addition
Multiplication : Two matrices [A] and [B] can be multiplied only if the number of
columns of [A] is equal to the number of rows of [B] to give
[C]m×p = [A]m×n [B]n×p
If [A] is a n × n matrix and k is a real number, then the scalar product of k and [A] is
another matrix [B], where bij = k aij .
Associative law of multiplication : If [A], [B] and [C] are m× n, n × p and p × r size
matrices, respectively, then [A]([B][C]) = ([A][B])[C]
and the resulting matrix size on both sides is m× r.
Commutative law of multiplication : matrix multiplication is not commutative in general, [A] [B] ≠ [B] [A].
[A] and [B] are said to commute if [A] [B] = [B] [A].
Distributive law: If [A] and [B] are m× n size matrices, and [C] and [D] are n × p size
matrices
[A]([C]+ [D]) = [A][C]+ [A][D] and ([A]+ [B])[C] = [A][C]+ [B][C]
and the resulting matrix size on both sides is m× p.
Linearity [A] ( α[B]+ β[C] ) = α[A][B]+ β[A][C]
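The laws above can be checked numerically. The sketch below is illustrative; the helper names madd and mmul are not from the notes, and matrices are represented as lists of rows:

```python
# A minimal sketch (not from the notes): list-of-lists matrices and the
# addition/multiplication laws above, checked on small examples.

def madd(A, B):
    # entrywise sum: c_ij = a_ij + b_ij (sizes must match)
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mmul(A, B):
    # c_ij = sum_k a_ik * b_kj (columns of A must equal rows of B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

# Commutative law of addition
assert madd(A, B) == madd(B, A)
# Associative law of multiplication: A(BC) = (AB)C
assert mmul(A, mmul(B, C)) == mmul(mmul(A, B), C)
# Distributive law: A(B + C) = AB + AC
assert mmul(A, madd(B, C)) == madd(mmul(A, B), mmul(A, C))
# Multiplication is generally NOT commutative
assert mmul(A, B) != mmul(B, A)
```

The final assertion shows why the order of the factors must always be kept when manipulating matrix products.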
Transpose of a matrix :
Let [A] be an m×n matrix. Then [B] is the transpose of [A] if bji = aij for all i and j. That is, the element in the ith row and jth column of [A] is the element in the jth row and ith column of [B]. Note that [B] is an n×m matrix. The transpose of [A] is denoted by [A]T. Note that the transpose of a row vector is a column vector and the transpose of a column vector is a row vector.
Determinant of a matrix
A determinant of a square matrix is a single unique real number corresponding to a
matrix. For a matrix [A], determinant is denoted by |A| or det(A). So do not use [A]
and |A| interchangeably. If [A] and [B] are square matrices of same size, then
det (AB) = det (A) det (B). A matrix [A] is said to be singular if det(A) = 0. It is called non-singular if det(A) ≠ 0.
Determinant theorems : Let [A] be an n×n matrix.
1. If a row or a column of [A] is zero, then det(A) = 0.
2. If a row is proportional to another row, then det(A) = 0.
3. If a column is proportional to another column, then det(A) = 0.
4. If a column or row of [A] is multiplied by k to give matrix [B], then det(B) = k det(A).
5. det(In) = 1, where In is the n×n identity matrix.
6. If [B] is obtained from [A] by interchanging two rows, then det(B) = − det(A).
7. If [B] is obtained from [A] by multiplying a row by c, then det(B) = c det(A).
8. If [A] is in upper triangular, lower triangular or diagonal form, then its determinant is the product of the diagonal elements.
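Several of these theorems can be verified on small matrices. A minimal sketch (illustrative; det is a hypothetical helper using cofactor expansion, which is simple but not efficient for large n):

```python
# An illustrative check (not from the notes) of the determinant theorems,
# using a small recursive cofactor expansion along the first row.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 5]]

# det(AB) = det(A) det(B)
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert det(AB) == det(A) * det(B)

# interchanging two rows flips the sign (theorem 6)
assert det([A[1], A[0]]) == -det(A)

# a triangular matrix: det = product of diagonal elements (theorem 8)
T = [[2, 7, 1], [0, 3, 5], [0, 0, 4]]
assert det(T) == 2 * 3 * 4
```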
Inverse of a matrix
The inverse of a square matrix [A], if it exists, is denoted by [A]−1 and satisfies
[A][A]−1 = [I] = [A]−1[A]
In other words, let [A] be a square matrix. If [B] is another square matrix of the same size such that [B][A] = [I], then [B] is the inverse of [A], and [A] is said to be invertible or non-singular. If [A]−1 does not exist, [A] is said to be non-invertible or singular.
Special types of matrices
There are a number of special forms of square matrices that are important and should
be noted:
A skew-symmetric matrix is one where aij = − aji for all i and j, that is, [A]T = − [A]. Its diagonal elements are necessarily zero.
Zero matrix: A matrix whose all entries are zero is called a zero matrix
An upper triangular matrix is one where all the elements below the main diagonal
are zero, and A lower triangular matrix is one where all elements above the main
diagonal are zero. A diagonal matrix is a square matrix where all elements off the
main diagonal are equal to zero.
An orthogonal matrix is one whose transpose equals its inverse: [A]T = [A]−1.
Diagonally dominant matrix : A general n×n matrix A = (aij) is row diagonally dominant if
|aii| ≥ Σ_{j≠i} |aij|   for i = 1, 2, …, n
with strict inequality for at least one row. That is, for each row, the absolute value of the diagonal element is greater than or equal to the sum of the absolute values of the rest of the elements of that row, and the inequality is strict for at least one row. Diagonally dominant matrices are important in ensuring convergence of iterative schemes for solving simultaneous linear equations.
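The row-dominance test just described can be sketched as follows (illustrative code; the function name is an assumption):

```python
# A small sketch (names are illustrative) of the row diagonal dominance
# condition |a_ii| >= sum_{j != i} |a_ij|, strict for at least one row.

def is_row_diagonally_dominant(A):
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strict = True
    return strict

# dominant: the kind of matrix for which Jacobi/Gauss-Seidel iteration converges
A = [[4, -1, 1], [1, 5, 2], [0, 1, 3]]
assert is_row_diagonally_dominant(A)

B = [[1, 2], [3, 1]]          # |1| < |2| in the first row
assert not is_row_diagonally_dominant(B)
```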
Tridiagonal matrices: A tridiagonal matrix is a square matrix in which all elements are zero except those on the main diagonal, the diagonal above the main diagonal, and the diagonal below the main diagonal.
A banded matrix has all elements equal to zero, with the exception of a band centered on the main diagonal.
A finite set of linear equations is called a system of linear equations or, more briefly, a
linear system. The variables are called unknowns. A general set of “m” linear equations in “n” unknowns has the form
a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
. . .
am1 x1 + am2 x2 + . . . + amn xn = bm
where the a's are constant coefficients, the b's are constants, the x's are unknowns, n is the number of unknowns and m is the number of equations.
The system is called linear because each variable x appears in the first power only, just
as in the equation of a straight line. a11, … , amn are given numbers, called the
coefficients of the system. b1, … , bm on the right are also given numbers. If all the bj
are zero, then the system is called a homogeneous system. If at least one bj is not
zero, then it is called a nonhomogeneous system.
A solution of the system is a set of numbers x1, … , xn that satisfies all the m
equations. A solution vector is a vector x whose components form a solution of the system.
Consider a system with m equations and n unknowns:
If m < n, the system is an underdetermined system.
If m = n, the system is a determined system.
If m > n, the system is an over-determined system.
Matrix form of linear systems
The system can be rewritten in the matrix form [A][x] = [b] :
⎡ a11 a12 . . . a1n ⎤ ⎡ x1 ⎤   ⎡ b1 ⎤
⎢ a21 a22 . . . a2n ⎥ ⎢ x2 ⎥   ⎢ b2 ⎥
⎢  .   .        .   ⎥ ⎢ .  ⎥ = ⎢ .  ⎥
⎣ am1 am2 . . . amn ⎦ ⎣ xn ⎦   ⎣ bm ⎦
Note that the augmented matrix [A | b] determines the system completely because it contains all the given numbers appearing in the system.
The system of linear equations is homogeneous if the vector of the constants [b] = 0, that is a
zero vector. And the system is non-homogeneous if [b] is not zero.
Clearly, the interchange of two equations does not alter the solution set. Neither does the addition of a multiple of one equation to another, because we can undo it by a corresponding subtraction. Similarly, multiplication of an equation by a nonzero constant a does not alter the solution set, because we can undo it by multiplying the new equation by 1/a (since a ≠ 0), producing the original equation.
Equivalent Linear Systems : Let [A b] and [C d] be augmented matrices of two linear
systems. Then the two linear systems are said to be equivalent if [C d] can be obtained
from [A b] by application of a finite number of elementary row operations.
Systems having the same solution sets are often called equivalent systems. But note well that
we are dealing with row operations. No column operations on the augmented matrix are
permitted in this context because they would generally alter the solution set.
In the Gauss elimination method, the system is reduced to triangular form by application of elementary row operations. This elimination process is also called the forward elimination method. The new system can then be solved by a technique called backward substitution (or forward substitution): the unknowns are found starting from the bottom (or the top) of the system.
Example 1.8 :
Solve the following linear system by Gauss elimination method.
x + y + z = 3, x + 2y + 2z = 5, 3x + 4y + 4z = 11
Subtracting the first equation from the second gives y + z = 2, and subtracting 3 times the first from the third gives y + z = 2 again. Then
x = 1, y = 2 − z
with z arbitrary. In other words, the system has an infinite number of solutions.
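A quick numerical check (illustrative, not part of the notes) that the one-parameter family x = 1, y = 2 − z with z arbitrary satisfies all three equations of Example 1.8:

```python
# Residuals of the three equations of Example 1.8: all should be zero
# for every member of the parametric solution family.

def residuals(x, y, z):
    return (x + y + z - 3,
            x + 2*y + 2*z - 5,
            3*x + 4*y + 4*z - 11)

for z in (-2.0, 0.0, 1.5, 7.0):      # any z works: infinitely many solutions
    x, y = 1.0, 2.0 - z
    assert residuals(x, y, z) == (0.0, 0.0, 0.0)
```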
1.2.3 - Row echelon form matrix
The original system of m equations in n unknowns has augmented matrix [A | b]. This
is to be row reduced to matrix [R | f]. The two systems Ax = b and Rx = f are
equivalent: if either one has a solution, so does the other, and the solutions are the same.
The Column Space of a Matrix
Another important subspace associated with a matrix is its column space. Unlike the null space, the column space is defined explicitly via linear combinations. The column space of an m×n matrix A, written as Col A, is the set of all linear combinations of the columns of A.
Example 1.11 :
Find a spanning set for the null space of the system Ax = 0.
The first step is to find the general solution of the system Ax = 0 in terms of the free variables by row reducing the augmented system [A 0]:
−3 6 −1 1 −7 0
1 −2 2 3 −1 0
2 −4 5 8 −4 0
By elementary row operations (ERO), the system is converted to reduced row echelon form (rref):
1 −2 0 −1 3 0
0 0 1 2 −2 0
0 0 0 0 0 0
Then : the free variables are x2, x4 and x5
x1 = 2x2 + x4 – 3x5 and x3 = -2x4 + 2x5
Expressing the general solution as a linear combination of vectors weighted by the free variables:
⎡ x1 ⎤   ⎡ 2x2 + x4 − 3x5 ⎤      ⎡ 2 ⎤      ⎡ 1 ⎤      ⎡ −3 ⎤
⎢ x2 ⎥   ⎢       x2       ⎥      ⎢ 1 ⎥      ⎢  0 ⎥      ⎢  0 ⎥
⎢ x3 ⎥ = ⎢  −2x4 + 2x5    ⎥ = x2 ⎢ 0 ⎥ + x4 ⎢ −2 ⎥ + x5 ⎢  2 ⎥
⎢ x4 ⎥   ⎢       x4       ⎥      ⎢ 0 ⎥      ⎢  1 ⎥      ⎢  0 ⎥
⎣ x5 ⎦   ⎣       x5       ⎦      ⎣ 0 ⎦      ⎣  0 ⎦      ⎣  1 ⎦
x = x2 u + x4 v + x5 w
Then every linear combination of the vectors u, v and w is an element of the null space of the system. Thus {u, v, w} is a spanning set for Nul A.
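As a check (illustrative, not part of the notes), one can verify directly that the vectors u, v and w of Example 1.11 lie in the null space of A:

```python
# Verify that the spanning vectors of Example 1.11 satisfy A x = 0.

A = [[-3, 6, -1, 1, -7],
     [ 1, -2, 2, 3, -1],
     [ 2, -4, 5, 8, -4]]

u = [2, 1, 0, 0, 0]       # weight of the free variable x2
v = [1, 0, -2, 1, 0]      # weight of x4
w = [-3, 0, 2, 0, 1]      # weight of x5

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

for vec in (u, v, w):
    assert matvec(A, vec) == [0, 0, 0]

# any linear combination is then also in Nul A
combo = [2*a + 3*b - 1*c for a, b, c in zip(u, v, w)]
assert matvec(A, combo) == [0, 0, 0]
```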
Example 1.12 :
Determine whether b is in the column space of A and if so, express b as a linear
combination of the column vectors of A:
The augmented matrix for the linear system that corresponds to the system is
Therefore, the system has no solution (i.e. the system is inconsistent). Since the
equation Ax = b has no solution, therefore b is not in the column space of A.
Note : Consistent system of equations can only have a unique solution or infinite solutions
AND cannot have a finite (more than one but not infinite) number of solutions.
A homogeneous linear system always has the trivial solution x1 = 0, …, xn = 0. Nontrivial solutions exist if and only if rank(A) < n. If rank(A) = r < n, these solutions, together with x = 0, form a vector space of dimension (n − r) called the solution space of the system.
In particular, if x(1) and x(2) are solution vectors of the system, then x = c1 x(1) + c2 x(2) with any scalars c1 and c2 is a solution vector of the system. (This does not hold for nonhomogeneous systems. Also, the term solution space is used for homogeneous systems only.)
The solution space of the homogeneous system is also called the null space of A because
Ax = 0 for every x in the solution space. Its dimension is called the nullity of A.
Rank ( A ) + Nullity ( A ) = n ( the number of unknowns)
Nonhomogeneous Linear System
If a nonhomogeneous linear system is consistent, then all of its solutions are obtained in the
form :
x = x0 + xh
where:
x0 is any (fixed or particular) solution of the system, and
xh runs through all the solutions of the corresponding homogeneous system.
If there are special nonzero column vectors x such that the output y is proportional to
the input x, then these special vectors are called eigenvectors and the proportionality
constants are called eigenvalues. If the output y is proportional to the nonzero input
x, then the equation y = Ax = λx must be satisfied, where λ is the scalar proportionality
constant. If the equation Ax = λx has nonzero solutions, then one can write
( A − λI ) x = 0
Cramer’s rule states that in order for this last equation to have a nonzero solution, the determinant of the coefficient matrix of the unknowns x1, x2, . . . , xn must be zero. This requires that
det( A − λI ) = 0
Applications
Eigenvectors and eigenvalues have many important applications. In aeronautical engineering, eigenvalues may determine whether the flow over a wing is laminar or turbulent.
In electrical engineering they may determine the frequency response of an amplifier or the
reliability of a national power system. In structural mechanics eigenvalues may determine
whether an automobile is too noisy or whether a building will collapse in an earthquake.
In probability they may determine the rate of convergence of a Markov process. In ecology
they may determine whether a food web will settle into a steady equilibrium. In numerical
analysis they may determine whether a discretization of a differential equation will get the right
answer or how fast a conjugate gradient iteration will converge.
Eigen space
The set of all solutions of (A - λI)x = 0 is just the null space of the matrix A - λI. So this
set is a subspace of Rn and is called the eigenspace of A corresponding to λ. The
eigenspace consists of the zero vector and all the eigenvectors corresponding to λ .
Example 1.14 :
Find the eigen values and the corresponding eigen spaces for the matrix
A = ⎡  2  −4 ⎤
    ⎣ −1  −1 ⎦
We first seek all scalars λ so that Ax = λx :
⎡  2  −4 ⎤ ⎡ x1 ⎤ = λ ⎡ x1 ⎤
⎣ −1  −1 ⎦ ⎣ x2 ⎦     ⎣ x2 ⎦
( ⎡  2  −4 ⎤ − λ ⎡ 1 0 ⎤ ) ⎡ x1 ⎤ = ⎡ 0 ⎤
( ⎣ −1  −1 ⎦     ⎣ 0 1 ⎦ ) ⎣ x2 ⎦   ⎣ 0 ⎦
⎡ 2−λ   −4  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1   −1−λ ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦
The above system of linear equations has nontrivial solutions precisely when
| 2−λ   −4  |
| −1   −1−λ | = 0
( 2 − λ )( −1 − λ ) − 4 = λ² − λ − 6 = 0
λ² − λ − 6 = ( λ + 2 )( λ − 3 ) = 0
Then the eigen values are : λ=-2 and λ=3
Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 21
The spectrum is σ(A) = { −2, 3 }
Let's find the eigenvectors corresponding to λ1 = 3.
The system will be : ( A − 3I ) x = 0
⎡ 2−3   −4  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1   −1−3 ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦
⎡ −1  −4 ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣ −1  −4 ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦
The solution of the linear system will be x1 = −4 x2 ; in parametric form, x2 = t and x1 = −4t.
The eigenvectors corresponding to λ1 = 3 are the nonzero multiples of ( −4, 1 )ᵀ, and the set
{ t ( −4, 1 )ᵀ : t ∈ ℝ }
is called the eigenspace for λ1 = 3.
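A short numeric cross-check of Example 1.14 (illustrative sketch, not part of the notes):

```python
# Eigenvalues of the 2x2 matrix of Example 1.14 from the characteristic
# polynomial, plus the eigenvector test A v = λ v.

import math

a, b, c, d = 2, -4, -1, -1          # A = [[2, -4], [-1, -1]]

# characteristic polynomial: λ² - (trace)λ + det = 0
tr, det = a + d, a*d - b*c
disc = math.sqrt(tr*tr - 4*det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert {lam1, lam2} == {3.0, -2.0}      # the spectrum {−2, 3}

# v = (−4, 1) is an eigenvector for λ = 3: A v = 3 v
v = (-4, 1)
Av = (a*v[0] + b*v[1], c*v[0] + d*v[1])
assert Av == (3*v[0], 3*v[1])
```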
Example 1.15 :
Find the eigen values and the corresponding eigen spaces for the matrix
A = ⎡  5   8   16 ⎤
    ⎢  4   1    8 ⎥
    ⎣ −4  −4  −11 ⎦
det( A − λI ) =
| 5−λ    8     16  |
|  4    1−λ     8  |  = −( λ − 1 )( λ + 3 )² = 0
| −4    −4   −11−λ |
so the eigenvalues are λ1 = 1 and λ2 = λ3 = −3.
For λ1 = 1 the system ( A − I ) x = 0 is
⎡  4   8   16 ⎤ ⎡ x1 ⎤   ⎡ 0 ⎤
⎢  4   0    8 ⎥ ⎢ x2 ⎥ = ⎢ 0 ⎥
⎣ −4  −4  −12 ⎦ ⎣ x3 ⎦   ⎣ 0 ⎦
and by Gauss elimination we get x1 = −2 x3 and x2 = − x3.
The eigenvectors corresponding to λ1 = 1 are the nonzero multiples of ( −2, −1, 1 )ᵀ, and
{ t ( −2, −1, 1 )ᵀ : t ∈ ℝ } is the eigenspace for λ1 = 1.
Example 1.16 :
Determine the eigenvalues and eigenvectors of
A = ⎡ 2  −1 ⎤
    ⎣ 5  −2 ⎦
The system ( A − λI ) x = 0 has nontrivial solutions precisely when
| 2−λ   −1  |
|  5   −2−λ | = ( 2 − λ )( −2 − λ ) + 5 = λ² + 1 = 0
so the eigenvalues are the complex conjugate pair λ1 = i and λ2 = −i.
For λ1 = i the system ( A − iI ) x = 0 is
⎡ 2−i   −1  ⎤ ⎡ x1 ⎤ = ⎡ 0 ⎤
⎣  5   −2−i ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦
Dividing R1 by ( 2 − i ) and then replacing R2 by R2 − 5 R1 gives
⎡ 1  −1/(2−i)  0 ⎤
⎣ 0      0     0 ⎦
The solution of the linear system will be x2 = ( 2 − i ) x1.
The eigenvectors corresponding to λ1 = i are the nonzero multiples of ( 1, 2−i )ᵀ, which can be put in the form ( 2+i, 5 )ᵀ. Similarly ( work out the details ), the eigenvectors corresponding to λ2 = −i are the nonzero multiples of ( 1, 2+i )ᵀ, i.e. of ( 2−i, 5 )ᵀ.
Diagonalization of an n × n matrix
Let the n × n matrix A have n eigenvalues λ1, λ2, . . . , λn, not all of which need be distinct, and let there be n corresponding linearly independent eigenvectors x1, x2, . . . , xn, so that
A xi = λi xi ,   i = 1, 2, . . . , n
then if we define the matrix X to be the n × n matrix in which the ith column is the eigenvector
xi , with i = 1, 2, . . . , n, so that in partitioned form
X = [ x1 x2 … xn ]
and let D be the n × n diagonal matrix
D = ⎡ λ1  ⋯  0  ⎤
    ⎢ ⋮   ⋱  ⋮  ⎥
    ⎣ 0   ⋯  λn ⎦
i.e. the matrix whose diagonal entries are the eigenvalues of the matrix A and whose other entries are all zero.
Then
D = X⁻¹ A X
In such a case we call A diagonalizable and say that X diagonalizes A.
Diagonalization also makes it easy to evaluate matrix functions such as the matrix exponential
e^A = I + A + A²/2! + ⋯ + A^k/k! + ⋯
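A minimal numeric sketch of diagonalization, reusing the matrix and eigenvectors of Example 1.14 (an assumption; this section does not fix a particular matrix):

```python
# With X built column-by-column from eigenvectors, X^{-1} A X is diagonal.

A = [[2, -4], [-1, -1]]
X = [[-4, 1],            # columns: eigenvector (-4, 1) for λ = 3,
     [ 1, 1]]            #          eigenvector (1, 1)  for λ = -2

def mmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# 2x2 inverse: (1/det) [[d, -b], [-c, a]]
detX = X[0][0]*X[1][1] - X[0][1]*X[1][0]
Xinv = [[ X[1][1]/detX, -X[0][1]/detX],
        [-X[1][0]/detX,  X[0][0]/detX]]

D = mmul(Xinv, mmul(A, X))
expected = [[3, 0], [0, -2]]    # diag of eigenvalues, in the order of X's columns
assert all(abs(D[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

Note that the order of the eigenvalues on the diagonal of D matches the order in which the eigenvectors are placed as columns of X.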
11. Find bases for the row space, the column space and the null space of the matrix
13. Find the rank and nullity of the 3×5 matrix M, then find the solutions (if any) to Mx = 0.
16. Show that the matrix A is diagonalizable, then find the 5th power of A.
3x + 5y + 2z = 8
8y + 2z = −7
6x + 2y + 8z = 26
19. Construct a system of linear equations and determine the unknowns in the circuit below.
The laws of physics are generally written down as differential equations. Therefore, all of
science and engineering use differential equations to some degree. Understanding differential
equations is essential to understanding almost anything you will study in your science and
engineering classes. Many physical laws and relations appear mathematically in the form of
such equations (figure below).
PDEs have important engineering applications, but they are more complicated than ODEs.
A system of ordinary differential equations is two or more equations involving the
derivatives of two or more unknown functions of a single independent variable.
If differential equations contain two or more dependent variables and one independent variable, then the set of equations is called a system of differential equations.
Two or more dependent variables and two or more independent variables give a system of partial differential equations (rarely seen).
Classification by order :
An ODE is said to be of order n if the nth derivative of the unknown function y is the
highest derivative of y in the equation. The concept of order gives a useful
classification into ODEs of first order, second order, and so on.
For the second order linear homogeneous ODE y″ + p(x) y′ + q(x) y = 0, the functions p(x) and q(x) are called the coefficients of the ODE.
Trivial Solution: For the homogeneous equation above, note that the function y(x) = 0
always satisfies the given equation, regardless what p(x) and q(x) are. This constant
zero solution is called the trivial solution of such an equation.
y = c1 y1 + c2 y2
where c1 and c2 are any arbitrary constants.
Then, according to the Existence and Uniqueness Theorem, for any pair of initial
conditions y(x0) = y0 and y′(x0) = y′0 there must exist uniquely a corresponding pair of
coefficients C1 and C2 that satisfies the system of (algebraic) equations
y0 = C1 y1(x0) + C2 y2(x0)
y′0 = C1 y′1(x0) + C2 y′2(x0)
From linear algebra, we know that for the above system to always have a unique
solution (C1, C2) for any initial values y0 and y′0, the coefficient matrix of the system
must be invertible, or, equivalently, the determinant of the coefficient matrix must be
nonzero. That is
det ⎡ y1(x0)   y2(x0)  ⎤ ≠ 0
    ⎣ y′1(x0)  y′2(x0) ⎦
This determinant above is called the Wronskian or the Wronski determinant. It is a
function of x as well, denoted W(y1, y2)(x), and is given by the expression
W( y1, y2 )(x) = y1 y′2 − y′1 y2
Formally, if W(y1,y2)(x) ≠ 0, then the functions y1, y2 are said to be linearly
independent. Else they are called linearly dependent.
Suppose y1 and y2 are two linearly independent solutions of a second order
homogeneous linear equation
y″ + p(x) y′ + q(x) y = 0.
That is, y1 and y2 both satisfy the equation, and W(y1, y2)(x) ≠ 0. Then (and only then)
their linear combination y = C1 y1 + C2 y2 forms a general solution of the differential
equation. Therefore, a pair of such linearly independent solutions y1 and y2 is called a
set of fundamental solutions, because they are essentially the basic building blocks of
all particular solutions of the equation.
CAUTION! Don’t forget that this highly important theorem holds for homogeneous linear ODEs only; it does not hold for non-homogeneous linear or nonlinear ODEs.
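As an illustration (the functions here are chosen for the sketch, not taken from the notes), y1 = e^(2x) and y2 = e^(−2x) solve y″ − 4y = 0, and their Wronskian is the nonzero constant −4, so they form a fundamental set:

```python
# Wronskian W = y1 y2' - y1' y2 for y1 = e^{2x}, y2 = e^{-2x}.

import math

def W(x):
    y1, dy1 = math.exp(2*x), 2*math.exp(2*x)
    y2, dy2 = math.exp(-2*x), -2*math.exp(-2*x)
    return y1*dy2 - dy1*y2

# W(x) = -2 - 2 = -4 for every x: never zero, so {y1, y2} is fundamental
for x in (-1.0, 0.0, 2.5):
    assert abs(W(x) - (-4.0)) < 1e-9
```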
The equation
a y″ + b y′ + c y = 0
where a, b and c are constants, is called a linear second order differential equation with constant coefficients.
Let :
if y = e^(λx), then y′ = λ e^(λx) and y″ = λ² e^(λx).
Substituting back:
a λ² e^(λx) + b λ e^(λx) + c e^(λx) = 0
( a λ² + b λ + c ) e^(λx) = 0
If one solution y1 of y″ + p(x) y′ + q(x) y = 0 is known, a second solution can be found by reduction of order: substituting y = u(x) y1(x) gives
u″ y1 + u′ ( 2 y′1 + p y1 ) + u ( y″1 + p y′1 + q y1 ) = 0
Since y1 satisfies the equation, the last bracket vanishes:
u″ + ( ( 2 y′1 + p y1 ) / y1 ) u′ = 0
Set v = u′, so v′ = u″. This 2nd order differential equation with missing y term can be transformed to 1st order:
v′ + ( ( 2 y′1 + p y1 ) / y1 ) v = 0
v′/v = −( 2 y′1/y1 + p ) ;   integrating,   ln v = −2 ln y1 − ∫ p dx
Then
v = ( 1/y1² ) exp( − ∫ p dx )
Example 2.6 :
Solve y″ − 4y = 8x² − 2x
The general solution will be
y(x) = yh(x) + yp(x)
To find yh(x) :
The associated homogeneous equation is
y″ − 4y = 0
with characteristic equation λ² − 4 = 0. The roots are real and distinct, λ1 = −2 and λ2 = 2, so the solution will be
yh(x) = c1 e^(−2x) + c2 e^(2x)
To find yp(x) :
Because r(x) = 8x² − 2x is a polynomial, from the table we choose
yp(x) = a x² + b x + c
Differentiating :
y′p(x) = 2a x + b
y″p(x) = 2a
Substituting into the ODE :
y″p − 4 yp = 8x² − 2x
2a − 4 ( a x² + b x + c ) = 8x² − 2x
−4a x² − 4b x + ( 2a − 4c ) = 8x² − 2x
By comparison
−4a = 8  →  a = −2
−4b = −2  →  b = 1/2
2a − 4c = 0  →  c = a/2 = −1
Then
yp(x) = −2x² + (1/2) x − 1
Then the general solution will be
y(x) = c1 e^(−2x) + c2 e^(2x) − 2x² + (1/2) x − 1

Euler–Cauchy equations have the form a x² y″ + b x y′ + c y = 0. Substituting y = x^m gives
a m( m − 1 ) x^m + b m x^m + c x^m = 0
Note that y = x^m was a rather natural choice because we have obtained a common factor x^m. Dropping it, we have the auxiliary equation
a m( m − 1 ) + b m + c = 0 ,  i.e.  a m² + ( b − a ) m + c = 0
Hence y = xm is a solution of the DE if and only if m is a root of the auxiliary
equation. There will be three cases :
Case I : Two equal real roots
The general solution will be in the form
y = ( c1 + c2 ln x ) x^m ,  x > 0
Case II : Two different real roots
The general solution will be in the form
y = c1 x^(m1) + c2 x^(m2)
Case III : Complex roots
The roots are a complex conjugate pair, m1,2 = α ± iβ, and the general solution will be
y = x^α ( A cos( β ln x ) + B sin( β ln x ) ) ,  x > 0
Example 2.8 :
Solve
x² y″ − 6x y′ + 10 y = 6
The general solution will be
y(x) = yh(x) + yp(x)
The auxiliary equation m² − 7m + 10 = ( m − 2 )( m − 5 ) = 0 has roots m1 = 2 and m2 = 5, so
yh(x) = c1 x² + c2 x⁵
ii. = + tan
iii. + =2
iv. ( +3 ) + ( +3 ) =0
v. +3 + = 0
3. Solve the IVP. Show the steps of derivation, beginning with the general solution.
i. +5 = 0 , (0) = 1
ii. 2 + (4 + 3 ) = 0 , (0.2) = −1.5
iii. + 4 = 8 , (1) = 2
4. Solve the IVP. Show the steps of derivation, beginning with the general solution.
+ + 0.25 = 0 , (0) = 3 (0) = −3.5
9 − 24 + 16 = 0 , (0) = 3 (0) = 3
+2 +3 + = 0 , (0) = 3 , (0) = −3 (0) = 47
5. Solve
i. −2 +3 = + cos
ii. − − 2 = sin 2
iii. + +4 + 4 = −4 −6
6. The ODE
1
+ + =0
+ + =
A power series about the point x0 has the form
Σ_{m=0}^∞ am ( x − x0 )^m = a0 + a1 ( x − x0 ) + a2 ( x − x0 )² + ⋯
and, for x0 = 0,
f(x) = a0 + a1 x + a2 x² + ⋯ = Σ_{m=0}^∞ am x^m
Analytic functions
A function is analytic at x0 if, in some neighborhood of x0, it can be represented by the sum of a power series Σ am ( x − x0 )^m that has a positive radius of convergence. Such a function is infinitely differentiable in that neighborhood, and the fact that power series behave nicely under addition, multiplication, differentiation, and integration accounts for their usefulness.
The radius of convergence R may be found from the ratio test:
R = lim_{m→∞} | am / am+1 |
Convergence for all x (R = ∞) is the best possible case, convergence in some finite interval is
the usual, and convergence only at the center point ( R=0) is useless. And the set of the values
that the series converges is called the interval of convergence
Convergence tests :
The ratio test formula will not help if the limit does not exist; the alternative is then the root test:
R = 1 / lim sup_{m→∞} |am|^(1/m)
Power series operations
Given two power series y1(x) and y2(x) which converge in |x − x0| < R1 and |x − x0| < R2 respectively,
y1(x) = Σ_{m=0}^∞ am ( x − x0 )^m ,   y2(x) = Σ_{m=0}^∞ bm ( x − x0 )^m
Termwise addition or subtraction of the two power series, with radii of convergence R1 and R2, yields a power series with radius of convergence at least equal to the smaller of ( R1, R2 ):
( y1 ± y2 )(x) = Σ_{m=0}^∞ ( am ± bm )( x − x0 )^m
Termwise multiplication (the Cauchy product) gives
( y1 y2 )(x) = Σ_{m=0}^∞ ( a0 bm + a1 b(m−1) + ⋯ + am b0 )( x − x0 )^m
Shifting indices :
Suppose the power series after differentiating twice has the form
Σ_{m=2}^∞ m( m − 1 ) am ( x − x0 )^(m−2)
Replacing m by m + 2 shifts the index so that the series starts at m = 0:
Σ_{m=2}^∞ m( m − 1 ) am ( x − x0 )^(m−2) = Σ_{m=0}^∞ ( m + 2 )( m + 1 ) a(m+2) ( x − x0 )^m
Example 3.1 :
Determine the power series solution of the differential equation:
y'' + xy' + 2y = 0
using Leibniz–Maclaurin’s method, given the initial conditions that at x = 0 , y = 1 and
y' = 2
Each term is differentiated n times, which gives:
y'' differentiated n times becomes y(n+2)
xy' differentiated n times ( use Leibniz theorem with v = x and u = y' ):
( x y' )^(n) = x y^(n+1) + n y^(n)
y differentiated n times becomes y(n)
substituting back into the DE, we get
y^(n+2) + x y^(n+1) + n y^(n) + 2 y^(n) = 0
y^(n+2) + x y^(n+1) + ( n + 2 ) y^(n) = 0
At x = 0 this gives
y^(n+2)(0) = −( n + 2 ) y^(n)(0)
Substituting n=0, 1, 2, 3,…will produce a set of relationships between the various coefficients.
at x = 0 , y = 1 and y' = 2 :
For n = 0 :  y″(0) = −2 y(0) = −2.    For n = 1 :  y‴(0) = −3 y′(0) = −6.
For n = 2 :  y^(4)(0) = −4 y″(0) = 8.    For n = 3 :  y^(5)(0) = −5 y‴(0) = 15 y′(0) = 30.
y(x) = Σ an ( x − x0 )^n
Then, for given constants a0, a1 there is a unique solution y(x) to the initial value problem
p(x) y″ + q(x) y′ + r(x) y = 0 ,   y(x0) = a0 ,  y′(x0) = a1
y(x) = Σ_{n=0}^∞ an ( x − x0 )^n = a0 + a1 ( x − x0 ) + a2 ( x − x0 )² + ⋯
Example 3.2 :
Consider the differential equation
y″ + y = 0 ,  y(0) = 1 ,  y′(0) = 0
the solution y(x) can be written as
y(x) = Σ_{m=0}^∞ am x^m = a0 + a1 x + a2 x² + ⋯
y′(x) = Σ_{m=1}^∞ m am x^(m−1)
y″(x) = Σ_{m=2}^∞ m( m − 1 ) am x^(m−2)
Then
Σ_{m=2}^∞ m( m − 1 ) am x^(m−2) + Σ_{m=0}^∞ am x^m = 0
Σ_{m=0}^∞ ( m + 2 )( m + 1 ) a(m+2) x^m + Σ_{m=0}^∞ am x^m = 0
For the same indexes of the summation sign and the same powers of x,
Σ_{m=0}^∞ [ ( m + 2 )( m + 1 ) a(m+2) + am ] x^m = 0
( m + 2 )( m + 1 ) a(m+2) + am = 0
which can be put in the form
a(m+2) = − am / ( ( m + 1 )( m + 2 ) ) ,   m = 0, 1, 2, …
It is called the recursion relation.
y(x) = a0 + a1 x − (a0/2!) x² − (a1/3!) x³ + (a0/4!) x⁴ + (a1/5!) x⁵ − ⋯
y(x) = a0 ( 1 − x²/2! + x⁴/4! − ⋯ ) + a1 ( x − x³/3! + x⁵/5! − ⋯ )
which are the Maclaurin series for cos(x) and sin(x) with base point x0 = 0; then the general solution of the given 2nd order ODE is
y(x) = a0 cos x + a1 sin x
The initial conditions give a0 = 1 and a1 = 0. Substituting the constants into the solution, we get
y(x) = cos x
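The recursion relation of Example 3.2 can be iterated numerically; a partial sum of the resulting series should agree with cos x (illustrative sketch):

```python
# Build coefficients from a_{m+2} = -a_m / ((m+1)(m+2)) with a0 = 1, a1 = 0
# and compare the partial sum with cos(x).

import math

N = 20                      # number of coefficients to generate
a = [0.0] * N
a[0], a[1] = 1.0, 0.0       # initial conditions y(0) = 1, y'(0) = 0
for m in range(N - 2):
    a[m + 2] = -a[m] / ((m + 1) * (m + 2))

def y(x):
    return sum(a[m] * x**m for m in range(N))

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(y(x) - math.cos(x)) < 1e-9
```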
Example 3.3 :
Consider the differential equation y″ + x y′ + y = 0
It is clear that x = 0 is an ordinary point. We can use the power series method:
y = Σ am x^m ,  y′ = Σ m am x^(m−1) ,  y″ = Σ m( m − 1 ) am x^(m−2)
Σ m( m − 1 ) am x^(m−2) + x Σ m am x^(m−1) + Σ am x^m = 0
Σ_{m=0}^∞ ( m + 2 )( m + 1 ) a(m+2) x^m + Σ_{m=1}^∞ m am x^m + Σ_{m=0}^∞ am x^m = 0
Now we can express the DE in a single sigma, but the sums should start with the same index m = 1. Therefore the m = 0 terms should be written separately:
( 2 a2 + a0 ) + Σ_{m=1}^∞ [ ( m + 2 )( m + 1 ) a(m+2) + m am + am ] x^m = 0
( 2 a2 + a0 ) + Σ_{m=1}^∞ [ ( m + 2 )( m + 1 ) a(m+2) + ( m + 1 ) am ] x^m = 0
Then
( 2 a2 + a0 ) = 0  gives  a2 = − a0/2
( m + 2 )( m + 1 ) a(m+2) + ( m + 1 ) am = 0
a(m+2) = −( m + 1 ) am / ( ( m + 2 )( m + 1 ) ) = − am / ( m + 2 )
This is the recursion relation. If a0 and a1 are known, which are the initial conditions of the given IVP, this equation allows us to determine the remaining coefficients recursively by putting m = 1, 2, 3, … in succession:
y(x) = a0 ( 1 − x²/2 + x⁴/8 − ⋯ ) + a1 ( x − x³/3 + x⁵/15 − ⋯ )
Assume a power series solution
y(x) = Σ_{m=0}^∞ am x^m
Differentiate and substitute back into Legendre's equation ( 1 − x² ) y″ − 2x y′ + n( n + 1 ) y = 0, and let n( n + 1 ) = K:
y′(x) = Σ_{m=1}^∞ m am x^(m−1) ,   y″(x) = Σ_{m=2}^∞ m( m − 1 ) am x^(m−2)
( 1 − x² ) Σ m( m − 1 ) am x^(m−2) − 2x Σ m am x^(m−1) + K Σ am x^m = 0
Σ m( m − 1 ) am x^(m−2) − Σ m( m − 1 ) am x^m − 2 Σ m am x^m + K Σ am x^m = 0
The powers of x need to be the same; only the first term needs to be shifted:
Σ_{m=2}^∞ m( m − 1 ) am x^(m−2) = Σ_{m=0}^∞ ( m + 2 )( m + 1 ) a(m+2) x^m
Collecting the coefficient of x^m:
( m + 2 )( m + 1 ) a(m+2) − m( m − 1 ) am − 2m am + K am = 0
( m + 2 )( m + 1 ) a(m+2) − [ m( m − 1 ) + 2m − K ] am = 0
a(m+2) = [ m( m − 1 ) + 2m − K ] am / ( ( m + 2 )( m + 1 ) )
       = [ m( m + 1 ) − n( n + 1 ) ] am / ( ( m + 2 )( m + 1 ) )
which can be put in the form
a(m+2) = − ( n − m )( n + m + 1 ) am / ( ( m + 2 )( m + 1 ) )
Note that this is a second order recursion (relating am+2 to am) thus there are two
undetermined constants a0 and a1 giving two independent series, we get
m   The relation                          The constants
0   a2 = −n( n + 1 ) a0 / 2               a2 = −n( n + 1 ) a0 / 2!
1   a3 = −( n − 1 )( n + 2 ) a1 / (3×2)   a3 = −( n − 1 )( n + 2 ) a1 / 3!
2   a4 = −( n − 2 )( n + 3 ) a2 / (4×3)   a4 = n( n + 1 )( n − 2 )( n + 3 ) a0 / 4!
3   a5 = −( n − 3 )( n + 4 ) a3 / (5×4)   a5 = ( n − 1 )( n + 2 )( n − 3 )( n + 4 ) a1 / 5!
y1(x) = a0 [ 1 − n( n + 1 ) x²/2! + n( n + 1 )( n − 2 )( n + 3 ) x⁴/4! + ⋯ ]
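A small check of the recursion (illustrative): for integer n, the series whose parity matches n terminates, producing a Legendre polynomial. For n = 2 and a0 = 1 the even series is 1 − 3x², proportional to P2(x) = (3x² − 1)/2:

```python
# Iterate a_{m+2} = -(n - m)(n + m + 1) a_m / ((m + 2)(m + 1)) for n = 2.

n = 2
a = [0.0] * 8
a[0] = 1.0                      # even series (take a1 = 0)
for m in range(0, 6):
    a[m + 2] = -(n - m) * (n + m + 1) * a[m] / ((m + 2) * (m + 1))

assert a[2] == -3.0             # series is 1 - 3x^2
assert a[4] == 0.0 and a[6] == 0.0   # the series terminates: a polynomial

# P2(x) = (3x^2 - 1)/2, and 1 - 3x^2 = -2 * P2(x)
x = 0.7
p2 = (3*x*x - 1) / 2
series = a[0] + a[2]*x*x
assert abs(series - (-2.0) * p2) < 1e-12
```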
Then testing : if the limits
lim_{x→x0} ( x − x0 ) p(x)   and   lim_{x→x0} ( x − x0 )² q(x)
are both finite ( the products are analytic at x0 ), then the equation can be represented by a power series, and solved by the Frobenius method.
The differential equation has a Frobenius solution in the form of a Frobenius series as
y(x) = x^c Σ_{m=0}^∞ am x^m = Σ_{m=0}^∞ am x^(m+c) ,   a0 ≠ 0
where the exponent c may be any (real or complex) number, c is chosen so that a0 ≠ 0, and the series has an interval of convergence R.
y′(x) = Σ_{m=0}^∞ ( m + c ) am x^(m+c−1)
y″(x) = Σ_{m=0}^∞ ( m + c )( m + c − 1 ) am x^(m+c−2)
Since by assumption a0 ≠ 0, the expression in the brackets must be zero. This gives
c ( c − 1 ) + p0 c + q0 = 0
where p0 = lim_{x→x0} ( x − x0 ) p(x) and q0 = lim_{x→x0} ( x − x0 )² q(x).
This important quadratic equation is called the indicial equation of the ODE. The
Frobenius method yields a basis of solutions of the DE based on the roots of indicial
equation. There are three cases:
Case 1. Distinct roots not differing by an integer.
If c1 > c2 and ( c1 − c2 ) is not a positive integer, then there are two linearly independent Frobenius solutions
y1(x) = x^(c1) Σ am x^m   and   y2(x) = x^(c2) Σ a*m x^m
Case 2. A double root, c1 = c2 = c.
y1(x) = x^c Σ am x^m   and   y2(x) = y1(x) ln x + x^c Σ A*m x^m
Case 3. Roots differing by a positive integer, c1 − c2 = 1, 2, 3, …
y1(x) = x^(c1) Σ am x^m   and   y2(x) = k y1(x) ln x + x^(c2) Σ A*m x^m
where the constant k may turn out to be zero.
Example 3.5 :
Determine, using the Frobenius method, the general power series solution of the
differential equation:
3x y'' + y' - y = 0 and x0 = 0 is a regular singular point .
Determination of the indicial equation : Assume that the solution will be
y = Σ am x^(m+c) ,  y′ = Σ ( m + c ) am x^(m+c−1) ,  y″ = Σ ( m + c )( m + c − 1 ) am x^(m+c−2)
3x Σ ( m + c )( m + c − 1 ) am x^(m+c−2) + Σ ( m + c ) am x^(m+c−1) − Σ am x^(m+c) = 0
Σ [ 3( m + c )( m + c − 1 ) + ( m + c ) ] am x^(m+c−1) − Σ am x^(m+c) = 0
The equation of the smallest power ( m = 0 ) gives the indicial equation:
3c( c − 1 ) + c = 3c² − 2c = c( 3c − 2 ) = 0
The roots are c1 = 2/3 and c2 = 0.
The difference is not a positive integer, so the solution will be as in case (1):
y1(x) = x^(2/3) Σ am x^m   and   y2(x) = Σ a*m x^m
For c = 0 :
y(x) = Σ am x^m ,  a0 ≠ 0
y′(x) = Σ m am x^(m−1)
y″(x) = Σ m( m − 1 ) am x^(m−2)
3x Σ m( m − 1 ) am x^(m−2) + Σ m am x^(m−1) − Σ am x^m = 0
Σ [ 3m( m − 1 ) + m ] am x^(m−1) − Σ am x^m = 0
Σ m( 3m − 2 ) am x^(m−1) − Σ am x^m = 0
Equating coefficients of equal powers ( in the second sum replace m by m − 1 ):
m( 3m − 2 ) am − a(m−1) = 0
Then
am = a(m−1) / ( m( 3m − 2 ) ) ,   m = 1, 2, 3, …
y2(x) = a0 + a0 x + ( a0/8 ) x² + ( a0/(21×8) ) x³ + ( a0/(4×10×21×8) ) x⁴ + ⋯
y2(x) = a0 [ 1 + x + x²/8 + x³/(21×8) + x⁴/(4×10×21×8) + ⋯ ]
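As a sanity check (illustrative), iterating the c = 0 recursion am = a(m−1) / ( m(3m − 2) ) with exact rational arithmetic reproduces the denominators 8, 21×8 and 4×10×21×8:

```python
# Exact coefficients of the c = 0 Frobenius series of Example 3.5.

from fractions import Fraction

a = [Fraction(1)]                       # take a0 = 1
for m in range(1, 5):
    a.append(a[m - 1] / (m * (3*m - 2)))

assert a[1] == 1                        # a1 = a0 / (1*1)
assert a[2] == Fraction(1, 8)
assert a[3] == Fraction(1, 21 * 8)
assert a[4] == Fraction(1, 4 * 10 * 21 * 8)
```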
For Bessel's equation of order ν, x² y″ + x y′ + ( x² − ν² ) y = 0, the products x p(x) = 1 and x² q(x) = x² − ν² are both analytic at x0 = 0, so the equation can be solved using the Frobenius method.
Determination of the indicial equation :
Assume that the solution will be
y = Σ am x^(m+c) ,  y′ = Σ ( m + c ) am x^(m+c−1) ,  y″ = Σ ( m + c )( m + c − 1 ) am x^(m+c−2)
Substituting into Bessel's equation:
Σ ( m + c )( m + c − 1 ) am x^(m+c) + Σ ( m + c ) am x^(m+c) + Σ am x^(m+c+2) − ν² Σ am x^(m+c) = 0
Σ am x^(m+c+2) + Σ [ ( m + c )( m + c − 1 ) + ( m + c ) − ν² ] am x^(m+c) = 0
The equation of the smallest power gives the indicial equation:
c( c − 1 ) + c − ν² = c² − ν² = 0
The roots are c1 = ν and c2 = −ν, so c1 − c2 = 2ν.
The first solution, at c = ν :
y1(x) = Σ am x^(m+ν) ,  a0 ≠ 0
y′1(x) = Σ ( m + ν ) am x^(m+ν−1)
y″1(x) = Σ ( m + ν )( m + ν − 1 ) am x^(m+ν−2)
Substituting in the DE and setting c = ν :
Σ [ ( m + ν )( m + ν − 1 ) + ( m + ν ) − ν² ] am x^(m+ν) + Σ am x^(m+ν+2) = 0
Σ m( m + 2ν ) am x^(m+ν) + Σ a(m−2) x^(m+ν) = 0
To find the recursion relation, the sums should start at the same index; taking the m = 0 and m = 1 coefficients out of the sigma ( the m = 0 term vanishes by the indicial equation ):
( 1 + 2ν ) a1 x^(1+ν) + Σ_{m=2}^∞ [ m( m + 2ν ) am + a(m−2) ] x^(m+ν) = 0
so a1 = 0 and
am = − a(m−2) / ( m( m + 2ν ) ) ,   m = 2, 3, …
The Bessel function of order n = 0 will be
J0(x) = 1 − x² / ( 2² (1!)² ) + x⁴ / ( 2⁴ (2!)² ) − x⁶ / ( 2⁶ (3!)² ) + ⋯
which is similar to the cosine function series. The Bessel function of order n = 1 will be
J1(x) = x/2 − x³ / ( 2³ 1! 2! ) + x⁵ / ( 2⁵ 2! 3! ) − x⁷ / ( 2⁷ 3! 4! ) + ⋯
which is similar to the sine function series.
Jν(x) = Σ_{m=0}^∞ (−1)^m ( x/2 )^(2m+ν) / ( m! Γ( m + ν + 1 ) )
J−ν(x) = Σ_{m=0}^∞ (−1)^m ( x/2 )^(2m−ν) / ( m! Γ( m − ν + 1 ) )
Case 2 : If ν = 0 :
That is, the roots are both zero, c1 = c2 = 0, and
J0(x) = Σ_{m=0}^∞ (−1)^m ( x/2 )^(2m) / ( m! )²
y(x) = c1 J0(x) + c2 Y0(x)
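The J0 series is easy to evaluate by partial sums (illustrative sketch; the tabulated value of J0(1) is used as a reference):

```python
# Partial sum of J0(x) = sum (-1)^m (x/2)^{2m} / (m!)^2.

import math

def j0(x, terms=25):
    return sum((-1)**m * (x/2)**(2*m) / math.factorial(m)**2
               for m in range(terms))

assert j0(0.0) == 1.0
# tabulated value J0(1) ≈ 0.7651976866
assert abs(j0(1.0) - 0.7651976866) < 1e-8
```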
Case 3 : If 2ν is an integer :
If 2ν is an odd positive integer, say 2ν = 2n + 1 for some nonnegative integer n, then ν = (2n + 1)/2, and Jν and J−ν are again linearly independent. In this case, the general solution of Bessel’s equation is
y(x) = c1 J(2n+1)/2 (x) + c2 J−(2n+1)/2 (x)
For n = 0 :
y(x) = c1 J1/2 (x) + c2 J−1/2 (x)
y(x) = √( 2/(πx) ) ( c1 sin x + c2 cos x )
If 2ν is an even integer, i.e. ν = n is an integer ( n ≥ 0 ), then Jν and J−ν are solutions of Bessel’s equation but are linearly dependent. Hence a second solution needs to be found, which leads to the Bessel function of the second kind. The Bessel function of the second kind of order n is given by ( for n = 0 )
Y0(x) = (2/π) [ J0(x) ( ln(x/2) + γ ) + Σ_{m=1}^∞ (−1)^(m+1) hm ( x/2 )^(2m) / ( m! )² ]
where γ is Euler's constant and hm = 1 + 1/2 + ⋯ + 1/m.
(1 − ) − t + = 0
i. Identify the ordinary and singular points of the DE.
ii. Solve by power series around x0 = 0; write the first six terms.
4.1 - Introduction
In some situations, a difficult problem can be transformed into an easier problem,
whose solution can be transformed back into the solution of the original problem.
For example, an integrating factor can sometimes be found to transform a non-exact
first order first degree ordinary differential equation into an exact ODE. Similarly, the Bernoulli ODE becomes a linear ODE upon a transformation of the dependent variable. Also, the Laplace transform will convert an initial value problem into a much easier algebra problem.
Linear and nonlinear Systems
When the system is linear, the superposition principle can be applied. This important fact is the
reason that the techniques of linear-system analysis have been so well developed. The
superposition principle can be stated as follows. If the input/output relation of a system satisfies
x1(t) → y1(t) ,  x2(t) → y2(t)   ⟹   x1(t) + x2(t) → y1(t) + y2(t)
then the system is linear. So, a system is said to be nonlinear if this relation is not valid.
Time-varying and time-invariant systems
A system is said to be time-invariant if a time shift in the input signal causes the same time shift in the output signal. If y(t) is the output corresponding to input x(t), a time-invariant system will have y(t − t0) as the output when x(t − t0) is the input. So, the rule used to compute the system output does not depend on the time at which the input is applied. On the other hand, if the output corresponding to the input x(t − t0) is not equal to y(t − t0), we call the system time-variant or time-varying.
A time-invariant differential equation is a differential equation in which none of its
coefficients depend on the independent time variable, t.
a(t) d²y/dt² + b(t) dy/dt + c(t) y = x(t)
Since the differential equation is linear and with variable coefficients, a system
characterized by such a model is said to be a linear time-variant system.
The main goal in the analysis of systems is to find the system response ( system
output – solution of the ODE ) due to external ( system inputs x(t) ) and internal
(system initial conditions) forces. It is known from elementary theory of differential
equations that the solution of a linear differential equation has two additive
components: the homogeneous and particular solutions. The homogeneous solution is contributed by the initial conditions and the particular solution comes from the forcing function. In engineering, the homogeneous solution is also called the system natural response, and the particular solution is called the system forced response.
A Laplace transform will convert an initial value problem into a much easier algebra
problem. The solution of the original problem is then the inverse Laplace transform
of the solution to the algebra problem.
F(s) = ∫_a^b k(s, t) f(t) dt
is called an integral transform of f(t): the function f(t) is transformed from the t space into
another space (s), the function k(s, t) is called the kernel of the transform, and the
parameter s, which is independent of t, belongs to some domain on the real line or in the
complex plane. Choosing different kernels and different values of a and b, different
integral transforms are introduced, for example the Laplace, Fourier, Hankel and Mellin
transforms. For the Laplace transform the kernel is k(s, t) = e^(−st) with a = 0 and b = ∞:
ℒ{f(t)} = F(s) = ∫_0^∞ e^(−st) f(t) dt
Before proceeding, there are a few observations relating to the definition worthy of
comment.
a) The symbol ℒ denotes the Laplace transform operator; when it operates on a
function f (t), it transforms it into a function F(s) of the complex variable s. We
say the operator transforms the function f (t) in the t domain (usually called the
time domain) into the function F(s) in the s domain (usually called the complex
frequency domain, or simply the frequency domain). It is usual to refer to f (t)
and F(s) as a Laplace transform pair, written as { f (t), F(s)}.
b) Because the upper limit in the integral is infinite, the domain of integration is
infinite. Thus the integral is an example of an improper integral, and for F(s) to
exist this improper integral must converge.
c) Because the lower limit in the integral is zero, it follows that when taking the
Laplace transform, the behavior of f(t) for negative values of t is ignored or
suppressed. This means that F(s) contains information on the behavior of f(t)
only for t > 0, so that the Laplace transform is not a suitable tool for investigating
the behavior of f(t) for t < 0.
The Laplace transform defined with lower limit zero,
ℒ{f(t)} = F(s) = ∫_0^∞ e^(−st) f(t) dt
is sometimes referred to as the one-sided or unilateral Laplace transform of the
function f(t). In this course we shall concern ourselves only with this transform, and
refer to it simply as the Laplace transform of the function f(t).
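As a quick numerical sketch (not part of the original text), the defining improper integral can be approximated by truncating it at a large T and applying the trapezoid rule; the function names here are illustrative.

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    # Approximate F(s) = integral from 0 to T of e^(-st) f(t) dt (trapezoid rule);
    # for Re(s) > 0 the tail beyond T is negligible once e^(-sT) is tiny.
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s = 2.0
F_one = laplace_numeric(lambda t: 1.0, s)            # table value: 1/s
F_cos = laplace_numeric(lambda t: math.cos(3 * t), s)  # table value: s/(s^2 + 9)
print(F_one, 1 / s)
print(F_cos, s / (s ** 2 + 9))
```

The numerical values agree with the table entries ℒ{1} = 1/s and ℒ{cos 3t} = s/(s² + 9) to several decimal places.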
Example 4.1
Using the Laplace transform definition, determine the Laplace transform of the ramp function
f(t) = t.
ℒ{t} = ∫_0^∞ t e^(−st) dt
Integrating by parts,
ℒ{t} = [−(t/s) e^(−st)]_0^∞ + (1/s) ∫_0^∞ e^(−st) dt = [−(t/s) e^(−st)]_0^∞ − [(1/s²) e^(−st)]_0^∞
For Re(s) > 0,
lim_(t→∞) t e^(−st) = 0, so the first bracket vanishes,
and the second bracket gives
lim_(t→∞) (1/s²)(1 − e^(−st)) = 1/s²
Then
the Laplace transform of the function f(t) = t only exists if Re(s) > 0 and it is
ℒ{t} = 1/s², Re(s) > 0
Some standard properties of the Laplace transform, with F(s) = ℒ{f(t)} and G(s) = ℒ{g(t)}:
Linearity: ℒ{a f(t) + b g(t)} = a ℒ{f(t)} + b ℒ{g(t)} = a F(s) + b G(s)
First shift theorem: ℒ{e^(at) f(t)} = ∫_0^∞ e^(−(s−a)t) f(t) dt = F(s − a)
Multiplication by t: ℒ{t f(t)} = −d/ds [F(s)]
More generally, ℒ{tⁿ f(t)} = (−1)ⁿ dⁿF(s)/dsⁿ
Division by t: ℒ{f(t)/t} = ∫_s^∞ F(u) du, provided the limit lim_(t→0) f(t)/t exists.
Transform of derivatives:
ℒ{f⁽ⁿ⁾(t)} = sⁿ ℒ{f(t)} − s^(n−1) f(0) − s^(n−2) f′(0) − ⋯ − s f⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0)
Such as
ℒ{f′(t)} = s ℒ{f(t)} − f(0) and ℒ{f″(t)} = s² ℒ{f(t)} − s f(0) − f′(0)
Transform of an integral: ℒ{∫_0^t f(τ) dτ} = F(s)/s
Limiting Theorems
The initial- and final-value theorems are two useful theorems that enable us to predict system
behaviour as t → 0 and t → ∞ without actually inverting Laplace transforms.
Initial value theorem
If f(t) and f′(t) are both Laplace transformable and lim_(s→∞) sF(s) exists, then
lim_(t→0⁺) f(t) = f(0⁺) = lim_(s→∞) sF(s)
It is important to recognize that the initial-value theorem does not give the initial value f(0−)
used when determining the Laplace transform, but rather gives the value of f (t) as t → 0+.
Final value theorem
If f(t) and f′(t) are both Laplace transformable and lim_(t→∞) f(t) exists, then
lim_(t→∞) f(t) = lim_(s→0) sF(s)
The final-value theorem provides a useful vehicle for determining a system’s steady state gain
(SSG) and the steady-state errors, or offsets, in feedback control systems, both of which are
important features in control system design. The SSG of a stable system is the system’s steady-
state response, that is the response as t → ∞, to a unit step input.
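A small numerical sketch of the two limiting theorems, using the assumed illustration f(t) = 1 − e^(−t), whose transform F(s) = 1/(s(s + 1)) is standard:

```python
# f(t) = 1 - e^(-t)  =>  F(s) = 1/(s(s+1)),  so  s*F(s) = 1/(s+1).
def sF(s):
    return s * (1.0 / (s * (s + 1.0)))

initial_value = sF(1e9)    # s -> infinity approximates f(0+) = 0
final_value = sF(1e-9)     # s -> 0+ approximates f(infinity) = 1
print(initial_value, final_value)
```

The initial-value theorem recovers f(0⁺) = 0 and the final-value theorem recovers the steady-state value f(∞) = 1.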
Convolution Theorem
(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ
ℒ{∫_0^t f(τ) g(t − τ) dτ} = ℒ{(f ∗ g)(t)} = F(s) G(s)
or
ℒ⁻¹{F(s) G(s)} = (f ∗ g)(t)
1. To each factor of b(s) of the form (s − r)^m, assign a sum of m fractions of the form
A1/(s − r) + A2/(s − r)² + ⋯ + Am/(s − r)^m
2. To each factor of b(s) of the form (s² + bs + c)^p, assign a sum of p fractions of the form
(B1 s + C1)/(s² + bs + c) + (B2 s + C2)/(s² + bs + c)² + ⋯ + (Bp s + Cp)/(s² + bs + c)^p
3. Set the original fraction a(s)/b(s) equal to the sum of all these partial fractions.
Clear the resulting equation of fractions and arrange the terms in decreasing
powers of s.
Advanced Engg. Math. by Dr. Elhassen Ali Ahmed 77
4. Equate the coefficients of corresponding powers of s and solve the resulting
equations for the undetermined coefficients.
Example 4.2 :
Find the Laplace inverse of
Y(s) = (2s − 1)/((s² + 4s + 3)(s − 1)) = (2s − 1)/((s + 1)(s + 3)(s − 1))
To evaluate the inverse LT, use partial fractions:
(2s − 1)/((s + 1)(s + 3)(s − 1)) = A/(s + 1) + B/(s + 3) + C/(s − 1)
2s − 1 = A(s + 3)(s − 1) + B(s + 1)(s − 1) + C(s + 1)(s + 3)
Put s = 1 then: 1 = 8C, then C = 1/8
Put s = −1 then: −3 = −4A, then A = 3/4
Put s = −3 then: −7 = 8B, then B = −7/8
Y(s) = 3/(4(s + 1)) − 7/(8(s + 3)) + 1/(8(s − 1))
Taking the inverse Laplace,
y(t) = (3/4) ℒ⁻¹{1/(s + 1)} − (7/8) ℒ⁻¹{1/(s + 3)} + (1/8) ℒ⁻¹{1/(s − 1)}
Using Laplace tables,
y(t) = (3/4) e^(−t) − (7/8) e^(−3t) + (1/8) e^t
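The partial-fraction step in the worked example above can be cross-checked numerically; this is a sketch, evaluating both sides of the decomposition at sample points.

```python
import math

def Y(s):      # the original transform (2s-1)/((s+1)(s+3)(s-1))
    return (2 * s - 1) / ((s + 1) * (s + 3) * (s - 1))

def Y_pf(s):   # partial-fraction form with A = 3/4, B = -7/8, C = 1/8
    return 0.75 / (s + 1) - 0.875 / (s + 3) + 0.125 / (s - 1)

def y(t):      # term-by-term inverse transform
    return 0.75 * math.exp(-t) - 0.875 * math.exp(-3 * t) + 0.125 * math.exp(t)

checks = [abs(Y(s) - Y_pf(s)) for s in (0.5, 2.0, 10.0)]
print(max(checks), y(0.0))
```

The two rational expressions agree to machine precision, confirming the coefficients.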
The Laplace transform of the Heaviside (unit step) function H(t − a) is
ℒ{H(t − a)} = ∫_0^∞ e^(−st) H(t − a) dt = ∫_a^∞ e^(−st) dt = e^(−as)/s
In particular, ℒ{H(t)} = 1/s, since H(t) = 1 for t > 0.
The unit step function can be used in different cases as illustrated in the cases below:
The pulse (window) function H(t − a) − H(t − b) with a < b. Pulses are used to turn a
signal off until time t = a and then to turn it on until time t = b, after which it is
switched off again:
f(t)[H(t − a) − H(t − b)] = { 0, t < a; f(t), a < t < b; 0, t > b }
where Π_(a,b)(t) = H(t − a) − H(t − b).
The delta (impulse) function located at t = a and denoted by δ(t − a) is defined as the limit
δ(t − a) = lim_(h→0) (1/h)[H(t − a) − H(t − a − h)]
Then it is defined by
δ(t − a) = { ∞, t = a; 0, otherwise }
The operational property of the delta function, usually called its filtering or shifting property,
can be given as follows: Let f(t) be defined and integrable over all intervals contained
within 0 ≤ t < ∞, and let it be continuous in a neighborhood of a. Then for a ≥ 0
∫_0^∞ f(t) δ(t − a) dt = f(a)
A purely formal derivation of the Laplace transform of the delta function proceeds as follows.
By definition,
ℒ{δ(t − a)} = ∫_0^∞ e^(−st) δ(t − a) dt = lim_(h→0) (1/h) ∫_a^(a+h) e^(−st) dt
= lim_(h→0) (e^(−as) − e^(−(a+h)s))/(hs) = e^(−as) lim_(h→0) (1 − e^(−hs))/(hs) = e^(−as)
In particular, ℒ{δ(t)} = 1.
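The filtering property can be seen numerically by replacing δ(t − a) with the finite pulse (1/h)[H(t − a) − H(t − a − h)] and letting h shrink; a sketch, with illustrative function names:

```python
import math

def delta_pulse_integral(f, a, h, n=100_000):
    # integral of f(t) * (1/h)[H(t-a) - H(t-a-h)] = (1/h) * integral of f over [a, a+h]
    step = h / n
    total = 0.5 * (f(a) + f(a + h))
    for k in range(1, n):
        total += f(a + k * step)
    return total * step / h

a = 2.0
for h in (0.1, 0.01, 0.001):
    print(h, delta_pulse_integral(math.sin, a, h))
print(math.sin(a))  # the value the integrals approach as h -> 0
```

As h → 0 the pulse integral converges to f(a), here sin(2).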
Periodic function
In many engineering applications, however, one frequently encounters periodic
functions. If f(t) is periodic with period T, that is f(t + T) = f(t), then
ℒ{f(t)} = (1/(1 − e^(−sT))) ∫_0^T e^(−st) f(t) dt
Example 4.4 :
Evaluate the Laplace transform of the periodic square wave with period T = 2a, where
f(t) = 1 for 0 < t < a and f(t) = −1 for a < t < 2a, with f(t + 2a) = f(t).
ℒ{f(t)} = (1/(1 − e^(−2as))) [∫_0^a e^(−st) dt − ∫_a^(2a) e^(−st) dt]
ℒ{f(t)} = (1/(1 − e^(−2as))) [(1 − e^(−as))/s − (e^(−as) − e^(−2as))/s]
F(s) = (1 − 2e^(−as) + e^(−2as))/(s(1 − e^(−2as)))
F(s) = (1 − e^(−as))²/(s(1 − e^(−as))(1 + e^(−as)))
F(s) = (1 − e^(−as))/(s(1 + e^(−as))) = (1/s) tanh(as/2)
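The algebra of the square-wave example above can be double-checked by evaluating the one-period form and the closed form at a few points; a sketch, assuming the ±1 square wave with half-period a:

```python
import math

def F_periodic(s, a):
    # one-period integral of the +/-1 square wave, divided by 1 - e^(-2as)
    one_period = (1 - math.exp(-a * s)) / s - (math.exp(-a * s) - math.exp(-2 * a * s)) / s
    return one_period / (1 - math.exp(-2 * a * s))

def F_closed(s, a):
    return math.tanh(a * s / 2) / s

vals = [(F_periodic(s, 1.5), F_closed(s, 1.5)) for s in (0.3, 1.0, 4.0)]
print(vals)
```

Both forms agree to machine precision, confirming the simplification to (1/s) tanh(as/2).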
Recall the derivative rule
ℒ{f⁽ⁿ⁾(t)} = sⁿ ℒ{f(t)} − s^(n−1) f(0) − ⋯ − s f⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0)
Such as
ℒ{f′(t)} = s ℒ{f(t)} − f(0)
ℒ{f″(t)} = s² ℒ{f(t)} − s f(0) − f′(0)
Consider the linear constant-coefficient ODE
y″ + a y′ + b y = x(t)
Taking the Laplace transform gives the subsidiary equation, whose solution is
Y(s) = [(s + a) y(0) + y′(0)] Q(s) + Q(s) X(s), where Q(s) = 1/(s² + as + b)
With zero initial conditions, Y(s) = Q(s) X(s).
Note that Q depends neither on x(t) nor on the initial conditions (but only on a and b).
Step 3. Inversion of Y to obtain y : We reduce the subsidiary equation (usually by
partial fractions as in calculus) to a sum of terms whose inverses can be found from
the tables
Example 4.5 :
Find the complete solution of the initial value problem
y″ − y = e^(2t), with y(0) = 1 and y′(0) = 0
X(s) = ℒ{e^(2t)} = 1/(s − 2), Q(s) = 1/(s² − 1)
Y(s) = s y(0) Q(s) + y′(0) Q(s) + Q(s) X(s)
Y(s) = s/(s² − 1) + 1/((s² − 1)(s − 2))
Y(s) = s/((s − 1)(s + 1)) + 1/((s − 1)(s + 1)(s − 2))
y(t) = ℒ⁻¹{s/((s − 1)(s + 1))} + ℒ⁻¹{1/((s − 1)(s + 1)(s − 2))}
Using partial fractions and the Laplace tables,
y(t) = (2/3) e^(−t) + (1/3) e^(2t)
Example 4.6 :
Solve the initial value problem: y″ + 4y′ + 3y = e^t with y(0) = 0 and y′(0) = 2.
Taking the LT of the DE:
ℒ{y″} = s² Y(s) − s y(0) − y′(0) = s² Y(s) − 2
ℒ{y′} = s Y(s) − y(0) = s Y(s)
ℒ{y} = Y(s)
ℒ{e^t} = 1/(s − 1)
so
s² Y(s) − 2 + 4s Y(s) + 3 Y(s) = 1/(s − 1)
[s² + 4s + 3] Y(s) = 1/(s − 1) + 2 = (2s − 1)/(s − 1)
Y(s) = (2s − 1)/((s² + 4s + 3)(s − 1)) = (2s − 1)/((s + 1)(s + 3)(s − 1))
To evaluate the inverse LT, use partial fractions:
(2s − 1)/((s + 1)(s + 3)(s − 1)) = A/(s + 1) + B/(s + 3) + C/(s − 1)
2s − 1 = A(s + 3)(s − 1) + B(s + 1)(s − 1) + C(s + 1)(s + 3)
Put s = 1 then: 1 = 8C, then C = 1/8
Put s = −1 then: −3 = −4A, then A = 3/4
Put s = −3 then: −7 = 8B, then B = −7/8
Y(s) = 3/(4(s + 1)) − 7/(8(s + 3)) + 1/(8(s − 1))
Taking the inverse Laplace transform and using the Laplace tables,
y(t) = (3/4) e^(−t) − (7/8) e^(−3t) + (1/8) e^t
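As a sanity check (a sketch, not part of the original derivation), the closed-form solution and its derivatives can be substituted back into the ODE:

```python
import math

def y(t):   return 0.75 * math.exp(-t) - 0.875 * math.exp(-3 * t) + 0.125 * math.exp(t)
def yp(t):  return -0.75 * math.exp(-t) + 2.625 * math.exp(-3 * t) + 0.125 * math.exp(t)
def ypp(t): return 0.75 * math.exp(-t) - 7.875 * math.exp(-3 * t) + 0.125 * math.exp(t)

# residual of y'' + 4y' + 3y - e^t at sample points
residual = max(abs(ypp(t) + 4 * yp(t) + 3 * y(t) - math.exp(t)) for t in (0.0, 0.7, 2.5))
print(residual, y(0.0), yp(0.0))
```

The residual is zero to rounding error, and the initial conditions y(0) = 0, y′(0) = 2 are reproduced.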
Example 4.7 :
Find the complete solution of the initial value problem
d²y/dt² + 4y = f(t); y(0) = y′(0) = 0, where f(t) = { 0, 0 < t < 3; t, t > 3 }
Writing the forcing function with unit steps,
f(t) = (t − 3) H(t − 3) + 3 H(t − 3)
Taking the LT of the DE:
{s² Y(s) − s y(0) − y′(0)} + 4 Y(s) = e^(−3s)/s² + 3 e^(−3s)/s
Y(s){s² + 4} = e^(−3s)/s² + 3 e^(−3s)/s
Y(s) = e^(−3s) (3s + 1)/(s²(s² + 4))
Using partial fractions,
(3s + 1)/(s²(s² + 4)) = A/s + B/s² + (Cs + D)/(s² + 4)
A = 3/4, B = 1/4, C = −3/4, D = −1/4
(3s + 1)/(s²(s² + 4)) = (3/4)(1/s) + (1/4)(1/s²) − (3/4) s/(s² + 4) − (1/8)(2/(s² + 4))
ℒ⁻¹{(3s + 1)/(s²(s² + 4))} = 3/4 + t/4 − (3/4) cos 2t − (1/8) sin 2t
Applying the second shift theorem,
y(t) = { 0, 0 ≤ t ≤ 3; 3/4 + (t − 3)/4 − (3/4) cos 2(t − 3) − (1/8) sin 2(t − 3), t ≥ 3 }
Example 4.8 :
Solve y″ + 3y′ + 2y = 40(t² + t + 1) H(t − 2)
The initial conditions will be: y(0) = 5 and y′(0) = 5.
Taking the Laplace transform:
ℒ{y″} = s² Y(s) − s y(0) − y′(0) = s² Y(s) − 5s − 5
ℒ{y′} = s Y(s) − y(0) = s Y(s) − 5
ℒ{y} = Y(s)
For the right-hand side, shift the polynomial: t² + t + 1 = (t − 2)² + 5(t − 2) + 7, so
ℒ{40(t² + t + 1) H(t − 2)} = 40 e^(−2s) (7s² + 5s + 2)/s³
Hence
Y(s)[s² + 3s + 2] − (5s + 20) = 40 e^(−2s) (7s² + 5s + 2)/s³
Y(s) = 40 e^(−2s) (7s² + 5s + 2)/(s³(s² + 3s + 2)) + (5s + 20)/(s² + 3s + 2)
Taking the Laplace inverse to evaluate y(t):
First term (initial-condition part):
(5s + 20)/(s² + 3s + 2) = (5s + 20)/((s + 1)(s + 2)) = A/(s + 1) + B/(s + 2)
5s + 20 = A(s + 2) + B(s + 1)
Put s = −1: 15 = A, so A = 15
Put s = −2: 10 = −B, so B = −10
Then
(5s + 20)/(s² + 3s + 2) = 15/(s + 1) − 10/(s + 2)
y₁(t) = ℒ⁻¹{(5s + 20)/(s² + 3s + 2)} = 15 e^(−t) − 10 e^(−2t)
Second term (forced part):
Y₂(s) = 40 e^(−2s) (7s² + 5s + 2)/(s³(s² + 3s + 2))
(7s² + 5s + 2)/(s³(s + 1)(s + 2)) = A/s + B/s² + C/s³ + D/(s + 1) + E/(s + 2)
7s² + 5s + 2 = A s²(s + 1)(s + 2) + B s(s + 1)(s + 2) + C(s + 1)(s + 2) + D s³(s + 2) + E s³(s + 1)
Put s = 0: 2 = 2C, so C = 1
Put s = −1: 4 = −D, so D = −4
Put s = −2: 20 = 8E, so E = 5/2
Comparing the coefficients of s⁴: 0 = A + D + E, so A = 4 − 5/2 = 3/2
Comparing the coefficients of s³: 0 = 3A + B + 2D + E = 9/2 + B − 8 + 5/2, so B = 1
To check the results:
s²: 7 = 3 + 3 + 1
s¹: 5 = 2 + 3
s⁰: 2 = 2
Therefore
(7s² + 5s + 2)/(s³(s + 1)(s + 2)) = (3/2)/s + 1/s² + 1/s³ − 4/(s + 1) + (5/2)/(s + 2)
and, by the second shift theorem,
y₂(t) = 40 [3/2 + (t − 2) + (t − 2)²/2 − 4 e^(−(t−2)) + (5/2) e^(−2(t−2))] H(t − 2)
The complete solution is y(t) = y₁(t) + y₂(t).
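The five partial-fraction coefficients found above can be verified numerically (a sketch: both sides of the identity are evaluated at arbitrary sample points):

```python
def lhs(s):
    return (7 * s**2 + 5 * s + 2) / (s**3 * (s + 1) * (s + 2))

def rhs(s):
    return 1.5 / s + 1.0 / s**2 + 1.0 / s**3 - 4.0 / (s + 1) + 2.5 / (s + 2)

diff = max(abs(lhs(s) - rhs(s)) for s in (0.5, 1.0, 3.0, -0.5))
print(diff)
```

The two expressions agree at every test point, so A = 3/2, B = 1, C = 1, D = −4, E = 5/2 is consistent.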
Example 4.9 :
In the RC circuit shown here, there is no charge on the capacitor and no current
flowing at the time t = 0. The input voltage Ein is a constant Eo during the time t1 < t <
t2 and is zero at all other times. Find the output voltage Eout for this circuit.
The circuit equation is R dq/dt + q/C = E_in(t), with E_in(t) = E₀[H(t − t₁) − H(t − t₂)].
Taking the Laplace transform with q(0) = 0,
Q(s) = (E₀/R) (e^(−t₁ s) − e^(−t₂ s)) / (s(s + 1/(RC)))
Using partial fractions,
1/(s(s + 1/(RC))) = RC [1/s − 1/(s + 1/(RC))]
Then
ℒ⁻¹{1/(s(s + 1/(RC)))} = RC (1 − e^(−t/(RC)))
and, by the second shift theorem,
q(t) = E₀C (1 − e^(−(t−t₁)/(RC))) H(t − t₁) − E₀C (1 − e^(−(t−t₂)/(RC))) H(t − t₂)
and
E_out = q(t)/C = E₀ (1 − e^(−(t−t₁)/(RC))) H(t − t₁) − E₀ (1 − e^(−(t−t₂)/(RC))) H(t − t₂)
For a transformed charge of the form
Q(s) = (s + 8)/((s + 3)(s + 5)) + e^(−as) · 4/((s + 3)(s + 5))
Using partial fraction decompositions:
(s + 8)/((s + 3)(s + 5)) = A/(s + 3) + B/(s + 5)
s + 8 = A(s + 5) + B(s + 3)
Put s = −3: 5 = 2A, so A = 5/2
Put s = −5: 3 = −2B, so B = −3/2
4/((s + 3)(s + 5)) = A/(s + 3) + B/(s + 5)
4 = A(s + 5) + B(s + 3)
Put s = −3: 4 = 2A, so A = 2
Put s = −5: 4 = −2B, so B = −2
Q(s) = (5/2)/(s + 3) − (3/2)/(s + 5) + e^(−as)[2/(s + 3) − 2/(s + 5)]
Take the inverse Laplace to evaluate q(t), the charge on the capacitor:
q(t) = (5/2) e^(−3t) − (3/2) e^(−5t) + [2 e^(−3(t−a)) − 2 e^(−5(t−a))] H(t − a)
An inductor L and resistor R in series are driven by the periodic square-wave voltage
E(t) = E₀ for 0 < t < a, E(t) = 0 for a < t < 2a, with E(t + 2a) = E(t), and zero initial current.
The transformed circuit equation is
(Ls + R) I(s) = E(s)
For the square wave,
E(s) = E₀ (1 − e^(−as))/(s(1 − e^(−2as))) = E₀/(s(1 + e^(−as)))
so
(Ls + R) I(s) = E₀/(s(1 + e^(−as)))
I(s) = (E₀/L) · 1/(s(s + R/L)(1 + e^(−as)))
Using partial fractions,
1/(s(s + R/L)) = (L/R)[1/s − 1/(s + R/L)]
and expanding the periodic factor as a geometric series,
1/(1 + e^(−as)) = 1 − e^(−as) + e^(−2as) − e^(−3as) + ⋯
Then
ℒ⁻¹{(E₀/R)[1/s − 1/(s + R/L)]} = (E₀/R)(1 − e^(−Rt/L))
ℒ⁻¹{(E₀/R)[1/s − 1/(s + R/L)] e^(−as)} = (E₀/R)(1 − e^(−R(t−a)/L)) H(t − a)
ℒ⁻¹{(E₀/R)[1/s − 1/(s + R/L)] e^(−2as)} = (E₀/R)(1 − e^(−R(t−2a)/L)) H(t − 2a)
And so on, giving
i(t) = (E₀/R)[(1 − e^(−Rt/L)) − (1 − e^(−R(t−a)/L)) H(t − a) + (1 − e^(−R(t−2a)/L)) H(t − 2a) − ⋯]
i(t) = (E₀/R) Σ_(n=0)^∞ (−1)ⁿ (1 − e^(−R(t−na)/L)) H(t − na)
Example 4.12 :
In an undamped mass-spring system, resonance occurs if the frequency of the driving
force equals the natural frequency of the system. Then the model is
y″ + ω₀² y = K sin ω₀t
where ω₀² = k/m, k is the spring constant, and m is the mass of the body attached to
the spring. We assume y(0) = 0 and y′(0) = 0, for simplicity. Then the subsidiary equation
is
[s² + ω₀²] Y(s) = K ω₀/(s² + ω₀²), so Y(s) = K ω₀/(s² + ω₀²)²
By the convolution theorem,
y(t) = (K/ω₀) ∫_0^t sin ω₀τ sin ω₀(t − τ) dτ = (K/(2ω₀²)) (sin ω₀t − ω₀ t cos ω₀t)
The term t cos ω₀t grows without bound, which is the hallmark of resonance.
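The resonance solution can be checked by substituting it and its derivatives back into the ODE; a sketch with arbitrary sample values K = 3, ω₀ = 2:

```python
import math

K, w = 3.0, 2.0   # illustrative values for the amplitude and natural frequency

def y(t):   return K / (2 * w * w) * (math.sin(w * t) - w * t * math.cos(w * t))
def yp(t):  return K / 2 * t * math.sin(w * t)
def ypp(t): return K / 2 * (math.sin(w * t) + w * t * math.cos(w * t))

residual = max(abs(ypp(t) + w * w * y(t) - K * math.sin(w * t))
               for t in (0.0, 1.3, 7.0, 20.0))
print(residual, y(0.0), yp(0.0))
```

The residual vanishes and both initial conditions are zero, as required; note that y′(t) = (K/2) t sin ω₀t already shows the linearly growing envelope.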
Example 4.13 :
Using convolution, determine the response of the system modeled by
y″ + 3y′ + 2y = g(t)
where g(t) = 1 if 1 < t < 2 and zero elsewhere, with zero initial conditions.
This is a system with an input (a driving force) that acts for some time only.
Taking the Laplace transform we get
Y(s)/G(s) = Q(s), which gives Y(s) = Q(s) G(s)
with Q(s) = 1/(s² + 3s + 2) and q(t) = ℒ⁻¹{Q(s)} = e^(−t) − e^(−2t)
By the convolution theorem, y(t) = ∫_0^t q(t − τ) g(τ) dτ. Since the forcing input is
nonzero only for 1 < τ < 2:
For 0 < t < 1: y(t) = 0
For 1 < t < 2: y(t) = ∫_1^t [e^(−(t−τ)) − e^(−2(t−τ))] dτ = 1/2 − e^(−(t−1)) + (1/2) e^(−2(t−1))
For t > 2: y(t) = ∫_1^2 [e^(−(t−τ)) − e^(−2(t−τ))] dτ
= e^(−(t−2)) − e^(−(t−1)) − (1/2) e^(−2(t−2)) + (1/2) e^(−2(t−1))
Example 4.14 :
Solve y″ + y = tan t
with y(0) = 1 and y′(0) = 0. Since ℒ{tan t} is not available in the tables, write the transform
of the forcing function simply as F(s). Taking the LT of both sides of the equation,
s² Y(s) − s + Y(s) = F(s)
Y(s) = s/(s² + 1) + F(s)/(s² + 1)
The first term inverts to cos t; for the second, with
h(t) = ℒ⁻¹{1/(s² + 1)} = sin t
the convolution theorem gives the solution
y(t) = cos t + (h ∗ tan)(t) = cos t + ∫_0^t sin(t − τ) tan τ dτ
y(t) = f(t) + λ ∫_0^t K(t, τ) y(τ) dτ
is called a Volterra integral equation, where λ is a parameter and K(t, τ) is called the
kernel of the integral equation. Equations of this type are often associated with the
solution of initial value problems. The Laplace transform is well suited to the solution
of such integral equations when the kernel K(t, τ) has a special form that depends on t
and τ only through the difference t − τ , because then K(t, τ) = K(t − τ ) and the integral
becomes a convolution integral. Equations involving both the integral of an unknown
function and its derivative are called integro-differential equations. These equations
occur in many applications of mathematics. An example of an integro-differential
equation arises when considering the R–L–C circuit.
Example 4.15 :
Solve
y″ + y = ∫_0^t y(τ) sin(t − τ) dτ
with y(0) = 1 and y′(0) = 0. The right-hand side is the convolution y ∗ sin t, so taking
the LT of both sides of the equation,
s² Y(s) − s + Y(s) = Y(s)/(s² + 1)
(s² + 1 − 1/(s² + 1)) Y(s) = s
Y(s) = s(s² + 1)/((s² + 1)² − 1) = s(s² + 1)/(s²(s² + 2)) = (s² + 1)/(s(s² + 2))
Using partial fractions,
Y(s) = (s² + 1)/(s(s² + 2)) = 1/(2s) + s/(2(s² + 2))
y(t) = (1/2)(1 + cos √2 t)
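The solution of the integro-differential equation above can be verified numerically: the left-hand side y″ + y is compared with the convolution integral evaluated by the trapezoid rule (a sketch):

```python
import math

R2 = math.sqrt(2.0)

def y(t):   return 0.5 * (1.0 + math.cos(R2 * t))
def lhs(t): return -math.cos(R2 * t) + y(t)        # y'' + y

def rhs(t, n=20_000):                              # trapezoid rule for the convolution
    h = t / n
    total = 0.5 * (y(0.0) * math.sin(t) + y(t) * math.sin(0.0))
    for k in range(1, n):
        tau = k * h
        total += y(tau) * math.sin(t - tau)
    return total * h

diff = max(abs(lhs(t) - rhs(t)) for t in (0.5, 1.0, 2.0))
print(diff)
```

Both sides agree to quadrature accuracy, confirming y(t) = (1 + cos √2 t)/2.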
ℒ{t f(t)} = −dF(s)/ds
ℒ{t² f(t)} = d²F(s)/ds²
In general,
ℒ{tⁿ f(t)} = (−1)ⁿ dⁿF(s)/dsⁿ
Example 4.16 :
Solve
y″ + 2t y′ − 4y = 1, with zero initial conditions.
Taking the LT of both sides of the equation:
ℒ{y″} = s² Y(s) − s y(0) − y′(0) = s² Y(s)
ℒ{t y′} = −d/ds [ℒ{y′}] = −d/ds [s Y(s)] = −s Y′(s) − Y(s)
ℒ{y} = Y(s)
ℒ{1} = 1/s
so
s² Y(s) − 2s Y′(s) − 2 Y(s) − 4 Y(s) = 1/s
Rearranging,
Y′(s) − ((s² − 6)/(2s)) Y(s) = −1/(2s²)
This is a linear first-order differential equation in Y(s); solve it by an integrating factor:
Y′ + p(s) Y = r(s)
μ(s) = e^(∫ p(s) ds) = e^(∫ (3/s − s/2) ds) = s³ e^(−s²/4)
(s³ e^(−s²/4) Y(s))′ = −(s/2) e^(−s²/4)
Integrating,
s³ e^(−s²/4) Y(s) = e^(−s²/4) + c
Y(s) = 1/s³ + (c/s³) e^(s²/4)
Since Y(s) → 0 as s → ∞, we must take c = 0, so Y(s) = 1/s³ and
y(t) = t²/2
<4
( ) =
1
+ 2 sin − ≥4
12 3
4. Find the Laplace transform of the periodic function
f(t) = { 3, 0 < t < 2; 0, 2 < t < 4 }, with f(t + 4) = f(t)
5. Find the Laplace inverse of the following functions :
i. ( ) =
ii. ( ) =
iii. ( ) =
iv. ( ) =
( )( )
v. ( ) =
( )( )
Part B :
1. Find the solution of the initial value problem
+ 5 + 6 = (0) = 2 (0) = 1
y″ + y = sin 2t, y(0) = 2, y′(0) = 1
2. Compute the solution of the following differential equations:
i. y'' + 3y' + 2y = r(t), where r(t) = 1 if 0 < t < 1 and r(t) = 0, t > 1, with zero
initial conditions.
ii. y'' + y' = r(t), where r(t) = t if 0 < t < 1 and r(t) = 0, t > 1, with zero initial
conditions
iii. y'' + 9y = r(t), where r(t) = 8 sin t if 0 < t < π and r(t) = 0, t > π,
with initial conditions: y(0) = 0 and y'(0) = 4.
3. y′′ + 5y′ + 6y = x(t) where x (t) is the pulse function
3 0 ≤ < 2
( ) =
0 ≥2
and subject to the initial conditions y(0) = 0 and y′(0) = 2.
4. Find the complete solution of the initial value problem
d²y/dt² + 4y = f(t); y(0) = y′(0) = 0, where f(t) = { 0, 0 < t < 3; t, t > 3 }
5. Solve :
a) y′′ + 2y′ + 2 y = ( − 3) and y + 3y + 2 y = 1 + ( − 4)
The initial conditions are: y(0) = 0 and y′(0) = 0.
b) Determine the impulse response of the linear system whose response y(t) to an input
( ) − (1 + ) ( − ) = 1 − sinh
15. Use the convolution theorem to show that the solution to the initial value problem
+ = ( )
with y(0) = 0 and y'(0) = 0 is
5.1 - Introduction
Simultaneous ordinary differential equations involve two or more equations that contain
derivatives of two or more unknown functions of a single independent variable.
A system of ordinary differential equations of the first order can be considered as:
y1′ = f1(t, y1, y2, … , yn)
y2′ = f2(t, y1, y2, … , yn)
⋮
yn′ = fn(t, y1, y2, … , yn)
where each equation gives the first derivative of one of the unknown functions as a
mapping f_i depending on the independent variable t and on the n unknown functions
y1, … , yn.
Linear systems of ODEs are of practical importance in various applications. We will
apply matrices to the solution of a system of n linear differential equations in n
unknown functions:
y1′(t) = a11(t) y1(t) + a12(t) y2(t) + ⋯ + a1n(t) yn(t) + g1(t)
y2′(t) = a21(t) y1(t) + a22(t) y2(t) + ⋯ + a2n(t) yn(t) + g2(t)
⋮
yn′(t) = an1(t) y1(t) + an2(t) y2(t) + ⋯ + ann(t) yn(t) + gn(t)
The system is said to be homogeneous when all the functions gi (t) are zero, and to
be nonhomogeneous when at least one of them is nonzero. It is a linear system
because it is linear in the functions y1(t), y2(t), . . . , yn(t) and their derivatives, and it is
a variable coefficient system whenever at least one of the coefficients aij(t) is a
function of t; otherwise, it becomes a constant coefficient system. An initial value
problem for system involves seeking a solution of the system such that at t = t0 the
variables y1(t), y2(t), . . . , yn(t) satisfy the initial conditions.
y1(t0) = k1, y2(t0) = k2, … , yn(t0) = kn
where k1, k2, . . . , kn are given constants.
In vector form the unknowns are collected in the column vector
Y(t) = [y1(t), y2(t), … , yn(t)]^T
A solution to a system of differential equations is a set of differentiable functions that
satisfies each equation on some interval J.
Before we start our discussion of systems of linear differential equations, we first
observe that every ordinary differential equation of order n can be written as a
system consisting of n linear ordinary differential equations of first order; hence
we restrict our study to the solution of systems of differential equations of the first
order.
y⁽ⁿ⁾ + a_(n−1)(t) y⁽ⁿ⁻¹⁾ + ⋯ + a1(t) y′ + a0(t) y = g(t)
can be converted to a system of n first-order ODEs by setting
y1 = y, y2 = y′, y3 = y″, … , yn = y⁽ⁿ⁻¹⁾
The system of first-order DEs will be
y1′ = y2
y2′ = y3
⋮
y_(n−1)′ = yn
yn′ = −a0(t) y1 − a1(t) y2 − ⋯ − a_(n−2)(t) y_(n−1) − a_(n−1)(t) yn + g(t)
or in matrix form Y′ = A(t) Y + G(t), where
A(t) = [ 0      1      0    …  0      ]
       [ 0      0      1    …  0      ]
       [ ⋮      ⋮      ⋮    ⋱  ⋮      ]
       [ 0      0      0    …  1      ]
       [ −a0    −a1    −a2  …  −a_(n−1) ]
and G(t) = [0, 0, … , 0, g(t)]^T.
Example 5.1 :
Convert the initial value problem
y″ + 3y′ + 2y = 0, with y(0) = 1 and y′(0) = 3,
into a linear system of 1st-order DEs and put it in matrix form.
Let
y1 = y, y2 = y′
This gives the system of 1st-order ODEs:
y1′ = y2
y2′ = −2y1 − 3y2
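The conversion can be sanity-checked by integrating the first-order system numerically and comparing with the exact solution; from the characteristic roots −1 and −2 and the given initial conditions, the exact solution works out to y(t) = 5e^(−t) − 4e^(−2t) (derived here, not stated in the text). A sketch using a hand-rolled RK4 stepper:

```python
import math

def rhs(y1, y2):
    # y1' = y2,  y2' = -2*y1 - 3*y2  (companion form of y'' + 3y' + 2y = 0)
    return y2, -2.0 * y1 - 3.0 * y2

def rk4(t_end, h=1e-3):
    y1, y2 = 1.0, 3.0                 # y(0) = 1, y'(0) = 3
    for _ in range(int(round(t_end / h))):
        k1 = rhs(y1, y2)
        k2 = rhs(y1 + h/2*k1[0], y2 + h/2*k1[1])
        k3 = rhs(y1 + h/2*k2[0], y2 + h/2*k2[1])
        k4 = rhs(y1 + h*k3[0], y2 + h*k3[1])
        y1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y1

exact = lambda t: 5 * math.exp(-t) - 4 * math.exp(-2 * t)
print(rk4(2.0), exact(2.0))
```

The integrated system reproduces the exact scalar solution to high accuracy, confirming the conversion.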
Example 5.2 :
Consider the system of 2nd-order DEs
x″ + x − y = 0
y″ + y′ − sin t = 0
Introducing new dependent variables, let
y1 = x, y2 = y, y3 = x′, y4 = y′
Then
y3′ = x″ = y2 − y1
y4′ = y″ = −y4 + sin t
The equivalent system after conversion consists of four 1st-order ODEs:
y1′ = y3
y2′ = y4
y3′ = −y1 + y2
y4′ = −y4 + sin t
In matrix form, Example 5.1 reads
Y′ = [ 0   1  ] Y
     [ −2  −3 ]
while the system of this example becomes
[ y1′ ]   [ 0   0  1  0  ] [ y1 ]   [ 0     ]
[ y2′ ] = [ 0   0  0  1  ] [ y2 ] + [ 0     ]
[ y3′ ]   [ −1  1  0  0  ] [ y3 ]   [ 0     ]
[ y4′ ]   [ 0   0  0  −1 ] [ y4 ]   [ sin t ]
Example 5.3 :
Determine the general solutions for y and z in the case when
5 − 2 + 4 − =
+ 8 − 3 = 5
First, we eliminate one of the dependent variables from the two equations; in this
case, we eliminate z. From the second equation :
1
= ( + 8 − 5 )
3
Substituting in the first equation we get
2 1
5 − ( + 8 − 5 ) + 4 − ( + 8 − 5 ) =
3 3
Rearranging
2 1
5 − ( + 8 ′ + 5 ) + 4 − ( + 8 − 5 ) =
3 3
2 2 4 8
− − + =
3 3 3 3
OR
+ − 2 = −4
which is a 2nd-order ODE that can be solved (solve it to find y(t), then find z(t) by substitution).
The general solution will be :
( )=2 + +
( )=3 + +
= 2 − +4−
= − +2 +1
With the initial conditions
(0) = 1 (0) = 0
It is a nonhomogeneous system of 1st-order ODEs with constant coefficients.
We can use the same procedure as in the previous example, or proceed as follows:
Differentiate the first DE
= 2 − −2
Substituting for from the second DE
= 2 − (− +2 + 1) − 2
= 2 + −2 −1−2
Then Substituting for from the first DE
=− + 2 +4−
So, using this result to eliminate from the second order equation for
We look for solutions of the form
Y(t) = x e^(λt)
Substituting in the main system equation
Y′ = AY
we get
λ x e^(λt) = A x e^(λt)
Since e^(λt) is never zero, we can always divide both sides by e^(λt) and get the
eigenvalue problem
A x = λ x
where λ is an eigenvalue of A and x is a corresponding eigenvector.
We assume that A has a linearly independent set of n eigenvectors. This holds in most
applications, in particular if A is symmetric or skew-symmetric or has n different
eigenvalues.
Just like the solution of a second order homogeneous linear equation, there are three
possibilities, depending on the number and the type of eigenvalues of the coefficient
matrix A has. The possibilities are :
Distinct real eigenvalues
A repeated eigenvalue
Complex conjugate eigenvalues
Let us consider the different cases with (2x2) case :
Case 1: P( λ ) = 0 has two distinct real solutions λ1 and λ2 :
The corresponding eigenvectors for the eigenvalues λ1 and λ2 are x⁽¹⁾ and x⁽²⁾
respectively, and the general solution is
Y(t) = c1 x⁽¹⁾ e^(λ1 t) + c2 x⁽²⁾ e^(λ2 t)
Example 5.5 :
Find the general solution of the following system of DEs
= − 3
= + 5
Solution steps :
Step 1 : putting the system in the matrix form
( )= 1 + 2
Example 5.6 :
Find the general solution of the following system of DEs
=
= −
Solution :
Example 5.7 :
Solve the following system of 1st ODE
= − +
= 3 + 4
= 2 +
Given an initial condition for the system as , [ ( )] = [ ]
Step 1 : putting the system in the matrix form
′=
Step 4: Plug in the initial conditions and solve the system for the arbitrary constants
−0.172 C − 0.557 C =
0.891 C − 0.467 C + 0.894 C = 0
0.42 C + 0.687 C + 0.447 C = 1
Solving the system we get the constants are C = -3.228, C =0.997 and C = 3.738
The final solution will be
( ) = 0.555 [ . . ]
−
( ) = −2.876 . .
− 0.466 + 3.342
( ) = −1.356 . .
+ 0.685 + 1.671 C
U(t) = ϕ ( ) ( )
( )= 5 −2
+ = 5 −2
1 1
Calculation of the inverse of ()
() = 2 = 2
− 5 − 5
Calculation of the inverse of ()
( )= 2
− 5 1
Multiplying and integrating each element individually we obtain
( + 1) ⁄7
( )=
(−29⁄252) + (1⁄42)
( )= ( ) + ( ) ( )
( + 1) ⁄7
( )= 5 −2 + 5 −2
(−29⁄252) + (1⁄42)
(17⁄6) + (49⁄7)
( )= 5 −2 +
(1⁄12) + ( ⁄2)
Example 5.11 :
Solve the system
3 3 8
= +
1 5 4
P = [ −3  1 ],  P⁻¹ = [ −1/4  1/4 ],  D = P⁻¹AP = [ 2  0 ]
    [ 1   1 ]         [ 1/4   3/4 ]               [ 0  6 ]
= +
′ 2 0 −1⁄4 1⁄4 8
= = +
′ 0 6 1⁄4 3⁄4 4
′ 2 −2+
=
′ 6 +2+3
Solving each individually
+ +1
=
− − 1⁄3
+ +1
( ) = ( ) = −3 1
1 1 − − 1⁄3
( ) −3 + −4 − 10⁄3 −4 − 10⁄3
( )= = = −3 +
( ) + ⁄
+2 3 ⁄
2 3
(sI − A)⁻¹ = I/s + A/s² + A²/s³ + A³/s⁴ + ⋯
Example 5.12 :
Determine the state transition matrix Φ(t) of the system
Y′ = AY with A = [ 0   1  ]
                 [ −2  −3 ]
(sI − A)⁻¹ = (1/((s + 1)(s + 2))) [ s + 3  1 ]
                                  [ −2     s ]
= [ (s + 3)/((s + 1)(s + 2))   1/((s + 1)(s + 2)) ]
  [ −2/((s + 1)(s + 2))        s/((s + 1)(s + 2)) ]
Use partial fractions for each term individually, then evaluate ℒ⁻¹, to obtain
Φ(t) = ℒ⁻¹[(sI − A)⁻¹] = [ 2e^(−t) − e^(−2t)     e^(−t) − e^(−2t)   ]
                         [ −2e^(−t) + 2e^(−2t)   −e^(−t) + 2e^(−2t) ]
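Since Φ(t) = e^(At), the closed-form matrix above can be checked against a Taylor-series evaluation of the matrix exponential; a sketch in plain Python:

```python
import math

A = [[0.0, 1.0], [-2.0, -3.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm(A, t, terms=60):
    # e^(At) via its Taylor series; adequate for this small, well-scaled matrix
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, At)
        term = [[x / n for x in row] for row in term]   # now term = (At)^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

def phi(t):    # closed form from the example
    e1, e2 = math.exp(-t), math.exp(-2 * t)
    return [[2*e1 - e2, e1 - e2], [-2*e1 + 2*e2, -e1 + 2*e2]]

E, P = expm(A, 1.0), phi(1.0)
diff = max(abs(E[i][j] - P[i][j]) for i in range(2) for j in range(2))
print(diff)
```

The two matrices agree entrywise to machine precision at t = 1.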
Non-homogeneous Linear System of ODEs
Consider the system of 1st order DEs given by
Y′ = AY + G(t)
with the initial condition vector
Y(0) = k
Taking the Laplace transform of the system,
s Y(s) − Y(0) = A Y(s) + G(s)
(sI − A) Y(s) = Y(0) + G(s)
Pre-multiplying both sides by (sI − A)⁻¹ we obtain
Y(t) = ℒ⁻¹{(sI − A)⁻¹ [Y(0) + G(s)]}
Example 5.13 :
Solve the initial value problem
y1′ − 2y1 + y2 = sin t
y2′ + 2y1 − y2 = 1
with y1(0) = 1 and y2(0) = −1.
In matrix form Y′ = AY + g(t) with A = [ 2 −1 ; −2 1 ], so
(sI − A) = [ s  0 ] − [ 2   −1 ] = [ s − 2   1     ]
           [ 0  s ]   [ −2  1  ]   [ 2       s − 1 ]
Y(0) = [ 1 ; −1 ]
G(s) = [ ℒ(sin t) ; ℒ(1) ] = [ 1/(s² + 1) ; 1/s ]
Then, with det(sI − A) = (s − 2)(s − 1) − 2 = s(s − 3),
(sI − A)⁻¹ = (1/(s(s − 3))) [ s − 1   −1    ]
                            [ −2      s − 2 ]
and
Y(0) + G(s) = [ 1 + 1/(s² + 1) ; −1 + 1/s ] = [ (s² + 2)/(s² + 1) ; (1 − s)/s ]
Then
Y(s) = (sI − A)⁻¹ [Y(0) + G(s)]
After performing the product,
y1(s) = (s − 1)(s³ + s² + 2s + 1)/(s²(s² + 1)(s − 3))
y2(s) = −(s⁴ − s³ + 3s² + s + 2)/(s²(s² + 1)(s − 3))
The inverse transforms can be determined by partial fractions, and the result will be
y1(t) = 4/9 + t/3 − (1/5) sin t − (2/5) cos t + (43/45) e^(3t)
y2(t) = 5/9 + 2t/3 + (1/5) sin t − (3/5) cos t − (43/45) e^(3t)
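The pair of solutions above can be verified by substituting them (with their analytic derivatives) back into the system; a sketch:

```python
import math

def y1(t):  return 4/9 + t/3 - math.sin(t)/5 - 2*math.cos(t)/5 + 43*math.exp(3*t)/45
def y2(t):  return 5/9 + 2*t/3 + math.sin(t)/5 - 3*math.cos(t)/5 - 43*math.exp(3*t)/45
def y1p(t): return 1/3 - math.cos(t)/5 + 2*math.sin(t)/5 + 43*math.exp(3*t)/15
def y2p(t): return 2/3 + math.cos(t)/5 + 3*math.sin(t)/5 - 43*math.exp(3*t)/15

# residuals of y1' = 2*y1 - y2 + sin t  and  y2' = -2*y1 + y2 + 1
res = max(
    max(abs(y1p(t) - (2*y1(t) - y2(t) + math.sin(t))),
        abs(y2p(t) - (-2*y1(t) + y2(t) + 1)))
    for t in (0.0, 0.5, 1.5))
print(res, y1(0.0), y2(0.0))
```

The residuals vanish and the initial conditions y1(0) = 1, y2(0) = −1 are reproduced.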
+ 2 − = 1 +
+ + 2 = 3
Subjected to initial conditions y(0) = 5/2 and z(0) = -1/2
Y′ = [ 0   1  ] Y, and the initial conditions are Y(0) = [ 1  ]
     [ −5  −4 ]                                          [ −1 ]
6. Consider the differential equation u″ + 0.25u′ + 2u = 3 sin t with initial conditions
u(0) = 2, u′(0) = −2. Transform this problem into an equivalent one for a system of
first-order equations. Then solve the system.
BVP solution : For a differential equation with constant coefficients, the roots of the
characteristic polynomial fall into one of the following cases. If the roots are real, r ∈ ℝ,
then the boundary value problem has a unique solution for all y0, y1 ∈ ℝ. But if the roots
form a complex conjugate pair, then the boundary value problem belongs to exactly one of
the following three possibilities: (a) there exists a unique solution; (b) there exist infinitely
many solutions; (c) there exists no solution.
Hence, the boundary value problem above has infinitely many solutions given by
( ) = cos(2 ) − sin(2 ) ∀ ∈ℝ
Case 3 : y(0) = 1 , y(π/2)=1
using the boundary conditions we get
From the equations above we see that there is no solution for c1, hence there is no solution for
the boundary value problem above
In each problem the conditions following the differential equation are called boundary
conditions. Note that the boundary conditions in Problem 5, unlike those in Problems 1-4, don’t
require that y or y’ be zero at the boundary points, but only that y have the same value at
x = ±L, and that y’ have the same value at x = ±L. We say that the boundary conditions in
Problem 5 are periodic.
Obviously, y ≡ 0 (the trivial solution) is a solution of Problems 1-5 for any value of λ. For most
values of λ, there are no other solutions.
Example 6.2 :
Solve the eigenvalue problem
y″ + 3y′ + (2 + λ) y = 0, with y(0) = 0 and y(1) = 0
The characteristic equation is m² + 3m + 2 + λ = 0, with roots m = (−3 ± √(1 − 4λ))/2.
If λ = 1/4 then m1 = m2 = −3/2 are real and equal, so the general solution of the differential
equation is
y(x) = e^(−3x/2)(A + Bx)
The boundary condition y(0) = 0 requires that A = 0, and then y(1) = 0 requires that B = 0.
Therefore λ = 1/4 isn't an eigenvalue of the differential equation.
If λ > 1/4 then m1 and m2 are complex conjugates,
m1,2 = −3/2 ± iω
where
ω = √(4λ − 1)/2, so that λ = (1 + 4ω²)/4
so the general solution of the differential equation is
y(x) = e^(−3x/2)(A cos ωx + B sin ωx)
The boundary condition y(0) = 0 requires that A = 0, and the solution will be
y(x) = B e^(−3x/2) sin ωx
which satisfies y(1) = 0 with B ≠ 0 if and only if ω = nπ, where n is a positive integer,
n = 1, 2, …. Then the eigenvalues are
λn = (1 + 4ω²)/4 = (1 + 4n²π²)/4, n = 1, 2, 3, …
and the associated eigenfunctions are
yn(x) = e^(−3x/2) sin nπx
f(x) = Σ an φn(x) = a0 φ0(x) + a1 φ1(x) + a2 φ2(x) + ⋯
where the coefficients an are determined by using the inner product concept.
The orthogonal set of trigonometric functions is
S = { 1, cos(πx/L), cos(2πx/L), cos(3πx/L), … , sin(πx/L), sin(2πx/L), sin(3πx/L), … }
Suppose that f is a function defined on the interval (−L, L) and can be expanded in an
orthogonal series consisting of the trigonometric functions in the orthogonal set, that is,
f(x) ~ a0/2 + Σ_(n=1)^∞ [an cos(nπx/L) + bn sin(nπx/L)]
It is called the Trigonometric Fourier Series for f(x) on the interval [-L, L]. Note that each
function in the set S has period 2L; that is φn(x + 2L) = φn(x) for all x; therefore, if f(x) is
represented by its Trigonometric Fourier Series , it will be a periodic function with period 2L.
The coefficients a's and b's are called the Fourier coefficients are given by the Euler
formulae
a0 = (1/L) ∫_(−L)^(L) f(x) dx
an = (1/L) ∫_(−L)^(L) f(x) cos(nπx/L) dx : n = 0, 1, 2, …
bn = (1/L) ∫_(−L)^(L) f(x) sin(nπx/L) dx : n = 1, 2, 3, …
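The Euler formulas can be evaluated numerically; a sketch using the trapezoid rule, tried on the illustrative function f(x) = x − x² on [−π, π] (whose exact coefficients are a0 = −2π²/3, a1 = 4, b1 = 2):

```python
import math

def f(x):
    return x - x * x

def coeff(kind, n, L=math.pi, m=200_000):
    # Euler formulas via the trapezoid rule on [-L, L]
    h = 2 * L / m
    def g(x):
        w = n * math.pi * x / L
        return f(x) * (math.cos(w) if kind == 'a' else math.sin(w))
    total = 0.5 * (g(-L) + g(L)) + sum(g(-L + k * h) for k in range(1, m))
    return total * h / L

a0 = coeff('a', 0)
a1 = coeff('a', 1)
b1 = coeff('b', 1)
print(a0, -2 * math.pi**2 / 3)
print(a1, b1)
```

The numerical coefficients match the exact values, which is a convenient way to debug hand computations.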
f(x) ~ a0/2 + Σ [an cos(nπx/L) + bn sin(nπx/L)]
If f is even, its Fourier series reduces to a cosine series,
f(x) ~ a0/2 + Σ an cos(nπx/L)
with
an = (1/L) ∫_(−L)^(L) f(x) cos(nπx/L) dx = (2/L) ∫_0^L f(x) cos(nπx/L) dx and bn = 0
If f is odd, it reduces to a sine series,
f(x) ~ Σ bn sin(nπx/L)
with
bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx
and an = 0.
Most functions are neither even nor odd, but any function in an interval −L ≤ x ≤ L can
be expressed as the sum of an even function and an odd function defined over the
interval.
We therefore say that a function f(t) is periodic with period T if, for all its domain values t,
f(t + mT) = f(t), where m is any integer.
We define the frequency of a periodic function to be the reciprocal of its period, so that
f = 1/T, ω = 2πf = 2π/T
The smallest positive period is often called the fundamental period. Familiar periodic
functions are the cosine, sine, tangent, and cotangent. Examples of functions that are
not periodic are ln x, e^x, x^m, to mention just a few. Furthermore, if f(t) and g(t)
have period T, then a f(t) + b g(t) with any constants a and b also has the period T.
For f(x) = x − x² on (−π, π):
f(x) ~ a0/2 + Σ (an cos nx + bn sin nx)
a0 = (1/π) ∫_(−π)^(π) (x − x²) dx = −(2/3)π²
an = (1/π) ∫_(−π)^(π) (x − x²) cos nx dx = −(4/n²) cos nπ = (4/n²)(−1)^(n+1)
bn = (1/π) ∫_(−π)^(π) (x − x²) sin nx dx = −(2/n) cos nπ = (2/n)(−1)^(n+1)
Then the Fourier series is
f(x) ~ −π²/3 + Σ_(n=1)^∞ [(4/n²)(−1)^(n+1) cos nx + (2/n)(−1)^(n+1) sin nx]
Now we can examine the relationship between this series and f (x).
f‘(x)=1−2x is continuous for all x, hence f is piecewise smooth on [−π,π]. For −π <x <π, the
Fourier series converges to x −x2. At both π and −π, the Fourier series converges to
(1/2)[f(π−) + f(−π+)] = (1/2)[(π − π²) + (−π − π²)] = (1/2)(−2π²) = −π²
Example 6.7 :
Let f(x) = π + x for −π ≤ x ≤ 0 and f(x) = π − x for 0 ≤ x ≤ π.
The periodic extension of f(x) is the "triangular wave". In this example the extended function is
continuous for all x. One finds
f(x) ~ a0/2 + Σ (an cos nx + bn sin nx)
a0 = (1/π)[∫_(−π)^0 (π + x) dx + ∫_0^(π) (π − x) dx] = π
an = (2/π) ∫_0^(π) (π − x) cos nx dx = (2/(n²π))(1 − cos nπ)
bn = 0 (the function is even)
Hence
f(x) = π/2 + (4/π)[cos x + (1/9) cos 3x + (1/25) cos 5x + ⋯]
Since there are no jumps, one must expect convergence everywhere. It should, however, be
noted that at the corners (where f '(x) has a jump), the convergence is poorer than elsewhere.
Example 6.8 :
Find the Fourier series representation of f (x) = x + 1 for −1 ≤ x ≤ 1.
f(x) ~ a0/2 + Σ (an cos nπx + bn sin nπx)
a0 = ∫_(−1)^(1) (x + 1) dx = 2
an = ∫_(−1)^(1) (x + 1) cos nπx dx = 0
bn = ∫_(−1)^(1) (x + 1) sin nπx dx = −(2/(nπ)) cos nπ = (2/(nπ))(−1)^(n+1)
f(x) ~ 1 + (2/π) Σ_(n=1)^∞ ((−1)^(n+1)/n) sin nπx, −1 ≤ x ≤ 1.
Example 6.9 :
Find the Fourier coefficients of the periodic function f(t) given in figure below. The formula is
f(t) = { −k, −π < t < 0; k, 0 < t < π }, with f(t + 2π) = f(t)
Note : Functions of this kind occur as external forces acting on mechanical systems,
electromotive forces in electric circuits, etc. (The value of f(t) at a single point does
not affect the integral; hence we can leave f(t) undefined at t = 0, ±π ,… )
The Fourier series is
f(t) ~ a0/2 + Σ_(n=1)^∞ (an cos nt + bn sin nt)
Because f is odd, a0 = 0 and an = 0, while
bn = (2/π) ∫_0^(π) k sin nt dt = (2k/(nπ))(1 − cos nπ)
so bn = 4k/(nπ) for odd n and bn = 0 for even n. Hence
f(t) = (4k/π)[sin t + (1/3) sin 3t + (1/5) sin 5t + ⋯]
Since the solution is an infinite series, it is clearly not possible to plot a graph of the
result. However, by considering finite partial sums, it is possible to plot graphs of
approximations to the series. Denoting the sum of the first N terms in the infinite
series by fN (t), that is
f_N(t) = (4k/π) Σ_(n=1)^(N) sin(2n − 1)t/(2n − 1)
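The partial sums can be evaluated directly; a sketch that tracks f_N(π/2), which should approach the function value k = 1 as N grows:

```python
import math

k = 1.0

def f_N(t, N):
    # partial sum of (4k/pi) * sum_{n=1..N} sin((2n-1)t)/(2n-1)
    return 4 * k / math.pi * sum(math.sin((2*n - 1) * t) / (2*n - 1)
                                 for n in range(1, N + 1))

t = math.pi / 2          # square wave value there is k = 1
vals = [f_N(t, N) for N in (5, 50, 500)]
print(vals)
```

The sequence of partial sums converges slowly (like 1/N) toward 1, consistent with the series convergence at a point of continuity.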
Half-range expansions : Suppose f(t) is defined only on [0, L]. Since it does not matter
what the series converges to on [−L, 0], we can assume that the function f is extended
as an odd function on [−L, L]. Then its full Fourier series will contain only sine terms
(the coefficients an are all zero if f is odd).
For t ∈ [0, L] the series
f(t) ~ Σ_(n=1)^∞ bn sin(nπt/L)
where
bn = (1/L) ∫_(−L)^(L) f(t) sin(nπt/L) dt = (2/L) ∫_0^L f(t) sin(nπt/L) dt, n = 1, 2, …
will converge to f(t) as desired. Note that we never have to define f(t) on [−L, 0], but
just assume that f is odd. The series of sine terms is called a Fourier sine series for f(t)
on [0, L]. Similarly, let f(t) be piecewise continuous on the interval [0, L]. The Fourier
cosine series of f(t) on [0, L] is
f(t) ~ a0/2 + Σ_(n=1)^∞ an cos(nπt/L)
where
an = (2/L) ∫_0^L f(t) cos(nπt/L) dt, n = 0, 1, 2, …
The trigonometric cosine series is just the Fourier series for fe(t), the even
2L-periodic extension of f(t), and the sine series is the Fourier series for fo(t), the
odd 2L-periodic extension of f(t). These are called half-range expansions for f(t).
Example 6.10 :
Expand : f (x) = x2, 0 < x < L,
(a) in a cosine series (b) in a sine series (c) in a Fourier series.
(a) We have
a0 = (2/L) ∫_0^L x² dx = 2L²/3, an = (2/L) ∫_0^L x² cos(nπx/L) dx = (4L²/(n²π²))(−1)ⁿ
f(x) ~ L²/3 + (4L²/π²) Σ_(n=1)^∞ ((−1)ⁿ/n²) cos(nπx/L)
(b) We have
bn = (2/L) ∫_0^L x² sin(nπx/L) dx = (2L²/(nπ))(−1)^(n+1) + (4L²/(n³π³))[(−1)ⁿ − 1]
f(x) ~ Σ_(n=1)^∞ { (2L²/(nπ))(−1)^(n+1) + (4L²/(n³π³))[(−1)ⁿ − 1] } sin(nπx/L)
(c) With period T = L we have
a0 = (2/L) ∫_0^L x² dx = 2L²/3, an = (2/L) ∫_0^L x² cos(2nπx/L) dx = L²/(n²π²)
bn = (2/L) ∫_0^L x² sin(2nπx/L) dx = −L²/(nπ)
f(x) ~ L²/3 + Σ_(n=1)^∞ [(L²/(n²π²)) cos(2nπx/L) − (L²/(nπ)) sin(2nπx/L)]
The series (a), (b), and (c) converge to the 2L-periodic even extension of f, the 2L-
periodic odd extension of f, and the L-periodic extension of f, respectively. The graphs
of these periodic extensions are shown in Figure
4. In each problem, find the eigenvalues and eigenfunctions of the given boundary value problem.
Assume that all eigenvalues are real.
− + = 0 ℎ (1) = 0 ( ) = 0 > 1
7. Let f(x) = π − x. Represent f(x) by a Fourier series over the interval −π < x < π.
8. Find the Fourier series representation of f (x) = x on the interval −2 ≤ x ≤ 2. Then test the
convergence of the function and its Fourier representation ?
f(x) = { −1, −π < x < 0; 1, 0 < x ≤ π }
10. A sinusoidal voltage E₀ sin ωt, where t is time, is passed through a half-wave rectifier
that clips the negative portion of the wave. Find the Fourier series of the resulting periodic
function.
11. Obtain the complex form of the Fourier series of the saw tooth function f (t) defined by
( )= ≤ ≤ ( + )= ( )
Schematically plot the discrete amplitude and phase spectra for the function?
12. Find the Fourier series expansions of the function assuming both odd and even expansions
of the function given by the figure
7.1 - Introduction
A partial differential equation (PDE) : A differential equation that contains, in
addition to the dependent variable and the independent variables, one or more
partial derivatives of the dependent variable . The key defining property of a partial
differential equation (PDE) is that there is more than one independent variable x, y, . .
. . There is a dependent variable that is an unknown function of these variables u(x, y,
. . . ). We will often denote its derivatives by subscripts; thus ∂u/∂x = ux , and so on.
In general, it may be written in the form
F(x, y, … , u, u_x, u_y, … , u_xx, u_xy, …) = 0
involving several independent variables x, y, …, an unknown function u of these
variables, and the partial derivatives u_x, u_y, …, u_xx, u_xy, …, of the function. Subscripts
on dependent variables denote differentiations, e.g.,
u_x = ∂u/∂x, u_xy = ∂²u/∂y∂x, …
A linear second-order PDE in two independent variables has the general form
A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G
A linear second-order partial differential equation in two independent variables with constant
coefficients can be classified as one of three types. This classification depends only on the
coefficients of the second-order derivatives. For the equation to be of second order, A, B, and C
cannot all be zero. Define its discriminant to be B² − 4AC. The properties and behavior of its
solution are largely dependent on its type, as classified below.
If B2 – 4AC > 0 , then the equation is called hyperbolic.
If B2 – 4AC = 0 , then the equation is called parabolic.
If B2 – 4AC < 0, then the equation is called elliptic.
In general, elliptic equations describe processes in equilibrium. While the hyperbolic and
parabolic equations model processes which evolve over time.
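The classification rule above can be coded directly. This is a small illustrative sketch (the function name and the example coefficient values are ours, not the text's):

```python
def classify_pde(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + (lower-order terms) = G
    by the sign of the discriminant B^2 - 4AC."""
    disc = B**2 - 4*A*C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

# Laplace equation u_xx + u_yy = 0:  A=1, B=0, C=1
print(classify_pde(1, 0, 1))    # elliptic
# Heat equation u_t = c^2 u_xx (only u_xx is second order): A=c^2, B=0, C=0
print(classify_pde(4, 0, 0))    # parabolic
# Wave equation u_tt - c^2 u_xx = 0 (x and t as the two variables): A=-c^2, B=0, C=1
print(classify_pde(-4, 0, 1))   # hyperbolic
```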
Initial conditions
In the case of a linear first order PDE it will be seen later that in principle a general solution can
be found, though usually only the solution of a specific problem is required. In order to specify
such a problem for a first order PDE, the auxiliary condition that identifies the problem uniquely
involves prescribing the value the solution u is required to attain along a line in D. An auxiliary
condition of this nature is called a Cauchy condition, and the problem of finding the solution of
a PDE in D that satisfies a Cauchy condition is called a Cauchy problem for the PDE.
Since solutions of (1) and (2) depend on time t, we can prescribe what happens at t = 0; that is,
we can give initial conditions (IC). If f(x) denotes the initial temperature distribution throughout
the rod, then a solution u(x,t) must satisfy the single initial condition u(x, 0) = f(x), 0 < x < L. On
the other hand, for a vibrating string we can specify its initial displacement (or shape) f(x) as
well as its initial velocity g(x). In mathematical terms we seek a function u(x, t) that satisfies
the PDE and the two initial conditions u(x, 0) = f(x) and ut(x, 0) = g(x).
Boundary condition
The additional conditions may be imposed on spatial boundaries belonging to a region D where
the solution is required, and when this is done the conditions are called boundary conditions.
A typical boundary condition for a second order PDE defined in a rectangle could be that the
solution is required to assume specified values on the sides of the rectangle. If time is involved,
it is necessary to specify how the solution starts, and a condition of this type is called an initial
condition. Problems requiring initial and boundary conditions are called initial boundary value
problems (IBVPs).
Physical problems whose solution is governed by a 2nd order linear PDE of this type are
formulated in some region D of the (x, y)-plane on the boundary Γ of which suitable auxiliary
conditions are imposed. Typical conditions are:
(a) The specification of the functional form to be taken by the solution on the boundary, by
requiring that u(x, y) = φ(x, y) for (x, y) on Γ, where φ(x, y) is a given function. A boundary
condition of this type is called a Dirichlet condition.
(b) The specification of the functional form to be taken by the derivative of the solution
normal to the boundary, by requiring that
∂u/∂n (x, y) = ψ(x, y) for (x, y) on Γ
where ψ(x, y) is a given function and ∂/∂n is the directional derivative normal to the boundary
Γ. A boundary condition of this type is called a Neumann condition.
(c) The specification of the functional form to be taken by a linear combination of a Dirichlet
condition and a Neumann condition by the solution u(x, y) on the boundary Γ, by requiring that
a(x, y) u(x, y) + b(x, y) ∂u/∂n (x, y) = c(x, y) for (x, y) on Γ
where a(x, y), b(x, y), and c(x, y) are given functions. A boundary condition of this type is called
a mixed condition, and sometimes either a Robin condition or a boundary condition of the
third kind. When c(x, y) = 0, this condition is called a homogeneous mixed condition.
(d) The specification on Γ of the functional form to be taken by both the solution u(x, y) and its
derivative normal to the boundary, by requiring that
u(x, y) = φ(x, y) and ∂u/∂n (x, y) = ψ(x, y) for (x, y) on Γ
where φ(x, y) and ψ(x, y) are given functions and ∂/∂n is the directional derivative normal to
the boundary Γ. Boundary conditions of this type are called Cauchy conditions for a second
order PDE. When the solution u is a function of a space variable x and the time t, and Cauchy
conditions are specified when t = 0, so that Γ becomes the x-axis and
u(x, 0) = φ(x) and ut(x, 0) = ψ(x)
the Cauchy conditions are usually called initial conditions for a second order PDE.
Superposition principle
If the functions ui, i = 1, 2, … are separately solutions of a linear homogeneous partial
differential equation, then the series
u = c1 u1 + c2 u2 + · · · = Σ ci ui
is also a solution of the partial differential equation, provided that the derivatives appearing in
the PDE can be obtained by differentiating the series term by term.
Step 1 : Separate the variables. Substituting u(x, t) = X(x)T(t) into the heat equation
ut = c² uxx gives X T′ = c² X″ T, so
T′/(c² T) = X″/X
whenever XT ≠ 0. Since the left side of the equation depends only on t and the right side only
on x, both sides must equal a constant, say −λ:
(1/c²) T′/T = X″/X = −λ
which gives the ordinary differential equation
X″ + λX = 0
Step 2 : Separate the boundary conditions.
u(0, t) = X(0)T(t) = 0 and u(L, t) = X(L)T(t) = 0 for t ≥ 0
Since T(t) ≠ 0 for all t ≥ 0, this requires
X(0) = 0 and X(L) = 0
Step 3 : Solution of the eigenvalue problem
X″ + λX = 0
with boundary conditions X(0) = 0 and X(L) = 0. We look for values of λ which give
nontrivial solutions. This is a regular Sturm–Liouville problem that can be solved to
find the eigenvalues and eigenfunctions.
The general solution in this case is of the form
X(x) = A cos(√λ x) + B sin(√λ x)
From the condition X(0) = 0, we obtain A = 0. The condition X(L) = 0 gives
B sin(√λ L) = 0
If B = 0, the solution is trivial. For nontrivial solutions, B ≠ 0; hence, sin(√λ L) = 0.
This equation is satisfied when √λ L = nπ, that is, λn = (nπ/L)², n = 1, 2, …
The corresponding eigenfunctions are
Xn(x) = Bn sin(nπx/L)
Step 4 : Solution for T(t). For any given n, we get a solution Tn(t) from
Tn′ + λn c² Tn = 0
with λn = (nπ/L)², so that
Tn(t) = Cn e^(−(nπc/L)² t)
Step 5 : We now combine the eigenvalues and their eigenfunctions.
Hence, the nontrivial solutions of the heat equation which satisfy the two boundary
conditions are
un(x, t) = Xn(x)Tn(t) = An sin(nπx/L) e^(−(nπc/L)² t)
Since the PDE is linear and homogeneous, by the superposition principle
u(x, t) = Σ un(x, t) = Σ_(n=1)^∞ An sin(nπx/L) e^(−(nπc/L)² t)
Applying the initial condition,
u(x, 0) = f(x) = Σ An sin(nπx/L)
we conclude that the An must be the Fourier sine coefficients for the odd periodic extension
of f(x),
An = (2/L) ∫₀^L f(x) sin(nπx/L) dx
so that
u(x, t) = Σ An sin(nπx/L) e^(−(nπc/L)² t)
Discussion on solutions:
• Each term is a harmonic oscillation in x with exponential decay in t.
• Example: if f(x) = 100 (a constant initial temperature), the coefficients are
An = (2/L) ∫₀^L 100 sin(nπx/L) dx = 200(1 − (−1)ⁿ)/(nπ)
so that
u(x, t) = Σ [200(1 − (−1)ⁿ)/(nπ)] sin(nπx/L) e^(−(nπc/L)² t)
• Example: let the initial temperature be
f(x) = 100 sin(πx/80)
Satisfy the initial condition by substituting t = 0 in
u(x, t) = Σ An sin(nπx/L) e^(−(nπc/L)² t)
which gives
u(x, 0) = f(x) = Σ An sin(nπx/L) = 100 sin(πx/80)
By comparison: n = 1 and L = 80, so
A₁ = 100 and An = 0 for n ≥ 2
and the solution reduces to the single term
u(x, t) = 100 sin(πx/80) e^(−(πc/80)² t)
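A quick finite-difference check (a Python sketch; c = 1 and the sample point are our choices) confirms that the single-mode solution u(x, t) = 100 sin(πx/80) e^(−(πc/80)² t) satisfies the heat equation ut = c² uxx and the zero boundary conditions:

```python
import math

c, L = 1.0, 80.0

def u(x, t):
    # single-mode solution for the initial data f(x) = 100 sin(pi x / 80)
    return 100*math.sin(math.pi*x/L)*math.exp(-(math.pi*c/L)**2*t)

h = 1e-2
x, t = 20.0, 5.0
u_t  = (u(x, t+h) - u(x, t-h))/(2*h)                 # central difference in t
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t))/h**2      # second difference in x
print(abs(u_t - c**2*u_xx))   # small residual: u_t = c^2 u_xx
print(u(0, t), u(L, t))       # boundary values remain (numerically) zero
```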
This solution satisfies the PDE and the boundary conditions. To find An, we must use
the initial condition
u(x, 0) = f(x) = Σ An sin(nπx/L)
so the An must be the Fourier sine coefficients for the odd periodic extension of f(x).
Here f(x) = x for 0 ≤ x ≤ L/2 and f(x) = L − x for L/2 ≤ x ≤ L, so
An = (2/L) ∫₀^L f(x) sin(nπx/L) dx
   = (2/L) [ ∫₀^(L/2) x sin(nπx/L) dx + ∫_(L/2)^L (L − x) sin(nπx/L) dx ]
Integrating by parts and combining terms gives
An = (4L/(n²π²)) sin(nπ/2)
that is,
An = 4L/(n²π²) for n = 1, 5, 9, …
An = −4L/(n²π²) for n = 3, 7, 11, …
An = 0 for even n
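The closed form for An can be spot-checked against direct numerical quadrature (a Python sketch; L = 2 and the midpoint rule are our arbitrary choices):

```python
import math

L = 2.0

def f(x):
    # triangular initial temperature: x on [0, L/2], L - x on [L/2, L]
    return x if x <= L/2 else L - x

def a_n(n, N=100000):
    # midpoint-rule approximation of (2/L) * integral of f(x) sin(n pi x / L)
    h = L/N
    return (2/L)*h*sum(f((i+0.5)*h)*math.sin(n*math.pi*(i+0.5)*h/L)
                       for i in range(N))

for n in range(1, 6):
    closed = 4*L/(n*math.pi)**2 * math.sin(n*math.pi/2)
    print(n, a_n(n), closed)   # numeric and closed-form values agree
```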
We have
λn = (nπ/L)² and Xn(x) = cos(nπx/L), n = 0, 1, 2, …
Tn′ = −λn c² Tn
Tn(t) = e^(−(nπc/L)² t)
or constant multiples of this function. We now have a function
u(x, t) = Σ un(x, t)
u(x, t) = A₀/2 + Σ An cos(nπx/L) e^(−(nπc/L)² t)
Then
u(x, 0) = A₀/2 + Σ An cos(nπx/L) = f(x)
which is a Fourier cosine series with coefficients
An = (2/L) ∫₀^L f(x) cos(nπx/L) dx
Suppose the ends of the bar are insulated and the left half of the bar is initially at constant
temperature A while the right half is initially at temperature zero. Then
f(x) = A for 0 ≤ x ≤ L/2, and f(x) = 0 for L/2 < x ≤ L
Then
A₀ = (2/L) ∫₀^(L/2) A dx = A
An = (2/L) ∫₀^(L/2) A cos(nπx/L) dx = (2A/(nπ)) sin(nπ/2)
Since sin(nπ/2) = 0 if n is even and sin(nπ/2) = (−1)^(k+1) for odd n = 2k − 1, we can
retain only odd n in this summation to write the solution as
u(x, t) = A/2 + (2A/π) Σ_(k=1)^∞ [(−1)^(k+1)/(2k − 1)] cos((2k − 1)πx/L) e^(−((2k−1)πc/L)² t)
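A short numerical check of this series (a Python sketch; A = 1, L = 1, c = 1 are our choices) confirms that it starts near the piecewise initial data and relaxes to the mean temperature A/2:

```python
import math

A, L, c = 1.0, 1.0, 1.0

def u(x, t, terms=400):
    # partial sum of the insulated-bar cosine series
    s = A/2
    for k in range(1, terms+1):
        n = 2*k - 1
        s += (2*A/math.pi)*((-1)**(k+1)/n)*math.cos(n*math.pi*x/L) \
             * math.exp(-(n*math.pi*c/L)**2*t)
    return s

print(u(0.25, 0.001))  # close to A: left half starts hot
print(u(0.75, 0.001))  # close to 0: right half starts cold
print(u(0.50, 2.0))    # close to A/2: the equilibrium mean temperature
```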
φxx + φyy = 0
As the Laplace equation can then be written as ∆φ = 0, the symbol ∆ is called the Laplacian
operator in two dimensions, and ∆φ is called the Laplacian of φ. Consequently, a function φ
will be harmonic if its Laplacian is zero.
The simplest boundary value problems for the Laplace equation involve specifying either φ on
the boundary or the derivative of φ normal to the boundary usually denoted by ∂φ/∂n. The
specification of φ on the boundary is called a Dirichlet boundary condition, and the
requirement that φ satisfy the Laplace equation and a Dirichlet boundary condition is called a
Dirichlet boundary value problem for the harmonic function φ. The specification of ∂φ/∂n on
the boundary of R is called a Neumann boundary condition, and the requirement that φ satisfy
the Laplace equation and a Neumann boundary condition is called a Neumann boundary value
problem for the harmonic function φ.
Substituting u(x, y) = X(x)Y(y) into the Laplace equation and separating variables gives
−X″/X = Y″/Y = −λ
Thus,
X″ + λX = 0
Y″ − λY = 0
For nontrivial solutions of the problem, only λ > 0 gives an acceptable solution.
The general solution in this case is of the form
X(x) = A cos(√λ x) + B sin(√λ x)
And the boundary conditions
X(0) = X(a) = 0
Application of the boundary conditions then yields A = 0 and, for nontrivial solutions,
B ≠ 0; hence, sin(√λ a) = 0.
This equation is satisfied when √λ a = nπ, that is, λn = (nπ/a)², n = 1, 2, …
The corresponding solutions are
Xn(x) = sin(nπx/a)
Yn(y) = sinh(nπy/a)
Hence, the nontrivial solutions of the Laplace equation satisfy the two homogeneous
boundary conditions. Since the PDE is linear and homogeneous, by the superposition
principle the solution is the sum of all such solutions, so the general solution will be
u(x, y) = Σ Bn sin(nπx/a) sinh(nπy/a)
Applying the remaining boundary condition on the side y = b,
u(x, b) = Σ Bn sin(nπx/a) sinh(nπb/a) = f(x)
Then this corresponds to a Fourier sine series with the constants calculated by the
relation
Bn sinh(nπb/a) = (2/a) ∫₀^a f(x) sin(nπx/a) dx
Then
Bn = [2/(a sinh(nπb/a))] ∫₀^a f(x) sin(nπx/a) dx
With this choice of coefficients, the solution can be written
u(x, y) = Σ [2/(a sinh(nπb/a))] [∫₀^a f(ξ) sin(nπξ/a) dξ] sin(nπx/a) sinh(nπy/a)
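For the special boundary data f(x) = sin(πx/a), only the n = 1 coefficient survives, B₁ = 1/sinh(πb/a), and the resulting solution can be checked directly (a Python sketch; a = 2, b = 1 and the sample points are our choices):

```python
import math

a, b = 2.0, 1.0

def u(x, y):
    # only the n = 1 term survives when f(x) = sin(pi x / a)
    return math.sin(math.pi*x/a)*math.sinh(math.pi*y/a)/math.sinh(math.pi*b/a)

# the four boundary conditions
print(u(0, 0.3), u(a, 0.3), u(0.7, 0))     # three homogeneous sides, all ~0
print(u(0.7, b), math.sin(math.pi*0.7/a))  # u(x, b) reproduces f(x)

# harmonicity u_xx + u_yy = 0, via the five-point finite-difference Laplacian
h = 1e-2
x, y = 0.9, 0.4
lap = (u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y))/h**2
print(abs(lap))   # small residual
```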
uxx + uyy = 0 for (x, y) in R
Substituting u(x, y) = X(x)Y(y) and separating variables gives
X″/X = −Y″/Y = λ
Thus,
X″ − λX = 0 with X′(0) = 0
Y″ + λY = 0 with Y′(0) = 0 and Y′(b) = 0
This Sturm–Liouville problem for Y has eigenvalues and eigenfunctions. The general
solution in this case is of the form
Y(y) = A cos(√λ y) + B sin(√λ y)
Y′(y) = −A√λ sin(√λ y) + B√λ cos(√λ y)
Application of the boundary condition Y′(0) = 0 then yields B = 0, and for nontrivial
solutions, A ≠ 0; hence, sin(√λ b) = 0. This equation is satisfied when √λ b = nπ.
The eigenvalues are λn = (nπ/b)² with eigenfunctions
Yn(y) = cos(nπy/b), n = 0, 1, 2, …
For X we then have
Xn″ − λn Xn = 0
with Xn′(0) = 0; writing Xn(x) = C cosh(√λn x) + D sinh(√λn x), the condition
Xn′(0) = D√λn = 0 gives D = 0, so
Xn(x) = cosh(nπx/b)
un(x, y) = Xn(x)Yn(y) = cosh(nπx/b) cos(nπy/b)
u(x, y) = c₀ + Σ cn cosh(nπx/b) cos(nπy/b)
Applying the remaining (Neumann) boundary condition at x = a,
∂u/∂x (a, y) = f(y) = Σ cn (nπ/b) sinh(nπa/b) cos(nπy/b)
The n = 0 term is absent from this cosine expansion, so the constant term of the cosine
series of f(y) must vanish, that is, ∫₀^b f(y) dy = 0, and we would have a contradiction
if this integral were not zero. In this event, this problem would have no solution.
For the other coefficients in this cosine expansion, we have
cn (nπ/b) sinh(nπa/b) = (2/b) ∫₀^b f(y) cos(nπy/b) dy
cn = [2/(nπ sinh(nπa/b))] ∫₀^b f(y) cos(nπy/b) dy
while c₀ remains arbitrary, so
u(x, y) = c₀ + Σ cn cosh(nπx/b) cos(nπy/b)
Example 7.8 :
Find the steady state temperature distribution T(x, y) in the uniform slab of metal shown in Fig.,
given that no heat sources are present in the slab, the temperature on the edge x = 0 is
T(0, y) = f(y), the edges y = 0 and y = b are held at temperature zero, and T(x, y) → 0 as
x → ∞. The temperature satisfies
Txx + Tyy = 0
Substituting T(x, y) = X(x)Y(y) and separating variables gives
X″/X = −Y″/Y = λ
Thus,
X″ − λX = 0
Y″ + λY = 0
For nontrivial solutions of the problem, only λ > 0 gives an acceptable solution.
The general solution in this case is of the form
Y(y) = A cos(√λ y) + B sin(√λ y)
And the boundary conditions
Y(0) = Y(b) = 0
Application of the boundary conditions then yields A = 0 and, for nontrivial solutions,
B ≠ 0; hence, sin(√λ b) = 0.
This equation is satisfied when √λ b = nπ, that is, λn = (nπ/b)², n = 1, 2, …
The corresponding solutions are
Yn(y) = sin(nπy/b)
and, keeping only the solution of X″ − λn X = 0 that remains bounded as x → ∞,
Xn(x) = e^(−nπx/b)
Then the solution will be
T(x, y) = Σ Tn(x, y)
T(x, y) = Σ cn e^(−nπx/b) sin(nπy/b)
If we set x = 0 in this summation and use the boundary condition T(0, y) = f(y), this
reduces to
T(0, y) = Σ cn sin(nπy/b) = f(y)
from which it follows in the usual manner, by Fourier sine series, that
cn = (2/b) ∫₀^b f(y) sin(nπy/b) dy
so
T(x, y) = Σ cn e^(−nπx/b) sin(nπy/b)
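Each separated solution e^(−nπx/b) sin(nπy/b) is harmonic, vanishes on y = 0 and y = b, and decays as x grows, which a finite-difference check confirms (a Python sketch; n = 2, b = 1, and the sample points are our choices):

```python
import math

n, b = 2, 1.0

def T(x, y):
    # one separated mode of the semi-infinite slab solution
    return math.exp(-n*math.pi*x/b)*math.sin(n*math.pi*y/b)

# harmonicity T_xx + T_yy = 0 via the five-point Laplacian
h = 1e-3
x, y = 0.3, 0.4
lap = (T(x+h, y) + T(x-h, y) + T(x, y+h) + T(x, y-h) - 4*T(x, y))/h**2
print(abs(lap))              # small residual
print(T(0.3, 0), T(0.3, b))  # zero on the edges y = 0 and y = b
print(T(5.0, 0.25))          # decays toward zero as x grows
```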
X″ + λX = 0
Tn(t) = An cos(nπct/L) + Bn sin(nπct/L)
un(x, t) = [An cos(nπct/L) + Bn sin(nπct/L)] sin(nπx/L), n = 1, 2, …
Here λn are eigenvalues, and un(x, t) are eigenfunctions. The set of eigenvalues {λ1, λ2, · · · }
are called the spectrum. T(t) gives change of amplitude in t, harmonic oscillation and different
n gives different motion. These are called modes.
Since the PDE is linear and homogeneous, by the superposition principle the solution will be
the sum of all solutions; the general solution of the DE will be
u(x, t) = Σ [An cos(nπct/L) + Bn sin(nπct/L)] sin(nπx/L)
Applying the initial conditions,
u(x, 0) = Σ An sin(nπx/L) = f(x)
ut(x, 0) = g(x), 0 ≤ x ≤ L
ut(x, 0) = Σ Bn (nπc/L) sin(nπx/L) = g(x)
These equations will be satisfied if f(x) and g(x) can be represented by Fourier sine series.
An = (2/L) ∫₀^L f(x) sin(nπx/L) dx
Bn = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx
X″ + λX = 0
but now the zero initial displacement gives us u(x, 0) = X(x)T(0) = 0, so T(0) = 0. Solutions
of this problem for T(t) have the form
un(x, t) = Xn(x)Tn(t) = sin(nπx/L) sin(nπct/L), n = 1, 2, …
that satisfy the wave equation, the boundary conditions, and the initial condition
u(x, 0) = 0. To satisfy the initial velocity condition, we will generally (depending on g)
need a superposition
u(x, t) = Σ cn sin(nπx/L) sin(nπct/L)
ut(x, 0) = Σ cn (nπc/L) sin(nπx/L) = g(x)
cn = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx
Suppose the string is released from its horizontal position with an initial velocity given
by g(x) = x(1 + cos(πx/L)). First compute the integral
(2/L) ∫₀^L x(1 + cos(πx/L)) sin(nπx/L) dx = 3L/(2π) for n = 1
(2/L) ∫₀^L x(1 + cos(πx/L)) sin(nπx/L) dx = 2L(−1)ⁿ/(nπ(n² − 1)) for n = 2, 3, …
Then the solution will be
u(x, t) = [3L²/(2π²c)] sin(πx/L) sin(πct/L)
          + Σ_(n=2)^∞ [2L²(−1)ⁿ/(n²π²c(n² − 1))] sin(nπx/L) sin(nπct/L)
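The closed form of the velocity integral is easy to verify numerically (a Python sketch; L = 1 and the midpoint rule are our choices):

```python
import math

L = 1.0

def g(x):
    # initial velocity of the string
    return x*(1 + math.cos(math.pi*x/L))

def b_n(n, N=100000):
    # midpoint rule for (2/L) * integral of g(x) sin(n pi x / L) over [0, L]
    h = L/N
    return (2/L)*h*sum(g((i+0.5)*h)*math.sin(n*math.pi*(i+0.5)*h/L)
                       for i in range(N))

print(1, b_n(1), 3*L/(2*math.pi))
for n in range(2, 6):
    print(n, b_n(n), 2*L*(-1)**n/(n*math.pi*(n*n - 1)))
```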
ℒ{u(x, t)} = ∫₀^∞ e^(−st) u(x, t) dt ≡ U(x, s)
ℒ{∂u/∂x} = dU(x, s)/dx
Taking the transform of the heat equation ut = c² uxx,
ℒ{ut} = c² ℒ{uxx}
s U(x, s) − u(x, 0⁺) = c² d²U/dx²
With the initial condition u(x, 0) = u₀,
s U(x, s) − u₀ = c² d²U/dx²
d²U/dx² − (s/c²) U(x, s) = −u₀/c²
Then
U(x, s) = Uh(x, s) + Up(x, s) = A e^((√s/c) x) + B e^(−(√s/c) x) + u₀/s
Boundedness as x → ∞ requires A = 0, and the boundary condition u(0, t) = 0 gives
U(0, s) = 0, so B = −u₀/s and
U(x, s) = u₀/s − (u₀/s) e^(−(√s/c) x)
u(x, t) = u₀ ℒ⁻¹{1/s − (1/s) e^(−(√s/c) x)}
u(x, t) = u₀ [1 − erfc(x/(2c√t))] = u₀ erf(x/(2c√t))
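One consistent reading of this example (our assumption: initial temperature u₀ throughout, the end x = 0 held at zero, c = 1) gives u(x, t) = u₀ erf(x/(2c√t)), which behaves as expected; a quick check with math.erf:

```python
import math

u0, c = 1.0, 1.0

def u(x, t):
    # u(x, t) = u0 * erf(x / (2 c sqrt(t))) = u0 * (1 - erfc(...))
    return u0*math.erf(x/(2*c*math.sqrt(t)))

print(u(1e-12, 0.5))   # ~ 0 at the cooled end x = 0
print(u(10.0, 0.01))   # ~ u0 far from the end at small times
print(u(1.0, 0.25))    # intermediate value, strictly between 0 and u0
```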
Example 7.13 :
Parabolic PDE: solve ut = uxx for 0 < x < 1, t > 0, with u(x, 0) = 1 + sin(πx) and
u(0, t) = u(1, t) = 1. Taking the Laplace transform,
ℒ{ut} = ℒ{uxx}
s U(x, s) − u(x, 0⁺) = d²U/dx²
s U(x, s) − (1 + sin πx) = d²U/dx²
d²U/dx² − s U(x, s) = −(1 + sin πx)
Homogeneous solution:
m² − s = 0 → m = ±√s
Uh(x, s) = A e^(√s x) + B e^(−√s x)
Particular solution: try
Up = a + b sin(πx), so Up′ = bπ cos(πx) and Up″ = −bπ² sin(πx)
Substituting,
−sa − b(s + π²) sin(πx) = −(1 + sin πx)
so a = 1/s and b = 1/(s + π²), and
U(x, s) = A e^(√s x) + B e^(−√s x) + 1/s + sin(πx)/(s + π²)
The Laplace transform of the BCs:
u(0, t) = 1 → U(0, s) = 1/s
u(1, t) = 1 → U(1, s) = 1/s
From the first condition,
1/s = A + B + 1/s → B = −A
From the second condition, since sin π = 0,
1/s = A e^(√s) + B e^(−√s) + 1/s
A e^(√s) − A e^(−√s) = 0
so A = 0 and B = 0.
Then the solution will be
U(x, s) = 1/s + sin(πx)/(s + π²)
Taking the Laplace inverse,
u(x, t) = 1 + e^(−π² t) sin(πx)
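The result can be verified directly (a Python sketch): the candidate solution satisfies ut = uxx (checked here by finite differences), holds the boundary values at 1, and matches the initial profile 1 + sin πx.

```python
import math

def u(x, t):
    # candidate solution u(x, t) = 1 + e^{-pi^2 t} sin(pi x)
    return 1 + math.exp(-math.pi**2*t)*math.sin(math.pi*x)

h = 1e-3
x, t = 0.3, 0.2
u_t  = (u(x, t+h) - u(x, t-h))/(2*h)
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t))/h**2
print(abs(u_t - u_xx))                         # small residual: u_t = u_xx
print(u(0, 0.7), u(1, 0.7))                    # both boundary values stay at 1
print(u(0.3, 0), 1 + math.sin(0.3*math.pi))    # initial condition reproduced
```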
Example 7.14 :
Solve utt = uxx + sin(πx) for 0 < x < 1, t > 0, with u(x, 0) = ut(x, 0) = 0 and
u(0, t) = u(1, t) = 0. Taking the Laplace transform,
ℒ{utt} = s² U(x, s) − s u(x, 0) − ut(x, 0) = s² U(x, s)
ℒ{sin πx} = sin(πx)/s
Then
d²U/dx² − s² U(x, s) = −sin(πx)/s
Homogeneous solution:
m² − s² = 0 → m = ±s
Uh(x, s) = A e^(sx) + B e^(−sx)
Particular solution: try
Up = a cos(πx) + b sin(πx)
Up′ = −aπ sin(πx) + bπ cos(πx)
Up″ = −aπ² cos(πx) − bπ² sin(πx)
Therefore :
−aπ² cos(πx) − bπ² sin(πx) − s²[a cos(πx) + b sin(πx)] = −sin(πx)/s
a(s² + π²) cos(πx) + b(s² + π²) sin(πx) = sin(πx)/s
a = 0, b = 1/(s(s² + π²))
Then
Up(x, s) = sin(πx)/(s(s² + π²))
The solution will be
U(x, s) = A e^(sx) + B e^(−sx) + sin(πx)/(s(s² + π²))
Applying the boundary conditions :
U(0, s) = A + B = 0
U(1, s) = A e^s + B e^(−s) = 0
Solving the two relations gives A = B = 0.
The solution will be
U(x, s) = sin(πx)/(s(s² + π²))
Finally, applying the inverse Laplace transform,
u(x, t) = ℒ⁻¹{U(x, s)} = sin(πx) ℒ⁻¹{1/(s(s² + π²))}
Using partial fractions,
1/(s(s² + π²)) = (1/π²)(1/s) − (1/π²) s/(s² + π²)
ℒ⁻¹{1/(s(s² + π²))} = (1/π²)(1 − cos πt)
Then the solution will be
u(x, t) = [sin(πx)/π²] (1 − cos πt)
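A finite-difference check (a Python sketch; the sample point is our choice) confirms that this solution satisfies utt = uxx + sin πx with zero boundary values and zero initial displacement:

```python
import math

def u(x, t):
    # candidate solution u(x, t) = sin(pi x)(1 - cos(pi t)) / pi^2
    return math.sin(math.pi*x)*(1 - math.cos(math.pi*t))/math.pi**2

h = 1e-3
x, t = 0.3, 0.4
u_tt = (u(x, t+h) - 2*u(x, t) + u(x, t-h))/h**2
u_xx = (u(x+h, t) - 2*u(x, t) + u(x-h, t))/h**2
print(abs(u_tt - (u_xx + math.sin(math.pi*x))))  # small residual
print(u(0, 0.9), u(1, 0.9))                      # zero boundary values
print(u(0.3, 0))                                 # zero initial displacement
```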
Example 7.15 :
Solve the wave equation utt = c² uxx for a semi-infinite string (x > 0) by Laplace transforms,
given initial conditions (a) u(x, 0) and (b) ut(x, 0), and boundary conditions (c) u(0, t) = 0 and
(d) u(x, t) → 0 as x → ∞. Transforming the equation with respect to t,
c² d²U(x, s)/dx² = s² U(x, s) − s u(x, 0) − ut(x, 0)
OR
d²U(x, s)/dx² − (s²/c²) U(x, s) = −(1/c²)[s u(x, 0) + ut(x, 0)]
we obtain a solution of the differential equation as ( Solve it )
U(x, s) = A e^(sx/c) + B e^(−sx/c) + Up(x, s)
where Up(x, s) is a particular solution determined by the transformed initial data.
Transforming the given boundary conditions (c) and (d), we have U(0, s) = 0 and U(x, s) → 0 as x
→ ∞, which can be used to determine A and B. From the second condition A = 0, and the first
condition then gives
B = −Up(0, s)
so that
U(x, s) = Up(x, s) − Up(0, s) e^(−sx/c)
Fortunately in this case these transforms can be inverted from tables of Laplace transforms.
Using the second shift theorem (the factor e^(−sx/c) corresponds to a delay of x/c in t)
together with the Laplace transform pairs
ℒ⁻¹{a/(s² − a²)} = sinh(at)
ℒ⁻¹{2a³/(s² − a²)²} = at cosh(at) − sinh(at)
the solution u(x, t) is obtained as a combination of sinh and cosh terms in t together with
corresponding terms in (t − x/c) that are switched on only for t > x/c.
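The first transform pair can be sanity-checked by truncated numerical integration of the defining integral (a Python sketch; a = 1, s = 3, and the truncation T = 60 are our choices):

```python
import math

def laplace_numeric(f, s, T=60.0, N=200000):
    # midpoint-rule approximation of the (truncated) integral of e^{-st} f(t)
    h = T/N
    return h*sum(math.exp(-s*(i+0.5)*h)*f((i+0.5)*h) for i in range(N))

a, s = 1.0, 3.0
num   = laplace_numeric(lambda t: math.sinh(a*t), s)
exact = a/(s**2 - a**2)
print(num, exact)   # both approximately 0.125
```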
uxx + uyy = 0
Subjected to conditions
u(0, y) = u(x, 0) = 0
Solve by :
i. Separation of variables
ii. Laplace Transform
8. Solve by Laplace transform :
ux + ut = x