Laplace Transform and Numerical Analysis 1
Fig. 1 The s-plane for graphical plotting of values in the s-domain, such as the poles of the transfer function (Davidson, B. (2007), Experimental and simulation-based assessment of the human postural response to sagittal plane perturbations with localized muscle fatigue and aging)
Laplace Transform
The Laplace transform is one of the significant hallmarks of mathematics and engineering, as it is a useful tool for simplifying complex problems, such as finding solutions to differential equations by transforming them into algebraic expressions. The Laplace transform has an advantage over the Fourier transform since it is applicable to a wider domain of functions. Recall that the formula for the Fourier transform is

$$F(\omega) = \int_{-\infty}^{\infty} f(x)\,e^{-j\omega x}\,dx$$
The notation for the Laplace transform of $f(t)$ and the inverse Laplace transform of $F(s)$ is

$$\mathcal{L}\{f(t)\} = F(s), \qquad \mathcal{L}^{-1}\{F(s)\} = f(t), \qquad f(t) \rightleftarrows F(s)$$
Example
a. Find the Laplace transform of $f(t) = \begin{cases} e^{-t} & \text{for } t \ge 0 \\ 0 & \text{for } t < 0 \end{cases}$

$$F(s) = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^{\infty} e^{-t}e^{-st}\,dt = \int_0^{\infty} e^{-t-st}\,dt = \int_0^{\infty} e^{-(1+s)t}\,dt$$
$$= \left.\frac{e^{-(1+s)t}}{-(1+s)}\right|_0^{\infty} = \left(\frac{0}{-(1+s)}\right) - \left(\frac{1}{-(1+s)}\right) = \frac{1}{s+1}$$
b. Find the Laplace transform of $f(t) = \sin 2t$

Converting $\sin 2t$ to its complex exponential equivalent $\dfrac{e^{j2t} - e^{-j2t}}{j2}$,

$$F(s) = \int_0^{\infty} \sin 2t\, e^{-st}\,dt = \int_0^{\infty} \left(\frac{e^{j2t} - e^{-j2t}}{j2}\right) e^{-st}\,dt = \int_0^{\infty} \left(\frac{e^{-(s-j2)t} - e^{-(s+j2)t}}{j2}\right) dt$$
$$= \left.\frac{e^{-(s-j2)t}}{-j2(s-j2)}\right|_0^{\infty} - \left.\frac{e^{-(s+j2)t}}{-j2(s+j2)}\right|_0^{\infty} = \frac{1}{j2(s-j2)} - \frac{1}{j2(s+j2)}$$
$$= \frac{1}{j2}\left(\frac{(s+j2)-(s-j2)}{(s-j2)(s+j2)}\right) = \frac{1}{j2}\left(\frac{j4}{s^2 + 4}\right) = \frac{2}{s^2 + 4}$$
c. Determine the Laplace transform of $f(t) = t^2 u(t)$

$$F(s) = \int_0^{\infty} f(t)e^{-st}\,dt = \int_0^{\infty} t^2 e^{-st}\,dt$$

Let $u = t^2$, $du = 2t\,dt$, $dv = e^{-st}\,dt$ and $v = -\dfrac{e^{-st}}{s}$. Employing integration by parts,

$$\int_a^b u\,dv = uv\Big|_a^b - \int_a^b v\,du$$
$$\int_0^{\infty} t^2 e^{-st}\,dt = \left.(t^2)\left(-\frac{e^{-st}}{s}\right)\right|_0^{\infty} - \int_0^{\infty} \left(-\frac{e^{-st}}{s}\right) 2t\,dt$$

The term $(t^2)\left(-\dfrac{e^{-st}}{s}\right)\Big|_0^{\infty}$ reduces to zero upon inspection, hence

$$\int_0^{\infty} t^2 e^{-st}\,dt = 0 - \int_0^{\infty} \left(-\frac{e^{-st}}{s}\right) 2t\,dt = \frac{2}{s}\int_0^{\infty} te^{-st}\,dt$$

Let $u = t$, $du = dt$, $dv = e^{-st}\,dt$ and $v = -\dfrac{e^{-st}}{s}$. Employing integration by parts again,

$$\frac{2}{s}\int_0^{\infty} te^{-st}\,dt = \frac{2}{s}\left[\left.(t)\left(-\frac{e^{-st}}{s}\right)\right|_0^{\infty} - \int_0^{\infty}\left(-\frac{e^{-st}}{s}\right)dt\right] = \frac{2}{s}\left[0 + \frac{1}{s}\int_0^{\infty} e^{-st}\,dt\right]$$
$$= \frac{2}{s}\left[\frac{1}{s}\left(\left.-\frac{e^{-st}}{s}\right|_0^{\infty}\right)\right] = \frac{2}{s^2}\left[\left(-\frac{0}{s}\right) - \left(-\frac{1}{s}\right)\right] = \frac{2}{s^2}\left(\frac{1}{s}\right) = \frac{2}{s^3}$$
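The three transform pairs worked out above can be spot-checked numerically by truncating the defining integral $\int_0^{\infty} f(t)e^{-st}\,dt$. The sketch below is illustrative only (plain Python; `laplace_numeric` is our own helper name), using the trapezoidal rule on a truncated interval:

```python
import math

def laplace_numeric(f, s, upper=30.0, n=100_000):
    """Approximate F(s) = integral of f(t)*e^(-st) over [0, inf) with the
    trapezoidal rule on a truncated interval [0, upper]; valid when the
    exponential tail beyond `upper` is negligible."""
    h = upper / n
    g = lambda t: f(t) * math.exp(-s * t)
    total = 0.5 * (g(0.0) + g(upper))
    for k in range(1, n):
        total += g(k * h)
    return total * h

s = 2.0
print(laplace_numeric(lambda t: math.exp(-t), s))     # ≈ 1/(s+1) = 1/3
print(laplace_numeric(lambda t: math.sin(2 * t), s))  # ≈ 2/(s²+4) = 0.25
print(laplace_numeric(lambda t: t * t, s))            # ≈ 2/s³    = 0.25
```

Each printed value should agree with the closed-form result to several decimal places, confirming the derivations above.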
Linearity $\quad \mathcal{L}\{af(t) + bg(t)\} = aF(s) + bG(s)$

Example
$$\mathcal{L}\{e^t - e^{-t}\} = \mathcal{L}\{e^t\} - \mathcal{L}\{e^{-t}\} = \frac{1}{s-1} - \frac{1}{s+1} = \frac{(s+1)-(s-1)}{(s-1)(s+1)} = \frac{2}{s^2 - 1}$$
Time Scaling $\quad \mathcal{L}\{f(at)\} = \dfrac{1}{|a|}F\!\left(\dfrac{s}{a}\right)$

Example
If $\mathcal{L}\{f(t)\} = \dfrac{s}{s+1}$, determine $\mathcal{L}\{f(3t)\}$.

$$\mathcal{L}\{f(3t)\} = \frac{1}{|3|}F\!\left(\frac{s}{3}\right) = \frac{1}{3}\left(\frac{\frac{s}{3}}{\frac{s}{3}+1}\right) = \frac{1}{3}\left(\frac{s}{s+3}\right) = \frac{s}{3s+9}$$
Differentiation $\quad \mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - sf^{(n-2)}(0) - f^{(n-1)}(0)$

Example
Evaluate $\mathcal{L}\left\{\dfrac{d^2y}{dt^2} + \dfrac{2\,dy}{dt} + y\right\}$ given $y(0) = 0$ and $y'(0) = 0$.

$$\mathcal{L}\left\{\frac{d^2y}{dt^2} + \frac{2\,dy}{dt} + y\right\} = \mathcal{L}\left\{\frac{d^2y}{dt^2}\right\} + 2\,\mathcal{L}\left\{\frac{dy}{dt}\right\} + \mathcal{L}\{y\}$$
$$= [s^2 Y(s) - sy(0) - y'(0)] + 2[sY(s) - y(0)] + Y(s) = s^2 Y(s) + 2sY(s) + Y(s)$$
Integration $\quad \mathcal{L}\left\{\displaystyle\int_0^t f(\tau)\,d\tau\right\} = \dfrac{1}{s}F(s)$

Multiplication by $t$ $\quad \mathcal{L}\{tf(t)\} = -\dfrac{dF(s)}{ds}$

Modulation $\quad \mathcal{L}\{\sin\omega t\; f(t)\} = \dfrac{1}{j2}\left[F(s-j\omega) - F(s+j\omega)\right]$
$\qquad\qquad\; \mathcal{L}\{\cos\omega t\; f(t)\} = \dfrac{1}{2}\left[F(s-j\omega) + F(s+j\omega)\right]$

Convolution $\quad \mathcal{L}\left\{\displaystyle\int_0^t f(t-\tau)\,g(\tau)\,d\tau\right\} = \mathcal{L}\{f * g\} = F(s)G(s)$
However, things get messy once one attempts to find the inverse Laplace transform using the inversion formula directly. To avoid the dirty work, one may instead use the previous table of Laplace transforms and their properties to reverse-engineer the process. This takes intuition and trial and error, but with practice one can become adept at performing the transforms almost without thinking.
Example
a. Find the inverse Laplace transform of $F(s) = \dfrac{1}{3s+9}$

We can factor out $\dfrac{1}{3}$ from the function to obtain $F(s) = \dfrac{1}{3}\cdot\dfrac{1}{s+3}$. From the transform pair $\mathcal{L}\{e^{-at}\} = \dfrac{1}{s+a}$,

$$f(t) = \mathcal{L}^{-1}\left\{\frac{1}{3}\cdot\frac{1}{s+3}\right\} = \frac{1}{3}e^{-3t}$$

b. Find the inverse Laplace transform of $F(s) = \dfrac{4}{s} + \dfrac{s+1}{s^2+4} - \dfrac{2}{s^3}$

For the first term $\mathcal{L}^{-1}\left\{\dfrac{4}{s}\right\}$, we use the transform pair

$$\mathcal{L}\{1\} = \frac{1}{s} \quad\Longrightarrow\quad 1 = \mathcal{L}^{-1}\left\{\frac{1}{s}\right\}$$
$$\mathcal{L}^{-1}\left\{\frac{4}{s}\right\} = 4\,\mathcal{L}^{-1}\left\{\frac{1}{s}\right\} = 4(1) = 4$$
For the second term $\mathcal{L}^{-1}\left\{\dfrac{s+1}{s^2+4}\right\}$, we can break it down into two terms:

$$\mathcal{L}^{-1}\left\{\frac{s+1}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{s}{s^2+4} + \frac{1}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{s^2+4}\right\}$$
For $\mathcal{L}^{-1}\left\{\dfrac{s}{s^2+4}\right\}$, we use the following transform from the table:

$$\mathcal{L}\{\cos(at)\} = \frac{s}{s^2+a^2} \quad\Longrightarrow\quad \cos(at) = \mathcal{L}^{-1}\left\{\frac{s}{s^2+a^2}\right\}$$

Performing the inverse Laplace transform,

$$\mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{s}{s^2+(2)^2}\right\} = \cos(2t)$$
For $\mathcal{L}^{-1}\left\{\dfrac{1}{s^2+4}\right\}$, we use the following transform from the table:

$$\mathcal{L}\{\sin(at)\} = \frac{a}{s^2+a^2} \quad\Longrightarrow\quad \sin(at) = \mathcal{L}^{-1}\left\{\frac{a}{s^2+a^2}\right\}$$

Performing the inverse Laplace transform,

$$\mathcal{L}^{-1}\left\{\frac{1}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{2}\cdot\frac{2}{s^2+(2)^2}\right\} = \frac{1}{2}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+(2)^2}\right\} = \frac{1}{2}\sin(2t)$$
For $\mathcal{L}^{-1}\left\{\dfrac{2}{s^3}\right\}$, we use the following transform from the table:

$$\mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}} \quad\Longrightarrow\quad t^n = \mathcal{L}^{-1}\left\{\frac{n!}{s^{n+1}}\right\}$$
$$\mathcal{L}^{-1}\left\{\frac{2}{s^3}\right\} = 2\,\mathcal{L}^{-1}\left\{\frac{1}{s^3}\right\} = 2\,\mathcal{L}^{-1}\left\{\frac{1}{2!}\cdot\frac{2!}{s^{2+1}}\right\} = \frac{2}{2!}\,\mathcal{L}^{-1}\left\{\frac{2!}{s^{2+1}}\right\} = t^2$$
Hence the complete inverse Laplace transform is

$$f(t) = \mathcal{L}^{-1}\left\{\frac{4}{s} + \frac{s+1}{s^2+4} - \frac{2}{s^3}\right\} = 4 + \cos(2t) + \frac{1}{2}\sin(2t) - t^2$$
Example
a. Find the convolution of $f(t) = e^t$ and $g(t) = \cos t$

The convolution of the two functions is written as

$$f * g = \int_0^t f(t-\tau)\,g(\tau)\,d\tau = \int_0^t e^{t-\tau}\cos\tau\,d\tau$$
Of course, the integral can be obtained using integration by parts. However, to avoid going through the integral, we can take advantage of the convolution property of the Laplace transform.

$$F(s) = \mathcal{L}\{e^t\} = \frac{1}{s-1}, \qquad G(s) = \mathcal{L}\{\cos t\} = \frac{s}{s^2+1}$$
$$\mathcal{L}\{f * g\} = F(s)G(s) = \left(\frac{1}{s-1}\right)\left(\frac{s}{s^2+1}\right) = \frac{s}{(s-1)(s^2+1)}$$
We need to perform the inverse Laplace transform to obtain the convolution. From the resulting expression, we obtain the equivalent partial fractions:

$$\frac{s}{(s-1)(s^2+1)} = \frac{A}{s-1} + \frac{Bs}{s^2+1} + \frac{C}{s^2+1}$$
$$s = A(s^2+1) + Bs(s-1) + C(s-1)$$
If we let $s = 1$, the above equation reduces to
$$1 = A((1)^2+1) + B(1)(1-1) + C(1-1) = 2A \quad\Longrightarrow\quad A = \frac{1}{2}$$

If we let $s = j$, the above equation reduces to
$$j = A((j)^2+1) + B(j)(j-1) + C(j-1) = B(j^2 - j) + C(j-1) = -B(1+j) + C(j-1)$$

If we let $s = -j$, the above equation reduces to
$$-j = A((-j)^2+1) + B(-j)(-j-1) + C(-j-1) = B(j^2 + j) + C(-j-1) = -B(1-j) - C(j+1)$$
Solving for $B$ and $C$ by adding the two equations,
$$j + (-j) = -B(1+j) - B(1-j) + C(j-1) - C(j+1)$$
$$0 = -2B - 2C \quad\Longrightarrow\quad B = -C$$

Substituting back,
$$j = -(-C)(1+j) + C(j-1) = C(1+j) + C(j-1) = j2C \quad\Longrightarrow\quad C = \frac{1}{2}, \quad B = -C = -\frac{1}{2}$$
Going back to the partial fractions and substituting $A = \frac{1}{2}$, $B = -\frac{1}{2}$, and $C = \frac{1}{2}$,

$$\frac{s}{(s-1)(s^2+1)} = \frac{1}{2(s-1)} - \frac{s}{2(s^2+1)} + \frac{1}{2(s^2+1)}$$

To obtain the convolution, we solve for the inverse Laplace transform:

$$f * g = \mathcal{L}^{-1}\left\{\frac{1}{2(s-1)}\right\} - \mathcal{L}^{-1}\left\{\frac{s}{2(s^2+1)}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{2(s^2+1)}\right\} = \frac{e^t}{2} - \frac{1}{2}\cos t + \frac{1}{2}\sin t$$
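The closed-form result can be cross-checked by evaluating the convolution integral numerically and comparing. The sketch below is illustrative (plain Python; `convolution` is our own helper name), using the trapezoidal rule:

```python
import math

def convolution(f, g, t, n=20_000):
    """Evaluate (f*g)(t) = integral of f(t-tau)*g(tau) over [0, t]
    with the trapezoidal rule."""
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        total += f(t - tau) * g(tau)
    return total * h

# f(t) = e^t, g(t) = cos t; compare against the closed form just derived
for t in (0.5, 1.0, 2.0):
    numeric = convolution(math.exp, math.cos, t)
    closed = math.exp(t) / 2 - math.cos(t) / 2 + math.sin(t) / 2
    print(t, numeric, closed)
```

The two columns should agree to several decimal places at each sample point.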
Newton-Raphson Method
Consider the equation
$$x = \sqrt{2}$$
With a scientific calculator, one could easily compute its decimal value without even thinking. However, without the aid of one, things get complicated. To work around this, the equation can be rewritten as a function $f(x)$ with $\sqrt{2}$ as one of its roots:
$$f(x) = x^2 - 2$$
We use the Newton-Raphson method for finding the root, whose formula is given as
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
Substituting the function $f(x_n)$, we obtain
$$x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n} = x_n - \frac{x_n}{2} + \frac{1}{x_n} = \frac{x_n}{2} + \frac{1}{x_n}$$
This time, we arbitrarily assign a value for $x_1$; we first let $x_1 = 1$. We will see that by using the above formula, within five iterations of the algorithm the value converges to the approximate value of the square root of 2, which is $x_5 = 1.4142$. The % error is obtained by the formula
$$\%\ error = \left|\frac{x_{n+1} - x_n}{x_{n+1}}\right| \times 100\%$$
For the first iteration,
$$x_2 = \frac{x_1}{2} + \frac{1}{x_1} = \frac{(1)}{2} + \frac{1}{(1)} = 0.5 + 1 = 1.5$$

n    x_n        x_{n+1}    % error
1    1          1.5        —
2    1.5        1.416667   33.33%
3    1.416667   1.414216   5.88%
4    1.414216   1.414214   0.17%
5    1.414214   1.414214   0.00%
Of course, this is just the positive value of $\sqrt{2}$. To get the negative root, we let $x_1 = -1$. From there, we obtain the approximate value of the other root, which is $x_5 = -1.4142$. For the first iteration,
$$x_2 = \frac{x_1}{2} + \frac{1}{x_1} = \frac{(-1)}{2} + \frac{1}{(-1)} = -0.5 - 1 = -1.5$$

n    x_n        x_{n+1}    % error
1    -1         -1.5       —
2    -1.5       -1.41667   33.33%
3    -1.41667   -1.41422   5.88%
4    -1.41422   -1.41421   0.17%
5    -1.41421   -1.41421   0.00%
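The iteration above is easy to sketch in code. The following is a minimal plain-Python version (the function name `newton_sqrt2` is our own):

```python
def newton_sqrt2(x1, iterations=5):
    """Newton-Raphson for f(x) = x^2 - 2, which simplifies to
    x_{n+1} = x_n/2 + 1/x_n. Returns the whole iterate history."""
    xs = [x1]
    for _ in range(iterations):
        xs.append(xs[-1] / 2 + 1 / xs[-1])
    return xs

print(newton_sqrt2(1))   # → [1, 1.5, 1.41666..., ..., ≈ 1.4142136]
print(newton_sqrt2(-1))  # → [-1, -1.5, ..., ≈ -1.4142136]
```

Starting from $x_1 = 1$ or $x_1 = -1$ reproduces the two tables above: the iterates settle on $\pm\sqrt{2}$ within five steps.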
Secant Method
One of the disadvantages of the Newton-Raphson method is the need to analytically obtain the derivative $f'(x_n)$ every time. There are instances where this becomes difficult or impossible to do. To eliminate the need to differentiate, we can replace the derivative with the algebraic equivalent of the slope:
$$f'(x_{n+1}) \approx \frac{f(x_{n+1}) - f(x_n)}{x_{n+1} - x_n}$$
Substituting back into the Newton-Raphson formula yields what is known as the Secant Method:
$$x_{n+2} = x_{n+1} - f(x_{n+1})\,\frac{x_{n+1} - x_n}{f(x_{n+1}) - f(x_n)}$$
This time, let's consider the equation
$$e^{-x} = x$$
Upon inspection, we see that this is difficult to solve analytically. Now we will try to obtain the root using the Secant Method by rewriting the above equation as a function:
$$f(x) = e^{-x} - x$$
Then, using the formula for the Secant Method,
$$x_{n+2} = x_{n+1} - \left(e^{-x_{n+1}} - x_{n+1}\right)\frac{x_{n+1} - x_n}{\left(e^{-x_{n+1}} - x_{n+1}\right) - \left(e^{-x_n} - x_n\right)}$$
We arbitrarily choose the values of $x_0$ and $x_1$ as our starting points: we let $x_0 = 1$ and $x_1 = 2$. Then within six iterations, the answer converges to $x_6 = 0.5671$.
n    x_n        x_{n+1}    x_{n+2}    % error
0    1          2          0.487142   —
1    2          0.487142   0.58378    310.56%
2    0.487142   0.58378    0.567386   16.55%
3    0.58378    0.567386   0.567143   2.89%
4    0.567386   0.567143   0.567143   0.04%
5    0.567143   0.567143   0.567143   0.00%
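A minimal sketch of the Secant Method in plain Python (the function name `secant` is our own; the guard simply stops the loop if two iterates coincide exactly):

```python
import math

def secant(f, x0, x1, iterations=6):
    """Secant method: the derivative in Newton-Raphson is replaced by
    the slope through the two most recent iterates."""
    for _ in range(iterations):
        denom = f(x1) - f(x0)
        if denom == 0:  # iterates have coincided; nothing left to refine
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / denom
    return x1

root = secant(lambda x: math.exp(-x) - x, 1.0, 2.0)
print(root)  # ≈ 0.567143
```

With the same starting points $x_0 = 1$, $x_1 = 2$, the iterates follow the table above and settle near 0.567143.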
Example
a. Determine $h(t)$ from the transfer function
$$H(s) = \frac{s^2 - 3s + 2}{6s^3 + 31s^2 + 8s - 45}$$
We know that $h(t) = \mathcal{L}^{-1}\{H(s)\}$. Using partial fractions, we can decompose the above function into separate terms to obtain the inverse Laplace transform. However, we must first know the factors that constitute the denominator $6s^3 + 31s^2 + 8s - 45$. One way to find them is synthetic division, but that method relies on trial and error, which is cumbersome. So we resort to numerical means using the Newton-Raphson method. First, we obtain the derivative of the denominator:
$$f'(s) = (3)6s^{3-1} + (2)31s^{2-1} + 8s^0 - 0 = 18s^2 + 62s + 8$$
Substituting into the formula,
$$s_{n+1} = s_n - \frac{f(s_n)}{f'(s_n)} = s_n - \frac{6s_n^3 + 31s_n^2 + 8s_n - 45}{18s_n^2 + 62s_n + 8}$$
By five iterations of the algorithm using three initial values $s_1 = 2, -1$ and $-5$ from informed guesses, we obtain the three roots of the denominator.

n    s_n        s_{n+1}    % error
1    2          1.29902    —
2    1.29902    1.039542   53.96%
3    1.039542   1.000842   24.96%
4    1.000842   1          3.87%
5    1          1          0.08%

n    s_n        s_{n+1}    % error
1    -1         -1.77778   —
2    -1.77778   -1.66658   43.75%
3    -1.66658   -1.66667   6.67%
4    -1.66667   -1.66667   0.01%
5    -1.66667   -1.66667   0.00%

n    s_n        s_{n+1}    % error
1    -5         -4.59459   —
2    -4.59459   -4.50444   8.82%
3    -4.50444   -4.50001   2.00%
4    -4.50001   -4.5       0.10%
5    -4.5       -4.5       0.00%
Hence the roots are $s = 1$, $-1.667$ (or $-\frac{5}{3}$) and $-4.5$ (or $-\frac{9}{2}$), and the equivalent factors are $(s-1)$, $(3s+5)$ and $(2s+9)$. We use them to decompose the function into partial fractions:
$$H(s) = \frac{s^2 - 3s + 2}{6s^3 + 31s^2 + 8s - 45} = \frac{s^2 - 3s + 2}{(s-1)(3s+5)(2s+9)} = \frac{A}{s-1} + \frac{B}{3s+5} + \frac{C}{2s+9}$$
$$s^2 - 3s + 2 = A(3s+5)(2s+9) + B(s-1)(2s+9) + C(s-1)(3s+5)$$
If we let $s = 1$, then
$$(1)^2 - 3(1) + 2 = A(3(1)+5)(2(1)+9) + 0 + 0$$
$$0 = A(88) \quad\Longrightarrow\quad A = 0$$
If we let $s = -\frac{5}{3}$, then
$$\left(-\frac{5}{3}\right)^2 - 3\left(-\frac{5}{3}\right) + 2 = 0 + B\left(\left(-\frac{5}{3}\right)-1\right)\left(2\left(-\frac{5}{3}\right)+9\right) + 0$$
$$\frac{88}{9} = B\left(-\frac{8}{3}\right)\left(\frac{17}{3}\right) \quad\Longrightarrow\quad B = \frac{88(3)(3)}{9(-8)(17)} = -\frac{11}{17}$$
If we let $s = -\frac{9}{2}$, then
$$\left(-\frac{9}{2}\right)^2 - 3\left(-\frac{9}{2}\right) + 2 = 0 + 0 + C\left(\left(-\frac{9}{2}\right)-1\right)\left(3\left(-\frac{9}{2}\right)+5\right)$$
$$\frac{143}{4} = C\left(-\frac{11}{2}\right)\left(-\frac{17}{2}\right) \quad\Longrightarrow\quad C = \frac{143(2)(2)}{4(-11)(-17)} = \frac{13}{17}$$
Substituting $A = 0$, $B = -\frac{11}{17}$ and $C = \frac{13}{17}$,
$$H(s) = \frac{0}{s-1} + \frac{\left(-\frac{11}{17}\right)}{3s+5} + \frac{\left(\frac{13}{17}\right)}{2s+9} = -\frac{11}{17(3)}\cdot\frac{1}{s+\frac{5}{3}} + \frac{13}{17(2)}\cdot\frac{1}{s+\frac{9}{2}}$$
$$h(t) = \mathcal{L}^{-1}\{H(s)\} = -\frac{11}{51}\,\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{5}{3}}\right\} + \frac{13}{34}\,\mathcal{L}^{-1}\left\{\frac{1}{s+\frac{9}{2}}\right\}$$
$$h(t) = -\frac{11}{51}e^{-\frac{5}{3}t} + \frac{13}{34}e^{-\frac{9}{2}t}$$
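Since the decomposition should hold identically in $s$, it can be spot-checked at a few sample points away from the poles. A minimal plain-Python sketch (function names are our own):

```python
def H(s):
    """Original transfer function."""
    return (s**2 - 3*s + 2) / (6*s**3 + 31*s**2 + 8*s - 45)

def H_partial(s):
    """Partial-fraction form with A = 0, B = -11/17, C = 13/17."""
    return (-11 / 17) / (3*s + 5) + (13 / 17) / (2*s + 9)

# sample points chosen away from the roots 1, -5/3, -9/2
for s in (0.5, 2.0, 10.0, -3.0):
    print(s, H(s), H_partial(s))
```

The two columns should agree to machine precision, confirming the computed coefficients.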
Example
a. Solve the differential equation $\dfrac{dy}{dt} + y = -te^{-t}$ given $y(1) = 2$.

Taking the Laplace transform of both sides,
$$sY(s) - y(0) + Y(s) = -\mathcal{L}\{te^{-t}\} = -\left(-\frac{d}{ds}\mathcal{L}\{e^{-t}\}\right) = \frac{d}{ds}\left(\frac{1}{s+1}\right) = -\frac{1}{(s+1)^2}$$
$$Y(s)(s+1) = -\frac{1}{(s+1)^2} + y(0)$$
$$Y(s) = -\frac{1}{(s+1)^3} + \frac{y(0)}{s+1}$$
$$y = \mathcal{L}^{-1}\{Y(s)\} = \mathcal{L}^{-1}\left\{-\frac{1}{(s+1)^3} + \frac{y(0)}{s+1}\right\} = -\mathcal{L}^{-1}\left\{\frac{1}{(s+1)^3}\right\} + \mathcal{L}^{-1}\left\{\frac{y(0)}{s+1}\right\}$$
Given $\mathcal{L}\{t^n\} = \dfrac{n!}{s^{n+1}}$ and $\mathcal{L}\{e^{at}f(t)\} = F(s-a)$, we find the inverse Laplace transform:
$$y = -\mathcal{L}^{-1}\left\{\frac{1}{2!}\cdot\frac{2!}{(s-(-1))^{2+1}}\right\} + y(0)\,\mathcal{L}^{-1}\left\{\frac{1}{s-(-1)}\right\} = -\frac{1}{2}t^2e^{-t} + y(0)e^{-t}$$
At $t = 1$, $y = 2$:
$$2 = -\frac{1}{2}(1)^2e^{-(1)} + y(0)e^{-(1)} = -\frac{1}{2}e^{-1} + y(0)e^{-1}$$
$$y(0)e^{-1} = 2 + \frac{1}{2}e^{-1} \quad\Longrightarrow\quad y(0) = 2e + \frac{1}{2} = 5.937$$
$$y = -\frac{1}{2}t^2e^{-t} + 5.937e^{-t} = e^{-t}(5.937 - 0.5t^2)$$
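The solution can be sanity-checked numerically: $y(1)$ should return 2 (up to the rounding of $y(0) = 5.937$), and a central finite difference of $y$ should satisfy $y' + y = -te^{-t}$. A minimal sketch in plain Python:

```python
import math

def y(t):
    """Solution obtained above: y(t) = e^(-t) * (5.937 - 0.5*t^2)."""
    return math.exp(-t) * (5.937 - 0.5 * t * t)

print(y(1.0))  # ≈ 2.000, the given initial/known condition

# check y' + y = -t*e^(-t) with a central finite difference
h = 1e-6
for t in (0.5, 1.0, 3.0):
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    print(dydt + y(t), -t * math.exp(-t))  # the two columns should agree
```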
To find $t$ at $y = 0$, we'll try the Secant Method:
$$t_{n+2} = t_{n+1} - f(t_{n+1})\,\frac{t_{n+1} - t_n}{f(t_{n+1}) - f(t_n)}$$
Let $f(t_n) = e^{-t_n}(5.937 - 0.5t_n^2)$:
$$t_{n+2} = t_{n+1} - \left(e^{-t_{n+1}}\left(5.937 - 0.5t_{n+1}^2\right)\right)\frac{t_{n+1} - t_n}{\left(e^{-t_{n+1}}\left(5.937 - 0.5t_{n+1}^2\right)\right) - \left(e^{-t_n}\left(5.937 - 0.5t_n^2\right)\right)}$$
Let $t_0 = -5$ and $t_1 = -4$. Within seven iterations, the values converge to $t = \pm 3.446$ for $y = 0$.
One disadvantage of the Secant Method compared to the Newton-Raphson Method is its slower convergence. Nevertheless, these two are just a few among many other valuable methods for determining the roots of a function, such as the Bisection Method, False Position Method, Fixed-Point Iteration Method, Bairstow's Method, etc.
Euler's Method
Given a first-order differential equation, we can represent it in the following form:
$$y' = f(t, y), \quad \text{where } y(t_0) = y_0$$
We recall that $y'$ is the slope of the function $y$, and by approximation we can calculate this slope as
$$y' \approx \frac{y_{n+1} - y_n}{t_{n+1} - t_n}$$
Substituting this back for $y'$ and letting $h = t_{n+1} - t_n$, the incremental step of the algorithm, the formula becomes
$$\frac{y_{n+1} - y_n}{h} = f(t_n, y_n)$$
$$y_{n+1} = y_n + hf(t_n, y_n)$$
To proceed with the algorithm, we start with the initial conditions $t_0, y_0$, then successively repeat with the next values of $t_n$ in increments of $h$. The selection of $h$ affects the error of the approximate solution.
Example
a. Approximate the solution of $y' + y = -te^{-t}$ given $y(1) = 2$.

We rewrite the equation as $y' = -te^{-t} - y$, then let $f(t_n, y_n) = -t_ne^{-t_n} - y_n$. We arbitrarily choose $h = 0.5$. Then, over ten iterations, we obtain the resulting values along with the true values of $y(t_n)$ to get the error. From the previous example, we know that the solution of this equation is $y = e^{-t}(5.937 - 0.5t^2)$, which we use to compute the true values.
n    t_n    y_n (approx.)    y_{n+1}    y(t_n) (true value)    error (true − approx.)
0 1.0 2.0 0.81606 2.000161 0.000160522
1 1.5 0.81606 0.240683 1.073702 0.257642051
2 2.0 0.240683 -0.01499 0.532815 0.292132491
3 2.5 -0.01499 -0.1101 0.230823 0.24581704
4 3.0 -0.1101 -0.12973 0.071544 0.181647277
5 3.5 -0.12973 -0.11771 -0.00568 0.124055124
6 4.0 -0.11771 -0.09549 -0.03779 0.079926374
7 4.5 -0.09549 -0.07274 -0.04652 0.048962569
8 5.0 -0.07274 -0.05321 -0.04422 0.028517619
9 5.5 -0.05321 -0.03785 -0.03755 0.015664994
The resulting graph shows a major deviation between $t = 1.5$ and $t = 3.5$.

[Figure: Euler approximation ("euler") vs. true value ("true_value"), $y_n$ plotted against $t_n$]
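Euler's method for this example is straightforward to sketch in code (plain Python; function names are our own):

```python
import math

def f(t, y):
    """Right-hand side of y' = -t*e^(-t) - y."""
    return -t * math.exp(-t) - y

def euler(t0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h*f(t_n, y_n)."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
        values.append((t, y))
    return values

# y(1) = 2, h = 0.5, ten steps, compared against the known true solution
for t, y in euler(1.0, 2.0, 0.5, 10):
    true = math.exp(-t) * (5.937 - 0.5 * t * t)
    print(f"t = {t:3.1f}   euler = {y:9.5f}   true = {true:9.5f}")
```

The first step gives $y_1 \approx 0.81606$, matching the table above.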
Improved Euler's Method (Heun's Method)
As we can see, Euler's Method is just a rough approximation of the solution and is subject to large deviations from the true value. Hence, an improved Euler's Method was developed in which a corrective step is incorporated. One still uses the original formula, but only to obtain a temporary (predictor) value $y^*_{n+1}$:
$$y^*_{n+1} = y_n + hf(t_n, y_n)$$
Using this temporary value, we perform a correction by repeating the process but multiplying $h$ by the average of $f(t_n, y_n)$ and $f(t_{n+1}, y^*_{n+1})$:
$$y_{n+1} = y_n + h\left[\frac{f(t_n, y_n) + f(t_{n+1}, y^*_{n+1})}{2}\right]$$
By introducing the corrective step, the approximation deviates much less from the true value, as shown in the example below with the corresponding graph for the same equation $y' + y = -te^{-t}$, $y(1) = 2$, $h = 0.5$.
n    t_n    y_n (approx.)    y*_{n+1}    y_{n+1}    y(t_n) (true value)    error (true − approx.)
0 1.0 2.0 0.81606 1.120341 2.000161 0.000160522
1 1.5 1.120341 0.392823 0.590709 1.073702 0.046638929
2 2.0 0.590709 0.160019 0.284056 0.532815 0.057893731
3 2.5 0.284056 0.039422 0.114543 0.230823 0.053233002
4 3.0 0.114543 -0.01741 0.026497 0.071544 0.042999131
5 3.5 0.026497 -0.0396 -0.01497 -0.00568 0.032173714
6 4.0 -0.01497 -0.04411 -0.03101 -0.03779 0.022818548
7 4.5 -0.03101 -0.0405 -0.03405 -0.04652 0.015514902
8 5.0 -0.03405 -0.03387 -0.03111 -0.04422 0.010168917
9 5.5 -0.03111 -0.0268 -0.02597 -0.03755 0.006436085
[Figure: Heun approximation ("euler (heun)") vs. true value ("true_value"), $y_n$ plotted against $t_n$]
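The predictor-corrector structure translates directly into code; a minimal plain-Python sketch (function names are our own):

```python
import math

def f(t, y):
    """Right-hand side of y' = -t*e^(-t) - y."""
    return -t * math.exp(-t) - y

def heun(t0, y0, h, steps):
    """Improved Euler (Heun): Euler predictor, then average-slope corrector."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(steps):
        y_star = y + h * f(t, y)                       # predictor step
        y = y + h * (f(t, y) + f(t + h, y_star)) / 2   # corrector step
        t = t + h
        values.append((t, y))
    return values

for t, y in heun(1.0, 2.0, 0.5, 10):
    print(f"t = {t:3.1f}   heun = {y:9.5f}")
```

The first corrected step gives $y_1 \approx 1.120341$, matching the table above and already far closer to the true value 1.073702 than plain Euler's 0.81606.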
Runge-Kutta Method
An improved and very popular method for numerically solving differential equations is the fourth-order Runge-Kutta Method. It consists of the following set of formulas:
$$y_{n+1} = y_n + K_n, \quad \text{wherein}$$
$$K_n = \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
$$k_1 = hf(t_n, y_n) \qquad k_2 = hf\!\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_1\right)$$
$$k_3 = hf\!\left(t_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_2\right) \qquad k_4 = hf(t_n + h,\; y_n + k_3)$$
We will try the above method on the same equation $y' + y = -te^{-t}$, $y(1) = 2$, $h = 0.5$.
n    t_n    y_n (approx.)    K_n    y_{n+1}    y(t_n) (true value)    error (true − approx.)
0 1.0 2.0 -5.60242 1.066262919 2.000160522 0.000160522
1 1.5 1.066263 -3.26401 0.52226062 1.073702331 0.007439412
2 2.0 0.522261 -1.81486 0.219784427 0.53281501 0.01055439
3 2.5 0.219784 -0.95006 0.0614414 0.230823016 0.011038589
4 3.0 0.061441 -0.45406 -0.014235275 0.071544017 0.010102617
5 3.5 -0.01424 -0.18263 -0.044672842 -0.005677108 0.008558167
6 4.0 -0.04467 -0.04317 -0.051867473 -0.037785163 0.006887679
7 4.5 -0.05187 0.021693 -0.048251997 -0.046524478 0.005342996
8 5.0 -0.04825 0.046366 -0.040524299 -0.044221146 0.004030851
9 5.5 -0.04052 0.050795 -0.032058391 -0.037549256 0.002975043
From the graph below, we can see that the Runge-Kutta Method is the most accurate so far in approximating the solution of the differential equation, compared to Euler's and Heun's Methods.

[Figure: Runge-Kutta approximation ("runge-kutta") vs. true value ("true_value"), $y_n$ plotted against $t_n$]
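A textbook fourth-order Runge-Kutta sketch for the same problem is below (plain Python; function names are our own). Individual digits may differ slightly from a hand-computed table because of intermediate rounding:

```python
import math

def f(t, y):
    """Right-hand side of y' = -t*e^(-t) - y."""
    return -t * math.exp(-t) - y

def rk4(t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(steps):
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t + h
        values.append((t, y))
    return values

for t, y in rk4(1.0, 2.0, 0.5, 10):
    true = math.exp(-t) * (5.937 - 0.5 * t * t)
    print(f"t = {t:3.1f}   rk4 = {y:10.6f}   true = {true:10.6f}")
```

Even with the coarse step $h = 0.5$, the very first RK4 step lands within about $5 \times 10^{-4}$ of the true value $y(1.5) = 1.073702$, far tighter than either Euler variant.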