The purpose of finite difference approximation is to estimate the derivative of a function. This method includes central and noncentral finite difference approximations. The first central approximation has a truncation error of O(h^2), which yields a better result compared to the first noncentral finite difference approximation, whose error is O(h). Another method used to approximate the derivative is Richardson extrapolation, which builds on a finite difference approximation of order O(h^2). Likewise, the Lagrange method is used to approximate the first and second derivatives at the given data points x=0, x=0.5, and x=1. Based on the results of the machine exercise, using the same step size h=0.1 with three x-values, the values obtained for situation 1a show no relationship to one another, since the three methods approximate values at different points in the data. The same is true for part b: the values do not correlate with one another. However, for the first derivative a slight similarity was observed: at x=0.5, with step size h=0.2, the values obtained using the three methods are close. For part c, using h=0.1 with an error of O(h^2), the values obtained again differ, aside from the first derivative at x=0.5, which yields closer values. The Richardson and Lagrange methods were also applied; it was observed that as the value of p changes, the value of f''(x) also changes, and the results remain fairly close to one another.
In situation 2, a given function needs to be integrated. To verify the accuracy of the results, the function was also integrated analytically. The results obtained using the composite trapezoidal, composite Simpson's, Romberg, and Gauss–Legendre methods show precision and accuracy, as the values are close to one another and to the true value.
Method              erf(1)        Relative percent error
Comp. Trapezoidal   0.842008717   0.08213%
Analytical          0.842700793   (true value)
Table 1: Approximation of erf(1) using the composite trapezoidal, Simpson's 1/3, Romberg, and Gauss–Legendre methods
The table shows that the Romberg method has the least error, followed by Gauss–Legendre, composite Simpson's, and composite trapezoidal.
Situation 1
(A)
Algorithm:
Step 1: Input data points
Step 2: Read step-size h=0.1
Step 3: Input formula for finite difference approximation method (first forward and first backward
approximation method for first and second derivatives).
Step 4: Compute the first and second derivative of a function at x equals zero using the first forward
difference approximation method.
Step 5: Display output at x equals zero
Step 6: Set x to 0.5 and compute the first and second derivative of a function using forward and
backward difference methods.
Step 7: Display output at x equals 0.5.
Step 8: Compute the first and second derivative at the last data point (x equals 1) using the first backward difference method.
Step 9: Display output at x equals one. Stop.
Pseudocode:
Step 1: Input data points
Step 2: Read step-size h=0.1
Step 3: Input formula for a finite difference approximation method
First forward:
f'(x) = (f(x+h) - f(x)) / h
f''(x) = (f(x) - 2f(x+h) + f(x+2h)) / h^2
First backward:
f'(x) = (f(x) - f(x-h)) / h
f''(x) = (f(x-2h) - 2f(x-h) + f(x)) / h^2
Step 4: Solve for x=0. Use f'(x) = (f(x+h) - f(x)) / h and f''(x) = (f(x) - 2f(x+h) + f(x+2h)) / h^2
Step 5: Display output: f’(x); f”(x) at x=0
Step 6: Solve for x=0.5. Use f'(x) = (f(x) - f(x-h)) / h and f''(x) = (f(x-2h) - 2f(x-h) + f(x)) / h^2
Step 7: Display output: f’(x); f”(x) at x=0.5
Step 8: Solve for x=1. Use f'(x) = (f(x) - f(x-h)) / h and f''(x) = (f(x-2h) - 2f(x-h) + f(x)) / h^2
Step 9: Display output: f’(x); f”(x) at x=1. Stop.
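The first forward and backward difference steps above can be sketched in Python. Since the exercise's tabulated data points are not reproduced here, f(x) = sin(x) is used as a hypothetical stand-in; the O(h) formulas are the ones listed in Step 3.

```python
import math

def forward_first(f, x, h):
    # First forward difference, O(h): f'(x) ~ (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def forward_second(f, x, h):
    # Forward second-derivative formula, O(h)
    return (f(x) - 2 * f(x + h) + f(x + 2 * h)) / h**2

def backward_first(f, x, h):
    # First backward difference, O(h)
    return (f(x) - f(x - h)) / h

def backward_second(f, x, h):
    # Backward second-derivative formula, O(h)
    return (f(x - 2 * h) - 2 * f(x - h) + f(x)) / h**2

h = 0.1
f = math.sin  # hypothetical stand-in for the exercise's data
# forward formulas at the left end of the data, backward formulas at the right end
print(forward_first(f, 0.0, h), forward_second(f, 0.0, h))
print(backward_first(f, 1.0, h), backward_second(f, 1.0, h))
```

Only forward points exist at x=0 and only backward points at x=1, which is why the endpoint steps in the algorithm pick different formulas.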
Flowchart: Start → compute and display f’(x), f”(x) at x=0, x=0.5, and x=1 → End
B) Algorithm:
Step 1: Input data points
Step 2: Read step-size h=0.2
Step 3: Input formula for finite difference approximation method (second forward, second backward, and first central approximation methods for first and second derivatives).
Step 4: Compute the first and second derivative of a function at x equals zero using the second forward
difference method.
Step 5: Display output at x equals zero
Step 6: Set x equals 0.5 and compute the first and second derivative using the first central difference approximation method.
Step 7: Display output at x equals 0.5
Step 8: Compute the first and second derivative of a function at x equals 1 using the second backward
difference approximation method.
Step 9: Display output at x equals 1. Stop.
Pseudocode:
Step 1: Input data points
Step 2: Read step-size h=0.2
Step 3: Input formula for a finite difference approximation method
Second forward:
f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h)
f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
First central:
f'(x) = (f(x+h) - f(x-h)) / (2h)
f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Second backward:
f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h)
f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 4: Solve for x=0. Use f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h) and f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
Step 5: Display output: f’(x); f”(x) at x=0
Step 6: Solve for x=0.5. Use f'(x) = (f(x+h) - f(x-h)) / (2h) and f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Step 7: Display output: f’(x); f”(x) at x=0.5
Step 8: Solve for x=1. Use f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h) and f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 9: Display output: f’(x); f”(x) at x=1. Stop.
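The three O(h^2) formula sets in Step 3 can be written as small helpers. Again, f(x) = sin(x) is a hypothetical stand-in for the exercise's data, not the actual function.

```python
import math

def fwd2_first(f, x, h):
    # Second forward difference, O(h^2)
    return (-f(x + 2*h) + 4*f(x + h) - 3*f(x)) / (2*h)

def fwd2_second(f, x, h):
    # Forward second-derivative formula, O(h^2)
    return (2*f(x) - 5*f(x + h) + 4*f(x + 2*h) - f(x + 3*h)) / h**2

def central_first(f, x, h):
    # First central difference, O(h^2)
    return (f(x + h) - f(x - h)) / (2*h)

def central_second(f, x, h):
    # Central second-derivative formula, O(h^2)
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

def bwd2_first(f, x, h):
    # Second backward difference, O(h^2)
    return (3*f(x) - 4*f(x - h) + f(x - 2*h)) / (2*h)

def bwd2_second(f, x, h):
    # Backward second-derivative formula, O(h^2)
    return (2*f(x) - 5*f(x - h) + 4*f(x - 2*h) - f(x - 3*h)) / h**2

h = 0.2
f = math.sin  # hypothetical stand-in for the exercise's data
print(fwd2_first(f, 0.0, h), central_first(f, 0.5, h), bwd2_first(f, 1.0, h))
```

The central pair needs points on both sides of x, which is why it is reserved for the interior point x=0.5.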
Flowchart: Start → compute and display f’(x), f”(x) at x=0 → solve for the mid data point at x=0.5 and display → compute and display at x=1 → End
C) Algorithm:
Step 1: Input data points
Step 2: Read step-size h=0.1
Step 3: Input formula for finite difference approximation method (second forward, second backward, and first central difference approximation methods)
Step 4: Compute the first and second derivative of a function at x equals 0 using the second forward difference approximation method.
Step 5: Display output at x=0
Step 6: Set x=0.5 and compute the first and second derivative using the second forward, second backward, and first central approximation methods.
Step 7: Display output at x equals 0.5
Step 8: Compute the first and second derivative of the last data point (x=1) using a second backward
difference approximation method.
Step 9: Display output. Stop.
Pseudocode:
Step 1: Input data points
Step 2: Read step-size h=0.1
Step 3: Input formula for a finite difference approximation method
Second forward:
f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h)
f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
First central:
f'(x) = (f(x+h) - f(x-h)) / (2h)
f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Second backward:
f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h)
f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 4: Solve for x=0. Use f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h) and f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
Step 5: Display output: f’(x); f”(x) at x=0
Step 6: Solve for x=0.5. Use all three methods:
Second forward: f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h) and f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
First central: f'(x) = (f(x+h) - f(x-h)) / (2h) and f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Second backward: f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h) and f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 7: Display output: f’(x); f”(x) at x=0.5
Step 8: Solve for x=1. Use f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h) and f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 9: Display output: f’(x); f”(x) at x=1. Stop.
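Part C evaluates x=0.5 with all three O(h^2) methods, which is why the discussion notes the first-derivative values there come out close. A minimal check, again using sin(x) as a hypothetical stand-in for the data:

```python
import math

h, x = 0.1, 0.5
f = math.sin  # hypothetical stand-in for the exercise's data

fwd = (-f(x + 2*h) + 4*f(x + h) - 3*f(x)) / (2*h)   # second forward
cen = (f(x + h) - f(x - h)) / (2*h)                  # first central
bwd = (3*f(x) - 4*f(x - h) + f(x - 2*h)) / (2*h)     # second backward
print(fwd, cen, bwd)  # all three are O(h^2), so they cluster near the true f'(0.5)
```

Because all three share the same error order, their spread shrinks roughly as h^2 when the step size is reduced.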
Flowchart: Start → compute and display f’(x), f”(x) at x=0, x=0.5, and x=1 → End
D) Richardson
Algorithm:
Step 1: Input data points
Step 2: Read h=0.2, 0.1
Step 3: Input formula for the first and second derivative of the second forward approximation method.
Step 4: Input formula for the first and second derivative of the first central approximation method
Step 5: Input formula for the first and second derivative of the second backward approximation method
Step 6: Solve for g(h) and g’(h) at x=0 using the formula from step 3.
Step 7: Display output:
g(h) and g’(h) at x equals 0
Step 8: Solve for g(h) and g’(h) at x=0.5 using the formula from step 4
Step 9: Display output:
g(h) and g’(h) at x equals 0.5
Step 10: Solve for g(h) and g’(h) at x=1 using the formula from step 5
Step 11: Display output:
g(h) and g’(h) at x equals 1
Step 12: Input p equals 2
Step 13: Perform Richardson Extrapolation at x=0, 0.5, 1 using p from step 12.
Step 14: Display the first and second derivative of a function at x=0, 0.5, 1
Step 15: Input p equals 3 and 4 and repeat step 13. Stop.
Pseudocode:
Step 1: Input data points
Step 2: Read h=0.2, 0.1
Step 3: Input formula for a finite difference approximation method
Second forward:
f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h)
f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
First central:
f'(x) = (f(x+h) - f(x-h)) / (2h)
f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Second backward:
f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h)
f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 4: Solve for x=0. Use f'(x) = (-f(x+2h) + 4f(x+h) - 3f(x)) / (2h) and f''(x) = (2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)) / h^2
Step 5: Display output g(h) and g’(h) at x = 0.
Step 6: Solve for x=0.5. Use f'(x) = (f(x+h) - f(x-h)) / (2h) and f''(x) = (f(x+h) - 2f(x) + f(x-h)) / h^2
Step 7: Display output g(h) and g’(h) at x = 0.5.
Step 8: Solve for x=1. Use f'(x) = (3f(x) - 4f(x-h) + f(x-2h)) / (2h) and f''(x) = (2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)) / h^2
Step 9: Display output g(h) and g’(h) at x = 1.
Step 10: Input p=2
Step 11: Perform Richardson extrapolation using G = (2^p g(h1/2) - g(h1)) / (2^p - 1) at x=0, 0.5, 1.
Step 12: Output f’(x) and f”(x) at x=0, 0.5, 1.
Step 13: Input p=3 and p=4 and repeat step 11. Stop
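Step 11's extrapolation formula G = (2^p g(h1/2) - g(h1)) / (2^p - 1) can be sketched as follows. The central difference and the sin(x) test function are illustrative choices standing in for the exercise's data:

```python
import math

def central_first(f, x, h):
    # O(h^2) first central difference serves as g(h)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(g, h1, p):
    # G = (2^p * g(h1/2) - g(h1)) / (2^p - 1)
    return (2**p * g(h1 / 2) - g(h1)) / (2**p - 1)

f = math.sin                        # hypothetical stand-in
g = lambda h: central_first(f, 0.5, h)
G = richardson(g, 0.2, p=2)         # p=2 matches the O(h^2) leading error term
print(g(0.2), g(0.1), G)
```

With p=2 the h^2 error terms cancel exactly, so G is markedly closer to the true derivative than either g(0.2) or g(0.1) alone.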
Flowchart: Start → display g(h), g’(h) at x=0, 0.5, 1 → input p=2 → perform Richardson extrapolation G = (2^p g(h1/2) - g(h1)) / (2^p - 1) for f’ and f” → display the first and second derivatives at x=0, 0.5, 1 → repeat with p=3 and p=4 → End
Lagrange method:
Pseudocode:
Step 1: Input data points
Read x=0, 0.5, 1
n=10
Step 2: Solve for the constants a0, a1, a2, ..., an
a_i = y_i / [(x_i - x_0)(x_i - x_1)...(x_i - x_n)], with the product taken over all j ≠ i
Step 3: Display output:
a0,a1,a2,…..,an
Step 4: Solve for the first derivative by differentiating the Lagrange interpolating polynomial term by term.
Step 5: Display output:
f’(x) at x=0, 0.5, 1
Step 6: Solve for the second derivative by differentiating the interpolating polynomial a second time.
Step 7: Display output:
f’’(x) at x=0, 0.5, 1. Stop.
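The constants a_i and the interpolant they define can be sketched as below. The y-values are hypothetical (samples of y = x^2, chosen so the sketch can be checked by hand), and the derivatives are taken numerically with a small internal step, since the exercise's own derivative formulas are not reproduced above.

```python
import math

xs = [0.0, 0.5, 1.0]
ys = [0.0, 0.25, 1.0]  # hypothetical y-values (samples of x^2), not the exercise's data

# a_i = y_i / prod over j != i of (x_i - x_j)
a = [ys[i] / math.prod(xs[i] - xs[j] for j in range(len(xs)) if j != i)
     for i in range(len(xs))]

def p(x):
    # Lagrange form: p(x) = sum over i of a_i * prod over j != i of (x - x_j)
    return sum(a[i] * math.prod(x - xs[j] for j in range(len(xs)) if j != i)
               for i in range(len(xs)))

def dp(x, h=1e-5):
    # first derivative of the interpolant (central difference)
    return (p(x + h) - p(x - h)) / (2 * h)

def d2p(x, h=1e-4):
    # second derivative of the interpolant (central difference)
    return (p(x + h) - 2 * p(x) + p(x - h)) / h**2

print(a, dp(0.5), d2p(0.5))
```

With the x^2 samples the interpolant reproduces x^2 exactly, so dp(0.5) and d2p(0.5) recover 1 and 2 up to rounding.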
Flowchart: Start → display a0, a1, a2, ..., an → display f’(x) at x=0, 0.5, 1 → display f”(x) at x=0, 0.5, 1 → End
Situation 2
Algorithm: (Composite trapezoidal)
1. Start
2. Input a, b, and step size h
3. Solve for n: n = (b - a) / h
4. Set x0 = xi = 0
5. Solve for xi
6. Test if xn equals b; if not, repeat step 5; otherwise continue.
7. Solve for erf(a)i: substitute xi into the given equation.
8. Input the multipliers
9. Solve for the summation of the products of erf(a)i and the multipliers, then multiply the sum by step-size over 2.
10. Display output
11. End
Pseudocode:
Step1: Start
Step2: Input
b=1; a=0
h=0.01
Step3: Solve for n:
n=(b-a)/h
Step4: For x0=xi=0
Step5: Solve for xi:
xi+1=xi+h
Step6: Test if xn=b; if not, repeat step 5; otherwise continue.
Step7: Substitute xi into the given function: erf(a)i = 2/sqrt(pi) * e^(-xi^2)
Step8: Input multipliers:
x0=1; x1=2; x2=2, ..., xn=1
Step9: Solve for [erf(a0) + 2erf(a1) + 2erf(a2) + ... + 2erf(an-1) + erf(an)] * h/2
Step10: Display output: erf(1)
Step11: End
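The composite trapezoidal steps above amount to the following sketch. Python's math.erf is used only as an independent check against the analytical value:

```python
import math

def erf_trapezoid(a=0.0, b=1.0, h=0.01):
    # erf(1) = (2/sqrt(pi)) * integral of e^(-x^2) from 0 to 1,
    # by the composite trapezoidal rule with multipliers 1, 2, 2, ..., 2, 1
    n = round((b - a) / h)
    f = lambda x: 2 / math.sqrt(math.pi) * math.exp(-x * x)
    total = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return total * h / 2

print(erf_trapezoid())  # close to the analytical value 0.842700793
```

The endpoints receive multiplier 1 and every interior point multiplier 2, matching Step 8 and Step 9.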
Flowchart: Start → input b=1, a=0, h=0.01 → n=(b-a)/h → loop xi from a to b until xn=b → erf(a)i = 2/sqrt(pi)·e^(-xi^2) → input multipliers x0=1; x1=2, ..., xn=1 → display erf(1) → End
Algorithm: (Composite Simpson's 1/3)
1. Start
2. Input a, b, and step size h
3. Solve for n: n = (b - a) / h
4. Set x0 = xi = 0
5. Solve for xi
6. Test if xn equals b; if not, repeat step 5; otherwise continue.
7. Solve for erf(a)i: substitute xi into the given equation.
8. Input the multipliers
9. Solve for the summation of the products of erf(a)i and the multipliers, then multiply the sum by step-size over 3.
10. Display output
11. End
Pseudocode:
Step1: Start
Step2: Input
b=1; a=0
h=0.01
Step3: Solve for n:
n=(b-a)/h
Step4: For x0=xi=0
Step5: Solve for xi:
xi+1=xi+h
Step6: Test if xn=b; if not, repeat step 5; otherwise continue.
Step7: Substitute xi into the given function: erf(a)i = 2/sqrt(pi) * e^(-xi^2)
Step8: Input multipliers:
x0=1; x1=4; x2=2, ..., xn=1
Step9: Solve for [erf(a0) + 4erf(a1) + 2erf(a2) + 4erf(a3) + ... + 2erf(an-2) + 4erf(an-1) + erf(an)] * h/3
Step10: Display output: erf(1)
Step11: End
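The Simpson's 1/3 weighting in Step 9 (1, 4, 2, 4, ..., 4, 1) can be sketched the same way; math.erf again serves only as an independent check:

```python
import math

def erf_simpson(a=0.0, b=1.0, h=0.01):
    # Composite Simpson's 1/3 rule with multipliers 1, 4, 2, 4, ..., 4, 1
    n = round((b - a) / h)
    assert n % 2 == 0, "Simpson's 1/3 rule needs an even number of intervals"
    f = lambda x: 2 / math.sqrt(math.pi) * math.exp(-x * x)
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd points: 4
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even points: 2
    return total * h / 3

print(erf_simpson())
```

With the same h=0.01, Simpson's O(h^4) error is far smaller than the trapezoidal rule's O(h^2) error, consistent with the ranking in Table 1.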
Flowchart: Start → input b=1, a=0, h=0.01 → n=(b-a)/h → loop xi from a to b until xn=b → erf(a)i = 2/sqrt(pi)·e^(-xi^2) → input multipliers x0=1; x1=4; x2=2, ..., xn=1 → [erf(a0)+4erf(a1)+2erf(a2)+4erf(a3)+...+2erf(an-2)+4erf(an-1)+erf(an)]·h/3 → display erf(1) → End
Romberg integration method:
Algorithm:
Step 1: Input b=1; a=0 and h=0.01
Step 2: Input formula for the recursive trapezoidal rule to solve for R1,1, R2,1, R3,1, R4,1
Step 3: Perform the recursive trapezoidal rule using the given input.
Step 4: Display output for R1,1, R2,1, R3,1, R4,1 as O(h^2).
Step 5: Perform Richardson extrapolation to solve for R2,2, R3,2, R4,2, R3,3, R4,3, R4,4.
Step 6: Display output R2,2, R3,2, R4,2, R3,3, R4,3, R4,4.
Step 7: Solve for the error: the difference between the true value and the estimate, divided by the true value, multiplied by 100.
Step 8: Check the tolerance. If the error is less than 0.5%, stop; otherwise continue.
Pseudocode:
Step 1: Input
b=1; a=0
h=0.01
Step 2: Input formula for recursive trapezoidal rule
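Step 2's recursive trapezoidal rule (first column) and Step 5's Richardson extrapolation (remaining columns) can be sketched together as follows; math.erf is only an independent check:

```python
import math

def romberg_erf(a=0.0, b=1.0, levels=4):
    f = lambda x: 2 / math.sqrt(math.pi) * math.exp(-x * x)
    R = [[0.0] * levels for _ in range(levels)]
    for i in range(levels):
        n = 2**i                       # R[i][0]: trapezoidal rule with 2^i panels
        h = (b - a) / n
        R[i][0] = (f(a) + f(b)) * h / 2 + h * sum(f(a + k * h) for k in range(1, n))
        for j in range(1, i + 1):      # Richardson extrapolation across the row
            R[i][j] = (4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1)
    return R[levels - 1][levels - 1]   # R4,4

print(romberg_erf())
```

Each extrapolation column raises the error order by two, which is why R4,4 attains the smallest error in Table 1 despite using only 8 panels.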
Flowchart: Start → input b=1; a=0
Algorithm: Gauss–Legendre
Step 1: Input b=1; a=0; n=4
Step 2: Input the given nodes (±ξi) and weights (Ai) for n equals 4.
Step 3: Solve for x: x = (b + a)/2 + ((b - a)/2)ξ
Step 4: Display output of x
Step 5: Solve for each xi by substituting ξi into the expression for x.
Step 6: Display output of xi
Step 7: Solve for the summation of the products Ai·f(xi), multiplied by 0.5.
Step 8: Display output I. Stop.
Pseudocode:
Step 1: Input
b=1; a=0
n=4
Step 2: Input
+/-ξi and Ai for n=4
Step 3: Solve for x: x = (b + a)/2 + ((b - a)/2)ξ
Step 4: Display output:
x=0.5+0.5ξ
Step 5: Compute for xi=0.5+0.5ξi
Step 6: Display output of xi
Step 7: Solve for I: I = 0.5[A1 f(x1) + A2 f(x2) + A3 f(x3) + A4 f(x4)]
Step 8: Display output I: Stop.
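With the standard tabulated 4-point nodes and weights, Steps 3 through 7 become:

```python
import math

# Standard 4-point Gauss-Legendre nodes (XI) and weights (A) on [-1, 1]
XI = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
A  = [ 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451]

def erf_gauss(a=0.0, b=1.0):
    f = lambda x: 2 / math.sqrt(math.pi) * math.exp(-x * x)
    mid, half = (b + a) / 2, (b - a) / 2   # x = (b+a)/2 + ((b-a)/2)*xi
    return half * sum(Ai * f(mid + half * xi) for xi, Ai in zip(XI, A))

print(erf_gauss())
```

The factor (b - a)/2 = 0.5 in front of the sum is the Jacobian of the change of variables from [-1, 1] to [0, 1], matching Step 7.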
Flowchart: Start → input b=1, a=0, n=4 → input ±ξi and Ai for n=4 → x = (b+a)/2 + ((b-a)/2)ξ → display x = 0.5 + 0.5ξ → compute xi → display xi → display I → End