
MATH447 Project: Applications of Real

Analysis
A. Chattopadhyay, Z. Chen, A. Winnicki, R. Johnson, N. Singh, W. Wan

12/7/2018

1 Introduction
This paper will present two specific applications of real analysis as encountered
in different fields of engineering, the natural sciences, and computational
mathematics. The applications that we will focus on are Newton's method for
root finding and the Laplace equation for heat transfer.

Newton's method is an iterative algorithm for solving for the zeros of a non-
linear equation f(x) = 0. It is widely used in computational mathematics and
finds applications in various branches of engineering, for problems relating to
optimization and the numerical solution of non-linear differential equations.
Among root-finding algorithms it has seen the most widespread adoption because
it balances good performance against computational cost: under a suitable set
of criteria, it converges to the solution quadratically. The limitations of
Newton's method will also be explored, and two other iterative methods, the
secant method and the bisection method, are introduced to address those
shortcomings. The ultimate goal is to derive an algorithm that safeguards
convergence to a root over an interval.

2 Definitions
The following are some definitions that will be used later in the paper; they
are presented here for reference.

2.1 Intermediate Value Theorem


The Intermediate Value Theorem states that if f : [a, b] → R is continuous, and
u lies between f(a) and f(b), then there exists c ∈ [a, b] such that f(c) = u.

2.2 Rate of Convergence
For a general iterative method, we define the error at iteration k as e_k = |x_k − x*|.
For a sequence of iterates x_k, we obtain a corresponding error sequence e_k. The
method converges with rate r if

lim_{k→∞} |e_{k+1}| / |e_k|^r = C

for some finite non-zero constant C (with C < 1 required when r = 1). Here r is
said to be the rate of convergence of the method. The rate is measured once the
approximation is close to the root, that is, once e_k < 1.
An algorithm is said to be:
• linearly convergent if r = 1,
• superlinearly convergent if r ∈ (1, 2),
• rth-order convergent if r = 2, 3, . . .
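To make the definition concrete, the following sketch (our own illustration, not part of the original text) estimates the constant C empirically for Newton's method applied to the sample problem f(x) = x² − 2, whose quadratic convergence is proved later in Section 3.2:

```python
import math

# Empirical check of quadratic convergence (r = 2) for Newton's method
# applied to f(x) = x^2 - 2, whose root is x* = sqrt(2).
root = math.sqrt(2)
x = 3.0
errors = []
for _ in range(6):
    errors.append(abs(x - root))
    x = x - (x * x - 2) / (2 * x)  # Newton step

# With r = 2 the ratios |e_{k+1}| / |e_k|^2 settle near a constant C,
# here |f''(x*)| / (2 |f'(x*)|) = 1 / (2 sqrt(2)) ≈ 0.354.
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(len(errors) - 1)]
print(ratios)
```

The ratios approach a fixed constant, which is exactly the behaviour the definition with r = 2 describes.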

2.3 Taylor Series


The Taylor series of a real-valued function f(x) that is infinitely
differentiable at a real number x is the power series

f(x + h) = Σ_{n=0}^∞ h^n f^(n)(x) / n!

which can be expanded as

f(x + h) = f(x) + h f'(x) + h² f''(x)/2! + . . .

A variant of the Taylor series, often referred to as Taylor's theorem with the
Lagrange form of the remainder, is also frequently encountered. It truncates
the expansion after the linear term:

f(x + h) = f(x) + h f'(x) + h² f''(ξ)/2!

where ξ ∈ [x, x + h].
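As a quick numerical illustration (our own sketch, using the sine function as an arbitrary choice), the partial sums of the series converge rapidly to f(x + h), and the two-term truncation error stays within the Lagrange remainder bound h² max|f''| / 2!:

```python
import math

# Partial sums of the Taylor series of sin about x, evaluated at x + h.
# The derivatives of sin cycle: sin, cos, -sin, -cos.
x, h = 0.5, 0.1
derivs = [math.sin, math.cos,
          lambda t: -math.sin(t), lambda t: -math.cos(t)]

approx, errors = 0.0, []
for n in range(8):
    approx += h ** n * derivs[n % 4](x) / math.factorial(n)
    errors.append(abs(math.sin(x + h) - approx))

print(errors)  # the truncation error shrinks rapidly with each extra term
# Consistency with the Lagrange remainder: the two-term truncation error
# errors[1] is bounded by h^2 * max|f''| / 2! <= h^2 / 2.
```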

2.4 Tolerance
Tolerance can be defined as the maximum error bound for the calculated
variable. Mathematically, the calculated value satisfies tolerance tol if

x_calculated ∈ (x_real − tol, x_real + tol)

3 Application 1: Newton’s Method
3.1 Newton’s Method
Newton's method is an iterative method for rapidly computing the zeros of
differentiable functions.
Suppose that f : R → R is twice continuously differentiable, and there is a
point x* ∈ R such that f(x*) = 0 and f'(x*) ≠ 0. Then there is a δ > 0 so
that if x0 belongs to [x* − δ, x* + δ], the iterates

x_{k+1} = x_k − f(x_k)/f'(x_k)

converge to x*. Moreover, there is a positive constant M such that
|x_{k+1} − x*| ≤ M |x_k − x*|² for k ≥ 1.

3.2 Proof of Convergence


Let e_k = x_k − x*, so that x_k − e_k = x*. Using the Taylor series expansion of
f with the Lagrange remainder, setting x = x_k and h = −e_k, we have

f(x_k − e_k) = f(x_k) − e_k f'(x_k) + (e_k²/2) f''(ξ_k)

for some ξ_k between x_k and x*. Since x_k − e_k = x* and f(x*) = 0, we have

0 = f(x_k) − e_k f'(x_k) + (e_k²/2) f''(ξ_k).

Since the derivative of f is continuous with f'(x*) ≠ 0, we have f'(x_k) ≠ 0 as
long as x_k is close enough to x*. So we can divide by f'(x_k) to obtain

0 = f(x_k)/f'(x_k) − (x_k − x*) + e_k² f''(ξ_k) / (2 f'(x_k)).

By the definition of Newton's method, x_{k+1} = x_k − f(x_k)/f'(x_k), so the
first two terms combine to −(x_{k+1} − x*) and the expression can be rewritten
as

x_{k+1} − x* = e_k² f''(ξ_k) / (2 f'(x_k)).

So,

|x_{k+1} − x*| = |x_k − x*|² |f''(ξ_k)| / (2 |f'(x_k)|).

By continuity, f'(x_k) converges to f'(x*), and since ξ_k lies between x_k and
x*, ξ_k converges to x* and hence f''(ξ_k) converges to f''(x*). So for large
enough k,

|x_{k+1} − x*| ≤ M |x_k − x*|²   whenever M > |f''(x*)| / (2 |f'(x*)|).

3.3 Applications
3.3.1 Solving Equations
For a given real-valued differentiable function f(x) defined on R, Newton's
method seeks the roots of the equation

f(x) = 0

by beginning with an initial guess x0 and generating successive terms of a
sequence {x_k} according to the formula

x_{k+1} = x_k − f(x_k)/f'(x_k).

The geometric basis for the iteration formula is easy to see: the point x_{k+1}
is simply the x-intercept of the tangent line to y = f(x) at the point
(x_k, f(x_k)).

Figure 1: Geometrical description of Newton's method

The equation of this tangent line is


y − f(x_k) = f'(x_k)(x − x_k)

If we find the x-intercept of this line by setting y = 0 and solving for x, the
solution x_{k+1} is given by the iterative formula described above.
For example, if we attempt to solve 4x³ + x − 1 = 0 with an initial guess of
x0 = 1, we obtain the iterates shown below:
x0 = 1.0000000000
x1 = 0.6923076923
x2 = 0.5412930628
x3 = 0.5023901750
x4 = 0.5000085354
x5 = 0.5000000001

and it is easy to see that we are getting closer and closer to x* = 0.5, which
is one of the solutions of the equation.
Newton’s method also extends to the multivariate case. Suppose f is a
differentiable function of n variables with values in Rn and x0 ∈ Rn . The idea
underlying Newton’s Method for solving the equation

f (x) = 0

is very similar to the single-variable case. More precisely, if x_k is the
current iterate, the next iterate x_{k+1} of Newton's method is given by

x_{k+1} = x_k − [J_f(x_k)]^{−1} f(x_k)

where J_f(x_k) is the Jacobian matrix of f at x_k.
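A sketch of the multivariate iteration using NumPy (a linear solve replaces the explicit inverse, which is the usual numerically preferable choice); the test system x² + y² = 2, x − y = 0 is our own example:

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Newton's method in R^n: x_{k+1} = x_k - Jf(x_k)^{-1} f(x_k).
    A linear solve replaces the explicit inverse for numerical stability."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example system: x^2 + y^2 = 2 and x - y = 0, with a root at (1, 1).
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2, v[0] - v[1]])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(newton_system(f, jac, [2.0, 0.5]))  # ≈ [1. 1.]
```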

3.3.2 Function Minimization


Suppose that f (x) is a twice continuously differentiable, real-valued function of n
variables and suppose that x0 ∈ Rn . Then the Newton’s method sequence {xk }
with initial point x0 for minimizing f (x) is defined by the following recurrence
formula:
xk+1 = xk − [Hf (xk )]−1 ∇f (xk )
where H_f is the Hessian matrix. To build intuition, let us return to the
one-dimensional case. We have a function f : R → R, and we want to minimize f.
We can try to find a critical point of f: a place where f'(x) = 0. This is
something that the ordinary Newton's method can do. Starting at some guess x0,
we follow the iteration

x_{k+1} = x_k − f'(x_k)/f''(x_k)

which is Newton's method applied to the function f'. For example, suppose we
try to minimize the function

f(x, y) = x² + 2xy + 2y² + 4x − 2y

with initial guess x0 = (0, 0). We can calculate the gradient

∇f(x, y) = (2x + 2y + 4, 2x + 4y − 2)

and the Hessian

H_f = [[2, 2], [2, 4]].

Applying the recurrence formula above, we have

x1 = x0 − [H_f(x0)]^{−1} ∇f(x0) = (0, 0) − [[1, −1/2], [−1/2, 1/2]] (4, −2) = (−5, 3).
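The single Newton step above can be verified numerically; this sketch assumes NumPy and uses a linear solve rather than forming the inverse Hessian:

```python
import numpy as np

# One Newton step for minimizing f(x, y) = x^2 + 2xy + 2y^2 + 4x - 2y.
grad = lambda v: np.array([2 * v[0] + 2 * v[1] + 4,
                           2 * v[0] + 4 * v[1] - 2])
hess = np.array([[2.0, 2.0], [2.0, 4.0]])  # constant, since f is quadratic

x0 = np.zeros(2)
x1 = x0 - np.linalg.solve(hess, grad(x0))
print(x1)  # [-5.  3.] -- the exact minimizer, reached in a single step
```

Because f is quadratic, its Hessian is constant and one Newton step lands exactly on the minimizer (the gradient vanishes at (−5, 3)).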

3.4 Limitations of the Newton’s Method
• Newton’s method requires the evaluation of the derivative of the function
at each iteration which may be computationally expensive.
• Newton's method may not converge if the derivative of the function becomes
zero at any iterate, since the term f(x_k)/f'(x_k) would become undefined.

3.5 Secant Method


The secant method can be used as an approximation of Newton's method: the
Newton iteration for finding the root of a function,

x_{k+1} = x_k − f(x_k)/f'(x_k),

needs the derivative of the function at every iterate, and sometimes this
derivative is computationally expensive to evaluate. Instead, we use an
approximation of the derivative of the function at the point x_k:

f'(x_k) ≈ (f(x_k) − f(x_{k−1})) / (x_k − x_{k−1}).

The ≈ sign becomes equality in the limit (x_k − x_{k−1}) → 0, and the
approximation becomes worse as the distance between the two points increases.
Therefore, we start with two points close to each other and iterate

x_{k+1} = x_k − f(x_k)/slope

where the slope is given by the approximation of the derivative shown above.
An example is given below, where the function whose root we need to find is
f(x) = x² − 612. We perform secant iterations with initial guesses x0 = 10 and
x1 = 30, obtaining the following approximations:
x2 = 22.80000000000000
x3 = 24.545454545454547
x4 = 24.746543778801843
x5 = 24.73860275369709
x6 = 24.738633748750722
x7 = 24.738633753705965
x8 = 24.738633753705965

Figure 2: A plot of the function f(x) = x² − 612 and the secant lines at each
iteration of the secant method
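The table of iterates above can be reproduced with a short routine; this is a minimal sketch:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: Newton's formula with f'(x_k) replaced by the
    slope (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})."""
    for _ in range(max_iter):
        slope = (f(x1) - f(x0)) / (x1 - x0)
        x0, x1 = x1, x1 - f(x1) / slope
        if abs(x1 - x0) < tol:
            break
    return x1

# Reproduce the iteration for f(x) = x^2 - 612 with x0 = 10, x1 = 30.
root = secant(lambda x: x * x - 612, 10.0, 30.0)
print(root)  # ≈ 24.7386337537... = sqrt(612)
```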

3.6 Limitations of the Secant method


• This method also may not converge: if at some iteration the slope is zero,
that is, the two points have the same function value, the iteration is
undefined.
• As we only approximate the derivative of the function at a point, the method
is not quadratically convergent (as Newton's method is), but its convergence
is superlinear.

3.7 Bisection Method


Say we want to find a root of a continuous function f(x), and we know
sign(f(a)) ≠ sign(f(b)), with a < b. Recall the Intermediate Value Theorem,
which states that if u lies between f(a) and f(b), then there exists c ∈ [a, b]
such that f(c) = u. Because sign(f(a)) ≠ sign(f(b)), 0 lies between f(a) and
f(b). Thus there exists c ∈ [a, b] such that f(c) = 0, and we know a root
exists on [a, b].
The bisection method can be used to find this root c by reducing the size of
the interval [a, b] through the following process:

c_k = (a_k + b_k)/2

[a_{k+1}, b_{k+1}] = [c_k, b_k] if sign(f(a_k)) = sign(f(c_k)),
[a_{k+1}, b_{k+1}] = [a_k, c_k] if sign(f(b_k)) = sign(f(c_k)).

This produces nested intervals [a_{k+1}, b_{k+1}] ⊂ [a_k, b_k] for all k ∈ N,
each of which contains a root c of f(x). Since the interval lengths halve at
every step, these smaller and smaller intervals ultimately converge to said
root c.
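The halving process above can be sketched in a few lines:

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: repeatedly halve [a, b] while keeping a sign change,
    so a root of the continuous f stays bracketed (by the IVT)."""
    assert f(a) * f(b) < 0, "need sign(f(a)) != sign(f(b))"
    while b - a > tol:
        c = (a + b) / 2
        if f(a) * f(c) > 0:  # sign(f(a)) == sign(f(c)): root lies in [c, b]
            a = c
        else:                # root lies in [a, c]
            b = c
    return (a + b) / 2

print(bisect(lambda x: x * x - 612, 10.0, 30.0))  # ≈ 24.7386...
```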

3.8 Limitations of the Bisection Method


• The bisection method is only linearly convergent.
• It is not always easy to find values a and b for which the condition
sign(f (a)) 6= sign(f (b)) holds.

3.9 Safeguarding Convergence in Newton’s Method


Assume that we have an interval [a, b] such that sign(f(a)) ≠ sign(f(b)). We
choose an initial guess x0 ∈ [a, b] and apply Newton's method to obtain the
next iterate x1. After one Newton iteration, we check whether x1 is in [a, b].
If x1 ∈ [a, b], we use x1 to apply Newton's method again; if x1 ∉ [a, b], we
apply a bisection step, reducing the interval [a, b] to [a, c] or [c, b], and
use the new midpoint as the initial guess for the next Newton step. We repeat
the same procedure until we converge to the root. The algorithm is as follows:

1. Choose the initial guess as the midpoint of the interval, i.e., x0 := (a + b)/2.
2. Apply Newton's method: x1 := x0 − f(x0)/f'(x0).
3. Check whether x1 ∈ [a, b].
4. If x1 ∈ [a, b], then set x0 := x1 and repeat from Step 2.
5. If x1 ∉ [a, b], set c := (a + b)/2; if sign(f(a)) = sign(f(c)) then a := c,
else b := c, and repeat from Step 1.
6. Stop when |x1 − x0| < tolerance.

Hence, in this way we are able to guarantee convergence in an interval [a, b]
while also converging faster than the bisection method.
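The algorithm can be sketched as follows; the cycling example f(x) = x³ − 2x + 2, for which plain Newton started at x0 = 0 oscillates between 0 and 1, is our own choice of test problem:

```python
def safeguarded_newton(f, fp, a, b, tol=1e-12, max_iter=100):
    """Newton's method with a bisection safeguard: whenever a Newton step
    leaves [a, b], shrink the bracket by bisection and restart from its
    midpoint, so convergence to a bracketed root is guaranteed."""
    assert f(a) * f(b) < 0, "need a sign change on [a, b]"
    x0 = (a + b) / 2
    for _ in range(max_iter):
        x1 = x0 - f(x0) / fp(x0)      # Newton step
        if not a <= x1 <= b:          # step escaped the bracket
            c = (a + b) / 2
            if f(a) * f(c) > 0:
                a = c
            else:
                b = c
            x1 = (a + b) / 2          # restart from the new midpoint
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    return x0

# f(x) = x^3 - 2x + 2 makes plain Newton from x0 = 0 cycle between 0 and 1;
# the safeguarded version still converges to the real root near -1.7693.
root = safeguarded_newton(lambda x: x ** 3 - 2 * x + 2,
                          lambda x: 3 * x ** 2 - 2,
                          -3.0, 0.0)
print(root)
```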

4 Application 2: Laplace Equation


4.1 Laplace’s equation
In this part, the aim is to find the temperature distribution on a region,
given its values on the boundary, from Laplace's equation. This is known as the
steady-state heat equation.
Laplace's equation is a second-order partial differential equation arising in
diffusion and wave propagation. Note that a partial differential equation (PDE)
is a differential equation that relates an unknown multivariable function to
its partial derivatives.
The inhomogeneous version of Laplace's equation, Δu = f with f a given
function, is called Poisson's equation. If a diffusion or wave process is
stationary (independent of time), so that u_t = 0 and u_tt = 0, then the
diffusion or wave equation reduces to Laplace's equation:

One dimension: u_xx = 0
Two dimensions: Δu = u_xx + u_yy = 0
Three dimensions: Δu = u_xx + u_yy + u_zz = 0

In one dimension, we have simply u_xx = 0, so the only harmonic functions in
one dimension are u(x) = A + Bx. In two dimensions, consider the problem of
determining the temperature on a thin metal disk given that the temperature on
the boundary circle is fixed. We assume that there is no heat loss in the third
dimension; perhaps this disk is placed between two insulating pads. We also
assume that the system is at equilibrium, so that the temperature at each point
remains constant over time.

Define the disk D = {(x, y) : x² + y² ≤ 1} = {(r, θ) : 0 ≤ r ≤ 1, −π ≤ θ ≤ π}.

Here (0, θ) represents the origin for all values of θ, and we identify
(r, −π) = (r, π) for all r. More generally, we allow (r, θ) for any real value
of θ and make the identification (r, θ + 2π) = (r, θ).
We will prove that Δu = 0, where

Δu = u_rr + (1/r) u_r + (1/r²) u_θθ.

Let us take a region R where

R = {(r, θ) : r0 ≤ r ≤ r1, θ0 ≤ θ ≤ θ1}.


We will eventually show, using material covered in class, what the difference
of the heat in this region is in terms of partial derivatives. The boundary of
this region, which we call L, consists of two radial segments L0 and L1, where

L0 = {(r, θ0) : r0 ≤ r ≤ r1} and L1 = {(r, θ1) : r0 ≤ r ≤ r1},

and two arcs A0 and A1, where

A0 = {(r0, θ) : θ0 ≤ θ ≤ θ1} and A1 = {(r1, θ) : θ0 ≤ θ ≤ θ1}.
Observe that L is the union of L0, L1, A0, and A1, but we must orient each
piece so that its normal vector points outward from the region R. Thus, as
oriented curves,

L = L0 + A1 − L1 − A0.
The heat dissipated is the integral of the outward normal derivative of the
temperature over the boundary, which we can compute as the sum of the
individual integrals over L0, L1, A0, and A1. Recall that a partial derivative
is a derivative taken with respect to one variable while keeping the rest
constant.
On the radial segments L0 and L1, the outward normal points in the θ
direction, so taking the normal derivative there is the same as taking the
partial derivative with respect to θ. Using the definition of the derivative
(noting that the partial derivative keeps every variable except θ constant),
we have

∂u/∂n (r, θ1) = lim_{h→0} [u(r, θ1 + h) − u(r, θ1)] / (rh) = (1/r) u_θ(r, θ1).

On the arcs, the normal points in the direction of r, so the normal derivative
is just u_r(r, θ).
Then we integrate these normal derivatives over the boundary to obtain the
amount of heat transfer. For the arcs, which vary only with θ, the arc-length
element is r dθ, while the length element along the radii L0 and L1 is just dr.
Therefore, we write the conservation law over the boundary as follows:

0 = ∫_{r0}^{r1} (1/r) [u_θ(r, θ1) − u_θ(r, θ0)] dr + ∫_{θ0}^{θ1} u_r(r1, θ) r1 dθ − ∫_{θ0}^{θ1} u_r(r0, θ) r0 dθ

= ∫_{r0}^{r1} (1/r) [u_θ(r, θ1) − u_θ(r, θ0)] dr + ∫_{θ0}^{θ1} [r1 u_r(r1, θ) − r0 u_r(r0, θ)] dθ.

We can divide by θ1 − θ0 and take the limit as θ1 approaches θ0. Observe that
since the first term is an integral with respect to r, we can use Leibniz's
rule; its proof is omitted, but it is a simple application of the definition of
the derivative and the Mean Value Theorem.

Leibniz's rule: suppose that f(x, t) and ∂f/∂x (x, t) are continuous functions
on [a, b] × [c, d]. Then the function F(x) on [a, b] given by
F(x) = ∫_c^d f(x, t) dt is differentiable, and F'(x) = ∫_c^d ∂f/∂x (x, t) dt.

For the second term, we can use the Fundamental Theorem of Calculus: let f be
a continuous real-valued function defined on a closed interval [a, b], and let
F be the function defined, for all x in [a, b], by F(x) = ∫_a^x f(t) dt. Then
F is uniformly continuous on [a, b], differentiable on the open interval
(a, b), and F'(x) = f(x) for all x in (a, b).

Applying Leibniz's rule and the Fundamental Theorem of Calculus and
simplifying, we get

0 = ∫_{r0}^{r1} (1/r) lim_{θ1→θ0} [u_θ(r, θ1) − u_θ(r, θ0)] / (θ1 − θ0) dr + lim_{θ1→θ0} (1/(θ1 − θ0)) ∫_{θ0}^{θ1} [r1 u_r(r1, θ) − r0 u_r(r0, θ)] dθ

= ∫_{r0}^{r1} (1/r) u_θθ(r, θ0) dr + r1 u_r(r1, θ0) − r0 u_r(r0, θ0).

To further simplify, we divide by r1 − r0 and take the limit as r1 decreases to
r0. Using the Fundamental Theorem of Calculus again, together with the
definition of the derivative and the product rule, we obtain

0 = lim_{r1→r0} (1/(r1 − r0)) ∫_{r0}^{r1} (1/r) u_θθ(r, θ0) dr + lim_{r1→r0} [r1 u_r(r1, θ0) − r0 u_r(r0, θ0)] / (r1 − r0)

= (1/r0) u_θθ(r0, θ0) + ∂/∂r [r u_r(r, θ0)] |_{r=r0}

= (1/r0) u_θθ(r0, θ0) + u_r(r0, θ0) + r0 u_rr(r0, θ0) = r0 Δu(r0, θ0).
Thus, our differential equation and boundary condition become

Δu := u_rr + (1/r) u_r + (1/r²) u_θθ = 0 for 0 ≤ r < 1, −π ≤ θ ≤ π,

u(1, θ) = f(θ) for −π ≤ θ ≤ π.

4.2 Polar coordinate transformation


We can also derive this by a direct polar-coordinate transformation: let
u(r, θ) be the temperature distribution over the disk, with x = r cos θ and
y = r sin θ. The Laplace equation is invariant under all rigid motions,
including translations and rotations:

• invariance: u_xx + u_yy = u_x'x' + u_y'y';
• translation: x' = x + a, y' = y + b;
• rotation: x' = x cos α + y sin α, y' = −x sin α + y cos α.

The proof of invariance is a straightforward chain-rule calculation of u_x'x'
and u_y'y'. Now we calculate Δu(r, θ) with x = r cos θ and y = r sin θ:

u_r = u_x (∂x/∂r) + u_y (∂y/∂r) = u_x cos θ + u_y sin θ

u_rr = cos θ (u_xx ∂x/∂r + u_xy ∂y/∂r) + sin θ (u_xy ∂x/∂r + u_yy ∂y/∂r)
    = cos²θ u_xx + 2 cos θ sin θ u_xy + sin²θ u_yy

u_θ = u_x (∂x/∂θ) + u_y (∂y/∂θ) = −r sin θ u_x + r cos θ u_y

u_θθ = −r cos θ u_x − r sin θ (u_xx ∂x/∂θ + u_xy ∂y/∂θ) − r sin θ u_y + r cos θ (u_xy ∂x/∂θ + u_yy ∂y/∂θ)
    = −r (cos θ u_x + sin θ u_y) + r² (sin²θ u_xx − 2 cos θ sin θ u_xy + cos²θ u_yy)

using ∂x/∂θ = −r sin θ and ∂y/∂θ = r cos θ.
Then we combine u_rr and (1/r²) u_θθ: since cos²θ + sin²θ = 1, the mixed terms
cancel and

u_rr + (1/r²) u_θθ = u_xx + u_yy − (1/r) u_r,

so we obtain u_xx + u_yy = u_rr + (1/r) u_r + (1/r²) u_θθ.

With the proof in two dimensions, we can approach the steady-state heat
equation in three dimensions. Spherical coordinates (r, θ, φ) will be applied
here to avoid messy variables.
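Before moving on, the two-dimensional identity u_xx + u_yy = u_rr + (1/r) u_r + (1/r²) u_θθ can be checked symbolically. The sketch below assumes the SymPy library is available; the test function u = x³y is an arbitrary choice of ours:

```python
import sympy as sp

x, y, r, t = sp.symbols('x y r theta')
u = x ** 3 * y  # an arbitrary smooth test function

# Cartesian Laplacian u_xx + u_yy, then rewritten in polar variables.
cart = sp.diff(u, x, 2) + sp.diff(u, y, 2)
cart_polar = cart.subs([(x, r * sp.cos(t)), (y, r * sp.sin(t))])

# Polar Laplacian u_rr + (1/r) u_r + (1/r^2) u_tt of the same function.
up = u.subs([(x, r * sp.cos(t)), (y, r * sp.sin(t))])
polar = (sp.diff(up, r, 2) + sp.diff(up, r) / r
         + sp.diff(up, t, 2) / r ** 2)

print(sp.simplify(polar - cart_polar))  # 0
```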


r = √(x² + y² + z²) = √(s² + z²), s = √(x² + y²),
x = s cos φ, y = s sin φ, z = r cos θ, s = r sin θ.

According to the two-dimensional Laplace calculation we just obtained, we can
conclude that

u_zz + u_ss = u_rr + (1/r) u_r + (1/r²) u_θθ,
u_xx + u_yy = u_ss + (1/s) u_s + (1/s²) u_φφ.

Combining the equations above and cancelling u_ss on both sides, we have

Δu = u_xx + u_yy + u_zz = u_rr + (1/r) u_r + (1/r²) u_θθ + (1/s) u_s + (1/s²) u_φφ.

In order to express the remaining terms in (r, θ, φ), we substitute
s = r sin θ, s² = r² sin²θ, and

u_s = u_r (∂r/∂s) + u_θ (∂θ/∂s) + u_φ (∂φ/∂s) = u_r (s/r) + u_θ (cos θ / r) + u_φ · 0.

Then we obtain Δu in terms of (r, θ, φ):

Δu = u_rr + (2/r) u_r + (1/r²) [u_θθ + (cot θ) u_θ + (1/sin²θ) u_φφ].

4.3 Formal solution of the steady-state heat equation in two dimensions
The steady-state heat equation is complicated, so we make the unjustified
assumption that there are solutions of a special form; we can then assemble a
general solution, ignoring all convergence issues. This procedure is justified
as an experiment that provides a likely candidate for the solution but does not
in itself constitute a proper derivation. Separation of variables will be
applied to find solutions of the form u(r, θ) = R(r)Θ(θ), where R is a function
of r only and Θ is a function of θ only. Then Δu = 0 transforms into

R''(r)Θ(θ) + (1/r) R'(r)Θ(θ) + (1/r²) R(r)Θ''(θ) = 0

⟹ [r² R''(r) + r R'(r)] / R(r) = −Θ''(θ)/Θ(θ) = c, where c is a constant.

So we can rewrite it as Θ''(θ) + cΘ(θ) = 0 and r² R''(r) + r R'(r) − cR(r) = 0.
Θ''(θ) + cΘ(θ) = 0 is a well-known differential equation with constant
coefficients, whose solutions are:

Θ(θ) = A cos(√c θ) + B sin(√c θ) for c > 0,
Θ(θ) = A + Bθ for c = 0,
Θ(θ) = A e^{√(−c) θ} + B e^{−√(−c) θ} for c < 0.

In this problem, we require the solution to be 2π-periodic, since (r, −π) and
(r, π) represent the same point. So the case c < 0 is eliminated, and when
c = 0 only the constant function survives. For c > 0, periodicity forces √c to
be an integer n. Then we have:

Θ(θ) = A cos(nθ) + B sin(nθ) for c = n² ≥ 1, and Θ(θ) = A for c = 0.
Now for each c = n², n ≥ 0, R should also satisfy r² R''(r) + r R'(r) − n² R(r) = 0.
In order to solve it, we substitute r = e^t. Then

dR/dt = (dR/dr)(dr/dt) = R' r

d²R/dt² = d/dr (R' r) · dr/dt = (R'' r + R') r = r² R'' + r R'.

Hence our differential equation becomes d²R/dt² = n² R, i.e.,
d²R/dt² − n² R = 0. This is a linear differential equation with constant
coefficients, with solutions

R = a e^{nt} + b e^{−nt} = a r^n + b r^{−n} for n ≥ 1, and
R = a + bt = a + b log r for n = 0.
Physical experiments and considerations demand that R be continuous at r = 0,
which eliminates r^{−n} and log r. Consequently, R(r) = a r^n for each n ≥ 0 is
left. Returning to our assumption u(r, θ) = R(r)Θ(θ) with c = n², we have the
solutions u(r, θ) = A_n r^n cos nθ + B_n r^n sin nθ for n ≥ 0; when n = 0 this
yields u(r, θ) = A_0. A sum of solutions of a homogeneous linear differential
equation is again a solution, so the general solution (ignoring convergence
issues) is

u(r, θ) = A_0 + Σ_{n=1}^∞ (A_n r^n cos nθ + B_n r^n sin nθ).

Regardless of convergence, setting r = 1 and using the boundary condition, we
have

f(θ) = A_0 + Σ_{n=1}^∞ (A_n cos nθ + B_n sin nθ)

for the temperature distribution over the boundary, i.e., the Fourier series
of f.
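The formal series solution can be evaluated numerically by computing the Fourier coefficients of the boundary data with a quadrature rule. This is a minimal sketch of ours (the midpoint rule and the test boundary data f(θ) = cos 2θ, whose exact solution is u = r² cos 2θ, are our own choices):

```python
import math

def disk_solution(f, r, theta, n_terms=20, n_quad=2000):
    """Evaluate u(r, theta) = A0 + sum r^n (An cos n*theta + Bn sin n*theta),
    with the Fourier coefficients of the boundary data f computed by the
    midpoint rule on [-pi, pi]."""
    dt = 2 * math.pi / n_quad
    ts = [-math.pi + (j + 0.5) * dt for j in range(n_quad)]
    u = sum(f(s) for s in ts) * dt / (2 * math.pi)  # A0
    for n in range(1, n_terms + 1):
        an = sum(f(s) * math.cos(n * s) for s in ts) * dt / math.pi
        bn = sum(f(s) * math.sin(n * s) for s in ts) * dt / math.pi
        u += r ** n * (an * math.cos(n * theta) + bn * math.sin(n * theta))
    return u

# Boundary data f(theta) = cos(2*theta): the exact solution is r^2 cos(2*theta).
print(disk_solution(lambda s: math.cos(2 * s), 0.5, 0.3))  # ≈ 0.25*cos(0.6)
```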

References
[1] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained
Optimization and Nonlinear Equations, Prentice-Hall, New Jersey, 1996.

[2] K. Davidson and A. Donsig, Real Analysis and Applications, Springer, 2010.

[3] M. Heath, Scientific Computing: An Introductory Survey, McGraw-Hill, 2002.

[4] K. Ross, Elementary Analysis: The Theory of Calculus, Springer, New York,
2013.

[5] Secant Method, available at https://en.wikipedia.org/wiki/Secant_method.

[6] Proof of quadratic convergence of Newton's Method, available at
https://cs.nyu.edu/overton/NumericalComputing/newton.pdf.

[7] E. Solomonik, Bisection method and safeguarding, CS 450 Fall 2018 lecture
notes, University of Illinois at Urbana-Champaign.

[8] Walter A. Strauss, Partial Differential Equations: An Introduction, 2nd
Edition, John Wiley & Sons, Dec 2007.
