Numerical Analysis and Computing
Lecture Notes #04 — Solutions of Equations in One Variable, Interpolation and Polynomial Approximation — Accelerating Convergence; Zeros of Polynomials; Deflation; Müller's Method; Lagrange Polynomials; Neville's Method

Joe Mahaffy, mahaffy@math.sdsu.edu

Department of Mathematics
Dynamical Systems Group
Computational Sciences Research Center
San Diego State University
San Diego, CA 92182-7720
http://www-rohan.sdsu.edu/~jmahaffy

Spring 2010
Joe Mahaffy, mahaffy@math.sdsu.edu — #4: Solutions of Equations in One Variable — (1/58)
Outline

1. Accelerating Convergence
   Review
   Aitken's Δ² Method
   Steffensen's Method

2. Zeros of Polynomials
   Fundamentals
   Horner's Method

3. Deflation, Müller's Method
   Deflation: Finding All the Zeros of a Polynomial
   Müller's Method — Finding Complex Roots

4. Polynomial Approximation
   Fundamentals
   Moving Beyond Taylor Polynomials
   Lagrange Interpolating Polynomials
   Neville's Method
Introduction

"It is rare to have the luxury of quadratic convergence." (Burden-Faires, p. 83)

There are a number of methods for squeezing faster convergence out of an already computed sequence of numbers.

Here we explore one method which seems to have been around since the beginning of numerical analysis: Aitken's Δ² method. It can be used to accelerate the convergence of any linearly convergent sequence, regardless of its origin or application.

A review of modern extrapolation methods can be found in:

"Practical Extrapolation Methods: Theory and Applications," Avram Sidi, Number 10 in Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, June 2003. ISBN: 0-521-66159-5.
Recall: Convergence of a Sequence

Definition
Suppose the sequence {p_n}_{n=0}^∞ converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with

    lim_{n→∞} |p_{n+1} − p| / |p_n − p|^α = λ

then {p_n}_{n=0}^∞ converges to p of order α, with asymptotic error constant λ.

Linear convergence means that α = 1 and λ < 1.
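As a small illustration (a sketch, not code from the notes), the definition above can be checked numerically: assuming the limit p is known and taking logarithms of |p_{n+1} − p| ≈ λ |p_n − p|^α over two consecutive steps gives α ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}).

```python
# Sketch (assumption: the true limit p is known): estimate the order of
# convergence alpha from three consecutive errors, using
# e_{n+1} ~ lambda * e_n^alpha  =>  alpha ~ log(e2/e1) / log(e1/e0).
import math

def estimate_order(seq, p):
    """Estimate alpha from the last three terms of a convergent sequence."""
    e = [abs(s - p) for s in seq[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Newton's method for x^2 - 2 = 0 is quadratically convergent,
# so the estimate should come out close to alpha = 2.
x, iterates = 1.0, []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)
    iterates.append(x)

print(estimate_order(iterates, math.sqrt(2)))
```

Applied instead to a linearly convergent sequence (such as the cos fixed point iteration used later in these notes), the same estimate comes out near 1.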
Aitken's Δ² Method, I/II

Assume {p_n}_{n=0}^∞ is a linearly convergent sequence with limit p. Further, assume we are far out into the tail of the sequence (n large), that the signs of the successive errors agree, i.e.

    sign(p_n − p) = sign(p_{n+1} − p) = sign(p_{n+2} − p) = ...,

and that

    (p_{n+2} − p) / (p_{n+1} − p) ≈ (p_{n+1} − p) / (p_n − p) ≈ λ   (the asymptotic limit).

This would indicate

    (p_{n+1} − p)² ≈ (p_{n+2} − p)(p_n − p)

    p_{n+1}² − 2 p_{n+1} p + p² ≈ p_{n+2} p_n − (p_{n+2} + p_n) p + p²

We solve for p and get...
Aitken's Δ² Method, II/II

We solve for p and get...

    p ≈ (p_{n+2} p_n − p_{n+1}²) / (p_{n+2} − 2 p_{n+1} + p_n)

A little bit of algebraic manipulation puts this into the classical Aitken form:

    p̂_n = p_n − (p_{n+1} − p_n)² / (p_{n+2} − 2 p_{n+1} + p_n)

Aitken's Δ² Method is based on the assumption that the p̂_n we compute from p_{n+2}, p_{n+1} and p_n is a better approximation to the real limit p.

The analysis needed to prove this is beyond the scope of this class; see e.g. Sidi's book.
Aitken's Δ² Method — The Recipe

Given a finite sequence {p_n}_{n=0}^N or an infinite sequence {q_n}_{n=0}^∞ which converges linearly to some limit, define the new sequences

    p̂_n = p_n − (p_{n+1} − p_n)² / (p_{n+2} − 2 p_{n+1} + p_n),   n = 0, 1, ..., N − 2

or

    q̂_n = q_n − (q_{n+1} − q_n)² / (q_{n+2} − 2 q_{n+1} + q_n),   n = 0, 1, ...

The numerator is a forward difference squared, while the denominator is a second order central difference.
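The recipe above is a few lines of Python (an illustrative sketch, not code from the notes). Note that once the input sequence has converged to machine precision the denominator can vanish, so a production code should guard that division.

```python
import math

def aitken(p):
    """Aitken's Delta^2: phat[n] = p[n] - (p[n+1]-p[n])^2 / (p[n+2]-2p[n+1]+p[n]).
    Assumes the denominators are nonzero (true away from machine precision)."""
    return [p[n] - (p[n + 1] - p[n]) ** 2 / (p[n + 2] - 2.0 * p[n + 1] + p[n])
            for n in range(len(p) - 2)]

# Fixed point iteration p_{n+1} = cos(p_n), p_0 = 0, with limit 0.73908513...
p = [0.0]
for _ in range(12):
    p.append(math.cos(p[-1]))

phat = aitken(p)
print(p[-1], phat[-1])  # phat[-1] is much closer to the limit than p[-1]
```

With p_0, ..., p_12 as input this reproduces the p̂ column of the table on the following slide.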
Aitken's Δ² Method — Example

Consider the sequence {p_n}_{n=0}^∞ generated by the fixed point iteration p_{n+1} = cos(p_n), p_0 = 0.

    Iteration    p_n                  p̂_n
     0           0.000000000000000    0.685073357326045
     1           1.000000000000000    0.728010361467617
     2           0.540302305868140    0.733665164585231
     3           0.857553215846393    0.736906294340474
     4           0.654289790497779    0.738050421371664
     5           0.793480358742566    0.738636096881655
     6           0.701368773622757    0.738876582817136
     7           0.763959682900654    0.738992243027034
     8           0.722102425026708    0.739042511328159
     9           0.750417761763761    0.739065949599941
    10           0.731404042422510    0.739076383318956
    11           0.744237354900557    0.739081177259563 *
    12           0.735604740436347    0.739083333909684 *

Note: In the slides the correct digits are set in bold; * p̂_11 needs p_13, and p̂_12 additionally needs p_14.
Faster Convergence for "Aitken Sequences"

Theorem (Convergence of Aitken Δ² Sequences)
Suppose {p_n}_{n=0}^∞ is a sequence that converges linearly to the limit p, and that for n large enough we have (p_n − p)(p_{n+1} − p) > 0. Then the Aitken-accelerated sequence {p̂_n}_{n=0}^∞ converges to p faster, in the sense that

    lim_{n→∞} (p̂_n − p) / (p_n − p) = 0.

We can combine Aitken's method with fixed-point iteration in order to get a "fixed-point iteration on steroids."
Steffensen's Method: Fixed-Point Iteration on Steroids

Suppose we have a fixed point iteration:

    p_0,  p_1 = g(p_0),  p_2 = g(p_1), ...

Once we have p_0, p_1 and p_2, we can compute

    p̂_0 = p_0 − (p_1 − p_0)² / (p_2 − 2 p_1 + p_0)

At this point we "restart" the fixed point iteration with p_3 = p̂_0, e.g.

    p_3 = p̂_0,  p_4 = g(p_3),  p_5 = g(p_4),

and compute

    p̂_3 = p_3 − (p_4 − p_3)² / (p_5 − 2 p_4 + p_3)
Steffensen's Method: The Quadratic (g, g, A) Waltz! — Quadratic Convergence

Algorithm: Steffensen's Method

Input: Initial approximation p_0; tolerance TOL; maximum number of iterations N_0.
Output: Approximate solution p, or failure message.

1. Set i = 1
2. While i ≤ N_0, do 3–6:
3.*   Set p_1 = g(p_0), p_2 = g(p_1),
          p = p_0 − (p_1 − p_0)² / (p_2 − 2 p_1 + p_0)
4.    If |p − p_0| < TOL then
      4a. output p
      4b. stop program
5.    Set i = i + 1
6.    Set p_0 = p
7. Output: "Failure after N_0 iterations."
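The algorithm above translates nearly line-for-line into Python (a sketch; the breakage test in step 3* is the subject of the next slide):

```python
# A near-direct transcription of the Steffensen algorithm above.
def steffensen(g, p0, tol=1e-10, n0=100):
    for _ in range(n0):
        p1 = g(p0)
        p2 = g(p1)
        denom = p2 - 2.0 * p1 + p0
        if denom == 0.0:   # step 3*: accept p2 if the denominator vanishes
            return p2
        p = p0 - (p1 - p0) ** 2 / denom
        if abs(p - p0) < tol:
            return p
        p0 = p
    raise RuntimeError("Failure after N_0 iterations.")

# The fixed point function g(x) = sqrt(10/(x + 4)) from the example in
# these notes, a rearrangement of x^3 + 4x^2 - 10 = 0:
root = steffensen(lambda x: (10.0 / (x + 4.0)) ** 0.5, 1.5)
print(root)  # -> 1.36523001...
```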
Steffensen's Method: Potential Breakage

3.* If at some point p_2 − 2 p_1 + p_0 = 0 (which appears in the denominator), then we stop and select the current value of p_2 as our approximate answer.

Both Newton's and Steffensen's methods give quadratic convergence. In Newton's method we compute one function value and one derivative in each iteration. In Steffensen's method we have two function evaluations and a more complicated algebraic expression in each iteration, but no derivative. It looks like we got something for (almost) nothing. However, in order to guarantee quadratic convergence for Steffensen's method, the fixed point function g must be three times continuously differentiable, i.e. g ∈ C³[a, b] (see Theorem 2.14 in Burden-Faires). Newton's method "only" requires f ∈ C²[a, b] (BF Theorem 2.5).
Steffensen's Method: Example, 1 of 2

Below we compare a Fixed Point iteration, Newton's Method, and Steffensen's Method for solving:

    f(x) = x³ + 4x² − 10 = 0

or, alternately,

    p_{n+1} = g(p_n) = sqrt( 10 / (p_n + 4) )
Steffensen's Method: Example, 2 of 2

Fixed Point Iteration

    i   p_n       g(p_n)
    0   1.50000   1.34840
    1   1.34840   1.36738
    2   1.36738   1.36496
    3   1.36496   1.36526
    4   1.36526   1.36523
    5   1.36523   1.36523

Newton's Method

    i   x_n       f(x_n)
    0   1.50000   1.51600e-01
    1   1.36495   3.11226e-04
    2   1.36523   1.35587e-09

Steffensen's Method

    i   p_n       p_1       p_2       p         |p − p_2|
    0   1.50000   1.34840   1.36738   1.36527   3.96903e-05
    1   1.36527   1.36523   1.36523   1.36523   2.80531e-12
Zeros of Polynomials

Definition: Degree of a Polynomial
A polynomial of degree n has the form

    P(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0,   a_n ≠ 0,

where the a_i's are constants (either real or complex) called the coefficients of P.

Why look at polynomials? — We'll be looking at the problem P(x) = 0 (i.e. f(x) = 0 for a special class of functions).

Polynomials are the basis for many approximation methods, hence being able to solve polynomial equations fast is valuable.

We'd like to use Newton's method, so we need to compute P(x) and P′(x) as efficiently as possible.
Fundamentals

Theorem (The Fundamental Theorem of Algebra)
If P(x) is a polynomial of degree n ≥ 1 with real or complex coefficients, then P(x) = 0 has at least one (possibly complex) root.

The proof is surprisingly(?) difficult and requires understanding of complex analysis... We leave it as an exercise for the motivated student!
Key Consequences of the Fundamental Theorem of Algebra, 1 of 2

Corollary
If P(x) is a polynomial of degree n ≥ 1 with real or complex coefficients, then there exist unique constants x_1, x_2, ..., x_k (possibly complex) and unique positive integers m_1, m_2, ..., m_k such that Σ_{i=1}^{k} m_i = n and

    P(x) = a_n (x − x_1)^{m_1} (x − x_2)^{m_2} · · · (x − x_k)^{m_k}

— The collection of zeros is unique.
— The m_i are the multiplicities of the individual zeros.
— A polynomial of degree n has exactly n zeros, counting multiplicity.
Key Consequences of the Fundamental Theorem of Algebra, 2 of 2

Corollary
Let P(x) and Q(x) be polynomials of degree at most n. If x_1, x_2, ..., x_k, with k > n, are distinct numbers with P(x_i) = Q(x_i) for i = 1, 2, ..., k, then P(x) = Q(x) for all values of x.

— If two polynomials of degree n agree at at least (n + 1) points, then they must be the same.
Horner's Method: Evaluating Polynomials Quickly, 1 of 4

Let

    P(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0

We want an efficient method to solve P(x) = 0.

In 1819, William Horner developed an efficient algorithm with n multiplications and n additions to evaluate P(x_0).

The technique is called Horner's Method or Synthetic Division.
Horner's Method: Evaluating Polynomials Quickly, 2 of 4

Assume

    P(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0

If we are looking to evaluate P(x_0) for any x_0, let

    b_n = a_n,   b_k = a_k + b_{k+1} x_0,   k = (n−1), (n−2), ..., 1, 0,

then b_0 = P(x_0). We have only used n multiplications and n additions.

At the same time we have computed the decomposition

    P(x) = (x − x_0) Q(x) + b_0,

where

    Q(x) = Σ_{k=0}^{n−1} b_{k+1} x^k.

If b_0 = 0, then x_0 is a root of P(x).
Horner's Method: Evaluating Polynomials Quickly, 3 of 4

Huh?!? Where did the expression come from? — Consider

    P(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0
         = (a_n x^{n−1} + a_{n−1} x^{n−2} + ... + a_1) x + a_0
         = ((a_n x^{n−2} + a_{n−1} x^{n−3} + ...) x + a_1) x + a_0
         = (...((a_n x + a_{n−1}) x + ...) x + a_1) x + a_0

with (n − 1) nested parentheses; the innermost expression a_n x + a_{n−1}, evaluated at x_0, is b_{n−1}.

Horner's method is "simply" the computation of this parenthesized expression from the inside out...
Horner's Method: Evaluating Polynomials Quickly, 4 of 4

Now, if we need to compute P′(x_0) we have

    P′(x)|_{x=x_0} = [(x − x_0) Q′(x) + Q(x)]|_{x=x_0} = Q(x_0),

which we can compute (again using Horner's method) in (n − 1) multiplications and (n − 1) additions.

Proof? We really ought to prove that Horner's method works. It basically boils down to lots of algebra which shows that the coefficients of P(x) and (x − x_0)Q(x) + b_0 are the same...

A couple of examples may be more instructive...
Example #1: Horner's Method

For P(x) = x⁴ − x³ + x² + x − 1, compute P(5):

    x_0 = 5 | a_4 = 1   a_3 = −1      a_2 = 1         a_1 = 1          a_0 = −1
            |           b_4 x_0 = 5   b_3 x_0 = 20    b_2 x_0 = 105    b_1 x_0 = 530
            | b_4 = 1   b_3 = 4       b_2 = 21        b_1 = 106        b_0 = 529

Hence, P(5) = 529, and

    P(x) = (x − 5)(x³ + 4x² + 21x + 106) + 529

Similarly we get P′(5) = Q(5) = 436:

    x_0 = 5 | a_3 = 1   a_2 = 4       a_1 = 21        a_0 = 106
            |           b_3 x_0 = 5   b_2 x_0 = 45    b_1 x_0 = 330
            | b_3 = 1   b_2 = 9       b_1 = 66        b_0 = 436
Algorithm: Horner's Method

Algorithm: Horner's Method

Input: Degree n; coefficients a_0, a_1, ..., a_n; x_0
Output: y = P(x_0), z = P′(x_0).

1. Set y = a_n, z = a_n
2. For j = (n−1), (n−2), ..., 1
       Set y = x_0 y + a_j, z = x_0 z + y
3. Set y = x_0 y + a_0
4. Output (y, z)
5. End program
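The algorithm above translates directly into Python; the example values below are those of Example #1 in these notes (P(x) = x⁴ − x³ + x² + x − 1 at x_0 = 5, where P(5) = 529 and P′(5) = 436).

```python
# One pass produces both y = P(x0) and z = P'(x0) = Q(x0), as in the
# algorithm above.
def horner(a, x0):
    """a = [a_n, a_{n-1}, ..., a_1, a_0], highest degree first."""
    y = a[0]               # y accumulates P(x0)
    z = a[0]               # z accumulates Q(x0) = P'(x0)
    for aj in a[1:-1]:     # j = n-1, ..., 1
        y = x0 * y + aj
        z = x0 * z + y
    y = x0 * y + a[-1]     # final step adds a_0 to y only
    return y, z

y, z = horner([1, -1, 1, 1, -1], 5)
print(y, z)  # -> 529 436
```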
Deflation — Finding All the Zeros of a Polynomial

If we are solving our current favorite problem

    P(x) = 0,   P(x) a polynomial of degree n,

and we are using Horner's method to compute P(x_i) and P′(x_i), then after N iterations x_N is an approximation to one of the roots of P(x) = 0.

We have

    P(x) = (x − x_N) Q(x) + b_0,   b_0 ≈ 0

Let r̂_1 = x_N be the first root, and Q_1(x) = Q(x).

We can now find a second root by applying Newton's method to Q_1(x).
Deflation — Finding All the Zeros of a Polynomial

After some number of iterations of Newton's method we have

    Q_1(x) = (x − r̂_2) Q_2(x) + b_0^{(2)},   b_0^{(2)} ≈ 0

If P(x) is an n-th degree polynomial with n real roots, we can apply this procedure (n − 2) times to find (n − 2) approximate zeros of P(x): r̂_1, r̂_2, ..., r̂_{n−2}, and a quadratic factor Q_{n−2}(x).

At this point we can solve Q_{n−2}(x) = 0 using the quadratic formula, and we have n roots of P(x) = 0.

This procedure is called Deflation.
Quality of Deflation

Now, the big question is: "are the approximate roots r̂_1, r̂_2, ..., r̂_n good approximations of the roots of P(x)???"

Unfortunately, sometimes, no.

In each step we solve the equation to some tolerance, i.e.

    |b_0^{(k)}| < tol

Even though we may solve to a tight tolerance (10⁻⁸), the errors accumulate and the inaccuracies increase iteration-by-iteration...

Question: Is deflation therefore useless???
Improving the Accuracy of Deflation

The problem with deflation is that the zeros of Q_k(x) are not good representatives of the zeros of P(x), especially for high k's. As k increases, the quality of the root r̂_k decreases.

Maybe there is a way to get all the zeros with the same quality?

The idea is quite simple... in each step of deflation, instead of just accepting r̂_k as a root of P(x), we re-run Newton's method on the full polynomial P(x), with r̂_k as the starting point — a couple of Newton iterations should quickly converge to the root of the full polynomial.
Improved Deflation — Algorithm Outline

Algorithm Outline: Improved Deflation

1. Apply Newton's method to P(x) → r̂_1, Q_1(x).
2. For k = 2, 3, ..., (n − 2), do 3–4:
3.   Apply Newton's method to Q_{k−1}(x) → r̂*_k, Q*_k(x).
4.   Apply Newton's method to P(x) with r̂*_k as the initial point → r̂_k.
     Apply Horner's method to Q_{k−1}(x) with x = r̂_k → Q_k(x).
5. Use the quadratic formula on Q_{n−2}(x) to get the two remaining roots.

Note: "Inside" Newton's method, the evaluations of polynomials and their derivatives are also performed using Horner's method.
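The outline above can be sketched in Python as follows. This is an illustration, not code from the notes; it assumes all roots are real and simple, uses a plain Newton iteration with a fixed step budget in place of a tolerance test, and stores coefficients highest degree first.

```python
def horner(a, x0):
    """Synthetic division: return (P(x0), P'(x0), Q) where
    P(x) = (x - x0) Q(x) + P(x0); a and Q are highest-degree-first."""
    b = [a[0]]
    for ak in a[1:]:
        b.append(x0 * b[-1] + ak)
    z = b[0]                         # evaluate Q(x0) = P'(x0) by a second pass
    for bk in b[1:-1]:
        z = x0 * z + bk
    return b[-1], z, b[:-1]

def newton(a, x, steps=50):
    """Plain Newton iteration on the polynomial with coefficients a."""
    for _ in range(steps):
        p, dp, _ = horner(a, x)
        if dp == 0.0:
            break
        x -= p / dp
    return x

def improved_deflation(a):
    """Improved deflation: each deflated root is re-polished on the full P."""
    roots, q, x = [], list(a), 0.0   # 0.0 is an arbitrary starting guess
    while len(q) > 3:                # deflate until a quadratic factor remains
        r = newton(a, newton(q, x))  # root of Q_{k-1}, polished on the full P
        roots.append(r)
        _, _, q = horner(q, r)       # synthetic division by (x - r)
        x = r
    A, B, C = q                      # finish with the quadratic formula
    disc = (B * B - 4 * A * C) ** 0.5
    roots += [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
    return roots

# Small Wilkinson-type test: P(x) = (x-1)(x-2)(x-3)(x-4)
print(sorted(improved_deflation([1.0, -10.0, 35.0, -50.0, 24.0])))
```

For this quartic the printed roots come out close to [1.0, 2.0, 3.0, 4.0].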
Deflation & Improvement — Wilkinson Polynomials

The Wilkinson Polynomials

    P_n^W(x) = Π_{k=1}^{n} (x − k)

have the roots {1, 2, ..., n}, but provide surprisingly tough numerical root-finding problems. (Additional details in Math 543.)

In the next few slides we show the results of Deflation and Improved Deflation applied to Wilkinson polynomials of degree 9, 10, 12, and 13.
Deflation & Improvement — P_9^W(x) and P_10^W(x)

[Two plots: relative error of the computed roots, "Deflation" vs. "Improved".]

Figure: [Left] The result of the two algorithms on the Wilkinson polynomial of degree 9; in this case the roots are computed so that |b_0^{(k)}| < 10^{−6}. [Right] The result of the two algorithms on the Wilkinson polynomial of degree 10; in this case the roots are computed so that |b_0^{(k)}| < 10^{−6}. In both cases the lower line corresponds to improved deflation and we see that we get an improvement in the relative error of several orders of magnitude.
Deflation & Improvement — P_12^W(x) and P_13^W(x)

[Two plots: relative error of the computed roots, "Deflation" vs. "Improved".]

Figure: [Left] The result of the two algorithms on the Wilkinson polynomial of degree 12; in this case the roots are computed so that |b_0^{(k)}| < 10^{−4}. [Right] The result of the two algorithms on the Wilkinson polynomial of degree 13; in this case the roots are computed so that |b_0^{(k)}| < 10^{−3}. In both cases the lower line corresponds to improved deflation and we see that we get an improvement in the relative error of several orders of magnitude.
What About Complex Roots???

One interesting / annoying feature of polynomials with real coefficients is that they may have complex roots: e.g. P(x) = x² + 1 has the roots {−i, i}, where by definition i = √−1.

If the initial approximation given to Newton's method is real, all the successive iterates will be real... which means we will not find complex roots.

One way to overcome this is to start with a complex initial approximation and do all the computations in complex arithmetic.

Another solution is Müller's Method...
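A minimal illustration of the first remedy (a sketch, not from the notes): Newton's method for P(x) = x² + 1, run in Python's complex arithmetic.

```python
# Newton's method; with a complex starting value all arithmetic is complex.
def newton(f, df, x, steps=30):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# A real start would stay real forever and never find a root of x^2 + 1;
# the complex start 0.5 + 0.5j converges to the root i.
root = newton(lambda x: x * x + 1, lambda x: 2 * x, 0.5 + 0.5j)
print(root)
```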
Müller's Method

Müller's method is an extension of the Secant method...

Recall that the secant method uses two points, x_k and x_{k−1}, and the function values at those two points, f(x_k) and f(x_{k−1}). The zero-crossing of the linear interpolant (the secant line) is used as the next iterate x_{k+1}.

Müller's method takes the next logical step: it uses three points, x_k, x_{k−1} and x_{k−2}, and the function values at those points, f(x_k), f(x_{k−1}) and f(x_{k−2}); a second degree polynomial fitting these three points is found, and its zero-crossing is the next iterate x_{k+1}.

Next slide: f(x) = x⁴ − 3x³ − 1, x_{k−2} = 1.5, x_{k−1} = 2.5, x_k = 3.5.
Müller's Method — Illustration, f(x) = x⁴ − 3x³ − 1

[Figure: f(x) on [0, 4], the quadratic through the three points x_{k−2} = 1.5, x_{k−1} = 2.5, x_k = 3.5, and its zero-crossing.]
Müller's Method — Fitting the Quadratic Polynomial

We consider the quadratic polynomial

    m(x) = a(x − x_k)² + b(x − x_k) + c

At the three fitting points we get

    f(x_{k−2}) = a(x_{k−2} − x_k)² + b(x_{k−2} − x_k) + c
    f(x_{k−1}) = a(x_{k−1} − x_k)² + b(x_{k−1} − x_k) + c
    f(x_k)     = c

We can solve for a, b, and c:

    a = [ (x_{k−1} − x_k)(f(x_{k−2}) − f(x_k)) − (x_{k−2} − x_k)(f(x_{k−1}) − f(x_k)) ]
        / [ (x_{k−2} − x_k)(x_{k−1} − x_k)(x_{k−2} − x_{k−1}) ]

    b = [ (x_{k−2} − x_k)²(f(x_{k−1}) − f(x_k)) − (x_{k−1} − x_k)²(f(x_{k−2}) − f(x_k)) ]
        / [ (x_{k−2} − x_k)(x_{k−1} − x_k)(x_{k−2} − x_{k−1}) ]

    c = f(x_k)
Müller's Method — Identifying the Zero

We now have a quadratic equation for (x − x_k), which gives us two possibilities for x_{k+1}:

    x_{k+1} − x_k = −2c / ( b ± √(b² − 4ac) )

In Müller's method we select

    x_{k+1} = x_k − 2c / ( b + sign(b) √(b² − 4ac) )

We are maximizing the (absolute) size of the denominator, hence we select the root closest to x_k.

Note that if b² − 4ac < 0 then we automatically get complex roots.
Müller's Method — Algorithm

Algorithm: Müller's Method

Input: x_0, x_1, x_2; tolerance tol; max iterations N_0
Output: Approximate solution p, or failure message.

1. Set h_1 = (x_1 − x_0), h_2 = (x_2 − x_1), δ_1 = [f(x_1) − f(x_0)]/h_1,
   δ_2 = [f(x_2) − f(x_1)]/h_2, d = (δ_2 − δ_1)/(h_2 + h_1), j = 3.
2. While j ≤ N_0, do 3–7:
3.   b = δ_2 + h_2 d, D = √(b² − 4 f(x_2) d)   (possibly complex)
4.   If |b − D| < |b + D| then set E = b + D, else set E = b − D
5.   Set h = −2 f(x_2)/E, p = x_2 + h
6.   If |h| < tol then output p; stop program
7.   Set x_0 = x_1, x_1 = x_2, x_2 = p, h_1 = (x_1 − x_0), h_2 = (x_2 − x_1),
     δ_1 = [f(x_1) − f(x_0)]/h_1, δ_2 = [f(x_2) − f(x_1)]/h_2,
     d = (δ_2 − δ_1)/(h_2 + h_1), j = j + 1
8. Output: "Müller's Method failed after N_0 iterations."
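The algorithm above can be transcribed nearly verbatim into Python (a sketch); cmath.sqrt lets the discriminant D go complex, which is exactly how real starting points can lead to complex roots.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, n0=100):
    h1, h2 = x1 - x0, x2 - x1
    d1, d2 = (f(x1) - f(x0)) / h1, (f(x2) - f(x1)) / h2
    d = (d2 - d1) / (h2 + h1)
    for _ in range(n0):
        b = d2 + h2 * d
        D = cmath.sqrt(b * b - 4.0 * f(x2) * d)   # possibly complex
        # pick E = b +/- D so that |E| is maximized (step 4)
        E = b + D if abs(b - D) < abs(b + D) else b - D
        h = -2.0 * f(x2) / E
        p = x2 + h
        if abs(h) < tol:
            return p
        x0, x1, x2 = x1, x2, p
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f(x1) - f(x0)) / h1, (f(x2) - f(x1)) / h2
        d = (d2 - d1) / (h2 + h1)
    raise RuntimeError("Muller's Method failed after N_0 iterations.")

# Real starting points can still find a complex root of x^2 + 1:
root = muller(lambda x: x * x + 1.0, 0.5, 1.0, 1.5)
print(root)  # a root close to i or -i
```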
Now We Know "Everything" About Solving f(x) = 0 !?

Let's recap... Things to remember...

The relation between root finding (f(x) = 0) and fixed points (g(x) = x).

Key algorithms for root finding: Bisection, the Secant Method, and Newton's Method. — Know what they are (the updates), how to start (one or two points? bracketing or not bracketing the root?), whether the method can break, and whether breakage can be fixed. Convergence properties.

Also, know the mechanics of the Regula Falsi method, and understand why it can run into trouble.

Fixed point iteration: Under what conditions does FP-iteration converge for all starting values in the interval?
Accelerating Convergence
Zeros of Polynomials
Deﬂation, M¨ uller’s Method
Polynomial Approximation
Deﬂation: Finding All the Zeros of a Polynomial
M¨ uller’s Method — Finding Complex Roots
Recap, continued...
Basic error analysis: order α, asymptotic error constant λ. —
Which one has the most impact on convergence? Convergence
rate for general ﬁxed point iterations?
Multiplicity of zeros: What does it mean? How do we use this
knowledge to “help” Newton’s method when we’re looking for a
zero of high multiplicity?
Convergence acceleration: Aitken's ∆2 method. Steffensen's
method.
Zeros of polynomials: Horner's method, Deflation (with
improvement), Müller's method.
Fundamentals
Moving Beyond Taylor Polynomials
Lagrange Interpolating Polynomials
Neville’s Method
New Favorite Problem:
Interpolation and Polynomial Approximation
Weierstrass Approximation Theorem
The following theorem is the basis for polynomial approximation:
Theorem (Weierstrass Approximation Theorem)
Suppose f ∈ C[a, b]. Then ∀ε > 0 ∃ a polynomial P(x) :
|f (x) − P(x)| < ε, ∀x ∈ [a, b].
Note: The bound is uniform, i.e. valid for all x in the interval.
Note: The theorem says nothing about how to find the
polynomial, or about its degree.
Illustrated: Weierstrass Approximation Theorem
Figure: The Weierstrass Approximation Theorem guarantees that we (maybe with
substantial work) can find a polynomial which fits into the "tube" between f − ε
and f + ε around the function f, no matter how thin we make the tube.
Candidates: the Taylor Polynomials???
Natural Question:
Are our old friends, the Taylor Polynomials, good candidates
for polynomial interpolation?
Answer:
No. The Taylor expansion works very hard to be accurate in
the neighborhood of one point. But we want to ﬁt data at
many points (in an extended interval).
[Next slide: The approximation is great near the expansion point
x0 = 0, but gets progressively worse as we move further away from
that point, even for the higher-degree approximations.]
Taylor Approximation of e^x on the Interval [0, 3]

Figure: e^x on [0, 3] together with its Taylor polynomials P_0(x), . . . , P_5(x)
expanded about x0 = 0.
Taylor Approximation of f (x) = 1/x about x = 1
Let

f (x) = 1/x.

The Taylor expansion about x0 = 1 is

P_n(x) = Σ_{k=0}^{n} (−1)^k (x − 1)^k.

Note: f (3) = 1/3, but P_n(3) satisfies:

n        0    1    2    3    4    5    6
P_n(3)   1   −1    3   −5   11  −21   43

The Taylor series only converges for |x − 1| < 1 by the ratio test
(a geometric series). Thus it is not valid at x = 3.
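The divergence is easy to reproduce numerically; a small sketch (the helper name is hypothetical) that accumulates the partial sums P_n(x):

```python
def taylor_partial_sums_inv_x(x, nmax):
    """Partial sums P_n(x) = sum_{k=0}^{n} (-1)^k (x - 1)^k of the
    Taylor series of f(x) = 1/x about x0 = 1."""
    sums, s = [], 0.0
    for k in range(nmax + 1):
        s += (-1) ** k * (x - 1) ** k
        sums.append(s)
    return sums
```

At x = 3 each new term is (−2)^k, so the partial sums oscillate with growing magnitude instead of approaching 1/3.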
Lookahead: Polynomial Approximation
Clearly, Taylor polynomials are not well suited for approximating a
function over an extended interval.
We are going to look at the following:
• Lagrange polynomials — Neville’s Method. [This Lecture]
• Newton’s divided diﬀerences.
• Hermite interpolation.
• Cubic splines — Piecewise polynomial approximation.
• (Parametric curves)
• (Bézier curves — used in e.g. computer graphics)
Interpolation: Lagrange Polynomials
Idea: Instead of working hard at one point, we will prescribe
a number of points through which the polynomial must pass.

As warmup we will define a function that passes through the
points (x0, f (x0)) and (x1, f (x1)). First, let's define

L0(x) = (x − x1)/(x0 − x1),   L1(x) = (x − x0)/(x1 − x0),

and then define the interpolating polynomial

P(x) = L0(x) f (x0) + L1(x) f (x1),

then P(x0) = f (x0), and P(x1) = f (x1).

— P(x) is the unique linear polynomial passing through
(x0, f (x0)) and (x1, f (x1)).
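A minimal sketch of this two-point interpolant in Python (the function name is hypothetical):

```python
def linear_lagrange(x0, f0, x1, f1, x):
    """The unique linear polynomial through (x0, f0) and (x1, f1),
    built from the basis functions L0 and L1."""
    L0 = (x - x1) / (x0 - x1)
    L1 = (x - x0) / (x1 - x0)
    return L0 * f0 + L1 * f1
```

At x = x0 the factor L0 is 1 and L1 is 0 (and vice versa at x1), so the interpolant reproduces both data values exactly.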
An n-degree polynomial passing through n + 1 points

We are going to construct a polynomial passing through the points

(x0, f (x0)), (x1, f (x1)), (x2, f (x2)), . . . , (xn, f (xn)).

We define L_{n,k}(x), the Lagrange coefficients:

L_{n,k}(x) = Π_{i=0, i≠k}^{n} (x − xi)/(xk − xi)
  = [(x − x0)/(xk − x0)] · · · [(x − x_{k−1})/(xk − x_{k−1})]
    · [(x − x_{k+1})/(xk − x_{k+1})] · · · [(x − xn)/(xk − xn)],

which have the properties

L_{n,k}(xk) = 1;   L_{n,k}(xi) = 0, ∀i ≠ k.
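The basis polynomials are straightforward to evaluate directly from the product formula; a sketch (the helper name `lagrange_basis` is hypothetical) that also illustrates the two properties:

```python
def lagrange_basis(xs, k, x):
    """Evaluate L_{n,k}(x) = product over i != k of
    (x - xs[i]) / (xs[k] - xs[i])."""
    prod = 1.0
    for i, xi in enumerate(xs):
        if i != k:
            prod *= (x - xi) / (xs[k] - xi)
    return prod
```

For the nodes xs = [0, 1, . . . , 6] of the figure on the next slide, `lagrange_basis(xs, 3, x)` is 1 at the node x = 3 and 0 at every other node.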
Example of L_{n,k}(x)

Figure: L_{6,3}(x), for the points xi = i, i = 0, . . . , 6.
The n-th Lagrange Interpolating Polynomial

We use L_{n,k}(x), k = 0, . . . , n as building blocks for the Lagrange
interpolating polynomial:

P(x) = Σ_{k=0}^{n} f (xk) L_{n,k}(x),

which has the property

P(xi) = f (xi), ∀i = 0, . . . , n.

This is the unique polynomial of degree at most n passing through
the points (xi, f (xi)), i = 0, . . . , n.
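Summing the basis functions gives a direct (if not the most efficient) evaluator for P(x); a sketch with a hypothetical function name:

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate P(x) = sum_k fs[k] * L_{n,k}(x), the interpolating
    polynomial through the points (xs[k], fs[k])."""
    total = 0.0
    for k in range(len(xs)):
        Lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                Lk *= (x - xi) / (xs[k] - xi)
        total += fs[k] * Lk
    return total
```

Because the degree-at-most-n interpolant is unique, interpolating f (x) = x^2 at any three points reproduces x^2 exactly, everywhere.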
Error bound for the Lagrange interpolating polynomial
Suppose xi, i = 0, . . . , n are distinct numbers in the interval [a, b],
and f ∈ C^{n+1}[a, b]. Then ∀x ∈ [a, b] ∃ ξ(x) ∈ (a, b) so that:

f (x) = P_Lagrange(x) + [f^{(n+1)}(ξ(x)) / (n + 1)!] Π_{i=0}^{n} (x − xi),

where P_Lagrange(x) is the n-th Lagrange interpolating polynomial.

Compare with the error formula for Taylor polynomials:

f (x) = P_Taylor(x) + [f^{(n+1)}(ξ(x)) / (n + 1)!] (x − x0)^{n+1}.

Problem: Applying the error term may be difficult...
The error formula is important as Lagrange polynomials are used
for numerical differentiation and integration methods.
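To see the error formula in action, here is a small numerical check (helper names are hypothetical) for f (x) = e^x on three nodes in [0, 1]: since |f'''(ξ)| ≤ e there, the actual interpolation error must stay below (e/3!) · |Π (x − xi)|:

```python
import math

def node_poly(xs, x):
    """The non-constant error factor prod_i (x - xs[i])."""
    p = 1.0
    for xi in xs:
        p *= (x - xi)
    return p

def lagrange_eval(xs, fs, x):
    """Lagrange form of the interpolating polynomial through (xs[k], fs[k])."""
    total = 0.0
    for k in range(len(xs)):
        Lk = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                Lk *= (x - xi) / (xs[k] - xi)
        total += fs[k] * Lk
    return total

xs = [0.0, 0.5, 1.0]
fs = [math.exp(t) for t in xs]
x = 0.25
actual = abs(math.exp(x) - lagrange_eval(xs, fs, x))
# f'''(xi) = e^xi <= e on [0, 1], so |error| <= e / 3! * |prod (x - x_i)|
bound = math.e / math.factorial(3) * abs(node_poly(xs, x))
```

Running this gives an actual error of about 0.012 against a bound of about 0.021, so the bound holds but is not tight.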
The Lagrange and Taylor Error Terms
Just to get a feeling for the non-constant part of the error terms in
the Lagrange and Taylor approximations, we plot those parts on
the interval [0, 4] with interpolation points xi = i, i = 0, 1, . . . , 4:

Figure: [Left] The non-constant error term for the Lagrange interpolation
oscillates in the interval [−4, 4] (and takes the value zero at the node points
xk), and [Right] the non-constant error term for the Taylor extrapolation grows
over the range [0, 1024].
Practical Problems
Applying (estimating) the error term is diﬃcult.
The degree of the polynomial needed for some desired accuracy is
not known until after cumbersome calculations — checking the
error term.
If we want to increase the degree of the polynomial (to e.g.
n + 1) the previous calculations are not of any help...
Building block for a fix: Let f be a function defined at x0, . . . , xn,
and suppose that m1, m2, . . . , mk are k (< n) distinct integers,
with 0 ≤ mi ≤ n ∀i. The Lagrange polynomial that agrees with
f (x) at the k points x_{m1}, x_{m2}, . . . , x_{mk} is denoted
P_{m1,m2,...,mk}(x).

Note: {m1, m2, . . . , mk} ⊂ {0, 1, . . . , n}.
Increasing the degree of the Lagrange Interpolation
Theorem

Let f be defined at x0, x1, . . . , xk, and let xi and xj be two distinct
points in this set. Then

P(x) = [(x − xj) P_{0,...,j−1,j+1,...,k}(x) − (x − xi) P_{0,...,i−1,i+1,...,k}(x)] / (xi − xj)

is the k-th Lagrange polynomial that interpolates f at the k + 1
points x0, . . . , xk.
Recursive Generation of Higher Degree Lagrange Interpolating Polynomials
x0   P_0
x1   P_1   P_{0,1}
x2   P_2   P_{1,2}   P_{0,1,2}
x3   P_3   P_{2,3}   P_{1,2,3}   P_{0,1,2,3}
x4   P_4   P_{3,4}   P_{2,3,4}   P_{1,2,3,4}   P_{0,1,2,3,4}
Neville’s Method
The notation in the previous table gets cumbersome... We
introduce the notation Q_{Last,Degree} = P_{Last−Degree,...,Last},
and the table becomes:

x0   Q_{0,0}
x1   Q_{1,0}   Q_{1,1}
x2   Q_{2,0}   Q_{2,1}   Q_{2,2}
x3   Q_{3,0}   Q_{3,1}   Q_{3,2}   Q_{3,3}
x4   Q_{4,0}   Q_{4,1}   Q_{4,2}   Q_{4,3}   Q_{4,4}

Compare with the old notation:

x0   P_0
x1   P_1   P_{0,1}
x2   P_2   P_{1,2}   P_{0,1,2}
x3   P_3   P_{2,3}   P_{1,2,3}   P_{0,1,2,3}
x4   P_4   P_{3,4}   P_{2,3,4}   P_{1,2,3,4}   P_{0,1,2,3,4}
Algorithm: Neville’s Method — Iterated Interpolation
Algorithm: Neville's Method

To evaluate the polynomial that interpolates the n + 1 points
(xi, f (xi)), i = 0, . . . , n at the point x:

1. Initialize Q_{i,0} = f (xi).
2. FOR i = 1 : n
       FOR j = 1 : i
           Q_{i,j} = [(x − x_{i−j}) Q_{i,j−1} − (x − xi) Q_{i−1,j−1}] / (xi − x_{i−j})
       END
   END
3. Output the Q-table.
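The double loop above maps directly onto a small Python sketch (the function name `neville` is hypothetical) that returns the whole lower-triangular Q-table; the best approximation is the bottom-right entry:

```python
def neville(xs, fs, x):
    """Evaluate the interpolating polynomial through (xs[i], fs[i])
    at x using Neville's method; returns the full Q-table."""
    n = len(xs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][0] = fs[i]          # step 1: zeroth column is the data
    for i in range(1, n):
        for j in range(1, i + 1):
            # step 2: combine two lower-degree interpolants
            Q[i][j] = ((x - xs[i - j]) * Q[i][j - 1]
                       - (x - xs[i]) * Q[i - 1][j - 1]) / (xs[i] - xs[i - j])
    return Q
```

For f (x) = x^2 sampled at 0, 1, 2, the entry Q[2][2] evaluated at x = 1.5 recovers 1.5^2 = 2.25, since the method reproduces the unique quadratic through the data.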
Joe Mahaﬀy, mahaffy@math.sdsu.edu #4: Solutions of Equations in One Variable — (58/58)
Accelerating Convergence Zeros of Polynomials Deﬂation, M¨ller’s Method u Polynomial Approximation
Outline
1 Accelerating Convergence Review Aitken’s ∆2 Method Steﬀensen’s Method Zeros of Polynomials Fundamentals Horner’s Method Deﬂation, M¨ller’s Method u Deﬂation: Finding All the Zeros of a Polynomial M¨ller’s Method — Finding Complex Roots u Polynomial Approximation Fundamentals Moving Beyond Taylor Polynomials Lagrange Interpolating Polynomials Neville’s Method
2
3
4
Joe Mahaﬀy, mahaffy@math.sdsu.edu
#4: Solutions of Equations in One Variable
— (2/58)
Accelerating Convergence Zeros of Polynomials Deﬂation, M¨ller’s Method u Polynomial Approximation
Review Aitken’s ∆2 Method Steﬀensen’s Method
Introduction
“It is rare to have the luxury of quadratic convergence.” (BurdenFaires, p.83) There are a number of methods for squeezing faster convergence out of an already computed sequence of numbers. We here explore one method which seems the have been around since the beginning of numerical analysis... Aitken’s ∆2 method. It can be used to accelerate convergence of a sequence that is linearly convergent, regardless of its origin or application. A review of modern extrapolation methods can be found in: “Practical Extrapolation Methods: Theory and Applications,” Avram Sidi, Number 10 in Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, June 2003. ISBN: 0521661595
Joe Mahaﬀy, mahaffy@math.sdsu.edu #4: Solutions of Equations in One Variable — (3/58)
mahaffy@math. Linear convergence means that α = 1. and λ < 1.Accelerating Convergence Zeros of Polynomials Deﬂation. Joe Mahaﬀy.edu #4: Solutions of Equations in One Variable — (4/58) . M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Recall: Convergence of a Sequence Deﬁnition Suppose the sequence {pn }∞ converges to p. If positive constants λ and α exists with pn+1 − p =λ n→∞ pn − pα lim then {pn }∞ converges to p of order α. with asymptotic error n=0 constant λ. with pn = p for all n=0 n.sdsu.
Accelerating Convergence Zeros of Polynomials Deﬂation. .e. and that pn+1 − p pn+2 − p ≈ ≈λ pn+1 − p pn − p This would indicate (pn+1 − p)2 ≈ (pn+2 − p)(pn − p) 2 pn+1 − 2pn+1 p + p2 ≈ pn+2 pn − (pn+2 + pn )p + p2 I/II (the asymptotic limit) We solve for p and get. Joe Mahaﬀy.edu #4: Solutions of Equations in One Variable — (5/58) .. sign(pn − p) = sign(pn+1 − p) = sign(pn+2 − p) = . M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Aitken’s ∆2 Method Assume {pn }∞ is a linearly convergent sequence with limit p. and the signs of the successive errors agree.. . i. assume we are far out into the tail of the sequence (n large). mahaffy@math. n=0 Further.sdsu.
The analysis needed to prove this is beyond the scope of this class. Sidi’s book..Accelerating Convergence Zeros of Polynomials Deﬂation.sdsu. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Aitken’s ∆2 Method We solve for p and get. Joe Mahaﬀy.g. p≈ 2 pn+2 pn − pn+1 pn+2 − 2pn+1 + pn II/II A little bit of algebraic manipulation put this into the classical Aitken form: pn = p = pn − ˆ (pn+1 − pn )2 pn+2 − 2pn+1 + pn Aitken’s ∆2 Method is based on the assumption that the pn we ˆ compute from pn+2 . mahaffy@math. pn+1 and pn is a better approximation to the real limit p..edu #4: Solutions of Equations in One Variable — (6/58) . see e.
Deﬁne the new sequences pn = pn − ˆ or (pn+1 − pn )2 . 1. . while the denominator is a second order central diﬀerence. Joe Mahaﬀy. mahaffy@math. . . M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Aitken’s ∆2 Method Given a sequence ﬁnite {pn }N or inﬁnite {qn }∞ sequence n=0 n=0 which converges linearly to some limit.sdsu. N − 2 The Recipe qn = qn − ˆ n = 0. .Accelerating Convergence Zeros of Polynomials Deﬂation.edu #4: Solutions of Equations in One Variable — (7/58) . ∞ The numerator is a forward diﬀerence squared. pn+2 − 2pn+1 + pn (qn+1 − qn )2 . . . qn+2 − 2qn+1 + qn n = 0. 1. . .
7 28010361467617 0.73 8050421371664 0.000000000000000 1.654289790497779 0. p11 needs p13 .7 44237354900557 0.857553215846393 0 .73 8992243027034 0.7390 42511328159 0. and p12 additionally needs p14 .73 6906294340474 0.73 8636096881655 0.685073357326045 0.7 50417761763761 0. Iteration 0 1 2 3 4 5 6 7 8 9 10 11 12 pn 0.7 22102425026708 0.Accelerating Convergence Zeros of Polynomials Deﬂation.7 01368773622757 0.73908 3333909684∗ Note: Bold digits are correct.edu #4: Solutions of Equations in One Variable — (8/58) . mahaffy@math. ˆ ˆ Joe Mahaﬀy.540302305868140 0 .7390 76383318956 0.73 3665164585231 0.7 63959682900654 0.73 5604740436347 pn ˆ 0 .73908 1177259563∗ 0.sdsu. p0 = 0. where the sequence is generated by the n=0 ﬁxed point iteration pn+1 = cos(pn ). M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Aitken’s ∆2 Method Example Consider the sequence {pn }∞ .7390 65949599941 0.73 1404042422510 0.7 93480358742566 0.73 8876582817136 0.000000000000000 0 .
” Joe Mahaﬀy. pn − p n→∞ We can combine Aitken’s method with ﬁxedpoint iteration in order to get a “ﬁxedpoint iteration on steroids. Then the Aitkenaccelerated sequence {ˆn }∞ converges fast to p in the p n=0 sense that lim pn − p ˆ = 0. and for n large enough we have (pn − p)(pn+1 − p) > 0. mahaffy@math.sdsu. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Faster Convergence for “AitkenSequences” Theorem (Convergence of Aitken∆2 Sequences) Suppose {pn }∞ is a sequence that converges linearly to the limit n=0 p.Accelerating Convergence Zeros of Polynomials Deﬂation.edu #4: Solutions of Equations in One Variable — (9/58) .
we can compute p0 = p0 − ˆ (p1 − p0 )2 p2 − 2p1 + p0 At this point we “restart” the ﬁxed point iteration with p0 = p0 . p1 and p2 . p5 = g (p4 ).g. . Once we have p0 .edu p4 = g (p3 ). mahaffy@math.. (p4 − p3 )2 p5 − 2p4 + p3 #4: Solutions of Equations in One Variable — (10/58) .sdsu. ˆ e.. p2 = g (p1 ). M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: FixedPoint Iteration on Steroids Suppose we have a ﬁxed point iteration: p0 . ˆ and compute p3 = p3 − ˆ Joe Mahaﬀy.Accelerating Convergence Zeros of Polynomials Deﬂation. p1 = g (p0 ). p3 = p0 .
or failure message. Output: Approximate solution p. p = p0 − (p1 − p0 )2 /(p2 − 2p1 + p0 ) 4. While i ≤ N0 do 36 3∗ Set p1 = g (p0 ). 1. output p 4b.Accelerating Convergence Zeros of Polynomials Deﬂation.sdsu. tolerance TOL.” Joe Mahaﬀy. Set i = i + 1 6.edu #4: Solutions of Equations in One Variable — (11/58) . p2 = g (p1 ). Output: “Failure after N0 iterations. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: The Quadratic. Waltz! Quadratic Convergence Algorithm: Steﬀensen’s Method Input: Initial approximation p0 . Set p0 = p 7. Set i = 1 2. maximum number of iterations N0 . mahaffy@math. ggA. If p − p0  < TOL then 4a. stop program 5.
sdsu. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Potential Breakage 3∗ If at some point p2 − 2p1 + p0 = 0 (which appears in the denominator).edu #4: Solutions of Equations in One Variable — (12/58) . but no derivative. In Newton’s method we compute one function value and one derivative in each iteration. mahaffy@math. Both Newton’s and Steﬀensen’s methods give quadratic convergence.Accelerating Convergence Zeros of Polynomials Deﬂation. In Steﬀensen’s method we have two function evaluations and a more complicated algebraic expression in each iteration. Joe Mahaﬀy. then we stop and select the current value of p2 as our approximate answer. It looks like we got something for (almost) nothing.
but no derivative.5). Joe Mahaﬀy. (see theorem2. b] (BF Theorem2. e.sdsu. b]. then we stop and select the current value of p2 as our approximate answer. In Newton’s method we compute one function value and one derivative in each iteration. Both Newton’s and Steﬀensen’s methods give quadratic convergence. mahaffy@math.14 in BurdenFaires).edu #4: Solutions of Equations in One Variable — (12/58) . f ∈ C 3 [a. Newton’s method “only” requires f ∈ C 2 [a. It looks like we got something for (almost) nothing. However.g. In Steﬀensen’s method we have two function evaluations and a more complicated algebraic expression in each iteration.Accelerating Convergence Zeros of Polynomials Deﬂation. the ﬁxed point function g must be 3 times continuously diﬀerentiable. in order the guarantee quadratic convergence for Steﬀensen’s method. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Potential Breakage 3∗ If at some point p2 − 2p1 + p0 = 0 (which appears in the denominator).
Accelerating Convergence Zeros of Polynomials Deﬂation. Newton’s Method.sdsu. mahaffy@math. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Example 1 of 2 Below we compare a Fixed Point iteration. and Steﬀensen’s Method for solving: f (x) = x 3 + 4x 2 − 10 = 0 or alternately.edu #4: Solutions of Equations in One Variable — (13/58) . pn+1 = g (pn ) = 10 pn + 4 Joe Mahaﬀy.
3652 1.edu #4: Solutions of Equations in One Variable — (14/58) .36523 g (pn ) 1.36523 1.36496 1.Accelerating Convergence Zeros of Polynomials Deﬂation.36496 1. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Example 2 of 2 Fixed Point Iteration i 0 1 2 3 4 5 pn 1.36526 1.36738 1.34840 1.34840 1. mahaffy@math.36523 Joe Mahaﬀy.sdsu.50000 1.36738 1.
mahaffy@math.50000 1.35587e09 Joe Mahaﬀy.sdsu.edu #4: Solutions of Equations in One Variable — (14/58) . M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Example 2 of 2 Newton’s Method i 0 1 2 xn 1.11226e04 1.36523 f (xn ) 1.36495 1.51600e01 3.Accelerating Convergence Zeros of Polynomials Deﬂation.
35587e09 Steﬀensen’s Method i 0 1 pn 1.sdsu.51600e01 3.edu #4: Solutions of Equations in One Variable — (14/58) .36527 p1 1.34840 1.Accelerating Convergence Zeros of Polynomials Deﬂation.80531e12 Joe Mahaﬀy. M¨ller’s Method u Polynomial Approximation Review Aitken’s ∆2 Method Steﬀensen’s Method Steﬀensen’s Method: Example 2 of 2 Newton’s Method i 0 1 2 xn 1.36495 1.36523 p 1.36523 p − p2  3. mahaffy@math.50000 1.36523 p2 1.36527 1.96903e05 2.11226e04 1.36523 f (xn ) 1.50000 1.36738 1.
f (x) = 0 for a special class of functions. mahaffy@math.sdsu. We’d like to use Newton’s method. Joe Mahaﬀy. an = 0 where the ai ’s are constants (either real.Accelerating Convergence Zeros of Polynomials Deﬂation. or complex) called the coeﬃcients of P. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Zeros of Polynomials Deﬁnition: Degree of a Polynomial A polynomial of degree n has the form P(x) = an x n + an−1 x n−1 + · · · + a1 x + a0 . hence being able to solve polynomial equations fast is valuable. so we need to compute P(x) and P ′ (x) as eﬃciently as possible.edu #4: Solutions of Equations in One Variable — (15/58) .e.) Polynomials are the basis for many approximation methods. Why look at polynomials? — We’ll be looking at the problem P(x) = 0 (i.
. then P(x) = 0 has at least one (possibly complex) root. The proof is surprisingly(?) diﬃcult and requires understanding of complex analysis.edu #4: Solutions of Equations in One Variable — (16/58) .Accelerating Convergence Zeros of Polynomials Deﬂation. We leave it as an exercise for the motivated student! Joe Mahaﬀy. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Fundamentals Theorem (The Fundamental Theorem of Algebra) If P(x) is a polynomial of degree n ≥ 1 with real or complex coeﬃcients. mahaffy@math.sdsu..
— mi are the multiplicities of the individual zeros. mk such that k mi = n and i=1 P(x) = an (x − x1 )m1 (x − x2 )m2 · · · (x − xk )mk — The collection of zeros is unique. . xk (possibly complex) and unique positive integers m1 . . . . . counting multiplicity. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Key Consequences of the Fundamental Theorem of Algebra Corollary If P(x) is a polynomial of degree n ≥ 1 with real or complex coeﬃcients then there exists unique constants x1 . .Accelerating Convergence Zeros of Polynomials Deﬂation. Joe Mahaﬀy. . mahaffy@math. x2 .sdsu. 1 of 2 — A polynomial of degree n has exactly n zeros. m2 .edu #4: Solutions of Equations in One Variable — (17/58) . .
k. . — If two polynomials of degree n agree at at least (n + 1) points. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Key Consequences of the Fundamental Theorem of Algebra 2 of 2 Corollary Let P(x) and Q(x) be polynomials of degree at most n. with k > n are distinct numbers with P(xi ) = Q(xi ) for i = 1. then they must be the same. .edu #4: Solutions of Equations in One Variable — (18/58) . . . . . mahaffy@math. 2.sdsu. x2 . If x1 . xk . Joe Mahaﬀy. . .Accelerating Convergence Zeros of Polynomials Deﬂation. then P(x) = Q(x) for all values of x.
Accelerating Convergence Zeros of Polynomials Deﬂation. Technique is called Horner’s Method or Synthetic Division. mahaffy@math. In 1819.sdsu.edu #4: Solutions of Equations in One Variable — (19/58) . Joe Mahaﬀy. William Horner developed an eﬃcient algorithm with n multiplications and n additions to evaluate P(x0 ). M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Horner’s Method: Evaluating Polynomials Quickly 1 of 4 Let P(x) = an x n + an−1 x n−1 + · · · + a1 x + a0 We want an eﬃcient method to solve P(x) = 0.
(n − 2). 1. P(x) = an x n + an−1 x n−1 + · · · + a1 x + a0 k = (n − 1). mahaffy@math.sdsu. bk = ak + bk+1 x0 . Joe Mahaﬀy.edu #4: Solutions of Equations in One Variable — (20/58) . let bn = an . . If b0 = 0. We have only used n multiplications and n additions. 0. then x0 is a root of P(x). At the same time we have computed the decomposition where P(x) = (x − x0 )Q(x) + b0 .Accelerating Convergence Zeros of Polynomials Deﬂation. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Horner’s Method: Evaluating Polynomials Quickly Assume If we are looking to evaluate P(x0 ) for any x0 . . . 2 of 3 then b0 = P(x0 ). . n−1 Q(x) = k=1 bk+1 x k .
Accelerating Convergence Zeros of Polynomials Deﬂation, M¨ller’s Method u Polynomial Approximation
Fundamentals Horner’s Method
Horner’s Method: Evaluating Polynomials Quickly Huh?!? Where did the expression come from? — Consider P(x) = an x n + an−1 x n−1 + · · · + a1 x + a0
3 of 4
= (. . . (( an x + an−1 )x + · · · )x + a1 )x + a0
n−1 bn−1
= ((an x n−2 + an−1 x n−3 + · · · )x + a1 )x + a0
= (an x n−1 + an−1 x n−2 + · · · + a1 )x + a0
Horner’s method is “simply” the computation of this parenthesized expression from the insideout...
Joe Mahaﬀy, mahaffy@math.sdsu.edu #4: Solutions of Equations in One Variable — (21/58)
Accelerating Convergence Zeros of Polynomials Deﬂation, M¨ller’s Method u Polynomial Approximation
Fundamentals Horner’s Method
Horner’s Method: Evaluating Polynomials Quickly
4 of 4
Now, if we need to compute P ′ (x0 ) we have P ′ (x)
x=x0
= (x − x0 )Q ′ (x) + Q(x)
= Q(x0 )
x=x0
Which we can compute (again using Horner’s method) in (n − 1) multiplications and (n − 1) additions. Proof? We really ought to prove that Horner’s method works. It basically boils down to lots of algebra which shows that the coeﬃcients of P(x) and (x − x0 )Q(x) + b0 are the same... A couple of examples may be more instructive...
Joe Mahaﬀy, mahaffy@math.sdsu.edu
#4: Solutions of Equations in One Variable
— (22/58)
Accelerating Convergence Zeros of Polynomials Deﬂation, M¨ller’s Method u Polynomial Approximation
Fundamentals Horner’s Method
Example#1: Horner’s Method
For P(x) = x 4 − x 3 + x 2 + x − 1, compute P(5): x0 = 5 a4 = 1 a3 = −1 b4 x0 a2 = 1 b3 x0 b4 = 1
a1 = 1 b2 x0
a0 = −1 b1 x0
Joe Mahaﬀy, mahaffy@math.sdsu.edu
#4: Solutions of Equations in One Variable
— (23/58)
edu #4: Solutions of Equations in One Variable — (23/58) .sdsu. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Example#1: Horner’s Method For P(x) = x 4 − x 3 + x 2 + x − 1. mahaffy@math.Accelerating Convergence Zeros of Polynomials Deﬂation. compute P(5): x0 = 5 a4 = 1 b4 = 1 a3 = −1 b4 x0 = 5 b3 = 4 a2 = 1 b3 x0 a1 = 1 b2 x0 a0 = −1 b1 x0 Joe Mahaﬀy.
compute P(5): x0 = 5 a4 = 1 b4 = 1 a3 = −1 b4 x0 = 5 b3 = 4 a2 = 1 b3 x0 = 20 b2 = 21 a1 = 1 b2 x0 a0 = −1 b1 x0 Joe Mahaﬀy.edu #4: Solutions of Equations in One Variable — (23/58) .Accelerating Convergence Zeros of Polynomials Deﬂation. mahaffy@math. M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Example#1: Horner’s Method For P(x) = x 4 − x 3 + x 2 + x − 1.sdsu.
M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Example#1: Horner’s Method For P(x) = x 4 − x 3 + x 2 + x − 1.sdsu. compute P(5): x0 = 5 a4 = 1 b4 = 1 a3 = −1 b4 x0 = 5 b3 = 4 a2 = 1 b3 x0 = 20 b2 = 21 a1 = 1 b2 x0 = 105 b1 = 106 a0 = −1 b1 x0 Joe Mahaﬀy. mahaffy@math.Accelerating Convergence Zeros of Polynomials Deﬂation.edu #4: Solutions of Equations in One Variable — (23/58) .
M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Example#1: Horner’s Method For P(x) = x 4 − x 3 + x 2 + x − 1. mahaffy@math.sdsu. and P(x) = (x − 5)(x 3 + 4x 2 + 21x + 106) + 529 Similarly we get P ′ (5) = Q(5) = 436 x0 = 5 a3 = 1 b3 = 1 a2 = 4 b3 x0 = 5 b2 = 9 a1 = 21 b2 x0 = 45 b1 = 66 a0 = 106 b1 x0 = 330 b0 = 436 — (23/58) Joe Mahaﬀy.edu #4: Solutions of Equations in One Variable . P(5) = 529.Accelerating Convergence Zeros of Polynomials Deﬂation. compute P(5): a4 = 1 b4 = 1 a3 = −1 b4 x0 = 5 b3 = 4 a2 = 1 b3 x0 = 20 b2 = 21 x0 = 5 a1 = 1 b2 x0 = 105 b1 = 106 a0 = −1 b1 x0 = 530 b0 = 529 Hence.
End program Joe Mahaﬀy.sdsu. . Set y = x0 y + a0 4. Output (y . . For j = (n − 1). . 1 Set y = x0 y + aj . . 1. coeﬃcients a0 . a1 . z = an 2.Accelerating Convergence Zeros of Polynomials Deﬂation. z) 5. . . z = P ′ (x0 ). x0 Output: y = P(x0 ). mahaffy@math. . (n − 2). Set y = an . M¨ller’s Method u Polynomial Approximation Fundamentals Horner’s Method Algorithm: Horner’s Method Algorithm: Horner’s Method Input: Degree n. . z = x0 z + y 3. an .edu #4: Solutions of Equations in One Variable — (24/58) .
xN is an approximation to one of the roots of P(x) = 0. b0 ≈ 0 Let ˆ1 = xN be the ﬁrst root. and Q1 (x) = Q(x). We have P(x) = (x − xN )Q(x) + b0 . P(x) a polynomial of degree n and we are using Horner’s method of computing P(xi ) and P ′ (xi ). M¨ller’s Method u Polynomial Approximation Deﬂation: Finding All the Zeros of a Polynomial M¨ller’s Method — Finding Complex Roots u Deﬂation — Finding All the Zeros of a Polynomial If we are solving our current favorite problem P(x) = 0.sdsu.Accelerating Convergence Zeros of Polynomials Deﬂation. Joe Mahaﬀy. mahaffy@math.edu #4: Solutions of Equations in One Variable — (25/58) . r We can now ﬁnd a second root by applying Newton’s method to Q1 (x). then after N iterations.
Accelerating Convergence Zeros of Polynomials Deﬂation. and a quadratic factor Qn−2 (x). Joe Mahaﬀy. This procedure is called Deﬂation. we can apply this procedure (n − 2) times to ﬁnd (n − 2) approximate zeros of P(x): ˆ1 . r (2) b0 ≈ 0 (2) If P(x) is an nth degree polynomial with n real roots. ˆn−2 . M¨ller’s Method u Polynomial Approximation Deﬂation: Finding All the Zeros of a Polynomial M¨ller’s Method — Finding Complex Roots u Deﬂation — Finding All the Zeros of a Polynomial After some number of iterations of Newton’s method we have Q1 (x) = (x − ˆ2 )Q2 (x) + b0 .sdsu.edu #4: Solutions of Equations in One Variable — (26/58) . mahaffy@math. . and we have n roots of P(x) = 0. ˆ2 . . . . r r r At this point we can solve Qn−2 (x) = 0 using the quadratic formula.
Quality of Deflation (27/58)

Now, the big question is: "Are the approximate roots r̂_1, r̂_2, ..., r̂_n good approximations of the roots of P(x)?"

Unfortunately, sometimes, no. In each step we solve the equation to some tolerance, i.e. |b_0^(k)| < tol. Even though we may solve to a tight tolerance (e.g. 10^(−8)), the errors accumulate and the inaccuracies increase iteration by iteration.

Question: Is deflation therefore useless?
Improving the Accuracy of Deflation (28/58)

The problem with deflation is that the zeros of Q_k(x) are not good representatives of the zeros of P(x). As k increases, the quality of the root r̂_k decreases, especially for high k's.

Maybe there is a way to get all the zeros with the same quality?
The idea is quite simple: in each step of deflation, instead of just accepting r̂_k as a root of P(x), we re-run Newton's method on the full polynomial P(x), with r̂_k as the starting point — a couple of Newton iterations should quickly converge to the root of the full polynomial.
Improved Deflation — Algorithm Outline (29/58)

1. Apply Newton's method to P(x) → r̂_1; apply Horner's method to P(x) with x = r̂_1 → Q_1(x).
2. For k = 2, ..., (n−2) do 3–5:
3.   Apply Newton's method to Q_(k−1)(x) → r̂_k.
4.   Apply Newton's method to P(x) with r̂_k as the initial point → r̂*_k.
5.   Apply Horner's method to Q_(k−1)(x) with x = r̂*_k → Q_k(x).
6. Use the quadratic formula on Q_(n−2)(x) to get the two remaining roots.

Note: "Inside" Newton's method, the evaluations of polynomials and their derivatives are also performed using Horner's method.
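The outline above can be sketched in plain Python. This is a hypothetical helper set, not code from the notes; here coefficients are listed highest degree first, and the "improved" step is the extra Newton polish against the full polynomial:

```python
def poly_val_der(a, x):
    """Horner evaluation of P(x) and P'(x); a = [a_n, ..., a_1, a_0]."""
    y, z = a[0], 0.0
    for c in a[1:]:
        z = z * x + y
        y = y * x + c
    return y, z

def newton_poly(a, x, tol=1e-13, max_iter=100):
    """Newton's method on the polynomial with coefficients a."""
    for _ in range(max_iter):
        y, z = poly_val_der(a, x)
        step = y / z
        x = x - step
        if abs(step) < tol:
            break
    return x

def deflate(a, r):
    """Synthetic division: divide P by (x - r), dropping the remainder."""
    q = [a[0]]
    for c in a[1:-1]:
        q.append(q[-1] * r + c)
    return q

def improved_deflation(a, x0=0.0):
    """All roots via deflation; each raw root is polished on the full P."""
    full, roots = list(a), []
    while len(a) > 3:                 # deflate until a quadratic remains
        r = newton_poly(a, x0)        # root of the deflated polynomial
        r = newton_poly(full, r)      # improved step: polish on full P
        roots.append(r)
        a = deflate(a, r)
    A, B, C = a                       # quadratic formula for the last two
    disc = (B * B - 4 * A * C) ** 0.5
    roots += [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
    return roots
```

For example, P(x) = (x−1)(x−2)(x−3)(x−4) = x⁴ − 10x³ + 35x² − 50x + 24 yields the roots 1, 2, 3, 4 (the last two from the quadratic formula).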
Deflation & Improvement: Wilkinson Polynomials (30/58)

The Wilkinson polynomials

  P_n^W(x) = ∏_{k=1}^{n} (x − k)

have the roots {1, 2, ..., n}, but provide surprisingly tough numerical root-finding problems. (Additional details in Math 543.)

In the next few slides we show the results of Deflation and Improved Deflation applied to Wilkinson polynomials of degree 9, 10, 12, and 13.
Deflation & Improvement: P_9^W(x) and P_10^W(x) (31/58)

[Figure: relative error of the computed roots, Deflation vs. Improved Deflation.]

Figure: [Left] The result of the two algorithms on the Wilkinson polynomial of degree 9; in this case the roots are computed so that |b_0^(k)| < 10^(−6). [Right] The result of the two algorithms on the Wilkinson polynomial of degree 10; in this case the roots are computed so that |b_0^(k)| < 10^(−6). In both cases the lower line corresponds to improved deflation, and we see an improvement in the relative error of several orders of magnitude.
Deflation & Improvement: P_12^W(x) and P_13^W(x) (32/58)

[Figure: relative error of the computed roots, Deflation vs. Improved Deflation.]

Figure: [Left] The result of the two algorithms on the Wilkinson polynomial of degree 12; in this case the roots are computed so that |b_0^(k)| < 10^(−3). [Right] The result of the two algorithms on the Wilkinson polynomial of degree 13; in this case the roots are computed so that |b_0^(k)| < 10^(−4). In both cases the lower line corresponds to improved deflation, and we see an improvement in the relative error of several orders of magnitude.
What About Complex Roots??? (33/58)

One interesting / annoying feature of polynomials with real coefficients is that they may have complex roots, e.g. P(x) = x² + 1 has the roots {−i, i}, where by definition i = √−1.

If the initial approximation given to Newton's method is real, all the successive iterates will be real, which means we will not find complex roots.

One way to overcome this is to start with a complex initial approximation and do all the computations in complex arithmetic. Another solution is Müller's Method.
Müller's Method (34/58)

Müller's method is an extension of the Secant method. Recall that the secant method uses two points x_k and x_{k−1} and the function values at those two points, f(x_k) and f(x_{k−1}). The zero-crossing of the linear interpolant (the secant line) is used as the next iterate x_{k+1}.

Müller's method takes the next logical step: it uses three points x_k, x_{k−1} and x_{k−2}, and the function values at those points, f(x_k), f(x_{k−1}) and f(x_{k−2}); a second-degree polynomial fitting these three points is found, and its zero-crossing is the next iterate x_{k+1}.

Next slide: f(x) = x⁴ − 3x³ − 1, with x_{k−2} = 1.5, x_{k−1} = 2.5, x_k = 3.5.
Müller's Method — Illustration (35/58)

[Figure: f(x) = x⁴ − 3x³ − 1 plotted on 0 ≤ x ≤ 4, with the quadratic through the three fitting points and its zero-crossing.]
Müller's Method — Fitting the Quadratic Polynomial (36/58)

We consider the quadratic polynomial

  m(x) = a(x − x_k)² + b(x − x_k) + c.

At the three fitting points we get

  f(x_{k−2}) = a(x_{k−2} − x_k)² + b(x_{k−2} − x_k) + c
  f(x_{k−1}) = a(x_{k−1} − x_k)² + b(x_{k−1} − x_k) + c
  f(x_k)     = c

We can solve for a, b, and c:

  a = [(x_{k−1} − x_k)(f(x_{k−2}) − f(x_k)) − (x_{k−2} − x_k)(f(x_{k−1}) − f(x_k))]
      / [(x_{k−2} − x_k)(x_{k−1} − x_k)(x_{k−2} − x_{k−1})]

  b = [(x_{k−2} − x_k)²(f(x_{k−1}) − f(x_k)) − (x_{k−1} − x_k)²(f(x_{k−2}) − f(x_k))]
      / [(x_{k−2} − x_k)(x_{k−1} − x_k)(x_{k−2} − x_{k−1})]

  c = f(x_k)
Müller's Method — Identifying the Zero (37/58)

We now have a quadratic equation for (x − x_k), which gives us two possibilities for x_{k+1}:

  x_{k+1} − x_k = −2c / (b ± √(b² − 4ac)).

In Müller's method we select

  x_{k+1} = x_k − 2c / (b + sign(b)·√(b² − 4ac)),

i.e. we maximize the (absolute) size of the denominator; hence we select the root closest to x_k.

Note that if b² − 4ac < 0 then we automatically get complex roots.
Algorithm: Müller's Method (38/58)

Input:  x_0, x_1, x_2; tolerance tol; max iterations N_0
Output: approximate solution p, or failure message

1. Set h_1 = (x_1 − x_0), h_2 = (x_2 − x_1),
   δ_1 = [f(x_1) − f(x_0)]/h_1, δ_2 = [f(x_2) − f(x_1)]/h_2,
   d = (δ_2 − δ_1)/(h_2 + h_1), j = 3.
2. While j ≤ N_0 do 3–7:
3.   Set b = δ_2 + h_2·d,  D = √(b² − 4f(x_2)d)   (possibly complex)
4.   If |b − D| < |b + D| then set E = b + D, else set E = b − D.
5.   Set h = −2f(x_2)/E, p = x_2 + h.
6.   If |h| < tol then output p; stop program.
7.   Set x_0 = x_1, x_1 = x_2, x_2 = p,
     h_1 = (x_1 − x_0), h_2 = (x_2 − x_1),
     δ_1 = [f(x_1) − f(x_0)]/h_1, δ_2 = [f(x_2) − f(x_1)]/h_2,
     d = (δ_2 − δ_1)/(h_2 + h_1), j = j + 1.
8. Output — "Müller's Method failed after N_0 iterations."
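A Python sketch of the algorithm above (function and variable names are my own). Using `cmath.sqrt` keeps the discriminant computation valid when b² − 4f(x_2)d < 0, so the iterates may become complex even from real starting points:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=100):
    """Müller's method, following the algorithm steps above."""
    h1, h2 = x1 - x0, x2 - x1
    d1 = (f(x1) - f(x0)) / h1                  # delta_1
    d2 = (f(x2) - f(x1)) / h2                  # delta_2
    d = (d2 - d1) / (h2 + h1)
    for _ in range(max_iter):
        b = d2 + h2 * d
        D = cmath.sqrt(b * b - 4 * f(x2) * d)  # possibly complex
        # Pick the sign that maximizes |denominator| (root closest to x2)
        E = b + D if abs(b - D) < abs(b + D) else b - D
        h = -2 * f(x2) / E
        p = x2 + h
        if abs(h) < tol:
            return p
        x0, x1, x2 = x1, x2, p                 # shift the three points
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        d = (d2 - d1) / (h2 + h1)
    raise RuntimeError("Müller's method failed after max_iter iterations")
```

Applied to f(x) = x² + 1 from real starting points, the quadratic fit is exact, its discriminant is negative, and the method lands on a complex root of the pair {−i, i}.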
Now We Know "Everything" About Solving f(x) = 0!? (39/58)

Let's recap... Things to remember...

Key algorithms for root finding: Bisection, Secant Method, and Newton's Method. Know what they are (the updates), how to start (one or two points? bracketing or not bracketing the root?), convergence properties, whether the method can break, and whether breakage can be fixed. Also, know the mechanics of the Regula Falsi method, and understand why it can run into trouble.

The relation between root finding (f(x) = 0) and fixed points (g(x) = x). Fixed-point iteration: under what conditions does FP-iteration converge for all starting values in the interval?
Recap, continued... (40/58)

Basic error analysis: order α, asymptotic error constant λ. Which one has the most impact on convergence? Convergence rate for general fixed-point iterations?

Multiplicity of zeros: What does it mean? How do we use this knowledge to "help" Newton's method when we're looking for a zero of high multiplicity?

Convergence acceleration: Aitken's Δ² method, Steffensen's Method.

Zeros of polynomials: Horner's method, Deflation (with improvement), Müller's method.
New Favorite Problem: Interpolation and Polynomial Approximation (41/58)
Weierstrass Approximation Theorem (42/58)

The following theorem is the basis for polynomial approximation:

Theorem (Weierstrass Approximation Theorem). Suppose f ∈ C[a, b]. Then ∀ε > 0 ∃ a polynomial P(x) such that

  |f(x) − P(x)| < ε,  ∀x ∈ [a, b].

Note: The bound is uniform, i.e. valid for all x in the interval.
Note: The theorem says nothing about how to find the polynomial, or about its order.
Illustrated: Weierstrass Approximation Theorem (43/58)

[Figure: a function f with the curves f + ε and f − ε forming a "tube" around it.]

Figure: the Weierstrass Approximation Theorem guarantees that we (maybe with substantial work) can find a polynomial which fits into the "tube" around the function f, no matter how thin we make the tube.
Candidates: the Taylor Polynomials??? (44/58)

Natural Question: Are our old friends, the Taylor polynomials, good candidates for polynomial interpolation?

Answer: No. The Taylor expansion works very hard to be accurate in the neighborhood of one point. But we want to fit data at many points (in an extended interval).

[Next slide: The approximation is great near the expansion point x_0 = 0, but gets progressively worse as we get further away from the point, even for the higher-degree approximations.]
Taylor Approximation of e^x on the Interval [0, 3] (45/58)

[Figure: e^x together with its Taylor polynomials P_0(x), ..., P_5(x) about x_0 = 0, plotted on the interval [0, 3].]
Taylor Approximation of f(x) = 1/x about x_0 = 1 (46/58)

The Taylor expansion of f(x) = 1/x about x_0 = 1 gives the partial sums

  P_n(x) = Σ_{k=0}^{n} (−1)^k (x − 1)^k.

Note: f(3) = 1/3, but P_n(3) satisfies:

  n        0    1    2    3    4    5    6
  P_n(3)   1   −1    3   −5   11  −21   43

The Taylor series only converges for |x − 1| < 1 by the ratio test (it is a geometric series); thus it is not valid for x = 3.
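The divergence of the partial sums at x = 3 is easy to verify numerically; a quick check of the table above:

```python
# Partial sums P_n(3) of the Taylor series of 1/x about x0 = 1,
# evaluated outside the interval of convergence |x - 1| < 1.
def P(n, x):
    return sum((-1) ** k * (x - 1) ** k for k in range(n + 1))

vals = [P(n, 3) for n in range(7)]
# The partial sums alternate in sign and grow in magnitude,
# instead of approaching f(3) = 1/3.
```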
Look-ahead: Polynomial Approximation (47/58)

Clearly, Taylor polynomials are not well suited for approximating a function over an extended interval. We are going to look at the following:

• Lagrange polynomials — Neville's Method. [This Lecture]
• Newton's divided differences.
• Hermite interpolation.
• Cubic splines — piecewise polynomial approximation.
• (Parametric curves)
• (Bézier curves — used in e.g. computer graphics)
Interpolation: Lagrange Polynomials (48/58)

Idea: Instead of working hard at one point, we will prescribe a number of points through which the polynomial must pass.

As warmup we will define a function that passes through the points (x_0, f(x_0)) and (x_1, f(x_1)). First, let's define

  L_0(x) = (x − x_1)/(x_0 − x_1),   L_1(x) = (x − x_0)/(x_1 − x_0),

and then define the interpolating polynomial

  P(x) = L_0(x)f(x_0) + L_1(x)f(x_1).

Then P(x_0) = f(x_0), and P(x_1) = f(x_1). P(x) is the unique linear polynomial passing through (x_0, f(x_0)) and (x_1, f(x_1)).
An n-degree polynomial passing through n + 1 points (49/58)

We are going to construct a polynomial passing through the points (x_0, f(x_0)), (x_1, f(x_1)), (x_2, f(x_2)), ..., (x_n, f(x_n)). We define L_{n,k}(x), the Lagrange coefficients:

  L_{n,k}(x) = ∏_{i=0, i≠k}^{n} (x − x_i)/(x_k − x_i)
             = [(x − x_0)/(x_k − x_0)] ··· [(x − x_{k−1})/(x_k − x_{k−1})]
               · [(x − x_{k+1})/(x_k − x_{k+1})] ··· [(x − x_n)/(x_k − x_n)],

which have the properties

  L_{n,k}(x_k) = 1,   L_{n,k}(x_i) = 0, ∀i ≠ k.
Example of L_{n,k}(x) (50/58)

[Figure: this is L_{6,3}(x), for the points x_i = i, i = 0, ..., 6; it equals 1 at x_3 = 3 and 0 at the other nodes.]
The nth Lagrange Interpolating Polynomial (51/58)

We use L_{n,k}(x), k = 0, ..., n as building blocks for the Lagrange interpolating polynomial:

  P(x) = Σ_{k=0}^{n} f(x_k) L_{n,k}(x),

which has the property

  P(x_i) = f(x_i), ∀i = 0, ..., n.

This is the unique polynomial passing through the points (x_i, f(x_i)), i = 0, ..., n.
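The construction above translates directly into code; a minimal Python sketch (the function names are my own):

```python
def lagrange_basis(xs, k, x):
    """L_{n,k}(x) = prod over i != k of (x - x_i) / (x_k - x_i)."""
    out = 1.0
    for i, xi in enumerate(xs):
        if i != k:
            out *= (x - xi) / (xs[k] - xi)
    return out

def lagrange_interp(xs, ys, x):
    """P(x) = sum over k of f(x_k) * L_{n,k}(x)."""
    return sum(ys[k] * lagrange_basis(xs, k, x) for k in range(len(xs)))
```

For three points taken from f(x) = x² + x + 1, the interpolant reproduces the quadratic exactly, and the basis functions satisfy L_{n,k}(x_k) = 1, L_{n,k}(x_i) = 0 for i ≠ k.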
Error bound for the Lagrange interpolating polynomial (52/58)

Suppose x_i, i = 0, ..., n are distinct numbers in the interval [a, b], and f ∈ C^{n+1}[a, b]. Then ∀x ∈ [a, b] ∃ξ(x) ∈ (a, b) so that

  f(x) = P_Lagrange(x) + [f^{(n+1)}(ξ(x)) / (n+1)!] ∏_{i=0}^{n} (x − x_i),

where P_Lagrange(x) is the nth Lagrange interpolating polynomial.

Compare with the error formula for Taylor polynomials:

  f(x) = P_Taylor(x) + [f^{(n+1)}(ξ(x)) / (n+1)!] (x − x_0)^{n+1}.

Problem: Applying the error term may be difficult. The error formula is important, as Lagrange polynomials are used in numerical differentiation and integration methods.
The Lagrange and Taylor Error Terms (53/58)

Just to get a feeling for the non-constant part of the error terms in the Lagrange and Taylor approximations, we plot those parts on the interval [0, 4], with interpolation points x_i = i, i = 0, ..., 4:

[Figure: left, ∏_{i=0}^{4} (x − x_i); right, (x − x_0)^5, both plotted on [0, 4].]

Figure: [Left] The non-constant error term for the Lagrange interpolation oscillates in the interval [−4, 4] (and takes the value zero at the node points x_k); [Right] the non-constant error term for the Taylor extrapolation grows in the interval [0, 1024].
Practical Problems (54/58)

• Applying (estimating) the error term is difficult.
• The degree of the polynomial needed for some desired accuracy is not known until after cumbersome calculations — checking the error term.
• If we want to increase the degree of the polynomial (to e.g. n + 1), the previous calculations are not of any help.

Building block for a fix: Let f be a function defined at x_0, x_1, ..., x_n, and suppose that m_1, m_2, ..., m_k are k (< n) distinct integers, with 0 ≤ m_i ≤ n ∀i. The Lagrange polynomial that agrees with f(x) at the k points x_{m_1}, x_{m_2}, ..., x_{m_k} is denoted P_{m_1, m_2, ..., m_k}(x).

Note: {m_1, m_2, ..., m_k} ⊂ {0, 1, ..., n}.
Increasing the degree of the Lagrange Interpolation (55/58)

Theorem. Let f be defined at x_0, x_1, ..., x_k, and let x_i and x_j be two distinct points in this set. Then

  P(x) = [(x − x_j) P_{0,...,j−1,j+1,...,k}(x) − (x − x_i) P_{0,...,i−1,i+1,...,k}(x)] / (x_i − x_j)

is the kth Lagrange polynomial that interpolates f at the k + 1 points x_0, x_1, ..., x_k.
Recursive Generation of Higher Degree Lagrange Interpolating Polynomials (56/58)

  x_0   P_0
  x_1   P_1   P_{0,1}
  x_2   P_2   P_{1,2}   P_{0,1,2}
  x_3   P_3   P_{2,3}   P_{1,2,3}   P_{0,1,2,3}
  x_4   P_4   P_{3,4}   P_{2,3,4}   P_{1,2,3,4}   P_{0,1,2,3,4}
Neville's Method (57/58)

The notation in the previous table gets cumbersome. We introduce the notation

  Q_{Last, Degree} = P_{(Last − Degree), ..., Last}.

With this, the table becomes:

  x_0   Q_{0,0}
  x_1   Q_{1,0}   Q_{1,1}
  x_2   Q_{2,0}   Q_{2,1}   Q_{2,2}
  x_3   Q_{3,0}   Q_{3,1}   Q_{3,2}   Q_{3,3}
  x_4   Q_{4,0}   Q_{4,1}   Q_{4,2}   Q_{4,3}   Q_{4,4}

Compare with the old notation:

  x_0   P_0
  x_1   P_1   P_{0,1}
  x_2   P_2   P_{1,2}   P_{0,1,2}
  x_3   P_3   P_{2,3}   P_{1,2,3}   P_{0,1,2,3}
  x_4   P_4   P_{3,4}   P_{2,3,4}   P_{1,2,3,4}   P_{0,1,2,3,4}
Algorithm: Neville's Method — Iterated Interpolation (58/58)

To evaluate the polynomial that interpolates the n + 1 points (x_i, f(x_i)), i = 0, ..., n at the point x:

1. Initialize Q_{i,0} = f(x_i).
2. FOR i = 1 : n
     FOR j = 1 : i
       Q_{i,j} = [(x − x_{i−j}) Q_{i,j−1} − (x − x_i) Q_{i−1,j−1}] / (x_i − x_{i−j})
     END
   END
3. Output the Q-table.
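The algorithm above can be sketched in a few lines of Python (the function name is my own); the bottom-right entry Q[n][n] is the value of the full interpolating polynomial at x:

```python
def neville(xs, ys, x):
    """Neville's method: build the Q-table for the point x.

    xs, ys hold the n+1 interpolation points (x_i, f(x_i)).
    Returns the full lower-triangular Q-table; Q[n][n] = P(x).
    """
    n = len(xs) - 1
    Q = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        Q[i][0] = ys[i]                       # Q_{i,0} = f(x_i)
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            Q[i][j] = ((x - xs[i - j]) * Q[i][j - 1]
                       - (x - xs[i]) * Q[i - 1][j - 1]) / (xs[i] - xs[i - j])
    return Q
```

For the same three points drawn from f(x) = x² + x + 1, Q[2][2] agrees with the Lagrange interpolant evaluated at x.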