Lec 4
UCSC
1 Approximation theory
1 Weierstrass approximation theorem
2 Minimax approximation
3 Orthogonal polynomials and least squares
4 Near minimax approximation
2 Interpolation
1 The interpolation problem
2 Different representations for the interpolating polynomial
3 The error term
4 Minimizing the error term with Chebyshev nodes
5 Discrete least squares again
6 Piecewise polynomial interpolation: splines
Approximation methods
We approximate f(x) by p_n(x) = ∑_{j=0}^{n} c_j ϕ_j(x), where
x ∈ [a, b]
n : degree of interpolation
c_j : basis coefficients
ϕ_j(x) : basis functions, which are polynomials of degree ≤ n.
Approximation methods: Introduction
‖f(x)‖_∞ = max_{x∈[a,b]} |f(x)|
Approximation methods: Local vs Global
Theorem
Let f(x) be a continuous function on [a, b] (i.e., f ∈ C[a, b]). Then there exists a sequence of polynomials p_n(x) of degree ≤ n that converges uniformly to f(x) on [a, b]. That is, ∀ε > 0, ∃N ∈ ℕ and polynomials p_n(x) such that

‖f(x) − p_n(x)‖_∞ = max_{x∈[a,b]} |f(x) − p_n(x)| ≤ ε   for all n ≥ N
Another version
Theorem
If f ∈ C[a, b], then for all ε > 0 there exists a polynomial p(x) such that

‖f(x) − p(x)‖_∞ ≤ ε   ∀x ∈ [a, b]

The classical constructive proof uses Bernstein polynomials p_n(x), for which

lim_{n→∞} p_n(x) = f(x)   for all x ∈ [0, 1]

uniformly.
The Weierstrass theorem is conceptually valuable, but it is not practical. From a computational perspective it is not efficient to work with Bernstein polynomials.
Bernstein polynomials converge very slowly!
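To see the slow convergence numerically, here is a minimal Python sketch (not part of the lecture's code) that evaluates the Bernstein polynomial B_n(f)(x) = ∑_{k=0}^{n} f(k/n) C(n,k) x^k (1−x)^{n−k}. The test function f(x) = |x − 1/2| and the degrees shown are illustrative choices, not from the slides.

```python
import numpy as np
from math import comb

def bernstein_approx(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at points x in [0, 1]."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for k in range(n + 1):
        total += f(k / n) * comb(n, k) * x**k * (1.0 - x)**(n - k)
    return total

f = lambda x: np.abs(x - 0.5)          # a kinked test function on [0, 1]
xs = np.linspace(0.0, 1.0, 1001)
for n in (4, 16, 64, 256):
    err = np.max(np.abs(f(xs) - bernstein_approx(f, n, xs)))
    print(f"n = {n:4d}   max error = {err:.4f}")
```

Even at n = 256 the maximum error shrinks only slowly, which is why Bernstein polynomials are not used computationally.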
Approximation methods: Minimax approximation
‖f(x) − p_n(x)‖_∞ = max_{x∈[a,b]} |f(x) − p_n(x)|

d_n(f) = inf_{p_n} ‖f − p_n‖_∞ = inf_{p_n} max_{x∈[a,b]} |f(x) − p_n(x)|
Approximation methods: Minimax approximation
d_n(f) = ‖f − p_n^*‖_∞
Theorem
Let f ∈ C[a, b]. Then for any n ∈ ℕ there exists a unique p_n^*(x) that minimizes ‖f − p_n‖_∞ among all polynomials of degree ≤ n.
Theorem
Let f ∈ C[a, b]. The polynomial p_n^*(x) is the minimax polynomial of degree n that approximates f(x) on [a, b] if and only if the error f(x) − p_n^*(x) assumes the values ±‖f − p_n^*‖_∞ with an alternating change of sign in at least n + 2 points in [a, b]. That is, at points a ≤ x_0 < x_1 < ... < x_{n+1} ≤ b we have

f(x_i) − p_n^*(x_i) = σ (−1)^i ‖f − p_n^*‖_∞,   i = 0, 1, ..., n + 1, with σ = +1 or −1.
The theorem gives us the desired shape of the error when we want the L∞ (best) approximation.
For example, it says that the maximum error of a cubic approximation should be attained at least five times and that the sign of the error should alternate between these points.
Approximation methods: Minimax approximation
Example 1 (cont.):
Recall that we approximate f(x) = x² on [0, 2] by p_1(x) = ax + b, so the error is h(x) = x² − ax − b. According to the oscillation property we must have

h(0) = −h(a/2) = h(2)

which implies that

h(0) = h(2):   −b = 4 − 2a − b   ⟹   a = 2

and

h(0) = −h(a/2):   −b = a²/4 + b

but we already know that a = 2, thus b = −1/2.
Approximation methods: Minimax approximation
Example 1 (cont.):
Then the approximating polynomial is

p_1^*(x) = 2x − 1/2

and the maximum error is

max_{x∈[0,2]} |x² − (2x − 1/2)| = 1/2
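As a quick numerical sanity check, this Python sketch (an illustration, not part of the lecture materials) evaluates the error x² − (2x − 1/2) on a fine grid and confirms that it reaches ±1/2 with alternating sign at x = 0, 1, 2, i.e. at n + 2 = 3 points.

```python
import numpy as np

# Error of the minimax linear approximation p1*(x) = 2x - 1/2 to f(x) = x^2 on [0, 2].
xs = np.linspace(0.0, 2.0, 100001)
err = xs**2 - (2.0 * xs - 0.5)

print("max |error| =", np.max(np.abs(err)))                  # 0.5
print("error at x = 0, 1, 2:", err[0], err[50000], err[-1])  # +0.5, -0.5, +0.5
```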
Assume a general family of polynomials {ϕ_j(x)}_{j=0}^{n} that constitutes a basis for C(X).
Assume that f(x) is defined on [a, b]. We want to approximate f by a linear combination of polynomials: p_n(x) = ∑_{j=0}^{n} c_j ϕ_j(x).
Define the error as

E(x) = f(x) − p_n(x)
min ‖E(x)‖²_{L²} = min ∫_a^b E(x)² w(x) dx = min ∫_a^b (f(x) − p_n(x))² w(x) dx

i.e.,

min_{{c_j}_{j=0}^{n}} ∫_a^b w(x) ( f(x) − ∑_{j=0}^{n} c_j ϕ_j(x) )² dx

FOC w.r.t. c_k:

−2 ∫_a^b w(x) ( f(x) − ∑_{j=0}^{n} c_j ϕ_j(x) ) ϕ_k(x) dx = 0   ∀k = 0, 1, 2, ..., n
Approximation methods: Least squares
∑_{j=0}^{n} c_j ∫_a^b ϕ_k(x) ϕ_j(x) w(x) dx = ∫_a^b f(x) ϕ_k(x) w(x) dx   ∀k = 0, 1, 2, ..., n

c_0 ⟨ϕ_0, ϕ_0⟩ + c_1 ⟨ϕ_0, ϕ_1⟩ + c_2 ⟨ϕ_0, ϕ_2⟩ + ... + c_n ⟨ϕ_0, ϕ_n⟩ = ⟨f, ϕ_0⟩
c_0 ⟨ϕ_1, ϕ_0⟩ + c_1 ⟨ϕ_1, ϕ_1⟩ + c_2 ⟨ϕ_1, ϕ_2⟩ + ... + c_n ⟨ϕ_1, ϕ_n⟩ = ⟨f, ϕ_1⟩
...
c_0 ⟨ϕ_n, ϕ_0⟩ + c_1 ⟨ϕ_n, ϕ_1⟩ + c_2 ⟨ϕ_n, ϕ_2⟩ + ... + c_n ⟨ϕ_n, ϕ_n⟩ = ⟨f, ϕ_n⟩
Approximation methods: Least squares
In matrix form
[ ⟨ϕ_0, ϕ_0⟩  ⟨ϕ_0, ϕ_1⟩  ...  ⟨ϕ_0, ϕ_n⟩ ] [ c_0 ]   [ ⟨f, ϕ_0⟩ ]
[ ⟨ϕ_1, ϕ_0⟩  ⟨ϕ_1, ϕ_1⟩  ...  ⟨ϕ_1, ϕ_n⟩ ] [ c_1 ] = [ ⟨f, ϕ_1⟩ ]
[     ...         ...     ...      ...    ] [ ... ]   [   ...    ]
[ ⟨ϕ_n, ϕ_0⟩  ⟨ϕ_n, ϕ_1⟩  ...  ⟨ϕ_n, ϕ_n⟩ ] [ c_n ]   [ ⟨f, ϕ_n⟩ ]
Candidates for ϕ_j(x):
Monomials, which constitute a basis for C(X): 1, x, x², x³, ..., xⁿ
Orthogonal polynomials, which satisfy

∫_a^b ϕ_k(x) ϕ_j(x) w(x) dx = 0   for k ≠ j
Approximation methods: Least squares
With the monomial basis the problem is

min_{{c_j}_{j=0}^{n}} ∫_a^b ( f(x) − ∑_{j=0}^{n} c_j x^j )² dx

FOC w.r.t. c_k:

−2 ∫_a^b f(x) x^k dx + 2 ∫_a^b ( ∑_{j=0}^{n} c_j x^j ) x^k dx = 0   ∀k = 0, 1, 2, ..., n
Approximation methods: Least squares
∫_a^b ( ∑_{j=0}^{n} c_j x^{j+k} ) dx = ∫_a^b f(x) x^k dx   ∀k = 0, 1, 2, ..., n

where ∫_a^b ∑_{j=0}^{n} c_j x^{j+k} dx = ∑_{j=0}^{n} c_j ∫_a^b x^{j+k} dx and

∫_a^b x^{j+k} dx = x^{j+k+1}/(j+k+1) |_a^b = (b^{j+k+1} − a^{j+k+1})/(j+k+1),  thus

∑_{j=0}^{n} ( (b^{j+k+1} − a^{j+k+1})/(j+k+1) ) c_j = ∫_a^b f(x) x^k dx   ∀k = 0, 1, 2, ..., n
Approximation methods: Least squares
The above is a linear system of equations with n + 1 unknowns ({c_j}_{j=0}^{n}) and n + 1 equations.
If we assume that [a, b] = [0, 1], then the system of equations that determines the {c_j}_{j=0}^{n} becomes

∑_{j=0}^{n} c_j/(j + k + 1) = ∫_0^1 f(x) x^k dx   ∀k = 0, 1, 2, ..., n
Approximation methods: Least squares
for k = 0

c_0 + (1/2) c_1 + (1/3) c_2 + ... + (1/(n+1)) c_n = ∫_0^1 f(x) dx

for k = 1

(1/2) c_0 + (1/3) c_1 + (1/4) c_2 + ... + (1/(n+2)) c_n = ∫_0^1 f(x) x dx

for k = 2

(1/3) c_0 + (1/4) c_1 + (1/5) c_2 + ... + (1/(n+3)) c_n = ∫_0^1 f(x) x² dx

...

for k = n

(1/(n+1)) c_0 + (1/(n+2)) c_1 + (1/(n+3)) c_2 + ... + (1/(2n+1)) c_n = ∫_0^1 f(x) xⁿ dx
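The coefficient matrix of this system is the (n+1)×(n+1) Hilbert matrix, which is notoriously ill conditioned. A minimal Python sketch that builds and solves the system (the choices f(x) = e^x and n = 5 are assumptions for illustration, not from the lecture):

```python
import numpy as np
from scipy.integrate import quad

# Monomial least squares on [0, 1]: solve sum_j c_j / (j + k + 1) = int_0^1 f(x) x^k dx.
# The coefficient matrix is the Hilbert matrix, which becomes badly conditioned as n grows.
f = np.exp                      # example target function (illustrative assumption)
n = 5
H = np.array([[1.0 / (j + k + 1) for j in range(n + 1)] for k in range(n + 1)])
b = np.array([quad(lambda x, k=k: f(x) * x**k, 0.0, 1.0)[0] for k in range(n + 1)])

c = np.linalg.solve(H, b)       # basis coefficients c_0, ..., c_n
print("condition number of H:", np.linalg.cond(H))
print("coefficients:", c)
```

The large condition number printed here is one motivation for switching to orthogonal polynomial bases, as the next slides do.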
Approximation methods: Least squares
If the polynomial family {ϕ_j(x)}_{j=0}^{n} is orthogonal, that is, ⟨ϕ_n, ϕ_m⟩ = 0 for n ≠ m, then the system becomes

[ ⟨ϕ_0, ϕ_0⟩       0        ...       0       ] [ c_0 ]   [ ⟨f, ϕ_0⟩ ]
[      0       ⟨ϕ_1, ϕ_1⟩   ...       0       ] [ c_1 ] = [ ⟨f, ϕ_1⟩ ]
[     ...          ...      ...      ...      ] [ ... ]   [   ...    ]
[      0            0       ...  ⟨ϕ_n, ϕ_n⟩   ] [ c_n ]   [ ⟨f, ϕ_n⟩ ]

so that

c_k = ⟨f, ϕ_k⟩ / ⟨ϕ_k, ϕ_k⟩   ∀k = 0, 1, 2, ..., n
Thus

p_n(x) = ∑_{k=0}^{n} ( ⟨f, ϕ_k⟩ / ⟨ϕ_k, ϕ_k⟩ ) ϕ_k(x)
Approximation methods: Least squares
There are many families of orthogonal polynomials that are bases for function spaces:

Chebyshev: For x ∈ [−1, 1]
  T_n(x) = cos(n arccos x)

Legendre: For x ∈ [−1, 1]
  P_n(x) = ( (−1)^n / (2^n n!) ) d^n/dx^n [ (1 − x²)^n ]

Laguerre: For x ∈ [0, ∞)
  L_n(x) = (e^x / n!) d^n/dx^n [ x^n e^{−x} ]

Hermite: For x ∈ (−∞, ∞)
  H_n(x) = (−1)^n e^{x²} d^n/dx^n [ e^{−x²} ]

Most of these polynomials come from the solution of "important" difference equations.
Orthogonal polynomials: Most common families
Theorem
Let {ϕ_n(x)}_{n=0}^{∞} be an orthogonal family of monic polynomials on [a, b] relative to an inner product ⟨·,·⟩, with ϕ_0(x) = 1 and ϕ_1(x) = x − ⟨x ϕ_0(x), ϕ_0(x)⟩ / ⟨ϕ_0(x), ϕ_0(x)⟩. Then {ϕ_n(x)}_{n=0}^{∞} satisfies the recursive scheme

ϕ_{n+1}(x) = (x − δ_n) ϕ_n(x) − γ_n ϕ_{n−1}(x)

where

δ_n = ⟨x ϕ_n, ϕ_n⟩ / ⟨ϕ_n, ϕ_n⟩   and   γ_n = ⟨ϕ_n, ϕ_n⟩ / ⟨ϕ_{n−1}, ϕ_{n−1}⟩
The monic Chebyshev polynomials are

T̃_n(x) = cos(n arccos x) / 2^{n−1}   for n ≥ 1

Using the general recursive formula for orthogonal monic polynomials we obtain the recursion scheme for the monic Chebyshev polynomials:

T̃_0(x) = 1
T̃_1(x) = x
T̃_2(x) = x T̃_1(x) − (1/2) T̃_0(x)

and

T̃_{n+1}(x) = x T̃_n(x) − (1/4) T̃_{n−1}(x)   for n ≥ 2
Orthogonal polynomials: Recursive representation
Substituting T̃_n(x) = T_n(x)/2^{n−1} into the monic recursion gives, for n ≥ 2,

T_{n+1}(x)/2^n = x T_n(x)/2^{n−1} − (1/4) T_{n−1}(x)/2^{n−2}

Multiplying through by 2^n,

T_{n+1}(x) = 2x T_n(x) − (1/4) · 2² · T_{n−1}(x)
           = 2x T_n(x) − T_{n−1}(x)
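A short Python sketch (illustrative, not from the lecture) that builds T_n(x) with the recursion T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x) and checks it against the closed form cos(n arccos x):

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_n(x) on [-1, 1] via the three-term recursion T_{k+1} = 2 x T_k - T_{k-1}."""
    x = np.asarray(x, dtype=float)
    T_prev, T_curr = np.ones_like(x), x.copy()   # T_0 and T_1
    if n == 0:
        return T_prev
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

xs = np.linspace(-1.0, 1.0, 201)
for n in range(6):
    assert np.allclose(chebyshev_T(n, xs), np.cos(n * np.arccos(xs)))
print("recursion matches cos(n arccos x) for n = 0, ..., 5")
```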
Orthogonal polynomials
∫_{−1}^{1} T_i(x) T_j(x) w(x) dx = 0 for i ≠ j,   and = C_j for i = j

where C_j is a constant.
Integrating by substitution and solving for w(x) yields

w(x) = 1/√(1 − x²)
Orthogonal polynomials
Therefore
c_0 = (1/π) ∫_{−1}^{1} f(x)/√(1 − x²) dx

and

c_k = (2/π) ∫_{−1}^{1} f(x) T_k(x)/√(1 − x²) dx   ∀k = 1, 2, ..., n

The Chebyshev polynomial approximation is

T_n^*(x) = c_0 + ∑_{k=1}^{n} c_k T_k(x)
Orthogonal polynomials: Chebyshev least squares
Theorem
Let p_n^*(x) be the minimax polynomial approximation to f ∈ C[−1, 1]. If T_n^*(x) is the nth degree Chebyshev least squares approximation to f ∈ C[−1, 1], then

‖f − p_n^*‖_∞ ≤ ‖f − T_n^*‖_∞ ≤ ( 4 + (4/π²) ln n ) ‖f − p_n^*‖_∞

and

lim_{n→∞} ‖f − T_n^*‖_∞ = 0 uniformly
exp(x) ≈ c_0 T_0(x) + c_1 T_1(x)

where

T_0(x) = cos(0 · arccos x) = 1
T_1(x) = cos(arccos x) = x

and

c_0 = (1/π) ∫_{−1}^{1} exp(x)/√(1 − x²) dx = 1.2661

c_1 = (2/π) ∫_{−1}^{1} exp(x) x/√(1 − x²) dx = 1.1303
Orthogonal polynomials: Chebyshev least squares
exp(x) ≈ c_0 T_0(x) + c_1 T_1(x) + c_2 T_2(x)

where T_2(x) = cos(2 arccos x) and

c_2 = (2/π) ∫_{−1}^{1} exp(x) T_2(x)/√(1 − x²) dx = 0.2715
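These coefficients can be reproduced numerically. A small Python sketch (illustrative; the substitution x = cos θ, which removes the endpoint singularity of the weight, is an implementation choice rather than something from the slides):

```python
import numpy as np
from scipy.integrate import quad

# Chebyshev least squares coefficients for f(x) = exp(x) on [-1, 1].
# With x = cos(theta) the weighted integrals become int_0^pi f(cos(theta)) cos(k*theta) dtheta.
f = np.exp

def cheb_coef(k):
    integral = quad(lambda t: f(np.cos(t)) * np.cos(k * t), 0.0, np.pi)[0]
    return integral / np.pi if k == 0 else 2.0 * integral / np.pi

print([round(cheb_coef(k), 4) for k in range(3)])   # [1.2661, 1.1303, 0.2715]
```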
Interpolation: The interpolation problem
Theorem
If x_0, ..., x_n are distinct, then for any f(x_0), ..., f(x_n) there exists a unique polynomial p_n(x) of degree ≤ n such that the interpolation conditions

f(x_i) = p_n(x_i)   ∀i = 0, 1, ..., n
are satisfied.
Linear interpolation
f(x_0) = p_1(x_0) = a_0 + a_1 x_0
f(x_1) = p_1(x_1) = a_0 + a_1 x_1
Linear interpolation
a_0 = f(x_0) − ( (f(x_1) − f(x_0))/(x_1 − x_0) ) x_0

a_1 = (f(x_1) − f(x_0))/(x_1 − x_0)

Thus, the interpolating polynomial is

p_1(x) = f(x_0) − ( (f(x_1) − f(x_0))/(x_1 − x_0) ) x_0 + ( (f(x_1) − f(x_0))/(x_1 − x_0) ) x
Linear interpolation
Newton form

p_1(x) = f(x_0) + ( (f(x_1) − f(x_0))/(x_1 − x_0) ) (x − x_0)

Lagrange form

p_1(x) = ( (x − x_1)/(x_0 − x_1) ) f(x_0) + ( (x − x_0)/(x_1 − x_0) ) f(x_1)
Quadratic interpolation
[ 1   x_0   x_0² ] [ a_0 ]   [ f(x_0) ]
[ 1   x_1   x_1² ] [ a_1 ] = [ f(x_1) ]
[ 1   x_2   x_2² ] [ a_2 ]   [ f(x_2) ]

Solving the system gives

a_1 = ( −(x_1 + x_2)/((x_0 − x_1)(x_0 − x_2)) ) f(x_0) + ( (x_0 + x_2)/((x_0 − x_1)(x_1 − x_2)) ) f(x_1) + ( −(x_0 + x_1)/((x_0 − x_2)(x_1 − x_2)) ) f(x_2)

a_2 = ( 1/((x_0 − x_1)(x_0 − x_2)) ) f(x_0) + ( −1/((x_0 − x_1)(x_1 − x_2)) ) f(x_1) + ( 1/((x_0 − x_2)(x_1 − x_2)) ) f(x_2)
Quadratic interpolation
p_2(x) = a_0 + a_1 x + a_2 x²

Lagrange form:

p_2(x) = ( (x − x_1)(x − x_2)/((x_0 − x_1)(x_0 − x_2)) ) f(x_0) + ( (x − x_0)(x − x_2)/((x_1 − x_0)(x_1 − x_2)) ) f(x_1) + ( (x − x_0)(x − x_1)/((x_2 − x_0)(x_2 − x_1)) ) f(x_2)

Newton form:

p_2(x) = f(x_0) + ( (f(x_1) − f(x_0))/(x_1 − x_0) ) (x − x_0) + ( ( (f(x_2) − f(x_1))/(x_2 − x_1) − (f(x_1) − f(x_0))/(x_1 − x_0) ) / (x_2 − x_0) ) (x − x_0)(x − x_1)
Interpolation: The general case
In the general case the interpolation conditions are

f(x_i) = a_0 + a_1 x_i + ... + a_n x_i^n   for i = 0, 1, ..., n

In matrix form

[ 1   x_0   ...   x_0^n ] [ a_0 ]   [ f(x_0) ]
[ 1   x_1   ...   x_1^n ] [ a_1 ] = [ f(x_1) ]
[ ...  ...   ...   ...  ] [ ... ]   [  ...   ]
[ 1   x_n   ...   x_n^n ] [ a_n ]   [ f(x_n) ]
More compactly

p_n(x) = ∑_{j=0}^{n} f(x_j) l_{n,j}(x)
Interpolation: The general case
For j = 0

l_{n,0}(x) = ( (x − x_1)(x − x_2) ... (x − x_n) ) / ( (x_0 − x_1)(x_0 − x_2) ... (x_0 − x_n) ) = ∏_{i=0, i≠0}^{n} (x − x_i)/(x_0 − x_i)

For j = 1

l_{n,1}(x) = ( (x − x_0)(x − x_2) ... (x − x_n) ) / ( (x_1 − x_0)(x_1 − x_2) ... (x_1 − x_n) ) = ∏_{i=0, i≠1}^{n} (x − x_i)/(x_1 − x_i)

For j = n

l_{n,n}(x) = ( (x − x_0)(x − x_1) ... (x − x_{n−1}) ) / ( (x_n − x_0)(x_n − x_1) ... (x_n − x_{n−1}) ) = ∏_{i=0, i≠n}^{n} (x − x_i)/(x_n − x_i)

Interpolation: The general case
In general, for any 0 ≤ j ≤ n,

l_{n,j}(x) = ∏_{i=0, i≠j}^{n} (x − x_i)/(x_j − x_i)
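A small Python sketch (illustrative, not the lecture's code) that evaluates these Lagrange basis polynomials and assembles p_n(x) = ∑_j f(x_j) l_{n,j}(x); the nodes and the test function f(x) = x² are arbitrary choices:

```python
import numpy as np

def lagrange_basis(j, nodes, x):
    """Evaluate the Lagrange basis polynomial l_{n,j}(x) for the given interpolation nodes."""
    x = np.asarray(x, dtype=float)
    result = np.ones_like(x)
    for i, xi in enumerate(nodes):
        if i != j:
            result *= (x - xi) / (nodes[j] - xi)
    return result

def lagrange_interpolate(f_vals, nodes, x):
    """Evaluate p_n(x) = sum_j f(x_j) l_{n,j}(x)."""
    return sum(fj * lagrange_basis(j, nodes, x) for j, fj in enumerate(f_vals))

nodes = np.array([0.0, 1.0, 2.0])
f_vals = nodes**2                                  # interpolate f(x) = x^2 at the nodes
xs = np.linspace(0.0, 2.0, 5)
print(lagrange_interpolate(f_vals, nodes, xs))     # reproduces x^2 exactly
```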
p_n(x) = c_0 + c_1 (x − x_0) + c_2 (x − x_0)(x − x_1) + ... + c_n (x − x_0)(x − x_1) ... (x − x_{n−1})
d(x_0) = f(x_0)

d(x_1, x_0) = (f(x_1) − f(x_0))/(x_1 − x_0)

d(x_2, x_1, x_0) = (d(x_2, x_1) − d(x_1, x_0))/(x_2 − x_0)
                = ( (f(x_2) − f(x_1))/(x_2 − x_1) − (f(x_1) − f(x_0))/(x_1 − x_0) ) / (x_2 − x_0)
Interpolation: The general case
d(x_3, x_2, x_1, x_0) = (d(x_3, x_2, x_1) − d(x_2, x_1, x_0))/(x_3 − x_0)

= ( [ ( (f(x_3) − f(x_2))/(x_3 − x_2) − (f(x_2) − f(x_1))/(x_2 − x_1) ) / (x_3 − x_1) ] − [ ( (f(x_2) − f(x_1))/(x_2 − x_1) − (f(x_1) − f(x_0))/(x_1 − x_0) ) / (x_2 − x_0) ] ) / (x_3 − x_0)

or, in general,

p_n(x) = d(x_0) + ∑_{j=1}^{n} d(x_j, ..., x_1, x_0) ∏_{k=0}^{j−1} (x − x_k)
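A compact Python sketch (illustrative only) that computes the divided differences in place and evaluates the Newton form; the nodes and the choice f = exp are assumptions for the example:

```python
import numpy as np

def newton_coefficients(nodes, f_vals):
    """Divided-difference coefficients d(x_0), d(x_1, x_0), ..., d(x_n, ..., x_0)."""
    d = np.array(f_vals, dtype=float)
    n = len(nodes)
    for level in range(1, n):
        # d[j] becomes the divided difference of order `level` ending at x_j
        for j in range(n - 1, level - 1, -1):
            d[j] = (d[j] - d[j - 1]) / (nodes[j] - nodes[j - level])
    return d

def newton_eval(coef, nodes, x):
    """Evaluate the Newton form with nested (Horner-like) multiplication."""
    result = np.full_like(np.asarray(x, dtype=float), coef[-1])
    for j in range(len(coef) - 2, -1, -1):
        result = result * (x - nodes[j]) + coef[j]
    return result

nodes = np.array([0.0, 1.0, 2.0, 3.0])
coef = newton_coefficients(nodes, np.exp(nodes))
print(newton_eval(coef, nodes, nodes))   # reproduces exp at the nodes
```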
Interpolation: The interpolation error
Theorem
Let f(x) ∈ C^{n+1}[a, b] and let p_n(x) be a polynomial of degree ≤ n that interpolates f(x) at the n + 1 distinct nodes {x_0, x_1, ..., x_n}. Then ∀x ∈ [a, b] there exists a ξ_n ∈ (a, b) such that

f(x) − p_n(x) = (1/(n + 1)!) f^{(n+1)}(ξ_n) ∏_{k=0}^{n} (x − x_k)
Fact
The error term for the nth Taylor approximation around the point x_0 is

( f^{(n+1)}(ξ_n)/(n + 1)! ) (x − x_0)^{n+1}
Interpolation: The interpolation error
Taking the maximum over x,

max_{x∈[a,b]} |f(x) − p_n(x)| ≤ (1/(n + 1)!) max_{ξ_n∈[a,b]} |f^{(n+1)}(ξ_n)| · max_{x∈[a,b]} |∏_{k=0}^{n} (x − x_k)|

If this bound goes to zero as n → ∞, then

lim_{n→∞} (f(x) − p_n(x)) = 0

But nothing guarantees convergence in general (neither pointwise nor uniform convergence).
Interpolation: The interpolation error
Recall that:
We want to interpolate a function f(x): [a, b] → ℝ
n is the degree of the interpolating polynomial p_n(x)
The interpolation conditions are f(x_i) = p_n(x_i) for i = 0, 1, ..., n
But we can choose the nodes in order to obtain the smallest value for

‖∏_{k=0}^{n} (x − x_k)‖_∞ = max_{x∈[a,b]} |∏_{k=0}^{n} (x − x_k)|
Consider the monic Chebyshev polynomials

T̃_j(x) = cos(j arccos x) / 2^{j−1}

with x ∈ [−1, 1] and j = 1, 2, ..., n.
Then the zeros of T̃_n(x) are given by the solution to

T̃_n(x) = 0
cos(n arccos x) = 0
cos(nθ) = 0

where θ = arccos x. Thus θ ∈ [0, π].
Interpolation: Choosing the nodes
nθ = ( (2k − 1)/2 ) π   for k = 1, 2, ..., n

Notice that

cos( ((2k − 1)/2) π ) = cos( kπ − π/2 ) = cos(kπ) cos(π/2) + sin(kπ) sin(π/2) = 0

since cos(π/2) = 0 and sin(kπ) = 0.
Interpolation: Choosing the nodes
cheby nodes.m
Interpolation: Choosing the nodes
We want the roots of the monic Chebyshev polynomial, and we have the roots of the cosine function:

nθ = (k − 1/2) π   for k = 1, 2, ..., n

so that

x_k = cos( ((2k − 1)/(2n)) π )   for k = 1, 2, ..., n
cheby nodes.m
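The file cheby nodes.m is not reproduced in these notes; the following Python sketch shows what such a routine presumably computes (the node formula above, plus the optional mapping to [a, b] used later in the algorithm, which is an assumption about the file's contents):

```python
import numpy as np

def cheby_nodes(n, a=-1.0, b=1.0):
    """Chebyshev nodes x_k = cos((2k - 1) * pi / (2n)), mapped from [-1, 1] to [a, b]."""
    k = np.arange(1, n + 1)
    z = np.cos((2 * k - 1) * np.pi / (2 * n))
    return (b - a) / 2 * (z + 1) + a      # linear change of variable to [a, b]

print(cheby_nodes(5))            # 5 nodes on [-1, 1]
print(cheby_nodes(5, 0.0, 2.0))  # the same nodes mapped to [0, 2]
```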
Interpolation: Choosing the nodes
Theorem
The Chebyshev polynomial T_n(x) of degree n ≥ 1 has n zeros in [−1, 1] at

x_k = cos( ((2k − 1)/(2n)) π )   for k = 1, 2, ..., n

Moreover, T_n(x) assumes its extrema at

x_k^* = cos(kπ/n)   for k = 0, 1, ..., n

with

T_n(x_k^*) = (−1)^k   for k = 0, 1, ..., n
Interpolation: Choosing the nodes
Corollary
The monic Chebyshev polynomial T̃_n(x) has the same zeros and extremum points as T_n(x), but with extreme values given by

T̃_n(x_k^*) = (−1)^k / 2^{n−1}   for k = 0, 1, ..., n
Interpolation: Choosing the nodes
T_n(x) = cos(n arccos x)

then

T_n'(x) = dT_n(x)/dx = − sin(n arccos x) · ( −n/√(1 − x²) ) = n sin(n arccos x)/√(1 − x²)

At the extremum points x_k^* = cos(kπ/n),

T_n(x_k^*) = cos(n arccos x_k^*) = cos( n arccos(cos(kπ/n)) ) = cos( n · kπ/n ) = cos(kπ) = (−1)^k   for k = 0, 1, ..., n

and therefore

max_{x∈[−1,1]} |T_n(x)| = 1
Interpolation: Choosing the nodes
T̃_n(x_k^*) = T_n(x_k^*)/2^{n−1} = (−1)^k/2^{n−1}   for k = 0, 1, ..., n

Therefore

max_{x∈[−1,1]} |T̃_n(x)| = 1/2^{n−1}
Interpolation: Choosing the nodes
Theorem
If p̃_n(x) is a monic polynomial of degree n defined on [−1, 1], then

max_{x∈[−1,1]} |T̃_n(x)| = 1/2^{n−1} ≤ max_{x∈[−1,1]} |p̃_n(x)|
The smallest value that max_{x∈[−1,1]} |∏_{k=0}^{n} (x − x_k)| can take is 1/2^n.
Therefore

max_{x∈[−1,1]} |∏_{k=0}^{n} (x − x_k)| = 1/2^n = max_{x∈[−1,1]} |T̃_{n+1}(x)|

which implies that

∏_{k=0}^{n} (x − x_k) = T̃_{n+1}(x)

Therefore the zeros of ∏_{k=0}^{n} (x − x_k) must be the zeros of T̃_{n+1}(x), which are given by

x_k = cos( ((2k + 1)/(2(n + 1))) π )   for k = 0, 1, ..., n
Interpolation: Choosing the nodes
Since max_{x∈[−1,1]} |∏_{k=0}^{n} (x − x_k)| = 1/2^n, the maximum interpolation error becomes

max_{x∈[−1,1]} |f(x) − p_n(x)| ≤ (1/(n + 1)!) max_{ξ_n∈[−1,1]} |f^{(n+1)}(ξ_n)| · max_{x∈[−1,1]} |∏_{k=0}^{n} (x − x_k)|

max_{x∈[−1,1]} |f(x) − p_n(x)| ≤ (1/(n + 1)!) max_{ξ_n∈[−1,1]} |f^{(n+1)}(ξ_n)| · (1/2^n)
Chebyshev nodes eliminate the violent oscillations of the error term that appear with uniformly spaced nodes.
Interpolation with Chebyshev nodes has better convergence properties.
It is possible to show that p_n(x) → f(x) as n → ∞ uniformly for sufficiently smooth f. This is not guaranteed with uniformly spaced nodes.
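To illustrate the difference, here is a Python sketch (not from the lecture; the Runge function f(x) = 1/(1 + 25x²) is a standard test choice, an assumption for this example) comparing the maximum interpolation error with uniformly spaced and Chebyshev nodes:

```python
import numpy as np

# Compare interpolation error with uniformly spaced vs Chebyshev nodes
# on the Runge function f(x) = 1 / (1 + 25 x^2) on [-1, 1].
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def lagrange_eval(nodes, f_vals, x):
    """Evaluate the interpolating polynomial through (nodes, f_vals) at points x."""
    p = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        basis = np.ones_like(x)
        for i, xi in enumerate(nodes):
            if i != j:
                basis *= (x - xi) / (xj - xi)
        p += f_vals[j] * basis
    return p

xs = np.linspace(-1.0, 1.0, 2001)
for n in (5, 10, 15, 20):
    uniform = np.linspace(-1.0, 1.0, n + 1)
    cheb = np.cos((2.0 * np.arange(1, n + 2) - 1.0) * np.pi / (2.0 * (n + 1)))
    err_u = np.max(np.abs(f(xs) - lagrange_eval(uniform, f(uniform), xs)))
    err_c = np.max(np.abs(f(xs) - lagrange_eval(cheb, f(cheb), xs)))
    print(f"n = {n:2d}   uniform nodes: {err_u:8.3f}   Chebyshev nodes: {err_c:8.5f}")
```

The uniform-node error grows with n while the Chebyshev-node error shrinks, which is the Runge phenomenon the slides refer to.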
Interpolation: Choosing the nodes
Algorithm:
Step 1: Compute the m Chebyshev interpolation nodes on [−1, 1]:

z_k = cos( ((2k − 1)/(2m)) π )   for k = 1, ..., m

Step 2: Adjust the nodes to the interval [a, b]:

x_k = ((b − a)/2)(z_k + 1) + a   for k = 1, ..., m

Step 3: Evaluate f at the nodes: y_k = f(x_k) for k = 1, ..., m.
Algorithm (Cont.):
Step 4: Compute the Chebyshev least squares coefficients.
The coefficients that solve the discrete LS problem

min ∑_{k=1}^{m} [ y_k − ∑_{j=0}^{n} c_j T_j(z_k) ]²

are given by

c_j = ( ∑_{k=1}^{m} y_k T_j(z_k) ) / ( ∑_{k=1}^{m} T_j(z_k)² )   for j = 0, 1, ..., n
z_k = (2x_k − (a + b))/(b − a)
Interpolation through regression
The approximation is evaluated at any x ∈ [a, b] as p_n(x) = ∑_{j=0}^{n} c_j T_j(z), where

z = (2x − (a + b))/(b − a)

and the c_j are estimated using the LS coefficients

c_j = ( ∑_{k=1}^{m} y_k T_j(z_k) ) / ( ∑_{k=1}^{m} T_j(z_k)² )   for j = 0, 1, ..., n
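The whole algorithm fits in a few lines. A Python sketch (illustrative; the target f = exp on [0, 2] with n = 5 and m = 20 are assumed choices, not from the lecture):

```python
import numpy as np

def cheb_regression(f, a, b, n, m):
    """Discrete Chebyshev least squares (regression) approximation of f on [a, b].
    Uses m Chebyshev nodes and a degree-n expansion (m > n). Returns a callable."""
    k = np.arange(1, m + 1)
    z = np.cos((2 * k - 1) * np.pi / (2 * m))              # Step 1: nodes on [-1, 1]
    x = (b - a) / 2 * (z + 1) + a                          # Step 2: nodes on [a, b]
    y = f(x)                                               # Step 3: function values
    T = np.cos(np.outer(np.arange(n + 1), np.arccos(z)))   # T_j(z_k), shape (n+1, m)
    c = (T @ y) / np.sum(T**2, axis=1)                     # Step 4: LS coefficients

    def p(xq):
        zq = (2 * np.asarray(xq, dtype=float) - (a + b)) / (b - a)
        return np.cos(np.outer(np.arange(n + 1), np.arccos(zq))).T @ c

    return p

approx = cheb_regression(np.exp, 0.0, 2.0, n=5, m=20)
xs = np.linspace(0.0, 2.0, 101)
print("max error:", np.max(np.abs(np.exp(xs) - approx(xs))))
```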
Piecewise linear approximation
f(x_i) = a_0 + a_1 x_i
f(x_{i+1}) = a_0 + a_1 x_{i+1}
Piecewise linear approximation
a_0 = f(x_i) − ( (f(x_{i+1}) − f(x_i))/(x_{i+1} − x_i) ) x_i

a_1 = (f(x_{i+1}) − f(x_i))/(x_{i+1} − x_i)

Piecewise linear interpolant:

p_i(x) = f(x_i) + ( (x − x_i)/(x_{i+1} − x_i) ) (f(x_{i+1}) − f(x_i))
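A minimal Python sketch of the piecewise linear interpolant (illustrative; the node set and f = exp are assumed, and numpy's built-in np.interp produces the same values):

```python
import numpy as np

def piecewise_linear(nodes, f_vals, x):
    """Evaluate p_i(x) = f(x_i) + (x - x_i)/(x_{i+1} - x_i) * (f(x_{i+1}) - f(x_i))."""
    x = np.asarray(x, dtype=float)
    i = np.clip(np.searchsorted(nodes, x) - 1, 0, len(nodes) - 2)   # bracketing interval index
    slope = (f_vals[i + 1] - f_vals[i]) / (nodes[i + 1] - nodes[i])
    return f_vals[i] + slope * (x - nodes[i])

nodes = np.linspace(0.0, 2.0, 6)
print(piecewise_linear(nodes, np.exp(nodes), np.array([0.1, 1.0, 1.95])))
```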
Piecewise polynomial approximation: Splines
s(x) = { s_0(x) = a_0 + b_0 x + c_0 x² + d_0 x³   for x ∈ [x_0, x_1]
         s_1(x) = a_1 + b_1 x + c_1 x² + d_1 x³   for x ∈ [x_1, x_2] }

Continuity of the second derivative at the interior node requires

s_0''(x_1) = s_1''(x_1)
2c_0 + 6d_0 x_1 = 2c_1 + 6d_1 x_1
Piecewise polynomial approximation: Splines
End conditions:

f'(x_0) = s'(x_0)
f'(x_2) = s'(x_2)

Secant Hermite spline: use the secant to estimate the slope at the end points,

s'(x_0) = (s(x_1) − s(x_0))/(x_1 − x_0)
s'(x_2) = (s(x_2) − s(x_1))/(x_2 − x_1)
Piecewise polynomial approximation: Splines
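The lecture gives no spline code; as a hedged illustration, the following Python sketch uses scipy's CubicSpline and imposes the secant Hermite end conditions above by clamping the end slopes to the secant slopes (the node set and f = exp are assumed choices):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline interpolation of f(x) = exp(x), with secant Hermite end conditions:
# the first derivative at each end is set equal to the secant slope of the first/last interval.
f = np.exp
nodes = np.linspace(0.0, 2.0, 5)
y = f(nodes)

secant_left = (y[1] - y[0]) / (nodes[1] - nodes[0])
secant_right = (y[-1] - y[-2]) / (nodes[-1] - nodes[-2])
s = CubicSpline(nodes, y, bc_type=((1, secant_left), (1, secant_right)))

xs = np.linspace(0.0, 2.0, 201)
print("max error:", np.max(np.abs(f(xs) - s(xs))))
```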