
Interpolation in Digital Signal Processing and Numerical Analysis

Saeed Babaeizadeh
Spring 2003

1. Introduction
In many practical applications of digital signal processing, one is faced with the problem of changing the
sampling rate of a signal, either increasing it or decreasing it by some amount. An increase in the sampling
rate by an integer factor L can be accomplished by interpolating L-1 new samples between successive values
of the signal. The interpolation process can be accomplished in a variety of ways in both fields, digital
signal processing and numerical analysis.
There is one very simple and straightforward approach to changing the sampling rate of a digital signal.
This approach, called the analog approach, merely reconstructs the continuous-time signal from the original
set of samples and then resamples the signal at the new rate. In practice this approach suffers from one major
problem, namely, that the ideal operations required to reconstruct the continuous-time signal from the
original samples and to resample it at the new rate cannot be implemented exactly [1]. An alternative
is the so-called direct digital approach that does the job without going through D/A or A/D conversion.

2. The Concept of Interpolation in DSP


In DSP we use a process that preserves the spectral shape of the signal sequence x(n) [2]. Here I describe
a process to increase the sample rate by an integer factor L [18].
If the sampling rate is increased by the integer factor L, then the new sampling rate will be $L/T$ samp./sec.,
and thus the new sampling interval will be $T' = T/L$. Let the original signal be x(n) and the interpolated
high-rate signal be y(n). If we define an intermediate signal with the sample rate of $L/T$ as

(1) $v(n) \triangleq \begin{cases} x(n/L), & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases}$
then it is known [1,2,3,18] that their Fourier transforms are related as

(2) $V(e^{j\omega}) = X(e^{j\omega L})$
Equation (2) says that the Fourier transform V is just the same as that of X except that the frequency scale
is compressed by the factor L.
Now, the Fourier transform Y that we require should be the same as one period of X but compressed into an
L-th of the full band $[-\pi, +\pi]$. There is also a scale factor of L, which is necessary to make the inverse
Fourier transform come out right. Hence we need:
(3) $Y(e^{j\omega}) = \begin{cases} L\,X(e^{j\omega L}), & |\omega| \le \frac{\pi}{L} \\ 0, & \frac{\pi}{L} < |\omega| \le \pi \end{cases}$
To obtain Y from V, we need to pull out the first period of V and scale it by the factor L. This can be done
by using the ideal low-pass filter (LPF)
(4) $H(e^{j\omega}) \triangleq \begin{cases} L, & |\omega| \le \frac{\pi}{L} \\ 0, & \frac{\pi}{L} < |\omega| \le \pi \end{cases}$
Then $y(n) = v(n) * h(n)$ at rate $L/T$. Thus, the block diagram of the ideal interpolator is:

[Figure: x(n) → 1:L expander → v(n) → ideal LPF → y(n)]

Figure 1) The block diagram of the ideal interpolator

In practical designs we must replace the ideal LPF by a more practical finite-order filter. Any low-pass
filter can be used, but typically the filter is an FIR filter, because with an FIR filter one can do “random
access” on the data with no extra computational cost. This is a useful property if N is high, because in such
cases typically only a fraction of the samples in the oversampled signal are used.
When we replace H by an FIR filter, we must allow for a transition band; thus, the filter cannot be flat up
to $\pi/L$.
Define
$\omega_p$: the highest frequency of the signal band (of signal x(n)) to be preserved.
$\omega_s$: the full signal bandwidth of x(n), i.e., there is no energy in x(n) above this frequency.
Now, the filter specifications can be designed to pass the band $\left[-\frac{\omega_p}{L}, +\frac{\omega_p}{L}\right]$ and to have the stopband
start at $\frac{2\pi}{L} - \frac{\omega_s}{L}$. Hence, we have the following filter design problem:

(5) $\begin{cases} \left|\frac{1}{L} H(e^{j\omega}) - 1\right| \le \delta_1, & \omega \in \left[0, \frac{\omega_p}{L}\right] \\ \left|\frac{1}{L} H(e^{j\omega})\right| \le \delta_2, & \omega \in \left[\frac{2\pi}{L} - \frac{\omega_s}{L}, \pi\right] \end{cases}$
There are many different methods to design this FIR filter, but they are beyond the scope of
this paper.
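As an illustration only, the following MATLAB sketch carries out the zero-stuffing of Eq. (1) followed by a window-designed low-pass FIR filter with cutoff $\pi/L$ and passband gain L, as in Eq. (4). The factor L, filter order, and test signal are arbitrary assumptions, and fir1 requires the Signal Processing Toolbox.

L = 4;                          % assumed upsampling factor
n = 0:99;
x = sin(2*pi*0.05*n);           % assumed band-limited test signal
v = zeros(1, L*length(x));      % intermediate signal v(n) of Eq. (1)
v(1:L:end) = x;                 % insert L-1 zeros between samples
h = L * fir1(60, 1/L);          % FIR low-pass: cutoff pi/L, passband gain L
y = conv(v, h, 'same');         % y(n) = v(n)*h(n), the interpolated signal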
We know that when we use a finite-length sequence to design a low-pass FIR filter, the resulting filter
will have a transition band between the passband and the stopband. The filter also oscillates in the passband
and stopband, and it has sidelobes in the frequency domain which cause leakage of energy to other
frequencies. All of these affect the accuracy of the interpolation and introduce distortion into the result.
Besides, as seen in Figure 1, we have to filter v(n), which means we discard the energy of signal components
at frequencies above the cut-off frequency of the filter; this is another source of error.
Hence, this method is good when the signal is band-limited in the frequency domain, and we have to try to
design an LPF with smaller sidelobes and a smaller transition band. A slightly different interpretation of this
method, which is also called the “FFT method”, was given by Fraser [17].
3. The Concept of Interpolation in Numerical Analysis


In the field of numerical analysis, interpolation means creating an approximation of the continuous signal
from the information contained in the samples, and then sampling that approximation. A common interpolation
method is linear interpolation, where the continuous function is approximated as piecewise-linear by drawing
lines between successive samples. An even cruder form of interpolation is drop-sample interpolation, drawing a
horizontal line from each sample until the following sample. Both of these methods fall into the category of
piecewise polynomial interpolators. For modeling or interpolating a function, we can use polynomial models.
However, if the order of the polynomial is too small, it will not give a good approximation of the rapidly
varying parts of the function. Conversely, if the order is chosen too large, the estimated function may vary
too much at the other points. In other words, as shown in [4], with a polynomial approximation, “if the
function to be approximated is badly behaved anywhere in the interval of approximation, then the
approximation is poor everywhere”. However, by using low-order piecewise polynomials (for instance, 2nd- or
3rd-degree splines) we can approximate rapidly varying parts of a function well without affecting the other
parts of the function.
A polynomial interpolator can and should be thought of as a filter with a continuous-time impulse response
[5]. A non-discrete impulse response yields a non-periodic frequency response that has an overall descending
envelope. So the spectral sidelobes are attenuated by this continuous filter, making resampling a more sensible
process. Ideally there would be no sidelobes, as the continuous signal that we are dealing with is presumed
to be bandlimited to the range $\left[-\frac{f_s}{2}, \frac{f_s}{2}\right]$, where $f_s$ is the sampling frequency. The goal is to have the sidelobes
attenuated as much as possible. Polynomial interpolators don’t have a flat passband.
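To visualize this descending envelope, one can sample a kernel densely and take an FFT to approximate its continuous-time frequency response. The following MATLAB sketch does this for the linear interpolator of Eq. (7); the grid step and FFT size are arbitrary assumptions of mine.

dx = 0.01;                       % dense grid step, in sample intervals (assumed)
x  = -1:dx:1;
h  = 1 - abs(x);                 % linear-interpolator impulse response, Eq. (7)
Nf = 2^14;                       % FFT size (assumed)
H  = abs(fft(h, Nf)) * dx;       % approximate continuous-time frequency response
f  = (0:Nf-1)/(Nf*dx);           % frequency in cycles per sample interval
plot(f(1:Nf/8), 20*log10(H(1:Nf/8) + eps));   % note the descending sidelobes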
Piecewise polynomial interpolation in this context means that individual polynomials are created between
successive sample points. These interpolators can be classified both by the number of sample points required
from the neighborhood to calculate the value at a given position, and by the order of the polynomials.
Depending on the interpolator, the polynomial order is typically one less than the number of points, matching
the number of coefficients in the polynomial to the number of samples, but there are many exceptions to this
rule [5].
Here, I only consider interpolators that operate on an even number of points and thus have impulse
responses symmetrical around zero.
3.1. Some Known Piecewise Interpolators
The only piecewise polynomials that are symmetrical and, thus, do not introduce phase distortions are
those corresponding to the case where the order of the polynomials is odd and the number of constituent
samples on either side of the interpolation interval is the same [9]. So, in the following sections, I will
show only some examples of odd-order piecewise polynomials.
3.1.1. Drop-sample, linear, B-spline
B-splines are a family of interpolators that can be constructed by repeatedly convolving a drop-sample
interpolator with itself [5].
When contrasted with local or running polynomial fits, the use of B-spline functions, which are
piecewise polynomials as well, has a number of advantages [6]. First, higher-order polynomials
tend to oscillate while spline functions are usually smooth and well-behaved. Second, the juxtaposition of
local polynomial approximations may produce strong discontinuities in the connecting regions. B-spline
surfaces, by contrast, are continuous everywhere. The polynomial segments are patched together so that the
interpolating function and all derivatives up to order n-1 are continuous at all joining intersections. Third,
there is exactly one B-spline coefficient associated with each grid point and the range of these coefficients is
of the same order of magnitude as that of the initial gray level values [7]. Finally, either exact or approximate
B-spline signal representations can be evaluated quite efficiently [6].
Drop-sample is the 0th-order B-spline, which operates on only one point; its impulse response is:
$f(x) = \begin{cases} 1, & 0 \le x < 1 \\ 0, & \text{otherwise} \end{cases}$

(6) Drop-sample interpolator impulse response
The first three odd-order B-spline impulse responses are

$f(x) = \begin{cases} 1 - x, & 0 \le x < 1 \\ 0, & 1 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(7) 2-point, 1st-order linear interpolator impulse response

$f(x) = \begin{cases} \frac{2}{3} - x^2 + \frac{1}{2}x^3, & 0 \le x < 1 \\ \frac{4}{3} - 2x + x^2 - \frac{1}{6}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(8) 4-point, 3rd-order B-spline impulse response

$f(x) = \begin{cases} \frac{11}{20} - \frac{1}{2}x^2 + \frac{1}{4}x^4 - \frac{1}{12}x^5, & 0 \le x < 1 \\ \frac{17}{40} + \frac{5}{8}x - \frac{7}{4}x^2 + \frac{5}{4}x^3 - \frac{3}{8}x^4 + \frac{1}{24}x^5, & 1 \le x < 2 \\ \frac{81}{40} - \frac{27}{8}x + \frac{9}{4}x^2 - \frac{3}{4}x^3 + \frac{1}{8}x^4 - \frac{1}{120}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(9) 6-point, 5th-order B-spline impulse response
These impulse and frequency responses are shown in Figure 2.
Please note that the higher-order B-splines don’t have zero-crossings at integer x, so the interpolated curve
will not necessarily go through the points.

Figure 2) The impulse and frequency responses of the first three odd-order B-splines, including the linear interpolator.

It is seen that signal components at frequencies that are multiples of 2π are attenuated heavily, but those
at frequencies that are multiples of π are not attenuated much. Besides, we observe that a higher order means
smaller sidelobes and, as a result, better interpolation. So, when we use more neighboring points to estimate
the value of the signal at the point we are looking for, i.e., we have a higher-order polynomial, we get a
better result, and the associated filter has smaller sidelobes.
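For a quick experiment along these lines, MATLAB’s built-in interp1 provides drop-sample, linear, and cubic-spline interpolation directly; the test signal and grids below are my assumptions, and note that the 'spline' option is an interpolating cubic spline, not the B-spline filter of Eq. (8).

t  = 0:10;                          % coarse sample locations (assumed)
x  = t.*sin(t) + 3*sqrt(t);         % the test signal used in Section 7
ti = 0:0.1:10;                      % fine grid to interpolate onto
y0 = interp1(t, x, ti, 'previous'); % drop-sample (0th-order)
y1 = interp1(t, x, ti, 'linear');   % piecewise-linear (1st-order)
y3 = interp1(t, x, ti, 'spline');   % interpolating cubic spline
plot(ti, y0, ti, y1, ti, y3, t, x, 'o');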
3.1.2. Lagrange
Lagrange polynomials are forced to go through a number of points. For instance, the 4-point Lagrange
interpolator polynomial is formed so that it goes through all four neighboring points, and the middle
section is used [5]. The 1st-order (2-point) Lagrange interpolator is the linear interpolator, which was already
presented as part of the B-spline family. The order of the Lagrange polynomial is always one less than the
number of points. The 3rd- and 5th-order Lagrange impulse responses are:
$f(x) = \begin{cases} 1 - \frac{1}{2}x - x^2 + \frac{1}{2}x^3, & 0 \le x < 1 \\ 1 - \frac{11}{6}x + x^2 - \frac{1}{6}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(10) 4-point, 3rd-order Lagrange impulse response

$f(x) = \begin{cases} 1 - \frac{1}{3}x - \frac{5}{4}x^2 + \frac{5}{12}x^3 + \frac{1}{4}x^4 - \frac{1}{12}x^5, & 0 \le x < 1 \\ 1 - \frac{13}{12}x - \frac{5}{8}x^2 + \frac{25}{24}x^3 - \frac{3}{8}x^4 + \frac{1}{24}x^5, & 1 \le x < 2 \\ 1 - \frac{137}{60}x + \frac{15}{8}x^2 - \frac{17}{24}x^3 + \frac{1}{8}x^4 - \frac{1}{120}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(11) 6-point, 5th-order Lagrange impulse response

Figure 3) The impulse and frequency responses of the first three odd-order Lagrange interpolators, including the linear interpolator.

It is seen that the sidelobes don’t get much lower when the order increases, but the passband behavior
improves.
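As a hedged sketch (the function name and test values below are mine, not the paper’s), the 4-point Lagrange kernel of Eq. (10) can be evaluated directly to resample at a fractional instant:

lagr4 = @(u) (abs(u)<1)  .* (1 - abs(u)/2 - abs(u).^2 + abs(u).^3/2) ...
     + (abs(u)>=1 & abs(u)<2) .* (1 - 11*abs(u)/6 + abs(u).^2 - abs(u).^3/6);
n  = 0:20;  xn = sin(0.4*n);        % assumed example samples
t  = 7.3;                           % fractional resampling instant (assumed)
k  = floor(t) + (-1:2);             % the four neighboring sample indices
yt = sum(xn(k+1) .* lagr4(t - k));  % interpolated value at t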
3.1.3. Hermite (1st-order-osculating)
Hermite interpolation may be considered the type of osculatory interpolation where the derivatives up to
some degree are not only supposed to be continuous everywhere but are also required to assume prespecified
values at the sample points. Hermite interpolators have a continuous first derivative.
Here are the impulse responses of some Hermite interpolators.
$f(x) = \begin{cases} 1 - \frac{5}{2}x^2 + \frac{3}{2}x^3, & 0 \le x < 1 \\ 2 - 4x + \frac{5}{2}x^2 - \frac{1}{2}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(12) 4-point, 3rd-order Hermite impulse response (also known as the Catmull-Rom spline)

$f(x) = \begin{cases} 1 - \frac{7}{3}x^2 + \frac{4}{3}x^3, & 0 \le x < 1 \\ \frac{5}{2} - \frac{59}{12}x + 3x^2 - \frac{7}{12}x^3, & 1 \le x < 2 \\ -\frac{3}{2} + \frac{7}{4}x - \frac{2}{3}x^2 + \frac{1}{12}x^3, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(13) 6-point, 3rd-order Hermite impulse response
$f(x) = \begin{cases} 1 - \frac{25}{12}x^2 + \frac{5}{12}x^3 + \frac{13}{12}x^4 - \frac{5}{12}x^5, & 0 \le x < 1 \\ 1 + \frac{5}{12}x - \frac{35}{8}x^2 + \frac{35}{8}x^3 - \frac{13}{8}x^4 + \frac{5}{24}x^5, & 1 \le x < 2 \\ 3 - \frac{29}{4}x + \frac{155}{24}x^2 - \frac{65}{24}x^3 + \frac{13}{24}x^4 - \frac{1}{24}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(14) 6-point, 5th-order Hermite impulse response
Figure 4) The impulse and frequency responses of the 3rd-order (4-point and 6-point) and 5th-order Hermite interpolators.

An interesting feature of the Hermite frequency responses is that the largest sidelobes (compare, for
example, with the frequency responses of the Lagrange interpolators) have been “punctured”.
3.1.4. 2nd-order-osculating
The definition of the 2nd-order-osculating interpolator is the same as that of the Hermite interpolators,
but in this case the second derivatives are also matched with the Lagrangians. The order of the interpolation
polynomial must be at least 5 [5].
Here are the impulse responses of some 2nd-order-osculating interpolators.
$f(x) = \begin{cases} 1 - x^2 - \frac{9}{2}x^3 + \frac{15}{2}x^4 - 3x^5, & 0 \le x < 1 \\ -4 + 18x - 29x^2 + \frac{43}{2}x^3 - \frac{15}{2}x^4 + x^5, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(15) 4-point, 5th-order 2nd-order-osculating impulse response

$f(x) = \begin{cases} 1 - \frac{5}{4}x^2 - \frac{35}{12}x^3 + \frac{21}{4}x^4 - \frac{25}{12}x^5, & 0 \le x < 1 \\ -4 + \frac{75}{4}x - \frac{245}{8}x^2 + \frac{545}{24}x^3 - \frac{63}{8}x^4 + \frac{25}{24}x^5, & 1 \le x < 2 \\ 18 - \frac{153}{4}x + \frac{255}{8}x^2 - \frac{313}{24}x^3 + \frac{21}{8}x^4 - \frac{5}{24}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}$

(16) 6-point, 5th-order 2nd-order-osculating impulse response

Figure 5) The impulse and frequency responses of the 2nd-order-osculating interpolators (4-point and 6-point).

3.2. Shannon Sampling Theory


We know that if a signal has finite energy, the minimum sampling rate is equal to two samples per period
of the highest frequency component of the signal. Specifically, if the highest frequency component of the
signal is B hertz, then the signal x(t) can be recovered from the samples by [8]:

(17) $x(t) = \frac{1}{\pi}\sum_{n=-\infty}^{\infty} x\!\left(\frac{n}{2B}\right)\frac{\sin[\pi(2Bt - n)]}{2Bt - n}$

The frequency B is also referred to as the signal’s bandwidth. The series in (17) is called the “cardinal
series” [8], but it has many other names, including the Whittaker-Shannon sampling theorem [Goodman
(1968)] and the Whittaker-Shannon-Kotel’nikov-Kramer sampling theorem [Jerri (1977)].
3.2.1. Sources of Error


Exact interpolation using the cardinal series assumes that (a) the values of the samples are known exactly,
(b) the sample locations are known exactly, and (c) an infinite number of terms are used in the series.
Deviating from these requirements results in interpolation error due to (a) data noise, (b) jitter, and
(c) truncation, respectively.
3.2.1.1. Effects of additive noise
Suppose that the signal we sample is corrupted by real additive zero-mean wide-sense-stationary noise,
$\xi(t)$. Then, instead of sampling the deterministic bandlimited signal, we would be sampling the signal
$y(t) = x(t) + \xi(t)$. From these samples, we form the series

(18) $z(t) = \sum_{n=-\infty}^{\infty} y\!\left(\frac{n}{2W}\right)\operatorname{sinc}(2Wt - n) = x(t) + \underbrace{\sum_{n=-\infty}^{\infty} \xi\!\left(\frac{n}{2W}\right)\operatorname{sinc}(2Wt - n)}_{\eta(t)}$

Here $\eta(t)$ is the stochastic process generated by the samples of $\xi(t)$ alone, and it is independent of
the signal. Note that since $\xi(t)$ is zero-mean, so is $\eta(t)$. Hence, taking the expectation of both sides
of (18) leads us to the desirable conclusion that

(19) $E[z(t)] = x(t)$
A meaningful measure of the cardinal series’ noise sensitivity is the interpolation noise level, which is the
noise variance, $\overline{\eta^2(t)}$. It can easily be shown [8] that $\overline{\eta^2} = \overline{\xi^2}$. That is, the cardinal series interpolation
results in a noise level equal to that of the original signal before sampling [Bracewell]. In many cases, one
can reduce the interpolation noise level by oversampling and filtering. If, for example, we pass z(t) in (18)
through a filter that is unity for $|f| \le B$ and zero otherwise, then the output is

(20) $z_r(t) = z(t) * 2B\operatorname{sinc}(2Bt) = x(t) + \underbrace{\eta(t) * 2B\operatorname{sinc}(2Bt)}_{\eta_r(t)}$

It is proved [8] that $\overline{\eta_r^2} \le \overline{\eta^2} = \overline{\xi^2}$.

3.2.1.2. Jitter
Jitter occurs when samples are taken near to but not exactly at the desired sample locations. The cardinal
series interpolation of jittered samples yields a biased estimate of the original signal. Although the bias can
be corrected with an inverse filter, the resulting interpolation noise variance is increased [8].
3.2.1.3. Truncation Error
The cardinal series requires an infinite number of terms to exactly interpolate a bandlimited signal from
its samples. In practice, only a finite number of terms can be used. We can write the truncated cardinal series
approximation of x(t) as

(21) $x_N(t) = \sum_{n=-N}^{N} x\!\left(\frac{n}{2B}\right)\operatorname{sinc}(2Bt - n)$

The error resulting from using a finite number of terms is referred to as the truncation error,
$e_N(t) = x(t) - x_N(t)$. It can be shown that [Papoulis (1966)]:

(22) $e_N^2(t) \le \frac{4NB/\pi^2}{N^2 - (2Bt)^2}\,(E - E_N)\sin^2(2\pi Bt), \qquad |t| < \frac{N}{2B}$

where E is the energy of x(t) and $E_N$ is the energy of $x_N(t)$.
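A minimal MATLAB sketch of the truncated series (21) follows; the bandwidth, term count, and test samples are my assumptions, and sinc here is the Signal Processing Toolbox function with sinc(u) = sin(πu)/(πu).

B  = 1;                          % assumed bandwidth in Hz
N  = 20;                         % number of terms kept on each side
n  = -N:N;
xn = cos(2*pi*0.3*(n/(2*B)));    % assumed samples of a band-limited signal
t  = linspace(-5, 5, 1000);
xN = zeros(size(t));
for k = 1:numel(n)               % accumulate Eq. (21) term by term
    xN = xN + xn(k) * sinc(2*B*t - n(k));
end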
4. Comparison of Interpolation Using the Shannon Sampling Theorem and Piecewise Polynomial Interpolators
The main difference between them is that with the Shannon sampling theorem we assume that the given
data is band-limited to some band of frequencies and develop schemes that are optimal on this basis, while
a numerical analyst typically assumes that the data are samples of polynomials (or very nearly so) and
develops schemes to minimize the resulting error.
Please note that the sources of error discussed in 3.2.1.1 and 3.2.1.2 exist for piecewise polynomial
interpolators as well. Besides, for piecewise polynomial interpolators, since we use only the values of a
finite number of neighboring points to compute each missing value, there is another source of
error.

5. Interpolation Using Linear Prediction


Another approach to interpolation is to consider the data as a realization of a random process at given spatial
locations and to make inferences about the process from the observed values by means of a statistically optimal
predictor. Here, “optimal” refers to minimizing the mean-squared prediction error, which requires a model of
the covariance between the data points. So, it does not have an interpretation in the frequency domain. This
approach to interpolation is generally known as Kriging [10]. In Kriging, a crucial role is played by the
variogram, a diagram of the variance of the difference between the measurements at two input locations [11].
A unique feature of Kriging is that it provides an estimate of the error at each interpolated point, providing
a measure of confidence in the modeled surface. The effectiveness of Kriging depends on the correct
specification of several parameters that describe the semivariogram and the model of the drift (i.e., whether
the mean value changes over distance). Because Kriging is a robust interpolator, even a naive selection of
parameters will provide an estimate comparable to many other grid estimation procedures. The trade-off for
estimating the optimal solution for each point by Kriging is computation time. Given the additional trial-and-error
time necessary to select appropriate parameters, Kriging should be applied where best estimators are
required, data quality is good, and error estimates are essential [16]. Besides, in this method the samples need
not be uniformly spaced.
5.1. Formal Model for Kriging
A random process Z(·) can be described by $\{Z(s): s \in D\}$, where D is a fixed subset of $\mathbb{R}^d$ and Z(s) is a
random function at location $s \in D$. There are several types of Kriging, such as “Ordinary Kriging”, “Simple
Kriging”, and “Universal Kriging”, but I limit this subsection to Ordinary Kriging, which makes the
following two assumptions [11]:
(i) The model assumption is that the random process consists of a constant $\mu$ and an error term $\delta(s)$:

(23) $Z(s) = \mu + \delta(s)$ with $s \in D,\ \mu \in \mathbb{R}$

(ii) The predictor assumption is that the predictor for the point $s_0$, denoted by $p(Z(s_0))$, is a weighted
linear function of all the observed output data:

(24) $p(Z(s_0)) = \sum_{i=1}^{n} \lambda_i Z(s_i)$ with $\sum_{i=1}^{n} \lambda_i = 1$

To select the weights $\lambda_i$ in (24), the criterion is minimal mean-squared prediction error, $\sigma_e^2$, defined as

(25) $\sigma_e^2 \equiv E\!\left[\{Z(s_0) - p(Z(s_0))\}^2\right]$
To minimize (25) given (24), let m be the Lagrange multiplier ensuring $\sum_{i=1}^{n}\lambda_i = 1$; then we can write the
prediction error as

(26) $E\!\left[Z(s_0) - \sum_{i=1}^{n}\lambda_i Z(s_i)\right]^2 - 2m\!\left[\sum_{i=1}^{n}\lambda_i - 1\right]$
Minimizing (26) gives us the optimal $\lambda_1, \ldots, \lambda_n$ [11]:

(27) $\lambda^T = \left(\gamma + \mathbf{1}\,\frac{1 - \mathbf{1}^T\Gamma^{-1}\gamma}{\mathbf{1}^T\Gamma^{-1}\mathbf{1}}\right)^{\!T}\Gamma^{-1} \quad\text{and}\quad m = -\,\frac{1 - \mathbf{1}^T\Gamma^{-1}\gamma}{\mathbf{1}^T\Gamma^{-1}\mathbf{1}}$

where $\gamma$ denotes the vector $[\gamma(s_0 - s_1), \ldots, \gamma(s_0 - s_n)]^T$ and $\Gamma$ denotes the $n \times n$ matrix whose $(i,j)$th
element is $\gamma(s_i - s_j)$.
The optimal weights (27) give the minimal mean-squared prediction error; (25) becomes

(28) $\sigma_e^2 = \gamma^T\Gamma^{-1}\gamma - \frac{\left(\mathbf{1}^T\Gamma^{-1}\gamma - 1\right)^2}{\mathbf{1}^T\Gamma^{-1}\mathbf{1}}$
However, in (27) and (28) $\gamma(h)$ is unknown. Usually it is estimated by

(29) $2\hat{\gamma}(h) = \frac{1}{|N(h)|}\sum_{N(h)}\left(Z(s_i) - Z(s_j)\right)^2$

where $|N(h)|$ denotes the number of distinct pairs in $N(h) = \{(s_i, s_j): s_i - s_j = h;\ i, j = 1, \ldots, n\}$. The estimator
(29) is unbiased if the process Z(·) is indeed second-order stationary [12].
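As a hedged illustration of Eq. (27), where the locations, data, and exponential semivariogram model are my assumptions rather than values from the paper, the ordinary-kriging weights and prediction can be computed in MATLAB as:

s   = [0 1 2 4];                         % assumed observation locations
Z   = [1.0 1.3 0.7 0.9]';                % assumed observed values Z(si)
s0  = 1.5;                               % prediction location (assumed)
gam = @(h) 1 - exp(-abs(h));             % assumed semivariogram model gamma(h)
G   = gam(s' - s);                       % Gamma: (i,j) element gamma(si - sj)
g   = gam(s0 - s)';                      % vector gamma(s0 - si)
e   = ones(numel(s), 1);                 % the vector of ones ("1" in Eq. (27))
lam = G \ (g + e*(1 - e'*(G\g))/(e'*(G\e)));  % kriging weights, Eq. (27)
p   = lam' * Z;                          % predictor p(Z(s0)) of Eq. (24)

By construction the weights sum to one, satisfying the constraint in Eq. (24).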

6. Non-uniform Sampling
Irregular or non-uniform sampling constitutes another whole area of research. There are essentially two
strategies for finding a solution [13].
6.1. Irregular Sampling in Shift-Invariant Spaces
Here, we consider the same kind of shift-invariant spaces as in the uniform case and fit the model to the
measurements [14].
6.2. Non-Uniform Splines and Radial Basis Functions
Here, we define new basis functions (or spaces) that are better suited to the non-uniform structure of the
problem, for instance, non-uniform splines [4,15].
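For a quick non-uniform experiment (illustrative only; the random locations are my assumptions), note that MATLAB’s interp1 accepts irregularly spaced sample locations directly:

tn = sort(10*rand(1,25));            % irregular sampling instants (assumed)
xn = tn.*sin(tn) + 3*sqrt(tn);       % the Section 7 test signal at those instants
ti = 0:0.05:10;
yi = interp1(tn, xn, ti, 'spline');  % spline interpolation on a non-uniform grid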

7. Simulation
I used MATLAB to simulate some of these methods. In the simulation, we have to note that since MATLAB
stores and treats any signal as a data vector, when we are dealing with continuous signals or discrete-time
signals with different sampling rates, we have to have the same number of data points per time unit for all
the signals; so if they have different sampling rates, we can fill the missing data in each time unit with
zeros. After this zero padding, all the signals have the same number of data points per time unit and we can
work with them easily. For example, Script 1 shows the MATLAB script I wrote to do the simulation for
piecewise polynomial interpolators. In this script, I tried to recover the signal $t\sin(t) + 3\sqrt{t}$ in the time
interval $0 \le t \le 10$. First, I took some samples from the original signal; then I down-sampled the resulting
discrete-time signal and used the resulting data points with several different piecewise polynomial
interpolators to recover the original signal by convolving the interpolator with the known data points. Since
we are using the convolution operation and both of our signals have finite duration, the $(NOP/2)\cdot F_s$ points at
the beginning and the end of the recovered signals are not correct, and as you can see in the MATLAB script,
I got rid of them (NOP: the number of points used in the interpolator; $F_s$: sampling frequency of the original
signal). Then, I computed the average error between the data points I estimated and the real data points that
I had before the down-sampling.
The error measure I used is

$\mathrm{RDM} = \sum_{i=1}^{N}(\hat{x}_i - x_i)^2 \Big/ \sum_{i=1}^{N} x_i^2$

where $x_i$ is the real signal data point, $\hat{x}_i$ is the estimated data point, and N is the number of data points we
estimated. Please note that we are recovering an analog signal, so we should use an integral instead of a sum
to compute the average error; but, as I said, all signals in MATLAB are discrete, so I used the sum. Figure 6
shows some of the simulation results.
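In modern MATLAB this error measure is a one-liner (a hypothetical helper, not part of the original script):

rdm = @(xhat, x) sum((xhat - x).^2) / sum(x.^2);   % RDM of the equation above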
% Note: Fs (fine-grid samples per time unit), NOP (number of points used by the
% interpolator f), and the kernel f itself must be defined before running; the
% values below are illustrative assumptions.
Fs = 4; NOP = 2; f = 1 - abs(-Fs:Fs)/Fs; % e.g. 2-point linear kernel sampled at Fs
t = [0:0.35:10]; x = t.*(sin(t)) + 3.*t.^(1/2); % original signal samples
[w,n] = dnsample(x,2); [vv,n] = upsample(x,Fs/2); [v,n] = upsample(w,Fs);
y = conv(v,f); % interpolate by convolving the kernel with the known data points
yy = y((NOP/2)*Fs+1:size(y,2)-(NOP/2)*Fs); % drop the incorrect edge points
figure(1); hold on; tt = n/(max(n)/max(t)); plot(tt,yy,'b-');
stem(tt(find(vv~=0)),vv(find(vv~=0)),'r:'); stem(tt(find(v~=0)),v(find(v~=0)),'g-.');
% vv: Real values, v: Known values, yy: Interpolated function
function [v,n] = upsample(x,L) % zero-stuff: insert L-1 zeros (save as upsample.m)
lenx = length(x); x = reshape(x,1,lenx); lenv = lenx*L-(L-1); n = [0:lenv-1]; v = zeros(1,lenv); v(1:L:end) = x;
function [w,n] = dnsample(x,M) % keep every M-th sample (save as dnsample.m)
lenx = length(x); x = reshape(x,1,lenx); w = x(1:M:end); lenw = length(w); n = [0:lenw-1];
Script 1) The MATLAB script I used to simulate some of the piecewise polynomial interpolators

Figure 6) Some of the simulation results.

8. Summary
In this paper, I discussed some of the methods used for interpolation, but there are lots of other
methods that I did not mention here. People with different backgrounds and different points of view have
worked on these methods and found them useful in different areas. So, it is impossible to give a general
comparison of them and say which one is the best. The choice of method depends on the case we are dealing
with, and I don’t think there is any guarantee that the chosen method will be the best possible one. Some
people have compared some of these methods in their papers. For instance, Fraser says [17] “unlike small-
kernel convolution methods, which have poor accuracy anywhere near the Nyquist limit, the accuracy of the
FFT method is maintained to high frequencies, very close to Nyquist”. But we shouldn’t forget that both of
those types of methods have been improved considerably since then.

p.s.: I was once asked about the accuracy of a naive approach using the shift property of the DFT with a
non-integer shift. This method is mathematically wrong and does not work, because in the proof of the shift
property of the DFT, it is an essential assumption that the shift is an integer [2, 3]. I tried to test it in
practice, and noticed that it works (to some extent!) only in the case that the non-integer shift is very small
in comparison with the DFT period (less than 10%) and the original signal is smooth. We can say that in this
case, the DFT coefficients, which are actually uniformly spaced samples on the unit circle, have moved very
little, and so the IDFT gives us the values of points that are very close to the original samples. But it is
not useful, because when we know that the signal is smooth and we want to find the values of samples very
close to the known samples, it is obvious that the resulting values will be very close to the values of the
original samples anyway!

Bibliography
[1] J.S. Lim and A.V. Oppenheim, “Advanced Topics in Signal Processing,” Prentice Hall, 1988.
[2] J.G. Proakis and D.G. Manolakis, “Digital signal processing: principles, algorithms and applications,” Third
Edition, Prentice Hall, 1996.
[3] A. V. Oppenheim and R. W. Schafer, “Discrete Time Signal Processing,” Prentice Hall, 1989.
[4] Carl de Boor, “A Practical Guide to Splines,” Springer-Verlag, 1978.
[5] Olli Niemitalo, “Polynomial Interpolators for High-Quality Resampling of Oversampled Audio,”
http://www.student.oulu.fi/~oniemita/DSP/INDEX.HTM
[6] M. Unser, A. Aldroubi and M. Eden, ”B-Spline Signal Processing: Part I-Theory,” IEEE Transactions on Signal
Processing, vol. 41, no. 2, pp. 821-833, February 1993.
[7] C. de Boor, “A practical Guide to Splines,” New York: Springer-Verlag, 1978.
[8] Robert J. Marks II, “Introduction to Shannon Sampling Theory,” New York: Springer-Verlag. 1991.
[9] R.W. Schafer and L.R. Rabiner, “A Digital Signal Processing Approach to Interpolation,” Proc. IEEE, vol. 61, pp.
692-702, June 1973.
[10] Erik Meijering, “A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image
Processing,” Proceedings of the IEEE, Vol. 90, pp. 319-342, March 2002.
[11] Win C.M. van Beers and Jack P.C. Kleijnen, “Kriging for Interpolation in Random Simulation,” Discussion paper,
ISSN 0924-7815.
[12] Cressie, “Statistics for Spatial Data,” New York: Wiley, 1993.
[13] Michael Unser, “Sampling-50 Years After Shannon,” Proceedings of the IEEE, Vol. 88, No. 4, April 2000.
[14] J.J. Benedetto and W. Heller, “Irregular Sampling and the Theory of Frames,” Math. Note, Vol. 10, pp. 103-125,
1990.
[15] L.L. Schumaker, “Spline functions: Basic Theory”, New York, Wiley, 1981.
[16] “Kriging Interpolation”, http://lazarus.elte.hu/hun/digkonyv/havas/mellekl/vm25/vma07.pdf
[17] Donald Fraser, “Interpolation by the FFT Revisited – An Experimental Investigation,” IEEE Transactions on
Acoustics, Speech, and Signal Processing, Vol. 37, No. 5, pp. 665-675, 1989.
[18] R.W. Schafer and L.R. Rabiner, “A Digital Signal Processing Approach to Interpolation,” Proc. IEEE, vol. 61, pp.
692-702, June 1973.
