Interpolation in Digital Signal Processing and Numerical Analysis
Saeed Babaeizadeh
Spring 2003
1. Introduction
In many practical applications of digital signal processing, one is faced with the problem of changing the
sampling rate of a signal, either increasing it or decreasing it by some amount. An increase in the sampling
rate by an integer factor L can be accomplished by interpolating L-1 new samples between successive values
of the signal. The interpolation process can be accomplished in a variety of ways in both digital signal processing and numerical analysis.
There is one very simple and straightforward approach to changing the sampling rate of a digital signal.
This approach, called the analog approach, merely reconstructs the continuous-time signal from the original
set of samples and then resamples the signal at the new rate. In practice this approach suffers from one major problem, namely, that the ideal operations required to reconstruct the continuous-time signal from the original samples and to resample the signal at the new rate cannot be implemented exactly [1]. An alternative
is the so-called direct digital approach that does the job without going through D/A or A/D conversion.
If v(n) denotes the signal obtained from x(n) by inserting L − 1 zeros between successive samples, it is known [1,2,3,18] that their Fourier transforms are related as:

(2) V(e^{j\omega}) = X(e^{j\omega L})
Equation (2) says that the Fourier transform of v(n) is the same as that of x(n), except that the frequency scale is compressed by the factor L.
Now, the Fourier transform Y(e^{jω}) that we require should be the same as one period of X(e^{jω}), but compressed into an L-th of the full band [−π, +π]. There is also a scale factor of L, which is necessary to make the inverse Fourier transform come out right. Hence we need:
(3) Y(e^{j\omega}) = \begin{cases} L\,X(e^{j\omega L}), & |\omega| \le \pi/L \\ 0, & \pi/L < |\omega| \le \pi \end{cases}
To obtain Y from V, we need to pull out the first period of V and scale it by the factor L. This can be done using the ideal low-pass filter (LPF)
(4) H(e^{j\omega}) = \begin{cases} L, & |\omega| \le \pi/L \\ 0, & \pi/L < |\omega| \le \pi \end{cases}
Then y(n) = v(n) ∗ h(n) at rate L/T. Thus, the block diagram of the ideal interpolator is:
[Figure 1: up-sampler by L followed by the ideal LPF H(e^{jω}).]
In practical designs we must replace the ideal LPF by a more practical finite-order filter. Any low-pass filter can be used, but typically the filter is an FIR filter, because with an FIR filter one can do “random access” on the data with no extra computational cost. This is a useful property when the interpolation factor is high, because in such cases typically only a fraction of the samples in the oversampled signal are used.
When we replace H by an FIR filter, we must allow for a transition band; thus, the filter cannot be flat up to π/L.
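To make this structure concrete, here is a small sketch in Python/NumPy (the paper's own experiments use MATLAB; the function names and the 81-tap Hamming-windowed design below are my own illustrative choices, not from the text) of the up-sampler followed by a finite-order approximation of the ideal LPF of (4):

```python
import numpy as np

def upsample(x, L):
    """Insert L - 1 zeros between successive samples (the expander)."""
    v = np.zeros(len(x) * L)
    v[::L] = x
    return v

def windowed_sinc_lpf(L, num_taps=81):
    """Windowed-sinc approximation of the ideal LPF of Eq. (4):
    passband gain L, cutoff pi/L; the Hamming window supplies the
    transition band discussed above."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(num_taps)  # truncated, windowed ideal response
    return h * (L / h.sum())                   # normalize DC gain to exactly L

# interpolate a slowly varying sinusoid by L = 4
L = 4
x = np.sin(2 * np.pi * 0.05 * np.arange(32))
v = upsample(x, L)
y = np.convolve(v, windowed_sinc_lpf(L), mode="same")
```

Away from the edges, y closely tracks the underlying continuous sinusoid sampled at the higher rate, which is exactly what the ideal interpolator of the block diagram would produce.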
Define
ωp: the highest frequency of the signal band (of the signal x(n)) to be preserved.
ωs: the full signal bandwidth of x(n), i.e. there is no energy in x(n) above this frequency.
Now, the filter specifications can be designed to pass the band [−ωp/L, +ωp/L] and to have the stopband start at (2π − ωs)/L.
f(x) = \begin{cases} 1, & 0 \le x < 1 \\ 0, & \text{otherwise} \end{cases}

(6) Drop-sample interpolator impulse response
The first three odd-order B-spline impulse responses are

f(x) = \begin{cases} 1 - x, & 0 \le x < 1 \\ 0, & 1 \le x \\ f(-x), & \text{otherwise} \end{cases}

(7) 2-point, 1st-order linear interpolator impulse response

f(x) = \begin{cases} \frac{2}{3} - x^2 + \frac{1}{2}x^3, & 0 \le x < 1 \\ \frac{4}{3} - 2x + x^2 - \frac{1}{6}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}

(8) 4-point, 3rd-order B-spline impulse response
f(x) = \begin{cases} \frac{11}{20} - \frac{1}{2}x^2 + \frac{1}{4}x^4 - \frac{1}{12}x^5, & 0 \le x < 1 \\ \frac{17}{40} + \frac{5}{8}x - \frac{7}{4}x^2 + \frac{5}{4}x^3 - \frac{3}{8}x^4 + \frac{1}{24}x^5, & 1 \le x < 2 \\ \frac{81}{40} - \frac{27}{8}x + \frac{9}{4}x^2 - \frac{3}{4}x^3 + \frac{1}{8}x^4 - \frac{1}{120}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}

(9) 6-point, 5th-order B-spline impulse response
These impulse and frequency responses are shown in Figure 2.
Please note that the higher-order B-splines don’t have zero-crossings at integer x, so the interpolated curve will not necessarily go through the points.
Figure 2) The impulse and frequency responses of the first three odd-order B-splines, including the linear interpolator.
It is seen that signal components whose frequencies are multiples of 2π are attenuated heavily, but those whose frequencies are multiples of π are not attenuated much. We also observe that higher order means smaller sidelobes and, as a result, better interpolation. So when we use more neighboring points to estimate the value of the signal at the point we are looking for, i.e. a higher-order polynomial, we get a better result, and the associated filter has smaller sidelobes.
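As a quick check of these formulas, the cubic kernel (8) can be coded directly. The Python sketch below (function names mine) evaluates the kernel, confirms the non-interpolating property just noted — the kernel is 1/6, not 0, at the neighboring integers — and verifies the partition-of-unity property of B-splines:

```python
def bspline3(x):
    """4-point, 3rd-order B-spline impulse response, Eq. (8)."""
    x = abs(x)
    if x < 1:
        return 2/3 - x**2 + x**3 / 2
    if x < 2:
        return 4/3 - 2*x + x**2 - x**3 / 6
    return 0.0

# the kernel is 1/6 (not 0) at x = 1, so the interpolated curve
# does not have to pass through the sample points
center, neighbour = bspline3(0), bspline3(1)

# shifted copies of the kernel sum to 1 at any point (partition of unity)
unity = sum(bspline3(0.3 - n) for n in range(-2, 3))
```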
3.1.2. Lagrange
Lagrange polynomials are forced to go through a number of points. For instance, the 4-point Lagrange
interpolator polynomial is formed so that it goes through all of the four neighboring points, and the middle
section is used [5]. The 1st-order (2-point) Lagrange interpolator is the linear interpolator, which was already
presented as a part of the B-spline family. The order of the Lagrange polynomial is always one less than the
number of points. The third and 5th-order Lagrange impulse responses are:
f(x) = \begin{cases} 1 - \frac{1}{2}x - x^2 + \frac{1}{2}x^3, & 0 \le x < 1 \\ 1 - \frac{11}{6}x + x^2 - \frac{1}{6}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}

(8) 4-point, 3rd-order Lagrange impulse response

f(x) = \begin{cases} 1 - \frac{1}{3}x - \frac{5}{4}x^2 + \frac{5}{12}x^3 + \frac{1}{4}x^4 - \frac{1}{12}x^5, & 0 \le x < 1 \\ 1 - \frac{13}{12}x - \frac{5}{8}x^2 + \frac{25}{24}x^3 - \frac{3}{8}x^4 + \frac{1}{24}x^5, & 1 \le x < 2 \\ 1 - \frac{137}{60}x + \frac{15}{8}x^2 - \frac{17}{24}x^3 + \frac{1}{8}x^4 - \frac{1}{120}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}

(9) 6-point, 5th-order Lagrange impulse response
Figure 3) The impulse and frequency responses of the first three odd-order Lagrange interpolators, including the linear interpolator.
It is seen that the sidelobes don’t get much lower when the order increases, but the passband behavior
improves.
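The interpolating property is easy to verify numerically. Here is a short Python sketch (function names mine) of the 4-point Lagrange kernel (8), checking that it is 1 at the origin, 0 at the other integers, and that on the middle interval it reproduces a cubic polynomial exactly:

```python
def lagrange4(x):
    """4-point, 3rd-order Lagrange impulse response, Eq. (8)."""
    x = abs(x)
    if x < 1:
        return 1 - x/2 - x**2 + x**3 / 2
    if x < 2:
        return 1 - 11*x/6 + x**2 - x**3 / 6
    return 0.0

def interp(samples, t):
    """Interpolate unit-spaced samples at position t with the kernel."""
    return sum(s * lagrange4(t - n) for n, s in enumerate(samples))

# samples of t**3 at t = 0, 1, 2, 3; on the middle section the
# 4-point Lagrange interpolator reproduces the cubic exactly
mid = interp([0.0, 1.0, 8.0, 27.0], 1.5)
```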
3.1.3. Hermite (1st-order-osculating)
Hermite interpolation may be considered the type of osculatory interpolation in which the derivatives up to some degree are not only supposed to be continuous everywhere but are also required to assume prespecified values at the sample points. Hermite interpolators have a continuous first derivative.
Here are the impulse responses of some Hermite interpolators.
f(x) = \begin{cases} 1 - \frac{5}{2}x^2 + \frac{3}{2}x^3, & 0 \le x < 1 \\ 2 - 4x + \frac{5}{2}x^2 - \frac{1}{2}x^3, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}

f(x) = \begin{cases} 1 - \frac{7}{3}x^2 + \frac{4}{3}x^3, & 0 \le x < 1 \\ \frac{5}{2} - \frac{59}{12}x + 3x^2 - \frac{7}{12}x^3, & 1 \le x < 2 \\ -\frac{3}{2} + \frac{7}{4}x - \frac{2}{3}x^2 + \frac{1}{12}x^3, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}
(10) 4-point, 3rd-order Hermite impulse response. (11) 6-point, 3rd-order Hermite impulse response.
This is also known as the Catmull-Rom spline.
f(x) = \begin{cases} 1 - \frac{25}{12}x^2 + \frac{5}{12}x^3 + \frac{13}{12}x^4 - \frac{5}{12}x^5, & 0 \le x < 1 \\ 1 + \frac{5}{12}x - \frac{35}{8}x^2 + \frac{35}{8}x^3 - \frac{13}{8}x^4 + \frac{5}{24}x^5, & 1 \le x < 2 \\ 3 - \frac{29}{4}x + \frac{155}{24}x^2 - \frac{65}{24}x^3 + \frac{13}{24}x^4 - \frac{1}{24}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}

(12) 6-point, 5th-order Hermite impulse response
Figure 4) The impulse and frequency responses of the 3rd-(4-point and 6-point) and 5th-order Hermite.
An interesting feature of the Hermite frequency responses is that the largest sidelobes (compare, for example, with the frequency responses of the Lagrange interpolators) have been “punctured”.
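The continuous first derivative is easy to confirm numerically. Here is a Python sketch (function names mine) of the 4-point Catmull-Rom kernel (10), checking that both one-sided slopes at the knot x = 1 equal −1/2:

```python
def hermite4(x):
    """4-point, 3rd-order Hermite (Catmull-Rom) impulse response, Eq. (10)."""
    x = abs(x)
    if x < 1:
        return 1 - 2.5*x**2 + 1.5*x**3
    if x < 2:
        return 2 - 4*x + 2.5*x**2 - 0.5*x**3
    return 0.0

# one-sided finite-difference slopes at the knot x = 1;
# both equal -1/2, so the first derivative is continuous there
eps = 1e-6
slope_left = (hermite4(1) - hermite4(1 - eps)) / eps
slope_right = (hermite4(1 + eps) - hermite4(1)) / eps
```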
3.1.4. 2nd-order-osculating
The definition of the 2nd-order-osculating interpolator is the same as that of the Hermites, but in this case the second derivatives are also matched with the Lagrangians. The order of the interpolation polynomial must be at least 5 [5].
Here are the impulse responses of some 2nd-order-osculating interpolators.
f(x) = \begin{cases} 1 - x^2 - \frac{9}{2}x^3 + \frac{15}{2}x^4 - 3x^5, & 0 \le x < 1 \\ -4 + 18x - 29x^2 + \frac{43}{2}x^3 - \frac{15}{2}x^4 + x^5, & 1 \le x < 2 \\ 0, & 2 \le x \\ f(-x), & \text{otherwise} \end{cases}

(10) 4-point, 5th-order 2nd-order-osculating impulse response

f(x) = \begin{cases} 1 - \frac{5}{4}x^2 - \frac{35}{12}x^3 + \frac{21}{4}x^4 - \frac{25}{12}x^5, & 0 \le x < 1 \\ -4 + \frac{75}{4}x - \frac{245}{8}x^2 + \frac{545}{24}x^3 - \frac{63}{8}x^4 + \frac{25}{24}x^5, & 1 \le x < 2 \\ 18 - \frac{153}{4}x + \frac{255}{8}x^2 - \frac{313}{24}x^3 + \frac{21}{8}x^4 - \frac{5}{24}x^5, & 2 \le x < 3 \\ 0, & 3 \le x \\ f(-x), & \text{otherwise} \end{cases}

(11) 6-point, 5th-order 2nd-order-osculating impulse response
Figure 5) The impulse and frequency responses of the 2nd-order-osculating (4-point and 6-point) interpolators.
3.2.1.2. Jitter
Jitter occurs when samples are taken near to but not exactly at the desired sample locations. The cardinal
series interpolation of jittered samples yields a biased estimate of the original signal. Although the bias can
be corrected with an inverse filter, the resulting interpolation noise variance is increased [8].
3.2.1.3. Truncation Error
The cardinal series requires an infinite number of terms to exactly interpolate a bandlimited signal from
its samples. In practice, only a finite number of terms can be used. We can write the truncated cardinal series
approximation of x(t) as
(16) x_N(t) = \sum_{n=-N}^{N} x\!\left(\frac{n}{2B}\right) \mathrm{sinc}(2Bt - n)

The error resulting from using a finite number of terms is referred to as truncation error: e_N(t) = x(t) - x_N(t).
It can be shown that [Papoulis (1966)]:

(17) e_N^2(t) \le \frac{4NB}{\pi^2}\,(E - E_N)\,\frac{\sin^2(2\pi Bt)}{N^2 - (2Bt)^2}, \qquad |t| < \frac{N}{2B}

where E is the energy of x(t) and E_N is the energy of x_N(t).
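A direct Python transcription of (16) makes the truncation error tangible (names are mine; the test signal sinc²(t), which is bandlimited with B = 1, is my own illustrative choice):

```python
import numpy as np

def truncated_cardinal(x_fn, B, N, t):
    """Truncated cardinal series x_N(t) of Eq. (16):
    sum over n = -N..N of x(n / 2B) * sinc(2Bt - n)."""
    n = np.arange(-N, N + 1)
    return float(np.sum(x_fn(n / (2 * B)) * np.sinc(2 * B * t - n)))

# x(t) = sinc(t)^2 is bandlimited to B = 1, so samples at rate 2B suffice;
# keeping more terms of the series reduces the truncation error e_N(t)
x = lambda t: np.sinc(t) ** 2
e5 = abs(truncated_cardinal(x, 1.0, 5, 0.3) - x(0.3))
e50 = abs(truncated_cardinal(x, 1.0, 50, 0.3) - x(0.3))
```

As the bound (17) suggests, the error shrinks as N grows: e50 is orders of magnitude smaller than e5.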
To select the weights λi in (19), the criterion is minimal mean-squared prediction error, σ e2 , defined as
(20) \sigma_e^2 \equiv E\!\left[\{Z(s_0) - p(Z(s_0))\}^2\right]
To minimize (20) given (19), let m be the Lagrangian multiplier ensuring \sum_{i=1}^{n} \lambda_i = 1; then we can write the prediction error as
(21) E\!\left[Z(s_0) - \sum_{i=1}^{n} \lambda_i Z(s_i)\right]^2 - 2m\left[\sum_{i=1}^{n} \lambda_i - 1\right]
Minimizing (21) gives us the optimal \lambda_1, \ldots, \lambda_n [11]:

(21) \lambda^T = \left(\gamma + \mathbf{1}\,\frac{1 - \mathbf{1}^T \Gamma^{-1} \gamma}{\mathbf{1}^T \Gamma^{-1} \mathbf{1}}\right)^{\!T} \Gamma^{-1} \quad\text{and}\quad m = -\frac{1 - \mathbf{1}^T \Gamma^{-1} \gamma}{\mathbf{1}^T \Gamma^{-1} \mathbf{1}}

where \gamma denotes the vector [\gamma(s_0 - s_1), \ldots, \gamma(s_0 - s_n)]^T and \Gamma denotes the n \times n matrix whose (i, j)th element is \gamma(s_i - s_j).
The optimal weights (21) give the minimal mean-square prediction error; (20) becomes

(22) \sigma_e^2 = \gamma^T \Gamma^{-1} \gamma - \frac{\left(\mathbf{1}^T \Gamma^{-1} \gamma - 1\right)^2}{\mathbf{1}^T \Gamma^{-1} \mathbf{1}}
However, in (21) and (22) \gamma(h) is unknown. Usually it is estimated by

(23) 2\hat{\gamma}(h) = \frac{1}{|N(h)|} \sum_{(s_i, s_j) \in N(h)} \left(Z(s_i) - Z(s_j)\right)^2

where N(h) = \{(s_i, s_j) : s_i - s_j = h;\ i, j = 1, \ldots, n\} and |N(h)| denotes the number of distinct pairs in N(h). The estimator (23) is unbiased if the process Z(\cdot) is indeed second-order stationary [12].
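Equation (21) translates directly into a few lines of linear algebra. The Python sketch below (function names mine; the linear variogram γ(h) = |h| and the sample values are illustrative assumptions, not from the paper) computes the ordinary-kriging weights and a prediction:

```python
import numpy as np

def kriging_weights(s, s0, gamma=np.abs):
    """Ordinary-kriging weights of Eq. (21).
    s: sample locations, s0: prediction location,
    gamma: variogram function (linear variogram |h| assumed here
    purely for illustration)."""
    s = np.asarray(s, dtype=float)
    Gamma = gamma(np.subtract.outer(s, s))           # (i, j) entry gamma(s_i - s_j)
    g = gamma(s0 - s)                                # vector gamma(s_0 - s_i)
    one = np.ones(len(s))
    Gi = np.linalg.inv(Gamma)
    corr = (1.0 - one @ Gi @ g) / (one @ Gi @ one)   # Lagrange-multiplier term
    return Gi @ (g + corr * one)                     # weights; they sum to 1

s = [0.0, 1.0, 3.0]
z = np.array([2.0, 4.0, 1.0])
lam = kriging_weights(s, 2.0)
pred = lam @ z   # prediction p(Z(s0)) = sum_i lambda_i Z(s_i)
```

With the linear variogram in one dimension, the predictor interpolates exactly at the sample locations, and between samples it reduces to linear interpolation between the two bracketing points.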
6. Non-uniform Sampling
Irregular or non-uniform sampling constitutes another whole area of research. There are essentially two
strategies for finding a solution [13].
6.1. Irregular Sampling in Shift-Invariant Spaces
Here, we consider the same kind of shift-invariant spaces as in the uniform case and fit the model to the
measurements [14].
6.2. Non-Uniform Splines and Radial Basis Functions
Here, we define new basis functions (or spaces) that are better suited to the non-uniform structure of the
problem, for instance, non-uniform splines [4,15].
7. Simulation
I used MATLAB to simulate some of these methods. Note that since MATLAB stores and treats any signal as a data vector, when we deal with continuous signals or with discrete-time signals of different sampling rates, all the signals must have the same number of data points per time unit; if they have different sampling rates, we can fill the missing data points in each time unit with zeros. After this zero padding, all the signals have the same number of data points per time unit and we can work with them easily. For example, Script 1 shows the MATLAB script I wrote to do the simulation for piecewise polynomial interpolators. In this script, I tried to recover the signal t·sin(t) + 3√t in the time interval 0 ≤ t ≤ 10. First I took some samples from the original signal, then I down-sampled the resulting discrete-time signal and used the resulting data points with several piecewise polynomial interpolators to recover the original signal by convolving the “interpolator” with the “known data points”. Since we are using the “convolution operation” and both our signals have finite duration, the (NOP/2)·Fs points at the beginning and the end of the recovered signals are not correct; as you can see in the MATLAB script, I got rid of them (NOP: the number of points used in the interpolator, Fs: the sampling frequency of the original signal).
Then, I computed the average error between the data points I estimated and the real data points that I had before the down-sampling. The error measure I used is

RDM = \frac{\sum_{i=1}^{N} (\hat{x}_i - x_i)^2}{\sum_{i=1}^{N} x_i^2}

where x_i is the real signal data point, \hat{x}_i is the estimated data point, and N is the number of data points we estimated. Please
note that since we are recovering an analog signal, we should use an integral instead of a sum to compute the average error; but as I said, all signals in MATLAB are discrete, so I used the sum. Figure 6 shows some of the simulation results.
Fs = 10; NOP = 2; f = 1 - abs(-1:1/Fs:1); % added: Fs, NOP and the interpolator kernel f (here the 2-point linear kernel sampled at rate Fs) were assumed defined
t = [0:0.35:10]; x = t.*sin(t) + 3.*t.^(1/2);
[w,n] = dnsample(x,2); [vv,n] = upsample(x,Fs/2); [v,n] = upsample(w,Fs);
y = conv(v,f); yy = y((NOP/2)*Fs+1 : size(y,2)-(NOP/2)*Fs); % NOP: the number of points used in interpolator "f"
figure(1); hold on; tt = n/(max(n)/max(t)); plot(tt,yy,'b-');
stem(tt(find(vv~=0)),vv(find(vv~=0)),'r:'); stem(tt(find(v~=0)),v(find(v~=0)),'g-.');
% vv: real values, v: known values, yy: interpolated function

function [v,n] = upsample(x,L)
lenx = length(x); x = reshape(x,1,lenx); lenv = lenx*L-(L-1); n = [0:lenv-1]; v = zeros(1,lenv); v(1:L:end) = x;

function [w,n] = dnsample(x,M)
lenx = length(x); x = reshape(x,1,lenx); w = x(1:M:end); lenw = length(w); n = [0:lenw-1];

Script 1 - The MATLAB script I used to simulate some of the piecewise polynomial interpolators
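For readers without MATLAB, the same experiment can be sketched in a few lines of NumPy (my own transcription; it uses NumPy's built-in linear interpolator in place of the convolution machinery of Script 1, which corresponds to the 2-point, 1st-order kernel):

```python
import numpy as np

def rdm(x_hat, x):
    """The RDM error measure: sum of squared errors over signal energy."""
    return np.sum((x_hat - x) ** 2) / np.sum(x ** 2)

# sample t*sin(t) + 3*sqrt(t) on [0, 10], drop every other sample,
# then rebuild the dropped ones with linear (2-point) interpolation
t = np.arange(0, 10, 0.35)
x = t * np.sin(t) + 3 * np.sqrt(t)
x_hat = np.interp(t, t[::2], x[::2])
err = rdm(x_hat, x)
```

The kept samples are reproduced exactly, and the RDM stays small because the signal is smooth relative to the sampling step.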
8. Summary
In this paper, I talked about some of the methods used for interpolation, but there are lots of other methods that I did not mention here. People with different backgrounds and different points of view have worked on these methods and found them useful in different areas. So it is impossible to give a general comparison of them and say which one is the best. The choice of method depends on the case we are dealing with, and I don’t think there is any guarantee that the chosen method will be the best possible one. Some
people have compared some of these methods in their papers. For instance, Fraser says [17] “unlike small-
kernel convolution methods, which have poor accuracy anywhere near the Nyquist limit, the accuracy of the
FFT method is maintained to high frequencies, very close to Nyquist”. But, we shouldn’t forget that both
those types of methods have been highly improved since then.
P.S.: I was once asked about the accuracy of a naive approach using the shift property of the DFT with a non-integer shift. Mathematically, this method is wrong and does not work, because the proof of the shift property of the DFT essentially assumes that the shift is an integer [2, 3]. I tried to test it in practice, and noticed that it works (to some extent!) only when the non-integer shift is very small compared with the DFT period (less than 10%) and the original signal is smooth. We can say that in this case the DFT coefficients, which are actually uniformly-spaced samples on the unit circle, have moved very little, and so the IDFT gives us the values of points that are very close to the original samples. But this is not useful, because when we know that the signal is smooth and we want to find the values of samples very close to the known samples, it is obvious that the resulting values will be very close to the values of the original samples anyway!
Bibliography
[1] J.S. Lim and A.V. Oppenheim, “Advanced Topics in Signal Processing,” Prentice Hall, 1988.
[2] J.G. Proakis and D.G. Manolakis, “Digital signal processing: principles, algorithms and applications,” Third
Edition, Prentice Hall, 1996.
[3] A. V. Oppenheim and R. W. Schafer, “Discrete Time Signal Processing,” Prentice Hall, 1989.
[4] Carl de Boor, “A Practical Guide to Splines,” Springer-Verlag, 1978.
[5] Olli Niemitalo, “Polynomial Interpolators for High-Quality Resampling of Oversampled Audio,”
http://www.student.oulu.fi/~oniemita/DSP/INDEX.HTM
[6] M. Unser, A. Aldroubi and M. Eden, ”B-Spline Signal Processing: Part I-Theory,” IEEE Transactions on Signal
Processing, vol. 41, no. 2, pp. 821-833, February 1993.
[7] C. de Boor, “A practical Guide to Splines,” New York: Springer-Verlag, 1978.
[8] Robert J. Marks II, “Introduction to Shannon Sampling Theory,” New York: Springer-Verlag. 1991.
[9] R.W. Schafer and L.R. Rabiner, “A Digital Signal Processing Approach to Interpolation,” Proc. IEEE, Vol. 61, pp. 692-702, June 1973.
[10] Erik Meijering, “A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image Processing,” Proceedings of the IEEE, Vol. 90, pp. 319-342, March 2002.
[11] Win C.M. van Beers and Jack P.C. Kleijnen, “Kriging for Interpolation in Random Simulation,” Discussion paper,
ISSN 0924-7815.
[12] Cressie, “Statistics for Spatial Data,” New York: Wiley, 1993.
[13] Michael Unser, “Sampling-50 Years After Shannon,” Proceedings of the IEEE, Vol. 88, No. 4, April 2000.
[14] J.J. Benedetto and W. Heller, “Irregular Sampling and the Theory of Frames,” Math. Note, Vol. 10, pp. 103-125,
1990.
[15] L.L. Schumaker, “Spline functions: Basic Theory”, New York, Wiley, 1981.
[16] “Kriging Interpolation”, http://lazarus.elte.hu/hun/digkonyv/havas/mellekl/vm25/vma07.pdf
[17] Donald Fraser, “Interpolation by the FFT Revisited – An Experimental Investigation,” IEEE Transactions on
Acoustics, Speech, and Signal Processing, Vol. 37, No. 5, pp. 665-675, 1989.
[18] R.W. Schafer and L.R. Rabiner, “A Digital Signal Processing Approach to Interpolation,” Proc. IEEE, Vol. 61, pp. 692-702, June 1973.