A Regularized Sampling Algorithm by Removing Discontinuities

Weidong Chen
Nanjing Center for Applied Math, Nanjing, Jiangsu 211135, China
(Email: chenw@ksu.edu).

Abstract—In this paper, the ill-posedness of the sampling problem is discussed. We show that the formula in the Shannon Sampling Theorem is not reliable in the noisy case, both by a theoretical proof and by numerical examples. The Fourier transform of the sinc function is analyzed. We will see that the formula in the Shannon Sampling Theorem is very sensitive to noise, because the sinc function converges to zero too slowly; this slow decay is caused by the discontinuities of the Fourier transform of the sinc function in the frequency domain. A regularized sampling algorithm for band-limited signals is presented by removing the discontinuities of the sinc function in the frequency domain. The convergence properties of the regularized sampling algorithm are given, and the error estimate of the regularization method is also given. Numerical results compare the regularized sampling algorithm with the formula in the Shannon Sampling Theorem and with other algorithms on several examples.

Index Terms—Shannon Sampling Theorem, ill-posedness, regularization, removing discontinuity.

I. INTRODUCTION

In this paper, the ill-posedness of the sampling problem is discussed. A regularized sampling algorithm is presented by revising the sinc function.

First, we describe band-limited signals and the sampling theorem for a function f ∈ L2(R). Details can be found in [1].

Definition: A function f ∈ L2(R) is said to be Ω-band-limited if

f̂(ω) = 0, ∀ω ∉ [−Ω, Ω].

Here f̂ is the Fourier transform of f:

F(f)(ω) = f̂(ω) = ∫_{−∞}^{+∞} f(t) e^{iωt} dt, ω ∈ R.

We then have the inversion formula:

F^{−1}(f̂)(t) = f(t) = (1/2π) ∫_{−Ω}^{Ω} f̂(ω) e^{−iωt} dω, a.e. t ∈ R.

Remark. The definition of the Fourier transform for a function f ∈ L2(R) can be seen in [1].

In this paper, we consider the sampling problem:

given f(nh), find f(t),

where h is the step size of sampling.

For a band-limited signal f ∈ L2(R), we have the following sampling theorem [1]-[3].

Shannon Sampling Theorem. The Ω-band-limited signal f(t) ∈ L2(R) can be exactly reconstructed from its samples f(nh):

f(t) = Σ_{n=−∞}^{∞} f(nh) · sin Ω(t − nh) / (Ω(t − nh)),   (1)

where h := π/Ω.

The sampling problem has also been studied recently in [23] and [24]; however, the ill-posedness of the sampling problem has not received much attention. In [25], the ill-posedness is considered and a regularization method is applied to compute f(nh) from nonuniform samples; formula (1) is then used to reconstruct f(t). However, formula (1) is not reliable in noisy cases.

Non-sinc interpolation kernels are used in some papers [26], [27], but the ill-posedness is not taken into account there.

In many practical problems, the samples {f(nh)} are noisy:

f(nh) = fE(nh) + η(nh),   (2)

where {η(nh)} is the noise,

|η(nh)| ≤ δ,   (3)

and fE ∈ L2 is the exact band-limited signal.

If the samples are noisy [4], [7]-[8], [22], the sampling problem is an ill-posed problem. In [5], [6] and [11] the ill-posedness is analyzed. In [10], the ill-posedness is considered in band-limited extrapolation, and in [12] in the computation of the Fourier transform. In [13]-[14], the ill-posedness is considered in the restoration of lost samples in the one- and two-dimensional cases. In [15], a regularized low-pass filter is presented. In [16], the computation of two-dimensional Fourier transforms for noisy band-limited signals is considered. In [17], a regularized sampling algorithm for reconstructing non-bandlimited functions is given. In [18], the ill-posedness is analyzed for the two-dimensional sampling problem. In [19]-[20], the ill-posedness of the computation of derivatives is discussed.
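Formula (1) is easy to test numerically in the noise-free case. The Python sketch below (an illustration, not part of the paper) truncates the sum to |n| ≤ N and applies it to the Ω = 1 band-limited signal (1 − cos t)/(πt²), whose Fourier transform is the triangle 1 − |ω| on [−1, 1]; this is the one-peak signal used later in Section V.

```python
import math

OMEGA = 1.0
H = math.pi / OMEGA   # h = pi / Omega
N = 2000              # truncation order (the theorem's sum is infinite)

def f_band_limited(t):
    """(1 - cos t)/(pi t^2): an Omega = 1 band-limited signal whose Fourier
    transform is the triangle 1 - |omega| on [-1, 1]; the singularity at
    t = 0 is removable."""
    if abs(t) < 1e-8:
        return 1.0 / (2.0 * math.pi)
    return (1.0 - math.cos(t)) / (math.pi * t * t)

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def shannon_reconstruct(t):
    # formula (1), truncated to |n| <= N
    return sum(f_band_limited(n * H) * sinc(OMEGA * (t - n * H))
               for n in range(-N, N + 1))

t = 0.5
print(abs(shannon_reconstruct(t) - f_band_limited(t)))  # difference is tiny
```

With exact samples the truncated sum matches the signal to high accuracy; the ill-posedness discussed below only appears once the samples are noisy.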
Fig. 1. The functions with one peak and three peaks (plotted for t = −40 to 40).

In [11], the theorem below is given.

Theorem. For band-limited signals f, the square of the L2 norm is

||f||²_{L2} = h Σ_{n=−∞}^{∞} |f(nh)|²,

where h = π/Ω.

By the theorem, if η is band-limited, the error energy

||η||²_{L2} = h Σ_{n=−∞}^{∞} |η(nh)|²

can be very large even if (3) is satisfied.

In [11], a regularized sampling algorithm is also presented:

fα(t) = Σ_{n=−∞}^{∞} [sin Ω(t − nh) / (Ω(t − nh))] · f(nh) / (1 + 2πα + 2πα(nh)²),   (4)

where f(nh) is given in (2). This algorithm is effective in the case of one peak; if there are several peaks, it is not effective. Examples of functions with one peak and with three peaks can be seen in Figure 1.

In this paper, we study the ill-posedness of the sampling problem and present a more advanced algorithm for this ill-posed problem by revising the sinc function. In Section II, the concepts of ill-posedness and of a regularizing operator are introduced. In Section III, we give a new method based on regularization and show its convergence property. In Section IV, the error estimate is given. The experimental results of numerical examples are given in Section V. Finally, a conclusion is given in Section VI.

II. THE CONCEPT OF ILL-POSEDNESS AND REGULARIZING OPERATOR

The concept of ill-posed problems was introduced in [9]. Here we borrow the following definition from it:

Definition 2.1: Assume A : D → U is an operator in which D and U are metric spaces with distances ρ_D(∗, ∗) and ρ_U(∗, ∗), respectively. The problem

Az = u   (5)

of determining a solution z in the space D from the "initial data" u in the space U is said to be well-posed on the pair of metric spaces (D, U) in the sense of Hadamard if the following three conditions are satisfied:

i) For every element u ∈ U there exists a solution z in the space D; in other words, the mapping A is surjective.

ii) The solution is unique; in other words, the mapping A is injective.

iii) The problem is stable in the spaces (D, U): ∀ϵ > 0, ∃δ > 0 such that

ρ_U(u1, u2) < δ ⇒ ρ_D(z1, z2) < ϵ.

In other words, the inverse mapping A⁻¹ is uniformly continuous.

Problems that violate any of the three conditions are said to be ill-posed.

Definition 2.2: Assume

A zE = uE   ("E" for exact).

An operator R(∗, α) : U → D, depending on a parameter α, is called a regularizing operator for the equation Az = u in a neighborhood of uE if there exists a function α = α(δ) of δ such that

lim_{δ→0} ||uδ − uE|| = 0 and zα = R(uδ, α(δ))

imply

lim_{δ→0} ||zα − zE|| = 0.

The approximate solution zα = R(u, α) to the exact solution zE obtained by the method of regularization is called a regularized solution.

In the case of the sampling problem, equation (5) becomes

Af = u,   (6)

where A is the operator

A : C_BL(−∞, ∞) → l∞

and f ∈ C_BL(−∞, ∞), u = {f(nh) : n ∈ Z} ∈ l∞. Here C_BL(−∞, ∞) is the set of all Ω-band-limited functions, l∞ is the set of all bounded sequences, and Z is the set of integers.

The ill-posedness is due to the fact that the sinc function converges to zero very slowly.

In [11], we choose η(nh) = ϵ sgn{sin Ω(t0 − nh) / (Ω(t0 − nh))}, where t0 is a given point in the time domain and ϵ is close to zero. Then the noise signal produced by Shannon's Sampling Theorem is

η(t) = Σ_{n=−∞}^{∞} η(nh) · sin Ω(t − nh) / (Ω(t − nh))
= ϵ Σ_{n=−∞}^{∞} [sin Ω(t − nh) / (Ω(t − nh))] · sgn{sin Ω(t0 − nh) / (Ω(t0 − nh))}.

We can see that |η(nh)| ≤ ϵ. However, the noise at t = t0 in the sampling theorem is

η(t0) = ϵ Σ_{n=−∞}^{∞} |sin Ω(t0 − nh) / (Ω(t0 − nh))| = ∞.

Also, at any point t = t0 + kπ/Ω, k ∈ Z,

η(t0 + kπ/Ω) = ϵ Σ_{n=−∞}^{∞} (−1)^k [sin Ω(t0 − nh) / (Ω(t0 + kπ/Ω − nh))] · sgn{sin Ω(t0 − nh) / (Ω(t0 − nh))}

= (−1)^k ϵ Σ_{n=−∞}^{∞} [Ω(t0 − nh) / (Ω(t0 + kπ/Ω − nh))] · |sin Ω(t0 − nh) / (Ω(t0 − nh))| = ±∞.
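The divergence above is easy to observe numerically: with the sign-matched noise, the noise contribution at t0 equals ϵ times the partial sums of Σ |sin Ω(t0 − nh)/(Ω(t0 − nh))|, which grow roughly like (2/π) log N with the truncation order N. A small Python sketch (illustrative, not from the paper; t0 = 30, ϵ = 0.05, Ω = 1 as in Section V):

```python
import math

OMEGA, H = 1.0, math.pi     # h = pi / Omega
T0, EPS = 30.0, 0.05        # the adversarial noise of the text

def abs_sinc_sum(N):
    """Partial sums of sum_{|n|<=N} |sin(Omega(T0 - nh)) / (Omega(T0 - nh))|,
    i.e. the worst-case noise amplification of formula (1) at t = T0."""
    return sum(abs(math.sin(OMEGA * (T0 - n * H)) / (OMEGA * (T0 - n * H)))
               for n in range(-N, N + 1))

sums = {N: abs_sinc_sum(N) for N in (100, 10_000, 1_000_000)}
for N, s in sums.items():
    print(N, EPS * s)   # grows ~ (2 EPS / pi) log N: unbounded as N -> inf
```

Even a modest truncation order already produces a noise contribution comparable to the signal itself, and the contribution keeps growing as more samples are used.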
This means the existence and the stability conditions are not satisfied.

We will apply the regularization method to solve this ill-posed problem.

Fig. 2. The graph of ϕ̂λ(ω) (plotted for Ω = 1 over ω = −1 to 1).

III. THE REGULARIZED SAMPLING ALGORITHM

In this section we will find a function to use instead of the sinc function, obtained by regularization and by removing the discontinuity of the Fourier transform of the sinc function.

First, we consider the sinc function. Its Fourier transform is 1_{[−Ω,Ω]}, where 1_S denotes the indicator function of a set S ⊂ R. The sinc function converges to zero slowly because its Fourier transform is discontinuous at ω = −Ω and ω = Ω. So we replace 1_{[−Ω,Ω]} by

ϕ̂λ(ω) =
  −(ω + Ω) / ((λ − 1)Ω),  ω ∈ [−Ω, −λΩ],
  1,                       ω ∈ [−λΩ, λΩ],
  (ω − Ω) / ((λ − 1)Ω),   ω ∈ [λΩ, Ω],
  0,                       ω ∉ [−Ω, Ω],

where λ = 1 − α and α > 0 is the regularization parameter, which is close to zero. The graph of ϕ̂λ(ω) can be seen in Figure 2, where we show the case Ω = 1.

The inverse Fourier transform of ϕ̂λ(ω) is

ϕλ(t) = (cos Ωt − cos λΩt) / (π(λΩ − Ω)t²)

(the singularity at t = 0 is removable, with ϕλ(0) = (1 + λ)Ω/(2π)).

Now we construct the regularized sampling formula:

fα(t) = (π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) f(nh),   (7)

where f(nh) is given in (2).

Since the exact signal fE is in L2, we can assume the signal is ϵ-concentrated in the interval [−T, T]:

∫_{|t|≥T} |fE(t)|² dt ≤ ϵ.

In order to prove the convergence property of this regularized sampling formula on [−T, T], we need some lemmas. Then we construct a regularized sampling theorem.

Lemma 1. Let

fEα(t) = (π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) fE(nh).   (8)

Then f̂Eα(ω) = ϕ̂λ(ω) f̂E(ω).

Proof.

f̂Eα(ω) = (π/Ω) Σ_{n=−∞}^{∞} fE(nh) ∫_{−∞}^{∞} ϕλ(t − nh) e^{iωt} dt
= (π/Ω) Σ_{n=−∞}^{∞} fE(nh) e^{inhω} ∫_{−∞}^{∞} ϕλ(s) e^{iωs} ds
= (π/Ω) Σ_{n=−∞}^{∞} fE(nh) ϕ̂λ(ω) e^{inhω} = ϕ̂λ(ω) f̂E(ω).

Lemma 2.

∫_{−Ω}^{Ω} |ϕ̂λ(ω) − 1| dω = αΩ.

Proof.

∫_{−Ω}^{Ω} |ϕ̂λ(ω) − 1| dω = −2 ∫_{λΩ}^{Ω} [(ω − Ω)/((λ − 1)Ω) − 1] dω
= −[(ω − Ω)²/((λ − 1)Ω)]_{λΩ}^{Ω} + 2αΩ = −αΩ + 2αΩ = αΩ.

Lemma 3.

∫_{−Ω}^{Ω} |ϕ̂λ(ω) − 1|² dω = (2/3)αΩ.

Proof.

∫_{−Ω}^{Ω} |ϕ̂λ(ω) − 1|² dω = 2 ∫_{λΩ}^{Ω} [(ω − Ω)/((λ − 1)Ω) − 1]² dω
= 2 ∫_{λΩ}^{Ω} [(ω − Ω)²/((λ − 1)²Ω²) − 2(ω − Ω)/((λ − 1)Ω) + 1] dω
= [2(ω − Ω)³/(3(λ − 1)²Ω²)]_{λΩ}^{Ω} − [2(ω − Ω)²/((λ − 1)Ω)]_{λΩ}^{Ω} + 2αΩ
= (2/3)αΩ − 2αΩ + 2αΩ = (2/3)αΩ.
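The kernel and the two integrals in Lemmas 2 and 3 can be checked numerically. The sketch below (illustrative, not from the paper; Ω = 1, α = 0.2) implements ϕ̂λ and its inverse Fourier transform ϕλ, and verifies ∫|ϕ̂λ − 1| dω = αΩ and ∫|ϕ̂λ − 1|² dω = (2/3)αΩ with a midpoint rule:

```python
import math

OMEGA = 1.0
ALPHA = 0.2              # regularization parameter
LAM = 1.0 - ALPHA        # lambda = 1 - alpha

def phi_hat(w):
    """The trapezoidal replacement for the indicator 1_[-Omega, Omega]."""
    a = abs(w)
    if a <= LAM * OMEGA:
        return 1.0
    if a <= OMEGA:
        return (OMEGA - a) / (OMEGA - LAM * OMEGA)   # linear ramp down to 0
    return 0.0

def phi_time(t):
    """Inverse Fourier transform (cos Omega t - cos lambda Omega t) /
    (pi (lambda Omega - Omega) t^2); the singularity at 0 is removable."""
    if abs(t) < 1e-4:
        return OMEGA * (1.0 + LAM) / (2.0 * math.pi)
    return (math.cos(OMEGA * t) - math.cos(LAM * OMEGA * t)) / (
        math.pi * (LAM * OMEGA - OMEGA) * t * t)

def integrate(g, a, b, n=200_000):
    # midpoint rule
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

I1 = integrate(lambda w: abs(phi_hat(w) - 1.0), -OMEGA, OMEGA)    # Lemma 2
I2 = integrate(lambda w: (phi_hat(w) - 1.0) ** 2, -OMEGA, OMEGA)  # Lemma 3
print(I1, I2)   # ~ alpha*Omega = 0.2 and (2/3)*alpha*Omega ~ 0.1333
```

Unlike the sinc function, ϕλ decays like 1/t², which is what makes the sums over the samples absolutely convergent in the noisy case.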
Lemma 4. If f̂E(ω) is bounded, then

|fEα(t) − fE(t)| = O(α).

Proof. By Lemma 1,

fEα(t) − fE(t) = (1/2π) ∫_{−Ω}^{Ω} [f̂Eα(ω) − f̂E(ω)] e^{−iωt} dω
= (1/2π) ∫_{−Ω}^{Ω} [ϕ̂λ(ω) − 1] f̂E(ω) e^{−iωt} dω.

This implies

|fEα(t) − fE(t)| ≤ (1/2π) ∫_{−Ω}^{Ω} |[ϕ̂λ(ω) − 1] f̂E(ω)| dω = O(α)

by Lemma 2.

Lemma 5. For f̂E(ω) ∈ L2,

|fEα(t) − fE(t)| = O(√α).

Proof. By Lemma 1,

|fEα(t) − fE(t)|² = (1/(2π)²) |∫_{−Ω}^{Ω} [ϕ̂λ(ω) − 1] f̂E(ω) e^{−iωt} dω|².

By the Cauchy inequality and Lemma 3,

|fEα(t) − fE(t)|² ≤ (1/(2π)²) ∫_{−Ω}^{Ω} |ϕ̂λ(ω) − 1|² dω · ∫_{−Ω}^{Ω} |f̂E(ω)|² dω = O(α).

Lemma 6. On any finite interval [−T, T], T = const. > 0,

|Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)| = O(δ/α).

Proof.

|Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)|
= |Σ_{n=−∞}^{∞} [cos Ω(t − nh) − cos λΩ(t − nh)] η(nh) / (π(λΩ − Ω)(t − nh)²)|
≤ |Σ_{|nh|≤T} [cos Ω(t − nh) − cos λΩ(t − nh)] η(nh) / (π(λΩ − Ω)(t − nh)²)|
+ |Σ_{|nh|>T} [cos Ω(t − nh) − cos λΩ(t − nh)] η(nh) / (π(λΩ − Ω)(t − nh)²)| =: I1 + I2.

Since |λΩ − Ω| = αΩ, |η(nh)| ≤ δ, and the first sum contains only finitely many terms, we have

I1 = O(δ/α).

Since, for t ∈ [−T, T] and |nh| > T,

|cos Ω(t − nh) − cos λΩ(t − nh)| / (t − nh)² ≤ 2/(|nh| − T)²

and Σ_{|nh|>T} 2/(|nh| − T)² converges, we have

I2 = O(δ/α).

This implies I1 + I2 = O(δ/α).

Theorem 1. On any finite interval [−T, T], T = const. > 0: if we choose α = α(δ) such that α(δ) → 0 and δ/α(δ) → 0 as δ → 0, then fα(t) → fE(t) on [−T, T] as δ → 0. We also have the estimate

||fα(t) − fE(t)||_{C[−T,T]} = O(δ/α) + O(√α),

where || · ||_{C[−T,T]} is the maximum norm.

Proof.

|fα(t) − fE(t)| = |(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh)[η(nh) + fE(nh)] − fE(t)|
≤ |(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)| + |(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) fE(nh) − fE(t)|.

By Lemmas 5 and 6, we have

||fα(t) − fE(t)||_{C[−T,T]} = O(δ/α) + O(√α).

By Lemma 4 and a similar proof, we have the next theorem.

Theorem 2. Assume f̂E(ω) is bounded. If we choose α = α(δ) such that α(δ) → 0 and δ/α(δ) → 0 as δ → 0, then fα(t) → fE(t) on [−T, T] as δ → 0. We also have the estimate

||fα(t) − fE(t)||_{C[−T,T]} = O(δ/α) + O(α).
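Theorems 1 and 2 suggest how to choose α from the noise level δ. A minimal sketch of the balance, with all constants set to 1 for illustration (the actual constants depend on the signal): for Theorem 1, equating δ/α with √α gives α ∼ δ^{2/3} and an O(δ^{1/3}) error; for Theorem 2, equating δ/α with α gives α ∼ δ^{1/2} and an O(δ^{1/2}) error.

```python
# Balancing the two error terms of the theorems, constants suppressed.

def bound_thm1(delta):
    # Theorem 1: delta/alpha + sqrt(alpha), minimized by alpha ~ delta**(2/3)
    alpha = delta ** (2.0 / 3.0)
    return delta / alpha + alpha ** 0.5     # = 2 * delta**(1/3)

def bound_thm2(delta):
    # Theorem 2: delta/alpha + alpha, minimized by alpha ~ sqrt(delta)
    alpha = delta ** 0.5
    return delta / alpha + alpha            # = 2 * sqrt(delta)

for d in (1e-1, 1e-2, 1e-3, 1e-4):
    print(d, bound_thm1(d), bound_thm2(d))  # both decrease as delta -> 0
```

In both cases α(δ) → 0 and δ/α(δ) → 0, as the theorems require, and the bounded-spectrum case of Theorem 2 yields the faster rate.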
IV. ESTIMATION OF ERROR ENERGY

In Section III, by comparing Theorem 1 and Theorem 2, we see that a more accurate estimate is obtained in Theorem 2 if f̂E(ω) is bounded. In this section, without the condition that f̂E(ω) be bounded, we obtain the same order of estimate for the error energy as in Theorem 2.

Lemma 7. On any finite interval [−T, T], T = const. > 0,

||Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)||²_{L2[−T,T]} = O(δ²/α²).

Proof. By the inequality

||f + g||² ≤ 2(||f||² + ||g||²),
we have

||Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)||²_{L2[−T,T]}
= ||Σ_{|nh|≤T} ϕλ(t − nh) η(nh) + Σ_{|nh|>T} ϕλ(t − nh) η(nh)||²_{L2[−T,T]}
≤ 2 ||Σ_{|nh|≤T} [cos Ω(t − nh) − cos λΩ(t − nh)] η(nh) / (π(λΩ − Ω)(t − nh)²)||²_{L2[−T,T]}
+ 2 ||Σ_{|nh|>T} [cos Ω(t − nh) − cos λΩ(t − nh)] η(nh) / (π(λΩ − Ω)(t − nh)²)||²_{L2[−T,T]}
= O(δ²/α²).

Lemma 8. For f̂E(ω) ∈ L2,

||fEα(t) − fE(t)||²_{L2(−∞,∞)} = O(α²).

Proof. By Parseval's Theorem and Lemma 1,

||fEα(t) − fE(t)||²_{L2(−∞,∞)} = (1/2π) ||f̂Eα(ω) − f̂E(ω)||²_{L2} = (1/2π) ||[ϕ̂λ(ω) − 1] f̂E(ω)||²_{L2}.

By the Cauchy inequality,

||[ϕ̂λ(ω) − 1] f̂E(ω)||²_{L2} ≤ ||ϕ̂λ(ω) − 1||²_{L2} ||f̂E(ω)||²_{L2} = O(α²).

Theorem 3. On any finite interval [−T, T], T = const. > 0,

||fα(t) − fE(t)||²_{L2[−T,T]} = O(δ²/α²) + O(α²).

Proof. By the inequality

||f + g||² ≤ 2(||f||² + ||g||²),

we have

||fα(t) − fE(t)||²_{L2[−T,T]}
= ||(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh)[η(nh) + fE(nh)] − fE(t)||²_{L2[−T,T]}
≤ 2 ||(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) η(nh)||²_{L2[−T,T]}
+ 2 ||(π/Ω) Σ_{n=−∞}^{∞} ϕλ(t − nh) fE(nh) − fE(t)||²_{L2[−T,T]}.

By Lemmas 7 and 8,

||fα(t) − fE(t)||²_{L2[−T,T]} = O(δ²/α²) + O(α²).

V. EXPERIMENTAL RESULTS

In this section, we give some examples to show that the regularized sampling algorithm is more effective in controlling the noise than the Shannon sampling theorem.

We first describe the algorithm. In computation, only finitely many terms of the sum in (7) can be used. So we choose a large integer N and use the following formula in computation:

fα(t) = (π/Ω) Σ_{n=−N}^{N} ϕλ(t − nh) f(nh),   (9)

where f(nh) is the noisy sampling data given in (2).

Suppose the exact signal in Examples 1 and 2 is

fE(t) = (1 − cos t)/(πt²) + (1 − cos(t − 30))/(π(t − 30)²) + (1 − cos(t + 30))/(π(t + 30)²).

Then

f̂E(ω) = (1 − |ω|) + (1 − |ω|)e^{30ωi} + (1 − |ω|)e^{−30ωi} for ω ∈ [−Ω, Ω], and 0 for ω ∉ [−Ω, Ω],

where Ω = 1.

Example 1. We consider the noise

η(nh) = ϵ sgn{sin Ω(t0 − nh) / (Ω(t0 − nh))},

where h = π/Ω = π, t0 = 30, and ϵ = 0.05. This is the noise used above in the analysis of the stability of the formula in the Shannon Sampling Theorem.

The result of the Shannon Sampling Theorem is in Figure 3. The result of the regularized sampling algorithm in [11] with α = 0.2 is in Figure 4. The result of the regularized sampling algorithm in this paper with α = 0.2 is in Figure 5.

Example 2. We consider the noise to be white noise uniformly distributed on [−0.025, 0.025].

The result of the Shannon Sampling Theorem is in Figure 6. The result of the regularized sampling algorithm in [11] with α = 0.2 is in Figure 7. The result of the regularized sampling algorithm in this paper with α = 0.2 is in Figure 8.

Suppose the exact signal in Examples 3 and 4 is

fE(t) = 2[(t² − 4π²) cos t − 2t sin t] / (t² − 4π²)²
+ 2[((t − 30)² − 4π²) cos(t − 30) − 2(t − 30) sin(t − 30)] / [((t − 30)² − 4π²)²]
+ 2[((t + 30)² − 4π²) cos(t + 30) − 2(t + 30) sin(t + 30)] / [((t + 30)² − 4π²)²].

Example 3. We choose the noise of Example 1.

The result of the Shannon Sampling Theorem is in Figure 9. The result of the regularized sampling algorithm in [11] with α = 0.2 is in Figure 10. The result of the regularized sampling algorithm in this paper with α = 0.2 is in Figure 11.
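To make Example 1 concrete, the Python sketch below (illustrative, not the authors' code; N = 2000, α = 0.2) applies the truncated formula (9) and the equally truncated formula (1) to the three-peak signal with the adversarial noise, and compares the absolute errors at t0 = 30:

```python
import math

OMEGA = 1.0
H = math.pi / OMEGA          # h = pi / Omega
N = 2000                     # truncation order
T0, EPS, ALPHA = 30.0, 0.05, 0.2

def bump(x):
    # (1 - cos x)/(pi x^2); the singularity at x = 0 is removable
    if abs(x) < 1e-8:
        return 1.0 / (2.0 * math.pi)
    return (1.0 - math.cos(x)) / (math.pi * x * x)

def f_exact(t):
    # the three-peak signal of Examples 1 and 2
    return bump(t) + bump(t - 30.0) + bump(t + 30.0)

def sinc(x):
    return 1.0 if abs(x) < 1e-12 else math.sin(x) / x

def phi_lambda(t, alpha):
    # inverse Fourier transform of the trapezoid phi_hat, lambda = 1 - alpha
    lam = 1.0 - alpha
    if abs(t) < 1e-4:
        return OMEGA * (1.0 + lam) / (2.0 * math.pi)
    return (math.cos(OMEGA * t) - math.cos(lam * OMEGA * t)) / (
        math.pi * (lam * OMEGA - OMEGA) * t * t)

# noisy samples with the adversarial noise of Example 1
samples = {}
for n in range(-N, N + 1):
    s = sinc(OMEGA * (T0 - n * H))
    samples[n] = f_exact(n * H) + EPS * (1.0 if s >= 0.0 else -1.0)

def shannon(t):          # truncated formula (1)
    return sum(samples[n] * sinc(OMEGA * (t - n * H))
               for n in range(-N, N + 1))

def regularized(t):      # truncated formula (9)
    return (math.pi / OMEGA) * sum(
        phi_lambda(t - n * H, ALPHA) * samples[n] for n in range(-N, N + 1))

err_sh = abs(shannon(T0) - f_exact(T0))
err_reg = abs(regularized(T0) - f_exact(T0))
print(err_sh, err_reg)   # the regularized error is markedly smaller
```

The Shannon error at t0 is dominated by the accumulated noise term ϵ Σ |sinc|, which keeps growing with N, while the quadratic decay of ϕλ keeps the noise contribution of (9) bounded.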
Example 4. We consider the noise to be white noise uniformly distributed on [−0.025, 0.025].

The result of the Shannon Sampling Theorem is in Figure 12. The result of the regularized sampling algorithm in [11] with α = 0.2 is in Figure 13. The result of the regularized sampling algorithm in this paper with α = 0.2 is in Figure 14.

VI. CONCLUSION

The sampling problem is a highly ill-posed problem: noise can give rise to large errors in the formula of the Shannon Sampling Theorem. A regularized sampling algorithm has been presented by removing the discontinuities of the sinc function in the frequency domain. The convergence property and the error estimate are given. The numerical results show that in the noisy cases the regularized sampling algorithm of this paper performs much better in computation than the other algorithms.

REFERENCES

[1] A. Steiner, "Plancherel's Theorem and the Shannon Series Derived Simultaneously," The American Mathematical Monthly, vol. 87, pp. 193-197, 1980.
[2] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, July 1948.
[3] A. Papoulis, "Generalized Sampling Expansion," IEEE Trans. on Circuits and Systems, vol. CAS-24, pp. 652-654, 1977.
[4] Y. C. Eldar and M. Unser, "Nonideal Sampling and Interpolation from Noisy Observations in Shift-Invariant Spaces," IEEE Trans. on Signal Processing, vol. 54, no. 7, July 2006.
[5] K. F. Cheung and R. J. Marks II, "Ill-posed Sampling Theorems," IEEE Trans. on Circuits and Systems, vol. CAS-32, pp. 481-484, May 1985.
[6] R. J. Marks II, "Noise sensitivity of band-limited signal derivative interpolation," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 1028-1032, 1983.
[7] Z. Cvetkovic and I. Daubechies, "Single-Bit Oversampled A/D Conversion with Exponential Accuracy in the Bit-Rate," Proceedings of the Data Compression Conference, 2000.
[8] S. Smale and D.-X. Zhou, "Shannon Sampling and Function Reconstruction from Point Values," Bull. Amer. Math. Soc., vol. 41, pp. 279-305, 2004.
[9] A. N. Tikhonov and V. Y. Arsenin, Solution of Ill-Posed Problems. Winston/Wiley, 1977.
[10] W. Chen, "An Efficient Method for An Ill-posed Problem—Band-limited Extrapolation by Regularization," IEEE Trans. on Signal Processing, vol. 54, pp. 4611-4618, 2006.
[11] W. Chen, "The ill-posedness of the sampling problem and regularized sampling algorithm," Digital Signal Processing, Elsevier, vol. 21, pp. 375-390, 2011.
[12] W. Chen, "Computation of Fourier Transform for Noisy Band-limited Signals," SIAM J. on Numerical Analysis, vol. 49, no. 1, 2011.
[13] W. Chen, "The Ill-posedness of Restoring Lost Samples and Regularized Restoration for Band-limited Signals," Journal of Signal Processing, Elsevier, May 2011.
[14] W. Chen, "Regularized Restoration for Two Dimensional Band-limited Signals," Multidimensional Systems and Signal Processing, Springer, Dec. 2013.
[15] W. Chen, "The Regularized Low Pass Filter," Journal of Signal and Information Processing, vol. 5, no. 1, 2014.
[16] W. Chen, "Computation of Two-Dimensional Fourier Transforms for Noisy Band-Limited Signals," Applied Mathematics and Computation, Elsevier, vol. 246, pp. 199-209, 2014.
[17] W. Chen, "The Regularized Sampling Algorithm in reconstructing non-bandlimited functions," Journal of Computational and Applied Mathematics, Elsevier, vol. 301, pp. 259-270, Aug. 2016.
[18] W. Chen, "A Regularized Two-Dimensional Sampling Algorithm," Journal of Inverse and Ill-posed Problems, June 2017.
[19] W. Chen, "The Ill-posedness of Derivative Interpolation and Regularized Derivative Interpolation," EURASIP Journal on Advances in Signal Processing, Springer, July 2020.
[20] W. Chen, "Regularized Derivative Interpolation for Two Dimensional Band-limited Function," Journal of Signal Processing, Elsevier, July 2021.
[21] W. Chen, "The Ill-Posedness of Derivative Interpolation and Regularized Derivative Interpolation for Non-Bandlimited Functions," Applied Mathematics, vol. 13, no. 1, January 2022.
[22] T. Strohmer, "Numerical Analysis of the Nonuniform Sampling Problem," Journal of Computational and Applied Mathematics, Elsevier, pp. 297-316, October 2000.
[23] L. P. Yaroslavsky, Advances in Sampling Theory and Techniques, SPIE, 2020.
[24] H. Boche and U. J. Monich, "Effective Approximation of Bandlimited Signals and Their Samples," ICASSP 2020, IEEE, pp. 5590-5594, 2020.
[25] S. Karnik, J. Romberg, and M. A. Davenport, "Bandlimited signal reconstruction from nonuniform samples," Proc. Workshop on Signal Processing with Adaptive Sparse Structured Representations (SPARS), IEEE, 2019.
[26] A. H. Siddiqi and P. Manchanda, "Sampling and Approximation Theorems for Wavelets and Frames on Vilenkin group," International Conference on Sampling Theory and Applications (SampTA), 2017.
[27] Z.-C. Zhang, "An approximating interpolation formula for bandlimited signals in the linear canonical transform domain associated with finite nonuniformly spaced samples," Optik, vol. 127, no. 17, pp. 6927-6932, 2016.
Fig. 3. Example 1: The result of the Shannon Sampling Theorem.
Fig. 4. Example 1: The result of the Reg Sampling Algorithm in [11].
Fig. 5. Example 1: The result of the Reg Sampling Algorithm in this paper.
Fig. 6. Example 2: The result of the Shannon Sampling Theorem.
Fig. 7. Example 2: The result of the Reg Sampling Algorithm in [11].
Fig. 8. Example 2: The result of the Reg Sampling Algorithm in this paper.
Fig. 9. Example 3: The result of the Shannon Sampling Theorem.
Fig. 10. Example 3: The result of the Reg Sampling Algorithm in [11].
Fig. 11. Example 3: The result of the Reg Sampling Algorithm in this paper.
Fig. 12. Example 4: The result of the Shannon Sampling Theorem.
Fig. 13. Example 4: The result of the Reg Sampling Algorithm in [11].
Fig. 14. Example 4: The result of the Reg Sampling Algorithm in this paper.

(All figures plot the reconstructions over t = −40 to 40.)