AN INTRODUCTION TO COMPRESSIVE SENSING

Rodrigo B. Platte
School of Mathematical and Statistical Sciences
APM/EEE598 Reverse Engineering of Complex Dynamical Networks
OUTLINE
1. INTRODUCTION
2. INCOHERENCE
3. RIP
4. POLYNOMIAL MATRICES
5. DYNAMICAL SYSTEMS
THE RICE DSP WEBSITE
Resources for papers, codes, and more ....
http://www.dsp.ece.rice.edu/cs/
References:
Emmanuel Candès, Compressive sampling. (Proc. International Congress of Mathematicians, 3, pp. 1433-1452, Madrid, Spain, 2006)
Richard Baraniuk, A Lecture on Compressive Sensing. (IEEE Signal Processing Magazine, July 2007)
Emmanuel Candès and Michael Wakin, An introduction to compressive sampling. (IEEE Signal Processing Magazine, 25(2), pp. 21-30, March 2008)
m-files and some links are available on the course page
VIDEO LECTURES
Some well-known CS people:
Emmanuel Candès (Stanford University)
Sequence of papers with Terence Tao and Justin Romberg in 2004.
David Donoho (Stanford University)
Richard Baraniuk (Rice University)
Ronald A. DeVore (Texas A&M)
Anna C. Gilbert (Univ. of Michigan)
Jared Tanner (University of Edinburgh)
. . .
A good way to learn the basics of CS is to watch these IMA video
lectures:
http://www.ima.umn.edu/videos/
→ IMA New Directions short courses → Compressive Sampling and
Frontiers in Signal Processing (two weeks long)
UNDERDETERMINED SYSTEMS
[Image: cafepress.com T-shirt, $20]
UNDERDETERMINED SYSTEMS
IEEE SIGNAL PROCESSING MAGAZINE [119] JULY 2007
SOLUTION
DESIGNING A STABLE MEASUREMENT MATRIX
The measurement matrix Φ must allow the reconstruction of the length-N signal x from M < N measurements (the vector y). Since M < N, this problem appears ill-conditioned. If, however, x is K-sparse and the K locations of the nonzero coefficients in s are known, then the problem can be solved provided M ≥ K. A necessary and sufficient condition for this simplified problem to be well conditioned is that, for any vector v sharing the same K nonzero entries as s and for some ε > 0,
1 − ε ≤ ‖Θv‖_2 / ‖v‖_2 ≤ 1 + ε.   (3)
That is, the matrix Θ must preserve the lengths of these particular K-sparse vectors. Of course, in general the locations of the K nonzero entries in s are not known. However, a sufficient condition for a stable solution for both K-sparse and compressible signals is that Θ satisfies (3) for an arbitrary 3K-sparse vector v. This condition is referred to as the restricted isometry property (RIP) [1]. A related condition, referred to as incoherence, requires that the rows {φ_j} of Φ cannot sparsely represent the columns {ψ_i} of Ψ (and vice versa).
Direct construction of a measurement matrix Φ such that Θ = ΦΨ has the RIP requires verifying (3) for each of the (N choose K) possible combinations of K nonzero entries in the vector v of length N. However, both the RIP and incoherence can be achieved with high probability simply by selecting Φ as a random matrix. For instance, let the matrix elements φ_{j,i} be independent and identically distributed (iid) random variables from a Gaussian probability density function with mean zero and variance 1/N [1], [2], [4]. Then the measurements y are merely M different randomly weighted linear combinations of the elements of x, as illustrated in Figure 1(a). The Gaussian measurement matrix Φ has two interesting and useful properties:
- The matrix Φ is incoherent with the basis Ψ = I of delta spikes with high probability. More specifically, an M × N iid Gaussian matrix Θ = ΦI = Φ can be shown to have the RIP with high probability if M ≥ cK log(N/K), with c a small constant [1], [2], [4]. Therefore, K-sparse and compressible signals of length N can be recovered from only M ≥ cK log(N/K) ≪ N random Gaussian measurements.
- The matrix Φ is universal in the sense that Θ = ΦΨ will be iid Gaussian and thus have the RIP with high probability regardless of the choice of orthonormal basis Ψ.
DESIGNING A SIGNAL RECONSTRUCTION ALGORITHM
The signal reconstruction algorithm must take the M measurements in the vector y, the random measurement matrix Φ (or the random seed that generated it), and the basis Ψ and reconstruct the length-N signal x or, equivalently, its sparse coefficient vector s. For K-sparse signals, since M < N in (2) there are infinitely many s' that satisfy Θs' = y. This is because if Θs = y then Θ(s + r) = y for any vector r in the null space N(Θ) of Θ. Therefore, the signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − M)-dimensional translated null space H = N(Θ) + s.
- Minimum ℓ2 norm reconstruction: Define the ℓp norm of the vector s as (‖s‖_p)^p = Σ_{i=1}^{N} |s_i|^p. The classical approach to inverse problems of this type is to find the vector in the translated null space with the smallest ℓ2 norm (energy) by solving
ŝ = argmin ‖s'‖_2 such that Θs' = y.   (4)
This optimization has the convenient closed-form solution ŝ = Θ^T (ΘΘ^T)^{-1} y. Unfortunately, ℓ2 minimization will almost never find a K-sparse solution, returning instead a nonsparse ŝ with many nonzero elements.
- Minimum ℓ0 norm reconstruction: Since the ℓ2 norm measures signal energy and not signal sparsity, consider the ℓ0 norm that counts the number of non-zero entries in s. (Hence a K-sparse vector has ℓ0 norm equal to K.) The modified optimization
ŝ = argmin ‖s'‖_0 such that Θs' = y   (5)
can recover a K-sparse signal exactly with high probability using only M = K + 1 iid Gaussian measurements [5]. Unfortunately, solving (5) is both numerically unstable and NP-complete, requiring an exhaustive enumeration of all (N choose K) possible locations of the nonzero entries in s.
- Minimum ℓ1 norm reconstruction: Surprisingly, optimization based on the ℓ1 norm
ŝ = argmin ‖s'‖_1 such that Θs' = y   (6)
[FIG1] (a) Compressive sensing measurement process with a random Gaussian measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. The vector of coefficients s is sparse with K = 4. (b) Measurement process with Θ = ΦΨ. There are four columns that correspond to nonzero s_i coefficients; the measurement vector y is a linear combination of these columns.
Solve
Ax = b,
where A is m × N and m < N.
In CS we want to obtain sparse solutions, i.e., x_j ≈ 0 for most j.
One option: minimize ‖x‖_1 subject to Ax = b.
‖x‖_p = ( |x_1|^p + |x_2|^p + ⋯ + |x_N|^p )^{1/p}
Why p = 1?
Remark: the location of the nonzero x_j's is not known in advance.
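The minimum-ℓ1 problem above can be attacked directly as a linear program. Below is a minimal MATLAB sketch of that reformulation (not one of the course m-files); it assumes the Optimization Toolbox for linprog, and the matrix A, vector b, and sparsity level are made-up toy choices.

% Sketch: minimize ||x||_1 subject to A*x = b, recast as a linear program.
% Write x = u - v with u, v >= 0; then ||x||_1 = sum(u + v).
% (Assumes the Optimization Toolbox; A, b below are a toy example.)
rng(0);
m = 20; N = 100;                    % underdetermined: m < N
A = randn(m, N);
x0 = zeros(N, 1); x0(randperm(N, 5)) = randn(5, 1);   % 5-sparse ground truth
b = A * x0;

cost = ones(2*N, 1);                % objective: sum(u) + sum(v)
Aeq  = [A, -A];                     % A*(u - v) = b
lb   = zeros(2*N, 1);
uv   = linprog(cost, [], [], Aeq, b, lb, []);
x1   = uv(1:N) - uv(N+1:end);       % recovered minimum-l1 solution

fprintf('recovery error: %.2e\n', norm(x1 - x0)/norm(x0));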
WHY ℓ1?
Unit ball: ℓ0, ℓ1/2, ℓ1, ℓ2, ℓ4, ℓ∞
‖x‖_p = ( |x_1|^p + ⋯ + |x_N|^p )^{1/p}
or, for 0 ≤ p < 1,
‖x‖_p = |x_1|^p + ⋯ + |x_N|^p
‖x‖_0 = # of nonzero entries in x
ℓ0: ideal (?) but leads to an NP-complete problem.
ℓp with p < 1 is not a norm (the triangle inequality fails). Also not practical.
ℓ2: computationally easy but does not lead to sparse solutions.
The unique solution of minimum ℓ2 norm is (pseudo-inverse)
x = A^T (A A^T)^{-1} b
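As a quick illustration of the last point, the sketch below (not from the course materials) computes the minimum-ℓ2 solution by the pseudo-inverse formula for a toy underdetermined system and counts how many of its entries are numerically nonzero; sizes and data are illustrative.

% Sketch: minimum-l2-norm solution via x = A'*(A*A')^{-1}*b.
% It matches b exactly but is almost never sparse. (Toy data, no toolboxes.)
rng(1);
m = 20; N = 100;
A = randn(m, N);
x0 = zeros(N, 1); x0(randperm(N, 5)) = 1;   % sparse vector that generated b
b = A * x0;

x2 = A' * ((A*A') \ b);             % minimum-l2 solution (same as pinv(A)*b)
fprintf('residual ||A*x2 - b|| = %.1e\n', norm(A*x2 - b));
fprintf('nonzeros in x0: %d, entries of x2 above 1e-6: %d\n', ...
        nnz(x0), sum(abs(x2) > 1e-6));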
SPARSITY AND THE ℓ1-NORM (2D CASE)
EXAMPLE
a_1 x_1 + a_2 x_2 = b_1
[Figure: the line a_1 x_1 + a_2 x_2 = b_1 in the (x_1, x_2) plane]
SPARSITY AND THE ℓ1-NORM (2D CASE)
EXAMPLE – ℓ2
min_{x_1, x_2} √(x_1^2 + x_2^2) subject to a_1 x_1 + a_2 x_2 = b_1
[Figure: the constraint line and the ℓ2 ball √(x_1^2 + x_2^2) = 0.8944, with the regions √(x_1^2 + x_2^2) > 0.8944 and < 0.8944 labeled]
SPARSITY AND THE ℓ1-NORM (2D CASE)
EXAMPLE – ℓ1
min_{x_1, x_2} |x_1| + |x_2| subject to a_1 x_1 + a_2 x_2 = b_1
[Figure: the constraint line and the ℓ1 ball |x_1| + |x_2| = 1, with the regions |x_1| + |x_2| < 1 and > 1 labeled]
MINIMIZING ‖x‖_2
Recall Parseval's formula:
f(t) = Σ_{k=0}^{N} x_k φ_k(t), with φ_k orthonormal in L^2.
‖f‖_2^2 = Σ_{k=0}^{N} |x_k|^2.
Also, the ℓ2 norm penalizes large values heavily, while small values barely affect the norm. In general it will not give a sparse representation!
See Matlab experiment! (Test-l1-l2.m)
MINIMIZING ‖x‖_1
Matlab experiment! (Test-l1-l2.m)
Note: the solution may not be unique!
Solve an optimization problem (in practice O(N^3) operations).
Several codes are available for CS; see:
http://www.dsp.ece.rice.edu/cs/
A SIMPLE EXAMPLE
[Figure: a sampled signal f(t) on [0, 1]]
f(t) = (1/√N) Σ_{k=1}^{N} x_k sin(πkt)
N = 1024, number of samples: m = 50
A SIMPLE EXAMPLE
System of equations:
f(t_j) = (1/√1024) Σ_{k=1}^{1024} x_k sin(πk t_j),   j = 1, . . . , 50
SOLVE:
min ‖x‖_1 subject to Ax = b,
where A has 50 rows and 1024 columns,
A_{j,k} = (1/√1024) sin(πk t_j),   b_j = f(t_j).
Matlab code on Blackboard: "SineExample.m" (uses CVX)
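SineExample.m itself is on Blackboard; the following is only a minimal sketch of what such a script might look like, assuming CVX is installed. The random sample locations and the sparsity of x0 are illustrative choices, not necessarily the ones used in class.

% Sketch of the sine-sampling experiment, assuming CVX is installed.
% (Sample locations and the sparsity of x0 are illustrative.)
rng(2);
N = 1024; m = 50;
t = rand(m, 1);                              % m random samples in (0,1)
A = sin(pi * t * (1:N)) / sqrt(N);           % A(j,k) = sin(pi*k*t_j)/sqrt(N)
x0 = zeros(N, 1); x0(randperm(N, 10)) = randn(10, 1);   % 10-sparse coefficients
b = A * x0;                                  % samples of f at the t_j

cvx_begin quiet
    variable x(N)
    minimize( norm(x, 1) )
    subject to
        A * x == b;
cvx_end

fprintf('relative error: %.2e\n', norm(x - x0)/norm(x0));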
A SIMPLE EXAMPLE
[Figure: original and decoded coefficients overlaid]
Recovery of coefficients is accurate to almost machine precision!
‖x − x_0‖_2 / ‖x_0‖_2 = 7.9611... × 10^{-11}
WHY SPARSITY?
Sparsity is often a good regularization criterion because most signals have structure.
Take a picture! (this one has 512 × 512 pixels)
Gray scale please!
Find wavelet coefficients. Daubechies(6,2), 3 vanishing moments.
Make 75% of the coefficients zero.
Restored image from 25% of the coefficients. Relative error ≈ 3%.
Keep only 2% of the coefficients, set 98% to zero.
Reconstructed image from 2% of the coefficients.
[Figures: the 512 × 512 test image at each step of this wavelet-thresholding experiment]
SPARSITY IS NOT SUFFICIENT FOR CS TO WORK!
Example: A is a finite difference matrix
A maps a sparse vector x into another sparse vector y. For example, the bidiagonal matrix
A =
(  1   0   0  ⋯   0 )
( −1   1   0  ⋯   0 )
(  0  −1   1  ⋯   0 )
(  ⋯   ⋯   ⋯  ⋯   ⋯ )
(  0   0   ⋯  −1   1 )
maps the spike x = (0, . . . , 0, 1, 0, . . . , 0)^T to y = Ax = (0, . . . , 0, 1, −1, 0, . . . , 0)^T.
A few samples of y are likely to be all zeros!
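A small sketch of this failure mode (not from the course materials), assuming the Optimization Toolbox for linprog: with a finite-difference A and a single spike x, a random subset of the entries of y = Ax is very likely all zeros, and ℓ1 minimization then returns the zero vector.

% Sketch: sparsity alone is not enough. A is a (coherent) finite-difference
% matrix and x a single spike, so y = A*x is itself sparse; a few random
% samples of y are then very likely all zero and carry no information.
% (Assumes the Optimization Toolbox; sizes are illustrative.)
rng(3);
N = 100; m = 20;
A = eye(N) - diag(ones(N-1,1), -1);          % 1 on diagonal, -1 below
x0 = zeros(N, 1); x0(37) = 1;                % a single spike
y  = A * x0;                                 % sparse: nonzero only at 37, 38
S  = randperm(N, m);                         % m randomly chosen samples of y
As = A(S, :);  bs = y(S);

% minimum-l1 recovery from the sampled equations, as a linear program
cost = ones(2*N, 1);  Aeq = [As, -As];  lb = zeros(2*N, 1);
uv = linprog(cost, [], [], Aeq, bs, lb, []);
xr = uv(1:N) - uv(N+1:end);

fprintf('nonzero measurements: %d of %d\n', nnz(bs), m);
fprintf('recovery error ||xr - x0|| = %.2e\n', norm(xr - x0));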
SPARSITY IS NOT SUFFICIENT FOR CS TO WORK!
The image below is sparse both in the physical domain and in its Haar wavelet coefficients.
A GENERAL APPROACH
Sample coefficients in a representation by random vectors:
y = Σ_{k=1}^{N} ⟨y, ψ_k⟩ ψ_k,
where the ψ_k are obtained from orthogonalized Gaussian matrices.
Ax = y  ⇒  Ψ^T A x = Ψ^T y  ⇒  Θ x = z
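A minimal sketch of this approach, assuming CVX is installed; the signal is taken to be sparse in the physical domain, and the sizes are illustrative.

% Sketch: measure a sparse signal against m orthogonalized Gaussian vectors
% and recover it by l1 minimization. (Assumes CVX; toy sizes.)
rng(4);
N = 128; m = 40;
Psi = orth(randn(N));                 % orthonormalized Gaussian basis of R^N
x0  = zeros(N, 1); x0(randperm(N, 6)) = randn(6, 1);   % sparse in the identity basis
z   = Psi(:, 1:m)' * x0;              % m random incoherent measurements

cvx_begin quiet
    variable x(N)
    minimize( norm(x, 1) )
    subject to
        Psi(:, 1:m)' * x == z;
cvx_end
fprintf('relative error: %.2e\n', norm(x - x0)/norm(x0));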
INCOHERENCE + SPARSITY IS NEEDED
INCOHERENCE
[Diagram: "sparse representation" in one basis, "sample here" in an incoherent basis]
INCOHERENCE + SPARSITY IS NEEDED
THEOREM (CANDÈS, ROMBERG, TAO)
Assume that x is S-sparse and that we are given K Fourier coefficients with frequencies selected uniformly at random. Suppose that the number of observations obeys
K ≥ C S log N.
Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C is of the form 22(δ + 1), then the probability of success exceeds 1 − O(N^{−δ}).
INCOHERENCE + SPARSITY IS NEEDED
NUMERICAL EXPERIMENT
Signal recovered from Fourier coefficients:
[Figure: original and decoded signals overlaid]
Code ”FourierSampling.m”.
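A sketch in the spirit of FourierSampling.m (the actual code is on the course page), assuming the Optimization Toolbox for linprog; the real and imaginary parts of the selected DFT rows are stacked so the ℓ1 problem stays real-valued, and the sizes are illustrative.

% Sketch: observe K randomly chosen Fourier coefficients of an S-sparse
% signal and recover it by l1 minimization (as a linear program).
rng(5);
N = 512; S = 15; K = 120;
x0 = zeros(N, 1); x0(randperm(N, S)) = randn(S, 1);

F    = fft(eye(N));                    % DFT matrix
rows = randperm(N, K);                 % K frequencies chosen at random
A    = [real(F(rows, :)); imag(F(rows, :))];   % real-valued measurement matrix
b    = A * x0;                         % observed Fourier data (re/im stacked)

cost = ones(2*N, 1);  Aeq = [A, -A];  lb = zeros(2*N, 1);
uv = linprog(cost, [], [], Aeq, b, lb, []);
xr = uv(1:N) - uv(N+1:end);
fprintf('relative error: %.2e\n', norm(xr - x0)/norm(x0));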
INCOHERENT SAMPLING
Let (Φ, Ψ) be orthonormal bases of R^n.
f(t) = Σ_{i=1}^{n} x_i ψ_i(t)   and   y_k = ⟨f, φ_k⟩,   k = 1, . . . , m.
Representation matrix: Ψ = [ψ_1 ψ_2 ⋯ ψ_n]
Sensing matrix: Φ = [φ_1 φ_2 ⋯ φ_n]
COHERENCE BETWEEN Φ AND Ψ
µ(Φ, Ψ) = √n · max_{1≤j,k≤n} |⟨φ_k, ψ_j⟩|.
Remark: µ(Φ, Ψ) ∈ [1, √n]
Upper bound: Cauchy-Schwarz.
Lower bound: Ψ^T Φ is also orthonormal, hence Σ_j |⟨φ_k, ψ_j⟩|^2 = 1 ⇒ max_j |⟨φ_k, ψ_j⟩| ≥ 1/√n.
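The coherence is easy to compute numerically. A short sketch (illustrative size n, no toolboxes needed):

% Sketch: mu(Phi, Psi) = sqrt(n) * max_{j,k} |<phi_k, psi_j>| for two pairs.
n = 64;
Phi_spike   = eye(n);                         % spike (identity) basis
Psi_fourier = fft(eye(n)) / sqrt(n);          % orthonormal DFT basis
Psi_random  = orth(randn(n));                 % orthonormalized Gaussian basis

mu = @(Phi, Psi) sqrt(n) * max(max(abs(Phi' * Psi)));
fprintf('mu(spike, Fourier) = %.3f (minimal, = 1)\n', mu(Phi_spike, Psi_fourier));
fprintf('mu(spike, random)  = %.3f (about sqrt(2*log n) = %.3f)\n', ...
        mu(Phi_spike, Psi_random), sqrt(2*log(n)));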
A GENERAL RESULT FOR SPARSE RECOVERY
f(t) = Σ_{i=1}^{n} x_i ψ_i(t)   and   y_k = ⟨f, φ_k⟩,   k = 1, . . . , m.
Consider the optimization problem:
min_{x ∈ R^n} ‖x‖_1 subject to y_k = ⟨Ψx, φ_k⟩, k = 1, . . . , m.
THEOREM (CANDÈS AND ROMBERG, 2007)
Fix f ∈ R^n and suppose that the coefficient sequence x of f in the basis Ψ is S-sparse. Select m measurements in the Φ domain uniformly at random. Then if
m ≥ C µ^2(Φ, Ψ) S log(n/δ)
for some positive constant C, the solution of the problem above is exact with probability exceeding 1 − δ.
EXAMPLES OF INCOHERENT BASES
Φ is the identity (φ_k(t) = δ(t − k)) and Ψ is the Fourier basis. The time-frequency pair obeys µ(Φ, Ψ) = 1.
Noiselets and Haar wavelets have coherence √2.
Random matrices are largely incoherent with any fixed basis Ψ (coherence about √(2 log n)).
Matlab example: ’measurementsl1.m’
MULTIPLE SOLUTIONS OF MIN ℓ1-NORM
f(t) = a_0/2 + Σ_{k=1}^{N} a_k cos(πkt) + Σ_{k=1}^{N} b_k sin(πkt),   t ∈ [−1, 1]
Data: f(−1) = 1, f(0) = 1, f(1) = 1
Even function: b_k = 0
Solutions of min ℓ1: {a_2 = 1, a_k = 0 (k ≠ 2)}, {a_4 = 1, a_k = 0 (k ≠ 4)}, . . .
THE RESTRICTED ISOMETRY PROPERTY (RIP)
How about signals that are not exactly sparse?
ISOMETRY CONSTANTS
For each s = 1, 2, . . . , define the isometry constant δ_s of a matrix A as the smallest number such that
(1 − δ_s) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ_s) ‖x‖_2^2
holds for all s-sparse vectors x.
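Computing δ_s exactly requires checking every support of size s, so the sketch below (illustrative sizes, no toolboxes) only gives a Monte Carlo lower bound on δ_s by sampling random supports.

% Sketch: a Monte Carlo *lower bound* on the restricted isometry constant
% delta_s of A, by checking the extreme singular values on random supports.
rng(6);
m = 40; N = 120; s = 5; trials = 2000;
A = randn(m, N) / sqrt(m);            % iid Gaussian columns, roughly unit norm

delta = 0;
for t = 1:trials
    T  = randperm(N, s);              % a random support of size s
    sv = svd(A(:, T));                % singular values of the s columns
    delta = max([delta, sv(1)^2 - 1, 1 - sv(end)^2]);
end
fprintf('estimated (lower bound on) delta_%d: %.3f\n', s, delta);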
THE RESTRICTED ISOMETRY PROPERTY (RIP)
THEOREM (CANDÈS, 2007?)
Assume δ_{2s} < √2 − 1. Then
x* := argmin_{x ∈ R^n} ‖x‖_1 subject to y = Ax
obeys
‖x* − x‖_2 ≤ C ‖x − x_s‖_1 / √s,
where x_s is the vector x with all but the largest s components set to 0. If x is s-sparse (exactly), then the recovery is exact.
RIP - BASIC IDEA
We want A to preserve the norm of s-sparse vectors.
‖Ax_1 − Ax_2‖_2^2 should not be small for s-sparse vectors x_1, x_2.
We want 0 < c ‖x_1 − x_2‖_2^2 ≤ ‖A(x_1 − x_2)‖_2^2 for all s-sparse x_1, x_2.
If δ_{2s} = 1, then ‖Az‖_2 = 0 for some 2s-sparse z,
z = x_1 − x_2 with x_1 and x_2 both s-sparse.
RIP - REMARKS
The theorem above is deterministic.
How does one show that column vectors taken from arbitrary subsets are nearly orthogonal?
Isometry constants are established for random matrices (randomness is back).
For the Fourier basis, m ≥ C s log^4 n.
RIP is too conservative (Donoho, Tanner 2010).
POLYNOMIAL MATRICES
Back to Dr. Lai’s dynamical system problem:
dx/dt = F(x(t)),
with
[F(x(t))]_j = Σ_{k_1} Σ_{k_2} ⋯ Σ_{k_m} (a_j)_{k_1 k_2 ⋯ k_m} x_1^{k_1}(t) ⋯ x_m^{k_m}(t)
This does not fit into classical CS results.
The monomial basis becomes ill-conditioned even for small powers.
We know that the condition number of Vandermonde matrices depends on where x is evaluated.
Some CS results are available for orthogonal polynomials.
ORTHOGONAL POLYNOMIALS
For Chebyshev polynomial expansions we have
f(x) ≈ Σ_{k=0}^{N} λ_k cos(k arccos(x)).
If we let y = arccos(x), or x = cos(y),
f(cos(y)) ≈ Σ_{k=0}^{N} λ_k cos(ky).
A Chebyshev expansion is equivalent to a cosine expansion in the variable y.
Results carry over from Fourier expansions, but with samples chosen independently according to the Chebyshev measure
dν(x) = π^{-1} (1 − x^2)^{-1/2} dx
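A quick numerical check of the identity T_k(x) = cos(k arccos(x)) against the three-term recurrence, together with one way to draw points from the Chebyshev measure (x = cos(θ) with θ uniform); purely illustrative, no toolboxes needed.

% Sketch: T_k(x) = cos(k*acos(x)) vs. the recurrence T_k = 2x T_{k-1} - T_{k-2}.
x  = linspace(-1, 1, 201)';
K  = 8;
T  = zeros(numel(x), K+1);
T(:, 1) = 1;  T(:, 2) = x;                    % T_0 = 1, T_1 = x
for k = 2:K
    T(:, k+1) = 2*x .* T(:, k) - T(:, k-1);   % three-term recurrence
end
err = max(max(abs(T - cos(acos(x) * (0:K)))));
fprintf('recurrence vs. cos(k*acos(x)): max difference %.2e\n', err);

% Sampling from the Chebyshev measure: theta uniform in (0, pi), x = cos(theta).
x_cheb = cos(pi * rand(50, 1));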
SPARSE LEGENDRE EXPANSIONS
Rauhut and Ward (2010) proved that the same type of sampling applies to Legendre expansions.
How about polynomial expansions as power series?
ROBERT THOMPSON'S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
Discovering Sparse Polynomials
How about if we choose just a few function values?
Φ_m: m randomly chosen rows of the identity matrix.
Assume that x is K-sparse.
The t_d are chosen according to some distribution in (−1, 1).
y_m = ( f(t_{d_1}), f(t_{d_2}), . . . , f(t_{d_m}) )^T = Φ_m V x,
where V is the Vandermonde matrix with entries V_{jk} = t_j^k, j, k = 0, . . . , N, and x = (x_1, x_2, . . . , x_N)^T is the coefficient vector.
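A sketch of a single cell of this experiment, assuming CVX is installed; N, m, k and the sampling distribution are illustrative, and whether recovery succeeds depends on the draw, which is the point of the diagrams that follow.

% Sketch: recover a k-sparse coefficient vector in the monomial basis from
% m random samples in (-1, 1) by l1 minimization. (Assumes CVX.)
rng(7);
N = 36; m = 18; k = 4;
t  = 2*rand(m, 1) - 1;                        % m sample points in (-1, 1)
V  = t .^ (0:N);                              % m x (N+1) Vandermonde matrix, V(j,:) = t_j.^(0:N)
x0 = zeros(N+1, 1); x0(randperm(N+1, k)) = randn(k, 1);
b  = V * x0;

cvx_begin quiet
    variable x(N+1)
    minimize( norm(x, 1) )
    subject to
        V * x == b;
cvx_end
fprintf('relative error: %.2e\n', norm(x - x0)/norm(x0));

Swapping V for the Chebyshev matrix C (two slides below) is the only change needed for the Chebyshev-basis variant.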
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
How Well Does It Work?
[Figure: 1d polynomial recovery for N = 36 as a function of m/N and k/m, uniform sampling (left) vs. Chebyshev sampling (right)]
Each pixel, 50 experiments: choose a random polynomial with k non-zero Gaussian i.i.d. coefficients, measure m samples, and attempt to recover the polynomial coefficients.
Sampling at Chebyshev points gives (very) slightly better results than uniform points.
Increasing m doesn't make as much difference as might be expected.
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
Comparison With Chebyshev-Sparse Polynomials
Consider linear combinations of Chebyshev polynomials:
y = Σ_{i=1}^{N} x_i T_i(t),   T_i(t) = cos(i arccos(t))
Φ_m: m randomly chosen rows of the identity matrix.
Assume that x is K-sparse.
The t_d are chosen according to some distribution in (−1, 1).
y_m = ( f(t_{d_1}), f(t_{d_2}), . . . , f(t_{d_m}) )^T = Φ_m C x,
where C has entries C_{jk} = T_k(t_j), j, k = 0, . . . , N, and x = (x_1, x_2, . . . , x_N)^T.
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
How Well Does It Work?
[Figure: left, Vandermonde basis: 1d polynomial recovery for N = 36 with Chebyshev sampling; right, Chebyshev basis: sparse 1d Chebyshev polynomial recovery, N = 36; both as a function of m/N and k/m]
Using Chebyshev basis functions, we realize improvement as m
increases.
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
Increasing m helps
Columns of C are orthogonal.
All vectors will be distinguishable if we use full C.
If we use less than the full C, orthogonality is lost and some vectors start to become indistinguishable.
[Figure: V (left) and C (right), color scale from −0.8 to 1]
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
Discovering 2-D Sparse Polynomials
What about 2-D polynomials?
In the natural basis: f(t, u) = Σ_{i+j=0,...,Q} x_{ij} t^i u^j
(t_d, u_d) chosen according to some distribution in (−1, 1) × (−1, 1).
y_m = ( f(t_{d_1}, u_{d_1}), f(t_{d_2}, u_{d_2}), . . . , f(t_{d_m}, u_{d_m}) )^T = Φ_m A x,
where row j of A is ( 1, t_j, u_j, t_j u_j, t_j^2, u_j^2, . . . ) and x = ( x_00, x_10, x_01, x_11, x_20, . . . )^T.
ROBERT THOMPSON’S EXPERIMENTS
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Sparse Polynomial Discovery
How Well Does It Work?
[Figure: 2d polynomial recovery, N = 36, as a function of m/N and k/m]
Similar to 1-d results.
Again increasing m doesn’t change much.
ROBERT THOMPSON’S EXPERIMENTS (BACK TO DYNAMICAL SYSTEMS)
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Dynamical System Discovery
Example - Logistic Map
x_{n+1} = f(x_n) = r x_n (1 − x_n)
Coefficient vector: (0, r, −r, 0, . . . )
We can recover the system equation in the chaotic regime using about 10 or more sample pairs.
[Figure: left, sampling the logistic map, m = 10 (x_n vs. n); right, recovery error ‖c* − c‖_2 vs. m for the logistic map with r = 3.7]
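A sketch of this experiment, assuming CVX is installed; the degree bound, number of sample pairs, and initial condition are illustrative choices rather than the ones used to produce the figure.

% Sketch: recover the logistic map f(x) = r*x*(1-x) from a short time series
% by expanding f in powers of x and minimizing the l1 norm of the coefficients.
rng(8);
r = 3.7;  m = 10;  deg = 15;
x = zeros(m+1, 1);  x(1) = 0.3;
for n = 1:m
    x(n+1) = r * x(n) * (1 - x(n));           % generate m sample pairs (x_n, x_{n+1})
end

A  = x(1:m) .^ (0:deg);                       % A(n,j+1) = x_n^j
b  = x(2:m+1);                                % x_{n+1} = f(x_n)
c0 = zeros(deg+1, 1); c0(2) = r; c0(3) = -r;  % true coefficients (0, r, -r, 0, ...)

cvx_begin quiet
    variable c(deg+1)
    minimize( norm(c, 1) )
    subject to
        A * c == b;
cvx_end
fprintf('coefficient error ||c - c0||_2 = %.2e\n', norm(c - c0));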
ROBERT THOMPSON’S EXPERIMENTS (BACK TO DYNAMICAL SYSTEMS)
Reverse Engineering Dynamical Systems From Time-Series Data Using Compressed Sensing Techniques
Dynamical System Discovery
How Well Does It Work?


[Figure: recovery results as a function of m (5 to 35) and the logistic-map parameter r (2.4 to 4), color scale 0 to 0.9]
Sensitive to the dynamics determined by r.
(Bifurcation diagram: Wikipedia).
FINAL REMARKS
As previously pointed out by Dr. Lai, recovery seems impractical with a monomial basis of large degree. A change of basis to orthogonal polynomials results in full (non-sparse) coefficient vectors.
Considering small-degree expansions in high dimensions: what is the optimal sampling strategy?
How about a system of PDEs? For example,
u_t = u(1 − u) − uv + Δu
v_t = v(1 − v) + uv + Δv
Thanks! In particular to Robert Thompson and Wen Xu.