
AN IMPROVED K-SPARSE IMAGE PROCESSING USING

PBD

By
N.RESHMA
960417621318
Of
C.S.I INSTITUTE OF TECHNOLOGY, THOVALAI
A PROJECT REPORT

Submitted to the
FACULTY OF INFORMATION AND COMMUNICATION ENGINEERING

In partial fulfillment of the requirements

for the award of the degree

of

MASTER OF COMPUTER APPLICATIONS

ANNA UNIVERSITY

CHENNAI
May 2020
ANNA UNIVERSITY, CHENNAI

BONAFIDE CERTIFICATE

Certified that this Report titled “AN IMPROVED K-SPARSE IMAGE PROCESSING
USING PBD” is the bonafide work of N. RESHMA (960417621318) who carried out the
work under my supervision. Certified further that to the best of my knowledge the work
reported herein does not form part of any other thesis or dissertation on the basis of which
a degree or award was conferred on an earlier occasion on this or any other candidate.

Head of the Department Supervisor


Mrs. M. Julie Emerald Jiju, MCA, M.Tech, M.Phil		Mrs. V. Merin Shobi, MCA, M.Phil
Department of M.C.A		Department of M.C.A
C.S.I Institute of Technology		C.S.I Institute of Technology
Thovalai – 629302		Thovalai – 629302

Submitted for the project Viva-Voce examination held on

Internal Examiner External Examiner



ABSTRACT

We have previously shown that augmenting orthogonal matching pursuit


(OMP) with an additional step in the identification stage of each pursuit iteration
yields improved k-sparse reconstruction and denoising performance relative to
baseline OMP. At each iteration a “path,” or geodesic, is generated between the two
dictionary atoms that are most correlated with the residual and from this path a new
atom that has a greater correlation to the residual than either of the two bracketing
atoms is selected. Here, we provide new computational results illustrating
improvements in sparse coding and denoising on canonical datasets using both learned
and structured dictionaries. Two methods of constructing a path are investigated for
each dictionary type: the Euclidean geodesic formed by a linear combination of the
two atoms and the 2-Wasserstein geodesic corresponding to the optimal transport map
between the atoms. We prove here the existence of a higher-correlation atom in the
Euclidean case under assumptions on the two bracketing atoms and introduce
algorithmic modifications to improve the likelihood that the bracketing atoms meet
those conditions. Although we demonstrate our augmentation on OMP alone, in
general it may be applied to any reconstruction algorithm that relies on the selection
and sorting of high-similarity atoms during an analysis or identification phase.

ACKNOWLEDGEMENT

First and foremost, I thank God Almighty for His grace, which enabled me to
complete this project work successfully.
Words cannot express my gratitude to my respected Principal,
Dr. K. DHINESH KUMAR, M.E., Ph.D., for his support and the freedom he gave me
throughout my course.
I would like to thank Mrs. M. JULIE EMERALD JIJU, MCA, M.Phil,
M.Tech, Head of the Department, Department of Computer Applications, for her
valuable help and constant encouragement towards my project.
It is my proud privilege to express my sincere thanks to my staff-in-charge,
Mrs. V. MERIN SHOBI, MCA, M.Phil, Department of Computer Applications, for
providing me with her valuable suggestions and continuous guidance throughout the
development of this project.
My sincere thanks to all faculty members, technical and non-technical, of the
M.C.A. department for their valuable suggestions and support at every step towards
the successful completion of this project.
It is my great pleasure to acknowledge my parents and family members for their
prayers and the generous support they have extended towards the successful
completion of the project.

TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT III

ACKNOWLEDGEMENT V

LIST OF TABLES IX

LIST OF FIGURES X

LIST OF ABBREVIATIONS XI

1 INTRODUCTION 1

1.1 Sparsity 1

1.2 Dictionary learning 1

1.3 Sparse coefficient estimation 2

1.4 Compressed sampling 2

2 LITERATURE REVIEW 3

2.1 Tree-Based Backtracking Orthogonal Matching
Pursuit for Sparse Signal Reconstruction 3

2.2 Fusion of Satellite Images using Compressive
Sampling Matching Pursuit (CoSaMP) Method 4

2.3 Reducing Basis Mismatch in Harmonic Signal
Recovery via Alternating Convex Search 5

2.4 Orthogonal Matching Pursuit for Sparse Signal
Recovery with Noise 6

2.5 Clustering K-SVD for Sparse Representation of
Images 7

2.6 Sparse Data Recovery using Optimized
Orthogonal Matching Pursuit for WSNs 8
3 SYSTEM DESIGN 10

3.1 EXISTING SYSTEM 10

3.2 PROPOSED SYSTEM 10


3.3 SYSTEM SPECIFICATION 11

3.3.1 Software Requirement 11

3.3.2 Hardware Requirement 11

4 SYSTEM IMPLEMENTATION 12

4.1 MODULES 12

4.1.1 Paths Created Between Atom 12

4.1.2 Gaussian Denoising 12

4.1.3 K-Sparse Reconstructions 12

5 SYSTEM ARCHITECTURE 15

5.1 BLOCK DIAGRAM 15

5.2 DATA FLOW DIAGRAM 16

6 LANGUAGE DESCRIPTION 18

6.1 INTRODUCTION 18

6.2 MATLAB's POWER OF COMPUTATIONAL MATHEMATICS 18
6.3 FEATURES OF MATLAB 19

6.4 USES OF MATLAB 19

6.5 UNDERSTANDING THE MATLAB ENVIRONMENT 20
6.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS 23
6.7 COMMANDS 23

6.7.1 Commands for managing a session 24


6.8 INPUT AND OUTPUT COMMAND 24
6.9 M FILES 25

6.10 DATA TYPES AVAILABLE IN MATLAB 25

7 SYSTEM TESTING 27

7.1 INTRODUCTION 27

7.2 TYPES OF TESTS 27


7.2.1 Unit testing 27

7.2.2 Integration testing 27

7.2.3 Functional test 27


7.2.4 System Test 28
7.2.5 White Box Testing 28

7.2.6 Black Box Testing 28


7.3 UNIT TESTING 29
7.4 ACCEPTANCE TESTING 29
8 SCREEN SHOTS 30
9 CONCLUSION 31
10 APPENDIX XII
11 REFERENCES XV

LIST OF TABLES

TABLE NO TABLE NAME PAGE NO

6.6 Operators and Special Characters 23


6.7.1 Commands for Managing a Session 24

6.8 Input and Output Commands 24

6.10 Data Types in MATLAB 25



LIST OF FIGURES

FIGURE NO FIGURE NAME PAGE NO

5.1 Block Diagram 15

5.2.1 Level 0- Paths Created Between Atom 16

5.2.2 Level 1- Gaussian Denoising 16

5.2.3 Level 2- K-Sparse Reconstruction 17

6.5.1 MATLAB desktop environment 20

6.5.2 Current folder 20


6.5.3 Command window 21
6.5.4 Workspace 21

6.5.5 Command history 22



LIST OF ABBREVIATIONS

S.NO ABBREVIATION EXPANSION

1 OMP Orthogonal Matching Pursuit

2 POMP Path Orthogonal Matching Pursuit

3 CS Compressed Sampling

4 ACS Alternating Convex Search

5 RIP Restricted Isometry Property

6 ERC Exact Recovery Condition


7 CoSaMP Compressive Sampling Matching
Pursuit

APPENDIX

A. Proof of Theorem
Theorem A.1 (Straddle Theorem). Let ‖d1‖ = ‖d2‖ = ‖r‖ = 1, ⟨r, d1⟩ = rᵀd1 = cos θ1,
⟨r, d2⟩ = rᵀd2 = cos θ2, and ⟨d1, d2⟩ = d1ᵀd2 = cos θ1,2. If |⟨r, d1⟩| > |⟨r, d⟩| for all
d ∈ D, sgn(cos θ1) = sgn(cos θ2), and cos θ1,2 ∈ [0, cos θ2/cos θ1), then there exists
t* ∈ (0, 1) such that |⟨r, d_t*⟩| > |⟨r, d1⟩|.

Proof. Let ‖d1‖ = ‖d2‖ = ‖r‖ = 1, ⟨r, d1⟩ = rᵀd1 = cos θ1, ⟨r, d2⟩ = rᵀd2 = cos θ2, and
⟨d1, d2⟩ = d1ᵀd2 = cos θ1,2. Assume that |⟨r, d1⟩| > |⟨r, d⟩| for all d ∈ D,
sgn(cos θ1) = sgn(cos θ2), and cos θ1,2 ∈ [0, cos θ2/cos θ1). Define α = cos θ2/cos θ1.
Note that 0 ≤ α < 1 is implied by |⟨r, d1⟩| > |⟨r, d2⟩| together with
sgn(cos θ1) = sgn(cos θ2). Further, we have 1 − α² > 0. Using these facts together with
our assumptions we have

cos θ1,2 < α
0 < 2α − 2 cos θ1,2
0 < 1 − α² < 2α − 2 cos θ1,2 + 1 − α²
0 < (1 − α²)/(2α − 2 cos θ1,2 + 1 − α²) < 1.

From the properties of the reals we have that there exists some number m such that

(1 − α²)/(2α − 2 cos θ1,2 + 1 − α²) < m < 1.

Set t* = m ∈ (0, 1). Consequently,

1 − α² < t*(2α − 2 cos θ1,2 + 1 − α²)
⟹ 0 < t*(2α − 2 cos θ1,2 + 1 − α²) + α² − 1
⟹ 0 < 2t*(α − cos θ1,2) + (1 − t*)(α² − 1).

Rewriting in terms of the angles we have

0 < 2t*(cos θ2/cos θ1 − cos θ1,2) + (1 − t*)((cos θ2)²/(cos θ1)² − 1).

Multiplying through by (cos θ1)² yields

0 < 2t*(cos θ2 cos θ1 − cos θ1,2 (cos θ1)²) + (1 − t*)((cos θ2)² − (cos θ1)²).

Expanding and writing in terms of vector products we have

0 < t*(rᵀd2 d1ᵀr + rᵀd1 d2ᵀr) − 2t* cos θ1,2 (rᵀd1 d1ᵀr) + (1 − t*)(rᵀd2 d2ᵀr − rᵀd1 d1ᵀr).

Multiply through by the positive value (1 − t*)/‖t* d1 + (1 − t*) d2‖² to produce a
single fraction with denominator ‖t* d1 + (1 − t*) d2‖². Next, add
0 = t*² rᵀd1 d1ᵀr − t*² rᵀd1 d1ᵀr to the numerator and regroup terms to get

0 < [ (t*² rᵀd1 d1ᵀr + t*(1 − t*)(rᵀd1 d2ᵀr + rᵀd2 d1ᵀr) + (1 − t*)² rᵀd2 d2ᵀr)
      − (t*² + 2t*(1 − t*) cos θ1,2 + (1 − t*)²) rᵀd1 d1ᵀr ] / ‖t* d1 + (1 − t*) d2‖².

From Lemma III.1 we have that the first group of terms simplifies to rᵀP_{dt*} r and the
second group simplifies to rᵀP_{d1} r. Thus, we have

0 < rᵀP_{dt*} r − rᵀP_{d1} r
0 < ‖P_{dt*} r‖² − ‖P_{d1} r‖²
⟹ ‖P_{d1} r‖² < ‖P_{dt*} r‖².

Finally, using Lemma III.2 we have

|⟨r, d1⟩| < |⟨r, d_t*⟩|.

Thus, we have that there exists a t* ∈ (0, 1) such that d_t* is “closer” to r than d1.
B. Proof of Corollary III.3.2
Corollary A.1.1 (Positive Maximizer). For each pair {d1, d2} ⊂ D, if |⟨r, d1⟩| > |⟨r, d2⟩|
and sgn(cos θ1) = sgn(cos θ2), then there is a single positive t (given by a closed form
in cos θ1, cos θ2, and cos θ1,2) that maximizes ‖P_{dt} r‖² − ‖P_{d1} r‖².

Proof. We seek to maximize ‖P_{dt} r‖² − ‖P_{d1} r‖² over all feasible t. Using
Lemma III.1 together with Eq. 30 we can write

f(t) = ‖P_{dt} r‖² − ‖P_{d1} r‖² = rᵀP_{dt} r − rᵀP_{d1} r,

which is an explicit rational function of t. The function f(t) is differentiable and has two
critical points, t1 and t2, given in closed form; this can be verified by hand or through
the use of software. When sgn(cos θ1) = sgn(cos θ2) and |⟨r, d1⟩| > |⟨r, d2⟩| it can be
seen that t1 < 0 and t2 > 0. Further, it can be shown that for all values of cos θ1, cos θ2,
and cos θ1,2, f″(t2) < 0, implying that t2 is a local maximum of f(t) and the global
maximum when t ≥ 0.
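The straddle behaviour can also be checked numerically. The following NumPy snippet is illustrative only (it is not part of the original proof): the unit vectors and angles below are arbitrarily chosen to satisfy the theorem's hypotheses, and the Euclidean path between the two atoms is sampled to confirm that some path atom correlates with r more strongly than d1.

```python
import numpy as np

# Unit vectors: r along the x-axis, d1 at angle 0.3 rad, d2 at -0.5 rad,
# so |<r,d1>| > |<r,d2>|, both cosines positive, and cos(theta_12) small.
r = np.array([1.0, 0.0])
d1 = np.array([np.cos(0.3), np.sin(0.3)])    # cos(theta1) ~ 0.955
d2 = np.array([np.cos(0.5), -np.sin(0.5)])   # cos(theta2) ~ 0.878

best = 0.0
for t in np.linspace(0.01, 0.99, 99):        # sample the Euclidean path
    v = (1 - t) * d1 + t * d2
    dt = v / np.linalg.norm(v)               # normalized path atom d_t
    best = max(best, abs(r @ dt))

print(abs(r @ d1))        # correlation of the best bracketing atom, ~0.955
print(best > abs(r @ d1)) # True: some path atom is strictly closer to r
```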

CHAPTER 1
INTRODUCTION
Sparse representation aims to model signals as sparse linear combinations of the
atoms in a dictionary, and this technique is widely used in various fields of image
processing. A frequent goal within signal/image processing is to reconstruct or compress
the information contained in a signal by representing it as a linear combination of a set
of reference signals. In the most general case, this reference set is a (possibly
overcomplete) dictionary composed of signal atoms drawn from some underlying signal
model. “Good” models are those that can sparsely represent signals as linear
combinations of relatively few atoms drawn from the dictionary. Signals that can be
represented to within some acceptable error tolerance using at most k atoms are defined
as k-sparse relative to that dictionary. Reconstruction algorithms designed to decompose
the signal into a linear combination of atoms can be designed to terminate based on an
error threshold or a fixed degree of sparsity.
1.1 Sparsity
For fixed sparsity, consideration of all possible atom combinations of that order is
computationally intractable other than for a limited set of problems. A popular and
successful approach to this combinatorial optimization problem is a greedy algorithm
called Matching Pursuit (MP). Standard MP begins by greedily searching for the best
reconstruction produced from a single atom where “best” is determined by the magnitude
of the inner product between the signal and the dictionary atoms. This optimal atom is
scaled by the length of the projection of the signal onto the space spanned by the optimal
atom and is then subtracted from the original signal to yield a residual. The residual
image is then fit in the same greedy way, updated, and the process repeats, such that k
iterations of MP yield a k-sparse representation with some associated final error/residual
Rk.
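The greedy loop just described can be sketched as follows. This is an illustrative NumPy version rather than the project's MATLAB code, and the tiny orthonormal dictionary is a stand-in for a real one:

```python
import numpy as np

def matching_pursuit(x, D, k):
    """Greedy MP: k iterations, each selecting the atom whose inner
    product with the current residual has the largest magnitude."""
    r = x.astype(float).copy()            # residual starts as the signal
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ r)))  # best single atom
        c = D[:, idx] @ r                 # projection length onto that atom
        coeffs[idx] += c                  # accumulate (MP may revisit atoms)
        r -= c * D[:, idx]                # subtract to form the new residual
    return coeffs, r

# Usage: a signal that is exactly 1-sparse in an orthonormal dictionary
D = np.eye(3)
x = np.array([0.0, 2.0, 0.0])
coeffs, r = matching_pursuit(x, D, k=1)
print(coeffs)             # [0. 2. 0.]
print(np.linalg.norm(r))  # 0.0 — one MP iteration recovers it exactly
```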
1.2 Dictionary Learning
The HRPAN image is downsampled by a factor FDS based on the spatial resolution
of the LRMS (here FDS = 4). The LRPAN image ‘Y0’ and the LRMS image ‘Y’ are
divided into small patches of different sizes (3 × 3 to 9 × 9). These patches y0 and yk,
where k stands for the kth band and k = 1, 2, …, N, may also be partially overlapped. The
patches of the LRPAN image are arranged into column vectors and normalized to form ‘Dl’
called the LR dictionary. This is repeated for the HRPAN image to form the HR
dictionary ‘Dh’. The LRMS patches yk and their respective HR patches xk are
represented as sparse in this LR/HR PAN dictionary pair since these dictionaries are
formed from the PAN image that observes the same area as the LRMS bands. The size
of the image, patch, and the overlap area determine the size of the dictionary. This gives
an alternative image fusion method that can be used when large collections of
representative satellite images are not available.
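The patch-extraction and normalization step can be sketched like this. The sketch is in NumPy rather than the report's MATLAB, and the random image and 3 × 3 patch size below are stand-ins for the actual LRPAN data:

```python
import numpy as np

def build_patch_dictionary(img, patch=3):
    """Slide a patch x patch window over img, vectorize each patch into
    a column, and normalize the columns to unit length (one atom each)."""
    h, w = img.shape
    cols = []
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            p = img[i:i + patch, j:j + patch].reshape(-1).astype(float)
            n = np.linalg.norm(p)
            if n > 0:                     # skip flat all-zero patches
                cols.append(p / n)
    return np.stack(cols, axis=1)         # columns are dictionary atoms

rng = np.random.default_rng(0)
img = rng.random((8, 8))                  # stand-in for an LRPAN image
D = build_patch_dictionary(img, patch=3)
print(D.shape)  # (9, 36): 3*3-pixel atoms, one per (8-3+1)^2 patch positions
print(np.allclose(np.linalg.norm(D, axis=0), 1.0))  # True: unit-norm atoms
```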

1.3 Sparse Coefficient Estimation


The Orthogonal Matching Pursuit (OMP) algorithm is used for the recovery of a
high-dimensional sparse signal based on a small number of noisy linear measurements.
OMP selects the column index that has maximum correlation with the current residuals.
Compressive Sampling Matching Pursuit (CoSaMP) is based on OMP and also takes in
a number of vectors to build the approximation at each iteration. CoSaMP selects a preset
number of vectors from the dictionary and then restricts the approximation to a
particular level to obtain the required sparsity. However, the sparsity level is fixed at
the input and cannot be changed. In the proposed method, the sparsity is estimated in
each iteration because the coefficients vary from patch to patch.
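A minimal sketch of OMP with a residual-based stopping rule (so the effective sparsity adapts per signal, as described above) might look like the following. This is an illustrative NumPy version, not the report's MATLAB implementation, and the tolerance is an arbitrary choice:

```python
import numpy as np

def omp(x, D, tol=1e-6, max_k=None):
    """Orthogonal Matching Pursuit: greedily grow a support set, re-fit
    ALL selected coefficients by least squares each iteration (the
    orthogonalization step), and stop once the residual is small."""
    max_k = max_k or D.shape[1]
    support, r = [], x.astype(float).copy()
    coeffs, sol = np.zeros(D.shape[1]), np.array([])
    while len(support) < max_k and np.linalg.norm(r) > tol:
        idx = int(np.argmax(np.abs(D.T @ r)))  # most-correlated atom
        if idx in support:                     # no further progress possible
            break
        support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ sol            # orthogonal residual update
    coeffs[support] = sol
    return coeffs, r

# Usage: recover a 2-sparse combination of orthonormal atoms
D = np.eye(4)
x = 3.0 * D[:, 1] - 2.0 * D[:, 3]
coeffs, r = omp(x, D)
print(np.nonzero(coeffs)[0])     # [1 3]
print(np.linalg.norm(r) < 1e-9)  # True
```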
1.4 Compressed Sampling
Compressed sampling (CS) and, more generally, sparse signal recovery has recently
generated interest within the signal processing community. CS suggests that, given
certain restrictions, a signal (image, waveform, etc.) can be reconstructed with a much
greater bandwidth than that of the device used to acquire the signal. This has obvious
implications for any sensor that converts analog to digital information and for which size,
weight, and power constrain design. One key restriction is that the signal of interest be
accurately modeled using a linear combination of only a few non-zero vectors. A signal
that requires only K such vectors is said to be K-sparse with respect to the collection of
N vectors comprising the model.
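As a toy numerical illustration of the K-sparse definition (an assumption-laden sketch, not taken from the report), one can compute the smallest k achieving a given error tolerance when the dictionary is orthonormal:

```python
import numpy as np

def sparsity_level(x, D, tol=1e-8):
    """Smallest k such that the best k-atom approximation of x in the
    ORTHONORMAL dictionary D has residual norm at most tol."""
    c = D.T @ x                          # exact coefficients (D orthonormal)
    order = np.argsort(-np.abs(c))       # largest-magnitude atoms first
    for k in range(len(c) + 1):
        keep = np.zeros_like(c)
        keep[order[:k]] = c[order[:k]]   # keep the k dominant coefficients
        if np.linalg.norm(x - D @ keep) <= tol:
            return k
    return len(c)

D = np.eye(5)                            # canonical basis as dictionary
x = np.array([0.0, 4.0, 0.0, -1.0, 0.0])
print(sparsity_level(x, D))              # 2: x is 2-sparse in this basis
```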

CHAPTER 2
LITERATURE REVIEW
2.1 Tree-Based Backtracking Orthogonal Matching Pursuit for Sparse Signal
Reconstruction
Sparse reconstruction is one of the three core problems of CS (signal sparse
representation, measurement matrix design, and reconstruction algorithm design).
Existing sparse reconstruction algorithms such as ROMP and CoSaMP employ the
sparsity K as prior knowledge for exact recovery, which has many limitations in
realistic applications. Although the sparsity level is not required by the OMP and
BAOMP algorithms, they do not use the characteristics of a special sparse basis to
improve performance. In this paper, a new Tree-based Backtracking Orthogonal
Matching Pursuit (TBOMP) algorithm is proposed based on the tree model in the
wavelet domain. The algorithm converts the wavelet tree structure into corresponding
relations among candidate atoms without any prior information about the signal
sparsity level. Moreover, unreliable atoms can be deleted according to the
backtracking step. Compared with other compressive sensing algorithms (OMP,
ROMP, and BAOMP), the signal reconstruction results of TBOMP outperform the
above-mentioned CS algorithms.
These characteristics of the tree structure provide a new way to study
reconstruction algorithms. Thanks to the tree structure of the wavelet coefficients, when
the signal is sparsely represented by the wavelet transform, the structure also provides
a clue for the selection of atoms in the reconstruction algorithm, which greatly improves
the reliability of atom selection. The coefficients of a wavelet decomposition include
low-frequency coefficients and high-frequency coefficients (scaling coefficients and
wavelet coefficients). The more levels of wavelet decomposition, the fewer the
low-frequency coefficients, and the more important information is retained in the
high-frequency coefficients. Compared with the high-frequency coefficients, the number
of low-frequency coefficients is much smaller if the decomposition level is large enough.
Since the low-frequency coefficients play an important role in wavelet reconstruction,
in the proposed algorithm only the high-frequency coefficients are measured by the
measurement matrix. For the reconstruction, the reconstructed high-frequency
coefficients ω̂ are combined with the unprocessed low-frequency coefficients, and the
inverse wavelet transform is applied to obtain a reconstruction x̂ of the original signal x.
OMP is quite fast, both theoretically and experimentally. It makes n iterations,
where each iteration amounts to a multiplication by an N×M matrix Φ* (computing the
observation vector) and solving a least-squares problem of dimension at most M×n.
This yields a strongly polynomial running time. In practice, OMP is observed to run
faster and is easier to implement than l1-minimization. OMP is also quite transparent:
at each iteration, it selects a new coordinate from the support of the signal α in a very
specific and natural way. In contrast, the known l1-minimization solvers, such as the
simplex method and interior-point methods, compute a path toward the solution.
However, the geometry of l1-minimization is clear, whereas the analysis of greedy
algorithms can be difficult simply because they are iterative. On the other hand, OMP
has weaker guarantees of exact recovery. Unlike l1-minimization, the guarantees of
OMP are nonuniform: for each fixed sparse signal α, and not for all signals
simultaneously, the algorithm performs correctly with high probability.
Rauhut has shown that uniform guarantees for OMP are impossible for natural
random measurement matrices. Moreover, OMP’s condition on measurement matrices
is more restrictive than the Restricted Isometry Condition. In particular, it is not
known whether OMP succeeds in the important class of partial Fourier measurement
matrices.
2.2 Fusion of Satellite Images using Compressive Sampling Matching Pursuit
(CoSaMP) Method
Fusion of a Low Resolution Multispectral (LRMS) image and a High Resolution
Panchromatic (HRPAN) image is a very important topic in the field of remote sensing.
This paper handles the fusion of satellite images with a sparse representation of the
data. The high-resolution MS image is produced from the sparse coefficients
reconstructed from the HRPAN and LRMS images using the Compressive Sampling
Matching Pursuit (CoSaMP) method, which is based on the Orthogonal Matching
Pursuit (OMP) algorithm. Sparse coefficients are produced by correlating the LRMS
image patches with the LR PAN dictionary. The HRMS is formed by convolving the
sparse coefficients with the HR PAN dictionary. WorldView-2 satellite images
(HRPAN and LRMS) of Madurai, Tamil Nadu are used to test the proposed method.
The experimental results show that this method can preserve the spectral and
spatial details of the input images well through adaptive learning. Compared with other
well-known methods, the proposed method offers high-quality results, achieving
87.28% Quality with No Reference (QNR).
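The coefficient-transfer step described above can be sketched schematically. The dictionaries below are random stand-ins rather than real PAN data, and the step the text calls "convolving" is shown here as the usual product of the HR dictionary with the sparse code (an interpretive assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
Dl = rng.normal(size=(9, 20))      # LR PAN dictionary: 3x3-patch atoms
Dl /= np.linalg.norm(Dl, axis=0)   # unit-norm columns
Dh = rng.normal(size=(81, 20))     # HR PAN dictionary: 9x9-patch atoms
Dh /= np.linalg.norm(Dh, axis=0)

# Suppose sparse coding an LRMS patch against Dl produced this 2-sparse code
alpha = np.zeros(20)
alpha[[3, 7]] = [1.5, -0.5]

hr_patch = Dh @ alpha              # transfer the SAME code to the HR dictionary
print(hr_patch.shape)              # (81,): one reconstructed 9x9 HR patch
```

The key idea is that the LR and HR dictionaries share atom indices, so a code estimated at low resolution can synthesize the corresponding high-resolution patch.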
Recent advancements in the field of remote sensing provide images from different
sensors for the same application. This provides the platform for the development of
image fusion algorithms. Pan sharpening is a type of satellite image fusion in which the
high-frequency components of the HRPAN image are injected into resampled versions
of the LRMS image to obtain a color image of pleasant and sharp appearance. Remote
sensing calls for superior pan-sharpening methods that translate the spectral
information of the coarse-scale MS data to the fine scale of the PAN image with the
least introduction of spectral distortions. Various pan-sharpening methods, such as
Intensity–Hue–Saturation (IHS), principal component analysis, multiresolution
transforms, and Gram–Schmidt (GS) transform-based methods, have been proposed.
The conventional methods suffer from spatial and spectral distortions due to the
restricted resampling of the MS image from the high-spatial-resolution PAN image.
Recently, compressed sensing has been applied to image fusion to recover a
high-quality HRMS image using sparse representation. However, this method requires
a large collection of LRMS and HRPAN image pairs, which is not always available,
and the sparse coefficients are not unique since they are randomly chosen. Spectral and
spatial quality index metrics are used to evaluate the proposed system. These
parameters depend upon the patch size and the overlapping area between patches,
which has a direct impact on efficient dictionary formation. Too small a patch size
results in a slack dictionary. On the other hand, patches of very large size endanger the
sparse representation of the multispectral image patches in the dictionary pair.
A Compressive Sampling Matching Pursuit (CoSaMP)-based remote sensing
image fusion method has been developed. In this method, the dictionaries for the PAN
image and the LRMS image are derived adaptively from the input images, and the
sparse coefficients are selected iteratively. The proposed method provides
comparatively better results than other state-of-the-art methods, such as IHS, Wavelet,
IAW, FFT-enhanced IHS, and the Basis Pursuit algorithm, by preserving radiometric
and geometric information. Future improvement of this work will focus on fast sparse
recovery and multiscale dictionary learning to improve efficiency.
2.3 Reducing Basis Mismatch in Harmonic Signal Recovery via Alternating Convex
Search
This work describes an Alternating Convex Search (ACS) algorithm for correcting
frequency errors in the signal model used in compressive sampling applications for real
harmonic signals. The algorithm treats both frequencies and model coefficients as
unknowns and uses an iterative approach in developing estimates. Specifically, the
approach uses the familiar GPSR algorithm to update the model coefficients, followed
by a maximum likelihood estimate of the unknown frequency locations. The algorithm
was demonstrated effective at recovering harmonic signals possessing varying levels of
sparsity in competitive computation times.
This is the crux of the so-called “basis mismatch” problem, whereby even a good
signal model can yield a poor reconstruction due to seemingly small differences between
the basis vectors assumed by the reconstruction algorithm and a similar set yielding a far
sparser representation of the signal. This problem has received recent theoretical and
experimental treatment, and solutions are still under development. One straightforward
approach to the basis mismatch problem is to simply oversample the frequency space
(e.g., let the dictionary contain sinusoids with frequencies 1/QN apart instead of 1/N
apart for a large integer Q). Though a higher-resolution frequency discretization can
shrink errors, it comes at the cost of an increasingly underdetermined problem with
stronger coherence between the vectors in the signal model, both of which work against
the conditions needed for successful recovery. Another approach is to treat both the
model coefficients and the associated frequencies as unknowns to be solved. Boufounos
et al. developed an approach that isolates the unknown frequencies in separate,
non-overlapping bins and then solves for their location and amplitude, though the
method is restricted to specific sampling strategies.

2.4 Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise
The orthogonal matching pursuit (OMP) algorithm is considered for the recovery
of a high-dimensional sparse signal based on a small number of noisy linear
measurements. OMP is an iterative greedy algorithm that selects at each step the column
that is most correlated with the current residuals. In this paper, we present a fully data-driven OMP
algorithm with explicit stopping rules. It is shown that under conditions on the mutual
incoherence and the minimum magnitude of the nonzero components of the signal, the
support of the signal can be recovered exactly by the OMP algorithm with high
probability. In addition, we also consider the problem of identifying significant
components in the case where some of the nonzero components are possibly small. It is
shown that in this case the OMP algorithm will still select all the significant components
before possibly selecting incorrect ones. Moreover, with modified stopping rules, the
OMP algorithm can ensure that no zero components are selected.
The MIP requires the mutual incoherence to be small. Other conditions used in the
compressed sensing literature include the Restricted Isometry Property (RIP) and Exact
Recovery Condition (ERC). See, for example, Candes and Tao and Tropp. In contrast to
the MIP, these conditions are not computationally feasible to verify for a given matrix.
On the other hand, the MIP condition is stronger than both RIP and ERC: The MIP
implies RIP and ERC but the converse is not true. However, it should be emphasized
here that although we focus our attention on the MIP because the condition is more
intuitive, all the results given in this paper hold under the ERC.
In the present paper we consider the orthogonal matching pursuit (OMP) algorithm
for the recovery of the support of a k-sparse signal under this model. OMP is an iterative
greedy algorithm that selects at each step the column that is most correlated with the
current residuals. This column is then added to the set of selected columns. The
algorithm updates the residuals by projecting the observation onto the linear subspace
spanned by the columns that have already been selected, and then iterates. Compared
with other alternative methods, a major advantage of OMP is its simplicity and fast
implementation. The method has been used for signal recovery and approximation,
where a mutual-incoherence condition was shown to be sufficient for recovering a
k-sparse signal exactly in the noiseless case. Results in Cai, Wang and Xu (2010a) imply
that this condition is in fact sharp. In this paper we consider the OMP algorithm in the
general setting where noise is present. Note that the residuals after each step of the OMP
algorithm are orthogonal to all the selected columns, so no column is selected twice and
the set of selected columns grows at each step. One of the key components of an
iterative procedure like OMP is the stopping rule. Specific stopping rules are given for
the OMP algorithm in both the bounded-noise and Gaussian-noise cases.
The algorithm is then fully data-driven. Our results show that under the MIP
condition and a condition on the minimum magnitude of the nonzero coordinates of the
signal, the support can be recovered exactly by the OMP algorithm in the bounded-noise
cases and with high probability in the Gaussian case. In fact, a more general condition
can guarantee the recovery of the support with high probability. In particular, all the
main results hold under the Exact Recovery Condition (ERC). In many applications, the
focus is often on identifying the significant components, i.e., the coordinates with large
magnitude, instead of the often too ambitious goal of recovering the whole support
exactly. In this paper, we also consider the problem of identifying large coordinates in
the case where some of the nonzero coordinates are possibly small. It is shown that in
this case the OMP algorithm will still select all the most important components before
possibly selecting incorrect ones. In addition, with modified stopping rules, the OMP
algorithm can ensure that no zero components are selected.
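A residual-based stopping rule of the kind described can be sketched as follows. This is a hedged NumPy illustration: the threshold uses a σ√(n + 2√(n log n)) form proposed for Gaussian noise, but the exact constant, the identity design matrix, and the noise level (kept deliberately below the modeled σ) are all assumptions of this toy example:

```python
import numpy as np

def omp_noisy(y, X, sigma):
    """OMP that stops once the residual norm falls to the level expected
    of pure Gaussian noise, so no sparsity level needs to be supplied."""
    n = len(y)
    thresh = sigma * np.sqrt(n + 2 * np.sqrt(n * np.log(n)))
    support, r = [], y.astype(float).copy()
    sol = np.array([])
    while np.linalg.norm(r) > thresh and len(support) < X.shape[1]:
        idx = int(np.argmax(np.abs(X.T @ r)))  # most-correlated column
        if idx in support:
            break
        support.append(idx)
        sol, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        r = y - X[:, support] @ sol
    return support, sol

# Toy usage: one large component plus mild noise on an identity design
rng = np.random.default_rng(3)
X = np.eye(16)
y = np.zeros(16)
y[5] = 10.0
y = y + 0.02 * rng.normal(size=16)   # noise well below the modeled sigma
support, sol = omp_noisy(y, X, sigma=0.1)
print(support)   # [5]: only the significant component is selected
```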
2.5 Clustering K-SVD for sparse representation of images
The proposed approach is not only a K-SVD-based method but also provides a
clustering dictionary learning (DL) model. The potential and advantage of the clustering
model mainly come from two aspects. First, the different clusters of the dictionary are
isolated from each other. Thus, an atom of the learned dictionary can concentrate on a
specific type of feature, leading to greater utilization of atoms. In other words, a common
phenomenon in the conventional DL model can be avoided, namely that a part of the
atoms is widely employed by training samples whereas others are seldom used.
Second, the clustering DL model makes it possible to adjust the sparsity based on
different training samples and therefore to reduce the underfitting or overfitting of the
sparse representation. The SwCK-SVD is provided by adaptively selecting the number
of atoms used for each cluster. It is believed that the adaptive strategy can also be
implemented by adjusting the number of clusters. This potential is verified by the fact
that the SwCK-SVD performs noticeably better than the standard CK-SVD.
We proposed a DL method named CK-SVD for the sparse representation of images.
For CK-SVD, the atoms of the dictionary are divided into a set of groups, and each group
of atoms serves the image features of a specific cluster. Hence, the features of all clusters
can be utilized and redundant atoms are avoided. Based on this strategy, we
introduced the CK-SVD and two practical extensions. Experimental results demonstrated
that the proposed methods provide more accurate sparse representations of images than
the conventional K-SVD algorithm and its extended methods.
To date, researchers have proposed various DL algorithms. In Engan et al. propose
the well-known DL method, named “the method of optimal directions (MOD)”. The
MOD contained two iterative process, sparse coefficients computing and dictionary
updating. The dictionary updating is globally realized by the least squares (LS)
computation in terms of training samples and sparse coefficients. In Aharon et al. propose
another LS-based algorithm for DL, referred to as the K-SVD. Different from the global
update strategy, for KSVD, the atoms of dictionary is updated separately. The MOD, the
K-SVD, and their extended methods are used for batch DL, i.e., the training samples
input simultaneously. However, when the training samples can not beobtained all at one,
the online learning is required. In Mairal et al. propose the online DL (ODL) algorithm,
aiming to update the atoms by using only the newly input samples. This algorithm allows
training samples to be input successively and realizes the online learning. Additionally,
a set of DL methods extended from MOD, K-SVD, and ODL has also been
proposed to improve sparse representation accuracy or reduce
computational complexity. Among these algorithms, the K-SVD is frequently used in
image processing due to its generality and low complexity. The K-SVD
algorithm consists of two processes, sparse coding and dictionary updating, which are
executed alternately. In the sparse coding process, at most k sparse coefficients are
computed for each training sample via greedy algorithms, inducing clustered features. In
the K-SVD algorithm, all atoms of the dictionary jointly represent the training images,
regardless of clusters; while representing different training samples, an atom may be
employed by different clusters of features. In image processing applications, the
features of different clusters vary dramatically, so this phenomenon
may reduce the accuracy of the sparse representation. Nazzal et al. utilize the residual of
training samples to train a set of sub-dictionaries; however, the sub-dictionaries are not
distinguished by cluster. Smith and Elad improve the K-SVD by considering
only the used atoms in the dictionary-updating process. Tariyal et al. propose deep DL
by combining the concepts of DL and deep learning, developing a multi-level
framework of dictionaries. Yi et al. build a hierarchical sparse
representation framework consisting of a local histogram-based model, a weighted
alignment pooling model, and a sparsity-based discriminative model.
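The atom-by-atom dictionary update that distinguishes K-SVD from MOD's global least-squares update can be sketched as follows. This is an illustrative NumPy sketch, not the report's implementation (which is in MATLAB); `Y` holds training samples as columns, `D` the dictionary, and `X` the current sparse codes.

```python
import numpy as np

def ksvd_dictionary_update(D, X, Y):
    """K-SVD dictionary-update stage: refresh each atom separately via a
    rank-1 SVD of the residual restricted to the samples that use it."""
    for j in range(D.shape[1]):
        used = np.nonzero(X[j, :])[0]
        if used.size == 0:
            continue                                    # atom never used: skip
        # error matrix with atom j's own contribution removed
        E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, j] = U[:, 0]                               # best rank-1 atom
        X[j, used] = s[0] * Vt[0, :]                    # matching coefficients
    return D, X
```

Because each atom update solves a rank-1 best-approximation problem with the other atoms held fixed, the overall representation error is non-increasing across the sweep.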
2.6 Sparse Data Recovery using Optimized Orthogonal Matching
Pursuit for WSNs
Compressed-sensing-based recovery algorithms are bounded by limitations such
as recovery error and high computational requirements. This work presents a novel
algorithm, based on the Orthogonal Matching Pursuit algorithm, for efficient and fast data
recovery in wireless sensor networks. The proposed algorithm significantly reduces the
number of iterations required to recover the original data in a relatively small interval of
time. Simulations show that the proposed algorithm converges quickly and yields
significantly better results than existing data-recovery approaches for
wireless sensor networks.
The compressed sensing (CS) framework aims to precisely recover a high-dimensional
signal x ∈ R^{m×n} from a relatively small sample set of linearly
combined observations y ∈ R^m, given that the linear measurements are the
projections of the original signal, which is sparse in some known domain. CS-based
compression is driven by the idea of acquiring sparse signals at a rate notably lower
than the Nyquist rate. The CS principle exploits the sparse nature of wireless sensor
network (WSN) data for efficient compression and recovery. However, in practical
scenarios the signal in question is often not sparse itself but has a sparse
representation in some a × b dictionary D with a < b. Given the inherent energy
and bandwidth constraints of WSNs, CS-based data gathering has gained considerable
attention in recent years. The central idea is to compress the data at the deployed nodes
before it is transmitted to the sink, where the original data is recovered with almost no
loss. Thus, the major energy-consuming task is shifted from the nodes to the sink.
Based on the well-known OMP algorithm, we propose a relatively fast and accurate
version of OMP, termed Optimized OMP (OOMP) from here onwards. Multiple
indices are chosen in each iteration to reduce the complexity and execution time
of the algorithm. The reconstructed result must be compared with the original data at
every iteration of the reconstruction process. If the residual norm ‖r_k‖₂ falls below a
preset threshold, the reconstruction can be stopped and we can jump to the output; that
is, once the demand is met, the iteration can be halted early rather than run for all M
iterations. To reduce complexity and speed up execution, multiple indices are chosen in
each iteration, so multiple keys are added to the list per iteration. Thus, the selection of
M indices is achieved with a smaller number of iterations than OMP requires. A
reconstruction-error threshold is incorporated, which is the necessary and sufficient
condition for the convergence of the algorithm.
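The two OOMP ideas above (multiple indices per iteration plus an early-stopping residual threshold) can be sketched as follows. This is an illustrative NumPy sketch under our own naming assumptions (`oomp`, default `L` and `tol`), not the cited authors' code.

```python
import numpy as np

def oomp(Phi, y, M, L=4, tol=1e-6):
    """OMP variant: select the L best-matching atoms per iteration and stop
    early once the residual l2-norm drops below tol (at most M atoms total)."""
    residual, support = y.astype(float).copy(), []
    coeffs = np.zeros(0)
    while len(support) < M and np.linalg.norm(residual) > tol:
        corr = np.abs(Phi.T @ residual)
        corr[support] = -1.0                        # never re-select chosen atoms
        take = min(L, M - len(support))
        support += list(np.argsort(corr)[::-1][:take])
        # least-squares fit on the enlarged support, then update the residual
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x
```

With L > 1, the support grows by several atoms per pass, so at most ⌈M/L⌉ iterations are needed instead of M.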
CHAPTER 3
SYSTEM DESIGN
3.1 EXISTING SYSTEM
The reconstruction is more difficult for images. Few local image regions can be
exactly represented by an atom from a single structured dictionary, so images generally
require higher-sparsity representations. One problem persists, however: even if the
underlying signal model could perfectly represent the signal, the atoms must be discretely
sampled from the model and will in general fail to represent any given signal component
exactly. For example, a 1-D sinusoidal signal composed of a single tone is well
represented by a sinusoidal model, but if the frequency of the signal falls between the
discrete Fourier frequencies of a given Fourier basis, then the number of non-zero Fourier
coefficients can actually be quite large.
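The off-grid tone phenomenon is easy to verify numerically. The following NumPy sketch (illustrative only, not from the report) compares a tone sitting exactly on a DFT bin with one midway between bins:

```python
import numpy as np

N = 64
t = np.arange(N)
on_grid  = np.cos(2 * np.pi * 8.0 * t / N)   # frequency on a DFT bin
off_grid = np.cos(2 * np.pi * 8.5 * t / N)   # frequency between two bins

def nnz_coeffs(x, tol=1e-8):
    """Count DFT coefficients with magnitude above a numerical tolerance."""
    X = np.fft.fft(x) / N
    return int(np.sum(np.abs(X) > tol))

# on-grid tone: exactly 2 nonzero coefficients (+f and -f bins)
# off-grid tone: spectral leakage spreads energy into every bin
```

So a signal that is 1-sparse in a continuous sinusoidal model is far from sparse in the discretely sampled Fourier basis.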
Disadvantages:
Computational overhead
Low-resolution face images
3.2 PROPOSED SYSTEM
The theoretical guarantees hold when certain assumptions regarding the path
endpoints are satisfied, and the algorithm is modified to increase the probability that these
conditions are satisfied during each stage. MP is a greedy-search-based approach to
solving the combinatorial optimization problem of identifying the best sparse
representation. Due to the success of the original algorithm, several variations of MP
have been proposed. In general, each of these variations consists of three steps. The
identification step in an iteration of an MP-based algorithm determines which
atom(s) is (are) closest to the current residual. Augmentation describes the step
of adding the identified atom(s) to the support of the reconstruction. Finally, each pursuit-
type algorithm concludes with a residual update. The fundamental difference between
MP and OMP is that the residual in OMP is updated by projecting the image onto the
orthogonal complement of the span of the current support. Note that when the dictionary
consists of pairwise orthogonal atoms, MP and OMP are equivalent.
Advantages:
Reduced computational time
Increased performance on multiple tasks in machine learning
3.3 SYSTEM SPECIFICATION


3.3.1 SOFTWARE REQUIREMENT
Operating system : Windows XP/7.
Coding Language : Matlab
Tool : Matlab 2018a

3.3.2 HARDWARE REQUIREMENT


System : Core i3.
Hard Disk : 540 GB.
Floppy Drive : 1.44 MB.
Monitor : 22 VGA Colour.
Mouse : Logitech.
RAM : 2 GB.
CHAPTER 4
SYSTEM IMPLEMENTATION
4.1 MODULES
1. Paths Created Between Atoms
2. Gaussian Denoising
3. K-Sparse Reconstructions
4.1.1 Paths Created Between Atoms
Our approach is differentiated from other existing variants of OMP by the
construction of a path between the two most-similar dictionary atoms, followed by a
secondary identification step. In the OMP case, similarity is quantified by the inner
product between a dictionary atom and the test image (residual) at that stage. We recall
that the largest-magnitude inner product is equivalent to the closest vector as determined
by the angle between the vectorized images.
Although a path can be created between any arbitrary dictionary atoms, here we
exclusively consider paths between the two atoms that are most similar to the current test
signal (i.e., have the largest inner product or the smallest angle). That is to say, in a
primary identification step we identify the two most similar dictionary atoms and form a
path between them. Given a path between two atoms, we search for a novel atom that is
more similar to the test signal than either of the path endpoints. Here, we “search” by
drawing samples from the path that correspond to a discrete set of values t sampled with
uniform spacing from [0, 1] and testing their similarity to the test image. An alternative
formulation might seek a local optimum by performing a line search along the geodesic.
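A minimal sketch of this primary-plus-secondary identification, assuming the Euclidean (linear) path and unit-norm atoms. NumPy is used for illustration (the project itself is in MATLAB), and the function name and sample count are our own assumptions.

```python
import numpy as np

def path_identify(D, r, n_samples=16):
    """Sample the Euclidean path between the two atoms most correlated with
    residual r; return the best renormalized path sample and its correlation."""
    corr = np.abs(D.T @ r)
    i, j = np.argsort(corr)[::-1][:2]          # primary step: two best atoms
    best_atom, best_corr = D[:, i], corr[i]
    for t in np.linspace(0.0, 1.0, n_samples):
        a = (1 - t) * D[:, i] + t * D[:, j]    # point on the linear path
        a /= np.linalg.norm(a)                 # project back to the unit sphere
        c = abs(a @ r)
        if c > best_corr:                      # secondary identification
            best_atom, best_corr = a, c
    return best_atom, best_corr
```

Since t = 0 and t = 1 reproduce the bracketing atoms themselves, the selected atom is never worse than the ordinary OMP choice.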
4.1.2 Gaussian Denoising
Denoising experiments were performed with Gaussian noise of standard deviation
σ = 20 added to the image. Patches of the noisy image, of size 8 × 8, are then estimated
using 5 iterations of the indicated algorithm. Both the DCT dictionary and the globally
trained learned dictionary are tested. A denoised image is then constructed by stitching
together the OMP-estimated patches, and performance is measured using output PSNR.
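The output PSNR used here can be computed as in the following sketch; this is the standard definition, assuming 8-bit images with peak value 255.

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB between original and denoised images."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(denoised, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```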
4.1.3 K-Sparse Reconstructions
We measure the reconstruction error between a pristine original image and a patch-based
k-sparse reconstruction using a dictionary. In particular, we consider the DCT and K-SVD
dictionaries composed of 256 8×8 image atoms that were provided in support of [11].
Each dictionary is used to reconstruct 8×8 patches overlapped with a stride of one, where
the error between the original and the reconstructed image is given by ‖R_k‖_F / ‖T‖_F,
with k the number of atoms included in the reconstruction.
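The relative error ‖R_k‖_F / ‖T‖_F and its decrease with k can be illustrated on a single 8×8 patch using an orthonormal 2-D DCT. This is an illustrative NumPy sketch (simple coefficient thresholding, not the report's patch-based OMP pipeline).

```python
import numpy as np

n = 8
u, x = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
C = np.sqrt(2 / n) * np.cos((2 * x + 1) * u * np.pi / (2 * n))
C[0, :] = 1 / np.sqrt(n)                 # orthonormal 1-D DCT-II matrix

def ksparse_dct(P, k):
    """Keep only the k largest-magnitude 2-D DCT coefficients of patch P."""
    A = C @ P @ C.T
    thresh = np.sort(np.abs(A), axis=None)[-k]
    A[np.abs(A) < thresh] = 0
    return C.T @ A @ C

def rel_error(T, T_hat):
    """Reconstruction error ||R_k||_F / ||T||_F, with residual R_k = T - T_hat."""
    return np.linalg.norm(T - T_hat, 'fro') / np.linalg.norm(T, 'fro')
```

Because the DCT basis is orthonormal, keeping more coefficients can only reduce the Frobenius error, and k = 64 reproduces the patch exactly.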
Algorithm 1: Orthogonal Matching Pursuit


Input: T, the test image; D, the dictionary; K, the number of iterations / sparsity level.
Output: X, the image estimate; S, the support of the reconstruction; and the vector of
atom indices.
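A sketch of Algorithm 1 with the stated inputs and outputs. NumPy is used for illustration (the report's implementation is in MATLAB); T is assumed to be vectorized and the columns of D unit-norm.

```python
import numpy as np

def omp(T, D, K):
    """Orthogonal Matching Pursuit: returns estimate X, support S, and the
    coefficient vector over all atoms."""
    r, S = T.astype(float).copy(), []
    coeffs = np.zeros(0)
    for _ in range(K):
        corr = np.abs(D.T @ r)
        corr[S] = -1.0                                       # skip chosen atoms
        S.append(int(np.argmax(corr)))                       # identification
        coeffs, *_ = np.linalg.lstsq(D[:, S], T, rcond=None) # augmentation
        r = T - D[:, S] @ coeffs                             # orthogonal residual
    X = D[:, S] @ coeffs
    a = np.zeros(D.shape[1]); a[S] = coeffs
    return X, S, a
```

The least-squares step is what makes this "orthogonal": the residual is the projection of T onto the orthogonal complement of the span of the current support.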

Algorithm 2: Path Orthogonal Matching Pursuit


CHAPTER 5

SYSTEM ARCHITECTURE

5.1 BLOCK DIAGRAM


5.2 DATA FLOW DIAGRAM


5.2.1 Paths Created Between Atoms
Level 0

5.2.2 Gaussian Denoising


Level 1
5.2.3 K-Sparse Reconstructions


Level 2
CHAPTER 6
LANGUAGE DESCRIPTION
6.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was written
originally to provide easy access to matrix software developed by the LINPACK (linear
system package) and EISPACK (Eigen system package) projects. MATLAB is a high-
performance language for technical computing. It integrates computation, visualization,
and programming environment. MATLAB has many advantages compared to
conventional computer languages (e.g., C, FORTRAN) for solving technical problems.
MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. Specific applications are collected in packages referred to as
toolboxes. There are toolboxes for signal processing, symbolic computation, control
theory, simulation, optimization, and several other fields of applied science and
engineering.
6.2 MATLAB’s POWER OF COMPUTATIONAL MATHEMATICS
MATLAB is used in every facet of computational mathematics. Following are some
commonly used mathematical calculations where it is used most commonly:
• Dealing with matrices and arrays
• 2-D and 3-D plotting and graphics
• Linear algebra
• Algebraic equations
• Non-linear functions
• Statistics
• Data analysis
• Calculus and differential equations
• Numerical calculations
• Integration
• Transforms
• Curve fitting
• Various other special functions
6.3 FEATURES OF MATLAB


Following are the basic features of MATLAB:
• It is a high-level language for numerical computation, visualization and application
development.
• It also provides an interactive environment for iterative exploration, design and
problem solving.
• It provides a vast library of mathematical functions for linear algebra, statistics,
Fourier analysis, filtering, optimization, numerical integration and solving ordinary
differential equations.
• It provides built-in graphics for visualizing data and tools for creating custom plots.
• MATLAB's programming interface gives development tools for improving code
quality and maintainability, and for maximizing performance.
• It provides tools for building applications with custom graphical interfaces.
• It provides functions for integrating MATLAB-based algorithms with external
applications and languages such as C, Java, .NET and Microsoft Excel.
6.4 USES OF MATLAB
MATLAB is widely used as a computational tool in science and engineering
encompassing the fields of physics, chemistry, math and all engineering streams. It
is used in a range of applications including:
• signal processing and Communications
• image and video Processing
• control systems
• test and measurement
• computational finance
• computational biolog
6.5 UNDERSTANDING THE MATLAB ENVIRONMENT


MATLAB development IDE can be launched from the icon created on the desktop.
The main working window in MATLAB is called the desktop. When MATLAB is
started, the desktop appears in its default layout.

Fig.6.5.1. MATLAB desktop environment

The desktop has the following panels:


Current Folder - This panel allows you to access the project folders and files.

Fig.6.5.2. current folder


Command Window - This is the main area where commands can be entered at the
command line. It is indicated by the command prompt (>>).

Fig.6.5.3. command window

Workspace - The workspace shows all the variables created and/or imported from files.

Fig.6.5.4.workspace
Command History - This panel shows, and lets you rerun, commands that were entered
at the command line.

Fig.6.5.5 command history


6.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS

MATLAB supports the following commonly used operators and special characters:

Operator  Purpose
+         Plus; addition operator.
-         Minus; subtraction operator.
*         Scalar and matrix multiplication operator.
.*        Array multiplication operator.
^         Scalar and matrix exponentiation operator.
.^        Array exponentiation operator.
\         Left-division operator.
/         Right-division operator.
.\        Array left-division operator.
./        Array right-division operator.

Table.6.6 MATLAB commonly used operators and special characters.
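The matrix-versus-array operator distinction in the table can be mirrored in NumPy. Python is used here purely for illustration; the corresponding MATLAB expressions appear in the comments.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

matrix_product  = A @ B                            # MATLAB: A * B
elementwise     = A * B                            # MATLAB: A .* B
matrix_power    = np.linalg.matrix_power(A, 2)     # MATLAB: A ^ 2
elementwise_pow = A ** 2                           # MATLAB: A .^ 2
left_division   = np.linalg.solve(A, B)            # MATLAB: A \ B
```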


6.7 COMMANDS
MATLAB is an interactive program for numerical computation and data visualization.
You can enter a command by typing it at the MATLAB prompt '>>' on the Command
Window.
6.7.1 Commands for managing a session

MATLAB provides various commands for managing a session. The following table
lists them all:

Command  Purpose
clc      Clears the command window.
clear    Removes variables from memory.
exist    Checks for existence of a file or variable.
global   Declares variables to be global.
help     Searches for help topics.
lookfor  Searches help entries for a keyword.
quit     Stops MATLAB.
who      Lists current variables.
whos     Lists current variables (long display).

Table.6.7.1 Commands for managing a session

6.8 INPUT AND OUTPUT COMMANDS

MATLAB provides the following input and output related commands:

Command  Purpose
disp     Displays the contents of an array or string.
fscanf   Reads formatted data from a file.
format   Controls the screen-display format.
fprintf  Performs formatted writes to the screen or a file.
input    Displays a prompt and waits for input.
;        Suppresses screen printing.

Table.6.8 Input and output commands
6.9 M FILES
MATLAB allows writing two kinds of program files:
Scripts:
Script files are program files with a .m extension. In these files, you write a series of
commands that you want to execute together. Scripts do not accept inputs and do not
return any outputs; they operate on data in the workspace.
Functions:
Function files are also program files with a .m extension. Functions can accept inputs
and return outputs. Internal variables are local to the function.
Creating and Running a Script File:
To create script files, you need to use a text editor. You can open the MATLAB editor
in two ways:
Using the command prompt
Using the IDE
From the command prompt, you can directly type edit, optionally followed by the
file name (with .m extension):
edit
edit <filename>

6.10 DATA TYPES AVAILABLE IN MATLAB

MATLAB provides 15 fundamental data types. Every data type stores data in the
form of a matrix or array. The size of this matrix or array is a minimum of 0-by-0
and can grow to a matrix or array of any size.
The following table shows the most commonly used data types in MATLAB:

Datatype        Description
int8            8-bit signed integer
uint8           8-bit unsigned integer
int16           16-bit signed integer
uint16          16-bit unsigned integer
int32           32-bit signed integer
uint32          32-bit unsigned integer
int64           64-bit signed integer
uint64          64-bit unsigned integer
single          Single-precision numerical data
double          Double-precision numerical data
logical         Logical variables are 1 or 0, representing true and false respectively
char            Character data (strings are stored as vectors of characters)
cell array      Array of indexed cells, each capable of storing an array of a
                different dimension and datatype
structure       C-like structure, each structure having named fields capable of
                storing an array of a different dimension and datatype
function handle Pointer to a function
user classes    Object constructed from a user-defined class
java classes    Object constructed from a Java class
CHAPTER 7
SYSTEM TESTING
7.1 INTRODUCTION
The purpose of testing is to discover errors. Testing is the process of trying
to discover every conceivable fault or weakness in a work product. It provides a
way to check the functionality of components, sub-assemblies, assemblies and/or
a finished product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user expectations
and does not fail in an unacceptable manner. There are various types of test; each
test type addresses a specific testing requirement.
7.2 TYPES OF TESTS
7.2.1 Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application; it is done after the
completion of an individual unit and before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains
clearly defined inputs and expected results.
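A unit test of this kind might look as follows; `normalize_atom` is a hypothetical helper (not from this project's code), shown only to illustrate defined inputs, expected results, and branch coverage.

```python
import unittest
import numpy as np

def normalize_atom(a):
    """Hypothetical helper under test: scale a dictionary atom to unit l2 norm."""
    a = np.asarray(a, dtype=float)
    n = np.linalg.norm(a)
    if n == 0:
        raise ValueError("zero atom cannot be normalized")
    return a / n

class TestNormalizeAtom(unittest.TestCase):
    def test_result_has_unit_norm(self):
        self.assertAlmostEqual(np.linalg.norm(normalize_atom([3.0, 4.0])), 1.0)

    def test_direction_is_preserved(self):
        np.testing.assert_allclose(normalize_atom([3.0, 4.0]), [0.6, 0.8])

    def test_zero_atom_is_rejected(self):
        with self.assertRaises(ValueError):
            normalize_atom([0.0, 0.0])
```

Such tests are typically collected in a file and run with `python -m unittest`.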
7.2.2 Integration testing
Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is
more concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct and
consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
7.2.3 Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:


Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements,
key functions, or special test cases. In addition, systematic coverage pertaining to
identify Business process flows; data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is
determined.
7.2.4 System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-
driven process links and integration points.
7.2.5 White Box Testing
White box testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose. It
is used to test areas that cannot be reached from a black-box level.
7.2.6 Black Box Testing
Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black box tests,
like most other kinds of tests, must be written from a definitive source document,
such as a specification or requirements document. It is testing in which the
software under test is treated as a black box: you cannot “see” into it, and the
tests provide inputs and respond to outputs without considering how the software
works.
7.3 UNIT TESTING


Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be
written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
7.4 INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform, intended to expose
failures caused by interface defects.
The task of the integration test is to check that components or software
applications, e.g. components in a software system or – one step up – software
applications at the company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
7.5 ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.
CHAPTER 8
SCREEN SHOTS
CHAPTER 9
CONCLUSION

A novel modification to the well-known matching pursuit algorithm has
been proposed, using path-based dictionary expansion/augmentation at each
iteration. Theoretical results guarantee improved reconstruction error after a fixed
number of iterations under certain assumptions. Furthermore, in the case of an
orthonormal, under-determined dictionary the theoretical assumptions are
trivially satisfied. Existing theory has been proved for the case of a linear path-
based approach and extensions to generic paths are underway because it is likely
that the current heuristic (same-sign sufficient condition) is sub-optimal for the
OT-based path. This work has demonstrated added benefit to a path-based OMP
approach for both k-sparse signal reconstruction and image denoising. Results
show improved output PSNR using the path-augmented approach on both
structured and unstructured dictionaries. Additionally, for all considered sparsity
levels there is a reduced reconstruction error obtained using the proposed
modification. We note that this augmentation can be integrated into any MP-type
approach.
CHAPTER 10
REFERENCE

[1] AT&T Laboratories Cambridge Database of Faces.


http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html.
[2] Saurav Basu, Soheil Kolouri, and Gustavo K Rohde. Detecting and
visualizing cell phenotype differences from microscopy images using transport-
based morphometry. Proceedings of the National Academy of Sciences,
111(9):3448–3453, 2014.
[3] Francois Bergeaud and Stephane Mallat. Matching pursuit of images. In
Proceedings of the International Conference on Image Processing, volume 1,
pages 53–56. IEEE, 1995.
[4] P Boufounos, V Cevher, A C Gilbert, Y Li, and M J Strauss. What’s the
frequency, Kenneth? Sublinear Fourier sampling off the grid. Lecture Notes in
Computer Science, 7408:61–72, 2012.
[5] T Tony Cai and Lie Wang. Orthogonal matching pursuit for sparse signal
recovery with noise. IEEE Transactions on Information theory, 57(7):4680–4688,
2011.
[6] Y Chi, L Scharf, A Pezeshki, and A R Calderbank. Sensitivity to basis
mismatch in compressed sensing. IEEE Transactions on Signal Processing,
59(5):2182–2195, 2011.
[7] Wei Dai and Olgica Milenkovic. Subspace pursuit for compressive sensing
signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230–
2249, 2009.
[8] David L Donoho and Michael Elad. Optimally sparse representation in
general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the
National Academy of Sciences, 100(5):2197–2202, 2003.
[9] David L Donoho, Yaakov Tsaig, Iddo Drori, and Jean-Luc Starck. Sparse
solution of underdetermined systems of linear equations by stagewise orthogonal
matching pursuit. IEEE Transactions on Information Theory, 58(2):1094–1121,
2012.
[10] C Ekanadham, D Tranchina, and E P Simoncelli. Recovery of sparse
translation-invariant signals with continuous basis pursuit. IEEE Transactions on
Signal Processing, 59(10):4735–4744, 2011.