AN IMPROVED K-SPARSE IMAGE PROCESSING USING PBD
By
N. RESHMA
960417621318
Of
C.S.I INSTITUTE OF TECHNOLOGY, THOVALAI
A PROJECT REPORT
Submitted to the
FACULTY OF INFORMATION AND COMMUNICATION ENGINEERING
of
ANNA UNIVERSITY
CHENNAI
May 2020
ANNA UNIVERSITY, CHENNAI
BONAFIDE CERTIFICATE
Certified that this Report titled “AN IMPROVED K-SPARSE IMAGE PROCESSING
USING PBD” is the bonafide work of N. RESHMA (960417621318) who carried out the
work under my supervision. Certified further that to the best of my knowledge the work
reported herein does not form part of any other thesis or dissertation on the basis of which
a degree or award was conferred on an earlier occasion on this or any other candidate.
ABSTRACT
... maximizing this quantity indicates improved performance when compared with the
improved K-sparse reconstruction and baseline OMP. In each iteration, for the two
dictionary atoms that are most highly correlated with the residual ...
ACKNOWLEDGEMENT
First and foremost, I thank God Almighty for His grace in enabling me to
complete this project work successfully.
Words cannot express my gratitude to my respected Principal,
Dr. K. DHINESH KUMAR, M.E., Ph.D., for his support and the freedom he gave me
throughout my course.
I would like to thank Mrs. M. JULIE EMERALD JIJU, MCA, M.Phil,
M.Tech, Head of the Department, Department of Computer Applications for her
valuable help and constant encouragement towards my project.
It is my proud privilege to express my sincere thanks to my staff-in-charge,
Mrs. V. Merin Shobi MCA, MPhil, Department of Computer Applications for
providing me with her valuable suggestions and non-stop guidance throughout the
development of this project.
My sincere thanks to all the faculty members, both technical and non-technical, of the
M.C.A. department for their valuable suggestions and support in all my efforts towards
the successful completion of this project.
It is my great pleasure to acknowledge my parents and family members for their
prayers and the generous support that they have extended towards the successful
completion of the project.
TABLE OF CONTENTS
ABSTRACT
ACKNOWLEDGEMENT
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
1 INTRODUCTION
1.1 Sparsity
2 LITERATURE REVIEW
3 SYSTEM DESIGN
4 SYSTEM IMPLEMENTATION
4.1 MODULES
5 SYSTEM ARCHITECTURE
6 LANGUAGE DESCRIPTION
6.1 INTRODUCTION
7 SYSTEM TESTING
7.1 INTRODUCTION
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
CS - Compressed Sampling
APPENDIX
A. Proof of Theorem
Theorem A.1 (Straddle Theorem). Let $\|d_1\| = \|d_2\| = \|r\| = 1$,
$\langle r, d_1\rangle = r^\top d_1 = \cos\theta_1$, $\langle r, d_2\rangle = r^\top d_2 = \cos\theta_2$
and $\langle d_1, d_2\rangle = d_1^\top d_2 = \cos\theta_{1,2}$. If $|\langle r, d_1\rangle| \ge |\langle r, d\rangle|$
for all $d \in D$, $\operatorname{sgn}(\cos\theta_1) = \operatorname{sgn}(\cos\theta_2)$ and
$\cos\theta_{1,2} \in [0, \cos\theta_2/\cos\theta_1)$, then there exists $t^* \in (0,1)$
such that $|\langle r, d_{t^*}\rangle| > |\langle r, d_1\rangle|$.
Proof. Let $\|d_1\| = \|d_2\| = \|r\| = 1$, $\langle r, d_1\rangle = r^\top d_1 = \cos\theta_1$,
$\langle r, d_2\rangle = r^\top d_2 = \cos\theta_2$ and $\langle d_1, d_2\rangle = d_1^\top d_2 = \cos\theta_{1,2}$.
Assume that $|\langle r, d_1\rangle| \ge |\langle r, d\rangle|$ for all $d \in D$,
$\operatorname{sgn}(\cos\theta_1) = \operatorname{sgn}(\cos\theta_2)$ and
$\cos\theta_{1,2} \in [0, \cos\theta_2/\cos\theta_1)$. Define $\alpha = \cos\theta_2/\cos\theta_1$.
Note that $0 \le \alpha < 1$ is implied by $|\langle r, d_1\rangle| > |\langle r, d_2\rangle|$ together with
$\operatorname{sgn}(\cos\theta_1) = \operatorname{sgn}(\cos\theta_2)$. Further, we
have $1 - \alpha^2 > 0$. Using these facts together with our assumptions we have
$$\cos\theta_{1,2} < \alpha$$
$$0 < 2\alpha - 2\cos\theta_{1,2}$$
$$0 < 1 - \alpha^2 < 2\alpha - 2\cos\theta_{1,2} + 1 - \alpha^2$$
$$0 < \frac{1 - \alpha^2}{2\alpha - 2\cos\theta_{1,2} + 1 - \alpha^2} < 1.$$
By the density of the real numbers there exists some number $m$ such that
$$\frac{1 - \alpha^2}{2\alpha - 2\cos\theta_{1,2} + 1 - \alpha^2} < m < 1.$$
Set $t^* = m \in (0,1)$. Consequently,
$$1 - \alpha^2 < t^*(2\alpha - 2\cos\theta_{1,2} + 1 - \alpha^2)$$
$$\implies 0 < t^*(2\alpha - 2\cos\theta_{1,2} + 1 - \alpha^2) + \alpha^2 - 1$$
$$\implies 0 < 2t^*(\alpha - \cos\theta_{1,2}) + (1 - t^*)(\alpha^2 - 1).$$
Rewriting in terms of the angles and multiplying through by $(\cos\theta_1)^2$ yields
$$0 < 2t^*\bigl(\cos\theta_2\cos\theta_1 - \cos\theta_{1,2}(\cos\theta_1)^2\bigr) + (1 - t^*)\bigl((\cos\theta_2)^2 - (\cos\theta_1)^2\bigr).$$
Expanding and writing in terms of vector multiplication we have
$$0 < t^*\bigl(r^\top d_2 d_1^\top r + r^\top d_1 d_2^\top r\bigr) - 2t^*\cos\theta_{1,2}\bigl(r^\top d_1 d_1^\top r\bigr) + (1 - t^*)\bigl(r^\top d_2 d_2^\top r - r^\top d_1 d_1^\top r\bigr).$$
Next, add $0 = t^{*2} r^\top d_1 d_1^\top r - t^{*2} r^\top d_1 d_1^\top r$ to the right-hand side and
regroup terms. From Lemma III.1 we have that the first group of terms simplifies to
$r^\top P_{d_{t^*}} r$ and the last group simplifies to $r^\top P_{d_1} r$. Thus, we have
$$0 < r^\top P_{d_{t^*}} r - r^\top P_{d_1} r$$
$$0 < \|P_{d_{t^*}} r\|^2 - \|P_{d_1} r\|^2$$
$$\implies \|P_{d_1} r\|^2 < \|P_{d_{t^*}} r\|^2.$$
Finally, using Lemma III.2 we have
$$|\langle r, d_1\rangle| < |\langle r, d_{t^*}\rangle|.$$
Thus, we have that there exists a $t^* \in (0,1)$ such that $d_{t^*}$ is “closer” to $r$ than $d_1$.
B. Proof of Corollary III.3.2
Corollary A.1.1 (Positive Maximizer). For each pair $\{d_1, d_2\} \subset D$, if
$|\langle r, d_1\rangle| > |\langle r, d_2\rangle|$ and $\operatorname{sgn}(\cos\theta_1) = \operatorname{sgn}(\cos\theta_2)$,
there is a single positive $t$ (given explicitly in closed form) that maximizes
$\|P_{d_t} r\|^2 - \|P_{d_1} r\|^2$.
Proof. We seek to maximize $f(t) = \|P_{d_t} r\|^2 - \|P_{d_1} r\|^2$ over all feasible $t$.
Using Lemma III.1 together with Eq. 30 we can write
$$f(t) = r^\top P_{d_t} r - r^\top P_{d_1} r.$$
The function $f(t)$ is differentiable and has two critical points, $t_1$ and $t_2$.
This can be verified by hand or through the use of software. When
$\operatorname{sgn}(\cos\theta_1) = \operatorname{sgn}(\cos\theta_2)$ and $|\langle r, d_1\rangle| > |\langle r, d_2\rangle|$
it can be seen that $t_1 < 0$ and $t_2 > 0$. Further, it can be shown that for all values of
$\cos\theta_1$, $\cos\theta_2$, and $\cos\theta_{1,2}$, $f''(t_2) < 0$, implying that $t_2$ is
a local maximum of $f(t)$ and the global maximum when $t \ge 0$.
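The straddle argument can be checked numerically. The sketch below is a minimal Python check; since Lemma III.1 is not reproduced here, the path d_t is assumed to be the normalized convex combination of the two endpoint atoms, and the specific vectors are illustrative assumptions.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Unit vectors chosen so that |<r,d1>| > |<r,d2>|, the cosines share a sign,
# and cos(theta_12) lies in [0, cos(theta_2)/cos(theta_1)), per the theorem.
r  = unit([1.0, 0.0])
d1 = unit([1.0, 0.3])
d2 = unit([1.0, -0.4])

c1, c2, c12 = dot(r, d1), dot(r, d2), dot(d1, d2)
assert abs(c1) > abs(c2) and c1 * c2 > 0 and 0 <= c12 < c2 / c1  # hypotheses hold

def d_t(t):
    # Assumed path: normalized convex combination of the endpoint atoms.
    return unit([(1 - t) * a + t * b for a, b in zip(d1, d2)])

# Some t* in (0,1) should yield an interpolated atom closer to r than d1 is.
best = max(abs(dot(r, d_t(t / 100))) for t in range(1, 100))
print(best > abs(c1))  # True: the straddling atom beats the better endpoint
```

The search over a coarse grid of t values suffices to exhibit one t* in (0,1); the corollary's closed form would locate the maximizer exactly.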
CHAPTER 1
INTRODUCTION
Sparse representation aims to model signals as sparse linear combinations of the
atoms in a dictionary, and this technique is widely used in various fields of image
processing. A frequent goal within signal/image processing is to reconstruct or compress
the information contained in a signal by representing it as a linear combination of a set
of reference signals. In the most general case, this reference set is a (possibly
overcomplete) dictionary composed of signal atoms drawn from some underlying signal
model. “Good” models are those that can sparsely represent signals as linear
combinations of relatively few atoms drawn from the dictionary. Signals that can be
represented to within some acceptable error tolerance using at most k atoms are defined
as k-sparse relative to that dictionary. Reconstruction algorithms designed to decompose
the signal into a linear combination of atoms can be designed to terminate based on an
error threshold or a fixed degree of sparsity.
1.1 Sparsity
For fixed sparsity, consideration of all possible atom combinations of that order is
computationally intractable other than for a limited set of problems. A popular and
successful approach to this combinatorial optimization problem is a greedy algorithm
called Matching Pursuit (MP). Standard MP begins by greedily searching for the best
reconstruction produced from a single atom where “best” is determined by the magnitude
of the inner product between the signal and the dictionary atoms. This optimal atom is
scaled by the length of the projection of the signal onto the space spanned by the optimal
atom and is then subtracted from the original signal to yield a residual. The residual
image is then fit in the same greedy way, updated, and the process repeats, such that k
iterations of MP yield a k-sparse representation with some associated final error/residual
Rk.
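The greedy loop just described can be sketched in a few lines; the toy dictionary and 1-sparse signal below are illustrative assumptions, not data from this project.

```python
def mp(signal, atoms, k):
    """Plain Matching Pursuit: k greedy passes over a unit-norm dictionary."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(k):
        # Identification: atom with the largest-magnitude inner product.
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        # Update: subtract the projection onto the chosen atom.
        coeffs[best] += scores[best]
        residual = [r - scores[best] * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# Toy example: two orthonormal atoms; a 1-sparse signal is recovered in one pass.
atoms = [[1.0, 0.0], [0.0, 1.0]]
coeffs, residual = mp([3.0, 0.0], atoms, k=1)
print(coeffs, residual)  # [3.0, 0.0] [0.0, 0.0]
```

Terminating after k passes yields the fixed-sparsity variant; replacing the loop bound with a residual-norm test yields the error-threshold variant mentioned above.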
1.2 Dictionary Learning
The HRPAN image is downsampled by a factor FDS based on the spatial resolution
of the LRMS (here FDS = 4). The LRPAN image ‘Y0’ and the LRMS image ‘Y’ are
divided into small patches of different sizes (3 × 3 to 9 × 9). These patches y0 and yk,
where k stands for the kth band and k = 1, 2, . . . , N, may also be partially overlapped. The
patches of LRPAN image are arranged into column vectors and normalized to form ‘Dl’
called the LR dictionary. This is repeated for the HRPAN image to form the HR
dictionary ‘Dh’. The LRMS patches yk and their respective HR patches xk are
represented as sparse in this LR/HR PAN dictionary pair since these dictionaries are
formed from the PAN image that observes the same area as the LRMS bands. The size
of the image, patch, and the overlap area determine the size of the dictionary. This gives
an alternative image fusion method that can be used when large collections of
representative satellite images are not available.
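The patch-to-dictionary step described above can be sketched as follows; the image is represented as a plain list of rows, and the stride-1 full overlap is an illustrative choice.

```python
import math

def patches_to_dictionary(image, p):
    """Slide a p-by-p window over a 2-D image; vectorize and normalize each
    patch into a unit-norm dictionary column (stride 1, fully overlapped)."""
    rows, cols = len(image), len(image[0])
    dictionary = []
    for i in range(rows - p + 1):
        for j in range(cols - p + 1):
            patch = [image[i + a][j + b] for a in range(p) for b in range(p)]
            norm = math.sqrt(sum(x * x for x in patch))
            if norm > 0:
                dictionary.append([x / norm for x in patch])
    return dictionary  # one unit-norm column (here: one list) per patch

image = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0],
         [7.0, 8.0, 9.0]]
D = patches_to_dictionary(image, 2)
print(len(D), len(D[0]))  # 4 atoms of length 4 from a 3x3 image
```

Running the same extraction on the LRPAN image gives Dl and on the HRPAN image gives Dh, as in the text; the patch size and overlap directly control the dictionary size.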
CHAPTER 2
LITERATURE REVIEW
2.1 Tree-Based Backtracking Orthogonal Matching Pursuit for Sparse Signal
Reconstruction
Sparse reconstruction is one of the three core problems (signal sparse
representation, measurement matrix design, and reconstruction algorithm design) of CS.
Existing sparse reconstruction algorithms such as the ROMP and CoSaMP algorithms
employ the sparsity 𝐾 as prior knowledge for exact recovery, which has many
limitations in realistic applications. Although the sparsity level is not
required for the OMP and BAOMP algorithms, they do not use the characteristics of a
special sparse basis to improve their performance. In this paper, a new Tree-based
Backtracking Orthogonal Matching Pursuit (TBOMP) algorithm was proposed based on
the tree model in the wavelet domain. The algorithm can convert wavelet tree structures
to the corresponding relations of candidate atoms without any prior information about
the signal sparsity level. Moreover, unreliable atoms can be deleted according to the
backtracking algorithm. Compared with other compressive sensing algorithms (OMP,
ROMP, and BAOMP), the signal reconstruction results of TBOMP outperform the
abovementioned CS algorithms.
These characteristics of the tree structure provide a new way to study
reconstruction algorithms. Thanks to the tree structure of wavelet coefficients, when the
signal is sparsely represented by the wavelet transform, it also provides a clue for the
selection of atoms in the reconstruction algorithm. This greatly improves the
reliability of atom selection. The coefficients of a wavelet decomposition include low-
frequency coefficients and high-frequency coefficients (scaling coefficients and wavelet
coefficients in 𝛼). The more levels of wavelet decomposition, the fewer the low-frequency
coefficients, and more of the important information is retained in the high-frequency
coefficients. Compared with the high-frequency coefficients, the number of low-
frequency coefficients is much smaller if the decomposition level is large enough. Since the
low-frequency coefficients play an important role in wavelet reconstruction, in the
proposed algorithm only the high-frequency coefficients are measured by the measurement
matrix. For reconstruction, the reconstructed high-frequency coefficients ̂𝜔 are combined
with the unprocessed low-frequency coefficients, and then the inverse wavelet
transform is applied to obtain a reconstruction 𝑥̂ of the original signal 𝑥.
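The split into scaling (low-frequency) and wavelet (high-frequency) coefficients, and the lossless inverse transform that recombines them, can be illustrated with a single level of the Haar wavelet; this is a minimal sketch, not the multi-level transform used by TBOMP.

```python
import math

def haar_split(x):
    """One level of the Haar wavelet transform: scaling (low-frequency)
    and wavelet (high-frequency) coefficients of an even-length signal."""
    s = math.sqrt(2.0)
    low  = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return low, high

def haar_merge(low, high):
    """Inverse of haar_split."""
    s = math.sqrt(2.0)
    x = []
    for l, h in zip(low, high):
        x += [(l + h) / s, (l - h) / s]
    return x

# The scheme above keeps `low` untouched and compressively measures only
# `high`; here we simply verify that the split/merge pair is lossless.
x = [4.0, 2.0, 5.0, 5.0]
low, high = haar_split(x)
rec = haar_merge(low, high)
print([round(v, 10) for v in rec])  # [4.0, 2.0, 5.0, 5.0]
```

In the TBOMP setting, `high` would be replaced by its reconstruction from compressive measurements before the merge, while `low` passes through unchanged.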
OMP is quite fast, both theoretically and experimentally. It makes 𝑛 iterations,
where each iteration amounts to a multiplication by an 𝑁 × 𝑀 matrix Φ* (computing the
observation vector 𝛼) and solving a least squares problem in dimensions at most 𝑀 × 𝑛.
This yields strongly polynomial running time. In practice, OMP is observed to perform
faster and is easier to implement than (𝑙1)-minimization. For more details, see. OMP is
quite transparent; at each iteration, it selects a new coordinate from the support of the
signal 𝛼 in a very specific and natural way. In contrast, the known (𝑙1)-minimization
solvers, such as the simplex method and interior point methods, compute a path toward
the solution. However, the geometry of (𝑙1) is clear, whereas the analysis of greedy
algorithms can be difficult simply because they are iterative. On the other hand, OMP
has weaker guarantees of exact recovery. Unlike (𝑙1)-minimization, the guarantees of
OMP are nonuniform: for each fixed sparse signal 𝛼, and not for all signals, the algorithm
performs correctly with high probability.
Rauhut has shown that uniform guarantees for OMP are impossible for natural
random measurement matrices. Moreover, OMP’s condition on measurement matrices
given in is more restrictive than the Restricted Isometry Condition. In particular, it is not
known whether OMP succeeds in the important class of partial Fourier measurement
matrices.
2.2 Fusion of Satellite Images using Compressive Sampling Matching Pursuit
(CoSaMP) Method
Fusion of Low Resolution Multi Spectral (LRMS) image and High Resolution
Panchromatic (HRPAN) image is a very important topic in the field of remote sensing.
This paper handles the fusion of satellite images with sparse representation of data. The
high-resolution MS image is produced from the sparse representation reconstructed from
the HRPAN and LRMS images using Compressive Sampling Matching Pursuit (CoSaMP)
based on the Orthogonal Matching Pursuit (OMP) algorithm. Sparse coefficients are
produced by correlating the LRMS image patches with the LR PAN dictionary. The
HRMS is formed by convolving the sparse coefficients with the HR PAN dictionary.
WorldView-2 satellite images (HRPAN and LRMS) of Madurai, Tamil Nadu are used to
test the proposed method.
The experimental results show that this method can well preserve spectral and
spatial details of the input images by adaptive learning. Compared to other well-known
methods, the proposed method offers high-quality results for the input images,
achieving 87.28% Quality with No Reference (QNR).
Recent advancements in the field of remote sensing provide images from different
sensors for the same application. This provides the platform for the development of image
fusion algorithms. Pan sharpening is a type of satellite image fusion in which the high-
frequency components of the HRPAN image are injected into a resampled version of the
LRMS image to obtain a color image of pleasant and sharp appearance. Remote sensing
calls for superior pan-sharpening methods to translate the spectral information of the
coarse-scale MS data to the fine scale of the PAN image with the least introduction of
spectral distortions. Various pan-sharpening methods, such as Intensity Hue Saturation
(IHS), principal component analysis, multiresolution transforms, and Gram–Schmidt (GS)
transform-based methods, have been proposed.
The conventional methods suffer from spatial and spectral distortions due to the
restricted resampling of the MS image from the high-spatial-resolution PAN image.
Recently, compressed sensing has been employed in image fusion to recover a high-quality
HRMS image using sparse representation. But this method requires a large collection of
LRMS and HRPAN image pairs, which is not always possible, and the sparse coefficients
are not unique since they are randomly chosen. Spectral and spatial quality index metrics
are used to evaluate the proposed system. These parameters depend upon the patch size
and the overlapping area between patches. This has a direct impact on efficient dictionary
formation. Too small a patch size results in a slack dictionary. On the other hand, patches
of very large size endanger the sparse representation of multispectral image patches in the
dictionary pair.
A Compressive Sampling Matching Pursuit (CoSaMP) - based remote sensing
image fusion method has been developed. In this proposed method, the dictionaries for
the PAN image and LRMS image are derived from the input images adaptively and the
sparse coefficients are selected iteratively. The proposed method provides comparatively
better results than other state-of-the-art methods, such as IHS, Wavelet, IAW, FFT-
enhanced IHS, and the Basis Pursuit Algorithm, by preserving radiometric and geometric
information. Future improvement of this work will focus on fast sparse recovery and
multiscale dictionary learning to improve efficiency.
2.3 Reducing Basis Mismatch in Harmonic Signal Recovery via Alternating Convex
Search
This work describes an Alternating Convex Search (ACS) algorithm for correcting
frequency errors in the signal model used in compressive sampling applications for real
harmonic signals. The algorithm treats both frequencies and model coefficients as
unknowns and uses an iterative approach in developing estimates. Specifically, the
approach uses the familiar GPSR algorithm to update the model coefficients, followed
by a maximum likelihood estimate of the unknown frequency locations. The algorithm
was demonstrated effective at recovering harmonic signals possessing varying levels of
sparsity in competitive computation times.
This is the crux of the so-called “basis mismatch” problem, whereby even a good
signal model can yield a poor reconstruction due to seemingly small differences between
the basis vectors assumed by the reconstruction algorithm and a similar set yielding a far
sparser representation of the signal. This problem has received recent theoretical and
experimental treatment, and solutions are still under development. One straightforward
approach to the basis mismatch problem is to simply oversample the frequency space
(e.g., let the dictionary contain sinusoids with frequencies 1/QN apart instead of 1/N
apart for a large integer Q). Though a higher-resolution frequency discretization can
shrink errors, it comes at the cost of an increasingly underdetermined problem with
stronger coherence between the vectors in the signal model, both of which work against
the conditions needed for successful recovery. Another approach is to treat both the
model coefficients and the associated frequencies as unknowns to be solved. Boufounos
et al. developed an approach that isolates the unknown frequencies in separate,
non-overlapping bins and then solves for their location and amplitude, though the
method is restricted to specific sampling strategies.
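The coherence penalty of frequency oversampling can be seen directly: building a cosine dictionary on a Q-fold finer grid raises the largest inner product between distinct atoms. The dictionary construction below is an illustrative sketch (real-valued cosine atoms only), not the exact model from the cited work.

```python
import math

def sinusoid_dictionary(n, q):
    """Unit-norm cosine atoms on a frequency grid spaced 1/(q*n) apart,
    i.e. a q-fold oversampling of the length-n DFT cosine grid."""
    atoms = []
    for k in range(n * q // 2):
        atom = [math.cos(2 * math.pi * k / (q * n) * t) for t in range(n)]
        norm = math.sqrt(sum(a * a for a in atom))
        atoms.append([a / norm for a in atom])
    return atoms

def max_coherence(atoms):
    """Largest |inner product| between distinct unit-norm atoms."""
    best = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            best = max(best, abs(sum(a * b for a, b in zip(atoms[i], atoms[j]))))
    return best

# Finer grids (larger Q) shrink off-grid error but raise mutual coherence.
c_coarse = max_coherence(sinusoid_dictionary(16, 1))
c_fine   = max_coherence(sinusoid_dictionary(16, 4))
print(c_coarse < c_fine)  # True
```

On the critically sampled grid the cosine atoms are orthogonal, while neighboring atoms on the oversampled grid are nearly parallel, which is exactly the coherence growth that works against recovery guarantees.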
2.4 Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise
This work studies the orthogonal matching pursuit (OMP) algorithm for the recovery of a
high-dimensional sparse signal based on a small number of noisy linear measurements.
OMP is an iterative greedy algorithm that selects at each step the column which is most
correlated with the current residuals. In this paper, we present a fully data-driven OMP
algorithm with explicit stopping rules. It is shown that under conditions on the mutual
incoherence and the minimum magnitude of the nonzero components of the signal, the
support of the signal can be recovered exactly by the OMP algorithm with high
probability. In addition, we also consider the problem of identifying significant
components in the case where some of the nonzero components are possibly small. It is
shown that in this case the OMP algorithm will still select all the significant components
before possibly selecting incorrect ones. Moreover, with modified stopping rules, the
OMP algorithm can ensure that no zero components are selected.
The MIP requires the mutual incoherence to be small. Other conditions used in the
compressed sensing literature include the Restricted Isometry Property (RIP) and Exact
Recovery Condition (ERC). See, for example, Candes and Tao and Tropp. In contrast to
the MIP, these conditions are not computationally feasible to verify for a given matrix.
On the other hand, the MIP condition is stronger than both RIP and ERC: The MIP
implies RIP and ERC but the converse is not true. However, it should be emphasized
here that although we focus our attention under the MIP because the condition is more
intuitive, all the results given in this paper hold under the ERC.
In the present paper we consider the orthogonal matching pursuit (OMP) algorithm
for the recovery of the support of a k-sparse signal under the model. OMP is an iterative
greedy algorithm that selects at each step the column which is most correlated with
the current residuals. This column is then added into the set of selected columns. The
algorithm updates the residuals by projecting the observation onto the linear subspace
spanned by the columns that have already been selected, and the algorithm then iterates.
Compared with other alternative methods, a major advantage of OMP is its simplicity
and fast implementation. This method has been used for signal recovery and
approximation, where it was shown that a mutual incoherence condition is sufficient for
recovering a k-sparse signal exactly in the noiseless case. Results in Cai, Wang and Xu
(2010a) imply that this condition is in fact sharp. In this paper we consider the OMP
algorithm in the general setting where noise is present. Note that the residuals after each
step in the OMP algorithm are orthogonal to all the selected columns, so no column is
selected twice and the set of selected columns grows at each step. One of the key
components of an iterative procedure like OMP is the stopping rule. Specific stopping
rules are given for the OMP algorithm in both the bounded noise and Gaussian noise cases.
The algorithm is then fully data-driven. Our results show that under the MIP
condition and a condition on the minimum magnitude of the nonzero coordinates of the
signal, the support of the signal can be recovered exactly by the OMP algorithm in the
bounded noise cases and with high probability in the Gaussian case. In fact, a more
general condition can guarantee the recovery of the support with high probability. In
particular, all the main results hold under the Exact Recovery Condition (ERC). In many
applications, the focus is often on identifying significant components, i.e., coordinates
with large magnitude, instead of the often too ambitious goal of recovering the whole
support exactly. In this paper, we also consider the problem of identifying large
coordinates in the case where some of the nonzero coordinates are possibly small. It is
shown that in this case the OMP algorithm will still select all the most important
components before possibly selecting incorrect ones. In addition, with modified stopping
rules, the OMP algorithm can ensure that no zero components are selected.
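The role of a data-driven stopping rule can be sketched as follows; the residual norms and the threshold form are illustrative assumptions (the paper's exact constants are not reproduced here).

```python
def run_with_stopping_rule(residual_norms, noise_level):
    """Data-driven stopping: iterate while the residual norm stays above a
    threshold tied to the noise level. The threshold form here is an
    illustrative assumption, not the paper's exact constant."""
    threshold = noise_level          # e.g. stop when ||r_k||_2 <= eta (bounded noise)
    for k, norm in enumerate(residual_norms):
        if norm <= threshold:
            return k                 # number of iterations actually run
    return len(residual_norms)

# A mock run whose residual norms shrink across iterations.
norms = [5.0, 2.5, 0.9, 0.05]
print(run_with_stopping_rule(norms, noise_level=0.1))  # 3
```

Because the rule depends only on observed residual norms and the noise level, no prior knowledge of the sparsity k is needed, which is what makes the procedure fully data-driven.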
2.5 Clustering K-SVD for sparse representation of images
This is a K-SVD-based method that also provides a clustering DL model. The potential
and advantage of the clustering model mainly come from two aspects. First, the different
clusters of the dictionary are isolated from each other. Thus, an atom of the learned
dictionary can concentrate on a specific type of feature, leading to greater utilization of
atoms. In other words, a common phenomenon in the conventional DL model can be
avoided, namely that a part of the atoms is widely employed by training samples whereas
others are seldom used. Second, the clustering DL model makes it possible to adjust the
sparsity based on different training samples and therefore to reduce the underfitting or
overfitting of sparse representation. We provide the SwCK-SVD by adaptively selecting
the number of used atoms for each cluster. It is believed that the adaptive strategy can also
be implemented by adjusting the number of clusters. This potential is verified by the fact
that the SwCK-SVD performs noticeably better than the standard CK-SVD.
We proposed a DL method named CK-SVD for sparse representation of images.
For CK-SVD, the atoms of the dictionary are divided into a set of groups, and each group
of atoms serves the image features of a specific cluster. Hence, the features of all clusters
can be utilized and redundant atoms are avoided. Based on this strategy, we
introduced the CK-SVD and two practical extensions. Experimental results demonstrated
that the proposed methods could provide more accurate sparse representation of images,
compared to the conventional K-SVD algorithm and its extended methods.
To date, researchers have proposed various DL algorithms. Engan et al. proposed
the well-known DL method named “the method of optimal directions (MOD)”. The
MOD contains two iterative processes: sparse coefficient computation and dictionary
updating. The dictionary updating is realized globally by least squares (LS)
computation in terms of the training samples and sparse coefficients. Aharon et al.
proposed another LS-based algorithm for DL, referred to as the K-SVD. Different from
the global update strategy, for K-SVD the atoms of the dictionary are updated separately.
The MOD, the K-SVD, and their extended methods are used for batch DL, i.e., the
training samples are input simultaneously. However, when the training samples cannot
be obtained all at once, online learning is required. Mairal et al. proposed the online DL
(ODL) algorithm, aiming to update the atoms by using only the newly input samples.
This algorithm allows training samples to be input successively and realizes online
learning. Additionally,
a set of DL methods extended from MOD, K-SVD, and ODL have also been
proposed, in order to improve the sparse representation accuracy or reduce the
computational complexity. Among these algorithms, the K-SVD is frequently used in the
fields of image processing due to its generality and low complexity. The K-SVD
algorithm consists of two processes, sparse coding and dictionary updating, which are
executed alternately. In the sparse coding process, at most k sparse coefficients for each
training sample are computed via greedy algorithms, inducing clustering features. For
the K-SVD algorithm, all the atoms of the dictionary jointly represent the training images,
regardless of clusters. While representing different training samples, an atom may be
employed by different clusters of features. In applications of image processing, the
features of different clusters vary dramatically, and therefore the above phenomenon
may reduce the accuracy of sparse representation. Nazzal et al. utilize the residual of
training samples to train a set of sub-dictionaries. However, the sub-dictionaries are not
distinguished by different clusters. Smith and Elad improve the K-SVD by considering
only the used atoms in the dictionary updating process. Tariyal et al. propose deep DL
by combining the concepts of DL and deep learning. The multiple DL framework is
developed for multiple levels of dictionaries. Yi et al. build a hierarchical sparse
representation framework that consists of a local histogram-based model, a weighted
alignment pooling model, and a sparsity-based discriminative model.
2.6 Sparse Data Recovery using Optimized Orthogonal Matching
Pursuit for WSNs
Compressed sensing based recovery algorithms are bounded by limitations such
as recovery error and high computational requirements. This work presents a novel
algorithm based on the Orthogonal Matching Pursuit algorithm for efficient and fast data
recovery in wireless sensor networks. The proposed algorithm significantly reduces the
number of iterations required to recover the original data in a relatively small interval of
time. Simulations show that the proposed algorithm converges in a short interval of time
with significantly better results as compared to existing data recovery approaches in
wireless sensor networks.
The compressed sensing (CS) framework is aimed at precisely recovering a high-
dimensional signal x ∈ R^{m×n} from a relatively small set of linearly combined
observations y ∈ R^m, given that the linear measurements are projections of the original
signal, which is sparse in some intuitively known domain. CS-based compression is
driven by the idea of acquiring sparse signals at a rate notably lower than the Nyquist
rate. The CS principle utilizes the sparse nature of wireless sensor network (WSN) data
for efficient compression and recovery. However, in practical scenarios the signal in
question is not sparse itself but in most cases has a sparse representation in some a × b
dictionary ‘D’ such that (a < b). Given the inherent energy and bandwidth constraints of
WSNs, CS-based data gathering has gained the desired attention in recent years. The
central idea is to compress the data at the deployed nodes
before it is transmitted to the sink, where the original data is recovered with almost no
loss. Thus, the major energy-consuming task is shifted from the nodes to the sink.
Based on the well-known OMP algorithm, we propose a relatively fast and accurate
version of OMP, termed Optimized OMP (OOMP) from here onwards. Multiple
indices are chosen in each iteration to reduce the complexity and the execution time
of the algorithm. The comparison of the reconstructed result with the original data is
necessary in every iteration of the reconstruction process. If ‖r_k‖₂ falls below the error
threshold, the reconstruction can be stopped and we can jump to the output. That is, if
the demand is met, the iteration can be halted earlier, rather than after M iterations. To
reduce the complexity and speed up the execution, multiple indices are chosen in each
iteration. The selection of multiple indices results in multiple keys being added to the
list. Thus, the requirement of selecting M indices is achieved with a smaller number of
iterations as compared to OMP. A reconstruction error threshold is incorporated, which
is the necessary and sufficient condition for the convergence of the algorithm.
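The multi-index identification step that distinguishes OOMP from plain OMP can be sketched as follows; the mock correlation values are illustrative assumptions.

```python
def select_indices(correlations, L):
    """Pick the L columns most correlated with the current residual
    (OOMP-style multi-index identification; plain OMP uses L = 1)."""
    order = sorted(range(len(correlations)),
                   key=lambda i: abs(correlations[i]), reverse=True)
    return order[:L]

# Mock correlations between the residual and 6 dictionary columns.
corr = [0.1, -0.9, 0.4, 0.05, 0.7, -0.2]
print(select_indices(corr, 3))  # [1, 4, 2]
```

Adding L indices per iteration means the M-column support is reached in roughly M/L iterations instead of M, which is the source of the claimed speedup.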
CHAPTER 3
SYSTEM DESIGN
3.1 EXISTING SYSTEM
The reconstruction is more difficult for images. Few local image regions can be
exactly represented by an atom from a single structured model. Images generally produce
higher-sparsity representations. One problem persists, however: even if the
underlying signal model could perfectly represent the signal, the atoms must be discretely
sampled from the model and will in general fail to represent any given signal component
exactly. For example, a 1-D sinusoidal signal composed of a single tone is well
represented by a sinusoidal model, but if the frequency of the signal falls between the
discrete Fourier frequencies of a given Fourier basis then the number of non-zero Fourier
coefficients can actually be quite large.
Disadvantages:
Computational overhead
Low-resolution face images
3.2 PROPOSED SYSTEM
The proposed system modifies the algorithm to increase the probability that certain
assumptions regarding the path endpoints are satisfied during each stage of the algorithm.
MP is a greedy-search-based approach to solving the combinatorial optimization problem
of identifying the best sparse representation. Due to the success of the original algorithm,
several variations of MP have been developed. In general, each of these variations
consists of three steps. The identification step in an iteration of an MP-based algorithm
refers to determining which atom(s) is (are) closest to the current residual. Augmentation
describes the step of adding the identified atom(s) to the support of the reconstruction.
Finally, each pursuit-type algorithm is concluded by a residual update. The fundamental
difference between MP and OMP is that the residual in OMP is updated by projecting the
image onto the orthogonal complement of the span of the current support. Note that when
the dictionary consists of pairwise orthogonal atoms, MP and OMP are equivalent.
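The three steps (identification, augmentation, residual update) and OMP's orthogonal-projection update can be sketched as follows; the least squares step is solved via the normal equations with a tiny elimination routine, and the toy data are illustrative.

```python
def solve(A, b):
    """Tiny Gaussian elimination for the small normal-equation systems below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(signal, atoms, k):
    """OMP: after each identification/augmentation, the residual is the signal
    minus its projection onto the span of the selected atoms."""
    support, residual = [], list(signal)
    for _ in range(k):
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max((i for i in range(len(atoms)) if i not in support),
                   key=lambda i: abs(scores[i]))
        support.append(best)
        # Least squares over the current support via the normal equations.
        G = [[sum(x * y for x, y in zip(atoms[i], atoms[j])) for j in support]
             for i in support]
        rhs = [sum(x * s for x, s in zip(atoms[i], signal)) for i in support]
        c = solve(G, rhs)
        approx = [sum(c[m] * atoms[i][t] for m, i in enumerate(support))
                  for t in range(len(signal))]
        residual = [s - a for s, a in zip(signal, approx)]
    return support, residual

# Toy run: with orthonormal atoms, OMP recovers a 2-sparse signal exactly.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
support, residual = omp([2.0, 0.0, -3.0], atoms, k=2)
print(sorted(support), residual)  # [0, 2] [0.0, 0.0, 0.0]
```

With this orthonormal dictionary the run also illustrates the final remark above: MP would produce exactly the same support and zero residual, since projection and simple subtraction coincide for pairwise orthogonal atoms.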
ADVANTAGE:
Reduced computational time
Increased performance on multiple tasks in machine learning
CHAPTER 4
SYSTEM IMPLEMENTATION
4.1 MODULES
1. Paths Created Between Atoms
2. Gaussian Denoising
3. K-Sparse Reconstructions
4.1.1 Paths Created Between Atoms
Our approach can be differentiated from other existing variants of OMP by the
construction of a path between the two most similar dictionary atoms, followed by a
secondary identification step. In the OMP case, similarity is quantified by
the inner product between a dictionary atom and the test image (residual) at that stage.
We recall that the largest-magnitude inner product is equivalent to the closest vector as
determined by the angle between the vectorized images.
Although a path can be created between any arbitrary dictionary atoms, here we
exclusively consider paths between the two atoms that are most similar to the current test
signal (i.e., have the largest inner product or the smallest angle). That is to say in a
primary identification step we identify the two most similar dictionary atoms and form a
path between them. Given a path between two atoms we search for a novel atom which
is more similar to the test signal than either of the path endpoints. Here, we “search” by
drawing samples from the path that correspond to a discrete set of t values sampled with
uniform spacing from [0, 1] and testing their similarity to the test image. An alternate
formulation might seek a local optimum by performing a line search along the geodesic.
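The secondary identification step can be sketched as a uniform sampling of the path; the straight-line-then-normalize path and the toy vectors below are illustrative assumptions (the geodesic line search mentioned above is an alternative).

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def path_search(residual, d1, d2, samples=50):
    """Sample the (assumed) normalized straight-line path between the two
    most similar atoms and return the sample closest to the residual."""
    best_atom = d1
    best_score = abs(sum(r * a for r, a in zip(residual, d1)))
    for s in range(samples + 1):
        t = s / samples
        dt = unit([(1 - t) * a + t * b for a, b in zip(d1, d2)])
        score = abs(sum(r * a for r, a in zip(residual, dt)))
        if score > best_score:
            best_atom, best_score = dt, score
    return best_atom, best_score

# The residual lies "between" the two best atoms, so a path sample wins.
r = unit([1.0, 0.05])
d1, d2 = unit([1.0, 0.3]), unit([1.0, -0.2])
atom, score = path_search(r, d1, d2)
print(score > abs(sum(a * b for a, b in zip(r, d1))))  # True
```

In the full algorithm, d1 and d2 would come from the primary identification step (the two largest-magnitude inner products with the residual), and the winning sample replaces d1 in the augmentation step.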
4.1.2 Gaussian Denoising
Denoising experiments were performed with Gaussian noise of standard deviation
σ = 20 added to the image. Patches of the noisy image, of size 8 × 8, are then
estimated using 5 iterations of the indicated algorithm. Both the DCT dictionary
and the globally trained learned dictionary are tested. A denoised image is then
constructed by stitching together the OMP-estimated patches, and performance is
measured using the output PSNR.
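The experimental setup above can be sketched in NumPy as follows. The image here is a synthetic gradient standing in for a real test image (an assumption for illustration), σ = 20 matches the experiments, and the per-patch OMP estimation step is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def psnr(clean, estimate, peak=255.0):
    """Output PSNR in dB between a clean image and an estimate."""
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic 64x64 gradient standing in for the test image.
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
noisy = clean + 20.0 * rng.standard_normal(clean.shape)  # sigma = 20

# Overlapping 8x8 patches of the noisy image (stride 1); each patch would
# then be estimated with 5 iterations of the indicated algorithm.
patches = np.lib.stride_tricks.sliding_window_view(noisy, (8, 8))
print(patches.shape)  # (57, 57, 8, 8)
print(round(psnr(clean, noisy), 1))
```

With σ = 20 and a peak of 255, the input PSNR lands near 22 dB; a denoising algorithm is judged by how much it raises this output PSNR.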
4.1.3 K-Sparse Reconstructions
We measure the reconstruction error between a pristine original image and a patch-based
k-sparse reconstruction using a dictionary. In particular, we consider the DCT and K-SVD
dictionaries composed of 256 8×8 image atoms that were provided in support of [11].
Each dictionary is used to reconstruct 8×8 patches overlapped with a stride of one, where
the error between the original and the reconstructed image is given by ‖R_k‖_F / ‖T‖_F,
with T the original image, R_k the residual, and k the number of atoms included in the
reconstruction.
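The relative-error measure can be sketched as follows: a hedged NumPy illustration using a random placeholder dictionary (not the DCT or K-SVD dictionaries of [11]) and a single vectorized 8×8 patch standing in for the image:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder dictionary: 256 atoms for vectorized 8x8 patches (64-dim).
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
T = rng.standard_normal(64)  # one vectorized 8x8 patch of the "original"

def ksparse_error(D, T, k):
    """Relative error ||R_k||_F / ||T||_F of a k-atom OMP reconstruction."""
    residual, support = T.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], T, rcond=None)
        residual = T - D[:, support] @ coeffs
    return np.linalg.norm(residual) / np.linalg.norm(T)

errs = [ksparse_error(D, T, k) for k in (1, 4, 16, 64)]
# The error is non-increasing in k; with k = 64 it is essentially zero,
# since 64 generic atoms span the 64-dimensional patch space.
print(all(a >= b - 1e-12 for a, b in zip(errs, errs[1:])))
```
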
CHAPTER 5
SYSTEM ARCHITECTURE
CHAPTER 6
LANGUAGE DESCRIPTION
6.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was written
originally to provide easy access to matrix software developed by the LINPACK (linear
system package) and EISPACK (Eigen system package) projects. MATLAB is a high-
performance language for technical computing. It integrates computation, visualization,
and programming in one environment. MATLAB has many advantages compared to
conventional computer languages (e.g., C, FORTRAN) for solving technical problems.
MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. Specific applications are collected in packages referred to as
toolboxes. There are toolboxes for signal processing, symbolic computation, control
theory, simulation, optimization, and several other fields of applied science and
engineering.
6.2 MATLAB’s POWER OF COMPUTATIONAL MATHEMATICS
MATLAB is used in every facet of computational mathematics. Following are some
of the mathematical calculations where it is most commonly used:
Dealing with Matrices and Arrays
2-D and 3-D Plotting and graphics
Linear Algebra
Algebraic Equations
Non-linear Functions
Statistics
Data Analysis
Calculus and Differential Equations
Numerical Calculations
Integration
Transforms
Curve Fitting
Various other special functions
Command Window - This is the main area where commands can be entered at the
command line. It is indicated by the command prompt (>>).
Workspace - The workspace shows all the variables created and/or imported from files.
Fig. 6.5.4. Workspace
Command History - This panel shows the commands that were entered at the command
line and lets you rerun them.
Operator  Purpose
-         Minus; subtraction.
*         Scalar and matrix multiplication operator.
.*        Array multiplication operator.
^         Scalar and matrix exponentiation operator.
.^        Array exponentiation operator.
\         Left-division operator.
/         Right-division operator.
.\        Array left-division operator.
./        Array right-division operator.
Command   Purpose
disp      Displays contents of an array or string.
fscanf    Read formatted data from a file.
format    Control screen-display format.
fprintf   Performs formatted write to screen or file.
6.9 M FILES
MATLAB allows writing two kinds of program files:
Scripts:
Script files are program files with the .m extension. In these files, you write a series
of commands, which you want to execute together. Scripts do not accept inputs and do
not return any outputs. They operate on data in the workspace.
Functions:
Function files are also program files with the .m extension. Functions can accept inputs
and return outputs. Internal variables are local to the function.
Creating and Running a Script File:
To create script files, you need to use a text editor. You can open the MATLAB editor
in two ways:
Using the command prompt
Using the IDE
At the command prompt, you can directly type edit, or edit followed by the file name
(with the .m extension):
edit
or
edit <filename>
CHAPTER 7
SYSTEM TESTING
7.1 INTRODUCTION
The purpose of testing is to discover errors. Testing is the process of trying
to discover every conceivable fault or weakness in a work product. It provides a
way to check the functionality of components, sub-assemblies, assemblies and/or
a finished product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user expectations
and does not fail in an unacceptable manner. There are various types of test. Each
test type addresses a specific testing requirement.
7.2 TYPES OF TESTS
7.2.1 Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application. It is done after the
completion of an individual unit, before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive. Unit tests perform
basic tests at component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business
process performs accurately to the documented specifications and contains
clearly defined inputs and expected results.
7.2.2 Integration testing
Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is
more concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as
shown by successful unit testing, the combination of components is correct and
consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
7.2.3 Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
CHAPTER 8
SCREEN SHOTS
CHAPTER 9
CONCLUSION
CHAPTER 10
REFERENCE