A New Image Compression Framework: DWT Optimization using LS-SVM Regression under IWP-QPSO based Hyper Parameter Optimization

S. Nagaraja Rao, Professor of ECE, G. Pullaiah College of Engineering & Technology, Kurnool, A.P., India

Dr. M. N. Giri Prasad, Professor of ECE, J.N.T.U. College of Engineering, Anantapur, A.P., India
Abstract — In this paper, a hybrid model integrating DWT and least squares support vector machines (LS-SVM) is proposed for image coding. In this model, the proposed Honed Fast Haar wavelet Transform (HFHT) is used to decompose an original RGB image at different scales. LS-SVM regression is then used to predict the series of coefficients. The hyper parameters for LS-SVM are selected using a proposed QPSO technique called intensified worst particle based QPSO (IWP-QPSO). Two mathematical models are discussed: one derives the HFHT, which is computationally efficient compared with the traditional FHT, and the other derives IWP-QPSO, which converges in fewer iterations than traditional QPSO. The experimental results show that the hybrid model, based on LS-SVM regression, HFHT and IWP-QPSO, outperforms traditional image coding standards such as JPEG and JPEG2000; furthermore, the proposed hybrid model emerged as best in a comparative study with the JPEG2000 standard.
Keywords — DWT; least squares support vector machines (LS-SVM); Honed Fast Haar wavelet transform (HFHT); QPSO; FHT
I. INTRODUCTION
Compression of a specific type of data entails transforming and organizing the data in a way in which it is easily represented. Images are in wide use today, and decreasing the bandwidth and space they require is a benefit. With images, lossy compression is generally allowed as long as the losses are subjectively unnoticeable to the human eye.

The human visual system is not as sensitive to changes in high frequencies [1]. This piece of information can be utilized by image compression methods: after converting an image into the frequency domain, we can effectively control the magnitudes of the higher frequencies in the image.

Since machine learning techniques are spreading into various domains to support the selection of contextual parameters from given training data, it is natural to apply these techniques to image and signal processing as well, particularly in the process of signal and image encoding and decoding. A machine learning approach, LS-SVM for regression, can be trained to represent a set of values. If the set of values is not complex in its representation, it can be roughly approximated using a few hyper parameters. This approximation can then be used to compress images.

The rest of the paper is organized as follows: Section II describes related work in image coding using machine learning techniques. Section III describes the technologies used in the proposed image and signal compression technique. Section IV describes a mathematical model to optimize the Fast Haar Wavelet Transform. Section V describes a mathematical model to optimize the QPSO based parameter search, and Section VI describes the mathematical model for LS-SVM regression under QPSO. Section VII describes the proposed image and signal compression technique. Section VIII discusses the results, and Section IX contains a comparative analysis of the results acquired from the proposed model and the existing JPEG2000 standard.
 
II. RELATED WORK
Machine learning algorithms have also spread into image processing and have often been used in image compression. M. H. Hassoun et al. [2] proposed a method that uses the back-propagation algorithm in a feed-forward neural network.

Observation: The compression ratio of the image recovered using this algorithm was generally around 8:1, with an image quality much lower than JPEG, one of the most well-known image compression standards.

Amerijckx et al. [3] presented an image coding technique that uses vector quantization (VQ) on discrete cosine transform (DCT) coefficients using a Kohonen map.

Observation: Only at ratios greater than 30:1 has it been proven to be better than JPEG.

Robinson et al. [4] described an image coding technique that performs SVM regression on DCT coefficients. Kecman et al. [5] also described an SVM regression based technique that differs from [4] in parameter selection.
Observation: These methods [4, 5] produced better image quality than JPEG at higher compression ratios.

Sanjeev Kumar et al. [6] described the usage of SVM regression to minimize compression artifacts.

Observation: Owing to the complexity of the hyper parameter search, the model was concluded to be less efficient on large data.

Compression based on DCT has some drawbacks, as described in the following section. The modern and popular still image compression standard called JPEG2000 uses DWT technology with the view of overcoming these limitations. It is also quite considerable that, in color (RGB) image compression, it is a well-known fact that independent compression of the R, G, B channels is sub-optimal, as it ignores the inherent coupling between the channels. Commonly, RGB images are converted to YCbCr or some other decorrelated color space, followed by independent compression of each channel, which is also part of the JPEG/JPEG2000 standards; a sketch of this conversion follows. This limitation encourages us to find an efficient image and signal coding model, particularly for RGB images.

To optimize these DWT based compression models, an image compression algorithm based on wavelet technology and the machine learning technique LS-SVM regression is proposed. The aim of the work is to describe the usage of novel mathematical models to optimize FHT, one of the popular DWT techniques, and QPSO, one of the effective hyper parameter search techniques for SVM. The result of compression is considerable, and a comparative study with the JPEG2000 standard confirms the significance of the proposed model.
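As context for the channel conversion mentioned above, here is a minimal sketch of the standard JFIF RGB-to-YCbCr transform (a well-known formula, not something specific to this paper; the function name is illustrative):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Standard JFIF RGB -> YCbCr conversion for 8-bit channels,
    as used by JPEG before coding each channel independently."""
    m = np.array([[ 0.299,     0.587,     0.114    ],
                  [-0.168736, -0.331264,  0.5      ],
                  [ 0.5,      -0.418688, -0.081312 ]])
    ycc = rgb.astype(np.float64) @ m.T
    ycc[..., 1:] += 128.0  # offset the chroma channels into [0, 255]
    return ycc
```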
 
III. EXPLORATION OF TECHNOLOGIES USED

A. HAAR and Fast HAAR Wavelet Transformation
The DWT is one of the fundamental processes in the JPEG2000 image compression algorithm. The DWT is a transform which can map a block of data in the spatial domain into the frequency domain, returning information about the localized frequencies in the data set. A two-dimensional (2D) DWT is used for images. The 2D DWT decomposes an image into four blocks: the approximation coefficients and three detail coefficients. The details include the horizontal, vertical, and diagonal coefficients. The lower frequency (approximation) portion of the image can be preserved, while the higher frequency portions may be approximated more loosely without much visible quality loss. The DWT can be applied once to the image and then again to the coefficients which the first DWT produced. It can be visualized as an inverted tree-like structure: the original image sits at the top; the first-level DWT decomposes the image into four parts or branches, as previously mentioned; each of those four parts can then have the DWT applied to it individually, splitting each into four distinct parts or branches. This method is commonly known as wavelet packet decomposition.
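As a concrete illustration of the four-block split, here is a minimal sketch of one 2D Haar decomposition level (illustrative only; it uses the averaging convention (x + y)/2 applied along rows and then columns, which is one of several common normalizations):

```python
import numpy as np

def haar_dwt2d(block):
    """One level of the 2D Haar DWT: splits a (2m x 2n) block into an
    approximation and three detail sub-bands (labels follow one common
    convention; normalizations vary across texts)."""
    a = block[0::2, 0::2].astype(np.float64)  # top-left of each 2x2 cell
    b = block[0::2, 1::2].astype(np.float64)  # top-right
    c = block[1::2, 0::2].astype(np.float64)  # bottom-left
    d = block[1::2, 1::2].astype(np.float64)  # bottom-right
    ll = (a + b + c + d) / 4.0  # approximation (low-low)
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Applying haar_dwt2d again to ll gives the next level of the inverted tree; applying it to all four sub-bands instead gives the wavelet packet decomposition described above.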
B. The Properties of the Haar and FHT Transform
 
• The Haar Transform is real and orthogonal. Therefore

Hr = Hr* ……. (1)
Hr^(-1) = Hr^T ……. (2)

• The Haar Transform is a very fast transform.
• The basis vectors of the Haar matrix are sequency ordered.
• The Haar Transform has poor energy compaction for images.
• Orthogonal property: the original signal is split into a low and a high frequency part, and filters enabling the splitting without duplicating information are said to be orthogonal.
• Linear phase: to obtain linear phase, symmetric filters would have to be used.
• Compact support: the magnitude response of the filter should be exactly zero outside the frequency range covered by the transform. If this property is satisfied, the transform is energy invariant.
• Perfect reconstruction: if the input signal is transformed and inversely transformed using a set of weighted basis functions, and the reproduced sample values are identical to those of the input signal, the transform is said to have the perfect reconstruction property. If, in addition, no information redundancy is present in the sampled signal, the wavelet transform is, as stated above, orthonormal.

No wavelet can possess all of these properties, so the choice of wavelet is decided based on which of the above points are important for a particular application. Haar wavelets, Daubechies wavelets and bi-orthogonal wavelets are popular choices; these wavelets have properties which cover the requirements for a range of applications.
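Properties (1) and (2) can be checked numerically for the normalized 2x2 Haar matrix (a sketch; the normalization factor is an assumption):

```python
import numpy as np

# Normalized 2x2 Haar matrix: averaging row and differencing row.
Hr = (1.0 / np.sqrt(2.0)) * np.array([[1.0,  1.0],
                                      [1.0, -1.0]])

# Real: Hr equals its complex conjugate (eq. 1).
assert np.allclose(Hr, Hr.conj())
# Orthogonal: the inverse equals the transpose (eq. 2).
assert np.allclose(np.linalg.inv(Hr), Hr.T)
```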
C. Quantum-behaved Particle Swarm Optimization (QPSO)
The development in the field of quantum mechanics is mainly due to the findings of Bohr, de Broglie, Schrödinger, Heisenberg and Born in the early twentieth century. Their studies forced scientists to rethink the applicability of classical mechanics and the traditional understanding of the nature of motion of microscopic objects [7].

In classical PSO, a particle is described by its position vector xi and velocity vector vi, which determine the trajectory of the particle. The particle moves along a determined trajectory following Newtonian mechanics. However, if we consider quantum mechanics, the term trajectory is meaningless, because xi and vi of a particle cannot be determined simultaneously according to the uncertainty principle. Therefore, if individual particles in a PSO system have quantum behavior, the performance of the PSO will be far from that of classical PSO [8].

In the quantum model of a PSO, the state of a particle is depicted by a wave function ψ(x, t) instead of position and velocity. The dynamic behavior of the particle is widely divergent from that of the particle in traditional PSO systems. In this context, the probability of the particle appearing at position xi is given by the probability density function |ψ(x, t)|^2, the form of which depends on the potential field in which the particle moves.
The particles move according to the following iterative equations [9], [10]:
x(t+1) = p + β · |mbest − x(t)| · ln(1/u)   if k ≥ 0.5
x(t+1) = p − β · |mbest − x(t)| · ln(1/u)   if k < 0.5

where p = (c1 · p_id + c2 · p_gd) / (c1 + c2)

mbest = (1/M) · Σ_{i=1}^{M} p_i

Mean best (mbest) of the population is defined as the mean of the best positions of all particles; u, k, c1 and c2 are uniformly distributed random numbers in the interval [0, 1]. The parameter β is called the contraction-expansion coefficient.

The flow of the QPSO algorithm is:

Initialize the swarm
Do
    Find mean best (mbest)
    Optimize particle positions
    Update P_best
    Update P_gbest
Until (maximum iteration reached)
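A minimal sketch of plain QPSO following these update equations (illustrative Python, assuming a fitness function to be minimized; this is the baseline algorithm, not the paper's IWP-QPSO variant, and all names and defaults are assumptions):

```python
import numpy as np

def qpso(fitness, n_particles=20, dim=2, bounds=(-10.0, 10.0),
         beta=0.75, max_iter=100):
    """Baseline QPSO: particles move around local attractors p using
    x(t+1) = p +/- beta * |mbest - x(t)| * ln(1/u)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))  # initialize the swarm
    pbest = x.copy()                                   # personal bests
    pbest_val = np.array([fitness(p) for p in pbest])

    for _ in range(max_iter):
        mbest = pbest.mean(axis=0)           # mean of all personal bests
        gbest = pbest[np.argmin(pbest_val)]  # current global best
        for i in range(n_particles):
            c1, c2 = np.random.rand(2)
            p = (c1 * pbest[i] + c2 * gbest) / (c1 + c2)  # local attractor
            u = np.random.rand(dim)
            k = np.random.rand(dim)
            step = beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
            x[i] = np.where(k >= 0.5, p + step, p - step)
            val = fitness(x[i])
            if val < pbest_val[i]:           # update P_best
                pbest[i], pbest_val[i] = x[i].copy(), val

    return pbest[np.argmin(pbest_val)]       # final P_gbest

# Example: minimizing the sphere function drives the result toward the origin.
best = qpso(lambda v: float(np.sum(v ** 2)))
```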
D. LS-SVM
The support vector machine (SVM), introduced by Vapnik [12, 13], is a valuable tool for solving pattern recognition and classification problems. SVMs can be applied to regression problems through the introduction of an alternative loss function. Due to its advantages and remarkable generalization performance over other methods, SVM has attracted attention and gained extensive application [12]. SVM shows outstanding performance because it can lead to global models that are often unique, by embodying the structural risk minimization principle [12], which has been shown to be superior to the traditional empirical risk minimization principle. Furthermore, due to their specific formulation, sparse solutions can be found, and both linear and nonlinear regression can be performed. However, finding the final SVM model can be computationally very difficult because it requires the solution of a set of nonlinear equations (a quadratic programming problem). As a simplification, Suykens and Vandewalle [14] proposed a modified version of SVM called the least-squares SVM (LS-SVM), which results in a set of linear equations instead of a quadratic programming problem and extends the applications of the SVM. There are a number of excellent introductions to SVM [15, 16]; the theory of LS-SVM has been described clearly by Suykens et al. [14, 15], and applications of LS-SVM in quantification and classification have been reported in several works [17, 18].

In principle, LS-SVM always fits a linear relation (y = w^T x + b) between the regressors (x) and the dependent variable (y). The best relation is the one that minimizes the cost function (Q) containing a penalized regression error term:

Q = (1/2) w^T w + g · Σ_{i=1}^{N} e_i^2 ….. (3)

Subject to:

y_i = w^T φ(x_i) + b + e_i,  i = 1, …, N ….. (4)

The first part of this cost function is a weight decay, which is used to regularize weight sizes and penalize large weights. Due to this regularization, the weights converge to similar values. Large weights deteriorate the generalization ability of the LS-SVM because they can cause excessive variance. The second part of the cost function is the regression error for all training data. The relative weight of this part compared to the first part is indicated by the parameter g, which has to be optimized by the user.

Similar to other multivariate statistical models, the performance of an LS-SVM depends on the combination of several parameters. The choice of the kernel function is cumbersome and depends on each case. However, the kernel functions most used are the radial basis function (RBF), a simple Gaussian function, and polynomial functions, where the width of the Gaussian or the degree of the polynomial, which should be optimized by the user, is used to obtain the support vectors. For the RBF kernel and the polynomial kernel it should be stressed that it is very important to do a careful model selection of the tuning parameters, in combination with the regularization constant g, in order to achieve a good generalization model.
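Because the LS-SVM constraints are equalities, training reduces to one linear system. A minimal sketch under the usual dual formulation with an RBF kernel (illustrative only; variable names, defaults, and the kernel choice are assumptions):

```python
import numpy as np

def lssvm_train(X, y, g=10.0, sigma=1.0):
    """Solve the LS-SVM dual system
    [[0, 1^T], [1, K + I/g]] [b; alpha] = [0; y]
    where K is the RBF kernel matrix."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))   # Gaussian (RBF) kernel
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / g          # regularization constant g
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Predict y(x) = sum_i alpha_i * K(x, x_i) + b."""
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b
```

In the proposed framework, g and sigma are exactly the kind of hyper parameters that the IWP-QPSO search is meant to select.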
 
IV. A MATHEMATICAL MODEL TO OPTIMIZE THE FAST HAAR WAVELET TRANSFORM
Since the reconstruction process in multi-resolution wavelets does not require the approximation coefficients, except at level 0, those coefficients can be ignored to reduce the memory requirements of the transform and the amount of inefficient movement of Haar coefficients. As in the FHT, we use 2^N data points.

The Honed Fast Haar Transform (HFHT) is obtained by simply taking (w + x + y + z)/4 instead of (x + y)/2 for the approximation, and (w + x − y − z)/4 instead of (x − y)/2 for the differencing process; 4 nodes are considered at a time. Notice that the calculation of (w + x − y − z)/4 yields the detail coefficients at level n − 2. To obtain the remaining detail coefficients, the differencing process (x − y)/2 still needs to be done. The decomposition step can be done using a matrix formulation as well. The overall computation of the HFHT decomposition for 2^N data, with q = N/4, is as follows.

Coefficients:
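As a rough illustration of the four-sample grouping just described, here is a minimal sketch of one HFHT pass over a 1-D signal (illustrative names only; this is not the paper's coefficient listing):

```python
import numpy as np

def hfht_pass(s):
    """One HFHT pass on a signal of length 2^N (N >= 2): each group of
    four samples (w, x, y, z) yields the approximation (w + x + y + z)/4
    and a level n-2 detail (w + x - y - z)/4; the finest-level details
    still come from pairwise differencing (x - y)/2."""
    s = np.asarray(s, dtype=np.float64)
    w, x, y, z = s[0::4], s[1::4], s[2::4], s[3::4]
    approx = (w + x + y + z) / 4.0         # covers two scales in one step
    detail_n2 = (w + x - y - z) / 4.0      # detail coefficients at level n-2
    detail_n1 = (s[0::2] - s[1::2]) / 2.0  # pairwise (x - y)/2 details
    return approx, detail_n2, detail_n1
```

Since four samples are combined at once, one pass produces what the plain FHT obtains in two averaging passes, which is the source of the computational saving claimed above.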