(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, 2010
IMAGE SUPER RESOLUTION USING MARGINAL DISTRIBUTION PRIOR
S. Ravishankar
Department of Electronics and Communication, Amrita Vishwa Vidyapeetham University, Bangalore, India
s_ravishankar@blr.amrita.edu

Dr. K.V.V. Murthy
Department of Electronics and Communication, Amrita Vishwa Vidyapeetham University, Bangalore, India
kvv_murthy@blr.amrita.edu
Abstract— In this paper, we propose a new technique for image super-resolution. Given a single low resolution (LR) observation and a database consisting of low resolution images and their high resolution versions, we obtain super-resolution for the LR observation using a regularization framework. First we obtain a close approximation of the super-resolved image using a learning based technique. We learn the high frequency details of the observation using the Discrete Cosine Transform (DCT). The LR observation is represented using a linear model. We model the texture of the HR image using the marginal distribution and use it as prior information to preserve the texture. We extract the features of the texture in the image by computing histograms of the filtered images obtained by applying the filters in a filter bank, and match them to those of the close approximation. We arrive at a cost function consisting of a data fitting term and a prior term and optimize it using Particle Swarm Optimization (PSO). We show the efficacy of the proposed method by comparing the results with interpolation methods and existing super-resolution techniques. The advantage of the proposed method is that it converges quickly to the final solution and does not require a number of low resolution observations.
I. INTRODUCTION
In many applications high resolution images lead to better classification, analysis and interpretation. The resolution of an image depends on the density of sensing elements in the camera. A high end camera with large memory storage capability can be used to capture high resolution images. In some applications, such as wildlife sensor networks and video surveillance, it may not be feasible to employ a costly camera. In such applications algorithmic approaches can be helpful to obtain high resolution images from low resolution images captured using low cost cameras. The super-resolution idea was first proposed by Tsai and Huang [1]. They use a frequency domain approach and employ motion as a cue. In [2], the authors use a Maximum a Posteriori (MAP) framework for jointly estimating the registration parameters and the high-resolution image for severely aliased observations. The authors in [3] describe a MAP-MRF based super-resolution technique using a blur cue and recover both the high-resolution scene intensity and the depth fields simultaneously. The authors in [4] present a technique for image interpolation using the wavelet transform. They estimate the wavelet coefficients at a higher scale from a single low resolution observation and achieve interpolation by taking the inverse wavelet transform. The authors in [5] propose a technique for super-resolving a single frame image using a database of high resolution images. They learn the high frequency details from a database of high resolution images and obtain an initial estimate of the image to be super-resolved. They formulate regularization using a wavelet prior and an MRF model prior and employ simulated annealing for optimization. Recently, learning based techniques have been employed for super-resolution. The missing information of the high resolution image is learned from a database consisting of high resolution images. Freeman et al. [6] propose an example based super-resolution technique. They estimate missing high-frequency details by interpolating the input low-resolution image to the desired scale. The super-resolution is performed by nearest neighbor based estimation of high-frequency patches based on the corresponding patches of the input low-frequency image. Brandi et al. [7] propose an example-based approach for video super-resolution. They restore the high-frequency information of an interpolated block by searching in a database for a similar block, and by adding the high frequency of the chosen block to the interpolated one. They use the high frequency of key HR frames instead of the database to increase the quality of non-key restored frames. In [8], the authors address the problem of super-resolution from a single image using a multi-scale tensor voting framework. They consider all three color channels simultaneously to produce a multi-scale edge representation that guides the reconstruction of the high-resolution color image, which is subjected to the back projection constraint. The authors in [9] recover the super-resolution image through a neighbor embedding algorithm. They employ histogram matching to select training images having related contents. In [10] the authors propose neighbor embedding based super-resolution through edge detection and Feature Selection (NeedFS). They propose a combination of appropriate features for preserving edges as well as smoothing the color regions. The training patches are learned with different neighborhood sizes depending on edge detection. The authors in [11] propose a modeling methodology for texture images.
They capture the features of texture using a set of filters which represent the marginal distribution of the image and match these features to infer the solution. In this paper, we propose an approach to obtain super-resolution from a single image. First, we learn the high frequency content of the super-resolved image from the high-resolution training images in the database and use the learnt image as a close approximation to the final solution. We solve this ill-posed problem using prior information in the form of the marginal distribution. We apply different filters to the image and calculate the histograms. We assume that these histograms should not deviate from those of the close approximation. We show the results of our method on real images and compare them with existing approaches.
II. DCT BASED APPROACH FOR CLOSE APPROXIMATION

In this section, the DCT based approach to learn high frequency details for the super-resolved image, for a decimation factor of 2 (q = 2), is described. Each set in the database consists of a pair of low resolution and high resolution images. The test image and the LR training images are of size M × M pixels. The corresponding HR training images are of size 2M × 2M pixels. We first up-sample the test image and all the low resolution training images by a factor of 2 and create images of size 2M × 2M pixels each. A standard interpolation technique can be used for this. We divide each of the images, i.e. the up-sampled test image, the up-sampled low resolution images and their high resolution versions, into blocks of size 4 × 4. The motivation for dividing into 4 × 4 blocks comes from the theory of JPEG compression, where an image is divided into 8 × 8 blocks in order to extract the redundancy in each block. However, in this case we are interested in learning the non-aliased frequency components from the HR training images using the aliased test image and the aliased LR training images. This is done by taking the DCT of each block for all the images in the database as well as the test image. Fig. 1(a) shows the DCT blocks of the up-sampled test image, whereas Fig. 1(b) shows the DCT blocks of the up-sampled LR training images and the HR training images. We learn the DCT coefficients for each block in the test image from the corresponding blocks in the HR images in the database. It is reasonable to assume that when we interpolate the test image and the low resolution training images to obtain 2M × 2M pixels, the distortion is minimum in the lower frequencies. Hence we learn those DCT coefficients that correspond to high frequencies, which are already aliased and are now distorted due to interpolation. We consider the up-sampled LR training images to find the best matching DCT coefficients for each of the blocks in the test image.

Let $C_T(i, j)$, $1 \le (i, j) \le 4$, be the DCT coefficient at location $(i, j)$ in a 4 × 4 block of the test image. Similarly, let $C_{LR}^{(m)}(i, j)$ and $C_{HR}^{(m)}(i, j)$, $m = 1, 2, \ldots, L$, be the DCT coefficients at location $(i, j)$ in the block at the same position in the $m$th up-sampled LR image and the $m$th HR image, respectively. Here $L$ is the number of training sets in the database. Now the best matching HR block for the considered (up-sampled) low resolution image block is obtained as
$$\hat{m} = \arg\min_{m} \sum_{(i,j):\, i + j > \mathrm{Threshold}} \left[ C_T(i, j) - C_{LR}^{(m)}(i, j) \right]^2 \qquad (1)$$

Here, $\hat{m}$ is the index of the training image which gives the minimum for the block. The non-aliased DCT coefficients of the best matching HR image are now copied into the corresponding locations in the block of the up-sampled test image. In effect, we learn the non-aliased DCT coefficients for the test image block from the set of LR-HR images. The coefficients that correspond to low frequencies are not altered. Thus, at location $(i, j)$ in a block, we have
$$X_T(i, j) = \begin{cases} C_{HR}^{(\hat{m})}(i, j), & i + j > \mathrm{Threshold} \\ C_T(i, j), & \text{otherwise} \end{cases} \qquad (2)$$
This is repeated for every block in the test image. We conducted experiments with different Threshold values. We began with Threshold = 2, where all the coefficients except the DC coefficient are learned. We then increased the Threshold value and repeated the experiment. The best results were obtained when the Threshold was set to 4, which corresponds to learning a total of 10 coefficients from the best matching HR image in the database. After learning the DCT coefficients for every block in the test image, we take the inverse DCT to obtain the high spatial resolution image and consider it as the close approximation to the HR image.
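To make the procedure concrete, the following Python sketch shows one way the block-DCT learning of Eqs. (1) and (2) could be implemented. It assumes grayscale floating-point images, an orthonormal 4 × 4 block DCT, and test and LR training images that have already been up-sampled to the HR size; the function names (block_dct, close_approximation, etc.) are illustrative and not from the paper.

```python
# A minimal sketch of the DCT-based close approximation (Section II),
# assuming q = 2, 4x4 blocks, and grayscale float images of equal size.
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(img, b=4):
    """Return DCT coefficients of non-overlapping b x b blocks."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for r in range(0, H, b):
        for c in range(0, W, b):
            out[r:r+b, c:c+b] = dctn(img[r:r+b, c:c+b], norm='ortho')
    return out

def block_idct(coeffs, b=4):
    """Inverse of block_dct."""
    H, W = coeffs.shape
    out = np.zeros_like(coeffs, dtype=float)
    for r in range(0, H, b):
        for c in range(0, W, b):
            out[r:r+b, c:c+b] = idctn(coeffs[r:r+b, c:c+b], norm='ortho')
    return out

def close_approximation(test_up, lr_up_list, hr_list, threshold=4, b=4):
    """Learn high-frequency DCT coefficients (i + j > threshold, 1-based)
    from the best matching training pair, block by block (Eqs. (1)-(2))."""
    i_idx, j_idx = np.meshgrid(np.arange(1, b + 1), np.arange(1, b + 1), indexing='ij')
    high = (i_idx + j_idx) > threshold          # mask of coefficients to learn

    C_T = block_dct(test_up, b)
    C_LR = [block_dct(im, b) for im in lr_up_list]
    C_HR = [block_dct(im, b) for im in hr_list]

    X = C_T.copy()
    H, W = test_up.shape
    for r in range(0, H, b):
        for c in range(0, W, b):
            t_blk = C_T[r:r+b, c:c+b]
            # Eq. (1): best matching training image for this block,
            # compared only over the high-frequency locations.
            errs = [np.sum((t_blk[high] - C[r:r+b, c:c+b][high]) ** 2) for C in C_LR]
            m_hat = int(np.argmin(errs))
            # Eq. (2): copy the non-aliased HR coefficients, keep low frequencies.
            X[r:r+b, c:c+b][high] = C_HR[m_hat][r:r+b, c:c+b][high]
    return block_idct(X, b)
```

With threshold = 4 and 1-based indices, the mask selects exactly the 10 coefficients per block mentioned above.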
 
III. IMAGE FORMATION MODEL

In this work, we obtain super-resolution for an image from a single observation. The observed image $Y$ is of size M × M pixels. Let $y$ represent the lexicographically ordered vector of size $M^2 \times 1$ which contains the pixels of image $Y$, and let $z$ be the super-resolved image. The observed image can be modeled as

$$y = Dz + n, \qquad (3)$$

where $D$ is the decimation matrix which takes care of aliasing. For an integer decimation factor of q, the decimation matrix $D$ consists of $q^2$ non-zero elements along each row at appropriate locations. We estimate this decimation matrix from the initial estimate; the procedure for estimating the decimation matrix is described below. $n$ is the i.i.d. noise vector with zero mean and variance $\sigma^2$, and it is of size $M^2 \times 1$. The multivariate noise probability density is given by

$$P(n) = \frac{1}{(2\pi\sigma^2)^{M^2/2}} \exp\left(-\frac{n^T n}{2\sigma^2}\right).$$

Our problem is to estimate $z$ given $y$, which is an ill-posed inverse problem. It may be mentioned here that the captured observation is not blurred; in other words, we assume an identity matrix for the blur.
Generally, the decimation model to obtain the aliased pixel intensities from the high resolution pixels, for a decimation factor of q, has the form [12]

$$D = \frac{1}{q^2} \begin{bmatrix} 1\ 1 \cdots 1 & & & 0 \\ & 1\ 1 \cdots 1 & & \\ & & \ddots & \\ 0 & & & 1\ 1 \cdots 1 \end{bmatrix} \qquad (4)$$

The decimation matrix in Eq. (4) indicates that a low resolution pixel intensity $Y(i, j)$ is obtained by averaging the intensities of the $q^2$ pixels corresponding to the same scene location in the high resolution image and adding the noise intensity $n(i, j)$.
 
 
 
[Figure 1: Training Set-1 through Training Set-L]
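The observation model of Eqs. (3) and (4) can be illustrated with the short Python sketch below, which treats decimation as averaging of non-overlapping q × q blocks and also builds the explicit sparse matrix $D$ under a row-major (lexicographic) pixel ordering. The function names (decimate, simulate_observation, build_decimation_matrix) are illustrative choices, not from the paper.

```python
# A minimal sketch of y = Dz + n: each LR pixel is the average of the
# corresponding q x q block of HR pixels plus i.i.d. Gaussian noise.
import numpy as np
from scipy.sparse import lil_matrix

def decimate(z, q=2):
    """Apply the decimation operator D: average non-overlapping q x q blocks."""
    H, W = z.shape
    return z.reshape(H // q, q, W // q, q).mean(axis=(1, 3))

def simulate_observation(z, q=2, sigma=1.0, rng=None):
    """y = Dz + n with zero-mean Gaussian noise of variance sigma**2."""
    rng = np.random.default_rng() if rng is None else rng
    y = decimate(z, q)
    return y + rng.normal(0.0, sigma, size=y.shape)

def build_decimation_matrix(M, q=2):
    """Explicit sparse D of Eq. (4): shape (M^2, (qM)^2), q^2 entries of 1/q^2 per row,
    assuming lexicographic (row-major) ordering of pixels."""
    N = q * M
    D = lil_matrix((M * M, N * N))
    for i in range(M):
        for j in range(M):
            row = i * M + j
            for di in range(q):
                for dj in range(q):
                    D[row, (q * i + di) * N + (q * j + dj)] = 1.0 / q ** 2
    return D.tocsr()
```

Applying build_decimation_matrix(M, q) to the lexicographically ordered HR vector gives the same result as decimate applied to the HR image, which is a quick consistency check for the model.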
IV. TEXTURE MODELLING

Natural images consist of smooth regions, edges and texture areas. We regularize the solution using a texture preserving prior. We capture the features of texture by applying different filters to the image and computing histograms of the filtered images. These histograms estimate the marginal distribution of the image and are used as the features of the image. We use a filter bank that consists of two kinds of filters: Laplacian of Gaussian (LoG) filters and Gabor filters.
 A. Filter Bank 
The Gaussian filters play an important role due to their nice low pass frequency property. The two dimensional Gaussian function can be defined as

$$G(x, y \mid x_0, y_0, \sigma_x, \sigma_y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\left(-\left[\frac{(x - x_0)^2}{2\sigma_x^2} + \frac{(y - y_0)^2}{2\sigma_y^2}\right]\right) \qquad (5)$$

Here $(x_0, y_0)$ are the location parameters and $(\sigma_x, \sigma_y)$ are the scale parameters. The Laplacian of Gaussian (LoG) filter is radially symmetric and is centered around the Gaussian with $(x_0, y_0) = (0, 0)$ and $\sigma_x = \sigma_y = T$. Hence the LoG filter can be represented by

$$F(x, y \mid 0, 0, T) = c \cdot (x^2 + y^2 - T^2)\, \exp\left(-\frac{x^2 + y^2}{T^2}\right) \qquad (6)$$

Here $c$ is a constant and $T$ is the scale parameter. We can choose different scales with $T = \tfrac{1}{\sqrt{2}}, 1, 2, 3$, and so on.
The Gabor filter, with sinusoidal frequency $\omega$ and amplitude modulated by the Gaussian function, can be represented by

$$F_{\omega}(x, y) = G(x, y \mid 0, 0, \sigma_x, \sigma_y)\, e^{-j\omega x} \qquad (7)$$

A simple case of Eq. (7) with both sine and cosine components can be chosen as

$$G(x, y \mid 0, 0, T, \theta) = c\, \exp\left(-\frac{1}{2T^2}\left[4(x\cos\theta + y\sin\theta)^2 + (-x\sin\theta + y\cos\theta)^2\right]\right) e^{-j\frac{2\pi}{T}(x\cos\theta + y\sin\theta)} \qquad (8)$$

By varying the frequency and rotating the filter in the x-y plane, we can obtain a bank of filters. We can choose different scales T = 2, 4, 6, 8 and so on. Similarly, the orientation can be varied as $\theta = 0^{\circ}, 30^{\circ}, 60^{\circ}, 90^{\circ}$ and so on.
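The following Python sketch builds such a filter bank, following the LoG and Gabor forms of Eqs. (6) and (8) as reconstructed above. The kernel sizes, the constant c (taken as 1), the zero-mean normalization of the LoG kernel, and the function names are illustrative choices rather than specifications from the paper.

```python
# A minimal sketch of the filter bank of Section IV-A (LoG + Gabor filters).
import numpy as np

def log_kernel(T, size=None):
    """Laplacian of Gaussian, Eq. (6): (x^2 + y^2 - T^2) * exp(-(x^2 + y^2)/T^2)."""
    size = size or int(6 * T) | 1              # odd window roughly covering the support
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r, indexing='ij')
    k = (x ** 2 + y ** 2 - T ** 2) * np.exp(-(x ** 2 + y ** 2) / T ** 2)
    return k - k.mean()                         # zero-mean so flat regions give zero response

def gabor_kernels(T, theta, size=None):
    """Cosine and sine Gabor filters, Eq. (8), at scale T and orientation theta (radians)."""
    size = size or int(6 * T) | 1
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r, indexing='ij')
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(4 * u ** 2 + v ** 2) / (2 * T ** 2))
    return envelope * np.cos(2 * np.pi * u / T), envelope * np.sin(2 * np.pi * u / T)

def build_filter_bank(log_scales=(1 / np.sqrt(2), 1, 2, 3),
                      gabor_scales=(2, 4, 6, 8),
                      orientations=(0, 30, 60, 90)):
    """Collect LoG filters and cosine/sine Gabor filters into one bank B."""
    bank = [log_kernel(T) for T in log_scales]
    for T in gabor_scales:
        for deg in orientations:
            bank.extend(gabor_kernels(T, np.deg2rad(deg)))
    return bank
```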
 B. Marginal Distribution Prior 
As mentioned earlier, the histograms of the filtered images estimate the marginal distribution of the image. We use this marginal distribution as a prior. We obtain the close approximation $Z_C$ of the HR image using the discrete cosine transform based learning approach described in Section II, and assume that the marginal distribution of the super-resolved image should match that of the close approximation $Z_C$. Let $B$ be a bank of filters. We apply each of the filters in $B$ to $Z_C$ and obtain the filtered images $Z_C^{(\alpha)}$, where $\alpha = 1, \ldots, |B|$. We compute the histogram $H_C^{\alpha}$ of $Z_C^{(\alpha)}$. Similarly, we apply each of the filters in $B$ to the estimated HR image and obtain the filtered images $Z^{(\alpha)}$, where $\alpha = 1, 2, 3, \ldots, |B|$. We compute the histogram $H^{\alpha}$ of $Z^{(\alpha)}$. We define the marginal distribution prior term as

$$C_H = \sum_{\alpha=1}^{|B|} \left| H_C^{\alpha} - H^{\alpha} \right| \qquad (9)$$
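A minimal sketch of this prior term is given below: it filters both the close approximation and a candidate HR estimate with every filter in the bank (for example, the bank from the previous sketch), histograms the responses, and sums the absolute histogram differences as in Eq. (9). The number of bins and the shared histogram range per filter are assumptions; the paper does not specify them.

```python
# A minimal sketch of the marginal distribution prior of Eq. (9).
import numpy as np
from scipy.signal import convolve2d

def filter_responses(img, bank):
    """Convolve the image with every filter in the bank B."""
    return [convolve2d(img, k, mode='same', boundary='symm') for k in bank]

def marginal_distribution_prior(z, z_close, bank, bins=32):
    """C_H = sum over alpha of | H_C^alpha - H^alpha |, Eq. (9)."""
    resp_c = filter_responses(z_close, bank)
    resp_z = filter_responses(z, bank)
    cost = 0.0
    for rc, rz in zip(resp_c, resp_z):
        # Use one common bin range per filter so the two histograms are comparable.
        lo = min(rc.min(), rz.min())
        hi = max(rc.max(), rz.max()) + 1e-12     # guard against a degenerate range
        edges = np.linspace(lo, hi, bins + 1)
        h_c, _ = np.histogram(rc, bins=edges, density=True)
        h_z, _ = np.histogram(rz, bins=edges, density=True)
        cost += np.abs(h_c - h_z).sum()
    return cost
```

In the framework described in the abstract, this prior term is combined with a data fitting term derived from the observation model y = Dz + n to form the cost function that is minimized using Particle Swarm Optimization.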