
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 2, 2010, ISSN 1947-5500

**IMAGE SUPER RESOLUTION USING MARGINAL DISTRIBUTION PRIOR**

S. Ravishankar

Department of Electronics and Communication, Amrita Vishwa Vidyapeetham University, Bangalore, India. s_ravishankar@blr.amrita.edu

Dr. K. V. V. Murthy

Department of Electronics and Communication, Amrita Vishwa Vidyapeetham University, Bangalore, India. kvv_murthy@blr.amrita.edu

Abstract— In this paper, we propose a new technique for image super-resolution. Given a single low resolution (LR) observation and a database consisting of low resolution images and their high resolution versions, we obtain super-resolution for the LR observation using a regularization framework. First we obtain a close approximation of the super-resolved image using a learning based technique. We learn the high frequency details of the observation using the Discrete Cosine Transform (DCT). The LR observation is represented using a linear model. We model the texture of the HR image using the marginal distribution and use it as prior information to preserve the texture. We extract the texture features of the image by computing histograms of the filtered images obtained by applying the filters in a filter bank, and match them to those of the close approximation. We arrive at a cost function consisting of a data fitting term and a prior term and optimize it using Particle Swarm Optimization (PSO). We show the efficacy of the proposed method by comparing the results with interpolation methods and existing super-resolution techniques. The advantage of the proposed method is that it converges quickly to the final solution and does not require multiple low resolution observations.

I. INTRODUCTION

In many applications high resolution images lead to better classification, analysis and interpretation. The resolution of an image depends on the density of sensing elements in the camera. High-end cameras with large memory storage capability can be used to capture high resolution images. In some applications, such as wildlife sensor networks and video surveillance, it may not be feasible to employ a costly camera. In such applications algorithmic approaches can help obtain high resolution images from low resolution images captured with low cost cameras. The super-resolution idea was first proposed by Tsai and Huang [1]. They use a frequency domain approach and employ motion as a cue. In [2], the authors use a maximum a posteriori (MAP) framework for jointly estimating the registration parameters and the high-resolution image for severely aliased observations. The authors in [3] describe a MAP-MRF based super-resolution technique using blur as a cue and recover both the high-resolution scene intensity and the depth fields simultaneously. The authors in [4] present a technique for image interpolation using the wavelet transform. They estimate the wavelet coefficients at a higher scale from a single low resolution observation and achieve interpolation by taking the inverse wavelet transform. The authors in [5] propose a technique for super-resolving a single frame image using a database of high resolution images. They learn the high frequency details from a database of high resolution images and obtain an initial estimate of the image to be super-resolved. They formulate regularization using a wavelet prior and an MRF model prior and employ simulated annealing for optimization. Recently, learning based techniques have been employed for super-resolution. Missing information of the high resolution image is learned from a database consisting of high resolution images. Freeman et al. [6] propose an example based super-resolution technique. They estimate missing high-frequency details by interpolating the input low-resolution image to the desired scale. The super-resolution is performed by nearest neighbor based estimation of high-frequency patches, based on the corresponding patches of the input low-frequency image. Brandi et al. [7] propose an example-based approach for video super-resolution. They restore the high-frequency information of an interpolated block by searching in a database for a similar block, and by adding the high frequency of the chosen block to the interpolated one. They use the high frequency of key HR frames instead of the database to increase the quality of non-key restored frames. In [8], the authors address the problem of super-resolution from a single image using a multi-scale tensor voting framework. They consider all three color channels simultaneously to produce a multi-scale edge representation that guides the reconstruction of the high-resolution color image, which is subjected to the back projection constraint. The authors in [9] recover the super-resolution image through a neighbor embedding algorithm. They employ histogram matching for selecting more suitable training images having related content. In [10] the authors propose a neighbor embedding based super-resolution through edge detection and Feature Selection (NeedFS). They propose a combination of appropriate features for preserving edges as well as smoothing the color regions. The training patches are learned with different neighborhood sizes depending on edge detection. The authors in [11] propose a modeling methodology for texture images. They


capture the features of texture using a set of filters which represent the marginal distribution of the image, and match these distributions during feature fusion to infer the solution. In this paper, we propose an approach to obtain super-resolution from a single image. First, we learn the high frequency content of the super-resolved image from the high-resolution training images in the database and use the learnt image as a close approximation to the final solution. We solve this ill-posed problem using prior information in the form of the marginal distribution. We apply different filters to the image and compute the histograms of the filtered images. We assume that these histograms should not deviate from those of the close approximation. We show the results of our method on real images and compare them with existing approaches.

II. DCT BASED APPROACH FOR CLOSE APPROXIMATION

In this section, a DCT based approach to learn high frequency details of the super-resolved image for a decimation factor of 2 (q = 2) is described. Each set in the database consists of a pair of low resolution and high resolution images. The test image and the LR training images are of size M × M pixels; the corresponding HR training images are of size 2M × 2M pixels. We first up-sample the test image and all low resolution training images by a factor of 2, creating images of size 2M × 2M pixels each. A standard interpolation technique can be used for this. We divide each of the images, i.e. the up-sampled test image, the up-sampled low resolution training images and their high resolution versions, into blocks of size 4 × 4. The motivation for dividing into 4 × 4 blocks comes from the theory of JPEG compression, where an image is divided into 8 × 8 blocks in order to extract the redundancy in each block. However, in this case we are interested in learning the non-aliased frequency components from the HR training images using the aliased test image and the aliased LR training images. This is done by taking the DCT of each block for all the images in the database as well as the test image. Fig. 1(a) shows the DCT blocks of the up-sampled test image, whereas Fig. 1(b) shows the DCT blocks of the up-sampled LR training images and the HR training images. We learn DCT coefficients for each block in the test image from the corresponding blocks in the HR images in the database. It is reasonable to assume that when we interpolate the test image and the low resolution training images to obtain 2M × 2M pixels, the distortion is minimum in the lower frequencies. Hence we learn those DCT coefficients that correspond to high frequencies, which are already aliased and now distorted due to interpolation. We consider the up-sampled LR training images to find the best matching DCT coefficients for each of the blocks in the test image.

Let $C_T(i, j)$, $1 \le (i, j) \le 4$, be the DCT coefficient at location $(i, j)$ in a 4 × 4 block of the test image. Similarly, let $C_{LR}^{(m)}(i, j)$ and $C_{HR}^{(m)}(i, j)$, $m = 1, 2, \ldots, L$, be the DCT coefficients at location $(i, j)$ in the block at the same position in the m-th up-sampled LR image and the m-th HR image, respectively. Here L is the number of training sets in the database. Now the best matching HR block for the considered (up-sampled) low resolution image block is obtained as

$\hat{m} = \arg\min_m \sum_{(i+j) > \mathrm{Threshold}} \left( C_T(i, j) - C_{LR}^{(m)}(i, j) \right)^2$    (1)

Here, $\hat{m}$ is the index of the training image which gives the minimum for the block. The non-aliased best matching HR image DCT coefficients are then copied into the corresponding locations in the block of the up-sampled test image. In effect, we learn non-aliased DCT coefficients for the test image block from the set of LR-HR images. The coefficients that correspond to low frequencies are not altered. Thus at location $(i, j)$ in a block, we have

$X_T(i, j) = \begin{cases} C_{HR}^{(\hat{m})}(i, j) & \text{if } (i + j) > \mathrm{Threshold} \\ C_T(i, j) & \text{otherwise} \end{cases}$    (2)
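The block matching of Eq. (1) and the coefficient transfer of Eq. (2) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the helper names (`dct_matrix`, `block_dct`, `learn_block`) are ours, and the orthonormal DCT-II matrix is built directly so the sketch stays self-contained.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (n x n); rows index frequency."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(n)
    C[1:, :] *= np.sqrt(2 / n)
    return C

def block_dct(block: np.ndarray) -> np.ndarray:
    """2-D DCT of a square block."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def learn_block(test_block, lr_blocks, hr_blocks, threshold=4):
    """Replace the high-frequency DCT coefficients (i + j > Threshold,
    1-indexed as in Eqs. (1)-(2)) of the up-sampled test block with those
    of the best matching HR training block, then invert the DCT."""
    n = test_block.shape[0]
    i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
    high = (i + j) > threshold                     # mask of learned coefficients

    ct = block_dct(test_block)
    # Eq. (1): pick the training pair whose up-sampled LR block matches best
    # over the high-frequency coefficients.
    errs = [np.sum((ct[high] - block_dct(lr)[high]) ** 2) for lr in lr_blocks]
    m = int(np.argmin(errs))
    # Eq. (2): copy the HR coefficients at the high-frequency locations.
    xt = ct.copy()
    xt[high] = block_dct(hr_blocks[m])[high]
    C = dct_matrix(n)
    return C.T @ xt @ C                            # inverse 2-D DCT
```

With Threshold = 4 and 4 × 4 blocks, the mask `(i + j) > 4` selects exactly the 10 learned coefficients mentioned in the text; with Threshold = 2 it selects everything except the DC coefficient.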

This is repeated for every block in the test image. We conducted experiments with different Threshold values. We began with Threshold = 2, where all the coefficients except the DC coefficient are learned, and subsequently increased the threshold value and repeated the experiment. The best results were obtained when the Threshold was set to 4, which corresponds to learning a total of 10 coefficients from the best matching HR image in the database. After learning the DCT coefficients for every block in the test image, we take the inverse DCT to get the high spatial resolution image and consider it the close approximation to the HR image.

III. IMAGE FORMATION MODEL

In this work, we obtain super-resolution for an image from a single observation. The observed image Y is of size M × M pixels. Let y represent the lexicographically ordered vector of size M² × 1, which contains the pixels from image Y, and let z be the super-resolved image. The observed image can be modeled as

$y = Dz + n$,    (3)

where D is the decimation matrix which accounts for aliasing. For an integer decimation factor of q, the decimation matrix D consists of q² non-zero elements along each row at appropriate locations. We estimate this decimation matrix from the initial estimate; the procedure is described below. Here n is the i.i.d. noise vector with zero mean and variance $\sigma_n^2$; it is of size M² × 1. The multivariate noise probability density is given by $P(n) = (2\pi\sigma_n^2)^{-M^2/2} \exp\left( -\|n\|^2 / 2\sigma_n^2 \right)$. Our problem is to estimate z given y, which is an ill-posed inverse problem. It may be mentioned here that the captured observation is not blurred. In other words, we assume an
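For concreteness, the decimation operator of Eq. (3) can be built explicitly for small sizes. This is a minimal sketch assuming simple q × q box averaging (as the decimation model below describes); the function name is ours, and in practice the matrix is never formed densely for full-size images.

```python
import numpy as np

def decimation_matrix(m: int, q: int) -> np.ndarray:
    """Decimation matrix D: each row averages the q x q high resolution
    pixels that map to one low resolution pixel. m is the low resolution
    side length, so D has shape (m*m, (m*q)*(m*q)) and q**2 non-zero
    entries of value 1/q**2 per row."""
    n = m * q
    D = np.zeros((m * m, n * n))
    for r in range(m):              # low resolution row
        for c in range(m):          # low resolution column
            row = r * m + c
            for dr in range(q):
                for dc in range(q):
                    col = (r * q + dr) * n + (c * q + dc)
                    D[row, col] = 1.0 / q**2
    return D
```

Applying `D` to a lexicographically ordered HR image vector `z.ravel()` yields exactly the q × q block means, i.e. the aliased LR pixel intensities before noise is added.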


identity matrix for blur. Generally, the decimation model to obtain the aliased pixel intensities from the high resolution pixels, for a decimation factor of q, has the form [12]

$D = \frac{1}{q^2} \begin{bmatrix} 1\,1\cdots 1 & & & 0 \\ & 1\,1\cdots 1 & & \\ & & \ddots & \\ 0 & & & 1\,1\cdots 1 \end{bmatrix}$    (4)

where each row contains q² ones at the locations of the high resolution pixels that map to the corresponding low resolution pixel, and zeros elsewhere. The decimation matrix in Eq. (4) indicates that a low resolution pixel intensity Y(i, j) is obtained by averaging the intensities of the q² pixels corresponding to the same scene region in the high resolution image and adding noise intensity n(i, j).

**Figure-1: Training Set-1 through Training Set-L, each consisting of an LR training image and its HR version.**

IV. TEXTURE MODELLING

Natural images consist of smooth regions, edges and texture areas. We regularize the solution using a texture preserving prior. We capture the features of texture by applying different filters to the image and computing histograms of the filtered images. These histograms estimate the marginal distribution of the image and are used as its features. We use a filter bank that consists of two kinds of filters: Laplacian of Gaussian (LoG) filters and Gabor filters.

A. Filter Bank

Gaussian filters play an important role due to their low pass frequency property. The two dimensional Gaussian function can be defined as

$G(x, y \mid x_0, y_0, \sigma_x, \sigma_y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\!\left( -\frac{(x - x_0)^2}{2\sigma_x^2} - \frac{(y - y_0)^2}{2\sigma_y^2} \right)$    (5)

Here $(x_0, y_0)$ are location parameters and $(\sigma_x, \sigma_y)$ are scale parameters. The Laplacian of Gaussian (LoG) filter is a radially symmetric filter centered around the Gaussian, with $(x_0, y_0) = (0, 0)$ and $\sigma_x = \sigma_y = T$. Hence the LoG filter can be represented by

$F(x, y \mid 0, 0, T) = c\,(x^2 + y^2 - T^2)\, e^{-(x^2 + y^2)/T^2}$    (6)

Here c is a constant and T the scale parameter. We can choose different scales with $T = 1/\sqrt{2}, 1, 2, 3$, and so on. The Gabor filter, with sinusoidal frequency $\omega$ and amplitude modulated by the Gaussian function, can be represented by

$F_\omega(x, y) = G(x, y \mid 0, 0, \sigma_x, \sigma_y)\, e^{-i\omega x}$    (7)

A simple case of Eq. (7) with both sine and cosine components can be chosen as

$G(x, y \mid 0, 0, T, \theta) = c\, \exp\!\left( -\frac{1}{2T^2}\left[ 4(x\cos\theta + y\sin\theta)^2 + (-x\sin\theta + y\cos\theta)^2 \right] \right)$    (8)

By varying the frequency and rotating the filter in the x–y plane, we can obtain a bank of filters. We can choose different scales T = 2, 4, 6, 8 and so on. Similarly, the orientation can be varied as $\theta = 0^\circ, 30^\circ, 60^\circ, 90^\circ$ and so on.

B. Marginal Distribution Prior

As mentioned earlier, the histograms of the filtered images estimate the marginal distribution of the image. We use this marginal distribution as a prior. We obtain the close approximation $Z_C$ of the HR image using the discrete cosine transform based learning approach described in Section II, and assume that the marginal distribution of the super-resolved image should match that of $Z_C$. Let B be a bank of filters. We apply each of the filters in B to $Z_C$ and obtain filtered images $Z_C^{(\alpha)}$, where $\alpha = 1, \ldots, |B|$, and compute the histogram $H_C^{(\alpha)}$ of $Z_C^{(\alpha)}$. Similarly, we apply each of the filters in B to the HR estimate Z and obtain filtered images $Z^{(\alpha)}$, $\alpha = 1, \ldots, |B|$, and compute the histogram $H^{(\alpha)}$ of $Z^{(\alpha)}$. We define the marginal distribution prior term as

$C_H = \sum_{\alpha=1}^{|B|} \left| H_C^{(\alpha)} - H^{(\alpha)} \right|$    (9)
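The LoG filtering and histogram comparison of Eqs. (6) and (9) can be sketched as follows. This is our own illustrative code, not the authors': the kernel size, bin count, histogram range, and the zero-mean normalization of the kernel are assumptions, and only the LoG branch of the filter bank is shown.

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(T: float, size: int = 9) -> np.ndarray:
    """Discrete LoG filter of Eq. (6): c (x^2 + y^2 - T^2) exp(-(x^2+y^2)/T^2),
    shifted to zero mean so flat image regions give zero response."""
    r = size // 2
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing="ij")
    f = (x**2 + y**2 - T**2) * np.exp(-(x**2 + y**2) / T**2)
    return f - f.mean()

def marginal_prior(z, z_close, bank, bins=15, lo=-10.0, hi=10.0):
    """C_H of Eq. (9): sum over the filter bank of the absolute histogram
    differences between the filtered estimate and the filtered close
    approximation Z_C. Bin count and range are illustrative choices."""
    cost = 0.0
    for f in bank:
        h = np.histogram(convolve2d(z, f, mode="same"),
                         bins=bins, range=(lo, hi))[0]
        hc = np.histogram(convolve2d(z_close, f, mode="same"),
                          bins=bins, range=(lo, hi))[0]
        cost += np.abs(h - hc).sum()
    return cost
```

By construction the prior is zero when the estimate and the close approximation have identical filter-response histograms, and grows as their marginal distributions drift apart, which is exactly what the regularizer penalizes.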


V. SUPER-RESOLVING THE IMAGE

The final cost function, consisting of the data fitting term and the marginal distribution prior term, can be expressed as

$\hat{z} = \arg\min_z \left[ \frac{\|y - Dz\|^2}{2\sigma_n^2} + \lambda \sum_{\alpha=1}^{|B|} \left| H_C^{(\alpha)} - H^{(\alpha)} \right| \right]$    (10)

where $\lambda$ is a suitable weight for the regularization term. Since the cost function contains a non-linear term, it cannot be minimized using a simple gradient descent technique. We employ particle swarm optimization and avoid computationally complex optimization methods such as simulated annealing. Let S be the swarm. The swarm S is populated with images $Z_p$, $p = 1, \ldots, |S|$, expanded using existing interpolation techniques such as bi-cubic interpolation, Lanczos interpolation and learning based approaches. Each pixel in this swarm is a particle. The dimension of the search space for each image is D = N × N. The i-th image of the swarm can be represented by a D-dimensional vector $Z_i = (z_{i1}, z_{i2}, \ldots, z_{iD})^T$. The velocity of the particles in this image can be represented by another D-dimensional vector $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})^T$. The best previously visited position of the i-th image is denoted $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})^T$. Defining g as the index of the best particle in the swarm, the swarm is manipulated according to the following two equations [13]:

$v_{id}^{k+1} = w\, v_{id}^{k} + c_1 r_1 (p_{id}^{k} - z_{id}^{k}) + c_2 r_2 (p_{gd}^{k} - z_{id}^{k})$    (11)

$z_{id}^{k+1} = z_{id}^{k} + v_{id}^{k+1}$    (12)

where $d = 1, 2, \ldots, D$; $i = 1, 2, \ldots, |S|$; w is the inertia weight; $r_1$ and $r_2$ are random numbers uniformly distributed in [0, 1]; k is the iteration number; and $c_1$ and $c_2$ are the cognitive and social parameters, respectively. The fitness function in our case is the cost function that has to be minimized.

VI. EXPERIMENTAL RESULTS

In this section, we present the results (shown in Fig. 2, Fig. 3 and Table 1) of the proposed method for super-resolution. We compare the performance of the proposed method on the basis of the quality of the images. All the experiments were conducted on real images. Each observed image is of size 128 × 128 pixels; the super-resolved images are also of size 128 × 128. We use the quantitative measure Mean Square Error (MSE) for comparison of the results. The MSE used here is

$\mathrm{MSE} = \frac{\sum_{i,j} \left( f(i, j) - \hat{f}(i, j) \right)^2}{\sum_{i,j} |f(i, j)|^2}$

where $f(i, j)$ is the original high resolution image and $\hat{f}(i, j)$ is the estimated super-resolved image.

Figure-2: (a) HR Image, (b) Learnt Image, (c) PSO Optimized Image
Figure-3: (a) HR Image, (b) Learnt Image, (c) PSO Optimized Image

TABLE-1
Image Num    MSE between HR and Learnt images    MSE between HR and PSO images
1            0.02173679178                       0.02154759509
2            0.01117524672                       0.01107802761
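The update rules of Eqs. (11)-(12) can be sketched generically as below. This is a hedged illustration rather than the authors' implementation: the parameter values (w, c1, c2), the use of a small demo fitness, and the function name are our choices; in the paper the swarm members are full interpolated images and the fitness is the cost function of Eq. (10).

```python
import numpy as np

def pso_minimize(fitness, swarm, iters=150, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm optimization per Eqs. (11)-(12). `swarm` is a list of
    D-dimensional starting vectors (e.g. flattened interpolated images);
    returns the best position found and its fitness."""
    rng = np.random.default_rng(seed)
    z = np.array(swarm, dtype=float)           # positions, shape (|S|, D)
    v = np.zeros_like(z)                       # velocities
    p = z.copy()                               # personal best positions
    pf = np.array([fitness(x) for x in z])     # personal best fitness
    g = int(np.argmin(pf))                     # index of the global best
    for _ in range(iters):
        r1 = rng.random(z.shape)
        r2 = rng.random(z.shape)
        # Eq. (11): inertia + cognitive pull + social pull toward global best
        v = w * v + c1 * r1 * (p - z) + c2 * r2 * (p[g] - z)
        z = z + v                              # Eq. (12)
        f = np.array([fitness(x) for x in z])
        better = f < pf                        # update personal bests
        p[better], pf[better] = z[better], f[better]
        g = int(np.argmin(pf))
    return p[g], pf[g]
```

Because personal bests are only ever replaced by strictly better positions, the returned fitness never exceeds the best fitness in the initial swarm, which matches the paper's observation that the process refines the learnt close approximation.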


VII. CONCLUSION

We have presented a technique to obtain super-resolution for an image captured using a low cost camera. The high frequency content of the super-resolved image is learnt from a database of low resolution images and their high resolution versions. The suggested technique for learning the high frequency content of the super-resolved image yields a close approximation to the solution. The LR observation is represented using a linear model, and the marginal distribution is used as prior information for regularization. The cost function, consisting of a data fitting term and a marginal distribution prior term, is optimized using particle swarm optimization. The optimization process converges rapidly. It may be concluded that the proposed method yields better results in both smooth regions and texture regions and greatly reduces the optimization time.

REFERENCES

[1] R. Y. Tsai and T. S. Huang, "Multiframe image restoration and registration," Advances in Computer Vision and Image Processing, pp. 317-339, 1984.
[2] R. C. Hardie, K. J. Barnard, and E. E. Armstrong, "Joint MAP registration and high-resolution image estimation using a sequence of undersampled images," IEEE Trans. Image Process., vol. 6, no. 12, pp. 1621-1633, Dec. 1997.
[3] D. Rajan and S. Chaudhuri, "Generation of super-resolution images from blurred observations using an MRF model," J. Math. Imag. Vision, vol. 16, pp. 5-15, 2002.
[4] S. Chaudhuri, Ed., Super-Resolution Imaging, Kluwer, 2001.
[5] C. V. Jiji, M. V. Joshi, and S. Chaudhuri, "Single frame image super-resolution using learned wavelet coefficients," International Journal of Imaging Systems and Technology, vol. 14, no. 3, pp. 105-112, 2004.
[6] W. Freeman, T. Jones, and E. Pasztor, "Example-based super-resolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, 2002.
[7] F. Brandi, R. de Queiroz, and D. Mukherjee, "Super-resolution of video using key frames," IEEE International Symposium on Circuits and Systems, pp. 1608-1611, 2008.
[8] Y. W. Tai, W. S. Tong, and C. K. Tang, "Perceptually-inspired and edge-directed color image super-resolution," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1948-1955, 2006.
[9] T. Chan and J. Zhang, "An improved super-resolution with manifold learning and histogram matching," Proc. IAPR International Conference on Biometrics, pp. 756-762, 2006.
[10] T. Chan, J. Zhang, J. Pu, and H. Huang, "Neighbor embedding based super-resolution algorithm through edge detection and feature selection," Pattern Recognition Letters, vol. 30, no. 5, pp. 494-502, 2009.
[11] S. C. Zhu, Y. Wu, and D. Mumford, "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling," International Journal of Computer Vision, vol. 27, no. 2, pp. 107-126, 1998.
[12] R. R. Schultz and R. L. Stevenson, "A Bayesian approach to image expansion for improved definition," IEEE Trans. Image Process., vol. 3, no. 3, pp. 233-242, May 1994.
[13] K. Parsopoulos and M. Vrahatis, Natural Computing, Kluwer, 2002.

AUTHORS PROFILE
