Information Sciences 372 (2016) 196–207

Contents lists available at ScienceDirect

Information Sciences
journal homepage:

Single image super-resolution reconstruction based on genetic
algorithm and regularization prior model
Yangyang Li a,∗, Yang Wang a, Yaxiao Li a, Licheng Jiao a, Xiangrong Zhang a,
Rustam Stolkin b
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent
Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, Xidian University, Xi’an,
Shaanxi 710071, China
Department of Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK

a r t i c l e i n f o a b s t r a c t

Article history: Single image super-resolution (SR) reconstruction is an ill-posed inverse problem because
Received 13 April 2015 the high-resolution (HR) image, obtained from the low-resolution (LR) image, is non-
Revised 3 August 2016
unique or unstable. In this paper, single image SR reconstruction is treated as an opti-
Accepted 15 August 2016
mization problem, and a new single image SR method, based on a genetic algorithm and
Available online 16 August 2016
regularization prior model, is proposed. In the proposed method, the optimization problem
Keywords: is constructed with a regularization prior model which consists of the non-local means
Single image super-resolution (NLMs) filter, total variation (TV) and adaptive sparse domain selection (ASDS) scheme for
Genetic algorithm sparse representation. In order to avoid local optimization, we combine the genetic algo-
Regularization prior model rithm and the iterative shrinkage algorithm to deal with the regularization prior model.
Non-local means Compared with several other state-of-the-art algorithms, the proposed method demon-
strates better performances in terms of both numerical analysis and visual effect.
© 2016 Elsevier Inc. All rights reserved.

1. Introduction

Image reconstruction plays an important role in many practical applications, such as heterogeneous image transformation
[45,46], and sketch-photo synthesis [44]. As one of the basic techniques in image reconstruction, super-resolution [21] re-
construction aims to derive a high resolution image from one or more low resolution image frames. Due to the limited
capacity of imaging equipment and complex imaging environments, many situations arise where images are generated with
insufficiently high-resolution (HR). Image super-resolution (SR) reconstruction technology attempts to overcome the limita-
tions of imaging equipment and environments by recovering high frequency information which is lost during the process
of low-resolution (LR) image acquisition [21]. The problem of SR reconstruction is attracting increasing interest from the
imaging research community, and its solution offers significant potential for applications to video, remote sensing, medical
imaging, image or video forensics, and many other fields.
Image SR reconstruction methods can broadly be divided into multi-frame image SR methods [14,49] and single image
SR methods [19,23,33]. With large computational complexity and storage requirements, current multi-frame SR methods
can have difficulty satisfying the real-time requirements of e.g. video applications. In contrast, only using a single image as
the input for the reconstruction process, enables reduced computational complexity and comparatively small data storage

Corresponding author.
E-mail address:, (Y. Li).
0020-0255/© 2016 Elsevier Inc. All rights reserved.

gradient priors [41].and high-resolution image patches have the same sparse representation coefficients. In contrast to the above methods. but tend to generate HR images with fuzzy edges. Yang et al.48].30] is a population-based search method that is capable of handling complex and multimodal search spaces. Interpolation-based SR methods use the values of adjacent pixels to estimate the values of interpolated pixels. / Information Sciences 372 (2016) 196–207 197 requirements.52. GA approaches avoid local optima and offer considerable robustness for SR reconstruction problems.g. .17]. SR can also be viewed in terms of search over a complex. Based on the two cate- gories for SR reconstruction methods in [45].35. In such conditions. Section 4 presents the experimental results of comparing our method against other state-of-the-art methods. Prior knowledge methods have proved effective at suppressing noise and preserving edges. Related work In this section. we construct the regularization prior model by adding both NLMs and TV regularization into an ASDS-based sparse representation. (2) we show how SR reconstruction can be broken down into two distinct stages: first. reconstruction-based methods [5. Pioneering work by Freeman et al. and have been extended to combine these consistency constraints with predicted or expected image content [37–39]. noisy. which suggest that our method can better recover both structure and edge information.56. conventional SR methods are limited to being local in scope. which combined data clustering and PCA to learn a set of compact sub-dictionaries. the GA is used to perform a multiple-point search to overcome local optima and second.22] and total variation (TV) [29]. Dong et al. edge priors [42]. and propose a single image SR reconstruction method based on a genetic algorithm (GA) and regularization prior models. In Wang et al. the regularization prior model is then used to do a single-point search to further improve the quality of image reconstruction. However.28. Therefore. Section 3 describes the details of our proposed SR reconstruction method.18. Y. using a variety of natural images.20. Many different kinds of priors have been incorporated into reconstruction-based methods. high dimensional and multimodal surface. we briefly review the GA. This paper makes two main contributions: (1) we introduce GA into the iterative shrinkage algorithm to overcome the shortcomings of gradient-based local search algorithms. Dong et al. and little further progress results from additional iterations. single image SR reconstruction methods can be broadly categorized into three main groups: interpolation-based methods [57. and thus improve the quality of image reconstruction as compared with other methods from the literature. Chang et al. [44]. to expand the scope of the search and help avoid convergence on local optima. Therefore. [10] used a deep convolutional neural network to learn mappings between low. Section 2 reviews related work. Li et al. [15] proposed an example-based image SR method. these prior knowledge methods have so far demonstrated less success with respect to reconstructing plausible details [1] under large magnification. in this paper. Recently. Dong et al introduced autoregressive (AR) models [49] and non-local means (NLMs) to build two types of adaptive regularization (AReg) terms to improve the reconstruction quality. [6] established a learning model in- spired by locally linear embedding (LLE). we introduce genetic algorithms into the optimization process. and assumed that low.34. We present the results of experiments. The method learned two dictionaries for low. Such methods are not generally restricted by magnification. [51] trained the coupled dictionary by selecting patches based on standard deviation thresholding and employing a neural network model for fast sparse inference. A variety of learning-based methods for SR reconstruction have been proposed. e. [11] adopt the iterative shrinkage algorithm to solve the ASDS_AReg-based sparse representation. we combine genetic algorithms with ASDS_AReg based sparse representation. Alternative ideas about prior knowledge and reconstruction constraints use Markov Random Fields (MRF) to impose probabilistic constraints on pixel consistency [3. such methods tend to converge on a local optimum. both reconstruction fidelity and synthesis fidelity were optimized to reduce high losses on a face sketch-photo synthesis problem. NLMs algorithm and the ASDS scheme. Our experimental results suggest that replacing the AR with TV can improve performance. [11] proposed an adaptive sparse domain selection (ASDS) scheme for sparse representation. 2. These methods are comparatively simple and deliver real-time processing.15. Zeyde et al.and high-resolution images. GA [7. es- pecially when the reconstruction is based on blurred or noisy LR images.36.44. Additionally. by using prior information of the images to build a closed mathematical form to perform single-point search in the search space. After a sufficient number of iterations.32] and learning-based methods [10.50. Yang et al. The remainder of this paper is organized as follows. with the result that single image SR reconstruction methods are more widely used. By increasing the diversity of solutions. [50] used a sparse signal representation to reconstruct HR images. non-local means (NLMs) [4. Dong et al.27].24]. In this paper. Reconstruction-based methods [25] make use of prior knowledge of reconstruction constraints. learning-based methods [25] estimate the high-frequency details lost in an LR image by learning relationships between LR and HR image patch pairs from sample databases.and high-resolution image patches. [54] used principal component analysis (PCA) to reduce the dimension and K-SVD for dic- tionary training. steering kernel regression (SKR) [55. Section 5 summarizes the paper and provides concluding remarks.6. obtaining excellent reconstruction quality.

and are then high-pass filtered to generate an edge dataset Sh = [sh1 . proposed by Dong et al.1. . . μk ] is the matrix containing all the centroids. . which is inspired by the evolutionary mecha- nism.59]. . k = 1. . such as constrained optimization problems [40] and com- bination optimization problems [8. sh2 . it is particularly suitable for dealing with complex and non-linear optimization problems. [11] this is re-written for convenience as:  −1  Q  Q   Xˆ =  ◦ α = ˆ T Ri Ri Ri ki αi T (5) i=1 i=1 . . we select the sub-dictionary for xˆi by comparing the high-pass filtered patch xˆi of xˆi with the centroid μk :     ki = arg min c xˆi − c μk  h (3) k 2 c represents the most significant eigenvectors from the PCA transformation matrix of U.3. .2. h is a global smoothing parameter that controls the decay of the smoothing. . . The kth i sub-dictionary ki will be selected and assigned to patch xˆi . μk } are obtained by: first. learning a dictionary k for each subset Sk . k = 1. Li et al. Due to robustness against convergence on local optima. 2.198 Y. . The weight wi j denotes the similarity between the neighborhood of pixel Xi and its relevant neighborhood of pixel X j and is calculated as:     Ni − N j 2 G wi j = exp − 2 (2) h Ni and N j represent the column vectors formed by expanding the neighborhood pixels of Xi and X j in lexicographic ordering respectively. the dataset S can be clustered into K subsets Sk . . . with uk the centroid of cluster Ck .e. and adaptively assigns each local patch the best sub-dictionary that is most relevant to the local patch. Pi is the index set of pixels in its relevant neighborhood. and then computing the centroid μk h of each cluster Ck associated with Sk . 2. We learn sub- dictionaries k . . s2 . It was first proposed in 1975 [20]. and G is a kernel matrix that assigns a larger weight to the pixels closed to the target pixel. and assume that a pixel can be approx- imated by a weighted combination of the pixels in its relevant neighborhood. . U = [μ1 . It simulates the Darwinian evolution principle of “survival of the fittest” and generates improved approximate solutions or the optimal solution through an iterative procedure of generational evolution. . shM . ]. learns a series of compact sub-dictionaries from example image patches. K. . K. Non-local means NLMs [28] exploit the fact that many repetitive patterns exist in natural images. 2. We partition Sh into K clusters {C1 . Next. / Information Sciences 372 (2016) 196–207 2. . Correspondingly.:  wi j X j j∈Pi Xˆ i =  (1) wi j j∈Pi where Xˆ i is the estimate of Xi . [11]. Learning sub-dictionaries M image patches S = [s1 . 2.2. from subsets Sk using principal component analysis (PCA) [58. . . 2. Adaptive selection of the sub-dictionary K pairs {k . 2.3. . ASDS proceeds according to two steps: learning of sub-dictionaries and adaptive selection of the sub-dictionary. . sM ] containing edge structures are selected. which are explained in the following two subsections. Genetic algorithm GA is a kind of numerical optimization method based on random search. . which denotes the ith pixel in X. .1. C2 .12]. The GA begins with a population initialized randomly over the search space of the optimization problem. CK } using the K-means algorithm. Adaptive sparse domain selection The ASDS scheme. i.3. written as [13]:  −1  Q  Q   Xˆ = Ri ki αi T T Ri Ri (4) i=1 i=1 In Dong et al. μ2 . The whole image X can then be reconstructed by making use of information from all of the patches xˆi .

Due to blurring. Y. To make the problem well-posed.(t+1 j ) = soft(αi. j is the coefficient associated with the jth atom of ki and λi. j αi.τi. which can be calculated by first reconstructing each image patch with Xˆ i = ki and then averaging all the reconstructed image patches. All tests are carried out using only the luminance component of color . the second term is the non-local similarity regularization term. t (f) Update the matrices W using the improved estimate Xˆ . • Upscaling factor s. j ). (b) Compute α (t+1/2) = [Tk1 R1 X (t+1/2) . Therefore. Experimental results and analysis To validate the effectiveness of the proposed method. Q is the number of patches and q is the number of pixels of each patch. D is a down-sampling operator.τi.μ and the maximum iteration number. denoted by Max_Iter. H is a blurring operator.P) = 0. the process of HR image reconstruction is an ill-posed inverse problem [2]. where soft (·. (d) Set t = 0. Objective: Estimate HR image Xˆ Inputs: • LR image Y. I is the identity matrix. (t+1 ) (d) Compute Xˆ =  ◦ αi(t+1) using Eq. Output: the final HR image Xˆ .TkN RN X (t+1/2) ]. We use the iterative shrinkage algorithm [9] to solve Eq. 3.47] are used as objective quality measures of the SR reconstruction results. including SC-based [50]. j . SPM [31]. down sampling and additive noise. The detailed implementation steps are shown in Table 2.· · · .(8). ASDS [11]. Proposed SR reconstruction method For single image SR reconstruction problems. and the first l2 -norm term is the fidelity term. and obtain initial HR estimation X 0 . we select the sub-dictionary ki and calculate W for the non-local weights. Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) [16. j . αi. (2) Iteration on t until t ≥ Max_Iter is satisfied. (t+1 ) (e) If mod (k. and V is additive noise. we use genetic algorithm to optimize HR estimation Xˆ and obtain an improved solution X (t+1) . (7) . αi denotes the sparse repre- sentation coefficients of the patch xˆi . and Q is the number of patches. α is the concatenation of all αi . (7) builds a closed mathematical form to perform single-point search in the search space by using an iterative shrinkage algorithm. we have conducted empirical experiments on a variety of natural images. / Information Sciences 372 (2016) 196–207 199 Table 1 Proposed image SR reconstruction algorithm. (c) αi. where  is the concatenation of all sub-dictionaries {k }. the process of obtaining a HR image from a LR image can be generally modeled as: Y = DHX + v (6) where Y is the LR image. In order to avoid local optima. (b) With the initial estimate X 0 . where N is the total number of image patches. The detailed implementation of the SR reconstruction algorithm is presented in Tables 1 and 2. ANR [43]. we incorporate the GA [26] into the iterative shrinkage algorithm to solve the regularization prior model.(t+1 j /2 ) . Eq. using bi-cubic interpolation. 4. Steps: (1) Initialization: (a) Upscale Y with. X is the unknown HR image to be estimated. j is the weight assigned to αi. and μ is the trade-off parameter to balance the non-local similarity regularization term. We compare our proposed method against a variety of state-of-the-art methods. (7). j (7) α i=1 j=1  −1  Q  Q   Xˆ =  ◦ αˆ = ˆ T Ri Ri Ri ki αˆ i T (8) i=1 i=1 In Eq. The third l1 -norm term is the total variation regularization term and λ is the trade-off parameter to balance the total variation regularization term. [11]. the HR image obtained from a LR image is non-unique. AULF [53]. j ) is a soft thresholding function with threshold τi. The details of this computation process can be found in Dong et al.where U = (DH)T DH and V = μ(I-W)T (I-W) +λI. (a) X (t+1/2) = X t +(Uy-UX t -VX t ). we incorporate both the non-local similarity regularization and TV regularization meth- ods into an ASDS-AReg sparse representation:  Q  q αˆ = arg min y − DH  ◦ α +μ · (I − W ) ◦ α +λ | ◦ α|1 + 2 2 2 2 γi.λ. (c) Preset γ . Li et al. Ri is an operator that obtains the image patch xˆi from xˆ . by a factor of s.

785 Introduced GA 28. Among the 2 ∗ N individuals. where  is a ⎡ value and P (x ) can be calculated as: floating 0. N new individuals will be generated and combined with the parental N individuals.v (the element in ith row and jth column of Xˆ ) with x˜u. f ≥ favg p p max avg If one random number r1 ∈ [0. Four iterations are alternately performed during the aforementioned two stages. favg is the average fitness value of the population.800 0. Experimental settings In our experiments.787 images. x > 255 x. else (t+1 ) After N-1 new matrixes are generated. is random value in [−8. f is the fitness value of the individual. For the calculation of NLMs weight W. fmax is the maximum fitness value of the population.39 27. f < favg pc = ( p − p )( f − f ) pc1 − c1 f c2 − f avg . 2. In the second stage.55 0. (a) Crossover: The crossover probability pc of each individual is calculated as: pc1 . N. (2) Iteration: Perform the following steps K times. pc2 = 0. / Information Sciences 372 (2016) 196–207 Table 2. .v +}. (b) Mutation: The mutation probability pm of each new individual is calculated as:  pm 1 . In the first stage. (c) Set k = 0.798 0. pm1 − ( m1 −f m2−)(ffmax − f ) . . In above equations.55 24.21 27.200 Y.6 and 0. we compare the SR performance on five test images. the initial population Xˆ consists of the N-1 new matrices and Xˆ .20 24. 8].03 for the noiseless images and 0.9. respectively. 1] is smaller than pm . the maximum iteration time for gradient descent is set at 200 iterations. pc1 = 0. (t+1 ) Input: Xˆ Output : X (t+1) Steps: (1) Initialization: (t+1 ) (a) One new matrix can be constructed by replacing every xu.35 31.75 24. the number of iterations is 10. x < 0 P (x ) = ⎣255. and used to generate two new solutions as:  k+1 xˆ1 = α xˆ1 +(1 − α ) xˆ2 k k k+1 xˆ2 = α xˆ2 +(1 − α ) xˆ1 k k where α is a random number in [0.1. Performance comparison between the algorithms with and without GA. because human vision is most sensitive to the luminance component. we set the smoothing parameter h to 10.914 0.97 31.04 and 0. 4. For the noisy images.874 0. Neighbors are considered to be those lying within a 13 × 13neighborhood. Table 3. pm1 = 0.6 and decimated by a factor of 3.923 0.45 for the noisy images.39 0. two individuals xˆ1 and xˆ2 should be selected using roulette selection.v = P {xu. the best N individuals will be selected as the parental individuals in the next generation.917 0.i = 1. .001. 4. Li et al. PSNR and SSIM are tabulated in Table 3 and the results suggest that the addition of the GA can improve performance. population size N = 30. the regularization parametersμand λare set to 0. the Gaussian white noise with standard deviation of 5 is added to the LR images.878 0. . and the bi-cubic interpolator is applied to the chromatic components.1. the input LR images are generated from the original HR image by a7 × 7Gaussian kernel of standard deviation 1.65 24. pm2 = 0. f < f a v g pm = . 2 (b) Fitness function: Fi = 1/(Ei +ε ). Details of implementation of genetic algorithm.922 0. 1].Ei = Y − DH Xˆ i 2 . the individual should be modified by performing the following operation: k+1 k k Xˆ = Xˆ +β DT H T (Y − DH Xˆ ) (c) Selection: With crossover and mutation. Effectiveness of introducing GA To validate the effectiveness of introducing the GA to the iterative algorithm.6. . f ≥ favg max avg k k Based on the crossover probabilities. and the patch size to7 × 7.2. Image Butterfly Hat Leaves Bike Comic Without GA 27.Xˆ i is the ith of Xˆ .

891 0. Let the super-resolved image size be nm.74 25.830 0.880 4.47 33.909 0. the proposed algorithm is slightly inferior to the results of AULF.872 0.68 31.26 33.862 0. respectively.923 0. However.6.63 31.741 0. Experiments on images with artificially added noise In this subsection.91 26.865 0. we conduct experiments on 8 images to evaluate SR performance in comparison with five other state- of-the-art methods.3.884 0.84 30. The fitness values of the individuals cost approximately O(t1 Nm2 n2 ) and the crossover and mutation cost approximately O(t1 mn).871 0. 4. the computational cost is re- lated to two factors: iteration times t1 and the population size N. [11] and the remaining four images are chosen from the BSDS500 image database as shown in Fig.16 31.867 0.63 30. The ASDS method can preserve edges and suppress noise well. sharper edges and more faithful detail. Y.17 0. This may be due to the fact that we use an ASDS-AReg based sparse representation that learns sub-dictionaries from external images to “hallucinate” details. we show 3×magnification results of Butterfly and Leaves in Figs.09 30. We can see that.913 0.895 0.35 30.51 0.62 30.76 29.55 0.75 0.18 31. This performance can be readily observed in the resulting image “Leaves”. We refer to the images in Table 4 by its position in the raster scanning order. PSNR (dB) and SSIM results of eight test images for 3×magnification (σn = 0). where the edges gener- ated by our proposed method are clearer and sharper. To assess the visual quality.09 30.50 28.879 Leaves 24.853 0. the proposed method is superior to other SR algorithms in terms of the averaged results of PSNR and SSIM over all eight images.4. Because SPM and ANR are known to perform on images without artificially added noise [31. Li et al.07 25.910 0. we carry out SR experiments on noisy images to explore the extent to which our proposed method is robust against noise.33 0.84 24.814 Woman 30.889 0.806 0.83 32.922 Hat 30. Our proposed method can produce less artifacts. so that the total cost of the first stage is approxi- mately O(t1 (Nm2 n2 +mn)). 1.72 33.922 0.74 25.855 0.33 0. Experiments on images without artificially added noise In this subsection.893 0.62 25. and then contaminating the resulting image by Gaussian white noise with standard deviation 5 [11].07 30. the proposed SR method incurs major costs in two main parts: the GA method and the regularization prior model. the main costs are the computation of the NLM weight matrix and iteratively . then dec- imating by a factor of 3. The first four images are the same as those used in Dong et al. In the first stage. / Information Sciences 372 (2016) 196–207 201 Fig. we do not show comparative evaluation results for these two methods in our numerical results (Table 5). Test images used in Table 4.925 0. Discussion of computational cost As illustrated in Tables 1 and 2. 4.96 31.885 0.57 25.851 0.745 0.922 Elephants 30.34 27.83 0.887 0.896 0.00 27.873 0. 1.91 30.21 0. In the second stage.904 0.854 0.93 31.94 28.710 0.61 31.905 0.854 0.50 27.927 Red flower 30.78 27. Test image SC [50] SPM [31] ANR [43] ASDS [11] AULF [53] Proposed method Butterfly 25.43].15 26.893 Building 24.803 0.869 0. The proposed method performs better than all the comparison methods in terms of the averaged results of PSNR and SSIM.905 0. Table 4. 2 and 3.717 0. for two images out of eight.72 31.35 0.913 0.897 0.880 0.83 32. Table 5 presents the objective evaluation results of PSNR and SSIM values. The objective evaluation results of PSNR and SSIM are presented in Table 4.88 25.798 0.5.786 0.51 31.811 0.923 Plants 31.94 32.72 31.99 26.72 30.94 31. under noisy conditions.846 0. The noisy LR images are generated from the original HR image by applying a 7 × 7Gaussian kernel of standard deviation 1.912 0.754 0.54 29.15 30.763 Average 28.

202 Y. (e) Result of AULF. (b) Result of SPM. (f) Result of proposed method. (a) Result of SC-based method. (e) Result of AULF. (a) (b) (c) (d) (e) (f) (g) Fig. (f) Result of proposed method. The area in green rectangle is the 2×local magnification of red rectangle in each example. (c) Result of ASDS. (c) Result of ASDS. 2. Li et al. (g) The original HR image. (a) Result of SC-based method. (c) Result of ANR. (g) The original HR image. / Information Sciences 372 (2016) 196–207 (a) (b) (c) (d) (e) (f) (g) Fig. (b) Result of SPM. The area in green rectangle is the 2×local magnification of red rectangle in each example. . 3. Super-resolved images (3×) of Butterfly with different methods. (c) Result of ANR. Super-resolved images (3×) of Leaves with different methods.

75 – – 29.77 – – 29.841 0. updating the HR image.70 30. . Li et al. the size of local analysis window d. The proposed SR method costs less than AULF. it can be seen that the proposed method converges more quickly compared with conventional ASDS.08 26.819 0. Y.27 – – 30.877 0. Test image SC [50] SPM [31] ANR [43] ASDS [11] AULF [53] Proposed method Butterfly 25.77 0.52 – – 25.6. Comparisons of CPU time between the ASDS method.875 Hat 29.814 0.821 0.706 0.747 0. Hence.833 0.15 0.874 0.02 0.819 0. From Fig. the AULF method and the proposed method. 4 shows the CPU time spent on all the test images.828 0.73 – – 24.861 0.27 30.10(R2010a) on a Windows Server with 2.4 GHz.838 Elephants 30.716 Average 28.76 0.871 0.29 28.62 29. Fig. Moreover. The computation of the NLM weight matrix is related to three factors: the searching radius r.836 Building 24. it takes overall O(mnd2 r2 ) to evaluate the NLM weight matrix for all the pixels.814 0.77 30.15 29. the proposed method consistently performs better than ASDS. Because the GA is combined with the ASDS-AReg.33 0.869 0.31 – – 31.88 27. / Information Sciences 372 (2016) 196–207 203 Table 5.883 Plants 31.859 0.49 29. 4.755 0.82 28. PSNR (dB) and SSIM results of eight test images for 3×magnification (σn = 5). 4.679 0.70 25. and the super-resolved image size nm.49 0.05 26.754 Woman 29.813 Leaves 24.50 26. using MATLAB 7.836 0.57 25.695 0. The gradient decent is t2 iterations and the second stage cost is approximately O(t2 (Qq2 +4m2 n2 +6mn)+mnd2 r2 ).749 0.865 0.12 – – 28.87 0.96 0.840 0.878 Red flower 29.844 0.41 30. 5. and only slightly more than ASDS.824 1200 ASDS The method[45] 1000 Proposed 800 CPU Times(seconds) 600 400 200 0 Butterfly Hat Leaves Plants Elephant Woman Red flow er Building Fig.04 30. the iteration times in the second stage are reduced.61 31. while still producing higher quality SR image recovery. Discussion of computation time and the convergence rate We have compared the proposed method with ASDS and AULF in terms of CPU time.65 – – 29.00GB of RAM and two i3 Intel cores of 2.02 – – 26.10 31.846 0.816 0.854 0.54 0.

5 21 28 20 27.5 ASDS ASDS 26 Proposed 31 Proposed 30. other effective priors. based on sparse representation. Boccacci.5 25 30 24 PNSR PNSR 29. In future work. Baker. (d) PNSR changes of Hat. we approach single image SR reconstruction as an optimization problem. such as SKR and edge priors. (c) PNSR changes of Leaves.5 22 29 28. Li et al.5 25 PNSR PNSR 31 24 30. . and propose a new single image SR framework which combines the GA and a regularization prior model. Intell. / Information Sciences 372 (2016) 196–207 Plants Butterfly 33. Introduction to Inverse Problems in Imaging. 61272279. [2] M. NCET-12-0920). both with and without added noise. empirical experiments on a number of images. we use a regularization prior model to perform a single-point search in the solution space. suggest that the proposed method performs competitively in comparison to other state-of-the-art methods. Limits on super-resolution and how to break them. the National Basic Research Program (973 Program) of China (No. Pattern Anal. U.5 26 32 31. Acknowledgment This work was supported by the National Natural Science Foundation of China (Nos.5 0 20 40 60 80 100 120 140 160 180 200 0 20 40 60 80 100 120 140 160 180 200 Interation Times Interation Times (c) (d) Fig. First. IOP. IRT_15R53). 1998. could be introduced into the ASDS-AReg to improve the proposed method.5 21 0 20 40 60 80 100 120 140 160 180 200 0 20 40 60 80 100 120 140 160 180 200 Interation Times Iteration Times (a) (b) Leaves 27 Hat 31. B07048). (b) PNSR changes of Butterfly.5 30 23 29. 24 (9) (2002) 1167–1183. Bristol. the Program for New Century Excellent Talents in University (No. Second. (a)PNSR changes of Plants.K. 5. References [1] S. Comparisons of convergence rate between ASDS method and proposed method..5 28 ASDS ASDS 33 Proposed 27 Proposed 32. for avoiding local minima. IEEE Trans. and obtain higher quality SR estimation. Conclusion In this paper. P.5 23 29 22 28. 5. Finally. the Program for Cheung Kong Scholars and Innovative Research Team in University (No. Bertero. 61371201. and the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) (No. and 61203303). Mach. 61272282. we use a GA to search the solution space and obtain an improved HR image. T. drawn from two different public benchmark data sets.204 Y. 2013CB329402). Kanade.

X. in: Proceedings of IEEE Conference on Computer Vision. Sheikh. M. Image Vision Comput. Kang. [17] S. [18] D. Marquina. Neural Netw. Multiscale Model Simul. in: Proceedings of IEEE International Conference on Computer Vision. pp. 13 (10) (2004) 1327–1344.H. 2013. et al. Lett. P. M. Peleg. Am. K. [25] X. Sci.. V. W. 1996. Wang. Shi. Sci. [30] Z. 2009. J. [24] X. C.. X. Anchored neighborhood regression for fast example-based super-resolution. Partially supervised neighbor embedding for example-based image super-resolution. A review of image denoising algorithms. J. [22] S. Dual-sparsity regularized sparse representation for single image super-resolution. Sci.S. Sapiro. 15 (8) (2006) 2226–2238. Int. Defriese. 2009. Inf. 20 0 0. 20 (7) (2011) 1838–1857. Trivedi. 295 (1) (2015) 395–406. IEEE Trans. M. 305 (2015) 320–348. [36] H. Gao. Wang. A. and the Bayesian restoration of images. Michalewicz. Zhang. Greig. Elad. J. IEEE Trans. Wu. [52] S. 2400–2407. Takeda.. Asymmetric kernel regression. Sci. Single image super-resolution with non-local means and steering kernel regression. Dong. Li et al. [43] R. Chang. in: Proceedings of Data Compression Conference. et al. G. Inf. Li. [34] M. C. Image Process. IEEE Trans. W. An EM/E-MRF strategy for underwater navigation. [47] Z. L. Deblurring and denoising of images by nonlocal functionals. Parada. Hwang. [20] J. in: Proceedings of IEEE Conference on Computer Vision Pattern Recognition. 25 (4) (2014) 780–792. Model-guided adaptive recovery of compressive sensing. An edge-guided image interpolation algorithm via directional filtering and data fusion. T. Li. Learn. Simul. Pure Appl. Image Process. Y. Ma.W. Image quality assessment: from error visibility to structural similarity. and Machine Learning. A real-coded genetic algorithm with a direction-based crossover operator. Yeung. [15] W. Syst. pp. Zhang. Millanfar. High-resolution image recovery from image-plane arrays. M. Image Process. IEEE Trans. Yang. Cohen. IEEE Trans. Soc. IEEE Trans. [13] M. Single image super-resolution via directional group sparsity and directional features. IEEE Trans. 275–282. X. H. J. [7] Y. L. A fuzzy-rule-based appoach for single frame super resolution. A unified learning framework for single image Super-resolution. Yang. H. J. 6 (6) (1984) 721–741. IEEE Trans. IEEE Trans. IEEE Trans. Opt. 2008. New edge-directed interpolation. Generalizing the non-local means to super-resolution reconstruction. S. Garibaldi. IEEE J. Buades. D. Comput. [11] W. 23 (5) (2014) 2277–2290. He.C. E. Drake.M. J. Sel. R. Mach. Zisserman. [49] X. Image Gr. Optimization. Wen. 6 (11) (1989) 1715–1726. Zhou. 2004.P.. 37 (3) (2008) 367–382. Sun. [12] J. A Comprehensive survey to face hallucination. SpringerVerlag. A. E. 24 (9) (2013) 1364–1376. An EM/E-MRF algorithm for adaptive model based tracking in extremely poor visibility. Candocia. 2010. Ann Arbor. B. A rapid learning algorithm for vehicle classification. IEEE Trans. IEEE Trans. 123–132. Z. Y. P. Kindermann. Image Process. Brown. Pattern Anal. Vis. Image super-resolution using gradient profile prior. 24 (9) (2015) 2874–2888. 26 (4) (2008) 480–495. Protter. Gao. [27] M. H. [50] J. Wang. Neural Netw. Neural Netw. Osher. Image Process. Stolkin. Automatic combination of operators in a genetic algorithm to solve the traveling salesman problem. Li. W. Wu. Elad.. Wang. 10 (2) (1999) 381–390. Fast and robust multi-frame super resolution. L. Pattern Recongnit. Tao. X. J. using convex projection. Soc. Y. [54] R. [4] A. Gao. Lin. Tsai.P. Sci. A multi-cycled sequential memetic computing approach for constrained optimisation. Tao. Geman. in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. S. 18 (1) (2009) 36–51. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Non-local sparse models for image restoration. pp. S. Stolkin. B. L. [6] H. X. Bach. P.T. X. Example-based super-resolution. [5] F. Learn. 21 (11) (2012) 4544–4556. M.K. [16] X. [44] N. X. 1 (2) (1984) 317–339. Gao. 10 (10) (2001) 1521–1527.D. [55] K. Li. Elad. 298 (C) (2015) 257–273. Learn.T.. D. Vis. Elad. Coll. Ding. Mairal. Combining sparse representation and local rank constraint for single image super resolution. Image Process. Syst. Y. De Smet. C. J. Farsiu. Greig. Single image super-resolution using locally adaptive multiple linear regression. in: Proceedings of IEEE Conference on Computer Vision Pattern Recognition. Sci. Pattern Anal. Hu. On the statistical-analysis of dirty pictures. Li. D. Y. Z.J. Wright. Image Process. Stochastic relaxation. et al. G. pp. Timofte. 15 (12) (2006) 3736–3745. Intell. B-48 (5–6) (1986) 259–302. Jul. 15 (2) (2004) 276–282. Ramteke. Addison-Wesley. Gr. Orchard. W. J. Gao. X. S. Image super-resolution via sparse representation. Hodgetts. 1975. Geman. Burke. 4 (2) (2005) 490–530. [41] J. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. X.C. Fang. Image Process. [26] J.J. pp. D. Image Process.E. C. Li. Commun. 711–730. Egon. [19] W. Li. W. M. 106 (1) (2014) 9–30. Iiguni. J. IEEE Trans. [14] S. Holland. Yu. Gao. J. Comput. Evol. Robinson. Extended Markov Random Fields for predictive image segmentation. Li. A. X. M. Opt. M. Huang. [37] R. Yu. Comput. Van Gool. IEEE Trans. [8] C. 325 (2015) 429–454. Intell. 9 (1) (2004) 62–68. IEEE Trans. M. Chanda.C. Gong. Stat. Protter. Bovik. Image Process. Image super-resolution reconstruction based on parallel genetic algorithm. C. Tai. Lu. Osher. H. E. Transductive face sketch-photo synthesis. [45] N. Tieu. Goldberg. New York. 1989. Liu. Hodgetts. Liu. [39] R. [56] K. Tao. A statistical prediction model based on sparse representations for single image super-resolution. [28] J. Huang. Image Process. Math. R. Image Process. [35] T. Oskoui. / Information Sciences 372 (2016) 196–207 205 [3] J. Li. P. Plos One 10 (9) (2015) e0137724. Ko. Li. 23 (6) (2014) 2569–2582. Zeyde. Gong. 2272–2279. R. F. 21 (8) (2012) 3467–3478. Thouis. Neural Netw. Principe. T. Liu. 4 (4) (2005) 1091–1115.K. Soc. Ponce. New York. Zhang. Coupled dictionary training for image super-resolution. [38] R. Image super-resolution using deep convolutional networks.Y. Freeman. Lin. R.R. On single image scale-up using sparse-representations. Super-resolution through neighbor embedding. Stark. [31] T. Zhang. Genetic Algorithms + Data Structures = Evolution Programs. [51] J. Y. [53] J. Sci. De Mol. X. Comput. A 32 (12) (2015) 2264–2275. Gao. 34 (4) (2013) 77–84. Tao. Syst. V. [33] P. Zhang. J. et al. 2010.R. [57] L. P. Shum. D. Inf. [48] X. D. University of Michigan Press. IEEE Trans. Huang. M. Gibbs distributions. [42] Y. [46] N. Ghune. IEEE Trans. Image Process. Super resolution using edge prior and single image detail synthesis. L. Dong. Neural Netw. [29] A. [10] C. Wang. Image Process. [32] M. Adv. [40] J.T. Maeda. Chen. Adaptation in Natural and Artificial Systems. D. Multi-frame image restoration and registration. Morel. Tao. pp. X. in: Curves and Surfaces. 1–8.-Y. et al. IEEE Trans. Sun. Super-resolution of images based on local correlations. Milanfar. Zhang. 2007. Shao. Stolkin. [9] I. in: Proceedings of Bmvc. Aharon. Özcan. Elad. S. Jones. Tao. X. Li. Am. Genetic Algorithms in Search. Image interpolation for progressive transmission by using radial basis function networks. 19 (11) (2010) 2861–2873. [21] T. X. 57 (2004) 1413–1457.Z. Daubechies.C. IEEE Trans. S. Li. Inf. et al. Contreras-Bolton. Pal. D. Purkait. 340–341 (2016) 175–190. pp. Appl. X. 5 (2) (2011) 230–239. 325 (2015) 1–19. Inf. Xu. Chuang.M. D. A case study of controlling crossover in a selection hyper-heuristic framework using the multidimensional knapsack problem. Learn. Image denoising via sparse and redundant representations over learned dictionaries. J. Image quality assessment based on multiscale geometric analysis. Multiscale Model. Mackenzie. Simulated binary jumping gene: a step towards enhancing the performance of real-coded genetic algorithm. A. A. M. Simoncelli. Li. 1920–1927. Tao. J. Loy. in: Proceedings of the 6th International Conference on Advances in Pattern Recognition. N. J. X. Xiong. IEEE Trans. Syst. Besag. Sigitani. 10 (2) (1999) 372–380. 22 (2) (2002) 56–65. He. [23] J. Xue. 24 (1) (2016) 113–141. Mach.S. Image Process 13 (4) (2004) 600–612. . Wang. N. Y. Topic Signal Process. Wu.. Wang. IEEE Trans. with a new one. Heterogeneous image transformation. V. Image super-resolution by TV regularization and Bregman iteration. M. 38 (2) (2014) 1. H. Inf. 18 (7) (2009) 1409–1423. IEEE Comput.M.

X. Shi. G. Zhang. D. Wu. 43 (2010) 1531–1549.206 Y. Zhang. Lukac. Pattern Recognit. 18 (4) (2009) 797–812. Dong. D. . IEEE Trans. R. Zhang. Image Process. Li et al. / Information Sciences 372 (2016) 196–207 [58] L. [59] L. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras. W. Two-stage image denoising by principal component analysis with local pixel grouping. Zhang.

Xi’an. in 2007. Y. and the Ph. . China.D. and M. China. Li et al. in 2001 and 2004.S. artificial immune systems. / Information Sciences 372 (2016) 196–207 207 Yangyang Li received the B. and data mining. degree in Pattern Recognition and Intelligent System from Xidian University. Xi’an. respectively. degrees in Computer Science and Technology from Xidian University. Her current research interests include quantum-inspired evolutionary computation. She is currently a professor in the school of Electronic Engineering at Xidian University.S.