Sudha Velusamy, Ajit Bopardikar, Radhika R, Amit Prabhudesai and Basavaraja SV 
Samsung India Software Operations, Bangalore, India.
Email: {sudha.v, ajit, radhikar, mr.amitp}@samsung.com, svbasavaraj@gmail.com
ABSTRACT

Image and video capture devices are often designed with different capabilities and end-users in mind. As a result, very often images are captured at one resolution and need to be displayed at another resolution. Resizing thus continues to be an important research area in today's milieu. Well-known pixel-based methods (like bicubic interpolation) have traditionally been used for the purpose. However, these methods are often found lacking in quality from a perceptual standpoint, as they tend to soften edges and blur details. Recently, vectorization-based approaches have been explored because of their inherent property of artifact-free scaling. However, these methods fail to faithfully reproduce textures and fine details in an image. In this work, we present a novel, layered approach of combining image vectorization with a pixel-based interpolation technique to achieve high-quality scaling of digital images. The proposed method decomposes an image into two layers: a 'coarse' layer capturing the visually important structure of the image, and a 'fine' layer containing the details. We vectorize only the coarse layer and render it at the desired scale, while using classical interpolation on the fine layer. The final scaled image is composed by blending the independently scaled layers. We compare the performance of the proposed method with several state-of-the-art vectorization and scaling methods, and report comparably better performance.
Index Terms: Vector Graphics, Vectorization, Photo-Realistic Rendering, Image Resizing, Decomposition.
1. INTRODUCTION

Television screens, especially high-definition (HD) screens, are being increasingly used to view not just broadcast content, but also other content like personal multimedia collections captured using digital cameras and mobile phones. Often the resolution of the multimedia content does not match the resolution of the display screen. Devising scaling methods that bridge the difference in resolution between the content and the displays has long been an active research area. The classical pixel scaling techniques, such as bicubic interpolation, are fast and efficient, but yield poor visual quality. In recent times several edge-based techniques have been proposed. When multiple low-resolution images of a scene are available, super-resolution algorithms hold the promise of better image quality. However, their high computational complexity has been an impediment to their implementation in real-time and embedded systems.

Recently, vector graphics has been an area of active research because it promises arbitrary scaling of the given image while allowing for easy editing and compact representation. Vector graphics already find use in representing fonts to enable artifact-free scaling of text. It is intuitive, then, to extend this approach to image scaling. This is, however, not a trivial task due to the difficulties involved in vectorizing (bitmap-to-vector conversion) natural images. In this work, we present a vectorization-based layered image scaling technique. The present approach is motivated by the fact that the human visual system resolves a scene in hierarchical layers, starting from coarse structures down to fine details. Most scaling techniques are inefficient at simultaneously scaling edge-like structures and fine details. In the present work, we decompose an image into coarse and fine layers, and apply scaling techniques that are appropriate for each layer. Finally, the scaled layers are blended together to get the final output. In the next section, we review some of the related work in the area of vector- and pixel-based image scaling.

* The author is currently with Nokia Pvt Ltd, India.
2. RELATED WORK

The problem of vectorization, or raster-to-vector conversion, has been an actively studied problem in the computer graphics community. Researchers have addressed the problem of vectorizing line drawings [1] or synthetic images like cartoons [2, 3], as well as the more challenging problem of vectorizing natural color images [4, 5, 6]. These approaches can be grouped into two main categories based on the type of images they handle. The first category deals with mostly synthetic images that have a limited color palette, smooth color fills with well-defined borders and no (or relatively simple) texture. The approaches of Zhang et al. [2] and Koloros [3] fall in this category. These approaches cannot handle real-life images that have many colors, smooth gradients and complex textures. The second category of methods attempts to generate a photo-realistic reconstruction of such images. Battiato et al. [4] present a region-based approach that generates an over-segmented image using the watershed algorithm and fits polygons to the resulting region boundaries. The information is represented in the Scalable Vector Graphics (SVG) format, which allows a very compact encoding of the vector data. A related approach [7], used in the RaveGrid software [8], employs a Delaunay triangulation that fits triangles to image regions. The limitation of this method (and of [4]) is that a large amount of vector data (triangles or polygons) is required for a faithful reconstruction of the original image. Mesh-based techniques have also been proposed that use regular or irregular meshes to represent the image data. Price et al. [5] present a method for easy image editing based on mesh fitting. A similar method based on gradient meshes is presented by Sun et al. [6]. Though these methods provide sufficiently good rendering of most natural images, rendering fine textures (like hair, or fur) remains a challenge.

Vector-based approaches have primarily been used to design tools for professional illustrators, allowing easy creation and/or editing of image content. Text fonts are often represented in vector form due to the advantages of vector scaling of the data they represent. It is then intuitive to extend this philosophy to images. Application of vectorization techniques to the image scaling problem, however, is far from trivial. Simple images are relatively easy to vectorize faithfully. Real-life images are significantly more complex, and photo-realistic reconstruction of such images using vectorization methods remains a challenge.

Image scaling is an area that has received considerable attention in the past several years. So-called super-resolution techniques [9, 10] have been proposed that use several low-resolution images of a scene to reconstruct a high-resolution image of the same scene. Single-image interpolation techniques have also been proposed; most notably a number of edge-directed interpolation (EDI) methods (see [11] for an overview) that use local statistical and geometrical properties to interpolate the missing pixel values. These methods are proven to be superior to conventional scaling methods like bicubic interpolation, as they preserve the sharpness and continuity of the interpolated edges. The primary disadvantage of most of these methods is the computational cost, which makes real-time implementation difficult, especially on embedded hardware platforms. Our research was guided by a quest for an efficient image scaling solution with performance comparable to the best existing techniques.

The main contribution of this work is to propose a novel application of vectorization to the image scaling problem. Our approach is motivated by studies on the human visual system [12]. We propose a layered approach wherein the image to be scaled is decomposed into two layers, and each layer is processed separately. Our approach avoids the pitfalls of vectorizing natural images by processing only the most perceptually significant information in the image using vectorization techniques. We note that Saito et al. [13] adopt a similar (layered) approach. However, our main contribution is the use of vectorization techniques on top of such a layered representation.

978-1-4244-7493-6/10/$26.00 © 2010 IEEE    ICME 2010
3. PROPOSED METHOD

The proposed method is based on decomposing the input image into two layers. We assume an additive model for decomposing an image I into a coarse layer I_c and a fine layer I_f, such that I = I_c + I_f. The proposed method is based on the premise that rescaling each layer separately, using methods that are most appropriate to that layer, followed by blending of the scaled layers, should yield improved quality over conventional methods.

Fig. 1. The proposed system.
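As a concrete illustration of the additive model, the sketch below decomposes an image into a coarse layer and its complementary fine layer. A simple box blur stands in for the paper's detail-wiping filter, and all names here are our own; the point is that the decomposition is lossless by construction, whatever smoothing is used.

```python
import numpy as np

def box_blur(img, radius=1):
    """Crude detail-wiping stand-in: mean filter with edge padding."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size**2

def decompose(img):
    """Split img into a coarse layer and a complementary fine layer.

    The additive model guarantees exact reconstruction:
    img == coarse + fine, regardless of how coarse was obtained.
    """
    coarse = box_blur(img)
    fine = img - coarse
    return coarse, fine

img = np.random.default_rng(0).random((16, 16))
coarse, fine = decompose(img)
assert np.allclose(coarse + fine, img)  # lossless by construction
```

Because the fine layer is defined as the residual, no information is ever lost in the split; only the quality of the two rescaling paths matters.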
3.1. System Overview
Figure 1 gives a pictorial representation of the proposed system. The 'coarse' layer is extracted from the input image by passing it through the detail-wiping module. This layer is then converted to vector form and rendered at the original scale. The 'fine' or texture layer is generated by computing a pixel-wise difference between the original image and the rendered coarse layer. For scaling the image, the vector representation of the coarse layer is rendered at the desired magnification by the rendering engine. The texture layer is then scaled separately by the same magnification factor. Finally, the scaled layers are blended to yield the rescaled image. The blending process includes the application of a post-processing step on the rendered coarse layer; this is done to ensure a more natural and visually pleasing output image.

Our approach of separating the image into coarse and fine layers circumvents the problem of vectorizing richly textured natural images. We input only the coarse layer, which is free of fine textures and does not require any over-segmentation for faithful reconstruction, to the vectorization module. Hence, the vectorization ensures fast, efficient scaling of the coarse layer.
3.2. Generating the Coarse Layer
The coarse layer generation involves two major modules: the detail-wiping filter and the vectorization module.

Detail-wiping filter: We pass the input image, I, through a detail-wiping filter that retains the strong edges and salient structure, and wipes out fine details like textures. Examples of detail-wiping filters include the Symmetric Nearest Neighbor filter, the Kuwahara filter [14], and the bilateral filter [15]. We found that the bilateral filter provides the most visually appealing results and good control over the level of detail to be wiped out. The output of the bilateral filter, I_b, containing only strong edges and gross structure, is then presented to the vectorization module.
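A minimal brute-force bilateral filter, in the spirit of [15], can illustrate the detail-wiping behaviour. This is a sketch, not the authors' implementation, and the parameter values are illustrative only; a step edge survives the filter almost intact, while low-contrast texture is averaged away.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbours, where the
    weight combines spatial closeness (sigma_s) with intensity
    similarity (sigma_r) -- strong edges are preserved while fine
    detail is smoothed away.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    # Precompute the spatial Gaussian over the window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    out = np.empty_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range kernel: penalise neighbours with dissimilar intensity.
            rng_w = np.exp(-((window - img[y, x]) ** 2) / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[y, x] = (weights * window).sum() / weights.sum()
    return out

# A hard step edge is preserved almost exactly, unlike with a plain blur.
step = np.zeros((8, 8))
step[:, 4:] = 1.0
filtered = bilateral_filter(step)
assert abs(filtered[4, 3]) < 0.05 and abs(filtered[4, 4] - 1.0) < 0.05
```

In practice sigma_r controls "the level of detail to be wiped out" that the text refers to: variations smaller than sigma_r are treated as texture and smoothed, while larger jumps are treated as structure and kept.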
Fig. 2. Example: Top: input image; bottom left: coarse layer; bottom right: detail layer.
Vectorization module: We use vectorization for scaling as vector primitives (such as lines, curves, and polygons) can be scaled to any arbitrary magnification without artifacts. Vectorization involves three major modules, namely: i) image segmentation; ii) curve fitting for the segmented regions; and iii) rendering. The bilateral-filtered image I_b is given to the segmentation module, which decomposes it into visually homogeneous regions to be processed by the curve-fitting module. In our work, we used the 'EDISON' segmentation method [16]. The algorithm provides user control over the level of segmentation in terms of parameters like color threshold, processing window size, etc. The algorithm segments the input image such that each connected component of pixels in the image is assigned a unique label. Each label is then assigned a color value that is the mean of the color values of all the pixels carrying that label. The output of the segmentation module is thus an approximation of the image with flat color regions. Note that the proposed method does not suffer from the limitations of the particular segmentation method used, because the additive decomposition model ensures that any loss of information in the coarse layer I_c is captured in its complementary layer, such that I = I_c + I_f. The curve-fitting module fits the boundary of each segmented region with suitable geometric shapes (polygons and lines). This results in a vector representation of the filtered image, which we denote as V. The details of the shapes/paths, along with their color information, can be stored in the SVG format. Given a required display resolution (for example, the resolution of the input image I), the rendering module renders an image, I_c, which is a 'coarse layer' of I (see Fig. 2). We use the Potrace library [17], which internally applies cubic Bézier splines for curve fitting, and OpenVG for rendering.
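The mean-color fill performed after segmentation can be sketched as follows. This assumes a label map is already available (e.g. from a segmenter such as EDISON); it is not the EDISON algorithm itself, and the function name is our own.

```python
import numpy as np

def flatten_colors(img, labels):
    """Replace every pixel with the mean colour of its segment.

    img:    H x W x 3 float array
    labels: H x W int array, one label per connected component
    Returns the flat-colour approximation the curve-fitting stage sees.
    """
    flat = np.empty_like(img)
    n = labels.max() + 1
    counts = np.bincount(labels.ravel(), minlength=n)
    for c in range(3):
        sums = np.bincount(labels.ravel(), weights=img[..., c].ravel(),
                           minlength=n)
        means = sums / np.maximum(counts, 1)
        flat[..., c] = means[labels]  # broadcast each segment's mean back
    return flat

# Two-segment toy image: each half collapses to its own mean colour.
img = np.zeros((4, 6, 3))
img[:, :3] = [0.1, 0.2, 0.3]
img[:, 3:] = [0.8, 0.7, 0.6]
img += np.random.default_rng(1).normal(0, 0.01, img.shape)  # mild texture
labels = np.zeros((4, 6), dtype=int)
labels[:, 3:] = 1
flat = flatten_colors(img, labels)
assert np.allclose(flat[:, :3], flat[0, 0])  # segment 0 is one flat colour
```

The flat-color output is exactly what makes the coarse layer easy to vectorize: region boundaries are the only geometry left to fit.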
3.3. Generating the Fine Layer
The rendered coarse layer generated above is subtracted from the original bitmap image, in a pixel-wise fashion, to get the fine-detail or texture layer, I_f = I − I_c (see Fig. 2). This layer contains the fine texture details that are complementary to the coarse-layer content. Fig. 2 shows the coarse and fine-detail layers for a sample input image.
Fig. 3. Flow diagram of the adaptive filter.

3.4. Scaling the Coarse and Fine Layers
Given a scale factor k, the coarse layer I_c and the detail layer I_f are independently scaled by the same scale factor.
Scaling the coarse layer: The scaling of the coarse layer is an efficient and simple process that requires only rendering the vector data at the required scale factor (k). The vector-based coarse-layer scaling results in an image, I_c^k, with sharp, well-defined edges, as the geometric vector primitives remain sharp and artifact-free irrespective of the rendering scale.
Scaling the detail layer: The fine-detail layer is independently scaled by the same scale factor using a suitable texture-scaling algorithm. Since the residue image does not have sharp, long edges, a basic interpolation scheme, such as bilinear interpolation with a small kernel, can be used. The scaled fine-detail layer is denoted as I_f^k.
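A minimal bilinear upscaler of the kind suggested for the detail layer might look like the sketch below (our own illustration, not the authors' implementation). On a smooth residue, this simple scheme introduces no visible artifacts.

```python
import numpy as np

def bilinear_scale(img, k):
    """Upscale a 2-D array by factor k using bilinear interpolation."""
    h, w = img.shape
    H, W = int(round(h * k)), int(round(w * k))
    # Sample positions in source coordinates (corners aligned).
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four surrounding source pixels for every output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A linear ramp stays linear under bilinear interpolation.
ramp = np.tile(np.arange(4.0), (4, 1))
big = bilinear_scale(ramp, 2)
assert big.shape == (8, 8)
assert np.allclose(np.diff(big[0]), np.diff(big[0])[0])
```

The division of labour is the key design point: edges, where bilinear interpolation would blur, live in the vector layer; only the smooth residue passes through this path.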
3.5. Blending the Layers
The final output is constructed by blending the scaled coarse layer, I_c^k, and the scaled detail layer, I_f^k. However, direct blending may produce an output image that is sharp, but not visually pleasing. This is due to the unnatural sharpness introduced around the edges of the image when a very sharp vector layer is combined with a smooth fine-detail layer. To avoid this, we filter I_c^k with a spatially adaptive blur filter, h. This may seem counter-intuitive to our stated aim, which is to generate a sharp, high-quality rescaled version of the input image. However, the amount of blur introduced is very small, and it is selectively introduced only in certain regions. This results in a final output which is more pleasing, yet sharp. The filter used is a spatially varying smoothing filter (with a Gaussian kernel), whose parameters are decided based on the 'blur map' (computed as in [18]) of the input image. A flow chart of the filtering process is shown in Fig. 3. The final, high-resolution output image I_out is obtained by combining the vector-scaled and processed coarse layer with the interpolated residue layer. This is given by

    I_out = (h * I_c^k) + I_f^k,

where * denotes the convolution operation.
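The final composition of a softened coarse layer and the scaled detail layer can be sketched as below. The per-pixel blur strengths would come from a blur map as in [18]; here we substitute a hypothetical uniform map, and all names are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius=2):
    """Normalised 2-D Gaussian kernel of a given sigma."""
    ax = np.arange(-radius, radius + 1)
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma**2))
    return k / k.sum()

def adaptive_blur(img, sigma_map, radius=2):
    """Per-pixel Gaussian smoothing: sigma_map picks the blur strength."""
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            k = gaussian_kernel(max(sigma_map[y, x], 1e-3), radius)
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = (k * window).sum()
    return out

def blend(coarse_scaled, fine_scaled, sigma_map):
    """Final composition: softened coarse layer plus interpolated detail."""
    return adaptive_blur(coarse_scaled, sigma_map) + fine_scaled

coarse = np.zeros((8, 8))
coarse[:, 4:] = 1.0                                  # hard vector edge
fine = np.random.default_rng(2).normal(0, 0.01, (8, 8))
sigma = np.full((8, 8), 0.8)                         # uniform map for the demo
out = blend(coarse, fine, sigma)
assert out.shape == (8, 8)
```

A real blur map would raise sigma only near the rendered edges, so the slight softening is confined to exactly the regions where the vector layer would otherwise look unnaturally crisp.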
