
International Journal of Electronics and Computer Science Engineering
Available Online at www.ijecse.org

ISSN-2277-1956

Comparative Analysis of Structure and Texture based Image Inpainting Techniques


S. Padmavathi, K. P. Soman
Amrita School of Engineering, Coimbatore, India
e-mail: s_padmavathi@cb.amrita.edu, kp_soman@amrita.edu

Abstract- There are various real-world situations where a portion of an image is lost, damaged or hidden by an unwanted object, and needs to be restored. Digital image inpainting is a technique that addresses such issues. Inpainting techniques are based on interpolation, diffusion or exemplar based concepts. This paper briefly describes the application of these concepts to inpainting and provides a detailed performance analysis. It is observed that the performance of these techniques varies when restoring the structure and the texture present in an image. This paper gives the limitations of each technique and suggests the appropriate technique for a given scenario.

Keywords- digital image inpainting, exemplar based inpainting, TV inpainting, isotropic diffusion, anisotropic diffusion.

1. INTRODUCTION

A photographic picture is a two-dimensional image which can contain many objects. One may be interested in an object or scene that is hidden by another. For example, a beautiful picture may have some letters written on it, a view of the Taj Mahal may be occluded, or a historic painting may be torn or damaged. Here the picture below the letters, the occluded portion of the Taj Mahal and the damaged portion of the painting need to be restored. This problem is addressed under various headings such as disocclusion, object removal and image inpainting. Retrieving the information that is hidden or missing becomes difficult when there is no prior knowledge or reference image; the information surrounding the missing area and the other known areas has to be utilized for the restoration. Usually the user specifies, in the form of a mask, the unwanted foreground, the object to be removed or the portion of the image to be retrieved.
The Clone Brush tool of Adobe Photoshop restores the image when the user selects a sample of the image to be placed in the missing area, whereas in inpainting the missing area is filled in automatically by the algorithm. Digital image inpainting is a kind of digital image processing that modifies a portion of the image, based on the surrounding area, in an undetectable way. The techniques rely mainly on diffusion and sampling processes. Inpainting has a wide variety of applications: restoration of deteriorated photos, image denoising, special effects in movies, digital zoom-in and edge based image compression.

2. STATE OF ART

The inpainting problem can be considered as assigning gray levels to the missing area, called Ω, with the help of the gray levels in the known area Φ, as shown in Fig. 2.1. The boundary ∂Ω between the two plays a major role in deciding the intensities in Ω. All the algorithms are iterative; they first fill in the pixels near ∂Ω and move inwards, successively altering ∂Ω each time. An algorithm stops when all the pixels in Ω have been assigned values. Restoring structural information such as edges, and textural information such as repeating patterns, poses a major challenge for inpainting techniques. Based on the nature of the filling, the algorithms can be classified into structure based and texture based methods.

ISSN-2277-1956/V1N3-1062-1069


Figure 2.1. The digital image inpainting problem

Structure based methods use geodesic curves and the isophotes arriving at the boundary for inpainting. Isophotes are lines joining points of the same gray level, and geodesic curves are lines following the shortest possible path between two points. When used in its primitive form, this approach may result in disconnected objects. This is illustrated in Fig 2.2: while inpainting the black square in Fig 2.2a, a single horizontal bar is expected, but the algorithm produces two disconnected bars as in Fig 2.2b.

The mathematical models for deterministic and variational PDEs are explained in detail in [4] and [6]. A series of partial differential equations is used to extend isophotes into the missing area in [1], [2] and [11]. In [12] a convolution mask is used to extend the gray levels into the inpainting area. The curvatures are extended into the inpainting area in [5]. The texture based methods mainly rely on texture synthesis [3], [9], which grows a new image outward from an initial seed. Before a pixel is synthesized, its neighbors are sampled; the whole image is then queried to find a source pixel with similar neighbors, and the source pixel is copied to the pixel to be synthesized in the missing area. This is called exemplar based synthesis. Based on whether a single pixel or a sub-window is used for sampling, it is further classified into pixel based and patch based sampling. The patch size, the matching criteria and the order of filling vary between algorithms. Exemplar based inpainting is used in [7] and [15].

3. DIGITAL IMAGE INPAINTING TECHNIQUES

Digital image inpainting involves two major steps. The first is the selection of the area to be inpainted; the second is the inpainting algorithm, which assigns appropriate values to the selected area.

3.1 Inpainting area selection

The area to be inpainted can be specified by a color, by a region selected by the user, or by a binary image marking the missing area. Color based selection is the most flexible: it can specify the area irrespective of its shape, size and number of regions. Instead of looking for the exact color value, color values close to it are also taken into account, to allow for quantization effects. This method requires the missing area to be in a unique color different from the rest of the image. Alternatively, the user can select the missing area through free-hand or polygon selection.
This method is capable of selecting the missing area irrespective of its color, and can be used predominantly on black and white images. However, the missing area cannot be precisely specified in this way, and it becomes tedious to select more than one area, as in the case of text imposed on an image. If the area to be inpainted remains constant across various images, or the template of the damage is known, the missing area can be specified as a binary image of the same size as the input image. This method is best suited to specify black text imposed on black and white images. In practice the missing area is selected using any image manipulation software and given a distinct color, which is then used by the inpainting algorithm. The user-selected area is usually called the mask or the region to be inpainted.

3.2 Structure based inpainting

These methods are based on partial differential equations, which capture the structural information in an image. The differential equations based on interpolation, diffusion and the total variational PDE are discussed in this paper.
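As a minimal illustration of color based mask selection with a tolerance for quantization effects, a sketch with NumPy might look as follows (the function name, the RGB marker color and the tolerance value are assumptions for the example, not part of the paper):

```python
import numpy as np

def color_mask(image, target_color, tol=10):
    """Binary inpainting mask: True where a pixel is close to target_color.

    image is an H x W x 3 uint8 array; target_color is the (R, G, B)
    marker painted over the missing area; tol is a per-channel tolerance
    that absorbs quantization effects around the exact marker value.
    """
    diff = np.abs(image.astype(np.int16) - np.asarray(target_color, dtype=np.int16))
    return np.all(diff <= tol, axis=-1)
```

A pixel is included in the mask only when all three channels fall within the tolerance, so slightly compressed or anti-aliased marker pixels are still captured.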



3.2.1 Interpolation Based Inpainting

The simplest method uses the soap-film PDE, where ∂Ω supplies the boundary conditions. A set of linear equations is formed from the known values on ∂Ω and the unknown values in Ω in the four major directions, namely north, south, east and west. Interpolation of the four neighbors is used to frame each equation, with the known boundary values forming the right-hand side. The equations are solved to obtain the intensities in Ω.

3.2.2 Anisotropic Diffusion Based Inpainting

The inpainting problem can be considered as diffusion of gray levels from the boundary ∂Ω into the unknown area Ω. Level set theory is used to describe the diffusion boundaries at various stages. If the diffusion process does not depend on the direction or on the presence of edges, it is called isotropic diffusion; the interpolation technique is isotropic in this sense. Anisotropic diffusion [13] is used to avoid blurring across edges. Equation 3.1 shows the anisotropic diffusion, where g represents a smooth function, κ the curvature, ∇I the gradient of the image, and Ω^ε the region over which the diffusion is applied. The curvature is given by equation 3.2.

∂I(x, y, t)/∂t = g(x, y) κ(x, y) |∇I(x, y)|,  (x, y) ∈ Ω^ε                    (3.1)

κ = ( I_xx I_y^2 − 2 I_x I_y I_xy + I_yy I_x^2 ) / |∇I|^3                     (3.2)

where I_xx, I_xy, I_yy represent the second derivatives and I_x, I_y the first derivatives in the corresponding directions. Bertalmio and Sapiro [1] iteratively update Ω by propagating information in the direction normal to the boundary until a steady state is reached; in between, they apply a few steps of anisotropic diffusion. Each pixel is modified by adding to its current intensity an update term scaled by a factor Δt. The iterative equation, the update, and the change in Laplacian information for a pixel at (x, y) are given in equations 3.3, 3.4 and 3.5 respectively.

I^(n+1)(x, y) = I^n(x, y) + Δt · I_t^n(x, y),  ∀(x, y) ∈ Ω                    (3.3)

I_t^n(x, y) = ( δL^n(x, y) · N(x, y, n)/|N(x, y, n)| ) |∇I^n(x, y)|           (3.4)

δL^n(x, y) = ( L^n(x+1, y) − L^n(x−1, y), L^n(x, y+1) − L^n(x, y−1) )         (3.5)

where Δt represents the rate of update, L the Laplacian and N the normal to the gradient.

3.2.3 Total Variation Based Inpainting

The total variation (TV) inpainting model [6] is derived from the Euler-Lagrange equation. The nonlinear PDE is solved iteratively until a tolerance limit is reached, by freezing the intermediate diffusivity coefficient given in equation 3.6; its numerical implementation boils down to equation 3.7 for a pixel P.

1 / sqrt( |∇I^n|^2 + ε^2 )                                                    (3.6)



0 = Σ_D (1/|∇I_D|) (I_P − I_D) + λ_P (I_P − I_P^0)                            (3.7)

where |∇I_D| is the gradient magnitude at the neighbor D, I_D is the intensity at D, and I_P^0 is the observed value at pixel P. The sum runs over the four neighbors D of P, and λ is set to 0 for inpainting pixels and to 1 for known pixels.

3.3 Texture based inpainting

The exemplar based method uses patch based sampling and filling. The order in which the patches are filled varies with the structural information present around them. A small patch centered on ∂Ω, containing some known pixels from Φ and some unknown pixels from Ω, is considered. A patch similar to its known pixels is searched for in the entire image, and the values at the unknown pixel positions are copied from the best-matched patch. To ensure a better quality image, the algorithm checks the boundary area for sharp changes such as edges and gives more weight to patches whose unknown pixels lie close to edges; it also gives more weight to patches with more known pixels near the boundary. This is achieved by calculating an edge factor E(p) and a known-pixel factor K(p) for the patch P_p centered at pixel p, as given in equations 3.8 and 3.9 respectively.

E(p) = Σ_{q ∈ P_p ∩ Φ} e(q) / |P_p|                                           (3.8)

K(p) = Σ_{q ∈ P_p ∩ Φ} k(q) / |P_p|                                           (3.9)

where e(q) and k(q) mark the edges and the known pixels in the known area of the patch P_p, and |P_p| is the cardinality of the patch. The k values of the known and unknown areas are initialized to 1 and 0 respectively. The overall weight is the product of the two factors, as in equation 3.10; the pixels with the highest weight are filled first.

P(p) = E(p) · K(p)                                                            (3.10)
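The edge factor, known-pixel factor and their product can be sketched as follows (a minimal NumPy illustration; the function name and the window radius r are assumptions for the example, and e(q), k(q) are taken as 0/1 indicator maps as described above):

```python
import numpy as np

def patch_priority(edge_map, known, p, r=4):
    """Priority P(p) = E(p) * K(p) for a patch of radius r centered at p.

    edge_map: binary map of detected edges (e); known: binary map that is
    1 on the known area and 0 on the missing area (k); both H x W.
    Sums run over the known pixels of the patch and are normalized by
    the patch cardinality |Pp|.
    """
    y, x = p
    ys = slice(max(y - r, 0), y + r + 1)
    xs = slice(max(x - r, 0), x + r + 1)
    patch_known = known[ys, xs]
    card = patch_known.size                         # |Pp|
    E = float((edge_map[ys, xs] * patch_known).sum()) / card
    K = float(patch_known.sum()) / card
    return E * K
```

The boundary pixel with the largest returned value would be filled first, and the maps recomputed after each filling.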

Once a patch is filled, the boundary ∂Ω and the weights are recomputed before the next filling. The process is repeated until all pixels of Ω are filled in.

4. EXPERIMENTATION AND RESULTS

The experimentation is carried out on various images, which can be broadly categorized into fixed-shape images, textured images and natural scene images. The structure based inpainting methods are tried on grayscale images, and the exemplar based inpainting on color images as well. The inpainting area is chosen by the user through color, region selection or an input mask image, and is varied in shape and size for each algorithm. The results of interpolation based inpainting are shown in Fig 4.1. Fig 4.1a shows an image with a fixed shape; the mask is specified by a black color, is narrow, and includes two regions. The mask is dilated before the equations are solved.



Figure 4.1a Inpainting using interpolation

The interpolation based method works well if the inpainting area is contained in a uniform region, and the performance remains high even when a large number of small areas is inpainted. However, it blurs the image along the border and does not preserve the shape of objects in the surrounding area. These effects are illustrated in Figs 4.1b, 4.1c and 4.1d. In Fig 4.1b the mask is chosen at the top right corner of the circle, in Fig 4.1c the mask is specified between the two squares, and Fig 4.1d is a textured image where the mask is specified by polygonal region selection.
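The four-neighbor interpolation of Section 3.2.1 can be sketched as a Jacobi iteration for the discrete soap-film (Laplace) equation, with the known pixels acting as fixed boundary conditions (an iterative stand-in for the direct linear solve described in the paper; the function name and iteration count are illustrative):

```python
import numpy as np

def interpolate_inpaint(image, mask, n_iter=500):
    """Fill mask==True pixels by repeatedly averaging the four neighbors.

    Jacobi iteration for the discrete Laplace equation; known pixels
    never change, so they act as Dirichlet boundary conditions.
    """
    I = image.astype(np.float64).copy()
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(I, 1, 0) + np.roll(I, -1, 0)
                      + np.roll(I, 1, 1) + np.roll(I, -1, 1))
        I[mask] = avg[mask]          # update only the unknown area
    return I
```

Because each unknown pixel converges to the average of its neighbors, the result is smooth inside the mask, which explains both the good behavior in uniform regions and the blurring of structure noted above.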

Figure 4.1b

Figure 4.1c

Figure 4.1d

Figure 4.1 Results of Interpolation based inpainting

The result of anisotropic diffusion based inpainting is shown in Fig 4.2, where 2 diffusion steps are applied after every 15 inpainting steps and the maximum number of iterations is set to 3000. The results obtained after 300, 750 and 2000 iterations are shown in the figure. It can be seen that as the number of iterations increases beyond a certain limit, the algorithm starts oscillating and does not give a better result.

Figure 4.2 Anisotropic diffusion based inpainting
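The schedule used in this experiment can be sketched as follows. This is a deliberately simplified sketch: both the inpainting update and the interleaved diffusion are approximated by isotropic Laplacian smoothing restricted to the mask, not the full anisotropic scheme of equations 3.1-3.5; only the interleaving (n_diff diffusion steps after every n_paint inpainting steps) is kept, and the step size dt is an assumed value:

```python
import numpy as np

def lap(I):
    """Five-point Laplacian with replicated (edge) borders."""
    P = np.pad(I, 1, mode="edge")
    return P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2] - 4.0 * I

def diffuse_inpaint(image, mask, n_iter=3000, n_paint=15, n_diff=2, dt=0.1):
    """Alternate inpainting steps with interleaved diffusion steps.

    Keeps the experiment's schedule (n_diff diffusion steps after every
    n_paint inpainting steps); both updates are modeled here as heat
    diffusion applied only on the unknown area.
    """
    I = image.astype(np.float64).copy()
    for n in range(n_iter):
        I[mask] += dt * lap(I)[mask]            # inpainting step (simplified)
        if (n + 1) % n_paint == 0:
            for _ in range(n_diff):             # interleaved diffusion steps
                I[mask] += dt * lap(I)[mask]
    return I
```

In the full method the interleaved diffusion sharpens what the transport step propagates; in this isotropic simplification both steps merely smooth, which is enough to show the iteration structure.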

The results of TV inpainting are shown in Figs 4.3a through 4.3d. The area to be inpainted is an L-shaped region in Fig 4.3a, an ellipse between the boxes in Fig 4.3b, a rectangle between the two ellipses in Fig 4.3c and a small square touching two patterns in Fig 4.3d. The thickness of the area to be inpainted increases from a to c and the number of


iterations is kept fixed at 500. It can be seen that when the thickness is small the image is inpainted well, while an increased thickness or an increased number of iterations results in more blurring. Fig 4.3d shows the blurred boundary of the texture region after inpainting. It can also be observed that as the thickness of the inpainting area increases, TV inpainting requires more iterations to fill the area.
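A fixed-point sketch of the TV update of equation 3.7 might look as follows (a minimal illustration in which the known pixels are kept as hard constraints, i.e. the λ data term is replaced by simply not updating the known area; ε and the iteration count are assumed values):

```python
import numpy as np

def tv_inpaint(image, mask, n_iter=500, eps=1e-3):
    """Fixed-point iteration for the discrete TV model of equation 3.7.

    Each unknown pixel is replaced by the average of its four neighbors
    weighted by the lagged diffusivity w = 1/sqrt(|grad I|^2 + eps^2);
    known pixels are kept fixed (hard data constraint).
    """
    I = image.astype(np.float64).copy()
    H, W = I.shape
    for _ in range(n_iter):
        P = np.pad(I, 1, mode="edge")
        gy = 0.5 * (P[2:, 1:-1] - P[:-2, 1:-1])     # central differences
        gx = 0.5 * (P[1:-1, 2:] - P[1:-1, :-2])
        w = 1.0 / np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        Wp = np.pad(w, 1, mode="edge")
        num = np.zeros_like(I)
        den = np.zeros_like(I)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            num += (Wp[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
                    * P[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx])
            den += Wp[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
        I[mask] = (num / den)[mask]                 # update only the unknown area
    return I
```

The 1/|∇I| weighting suppresses averaging across strong gradients, which is why TV preserves edges better than plain interpolation while the behavior in flat regions is the same.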

Figure 4.3a

Figure 4.3b

Figure 4.3c

Figure 4.3d

Figure 4.3 Results of TV inpainting

These methods perform well for smaller inpainting areas, images with definite shapes, and images with fewer isophotes at the boundary. They fail when the isophotes end inside Ω or in a textured region. The results of exemplar based inpainting are shown in Figs 4.4a through 4.4d. The images are inpainted using a square patch of size 9x9. This method is best suited to textured images, as is evident from Fig 4.4d, and it does not blur the image. The algorithm reconstructs structure well when the patch shape approximates the shape present in the image, as shown in Fig 4.4b; otherwise it fails to reconstruct the shape properly, as shown in Fig 4.4c. Fig 4.4a shows a failure case due to an inappropriate choice of patch size.

Figure 4.4a

Figure 4.4b

Figure 4.4c

Figure 4.4d

Figure 4.4 Results of Exemplar based inpainting

The patches in the known area can be taken either as disjoint, non-overlapping patches or as overlapping patches; searching for a matching patch takes more time but gives better accuracy in the latter case. A snapshot showing a partially inpainted color image is shown in Fig 4.4e. The user is allowed to choose the size and the nature of the patch. The accuracy of inpainting is always a subjective measure; the error can be computed only when the true content of the inpainting area is known. The performance of these methods is compared in Table 4.1.
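The patch search at the core of the exemplar method can be sketched as a brute-force sum-of-squared-differences scan over fully known source patches, comparing only the known pixels of the target patch (an illustrative sketch; the function name and the handling of borders and ties are assumptions, and a practical implementation would restrict or accelerate the search):

```python
import numpy as np

def best_match(image, known, center, r=4):
    """Brute-force exemplar search for the patch centered at `center`.

    Compares the known pixels of the target patch against every fully
    known source patch via the sum of squared differences and returns
    the top-left corner of the best match (r=4 gives a 9x9 patch).
    """
    H, W = image.shape
    y, x = center
    size = 2 * r + 1
    target = image[y - r:y + r + 1, x - r:x + r + 1]
    tk = known[y - r:y + r + 1, x - r:x + r + 1]   # valid target pixels
    best, best_ssd = None, np.inf
    for sy in range(H - size + 1):
        for sx in range(W - size + 1):
            if not known[sy:sy + size, sx:sx + size].all():
                continue                            # source must be fully known
            if (sy, sx) == (y - r, x - r):
                continue                            # skip the target itself
            d = (image[sy:sy + size, sx:sx + size] - target)[tk]
            ssd = float((d * d).sum())
            if ssd < best_ssd:
                best, best_ssd = (sy, sx), ssd
    return best
```

The unknown pixels of the target patch would then be copied from the matched source patch at the corresponding positions, after which the boundary and weights are recomputed.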

Figure 4.4e Snapshot showing a partially inpainted image using the exemplar method



Table 4.1: Performance Analysis of Inpainting algorithms

| Parameters               | Interpolation           | Anisotropic Diffusion   | Total Variation (TV)           | Exemplar                                    |
|--------------------------|-------------------------|-------------------------|--------------------------------|---------------------------------------------|
| Uniform area             | good                    | good                    | good                           | good                                        |
| Structure reconstruction | poor                    | better for smaller area | better for smaller area        | better if patch shape matches               |
| Texture reconstruction   | poor                    | poor                    | poor                           | good; depends on nature of image            |
| Inpaint area shape       | robust to change        | robust to change        | robust to change               | robust to change                            |
| Inpaint area size        | better for smaller area | better for smaller area | better for smaller area        | robust to change                            |
| Blurring of edges        | yes                     | yes                     | yes, for larger sizes          | no                                          |
| Speed                    | very high               | moderate                | low                            | low                                         |
| Other dependencies       | nil                     | nil                     | iterations depend on mask size | performance depends on patch size and shape |

5. CONCLUSIONS

Digital inpainting techniques can be used for many kinds of repairs, such as removing text from an image, erasing power lines from a scenic view, or repairing cracks and scratches. The success of an inpainting algorithm lies in how well the information (photometry) and the structure (geometry) are propagated into the unknown area. The diffusion based methods work well when the unknown area is small: it should be no more than about fifteen pixels across in any direction, and preferably no more than about four for good results. Texture synthesis methods instead search for a suitable texture to place in the unknown area; they work well for larger unknown areas but generally produce undesirable boundaries. The more accurate the propagation of the structure, the better the restored image.

REFERENCES
[1] M. Bertalmio, G. Sapiro, V. Caselles and C. Ballester, "Image Inpainting", Proceedings of SIGGRAPH 2000, New Orleans, USA, July 2000, pp. 417-424.
[2] M. Bertalmio, A.L. Bertozzi and G. Sapiro, "Navier-Stokes, Fluid Dynamics, and Image and Video Inpainting", Proc. IEEE Computer Vision and Pattern Recognition (CVPR'01), Hawaii, December 2001.
[3] M. Bertalmio, L. Vese, G. Sapiro and S. Osher, "Simultaneous Structure and Texture Image Inpainting", Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03), Vol. 2, June 2003.
[4] T.F. Chan and J. Shen, "Mathematical Models for Local Deterministic Inpaintings", UCLA Computational and Applied Mathematics Report 00-11, March 2000.
[5] T.F. Chan and J. Shen, "Non-Texture Inpainting by Curvature Driven Diffusion (CDD)", UCLA Computational and Applied Mathematics Report 00-35, September 2000.
[6] T.F. Chan, J. Shen and L. Vese, "Variational PDE Models in Image Processing", UCLA Computational and Applied Mathematics Report 02-61, December 2002.
[7] A. Criminisi, P. Pérez and K. Toyama, "Object Removal by Exemplar-Based Inpainting", Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03), June 2003.


[8] R.C. Gonzalez and R.E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2002.
[9] H. Igehy and L. Pereira, "Image Replacement through Texture Synthesis", Proc. IEEE International Conference on Image Processing, October 1997.
[10] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald and P.J.W. Rayner, "Interpolation of Missing Data in Image Sequences", IEEE Transactions on Image Processing, Vol. 4, No. 11, November 1995, pp. 1509-1519.
[11] S. Masnou and J.M. Morel, "Level Lines Based Disocclusion", Proc. IEEE International Conference on Image Processing, 1998.
[12] M.M. Oliveira, B. Bowen, R. McKenna and Y.-S. Chang, "Fast Digital Image Inpainting", Proc. International Conference on Visualization, Imaging and Image Processing (VIIP 2001), Marbella, Spain, September 3-5, 2001, pp. 261-266.
[13] P. Perona and J. Malik, "Scale-Space and Edge Detection Using Anisotropic Diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 7, July 1990.
[14] A. Efros and W.T. Freeman, "Image Quilting for Texture Synthesis and Transfer", Proc. ACM Conf. Comp. Graphics (SIGGRAPH), pp. 341-346, August 2001.
[15] A. Hertzmann, C. Jacobs, N. Oliver, B. Curless and D. Salesin, "Image Analogies", Proc. ACM Conf. Comp. Graphics (SIGGRAPH), August 2001.
[16] L. Liang, C. Liu, Y.-Q. Xu, B. Guo and H.-Y. Shum, "Real-Time Texture Synthesis by Patch-Based Sampling", ACM Transactions on Graphics, 2001.

