
# A methodology for rigid shape matching and retrieval

Alejandro J. Giangreco Maidana

Christian E. Schaerer

Waldemar Villamayor-Venialbo

Laboratorio de Computación Científica y Aplicada, Facultad Politécnica, Universidad Nacional de Asunción. E-mail: cschaerer@pol.una.py, wvenialbo@pol.una.py


## 1. Introduction

Computer vision systems, such as those specialized in the recognition of objects, the retrieval of similar images from a database, and the registration of silhouettes, rely mainly on the comparison of a set of basic characteristics extracted from the target objects. Additionally, the enormous advances in data storage and image acquisition technologies raise the need for systems that can help humans recover visual information. In this work we introduce a new shape descriptor, the Contour-Point Signature, built from the contour points of a shape. From this descriptor we establish a method to find the best matching points between two shapes, from which an affine transformation between them can be obtained. Finally, we define a new dissimilarity measure that quantifies the degree of similarity between two shapes.

## 2. Contour-Point Signature

Given a set of contour points $A = \{x_i \in \mathbb{Z}^2,\ i = 1, \dots, M\}$, we select $N$ equally spaced reference points, obtaining the set $P = \{p_i \in \mathbb{R}^2,\ i = 1, \dots, N\}$. The discrete Contour-Point Signature (CPS) of a point $p_i \in P$ is

$$
f_{p_i}(j) := \frac{1}{\ell_A}\,\lvert p_i - p_{r(j)} \rvert, \qquad r(j) = i, i+1, \dots, N, 1, 2, \dots, i, \qquad j = 1, 2, \dots, N+1, \tag{1}
$$

where $\ell_A$ is a normalization factor associated with the shape $A$.
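A minimal NumPy sketch of Equation (1), assuming the reference points have already been sampled equally spaced along the contour in traversal order; using the polygon perimeter as the normalization factor $\ell_A$ is our assumption, chosen to realize the scale independence claimed below.

```python
import numpy as np

def contour_point_signature(points):
    """Discrete Contour-Point Signature (CPS) of Eq. (1).

    `points` is an (N, 2) array of reference points in contour order.
    Row i of the result is f_{p_i}: the distances from p_i to
    p_i, p_{i+1}, ..., p_N, p_1, ..., p_i (the index i appears at both
    ends, so each row has N + 1 entries), normalized by the perimeter.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Perimeter of the closed polygon through the reference points,
    # used here as the normalization factor ell_A.
    perimeter = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum()
    sig = np.empty((n, n + 1))
    for i in range(n):
        # r(j) = i, i+1, ..., N, 1, ..., i  (0-based cyclic order)
        order = np.r_[np.arange(i, n), np.arange(0, i + 1)]
        sig[i] = np.linalg.norm(pts[order] - pts[i], axis=1) / perimeter
    return sig
```

Because each row stores distances from $p_i$, the signature is unchanged by translation and rotation, and the perimeter division removes the dependence on scale.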

Observation: notice that a point $p_i \in \mathbb{R}^2$ can thus be represented by a point $f_{p_i} \in \mathbb{R}^{N+1}$. The CPS has the following properties: starting-point independence, scale independence, and translation and rotation independence.

## 3. Matching Shapes

We then match the contour points of one shape to the contour points of another shape using the CPS. This is an assignment problem, which we solve with a cost matrix. Given two shapes $A$ and $B$, their reference points $P$ and $Q$, and their Contour-Point Signatures $f_i$ and $g_i$, we define the cost matrix as

$$
C_{ij} = \left[ d\!\left(f_i, g_{\sigma(i,j)}\right) \right]_{ij}, \tag{2}
$$

where $d$ is a distance in the metric space $\mathbb{R}^{N+1}$ and $\sigma(i, j) = \left((i + j - 2) \bmod N\right) + 1$.
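A sketch of the cost matrix of Equation (2); the choice of the Euclidean distance for $d$ is ours, since the text only requires a metric on the signature space:

```python
import numpy as np

def cps_cost_matrix(f, g):
    """Cost matrix of Eq. (2) between two CPS descriptor sets.

    `f` and `g` are (N, N+1) arrays whose rows are the signatures
    f_i and g_i of the two shapes.  Entry C[i, j] compares f_i with
    g_{sigma(i, j)}, where sigma(i, j) = ((i + j - 2) mod N) + 1
    cyclically shifts the starting point of the second shape.
    """
    n = f.shape[0]
    cost = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # 0-based form of sigma: with i, j counted from 0,
            # sigma(i+1, j+1) - 1 = (i + j) mod n.
            s = (i + j) % n
            cost[i, j] = np.linalg.norm(f[i] - g[s])
    return cost
```

When the two descriptor sets are identical, the shift $j = 1$ (zero shift) pairs each $f_i$ with $g_i$, so the first column of the matrix is all zeros.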

$$
H(j) = \sum_{i=1}^{N} d\!\left(f_i, g_{\sigma(i,j)}\right), \qquad j = 1, 2, \dots, N. \tag{3}
$$
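Since column $j$ of the cost matrix holds exactly the terms of $H(j)$, minimizing Equation (3) is a column-sum search; a sketch:

```python
import numpy as np

def best_alignment(cost):
    """Evaluate H(j) of Eq. (3) and return the best cyclic shift.

    Column j of `cost` contains d(f_i, g_{sigma(i, j)}) for all i,
    so H is the vector of column sums and the minimizer of Eq. (3)
    is the column with the smallest sum.
    """
    h = cost.sum(axis=0)        # H(j) for every shift j
    j_best = int(np.argmin(h))  # shift realizing the minimum
    return j_best, h[j_best]
```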

This value gives us the correspondence between the points $\{p_1 \to q_j, p_2 \to q_{j+1}, \dots\}$.

Observation: notice that minimizing equation (3) amounts to finding the column of the cost matrix with the smallest sum.

## 4. Transformation

A one-to-one affine transformation from $P$ to $Q$ in $\mathbb{R}^2$ is

$$
y = T(x) = \tilde{T} x, \tag{4}
$$

where $\tilde{T}$ is a $3 \times 3$ matrix in homogeneous coordinates. The matrix $\tilde{T}$ has the form

$$
\tilde{T}^{t} = \tilde{P}^{+} \tilde{Q}, \tag{5}
$$

where $\tilde{P}$ and $\tilde{Q}$ are $N \times 3$ matrices containing the homogeneous coordinates of the points of $P$ and $Q$, respectively, i.e.,

$$
\tilde{P} = \begin{pmatrix} 1 & p_{11} & p_{12} \\ \vdots & \vdots & \vdots \\ 1 & p_{N1} & p_{N2} \end{pmatrix}, \qquad
\tilde{Q} = \begin{pmatrix} 1 & q_{11} & q_{12} \\ \vdots & \vdots & \vdots \\ 1 & q_{N1} & q_{N2} \end{pmatrix},
$$

and $\tilde{P}^{+} = (\tilde{P}^{t} \tilde{P})^{-1} \tilde{P}^{t}$ denotes the pseudo-inverse of $\tilde{P}$.

## 5. Dissimilarity Measure

We define a dissimilarity measure as

$$
d(P, Q) = \alpha\, d_C(P, Q) + \beta\, d_T(P, Q), \tag{6}
$$

where $d_C = \min_j H(j)$ is the minimum of the cost function, $\alpha$ and $\beta$ are weighting coefficients, and

$$
d_T = \lVert \tilde{Q} - \tilde{P} \tilde{T}^{t} \rVert \tag{7}
$$

is a measure induced by the affine transformation.

Observation: $\lVert \cdot \rVert$ corresponds to the sum of the matrix elements.

## 6. Experimental Results

To determine the optimal number of sample points $N$ and the optimal threshold $T$, we use a training set: we select a certain number of pairs of images from the MPEG-7 database. There are two groups of images: one group is made up of mutually different images; the other is made up of similar images. The following figures display the distribution of the comparisons under different numbers of sample points. Each blue (O) point is the distance between similar images, and each red (X) point is the distance between dissimilar images. The optimal $N$ is the smallest number of sample points for which the shapes can be distinguished correctly; it is usually obtained by experiment. In our experiment we obtain $N = 128$.

Our next experiment involves the MPEG-7 shape silhouette database, specifically the Core Experiment CE-Shape-1 part B, which measures the performance of similarity-based retrieval. The database consists of 1400 images: 70 shape categories with 20 images per category. Performance is measured using the so-called bullseye test, in which each image is used as a query and one counts the number of correct images among the top matches.

We obtain a retrieval rate of 70.72% using $N = 128$, $\alpha = 0.3$, and $\beta = 0.7$.
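As an illustration of Sections 4 and 5 (a sketch, not the authors' implementation), the following fits the affine map with NumPy's `pinv` under a row-vector convention and combines $d_C$ and $d_T$ with the weights used in the experiments; reading the "sum of the matrix elements" as a sum of absolute residuals is our assumption.

```python
import numpy as np

def fit_affine(p, q):
    """Least-squares affine map sending the matched points p -> q.

    `p` and `q` are (N, 2) arrays of corresponding reference points,
    with rows already aligned by the shift found in Section 3.
    Working with homogeneous rows [1, x, y], the N x 3 system
    Q ~ P A is solved via the pseudo-inverse, mirroring Eq. (5);
    A acts on row vectors, i.e. A is the transpose of T-tilde.
    """
    ph = np.hstack([np.ones((len(p), 1)), p])   # homogeneous P, N x 3
    qh = np.hstack([np.ones((len(q), 1)), q])   # homogeneous Q, N x 3
    a = np.linalg.pinv(ph) @ qh                 # 3 x 3 least-squares fit
    return ph, qh, a

def dissimilarity(cost_min, ph, qh, a, alpha=0.3, beta=0.7):
    """Weighted dissimilarity of Section 5: alpha*d_C + beta*d_T.

    d_C is the minimum of the cost function H; d_T sums the
    (absolute) entries of the residual matrix Q - P A.
    """
    d_t = np.abs(qh - ph @ a).sum()
    return alpha * cost_min + beta * d_t
```

For two shapes related by an exact affine transformation the residual vanishes, so $d_T = 0$ and the dissimilarity reduces to the weighted matching cost alone.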




Figure 1: Distance between similar images and different images.

## 7. Conclusions

We present a method for comparing shapes using the Contour-Point Signature. We obtain good results in different experiments using the MPEG-7 database. The authors are confident that this work can be improved, an issue that remains to be explored in future work.
