I. INTRODUCTION
On October 2nd, 2009, UNESCO acknowledged batik as Indonesia's cultural heritage. The name batik is derived from the Javanese words "gembat" (to throw) and "itik" (to write dots on fabric or other materials) [1]. Batik is an ancient technique for decorating textiles [2]. The patterns that compose batik are called motifs. Based on the ornaments and their structures, the type of batik can be identified by its basic motif, as listed in Table I [3][4]. Batik can therefore be classified by these motifs.
Several studies have discussed batik image recognition and classification. Cheong and Loke [5] performed recognition and classification of colored textures for batik and songket motifs. They used six multispectral co-occurrence matrices to extract colored texture based on RGB, and the Tchebichef orthogonal polynomial to obtain the moment coefficients from the co-occurrence matrices as texture descriptors. That method gives a good degree of accuracy in discriminating color texture. Continuing the previous study, Loke and Cheong [6] performed feature reduction using Principal Component Analysis (PCA), reducing the features to 2% of their original number. The result showed that appropriate feature reduction can increase recognition speed without significantly reducing the classification rate. Nurhaida et al. [1] compared the performance of three extraction methods for batik image recognition: Gray Level Co-occurrence Matrices (GLCM), Canny edge detection, and Gabor filters. The comparison was conducted to determine the best method for recognizing batik images; the result showed that GLCM is the best method for extracting information from batik images.
Most studies of batik image recognition use texture features as descriptors to distinguish the type of batik, because texture can represent the motifs in batik images. Texture extraction methods are based on either statistical or spectral approaches. GLCM is the most common statistical texture extraction method and has proven to be an effective texture descriptor [1], [7], [8]. The wavelet transform is the most common spectral texture extraction method; the discrete wavelet transform (DWT) has proven to be a very powerful texture descriptor in image analysis [9], [10], [11].
Most studies of batik image recognition use datasets that have been well prepared for the classification process. For instance, one class of batik motif may consist of a number of images acquired from a single fabric captured from different sides. In this study, by contrast, the dataset is acquired randomly from the internet, so one class of batik motif contains various fabrics that share the same basic motif. Moreover, as shown in Fig. 1, images acquired randomly from the internet may contain various types of noise. First, the image color can be light (high intensity) on one side and dark (low intensity) on another, caused by unbalanced brightness when capturing the images. Second, there are folds in the fabrics. Third, the basic motifs differ in size. Fourth, low contrast prevents the edges of the batik motif from being clearly visualized. Lastly, some batik images carry watermarks. Because of this complexity, classifying these batik images is not a trivial task.
To overcome these problems, a proper texture feature extraction method should be selected to achieve a high classification accuracy. In this paper, we propose extracting texture features of batik images using co-occurrence matrices of wavelet sub-band images.
Fig. 1: Noise in images acquired from the internet: (a) unbalanced brightness, (b) folds, (c) different scale, (d) low contrast, (e) watermark
TABLE I: Examples of Batik Types

Motif        Examples
Banji        banji bengkok, guling, kerton
Ceplok       kawung, ceplok nogosari
Nitik        picis, keci, rengganis, nitik krawitan, tirtateja, alit, truntum
Lereng       parang barong, parang rusak, udan liris
Semen        babon angrem, grageh walu
Lunglungan   lung klewer, peleman
Buketan      buket isen latar, snow white, buketan pekalongan
B. Preprocessing

There are three pre-processing steps for the main dataset. First, the images are resized to 320x320 pixels. Second, the resized images are converted to grey-level intensity. Finally, adaptive histogram equalization is applied to the converted images. In contrast, pre-processing for the second dataset consists only of the conversion to grey-level intensity.
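As a rough sketch of these three steps in plain NumPy, assuming nearest-neighbour resizing (the paper does not state its interpolation method) and substituting global histogram equalization for the paper's adaptive equalizer, whose exact variant (e.g. CLAHE) is not specified here:

```python
import numpy as np

def to_gray(rgb):
    # ITU-R BT.601 luminance weights, a common stand-in for the
    # paper's unspecified grey-level conversion.
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def resize_nn(img, size=320):
    # Nearest-neighbour resize to size x size pixels.
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def equalize(gray):
    # Global histogram equalization; the adaptive variant used in the
    # paper applies the same idea on local tiles of the image.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip((cdf - cdf_min) * 255 // max(cdf[-1] - cdf_min, 1), 0, 255)
    return lut.astype(np.uint8)[gray]
```

For the second dataset, only `to_gray` would be applied.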
C. Wavelet

\psi_{a,b}(x) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{x-b}{a}\right) \qquad (1)

(Wf)(a,b) = \langle f, \psi_{a,b} \rangle = \int f(x)\,\psi_{a,b}(x)\,dx \qquad (2)
Fig. 2: Samples of the main dataset: (a) banji, (b) ceplok, (c) kawung, (d) nitik, (e) parang, and (f) lereng
Fig. 3: Samples of the second dataset: (a) ceplok, (b) parang, (c) semen, (d) truntum, (e) lereng, and (f) lung-lungan
D_{LL} = l^{T} l \qquad (3)

D_{LH} = l^{T} h \qquad (4)

D_{HL} = h^{T} l \qquad (5)

D_{HH} = h^{T} h \qquad (6)
Here, l and h denote the 1-D low-pass and high-pass filters, respectively. After the transform, each resulting image is downsampled (↓2) by a factor of two in each direction, halving the size of each band.

At each level of decomposition, the 2-D DWT produces four sub-band images: one low-pass image LL and three detail images LH, HL, and HH. Each sub-band image contains information at a specific scale and orientation. LL corresponds to the approximation image. LH corresponds to the horizontal details and contains image information of low horizontal frequency and high vertical frequency. HL corresponds to the vertical details and contains high horizontal frequency and low vertical frequency. HH corresponds to the diagonal details and contains high horizontal and high vertical frequencies.

To obtain the next level of decomposition, the LL sub-band image is successively decomposed using the 2-D DWT. Wavelet packet decomposition of an image therefore produces 4^i sub-band images, where i is the decomposition level. The DWT can use various filter banks, such as Haar, Daubechies, Coiflet, and Biorthogonal. Different filter banks yield different wavelet transform functions, which cover different ranges of frequencies.
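As an illustration, one level of the 2-D Haar decomposition (the simplest of the filter banks listed above) can be sketched in plain NumPy; further levels would re-apply the same function to LL:

```python
import numpy as np

def haar_dwt2(img):
    # One level of a 2-D Haar DWT: filter and downsample along rows,
    # then along columns of each half-band.
    x = img.astype(float)
    # Row direction: low-pass = pairwise sum, high-pass = pairwise
    # difference, each scaled by 1/sqrt(2); slicing [0::2]/[1::2]
    # performs the downsampling by 2.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Column direction on each half-band gives the four sub-bands.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```

On a flat image, all detail bands are zero and only LL carries energy, which matches LL being the approximation image.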
D. Gray Level Co-Occurrence Matrix (GLCM)

A GLCM is a square matrix of size L x L, where L is the number of grey levels in the original image. The matrix contains the probability that two pixels with grey-level intensities i and j, respectively, are separated by distance d in direction θ; this probability is written P(i, j, d, θ). The distance d can be chosen from 1 to 8, and the direction θ can be any of 0°, 45°, 90°, 135°, 180°, 225°, 270°, or 315°. Fig. 5 illustrates the directions that can be used in a GLCM, and Fig. 6 illustrates a GLCM.
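A minimal NumPy sketch of building P(i, j, d, θ) and a few of the features defined below; the direction-to-offset mapping shown here covers only four of the eight directions and is an assumed convention, not taken from the paper:

```python
import numpy as np

def glcm(gray, d=1, theta=0, levels=8):
    # Build a normalized co-occurrence matrix P(i, j, d, theta).
    # theta -> (row, col) pixel offset; an assumed mapping.
    dy, dx = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}[theta]
    P = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[gray[y, x], gray[y2, x2]] += 1
    return P / P.sum()

def glcm_features(P):
    # Texture features computed from the normalized GLCM.
    L = P.shape[0]
    i, j = np.mgrid[0:L, 0:L]
    asm = (P ** 2).sum()                          # ASM, eq. (7)
    contrast = ((i - j) ** 2 * P).sum()           # contrast, eq. (9)
    idm = (P / (1 + (i - j) ** 2)).sum()          # IDM, eq. (10)
    mu_x, mu_y = (i * P).sum(), (j * P).sum()
    sx = np.sqrt(((i - mu_x) ** 2 * P).sum())
    sy = np.sqrt(((j - mu_y) ** 2 * P).sum())
    corr = ((i - mu_x) * (j - mu_y) * P).sum() / (sx * sy)  # eq. (11)
    return asm, contrast, idm, corr
```

Each (d, θ) pair yields one matrix, so a feature vector is typically built by concatenating features over several distances and directions.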
Fig. 5: Directions of θ in GLCM

Several texture features can be computed from the GLCM. The Angular Second Moment (ASM) is

\mathrm{ASM} = \sum_{i,j=0}^{L-1} P^{2}(i,j,d,\theta) \qquad (7)

Contrast is

\mathrm{Contrast} = \sum_{i,j=0}^{L-1} (i-j)^{2}\, P(i,j,d,\theta) \qquad (9)

Inverse Difference Moment (IDM), also called homogeneity, has a high value for a low-contrast image. IDM is calculated by

\mathrm{IDM} = \sum_{i,j=0}^{L-1} \frac{P(i,j,d,\theta)}{1+|i-j|^{2}} \qquad (10)

Correlation is

\mathrm{Corr} = \sum_{i,j=0}^{L-1} \frac{(i-\mu_x)(j-\mu_y)\, P(i,j,d,\theta)}{\sigma_x \sigma_y} \qquad (11)

where
\mu_x = \sum_{i,j=0}^{L-1} i\, P(i,j,d,\theta), \quad \mu_y = \sum_{i,j=0}^{L-1} j\, P(i,j,d,\theta),
\sigma_x^{2} = \sum_{i,j=0}^{L-1} (i-\mu_x)^{2}\, P(i,j,d,\theta), \quad \sigma_y^{2} = \sum_{i,j=0}^{L-1} (j-\mu_y)^{2}\, P(i,j,d,\theta).

E. Probabilistic Neural Network (PNN)

The input layer consists of the input nodes of the feature vector and performs no computation. The pattern layer consists of neurons equal in number to the training samples. In the pattern layer, the input pattern is evaluated by the following multi-dimensional Gaussian function, a probability density function (PDF) based on the Parzen window:

f_{k,i}(x) = \frac{1}{(2\pi)^{d/2}\,\sigma^{d}} \exp\!\left(-\frac{\|x-x_{i}^{k}\|^{2}}{2\sigma^{2}}\right) \qquad (12)

where x is the input feature vector, x_{i}^{k} is the i-th training vector of class k, d is the dimension of the feature vector, and \sigma is the smoothing parameter. The summation layer computes the sum and mean of the pattern-layer outputs for each class:

g_{k}(x) = \frac{1}{(2\pi)^{d/2}\,\sigma^{d}\, n} \sum_{i=1}^{n} \exp\!\left(-\frac{\|x-x_{i}^{k}\|^{2}}{2\sigma^{2}}\right) \qquad (13)

where n is the number of training samples in class k.
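The summation-layer score g_k(x) of eq. (13) can be sketched as follows; the dictionary layout of the training data and the argmax output layer are illustrative assumptions:

```python
import numpy as np

def pnn_classify(x, train, sigma=1.0):
    # train: dict mapping class label k -> array of shape (n_k, d)
    # holding the training vectors of that class.
    d = len(x)
    scores = {}
    for k, X in train.items():
        n = X.shape[0]
        # Summation layer: mean of the pattern-layer Gaussian kernels,
        # i.e. g_k(x) from eq. (13).
        norm = (2 * np.pi) ** (d / 2) * sigma ** d * n
        kernels = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma ** 2))
        scores[k] = kernels.sum() / norm
    # Output layer: pick the class with the largest density estimate.
    return max(scores, key=scores.get)
```

The smoothing parameter sigma controls how far each training sample's influence extends and is usually tuned on held-out data.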
Accuracy (%)

Level       Haar   Daubechies (db4)   Coiflet (coif2)   Biorthogonal (bior2.2)
1st Level    44          44                 44                   17
2nd Level    67          72                 50                   28
3rd Level    39          33                 28                   50
4th Level    34          28                 28                   39
5th Level    39          22                 17                   28
Accuracy (%)

Level       Haar   Daubechies (db4)   Coiflet (coif2)   Biorthogonal (bior2.2)
1st Level   100         100                100                   29
2nd Level    86          79                 93                  100
3rd Level    71          86                 71                   86
4th Level    71          64                 50                   86
5th Level    29          64                 29                   64
Accuracy (%)

            Haar   Daubechies (db4)   Coiflet (coif2)   Biorthogonal (bior2.2)
mean         58          59                 51                   53
IV. CONCLUSION

This paper proposes a new method for extracting texture features of batik images using co-occurrence matrices of sub-band images. The method is proposed to overcome the problem of classifying batik images acquired randomly from the internet; hereafter, the acquired batik images are called the main dataset.
Accuracy (%)

Decomposition   mean
1st Level        60
2nd Level        72
3rd Level        58
4th Level        50
5th Level        37
[1] I. Nurhaida, R. Manurung, and A. M. Arymurthy, "Performance comparison analysis features extraction methods for batik recognition," in Advanced Computer Science and Information Systems (ICACSIS), 2012 International Conference on. IEEE, 2012, pp. 207-212.
[2] A. Haake, "The role of symmetry in Javanese batik patterns," Computers & Mathematics with Applications, vol. 17, no. 4, pp. 815-826, 1989.
[3] H. S. Doellah, Batik: Pengaruh Zaman dan Lingkungan. Batik Danar Hadi Solo, 2002.
[4] F. Kerlogue, The Book of Batik. Archipelago Press, 2004.
[5] M. Cheong and K.-S. Loke, "An approach to texture-based image recognition by deconstructing multispectral co-occurrence matrices using Tchebichef orthogonal polynomials," in Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008, pp. 1-4.
[6] K.-S. Loke and M. Cheong, "Efficient textile recognition via decomposition of co-occurrence matrices," in Signal and Image Processing Applications (ICSIPA), 2009 IEEE International Conference on. IEEE, 2009, pp. 257-261.
[7] C. W. D. de Almeida, R. M. de Souza, and A. L. B. Candeias, "Texture classification based on co-occurrence matrix and self-organizing map," in Systems Man and Cybernetics (SMC), 2010 IEEE International Conference on. IEEE, 2010, pp. 2487-2491.
[8] R. W. Conners and C. A. Harlow, "A theoretical comparison of texture algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 3, pp. 204-222, 1980.
[9] S. Sidhu and K. Raahemifar, "Texture classification using wavelet transform and support vector machines," in Electrical and Computer Engineering, 2005. Canadian Conference on. IEEE, 2005, pp. 941-944.
[10] G. Fan and X.-G. Xia, "Wavelet-based texture analysis and synthesis using hidden Markov models," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 1, pp. 106-120, 2003.
[11] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
[12] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, no. 285-296, pp. 23-27, 1975.
[13] D. F. Specht, "Probabilistic neural networks," Neural Networks, vol. 3, no. 1, pp. 109-118, 1990.