
CLASSIFICATION OF EMOTIONS AND DIFFERENT BHAVAS IN KATHAKALI USING GIST AND SIFT FEATURES

Author 1: ANISHA M
Author 2: DR. B. S. SHAJEE MOHAN

July 22, 2019

OUTLINE

Contents
• Introduction
• Overview
• Dataset Collection
• Feature Extraction
• SIFT based similarity matrix
• Spatial Pyramid Match Kernel Generation
• ITML Method
• Experimentation
• Conclusion
• References

INTRODUCTION

Introduction
• Emotion defines what we feel about something, and facial expressions are the most obvious way to show it. Mainly, seven basic human emotions are considered for facial expression analysis.
• In Kathakali, the performer (Acharyan) communicates with the audience through bhavas.
• Navarasam (facial expression) is a highly stylized technique for the invocation of bhava that has been developed in Kathakali. Indian dramatic theory describes nine kinds of basic expressions.
• In this work, we classify the basic human emotions and the nine types of Kathakali bhavas using different classifiers.

OVERVIEW

General Emotion Classification System

Figure: Block Diagram of a general emotion classification system.

DATASET COLLECTION

Kathakali Bhava Dataset

• Facial expression (Navarasam) images and neutral-face images of 30 different Kathakali players were collected from Kerala Kalamandalam and P.S.V. Natyasangham, Kottakkal.

Representative images

Figure: Kathakali Bhavas


DATASET COLLECTION

JAFFE DATASET AND KDEF DATASET

Figure: Emotion dataset JAFFE. Figure: Emotion dataset KDEF.

Source: http://www.kasrl.org/jaffe download.html (JAFFE dataset); www.emotionlab.se/resources/kdef (KDEF female facial expressions).

FEATURE EXTRACTION

GIST FEATURES
• GIST is a global descriptor.
• GIST features describe the spatial properties of the scene.
• A single image yields a 512-dimensional GIST descriptor.

SIFT FEATURES
• SIFT (Scale-Invariant Feature Transform) features are a type of local feature.
• SIFT features are represented in both the spatial and frequency domains.
• A single image contains a number of SIFT keypoints, and each keypoint is described by a 128-dimensional vector (a minimal extraction sketch follows).
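As a concrete illustration, here is a minimal per-image SIFT extraction sketch assuming OpenCV (the slides do not name a toolchain); GIST is not part of OpenCV and would need a separate Gabor-filter-bank implementation.

```python
import cv2
import numpy as np

def extract_sift(image_path):
    """Return an (N, 128) array of SIFT descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()                        # available in OpenCV >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None:                         # no keypoints detected
        return np.zeros((0, 128), dtype=np.float32)
    return descriptors                              # one 128-d row per keypoint
```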
SIFT BASED SIMILARITY MATRIX

SIFT Based Similarity Matrix


• SIFT feature based kNN classification is done using a SIFT-based similarity matrix. The similarity matrix is calculated as

  M(i, j) = Average score(i, j) / Total no. of matches(i, j)

where M(i, j) represents the similarity measure between the i-th image and the j-th image.
• Average score(i, j) is the mean of the match scores between the i-th and j-th images, and Total no. of matches(i, j) is the number of matches between them.
• The similarity matrix is a square matrix with diagonal elements equal to zero.
• The smaller an entry of the similarity matrix, the closer the corresponding pair of images (a matching sketch follows this list).
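A hedged sketch of building this matrix with OpenCV, assuming the match score is the L2 distance between SIFT descriptors and that matches are filtered with Lowe's ratio test; the slides do not define the score or the matcher, so both choices are assumptions.

```python
import cv2
import numpy as np

def similarity_matrix(descriptor_list):
    """M[i, j] = average match score(i, j) / number of matches(i, j).
    'descriptor_list' holds one (N_k, 128) SIFT descriptor array per image."""
    n = len(descriptor_list)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Lowe's ratio test keeps only confident matches (assumed here).
            pairs = matcher.knnMatch(descriptor_list[i], descriptor_list[j], k=2)
            good = [p[0] for p in pairs
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            if good:
                avg_score = np.mean([m.distance for m in good])
                M[i, j] = M[j, i] = avg_score / len(good)
            else:
                M[i, j] = M[j, i] = np.inf   # no matches: maximally dissimilar
    return M                                  # diagonal stays zero, as on the slide
```

With this matrix, kNN classification amounts to taking, for each test image, the labels of the k training images with the smallest entries of M.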

SPATIAL PYRAMID MATCH KERNEL GENERATION

SPM
• SIFT feature based SVM classification is done using Spatial
Pyramid Match (SPM) kernel technique.
• A spatial pyramid is a collection of orderless feature
histograms computed over cells defined by a multi-level
recursive image decomposition.
• At level 0, the decomposition consists of just a single cell. At
level 1, the image is subdivided into four quadrants, yielding
four feature histograms, and so on.
• Spatial pyramids can be matched using the pyramid match kernel, which weights features at higher (finer) levels more highly, reflecting the fact that higher levels localize the features more precisely (a minimal sketch follows this list).
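A minimal sketch of the pyramid construction and matching, using the common simplification in which level-l histograms are pre-weighted and compared with a single histogram intersection; the three levels and the vocabulary size are illustrative assumptions, and the visual words are assumed to come from a codebook quantizing the SIFT descriptors.

```python
import numpy as np

def spatial_pyramid(words, xy, levels=3, vocab=200):
    """Concatenate visual-word histograms over a multi-level grid.
    'words' are codebook indices of the image's SIFT descriptors,
    'xy' their normalized coordinates in [0, 1)."""
    feats = []
    for l in range(levels):
        cells = 2 ** l                           # level 0: 1 cell, level 1: 4, ...
        w = 1.0 / 2 ** (levels - 1 - l)          # finer levels weighted more highly
        cell_idx = np.floor(xy * cells).astype(int)
        flat = cell_idx[:, 0] * cells + cell_idx[:, 1]   # cell of each keypoint
        for c in range(cells * cells):
            h = np.bincount(words[flat == c], minlength=vocab)
            feats.append(w * h)
    v = np.concatenate(feats).astype(float)
    return v / max(v.sum(), 1)                   # normalize across images

def spm_kernel(p, q):
    """Histogram-intersection kernel between two pyramid vectors."""
    return np.minimum(p, q).sum()
```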

FORMATION OF NEW KERNEL USING ITML TECHNIQUE

ITML technique
• The ITML technique formulates the problem as minimizing the differential relative entropy between two multivariate Gaussians under constraints on the distance function.
• This can be cast as a particular Bregman optimization problem: minimizing the LogDet divergence subject to linear constraints.

ADVANTAGES OF THE ITML TECHNIQUE
• It can handle a wide variety of constraints and can optionally incorporate a prior on the distance function.
• It is fast and scalable: no eigenvalue computations or semi-definite programming are required (a sketch using an off-the-shelf implementation follows).
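A sketch using the open-source metric-learn package's ITML implementation (not the authors' own code); the random X and y are toy stand-ins for the real feature vectors and bhava labels.

```python
import numpy as np
from metric_learn import ITML_Supervised

# Toy stand-ins (assumption): X would be the GIST/SIFT-derived vectors,
# y the bhava labels (nine classes).
rng = np.random.RandomState(0)
X = rng.rand(60, 32)
y = rng.randint(0, 9, size=60)

itml = ITML_Supervised(prior='identity')   # optional prior on the distance function
itml.fit(X, y)

A = itml.get_mahalanobis_matrix()          # learnt Mahalanobis matrix
K = X @ A @ X.T                            # kernel usable by a precomputed-kernel SVM
```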
EXPERIMENTATION

CLASSIFICATION OF GIST FEATURE BASED kNN CLASSIFIER - KATHAKALI DATASET

Classification accuracy in percentage

k value  Fold1  Fold2  Fold3  Fold4  Fold5  Fold6  Fold7  Fold8  Fold9  Fold10
k=3      62.74  50.79  49.58  58.89  49.23  63.24  53.13  52.90  50.24  54.76
k=5      45.57  37.23  39.99  49.40  43.33  48.89  47.73  40.52  38.82  39.90
k=10     30.22  25.56  22.67  32.45  28.89  33.54  22.39  34.58  28.54  24.32

Table: Classification accuracy of normalized GIST feature based kNN classifier

EXPERIMENTATION

CLASSIFICATION OF GIST FEATURE BASED kNN CLASSIFIER - EMOTION DATASET

Classification accuracy in percentage

Dataset  k value  fold1  fold2  fold3  fold4  fold5
JAFFE    k=1      91.22  90.38  92.00  89.93  88.89
         k=5      56.14  55.90  54.59  56.84  50.78
         k=10     24.56  22.76  24.90  23.00  26.65
KDEF     k=1      30.71  27.45  28.32  25.98  26.66
         k=5      25.54  23.33  26.00  24.02  25.83
         k=10     20.84  18.90  21.22  23.91  20.09

Table: Classification accuracy of GIST feature based kNN classifier for emotion datasets

EXPERIMENTATION

CLASSIFICATION OF SIFT FEATURE BASED kNN CLASSIFIER - KATHAKALI DATASET

Classification accuracy in percentage

k value  fold1  fold2  fold3  fold4  fold5  fold6  fold7  fold8  fold9  fold10
k=1      79.46  78.12  79.01  78.12  81.25  79.91  75.44  82.12  74.55  79.91
k=3      78.12  76.33  75.89  78.57  79.46  78.12  75.44  78.14  74.38  76.78
k=5      68.75  69.64  67.41  66.92  66.96  71.42  68.75  72.76  70.06  63.39
k=10     59.82  56.03  59.82  62.06  56.69  57.14  55.80  56.25  59.69  57.14

Table: Classification accuracy of SIFT feature based kNN classifier

EXPERIMENTATION

CLASSIFICATION OF SIFT FEATURE BASED kNN CLASSIFIER - EMOTION DATASET

Classification accuracy in percentage

Dataset  k value  fold1  fold2  fold3  fold4  fold5
JAFFE    k=1      88.88  89.76  89.45  85.34  89.91
         k=5      39.23  44.92  46.71  41.23  48.71
         k=10     23.58  26.60  30.23  31.24  22.78
KDEF     k=1      35.59  30.72  28.90  31.42  30.06
         k=5      27.78  29.32  25.14  25.36  27.34
         k=10     21.54  24.04  22.68  23.30  20.09

Table: Classification accuracy of SIFT feature based kNN classifier for emotion datasets

EXPERIMENTATION

CLASSIFICATION OF GIST FEATURE BASED SVM CLASSIFIER - KATHAKALI DATASET

Classification accuracy in percentage

Kernel type         fold1  fold2  fold3  fold4  fold5  fold6  fold7  fold8  fold9  fold10
Linear              74.34  65.13  71.71  73.68  65.13  76.32  71.05  72.36  67.76  73.68
Polynomial (d=2)    76.32  68.42  72.37  76.97  63.81  79.60  73.68  75.00  71.71  75.86
Gaussian (st=0.1)   63.16  57.23  61.84  61.18  46.71  53.94  59.86  58.89  58.55  61.18

Table: Classification accuracy of GIST feature based SVM classifier

EXPERIMENTATION

CLASSIFICATION OF GIST FEATURE BASED SVM CLASSIFIER - EMOTION DATASET

Classification accuracy in percentage

Dataset  Kernel              C value  fold1  fold2  fold3  fold4  fold5
JAFFE    Linear              830      87.71  85.03  83.56  85.56  85.90
         Polynomial (d=2)    290      87.71  85.03  83.56  85.56  85.90
         Polynomial (d=3)    185      86.06  87.77  84.62  83.33  82.89
         Gaussian (st=0.1)   530      91.22  89.03  90.54  88.09  89.73
KDEF     Linear              1000     41.04  39.81  38.84  40.28  37.24
         Polynomial (d=2)    825      47.08  45.01  46.18  44.33  45.90
         Polynomial (d=3)    286      41.29  39.21  40.17  38.47  37.28
         Gaussian (st=0.1)   150      41.29  40.67  39.54  38.23  39.61

Table: Classification accuracy of GIST feature based SVM classifier


EXPERIMENTATION

CLASSIFICATION OF SIFT FEATURE BASED SVM CLASSIFIER

• SIFT feature based SVM classification is done using the LIBSVM software; here we use the SPM kernel for SVM classification (a sketch of plugging a precomputed kernel into an SVM follows the table).

Dataset    Classification accuracy (%)
Kathakali  61.65
JAFFE      83.06
KDEF       49.18

Table: Classification accuracy of SIFT feature based SVM classifier for different datasets
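A minimal sketch of handing such a kernel to an SVM as a precomputed matrix; scikit-learn's SVC is used here for brevity, though LIBSVM itself (used in the slides) also accepts precomputed kernels.

```python
import numpy as np
from sklearn.svm import SVC

def train_precomputed_svm(K_train, y_train):
    """K_train[i, j] = SPM (or ITML-learnt) kernel value between
    training images i and j; y_train holds their class labels."""
    clf = SVC(kernel='precomputed', C=1.0)
    clf.fit(K_train, y_train)
    return clf

# Prediction needs the kernel between test and training images:
# K_test has shape (n_test, n_train); labels = clf.predict(K_test)
```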

EXPERIMENTATION

CLASSIFICATION OF SIFT FEATURE BASED SVM CLASSIFIER USING ITML LEARNT KERNEL

• In order to increase the classification accuracy, we use the ITML technique to learn an efficient kernel, and SVM classification is done using the learnt kernel.
• The classification accuracy of ITML kernel based SVM classification is given below.

Dataset    Classification accuracy (%)
Kathakali  65.54
JAFFE      91.06
KDEF       54.66

Table: Classification accuracy of SIFT feature based SVM classifier for different datasets using the ITML learnt kernel

RESULT ANALYSIS

Figure (Graph 1): Comparison of performances of GIST based kNN classifiers for different datasets.
Figure (Graph 2): Comparison of performances of SIFT based kNN classifiers for different datasets.
Figure (Graph 3): Comparison of performances of GIST based SVM classifiers for different datasets.
Figure (Graph 4): Comparison of performances of SIFT based SVM classifiers for different datasets.

CONCLUSION

CONCLUSION
• We have conducted experiments to classify emotions and Kathakali bhavas using different classifiers.
• The classification accuracy of the kNN based and SVM based classifiers is higher for the JAFFE dataset than for the Kathakali bhava dataset and the KDEF dataset.
• We implemented an SPM kernel based SVM classifier built on the SIFT feature representation of the image, and its performance is better than that of the other classifiers.
• We used the ITML based technique to further improve the performance of the SPM kernel based SVM classifier built on the SIFT feature representation. The ITML method improves the classification accuracies on all the datasets.

FUTURE WORK

Future Work
• Kernel learning becomes complex because the execution time of the kernel learning program scales exponentially as the number of images in the dataset increases.
• As future work, we plan to apply a new method to improve the speed and reduce the complexity of execution.
• To improve the speed of execution, we plan to apply a divide-and-conquer method: a big kernel is subdivided into smaller chunks of size m, where m ≪ n and n is the size of the original kernel.
• These smaller kernels are learned separately and then joined to obtain the final learnt kernel with less complexity and execution time (a speculative sketch follows this list).
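A speculative sketch of the idea, assuming ITML as the per-chunk learner and plain averaging as the join step; the slides do not specify how the chunk results are combined, and each chunk is assumed to contain examples from several classes.

```python
import numpy as np
from metric_learn import ITML_Supervised

def chunked_itml_kernel(X, y, m):
    """Learn a metric on chunks of size m << n and join the results.
    Averaging the chunk metrics is an illustrative assumption only."""
    metrics = []
    for start in range(0, len(X), m):
        Xc, yc = X[start:start + m], y[start:start + m]
        itml = ITML_Supervised(prior='identity')
        itml.fit(Xc, yc)                        # learn on one small chunk
        metrics.append(itml.get_mahalanobis_matrix())
    A = np.mean(metrics, axis=0)                # assumed join step
    return X @ A @ X.T                          # final learnt kernel
```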

REFERENCES

[1] A. Oliva and A. Torralba, "Gist of the scene", in Neurobiology of Attention, Elsevier, San Diego, CA, pp. 251-256.
[2] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon, "Information-Theoretic Metric Learning", Dept. of Computer Science, University of Texas at Austin, Austin, TX 78712.
[3] J. Wang, X. Wu, and C. Zhang, "Support vector machines based on K-means clustering for real-time business intelligence systems", International Journal of Business Intelligence and Data Mining, vol. 1, no. 1, pp. 54-64, 2005.
[4] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

Thank You

