Original Title: Paper 29 Ppt

Uploaded by Maz Har Ul


Content Based Image Retrieval

Presented By: Dr. Minakshi Banerjee

RCC Institute of Information Technology

Canal South Road, Beliaghata, Kolkata - 700015, West Bengal, India

Wednesday, 01.07.2015

CBIR: What is it?

- Content Based Image Retrieval (CBIR) searches for images based on features extracted directly from the image data.
- Techniques from pattern recognition, signal processing, and computer vision are commonly deployed.
- Each image is represented as a point in a feature space.
- It is difficult to handle high-dimensional features when classification and image retrieval tasks using similarity measurement are involved.

Objectives of the paper

- To use kernel-based techniques, as most real-world data requires nonlinear methods in order to perform tasks that involve the analysis and discovery of patterns successfully.
- To select the optimum number of clusters.
- To use a one-class classifier, as this classifier is biased to the learned concept of a particular category.

Proposed Method

Pipeline (reconstructed from the block diagram):

- Extract visual features (CSD) from the image database.
- Map the high-dimensional feature space to a lower-dimensional space using kernel PCA.
- Cluster the mapped features using PAM, knowing the number of clusters from the optimum silhouette width plot.
- Compute the query image's features in the mapped space.
- Measure similarity using the L1 norm and display the 36 nearest images.
- The user marks the displayed images as relevant and non-relevant.
- Accumulate test samples from the cluster the query image belongs to, removing outliers by SVC (reduced database).
- Classify with a one-class SVM in the mapped space and automatically select all relevant images.
- Select the original CSD feature vectors corresponding to all positive samples.
- Display the 36 nearest images using the L1 norm.
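The similarity step above (rank by L1 distance, show the 36 nearest) can be sketched in a few lines of numpy; the feature dimensionality and array names here are illustrative, not taken from the paper.

```python
import numpy as np

def retrieve(query_feat, db_feats, top=36):
    """Rank database images by L1 distance to the query feature vector."""
    d = np.abs(db_feats - query_feat).sum(axis=1)  # L1 norm per database image
    return np.argsort(d)[:top]                     # indices of the nearest images

rng = np.random.RandomState(0)
db = rng.rand(100, 64)       # stand-in feature vectors for 100 database images
q = db[7] + 0.001            # a query almost identical to image 7
print(retrieve(q, db, top=5)[0])   # image 7 ranks first
```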

Tackling the curse of dimensionality: What is this and Why?

Why?

Kernel Principal Component Analysis (KPCA)

The basic idea is to first map the input space into a feature space via a nonlinear map Φ(x) and then compute the principal components in that feature space.
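As a sketch (not the paper's implementation), KPCA with a Gaussian kernel reduces to eigendecomposing a centered kernel matrix; the data and parameter values below are arbitrary.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA with a Gaussian (RBF) kernel, for illustration only."""
    # Pairwise squared Euclidean distances, then the kernel matrix K
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose; numpy returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projection of training point i onto component j is sqrt(lambda_j) * v_ij
    return vecs * np.sqrt(np.maximum(vals, 0))

X = np.vstack([np.random.RandomState(0).randn(20, 5),
               np.random.RandomState(1).randn(20, 5) + 4])
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (40, 2)
```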

Definition

A reproducing kernel k is a function k : X × X → R over a set X = {x1, x2, ..., xl}, where X is typically a subset of R^N.

Evaluating the kernel amounts to mapping the patterns into a higher dimensional space F, and then taking the dot product there.

A feature map Φ : R^N → F is a function that maps the input data patterns into a higher dimensional space F.

Illustration

Using a feature map Φ, the data is mapped from the input space into a higher dimensional feature space F.

Kernel Trick

We would like to compute the dot product in the higher dimensional space, Φ(x)·Φ(y). To do this we only need to compute k(x, y), since

k(x, y) = Φ(x)·Φ(y).

Note that the feature map Φ is never explicitly computed. We avoid this, and therefore avoid a burdensome computational task.
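A small numerical check of the trick, using the homogeneous degree-2 polynomial kernel k(x, y) = (x·y)², whose explicit feature map is known for 2-D inputs; the vectors are arbitrary examples.

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for 2-D input: (x1^2, x2^2, sqrt(2) x1 x2)
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

lhs = phi(x) @ phi(y)   # dot product computed explicitly in feature space
rhs = (x @ y) ** 2      # the same value via the kernel in input space
print(lhs, rhs)         # both ≈ 1.0
```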

Example kernels

Gaussian: k(x, y) = exp(−‖x − y‖² / (2σ²))
Polynomial: k(x, y) = (x·y + c)^d, c ≥ 0
Sigmoid: k(x, y) = tanh(κ(x·y) + Θ)

Nonlinear separation can be achieved.
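These three kernels are one-liners in numpy; the parameter defaults below are arbitrary choices for illustration.

```python
import numpy as np

def gaussian(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def polynomial(x, y, c=1.0, d=2):
    # k(x, y) = (x.y + c)^d, c >= 0
    return (x @ y + c) ** d

def sigmoid(x, y, kappa=0.5, theta=0.0):
    # k(x, y) = tanh(kappa (x.y) + theta)
    return np.tanh(kappa * (x @ y) + theta)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(gaussian(x, y))    # exp(-1) ≈ 0.3679
print(polynomial(x, y))  # (0 + 1)^2 = 1.0
print(sigmoid(x, y))     # tanh(0) = 0.0
```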

Nonlinear Separation

Mercer Theory

Input Space to Feature Space

Necessary condition for the kernel (Mercer) trick:

k(x, y) = Σ_{i=1}^{N_F} λ_i ψ_i(x) ψ_i(y), with λ_i ≥ 0,

where ψ_i is the normalized eigenfunction, analogous to a normalized eigenvector u_i in the matrix decomposition K = Σ_i λ_i u_i u_i^T.
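Mercer's condition can be checked numerically for a sampled kernel matrix: a valid kernel yields a symmetric positive semidefinite matrix, so all eigenvalues λ_i are (up to floating-point noise) nonnegative. A sketch with a Gaussian kernel on random data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(30, 3)

# Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / 2)
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2.0)

# Mercer: K is symmetric PSD, i.e. all lambda_i >= 0 in K = sum_i lambda_i u_i u_i^T
vals = np.linalg.eigvalsh(K)
print(vals.min() >= -1e-10)  # True
```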

Mercer :: Linear Algebra

Kernel Principal Component Analysis ....

KPCA and Dot Products

From Feature Space to Input Space

Projection Distance Illustration

Minimizing Projection Distance

Fixed-point iteration

One Class Support Vector Machine (OCSVM)

- OCSVM maps the data into a high dimensional feature space using a kernel and iteratively finds the maximal margin hyperplane which best separates the training data from the origin.
- The quadratic programming minimization function is (in the standard Schölkopf formulation):

min_{w, ξ, ρ} (1/2)‖w‖² + (1/(νl)) Σ_{i=1}^{l} ξ_i − ρ
subject to (w · Φ(x_i)) ≥ ρ − ξ_i, ξ_i ≥ 0

One Class Support Vector Machine (OCSVM)....

Here, (w, ρ) are a weight vector and offset parameterizing a hyperplane in the feature space associated with the kernel.

Parameter ν:
- it is an upper bound on the fraction of outliers (training examples regarded as out-of-class), and
- it is a lower bound on the fraction of training examples used as Support Vectors.

Parameter ξ_i:
- to deal with non-separable data (or to create a soft margin), slack variables ξ_i are introduced to allow some data points to lie within the margin.

Using Lagrange techniques and a kernel function for the dot-product calculations, the decision function becomes:

f(x) = sgn(Σ_i α_i k(x_i, x) − ρ)

One Class Support Vector Machine (OCSVM)....

The resulting hyperplane has maximal distance from the origin in feature space F and separates all the data points from the origin.
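For reference, scikit-learn ships this classifier as `OneClassSVM`; a minimal usage sketch on synthetic data (not the paper's setup or parameters):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X = rng.randn(100, 2)                        # training data for the learned class
X_test = np.array([[0.0, 0.0], [6.0, 6.0]])  # a central point and a far-away point

# nu upper-bounds the fraction of training outliers and
# lower-bounds the fraction of training points kept as support vectors.
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)
print(clf.predict(X_test))   # +1 = in-class, -1 = out-of-class
```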

Partitioning Around Medoids (PAM)

A medoid is the most central object (the best representative) of each cluster.

This allows using only the dissimilarities d(r, s) of all pairs (r, s) of the objects.

The aim is to find the clusters C1, C2, ..., Ck with medoids m1, ..., mk that minimize the target function:

Σ_{i=1}^{k} Σ_{r ∈ Ci} d(r, m_i)

Partitioning Around Medoids (PAM): Algorithm

- Randomly select k objects m1, m2, ..., mk as initial medoids.
- Until the maximum number of iterations is reached or no improvement of the target function has been found, do:
  - form clusters by associating each point to the nearest medoid and calculate the value of the target function;
  - for each pair (m_i, x_s), check whether the swap would improve the target function by taking x_s to be a new medoid point and m_i to be a non-medoid point.
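The steps above can be sketched as a compact PAM over a precomputed dissimilarity matrix (a from-scratch illustration, not the paper's implementation; the swap search is the naive exhaustive variant):

```python
import numpy as np

def pam(D, k, max_iter=100, seed=0):
    """Minimal PAM on a precomputed dissimilarity matrix D."""
    n = D.shape[0]
    rng = np.random.RandomState(seed)
    medoids = list(rng.choice(n, k, replace=False))  # initial medoids

    def cost(meds):
        # Target function: sum over points of the distance to the nearest medoid
        return D[:, meds].min(axis=1).sum()

    best = cost(medoids)
    for _ in range(max_iter):
        improved = False
        for i in range(k):               # try replacing each medoid m_i ...
            for s in range(n):           # ... with every non-medoid x_s
                if s in medoids:
                    continue
                trial = medoids[:i] + [s] + medoids[i + 1:]
                c = cost(trial)
                if c < best:
                    medoids, best, improved = trial, c, True
        if not improved:
            break
    labels = D[:, medoids].argmin(axis=1)  # assign each point to its nearest medoid
    return medoids, labels

# Two well-separated 1-D groups; PAM should split them cleanly.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
D = np.abs(x[:, None] - x[None, :])
medoids, labels = pam(D, k=2)
print(labels[0] != labels[3], len(set(labels)) == 2)  # True True
```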

Number of clusters selection using PAM

p(r): the average dissimilarity of the object r to the objects of the same cluster.
q(r): the average dissimilarity of the object r to the objects of the neighboring cluster.

Silhouette of the object r, the measure of how well r is clustered:

silw(r) = (q(r) − p(r)) / max(p(r), q(r)) ∈ [−1, 1]

- close to 1 ... the object r is well clustered
- close to 0 ... the object r is at the boundary of clusters
- less than 0 ... the object r is probably placed in a wrong cluster

Number of clusters selection using PAM: Example

In the following figure, for each number of clusters k (k = 1, 2, 3, ..., 25) the silhouette width of every point is computed and the average is found. The cluster number that gives the maximum average silhouette width is then selected. The figure makes it obvious that the number of clusters is 3 when considering up to 25 clusters.
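The same selection rule can be sketched with scikit-learn, using KMeans as a stand-in for PAM (which scikit-learn does not ship); note the silhouette is undefined for k = 1, so the scan starts at 2. The data is synthetic, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.RandomState(0)
# Three well-separated blobs, so the best k should come out as 3.
X = np.vstack([rng.randn(30, 2) + c for c in ([0, 0], [8, 0], [0, 8])])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # average silhouette width for this k

best_k = max(scores, key=scores.get)          # k with the maximum average silhouette
print(best_k)   # 3
```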

Support Vector Clustering (SVC)

Let {x_i} be a dataset with dimensionality d. SVC computes a sphere of radius R and center a containing all these data. The computation of the smallest such sphere is obtained by solving a minimization problem; its Lagrangian formulation produces the following expression:

‖x − a‖² = (x·x) − 2 Σ_{i=1}^{N} β_i (x·x_i) + Σ_{i=1}^{N} Σ_{j=1}^{N} β_i β_j (x_i·x_j)

A necessary condition is β_i ≥ 0.
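The expansion holds whenever the centre is written as the combination a = Σ_i β_i x_i; a quick numerical check with arbitrary data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3)            # data points x_1 .. x_N
beta = rng.rand(5)
beta /= beta.sum()             # beta_i >= 0, summing to 1
a = beta @ X                   # sphere centre a = sum_i beta_i x_i
x = rng.randn(3)               # an arbitrary test point

direct = np.sum((x - a) ** 2)  # ||x - a||^2 computed directly
# Dot-product expansion:
# (x.x) - 2 sum_i beta_i (x.x_i) + sum_i sum_j beta_i beta_j (x_i.x_j)
expanded = x @ x - 2 * beta @ (X @ x) + beta @ (X @ X.T) @ beta
print(np.isclose(direct, expanded))  # True
```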

Outlier detection criteria of Support Vector Clustering (SVC): Example
