
Pi19404

March 31, 2014

Contents

0.1 Introduction
0.1.1 Loss Function
0.1.2 Theano
0.1.3 Example
0.2 Theano Code
References

**Training and Testing Multi-class Logistic Classifier**

0.1 Introduction

In this article we will look at training and testing of a multi-class logistic classifier.

Logistic regression is a probabilistic, linear classifier. It is parametrized by a weight matrix W and a bias vector b. Classification is done by projecting data points onto a set of hyperplanes, the distance to which is used to determine a class membership probability. Mathematically this can be expressed as

$$P(Y = i \mid x; W, b) = \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}$$

Corresponding to each class $y_i$, the logistic classifier is parameterized by a set of parameters $W_i, b_i$.

These parameters are used to compute the class probability. Given an unknown vector $x$, the prediction is performed as

$$y_{pred} = \operatorname{argmax}_i P(Y = i \mid x; W, b) = \operatorname{argmax}_i \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}$$

Given a set of labelled training data $\{X_i, Y_i\}$, where $i \in 1, \dots, N$, we need to estimate these parameters.
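The class-probability and prediction formulas above can be sketched numerically as follows. This is a plain NumPy illustration, not the Theano implementation discussed later; the weight matrix `W`, bias `b`, and input `x` are made-up toy values.

```python
import numpy as np

def softmax_probabilities(x, W, b):
    """Compute P(Y = i | x; W, b) for every class i via the softmax."""
    scores = W.dot(x) + b              # one score W_i x + b_i per class
    scores -= scores.max()             # subtract the max for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

def predict(x, W, b):
    """y_pred = argmax_i P(Y = i | x; W, b)."""
    return int(np.argmax(softmax_probabilities(x, W, b)))

# toy example: 3 classes, 2-dimensional input
W = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([1.0, 0.2])
p = softmax_probabilities(x, W, b)
print(p, predict(x, W, b))
```

Subtracting the maximum score before exponentiating leaves the probabilities unchanged but avoids overflow for large scores.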



0.1.1 Loss Function

Ideally we would like to compute the parameters so that the 0-1 loss is minimized:

$$\ell_{0,1} = \sum_{i=0}^{|D|} I_{f(x^{(i)}) \neq y^{(i)}}$$

$$f(x) = \operatorname{argmax}_k P(Y = y_k \mid x, \theta)$$

where $P(Y = y_k \mid x, \theta)$ is modelled using the logistic function.
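Concretely, the 0-1 loss simply counts the misclassified examples. A minimal sketch (the `predictions` and `labels` lists are hypothetical toy values):

```python
def zero_one_loss(predictions, labels):
    """Count the examples where the predicted class differs from the true class."""
    return sum(1 for p, y in zip(predictions, labels) if p != y)

# toy example: one of the four predictions is wrong
print(zero_one_loss([0, 1, 2, 1], [0, 1, 1, 1]))
```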

The 0-1 loss function is not differentiable, hence optimizing it directly for large models is computationally infeasible. Instead we maximize the log-likelihood of the classifier given the training data $D$. Maximum likelihood estimation is used to perform this operation: estimate the parameters so that the likelihood of the training data $D$ is maximized under the model parameters.

It is assumed that the data samples are independent, so the probability of the set is the product of the probabilities of the individual examples.

$$L(\theta = \{W, b\}, D) = \prod_{i=1}^{N} P(Y = y_i \mid X = x_i; W, b)$$

$$\ell(\theta, D) = \sum_{i=1}^{N} \log P(Y = y_i \mid X = x_i; W, b)$$

$$\theta^{*} = \operatorname{argmin}_{\theta} \left( -\sum_{i=1}^{N} \log P(Y = y_i \mid X = x_i; W, b) \right)$$
It should be noted that the likelihood of the correct class is not the same as the number of right predictions. The log-likelihood function can be considered a differentiable version of the 0-1 loss function. In the present application the negative log-likelihood is used as the loss function, and the optimal parameters are learned by minimizing it.
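The negative log-likelihood under the softmax model can be sketched numerically as follows. This is a NumPy illustration with toy arrays, not the Theano code or the MNIST data.

```python
import numpy as np

def negative_log_likelihood(X, y, W, b):
    """-sum_i log P(Y = y_i | x_i; W, b) under the softmax model."""
    scores = X.dot(W.T) + b                      # (N, n_classes) score matrix
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].sum()

# toy data: 2 examples, 2 features, 3 classes
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
W = np.zeros((3, 2))
b = np.zeros(3)
# with all-zero parameters every class has probability 1/3,
# so the loss is -2 * log(1/3) = 2 * log(3)
print(negative_log_likelihood(X, y, W, b))
```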



In the present application gradient-based methods are used for minimization. Specifically, stochastic gradient descent and conjugate gradient descent are used to minimize the loss function. The cost function is expressed as

$$-\ell(\theta, D) = -\sum_{i=1}^{N} \log P(Y = y_i \mid X = x_i; W, b)$$

$$= -\sum_{i=1}^{N} \log \frac{e^{W_{y_i} x_i + b_{y_i}}}{\sum_j e^{W_j x_i + b_j}}$$

$$= \sum_{i=1}^{N} \left[ -\left(W_{y_i} x_i + b_{y_i}\right) + \log \sum_j e^{W_j x_i + b_j} \right]$$

The first part of the sum is affine; the second is a log of a sum of exponentials, which is convex. Thus the loss function is convex, and we can compute the parameters corresponding to the global minimum of the loss function using gradient descent methods. To do so we compute the derivatives of the loss function $\ell(\theta, D)$ with respect to $\theta$: $\partial\ell/\partial W$ and $\partial\ell/\partial b$.
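These derivatives have a well-known closed form for the softmax model. The NumPy sketch below (toy data, not the Theano implementation) computes them and checks one entry of $\partial\ell/\partial W$ against a finite-difference approximation:

```python
import numpy as np

def nll_and_grads(X, y, W, b):
    """Negative log-likelihood and its gradients w.r.t. W and b."""
    N = len(y)
    scores = X.dot(W.T) + b
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)      # P(Y = j | x_i)
    loss = -np.log(probs[np.arange(N), y]).sum()
    delta = probs.copy()
    delta[np.arange(N), y] -= 1.0                  # P(j | x_i) - I[j = y_i]
    return loss, delta.T.dot(X), delta.sum(axis=0)  # loss, dW, db

rng = np.random.RandomState(0)
X = rng.randn(5, 4)
y = np.array([0, 1, 2, 1, 0])
W, b = rng.randn(3, 4), rng.randn(3)
loss, dW, db = nll_and_grads(X, y, W, b)

# finite-difference check of one entry of dW
eps = 1e-6
W2 = W.copy()
W2[1, 2] += eps
loss2, _, _ = nll_and_grads(X, y, W2, b)
print(dW[1, 2], (loss2 - loss) / eps)   # the two values should agree closely
```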

0.1.2 Theano

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It is an expression compiler, and can evaluate symbolic expressions when executed. Typically, programs which are implemented in C/C++ can be written concisely and efficiently in Theano. Computing the gradients in most programming languages (C/C++, Matlab, Python) involves manually deriving the expressions for the gradient of the loss with respect to the parameters, $\partial\ell/\partial W$ and $\partial\ell/\partial b$. This approach not only involves manual coding, but the derivatives can get difficult to compute for complex models.



With Theano, this work is greatly simplified, as it performs automatic differentiation.

0.1.3 Example

For demonstration we will use the MNIST dataset. The MNIST dataset consists of handwritten digit images; it is divided into 60,000 examples for the training set and 10,000 examples for testing. The official training set of 60,000 is in turn divided into an actual training set of 50,000 examples and 10,000 validation examples. All digit images have been size-normalized and centered in a fixed-size image of 28 x 28 pixels. In the original dataset each pixel of the image is represented by a value between 0 and 255, where 0 is black, 255 is white, and anything in between is a shade of grey. The dataset can be found at http://deeplearning.net/data/mnist/mnist.pkl.gz. The dataset is pickled and can be loaded using the Python pickle package. It consists of training, validation, and test sets. Each feature vector is of length 28 x 28 = 784, and the number of classes is 10.
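Loading a gzip-pickled dataset of this layout can be sketched with the standard library alone. This is an illustration, not the repository's `load_datasets.py`; since the real mnist.pkl.gz is not assumed to be present, a tiny fake dataset is written first so the loader has something to read. (The real MNIST pickle was written under Python 2 and may need `encoding='latin1'` when loaded under Python 3.)

```python
import gzip
import os
import pickle
import tempfile

def load_pickle_data(path):
    """Load (train, validation, test) sets from a gzipped pickle file,
    in the same layout as mnist.pkl.gz."""
    with gzip.open(path, 'rb') as f:
        train_set, valid_set, test_set = pickle.load(f)
    return train_set, valid_set, test_set

# demonstration with a tiny fake dataset instead of the real mnist.pkl.gz
fake = (([[0.0] * 784], [7]), ([[0.0] * 784], [3]), ([[0.0] * 784], [1]))
path = os.path.join(tempfile.mkdtemp(), 'fake_mnist.pkl.gz')
with gzip.open(path, 'wb') as f:
    pickle.dump(fake, f)

train, valid, test = load_pickle_data(path)
print(train[1])   # labels of the fake training set
```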

A class called LogisticRegression is defined which encapsulates the methods that are used to perform training and testing of the multi-class logistic regression classifier.
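A minimal NumPy sketch of such a class is shown below. It illustrates the interface only and is not the actual Theano-based LogisticRegression from the repository; the class name, method names, and toy data are assumptions for illustration.

```python
import numpy as np

class SoftmaxClassifier:
    """Toy multi-class logistic (softmax) classifier trained by batch gradient descent."""

    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_out, n_in))
        self.b = np.zeros(n_out)

    def _probs(self, X):
        scores = X.dot(self.W.T) + self.b
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(scores)
        return p / p.sum(axis=1, keepdims=True)

    def train(self, X, y, learning_rate=0.1, epochs=100):
        N = len(y)
        for _ in range(epochs):
            delta = self._probs(X)
            delta[np.arange(N), y] -= 1.0             # gradient of the NLL w.r.t. scores
            self.W -= learning_rate * delta.T.dot(X) / N
            self.b -= learning_rate * delta.sum(axis=0) / N

    def predict(self, X):
        return np.argmax(self._probs(X), axis=1)

# linearly separable toy problem
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])
clf = SoftmaxClassifier(n_in=2, n_out=2)
clf.train(X, y, learning_rate=0.5, epochs=200)
print(clf.predict(X))
```

Because the loss is convex, plain batch gradient descent is enough for this toy problem; the Theano version in the repository additionally handles minibatches, validation, and model persistence.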



0.2 Theano Code

The Python code for training and testing can be found in the git repository https://github.com/pi19404/OpenVision in the ImgML/LogisticRegression.py file. The file ImgML/load_datasets.py contains methods to load datasets from pickle files or SVM-format files.

""" symbolic expressions defining input and output vectors""" x=T.matrix('x'); y=T.ivector('y'); """ The mnist dataset in pickel format""" model_name1="/media/LENOVO_/repo/mnist.pkl.gz" """ creating object of class Logistic regression""" """ input is 28*28 dimension feature vector ,and output lables are digits from 0-9 """ classifier = LogisticRegression(x,y,28*28,10); """ loading the datasets""" [train,test,validate]=load_datasets.load_pickle_data(model_name1); """ setting the dataset""" classifier.set_datasets(train,test,validate); # #classifier.init_classifier(model_name1);n_out """ Training the classifiers""" classifier.train_classifier(0.13,1000,30); """ Saving the model """ classifier.save('1'); #x=classifier.train[0].get_value(borrow=True)[0]; #classifier.predict(x); """ Loading the model""" classifier.load('1') x=train[0].get_value(borrow=True); y=train[1].eval(); print 'True class:'+`y` xx,yy=classifier.predict(x); print 'Predicted class:' + `yy` classifier.testing();



C/C++ code has also been written using Eigen/OpenCV and incorporated into the OpenVision library. This can be found in the files ImgML/LogisticRegression.cpp and ImgML/LogisticRegression.hpp at https://github.com/pi19404/OpenVision.


Bibliography

[1] James Bergstra et al. "Theano: a CPU and GPU Math Expression Compiler". In: Proceedings of the Python for Scientific Computing Conference (SciPy). Oral presentation. Austin, TX, June 2010.

