
Error Table (number of misclassified test images)

            Test set
  Method |  A   |  B
  -------|------|-----
     0   | 221  | 239
     1   |  91  |  96
     2   |  22  |  28
     3   |  12  |  31

Classifier 1: Define M(p,q,xc,yc) as the central moment with center (xc,yc), and M(p,q) = M(p,q,0,0). For each (p,q) in {(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(3,1),(3,2),(4,1)}, we computed the centered moment m(p,q) = M(p,q, M(1,0)/M(0,0), M(0,1)/M(0,0)), averaged it over each class, and divided by the RMS of m(p,q) across all 10 classes. For each test image, we chose the class that minimizes the Euclidean distance between the image's m(p,q) vector and the class's average m(p,q) vector.

Classifier 2: We used eqs. 2.41-2.43 in the book to compute the covariance matrix for each class, then averaged the matrices entry-wise. We then applied eqs. 2.59-2.62 to compute the discriminant for each class, choosing the class with the largest discriminant as our classification.

Classifier 3:

    For each class
        For each training image of the class
            For each pixel in the image's 16x16 array
                Add 1/100 to the class's 16x16 grayscale array if the pixel is dark

This produces one average grayscale template per class; we then classify test data by Euclidean distance to the nearest template.

Classifier 4: Using the same grayscale arrays from Classifier 3, we used 90 classifiers, one for each pair of classes, where each classifier uses eq. 89 to determine which of the pair of classes "wins". The class with the most wins is chosen as our classification.

We noticed that the errors decrease with increasing method index on test set A, and on test set B through Method 2 (Method 3's set-B error rises slightly, from 28 to 31). This is consistent with Classifier 2 and Classifier 4 being designed to minimize error, whereas Classifier 1 and Classifier 3 are not. It also suggests that the per-pixel classifiers outperform the moment classifier, which makes sense since they work in 256 dimensions instead of just 10.
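Classifier 1's moment matching can be sketched as follows. This is not our actual code, just a minimal reconstruction of the procedure described above; the function and variable names are our own, and `train` is assumed to be a dict mapping class labels to lists of 16x16 nonnegative image arrays.

```python
import numpy as np

# the (p, q) moment orders listed above
ORDERS = [(1,1),(1,2),(1,3),(1,4),(2,1),(2,2),(2,3),(3,1),(3,2),(4,1)]

def central_moments(img):
    """m(p,q): central moments of img about its centroid, for each order in ORDERS."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (img * xs).sum() / m00          # M(1,0)/M(0,0)
    ybar = (img * ys).sum() / m00          # M(0,1)/M(0,0)
    return np.array([(img * (xs - xbar)**p * (ys - ybar)**q).sum()
                     for p, q in ORDERS])

def fit(train):
    """Per-class average moment vectors, normalized by the per-feature RMS over classes."""
    means = {c: np.mean([central_moments(im) for im in ims], axis=0)
             for c, ims in train.items()}
    feats = np.array(list(means.values()))
    rms = np.sqrt((feats**2).mean(axis=0))  # RMS of m(p,q) across the classes
    return {c: v / rms for c, v in means.items()}, rms

def classify(img, means, rms):
    """Pick the class whose average m(p,q) is nearest in Euclidean distance."""
    v = central_moments(img) / rms
    return min(means, key=lambda c: np.linalg.norm(v - means[c]))
```

A training image fed back through `classify` lands in its own class, since its normalized moment vector coincides with the class average when the class's training images are identical.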
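For Classifier 2 we cannot reproduce the book's eqs. 2.41-2.43 and 2.59-2.62 here, but assuming they compute the class-conditional means and covariances and the Gaussian discriminant with a shared covariance, the procedure reduces to a linear discriminant of the form g_i(x) = mu_i' S^-1 x - (1/2) mu_i' S^-1 mu_i + ln P(w_i). A sketch under that assumption (all names are ours, and `train` maps labels to (n_i, d) feature arrays):

```python
import numpy as np

def fit_lda(train):
    """Class means plus the entry-wise average of the per-class covariances."""
    mus = {c: X.mean(axis=0) for c, X in train.items()}
    sigma = np.mean([np.cov(X, rowvar=False) for X in train.values()], axis=0)
    return mus, np.linalg.pinv(sigma)      # pinv guards against singular covariances

def discriminant(x, mu, siginv, logprior=0.0):
    """Linear discriminant for a shared covariance (assumed form of eqs. 2.59-2.62)."""
    w = siginv @ mu
    return w @ x - 0.5 * mu @ w + logprior

def classify_lda(x, mus, siginv):
    """Choose the class with the largest discriminant."""
    return max(mus, key=lambda c: discriminant(x, mus[c], siginv))
```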
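The Classifier 3 loop above amounts to averaging a binary "dark pixel" mask over each class's training images (the report's 1/100 increment presumably reflects 100 training images per class; the sketch below generalizes to 1/N). Names here are ours, and `train` maps labels to lists of 16x16 boolean dark-pixel masks:

```python
import numpy as np

def fit_templates(train):
    """Average dark-pixel frequency per class: add 1/N to a cell for each dark pixel."""
    return {c: sum(np.asarray(im, dtype=float) for im in ims) / len(ims)
            for c, ims in train.items()}

def classify_template(img, templates):
    """Assign the class whose grayscale template is nearest in Euclidean distance."""
    x = np.asarray(img, dtype=float)
    return min(templates, key=lambda c: np.linalg.norm(x - templates[c]))
```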
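Classifier 4's voting scheme is one-vs-one classification. We cannot reproduce the book's eq. 89 here, so in this sketch the pairwise decision is an arbitrary callable `pair_decide(x, i, j)` standing in for it; everything else follows the report's description of tallying pairwise "wins":

```python
from itertools import combinations

def classify_pairwise(x, pair_decide, classes):
    """One-vs-one voting: pair_decide(x, i, j) returns the winner of pair (i, j);
    the class with the most wins is the prediction."""
    wins = {c: 0 for c in classes}
    for i, j in combinations(classes, 2):
        wins[pair_decide(x, i, j)] += 1
    return max(wins, key=wins.get)
```

With 10 classes this evaluates 45 unordered pairs; the report's count of 90 presumably treats each ordered pair as a separate classifier, which does not change the vote tally.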
