
**Part 1 – PCA Analysis**

1. Intensity PCA

Mean Face

20 eigenfaces with the largest eigenvalues, in descending order (shown as is, not added to the mean) – ordered across columns, then down rows.

Four examples of reconstructed faces vs. original testing faces:

Reconstruction Error Plot:
- 150 training images used to construct the eigenvectors
- Compressed 27 testing images along the eigenvectors
- Decompressed the 27 images and averaged the pixel error across all images

With 20 eigenvectors: the average squared difference per pixel was ~300, giving a root-mean-square error of ±17.21. The range of the pixel matrix was 255, so the pixel error was 6.74% of the pixel range.

As seen above, error decreases approximately exponentially as the number of eigenvectors used to reconstruct an image increases. This makes sense, because the eigenvectors are ordered by descending eigenvalue: the first eigenvector captures the most scatter across all the images and thus provides the most information. The next eigenvector provides information along an orthogonal dimension, so its information is not redundant, but it is less informative than the previous eigenvector, so error drops by a smaller amount. Each subsequent eigenvector provides still less information, so the slope of the error curve flattens.
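The compress/decompress loop described above can be sketched in a few lines. This is a minimal NumPy sketch (the report's actual code was written in Octave); the random arrays stand in for the real 150 training and 27 testing face images, so the sizes and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the 150 training and 27 testing images,
# flattened to 256 pixels each (random data, just to exercise the math).
train = rng.normal(size=(150, 256))
test = rng.normal(size=(27, 256))

mean_face = train.mean(axis=0)
# Eigenfaces: right singular vectors of the centered training matrix,
# already ordered by descending singular value (i.e. descending eigenvalue).
_, _, Vt = np.linalg.svd(train - mean_face, full_matrices=False)

def avg_pixel_error(k):
    """Average squared pixel error after compressing to the top k eigenvectors."""
    V = Vt[:k]
    centered = test - mean_face
    recon = centered @ V.T @ V + mean_face   # project onto span, then decode
    return np.mean((test - recon) ** 2)

# Error shrinks as more eigenvectors are kept, with diminishing returns.
errors = [avg_pixel_error(k) for k in (1, 5, 20, 100)]
```

Because the top-k subspaces are nested, adding an eigenvector can only remove residual, which is why the error curve is monotonically decreasing.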

**2. Geometry PCA – Using 87 Landmarks**

Mean Warp

5 Largest Eigen-Warpings (Added to mean for visualization)

As you can see above, the eigen-warpings indeed show different angles, rotations, and shapings of the face. Visually they approximately represent orthogonal warping dimensions, meaning they warp along different directions.

Reconstruction Error Plot:
- 150 training images used to construct the eigenvectors
- Compressed 27 testing images along the eigenvectors
- Decompressed the 27 images and averaged the landmark error across all images

With 20 Eigenvectors:

Each image has 87 landmarks, each with an (x, y) coordinate. Error was computed as the average over all x- and y-coordinate errors. The average squared difference per coordinate was ~2.5, giving a root-mean-square error of ±1.58. The empirical range of the coordinate matrix was 235.19, so the coordinate error was 0.67% of the range.

The geometric error plot followed a similar exponential curve to the intensity error, since the same methodology was used for PCA reconstruction. However, we obtained a lower error as a percentage of the range of our random variable. This is probably because intensity has 256² dimensions per image, while the landmarks have only 87 × 2 dimensions per image, so there is less information to capture for geometry. Holding the number of eigenvectors constant, PCA is expected to lose more information in the case with much larger dimensionality.

**3. Hybrid PCA – Geometry and Intensity PCA**

Here we first warp all intensity images to the mean landmark, then do intensity PCA on the aligned images, then warp each image back to its original landmarks using geometry PCA.
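The three-stage pipeline above can be sketched as follows. This is a hedged NumPy sketch (not the report's Octave code): `warp` is a stand-in stub for the real interpolation-based warping function, and the random arrays stand in for the real images and landmarks, so only the structure of the pipeline is meaningful.

```python
import numpy as np

def warp(img, src_lm, dst_lm):
    # Stand-in for the real piecewise, interpolation-based warp (identity here,
    # so the pipeline structure can run end to end).
    return img

rng = np.random.default_rng(0)
images = rng.normal(size=(150, 256))      # hypothetical flattened face images
landmarks = rng.normal(size=(150, 174))   # 87 landmarks * 2 coordinates
mean_lm = landmarks.mean(axis=0)

# 1. Warp every training image to the mean landmarks, then PCA the aligned pixels.
aligned = np.array([warp(im, lm, mean_lm) for im, lm in zip(images, landmarks)])
mean_face = aligned.mean(axis=0)
_, _, Vt_int = np.linalg.svd(aligned - mean_face, full_matrices=False)

# 2. Separate PCA on the landmark coordinates.
_, _, Vt_geo = np.linalg.svd(landmarks - mean_lm, full_matrices=False)

def hybrid_reconstruct(img, lm, k_int=20, k_geo=20):
    """Compress along k_int intensity and k_geo geometry eigenvectors, decode
    both, then warp the decoded pixels back to the decoded landmarks."""
    a = warp(img, lm, mean_lm) - mean_face
    img_hat = a @ Vt_int[:k_int].T @ Vt_int[:k_int] + mean_face
    lm_hat = (lm - mean_lm) @ Vt_geo[:k_geo].T @ Vt_geo[:k_geo] + mean_lm
    return warp(img_hat, mean_lm, lm_hat), lm_hat
```

Keeping the two PCA bases separate is what lets geometry and intensity be compressed with different numbers of eigenvectors later on.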

Note: intensity images are warped pixel by pixel to a landmark orientation via interpolation. Octave 3.2.4 does not support cubic interpolation, so I used linear interpolation.

Mean Alignment vs. Aligned Intensity Mean vs. Unaligned Intensity Mean (each training image was aligned to the mean landmarks to compute the aligned mean)

Here you can see some fuzzy interpolation in the Aligned Intensity Mean compared with the Unaligned Intensity Mean.

_____________________________________________________________________________________________

Reconstruction Error Plot:
- 150 training images used to construct the eigenvectors
- Compressed 27 testing images along the eigenvectors
- Decompressed the 27 images and averaged the pixel error across all images

With 20 eigenvectors each for geometry and intensity: the average squared difference per pixel was ~400, giving a root-mean-square error of ±20. The range of the pixel matrix was 255, so the pixel error was 8% of the range.

The graph below shows the pixel error holding the number of Geometry Eigenvectors constant at 20

We reached a slightly higher pixel error as a percentage of the pixel range using hybrid PCA reconstruction than with intensity PCA alone: 8% vs. 6.74%. At first it is counter-intuitive that we reach a higher error rate using 20 eigenvectors for geometry plus 20 for intensity than with just 20 eigenvectors for intensity. However, the hybrid compression pipeline has more stages where information loss occurs:

1. When aligning all intensity images to the mean landmarks, we lose information to interpolation (linear interpolation in this case).
2. When compressing an intensity image to 20 eigenvectors, we lose intensity information.
3. When compressing the landmarks to 20 eigenvectors, we lose geometric information.
4. When re-warping the decompressed intensity image, we further lose information to interpolation.

Thus, there are more information leaks in the hybrid PCA process than in the intensity PCA process alone, which explains the higher error rate even with more eigenvectors.

All reconstructed testing faces, original vs. reconstructed with intensity and geometry PCA, 20 eigenvectors each for geometry and intensity.

**4. Random Synthesis of Faces Using Geometry and Intensity PCA**

Below are 20 randomly synthesized faces. Each face is a reconstruction from 10 aligned intensity eigenvectors, warped back using 10 geometric eigenvectors. The scalar value for each eigenvector projection was obtained by randomly sampling from the scalar projections of our image data along that eigenvector. As you can see below, the images look quite warped in some cases, indicating a large warp scalar along one or more geometric eigenvectors. The fuzziness can be attributed to both pixel interpolation and warping.
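The sampling scheme above (draw each coefficient from the empirical distribution of training projections along that eigenvector) can be sketched like this. A minimal NumPy sketch with random stand-in data; the real pipeline would do the same for the geometry coefficients and then warp the result.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins: 150 aligned training faces of 256 pixels each.
train = rng.normal(size=(150, 256))
mean_face = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean_face, full_matrices=False)

# Project the training data onto the top 10 eigenvectors: one column of
# empirical scalar projections per eigenvector.
coeffs = (train - mean_face) @ Vt[:10].T          # shape (150, 10)

# Synthesize a face: for each eigenvector, resample one scalar projection
# from that eigenvector's empirical distribution, then decode.
sampled = np.array([rng.choice(coeffs[:, j]) for j in range(10)])
synth_face = mean_face + sampled @ Vt[:10]
```

Resampling from the empirical projections (rather than, say, a fitted Gaussian) keeps every sampled coefficient within the range actually observed in the training faces.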

**Part 2 – Fisher Discriminant Analysis**

1. 1-D Fisher Discriminant Using Mixed Intensity and Geometric Data

Training Plot: i) Discriminant from full, uncompressed data

78 males and 75 females were used to train our discriminant, but as you can see below there are only two visible points after projecting the genders onto our Fisher dimension. The reason is that the variance of the two classes after projection is 0! This makes sense, as the goal of the Fisher projection is to maximize between-class scatter and minimize within-class scatter; in our case the within-class scatter became 0. Inspection of our projection vector 'w' reveals that 22% of the dimensions were given a 0 weight, so the projection discarded information from 22% of our original dimensions. These dimensions were not useful for discrimination: they either increased within-class scatter or decreased between-class scatter. Since the sample variance of the two projected classes was 0, and thus equal, I chose as the discriminant a threshold "z" equal to the mean of the two projected class means. This is represented by the vertical line.
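The standard two-class Fisher computation behind this (w = Sw⁻¹(m₁ − m₂), with the threshold at the midpoint of the projected class means) can be sketched as follows. This is a NumPy sketch with random stand-in features, not the report's Octave code; `pinv` covers the singular within-class scatter that arises when there are fewer samples than dimensions, which is what produces the zero projected variance seen on the real training data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the 78 male / 75 female feature vectors.
males = rng.normal(loc=1.0, size=(78, 40))
females = rng.normal(loc=-1.0, size=(75, 40))

m1, m2 = males.mean(axis=0), females.mean(axis=0)
# Within-class scatter, summed over both classes.
Sw = (males - m1).T @ (males - m1) + (females - m2).T @ (females - m2)
# Fisher direction: maximizes between-class over within-class scatter.
w = np.linalg.pinv(Sw) @ (m1 - m2)

# Threshold z: midpoint of the two projected class means.
z = ((males @ w).mean() + (females @ w).mean()) / 2
pred_male = males @ w > z          # classify by which side of z a point falls
```

With well-separated stand-in classes like these, nearly all training points land on the correct side of z.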

As you can see from our testing data above, the 1-D Fisher discriminant misclassified only one point, and it was right on the threshold line. That is 5% of our total testing set.

With 1-D, the unknown testing data fell within a region halfway from the discriminant to either projected class mean.

**2. 2D Fisher Face Visualization – Full Uncompressed Data**

Here we visualize the separation when our data is projected onto two fisher dimensions: one for intensity and one for geometry. We used full uncompressed data to determine our fisher projections.
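The two axes of this visualization can be sketched as two independent 1-D Fisher projections, one over the intensity features and one over the geometry features, so each face becomes a 2-D point. A hedged NumPy sketch with random stand-in feature blocks; the dimensions and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical intensity and geometry feature blocks per subject.
male_int = rng.normal(1.0, 1.0, (78, 30))
male_geo = rng.normal(1.0, 1.0, (78, 12))
fem_int = rng.normal(-1.0, 1.0, (75, 30))
fem_geo = rng.normal(-1.0, 1.0, (75, 12))

def fisher_direction(a, b):
    """1-D Fisher direction w = Sw^-1 (m_a - m_b) for two classes."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    Sw = (a - ma).T @ (a - ma) + (b - mb).T @ (b - mb)
    return np.linalg.pinv(Sw) @ (ma - mb)

w_int = fisher_direction(male_int, fem_int)   # x-axis: intensity discriminant
w_geo = fisher_direction(male_geo, fem_geo)   # y-axis: geometry discriminant

# Each face becomes a 2-D point (intensity projection, geometry projection).
male_pts = np.column_stack([male_int @ w_int, male_geo @ w_geo])
fem_pts = np.column_stack([fem_int @ w_int, fem_geo @ w_geo])
```

Plotting `male_pts` against `fem_pts` gives the kind of scatter shown above, with a separating line drawn between the two clusters.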

Once again, as seen above, variance of projected distributions for each class is zero on training data.

For the testing data, we see that it appears to be linearly separable, although there is some clustering between classes near the center of the plot. The separating line also looks orthogonal to the line between the two projected training distributions.

With 2 dimensions, 3 points in the unknown data veer towards the triangle class (male), and one towards the female class. This is contrary to the 1-D case, where 3 points veered towards the female class. However, paying attention to the scale of the graph, they are still very close to the discriminant line. I performed a visual inspection of the unknown faces, and it looks like 3 faces are male and one is female. Thus it seems the 2-D projection works better on the unknown cases.

Additional Analysis: Mixed 1-D Fisher Using Compressed Data

Here I again used 10 eigenvectors for geometry and 10 for intensity, compressing each image into 20 scalar values, one along each eigenvector. I then performed mixed 1-D Fisher analysis on the compressed images.

As you can see above, the projected training data was clearly not well separated.

As you can see above, the projected testing data was also not well separated.

The poor performance of Fisher analysis on PCA-compressed data should not be surprising. PCA is a generative technique: it discovers the best dimensions along which to represent each image and compresses along those dimensions, so we can reconstruct well using PCA. However, PCA preserves the most important generative information; its information-extraction policy does not consider anything about discrimination between classes or images. Thus, when we compress the images using PCA, we may well lose information that helps Fisher analysis discriminate between classes. In our case, we did not preserve enough discriminative information after PCA compression.
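This failure mode can be demonstrated on a toy dataset: put huge shared variance along one axis and the entire class separation along a low-variance axis, and the top principal component ignores the separation completely. A minimal NumPy sketch (synthetic data, not the report's faces):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D example: large shared variance along x, but the classes are
# separated only along the low-variance y axis.
n = 200
x = rng.normal(scale=10.0, size=(2 * n, 1))
y = np.concatenate([rng.normal(1.0, 0.1, n),
                    rng.normal(-1.0, 0.1, n)])[:, None]
X = np.hstack([x, y])

# The top principal component chases the high-variance x axis...
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# ...so its loading on the discriminative y axis is tiny: compressing to this
# one component discards almost all of the class separation that a Fisher
# discriminant would need.
y_loading = abs(Vt[0, 1])
```

A Fisher discriminant on the raw 2-D data would separate the classes perfectly along y, but after 1-D PCA compression that information is gone, mirroring what happened with the compressed face data.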
