
Gautam Kumar

(22MS1014)

Assignment 2
Q. Why are the following used?
a.) KMO: - The Kaiser-Meyer-Olkin (KMO) test measures how suitable your data are for
Factor Analysis. It reports a sampling-adequacy statistic for each variable in the model as
well as for the model as a whole. The statistic reflects the proportion of variance among the
variables that might be common variance: it compares the observed correlations with the
partial correlations, so the closer the value is to 1, the better suited the data are for Factor
Analysis.
Kaiser assigned the following reference labels to the values:
0.00 to 0.49: unacceptable
0.50 to 0.59: miserable
0.60 to 0.69: mediocre
0.70 to 0.79: middling
0.80 to 0.89: meritorious
0.90 to 1.00: marvellous
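
For illustration, a minimal sketch of computing KMO in Python, assuming the observed variables sit in a pandas DataFrame and that the third-party factor_analyzer package is available (the file name survey.csv is a placeholder, not part of the assignment):

import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo

# Load the observed variables (placeholder file name)
data = pd.read_csv("survey.csv")

# calculate_kmo returns per-variable KMO values and the overall KMO statistic
kmo_per_variable, kmo_total = calculate_kmo(data)

print("Overall KMO:", kmo_total)          # values near 1 indicate data well suited to factoring
print("Per-variable KMO:", kmo_per_variable)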

b.) Bartlett Test: - Bartlett's test is used in statistics to determine whether k samples come
from populations with equal variances. Equal variances across populations are referred to
as homoscedasticity or homogeneity of variance. Some statistical tests, such as ANOVA,
assume that variances are equal across groups or samples, and Bartlett's test can be used to
check that assumption. It is appropriate for normally distributed data; several other tests
for variance equality (homogeneity) among groups exist. In the context of factor analysis,
the closely related Bartlett's test of sphericity checks whether the correlation matrix differs
significantly from an identity matrix, i.e. whether the variables are correlated enough for
factoring to be meaningful.
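
As a sketch, assuming SciPy for the equal-variance form of the test and the factor_analyzer package for the sphericity form used in factor analysis (the simulated groups and the DataFrame df are placeholders):

import numpy as np
from scipy.stats import bartlett

# Three placeholder samples drawn from roughly normal populations
rng = np.random.default_rng(0)
group_a = rng.normal(0, 1.0, size=50)
group_b = rng.normal(0, 1.1, size=50)
group_c = rng.normal(0, 0.9, size=50)

# Bartlett's test of equal variances: a small p-value suggests the variances differ
stat, p_value = bartlett(group_a, group_b, group_c)
print("Bartlett statistic:", stat, "p-value:", p_value)

# For factor analysis, the sphericity form of the test is commonly used instead:
# from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity
# chi_square, p = calculate_bartlett_sphericity(df)   # df holds the observed variables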
c.) Determinant Test: - The determinant function takes a square matrix as input and returns
a single scalar value. In factor analysis the determinant of the correlation matrix is examined
as a data-adequacy check: it must be positive, and a very small value signals multicollinearity
among the variables. A common rule of thumb is that the determinant should be greater than
about 0.00001.
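
A minimal check with NumPy, assuming the observed variables are in a pandas DataFrame (df and the 0.00001 threshold follow the rule of thumb mentioned above):

import numpy as np
import pandas as pd

# Placeholder: df holds the observed variables, one column per variable
df = pd.read_csv("survey.csv")

# Determinant of the correlation matrix
corr = df.corr().to_numpy()
det = np.linalg.det(corr)

print("Determinant of correlation matrix:", det)
if det > 1e-5:
    print("Passes the common rule-of-thumb adequacy check")
else:
    print("Determinant is very small: possible multicollinearity")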

Q. Why is factor analysis done?


Factor analysis is a technique for condensing a large number of variables into a smaller
number of factors. The method extracts the maximum common variance from all variables
and places it into a single score, which can then be used as an index of all the variables in
further analysis. Factor analysis is part of the general linear model (GLM) and makes several
assumptions: the relationships are linear, there is no multicollinearity, the relevant variables
are included in the analysis, and there is a true correlation between the variables and the
factors. Several extraction methods are available, but principal component analysis is the
most often utilised.
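
A minimal sketch of an exploratory factor analysis in Python, assuming the factor_analyzer package (the two-factor setting and the varimax rotation are illustrative choices, not prescriptions):

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder data set of observed variables
df = pd.read_csv("survey.csv")

# Fit an exploratory factor analysis with an assumed two factors and varimax rotation
fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(df)

# Loadings show how strongly each variable relates to each factor
print(fa.loadings_)

# Variance explained by each factor
print(fa.get_factor_variance())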

Q. Why is parallel analysis done?


Parallel analysis, commonly known as Horn's parallel analysis, is a statistical method for
deciding how many components to retain in a principal component analysis or how many
factors to retain in an exploratory factor analysis. It identifies that number using random-data
simulation: a Monte Carlo procedure generates artificial data sets with the same dimensions
as the actual (real) data set, and the eigenvalues of those random data sets are estimated.
Components or factors are retained only when their eigenvalues from the real data exceed
the corresponding eigenvalues obtained from the random data.
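
A compact sketch of Horn's parallel analysis using only NumPy, assuming the data matrix X has observations in rows and variables in columns (the number of simulations and the use of the mean random eigenvalue are illustrative choices; percentile cut-offs are also common):

import numpy as np

def parallel_analysis(X, n_simulations=100, seed=0):
    """Return the suggested number of components via Horn's parallel analysis."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = X.shape

    # Eigenvalues of the correlation matrix of the real data, largest first
    real_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    # Eigenvalues of correlation matrices of random data sets of the same shape
    random_eigs = np.empty((n_simulations, n_vars))
    for i in range(n_simulations):
        R = rng.normal(size=(n_obs, n_vars))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]

    # Retain components whose real eigenvalue exceeds the mean random eigenvalue
    threshold = random_eigs.mean(axis=0)
    return int(np.sum(real_eigs > threshold))

# Example with placeholder data: 200 observations of 6 variables driven by 2 latent factors
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(200, 6))
print("Suggested number of components:", parallel_analysis(X))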
