Introduction

1.1 Introduction

There are a large number of higher learning institutions in Bangladesh that are governed by and under the supervision of the Ministry of Education (MOE), Bangladesh. To date, Bangladesh has 26 public universities (http://www.moedu.gov.bd), and the University of Dhaka (DU) is the largest university in Bangladesh, with a student population of over 115,000. Students are the main assets of universities. Students' performance (academic achievement) plays an important role in producing the best quality graduates, who will become great leaders and manpower for the country and thus be responsible for the country's economic and social development. The performance of students in universities should be a concern not only to administrators and educators, but also to corporations in the labor market. Academic achievement is one of the main factors considered by employers in recruiting workers, especially fresh graduates. Thus, students have to place the greatest effort into their studies to obtain good grades in order to fulfill employers' demands.

Students' academic achievement is measured by the Cumulative Grade Point Average (CGPA). The CGPA shows overall academic performance: it is the average of all examination grades for all semesters/years during the student's time at university. Many factors can act as barriers or catalysts to students achieving a high CGPA that reflects their overall academic performance. There are several ways to measure student academic performance, such as the cumulative grade point average (CGPA), the grade point average (GPA), tests and others. In Bangladesh, researchers evaluate student academic performance based on CGPA. In addition, a study in the United States by Nonis and Wright (2003) also evaluated student performance based on CGPA. Most studies conducted in other countries used GPA as the measure of academic performance, because they studied student performance for a particular semester or year. Other researchers used test results because they were studying performance in a specific subject.


1.2 Review of literature

Many studies have examined the factors influencing students' performance, such as demographics, active learning, student attendance, extracurricular activities, peer influence and course assessment. Studies have shown that demographic characteristics can influence academic excellence. Among these characteristics are parents' income, parents' education and English results.

Hossain (1994), in his work "A Study of Factor Analysis and its Application", discussed the background, advantages and limitations of factor analysis, the factor model, methods of analysis and the use of SPSS for factor analysis. He also gave an example based on motivation measures and students' attitudes.

Nasri and Ahmed (2007), in their study of business students (national and non-national) in the United Arab Emirates, indicate that non-national students had a higher grade point average and were more competent in English, which is reflected in a higher average for high school English.

Shamima Syeda Sultana (2003), in her work "Factor Analysis: An Application to Gross Domestic Product Data", discussed the factors affecting gross domestic product from 1995-96 to 1999-2000 for the 64 districts of Bangladesh. She also discussed the division-wise factors and compared the districts on those factors for that period.


1.3 Objectives of the study

The objectives of this study are:

1. To collect primary data for the factor analysis.
2. To reduce the list of variables to a few factors for modeling purposes.
3. To fit a model with these factors and check the significance of the model.
4. To find out the factors which influence the academic performance of the students.
5. To find out the associations between these variables.

1.4 Sources of data

The data used in this study were collected from students who live in Shahidullah Hall (a residential hall) of the University of Dhaka.

1.5 Data processing

After collecting the data, the following computer application packages are used to process the data:

1. SPSS 16
2. Microsoft Excel

1.6 Limitations of the study

The limitations of this study are:

1. Primary data are used, and the data collection procedure was not 100% accurate.
2. Only the 12 major variables are considered in this report; other, less influential variables have been ignored.
3. Non-residential students were not covered by the data collection procedure.
4. The data collection took a long time, so more extensive analysis could not be done.


Methods of Factor Analysis

2.1 Principal Component Analysis (PCA)

Principal component analysis (PCA) is a classical statistical method. It was first derived by Karl Pearson (1901) and rediscovered by Hotelling (1933). PCA is a multivariate procedure which rotates the data such that maximum variability is projected onto the axes. Essentially, a set of correlated variables is transformed into a set of uncorrelated variables which are ordered by decreasing variability; the uncorrelated variables are linear combinations of the original variables, and the last of them can be removed with minimum loss of real information. The main use of PCA is to reduce the dimensionality of a data set while retaining as much information as possible; it computes a compact and optimal description of the data set. In communication theory, it is known as the Karhunen-Loeve transform. A principal component analysis is concerned with explaining the variance-covariance structure of a high-dimensional random vector through a few linear combinations of the original component variables.

Consider a p-dimensional random vector X = (X1, X2, ..., Xp). The k principal components (k <= p) of X are k (univariate) random variables Y1, Y2, ..., Yk defined by the following formulae:

Y1 = l'1 X = l11 X1 + l12 X2 + ... + l1p Xp
Y2 = l'2 X = l21 X1 + l22 X2 + ... + l2p Xp
...
Yk = l'k X = lk1 X1 + lk2 X2 + ... + lkp Xp

where the coefficient vectors l1, l2, ..., lk are chosen such that they satisfy the following conditions:

First principal component = the linear combination l'1 X that maximizes Var(l'1 X) subject to ||l1|| = 1.
Second principal component = the linear combination l'2 X that maximizes Var(l'2 X) subject to ||l2|| = 1 and Cov(l'1 X, l'2 X) = 0.
jth principal component = the linear combination l'j X that maximizes Var(l'j X) subject to ||lj|| = 1 and Cov(l'k X, l'j X) = 0 for all k < j.

This says that the principal components are those linear combinations of the original variables which maximize the variance of the linear combination and which have zero covariance (and hence zero correlation) with the previous principal components.

It can be proved that there are exactly p such linear combinations. However, typically the first few of them explain most of the variance in the original data. So, instead of working with all the original variables X1, X2, ..., Xp, one would typically first perform PCA and then use only the first two or three principal components, say Y1, Y2 and Y3, in subsequent analysis.
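To make these definitions concrete, here is a minimal Python sketch (not part of the original SPSS analysis) that obtains the coefficient vectors l_j and the component scores Y_j from the eigendecomposition of the sample correlation matrix. The data are random placeholders standing in for the 56 x 11 survey matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 11))            # placeholder data: 56 students, 11 variables

# Standardize the variables and form the correlation matrix R
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)

# Eigendecomposition: the columns of `vectors` are the coefficient vectors l_j
values, vectors = np.linalg.eigh(R)
order = np.argsort(values)[::-1]         # sort eigenvalues in decreasing order
values, vectors = values[order], vectors[:, order]

# Principal component scores Y_j = l_j' X, computed here for all observations
Y = Z @ vectors

print("Var(Y_1) =", Y[:, 0].var(ddof=1), "which equals the first eigenvalue", values[0])
print("Cov(Y_1, Y_2) =", np.cov(Y[:, 0], Y[:, 1])[0, 1], "(approximately zero)")
```

The variance of the first score equals the largest eigenvalue and successive scores are uncorrelated, which is exactly the defining property stated above.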

2.2 Objectives of principal component analysis

1. To discover or reduce the dimensionality of the data set.
2. To identify new, meaningful underlying variables.
3. To derive a small number of linear combinations (principal components) of a set of variables that retain as much of the information in the original variables as possible.
4. To reveal relationships that were not previously suspected, thereby allowing interpretations that would not ordinarily result.


2.3 Properties of Principal Components

1. Transformation from z to y: The equation y = V'z represents a transformation, where y is the transformed variable, z is the original standardized variable and V is the premultiplier to go from z to y.

2. Orthogonal transformations simplify things: To produce a transformation vector for y for which the elements are uncorrelated is the same as saying that we want V such that Dy is a diagonal matrix; that is, all the off-diagonal elements of Dy must be zero. This is called an orthogonalizing transformation.

3. Infinite number of values for V: There are an infinite number of values for V that will produce a diagonal Dy for any correlation matrix R. Thus the mathematical problem "find a unique V such that Dy is diagonal" cannot be solved as it stands. A number of famous statisticians such as Karl Pearson and Harold Hotelling pondered this problem and suggested a "variance maximizing" solution.

4. Constrain V to generate a unique solution: The constraint on the numbers in V1 is that the sum of the squares of the coefficients equals 1.

5. Principal components maximize the variance of the transformed elements one by one: Hotelling (1933) derived the "principal components" solution. It proceeds as follows: for the first principal component, which will be the first element of y and be defined by the coefficients in the first column of V (denoted by V1), we want a solution such that the variance of y1 is maximized. Expressed mathematically, we wish to maximize

(1/N) * sum over i = 1, ..., N of y1i^2,

where y1i = v'1 zi and v'1 v1 = 1 (this is called "normalizing" v1).

6. Computation of the first principal component from R and v1: Substituting the middle equation in the first yields

(1/N) * sum over i = 1, ..., N of y1i^2 = v'1 R v1,

where R is the correlation matrix of Z, the standardized matrix of X, the original data matrix. Therefore, we want to maximize v'1 R v1 subject to v'1 v1 = 1.

2.4 Procedure for Principal Component Analysis

Principal components are particular linear combinations of the p random variables X1, X2, ..., Xp. The first principal component is the linear combination

Z1 = l11 X1 + l12 X2 + ... + l1p Xp = l'1 X

that varies as much as possible over the individuals, subject to the condition that l'1 l1 = l11^2 + ... + l1p^2 = 1. Thus the variance of Z1, V(Z1), is as large as possible given this constraint on the constants l1j. The second principal component, in turn,

Z2 = l21 X1 + l22 X2 + ... + l2p Xp = l'2 X,

is such that V(Z2) is as large as possible subject to l'2 l2 = 1, i.e. l21^2 + l22^2 + ... + l2p^2 = 1, and also to the condition that Z1 and Z2 are uncorrelated, i.e. Cov(Z1, Z2) = Cov(l'1 X, l'2 X) = 0. The other principal components are defined in the same way. If there are p variables, there can be up to p principal components. The variance-covariance matrix is

C = [ C11  C12  ...  C1p
      C21  C22  ...  C2p
      ...
      Cp1  Cp2  ...  Cpp ],

where the diagonal elements Cii are the variances of the Xi's and the off-diagonal Cij's are the covariances. The variances of the principal components are the eigenvalues of the matrix C.

2.5 The Steps in a Principal Component Analysis

1. First code the variables X1, X2, ..., Xp to have zero means and unit variances. Then calculate the n x p data matrix.
2. Compute the covariance matrix S or the correlation matrix R.
3. Get the eigenvalues (lambda_1, lambda_2, ..., lambda_p) and eigenvectors (a1, a2, ..., ap). Assuming that the eigenvalues are ordered as lambda_1 >= lambda_2 >= ... >= lambda_p >= 0, lambda_i corresponds to the ith principal component Zi = ai1 X1 + ai2 X2 + ... + aip Xp = a'i X. Now V(Zi) = lambda_i, and the constants ai1, ai2, ..., aip are the elements of the corresponding eigenvector. An important property of the eigenvalues is lambda_1 + lambda_2 + ... + lambda_p = c11 + c22 + ... + cpp; that is, the sum of the variances of the principal components is equal to the sum of the variances of the original variables. The proportion of total variation explained by the jth principal component is lambda_j / tr(S) (when S is used) or lambda_j / p (when R is used).
4. Choose the number of principal components: exclude principal components whose eigenvalues are less than the average, i.e. less than tr(S)/p (for S) or less than 1 (for R), or select enough components to explain a chosen percentage of the total variation (70%-90%).
5. Rescale the principal components (a*_j = lambda_j^(1/2) a_j); the correlation between the ith variable and the jth principal component is then a*_ji = lambda_j^(1/2) a_ji.
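Steps 1-4 can be mirrored in a few lines of Python; the sketch below uses simulated data in place of the study data, and the eigenvalue-greater-than-one rule and the 70% threshold are the two selection criteria described in step 4.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(56, 11))                      # step 1: placeholder coded data (n x p)

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # zero means, unit variances
R = np.corrcoef(Z, rowvar=False)                   # step 2: correlation matrix

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]     # step 3: ordered eigenvalues
prop = eigvals / eigvals.sum()                     # proportion of total variation (lambda_j / p)
cum = np.cumsum(prop)

keep_kaiser = int(np.sum(eigvals > 1))             # step 4a: eigenvalues greater than 1 (R used)
keep_70pct = int(np.searchsorted(cum, 0.70) + 1)   # step 4b: smallest k explaining >= 70%

print("eigenvalues:", np.round(eigvals, 3))
print("cumulative % explained:", np.round(100 * cum, 1))
print("components kept (eigenvalue > 1):", keep_kaiser)
print("components kept (>= 70% of variance):", keep_70pct)
```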

2.6 Factor Analysis

Factor analysis is a statistical approach that can be used to analyze interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors). Thousands of variables have been proposed to explain or describe the complex variety and interconnections of social and international relations, and perhaps an equal number of hypotheses and theories linking these variables have been suggested. Bryman and Cramer (1990) broadly defined factor analysis as "a number of related statistical techniques which help us to determine the characteristics which go together". Cureton and D'Agostino (1983) described factor analysis as "a collection of procedures for analyzing the relations among a set of random variables observed or counted or measured for each individual of a group". Hair et al. (1992) described factor analysis as "the statistical approach involving finding a way of condensing the information contained in a number of original variables into a smaller set of dimensions (factors) with a minimum loss of information".

2.7 Types of factor analysis

There are two main types:

1. Principal component analysis: This method provides a unique solution, so that the original data can be reconstructed from the results. It looks at the total variance among the variables, so the solution generated will include as many factors as there are variables, although it is unlikely that they will all meet the criteria for retention. There is only one method for completing a principal components analysis; this is not true of any of the other multidimensional methods described here.

2. Common factor analysis: This is what people generally mean when they say "factor analysis". This family of techniques uses an estimate of the common variance among the original variables to generate the factor solution. Because of this, the number of factors will always be less than the number of original variables. So, choosing the number of factors to keep for further analysis is more problematic using common factor analysis than in principal components.

2.8 Objectives of Factor Analysis

1. To reduce a large number of variables to a smaller number of factors for modeling purposes, where the large number of variables precludes modeling all the measures individually. As such, factor analysis is integrated in structural equation modeling (SEM), helping create the latent variables modeled by SEM. However, factor analysis can be and is often used on a stand-alone basis for similar purposes.
2. To select a subset of variables from a larger set, based on which original variables have the highest correlations with the principal component factors.
3. To create a set of factors to be treated as uncorrelated variables as one approach to handling multicollinearity in such procedures as multiple regression.
4. To validate a scale or index by demonstrating that its constituent items load on the same factor, and to drop proposed scale items which cross-load on more than one factor.
5. To establish that multiple tests measure the same factor, thereby giving justification for administering fewer tests.
6. To identify clusters of cases and/or outliers.
7. To determine network groups by determining which sets of people cluster together.

2.9 Assumptions of Factor Analysis

1. A large enough sample to yield reliable estimates of the correlations among the variables.
2. Statistical inference is improved if the variables are multivariate normal.
3. Relationships among the pairs of variables are linear.
4. Absence of outliers among the cases.
5. Some degree of collinearity among the variables, but not an extreme degree or singularity among the variables.
6. A large ratio of N/k (cases to variables).

2.10 Procedure of Factor Analysis

Factor analysis has a similar aim to principal component analysis: here also we reduce a set of p variables to a small number of indices or factors and hence elucidate the relationships between the variables. Spearman proposed the idea that the test scores are all of the form

Xi = li F + ei,

where Xi is the ith standardized test score with mean 0 and variance 1, li is a constant, F is a factor value having mean 0 and standard deviation 1 for all the individuals as a whole, and ei is the part of Xi that is specific to the ith test. F and ei are independent and V(F) is assumed to be unity. Also V(Xi) = li^2 + V(ei); since V(Xi) is also unity, li^2 + V(ei) = 1. Hence the constant li, also called the factor loading, is such that its square is the proportion of the variance of Xi that is accounted for by the factor.

In the same way, the generalized factor analysis model is

Xi = li1 F1 + li2 F2 + ... + lik Fk + ei,

where Xi is the ith response score (e.g. test score) with mean 0 and standard deviation 1; F1, F2, ..., Fk are k uncorrelated common factors, each with mean 0 and standard deviation 1; the lij's (j = 1, 2, ..., k) are the factor loadings for the ith response variable; and ei is a factor specific only to the ith response. The ei's are uncorrelated with any of the common factors and have zero means. In this model

V(Xi) = 1 = li1^2 V(F1) + li2^2 V(F2) + ... + lik^2 V(Fk) + V(ei) = li1^2 + ... + lik^2 + V(ei).

The quantity li1^2 + li2^2 + ... + lik^2 is called the communality, and V(ei) is called the specific variance or uniqueness, i.e. the part of the variance that is unrelated to the common factors.

2.11 Steps in Factor Analysis

1. Collect data and compute an intercorrelation matrix.
2. Compute the factorability of the matrix.
3. Extract an initial solution.
4. From the initial solution, determine the appropriate number of factors to be extracted in the final solution.
5. If necessary, rotate the factors to clarify the factor pattern in order to better interpret the nature of the factors.
6. Depending upon subsequent applications, compute a factor score for each subject on each factor.

2.12 The Factor Model

Let us assume that our Y variables are related to a number of functions operating linearly. That is,

Equation 1:
Y1 = x11 F1 + x12 F2 + ... + x1m Fm,
Y2 = x21 F1 + x22 F2 + ... + x2m Fm,
Y3 = x31 F1 + x32 F2 + ... + x3m Fm,
...
Yn = xn1 F1 + xn2 F2 + ... + xnm Fm,

where Y is a variable with known data, x is a constant, and F is a function, f( ), of some unknown variables.
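The steps listed above can also be run outside SPSS. The sketch below uses scikit-learn's FactorAnalysis on placeholder data; note that it is a maximum-likelihood-style extraction rather than the principal component method used later in this report, and the rotation argument assumes scikit-learn 0.24 or later.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(56, 11))                 # placeholder for the 56 x 11 survey data

# Step 1: intercorrelation matrix (inspected for factorability)
Z = StandardScaler().fit_transform(X)
R = np.corrcoef(Z, rowvar=False)

# Steps 3-5: extract a fixed number of factors, rotate them (varimax), and score subjects
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(Z)                  # step 6: one row of factor scores per subject

loadings = fa.components_.T                   # p x k matrix of factor loadings
communalities = (loadings ** 2).sum(axis=1)   # variance of each variable explained by the factors

print("loadings shape:", loadings.shape)
print("communalities:", np.round(communalities, 2))
```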

By application to the known data on the Y variables, factor analysis defines the unknown F functions. The loadings emerging from a factor analysis are the constants, and the factors are the F functions. The size of each loading for each factor measures how much that specific function is related to Y. For any of the Y variables of Equation 1 we may write

Equation 2: Y = x1 F1 + x2 F2 + x3 F3 + ... + xm Fm,

with the F's representing factors and the x's representing loadings.

2.13 Methods of Estimation

A variety of methods have been developed to extract factors from an intercorrelation matrix. SPSS offers the following methods:

1. Principal components method
2. Maximum likelihood method (a commonly used method)
3. Principal axis method, also known as common factor analysis
4. Unweighted least-squares method
5. Generalized least squares method
6. Alpha method
7. Image factoring

The most popular methods of estimation of the parameters of factor analysis are the principal component method and the maximum likelihood method. Principal component analysis is a separate technique from the ML method because it partitions the variance of the correlation matrix into new principal components (Zillmer and Vuz, 1995). In this method, principal component analysis transforms the correlation matrix into new, smaller sets of linear combinations of independent (i.e. uncorrelated) principal components (Zillmer and Vuz, 1995).

2.13.1 The Principal Component Method

Let the observable random vector X have covariance matrix Sigma with eigenvalue-eigenvector pairs (lambda_i, e_i), where lambda_1 >= lambda_2 >= ... >= lambda_p >= 0. Then we can write

Sigma = lambda_1 e_1 e'_1 + lambda_2 e_2 e'_2 + ... + lambda_p e_p e'_p = LL',    (1)

where L is the matrix of loadings with columns sqrt(lambda_i) e_i. Allowing for the specific variances Psi, we can write the equation

Sigma = LL' + Psi.    (2)

2.14 Some Basic Terms Related to Factor Analysis

2.14.1 Factor Loading

A factor loading is the correlation between a variable and a factor that has been extracted from the data. The component matrix indicates the correlation of each variable with each factor. The correlations between the variables and the factors (or "new" variables), as they are extracted by default, are called factor loadings; they are also known as factor-variable correlations. Factor loadings are the values which explain how closely the variables are related to each of the factors discovered. It is the absolute size (rather than the sign, plus or minus) of the loading that is important in the interpretation of a factor. Factor loadings are the basis for imputing a label to the different factors; in a word, factor loadings work as the key to understanding what the factors mean.

2.14.2 Communality

The sum of the squared factor loadings over all factors for a given variable (row) is the variance in that variable accounted for by all the factors, and this is called the communality. Communality shows how much of each variable is accounted for by the underlying factors taken together. A high value of communality means that not much of the variable is left over after whatever the factors represent is taken into consideration. It is worked out for each variable as follows:

Communality of the ith variable = (ith loading on factor A)^2 + (ith loading on factor B)^2 + ...

2.14.3 Eigenvalue (or Latent Root)

The eigenvalue is the amount of variance in the variable set explained by a factor. When we take the sum of the squared factor loadings relating to a factor, that sum is referred to as the eigenvalue or latent root. The eigenvalue indicates the relative importance of each factor in accounting for the particular set of variables being analyzed.

2.14.4 Correlation Matrix

The most often employed techniques of factor analysis are applied to a matrix of correlation coefficients among all the variables. The full correlation matrix involved in the factor analysis is usually shown if the number of variables analyzed is not overly large; often, however, the matrix is presented without comment. Specifically, the correlation matrix has the following features:

- The coefficients of correlation express the degree of linear relationship between the row and column variables of the matrix. The closer the coefficient is to zero, the weaker the relationship; the closer to one, the stronger the relationship. A negative sign indicates that the variables are inversely related.
- To interpret a coefficient for the data on two variables, square it and multiply by 100; this gives the percent variation in common.
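The principal component estimator of Section 2.13.1 and the communality just defined can be illustrated together. The sketch below (placeholder data, m = 4 retained factors assumed) forms L from the leading eigenvalue-eigenvector pairs of R, takes the specific variances from the diagonal of R - LL', and checks that communality plus uniqueness equals one for every variable.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(56, 11))
R = np.corrcoef(X, rowvar=False)

m = 4                                          # number of common factors retained (assumption)
vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Principal component solution: L = [sqrt(l1) e1, ..., sqrt(lm) em]
L = vecs[:, :m] * np.sqrt(vals[:m])

psi = np.diag(R - L @ L.T)                     # specific variances (uniquenesses)
communality = (L ** 2).sum(axis=1)             # h_i^2 = sum of squared loadings per variable

# For a correlation matrix, communality + uniqueness = 1 by construction
print(np.round(communality + psi, 3))
```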

- The correlation coefficient between two variables is the cosine of the angle between the variables as vectors plotted on the cases (coordinate axes).

2.14.5 KMO and Bartlett's test

KMO and Bartlett's test of sphericity produces the Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test. The KMO statistic varies between 0 and 1; a value close to 1 indicates that the patterns of correlations are relatively compact, and the KMO value should be greater than 0.5 if the sample is adequate. Bartlett's measure tests the null hypothesis that the original correlation matrix is an identity matrix. For factor analysis to work we need some relationships between the variables, and if the R-matrix were an identity matrix then all correlation coefficients would be zero. Therefore, we want this test to be significant (i.e. to have a significance value less than .05). A significant test tells us that the R-matrix is not an identity matrix, so there are some relationships between the variables we hope to include in the analysis, indicating that factor analysis is appropriate.

2.14.6 Rotation

There are various methods that can be used in factor rotation:

1. Orthogonal Rotation preserves the independence of the factors; geometrically they remain 90 degrees apart.
2. Varimax Rotation attempts to achieve loadings of ones and zeros in the columns of the component matrix.
3. Quartimax Rotation attempts to achieve loadings of ones and zeros in the rows of the component matrix.
4. Equimax Rotation combines the objectives of both varimax and quartimax rotations.
5. Oblique Rotation produces factors that are not independent; geometrically they are not 90 degrees apart.
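As an illustration of an orthogonal rotation, here is a generic implementation of the varimax criterion (method 2 in the list above). It is a textbook sketch, not the routine SPSS uses; note that the row sums of squared loadings (the communalities) are unchanged by the rotation, a property discussed in the next subsection.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Return an orthogonally (varimax-)rotated copy of the p x k loading matrix L."""
    p, k = L.shape
    T = np.eye(k)                # accumulated rotation matrix
    d = 0.0
    for _ in range(max_iter):
        Lam = L @ T
        u, s, vt = np.linalg.svd(
            L.T @ (Lam ** 3 - (gamma / p) * Lam @ np.diag((Lam ** 2).sum(axis=0)))
        )
        T = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return L @ T

# Rotation leaves the communalities (row sums of squared loadings) unchanged
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.1, 0.8]])   # toy loading matrix
L_rot = varimax(L)
print(np.round((L ** 2).sum(axis=1), 3))      # communalities before rotation
print(np.round((L_rot ** 2).sum(axis=1), 3))  # identical after rotation
```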

2.14.7 Factor Rotation

All factor loadings obtained from the initial loadings by an orthogonal transformation have the same ability to reproduce the covariance matrix. From matrix algebra we know that an orthogonal transformation corresponds to a rigid rotation of the coordinate axes. For this reason an orthogonal transformation of the factor loadings, and the implied orthogonal transformation of the factors, is called factor rotation. Though different rotations give results that appear to be entirely different, from a statistical point of view all results are taken as equal, none superior or inferior to the others. However, from the standpoint of making sense of the results of a factor analysis, one must select the right rotation. Rotation, in the context of factor analysis, is something like staining a microscope slide: different rotations reveal different structures in the data. If the factors are independent an orthogonal rotation is done, and if the factors are correlated an oblique rotation is made. The communality of each variable remains undisturbed regardless of the rotation, but the eigenvalues change as a result of rotation.

2.14.8 Unrotated Factor Matrix

Two different factor matrices are often displayed in a report on a factor analysis. The first is the unrotated factor matrix; it is usually given without comment. The features of the matrix which are useful for interpretation are as follows:

- The number of factors (columns) is the number of substantively meaningful independent (uncorrelated) patterns of relationship among the variables.
- The first unrotated factor pattern delineates the largest pattern of relationships in the data; the second delineates the next largest pattern that is independent of (uncorrelated with) the first; the third delineates the third largest pattern that is independent of the first and second; and so on. Thus the amount of variation in the data described by each pattern decreases successively with each factor.
- The loadings measure which variables are involved in which factor pattern and to what degree. The square of a loading multiplied by 100 equals the percent variation that a variable has in common with an unrotated pattern.
- The eigenvalues equal the sum of the column of squared loadings for each factor. They measure the amount of variation accounted for by a pattern. Dividing the eigenvalues either by the number of variables or by the sum of the h2 values, and multiplying by 100, determines the percent of either total or common variance, respectively. The percent of total variance figures measure how much of the data variation is involved in a pattern; the percent of common variance figures indicate how whatever regularity exists in the data is divided among the factor patterns, i.e. how much of the variation accounted for by all the patterns is involved in each pattern.
- The column headed "h2" displays the communality of each variable. This is the proportion of a variable's total variation that is involved in the patterns. The coefficient (communality) shown in this column, multiplied by 100, gives the percent of variation of a variable in common with the patterns.

2.14.9 Rotated Factor Matrix

The rotated factor matrix should not differ in format from the unrotated factor matrix, except that the h2 may not be given and eigenvalues are inappropriate. The following features characterize the rotated matrix.

If the rotated matrix is orthogonal, then several features of the unrotated matrix are preserved by the orthogonally rotated matrix. The loadings define the separate patterns and the degree of involvement in the patterns for each variable. In the orthogonally rotated matrix, however, no significance is attached to factor order, whereas in the unrotated matrix factor patterns are ordered by the amount of data variation they account for, with the first defining the greatest degree of relationship in the data.

If the rotated matrix is oblique rather than orthogonal, then the rotation takes place in one of two coordinate systems: either a system of primary axes or a system of reference axes. The primary factor pattern matrix and the reference factor structure matrix delineate the oblique patterns or clusters of interrelationship among the variables.

2.14.10 Factor Scores

A useful by-product of factor analysis is factor scores. Factor scores are composite measures that can be computed for each subject on each factor. They are standardized measures with a mean of 0.0 and a standard deviation of 1.0, computed from the factor score coefficient matrix. A factor score represents the degree to which each respondent gets high scores on the group of items that load highly on each factor. Factor scores can explain what the factors mean, and with such scores several other multivariate analyses can be performed.

2.15 Advantages of Factor Analysis

The advantages of factor analysis are discussed below.

1. The technique of factor analysis is quite useful when we want to condense and simplify multivariate data.
2. The technique is helpful in pointing out important and interesting relationships among observed data that were there all the time but not easy to see from the data alone.
3. The technique can reveal the latent factors (i.e. underlying factors not directly observed) that determine relationships among several variables concerning a research study.
4. The technique may be used in the context of empirical clustering of products, media or people, i.e. for providing a classification scheme when data scored on various rating scales have to be grouped together.
5. The technique is useful for verifying the conceptualization of a construct of interest.
6. Factor analysis can simultaneously manage over a hundred variables, compensate for random error and invalidity, and disentangle complex interrelationships into their major and distinct regularities.
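Returning to the factor scores of Section 2.14.10: one standard way to compute standardized scores is the regression (Thomson) method, F = Z R^{-1} L. The sketch below illustrates it on placeholder data with a principal-component loading matrix; it is not necessarily the exact scoring option used by SPSS in this study.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(56, 11))                  # placeholder data (n x p)

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)

# Principal-component loadings for 4 factors (placeholder extraction)
vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1][:4]
L = vecs[:, order] * np.sqrt(vals[order])

# Regression-method factor scores: F = Z R^{-1} L  (one row per respondent)
scores = Z @ np.linalg.solve(R, L)

print("score means (should be ~0):", np.round(scores.mean(axis=0), 3))
print("score std devs (should be ~1):", np.round(scores.std(axis=0, ddof=1), 3))
```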

2.16 Disadvantages of Factor Analysis

The disadvantages of factor analysis are discussed below.

1. Factor analysis is a complicated decision tool that can be used only when one has thorough knowledge and enough experience of handling it.
2. It is mathematically complicated and entails diverse and numerous considerations in application. Its technical vocabulary includes strange terms such as eigenvalues, rotate, loadings, orthogonal, simple structure, communality, etc.
3. The problem of communicating factor analysis is especially crucial for peace research. Scholars in this field are drawn from many disciplines and professions, and few of them are acquainted with the method.
4. It involves laborious computations with a heavy cost burden. With the computer facilities available these days, there is no doubt that factor analyses have become relatively faster and easier, but the cost factor remains: large factor analyses are still bound to be quite expensive.
5. The results of a single factor analysis are generally considered less reliable and dependable, for very often a factor analysis starts with a set of imperfect data.
6. Even then, at times it may not work well and may even disappoint the user.

2.17 Uses of Factor Analysis

The uses of factor analysis are discussed below.

1. Interdependency and pattern delineation: If a scientist has a table of data - say, UN votes, personality characteristics, or answers to a questionnaire - and if he suspects that these data are interrelated in a complex fashion, then factor analysis may be used to untangle the linear relationships into their separate patterns.
2. Parsimony or data reduction: Factor analysis can be useful for reducing a mass of information to an economical description. For example, data on fifty characteristics for 300 nations are unwieldy to handle, descriptively or analytically. The management, analysis, and understanding of such data are facilitated by reducing them to their common factor patterns.

3. Structure: Factor analysis may be employed to discover the basic structure of a domain. As a case in point, a scientist may want to uncover the primary independent lines or dimensions - such as size, leadership, and age - of variation in group characteristics and behavior. Data collected on a large sample of groups and factor analyzed can help disclose this structure.
4. Classification or description: Factor analysis is a tool for developing an empirical typology. It can be used to group interdependent variables into descriptive categories, such as ideology, revolution, liberal voting, and authoritarianism. It can be used to classify nation profiles into types with similar characteristics or behavior.
5. Scaling: A scientist often wishes to develop a scale on which individuals, groups, or nations can be rated and compared. The scale may refer to such phenomena as political participation, voting behavior, or conflict. A problem in developing a scale is how to weight the characteristics being combined. Factor analysis offers a solution by dividing the characteristics into independent sources of variation (factors).
6. Hypothesis testing: Hypotheses abound regarding dimensions of attitude, personality, group, social behavior, voting, and conflict. Since the meaning usually associated with "dimension" is that of a cluster or group of highly intercorrelated characteristics or behavior, factor analysis may be used to test for their empirical existence.
7. Data transformation: Factor analysis can be used to transform data to meet the assumptions of other techniques. If the predictor variables are correlated in violation of the assumptions of a technique such as multiple regression, factor analysis can be employed to reduce them to a smaller set of uncorrelated factor scores.
8. Exploration: In a new domain of scientific interest like peace research, the complex interrelations of phenomena have undergone little systematic investigation. The unknown domain may be explored through factor analysis. It can reduce complex interrelationships to a relatively simple linear expression and it can uncover unsuspected, perhaps startling, relationships.
9. Mapping: Besides facilitating exploration, factor analysis also enables a scientist to map the social terrain. This means the systematic attempt to chart major empirical concepts and sources of variation. These concepts may then be used to describe a domain or to serve as inputs to further research.

Data and Variables

3.1 Target population

The intended target population for this study is the 3rd year, 4th year and M.Sc. students of Shahidullah Hall of the University of Dhaka. Shahidullah Hall is one of the biggest halls of the University of Dhaka, and every kind of student lives in that hall, so this hall was chosen for data collection. The 3rd year, 4th year and M.Sc. students were considered in this study because they have spent at least three years at this university, and for that reason their academic performance can be considered adequate for this kind of analysis. For data collection, only the main building of Shahidullah Hall is considered, because most of the 3rd year, 4th year and M.Sc. students live in that building. Each student is considered a sampling unit. A cluster sampling technique is used, because enough information was not available to construct the sampling frame required for other probability sampling techniques. In that building the hall rooms were considered as clusters, and there are 178 clusters in the sampling frame. Among them, 32 clusters were selected randomly, and from these selected clusters 56 sampling units were taken.

3.2 Data

The questionnaire was distributed to the selected hall students; the questionnaire is given in the appendix. A total of 56 questionnaires were completed. Of the 56 sample units, 16.07% are 3rd year students, 64.28% are 4th year students and the rest of them are M.Sc. students.

Figure 4.1: Data

Figure 4.2: Data

3.3 Variables

We consider here 11 variables which influence students' academic performance. Qualitative variables are ignored in this study to avoid complications. In each case the variable refers to the 3rd year, 4th year and M.Sc. students of Shahidullah Hall of the University of Dhaka. The 11 variables are given below:

1. Attendance in class
2. Study hours per week after class
3. Family income
4. Involvement in financial (income earning) activities
5. Involvement in extracurricular activities
6. Entertainment
7. Involvement in political activities
8. Sleeping hours
9. Number of roommates
10. How long it took to get a seat in the hall
11. Past academic performances (SSC and HSC results)

1. Attendance in class: This variable refers to the students' attendance in their classes.
2. Study hours per week after class: This variable refers to the time spent on study in a week after class.
3. Family income: This variable refers to the monthly income of the earning members of the student's family.
4. Involvement in financial (income earning) activities: This variable refers to the time spent on income earning activities such as tutoring students, part time jobs, business, etc.
5. Involvement in extracurricular activities: This variable refers to the time spent on extracurricular activities such as playing different types of indoor and outdoor games, participating in debate competitions, discussions, etc.
6. Entertainment: This variable refers to the time spent watching TV, listening to music, reading novels, etc.
7. Involvement in political activities: This variable refers to the time spent on political work, meetings and other political activities.
8. Sleeping hours: This variable refers to the time a student usually sleeps over 24 hours.
9. Number of roommates: This variable refers to the number of roommates a student has.
10. How long it took to get a seat in the hall: This variable refers to the time taken to get a seat in the hall.
11. Past academic performances (SSC and HSC results): This variable refers to the SSC and HSC results of the students.

Analysis of Data

Factor analysis is a statistical approach that can be used to analyze interrelationships among a large number of variables and to explain these variables in terms of their common underlying dimensions (factors). Thousands of variables have been proposed to explain or describe the complex variety and interconnections of social and international relations, and perhaps an equal number of hypotheses and theories linking these variables have been suggested. So this method is chosen for the analysis.

The first thing to do when conducting a factor analysis is to look at the inter-correlations between the variables. We expect that our variables correlate with each other; if we find any variables that do not correlate with any other variables, then we should consider excluding these variables before the factor analysis is run. The correlations between variables can be checked using the correlate procedure to create a correlation matrix of all variables. This matrix can also be created as part of the main factor analysis.

KMO and Bartlett's test of sphericity produces the Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test. The KMO statistic varies between 0 and 1; a value close to 1 indicates compact patterns of correlations, and the KMO value should be greater than 0.5 if the sample is adequate. Bartlett's measure tests the null hypothesis that the original correlation matrix is an identity matrix. For factor analysis to work we need some relationships between the variables, and if the R-matrix were an identity matrix then all correlation coefficients would be zero. Therefore, we want this test to be significant (i.e. to have a significance value less than .05). A significant test tells us that the R-matrix is not an identity matrix, and therefore that there are some relationships between the variables we hope to include in the analysis, indicating that factor analysis is appropriate.

4.1 Correlation matrix, KMO and Bartlett's test, eigenvalues, factor analysis and interpretation

SPSS software is used to analyze the data.

Table 4.1: The correlation matrix involved in the factor analysis for academic performance. The rows and columns are the eleven variables: previous result, number of roommate, time taken to get a hall seat, average attendance, average study hour per week, family income, average financial activity, average extracurricular activity, average entertainment per day, average political activity, and average sleep per day.

From Table 4.1 we can see that students' average attendance and average study per week after class have high positive correlations with previous result and strong negative correlations with average political activity. This means that students who attend class regularly and who study a good amount of time after class have better previous results than the other students. Average attendance also has a positive correlation with average study per week after class and a negative correlation with political activity: if a student is regular in class, then he spends a good amount of time on his study and cannot spend a lot of time on political activities. We also found that family income has a negative correlation with financial activity and a positive correlation with extracurricular activity. That means that students whose parents have a high income do not have to be involved in many financial activities, and their involvement in extracurricular activities is better for this reason.

Table 4.2: KMO and Bartlett's test for academic performance

Kaiser-Meyer-Olkin Measure of Sampling Adequacy: .616
Bartlett's Test of Sphericity: Approx. Chi-Square = 194.031, df = 55, Sig. = .000

KMO and Bartlett's test of sphericity produces the Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test. The KMO statistic varies between 0 and 1; a value close to 1 indicates compact patterns of correlations, and the KMO value should be greater than 0.5 if the sample is adequate. As the value of KMO is .616 (values between 0.5 and 0.7 are considered mediocre), the patterns of correlations are relatively compact, so factor analysis should yield distinct and reliable factors. Bartlett's measure tests the null hypothesis that the original correlation matrix is an identity matrix; for factor analysis to work we need some relationships between the variables, so we want this test to be significant (i.e. to have a significance value less than .05). For these data, Bartlett's test is significant: the approximate chi-square value is 194.031 with df = 55 and p = .000 < .05, so the null hypothesis that the correlation matrix is an identity matrix is rejected. There are therefore some relationships between the variables we hope to include in the analysis, factor analysis is appropriate here, and we can move forward to extract the factors.
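For reference, both statistics reported in Table 4.2 can be computed from the correlation matrix alone. The sketch below implements the usual chi-square approximation for Bartlett's test and the overall KMO measure from the partial correlations; it runs on placeholder data, so it will not reproduce the study's values (KMO = .616, chi-square = 194.031) exactly.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Chi-square test of H0: R is an identity matrix."""
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, df, chi2.sf(stat, df)

def kmo(R):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                      # partial correlations (off-diagonal entries)
    np.fill_diagonal(partial, 0.0)
    r2 = R.copy()
    np.fill_diagonal(r2, 0.0)
    return (r2 ** 2).sum() / ((r2 ** 2).sum() + (partial ** 2).sum())

rng = np.random.default_rng(4)
X = rng.normal(size=(56, 11))               # placeholder for the 56 x 11 survey data
R = np.corrcoef(X, rowvar=False)

stat, df, p_value = bartlett_sphericity(R, n=X.shape[0])
print(f"Bartlett chi-square = {stat:.2f}, df = {df}, p = {p_value:.4f}")
print(f"KMO = {kmo(R):.3f}")
```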

4.1.1 Total Variance Explained by the Eigenvalues

The eigenvalues of the correlation matrix give the variance of the variables explained by the factors. We calculate eleven eigenvalues, the same number as the variables. In Table 4.3, the first column shows the calculated eigenvalue of each component, the second column shows the percentage of variance explained by each component, and the third column shows the cumulative percentage of variance.

Table 4.3: Initial eigenvalues for students' academic performance

Component | Total (eigenvalue) | % of Variance | Cumulative %
1 | 3.199 | 29.078 | 29.078
2 | 1.648 | 14.981 | 44.059
3 | 1.510 | 13.729 | 57.788
4 | 1.169 | 10.623 | 68.411
5-11 | each less than 1 | (remaining 31.589 in total) | 100.000

The largest eigenvalue is 3.199, so the percentage of the total sample variance explained by the first factor is 29.078%. The second factor has an eigenvalue of 1.648; since this is greater than 1.0, it explains more variance than a single variable, in fact 1.648 times as much, and the percentage of variance it explains is 14.981%. The third factor has an eigenvalue of 1.510 and explains 13.729% of the variance. The fourth factor has an eigenvalue of 1.169 and explains 10.623% of the variance. Factors 5 through 11 have eigenvalues less than 1 and therefore each explain less variance than a single variable. We have found four eigenvalues greater than unity, and in this study these four factors explain 68.411% of the total variation.

4.1.2 Scree Plot

The scree plot is a special kind of graph showing the eigenvalue of each component factor. Components under the line (eigenvalue = 1) are not considered, because an eigenvalue less than 1 means the component cannot explain the variance of more than one variable.

Figure 4.3: Scree plot
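A scree plot like Figure 4.3 can be drawn with matplotlib, as sketched below. The eigenvalues here are computed from placeholder data; in the study the first four eigenvalues were 3.199, 1.648, 1.510 and 1.169.

```python
import numpy as np
import matplotlib.pyplot as plt

# Eigenvalues of the correlation matrix, largest first (placeholder data)
rng = np.random.default_rng(6)
X = rng.normal(size=(56, 11))
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

plt.plot(range(1, len(eigvals) + 1), eigvals, marker="o")
plt.axhline(1.0, linestyle="--", color="grey")   # cutoff: keep components with eigenvalue above 1
plt.xlabel("Component number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()
```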

4.1.3 Extracted Factors

Here the factors having eigenvalues greater than 1 are extracted. The four factors extracted by the scree plot describe almost 70% of the total variance (Table 4.4). That is, factor analysis by PCA is effective.

Table 4.4: Extracted factors with variance (extraction sums of squared loadings)

Component | Total | % of Variance | Cumulative %
1 | 3.199 | 29.078 | 29.078
2 | 1.648 | 14.981 | 44.059
3 | 1.510 | 13.729 | 57.788
4 | 1.169 | 10.623 | 68.411

4.1.4 Extracted Factors after Rotation

The extracted factors after rotation also explain almost 70% of the variation (see Table 4.5). After rotation, the variation described by the first factor component decreases while the variation of the other factor components increases. This improves the result.

Table 4.5: Rotation of extracted factors with variance (rotation sums of squared loadings)

Component | Total | % of Variance | Cumulative %
1 | 2.869 | 26.078 | 26.078
2 | 1.856 | 16.871 | 42.949
3 | 1.595 | 14.501 | 57.450
4 | 1.206 | 10.962 | 68.411

4.1.5 Factor Loading to Each Variable

Table 4.6: Component matrix (loadings of the eleven variables - previous result, number of roommate, time taken to get a hall seat, average attendance, average study hour per week, family income, average financial activity, average extracurricular activity, average entertainment per day, average political activity and average sleep per day - on the four extracted components)

4.1.6 Communality

The proportion of the variance explained by the factor loadings is the communality of a variable; it is the sum of the squared factor loadings for each variable. In PCA we assume all the variables have the same variance in common. From Table 4.7 we see that almost every variable has a high communality (common or shared variance explained by the factors) except number of roommate, which has the lowest value.

Table 4.7: Communalities for students' academic performance (communality of each of the eleven variables)

4.1.7 Suppressed Factor Loadings

For a clearer interpretation of the factors, we omit (suppress) the factor loadings whose absolute value is less than 0.40.

Table 4.8: Suppressed component matrix (loadings of the eleven variables on the four components, with loadings below 0.40 in absolute value suppressed)
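Suppressing small loadings is purely a display choice. With the loadings held in a pandas DataFrame, the 0.40 cut-off of Table 4.8 can be applied in one line, as in this sketch with placeholder variable and factor names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
loadings = pd.DataFrame(
    rng.uniform(-1, 1, size=(11, 4)),                    # placeholder loading matrix
    index=[f"variable_{i + 1}" for i in range(11)],
    columns=[f"factor_{j + 1}" for j in range(4)],
)

# Blank out loadings whose absolute value is below 0.40, as in Table 4.8
suppressed = loadings.where(loadings.abs() >= 0.40)
print(suppressed.round(3))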

Here we see that Factor-1 holds almost all of the variables (every variable except number of roommate), while Factor-2 and Factor-3 pick up overlapping subsets of family income, average financial activity, average extracurricular activity, average entertainment per day, average political activity and average sleep per day, and Factor-4 holds time taken to get a hall seat and average extracurricular activity. This unrotated solution does not give us a precise idea of the factor structure, so we check the factor loadings of the Varimax-rotated factors.

4.1.8 Suppressed Rotated Factor Loadings

Table 4.9: Suppressed rotated component matrix (loadings of the eleven variables on the four Varimax-rotated components, with loadings below 0.40 in absolute value suppressed)

Here we see that:

Factor-1 holds: previous result, average attendance, average study hour per week and average political activity.
Factor-2 holds: family income, average financial activity and average extracurricular activity.
Factor-3 holds: average entertainment per day, average sleep per day and number of roommate.
Factor-4 holds: time taken to get a hall seat and average extracurricular activity.

4.1.9 Factor Naming

Factor 1: Academic effort factor
Factor 2: Financial factor
Factor 3: Leisure and entertainment factor
Factor 4: Hall seat factor

Table 4.10: Factor scores (the score of each respondent on each of the four factors)

4.2 Regression Model

Here we consider:

CGPA = dependent variable
Factor 1 = independent variable
Factor 2 = independent variable
Factor 3 = independent variable
Factor 4 = independent variable

The linear regression model is

CGPA = b0 + b1*Factor1 + b2*Factor2 + b3*Factor3 + b4*Factor4.
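Outside SPSS, this model could be fitted as in the following sketch with statsmodels; the CGPA values and factor scores are simulated placeholders rather than the study data, with coefficients chosen only to mimic the general pattern reported below.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 56
factors = pd.DataFrame(
    rng.standard_normal((n, 4)), columns=["factor1", "factor2", "factor3", "factor4"]
)
# Simulated response: strong positive weight on factor 1, small negative weight on factor 3
cgpa = 3.2 + 0.35 * factors["factor1"] - 0.05 * factors["factor3"] + 0.1 * rng.standard_normal(n)

model = sm.OLS(cgpa, sm.add_constant(factors)).fit()
print(model.summary())   # reports R-squared, adjusted R-squared, F statistic and coefficients
```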

Table 4.11: Correlations between each pair of variables (N = 56)

Pearson correlation with CGPA (one-tailed significance in parentheses):
factor1: .925 (.000)
factor2: .042 (.380)
factor3: -.124 (.181)
factor4: -.011 (.469)
The correlations among factor1, factor2, factor3 and factor4 are all .000.

This table gives details of the correlation between each pair of variables. From the table we can see that factor 1 has a strong positive correlation with CGPA, factor 3 and factor 4 have small negative correlations with CGPA, and factor 2 has a very small correlation with CGPA. The four factor scores are uncorrelated with one another, which is what we want: we do not want strong correlations among the predictor variables themselves.

Table 4.12: Model summary

Model | R | R Square | Adjusted R Square | Std. Error of the Estimate
1 | .934 | .872 | .862 | .12733

This table is important: the Adjusted R Square value tells us that our model accounts for 86.2% of the variance in the CGPA scores, a very good model.

Table 4.13: ANOVA

Source | Sum of Squares | df | Mean Square | F | Sig.
Regression | 5.647 | 4 | 1.412 | 87.066 | .000
Residual | .827 | 51 | .016 | |
Total | 6.474 | 55 | | |

This table reports an ANOVA, which assesses the overall significance of our model. As p < 0.05, our model is significant.
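As a check, the model-summary quantities follow directly from the sums of squares in the ANOVA table; the few lines below reproduce R Square, Adjusted R Square and F from the reported values.

```python
ss_regression, ss_residual = 5.647, 0.827
df_regression, df_residual = 4, 51
ss_total = ss_regression + ss_residual            # 6.474

r_squared = ss_regression / ss_total                                            # ~0.872
adj_r_squared = 1 - (ss_residual / df_residual) / (ss_total / (df_regression + df_residual))  # ~0.862
f_statistic = (ss_regression / df_regression) / (ss_residual / df_residual)     # ~87.07

print(round(r_squared, 3), round(adj_r_squared, 3), round(f_statistic, 2))
```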

Table 4.14: Coefficients (with collinearity statistics)

Predictor | B | Std. Error | Beta | t | Sig. | Tolerance
(Constant) | 3.232 | .017 | | 189.952 | .000 |
factor1 | .317 | .017 | .925 | 18.479 | .000 | 1.000
factor2 | .014 | .017 | .042 | .833 | .409 | 1.000
factor3 | -.043 | .017 | -.124 | -2.477 | .017 | 1.000
factor4 | -.004 | .017 | -.011 | -.212 | .833 | 1.000

The standardized beta coefficients give a measure of the contribution of each variable to the model; a large value indicates that a unit change in the predictor variable has a large effect on the criterion variable. The t and Sig (p) values give a rough indication of the impact of each predictor variable: a large absolute t value and a small p value suggest that a predictor variable has a large impact on the criterion variable. From the table we can see that factor 1, the academic effort factor, is highly significant, as expected; it has a positive effect on the outcome, meaning that larger values of this factor score result in better academic performance. Factor 3 is also significant and has a negative effect on the outcome, meaning that higher values of this factor score result in poorer academic performance. Average time spent on entertainment, number of hours of sleep per day and, to some extent, number of roommates have high loadings on this factor; these variables are correlated, and they have a negative impact on academic performance.

Conclusion

This study was conducted to identify the factors influencing the performance of students living at Shahidullah Hall of the University of Dhaka. The factor analysis was done with eleven variables: attendance in class, study hours per week after class, family income, involvement in financial activities, involvement in extracurricular activities, entertainment, political influences, sleeping hours, number of roommates, how long it took to get a seat in the hall, and past academic performances (SSC and HSC results). The findings of the study are summarized and discussed in the following paragraphs.

From the analysis, our finding is that the variables reduce to four factors: an academic effort factor, a financial factor, a leisure and entertainment factor, and a hall seat factor. These four factors mainly influence the academic performance of the students living at Shahidullah Hall. A regression model was then fitted with these factors as independent variables and current CGPA as the dependent variable, and the significance of the model was tested. The result of the significance test is that our model is significant with these factors.

The result of the analysis indicates that the academic effort factor has a very strong positive effect on CGPA: students who have good previous results, who attend class regularly and who study a good amount of time after class have better CGPAs than the other students. We found that the financial factor has very little influence on academic performance; that is, if a student is busy with income earning activities, it has little effect on his CGPA. From the results we also found that the leisure and entertainment factor has a negative influence on the academic performance of the students: average time spent on entertainment, number of hours of sleep per day and, to some extent, number of roommates have high loadings on this factor, these variables are correlated, and higher values of this factor score result in poorer academic performance. That means that if a student is too busy with entertainment, it will have a bad effect on his academic performance. We also found a hall seat factor, discussed below.

The results also indicate that previous result, attendance and study hours after class have positive correlations with CGPA, while political activity has a negative correlation with CGPA. That means that students who have good previous results, who attend class regularly and who study a good amount of time after class have better CGPAs than the other students, and a student who spends a lot of time on political activities has a lower CGPA than others who are not involved in political activities. We found that family income has a negative correlation with financial activity and a positive correlation with extracurricular activity: students whose parents have a high income do not have to be involved in many financial activities, and their involvement in extracurricular activities is better for this reason. We also found that entertainment has a positive correlation with sleep per day and a negative correlation with political activity; we can interpret this as meaning that a student who is busy with entertainment gets tired, sleeps more than usual and cannot be involved in many political activities. We found that students who were actively engaged in extracurricular activities obtained greater CGPAs; this is supported by the result that involvement in extracurricular activities has a positive relationship with CGPA. We also found that time taken to get a hall seat has a negative correlation with average extracurricular activity: if the time taken to get a hall seat is too long, a student cannot be involved in extracurricular activities as much as others can, because a student who cannot get a seat in the hall in proper time has to face many problems and does not get proper time for extracurricular activities.

After discussing all of this, we can comment that students' performance (academic achievement) plays an important role in producing the best quality graduates, who will become great leaders and manpower for the country and will thus be responsible for the country's economic and social development. The performance of students in universities should be a concern not only to administrators and educators, but also to corporations in the labour market. Academic achievement is one of the main factors considered by employers in recruiting workers, especially fresh graduates. Thus, students have to place the greatest effort into their studies, not only to obtain good grades but also to develop a set of moral and ethical values, social competency and consistent attendance.

Appendix

Questionnaire

A survey to determine the factors influencing academic performance of students living in Shahidullah Hall of University of Dhaka

Department:            Current year:            Division:

1.1. Current CGPA:
1.2. Your GPA at: 1st year    2nd year    3rd year    4th year
1.3. S.S.C. result:
1.4. H.S.C. result:

2. Have you been allocated a seat in the Hall?  a) yes  b) no
2.1. If yes, how long did it take for you to get a seat?

3. How many roommates do you have?

4. Attendance in class (in percentage) during: 1st year    2nd year    3rd year    4th year
4.1. Average number of study hours after class per week during: 1st year    2nd year    3rd year    4th year
4.2. At which times of the day do you usually study after class? (e.g. 6 p.m. to 10 p.m.)

5. What is your father's occupation?
5.1. What is your father's educational status?
     a) No formal education   b) Primary (class 1 to 5)   c) Secondary (class 6 to 10)
     d) Higher secondary (class 11 to 12)   e) Undergraduate   f) Postgraduate
5.2. What is your family's monthly income?
5.3. What is your mother's educational status?
     a) No formal education   b) Primary (class 1 to 5)   c) Secondary (class 6 to 10)
     d) Higher secondary (class 11 to 12)   e) Undergraduate   f) Postgraduate

6. What is your birth order (i.e. eldest, second child, youngest, etc.)?

7. Have you ever been involved in extracurricular activities (say, playing football or cricket, etc.) during
   a) 1st year:  yes / no    If yes, average amount of time (in hrs) spent in a day:
   b) 2nd year:  yes / no    If yes, average amount of time (in hrs) spent in a day:
   c) 3rd year:  yes / no    If yes, average amount of time (in hrs) spent in a day:
   d) 4th year:  yes / no    If yes, average amount of time (in hrs) spent in a day:

8. Have you been involved in money earning activities (for example, tutoring students, business, part time jobs, etc.)?  a) yes  b) no
8.1. If yes, how many hours did you spend on average per day during the: 1st year    2nd year    3rd year    4th year

9. What is the average amount of time (in hrs) you spend each day for entertainment (say, watching TV, listening to music, reading novels, etc.)?

10. Have you ever been involved in political activities?
    a) 1st year:  yes / no    If yes, average amount of time spent per week:        hrs
    b) 2nd year:  yes / no    If yes, average amount of time spent per week:        hrs
    c) 3rd year:  yes / no    If yes, average amount of time spent per week:        hrs
    d) 4th year:  yes / no    If yes, average amount of time spent per week:        hrs

11. On average, how many hours do you sleep in a day (i.e. over a 24 hr period)?
11.1. Have you ever been seriously ill during the academic year?  a) yes  b) no
11.2. At present, what is the condition of your physical health?  a) good  b) somewhat good  c) bad

12. Do you wear eye glasses?  a) yes  b) no

13. Do you smoke?  a) yes  b) no

14. How many hours do you spend with friends outside class each day?        hrs

15. Where do you usually study?  a) library  b) own room  c) class room  d) reading room  e) other places

16. How many students are there in your class?
16.1. Are classes conducted regularly and in a timely manner for most of the courses?  a) yes  b) no
16.2. Do you engage in group study?  a) yes  b) no
16.3. Do you find other students in your class helpful?  a) yes  b) no
16.4. Are course teachers very helpful in general?  a) yes  b) no

17. Do you rely on financial support from your family?  a) yes  b) no

18. What is your marital status?  a) single  b) engaged  c) married

19. Are you satisfied with your current academic performance?
    a) yes, I am satisfied.   b) yes, but I want to improve it.   c) no, I really need to improve it.

20. Comment on the quality of food served in the dining halls.  a) good  b) average  c) poor quality
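Before the factor analysis, the raw questionnaire responses have to be coded into the eleven analysis variables. The thesis does not spell out this coding, so the sketch below only illustrates one plausible scheme; the raw file responses.csv and every column name are hypothetical.

```python
# Sketch of how raw questionnaire responses might be coded into analysis variables.
# All file and column names are hypothetical; the exact coding used in the thesis
# is not documented, so this is only an illustration.
import pandas as pd

raw = pd.read_csv("responses.csv")
df = pd.DataFrame()

df["current_cgpa"] = raw["q1_1_cgpa"]
# Past academic performance: average of S.S.C. and H.S.C. results (questions 1.3, 1.4).
df["previous_result"] = raw[["q1_3_ssc", "q1_4_hsc"]].mean(axis=1)
# Attendance and study hours: average over the four years (questions 4 and 4.1).
df["attendance_pct"] = raw[[f"q4_attendance_y{i}" for i in range(1, 5)]].mean(axis=1)
df["study_hours"] = raw[[f"q4_1_study_y{i}" for i in range(1, 5)]].mean(axis=1)
# Yes/no items (e.g. question 8, money-earning activities) coded as 1/0.
df["earning_any"] = (raw["q8_earning"] == "yes").astype(int)

df.to_csv("survey.csv", index=False)  # feeds the factor-analysis sketch given earlier
```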

Bibliography

• Richard A. Johnson and Dean W. Wichern (1982). Applied Multivariate Statistical Analysis. Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632.
• McDonald, R. (1985). Factor Analysis and Related Methods. Hillsdale, NJ: Erlbaum.
• Syeda Shamima Sultana (2003). Factor Analysis: An Application to Gross Domestic Product Data. Institute of Statistical Research and Training, University of Dhaka.
• Md. Omar Faruque (2008). An Application of Factor Analysis to the Agricultural Production in Bangladesh. Institute of Statistical Research and Training, University of Dhaka.
• Economic Trends, Statistics Department, Bangladesh Bank, July 2007, July 2008 and July 2009.

Websites:
• http://www.cscanada.net
• http://www.cscanada.org
• http://www.hawaii.edu
• http://www.wikipedia.org
• http://www.moedu.gov.bd
