
How to Prepare a Term Paper - Broad Outline

1. Introduction:
• present relevant background or contextual material
• define terms or concepts when necessary
• explain the focus of the paper and your specific purpose

2. Literature Review:
• Distinguishes what has been done from what needs to be done
• Sets the broad context of the study
• Sets the scope of the study
• Identifies important variables relevant to the topic
• Synthesises the literature to gain a new perspective

3. Problem definition: Provides the framework for reporting the results, indicates
what is necessary to conduct the study, and explains how the findings will be
presented.

4. Research Objective: A research objective is a concrete statement describing what is
to be achieved by the study. It should be closely related to the statement of the problem.

5. Research Methodology: Discuss how the results were achieved and provide
explanations of how the data was gathered/collated/generated and how the data was
analysed. The methods section of a research paper provides the information by
which a study's validity is judged. Research methodology answers two main
questions:

a. How did you collect or generate the data?

b. How did you analyse the data?

6. Results & Discussion: Provide the interpretation, presentation and/or discussion
of the results, including any comparisons with the results of previous research or
the effects of the methods used on the data obtained.

7. Conclusion: The conclusion is the final say on the issues that have arisen in the
paper; it synthesises your thoughts, demonstrates the importance of your ideas, and
propels the reader to a new view of the subject.

Sampling Design Process
Sampling is the process of selecting units (e.g., people, organizations) from a population of
interest so that by studying the sample we may fairly generalize our results back to the
population from which they were chosen. There are two types of sampling: probability
sampling and non-probability sampling.

• Probability sampling: Probability sampling is a sampling technique wherein the
samples are gathered in a process that gives all the individuals in the population
equal chances of being selected. There are four types of probability sampling.

i. Simple Random Sampling: In simple random sampling each element in
the population has a known and equal probability of selection.

ii. Systematic Sampling: In systematic sampling the sample is chosen by
selecting a random starting point and then picking every ith element in
succession from the sampling frame.

iii. Stratified Sampling: Stratified sampling is a two-step process in which
the population is partitioned into subpopulations, or strata, and elements
are then selected from each stratum by a random procedure.

iv. Cluster Sampling: In cluster sampling the target population is first
divided into mutually exclusive and collectively exhaustive
subpopulations, or clusters.
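The two simplest probability schemes above can be sketched in a few lines of Python; the function names and the 100-unit frame are illustrative, not part of the original text:

```python
import random

def simple_random_sample(frame, n):
    """Simple random sampling: every element has an equal chance."""
    return random.sample(frame, n)

def systematic_sample(frame, n):
    """Systematic sampling: pick a random start, then every k-th element."""
    k = len(frame) // n              # sampling interval
    start = random.randrange(k)      # random starting point
    return [frame[start + i * k] for i in range(n)]

frame = list(range(1, 101))          # a sampling frame of 100 units
print(simple_random_sample(frame, 10))
print(systematic_sample(frame, 10))
```

Note that the systematic sample is evenly spaced through the frame, while the simple random sample can land anywhere.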

• Non-probability sampling: Non-probability sampling is a sampling technique
where the samples are gathered in a process that does not give all the individuals
in the population equal chances of being selected. There are four types of
non-probability sampling.

i. Convenience Sampling: Convenience sampling attempts to obtain a sample
of convenient elements. Often respondents are selected because they happen
to be in the right place at the right time.

ii. Judgmental Sampling: Judgmental sampling is a form of convenience
sampling in which the population elements are selected based on the
judgment of the researcher.

iii. Quota Sampling: Quota sampling is a two-stage restricted judgmental
sampling.

a. The first stage consists of developing control categories, or quotas,
of population elements.
b. In the second stage, sample elements are selected based on
convenience or judgment.

iv. Snowball Sampling: In snowball sampling, an initial group of respondents
is selected, usually at random.
a. After being interviewed, these respondents are asked to identify
others who belong to the target population of interest.
b. Subsequent respondents are selected based on referrals.

Hypothesis Testing
A statistical hypothesis is a hypothesis that is testable on the basis of observing a
process that is modeled via a set of random variables. A statistical hypothesis test is a
method of statistical inference. A hypothesis is proposed for the statistical relationship
between two data sets, and this is compared as an alternative to an idealized null
hypothesis that proposes no relationship between the two data sets. The comparison is
deemed statistically significant if the relationship between the data sets would be an
unlikely realization of the null hypothesis according to a threshold probability, the
significance level. Hypothesis tests are used in determining what outcomes of a study
would lead to a rejection of the null hypothesis for a pre-specified level of
significance. The process of distinguishing between the null hypothesis and
the alternative hypothesis is aided by identifying two conceptual types of errors (Type I
and Type II) and by specifying limits on, for example, how much Type I error will be
permitted.

The general idea of hypothesis testing involves:

• Making an initial assumption.
• Collecting data.
• Based on the available data, deciding whether to reject or not reject the initial
assumption.
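The three steps above can be sketched with a one-sample z-test, assuming (for illustration) a known population standard deviation; the function name, the sample data, and the 5% significance level are our own choices:

```python
import math
from statistics import NormalDist, mean

def one_sample_z_test(data, mu0, sigma, alpha=0.05):
    """Test H0: population mean equals mu0 (the initial assumption)
    against H1: it does not, given a known standard deviation sigma."""
    n = len(data)
    z = (mean(data) - mu0) / (sigma / math.sqrt(n))  # test statistic
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p, p < alpha                           # reject H0 if p < alpha

# Collected data (illustrative), tested against the assumption mu0 = 5.0
data = [5.1, 4.9, 5.0, 5.2, 4.8]
z, p, reject = one_sample_z_test(data, mu0=5.0, sigma=0.2)
```

A small p-value means the observed data would be an unlikely realization of the null hypothesis, so the initial assumption is rejected.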

Dependent Variables & Independent Variables

In data analysis, there are two main types of variables: dependent
variables and independent variables.

Dependent Variable:
The dependent variable -- also called the response variable -- is the output of a process or
statistical analysis. Its name comes from the fact that it depends on or responds to other
variables. Typically, the dependent variable is the result you want to achieve. In marketing,
the results desired are tied to sales revenue. Sales as a dependent variable can be looked at in
many ways, such as sales of a specific doll, sales of a category like toy cars, overall sales at a
particular store, or even sales for the entire company.
Independent Variable:
An independent variable is an input to a process or analysis that influences the dependent
variable. While an analysis of this kind has only one dependent variable, there may be
multiple independent variables. When the dependent variable is sales revenue, the elements
of the marketing mix -- product, price, promotion and place -- will influence the
dependent variable and can therefore be identified as independent variables.

Multivariate Techniques
Multivariate analysis is an ever-expanding set of techniques for data analysis that
encompasses a wide range of possible research situations.
The more established as well as emerging techniques include the following:

1. Principal components and common factor analysis
2. Multiple regression and multiple correlation
3. Multiple discriminant analysis and logistic regression
4. Canonical correlation analysis
5. Multivariate analysis of variance and covariance
6. Cluster analysis
7. Structural equation modeling and confirmatory factor analysis

1. Principal Components and Common Factor Analysis

Factor analysis, including both principal component analysis and common factor analysis, is a
statistical approach that can be used to analyze interrelationships among a large number of
variables and to explain these variables in terms of their common underlying dimensions
(factors). The objective is to find a way of condensing the information contained in a number
of original variables into a smaller set of variates (factors) with a minimal loss of
information. By providing an empirical estimate of the structure of the variables considered,
factor analysis becomes an objective basis for creating summated scales.
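As a minimal sketch of the condensation idea, here is a principal-components routine built on NumPy's eigendecomposition; the function name and the use of the correlation matrix are our choices, and common factor analysis would model shared variance differently:

```python
import numpy as np

def principal_components(X, k):
    """Condense p correlated variables (columns of X) into k components
    with minimal loss of information."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    R = np.corrcoef(Z, rowvar=False)           # correlation matrix (p x p)
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenstructure of R
    order = np.argsort(eigvals)[::-1]          # largest variance first
    loadings = eigvecs[:, order[:k]]           # weights defining each component
    return Z @ loadings, eigvals[order]        # component scores, variances
```

Two nearly identical variables load on a single component, so that component's eigenvalue approaches 2 while the discarded component's variance approaches 0 -- this is the "minimal loss of information" the text describes.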

2. Multiple regression and multiple correlation

Multiple regression is the appropriate method of analysis when the research problem involves
a single metric dependent variable presumed to be related to two or more metric independent
variables. The objective of multiple regression analysis is to predict the changes in the
dependent variable in response to changes in the independent variables. This objective is
most often achieved through the statistical rule of least squares. Whenever the researcher is
interested in predicting the amount or size of the dependent variable, multiple regression is
useful.
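A least-squares fit of this kind takes only a few lines with NumPy; the sales/price/promotion framing and the made-up coefficients are illustrative:

```python
import numpy as np

def multiple_regression(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... + bp*xp.

    X holds the metric independent variables (one column each);
    y is the single metric dependent variable."""
    A = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
    return coef                                   # [b0, b1, ..., bp]

# e.g., predict sales from price and promotion spend (made-up, noiseless data)
rng = np.random.default_rng(0)
price = rng.uniform(5, 15, 50)
promo = rng.uniform(0, 10, 50)
sales = 100 - 3.0 * price + 2.0 * promo
print(multiple_regression(np.column_stack([price, promo]), sales))
```

Because the example data is noiseless, the fit recovers the coefficients 100, -3 and 2 up to floating-point error; with real data, the least-squares rule gives the best linear prediction instead.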

3. Multiple discriminant analysis

Multiple discriminant analysis (MDA) is the appropriate multivariate technique if the single
dependent variable is dichotomous (e.g., male-female) or multichotomous (e.g.,
high-medium-low) and therefore nonmetric. As with multiple regression, the independent
variables are assumed to be metric. Discriminant analysis is applicable in situations in
which the total sample can be divided into groups based on a nonmetric dependent variable
characterizing several known classes. The primary objectives of multiple discriminant
analysis are to understand group differences and to predict the likelihood that an entity
(individual or object) will belong to a particular class or group based on several metric
independent variables.

Discriminant analysis might be used to distinguish innovators from noninnovators according
to their demographic and psychographic profiles. Other applications include distinguishing
heavy product users from light users, males from females, national-brand buyers from private-
label buyers, and good credit risks from poor credit risks. Even the Internal Revenue Service
uses discriminant analysis to compare selected federal tax returns with a composite,
hypothetical, normal taxpayer's return (at different income levels) to identify the most
promising returns and areas for audit.
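For the two-group case, the discriminant weights can be computed directly. This Fisher-style sketch (our construction, not the text's) assumes the metric independent variables for the two known groups arrive as two NumPy arrays:

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Two-group linear discriminant analysis.

    Returns weights w that best separate the groups, plus the midpoint
    cutoff used to classify a new observation."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # pooled within-group scatter matrix
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    w = np.linalg.solve(Sw, m1 - m2)      # discriminant weights
    cutoff = w @ (m1 + m2) / 2            # midpoint between group centroids
    return w, cutoff

def classify(x, w, cutoff):
    """Assign x to group 1 if its discriminant score exceeds the cutoff."""
    return 1 if w @ x > cutoff else 2
```

The weights show which independent variables contribute most to separating the groups, which addresses the "understand group differences" objective above.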

4. Canonical correlation analysis

Canonical correlation analysis can be viewed as a logical extension of multiple regression
analysis. Recall that multiple regression analysis involves a single metric dependent variable
and several metric independent variables. With canonical analysis the objective is to correlate
simultaneously several metric dependent variables and several metric independent variables.
Whereas multiple regression involves a single dependent variable, canonical correlation
involves multiple dependent variables. The underlying principle is to develop a linear
combination of each set of variables (both independent and dependent) in a manner that
maximizes the correlation between the two sets. Stated in a different manner, the procedure
involves obtaining a set of weights for the dependent and independent variables that provides
the maximum simple correlation between the set of dependent variables and the set of
independent variables.

5. Multivariate analysis of variance and covariance

Multivariate analysis of variance (MANOVA) is a statistical technique that can be used to
simultaneously explore the relationship between several categorical independent variables (usually
referred to as treatments) and two or more metric dependent variables. As such, it
represents an extension of univariate analysis of variance (ANOVA). Multivariate analysis of
covariance (MANCOVA) can be used in conjunction with MANOVA to remove (after the
experiment) the effect of any uncontrolled metric independent variables (known as covariates)
on the dependent variables. The procedure is similar to that involved in bivariate partial
correlation, in which the effect of a third variable is removed from the correlation. MANOVA is
useful when the researcher designs an experimental situation (manipulation of several nonmetric
treatment variables) to test hypotheses concerning the variance in group responses on two or
more metric dependent variables.

6. Cluster analysis

Cluster analysis is an analytical technique for developing meaningful subgroups of individuals
or objects. Specifically, the objective is to classify a sample of entities (individuals or
objects) into a small number of mutually exclusive groups based on the similarities among the
entities. In cluster analysis, unlike discriminant analysis, the groups are not predefined.
Instead, the
technique is used to identify the groups. Cluster analysis usually involves at least three steps.
The first is the measurement of some form of similarity or association among the entities to
determine how many groups really exist in the sample. The second step is the actual clustering
process, whereby entities are partitioned into groups (clusters). The final step is to profile the
persons or variables to determine their composition. Many times this profiling may be
accomplished by applying discriminant analysis to the groups identified by the cluster technique.
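The three steps can be sketched as a bare-bones k-means routine -- one common clustering technique, though the text does not name a specific algorithm; the function name and the use of Euclidean distance are our choices:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Partition the rows of X into k mutually exclusive clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]  # initial centroids
    for _ in range(iters):
        # step 1: measure similarity (distance of every entity to every centroid)
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)        # step 2: assign to nearest cluster
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):     # stop when the partition is stable
            break
        centroids = new
    return labels, centroids
```

Profiling (the third step) would then inspect the returned centroids, or feed the labels into discriminant analysis as the text suggests.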

7. Structural equation modeling and confirmatory factor analysis

Structural equation modeling (SEM) is a technique that allows separate relationships for each of a
set of dependent variables. In its simplest sense, structural equation modeling provides the
appropriate and most efficient estimation technique for a series of separate multiple regression
equations estimated simultaneously. It is characterized by two basic components: (1) the
structural model and (2) the measurement model. The structural model is the path model, which
relates independent to dependent variables. In such situations, theory, prior experience, or other
guidelines enable the researcher to distinguish which independent variables predict each
dependent variable. Models discussed previously that accommodate multiple dependent
variables (multivariate analysis of variance and canonical correlation) are not applicable
in this situation because they allow only a single relationship between dependent and
independent variables.
The measurement model enables the researcher to use several variables (indicators) for a single
independent or dependent variable. For example, the dependent variable might be a concept
represented by a summated scale, such as self-esteem. In a confirmatory factor analysis the
researcher can assess the contribution of each scale item as well as incorporate how well the scale
measures the concept (reliability). The scales are then integrated into the estimation of the
relationships between dependent and independent variables in the structural model. This
procedure is similar to performing a factor analysis (discussed in a later section) of the scale items
and using the factor scores in the regression.
