
CHAPTER FOUR
Research Design / Research Methods

• Decisions regarding what, where, when, how much, and by what means concerning an inquiry or a research study constitute a research design.
• "A research design is the arrangement of conditions for collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure."
The methods or procedures section is really the heart
of the research proposal.
It is a plan that specifies the sources and types
of information relevant to the research problem.
It is a strategy specifying which approach will
be used for gathering and analyzing the data.
In brief, a research design must, at a minimum, contain:
(a) a clear statement of the research problem;
(b) procedures and techniques to be used for gathering information;
(c) the population to be studied; and
(d) methods to be used in processing and analyzing data.
You must decide exactly how you are going to achieve your stated objectives:
• Generally, the research design is the blueprint for the collection, measurement, and analysis of data.
Questions to be asked to prepare a research design for a research proposal:
• Where will the study be carried out?
• What do I want to measure?
• How can I measure it?
• What type of data is required?
• Where can the required data be found?
• What techniques of data collection will be used?
• What will be the sample design?
• What professional and non-professional staff do I need to carry out this study?
• How will the data be analyzed?
• In what style will the report be prepared?
• How can I avoid introducing biases into the study?
• What constraints may affect this study?
Components of research design
(*may not necessarily be true for all types of research)
• Description of the study area
• Description of the study design
• Description of study participants
• Determination of sample size
• Description of the selection process (sampling method)
• Methods of data collection
• How data quality is ensured
• Presentation of the data analysis methods
One may split the overall research design into the following parts:
(a) the sampling design, which deals with the method of selecting items to be observed for the given study;
(b) the observational design, which relates to the conditions under which the observations are to be made;
(c) the statistical design, which concerns how many items are to be observed and how the information and data gathered are to be analyzed; and
(d) the operational design, which deals with the techniques by which the procedures specified in the sampling, statistical, and observational designs can be carried out.
IMPORTANT CONCEPTS RELATING TO RESEARCH DESIGN

1. Dependent and independent variables:
• A concept which can take on different quantitative values is called a variable. Concepts like weight, height, and income are all examples of variables.
• Phenomena which can take on quantitatively different values, including fractional (decimal) values, are called 'continuous variables'.
• Dependent variable: if one variable depends upon, or is a consequence of, another variable, it is called a dependent variable. It is the variable that is to be predicted or explained.
• Independent variable: a variable that is expected to influence the dependent variable.
2. Extraneous variables: independent variables that are not related to the purpose of the study but may affect the dependent variable.
• Whatever effect is noticed on the dependent variable as a result of extraneous variable(s) is technically described as 'experimental error'.
• A study must always be designed so that the effect upon the dependent variable can be attributed entirely to the independent variable(s), and not to some extraneous variable or variables.
3. Control: minimizing the effects of extraneous independent variables in the study. In experimental research, the term 'control' refers to restraining experimental conditions.
4. Confounded relationship: when the dependent variable is not free from the influence of extraneous variable(s), the relationship between the dependent and independent variables is said to be confounded by the extraneous variable(s).
5. Research hypothesis: a predictive statement that relates an independent variable to a dependent variable. A research hypothesis must usually contain at least one independent and one dependent variable.
6. Experimental and non-experimental hypothesis-testing research: when the purpose of research is to test a research hypothesis, it is termed hypothesis-testing research. It may be experimental (the researcher manipulates the independent variable) or non-experimental (no manipulation).
7. Experimental and control groups: in experimental hypothesis-testing research, when a group is exposed to usual conditions it is termed a 'control group', but when the group is exposed to some novel or special condition it is termed an 'experimental group'.
8. Treatments: the different conditions under which experimental and control groups are put are usually referred to as 'treatments'.
9. Experiment: the process of examining the truth of a statistical hypothesis relating to some research problem is known as an experiment.
10. Experimental unit(s): the pre-determined plots or blocks where different treatments are applied are known as experimental units. Such experimental units must be selected (defined) very carefully.
FORMS OF RESEARCH DESIGNS

Different research designs can be conveniently described if we categorize them as:
(1) exploratory research studies;
(2) descriptive and diagnostic research studies; and
(3) hypothesis-testing research studies.
1. Research design in case of exploratory research studies:
• The major emphasis in such studies is on the discovery of ideas and insights.
• As such, the research design appropriate for such studies must be flexible enough to provide opportunity for considering different aspects of the problem under study.
2. Research design in case of descriptive and diagnostic research studies:
• The design in such studies must be rigid, not flexible, and must focus attention on the following:
(a) formulating the objective of the study (what is the study about, and why is it being made?);
(b) designing the methods of data collection (what techniques of gathering data will be adopted?);
(c) selecting the sample (how much material will be needed?);
(d) collecting the data (where can the required data be found, and to what time period should the data relate?);
(e) processing and analyzing the data; and
(f) reporting the findings.
3. Research design in case of hypothesis-testing research studies:
• Hypothesis-testing research studies (generally known as experimental studies) are those in which the researcher tests hypotheses of causal relationships between variables.
• Such studies require procedures that will not only reduce bias and increase reliability, but will also permit drawing inferences about causality; a minimal sketch of such a test follows.
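As a purely illustrative sketch (not from the source), a test of whether a treatment changed an outcome in an experimental group relative to a control group might look like the following; the group scores are invented.

```python
# Hypothetical illustration: testing whether a treatment changed an outcome.
# Group scores below are invented example data, not from any real study.
from scipy import stats

control = [72, 68, 75, 71, 69, 74, 70, 73]       # usual conditions
experimental = [78, 74, 80, 77, 73, 81, 76, 79]  # novel/special condition

# Two-sample t-test of H0: the two group means are equal.
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g., < 0.05) would lead us to reject H0 and conclude
# the treatment is associated with a difference in mean outcome.
```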
FEATURES OF A GOOD DESIGN
• A good design is often characterized by adjectives like flexible, appropriate, efficient, economical, and so on.
• Generally, the design which minimizes bias and maximizes the reliability of the data collected and analyzed is considered a good design.
Choosing a design usually involves consideration of the following factors:
I. the means of obtaining information;
II. the availability and skills of the researcher and his staff, if any;
III. the objective of the problem to be studied;
IV. the nature of the problem to be studied; and
V. the availability of time and money for the research work.
Research Approach
• Quantitative
• Qualitative
• Mixed (concurrent and sequential)
MEASUREMENT & SCALING OF CONCEPTS
5.1. Definitions of Concepts
• A concept or a construct is a generalized idea about a class of objects, attributes, occurrences, or processes.
• Some concepts are concrete and quantifiable, while others are abstract and qualitative.
• The nature of concepts calls for clearly defining them both conceptually and operationally.
Operational Definition
• A concept must be made operational in order to be measured.
• An operational definition gives meaning to a concept by specifying the activities or operations necessary to measure it.
• A concept like grievances may be difficult to operationalize, whereas a concept like personnel turnover is less difficult.
• The operational definition specifies what must be done to measure the concept under investigation.
• An operational definition is like a manual of instructions or a recipe; an illustrative sketch follows.
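As an illustrative sketch (not from the source), operationalizing "personnel turnover" could mean fixing the exact computation that yields its measured value; the function and figures below are hypothetical.

```python
# Hypothetical operational definition of "personnel turnover":
# the number of separations in a period divided by the average
# headcount over that period, expressed as a percentage.
def turnover_rate(separations: int, headcount_start: int, headcount_end: int) -> float:
    average_headcount = (headcount_start + headcount_end) / 2
    return 100 * separations / average_headcount

# Example with invented figures: 12 employees left a unit that began
# the year with 150 staff and ended with 130.
print(f"Turnover: {turnover_rate(12, 150, 130):.1f}%")  # Turnover: 8.6%
```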
Types of Scales
• A scale may be defined as "any series of items which is progressively arranged according to value or magnitude, into which an item can be placed according to its quantification."
• A scale is a continuous spectrum or series of categories.
• The purpose of scaling is to represent, usually quantitatively, an item's, a person's, or an event's place in the spectrum.
• The four types of scales are: nominal, ordinal, interval, and ratio.
Nominal Scale
• The numbers or letters assigned to objects serve as labels for identification or classification.
  E.g., coding males as 1 and females as 2.
• Nominal scale properties: uniquely classifies.
Ordinal Scale
• Arranges objects or alternatives according to their magnitude in an ordered relationship.
• Respondents are asked to rank-order their preferences in ordinal values.
• Says nothing about the distance or interval between the values.
  E.g., "excellent," "good," "fair," or "poor."
• Ordinal scale properties: uniquely classifies; preserves order.
  E.g., win, place, and show in a horse race.
Interval Scale
• Not only rank-orders values but also measures the distance between them in units of equal intervals.
• The location of the zero point is arbitrary; it does not signify absence of the attribute.
  E.g., a price index.
• The classic example of an interval scale is Fahrenheit temperature, which lacks an absolute zero point.
• Interval scale properties: uniquely classifies; preserves order; equal intervals.
  Examples: the Consumer Price Index (base 100), Fahrenheit temperature.
Ratio Scale
• Ratio scales have absolute rather than relative quantities.
• For example, money and weight are ratio scales because they possess an absolute zero and interval properties.
• Zero represents absence of the given attribute.
• Ratio scale properties: uniquely classifies; preserves order; equal intervals; natural zero.
  Examples: weight and distance.
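A brief illustrative sketch (not from the source) of why ratios are meaningful on a ratio scale but not on an interval scale; the temperatures and weights are hypothetical examples.

```python
# Ratios are meaningful on a ratio scale (true zero) but not on an
# interval scale (arbitrary zero). Values below are hypothetical.

# Weight (ratio scale): 40 kg really is twice 20 kg.
print(40 / 20)  # 2.0 -- a meaningful "twice as heavy"

# Fahrenheit (interval scale): 80 F is NOT "twice as hot" as 40 F.
# Converting the same two temperatures to Celsius changes the ratio,
# showing that the ratio depends on the arbitrary zero point.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

print(80 / 40)                  # 2.0 in Fahrenheit units
print(f_to_c(80) / f_to_c(40))  # ~6.0 in Celsius units -- not invariant
```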
Type of Scale   Numerical Operation                          Descriptive Statistics
Nominal         Counting                                     Frequency in each category;
                                                             percentage in each category
Ordinal         Rank ordering                                Mode; median; range;
                                                             percentile ranking
Interval        Arithmetic operations on intervals           Mean; variance;
                between numbers                              standard deviation
Ratio           Arithmetic operations on actual quantities   Geometric mean; harmonic mean;
                                                             coefficient of variation
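As an illustrative sketch (not from the source), the statistics permitted at each scale level in the table above can be computed as follows; all sample data are invented.

```python
# Hypothetical data illustrating which statistics suit each scale level.
import statistics
from collections import Counter

# Nominal (labels): only counts/percentages are meaningful.
sex = ["M", "F", "F", "M", "F"]
counts = Counter(sex)
print({k: f"{100 * v / len(sex):.0f}%" for k, v in counts.items()})

# Ordinal (ranked categories): mode and median are meaningful.
ratings = [1, 2, 2, 3, 4]  # 1 = poor ... 4 = excellent
print(statistics.mode(ratings), statistics.median(ratings))

# Interval (equal intervals, arbitrary zero): mean and standard deviation.
temps_f = [68.0, 71.5, 69.0, 73.2]
print(statistics.mean(temps_f), statistics.stdev(temps_f))

# Ratio (true zero): geometric/harmonic mean, coefficient of variation.
weights_kg = [55.0, 62.5, 70.0, 58.2]
print(statistics.geometric_mean(weights_kg))
print(statistics.harmonic_mean(weights_kg))
cv = statistics.stdev(weights_kg) / statistics.mean(weights_kg)
print(f"CV = {cv:.2f}")
```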
5.3. CRITERIA FOR GOOD MEASUREMENT

Two major criteria for evaluating measurements:
• Reliability, and
• Validity
The Goal of Measurement: Validity
• Validity is the ability of a scale to measure what was intended to be measured.
• The purpose of measurement is to measure what we intend to measure.
• A student who received a poor grade may say: "I really understood that material because I studied hard. The test measured my ability to do arithmetic and to memorize formulas rather than my understanding of statistics."
• Validity addresses the problem of whether a measure measures what it is supposed to measure.
• Researchers have attempted to assess validity in a variety of ways, asking questions such as:
  - "Is there a consensus among my colleagues that my attitude scale measures what it is supposed to measure?"
  - "Does my measure correlate with others' measures of the 'same' concept?" or
  - "Does the behavior expected from my measure predict the actual observed behavior?"
• There are three measures of validity.
The three measures of validity:
• Face or content validity
• Criterion validity (concurrent and predictive)
• Construct validity
• Face validity or content validity refers to the subjective agreement among professionals that a scale logically appears to reflect accurately what it purports to measure.
• Criterion validity is an attempt by researchers to answer the question "Does my measure correlate with other measures of the 'same' construct?"
Criterion Validity
• Criterion validity may be classified as either concurrent validity or predictive validity.
• In concurrent validity, the new measure is taken at the same time as the criterion measure.
• Predictive validity is established when an attitude measure predicts a future event; a minimal sketch follows.
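As an illustrative sketch (not from the source), criterion validity is commonly assessed by correlating the new measure with the criterion; the scores below are hypothetical.

```python
# Hypothetical illustration of criterion validity: correlate a new
# attitude scale with an established criterion measure.
from scipy.stats import pearsonr

new_scale = [12, 18, 15, 22, 9, 20, 14, 17]   # new measure (same time: concurrent)
criterion = [30, 42, 38, 50, 25, 47, 35, 41]  # established measure of the construct

r, p = pearsonr(new_scale, criterion)
print(f"r = {r:.2f}, p = {p:.4f}")
# A high correlation supports concurrent validity; correlating the new
# scale with a later outcome (a future event) would instead address
# predictive validity.
```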
Reliability
• Reliability is the degree to which measures are free from random error and therefore yield consistent results.
• When the outcome of the measuring process is reproducible, the measuring instrument is reliable.
• Two dimensions underlie the concept of reliability: repeatability and internal consistency.
Reliability can be broken down as follows:
• Stability: test-retest; equivalent forms
• Internal consistency: splitting halves
Reliability: Test-Retest
• Test-retest reliability is assessed by the correlation between the first and second administrations of the measure (a minimal sketch follows this list).
• Two problems with measures of test-retest reliability:
  - The premeasure (or first measure) may sensitize the respondents to their participation in a research project and subsequently influence the results of the second measure.
  - If the time between measures is long, there may be attitude change or other maturation of the subjects.
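An illustrative sketch (not from the source) of test-retest reliability as the correlation between two administrations; the scores are hypothetical.

```python
# Hypothetical test-retest reliability: the same respondents take the
# same measure twice; the correlation between administrations is the
# reliability coefficient.
from scipy.stats import pearsonr

first_administration = [14, 20, 17, 25, 11, 22, 16, 19]
second_administration = [15, 19, 18, 24, 12, 21, 15, 20]  # two weeks later

r, _ = pearsonr(first_administration, second_administration)
print(f"Test-retest reliability: r = {r:.2f}")  # values near 1 indicate stability
```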
Reliability: Internal Consistency
• Internal consistency concerns the homogeneity of the measure.
• An attempt to measure an attitude may require asking several similar (but not identical) questions or presenting a battery of scale items.
• The technique of splitting halves is the most basic method for checking internal consistency when a measure contains a large number of items: the items are split into two halves, each half is scored, and the two half scores are correlated, as in the sketch below.
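An illustrative sketch (not from the source) of split-half reliability, including the standard Spearman-Brown step-up to full-test length; the item scores are hypothetical.

```python
# Hypothetical split-half reliability: split a 6-item scale into odd and
# even items, correlate the half scores, then apply the Spearman-Brown
# formula to estimate the reliability of the full-length test.
from scipy.stats import pearsonr

# Rows = respondents, columns = item scores (1-5 Likert), invented data.
items = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1, 2],
]

odd_half = [sum(row[0::2]) for row in items]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

r_half, _ = pearsonr(odd_half, even_half)
full_test_r = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
print(f"Half-test r = {r_half:.2f}, full-test reliability = {full_test_r:.2f}")
```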
END
