
Research design

A. K. Sharma
Retd. Professor
IIT Kanpur

Research means different things to different researchers/donors/policy makers
• Generating knowledge (filling gaps in knowledge)
• Resolving academic debates
• Systematic explorations of issues
• Answering theoretical questions
• Guiding policy makers through facts
• Learning about appropriateness and effectiveness of
interventions (monitoring and evaluation/operations
research)
• Advocacy

Research Design decisions
1. What is the study about?
2. Why is the study being made?
3. Where will the study be carried out?
4. What type of data is required?
5. Where can the required data be found?
6. What period of time will the study include?

Cont.
7. What will be the sample design?
8. What techniques of data collection will be
used?
9. How will the data be analyzed?
10. In what style will the report be prepared?

Think how research differs from
• Journalistic writings
• Political arguments
• Statements of vision and mission
• Armchair philosophy or exploring non-scientific hypotheses (Is God male or female? Is religion A more scientific than religion B?)
• Unplanned findings/serendipity
• Taken-for-granted ideas

Thus, to prepare a research design is to do the following:
1. Identify the research problem or hypotheses
2. Review relevant theories
3. Review the context of research (problem domains and target population)
4. Decide what kinds of data you would expect, in that context, to be useful to support or refute each hypothesis

What methods, kinds of data collection,
and kinds of analysis should you use to
evaluate the hypotheses?
1. How to select the sample?
2. How to collect the relevant data?
3. Do the data conform to your expectations?
4. How confident are you about the inferences?
5. Are there new approaches to evaluate the same hypotheses?

Weaknesses of a research design
1. Failing to specify the research problem in sufficient
detail
2. Collecting the wrong kind of data
3. Selecting the wrong scales
4. Selecting the wrong analytical methods
5. Failing to think carefully about what kind of result
would support, weaken, or eliminate a particular
hypothesis
6. Failing to account for the likely sources of errors in
observations/ measures
7. Failing to specify the population
8. Having too small a sample size to estimate parameters or test hypotheses with adequate precision and confidence (see the sketch below)
Do you require indirect measures that you think
make good "proxies" for the observations of
interest?
For example, if you want to observe the number of people
who lived in a prehistoric village, you may have to make
do with measuring something like floor area, which you
think is correlated with the number of inhabitants. Some
archaeologists would call this "operationalizing" your hypothesis
test.

Note that this actually introduces yet another hypothesis (that there is a particular relationship between the number of inhabitants and floor area) that you would also need to evaluate.
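A sketch of how such a proxy hypothesis might itself be evaluated, assuming hypothetical calibration data from villages where both floor area and head counts are known (all numbers below are invented):

```python
# Regress known head counts on floor area to check whether floor area is
# a usable proxy before relying on it at unexcavated sites.
import numpy as np
from scipy import stats

floor_area = np.array([40.0, 55.0, 70.0, 85.0, 100.0, 120.0])  # m^2, hypothetical
inhabitants = np.array([4, 5, 7, 8, 10, 12])                   # known counts, hypothetical

fit = stats.linregress(floor_area, inhabitants)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")

# Only if the relationship is strong can the proxy be used for prediction.
print(f"Predicted inhabitants for 90 m^2: {fit.intercept + fit.slope * 90:.1f}")
```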

Remember
• The hypothesis might call for a very specific kind of observation
• Always think about sources of error and try to
estimate how big they probably are.
• Analyze the data using an analytical method that is
appropriate given the kind of data and the research
question.
• Compare the results of your analysis with the expectations you specified in advance: are they consistent with the hypothesis, or would they be impossible or highly unlikely if the hypothesis were true, leading you to conclude that the hypothesis is probably false?
Avoid
1. Collecting a much larger sample than necessary, thus wasting time that could have been spent more profitably
2. Collecting a sample that is actually not representative
of the population of interest
3. Obtaining results that do not allow you to reject the
hypothesis that any patterns in the data are simply due
to human error, sampling error, or some other kind of
error
4. Obtaining results that do not allow you to tell whether they are due to your hypothesis of interest or to some assumptions you made in "operationalizing" the research
5. Obtaining results that look interesting or provocative, but that are not likely to convince anyone that your hypothesis is true.
Let us now look at some qualitative
research designs
Malhotra and Dash
• Exploratory
• Conclusive
a. Descriptive
b. Diagnostic

Exploratory designs: Is there any effect of ISO 9001 certification on employees’ morale?
• When nothing significant is known
• When you are looking for a holistic view of the
problem
• Unstructured interviews/PRA techniques
should be preferred
• Better if the researcher himself/herself
participates in the fieldwork
• To generate rather than test the hypotheses

Descriptive designs: measurement of employees’ morale and productivity in the post-ISO 9001 certification period

• To learn new facts
• When the aspects of the problem are already discovered
• Here you require a clear conceptual scheme to decide what facts are to be generated
• Emphasis to be placed on scaling and measuring
• Assumptions to be stated clearly
Diagnostic designs: effect of ISO
9001 certification on employees’
morale and productivity
• Scientifically developed
• Involve tests of hypotheses, such as: certification has led to higher employees' morale (see the sketch after this list)
• Have to be prepared with utmost care
• When you have a clear understanding of null
and alternative hypotheses
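A minimal sketch of such a hypothesis test, assuming hypothetical morale scores from independent samples of employees measured before and after certification (all scores invented for illustration):

```python
# Two-sample t-test of the null hypothesis that mean morale is unchanged
# after ISO 9001 certification.
from scipy import stats

pre_certification = [62, 58, 70, 65, 61, 67, 59, 64]    # hypothetical scores
post_certification = [68, 66, 74, 70, 69, 72, 65, 71]

t_stat, p_value = stats.ttest_ind(post_certification, pre_certification)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value speaks against the null and for the alternative
# that certification is associated with higher morale.
```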
Experimental designs
• True experimental designs
• Quasi-experimental designs
• Non-experimental designs such as surveys

The differences between designs

Type of study: Exploratory vs. Descriptive/diagnostic
Overall design: flexible vs. rigid
(i) Sampling design: non-probability vs. probability
(ii) Statistical design: no preplanned design vs. preplanned design
(iii) Observational design: unstructured instruments/PRA vs. structured, well-thought-out instruments for data collection
(iv) Operational design: no fixed design vs. decisions about operational procedures made in advance
True experimental designs
• Pretest-posttest control group design
(Random assignment)
• Posttest-only control group design
• Multiple treatment designs
(can test not only the effects of A and B but
also of A + B)

Be cautious about threats to results

1. History threat – factors other than the treatment
2. Maturation threat – natural changes over time
3. Testing threat – the effect of the measurement itself on findings
4. Instrumentation threat – different instruments at pre-test and post-test
5. Mortality threat – some units have withdrawn
6. Regression threat – extreme scores moving towards the average (simulated in the sketch below)
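A small simulation of the regression threat, with invented score distributions: units selected for extreme pretest scores drift back toward the mean at posttest even without any treatment.

```python
# Regression to the mean: select low pretest scorers and watch their
# posttest mean rise with no intervention at all.
import numpy as np

rng = np.random.default_rng(1)
true_ability = rng.normal(50, 10, 10_000)
pretest = true_ability + rng.normal(0, 5, 10_000)    # noisy measurement
posttest = true_ability + rng.normal(0, 5, 10_000)   # independent noise

selected = pretest < 35                              # "treated" for low scores
print(f"pretest mean of selected:  {pretest[selected].mean():.1f}")   # ~31
print(f"posttest mean of selected: {posttest[selected].mean():.1f}")  # noticeably higher
```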
Pretest-posttest control group design

                               Before   After
Random       Control group:      O1      O2
allocation   Experimental group: O3      O4

Inference: compare the experimental group's change (O4 – O3) with the control group's change (O2 – O1).
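The inference amounts to a difference-in-differences; a minimal worked sketch with invented group means:

```python
# Pretest-posttest control group inference as a difference-in-differences.
o1, o2 = 60.0, 61.0   # control group: before, after (hypothetical means)
o3, o4 = 60.0, 68.0   # experimental group: before, after (hypothetical means)

treatment_effect = (o4 - o3) - (o2 - o1)
print(f"Estimated treatment effect: {treatment_effect:.1f}")  # 7.0
```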

Posttest-only control group design

There are no pretest observations. How does it matter?

Multiple treatment designs

Could you think of some advantages and disadvantages of this?
Factorial design
Dependent variable: perceived ability to perform certain roles among middle-level managers

4 × 3 SIMPLE FACTORIAL DESIGN

Control variable    Experimental variable (Training)
(Education)         Treatment A:  Treatment B:  Treatment C:        Treatment D:
                    no training   lectures      audio-visual aids   demonstration
Level I             Cell 1        Cell 4        Cell 7              Cell 10
Level II            Cell 2        Cell 5        Cell 8              Cell 11
Level III           Cell 3        Cell 6        Cell 9              Cell 12
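A minimal sketch of how such a design might be analysed with a two-way ANOVA, assuming two hypothetical managers per cell (all scores invented; since the fabricated data are built additively, the interaction term should come out negligible, whereas real data could show training effects that differ by education level):

```python
# Two-way ANOVA for the 4 x 3 factorial design: main effects of training
# and education plus their interaction.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rows = []
for training, base in [("A", 55), ("B", 60), ("C", 64), ("D", 68)]:
    for education, bump in [("I", 0), ("II", 3), ("III", 6)]:
        for noise in (-1, 1):   # two hypothetical observations per cell
            rows.append({"training": training, "education": education,
                         "score": base + bump + noise})
data = pd.DataFrame(rows)

model = ols("score ~ C(training) * C(education)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for each effect
```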

Factorial designs are especially suitable to study interaction effects

[Figure: mean effect on girls' enrolment of three treatments – more facilities, bicycle distribution, motivational programmes – compared across Level I (Caste A) and Level II (Caste B)]
Quasi-experimental design
• Non-equivalent control group design
(individuals not randomly assigned to
experimental and control group)
• Time series design (similar to the non-
experimental pretest-posttest design)
• Separate sample pretest-posttest design

Non-experimental design
• Posttest only design (only experimental group
and experimental data)
• Pretest-posttest design (pre- and post-observations on a single group, with no control group)
• Static group comparison (experimental group
observations are compared with comparison
group)

This takes us to the issue of:

Is causal analysis possible with non-experimental data?
You can depend on multivariate statistical analysis in the absence of the following:

a. the possibility of randomization and matching
b. time series data
c. a baseline survey
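A minimal sketch of that idea, with invented variables: when certification is not randomized and correlates with tenure, the naive group comparison is biased, while regression adjustment for the confounder approximates the true effect.

```python
# Multivariate adjustment in non-experimental data (hypothetical variables).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
tenure = rng.normal(10, 4, n)
# Certification is NOT randomized: longer-tenured units adopt it more often.
certified = (tenure + rng.normal(0, 2, n) > 10).astype(float)
morale = 50 + 5 * certified + 0.8 * tenure + rng.normal(0, 3, n)

# The naive comparison is inflated by the tenure confounder...
print(morale[certified == 1].mean() - morale[certified == 0].mean())
# ...while controlling for tenure recovers roughly the true effect of 5.
X = sm.add_constant(np.column_stack([certified, tenure]))
print(sm.OLS(morale, X).fit().params[1])
```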

Qualitative research, employing an exploratory design, is more suited to the postmodern paradigm
• The idea of one grand narrative has gone
• Verstehen/ Understanding – community to
organizations
• Postmodern/ post-structuralist theory

Qualitative research: theory
Need for a theory
• Geertz’s interpretation and thick description – the researcher’s interpretation; descriptions of values, beliefs, actions; stories of what is significant; and how culture is symbolically constructed
• George Herbert Mead’s Symbolic interaction: signs, symbols, self-
fulfilling prophecy; identity management and studies of socialization
• Ethnography of communication
• Feminism
• Participatory research
• Sensemaking: making sense through “enactment, selection and retention”
• Structuration

Source: Sarah J. Tracy, Qualitative Research Methods, Wiley-Blackwell, 2013.


Qualitative research: negotiating access
• Complete participant – ardent activist, aims not
revealed to all community/movement members
• Play participant – controlled scepticism
• Focused participant observer – clear role/ explicit
researcher, clear agenda
• Complete observer – no questioning of the actor,
only observing things with “detachment and
separation”
Source: Sarah J. Tracy, Qualitative Research Methods, Wiley-Blackwell, 2013.

Bricolage/ triangulation

Technical issues are not everything
In research you often face:
• Ethical issues
• Practical issues
• Financial issues
• Administrative concerns

Ethical issues
• Informed consent
• Balance the interests of society on the one
hand and subjects of study on the other
• Respect people’s rights and dignity
• Do not deny services or facilities to study participants

Other issues
• Practical, financial and administrative
• Priorities of the donors
• Climate
• Environmental culture
• Beliefs about good research
• Policies of the organization

In the final analysis, the act of defending the research design is a political act.

Where is politics in research design?
To recapitulate, the major steps of research are
• Statement of the problem
• Purpose of the study
• Rationale of the study
• Review of literature
• Objectives
• Conceptual framework
• Hypotheses (conjectural relationship between two or
more variables as antecedent and consequent variables)
• Methodology (specifying the approach, sampling, tools
of data collection, fieldwork)
• Analysis of data
• Reporting the findings of the study
• Limitations
• Problems for future research
Thank you!

