
Bryman & Bell, Business Research Methods, 2nd edition, Chapter 6

The Nature of Quantitative Research

‘A significant part of the research process entails convincing others of the significance and validity of one's findings’.

Authored by David McHugh


The Main Steps in Quantitative Research
1. Theory

2. Hypothesis (deductive stage)

3. Research design

4. Derive measures of concepts

5. Select research sites

6. Select research subjects/respondents

7. Administer research instruments/collect data

8. Process data

9. Analyse data

10. Findings/conclusions

11. Write up findings/conclusions

(Fig. 6.1. The feedback loop from the findings back to theory constitutes the inductive stage.)


What are Concepts?
• Concepts are:
– Building blocks of theory
– Labels that we give to elements of the social world
– Categories for the organization of ideas and
observations (Bulmer)

• Concepts are useful for:
– Providing an explanation of a certain aspect of the social world
– Standing for things we want to explain
– Providing a basis for measuring changes or variations

Why Measure?
• To delineate fine differences between people,
organizations, or any other unit of analysis

• As a consistent device for gauging distinctions

• To produce precise estimates of the degree of relationship between concepts

Devising Indicators
• Through self-report questions on attitude (e.g. job
satisfaction), status (e.g. job title) or behaviour (e.g. job tasks
and responsibilities)
• Through the recording of individuals' behaviour using a
structured observation schedule (e.g. managerial activity)
• Through official statistics, such as the use of WERS survey
data (Research in focus 2.15) to measure UK employment
policies and practices
• Through content analysis, for example, to determine changes
in the salience of an issue, such as courage in managerial
decision making (Harris 2001)

See Key concept 6.2

Why Use More Than One Indicator?
• Single indicators may incorrectly classify many
individuals
• Single indicators may capture only a portion of the
underlying concept or be too general
• Multiple indicators can make finer distinctions
between individuals
• Multiple indicators can capture different
dimensions of a concept
See Research in focus 6.3 & 6.4

Types of Reliability
• Stability
– is the measure stable over time?
• e.g. test–retest method

• Internal reliability
– are the indicators consistent?
• e.g. split-half method

• Inter-observer consistency
– is the measure consistent between observers?
see Research in focus 6.5 & 6.6
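
A minimal sketch of how the first two checks are typically computed, in Python with invented data (the six respondents and the four-item scale are assumptions for illustration, not from the textbook):

import numpy as np

# Hypothetical data: six respondents complete a four-item job-satisfaction
# scale, then repeat the same scale on a second occasion.
time1 = np.array([[4, 5, 4, 3],
                  [2, 1, 2, 2],
                  [5, 5, 4, 5],
                  [3, 3, 2, 3],
                  [1, 2, 1, 1],
                  [4, 4, 5, 4]])
time2 = np.array([[4, 4, 4, 3],
                  [2, 2, 1, 2],
                  [5, 4, 5, 5],
                  [3, 2, 3, 3],
                  [1, 1, 2, 1],
                  [4, 5, 4, 4]])

# Stability (test-retest): correlate total scores across the two occasions.
test_retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]

# Internal reliability (split-half): correlate the two halves of the scale,
# then apply the Spearman-Brown correction for the halved scale length.
half_a = time1[:, [0, 2]].sum(axis=1)   # items 1 and 3
half_b = time1[:, [1, 3]].sum(axis=1)   # items 2 and 4
r_halves = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_halves / (1 + r_halves)

print(f"test-retest r = {test_retest_r:.2f}; split-half = {split_half:.2f}")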

Types of Validity
• Face validity
• Concurrent validity
• Predictive validity
• Construct validity
• Convergent validity
see Key concept 6.7
Face Validity
A researcher developing a new measure could establish that it has face validity, i.e. that the measure reflects the content of the concept in question. This might be established by asking other people whether the measure seems to be getting at the concept that is the focus of attention. For example, people with experience or expertise in a field might be asked to act as judges, to determine whether on the face of it the measure seems to reflect the concept concerned. Face validity is, therefore, an essentially intuitive process.

Concurrent Validity
Researchers can also gauge the concurrent validity of measures,
employing a criterion relevant to the concept in question on which cases
(e.g. people) are known to differ. For a new measure of job satisfaction a
criterion might be absenteeism, some people being absent from work
(other than through illness) more often than others. Thus we might see
how far people who are satisfied with their jobs are less likely to be
absent than those who are not satisfied. A lack of correspondence, such as
there being no difference in levels of job satisfaction among frequent
absentees, might cast doubt on whether our measure is really addressing
job satisfaction.
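
The underlying check is a simple correlation between measure and criterion. A minimal sketch in Python with invented figures (the data and variable names are assumptions for illustration, not from the textbook):

import numpy as np

# Hypothetical data for eight employees: score on the new job-satisfaction
# measure and days absent (excluding illness) over the same period.
satisfaction = np.array([8, 3, 9, 5, 2, 7, 6, 4])
days_absent  = np.array([1, 6, 0, 3, 8, 2, 2, 5])

# Concurrent validity: the criterion is observed at the same time as the
# measure. A clear negative correlation supports the new measure; no
# correlation would cast doubt on it.
r = np.corrcoef(satisfaction, days_absent)[0, 1]
print(f"satisfaction vs. absenteeism: r = {r:.2f}")

# For predictive validity the logic is identical, except that days_absent
# would be recorded in a later period than the satisfaction scores.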

Predictive Validity

Another possible test for the validity of a new measure is predictive validity, whereby the researcher uses a future criterion measure rather than a contemporary one, as in the case of concurrent validity. With predictive validity, the researcher would take future levels of absenteeism as the criterion against which the validity of a new measure of job satisfaction would be examined. The difference from concurrent validity is that a future rather than a simultaneous criterion measure is employed.

Construct Validity

Researchers could also estimate the construct validity of a measure by deducing hypotheses from a theory relevant to the concept. For example, drawing on ideas about the impact of technology on the experience of work, a researcher might anticipate that people who are satisfied with their jobs are less likely to work on routine jobs, while those who are not satisfied are more likely to do so. We could investigate this by examining the relationship between job satisfaction and job routine. However, some caution is required: either the theory or the deduction made from it might be misguided, or the measure of job routine could be an invalid measure of that concept.
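
As a rough illustration of checking such a deduced hypothesis, a minimal Python sketch with invented data (the groups and figures are assumptions, not from the textbook):

import numpy as np

# Hypothetical data: job-satisfaction scores grouped by whether the
# respondent's job is routine, to check the theoretically deduced hypothesis.
routine     = np.array([3, 4, 2, 5, 3, 4])
non_routine = np.array([7, 6, 8, 5, 7, 9])

# Construct validity: theory predicts lower satisfaction in routine jobs.
# A clear difference in means is consistent with the hypothesis, though the
# theory, the deduction, or the measure of routineness could still be wrong.
print(f"routine mean = {routine.mean():.1f}; "
      f"non-routine mean = {non_routine.mean():.1f}")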

Convergent Validity
The validity of a measure could be gauged by comparing it to measures
of the same concept developed through other methods. For example, if
we develop a questionnaire measure of how much time managers spend
on various activities (such as attending meetings, touring their
organization, informal discussions, etc.), we might examine its validity by
tracking a number of managers and using a structured observation
schedule to record how much time they spend on various activities and how
frequently these occur.

see Key concept 6.8 & 6.9
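
A minimal sketch of that comparison in Python with invented figures (the activity, names, and data are assumptions for illustration, not from the textbook):

import numpy as np

# Hypothetical data: minutes per day spent in meetings for five managers,
# measured twice: by the self-report questionnaire and by structured
# observation of the same managers.
questionnaire = np.array([120, 45, 200, 90, 150])
observation   = np.array([100, 60, 180, 95, 140])

# Convergent validity: measures of the same concept obtained by different
# methods should agree. A high correlation supports the questionnaire measure.
r = np.corrcoef(questionnaire, observation)[0, 1]
print(f"questionnaire vs. observation: r = {r:.2f}")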

Main Preoccupations of Quantitative Researchers

1. Measurement

2. Causality

3. Generalization

4. Replication

Measurement
Concerns:
– Operational definitions
– Mapping of properties or characteristics
– Following rules or procedures
– Generalizability of findings
– Establishing reliability & validity

Causality
Concerns:
– Explanation
• why things are the way they are

– Direction of causal influence
• relationship between dependent & independent variables

– Confidence
• in the researcher's causal inferences

Generalization
Concerns:
– Can findings be generalized beyond the
confines of the particular context?
– Can findings be generalized from sample
to population?
– How representative are samples?

see Research in focus 6.10

Replication
Concerns:
– Minimizing contamination from researcher
biases or values
– Explicit description of procedures
– Control of conditions of study
– Ability to replicate in differing contexts

see Research in focus 6.11


Criticisms of Quantitative Research
• Quantitative researchers fail to distinguish people and
social institutions from ‘the world of nature’
• The measurement process possesses an artificial and
spurious sense of precision and accuracy
• The reliance on instruments and procedures hinders the
connection between research and everyday life
• The analysis of relationships between variables creates a
static view of social life that is independent of people's
lives

Is It Always Like This?
Concerns:
– gap between textbook accounts of
research practice and actual research
practice
– providing accounts of good practice

– time, cost, and feasibility

Common Departures From Good Practice
• Reverse operationism
– operational concepts can be produced
inductively
• Reliability and validity testing
– researchers may not follow recommended
practices
• Sampling
– use of non-probability or convenience samples
