
Unit 9: Validity and Reliability of a Research Instrument

Outline

1 Validity
The Concept of Validity
Types of Validity

2 Reliability
The Concept of Reliability
Methods of Determining Reliability

[Figure: Research Journey]

The Concept of Validity


What is validity?
► Validity is the ability of a research instrument to measure what it is designed to measure.
► Validity is defined as "the degree to which the researcher has measured what he has set out to measure" (Smith 1991, 106).
► The most common definition of validity is epitomized by the question: "Are we measuring what we think we are measuring?" (Kerlinger, 1973, 457)
► The "extent to which an empirical measure adequately reflects the real meaning of the concept under consideration" (Babbie, 1989).

There are two perspectives on validity:
► Is the research investigation providing answers to the research questions for which it was undertaken?
► If so, is it providing these answers using appropriate methods and procedures?

The Concept of Validity: Key questions

1 Who decides whether an instrument is measuring what it is supposed to measure?
The person who designed the study, the readership of the report, and experts in the field.
2 How can it be established that an instrument is measuring what it is supposed to measure?
In the social sciences there appear to be two approaches:
► Logical approach: providing a justification for each question in relation to the objectives of the study.
Easy if the questions relate to tangible matters.
Difficult where we are measuring attitudes, effectiveness, satisfaction, etc.
► Statistical approach: calculating the correlation coefficients between the questions and the outcome variables.
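The statistical approach above can be sketched in a few lines of Python. This is a minimal illustration with invented data: the responses to one question and the outcome scores are hypothetical, and the `pearson` helper implements the standard correlation coefficient.

```python
# Minimal sketch of the statistical approach, using invented data:
# scores on one question (1-5 scale) and an outcome variable for five
# hypothetical respondents. A strong correlation between the question
# and the outcome supports the validity of that question.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

question_scores = [1, 2, 3, 4, 5]       # hypothetical item responses
outcome_scores = [10, 12, 15, 19, 20]   # hypothetical outcome measure

print(round(pearson(question_scores, outcome_scores), 3))  # → 0.987
```

In practice the same coefficient would be computed for every question in the instrument; items that correlate weakly with the outcome are candidates for revision or removal.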


Types of Validity


Face and Content Validity


Content validity:
► The extent to which a measuring instrument covers a representative sample of the domain of the aspects measured.
► Whether the items and questions cover the full range of the issue or problem being measured.
► Coverage of the issue should be balanced; that is, each aspect should have similar and adequate representation in the questions.

Face validity:
► The extent to which a measuring instrument appears valid on its surface.
► Each question or item on the research instrument must have a logical link with the objectives.

Problems:
► Judgement is based upon subjective logic.
► The extent to which questions reflect the objectives of a study may differ.

Concurrent and Predictive Validity


Concurrent validity:
► Judged by how well an instrument compares with a second assessment done concurrently.
► Compares the findings of the instrument with those of another, well-accepted instrument.
► Example: comparing a newly designed test to measure intelligence with existing IQ tests.

Predictive validity:
► Judged by the degree to which an instrument can forecast an outcome.
► Cannot be used for all measures.
► Example: comparing SAT scores with GPA in college, or comparing job performance with an aptitude or ability test.
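The intelligence-test example can be sketched numerically. All data below are invented: five hypothetical respondents take a newly designed test and an established IQ test at the same time, and the two score sets are correlated.

```python
# Sketch of judging concurrent validity, with invented data: the same
# five respondents take a newly designed intelligence test and a
# well-accepted IQ test concurrently; a high correlation between the
# two sets of scores supports the new test's concurrent validity.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

new_test = [52, 61, 70, 75, 88]          # hypothetical new-test scores
accepted_iq = [95, 102, 110, 113, 125]   # same respondents, accepted test

r = pearson(new_test, accepted_iq)
print(f"concurrent validity coefficient: r = {r:.2f}")
```

Predictive validity would be computed the same way, except that the criterion scores (e.g. college GPA) are collected later rather than concurrently.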

Construct Validity

Assesses the extent to which a measuring instrument accurately measures the theoretical construct (e.g. an attitude, aptitude or personality trait) it is designed to measure.
Measured by correlating performance on the test with performance on a test for which construct validity has already been determined.
Determined by ascertaining, using statistical procedures, the contribution of each construct to the total variance observed in a phenomenon. The greater the variance attributable to the construct, the higher the validity of the instrument.
Common statistical techniques used in establishing construct validity include correlational analysis and factor analysis.
Examples: job satisfaction, trust, customer loyalty, self-esteem, etc.
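The "variance attributable to the construct" idea can be illustrated with a squared correlation. Everything here is invented: a hypothetical new job-satisfaction scale is correlated with an established scale whose construct validity is assumed, and r² estimates the share of variance the two measures share.

```python
# Sketch, with invented data: scores on a new job-satisfaction scale are
# correlated with a scale whose construct validity is assumed to be
# established; r squared then estimates the proportion of variance
# attributable to the shared construct.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

new_scale = [3, 5, 2, 4, 4]             # hypothetical new-scale scores
validated_scale = [30, 48, 22, 41, 39]  # hypothetical validated scale

r = pearson(new_scale, validated_scale)
print(f"r = {r:.2f}; shared variance = {r * r:.0%}")
```

Factor analysis, the other technique named on the slide, extends this idea to many items at once, but a full example requires more machinery than fits here.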

Types of Validity: Summary

Validity     Description
Content      Does the measure adequately measure the concept?
Face         Do "experts" validate that the instrument measures what its name suggests it measures?
Concurrent   Does the measure differentiate in a manner that helps to predict a criterion variable currently?
Predictive   Does the measure differentiate individuals in a manner that helps to predict a future criterion?
Construct    Does the instrument tap the concept as theorized?


The Concept of Reliability


What is reliability?
► A research tool is reliable if it is consistent, stable, predictable and accurate when used repeatedly. The greater the degree of consistency and stability in a research instrument, the greater its reliability.
► A scale or test is reliable to the extent that repeat measurements made by it under constant conditions will give the same result.
► Reliability is the degree of accuracy or precision in the measurements made by a research instrument.

The concept of reliability can be looked at from two sides:
1 How reliable is an instrument?
Focuses on the ability of an instrument to produce consistent measurements.
2 How unreliable is it?
Focuses on the degree of inconsistency (error) in the measurements made by an instrument.

Validity vs. Reliability


Reliability is a necessary contributor to validity but is not a sufficient condition for validity.
► If a measure is not valid, it hardly matters that it is reliable, because it does not measure what needs to be measured in order to solve the research problem.

Example: when a bathroom scale weighs you
► correctly (judged against a concurrent criterion, such as a scale known to be accurate), it is both reliable and valid.
► consistently 2 kg too heavy, it is reliable (you get the same result each time) but not valid (it does not allow you to draw accurate conclusions about your weight).
► erratically from one time to the next, it is not reliable, and therefore cannot be valid.

In this context, reliability is not as valuable as validity, but it is much easier to assess.

Reliability
Factors affecting the reliability of a research instrument:
1 The wording of questions
2 The physical setting
3 The respondent’s mood
4 The interviewer’s mood
5 The nature of interaction
6 The regression effect of an instrument

Methods of determining reliability in quantitative research:
There are a number of ways of determining the reliability of an instrument, and these can be classified as:
► External consistency
Test/retest
Parallel forms of the same test
► Internal consistency
The split-half technique

External Consistency Procedures


External consistency procedures compare findings from two independent processes of data collection as a means of verifying the reliability of the measure. The two methods of doing this are as follows:

1 Test/retest (repeatability test)
► An instrument is administered once, and then again, under the same or similar conditions.
► The ratio of test to retest scores indicates the reliability of the instrument: the closer the ratio is to 1, the higher the reliability.
► The greater the difference between the test scores or findings, the greater the unreliability of the instrument.
► Advantage: it permits the instrument to be compared with itself.
► Disadvantage: a respondent may recall the responses they gave in the first round (this can be mitigated by increasing the time span between the two tests).
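In practice, test/retest agreement is usually quantified by correlating the two administrations. The sketch below uses invented data: the same instrument is administered twice to the same five hypothetical respondents.

```python
# Sketch of the test/retest (repeatability) check, with invented data:
# the same instrument is administered twice to the same five respondents
# and the two score sets are correlated; a coefficient near 1 indicates
# a reliable instrument.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

test = [18, 22, 25, 30, 27]     # hypothetical first administration
retest = [19, 21, 26, 29, 28]   # same respondents, second administration

r = pearson(test, retest)
print(f"test-retest reliability: r = {r:.2f}")
```

Parallel-forms reliability is computed the same way, except the second score set comes from a second, comparable instrument rather than a repeat of the first.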

External Consistency Procedures

2 Parallel forms of the same test
► Two instruments intended to measure the same phenomenon are constructed and administered to two similar populations.
► The results obtained from one test are compared with those of the other. If the results are similar, the instrument is reliable.
► Advantage: does not suffer from the recall problem, and a time lapse between the two tests is not required.
► Disadvantages:
The need to construct two instruments instead of one.
The difficulty of constructing two instruments that are comparable in their measurement of a phenomenon.
The difficulty of achieving comparability in the two population groups and in the two conditions under which the tests are administered.


Internal Consistency Procedures


The idea behind internal consistency procedures is that items or questions measuring the same phenomenon should, if they are reliable indicators, produce similar results regardless of where they appear in the instrument. The following method is commonly used for measuring the reliability of an instrument in this way:

1 Split-half technique
► Half of the items in the research instrument are correlated with the other half.
► The questions are divided into halves in such a way that any two questions intended to measure the same aspect fall into different halves.
► The scores obtained by administering the two halves are correlated.
► Reliability is calculated from the correlation between the scores obtained from the two halves (the result is often reported alongside Cronbach's alpha, a related and widely used measure of internal consistency).
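The split-half steps can be sketched in Python with invented data. The Spearman-Brown correction used at the end is the classical way to step from the half-test correlation to a full-length reliability estimate.

```python
# Sketch of the split-half technique, with invented data: six items per
# respondent are split into odd- and even-numbered halves, the two half
# scores are correlated, and the Spearman-Brown formula adjusts the
# half-test correlation up to a full-length reliability estimate.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

# Rows are respondents; columns are six items scored 1-5 (hypothetical).
responses = [
    [4, 5, 4, 4, 5, 3],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
]
odd_half = [sum(row[0::2]) for row in responses]    # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6

r_half = pearson(odd_half, even_half)
reliability = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
print(f"split-half r = {r_half:.2f}; corrected reliability = {reliability:.2f}")
```

Splitting by odd/even position is one simple way to keep paired questions in different halves; any split that satisfies that condition would do.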