
Research Methodology

Lecture No. 11
(Goodness of Measures)

Recap

• Measurement is the process of assigning numbers or labels to objects, persons, states of nature, or events.

• Scales are sets of symbols or numbers, assigned by rule to individuals, their behaviors, or attributes associated with them.

• Using these scales, we complete the development of our instrument.
• It remains to be seen whether these instruments accurately measure the concept.

Sources of Measurement Differences

Why do ‘scores’ vary? Among the reasons are legitimate differences and differences due to error (systematic or random).
1. There is a true difference in what is being measured.
2. There are differences in stable characteristics of individual respondents.
 On satisfaction measures, for example, there are systematic differences in response based on the age of the respondent.

3. Differences due to short-term personal factors: mood swings, fatigue, time constraints, or other transitory factors.
Example: in a telephone survey of the same person at two different times, differences in measurement may be caused by these factors (tired versus refreshed).
4. Differences due to situational factors: calling when someone is distracted by something versus giving full attention.

5. Differences resulting from variations in administering the survey: voice inflection, nonverbal communication, etc.
6. Differences due to the sampling of items included in the questionnaire.

7. Differences due to a lack of clarity in the measurement instrument (measurement instrument error).
Example: unclear or ambiguous questions.
8. Differences due to mechanical or instrument factors: blurred questionnaires, bad phone connections.

Goodness of Measure

• Once we have operationalized the concept and assigned scales, we want to make sure that the instruments developed measure the concept accurately and appropriately.
• Measure what is supposed to be measured.
• Measure it as well as possible.

• Validity: checks how well an instrument that has been developed measures the concept.
• Reliability: checks how consistently an instrument measures it.

Ways to Check for Reliability
How do we check the reliability of measurement instruments, that is, the stability of measures and the internal consistency of measures?

Two methods for checking stability are discussed.

1. Stability
(a) Test-Retest
 Use the same instrument, administering the test shortly after the first time, taking the measurement in conditions as close to the original as possible, with the same participants.
 If there are few differences in scores between the two tests, then the instrument is stable: it has shown test-retest reliability. A sketch of this check appears after the list below.
 Problems with this approach:
 It is difficult to get cooperation a second time.
 Respondents may have learned from the first test, and thus responses are altered.
 Other factors may be present that alter results (environment, etc.).
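A minimal sketch of the test-retest check, correlating the scores from the two administrations; the score data and the 0.7 rule of thumb are illustrative assumptions, not part of the lecture:

```python
# Test-retest reliability: correlate two administrations of the same
# instrument to the same participants. Scores are hypothetical.
from scipy.stats import pearsonr

time_1 = [4, 5, 3, 4, 2, 5, 4, 3]  # scores at first administration
time_2 = [4, 4, 3, 5, 2, 5, 4, 3]  # scores at second administration

r, _ = pearsonr(time_1, time_2)
print(f"test-retest correlation r = {r:.2f}")
# A high r (a common rule of thumb is r >= 0.7) suggests stability.
```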

(b) Equivalent Form Reliability
 This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability.
 Two questionnaires, designed to measure the same thing, are administered to the same group on two separate occasions (the recommended interval is two weeks).

 If the scores obtained from these two forms are correlated, then the instruments have equivalent form reliability (the same correlation check sketched above applies, with the two forms in place of the two administrations).
 It is tough to create two distinct forms that are equivalent.
 It is an impractical method (as with test-retest) and not often used in applied research.

(2) Internal Consistency Reliability

This is a test of the consistency of respondents’ answers to all the items in a measure. The items should ‘hang together’ as a set.

That is, if the items are independent measures of the same concept, they will correlate with one another (a sketch of one common check follows below).

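A minimal sketch of an internal-consistency check using Cronbach’s alpha, a commonly used statistic for this purpose (the statistic is not named in the lecture, and the item scores below are hypothetical):

```python
# Cronbach's alpha from its standard formula:
# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: shape (n_respondents, n_items)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
# Alpha >= 0.7 is a common rule of thumb for acceptable consistency.
```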
[Slide figure: developing questions on the concept ‘Enriched Job’ (not reproduced)]
Validity

• Definition: whether what was intended to be measured was actually measured.

Face Validity
• The weakest form of validity.
• The researcher simply looks at the measurement instrument and concludes that it will measure what is intended.
• Thus it is, by definition, subjective.

Content Validity

The degree to which the instrument items represent the universe of the concept under study.
In plain English: did the measurement instrument cover all aspects of the topic at hand?

Criterion-Related Validity
• The degree to which the measurement instrument can predict a variable known as the criterion variable.

• Two subcategories of criterion-related validity:
• Predictive Validity
– The ability of the test or measure to differentiate among individuals with reference to a future criterion.
– E.g., an instrument that is supposed to measure an individual’s aptitude can later be compared with that same individual’s future job performance. Those who actually perform well should also have scored high on the aptitude test, and vice versa. A sketch of this check follows below.

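A minimal sketch of a predictive-validity check, correlating aptitude scores at hiring with later performance ratings for the same individuals; all values are hypothetical:

```python
# Predictive validity: does the aptitude score predict the future
# criterion (job performance)? Data are illustrative only.
from scipy.stats import pearsonr

aptitude = [72, 85, 60, 90, 78, 65]           # test scores at hiring
performance = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0]  # later job ratings

r, _ = pearsonr(aptitude, performance)
print(f"predictive validity r = {r:.2f}")
# A substantial positive r indicates the test predicts the criterion.
```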
• Concurrent Validity
– Established when the scale discriminates between individuals who are known to be different; that is, they should score differently on the test.
– E.g., individuals who are content to stay on welfare and individuals who prefer to work should score differently on a scale/instrument that measures work ethic. A sketch of this known-groups check follows below.
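A minimal sketch of a concurrent-validity (known-groups) check, comparing the work-ethic scores of the two groups from the example; the scores and the significance test are illustrative assumptions:

```python
# Concurrent validity: two groups known to differ should score
# differently on the scale. Scores are hypothetical.
from scipy.stats import ttest_ind

on_welfare = [2.1, 2.8, 3.0, 2.5, 2.2]   # expected to score lower
prefer_work = [4.0, 3.8, 4.4, 3.9, 4.2]  # expected to score higher

t_stat, p_value = ttest_ind(prefer_work, on_welfare)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A significant difference in the expected direction supports
# concurrent validity.
```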
Construct Validity
• Does the measurement conform to underlying theoretical expectations? If so, the measure has construct validity.
• I.e., if we are measuring consumer attitudes about product purchases, do the measures adhere to the constructs of consumer behavior theory?
• This is the territory of academic researchers.

• Two approaches are used to assess construct validity:
• Convergent Validity
– A high degree of correlation between two different measures intended to measure the same construct.
• Discriminant Validity
– A low degree of correlation among variables that are assumed to be different.
• Validity can be checked through correlation analysis, factor analysis, the multitrait-multimethod correlation matrix, etc. A simple correlation sketch for the convergent and discriminant cases follows below.
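A minimal sketch of convergent and discriminant validity checks via correlation; the three scales and their scores are hypothetical:

```python
# Convergent: two measures of the same construct should correlate
# highly. Discriminant: measures of different constructs should not.
from scipy.stats import pearsonr

scale_a = [3.2, 4.1, 2.5, 4.8, 3.9, 2.1]
scale_b = [3.0, 4.3, 2.7, 4.5, 4.0, 2.4]  # same construct as scale_a
scale_c = [4.9, 2.2, 3.8, 2.5, 3.1, 4.4]  # a different construct

r_conv, _ = pearsonr(scale_a, scale_b)
r_disc, _ = pearsonr(scale_a, scale_c)
print(f"convergent r = {r_conv:.2f} (expect high)")
print(f"discriminant r = {r_disc:.2f} (expect low)")
```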

• Reflective vs. formative measurement scales:
• In some multi-item measures, items measuring different dimensions of a concept do not ‘hang together’.
• Such is the case with the Job Descriptive Index, which measures job satisfaction along five dimensions, e.g. regular promotions, fairly good chance for promotion, adequate income, highly paid, and good opportunity for accomplishment.

• In this case, items of the ‘adequate income’ and ‘highly paid’ dimensions can be expected to correlate, but items of the ‘opportunity for advancement’ and ‘highly paid’ dimensions might not.
• In this measure, not all the items relate to each other, as its dimensions address different aspects of job satisfaction.
• Such a measure/scale is termed a formative scale.

• In other cases, the dimensions and items of a measure do correlate.
• In this kind of measure/scale, the different dimensions share a common basis (a common interest).
• An example is the Attitude toward the Offer scale.
• Since the items are all focused on the price of an item, all the items are related; hence this scale is termed a reflective scale. An inter-item correlation check that distinguishes the two cases is sketched below.
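A minimal sketch of that check, inspecting the inter-item correlation matrix of a multi-item measure; the response data are hypothetical:

```python
# Inter-item correlations: a reflective scale shows high correlations
# among all items; a formative scale shows high correlations only
# within a dimension, not necessarily across dimensions.
import numpy as np

# rows = respondents, columns = items of the measure (hypothetical)
responses = np.array([
    [5, 4, 5, 2, 1],
    [2, 3, 2, 4, 5],
    [4, 4, 5, 3, 2],
    [1, 2, 1, 5, 4],
])

item_corr = np.corrcoef(responses, rowvar=False)
print(np.round(item_corr, 2))
```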

Recap

