Lecture No. 11
(Goodness of Measures)
Recap
• Using these scales, we complete the development of
our instrument.
• It remains to be seen whether these instruments
accurately measure the concept.
Sources of Measurement Differences
01/25/22
3. Differences due to short-term personal factors – mood
swings, fatigue, time constraints, or other transitory
factors.
Example – in repeated telephone surveys of the same
person, these factors (tired versus refreshed) may
cause differences in measurement.
4. Differences due to situational factors – calling when
someone is distracted by something versus giving full
attention.
• 5. Differences resulting from variations in
administering the survey – voice inflection,
non-verbal communication, etc.
7. Differences due to a lack of clarity in the measurement
instrument (measurement instrument error).
Example: unclear or ambiguous questions.
Goodness of Measure
• Validity: checks how well an instrument measures the
concept it was developed to measure.
• Reliability: checks how consistently an instrument
measures that concept.
Ways to Check for Reliability
The reliability of a measurement instrument can be
checked in terms of the stability of measures and the
internal consistency of measures.
(b) Equivalent Form Reliability
This approach attempts to overcome some of the
problems associated with the test-retest
measurement of reliability.
Two questionnaires designed to measure the same
thing are administered to the same group on two
separate occasions (the recommended interval is two
weeks).
If the scores obtained from these tests are highly
correlated, the instruments have equivalent form
reliability.
It is tough to create two distinct forms that are
genuinely equivalent.
Like test-retest, this is an impractical method and is
not often used in applied research.
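The equivalent-form check described above comes down to correlating the two sets of scores. The sketch below is a minimal illustration with entirely hypothetical respondent data; the Pearson correlation coefficient is the usual statistic, and the 0.8 threshold is a common rule of thumb, not a fixed standard.

```python
# Sketch: equivalent-form reliability as a correlation between scores
# on Form A and Form B of the same questionnaire. Data are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores of the same six respondents on the two forms
form_a = [12, 15, 11, 18, 14, 16]
form_b = [13, 14, 10, 19, 15, 17]

r = pearson_r(form_a, form_b)
# A high r (commonly r >= 0.8) is read as evidence of
# equivalent-form reliability.
```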
(2) Internal Consistency Reliability
Example: developing questions on the concept "Enriched Job".
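Internal consistency reliability is conventionally assessed with Cronbach's alpha, which compares the sum of individual item variances with the variance of the total scores. A minimal sketch, with hypothetical 5-point ratings:

```python
# Sketch: Cronbach's alpha for internal consistency reliability.
# All ratings below are hypothetical.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding respondent ratings."""
    k = len(item_scores)
    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent total
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Four hypothetical items answered by five respondents
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 2, 4, 4],
    [4, 5, 3, 5, 5],
]
alpha = cronbach_alpha(items)
# Alpha above roughly 0.7 is conventionally read as acceptable
# internal consistency.
```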
Validity
Face Validity
• The weakest form of validity.
• The researcher simply looks at the measurement
instrument and concludes that it will measure what
is intended.
• It is thus, by definition, subjective.
Content Validity
Criterion Related Validity
• The degree to which the measurement instrument
can predict a variable known as the criterion
variable.
• Two subcategories of criterion-related validity:
• Predictive Validity
– The ability of the test or measure to differentiate
among individuals with reference to a future criterion.
– E.g. an instrument that is supposed to measure the
aptitude of an individual can be compared with that
individual's future job performance. A good actual
performer should also have scored high on the
aptitude test, and vice versa.
• Concurrent Validity
– Established when the scale discriminates between
individuals who are known to be different; that is,
they should score differently on the test.
– E.g. individuals who are content to avail themselves
of welfare and individuals who prefer to work must
score differently on a scale/instrument that
measures work ethic.
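The known-groups comparison behind concurrent validity can be sketched as computing the standardised difference between the two groups' mean scores (Cohen's d with a pooled standard deviation). The scale name and all scores below are hypothetical:

```python
# Sketch: a known-groups check of concurrent validity. Two groups that
# should differ on a hypothetical work-ethic scale; data are invented.

def mean(xs):
    return sum(xs) / len(xs)

def cohens_d(g1, g2):
    """Standardised mean difference between two groups (pooled SD)."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = mean(g1), mean(g2)
    s1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)  # sample variances
    s2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    pooled = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

employed = [38, 42, 40, 45, 41]  # prefer to work
welfare = [25, 30, 28, 24, 27]   # content with welfare

d = cohens_d(employed, welfare)
# A large, reliable difference between groups known to differ
# supports the scale's concurrent validity.
```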
Construct Validity
• Does the measurement conform to underlying
theoretical expectations? If so, the measure has
construct validity.
• i.e. if we are measuring consumer attitudes about
product purchases, do the measures adhere to
the constructs of consumer behavior theory?
• This is the territory of academic researchers.
• Two approaches are used to measure construct
validity:
• Convergent Validity
– A high degree of correlation between two different
instruments measuring the same concept.
• Validity can be checked through correlation analysis,
factor analysis, the multitrait-multimethod
correlation matrix, etc.
• Reflective vs. formative measurement scales:
• In some multi-item measures that capture
different dimensions of a concept, the items do not
hang together.
• Such is the case with the Job Description Index,
which measures job satisfaction along five different
dimensions, i.e. regular promotions, a fairly good
chance of promotion, adequate income, being highly
paid, and good opportunity for accomplishment.
• In this case, some items of the dimensions "adequate
income" and "highly paid" may be correlated, but
items of the dimensions "opportunity for advancement"
and "highly paid" might not be.
• In this measure, not all the items relate to
each other, as its dimensions address different
aspects of job satisfaction.
• Such a measure/scale is termed a formative scale.
• In some cases the measure's dimensions and items
do correlate.
• In this kind of measure/scale, the different dimensions
share a common basis (a common interest).
• An example is the Attitude towards the
Offer scale.
• Since the items are all focused on the price of an
item, all the items are related; hence this scale is
termed a reflective scale.
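The formative vs. reflective distinction above shows up directly in the inter-item correlation matrix: items from the same dimension should correlate highly, while items from different dimensions of a formative scale need not. A minimal sketch with invented item names and hypothetical 5-point responses:

```python
# Sketch: inspecting inter-item correlations. Same-dimension items
# should correlate; cross-dimension items of a formative scale may not.
# Item names and all responses below are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses of five people to three items
income_adequate = [4, 5, 2, 4, 3]
highly_paid = [4, 5, 2, 5, 3]     # same "pay" dimension
promotion_chance = [2, 3, 5, 2, 4]  # a different dimension

r_same_dim = pearson_r(income_adequate, highly_paid)
r_cross_dim = pearson_r(income_adequate, promotion_chance)
# r_same_dim should be high; r_cross_dim can be low or even negative,
# which is the formative-scale pattern described in the slides.
```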
Recap