
Characteristics of a Good Measurement Tool
Mathematical Statistics
Measurement in Research
Measurement in research consists of assigning numbers to empirical events, objects or properties, or activities in compliance with a set of rules. This implies that measurement is a three-part process:

1. Selecting observable empirical events.
2. Developing a scheme (or mapping rules) for assigning numbers or symbols to represent aspects of the event being measured.
3. Applying the mapping rule(s) to each observation of that event.
Measurement in Research
Illustration of the three-part process of measurement, where the observable empirical event is the people who attended an auto show.
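To make the three-part process concrete, here is a minimal sketch in Python using hypothetical auto-show survey data; the responses, the mapping rule, and the numeric codes are illustrative assumptions, not part of the original example.

```python
# Minimal sketch (hypothetical data): applying a mapping rule to
# observations of auto-show attendees.

# 1. Observable empirical event: each attendee's stated intention to buy.
attendees = ["will buy", "undecided", "will not buy", "will buy"]

# 2. Mapping rule: assign a number to each possible response.
mapping_rule = {"will not buy": 0, "undecided": 1, "will buy": 2}

# 3. Apply the mapping rule to every observation of the event.
measurements = [mapping_rule[response] for response in attendees]
print(measurements)  # [2, 1, 0, 2]
```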
Measurement in Research
The goal of measurement is to provide the highest quality,
lowest-error data for testing hypotheses, estimation or
prediction, or description.
The object of measurement is either a concept, construct
or variable. Concepts, constructs and variables may be
defined descriptively or operationally. An operational
definition defines a variable in terms of specific
measurement and testing criteria.
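As an illustration of an operational definition, the sketch below operationalizes a hypothetical construct, "employee engagement," as the mean of three 5-point survey items with a cutoff of 4.0; the items, scores, and cutoff are invented for illustration only.

```python
# Hypothetical operational definition of "employee engagement":
# the mean of three 5-point survey items, with a mean of 4.0 or
# higher classified as "engaged". Items and cutoff are illustrative.

def engagement_score(item_scores):
    """Average of the survey items (each on a 1-5 scale)."""
    return sum(item_scores) / len(item_scores)

def is_engaged(item_scores, cutoff=4.0):
    """Apply the testing criterion of the operational definition."""
    return engagement_score(item_scores) >= cutoff

print(is_engaged([5, 4, 4]))  # True
print(is_engaged([3, 2, 4]))  # False
```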
What is Measured?
Variables being studied in research may be classified as
objects or as properties.
Objects include tangible items such as people,
automobiles, etc.
Properties are the characteristics of the object such as
weight, height, attitudes, intelligence, leadership ability, etc.
In a literal sense, researchers do not measure either
objects or properties. They measure indicants of the
properties of objects.
What is Measured?
Properties like height, weight, age, years of experience,
or number of employees are easy to measure.
In contrast, it is not easy to measure properties of constructs like attitudes, satisfaction, engagement, work-life balance, or persuasiveness. Since such properties cannot be measured directly, their presence or absence must be inferred by observing some indicant.
The nature of measurement scales, sources of error and
characteristics of sound measurement are considered in
subsequent slides.
Nature of Measurement Scales
Sources of Measurement Differences
The ideal study should be designed and
controlled for precise and unambiguous
measurement of the variables. Since complete
control is unattainable, error does occur.
Sources of Measurement Differences
1. The Respondent
Opinion differences that affect measurement come from relatively stable characteristics of the respondent, such as employee status and social class. Respondents may also suffer from temporary factors like fatigue, boredom, anxiety, or general variations in mood or other distractions; these limit the ability to respond accurately and fully.
Sources of Measurement Differences
2. Situational Factors
Any condition that places a strain on the
interview or measurement session can have
serious effects on the interviewer-respondent
rapport. Examples of such conditions are the presence of another person during the interview, the belief that anonymity is not ensured, and "ambush" interviews.
Sources of Measurement Differences
3. The Measurer
The interviewer can distort responses by rewording,
paraphrasing, or reordering questions.
a. Inflections of voice and conscious or unconscious
prompting with smiles, nods, and so forth, may encourage or
discourage certain replies.
b. Careless mechanical processing, like checking the wrong response or failure to record full replies, will obviously distort findings.
c. Incorrect coding, careless tabulation, and faulty statistical
calculation may introduce further errors.
Sources of Measurement Differences
4. The Instrument
The instrument can be too confusing and ambiguous.
a. Use of complex words and syntax beyond participant
comprehension
b. Leading questions, ambiguous meanings, mechanical defects
(e.g. poor printing), and multiple questions suggest the range
of problems.
One technique used to minimize measurement differences in research instruments is pilot testing.
Characteristics of Good Measurement
There are three major criteria for evaluating a measurement tool:
❖ Validity
❖ Reliability
❖ Practicality
Characteristics of Good Measurement
Validity is the extent to which a test measures what
we actually intend to measure.
Two major forms:
1. External validity of research findings is the data’s
ability to be generalized across persons, settings,
and times.
2. Internal validity is the ability of a research
instrument to measure what it is purported to
measure.
Major forms of Validity
Characteristics of Good Measurement
Reliability
A measure is reliable to the degree that it
supplies consistent results. Reliability is
concerned with estimates of the degree to
which a measurement is free of random or
unstable error.
It is a necessary contributor to validity but it
is not a sufficient condition for validity.
Characteristics of Good Measurement
Perspectives on Reliability
1. Stability – A measure is said to possess
stability if one can secure consistent results
with repeated measurements of the same
person with the same instrument.
Stability is concerned with personal and
situational fluctuations from one time to another.
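A common way to assess stability is test-retest reliability: the correlation between scores from two administrations of the same instrument to the same respondents. Below is a minimal sketch with simulated scores; the values are assumptions for illustration.

```python
import numpy as np

# Minimal sketch (simulated scores): test-retest reliability as the
# correlation between two administrations of the same instrument to
# the same respondents, a common indicator of stability.
time_1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
time_2 = np.array([13, 14, 10, 19, 18, 12, 13, 17])

test_retest_r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability: {test_retest_r:.2f}")
```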
Characteristics of Good Measurement
Perspectives on Reliability
2. Equivalence – considers how much error may
be introduced by different investigators (in
observation) or different samples of items
being studied (in questioning or scales).
Equivalence is concerned with variations at
one point in time among observers and samples
of items. An example of an indicator used to assess equivalence is interrater reliability.
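The sketch below illustrates interrater reliability using Cohen's kappa, one common agreement index, computed for two hypothetical raters classifying the same ten cases; the ratings are invented for illustration.

```python
from collections import Counter

# Minimal sketch (hypothetical ratings): Cohen's kappa as one way to
# quantify interrater reliability for two observers classifying the
# same set of cases into categories.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal proportions per category.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
chance_agreement = sum(
    (counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b)
)

kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(f"Cohen's kappa: {kappa:.2f}")
```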
Characteristics of Good Measurement
Perspectives on Reliability
3. Internal consistency – refers to homogeneity among the items. Among the techniques used are the split-half technique and the Spearman-Brown correction formula.
The split-half technique is used when the measuring tool has many similar questions, while the Spearman-Brown correction formula is used to adjust for the effect of test length and to estimate the reliability of the whole test.
Other measures of internal consistency are KR-20, Cronbach's 𝛼 and McDonald's 𝜔.
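The sketch below illustrates two of these estimates on simulated item scores: split-half reliability stepped up to full test length with the Spearman-Brown correction, r_full = 2·r_half / (1 + r_half), and Cronbach's 𝛼 computed from item and total-score variances. The data and the six-item scale are assumptions for illustration.

```python
import numpy as np

# Minimal sketch (simulated item scores): split-half reliability with
# the Spearman-Brown correction, and Cronbach's alpha, for a short scale.
# Rows are respondents, columns are items (hypothetical 6-item scale).
rng = np.random.default_rng(0)
true_score = rng.normal(size=(50, 1))
items = true_score + rng.normal(scale=0.8, size=(50, 6))

# Split-half: correlate the sum of odd items with the sum of even items,
# then step the half-test correlation up to full length:
# r_full = 2 * r_half / (1 + r_half)   (Spearman-Brown correction).
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Split-half (Spearman-Brown): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")
```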
Summary of reliability estimates
Characteristics of Good Measurement
Practicality
The scientific requirements of a project call for the measurement process to be reliable and valid, while the operational requirements call for it to be practical. Practicality has been defined as economy, convenience, and interpretability.
Selection of a Measurement Scale
Selecting and constructing a measurement
scale requires the consideration of several
factors that influence the reliability, validity and
practicality of the scale.
These are: research objectives, response types,
data properties, number of dimensions,
balanced or unbalanced, forced or unforced
choices, number of scale points and rater errors.
