
Chapter 9

Measurement of Variables
MONEY MATTERS?
 A company wants to carry out a customer relationship management (CRM) employee evaluation process that will allow an overall ranking of all CRM employees.

 The key question is, “What is performance?”

WHAT DO I MEASURE?
 Measurement
 The process of describing some property of a phenomenon, usually
by assigning numbers in a reliable and valid way.

 Concept
 A generalized idea about a class of objects that has been given a name; an abstraction of reality that is the basic unit for theory development. Every discipline and theory is made up of concepts, e.g. key ideas, key words, key phrases.

OPERATIONAL DEFINITIONS
 Operationalization
 The process of identifying scales that correspond to variance in a
concept involved in a research process.
 Scales
 A device providing a range of values that correspond to different
values in a concept being measured.
 Correspondence rules
 Indicate the way that a certain value on a scale corresponds to some
true value of a concept.
 Constructs
 A term used for concepts that are measured with multiple variables.
REVIEW OF TERMS
 Concept: a bundle of meanings or characteristics associated with certain events, objects, conditions, situations, or behaviors.

 Construct: an image or idea specifically invented for a given research and/or theory-building purpose.
 We build constructs by combining the simpler, more concrete concepts, especially when the idea or image we intend to convey is not subject to direct observation.

 Variable: an event, act, characteristic, trait, or attribute that can be measured and to which we assign numerals or values; a synonym for the construct or the property being studied.

 Operational definition: a definition for a construct stated in terms of specific criteria for testing or measurement; refers to an empirical standard (we must be able to count, measure, or gather information about the standard through our senses).



LEVELS OF SCALE MEASUREMENT
 Nominal
 Assigns a value to an object for identification or classification
purposes.
 Most elementary level of measurement.

 Ordinal
 Ranking scales allowing things to be arranged based on how much
of some concept they possess.

LEVELS OF SCALE MEASUREMENT
(CONT’D)
 Interval
 Interval scales have both nominal and ordinal properties.
 But they also capture information about differences in
quantities of a concept.

 Ratio
 Highest form of measurement.
 Have all the properties of interval scales with the
additional attribute of representing absolute quantities.
 Absolute zero.

EXHIBIT 7.2 NOMINAL, ORDINAL, INTERVAL, AND RATIO
SCALES PROVIDE DIFFERENT INFORMATION

SUMMARY OF SCALES BY DATA
LEVELS
 Nominal
 Characteristics: classification (mutually exclusive and collectively exhaustive categories), but no order, distance, or natural origin.
 Empirical operations: count (frequency distribution); mode as central tendency; no measure of dispersion; used with other variables to discern patterns and reveal relationships.

 Ordinal
 Characteristics: classification and order, but no distance or natural origin.
 Empirical operations: determination of greater or lesser value; count (frequency distribution); median as central tendency; nonparametric statistics.

 Interval
 Characteristics: classification, order, and distance (equal intervals), but no natural origin.
 Empirical operations: determination of equality of intervals or differences; count (frequency distribution); mean or median as central tendency; standard deviation or interquartile range as measure of dispersion; parametric tests.

 Ratio
 Characteristics: classification, order, distance, and natural origin.
 Empirical operations: determination of equality of ratios; any of the above operations, plus multiplication and division; mean as central tendency; coefficient of variation as measure of dispersion.
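
To make the summary concrete, here is a minimal sketch (Python, assuming pandas and numpy are available; the DataFrame and column names are hypothetical) of the summary statistics that are defensible at each data level:

# Illustrative only: which summary statistics fit each scale level.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "South", "East"],        # nominal
    "service_rank": [2, 1, 3, 2],                         # ordinal (1 = best)
    "satisfaction": [4, 5, 3, 4],                         # interval (1-5 rating)
    "annual_sales": [120_000.0, 95_000.0, 143_000.0, 88_000.0],  # ratio
})

# Nominal: counts and the mode only.
print(df["region"].value_counts(), df["region"].mode()[0])

# Ordinal: the median (and nonparametric statistics).
print(df["service_rank"].median())

# Interval: mean and standard deviation become meaningful.
print(df["satisfaction"].mean(), df["satisfaction"].std())

# Ratio: an absolute zero permits ratios, e.g. the coefficient of variation.
print(df["annual_sales"].std() / df["annual_sales"].mean())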



MATHEMATICAL AND STATISTICAL
ANALYSIS OF SCALES
 Discrete Measures
 Measures that can take on only one of a finite number of values.
 Continuous Measures
 Measures that reflect the intensity of a concept by assigning
values that can take on any value along some scale range.

INDEX MEASURES

 Attributes
 Single characteristics or fundamental features that pertain
to an object, person, situation, or issue.
 Index Measures
 An index assigns a value based on how much of the
concept being measured is associated with an observation.
 Indexes often are formed by putting several variables
together.
 Composite Measures
 Assign a value to an observation based on a mathematical derivation of multiple variables.
COMPUTING SCALE VALUES
 Summated Scale
 A scale created by simply summing (adding together) the responses to the items making up the composite measure.
 Reverse Coding
 A method of making sure all items forming a composite scale are scored in the same direction: negatively worded items are recoded so that each response takes the value opposite its original, putting them in the same direction as the positively worded items.

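A minimal sketch of both ideas, assuming three hypothetical 1-5 Likert items where item3 is negatively worded:

# Summated scale with reverse coding (hypothetical item names and responses).
import pandas as pd

responses = pd.DataFrame({
    "item1": [5, 4, 2],   # positively worded
    "item2": [4, 5, 1],   # positively worded
    "item3": [1, 2, 5],   # negatively worded -> must be reverse coded
})

SCALE_MAX = 5  # highest response point on the scale

# Reverse coding: new value = (scale maximum + 1) - old value,
# so a 1 becomes 5, a 2 becomes 4, and so on.
responses["item3_rev"] = (SCALE_MAX + 1) - responses["item3"]

# Summated scale: add the consistently scored items together.
responses["composite"] = responses[["item1", "item2", "item3_rev"]].sum(axis=1)
print(responses)
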
RECODING MADE EASY
1. Click on transform.

2. Click on recode.

3. Choose to recode into the same variable.

4. Select the variable(s) to be recoded.

5. Click on old and new values.

6. Use the menu that appears to enter the old values and the matching
new values. Click add after entering each pair.

7. Click continue.
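
The steps above refer to SPSS menus; for anyone recoding in Python instead, a rough pandas equivalent (the variable name and the old/new value pairs are hypothetical) is:

# Rough equivalent of recoding into the same variable.
import pandas as pd

df = pd.DataFrame({"item3": [1, 2, 3, 4, 5]})

# Step 6 equivalent: each old value paired with its matching new value.
old_to_new = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

# Recode into the same variable (overwrites the original values).
df["item3"] = df["item3"].map(old_to_new)
print(df)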


THREE CRITERIA FOR GOOD
MEASUREMENT

 Reliability
 Validity
 Sensitivity
RELIABILITY
 Reliability
 The degree to which measures are free from random error and
therefore yield consistent results.
 Internal Consistency
 Represents a measure’s homogeneity or the extent to which
each indicator of a concept converges on some common
meaning.

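One widely used estimate of internal consistency is Cronbach's alpha, which compares the individual item variances with the variance of the summed scale. A minimal sketch with hypothetical item responses (items already scored in the same direction):

# Cronbach's alpha for a small set of hypothetical scale items.
import numpy as np

items = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
])  # rows = respondents, columns = scale items

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))
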
HTTP://EN.WIKIBOOKS.ORG/WIKI/HANDBOOK_OF_MANAGEMENT_SCALES#UNIDIMENSIONAL_CONSTRUCTS
Handbook of Management Scales/Reward system

Reward system (composite reliability = 0.87)
Description

The authors follow a two-phased approach to develop multi-item scales that measure dimensions of service orientation in the context of business-to-business e-commerce. Service orientation was conceptualized as a third-order construct comprised of five combinative service competency bundles: service climate; market focus; process management; human resource policy; and metrics and standards.

One of these five second-order dimensions, human resource policy, consists of two first-order dimensions: human capital and reward system.

Definition

The extent to which a company has a reward system in place to reward employees on the basis of their performance.

Items

 Our employees are rewarded on the basis of how well they perform on non-financial measures (e.g. customer satisfaction) as well as financial measures. (0.76)
 We reward our employees based on customer loyalty metrics. (0.75)
 For our employees, recognition is based on exceeding both internal and external service
expectations. (0.80)
 Evaluation and reward of most managers is linked to service performance. (0.85)

Source

 Oliveira/Roth (2012): Service orientation: The derivation of underlying constructs and measures. International Journal of Operations & Production Management, Vol. 32, No. 2, pp. 156-190.
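
As a quick check, the reported composite reliability of 0.87 can be reproduced from the loadings listed above, assuming standardized loadings and the commonly used formula CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of (1 - loading^2)]:

# Composite reliability from the four reported item loadings.
loadings = [0.76, 0.75, 0.80, 0.85]

sum_loadings_sq = sum(loadings) ** 2
error_variance = sum(1 - l ** 2 for l in loadings)

composite_reliability = sum_loadings_sq / (sum_loadings_sq + error_variance)
print(round(composite_reliability, 2))  # ~0.87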
VALIDITY
 Validity refers to whether we are measuring
what we want to measure.
UNDERSTANDING VALIDITY AND
RELIABILITY

SENSITIVITY
 Sensitivity
 A measurement instrument’s ability to accurately measure variability in stimuli or responses.
 Generally increased by adding more response points or
adding scale items.

PRACTICALITY
 A measuring device passes the convenience test if it is easy to administer.
 The interpretability aspect of practicality is relevant when persons other than the test designers must interpret the results. In such cases, the designer of the data collection instrument provides several key pieces of information to make interpretation possible:
1) A statement of the functions the instrument was designed to measure and the
procedures by which it was developed;
2) Detailed instructions for administration;

3) Scoring keys and instructions;

4) Norms for appropriate reference groups;

5) Evidence of reliability;

6) Evidence regarding the intercorrelations of subscores;

7) Evidence regarding the relationship of the test to other measures; and

8) Guides for test use.

