Required reading
1. Sekaran, 2016, Research Methods for Business, 7E (Skr)
Additional reading
1. Cooper, 2014, Business Research Methods, 12E (C)
2. Saunders, 2016, Research Methods for Business Students, 7E (Snd)
Topics
Variables
Research Design
Operational Definition
Measurement
Measurement:
the assignment of numbers or other symbols to characteristics (or attributes) of
objects according to a prespecified set of rules.
Objects:
persons, strategic business units, companies, countries, bicycles, elephants,
kitchen appliances, restaurants, shampoo, yogurt, and so on.
Characteristics of objects:
achievement motivation, organizational effectiveness, shopping enjoyment,
length, weight, ethnic diversity, service quality, conditioning effects, and taste,
etc.
We cannot measure objects (for instance, a company); but we measure
characteristics or attributes of objects (for instance, the organizational
effectiveness of a company).
Construct, Concept, Variable
Construct:
an image or abstract idea specifically invented for a given research and/or
theory-building purpose.
Concept:
a generally accepted collection of meanings or characteristics associated with
certain events, objects, conditions, situations, and behaviors.
Variable:
a symbol of an event, act, characteristic, trait, or attribute that can be measured
and to which we assign values.
We build constructs by combining simpler, more concrete concepts,
especially when the idea or image we intend to convey is not subject to
direct observation.
“Variable” is often used as a synonym for construct, or for the property
being studied.
Operationalization
Measurement: Scaling, Reliability and Validity
Scales
Dichotomous scale
Category scale
Semantic differential scale
Numerical scale
Itemized rating scale
Likert scale
Fixed or constant sum rating scale
Stapel scale
Graphic rating scale
Consensus scale
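As an illustration of measurement as "the assignment of numbers to attributes according to a prespecified set of rules," the sketch below codes verbal five-point Likert responses as numbers. The data and code mapping are hypothetical, not taken from the slides.

```python
# Hypothetical coding rule for a five-point Likert scale:
# verbal response -> numeric code (1 = strongly disagree, 5 = strongly agree).
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Map verbal Likert responses to their numeric codes."""
    return [LIKERT_CODES[r.lower()] for r in responses]

answers = ["Agree", "Strongly agree", "Disagree"]
print(code_responses(answers))  # [4, 5, 2]
```

The dictionary is the "prespecified set of rules"; changing it (e.g. reverse-coding a negatively worded item) changes the measurement without changing the underlying attribute.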
Ranking Scales
Ranking scales are used to tap preferences between two objects or among
more than two objects or items (they are ordinal in nature):
1. Paired comparison is used when, among a small number of
objects, respondents are asked to choose between two objects at a
time.
2. Forced choice enables respondents to rank objects relative to
one another among the alternatives provided. This is easier for
respondents, particularly when the number of choices to be ranked
is small.
3. The comparative scale provides a benchmark or point of reference
for assessing attitudes toward the current object, event, or situation
under study.
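The paired-comparison procedure above can be sketched in a few lines: each respondent picks one object from every pair, and tallying the "wins" yields an ordinal ranking. The objects and choices here are made-up illustration data.

```python
from itertools import combinations

# Hypothetical data: three objects; each respondent chose one object
# from every possible pair (paired comparison).
objects = ["A", "B", "C"]
choices = [
    {("A", "B"): "A", ("A", "C"): "C", ("B", "C"): "C"},  # respondent 1
    {("A", "B"): "A", ("A", "C"): "A", ("B", "C"): "B"},  # respondent 2
]

# Tally how often each object is preferred across all pairs and respondents.
wins = {obj: 0 for obj in objects}
for respondent in choices:
    for pair in combinations(objects, 2):
        wins[respondent[pair]] += 1

# Sort by wins to obtain the ordinal preference ranking.
ranking = sorted(objects, key=wins.get, reverse=True)
print(wins, ranking)  # {'A': 3, 'B': 1, 'C': 2} ['A', 'C', 'B']
```

Note that the number of pairs grows as n(n-1)/2, which is why the slides restrict paired comparison to a small number of objects.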
The Characteristics of Good Measurement
Validity:
the extent to which a test
measures what we actually wish
to measure.
Reliability:
has to do with the accuracy and
precision of a measurement
procedure.
Practicality:
concerned with a wide range of
factors of economy, convenience,
and interpretability.
Types of validity:
Content validity: does the measure adequately measure the concept?
Face validity: do “experts” validate that the instrument measures what its
name suggests it measures?
Criterion-related validity: does the measure differentiate in a manner that
helps to predict a criterion variable?
Concurrent validity: does the measure differentiate in a manner that helps
to predict a criterion variable currently?
Predictive validity: does the measure differentiate individuals in a manner
that helps to predict a future criterion?
Construct validity: does the instrument tap the concept as theorized?
Convergent validity: do two instruments measuring the same concept
correlate highly?
Discriminant validity: does the measure have a low correlation with a
variable that is supposed to be unrelated?
Reliability
indicates the extent to which a measure is without bias (error free) and hence ensures
consistent measurement across time and across the various items in the instrument.
Stability of measures:
indicative of a measure's stability and its low vulnerability to changes in the situation.
Test–retest reliability: the correlation between the scores obtained at two different
times from one and the same set of respondents.
Parallel-form reliability: when responses on two comparable sets of measures tapping
the same construct are highly correlated (e.g. the scoring of Olympic figure skaters
by a panel of judges).
Internal consistency of measures:
indicative of the homogeneity of the items in the measure that tap the construct.
Interitem consistency reliability: to the degree that items are independent measures
of the same concept, they will be correlated with one another.
Split-half reliability: reflects the correlations between the two halves of an
instrument.
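Two of these reliability indicators reduce to simple computations: test–retest reliability is a Pearson correlation between the two administrations, and interitem consistency is commonly assessed with Cronbach's alpha. The sketch below uses made-up scores (not from the slides) to show both.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation, used here for test-retest reliability."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Cronbach's alpha for interitem consistency.
    `items` is a list of columns, one per questionnaire item."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Hypothetical scores from 5 respondents at two points in time.
time1 = [4, 5, 3, 4, 2]
time2 = [4, 4, 3, 5, 2]
print(round(pearson_r(time1, time2), 2))  # 0.81 (test-retest stability)

# Hypothetical scores on 3 items tapping the same construct.
item_scores = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(item_scores), 2))  # 0.89 (interitem consistency)
```

Both statistics range toward 1 as the measure becomes more consistent; what counts as "acceptable" (often 0.7 and above for alpha) is a judgment the slides leave to the researcher.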
Downward bias in stability
Some of the difficulties that can occur in the test–retest methodology and
cause a downward bias in stability include:
Time delay between measurements: leads to situational factor changes
(also a problem in observation studies).
Insufficient time between measurements: permits the respondent to
remember previous answers and repeat them, resulting in biased
reliability indicators.
Respondent’s discernment of a study’s disguised purpose: may
introduce bias if the respondent holds opinions related to the purpose
but not assessed with the current measurement questions.
Topic sensitivity: occurs when the respondent seeks to learn more
about the topic, or forms new and different opinions, before the retest.
Reflective and Formative Scales