
References

Required
1. Sekaran (2016), Research Methods for Business, 7th ed. (Skr)

Additional
1. Cooper (2014), Business Research Methods, 12th ed. (C)
2. Saunders (2016), Research Methods for Business Students, 7th ed. (Snd)

Topics

1. Research in Business & Research Process (Skr1-2, C1-4, Snd1, 3, 6)
2. Research Question/Problem Definition (Skr3, C5, Snd2)
3. Literature Review, Theoretical Framework & Hypothesis (Skr4-5, C5, Snd4)
4. Research Design (Skr6, C6, Snd5)
5. Data Collection Methods (Skr7-10, C7-10, Snd8-11)
6. Variables: Definition, Measurement, Scales (Skr11-12, C11-13)
7. Proposal Presentation & Mid Exam
9. Sampling (Skr13, C1, Snd7)
10. Quantitative Data Analysis 1 (Skr14-15, C15-18, Snd12)
11. Quantitative Data Analysis 2 (Skr14-15, C15-18, Snd12)
12. Quantitative Data Analysis 3 (Skr14-15, C15-18, Snd12)
13. Qualitative Data Analysis (Skr16, Snd13)
14. Report (Skr17, C19-20, Snd14)
15. Results Presentation & Final Exam

Variables

Operational Definition

• Operationalizing the concepts:
the reduction of abstract concepts to render them measurable in a tangible way.
• Operational definition:
a definition stated in terms of specific criteria for testing or measurement (Cooper).
• Steps (sketched in code below):
1. Come up with a definition of the construct we want to measure;
2. Develop the content of the measure: an instrument (one or more items or questions) that actually measures the concept one wants to measure;
3. Choose a response format or scale (for instance, a seven-point rating scale with end-points anchored by "strongly disagree" and "strongly agree");
4. Assess the validity and reliability of the measurement scale.
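A minimal sketch of steps 1-3 in Python, assuming a hypothetical three-item instrument for the construct "shopping enjoyment" (a characteristic used as an example later in this deck). The class name, item wordings, and the validate_response helper are illustrative assumptions, not from Sekaran or Cooper; step 4 is taken up in the validity and reliability sketches further below.

from dataclasses import dataclass

@dataclass
class LikertInstrument:
    construct: str        # step 1: the construct to be measured
    items: list[str]      # step 2: the content of the measure
    scale_points: int = 7  # step 3: the response format
    anchors: tuple = ("strongly disagree", "strongly agree")

    def validate_response(self, answers: list[int]) -> bool:
        """A response is usable only if every answer lies on the chosen scale."""
        return (len(answers) == len(self.items)
                and all(1 <= a <= self.scale_points for a in answers))

shopping_enjoyment = LikertInstrument(
    construct="shopping enjoyment",
    items=[
        "I enjoy browsing in this store.",
        "Shopping here is a pleasant way to spend time.",
        "I look forward to my visits to this store.",
    ],
)

print(shopping_enjoyment.validate_response([6, 7, 5]))  # True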

Measurement

• Measurement:
the assignment of numbers or other symbols to characteristics (or attributes) of objects according to a prespecified set of rules (sketched below).
• Objects:
persons, strategic business units, companies, countries, bicycles, elephants, kitchen appliances, restaurants, shampoo, yogurt, and so on.
• Characteristics of objects:
achievement motivation, organizational effectiveness, shopping enjoyment, length, weight, ethnic diversity, service quality, conditioning effects, taste, etc.
• We cannot measure objects (for instance, a company); we measure characteristics or attributes of objects (for instance, the organizational effectiveness of a company).
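A minimal sketch of the definition above: a prespecified rule assigns numbers to an attribute of an object. The attribute levels and their numeric codes are illustrative assumptions.

# The prespecified set of rules: each observed level of the attribute
# "organizational effectiveness" maps to exactly one number.
RULE = {"low": 1, "moderate": 2, "high": 3}

def measure(attribute_level: str) -> int:
    """Assign a number to an observed attribute level according to RULE."""
    return RULE[attribute_level]

# We measure the attribute of the company, never the company itself.
print(measure("high"))  # 3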

Construct, Concept, Variable

• Construct:
an image or abstract idea specifically invented for a given research and/or theory-building purpose.
• Concept:
a generally accepted collection of meanings or characteristics associated with certain events, objects, conditions, situations, and behaviors.
• Variable:
a symbol of an event, act, characteristic, trait, or attribute that can be measured and to which we assign values.
• We build constructs by combining simpler, more concrete concepts, especially when the idea or image we intend to convey is not subject to direct observation.
• "Variable" is used as a synonym for "construct", or the property being studied.

Operationalization

• Operationalizing a concept does not consist of delineating its reasons, antecedents, consequences, or correlates.
• Operationalizing a concept describes its observable characteristics in order to be able to measure it.
• Operationalizations are necessary to measure abstract and subjective concepts such as feelings and attitudes.
• More objective variables such as age or educational level are easily measured through simple, straightforward questions and do not have to be operationalized.

Measurement: Scaling, Reliability and Validity

Scales

• Measurement means gathering data in the form of numbers. To be able to assign numbers to attributes of objects, we need a scale.
• Scale:
a tool or mechanism by which individuals are distinguished as to how they differ from one another on the variables of interest to our study.
• Types of scales (illustrated in the sketch below):
1. Nominal
2. Ordinal
3. Interval
4. Ratio
• Categories of attitudinal scales:
1. Rating scales have several response categories and are used to elicit responses with regard to the object, event, or person studied.
2. Ranking scales make comparisons between or among objects, events, or persons and elicit the preferred choices and ranking among them.
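A minimal sketch of the four scale types, assuming pandas is available; its Categorical type separates nominal (unordered) from ordinal (ordered) data, while interval and ratio data are plain numbers that differ in whether zero is meaningful. All values are illustrative assumptions.

import pandas as pd

nominal = pd.Categorical(["bank", "retail", "tech"])          # labels only
ordinal = pd.Categorical(["low", "mid", "high"],
                         categories=["low", "mid", "high"],
                         ordered=True)                        # ranked labels
interval = pd.Series([18.0, 22.5, 30.0])  # e.g. degrees Celsius: differences
                                          # are meaningful, zero is arbitrary
ratio = pd.Series([0.0, 12.4, 24.8])      # e.g. weight in kg: a true zero makes
                                          # ratios ("twice as heavy") meaningful

print((nominal == "bank").sum())  # 1 -- nominal data supports only counting
print(ordinal.max())              # "high" -- ordinal data adds ordering
print(interval.diff().iloc[1])    # 4.5 -- interval data adds differences
print(ratio[2] / ratio[1])        # 2.0 -- ratio data adds meaningful ratios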
Rating Scales

• Dichotomous scale
• Category scale
• Semantic differential scale
• Numerical scale
• Itemized rating scale
• Likert scale
• Fixed or constant sum rating scale (sketched below)
• Stapel scale
• Graphic rating scale
• Consensus scale
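A minimal sketch of the fixed or constant sum format from the list above: the respondent distributes a fixed number of points across attributes, so a response is usable only if the allocation adds up. The attribute names and the 100-point total are illustrative assumptions.

def valid_constant_sum(allocation: dict, total: int = 100) -> bool:
    """A constant sum response must be non-negative and sum to the fixed total."""
    return (all(v >= 0 for v in allocation.values())
            and sum(allocation.values()) == total)

response = {"price": 40, "quality": 35, "service": 25}
print(valid_constant_sum(response))  # True: 40 + 35 + 25 == 100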

Ranking Scales

Ranking scales are used to tap preferences between two objects or among more objects or items (they are ordinal in nature):
1. Paired comparison is used when, among a small number of objects, respondents are asked to choose between two objects at a time (tallied in the sketch below).
2. Forced choice requires respondents to rank objects relative to one another, among the alternatives provided. This is easier for respondents, particularly if the number of choices to be ranked is limited.
3. The comparative scale provides a benchmark or point of reference to assess attitudes toward the current object, event, or situation under study.
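A minimal sketch of paired comparison: one respondent chooses between every pair of a small set of objects, and tallying the wins yields an ordinal preference ranking. The brand names and choices are illustrative assumptions.

from collections import Counter
from itertools import combinations

brands = ["A", "B", "C"]
# the respondent's choice for each of the n*(n-1)/2 = 3 pairs
choices = {("A", "B"): "A", ("A", "C"): "C", ("B", "C"): "C"}

wins = Counter({b: 0 for b in brands})  # start every brand at zero
wins.update(choices[pair] for pair in combinations(brands, 2))

ranking = [brand for brand, _ in wins.most_common()]
print(ranking)  # ['C', 'A', 'B'] -- C won 2 pairs, A won 1, B won 0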

The Characteristics of Good Measurement

• Validity:
the extent to which a test measures what we actually wish to measure.

• Reliability:
has to do with the accuracy and precision of a measurement procedure.

• Practicality:
is concerned with a wide range of factors of economy, convenience, and interpretability.

Content validity: Does the measure adequately measure the concept?
Face validity: Do "experts" validate that the instrument measures what its name suggests it measures?
Criterion-related validity: Does the measure differentiate in a manner that helps to predict a criterion variable?
Concurrent validity: Does the measure differentiate in a manner that helps to predict a criterion variable currently?
Predictive validity: Does the measure differentiate individuals in a manner that helps predict a future criterion?
Construct validity: Does the instrument tap the concept as theorized?
Convergent validity: Do two instruments measuring the same concept correlate highly?
Discriminant validity: Does the measure correlate only weakly with a variable that is theoretically unrelated to it? (Both convergent and discriminant validity are sketched below.)
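A minimal sketch of convergent and discriminant validity as correlations, on fabricated data: two instruments built to tap the same concept should correlate highly, while a measure should correlate only weakly with a theoretically unrelated variable. The simulated scores are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
concept = rng.normal(size=100)                            # the "true" construct

instrument_a = concept + rng.normal(scale=0.3, size=100)  # one measure of it
instrument_b = concept + rng.normal(scale=0.3, size=100)  # an alternative measure
unrelated = rng.normal(size=100)                          # theoretically unrelated

print(np.corrcoef(instrument_a, instrument_b)[0, 1])  # high -> convergent
print(np.corrcoef(instrument_a, unrelated)[0, 1])     # near 0 -> discriminant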

Reliability

Reliability indicates the extent to which a measure is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument.

• Stability of measures:
indicative of a measure's stability and low vulnerability to changes in the situation.
  • Test–retest reliability:
  the correlation between the scores obtained at two different times from one and the same set of respondents.
  • Parallel-form reliability:
  when responses on two comparable sets of measures tapping the same construct are highly correlated (e.g. the scoring of Olympic figure skaters by a panel of judges).

• Internal consistency of measures:
indicative of the homogeneity of the items in the measure that tap the construct.
  • Interitem consistency reliability:
  to the degree that items are independent measures of the same concept, they will be correlated with one another (estimated below with Cronbach's alpha).
  • Split-half reliability:
  reflects the correlation between two halves of an instrument (also sketched below).
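A minimal sketch of two internal consistency estimates on fabricated data: Cronbach's alpha for interitem consistency, and a split-half correlation between the two halves of the instrument. The simulated four-item scale is an illustrative assumption.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                      # one underlying construct
scores = trait + rng.normal(scale=0.5, size=(200, 4))  # 4 items tapping it

print(round(cronbach_alpha(scores), 2))                # high for homogeneous items

# Split-half: correlate the summed odd-numbered items with the even-numbered ones.
half1, half2 = scores[:, ::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
print(round(float(np.corrcoef(half1, half2)[0, 1]), 2))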


Summary of Reliability Estimates

Downward bias in stability

Some of the difficulties that can occur in the test–retest methodology and cause a downward bias in stability include:
• Time delay between measurements: leads to situational factor changes (also a problem in observation studies).
• Insufficient time between measurements: permits the respondent to remember previous answers and repeat them, resulting in biased reliability indicators.
• Respondent's discernment of a study's disguised purpose: may introduce bias if the respondent holds opinions related to the purpose but not assessed by the current measurement questions.
• Topic sensitivity: occurs when the respondent seeks to learn more about the topic, or forms new and different opinions, before the retest.

Reflective and Formative Scales

• In a reflective scale, the items (all of them!) are expected to correlate: each item is assumed to share a common basis (the underlying construct of interest).
• A formative scale contains items that are not necessarily related; it is used when a construct is viewed as an explanatory combination of its indicators. The two are contrasted in the sketch below.
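A minimal sketch contrasting the two on fabricated data: reflective items all mirror one underlying construct and therefore intercorrelate, while formative indicators need not correlate and are combined into the construct. The indicator names and index weights are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Reflective: every item = construct + noise, so inter-item correlations are high.
construct = rng.normal(size=300)
reflective = np.column_stack([construct + rng.normal(scale=0.4, size=300)
                              for _ in range(3)])
print(np.corrcoef(reflective, rowvar=False).round(2))  # all off-diagonals high

# Formative: independent indicators explained as a weighted combination.
income, education, occupation = rng.normal(size=(3, 300))
ses_index = 0.5 * income + 0.3 * education + 0.2 * occupation  # assumed weights
print(ses_index[:3].round(2))  # the construct is built from its indicators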
