JPART 21:521–546

Testing a Revised Measure of Public Service Motivation: Reflective versus Formative Specification
Sangmook Kim
Seoul National University of Technology

ABSTRACT

Public service motivation (PSM) is perceived as a multidimensional construct, as an overall,
unobserved latent variable with various latent dimensions. The present study focuses on the
relationships between PSM and its dimensions. The purposes of this study are to confirm
a set of revised items as indicators of a rational base of PSM, to compare a four-factor model
with various three-factor models, and to analyze whether PSM is defined as a formative or
a reflective measurement model. Using survey data from 2,497 firefighters and a revised
12-item measure of PSM, this study shows that the revised items to measure the dimension
of attraction to policy making are more appropriate for representing the rational base of PSM
and that the original four-factor model is superior to any three-factor models. Based on the
discussions about formative and reflective specification and the test results, this study
provides theoretical and empirical evidence in support of a second-order, formative
approach that PSM is an aggregate construct, meaning that it is a composite of its four
dimensions. It also shows that it is necessary to develop more appropriate items for some
dimensions of PSM. The practical implications are discussed.

INTRODUCTION
Academics have long explored whether there are special motives for public service that
differ from those for the private sector (Frederickson and Hart 1985; Mosher 1968; Perry
and Porter 1982). Public service motivation (PSM) provides a useful basis for understand-
ing public employee motivation. According to Perry and Wise (1990, 368), PSM is defined
as ‘‘an individual’s predisposition to respond to motives grounded primarily or uniquely in
public institutions and organizations.’’ Even though the definitions of PSM itself vary
slightly by authors (Brewer and Selden 1998; Rainey and Steinbauer 1999; Vandenabeele,
Scheepers, and Hondeghem 2006), ‘‘its definition has a common focus on motives and
action in the public domain that are intended to do good for others and shape the well-being
of society’’ (Perry and Hondeghem 2008, 3).

The author is grateful for valuable comments on earlier drafts from James L. Perry, David H. Coursey,
and anonymous JPART reviewers. Special thanks to Professor Seung Hyun Kim and Professor Hyok-Joo Rhee
at the Department of Public Administration, Seoul National University of Technology, for their kind support
and encouragement. Address correspondence to the author at smook@snut.ac.kr.

doi:10.1093/jopart/muq048
Advance Access publication on August 12, 2010
ª The Author 2010. Published by Oxford University Press on behalf of the Journal of Public Administration Research
and Theory, Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

PSM has rational, norm-based, and affective motives (Perry and Wise 1990). Rational
motives are grounded in individual utility maximization, norm-based motives are grounded
in a desire to pursue the common good and further the public interest, and affective motives
are grounded in human emotion. A variety of rational, norm-based, and affective motives
appear to be primarily or exclusively associated with public service. Rational motives in-
clude participation in the process of policy formulation, commitment to a public program
because of personal identification, and advocacy for a special or private interest. Norm-
based motives include a desire to serve the public interest, loyalty to duty and to the gov-
ernment as a whole, and social equity. Affective motives include patriotism of benevolence
and commitment to a program from a genuine conviction about its social importance.
Perry (1996) identified a multidimensional scale to measure PSM, which has four di-
mensions: attraction to policy making (APM), commitment to public interest (CPI), com-
passion (COM), and self-sacrifice (SS). Since then, many studies conducted in the United
States and in other countries have used the dimensions and items of Perry’s (1996) measure,
and improvement has been made in the PSM scale. Close attention to the measurement of
PSM has yielded efficient accumulation of knowledge about its antecedents and consequen-
ces (Bright 2008; Camilleri 2006; Castaing 2006; Choi 2004; DeHart-Davis, Marlowe, and
Pandey 2006; Lee 2005; Moynihan and Pandey 2007a; Pandey, Wright, and Moynihan
2008; Park and Rainey 2008; Perry 1997; Perry et al. 2008; Steijn 2008; Taylor 2008;
Vandenabeele 2008a; Wright and Pandey 2008). However, the generalizability of the di-
mensional structure and items of PSM is still not confirmed. In some cases, dimensions
have been left out of the analysis (Castaing 2006; Coursey and Pandey 2007a; DeHart-
Davis, Marlowe, and Pandey 2006; Moynihan and Pandey 2007a, 2007b; Wright and
Pandey 2005, 2008). On the other hand, some dimensions have been added to cover the full extent of PSM. For example, with respect to the CPI dimension in particular, research
suggests that in order to cover the full spectrum of applicable public values, multiple di-
mensions are necessary. Belgian and Swiss research (Giauque et al. 2009; Vandenabeele
2008b) has found, in addition to the original CPI dimension, one or more dimensions re-
ferring to public values related to administrative or constitutional governance. Developing
an instrument to measure a construct such as PSM is an evolutionary process, which de-
mands repeated iterations of the instrument.
There are three significant issues regarding the PSM measure that need to be discussed
in this study. The first issue is whether the items measuring the APM dimension are ap-
propriate to represent a rational base of PSM. Motives such as participation in the process of
policy formulation, commitment to a public program because of personal identification
with it, and advocacy for special interests are rational in nature (Perry and Wise 1990).
However, Perry’s (1996) original items to measure the APM dimension have been the sub-
ject of considerable controversy as to their relevance. The original items have little face
validity as indicators of APM itself or of a rational motivational base (Kim 2009b). The
items do not ask whether the respondents are attracted to public policy making but whether they like or dislike politics, politicians, and political phenomena. According to Coursey and Pandey (2007a), in the original items, the terms "politics" and "politicians"
induce negative reactions and tap political distrust, so additional item development and
testing are important in the APM dimension. As Perry (1996, 20) himself acknowledged,
"Because the current subscale is composed entirely of negatively worded items, it confounds whether the subscale taps the attraction to policy-making dimension or whether it also may tap cynicism or negative affect toward politics. Thus the addition of positively worded items to the APM sub-scale would be desirable." A revised measurement scale
needs to be developed. In a previous study, new items were composed to measure
APM, and the revised 12-item PSM scale was tested (Kim 2009a). However, additional
testing with different samples is still desirable, as is evaluation of new items. The first
research issue is to validate the revised APM items with another sample.
The second issue concerns the dimensions of PSM. Perry’s (1996) exploratory anal-
ysis provided support for both a three- and a four-dimension PSM construct. Four dimen-
sions were empirically distinguished, but the dimensions of CPI and SS were highly
correlated (r = .89). Even though a differential χ² test indicates that the four-dimension
model is superior to the three-dimension model, combining CPI and SS, there is relatively
little difference between the two. Among 19 studies using Perry’s (1996) measurement
scale for PSM, eight studies analyzed four dimensions but seven studies used three dimen-
sions. Even in the same country, both the three-dimensional solution and the four dimen-
sions of PSM are reported (see table 1). For example, Vandenabeele (2008a) found that the
three-dimensional solution is the best-fitting alternative for the Belgian context. In this
three-dimensional model, the dimensions of COM and APM were retained, whereas the
dimensions of CPI and SS were collapsed into a single dimension. However, Vandenabeele
(2008b) provided support for the original four dimensions of the PSM construct with
another Belgian sample. Dimensional instability limits the confidence in the findings
and interpretation of a study (Wright 2008). Therefore, the dimensionality of PSM still
needs to be analyzed. The second research issue is to compare a four-dimension model
with various three-factor models and to find which one is superior to the others.
The third issue, recently raised by Wright (2008) and Coursey et al. (2008a), is
whether PSM needs to be defined as a formative or a reflective measure.1 A reflective mea-
sure suggests that the separate dimensions of PSM—such as APM, CPI, COM, and SS—are
actually different manifestations of the PSM construct and as such "reflect" the content of
PSM, whereas a formative measure suggests that PSM is defined as the outcome of its
dimensions. For example, when an increase in any one of the dimensions (e.g., COM) also
implies an increase in all the other dimensions of PSM, then PSM should be theorized as
reflective. On the other hand, when an increase in any one of the dimensions (e.g., COM)

1 The choice of measurement model would depend on the generality or specificity of one’s theoretical interest
(MacKenzie, Podsakoff, and Jarvis 2005). A typical example of formative measurement model is socioeconomic status
(SES), which is formed as a combination of education, income, occupation, and residence. If any one of these
indicators increases, SES would increase; conversely, if a person’s SES increases, this would not necessarily be
accompanied by an increase in all four indicators (Diamantopoulos and Winklhofer 2001). Other examples may be job
performance, role stress, procedural justice, transformational leadership, job perceptions, human development index,
and quality-of-life index (Diamantopoulos and Winklhofer 2001; Edwards 2001; MacKenzie, Podsakoff, and Jarvis
2005). A typical example of the reflective model is the Organizational Commitment Questionnaire (Mowday, Steers,
and Porter 1979). Variation in the level of organizational commitment leads to variation in its indicators, but not vice
versa. Other examples may be personality, attitude, general work values, LMX, work withdrawal, and psychological
climate (Diamantopoulos and Siguaw 2006; Edwards 2001). However, some constructs can be regarded not only as
a reflective but also as a formative model. For example, job satisfaction is treated as a unidimensional construct when it
is measured by indicators that each describe satisfaction with the job as a whole. In this case, the indicators of job
satisfaction are reflective, as treated in this study. On the other hand, job satisfaction can also be viewed as a composite
of satisfaction with specific job facets, such as pay, promotion, supervisor, coworkers, and the work itself. In that case,
the dimensions of job satisfaction are formative rather than reflective (Kim 2005b; Ironson et al. 1989; Williams,
Edwards, and Vandenberg 2003).

Table 1
Previous Studies Using Perry's (1996) Measurement Scale for PSM

Four-Dimension Model
Lee (2005): 1) Korea; public and private employees. 2) 24 items; 24 items. 3) APM, CPI, COM, SS; no report on alpha.
Camilleri (2006): 1) Malta; public officials. 2) 24 items; 24 items. 3) APM (.21), CPI (.63), COM (.60), SS (.80); values are factor loadings on PSM.
Taylor (2007): 1) Australia; public employees. 2) 24 items, some revised; 24 items. 3) APM (.64), CPI (.78), COM (.76), SS (.82).
Bright (2008): 1) United States; public employees. 2) 24 items; 24 items. 3) APM, CPI, COM, SS; no report on alpha.
Vandenabeele (2008b): 1) Belgium; civil servants. 2) 47 items, some added; 18 items. 3) APM, CPI, COM, SS, and democratic governance.
Clerkin, Paynter, and Taylor (2009): 1) USA; undergraduate students. 2) 24 items; 24 items. 3) APM (.59), CPI (.69), COM (.70), SS (.78).
Kim (2009a): 1) Korea; public employees. 2) 14 items, some revised; 12 items. 3) (Sample 1, Sample 2): APM (.75, .75), CPI (.70, .71), COM (.73, .66), SS (.75, .79).
Kim (2009b): 1) Korea; public employees. 2) 24 items; 14 items. 3) (Sample 1, Sample 2): APM (.62, .71), CPI (.74, .74), COM (.74, .60), SS (.73, .72).

Three-Dimension Model or Others
Scott and Pandey (2005): 1) USA; managers in state health and human service agencies. 2) 11 items; 11 items. 3) APM, CPI, COM.
DeHart-Davis, Marlowe, and Pandey (2006): 1) USA; managers in state health and human service agencies. 2) 10 items; 10 items. 3) APM (.72), CPI (.68), COM (.55).
Moynihan and Pandey (2007a): 1) USA; managers in state health and human service agencies. 2) 11 items; 7 items. 3) APM (.72), CPI (.67); COM (.40) not employed.
Castaing (2006): 1) France; civil service employees. 2) 4 items; 4 items. 3) CPI (.65).
Coursey and Pandey (2007a): 1) USA; managers in state health and human service agencies. 2) 10 items; 10 items. 3) APM, CPI, COM; no report on alpha.
Moynihan and Pandey (2007b): 1) USA; managers in state health and human service agencies. 2) 11 items; 3 items. 3) APM; the other dimensions failed to generate minimally acceptable alphas.
Vandenabeele (2008a): 1) Belgium; graduate students. 2) 24 items; 13 items. 3) APM (.66), COM (.65), CPI + SS (.71).
Coursey et al. (2008a): 1) USA; national award-winning volunteers. 2) 12 items; 12 items. 3) CPI, COM, SS.
Liu, Tang, and Zhu (2008): 1) China; part-time MPA students (full-time public employees). 2) 24 items; 10 items. 3) APM (.69), CPI (.54), SS (.57); COM is not confirmed.
Leisink and Steijn (2009): 1) The Netherlands; public sector employees. 2) 11 items; 11 items. 3) APM (.55), CPI (.68).
Coursey et al. (2008b): 1) USA; national award-winning volunteers. 2) 12 items; 12 items. 3) CPI, COM, SS; testing formative and reflective models.

Note: 1) = sample, 2) = items used in survey; items used in statistical analysis, 3) = dimensions used in statistical analysis and Cronbach's alphas in parentheses.

increases the overall magnitude of PSM, without necessarily affecting the rest of the di-
mensions, PSM should be defined as formative. Different parameter estimates and conclu-
sions can be drawn depending on which measure is used for analysis (Law and Wong 1999;
MacCallum and Browne 1993).
This issue arises not so much from the PSM scale as it was originally constructed as from advances in measurement theory and methods. Coursey et al. (2008a, 88) wrote, "The PSM construct is reflective, not formative, and the studies to date have not tested such a formative specification." On the other hand, Wright (2008, 85) said, "Researchers should consider operationalizing this four-dimension conceptualization as first-order reflective and
second-order formative.’’ Coursey et al. (2008b) examined whether PSM is best repre-
sented as a formative or reflective construct but they exclude APM and used only the three
remaining dimensions. The previous studies have raised this issue but the empirical test
with the four dimensions of PSM has been found wanting. The third research issue is
to discuss a reflective model (the dimensions of PSM function as specific manifestations
of PSM) and a formative model of PSM (the dimensions of PSM are conceived as specific
components, which together collectively constitute PSM) and to examine theoretically and
empirically which model is more desirable in second-order factor specifications. This is
a critical question not only for PSM but also for many other public management constructs
(Coursey and Pandey 2007b).2
The next section of this article will review the previous studies on the dimensions and
measures of PSM. Then both a reflective and a formative model are discussed, and both
models are applied to the PSM construct. We then consider the research sample (n = 2,497)
as well as the measurement of the variables before discussing the applied methods. The
final section consists of an analysis of the survey results, followed by a discussion and
some concluding remarks.

Dimensions and Measures of Public Service Motivation


A measurement scale for PSM was developed by Perry (1996). The forty survey items were
devised to correspond to six dimensions of PSM: APM, CPI, civic duty, social justice,
COM, and SS. Perry identified four empirical components of the PSM construct as
APM, COM, CPI, and SS and developed a list of twenty-four items measuring the four
subscales of PSM. The dimensions of PSM have been empirically analyzed further.
The previous studies using Perry’s (1996) instrument are summarized in table 1.
Table 1 shows that various models have been used in the previous studies. First, the
four-dimension model of PSM was confirmed in Australia, Belgium, Korea, Malta, and the
United States. Second, the three-dimensional solution was also used in Belgium, China, and
the United States for theoretical or empirical reasons. The dimension of SS was
excluded in some studies, largely because it was not included in the original conception
of PSM and because of its conceptual similarity to and overlap with COM (Coursey and
Pandey 2007a; DeHart-Davis, Marlowe, and Pandey 2006; Moynihan and Pandey 2007a;
Scott and Pandey 2005). The dimension of APM was also excluded due to the sample and
scale items being deemed less applicable in a shortened instrument (Coursey et al. 2008a).
The dimension of COM was unconfirmed in China because of its failure to make one factor
(Liu, Tang, and Zhu 2008). The dimensions of CPI and SS were collapsed into a single
dimension in Belgium (Vandenabeele 2008a). Third, only two dimensions were used for
explaining PSM. Moynihan and Pandey (2007a) excluded the SS dimension for theoretical reasons and chose not to employ the COM dimension because of its low level
of reliability. The theoretically expected dimensions were not neatly reproduced in the
Netherlands and so only the dimensions of APM and CPI were used for analysis (Leisink
and Steijn 2009). Fourth, one dimension was used for representing PSM. Moynihan and
Pandey (2007b) used an abbreviated version of the original scale, which focuses on the
APM dimension because other measures of PSM or the larger scale failed to generate

2 Coursey and Pandey (2007b) used second-order confirmatory factor analysis to test two specifications for red tape,
one as a formative index and the other as a reflective scale.
minimally acceptable Cronbach’s alphas. Castaing (2006) used only the CPI dimension
because other dimensions were not representative of the specific public service ethos of
the French context.
However, there has been no discussion of whether the findings of empirical tests using only two or three dimensions can be accepted as findings about PSM. If we regard PSM as a reflective construct, it is possible to analyze PSM with only two or three dimensions because omitting dimensions does not change the essential nature of PSM. If PSM is a formative construct, however, this is not acceptable, because omitting a dimension may change its meaning. The consequences of dropping a dimension are potentially quite serious in a formative model.
Shorter scales are generally preferred in studies so that respondents’ workload is re-
duced. Abridged versions of Perry’s (1996) scale were used in 10 studies, whereas the full
version was tested in eight studies. Unless the shorter version is a valid and reliable measure
of the construct that the longer scale measures (DeVellis 1991), shortening the scale could
threaten the integrity of the overall measurement of PSM. Moreover, 7 out of 10 studies
using the short forms of the Perry (1996) scale used two or three dimensions of PSM, and 2
studies used only one dimension for statistical analysis. So it is important to retain the
conceptually based dimensions when shortening the scale.
There was a tendency to uncritically employ a reflective model without discussing
whether PSM is a formative or reflective construct (Camilleri 2006; Coursey et al.
2008a; Kim 2009a, 2009b), whereas Coursey et al. (2008b) examined both a formative
and a reflective model. However, they used only three dimensions and excluded APM, and thus it is difficult to determine which model is better from both theoretical and empirical perspectives. Even when studies use similar indicators, specifying the measurement model as formative rather than reflective (or the reverse) can severely bias parameter estimates and either inflate or deflate relationships (Podsakoff et al. 2003b; Wright 2008). Without the
proper specification of the measurement model, relationships among constructs cannot be
meaningfully tested (Edwards and Bagozzi 2000).
Despite impressive gains in the empirical studies, the previous research has employed
multiple measures of PSM. There is evidence of a considerable degree of commonality in
results from different samples and different nations, but there are also variations and con-
flicts in findings. Obviously some of the variations can come simply from sampling error or
variations among samples as well as national differences, but some of the inconsistency
may be due to differences in how PSM was measured. Such diversity in operational def-
initions of PSM makes it difficult for studies to advance our understanding of PSM by
building on the findings of previous studies (Wright 2008). It is not certain whether mea-
suring two or three dimensions may be equivalent to measuring all the dimensions of PSM.
We need to figure out whether it is more appropriate to use a four-dimension or a three-
dimension model and whether it is more plausible to use a reflective or a formative model.

Formative versus Reflective Model


A theory can be divided into two parts: one that specifies relationships between theoretical
constructs and another that describes relationships between constructs and measures. A
construct refers to a phenomenon of theoretical interest, and a measure is a multi-item
operationalization of a construct. The distinction between a construct and a measure breaks
down under operationalism, in which a construct is defined in terms of its measure. Constructs
are usually viewed as causes of indicators, meaning that variation in a construct leads to
variation in its indicators. Such indicators are termed "reflective" because they represent reflections, or manifestations, of a construct. In some situations, indicators are viewed as causes of constructs. Such indicators are termed "formative," meaning that the construct is
formed or induced by its indicators (Edwards and Bagozzi 2000).
For example, transformational leadership can be modeled as having formative indi-
cators (MacKenzie, Podsakoff, and Jarvis 2005). It is conceptualized as being a function of
charisma, idealized influence, inspirational leadership, intellectual stimulation, and indi-
vidualized consideration (Bass 1985). These forms of leader behavior are conceptually
distinct, likely to have different antecedents and/or consequences and are not interchange-
able. On the other hand, leader-member exchange (LMX) can be an example of a reflective
model. LMX theory suggests that leaders do not use the same style in dealing with all
subordinates but rather develop a different type of relationship with each subordinate
(Liden and Maslyn 1998). LMX is an overall latent variable with various dimensions such
as affect, loyalty, contribution, and professional respect, and these dimensions are expected
to be caused by LMX. Changes in LMX are manifested in changes in all the dimensions.
PSM is perceived as a multidimensional construct, an overall latent variable with
various latent dimensions. This is referred to as a second-order factor model. Most multi-
dimensional constructs are either superordinate or aggregate. A superordinate construct
is a general concept that is manifested by its dimensions that are analogous to reflective
measures. An aggregate construct is a composite of its dimensions that are analogous to
formative measures (Edwards 2001). The measurement model specifies the relationship
between constructs and measures. The direction of the relationship is either from the
construct to the measures (reflective measurement) or from the measures to the construct
(formative measurement). The reflective measurement model has a long tradition in social
sciences, whereas the formative model was first introduced more than 40 years ago but is
still only scarcely used (Diamantopoulos, Riefler, and Roth 2008). If the direction of cau-
sality between a construct and its measure is not specified correctly, it causes severe biases
in parameter estimates (Jarvis, MacKenzie, and Podsakoff 2003).
The measure development procedures associated with the two approaches are very dif-
ferent. For reflective measures, scale development places major emphasis on the intercorre-
lations among the items, focuses on common variance, and emphasizes unidimensionality
and internal consistency. For formative measures, index construction focuses on explaining
abstract variance, considers multicollinearity among the indicators, and emphasizes the
role of indicators as predictor rather than predicted variables (Diamantopoulos and Siguaw
2006).3 Formative measures are commonly used for constructs conceived as composites of

3 Formally, if η is a latent variable and x1, x2, . . ., xn are a set of observable indicators, the reflective specification
implies that
xi = λiη + εi,
where λi is the expected effect of η on xi and εi is the measurement error for the ith indicator (i = 1, 2, . . ., n). It is
assumed that COV(η, εi) = 0, COV(εi, εj) = 0 for i ≠ j, and E(εi) = 0. In contrast, the formative specification
implies that
η = γ1x1 + γ2x2 + . . . + γnxn + ζ,
where γi is the expected effect of xi on η and ζ is an error term, with COV(xi, ζ) = 0 and E(ζ) = 0 (Diamantopoulos and
Siguaw 2006). The error term (ζ) in a formative measurement model represents the impact of all remaining causes
other than those represented by the indicators included in the model. The more comprehensive the set of formative
indicators specified for the construct, the smaller the influence of the error term and the more valid the construct
(Diamantopoulos, Riefler, and Roth 2008).
specific component variables (Bollen 1989; Bollen and Lennox 1991; Edwards and Bagozzi
2000). A construct should be modeled as having formative indicators if the following con-
ditions prevail: (a) the indicators are viewed as defining characteristics of the construct,
(b) changes in the indicators are expected to cause changes in the construct, (c) changes
in the construct are not expected to cause changes in the indicators, (d) the indicators do not
necessarily share a common theme, (e) eliminating an indicator may alter the conceptual
domain of the construct, (f) a change in the value of one of the indicators is not necessarily
expected to be associated with a change in all the other indicators, and (g) the indicators are
not expected to have the same antecedents and consequences. On the other hand, a construct
should be modeled as having reflective indicators if the opposite is true (Jarvis, MacKenzie,
and Podsakoff 2003, 203).
Reflective indicators are essentially interchangeable—and therefore, while adding or
removing indicators may affect reliability, it does not change the essential nature of the
underlying construct (Diamantopoulos and Winklhofer 2001). In contrast, "omitting an indicator is omitting a part of the construct" in a formative model (Bollen and Lennox
1991, 308). A change in a formative indicator leads to changes in the construct, without
necessarily affecting any of the construct’s other indicators. The choice of measurement
perspective, the use of reflective versus formative indicators, should be based on theoretical
considerations regarding the direction of the links between the construct and its indicators
(Edwards and Bagozzi 2000).
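To make the contrast concrete, the following minimal simulation sketch (written in Python with NumPy; it is an illustration added here, not part of the original analysis) generates data under the two specifications given in footnote 3. Under the reflective specification a common latent variable drives the indicators, so they are intercorrelated by construction; under the formative specification the indicators are exogenous and need not correlate at all, yet they still form the construct.

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Reflective: x_i = lambda_i * eta + e_i, so the latent variable eta
# induces positive correlations among its indicators.
eta = rng.normal(size=n)
loadings = np.array([0.8, 0.7, 0.75])
x_reflective = eta[:, None] * loadings + rng.normal(scale=0.5, size=(n, 3))

# Formative: eta = gamma_1*x_1 + ... + gamma_n*x_n + zeta; the indicators are
# generated independently here, so they are (nearly) uncorrelated with one another.
x_formative = rng.normal(size=(n, 3))
weights = np.array([0.5, 0.3, 0.4])
eta_formed = x_formative @ weights + rng.normal(scale=0.5, size=n)

print(np.corrcoef(x_reflective, rowvar=False).round(2))   # sizable intercorrelations
print(np.corrcoef(x_formative, rowvar=False).round(2))    # near-zero intercorrelations
print(np.round([np.corrcoef(x_formative[:, j], eta_formed)[0, 1] for j in range(3)], 2))  # yet each indicator relates to the formed construct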
Measurement model misspecification severely biases structural parameter estimates
and can lead to inappropriate conclusions about relationships between constructs (Jarvis,
MacKenzie, and Podsakoff 2003). Several of the most commonly researched constructs in
behavioral and organizational research have formative measures that are incorrectly mod-
eled as though they were reflective measures (MacKenzie, Podsakoff, and Jarvis 2005).
This is a problem because ‘‘in a Monte Carlo simulation, measurement model misspeci-
fication can inflate unstandardized structural parameter estimates by as much as 400% or
deflate them by as much as 80% and lead to either Type I or Type II errors of inference,
depending on whether the endogenous or the exogenous construct is misspecified’’
(MacKenzie, Podsakoff, and Jarvis 2005, 728). The results of another simulation provide
strong evidence that ‘‘measurement model misspecification of even one formatively mea-
sured construct within a typical structural equation model can have very serious consequen-
ces for the theoretical conclusions drawn from that model’’ (Jarvis, MacKenzie, and
Podsakoff 2003). Law and Wong (1999) demonstrated that measurement model misspe-
cification can bias effects on structural parameter estimates. Diamantopoulos and Siguaw
(2006) showed that the erroneous adoption of a reflective perspective would have resulted
in an underestimation of the links between constructs and different substantive conclusions.
Therefore, measurement relationships should be appropriately modeled.
When the construct is complex, we should use higher order models because such
models treat each dimension as an important component of the construct (Ruiz et al.
2008). PSM is conceived as a second-order construct with its four dimensions as first-order
factors and items of the dimensions as observed variables.4 When the relationships

4 Each dimension, such as APM, CPI, COM, and SS, has several reflective indicators based on the following criteria
(Jarvis, MacKenzie, and Podsakoff 2003): the relative homogeneity and interchangeability of indicators pertaining to
each dimension, the high degree of covariation among indicators of each dimension, and the expectation that the
indicators of each dimension are likely to be affected by the same antecedents and have the same consequences.
flow from PSM to its dimensions, PSM is termed superordinate, meaning that PSM
is a general entity that is manifested or reflected by the specific dimensions that serve
as its indicators. Its dimensions function as specific manifestations of PSM. When the re-
lationships flow from the dimensions to PSM, PSM is called aggregate, meaning that PSM
is a composite of its dimensions. The dimensions are conceived as specific components
which collectively constitute PSM (Edwards 2001; Williams, Edwards, and Vandenberg
2003).5
Even though the nature and direction of relationships between PSM and its dimensions
should be clearly specified by theory, and PSM is defined as a superordinate or an aggregate
construct prior to examining substantive relationships between PSM and related constructs,
there was no explicit consideration at the construct definition stage because at that time
multidimensional constructs were scarcely used in this field (Perry 1996; Perry and Wise
1990). Thus, it is still challenging to determine whether to specify PSM as a superordinate
or an aggregate construct. That is, we need to know whether PSM causes the first-order dimensions APM, CPI, COM, and SS (reflective model) or whether all first-order dimensions
cause PSM (formative model).
The choice between a formative and a reflective specification should primarily be
based on theoretical considerations regarding the causal priority between PSM and its
dimensions (Diamantopoulos and Winklhofer 2001). The important criteria to define a con-
struct as reflective (Jarvis, MacKenzie, and Podsakoff 2003) are the interchangeability of
dimensions pertaining to PSM and the expectation that the dimensions are likely to be
affected by the same antecedents and have the same consequences. The three motives
and four dimensions provide a comprehensive and theory-based conceptualization of
PSM (Perry 1996; Perry and Wise 1990). Each dimension is based on a different aspect
of human motivation. A variety of rational, norm-based, and affective motives appear to be
primarily or exclusively associated with public service. Perry (1996) identified four em-
pirical components of the PSM construct, and three of these subscales map directly to mo-
tivational foundations (Perry 1996, 2000). APM coincides with rational choice processes,
CPI with normative processes, and COM with affective processes. SS means the willing-
ness to substitute service to others for tangible personal rewards (Perry 1996). The dimen-
sions represent different aspects of PSM; a change in the value of any one dimension is not
necessarily expected to be associated with a change in the others, but a change in the value
of any one dimension is expected to cause changes in PSM. For example, if there is an
increase in APM, then the level of PSM also changes, but this is not necessarily associated with a change in CPI, COM, or SS. Each dimension of PSM has its own unique

5 PSM can be modeled as reflective or formative in second-order factor specifications (Edwards 2001; Jarvis,
MacKenzie, and Podsakoff 2003). The reflective model treats PSM as a superordinate construct that has reflective
dimensions and the dimensions themselves have reflective indicators (reflective first order, reflective second order);
and the formative model treats PSM as an aggregate construct that has formative dimensions and the dimensions
themselves have reflective indicators (reflective first order, formative second order). In the reflective model, the
meaning generally does not alter when dropping a dimension, whereas in the formative model it is necessary to include
all first-order dimensions that form PSM because dropping one may alter the meaning of PSM. Formally, the second-
order measurement model of PSM can be represented as follows:
1) Reflective model: APM = λ1PSM + ε1, CPI = λ2PSM + ε2, COM = λ3PSM + ε3, SS = λ4PSM + ε4
2) Formative model: PSM = γ1APM + γ2CPI + γ3COM + γ4SS + ζ
characteristics and theoretical backgrounds. The dimensions of PSM are not interchange-
able, and they are not expected to have the same antecedents and consequences.6 Thus, the
dimensions of PSM are not reflective but formative.
Each individual’s PSM needs to be analyzed and evaluated by all dimensions of PSM.
That is, the dimensions of PSM combine to produce PSM. It can be defined as a linear sum
of its dimensions, and so a formative composite in the form of indexes is conceptually
appropriate. So PSM needs to be defined as an aggregate construct. It is necessary to include
all first-order dimensions that form PSM because omitting one may alter the meaning of
PSM.7 This consideration of the relationships between PSM and its dimensions shows that
PSM is formative in nature. The next step is to analyze whether the formative model has
more desirable statistical properties than the reflective model, because when constructing a measure one needs to reconcile its theory-driven conceptualization with the desired statistical properties of the items comprising it, as revealed by empirical
testing (Diamantopoulos and Siguaw 2006). The two models are identical except for the
direction of causality between PSM and its dimensions.

METHODS
Sample
The data upon which this study is based were collected in a survey among firefighters at
Gyonggi Provincial Firefighting and Disaster Headquarters in Korea. All firefighters in
Korea are full-time public servants. They are employed according to performance and
qualification and are expected to make a life-long commitment to the service. In October
2007, all firefighters (5,343) received an e-mail in which they were asked to participate in
the survey during regular working hours, with the survey questionnaire attached. The ac-
companying cover letter outlined the study objectives, requested participation, and guar-
anteed anonymity. A total of 2,584 responses were returned, yielding a response rate of
48.4%.8 Of these, 2,497 were used in this study after cases with missing values were deleted. The majority of the respondents were men (93.1%), with those in their
30s constituting the largest age group (47.0%). Managerial level employees made up
18.8% of respondents. Firefighters who had worked for fewer than 10 years were over-
represented in the sample, but the other distributions of the sample such as sex, age, and
organizational rank were fairly typical of the population from which the sample was drawn
(table 2).9

6 Perry (1997) found some significant differences in the influence of independent variables on the four different
dimensions of PSM. Taylor (2007) confirmed that when the multiple dimensions of PSM are analyzed simultaneously,
certain dimensions are found to be more important than others in influencing work outcomes. Moynihan and Pandey
(2007a) reported that red tape is negatively and significantly related to APM, but not to CPI, whereas reform orientation
and hierarchical authority are positively related to CPI, but not to APM.
7 That is, PSM = γ1APM + γ2CPI + γ3COM + γ4SS + ζ ≠ γ1APM + γ2CPI + γ3COM + ζ (see footnote 5).
8 This survey was conducted as a part of the Survey on Firefighters’ Working Conditions supported by Gyonggi
Provincial Firefighting and Disaster Headquarters in Korea. Because the firefighters were very concerned about their
working conditions, they actively participated in the survey process.
9 The population for this study comprised all firefighters in central and local governments. The population numbers 30,158, of whom 94.9% are men and 5.1% women (National Emergency Management Agency [NEMA], Korea, 2008). Most (45.2%) were in their 30s, 33.4% were in their 40s, and 10.6% in their 20s. Those in the
largest group (46.7%) had worked for more than 10 years and fewer than 20 years, and 39.1% had worked for fewer
than 10 years. Those at the managerial levels were 16.7%.

Table 2
Background of Respondents (n = 2,497)
Variables Characteristics Respondents (%)
Sex Male 93.1
Female 6.9
Age 20s 14.1
30s 47.0
40s 29.7
50s 9.2
Length of service (years) 0–5 33.7
5–10 15.2
10–15 22.5
15–20 15.5
20–25 8.2
25+ 4.9
Education High school diploma or under 31.9
Junior college diploma (2 years) 39.2
Undergraduate degree (4 years) 27.5
Graduate degree or more 1.4
Organization Provincial headquarters 9.2
Fire Service College 1.2
Fire stations 89.6
Note: No answer is excluded.

Measure
All the indicators described below allowed responses on a five-point Likert-type scale (1 = strong disagreement, 5 = strong agreement). PSM was measured using 12 items: 3 items for
each of the dimensions of APM, CPI, COM, and SS. Perry (1996) developed a list of 24
items measuring 4 dimensions of PSM, but the APM items in Perry’s scale may not be
appropriate to represent the rational base of PSM in the Korean context (Kim 2009b). Ac-
cording to Perry and Wise (1990), the rational base is understood to mean that individuals may be drawn to government, or may pursue particular courses of action within government, because they believe their choices will facilitate the interests of special groups, and that one motive prevalent in pluralistic societies is an individual's advocacy for special interests. However,
the original APM items do not ask whether the respondents are more attracted to partic-
ipation in the process of policy formulation or in advocating specific public programs.
Thus, the 3 items of APM were replaced with more representative ones, and the 12-item
scale for the 4 factors was proposed. The test results provided support for convergent val-
idity as well as discriminant validity of the four-factor model, and the reliability coefficients
of all subscales were good (Kim 2009a). Among the items, eight are exactly the same as those
originally specified by Perry (1996), whereas one in the COM dimension replaces a reverse-
scored item of Perry’s (1996) study. Table 3 sets out the 12 items in list form.
A formative measurement model, in isolation, is under-identified and cannot be es-
timated (Bollen 1989). Consequently, in this study, two reflectively measured constructs, job satisfaction and organizational commitment, are added to the empirical analysis as
outcome variables for solving a problem of underidentification.10 Job satisfaction was mea-
sured with three items. The sample items are "My job provides a chance to do challenging and interesting work" and "I feel good about my job, the kind of work I do." The reliability coefficient was .783. Organizational commitment was measured with five items from the Organizational Commitment Questionnaire (Mowday, Steers, and Porter 1979). The sample questions are "I talk up this organization to my friends as a great organization to work for" and "I find that my values and the organization's values are very similar." The
reliability coefficient was .865.

Analysis
The statistical analysis applies structural equation modeling (SEM) using Amos 17.0 with
the asymptotically distribution-free (ADF) estimation method.11 After testing the four-
dimension measurement model of PSM, I conducted a series of CFAs to compare with
three-factor models. Then I compared a formative measurement model with a reflective
one. The only difference between the two models is the direction between PSM and its
dimensions. For gauging reliability, reliability coefficient (Cronbach’s alpha) and compos-
ite reliability are tested.12 Convergent validity is assessed from the measurement model by
determining whether each indicator’s estimated pattern coefficient on its specified under-
lying construct is statistically significant, and by calculating average variance extracted
(AVE).13 Discriminant validity is assessed by determining whether the confidence interval

10 In order to get the necessary conditions for the identification of formative indicator constructs, (1) the scale of
measurement for the latent construct is established by constraining a path from one of the construct's indicators to be
equal to 1 or by constraining the residual error variance for the construct to be equal to 1, and (2) to resolve the
indeterminacy associated with the construct-level error term, a formative construct emits paths to (a) at least two
unrelated latent constructs with reflective indicators, (b) at least two theoretically appropriate reflective indicators, or
(c) one reflective indicator and one latent construct with reflective indicators (Jarvis, MacKenzie, and Podsakoff 2003;
MacCallum and Browne 1993; MacKenzie, Podsakoff, and Jarvis 2005).

11 The discrete, noncontinuous distributions are not suitable for standard maximum likelihood estimation (MLE) in
confirmatory factor analysis. Applying standard MLE in such cases produces significant estimation problems, such as
inflation of chi-square fit statistics and biased underestimation of parameters and SEs. One preferred approach for
Likert-type items is to apply weighted least squares (WLS) or robust WLS (see Coursey and Pandey 2007b, footnote 2).
WLS performs adequately with large sample sizes. WLS is known as the ADF estimation method when used with
a correct asymptotic covariance matrix (Flora and Curran 2004).
12 The composite reliability estimates the extent to which a set of latent construct indicators share in their
measurement of a construct, whereas Cronbach’s alpha measures how well a set of variables or items measures a single,
unidimensional latent construct. Composite reliability = (sum of standardized loadings)² / {(sum of standardized
loadings)² + sum of indicator measurement errors}. Indicator measurement error = 1 − the square of each standardized
loading.
13 AVE indicates the amount of variance captured by the construct in relation to the variance due to measurement
error, and AVEs above 0.50 are treated as indications of convergent validity. AVE = (sum of squared standardized
loadings) / (sum of squared standardized loadings + sum of indicator measurement errors).

Table 3
Descriptive and Measurement Statistics (n = 2,497)

Dimensions and Items Mean (SD) SFL SMC (R2)

APM
PSM1: I am interested in making public programs that 3.57 (0.812) .834 .658
are beneficial for my country or the community I
belong to.
PSM2: Sharing my views on public policies with 3.51 (0.827) .845 .714
others is attractive to me.
PSM3: Seeing people get benefits from the public 3.79 (0.782) .811 .695
program I have been deeply involved in brings me
a great deal of satisfaction.
Commitment to the public interest
PSM4: I consider public service my civic duty. 4.02 (0.743) .858 .737
PSM5: Meaningful public service is very important 4.00 (0.741) .882 .778
to me.
PSM6: I would prefer seeing public officials do what is 3.98 (0.788) .759 .577
best for the whole community even if it harmed my
interests.
COM
PSM7: It is difficult for me to contain my feelings 4.14 (0.688) .805 .648
when I see people in distress.
PSM8: I am often reminded by daily events how 3.96 (0.699) .700 .490
dependent we are on one another.
PSM9: I feel sympathetic to the plight of the 4.15 (0.704) .793 .630
underprivileged.
SS
PSM10: Making a difference in society means more 3.52 (0.859) .771 .595
to me than personal achievements.
PSM11: I am prepared to make enormous sacrifices for 3.55 (0.859) .830 .689
the good of society.
PSM12: I believe in putting duty before self. 3.62 (0.826) .795 .632
Cronbach’s alpha, composite reliability, and interfactor correlations
Alpha CR AVE 1 2 3
1. APM .852 .869 0.689
2. CPI .858 .873 0.697 .647
3. COM .805 .811 0.589 .586 .835
4. SS .839 .841 0.639 .789 .728 .596
Note: SFL, standardized factor loading; SMC, squared multiple correlations; alpha, Cronbach’s alpha; CR, composite reliability. All
standardized factor loadings and correlations are significant at p < .001. PSM9, "I feel sympathetic to the plight of the underprivileged,"
replaced the reverse-scored item of "I am rarely moved by the plight of the underprivileged."

around the correlation estimate between two factors included 1.00 (Anderson and Gerbing
1988). The methods traditionally used for assessing construct reliability and validity are not
appropriate for a formative model because the direction of causality is posited to flow from
the dimensions to PSM (Jarvis, MacKenzie, and Podsakoff 2003). Instead, indicator
validity (Bollen 1989) and criterion validity or predictive validity (Diamantopoulos and
Siguaw 2006) will be analyzed.
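As an illustration of how these criteria are computed, the following Python sketch (added here; it is not part of the original analysis) applies the formulas from footnotes 12 and 13 to the standardized APM loadings reported later in table 3, and then applies the confidence-interval rule of Anderson and Gerbing (1988) to the largest interfactor correlation in table 3 (.835). The standard error in the last step is hypothetical, because the article does not report interfactor SEs.

import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings, dtype=float)
    errors = 1 - loadings**2                      # indicator measurement error
    return loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    loadings = np.asarray(loadings, dtype=float)
    errors = 1 - loadings**2
    return (loadings**2).sum() / ((loadings**2).sum() + errors.sum())

apm_loadings = [0.834, 0.845, 0.811]              # standardized APM loadings, table 3
print(round(composite_reliability(apm_loadings), 3))        # about 0.869
print(round(average_variance_extracted(apm_loadings), 3))   # about 0.689

# Discriminant validity: the interval r +/- 2*SE should not contain 1.00.
r, se = 0.835, 0.015                              # r from table 3; SE is hypothetical
print(r - 2 * se <= 1.00 <= r + 2 * se)           # False -> discriminant validity supported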
To get the necessary conditions for the identification of formative indicator constructs
(Jarvis, MacKenzie, and Podsakoff 2003; MacCallum and Browne 1993; MacKenzie,
Podsakoff, and Jarvis 2005), the residual error variance for PSM is constrained to unity and
two unrelated latent constructs with reflective indicators are added. The two output con-
structs are job satisfaction and organizational commitment. Theoretical relationships can be
postulated to exist between PSM and job satisfaction (Kim 2005a, 2006; Liu, Tang, and Zhu
2008; Naff and Crum 1999; Park and Rainey 2008; Steijn 2008; Taylor 2008; Wright and
Pandey 2008) and between PSM and organizational commitment (Castaing 2006; Kim
2005a, 2006).
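Putting these pieces together, the identified model estimated here can be written, in outline, as follows (a sketch of the specification just described, not the article's own notation; β1 and β2 denote the structural paths to the two outcome constructs):

\begin{align*}
\mathrm{PSM} &= \gamma_1\,\mathrm{APM} + \gamma_2\,\mathrm{CPI} + \gamma_3\,\mathrm{COM} + \gamma_4\,\mathrm{SS} + \zeta, \qquad \mathrm{Var}(\zeta) = 1,\\
\mathrm{JS} &= \beta_1\,\mathrm{PSM} + \varepsilon_{\mathrm{JS}},\\
\mathrm{OC} &= \beta_2\,\mathrm{PSM} + \varepsilon_{\mathrm{OC}},
\end{align*}

where JS (job satisfaction) and OC (organizational commitment) are each measured by their own reflective indicators; fixing Var(ζ) to 1 and emitting paths to two unrelated reflectively measured constructs satisfies the identification conditions in footnote 10.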
For model fit comparison, goodness-of-fit index (GFI) and the root mean square error of
approximation (RMSEA) are used because the Monte Carlo simulations showed that
RMSEA and GFI among various goodness-of-fit indices are better for detecting measurement
model misspecification (Jarvis, MacKenzie, and Podsakoff 2003; MacKenzie, Podsakoff,
and Jarvis 2005).14 The model achieves an acceptable fit to the data when GFI equals or
exceeds 0.90, and RMSEA values fall below 0.05 (Hu and Bentler 1999). In general, the
larger the value of GFI and the smaller the value of RMSEA, the better the fit of the model
(Bollen 1989). The proper specification of the direction of causality between PSM and
its dimensions may result in a better model fit (MacCallum and Browne 1993).
Since all the items were measured at the same time and from the same person, the
effects of common method variance (Podsakoff et al. 2003a) need to be examined. Com-
mon method variance refers to variance that is attributable to the measurement method
rather than to the construct of interest. It can have a substantial impact on the observed
relationships between predictor and criterion variables. To assess the potential impact
of this form of bias on the structural relationships, a second model that included a common
method factor was assessed. This common method model differs from the structural model
in that a ‘‘method’’ latent variable is added. All the items that originate from the same
source are then double-loaded onto its substantive latent variable and the method variable
as well. If the paths that are found to be significant remain significant in this final model, we
can offer support that the relationships found are robust to common method effects
(Moorman and Blakely, 1995; Williams, Edwards, and Vandenberg 2003).
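In outline, the common method model double-loads each item as follows (an illustrative formulation added here, not the article's notation):

\begin{equation*}
x_{ij} = \lambda_{ij}\,F_{j} + \lambda^{M}_{ij}\,M + \varepsilon_{ij},
\end{equation*}

where x_{ij} is item i of construct j, F_j is its substantive latent variable, and M is the method factor shared by all items from the same source; if the substantive paths remain significant after M is added, common method bias is unlikely to account for the results.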

RESULTS
The four-correlated factor model with 12 items of PSM was tested using CFA, which hy-
pothesized a priori that (a) responses to the 12-item PSM scale could be explained by four
factors; (b) each item would have a nonzero loading on the PSM factor it was designed to
measure and zero loadings on all other factors; (c) the four factors, consistent with the
theory, would be correlated; and (d) measurement error terms would be uncorrelated
(Byrne 2001). The resulting CFA for the sample (n = 2,497) shows that it had an acceptable fit to the data, χ² (degrees of freedom [df] = 48) = 334.6, p < .001; GFI = 0.931, RMSEA = 0.049. As table 3 shows, both Cronbach's coefficient alphas and the composite
reliability of the set of reflective indicators for each dimension of PSM exceed .80. The
resulting factor structure shows a clean four-factor structure with all items loading signif-
icantly onto their a priori dimension (p , .001), and the standardized factor loadings rang-
ing from 0.700 to 0.845. In each factor, AVE exceeds 0.50. The results provide support for

14 The chi-square is also able to detect the measurement model misspecification, but it needs to be discounted because
lower values of chi-square indicate a better fit and should be nonsignificant, but for large sample sizes, this statistic may
lead to rejection of a model with a good fit (Jarvis, MacKenzie, and Podsakoff 2003).

Table 4
Model Fit Indices for Different Dimension Scales
Model χ², (df), p value GFI RMSEA
(1) Three-factor model #1 (APM, CPI + COM, SS) 481.6, (51), .000 0.901 0.058 [0.053, 0.063]
(2) Three-factor model #2 (APM + SS, CPI, COM) 521.2, (51), .000 0.892 0.061 [0.056, 0.066]
(3) Three-factor model #3 (APM, CPI + SS, COM) 678.9, (51), .000 0.860 0.070 [0.066, 0.075]
(4) Three-factor model #4 (APM, CPI, COM + SS) 714.3, (51), .000 0.852 0.072 [0.068, 0.077]
(5) Four-factor model 334.6, (48), .000 0.931 0.049 [0.044, 0.054]
Note: Δχ²(3) between model (5) and any three-factor model is significant at p < .001.
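The Δχ² reported in the table note can be checked directly from the fit statistics above; for example, comparing model (5) with the best-fitting three-factor model (1) gives

\begin{equation*}
\Delta\chi^{2} = 481.6 - 334.6 = 147.0, \qquad \Delta df = 51 - 48 = 3,
\end{equation*}

which far exceeds the critical value χ²_{.001}(3) ≈ 16.27; the remaining three-factor models differ by even more, so every constrained model fits significantly worse than the four-factor model (p < .001).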

convergent validity. The correlation estimates between the two factors range from .586
to .835, and the confidence intervals (±2 standard errors [SEs]) around the correlation
estimates between the two factors do not include 1.00. The result provides support for
discriminant validity.
In the APM dimension of the four-correlated factor model, all factor loadings of the
three items are greater than 0.7 and all squared multiple correlations are greater than .5.
Both the reliability coefficient and composite reliability are greater than .8, and AVE is
greater than 0.5. The results prove that the four-factor measurement model including
the revised APM dimension can be confirmed with this sample. Thus, the revised APM
items are more appropriate for representing the rational base of PSM than the original items
(Perry 1996).
To compare the four-dimension model with the various three-dimension models, I
conducted a series of CFAs, and the results show that the four-factor model fits significantly better than any three-dimension model. As table 4 shows, the four-factor model captures
the covariance among the 12 items much better than any other models. In this study, the
three-factor model (Perry 1996; Vandenabeele 2008a), combining CPI and SS, shows
a worse fit than the four-factor model. Thus, the original four-factor model is superior
to the three-factor model, combining CPI and SS. Subsequent to the model comparisons,
the question of how to consolidate these dimensions into a second-order construct needs to
be addressed. Should the conceptualization of PSM be superordinate or aggregate?
Figure 1 shows a reflective model of PSM, with emitting paths to job satisfaction and
organizational commitment. Each dimension loads significantly on PSM, and all standard-
ized loadings exceed 0.70; the composite reliability of the set of reflective dimensions for
PSM is 0.912 and its AVE is 0.722. For second-order constructs with reflective dimensions,
convergent validity can also be assessed by the average variance in the first-order dimen-
sions accounted for by the second-order latent construct they represent (MacKenzie,
Podsakoff, and Jarvis 2005). This can be calculated by averaging the squared multiple cor-
relations (R2) for the construct’s first-order factors. It is desirable for the squared multiple
correlation for each dimension to be greater than .50. The squared multiple correlations of
the four dimensions range from .616 to .820. Thus, the results provide support for reliability
and convergent validity. Assessment of discriminant validity for second-order factors is not
appropriate in this study because PSM is the only latent construct at the second-order level.
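Because, in a standardized solution, each dimension's squared multiple correlation equals its squared loading on the second-order factor, this convergent validity check reduces to simple arithmetic. The following minimal sketch uses the standardized second-order loadings reported for the reflective model in table 5 (without the common method factor).

# Minimal sketch: convergent validity of a second-order reflective construct,
# assessed by averaging the squared multiple correlations (R^2) of the
# first-order dimensions. Loadings are the standardized second-order estimates
# reported in table 5 for the reflective model (no common method factor).
second_order_loadings = {"APM": 0.821, "CPI": 0.883, "COM": 0.785, "SS": 0.905}

smc = {dim: lam ** 2 for dim, lam in second_order_loadings.items()}
average_smc = sum(smc.values()) / len(smc)

for dim, r2 in smc.items():
    print(f"{dim}: R^2 = {r2:.3f}")   # each should exceed .50
print(f"Average R^2 = {average_smc:.3f}")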
Figure 2 shows a formative model of PSM, with emitting paths to job satisfaction and
organizational commitment. There are three issues to be addressed to construct a model
with formative indicators: content validity, identification, and indicator collinearity
(Collier and Bienstock 2006). Content validity is essential when constructing formative
indicators because the scope of the construct is formed by the indicators. In this study,

Figure 1
A Reflective Model of PSM with JS and OC



the scope of PSM is captured by all four first-order dimensions (Perry 1996), so content validity is achieved through the theory-driven conceptualization of the measure. With regard to achieving identification with formative indicators, the two conditions necessary to prevent underidentification are satisfied by constraining the residual error variance for PSM to unity and by adding two unrelated constructs with reflective indicators. Multicollinearity is an undesirable property in formative models because it causes estimation difficulties (Diamantopoulos and Winklhofer 2001). Multicollinearity occurs "when intercorrelations among some variables are so high (e.g., >0.85) that certain mathematical operations are either impossible or unstable because some denominators are close to zero" (Kline 2004, 56). In this study, the interfactor correlations are less than .85, and the variance inflation factors (VIFs) of the dimensions range between 1.55 and 1.92, well within the acceptable cutoff of 3.33 (Diamantopoulos and Siguaw 2006), with lower values being better. So multicollinearity is not a problem for the model.
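The VIF screen can be computed directly from the interfactor correlation matrix, since each dimension's VIF is the corresponding diagonal element of the inverse of that matrix. The sketch below illustrates the computation with a hypothetical correlation matrix; the study's actual interfactor correlations are not reproduced here.

# Minimal sketch: variance inflation factors (VIFs) for formative dimensions,
# taken as the diagonal of the inverse of their correlation matrix
# (VIF_i = 1 / (1 - R_i^2), where R_i^2 comes from regressing dimension i on the rest).
# The correlation matrix below is hypothetical, not the study's actual estimates.
import numpy as np

dims = ["APM", "CPI", "COM", "SS"]
corr = np.array([
    [1.00, 0.59, 0.55, 0.60],
    [0.59, 1.00, 0.65, 0.62],
    [0.55, 0.65, 1.00, 0.58],
    [0.60, 0.62, 0.58, 1.00],
])

vif = np.diag(np.linalg.inv(corr))
for dim, v in zip(dims, vif):
    print(f"VIF({dim}) = {v:.2f}")  # values below 3.33 indicate no collinearity problem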
For second-order constructs with formative first-order dimensions, convergent valid-
ity is not relevant because the model does not imply that the dimensions should necessarily
be correlated (MacKenzie, Podsakoff, and Jarvis 2005). Instead, assessments of construct
validity should be based on indicator validity and criterion validity. The four dimensions

Figure 2
A Formative Model of PSM with JS and OC



explain a good proportion of the variance of PSM (R² = .423). The γ-parameters, which show the impact of the formative first-order dimensions on PSM (see notes 3 and 5), describe indicator validity (Bollen 1989). Items with nonsignificant γ-parameters should be considered for elimination because they do not represent valid indicators of the construct (Diamantopoulos, Riefler, and Roth 2008). All the paths in the formative model are significant except that the effect of COM on PSM is negative and nonsignificant. This is explored more thoroughly in the Discussion.
Criterion validity, or predictive validity, concerns the correlation between a multi-item operationalization of a construct and some criterion variable of interest. Criterion validity is captured by the ability of a reflective scale versus that of a formative index to predict an outcome variable (Diamantopoulos and Siguaw 2006). The magnitude of the path coefficients between PSM and the two outcome constructs (job satisfaction and organizational commitment) is greater in the formative model. An analysis of the differences between correlations shows that the variance of job satisfaction and organizational commitment explained by PSM is significantly greater in the formative model (Ruiz et al. 2008). Thus, the formative model outperforms the reflective measure in terms of criterion validity. These results suggest that a formative model of PSM is a better predictor of these two constructs than a reflective one.
Evidence of relationships between PSM and job satisfaction and between PSM and organizational commitment was found. This is consistent with the vast majority of studies, in which PSM is positively correlated with job satisfaction (Kim 2005a, 2006; Liu, Tang, and Zhu 2008; Naff and Crum 1999; Park and Rainey 2008; Steijn 2008; Taylor 2008; Wright and Pandey 2008). Moreover, "job satisfaction appears primarily to be a function of an individual's unique wants and expectations" (Pandey and Stazyk 2008, 112). For firefighters, these wants and expectations may be linked to a desire to help individuals and to contribute to society by protecting life and property from fires and by providing relevant services to communities. Firefighters may choose their occupation to realize these desires, and so they are committed to this honorable profession and to the organization that imposes this role on them. They can realize their wants and expectations by working as firefighters in fire stations and doing the job, fighting fires and taking care of communities (Lee and Olshfski 2002). Thus, PSM is an important individual predisposition that explains the job satisfaction and organizational commitment of firefighters (Castaing 2006).
The model fit indices show that the formative model (χ² = 432.3, df = 149, p < .001, GFI = 0.945, RMSEA = 0.028) fits the data better than the reflective model (χ² = 800.5, df = 155, p < .001, GFI = 0.898, RMSEA = 0.041). Respecification of the dimensions from reflective to formative results in an improved model fit (ΔGFI = 0.047, ΔRMSEA = 0.013). Thus, the formative model has more desirable statistical properties than the reflective model.
The method models, which included a common method factor, were evaluated in both cases. These models contained all the indicators and factors of the previous structural model, and the indicators of the factors were double loaded onto a method factor. Any shared variance based on the source of the rating would thereby be controlled when assessing the significance of the structural paths (Moorman and Blakely 1995; Podsakoff et al. 2003a; Williams, Edwards, and Vandenberg 2003). These method models adequately fit the data. When comparing the models with and without the common method factor, the model including the method factor exhibited a better fit with the data in both cases. As shown in table 5, all the path coefficients found to be significant in the structural models maintained their significance and remained substantial in magnitude. Thus, common method variance had some effect on the regression weights of the paths, but it is unlikely to be a serious concern for this study.15
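The variance partitioning behind this judgment (summarized in note 15 below) is straightforward once the double-loaded model has been estimated: under a standardized solution, the squared substantive loading and the squared method loading give each indicator's variance shares. The following minimal sketch uses hypothetical loadings purely to illustrate the bookkeeping.

# Minimal sketch of the variance partitioning used to gauge common method bias:
# with each indicator double loaded on its substantive construct and on a common
# method factor, the squared standardized loadings give the share of indicator
# variance attributable to each source. The loadings below are hypothetical.
substantive_loadings = [0.78, 0.81, 0.74, 0.80, 0.76, 0.79]  # hypothetical
method_loadings = [0.12, 0.08, 0.15, 0.10, 0.05, 0.11]       # hypothetical

avg_substantive_variance = sum(l ** 2 for l in substantive_loadings) / len(substantive_loadings)
avg_method_variance = sum(l ** 2 for l in method_loadings) / len(method_loadings)

print(f"Average substantive variance: {avg_substantive_variance:.3f}")
print(f"Average method variance:      {avg_method_variance:.3f}")
# A large substantive-to-method ratio suggests common method bias is not a serious concern.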
Based on model fit assessment and predictive validity, the formative model has more
desirable statistical properties than the reflective model.16 The two models are identical
apart from the direction of causality between PSM and its dimensions. The empirical test
provides appropriate evidence that the direction of the relationship needs to be from the
dimensions to PSM, and PSM needs to be defined as an aggregate construct, meaning that
PSM is a composite of its dimensions.

15 The degree of the effect of common method variance can be evaluated by calculating each indicator's variance substantively explained by its principal construct and by the method (Liang et al. 2007; Williams, Cote, and Buckley 1989). In the formative model, after controlling for the effects of common method variance, the average substantively explained variance of the indicators is 0.585, whereas the average method-based variance is 0.040. In addition, most method factor loadings are not significant. In the reflective model, after controlling for the effects of common method variance, the average proportion of variance attributed to the measurement factors is higher than the average proportion attributed to the method factor (0.520 vs. 0.120). These results provide some evidence that substantive relationships, and not merely common method bias, are likely responsible for the observed findings.
16 One may argue that the reflective model is preferable because every path is highly significant under the reflective specification, whereas one path is nonsignificant under the formative model. However, the standardized path estimates in the reflective model cannot be directly compared with those in the formative model because the directions of the paths differ between the two models (see notes 3 and 5).

Table 5
Test Results for Formative and Reflective Models (n = 2,497)

                                  Formative Model                  Reflective Model
Path                              Standardized Estimate  SMC (R²)  Standardized Estimate  SMC (R²)

Models without controlling for the effects of common method variance
PSM → JS                          .671***                .450      .629***                .396
   z value for difference in R²: 2.32*
PSM → OC                          .987***                .973      .851***                .724
   z value for difference in R²: 31.28***
APM → PSM                         .132***                .423
CPI → PSM                         .244***
COM → PSM                         -.038
SS → PSM                          .366***
PSM → APM                                                          .821***                .674
PSM → CPI                                                          .883***                .779
PSM → COM                                                          .785***                .616
PSM → SS                                                           .905***                .820
Measures of fit                   χ² = 423.3, df = 149, p < .001,  χ² = 800.5, df = 155, p < .001,
                                  GFI = 0.945, RMSEA = 0.028       GFI = 0.898, RMSEA = 0.041
                                  [0.025, 0.031]                   [0.038, 0.044]

Models with controlling for the effects of common method variance
PSM → JS                          .680***                .463      .534***                .285
   z value for difference in R²: 7.34***
PSM → OC                          .968***                .937      .802***                .644
   z value for difference in R²: 33.47***
APM → PSM                         .140***                .430
CPI → PSM                         .214***
COM → PSM                         -.072
SS → PSM                          .414***
PSM → APM                                                          .765***                .585
PSM → CPI                                                          .876***                .767
PSM → COM                                                          .881***                .776
PSM → SS                                                           .869***                .755
Measures of fit                   χ² = 228.6, df = 129, p < .001,  χ² = 461.9, df = 135, p < .001,
                                  GFI = 0.971, RMSEA = 0.018       GFI = 0.941, RMSEA = 0.031
                                  [0.014, 0.021]                   [0.028, 0.034]

Note: JS, job satisfaction; OC, organizational commitment; SMC, squared multiple correlation.
* p < .05, ** p < .01, *** p < .001.

DISCUSSION
Specification of the measurement model is a critical decision that needs to be made on the
basis of conceptual criteria (MacKenzie, Podsakoff, and Jarvis 2005). Measurement model
misspecification can bias structural parameter estimates and result in errors of inference.
The theoretical consideration of the relationships between PSM and its dimensions shows
that PSM is an aggregate construct. The empirical testing also shows that the formative
model has more desirable statistical properties than the reflective model. First, the fit in-
dices show that the formative model provides a better fit than the reflective model. The poor
fit experienced in covariance structure models may indicate that the wrong type of mea-
surement model has been applied (Diamantopoulos and Winklhofer 2001). Second, the
formative model significantly outperforms reflective measures in terms of predictive validity. "While reliable and valid measurement provides a necessary foundation upon which to test causal claims, causal tests (predictive validity) can provide some of the strongest evidence that the existence of a construct is being properly measured" (Wright 2008,
80). This study provides empirical evidence in support of a second-order, formative approach to PSM. Both theoretical and empirical considerations therefore suggest that the formative model is more plausible than the reflective one, and it is more reasonable to define PSM as a formative construct. As Wright (2008, 85) put it: "Researchers should consider operationalizing this four-dimension conceptualization as first-order reflective and second-order formative."
Although it is appropriate to measure PSM as a formative second-order factor specification, the COM dimension poses a problem. Table 5 shows a negative and nonsignificant effect of COM on PSM. PSM has rational, norm-based, and affective motives (Perry and Wise 1990). Affective motives, grounded in human emotion, include commitment to a program stemming from a genuine conviction about its social importance and the patriotism of benevolence. The dimension of COM should coincide with affective motives because, in the formative model, each dimension needs to represent an important component of the construct. But the γ-parameter of COM shows that COM is not an important determinant of PSM. There are
several ways to explain this. First, in this study, only three items are used to measure COM instead of the original scale of eight items (Perry 1996). Even though these three items were selected on the basis of previous empirical tests (Kim 2009a, 2009b), they might not represent the core content of the COM dimension. When an abbreviated version of the original scale was used, the scale employed for COM failed to achieve an acceptable level of reliability in the United States (DeHart-Davis et al. 2006; Moynihan and Pandey 2007b). The model therefore needs to be examined using the original scale. Second, it seems that, in the Korean context, firefighters do not consider the COM items separately from the CPI items, because the interfactor correlation between COM and CPI is high. The model needs to be tested with different samples and different nationalities. Third, the COM items may not represent a unique and salient quality of affective motives. The original items need to be reexamined to determine whether they represent affective motives appropriately, and, if necessary, more appropriate items need to be developed for this dimension. However, this study alone is insufficient to show which explanation is most reasonable; further studies are needed to reach a conclusion.
The practical implications of this formative specification are twofold. First, PSM is formed as a combination of several dimensions, and each dimension provides a unique contribution to PSM. An individual's PSM is determined by the individual's APM, CPI, COM, and SS. Consequently, deleting a dimension omits a part of PSM and changes the meaning of the construct; the consequences of dropping one of its dimensions may be serious. Thus, all dimensions that form PSM should be included in a study, and the relationships between PSM and its antecedents and consequences found in previous studies need to be reexamined if they were not tested with the full set of dimensions.
Second, PSM can be measured by a linear sum of its dimensions. It can be operationalized by summing the scores of its dimensions so that the dimensions are assigned equal weights, or by assigning the dimensions empirically derived weights obtained from principal component analysis or factor analysis (Edwards 2001). Alternatively, the dimensions can be estimated through a formative specification, as in this study. Nevertheless, "if the summated scale is well-constructed, valid, and reliable instrument, then it is probably the best alternative" (Hair et al. 2006, 140), and "a well-developed summated rating scale can have good reliability and validity" (Spector 1992, 2). When developing the measure of PSM using SEM approaches, once items have been selected following dimensionality and reliability tests and the dimensions have been confirmed, it is practical to combine them to generate overall measures of PSM (Diamantopoulos and Siguaw 2006). Indexing PSM by measuring all of its dimensions and simply summing the averages of each dimension can make it easier to measure PSM in practice.
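To illustrate, a summated PSM index of the kind described here can be computed in a few lines: average the items within each dimension, and then either average the dimension scores with equal weights or weight them by the first principal component. The sketch below assumes item-level survey responses in a pandas DataFrame with hypothetical column names (apm1 through ss3); it is an illustration, not the scoring procedure used in this study.

# Minimal sketch: building an overall PSM index from dimension scores, using either
# equal weights or weights derived from the first principal component.
# Column names (apm1 ... ss3) and the random data are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = {f"{d}{i}": rng.integers(1, 6, size=200)  # simulated 5-point Likert responses
         for d in ("apm", "cpi", "com", "ss") for i in (1, 2, 3)}
df = pd.DataFrame(items)

# 1. Dimension scores: mean of each dimension's three items.
dims = {d: df[[f"{d}1", f"{d}2", f"{d}3"]].mean(axis=1) for d in ("apm", "cpi", "com", "ss")}
dim_scores = pd.DataFrame(dims)

# 2a. Equal-weight index: simple mean of the four dimension scores.
psm_equal = dim_scores.mean(axis=1)

# 2b. PCA-weighted index: weights from the first principal component of the
#     dimension correlation matrix (eigenvector of the largest eigenvalue).
corr = np.corrcoef(dim_scores.T)
eigvals, eigvecs = np.linalg.eigh(corr)
weights = np.abs(eigvecs[:, -1])   # eigh sorts eigenvalues in ascending order
weights = weights / weights.sum()  # normalize weights to sum to 1
psm_pca = dim_scores.values @ weights

print("Equal-weight PSM:", psm_equal.head().round(2).tolist())
print("PCA-weighted PSM:", np.round(psm_pca[:5], 2).tolist())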

CONCLUSION
The purposes of this study were to confirm the revised APM items, which were developed to replace the original items that had little face validity as indicators of the rational base of PSM; to compare a four-factor model with three-factor models, because the dimensionality of PSM is still controversial; and to analyze whether PSM is formative or reflective in nature. Using survey responses from 2,497 firefighters and a revised 12-item measure of PSM, this study shows that the revised APM items are more appropriate for representing the rational base of PSM and that the original four-factor model is superior to any three-factor model. Based on the discussion of formative and reflective models and the test results, this study provides theoretical and empirical evidence in support of a second-order, formative approach. That is, PSM needs to be defined as an aggregate construct, meaning that PSM is a composite of its dimensions, each of which has several reflective indicators. The formative model explains a good proportion of the variance of PSM. However, the component weight of COM is nonsignificant and negative, whereas the other three dimensions of the formative model are salient contributors to PSM. The COM dimension needs to be reexamined to determine whether its items represent affective motives appropriately, using the original PSM scale and different samples.
The direction of relationship between PSM and its dimensions should be clearly spec-
ified prior to examining substantive relationships between theoretical constructs. The
failure to specify a measurement model properly can bias estimates of the structural rela-
tionships between constructs. The first issue to decide when designing a study is whether
the construct of interest is formative or reflective in nature. This requires a clear conceptual
definition of the construct, generation of a set of measures fully representing the domain of
the construct, and careful consideration of the relationships between the construct and its
measures (Jarvis, MacKenzie, and Podsakoff 2003). Previous studies have focused almost exclusively on scale development, whereby items (i.e., observed variables) and dimensions are treated as reflective indicators of PSM. In contrast, this study provides theoretical and empirical evidence for a formative specification. This means not only that researchers should include all dimensions that form PSM in a study and regard PSM as a composite of its dimensions, but also that previous studies assuming PSM to be a reflective construct need to be reexamined.
This study is one of the first empirical tests of formative versus reflective specifications of PSM. It examines both a second-order reflective model and a second-order formative model, theoretically and empirically, with the four dimensions of PSM. This study may provide a worthwhile contribution to research on PSM by advancing the development of a measure of the concept in at least three ways. First, it validates new items for the APM dimension that appear to improve on the original items. Second, it provides theoretical and empirical evidence in support of the formative approach by analyzing the full set of PSM dimensions. Third, it raises questions about the measurement and fit of the COM dimension. These contributions should help make PSM researchers aware of the difficulties of measuring the concept, and the study offers an alternative lens for conceptualizing and measuring PSM.
This study has several limitations. First, it used exactly the same set of items and dimensions for comparing the reflective and formative specifications. It was assumed that the only difference between the formative and reflective approaches is the direction of the relationship between PSM and its dimensions (Diamantopoulos and Siguaw 2006). However, items and dimensions might be developed differently if PSM were defined as an aggregate construct at the construct definition stage; this study could not comparatively test a formative model and a reflective model that were each intentionally and independently developed from the concept of PSM. Second, this study used a shortened and adapted version of Perry's (1996) instrument instead of the original one, because the revised 12-item scale was shown to be a valid and reliable measure in a previous study and a shorter scale reduces respondents' workload (Kim 2009a). The development of a measurement instrument is an evolving process, but this choice can make comparisons with other studies difficult. The original scale needs to be examined by applying the formative and reflective approaches. Third, the sample of this study is limited to one particular group, firefighters in Korea. Thus, it is necessary to examine the measurement models of PSM with different samples in different contexts.
This study alone is insufficient to show how much improvement can be made by using the revised APM items and the formative model of PSM. Further research should not only investigate the appropriateness of the PSM instrument but also test a formative measure of PSM with varying samples and different nationalities. From a practical perspective, this study used the well-known structural equation modeling program AMOS, but some authors recommend the vanishing tetrad test as a better empirical way to compare formative and reflective models objectively (Bollen, Lennox, and Dahly 2009).

REFERENCES
Anderson, James C., and David D. Gerbing. 1988. Structural equation modeling in practice: A review and
recommended two-step approach. Psychological Bulletin 103:411–23.
Bass, Bernard M. 1985. Leadership and performance beyond expectations. New York: Free Press.
Bollen, Kenneth A. 1989. Structural equations with latent variables. New York: Wiley.
Bollen, Kenneth A., and Richard D. Lennox. 1991. Conventional wisdom on measurement: A structural
equation perspective. Psychological Bulletin 110:305–14.
Bollen, Kenneth A., Richard D. Lennox, and Darren L. Dahly. 2009. Practical application of the vanishing
tetrad test for causal indicator measurement models: An example from health-related quality of life.
Statistics in Medicine 28:1524–36.
Brewer, Gene A., and Sally Coleman Selden. 1998. Whistle blowers in the federal civil service: New
evidence of the public service ethic. Journal of Public Administration Research and Theory 8:413–39.
Bright, Leonard. 2008. Does public service motivation really make a difference on the job satisfaction and
turnover intentions of public employees? American Review of Public Administration 38:149–66.
Byrne, Barbara M. 2001. Structural equation modeling with AMOS: Basic concepts, applications, and
programming. Mahwah, NJ: Erlbaum.
Camilleri, Emanuel. 2006. Towards developing an organizational commitment—Public service moti-
vation model for Maltese public service employees. Public Policy and Administration 21:63–83.
Castaing, Sébastien. 2006. The effects of psychological contract fulfillment and public service motivation on
organizational commitment in the French civil service. Public Policy and Administration 21:84–98.

Choi, Do Lim. 2004. Public service motivation and ethical conduct. International Review of Public
Administration 8:99–106.
Clerkin, Richard M., Sharon R. Paynter, and Jami Kathleen Taylor. 2009. Public service motivation in
undergraduate giving and volunteering decisions. American Review of Public Administration
39:675–98.
Collier, Joel E., and Carol C. Bienstock. 2006. Measuring service quality in E-retailing. Journal of Service
Research 8:260–75.
Coursey, David H., and Sanjay K. Pandey. 2007a. Public service motivation measurement: Testing an
abridged version of Perry’s proposed scale. Administration & Society 39:547–68.
———. 2007b. Content domain, measurement, and validity of the Red Tape concept. American Review of
Public Administration 37:342–61.
Coursey, David H., James L. Perry, Jeffrey L. Brudney, and Laura Littlepage. 2008a. Psychometric
verification of Perry’s public service motivation instrument: Results for volunteer exemplars. Review
of Public Personnel Administration 28:79–90.
Coursey, David H., Jeffrey L. Brudney, James L. Perry, and Laura Littlepage. 2008b. Measurement
questions in public service motivation: Construct formation and nomological distinctiveness and
explanatory power for volunteering activities. Paper presented for the Minnowbrook III Conference, Lake Placid, NY, September 5–7, 2008.
DeHart-Davis, Leisha, Justin Marlowe, and Sanjay K. Pandey. 2006. Gender dimensions of public service
motivation. Public Administration Review 66:873–87.
DeVellis, Robert F. 1991. Scale development: Theory and applications. Thousand Oaks: Sage.
Diamantopoulos, Adamantios, Petra Riefler, and Katharina P. Roth. 2008. Advancing formative mea-
surement models. Journal of Business Research 61:1203–18.
Diamantopoulos, Adamantios, and Judy A. Siguaw. 2006. Formative versus reflective indicators in or-
ganizational measure development: A comparison and empirical illustration. British Journal of
Management 17:263–82.
Diamantopoulos, Adamantios, and Heidi M. Winklhofer. 2001. Index construction with formative
indicators: An alternative to scale development. Journal of Marketing Research 38:269–77.
Edwards, Jeffrey R. 2001. Multidimensional constructs in organizational behavior research: An integrative
analytical framework. Organizational Research Methods 4:144–92.
Edwards, Jeffrey R., and Richard P. Bagozzi. 2000. On the nature and direction of relationships between
constructs and measures. Psychological Methods 5:155–74.
Flora, David B., and Patrick J. Curran. 2004. An empirical evaluation of alternative methods of estimation
for confirmatory factor analysis with ordinal data. Psychological Methods 9:466–91.
Frederickson, H. George and David K. Hart. 1985. The public service and the patriotism of benevolence.
Public Administration Review 45:547–53.
Giauque, David, Adrian Ritz, Frédéric Varone, Simon Anderfuhren-Biget, and Christian Waldner. 2009.
Motivation of public employees at the municipal level in Switzerland. Paper prepared for delivery at
the International Public Service Motivation Research Conference, Bloomington, IN, June 7–9, 2009.
Hair, Joseph F., Jr., William C. Black, Barry J. Babin, Rolph E. Anderson, and Ronald L. Tatham. 2006.
Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall.
Hu, Li-tze, and Peter M. Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis:
Conventional criteria versus new alternatives. Structural Equation Modeling 6:1–55.
Ironson, G. H., P. C. Smith, M. T. Brannick, W. M. Gibson, and K. B. Paul. 1989. Construction of a job in
general scale: A comparison of global, composite, and specific measures. Journal of Applied
Psychology 74:193–200.
Jarvis, Cheryl Burke, Scott B. MacKenzie, and Philip M. Podsakoff. 2003. A critical review of construct
indicators and measurement model misspecification in marketing and consumer research. Journal of
Consumer Research 30:199–218.
Kim, Sangmook. 2005a. Individual-level factors and organizational performance in government organ-
izations. Journal of Public Administration Research and Theory 15:245–61.
———. 2005b. Gender differences in the job satisfaction of public employees: A study of Seoul Met-
ropolitan Government, Korea. Sex Roles: A Journal of Research 52:667–81.

———. 2006. Public service motivation and organizational citizenship behavior in Korea. International
Journal of Manpower 27:722–40.
———. 2009a. Revising Perry’s measurement scale of public service motivation. American Review of
Public Administration 39:149–63.
———. 2009b. Testing the structure of public service motivation in Korea: A research note. Journal of
Public Administration Research and Theory 19:839–51.
Kline, Rex B. 2004. Principles and practices of structural equation modeling, 2nd ed. New York: Guilford.
Law, Kenneth, and Chi-Sum Wong. 1999. Multidimensional constructs in structural equation analysis: An
illustration using the job perception and job satisfaction constructs. Journal of Management
25:143–60.
Lee, Geunjoo. 2005. PSM and public employees’ work performance. Korean Society and Public
Administration 16:81–104.
Lee, Seok-Hwan, and Dorothy Olshfski. 2002. Employee commitment and firefighters: It’s my job. Public
Administration Review 62:108–14.
Leisink, Peter, and Bram Steijn. 2009. Public service motivation and job performance of public sector
employees in the Netherlands. International Review of Administrative Sciences 75:35–52.
Liang, Huigang, Nilesh Saraf, Qing Hu, and Yajiong Xue. 2007. Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Quarterly 31:59–87.
Liden, Rober C., and John M. Maslyn. 1998. Multidimensionality of leader-member exchange: An
empirical assessment through scale development. Journal of Management 24:43–72.
Liu, Bangcheng, Ningyu Tang, and Xiaomei Zhu. 2008. Public service motivation and job satisfaction in
China: An investigation of generalisability and instrumentality. International Journal of Manpower
29:684–99.
MacCallum, Robert C., and Michael W. Browne. 1993. The use of causal indicators in covariance structure
models: Some practical issues. Psychological Bulletin 114:533–41.
MacKenzie, Scott B., Philip M. Podsakoff, and Cheryl Burke Jarvis. 2005. The problem of measurement
model misspecification in behavioral and organizational research and some recommended solutions.
Journal of Applied Psychology 90:710–30.
Moorman, Robert H., and Gerald L. Blakely. 1995. Individualism-collectivism as an individual difference
predictor of organizational citizenship behavior. Journal of Organizational Behavior 16:127–42.
Mosher, Frederick. 1968. Democracy and the public service. Oxford: Oxford Univ. Press.
Mowday, Richard T., Richard M. Steers, and Lyman W. Porter. 1979. The measurement of organizational
commitment. Journal of Vocational Behavior 14:224–47.
Moynihan, Donald P., and Sanjay K. Pandey. 2007a. The role of organizations in fostering public service
motivation. Public Administration Review 67:40–53.
———. 2007b. Finding workable levers over work motivation: Comparing job satisfaction, job in-
volvement, and organizational commitment. Administration & Society 39:803–32.
Naff, Katherine C., and John Crum. 1999. Working for America: Does PSM make a difference? Review of
Public Personnel Administration 19:5–16.
National Emergency Management Agency (NEMA), Korea. 2008. Data and statistics on fire adminis-
tration. Seoul: NEMA.
Pandey, Sanjay K., and Edmund C. Stazyk. 2008. Antecedents and correlates of public service motivation.
In Motivation in public management, ed. James L. Perry and Annie Hondeghem, 101–17. Oxford:
Oxford Univ. Press.
Pandey, Sanjay K., Bradley E. Wright, and Donald P. Moynihan. 2008. Public service motivation and
interpersonal citizenship behavior in public organizations: Testing a preliminary model. Interna-
tional Public Management Journal 11:89–108.
Park, Sung Min, and Hal G. Rainey. 2008. Leadership and public service motivation in U.S. federal
agencies. International Public Management Journal 11:109–42.
Perry, James L. 1996. Measuring public service motivation: An assessment of construct reliability and
validity. Journal of Public Administration Research and Theory 6:5–22.
———. 1997. Antecedents of public service motivation. Journal of Public Administration Research and
Theory 7:181–97.

Perry, James L., Jeffrey L. Brudney, David Coursey, and Laura Littlepage. 2008. What drives morally
committed citizens? A study of the antecedents of public service motivation. Public Administration
Review 68:445–58.
Perry, James L., and Annie Hondeghem. 2008. Editor’s introduction. In Motivation in public management,
ed. James L. Perry and Annie Hondeghem, 1–14. Oxford: Oxford Univ. Press.
Perry, James L., and Lyman W. Porter. 1982. Factors affecting the context for motivation in public
organizations. Academy of Management Review 7:89–98.
Perry, James L., and Lois R. Wise. 1990. The motivational bases of public service. Public Administration
Review 50:367–73.
Podsakoff, Philip M., Scott B. MacKenzie, Jeong-Yeon Lee, and Nathan P. Podsakoff. 2003a. Common
method biases in behavioral research: A critical review of the literature and recommended remedies.
Journal of Applied Psychology 88:879–903.
Podsakoff, Philip M., Scott B. MacKenzie, Nathan P. Podsakoff, and Jeong-Yeon Lee. 2003b. The
mismeasure of man(agement) and its implications for leadership research. Leadership Quarterly
14:615–56.
Rainey, Hal G., and Paula Steinbauer. 1999. Galloping elephants: Developing elements of a theory of
effective government organizations. Journal of Public Administration Research and Theory 9:1–32.



Ruiz, David Martin, Dwayne D. Gremler, Judith H. Washburn, and Gabriel Cepeda Carrión. 2008. Service
value revisited: Specifying a higher-order, formative measure. Journal of Business Research
61:1278–91.
Scott, Patrick G., and Sanjay K. Pandey. 2005. Red tape and public service motivation. Review of Public
Personnel Administration 25:155–80.
Spector, Paul E. 1992. Summated rating scales construction: An introduction. Newbury Park, CA: Sage.
Steijn, Bram. 2008. Person-environment fit and public service motivation. International Public Man-
agement Journal 11:13–27.
Taylor, Jeannette. 2007. The impact of public service motives on work outcomes in Australia: A com-
parative multi-dimensional analysis. Public Administration 85:931–59.
———. 2008. Organizational influence, public service motivation and work outcomes: An Australian
study. International Public Management Journal 11:67–88.
Vandenabeele, Wouter. 2008a. Government calling: Public service motivation as an element in selecting
government as an employer of choice. Public Administration 86:1089–105.
———. 2008b. Development of a public service motivation measurement scale: Corroborating and
extending Perry’s measurement instrument. International Public Management Journal 11:143–67.
Vandenabeele, Wouter, Sarah Scheepers, and Annie Hondeghem. 2006. Public service motivation in an
international comparative perspective: The UK and Germany. Public Policy and Administration
21:13–31.
Williams, Larry J., Joseph A. Cote, and M. Ronald Buckley. 1989. Lack of method variance in self-
reported affect and perceptions at work: Reality or artifact? Journal of Applied Psychology
74:462–8.
Williams, Larry J., Jeffrey R. Edwards, and Robert J. Vandenberg. 2003. Recent advances in causal
modeling methods for organizational and management research. Journal of Management 29:903–36.
Wright, Bradley E. 2008. Methodological challenges associated with public service motivation research.
In Motivation in public management, ed. James L. Perry and Annie Hondeghem, 80–98. Oxford:
Oxford Univ. Press.
Wright, Bradley E., and Sanjay K. Pandey. 2005. Exploring the nomological map of the public service
motivation concept. msx.
———. 2008. Public service motivation and the assumption of person-organization fit: Testing the
mediating effect of value congruence. Administration & Society 40:502–21.

