
FROM THE EDITORS: PUBLISHING IN "AMJ"—PART 2: RESEARCH DESIGN

Author(s): Joyce E. Bono and Gerry McNamara


Source: The Academy of Management Journal, Vol. 54, No. 4 (August 2011), pp. 657-660
Published by: Academy of Management
Stable URL: https://www.jstor.org/stable/23045105
Accessed: 06-12-2019 10:03 UTC


This content downloaded from 43.231.60.18 on Fri, 06 Dec 2019 10:03:11 UTC
All use subject to https://about.jstor.org/terms
© Academy of Management Journal
2011, Vol. 54, No. 4, 657-660.

FROM THE EDITORS

PUBLISHING IN AMJ—PART 2: RESEARCH DESIGN


Editor's Note:

This editorial continues a seven-part series, "Publishing in AMJ," in which the editors give suggestions and advice on
improving the quality of submissions to the Journal. The series offers "bumper to bumper" coverage, with installments
ranging from topic choice to crafting a Discussion section. The series will continue in October with "Part 3: Setting the
Hook." - J.A.C.

Most scholars, as part of their doctoral education, take a research methodology course in which they learn the basics of good research design, including that design should be driven by the questions being asked and that threats to validity should be avoided. For this reason, there is little novelty in our discussion of research design. Rather, we focus on common design issues that lead to rejected manuscripts at AMJ. The practical problem confronting researchers as they design studies is that (a) there are no hard and fast rules to apply; matching research design to research questions is as much art as science; and (b) external factors sometimes constrain researchers' ability to carry out optimal designs (McGrath, 1981). Access to organizations, the people in them, and rich data about them present a significant challenge for management scholars, but if such constraints become the central driver of design decisions, the outcome is a manuscript with many plausible alternative explanations for the results, which leads ultimately to rejection and the waste of considerable time, effort, and money. Choosing the appropriate design is critical to the success of a manuscript at AMJ, in part because the fundamental design of a study cannot be altered during the revision process. Decisions made during the research design process ultimately impact the degree of confidence readers can place in the conclusions drawn from a study, the degree to which the results provide a strong test of the researcher's arguments, and the degree to which alternative explanations can be discounted. In reviewing articles that have been rejected by AMJ during the past year, we identified three broad design problems that were common sources of rejection: (a) mismatch between research question and design, (b) measurement and operational issues (i.e., construct validity), and (c) inappropriate or incomplete model specification.

Matching Research Question and Design

Cross-sectional data. Use of cross-sectional data is a common cause of rejection at AMJ, of both micro and macro research. Rejection does not happen because such data are inherently flawed or because reviewers or editors are biased against such data. It happens because many (perhaps most) research questions in management implicitly—even if not framed as such—address issues of change. The problem with cross-sectional data is that they are mismatched with research questions that implicitly or explicitly deal with causality or change, strong tests of which require either measurement of some variable more than once, or manipulation of one variable that is subsequently linked to another. For example, research addressing such topics as the effects of changes in organizational leadership on a firm's investment patterns, the effects of CEO or TMT stock options on a firm's actions, or the effects of changes in industry structure on behavior implicitly addresses causality and change. Similarly, when researchers posit that managerial behavior affects employee motivation, that HR practices reduce turnover, or that gender stereotypes constrain the advancement of women managers, they are also implicitly testing change and thus cannot conduct adequate tests with cross-sectional data, regardless of whether that data was drawn from a pre-existing database or collected via an employee survey. Researchers simply cannot develop strong causal attributions with cross-sectional data, nor can they establish change, regardless of which analytical tools they use. Instead, longitudinal, panel, or experimental data are needed to make inferences about change or to establish strong causal inferences. For example, Nyberg, Fulmer, Gerhart, and Carpenter (2010) created a panel set of data and used fixed-effects regression to model the degree to which CEO-shareholder financial alignment influences future shareholder returns. This data structure allowed the researchers to control for cross-firm heterogeneity and appropriately model how changes in alignment within firms influenced shareholder returns.

Our point is not to denigrate the potential usefulness of cross-sectional data. Rather, we point out the importance of carefully matching research design to research question, so that a study or set of studies
is capable of testing the question of interest. Researchers should ask themselves during the design stage whether their underlying question can actually be answered with their chosen design. If the question involves change or causal associations between variables (any mediation study implies causal associations), cross-sectional data are a poor choice.

Inappropriate samples and procedures. Much organizational research, including that published in AMJ, uses convenience samples, simulated business situations, or artificial tasks. From a design standpoint, the issue is whether the sample and procedures are appropriate for the research question. Asking students with limited work experience to participate in experimental research in which they make executive selection decisions may not be an appropriate way to test the effects of gender stereotypes on reactions to male and female managers. But asking these same students to participate in a scenario-based experiment in which they select the manager they would prefer to work for may present a good fit between sample and research question. Illustrating this notion of matching research question with sample is a study on the valuation of equity-based pay in which Devers, Wiseman, and Holmes (2007) used a sample of executive MBA students, nearly all of whom had experience with contingent pay. The same care used in choosing a sample needs to be taken in matching procedures to research question. If a study involves an unfolding scenario wherein a subject makes a series of decisions over time, responding to feedback about these decisions, researchers will be well served by collecting data over time, rather than having a series of decision and feedback points contained in a single 45-minute laboratory session.

Our point is not to suggest that certain samples (e.g., executives or students) or procedures are inherently better than others. Indeed, at AMJ we explicitly encourage experimental research because it is an excellent way to address questions of causality, and we recognize that important questions—especially those that deal with psychological process—can often be answered equally well with university students or organizational employees (see AMJ's August 2008 From the Editors [vol. 51: 616-620]). What we ask of authors—whether their research occurs in the lab or the field—is that they match their sample and procedures to their research question and clearly make the case in their manuscript for why these sample or procedures are appropriate.

Measurement and Operationalization

Researchers often think of validity once they begin operationalizing constructs, but this may be too late. Prior to making operational decisions, an author developing a new construct must clearly articulate the definition and boundaries of the new construct, map its association with existing constructs, and avoid assumptions that scales with the same name reflect the same construct and that scales with different names reflect different constructs (i.e., jingle-jangle fallacies [Block, 1995]). Failure to define the core construct often leads to inconsistency in a manuscript. For example, in writing a paper, authors may initially focus on one construct, such as organizational legitimacy, but later couch the discussion in terms of a different but related construct, such as reputation or status. In such cases, reviewers are left without a clear understanding of the intended construct or its theoretical meaning. Although developing theory is not a specific component of research design, readers and reviewers of a manuscript should be able to clearly understand the conceptual meaning of a construct and see evidence that it has been appropriately measured.

Inappropriate adaptation of existing measures. A key challenge for researchers who collect field data is getting organizations and managers to comply, and survey length is frequently a point of concern. An easy way to reduce survey length is to eliminate items. Problems arise, however, when researchers pick and choose items from existing scales (or rewrite them to better reflect their unique context) without providing supporting validity evidence. There are several ways to address this problem. First, if a manuscript includes new (or substantially altered) measures, all the items should be included in the manuscript, typically in an appendix. This allows reviewers to examine the face validity of the new measures. Second, authors might include both measures (the original and the shortened versions) in a subsample or in an entirely different sample as a way of demonstrating high convergent validity between them. Even better would be including several other key variables in the nomological network, to demonstrate that the new or altered measure is related to other similar and dissimilar constructs.

Inappropriate application of existing measures. Another way to raise red flags with reviewers is to use existing measures to assess completely different constructs. We see this problem occurring particularly among users of large databases. For example, if prior studies have used an action such as change in format (e.g., by a restaurant) as a measure of strategic change, and a submitted paper uses this same action (change in format) as a measure of organizational search, we are left with little confidence that the authors have measured their intended construct. Given the cumulative and incremental nature of the research process, it is critical
that authors establish both the uniqueness of the new construct, how it relates to existing constructs, and the validity of their operationalization.

Common method variance. We see AMJ manuscripts in which data are not only cross-sectional, but are also assessed via a single source (e.g., a survey will have multiple predictor and criterion variables completed by a single respondent). Common method variance presents a challenge to the interpretation of observed correlations, as such correlations may be the result of systematic variance due to measurement method (e.g., rater effects, item effects, or context effects). Podsakoff, MacKenzie, Lee, and Podsakoff (2003) discussed common method variance in detail and also suggested ways to reduce its biasing effects (see also Conway & Lance, 2010).

Problems of measurement and operationalization of key variables in AMJ manuscripts have implications well beyond psychometrics. At a conceptual level, sloppy and imprecise definition and operationalization of key variables threaten the inferences that can be drawn from the research. If the nature and measurement of underlying constructs are not well established, a reader is left with little confidence that the authors have actually tested the model they propose, and reasonable reviewers can find multiple plausible interpretations for the results. As a practical matter, imprecise operational and conceptual definitions also make it difficult to quantitatively aggregate research findings across studies (i.e., to do meta-analysis).

Model Specification

One of the challenges of specifying a theoretical model is that it is practically not feasible to include every possible control variable and mediating process, because the relevant variables may not exist in the database being used, or because organizations constrain the length of surveys. Yet careful attention to the inclusion of key controls and mediating processes during the design stage can provide substantial payback during the review process.

Proper inclusion of control variables. The inclusion of appropriate controls allows researchers to draw more definitive conclusions from their studies. Research can err on the side of too few or too many controls. Control variables should meet three conditions for inclusion in a study (Becker, 2005; James, 1980). First, there is a strong expectation that the variable be correlated with the dependent variable owing to a clear theoretical tie or prior empirical research. Second, there is a strong expectation that the control variable be correlated with the hypothesized independent variable(s). Third, there is a logical reason that the control variable is not a more central variable in the study, either a hypothesized one or a mediator. If a variable meeting these three conditions is excluded from the study, the results may suffer from omitted variable bias. However, if control variables are included that don't meet these three tests, they may hamper the study by unnecessarily soaking up degrees of freedom or bias the findings related to the hypothesized variables (increasing either type I or type II error) (Becker, 2005). Thus, researchers should think carefully about the controls they include—being sure to include proper controls but excluding superfluous ones.

Operationalizing mediators. A unique characteristic of articles in AMJ is that they are expected to test, build, or extend theory, which often takes the form of explaining why a set of variables are related. But theory alone isn't enough; it is also important that mediating processes be tested empirically. The question of when mediators should be included in a model (and which mediators) needs to be addressed in the design stage. When an area of inquiry is new, the focus may be on establishing a causal link between two variables. But, once an association has been established, it becomes critical for researchers to clearly describe and measure the process by which variable A affects variable B. As an area of inquiry becomes more mature, multiple mediators may need to be included. For example, one strength of the transformational leadership literature is that many mediating processes have been studied (e.g., LMX [Kark, Shamir, & Chen, 2003; Pillai, Schriesheim, & Williams, 1999; Wang, Law, Hackett, Wang, & Chen, 2005]), but a weakness of this literature is that most of these mediators, even when they are conceptually related to each other, are studied in isolation. Typically, each is treated as if it is the unique process by which managerial actions influence employee attitudes and behavior, and other known mediators are not considered. Failing to assess known, and conceptually related, mediators makes it difficult for authors to convince reviewers that their contribution is a novel one.

Conclusion

Although research methodologies evolve over time, there has been little change in the fundamental principles of good research design: match your design to your question, match construct definition with operationalization, carefully specify your model, use measures with established construct validity or provide such evidence, choose samples and procedures that are appropriate to your unique
research question. The core problem with AMJ submissions rejected for design problems is not that they were well-designed studies that ran into problems during execution (though this undoubtedly happens); it is that the researchers made too many compromises at the design stage. Whether a researcher depends on existing databases, actively collects data in organizations, or conducts experimental research, compromises are a reality of the research process. The challenge is to not compromise too much (Kulka, 1981).

A pragmatic approach to research design starts with the assumption that most single-study designs are flawed in some way (with respect to validity). The best approach, then, to a strong research design may not lie in eliminating threats to validity (though they can certainly be reduced during the design process), but rather in conducting a series of studies. Each study in a series will have its own flaws, but together the studies may allow for stronger inferences and more generalizable results than would any single study on its own. In our view, multiple study and multiple sample designs are vastly underutilized in the organizational sciences and in AMJ submissions. We encourage researchers to consider the use of multiple studies or samples, each addressing flaws in the other. This can be done by combining field studies with laboratory experiments (e.g., Grant & Berry, 2011), or by testing multiple industry data sets to assess the robustness of findings (e.g., Beck, Bruderl, & Woywode, 2008). As noted in AMJ's "Information for Contributors," it is acceptable for multiple study manuscripts to exceed the 40-page guideline.

A large percentage of manuscripts submitted to AMJ that are either never sent out for review or that fare poorly in the review process (i.e., all three reviewers recommend rejection) have flawed designs, but manuscripts published in AMJ are not perfect. They sometimes have designs that cannot fully answer their underlying questions, sometimes use poorly validated measures, and sometimes have misspecified models. Addressing all possible threats to validity in each and every study would be impossibly complicated, and empirical research might never get conducted (Kulka, 1981). But honestly assessing threats to validity during the design stage of a research effort and taking steps to minimize them—either via improving a single study or conducting multiple studies—will substantially improve the potential for an ultimately positive outcome.

Joyce E. Bono
University of Florida

Gerry McNamara
Michigan State University

REFERENCES

Beck, N., Bruderl, J., & Woywode, M. 2008. Momentum or deceleration? Theoretical and methodological reflections on the analysis of organizational change. Academy of Management Journal, 51: 413-435.

Becker, T. E. 2005. Potential problems in the statistical control of variables in organizational research: A qualitative analysis with recommendations. Organizational Research Methods, 8: 274-289.

Block, J. 1995. A contrarian view of the five-factor approach to personality description. Psychological Bulletin, 117: 187-215.

Conway, J. M., & Lance, C. E. 2010. What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25: 325-334.

Devers, C. E., Wiseman, R. M., & Holmes, R. M. 2007. The effects of endowment and loss aversion in managerial stock option valuation. Academy of Management Journal, 50: 191-208.

Grant, A. M., & Berry, J. W. 2011. The necessity of others is the mother of invention: Intrinsic and prosocial motivations, perspective taking, and creativity. Academy of Management Journal, 54: 73-96.

James, L. R. 1980. The unmeasured variables problem in path analysis. Journal of Applied Psychology, 65: 415-421.

Kark, R., Shamir, B., & Chen, G. 2003. The two faces of transformational leadership: Empowerment and dependency. Journal of Applied Psychology, 88: 246-255.

Kulka, R. A. 1981. Idiosyncrasy and circumstance. American Behavioral Scientist, 25: 153-178.

McGrath, J. E. 1981. Introduction. American Behavioral Scientist, 25: 127-130.

Nyberg, A. J., Fulmer, I. S., Gerhart, B., & Carpenter, M. A. 2010. Agency theory revisited: CEO return and shareholder interest alignment. Academy of Management Journal, 53: 1029-1049.

Pillai, R., Schriesheim, C. A., & Williams, E. S. 1999. Fairness perceptions and trust as mediators for transformational and transactional leadership: A two-sample study. Journal of Management, 25: 897-933.

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88: 879-903.

Wang, H., Law, K. S., Hackett, R. D., Wang, D., & Chen, Z. X. 2005. Leader-member exchange as a mediator of the relationship between transformational leadership and followers' performance and organizational citizenship behavior. Academy of Management Journal, 48: 420-432.
