Dialog

Journal of Management Inquiry
1–7
© The Author(s) 2019
Reprints and permissions: sagepub.com/journalsPermissions.nav
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/1056492619889683
journals.sagepub.com/home/jmi

R & R Dialog: Why I Rejected Your R&R Submission and What You Could Have Done to Secure an Acceptance

William L. Gardner1

Abstract
Richard Daft's (1995) now classic chapter from Cummings and Frost's edited volume, Publishing in the Organizational Sciences, titled "Why I Recommended That Your Manuscript Be Rejected and What You Can Do About It," highlights the many reasons initial manuscripts are rejected. Daft presented the results of a content analysis he conducted on 111 manuscripts reviewed for Administrative Science Quarterly and the Academy of Management Journal to identify common manuscript problems. Following his lead, this manuscript presents a content analysis of manuscripts that were returned after a revise and resubmit decision granted by Group & Organization Management (GOM) and that ultimately received reject decisions. In effect, this manuscript, as part of the dialog on the revise and resubmit process, details the "worst practices" that authors commit when returning a revision.

Keywords
careers, mentoring, management education

1 Rawls College of Business, Texas Tech University, Lubbock, TX, USA

Corresponding Author:
William L. Gardner, Rawls College of Business, Texas Tech University, Area of Management, 703 Flint, Lubbock, TX 79409, USA.
Email: william.gardner@ttu.edu

Introduction

The collective wisdom of the R&R PDW panelists regarding "Do's and Don'ts" distilled in Professor Lester's opening piece to this dialog (Lester, in press) provides valuable "best practices" advice to scholars who receive an R&R and strive to achieve a successful revision. But what about the "worst practices"? What can we learn from authors who received an R&R, but failed to seal the deal and ultimately had their paper rejected? To help answer this question, I was inspired by Richard Daft's (1995) now classic chapter from Cummings and Frost's edited volume, Publishing in the Organizational Sciences, titled "Why I Recommended That Your Manuscript Be Rejected and What You Can Do About It." In this chapter, Daft presented the results of a content analysis he conducted on 111 manuscripts that he reviewed for Administrative Science Quarterly and the Academy of Management Journal to identify common manuscript problems that lead to his reject recommendations. Following his example, I conducted a content analysis of revised manuscripts that were submitted to Group & Organization Management (GOM) and ultimately received reject decisions.

Let me put this content analysis in context. Currently, I serve as GOM's Editor-in-Chief. My first three-year term began in July of 2014, and I opted for a second term that will conclude on December 31, 2020. For the purposes of this analysis, I will focus on R&Rs I issued as an action editor (AE) that eventually received a reject decision. The time frame begins with my appointment and extends through the 2018 calendar year. During this time period, GOM received 1,374 submissions. As editor, I decide if original submissions are desk rejected or assigned to one of the AEs. Over the review period, I desk rejected 647 of the new submissions (47%)—providing the corresponding author with a detailed explanation of my decision—and sent 727 manuscripts out for review; the assigned AEs assumed responsibility for the final editorial decisions. Of the reviewed manuscripts, I assigned 123 to myself, based on the fit of my expertise with the focal topic. Of the new submissions I was assigned, 60 were rejected and 63 received R&R decisions. Finally, of the 63 R&R decisions, 46 were ultimately accepted, and 17 were rejected—eleven after one revision, five after two, and one after four (see Note 1). The content analysis focuses on the 17 manuscripts that received R&Rs, but were subsequently rejected. The goal of this analysis is to determine what went wrong in these cases and to identify some of the "worst practices" that lead to unsuccessful revisions. To do so, I examined my editorial letters, plus all of the reviews, to identify the types of concerns the reviewers and I expressed.
Before I present the content analysis results, it is useful to consider the recommendations made by the reviewers for the rejected R&Rs to provide some context for my decisions. Note that the default number of reviewers for manuscripts submitted to GOM is three, but as a paper goes through the review process, some of the reviewers may drop out (i.e., not submit a review of the revision) or sign off (i.e., recommend acceptance) during a prior round, reducing the number of reviews. For the rejected R&Rs, here is a breakdown of the frequency of particular combinations of reviewer recommendations: (a) two rejects and one revision (five manuscripts); (b) two revise and one reject (three manuscripts); (c) two rejects (two manuscripts); (d) one reject and one revise (two manuscripts); (e) one reject and one accept (one manuscript); (f) two revise (one manuscript); (g) one reject (one manuscript); (h) one revise (one manuscript); and (i) three accepts, with a methods editor reject recommendation (one manuscript).

This analysis reveals several insights. First, consensus among the reviewers is rare, as noted in Lester (in press). Indeed, for the manuscripts that received at least two reviews for the final revision, only two reflected complete agreement, with one set recommending rejection and another recommending revision. Second, the most common combination involved two reject and one revise recommendation, reflecting consistency with my ultimate reject decision. Third, the next most common combinations at two each involved either two reject recommendations or an even split between reject and revise recommendations. Fourth, for two manuscripts, only a single review was provided in the final round, with one reject and one revise recommendation. Finally, there was one manuscript for which all three reviewers recommended acceptance, but when the manuscript was sent to the methods editor for a final methods check, a fatal flaw in the methods was revealed that led to a reject decision.

While this analysis is informative, keep in mind that my job as editor is not to count the reviewers' "votes", but to make a decision based on reviewer input and my own assessment of the rigor and potential contribution of the manuscript to the extant literature. Also, it is important to recognize that not all reviewer recommendations are of equal quality. For instance, one or more reviewers may have recommended acceptance or revision, while another detailed serious substantive and methodological issues that would have been extremely difficult for the authors to address. Under such circumstances, I gave more weight to the reject recommendation (coupled with my own assessment of manuscript quality), because I considered it unfair to all concerned to request an additional round of reviews, given the low probability that the ultimate decision would be favorable. As for the manuscript that was rejected based on the methods review, I communicated personally to the authors to express my regret that the fatal flaw was not identified earlier in the review process. However, I also noted that the methods review had served its purpose in that it prevented a manuscript with methodological limitations that yielded erroneous results from entering the literature. The authors replied that they understood the basis for the reject decision, and expressed gratitude that the manuscript was not published, as doing so avoided a potential retraction.

Problems Identified in Unsuccessful R&Rs

A summary of the results of Daft's (1995) content analysis and my own is provided in Table 1. For comparative purposes, I present Daft's problem categories first and his results in the initial three columns and my results in the final three columns. For two of Daft's problem categories—theoretical issues and inadequate research design—I specify subcategories in my own results to provide a more fine-grained analysis of the problems. Finally, I augmented Daft's categories with seven additional problem types that emerged in my analysis.

In the remainder of this manuscript, I present a comparative discussion of the results from Daft's and my content analyses for his original problem categories, followed by a discussion of the additional problem types I observed. One caveat to keep in mind is that my sample of 17 R&R manuscripts is much smaller than Daft's sample of 111 manuscripts, so the results are not necessarily representative of the types of issues that would emerge from a more comprehensive analysis of R&R reject decisions. Nonetheless, for our purposes of identifying some of the "worst practices" to avoid, they are informative and provide insights for scholars seeking to turn their R&Rs into accept decisions.
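As a reading aid for Table 1, the two percentage columns follow from simple arithmetic: "% of Problems" divides a category's count by the total number of coded problems (see the table notes), and "% of Manuscripts" divides the count by the number of manuscripts analyzed (111 for Daft, 17 for my sample). The sketch below is purely illustrative; the function name and the inference that each problem type is counted at most once per manuscript (so the same N feeds both ratios) are mine, not something stated in the article or in Daft's chapter.

```python
# Illustrative sketch only: the arithmetic behind the percentage columns
# in Table 1, not code used in either content analysis.

def table1_percentages(n_occurrences, total_problems, total_manuscripts):
    """Return (% of problems, % of manuscripts), each rounded to one decimal place."""
    pct_of_problems = round(100 * n_occurrences / total_problems, 1)
    pct_of_manuscripts = round(100 * n_occurrences / total_manuscripts, 1)
    return pct_of_problems, pct_of_manuscripts

# Daft's "Theoretical issues" row: N = 56, out of 258 coded problems (table note a)
# across the 111 manuscripts he reviewed.
print(table1_percentages(56, 258, 111))  # -> (21.7, 50.5)

# The Gardner columns use the same formula with 17 manuscripts; for example,
# the 15 theoretical-issue occurrences give 15/17, or 88.2% of manuscripts.
```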
Table 1. Problems Found in Manuscripts Reviewed.

Each row reports N, % of Problems, and % of Manuscripts for the Daft (1995) content analysis, followed by N, % of Problems, and % of Manuscripts for the Gardner content analysis. Subcategories (1a through 1f and 7a through 7h) and problem types 12 through 16 were coded only in the Gardner analysis.

1. Theoretical issues | Daft: 56, 21.7, 50.5 | Gardner: 15, 12.3, 88.2
   1a. Inadequate specification and/or rationale for research questions/hypotheses | Gardner: 11, 9.0, 64.7
   1b. Problems with research model (e.g., lack of focus, precision, oversimplification, omitted variables) | Gardner: 10, 8.2, 58.8
   1c. Omission/misrepresentation of relevant literature | Gardner: 9, 7.4, 52.9
   1d. Confusion regarding mediation and/or moderation | Gardner: 6, 4.9, 35.3
   1e. Level of analysis issues (theory) | Gardner: 3, 2.5, 17.6
   1f. Lack of an overarching framework | Gardner: 1, .8, 5.9
2. Concepts and operationalization not in alignment | Daft: 35, 13.6, 31.5 | Gardner: 11, 9.0, 64.7
3. Insufficient definition—Theory | Daft: 27, 10.5, 24.3 | Gardner: 8, 6.6, 47.1
4. Insufficient rationale—Design | Daft: 27, 10.5, 24.3 | Gardner: 7, 5.7, 41.2
5. Macrostructure—Organization and flow | Daft: 26, 10.1, 23.4 | Gardner: 11, 9.0, 64.7
6. Amateur style and tone/writing quality | Daft: 23, 8.9, 20.7 | Gardner: 8, 6.6, 47.1
7. Inadequate research design | Daft: 22, 8.5, 19.8 | Gardner: 15, 12.3, 88.2
   7a. Measurement issues (e.g., source of measures, validity, reliability, CFA/EFA, etc.) | Gardner: 13, 10.7, 76.5
   7b. Inappropriate and/or inadequate explanation of analysis | Gardner: 5, 4.1, 29.4
   7c. Levels of analysis issues (e.g., aggregation, lack of attention to nested data) | Gardner: 4, 3.3, 23.5
   7d. Control variable issues | Gardner: 3, 2.5, 17.6
   7e. Construct/measure redundancy | Gardner: 2, 1.6, 11.8
   7f. Causality/temporal issues (e.g., 'longitudinal' design is really time-lagged) | Gardner: 2, 1.6, 11.8
   7g. Common method variance (inadequately addressed) | Gardner: 2, 1.6, 11.8
   7h. Sampling issues (e.g., sampling design, lack of power) | Gardner: 2, 1.6, 11.8
8. Not relevant to the field | Daft: 20, 7.7, 18.0 | Gardner: 1, .8, 5.9
9. Over-engineering | Daft: 11, 4.3, 9.9 | Gardner: 0, 0.0, 0.0
10. Conclusions not in alignment | Daft: 6, 2.3, 5.4 | Gardner: 10, 8.2, 58.8
11. Cutting up the data | Daft: 5, 1.9, 4.5 | Gardner: 0, 0.0, 0.0

Additional Problem Types
12. Lack of responsiveness/success in addressing reviewer concerns | Gardner: 12, 9.8, 70.6
13. Inadequate contribution to literature | Gardner: 9, 7.4, 52.9
14. Inadequate articulation of research purpose/contribution | Gardner: 8, 6.6, 47.1
15. New concerns that emerged through revision process | Gardner: 7, 5.7, 41.2
16. Recommendation for additional data collection not followed | Gardner: 1, .8, 5.9

Note. a: N = 258 major problems in the 111 manuscripts. b: N = 129 major problems in the 17 manuscripts.

Theoretical Issues

The most common problem Daft identified involved a lack of theory, as it was found in over half of the manuscripts. Here, when Daft refers to theory, it is useful to consider Bacharach's (1989, p. 498) definition of theory as:

A statement of relationships between units observed or approximated in the empirical world. Approximated units mean constructs, which by their very nature cannot be observed directly (e.g., centralization, satisfaction, or culture). Observed units mean variables, which are operationalized empirically by measurement. The primary goal of a theory is to answer questions of how, when, and why, unlike the goal of description, which is to answer the question of what (italics in original).

In my analysis, theoretical issues were even more prevalent than they were in Daft's, as they were found in 88.2% of the manuscripts. As Daft (1995, p. 166) explains, "[t]heory provides the story that gives data meaning." So, for the vast majority of the rejected R&R manuscripts, the authors failed to provide a convincing story that reflects sound underlying theory.
More specific theoretical issues identified in my content analysis include, in decreasing order of occurrence: (a) inadequate specification and/or rationale for the research questions/hypotheses; (b) problems with the research model, such as a lack of focus, precision, oversimplification, and omitted variables; (c) omission and/or misrepresentation of the relevant literature; (d) confusion regarding mediation and/or moderation processes; (e) levels of analysis issues (e.g., unconvincing attempts to elevate individual-level constructs to higher levels of analysis; inadequate theoretical explication of posited multi-level relationships); and (f) a lack of an overarching theoretical framework.

Concepts and Operationalization Not in Alignment

The second most common problem type identified by Daft (1995) was found in 31.5% of the manuscripts, and involves a disconnect between the focal concepts identified by the authors and the operationalization of those concepts. This problem was even more common in my analysis, as it was identified in 64.7% of the rejected R&Rs. Most often, this disconnect involved levels of analysis issues, as constructs that were conceptualized at the group or higher levels of analysis were too often operationalized using individual-level measures without adequate justification for aggregation to the collective level. However, there were also instances where the authors simply did not make a convincing case for the construct validity of the methods used to operationalize focal constructs, be they measures or manipulations.

Insufficient Definition: Theory

The next most common problem identified by Daft (1995, p. 168) occurred in a quarter of the manuscripts he reviewed when "[a]uthors did not provide definition, explanation, or reasoning for some of their variables." As Bacharach's definition of theory indicates, constructs (i.e., concepts) are a fundamental component of theory. Insufficient definitions violate one of the key criteria Bacharach specifies for theory evaluation, falsifiability, as it is impossible to refute a theory for which constructs are so ill-defined that they cannot be operationalized. Here again, this problem was even more common in the rejected R&Rs, as it was found in nearly half of the manuscripts. Although a lack of concept definitions is common in manuscripts that I desk reject, I was surprised by its frequency in the R&Rs. Over the past couple of years, I've addressed this problem by referring authors to an article by Podsakoff, MacKenzie, and Podsakoff (2016), which provides recommendations for defining a new concept or revising an existing definition. In doing so, they make an important distinction between two basic types of concept structures: (1) necessary and sufficient; and (2) family resemblance. In the former, the definition specifies the attributes that must be present for a case to qualify as an example of the focal construct. In the latter, cases that reflect most, but not all, of the attributes associated with the concept may nonetheless be considered qualifying examples. So, I ask authors to first identify the key elements of their focal concepts, and then determine if these reflect necessary and sufficient criteria, and if so, specify how they came to that conclusion. Alternatively, do they conclude that these elements tend to be present, but are not necessarily required? Hence, when I find concepts are ill-defined in an original submission, I encourage authors to answer these questions, and then follow the subsequent steps Podsakoff and associates describe for advancing a concept definition. Unfortunately, when focal concepts remain ill-defined following a revision, this often leads to a reject decision.

Insufficient Rationale: Design

The next most common problem identified by Daft (1995) also occurred in nearly a quarter of the manuscripts and involved an insufficient explanation of study procedures. As Daft explains, this included "[s]imple things, like describing the sample, saying who completed the questionnaires, providing example questions from the questionnaire, and reporting means and standard deviations" (1995, p. 169). This was also a common problem with the rejected R&R manuscripts, as over 40% failed to adequately explain basic elements of the research design to the reviewers' satisfaction. Further, as Daft notes, this lack of transparency often contributed to the lack of alignment between theory and methods, making it impossible for the editorial team to determine if the method employed provided an adequate test of the research questions or hypotheses. Such transparency is essential because, without it, researchers cannot replicate the research—a fundamental requirement for science to advance (Kerlinger & Lee, 2000).

Macrostructure: Organization and Flow

"Macrostructure means whether the various parts of the paper fit together into a coherent whole. Microstructure pertains to the individual sentences and paragraphs, which are satisfactory in most papers" (Daft, 1995, p. 169). This problem occurred in 23.4% of the papers reviewed by Daft, and in 64.7% of the rejected R&R papers. Examples include the following: (a) a disconnect between the relationships advanced in the theory section and the conclusions drawn in the results section; (b) results that are intriguing but not related to the hypotheses; (c) the ad hoc introduction of a table or figure in the discussion section (when it belongs in the results section); (d) the discussion of theories and variables in the conclusions section that have not heretofore been presented; (e) an insufficient number of headings to help the reader follow the authors' arguments; (f) frequent and disruptive parenthetical statements or footnotes; and (g) dramatically exceeding page limits. Such lack of internal alignment across sections is a major red flag for reviewers and editors. Frequently, it is a focus of the feedback during the initial round of reviews. Authors who succeed in correcting such misalignments are much more likely to receive favorable editorial decisions, whereas continuing misalignment typically results in an R&R being rejected.
Here again, Daft (1995) provides valuable advice to authors, noting that "[s]cholars must make a special effort to visualize the entire paper—especially the interconnections among the parts—and be confident that they are effectively constructed before submitting the paper for publication" (p. 169).

"Amateur Style and Tone"/Writing Quality

Daft (1995) labeled this problem as "amateur style and tone," and it occurred in 20.7% of the manuscripts he reviewed. He noted that "[s]tyle and tone can signal that authors do not know what they are doing, that they are amateurs" (p. 170). In my content analysis of rejected R&Rs, this problem occurred much more frequently (47.1%); the issue was reflected at a more basic level through the quality of the writing, or lack thereof. Common reviewer complaints ranged from a high number of typos and grammatical errors, to turgid prose, to omitted references, to a lack of compliance with the APA style guidelines (American Psychological Association, 2009) that GOM follows. While such lack of professionalism was not fatal in and of itself, it contributed to reviewer frustration with the revision and brought into question the ability of the authors to competently complete other components of the research. Here the value of the "best practice" recommendation to attend to final details is readily apparent, as such relatively minor but annoying mistakes can sometimes sway reviewers who are on the fence towards a reject recommendation.

Inadequate Research Design

Research design problems occurred in about 20% of the manuscripts Daft (1995) reviewed, but in a full 88.2% of the rejected GOM revisions. In his chapter, Daft observed that theory problems were far more likely to result in reject recommendations than problems with the study design. In contrast, theory and design problems surfaced at a very high and equivalent rate in my analysis. Perhaps this is not surprising, given that authors who receive R&Rs, many of which involve "high risk revisions," are nearly always challenged to address theoretical and/or methodological issues. Moreover, when design problems become apparent, they are often fatal, particularly during the initial round of reviews. In the case of R&Rs, however, the reviewers may have been initially unsure about the severity of the design problems they observed, and hence, seeing promise in the manuscript, requested a revision. Unfortunately, for the R&Rs I rejected, upon learning more about the design, and/or seeing the authors' attempts to fix it, the reviewers and I frequently concluded the design flaws could not be or had not been adequately addressed, and hence the manuscript could not be salvaged.

In my content analysis, I drilled down further than Daft (1995) to pinpoint more specific design issues. These included the following: (a) validity and reliability concerns stemming from measurement issues (e.g., a failure to accurately identify the measure source; unjustified "adaptations" of established scales that involved dropping or modifying items without justification and/or validity evidence; and problems with the reporting of exploratory and confirmatory factor analyses); (b) statistical analyses that are either inappropriate for the research questions or inadequately explained; (c) levels of analysis issues (e.g., aggregation of individual-level data to collective levels without correctly conducting the requisite tests; and a failure to account for nested data using multi-level analysis); (d) control variable issues involving the omission, absence of justification, or selection of the wrong or an incomplete set of controls; (e) concerns about construct redundancy arising from the introduction of new measures for established constructs; (f) causality and temporal issues (e.g., making causal inferences from cross-sectional designs; and describing time-lagged studies with two waves of data collection as longitudinal); (g) problems arising in the interpretation of findings as a result of common method variance; and (h) sampling issues (e.g., potential non-response bias; and a lack of power). This is a fairly representative, but incomplete, list of "worst practices" arising from design and analytical issues. The key takeaway is this: if reviewers and the AE identify such issues as problematic, it behooves the authors to make a concerted effort to address them. To do otherwise dramatically increases the risk of disappointment, as a promising R&R turns into a frustrating reject decision.

Conclusions Not in Alignment

Daft (1995, p. 172) noted that this "problem occurs just often enough to be worth mentioning." While it was found in only 5.4% of the papers he reviewed, it arose in 58.8% of the rejected revisions. There are a variety of manifestations of this problem, including overgeneralizing the results beyond the population sampled, introducing new theories and constructs that were not included in the research model, failing to adequately recognize limitations, and insufficient attention to the practical implications of the findings. While it is important to consider future directions for research, the discussion section should not be too far removed from what Daft (1995) describes as the "operational base" of the manuscript—the underlying theory and hypotheses and the methods used to test them. Too often, this section is underdeveloped, as authors seem to be in a hurry to wrap up the paper and leave it to the reader to figure out the significance of the findings. As Daft notes, this is a mistake because the "conclusion section deserves as much attention as the theory, method, and results section, because the conclusion section explains what it all means" (p. 173).

Lack of Relevance, Overengineering, and Cutting up the Data
“Cutting up the data” (4.5%)—arose more often in the papers that the manuscript has promise, and hence the authors
he reviewed than in the rejected R&Rs, where there was only should be given an opportunity to make a case for the paper’s
one occurrence of a lack of relevance, and no instances of the contribution. For many of the rejected R&R’s, the authors
other problems. This finding makes sense, since I explicitly were unsuccessful in doing so. Given that this problem was
screen for the former and the latter issues in deciding whether identified as a key reason for rejection in over half of the
to desk reject a manuscript. However, I was surprised that unsuccessful revisions, it behooves authors to pay special
there were not more instances of “overengineering,” where attention to this feedback and make as compelling a case as
authors strive to impress the reviewers with sophisticated possible that their research is consequential and should be
analytical techniques until the “methodology became an end part of the extant literature.
in itself” (Daft, 1995, p. 172). Perhaps overengineering Another broad category of concerns involved new issues
resulted in reject decisions during the initial round of reviews, that emerged as part of the revision process. Indeed, for 41.2%
while the guidance provided by reviewers and me discour- of the rejected R&Rs, the reviewers and/or I explicitly stated
aged authors from pursuing this practice during the revision that the reject decision was at least in part attributable to new
phase. At any rate, Daft’s advice to avoid overengineering issues that were not apparent in the original submission. Such
remains timely, as it constitutes one of the “worst practices” issues are hardly surprising, in that there is always a risk that
that is more likely to confuse, rather than impress, reviewers. the authors’ efforts to address issues raised about the original
submission, will raise new concerns that may be as serious or
even more serious. Here again, the importance of the best
Additional Problem Types practice advice to actively revise the manuscript while at the
Beyond the issues identified by Daft (1995), several addi- same time using the response letter to explain the rationale for
tional types of problems emerged in my content analysis. Not the changes and anticipate new issues is apparent. In many
surprisingly, the most common problem that arose in 70.6% cases, such anticipation may serve to mitigate reviewer con-
of the revisions involved a perception by the reviewers that cerns, and at a minimum, give the authors a chance to address
the authors were either not responsive to their feedback, or the emergent issues in a subsequent revision.
unable to successfully address the issues they raised. Clearly, Finally, there was one manuscript for which the authors
this is a problem that is unique to an R&R, since reviewer were advised to collect additional data to address the short-
feedback is not available during the initial submission pro- comings of the original submission, but they declined to do so;
cess. Such an inference by the reviewers is very hard to over- a reject decision ensued. Here again, the wisdom of the best
come, as they often conclude that paper is a lost cause, given practice advice to always collect additional data, if possible,
the lack of improvement. Indeed, as an extreme example, I when such a recommendation is issued is readily apparent.
encountered one revision for which the authors decided not The basis for such a recommendation is to give the authors an
to submit a response letter, and instead provided an abstract opportunity to address a serious limitation which the review-
summary of the changes they made to the manuscript in their ers and editor conclude cannot be rectified with the existing
cover letter to the editor. Needless to say, this decision was research design and/or data. Not surprisingly, squandering this
not received well by the reviewers, who expressed confusion opportunity typically contributes to adverse reactions from the
and irritation, and unanimously responded with reject recom- review team and in predictable reject decisions.
mendations. This example and the larger problem of a lack of
responsiveness makes it clear why it is so important for
Conclusion
authors to follow the “best practice” recommendations to
“absorb the critiques,” “appraise the comments,” and My content analysis revealed that most of the problems that
“actively revise”, provided in Lester (in press), and to pay Daft (1995) identified were present and, in many cases, even
particular attention to crafting a thorough response letter that more prevalent in the rejected GOM R&Rs. At the same
is attentive to reviewer and editor feedback. To do otherwise time, some additional reasons for a reject decision were iden-
almost guarantees a rejection. tified, including a perceived lack of responsiveness to
Two additional and related problems involved an inade- reviewer concerns, inadequate contributions to the extant lit-
quate contribution to the literature and articulation of the erature and/or articulation thereof, new concerns that
research purpose/contribution, which were found in 52.9% emerged during the revision process, as well as a decision to
and 47.1% of the rejected R&Rs, respectively. During the ignore a request to collect additional data. Together, these
initial review process, it is very common for reviewers and content analyses revealed some of the “worst practices” that
editors to express concerns about the contribution of the contribute to reject decisions, including those issued to
manuscript and/or the authors’ articulation of the purpose of authors whose initial submissions were deemed to be prom-
the research. However, the issuance of an R&R signals that ising and worthy of a revision opportunity. Hopefully, aware-
the editor concluded, based in part on reviewer feedback, ness of what not to do in an R&R will help authors who are
Hopefully, awareness of what not to do in an R&R will help authors who are fortunate enough to receive one avoid the pitfalls of the process and secure the ultimate goal—an acceptance and an opportunity to make a value-added contribution to the body of knowledge in one's discipline.

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author received no financial support for the research, authorship, and/or publication of this article.

ORCID iD

William L. Gardner https://orcid.org/0000-0003-4694-9951

Note

1. Yes, this does happen – fortunately not very often!

References

American Psychological Association. (2009). Publication Manual of the American Psychological Association (6th ed.). Washington, D.C.: American Psychological Association.
Bacharach, S. B. (1989). Organizational theories: Some criteria for evaluation. Academy of Management Review, 14, 496–515.
Daft, R. L. (1995). Why I recommended that your manuscript be rejected and what you can do about it. In L. L. Cummings & P. J. Frost (Eds.), Publishing in the organizational sciences (pp. 164–182). Thousand Oaks, CA: SAGE Publications.
Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Wadsworth-Thomson Learning.
Lester, G. (in press). R & R dialog: Congratulations, you got a revise and resubmit! Now what? The impetus behind and lessons learned from a successful years-long PDW focused on the peer review revision process. Journal of Management Inquiry. doi:10.1177/1056492619882508
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2016). Recommendations for creating better concept definitions in the organizational, behavioral, and social sciences. Organizational Research Methods, 19, 159–203.
