Editorial Report

Who Submits Work to JMCQ and Why? A Demographic Profile

Watson and Riffe

Journalism & Mass Communication Quarterly
2014, Vol. 91(1), 5-16
© 2014 AEJMC
DOI: 10.1177/1077699013520195
Reprints and permissions: sagepub.com/journalsPermissions.nav
jmcq.sagepub.com

Corresponding Author:
Daniel Riffe, School of Journalism and Mass Communication, University of North Carolina-Chapel Hill, 383 Carroll Hall, Chapel Hill, NC 27599-3365, USA.
Email: driffe@email.unc.edu
comments that J/MC research is “too quantitative” and “based on social science meth-
odologies.” Surveying AEJMC members, Poindexter found women more likely than
men to rate “bias against methods” and “bias against topics” as threats to the peer
review process.10
Surveys in other disciplines have found that the strongest predictor of satisfaction is
simply whether one's latest manuscript was accepted.11 The authors of those surveys concluded that
peer review is seen as a “hurdle” rather than an opportunity to obtain advice and assis-
tance.12 This “hurdle vs. opportunity” issue has not been addressed within J/MC. Thus,
we pose three research questions:
RQ1: What is the demographic profile of authors submitting to Journalism & Mass
Communication Quarterly (JMCQ)?
RQ2: How do authors evaluate JMCQ’s peer review process, compared with mass
communication journals “generally”?
RQ3: What individual characteristics best predict satisfaction with the peer review
process?
Method
Design and Sample
Authors who had submitted at least one manuscript to JMCQ from 2005 to 2010 were
invited to complete a web-based survey about the review process for the last article
they submitted to JMCQ and about review processes of other mass communication
journals. A link was emailed in fall 2010 to 714 authors; five reminders yielded 377
(52.8%) responses (330 were fully completed). To secure human subjects approval, all
identifying information was removed once survey responses were received. No iden-
tifying information was ever shared with any journal personnel, a fact emphasized to
respondents.
Respondents indicated the decision (accept, reject, revise) on their last JMCQ sub-
mission, how many articles they submitted and had accepted by JMCQ and other mass
communication journals during 2005-2010, and their total career peer-reviewed pub-
lications (as sole or co-authors). They reported hours per week spent on research and
percentage of work effort devoted to research. Each identified a preferred research
approach: qualitative, quantitative, mixed-methods, or “other.” Finally, respondents
indicated their age, sex, years of experience in higher education, highest degree, aca-
demic rank, and tenure status.
To measure beliefs about the peer review process, respondents used a 7-point scale
(7 = strongly agree) with seventeen statements adapted from previous studies13 about
peer review (see Table 2 for wording and descriptive statistics), ranging from admin-
istrative processes to the substance of reviewer comments, to perceptions of bias and
whether reviewer comments were helpful in improving one’s work. Respondents com-
pleted the battery of seventeen items for “mass communication journals” before com-
pleting the same battery for JMCQ.
Results
RQ1 asked about the demographic profile of authors submitting to JMCQ. As Table 1
shows, the average respondent was forty-four and had spent nearly eleven years in
higher education. Fifty-six percent were men and most (87.6%) held doctorates.
Twenty-six (8.0%) were graduate students, more than half (51.9%) were associate or full
professors, and 52.8% were tenured. Respondents averaged 17.8 hours weekly on
research, which constituted about 38% of work effort. A plurality (44.4%) preferred
quantitative methods, compared with 38.3% mixed-methods and 14.8% qualitative
methods.
Respondents averaged 15.5 articles published in peer-reviewed journals across
their career, had submitted 9.9 articles to peer-reviewed journals in the past five years,
and had 7.5 accepted. Each respondent's five-year "success rate" was computed as the
number of manuscripts published divided by the number submitted; the sample-wide
average was 0.711 (SD = 0.272).
They had submitted an average of 1.8 articles to JMCQ in the past five years, of
which 0.70 had been accepted (average JMCQ “success rate” = 0.325, SD = 0.411).
The fact that half are at senior rank and tenured, and that all had submitted a manu-
script to JMCQ in the past five years, suggests that these are experienced scholars.
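As a concrete illustration, the "success rate" arithmetic described above (manuscripts accepted divided by manuscripts submitted, per respondent, then averaged across the sample) can be sketched in a few lines. The (submitted, accepted) counts below are invented for illustration, not the survey's data:

```python
# Sketch of the five-year "success rate": accepted / submitted per
# respondent, then averaged across the sample. Counts are invented.
from statistics import mean, stdev

# (submitted, accepted) pairs for a handful of hypothetical respondents
records = [(10, 8), (5, 3), (12, 9), (4, 4), (9, 5)]

# A respondent with zero submissions has an undefined rate, so such
# cases are excluded before dividing.
rates = [accepted / submitted for submitted, accepted in records if submitted > 0]

print(f"mean success rate = {mean(rates):.3f}, SD = {stdev(rates):.3f}")
```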
Table 1 data also reveal similarities, and some differences, between male and
female contributors. Notably, there are no statistically significant differences in
tenure status, rank (despite a greater percentage of female graduate students and
assistant professors), degree attainment, time (hours per week) and effort (percentage
of total) devoted to research, or preferred research approach. Generally speaking, male and female contributors to
JMCQ have been similarly active as researchers during the last five years (in terms of
similar numbers of articles submitted and published in JMCQ and in other peer-
reviewed journals).
Career-wise, however, men in the sample had significantly more total refereed
journal publications (17.5) than women (12.9), a “gender gap” in productivity that
mirrors age differences: female respondents were significantly younger than their
male counterparts and had significantly less academic experience. Judging from self-
reported data for this sample, then, any gender difference in research productivity is a
function of the age and greater accumulated experience of the males in the sample.
RQ2 asked how this sample of J/MC scholars evaluated JMCQ’s peer review pro-
cess. As shown in Table 2 data for the eleven satisfaction measures, authors rated
JMCQ most favorably in terms of politeness (M = 4.96/7, SD = 1.552) and clarity of
reviewers' comments (M = 4.75/7, SD = 1.501), but rated the journal's reviewing process
least favorably in terms of its contribution to subsequent scholarship (M = 3.98/7,
SD = 1.760).

Table 1. Descriptive Statistics on Academic Variables, Age and Years on Faculty, and
Research Activity, by Gender.

Based on an average across all eleven measures (i.e., Table 2's "Overall
Satisfaction”), authors are slightly more positive than negative in their view of the
journal’s peer review process (M = 4.47/7, SD = 1.54), with ten of the eleven compo-
nent items garnering mean scores above the midpoint.
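The "Overall Satisfaction" index used above, a per-respondent mean across the eleven satisfaction items, can be sketched as follows; the item responses are invented for illustration, with missing answers skipped per respondent:

```python
# Hypothetical sketch of an "Overall Satisfaction" composite: the mean of
# a respondent's answers on the eleven 7-point satisfaction items, ignoring
# items the respondent skipped. Responses below are invented.
def overall_satisfaction(item_scores):
    """Mean of answered items (1-7 scale); None marks a skipped item."""
    answered = [s for s in item_scores if s is not None]
    return sum(answered) / len(answered) if answered else None

respondent = [5, 4, 6, 5, None, 4, 5, 3, 5, 4, 6]  # eleven items, one skipped
print(overall_satisfaction(respondent))
```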
RQ2 also asked how JMCQ’s peer review process compared with other mass com-
munication journals’ processes. First, note that all eleven satisfaction items’ means for
mass communication journals were above the scale midpoint. As indicated in Table 2’s
Table 2. Peer Review Survey Items, and Descriptive Statistics and Percentage of
Agreement, by JMCQ Compared with Mass Communication Journals Generally (No. of
Cases = 315-322).
Table 3. Correlations (Pearson's r) between Key Peer Review Variables and Author
Characteristics (Pairwise Deletion Used; No. of Cases = 267-325).

Variable                                        1        2        3        4       5       6        7      8
1. "Overall Satisfaction" with JMCQ review      —
2. "Overall Bias" in JMCQ review              .531**    —
3. Satisfaction with time to complete review  .480**   .276**    —
4. JMCQ submission rejected                  −.478**  −.321**  −.284**    —
5. Tenure status                             −.089    −.087     .096   −.039     —
6. Hours per week on research                −.005     .034     .064   −.001  −.131*     —
7. Gender (female)                            .180**  −.019     .161** −.018  −.088   −.011     —
8. Non-quantitative preferred                −.161**  −.268**  −.059    .123*   .034   −.165**  −.005    —
female (β = .158, p < .01) and tenured (β = .118, p < .05) predicted greater satisfaction
with the time for the review. Being tenured in particular may make an individual less
anxious, and thus more patient, while awaiting a peer review decision.
Having the submission rejected predicted an additional 6.4% of the variance in
authors’ satisfaction with the review time. Authors whose manuscripts were rejected
after the initial review were significantly less satisfied with the time it took to receive
that decision (β = −.253, p < .001), perhaps reflecting a preference for receiving even
a negative decision quickly to move on and resubmit the manuscript elsewhere.
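The block-entry logic behind these regression models (individual characteristics entered first, then the rejection indicator, with each block credited the additional variance it explains) can be sketched with ordinary least squares on synthetic data. All variable names, coefficients, and data below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Invented 0/1 predictors standing in for the survey variables
female = rng.integers(0, 2, n).astype(float)
tenured = rng.integers(0, 2, n).astype(float)
rejected = rng.integers(0, 2, n).astype(float)

# Invented outcome: satisfaction with review time, pushed down by rejection
satisfaction = (4.5 + 0.4 * female + 0.3 * tenured
                - 0.9 * rejected + rng.normal(0.0, 1.0, n))

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit with an intercept column."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared([female, tenured], satisfaction)            # characteristics only
r2_block2 = r_squared([female, tenured, rejected], satisfaction)  # add rejection dummy
delta_r2 = r2_block2 - r2_block1                                  # variance added by rejection
print(f"block 1 R2 = {r2_block1:.3f}, delta R2 = {delta_r2:.3f}")
```

With a real dataset, a statistical package would also supply the standardized betas and significance tests reported in Table 4; this sketch isolates only the incremental-variance bookkeeping.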
The second regression model predicted 14.2% of variance in authors’ perceptions
that JMCQ’s peer review process is free of “Overall Bias.” Individual characteristics
predicted 5.5% of variance. Non-quantitative researchers were significantly less likely
to perceive the review process as unbiased (β = −.209, p < .001), a relationship that
merits elaboration. Non-quantitative authors in the sample were in fact more likely
to report having their last article rejected by JMCQ (60%) than were quantitative
authors (40%), but that difference was not statistically significant, χ2(1, N = 317) =
3.174, p > .05.
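A chi-square test of independence like the one reported above can be computed by hand from a 2×2 table of counts. The cell counts below are invented to show the mechanics and do not reproduce the study's actual frequencies:

```python
# Hand-computed chi-square test of independence on a hypothetical 2x2 table
# (rows: preferred approach; columns: last JMCQ decision). Counts invented.
observed = [[60, 40],    # non-quantitative: rejected, not rejected
            [90, 110]]   # quantitative:     rejected, not rejected

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected cell count under independence: (row total * column total) / n
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)
print(f"chi2(1, N = {n}) = {chi2:.3f}")  # df = (rows-1) * (cols-1) = 1
```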
However, if one uses t-tests to examine differences in computed “success rates”
over the past five years, quantitative researchers’ success rates for JMCQ are
Table 4. Regression Models for Author Satisfaction with JMCQ's Peer Review Process.
*p < .05. **p < .01. ***p < .001.

Table 5. Gender Differences in Satisfaction with the Peer Review Process by Whether
Submission to JMCQ Was Rejected (One-Way ANOVAs).
Note. All variables were scored on a 7-point Likert-type scale (1 = strongly disagree, 7 = strongly agree).
Because of unequal variances, Games-Howell post hoc tests were used to test for significant mean
differences between groups; common subscripts indicate significant between-group mean differences in
rows. JMCQ = Journalism & Mass Communication Quarterly; ANOVA = analysis of variance.
*Omnibus F significant at p ≤ .001.
JMCQ peer review process nonetheless improved their current manuscript (female,
M = 3.93; male, M = 3.14) and their subsequent work (female, M = 3.78; male, M = 3.08).
Conclusion
This study found that authors who have submitted to JMCQ in the past five years had
slightly more positive than negative attitudes toward the journal’s peer review process,
but that the review process was rated less positively for helping improve authors’ cur-
rent manuscripts or subsequent work. Data suggest that JMCQ’s peer review process
is not seen as particularly better or worse than the processes of other mass communica-
tion journals, except in two regards: as noted, JMCQ reviews are rated as somewhat
less helpful, but the journal’s reviews are perceived as clearer, more polite and profes-
sional, and certainly faster.
The regression analyses also show that whether an author’s previous submission to
JMCQ was rejected was consistently one of the strongest predictors of satisfaction
with the peer review process. Again, this finding raises troubling questions about the
perceptions of peer review as a hurdle rather than a means to improve one’s
scholarship.14
Rejection also related to perception of bias. Non-quantitative researchers were less
likely to perceive JMCQ’s peer review process as unbiased, and non-quantitative
researchers had significantly lower rates of success publishing in JMCQ, but signifi-
cantly higher rates of publishing success in other mass communication journals.
Women, however, actually rated the peer review process more positively, even
when the editorial decision was negative. Women who had their articles rejected were
significantly more satisfied with the peer review process and more likely to credit it
with a positive impact on current and subsequent research than were men who had
their last article rejected.
These gender differences are consistent with previous research on women's responses
to evaluation. That research suggests that women internalize negative feedback (i.e.,
view a negative review as reflecting weaknesses in one's own manuscript) rather than
blaming others (i.e., reviewers) or external factors or processes,15 are more eager to
use negative evaluative feedback to improve future performance,16 and, after receiving
negative feedback, are more likely than men to improve subsequent performance.17
Of course, the results of this study cannot be generalized beyond the survey’s
respondents. The survey focused on authors who had submitted to a single journal,
and the authors were not randomly selected. Nonetheless, the data may be instructive
for those who review for mass communication journals and those who make reviewer
assignments. The data may also provide some context for authors submitting their
work for peer review. Furthermore, while this study was not set up as a theoretical
test of gender differences in response to evaluative feedback, it does suggest some
avenues for future research of potentially significant gender differences beyond con-
cerns of bias.
Authors’ Note
This editorial report was not peer-reviewed. However, a longer version of the survey results was
reviewed and presented at a session sponsored by the Commission on the Status of Women at
the AEJMC conference in Chicago in August 2012.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.
Notes
1. Lutz Bornmann and Hans-Dieter Daniel, “The Effectiveness of the Peer Review Process:
Inter-Referee Agreement and Predictive Validity of Manuscript Refereeing at Angewandte
Chemie,” Angewandte Chemie 47 (38, 2008): 7173-78; Peter M. Rothwell and Christopher
N. Martyn, “Reproducibility of Peer Review in Clinical Neuroscience: Is Agreement
between Reviewers Any Greater than Would Be Expected by Chance Alone?” Brain 123
(9, 2000): 1964-69.
2. Sara Schroter, Nick Black, Stephen Evans, Fiona Godlee, Lyda Osorio, and Richard Smith,
“What Errors Do Peer Reviewers Detect, and Does Training Improve Their Ability to
Detect Them?” Journal of the Royal Society of Medicine 101 (10, 2008): 507-14.
3. Mark Henderson, “Problems with Peer Review,” British Medical Journal 340 (2010):
c1409.
4. Gwendolyn B. Emerson, Winston J. Warme, Frederic W. Wolf, James D. Heckman,
Richard A. Brand, and Seth S. Leopold, "Testing for the Presence of Positive-Outcome Bias
in Peer Review: A Randomized Controlled Trial,” Archives of Internal Medicine 170 (21,
2010): 1934-39; David Shatz, Peer Review: A Critical Inquiry (Lanham, MD: Rowman &
Littlefield, 2004).
5. Ann C. Weller, Editorial Peer Review: Its Strengths and Weaknesses (Medford, NJ:
Information Today, 2001).
6. Nicholas H. Wolfinger, Mary Ann Mason, and Marc Goulden, "Problems in the Pipeline:
Gender, Marriage, and Fertility in the Ivory Tower,” The Journal of Higher Education 79
(5, 2008): 388-405.
7. Laura Padilla-González, Amy S. Metcalfe, Jesús F. Galaz-Fontes, Donald Fisher,
and Iain Snee, “Gender Gaps in North American Research Productivity: Examining
Faculty Publication Rates in Mexico, Canada, and the U.S.,” Compare: A Journal of
Comparative and International Education 41 (5, 2011): 649-68; Erin Leahey, “Gender
Differences in Productivity: Research Specialization as Missing Link,” Gender &
Society 20 (6, 2006): 754-80; Yu Xie and Kimberlee A. Shauman, “Sex Differences
in Research Productivity Revisited: New Evidence about an Old Puzzle,” American
Sociological Review 63 (6, 1998): 847-70.
8. Michael Ryan, “Evaluating Scholarly Manuscripts in Journalism and Communications,”
Journalism Quarterly 59 (2, 1982): 273-85.
9. Larry Z. Leslie, “Peer Review Practices of Mass Communication Scholarly Journals,”
Evaluation Review 14 (2, 1990): 151-65.
10. Paula Poindexter, “What’s Right and What’s Wrong with the Reviewing Process: AEJMC
Members Evaluate Peer and Tenure Review” (paper presented at the Meeting of the
Association for Education in Journalism and Mass Communication, San Francisco, CA,
August 2006).
11. Bobbie J. Sweitzer and David J. Cullen, “How Well Does a Journal’s Peer Review Process
Function?" Journal of the American Medical Association 272 (2, 1994): 152-53; Ellen J.
Weber, Patricia P. Katz, Joseph F. Waeckerle, and Michael L. Callaham, “Impact of Review
Quality and Acceptance on Satisfaction," Journal of the American Medical Association 287
(21, 2002): 2790-93.
12. Sweitzer and Cullen, “How Well Does a Journal’s Peer Review Process Function?”
13. Poindexter, “What’s Right and What’s Wrong with the Reviewing Process”; Paula
Poindexter, “An Examination of AEJMC Member Perceptions of the Integrity of the
Competitive Paper Review Process” (paper presented at the Meeting of the Association for
Education in Journalism and Mass Communication, Boston, MA, August 2009); Sweitzer
and Cullen, “How Well Does a Journal’s Peer Review Process Function?”; Weber et
al., “Impact of Review Quality and Acceptance on Satisfaction”; Leslie, “Peer Review
Practices of Mass Communication Scholarly Journals.”
14. Sweitzer and Cullen, “How Well Does a Journal’s Peer Review Process Function?”
15. Angela J. Hirshy and Joseph R. Morris, “Individual Differences in Attributional Style:
The Relational Influence of Role Self-Efficacy, Self-Esteem, and Sex Role Identity,”
Personality and Individual Differences 32 (2, 2002): 183-96.
16. Maria Johnson and Vicki S. Helgeson, “Sex Differences in Response to Evaluative
Feedback: A Field Study,” Psychology of Women Quarterly 26 (3, 2002): 242-51.
17. Deidra J. Schleicher, Chad H. Van Iddekinge, Frederick P. Morgeson, and Michael A.
Campion, "If at First You Don't Succeed, Try, Try Again: Understanding Race, Age, and
Gender Differences in Retesting Score Improvement,” Journal of Applied Psychology 95
(4, 2010): 603-17.