

Journal of Business Research xxx (2015) xxx–xxx



Heresies and sacred cows in scholarly marketing publications☆

Barry J. Babin a,⁎, Mitch Griffin b,1, Joseph F. Hair Jr. c

a Max P. Watson Jr., College of Business Research, Louisiana Tech University, Ruston, LA 71227, USA
b Department of Marketing, Bradley University, Peoria, IL 61625, USA
c Department of Marketing, Kennesaw State University, Kennesaw, GA 30144, USA

Article history: Received 1 April 2015; Received in revised form 1 September 2015; Accepted 1 September 2015; Available online xxxx

Keywords: Heresy; Knowledge; Journals; Method; Marketing Research

Abstract: Merriam-Webster defines heresy as "dissent or deviation from a dominant theory, opinion, or practice." This Journal of Business Research special issue and editorial examine heresies and sacred cows in marketing research. Seven papers investigate different aspects of typical academic business journal presentations. Each manuscript critically analyzes generally accepted practices in the pursuit of publication in academic journals and reveals ways these practices may do more harm than good, hindering the goal of presenting true growth of knowledge through publication. The editorial provides an integrative schema for the manuscripts in the special issue. By providing a series of broader topics to tie the papers together, the special issue illustrates how the findings of each study can help improve our pursuit of knowledge. In addition, the editorial discusses heresies and sacred cows not covered by manuscripts in the current issue. The editorial concludes with recommendations for both authors and reviewers that may enhance the approach to research, the methodologies employed, and the reporting of scholarly research.

© 2015 Elsevier Inc. All rights reserved.

☆ The authors thank O.C. Ferrell, Arch G. Woodside, and Marko Sarstedt for helpful comments on this editorial. In addition, the authors thank Nina Krey and Christian Bushardt for help in editing the manuscript.
⁎ Corresponding author. Tel.: +1 318 257 4012. E-mail addresses: bbabin@latech.edu (B.J. Babin), mg@bradley.edu (M. Griffin), Jfhair3@kennesaw.edu (J.F. Hair).
1 Tel.: +1 309 677 2287.
http://dx.doi.org/10.1016/j.jbusres.2015.12.001
0148-2963/© 2015 Elsevier Inc. All rights reserved.

Marketing academics, like those in other disciplines, conduct research motivated by a desire to publish reports of their research in academic journals. Herein lies a potential dilemma. The demands of maintaining the highest standards in research may conflict with the norms of the publication process. When senior faculty train doctoral students and mentor junior faculty, this dilemma evidences itself when an emphasis on "playing the game" of publication takes precedence over presenting a descriptive account of the research in a meaningful way. Sometimes, this process plays out subtly, such as through suggestions of building a paper's reference list with an eye toward flattering particular members of a journal's editorial review board (ERB) who might review the manuscript. Other times, academics may withhold a preponderance of evidence to emphasize primarily desirable results; in other words, those results that are consistent with the author's enlightened predictions. Taken to the extreme, study data and/or results may be, implicitly or otherwise, a work of fiction designed to present what the author perceives to be the received view (Enserink, 2012). The pressures and desires to become an author in a noted journal are strong and, after all, researchers often have a lot of confidence in their theories (Stapel, 2012). Thus, they may feel justified in writing a narrative instead of a report.

A cursory look at today's journals suggests a dogmatic presence in the manner in which authors present research. For example, is there a specific way that authors must present an empirical marketing research article to survive the review process and eventually end up in a journal's pages? Clearly, authors must write well and present discipline-relevant topics. However, must papers stay generally within well-defined boundaries and styles of presentation to appease reviewers and editors? Must an article first present some deductively driven hypotheses based on an already known and named theory, followed by an empirical test that presents results corroborating those same hypotheses? Must articles generally follow current precedents and "generally accepted" procedures? Must articles use fashionable theories, typically derived outside the discipline, at the expense of discovering new theories from within marketing? Finally, must researchers employ trendy methods and analytical tools even if they possess little understanding of their actual relevance, precision, or appropriateness?

The articles in this special issue examine themes related to current trends in marketing academic articles. In this essay, the special issue editors provide a brief overview of each paper and its contribution. In doing so, they call attention to just a few of the common practices and procedures applied in the academic marketing literature, with an eye toward understanding those that may constitute "sacred cows" more than vehicles for building a better and more meaningful literature. In a
classic commentary on the marketing literature nearly 35 years ago, Peter & Olson (1983, p. 116) argue not that marketing is science, but "science is marketing." Interestingly, when interviewed about the motivation for fabricating study results as a basis for academic journal publications, Stapel expresses the same notion:

It was a quest for aesthetics, for beauty—instead of the truth… it's hard to know the truth… you need grants, you need money… science is of course about discovering the truth, about digging to find the truth… but it is also communication, persuasion, marketing. (Stapel, quoted in an interview by Bhattacharjee, 2013, p. MM44).

If a journal article reports truth objectively, the prose should avoid falling into a narrative. However, when some passionate motivation slips in and interferes with the objective presentation of study results, the line between fiction and nonfiction may blur. Thus, when research is presented in a manner other than that in which it is conducted, when it is presented more with an eye toward passing review than simple description, when the meaning becomes obscured by technical complexity intended to impress reviewers, when trivial (albeit perhaps statistically significant) effects are presented as important, when data are portrayed or treated with more precision than the research method allows, when hypotheses are conceptualized post hoc based on study results, and when results that do not present the "desired" outcomes are not reported, articles stray from the basic presentation of the truth. The narrative that the authors might have written without any research remains uninterrupted.

Hunt (2010, p. 306) presents a thorough discussion of various ways that research can come to present fiction. He portrays a continuum between truth and TRUTH, with the former being an objective representation of reality and the latter being the TRUTH that MUST exist. Hunt uses the table to illustrate how dogmatic philosophies result in a lack of regard for reality. The mere existence of a continuum implies that the difference between truth and TRUTH is not always so clear. Even if not dogmatic in philosophy, does academic research slip into dogma in presentation? When papers are written with the idea in mind that they MUST contain things (such as carefully deduced hypotheses, specific analytical tools regardless of applicability, statistically significant results, supported hypotheses, etc.) or MUST be written in certain ways (deductively testing in the context of justification), the literature risks becoming in part or perhaps fully TRUTH rather than truth. Fig. 1 depicts this notion graphically.

Below are a series of assumptions about the publication standards for papers that appear in marketing journals. The articles in this special issue critically examine these widely accepted beliefs to see if in fact they represent appropriate standards for scientific discovery or whether they may be sacred cows of the discipline.

1. Journal articles should be deductive

Researchers may not like to consider themselves bumblers, but Alba (2012) points to bumbling as the way most scientific breakthroughs occur. In fact, Alba (2012) reminds the marketing and consumer research academy of the simple admonition that scientists advance knowledge more often by discovering an unexpected regularity than by testing an expected regularity. Given that scientific progress lies in discovery, it is somewhat ironic that the academic literature, and the ubiquitous review process, often work to stifle discovery, so much so that even academics with a distinguished publishing career may find it difficult to answer the "what did you discover" question (Armstrong, 2003).

In the typical marketing article, the introduction section precedes a section on "conceptual" or "theoretical" development, followed by a section describing an empirical study. In some papers, the author(s) may refer to their work as exploratory, simply as a way of lessening the criticism of problems with generalizability or other shortcomings of the methodology. In the end, marketing authors sacrifice discovery in favor of justification, although the scientific method clearly requires attention to both (Hunt, 2010). The lack of theories originating in marketing may be one price paid for the affinity of the academy toward the hypothetico-deductive presentation of research. Authors perceive reviewers as much more comfortable with some famous theory based in another discipline than with the derivations that would reflect a theory born in marketing.

In this issue, Daugherty, Hoffman, and Kennedy (2015) point to the dearth of reports of inductive studies in the marketing literature, in contrast to the "hard sciences" or applied fields like medicine. They illustrate an inductive orientation by applying a "reverse approach" employing neurological measures to illustrate differences in brain

[Fig. 1 depicts a continuum from truth to TRUTH, listing practices that increasingly distort scholarly research: reporting results based on statistical significance or ability to support hypotheses; employing analytical techniques that are not understood, not called for, or used without consideration of precision; selecting methodologies and analytical techniques aimed first at appealing to reviewers; building conceptualizations after considering study results; failing to call attention to, or even report, study nonresults; employing irrelevant data; plagiarism; improper measurement; reporting tests that were not conducted; zealous data cleaning; data "tricking" (forcing a desired result through slight manipulations); and fabrication.]

Fig. 1. Some issues affecting the truth of scholarly research.


activity among consumers across different product types. The brain activity may well aid in explaining differences in reactions to varying ad appeals, including differences in memory across advertisements. The authors call for greater acceptance of discovery-oriented research and inductive processes as acceptable methods of discovery. Although true exploratory research may sometimes be judged as lacking rigor, relying purely on deduction needlessly dismisses all other methods of discovery. Further, in an era of big data, practical marketing researchers rely heavily on inductive processes for meaningful discovery. Perhaps not as much of our research needs to follow the dogma of deductively contrived and statistically supported hypotheses in order to render a contribution to the discipline.

2. Journal articles should prioritize internal validity

Following the noteworthy debate between the merits of internal and external validity appearing in the Journal of Consumer Research (for example, see Calder, Phillips, & Tybout, 1981, 1983; Lynch, 1983), the idea of choosing a sample with the purpose of testing an hypothesis rigorously with the aim toward falsification, as opposed to showing one might be true in any circumstance, seems to have lost out. Seldom in marketing and consumer research articles today do authors give any explicit mention of to whom the inferences made in their research apply. Yet, inferences are ubiquitous. In fact, authors give data quality little attention in most articles, with the exception of references to standard citations to allay fears (perhaps "crocodile fears") of nonresponse bias or of common methods bias. As a consequence, authors report complicated tests and draw countless statistical inferences based on samples of which the reader knows very little, much less what population the sample represents. Thus, the risk of practical irrelevance looms large.

One can often find disclaimers in research that the authors do not intend the results to generalize and thus any convenience sample should suffice. One could ask at that point whether the word sample really applies. If no generalization arises, does a sample exist? In this issue, two papers address issues related to data quality in the sense of meaningfulness based on deriving data from convenience samples. Espinosa and Ortinau (2015) present data showing the high percentage of articles employing student samples in the discipline's top journals, along with the percentage of papers that acknowledge the lack of generalizability as a result. A minority of published articles in their review note student samples as a limitation. Only in the Journal of Business Research did more than half of the papers with student samples note their use as a possible limitation on the research findings. In addition, their research casts doubt on the generalizability of student research to other populations, even in common consumer contexts like restaurants. Clearly, practical results require greater attention to the relevance of the population to which samples generalize.

Amazon's mTurk provides a mechanism to crowdsource respondents and subjects by making often trivial payments per completed hit. Smith, Roster, Golden, and Albaum (2015) take an in-depth look at data quality issues from mTurk samples. In contrast to research from some years ago, they demonstrate that the typical respondent to an mTurk survey response request is Asian and likely resides outside of the U.S.A. Additionally, in contrast to well-managed panels, one can never be certain exactly who is responding to a request. Their research demonstrates that mTurk respondents tend to be "speeders" relative to panel respondents, meaning they work their way through requests at very fast rates. In general, mTurk data, particularly those responses obtained by non-U.S.A. workers, are of relatively poor quality. Most astoundingly, mTurk participants average between 10 and 17 survey responses per week, compared to 3.2 for panel respondents. Thus, the experience effect looms large among mTurk respondents, making them a particularly poor choice for subjects in marketing and consumer experiments, as they likely are more prone to pick up on cues indicating how they should respond in order to obtain the payment. While researchers may call student subjects into question because of experience-effect concerns, students are not doing near this number of surveys. In fact, few are likely to do 17 in an entire academic year, much less in a week. The experience effect in crowdsourced data, particularly in marketing and consumer experiments, should be a serious concern.

In addition, mTurk data require the most "data cleaning" in general. One of the greatest risks of mTurk-type research participants in general, beyond their nonnaiveté, resides in the fact that the same researcher that does the cleaning is likely the one that tests hypotheses. Thus, the subtle, or not so subtle, temptation is to clean the data in a manner that shapes support for hypotheses (Chandler, Mueller, & Paolacci, 2014). In comparison, a professionally managed panel offers a huge advantage in terms of objectivity. High-quality panels often provide data cleaning as a service. A panel associate removes respondents who are flat-lining or otherwise failing attention filters from the sample prior to making the data available to the researcher, and therefore prior to any data analysis related to hypothesis testing. The panel conducts these services independent of researchers' aims. Unless the researcher provides a panel employee with a clue, the panel employee who does the cleaning is blind to the purpose of the research. Thus, a recommended practice is that the researcher involved in the study should not perform the data cleaning; a panel often provides such services. Further, any data involving the "cleaning" of large percentages of respondents creates potentially more questions about nonresponse bias than do late respondents in mail surveys.

3. Journal articles should follow current trends

Authors are particularly susceptible to following trends (or fads?) in research approaches as reported in journal articles. Three current trends in survey measurement include researchers' use of single-item measures, employment of formative measures, and extended descriptions of sometimes complicated assessments of common "variance" resulting from the use of a single measurement approach. Three articles in the special issue address these topics.

Can a single item measure a doubly concrete construct? Sarstedt, Diamantopoulos, Salzberger, and Baumgartner (2015) address this question. Their results suggest caution in using single-item measures. Indeed, even with constructs that seem relatively simple, such as attitude toward the ad, techniques involving expert opinion and other methods fail to converge on evidence pointing to which of multiple items is the best for representing that concept. Furthermore, identifying the best item through other methods requires measurement with multiple items, thus defeating the purpose of single-item measures with respect to greater simplicity.

For the past few years, the marketing literature shows a trend toward increasing numbers of formative measurement operationalizations. Indeed, researchers sometimes criticize older literature for treating things as reflective that should be formative (MacCallum & Browne, 1993; Diamantopoulos & Siguaw, 2006). Wilcox, Howell, and Breivik (2008) demonstrate the fallacy in thinking of concepts as formative or reflective and illustrate how a scale might measure a concept either way depending on context or instructions. In this issue, Chang, Franke, and Lee (2015) examine the potential for bias across reflective and formative measurement specifications. Their simulation results demonstrate greater bias in parameter estimates from formative specifications and signal the greater robustness of reflective specifications. Thus, with any ambiguity in the nature of a scale, less bias will occur with reflective operationalization.

Measurement is and will remain critically important, and measurement problems will continue to be a source of rejection from publication. Although one can certainly conduct statistical analyses on and with poor measures, the meaningfulness of any theory derived, theory test, or hypothesis examination becomes illusory with deficient measures. Thus, a battery of measures organized into a meaningful structure is every bit as much a "theory" as is a set of relationships between constructs. As such, just as much care and attention should go into deriving and examining the measurement theory. Sajtos and Magyar (2015)


remind researchers that the auxiliary theory, being the one that brings concepts to life for purposes of further analysis, requires the utmost rigor. The habit of casually selecting scales without a rigorous examination of item content works against good scientific practice. The failure to adapt a scale to a context and time works against a well-structured auxiliary theory. Dropping items for empirical reasons without due attention to how the meaning of the concept changes with these deletions also represents poor practice and disregard for the auxiliary theory. A scale should be eponymous in the sense that the concept measured should be explicitly evident among the item battery. The auxiliary theory merits testing as a whole, just as a structural theory merits testing as a whole.

All research contains problems and design questions with no perfect answers. From a research standpoint, the researcher needs to possess full awareness of the potential sources for error. It is important that the researcher understands the level of precision present in any measurement. From a publishing standpoint, the author may work around potential error problems and acknowledge the most obvious (like the use of student samples). Likewise, while procedures may allow no more than two significant digits, authors often report results to three, four, or more significant digits (reporting whatever the software provides), sending a signal of false precision. The researcher often takes precedent from the literature without careful consideration. A good example would be the standard reference to Armstrong & Overton (1977) to allay criticism of "nonresponse" bias through some check of "late" rather than "non" respondents. The author aimed at publishing success uses precedent blindly to cover inadequacies in the research. Researcher and author alike should give issues related to basic data quality due respect and priority. This due respect and priority results from simple presentation of basic descriptive statistics.

The early principal components analysis and factor analysis literature is replete with issues related to response style and halo effects (Hotelling, 1933; Crosby & Stephens, 1987; Oliver & Bearden, 1985). When such a concern is present, statisticians recommend some form of ipsatization to remove any potential nuisance or response-style effect (Cattell, 1966; Johnston, 1973). Outside of personality research, authors seldom report employing these simple steps today. In fact, basic facts about data go unmentioned, so much so that papers often do not report the basic descriptive statistics for prescient variables. Thus, if some variable is in fact a near constant, the reader may never know. The best solutions to such data problems typically lie before collecting data rather than after.

In recent years, submissions to marketing journals routinely include sections to allay the fear of one potential problem, the bias associated with the fact that respondents recorded responses with only one type of scale. Fuller, Dickerson, Atinc, Atinc, and Babin (2015) examine this issue. Upon reviewing JBR articles mentioning procedures to assess common methods variance, among articles that mention common methods variance or bias, they find no reports of problematic bias due to the use of common methods. In the current article, simulation data suggest that, given the typical types of variables employed in marketing studies, substantial bias would not exist until over half of the variance in data is common across all variables (and thus potentially due to a common method or perhaps some other common response style or halo). In fact, the level of common method variance that would need to be present to cause great concern for bias exceeds even the high estimates of how much common methods variance may actually exist across studies. Further, even a simple test of eigenvalues would be sufficient to provide evidence that the data are free from bias due to common methods. The so-called Harman's one-factor test may be more powerful diagnostically than previously thought. More conservatively, if the first eigenvalue accounts for less than 40% of all the data variance, there appears to be little concern of any amount of bias due to "common methods" that would distort meaning given the typical level of precision involved in survey measures. The research suggests that the amount of attention shown to CMV in many journal articles may be disproportionate to the risk, particularly when nothing substantively or practically changes as a consequence of the tests.

4. Moving forward as researchers first

The editors of the special issue sought research that questioned some of today's conventional publication trends and habits: the sacred cows. The articles in the special issue hit on several of these topics. However, a few others were not included in the issue. Noting some of the issues that authors do not examine, this conclusion offers recommendations with the intention of creating a more practically meaningful literature; a literature where authors write first as objective researchers rather than as potential journal authors desiring publication in prestigious journals to the point that judgment is impaired. Passion should not create a rooting interest. Passion should lie in conducting the research with the utmost respect for representing and/or discovering the truth (not the TRUTH). Importantly, these recommendations apply to reviewers as well as researchers. All too often reviewers succumb to style over substance and may wish to inflict the pain they feel as authors on others. Reviewers may follow red herrings in the form of elegant statistical processes conducted on irrelevant data. Thus, reviewers, as well as authors, need to be mindful of what is real and meaningful.

Unquestionably, the desire and pressure to publish research and obtain research funding have never been stronger. At times, researchers stretch the boundaries of good ethics in writing about the results of their research. Among the relatively small number of studies addressing unethical publishing behavior, one reports that the number of articles retracted for fraudulent reporting increased over 10 times since the 1970s (Fang, Steen, & Casadevall, 2012). A survey of Academy of Management Proceedings suggests that one in four papers displays some elements of plagiarism (Karabag & Berggren, 2012). If the motivation to publish and receive financial support for research can lead to outright fraud and plagiarism, then perhaps journal submission authors also are tempted to design a paper to prioritize publication at the expense of faithfully reporting research results. Clearly, not all researchers intentionally mislead. However, even subtle practices in reporting research can perhaps make discovering the full meaning of research difficult. What might some of these practices be?

5. Lacking full disclosure

Hubbard and Armstrong (1992) report that the percentage of hypotheses supported in the top marketing journals is, astoundingly, well over 90%! A cursory examination of recent articles shows that percentage may well have increased to about 95%. How can this be? The goal of the researcher should be to conduct research with the aim of rejecting hypotheses, yet journal articles find support for nearly all hypotheses. Perhaps one reason is a belief that insignificant results will outweigh rigorous methodology, so researchers only test very safe hypotheses. So safe, in fact, that they border on tautology. Another reason could be that authors cherry-pick research results and only report inferences that show support. Such cherry-picking could be born in the perhaps not mistaken belief that reviewers reject papers based on an insufficient number of statistically significant findings. Armstrong (2003) suggests that reviewers in top consumer research journals expect hypotheses to be supported at least 80% of the time, in contrast to high school students who would expect about half of hypotheses to be supported.

Other potential reasons include the possibility of outright fraud. In a comprehensive study examining the robustness of findings reported in psychological science, the researchers find reason for pause. Even after gaining the cooperation of many of the original authors, attempts to reproduce findings from the studies generally failed to corroborate the results presented in journal articles. In the published journal articles, 97% displayed statistically significant results (p < 0.05). In the attempt to validate the results, only 36% displayed statistically significant results. Additionally, the overall average effect size obtained in the replication studies is less than half of that in the original studies (Open Science Collaboration, 2015). Further complicating the matter is the file-drawer effect, where studies examining theoretically hypothesized


effects but not finding statistical significance often go unreported. The end result is that the academic community misunderstands the nature of relationships because the published literature likely overstates those relationships and fails to take into account the studies not finding statistical significance (Hubbard & Armstrong, 1992).

Many journals also display a trend toward articles reporting multiple studies to address research questions. What happens to the studies conducted along the way that provide unpleasing results to the authors? If an article reports four studies suggesting some effect, but in the process studies with results not supporting the researchers' hypotheses go unreported, does the article fully represent objective reality? Perhaps eight studies exist but the literature only reports four. To the extent that unreported studies not supporting the authors' hypotheses exist, the statistical inferences reported in the journal article are misleading. The literature may proceed toward TRUTH rather than truth.

6. Objective measurement

Objective measurement remains critically important in research (Sajtos & Magyar, 2015). One can change the bathroom scale to show a lower result, but that does not cause the weight of the person on the scale to change. As such, formative indicators, from the perspective of objective measurement, present the unpleasing implication that the measure causes the phenomenon. Furthermore, they present greater potential for bias (Chang et al., 2015) and, like single-item measures, defy well-known and understood validation procedures (Sarstedt et al., 2015; Wilcox et al., 2008). Above all the validation methods for any measure, face validity remains an imperative unequaled by other tests.

7. Meaningful samples and generalizable research

Researchers should apply procedures with an eye toward faithfully depicting results, not in a manner to survive the review process. In this process, the researcher investigates all types of potential research

[…]

statistically significant results. In this way, more surprising results emerge from the literature and perhaps create a climate where marketing (business) scientists actually discover something (Armstrong, 2003).
• More journal articles should report studies in an inductive presentation style. In other words, if one truly starts off with an exploratory study, aimed at discovery, why not present the research report in that fashion rather than presenting in a style more suitable for justification? Interesting data come from interesting subject matter. The paper can present methods, data, and then implications for practice and theoretical development following the results. This style allows for discovery and may closely match the way much research takes place. Post-hoc deduction in which hypotheses are cherry-picked or derived after examining the data should not present an avenue for publication over an inductive presentation. One primary challenge for research is making sense out of the surprisingly nonsensical (Woodside, 2012). Good discussion of nonresults from concepts solidly linked conceptually provides an avenue for such sense-making. Under these circumstances, theory development becomes more possible and more prominent.
• A detailed presentation of basic data characteristics needs to appear with every journal article. In particular, the report should include frequencies for less-than-interval measures and means and variances for metric variables at the variable level (not just the construct level). Readers may take more from these data than from statistically massaged results. Reports also should fully disclose reasons for excluding participants from analyses.
• When multiple studies address a hypothesis or a closely related set of hypotheses, researchers should report "failed" studies, those not providing consistent results or not supporting hypotheses. The studies that do not support the hypotheses indeed qualify any statistical significance supporting hypotheses. One way to deal with this is to consider more of a meta-analysis type of reporting for multiple studies. Further, if a study contains flaws to the extent that it should not be in-
error. Researchers should address basic questions like the legitimacy cluded, the authors should explain how it came to be so flawed. The
of respondents. If one designs a questionnaire that inflates measure- bottom-line is that nonresults are equally important to results in un-
ment validity or enhances likelihood of supporting hypotheses, research derstanding the real world.
flaws exist from the start. If a researcher performs detailed tests of com- • Researchers should give proportionate attention to potential problems
mon methods bias on largely irrelevant data, the tests do not serve in the research. For example, there is no need for reporting multiple
much of a purpose. If the author intends results to have practical mean- and sophisticated tests on issues that do not show high potential for
ing, but the research participants bring into question the ability to gen- distorting results substantially given the characteristics of a study
eralize results or present experience effects that may distort responses, (such as CMV). These tests often go beyond the level of precision pro-
an article reporting those results is of reduced value. vided by the data and need not take up a significant amount of space in
Coinciding with sample concerns, generalizability also depends on a journal article. Such extended discussion can be a distraction to the
research conducted in realistic settings. Relatively new to the literature, reader. When space is a concern, such tests may appear in appendices.
authors are encouraged to adopt the term “boundary conditions” to Indeed, the remedy for problems that might cause issues like CMB lies
specify exactly when some statistical effect occurs. However, the best in steps taken before collecting data, not after (Conway & Lance,
boundary conditions often limit the practical meaning of the research 2010). In surveys, researchers must monitor for potential problems
results to the point of near irrelevance (Yang & Lynn, 2014). In addition, like order bias that could artificially impact relationships among
the narrow conditions employed in consumer experiments makes re- items within a scale or between scales. Indeed, multiple articles in
producing results difficult. Despite these findings, authors sometimes this special section point to the critical importance of construct validity.
feel pressured to overstate the generalizability of their results in an ef- All aspects of construct validity are important including fit validity,
fort to achieve acceptance for publication in top journals (Huber, convergent validity, and discriminant validity. But, even rigorous CFA
Payne, & Puto, 2014). examinations, which remain critically important, do not mean very
much when the items lack face validity. A scale assessing smiles should
8. Conclusions not drop the only item in the scale mentioning a smile. The item con-
tent from two items on separate constructs should not match closely.
Considering the issues that this special issue raises, consider these Sometimes, when authors fall back on using previously published or
points as summary recommendations: handbook indexed scales, problems such as these arise. A previously
applied scale does not guarantee face validity particularly in the con-
• The contribution of a journal article should not depend on the statisti- text of a different study. Thus, researchers' hesitation to modify a
cal significance of tests or support for hypotheses. If the research scale's content due to context represents a potential sacred cow. A
methods are valid and the questions worth asking, then the support common sense appraisal of item content remains as important a step
or lack thereof of hypotheses does not determine the contribution. in measurement validity as exists.
Nonresults, particularly unexpected null findings, are essential to full • Researchers should give more attention to practical meaningfulness
disclosure. Articles should report null findings in addition to rather than statistical significance. In fact, when researchers conduct

Please cite this article as: Babin, B.J., et al., Heresies and sacred cows in scholarly marketing publications, Journal of Business Research (2015),
http://dx.doi.org/10.1016/j.jbusres.2015.12.001
6 B.J. Babin et al. / Journal of Business Research xxx (2015) xxx–xxx

tests with no intention of generalization, they need not report statistical significance. Effect sizes deserve greater attention relative to statistical significance. Reviewers need to understand that research is not perfect, and they should not succumb to the charm of significant results, sophisticated but not well-understood or cogent tools, an inaccurate presentation of precision, or an exhaustive number of studies.

• Authors should remain researchers when writing. They should remain objective and resist passion about their theory or hypothesis that creates a rooting interest in the results. What if articles were published blind, not just reviewed blind? In this way, ego and reputation might not interfere with the accurate presentation of research results. Perhaps the literature would be more enlightening.

• Any mention of research integrity should acknowledge the disturbing issue of plagiarism. Actions intended solely to manipulate the review process should be discouraged. Authors should be able to state clearly the contribution each made to a published piece of research. Colleagues with little direct role in a research project should not pressure other colleagues for authorship without due cause. Plagiarism, including self-plagiarism, represents a lack of integrity and brings disrepute upon the entire academic research community.

• More research on academic researcher honesty is encouraged. A search on academic dishonesty in the academic literature reveals a plethora of studies dealing with student cheating and plagiarism. Of course, academics were students once. Do only honest students become academics? Research needs to address the impact of the pressures to publish and to obtain substantial research grants on the tendency, whether intentional or not, to steer results toward TRUTH rather than truth.

References

Alba, J.W. (2012). In defense of bumbling. Journal of Consumer Research, 38, 981–987.
Armstrong, J.S. (2003). Discovery and communication of important marketing findings: Evidence and proposals. Journal of Business Research, 56, 69–84.
Armstrong, J.S., & Overton, T.S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 396–402.
Bhattacharjee, Y. (2013). The mind of a con man. New York Times, MM44.
Calder, B.J., Phillips, L.W., & Tybout, A.M. (1981). Designing research for application. Journal of Consumer Research, 8, 197–207.
Calder, B.J., Phillips, L.W., & Tybout, A.M. (1983). Beyond external validity. Journal of Consumer Research, 10, 112–114.
Cattell, R.B. (1966). The meaning and strategic use of factor analysis. In R.B. Cattell (Ed.), Handbook of multivariate experimental psychology (pp. 174–243). Chicago: Rand McNally.
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaivete among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112–130.
Chang, W., Franke, G.R., & Lee, N. (2015). Comparing reflective and formative measures: New insights from relevant simulations. Journal of Business Research (forthcoming).
Conway, J.M., & Lance, C.E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25, 325–344.
Crosby, L.A., & Stephens, N. (1987). Effects of relationship marketing on satisfaction, retention, and prices in the life insurance industry. Journal of Marketing Research, 24, 404–411.
Daugherty, T., Hoffman, E., & Kennedy, K. (2015). Research in reverse: Ad testing using an inductive consumer neuroscience approach. Journal of Business Research (forthcoming).
Diamantopoulos, A., & Siguaw, J.A. (2006). Formative versus reflective indicators in organizational measure development: A comparison and empirical illustration. British Journal of Management, 17(4), 263–282.
Enserink, M. (2012). Final report on Stapel also blames field as a whole. Science, 338(6112), 1270–1271.
Espinosa, J.A., & Ortinau, D.J. (2015). Debunking legendary beliefs about student samples in marketing research. Journal of Business Research (forthcoming).
Fang, F.C., Steen, R.G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. PNAS, 109, 17028–17033.
Fuller, C.M., Simmering, M.J., Atinc, G., Atinc, Y., & Babin, B.J. (2015). Common methods variance detection in business research. Journal of Business Research (forthcoming).
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6), 417.
Hubbard, R., & Armstrong, S. (1992). Are null results becoming an endangered species in marketing? Marketing Letters, 3, 127–136.
Huber, J., Payne, J.W., & Puto, C.P. (2014). Let's be honest about the attraction effect. Journal of Marketing Research, 51, 520–525.
Hunt, S.D. (2010). Marketing theory: Foundations, controversy, strategy, resource-advantage theory. Armonk, NY: M.E. Sharpe.
Johnston, R.J. (1973). Possible extensions to the factorial ecology method: A note. Environment and Planning, 5, 719–734.
Karabag, S.F., & Berggren, C. (2012). Retraction, dishonesty and plagiarism: Analysis of a crucial issue for academic publishing, and the inadequate responses from leading journals in economics and management disciplines. Journal of Applied Economics and Business Research, 2, 172–183.
Lynch, J.G. (1983). The role of external validity in theoretical research. Journal of Consumer Research, 10, 109–111.
MacCallum, R.C., & Browne, M.W. (1993). The use of causal indicators in covariance structure models: Some practical issues. Psychological Bulletin, 114, 533–541.
Oliver, R., & Bearden, W.O. (1985). Disconfirmation processes and consumer evaluations in product usage. Journal of Business Research, 13, 235–246.
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 943–952.
Peter, J.P., & Olson, J.C. (1983). Is science marketing? Journal of Marketing, 47, 111–125.
Sajtos, L., & Magyar, B. (2015). Auxiliary theories as translational mechanisms for measurement model specification. Journal of Business Research (forthcoming).
Sarstedt, M., Diamantopoulos, A., Salzberger, T., & Baumgartner, P. (2015). Selecting single items to measure doubly-concrete constructs: A cautionary tale. Journal of Business Research (forthcoming).
Smith, S., Roster, C.A., Golden, L.L., & Albaum, G.S. (2015). Respondent data quality in managed consumer panels and mTurk samples. Journal of Business Research (forthcoming).
Stapel, D. (2012). Ontsporing. Amsterdam: Prometheus.
Wilcox, J.B., Howell, R.D., & Breivik, E. (2008). Questions about formative measurement. Journal of Business Research, 61, 1219–1228.
Woodside, A. (2012). Incompetency training: Theory, practice and remedies. Journal of Business Research, 65, 279–293.
Yang, S., & Lynn, M. (2014). More evidence challenging the robustness and usefulness of the attraction effect. Journal of Marketing Research, 51, 508–513.
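As a concrete illustration of the selective-reporting concern discussed above, the following simulation sketches how a literature that prints only statistically significant results overstates the true effect. This is a minimal, hypothetical sketch, not material from the editorial: the true effect size, number of studies, and sample sizes are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # small true standardized mean difference (assumed)
N_STUDIES = 200     # studies conducted across the field (assumed)
N_PER_GROUP = 50    # participants per condition in each study (assumed)

def run_study():
    """Simulate one two-condition study; return the estimated effect and
    whether a rough two-sided z-test at alpha = .05 calls it significant."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Standard error of the difference between the two group means
    se = ((statistics.variance(control) + statistics.variance(treated)) / N_PER_GROUP) ** 0.5
    return diff, abs(diff / se) > 1.96

all_effects, published_effects = [], []
for _ in range(N_STUDIES):
    effect, significant = run_study()
    all_effects.append(effect)
    if significant:  # the file drawer: only significant results reach print
        published_effects.append(effect)

print(f"Mean estimated effect, all studies conducted: {statistics.mean(all_effects):.3f}")
print(f"Mean estimated effect, 'published' studies:   {statistics.mean(published_effects):.3f}")
```

Because only the estimates large enough to clear the significance threshold survive, the "published" mean runs well above the mean across all studies conducted, mirroring the distinction drawn above between TRUTH and truth.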

