Online Product Review Impact: The Relative Effects of Review Credibility and
Review Relevance
Alhassan G. Mumuni, Kelley O’Reilly, Amy MacMillan, Scott Cowley & Brett
Kelley
To cite this article: Alhassan G. Mumuni, Kelley O’Reilly, Amy MacMillan, Scott Cowley & Brett
Kelley (2019): Online Product Review Impact: The Relative Effects of Review Credibility and
Review Relevance, Journal of Internet Commerce
ABSTRACT
This study conceptualizes, operationalizes, and identifies the drivers of online product review (OPR) relevance and examines its relative effect on OPR impact compared to review credibility. In contrast to previous studies, this study is the first to conceptualize review credibility as a construct distinct from reviewer expertise and trustworthiness, comprising a cognitive-affective dimension (perceptions) and a behavioral dimension (likelihood to act). Results show that review relevance contributes significantly to explaining OPR impact and that review relevance and review credibility (drivers of OPR impact) provide a significantly better fit to the empirical data than review credibility alone. In fact, review relevance is almost as strong a driver of OPR impact as review credibility. However, the relationships between review credibility and its two hypothesized drivers, reviewer trustworthiness and reviewer expertise, are mixed. While a significant positive relationship is found between credibility and trustworthiness, as expected, a significant negative relationship is found between credibility and expertise.

KEYWORDS
Electronic word-of-mouth (eWOM); online product reviews (OPR); OPR impact; persona similarity; review credibility; review relevance; reviewer expertise; reviewer trustworthiness; usage similarity
Introduction
The Internet has provided consumers the means to easily acquire product information from other consumers and to share their own product experiences. This online consumer-to-consumer (C2C) communication is referred to as electronic word-of-mouth (eWOM) (Chatterjee 2001). One popular form of eWOM is online product reviews (OPRs), i.e., online evaluations and ratings of products by consumers. Studies show that a large majority of consumers use OPRs as a source of information for product purchase decisions (Chou, Picazo-Vela, and Pearson 2013; PeopleClaim 2013; BBB/Nielsen 2017; Statista 2018), and that consumers trust OPRs relative to other product information sources, with over 85% of Internet shoppers reporting that they trust online reviews as much as personal recommendations (Statista 2017). Consumers also seem to trust products with corresponding OPRs more than products without reviews, with one recent study demonstrating that the mere presence of OPRs is associated with a 270% greater purchase likelihood for reviewed products (Askalidis and Malthouse 2016; SRC 2017).
A long record of research supports the influence of OPRs on consumers’ ability to evaluate products and the likelihood of making subsequent purchase decisions, including the what, when, and how of product purchases (Adjei, Noble, and Noble 2010; Chevalier and Mayzlin 2006; Cole et al. 2011; Leonard and Jones 2010; Teng et al. 2017; Zhang and Tran 2011; Zhu and Zhang 2010). In the process, extant research also seeks to understand the mechanisms and key drivers by which OPRs and their respective elements impact consumers’ purchase decisions. This prior research primarily focuses on constructs such as reviewer expertise and trustworthiness as drivers of review credibility and subsequent OPR impact (Teng et al. 2017; Cheung and Thadani 2012; Cheung et al. 2009). In contrast, we suggest that a credibility-centric focus results in an incomplete understanding of impact, as it neglects situational factors that determine the true effect of a review.
Some evidence suggests that latent situational factors play a sizable role in determining OPR impact. For example, prior research suggests that the amount of experience a consumer has shopping online is associated with the influence of a review (Zhu and Zhang 2010). More recently, qualitative research by O’Reilly et al. (2016) indicates that the impact of an OPR is determined not only by its credibility, but also by its relevance to the consumer, in terms of whether the reviewer exhibits personality characteristics that make the review more relevant to the reader. Specifically, they posit that review readers consider both their own similarity to the reviewer’s intended product use and the reviewer’s more general persona characteristics as indicators of the relevance of a review.
This research examines the extent to which an OPR’s relevance to a consumer (review relevance) contributes to OPR impact, as well as its effect relative to the review’s credibility. The contribution of this research is three-fold. First and foremost, it provides a broader account of the impact of OPRs by incorporating the effect of review relevance as suggested by O’Reilly et al. (2016). Second, it conceptualizes and operationalizes review credibility, distinguishing it from its underlying dimensions of expertise and trustworthiness, which were previously considered to be an integral part of credibility itself. Third, it utilizes a multi-dimensional measure of OPR impact.
[Figure 1. Hypothesized model: reviewer expertise and reviewer trustworthiness as drivers of review credibility; reviewer–receiver persona similarity and reviewer–receiver usage similarity as drivers of review relevance; review credibility and review relevance as drivers of OPR impact.]
interactions as well (Cheung et al. 2009; Lim et al. 2006; Xu 2014; Zhu,
Guopeng, and Wei 2014).
H1b: The impact of an online product review is positively driven by its perceived
relevance to the receiver.
Review credibility
Credibility is an enduring construct in the extant communications literature, tracing its roots back to Aristotle’s notion of “ethos” (Andersen and Clevenger 1963; Bowden, Caldwell, and West 1934; Kulp 1934; Ewing 1942; McCroskey and Young 1981). Many researchers have studied this construct with common reference to Hovland, Janis, and Kelley’s (1953) source-credibility model, which defined credibility as “the resultant value (combined effect) of (1) the extent to which a communicator is perceived to be a source of valid assertions (his ‘expertness’) and (2) the degree of confidence in the communicator’s intent to communicate the assertions he considers most valid (his ‘trustworthiness’)”. This definition suggests that credibility and its underlying dimensions (expertise and trustworthiness) are one and the same. For this reason, many researchers have considered expertise and trustworthiness to be reflective indicators of credibility. Other researchers, however, have offered alternative definitions and perspectives that suggest that credibility is a construct distinct from expertise and trustworthiness. For instance, in an earlier discussion of the constructs, Kelley and Thibaut (1954) wrote:
In certain instances, the initiator may be viewed instrumentally as a “mediator of
fact” by virtue of his perceived expertness, credibility, and trustworthiness. In other
instances, the recipient may be motivated to agree with the initiator without regard
to his “correctness”; agreement may become an independent motive. The strength of
this motive seems to depend partly on the strength of positive attachment to and
affection for the initiator. (From Simons, Berkowitz, and Moyer 1970, 743;
emphasis added)
This viewpoint opens the possibility that expertness (expertise) and trustworthiness contribute to credibility in a formative relationship, rather than being reflective of it. Ohanian’s (1990) and O’Keefe’s (2002) definitions of credibility are consistent with this notion. Both define credibility in terms of believability. In Ohanian (1990) credibility is the information
Review relevance
Relevance is understood to be a multidimensional, dynamic, cognitive concept that is largely dependent on users’ perceptions of information and their own information needs at a particular point in time (Schamber, Eisenberg, and Nilan 1990; for a historical review of relevance, see Saracevic 1975). Because this concept is both dynamic and temporal, it has also been described as situational relevance (Cooper 1971; Wilson 1973).
O’Reilly et al. (2016) identify two drivers of message relevance: reviewer–receiver (R–R) persona similarity and R–R product usage similarity. Together, these drivers reflect the perceived degree of similarity between the reviewer and the receiver and determine the extent to which the receiver will consider the reviewer’s online review as relevant to their particular circumstance. O’Reilly et al. (2016) define persona similarity as the receiver’s assessment of how alike the reviewer is to them in terms of character, background, and experiences. This notion of similarity is often referred to as homophily in the literature and has been broadly defined as “ … the degree to which pairs of individuals who interact are similar with respect to certain attributes, such as beliefs, values, education, social status, etc.” (Rogers and Bhowmik 1970, 526). A source who is attractive, likable, or similar will have a stronger effect on a receiver than a less attractive or dissimilar source (Turner 1991). In essence, because people tend to like similar others, they perceive the ideas and attitudes held by those similar others to be more appropriate and relevant to themselves (Racherla, Mandviwalla, and Connolly 2012; Thompson and Malaviya 2013; Xia and Bechwati 2008). This phenomenon has been documented in numerous studies (see Wilson and Sherrell (1993) for a meta-analysis), including experiments involving salesperson-customer interactions (Brock 1965; Jiang et al. 2010) and secondary data analyses exploring what governs the composition of teams (Ruef, Aldrich, and Carter 2003). Therefore, a receiver’s perception of similarity to an online reviewer makes the review relevant to the receiver. In sum, the present study posits that perceived similarity between a consumer and the reviewer may serve as a heuristic cue to the consumer that the product or service might fit their needs, making the review more relevant to the consumer’s particular circumstance.
Usage similarity refers to a receiver’s assessment of how alike the source’s use of the product is to their own intended use. In other words, from the receiver’s point of view, the question is whether the consumer posting information online is using the product in the same manner that they intend to use it (O’Reilly et al. 2016). Following O’Reilly et al. (2016), this is posited as an important additional driver of message relevance because similarity to a reviewer (persona similarity) will have limited effect if the reviewer’s message is unrelated to the receiver’s circumstances and needs. If the review discusses the product or service along dimensions that match the receiver’s expected use, then this too creates relevance (Costello 2017; Dholakia and Sternthal 1977; Duffy 2015; Williams et al. 2010; Xia and Bechwati 2008). Together, persona similarity and usage similarity reflect the degree of similarity between the reviewer and the receiver, and they determine the degree of relevance a receiver will assign a message.
Accordingly, we hypothesize that:
H3a: The perceived relevance of an online review to a receiver is positively driven by
their perceived persona similarity with the reviewer.
Research methods
Research design
Data to test the hypotheses were collected through a structured, self-administered survey completed by respondents recruited through an online panel. Respondents read a hypothetical (researcher-contrived) review for a fitness tracker and responded to a battery of measurement items reflective of the study constructs. A fitness tracker was used because it is a gender-neutral product and one for which respondents are likely to read online reviews prior to purchase (NPD 2015).
Because of the possibility that the expertise and trustworthiness of a reviewer (two constructs of interest to the study) can be inferred from, or at least implied by, the content of a review, four versions of the review were created. These versions reflected two levels of reviewer expertise (expert versus non-expert) crossed with two levels of reviewer trustworthiness (trustworthy versus untrustworthy). Thus, four “manipulations” of the reviewer’s expertise and trustworthiness were implemented, resulting in four data collection “conditions” (see Appendix 1). Notably, the goal was not to test the effect of the different manipulations on model parameters. Rather, it was to ensure that a broad enough spectrum of reviews was assessed by respondents so that results of the model test would not be attenuated by the specific nature of any one particular review. For each version, respondents first read the relevant review and then responded to questionnaire items that were identical across review versions.
evaluated only one review. The high expertise and high trustworthiness reviews both received the higher mean scores in this rating task (7.5 and 9.2, respectively).

The ranking test, conducted in a separate class, used a paired-comparison ranking task in which the reviews were presented to respondents in pairs (a low-high expertise pair and a low-high trustworthiness pair). Each respondent read only one pair and simply indicated which of the two reviews in the pair they thought reflected the higher expertise or trustworthiness. Seven of the nine respondents who received the low-high expertise pair (77.8%) chose the high expertise review; the remaining two chose the low expertise review. Similarly, six of the eight respondents who received the low-high trustworthiness pair (75.0%) chose the high trustworthiness review. Thus, in both tests, responses confirmed the adequacy of the manipulations.
Construct operationalizations
Conceptualizations and operationalizations of the study constructs are in Appendix 2. Reviewer expertise was conceptualized as the extent to which the reviewer is perceived as a source of valid assertions (Hovland, Janis, and Kelley 1953) based on their knowledge regarding the subject matter of the review (O’Reilly et al. 2016, 79). It was measured using a six-item, seven-point semantic differential scale adapted from Eisend (2006), McCroskey and Teven (1999), and Ohanian (1990). Reviewer trustworthiness was conceptualized as the degree of confidence the respondent has in the reviewer’s intent to communicate valid assertions (Hovland, Janis, and Kelley 1953) without bias or alternative motives for posting the message (O’Reilly et al. 2016, 79). It was similarly measured using a seven-point semantic differential scale applied to seven items adapted from Eisend (2006), McCroskey and Teven (1999), and Ohanian (1990). Persona similarity addressed the respondent’s assessment of how alike the reviewer is to them in terms of character, background, and experiences (O’Reilly et al. 2016, 80), and was measured using a six-item scale developed by the authors and supported by measurement scales from Hernandez-Ortega (2018). Usage similarity was defined as the respondent’s assessment of how alike the reviewer’s use of the product is to their own intended use (O’Reilly et al. 2016, 80) and was measured on a five-item scale, similarly developed by the authors and supported by measurement scales from Hernandez-Ortega (2018). Responses to scale items for both persona similarity and usage similarity were solicited on five-point Likert scales (1 = Strongly disagree; 5 = Strongly agree).
Measurement validation
Following Anderson and Gerbing (1988), prior to the structural analyses, the construct measures were validated through confirmatory factor analysis using LISREL 8.80 for Windows (Jöreskog and Sörbom 2004). The measurement model was fit to a covariance matrix, and maximum likelihood estimation was used to derive the model parameters. To improve model fit, a number of indicator error covariances (all of which were within-construct measures) were allowed to correlate based on modification indices. Table 2 shows results of the confirmatory factor analysis.

The overall model fit statistics show acceptable fit of the measurement model to the data on commonly used model fit criteria [Root Mean Square Error of Approximation (RMSEA) = 0.058; Comparative Fit Index (CFI) = 0.99; Goodness-of-Fit Index (GFI) = 0.91; Adjusted Goodness-of-Fit Index (AGFI) = 0.89; χ²(705 df) = 2837.3 (p < .001); χ²/df = 4.03]. RMSEA is just slightly higher than the recommended value of 0.05 for excellent fit, GFI is just above the preferred 0.90, while AGFI falls slightly below 0.90. Composite reliabilities for all constructs are around 0.9, substantially above the recommended 0.7 minimum (Hulland 1999), providing evidence in support of acceptable measure reliability.
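Composite reliability is conventionally computed from standardized loadings as CR = (Σλ)² / ((Σλ)² + Σθ), with each error variance θᵢ = 1 − λᵢ². The sketch below illustrates the calculation; the loadings are illustrative values, not those from Table 2:

```python
def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings,
    assuming error variance theta_i = 1 - lambda_i ** 2."""
    lam = sum(loadings)
    theta = sum(1 - l ** 2 for l in loadings)
    return lam ** 2 / (lam ** 2 + theta)

# Illustrative loadings for a six-item scale (not taken from Table 2)
print(round(composite_reliability([0.82, 0.79, 0.85, 0.77, 0.81, 0.80]), 3))
```

A set of loadings around 0.8, as here, yields a composite reliability near 0.9, comfortably above Hulland’s (1999) 0.7 minimum.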
On measurement validity, the standardized factor loadings are all above the recommended level of 0.5 (Anderson and Gerbing 1988) and average variance extracted is above 0.5 for all constructs, indicating acceptable convergent validity of the measures (Smith and Barclay 1997). Discriminant validity was assessed using the Fornell–Larcker procedure (Fornell and Larcker 1981), which requires that for any pair of constructs, the average variance extracted for each construct be higher than the square of the correlation between them, or alternatively, that the square root of the average variance extracted for each construct exceed the correlation between the pair.
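Mechanically, the Fornell–Larcker criterion requires, for every construct pair (i, j), that AVE_i and AVE_j both exceed r_ij². A sketch of the check; the construct names, AVE values, and correlations below are hypothetical, not the paper’s results:

```python
from itertools import combinations

def fornell_larcker_ok(ave, corr):
    """Return True if every construct pair satisfies the Fornell-Larcker
    criterion: the AVE of each construct exceeds the squared correlation
    between them. `ave` maps construct name -> AVE; `corr` maps a
    frozenset of two construct names -> their correlation."""
    for i, j in combinations(sorted(ave), 2):
        r2 = corr[frozenset((i, j))] ** 2
        if ave[i] <= r2 or ave[j] <= r2:
            return False
    return True

# Hypothetical inputs for illustration only
ave = {"credibility": 0.62, "relevance": 0.58, "impact": 0.55}
corr = {frozenset(("credibility", "relevance")): 0.48,
        frozenset(("credibility", "impact")): 0.61,
        frozenset(("relevance", "impact")): 0.57}
print(fornell_larcker_ok(ave, corr))  # every squared correlation stays below both AVEs
```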
Results
Descriptive statistics
Construct means, standard deviations, and coefficients of variation are shown in the first three columns of Table 3. Means for all constructs are around their respective scale mid-points, standard deviations are similar across constructs (coefficients of variation between 0.25 and 0.31), and there is sufficient variance in each construct to justify the analysis conducted for the hypothesis testing.
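The coefficient of variation is the standard deviation divided by the mean, which makes dispersion comparable across constructs even if they were measured on different scales. A minimal sketch; the response values are made up, not drawn from Table 3:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """Sample standard deviation scaled by the mean."""
    return stdev(values) / mean(values)

# Made-up five-point Likert responses for a single construct
scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
print(round(coefficient_of_variation(scores), 2))
```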
Hypotheses testing
The hypotheses were tested through structural equation modeling (SEM) by adding structural paths to the measurement model in Figure 1. Because of the lack of discriminant validity among the first-order construct indicators of OPR impact, composite scores were computed for the first-order factors and used as reflective indicators of OPR impact in formulating and estimating the structural model. Parameters and statistics for the resulting structural model are shown in Table 4. The results show that this model fits the data very well, with virtually no change in overall model fit statistics compared to the measurement model [RMSEA = 0.057; CFI = 0.99; GFI = 0.89; AGFI = 0.87; χ²(464 df) = 1983.5 (p < .001); χ²/df = 4.3].

Hypotheses H1a and H1b predicted positive relationships between OPR impact and each of its two drivers, review credibility (H1a) and review relevance (H1b). Both hypotheses find support in the empirical data, given that the path coefficients for credibility (β = 0.54; t = 18.81; p < .01) and message relevance (β = 0.45; t = 15.72; p < .01) are both positive and statistically significant. Similarly, hypotheses H2a and H2b predicted positive relationships between review credibility and each of its two
consumers are more likely to perceive OPRs as credible when they come from sources they perceive as similar to themselves (Costello 2017; Hernandez-Ortega 2018; Hwang, Park, and Woo 2018; Karimi and Wang 2017; Ma and Atkin; Shan 2016). While this study is the first to fully operationalize the notion of relevance and its underlying drivers (persona similarity and usage similarity) in the context of OPRs, recent works have also incorporated complementary constructs, such as psychological distance, social relevance, and social influence (Costello 2017; Hernandez-Ortega 2018; Shan 2016). This study’s findings indicate that consumers use their perceived similarity to the source of an OPR and their perceived shared expectations for the focal product as heuristics to cull OPRs they deem irrelevant to their particular circumstances. Placing additional emphasis on relevance and its drivers is therefore strongly recommended for marketers seeking to improve the likelihood that online reviews will have the desired impact.
do so. It is not clear whether this study’s findings extrapolate to those con-
texts as well. Future research could use a variety of actual reviews across an
assortment of product contexts to examine whether the effect of review
relevance uncovered in this study holds in those contexts.
Second, the sampling and data collection decisions implemented for the study resulted in a sample that is slightly younger, better educated, and probably more technologically savvy than the broader population. While this does not undermine the conclusions of our study, care should be taken when extrapolating the attitudes and behaviors derived from this research to other segments that have been under-sampled. For example, it is possible that older individuals or those with less education apply different criteria when determining review relevance. One opportunity for future research would be to better understand the formation of persona similarity and how it is shaped by respondent characteristics such as age. In the context of this research, there may be differential effects of age-related experience that alter respondents’ assessments of reviewer similarity and that deserve exploration.
A third potential limitation is the review context used in these studies. It is now common for online product review systems to include profile information about reviewers. For instance, Amazon provides profile information about each reviewer that includes the number of reviews posted by the reviewer, the number of helpful votes made, the number of hearts received, and an overall reviewer ranking. Such information is undoubtedly useful to consumers in ascertaining reviewer credibility and expertise. However, the reviews used in the present study did not include such profile information. Future studies using actual user-generated content could examine whether the presence of this information affects coefficients in the model tested in this study.
Finally, although the present study used a more robust multi-item measure of OPR impact, the data collected are still respondent self-reports. In the future, it would be worthwhile to collect data about respondents’ behavior both during and after reading OPRs to better understand the impact of OPRs on consumer decision-making.
Conclusion
In summary, both the credibility of a source and the relevance of a review contribute to OPR impact. The addition of review relevance adds explanatory power to how consumers assess OPRs and to the resultant impact these reviews can have on consumer decision-making. Findings also demonstrate that the trustworthiness of a reviewer is substantially more important than the reviewer’s expertise in evaluating the credibility of OPRs. As the first
References
Adjei, M., S. Noble, and C. Noble. 2010. The influence of C2C communications in online
brand communities on customer purchase behavior. Journal of the Academy of
Marketing Science 38 (5):634–653. doi: 10.1007/s11747-009-0178-5.
Andersen, K., and T. Clevenger. Jr. 1963. A summary of experimental research in ethos.
Speech Monographs 30 (2):59–78. doi: 10.1080/03637756309375361.
Anderson, J. C., and D. W. Gerbing. 1988. Structural equation modeling in practice: A
review and recommended two-step approach. Psychological Bulletin 103 (3):411–423. doi:
10.1037//0033-2909.103.3.411.
Andreassen, T., and S. Streukens. 2009. Service innovation and electronic word-of-mouth:
Is it worth listening to? Managing Service Quality: An International Journal 19 (3):
249–265. doi: 10.1108/09604520910955294.
Applbaum, R. F., and K. W. E. Anatol. 1972. The factor structure of source credibility as a
function of the speaking situation. Speech Monographs 39 (3):216–222. doi: 10.1080/
03637757209375760.
Askalidis, G., and E. C. Malthouse. 2016. The value of online customer reviews. Paper presented at the Proceedings of the 10th ACM Conference on Recommender Systems (RecSys’16). ACM, New York, NY, USA, 155–8. doi: 10.1145/2959100.2959181.
Ayeh, J. K. 2015. Travelers’ acceptance of consumer-generated media: An integrated model
of technology acceptance and source credibility theories. Computers in Human Behavior
48:173–180. doi: 10.1016/j.chb.2014.12.049.
Banerjee, S., S. Bhattacharyya, and I. Bose. 2017. Whose online reviews to trust?
Understanding reviewer trustworthiness and its impact on business. Decision Support
Systems 96:17–26. doi: 10.1016/j.dss.2017.01.006.
BBB/Nielsen. 2017. https://www.bbb.org/globalassets/local-bbbs/council-113/media/documents/12468-d-01_cbbb_report.pdf (accessed December 10, 2018).
Berlo, D. K., J. B. Lemert, and R. J. Mertz. 1969. Dimensions for evaluating the acceptability of message sources. Public Opinion Quarterly 33 (4):563–576. doi: 10.1086/267745.
Bowden, A. O., F. F. Caldwell, and G. A. West. 1934. A study in prestige. American
Journal of Sociology 40 (2):193–203. doi: 10.1086/216684.
Bowers, J. W., and W. A. Phillips. 1967. A note on the generality of source-credibility
scales. Speech Monographs 34 (2):185–186. doi: 10.1080/03637756709375542.
Brock, T. C. 1965. Communicator-recipient similarity and decision change. Journal of
Personality and Social Psychology 1 (6):650–654. doi: 10.1037/h0022081.
Handbook of research methods in consumer psychology (pp. 319–336). New York, NY:
Routledge.
Heesacker, M., R. E. Petty, and J. T. Cacioppo. 1983. Field dependence and attitude change:
Source credibility can alter persuasion by affecting message-relevant thinking. Journal of
Personality 51 (4):653–666. doi: 10.1111/j.1467-6494.1983.tb00872.x.
Hernandez-Ortega, B. 2018. Don’t believe strangers: Online consumer reviews and the role
of social psychological distance. Information & Management 55 (1):31–50. doi: 10.1016/j.
im.2017.03.007.
Homer, P. M., and L. R. Kahle. 1990. Source expertise, time of source identification, and
involvement in persuasion: An elaborative processing perspective. Journal of Advertising
19 (1):30–39. doi: 10.1080/00913367.1990.10673178.
Hovland, C. I., I. L. Janis, and H. H. Kelley. 1953. Communication and persuasion. New
Haven, CT: Yale University Press.
Hovland, C. I., and W. Weiss. 1951. The influence of source credibility on communication
effectiveness. Public Opinion Quarterly 15 (4):635–650. doi: 10.1086/266350.
Hsu, C. L., J. C.-C. Lin, and H. S. Chiang. 2013. The effects of blogger recommendations
on customers’ online shopping intentions. Internet Research 23 (1):69–88. doi: 10.1108/
10662241311295782.
Hulland, J. 1999. Use of partial least squares (PLS) in strategic management research: A
review of four recent studies. Strategic Management Journal 20 (2):195–204. doi: 10.1002/
(SICI)1097-0266(199902)20:2<195::AID-SMJ13>3.3.CO;2-Z.
Hwang, J., S. Park, and M. Woo. 2018. Understanding user experiences of online travel
review websites for hotel booking behaviors: An investigation of a dual motivation the-
ory. Asia Pacific Journal of Tourism Research 23 (4):359–372. doi: 10.1080/10941665.
2018.1444648.
Imhoff, R., P. Lamberty, and O. Klein. 2018. Using power as a negative cue: How conspir-
acy mentality affects epistemic trust in sources of historical knowledge. Personality and
Social Psychology Bulletin 44 (9):1364–1379. doi: 10.1177/0146167218768779.
Jiang, L., J. Hoegg, D. W. Dahl, and A. Chattopadhyay. 2010. The persuasive role of incidental similarity on attitudes and purchase intentions in a sales context. Journal of Consumer Research 36 (5):778–791. doi: 10.1086/605364.
Johnson, H. H., J. M. Torcivia, and M. A. Poprick. 1968. Effects of source credibility on
the relationship between authoritarianism and attitude change. Journal of Personality and
Social Psychology 9 (2, Pt.1):179–183. doi: 10.1037/h0021250.
Jöreskog, K. G., and D. Sörbom. 2004. LISREL 8.7 for Windows. Lincolnwood, IL: Scientific Software International.
Jucks, R., and F. M. Thon. 2017. Better to have many opinions than one from an expert?
Social validation by one trustworthy source versus the masses in online health forums.
Computers in Human Behavior 70:375–381. doi: 10.1016/j.chb.2017.01.019.
Karimi, S., and F. Wang. 2017. Online review helpfulness: Impact of reviewer profile image.
Decision Support Systems 96:39–48. doi: 10.1016/j.dss.2017.02.001.
Kelley, H. H. 1967. Attribution theory in social psychology. Nebraska Symposium on
Motivation 15:192–238.
Kelley, H. H., and J. W. Thibaut. 1954. Experimental studies of group problem solving and
process. Handbook of Social Psychology 2:735–785.
Kim, J., and P. Gupta. 2012. Emotional expressions in online user reviews: How they influence consumers’ product evaluations. Journal of Business Research 65 (7):985–992. doi: 10.1016/j.jbusres.2011.04.013.
Kimball, S. H. 2019. Survey data collection; online panel efficacy. A comparative study of
Amazon MTurk and Research Now SSI/Survey Monkey/Opinion Access. Journal of
Business Diversity 19 (2):16–45.
Kozinets, R. V., K. De Valck, A. C. Wojnicki, and S. J. S. Wilner. 2010. Networked narratives: Understanding word-of-mouth marketing in online communities. Journal of Marketing 74 (2):71–89. doi: 10.1509/jm.74.2.71.
Kulp, D. H. 1934. Prestige, as measured by single-experience changes and their permanency. The Journal of Educational Research 27 (9):663–672. doi: 10.1080/00220671.1934.10880448.
Lafferty, B. A., R. E. Goldsmith, and S. J. Newell. 2002. The dual credibility model: The
influence of corporate and endorser credibility on attitudes and purchase intentions.
Journal of Marketing Theory and Practice 10 (3):1–12. doi: 10.1080/10696679.2002.
11501916.
Lee, M., and S. Youn. 2009. Electronic word of mouth (eWOM). How eWOM platforms
influence consumer product judgement. International Journal of Advertising 28 (3):
473–499. doi: 10.2501/S0265048709200709.
Leonard, L. N. K., and K. Jones. 2010. Consumer-to-consumer e-commerce research in
information systems journals. Journal of Internet Commerce 9 (3–4):186–207. doi: 10.
1080/15332861.2010.529052.
Lim, K. H., C. L. Sia, M. K. Lee, and I. Benbasat. 2006. Do I trust you online, and if so,
Will I buy? An empirical study of two trust-building strategies. Journal of Management
Information Systems 23 (2):233–266. doi: 10.2753/MIS0742-1222230210.
Martin, W. C., and J. E. Lueg. 2013. Modeling word-of-mouth usage. Journal of Business
Research 66 (7):801–808. doi: 10.1016/j.jbusres.2011.06.004.
McCracken, G. 1989. Who is the celebrity endorser? Cultural foundations of the endorsement process. Journal of Consumer Research 16 (3):310–321. doi: 10.1086/209217.
McCroskey, J. C., W. Holdridge, and J. K. Toomb. 1974. An instrument for measuring the
source credibility of basic speech communication instructors. The Speech Teacher 23 (1):
26–33. doi: 10.1080/03634527409378053.
McCroskey, J. C., and J. J. Teven. 1999. Goodwill: A reexamination of the construct and its
measurement. Communication Monographs 66 (1):90–103. doi: 10.1080/
03637759909376464.
McCroskey, J. C., and T. J. Young. 1981. Ethos and credibility: The construct and its measurement after three decades. Central States Speech Journal 32 (1):24–34. doi: 10.1080/10510978109368075.
Mudambi, S. M., and D. Schuff. 2010. What makes a helpful online review? A study of cus-
tomer reviews on Amazon.com. MIS Quarterly 34 (1):185–200.
Newell, S. J., and R. E. Goldsmith. 2001. The development of a scale to measure perceived
corporate credibility. Journal of Business Research 52 (3):235–247. doi: 10.1016/S0148-
2963(99)00104-6.
NPD. 2015. The demographic divide: Fitness trackers and smartwatches attracting very dif-
ferent segments of the market, according to the NPD group. https://www.npd.com/wps/
portal/npd/us/news/press-releases/2015/the-demographic-divide-fitness-trackers-and-
smartwatches-attracting-very-different-segments-of-the-market-according-to-the-npd-
group/ (accessed December 4, 2018).
Ohanian, R. 1990. Construction and validation of a scale to measure celebrity endorsers’
perceived expertise, trustworthiness, and attractiveness. Journal of Advertising 19 (3):
39–52. doi: 10.1080/00913367.1990.10673191.
O’Keefe, D. J. 2002. Persuasion: Theory and research. 2nd ed. Thousand Oaks, CA: Sage.
30 A. G. MUMUNI ET AL.
Sen, S., and D. Lerman. 2007. Why are you telling me this? An examination into negative
consumer reviews on the web. Journal of Interactive Marketing 21 (4):76–94. doi: 10.
1002/dir.20090.
Shan, Y. 2016. How credible are online product reviews? The effects of self-generated and
system-generated cues on source credibility evaluation. Computers in Human Behavior
55:633–641. doi: 10.1016/j.chb.2015.10.013.
Simons, H. W., N. N. Berkowitz, and J. R. Moyer. 1970. Similarity, credibility, and atti-
tude change: A review and a theory. Psychological Bulletin 73 (1):1–16. doi: 10.1037/
h0028429.
Smith, A., and M. Anderson. 2016. Online shopping and E-commerce. Pew Research Center.
https://www.pewresearch.org/internet/2016/12/19/online-reviews (accessed November 18,
2019).
Smith, B. J., and D. W. Barclay. 1997. The effects of organizational differences and trust on
the effectiveness of selling partner relationships. Journal of Marketing 61 (1):3–21. doi:
10.2307/1252186.
SRC. 2017. How online reviews influence sales. Spiegel Research Center. https://spiegel.medill.northwestern.edu/_pdf/Spiegel_Online%20Review_eBook_Jun2017_FINAL.pdf
(accessed December 4, 2018).
Statista. 2017. Trust in online customer reviews 2014–2017. Statista - The Statistics Portal.
https://www.statista.com/statistics/315755/online-custmer-review-trust/ (accessed December
4, 2018).
Statista. 2018. U.S. online review usage frequency prior to new product purchase 2017.
Statista - The Statistics Portal. https://www.statista.com/statistics/713090/us-online-
review-usage-frequency-new-purchases/ (accessed December 4, 2018).
Sternthal, B., R. Dholakia, and C. Leavitt. 1978. The persuasive effect of source credibility:
Tests of cognitive response. Journal of Consumer Research 4 (4):252–260. doi: 10.1086/
208704.
Teng, S., K. W. Khong, A. Y.-L. Chong, and B. Lin. 2017. Examining the impacts of elec-
tronic word-of-mouth message on consumers’ attitude. Journal of Computer Information
Systems 57 (3):238–251. doi: 10.1080/08874417.2016.1184012.
Thompson, D. V., and P. Malaviya. 2013. Consumer-generated ads: Does awareness of
advertising co-creation help or hurt persuasion? Journal of Marketing 77 (3):33–47. doi:
10.1509/jm.11.0403.
Thon, F. M., and R. Jucks. 2017. Believing in expertise: How authors’ credentials and lan-
guage use influence the credibility of online health information. Health Communication
32 (7):828–836. doi: 10.1080/10410236.2016.1172296.
Thorson, K. S., and S. Rodgers. 2006. Relationships between blogs as eWOM and inter-
activity, perceived interactivity, and parasocial interaction. Journal of Interactive
Advertising 6 (2):5–44. doi: 10.1080/15252019.2006.10722117.
Tirunillai, S., and G. J. Tellis. 2012. Does chatter really matter? Dynamics of user-generated con-
tent and stock performance. Marketing Science 31 (2):198–215. doi: 10.1287/mksc.1110.0682.
Trusov, M., R. E. Bucklin, and K. Pauwels. 2009. Effects of word-of-mouth versus trad-
itional marketing: Findings from an internet social networking site. Journal of Marketing
73 (5):90–102. doi: 10.1509/jmkg.73.5.90.
Tsao, W.-C., and M.-T. Hsieh. 2015. eWOM persuasiveness: Do eWOM platforms and
product type matter? Electronic Commerce Research 15 (4):509–541. doi: 10.1007/s10660-
015-9198-z.
Turner, J. C. 1991. Social influence. Bristol, PA: Open University Press.
Wang, P. 2015. Exploring the influence of electronic word-of-mouth on tourists’ visit inten-
tion: A dual process approach. Journal of Systems and Information Technology 17 (4):
381–395. doi: 10.1108/JSIT-04-2015-0027.
Whitehead, A. N. 1968. Modes of thought. New York: Simon and Schuster.
Williams, R., T. van der Wiele, J. van Iwaarden, and S. Eldridge. 2010. The importance of
user-generated content: The case of hotels. The TQM Journal 22 (2):117–128. doi: 10.
1108/17542731011024246.
Wilson, P. 1973. Situational relevance. Information Storage and Retrieval 9 (8):457–469. doi:
10.1016/0020-0271(73)90096-X.
Wilson, E. J., and D. L. Sherrell. 1993. Source effects in communication and persuasion
research: A meta-analysis of effect size. Journal of the Academy of Marketing Science 21
(2):101–112. doi: 10.1007/BF02894421.
Xia, L., and N. N. Bechwati. 2008. Word of mouse. Journal of Interactive Advertising 9 (1):
3–13. doi: 10.1080/15252019.2008.10722143.
Xu, Q. 2014. Should I trust him? The effects of reviewer profile characteristics on
eWOM credibility. Computers in Human Behavior 33:136–144. doi: 10.1016/j.chb.2014.
01.027.
Zhang, K. Z. K., C. M. K. Cheung, and M. K. O. Lee. 2014. Examining the moderating effect
of inconsistent reviews and its gender differences on consumers’ online shopping deci-
sion. International Journal of Information Management 34 (2):89–98. doi: 10.1016/j.ijinfomgt.2013.12.001.
Zhang, R., and T. Tran. 2011. An information gain-based approach for recommending use-
ful product reviews. Knowledge and Information Systems 26 (3):419–434. doi: 10.1007/
s10115-010-0287-y.
Zhang, W., and S. A. Watts. 2008. Capitalizing on content: Information adoption in two
online communities. Journal of the Association for Information Systems 9 (2):73–94. doi:
10.17705/1jais.00149.
Zhang, Z., Y. Qiang, R. Law, and Y. Li. 2010. The impact of e-word-of-mouth on the
online popularity of restaurants: A comparison of consumer reviews and editor reviews.
International Journal of Hospitality Management 29 (4):694–700. doi: 10.1016/j.ijhm.
2010.02.002.
Zhu, J., D. K. C. Tse, and Q. Fei. 2018. Effects of online consumer reviews on firm-based
and expert-based communications. Journal of Research in Interactive Marketing 12 (1):
45–78. doi: 10.1108/JRIM-02-2017-0007.
Zhu, L., G. Yin, and W. He. 2014. Is this opinion leader’s review useful? Peripheral
cues for online review helpfulness. Journal of Electronic Commerce Research 15 (4):
267–280.
Zhu, F., and X. (Michael) Zhang. 2010. Impact of online consumer reviews on sales: The
moderating role of product and consumer characteristics. Journal of Marketing 74 (2):
133–148. doi: 10.1509/jmkg.74.2.133.
JOURNAL OF INTERNET COMMERCE 33
Appendices
Appendix 1. Four manipulations of reviewer expertise and trustworthiness
1. High expertise reviewer
Directions:
You are finalizing your choice of a wearable device that tracks your fitness. You’d like to
improve your overall fitness level so you look and feel your best. Here is a review for a
fitness tracker that fits your budget and has all of the features you are looking for.
Please read it and then rate the degree of expertise that you feel is demonstrated in
this review:
Improved my aerobic capacity, lost seven pounds … and ran my first marathon!
I am a skeptic who turned into a fan. This product is amazing.
Admittedly, I’m a bit of a fitness fanatic. I’ve always been a gym rat – lifting weights,
doing spin classes, and sometimes working with a personal trainer. I watch what I eat,
limiting sugar and saturated fats and only using natural nutritional supplements
and vitamins.
Even so, over the past five years, I have found it increasingly difficult to stay in the kind
of shape I used to be in and to shed a couple of extra pounds – to get back to my fighting
weight. The initial results in cardio capacity and weight loss were usually pretty good, but
not sustainable, and within a month or so, I was back to status quo.
After having used this fitness tracker for seven months, I can strongly recommend it as
effective in the short-term and over the longer haul, and it’s easy to use. I tested it by walk-
ing and counting exactly 100 steps as they advise, and it was within 2 steps every time. I
even tried tricking it by holding things in my hands or pulling my kids in a wagon but it
never erred beyond the tiniest deviation.
It helped me quickly realize that while I was working out fine in the gym that was about
it; I was making sedentary lifestyle choices the rest of the time. Now, because it tracks my
steps and summarizes results for me – by day, by week, by month – I can easily track my
progress and I’m motivated to keep doing more.
After two months, my doctor’s office measured a 10% decrease in my blood pressure,
my resting pulse dropped from 74 to 65, and my lung capacity increased by 15%. I was so
elated by this that I decided to train for a marathon. Yesterday, I finished my first one, in
just over four hours. True, this won’t set any world records but it’s nothing to sneeze at,
and I have never felt so good.
My only criticism is that the sleep tracker gives results that aren’t as easy to read as the
step-counting results and they vary so much that I’m not sure whether it’s me or if there is
something inconsistent about the way it measures REM sleep. Also, it’s not a huge deal,
but the band is a little clunky and sometimes catches on things.
Overall, I cannot say enough about how brilliantly this product performs for tracking
and motivating fitness. I am hooked for life.
Rating scale: 1 (Low Expertise) to 10 (High Expertise)
High-low expertise

☐ Reviewer 1

This body is ready for the beach!!!!

Omg, I cant wait to strut my stuff on the beach this weekend!!! It’s not like I was totally out of shape, but I could feel those love handles getting more if you know what I mean. LOL, when I looked in the mirror, I did not see the fab body that made everybody look in h.s.!!

So I thought why not try this thing out. Now I always take the stairs instead of the elevator and I’m always looking for more ways to add steps to my day. And it’s easy to use.

I’m looking in the mirror now … and I like what I see – and I’m pretty sure I won’t be the only one!!! This product is amazing!!! Best fitness tracker on the planet!!!

☐ Reviewer 2

Improved my aerobic capacity, lost seven pounds … and ran my first marathon!

I am a skeptic who turned into a fan. This product is amazing. Admittedly, I’m a bit of a fitness fanatic. I’ve always been a gym rat – lifting weights, doing spin classes, and sometimes working with a personal trainer. I watch what I eat, limiting sugar and saturated fats and only using natural nutritional supplements and vitamins.

Even so, over the past five years, I have found it increasingly difficult to stay in the kind of shape I used to be in and to shed a couple of extra pounds – to get back to my fighting weight. The initial results in cardio capacity and weight loss were usually pretty good, but not sustainable, and within a month or so, I was back to status quo.

After having used this fitness tracker for seven months, I can strongly recommend it as effective in the short-term and over the longer haul, and it’s easy to use. I tested it by walking and counting exactly 100 steps as they advise, and it was within 2 steps every time. I even tried tricking it by holding things in my hands or pulling my kids in a wagon but it never erred beyond the tiniest deviation.

It helped me quickly realize that while I was working out fine in the gym that was about it; I was making sedentary lifestyle choices the rest of the time. Now, because it tracks my steps and summarizes results for me – by day, by week, by month – I can easily track my progress and I’m motivated to keep doing more.

After two months, my doctor’s office measured a 10% decrease in my blood pressure, my resting pulse dropped from 74 to 65, and my lung capacity increased by 15%. I was so elated by this that I decided to train for a marathon. Yesterday, I finished my first one, in just over four hours. True, this won’t set any world records but it’s nothing to sneeze at, and I have never felt so good.

My only criticism is that the sleep tracker gives results that aren’t as easy to read as the step-counting results and they vary so much that I’m not sure whether it’s me or if there is something inconsistent about the way it measures REM sleep. Also, it’s not a huge deal, but the band is a little clunky and sometimes catches on things.

Overall, I cannot say enough about how brilliantly this product performs for tracking and motivating fitness. I am hooked for life.
High-low trustworthiness
Directions: You are finalizing your choice of a wearable device that tracks your fitness.
You’d like to improve your overall fitness level so you look and feel your best. Below are
two reviews for a fitness tracker that fits your budget and has all of the features you are
looking for. Please read each and then select which reviewer seems to be more trustworthy
by clicking the appropriate box:
☐ Reviewer 1

I am finally back in shape!!

I have rarely ever taken the time to write a review, but I found this fitness tracker to be so extraordinary that I felt compelled to share my experience.

Having tried all sorts of exercise programs and diets and even another type of fitness tracker, I was initially skeptical. Nothing ever seemed to work for me. Either I became bored with it or I lost a few pounds and then gained them back just as quickly – sometimes even adding a pound or two. Climbing stairs and running around with my kids seemed to make me huff and puff more than I remembered.

Keep in mind that I am not a top-notch athlete or fitness fanatic; I just like to keep active and stay healthy. Maybe what works for me wouldn’t be enough for someone else. That said, I could not believe how easy this product was to use and how motivating it was – with simple stats like stairs climbed or hours slept. It is the first thing that has ever motivated me to stick with something and to keep improving.

After two weeks, I had lost two pounds and I started to feel more like playing chase with the kids. After two months, I was completely hooked. Not only was I well on my way to reaching my weight goal, but I felt about five or ten years younger.

I am not a big fan of how this product looks. It’s fairly clunky and it catches on my sweaters. The color choice is either black or dark gray, which is fine for me, but others might prefer a wider range of options. I am also a little uncertain about how accurate the sleep measurement function is. The results vary widely. Maybe it’s my sleep patterns or maybe the device needs fine-tuning. That would be worth asking about if that’s a key element for you.

Whatever choice you make to improve your health and fitness, good luck to you. This option worked for me, and I am hooked for life!

☐ Reviewer 2

The right “fit” for you!

Why sacrifice fashion for fitness? This fitness tracker blends a sleek look with cutting edge electronics to help you become your best, healthiest self.

It’s a slim, stylish device that tracks all-day activities like steps, distance, calories burned, and active minutes. The latest version has a longer battery life and syncs wirelessly and automatically to computers and leading smart phones.

Find fitness every step you take with this fitness tracker. It has been hugely popular and inventory can run low, so it’s critical to order soon. And don’t forget to check out online seasonal promotions.

And if fitness is your thing, you may also be interested in new sport apparel with built-in sun protection at www.SunGuardStuff.com.

Love your body, love yourself, love this product!