JOURNAL OF INTERNET COMMERCE
https://doi.org/10.1080/15332861.2019.1700740

Online Product Review Impact: The Relative Effects of Review Credibility and Review Relevance

Alhassan G. Mumuni (a), Kelley O’Reilly (a), Amy MacMillan (b), Scott Cowley (a), and Brett Kelley (a)

(a) Department of Marketing, Western Michigan University, Kalamazoo, Michigan, USA; (b) Department of Economics and Business, Kalamazoo College, Kalamazoo, Michigan, USA

CONTACT: Kelley O’Reilly, kelley.oreilly@wmich.edu, Department of Marketing, Western Michigan University, Kalamazoo, MI 49008-5430, USA.
© 2019 Taylor & Francis Group, LLC

ABSTRACT
This study conceptualizes, operationalizes, and identifies the drivers of online product review (OPR) relevance and examines its relative effect on OPR impact compared to review credibility. In contrast to previous studies, this study is the first to conceptualize review credibility as a distinct construct from reviewer expertise and trustworthiness, and to conceptualize OPR impact as comprising a cognitive-affective dimension (perceptions) and a behavioral dimension (likelihood to act). Results show that review relevance contributes significantly to explaining OPR impact and that review relevance and review credibility (as drivers of OPR impact) provide a significantly better fit to the empirical data than review credibility alone. In fact, review relevance is almost as strong a driver of OPR impact as review credibility. However, the relationships between review credibility and its two hypothesized drivers—reviewer trustworthiness and reviewer expertise—are mixed. While a significant positive relationship is found between credibility and trustworthiness, as expected, a significant negative relationship is found between credibility and expertise.

KEYWORDS: Electronic word-of-mouth (eWOM); online product reviews (OPR); OPR impact; persona similarity; review credibility; review relevance; reviewer expertise; reviewer trustworthiness; usage similarity

Introduction
The Internet has provided consumers the means to easily acquire product
information from other consumers and to share their own product
experiences. This online consumer-to-consumer (C2C) communication is
referred to as electronic word-of-mouth (eWOM) (Chatterjee 2001). One
popular form of eWOM is online product reviews (OPRs), i.e., online eval-
uations and ratings of products by consumers. Studies show that a large
majority of consumers use OPRs as a source of information for product
purchase decisions (Chou, Picazo-Vela, and Pearson 2013; PeopleClaim
2013; BBB/Nielsen 2017; Statista 2018), and that consumers trust OPRs
relative to other product information sources, with over 85% of Internet
shoppers reporting that they trust online reviews as much as personal rec-
ommendations (Statista 2017). Consumers also seem to trust products with
corresponding OPRs more than those products without reviews, with one
recent study demonstrating that the mere presence of OPRs is associated
with a 270% greater purchase likelihood for reviewed products (Askalidis
and Malthouse 2016; SRC 2017).
A long record of research supports the influence of OPRs on consumers’
ability to evaluate products and the likelihood of making subsequent pur-
chase decisions, including the what, when, and how of product purchases
(Adjei, Noble, and Noble 2010; Chevalier and Mayzlin 2006; Cole et al.
2011; Leonard and Jones 2010; Teng et al. 2017; Zhang and Tran 2011;
Zhu and Zhang 2010). In the process, extant research also seeks to under-
stand the mechanisms and key drivers by which OPRs and their respective
elements impact consumers’ purchase decisions. This prior research pri-
marily focuses on constructs such as reviewer expertise and trustworthiness
as drivers of review credibility and subsequent OPR impact (Teng et al.
2017; Cheung and Thadani 2012; Cheung et al. 2009). In contrast, we sug-
gest that a credibility-centric focus results in an incomplete understanding
of impact, as it neglects situational factors that determine the true effect of
a review.
Some evidence suggests that latent situational factors play a sizable role
in determining OPR impact. For example, prior research suggests that the
amount of experience a consumer has shopping online is associated with
the influence of a review (Zhu and Zhang 2010). More recently, qualitative
research by O’Reilly et al. (2016) indicates that the impact of an OPR is
determined not only by its credibility, but also by its relevance to the con-
sumer in terms of whether the reviewer exhibits personality characteristics
that make the review more relevant to the reader. Specifically, they posit
that review readers consider both their own similarity to the reviewer’s
intended product use and the reviewer’s more general persona characteris-
tics as indicators of the relevance of a review.
This research examines the extent to which an OPR’s relevance to a con-
sumer (review relevance) contributes to OPR impact, as well as its effect
relative to the review’s credibility. The contribution of this research is
three-fold. First and foremost, it provides a broader account of the impact
of OPRs by incorporating the effect of review relevance as suggested by
O’Reilly et al. (2016). Second, it conceptualizes and operationalizes review
credibility, distinguishing it from its underlying dimensions of expertise
and trustworthiness, which were previously considered to be an integral
part of credibility itself. Third, it utilizes a multi-dimensional
conceptualization of OPR impact, viewing it as comprising a cognitive-
affective dimension (perceptions) and a behavioral dimension (likelihood to
act). Previous studies have assessed the impact of OPRs mainly using sin-
gle-item measures such as sales, review helpfulness, and purchase intent.
This paper starts with a brief, formal overview of OPRs as context for
the theoretical basis and conceptual model outlining the drivers of OPR
impact. Next, hypotheses are discussed along with the study’s methods.
Finally, results are presented and implications for theory and practice
are discussed.

Theory and hypothesis development


Online product reviews
Online product reviews (OPRs) are “voluntary consumer-generated evalua-
tions of businesses, products or services by internet-users who purchased,
used, or had experience with the particular product or service” (Statista
2018). These written evaluations and opinions are often supplemented with
a grade or rating (typically star-ratings) to indicate an overall assessment.
Thus, OPRs serve as recommendations by fellow consumers, providing
relatable insight that is often beyond what is traditionally available from
company-controlled information sources (Park, Lee, and Han 2007; Zhu,
Tse, and Fei 2018). Online product reviews (OPRs) are a special form of
electronic word-of-mouth (eWOM), which is broadly considered as “any
positive or negative statement made by potential, actual, or former custom-
ers about a product or company which is made available to a multitude of
people and institutions via the Internet” (Zhang et al. 2010, 39).
Online consumer product reviews are posted on a variety of platforms,
including among others, company or brand websites (e.g., iPhone reviews
on Apple.com), retailer websites (e.g., iPhone review on BestBuy.com) and
dedicated independent product review platforms such as CNET,
ConsumerReports.com, ConsumerSearch.com, DPReview.com (Chevalier
and Mayzlin 2006; Clemons and Gao 2008; Lee and Youn 2009; Parsons
and Lepkowska-White 2010; Mudambi and Schuff 2010; Sen and Lerman
2007; Tirunillai and Tellis 2012). OPRs are also posted on other communi-
cations platforms, such as blogs (Colton 2018; Cosenza, Solomon, and
Kwon 2015; Dhar and Chang 2009; Hsu, Lin, and Chiang 2013; Kozinets
et al. 2010; Thorson and Rodgers 2006), social media platforms such as
Facebook and Twitter (Dwyer, Hiltz, and Passerini 2007; Saleem and Ellahi
2017; Trusov, Bucklin, and Pauwels 2009), and discussion forums (e.g.,
Andreassen and Streukens 2009; Cheung et al. 2009). Increasingly OPRs
are also being presented in the form of video reviews on YouTube and
other online video platforms.

Figure 1. Conceptual model of OPR impact. (Reviewer expertise and reviewer trustworthiness → review credibility; reviewer–receiver persona similarity and reviewer–receiver usage similarity → review relevance; review credibility and review relevance → OPR impact.)

Conceptual model and hypotheses


A conceptual model of OPR impact was created to show its drivers, as
shown in Figure 1. This model posits that the impact of an OPR is
driven by the consumer’s perceptions of both the review’s credibility and
its relevance to their particular circumstance. In turn, review credibility is
posited to be driven by consumer perceptions of the reviewer’s expertise
and trustworthiness, while review relevance is driven by consumer percep-
tions of both persona similarity and product usage similarity between the
reviewer and the consumer.
It is important to note the model’s focus on review credibility rather
than reviewer credibility. There are a number of reasons for this. First,
focusing on the review allows one to specify the direct drivers of OPR
impact at the same level of analysis, given that relevance is specified at the
level of the review. In this way, the credibility and relevance of a review
jointly determine its impact. Second, in the online environment with its
largely “faceless” reviewers (exceptions include review sites that display
reviewer photos and information), the review becomes the “face” of the
reviewer and the basis upon which most inferences about the reviewer are
made. Thus, the review becomes a proxy for the persona of the reviewer
and as such, can be deemed credible or otherwise. Third, even in the trad-
itional communications literature, there is a distinction between source
credibility and message credibility. For instance, Hovland and Weiss (1951)
argue that a communicator’s (or reviewer’s) credibility can have an impact
on the credibility of the review itself, suggesting that the two types of cred-
ibility are different. This relationship has proven to hold true for online
interactions as well (Cheung et al. 2009; Lim et al. 2006; Xu 2014; Zhu,
Guopeng, and Wei 2014).

Online product review (OPR) impact


The impact of OPRs (hereafter OPR impact) has been assessed in many
ways, including product sales, consumer purchase intent or stated probabil-
ity of purchase (Ayeh 2015; Banerjee, Bhattacharyya, and Bose 2017; Doh
and Hwang 2009; East, Hammond, and Lomax 2008; Furner, Zinko, and
Zhu 2016; Peng et al. 2016; Saleem and Ellahi 2017; Tsao and Hsieh 2015;
Zhang, Cheung, and Lee 2014), and consumers’ perceptions of review use-
fulness or helpfulness (Ayeh 2015; Cheung, Lee, and Rabjohn 2008;
Cheung et al. 2009; Pan and Zhang 2011; Park and Lee 2008; Schlosser
2011). These studies all adopt a uni-dimensional view of OPR impact,
which ignores the multi-faceted process that consumers use during
information search. For instance, consumers’ opinions may be mal-
leable and influenced by OPRs to pursue many different actions toward a
product rather than a simple binary choice to buy or not to buy. Because
consumers often already have direct experience with, or a preconceived
notion of, a product, it is often difficult to find a direct measure of
impact using such a binary approach. In line with this thinking, Zhu
and Zhang (2010) call for a broader notion of eWOM impact, arguing that
“positive correlations between reviews and product sales might be spurious”
(139) because of the difficulty in untangling the unique impact of the
review from the quality of the product. Some researchers have addressed
this concern by including purchase intention as an additional indicator of
OPR impact. However, this approach still excludes other possible ways an
OPR might impact a consumer. Specifically, after reading an OPR, a con-
sumer has two main courses of action: ignore the review or consider its
recommendations. If the consumer chooses the latter, they could act upon
its recommendation by purchasing the product if the recommendation is
positive. Alternatively, if the recommendation is negative, they could either
remove the product from further consideration or keep it in consideration
but seek additional reviews. In both instances the review has had an
impact; therefore, it is suboptimal to think of OPR impact only in terms of
a positive impact on purchase or purchase intention.
Based on these considerations, OPR impact is conceptualized in this
study as the degree to which an OPR affects a consumer’s perceptions and
likelihood to act in response to the OPR. Thus, OPR impact incorporates
both a cognitive-affective dimension (perceptions) and a behavioral dimen-
sion (likelihood to act). The cognitive-affective dimension includes the con-
sumer’s perceptions of the review’s helpfulness and post-OPR consumption
product impressions, whilst the behavioral dimension includes the consum-


er’s likelihood to act (i.e., likelihood to purchase the product and likelihood
to recommend it to others). Review helpfulness is included in the cogni-
tive-affective dimension because it has been demonstrated in many studies
to be a crucial indicator of review impact (Cheng and Ho 2015; González-
Rodríguez, Martínez-Torres, and Toral 2016; Peng et al. 2016; Reichelt,
Sievert, and Jacob 2014; Zhang and Watts 2008). Product impressions are
also included in the perceptual component because observed changes in
impressions tie directly to the idea of changing knowledge, attitudes or
overt behaviors as a condition of effective communication (Rogers and
Bhowmik 1970). Likelihood to purchase and likelihood to recommend are
included in the behavioral component because these have been identified in
the customer satisfaction literature as enduring indicators of product satis-
faction (Doh and Hwang 2009; Park, Lee, and Han 2007; Park and Lee
2008; Zhang, Cheung, and Lee 2014).
In this study, the hypothesized drivers of OPR impact are review cred-
ibility and review relevance. Review credibility is hypothesized to be a posi-
tive driver of OPR impact based on a number of theoretical arguments and
empirical findings. Source credibility theory (Hovland, Janis, and Kelley
1953) posits that credibility perceptions affect a receiver’s intention to alter
his or her attitude toward the information presented. Source and/or mes-
sage credibility have been shown to impact communication persuasiveness
(Cheung et al. 2009; Chung and Han 2017; Johnson, Torcivia, and Poprick
1968; Lafferty, Goldsmith, and Newell 2002; O’Reilly et al. 2016; Peng
et al. 2016).
In the online context, Cheng and Ho (2015) argue that since a consumer
cannot fully know the identity of a source online, they can only judge the
credibility of the review content to determine its usefulness. Previous studies
have shown that consumers do in fact use review credibility in this manner.
Shan (2016) finds that credibility perceptions of an online review positively
impact consumers’ attitudes and behaviors toward the focal product or
service. A credible review that positively evaluates and recommends a prod-
uct leads to positive consumer attitudes toward the product and induces
more purchase intention than a positive review from a less credible source
(Shan 2016). More recently, Teng et al. (2017) find that highly credible
reviews are more persuasive and tend to generate favorable attitudes toward
products.
O’Reilly et al. (2016) suggest that source credibility provides only a par-
tial explanation of eWOM impact. In their study, they find that for any
communication to be effective (i.e., to have an impact on a receiver) the
message must also be relevant to the receiver. They defined message relevance
as “the degree to which an eWOM receiver perceives an eWOM
communication to be applicable to their particular circumstance” (80).


Rabjohn, Cheung, and Lee (2008) found that the relevance and comprehensiveness
of information have a significant impact on the perceived usefulness
of online product reviews. Based on the preceding discussions we formally
hypothesize that:
H1a: The impact of an online product review is positively driven by its perceived
credibility.

H1b: The impact of an online product review is positively driven by its perceived
relevance to the receiver.

Review credibility
Credibility is an enduring construct in the extant communications litera-
ture, tracing its roots back to Aristotle’s notion of “ethos” (Andersen and
Clevenger 1963; Bowden, Caldwell, and West 1934; Kulp 1934; Ewing 1942;
McCroskey and Young 1981). Many researchers have studied this construct
with common reference to Hovland, Janis and Kelley’s (1953) source-cred-
ibility model, which defined credibility as “the resultant value (combined
effect) of (1) the extent to which a communicator is perceived to be a
source of valid assertions (his ‘expertness’) and (2) the degree of confidence
in the communicator’s intent to communicate the assertions he considers
most valid (his ‘trustworthiness’)”. This definition suggests that credibility
and its underlying dimensions (expertise and trustworthiness) are one and
the same. For this reason, many researchers have considered expertise and
trustworthiness to be reflective indicators of credibility. Other researchers
however have offered alternative definitions and perspectives that suggest
that credibility is a distinct construct from expertise and trustworthiness.
For instance, in an earlier discussion of the constructs, Kelley and Thibaut
(1954) wrote:
In certain instances, the initiator may be viewed instrumentally as a “mediator of
fact” by virtue of his perceived expertness, credibility, and trustworthiness. In other
instances, the recipient may be motivated to agree with the initiator without regard
to his “correctness”; agreement may become an independent motive. The strength of
this motive seems to depend partly on the strength of positive attachment to and
affection for the initiator. (From Simons, Berkowitz, and Moyer 1970, 743;
emphasis added)

This viewpoint paves the way for the possibility that expertness (expertise)
and trustworthiness contribute to credibility in a formative relationship,
rather than being reflective of it. Ohanian’s (1990) and O’Keefe’s (2002)
definitions of credibility are consistent with this notion. Both define credibility
in terms of believability. In Ohanian (1990), credibility is the information
receivers’ perception of the believability of the source of information.
Table 1. Source credibility factors.


Authors Source credibility tested factors
Hovland, Janis, and Kelley (1953) Expertness, Trustworthiness
Bowers and Phillips (1967) Trustworthiness (Character) and Competence (Authoritativeness)
Giffin (1967) Expertise, Reliability, Goodwill, Dynamism and Likability
Whitehead (1968) Trustworthy, Competence, Dynamism, Objectivity
Berlo, Lemert, and Mertz (1969) Safety, Qualification, Dynamism
Applbaum and Anatol (1972) Trustworthiness, Expertness, Dynamism, Objectivity
McCroskey, Holdridge, and Toomb (1974) Competence, Extroversion, Composure, Character, Sociability
Sternthal, Dholakia, and Leavitt (1978) Trustworthiness, Expertness
McCroskey and Young (1981) Competence, Character, Goodwill or Intention
Ohanian (1990) Expertise, Trustworthiness, Attractiveness
Newell and Goldsmith (2001) Expertise, Trustworthiness
Lafferty, Goldsmith, and Newell (2002) Expertise, Trustworthiness, Attractiveness
Eisend (2006) Trustworthiness, Competence, Attraction

In O’Keefe (2002), source credibility is defined as “…judgements made by a
perceiver concerning the believability of the communicator” (181).
In this study we affirm this view that credibility is a distinct construct
from expertise and trustworthiness. Furthermore, as indicated earlier, the
communications literature distinguishes between source credibility and
message credibility, and the present study focuses specifically on message
credibility. Therefore, following Ohanian (1990) and O’Keefe (2002), we
define review credibility as the consumer’s perceptions regarding the believ-
ability of an online product review.
Reviewer expertise and trustworthiness are posited as drivers of review
credibility based on more than 60 years of research that has established
them as principal factors related to credibility. Table 1 provides a represen-
tative overview of studies (see Eisend 2006, Table 2, p. 5 for a comprehensive
summary). As shown in the table, expertness (expertise) and trustworthi-
ness stand out as factors that transcend most of these studies. Much of the
research reported in Table 1 was done prior to the internet and online
shopping phenomenon. In the online context, dimensions such as attract-
iveness, extroversion, and dynamism are less relevant because relatively few
visible cues are available.
A majority of the studies in Table 1 have adopted a factor-analytic
approach to uncovering the underlying dimensions of source credibility.
Accordingly, the dimensions reported in Table 1 are assumed to have a
reflective relationship with credibility, i.e., they are indicators of the con-
struct. However, as indicated earlier, following Ohanian and O’Keefe, this
study views reviewer expertise and trustworthiness as distinct constructs
from credibility and considers them as criterion variables that drive con-
sumers’ perceptions about a review’s credibility. In other words, we posit a
formative relationship between expertise and trustworthiness on one hand
and credibility on the other.

In the communications literature, source expertise refers to the extent to


which a communication source is perceived as making valid or accurate
assertions based on his or her relevant knowledge and skills (Hovland,
Janis, and Kelley 1953; Homer and Kahle 1990). In this study, and in align-
ment with Hovland, Janis, and Kelley (1953), we adopt the view that
expertise speaks to the source’s knowledge regarding the subject matter of
the message (O’Reilly et al. 2016). Previous studies show that communica-
tion sources that are high in expertise are more persuasive than low-
expertise sources in inducing positive attitudes and behavior change (Wang
2015). Similarly, strong arguments have a stronger impact on recipient atti-
tude than weak arguments when arguments are delivered by experts
(Heesacker, Petty, and Cacioppo 1983). In the context of online product
reviews, reviewer expertise has been shown to positively impact both OPR
helpfulness and persuasiveness (Cheng and Ho 2015; González-Rodríguez,
Martínez-Torres, and Toral 2016; Shan 2016). These previous studies have
found a direct relationship between expertise and OPR impact because they
have conceptualized credibility in terms of expertise and trustworthiness.
When credibility is viewed as a distinct construct from expertise, it is rea-
sonable to expect that perceived expertise of a reviewer will have a direct
positive impact on perceived credibility of the review.
In the communications literature, source trustworthiness addresses the
degree of confidence a receiver has in a communication source’s intent to com-
municate assertions, without bias, that they consider most valid (Hovland,
Janis, and Kelley 1953). In the context of OPRs, a consumer’s perception of the
trustworthiness of a product review is determined by his or her inferences
regarding the reviewer’s motivation for expressing a positive or negative opin-
ion about the product (McCracken 1989). It speaks to the reviewer’s motive for
posting the message. The consumer may attribute a positive opinion to either
actual product performance or factors unrelated to the product attributes.
According to attribution theory, the consumer may discount a review if he or
she attributes a positive opinion and related product endorsement to a
reviewer’s intent to persuade rather than to product performance (Kelley
1967). In this study, we support the view of Hovland, Janis, and Kelley (1953)
and define trustworthiness as the degree of confidence a receiver has in the
reviewer’s intent to communicate valid assertions without bias. Along with
expertise, reviewer trustworthiness is the second key factor affecting credibility.
In the online context, research suggests that for an OPR to be deemed
credible by the receiver, the reviewer must be perceived to both have the
expertise to make an informed judgment about the product or service
(Cheng and Ho 2015; González-Rodríguez, Martínez-Torres, and Toral
2016; Shan 2016) and be trustworthy (Banerjee, Bhattacharyya, and Bose
2017; Chung and Han 2017; Filieri 2016; Jucks and Thon 2017). Thus, if a
receiver deems the source of an OPR to have an appropriate level of


expertise and knowledge of the subject at hand and appears to be without
bias or motive regarding what action the receiver takes as a result of the
message, then the review is likely to be considered credible. Accordingly,
we hypothesize that:
H2a: Perceived credibility of an online review is positively determined by perceived
expertise of the reviewer.

H2b: Perceived credibility of an online review is positively determined by perceived


trustworthiness of the reviewer.

Review relevance
Relevance is understood to be a multidimensional, dynamic, cognitive con-
cept that is largely dependent on users’ perceptions of information and
their own information needs at a particular point in time (Schamber,
Eisenberg, and Nilan 1990; for historical review of relevance, see Saracevic
1975). Because this concept is both dynamic and temporal, it has also been
described as situational relevance (Cooper 1971; Wilson 1973).
O’Reilly et al. (2016) identify two drivers of message relevance: revie-
wer–receiver (R–R) persona similarity and R–R product usage similarity.
Together, these drivers reflect the perceived degree of similarity between
the reviewer and the receiver and determine the extent to which the
receiver will consider the reviewer’s online review as relevant to their par-
ticular circumstance. O’Reilly et al. (2016) define persona similarity as the
receiver’s assessment of how alike the reviewer is to them in terms of char-
acter, background, and experiences. This notion of similarity is often
referred to as homophily in the literature and has been broadly defined as
“ … the degree to which pairs of individuals who interact are similar with
respect to certain attributes, such as beliefs, values, education, social status,
etc.” (Rogers and Bhowmik 1970, 526). A source who is attractive, likable,
or similar will have a stronger effect on a receiver than a less attractive or
dissimilar source (Turner 1991). In essence, because people tend to like
similar others, they perceive the ideas and attitudes held by those similar
others to be more appropriate and relevant to themselves (Racherla,
Mandviwalla, and Connolly 2012; Thompson and Malaviya 2013; Xia and
Bechwati 2008). This phenomenon has been documented in numerous
studies (see Wilson and Sherrell (1993) for a meta-analysis), including
experiments involving salesperson-customer interactions (Brock 1965; Jiang
et al. 2010) and secondary data analyses exploring what governs the com-
position of teams (Ruef, Aldrich, and Carter 2003). Therefore, a receiver’s
perception of similarity to an online reviewer creates relevance for the
review to the receiver. In sum, the present study posits that perceived simi-
larity between a consumer and the reviewer may serve as a heuristic cue to
the consumer that the product or service might fit their needs, making the
review more relevant to the consumer’s particular circumstance.
Usage similarity refers to a receiver’s assessment of how alike the source’s
use of the product is to their own intended use. In other words, from the
receiver’s point of view the question is whether the consumer posting
information online is using the product in the same manner that they
intend to use it (O’Reilly et al. 2016). Following O’Reilly et al. (2016), this
is posited as an important additional driver of message relevance because
similarity to a reviewer (persona similarity) will have limited effect if the
reviewer’s message is unrelated to the receiver’s circumstances and needs. If
the review discusses the product or service on dimensions that match
those of the receiver’s expected use, then this also creates relevance (Costello
2017; Dholakia and Sternthal 1977; Duffy 2015; Williams et al. 2010; Xia
and Bechwati 2008). Together, persona similarity and usage similarity
reflect the degree of similarity between the reviewer and the receiver and
determine the degree of relevance a receiver will assign to a message.
Accordingly, we hypothesize that:
H3a: The perceived relevance of an online review to a receiver is positively driven by
their perceived persona similarity with the reviewer.

H3b: The perceived relevance of an online review to a receiver is positively driven by


their perceived product usage similarity with the reviewer.

Research methods
Research design
Data to test the hypotheses were collected through a structured, self-
administered survey completed by respondents recruited through an
online panel. Respondents read a hypothetical (researcher-contrived) review
for a fitness tracker and responded to a battery of measurement items
reflective of the study constructs. A fitness tracker was used because it is a
gender-neutral product and one for which respondents are likely to read
online reviews prior to purchase (NPD 2015).
Because of the possibility that expertise and trustworthiness of a reviewer
(two constructs of interest to the study) can be inferred from or at least
implied by the content of a review, four versions of the review were cre-
ated. These versions reflected two levels of reviewer expertise (expert versus
non-expert) and two levels of reviewer trustworthiness (trustworthy versus
untrustworthy). Thus, four “manipulations” of the reviewer’s expertise and
trustworthiness were implemented, resulting in four data collection
“conditions” (see Appendix 1). Notably, the goal was not to test the effect
of the different manipulations on model parameters. Rather it was to
ensure that a broad enough spectrum of reviews was assessed by respond-
ents so that results of the model test would not be attenuated by the spe-
cific nature of any one particular review. For each version, respondents
first read the relevant review and then responded to questionnaire items
that were identical across review versions.

Questionnaire and pretest


The questionnaire for the main study consisted of items to measure the
study constructs as well as warm-up and filler questions about respondents’
online review use and fitness tracker ownership and use. Upon clicking the
questionnaire link, respondents were taken to the landing page where they
read a cover letter and answered a few warm-up questions about their use
of online product reviews and fitness tracker ownership. From there, they
were presented with one of four fitness tracker reviews: low expertise, high
expertise, low trustworthiness, and high trustworthiness. Thereafter, they
provided responses to items designed to measure their perceptions of the
reviewer’s expertise, trustworthiness, persona similarity, and usage similar-
ity, as well as their perception of the review itself, the relevance of its mes-
sage, fitness tracker impressions, and their likelihood of purchasing or
recommending the fitness tracker.
Prior to the main data collection, two pretests were conducted. The first
pretest focused on testing question wording and phrasing. Eight student
respondents completed the questionnaire in the presence of one of the
authors. Respondents were encouraged to verbalize their impressions about
the questionnaire items as they answered them. At the end of the question-
naire they also answered five questions specifically about the questionnaire
itself. Input from this study was used to refine the wording of the question-
naire items.
The second pretest was a “manipulation check” focused on the four
“manipulations” used in the main study. Its goal was to test whether
respondents would perceive the four different reviews as coming from
reviewers with different levels of expertise and trustworthiness. This test
was conducted using students in classes taught by two of the authors and
consisted of two separate and independent tests—one a rating task and the
other a ranking task. In the rating test, conducted in one of the classes, stu-
dents were randomly assigned to read one of the four reviews and indicate
their opinions about the level of expertise or trustworthiness of the
reviewer on a 10-point semantic differential rating scale (1 = low expertise/
trustworthiness, 10 = high expertise/trustworthiness). Each student read and
evaluated only one review. The high expertise and high trustworthiness
reviews each received the higher mean score in this rating task (7.5 and
9.2, respectively).
The ranking test, conducted in a separate class, used a paired-compari-
son ranking task in which the reviews were presented to respondents in
pairs (a low-high expertise pair and a low-high trustworthiness pair). Each
respondent read only one pair and simply indicated which of the two
reviews in the pair they thought had the higher expertise or trustworthi-
ness. Seven of the nine respondents who received the low–high expertise
pair (77.8%) identified the high expertise review as the one from the
reviewer with the higher expertise; the remaining two chose the low
expertise review. Similarly, of the eight respondents who received the
low–high trustworthiness pair, six (87.5%) identified the high trustworthiness
review as the one from the reviewer with the higher trustworthiness. Thus,
in both tests, responses confirm the adequacy of the manipulations.
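For readers who want to quantify such pretest counts, a minimal sketch of an exact binomial check against chance follows. The counts come from the pretest above, but the test itself is our illustration and was not part of the original analysis:

```python
from scipy.stats import binomtest

# Pretest counts reported above: 7 of 9 picked the high-expertise review,
# 6 of 8 picked the high-trustworthiness review.
for label, correct, n in [("expertise", 7, 9), ("trustworthiness", 6, 8)]:
    # One-sided exact binomial test against random guessing (p = 0.5)
    result = binomtest(correct, n, p=0.5, alternative="greater")
    print(f"{label}: {correct}/{n} correct, one-sided p = {result.pvalue:.3f}")
```

With samples this small, the exact test is appropriately conservative, which is consistent with the authors treating these checks as qualitative confirmation rather than formal hypothesis tests.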

Construct operationalizations
Conceptualizations and operationalizations of the study constructs are in
Appendix 2. Reviewer expertise was conceptualized as the extent to which
the reviewer is perceived as a source of valid assertions (Hovland, Janis,
and Kelley 1953) based on their knowledge regarding the subject matter of
the review (O’Reilly et al. 2016, 79). It was measured using a six-item,
seven-point semantic differential scale adapted from Eisend (2006),
McCroskey and Teven (1999) and Ohanian (1990). Reviewer trustworthi-
ness was conceptualized as the degree of confidence the respondent has in
the reviewer’s intent to communicate valid assertions (Hovland, Janis, and
Kelley 1953) without bias or alternative motives for posting the message
(O’Reilly et al. 2016, 79). It was similarly measured using a seven-point
semantic differential scale applied to seven items adapted from Eisend
(2006), McCroskey and Teven (1999) and Ohanian (1990). Persona similar-
ity addressed the respondent’s assessment of how alike the reviewer is to
them in terms of character, background, and experiences (O’Reilly et al.
2016, 80), and was measured using a six-item scale developed by the
authors and supported by measurement scales from Hernandez-Ortega
(2018). Usage similarity was defined as the respondent’s assessment of how
alike the reviewer’s use of the product is to their own intended use
(O’Reilly et al. 2016, 80) and was measured on a five-item scale, similarly
developed by the authors and supported by measurement scales from
Hernandez-Ortega (2018). Responses to scale items for both persona simi-
larity and usage similarity were solicited on five-point Likert scales
(1 = Strongly disagree; 5 = Strongly agree).

Following Ohanian (1990), review credibility was conceptualized as the


respondent’s perceptions regarding believability of the reviewer’s assertions
as expressed in the review. It was measured on a 4-item scale developed by
the authors and supported by measurement scales from Hernandez-Ortega
(2018). Review relevance addresses the degree to which the respondent per-
ceives the review to be applicable to their particular circumstance and was
measured using a two-item scale developed by the authors. Responses to
scale items for both constructs were also solicited on five-point Likert-type
agree/disagree scales (1 = Strongly disagree; 5 = Strongly agree).
Finally, OPR impact was measured as a second-order construct com-
posed of three first-level constructs: review usefulness, product impressions,
and likelihood of buying or recommending the product. Review usefulness
addressed the extent to which the respondent finds the review to be useful,
product impressions captured the respondent’s impressions about the fitness
tracker (the subject of the review), and likelihood of purchasing/recommend-
ing captured the likelihood that the respondent will purchase the fitness
tracker or recommend it to someone else. Review usefulness and product
impressions were both measured using seven-point semantic differential
scales and five scale items each. The scale items for review helpfulness were
adopted from Park and Lee (2008), while those for product impressions
were adopted from Doh and Hwang (2009) and Kim and Gupta (2012).
Likelihood to purchase/recommend was measured using a two-item Likert
scale with components drawn from Zhang and Watts (2008) and Rabjohn,
Cheung, and Lee (2008). Responses were solicited on a five-point agree/dis-
agree scale. Seven-point scales were used for some constructs and five-point
scales for others simply to provide variation in response options. This com-
bination of scale types does not present any issues because the analyses are
focused on correlations and relationships among constructs.

Data collection and sample characteristics


Respondents for the main study were recruited through Amazon Mechanical
Turk (MTurk), a commonly used source for research participants (Duan and
Dholakia 2018; Chi 2018). This respondent pool is appropriate for our
domain of interest because the typical MTurk demographic makeup skews
younger and more educated (Kimball 2019), which reflects the typical distri-
bution of online review users (Smith and Anderson 2016). We also took steps
toward minimizing common issues with MTurk sampling (Chmielewski and
Kucker 2019; Buhrmester, Talaifar, and Gosling 2018). Specifically, we paid
participants a fair wage, crafted survey length and comprehensibility to
minimize issues of attrition and attentiveness (Hauser, Paolacci, and
Chandler 2018), and dropped participants exhibiting low quality respondent


behaviors, such as excessive nonresponse.
We assigned respondents sequentially into each of the four product
review conditions over a one-week period. Each questionnaire was posted
separately, and participants were allowed to respond to only one of the
four product review conditions. A total of 1045 responses were obtained for
the four versions. Eight respondents were removed due to excessive item
nonresponse, leaving a final sample of 1037 distributed among the versions
as follows: high expertise = 214; low expertise = 311; high trustworthiness
= 197; low trustworthiness = 315. The sample skewed slightly toward male
(57.8%), young (51.3% aged 25–34 years) and highly educated respondents
(47.3% with bachelor’s degrees). These characteristics were similar across
the four product review conditions.

Measurement validation
Following Anderson and Gerbing (1988), prior to the structural analyses,
the construct measures were validated through confirmatory factor analysis
using LISREL 8.80 for Windows (Jöreskog and Dag 2004). The measure-
ment model was fit to a covariance matrix and maximum likelihood esti-
mation was used to derive the model parameters. To improve model fit, a
number of indicator error covariances (all of which were within-construct
measures) were allowed to correlate based on modification indices. Table 2
shows results of the confirmatory factor analysis.
The overall model fit statistics show acceptable fit of the measurement
model to the data on commonly used model fit criteria [Root Mean Square
Error of Approximation (RMSEA) = 0.058; Comparative Fit Index (CFI) =
0.99; Goodness-of-Fit Index (GFI) = 0.91; Adjusted Goodness-of-Fit Index
(AGFI) = 0.89; χ²(705 df) = 2837.3 (p < .001); χ²/df = 4.03]. RMSEA is
just slightly higher than the recommended value of 0.05 for excellent fit,
GFI is just above 0.90 as preferred, while AGFI falls slightly below 0.90.
Composite reliabilities for all constructs are around 0.9, substantially above
the recommended 0.7 minimum (Hulland 1999), providing evidence in
support of acceptable measure reliability.
On measurement validity, the standardized factor loadings are all above
the recommended level of 0.5 (Anderson and Gerbing 1988) and average
variance extracted is above 0.5 for all constructs, indicating acceptable con-
vergent validity of the measures (Smith and Barclay 1997). Discriminant
validity was assessed using the Fornell–Larcker procedures (Fornell and
Larcker 1981), which require that for any pair of constructs, the average
variance extracted for each construct be higher than the square of the cor-
relation between them, or alternatively, that the square root of the average
variance extracted for each construct be higher than the correlation between
the constructs.

Table 2. Results of confirmatory factor analysis.


Loading (a)   t-Value (b)   Item reliability   CR (c)   AVE (c)
Reviewer expertise 0.90 0.61
1) The reviewer does not know what they are talking about—knows what they are talking about 0.72 —d 0.52
2) The reviewer is ill-informed—is well-informed 0.79 27.95 0.62
3) The reviewer is a novice–is an authority 0.73 22.17 0.54
4)The reviewer is inexperienced—is experienced 0.80 24.15 0.64
5) The reviewer is unknowledgeable—is knowledgeable 0.83 25.00 0.69
6) The reviewer is unqualified—is qualified 0.82 23.31 0.67
Reviewer trustworthiness 0.86 0.56
1) The reviewer is not fair/balanced—is fair/balanced 0.78 —d 0.61


2) The reviewer is not concerned about what’s best for me—is concerned about what’s best for me 0.71 23.97 0.51
3) The reviewer is dishonest—is honest 0.73 24.26 0.53
4) The reviewer is unreliable—is reliable 0.82 27.70 0.67
5) The reviewer has a hidden agenda—does not have a hidden agenda 0.69 22.78 0.48

Reviewer–receiver persona similarity 0.90 0.61


1) The reviewer is someone I’d be friends with 0.78 —d 0.60
2) The reviewer writes in the same style as me 0.77 25.76 0.59
3) The reviewer is someone I can relate to 0.84 28.77 0.70
4) The reviewer has a background similar to mine 0.75 24.70 0.56
5) The reviewer values the same things that I do 0.76 25.33 0.57
6) The reviewer seems similar to me 0.77 25.61 0.59

Reviewer–receiver usage similarity 0.85 0.54


1) The reviewer has expectations for the [product] that are similar to mine 0.80 —d 0.65
2) The reviewer is using the [product] in the same way that I intend to use it 0.72 23.64 0.52
3) The reviewer is using the [product] for the same purpose as I will 0.70 22.83 0.49
4) The reviewer will use the [product] for as long as I expect to use it 0.74 23.13 0.54
5) The reviewer will take care of this [product] like I will 0.71 23.23 0.50

Review credibility 0.88 0.65


1) I believe this review 0.84 —d 0.71
2) I believe this review is credible 0.84 33.05 0.71
3) I believe that I can trust this review 0.85 33.72 0.73
4) I believe this review is not biased 0.68 24.49 0.47
Review relevance 0.84 0.72
1) The review is relevant to me 0.82 —d 0.68
2) The review is appropriate for my needs 0.88 32.52 0.78

Review usefulness 0.90 0.64


1) Review rating—Useless/Very useful 0.77 —d 0.59
2) Review rating—Unhelpful/Very helpful 0.84 35.83 0.71
3) Review rating—Unexciting/Very exciting 0.74 28.93 0.55
4) Review rating—Uninteresting/Very interesting 0.82 34.23 0.68
5) Review rating—Uninformative/Very informative 0.83 34.81 0.69

Product impressions 0.93 0.71


1) Impression/feelings toward product—Unfavorable/Favorable 0.83 —d 0.70
2) Impression/feelings toward product—Unimpressed/Impressed 0.84 33.52 0.71
3) Impression/feelings toward product—Unexcited/Excited 0.83 33.00 0.70
4) Impression/feelings toward product—Uninterested/Interested 0.87 35.24 0.76
5) Impression/feelings toward product—Unmotivated to consider it/Motivated to consider it 0.85 34.31 0.73

Likelihood of purchasing/recommending product 0.89 0.80


1) Likelihood of choosing product 0.90 —d 0.81
2) Likelihood of recommending product 0.89 39.67 0.78

OPR impact 0.94 0.84


1) Review usefulness 0.94 —d 0.88
2) Product impressions 0.90 31.05 0.82
3) Likelihood of purchasing/recommending product 0.91 34.74 0.83
Notes:
(a) Factor loadings are from the completely standardized solution.
(b) All t-values are significant at p < .01.
(c) Composite reliability (CR) = (Σλyi)² / [(Σλyi)² + Σvar(εi)], where var(εi) = 1 − λyi²; average variance extracted (AVE) = Σλyi² / [Σλyi² + Σvar(εi)], where var(εi) = 1 − λyi² (Fornell and Larcker 1981). CRs and AVEs are computed using parameters of the completely standardized solution.
(d) t-values are not computed for these indicators because their loadings were fixed to 1.
Model fit statistics: χ²(704 df) = 2,837.3; χ²/df = 4.03; RMSEA = 0.058; GFI = 0.87; AGFI = 0.85; NFI = 0.99; NNFI = 0.99; PNFI = 0.89; CFI = 0.99; IFI = 0.99; RFI = 0.99; RMR = 0.37; SRMR = 0.21.
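To make the CR and AVE formulas in the notes concrete, here is a minimal Python sketch (our illustration, not the authors' code) that reproduces the table's values for the two-item review relevance construct:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum(1 - loading^2))."""
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = sum(loading^2) / (sum(loading^2) + sum(1 - loading^2))."""
    explained = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return explained / (explained + error)

# Completely standardized loadings for review relevance (Table 2)
relevance_loadings = [0.82, 0.88]
print(round(composite_reliability(relevance_loadings), 2))       # 0.84
print(round(average_variance_extracted(relevance_loadings), 2))  # 0.72
```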

Table 3. Descriptive statistics, inter-construct correlations and discriminant validity tests (a).

Construct (Mean, SD, CV (b)) | correlations and discriminant validity entries (c)
1. Expertise 4.98 1.30 0.26 | 0.81
2. Trustworthiness 4.72 1.34 0.28 | 0.62 0.75
3. Persona Similarity 3.26 1.02 0.31 | 0.60 0.68 0.81
4. Usage Similarity 3.55 0.90 0.25 | 0.55 0.63 0.75 0.77
5. Credibility 3.55 1.02 0.29 | 0.58 0.82 0.73 0.68 0.84
6. Relevance 3.57 1.12 0.31 | 0.61 0.67 0.76 0.73 0.76 0.88
7. Usefulness 4.97 1.55 0.31 | 0.73 0.77 0.74 0.68 0.79 0.77 0.80
8. Impressions 5.07 1.43 0.28 | 0.68 0.71 0.68 0.68 0.74 0.71 0.84 0.84
9. Likelihood 4.94 1.66 0.34 | 0.65 0.71 0.73 0.67 0.74 0.73 0.82 0.83 0.89
10. OPR Impact 4.99 1.46 0.29 | 0.73 0.78 0.76 0.72 0.80 0.78 0.94 0.94 0.94 0.91
Notes:
(a) 5-point Likert scale (1 = Low; 5 = High); 7-point Likert scale (1 = Low; 7 = High).
(b) CV = coefficient of variation = SD/Mean.
(c) Diagonal entries (shown last in each row) are square roots of average variance extracted for each construct; off-diagonal entries are inter-construct correlations.

Results of these tests are in Table 3, which also
includes descriptive statistics for the constructs. The last ten columns con-
tain the inter-construct correlations in off-diagonal entries and square roots
of average variance extracted (AVE) for each construct in the diag-
onal entries.
Discriminant validity is confirmed for all but seven of the 45 con-
struct-pairs. Two of these are the trustworthiness-credibility and trust-
worthiness-OPR impact pairs where the inter-construct correlations (0.82
and 0.78 respectively) are higher than the square root of the AVE for
trustworthiness (0.75). The remaining five pairs all involve the OPR
impact construct and its first-order indicator constructs, with all falling
into two categories. The first category is between OPR impact and its
first-order indicators, i.e., usefulness, impressions, and likelihood. The
inter-construct correlations between impact and each of these constructs
(0.94 for all pairs) are greater than the respective AVE square roots.
Thus, discriminant validity cannot be established for these. However, this
presents no cause for concern because OPR impact is simply a composite
of these three constructs, so it is to be expected that they will be highly
correlated. The second category is between usefulness and each of the
remaining two first-order indicators of OPR impact, i.e., impressions and
likelihood. For these pairs, the inter-construct correlations are also all
greater than the respective AVE square roots, indicating that discriminant
validity cannot be established between these. This is addressed below in
formulating the structural model.
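To make the Fornell–Larcker comparison described above concrete, here is a minimal sketch (our illustration, not the authors' code) using AVE values from Table 2 and correlations from Table 3:

```python
import math

def fornell_larcker_ok(ave_a: float, ave_b: float, corr_ab: float) -> bool:
    """Discriminant validity holds for a construct pair when each
    construct's sqrt(AVE) exceeds their inter-construct correlation."""
    return min(math.sqrt(ave_a), math.sqrt(ave_b)) > abs(corr_ab)

# Expertise (AVE = 0.61) vs. trustworthiness (AVE = 0.56), r = 0.62:
print(fornell_larcker_ok(0.61, 0.56, 0.62))   # True  -> validity supported

# Trustworthiness (AVE = 0.56) vs. credibility (AVE = 0.65), r = 0.82:
print(fornell_larcker_ok(0.56, 0.65, 0.82))   # False -> one of the failing pairs
```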

Table 4. Path coefficients for structural model.

Hypothesis/path | β (a) | t (b) | Hypothesis supported
H1a Credibility → OPR Impact | 0.54 | 18.81 | Yes
H1b Relevance → OPR Impact | 0.45 | 15.72 | Yes
H2a Expertise → Credibility | −0.07 | −2.18 | No
H2b Trustworthiness → Credibility | 1.00 | 18.61 | Yes
H3a Persona Similarity → Relevance | 0.55 | 10.29 | Yes
H3b Usage Similarity → Relevance | 0.39 | 7.40 | Yes
Notes:
(a) Betas are from the completely standardized solution; t-values are of the raw LISREL estimates.
(b) All t-values are significant at p < .05 or better.
Model fit statistics: χ²(464 df) = 1,983.5; χ²/df = 4.3; RMSEA = 0.057; GFI = 0.89; AGFI = 0.87; NFI = 0.99; NNFI = 0.99; CFI = 0.99; IFI = 0.99; RFI = 0.97; RMR = 0.09; SRMR = 0.044.
Squared multiple correlation (R²) for endogenous constructs (reduced form): OPR impact = 0.79; Credibility = 0.89; Relevance = 0.83.

Results
Descriptive statistics
Construct means, standard deviations, and coefficients of variation are
shown in the first three columns of Table 3. Means for all constructs are
around their respective scale mid-points, standard deviations are similar
across constructs (coefficient of variation between 0.25 and 0.31 for con-
structs), and there is sufficient variance in each construct to justify the ana-
lysis conducted for the hypothesis testing.

Hypotheses testing
The hypotheses were tested through structural equation modeling (SEM)
by adding structural paths to the measurement model in Figure 1. Because
of the lack of discriminant validity among the first-order construct indicators
of OPR impact, composite scores were computed for the first-order factors
and used as reflective indicators of OPR impact in formulating and
estimating the structural model. Parameters and statistics for the result-
ing structural model are shown in Table 4. The results show that this
model fits the data very well, with virtually no change in overall model fit
statistics compared to the measurement model (RMSEA = 0.057; CFI =
0.99; GFI = 0.89; AGFI = 0.87; χ²(464 df) = 1983.5 (p < .001);
χ²/df = 4.3).
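For readers replicating this setup outside LISREL, the model can be expressed in lavaan-style syntax; a minimal sketch using the Python semopy package is below. The item and file names are placeholders (the actual measures are in Appendix 2), and the usefulness, impressions, and likelihood columns are assumed to be the precomputed composite scores described above:

```python
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement part (reflective indicators; item names are placeholders)
Expertise   =~ exp1 + exp2 + exp3 + exp4 + exp5 + exp6
Trust       =~ tru1 + tru2 + tru3 + tru4 + tru5
PersonaSim  =~ per1 + per2 + per3 + per4 + per5 + per6
UsageSim    =~ use1 + use2 + use3 + use4 + use5
Credibility =~ cred1 + cred2 + cred3 + cred4
Relevance   =~ rel1 + rel2
Impact      =~ usefulness + impressions + likelihood

# Structural part: H2a/H2b, H3a/H3b, H1a/H1b
Credibility ~ Expertise + Trust
Relevance   ~ PersonaSim + UsageSim
Impact      ~ Credibility + Relevance
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical data file
model = semopy.Model(MODEL_DESC)
model.fit(df)            # maximum likelihood estimation by default
print(model.inspect())   # path estimates, standard errors, p-values
```

Under these assumptions, the three regression lines correspond to hypotheses H2, H3, and H1, respectively.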
Hypotheses H1a and H1b predicted positive relationships between OPR
impact and each of its two drivers—review credibility (H1a) and review
relevance (H1b). Both hypotheses find support in the empirical data, given
that the path coefficients for credibility (β = 0.54; t = 18.81; p < .01) and
message relevance (β = 0.45; t = 15.72; p < .01) are both positive and
statistically significant. Similarly, hypotheses H2a and H2b predicted
positive relationships between review credibility and each of its two

Table 5. Path coefficients for structural model with only credibility.

Path | β (a) | t (b)
1. Expertise → Credibility | 0.04 | 1.33
2. Trustworthiness → Credibility | 0.89 | 21.66
3. Credibility → OPR Impact | 0.91 | 18.81
Notes:
(a) Betas are from the completely standardized solution; t-values are of the raw LISREL estimates.
(b) p < .01.
(c) Model fit statistics: χ²(316 df) = 7,298.4; χ²/df = 23.1; RMSEA = 0.15; GFI = 0.66; AGFI = 0.59; NFI = 0.95; NNFI = 0.95; CFI = 0.95; IFI = 0.95; RFI = 0.95; RMR = 0.58; SRMR = 0.37.
(d) Squared multiple correlation (R²) for endogenous constructs (reduced form): OPR impact = 0.70; Credibility = 0.85.

drivers—reviewer expertise (H2a) and reviewer trustworthiness (H2b). The


results show that only trustworthiness has a significant positive relationship
with credibility (β = 1.00; t = 18.61; p < .01), while expertise has a significant
negative relationship with credibility (β = −0.07; t = −2.18; p < .05),
providing support for H2b but not H2a. Thus, in relative terms, trust-
worthiness has a far greater impact on credibility assessments than expert-
ise. But also, higher levels of reviewer expertise appear to be associated
with lower levels of perceived review credibility. Finally, hypotheses H3a
and H3b predicted positive relationships between review relevance and
each of its two drivers—reviewer–receiver persona similarity (H3a) and
reviewer–receiver usage similarity (H3b). The results provide support for
both hypotheses, as the path coefficients for persona similarity (β = 0.55;
t = 10.29; p < .01) and usage similarity (β = 0.39; t = 7.40; p < .01) are both positive
and statistically significant.
A key argument in the present research is that the model tested in this
study (which includes both review credibility and review relevance) pro-
vides a better explanation of OPR impact than a model that includes only
review credibility. To formally test this assertion, we fit a structural model
with only OPR impact, review credibility, reviewer expertise, and reviewer
trustworthiness (i.e., the top half of Figure 1) to the data using the same
estimation procedures as before. Results of this test are in Table 5.
The results show that the fit of this model to the data (χ²(316 df) = 7298.4;
RMSEA = 0.15; SRMR = 0.37) is much worse than for the model with
both review credibility and review relevance. A χ² difference test shows
that the difference in fit between the two models is highly statistically sig-
nificant (Δχ² = 5314.9; Δdf = 148; p < .01). This model also explains a rela-
tively smaller amount of the variance in OPR impact (R² = 0.70) than the
model with both credibility and relevance (R² = 0.79). The coefficient for
the credibility-OPR impact path is now almost double its value in Table 4,
exaggerating its true effect on OPR impact.
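The arithmetic of the reported χ² difference test is easy to reproduce; a minimal sketch using scipy follows (the fit statistics come from Tables 4 and 5; the code is our illustration, not the authors'):

```python
from scipy.stats import chi2

# Fit statistics reported above
chi2_full, df_full = 1983.5, 464    # model with credibility and relevance
chi2_cred, df_cred = 7298.4, 316    # credibility-only model

delta_chi2 = chi2_cred - chi2_full  # 5314.9
delta_df = df_full - df_cred        # 148
p_value = chi2.sf(delta_chi2, delta_df)  # upper-tail probability, effectively zero here
print(f"delta chi2 = {delta_chi2:.1f}, delta df = {delta_df}, p = {p_value:.3g}")
```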

Discussion and implications


The goal of this study was to examine whether and to what extent review
relevance contributes to explaining OPR impact above and beyond review
credibility, the presumed primary driver of OPR impact in the extant litera-
ture. Findings from this empirical study indicate that review relevance does
indeed contribute to OPR impact. A structural equation model that
includes both review credibility and review relevance fit the empirical data
better than one that only includes review credibility. Additionally, the path
coefficients for credibility and relevance are almost equal, indicating that
review relevance is nearly as strong a driver of OPR impact as review
credibility. The findings also confirm expectations of strong positive
relationships between review relevance and its two hypothesized drivers—
reviewer–receiver persona similarity and reviewer–receiver usage similarity.
For the drivers of review credibility, a hypothesized positive relationship
with reviewer trustworthiness was also confirmed. However, a similarly
hypothesized positive relationship with reviewer expertise was not con-
firmed, as expertise had a surprisingly negative relationship with review
credibility.
Taken together, the findings provide three distinct implications for understanding the drivers of OPR impact: (1) relevance matters more than ever in today's information-overloaded environment, (2) technological developments give marketers new ways to meet consumer needs through relevance-matching, and (3) expertise exerts a different influence than some marketers assume, with too much expertise appearing to be inversely related to credibility. Each of these ideas is discussed in turn.

Relevance is important now more than ever


Online information is far more prevalent today than when eWOM research first began to capture the attention of academics, a time when the primary focus was on information credibility. Credibility, historically positioned as the believability of information, has long been regarded as the primary, if not sole, driver of a communication's persuasiveness. However, as shown in this work, credibility is only one factor driving OPR impact. Today, OPRs are a mainstream source of information for consumers and have proven to be a strong influence on consumer decision-making. With a cache of OPRs far exceeding what a typical consumer might require to make a product decision, consumers are becoming more reliant on heuristic methods to "filter" OPRs down to those most relevant to their particular needs. In this sea of information, much recent research has treated "relevance" as important in influencing the perceived usefulness of OPRs and review websites because
consumers are more likely to perceive OPRs as credible when they come
from sources they perceive as similar to themselves (Costello 2017;
Hernandez-Ortega 2018; Hwang, Park, and Woo 2018; Karimi and Wang
2017; Ma and Atki; Shan 2016). While this study is the first to fully operationalize, in the context of OPRs, the notion of relevance and its underlying drivers (persona similarity and usage similarity), recent works have also incorporated complementary constructs, such as psychological distance, social relevance, and social influence (Costello 2017; Hernandez-Ortega 2018; Shan 2016). This study's findings indicate that consumers use their perceived similarity to the source of an OPR and their perceived shared expectations for the focal product as heuristics to cull OPRs they deem irrelevant to their particular circumstances. Marketers are therefore strongly encouraged to place additional emphasis on relevance and its drivers to improve the likelihood that online reviews will have the desired impact.

Meet consumer needs based on relevance-matching


Technology increasingly allows marketers to learn about consumer personas and usage patterns, along with consumers' browsing and purchase behavior. Marketers have long used recommendation systems based on consumer purchase habits (e.g., if you buy a swim mask, you might also want swim fins). The present results suggest that marketers can also benefit by including a feature in their online review systems that recommends relevant reviews to users based on their persona and usage similarity with reviewers. For instance, displaying additional data about reviewers' backgrounds and product use habits would give receivers the ability to ascertain the relevance of those reviewers' OPRs by matching them to their own needs. Marketers could go further and personalize the display of reviews based on persona and usage data. Because recommendation systems already capture a treasure trove of data on consumers, that data could be matched against reviewers who are deemed similar on both persona and product usage dimensions. In this way, reviews would be matched on aspects of relevance rather than by product-centric algorithms alone. Such a mechanism would deliver more appropriate OPRs for consumers, yielding greater impact and ultimately improving and (potentially) speeding up consumer decision-making. As an example, Karimi and Wang (2017) found that simply including a reviewer profile image improves the helpfulness scores of OPRs. The present study recommends that marketers go much further and make a concerted effort to showcase the relevance features most vital to consumers: their similarity to the reviewer in background, experience, character, and expectations for product use.
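To make the relevance-matching idea concrete, the following is a minimal sketch in Python (not the authors' implementation; the feature encodings, field names, and equal weights are illustrative assumptions) of ranking reviews by a receiver's persona and usage similarity to each reviewer:

# Illustrative relevance-matching sketch: rank reviews by the similarity
# between a consumer's profile vectors and each reviewer's profile vectors.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relevance_score(consumer, reviewer, w_persona=0.5, w_usage=0.5):
    # Persona features might encode background, character, and style;
    # usage features might encode intended purpose and usage frequency.
    persona_sim = cosine(consumer["persona"], reviewer["persona"])
    usage_sim = cosine(consumer["usage"], reviewer["usage"])
    return w_persona * persona_sim + w_usage * usage_sim

consumer = {"persona": np.array([1.0, 0.2, 0.7]),
            "usage": np.array([0.9, 0.1])}
reviews = [{"id": "r1", "persona": np.array([0.9, 0.3, 0.6]),
            "usage": np.array([0.8, 0.2])},
           {"id": "r2", "persona": np.array([0.1, 0.9, 0.1]),
            "usage": np.array([0.2, 0.9])}]

# Display the most relevant reviews first.
ranked = sorted(reviews, key=lambda r: relevance_score(consumer, r),
                reverse=True)
print([r["id"] for r in ranked])

The equal weights here are arbitrary; in practice they could be tuned empirically, for example to reflect the relative strength of the persona- and usage-similarity paths estimated above.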

Negative impact of reviewer expertise


This study’s empirical results show that reviewer expertise is not only less
important than trustworthiness in determining credibility of an OPR, but its
effect on credibility is actually negative. In other words, higher levels of
reviewer expertise lead receivers to perceive associated reviews as less credible.
The finding that expertise is less important than trustworthiness in credibility
assessments is consistent with some recent work. For example, Martin and
Lueg (2013) found that source expertise does not lead to greater word-
of-mouth usage. Instead, they found that consumers place substantially greater
weight on the source’s familiarity with the product than on his or her compe-
tence in the product category. It could be argued that because the product in this study, a fitness tracker, is simple to understand and relatively inexpensive, consumers feel they already possess substantial information about the product, making the expertise of the OPR source less valuable.
Recent work on credibility assessments of scientific information may provide additional insight into societal and cultural changes that create an unconscious bias against expertise. Imhoff, Lamberty, and Klein (2018) note an ongoing societal debate over the rise of anti-elitist sentiments and conspiracy theories regarding the "untrustworthy power elite" (1365). They found that respondents who exhibited a conspiracy mentality consistently assigned lower credibility ratings to powerful sources (experts) and higher credibility ratings to powerless sources (non-experts) (1374).
Thon and Jucks (2017) found that although information users rated sources with medical credentials as more credible than those without, the use of technical language negatively affected users' perceptions of a source's integrity and, ultimately, its credibility. Accordingly, the use of technical language will not, by itself, help sources establish themselves as experts in online health communication. The present study's findings, together with this other recent work, add urgency for marketers to reevaluate their OPR systems to ensure that the dimensions of relevance and trustworthiness are given more weight and visibility to increase the impact of OPRs on their sites.

Limitations and future research


This study has a number of limitations that future studies could address.
First, the use of hypothetical reviews could limit generalizability of the
results to actual user-generated reviews. Specifically, the reviews for this
study were created to emphasize specific reviewer characteristics (i.e.,
expertise and trustworthiness) of interest to the study. Although in real life
some reviewers do strive to appear knowledgeable or trustworthy, not all
do so. It is not clear whether this study’s findings extrapolate to those con-
texts as well. Future research could use a variety of actual reviews across an
assortment of product contexts to examine whether the effect of review
relevance uncovered in this study holds in those contexts.
Second, the sampling and data collection decisions implemented for the study resulted in a sample that is slightly younger, better educated, and probably more technologically savvy. While this does not undermine
the conclusions of our study, care should be taken when extrapolating the
attitudes and behaviors derived from this research to other segments that
have been under-sampled. For example, it is possible that older individuals
or those with less education may apply different criteria toward determin-
ing review relevance. One possible opportunity for future research would
be to better understand the formation of persona similarity, and how this
is shaped by respondent characteristics such as age. In the context of this
research, there may be differential effects of age-related experience that
alter respondents’ assessments of reviewer similarity that deserve
exploration.
A third potential limitation is the review context used in these studies. It
is now common for online product review systems to include profile infor-
mation about reviewers. For instance, Amazon provides profile information
about each reviewer that includes the number of reviews posted by the
reviewer, number of helpful votes made, number of hearts received, and an
overall reviewer ranking. Such information is undoubtedly useful to con-
sumers in ascertaining reviewer credibility and expertise. However, the
reviews used in the present study did not include such profile information.
Future studies using actual user-generated content could examine whether
the presence of this information affects coefficients in the model tested in
this study.
Finally, although the present study used a more robust multi-item meas-
ure of OPR impact, the data collected are still respondent self-reports. In
the future, it would be worthwhile to collect data about respondents’
behavior both during and after reading OPRs to better understand the
impact of OPRs on consumer decision-making.

Conclusion
In summary, both the credibility of a source and the relevance of a review
contribute to OPR impact. The addition of review relevance adds explana-
tory power to how consumers assess OPRs and the resultant impact these
reviews can have on consumer decision-making. Findings also demonstrate
that the trustworthiness of a reviewer is substantially more important than
the reviewer’s expertise in evaluating the credibility of OPRs. As the first
study to conceptualize and operationalize review (source) credibility as a
distinct construct from its underlying dimensions of expertise and trust-
worthiness, this research contributes to the literature by teasing apart the
effects of trustworthiness and expertise as independent constructs and
showcases the vital role that reviewer trustworthiness plays. Finally, by
operationalizing OPR impact as comprising a cognitive-affective dimension
(perceptions) and a behavioral dimension (likelihood to act), this research
highlights the need for marketers to look beyond purchase intent to gauge
OPR impact.

References
Adjei, M., S. Noble, and C. Noble. 2010. The influence of C2C communications in online
brand communities on customer purchase behavior. Journal of the Academy of
Marketing Science 38 (5):634–653. doi: 10.1007/s11747-009-0178-5.
Andersen, K., and T. Clevenger. Jr. 1963. A summary of experimental research in ethos.
Speech Monographs 30 (2):59–78. doi: 10.1080/03637756309375361.
Anderson, J. C., and D. W. Gerbing. 1988. Structural equation modeling in practice: A
review and recommended two-step approach. Psychological Bulletin 103 (3):411–423. doi:
10.1037//0033-2909.103.3.411.
Andreassen, T., and S. Streukens. 2009. Service innovation and electronic word-of-mouth:
Is it worth listening to? Managing Service Quality: An International Journal 19 (3):
249–265. doi: 10.1108/09604520910955294.
Applbaum, R. F., and K. W. E. Anatol. 1972. The factor structure of source credibility as a
function of the speaking situation. Speech Monographs 39 (3):216–222. doi: 10.1080/
03637757209375760.
Askalidis, G., and E. C. Malthouse. 2016. The value of online customer reviews. Paper pre-
sented at the Proceedings of the 10th ACM Conference on Recommender Systems
(RecSys’16). ACM, New York, NY, USA, 155–8. doi: 10.1145/2959100.2959181.
Ayeh, J. K. 2015. Travelers’ acceptance of consumer-generated media: An integrated model
of technology acceptance and source credibility theories. Computers in Human Behavior
48:173–180. doi: 10.1016/j.chb.2014.12.049.
Banerjee, S., S. Bhattacharyya, and I. Bose. 2017. Whose online reviews to trust?
Understanding reviewer trustworthiness and its impact on business. Decision Support
Systems 96:17–26. doi: 10.1016/j.dss.2017.01.006.
BBB/Nielsen. 2017. https://www.bbb.org/globalassets/local-bbbs/council-113/media/documents/12468-d-01_cbbb_report.pdf (accessed December 10, 2018).
Berlo, D. K., J. B. Lemert, and R. J. Mertz. 1969. Dimensions for evaluating the acceptabil-
ity of message sources. Public Opinion Quarterly 33 (4):563–576. doi: 10.1086/267745.
Bowden, A. O., F. F. Caldwell, and G. A. West. 1934. A study in prestige. American
Journal of Sociology 40 (2):193–203. doi: 10.1086/216684.
Bowers, J. W., and W. A. Phillips. 1967. A note on the generality of source-credibility
scales. Speech Monographs 34 (2):185–186. doi: 10.1080/03637756709375542.
Brock, T. C. 1965. Communicator-recipient similarity and decision change. Journal of
Personality and Social Psychology 1 (6):650–654. doi: 10.1037/h0022081.
Buhrmester, M. D., S. Talaifar, and S. D. Gosling. 2018. An evaluation of Amazon's Mechanical Turk, its rapid rise, and its effective use. Perspectives on Psychological Science
13 (2):149–154. doi: 10.1177/1745691617706516.
Chatterjee, P. 2001. Online reviews: Do consumers use them? Paper presented at the
Association for Consumer Research Proceedings, eds. M. C. Gilly, and J. Myers-Levy,
129–134. Available at SSRN: https://ssrn.com/abstract=900158.
Cheng, Y.-H., and H.-Y. Ho. 2015. Social influences impact on reader perceptions of
online reviews. Journal of Business Research 68 (4):883–887. doi: 10.1016/j.jbusres.2014.
11.046.
Cheung, C. M. K., and D. R. Thadani. 2012. The impact of electronic word-of-mouth com-
munication: A literature analysis and integrative model. Decision Support Systems 54 (1):
461–470. doi: 10.1016/j.dss.2012.06.008.
Cheung, C. M. K., M. K. O. Lee, and N. Rabjohn. 2008. The impact of electronic word-of-
mouth. The adoption of online opinions in online customer communities. Internet
Research 18 (3):229–247. doi: 10.1108/10662240810883290.
Cheung, M. Y., C. Luo, C.L. Sia, and H. Chen. 2009. Credibility of electronic
word-of-mouth: Informational and normative determinants of online consumer recom-
mendations. International Journal of Electronic Commerce 13 (4):9–38. doi: 10.2753/
JEC1086-4415130402.
Chevalier, J. A., and D. Mayzlin. 2006. The effect of word of mouth on sales:
Online book reviews. Journal of Marketing Research 43 (3):345–354. doi: 10.1509/jmkr.
43.3.345.
Chi, T. 2018. Mobile commerce website success: Antecedents of consumer satisfaction and
purchase intention. Journal of Internet Commerce 17 (3):189–215. doi: 10.1080/15332861.
2018.1451970.
Chmielewski, M., and S. C. Kucker. In press. An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science. doi: 10.1177/1948550619875149.
Chou, S. Y., S. Picazo-Vela, and J. M. Pearson. 2013. The effect of online review configura-
tions, prices, and personality on online purchase decisions: A study of online review pro-
files on eBay. Journal of Internet Commerce 12 (2):131–153. doi: 10.1080/15332861.2013.
817862.
Chung, N., and H. Han. 2017. The relationship among tourists’ persuasion, attachment and
behavioral changes in social media. Technological Forecasting and Social Change 123:
370–380. doi: 10.1016/j.techfore.2016.09.005.
Clemons, E. K., and G. (Gordon) Gao. 2008. Consumer informedness and diverse con-
sumer purchasing behaviors: Traditional mass-market, trading down, and trading out
into the long tail. Electronic Commerce Research and Applications 7 (1):3–17. doi: 10.
1016/j.elerap.2007.10.001.
Cole, M. D., M. M. Long, L. G. Chiagouris, and P. Gopalakrishna. 2011. Transitioning
from traditional to digital content: An examination of opinion leadership and word-of-
mouth communication across various media platforms. Journal of Internet Commerce 10
(2):91–105. doi: 10.1080/15332861.2011.571990.
Colton, D.A. 2018. Antecedents of consumer attitudes’ toward corporate blogs. Journal of
Research in Interactive Marketing 12 (1):94–104. doi: 10.1108/JRIM-08-2017-0075.
Cooper, W.S. 1971. A definition of relevance for information retrieval. Information Storage
and Retrieval 7 (1):19–37. doi: 10.1016/0020-0271(71)90024-6.
Cosenza, T. R., M. R. Solomon, and W. Kwon. 2015. Credibility in the blogosphere: A study of measurement and influence of wine blogs as an information source. Journal of
Consumer Behaviour 14 (2):71–91. doi: 10.1002/cb.1496.
Costello, K. L. 2017. Social relevance assessment for virtual worlds: Interpersonal source
selection in the context. Journal of Documentation 73:1209–1227. doi: 10.1108/JD-07-
2016-0096.
Dhar, V., and E.A. Chang. 2009. Does chatter matter? The impact of user-generated con-
tent on music sales. Journal of Interactive Marketing 23 (4):300–307. doi: 10.1016/j.
intmar.2009.07.004.
Dholakia, R., and B. Sternthal. 1977. Highly credible sources: Persuasive facilitators
or persuasive liabilities? Journal of Consumer Research 3 (4):223–232. doi: 10.1086/
208671.
Doh, S. J., and J. S. Hwang. 2009. How consumers evaluate eWOM (electronic
word-of-mouth) messages. CyberPsychology & Behavior 12 (2):193–197. doi: 10.1089/cpb.
2008.0109.
Duan, J., and R. R. Dholakia. 2018. How purchase type influences consumption-related
posting behavior on social media: The moderating role of materialism. Journal of
Internet Commerce 17 (1):64–80. doi: 10.1080/15332861.2018.1424396.
Duffy, A. 2015. Friends and fellow travelers: Comparative influence of review sites and
friends on hotel choice. Journal of Hospitality and Tourism Technology 6 (2):127–144.
doi: 10.1108/JHTT-05-2014-0015.
Dwyer, C., S. R. Hiltz, and K. Passerini. 2007. Trust and privacy concern within social net-
working sites: A comparison of Facebook and MySpace. Paper presented at AMCIS
Proceedings, 339. http://aisel.aisnet.org/amcis2007/339
East, R., K. Hammond, and W. Lomax. 2008. Measuring the impact of positive and nega-
tive word of mouth on brand purchase probability. International Journal of Research in
Marketing 25 (3):215–224. doi: 10.1016/j.ijresmar.2008.04.001.
Eisend, M. 2006. Source credibility dimension in marketing – A generalized solution.
Journal of Empirical Generalisations in Marketing 10 (2):1–33.
Ewing, T. N. 1942. A study of certain factors involved in changes of opinion. The Journal
of Social Psychology 16 (1):63–88. doi: 10.1080/00224545.1942.9714105.
Filieri, R. 2016. What makes an online consumer review trustworthy?. Annals of Tourism
Research 58:46–64. doi: 10.1016/j.annals.2015.12.019.
Fornell, C., and D. F. Larcker. 1981. Evaluating structural equation models with unobserv-
able variables and measurement error. Journal of Marketing Research 18 (1):39–50. doi:
10.1177/002224378101800104.
Furner, C. P., R. Zinko, and Z. Zhu. 2016. Electronic word-of-mouth and information over-
load in an experiential service industry. Journal of Service Theory and Practice 26 (6):
788–810. doi: 10.1108/JSTP-01-2015-0022.
Giffin, K. 1967. The contribution of studies of source credibility to a theory of interper-
sonal trust in the communication process. Psychological Bulletin 68 (2):104–120. doi: 10.
1037/h0024833.
González-Rodríguez, M. R., R. Martínez-Torres, and S. Toral. 2016. Post-visit and pre-visit
tourist destination image through eWOM sentiment analysis and perceived helpfulness.
International Journal of Contemporary Hospitality Management 28 (11):2609–2627. doi:
10.1108/IJCHM-02-2015-0057.
Hauser, D., G. Paolacci, and J. Chandler. 2019. Common concerns with MTurk as a participant pool: Evidence and solutions. In Handbook of research methods in consumer psychology, eds. F. Kardes, P. Herr, and N. Schwarz, 319–336. New York, NY: Routledge.
Heesacker, M., R. E. Petty, and J. T. Cacioppo. 1983. Field dependence and attitude change:
Source credibility can alter persuasion by affecting message-relevant thinking. Journal of
Personality 51 (4):653–666. doi: 10.1111/j.1467-6494.1983.tb00872.x.
Hernandez-Ortega, B. 2018. Don’t believe strangers: Online consumer reviews and the role
of social psychological distance. Information & Management 55 (1):31–50. doi: 10.1016/j.
im.2017.03.007.
Homer, P. M., and L. R. Kahle. 1990. Source expertise, time of source identification, and
involvement in persuasion: An elaborative processing perspective. Journal of Advertising
19 (1):30–39. doi: 10.1080/00913367.1990.10673178.
Hovland, C. I., I. L. Janis, and H. H. Kelley. 1953. Communication and persuasion. New
Haven, CT: Yale University Press.
Hovland, C. I., and W. Weiss. 1951. The influence of source credibility on communication
effectiveness. Public Opinion Quarterly 15 (4):635–650. doi: 10.1086/266350.
Hsu, C. L., J. C.-C. Lin, and H. S. Chiang. 2013. The effects of blogger recommendations
on customers’ online shopping intentions. Internet Research 23 (1):69–88. doi: 10.1108/
10662241311295782.
Hulland, J. 1999. Use of partial least squares (PLS) in strategic management research: A
review of four recent studies. Strategic Management Journal 20 (2):195–204. doi: 10.1002/
(SICI)1097-0266(199902)20:2<195::AID-SMJ13>3.3.CO;2-Z.
Hwang, J., S. Park, and M. Woo. 2018. Understanding user experiences of online travel
review websites for hotel booking behaviors: An investigation of a dual motivation the-
ory. Asia Pacific Journal of Tourism Research 23 (4):359–372. doi: 10.1080/10941665.
2018.1444648.
Imhoff, R., P. Lamberty, and O. Klein. 2018. Using power as a negative cue: How conspir-
acy mentality affects epistemic trust in sources of historical knowledge. Personality and
Social Psychology Bulletin 44 (9):1364–1379. doi: 10.1177/0146167218768779.
Jiang, L., J. Hoegg, D. W. Dahl, and A. Chattopadhyay. 2010. The persuasive role of inci-
dental similarity on attitudes and purchase intentions in a sales context. Journal of
Consumer Research 36 (5):778–791. doi: 10.1086/605364.
Johnson, H. H., J. M. Torcivia, and M. A. Poprick. 1968. Effects of source credibility on
the relationship between authoritarianism and attitude change. Journal of Personality and
Social Psychology 9 (2, Pt.1):179–183. doi: 10.1037/h0021250.
Jöreskog, K. G., and D. Sörbom. 2004. LISREL 8.7 for Windows. Lincolnwood, IL: Scientific Software International.
Jucks, R., and F. M. Thon. 2017. Better to have many opinions than one from an expert?
Social validation by one trustworthy source versus the masses in online health forums.
Computers in Human Behavior 70:375–381. doi: 10.1016/j.chb.2017.01.019.
Karimi, S., and F. Wang. 2017. Online review helpfulness: Impact of reviewer profile image.
Decision Support Systems 96:39–48. doi: 10.1016/j.dss.2017.02.001.
Kelley, H. H. 1967. Attribution theory in social psychology. Nebraska Symposium on
Motivation 15:192–238.
Kelley, H. H., and J. W. Thibaut. 1954. Experimental studies of group problem solving and
process. Handbook of Social Psychology 2:735–785.
Kim, J., and P. Gupta. 2012. Emotional expressions in online user reviews: How they influ-
ence consumers’ product evaluations. Journal of Business Research 65 (7):985–992. doi:
10.1016/j.jbusres.2011.04.013.

Kimball, S. H. 2019. Survey data collection: Online panel efficacy. A comparative study of
Amazon MTurk and Research Now SSI/Survey Monkey/Opinion Access. Journal of
Business Diversity 19 (2):16–45.
Kozinets, R. V., K. De Valck, A. C. Wojnicki, and S. J. S. Wilner. 2010. Networked narra-
tives: Understanding word-of-mouth marketing in online communities. Journal of
Marketing 74 (2):71–89. doi: 10.1509/jm.74.2.71.
Kulp, D. H. 1934. Prestige, as measured by single-experience changes and their perman-
ency. The Journal of Educational Research 27 (9):663–672. doi: 10.1080/00220671.1934.
10880448.
Lafferty, B. A., R. E. Goldsmith, and S. J. Newell. 2002. The dual credibility model: The
influence of corporate and endorser credibility on attitudes and purchase intentions.
Journal of Marketing Theory and Practice 10 (3):1–12. doi: 10.1080/10696679.2002.
11501916.
Lee, M., and S. Youn. 2009. Electronic word of mouth (eWOM). How eWOM platforms
influence consumer product judgement. International Journal of Advertising 28 (3):
473–499. doi: 10.2501/S0265048709200709.
Leonard, L. N. K., and K. Jones. 2010. Consumer-to-consumer e-commerce research in
information systems journals. Journal of Internet Commerce 9 (3–4):186–207. doi: 10.
1080/15332861.2010.529052.
Lim, K. H., C. L. Sia, M. K. Lee, and I. Benbasat. 2006. Do I trust you online, and if so,
Will I buy? An empirical study of two trust-building strategies. Journal of Management
Information Systems 23 (2):233–266. doi: 10.2753/MIS0742-1222230210.
Martin, W. C., and J. E. Lueg. 2013. Modeling word-of-mouth usage. Journal of Business
Research 66 (7):801–808. doi: 10.1016/j.jbusres.2011.06.004.
McCracken, G. 1989. Who is the celebrity endorser? Cultural foundations of the endorse-
ment process. Journal of Consumer Research 16 (3):310–321. doi: 10.1086/209217.
McCroskey, J. C., W. Holdridge, and J. K. Toomb. 1974. An instrument for measuring the
source credibility of basic speech communication instructors. The Speech Teacher 23 (1):
26–33. doi: 10.1080/03634527409378053.
McCroskey, J. C., and J. J. Teven. 1999. Goodwill: A reexamination of the construct and its
measurement. Communication Monographs 66 (1):90–103. doi: 10.1080/
03637759909376464.
McCroskey, J. C., and T. J. Young. 1981. Ethos and credibility: The construct and its meas-
urement after three decades. Central States Speech Journal 32 (1):24–34. doi: 10.1080/
10510978109368075.
Mudambi, S. M., and D. Schuff. 2010. What makes a helpful online review? A study of cus-
tomer reviews on Amazon.com. MIS Quarterly 34 (1):185–200.
Newell, S. J., and R. E. Goldsmith. 2001. The development of a scale to measure perceived
corporate credibility. Journal of Business Research 52 (3):235–247. doi: 10.1016/S0148-
2963(99)00104-6.
NPD. 2015. The demographic divide: Fitness trackers and smartwatches attracting very dif-
ferent segments of the market, according to the NPD group. https://www.npd.com/wps/
portal/npd/us/news/press-releases/2015/the-demographic-divide-fitness-trackers-and-
smartwatches-attracting-very-different-segments-of-the-market-according-to-the-npd-
group/ (accessed December 4, 2018).
Ohanian, R. 1990. Construction and validation of a scale to measure celebrity endorsers’
perceived expertise, trustworthiness, and attractiveness. Journal of Advertising 19 (3):
39–52. doi: 10.1080/00913367.1990.10673191.
O’Keefe, D. J. 2002. Persuasion: Theory and research. Vol. 2. Thousand Oaks, CA: Sage.
O'Reilly, K., A. MacMillan, A. G. Mumuni, and K. M. Lancendorfer. 2016. Extending our understanding of eWOM impact: The role of source credibility and message relevance.
Journal of Internet Commerce 15 (2):77–96. doi: 10.1080/15332861.2016.1143215.
Pan, Y., and J. Q. Zhang. 2011. Born unequal: A study of the helpfulness of user-generated
product reviews. Journal of Retailing 87 (4):598–612. doi: 10.1016/j.jretai.2011.05.002.
Park, D. H., and J. Lee. 2008. eWOM overload and its effect on consumer behavioral inten-
tion depending on consumer involvement. Electronic Commerce Research and
Applications 7 (4):386–398. doi: 10.1016/j.elerap.2007.11.004.
Park, D. H., J. Lee, and I. Han. 2007. The effect of on-line consumer reviews on consumer
purchasing intention: The moderating role of involvement. International Journal of
Electronic Commerce 11 (4):125–148. doi: 10.2753/JEC1086-4415110405.
Parsons, A. L., and E. Lepkowska-White. 2010. Web site references in print advertising: An
analysis of calls to action. Journal of Internet Commerce 9 (3–4):151–163. doi: 10.1080/
15332861.2010.526487.
Peng, L., Q. Liao, Z. Wang, and X. He. 2016. Factors affecting female user information
adoption: An empirical investigation on fashion shopping guide websites. Electronic
Commerce Research 16 (2):145–169. doi: 10.1007/s10660-016-9213-z.
PeopleClaim. 2013. Review of reviews. available at: http://www.peopleclaim.com/blog/index.
php/the-review-of-ratings/ (accessed August 5, 2018).
Rabjohn, N., C. M. K. Cheung, and M. K. O. Lee. 2008. Examining the perceived credibility of online opinions: Information adoption in the online environment. Proceedings of the 41st Hawaii International Conference on System Sciences. https://www.researchgate.net/publication/221181609.
Racherla, P., M. Mandviwalla, and D. J. Connolly. 2012. Factors affecting consumers’ trust in
online product reviews. Journal of Consumer Behaviour 11 (2):94–104. doi: 10.1002/cb.385.
Reichelt, J., J. Sievert, and F. Jacob. 2014. How credibility affects eWOM reading:
The influences of expertise, trustworthiness, and similarity on utilitarian and social
functions. Journal of Marketing Communications 20 (1–2):65–81. doi: 10.1080/13527266.
2013.797758.
Rogers, E. M., and D. K. Bhowmik. 1970. Homophily-heterophily: Relational concepts
for communication research. Public Opinion Quarterly 34 (4):523–538. doi: 10.1086/
267838.
Ruef, M., H. E. Aldrich, and N. M. Carter. 2003. The structure of founding teams:
Homophily, strong ties, and isolation among U.S. entrepreneurs. American Sociological
Review 68 (2):195–222. doi: 10.2307/1519766.
Saleem, A., and A. Ellahi. 2017. Influence of electronic word of mouth on purchase inten-
tion of fashion products on social networking websites. Pakistan Journal of Commerce
and Social Sciences 11 (2):597–622.
Saracevic, T. 1975. Relevance: A review of and a framework for the thinking on the notion
in information science. Journal of the American Society for Information Science 26 (6):
321–343. doi: 10.1002/asi.4630260604.
Schamber, L., M. B. Eisenberg, and M.S. Nilan. 1990. A re-examination of relevance:
Toward a dynamic, situation definition. Information Processing & Management 26 (6):
755–775. doi: 10.1016/0306-4573(90)90050-C.
Schlosser, A. E. 2011. Can including pros and cons increase the helpfulness and persuasive-
ness of online reviews? The interactive effects of ratings and arguments. Journal of
Consumer Psychology 21 (3):226–239. doi: 10.1016/j.jcps.2011.04.002.
Sen, S., and D. Lerman. 2007. Why are you telling me this? An examination into negative
consumer reviews on the web. Journal of Interactive Marketing 21 (4):76–94. doi: 10.
1002/dir.20090.
Shan, Y. 2016. How credible are online product reviews? The effects of self-generated and
system-generated cues on source credibility evaluation. Computers in Human Behavior
55:633–641. doi: 10.1016/j.chb.2015.10.013.
Simons, H. W., N. N. Berkowitz, and J. R. Moyer. 1970. Similarity, credibility, and atti-
tude change: A review and a theory. Psychological Bulletin 73 (1):1–16. doi: 10.1037/
h0028429.
Smith, A., and M. Anderson. 2016. Online shopping and E-commerce. Pew Research Center.
https://www.pewresearch.org/internet/2016/12/19/online-reviews (accessed November 18,
2019).
Smith, B. J., and D. W. Barclay. 1997. The effects of organizational differences and trust on
the effectiveness of selling partner relationships. Journal of Marketing 61 (1):3–21. doi:
10.2307/1252186.
SRC. 2017. How online reviews influence sales. Spiegel Research Center. https://spiegel.med-
ill.northwestern.edu/_pdf/Spiegel_Online%20Review_eBook_Jun2017_FINAL.pdf
(accessed December 4, 2018).
Statista. 2017. Trust in online customer reviews 2014–2017. Statista - The Statistics Portal.
https://www.statista.com/statistics/315755/online-custmer-review-trust/ (accessed December
4, 2018).
Statista. 2018. U.S. online review usage frequency prior to new product purchase 2017.
Statista - The Statistics Portal. https://www.statista.com/statistics/713090/us-online-
review-usage-frequency-new-purchases/ (accessed December 4, 2018).
Sternthal, B., R. Dholakia, and C. Leavitt. 1978. The persuasive effect of source credibility:
Tests of cognitive response. Journal of Consumer Research 4 (4):252–260. doi: 10.1086/
208704.
Teng, S., K. W. Khong, A. Y.-L. Chong, and B. Lin. 2017. Examining the impacts of elec-
tronic word-of-mouth message on consumers’ attitude. Journal of Computer Information
Systems 57 (3):238–251. doi: 10.1080/08874417.2016.1184012.
Thompson, D. V., and P. Malaviya. 2013. Consumer-generated ads: Does awareness of
advertising co-creation help or hurt persuasion?. Journal of Marketing 77 (3):33–47. doi:
10.1509/jm.11.0403.
Thon, F. M., and R. Jucks. 2017. Believing in expertise: How authors’ credentials and lan-
guage use influence the credibility of online health information. Health Communication
32 (7):828–836. doi: 10.1080/10410236.2016.1172296.
Thorson, K. S., and S. Rodgers. 2006. Relationships between Blogs as eWOM and inter-
activity, perceived interactivity, and parasocial interaction. Journal of Interactive
Advertising 6 (2):5–44. doi: 10.1080/15252019.2006.10722117.
Tirunillai, S., and G. J. Tellis. 2012. Does chatter really matter? Dynamics of user-generated con-
tent and stock performance. Marketing Science 31 (2):198–215. doi: 10.1287/mksc.1110.0682.
Trusov, M., R. E. Bucklin, and K. Pauwels. 2009. Effects of word-of-mouth versus trad-
itional marketing: Findings from an internet social networking site. Journal of Marketing
73 (5):90–102. doi: 10.1509/jmkg.73.5.90.
Tsao, W.-C., and M.-T. Hsieh. 2015. eWOM persuasiveness: Do eWOM platforms and
product type matter?. Electronic Commerce Research 15 (4):509–541. doi: 10.1007/s10660-
015-9198-z.
Turner, J. C. 1991. Social influence. Bristol, PA: University Press. doi: 10.1093/sw/18.1.118.
Wang, P. 2015. Exploring the influence of electronic word-of-mouth on tourists’ visit inten-
tion. A dual process approach. Journal of Systems and Information Technology 17 (4):
381–395. doi: 10.1108/JSIT-04-2015-0027.
Whitehead, A. N. 1968. Modes of thought. Vol. 93521. New York: Simon and Schuster.
Williams, R., T. van der Wiele, J. van Iwaarden, and S. Eldridge. 2010. The Importance of
user-generated content: The case of hotels. The TQM Journal 22 (2):117–128. doi: 10.
1108/17542731011024246.
Wilson, P. 1973. Situational relevance. Information Storage and Retrieval 9 (8):457–469. doi:
10.1016/0020-0271(73)90096-X.
Wilson, E. J., and D. L. Sherrell. 1993. Source effects in communication and persuasion
research: A meta-analysis of effect size. Journal of the Academy of Marketing Science 21
(2):101–112. doi: 10.1007/BF02894421.
Xia, L., and N. N. Bechwati. 2008. Word of mouse. Journal of Interactive Advertising 9 (1):
3–13. doi: 10.1080/15252019.2008.10722143.
Xu, Q. 2014. Should I trust him? The effects of reviewer profile characteristics on
eWOM credibility. Computers in Human Behavior 33:136–144. doi: 10.1016/j.chb.2014.
01.027.
Zhang, K. Z.K., C. M. K. Cheung, and M. K. O. Lee. 2014. Examining the moderating effect
of inconsistent reviews and its gender differences on consumers’ online shopping deci-
sion. International Journal of Information Management 34 (2):89–98. doi: 10.1016/j.ijin-
fomgt.2013.12.001.
Zhang, R., and T. Tran. 2011. An information gain-based approach for recommending use-
ful product reviews. Knowledge and Information Systems 26 (3):419–434. doi: 10.1007/
s10115-010-0287-y.
Zhang, W., and S. A. Watts. 2008. Capitalizing on content: Information adoption in two
online communities. Journal of the Association for Information Systems 9 (2):73–94. doi:
10.17705/1jais.00149.
Zhang, Z., Y. Qiang, R. Law, and Y. Li. 2010. The impact of e-word-of-mouth on the
online popularity of restaurants: A comparison of consumer reviews and editor reviews.
International Journal of Hospitality Management 29 (4):694–700. doi: 10.1016/j.ijhm.
2010.02.002.
Zhu, J., D. K. C. Tse, and Q. Fei. 2018. Effects of online consumer reviews on firm-based
and expert-based communications. Journal of Research in Interactive Marketing 12 (1):
45–78. doi: 10.1108/JRIM-02-2017-0007.
Zhu, L., Y. Guopeng, and H. Wei. 2014. Is this opinion leader’s review useful? Peripheral
cues for online review helpfulness. Journal of Electronic Commerce Research 15 (4):
267–280.
Zhu, F., and X. (Michael) Zhang. 2010. Impact of online consumer reviews on sales: The
moderating role of product and consumer characteristics. Journal of Marketing 74 (2):
133–148. doi: 10.1509/jmkg.74.2.133.
Appendices
Appendix 1. Four manipulations of reviewer expertise and trustworthiness
1. High expertise reviewer
Directions:
You are finalizing your choice of a wearable device that tracks your fitness. You’d like to
improve your overall fitness level so you look and feel your best. Here is a review for a
fitness tracker that fits your budget and has all of the features you are looking for.
Please read it and then rate the degree of expertise that you feel is demonstrated in
this review:


Improved my aerobic capacity, lost seven pounds … and ran my first marathon!
I am a skeptic who turned into a fan. This product is amazing.
Admittedly, I’m a bit of a fitness fanatic. I’ve always been a gym rat – lifting weights,
doing spin classes, and sometimes working with a personal trainer. I watch what I eat,
limiting sugar and saturated fats and only using natural nutritional supplements
and vitamins.
Even so, over the past five years, I have found it increasingly difficult to stay in the kind
of shape I used to be in and to shed a couple of extra pounds – to get back to my fighting
weight. The initial results in cardio capacity and weight loss were usually pretty good, but
not sustainable, and within a month or so, I was back to status quo.
After having used this fitness tracker for seven months, I can strongly recommend it as
effective in the short-term and over the longer haul, and it’s easy to use. I tested it by walk-
ing and counting exactly 100 steps as they advise, and it was within 2 steps every time. I
even tried tricking it by holding things in my hands or pulling my kids in a wagon but it
never erred beyond the tiniest deviation.
It helped me quickly realize that while I was working out fine in the gym that was about
it; I was making sedentary lifestyle choices the rest of the time. Now, because it tracks my
steps and summarizes results for me – by day, by week, by month – I can easily track my
progress and I’m motivated to keep doing more.
After two months, my doctor’s office measured a 10% decrease in my blood pressure,
my resting pulse dropped from 74 to 65, and my lung capacity increased by 15%. I was so
elated by this that I decided to train for a marathon. Yesterday, I finished my first one, in
just over four hours. True, this won’t set any world records but it’s nothing to sneeze at,
and I have never felt so good.
My only criticism is that the sleep tracker gives results that aren’t as easy to read as the
step-counting results and they vary so much that I’m not sure whether it’s me or if there is
something inconsistent about the way it measures REM sleep. Also, it’s not a huge deal,
but the band is a little clunky and sometimes catches on things.
Overall, I cannot say enough about how brilliantly this product performs for tracking
and motivating fitness. I am hooked for life.

Low Expertise 1 2 3 4 5 6 7 8 9 10 High Expertise
2. Low expertise reviewer


Directions:
You are finalizing your choice of a wearable device that tracks your fitness. You’d like to
improve your overall fitness level so you look and feel your best. Here is a review for a fit-
ness tracker that fits your budget and has all of the features you are looking for. Please
read it and then rate the degree of expertise that you feel is demonstrated in this review:


This body is ready for the beach!!!!


Omg, I cant wait to strut my stuff on the beach this weekend!!! It’s not like I was totally
out of shape, but I could feel those love handles getting more if you know what I mean.
LOL, when I looked in the mirror, I did not see the fab body that made everybody look
in h.s.!!
So I thought why not try this thing out. Now I always take the stairs instead of the ele-
vator and I’m always looking for more ways to add steps to my day. And it’s easy to use.
I’m looking in the mirror now … and I like what I see – and I’m pretty sure I won’t be
the only one!!! This product is amazing!!! Best fitness tracker on the planet!!!

Low Expertise 1 2 3 4 5 6 7 8 9 10 High Expertise

3. High trustworthiness reviewer


Directions:
You are finalizing your choice of a wearable device that tracks your fitness. You’d like to
improve your overall fitness level so you look and feel your best. Here is a review for a fit-
ness tracker that fits your budget and has all of the features you are looking for. Please
read it and then rate the trustworthiness you feel about this review:


I am finally back in shape!!


I have rarely ever taken the time to write a review, but I found this fitness tracker to be so
extraordinary that I felt compelled to share my experience.
Having tried all sorts of exercise programs and diets and even another type of fitness
tracker, I was initially skeptical. Nothing ever seemed to work for me. Either I became
bored with it or I lost a few pounds and then gained them back just as quickly – some-
times even adding a pound or two. Climbing stairs and running around with my kids
seemed to make me huff and puff more than I remembered.
Keep in mind that I am not a top-notch athlete or fitness fanatic; I just like to keep
active and stay healthy. Maybe what works for me wouldn’t be enough for someone else.
That said, I could not believe how easy this product was to use and how motivating it was
– with simple stats liked stairs climbed or hours slept. It is the first thing that has ever
motivated me to stick with something and to keep improving.
After two weeks, I had lost two pounds and I started to feel more like playing chase
with the kids. After two months, I was completely hooked. Not only was I well on my way
to reaching my weight goal, but I felt about five or ten years younger.
I am not a big fan of how this product looks. It’s fairly clunky and it catches on my
sweaters. The color choice is either black or dark gray, which is fine for me, but others
might prefer a wider range of options. I am also a little uncertain about how accurate the
sleep measurement function is. The results vary widely. Maybe it’s my sleep patterns or
maybe the device needs fine-tuning. That would be worth asking about if that’s a key elem-
ent for you.
Whatever choice you make to improve your health and fitness, good luck to you. This
option worked for me, and I am hooked for life!

Low Trustworthiness 1 2 3 4 5 6 7 8 9 10 High Trustworthiness

4. Low trustworthiness reviewer


Directions:
You are finalizing your choice of a wearable device that tracks your fitness. You’d like
to improve your overall fitness level so you look and feel your best. Here is a review for a
fitness tracker that fits your budget and has all of the features you are looking for. Please
read it and then rate the trustworthiness you feel about this review:


The right “fit” for you!


Why sacrifice fashion for fitness? This fitness tracker blends a sleek look with cutting edge
electronics to help you become your best, healthiest self.
It’s a slim, stylish device that tracks all-day activities like steps, distance, calories burned,
and active minutes. The latest version has a longer battery life and syncs wirelessly and
automatically to computers and leading smart phones.
Find fitness every step you take with this fitness tracker. It has been hugely popular and
inventory can run low, so it’s critical to order soon. And don’t forget to check out online
seasonal promotions.
And if fitness is your thing, you may also be interested in new sport apparel with built-
in sun protection at www.SunGuardStuff.com.
Love your body, love yourself, love this product!

Low Trustworthiness 1 2 3 4 5 6 7 8 9 10 High Trustworthiness

Side-by-side display for paired comparison task


Low-high expertise
Directions: You are finalizing your choice of a wearable device that tracks your fitness.
You’d like to improve your overall fitness level so you look and feel your best. Below are
two reviews for a fitness tracker that fits your budget and has all of the features you are
looking for. Please read each and then select which reviewer seems to demonstrate a higher
degree of expertise by clicking the appropriate box:
[ ] Reviewer 1: the low-expertise review ("This body is ready for the beach!!!!"), reproduced in full as manipulation 2 above.
[ ] Reviewer 2: the high-expertise review ("Improved my aerobic capacity, lost seven pounds … and ran my first marathon!"), reproduced in full as manipulation 1 above.

High-low trustworthiness
Directions: You are finalizing your choice of a wearable device that tracks your fitness.
You’d like to improve your overall fitness level so you look and feel your best. Below are
two reviews for a fitness tracker that fits your budget and has all of the features you are
looking for. Please read each and then select which reviewer seems to be more trustworthy
by clicking the appropriate box:
[ ] Reviewer 1: the high-trustworthiness review ("I am finally back in shape!!"), reproduced in full as manipulation 3 above.
[ ] Reviewer 2: the low-trustworthiness review ("The right 'fit' for you!"), reproduced in full as manipulation 4 above.
Appendix 2. Conceptualization and operationalization of study constructs

Expertise
Conceptual definition: The extent to which a communicator is perceived as a source of valid assertions (Hovland, Janis, and Kelley 1953); the reviewer's knowledge regarding the subject matter of the message (O'Reilly et al. 2016).
Measures: 1. The reviewer doesn't know what they are talking about; 2. The reviewer is ill-informed—is well-informed; 3. The reviewer is a novice—is an authority; 4. The reviewer is inexperienced—is experienced; 5. The reviewer is unknowledgeable—is knowledgeable; 6. The reviewer is unqualified—is qualified.
Source: Adapted from Eisend (2006), McCroskey and Teven (1999), and Ohanian (1990).

Trustworthiness
Conceptual definition: The degree of confidence in the communicator's intent to communicate the assertions that she considers most valid (Hovland, Janis, and Kelley 1953).
Measures: 1. The reviewer is biased—is unbiased; 2. The reviewer is not fair/balanced—is fair/balanced; 3. The reviewer is not concerned about what's best for me—is concerned about what's best for me; 4. The reviewer is dishonest—is honest; 5. The reviewer is unreliable—is reliable; 6. The reviewer has a hidden agenda—does not have a hidden agenda; 7. The reviewer is a company marketer—is not a company marketer.
Source: Adapted from Eisend (2006), McCroskey and Teven (1999), and Ohanian (1990).

Persona Similarity
Conceptual definition: OPR receiver's assessment of how alike the reviewer is to them in terms of character, background, and experiences.
Measures: 1. The reviewer is someone I'd be friends with; 2. The reviewer writes in the same style as me; 3. The reviewer is someone I can relate to; 4. The reviewer has a background similar to mine; 5. The reviewer values the same things that I do; 6. The reviewer seems similar to me.
Source: Developed by the authors for the present study; supported by measurement scales of Hernandez-Ortega (2018).

Usage Similarity
Conceptual definition: OPR receiver's assessment of how alike the reviewer's use of the product is to their own intended use.
Measures: 1. The reviewer has expectations for the fitness tracker that are similar to mine; 2. The reviewer is using the fitness tracker in the same way that I intend to use it; 3. The reviewer is using the fitness tracker for the same purpose as I will; 4. The reviewer will use the fitness tracker for as long as I expect to use it; 5. The reviewer will take care of this fitness tracker like I will.
Source: Developed by the authors for the present study; supported by measurement scales of Hernandez-Ortega (2018).

Review Credibility
Conceptual definition: The information receiver's perception of believability toward the source of information (Ohanian 1990).
Measures: 1. I believe this review; 2. I believe this review is credible; 3. I believe that I can trust this review; 4. I believe this review is not biased.
Source: Developed by the authors for the present study; supported by measurement scales of Hernandez-Ortega (2018).

Review Relevance
Conceptual definition: The degree to which an OPR receiver perceives an OPR communication to be applicable to their particular circumstance.
Measures: 1. The review is relevant to me; 2. The review is appropriate for my needs.
Source: Developed by the authors for the present study.

OPR Impact
Conceptual definition: The degree to which the OPR receiver will act upon a particular online product review.
Measures: Second-order factor composed of (1) review usefulness, (2) product impressions, and (3) likelihood of purchase/recommendation.
Source: See the three components below.

Review Usefulness
Conceptual definition: The extent to which the OPR receiver finds the review to be useful.
Measures: 1. Useless—Very useful; 2. Unhelpful—Very helpful; 3. Unexciting—Very exciting; 4. Uninteresting—Very interesting; 5. Uninformative—Very informative.
Source: Park and Lee (2008).

Product Impressions
Conceptual definition: The OPR receiver's impressions about the product that is the subject of the review.
Measures: 1. Unfavorable—Favorable; 2. Unimpressed—Impressed; 3. Unexcited—Excited; 4. Uninterested—Interested; 5. Unmotivated to consider it—Motivated to consider it.
Source: Doh and Hwang (2009); Kim and Gupta (2012).

Likelihood of Purchasing/Recommending
Conceptual definition: Likelihood that the OPR receiver will purchase or recommend the product that is the subject of the review.
Measures: 1. Likelihood of choosing product; 2. Likelihood of recommending product.
Source: Zhang and Watts (2008); Rabjohn, Cheung, and Lee (2008).

