
Offline Context Affects Online Reviews: The Effect of Post-Consumption Weather

Downloaded from https://academic.oup.com/jcr/article/49/4/595/6516531 by Universitätsbibliothek Mannheim user on 25 May 2023


LEIF BRANDES
YANIV DOVER

This empirical study investigates whether unpleasant weather—a prominent aspect of a consumer’s offline environment—influences online review provision and content. It uses a unique dataset that combines 12 years of data on hotel bookings and reviews with weather condition information at a consumer’s home and hotel address. The results show that bad weather increases review provision and reduces rating scores for past consumption experiences. Moreover, 6.5% more reviews are written on rainy days, and these reviews are 0.1 points lower, accounting for 59% of the difference in average rating scores between four- and five-star hotels in our data. These results are consistent with a scenario in which bad weather (i) induces negative consumer mood, lowering rating scores, and (ii) makes consumers less time-constrained, which increases review provision. Additional analyses with various automated sentiment measures for almost 300,000 review texts support this scenario: reviews on rainy days show a significant reduction in reviewer positivity and happiness, yet are longer and more detailed. This study demonstrates that offline context influences online reviews, and discusses how platforms and businesses should include contextual information in their review management approaches.

Keywords: online reviews, weather, mood, user-generated content, context effect, automated text analysis

Leif Brandes (leif.brandes@unilu.ch) is a professor of marketing and strategy at the faculty of economics and management at the University of Lucerne, Frohburgstrasse 3, Lucerne 6002, Switzerland. Yaniv Dover (yaniv.dover@mail.huji.ac.il) is an assistant professor of marketing at the Jerusalem Business School at the Hebrew University of Jerusalem, Mount Scopus Campus, #5114, Jerusalem 91905, Israel, and a member of The Federmann Center for the Study of Rationality, Edmond J. Safra Campus, Jerusalem 91904, Israel. Both authors contributed equally to this research. Please address correspondence to either Leif Brandes or Yaniv Dover.

The authors thank the current (Andrew Stephen) and former Editor (Jeff Inman), and the Associate Editor and Reviewer Team for their excellent and constructive guidance during the review process. The authors are also grateful to David Godes, Tobias Klein, Dina Mayzlin, and Thomas Scheurer for their helpful comments and suggestions. Feedback from participants at the 2016 Choice Symposium at Lake Louise, the 2017 Marketing Science Conference in Los Angeles, the 2018 ZEW Conference on the Economics of Information and Communication Technologies in Mannheim, the 2019 CB SIG conference in Berne, and seminar participants at the DMEP seminar series at Ben-Gurion University, the Rotterdam School of Management, Warwick Business School, and University of Zurich is gratefully acknowledged. Part of this research was conducted while the first author was still a faculty member at Warwick Business School, University of Warwick, UK. Supplementary materials are included in the web appendix accompanying the online version of this article.

Editors: J. Jeffrey Inman and Andrew T. Stephen

Associate Editor: David A. Schweidel

Advance Access publication January 28, 2022

© The Author(s) 2022. Published by Oxford University Press on behalf of Journal of Consumer Research, Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. Vol. 49, 2022. https://doi.org/10.1093/jcr/ucac003

INTRODUCTION

Online product reviews, a prominent form of word-of-mouth (WOM), have become a key source of information for consumers and have a major influence on consumer decision-making and product sales (Baker, Donthu, and Kumar 2016; Chevalier and Mayzlin 2006; Vana and Lambrecht 2021). Given this practical relevance, a growing body of literature has examined the antecedents and drivers of WOM (Babic Rosario et al. 2016; Berger 2014). Particular emphasis has been placed on understanding why consumers are more likely to write reviews for extreme versus moderate experiences (Brandes, Godes, and Mayzlin 2022; Schoenmueller, Netzer, and Stahl 2020), and how the communication channel (Berger and Iyengar 2013; Lovett, Peres, and Shachar 2013) and audience (Barasch and Berger 2014; Chen 2017) affect what people share with others. Overall, the extant research demonstrates the importance of various intrinsic motives—particularly self-enhancement, impact, and altruism—for what people share and with whom.

In comparison, research on the influence of external, situational characteristics on online review provision and

content remains relatively limited and narrow in scope. Two prominent situational features are social and physical contexts (Belk 1975). Previous studies have predominantly focused on the influence of social context, such as social density (Consiglio, De Angelis, and Costabile 2018), or exposure to opinions from others (Godes and Silva 2012; Moe and Schweidel 2012; Moe and Trusov 2011; Schlosser 2005). To date, the influence of a consumer’s physical (offline) surroundings—weather, location, decor, sounds, aromas, lighting, and visible configurations (Belk 1975, 159)—at the time of review provision has largely been ignored.1

Indeed, this lack of research on the physical context during review provision is mirrored in managerial practice when it comes to firms’ management of online reviews. While firms now use an arsenal of communication approaches to get more (positive) customer reviews (e.g., sending emails to ask for reviews, offering financial rewards in exchange for reviews), current procedures usually adopt a one-size-fits-all logic and do not condition the time and content of the communication on a customer’s physical environment. Consider the example of the travel platform Booking.com; once a customer checks out of their hotel, the platform automatically sends out an email asking them to write a review about this trip. Similarly, a subscription on the platform JungleScout.com enables individual businesses to automatically request reviews five days after the expected delivery date for all their Amazon orders. These examples illustrate that platforms and individual businesses currently seem to ignore a reviewer’s offline context (e.g., weather and location) when asking for online reviews. However, is this context-free approach actually warranted? Or are firms using misguided strategies to nurture their online reputation by not incorporating data about the actual physical context at the time of review provision?

This study demonstrates that a reviewer’s offline context at the time of review provision systematically affects online review content. It focused on weather-related events, that is, the presence of rain and snow at a customer’s residential address, as a prominent feature of the reviewer’s physical surroundings when providing a review. We hypothesized and demonstrated that such weather-related events have a significant effect on online review provision and content, even days after the consumption experience has already ended.2 This effect could be identified because the study focused on online reviews for hotel bookings and was able to observe the exact start and end dates of consumption. Specifically, we constructed a novel dataset combining information on (i) hotel bookings and reviews with information on weather conditions, (ii) at the reviewer’s residential address on the day of review provision, and (iii) at the booked hotel during the stay. By studying only reviews provided within the first week after the end of a stay, the study established that consumption and review provision are separated across time and space, such that residential weather conditions are unrelated to the reviewed consumption experience.

Why do we expect that contemporaneous weather conditions, that is, weather conditions around the time of review provision, affect online reviews for past consumption? This expectation was built on evidence that bad weather,3 such as rain or snow, has a causal, negative influence on individuals’ level of positive affect (Govind, Garg, and Mittal 2020), possibly in turn influencing opinion formation and expression through incidental affect. Existing studies suggest that bad weather reduces evaluations for unrelated objects, and that this effect occurs automatically and subconsciously (Schwarz and Clore 1983). Thus, the expectation was that bad weather conditions would reduce the positivity of reviews posted online. Regarding review provision, prior research shows that bad weather conditions, such as rain, increase the relative attractiveness of indoor activities (Connolly 2008), and make people more productive in general (Lee, Gino, and Staats 2014). As more than a third of all reviews are still written on desktop computers (Mariani, Borghi, and Gretzel 2019), and as many consumers cite lack of time as the reason for not providing reviews (Statista 2019), it was expected that bad weather conditions increase consumers’ likelihood of writing a review.

Empirical results from more than three million bookings and almost 300,000 associated reviews over more than 12 years confirm these expectations and show that bad weather affects both review content and review provision. Contemporaneous rain at a consumer’s residential address on the day of review provision reduced rating scores by about 0.10 points. This is a large effect, as it amounts to more than half (59%) of the difference in the average rating scores for four-star (M = 5.03) and five-star (M = 5.20) hotels. Additionally, bad weather increased the likelihood of review provision; relative to days without precipitation, consumers are up to 6.5% more likely to write a review on a rainy day. These findings were obtained using a model that corrects for potential sample selection in review provision and includes controls for monthly weather variation during any given year. The results are robust to alternative specifications, controlling for the influence of weather during the customer’s stay at the hotel.

Subsequent automated sentiment classification of review texts with three different commonly used text-analysis tools provided consistent evidence in support of the proposed theoretical mechanism: reviews written on rainy

1. Demonstrating the role of visual configurations in the context of offline WOM, Berger and Iyengar (2013) report that people are more likely to talk about products visible in their physical surroundings.
2. In the terminology of Moe and Schweidel (2012), we model both incidence (review provision) and adjustment (rating content, e.g., scores).
3. Throughout this article, the term “bad weather” is used to describe all detrimental weather conditions.

days (i) include, on average, a significantly lower proportion of words related to positive affect, (ii) have lower happiness scores, (iii) are less positive overall, and (iv) show lower arousal levels. Interestingly, reviews written on rainy days also showed a lower proportion of negative affect-related words. Additional analyses revealed that this effect corresponded with a reduction in negative, high-arousal emotions, such as anger. In contrast, words related to negative, low-arousal emotions, such as sadness, increased on bad weather days. To gain more insight into the mindset of consumers as they wrote reviews in bad weather, a final set of ex-post analyses was conducted, which found that reviews written on rainy days are longer, more detailed, and less focused on the reviewer. These results are consistent with previous work on the impact of negative mood on cognition (Forgas 2013), and support the identification approach, which assumes that most consumers write hotel reviews at home.

This research makes four major contributions to the literature. First, we show that unpleasant weather, a previously ignored external, situational influence factor for WOM in the marketing literature, has a significant influence on online review provision and content. More generally, we report a cross-channel effect, through which an aspect of consumers’ offline physical environment has a significant influence on their online reviewing behavior. Second, we demonstrate that consumption-unrelated events, such as bad weather on the day the review is provided, may influence online reviews, and that this effect is likely driven by the impact of weather on incidental affect. The third contribution is to show that contemporaneous weather impacts opinion formation and expression for past consumption experiences. Finally, the study provides practitioners with guidance to contextualize their review management approaches. Because businesses know a customer’s residential address, they can combine this information with publicly available weather forecasts to condition the timing of their communication on the presence of rain. Notably, the results indicate that platforms are better off asking for reviews on rainy instead of sunny days, while the opposite holds true for individual businesses.

The rest of this article is structured as follows. The next section reviews the related literature and develops the hypotheses, followed by the data and empirical methodology, and then the results. The article concludes with a discussion of the substantive implications of the findings.

RELATED LITERATURE AND THEORY DEVELOPMENT

A growing body of research indicates that specific weather conditions may influence a broad range of consumer attitudes, decisions, and behaviors. Examples include the greater importance of automobile product and service quality for consumers in regions with more rain and snow (Mittal, Kamakura, and Govind 2004), a greater preference for products with features compatible with prevailing weather conditions (e.g., winter clothes on surprisingly cold days in Conlin, O’Donoghue, and Vogelsang 2007; convertible cars on sunny days in Busse et al. 2015), or for hedonic products that help consumers restore well-being on bad weather days (Govind et al. 2020). A common feature of these studies is that current weather conditions influence customers’ perceived needs at the time of decision-making, making products that help satisfy these needs (now or later) more attractive.

In the extant weather-related literature, however, WOM is a relatively unexplored factor. The few existing studies focus on the effect of weather during restaurant visits on subsequent WOM. For example, Bakhshi, Kanuparthy, and Gilbert (2014) report a negative correlation between bad weather (rain and snow) during a visit and online ratings for the visit. However, given their approach, it remains unclear which mechanism drives this result, that is, whether it simply stems from a worse consumption experience. In this regard, Bujisic et al. (2019) further document that bad weather during a customer’s restaurant visit results in more negative comments and a reduced willingness to engage in WOM, prompted by the negative mood that consumers experienced during their visit. Existing work thus suggests that less pleasant consumption experiences, even those caused by bad weather, result in a lower propensity to engage in WOM, and that the WOM, when available, tends to be relatively negative.

By contrast, this article argues that contemporaneous bad weather conditions, as an example of an unpleasant environment, affect what consumers share with others about their past consumption experiences. We hypothesize and demonstrate that such conditions influence (i) what consumers share with others about their past experiences and (ii) whether they share these experiences at all. These two types of influence are caused by the fact that bad weather (i) affects consumer mood, serving as an incidental affect during reviewing, and (ii) reduces the opportunity costs of sharing.

Weather and Online Review Content

Weather is widely known to influence people’s affective states (Denissen et al. 2008; Govind et al. 2020; Howarth and Hoffman 1984; Kööts, Realo, and Allik 2011; Persinger and Levesque 1983; Sanders and Brizzolara 1982). One study found that weather (changes) explained up to 60% of participants’ daily mood variations (Persinger and Levesque 1983). Govind et al. (2020) established that weather had a causal influence on individuals’ mood through changes in positive affect: relative to a typical day, participants reported significantly higher positive affect for a sunny day, but significantly lower positive affect for a

rainy or snowy day. Since positive affect includes feelings of excitement, enthusiasm, strength, pride, and activity (Watson, Clark, and Tellegen 1988), the results of Govind et al. (2020) help to explain why individuals are relatively more optimistic (Howarth and Hoffman 1984) and less tired (Denissen et al. 2008) on sunny days. On the flip side, people feel relatively less optimistic and less energized on days of rain and snow. Because of these opposing effects, weather is a common technique for manipulating mood in psychological research (Cohen, Pham, and Andrade 2008).

Why would a customer’s mood, prompted by contemporaneous weather conditions, influence their opinion about past consumption experiences? The extant literature establishes that current mood can serve as a type of incidental affect, with affect-congruent influences on judgment and decision-making (Cohen et al. 2008). That is, a good (bad) mood leads to a better (worse) object evaluation, despite its source being unrelated to the object. For example, Schwarz and Clore (1983) found that study participants were less satisfied with life on bad weather days. Two alternative mechanisms have been proposed to explain such mood effects on evaluative judgment (Schwarz 2002); the first postulates that current mood activates mood-congruent memories. According to this associative-network model of memory (Bower 1981), consumers evaluate a recent product experience less favorably on a bad weather day because their current bad mood increases the likelihood of retrieving negative memories about an experience.

The second explanation is the so-called “affect as information” mechanism (Pham 2009; Schwarz 2002; Schwarz and Clore 1983). In this scenario, people evaluate an object by asking how they feel about it; they treat their current mood as a source of information about the value of an object. This process has been termed the “how-do-I-feel-about-it?” heuristic. In support of this explanation, Schwarz and Clore (1983) found that the negative mood effect disappeared when participants were explicitly made aware of the bad weather, allowing them to attribute their mood to the weather, whereas previously they had automatically and subconsciously attributed it to their overall quality of life. If we consider that moods are “generalized, diffuse states or dispositions that are less intense, but last longer than emotional responses” (Schacter et al. 2016, 393), and that individuals often struggle to identify the exact source of their mood, whether good or bad (Cohen et al. 2008), misattribution seems to be the norm rather than the exception. Indeed, Avnet, Pham, and Stephen (2012, 721) conclude, “It appears that, by default, people assume that their momentary feelings are representative of the target to be evaluated.”

This article focuses throughout on the effect of bad weather, such as rain, snow, or both, on online reviews, informed by previous findings that individuals tend to be less sensitive to positive mood manipulation than to a negative one. Schwarz and Clore (1983) found no difference in mood between participants exposed to a positive mood manipulation and those in the control group. Even in the absence of mood manipulation, participants reported being in a rather positive mood. In contrast, mood was significantly lower after a negative mood manipulation than in the control group. Similarly, Govind et al. (2020) reported a smaller difference in positive affect between a neutral and a sunny day (+0.57) than the absolute difference in positive affect between neutral and rainy (−0.95) or snowy (−1.49) days. Given the negative relationship between bad weather and positive affect, and both mood mechanisms detailed above, the following hypothesis is proposed:

H1: Bad weather at a consumer’s residential address on the day of review provision decreases a consumer’s online rating score for a recent hotel stay.

Weather and Online Review Provision

Besides changing individuals’ mood, weather conditions may also affect the opportunity costs of review provision in two ways. First, weather influences the relative attraction of indoor and outdoor activities. Bad weather (particularly rain) has been found to deter people from outdoor activities (Chan, Ryan, and Tudor-Locke 2006; Tudor-Locke et al. 2004), and to encourage people to spend more time at work (Connolly 2008). Essentially, this pattern has been attributed to the reduced opportunity costs of indoor activities on bad weather days. This may influence review provision because more than a third of all reviews are still written on (indoor) desktop computers (Mariani et al. 2019).4 On bad weather days, we thus expect consumers, on average, to engage more with digital platforms indoors.5

Second, bad weather makes people more productive by reducing the distractions of alternative outdoor activities (Lee et al. 2014). Consumers are thus more productive and feel less busy, and busyness is a key obstacle to leaving online product reviews (Statista 2019). Although this is an opportunity-cost mechanism at heart, it predicts more online reviews on bad weather days even if most of them are now written on mobile devices.

In sum, more consumers are expected to write reviews on bad weather days.

4. Mariani et al. (2019) show that, as recently as January 2015, the majority of online reviews were still written on desktop computers.
5. Bad weather is also known to reduce mobile phone reception. “Droplets in the air reduce signal strength, with different-sized droplets affecting specific frequencies in the signal” (Rowe 2006), which further increases the attraction of desktop computers on rainy days. Overall, however, this is likely to be of reduced importance in our data, which are from 2004 to 2017, when fewer customers had mobile phones and mobile phone connectivity was slower than now. For example, 4G only became available in German cities—the focus of the empirical analyses—in 2012 and 2013 (LTE-Anbieter n.d.).

H2: Bad weather at a consumer’s residential address increases the probability that the consumer will write an online review for a recent hotel stay.

Taken together, hypotheses 1 and 2 suggest that bad weather affects both what consumers share with others about their past experiences and whether they share these experiences at all.

DATA

The data were obtained from a major online travel portal based in Europe that serves as a booking and reviewing platform with relatively high volumes of activity.6 They cover the period from September 2004 to May 2017. In 2017, the company had, on average, 23,993 monthly bookings, and by the second half of 2018, attracted more than five million unique monthly users to its website. The platform serves as a mediator between consumers and suppliers of travel services and provides consumer-generated reviews on its website, allowing it to record a rich set of details about consumers, their transactions, and on-site reviews (when provided). In the sample period, the majority of its customers reported residential addresses in Germany.

The portal, while similar in many aspects to mainstream travel portals, such as Expedia, Booking.com, and Orbitz, differs in a few ways relevant to our context. First, it allows reviews from consumers who have booked through the platform and those who have not, thereby combining the Expedia and TripAdvisor models. Second, reviewers evaluate hotel stays across six dimensions: overall quality, service, room, food, location, and entertainment or gym facilities. Unlike most other review platforms, consumers are asked to rate each dimension on a scale from 1 (low) to 6 (high).7 The rating score most prominently published from each review form is the average score across the six dimensions. Each review contains the date and many identifying details of the reviewer, such as age, nationality, trip purpose, and length of stay. Third, reviewers can choose to write long or short online reviews. Finally, like other platforms, it sends review invitation e-mails to consumers who recently returned from their vacation but have not yet left a review.

To explore the relationship between bad weather conditions and online reviews, information on hotel bookings and reviews from the travel platform was matched to international weather data from the German Weather Service (DWD) and the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information (NCEI). The exact data construction procedure is described in web appendix A. Here, we note that the weather conditions at a consumer’s residential address on the day of the review were reconstructed. The assumption was that most consumers returned home within a day of the end date of their vacation,8 and that their domicile is also the geographical area in which they write their reviews. To avoid concerns about the heterogeneity of reviewing behavior across different countries and to ensure considerable variation in weather conditions across residential addresses on any given day, the analysis was restricted to bookings from customers with a residential address in Germany. Weather conditions in Germany, the fourth-largest country in the European Union, can vary substantially across regions on any day. A notable example is presented in figure 1, which shows the weather forecasts for Germany on February 14, 2018: rain for Berlin in the east, heavy snow for München (Munich) in the south, and sun for Köln (Cologne) in the west.

The analysis focused on the impact of precipitation types (rain, snow, and rain with snow) and temperature on online reviews, because the literature review revealed rain to be particularly associated with notable mood and opportunity cost effects, and because precipitation and temperature are the most reliable types of weather information across the 1,900 weather stations in Germany. To model the effect of bad weather on review provision, the decision to write a review was considered a function of residential weather conditions in the first week after the end of the vacation: the first d days were included in the sample if the customer provided a review on day d ≤ 7. If a customer did not provide a review by the end of that week (either because they did not write a review or because they wrote one later), the first seven days were included in our dataset. Thus, our final data include 20,158,624 booking × day observations for 3,050,276 bookings and 341,494 reviews. The review probability for any booking in the first seven days is 11%.

Because the analysis included only reviews written in the first seven days, and given that bad weather across Europe may often last for several days, omitting this information could result in spurious effects of residential weather on online reviews. As it was not possible to directly test the assumption that a consumer provided a review from their residential address, this was a potential concern in the study setting.9 Therefore, salient weather conditions data during the consumer’s stay at the hotel were collected: the shares of rainy and snowy days, and the

6. The company that runs the portal wishes to remain anonymous.
7. The platform allows customers not to rate a dimension on which they lack sufficient information or experience.
8. Based on information from the company, the large majority of customers are leisure travelers, and not business travelers, which should reduce “back-to-back” travel bookings. This was confirmed in our data: for 92.5% of all bookings in our sample, customers did not have another trip starting within the first week after the end of travel, which reduces potential concerns that any identified weather effects may be spurious. As we demonstrate in table 5, excluding the few back-to-back bookings leaves our results unchanged.
9. We are grateful to an anonymous reviewer for bringing this possibility to our attention.
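The sampling rule described above (include days 1 through d if a review arrives on day d ≤ 7, otherwise include days 1 through 7) can be sketched in a few lines. The snippet below is a minimal illustration of that rule only, not the authors’ actual data pipeline; the field names `id`, `travel_end`, and `review_date` are hypothetical.

```python
from datetime import date, timedelta

def booking_day_panel(bookings):
    """Expand each booking into booking x day rows for the first week
    after travel ends, censoring at the review day (if any).

    Each booking is a dict with hypothetical keys 'id', 'travel_end'
    (a date), and 'review_date' (a date or None). Day d (1..7) enters
    the panel if no review was written before day d; the review day
    itself is kept with reviewed=True, and later days are dropped.
    """
    rows = []
    for b in bookings:
        for d in range(1, 8):                     # days 1..7 after travel
            day = b["travel_end"] + timedelta(days=d)
            reviewed = b["review_date"] == day
            rows.append({"booking": b["id"], "day": d,
                         "date": day, "reviewed": reviewed})
            if reviewed:                          # censor after the review
                break
    return rows
```

Under this rule, a booking reviewed on day 3 contributes three rows (the third marked as reviewed), while an unreviewed booking contributes seven, which is how a panel of roughly 20 million booking × day observations can arise from about 3 million bookings.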

average temperature. Detailed information on the hotel’s geolocation, as well as data from the NCEI weather archive, was collected, which helped to control for the possibility that bad weather conditions at the residential address were correlated with those at the hotel location.

Table 1 presents summary statistics for the dataset, separately for the bookings and reviews.10 Panel (a) shows the descriptive statistics at the individual booking level for the 3,050,276 bookings in the sample. The average travel length was approximately seven days, and 17% of the bookings resulted in a review. As previously mentioned, the probability of a review being provided in the first week after travel was 11%. Panel (a) also provides information on weather conditions at the hotel address, which we managed to reconstruct for about 81% of all bookings (2,467,271 out of 3,050,276).

The weather at the destination shows considerable variation across bookings, with an average of 24% rainy days, 2% snowy days, and an average temperature of 20° Celsius across the complete stay. Almost half of the bookings (48%) involved a stay at a four-star hotel.11 Finally, as panel (a) shows, the average time between two bookings from the same customer is about one year, and the reduced number of observations shows that the majority of consumers had only one booking in the sample period.

Panel (b) of table 1 presents the descriptive statistics at the individual review level. In total, 341,494 reviews were written within the first week after travel, approximately 64% of all reviews for the bookings in the sample. The rating scores tended to be positive, with a relatively high average of 5.03, consistent with other studies on online reviews (Chevalier and Mayzlin 2006). With the exception of the difference between five-star and six-star hotels, differences in average rating scores across hotel star categories are quite small (between 0.16 and 0.24 points). Moreover, consumers who reviewed within the first week after travel did so after 3.5 days on average.

Finally, the data exhibit considerable variation in the proportion and intensity of words related to a wide variety of emotions, including positive emotions, negative emotions, anger, sadness, happiness, valence, and arousal in review texts. These values were constructed from three automated sentiment analysis tools discussed in the results section.12

Panel (b) of table 1 also shows the reconstructed weather conditions at the residential address on the day of review provision for about 92% of all reviews (314,163 out of 341,494). They show considerable variation: 20% of reviews were written on rainy days, 1% on snowy days, and 2% on days with rain and snow. The average temperature across all review days was 11.5° Celsius, although some reviews were written on substantially colder (−20.2° Celsius) and hotter (30.6° Celsius) days. Average weather conditions during vacation for reviewed bookings do not seem to differ from those across all bookings.

METHODOLOGY

The analysis used a Heckman (1979) sample selection model to treat review content (star rating) and review provision as two separate but related processes, adjusting for possible correlation between the error terms in both equations; this modeling approach is consistent with recent empirical work on online reviews (Karaman 2020; Lee et al. 2021).

How Weather Affects Online Review Content: Model Specification

In our main specification, we model a review’s overall rating score as a linear function of weather condition measures at the residential address on the day of review provision (rain, snow, rain and snow, and mean temperature), booking characteristics (length of stay and hotel fixed effects), customer characteristics (residential zip code fixed effects), and review-provision time characteristics (written on weekends vs. weekdays, days since the end of travel, month fixed effects):

$$\mathit{Rating}_{itjr} = \delta_0 + \mathbf{W}'_{itr}\,\boldsymbol{\delta}_W + \mathbf{B}'_{ij}\,\boldsymbol{\delta}_B + \mathbf{T}'_{it}\,\boldsymbol{\delta}_T + \mathbf{C}'_{r}\,\boldsymbol{\delta}_C + \varepsilon_{itjr} \qquad (1)$$

where $\mathit{Rating}_{itjr}$ denotes the star rating in consumer i’s online review written on day t for their stay at hotel j, and where consumer i lives in residential area r. All terms in bold denote vectors. Specifically, $\mathbf{W}'_{itr} = [\,\mathit{rain}_{itr}\;\; \mathit{snow}_{itr}\;\; \mathit{rain\_and\_snow}_{itr}\;\; \mathit{mean\_temperature}_{itr}\,]$ contains all weather-related measures, $\mathbf{B}'_{ij}$ denotes booking characteristics such as length of stay and hotel fixed effects to capture any
10 In web appendix B, we provide correlation tables between our fo- time-invariant
0 hotel-specific influences on the rating score;
cal variables on the booking and review level.
Tit reflects all time characteristics such as a weekend
11 The travel platform classifies hotel star categories from 0 to 6 stars
in 0.5-star increments. For our research purposes, we bin 0–1 star dummy, a set of month fixed effects to capture monthly var-
into the 1-star category, 1.5 and 2 stars into the 2-star category, 2.5 iations in weather conditions throughout the year, and a set
and 3 stars into the 3-star category, and so on. of dummies to capture nonlinear0 effects in the days since the
12 Since different tools were used to analyze specific emotions, and end of travel (latency); and, Cr contains fixed effects for
because these tools differ in their classification approach of very
short texts as either missing values (e.g., Hedonometer) or 0s (e.g., customer i’s residential area r. This latter set of fixed effects
LIWC), the number of observations differs across emotions. helps to capture customer time-invariant heterogeneity at the
BRANDES AND DOVER 601

FIGURE 1

VARIABILITY IN WEATHER CONDITIONS ACROSS GERMANY

Note. Weather forecast for Germany for February 14, 2018. Source: www.wetteronline.de. Reproduced with permission.

regional level. Specifically, customer mentality may differ across more rainy regions and more sunny regions because customers self-select into their residential areas.13 Subsequently, $\mathbf{W}'_{itr}$ is extended to include weather condition measures at the hotel address during the stay (shares of rainy and snowy days, and average temperature throughout the stay).

How Weather Affects Review Provision: Model Specification

We model the probability $p_{itjr}$ that consumer $i$, who lives in residential area $r$, will write a review for the vacation at hotel $j$ on day $t$ ($t = 1, \ldots, 7$) after the end of the vacation. In addition to the same set of weather conditions at the consumer's residential address on day $t$, booking characteristics, customer characteristics, and review-provision time characteristics as in equation 1, we follow Moe and Schweidel (2012) and include, for each day $t$, information on the historical Google search activity from Germany (using search terms in German) for hotel $j$'s city on that day.14 Because the intention to help others is a key motivation for WOM (Berger 2014), customers are expected to be more likely to write a review in times of keen interest in the travel destination:

$$p_{itjr} = \beta_0 + \mathbf{W}'_{itr}\boldsymbol{\beta}_W + \mathbf{B}'_{ij}\boldsymbol{\beta}_B + \mathbf{T}'_{it}\boldsymbol{\beta}_T + \mathbf{C}'_{r}\boldsymbol{\beta}_C + \mathbf{G}'_{jt}\boldsymbol{\beta}_G + \mu_{itjr} \quad (2)$$

where $\mathbf{G}'_{jt}$ includes both the linear and squared terms of Google search activity for hotel $j$'s city on day $t$. Omitting the terms in $\mathbf{G}'_{jt}$ from the ratings equation 1 provides the exclusion restrictions needed to identify the ratings effect when selection is controlled for by inclusion of the inverse Mills ratio in the Heckman sample selection model. The next subsection discusses how to construct this ratio and estimate the model.

Estimation Procedure

The rating score $\mathit{Rating}_{itjr}$ is observed only if customer $i$ decides to write a review, which is unlikely to be random. However, estimating (1) via OLS using only the observed ratings, regressed on $\mathbf{W}'_{itr}$, $\mathbf{B}'_{ij}$, $\mathbf{T}'_{it}$, and $\mathbf{C}'_{r}$, is known to give inconsistent estimates of the true parameter values unless the error terms $\epsilon_{itjr}$ and $\mu_{itjr}$ are uncorrelated. To account for the potential correlation between the errors in equations 1 and 2, the study applies a variant of Heckman's two-step procedure, based on first estimating a consumer's probability of writing a review, and then augmenting the rating score equation 1 by including the estimated inverse Mills ratio, $\lambda(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}})$, where

$$\mathbf{Z}'_{itr} = [\,1\;\;\mathbf{W}'_{itr}\;\;\mathbf{B}'_{ij}\;\;\mathbf{T}'_{it}\;\;\mathbf{C}'_{r}\;\;\mathbf{G}'_{jt}\,] \quad \text{and} \quad \hat{\boldsymbol{\beta}}' = [\,\hat{\beta}_0\;\;\hat{\boldsymbol{\beta}}'_W\;\;\hat{\boldsymbol{\beta}}'_B\;\;\hat{\boldsymbol{\beta}}'_T\;\;\hat{\boldsymbol{\beta}}'_C\;\;\hat{\boldsymbol{\beta}}'_G\,].$$

$$\mathit{Rating}_{itjr} = \tilde{\delta}_0 + \mathbf{W}'_{itr}\tilde{\boldsymbol{\delta}}_W + \mathbf{B}'_{ij}\tilde{\boldsymbol{\delta}}_B + \mathbf{T}'_{it}\tilde{\boldsymbol{\delta}}_T + \mathbf{C}'_{r}\tilde{\boldsymbol{\delta}}_C + \sigma_{12}\,\lambda(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}}) + \eta_{itjr} \quad (1')$$

To obtain the inverse Mills ratio, the standard procedure would be to estimate the review probability from equation 2 by a Probit model, and then to construct

$$\lambda(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}}) = \frac{\phi(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}})}{\Phi(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}})},$$

where $\phi(\cdot)$ denotes the standard

13 Alternatively, the region fixed effects can also be considered as controls for differences in average weather conditions across different regions.
14 We considered collecting Google search trends directly at the hotel level. However, most of the 18K hotels in the dataset had zero search hits on any given day. Therefore, search trends at the city level were included. This also reflects anecdotal evidence that consumers first choose which city to travel to, and then select a hotel at the location. Accordingly, interest in the location initially drives a customer's information search.
602 JOURNAL OF CONSUMER RESEARCH

TABLE 1

SUMMARY STATISTICS

Panel (a): Booking level

Variable | Mean | SD | Min | Max | N
Length of stay | 7.36 | 4.31 | 0 | 30 | 3,050,276
Share of reviewed bookings | 0.17 | 0.38 | 0 | 1 | 3,050,276
Share of bookings reviewed within first seven days | 0.11 | 0.32 | 0 | 1 | 3,050,276
Share of rainy days abroad | 0.24 | 0.29 | 0 | 1 | 2,476,271
Share of snowy days abroad | 0.02 | 0.11 | 0 | 1 | 2,476,271
Average temperature abroad (in Celsius) | 20.06 | 7.68 | −29.31 | 37.72 | 2,476,271
Share of bookings for 1-star hotels | 0.013 | 0.11 | 0 | 1 | 3,050,276
Share of bookings for 2-star hotels | 0.03 | 0.17 | 0 | 1 | 3,050,276
Share of bookings for 3-star hotels | 0.20 | 0.40 | 0 | 1 | 3,050,276
Share of bookings for 4-star hotels | 0.48 | 0.50 | 0 | 1 | 3,050,276
Share of bookings for 5-star hotels | 0.27 | 0.44 | 0 | 1 | 3,050,276
Share of bookings for 6-star hotels | 0.005 | 0.07 | 0 | 1 | 3,050,276
Days between two consecutive bookings per customer | 367 | 487.55 | 0 | 4,473 | 1,200,530

Panel (b): Review level

Variable | Mean | SD | Min | Max | N
Rating score (1–6) across all reviews | 5.03 | 0.85 | 1 | 6 | 341,494
Number of days until review (latency) | 3.51 | 1.91 | 1 | 7 | 341,494
Share of positive emotion words in review text | 5.54 | 5.90 | 0 | 100 | 341,493
Share of negative emotion words in review text | 1.28 | 1.75 | 0 | 100 | 341,493
Share of sadness-related words in review text | 0.42 | 1.11 | 0 | 100 | 341,493
Share of anger-related words in review text | 0.56 | 1.21 | 0 | 37.5 | 341,493
Happiness score in review text | 5.36 | 0.93 | 0 | 7.82 | 341,217
Valence score in review text | 5.86 | 1.05 | 0 | 9.55 | 340,986
Arousal score in review text | 4.03 | 0.73 | 0 | 6.97 | 340,986
Rating score for 1-star hotels | 4.80 | 1.02 | 1 | 6 | 3,437
Rating score for 2-star hotels | 4.63 | 1.00 | 1 | 6 | 6,549
Rating score for 3-star hotels | 4.79 | 0.91 | 1 | 6 | 62,953
Rating score for 4-star hotels | 5.03 | 0.82 | 1 | 6 | 168,274
Rating score for 5-star hotels | 5.20 | 0.80 | 1 | 6 | 99,634
Rating score for 6-star hotels | 5.63 | 0.50 | 3 | 6 | 647
Share of review days with rain at customer residential address | 0.20 | 0.40 | 0 | 1 | 314,163
Share of review days with snow at customer residential address | 0.01 | 0.11 | 0 | 1 | 314,163
Share of review days with rain and snow at customer residential address | 0.02 | 0.12 | 0 | 1 | 314,163
Average temperature on review day at customer residential address | 11.58 | 6.64 | −20.2 | 30.6 | 314,647
Share of rainy days abroad | 0.22 | 0.26 | 0 | 1 | 281,204
Share of snowy days abroad | 0.02 | 0.10 | 0 | 1 | 281,204
Average temperature abroad (in Celsius) | 20.32 | 7.25 | −21.0 | 37.03 | 281,204

normal density, and $\Phi(\cdot)$ is the associated cumulative distribution function. However, because of the increased computational intensity of the Probit model, and because of the very large number of fixed effects in the model (the most comprehensive specification for the review provision equation 2 includes more than 82,000 fixed effects), similar to the previous literature (Pope and Schweitzer 2011), a linear probability model (LPM) was used instead of the Probit model to estimate equation 2, as is frequently done in empirical economics when there are many fixed effects in the model (Bartling, Brandes, and Schunk 2015). Specifically, equation 2 was estimated using Stata's reghdfe estimation procedure (Correia 2017).15 This command implements an efficient and feasible estimator for linear models with high-dimensional fixed effects, which augments the fixed-point iteration of Guimaraes and Portugal (2010) and Gaure (2013), and has the advantages of speed over other algorithms and the flexibility to adjust standard errors for clustering at both the hotel and residential area levels.

15 This algorithm is widely used, as is evident from more than 250 citations of Correia (2017) as of April 2021. Studies using this algorithm have recently been published in top-tier academic journals, such as the American Economic Review (Deryugina et al. 2019) or the Journal of Financial Economics (Luck and Zimmermann 2020).
Based on these estimates, we constructed $\lambda(\mathbf{Z}'_{itjr}\hat{\boldsymbol{\beta}}^{\mathrm{LPM}})$ as above. To estimate the augmented rating score equation 1', the reghdfe algorithm was also used. The use of a linear model is appropriate for the analysis of rating scores because average rating scores between integer values (e.g., 3.3) were observed in the dataset. If $\hat{\sigma}_{12}$ (the estimated coefficient of the inverse Mills ratio term) is significantly different from 0, then the error terms across equations 1 and 2 are correlated and a sample correction model is required. The study reports bootstrapped standard errors adjusted for clustering at the hotel and residential area levels, the recommended practice given that standard errors in the second step of Heckman's two-step procedure are known to be incorrect (Cameron and Trivedi 2005).

RESULTS

We first present the results on the effect of weather conditions on review scores, without and with controls for weather conditions at the hotel destination. Notably, the weather at the hotel is omitted from the review provision equation because our theory is that residential weather conditions influence the opportunity costs of writing a review on that day. Therefore, the study does not theorize why (and how) these opportunity costs should be influenced by past weather at the hotel. Accordingly, all the results on the effect of weather conditions on review scores involve the same specification for the review provision decision.16 The second part of this section presents the results on how weather affects review provision. Next, we report how weather impacts the parameters of the review text, which helps to shed some light on the mechanism behind the review score results. We then document the robustness of the effect of bad weather on review scores when excluding back-to-back bookings, across hotel star tiers, and when controlling for a particular type of customer heterogeneity, namely gender. Finally, we report the findings on the effect of bad weather when extending [restricting] the dataset to the first 10 [four] days after travel.

Effects of Bad Weather on Review Scores

Column 1 in table 2 gives the estimation results of the rating score analysis, with the posted star rating as the dependent variable. Bad weather conditions at a customer's residential address reduced such ratings, and the presence of rain was particularly impactful: compared to reviews on days without precipitation, those on days with rain, or rain and snow, were 0.10 stars lower on average. Relative to the standard deviation in rating scores (0.85), this was a 12% change.

Column 2 in table 2 presents the estimation results from an extended model specification that includes controls for weather conditions at the hotel address during the customer's stay, which finds a 0.10-star reduction on days with any form of rain at a customer's residential address. In addition, column 2 reveals that bad weather during a vacation tends to reduce the average star rating; for example, a vacation with rain every day resulted in a 0.04-star rating score deduction. Similarly, a vacation with snow each day resulted in a 0.03-star deduction, although this latter effect failed to achieve statistical significance (p = .108). While not the focus of this study, the result that bad weather during the consumption experience results in more negative reviews is consistent with recent works (Bujisic et al. 2019).

We also tested whether the effects of residential weather vary with the weather experienced during one's hotel stay. For example, consumers might experience hedonic adaptation (Frederick and Loewenstein 1999) in response to extended bad-weather spells, such that additional rain no longer has an aversive effect on their mood. In this case, bad weather at home might have a relatively smaller effect on rating scores for customers who experienced long rain spells at the hotel than for those who experienced more pleasant weather. Column 3 in table 2 contains the results from an extended model that includes interaction terms between rain at the hotel and weather at the residential address. These results confirmed the previous finding that rain and rain with snow at the residential address reduced the rating score by approximately 0.10 stars, but did not show any evidence for adaptation. However, residential temperature had a more positive effect on the rating when the weather at the hotel was worse (i.e., a higher share of rainy days). This finding may be seen as a contrast effect, because of the negative relationship between weather at the hotel and the impact of residential weather on rating scores (Bless and Schwarz 2010). However, it remains unclear why such an effect would only exist for different temperature levels, and not for the (perhaps more salient) bad weather involving rain. Irrespective of this partial interaction effect, the findings from all three model specifications demonstrate a negative effect of bad residential weather on rating scores, and thus support hypothesis 1.

Effects of Bad Weather on Review Provision

Column 4 in table 2 contains the estimation results for the review provision analysis, using the binary review-provided indicator as the dependent variable. Rainy weather conditions at a customer's residential address increased review provision. Relative to days without precipitation, review provision was, on average, 0.0011 and 0.0009 higher on days with rain and days with rain and snow, respectively.

16 Web appendix C reports results when including weather conditions at the hotel as additional controls in the review provision equation. However, greater weight is placed on the model specification without interactions in the review provision model, because it is more consistent with our theoretical reasoning.
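As a quick arithmetic check (our own illustration), the relative effect sizes quoted in this section follow directly from the table 2 coefficients, the rating-score standard deviation in table 1, and the mean daily review probability of 0.017:

```python
# Effect sizes: coefficients scaled by the rating SD (for scores)
# and by the mean daily review probability (for provision).
rating_sd = 0.85
mean_daily_prob = 0.017

print(f"{-0.106 / rating_sd:.1%}")               # -> -12.5% (text: "12%")
print(round(100 * 0.0011 / mean_daily_prob, 1))  # -> 6.5 (% for rain)
print(round(100 * 0.0009 / mean_daily_prob, 1))  # -> 5.3 (% for rain and snow)
```

The first figure matches the −12.47% reported in table 4, panel (a); the latter two are the 6.5% and 5.3% provision increases discussed with table 2.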

TABLE 2

EFFECTS OF BAD WEATHER ON REVIEW SCORES AND REVIEW PROVISION

Variables | (1) Star rating | (2) Star rating | (3) Star rating | (4) Review provision

Weather at residential address on review day
Rain | −0.106*** (0.008) | −0.101*** (0.009) | −0.106*** (0.010) | 0.0011*** (0.0001)
Snow | 0.002 (0.015) | 0.004 (0.014) | 0.017 (0.019) | 0.0004 (0.0003)
Rain and snow | −0.100*** (0.014) | −0.098*** (0.013) | −0.094*** (0.017) | 0.0009** (0.0003)
Mean temperature | 0.003*** (0.0005) | 0.003*** (0.001) | 0.002*** (0.0006) | 0.00004*** (0.00001)

Weather at hotel address during stay
Share of rainy days | – | −0.043*** (0.007) | −0.093*** (0.012) | –
Share of snowy days | – | −0.030 (0.018) | | –
Mean temperature | – | 0.004*** (0.001) | | –

Weather at hotel × Weather at residential address
Rain × Share of rainy days | – | – | 0.011 (0.017) | –
Snow × Share of rainy days | – | – | 0.053 (0.053) | –
Rain and snow × Share of rainy days | – | – | 0.021 (0.042) | –
Mean temperature × Share of rainy days | – | – | 0.005*** (0.0009) | –

Controls
Length of stay | 0.031*** (0.003) | 0.031*** (0.003) | 0.032*** (0.003) | 0.0004*** (0.00001)
Weekend | 0.026*** (0.004) | 0.027*** (0.005) | 0.028*** (0.005) | 0.0003*** (0.0001)
Google trends | – | – | – | 0.00003*** (5.40E−06)
Google trends (squared) | – | – | – | 1.13E−07*** (1.86E−08)
Inverse Mills ratio | Significant | Significant | Significant | –
Individual month dummies (Jan–Dec) | Yes | Yes | Yes | Yes
Latency dummies (1–7) | Yes | Yes | Yes | Yes
Hotel fixed effects | Yes | Yes | Yes | Yes
Zip code fixed effects | Yes | Yes | Yes | Yes
Wald Chi2/F-Stat. | 1,316.32*** | 1,387.23*** | 1,382.00*** | 666.88***
Adjusted R-squared | 0.219 | 0.220 | 0.220 | 0.005
# zip code clusters | 7,729 | 7,576 | 7,576 | 14,112
# hotel clusters | 15,741 | 13,298 | 13,298 | 68,211
Observations | 298,058 | 252,156 | 252,156 | 18,024,576

NOTE.— Cluster-adjusted standard errors are shown in parentheses. For star rating analyses, standard errors were based on 250 bootstrap replications.
***p < .001, **p < .01, *p < .05, †p < .1.

Relative to the unconditional mean daily review probability of 0.017, customers were 6.5% and 5.3% more likely to write a review on days with rain and days with rain and snow, respectively. Measured in terms of the daily standard deviation (0.13), rain increases review probability by 0.9%. For the exclusion restriction variable, Google search activity, there was a significant nonlinear effect, such that consumers are more likely to write a review when interest in the hotel's city is very high. Column 4 results support hypothesis 2.

Effects of Bad Weather on Review Text

Given the observed negative effect of bad weather on review scores, it was important to know if this was associated with a corresponding negative effect on emotionality

in the review text itself. Besides being directly relevant for consumer decision-making (Humphreys and Wang 2018; Ludwig et al. 2013), weather-related changes in the review text may provide some hints about the mechanism of the review score effect. Specifically, under the mood-related mechanism, customers were expected to be, on average, less happy, and to write less positively or more negatively on rainy days than on days without. To test this expectation, three automated sentiment analysis tools were used.

The first two, Linguistic Inquiry and Word Count software (LIWC2015; Pennebaker et al. 2015) and Hedonometer (Dodds and Danforth 2010; Dodds et al. 2011), have been extensively used in marketing research to analyze various types of user-generated texts (Ludwig et al. 2013; Melumad, Inman, and Pham 2019; Van Laer et al. 2019). Both of these tools operate on dictionaries of linguistic categories. As all the reviews in the dataset were written in German, the German adaptation of LIWC, DE-LIWC2015, was used (Meier et al. 2018). The analysis focused on the "positive emotion" and "negative emotion" categories; in DE-LIWC2015, the "positive emotion" category consists of 2,243 words [e.g., glücklich (happy)] and the "negative emotion" category consists of 2,739 words [e.g., heulen (cry)].

While Hedonometer also classifies texts according to the positivity or negativity of emotion-related words, it does not just count the proportion of emotional words in a text as LIWC does: it also incorporates information on the relative emotionality (on a scale from 1 to 9) of 9,762 words (in the list of German words called labMT-de-v2) that differ in their degree of happiness and sadness. For example, while "Liebe (love)" and "Energie (energy)" are both positive words, Liebe is happier (happiness score of 7.94 vs. 6.76). By incorporating the relative strength of emotions, a more continuous measure of emotionality in the review text was obtained. High scores were interpreted as positive texts, and low scores as negative.

Finally, a large body of work uses affective norms to classify emotional dimensions within texts (for the English language: Warriner, Kuperman, and Brysbaert 2013). Here, a text classification approach (KSiW hereafter) was applied, based on the largest affective dictionary for the German language (Köper and Schulte im Walde 2016). The dictionary rates more than 350,000 German words on their relative level of pleasantness (valence), emotional intensity (arousal), abstractness, and imaginability. The analysis focused on the valence and arousal categories, with a higher valence rating indicating higher pleasantness and a higher arousal rating indicating higher emotional energy.

Table 3 displays estimation results for the effect of bad weather on review texts. In all LIWC specifications, the dependent variable was the proportion of emotional words in a subcategory. In all other specifications, the dependent variable was the overall rating of emotionality in a subcategory. In column 1, reviews written on bad-weather days contained fewer words in LIWC's positive emotion subcategory. This effect holds for all three types of precipitation. Relative to the overall standard deviation of the proportion of positive affect (5.90), these estimates indicate reductions of 4.5%, 7%, and 8.5% on rainy, snowy, and rain-with-snow days, respectively. Column 2 shows that words related to negative emotions also appeared less frequently in reviews written on days involving rain. Compared to the standard deviation (1.75), this shows 5% and 3.7% reductions on rainy days and days with rain and snow.

Both Hedonometer and Köper and Schulte im Walde's (2016) approach provide more continuous measures of positivity and show [columns 3 and 4] that the net effects of rain and rain with snow considerably reduce happiness and pleasantness. Relative to their associated standard deviations, happiness and valence were approximately 6.2% and 7.5% lower on rainy days, respectively. There were similar effect sizes (5.7% and 7.2%) for days with rain and snow. Column 5 shows a reduction in emotional intensity (arousal) on days with rain and rain with snow, representing a shift of 1.8% to 2.5% relative to the standard deviation. Inspired by this finding and arousal-valence models of emotions (Russell 1980), we examined whether the reduction in the proportion of negative emotions might correspond to a reduction in high-arousal (vs. low-arousal) negative emotions.

In the next step, DE-LIWC2015's subcategories anger, a negative high-arousal emotion, and sadness, a negative low-arousal emotion, were considered. The anger category comprises 1,014 words [e.g., sauer (angry)] and the sadness category, 691 words [e.g., traurig (sad)]. Table 3, column 6 shows anger was considerably lower on days with rain or rain with snow. Relative to the standard deviation, this amounted to a 6.6% reduction. However, the proportion of words related to sadness was between 5% and 6% higher on days with any form of rain (column 7). These results not only corroborate our previously reported findings on happiness, valence, and arousal, but also support the widely held belief that bad weather may result in greater sadness.

Overall, this set of analyses finds bad weather at a customer's residential address on the day of review provision to be associated with less happy, less pleasant, and less positive review texts. Taken together, the results obtained from three different sentiment analysis tools provide consistent, suggestive evidence for the proposed mechanism that rain affects rating scores through incidental mood. For the reader's convenience, the effect sizes for all review content variables, including review provision and rating scores, are summarized in panel (a) of table 4.
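The difference between the two dictionary logics can be made concrete with a small sketch. The mini-lexicons below are invented for illustration; the real DE-LIWC2015 and labMT-de-v2 lists contain thousands of entries, and the real tools handle tokenization, stems, and edge cases far more carefully.

```python
# LIWC-style scoring: percentage of words falling in a category.
# Hedonometer-style scoring: mean happiness (1-9 scale) of the words
# that appear in the weighted lexicon.
positive_words = {"glücklich", "liebe", "energie"}   # toy category
happiness_scores = {"liebe": 7.94, "energie": 6.76}  # toy labMT-style list

def liwc_share(text, category):
    """Share (%) of tokens that belong to the dictionary category."""
    words = text.lower().split()
    return 100 * sum(w in category for w in words) / len(words)

def hedonometer(text, lexicon):
    """Average happiness score over tokens found in the lexicon."""
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else None

review = "liebe das hotel volle energie"
print(liwc_share(review, positive_words))            # -> 40.0
print(round(hedonometer(review, happiness_scores), 2))  # -> 7.35
```

The example shows why Hedonometer yields a more continuous measure: the same two "positive" hits produce one count-based score (40% positive words) and one intensity-weighted score (7.35, pulled up by the happier word Liebe).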

TABLE 3

EFFECTS OF BAD WEATHER ON REVIEW TEXTS

Variables | (1) Positive emotions (LIWC) | (2) Negative emotions (LIWC) | (3) Happiness (Hedonometer) | (4) Valence (KSiW) | (5) Arousal (KSiW) | (6) Anger (LIWC) | (7) Sadness (LIWC)

Weather at residential address on review day
Rain | −0.265*** (0.050) | −0.090*** (0.018) | −0.058*** (0.002) | −0.079*** (0.003) | −0.018*** (0.003) | −0.196*** (0.011) | 0.068*** (0.012)
Snow | −0.413*** (0.109) | 0.011 (0.032) | −0.011** (0.003) | −0.016* (0.007) | 0.001 (0.005) | 0.005** (0.017) | 0.032 (0.021)
Rain and snow | −0.491*** (0.097) | −0.065* (0.033) | −0.053*** (0.003) | −0.076*** (0.006) | −0.013** (0.005) | −0.127*** (0.020) | 0.057** (0.021)
Mean temperature | 0.005 (0.004) | 0.004*** (0.001) | 0.002*** (0.0001) | 0.002*** (0.0002) | 0.001*** (0.0001) | 0.008*** (0.001) | 0.002** (0.0007)

Controls
Length of stay | 0.114*** (0.018) | 0.034*** (0.007) | 0.017*** (0.001) | 0.025*** (0.001) | 0.004*** (0.001) | 0.063*** (0.004) | 0.016*** (0.004)
Weekend | 0.084** (0.028) | 0.017† (0.009) | 0.012*** (0.001) | 0.014*** (0.002) | 0.001 (0.001) | 0.006*** (0.006) | 0.031*** (0.006)
Inverse Mills ratio | Significant | Significant | Significant | Significant | Significant | Significant | Significant
Individual month dummies (Jan–Dec) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Latency dummies (1–7) | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Hotel fixed effects | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Zip code fixed effects | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Wald Chi2 | 12,715.88*** | 426.49*** | 2,055.85*** | 1,632.55*** | 235.63*** | 3,417.57*** | 1,856.97***
Adjusted R-squared | 0.076 | 0.022 | 0.053 | 0.063 | 0.031 | 0.038 | 0.017
# zip code clusters | 7,729 | 7,729 | 7,728 | 7,728 | 7,728 | 7,729 | 7,729
# hotel clusters | 15,741 | 15,741 | 15,736 | 15,730 | 15,730 | 15,741 | 15,741
Observations | 298,058 | 298,058 | 297,802 | 297,588 | 297,588 | 298,058 | 298,058

NOTE.— KSiW is the dictionary classification approach developed by Köper and Schulte im Walde (2016). Cluster-adjusted standard errors are shown in parentheses. For star rating analyses, standard errors were based on 250 bootstrap replications. Even though rain and rain with snow both reduce positive and negative emotions, the results are based on word counts, and not on their relative positivity or negativity.
***p < .001, **p < .01, *p < .05, †p < .1.
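As a consistency check of our own, the "rainy days" effect sizes reported in table 4, panel (a) can be reproduced by dividing the table 3 rain coefficients by the corresponding standard deviations from table 1, panel (b):

```python
# Table 3 rain coefficients scaled by the table 1 standard deviations
# reproduce the "change on rainy days" column of table 4, panel (a).
coefs = {"positive": -0.265, "negative": -0.090, "happiness": -0.058,
         "valence": -0.079, "arousal": -0.018, "anger": -0.196,
         "sadness": 0.068}
sds = {"positive": 5.90, "negative": 1.75, "happiness": 0.93,
       "valence": 1.05, "arousal": 0.73, "anger": 1.21, "sadness": 1.11}
for dim in coefs:
    print(dim, f"{coefs[dim] / sds[dim]:+.2%}")
```

The printed values (−4.49%, −5.14%, −6.24%, −7.52%, −2.47%, −16.20%, +6.13%) match table 4 exactly.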

Other Effects of Rainy Weather on Review Content

The previous section focused on the dimensions of the review text theoretically most closely related to the proposed mechanism's influence on cognitive content: the influence of rain on what customers think during reviewing. This section reports evidence from a post hoc analysis of additional LIWC dimensions to provide more insights on the consumer's mindset, or how customers think when posting on rainy days.17 The results are displayed in table 4, panel (b).

In addition to their effect on cognitive content, moods have also been demonstrated to influence the process of cognition in several ways (Forgas 2017). For example, negative mood makes people more likely to use more effortful and systematic information-processing strategies. Under the proposed mood-related mechanism, reviews written on rainy days were expected to be characterized by a more careful and deliberate cognitive thinking style. Consistent with this expectation, we found a 20% increase for words in the subcategory of "cognitive processes" [e.g., denken (think), wissen (know)] for reviews written on rainy days.

To explore further changes in the reviewers' cognition process on rainy days, three additional patterns were selected, as identified by prior research on mood and cognition (Forgas 2013): negative mood (i) results in more detailed descriptions of past events, (ii) leads to more persuasive arguments, and (iii) promotes a greater focus on others than on the self. These patterns were the focus because they have been shown to improve review helpfulness and influence (Filieri 2015; Wang and Karimi 2019). To assess whether consumers on rainy days write more detailed reviews (i), a large number of LIWC subcategories were considered, including words per sentence (as a proxy for increased details per sentence), prepositions, functional words, common verbs, common adjectives, comparisons, numbers, and quantifiers. There was evidence for increased use of words in all these dimensions, except common adjectives, on rainy days.

To evaluate review persuasiveness (ii), the study focused on the two subdimensions of "certainty" and "authenticity"

17 We are grateful to the Associate Editor for suggesting this further analysis to us.

TABLE 4

SUMMARY OF EFFECT SIZES

Review content dimensions | Change on rainy days | Change on snowy days | Change on days with rain and snow

Panel (a): core analysis
Review provision (a) | +0.9% | NS | +0.7%
Rating score | −12.47% | NS | −11.76%
Positive emotions (LIWC) | −4.49% | −7.00% | −8.32%
Negative emotions (LIWC) | −5.14% | NS | −3.71%
Happiness (Hedonometer) | −6.24% | −1.18% | −5.70%
Valence (KSiW) | −7.52% | −1.52% | −7.24%
Arousal (KSiW) | −2.47% | NS | −1.78%
Anger (LIWC) | −16.20% | +0.41% | −10.50%
Sadness (LIWC) | +6.13% | NS | +5.14%

Panel (b): ex-post analysis
Cognitive processes | +19.84% | NS | +13.97%
Personal pronoun: I | −4.13% | NS | −5.38%
Personal pronoun: you | NS | NS | NS
Words per sentence | +2.51% | +7.2% | NS
Prepositions | +21.85% | NS | +16.85%
Functional words | +24.33% | NS | +18.47%
Common verbs | +22.70% | NS | +18.09%
Common adjectives | NS | −4.52% | NS
Comparisons | +7.61% | NS | +4.94%
Numbers | +18.53% | NS | +15.64%
Quantifiers | +11.25% | NS | +7.35%
Certainty | +9.98% | −5.21% | +6.93%
Authenticity | +9.00% | NS | +5.67%
Review length (word count) | +47.37% | NS | +38.55%

NOTE.— All changes were measured relative to the standard deviation of a dimension. Only statistically significant effects are displayed. NS: not significant (at the 5% level).
(a) Note that this is the daily review probability, which explains the much smaller effect size. All other variables in this table refer to the review level.
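Two of the panel (b) dimensions reduce to simple token counting. The sketch below is a naive string-based stand-in of our own (LIWC's tokenization and category lists are far more elaborate), showing words per sentence and the share of the first-person singular pronoun ich:

```python
# Words per sentence and the share of first-person-singular pronouns
# ("ich"), computed with naive splitting on sentence punctuation.
import re

def words_per_sentence(text):
    """Average number of whitespace-separated words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def ich_share(text):
    """Share (%) of tokens equal to the pronoun 'ich'."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    return 100 * tokens.count("ich") / len(tokens)

review = "Ich war im Hotel. Das Zimmer war sauber und ich komme wieder!"
print(words_per_sentence(review))   # -> 6.0
print(round(ich_share(review), 1))  # -> 16.7
```

In the paper's analysis, such shares are the dependent variables of the same selection-corrected regressions as before, so the panel (b) entries are again coefficients scaled by each dimension's standard deviation.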

(supposed to measure honesty), because both concepts increase persuasiveness (Haran and Shalvi 2020; Karmarkar and Tormala 2010), which makes these results highly relevant for the impact of online reviews. Reviews written on rainy days scored higher on both subdimensions. To assess a customer’s change in focus (i.e., (iii)), we compared the use of first-person singular [ich (I)] against second-person [du (you [singular]), ihr (you [plural])] pronouns. We expected a relative increase in second-person versus first-person singular pronouns. The results showed that first-person singular pronouns were used significantly less on rainy days, but there was no change in the use of second-person ones.

Finally, the study provides additional evidence for the argument that consumers face fewer time constraints on bad weather days. Review length, measured by the total word count in the reviews, was 99 words longer on average when written on a rainy versus a sunny day. Relative to the standard deviation, this marked an increase of approximately 50%. The results from the ex-post analysis are largely consistent with the literature on the effect of negative mood on the process of cognition, lending further credibility to the proposed theoretical mechanism in this study.

Effects of Bad Weather on Review Scores: Robustness Checks

The empirical analyses showed sizable effects of bad weather on review content and provision. This section demonstrates the robustness of the previous findings for a number of alternative specifications in two steps. First, table 5 demonstrates the robustness of the results when omitting back-to-back bookings, and when allowing the effect of residential weather conditions to differ across hotel star categories and customer gender. In all the additional analyses, the focus remains on the effects of bad weather on review behavior in the first seven days after travel. To illustrate the sensitivity of our results when changing this cutoff value, the second part of this section contains the results when we extend [restrict] our analysis to the first 10 [four] days after travel.

In table 5, columns 1 and 2 report the estimation results for rating scores and review provision when excluding back-to-back bookings (those when a customer left for another trip within the first week after the end of the focal trip). As previously mentioned, such back-to-back bookings are relatively rare and account for only 7.5% of all bookings in our sample. As the results show, dropping
608 JOURNAL OF CONSUMER RESEARCH
TABLE 5

EFFECTS OF BAD WEATHER ON ONLINE REVIEWS: ROBUSTNESS CHECKS

| Variables | (1) Star rating | (2) Review provision | (3) Star rating | (4) Star rating | (5) Star rating |
|---|---|---|---|---|---|
| Weather at residential address on review day | | | | | |
| Rain | −0.107*** (0.008) | 0.0011*** (0.0001) | −0.080*** (0.022) | −0.106*** (0.008) | −0.115*** (0.010) |
| Snow | 0.010 (0.015) | 0.0005 (0.0003) | 0.089 (0.078) | 0.005 (0.015) | 0.010 (0.023) |
| Rain and snow | −0.102*** (0.013) | 0.0009** (0.0003) | −0.149* (0.065) | −0.100*** (0.012) | −0.156*** (0.021) |
| Mean temperature | 0.003*** (0.0005) | 0.00004*** (9.07E−06) | 0.004* (0.002) | 0.003*** (0.0005) | 0.0002 (0.0005) |
| Hotel stars × rain | | | 0.006 (0.005) | | |
| Hotel stars × snow | | | 0.021 (0.018) | | |
| Hotel stars × rain and snow | | | 0.012 (0.016) | | |
| Hotel stars × mean temperature | | | 0.0001 (0.0004) | | |
| Male | | | | −0.024*** (0.004) | −0.096*** (0.008) |
| Male × rain | | | | | 0.013† (0.008) |
| Male × snow | | | | | 0.009 (0.028) |
| Male × rain and snow | | | | | 0.089*** (0.024) |
| Male × mean temperature | | | | | 0.006*** (0.0007) |
| Controls | | | | | |
| Length of stay | 0.030*** (0.004) | 0.0004*** (0.00001) | 0.031*** (0.003) | 0.031*** (0.003) | 0.030*** (0.003) |
| Weekend | 0.026*** (0.004) | 0.0003*** (0.0001) | 0.026*** (0.004) | 0.026*** (0.004) | 0.026*** (0.004) |
| Inverse Mills ratio | Significant | – | Significant | Significant | Significant |
| Google trends | No | Yes | No | No | No |
| Individual month dummies (Jan–Dec) | Yes | Yes | Yes | Yes | Yes |
| Latency dummies (1–7) | Yes | Yes | Yes | Yes | Yes |
| Hotel fixed effects | Yes | Yes | Yes | Yes | Yes |
| Zip code fixed effects | Yes | Yes | Yes | Yes | Yes |
| Wald Chi2/F-stat. | 1,541.46*** | 671.98*** | 1,355.18*** | 1,834.92*** | 1,870.50*** |
| Adjusted R-squared | 0.219 | 0.005 | 0.219 | 0.220 | 0.220 |
| Number of zip code clusters | 7,723 | 14,097 | 7,729 | 7,729 | 7,729 |
| Number of hotel clusters | 15,455 | 65,912 | 15,741 | 15,741 | 15,741 |
| Observations | 295,150 | 17,512,916 | 298,058 | 298,048 | 298,048 |

NOTE.—Cluster-adjusted standard errors are shown in parentheses. For star rating analyses, standard errors were based on 250 bootstrap replications.
***p < .001, **p < .01, *p < .05, †p < .1.
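The interaction specifications in columns 3–5 can be sketched in miniature with simulated data. This is only an illustration of how an interaction term captures a moderated weather effect; the paper’s actual estimator additionally includes hotel and zip code fixed effects, an inverse Mills ratio control, and bootstrapped, cluster-adjusted standard errors:

```python
import numpy as np

# Simulate ratings in which rain lowers scores by 0.115 stars for women and
# by 0.013 less for men (net -0.102), mirroring the column-5 coefficients.
# Everything here is synthetic; it is not the paper's data or model.
rng = np.random.default_rng(1)
n = 200_000
rain = rng.integers(0, 2, n)
male = rng.integers(0, 2, n)
rating = 4.2 - 0.115 * rain + 0.013 * rain * male + rng.normal(0, 0.3, n)

# OLS with an interaction term: rating ~ 1 + rain + male + rain:male.
X = np.column_stack([np.ones(n), rain, male, rain * male])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)

rain_effect_women = beta[1]          # main effect of rain
rain_effect_men = beta[1] + beta[3]  # main effect plus interaction
print(f"rain effect, women: {rain_effect_women:+.3f}")
print(f"rain effect, men:   {rain_effect_men:+.3f}")
```

The recovered coefficients mirror the pattern reported in the text: a rain penalty of roughly −0.115 for women that shrinks to about −0.102 for men.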

these bookings from the sample left our results unaltered. In a second robustness check, we investigated whether the negative influence of bad weather on rating scores affected all hotels’ quality tiers equally. Column 3 in table 5 reports the estimation results from an expanded rating score model, in which we included interaction terms between a hotel’s number of stars and bad weather variables. As the results show, we did not find evidence that the influence of bad weather differs across hotel quality tiers.

In another robustness check, the goal was to address potential concerns about unobserved customer heterogeneity. Unfortunately, the sparsity of repeated reviews from the same customer (94.5% of customers reviewed at most twice during our sample period), in combination with our two-step estimation approach, prevented us from doing so.18

18 We conducted several analyses that revealed severe multicollinearity problems (based on variance inflation factors) between the inverse Mills ratio control and other explanatory controls when running models only for customers with multiple reviews. For example, multicollinearity was found to be a considerable problem when trying to control for average rating scores from past reviews in the rating equation. It was also a problem in a model that included multiple controls for observable heterogeneity (e.g., age and gender).

Therefore, the study focused on observable types of

customer heterogeneity and on the moderating effect of gender. Column 4 in table 5 shows that the results on the star rating are robust to the inclusion of a gender control variable in both the review provision and rating equations. The results also reveal a novel finding: men tend to evaluate more negatively than women. Column 5 reports the results from a specification in which we interacted focal weather conditions with gender in both equations. Interestingly, and consistent with the results by Govind et al. (2020) that women display greater affective reactions to changes in weather conditions, we observe that the negative rating effect from bad weather is significantly smaller for men on rainy days (−0.102 vs. −0.115) and on days with rain and snow (−0.067 vs. −0.156). The results from these analyses establish the robustness of the findings across numerous alternative specifications and provide further evidence consistent with the theoretical mechanism.

Table 6 reports the estimation results for the extended dataset (up to 10 days after travel) and the restricted dataset (up to four days after travel). For ease of comparison with the main empirical results when including up to seven days after travel, these are in columns 1 and 2. Column 3 shows the estimation results for the review provision equation using the extended dataset. The positive effect of rainy weather on reviewing was 50% smaller but significant. With the notable exception of the negative effect of snow and the positive effect of weekends on reviewing, all estimates were close to those in the main specification, supporting those results.

Column 4 displays the associated estimation results for the effect of bad weather on rating scores, which are about 0.07 stars lower on days with any type of rain. While this effect is somewhat weaker than in the main specification, it is still significant and represents a noticeable reduction of approximately 8% relative to the standard deviation in rating scores. In contrast, snow had a positive effect on the rating scores (+0.042), though this surfaces only after the inclusion of reviews posted on the ninth day after travel; it may have been impacted by means of the inverse Mills ratio due to the different results for snow in the review provision equation. In the absence of a theoretical explanation for this change, we do not place too much weight on this finding. Overall, these results demonstrate that the proposed bad weather effects on review provision and rating scores extend beyond the seven-day cutoff in the main specification.19

19 We also estimated separate models for reviews posted within the first eight and nine days. The negative effect of bad weather was always significant, and corresponded to changes of −0.08 and −0.07 stars, respectively.

Columns 5 and 6 display the estimation results for the restricted dataset up to four days after travel. For the review provision equation, column 5 shows significant positive effects of weather with any form of rain. These results provide further support for H2. Column 6, however, revealed a significant positive effect (+0.043) of rain on rating scores, and thus contradicted H1. A number of additional analyses were conducted to understand the differences in the results, particularly the extent to which this difference could be explained by (a) the smaller number of observations and/or (b) differences in customer behavior for the early versus later days of posting.

To address (a), we estimated a series of alternative models with four-day windows (e.g., 2–5 days, 3–6 days, and 4–7 days). Once the first two days after travel were no longer included (i.e., for 3–6 days and 4–7 days), these models showed a significant negative, albeit weaker, rating effect (−0.03 and −0.02) for rainy weather. Thus, the results in column 6 do not appear to be driven by the reduced number of observations, but support (b) and suggest that customers who post within the first two days are different from those who post later. Indeed, subsequent analyses revealed that reviews in the first two days were more often extremely negative (the worst 5% of reviews included at most three stars) than reviews on later days (by the sixth day, the worst 5% of reviews included up to 3.5 stars). Assuming that many of these extremely negative reviews came from angry customers, that is, unhappy and highly aroused customers motivated to share their bad experiences with others promptly,20 the text-related results may provide a hint for a possible explanation of the difference. The positive effect of rain on rating scores in the restricted dataset may be explained by the results in table 4, suggesting rain might dampen reviewer anger. Following the feelings-as-information hypothesis, this reduced level of anger could be associated with improved rating scores, relative to angry customers writing reviews on early sunny days.

20 Berger (2011) demonstrates that (emotional) arousal increases a person’s intention to share information with others.

While the research setup does not allow a direct assessment of the exact mechanism behind the findings for our restricted dataset, it is reassuring that, given the observed reviewer differences across time, the theoretical mechanism is consistent with the differing effects across different ranges of days after travel.

GENERAL DISCUSSION

Although the role of situational variables, especially social and physical contexts, for consumer behavior has long been acknowledged (Belk 1975), research on consumer online reviews has focused predominantly on social situation variables, neglecting other salient aspects of the consumer’s offline environment, such as their actual physical surroundings when writing reviews. This article focuses on unpleasant weather as a prominent feature of a consumer’s physical surroundings during review provision and studies its influence on online review content and provision. The

TABLE 6

EFFECTS OF BAD WEATHER ON ONLINE REVIEWS: ROBUSTNESS ACROSS NUMBER OF DAYS AFTER END OF TRAVEL

| Variables | (1) Review provision (1–7 days) | (2) Star rating (1–7 days) | (3) Review provision (1–10 days) | (4) Star rating (1–10 days) | (5) Review provision (1–4 days) | (6) Star rating (1–4 days) |
|---|---|---|---|---|---|---|
| Weather at residential address on review day | | | | | | |
| Rain | 0.001*** (0.0001) | −0.106*** (0.008) | 0.0005*** (0.0001) | −0.066*** (0.004) | 0.003*** (0.0001) | 0.043*** (0.007) |
| Snow | 0.0004 (0.0003) | 0.002 (0.015) | −0.001** (0.0002) | 0.042** (0.013) | 0.0004 (0.0005) | 0.001 (0.016) |
| Rain and snow | 0.001** (0.0003) | −0.100*** (0.014) | 0.0004† (0.0002) | −0.070*** (0.011) | 0.002** (0.0004) | 0.021 (0.015) |
| Mean temperature | 0.00004*** (0.00001) | 0.003*** (0.0005) | 0.00005*** (0.00001) | 0.005*** (0.001) | 0.00003** (0.00001) | 0.0003 (0.0006) |
| Controls | | | | | | |
| Length of stay | 0.0004*** (0.00001) | 0.031*** (0.003) | 0.0004*** (0.00001) | 0.039*** (0.002) | 0.0004*** (0.00002) | 0.009*** (0.001) |
| Weekend | 0.0003*** (0.0001) | 0.026*** (0.004) | 0.0004*** (0.0001) | 0.030*** (0.004) | 0.0011*** (0.0001) | 0.024*** (0.005) |
| Google trends | 0.00003*** (5.40E−06) | – | 0.00003*** (3.94E−06) | – | 0.0001*** (8.32E−06) | – |
| Google trends (squared) | 1.13E−07*** (1.86E−08) | – | 9.74E−08*** (1.39E−08) | – | 2.26E−07*** (2.92E−08) | – |
| Inverse Mills ratio | – | Significant | – | Significant | – | Significant |
| Individual month dummies (Jan–Dec) | Yes | Yes | Yes | Yes | Yes | Yes |
| Latency dummies | Yes | Yes | Yes | Yes | Yes | Yes |
| Hotel fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Zip code fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Wald Chi2/F-stat. | 666.88*** | 1,316.32*** | 976.77*** | 2,510.95*** | 696.63*** | 1,150.06*** |
| Adjusted R-squared | 0.005 | 0.219 | 0.005 | 0.219 | 0.005 | 0.220 |
| Number of zip code clusters | 14,112 | 7,729 | 14,261 | 7,836 | 14,109 | 7,390 |
| Number of hotel clusters | 68,211 | 15,741 | 89,829 | 17,348 | 68,205 | 12,498 |
| Observations | 18,024,576 | 298,058 | 25,449,081 | 351,739 | 10,574,650 | 202,082 |

NOTE.—Cluster-adjusted standard errors are shown in parentheses. For star rating analyses, standard errors were based on 250 bootstrap replications. Latency dummies included dummies for days 1–7 in models (1) and (2), for days 1–10 in models (3) and (4), and for days 1–4 in models (5) and (6).
***p < .001, **p < .01, *p < .05, †p < .1.
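As a back-of-envelope check, the rain coefficient in the review provision equation can be related to the rainy-day surge in reviewing reported in the text. The match is approximate, since the headline figure comes from the full model rather than this simple rounding:

```python
# Numbers taken from the 1-7 day specification above (table 6, column 1).
reviews = 298_058            # observations in the rating equation
booking_days = 18_024_576    # observations in the provision equation
rain_coefficient = 0.001     # change in the daily review probability on rain

base_daily_prob = reviews / booking_days
relative_increase = rain_coefficient / base_daily_prob
print(f"baseline daily review probability: {base_daily_prob:.4f}")
print(f"implied relative increase on rainy days: {relative_increase:.1%}")
```

The rounded coefficient implies roughly a 6% increase in the daily review probability, in line with the 6.5% rainy-day surge discussed in the text.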

findings from a uniquely constructed dataset that combines more than 12 years of rich hotel booking and review data with detailed information on weather conditions at a customer’s residential and hotel addresses demonstrate that bad weather in a reviewer’s offline physical surroundings influences online reviews. The consistent results on review text sentiment from three different analysis tools provide evidence for the theoretical mechanism: bad weather reduces positive affect for consumers, which translates into less positive hotel evaluations. The findings have important implications for scholars and practitioners.

Theoretical Implications

For scholars, our research makes three important contributions. First, we demonstrate a novel cross-channel effect in which an offline stimulus affects online behavior: online review provision. The results reveal that weather conditions, as an example of a prominent situational factor in a consumer’s physical environment, may affect both the sharing and the content of online user-generated reviews. Existing studies have either demonstrated cross-channel effects on customer engagement with firm-generated content (Andrews et al. 2016; Li et al. 2017), or only within-channel effects of offline external stimuli (e.g., product visibility) on the sharing of offline user-generated content (Berger and Iyengar 2013; Berger and Schwartz 2011). By studying all major aspects of online reviews (volume, valence, and text), we demonstrate that external situational stimuli may affect not only consumers’ engagement, but also their evaluative judgment and expression style.

Second, this study advances the understanding of consumer WOM by showing that online review content can be affected by a priori irrelevant and random events, such as detrimental weather conditions on the day the review is provided. Specifically, this effect can exist even days after consumption. Other studies have focused on the effects of social context. For example, in a recent series of experiments, Brannon and Samper (2018) showed that one-on-one, post-consumption discussions with other consumers can affect consumer evaluations. We observed a physical

context effect, rather than a social context effect, in real-life, large-scale data. In addition, our setting differed from that of Brannon and Samper (2018) because we studied online reviews, as a “broadcasting” channel, whereas they studied face-to-face, offline interactions as a “narrowcasting” channel (Barasch and Berger 2014). Finally, we discussed how bad weather affects not only product evaluations but also their likelihood of being shared with others.

Third, we contribute to an emerging group of studies that demonstrate the effects of weather conditions on consumer behavior. Previous studies have explored the effect of weather on immediate (Govind et al. 2020) or future consumption decisions, such as catalog orders (Conlin et al. 2007), car purchases (Busse et al. 2015), college enrollment (Simonsohn 2007), and outdoor movie viewing (Buchheim and Kolaska 2017). Our work differs by focusing on how weather affects opinions on experiences that have already occurred.

Managerial Relevance

Businesses actively manage their online reputation in various ways. They send out emails asking customers to write reviews for their consumption experiences (Brandes et al. 2022), respond to reviews to demonstrate that they value customer feedback (Chevalier, Dover, and Mayzlin 2018), and analyze online review content to anticipate and mitigate potential crises (Herhausen et al. 2019). As we discuss in the following, our findings have implications for all these online reputation management activities. Concerning review solicitation, our findings suggest that digital platforms and individual businesses should follow different approaches when considering the impact of customers’ physical environment.

First, consider the implications of our work for the review solicitation procedures of digital platforms such as Amazon and Booking.com. A major reason why customers frequently use these platforms is the quality and volume of their reviews. For example, 40% of US internet users claimed that reviews and recommendations were a major reason for them to purchase from Amazon in 2020 (Statista 2020). Thus, it is very important for Amazon to provide customers with a large number of high-quality reviews. The same is true for Booking.com, a platform on which online reviews play a major role, with its mission “to make it easier for everyone to experience the world” (Booking.com 2021). Indeed, both platforms actively encourage customers to provide reviews for their recent trips and purchases. However, the platforms do not seem to recognize the importance of the physical context or to integrate it into their solicitation efforts. Our work suggests that contextualizing this approach using weather conditions in a reviewer’s residential area can support platforms in improving solicitation success rates.

In particular, platforms should integrate weather forecast data with their customer records and send out solicitation emails on rainy days. Consumers are more likely to write reviews on bad weather days, indicating that emails on such days are likely to be more effective. However, the increased number of reviews is not the only benefit of this approach. In addition, these solicited reviews will be longer, more detailed, and less self-focused, implying that they could be of higher quality. Indeed, several studies demonstrate a positive relationship between review length, level of detail, and reduced use of first-person singular pronouns on the one hand and perceived review helpfulness on the other (Filieri 2015; Hong et al. 2017; Wang and Karimi 2019). Even though we find that bad weather induces more negative ratings, platform business models provide some protection against this negative effect; platforms usually profit as long as consumers purchase any offering on their website and do not rely on the sales of a single product. Therefore, platforms have much to gain from solicitation emails on rainy days.

In contrast, individual businesses (e.g., sellers on Amazon, or hotels on Booking.com) may be better off sending solicitation emails on sunny days. Our results indicate that these businesses face an important trade-off in the integration of weather-related information in their solicitation practices. While they obtain more high-quality reviews for their products on bad weather days, these reviews are more negative. Additional evidence suggests this negative effect may sometimes be dominant. Considering the perspective of a five-star hotel competing with four-star hotels in the same region, our data showed that the average rating scores for these two hotel categories, which together accounted for 75% of all bookings, differ by 0.17 points. This is a stark and noticeable difference for potential consumers. However, if a five-star hotel is unaware of the effect of weather and solicits reviews on rainy days, the difference between its rating and that of the four-star hotels could be reduced by 59% (to 0.06 stars). This is an obvious risk to the five-star hotel, or to any hotel that is not aware of the effect of the physical context. Moreover, potential consumers may perceive these negative reviews to be more helpful, thereby placing greater weight on them in their decision-making. Thus, individual businesses must carefully weigh when to send out solicitation emails.

Practitioners may wonder whether weather effects really matter for their online reputation in the medium to long term, and claim that these findings are just a manifestation of transient noise patterns that will cancel each other out over time. While intuitively appealing, such a conclusion implicitly assumes that consumers consider the complete history of reviews and weigh all reviews equally. However, surveys show that this is not how consumers actually behave. Instead, consumers seem to have a much narrower temporal focus, with 50% of them basing their decisions only on reviews from the past two weeks (BrightLocal 2020). Consequently, even a relatively short

rain spell of three to four days across a country or region may have a substantial impact on a hotel’s set of relevant reviews for consumers. Moreover, recent evidence demonstrates that a negative first review for a product can have long-term negative effects on the product’s rating valence and volume (Park, Shin, and Xie 2021). Accordingly, our results are particularly relevant for new businesses.

Second, the results show that individual businesses can use weather-related information to forecast the number of incoming online reviews that need responding to on any given day. In the dataset, the number of reviews increased on bad weather days by a considerable 6.5%, indicating a surge of reviews that could, if unexpected, lead to delays in response times. Such delays could harm businesses. For example, a recent survey found that “when writing a review, 20% of consumers expect to receive a response within one day” (BrightLocal 2020). Our study thus extends previous work on how businesses can use weather forecasts to schedule their communications to consumers (Li et al. 2017) by demonstrating that such weather forecasts can also help predict communications from consumers.

Third, the results help individual businesses understand certain patterns and fluctuations in their online reviews. Consider the example of a hotel that notices a sudden drop from 4.0 to 3.9 in its rating scores from a group of recent customers. Such a reduction can have a substantial effect on consumer behavior, because 52% of consumers state they will not consider a hotel with fewer than four stars (BrightLocal 2020). Therefore, the business needs to quickly understand what is driving this change and how to respond to it. While an intuitive response from firms may be to look for internal, systematic patterns during the time of consumption that affected consumers’ experiences and would explain these rating score reductions, our study reveals that the answer may actually lie in how external, systematic patterns during the time of review provision affect consumers’ evaluation of their experiences.

Limitations

A major limitation of this work is that, given the data, it is not possible to identify the precise behavioral mechanism underlying the effect of weather on reviewing behavior. However, with the existing data, several approaches were used to delineate the possible characteristics of the mechanism. Specifically, we presented favorable evidence from three automated sentiment classification tools in support of the proposed mood effect on evaluative judgment. Subsequent ex-post analyses further corroborated this interpretation. However, while these results are consistent with the feelings-as-information hypothesis (Pham 2009; Schwarz and Clore 1983), it is difficult to distinguish between this mechanism and the alternative explanation of mood congruency (Bower 1981). Future research could rule out alternative explanations and investigate the phenomenon in depth using controlled experiments.

Another limitation of this work’s estimation approach is its reliance on the assumption that consumers return home promptly after their vacation. While the results are robust to the exclusion of back-to-back bookings, which should alleviate potential identification concerns, there is no way to directly test this assumption. However, if erroneous, this assumption works against the findings and, in the worst-case scenario, leads to an underestimation of the effect of weather. If consumers do not return home as assumed, the constructed weather conditions used in the analysis should not be correlated with the actual weather at the place of review provision, and weather would have no effect on behavior.

Future Research

Despite these limitations, the overall body of evidence, including all the variables explored and the text-analysis approach, suggests that bad weather affects both the content and the provision of online reviews, and that the negative effect on ratings is strongly associated with reduced customer mood on bad weather days. This finding opens up new avenues for future research with a greater focus on when and where a consumer is writing a review, and on which features (weather, sounds, lighting, smells, or visual arrangements) characterize the physical surroundings at that point. Thus, reviewing behaviors across different types of environments need further study.

A greater focus on where and when questions would nicely complement recent studies on online reviews that have shifted their focus to how a consumer is writing a review, that is, on a mobile or desktop device, and its consequences for review content (Melumad et al. 2019; Ransbotham, Lurie, and Liu 2019) and readers’ perceptions (Grewal and Stephen 2019). We hope that our work encourages future research to broaden the understanding of how situational factors influence reviews, and we look forward to seeing more studies on the where, when, and how of online review provision.

DATA COLLECTION INFORMATION

The first author obtained the data from the travel platform in the autumn of 2017, collected the geolocation information on residential addresses and the international weather information in 2020, and matched weather conditions to residential and hotel addresses. The second author collected Google Trends data in 2020 (and in 2021 for the extended dataset), created the final dataset, and analyzed the data. The data are stored in Dropbox (Leif Brandes)

and Google Drive (Yaniv Dover) folders, and secure copies are also stored on both authors’ professional computers.

REFERENCES

Andrews, Michelle, Xueming Luo, Zheng Fang, and Anindya Ghose (2016), “Mobile Ad Effectiveness: Hyper-Contextual Targeting with Crowdedness,” Marketing Science, 35 (2), 218–33.
Avnet, Tamar, Michel T. Pham, and Andrew T. Stephen (2012), “Consumers’ Trust in Feelings as Information,” Journal of Consumer Research, 39 (4), 720–35.
Babic Rosario, Ana, Francesca Sotgiu, Kristine de Valck, and Tammo H. A. Bijmolt (2016), “The Effect of Electronic Word of Mouth on Sales: A Meta-Analytic Review of Platform, Product and Metric Factors,” Journal of Marketing Research, 53 (3), 297–318.
Baker, Andrew M., Naveen Donthu, and V. Kumar (2016), “Investigating How Word-of-Mouth Conversations about Brands Influence Purchase and Retransmission Intentions,” Journal of Marketing Research, 53 (2), 225–39.
Bakhshi, Saeideh, Partha Kanuparthy, and Eric Gilbert (2014), “Demographics, Weather and Online Reviews: A Study of Restaurant Recommendations,” in Proceedings of the 23rd International Conference on World Wide Web – WWW ’14, ed. Chin-Wang Chung, Seoul, Korea, 443–54.
Barasch, Alixandra and Jonah Berger (2014), “Broadcasting and Narrowcasting: How Audience Size Affects What People Share,” Journal of Marketing Research, 51 (3), 286–99.
Bartling, Björn, Leif Brandes, and Daniel Schunk (2015), “Expectations as Reference Points: Field Evidence from Professional Soccer,” Management Science, 61 (11), 2646–61.
Belk, Russell W. (1975), “Situational Variables and Consumer Behavior,” Journal of Consumer Research, 2 (3), 157–64.
Berger, Jonah (2011), “Arousal Increases Social Transmission of Information,” Psychological Science, 22 (7), 891–3.
——— (2014), “Word of Mouth and Interpersonal Communication: A Review and Directions for Future Research,” Journal of Consumer Psychology, 24 (4), 586–607.
Berger, Jonah and Raghuram Iyengar (2013), “Communication Channels and Word of Mouth: How the Medium Shapes the Message,” Journal of Consumer Research, 40 (3), 567–79.
Berger, Jonah and Eric M. Schwartz (2011), “What Drives Immediate and Ongoing Word of Mouth?,” Journal of Marketing Research, 48 (5), 869–80.
Bless, Herbert and Norbert Schwarz (2010), “Mental Construal and the Emergence of Assimilation and Contrast Effects: The Inclusion/Exclusion Model,” in Advances in Experimental Social Psychology, Chapter 6, ed. Mark P. Zanna, Amsterdam: Elsevier, 319–73.
Booking.com (2021), “About Booking.com,” https://bit.ly/3s4RSD8 [accessed December 16, 2021].
Bower, Gordon H. (1981), “Mood and Memory,” American Psychologist, 36 (2), 129–48.
Brandes, Leif, David Godes, and Dina Mayzlin (2022), “Extremity Bias in Online Reviews: The Role of Attrition,” Journal of Marketing Research, Forthcoming.
Brannon, Daniel C. and Adriana Samper (2018), “… Evaluations,” Journal of Consumer Research, 45 (4), 810–32.
BrightLocal (2020), “Local Consumer Review Survey 2020,” https://bit.ly/3p4xMHc [accessed April 29, 2021].
Buchheim, Lukas and Thomas Kolaska (2017), “Weather and the Psychology of Purchasing Outdoor Movie Tickets,” Management Science, 63 (11), 3718–38.
Bujisic, Milos, Vanja Bogicevic, H. G. Parsa, Verka Jovanovic, and Anupama Sukhu (2019), “It’s Raining Complaints! How Weather Factors Drive Consumer Comments and Word-of-Mouth,” Journal of Hospitality & Tourism Research, 43 (5), 656–81.
Busse, Meghan R., Devin G. Pope, Jaren C. Pope, and Jorge Silva-Risso (2015), “The Psychological Effect of Weather on Car Purchases,” The Quarterly Journal of Economics, 130 (1), 371–414.
Cameron, Colin A. and Pravin K. Trivedi (2005), Microeconometrics, New York: Cambridge University Press.
Chan, Catherine, Daniel A. J. Ryan, and Catrine Tudor-Locke (2006), “Relationship between Objective Measures of Physical Activity and Weather: A Longitudinal Study,” The International Journal of Behavioral Nutrition and Physical Activity, 3 (21), 21–9.
Chen, Zoey (2017), “Social Acceptance and Word of Mouth: How the Motive to Belong Leads to Divergent WOM with Strangers and Friends,” Journal of Consumer Research, 44 (3), 613–32.
Chevalier, Judith A., Yaniv Dover, and Dina Mayzlin (2018), “Channels of Impact: User Reviews When Quality is Dynamic and Managers Respond,” Marketing Science, 37 (5), 688–709.
Chevalier, Judith A. and Dina Mayzlin (2006), “The Effect of Word of Mouth on Sales: Online Book Reviews,” Journal of Marketing Research, 43 (3), 345–54.
Cohen, Joel B., Michel T. Pham, and Eduardo B. Andrade (2008), “The Nature and Role of Affect in Consumer Behavior,” in Handbook of Consumer Psychology, ed. Curtis P. Haugtvedt, Paul M. Herr, and Frank R. Kardes, New York: Taylor & Francis, 297–348.
Conlin, Michael, Ted O’Donoghue, and Timothy J. Vogelsang (2007), “Projection Bias in Catalogue Orders,” American Economic Review, 97 (4), 1217–49.
Connolly, Marie (2008), “Here Comes the Rain Again: Weather and the Intertemporal Substitution of Leisure,” Journal of Labor Economics, 26 (1), 73–100.
Consiglio, Irene, Matteo De Angelis, and Michele Costabile (2018), “The Effect of Social Density on Word of Mouth,” Journal of Consumer Research, 45 (3), 511–28.
Correia, Sergio (2017), “Linear Models with High-Dimensional Fixed Effects: An Efficient and Feasible Estimator,” Working Paper, http://scorreia.com/research/hdfe.pdf.
Denissen, Jaap A., Ligaya Butalid, Lars Penke, and Marcel A. G. van Aken (2008), “The Effects of Weather on Daily Mood,” Emotion, 8 (5), 662–7.
Deryugina, Tatyana, Garth Heutel, Nolan H. Miller, David Molitor, and Julian Reif (2019), “The Mortality and Medical Costs of Air Pollution: Evidence from Changes in Wind Direction,” American Economic Review, 109 (12), 4178–219.
https://doi.org/10.1177/00222437211073579. Dodds, Peter Sheridan and Christopher M. Danforth (2010),
Brannon, Daniel C. and Adriana Samper (2018), “Maybe I Just “Measuring the Happiness of Large-Scale Written
Got (Un)Lucky: One-on-One Conversations and the Expression: Songs, Blogs, and Presidents,” Journal of
Malleability of Post-Consumption Product and Service Happiness Studies, 11 (4), 441–56.
614 JOURNAL OF CONSUMER RESEARCH

Dodds, Peter Sheridan, Kameron Decker Harris, Isabel M. Kloumann, Catherine A. Bliss, and Christopher M. Danforth (2011), "Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter," PLoS One, 6 (12), e26752.
Filieri, Raffaele (2015), "What Makes Online Reviews Helpful? A Diagnosticity-Adoption Framework to Explain Informational and Normative Influences in e-WOM," Journal of Business Research, 68 (6), 1261–70.
Forgas, Joseph P. (2013), "Don't Worry, Be Sad! On the Cognitive, Motivational, and Interpersonal Benefits of Negative Mood," Current Directions in Psychological Science, 22 (3), 225–32.
——— (2017), "Mood Effects on Cognition: Affective Influences on the Content and Process of Information Processing and Behavior," in Emotions and Affect in Human Factors and Human-Computer Interactions, Chapter 3, ed. Myounghoon Jeon, London: Elsevier, 89–122.
Frederick, Shane and George Loewenstein (1999), "Hedonic Adaptation," in Well-Being: The Foundations of Hedonic Psychology, Chapter 16, ed. Daniel Kahneman, Ed Diener, and Norbert Schwarz, New York: Russell Sage Foundation, 302–29.
Gaure, Simen (2013), "lfe: Linear Group Fixed Effects," The R Journal, 5 (2), 104–16.
Godes, David and Jose C. Silva (2012), "Sequential and Temporal Dynamics of Online Opinion," Marketing Science, 31 (3), 448–73.
Govind, Rahul, Nitika Garg, and Vikas Mittal (2020), "Weather, Affect, and Preference for Hedonic Products," Journal of Marketing Research, 57 (4), 717–38.
Grewal, Lauren and Andrew T. Stephen (2019), "In Mobile We Trust: The Effects of Mobile versus Nonmobile Reviews on Consumer Purchase Intentions," Journal of Marketing Research, 56 (5), 791–808.
Guimaraes, Paulo and Pedro Portugal (2010), "A Simple Feasible Procedure to Fit Models with High-Dimensional Fixed Effects," The Stata Journal, 10 (4), 628–49.
Haran, Uriel and Shaul Shalvi (2020), "The Implicit Honesty Premium: Why Honest Advice Is More Persuasive than Highly Informed Advice," Journal of Experimental Psychology: General, 149 (4), 757–73.
Heckman, James J. (1979), "Sample Selection Bias as a Specification Error," Econometrica, 47 (1), 153–61.
Herhausen, Dennis, Stephan Ludwig, Dhruv Grewal, Jochen Wulf, and Marcus Schoegel (2019), "Detecting, Preventing, and Mitigating Online Firestorms in Brand Communities," Journal of Marketing, 83 (3), 1–21.
Hong, Hong, Di Xu, G. Alan Wang, and Weiguo Fan (2017), "Understanding the Determinants of Online Review Helpfulness: A Meta-Analytic Investigation," Decision Support Systems, 102, 1–11.
Howarth, E. and M. S. Hoffman (1984), "A Multidimensional Approach to the Relationship between Mood and Weather," British Journal of Psychology, 75 (1), 15–23.
Humphreys, Ashlee and Rebecca Jen-Hui Wang (2018), "Automated Text Analysis for Consumer Research," Journal of Consumer Research, 44 (6), 1274–306.
Karaman, Hülya (2020), "Online Review Solicitations Reduce Extremity Bias in Online Review Distributions and Increase Their Representativeness," Management Science, 67 (7), 4420–45.
Karmarkar, Uma R. and Zakary L. Tormala (2010), "Believe Me, I Have No Idea What I'm Talking About: The Effects of Source Certainty on Consumer Involvement and Persuasion," Journal of Consumer Research, 36 (6), 1033–49.
Kööts, Liisi, Anu Realo, and Jüri Allik (2011), "The Influence of the Weather on Affective Experience," Journal of Individual Differences, 32 (2), 74–84.
Köper, Maximilian and Sabine Schulte im Walde (2016), "Automatically Generated Affective Norms of Abstractness, Arousal, Imageability and Valence for 350,000 German Lemmas," in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), ed. Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, Portorož, Slovenia, 2595–8.
Lee, Heeseung Andrew, Angela Aerry Choi, Tianshu Sun, and Wonseok Oh (2021), "Reviewing before Reading? An Empirical Investigation of Book-Consumption Patterns and Their Effects on Reviews and Sales," Information Systems Research, 32 (4), 1368–89.
Lee, Jooa Julia, Francesca Gino, and Bradley R. Staats (2014), "Rainmakers: Why Bad Weather Means Good Productivity," The Journal of Applied Psychology, 99 (3), 504–13.
Li, Chenxi, Xueming Luo, Cheng Zhang, and Xiaoyi Wang (2017), "Sunny, Rainy, and Cloudy, with a Chance of Mobile Phone Promotion Effectiveness," Marketing Science, 36 (5), 762–79.
Lovett, Mitchell J., Renana Peres, and Ron Shachar (2013), "On Brands and Word of Mouth," Journal of Marketing Research, 50 (4), 427–44.
LTE-Anbieter (n.d.), "Wie die mobile Datenübertragung laufen lernte – die Mobilfunk Geschichte vom A-Netz bis LTE" [How mobile data transfer learned to walk – the history of mobile communications from radio telephone network A to LTE], https://bit.ly/30FDztt.
Luck, Stephan and Tom Zimmermann (2020), "Employment Effects of Unconventional Monetary Policy: Evidence from QE," Journal of Financial Economics, 135 (3), 678–703.
Ludwig, Stephan, Ko de Ruyter, Mike Friedman, Elisabeth C. Brüggen, Martin Wetzels, and Gerard Pfann (2013), "More than Words: The Influence of Affective Content and Linguistic Style in Online Reviews on Conversion Rates," Journal of Marketing, 77 (1), 87–103.
Mariani, Marcello M., Matteo Borghi, and Ulrike Gretzel (2019), "Online Reviews: Differences by Submission Type," Tourism Management, 70, 295–8.
Meier, Tabea, Ryan L. Boyd, James W. Pennebaker, Matthias R. Mehl, Mike Martin, Markus Wolf, and Andrea B. Horn (2018), "'LIWC auf Deutsch': The Development, Psychometrics, and Introduction of DE-LIWC2015," https://osf.io/tfqzc.
Melumad, Shiri, J. Jeffrey Inman, and Michel Tuan Pham (2019), "Selectively Emotional: How Smartphone Use Changes User-Generated Content," Journal of Marketing Research, 56 (2), 259–75.
Mittal, Vikas, Wagner A. Kamakura, and Rahul Govind (2004), "Geographic Patterns in Customer Service and Satisfaction: An Empirical Investigation," Journal of Marketing, 68 (3), 48–62.
Moe, Wendy W. and David A. Schweidel (2012), "Online Product Opinions: Incidence, Evaluation, and Evolution," Marketing Science, 31 (3), 372–86.
BRANDES AND DOVER 615

Moe, Wendy W. and Michael Trusov (2011), "The Value of Social Dynamics in Online Product Ratings Forums," Journal of Marketing Research, 48 (3), 444–56.
Park, Sungsik, Woochoel Shin, and Jinhong Xie (2021), "The Fateful First Consumer Review," Marketing Science, 40 (3), 481–507.
Pennebaker, James W., Roger J. Booth, Ryan L. Boyd, and Martha E. Francis (2015), "Linguistic Inquiry and Word Count: LIWC2015," Austin, TX: Pennebaker Conglomerates.
Persinger, M. A. and B. F. Levesque (1983), "Geophysical Variables and Behavior: XII. The Weather Matrix Accommodates Large Portions of Variance of Measured Daily Mood," Perceptual and Motor Skills, 57 (3), 868–70.
Pham, Michel T. (2009), "The Lexicon and Grammar of Affect as Information in Consumer Decision Making: The GAIM," in Social Psychology of Consumer Behavior, ed. Michaela Wanke, New York: Psychology Press, 167–200.
Pope, Devin G. and Maurice E. Schweitzer (2011), "Is Tiger Woods Loss Averse? Persistent Bias in the Face of Experience, Competition, and High Stakes," American Economic Review, 101 (1), 129–57.
Ransbotham, Sam, Nicholas H. Lurie, and Hongju Liu (2019), "Creation and Consumption of Mobile Word of Mouth: How Are Mobile Reviews Different?," Marketing Science, 38 (5), 773–92.
Rowe, Duncan G. (2006), "Mobile-Phone Signals Reveal Rainfall," Nature.com.
Russell, James A. (1980), "A Circumplex Model of Affect," Journal of Personality and Social Psychology, 39 (6), 1161–78.
Sanders, Jeffrey L. and Mary S. Brizzolara (1982), "Relationships between Weather and Mood," The Journal of General Psychology, 107 (1), 155–6.
Schacter, Daniel, Daniel Gilbert, Daniel Wegner, and Bruce Hood (2016), Psychology, London: Palgrave Macmillan Education.
Schlosser, Ann E. (2005), "Posting versus Lurking: Communicating in a Multiple Audience Context," Journal of Consumer Research, 32 (2), 260–5.
Schoenmueller, Verena, Oded Netzer, and Florian Stahl (2020), "The Polarity of Online Reviews: Prevalence, Drivers and Implications," Journal of Marketing Research, 57 (5), 853–77.
Schwarz, Norbert (2002), "Feelings as Information: Moods Influence Judgment and Processing Strategies," in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale W. Griffin, and Daniel Kahneman, New York: Cambridge University Press, 534–47.
Schwarz, Norbert and Gerald L. Clore (1983), "Mood, Misattribution, and the Judgments of Well-Being: Informative and Directive Functions of Affective States," Journal of Personality and Social Psychology, 45 (3), 513–23.
Simonsohn, Uri (2007), "Clouds Make Nerds Look Good: Field Evidence of the Impact of Incidental Factors on Decision Making," Journal of Behavioral Decision Making, 20 (2), 143–52.
Statista (2019), "Consumer Reasons for Not Leaving Product Reviews Online 2019," https://bit.ly/33vZCDM [accessed April 29, 2021].
——— (2020), "Reasons for Internet Users in the United States to Shop on Amazon as of January 2020," https://bit.ly/3mgc7dr [accessed December 16, 2021].
Tudor-Locke, Catrine, David R. Bassett, Ann M. Swartz, Scott J. Strath, Brian B. Parr, Jared P. Reis, Katrina D. DuBose, and Barbara E. Ainsworth (2004), "A Preliminary Study of One Year of Pedometer Self-Monitoring," Annals of Behavioral Medicine, 28 (3), 158–62.
Vana, Prasad and Anja Lambrecht (2021), "The Effect of Individual Online Reviews on Purchase Likelihood," Marketing Science, 40 (4), 708–30.
Van Laer, Tom, Jennifer Edson Escalas, Stephan Ludwig, and Ellis A. van den Hende (2019), "What Happens in Vegas Stays on TripAdvisor? A Theory and Technique to Understand Narrativity in Consumer Reviews," Journal of Consumer Research, 46 (2), 267–85.
Wang, Fang and Sahar Karimi (2019), "This Product Works Well (for Me): The Impact of First-Person Singular Pronouns on Online Review Helpfulness," Journal of Business Research, 104, 283–94.
Warriner, Amy B., Victor Kuperman, and Marc Brysbaert (2013), "Norms of Valence, Arousal and Dominance for 13,915 English Lemmas," Behavior Research Methods, 45 (4), 1191–207.
Watson, David, Lee Anna Clark, and Auke Tellegen (1988), "Development and Validation of Brief Measures of Positive and Negative Affect: The PANAS Scales," Journal of Personality and Social Psychology, 54 (6), 1063–70.
