gamified app
Allison Eden
Department of Communication, Michigan State University, MI, USA
Britta C Brugman
Department of Communication Science, Vrije Universiteit Amsterdam,
the Netherlands
Abstract
In this paper, we reflect on the implementation of a gamified application for helping students learn
important facts about their study program. We focus on two design features, of which different
configurations were tested in a field experiment among Dutch university students (N = 101). The
first feature is feedback, which is expected to increase engagement, with personalized (“tailored”)
feedback being more effective than generic feedback. The second feature is a session limit that
was designed to prevent users from “binging” the game, because this could prevent deep learning.
Results showed that generic feedback was more effective than tailored feedback, contrary to our
expectations. The session limit, however, did prevent binging without reducing the overall
number of sessions played. Our findings suggest that the careful configuration of game features
can help sustain and encourage play via a gamified application.
Keywords
Gamification, education, feedback, distributed practice, adaptive learning
Corresponding author:
Kasper Welbers, De Boelelaan 1105, 1081 HV Amsterdam, the Netherlands.
Email: k.welbers@vu.nl
Welbers et al. 93
New technologies offer exciting opportunities to engage students in learning in new ways. One
promising way of motivating students to learn is gamification, which can
be defined as “the use of game-design elements in non-game contexts” (Deterding et al.,
2011: 9). In the past decade, the popularity of gamification has increased rapidly, and various
cases are known in which businesses, web designers, and educators used gamification
in order to engage and motivate a target group, with successful outcomes (Chou, 2017;
Mollick and Rothbard, 2014). However, more systematic research is needed to know when
and how gamification can be used to the greatest benefit in an education setting. Many
different gamification options exist and usage varies widely, which we briefly review in the
next sections to contextualize our study.
The vast interest in gamification instigated a wide array of studies across many different
topics, audiences, and disciplines. For instance, a 2012 literature review found over 125
empirical studies examining effects of gamification in a variety of contexts (Connolly et al., 2012).
By contrast, a more recent systematic review that only searched for gamification in the
context of education found only 15 studies on some aspect of gamification in an education
setting (Nah et al., 2014). This latter review, focusing specifically on the education context,
indicates that feedback seems to be a potentially useful mechanism for increasing engage-
ment among students for a specific course or learning outcome. However, research on
feedback in other contexts shows that the effectiveness of feedback is not only dependent
on the verbal feedback itself, but also on situational factors (e.g. Burgers et al., 2015; Hatala
et al., 2014; Kluger and DeNisi, 1996). One of the aspects of feedback that is often associ-
ated with effectiveness is the use of personalized (or “tailored”) feedback (e.g. Krebs et al.,
2010; Lustria et al., 2013). Thus, we studied whether personalized feedback also outperforms
non-personalized (“generic”) feedback in stimulating play and learning in the context of
gamified learning tools for education.
Second, gamification for learning can only be successful when students play for an
extended amount of time, such that they process enough (new) information for learning
to take place. In addition, for learning a new skill or knowledge, literature shows that
distributed practice enhances learning (for a review, see Dunlosky et al., 2013). This
means that it is more effective to spread learning over short sessions across
several days than to concentrate it in one long session (see also Heidt et al., 2016; Rohrer, 2015).
However, in some contexts, players choose to “binge” an entire game over one longer period
of time rather than spacing out play over several short sessions. Thus, we tested whether
“enforcing” distributed learning through imposing a daily session limit positively impacts
play and learning or whether it backfires.
To answer these questions, the current study investigated the potential use of gamifica-
tion for engaging university students with an online learning platform at a Dutch university.
A gamified application, in the form of a multiple-choice quiz app, was launched among
undergraduate students in a Faculty of Social Sciences. The purpose of the app was to teach
students relevant information about various aspects of university life, including student rules,
regulations, and social activities, ranging from exam regulations to social events on campus. Fitz-Walter
et al. (2011: 123) found that “new students often feel lost, having trouble meeting new
friends and finding what services and events are available on campus” and argue that
games can help alleviate this. Accordingly, our game was tailored toward new and existing
students who may not yet be aware of specific campus services or may be disinclined to
browse or search through the university website. The gamified application was a modified
version of a popular online quiz game which rewards users for playing and correctly
94 E-Learning and Digital Media 16(2)
answering questions about specific topics. The game aspects include points, rewards, and
feedback from the governing system. Our content was identical to content available,
at various places, on the university website, but brought together in this game and thus
presented within this novel learning environment. To introduce the app, we sent out emails,
presented the app in classes, and put up posters. Students could participate by following a
link to a short survey where they could enter basic personal information and register for
using the app.
We hypothesized that learning engagement, in the form of persistent play, would be
highest for players who (a) received tailored feedback and (b) had a session limit to enforce
distributed learning. In order to test our hypotheses, we created experimental conditions in
which we tailored feedback within the app for some participants but not others, and limited play
lengths for some but not others, in addition to tracking user data for all participants,
including surveying participation and engagement. The results of tracking and survey
data (e.g. the interest of students in the app) can provide lessons for future designs and
launch campaigns in university settings. The following sections elaborate on the background
of gamification and the distributed practice in the field of education to further specify the
hypotheses that guided our research.
Gamification
Despite its current popularity, the term “gamification” is still young. Its first documented
use is often credited to Brett Terill (2008), who talked about “gamification” in a blog post to
define the act of “taking game mechanics and applying them to other web properties to
increase engagement.” In its current usage, the concept of gamification is not restricted to
web properties, but more generally refers to “the use of game design elements in non-game
contexts” (Deterding et al., 2011: 9) or “using game-based mechanics, aesthetics and game
thinking to engage people, motivate action, promote learning, and solve problems” (Kapp,
2012: 10). In the current paper, we focused on the properties of gamification that can be used
to stimulate learning.
Gamification is related, but not identical, to the concept of game-based learning. Where
gamification is about the use of game design elements in a non-game context, game-based
learning refers to the use of actual games to acquire skills or knowledge. In game-based
learning, the skills that are put to the test in the game correspond to the learning task (Gee,
2013), as is for instance the case in a game where medical students or personnel perform
surgical procedures in a simulated environment (Kapp, 2012).
For certain cases, such as the multiple-choice quiz app used in the current study, the
distinction between a gamified experience and game-based learning can be blurred. Cheong
et al. (2013: 207) argue that gamification “can be viewed as a continuum ranging from
serious games at one end of the spectrum to normal activities to which game elements
have been added at the other end of the spectrum.” The gamified multiple-choice quiz
app falls in the middle of this spectrum. The simple learning task of memorizing a list of
facts is made into a game through the presentation of facts in a quiz format. However, in
itself, this quiz format still lacks important game elements, such as progression, rewards, and
competition. To add these elements, tested gamification mechanics have been implemented
in the app, such as avatars, experience points, and badges. Accordingly, the application can
more accurately be classified as a gamified multiple-choice quiz, similar to the application
used by Cheong et al. (2013), rather than as a dedicated game for learning.
Although gamification for learning and game-based learning are two different concepts,
they share common ground on the idea that game elements can make learning experiences
more engaging. Accordingly, research into whether and why certain game elements are
effective in games for learning, and in games in general, is relevant for understanding the
efficacy of gamification. Academic interest in effectively using game design elements dates
back at least 30 years to Malone (1982), who studied the appealing features of computer
games for the purpose of using these features to make user interfaces more interesting and
enjoyable. Sweetser and Wyeth (2005) contributed greatly to our understanding of the
features that make games enjoyable by developing a scale to measure game enjoyment,
although they did not elaborate on the use of these features in a non-game context.
Building on their work, Fu et al. (2009) developed a scale to measure the enjoyment of
people playing educational games. Cózar-Gutiérrez and Sáez-López (2016) recently reported
that teachers’ interest in and perceived innovativeness of using games in the classroom is
strong, showing a desire to understand the best practices in incorporating games and gami-
fied education in the classroom.
Despite academic interest in understanding and using the appealing features of games,
academia has been slow to react to the surge of gamification projects in businesses and on
the Internet (Huotari and Hamari, 2012). Initial support for the efficacy of gamification
mainly came from businesses, where the idea that tasks can be made more efficient and
engaging by wrapping them in game design elements rapidly gained popularity. Yu-Kai
Chou, an influential gamification expert, collected and published a list of 95 documented
gamification cases, based on the criterion that the documentation reports return on invest-
ment indicators (Chou, 2017). Overall, these cases show that gamification can indeed have a
strong, positive impact on engagement and performance in various activities. Although it is
not reported how these cases were selected, and there could very well be a bias toward
successful cases, this adds weight to the claim that gamification can work, given the right
context and implementation.
A recent literature study of academic gamification research found that most studies on
the subject verified that gamification can work, even though effects differ across contexts
(Hamari et al., 2014). By context, the authors refer to the type of activity being gamified,
such as exercise, sustainable consumption, monitoring, or education. In the context of
education—which is the focus of the current study—the outcomes of gamification were
mostly found to be positive, as seen in the forms of increased motivation, engagement,
and enjoyment (e.g. Cheong et al., 2013; Denny, 2013; Dong et al., 2012; Li et al., 2012).
Another literature study that focused broadly on the effects of games but also discussed
games for learning in particular found that “players seem to like the game-based approach
to learning and find it motivating and enjoyable” (Connolly et al., 2012: 671). Yet, they also
argue that the motivational features of learning oriented games should be examined in more
detail and note that evidence for more effective learning was not strong.
In all, these studies concluded that games have the potential to be useful tools for learn-
ing, but stress that their efficacy depends heavily on the use of different game features and
how they are implemented (Connolly et al., 2012; Hamari et al., 2014). Features such as
feedback options and the way in which the level of difficulty adapts to a player’s skills can be
critical to a game’s success and need to be investigated in more detail. For example, Barata
et al. (2015) report success in clustering student types based on learning performance in a
gamified engineering course. Over two years, by targeting student groups who responded
differently to the learning environment, Barata et al. (2015) were able to reduce
underperforming students from 40% of the class to 25% of the class. Therefore, by targeting
learning and interventions to particular types of players, the outcomes of gamified applica-
tions in higher education can be improved.
H1. Students receiving feedback play more sessions of the gamified app compared to students
receiving no feedback.
However, not all feedback is equally effective in achieving its goal (e.g. Burgers et al.,
2015; Kluger and DeNisi, 1996). For instance, one study shows that the effectiveness of
negative feedback (i.e. feedback emphasizing the elements that could be improved upon)
and positive feedback (i.e. feedback emphasizing the elements that went well during an
activity) depends on the task at hand (Burgers et al., 2015). Negative feedback was more
effective than positive feedback when the problem could immediately be repaired (e.g. in the
case of a game which enables a new session to be started immediately). By contrast, positive
feedback was more effective than negative feedback when repair was delayed (e.g. in the case
of a game which only enables one session per specified time period). Thus, when using
feedback, it is important to match the specific type of feedback to the specific task at hand.
One type of feedback which has been associated with enhancing effectiveness is the use of
tailored feedback over generic feedback (e.g. De Vries et al., 2008; Krebs et al., 2010). In
tailored feedback, the specific content is personalized (“tailored”) to the individual, through
mechanisms like personalization (i.e. addressing the receiver by name) or by adapting the
feedback to their individual performance (e.g. by including descriptive statistics that refer to
the receiver’s personal performance). By contrast, generic feedback is similar for all addres-
sees receiving the feedback. Tailored messaging may take the form of, for example, frequent
prompt or reminder emails (Neff and Fry, 2009), often edited (or tailored) to include infor-
mation specific to particular participants (Schneider et al., 2013). While simple interventions
such as emails can increase participants’ logging into online systems, tailored information
can further increase desired behaviors in specific cases (Krebs et al., 2010; Neff and
Fry, 2009).
Nevertheless, other studies show different results (e.g. Kroeze et al., 2006; Noble et al.,
2015). For instance, a systematic review by Kroeze et al. (2006) demonstrates that the
effectiveness of tailoring depends on the specific kind of behavior targeted. For instance,
for 11 out of 14 interventions targeting fat reduction, the authors found positive effects of
tailoring over a generic intervention. By contrast, for only 3 out of 11 interventions targeting
physical activity did the authors find such positive effects of tailoring. Thus, the question
whether or not tailoring improves effectiveness may also be dependent on contextual factors
like the targeted behavior. In the current study, we aim to motivate students to continue
using a gamified app to increase learning. For this specific context, we do not yet have
information on whether tailoring is an effective strategy or not. Yet, given that, across
behaviors, tailoring typically boosts performance compared to generic information (e.g.
Krebs et al., 2010), we expect that:
H2. Students receiving tailored feedback play more sessions of the gamified app compared to
students receiving generic feedback.
H3. Students with a daily limit play an equal number of sessions, spread over more days, compared
to students without a daily limit, when playing the gamified app.
H4. The relation between performance and prolonged play follows an inverted U-curve, with
prolonged play decreasing if players perform (a) very poorly, and (b) very well.
Method
Participants
The study was conducted in two periods at a Faculty of Social
Sciences of a public university in the Netherlands, between March and April 2016 (period 1; including all
students of the School of Social Sciences) and October to December 2016 (period 2; only
including freshmen students of the same Faculty of Social Sciences). We informed students
of the app by sending out emails, putting up posters, and presenting the app in lectures. To
participate, students had to fill out a one-minute online introductory survey that could be
completed on a mobile phone. The link to the survey was distributed via email using a
shortened link and made accessible as a QR-code. After completion, students were auto-
matically assigned to conditions, and within 24 hours received an invitation to use the app.
The app itself could be downloaded for free from official app stores such as Google Play, for
which students were given directions in the campaign material and at the end of the brief
introductory survey.
In total, 2444 students were contacted in period 1 and 706 students were contacted in
period 2 (grand total: 3150 students). Of all students contacted, 307 (9.8%) completed the
survey, of whom 223 (9.1%) were in the first period and 84 (11.9%) in the second period. One
hundred and one students (3.2%) finished all registration steps: 70 (2.9%) in
the first period and 31 (4.4%) in the second period.
Table 1 contains the descriptive statistics of the participants. Overall, there were slightly
more female participants (65.3%) than male, and most participants were in their early
twenties (M = 22.53; SD = 3.51). As expected, first-year students in the bachelor and
premaster programs were generally more interested in the app, as the acquired information is newer
and more relevant to them (e.g. details of exam regulations; see above). The over-representation of
the Communication Science and Culture, Organization, and Management students is
proportionate to the relatively high number of students in these programs. Furthermore,
students were asked to judge their prior knowledge about the university on a five-point scale,
which showed that, on average, students considered their knowledge to be between neutral
and good (M = 3.61; SD = 0.77). Notably, the 206 students who filled in the survey but did
not use the app did not score differently (M = 3.61; SD = 0.76), indicating that prior
knowledge was not a relevant factor in their decision not to participate.
Design
The experiment had a 2 (daily limit: present vs. absent) × 3 (feedback: none vs. generic vs.
personalized) between-subjects design. Participants in the condition
with a daily limit were limited in their play to four sessions per day. Participants in the
condition without a daily limit could play as many sessions per day as they wanted.
Participants were randomly assigned to the six conditions.
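Random assignment to the six cells of such a 2 × 3 design can be sketched as follows; the condition labels are ours, not the app's actual identifiers, and the seed is used only to make the sketch reproducible.

```python
import random

# The two factors of the 2 (daily limit) x 3 (feedback) between-subjects design.
LIMIT_LEVELS = ["no_limit", "daily_limit"]
FEEDBACK_LEVELS = ["none", "generic", "personalized"]

# All six cells of the design.
CONDITIONS = [(limit, feedback)
              for limit in LIMIT_LEVELS
              for feedback in FEEDBACK_LEVELS]

def assign_condition(rng):
    """Randomly assign a newly registered participant to one of the six cells."""
    return rng.choice(CONDITIONS)

rng = random.Random(2016)
assignments = [assign_condition(rng) for _ in range(101)]
```

Because assignment happened when the survey was completed rather than at first play, the realized cell sizes can end up unbalanced, as Table 2 shows.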
Participants in the feedback conditions received a weekly email on Monday for three
consecutive weeks that encouraged them to play (if they did not play that week) or to play
more. The difference between the generic and personalized feedback conditions was limited
to the information provided in the email. Participants in the generic-feedback condition
were not addressed by name, and the email only reported whether or not they played in the
previous week. Participants in the personalized-feedback condition were addressed by their
first name, and the email reported the exact number of sessions they played in the previous
week. The encouragement message also changed depending on how many sessions were
played. The personalized feedback condition deliberately did not include additional support,
such as offering tips or replying to specific questions that the participant answered
incorrectly. The condition thereby focuses purely on whether the participant was addressed as a
generic and anonymous user versus as an individual who is personally monitored. The effect
investigated in this study is thus only a communication effect and not an effect of offering a
different learning experience.
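As a minimal sketch, the difference between the two feedback conditions can be expressed as a message template; the exact wording of the emails below is hypothetical.

```python
def weekly_message(first_name, sessions_last_week, personalized):
    """Compose the weekly Monday feedback email.

    Generic: no name, reports only whether the participant played last week.
    Personalized: addresses the participant by first name and reports the
    exact number of sessions played in the previous week.
    """
    if personalized:
        greeting = "Hi {},".format(first_name)
        if sessions_last_week == 0:
            body = "You did not play any sessions last week. Why not give it another try?"
        else:
            body = "You played {} session(s) last week. Keep it up!".format(sessions_last_week)
    else:
        greeting = "Hi,"
        if sessions_last_week == 0:
            body = "You did not play last week. Why not give it another try?"
        else:
            body = "You played last week. Keep it up!"
    return greeting + "\n" + body
```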
The number of participants per condition is reported in Table 2. The distribution of
participants is not perfectly balanced, because not all students who completed the survey
(upon which they were assigned to a condition) actually participated.
The application
The application used for this study, Knowingo (https://knowingo.com/), was developed by a
partner company. It is normally licensed to businesses that use it as a tool to disseminate
Table 2. Number of participants per message condition.

              No feedback   Generic   Personalized   Total
No limit           14          13          19          46
Limit              17          20          18          55
Total              31          33          37         101
Results
Predictors of player participation
To test the hypotheses about the effects of feedback on player participation, we used regres-
sion analysis to explain the variance in the number of sessions played by each participant.
Figure 1 shows the distribution of this dependent variable. Note that this is a heavily
overdispersed count variable: most participants played only one or a few sessions, but there are
also several players who continued for more than 200 sessions. Accordingly, we used a
negative binomial generalized linear model. Results are presented in Table 3. Contrary to
our hypothesis, we did not find a significant effect of feedback messages in general on the
number of sessions played (b = 0.449, p = 0.310). Therefore, we reject H1.
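The overdispersion that motivates a negative binomial rather than a Poisson model can be checked with a simple variance-to-mean ratio. The session counts below are hypothetical, shaped like the skewed distribution in Figure 1.

```python
import statistics

# Hypothetical per-participant session counts: most participants play a
# handful of sessions, a few continue for hundreds (cf. Figure 1).
sessions = [1, 1, 1, 2, 2, 3, 3, 4, 5, 8, 12, 40, 120, 250]

mean = statistics.mean(sessions)
variance = statistics.variance(sessions)

# A Poisson model assumes variance ~= mean; a ratio far above 1 signals
# overdispersion and favors a negative binomial GLM, e.g. with statsmodels:
# sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
dispersion_ratio = variance / mean
```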
Our second hypothesis was that the tailored feedback messages have a stronger effect on
the number of sessions played than the generic feedback messages—or, given that we did not
find a general effect of feedback, that there would be an effect for tailored feedback.
However, our results showed that tailored feedback messages did not have a significant effect
(b = –0.118, p = 0.341); if anything, the pattern ran in the opposite direction. The number of sessions played
was significantly higher for participants in the generic feedback condition (b = 0.821, p < 0.05).
Table 3. Negative binomial regression predicting the number of sessions played per user.
Additionally, male participants tended to play more sessions than female participants
(b = 0.924, p < 0.01), and older participants tended to play fewer sessions (b = –0.102, p < 0.05).
Days played
The results, as presented in Table 4, show that participants in the daily-limit condition indeed played on
more unique days (b = 0.620, p < 0.01). Based on these results, we can accept H3a.
Playing on more different days, however, would not be beneficial to learning if the overall
number of sessions played suffers from the daily limit condition. Our results showed that
participants in the daily limit condition—who could only play four sessions each day—did
not play fewer sessions overall (M = 66.44, Mdn = 17.5) than participants without a
daily limit (M = 41.29, Mdn = 22). The nonparametric Mann–Whitney–Wilcoxon test shows
that the difference is not significant (W = 156.5, p = 0.921). This supports H3b. This
indicates that participants in the daily limit condition were not less motivated than those
without a daily limit, but rather spread out their sessions over more days, which is the
intended effect of this feature.
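The Mann–Whitney–Wilcoxon comparison is used here because it does not assume normality, which matters for these heavily skewed session counts. A self-contained sketch of the U statistic, with hypothetical group sizes and counts:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: over all pairs, count how often a value
    from x exceeds a value from y, with ties counting one half."""
    u = 0.0
    for xi in x:
        for yi in y:
            if xi > yi:
                u += 1.0
            elif xi == yi:
                u += 0.5
    return u

# Hypothetical session counts for the two groups.
limit_group = [3, 17, 18, 25, 90]
no_limit_group = [5, 14, 22, 30, 41]

u = mann_whitney_u(limit_group, no_limit_group)
# Under the null of no group difference, U lies near
# len(x) * len(y) / 2 = 12.5; the p-value follows from the U distribution.
```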
Table 5. Logistic regression analysis, with random intercepts for users, predicting prolonged play.
feature can prevent people from binge playing a game, but a concern is that users will simply
play fewer sessions rather than maintain their interest over a longer period of time. Findings
of this study show that participants in the daily limit condition indeed played on more
days than participants who could “binge” as many sessions as they
wanted, while playing a similar number of sessions. So, it seems the daily limit did not
demotivate them, but rather spread the learning experience over more days. This suggests
that including a daily limit in a gamified app in an education setting can be a useful tool to
prevent binge playing and enhance distributed learning.
Finally, we investigated whether a participant’s performance in the game affects pro-
longed play. Results show that if participants perform very well (in the current study often
having a perfect score), they become less likely to continue playing, which confirms the
importance of ensuring that participants are sufficiently challenged. For a multiple-choice
quiz about mostly independent facts, it can be difficult to manipulate the difficulty of the
learning task. However, it is possible to estimate the likelihood that a participant answers a
question correctly and to use this to manipulate performance. By using a client-server
architecture, where all user data is collected, information from all users can be used to
improve this estimate. More large-scale research with this type of application can help us
better understand how performance affects prolonged play.
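One way to use the pooled logs from such a client-server architecture is a smoothed per-question success-rate estimate, from which the app could select questions near a target difficulty. This is an illustrative sketch under our own assumptions, not Knowingo's actual algorithm; all names and numbers are hypothetical.

```python
def correct_probability(n_correct, n_attempts, prior_correct=1, prior_attempts=2):
    """Estimate P(correct answer) for a question from pooled user logs.

    Laplace smoothing makes unseen questions start at 0.5 and keeps
    estimates stable when a question has few attempts."""
    return (n_correct + prior_correct) / (n_attempts + prior_attempts)

def pick_question(question_stats, target=0.7):
    """Select the question whose estimated success rate is closest to the
    target, keeping players challenged but not frustrated."""
    return min(question_stats,
               key=lambda q: abs(correct_probability(q["correct"], q["attempts"]) - target))

question_stats = [
    {"id": "exam_rules_1", "correct": 90, "attempts": 100},     # too easy
    {"id": "campus_events_2", "correct": 20, "attempts": 100},  # too hard
    {"id": "regulations_3", "correct": 68, "attempts": 100},    # about right
]
chosen = pick_question(question_stats)
```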
This study has several limitations that need to be taken into account. The number of
unique participants in our sample was small due to a relatively small sample population. In
addition, we found that three factors made it difficult to get students to play. First, volun-
tary participation might be perceived by students as additional and unnecessary work. Since
we are interested in the extent to which the app alone manages to engage students, we did
not offer any form of compensation for participating. Second, the app is mainly directed at
students who know little about the university, such as first-year students, so our pool of
all students in the School of Social Sciences was (purposefully) too broad in the first period of
our data collection. Third, the current project, which served as a pilot, was launched in the
Spring, close to the end of the academic year. To compensate for the low participation
rate, we included a second wave of data collection that was restricted to freshman students
and started earlier in the academic year, in the Autumn. The relative response
rates in the second period were higher than in the first period, but the number of
interested students remains a small minority.
Aside from limiting our sample, this also tells us something important about the
challenges of gamification projects in naturalistic educational field settings, where it
would be inappropriate to compensate students for participation. This calls for more field
research that investigates whether and how we can get students to actually participate. The
results of this study do offer hope: once students started playing, a non-trivial number of
students became engaged, with some playing for many hours. More than
improving the game itself, the challenge may be to get students to take that first step. In
addition, experimental research into the efficacy of educational games may complement the
valuable insights from field studies.
In conclusion, while our findings are preliminary, they suggest that careful manipulation
of game mechanisms can have an impact on sustaining and encouraging play via a gamified
application. Despite the limitations of our study, we are encouraged by these findings and
hope to continue this line of work in gamifying educational settings.
Acknowledgements
We thank Loren Roosendaal and IC3D Media for allowing us to use the Knowingo application for the
purpose of this research and their support in accessing the API for extracting the log data of our
participants. Furthermore, we thank the education office of the Faculty of Social Sciences at the Vrije
Universiteit Amsterdam, and in particular Karin Bijker, for enabling us to perform this study.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.
Note
1. The common approach to test for an inverted U-curve would be to fit a quadratic term. However,
for our current data, which show a ceiling effect for performance, this would not be a good way to test
our hypothesis. The reason is that it would test whether prolonged play decreases if a user performs
below his or her average, but given the ceiling effect, below-average performance is still not poor
performance. For reference, we also fitted a quadratic effect, but this did not improve the model
(χ² = 0.346, df = 1, p = ns).
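The chi-square comparison in this note is a likelihood-ratio test with one degree of freedom (the added quadratic term). For df = 1 it can be computed with the standard library alone; the log-likelihood values below are hypothetical, chosen only to reproduce the reported statistic.

```python
import math

def lr_test_df1(ll_reduced, ll_full):
    """Likelihood-ratio test for one added model parameter.

    The statistic 2 * (ll_full - ll_reduced) is chi-square distributed with
    df = 1, whose survival function reduces to erfc(sqrt(stat / 2))."""
    stat = 2.0 * (ll_full - ll_reduced)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical log-likelihoods reproducing the reported chi-square of 0.346.
stat, p = lr_test_df1(-120.173, -120.0)
# p > 0.05, so the quadratic term does not significantly improve the model.
```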
References
Barata G, Gama S, Jorge J, et al. (2015) Gamification for smarter learning: tales from the trenches.
Smart Learning Environments 2(10): 1–23.
Burgers C, Eden A, van Engelenburg M, et al. (2015) How feedback boosts motivation and play in a
brain-training game. Computers in Human Behavior 48: 94–103.
Cheong C, Cheong F and Filippou J (2013) Quick quiz: a gamified approach for enhancing learning.
In: Proceedings of the Pacific Asia conference on information systems (PACIS), Jeju Island, Korea.
Chou Y (2017) A comprehensive list of 90+ gamification studies with ROI stats [Blog post]. Available
at: http://yukaichou.com/gamification-examples/gamification-stats-figures/ (accessed 20
October 2017).
Connolly TM, Boyle EA, MacArthur E, et al. (2012) A systematic literature review of empirical
evidence on computer games and serious games. Computers and Education 59(2): 661–686.
Cózar-Gutiérrez R and Sáez-López JM (2016) Game-based learning and gamification in initial teacher
training in the social sciences: an experiment with MinecraftEdu. International Journal of
Educational Technology in Higher Education 13(2): 1–11.
Csikszentmihalyi M (1990) Flow: The Psychology of Optimal Experience. New York: Harper & Row.
de Vries H, Kremers SPJ, Smeets T, et al. (2008) The effectiveness of tailored feedback and action
plans in an intervention addressing multiple health behaviors. American Journal of Health
Promotion 22(6): 417–424.
Denny P (2013) The effect of virtual achievements on student engagement. In: Proceedings of the
SIGCHI conference on human factors in computing systems, Paris, France, 27 April–2 May 2013,
pp.763–772. New York: ACM.
Deterding S, Dixon D, Khaled R, et al. (2011) From game design elements to gamefulness: defining
gamification. In: Proceedings of the 15th international academic MindTrek conference: envisioning
future media environments, Tampere, Finland, 28–30 September 2011, pp.9–15. New York: ACM.
Dong T, Dontcheva M, Joseph D, et al. (2012) Discovery-based games for learning software. In:
Proceedings of the SIGCHI conference on human factors in computing systems, Austin Texas,
USA, 5–10 May, 2012, pp.2083–2086. New York: ACM.
Dunlosky J, Rawson KA, Marsh EJ, et al. (2013) Improving students’ learning with effective learning
techniques: promising directions from cognitive and educational psychology. Psychological Science
in the Public Interest 14(1): 4–58.
Fitz-Walter Z, Tjondronegoro D and Wyeth P (2011) Orientation passport: using gamification to
engage university students. In: Proceedings of the 23rd Australian computer-human interaction con-
ference, Canberra, Australia, 28 November–2 December 2011, pp.122–125. New York: ACM.
Fu FL, Su RC and Yu SC (2009) EGameFlow: a scale to measure learners' enjoyment of e-learning games. Computers and Education 52(1): 101–112.
Garris R, Ahlers R and Driskell JE (2002) Games, motivation, and learning: a research and practice
model. Simulation and Gaming 33(4): 441–467.
Gee JP (2013) Games for learning. Educational Horizons 91(4): 16–20.
Giora R, Fein O, Kronrod A, et al. (2004) Weapons of mass distraction: optimal innovation and
pleasure ratings. Metaphor and Symbol 19(2): 115–141.
Hamari J, Koivisto J and Sarsa H (2014) Does gamification work? A literature review of empirical
studies on gamification. In: 2014 47th Hawaii international conference on system sciences, Waikoloa, Hawaii, USA, 6–9 January 2014, pp.3025–3034. New York: IEEE.
Hatala R, Cook DA, Zendejas B, et al. (2014) Feedback for simulation-based procedural skills train-
ing: a meta-analysis and critical narrative synthesis. Advances in Health Sciences Education
19(2): 251–272.
Heidt CT, Arbuthnott KD and Price HL (2016) The effects of distributed learning on enhanced
cognitive interview training. Psychiatry, Psychology and Law 23(1): 47–61.
Huotari K and Hamari J (2012) Defining gamification: a service marketing perspective. In: Proceeding
of the 16th international academic MindTrek conference, Tampere, Finland, 3–5 October 2012,
pp.17–22. New York: ACM.
Kapp KM (2012) The Gamification of Learning and Instruction: Game-based Methods and Strategies for
Training and Education. San Francisco, CA: Pfeiffer.
Khan A, Ahmad FH and Malik MM (2017) Use of digital game based learning and gamification in
secondary school science: the effect on student engagement, learning and gender difference.
Education and Information Technologies 22(6): 2767–2804.
Kluger AN and DeNisi A (1996) The effects of feedback interventions on performance: a historical
review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin
119(2): 254–284.
Krebs P, Prochaska JO and Rossi JS (2010) A meta-analysis of computer-tailored interventions for
health behavior change. Preventive Medicine 51(3): 214–221.
Kroeze W, Werkman A and Brug J (2006) A systematic review of randomized trials on the effective-
ness of computer-tailored education on physical activity and dietary behaviors. Annals of
Behavioral Medicine 31(3): 205–223.
Lee JJ and Hammer J (2011) Gamification in education: what, how, why bother? Academic Exchange
Quarterly 15(2): 146.
Li W, Grossman T and Fitzmaurice G (2012) GamiCAD: a gamified tutorial system for first time AutoCAD users. In: Proceedings of the 25th annual ACM symposium on user interface software and technology, Cambridge, Massachusetts, USA, 7–10 October 2012, pp.103–112. New York: ACM.
Lustria MLA, Noar SM, Cortese J, et al. (2013) A meta-analysis of web-delivered tailored health
behavior change interventions. Journal of Health Communication 18(9): 1039–1069.
Malone TW (1982) Heuristics for designing enjoyable user interfaces: lessons from computer games.
In: Proceedings of the 1982 conference on human factors in computing systems, Gaithersburg, Maryland, USA, 15–17 March 1982, pp.63–68. New York: ACM.
108 E-Learning and Digital Media 16(2)
Malone TW and Lepper MR (1987) Making learning fun: a taxonomy of intrinsic motivations for
learning. In: Snow RE and Farr MJ (eds) Aptitude, Learning, and Instruction. vol. 3, London,
England: Routledge, pp.223–253.
Mollick ER and Rothbard N (2014) Mandatory fun: consent, gamification and the impact of games at
work. The Wharton School Research Paper Series. Epub ahead of print 4 December 2017. DOI:
10.2139/ssrn.2277103.
Nah FFH, Zeng Q, Telaprolu VR, et al. (2014) Gamification of education: a review of literature. In:
International conference on HCI in business, Heraklion, Crete, Greece, 22–27 June 2014, pp.401–409.
Switzerland: Springer International Publishing.
Neff R and Fry J (2009) Periodic prompts and reminders in health promotion and health behavior
interventions: systematic review. Journal of Medical Internet Research 11(2): e16.
Noble N, Paul C, Carey M, et al. (2015) A randomised trial assessing the acceptability and effective-
ness of providing generic versus tailored feedback about health risks for a high need primary care
sample. BMC Family Practice 16: 95–103.
Rohrer D (2015) Student instruction should be distributed over long time periods. Educational
Psychology Review 27(4): 635–643.
Schneider F, de Vries H, Candel M, et al. (2013) Periodic email prompts to re-use an internet-delivered
computer-tailored lifestyle program: influence of prompt content and timing. Journal of Medical
Internet Research 15(1): e23.
Su CH and Cheng CH (2015) A mobile gamification learning system for improving the learning
motivation and achievements. Journal of Computer Assisted Learning 31(3): 268–286.
Sweetser P and Wyeth P (2005) Gameflow: a model for evaluating player enjoyment in games.
Computers in Entertainment 3(3): 3.
Terrill B (2008) My coverage of lobby of the social gaming summit [Blog post]. Available at: www.bretterrill.com/2008/06/my-coverage-of-lobby-of-social-gaming.html (accessed 12 October 2017).
Wattal S, Telang R, Mukhopadhyay T, et al. (2012) What’s in a “name”? Impact of use of customer
information in e-mail advertisements. Information Systems Research 23(3): 679–697.
Author Biographies
Kasper Welbers is a postdoctoral researcher at the Department of Communication Science
at Vrije Universiteit Amsterdam. His research focuses on how news distribution processes
have changed due to the proliferation of new media technologies.