DOI 10.1007/s40558-015-0041-0
ORIGINAL RESEARCH
A. Groth, D. Haslwanter
1 Introduction
With increasing mobile-broadband subscription rates, from 268 million in the year 2007 to 2.1 billion in 2013, the market for smartphones and laptops grew by 40 %, making it the most dynamic ICT market.1 Taking a closer look at the very saturated Austrian mobile market, in 2013 almost two out of three Austrians accessed the Internet away from home or work via portable devices (e.g. laptop, tablet, or smartphone). Of these persons, 56 % made use of mobile phones or smartphones and one-third used portable computers.2 This tremendous shift in the pattern of Internet usage demands appropriate technology to design and present websites and their content to this still emerging and information-hungry mobile user group. Although technological innovations provide end-users with new smartphones every year, there are still limitations and challenges for interfaces on mobile devices due to inherent characteristics of such devices, like smaller screen sizes, non-traditional input methods, and navigational difficulties (Nah et al. 2005).
Being mobile or "on the move" easily conjures the image of a touristic context, with people searching for context-sensitive information or posting, commenting on, and liking everything they deem noteworthy to their friends back home through their preferred social networks using their smartphones. Especially within the field of eTourism, research recognizes the importance of mobile technologies and mainly concentrates on four identifiable areas: (1) mobile-technology-oriented (e.g. Kawase et al. 2013), (2) system-oriented (e.g. Garcia et al. 2013), (3) business-oriented (e.g. Kasahara et al. 2013), and (4) user-oriented. Within the latter, another three main strands of applied research can be identified: (1) user acceptance and adoption (e.g. Bader et al. 2012; Bortenschlager et al. 2010), (2) social context (Tussyadiah 2013), and (3) user data (Not and Venturini 2013). User-oriented research is mainly understood as quantitative data analysis with a focus on causal explanations of how users deal with, accept, and intend to use mobile technologies within their travel experiences. Although there are exceptions (e.g. Wang and Fesenmaier 2013), hardly any user-behavioral studies have been conducted to better understand how people actually "use" these mobile technologies and services in a touristic context, or whether they yield any benefit at all in regard to tourist information needs from an interface point of view.
With the increasing market penetration of Internet-enabled smartphones, developers are challenged to deliver apps and services of superior quality in order to compete. Among the many aspects of such quality, an important one is usability (Nayebi et al. 2012). Under the paradigm of usability, developers promote new web interface concepts like adaptive, responsive, or even material design in order to improve accessibility and user experience for non-experienced users.
1 International Telecommunication Union, February 2013, http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2013-e.pdf.
2 Statistik Austria, 2013—3.5 million people go online shopping, http://www.statistik.at/web_en/dynamic/statistics/information_society/ict_usage_in_households/073632.
Efficiency, effectiveness, and satisfaction of responsive…
Especially for touristic service providers, the development of mobile apps and websites poses a significant investment, which is often outsourced to external web-marketing agencies, hence raising additional cost concerns regarding support, maintenance, and up-to-date information. As immersive, emotional, and usable websites are already quite well established among tourism service providers, the most cost-efficient solution seems to be the provision of a classic desktop website and an additional adapted mobile or, in some cases, responsive website. Although destination management organizations (DMOs) are starting to recognize the importance of being innovative in this regard (Gibbs and Gretzel 2015), such responsive websites still leave a rather contradictory impression on user experience parameters like attractiveness, intuitiveness, and perceived usability (Groth and Haslwanter 2015).
Within this paper we aim to contribute to a better understanding of whether, and even more how, users utilize mobile tourism information websites when encountering responsive or adaptive destination websites on a smartphone in order to complete various touristic information-search-related tasks. Through heuristic evaluation and user testing, the efficiency, effectiveness, and satisfaction of these two types of mobile web interfaces are analyzed and compared.
2 Theoretical background
Research on the usage of smartphones within the tourism domain has generally revolved around (1) the development of specific applications for mobile phones (e.g. Rasinger et al. 2009), (2) the acceptance and adoption of smartphones as an information communication tool (e.g. Eriksson and Strandvik 2009; Kim et al. 2008), or (3) the impact of smartphone use on various aspects of a tourist's travel experience (e.g. Kramer et al. 2007). Even more, a tourist's smartphone enables interactions between the user and both the virtual and physical world, without any regard for the current location of use (Gretzel et al. 2006). Within the literature on human–computer interaction and tourism information systems and services, the focus has been laid on (1) mobile recommender systems (e.g. Ricci 2010), (2) navigation systems (e.g. Haid et al. 2008), (3) location-based systems (e.g. Kaasinen 2005), and naturally (4) various design aspects and impacts within mobile tour guides (e.g. Grün et al. 2008).
Following a qualitative study by Wang et al. (2014), a tourist's smartphone not only plays an important role during the trip itself, but also impacts the whole touristic experience, hence changing a tourist's travel activities across all three stages of a trip: pre-trip travel planning, en-route activities, and after-trip activities. In their study, respondents referred to an improved ease of use when utilizing their smartphone for planning activities, as well as to their smartphone being the most convenient solution when searching for tourism information at the destination, resulting in increased flexibility during the actual trip. While "the smartphone appears to be an effective and handy tool to search for information regarding transportation, accommodation, dining, things to do during trips, travel ideas, and deals both before and during trips" (p. 18), perceived convenience and ease of use have been the top responses when participants were asked for a rationale for their smartphone use.
Turning to the context of mobile information search, Kellar et al. (2007) distinguish three behavioral patterns when utilizing one's smartphone: (1) information-seeking (fact-finding, gathering, and browsing information), (2) action-support (in-the-moment and planning), and (3) information-exchange (transaction and communication). These general behavioral patterns map onto a touristic context through (1) information search (e.g. for restaurants, deals), (2) facilitation (e.g. navigation during the trip, checking the weather), and (3) communication (phone calls, logging in to Facebook). A further context of entertainment (e.g. taking and sharing photos, playing games, listening to music) is added, although this context relates less to mobile tourism information search and more to a search for distraction or killing time (Wang et al. 2014).
Within the extensive research on tourism information search behavior, several streams of literature can be identified. Firstly, it is commonly understood that people basically search for information within their (1) internal resources, which are derived and retrieved from previous experiences and past search results (Chen and Gursoy 2000). This knowledge of a destination affects information search behavior and consequently decision-making (e.g. Gursoy 2003). In addition, (2) external information sources like destination-specific literature, family and friends, media, and travel agencies (Snepenger and Snepenger 1993), as well as recommendations through professional advice, advertisements, word-of-mouth, and non-tourism movies and books, are distinguished (Baloglu 2000). Secondly, tourism information search is considered from a process perspective, providing various models for explaining and predicting information search behavior (e.g. Vogt and Fesenmaier 1998; Fodness and Murray 1999). Thirdly, with the rise and importance of the Internet, literature has focused on the specifics of online search patterns and the overall search process when searching for tourism information online (e.g. Mitsche 2005; Pan and Fesenmaier 2006). As the Internet further matures, evolvements and deviations through the introduction and inclusion of social media (e.g. Pan et al. 2007) and virtual travel communities (e.g. Wang and Fesenmaier 2004) into this search process have been studied. Fourthly, and most relevant for this study, research in the field of search strategies distinguishes between searching via keywords (e.g. Chen et al. 1998), via search engines (e.g. Hawk and Wang 1999), via browsing the Internet (e.g. Chung 2006), via utilizing sub-directories (e.g. Nachmias and Gilad 2002), and via visiting known websites (e.g. Fidel et al. 1999). Acknowledging that search behavior has been analyzed with regard to user experience (e.g. Adukaite et al. 2013), there is still little understanding of how a mobile user interface and the representation of tourism information on smartphones are actually utilized by users, and how their search behavior, in terms of efficiency, effectiveness, and satisfaction, is influenced.
Furthermore, Ho et al. (2012) recognize the importance of effective tourism information search, especially towards a better understanding of a tourist's search-behavioral characteristics, as these are not only significant in identifying and maintaining a strong position within a competitive e-commerce environment, but also serve as a basis for further improving mobile interfaces and search
2.2 Usability
Within ISO 9241-11, 'Usability' is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". In more detail, effectiveness addresses the accuracy and completeness with which users achieve goals, efficiency the resources expended when achieving these goals, and satisfaction is defined as a user's comfort with and positive attitude towards the use of the system.
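The three ISO constructs are frequently operationalized as simple ratios in usability testing. The following sketch shows one common, illustrative operationalization; the formulas and function names are our assumptions, as ISO 9241-11 itself does not prescribe specific metrics:

```javascript
// Illustrative operationalization of the ISO 9241-11 constructs.
// The formulas are common conventions, not prescribed by the standard.

// Effectiveness: accuracy and completeness, here as the share of tasks
// completed successfully out of all attempted tasks.
function effectiveness(completed, attempted) {
  return completed / attempted;
}

// Efficiency: effectiveness relative to the resources expended,
// here measured as total time in minutes.
function efficiency(completed, attempted, totalMinutes) {
  return effectiveness(completed, attempted) / totalMinutes;
}

// Satisfaction: e.g. the mean of post-test questionnaire ratings.
function satisfaction(ratings) {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}
```

For example, a participant completing three of five tasks in 2 minutes would score an effectiveness of 0.6 and an efficiency of 0.3.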
Nielsen (2012) defines 'Usability' as a qualitative attribute in order to assess the ease of use of system interfaces. The term itself also refers to methods for enhancing ease of use during the design process. Furthermore, 'Usability' can be defined through five components strongly contributing to overall product quality: learnability, efficiency, memorability, errors, and satisfaction. A related attribute, utility, refers to a design's functionality and investigates whether the system actually fulfills a user's needs.
Usability itself should be approached from multiple vantage points in order to become sensitized to the various aspects and elements that may have an impact on the usage of a system. A study by Hertzum (2010) identified six images (perspectives) of usability in order to generate complementary and competing insights on the usability of systems: universal, situational, perceived, hedonic, organizational, and cultural usability. The six images are neither assumed to form an exhaustive set of usability images, nor are they mutually exclusive. They are interwoven points of view with blended borders, providing a good overview of the variety of issues that have to be genuinely understood in order to understand the usability of a system.
Shneiderman (2000, p. 85) defines universal usability as "having more than 90 % of all households as successful users of information and communication services at least once a week". One challenge of today's information and communication services is to provide functionalities that are accessible and usable for a broad audience of unskilled users. Some older technologies, like postal services, telephones, and televisions, have reached this goal of universal usability. Nevertheless, computing technologies in particular are still too difficult to use for a large number of people. In order to achieve universal usability, three major challenges can be identified for web-based and other services: (1) technology variety, (2) user diversity, and (3) gaps in user knowledge.
Technology variety addresses the innate problem of supporting a wide range of hardware, software, and network access. Modern computing and network services have to remain usable across a range of very different software technologies, such as operating systems and protocols, as well as varying processor speeds, screen sizes, and network bandwidths. The co-existence of users with vastly different network connections, such as users who continue to use older smartphones while others upgrade to newer, faster, and more capable devices, poses a major challenge in this area (Hertzum 2010). User diversity describes the existence of users with different skills, knowledge, age, gender, disability, disabling conditions, literacy, culture, and income (Shneiderman 2000). Gaps in user knowledge identify the divergence between what users know and what they need to know in order to make use of a service (Shneiderman 2000). Successful approaches to minimizing those gaps in knowledge include the use of familiar metaphors and an inclusive design, in combination with the allocation of customer service, online help and training, as well as supportive user communities (Hertzum 2010).
Specifically within the mobile technology sector, these three challenges become even more crucial, as a wider range of people own and use their smartphones for a variety of online services and functionalities, either via apps or via browsing. Mobile technology is very difficult to compare between smartphones and follows a highly innovation-based annual release policy, rendering smartphones outdated very quickly. In addition, smartphones are already marketed and offered to 'everybody', regardless of social status or income, which directly transfers over to the last challenge of how users inform themselves about the usage of their phone. Gaps in user knowledge may also be interpreted as users being structurally uninformed about how their devices actually work, or how apps should be set up and used properly. Official instructions are no longer delivered as a physical manual, so people start experimenting and learn from their own experience, or through the advice of their peers, which results in very different, not comparable, and unpredictable use behaviors.
Therefore, strategies to cope with these challenges in order to achieve universal usability, even for mobile websites or applications, mostly remain in the agreement on and adherence to general guidelines and standards (Hertzum 2010). An example of such a universal guideline is defined as follows: "If menu selection is accomplished by pointing, as on touch displays, design the acceptable area for pointing to be as large as consistently possible, including at least the area of the displayed option label plus a half-character distance around the label" (Smith and Mosier 1986, p. 230). This guideline stands as a good representation of the above-mentioned "universality", as it applies to all menu items that are selected by pointing, regardless of the user herself, her tasks, and other factors attributing to the specific context of use, be it mobile or desktop (Hertzum 2010).
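For touch interfaces, the quoted guideline could be approximated with a few CSS rules; the selector and the 0.5em padding value are illustrative assumptions on our part, not part of the original guideline text:

```css
/* Sketch: enlarging the acceptable pointing area of menu items,
   following the Smith and Mosier (1986) guideline quoted above.
   Selector and values are illustrative. */
nav a {
  display: inline-block;
  /* roughly half a character of extra tappable space around the label */
  padding: 0.5em;
}
```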
Coursaris and Kim analyzed all notable studies in this field and identified three unsurprising core constructs that have been researched most in this area: Efficiency, Effectiveness, and Satisfaction.
Mobile phones do have a variety of advanced functionalities and features, but usability issues remain increasingly challenging. These advances in mobile technology have been the accelerator for the development of a wide range of applications that can be used by people when travelling or generally on the go. However, one aspect that is still overlooked by many developers is the context of user interaction. Users want to fully use and utilize their devices wherever and whenever they are. Usability and user experience have a critical impact on the success of any mobile website or application in this special context of mobility, which comes along with small screen sizes, limited connectivity, different data entry modes, and high power consumption rates (Harrison et al. 2013).
In a study by Budiu and Nielsen (2010) on mobile user experience, the overall evaluation was significantly inferior compared to the usability of regular websites. The average success rate for given tasks on mobile websites was only 59 %, substantially lower than the success rate of about 80 % for websites on a regular PC. The main identified problems of mobile usability are:
• Small screens The physical characteristics of mobile devices imply that fewer options are visible at any given time. Users therefore rely more on their short-term memory to build an understanding of the overall information space. This has negative consequences for the overall interaction with the device.
• Awkward input Input paradigms differ between desktop computers and mobile devices. Operating graphical interface widgets without a mouse, especially when typing, or using menus and buttons with one's fingers, takes longer and is more error-prone.
• Download delays Mobile bandwidth rates often suffer from slower or unstable connections, leading to longer page-loading times.
• Mis-designed sites Most websites are still optimized and tailored for desktop usability. As a result, they do not adhere to any guidelines of mobile usability.
tailored heuristics for mobile devices have to be applied. An adapted and tested framework for the evaluation of mobile usability would help designers find usability problems more efficiently and would eventually lead to the design of better solutions (Heo et al. 2009).
of a device. The original proportions of the design are retained and not distorted through this approach. Resizing in that sense means both expanding and contracting. Flexible images pertain to a flexible layout, which itself is mainly based on percentages when dealing with images and graphical elements. Whenever such elements are not properly prepared before they are uploaded to a website, the result can be an image or graphic that overflows its own container and breaks the viewport of the device. Responsive web design addresses this issue through the establishment of CSS rules and guidelines. The easiest way is to restrict the elements to a 100 % width or height dimension, using the max-width property. This means that every element inside a predefined flexible container with such a rule can only be the maximum width or height of this container and will be automatically scaled to its container size. If a flexible container resizes itself, which implies that the images are being enlarged or shrunken, the image's aspect ratio remains untouched. Media queries, finally, address a problem that might arise out of the usage of flexible grids and layouts, which can result in usability issues. Under certain conditions, the changes in layout could compromise readability and lead to a detraction regarding user experience. For example, a navigation menu could be torn apart into two lines because of the unexpected shrinking width of its column. A proper solution is the use of CSS3's media queries, which allow browsers to serve different styles for different viewing contexts. This adds the ability to target media features such as screen and device width and orientation. Typical examples for such queries are that a smartphone has less than 570 pixels width or that a tablet device can support orientation. Such categories that effectively adjust the content and layout to the context of the device greatly ensure that the user has a better and richer viewing experience (Gardner 2011).
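The two techniques described above can be sketched in CSS as follows; the 570-pixel breakpoint follows the example in the text, while the selectors and remaining values are illustrative assumptions:

```css
/* Flexible images: an image may never exceed the width of its flexible
   container; the browser preserves the aspect ratio automatically. */
img {
  max-width: 100%;
  height: auto;
}

/* Media query: serve a different style to narrow (smartphone) viewports,
   e.g. stacking navigation items so the menu is not torn into two lines. */
@media screen and (max-width: 570px) {
  .main-navigation li {
    display: block; /* stack menu items vertically */
    width: 100%;
  }
}
```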
RWD provides a solution to the challenge of maintaining and updating more than one set of content for different types of websites. Another major improvement is that there is no need to additionally promote a website as 'mobile', since responsive websites recognize the use context (mobile or desktop) and automatically adjust their layout to the target device. Users will barely realize that they are using a responsive website, because all the information that is present on a full desktop site is also available on the responsive version of it. All features of the full desktop site that are supported by the device can also be used, and therefore users benefit from an optimized mobile experience while still being able to access the whole range of content and services (Bohyun 2013). RWD aims to create one single website that is available and accessible to any user on any sort of device, thereby establishing consistency in content delivery across a variety of platforms.
A RWD approach does not per se guarantee a satisfactory mobile experience. Examples of unsuccessful implementations of responsive web design can be seen when the conversion between the full site and the responsive site does not include adjustments in text and page structure. Often, responsive websites result in a long page filled with too many lines of text, navigation items, and links. A positive mobile experience requires more than simply making elements flow into a long strip. Given the restricted space on mobile screens, there has to be an alternative way of presenting content in a streamlined and uncluttered manner, while focusing on the most important items that mobile users want to access (Bohyun 2013).
Adaptive mobile websites mostly follow responsive design paradigms like progressive enhancement, but distinguish themselves by providing fixed, pre-designed layouts for various screen sizes. When a user visits such a prepared website, the device is identified through the web browser and the adjusted design is delivered. 'Adaptive' in this sense can be understood as pre-defined for different screen resolutions (Gustafson et al. 2013).
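Device identification for an adaptive site is typically based on the browser's User-Agent string. The following is a minimal, hypothetical sketch of such a check; real adaptive sites usually rely on far more elaborate device databases:

```javascript
// Minimal sketch of User-Agent-based layout selection for an adaptive site.
// The regular expression is a simplified assumption and not exhaustive.
function selectLayout(userAgent) {
  const isMobile = /Mobile|Android|iPhone|iPad/i.test(userAgent);
  // Return the name of a fixed, pre-designed layout for this device class.
  return isMobile ? 'mobile-layout' : 'desktop-layout';
}
```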
3 Methodology
"Every usability evaluation method has its advantages and disadvantages. Some are difficult to apply, and others are dependent on the measurers' opinions or instruments. In addition to these challenges, mobile devices and applications change very quickly, and updated methods of usability evaluation and measurement are required on an ongoing basis" (Nayebi et al. 2012, p. 1).
Against the background described, a research setting was designed comprising the above-mentioned mobile usability heuristics, which not only apply to the context of mobile devices and their use when searching for tourism information, but should also prove to be meaningful and, most of all, comparable. Hence, we focused on the most applied measures in mobile usability testing: efficiency, effectiveness, and satisfaction (Harrison et al. 2013).
The following research hypotheses have been formulated for the usability test:
H1: A responsive mobile touristic website is more efficient to use than a mobile touristic website.
H2: A responsive mobile touristic website is more effective to use than a mobile touristic website.
H3: A responsive mobile touristic website is more satisfying to use than a mobile touristic website.
The main goal of our usability experiment was to measure the influence and effect of two different mobile design approaches on usability and the overall performance of users, who were exposed to two different mobile touristic websites: one applying a RWD approach and one a mobile approach with some basic elements of RWD. The whole experiment included two sessions in which users performed a series of information-seeking tasks on a smartphone.
The usability experiment investigated two different websites from the area of tourism destinations between 23 June and 12 July 2014. There were no significant changes to these mobile websites in the timeframe of the evaluation. The first website, further called Website A, http://www.tirol.at,3 applies a very strict implementation of the responsive design approach as described
3 Tirol Werbung is a destination marketing organization, responsible for brand building and awareness of the country of Tyrol. The website offers all important destination-related information like events, local weather, tours for biking, hiking, and skiing, and a booking functionality within the Tyrolean region.
above. Strict here means that many features of the website respond to smaller screen sizes, such as changes in layout, different navigation, and alternative naming of links and headings. In general, the website follows a strong degree of compliance with the proposed guidelines of RWD: the main background image is excluded, as are several widgets like the weather box, the picture gallery, and the registration box for the newsletter. These changes keep the page short and therefore limit extensive scrolling operations. In addition, some descriptions of the navigation menu links alter between the desktop and smartphone versions. The booking pages also reduce the number of pictures and limit the entries of accommodation entities to their names and a brief description.
As a second website, further called Website B, http://www.oetztal.com4 was selected, which applies an adaptive mobile design approach. Elements change only slightly: images stay persistent, content is not cut or reduced in length, and functionalities are not excluded. This results in a very long, vertical, column-like website with intensive scrolling operations for the users. The main navigation menu changes from a horizontal design with three main sections on the desktop to a vertical menu with four main sections and subsections when viewed on a smartphone. The additional menu item 'Events' on the smartphone is incorporated in the section 'Current News' on the desktop version. There were no changes in the naming of navigation menu links. The large number of menu items fills a long vertical list that is not accessible within the dimensions of one screen (iPhone, 4 inch). As a result, the main menu requires intensive scrolling operations.
20 persons (14 male and 6 female) aged between 16 and 29 were asked to participate in the usability experiment. The sample size of 20 participants was chosen in order to comply with the minimum number required to run an analysis of variance (ANOVA) (Simmons et al. 2011).
As usability tests deal with smaller group sizes, a decision regarding the age of the test group had to be made. Following several studies, 88 % of Internet users who make use of mobile devices to enter the World Wide Web away from home or work are aged between 16 and 24 years.5 All participants were asked to estimate their daily time spent on a desktop computer and on a smartphone in hours, resulting in an average of 3.2 h per day (SD = 2.78) on a computer and an average of 2.2 h per day (SD = 1.62) on their smartphone.
All participants had a basic understanding of using smartphones and the mobile Internet in order to deal with the required tasks. Participants had to rate themselves on a scale from expert to intermediate to beginner regarding their expertise with smartphones. Two rated themselves as experts, 17 as intermediate, and one as a beginner. Nevertheless, all participants regularly used their built-in web-
4 Ötztal Tourismus is responsible for marketing the valley of Ötztal in Tyrol, including its areas of Sölden and Obergurgl/Hochgurgl. The website offers all important destination-related information like events, local weather, tours for biking, hiking, and skiing, and a booking functionality.
5 Statistik Austria, 2013—3.5 million people go online shopping, http://www.statistik.at/web_en/dynamic/statistics/information_society/ict_usage_in_households/073632.
browser to surf the Internet. All participants owned a smartphone, with eight using Apple iOS and twelve using Google Android. In addition, all users were familiar with Tirol Marketing and Ötztal as holiday destinations, but had never visited either the desktop or the mobile website of the two DMOs before. The preferred system for looking up touristic information online was the desktop computer or laptop for 18 participants and the smartphone for two participants.
A decision had to be made regarding the selection of the smartphone to be used. In order to measure the influence of a RWD approach, we opted against letting people use their own smartphones, to enforce comparability between users. Especially within the Android ecosphere, there exists a plethora of different devices with different screen sizes and (haptic) device buttons, which would make it difficult to avoid a certain bias from learned error handling and navigation on a user's preferred smartphone. Therefore, an iPhone 5s running iOS 7.1.1 was chosen, as the iPhone sports no additional buttons (only the home button) and all navigation can be handled solely via touch-screen interaction, which helps to minimize this bias, as touch interaction for navigation is common on all current smartphones. Android users were given several minutes to familiarize themselves with the handling of the device. As a web-browser, the pre-installed Safari browser was used. All participants were recorded on video (face and screen) and audio with Magitest (http://www.magitest.com). The data was collected and analyzed with Microsoft Excel 2013 and IBM SPSS Statistics 21.
3.4 Procedure
The experiment was split into two sessions, with the second session held 2 weeks after the first.
The overall experiment was designed as an A/B test setting. A/B testing (also called split testing) compares two versions of a website to identify which one performs better from a user point of view (Brau et al. 2008). A/B testing is a popular method for the comparison of alternative designs on web pages. The users randomly work with either the first or the second version of the deployed design alternatives and are separated into Group A and Group B (Sauro and Lewis 2012). Predefined criteria are then measured in order to compare the results of the two tested groups.
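Random assignment of participants to the two groups can be sketched as follows; the function and its names are a generic illustration, not the tooling used in the study:

```javascript
// Sketch: randomly split participants into two equally sized A/B groups
// using a Fisher-Yates shuffle. Generic illustration only.
function assignGroups(participants) {
  const shuffled = [...participants];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const half = Math.ceil(shuffled.length / 2);
  return { groupA: shuffled.slice(0, half), groupB: shuffled.slice(half) };
}
```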
In the first session, the group was divided equally, with Group A starting with Website A and Group B with Website B. After a break of 2 weeks, the same persons participated in the second part of the experiment with websites flipped.
The experiment followed Rubin's outline for usability testing (Rubin and Chisnell 2008). Within a brief introduction phase, the participants were familiarized with the topic, and the setting of the experiment was described. Then a pre-questionnaire was completed covering the participants' general travel behavior as well as their demographics. Afterwards, all tasks had to be completed on the smartphone. The aim of the tasks was to discover the content and functionalities of the two different websites, whereas all tasks were designed to be very similar in order to guarantee that the results are comparable. Participants' expressions were captured using a camera and a microphone in order to document their actions while they were facing the tasks. A post-test questionnaire was conducted after the participants finished all tasks, collecting insights about their general use of smartphones and desktop computers.
Each session comprised five tasks: four tourism information-seeking tasks and one tourism action-oriented task. The set of tasks was the same for the two sessions in order to collect comparable results. Classifying the tasks along two dimensions, difficulty level (easy, medium, difficult) and degree of scrolling (easy, medium, heavy), ensured that the tasks had varying levels of difficulty and that assumptions could be made about the effects of scrolling on effectiveness and efficiency. Participants were also not familiar with the tasks or how to solve them (Raptis et al. 2013). Four of our tasks were related to information search, as we conceptualized this as the main activity when visiting tourism websites on a smartphone. Finding information within a mobile website may seem trivial at first, but it poses a strikingly challenging task, especially when more detailed, or hidden, information has to be searched for. Website structure and a user's web-orientation skills are put to the test; hence we focused on progressively more detailed and complex information-retrieval tasks, expecting learning effects to occur when navigating the websites. For the action-oriented Task 5, the websites' booking functionality was selected in order to measure how users, after becoming familiar with the website, are able to handle this rather complex action, which still poses difficulties for many users, even on modern desktop tourism websites.
Sauro and Lewis (2010) underline the importance of Time on Task (ToT) as a powerful means to measure the efficiency of users while performing tasks. ToT measures how long a user needs to complete a task, in seconds or minutes, calculated as the time elapsed between the start and the end of a task (Tullis and Albert 2013). Within the experiment, ToT is employed not only to measure performance, but also to monitor whether users become faster on consecutive tasks.
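The computation is elementary; a minimal sketch with hypothetical start/end timestamps for one participant's consecutive tasks:

```python
def time_on_task(start_s, end_s):
    """Time on Task (ToT): seconds elapsed between task start and end."""
    return end_s - start_s

# Hypothetical (start, end) timestamps in seconds for three consecutive tasks
timestamps = [(0, 97), (100, 160), (165, 210)]
task_times = [time_on_task(s, e) for s, e in timestamps]

# Did this participant become faster on each consecutive task?
got_faster = all(a > b for a, b in zip(task_times, task_times[1:]))
print(task_times, got_faster)  # [97, 60, 45] True
```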
The main efficiency measurement besides ToT is the number of page views. Burby and Brown (2007, p. 7) define page views as "The number of times a page (an analyst-definable unit of content) was viewed." The measurement of page views was therefore aligned to this definition, and the observer of the experiment documented the number of page views while each task was performed.
All participants evaluated each version of the websites using the System Usability Scale (SUS) questionnaire (Brooke 1996a, b). SUS comprises ten statements (rated on a 5-point Likert scale) and yields a value between 0 and 100 (100 = perfect usability).
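The standard SUS scoring scheme (Brooke 1996) can be sketched as follows; the example responses are hypothetical:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten item
    responses on a 1-5 Likert scale. Odd-numbered (positively worded)
    items contribute (response - 1); even-numbered (negatively worded)
    items contribute (5 - response). The sum is multiplied by 2.5 to
    yield a value between 0 and 100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent agreeing fully with all positive items
# and disagreeing fully with all negative items:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```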
The participants evaluated each version of the websites using the Net Promoter Score (NPS) questionnaire (Reichheld 2003). NPS consists of a single question (How likely is it that you would recommend X to a friend?) on a simple 0–10 scale. Users are then categorized as Promoters (score 9–10) or Detractors (score 0–6). NPS is calculated as the percentage of Promoters minus the percentage of Detractors (score between -100 and 100).
As the sample size of the usability study was smaller than 25, the geometric mean was used to estimate the center of the population (Sauro and Lewis 2010). The accepted level of error (alpha) for the mean values of the following analyses is 5 %, corresponding to a 95 % confidence interval: the analysis is certain in 95 % of cases and wrong in 5 % (Tullis and Albert 2013). The following results should not be interpreted as one website being better than the other; all results have been analyzed with regard to a user's behavior on responsive or adaptive websites.
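The log-scale construction behind the geometric mean and its confidence interval can be sketched as follows; the task times are hypothetical, and the t critical value must be chosen to match the sample size (e.g. 2.093 for n = 20 at 95 % confidence):

```python
from math import exp, log, sqrt
from statistics import mean, stdev

def geo_mean_ci(times, t_crit):
    """Geometric mean with a t-based confidence interval, computed on
    the log scale as recommended for task-time samples below 25
    (Sauro and Lewis 2010). t_crit is the two-sided critical value
    for n-1 degrees of freedom."""
    logs = [log(t) for t in times]
    center = mean(logs)
    margin = t_crit * stdev(logs) / sqrt(len(logs))
    return exp(center), exp(center - margin), exp(center + margin)

# Hypothetical sample of three task times; t_crit = 4.303 for df = 2
gm, lo, hi = geo_mean_ci([50.0, 100.0, 200.0], t_crit=4.303)
print(round(gm, 1))  # 100.0
```

Averaging on the log scale and transforming back reduces the influence of the long right tail typical of task-time data.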
The following tables and figures present the results of the time-on-task measurement. Only successful task times are included, as recommended by Tullis and Albert (2013). Furthermore, outliers in the dataset were removed using Grubbs' test (Grubbs 1969), also called the ESD (extreme studentized deviate) method.
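One round of the outlier screening can be sketched as follows; the sample and the critical value are illustrative (the critical value depends on sample size and alpha, e.g. roughly 2.02 for n = 7 at alpha = 0.05):

```python
from statistics import mean, stdev

def grubbs_outlier(sample, g_crit):
    """One round of Grubbs' test (extreme studentized deviate): flag
    the value farthest from the mean if its studentized deviation
    exceeds the critical value g_crit for the given sample size.
    Returns the sample with the outlier removed, or unchanged if no
    outlier is detected."""
    m, s = mean(sample), stdev(sample)
    candidate = max(sample, key=lambda x: abs(x - m))
    g = abs(candidate - m) / s
    if g > g_crit:
        cleaned = list(sample)
        cleaned.remove(candidate)
        return cleaned
    return list(sample)

# Hypothetical task times with one suspiciously slow measurement
times = [52, 48, 55, 50, 49, 51, 180]
print(grubbs_outlier(times, g_crit=2.02))  # 180 is removed
```

In practice the test is applied iteratively until no further outlier exceeds the critical value.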
The data analysis in Table 1 identifies that the largest differences between the two websites were measured in the newsletter task. The disparity was very likely

Table 1  Mean time on task for Website A and Website B (with SD and 95 % confidence interval); columns: task, geometric mean, SD, lower bound (95 %), upper bound (95 %)

Fig. 1  Mean time on task, in seconds, for Website A and Website B (error bars represent the 95 % confidence interval)
The amount of effort required to complete the tasks was measured by tracking all pages visited (page views) when searching for the required information. The total number of visited pages was 23 for Website A and 19 for Website B.
The results in Fig. 2 show only one significant difference in the number of page views, within Task 2. After completing Task 1, users on Website A continued to use the standard navigation and took their time browsing the website with their thumbs. On Website B users became frustrated much faster and started using the search functionality as soon as they realized that they would not be able to find the information quickly and simply through browsing. In addition, when comparing task times and page views per task, users on Website A were on average much faster browsing through the website, opening more pages in less time compared to the preceding tasks, which suggests that they quickly became familiar with Website A's navigational structure.

Fig. 2  Mean number of page views per task (Newsletter, Address Aqua Dome, Altitude Hiking Tour, Duration Hiking Tour, Booking) for Website A and Website B (error bars represent the 95 % confidence interval)
As efficiency was measured simply by analyzing all page views without taking task duration into account, a second metric was applied. Efficiency can be described as a combination of task success and time on task. Task success was calculated as the percentage of successful tasks (no problems) and time on task was calculated in minutes (Tullis and Albert 2013). The International Organization for Standardization specifies in the common format for industry reports (ISO/IEC 25062:2006) that the core measure of efficiency is the ratio of task completion rate to the mean time per task.

Fig. 3  Average number of tasks completed successfully per minute, for Websites A and B

In order to calculate the average efficiency, a variation was implemented by counting the number of successfully completed tasks by each participant and dividing this number by the total time the participant spent on all tasks (successful and unsuccessful). The result provides a rather straightforward measure of efficiency for all participants, identified as the number of tasks completed per minute (Tullis and Albert 2013).
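The per-participant metric just described can be sketched in a few lines; the participant figures are hypothetical:

```python
def tasks_per_minute(successful_tasks, total_time_s):
    """Efficiency per participant (Tullis and Albert 2013): number of
    successfully completed tasks divided by the total time, in
    minutes, spent on all tasks (successful and unsuccessful)."""
    return successful_tasks / (total_time_s / 60.0)

# Hypothetical participant: 4 of 5 tasks solved, 8 minutes total
print(round(tasks_per_minute(4, 8 * 60), 2))  # 0.5
```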
The results in Fig. 3 show that participants were generally more efficient on Website A: users completed on average 0.6 tasks per minute on Website A (SD = 0.20) and 0.5 tasks per minute on Website B (SD = 0.21).
Results for the levels of success are presented using a four-point scoring method. The following bar chart shows the levels of success as frequencies for each task; the percentages show the share of users in each category or level (Tullis and Albert 2013).
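Turning such per-task observations into category percentages can be sketched as follows; the observation list is a hypothetical distribution of 20 participants:

```python
from collections import Counter

LEVELS = ["No Problem", "Minor Problem", "Major Problem", "Failure"]

def success_percentages(observations):
    """Four-point task completion scoring (Tullis and Albert 2013):
    share of participants per success level for one task."""
    counts = Counter(observations)
    n = len(observations)
    return {level: 100.0 * counts[level] / n for level in LEVELS}

# Hypothetical observations for one task and 20 participants
obs = (["No Problem"] * 11 + ["Minor Problem"] * 4 +
       ["Major Problem"] * 2 + ["Failure"] * 3)
print(success_percentages(obs)["No Problem"])  # 55.0
```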
The results in Fig. 4 for the levels of task completion of Website A showed that
the booking task was the one where participants faced the most problems. Three
participants (15 %) were not able to successfully complete the task, two participants
(10 %) had major problems, four participants (20 %) minor problems and only 11
participants (55 %) had no problems.
Results for task completion levels of Website B showed that negative results were also present for the booking task: one participant (5 %) was not able to complete the task, seven participants (35 %) had major problems, five participants (25 %) had minor problems and only seven participants (35 %) had no problems at all.

Fig. 4  Levels of task completion per task, as percent of participants (No Problem, Minor Problem, Major Problem, Failure)

Another task of Website B where nearly every participant had at least some problems was the newsletter registration task. One participant (5 %) was not able to complete this task, four participants (20 %) had major problems, nine participants (45 %) had minor problems and only six participants (30 %) had no problems.
Taking a closer look at Fig. 5, participants had overall many more problems with the most difficult and most scrolling-intensive task on Website B than on Website A. Although three participants were not able to complete this task on Website A, overall more participants had major problems on Website B. One reason was that participants needed much longer to find the booking functionality at all on Website B and were very much irritated by a pop-up window that did not carry over the booking criteria selected before. Hence, participants often had to close the window and re-enter their booking criteria.
The System Usability Scale (SUS) was applied as the main indicator of the perceived usability of the two mobile websites. The following mean SUS scores, standard deviations and confidence intervals (α = 5 %) were measured for the two versions of the websites. As the sample size of the usability study was smaller than 25 participants, the geometric mean was used to estimate the mean values of the different versions (Sauro and Lewis 2010).
Website A scored a geometric mean of 64.06 (SD = 19.97, 95 % confidence interval 58.03–76.72) compared to Website B with 62.91 (SD = 19.29, 95 % confidence interval 56.97–75.03).
When comparing the SUS scores for perceived usability in Table 2, there is only a small difference between both websites. Participants rated Website A slightly higher than Website B (a difference of 1.15). Compared with the adjective rating scale from Bangor et al. (2009), both smartphone versions were rated in category C ("Good").
Fig. 5  Levels of task completion for the booking task on Website A and Website B, as percent of participants (No Problem, Minor Problem, Major Problem, Failure/Quit)

Fig. 6  Comparison of adjective ratings, acceptability scores, and school grading scales, in relation to the average SUS score (Bangor et al. 2009)
Figure 6 puts this number into perspective. The average SUS score across a study of 3500 surveys is about 70. The survey also covered average SUS scores with respect to the interface type applied: classical web interfaces have an average SUS score of 68.2, while cell phones only reach about 65.9 (Bangor et al. 2009). This comparison may also be reflected in the results of our experiment, which showed very similar tendencies and differences.
To better understand the above findings on perceived usability, the Net Promoter Score (NPS), along with standard deviations and means, was measured for both websites. Table 3 shows that Website A reached minus 40 (SD = 2.52, mean rating 6.0) and Website B reached minus 45 (SD = 2.82, mean rating 5.4).
The average NPS for companies and services is about plus five to ten. Accordingly, results below zero imply that there are more detractors than promoters for the tested product or service.6 Applying this framework to the results of our experiment, no significant difference between both websites can be found. The NPS of minus 40 and minus 45 for the two versions means that there are 40 and 45 % more detractors than promoters for each version of the website (Table 3).
6 Logic, H. & LLC. Net Promoter Benchmarking—Net Promoter Community. Retrieved from http://www.netpromoter.com/why-net-promoter/compare/.
5 Discussion
H1: A responsive mobile touristic website is more efficient to use than an adaptive mobile touristic website.
Comparing all mean task times between the two versions using an independent-samples t test or a Mann–Whitney U test (when the data is not normally distributed) leads to the following results:
Task 1: Newsletter resulted in a mean of 97.05 (SD = 29.58) for Website A and a mean of 150.61 (SD = 40.42) for Website B (independent-samples t test, t = -4.6, p = 0.00). With p < 0.05 it can be concluded that the mean task times are significantly different.
Task 2: Aquadome had no normal distribution in the dataset of Website B (Shapiro–Wilk test of normality, p < 0.05), so the differences in mean task time had to be compared using the non-parametric Mann–Whitney U test. The results showed a mean of 94.63 (SD = 10.26) for Website A and a mean of 58.25 (SD = 8.07) for Website B (Mann–Whitney U = 76.5, p = 0.01 two-tailed). With p < 0.05 it can be said that participants needed significantly longer on Website A than on Website B.
Task 3: Altitude Hiking Tour also had data that was not normally distributed (Shapiro–Wilk test of normality, p < 0.05). The following means were measured: a mean of 51.95 (SD = 7.87) for Website A and a mean of 47.21 (SD = 9.35) for Website B (Mann–Whitney U = 164, p = 0.63). With p > 0.05, no significant differences were identified.
Task 4: Duration Hiking Tour again had data that was not normally distributed (Shapiro–Wilk test of normality, p < 0.05). The following means were measured: a mean of 75.84 (SD = 8.01) for Website A and a mean of 63.17 (SD = 8.58) for Website B (Mann–Whitney U = 139, p = 0.33). With p > 0.05, no significant differences were identified.
Task 5: Booking also had no normal distribution in the dataset (Shapiro–Wilk test of normality, p < 0.05). The mean of Website A was 146.0 (SD = 17.01) and Website B had a mean of 185.26 (SD = 28.84). The Mann–Whitney U test resulted in U = 139.5, p = 0.68. As p > 0.05, no significant differences were identified.
As the data for the total task time was not normally distributed (Shapiro–Wilk test of normality, p < 0.05), the effect of the website version on total task time had to be measured with the non-parametric Kruskal–Wallis test. The results show a significant difference between the total task times of the tested versions of the websites (Kruskal–Wallis, p = 0.003). In order to investigate these differences, a post hoc test (Mann–Whitney) was conducted; this method was chosen because it is valid for samples without normal distribution. Ultimately, no significant difference between both website versions (p = 0.946) can be found. Concluding, the total task time was not significantly different between both versions.
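The decision procedure running through these comparisons (normality check first, then the appropriate test) can be sketched as follows; this is a sketch assuming SciPy is available, with hypothetical task-time samples, and does not reproduce the study's SPSS settings:

```python
from scipy import stats

def compare_task_times(times_a, times_b, alpha=0.05):
    """Shapiro-Wilk tests the normality of each group; an
    independent-samples t test is applied when both groups look
    normal, a two-tailed Mann-Whitney U test otherwise."""
    _, p_a = stats.shapiro(times_a)
    _, p_b = stats.shapiro(times_b)
    if p_a > alpha and p_b > alpha:
        result = stats.ttest_ind(times_a, times_b)
        return "t test", result.statistic, result.pvalue
    result = stats.mannwhitneyu(times_a, times_b, alternative="two-sided")
    return "Mann-Whitney U", result.statistic, result.pvalue

# Hypothetical task times (seconds) for two groups of participants
name, stat, p = compare_task_times(
    [97, 85, 110, 92, 101, 88, 95, 104],
    [150, 140, 160, 155, 145, 165, 138, 158])
```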
Correlation between total task time and perceived usability (SUS) shows no significant correlation (p > 0.05) between total task time and SUS scores for

more promoted than Website B. In spite of the above results, the third hypothesis (H3) has to be rejected.
small mobile screen. Users expect graphical pages at first and feel magically attracted by them (Groth and Haslwanter 2015), but when looking more closely at usability aspects, users act more confidently on responsive websites. Still, responsiveness does not by itself lead to a better evaluation or promotion of such websites, which leaves, luckily, a lot of room for further research. The challenge will be to develop not only a usable responsive website, but a responsive website that fascinates visitors and stimulates promotion. How this may be done, and which aspects are valued more than others in encouraging users to do so, remains unclear.
Nevertheless, some interesting aspects could be identified in our pre- and post-test questionnaires that seem noteworthy for further research. (1) Users voice, feel, and evaluate themselves as being much more proficient when using a smartphone compared to using a desktop computer. This results in a very straightforward and courageous behavior of simply trying out how everything works. Although we have addressed the gap in user knowledge before, it seems that users counter their knowledge gap with a feeling of "being more in control". The challenge here is that developers and tourism service providers are confronted with a very confident and proficient target group with high demands regarding orientation, information quality and usability, compared to the desktop environment. (2) Within our experimental setting, users could sit back and relax on a chair and try to solve our tasks undisturbed. This, of course, is far from a realistic setting, especially when thinking about mobile use cases. With the rise of new apps that help monitor user activity on smartphones, it would be very insightful to study in-the-field applications and scenarios, with users in the middle of a city or commuting on a bus or metro, to better understand the implications of responsive design approaches, as they especially focus on one-finger touch-and-point navigation rather than mere search input handling. Here we expect a more significant divergence between the two approaches than in our experimental setting. (3) Finally, our user group may cover the most IT-savvy generation, but it leaves out the much larger demographic groups of Generation X and the Baby Boomers. Especially the latter is of increasing interest from a touristic point of view, being a generation with enough financial background to travel regularly, staying healthy, and with an unbound curiosity to adopt new technologies and make them their own.
Behavioral studies naturally come with limitations that need to be addressed. First, it may seem questionable to compare two different destination websites according to usability metrics. Within our experiment, the focus has been solely on how user behavior differs between responsive and adaptive websites when searching for tourism information. A comparative point of view regarding one website being 'better' than the other in terms of, e.g., navigational structure, information and image quality, or loading times has not been the objective of this study. Nevertheless, comparative statistical analysis has been conducted, as all test persons remained the same during the 2-week period. Second, it can be argued that performance measures and implications towards efficiency are individual to each website and hence not comparable by nature. This is basically correct, although within our study a combined look at task time and page views has been applied. As reported, the lower numbers for Website B were achieved through use of the search functionality within the website, as users were, after completing Task 1, already familiar with the rather bulky design of Website B, and therefore frustrated by scrolling the long website with their fingers. This user behavior could not be observed with the responsive Website A at all; in fact, just the opposite behavior was observed: users made use of the website's navigation and searched along the website's structure and navigation in order to complete the task. It may be concluded that learning effects did occur on Website B, namely to circumvent navigational features. Interestingly enough, Website A was not able to capitalize on such learning effects, although, as reported, a faster browsing behavior could be observed. Efficiency has been interpreted along these dimensions, and not merely as the performance of users in achieving their task. Third, it should be noted that with UsERA (Inversini et al. 2011) an established and useful measure to assess usability on tourism websites already exists. Nevertheless, this concept has been deliberately set aside for two reasons: (1) due to the highly competitive environment among the Tyrolean tourism destinations, access to log files would be interesting, but very restricted; (2) in order to assess a user's behavior when looking for tourism information on mobile websites, log file analysis and risk assessment fit less when thinking about users and more when optimizing for information and service providers. Following this notion, a tourism information provider's strategic perspective on implementing RWD has not been tackled. This rather novel focus has been taken on by Gibbs and Gretzel (2015) and would, in combination with results from user behavior studies like ours, provide tremendous insights regarding the feasibility, return on investment, innovativeness, and impact of RWD in the tourism domain.
Concluding, the absence of a universal mobile usability framework became painfully apparent. All applied heuristics, established as they are, became questionable and even antithetic when observing users and their tourism information search behavior, especially in the context of mobile devices. Specifically, 'efficiency' provides rather weak insights into how users utilize their phone in this regard, even more so when using a responsive website. Mobile user behavior may be considered the younger sister of desktop behavior, but understanding her character proves quite contradictory and challenging.
References
Adukaite A, Inversini A, Cantoni L (2013) Examining user experience of cruise online search funnel. In:
Marcus A (ed) Design, user experience, and usability: web, mobile, and product design; second
international conference, DUXU 2013, held as part of HCI International 2013, Las Vegas, NV,
USA, July 21–26, 2013; proceedings, part IV, vol 8015. Springer, Berlin, pp 163–172
Bader A, Baldauf M, Leinert S, Fleck M, Liebrich A (2012) Mobile tourism services and technology
acceptance in a mature domestic tourism market: the case of Switzerland. In: Fuchs M, Ricci F,
Cantoni L (eds) Information and communication technologies in tourism 2012. Springer Vienna,
Vienna, pp 296–307
Bahadir D, Yumusak N, Arsoy S (2013) Guided-based usability evaluation on mobile websites. http://
www.thinkmind.org/index.php?view=article&articleid=iciw_2013_9_20_20212
Baloglu S (2000) A path analytic model of visitation intention involving information sources, socio-
psychological motivations, and destination image. J Travel Tourism Marketing 8(3):81–90. doi:10.
1300/J073v08n03_05
Bangor A, Kortum P, Miller J (2009) Determining what individual SUS scores mean: adding an adjective
rating scale. J Usabil Studies 4(3):114–123
Bohyun K (2013) Responsive web design, discoverability, and mobile challenge. Library technology
reports:29–39
Bortenschlager M, Häusler E, Schwaiger W, Egger R, Jooss M (2010) Evaluation of the concept of early
acceptance tests for touristic mobile applications. In: Gretzel U, Law R, Fuchs M (eds) Information
and communication technologies in tourism 2010. Springer Vienna, Vienna, pp 149–158
Brau H, Diefenbach S, Hassenzahl M, Koller F, Peissner M, Röse K (eds) (2008) Usability Professionals
2008. German Chapter der Usability Professionals’ Association, Stuttgart
Brooke J (1996a) SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA,
McClelland IL (eds) Usability evaluation in industry. Taylor and Francis, London, pp 189–194
Brooke J (1996) SUS: a quick and dirty usability scale. In: Jordan PW (ed) Usability evaluation in
industry: based on the International Seminar Usability Evaluation in Industry that was held at
Eindhoven, The Netherlands, on 14 and 15 September 1994. Taylor & Francis, London
Budiu R, Nielsen J (2010) Usability of Mobile Websites: 85 design guidelines for improving access to
web-based content and services through mobile devices
Burby J, Brown A (2007) Web analytics definitions. Web Analytics Association, Washington DC
Champeon S (2003) Progressive Enhancement and the Future of Web Design. http://www.hesketh.com/
thought-leadership/our-publications/progressive-enhancement-and-future-web-design. Accessed 18
June 2014
Chen JS, Gursoy D (2000) Cross-cultural comparison of the information sources used by first-time and
repeat travelers and its marketing implications. Int J Hospit Manage 19(2):191–203. doi:10.1016/
S0278-4319(00)00013-X
Chen H, Houston AL, Sewell RR, Schatz BR (1998) Internet Browsing and searching: user evaluations of
category map and concept space techniques. J Am Soc Inf Sci 49(7):582–603
Chung W (2006) Studying information seeking on the non-English Web: an experiment on a Spanish
business Web portal. Int J Hum Comput Stud 64(9):811–829. doi:10.1016/j.ijhcs.2006.04.009
Coursaris CK, Kim DJ (2011) A meta-analytical review of empirical mobile usability studies. J Usabil Stud 6(3):117–171
Eriksson N, Strandvik P (2009) Possible determinants affecting the use of mobile tourism services. In: Filipe J, Obaidat MS (eds) e-Business and telecommunications: international conference, ICETE 2008, Porto, Portugal, July 26–29, 2008, revised selected papers, vol 48. Springer, Berlin Heidelberg, pp 61–73
Fidel R, Davies RK, Douglass MH, Holder JK, Hopkins CJ (1999) A visit to the information mall: web
searching behavior of high school students. J Am Soc Inform Sci 50:24–37
Fodness D, Murray B (1999) A model of tourist information search behavior. J Travel Res
37(3):220–230. doi:10.1177/004728759903700302
Fox R (2012) Being responsive. OCLC Syst Serv 28(3):119–125. doi:10.1108/10650751211262100
Frain B (2012) Responsive web design with HTML5 and CSS3. Packt Publishing, Limited
Garcia A, Torre I, Linaza MT (2013) Mobile social travel recommender system. In: Xiang Z, Tussyadiah
I (eds) Information and communication technologies in tourism 2014. Springer International
Publishing, Cham, pp 3–16
Gardner BS (2011) Responsive web design: enriching the user experience: connectivity and the user
experience:13–19
Gibbs C, Gretzel U (2015) Drivers of responsive website design innovation by destination marketing
organizations. In: Tussyadiah I, Inversini A (eds) Information and communication technologies in
tourism 2015. Springer International Publishing, Cham, pp 581–592
Gretzel U, Fesenmaier DR, O’Leary JT (2006) The transformation of consumer behaviour. In: Buhalis D,
Costa C (eds) Tourism business frontiers: consumers, products and industry. Elsevier Butterworth-
Heinemann, Amsterdam, pp 9–18
Groth A, Haslwanter D (2015) Perceived usability, attractiveness and intuitiveness of responsive mobile
tourism websites: a user experience study. In: Tussyadiah I, Inversini A (eds) Information and
communication technologies in tourism 2015. Springer International Publishing, Cham, pp 593–606
Grubbs FE (1969) Procedures for detecting outlying observations in samples. Technometrics 11(1):1–21
Not E, Venturini A (2013) Discovering functional requirements and usability problems for a mobile
tourism guide through context-based log analysis. In: Cantoni L, Xiang Z (eds) Information and
communication technologies in tourism 2013. Springer, Heidelberg, pp 12–23
Pan B, Fesenmaier DR (2006) Online information search. Ann Tourism Res 33(3):809–832. doi:10.1016/
j.annals.2006.03.006
Pan B, MacLaurin T, Crotts JC (2007) Travel blogs and the implications for destination marketing.
J Travel Res 46(1):35–45. doi:10.1177/0047287507302378
Raptis D, Tselios N, Kjeldskov J, Skov MB (2013) Does size matter? In: Rohs M, Schmidt A, Ashbrook
D, Rukzio E (eds) Proceedings of the 15th international conference on Human-computer interaction
with mobile devices and services. ACM, p 127
Rasinger J, Fuchs M, Beer T, Höpken W (2009) Building a mobile tourist guide based on tourists’ on-site
information needs. Tourism Anal 14(4):483–502. doi:10.3727/108354209X12596287114255
Reichheld FF (2003) The one number you need to grow. Harvard Bus Rev 81(12):46–55
Ricci F (2010) Mobile recommender systems. Inform Technol Tourism 12(3):205–231. doi:10.3727/
109830511X12978702284390
Rubin J, Chisnell D (2008) Handbook of usability testing: How to plan, design, and conduct effective
tests, 2nd edn. Wiley Pub, Indianapolis
Sauro J, Lewis JR (2010) Average task times in usability tests: what to report? In: Mynatt E, Schoner D, Fitzpatrick G, Hudson S, Edwards K, Rodden T (eds) Proceedings of the 28th international conference on human factors in computing systems (CHI 2010). ACM, p 2347
Sauro J, Lewis JR (2012) Quantifying the user experience: Practical statistics for user research. Elsevier
Shneiderman B (2000) Universal usability. Commun ACM 43(5):84–91. doi:10.1145/332833.332843
Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: undisclosed flexibility in data
collection and analysis allows presenting anything as significant. Psychol Sci 22(11):1359–1366.
doi:10.1177/0956797611417632
Smith SL, Mosier JN (1986) Guidelines for designing user interface software. MITRE Corporation, Bedford
Snepenger D, Snepenger M (1993) Information search by pleasure travelers. In: Khan MA, Olsen MD,
Var T (eds) VNR’s encyclopedia of hospitality and tourism. J. Wiley, New York, pp 830–835
Tullis T, Albert B (2013) Measuring the user experience: collecting, analyzing, and presenting usability
metrics, second edition, 2nd edn. Morgan Kaufmann, Waltham, Mass
Tussyadiah I (2013) When cell phones become travel buddies: social attribution to mobile phones in
travel. In: Cantoni L, Xiang Z (eds) Information and communication technologies in tourism 2013.
Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 82–93
Vogt CA, Fesenmaier DR (1998) Expanding the functional information search model. Ann Tourism Res
25(3):551–578. doi:10.1016/S0160-7383(98)00010-3
Wang Y, Fesenmaier DR (2004) Towards understanding members’ general participation in and active
contribution to an online travel community. Tour Manag 25(6):709–722. doi:10.1016/j.tourman.
2003.09.011
Wang D, Fesenmaier DR (2013) Transforming the travel experience: the use of smartphones for travel.
In: Cantoni L, Xiang Z (eds) Information and communication technologies in tourism 2013.
Springer, Berlin, pp 58–69
Wang D, Xiang Z, Fesenmaier DR (2014) Adapting to the mobile world: a model of smartphone use. Ann
Tourism Res 8:11–26. doi:10.1016/j.annals.2014.04.008