Inf Technol Tourism

DOI 10.1007/s40558-015-0041-0

ORIGINAL RESEARCH

Efficiency, effectiveness, and satisfaction of responsive mobile tourism websites: a mobile usability study

Aleksander Groth1 · Daniel Haslwanter1

Received: 2 March 2015 / Revised: 17 November 2015 / Accepted: 20 November 2015


© Springer-Verlag Berlin Heidelberg 2015

Abstract Considering the high penetration of internet-enabled smartphones, it is not surprising that DMOs feel the need to adapt their websites and services for mobile devices, although these adaptations are very cost intensive. Responsive web design (RWD) offers an efficient and practicable solution to address the plethora of different mobile devices with countless varying characteristics (screen size, input, size, etc.). Moreover, the lack of evidence about the effects of websites employing RWD on mobile usability, as well as on tourism information search behavior, raises questions for both practitioners and researchers. With this paper we investigate the efficiency, effectiveness, and satisfaction when searching for and encountering tourism information on a smartphone on a responsive mobile tourism website compared to a mobile adaptive website. In an experiment, 20 participants interacted with two representative websites and fulfilled specific information retrieval tasks. Differences between the two websites could be derived, although they were not consistently significant, and well-applied heuristics failed to measure user behavior systematically. Overall, the responsively designed website performed better but failed to distinguish itself in terms of satisfaction and perceived usability.

Keywords Efficiency · Effectiveness · Satisfaction · Mobile usability · Responsive web design

✉ Aleksander Groth
aleksander.groth@mci.edu

Daniel Haslwanter
dan.haslwanter@mci4me.at

1 Department Management, Communication and IT (MCiT), Management Center Innsbruck, Innsbruck, Austria


1 Introduction

With increasing mobile-broadband subscription rates, from 268 million in the year
2007 to 2.1 billion in 2013, the market for smartphones and laptops grew by 40 %,
making it the most dynamic ICT market.1 Taking a closer look at the very saturated Austrian mobile market, in 2013 almost two out of three Austrians accessed the Internet while on the go or away from home or work via portable devices (e.g. laptop, tablet, or smartphone). Of these persons, 56 % made use of mobile phones or smartphones and one-third used portable computers.2 This tremendous shift in the pattern of
Internet usage demands the implementation of appropriate technology to design and present websites and their content to this still emerging and information-hungry mobile user group. Although technological innovations provide end-users with new
smartphones every year, there are still limitations and challenges for interfaces on
mobile devices due to inherent characteristics of such devices like smaller screen
sizes, non-traditional input methods, and navigational difficulties (Nah et al. 2005).
Being mobile or "on the move" easily conjures the image of a touristic context,
with people searching for context-sensitive information or posting, commenting and
liking everything they deem noteworthy to their friends back home through their
preferred social networks using their smartphones. Especially within the field of eTourism, research recognizes the importance of mobile technologies and mainly concentrates on four identifiable areas: (1) mobile-technology-oriented (e.g. Kawase et al. 2013), (2) system-oriented (e.g. Garcia et al. 2013), (3) business-oriented (e.g. Kasahara et al. 2013), and (4) user-oriented. Within
the latter, another three main strands of applied research can be identified: (1) user
acceptance and adoption (e.g. Bader et al. 2012; Bortenschlager et al. 2010), (2)
social context (Tussyadiah 2013), and (3) user data (Not and Venturini 2013). User-oriented research is mainly understood as quantitative data analysis with a focus on causal explanations of how users deal with, accept, and intend to use mobile technologies within their travel experiences. Although there are exceptions (e.g. Wang and Fesenmaier 2013), hardly any user-behavioral studies have been conducted in order to better understand how people actually "use" these mobile technologies and services in a touristic context, or whether they yield any benefit with regard to tourist information needs from an interface point of view at all.
Through the increasing penetration of the market with internet-enabled
smartphones, developers are challenged to deliver apps and services of superior
quality, in order to compete. Among many aspects of such quality, an important one
is usability (Nayebi et al. 2012). Under the paradigm of usability, developers
promote new web interface concepts like adaptive, responsive or even material
design in order to improve accessibility and user experience for non-experienced
users.

1 International Telecommunication Union, February 2013, http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2013-e.pdf.
2 Statistik Austria, 2013—3.5 million people go online shopping, http://www.statistik.at/web_en/dynamic/statistics/information_society/ict_usage_in_households/073632.


Especially for touristic service providers, the development of mobile apps and
websites poses a significant investment, which is often out-sourced to external web-marketing agencies, hence raising additional cost concerns regarding support, maintenance, and up-to-date information. As immersive, emotional and
usable websites are already quite well established by tourism service providers, the
most cost-efficient solution seems to be the provision of a classic desktop and an
additional adapted mobile, or in some cases, a responsive website. Although
destination management organizations (DMOs) start to recognize the importance of
being innovative in this regard (Gibbs and Gretzel 2015), such responsive websites still leave a rather contradictory impression on user experience parameters like attractiveness, intuitiveness, and perceived usability (Groth and Haslwanter 2015).
Within this paper we aim to contribute towards a better understanding of how users utilize mobile tourism information websites when encountering responsive or adaptive destination websites on a smartphone in order to complete various touristic information-search-related tasks. Through heuristic evaluation and user testing, the efficiency, effectiveness, and satisfaction towards these two types of mobile web interfaces are analyzed and compared.

2 Theoretical background

2.1 The role of smartphones in tourism information search

Research on the usage of smartphones within the tourism domain has generally
revolved around the (1) development of specific applications for mobile phones (e.g.
Rasinger et al. 2009), (2) acceptance and adoption of smartphones as an information
communication tool (e.g. Eriksson and Strandvik 2009, Kim et al. 2008), or (3) the
impact of smartphone use on various aspects on a tourist’s travel experience (e.g.
Kramer et al. 2007). Moreover, a tourist's smartphone enables interactions between the user and both the virtual and physical world, without any regard for the current location of use (Gretzel et al. 2006). Within the literature on human–computer interaction and tourism information systems and services, the focus has been on
(1) mobile recommender systems (e.g. Ricci 2010), (2) navigation systems (e.g.
Haid et al. 2008), (3) location-based systems (e.g. Kaasinen 2005), and naturally (4)
various design aspects and impacts within mobile tour guides (e.g. Grün et al. 2008).
Following a qualitative study by Wang et al. (2014), a tourist’s smartphone not
only plays an important role during the trip itself, but also impacts the whole touristic experience, changing a tourist's travel activities at all three stages of a trip: pre-trip travel planning, en-route activities, and after-trip activities. In their
study, respondents referred to an improved ease-of-use when utilizing their
smartphone for planning activities, as well as their smartphone being the most
convenient solution when searching for tourism information at the destination,
resulting in increased flexibility during the actual trip. While "the smartphone appears to be an effective and handy tool to search for information regarding transportation, accommodation, dining, things to do during trips, travel ideas, and deals both before and during trips" (p. 18), perceived convenience and ease-of-use were the top response when respondents were asked for a rationale for their smartphone use.
With regard to the context of mobile information search, Kellar et al. (2007) distinguish three behavioral patterns when utilizing one's smartphone: (1) information-seeking (fact-finding, gathering and browsing information), (2) action-support (in-the-moment and planning), and (3) information-exchange (transaction and communication). These general behavioral patterns map onto a touristic context as (1) information search (e.g. for restaurants, deals), (2) facilitation (e.g. navigation during a trip, checking the weather), and (3) communication (phone calls, logging in to Facebook). A further context of entertainment (e.g. taking and sharing photos, playing games, listening to music) is added, although this context relates less to mobile tourism information search and more to a search for distraction or killing time (Wang et al. 2014).
Within the extensive research on tourism information search behavior, several
streams of literature can be identified. Firstly, it is commonly understood that people
basically search for information within their (1) internal resources, which are
derived and retrieved from previous experiences and past search results (Chen and
Gursoy 2000). This knowledge of a destination affects information search behavior
and consequently decision-making (e.g. Gursoy 2003). In addition, (2) external
information sources like destination-specific literature, family and friends, media,
and travel agencies (Snepenger and Snepenger 1993), as well as recommendations
through professional advice, advertisements, word-of-mouth, and non-tourism
movies and books are distinguished (Baloglu 2000). Secondly, tourism information
search is considered from a process perspective, providing various models towards
explaining and predicting information search behavior (e.g. Vogt and Fesenmaier
1998; Fodness and Murray 1999). Thirdly, with the rise and importance of the
Internet, literature has focused on specifics of online search patterns and the overall
search process when searching for tourism information online (e.g. Mitsche 2005;
Pan and Fesenmaier 2006). As the Internet further matures, developments and deviations through the introduction and inclusion of social media (e.g. Pan et al. 2007) and virtual travel communities (e.g. Wang and Fesenmaier 2004) into this search process have been studied. Fourthly, and most relevant for this study, research in the field of search strategies distinguishes between searching via keywords (e.g. Chen et al. 1998), via search engines (e.g. Hawk and Wang 1999), via browsing the Internet (e.g. Chung 2006), via utilizing sub-directories (e.g. Nachmias and Gilad 2002), and via visiting known websites (e.g. Fidel et al. 1999).
While acknowledging that search behavior has been analyzed with regard to user experience (e.g. Adukaite et al. 2013), there is still little understanding of how a mobile user interface and the representation of tourism information on smartphones are actually utilized by users, and how their search behavior, in terms of efficiency, effectiveness, and satisfaction, is influenced.
Furthermore, Ho et al. (2012) recognize the importance of an effective tourism
information search, especially towards a better understanding of a tourist’s search
behavioral characteristics, as these are not only significant in identifying and
maintaining a strong position within a competitive e-commerce environment, but
also serve as a basis for further improving mobile interfaces and search functionalities of mobile tourism information applications. Hence, the difficulties that are encountered when tourists utilize such systems through online representations need to be fully understood in order to improve accessibility and usability, as well as efficiency, effectiveness, and satisfaction.

2.2 Usability

Within ISO 9241-11, ‘Usability’ is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". In more detail, effectiveness addresses the
accuracy and completeness of how users achieve goals, efficiency the resources
expended when achieving this goal, and satisfaction is defined as a user’s comfort
with and positive attitude towards the use of the system.
Nielsen (2012) defines ‘Usability’ as a quality attribute used to assess the ease of use of system interfaces. The term itself also refers to methods for enhancing ease of use during the design process. Furthermore, ‘Usability’ can be defined through five components strongly contributing to overall product quality: learnability, efficiency, memorability, errors, and satisfaction. A related concept, utility, refers to a design's functionality and investigates whether the system actually fulfills a user's needs.
Usability itself should be approached from multiple vantage points in order to
become sensitized to the various aspects and elements that may have an impact on
the usage of a system. A study by Hertzum (2010) identified six images
(perspectives) of usability in order to generate complementary and competing
insights on the usability of systems: universal, situational, perceived, hedonic,
organizational, and cultural usability. These six images are neither assumed to form an exhaustive set of usability images, nor are they mutually exclusive. They are interwoven points of view with blurred borders, providing a good overview of the variety of issues that have to be genuinely understood in order to understand the usability of a system.
Shneiderman (2000, p. 85) defines universal usability as "having more than 90 % of all households as successful users of information and communication services at least once a week". One challenge of today's information and communication
services is to provide functionalities that are accessible and usable for a broad
audience of unskilled users. Some older technologies like postal services,
telephones, and televisions have reached this goal of universal usability. Nevertheless, especially computing technologies are still too difficult to use for a large
number of people. In order to achieve universal usability, three major challenges
can be identified for web-based and other services: (1) technology variety, (2) user
diversity, and (3) gaps in user knowledge.
Technology variety addresses the innate problem of supporting a wide range of
hardware, software, and network access. Modern computing and network services
have to remain usable across a range of very different software technologies, such as
operating systems and protocols, or varying processor speeds, screen sizes, and
network bandwidths. The co-existence of users with vastly different network connections, like users who continue to use older smartphones while others upgrade to newer, faster, and more capable devices, poses a major challenge in this
area (Hertzum 2010). User diversity describes the existence of users with different
skills, knowledge, age, gender, disability, disabling conditions, literacy, culture, and
income (Shneiderman 2000). Gaps in user knowledge identify the divergence
between what users know and what they need to know in order to make use of a
service (Shneiderman 2000). Successful approaches to minimize those gaps in
knowledge include the use of familiar metaphors and an inclusive design in
combination with the allocation of customer service, online help and training, as
well as supportive user communities (Hertzum 2010).
Specifically within the mobile technology sector, these three challenges become
even more crucial as a wider range of people own and use their smartphones for a
variety of online services and functionalities, either via apps or browsing. Mobile technology across smartphones is very difficult to compare and follows a highly innovation-driven release policy on an annual basis, outdating smartphones very
quickly. In addition, smartphones are already marketed and offered for ‘everybody’,
regardless of social status or income, which directly transfers over to the last
challenge of how users inform themselves about the usage of their phone. Gaps in
user knowledge may also be interpreted as users being structurally uninformed on
how their devices actually work, or how apps should be set up and used properly.
Official instructions are not delivered as a physical manual anymore, so people start experimenting and learning from their own experience, or through the advice of their peers, which results in very different, incomparable, and unpredictable use behaviors.
Therefore, strategies to cope with these challenges in order to achieve universal
usability, even for mobile websites or applications, remain mostly in the agreement
and adherence to general guidelines and standards (Hertzum 2010). An example for
such a universal guideline is defined as follows: "If menu selection is accomplished by pointing, as on touch displays, design the acceptable area for pointing to be as large as consistently possible, including at least the area of the displayed option label plus a half-character distance around the label" (Smith and Mosier 1986,
p. 230). This guideline stands as a good representation of the above-mentioned "universality", as it applies to all menu items that are selected by pointing, regardless of the user herself, her tasks, and other factors attributing towards the specific context of use, be it mobile or desktop (Hertzum 2010).

2.3 Mobile usability and guidelines

As already hinted above, the inadequacy of a universal approach to usability becomes apparent when transferring and applying universal guidelines to the mobile
context. Although mobile usability is recognized among scholars as an important
dimension to focus and research on, there is so far no accepted universal mobile
usability framework at one's disposal. Suggestions in this direction aim at a specific context of use within the context of mobility (e.g. user, environment, technology, and task/activity), all pointing to various well-applied usability dimensions, like effectiveness, efficiency, satisfaction, usefulness, utility, etc. (Coursaris and Kim
2011). In their meta-analytical review of empirical mobile usability studies, Coursaris and Kim analyzed all notable studies in this field and identified three, not surprisingly, core constructs that have been most researched in this area: Efficiency, Effectiveness, and Satisfaction.
Mobile phones do have a variety of advanced functionalities and features, but usability issues remain challenging. These advances in mobile
technology have been the accelerator for the development of a wide range of
applications that can be used by people when travelling or generally on-the-go.
However, one aspect that is still overlooked by many developers is the context of
user interaction. Users want to fully use and utilize their devices wherever and
whenever they are. Usability and user experience have a critical impact on the
success of any mobile website or application in this special context of mobility. This
context comes along with small screen sizes, limited connectivity and different data
entry modes, as well as high power consumption rates (Harrison et al. 2013).
In a study by Budiu and Nielsen (2010) on mobile user experience, the overall evaluation was significantly inferior compared to the usability of regular websites. The average success rate of given tasks on mobile websites was only 59 %, substantially lower than the success rate of about 80 % for websites on a regular PC. The main identified problems of mobile usability are:

• Small screens The physical characteristics of mobile devices imply that there are
fewer visible options at any given time. Users therefore rely more on their short-
term memory to build an understanding of the overall information space. This
has negative consequences on the overall interaction with the device.
• Awkward input Input paradigms differ between desktop computers and mobile devices. Operating graphical interface widgets without a mouse, especially when typing, or using menus and buttons with one's fingers, takes longer and is more error-prone.
• Download delays Mobile bandwidth often suffers from slow or unstable connections. These delays lead to longer page-loading times.
• Mis-designed sites Most websites are still optimized and tailored for desktop
usability. As a result, they do not adhere to any guidelines of mobile usability.

On a more practical and business-oriented level, Nielsen and Budiu (2013) compared conversion rates on several e-commerce websites. Conversion rate in this context is defined as "the percentage of visiting users who end up taking a desired action" (p. IX). According to their results, these conversion rates differed dramatically, depending on the type of device used. At 3.5 %, desktop computers showed a significantly higher rate than mobile phones with only 1.4 %. Two possible
explanations are proposed: (1) the mobile user experience must be horrible, as mobile
sales could be 2.5 times higher when being on par with desktop websites, and (2) it is
assumed that there is no commitment by the provider to invest in mobile design as
mobile users do not account for very much overall revenue (Nielsen and Budiu 2013).
Traditional usability methods and models that are employed for desktop computers
cannot be simply transferred to a mobile environment owing to the high degree of
mobile specifications (Bahadir et al. 2013). The particular characteristics of these
devices demand an alternative and careful approach when evaluating usability and
tailored heuristics for mobile devices have to be applied. An adapted and tested
framework for the evaluation of mobile usability would help designers find usability problems more efficiently and would eventually lead to the design of better solutions (Heo et al. 2009).

2.4 Responsive web design

Responsive web design (RWD) can be seen as a methodology introduced to help realize the vision of a "One Web" (Gardner 2011). To achieve this, RWD aims to
combine the capabilities of HTML5 and CSS3 with a new design paradigm for
website architectures, which are able to flexibly adapt to different screen sizes
(Groth and Haslwanter 2015). This requires a change within all current approaches
of web design and transforms static websites into responsive, adjustable and fluid
layouts (Frain 2012). Marcotte (2011, p. 8) emphasizes this need for an answer to the emerging number of mobile devices and the shift in current user behavior: "rather than creating disconnected designs, each tailored to a particular device or browser, we should instead treat them as facets of the same experience." Hence,
through a responsive design approach, a web page adjusts itself in response to the
respective screen size of a device. This results in a layout that handles elements
much more flexibly and rearranges them automatically (Bohyun 2013).
Responsive design changes the practice of web development. The approach drifts away from designing a fully formed site targeting all needs of a perfect desktop experience. Responsive design forces content providers to consider carefully what is really essential about their content and should be delivered to visitors. Following this change in paradigm, the concept starts with providing minimal services and content in an effective way on the smallest portable device (Fox 2012). Then, functionalities and
components are added to devices with larger screen dimensions and different sets of
input/output devices. This approach does not require designers to compile a list and rule-set for every individual device, browser, or mobile operating system. Instead, responsive
design uses categories, which are derived from an examination of the typical
characteristics that all devices have in common. This examination covers all devices,
from the average desktop computer to the smallest cellphone (Fox 2012).
Champeon (2003) elaborates on this approach and summarizes his model under the
term ‘‘Progressive enhancement’’. It encourages web designers to focus on
accessibility, semantic HTML markup, external style sheets and scripting technolo-
gies. Progressive enhancement makes use of existing and new web technologies,
which allow everyone to access basic content, and functionalities of a web page,
without special requirements for browser technology or Internet connection. In
addition, more advanced browsers or software and Internet connections with higher
bandwidth are serviced with an enhanced version of the website.
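As a rough, modern CSS illustration of this layered idea (the feature query and selectors are assumptions for illustration, not part of Champeon's original proposal), basic styles are served to every browser while a richer layout is gated behind a capability check:

```css
/* Base layer: simple linear styles that any browser can render. */
.article {
  margin: 0 auto;
  padding: 1em;
}

/* Enhancement layer: only browsers that understand feature queries
   and grid layout receive the multi-column presentation. */
@supports (display: grid) {
  .article {
    display: grid;
    grid-template-columns: 2fr 1fr; /* main content plus sidebar */
    gap: 1em;
  }
}
```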
Marcotte (2011) clarifies responsive design to be a composition of three distinct
parts: (1) a flexible grid, (2) flexible images or more correctly, images that work in a
flexible context, and (3) media queries that optimize the design for different viewing
contexts (devices), and spot-fix bugs that occur at different resolution ranges.
Flexible grids change pixel-based values of a web design layout into relative
proportional terms. This results in a grid that can resize itself similar to the viewport
of a device. The original proportions of the design are retained and not distorted through this approach. Resizing in that sense means both expanding and contracting.
Flexible images pertain to a flexible layout, which itself is mainly based on
percentages when dealing with images and graphical elements. Whenever such
elements are not properly prepared before they are uploaded to a website, the result
can be an image or graphic that overflows its own container and breaks the viewport
of the device. Responsive web design addresses this issue with the establishment of
CSS rules and guidelines. The easiest way is to restrict elements to 100 % of the width or height of their container, using the max-width property. That means that every element inside a predefined flexible container with such a rule can be at most the width or height of this container and will be automatically scaled to the container's size. If a flexible container resizes itself, which implies that the images are enlarged or shrunken, the image's aspect ratio remains untouched. Media queries, finally, address a problem that might arise out of the usage of flexible grids and layouts and that results in possible usability issues. Under certain conditions, changes in layout could compromise readability and detract from the user experience. For example, a navigation menu could be torn apart into two lines because of the unexpectedly shrinking width of its column. A proper
solution would be the use of CSS3’s media queries, which allow browsers to serve
different styles for different viewing contexts. This adds the ability to target media
features such as screen and device width and orientation. Typical examples for such
queries are that a smartphone would have less than 570 pixels width or that a tablet
device can support orientation. Such categories, which effectively adjust the content and layout to the context of the device, help ensure that the user has a better and richer viewing experience (Gardner 2011).
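Marcotte's three ingredients can be condensed into a short CSS sketch. The selectors are illustrative, the 570-pixel breakpoint is taken from the smartphone example mentioned above, and the percentage widths follow the target ÷ context conversion from pixel layouts into proportions (e.g. 600 px ÷ 960 px = 62.5 %):

```css
/* (1) Flexible grid: widths as proportions of their context, not pixels. */
.main    { width: 62.5%;  float: left;  }  /* 600px / 960px */
.sidebar { width: 31.25%; float: right; }  /* 300px / 960px */

/* (2) Flexible images: never overflow the flexible container;
   the aspect ratio is preserved while scaling. */
img { max-width: 100%; height: auto; }

/* (3) Media query: rearrange the layout below a smartphone-class width. */
@media screen and (max-width: 570px) {
  .main, .sidebar { width: 100%; float: none; }
}
```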
RWD provides a solution to the challenge of maintaining and updating more than
one set of content for different types of websites. Another major improvement is that there is no need to additionally promote a website as ‘mobile’, since responsive websites recognize the use context (mobile or desktop) and automatically adjust their layout to the target device. Users will barely realize that they are using a
responsive website, because all the information that is present on a full desktop site
is also available on the responsive version of it. All features of the full desktop site
that are supported by the device can also be used and therefore users will benefit
from an optimized mobile experience, while being able to still access the whole
range of content and services (Bohyun 2013). RWD aims to create one singular
website that is available and accessible to any user and any sort of device, therefore
establishing consistency in content delivery across a variety of platforms.
A RWD approach does not per se guarantee a satisfactory mobile experience.
Examples for unsuccessful implementations of responsive web design can be seen
when the conversion between full site and responsive site does not include
adjustments in text and page structure. Often responsive websites result in a long
page filled with too many lines of text, navigation items, and links. A positive
mobile experience requires more than simply making elements flow into a long
strip. With the restricted space on mobile screens, there has to be an alternative way of presenting content in a streamlined and uncluttered manner while focusing on the most important items that mobile users want to access (Bohyun 2013).


Adaptive mobile websites mostly follow responsive design paradigms like progressive enhancement, but distinguish themselves by providing fixed, pre-designed layouts for various screen sizes. When a user visits such a prepared website, the device is identified through the web browser and the adjusted design is delivered. "Adaptive" in this sense can be understood as pre-defined for different screen resolutions (Gustafson et al. 2013).
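The contrast with the fluid responsive approach can be sketched as follows: an adaptive site switches between a small number of fixed, pre-designed layouts at pre-defined resolutions (all breakpoint values and selectors here are illustrative assumptions; in practice the layout choice may also be made server-side after identifying the device through the browser):

```css
/* Adaptive approach: fixed-width layouts, pre-defined per resolution class. */
.page { width: 960px; margin: 0 auto; }  /* default desktop layout */

@media screen and (max-width: 768px) {
  .page { width: 720px; }  /* tablet layout, still a fixed width */
}

@media screen and (max-width: 480px) {
  .page { width: 320px; }  /* phone layout, still a fixed width */
}
```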

3 Methodology

"Every usability evaluation method has its advantages and disadvantages. Some are difficult to apply, and others are dependent on the measurers' opinions or instruments. In addition to these challenges, mobile devices and applications change very quickly, and updated methods of usability evaluation and measurement are required on an ongoing basis" (Nayebi et al. 2012, p. 1).
Against the described background, a research setting has been designed comprising the above-mentioned mobile usability heuristics, which not only apply to the context of mobile devices and their use when searching for tourism information, but should also prove to be meaningful and, most of all, comparable. Hence we focused on the most applied measures in mobile usability testing: efficiency, effectiveness, and satisfaction (Harrison et al. 2013).
The following research hypotheses have been formulated for the usability test:
H1: A responsive mobile touristic website is more efficient to use than a mobile touristic website.
H2: A responsive mobile touristic website is more effective to use than a mobile touristic website.
H3: A responsive mobile touristic website is more satisfying to use than a mobile touristic website.
The main goal of our usability experiment was to measure the influence and
effect of two different mobile design approaches on usability and the overall
performance of users, who were exposed to two different mobile touristic
websites—one applying a RWD-approach and one a mobile-approach with some
basic elements of RWD. The whole experiment included two sessions in which users performed a series of information-seeking tasks on a smartphone.

3.1 Selection of mobile websites

The usability experiment investigated two different tourism destination websites
during the period from 23 June to 12 July 2014. There were no significant changes
to these mobile websites within the timeframe of the evaluation. The first
website—further called Website A—http://www.tirol.at3 applies a very strict
implementation of the responsive design approach as described above. Strict here
means that many features of the website respond to smaller screen sizes, such as
changes in layout, a different navigation, and alternative naming of links and
headings. In general, the website complies strongly with the proposed guidelines
of RWD: the mobile version excludes the main background image as well as several
widgets such as the weather box, the picture gallery and the newsletter
registration box. These omissions keep the page short and therefore limit
extensive scrolling operations. In addition, some labels of the navigation menu
links differ between the desktop and smartphone versions. The booking pages also
reduce the number of pictures and limit the entries of accommodation entities to
their names and a brief description.

3 Tirol Werbung is a destination marketing organization, responsible for brand
building and awareness of the state of Tyrol. The website offers all important
destination-related information like events, local weather, tours for biking,
hiking, and skiing, and a booking functionality within the Tyrolean region.
As a second website—further called Website B—http://www.oetztal.com4 was
selected, which applies an adaptive mobile design approach. Elements change only
slightly, images stay persistent, content is not cut or reduced in length, and no
functionality is excluded. This results in a very long, vertical, column-like
website demanding intensive scrolling operations from the users. The main
navigation menu changes from a horizontal design with three main sections on the
desktop to a vertical menu with four main sections and subsections when viewed on
a smartphone. The additional menu item "Events" on the smartphone is incorporated
in the section "Current News" of the desktop version. There were no changes in
the naming of navigation menu links. The large number of menu items fills a long
vertical list that does not fit within the dimensions of one 4-inch iPhone
screen. As a result, the main menu requires intensive scrolling operations.

3.2 Selection of participants

Twenty persons (14 male and 6 female) aged between 16 and 29 were asked to
participate in the usability experiment. The sample size of 20 participants was
chosen to comply with the minimum number required to run an analysis of variance
(ANOVA) (Simmons et al. 2011).
As usability tests deal with smaller group sizes, a decision regarding the age of
the test group had to be made. Following several studies, 88 % of internet users
aged between 16 and 24 years use mobile devices to access the World Wide Web away
from home or work.5 All participants were asked to estimate their daily time
spent on a desktop computer and on a smartphone in hours, reporting an average of
3.2 h per day (SD = 2.78) on a computer and an average of 2.2 h per day
(SD = 1.62) on their smartphone.
All participants had a basic understanding of using smartphones and the mobile
Internet in order to deal with the required tasks. Participants had to evaluate
themselves on a scale from expert, intermediate to beginner regarding their
expertise with smartphones. Two rated themselves as experts, 17 as intermediate,
and one as a beginner. Nevertheless, all participants regularly used their
built-in web browser to surf the internet. All participants owned a smartphone,
with eight using Apple iOS and twelve using Google Android. In addition, all
users were familiar with Tirol and Ötztal as holiday destinations, but had never
visited either the desktop or the mobile website of the two DMOs before. As
their preferred system to look for touristic information online, 18 participants
named their desktop computer or laptop, and two their smartphone.

4 Ötztal Tourismus is responsible for marketing the Ötztal valley in Tyrol,
including its areas of Sölden and Obergurgl/Hochgurgl. The website offers all
important destination-related information like events, local weather, tours for
biking, hiking, and skiing, and a booking functionality.

5 Statistik Austria, 2013—3.5 million people go online shopping, http://www.statistik.at/web_en/dynamic/statistics/information_society/ict_usage_in_households/073632.

3.3 Selection of device and software

A decision had to be made regarding the smartphone used. In order to measure the
influence of an RWD approach, we opted against letting people use their own
smartphones, to enforce comparability between users. Especially within the
Android ecosphere, a plethora of different devices with different screen sizes
and (haptic) device buttons exists, which would make it difficult to avoid a
certain bias through learned error handling and navigation on a user's preferred
smartphone. Therefore, an iPhone 5s running iOS 7.1.1 was chosen: the iPhone
sports no additional buttons (only the home button) and all navigation is handled
solely via touch-screen interaction, which helps minimize this bias, as touch
interaction for navigation is common on all current smartphones. Android users
were given several minutes to familiarize themselves with the handling of the
device. As the web browser, the pre-installed Safari browser was used. All
participants were recorded on video (face and screen) and audio with Magitest
(http://www.magitest.com). The data was collected and analyzed with Microsoft
Excel 2013 and IBM SPSS Statistics 21.

3.4 Procedure

The experiment was split into two sessions, with the second session held 2 weeks
after the first.
The overall experiment was designed as an A/B test. A/B testing (also called
split testing) compares two versions of a website to identify which one performs
better from a user's point of view (Brau et al. 2008). A/B testing is a popular
method for comparing alternative designs of web pages. The users randomly work
with either the first or the second version of the deployed design alternatives
and are separated into Group A and Group B (Sauro and Lewis 2012). Predefined
criteria are then measured in order to compare the results of the two tested
groups. In the first session the group was divided equally, with Group A starting
on Website A and Group B on Website B. After a break of 2 weeks, the same persons
participated in the second part of the experiment with the websites flipped.
The experiment followed Rubin's outline for usability testing (Rubin and Chisnell
2008). In a brief introduction phase, the participants became familiar with the
topic and the setting of the experiment was described. Then a pre-questionnaire
about the participants' general travel behavior as well as their demographics was
answered. Afterwards all tasks had to be completed on the smartphone. The aim of
the tasks was to discover the content and functionalities of the two different
websites; all tasks were designed to be very similar in order to guarantee that
the results are comparable. Facial expressions and on-screen actions were
captured using a camera and a microphone in order to document the actions of the
users while they faced the tasks. After the participants finished all tasks, a
post-test questionnaire collected insights about their general use of smartphones
and desktop computers.

3.5 List of tasks

Each session comprised five tasks: four tourism information-seeking tasks and one
tourism action-oriented task. The set of tasks was the same in both sessions in
order to collect comparable results. Classifying the tasks along two dimensions,
difficulty level (easy, medium, difficult) and degree of scrolling (light,
medium, heavy), ensured that the tasks had varying levels of difficulty and that
assumptions could be made about the effects of scrolling on effectiveness and
efficiency. Participants were not familiar with the tasks or how to solve them
(Raptis et al. 2013). Four of our tasks were related to information search, as we
conceptualized this as the main activity when visiting tourism websites on a
smartphone. Finding information within a mobile website may seem trivial at
first, but poses a strikingly challenging task, especially when more detailed, or
hidden, information has to be searched for. Website structure and a user's web
orientation skills are put to the test; hence we focused on progressively more
detailed and complex information retrieval tasks, anticipating that learning
effects would occur while navigating the websites. For the action-oriented
Task 5, the websites' booking functionality was selected in order to measure how
users, after becoming familiar with the website, are able to handle this rather
complex action, which still poses difficulties for many users even on modern
desktop tourism websites.

• Task 1: Subscribe to the newsletter of the website. (easy, light scrolling)
• Task 2: Inform yourself about the Aqua Dome. Please note down the address and
phone number. (easy, light scrolling)
• Task 3: Inform yourself about the Hiking Tours in Tirol—‘‘Adlerweg’’/Inform
yourself about the Hiking Tours in Ötztal—‘‘Ötztal-Trek’’. Please note down,
how much elevation/how many kilometres the tour comprises. (medium, light
scrolling)
• Task 4: Inform yourself about the National Parks. What is the duration in hours
of the hiking tour to the Trelebitschsee/Frischmannhütte in the National Park
‘‘Hohe Tauern’’? (difficult, medium scrolling)
• Task 5: Please book a vacation on the website according to your own criteria,
  with a budget of 1500 €. Define your trip first using the following attributes:
  Date of Arrival/Departure, City/Village, Category, and Number of adults/children.
  (difficult, heavy scrolling)


3.6 Applied measures

In order to collect a multi-dimensional rating of the participants and to assess
the performance of the users, the following five measures were applied:

3.6.1 Task success time (efficiency)

Sauro and Lewis (2010) underline the importance of Time on Task (ToT) as a
powerful means of measuring the efficiency of users while performing tasks. ToT
captures how long a user needs to complete a task, in seconds or minutes,
calculated as the time elapsed between the start and the end of a task (Tullis
and Albert 2013). Within the experiment, ToT is not only employed to measure
performance, but also to monitor whether users become faster on consecutive
tasks.

3.6.2 Page views (efficiency)

The main measurement for efficiency besides ToT is the number of page views.
Burby and Brown (2007, p. 7) define page views as "the number of times a page (an
analyst-definable unit of content) was viewed." The measurement of page views was
aligned to this definition, and the observer of the experiment documented the
number of page views for each task.

3.6.3 Task success level (effectiveness)

Task success is understood as a universal measurement not requiring extensive
explanation or statistical analysis. When users fail to complete simple tasks,
this can be strong evidence that something needs to be fixed. The following
levels of task success from Tullis and Albert (2013) were applied: (1) no
problem: the user completed the task successfully without any difficulty or
inefficiency; (2) minor problem: the user completed the task successfully but
took a slight detour; one or two small mistakes were made but were recovered
quickly; (3) major problem: the user completed the task successfully but
struggled and/or took a major detour on the way to eventual successful
completion; and (4) failure/gave up: the user provided a wrong answer, gave up
before completing the task, or the moderator moved on to the next task before
successful completion.

3.6.4 Self-evaluation questionnaire (overall-usability)

All participants evaluated each version of the websites using the System
Usability Scale (SUS) questionnaire (Brooke 1996a, b). SUS comprises ten
questions (on a 5- or 7-point Likert scale) and yields a value between 0 and 100
(100 = perfect usability).
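As an illustration, the standard scoring scheme, assuming the common 5-point
version of the questionnaire, can be sketched as follows (a hypothetical helper,
not part of the study's analysis pipeline, which used Excel and SPSS):

```python
def sus_score(responses):
    """Score a single SUS questionnaire (standard 5-point version).

    `responses` holds the ten Likert answers (1-5) in question order.
    Odd-numbered items are positively worded and contribute (r - 1);
    even-numbered items are negatively worded and contribute (5 - r).
    The summed contributions are multiplied by 2.5 to yield 0-100.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```

For example, `sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])` returns `100.0`, the
best possible score, while answering every item with 3 yields `50.0`.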


3.6.5 Self-evaluation questionnaire (satisfaction)

The participants evaluated each version of the websites using the Net Promoter
Score (NPS) questionnaire (Reichheld 2003). NPS comprises a single question ("How
likely is it that you would recommend X to a friend?") on a 0–10 scale. Users are
then categorized as Promoters (score 9–10) and Detractors (score 0–6). NPS is
calculated as the percentage of Promoters minus the percentage of Detractors
(yielding a score between -100 and 100).
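The NPS arithmetic is simple enough to sketch directly (again a hypothetical
helper; the study computed these values with Excel and SPSS):

```python
def net_promoter_score(ratings):
    """Net Promoter Score for a list of 0-10 recommendation ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) enter the
    denominator only. The result lies between -100 and 100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n
```

With 4 promoters and 12 detractors among 20 respondents, the score is -40.0, the
value reported for Website A in Sect. 4.6.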

4 Data analysis and results

As the sample size of the usability study was smaller than 25, the geometric mean
was used to estimate the center of the population (Sauro and Lewis 2010). The
accepted error level (alpha) for the mean values of the following analyses is
5 %, corresponding to a 95 % confidence interval: the analysis is 95 % certain,
i.e. wrong in 5 % of cases (Tullis and Albert 2013). The following results should
not be interpreted as one website simply being better than the other; all results
were analyzed with regard to users' behavior on responsive versus adaptive
websites.
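For skewed task-time data with small samples, Sauro and Lewis's approach is to
compute the mean and confidence interval in log-space and back-transform. A
minimal sketch (the critical t-value is passed in rather than derived, to keep
the example dependency-free):

```python
import math
from statistics import mean, stdev

def geometric_mean_ci(times, t_crit):
    """Geometric mean of task times with a confidence interval.

    Times are log-transformed, the arithmetic mean and a t-based
    interval are computed in log-space, and both are exponentiated
    back to seconds. `t_crit` is the two-tailed critical t-value for
    n - 1 degrees of freedom (about 2.093 for n = 20 at alpha = .05).
    """
    logs = [math.log(t) for t in times]
    center, spread = mean(logs), stdev(logs)
    half_width = t_crit * spread / math.sqrt(len(times))
    return (math.exp(center),
            math.exp(center - half_width),
            math.exp(center + half_width))
```

Because the interval is symmetric only in log-space, the back-transformed bounds
are asymmetric around the geometric mean, as visible in Table 1.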

4.1 Time on task (efficiency)

The following tables and figures present the results of the time-on-task
measurement. The task times for this measurement include only successful task
times, as recommended by Tullis and Albert (2013). Furthermore, outliers in the
dataset were removed following Grubbs' test (Grubbs 1969), also called the ESD
(extreme studentized deviate) method.
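The outlier removal step can be sketched as a single-pass Grubbs test (the
critical value depends on n and alpha and is normally taken from a table or the
t-distribution; it is passed in here to keep the sketch dependency-free, e.g.
roughly 2.56 for n = 20 at a one-sided alpha of .05):

```python
from statistics import mean, stdev

def grubbs_outlier(data, g_crit):
    """Return the most extreme value if it is an outlier, else None.

    The Grubbs (ESD) statistic G is the largest absolute deviation
    from the sample mean divided by the sample standard deviation;
    the candidate is flagged when G exceeds the supplied critical
    value.
    """
    m, s = mean(data), stdev(data)
    candidate = max(data, key=lambda x: abs(x - m))
    g = abs(candidate - m) / s
    return candidate if g > g_crit else None
```

In practice the test is applied repeatedly, removing one flagged value at a time
and recomputing, until no further outlier is detected.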
Table 1 Mean time on task for Website A and Website B (with SD and 95 % confidence interval)

Task                                 Geometric mean   SD       Lower bound (95 %)   Upper bound (95 %)
Newsletter–Website A                 93.1             29.58    80.65                107.38
Newsletter–Website B                 145.5            40.12    126.89               166.81
Address Aquadome–Website A           85.9             44.73    69.24                106.67
Address Aquadome–Website B           51.1             36.09    40.68                64.1
Altitude Adlerweg–Website A          43.5             34.31    32.67                57.82
Altitude Ötztal-Trek–Website B       37.5             40.74    27.62                50.9
Duration Trelebitschsee–Website A    68.8             34.91    55.43                85.52
Duration Frischmannhütte–Website B   54.5             36.38    41.03                72.37
Booking–Website A                    134.3            68.03    108.22               166.56
Booking–Website B                    152              125.69   111.68               206.94

Fig. 1 Mean time on task, in seconds, for Website A and Website B (error bars
represent the 95 % confidence interval)

The data analysis in Table 1 shows that the largest difference between the two
websites was measured in the newsletter task. The disparity was very likely
caused by the overall first impression of Website B, where participants indicated
that they needed a certain time to understand the overall structure and idea of
the website and how it works. Consequently, participants were browsing, scrolling
and scanning through the website for a relatively long time until they discovered
the link for the newsletter subscription. A second reason is that Website B
requires several input fields to be filled out for the newsletter registration,
not all of which are mandatory; however, some participants remarked that they
were not aware of this and hence filled out every field.
Interestingly, participants were quicker on Website B than on Website A when
completing Tasks 2, 3 and 4. It could be observed that users who encountered
difficulties with Website B adapted and focused more on the search functionality
of the website instead of navigating through the menu. In Task 5, the booking
process took on average 18 s longer on Website B than on Website A. This was
caused by several obstacles during the booking process itself: the most difficult
problem on Website B was that participants who had entered their parameters were
sometimes—but not always—redirected to a second booking input mask and asked to
re-enter all their data. Overall, Website A had a more consistent and
straightforward booking process without major interferences (Fig. 1).

4.2 Page views (efficiency)

The amount of effort required to complete the tasks was measured by tracking all
pages visited (page views) while searching for the required information. The
total number of visited pages amounts to 23 for Website A and 19 for Website B.

Fig. 2 Mean number of page views per task for Website A and Website B (error
bars represent the 95 % confidence interval)

The results shown in Fig. 2 reveal only one significant difference in the number
of page views, within Task 2. After completing Task 1, users on Website A
continued to use the standard navigation and took their time browsing the website
with their thumbs. On Website B, users became frustrated much faster and started
using the search functionality as soon as they realized that they would not be
able to find the information quickly and simply through browsing. In addition,
when comparing task times and page views per task, users on Website A were on
average much faster browsing through the website, opening more pages in less time
compared to the tasks before, which hints at them quickly becoming familiar with
Website A's navigational structure.

4.3 Efficiency as a combination of task success and time

As efficiency was measured simply through analyzing all page views without taking
the tasks' duration into account, a second metric was applied. Efficiency can be
described as a combination of task success and time on task. Task success was
calculated as the percentage of successful tasks (no problems) and time on task
was calculated in minutes (Tullis and Albert 2013). The International
Organization for Standardization specifies in the common format for industry
reports (ISO/IEC 25062:2006) that the core measure of efficiency is the ratio of
the task completion rate to the mean time per task.

Fig. 3 Average number of tasks completed successfully per minute, for Websites A and B
In order to calculate the average efficiency, a variation was implemented: the
number of successfully completed tasks of each participant was divided by the
total time the participant spent on all tasks (successful and unsuccessful). The
result provides a rather straightforward measure of efficiency for all
participants, identified as the number of tasks completed per minute (Tullis and
Albert 2013).
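This ratio reduces to a one-liner; a sketch with hypothetical numbers, not taken
from the study's raw data:

```python
def tasks_per_minute(successful_tasks, total_seconds):
    """Efficiency as successfully completed tasks per minute: the
    count of tasks finished without problems divided by the total
    time spent on all tasks, successful or not.
    """
    return successful_tasks / (total_seconds / 60.0)
```

For example, a participant completing 3 tasks without problems in 5 minutes of
total task time scores `tasks_per_minute(3, 300)`, i.e. 0.6 tasks per minute,
in the range reported for both websites.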
The results in Fig. 3 show that participants were generally more efficient on
Website A. In particular, users completed on average 0.6 tasks per minute on
Website A (SD = 0.20) and 0.5 tasks per minute on Website B (SD = 0.21).

4.4 Task success rate (effectiveness)

Results for the levels of success are presented using a four-point scoring
method. Figure 4 shows the levels of success as frequencies for each task; the
percentages represent the share of users in each category or level (Tullis and
Albert 2013).

Fig. 4 Task completion status, by task, for Website A and Website B

The results in Fig. 4 for the levels of task completion on Website A show that
the booking task was the one where participants faced the most problems. Three
participants (15 %) were not able to complete the task successfully, two
participants (10 %) had major problems, four participants (20 %) had minor
problems, and only 11 participants (55 %) had no problems.
Results for the task completion levels of Website B show that negative results
were also present for the booking task: one participant (5 %) was not able to
complete the task, seven participants (35 %) had major problems, five
participants (25 %) had minor problems, and only seven participants (35 %) had no
problems at all. Another task of Website B where nearly every participant had at
least some problems was the newsletter registration task. One participant (5 %)
was not able to complete this task, four participants (20 %) had major problems,
nine participants (45 %) had minor problems, and only six participants (30 %) had
no problems.
Taking a closer look at Fig. 5, participants overall had many more problems with
the most difficult and most scrolling-intensive task on Website B than on Website
A. Although three participants were not able to complete this task on Website A,
overall more participants had major problems on Website B. One reason is that
participants needed much longer to find the booking functionality at all on
Website B and were very much irritated by a pop-up window that did not carry over
all booking criteria selected before. Hence, participants often had to close the
window and re-enter their booking criteria.

4.5 Self-evaluation questionnaires (perceived usability)

The System Usability Scale (SUS) was applied as the main indicator for the
perceived usability of the two mobile websites. The following mean SUS scores,
standard deviations and confidence intervals (alpha = 5 %) were measured for the
two versions of the websites. As the sample size of the usability study was
smaller than 25 participants, the geometric mean was used to estimate the mean
values of the different versions (Sauro and Lewis 2010).
Website A scored a geometric mean of 64.06 (SD = 19.97 and 95 % confidence
interval is 58.03–76.72) compared to Website B with 62.91 (SD = 19.29 and 95 %
confidence interval is 56.97–75.03).
When comparing the SUS scores for perceived usability in Table 2, there is only a
small difference between both websites. Participants rated Website A slightly
higher than Website B (a difference of 1.15). Compared with the adjective rating
scale from Bangor et al. (2009), both smartphone versions were rated in category
C—"Good".

Fig. 5 Task 5: booking, completion status for Website A and Website B


Table 2 SUS scores in relation to mobile website

            Geometric mean   Standard deviation   Lower bound (95 %)   Upper bound (95 %)
Website A   64.06            19.97                58.03                76.72
Website B   62.91            19.29                56.67                75.03

Fig. 6 Comparison of adjective ratings, acceptability scores, and school grading scales, in relation to the
average SUS score (Bangor et al. 2009)

Table 3 NPS scores in relation to mobile website

            NPS    Standard deviation   Mean (95 %)
Website A   -40    2.52                 6.0
Website B   -45    2.82                 5.4

Figure 6 puts this number into perspective. The average SUS score across a study
of 3500 surveys is about 70. That survey also covered average SUS scores with
respect to the applied interface type: classical web interfaces have an average
SUS score of 68.2, while cell phones reach only about 65.9 (Bangor et al. 2009).
This comparison may also be reflected in the results of our experiment, which
showed very similar tendencies and differences with respect to those assumptions.

4.6 Self-evaluation questionnaires (satisfaction)

To better understand the above findings on perceived usability, the Net Promoter
Score (NPS) along with standard deviations and means was measured for both
websites. Table 3 shows that Website A reached minus 40 (SD = 2.52, mean of 6.0)
and Website B reached minus 45 (SD = 2.82, mean of 5.4).
The average NPS for business companies and services is about plus five to ten.
Accordingly, results below zero imply that there are more detractors than
promoters for the tested product or service.6 Applying this framework to the
results of our experiment, no significant difference between both websites can be
found. The NPS values of minus 40 and minus 45 mean that there are 40 and 45 %
more detractors than promoters for the respective version of the website.
6 Logic, H. & LLC. Net Promoter Benchmarking—Net Promoter Community. Retrieved from http://www.netpromoter.com/why-net-promoter/compare/.


5 Discussion

H1: A responsive mobile touristic website is more efficient to use than a mobile
touristic website.
Comparing all mean task times between the two versions using an independent
samples t test or a Mann–Whitney U test (when data is not normally distributed)
leads to the following results:
Task 1 (Newsletter) resulted in a mean of 97.05 (SD = 29.58) for Website A and a
mean of 150.61 (SD = 40.42) for Website B (independent samples t test, t = -4.6,
p = 0.00). With p < 0.05 it can be concluded that the mean task times are
significantly different.
Task 2 (Aquadome) had no normal distribution in the dataset of Website B
(Shapiro–Wilk test of normality, p < 0.05), so the differences in the mean task
time had to be compared using the non-parametric Mann–Whitney U test. The test
results showed a mean of 94.63 (SD = 10.26) for Website A and a mean of 58.25
(SD = 8.07) for Website B (Mann–Whitney U = 76.5, p = 0.01 two-tailed). With
p < 0.05 it can be said that participants needed significantly longer on Website
A than on Website B.
Task 3 (Altitude hiking tour) also had non-normally distributed data
(Shapiro–Wilk test of normality, p < 0.05). The following means were measured: a
mean of 51.95 (SD = 7.87) for Website A and a mean of 47.21 (SD = 9.35) for
Website B (Mann–Whitney U = 164, p = 0.63). With p > 0.05, no significant
differences were identified.
Task 4 (Duration hiking tour) again had non-normally distributed data
(Shapiro–Wilk test of normality, p < 0.05). The following means were measured: a
mean of 75.84 (SD = 8.01) for Website A and a mean of 63.17 (SD = 8.58) for
Website B (Mann–Whitney U = 139, p = 0.33). With p > 0.05, no significant
differences were identified.
Task 5 (Booking) also had no normal distribution in the dataset (Shapiro–Wilk
test of normality, p < 0.05). The mean for Website A was 146.0 (SD = 17.01) and
Website B had a mean of 185.26 (SD = 28.84). The Mann–Whitney U test resulted in
U = 139.5, p = 0.68. As p > 0.05, no significant differences were identified.
As the data for the total task time was not normally distributed (Shapiro–Wilk
test of normality, p < 0.05), the effect of the website version on total task
time had to be measured with the non-parametric Kruskal–Wallis test. The results
show a significant difference between the total task times of the tested versions
of the websites (Kruskal–Wallis, p = 0.003). In order to investigate these
differences between both versions, a post hoc test (Mann–Whitney) was conducted;
this method was chosen because it remains valid for samples without normal
distribution. Ultimately, no significant difference between both website versions
(p = 0.946) was found. In conclusion, the total task time did not differ
significantly between both versions.
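For reference, the U statistic underlying these comparisons can be computed by
hand. A pure-Python sketch (the study used SPSS; this version handles ties via
mean ranks but omits the normal approximation needed for the p value):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    The pooled observations are ranked (tied values share their mean
    rank); U for sample `a` follows from its rank sum, and the smaller
    of U_a and U_b is reported, to be compared against a critical
    value or converted to a z-score for the p value.
    """
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n = len(pooled)
    rank_sum_a = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        mean_rank = (i + 1 + j) / 2.0  # 1-based ranks i+1 .. j
        rank_sum_a += mean_rank * sum(1 for k in range(i, j)
                                      if pooled[k][1] == 0)
        i = j
    n_a, n_b = len(a), len(b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2.0
    return min(u_a, n_a * n_b - u_a)
```

For two fully separated samples such as [1, 2, 3] versus [4, 5, 6] the statistic
is 0, the strongest possible evidence of a group difference at that sample size.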
A correlation analysis between total task time and perceived usability (SUS)
shows no significant correlation (p > 0.05) between total task time and SUS
scores for Website A [Pearson's r = -0.248, p = 0.291 (two-tailed)] and no
significant correlation (p > 0.05) for Website B [Pearson's r = -0.332,
p = 0.153 (two-tailed)].
Concluding, H1 can be partially answered affirmatively. Especially for easy
information-seeking tasks, a responsive approach seems to be more appropriate and
seems to contribute to efficiency. The more complex the tasks, the less important
the differences between both website interface versions become, as for Tasks 3, 4
and 5 no significant differences could be found. Interestingly, on Website A
users utilized all navigational elements much more consistently, making it easy
for them to browse with one hand without reverting to the site's search
functionality. So even if users took longer on Website A to complete their tasks,
more pages were visited and more information about, and understanding of, the
website was gained while browsing.
H2: A responsive mobile touristic website is more effective to use than a mobile
touristic website.
Both website versions show some significant differences in regard to
effectiveness for the user. Looking at our data, Website A shows a higher
percentage of no or minor problems when users deal with tasks on this site
compared to Website B, except in Task 2. There seems to be an overall tendency
towards a smoother experience, making users feel much more secure when browsing
Website A on a mobile device. However, in more complex settings such as Task 5,
Website A could not prove to be more effective than its non-responsive
counterpart, mainly due to failures in process logic and implementation. Website
B, in turn, also failed to hit the mark due to questionable design decisions,
such as pop-ups. Hence, H2 can be partially answered affirmatively.
H3: A responsive mobile touristic website is more satisfying to use than a mobile
touristic website.
In order to investigate the effect of the tested mobile websites' design approach
on users' satisfaction, the SUS scores for the two website versions were
compared. The results of the first within-subjects ANOVA showed a significant
effect of the website version on perceived usability, Wilks' Lambda = 0.62,
F(3, 17) = 3.48, p = 0.039, partial eta-squared = 0.38, with Mauchly's test of
sphericity satisfied (p = 0.662). A post hoc analysis (Bonferroni) led to no
significant findings (all pairs with p > 0.05). Therefore, an alternative post
hoc analysis (Fisher's Least Significant Difference test, LSD) was conducted. The
pairwise comparisons indicated no significant difference between the two website
versions (p = 0.829). Users recognize perceived usability as an important factor,
but neither website could significantly distinguish itself in terms of overall
usability.
Taking the NPS into account, the picture is similar. In the final rating of
satisfaction, both websites are on par regarding SUS scores for perceived
usability and NPS for overall satisfaction. Website A achieved slightly better
overall results, but the differences were too small and therefore negligible.
From a user's point of view it seems that Website A simply did not have enough
unique features, failed to fascinate the user, or was not 'special' enough to be
promoted more than Website B. In spite of the above results, the third hypothesis
(H3) has to be rejected.

6 Conclusions, further research and limitations

With this experiment we contributed to a better understanding of the impact of
responsively or adaptively designed touristic websites on users' efficiency,
effectiveness and satisfaction when searching for tourism-related information on
smartphones. Two different implementations of tourism destination websites were
selected, one with a responsive design implementation and one with an adaptive
design approach for mobile devices. Three hypotheses were established: a
responsive approach positively influences the efficiency (H1, partially answered
affirmatively), the effectiveness (H2, partially answered affirmatively), and the
satisfaction (H3, rejected) on such mobile websites.
Overall, responsive mobile web design leaves a mixed impression regarding
the investigated aspects. It would be too easy to argue that RWD provides, out of
the box, a more comfortable and smooth user experience on a smartphone compared
to an adaptive design. Nevertheless, some merits could be identified. In our
overall impression and evaluation, Website A does prove to be more efficient,
more effective, and even slightly more satisfying than its adaptive counterpart
with regard to finding the requested information.
Our data show that, although users took more time to fulfill our information-
seeking tasks on Website A, more pages were visited and therefore more
information was consumed during their visit. Several learning effects could be
observed regarding time on task, as users became consistently more efficient in
subsequent tasks, albeit for different reasons, e.g. coping with an
unstructured navigation. Nevertheless, these effects were not controlled for within our
experiment, nor were learning effects across the two-week time span. Especially on
first-time use, it took all users quite a while to become familiar with the idea and
structure of each website, which took much longer on Website B, resulting in a
much higher use of the implemented search functionality to avoid going through the
complex navigation and scrolling. Notably, users of Website A mainly followed
the implemented website navigation without utilizing the search functionality
at all, hence visiting more pages on the website, taking much longer, and
consequently encountering fewer critical errors on their journey.
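Task times like these are typically right-skewed, which is why Sauro and Lewis (2010) recommend summarizing them with the geometric rather than the arithmetic mean for small samples. A minimal sketch, with timings that are hypothetical illustrations of the two browsing patterns described above:

```python
import math

def geometric_mean(times):
    """Geometric mean: exponential of the mean log time; less sensitive
    to the occasional very slow participant than the arithmetic mean."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

# Hypothetical task times (seconds) for one task on each website:
website_a = [95, 120, 88, 140, 102]   # steady navigation-driven browsing
website_b = [70, 210, 65, 180, 75]    # search shortcuts vs. long scrolling
print(round(geometric_mean(website_a)), round(geometric_mean(website_b)))
```

The geometric mean always lies at or below the arithmetic mean, so a few extreme scrolling sessions inflate it far less, making between-version comparisons of task time more robust.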
With increasing task complexity, the design approach becomes secondary and the
overall logic and usability of the process take over. Form fields losing already
entered data, pop-up windows on a smartphone, and an unclear navigational structure
within the booking process itself are severe usability showstoppers, which should be
critically addressed by developers and, even better, avoided at all cost. As RWD
becomes more prominent with service providers and tourism agencies, an
identifiable inexperience emerges, not only among decision makers in understanding the
challenges of critical online processes, but most of all among designers of mobile
websites in conceptualizing these processes. This inexperience is compounded by
users’ uncertainty about what should be achieved and how they should behave on a
small mobile screen. Users expect graphical pages at first and feel magically
attracted by them (Groth and Haslwanter 2015), but when looking closer at usability
aspects, users act more confidently on responsive websites. Still, responsiveness
does not by itself lead to a better evaluation or stronger promotion of such
websites, which, fortunately, leaves a lot of room for further research. The challenge
will be to develop not only a usable responsive website, but a responsive website
that fascinates visitors and stimulates promotion. How this may be done, and which
aspects are valued more than others in encouraging users to do so, remains
unclear.
Nevertheless, some interesting aspects identified in our pre- and post-test
questionnaires seem noteworthy for further research. (1) Users voice, feel,
and evaluate themselves as being much more proficient when using a smartphone
than when using a desktop computer. This results in a very straightforward and
courageous behavior of simply trying out how everything works. Although we have
addressed the gap in user knowledge before, it seems that users counter their
knowledge gap with a feeling of ‘‘being more in control’’. The challenge here is
that developers and tourism service providers are confronted with a very confident
and proficient target group, with high demands towards orientation, information
quality, and usability compared to the desktop environment. (2) Within our
experimental setting, users could sit back in a chair and try to solve
our tasks undisturbed. This, of course, is far from a realistic setting, especially when
thinking about mobile use cases. With the rise of new apps that help monitor user
activity on smartphones, it would be very insightful to study in-the-field applications
and scenarios, with users in the middle of a city or commuting on a bus or
metro, to better understand the implications of responsive design approaches, as these
especially focus on one-finger touch-and-point navigation rather than mere search
input handling. Here we expect a more significant divergence between the two
approaches than in our experimental setting. (3) Finally, our user group may cover
the most IT-savvy generation, but leaves out the much larger demographic groups of
Generation X and Baby Boomers. Especially the latter is of increasing interest from
a touristic point of view, being a generation with enough financial means to
travel regularly, staying healthy, and with an unbounded curiosity to adopt new
technologies and make them their own.
Behavioral studies naturally come with limitations that need to be addressed.
First, it may seem questionable to compare two different destination websites on
usability metrics. Within our experiment, the focus has been solely on how user
behavior differs between responsive and adaptive websites when searching for
tourism information. A comparative point of view regarding one website being
‘better’ than the other in terms of, e.g., navigational structure, information and image
quality, or loading times has not been the objective of this study. Nevertheless,
comparative statistical analysis could be conducted, as all test persons remained the
same during the two-week period. Second, it can be argued that performance
measures and implications towards efficiency are individual to each website and
hence not comparable by nature. This is basically correct, although within our study
a combined view of task time and page views has been applied. As reported,
lower numbers on Website B were achieved by employing the search
functionality within the website, as users were, after completing Task 1, already
familiar with the rather bulky design of Website B and therefore frustrated with
scrolling the long website with their fingers. This user behavior could not be observed
on the responsive Website A at all; in fact, just the opposite behavior was observed:
users made use of the website’s navigation and searched along the website’s structure
and navigation in order to complete the task. It may be concluded that learning
effects did occur on Website B, namely to circumvent navigational features.
Interestingly enough, Website A was not able to capitalize on such learning effects,
although, as reported, a faster browsing behavior could be observed. Efficiency has
been interpreted along these dimensions, and not merely as the performance of users
in achieving their task. Third, it should be noted that with UsERA (Inversini et al.
2011) an established and useful measure to assess usability on tourism websites
already exists. Nevertheless, this concept has been deliberately set aside for two
reasons: (1) due to the highly competitive environment among the Tyrolean tourism
destinations, access to log files would be interesting, but very restricted; (2) in order
to assess a user’s behavior when looking for tourism information on mobile
websites, log file analysis and risk assessment may fit less when thinking about
users, and more when optimizing for information and service providers. Following
this notion, a tourism information provider’s strategic perspective on implementing
RWD has not been tackled. This rather novel focus has been taken on by Gibbs and
Gretzel (2015) and would, in combination with results from user behavior studies
like ours, provide tremendous insights regarding the feasibility, return on investment,
innovativeness, and impact of RWD in the tourism domain.
In conclusion, the absence of a universal mobile usability framework became
painfully apparent. All applied heuristics, established as they are, became
questionable and even antithetic when observing users and their tourism
information search behavior, especially in the context of mobile devices.
Specifically, ‘efficiency’ provides rather weak insights into how users utilize their
phone in this regard, even more so when using a responsive website. Mobile user
behavior may be considered the younger sister of desktop behavior, but
understanding her character proves quite contradictory and challenging.

References
Adukaite A, Inversini A, Cantoni L (2013) Examining user experience of cruise online search funnel. In:
Marcus A (ed) Design, user experience, and usability: web, mobile, and product design; second
international conference, DUXU 2013, held as part of HCI International 2013, Las Vegas, NV,
USA, July 21–26, 2013; proceedings, part IV, vol 8015. Springer, Berlin, pp 163–172
Bader A, Baldauf M, Leinert S, Fleck M, Liebrich A (2012) Mobile tourism services and technology
acceptance in a mature domestic tourism market: the case of Switzerland. In: Fuchs M, Ricci F,
Cantoni L (eds) Information and communication technologies in tourism 2012. Springer Vienna,
Vienna, pp 296–307
Bahadir D, Yumusak N, Arsoy S (2013) Guided-based usability evaluation on mobile websites. http://
www.thinkmind.org/index.php?view=article&articleid=iciw_2013_9_20_20212


Baloglu S (2000) A path analytic model of visitation intention involving information sources, socio-
psychological motivations, and destination image. J Travel Tourism Marketing 8(3):81–90. doi:10.
1300/J073v08n03_05
Bangor A, Kortum P, Miller J (2009) Determining what individual SUS scores mean: adding an adjective
rating scale. J Usabil Studies 4(3):114–123
Bohyun K (2013) Responsive web design, discoverability, and mobile challenge. Library Technology
Reports:29–39
Bortenschlager M, Häusler E, Schwaiger W, Egger R, Jooss M (2010) Evaluation of the concept of early
acceptance tests for touristic mobile applications. In: Gretzel U, Law R, Fuchs M (eds) Information
and communication technologies in tourism 2010. Springer Vienna, Vienna, pp 149–158
Brau H, Diefenbach S, Hassenzahl M, Koller F, Peissner M, Röse K (eds) (2008) Usability Professionals
2008. German Chapter der Usability Professionals’ Association, Stuttgart
Brooke J (1996a) SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA,
McClelland IL (eds) Usability evaluation in industry. Taylor and Francis, London, pp 189–194
Brooke J (1996) SUS: a quick and dirty usability scale. In: Jordan PW (ed) Usability evaluation in
industry: based on the International Seminar Usability Evaluation in Industry that was held at
Eindhoven, The Netherlands, on 14 and 15 September 1994. Taylor & Francis, London
Budiu R, Nielsen J (2010) Usability of mobile websites: 85 design guidelines for improving access to
web-based content and services through mobile devices. Nielsen Norman Group
Burby J, Brown A (2007) Web analytics definitions. Web Analytics Association, Washington DC
Champeon S (2003) Progressive Enhancement and the Future of Web Design. http://www.hesketh.com/
thought-leadership/our-publications/progressive-enhancement-and-future-web-design. Accessed 18
June 2014
Chen JS, Gursoy D (2000) Cross-cultural comparison of the information sources used by first-time and
repeat travelers and its marketing implications. Int J Hospit Manage 19(2):191–203. doi:10.1016/
S0278-4319(00)00013-X
Chen H, Houston AL, Sewell RR, Schatz BR (1998) Internet Browsing and searching: user evaluations of
category map and concept space techniques. J Am Soc Inf Sci 49(7):582–603
Chung W (2006) Studying information seeking on the non-English Web: an experiment on a Spanish
business Web portal. Int J Hum Comput Stud 64(9):811–829. doi:10.1016/j.ijhcs.2006.04.009
Coursaris CK, Kim DJ (2011) A meta-analytical review of empirical mobile usability studies. J Usabil
Stud 6(3):117–171
Eriksson N, Strandvik P (2009) Possible determinants affecting the use of mobile tourism services. In:
Filipe J, Obaidat MS (eds) e-Business and telecommunications: international conference, ICETE
2008, Porto, Portugal, July 26–29, 2008, revised selected papers, vol 48. Springer, Berlin,
Heidelberg, pp 61–73
Fidel R, Davies RK, Douglass MH, Holder JK, Hopkins CJ (1999) A visit to the information mall: web
searching behavior of high school students. J Am Soc Inform Sci 50:24–37
Fodness D, Murray B (1999) A model of tourist information search behavior. J Travel Res
37(3):220–230. doi:10.1177/004728759903700302
Fox R (2012) Being responsive. OCLC Syst Serv 28(3):119–125. doi:10.1108/10650751211262100
Frain B (2012) Responsive web design with HTML5 and CSS3. Packt Publishing, Limited
Garcia A, Torre I, Linaza MT (2013) Mobile social travel recommender system. In: Xiang Z, Tussyadiah
I (eds) Information and communication technologies in tourism 2014. Springer International
Publishing, Cham, pp 3–16
Gardner BS (2011) Responsive web design: enriching the user experience. Connectivity and the User
Experience:13–19
Gibbs C, Gretzel U (2015) Drivers of responsive website design innovation by destination marketing
organizations. In: Tussyadiah I, Inversini A (eds) Information and communication technologies in
tourism 2015. Springer International Publishing, Cham, pp 581–592
Gretzel U, Fesenmaier DR, O’Leary JT (2006) The transformation of consumer behaviour. In: Buhalis D,
Costa C (eds) Tourism business frontiers: consumers, products and industry. Elsevier Butterworth-
Heinemann, Amsterdam, pp 9–18
Groth A, Haslwanter D (2015) Perceived usability, attractiveness and intuitiveness of responsive mobile
tourism websites: a user experience study. In: Tussyadiah I, Inversini A (eds) Information and
communication technologies in tourism 2015. Springer International Publishing, Cham, pp 593–606
Grubbs FE (1969) Procedures for detecting outlying observations in samples. Technometrics 11(1):1–21
Grün C, Werthner H, Pröll B, Retschitzegger W, Schwinger W (2008) Assisting tourists on the move: an
evaluation of mobile tourist guides. In: 2008 7th international conference on mobile business
(ICMB), pp 171–180
Gursoy D (2003) Prior product knowledge and its influence on the traveler’s information search behavior.
J Hospit Leisure Market 10(3–4):113–131. doi:10.1300/J150v10n03_07
Gustafson A, Engler O, Zeldman J (2013) Adaptive web design: Créer des sites riches avec l’amélioration
progressive. Easy Readers, Pearson
Haid E, Kiechle G, Göll N, Soutschek M (2008) Evaluation of a web-based and mobile ski touring
application for gps-enabled smartphones. In: O’Connor P, Gretzel U, Höpken W (eds) Information
and communication technologies in tourism 2008: proceedings of the international conference in
Innsbruck, Austria, 2008. Springer, Wien, pp 313–323
Harrison R, Flood D, Duce D (2013) Usability of mobile applications: literature review and rationale for a
new usability model. J Interact Sci 1(1):1. doi:10.1186/2194-0827-1-1
Hawk WB, Wang P (1999) User interaction on the World Wide Web: problems and problem-solving.
Proceedings of the ASIS Annual Meeting. Information Today, Medford, pp 256–270
Heo J, Ham D, Park S, Song C, Yoon WC (2009) A framework for evaluating the usability of mobile
phones based on multi-level, hierarchical model of usability factors. Interact Comput
21(4):263–275. doi:10.1016/j.intcom.2009.05.006
Hertzum M (2010) Images of usability. Int J Hum-Comp Inter 26(6):567–600. doi:10.1080/
10447311003781300
Ho C, Lin M, Chen H (2012) Web users’ behavioural patterns of tourism information search: from online
to offline. Tour Manag 33(6):1468–1482. doi:10.1016/j.tourman.2012.01.016
Inversini A, Cantoni L, Bolchini D (2011) Connecting usages with usability analysis through the user
experience risk assessment model: a case study in the tourism domain. In: Marcus A (ed) Design,
user experience, and usability: theory, methods, tools and practice, vol 6770. Springer, Berlin,
Heidelberg, pp 283–293
Kaasinen E (2005) User acceptance of location-aware mobile guides based on seven field studies. Behav
Inform Technol 24(1):37–49. doi:10.1080/01449290512331319049
Kasahara H, Mori M, Mukunoki M, Minoh M (2013) Business model of mobile service for ensuring
students’ safety both in disaster and non-disaster situations during school trips. In: Xiang Z,
Tussyadiah I (eds) Information and communication technologies in tourism 2014. Springer
International Publishing, Cham, pp 101–114
Kawase J, Kurata Y, Yabe N (2013) Predicting from GPS and Accelerometer data when and where
tourists have viewed exhibitions. In: Xiang Z, Tussyadiah I (eds) Information and communication
technologies in tourism 2014. Springer International Publishing, Cham, pp 115–127
Kellar M, Watters C, Inkpen KM (2007) An exploration of web-based monitoring. In: Rosson MB,
Gilmore D (eds) Proceedings of the SIGCHI conference on human factors in computing systems.
ACM, New York, pp 377–386
Kim D, Park J, Morrison AM (2008) A model of traveller acceptance of mobile technology. Int J Tourism
Res 10(5):393–407. doi:10.1002/jtr.669
Kramer R, Modsching M, ten Hagen K, Gretzel U (2007) Behavioural impacts of mobile tour guides. In:
Sigala M, Mich L, Murphy J (eds) Information and communication technologies in tourism 2007:
Proceedings of the international conference in Ljubljana, Slovenia, 2007. Springer, Wien,
pp 109–118
Marcotte E (2011) Responsive web design. Book Apart, New York
Mitsche N (2005) Understanding the Information Search Process within a Tourism Domain-specific
Search Engine. In: Frew AJ (ed) Information and communication technologies in tourism 2005:
Proceedings of the international conference in Innsbruck, Austria, 2005. Springer, Wien,
pp 183–193
Nachmias R, Gilad A (2002) Needle in a Hyperstack. J Res Technol Edu 34(4):475–486. doi:10.1080/
15391523.2002.10782362
Nah FF, Siau K, Sheng H (2005) The value of mobile applications. Commun ACM 48(2):85–90. doi:10.
1145/1042091.1042095
Nayebi F, Desharnais J, Abran A (2012) The state of the art of mobile application usability evaluation. In:
2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), pp 1–4
Nielsen J (2012) Usability 101: Introduction to Usability. http://www.nngroup.com/articles/usability-101-
introduction-to-usability/. Accessed 16 Apr 2015
Nielsen J, Budiu R (2013) Mobile usability. New Riders, Berkeley


Not E, Venturini A (2013) Discovering functional requirements and usability problems for a mobile
tourism guide through context-based log analysis. In: Cantoni L, Xiang Z (eds) Information and
communication technologies in tourism 2013. Springer, Heidelberg, pp 12–23
Pan B, Fesenmaier DR (2006) Online information search. Ann Tourism Res 33(3):809–832. doi:10.1016/
j.annals.2006.03.006
Pan B, MacLaurin T, Crotts JC (2007) Travel blogs and the implications for destination marketing.
J Travel Res 46(1):35–45. doi:10.1177/0047287507302378
Raptis D, Tselios N, Kjeldskov J, Skov MB (2013) Does size matter? In: Rohs M, Schmidt A, Ashbrook
D, Rukzio E (eds) Proceedings of the 15th international conference on Human-computer interaction
with mobile devices and services. ACM, p 127
Rasinger J, Fuchs M, Beer T, Höpken W (2009) Building a mobile tourist guide based on tourists’ on-site
information needs. Tourism Anal 14(4):483–502. doi:10.3727/108354209X12596287114255
Reichheld FF (2003) The one number you need to grow. Harvard Bus Rev 81(12):46–55
Ricci F (2010) Mobile recommender systems. Inform Technol Tourism 12(3):205–231. doi:10.3727/
109830511X12978702284390
Rubin J, Chisnell D (2008) Handbook of usability testing: How to plan, design, and conduct effective
tests, 2nd edn. Wiley Pub, Indianapolis
Sauro J, Lewis JR (2010) Average task times in usability tests. In: Mynatt E, Schoner D, Fitzpatrick G,
Hudson S, Edwards K, Rodden T (eds) Proceedings of the 28th international conference on human
factors in computing systems. ACM, p 2347
Sauro J, Lewis JR (2012) Quantifying the user experience: Practical statistics for user research. Elsevier
Shneiderman B (2000) Universal usability. Commun ACM 43(5):84–91. doi:10.1145/332833.332843
Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: undisclosed flexibility in data
collection and analysis allows presenting anything as significant. Psychol Sci 22(11):1359–1366.
doi:10.1177/0956797611417632
Smith LS, Mosier JN (1986) Guidelines for designing user interface software. MITRE Corporation,
Bedford
Snepenger D, Snepenger M (1993) Information search by pleasure travelers. In: Khan MA, Olsen MD,
Var T (eds) VNR’s encyclopedia of hospitality and tourism. J. Wiley, New York, pp 830–835
Tullis T, Albert B (2013) Measuring the user experience: collecting, analyzing, and presenting usability
metrics, 2nd edn. Morgan Kaufmann, Waltham
Tussyadiah I (2013) When cell phones become travel buddies: social attribution to mobile phones in
travel. In: Cantoni L, Xiang Z (eds) Information and communication technologies in tourism 2013.
Springer, Berlin, Heidelberg, pp 82–93
Vogt CA, Fesenmaier DR (1998) Expanding the functional information search model. Ann Tourism Res
25(3):551–578. doi:10.1016/S0160-7383(98)00010-3
Wang Y, Fesenmaier DR (2004) Towards understanding members’ general participation in and active
contribution to an online travel community. Tour Manag 25(6):709–722. doi:10.1016/j.tourman.
2003.09.011
Wang D, Fesenmaier DR (2013) Transforming the travel experience: the use of smartphones for travel.
In: Cantoni L, Xiang Z (eds) Information and communication technologies in tourism 2013.
Springer, Berlin, pp 58–69
Wang D, Xiang Z, Fesenmaier DR (2014) Adapting to the mobile world: a model of smartphone use. Ann
Tourism Res 8:11–26. doi:10.1016/j.annals.2014.04.008
