
Usability of Apps and Websites: A Meta-Regression Study

Full Paper

Ali Balapour
Sam M. Walton College of Business
ABalapour@walton.uark.edu

Rajiv Sabherwal
Sam M. Walton College of Business
RSabherwal@walton.uark.edu

Abstract
Lack of usability is a common and important issue for computer websites and mobile apps, despite a rich stream of literature in this area. Drawing upon 17 empirical studies on usability, we performed a meta-regression to examine whether the correlation between usability perception and a factor affecting usability depends on the nature of that influencing factor. The results suggest that information and application design affect users' usability perceptions. However, these two factors differ in an important respect: information has a positive effect on usability only in studies conducted in the mobile context, whereas application design is positively related to usability perception for both websites and apps. The literature associates numerous factors with usability, but there is no consensus among researchers in this regard. Hence, by integrating the factors affecting usability through a meta-analysis, this paper offers new insights to the literature.

Keywords:

Meta-analysis, Meta-regression, Usability, Website Usability, Mobile Application Usability.

Introduction
The e-commerce and m-commerce markets are continuously growing, with the m-commerce market expected to hit 80 billion dollars by the end of 2020 (Milnes 2016). There are over 1 billion websites, and over 2 million applications on Apple’s App Store (Internetlivestats.com 2016; Statista 2016). Despite this growing popularity and adoption of interfaces across mobile and computer devices, their usability is questionable. Approximately 40 percent of mobile apps are reported as “useless” (Schick 2015), implying concerns about both their usability and potential usefulness. Moreover, usability issues on websites are frequently emphasized in the literature (Wang and Senecal 2007; Lee and Kozar 2012). The usability obstacles that developers encounter threaten the success of the designed interfaces. Developing a high-quality website or app is expensive, and despite the apparent increase in quality, many of these expensive interfaces fail (Hoehle and Venkatesh 2015).
Against this backdrop, usability studies have attempted to resolve usability issues of IT-related devices
and software. For example, Green and Pearson (2006) adopted three usability guidelines of ISO 9241:
effectiveness, efficiency and satisfaction to predict website usability. Hoehle and Venkatesh (2015)
developed a framework for designing more useful mobile applications. Agarwal and Venkatesh (2002)
utilized Microsoft’s guidelines to propose a framework for developing useful websites. There are numerous studies in the area of usability; indeed, there are journals dedicated to usability studies, such as the Journal of Usability Studies and Computers in Human Behavior. Consequently, both practitioners and scholars are faced with an overabundance of variables that predict usability. For example, Lee and Benbasat (2004), Zhang and Adipat (2005), and Agarwal and Venkatesh (2002) propose somewhat different variables for predicting usability. The variety of predictors and attributes of usability in the two discussed contexts can potentially confuse the developers and IT technicians who are the audience of these studies. In addition, apart from a few integrative reviews (Hornbæk 2006; Chiou et al. 2010), there has not been much effort to assimilate prior studies to guide both scholars and practitioners. To address this gap in the literature, we use meta-analyses of the relationships between usability and its predictors, and then

Twenty-third Americas Conference on Information Systems, Boston, 2017 1



conduct a meta-regression based on the meta-analyses results. Thus, we focus on the following research
questions:
RQ1: Does the correlation between usability perception and a factor affecting usability depend on the
nature of that factor?
RQ2: How does the context (i.e., web or mobile) of the study moderate the relationships?
To address the above questions, we use findings from 17 prior empirical studies and the resulting 86 observations of the correlation between usability perception and factors potentially affecting it, in the contexts of computer websites and mobile interfaces.

Literature Review (Usability Studies)


From the users’ perspective, an important question about IT artifacts (e.g., software, computer devices, interfaces) is whether they are usable. Lack of usability has critical implications, such as user frustration (Hong and Tam 2006); thus, usability is an important topic for practice. Three streams of studies emerge from the prior literature on usability. The first comprises studies that draw on theory to develop usability measures without any follow-up empirical analyses to validate those measures (Lee and Benbasat 2004; Gebauer et al. 2010; Zhang and Adipat 2005). The explosive growth of mobile technology after 2000 created a need to guide interface developers, which the 7C conceptual framework proposed by Lee and Benbasat (2004) addressed. Task-Technology-Fit (TTF) is another example, used to develop useful managerial information systems that build on data processing, communication, information access, notification, functionality, verification, network quality, and customization (Gebauer et al. 2010). By the same token, Zhang and Adipat (2005) suggested nine attributes of usability: learnability, efficiency, memorability, error, satisfaction, effectiveness, simplicity, comprehensibility, and learning performance.
The other two streams of usability research consist of empirical studies, which differ in the methods they use to evaluate usability, such as surveys, lab studies, and field studies (Duh et al. 2006; Adipat et al. 2011; Hoehle and Venkatesh 2015). These studies can be divided into subjective (preference) and objective (performance) studies of usability (Hornbæk 2006; Urbaczewski and Koivisto 2007). In the second stream, which comprises subjective studies of usability, samples of people are questioned about their attitudes toward the usability of an application, software, website, device, and so forth. One common method is surveying users before and/or after device or interface use (Urbaczewski and Koivisto 2007). Subjective measures of usability are also known as preference measures. The usability attributes are grounded in practice and/or originate from the academic literature. For example, Agarwal and Venkatesh (2002) utilized Microsoft usability guidelines to develop subjective measures of website usability. A similar case could be made for the development of usability measures from the International Organization for Standardization’s guidelines for ergonomics of human-computer interaction (ISO 9241). Several scholars benchmarked the three elements introduced in ISO 9241 (efficiency, effectiveness, and satisfaction) to build their models (Green and Pearson 2006; Oztekin et al. 2009; Tsiaousis and Giaglis 2010). Thus, there is no agreement in this stream on the factors affecting usability across different contexts.
In the third stream of research on usability, as already introduced, usability is measured through log data. Log data refers to a set of computer-generated records that are continuously produced as the user works with the device or interface. Log data can thus be monitored during and after the user's interaction with the device or interface, which is why this is called the performance (objective) approach to usability. Three well-recognized objective measures of usability are task completion, error rate, and time; these are measured, respectively, through the number of clicks or finger taps required to complete the desired task, the number of missed clicks, and the time spent on the task in seconds or minutes (Duh et al. 2006; Urbaczewski and Koivisto 2007; Tsiaousis and Giaglis 2010; Reinecke and Bernstein 2011). Each stream answers a different question in this area. For instance, the second stream reveals what the user thinks about the IT artifact, whereas the third stream reflects the user's actual experience of working directly with the IT artifact.
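The three objective measures described above can be sketched from a session log. The event schema and field names below are hypothetical, invented for illustration; real log formats vary by platform and instrumentation tool.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    """One computer-generated record of user interaction (hypothetical schema)."""
    timestamp: float  # seconds since session start
    kind: str         # "click", "miss_click", or "task_complete"

def objective_usability(events):
    """Compute the three classic performance measures from a session log:
    clicks/taps to complete the task, error rate, and time on task."""
    clicks = sum(1 for e in events if e.kind in ("click", "miss_click"))
    errors = sum(1 for e in events if e.kind == "miss_click")
    done = [e.timestamp for e in events if e.kind == "task_complete"]
    return {
        "clicks_to_complete": clicks,
        "error_rate": errors / clicks if clicks else 0.0,
        "time_on_task_s": done[0] if done else None,
    }

session = [LogEvent(1.2, "click"), LogEvent(2.8, "miss_click"),
           LogEvent(4.0, "click"), LogEvent(4.1, "task_complete")]
print(objective_usability(session))  # 3 clicks, error rate of 1/3, 4.1 s on task
```

Unlike the survey instruments of the second stream, nothing here asks the user anything; the measures fall out of the interaction record itself.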


Theoretical Model
The usability literature lacks empirical integration of studies, to the extent that researchers have no unified guideline for selecting and operationalizing the most appropriate usability measures (Hornbæk and Law 2007). To the best of our knowledge, over the past two decades there has been limited effort on literature reviews and meta-analyses of the usability literature to illuminate the path for future studies (Hornbæk 2006; Hornbæk and Law 2007). The notable variation in usability measures found by a prior meta-analysis suggests that situational factors, such as study context, may be in play (Hornbæk and Law 2007). Therefore, we propose a model that reflects how usability perception depends on other factors. In addition, following our second research question, we identify the effect of contextual factors.

Usability Perception
Usability is defined as “the extent to which a product can be used by specified users to achieve specified
goals with effectiveness, efficiency, and satisfaction in a specified context of use” (ISO 1998, part 11).
Thus, a basic definition of usability perception is the perception that the interface is effective (fits its
purpose) and efficient (gets the task done in a timely manner). Recognizing usability as multidimensional,
prior studies usually measure usability perception as a second-order construct (Agarwal and Venkatesh
2002).

Information
The Microsoft usability guidelines refer to content, defined as the breadth, depth, currency, and relevance of the information that developers put on their interfaces (Agarwal and Venkatesh 2002). Lee et al. (2015) studied interactivity in the mobile context and specifically pointed at the type of information being transferred between the user and the system. In addition, the terminology of support and supportability is used interchangeably with content, referring to the amount of information available on the website to guide and support the user if something unplanned occurs (Aladwani and Palvia 2002; Lee and Kozar 2012). By the same token, some used the more straightforward term information quality as a factor affecting website usability (Liu and Arnett 2000; Oztekin et al. 2009). Therefore, we deemed this variable to be an important part of our nomological model.
Moreover, the prior literature on the relationship between information and usability has established a positive and significant effect; the overall correlation between the two has often been non-zero and positive (Aladwani and Palvia 2002; Oztekin et al. 2009; Eveleth et al. 2015). However, a few studies argue against a relationship between the two. Hasan (2016) found no positive connection between usability and the information design, navigation, and visual design of the website, but it should be noted that the results of that study were obtained under the effect of a certain emotion (irritation). Similarly, Tractinsky and Ikar (2000) found a weak negative correlation between usability and information, yet they did not explain the reasoning behind such an outcome. Thus, we expect to find a greater correlation between the two in the meta-analysis:
H1a. The correlation between usability perception and information is greater than the correlations between usability perception and other factors potentially affecting it.

Application Design
The design element of the interface is the second most important factor affecting usability perception that we take into consideration. Application design refers to “the degree to which a user perceives that a mobile application is generally designed well” (Hoehle and Venkatesh 2015). Prior literature has adopted visual aspects, such as interface graphics and aesthetic values (e.g., color and font), as affecting usability; here, by contrast, we refer to technical aspects of design, such as data preservation and shorter loading/waiting times for users (Cyr et al. 2010; Hoehle et al. 2015; Hoehle et al. 2016). Accordingly, usability models suggest a well-established relationship between usability and application design (Hoehle and Venkatesh 2015; Hoehle et al. 2015). Therefore, we hypothesize:


H1b. The correlation between usability perception and application design is greater than the correlations between usability perception and other factors potentially affecting it, except information.

Context
Usability principles have been applied to a variety of contexts, such as mobile devices, computer devices, mobile apps, websites on mobile devices, websites on computer devices, interfaces, software, PDAs, tablets, and so forth. The limitations that we imposed on this study narrowed down our contextual factors. Specifically, we focused on empirical subjective studies of usability, and in that area the most frequently studied contexts are mobile (interfaces and apps) and computer websites. Although we observed somewhat similar factors affecting usability across studies in these contexts, the contexts also differ because of the physical nature of use associated with mobile devices versus desktop computers (Oztekin et al. 2009; Hoehle et al. 2016; Lee and Kozar 2012). Prior literature states that customers believe m-commerce is severely imperfect compared to e-commerce in terms of interface and usability limitations (Ozok and Wei 2010). Therefore, we expect the context to moderate the relationships between usability perception and the factors affecting it. Accordingly, we posit:
H2a. In the mobile context, the correlation between usability perception and information is greater than the correlations between usability perception and other factors potentially affecting it.
H2b. In the mobile context, the correlation between usability perception and application design is greater than the correlations between usability perception and other factors potentially affecting it, except information.
H2c. In the computer website context, the correlation between usability perception and information is greater than the correlations between usability perception and other factors potentially affecting it.
H2d. In the computer website context, the correlation between usability perception and application design is greater than the correlations between usability perception and other factors potentially affecting it, except information.

Methodological Factors
Prior meta-analyses added methodological attributes of studies that are believed to alter analysis results across studies. Accordingly, we control for three methodological attributes: sample size (which affects the estimation of correlations and their confidence intervals), publication year (time could change the relevance and results of some relationships), and publication journal (which controls for the quality of the findings and proposed relationships). Sample size is among the most commonly studied methodological attributes in meta-analyses (Floyd et al. 2014; Sabherwal and Jeyaraj 2015). Furthermore, over the course of time technology improves, and so do interfaces; therefore, we control for the effect of the paper's publication year. Finally, a prior meta-analysis hypothesized that the publication journal of a study is directly related to the quality of its research findings and so can affect the dependent variable (Floyd et al. 2014). Following a similar approach, we control for whether the publication appeared in one of the elite IS journals. Figure 1 presents the complete research model.

Research Methods
Meta-analysis is a method of integrating the empirical relationships reported by prior studies to evaluate the overall relationship among two or more variables (Schmidt and Hunter 2014). The variable of interest in every meta-analysis is the effect size, which could be a correlation, mean difference, or variance among variables that have been studied extensively in the literature but whose relationship remains indefinite (Schmidt and Hunter 2014). Meta-regression is another form of meta-analysis, the approach that Sabherwal and Jeyaraj (2015) took in their study of IT performance. Essentially, meta-regression treats the effect size as the dependent variable in a regression on study characteristics (e.g., sample size) as independent variables. We follow Sabherwal and Jeyaraj's (2015) three comprehensive steps, which are discussed in the following sections.
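The idea of treating the effect size as the dependent variable can be illustrated with a toy calculation. The numbers below are invented for illustration and are not the paper's data; each row stands for one reported correlation plus characteristics of the study it came from.

```python
import numpy as np

# Hypothetical per-observation data: each row is one reported correlation
# (the effect size) together with characteristics of its source study.
effect_size = np.array([0.42, 0.18, 0.55, 0.31, 0.26])    # e.g., UsabCorr
sample_size = np.array([120.0, 350.0, 90.0, 500.0, 210.0])
is_mobile   = np.array([1.0, 0.0, 1.0, 0.0, 1.0])         # context dummy

# Meta-regression: the effect size is the dependent variable; study
# characteristics are the predictors in an ordinary regression.
X = np.column_stack([np.ones(5), sample_size, is_mobile])
beta, *_ = np.linalg.lstsq(X, effect_size, rcond=None)
print(dict(zip(["intercept", "sample_size", "is_mobile"], beta.round(4))))
```

A positive coefficient on a study characteristic would mean that studies with more of that characteristic tend to report larger effect sizes.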


Identifying Studies
A preliminary search of keywords such as “usability measures, usability evaluation, website usability, interface usability, application usability” in the EBSCO, ProQuest, and JSTOR databases yielded over 700 papers relevant to usability, of which 143 were approved as candidates for possible inclusion in this study. In filtering for the most appropriate studies, we excluded papers for one or more of the following reasons: (1) review papers; (2) papers that developed usability theory and constructs without empirical validation; (3) papers that studied usability but were limited to topics other than websites and mobile interfaces, which are outside this study's scope; (4) usability studies for which we could not retrieve correlations; and (5) studies that developed usability measures without empirical tests. To avoid the redundancy of presenting a separate reference list for the papers used in the analysis, we identified them in the references: “Only in meta-regression” marks papers that were not cited in the body, and “Also in meta-regression” marks papers that were both cited in the body and used in the analysis.

Figure 1. Research Model

Coding Studies
During the data collection stage, we collected all the key definitions of each variable used in the usability studies, along with empirical statistics (i.e., reliabilities, correlations, means, and standard deviations), contexts, sample sizes, and publication years, so that we could use them for coding and analysis. We then coded the constructs based on the similarities of the key definitions among variables. For example, if one variable's key identifier was website use and another's was website success, we coded both under the “usability” construct. Both authors independently coded the studies and compared the results. There was only one disagreement between the authors, which they resolved by introducing a variable named “application design” instead of combining it into a higher-order construct called “interface.” The coding protocol is shown in Table 2. The final sample included 17 studies, which yielded 86 “true” correlations (observations) based on the Schmidt and Hunter (2014) correlation correction formula. Table 3 describes the variables in greater detail.
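The standard Schmidt and Hunter adjustment is the correction for attenuation, which divides an observed correlation by the square root of the product of the two measures' reliabilities. The sketch below shows that basic form; the exact procedure used in the paper may include further corrections (e.g., for range restriction), which are not reproduced here.

```python
from math import sqrt

def corrected_correlation(r_xy, rel_x, rel_y):
    """Correction for attenuation (Schmidt & Hunter): estimate the 'true'
    correlation by dividing the observed correlation by the square root
    of the product of the two measures' reliabilities."""
    return r_xy / sqrt(rel_x * rel_y)

# An observed r of 0.30 between a scale with reliability .81 and a
# perfectly reliable measure is corrected upward to roughly 0.33.
print(corrected_correlation(0.30, 0.81, 1.00))
```

Because reliabilities are at most 1, the corrected correlation is always at least as large in magnitude as the observed one.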
Construct labels and examples of key identifiers:

Usability perception
• Site/app/internet use
• Web/mobile/internet usability
• Behavioral intention to use/continued intention to use
• Website/mobile success
• Expectation confirmation
• Website quality

Information
• Information and the way it is presented on the web/app/software (i.e., relevancy, breadth, depth, quality, currency)
• Supportability (user guidance)

Application Design
• Consistency in design
• Design

Table 2. Coding Protocol

Analyses
We employed ordinary least squares (OLS) regression to test the hypotheses. The dependent variable is continuous and coded as UsabCorr (usability correlation) based on the following rules: (1) each pre- and post-lab-survey study is considered as having two different samples; (2) if a study had multiple samples, we used the correlations across the aggregated samples (if reported), and otherwise coded the reported correlations of each sample into this study. The independent variables are information (Info) and application design (AppDes). This study also accounts for the context in which usability was studied: mobile (DB1_Mobile), website (DB2_Web), or other. Furthermore, we controlled for sample size (Sample), publication year (Year), and the published journal's classification (DC_TopJournal).

Construct | Variable | Operationalization | Mean (S.d.) or Frequency
Usability correlation (correlation between usability perception and factors potentially affecting it) | Continuous measure (UsabCorr) | -1.00 < ρ < +1.00 | 0.374 (0.30)
Factors potentially affecting usability perceptions | Information (Info) | 1 if the correlation is between usability and information | 17 (1), 69 (0)
Factors potentially affecting usability perceptions | Application Design (AppDes) | 1 if the correlation is between usability and application design | 7 (1), 79 (0)
Context | Mobile app (DB1_Mobile) | 1 if the study examined mobile app usability | 49 (1), 37 (0)
Context | Website (DB2_Web) | 1 if the study examined website usability | 32 (1), 54 (0)
Context | Other (DB3_Other) | 1 if the study examined other devices'/interfaces' usability | 5 (1), 81 (0)
Sample size | Continuous measure (Sample) | Obtained from studies | 398.12 (369.87)
Publication year | Continuous measure (Year) | Obtained from studies | 2013.02 (4.66)
Journal | Category of the journal in which the study is published (DC_TopJournal) | 1 if the study is published in one of the top 8 journals of Information Systems (see footnote 1) | 23 (1), 63 (0)

Table 3. Variables, Operationalization, and Descriptive Statistics

Stata 13.1 was used for the analyses. We used the cluster option to account for the potential interdependence of observations from the same study, but not of observations across studies (Sabherwal and Jeyaraj 2015). Furthermore, variance inflation factors (VIFs) were checked to rule out multicollinearity issues, and all were under the recommended threshold of ten (Hair et al. 2013). For completeness, we tested the effects of the hypothesized variables alone (Model 0), and tested the study model in three hierarchical steps: (1) direct effects of the control variables; (2) direct effects of the control, independent, and moderator variables; (3) all direct effects and the posited interactions. Table 4 provides the results. The changes in F-statistic for Models 2 and 3 are statistically significant. Model 3 is the best fit that we found after trying several combinations of interactions. Specifically, Model 3 meets the criteria of hierarchical significance at each step from Model 1 to Model 3.

1 We considered the top eight journals announced by the senior scholars of information systems (https://aisnet.org/?SeniorScholarBasket).
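The clustering logic behind Stata's cluster option can be sketched in plain numpy: the Liang-Zeger "sandwich" estimator sums the score contributions within each study before forming the variance matrix, so observations within a study may be arbitrarily correlated. The data below are simulated stand-ins, not the paper's 86 observations, and the sketch omits the small-sample correction that Stata applies by default.

```python
import numpy as np

def ols_cluster_se(X, y, groups):
    """OLS point estimates with Liang-Zeger cluster-robust standard errors:
    observations may be correlated within a cluster (study) but are assumed
    independent across clusters. No small-sample correction is applied."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, eg = X[groups == g], resid[groups == g]
        score = Xg.T @ eg          # summed score contribution of cluster g
        meat += np.outer(score, score)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

# Simulated stand-in data: 86 observations nested in 17 studies.
rng = np.random.default_rng(0)
n = 86
groups = rng.integers(0, 17, n)
info = rng.integers(0, 2, n)
mobile = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), info, mobile, info * mobile])
y = 0.3 + 0.2 * mobile + rng.normal(0, 0.25, n)

beta, se = ols_cluster_se(X, y, groups)
print("coefficients:", beta.round(3))
print("cluster-robust SEs:", se.round(3))
```

The point estimates are identical to plain OLS; only the standard errors, and hence the t-statistics reported in Table 4, change when clustering is applied.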

Discussion
This study used 86 observations across 17 studies of usability perception to identify the factors potentially affecting usability perceptions and the effects of study attributes (e.g., sample size) on the relationships. We found that the correlation between usability perception and a factor potentially affecting it depends on the nature of the selected factor: both information and application design are significant in our meta-regression. Model 3, for which we found the best fit, rejects H1a. Despite a significant relationship between the usability correlation and information (β = -0.23, p < 0.1), this relationship is negative, implying that information negatively affects usability perception. On the contrary, H1b was supported (β = 0.20, p < 0.01); application design has a positive effect on usability perception. We did not find support for H2a through H2d. However, to our surprise, when the context of the study is mobile, the relationship between information and usability perception is positive and significant (β = 0.38, p < 0.05). Figure 2 demonstrates the moderating effect of the mobile context on the relationship between information and usability perception. We did not find any of the control variables to significantly affect usability perception.
Variables | Model (0)2 | Model (1) | Model (2) | Model (3)
Sample size | | 0.00 (-1.02) | 0.00 (-1.75*) | 0.00 (-1.55)
Publication Year | | -0.01 (-0.50) | -0.02 (-0.85) | -0.02 (-0.85)
Top Journal | | 0.04 (0.49) | 0.00 (0.00) | 0.04 (0.46)
Information | -0.06 (-0.52) | | -0.12 (-1.21) | -0.23 (-1.78*)
Application Design | 0.12 (1.32) | | 0.19 (5.19***) | 0.20 (5.22***)
Mobile | | | 0.29 (0.80) | 0.18 (0.53)
Website | | | 0.41 (1.56) | 0.37 (1.56)
Information × Mobile | | | | 0.38 (2.56**)
R2 | 0.020 | 0.032 | 0.172 | 0.22
Adjusted R2 | -0.003 | -0.003 | 0.095 | 0.139
Model F | 1.11 | 0.52 | 8.42 | 6.55
∆R2 (LR-test): Model (0) to (1) | | 0.012 (1.08)3 | 0.012 (1.08) | 0.012 (1.08)
∆R2 (∆F): Model (1) to (2) | | | 0.14 (0.00***) | 0.14 (0.00***)
∆R2 (∆F): Model (2) to (3) | | | | 0.045 (0.021**)

Table 4. Results for Main Analysis

Our results include a couple of unexpected outcomes. In particular, prior literature suggests that the breadth, depth, currency, and relevance of information are directly related to the usability of an interface (Liu and Arnett 2000; Agarwal and Venkatesh 2002; Aladwani and Palvia 2002; Oztekin et al. 2009; Lee and Kozar 2012), yet our meta-analysis suggests that information negatively affects usability perception. However, the mobile context moderates the nature of the correlation between information and usability. Admittedly, the computer website context did not play a moderating role, nor a direct role, in predicting usability perception. Thus, we can argue that the website context attenuates the correlation between information and usability. We assume that, because of inherent limitations such as the screen sizes of mobile devices, developers do not have the flexibility and leisure to convey information as broadly as they can on computer websites. Thus, handling information in the mobile context becomes arduous.
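Reading the interaction in Table 4 (Model 3) as a simple slope makes this reversal concrete: the net effect of information in the mobile context is the main effect plus the interaction term.

```python
# Coefficients from Table 4, Model 3
beta_info = -0.23           # main effect of Information
beta_info_x_mobile = 0.38   # Information x Mobile interaction

# Simple slope of Information when Mobile = 1: the negative overall
# effect flips to a positive one in the mobile context.
slope_mobile = beta_info + beta_info_x_mobile
print(round(slope_mobile, 2))  # 0.15
```

So while information is associated with lower usability correlations overall, the association is positive (about 0.15) for mobile-context observations.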

Moreover, the results indicate that application design is a potential factor affecting usability perception. This is consistent with prior literature suggesting that the technical design of an interface has a considerable impact on boosting users' usability perceptions (Hoehle and Venkatesh 2015). However, the relationship between the two was not influenced by the mobile or web context. That is, regardless of the context of study, application design affects usability, and the technical elements of an interface are critical in determining the usability of mobile apps and websites. Similar to Floyd et al. (2014), we did not observe any effect of publication quality, which could be the result of the limited number of papers from elite journals in the analysis: only three papers from top-basket journals were included. Furthermore, while Sabherwal and Jeyaraj (2015) suggested that sample size acts as a control in meta-regression, we did not observe such an effect in the results. Lastly, publication year was rejected as a control variable, perhaps because all 17 studies belong to the period from 2000 to 2016, which is fairly recent.

2 N = 86. DV for all models = UsabCorr. All models cluster by study. Each cell contains the beta coefficient, with the t-statistic in parentheses. ***p < 0.01 (|t| ≥ 2.58), **p < 0.05 (|t| ≥ 1.96), *p < 0.1 (|t| ≥ 1.65).
3 Likelihood-ratio chi-square test.

Figure 2. Moderating Effect of Mobile Context

Limitations, Implications, and Future Studies


Admittedly, this study has limitations. First and foremost, it focuses on subjective studies of usability, leaving out objective studies of usability and conceptual papers; we hope to add them to our analysis in the future. Second, the number of observations in our meta-analysis is limited compared to successful meta-analyses; for example, Sabherwal and Jeyaraj (2015) used 303 samples and Floyd et al. (2014) used 443 observations. Third, to estimate the true correlations, we used an average reliability measure for the studies that did not report any form of reliability. Although minor, this will affect the results.
Nonetheless, this study has implications for both research and practice. In terms of research, we offer a few insights. First, to our knowledge, this is the first meta-regression in the area of usability; thus, the procedure that we implemented, the coding sheets, and the list of studies could be building blocks for future researchers. Second, the approach we have adopted in categorizing the usability literature (the three identified streams) provides the reader with a holistic view of this rich field. This categorization sheds light on the path for scholars who are interested in extending this field of study. Third, the results of this meta-regression will guide future studies in developing deeper insights into the factors affecting usability perceptions. For instance, the results suggest that application design is a significant predictor of usability regardless of context. This result should give researchers the confidence to incorporate it as a factor affecting usability perception in future studies.
This study also has implications for developers and designers. First, the model offers a framework for developing interfaces that encourage users to accept them. Developers and designers should be careful about the amount of information they put on their apps and interfaces. Unlike for computer websites, the breadth and depth of information might not be important for mobile apps and interfaces; instead, mobile apps should put the most relevant and concise information on their interfaces. Second, developers should account for the technical aspect. More specifically, it is recommended that they design the interface in a way that signifies three key elements: (a) speed; (b) flexibility; and (c) branding.


There are matters that we could not explain in detail, and we therefore recommend that future researchers pursue them. First, usability may depend on several other factors, such as ease of use, that we did not consider in our analysis because of dataset limitations. Therefore, we believe it is important for future meta-analyses to delve deeper into the interactions of other factors affecting usability perceptions with contexts, methodological attributes, theoretical lenses, and other study attributes. Second, the web context had no moderating effect on any of the independent variables, and there is a need for empirical investigation of this matter. For example, a study that specifically compares information presentation on computer websites versus mobile apps could address this gap.

REFERENCES
Adipat, B., Zhang, D., and Zhou, L. 2011. "The Effects of Tree-View Based Presentation Adaptation on
Mobile Web Browsing," MIS Quarterly (35:1), pp. 99-122.
Agarwal, R., and Venkatesh, V. 2002. "Assessing a Firm's Web Presence: A Heuristic Evaluation
Procedure for the Measurement of Usability," Information Systems Research (13:2), pp. 168-186.
Aladwani, A. M., and Palvia, P. C. 2002. "Developing and Validating an Instrument for Measuring User-
Perceived Web Quality," Information and Management (39:6), pp. 467-476. Also in meta-regression.
Braddy, P. W., Meade, A. W., and Kroustalis, C. M. 2008. "Online Recruiting: The Effects of
Organizational Familiarity, Website Usability, and Website Attractiveness on Viewers’ Impressions of
Organizations," Computers in Human Behavior (24:6), pp. 2992-3001. Only in meta-regression.
Chiou, W. C., Lin, C. C., and Perng, C. 2010. "A Strategic Framework for Website Evaluation Based on a
Review of the Literature from 1995–2006," Information and Management (47:5), pp. 282-290.
Cyr, D., Head, M., and Larios, H. 2010. "Colour Appeal in Website Design within and across Cultures: A
Multi-Method Evaluation," International Journal of Human-Computer Studies (68:1), pp. 1-21. Also in
meta-regression.
Duh, H. B. L., Tan, G. C. B., and Chen, V. H. 2006. “Mobile Usability: Usability Evaluation for Mobile
Device: A Comparison of Laboratory and Field Tests,” in Proceedings of the 8th Conference on
Human–Computer Interaction with Mobile Devices and Services, Helsinki, Finland, September 12-15,
pp. 4-16.
Eveleth, D. M., Baker-Eveleth, L. J., and Stone, R. W. 2015. "Potential Applicants’ Expectation-
Confirmation and Intentions," Computers in Human Behavior (44), pp. 183-190. Also in meta-
regression.
Floyd, K., Freling, R., Alhoqail, S., Cho, H. Y., and Freling, T. 2014. "How Online Product Reviews Affect
Retail Sales: A Meta-analysis," Journal of Retailing (90:2), pp. 217-232.
Gebauer, J., Shaw, M. J. and Gribbins, M. L. 2010. "Task-Technology Fit for Mobile Information
Systems," Journal of Information Technology (25:3), pp. 259-272.
Green, D., and Pearson, J. M. 2006. "Development of A Web Site Usability Instrument Based on ISO
9241-11," Journal of Computer Information Systems (47:1), pp. 66-72.
Hair, Jr., J. F., Anderson, R. E., Tatham, R. L., and Black, W. C. 2013. Multivariate Data Analysis with
Readings (7th ed.), Pearson New International: Pearson Education Limited.
Hartmann, J., Sutcliffe, A., and De Angeli, A. 2007. "Investigating Attractiveness in Web User Interfaces,"
in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose,
California, pp. 387-396. Only in meta-regression.
Hasan, B. 2016. "Perceived Irritation in Online Shopping: The Impact of Website Design Characteristics,"
Computers in Human Behavior (54), pp. 224-230. Also in meta-regression.
Hoehle, H., Aljafari, R., and Venkatesh, V. 2016. "Leveraging Microsoft's Mobile Usability Guidelines:
Conceptualizing and Developing Scales for Mobile Application Usability," International Journal of
Human-Computer Studies (89), pp. 35-53. Also in meta-regression.
Hoehle, H., and Venkatesh, V. 2015. "Mobile Application Usability: Conceptualization and Instrument
Development," MIS Quarterly (39:2), pp. 435-472. Also in meta-regression.
Hoehle, H., Zhang, X., and Venkatesh, V. 2015. "An Espoused Cultural Perspective to Understand
Continued Intention To Use Mobile Applications: A Four-Country Study of Mobile Social Media
Application Usability," European Journal of Information Systems (24:3), pp. 337-359. Also in meta-
regression.
Hong, S. J., and Tam, K. Y. 2006. "Understanding The Adoption of Multipurpose Information Appliances:
The Case of Mobile Data Services," Information Systems Research (17:2), pp. 162-179. Also in meta-
regression.


Hornbæk, K. 2006. "Current Practice in Measuring Usability: Challenges to Usability Studies and
Research," International Journal of Human-Computer Studies (64:2), pp. 79-102.
Hornbæk, K., and Law, E. L. C. 2007. "Meta-analysis of Correlations among Usability Measures," in
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, California,
pp. 617-626.
Internetlivestats. 2016. "Total Number of Websites" (available online at
https://www.Internetlivestats.com/; accessed October 9, 2016).
ISO. 1998. ISO 9241-11:1998, Ergonomic Requirements for Office Work with Visual Display Terminals
(VDTs) – Part 11: Guidance on Usability.
Lee, D., Moon, J., Kim, Y. J., and Mun, Y. Y. 2015. "Antecedents and Consequences of Mobile Phone
Usability: Linking Simplicity and Interactivity to Satisfaction, Trust, and Brand Loyalty," Information
and Management (52:3), pp. 295-304. Also in meta-regression.
Lee, Y. E., and Benbasat, I. 2004. "Interface Design for Mobile Commerce," Communications of the ACM
(46:12), pp. 48-52.
Lee, Y., and Kozar, K. A. 2012. "Understanding of Website Usability: Specifying and Measuring Constructs
and Their Relationships," Decision Support Systems (52:2), pp. 450-463.
Liu, C., and Arnett, K. P. 2000. "Exploring The Factors Associated with Web Site Success in the Context of
Electronic Commerce," Information and Management (38:1), pp. 23-33. Also in meta-regression.
lo Storto, C. 2013. "Evaluating Ecommerce Websites Cognitive Efficiency: An Integrative Framework
Based on Data Envelopment Analysis," Applied Ergonomics (44:6), pp. 1004-1014. Only in meta-
regression.
Milnes, H. 2016. "Where Mobile Commerce Is Going in 2016" (available online at
http://digiday.com/marketing/mobile-commerce-going-2016/; accessed October 9, 2016).
Ozok, A. A., and Wei, J. 2010. "An Empirical Comparison of Consumer Usability Preferences in Online
Shopping Using Stationary and Mobile Devices: Results from A College Student Population,"
Electronic Commerce Research (10:2), pp. 111-137.
Oztekin, A., Nikov, A., and Zaim, S. 2009. "UWIS: An Assessment Methodology for Usability of Web-
Based Information Systems," Journal of Systems and Software (82:12), pp. 2038-2050. Also in meta-
regression.
Reinecke, K., and Bernstein, A. 2011. "Improving Performance, Perceived Usability, and Aesthetics with
Culturally Adaptive User Interfaces," ACM Transactions on Computer-Human Interaction (TOCHI)
(18:2), pp. 8-37.
Sabherwal, R., and Jeyaraj, A. 2015. "Information Technology Impacts on Firm Performance: An
Extension of Kohli and Devaraj (2003),” MIS Quarterly (39:4), pp. 809-836.
Schick, S. 2015. “Report: 39% of Consumers Delete Apps for Being 'Useless'” (available online at
http://www.fiercedeveloper.com/story/report-39-consumers-delete-apps-being-useless/2015-08-21;
accessed April 6, 2016).
Schmidt, F. L., and Hunter, J. E. 2014. Methods of Meta-analysis: Correcting Error and Bias in Research
Findings (3rd ed.), Thousand Oaks, CA: Sage Publications.
Statista. 2016. "Number of Apps Available in Leading App Stores as of June 2016" (available online at
https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/;
accessed October 9, 2016).
Tractinsky, N., Katz, A. S., and Ikar, D. 2000. "What Is Beautiful Is Usable," Interacting with Computers
(13:2), pp. 127-145. Also in meta-regression.
Tsiaousis, A. S., and Giaglis, G. M. 2010. "An Empirical Assessment of Environmental Factors That
Influence the Usability of A Mobile Website," in 2010 Ninth International Conference on Mobile
Business / 2010 Ninth Global Mobility Roundtable, Athens, Greece, pp. 161-167.
Urbaczewski, A., and Koivisto, M. 2007. "Measuring Mobile Device Usability as a Second Order Construct
in Mobile Information Systems," in Proceedings of the Americas Conference on Information Systems,
Keystone, Colorado.
Wang, J., and Senecal, S. 2007. "Measuring Perceived Website Usability," Journal of Internet Commerce
(6:4), pp. 97-112. Also in meta-regression.
Yoon, H. S., and Steege, L. M. B. 2013. "Development of a Quantitative Model of the Impact of Customers’
Personality and Perceptions on Internet Banking Use," Computers in Human Behavior (29:3), pp.
1133-1141. Only in meta-regression.
Zhang, D., and Adipat, B. 2005. "Challenges, Methodologies, and Issues in the Usability Testing of Mobile
Applications," International Journal of Human-Computer Interaction (18:3), pp. 293-308.
