INFORMATION TECHNOLOGY FOR DEVELOPMENT
2022, VOL. 28, NO. 1, 189–209
https://doi.org/10.1080/02681102.2021.1928589

M-government continuance intentions: an instrument development and validation
Thamer Alshammari (a,b), Chris Messom (b) and Yen Cheung (b)
(a) College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia; (b) Faculty of Information Technology, Monash University, Melbourne, Australia

ABSTRACT
M-government's potential benefits can help countries achieve both economic and human development. From the economic point of view, m-government enhances efficiency and reduces government spending. From citizens' point of view, m-government enhances the service quality offered to them. However, for m-government to offer sustainable development, citizens must continuously use m-government; hence, it is important to investigate the factors affecting citizens' continuance intentions. This paper builds on prior studies in information communication technologies for development by proposing a model and instrument that focuses on continuance intentions to use m-government. The proposed measures were developed and validated following rigorous guidelines for quantitative research in the field of information systems. More than 100 validators participated in the process of instrument validation. This study applies five techniques to validate the instrument. The survey items were narrowed down from 59 items to 27 items. The results show that the items have high validity and reliability.

KEYWORDS
M-government; continuance intentions; instrument development; instrument validation; reliability; pilot test

1. Introduction
M-government services have been launched in many countries worldwide to achieve sustainable
development that benefits citizens, governments, and businesses (Althunibat & Sahari, 2011). M-gov-
ernment utilizes handheld mobile devices to provide information and services to the public; its
potential benefits include, but are not limited to, improving transparency, reducing government
spending, and reaching a wider population, as well as providing instant information access and per-
sonalized services (Althunibat & Sahari, 2011; Mahmood et al., 2019). In addition, the advance of m-
government has made participatory democracy possible (Kushchu & Kuscu, 2003; Pirannejad, 2017;
Rodríguez Bolívar et al., 2016). Nonetheless, these benefits cannot be realized because of citizens’
low uptake of m-government (Al-Hujran et al., 2015).
ICT in general and m-government in particular enable sustainable development in both govern-
ment initiatives and people’s lives (Ochara & Mawela, 2015). However, sustainable development can
only be achieved when m-government is consistently used (Weerakkody et al., 2009). Therefore, by
investigating the factors that affect m-government continuance intentions, we will be able to better
understand the challenges that hinder governments worldwide from realizing sustainable develop-
ment in terms of m-government benefits.
According to Alshammari et al. (2018), quantitative methodology is the most commonly used
research method in the context of m-government. However, for a quantitative study to be reliable
and valid, the measures used in the questionnaire must be validated before collecting data; other-
wise, the results/findings may be deemed unreliable. This paper follows the guidelines of MacKenzie
et al. (2011) and Straub et al. (2004) in developing and validating the proposed instrument, with the
objective of developing an instrument that can be confidently used in future studies.
In this study, therefore, we formulate the following research question: what are the factors that
affect the continuance intentions to use m-government?
This paper first reviews the literature and next discusses the proposed model’s theoretical back-
ground. Then, it illustrates in detail the procedure for developing and validating the instrument.
After that, pilot test results are presented. Finally, a discussion and concluding remarks on the
research are presented.

2. Literature review
2.1. Definition of m-government
The term m-government has been defined by several scholars. For instance, Kushchu and Kuscu
(2003, p. 254) defined m-government as ‘a strategy and its implementation involving the utilization
of all kinds of wireless and mobile technology, services, applications and devices for improving
benefits to the parties involved in e-government including citizens, businesses and all government
units.’ Antovski and Gusev (2005) defined m-government as ‘getting public sector IT systems geared
to interoperability with citizen’s mobile devices.’ Later, Misra (2009) stated that ‘m-government is
public service delivery including transactions on mobile devices like mobile phones, pagers and per-
sonal digital assistants (PDAs).’ Although definitions differ slightly, Zefferer (2011) reported that they
almost all agree on the fact that m-government utilizes mobile technologies for the provision of
public services.

2.2. M-government services and benefits


According to Mohammed (2016), governments serve their citizens in many ways, and the main
purpose of establishing a government is the provision of public services. To enhance their services
and to reach a wider population, governments worldwide have adopted mobile technology as a new
channel for the provision of services (Ahmad & Khalid, 2017; Ochara & Mawela, 2015).
M-government can be beneficial in many fields, including education, health, transport, and secur-
ity. Kamnapp is a mobile application that allows citizens and residents in Saudi Arabia to report on
incidents such as crimes and traffic accidents (Omaier et al., 2019); by using such applications, safety
can be improved, which in turn can lead to social and human development. Moreover, some tasks for
public sector workers in the field, such as home healthcare providers and law enforcement officials,
can be done only with mobile devices (Nguyen et al., 2015). The last two examples of m-government
services show that m-government can enhance people’s lives by improving healthcare services as
well as improving the efficiency of law enforcement.
M-government, when utilized, can allow for significant development across many areas, as it
allows the accomplishment of tasks that cannot be done through the traditional (face-to-face)
method because its services are available to people on the move and can provide instant real-
time alerts. Thus, m-government improves service quality and citizen satisfaction, and ultimately
helps in achieving sustainable development.
In the field of public administration, scholars have investigated the role of ICT, including m-gov-
ernment, in encouraging citizens’ collaboration and participation, eventually enabling public value
to be co-created (Lönn & Uppström, 2015). M-government can promote transparency, reduce cor-
ruption, and alleviate the digital divide (Trimi & Sheng, 2008; Wang et al., 2020).
Besides the benefits of m-government, there are also certain limitations. These limitations are
related to the technologies associated with m-government; for instance, the wireless networks
associated with m-government are less reliable than technologies associated with, for example, e-
government (Ahmad et al., 2009). For this reason, some critical services, such as mobile voting,
have not been widely adopted (Petrauskas, 2012).

2.3. Current status of m-government


According to the United Nations (UN), 56 of 193 UN member countries provided public m-services in
2012; two years later, 95 countries provided such services (UN, 2014). Scholars have stated that many
governments worldwide have realized the potential development they can gain by providing public
services via mobile devices, and hence have implemented m-government (Glood et al., 2016).
However, some studies have shown that the current level of m-government adoption by citizens
is low in many countries (Sultana et al., 2016). For example, the World Bank (2012) reported that
about half of the world’s population uses the internet regularly, yet most fail to exploit m-govern-
ment services (Alssbaiheen & Love, 2015). Moreover, many studies have stated that only a minority
of people have adopted m-government services, regardless of the developments in m-government
infrastructure (Al-Hujran et al., 2015). Hence, for governments to realize potential development, they
must ensure that citizens adopt and consistently use m-government.
Alshammari et al. (2018) reported that most studies in the context of m-government have focused
on investigating the factors affecting the acceptance of m-government, with the Technology Accep-
tance Model (TAM) (Davis, 1989), Diffusion of Innovations (DOI) (Rogers, 1995), and the Unified
Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) being the most commonly
applied models. In their paper, they recommended applying a model that has not yet been tested in
the m-government context to provide further insights into this phenomenon.
Although studies have successfully identified the factors affecting the acceptance of m-govern-
ment, some scholars have criticized the models applied. For example, Karahanna et al. (1999)
argued that these types of models implicitly assume that acceptance automatically leads to continu-
ance of use, and the applied models cannot explain discontinuance of use after initial acceptance.
This is also highlighted by Bhattacherjee (2001), who argued that the acceptance models (e.g.
TAM, DOI, and UTAUT) ignore the fact that perceptions change after initial use and over time. More-
over, Tornatzky et al. (2011) added that although acceptance is a pre-requisite of continuance inten-
tions, aspects that influence acceptance may have no influence on continuance intentions. As a
result, to gain deeper insights into the factors that affect continuance intentions, this paper proposes
a model developed based on theories that are more relevant to the post-acceptance stage. The pro-
posed model is discussed in the following section.

3. Theoretical model
Based on the discussion above, we propose a model to investigate and understand continuance intentions, which has been developed based on the expectation-confirmation model (ECM; Bhattacherjee, 2001) (see Figure 1). The conceptual model also incorporates some aspects of the IS success model; specifically, information quality, service quality, and system quality (Delone & McLean, 2003). Trust is an external factor that has been incorporated into the model, as it has been proved to be influential regarding IS continuance (Zhang et al., 2018).
To test this model empirically, items to measure the constructs must be developed. Depending on
the items’ validity and reliability, testing this model will improve our understanding of the low-level-
of-usage phenomenon. The following subsections define the relevant factors. It is worth mentioning
that since no study has yet investigated continuance intentions for m-government, definitions of the
constructs will be adopted from studies in similar contexts.

Figure 1. Proposed model of M-government continuance intentions.

3.1. Perceived usefulness


Perceived usefulness (PU) is defined as the degree to which users believe that using a system would
improve their performance (Davis, 1989). Empirical studies have shown that perceived usefulness
influences not only IS acceptance but also its continued use (Bhattacherjee, 2001; Davis et al.,
1989). According to Bhattacherjee (2001), perceived usefulness is the strongest factor for determin-
ing users’ satisfaction. Many recent studies, in the contexts of online communities (Apostolou et al.,
2017), digital textbooks (Joo et al., 2017), mobile payment (Park et al., 2017), social networking sites
(Yin et al., 2013), and electronic learning (Ifinedo, 2017), have empirically proved that perceived use-
fulness has a positive relationship with satisfaction.

3.2. Information quality


Information quality (IQ) can be defined as the degree to which the information provided adequately
meets users’ requirements (Chang et al., 2005). Information quality concerns the accuracy, timeliness,
and completeness of such information (Delone & McLean, 2003; Horan & Schooley, 2007). The infor-
mation quality construct is adapted from the IS success model (Delone & McLean, 2003) to test the
relationship between m-government information quality and citizens’ satisfaction. In fact, many
studies have investigated the relationship between information quality and satisfaction in the con-
texts of online communities (Apostolou et al., 2017), social networking sites (Dong et al., 2014),
mobile banking (Chung & Kwon, 2009), and electronic learning (Lin, 2007), and all of these found
a significant link between information quality and satisfaction.

3.3. Service quality


Service quality (SeQ) is defined as the degree to which the way of delivering a service is excellent and
superior (Parasuraman et al., 1985). According to Bolton and Drew (1991), quality of service pertains
to reliability, responsiveness, assurance, and empathy. Service quality has been found to have a posi-
tive influence on satisfaction in various contexts. For example, Lin (2007) investigated how service
quality influences the continued use of electronic learning through satisfaction, and reported that
service quality has a significant effect on satisfaction. Another good example is a study conducted by
Kuo et al. (2009), who concluded that the quality of mobile commerce service is positively related to
users’ satisfaction.

3.4. System quality


System quality (SyQ) can be defined as the extent to which functionalities of a system can be learned
and utilized with few errors (Chen et al., 2013). System quality concerns availability, usability,
dependability, performance, and functionality (Sarrab et al., 2015). According to Delone and
McLean (2003), high system quality leads to higher satisfaction. Studies in different contexts such
as online communities (Apostolou et al., 2017), social networking sites (Dong et al., 2014), mobile
banking (Chung & Kwon, 2009), electronic learning (Lin, 2007), and mobile commerce (Gao et al.,
2015) have empirically tested the relationship between system quality and satisfaction and con-
cluded that the former has a positive relationship with the latter.

3.5. Confirmation
Confirmation (C), in the context of IS continuance use, is defined as the degree to which the perform-
ance of an IS meets users’ expectations (Assadi & Hassanein, 2010; Bhattacherjee, 2001). Confir-
mation is a cognitive belief derived from previous IS use (Bhattacherjee, 2001). When users’
expectations of an IS are confirmed, they will be satisfied and will therefore continue to use the
IS. When users' expectations are not confirmed, however, they will not be satisfied and will desist
from using the IS.
Studies have supported a positive relationship between confirmation and satisfaction. For
example, Chen et al. (2013) concluded that confirmation leads to higher satisfaction in the
context of mobile services. Another good example is a study of electronic medical records in
which a positive relationship between confirmation and satisfaction was confirmed (Mettler,
2012).

3.6. Satisfaction
The notion of satisfaction (S) was first defined as a ‘pleasurable or positive emotional state resulting
from the appraisal of one’s job’ (Locke, 1976, p. 1300). In the IS field, satisfaction is defined as ‘the
degree to which users have a positive affective orientation toward an information system; i.e. the
extent to which they feel good about it’ (Mclean, 1992, p. 342). Satisfaction is a key determinant
of continuance intentions; across various contexts, including online communities (Apostolou
et al., 2017), mobile commerce (Gao et al., 2015; Sensuse et al., 2017), social networking sites
(Dong et al., 2014) and electronic learning (Lin, 2007), satisfaction has been found to have the stron-
gest effect on continuance intentions.

3.7. Trust
Trust (T) in the IS field has several definitions. For instance, trust is the willingness to be vulnerable to
an action taken by a trusted party according to the feeling of confidence or assurance (Mayer et al.,
1995). Another definition of trust is ‘beliefs about a trustee’s integrity, competence, and benevo-
lence’ (Bernard & Makienko, 2011, p. 99). Trust is vital in ISs in general and m-government in particu-
lar because, as in the case of m-government services, users share private and important details. Trust
in an IS leads to continuance intentions; for example, in the contexts of electronic health (Zhang
et al., 2018) and social networking sites (Chen et al., 2019), trust was found to have a positive
influence on such intentions.

3.8. Continuance intentions


Continuance intention (CI), the dependent variable of the proposed model, refers to users’ intention
to continue using m-government (Bhattacherjee, 2001). As discussed earlier, satisfaction and trust
are the strongest determinants of continuance intention, which in turn is the most important indi-
cator of m-government success (Bhattacherjee, 2001). Understanding continuance intention can
improve the likelihood of achieving sustainable development.

4. Research methodology
The authors have adopted the guidelines of MacKenzie et al. (2011) and Straub et al. (2004) to inves-
tigate the factors affecting continuance intentions to use m-government. These guidelines ensure
that the developed questionnaire has a well-validated instrument, and help authors to minimize sub-
jectivity in developing and validating the instrument, in turn reducing measurement errors. The final
output of this study can be used to empirically investigate continuance intentions for m-
government.

4.1. Instrument development


The research focuses on developing the instrument in an objective manner. The authors reviewed
the literature to find and follow the best practices for instrument development. According to MacK-
enzie et al. (2011), developing an instrument that will be representative of and accurately measure
constructs should include two stages: conceptualization and operationalization. The following sub-
sections discuss the procedure for each stage.

4.1.1. Construct conceptualization


Conceptualization can be defined as providing a clear and brief conceptual definition of the con-
struct that provides a single interpretation (MacKenzie et al., 2011). According to MacKenzie et al.
(2011), to conceptualize constructs, we must review the literature on the constructs’ meaning,
describe the constructs’ characteristics, and define the constructs using unambiguous words.
Table 1 provides definitions of the constructs investigated in this study. The definitions were
modified to fit the study’s purpose.

4.1.2. Construct operationalization


Once the first stage was accomplished, we reviewed the literature to operationalize the constructs of interest. This stage is important as it starts with unobservable concepts and ends with measurable items that reflect the concepts (Colton & Covert, 2007). Items were adopted from previous studies in the IS field. Based on the reviewed literature, we identified 59 items. Modifications were made for the items to fit this study's purpose. Table 2 shows the sources of the items adopted for this study and the context of those studies.

Table 1. Definitions of constructs in the research model.

Construct | Definition | Source
PU | The degree to which users believe that using m-government would improve their performance. | Davis (1989)
IQ | The degree to which the information that m-government provides adequately meets users' requirements. | Chang et al. (2005)
SeQ | The degree to which the method of delivering an m-government service is excellent and superior. | Parasuraman et al. (1985)
SyQ | The degree to which m-government functionalities can be learned and utilized with as few errors as possible. | Chen et al. (2015)
C | The degree to which m-government service performance meets users' expectations. | Bhattacherjee (2001)
S | The degree to which users have a positive affective orientation toward m-government; i.e. the degree to which they feel good about it. | Mclean (1992)
T | The willingness to be vulnerable to an action that a trusted party takes according to feelings of confidence or assurance. | Mayer et al. (1995)
CI | Users' intention to continue using m-government. | Bhattacherjee (2001)

Table 2. Sources of the items.

Study context | Construct | Source
Online banking | Continuance intentions, satisfaction, perceived usefulness, confirmation | Bhattacherjee (2001)
Electronic mail | Perceived usefulness | Davis (1989)
Mobile commerce | Information quality, service quality, system quality, trust, satisfaction, continuance intentions | Gao et al. (2015)
Electronic learning | Perceived usefulness, information quality, service quality, system quality, confirmation, continuance intentions | Roca et al. (2006)
Augmented reality | Information quality, perceived usefulness, satisfaction, continuance intentions | Kim et al. (2016)

4.2. Instrument validation and refinement


The importance of validation has been widely recognized in IS research (Straub, 1989) because
the results from an unvalidated instrument cannot be trusted (Straub et al., 2004). Therefore, empiri-
cal studies require a high level of validation (Straub, 1989). With validated items, researchers can
enhance the measurement of the investigated constructs (Ives & Olson, 1984).
According to Straub (1989), researchers should validate their instruments, even if they were
adopted from the literature, for two reasons: first, the instrument might not have been adequately
validated, and second, in almost all cases, researchers modify these instruments to fit the purpose of
their research and, therefore, items lose their original validity and must be revalidated.
Several validity components need to be considered before actual data collection begins. The
techniques in Table 3 were carried out to achieve a high degree of construct validity, with high potential
for good reliability of results. These techniques ensure that each validity component is reviewed at
least twice.
At this point, it is important to define the validity components and how they benefit the research
(see Table 4).
The following subsections discuss the procedure and outputs for conducting each technique. It is
worth noting that the authors intentionally included validators from different backgrounds to cover
different perceptions in the validation process. Moreover, none of the validators was allowed to par-
ticipate in more than one stage. Table 5 provides an overview of validator characteristics.
Snowball sampling was used in the first four techniques (interviews, content evaluation panel, Q-
sort method, and pre-test). This technique is well known, commonly used, and the most suitable for
small studies (Black & Champion, 1976). A drawback of the snowball technique is that validators are
accessed through other validators, which may result in a biased sample (Faugier &
Sargeant, 1997). Nonetheless, given its efficiency, the application of five validation techniques, and the
impracticality of random sampling, we selected snowball sampling to ensure the inclusion of valida-
tors from different backgrounds (e.g. university professors and field experts) in the different phases
of the validation process.

Table 3. Techniques applied and the components that they enhance (N = validators/respondents).

Technique | N | Face validity | Content validity | Convergent validity | Discriminant validity | Reliability
Interviews | 5 | X | X | | |
Content evaluation panel | 10 | X | | X | X |
Q-sorting | 4 | X | | X | X |
Pre-test | 5 | X | X | X | X | X
Pilot test | 86 | X | X | X | X | X

Table 4. Definitions of validity components.

Validity component | Definition
Face validity | An instrument measures what it is supposed to measure (Nurco, 1985).
Content validity | An instrument comprehensively covers the constructs being studied (Muchinsky, 1997).
Convergent validity | Items from the same construct correlate strongly (Schoenecker & Swanson, 2002).
Discriminant validity | The extent to which items of different constructs are not correlated (Sekaran & Bougie, 2016).
Reliability | The extent to which an instrument provides consistent results with different samples (Peter, 1979).

Moreover, in an attempt to ensure the validators and respondents were aware of the meaning of
m-government, the authors textually explained the concept. The authors could have verbally
explained the concept of m-government, especially in the first four techniques (interviews,
content evaluation panel, Q-sort method, and pre-test) because of the small number of validators
in each phase; however, since the main data collection was to be conducted via a self-adminis-
tered questionnaire, it was considered that providing textual explanations during the instrument
validation process could help in detecting any errors, so they could be solved prior to the main
data collection. Below is the statement given to the validators and the respondents of the
questionnaire:
The principal aim of this study is to validate items for the factors that impact the continuance intention of mobile
government (m-government). In other words, this study validates the aspects of perceived usefulness, quality
dimensions, confirmation, satisfaction, and trust concerning the continuance intention of m-government. M-
government refers to the delivery of information and services to citizens, business, and public sectors via
mobile phones.

In this study, we started with 59 items; based on the validation process, 32 items were removed,
resulting in 27 items believed to be valid and able to provide reliable results. Table 6 shows the
32 removed items, their respective factors, and techniques via which they were removed as well
as the number of removed items for each technique.

Table 5. An overview of the validators who participated in the validation process.

Technique | Participant ID | Information
Interview | Interview_1, Interview_2 | University professors who hold PhDs in the IS field.
Interview | Interview_3, Interview_4 | Experts working in IT departments in the public sector who hold bachelor degrees in computer science.
Interview | Interview_5 | A non-IT background validator.
Content evaluation panel | Panel_1 | A university professor who holds a PhD in the IS field.
Content evaluation panel | Panel_2, Panel_3 | Experts working in IT departments in the public sector.
Content evaluation panel | Panel_4, Panel_5 | Postgraduate students (master degree) in IT.
Content evaluation panel | Panel_6, Panel_7, Panel_8 | PhD students who have published in the IS field.
Content evaluation panel | Panel_9, Panel_10 | Non-IT background validators.
Q-sort method | Q_sort_1, Q_sort_2 | Experts working in IT departments in the public sector who hold bachelor degrees in computer science and management information systems, respectively.
Q-sort method | Q_sort_3, Q_sort_4 | Non-IT background validators.
Pre-test | Pre_1, Pre_2 | PhD students who have published in the IS field.
Pre-test | Pre_3, Pre_4, Pre_5 | Non-IT background validators.
Pilot test | 86 respondents | Respondents' demographic information is presented in the pilot test section. Unlike prior techniques, in this part more data are collected (e.g. gender, age group, and level of general internet knowledge).

Table 6. Items removed by each technique and their respective factors.


Technique | No. of removed items | Factor | Removed items
Interviews 19 PU - Using m-government can improve my effectiveness in completing tasks.
- M-government is useful in my daily life.
- M-government helps improve work efficiency.
IQ - M-government provides precise information that the user needs
- M-government provides me with sufficient information.
SeQ - M-government provides prompt responses to my questions.
- M-government has visually appealing materials.
- M-governments provides prompt services.
SyQ - I could conduct my tasks on m-government at anytime, anywhere I want.
- Steps to complete a task in m-government applications follow a logic
sequence.
C - M-government can meet demands in excess of what I required for the
service.
S - I am satisfied with the performance of m-government.
- I am pleased with the experience of using m-government.
- I am satisfied with using m-government.
- I am not complaining about using m-government.
T - In general I think I can trust m-government.
- M-government applications have enough safeguards to make me feel
comfortable using them.
CI - I will use m-government on a regular basis in the future.
- I will frequently use m-government in the future.
Content evaluation 6 PU - Overall, m-government is useful in completing tasks
panel IQ - Information that is provided by m-Government is clear and understandable.
- M-government presents information in an appropriate format.
SyQ - M-government applications quickly loads all the text and graphics.
T - I am concerned that unauthorized people (e.g. hackers) have access to my
personal information in m-government databases.
- I am concerned about the security of m-government during data
transmission.
Q-sort method 6 SyQ - M-government applications are visually attractive.
C - The benefit of using m-government is better than what I expected.
- M-government applications are more user-friendly than what I expected.
S - My decision to use m-government was a wise one.
- M-government fulfills my demands.
CI - I will strongly recommend that others use it.
Pre-test 1 CI - If I could, I would like to discontinue my use of m-government (reverse
coded).
Pilot test 0 None.

The 27 items were added to the questionnaire developed in this study. The questionnaire also
consists of demographic questions such as gender and age group (see Table 7).

4.2.1. Interviews
Interviews were conducted with five validators (see Table 5). The validators were asked to go
through items one by one and provide feedback on each. This technique is believed to enhance
both face and content validities: it enhances face validity as validators are asked to report any con-
fusing items; it enhances content validity as validators are asked to suggest adding new items and
removing overlapping items for each construct.
This technique resulted in the removal of 19 items (see Table 6), most of which were overlapping.
Besides the removal of these items, minor changes were made to some items. Eventually, at the end
of this stage, 40 items remained for further validation. Overall, this technique seems to work best at
identifying and removing overlapping items, which improves the questionnaire’s overall quality by
reducing the time needed to complete it, thereby minimizing the chances for boredom (Drolet &
Morrison, 2001).

Table 7. Survey Questions.


Personal information
Question Options
What is your gender? - Male
- Female
How old are you? - 18–24
- 25–29
- 30–34
- 35–39
- 40–44
- 45 or older
What is your highest education level? - Less than high school
- High school
- Diploma
- Undergraduate degree
- Postgraduate degree
What is your current employment? - Public sector
- Private sector
- Non-profit
- Self-employed
- Retired
- Student
- Unemployed
How do you describe your general internet knowledge? - Poor
- Fair
- Good
- Very good
- Excellent
M-government continuance intention
Respondents were asked to rate the following statements on a scale of 1–5:
[1 = Strongly disagree; 2 = Disagree; 3 = Neutral; 4 = Agree; 5 = Strongly agree]
PU- Using m-government improves my performance in completing tasks.
- Using m-government increases my productivity in completing tasks.
- I find m-government to be useful for me.
IQ- M-government provides relevant information to my needs.
- M-government provides accurate information.
- M-government provides up-to-date information.
- M-government provides the information I need on time.
SeQ- M-government provides on-time services.
- M-government provides professional services.
- M-government provides personalized services.
- M-government provides modern looking interfaces.
SyQ- M-government applications are easy to use.
- M-government applications are easy to navigate.
- I can complete tasks in m-government applications anytime anywhere.
- M-government applications respond quickly even in the busiest times of the day.
C- My experience with using m-government was better than what I expected.
- The service level provided by m-government was better than what I expected.
- Overall, most of my expectations from using m-government were confirmed.
S- I feel satisfied with my overall experience of m-government.
- I feel pleased with my overall experience of m-government.
- I feel contented with my overall experience of m-government.
T- I trust m-government application to keep my personal information safe.
- I feel safe in my transactions with the m-government applications.
- M-government is trustworthy.
CI- I intend to continue using m-government rather than discontinue its use.
- I intend to continue using m-government rather than use any alternative means (face-to-face).
- If I could, I would like to continue my use of m-government.

4.2.2. Content evaluation panel


During this stage, we adopted the Lawshe method (also known as the content-validity ratio [CVR]).
Through this method, the authors asked several panel experts (10 validators were included; see
Table 5) to rate each item using a form with a list of all items and the option to rate each item as
either essential, useful but not essential, or not essential. The results were used in a formula to
calculate the CVR value, with items scoring below the minimum accepted value removed. Below is
the CVR formula established by Lawshe (1975), with ne being the number of panellists rating an item
as essential and N being the total number of validators:
 
CVR = (ne − N/2) / (N/2)
The minimum accepted value is relative, depending on the total number of validators. In this study,
10 validators were included; thus, according to Lawshe (1975), the minimum accepted value is 0.62.
By conducting this technique, face validity will improve, as it is assumed that confusing items would
not score higher than the minimum accepted value. This assumption is made mainly because any
ambiguous item would not be rated by enough validators as essential. This technique will also
improve discriminant validity, as it helps remove irrelevant items. Moreover, convergent validity
will also be improved, as only items believed to be essential by a sufficient number of validators
will be accepted.
Out of 40 items, six failed to score above the minimum accepted value of 0.62 (see Table 6). Overall,
34 items had sufficient support from the 10 validators to be considered essential. Therefore, this tech-
nique helped in improving the instrument’s validity and, potentially, the results’ reliability.
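To make this screening step concrete, the short sketch below applies Lawshe's formula and the 0.62 cut-off for ten panellists; the item labels and "essential" counts are invented for illustration and are not the panel's actual ratings.

```python
# Lawshe's content-validity ratio (CVR) screening step. The item labels and
# "essential" counts below are hypothetical; only the formula
# CVR = (ne - N/2) / (N/2) and the 0.62 cut-off for N = 10 panellists
# come from the text.
N = 10             # total number of panellists
CVR_CUTOFF = 0.62  # Lawshe's minimum accepted value for 10 panellists

essential_counts = {"PU_x": 10, "IQ_x": 9, "SeQ_x": 7, "T_x": 5}  # illustrative

def content_validity_ratio(n_essential: int, n_panellists: int) -> float:
    """CVR = (ne - N/2) / (N/2), following Lawshe (1975)."""
    half = n_panellists / 2
    return (n_essential - half) / half

for item, n_e in essential_counts.items():
    cvr = content_validity_ratio(n_e, N)
    decision = "retain" if cvr >= CVR_CUTOFF else "remove"
    print(f"{item}: CVR = {cvr:+.2f} -> {decision}")
```

In this example only the items rated essential by nine or ten of the ten panellists clear the 0.62 threshold, mirroring how items were screened out at this stage.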

4.2.3. Q-sort method


The Q-sort method was conducted to judge and enhance the items’ face, convergent, and discrimi-
nant validities. This procedure uses an iterative process (Nahm et al., 2002), across two phases. First, a
pair of experts (referred to as judges) are asked to categorize the items according to the most suit-
able construct. Second, items that have not been categorized in the right construct are either
modified or removed. This method adopts experts’ decisions on keeping, modifying, or removing
items. Table 8 shows the judges’ results for sorting the items for the first round.
According to Nahm et al. (2002), two evaluations assess item validity: first, the hit ratio measures
the percentage of items sorted correctly, and second, Cohen’s Kappa measures inter-judge agree-
ment level. The first round’s average hit ratio was 75%, while the Kappa value was 0.70. Some
items were reworded or deleted before conducting the second round with two other judges.
Table 9 shows the judges’ results for sorting the items for the second round.
During the second round, the average hit ratio increased by 22%, scoring a high hit ratio of 92%,
and the Kappa value was 0.78. This value is considered an ‘excellent agreement’ Kappa, the range for
which covers 0.76–1.00 (Landis & Koch, 1977). There was a 0.08 improvement in the Kappa value
during the second round.
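For readers who wish to reproduce these two measures, the sketch below computes the hit ratio and Cohen's Kappa for a pair of judges; the placements shown are hypothetical rather than the judges' actual sortings.

```python
from collections import Counter

# Hypothetical Q-sort placements for ten items by two judges; the real
# sortings are not reproduced here. Only the two evaluation measures
# (hit ratio and Cohen's Kappa) follow the procedure described in the text.
intended = ["PU", "PU", "IQ", "IQ", "SeQ", "SyQ", "C", "S", "T", "CI"]
judge_a  = ["PU", "PU", "IQ", "SeQ", "SeQ", "SyQ", "C", "S", "T", "CI"]
judge_b  = ["PU", "IQ", "IQ", "IQ", "SeQ", "SyQ", "C", "S", "NA", "CI"]

def hit_ratio(intended, *judges):
    """Share of all judge placements that match the intended construct."""
    placements = [(i, j) for judge in judges for i, j in zip(intended, judge)]
    return sum(i == j for i, j in placements) / len(placements)

def cohens_kappa(a, b):
    """Inter-judge agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

print(f"hit ratio = {hit_ratio(intended, judge_a, judge_b):.0%}")
print(f"Cohen's Kappa = {cohens_kappa(judge_a, judge_b):.2f}")
```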
All in all, conducting the Q-sort method resulted in the removal of six items (see Table 6), as well
as modifications to several other items. It is believed that the Q-sort method improved face validity,
as it is assumed that overly ambiguous items would not be categorized in the right constructs or
would be categorized as not applicable (NA). Conversely, items that were clear are assumed to be
sorted correctly. The method also improved convergent validity, as only items related to a construct
were expected to be categorized into that construct by two independent experts. Finally, and
importantly, the method also improved discriminant validity, as items that were sorted into the
wrong constructs were assumed to have low discriminant validity and removed.

Table 8. First Q-sorting round.

Construct 1 2 3 4 5 6 7 8 NA Total %
1 7 1 1 1 10 70
2 6 1 1 8 75
3 6 1 1 8 75
4 1 7 8 88
5 1 5 2 8 63
6 2 1 6 1 10 60
7 8 8 100
8 6 2 8 75
Total items placement=68; number of hits=51; hit ratio = 75%

Table 9. Second Q-sorting round.

Construct 1 2 3 4 5 6 7 8 NA Total %
1 6 1 1 8 75
2 8 8 100
3 7 1 8 88
4 8 8 100
5 6 6 100
6 1 6 1 8 75
7 8 8 100
8 6 6 100
Total items placement=60; number of hits=55; hit ratio = 92%

4.2.4. Pre-test
The pre-test study assesses the validity of the whole questionnaire, rather than only of the items. Five vali-
dators (see Table 5) were asked to complete the online questionnaire and provide feedback via infor-
mal interviews, which aim to allow validators to express any concerns about the questionnaire, from
spelling mistakes or wording problems to questionnaire length. The interviews also were conducted
to obtain suggestions that may improve the questionnaire’s overall quality.
Based on the feedback, one item was removed (see Table 6), and modifications were made to the
questionnaire’s introduction, the section with demographic questions, and the questionnaire’s
overall design. As a result of conducting the pre-test study, unnecessary demographic questions
were removed, the introduction length was reduced, and the questionnaire’s presentation was
improved.
Regarding the instrument’s validity and reliability, the pre-test study helped improve face validity,
as some typos were resolved. Content validity, convergent validity, and discriminant validity were
also considered during this stage, as all validators were asked whether they think all constructs
are well-covered by corresponding items, whether the items are correlated strongly to the constructs
that they measure, and whether any item can be viewed as reflecting two or more unrelated con-
structs. Validators’ answers explicitly confirmed that the instrument has high validity. They also
responded that the questionnaire is sufficiently clear and that results will have high reliability. Ulti-
mately, this stage’s output provides not only a validated instrument, but also an enhanced version of
the questionnaire, divided into three parts: introduction, demographic questions, and items from the
research model. At this point, the questionnaire is deemed ready for piloting.

4.2.5. Pilot test


Similar to the pre-test, the pilot test considers the validity and reliability of the whole questionnaire, rather
than just of the items. The pilot test's main objective is to collect preliminary results, eliminate errors,
and enhance the questionnaire’s wording, instructions, and design (Moore & Benbasat, 1991). No
agreement exists on pilot test sample size, but some scholars have suggested using a ratio of
three respondents to one item (3:1) (Cattell, 2012). Since there are 27 items, the smallest acceptable
sample would comprise 81 respondents. For the pilot test sampling, the convenience technique was
used as it is the most suitable technique for this type of data collection and the most commonly used
in behavioral research (Harvey & Land, 2016).
The authors sent the questionnaire to 150 people from different backgrounds to seek different
perceptions. They were also encouraged to provide feedback once they had completed the ques-
tionnaire. At the end, 86 complete responses were received, a 59% response rate. Table 10
provides the respondents’ demographic information.
Based on respondents’ feedback, none of the items was deleted or changed, but some changes
were made to the mobile-experience question section, as well as the order of some questions.

Table 10. Respondents’ demographic information.


Frequency Percentage
Gender Female 31 36.05
Male 55 63.95
Age 18–24 2 2.33
25–29 4 4.65
30–34 26 30.23
35–39 18 20.93
40–44 17 19.77
45 or older 19 22.09
Education level Less than high school 0 0
High school 10 11.63
Diploma 3 3.49
Bachelor 46 53.49
Master 19 22.09
PhD 8 9.30
Career Public sector 59 68.60
Private sector 18 20.93
Student 2 2.33
Self-employed 7 8.14
Retired 0 0
Unemployed 0 0
General internet knowledge Poor 0 0
Fair 0 0
Good 6 6.98
Very good 15 17.44
Excellent 65 75.58

Therefore, this paper’s output is a validated instrument of 27 items that can be used to measure
continuance intentions in future research (see Table 7). The following section presents the analysis
of data collected during this stage.

5. Analysis
The 86 responses received in the pilot test are analyzed in this section. It is worth mentioning that
feedback determined face and content validities, while factor-loading analysis and the square root of
average variance extracted were used to test convergent and discriminant validities. Finally, Cron-
bach’s alpha values and composite reliability (CR) were used to test instrument reliability. The Stat-
istical Package for the Social Sciences (SPSS), Version 25, was used to analyze collected data.

5.1. Face and content validities


Based on respondents’ feedback, it was deemed that the instrument has sufficient face and content
validities. None of the respondents reported any issue regarding the items’ clarity; in fact, many
respondents asserted that the investigated aspects were well-represented through clear items.
Moreover, deriving results with high reliability (presented later in this paper) also proves that the
items have sufficient face validity (Abbott et al., 2009).

5.2. Convergent and discriminant validities


Before conducting the factor analysis, it is essential to assess factorability by checking the cor-
relation matrix, results from the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy, and
Bartlett’s test of sphericity (Andersen & Herbertsson, 2003; Arpaci et al., 2015). Results from
testing through bivariate correlation indicate that the items have many significant correlations,
which means the data can be categorized into homogeneous sets of variables (Andersen & Her-
bertsson, 2003). The KMO result is 0.795, which exceeds the cut-off value of 0.5 (Kaiser & Rice,
1974). This result shows that the items of a construct belong together. Finally, and importantly,
results from Bartlett’s test of sphericity were found to be significant, with a p-value less than
0.001 and a chi-square value of 1,373. All these previous results indicate that the data are appro-
priate for factor analysis.
At this point, we can statistically test convergent and discriminant validities by applying principal
component analysis (PCA), which is the most appropriate method for purposes of this research (Hair
et al., 2013). We used varimax as the rotation method, following recommendations by Tabachnick
and Fidell (2000). Table 11 shows the factor-loading results.
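The paper ran these analyses in SPSS 25; as a rough open-source equivalent, the sketch below performs the factorability checks and a varimax-rotated extraction with the factor_analyzer package (an assumption, not the authors' tool), reading a hypothetical responses.csv file that stands in for the 86 × 27 pilot response matrix.

```python
# Sketch of the factorability checks and the rotated factor solution, assuming
# the open-source `factor_analyzer` package as a stand-in for the SPSS 25
# procedure reported in the paper. `responses.csv` is a hypothetical file
# holding the 86 x 27 matrix of pilot responses (one column per item, coded 1-5).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("responses.csv")  # hypothetical pilot data

# Factorability: KMO should exceed 0.5 and Bartlett's test should be significant.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi-square = {chi_square:.0f}, p = {p_value:.4f}")

# Eight varimax-rotated factors; 'principal' extraction is broadly comparable
# to the principal component analysis reported in the paper.
fa = FactorAnalyzer(n_factors=8, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(3))

# Discriminant-validity screen: flag any item loading above 0.5 on two factors.
cross_loaded = loadings[(loadings.abs() > 0.5).sum(axis=1) > 1]
print("Cross-loading items:", list(cross_loaded.index) or "none")
```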
Convergent validity is determined by how strongly items of a construct load into the same factor
(Streiner et al., 2015). Table 11 shows that PU items load strongly into the same factor, which means
that PU1, PU2, and PU3 have established convergent validity. The same applies to all other items;
namely, the IQ, SeQ, SyQ, C, S, T, and CI items.
Discriminant validity can be assessed by looking for items that load onto two or more
factors (Fornell & Larcker, 1981); Table 11 shows that there is no case of cross-loading in which an
item loads at more than 0.5 onto two or more factors. Thus, the items have very good discriminant
validity.
Another way to test convergent and discriminant validity is by determining average variance
extracted (AVE) and construct correlations.
In Table 12, diagonal elements (in bold) are the square root of AVE, while the off-diagonal
elements are the correlations among the constructs. Table 12 shows that the constructs have AVE
values ranging from 0.533 to 0.809; that is, by exceeding 0.5, all the constructs demonstrate satisfac-
tory convergent validity (Hair et al., 1998). It also shows that the square root of the AVE (diagonal
elements) values are greater than the correlations among factors (off-diagonal elements), indicating
that the constructs have adequate discriminant validity (Gefen et al., 2000).
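The AVE, CR, and Fornell–Larcker comparison summarized in Table 12 can be reproduced from standardized loadings alone; the sketch below illustrates the calculations with made-up loadings rather than the values in Table 11.

```python
import numpy as np

# Illustrative standardized loadings for one construct's three items; these are
# made-up values, not those reported in Table 11. AVE and composite reliability
# (CR) follow the standard formulas (Fornell & Larcker, 1981).
loadings = np.array([0.82, 0.78, 0.74])

ave = np.mean(loadings ** 2)  # average variance extracted
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))
print(f"AVE = {ave:.3f}, sqrt(AVE) = {np.sqrt(ave):.3f}, CR = {cr:.3f}")

# Fornell-Larcker criterion: the square root of a construct's AVE (the diagonal
# of Table 12) should exceed its correlations with all other constructs.
correlations_with_others = np.array([0.49, 0.45, 0.31])  # hypothetical
print("Discriminant validity:", np.sqrt(ave) > correlations_with_others.max())
```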

Table 11. Factor loadings.


Factor
1 2 3 4 5 6 7 8
PU1 0.735 0.119 0.013 0.237 −0.020 0.204 0.262 0.182
PU2 0.822 0.174 0.186 0.114 0.055 0.215 0.083 0.151
PU3 0.724 0.149 0.247 0.231 0.145 0.189 0.097 0.195
IQ1 0.425 0.619 0.122 −0.047 0.019 −0.080 0.237 0.183
IQ2 0.089 0.800 0.003 0.218 0.171 −0.057 0.060 0.062
IQ3 0.135 0.783 0.085 0.168 0.100 0.211 0.061 −0.019
IQ4 0.026 0.736 0.301 0.055 0.102 0.194 0.031 0.224
SeQ1 −0.006 0.335 0.573 0.234 −0.009 0.248 0.262 0.008
SeQ2 0.225 0.088 0.724 0.097 0.166 −0.221 0.115 0.019
SeQ3 0.310 0.382 0.553 0.021 0.205 0.054 0.253 0.080
SeQ4 0.083 0.034 0.696 0.261 0.025 0.197 0.081 0.288
SyQ1 0.101 0.301 0.114 0.737 −0.032 0.171 0.171 0.087
SyQ2 0.044 0.231 0.066 0.810 0.125 −0.030 0.100 0.185
SyQ3 0.274 0.099 0.179 0.649 0.188 0.228 0.095 0.049
SyQ4 0.163 −0.104 0.157 0.806 0.083 0.207 0.001 0.156
C1 0.201 0.077 0.144 −0.041 0.788 0.138 0.176 0.012
C2 −0.040 0.059 0.091 0.155 0.806 −0.055 0.173 0.163
C3 0.001 0.208 0.013 0.155 0.841 −0.051 0.068 0.111
S1 0.146 0.114 0.028 0.173 0.090 0.818 0.135 0.091
S2 0.103 0.103 0.278 0.010 0.253 0.765 0.191 0.162
S3 0.203 0.090 0.244 0.135 0.253 0.713 0.290 0.063
T1 0.111 0.134 0.008 0.167 −0.077 0.067 0.831 0.148
T2 0.180 0.055 0.022 0.160 0.088 0.165 0.844 0.118
T3 0.213 0.058 0.100 0.117 0.019 0.333 0.785 0.101
CI1 0.268 0.293 0.073 0.259 0.053 −0.008 0.266 0.716
CI2 0.112 −0.031 0.273 0.039 0.203 0.227 0.044 0.777
CI3 0.206 0.173 −0.009 0.251 0.106 0.182 0.080 0.752

Table 12. CR, AVE, and construct correlations.


CR AVE Factor PU IQ SeQ SyQ C S T CI
0.917 0.786 PU 0.887
0.876 0.64 IQ 0.445 0.8
0.816 0.533 SeQ 0.494 0.531 0.73
0.893 0.676 SyQ 0.493 0.382 0.505 0.822
0.856 0.67 C 0.286 0.282 0.311 0.23 0.818
0.927 0.809 S 0.481 0.305 0.387 0.422 0.181 0.9
0.912 0.775 T 0.466 0.373 0.54 0.38 0.432 0.478 0.88
0.891 0.733 CI 0.529 0.409 0.461 0.468 0.305 0.392 0.414 0.856

5.3. Reliability
Cronbach’s alpha is the most commonly used test of questionnaire reliability (Hair et al., 2013). It
measures internal consistency; that is, how strongly items from the same construct are related
(Pallant, 2013). It is a statistic that ranges from 0 (completely unreliable) to 1 (completely reliable).
Table 13 shows Cronbach’s alpha values. The results show that the instrument’s reliability ranges
from 0.774 to 0.882, indicating highly reliable coefficients that all exceed the 0.70 threshold
(Pallant, 2013).
Besides Cronbach’s alpha, CR is also used to scale an instrument’s reliability. Hair et al. (2013) rec-
ommended using the CR over Cronbach’s alpha, as CR takes into consideration the items’ factor load-
ings while calculating reliability, whereas Cronbach’s alpha does not, and treats all items equally
while calculating reliability. Table 12 shows that the constructs’ CR values range from 0.816 to
0.927, and therefore exceed the cut-off value of 0.70 (Fornell & Larcker, 1981).
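As a minimal illustration of the internal-consistency check, the sketch below computes Cronbach's alpha for one construct from a hypothetical response matrix; the real pilot data (86 respondents) are not reproduced here.

```python
import numpy as np

def cronbachs_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)
    """
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point responses from six respondents to the three items of one
# construct (values are illustrative only).
scores = np.array([
    [5, 5, 5],
    [4, 4, 4],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
    [5, 5, 5],
])
print(f"Cronbach's alpha = {cronbachs_alpha(scores):.3f}")
```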

Table 13. Reliability of the items.

Item Mean Standard deviation Cronbach's alpha
PU1 4.59 0.561 0.862
PU2 4.59 0.517
PU3 4.67 0.519
IQ1 4.48 0.547 0.821
IQ2 4.49 0.548
IQ3 4.52 0.525
IQ4 4.53 0.547
SeQ1 4.49 0.526 0.744
SeQ2 4.47 0.547
SeQ3 4.53 0.525
SeQ4 4.45 0.546
SyQ1 4.42 0.563 0.841
SyQ2 4.47 0.568
SyQ3 4.45 0.626
SyQ4 4.37 0.575
C1 4.40 0.492 0.816
C2 4.42 0.496
C3 4.38 0.489
S1 4.57 0.543 0.882
S2 4.60 0.559
S3 4.59 0.540
T1 4.44 0.523 0.855
T2 4.47 0.547
T3 4.40 0.538
CI1 4.59 0.561 0.817
CI2 4.52 0.547
CI3 4.56 0.586

6. Discussion
This research attempts to shed light on developing and validating a quantitative instrument in the IS
field; in particular, for m-government. The quantitative methodology is similar to scientific research
regarding the reliability and replicability of the results (Klein & Myers, 1999); however, items used in
the questionnaire must be validated before actual data collection is carried out.
It has been noted in the literature that a pilot test is the most commonly used technique to vali-
date an instrument. In fact, several studies have established instrument development and validation
guidelines that help researchers develop and validate their instruments. In this study, we developed
the instrument based on guidelines suggested by MacKenzie et al. (2011). We then followed
the guidelines of Straub et al. (2004) to validate the instrument. The latter has helped enormously
in validating the instrument; Straub et al.’s guidelines include several techniques in addition to
the pilot test.
We started with 59 items (see Table 6); based on the validation process, 32 items were
removed, leaving 27 items believed to be valid and able to provide reliable results. In this
study, the authors decided to minimize subjectivity in selecting the items. Therefore, a set of
techniques was applied, and over 100 validators and respondents from different backgrounds
were included in the validation process. It was assumed that interviews are the best technique
to identify overlapping items because they do not entail as much workload as other techniques,
which involve rating, sorting, or answering the instrument. Thus, validators in the interview stage
can focus fully on the items themselves and, as a result, can easily remember whether they have
read two overlapping items. For this reason, the authors started with this technique, which
resulted in the removal of 19 items, mostly because of overlapping issues. As mentioned
earlier, Table 6 shows that no items were removed during the pilot test stage because the
items had already been validated. Table 6 also shows that, apart from the Q-sort method, the
numbers of items removed during each stage decreased, which is expected because, after
each stage, the chances of identifying issues with items decrease.
The results of this study, however, do not allow us to decide whether other techniques, such as
the Q-sort method and the content evaluation panel, can provide the benefits of pilot testing and
lessen the need to use it. However, we found that, compared with other studies that only applied
pilot tests to identify lack of validity and/or reliability in some of their items (Akter et al., 2013;
Beyah et al., 2003; Ng et al., 2015), in this study, the pilot test results confirmed that the items
were already valid and reliable. This can be interpreted as implying that the techniques applied
prior to the pilot test improved validity and reliability to a degree at which the pilot test results
could only confirm that item validities are satisfactory. This assumption opens a discussion as to
whether these techniques can replace the most commonly practiced technique (the pilot test)
applied to validate instruments in the IS field. Since little research has been done on the validation
of instruments in the IS field, it is worth replicating this type of research; researchers could then
identify each technique’s strengths and drawbacks.

7. Conclusion
This study provides holistic guidelines for developing and validating items that measure continuance
intentions, which have not been previously investigated in the m-government context. The theoreti-
cal model combines the ECM and three factors from the IS success model (namely, information
quality, service quality, and system quality) as well as the external factor trust. This study recognizes
the importance of continuance intentions for achieving sustainable development. Therefore, it is
essential for governments that have launched m-government to understand the factors affecting
continuance intentions to achieve sustainable development. As a result, items were developed to
measure aspects that have been proven to influence continuance intentions to use new
technologies.
In addition to identifying this gap in the literature, this study makes two important contributions.
First, it provides a step-by-step explanation as to how items can be, with less subjectivity, developed
and validated. It explains in detail how the applied techniques enhance item validity and reliability.
These guidelines can be of great benefit to researchers interested in conducting quantitative studies.
Second, this study provides validated questionnaire items that can be used to investigate continu-
ance intentions for m-government.
This study suggests three directions for future research. First, the results are based on a small
sample (86 respondents), so a larger sample would enhance the ability to test the significance of
the factors’ influence and generalize the findings to a wider population. Second, this paper
follows the guidelines of MacKenzie et al. (2011) and Straub et al. (2004); to further improve the
process of instrument development and validation, other techniques could be applied. Third, the
proposed model investigates continuance intentions in the context of m-government based on
various aspects, such as PU, IQ, SeQ, SyQ, C, S, T, and CI. It would also be useful to explore other
aspects' roles in continuance intentions. Such studies can draw useful information from this work that
may be beneficial to policy makers who seek to achieve sustainable development by utilizing
mobile technology.

Disclosure statement
No potential conflict of interest was reported by the authors.

Notes on contributors
Thamer Alshammari is a lecturer in the College of Computing and Informatics, Saudi Electronic University. His PhD
research focused on investigating the factors affecting continuance intention of m-government. He has also published
an article on usability testing methods. He has presented at three international and national conferences.
Dr Chris Messom is the Director of Graduate Programs at the Faculty of Information Technology, Monash University. He has over 100 peer-reviewed publications in journals, conferences, and book chapters. His research interests and publications are in the fields of intelligent systems, educational data mining, high-performance computing, and the adoption of new technologies.
Dr Yen Cheung is a senior lecturer at the Faculty of Information Technology, Monash University. She has published peer-reviewed journal articles and conference papers in several fields, including electronic government, electronic commerce, and artificial intelligence. She has also supervised several PhD students.

References
Abbott, J. H., Flynn, T. W., Fritz, J. M., Hing, W. A., Reid, D., & Whitman, J. M. (2009). Manual physical assessment of spinal
segmental motion: Intent and validity. Manual Therapy, 14(1), 36–44. https://doi.org/10.1016/j.math.2007.09.011
Ahmad, S. Z., & Khalid, K. (2017). The adoption of M-government services from the user’s perspectives: Empirical evi-
dence from the United Arab Emirates. International Journal of Information Management, 37(5), 367–379. https://
doi.org/10.1016/j.ijinfomgt.2017.03.008
Ahmad, T., Hu, J., & Han, S. (2009, October 19–21). An efficient mobile voting system security scheme based on elliptic curve
cryptography. 3rd international conference on Network and System Security, Gold Coast, Australia.
Akter, S., D’Ambra, J., & Ray, P. (2013). Development and validation of an instrument to measure user
perceived service quality of mHealth. Information & Management, 50(4), 181–195. https://doi.org/10.1016/j.im.
2013.03.001
Al-Hujran, O., Al-Debei, M. M., Chatfield, A., & Migdadi, M. (2015). The imperative of influencing citizen attitude toward e-
government adoption and use. Computers in Human Behavior, 53, 189–203. https://doi.org/10.1016/j.chb.2015.06.
025
Alshammari, T., Cheung, Y., & Messom, C. (2018, December 3–5). M-government adoption research trends: A systematic
review. 10th Australasian Conference on Information Systems, Sydney, Australia.
Alssbaiheen, A., & Love, S. (2015). The opportunities and challenges associated with M-government as an E-government
platform in KSA: A literature review. International Journal of Management & Business Studies, 5(2), 31–38. https://pdfs.
semanticscholar.org/6b99/644fe01a4c3324ff0af81be641861c77ac52.pdf
Althunibat, A., & Sahari, N. (2011). Modelling the factors that influence mobile government services acceptance. African
Journal of Business Management, 5(34), 13030–13043. https://doi.org/10.5897/AJBM11.2083
Andersen, T. M., & Herbertsson, T. T. (2003). Measuring globalization. Institute for the Study of Labor (IZA).
Antovski, L., & Gusev, M. (2005). M-government framework. Euro mGov.
Apostolou, B., Bélanger, F., & Schaupp, L. C. (2017). Online communities: Satisfaction and continued use intention.
Information Research, 22(4), 1–27. http://informationr.net/ir/22-4/paper774.html
Arpaci, I., Kilicer, K., & Bardakci, S. (2015). Effects of security and privacy concerns on educational use of cloud services.
Computers in Human Behavior, 45, 93–98. https://doi.org/10.1016/j.chb.2014.11.075
Assadi, V., & Hassanein, K. (2010, June 7–9). Continuance intention to use high maintenance information systems: The role
of perceived maintenance effort. 18th European conference on Information Systems, Pretoria, South Africa.
Bernard, E. K., & Makienko, I. (2011). The effects of information privacy and online shopping experience in e-commerce.
Academy of Marketing Studies Journal, 15, 97–112. http://go.galegroup.com.ezproxy.lib.monash.edu.au/ps/i.do?&id=
GALE|A272168400&v=2.1&u=monash&it=r&p=ITOF&sw=w
Beyah, G., Xu, P., Woo, H., Mohan, K., & Straub, D. (2003, August 4-6). Development of an instrument to study the use of
recommendation systems. 9th Americas conference on Information Systems, Tampa, USA.
Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS
Quarterly, 25(3), 351–370. https://doi.org/10.2307/3250921
Black, J. A., & Champion, D. J. (1976). Methods and issues in social research. John Wiley & Sons.
Bolton, R. N., & Drew, J. H. (1991). A multistage model of customers’ assessments of service quality and value. Journal of
Consumer Research, 17(4), 375–384. https://doi.org/10.1086/208564
Cattell, R. (2012). The scientific use of factor analysis in behavioral and life sciences. Springer Science & Business Media.
Chang, I.-C., Li, Y.-C., Hung, W.-F., & Hwang, H.-G. (2005). An empirical study on the impact of quality antecedents on tax
payers’ acceptance of Internet tax-filing systems. Government Information Quarterly, 22(3), 389–410. https://doi.org/
10.1016/j.giq.2005.05.002
Chen, J. V., Elakhdary, M. A., & Ha, Q.-A. (2019). The continuance use of social network sites for political participation:
Evidences from Arab countries. Journal of Global Information Technology Management, 22(3), 156–178. https://doi.
org/10.1080/1097198X.2019.1642021
Chen, J. V., Jubilado, R. J. M., Capistrano, E. P.S., & Yen, D. C. (2015). Factors affecting online tax filing – An application of
the IS Success Model and trust theory. Computers in Human Behavior, 43, 251–262. https://doi.org/10.1016/j.chb.
2014.11.017
Chen, S.-C., Liu, M.-L., & Lin, C.-P. (2013). Integrating technology readiness into the expectation–confirmation model: An
empirical study of mobile services. Cyberpsychology, Behavior, and Social Networking, 16(8), 604–612. https://doi.org/
10.1089/cyber.2012.0606
Chung, N., & Kwon, S. J. (2009). Effect of trust level on mobile banking satisfaction: A multi-group analysis of information
system success instruments. Behaviour & Information Technology, 28(6), 549–562. https://doi.org/10.1080/
01449290802506562
Colton, D., & Covert, R. W. (2007). Designing and constructing instruments for social research and evaluation. John Wiley &
Sons.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS
Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theor-
etical models. Management Science, 35(8), 982–1003. http://www.jstor.org/stable/2632151
Delone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update.
Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Dong, T.-P., Cheng, N.-C., & Wu, Y.-C. J. (2014). A study of the social networking website service in digital content indus-
tries: The Facebook case in Taiwan. Computers in Human Behavior, 30, 708–714. https://doi.org/10.1016/j.chb.2013.
07.037
Drolet, A. L., & Morrison, D. G. (2001). Do we really need multiple-item measures in service research? Journal of Service
Research, 3(3), 196–204. https://doi.org/10.1177/109467050133001
Faugier, J., & Sargeant, M. (1997). Sampling hard to reach populations. Journal of Advanced Nursing, 26(4), 790–797.
https://doi.org/10.1046/j.1365-2648.1997.00371.x
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement
error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
Gao, L., Waechter, K. A., & Bai, X. (2015). Understanding consumers’ continuance intention towards mobile purchase: A
theoretical framework and empirical study–A case of China. Computers in Human Behavior, 53, 249–262. https://doi.
org/10.1016/j.chb.2015.07.014
Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research
practice. Communications of the Association for Information Systems, 4(1), 7. https://doi.org/10.17705/1CAIS.00407
Glood, S. H., Osman, W. R. S., & Nadzir, M. M. (2016). The effect of civil conflicts and net benefits on M-government
success of developing countries: A case study of Iraq. Journal of Theoretical & Applied Information Technology, 88
(3), 541–552. http://www.jatit.org/volumes/Vol88No3/21Vol88No3.pdf
Hair, J., Black, W. C., Babin, B. J., & Anderson, R. E. (2013). Multivariate data analysis: International version (7th ed.). Pearson
Education Limited.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis (5th ed.). Prentice Hall.
Harvey, M., & Land, L. (2016). Research methods for nurses and midwives: Theory and practice. SAGE.
Horan, T. A., & Schooley, B. L. (2007). Time-critical information services. Communications of the ACM, 50(3), 73–78. https://
doi.org/10.1145/1226736.1226738
Ifinedo, P. (2017). Students’ perceived impact of learning and satisfaction with blogs. The International Journal of
Information and Learning Technology, 34(4), 322–337. https://doi.org/10.1108/IJILT-12-2016-0059
Ives, B., & Olson, M. H. (1984). User involvement and MIS success: A review of research. Management Science, 30(5), 586–
603. https://doi.org/10.1287/mnsc.30.5.586
Joo, Y. J., Park, S., & Shin, E. K. (2017). Students’ expectation, satisfaction, and continuance intention to use digital text-
books. Computers in Human Behavior, 69, 83–90. https://doi.org/10.1016/j.chb.2016.12.025
Kaiser, H. F., & Rice, J. (1974). Little jiffy, mark IV. Educational and Psychological Measurement, 34(1), 111–117. https://doi.
org/10.1177/001316447403400115
Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information technology adoption across time: A cross-sectional
comparison of pre-adoption and post-adoption beliefs. MIS Quarterly, 23(2), 183–213. https://doi.org/10.2307/
249751
Kim, K., Hwang, J., Zo, H., & Lee, H. (2016). Understanding users’ continuance intention toward smartphone augmented
reality applications. Information Development, 32(2), 161–174. https://doi.org/10.1177/0266666914535119
Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in infor-
mation systems. MIS Quarterly, 23(1), 67–94. https://doi.org/10.2307/249410
Kuo, Y.-F., Wu, C.-M., & Deng, W.-J. (2009). The relationships among service quality, perceived value, customer satisfac-
tion, and post-purchase intention in mobile value-added services. Computers in Human Behavior, 25(4), 887–896.
https://doi.org/10.1016/j.chb.2009.03.003
Kushchu, I., & Kuscu, H. (2003, July 3–4). From E-government to M-government: Facing the inevitable. 3rd European con-
ference on e-Government, Dublin, Ireland.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–
174. https://doi.org/10.2307/2529310
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. https://doi.org/
10.1111/j.1744-6570.1975.tb01393.x
Lin, H.-F. (2007). Measuring online learning systems success: Applying the updated DeLone and McLean model.
Cyberpsychology & Behavior, 10(6), 817–820. https://doi.org/10.1089/cpb.2007.9948
Locke, E. A. (1976). The nature and causes of job satisfaction. In M. D. Dunnette (Ed.), Handbook of industrial and organ-
izational psychology (pp. 1297–1343). Rand McNally College Pub. Co.
Lönn, C.-M., & Uppström, E. (2015, August 13–15). Core aspects for value co-creation in public sector. Twenty-first
Americas Conference on Information Systems (AMCIS 2015), Puerto Rico.
MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS
and behavioral research: Integrating new and existing techniques. MIS Quarterly, 35(2), 293–334. https://doi.org/10.
2307/23044045
Mahmood, M., Weerakkody, V., & Chen, W. (2019). The influence of transformed government on citizen trust: Insights
from Bahrain. Information Technology for Development, 25(2), 275–303. https://doi.org/10.1080/02681102.2018.
1451980
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of
Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
McLean, E. R. (1992). Promoting information system success: The respective roles of user participation and user involve-
ment. Journal of Information Technology Management, 3(1), 1–12. http://jitm.ubalt.edu/III-1/JITM%20Vol%20III%
20No.1-1.pdf
Mettler, T. (2012, December 16–19). Post-acceptance of electronic medical records: Evidence from a longitudinal field study.
33rd international conference on Information Systems, Orlando, USA.
Misra, D. (2009, October 8–10). Make M-government an integral part of e-government: An agenda for action. Proceedings
of TRAI conference on Mobile Applications for Inclusive Growth and Sustainable Development, New Delhi, India.
Mohammed, A. K. (2016). An assessment of the impact of local government fragmentation in Ghana. Public Organization
Review, 16(1), 117–138. https://doi.org/10.1007/s11115-014-0299-2
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an infor-
mation technology innovation. Information Systems Research, 2(3), 192–222. https://doi.org/10.1287/isre.2.3.192
Muchinsky, P. M. (1997). Psychology applied to work: An introduction to industrial and organizational psychology. Brooks/
Cole Publishing Co.
Nahm, A. Y., Rao, S. S., Solis-Galvan, L. E., & Ragu-Nathan, T. (2002). The Q-sort method: Assessing reliability and construct
validity of questionnaire items at a pre-testing stage. Journal of Modern Applied Statistical Methods, 1(1), 114–125.
https://doi.org/10.22237/jmasm/1020255360
Ng, S.-N., Matanjun, D., D’Souza, U., & Alfred, R. (2015). Understanding pharmacists’ intention to use medical apps.
Electronic Journal of Health Informatics, 9(1), 1–17. https://doi.org/10.1.1.981.7832
Nguyen, T., Goyal, A., Manicka, S., Nadzri, M. H. M., Perepa, B., Singh, S., & Tennenbaum, J. (2015). IBM mobile first in action
for mGovernment and citizen mobile services. International Business Machines Corporation.
Nurco, D. N. (1985). A discussion of validity. In B. A. Rouse, N. Kozel, & L. Richards (Eds.), Self-report methods of drug use:
Meeting current challenges to validity (Vol. 57, pp. 4–11). NIDA Research Monograph: Citeseer.
Ochara, N. M., & Mawela, T. (2015). Enabling social sustainability of e-participation through mobile technology.
Information Technology for Development, 21(2), 205–228. https://doi.org/10.1080/02681102.2013.833888
Omaier, H. T., Alharbi, A. Z., Alotaibi, M. F., & Ibrahim, D. M. (2019). Comparative study between emergency response
mobile applications. International Journal of Computer Science and Information Security (IJCSIS), 17(2). https://www.
researchgate.net/profile/Dina_Hussein6/publication/332246006_Comparative_Study_between_Emergency_
Response_Mobile_Applications/links/5ca8c217299bf118c4b6c904/Comparative-Study-between-Emergency-
Response-Mobile-Applications.pdf
Pallant, J. (2013). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (5th ed.). McGraw-Hill
Education (UK).
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future
research. Journal of Marketing, 49(4), 41–50. https://doi.org/10.1177/002224298504900403
Park, M., Jun, J., & Park, H. (2017). Understanding mobile payment service continuous use intention: An expectation-
confirmation model and inertia. Quality Innovation Prosperity, 21(3), 78–94. https://doi.org/10.12776/qip.v21i3.983
Peter, J. P. (1979). Reliability: A review of psychometric basics and recent marketing practices. Journal of Marketing
Research, 16(1), 6–17. https://doi.org/10.1177/002224377901600102
Petrauskas, R. (2012). E-democracy projects in the regions of Lithuania: Evaluation aspects. Social Technologies, 2(2),
404–419. https://core.ac.uk/download/pdf/25752897.pdf
Pirannejad, A. (2017). Can the internet promote democracy? A cross-country study based on dynamic panel data
models. Information Technology for Development, 23(2), 281–295. https://doi.org/10.1080/02681102.2017.1289889
Roca, J. C., Chiu, C.-M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the
technology acceptance model. International Journal of Human-Computer Studies, 64(8), 683–696. https://doi.org/
10.1016/j.ijhcs.2006.01.003
Rodríguez Bolívar, M. P., Alcaide Muñoz, L., & López Hernández, A. M. (2016). Scientometric study of the progress and
development of e-government research during the period 2000–2012. Information Technology for Development, 22
(1), 36–74. https://doi.org/10.1080/02681102.2014.927340
Rogers, E. (1995). Diffusion of innovations (4th ed.). The Free Press.
Sarrab, M., Al-Shihi, H., & Al-Manthari, B. (2015). System quality characteristics for selecting mobile learning applications.
Turkish Online Journal of Distance Education, 16(4), 18–27. https://doi.org/10.17718/tojde.83031
Schoenecker, T., & Swanson, L. (2002). Indicators of firm technological capability: Validity and performance implications.
IEEE Transactions on Engineering Management, 49(1), 36–44. https://doi.org/10.1109/17.985746
Sekaran, U., & Bougie, R. (2016). Research methods for business: A skill building approach (7th ed.). John Wiley & Sons.
Sensuse, D. I., Handoyo, I. T., Fitriani, W. R., Ramadhan, A., & Rahayu, P. (2017, September 20-21). Understanding continu-
ance intention to use mobile commerce: A case of urban transportation service. International conference on ICT For
Smart Society (ICISS), Tangerang, Indonesia.
Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the
Association for Information Systems, 13(1), 380–427. https://doi.org/10.17705/1CAIS.01324
Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2), 147–169. https://doi.org/10.2307/
248922
Streiner, D. L., Norman, G. R., & Cairney, J. (2015). Health measurement scales: A practical guide to their development and
use (5th ed.). Oxford University Press.
Sultana, M. R., Ahlan, A. R., & Habibullah, M. (2016). A comprehensive adoption model of M-government services among
citizens in developing countries. Journal of Theoretical & Applied Information Technology, 90(1), 49–60. http://www.
jatit.org/volumes/Vol90No1/6Vol90No1.pdf
Tabachnick, B. G., & Fidell, L. S. (2000). Computer-assisted research design and analysis. Allyn and Bacon.
Tornatzky, P., Exarchou, M., Vlachopoulou, M., & Zarmpou, T. (2011, March 10–13). Electronic and mobile G2C services:
Greek municipalities’ and citizens’ perspectives. IADIS International Conference on e-Society, Berlin, Germany.
Trimi, S., & Sheng, H. (2008). Emerging trends in M-government. Communications of the ACM, 51(5), 53–58. https://doi.
org/10.1145/1342327.1342338
UN. (2014). United Nations E-Government survey 2014 - E-government for the future we want. United Nations. https://
publicadministration.un.org/egovkb/portals/egovkb/documents/un/2014-survey/e-gov_complete_survey-2014.pdf
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a
unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Wang, C., Teo, T. S., & Liu, L. (2020). Perceived value and continuance intention in mobile government service in China.
Telematics and Informatics, 48, 101348. https://doi.org/10.1016/j.tele.2020.101348
Weerakkody, V., Dwivedi, Y. K., & Kurunananda, A. (2009). Implementing e-government in Sri Lanka: Lessons from the
UK. Information Technology for Development, 15(3), 171–192. https://doi.org/10.1002/itdj.20122
World Bank. (2012). World development indicators 2012. World Bank Publications.
Yin, G., Zhu, L., & Cheng, X. (2013). Continuance usage of localized social networking services: A conceptual model and
lessons from China. Journal of Global Information Technology Management, 16(3), 7–30. https://doi.org/10.1080/
1097198X.2013.10845640
Zefferer, T. (2011). Mobile government: E-government for mobile societies. Stocktaking of current trends and initiatives.
Secure Information Technology Center.
Zhang, X., Yan, X., Cao, X., Sun, Y., Chen, H., & She, J. (2018). The role of perceived e-health literacy in users’ continuance
intention to use mobile healthcare applications: An exploratory empirical study in China. Information Technology for
Development, 24(2), 198–223. https://doi.org/10.1080/02681102.2017.1283286
