To cite this article: Thamer Alshammari, Chris Messom & Yen Cheung (2022) M-government
continuance intentions: an instrument development and validation, Information Technology for
Development, 28:1, 189-209, DOI: 10.1080/02681102.2021.1928589
ABSTRACT
M-government's potential benefits can help countries achieve both economic and human development. From the economic point of view, m-government enhances efficiency and reduces government spending. From citizens' point of view, m-government enhances the service quality offered to them. However, for m-government to offer sustainable development, citizens must continuously use m-government; hence, it is important to investigate the factors affecting citizens' continuance intentions. This paper builds on prior studies in information communication technologies for development by proposing a model and instrument that focuses on continuance intentions to use m-government. The proposed measures were developed and validated following rigorous guidelines for quantitative research in the field of information systems. More than 100 validators participated in the process of instrument validation. This study applies five techniques to validate the instrument. The survey items were narrowed down from 59 items to 27 items. The results show that the items have high validity and reliability.

KEYWORDS
M-government; continuance intentions; instrument development; instrument validation; reliability; pilot test
1. Introduction
M-government services have been launched in many countries worldwide to achieve sustainable
development that benefits citizens, governments, and businesses (Althunibat & Sahari, 2011). M-gov-
ernment utilizes handheld mobile devices to provide information and services to the public; its
potential benefits include, but are not limited to, improving transparency, reducing government
spending, and reaching a wider population, as well as providing instant information access and per-
sonalized services (Althunibat & Sahari, 2011; Mahmood et al., 2019). In addition, the advance of m-
government has made participatory democracy possible (Kushchu & Kuscu, 2003; Pirannejad, 2017;
Rodríguez Bolívar et al., 2016). Nonetheless, these benefits cannot be realized when citizens' uptake of m-government is low (Al-Hujran et al., 2015).
ICT in general and m-government in particular enable sustainable development in both govern-
ment initiatives and people’s lives (Ochara & Mawela, 2015). However, sustainable development can
only be achieved when m-government is consistently used (Weerakkody et al., 2009). Therefore, by
investigating the factors that affect m-government continuance intentions, we will be able to better
CONTACT Thamer Alshammari T.Alshammari@seu.edu.sa 4552 Prince Mohammed Ibn Salman Ibn Abdulaziz Rd, Ar Rabi,
Riyadh 13316 6867, Saudi Arabia
Erran Carmel is the accepting Editor for this manuscript.
This article has been republished with minor changes. These changes do not impact the academic content of the article.
© 2021 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://
creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the
original work is properly cited, and is not altered, transformed, or built upon in any way.
190 T. ALSHAMMARI ET AL.
understand the challenges that hinder governments worldwide from realizing sustainable develop-
ment in terms of m-government benefits.
According to Alshammari et al. (2018), quantitative methodology is the most commonly used
research method in the context of m-government. However, for a quantitative study to be reliable
and valid, the measures used in the questionnaire must be validated before collecting data; other-
wise, the results/findings may be deemed unreliable. This paper follows the guidelines of MacKenzie
et al. (2011) and Straub et al. (2004) in developing and validating the proposed instrument, with the
objective of developing an instrument that can be confidently used in future studies.
In this study, therefore, we formulate the following research question: what are the factors that
affect the continuance intentions to use m-government?
This paper first reviews the literature and next discusses the proposed model’s theoretical back-
ground. Then, it illustrates in detail the procedure for developing and validating the instrument.
After that, pilot test results are presented. Finally, a discussion and concluding remarks on the
research are presented.
2. Literature review
2.1. Definition of m-government
The term m-government has been defined by several scholars. For instance, Kushchu and Kuscu
(2003, p. 254) defined m-government as ‘a strategy and its implementation involving the utilization
of all kinds of wireless and mobile technology, services, applications and devices for improving
benefits to the parties involved in e-government including citizens, businesses and all government
units.’ Antovski and Gusev (2005) defined m-government as ‘getting public sector IT systems geared
to interoperability with citizen’s mobile devices.’ Later, Misra (2009) stated that ‘m-government is
public service delivery including transactions on mobile devices like mobile phones, pagers and per-
sonal digital assistants (PDAs).’ Although definitions differ slightly, Zefferer (2011) reported that they
almost all agree on the fact that m-government utilizes mobile technologies for the provision of
public services.
to be co-created (Lönn & Uppström, 2015). M-government can promote transparency, reduce cor-
ruption, and alleviate the digital divide (Trimi & Sheng, 2008; Wang et al., 2020).
Besides the benefits of m-government, there are also certain limitations. These limitations are
related to the technologies associated with m-government; for instance, the wireless networks
associated with m-government are less reliable than technologies associated with, for example, e-
government (Ahmad et al., 2009). For this reason, some critical services, such as mobile voting,
have not been widely adopted (Petrauskas, 2012).
3. Theoretical model
Based on the discussion above, we propose a model to investigate and understand continuance inten-
tions, which has been developed based on the ECM (see Figure 1). The conceptual model also incor-
porates some aspects of the IS success model; specifically, information quality, service quality, and
system quality (Delone & McLean, 2003). Trust is an external factor that has been incorporated into
the model, as it has been proved to be an influential factor regarding IS continuance (Zhang et al., 2018).
To test this model empirically, items to measure the constructs must be developed. Depending on
the items’ validity and reliability, testing this model will improve our understanding of the low-level-
of-usage phenomenon. The following subsections define the relevant factors. It is worth mentioning
that since no study has yet investigated continuance intentions for m-government, definitions of the
constructs will be adopted from studies in similar contexts.
service quality has a significant effect on satisfaction. Another good example is a study conducted by
Kuo et al. (2009), who concluded that the quality of mobile commerce service is positively related to
users’ satisfaction.
3.5. Confirmation
Confirmation (C), in the context of IS continuance use, is defined as the degree to which the perform-
ance of an IS meets users’ expectations (Assadi & Hassanein, 2010; Bhattacherjee, 2001). Confir-
mation is a cognitive belief derived from previous IS use (Bhattacherjee, 2001). When users’
expectations of an IS are confirmed, they will be satisfied and will therefore continue to use the
IS. When users’ expectations are not confirmed, however, they will not be satisfied and will desist from using the IS.
Studies have supported a positive relationship between confirmation and satisfaction. For
example, Chen et al. (2013) concluded that confirmation leads to higher satisfaction in the
context of mobile services. Another good example is a study of electronic medical records in
which a positive relationship between confirmation and satisfaction was confirmed (Mettler,
2012).
3.6. Satisfaction
The notion of satisfaction (S) was first defined as a ‘pleasurable or positive emotional state resulting
from the appraisal of one’s job’ (Locke, 1976, p. 1300). In the IS field, satisfaction is defined as ‘the
degree to which users have a positive affective orientation toward an information system; i.e. the
extent to which they feel good about it’ (Mclean, 1992, p. 342). Satisfaction is a key determinant
of continuance intentions; across various contexts, including online communities (Apostolou
et al., 2017), mobile commerce (Gao et al., 2015; Sensuse et al., 2017), social networking sites
(Dong et al., 2014) and electronic learning (Lin, 2007), satisfaction has been found to have the stron-
gest effect on continuance intentions.
3.7. Trust
Trust (T) in the IS field has several definitions. For instance, trust is the willingness to be vulnerable to an action taken by a trusted party, based on a feeling of confidence or assurance (Mayer et al.,
1995). Another definition of trust is ‘beliefs about a trustee’s integrity, competence, and benevo-
lence’ (Bernard & Makienko, 2011, p. 99). Trust is vital in ISs in general and m-government in particu-
lar because, as in the case of m-government services, users share private and important details. Trust
in an IS leads to continuance intentions; for example, in the contexts of electronic health (Zhang
et al., 2018) and social networking sites (Chen et al., 2019), trust was found to have a positive
influence on such intentions.
4. Research methodology
The authors have adopted the guidelines of MacKenzie et al. (2011) and Straub et al. (2004) to investigate the factors affecting continuance intentions to use m-government. These guidelines ensure that the developed questionnaire rests on a well-validated instrument, and help the authors minimize subjectivity in developing and validating the instrument, in turn reducing measurement error. The final output of this study can be used to empirically investigate continuance intentions for m-government.
items that reflect the concepts (Colton & Covert, 2007). Items were adopted from previous studies in
the IS field. Based on the reviewed literature, we identified 59 items. Modifications were made for the
items to fit this study’s purpose. Table 2 shows the sources of the items adopted for this study and
the context of those studies.
Table 3. Techniques applied and the components that they enhance (N = validators/respondents).

| Technique                | N  | Face validity | Content validity | Convergent validity | Discriminant validity | Reliability |
| Interviews               | 5  | X             | X                |                     |                       |             |
| Content evaluation panel | 10 | X             |                  | X                   | X                     |             |
| Q-sorting                | 4  |               |                  | X                   | X                     | X           |
| Pre-test                 | 5  | X             | X                | X                   | X                     | X           |
| Pilot test               | 86 | X             | X                | X                   | X                     | X           |
Moreover, in an attempt to ensure that the validators and respondents were aware of the meaning of m-government, the authors explained the concept textually. The authors could have verbally
explained the concept of m-government, especially in the first four techniques (interviews,
content evaluation panel, Q-sort method, and pre-test) because of the small number of validators
in each phase; however, since the main data collection was to be conducted via a self-adminis-
tered questionnaire, it was considered that providing textual explanations during the instrument
validation process could help in detecting any errors, so they could be solved prior to the main
data collection. Below is the statement given to the validators and the respondents of the
questionnaire:
The principal aim of this study is to validate items for the factors that impact the continuance intention of mobile
government (m-government). In other words, this study validates the aspects of perceived usefulness, quality
dimensions, confirmation, satisfaction, and trust concerning the continuance intention of m-government. M-government refers to the delivery of information and services to citizens, businesses, and public sectors via
mobile phones.
In this study, we started with 59 items; based on the validation process, 32 items were removed,
resulting in 27 items believed to be valid and able to provide reliable results. Table 6 shows the
32 removed items, their respective factors, and techniques via which they were removed as well
as the number of removed items for each technique.
The 27 items were added to the questionnaire developed in this study. The questionnaire also
consists of demographic questions such as gender and age group (see Table 7).
4.2.1. Interviews
Interviews were conducted with five validators (see Table 5). The validators were asked to go
through items one by one and provide feedback on each. This technique is believed to enhance
both face and content validities: it enhances face validity as validators are asked to report any con-
fusing items; it enhances content validity as validators are asked to suggest adding new items and
removing overlapping items for each construct.
This technique resulted in the removal of 19 items (see Table 6), most of which were overlapping. Besides the removal of these items, minor changes were made to some items. Eventually, at the end
of this stage, 40 items remained for further validation. Overall, this technique seems to work best at
identifying and removing overlapping items, which improves the questionnaire’s overall quality by
reducing the time needed to complete it, thereby minimizing the chances for boredom (Drolet &
Morrison, 2001).
calculate the CVR value, with items scoring below the minimum accepted value removed. Below is
the CVR formula established by Lawshe (1975), with ne being the number of panellists rating an item
as essential and N being the total number of validators:
CVR = (ne − N/2) / (N/2)
The minimum accepted value is relative, depending on the total number of validators. In this study,
10 validators were included; thus, according to Lawshe (1975), the minimum accepted value is 0.62.
Conducting this technique improves face validity, as it is assumed that confusing items would not score above the minimum accepted value; an ambiguous item is unlikely to be rated as essential by enough validators. This technique will also
improve discriminant validity, as it helps remove irrelevant items. Moreover, convergent validity
will also be improved, as only items believed to be essential by a sufficient number of validators
will be accepted.
Out of 40 items, six failed to score above the minimum accepted value of 0.62 (see Table 6). Overall,
34 items had sufficient support from the 10 validators to be considered essential. Therefore, this tech-
nique helped in improving the instrument’s validity and, potentially, the results’ reliability.
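Lawshe's CVR and its decision rule can be computed in a few lines. The following is a minimal sketch; the function name is ours, not from the paper:

```python
def content_validity_ratio(n_essential, n_validators):
    """Lawshe's (1975) CVR: (ne - N/2) / (N/2).

    Ranges from -1 (no panellist rated the item essential)
    to +1 (every panellist rated it essential).
    """
    return (n_essential - n_validators / 2) / (n_validators / 2)

# With N = 10 validators and a 0.62 cut-off, an item survives only
# if at least 9 of the 10 panellists rate it as essential.
for ne in range(7, 11):
    cvr = content_validity_ratio(ne, 10)
    print(ne, round(cvr, 2), "keep" if cvr >= 0.62 else "remove")
```

With 10 validators, CVR values step in increments of 0.2 (…, 0.6, 0.8, 1.0), so the 0.62 threshold effectively requires nine "essential" ratings.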
importantly, the method also improved discriminant validity, as items that were sorted into the
wrong constructs were assumed to have low discriminant validity and removed.
4.2.4. Pre-test
The pre-test study assesses the validity of the whole questionnaire, rather than only that of the items. Five validators (see Table 5) were asked to complete the online questionnaire and provide feedback via informal interviews, which allowed the validators to express any concerns about the questionnaire, from spelling mistakes or wording problems to questionnaire length. The interviews were also conducted
to obtain suggestions that may improve the questionnaire’s overall quality.
Based on the feedback, one item was removed (see Table 6), and modifications were made to the
questionnaire’s introduction, the section with demographic questions, and the questionnaire’s
overall design. As a result of conducting the pre-test study, unnecessary demographic questions
were removed, the introduction length was reduced, and the questionnaire’s presentation was
improved.
Regarding the instrument’s validity and reliability, the pre-test study helped improve face validity,
as some typos were resolved. Content validity, convergent validity, and discriminant validity were
also considered during this stage, as all validators were asked whether they think all constructs
are well-covered by corresponding items, whether the items are correlated strongly to the constructs
that they measure, and whether any item can be viewed as reflecting two or more unrelated con-
structs. Validators’ answers explicitly confirmed that the instrument has high validity. They also
responded that the questionnaire is sufficiently clear and that results will have high reliability. Ulti-
mately, this stage’s output provides not only a validated instrument, but also an enhanced version of
the questionnaire, divided into three parts: introduction, demographic questions, and items from the
research model. At this point, the questionnaire is deemed ready for piloting.
Therefore, this paper’s output is a validated instrument of 27 items that can be used to measure
continuance intentions in future research (see Table 7). The following section presents the analysis
of data collected during this stage.
5. Analysis
The 86 responses received in the pilot test are analyzed in this section. It is worth mentioning that
feedback determined face and content validities, while factor-loading analysis and the square root of
average variance extracted were used to test convergent and discriminant validities. Finally, Cron-
bach’s alpha values and composite reliability (CR) were used to test instrument reliability. The Stat-
istical Package for the Social Sciences (SPSS), Version 25, was used to analyze collected data.
1974). This result shows that the items of a construct belong together. Finally, and importantly,
results from Bartlett’s test of sphericity were found to be significant, with a p-value less than
0.001 and a chi-square value of 1,373. All these previous results indicate that the data are appro-
priate for factor analysis.
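These factorability checks can be reproduced with numpy and scipy. The sketch below uses our own function names: KMO compares zero-order with partial correlations, and Bartlett's test checks whether the correlation matrix departs from the identity matrix.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 = correlation matrix is identity."""
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    inv = np.linalg.inv(corr)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale          # anti-image (partial) correlations
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, p2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)
```

A significant Bartlett result (p < 0.05) together with a KMO above the conventional 0.6 mark is the usual signal that the data suit factor analysis.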
At this point, we can statistically test convergent and discriminant validities by applying principal
component analysis (PCA), which is the most appropriate method for purposes of this research (Hair
et al., 2013). We used varimax as the rotation method, following recommendations by Tabachnick
and Fidell (2000). Table 11 shows the factor-loading results.
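Varimax rotation itself is compact enough to sketch. The standard SVD-based iteration below is our illustration, not the paper's code; it rotates the loading matrix so each item loads strongly on as few factors as possible:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix toward the varimax criterion."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d != 0.0 and d_new / d < 1 + tol:
            break
        d = d_new
    return L @ R
```

Because the rotation is orthogonal, each item's communality (the row sum of squared loadings) is unchanged; only the distribution of loadings across factors shifts.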
Convergent validity is determined by how strongly items of a construct load into the same factor
(Streiner et al., 2015). Table 11 shows that PU items load strongly into the same factor, which means
that PU1, PU2, and PU3 have established convergent validity. The same applies to all other items;
namely, the IQ, SeQ, SyQ, C, S, T, and CI items.
Discriminant validity can be determined by checking for items that load onto two or more factors (Fornell & Larcker, 1981). Table 11 shows that there is no case of cross-loading in which an item loads at more than 0.5 onto two or more factors. Thus, the items have very good discriminant validity.
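The cross-loading screen described above is easy to automate. A small helper (the name and example matrix are ours, for illustration) flags any item whose absolute loading exceeds the 0.5 cut-off on more than one factor:

```python
import numpy as np

def cross_loading_items(loadings, threshold=0.5):
    """Return row indices of items loading above `threshold` on 2+ factors."""
    L = np.abs(np.asarray(loadings, dtype=float))
    return [i for i in range(L.shape[0]) if int((L[i] > threshold).sum()) > 1]

# Hypothetical 3-item, 2-factor loading matrix: only the middle item cross-loads.
example = [[0.82, 0.10],
           [0.61, 0.58],
           [0.12, 0.88]]
print(cross_loading_items(example))  # -> [1]
```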
Another way to test convergent and discriminant validity is by determining average variance
extracted (AVE) and construct correlations.
In Table 12, diagonal elements (in bold) are the square root of AVE, while the off-diagonal
elements are the correlations among the constructs. Table 12 shows that the constructs have AVE
values ranging from 0.533 to 0.809; that is, by exceeding 0.5, all the constructs demonstrate satisfac-
tory convergent validity (Hair et al., 1998). It also shows that the square root of the AVE (diagonal
elements) values are greater than the correlations among factors (off-diagonal elements), indicating
that the constructs have adequate discriminant validity (Gefen et al., 2000).
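The AVE and Fornell-Larcker comparisons can likewise be scripted. This is a sketch with our own helper names, applied to illustrative numbers rather than the paper's data:

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE of a construct = mean of its squared standardized item loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

def fornell_larcker_holds(ave_values, construct_corr):
    """Discriminant validity check: sqrt(AVE) of each construct must
    exceed its correlations with every other construct."""
    root = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.abs(np.asarray(construct_corr, dtype=float))
    k = len(root)
    return all(root[i] > corr[i, j]
               for i in range(k) for j in range(k) if i != j)
```

In the paper's terms, `fornell_larcker_holds` is the comparison of the bold diagonal of Table 12 against its off-diagonal elements.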
5.3. Reliability
Cronbach’s alpha is the most commonly used test of questionnaire reliability (Hair et al., 2013). It
measures internal consistency; that is, how strongly items from the same construct are related
(Pallant, 2013). It is a statistic that ranges from 0 (completely unreliable) to 1 (completely reliable).
Table 13 shows Cronbach’s alpha values. The results show that the instrument’s reliability ranges
from 0.774 to 0.882, indicating highly reliable coefficients that all exceed the 0.70 threshold
(Pallant, 2013).
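Cronbach's alpha follows directly from the item and total-score variances. A minimal numpy sketch, with our helper applied to fabricated Likert responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative 5-respondent, 3-item construct (fabricated data):
responses = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [5, 5, 4], [3, 3, 3]]
print(round(cronbach_alpha(responses), 3))  # well above the 0.70 threshold
```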
Besides Cronbach’s alpha, CR is also used to assess an instrument’s reliability. Hair et al. (2013) recommended using CR over Cronbach’s alpha, as CR takes the items’ factor loadings into consideration when calculating reliability, whereas Cronbach’s alpha does not and treats all items equally. Table 12 shows that the constructs’ CR values range from 0.816 to
0.927, and therefore exceed the cut-off value of 0.70 (Fornell & Larcker, 1981).
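Composite reliability uses the loadings directly, via the usual congeneric-model formula CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A sketch under that assumption (the helper name is ours):

```python
import numpy as np

def composite_reliability(loadings):
    """CR from the standardized factor loadings of one construct."""
    lam = np.asarray(loadings, dtype=float)
    explained = lam.sum() ** 2            # (sum of loadings) squared
    error = (1 - lam ** 2).sum()          # summed error variances
    return explained / (explained + error)
```

Because each item enters through its own loading, unequal loadings are weighted accordingly rather than treated as equal, which is the distinction from Cronbach's alpha noted above.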
6. Discussion
This research attempts to shed light on developing and validating a quantitative instrument in the IS
field; in particular, for m-government. Quantitative methodology resembles scientific research in the reliability and replicability of its results (Klein & Myers, 1999); however, the items used in the questionnaire must be validated before actual data collection is carried out.
It has been noted in the literature that a pilot test is the most commonly used technique to vali-
date an instrument. In fact, several studies have established instrument development and validation
guidelines that help researchers develop and validate their instruments. In this study, we developed
the instrument based on guidelines suggested by MacKenzie et al. (2011). We then followed
the guidelines of Straub et al. (2004) to validate the instrument. The latter has helped enormously
in validating the instrument; Straub et al.’s guidelines include several techniques in addition to
the pilot test.
We started with 59 items (see Table 6); based on the validation process, 32 items were
removed, leaving 27 items believed to be valid and able to provide reliable results. In this
study, the authors decided to minimize subjectivity in selecting the items. Therefore, a set of
techniques was applied, and over 100 validators and respondents from different backgrounds
were included in the validation process. It was assumed that interviews are the best technique
to identify overlapping items because they do not entail as much workload as other techniques,
which involve rating, sorting, or answering the instrument. Thus, validators in the interview stage
can focus fully on the items themselves and, as a result, can easily remember whether they have
read two overlapping items. For this reason, the authors started with this technique, which
resulted in the removal of 19 items, mostly because of overlapping issues. As mentioned
earlier, Table 6 shows that no items were removed during the pilot test stage because the
items had already been validated. Table 6 also shows that, apart from the Q-sort method, the
numbers of items removed during each stage decreased, which is expected because, after
each stage, the chances of identifying issues with items decrease.
The results of this study, however, do not allow us to determine whether other techniques, such as the Q-sort method and the content evaluation panel, can match the benefits of pilot testing and lessen the need to use it. However, we found that, compared with other studies that only applied
pilot tests to identify lack of validity and/or reliability in some of their items (Akter et al., 2013;
Beyah et al., 2003; Ng et al., 2015), in this study, the pilot test results confirmed that the items
were already valid and reliable. This can be interpreted as implying that the techniques applied
prior to the pilot test improved validity and reliability to a degree at which the pilot test results
could only confirm that item validities are satisfactory. This assumption opens a discussion as to
whether these techniques can replace the most commonly practiced technique (the pilot test)
applied to validate instruments in the IS field. Since little research has been done on the validation
of instruments in the IS field, it is worth replicating this type of research; researchers could then
identify each technique’s strengths and drawbacks.
7. Conclusion
This study provides holistic guidelines for developing and validating items that measure continuance
intentions, which have not been previously investigated in the m-government context. The theoreti-
cal model combines the ECM and three factors from the IS success model (namely, information
quality, service quality, and system quality) as well as the external factor trust. This study recognizes
the importance of continuance intentions for achieving sustainable development. Therefore, it is
essential for governments that have launched m-government to understand the factors affecting
continuance intentions to achieve sustainable development. As a result, items were developed to
measure aspects that have been proven to influence continuance intentions to use new
technologies.
In addition to identifying this gap in the literature, this study makes two important contributions.
First, it provides a step-by-step explanation of how items can be developed and validated with less subjectivity. It explains in detail how the applied techniques enhance item validity and reliability.
These guidelines can be of great benefit to researchers interested in conducting quantitative studies.
INFORMATION TECHNOLOGY FOR DEVELOPMENT 205
Second, this study provides validated questionnaire items that can be used to investigate continu-
ance intentions for m-government.
This study suggests three directions for future research. First, the results are based on a small
sample (86 respondents), so a larger sample would enhance the ability to test the significance of
the factors’ influence and generalize the findings to a wider population. Second, this paper follows the guidelines of MacKenzie et al. (2011) and Straub et al. (2004); to further improve the process of instrument development and validation, other techniques could be applied. Third, the
proposed model investigates continuance intentions in the context of m-government based on
various aspects, such as PU, IQ, SeQ, SyQ, C, S, T, and CI. It would also be useful to explore other
aspects’ roles in continuance intentions. Such studies can draw useful information from this work that may be beneficial to policy makers who seek to achieve sustainable development by utilizing
mobile technology.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Thamer Alshammari is a lecturer in the College of Computing and Informatics, Saudi Electronic University. His PhD research focused on investigating the factors affecting continuance intention of m-government. He has also published an article on usability testing methods. He has presented at three international and national conferences.
Dr Chris Messom is the Director of Graduate Programs at the Faculty of Information Technology, Monash University. He has
over 100 peer reviewed publications in journals, conferences and book chapters. His research interests and publications
are in the fields of intelligent systems, educational data mining, high performance computing and adoption of new
technologies.
Dr Yen Cheung is a senior lecturer at the Faculty of Information Technology, Monash University. She has published peer-reviewed journal articles and conference papers in several fields, including electronic government, electronic commerce, and artificial intelligence. She has also supervised several PhD students.
References
Abbott, J. H., Flynn, T. W., Fritz, J. M., Hing, W. A., Reid, D., & Whitman, J. M. (2009). Manual physical assessment of spinal
segmental motion: Intent and validity. Manual Therapy, 14(1), 36–44. https://doi.org/10.1016/j.math.2007.09.011
Ahmad, S. Z., & Khalid, K. (2017). The adoption of M-government services from the user’s perspectives: Empirical evi-
dence from the United Arab Emirates. International Journal of Information Management, 37(5), 367–379. https://
doi.org/10.1016/j.ijinfomgt.2017.03.008
Ahmad, T., Hu, J., & Han, S. (2009, October 19–21). An efficient mobile voting system security scheme based on elliptic curve
cryptography. 3rd international conference on Network and System Security, Gold Coast, Australia.
Akter, S., D’Ambra, J., & Ray, P. (2013). Development and validation of an instrument to measure user
perceived service quality of mHealth. Information & Management, 50(4), 181–195. https://doi.org/10.1016/j.im.
2013.03.001
Al-Hujran, O., Al-Debei, M. M., Chatfield, A., & Migdadi, M. (2015). The imperative of influencing citizen attitude toward e-
government adoption and use. Computers in Human Behavior, 53, 189–203. https://doi.org/10.1016/j.chb.2015.06.
025
Alshammari, T., Cheung, Y., & Messom, C. (2018, December 3–5). M-government adoption research trends: A systematic
review. 10th Australasian Conference on Information Systems, Sydney, Australia.
Alssbaiheen, A., & Love, S. (2015). The opportunities and challenges associated with M-government as an E-government
platform in KSA: A literature review. International Journal of Management & Business Studies, 5(2), 31–38. https://pdfs.
semanticscholar.org/6b99/644fe01a4c3324ff0af81be641861c77ac52.pdf
Althunibat, A., & Sahari, N. (2011). Modelling the factors that influence mobile government services acceptance. African
Journal of Business Management, 5(34), 13030–13043. https://doi.org/10.5897/AJBM11.2083
Andersen, T. M., & Herbertsson, T. T. (2003). Measuring globalization. Institute for the Study of Labor (IZA).
Antovski, L., & Gusev, M. (2005). M-government framework. Euro mGov.
Apostolou, B., Bélanger, F., & Schaupp, L. C. (2017). Online communities: Satisfaction and continued use intention.
Information Research, 22(4), 1–27. http://informationr.net/ir/22-4/paper774.html
Arpaci, I., Kilicer, K., & Bardakci, S. (2015). Effects of security and privacy concerns on educational use of cloud services.
Computers in Human Behavior, 45, 93–98. https://doi.org/10.1016/j.chb.2014.11.075
Assadi, V., & Hassanein, K. (2010, June 7–9). Continuance intention to use high maintenance information systems: The role
of perceived maintenance effort. 18th European conference on Information Systems, Pretoria, South Africa.
Bernard, E. K., & Makienko, I. (2011). The effects of information privacy and online shopping experience in e-commerce.
Academy of Marketing Studies Journal, 15, 97–112. http://go.galegroup.com.ezproxy.lib.monash.edu.au/ps/i.do?&id=
GALE|A272168400&v=2.1&u=monash&it=r&p=ITOF&sw=w
Beyah, G., Xu, P., Woo, H., Mohan, K., & Straub, D. (2003, August 4–6). Development of an instrument to study the use of
recommendation systems. 9th Americas conference on Information Systems, Tampa, USA.
Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS
Quarterly, 25(3), 351–370. https://doi.org/10.2307/3250921
Black, J. A., & Champion, D. J. (1976). Methods and issues in social research. John Wiley & Sons.
Bolton, R. N., & Drew, J. H. (1991). A multistage model of customers’ assessments of service quality and value. Journal of
Consumer Research, 17(4), 375–384. https://doi.org/10.1086/208564
Cattell, R. (2012). The scientific use of factor analysis in behavioral and life sciences. Springer Science & Business Media.
Chang, I.-C., Li, Y.-C., Hung, W.-F., & Hwang, H.-G. (2005). An empirical study on the impact of quality antecedents on tax
payers’ acceptance of Internet tax-filing systems. Government Information Quarterly, 22(3), 389–410. https://doi.org/
10.1016/j.giq.2005.05.002
Chen, J. V., Elakhdary, M. A., & Ha, Q.-A. (2019). The continuance use of social network sites for political participation:
Evidences from Arab countries. Journal of Global Information Technology Management, 22(3), 156–178. https://doi.
org/10.1080/1097198X.2019.1642021
Chen, J. V., Jubilado, R. J. M., Capistrano, E. P. S., & Yen, D. C. (2015). Factors affecting online tax filing – An application of
the IS Success Model and trust theory. Computers in Human Behavior, 43, 251–262. https://doi.org/10.1016/j.chb.
2014.11.017
Chen, S.-C., Liu, M.-L., & Lin, C.-P. (2013). Integrating technology readiness into the expectation–confirmation model: An
empirical study of mobile services. Cyberpsychology, Behavior, and Social Networking, 16(8), 604–612. https://doi.org/
10.1089/cyber.2012.0606
Chung, N., & Kwon, S. J. (2009). Effect of trust level on mobile banking satisfaction: A multi-group analysis of information
system success instruments. Behaviour & Information Technology, 28(6), 549–562. https://doi.org/10.1080/
01449290802506562
Colton, D., & Covert, R. W. (2007). Designing and constructing instruments for social research and evaluation. John Wiley &
Sons.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS
Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two
theoretical models. Management Science, 35(8), 982–1003. http://www.jstor.org/stable/2632151
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update.
Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
Dong, T.-P., Cheng, N.-C., & Wu, Y.-C. J. (2014). A study of the social networking website service in digital content indus-
tries: The Facebook case in Taiwan. Computers in Human Behavior, 30, 708–714. https://doi.org/10.1016/j.chb.2013.
07.037
Drolet, A. L., & Morrison, D. G. (2001). Do we really need multiple-item measures in service research? Journal of Service
Research, 3(3), 196–204. https://doi.org/10.1177/109467050133001
Faugier, J., & Sargeant, M. (1997). Sampling hard to reach populations. Journal of Advanced Nursing, 26(4), 790–797.
https://doi.org/10.1046/j.1365-2648.1997.00371.x
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement
error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
Gao, L., Waechter, K. A., & Bai, X. (2015). Understanding consumers’ continuance intention towards mobile purchase: A
theoretical framework and empirical study–A case of China. Computers in Human Behavior, 53, 249–262. https://doi.
org/10.1016/j.chb.2015.07.014
Gefen, D., Straub, D., & Boudreau, M.-C. (2000). Structural equation modeling and regression: Guidelines for research
practice. Communications of the Association for Information Systems, 4(1), 7. https://doi.org/10.17705/1CAIS.00407
Glood, S. H., Osman, W. R. S., & Nadzir, M. M. (2016). The effect of civil conflicts and net benefits on M-government
success of developing countries: A case study of Iraq. Journal of Theoretical & Applied Information Technology, 88
(3), 541–552. http://www.jatit.org/volumes/Vol88No3/21Vol88No3.pdf
Hair, J., Black, W. C., Babin, B. J., & Anderson, R. E. (2013). Multivariate data analysis: International version (7th ed.). Pearson
Education Limited.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis (5th ed.). Prentice Hall.
Harvey, M., & Land, L. (2016). Research methods for nurses and midwives: Theory and practice. SAGE.
Horan, T. A., & Schooley, B. L. (2007). Time-critical information services. Communications of the ACM, 50(3), 73–78. https://
doi.org/10.1145/1226736.1226738
Ifinedo, P. (2017). Students’ perceived impact of learning and satisfaction with blogs. The International Journal of
Information and Learning Technology, 34(4), 322–337. https://doi.org/10.1108/IJILT-12-2016-0059
Ives, B., & Olson, M. H. (1984). User involvement and MIS success: A review of research. Management Science, 30(5), 586–
603. https://doi.org/10.1287/mnsc.30.5.586
Joo, Y. J., Park, S., & Shin, E. K. (2017). Students’ expectation, satisfaction, and continuance intention to use digital text-
books. Computers in Human Behavior, 69, 83–90. https://doi.org/10.1016/j.chb.2016.12.025
Kaiser, H. F., & Rice, J. (1974). Little jiffy, mark IV. Educational and Psychological Measurement, 34(1), 111–117. https://doi.
org/10.1177/001316447403400115
Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information technology adoption across time: A cross-sectional
comparison of pre-adoption and post-adoption beliefs. MIS Quarterly, 23(2), 183–213. https://doi.org/10.2307/
249751
Kim, K., Hwang, J., Zo, H., & Lee, H. (2016). Understanding users’ continuance intention toward smartphone augmented
reality applications. Information Development, 32(2), 161–174. https://doi.org/10.1177/0266666914535119
Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in infor-
mation systems. MIS Quarterly, 23(1), 67–94. https://doi.org/10.2307/249410
Kuo, Y.-F., Wu, C.-M., & Deng, W.-J. (2009). The relationships among service quality, perceived value, customer satisfac-
tion, and post-purchase intention in mobile value-added services. Computers in Human Behavior, 25(4), 887–896.
https://doi.org/10.1016/j.chb.2009.03.003
Kushchu, I., & Kuscu, H. (2003, July 3–4). From E-government to M-government: Facing the inevitable. 3rd European con-
ference on e-Government, Dublin, Ireland.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–
174. https://doi.org/10.2307/2529310
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. https://doi.org/
10.1111/j.1744-6570.1975.tb01393.x
Lin, H.-F. (2007). Measuring online learning systems success: Applying the updated DeLone and McLean model.
Cyberpsychology & Behavior, 10(6), 817–820. https://doi.org/10.1089/cpb.2007.9948
Locke, E. A. (1976). The nature and causes of job satisfaction. In M. D. Dunnette (Ed.), Handbook of industrial and organ-
izational psychology (pp. 1297–1343). Rand McNally College Pub. Co.
Lönn, C.-M., & Uppström, E. (2015, August 13–15). Core aspects for value co-creation in public sector. Twenty-first
Americas Conference on Information Systems (AMCIS 2015), Puerto Rico.
MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS
and behavioral research: Integrating new and existing techniques. MIS Quarterly, 35(2), 293–334. https://doi.org/10.
2307/23044045
Mahmood, M., Weerakkody, V., & Chen, W. (2019). The influence of transformed government on citizen trust: Insights
from Bahrain. Information Technology for Development, 25(2), 275–303. https://doi.org/10.1080/02681102.2018.
1451980
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of
Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
McLean, E. R. (1992). Promoting information system success: The respective roles of user participation and user
involvement. Journal of Information Technology Management, 3(1), 1–12. http://jitm.ubalt.edu/III-1/JITM%20Vol%20III%
20No.1-1.pdf
Mettler, T. (2012, December 16–19). Post-acceptance of electronic medical records: Evidence from a longitudinal field study.
33rd international conference on Information Systems, Orlando, USA.
Misra, D. (2009, October 8–10). Make M-government an integral part of e-government: An agenda for action. Proceedings
of TRAI conference on Mobile Applications for Inclusive Growth and Sustainable Development, New Delhi, India.
Mohammed, A. K. (2016). An assessment of the impact of local government fragmentation in Ghana. Public Organization
Review, 16(1), 117–138. https://doi.org/10.1007/s11115-014-0299-2
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an infor-
mation technology innovation. Information Systems Research, 2(3), 192–222. https://doi.org/10.1287/isre.2.3.192
Muchinsky, P. M. (1997). Psychology applied to work: An introduction to industrial and organizational psychology. Brooks/
Cole Publishing Co.
Nahm, A. Y., Rao, S. S., Solis-Galvan, L. E., & Ragu-Nathan, T. (2002). The Q-sort method: Assessing reliability and construct
validity of questionnaire items at a pre-testing stage. Journal of Modern Applied Statistical Methods, 1(1), 114–125.
https://doi.org/10.22237/jmasm/1020255360
Ng, S.-N., Matanjun, D., D’Souza, U., & Alfred, R. (2015). Understanding pharmacists’ intention to use medical apps.
Electronic Journal of Health Informatics, 9(1), 1–17.
Nguyen, T., Goyal, A., Manicka, S., Nadzri, M. H. M., Perepa, B., Singh, S., & Tennenbaum, J. (2015). IBM mobile first in action
for mGovernment and citizen mobile services. International Business Machines Corporation.
Nurco, D. N. (1985). A discussion of validity. In B. A. Rouse, N. Kozel, & L. Richards (Eds.), Self-report methods of drug use:
Meeting current challenges to validity (Vol. 57, pp. 4–11). National Institute on Drug Abuse.
Ochara, N. M., & Mawela, T. (2015). Enabling social sustainability of e-participation through mobile technology.
Information Technology for Development, 21(2), 205–228. https://doi.org/10.1080/02681102.2013.833888
Omaier, H. T., Alharbi, A. Z., Alotaibi, M. F., & Ibrahim, D. M. (2019). Comparative study between emergency response
mobile applications. International Journal of Computer Science and Information Security (IJCSIS), 17(2). https://www.
researchgate.net/profile/Dina_Hussein6/publication/332246006_Comparative_Study_between_Emergency_
Response_Mobile_Applications/links/5ca8c217299bf118c4b6c904/Comparative-Study-between-Emergency-
Response-Mobile-Applications.pdf
Pallant, J. (2013). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (5th ed.). McGraw-Hill
Education (UK).
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future
research. Journal of Marketing, 49(4), 41–50. https://doi.org/10.1177/002224298504900403
Park, M., Jun, J., & Park, H. (2017). Understanding mobile payment service continuous use intention: An expectation-
confirmation model and inertia. Quality Innovation Prosperity, 21(3), 78–94. https://doi.org/10.12776/qip.v21i3.983
Peter, J. P. (1979). Reliability: A review of psychometric basics and recent marketing practices. Journal of Marketing
Research, 16(1), 6–17. https://doi.org/10.1177/002224377901600102
Petrauskas, R. (2012). E-democracy projects in the regions of Lithuania: Evaluation aspects. Social Technologies, 2(2),
404–419. https://core.ac.uk/download/pdf/25752897.pdf
Pirannejad, A. (2017). Can the internet promote democracy? A cross-country study based on dynamic panel data
models. Information Technology for Development, 23(2), 281–295. https://doi.org/10.1080/02681102.2017.1289889
Roca, J. C., Chiu, C.-M., & Martínez, F. J. (2006). Understanding e-learning continuance intention: An extension of the
technology acceptance model. International Journal of Human-Computer Studies, 64(8), 683–696. https://doi.org/
10.1016/j.ijhcs.2006.01.003
Rodríguez Bolívar, M. P., Alcaide Muñoz, L., & López Hernández, A. M. (2016). Scientometric study of the progress and
development of e-government research during the period 2000–2012. Information Technology for Development, 22
(1), 36–74. https://doi.org/10.1080/02681102.2014.927340
Rogers, E. (1995). Diffusion of innovations (4th ed.). The Free Press.
Sarrab, M., Al-Shihi, H., & Al-Manthari, B. (2015). System quality characteristics for selecting mobile learning applications.
Turkish Online Journal of Distance Education, 16(4), 18–27. https://doi.org/10.17718/tojde.83031
Schoenecker, T., & Swanson, L. (2002). Indicators of firm technological capability: Validity and performance implications.
IEEE Transactions on Engineering Management, 49(1), 36–44. https://doi.org/10.1109/17.985746
Sekaran, U., & Bougie, R. (2016). Research methods for business: A skill building approach (7th ed.). John Wiley & Sons.
Sensuse, D. I., Handoyo, I. T., Fitriani, W. R., Ramadhan, A., & Rahayu, P. (2017, September 20–21). Understanding
continuance intention to use mobile commerce: A case of urban transportation service. International conference on ICT For
Smart Society (ICISS), Tangerang, Indonesia.
Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the
Association for Information Systems, 13(1), 380–427. https://doi.org/10.17705/1CAIS.01324
Straub, D. W. (1989). Validating instruments in MIS research. MIS Quarterly, 13(2), 147–169. https://doi.org/10.2307/
248922
Streiner, D. L., Norman, G. R., & Cairney, J. (2015). Health measurement scales: A practical guide to their development and
use (5th ed.). Oxford University Press.
Sultana, M. R., Ahlan, A. R., & Habibullah, M. (2016). A comprehensive adoption model of M-government services among
citizens in developing countries. Journal of Theoretical & Applied Information Technology, 90(1), 49–60. http://www.
jatit.org/volumes/Vol90No1/6Vol90No1.pdf
Tabachnick, B. G., & Fidell, L. S. (2000). Computer-assisted research design and analysis. Allyn and Bacon.
Tornatzky, P., Exarchou, M., Vlachopoulou, M., & Zarmpou, T. (2011, March 10–13). Electronic and mobile G2C services:
Greek municipalities’ and citizens’ perspectives. IADIS International Conference on e-Society, Berlin, Germany.
Trimi, S., & Sheng, H. (2008). Emerging trends in M-government. Communications of the ACM, 51(5), 53–58. https://doi.
org/10.1145/1342327.1342338
UN. (2014). United Nations E-Government survey 2014 – E-government for the future we want. United Nations. https://
publicadministration.un.org/egovkb/portals/egovkb/documents/un/2014-survey/e-gov_complete_survey-2014.pdf
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a
unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Wang, C., Teo, T. S., & Liu, L. (2020). Perceived value and continuance intention in mobile government service in China.
Telematics and Informatics, 48, 101348. https://doi.org/10.1016/j.tele.2020.101348
Weerakkody, V., Dwivedi, Y. K., & Kurunananda, A. (2009). Implementing e-government in Sri Lanka: Lessons from the
UK. Information Technology for Development, 15(3), 171–192. https://doi.org/10.1002/itdj.20122
World Bank. (2012). World development indicators 2012. World Bank Publications.
Yin, G., Zhu, L., & Cheng, X. (2013). Continuance usage of localized social networking services: A conceptual model and
lessons from China. Journal of Global Information Technology Management, 16(3), 7–30. https://doi.org/10.1080/
1097198X.2013.10845640
Zefferer, T. (2011). Mobile government: E-government for mobile societies. Stocktaking of current trends and initiatives.
Secure Information Technology Center.
Zhang, X., Yan, X., Cao, X., Sun, Y., Chen, H., & She, J. (2018). The role of perceived e-health literacy in users’ continuance
intention to use mobile healthcare applications: An exploratory empirical study in China. Information Technology for
Development, 24(2), 198–223. https://doi.org/10.1080/02681102.2017.1283286