
tGov Workshop '11 (tGOV11)

March 17-18, 2011, Brunel University, West London, UB8 3PH



A NEW COBRAS FRAMEWORK TO EVALUATE E-
GOVERNMENT SERVICES: A CITIZEN CENTRIC
PERSPECTIVE
Ibrahim H. Osman, American University of Beirut, Olayan School of Business,
Lebanon.
Ibrahim.osman@aub.edu.lb
Abdel Latef Anouze, American University of Beirut, Olayan School of Business, Lebanon.
aa153@aub.edu.lb
Zahir Irani, Brunel Business School, Brunel University, UK
Zahir.Irani@brunel.ac.uk
Habin Lee, Brunel Business School, Brunel University, UK
Habin.Lee@brunel.ac.uk
Asım Balcı, Türksat, Selçuk University, Turkey
Abalci@turksat.com.tr
Tunç D. Medeni, Türksat, Middle East Technical University, Turkey
Tdmedeni@turksat.com.tr
Vishanth Weerakkody, Brunel Business School, Brunel University, UK
Vishanth.Weerakkody@brunel.ac.uk

Abstract
E-government services involve a large number of stakeholders. The literature contains a large
number of fragmented papers on e-government models that aim to evaluate the efficiency and
effectiveness of e-government services from a general perspective, but little effort has been made
to provide a holistic evaluation model from a specific stakeholder's perspective. In this paper, a
holistic evaluation framework (COBRAS) is proposed based on the measurement factors that most
affect users' satisfaction with an e-government service. These factors are identified from the
published literature, classified into four groups and validated by e-government experts and users:
Cost, Opportunity, Benefit and Risk, whose Analysis yields Satisfaction (COBRAS). The framework
balances the user's cost and risk of engaging with an e-service against the associated benefit and
opportunity of that e-service. The balanced analysis determines the degree of user satisfaction
that ultimately ascertains the success of e-service take-up. A validated 49-item questionnaire was
administered to a sample of 2785 users of the Türksat e-government portal in Turkey, and the
responses were analyzed using confirmatory factor analysis and structural equation modeling to
establish the hypothesized relationships. The proposed framework is demonstrated to be a useful
tool for evaluating user satisfaction and the success of e-government services.

Keywords: e-government service evaluation, cost-benefit analysis, risk-opportunity analysis,
structural equation modeling, user satisfaction, COBRAS model.

1. INTRODUCTION
E-government services involve many stakeholders, such as citizen and business users; government
employees; information technology developers; and government policy makers, public administrators and
politicians (Rowley, 2011). Each stakeholder has different interests, costs, benefits and objectives that
affect the success and take-up of e-government services. Moreover, e-government is
a dynamic socio-technical system encompassing several issues, ranging from governance; policy
development, societal trends; information management; interaction; technological change; to human
factors (Dawes, 2009). In the literature, there have been a large number of models/frameworks to
evaluate e-government success for different purposes or from different perspectives (Jaeger and
Bertot, 2010). Although these models aim to help policy makers and practitioners to evaluate and
improve e-government services, little effort has been made to provide a holistic framework for
evaluating e-services and the interaction with citizens (Wang et al., 2005). Moreover, e-government
success is a complex concept, and its measurement involves factors that are multi-dimensional in
nature (Wang and Liao, 2008; Irani et al., 2007; 2008; Weerakkody and Dhillon, 2008). Therefore, in
this study, a new conceptual framework to measure success from the users' satisfaction perspective is
proposed. The framework development methodology follows a grounded theory approach in which an
extensive literature review of existing e-government measurement models is conducted to identify the
various fragmented success factors (key performance indicators, KPIs). The aim is to propose a
holistic framework that can be re-used for evaluating any e-service in any country. The identified
measures are then grouped into four main categories: cost; benefit; risk; and opportunity. The
proposed holistic assessment model measures a user's satisfaction in terms of the user's cost-benefit
and risk-opportunity trade-offs from engaging with an e-service. This approach is in line with the
recent literature that considers stakeholders' costs, benefits, outcomes, outputs and impacts in
conceptual e-service evaluations (Millard, 2008; Lee et al., 2008; Rowley, 2011). The current
evaluation takes into account both the operational assessment of an e-government service's efficiency
and the effectiveness of the outputs and outcomes of service delivery. Hence, policy makers would
gain an overall understanding of the e-government service (e-service) capability, and consequently
better improvement policies can be developed for unsuccessful e-services.
The remainder of the paper is organized as follows. Section 2 presents a background on the
evaluation of e-government success models and frameworks. Section 3 introduces the new framework
with its associated assessment components. Section 4 discusses the methodological approach for the
validation process, data collection, and data analysis on a selected sample of e-services in Turkey. The
final section concludes with managerial implications and further research directions.
2. BACKGROUND ON THE EVALUATION OF E-GOVERNMENT SUCCESS
MODELS
There have been numerous attempts by e-government researchers and practitioners alike to present a
set of guidelines to bridge the gap between theory and practice for e-government architectural design
(Meneklis and Douligeris, 2010). An investigation of the literature on conceptual models/frameworks
to evaluate user satisfaction with e-government services includes the various publications by
Rowley (2011); Jaeger and Bertot (2010); Verdegem and Verleye (2009); Hammer and Al-Qahtani
(2009); Irani et al. (2008); Wang and Liao (2008); Esteves and Joseph (2008); van Dijk et al. (2008);
Nour et al. (2008); Ghapanchi et al. (2008); Zarei and Ghapanchi (2008); Azad and Faraj (2008);
Irani et al. (2007); Kim et al. (2007); Gouscos et al. (2007); Grant and Chau (2005); Moon et al.
(2005); Evans and Yen (2005); Gupta and Jana (2003); Holliday (2002); Mechling (2002); and the
Federal CIO Council (2002). These models and frameworks can be classified into three categories
according to their evaluation perspective: e-government value evaluation models; e-government
success evaluation models; and e-government service quality evaluation models.
2.1 E-government Value Measurement Models
According to Mechling and Hamilton (2002), the e-government Value Measurement Model (VMM)
was introduced by Harvard University in response to a request from the US government and was
released by the Best Practices Committee of the US Federal CIO Council (2002) to assess the value
and usage of e-government websites and projects based on a multidimensional analysis of cost/benefit,
social, and political factors. The VMM framework includes five value factors: direct user value;
social/public value; government financial value; government operational/foundational value; and
strategic/political value (Foley, 2006). It starts by developing a set of values for each factor,
including costs, risks, tangible returns and intangible returns for each service. These values are
measured through a set of dimensions/elements, and scores are then assigned to each element/dimension.
Accordingly, it becomes possible to make yes/no decisions in a fairly objective and repeatable manner
for each element. The VMM approach allows comparison of the different values (cost; risk;
return) among e-government services. The US Federal CIO Council developed a model based on
VMM to assess the value of US e-services. Mechling and Hamilton (2002) extended the VMM model
to include six essential factors, such as cost/benefit analyses and a project's political and social value,
to assess e-government projects. Gouscos et al. (2007) proposed a different model to assess the quality
and performance of one-stop e-government service offerings. Gupta and Jana (2003) suggested a
different methodology in terms of the tangible and intangible economic benefits that can be produced
by an e-government service. It should be noted that Gupta and Jana's model can be considered a
subset of the first two models. Moreover, the VMM model was designed to provide policy makers
with qualitative data that help in assessing the potential benefits of using e-services. Although the
published VMM studies shed light on the performance of e-government services from both user and
government perspectives, none of them considered monitoring and evaluating performance at an
individual e-service level or across a number of e-services.
2.2 E-government Success Models
E-government success (or maturity) models build on the information systems success model
introduced by DeLone and McLean (1992); the D&M model was then updated by DeLone and
McLean (2003) to measure the success of any e-commerce information system. It consists of six
dimensions of success factors: system quality; information quality; service quality; system use; user
satisfaction; and net benefits. Information quality involves features such as accuracy; relevancy;
precision; reliability; completeness; and currency. System quality refers to ease of use; user
friendliness; system flexibility; usefulness; and reliability. Based on this evaluation model, any online
service can be evaluated in terms of information, system and service quality. These dimensions affect
the subsequent use or intention to use and user satisfaction; as a result of using the online services,
certain benefits will be achieved. The benefits will (positively or negatively) influence user
satisfaction and further use of the information system.
Many researchers have adopted the D&M model to assess e-government success, including
Wang and Liao (2008); Chen (2010); Floropoulos et al. (2010); and Jang (2010). Jang (2010)
employed the updated D&M model to measure e-Government Procurement (e-GP) system success.
Results showed that information quality, system quality and service quality had a significant effect on
individual performance through usage of, and user satisfaction with, an e-GP system. In addition, the
key antecedents to user satisfaction and system usage differed between users with high and low
computer self-efficacy. Floropoulos et al. (2010) adopted the D&M model to measure the success of
the Greek Tax Information System. The results provided evidence of strong connections among the
success constructs; all hypothesized relationships were supported, except the relationship between
system quality and user satisfaction. Furthermore, Chen (2010) used the D&M model to measure the
online tax-filing system in Taiwan. Structural equation modelling results confirmed that the quality
antecedents strongly influence taxpayer satisfaction with the online tax-filing system; the information
and system quality factors were more important than service quality in measuring taxpayer
satisfaction. Wang and Liao (2008) adopted the D&M model to assess the success of e-government
systems in Taiwan; their results showed that the hypothesized relationships between the six success
factors are significantly supported by the data, except the link from system quality to use.
Unlike the VMM models, the D&M models pay more attention to the quality of technology and user
benefits, with less attention to other dimensions, such as cost, risk and opportunity, that are very
important to users' satisfaction in the VMM view. Hence, combining both the D&M and VMM
measurement factors should provide a more accurate understanding of overall e-government success,
a proposition to be verified in the current work.
2.3 E-government Service Quality Models
E-government service quality models mostly derive from the SERVQUAL model proposed by
Parasuraman et al. (1988; 2005). The Parasuraman et al. (1988) SERVQUAL model consists of 22
service quality measures organized in five dimensions: tangibles (appearance of physical
facilities, equipment, personnel and communication materials); reliability (ability to perform the
promised service dependably and accurately); responsiveness (willingness to help customers and
provide prompt service); assurance (knowledge and courtesy of employees and ability to convey trust
and confidence); and empathy (provision of caring, individualized attention to customers). A large
number of research papers have expanded or updated the SERVQUAL model. For instance,
Iwaarden et al. (2003) expanded the SERVQUAL model; the resulting model includes five quality
dimensions corresponding to those of the initial SERVQUAL model, with their meaning adapted to
the specificities of websites: tangibles (appearance of the website, navigation, search options and
structure); reliability (ability to judge the trustworthiness of the offered service and the organization
performing the service); responsiveness (willingness to help customers and provide prompt service);
assurance (ability of the website to convey trust and confidence in the organization behind it with
respect to security and privacy); and empathy (appropriate user recognition and customization). Later
on, Parasuraman et al. (2005) developed and tested E-S-QUAL as a new measure of e-service website
quality. E-S-QUAL is composed of a 22-item scale with four dimensions: efficiency; fulfilment; system
availability; and privacy. Moreover, Parasuraman et al. (2005) developed another model
(E-RecS-QUAL) directed only at non-routine encounters by website users. It contains 11 items in three
dimensions: responsiveness, compensation, and contact.
Other researchers proposed new service quality models. For instance, Huang and Chao (2001)
asserted that e-government websites should be evaluated based on usability principles, i.e., websites
should specifically follow a user-centred design to allow users of e-government websites to
effectively reach the desired information, while Holliday (2002) proposed a set of evaluation criteria
for the level of usefulness of e-government websites, including factors such as the amount of
information about the government, contact information, feedback options, search capabilities, and
related links. Balog et al. (2008) proposed an e-ServEval model for the quality evaluation of
e-services, while Papadomichelaki and Mentzas (2009) developed an e-government service quality
model (e-GovQual) that consists of 25 quality attributes classified into four quality dimensions:
reliability (the feasibility and speed of accessing, using and receiving services of the site, measured
by 6 items); efficiency (ease of using the site and the quality of information it provides, measured by
11 items); user support (the ability to get help when needed, measured by 4 items); and trust (the
degree to which the user believes the site is safe from intrusion and protects personal information,
measured by 4 items). Liu et al. (2010) established an e-government website evaluation index system
using the analytic hierarchy process (AHP). The components of the index system are: content
(practicality, comprehensiveness, accuracy, timeliness, transparency and uniqueness); function
(network office, online communication, online monitoring, opinion survey); technology (convenience,
availability, security); and other (website content protection, adaptability).
Furthermore, building on previous e-services research, Fassnacht and Koese (2006) developed a
broadly applicable hierarchical quality model for e-services. The model consists of three dimensions
and nine sub-dimensions: environment quality (graphic quality, clarity of layout); delivery quality
(attractiveness of selection, information quality, ease of use, technical quality); and outcome quality
(reliability, functional benefit, emotional benefit). Rowley (2006), in turn, proposed a framework that
includes: website features; security; communication; information; accessibility; delivery; reliability;
customer support; responsiveness; and personalization. The Halaris et al. (2007) model for assessing
the quality of e-government services consists of four layers: a back office performance layer
(including factors from quality models for traditional government services); a website technical
performance layer (website performance, such as reliability and security); a website quality layer
(interface and usability); and a users' overall satisfaction layer. Esteves and Joseph (2008) suggested
a three-dimensional ex-post framework for the assessment of e-government initiatives. The three
dimensions are e-government maturity level, stakeholders, and assessment levels; the assessment
levels consider the technological, strategic, organizational, operational, service, and economic
aspects. Jansen et al. (2010) proposed a Contextual Benchmark Method (CBM) that is based on the
Modelling Approach for Designing Information Systems (MADIS) framework by Essink (1988).
CBM consists of three levels and five aspects; the first level is the group of organizations involved in
the benchmarking (benchmark
partners). The second level is the individual organization that is involved in the benchmarking
exercise (organization). The third level is the e-government services that are analysed (service).
The five aspects are: goal (CBM is an organized set of elements and relationships between
them, focused on achieving a set of organizational goals); respondents (users who evaluate an
electronic service using indicators); indicators (several indicators that should be measured in a
benchmarking exercise); methods (different methods to be used in order to produce the needed
knowledge); and infrastructure (the availability of hardware and software).
On the other hand, a few researchers have adopted ISO/IEC 9126 to evaluate e-service quality, such
as Behkamal et al. (2009) and Chutimaskul et al. (2008). Behkamal et al. (2009) proposed six quality
dimensions: functionality (suitability, accuracy, interoperability, security, traceability); reliability
(maturity, fault tolerance, recoverability, availability); usability (understandability, learnability,
operability, attractiveness, customizability, navigability); efficiency (time behavior, resource
utilization); maintainability (analyzability, changeability, stability, testability); and portability
(adaptability, installability, co-existence, replaceability), whereas Chutimaskul et al. (2008)
integrated ISO/IEC 9126 with the D&M model to measure Thailand's e-government development success.
Finally, a few user-centric models have recently been suggested to address the shortfalls of the
three previously mentioned categories. For instance, Rowley (2011) argued that any successful
e-government service should satisfy the following user benefits: ease of use; accessibility and
inclusivity; and confidentiality and privacy. Magoutas and Mentzas (2010) proposed the SALT (Self
Adaptive quaLity moniToring) model to monitor user satisfaction and the quality of e-government
services. Jaeger and Bertot (2010) argued that any attempt to create user-centered e-government
services must account for a number of essential elements. These elements range from basic issues
related to the ability to use e-government, to building trust and tying e-government to established
social and institutional requirements, such as: access needs; information and service needs;
technology needs; information and technology literacy; government literacy; availability of
appropriate content and services; usability and functionality; meeting user expectations; information
concerns; social institutions providing access to e-government; trust; e-government 2.0; lifelong
e-government usage; and understanding how users actually use e-government. Pazalos et al. (2010)
proposed and validated a structured methodology for assessing and improving e-services developed
in digital cities. The proposed methodology assesses the various types of value generated by an
e-service and also the relationships among them, hence allowing a more structured evaluation and a
deeper understanding of the value generation process.
3. THE COBRAS FRAMEWORK AND COMPONENTS
The dimensions of the previously reviewed models, with their associated indicators and the
analytical tests performed, are presented in Tables a, b and c in the appendix. It is clear that the
evaluation of e-government success has been approached from different directions, with a recent
interest in user-centered satisfaction. However, the evaluation of user satisfaction depends
exclusively on the user's experience and interaction with an e-service and the values it generates.
Existing methodologies show that the VMM is based on the rational thinking of policy makers,
using fixed weights assigned to indicators for evaluation. This rationality encourages the
development of e-government services from the users' perspective based on users' costs, benefits
and risks; however, these were used separately, not simultaneously, in previous performance
evaluation models, and those models ignored the value of the opportunities and impact that can be
obtained from using e-services. The SERVQUAL-based models account for the service quality of
the system, which includes some benefit and risk aspects, but they ignore the cost and opportunity
aspects, whereas the updated D&M models account for user benefits but overlook cost, risk and
opportunity. Consequently, our proposed evaluation framework builds on previous models and
extends them to develop a holistic assessment model for e-government services. The various
fragmented performance factors are now integrated and updated based on the following
observations on user satisfaction, namely: the user's experience during the execution of and
interaction with an e-service; the efficiency of the e-system; the effectiveness of the delivered
e-service; and the post-impact of the delivered e-service. The new framework is based on
theoretical cause-effect relationships between the cost-benefit analysis and the risk-opportunity
analysis on the one hand, and user satisfaction on the other hand. The observed causal
relationships among constructs and the various
performance indicators in the literature are grouped into four sets of dimensions/constructs: Cost;
Benefit; Risk; and Opportunity. The cost and benefit variables are mostly tangible and often easy
to measure, whereas the risk and opportunity variables are mostly intangible. The expected
directions of the hypothesized cause-effect relationships among the five constructs of the new
framework, called COBRAS (Costs, Opportunities, Benefits, Risks Analysis for Satisfaction), are
presented in Figure 1. COBRAS is developed by analogy to a strategic management tool known as
SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis. SWOT analysis has recently
been used in combination with data envelopment analysis to reduce the subjectivity of weight
assignments in evaluation models like VMM (Dyson, 2000). Moreover, SWOT analysis is often
used in academia for the development of business projects and the improvement of operations. In
our analogy, strengths correspond to benefits, weaknesses to costs, and threats to risks, while
opportunities remain the same. Normally, the costs and benefits are internal factors of an e-service,
whereas the opportunities and risks are external factors. Like SWOT analysis, COBRAS can be
quite subjective. These factors are elaborated next.


Figure 1: The COBRAS Model for User Satisfaction
3.1 Cost-Benefit Analysis
Logically, users compare the e-service costs with the associated benefits to decide on the use/reuse
of the e-service. Cost-benefit analysis is a well-known concept in management and economics,
where the managerial decision to select a project is based on the highest ratio of benefits to costs
among competing alternatives.
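As a simple illustration of this decision rule, the short sketch below (in Python, purely for exposition) selects the alternative with the highest benefit-to-cost ratio; the project names and numbers are made up.

```python
# Toy illustration of the cost-benefit decision rule: among competing
# alternatives, select the one with the highest benefit-to-cost ratio.
# Names and numbers below are hypothetical, for illustration only.
projects = {
    "eservice_A": {"benefit": 120.0, "cost": 40.0},  # ratio 3.0
    "eservice_B": {"benefit": 90.0,  "cost": 45.0},  # ratio 2.0
}
best = max(projects, key=lambda p: projects[p]["benefit"] / projects[p]["cost"])
print(best)  # -> eservice_A
```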

Cost Construct: this construct has been implicitly addressed by researchers in e-government,
including Verdegem and Verleye (2009); Foley (2008); Bertot et al. (2008); and Kertesz (2003). The
cost variables are often tangible and measurable, like the actual money and time spent to complete a
requested e-service. Some cost variables include:
1. Access time: the number of attempts to find the requested service on the site; the length of time
to find the requested service on the site (access time; downloading time; waiting/response
time; searching time).
2. Post-interaction time: time to receive confirmation of submissions; waiting time to receive a
service (visa, passport, driving license).
3. Authorization requirements: authorization code and associated costs; registration with the site
(username and password) for authentication.
The cost-to-satisfaction hypothesis, H1: the lower the e-service cost, the higher the user
satisfaction.

Benefit Construct: this construct represents the value of using an e-service. It measures, among
others, the total value of information availability, service quality and system quality. This set of
measures has been widely used in several models, like the DeLone and McLean (2003) model and
the SERVQUAL, e-SQ and EGOVSAT models. Some benefit variables include:
1. Tangible benefits, such as saving time or saving money.
2. Intangible benefits, such as information quality (information availability, adequacy, accuracy,
relevancy, reliability, understandability, completeness); service quality (design, well-organized
site, quick delivery, accessibility, ease of navigation); and system quality (quick loads,
responsiveness, visual attractiveness, adequacy of links, good organization).
The benefit-to-satisfaction hypothesis, H2: the higher the e-service benefit, the higher the user
satisfaction.
3.2 Risk-Opportunity Analysis
Using e-government services may involve risks arising from sending personal information that is
stored electronically. Third parties can intercept, read and modify such information. In an electronic
burglary, large quantities of delicate information can be stolen or destroyed easily without the
public's consent (Horst et al., 2007). Therefore, citizens need to trust the government and other
involved agencies (Evangelidis, 2004). At the same time, an e-service presents opportunities and
impacts to users. Some researchers include opportunities under the benefit construct owing to the
lack of a clear definition of opportunities.

Risk Construct: risks arise when conditions in the external environment jeopardize the reliability of
e-services; they compound the user's vulnerability when they relate to the cost factors, and they are
often uncontrollable. Users have concerns about their personal and credit card information. Trust in
the technology infrastructure, and in those managing it, would reduce risk, with a strong impact on
the adoption of a technology (Colesca, 2009). This risk dimension has been addressed by a few
researchers, including Kertesz (2003); Rotchanakitumnuai (2008); Udo et al. (2008); and Zhang and
Prybutok (2005). Some risk variables include:
1. Privacy risk: arises from the use of personal data for other purposes.
2. Financial audit risk: the storage of personal information and documents for a long period may
worry users about being audited again and asked for an additional tax payment.
3. Time and technology risk: users may feel they are wasting time when online services fail,
requiring additional professional support to retrieve or re-enter data into the e-service.
4. Social risk: users may have less interaction with their friends during social events due to
continuous engagement with e-services, or may feel exposed to damage to their social image
(non-users of e-government may feel embarrassed for not using e-services or feel inferior to
citizens who use them).
The risk-to-satisfaction hypothesis, H3: the lower the e-service risk, the higher the user
satisfaction.

Opportunity Construct: Opportunities are presented by the environment/ country within which e-
service operates. These arise when a user can take benefit of conditions to use e-services that enable
him/her to become more beneficial. Users can gain personal advantage by making use of
opportunities. Users should be careful and recognize the opportunities and grasp them whenever they
arise. Opportunities may arise from environment, government and technology incentives. Some
benefits variables may include:
1. Service support (ease to access any time, flexibility in time 24x7 accesses); access anywhere
(flexibility in place).
2. Technological support (error correction; gain up-to-date information on progress, access
provision of e-services in a public area (public library, cafe) and follow up facilities using
email and media tools).
3. Technological advances in the e-service process and provision such as making use of
personalized e-services.
4. Bypassing third party providers and avoiding bureaucratic processes.
The opportunity to satisfaction hypothesis- H4: The higher the e-service opportunity is the higher the
user satisfaction.
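The four hypotheses can be summarized as expected signs of the paths from each construct to satisfaction. The sketch below encodes them and checks coefficient estimates against the expected signs; the function and variable names are illustrative, not part of the study's analysis scripts, and the sample estimates are placeholders.

```python
# Expected signs of the COBRAS paths to user satisfaction (H1-H4).
EXPECTED_SIGNS = {
    "cost": -1,          # H1: lower cost -> higher satisfaction
    "benefit": +1,       # H2: higher benefit -> higher satisfaction
    "risk": -1,          # H3: lower risk -> higher satisfaction
    "opportunity": +1,   # H4: higher opportunity -> higher satisfaction
}

def hypothesis_supported(estimates: dict) -> dict:
    """Check whether each estimated path coefficient has the hypothesized sign."""
    return {c: (est < 0) == (EXPECTED_SIGNS[c] < 0) for c, est in estimates.items()}

# Illustrative estimates (placeholders, not study results):
print(hypothesis_supported(
    {"cost": -0.10, "benefit": 0.52, "risk": -0.001, "opportunity": -3.98}))
# -> {'cost': True, 'benefit': True, 'risk': True, 'opportunity': False}
```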

4. METHODOLOGY AND DATA ANALYSIS
4.1 Methodology
This research utilizes several modes of data collection based on questionnaires validated by
experts, as well as advanced data analysis based on Structural Equation Modeling (SEM) to test the
theorized model. Although we could have concurrently validated the measurement and the structural
relationships (testing reliability and validity), this approach is not advisable (Perry et al., 2008);
hence the analyses were conducted separately in this section. Items (defined as measurable variables)
for the constructs/factors are mainly adapted from different sources in the literature, such as the
updated D&M IS success model; SERVQUAL; Verdegem and Verleye (2009); Foley (2008); Bertot
et al. (2008); Rotchanakitumnuai (2008); and Udo et al. (2008). Items were developed for each of the
factors and dimensions listed in Tables b and c based on their practical importance in the literature,
with modification and re-wording by the authors and expert feedback. All items were measured using
a five-point Likert scale (1 = strongly disagree to 5 = strongly agree).
4.2 Data Collection and Online Survey
An online survey was designed to include questions related to the newly proposed model in addition
to demographic data. The survey was developed in stages. Two workshops were conducted, in the
United Kingdom and Turkey, to ascertain the validity of the content and the relevance of the
proposed questionnaire to the objective of the study. In Turkey, a workshop was conducted on the
day after the ICEGEG conference on explorations in e-government and e-governance (Antalya,
March 2010); twenty experts from e-government public administration, private IT institutions and
professional research were invited. At this workshop, the conceptual COBRAS framework was
presented, and the questionnaire with its 60 initial questions was distributed to participants for
review. The updated questionnaire was then reduced to 49 questions, which were again validated at
the second t-Government workshop (London, March 2010).
Face validity was also assessed to evaluate the appearance of the questionnaire in terms of
feasibility, readability, consistency of style and formatting, and the clarity of the language used.
Thirty MBA students at the American University of Beirut were selected to conduct the face validity
check. The students assessed each question in terms of the clarity of its wording; the likelihood that
the target audience would be able to answer it; and, finally, the layout and style of the questionnaire.
In addition to the 49 questions, there were open-ended questions for general comments, for content
analysis.
The data collection was conducted in all Turkish cities (such as Ankara and Istanbul) among
Turkish users of the Türksat portal over the period from July 2010 to December 2010. Users were
asked to voluntarily fill in the questionnaire immediately after using an e-service. A total of 3506
responses were collected, of which only 2785 (79.44%) were valid; the rest were incomplete. The
remaining sample size is deemed sufficient for the analysis. From the demographic data, it was found
that around half of the respondents (45%) had at least a bachelor's degree; 67% had experience in
working with a computer or the Internet and had used e-government websites; 12.7% of respondents
had poor computer skills, with the majority reporting at least an average level of computer
proficiency; and 94.4% had used the current e-government services at least once a month, whereas
the remainder had used them once or several times per year.
4.3 E-services Types
E-services in Turkey are heterogeneous in terms of functionality and maturity level. An attempt was
made to divide them into three categories, each of which includes services that are homogeneous in
functionality with respect to users. The three groups are as follows:
1- Informational e-services provide public content and do not require any authentication from
users; 2258 respondents used such e-services.
2- Interactive/Transactional e-services require authentication and allow users to download forms,
contact officials and make appointments; 243 respondents used such e-services.
3- Personalized e-services require authentication and allow users to customize the content of
e-services and conduct financial transactions online (users can pay for e-services); 284
respondents used such e-services.
The above grouping is different from the maturity model of Layne and Lee (2001). It is more in line
with the view of Coursey and Norris (2008), who stated that the maturity models do not accurately
describe or predict the development of e-government.
4.4 Reliability and Validity of Measures
All the constructs/dimensions were measured using the 49 items/indicators presented in Table 1.
The construct validity of the measures was assessed using confirmatory factor analysis, whereas the
internal consistency reliability of the measures was tested using Cronbach's alpha. Construct
validity tests the degree to which the items/questions in the questionnaire relate to the relevant
theoretical construct/factor, here using principal component analysis (PCA). The factor loadings of
the final PCA solution and their factorial weights are shown in Table 1. All items have a loading
≥ 0.5, which is the acceptable norm, except two items with loadings of 0.46 and 0.43; these are not
far from 0.5 and were kept in the model as they are important to their respective factors.
The Kaiser-Meyer-Olkin (KMO) test has a value of 0.98, indicating high sampling adequacy for the
factor analysis. Moreover, Bartlett's test of sphericity rejects the hypothesis that the correlation
matrix is an identity matrix at a highly significant level (p < 0.001), indicating the appropriateness
of the factor model. The final PCA solution of four factors using the 49 items accounted for 73.46%
of the total variance. The items that went into each factor/construct are explained as follows.
1- The Benefit and Opportunity factor accounted for 41.8% of the total variance and included
thirty-five (35) items. Of the 35 items, 31 questions focused on the users' benefit and
opportunity constructs; the other four items also had good loadings on the cost-money factor
and were therefore removed from this group.
2- The Cost-Money factor accounted for 12.70% of the total variance and included seven items.
It focuses on the money paid to use an e-service.
3- The Cost-Time factor accounted for 11.79% of the total variance and included six items. It
focuses on the time spent in using an e-service.
4- The Risk factor accounted for 7.12% of the total variance and included five items. It focuses
on the risk of using an e-service.
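For reference, these validity checks can be reproduced with standard tools. A minimal sketch, assuming the 2785 responses sit in a pandas DataFrame with one column per item (C1-C49) and using the open-source factor_analyzer package; the file name and variable names are illustrative, not the authors' actual analysis script:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

df = pd.read_csv("turksat_survey_items.csv")   # hypothetical file: columns C1..C49

kmo_per_item, kmo_total = calculate_kmo(df)    # sampling adequacy (paper: 0.98)
chi2, p = calculate_bartlett_sphericity(df)    # H0: correlation matrix is identity

# Principal-components extraction of four factors, as in Table 1
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
cumulative_variance = fa.get_factor_variance()[2][-1]    # paper reports 73.46%
print(f"KMO={kmo_total:.2f}, Bartlett chi2={chi2:.0f} (p={p:.3g}), "
      f"variance explained={cumulative_variance:.2%}")
```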


No | Item/Question | F1 | F2 | F3 | F4
C1 | The e-service is easy to find | 0.81 | | |
C2 | The e-service is easy to navigate | 0.84 | | |
C3 | The description of each link is provided | 0.79 | | |
C4 | The e-service information is easy to read (font size, color, ...) | 0.72 | | |
C5 | The e-service is accomplished quickly | 0.84 | | |
C6 | The e-service requires no technical knowledge | 0.70 | | |
C7 | The instructions are easy to understand | 0.83 | | |
C8 | The e-service information is well organized | 0.87 | | |
C9 | The drop-down menu facilitates completion of the e-service | 0.86 | | |
C10 | New updates on the e-service are highlighted | 0.81 | | |
C11 | The requested information is uploaded quickly | 0.80 | | |
C12 | The information is relevant to my service | 0.83 | | |
C13 | The e-service information covers a wide range of topics | 0.75 | | |
C14 | The e-service information is accurate | 0.73 | | |
C15 | The e-service operations are well integrated | 0.84 | | |
C16 | The e-service information is up-to-date | 0.75 | | |
C17 | The instructions on performing the e-service are helpful | 0.82 | | |
C18 | The referral links provided are useful | 0.79 | | |
C19 | The Frequently Asked Questions (FAQs) are relevant | 0.76 | | |
C20 | Using the e-service saved me time | 0.78 | 0.50 | |
C21 | Using the e-service saved me money | 0.67 | 0.51 | |
C22 | The provided multimedia services (SMS, email, ...) facilitate contact with e-service staff | 0.71 | | |
C23 | I can share my experiences with other e-service users | 0.67 | | |
C24 | The e-service can be accessed anytime | 0.73 | | |
C25 | The e-service can be reached from anywhere | 0.69 | | |
C26 | The information needed for using the e-service is accessible | 0.78 | | |
C27 | The e-service points me to the place of filled errors, if any, during a transaction | 0.68 | | |
C28 | The e-service allows me to update my records online | 0.66 | | |
C29 | The e-service can be completed incrementally (at different times) | 0.68 | | |
C30 | The e-service removes any potential under-the-table cost (tips) to get the service from the e-government agency | | 0.60 | |
C31 | The e-service reduces the bureaucratic process | | 0.61 | |
C32 | The e-service offers tools for users with special needs (touch screen, Dictaphone, ...) | 0.61 | | |
C33 | The information is provided in different languages (Arabic, English, Turkish, ...) | 0.51 | | |
C34 | The e-service provides a summary report on completion with date, time, checkup list, ... | 0.61 | | |
C35 | There is a strong incentive for using the e-service (such as paperless processing, extended deadlines, less cost, ...) | 0.63 | | |
C36 | I am afraid my personal data may be used for other purposes | | | | 0.74
C37 | The e-service obliges me to keep a record of documents in case of a future audit | | | | 0.69
C38 | The e-service may lead to a wrong payment that needs further correction | | | | 0.71
C39 | I worry about conducting transactions online requiring personal financial information such as visa or account numbers | | | | 0.74
C40 | Using the e-service leads to fewer interactions with people | | | | 0.50
C41 | The password and renewal costs of the e-service are reasonable | 0.52 | 0.46 | |
C42 | The internet subscription cost is reasonable | 0.51 | 0.43 | |
C43 | The e-service reduces my travel cost to get the service from the e-government agency | | 0.59 | |
C44 | It takes a long time to arrange access to the e-service (including arranging and renewing a password, and internet subscription) | | | 0.77 |
C45 | It takes a long time for the e-service homepage to load | | | 0.86 |
C46 | It takes a long time to find the needed information on the e-service homepage | | | 0.84 |
C47 | It takes a long time to download/fill in the e-service application | | | 0.86 |
C48 | It takes several attempts to complete the e-service due to system break-downs | | | 0.83 |
C49 | It takes a long time to acknowledge the completion of the e-service | | | 0.86 |
Kaiser-Meyer-Olkin (KMO) test: 0.98
Bartlett's sphericity test (df): 56687 (153)
Table 1: The principal component analysis and loadings of the component matrix
(F1 = Benefit and Opportunity; F2 = Cost-Money; F3 = Cost-Time; F4 = Risk)


Internal consistency reliability was assessed using Cronbach's alpha. Computed over all
questionnaire items, the alpha exceeds 0.98, which indicates a high correlation among the responses
to the questions and hence a consistently reliable questionnaire. Furthermore, Cronbach's alpha was
computed for each dimension within each e-service type. All values exceed 0.89, which is very
acceptable, as shown in Table 2. Table 2 also shows the assignment of questions to each COBRAS
construct.
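For reference, Cronbach's alpha for one dimension can be computed directly from the item data. A minimal sketch, assuming `df` holds the item columns as above:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of scale)."""
    k = items.shape[1]                           # number of items in the dimension
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# e.g. the information-quality items of the Benefit construct (cf. Table 2):
# print(cronbach_alpha(df[["C1", "C2", "C3", "C4", "C5", "C6", "C7"]]))
```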

Dimension | All sample (n=2785) | Informational (n=2258) | Interactive/Transactional (n=243) | Personalized (n=284)
Cost Construct | | | |
- Tangible cost: C20-C21; C30-C31; C41-C43 | 96.0 | 93.2 | 93.5 | 91.5
- Intangible cost: C44-C49 | 93.2 | 93.6 | 93.3 | 91.6
Risk Construct | | | |
- Personal risk: C36 & C40 | 91.0 | 91.9 | 91.0 | 89.9
- Financial audit risk: C37-C39 | 91.3 | 91.1 | 91.0 | 89.3
Benefit Construct | | | |
- Information quality: C1-C7 | 97.7 | 98.1 | 97.7 | 97.2
- Service quality: C8-C18 | 97.1 | 98.9 | 97.5 | 97.0
Opportunity Construct | | | |
- Service support: C19; C22-C26 | 97.2 | 97.7 | 97.1 | 96.7
- Technology support: C27-C29 & C32-C35 | 97.3 | 97.6 | 97.0 | 96.3
All Items | 98.3 | 98.7 | 98.3 | 98.0
Table 2: Cronbach's alpha results (values x 100)

4.5 Satisfaction Analysis
To measure users' satisfaction, respondents were asked to indicate their satisfaction level, after
using an e-service, on a set of items reflecting their satisfaction with each of the constructs: cost,
benefit, risk and opportunity. A summary of the results is presented in Table 3. It is of interest to
note that the overall average user satisfaction level is 72%. The personalized e-services have the
highest satisfaction level (77%), showing that the authentication cost and risk are well balanced by
the benefit from the opportunity of having personalized services. This is not the case for the
Interactive/Transactional group of e-services, which has a satisfaction level of 65%: this group
requires the same level of authentication, and carries the same risk, but does not give users as much
advantage/opportunity as the personalized e-services. The informational e-services, in turn, require
no authentication and entail no such cost; hence they are well received by the users.
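The per-type averages in Table 3 amount to a simple group-by mean; note that the 72% overall figure matches the unweighted mean of the three group averages ((75 + 65 + 77) / 3 ≈ 72). A sketch, assuming hypothetical columns `service_type` and `satisfaction` (rescaled to 0-100) in the DataFrame used above:

```python
# Group-by mean satisfaction per e-service type (cf. Table 3).
per_type = df.groupby("service_type")["satisfaction"].mean()
overall = per_type.mean()   # unweighted mean over the three service types
print(per_type.round(0), f"overall = {overall:.0f}%")
```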

E-service | Average satisfaction
Informational e-services | 75%
Interactive/Transactional e-services | 65%
Personalized e-services | 77%
Overall average satisfaction | 72%
Table 3: Users' satisfaction levels across e-services
4.6 COBRAS Model Analysis
Structural Equation Modeling (SEM) was deployed to test the theorized model in Figure 1, using
the AMOS software (v.18) to conduct the analysis. The results support the proposition that the user
satisfaction measurement model can be explained through a satisfaction analysis based on the
cost-benefit and risk-opportunity constructs. Summary SEM statistics are provided in Table 4. The
findings of the different tests support the fit of the COBRAS model to the data. The measurement
model has CMIN/df values of 2.90 and 3.00 (significant with p < 0.001) for the informational and
personalized services respectively; both values are below the accepted level of 5, indicating a good
fit for the model (Garson, 2005). Also, the RMSEA values for all services are at or below the 0.08
threshold, indicating a good fit across services. However, the NFI, CFI and TLI values are slightly
below 0.90 for some of the services, indicating an acceptable, though not perfect, level of support
for the hypothesized relationships. In summary, the fit indices provide reasonable support for the
model across the three e-service types, so the conceptual theoretical model fits our data.
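The study ran the SEM in AMOS v.18. For readers without AMOS, an equivalent specification can be sketched in the open-source semopy package; the measurement model below is abbreviated to a few indicators per construct, and the satisfaction items (S1-S3) are hypothetical placeholders, so this is a sketch of the approach rather than the authors' model file.

```python
import semopy

# Abbreviated measurement model: a few indicators per construct (the full model
# uses all 49 items); S1-S3 are hypothetical satisfaction items. The last line
# is the structural model with the four COBRAS paths (H1-H4).
MODEL_DESC = """
Cost =~ C20 + C21 + C44 + C45
Benefit =~ C1 + C8 + C12
Risk =~ C36 + C37 + C39
Opportunity =~ C24 + C27 + C35
Satisfaction =~ S1 + S2 + S3
Satisfaction ~ Cost + Benefit + Risk + Opportunity
"""

model = semopy.Model(MODEL_DESC)
model.fit(df)                    # maximum-likelihood estimation on the item data
print(model.inspect())           # path coefficients and loadings (cf. Table 4)
print(semopy.calc_stats(model))  # chi-square, CFI, TLI, NFI, RMSEA fit indices
```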

Test | Accepted level | Informational | Interactive/Transactional | Personalized
CMIN (df) | | 3175 (1092) | 13417 (1092) | 3277 (1092)
CMIN/df | < 5.00 | 2.90 | 12.28 | 3.00
NFI | > 0.90** | 0.81 | 0.90 | 0.80
CFI | > 0.90** | 0.87 | 0.90 | 0.85
TLI | > 0.90** | 0.87 | 0.90 | 0.87
RMSEA | < 0.08* | 0.08 | 0.07 | 0.08
LO 90 (P) | < 0.08* | 0.08 (0.00) | 0.07 (0.00) | 0.08 (0.00)
Path analysis (coefficients to satisfaction):
Cost | | 0.34 | -0.10 | -0.11
Risk | | -0.40 | -0.001 | 0.10
Benefit | | 0.44 | 0.52*** | 0.37
Opportunity | | -3.31 | -3.98 | -1.14
* lower is better; ** higher is better; *** significant at 0.01
Table 4: Fit indices and path coefficients for the Informational, Interactive/Transactional and Personalized e-services

Table 4 also shows the path coefficients for the hypothesized relationships. The coefficient values
for the cost-satisfaction relationship are negative for the authenticated e-services, in support of H1
(the lower the e-service cost, the higher the user satisfaction), but this is not the case for the
informational e-services. The coefficient values for the benefit-satisfaction relationships are all
positive, in support of H2 (the higher the e-service benefit, the higher the user satisfaction). The
coefficient values for the risk-satisfaction relationships support H3 (the lower the e-service risk, the
higher the user satisfaction) for the non-personalized e-services. Finally, the coefficients of the
opportunity-satisfaction relationships are negative and do not support H4 (the higher the e-service
opportunity, the higher the user satisfaction). This finding is interesting and requires further analysis
and investigation, looking more closely at the demographic data and characteristics of users.

5. CONCLUSION
Unlike other suggested techniques, the proposed COBRAS model overcomes the shortcomings of
previous models. It integrates all the KPIs in the literature related to users' satisfaction with
e-services. Its measurement is based on the cost-benefit and risk-opportunity analyses used in the
management and economics fields. COBRAS was developed by analogy to the well-known strategic
management concept of SWOT analysis. COBRAS provides a new approach to evaluating
satisfaction from any stakeholder's perspective. It also allows policy makers to compare one or more
homogeneous e-services in the same country, or to compare them to similar e-services in other
countries. The COBRAS model was confirmed as a useful tool for evaluating success within the
e-government context, based on theoretical analogy and experimental data analysis. Although no
previous study has directly applied the suggested model, the results are consistent with those
reported in previous studies, including Jang (2010); Foley (2008); Bertot et al. (2008);
Rotchanakitumnuai (2008); and Udo et al. (2008). Moreover, in line with DeLone and McLean
(2003), and also Jang (2010) and Wang and Liao (2008), our findings confirmed that information
quality and service quality have a significant impact on users' satisfaction. The practical implication
is that e-government agencies should provide e-government services with better online support for
man-machine interaction and operation assistance in end-user online environments. The findings
show that the conceptual model fits all three of the informational, interactive/transactional and
personalized e-service types. Future research is needed to establish the significance of the
relationships by collecting more samples from different users in other countries. The established
links between user satisfaction and the benefit, cost, risk and opportunity constructs pave the way to
developing more prescriptive methods; for example, the framework can be combined with the
well-known operations management tool of data envelopment analysis to provide more prescriptive
guidance for improving e-services than just reporting descriptive statistical values (Lee et al., 2008).

Acknowledgment
The authors would like to acknowledge the support of the European Union Framework 7 programme
for funding the CEES project under contract PIAP-GA-2008-230658: Citizen-oriented Evaluation of
E-Government Services: A Reference Process Model; the American University of Beirut, Lebanon;
Türksat and METU, Turkey; and Brunel University, UK.

6. REFERENCES
Aladwani, A. and Palviab, P. (2002), Developing and validating an instrument for measuring user-
perceived web quality, Information and Management, 39(6): 467476
Alsaghier, H., Ford, M., Nguyen, A., and Hexel, R. (2009), Conceptualizing Citizens Trust in e-
Government: Application of Q Methodology, Electronic Journal of e-Government, 7(4): 295-
310
Azad, B. and Faraj, S. (2008), Making e-government systems workable: Exploring the evolution of
frames, Journal of Strategic Information Systems, 17(2): 7598
Balog, A., Bdulescu, G., Bdulescu, R and Petrescu, F (2008), E-ServEval: a system for quality
evaluation of the on-line public services, Revista Informatica Economic, 2(46): 18-21
Banker, R.D., Charnes, A., and Cooper, W.W. (1984), Some Models for estimating technical and
scale inefficiencies in data envelopment analysis, Management Science, 30(9):1078-1092.
Barnes, S., and Vidgen, R. (2006), Data triangulation and web quality metrics: A case study in e-
government, Information & Management, 43(6): 767-777
Bauer, C. and Scharl, A. (2000), Quantitative Evaluation of Web Site Content and Structure, Library
Computing, 19(3/4): 134-146
Behkamal, B. Kahani, M. and Akbari, M. (2009), Customizing ISO 9126 quality model for evaluation
of B2B applications, Information and Software Technology, 51(3): 599609
Berner, F. and Unisys (2006), E-government Trendbarometer, In Kunstelj, M. Jukic, T. and Vintar,
M. (2009), How to fully exploit the results of e-government user surveys: the case of Slovenia,
International Review of Administrative Sciences, 75(1): 117-149
Bertot, J, Jaeger, P and McClure, C (2008), Citizen-centered E-Government Services: Benefits, Costs,
and Research Needs, Proceedings of the 9
th
International Digital Government Research
Conference, Montreal
Carter, L. and Belanger, F. (2004), Citizen Adoption of Electronic Government Initiatives,
Proceedings of the 37
th
Hawaii International Conference on System Sciences, 2004, New York:
IEEE Publishing
Chang, I.-C., Li, Y.C., Hung, W.-F. and Hwang, H.-G. (2005), An Empirical Study on the Impact of
Quality Antecedents on Tax Payers Acceptance of Internet Tax-filling Systems, Government
Information Quarterly, 22(): 389410
tGov Workshop11 (1GOV11)
March 17-18, 2011, Brunel University, West London, UB83PH
14
Osman et al. (2011)
A new COBRAS framework to evaluate e-government services: a citizen centric perspective

Chen, C-W (2010), Impact of quality antecedents on taxpayer satisfaction with online tax-filing
systems: An empirical study, Information & Management, 47(5-6): 308315
Chutimaskul, W. Funilkul, S. and Chongsuphajaisiddhi, V. (2008), The Quality Framework of e-
Government Development, Proceedings of 2
ed
International Conference on Theory and Practice
of Electronic Governance (ICEGOV2008), Cairo, Egypt,1-4 December, 2008
Colesca, S. (2009), Understanding Trust in e-Government, Inzinerine Ekonomika-Engineering
Economics, (3): 7-15
Coursey, D. and D.F. Norris (2008), Models of E-Government: Are they correct? An empirical Assessment. Public
administration Review, 68(3): 523-436.
Davis, F. (1989), Perceived usefulness, perceived ease of use, and user acceptance of information
technology, MIS Quarterly, 13 (3): 319339
Dawes, S. (2009), Governance in the digital age: A research and action framework for an uncertain
future, Government Information Quarterly, 26(2): 257264
DeLone, W. and McLean E. (1992), Information Systems Success: The Quest for the Dependent
Variable, Information Systems Research, 3(1): 60-95
DeLone, W. and McLean, E. (2003), The DeLone and McLean Model of Information Systems
Success: A Ten-Year Update, Journal of Management Information Systems, 19(4): 9-30
Dyson, R.G. (2000), Strategy, Performance and Operational Research. The Journal of the Operational
Research Society, 51(1): 5-11
Essink, L. (1988), A conceptual framework for information systems development, In T. Olle, H. Sol,
and A. Verrijn-Stuart (Eds.), Information systems design methodologies: Improving the practice,
Amsterdam: North Holland
Esteves, J and Joseph, R (2008), A comprehensive framework for the assessment of eGovernment
projects, Government Information Quarterly, 25(1): 118132
Evangelidis, A (2004), FRAMES A Risk Assessment Framework for e-Services, Electronic Journal
of e-Government, 2 (1): 21-30
Evangelidis, A., Akomode, A. Taleb-Bendiab and M. Taylor (2002), Risk Assessment & Success
Factors for e-Government in a UK Establishment, in Proceedings of Electronic Government,
First International Conference, Aix-en-Provence, France, September 2-6, 2002: 395-402
Evans, D. and Yen, D. (2005), e-Government: An analysis for implementation: Framework for
understanding cultural and social impact, Government Information Quarterly, 22(3): 354373
Fassnacht, M and Koese, I. (2006), Quality of electronic services, Journal of Service Research, 9(1):
1937
Federal CIO Council (2002), Value Measurement Methodology, Retrieved at:
http://www.cio.gov/documents/ValueMeasuring_Methodology_HowToGuide_Oct_2002.pdf
(June, 2010)
Fishbein, M. and Ajzen, I. (1975), Belief, Attitude, Intention and Behavior: An Introduction to Theory
and Research, Addison-Wesley, Reading, MA
Floropoulos, J. Spathis, C. Halvatzis, D. and Tsipouridou, M. (2010), Measuring the success of the
Greek Taxation Information System, International Journal of Information Management, 30(1):
4756
Foley, K., Hamilton, B. (2006), Using the value measuring methodology to evaluate: government
initiatives. In: Proceedings of the Crystal Ball User Conference, Retrieved at:
http://www.decisioneering.com/cbuc/2006/papers/cbuc06-foley.pdf (June, 2010)
Foley, P. (2008) Realising the transformation agenda: enhancing citizen use of eGovernment,
European Journal of ePractice, (4): 44-58
Ghapanchi, A., Albadvi, A. and Zarei, B. (2008), A framework for e-government planning and
implementation, Electronic Government, An International Journal, 5(1): 7190
Gilbert, D., Balestrini, P. and Littleboy, D. (2004), Barriers and Benefits in the Adoption of e-
Government, The International Journal of Public Sector Management, 17(4): 286-301
Gouscos, D. Kalikakis, M. Legal, M. and Papadopoulou, S. (2007), A general model of performance
and quality for one-stop e-Government service offerings, Government Information Quarterly,
24(4): 860-885
tGov Workshop11 (1GOV11)
March 17-18, 2011, Brunel University, West London, UB83PH
15
Osman et al. (2011)
A new COBRAS framework to evaluate e-government services: a citizen centric perspective

Grant, G. and Chau, D. (2005), Developing a generic framework for e-government, Journal of Global Information Management, 13(1): 1-30
Gupta, M. and Jana, D. (2003), e-Government evaluation: A framework and case study, Government Information Quarterly, 20(4): 365-387
Halaris, C., Magoutas, B., Papadomichelaki, X. and Mentzas, G. (2007), Classification and synthesis of quality approaches in e-government services, Internet Research, 17(4): 378-401
Hammer, M. and Al-Qahtani, F. (2009), Enhancing the case for electronic government in developing nations: A people-centric study focused in Saudi Arabia, Government Information Quarterly, 26(1): 137-143
Henriksson, A. et al. (2007), Evaluation instrument for e-government Websites, Electronic Government: an International Journal, 4(2): 204-226
Holliday, I. (2002), Building e-government in East and Southeast Asia: regional rhetoric and national (in)action, Public Administration and Development, 22(4): 323-335
Horan, T. and Abhichandani, T. (2006), Evaluating user satisfaction in an e-government initiative: results of structural equation modeling and focus group discussions, Journal of Information Technology Management, 17(4): 187-198
Horst, M., Kuttschreuter, M. and Gutteling, J. (2007), Perceived usefulness, personal experiences, risk perception and trust as determinants of adoption of e-government services in The Netherlands, Computers in Human Behavior, 23(4): 1838-1852
Hu, Y., Xiao, J., Pang, J. and Xie, K. (2005), A Research on the Appraisal Framework of eGovernment Project Success, in Proceedings of the 7th International Conference on Electronic Commerce (ICEC '05), http://portal.acm.org/citation.cfm?id=1089647
Huang, C. and Chao, M-H. (2001), Managing www in Public Administration: Uses and Misuses, Government Information Quarterly, 18(4): 357-373
Hung, S-Y., Chang, C-M. and Yu, T-J. (2006), Determinants of user acceptance of the e-Government services: The case of Online Tax Filing and Payment System, Government Information Quarterly, 23(1): 97-122
Irani, Z., Elliman, T. and Jackson, P. (2007), Electronic Transformation of Government in the UK, European Journal of Information Systems, 16(3): 327-335
Irani, Z., Love, P.E.D. and Jones, S. (2008), Learning lessons from evaluating eGovernment: Reflective case experiences that support transformational government, The Journal of Strategic Information Systems, 17(2): 155-164
Iwaarden, J., Wiele, T., Ball, L. and Millen, R. (2003), Applying SERVQUAL to web sites: an exploratory study, International Journal of Quality and Reliability Management, 20(8): 919-935
Jaeger, P. and Bertot, J. (2010), Designing, Implementing, and Evaluating User-centered and Citizen-
centered E-government, International Journal of Electronic Government Research, 6(2): 1-17
Jang, C-L (2010), Measuring Electronic Government Procurement Success and Testing for the
Moderating Effect of Computer Self-efficacy, International Journal of Digital Content
Technology and its Applications, 4(3): 224-232
Jansen, J., de Vries, S. and van Schaik, P. (2010), The Contextual Benchmark Method: Benchmarking e-Government services, Government Information Quarterly, 27(3): 213-219
Kaisara, G. and Pather, S. (In press), The e-Government evaluation challenge: A South African Batho
Pele-aligned service quality approach, Government Information Quarterly,
doi:10.1016/j.giq.2010.07.008
Kertesz, S. (2003), Cost-Benefit Analysis of e-Government Investments, Retrieved at:
http://www.edemocratie.ro/publicatii/Cost-Benefit.pdf (August, 2010)
Kim, H., Pan, G. and Pan, S. (2007), Managing IT-enabled transformation in the public sector: A case study on e-government in South Korea, Government Information Quarterly, 24(2): 338-352
Kim, T., Im, K. and Park, S. (2005), Intelligent measuring and improving model for customer satisfaction level in e-government, in Proceedings of the 4th International Conference on Electronic Government (EGOV 2005), Copenhagen, August 22-26
Layne, K. and Lee, J. (2001), Developing fully functional e-government: A four stage model,
Government Information Quarterly, 18(2): 122-136
Lee, H., Irani, Z., Osman, I.H., Balci, A., Ozkan, S. and Medeni, T.D. (2008), Research Note: Toward a Reference Process Model for Citizen Oriented Evaluation of E-Government Services, Transforming Government: People, Process and Policy, 2(4): 297-310
Liu, M., Wang, Z. and Xie, H. (2010), Evaluation of E-government Web Site, in Proceedings of the 2010 International Conference on Computer Design and Applications (ICCDA 2010), Qinhuangdao, China, June 25-27, 2010
Loiacono, E. Watson, R. and Goodhue, D. (2000), WebQual: a web site quality instrument, Working
Paper 2000-126-0, University of Georgia, Atlanta, GA
Magoutas, B. and Mentzas, G. (2010), SALT: A semantic adaptive framework for monitoring citizen satisfaction from e-government services, Expert Systems with Applications, 37(6): 4292-4300
Magoutas, B., Schmidt, K-W., Mentzas, G. and Stojanovic, L. (2010), An adaptive e-questionnaire for measuring user perceived portal quality, International Journal of Human-Computer Studies, 68(10): 729-745
Mechling, J. (2002), Building a Methodology for Measuring the Value of E-Services, Booz Allen Hamilton, Washington, D.C., pp. 1-52, Retrieved at: http://www.estrategy.gov/documents/measuring_finalreport.pdf (Sep, 2010)
Mechling, J. and Hamilton, A. (2002), Building a methodology for measuring the value of E-services, Washington, D.C.: Booz Allen Hamilton
Meneklis, V. and Douligeris, C. (2010), Bridging theory and practice in e-government: A set of guidelines for architectural design, Government Information Quarterly, 27(1): 70-81
Millard, J. (2008), eGovernment measurement for policy makers, European Journal of ePractice, 4(August): 19-32
Mohamed, N., Hussin, H. and Hussein, R. (2009), Measuring Users' Satisfaction with Malaysia's Electronic Government Systems, Electronic Journal of e-Government, 7(3): 283-294
Moon, M., Welch, E. and Wong, W. (2005), What drives global e-governance? An exploratory study at a macro level, in Proceedings of the 38th Hawaii International Conference on System Sciences, 5: 131-135
Nour, M., Abdel Rahman, A. and Fadlalla, A. (2008), A context-based integrative framework for e-government initiatives, Government Information Quarterly, 25(3): 448-461
Papadomichelaki, X. and Mentzas, G. (2009), A Multiple-Item Scale for Assessing E-Government Service Quality, in Wimmer, M. et al. (Eds.), EGOV 2009, Springer-Verlag, Berlin-Heidelberg, Germany: 163-175
Parasuraman, A., Zeithaml, V. and Berry, L. (1988), SERVQUAL: a multi-item scale for measuring consumer perceptions of service quality, Journal of Retailing, 64(1): 12-40
Parasuraman, A., Zeithaml, V. and Malhotra, A. (2005), E-S-QUAL: a multiple-item scale for
assessing electronic service quality, Journal of Service Research, 7(3): 213233
Pazalos, K. Loukis, E. and Nikolopoulos, V. (In press), A structured methodology for assessing and
improving e-services in digital cities, Telematics and Informatics, DOI:
10.1016/j.tele.2010.05.002
Perry, J.L., Brudney, J.L., Coursey, D. and Littlepage, L. (2008), What Drives Morally Committed Citizens? A Study of the Antecedents of Public Service Motivation, Public Administration Review, 68: 445-458
Rotchanakitumnuai, S. (2008), Measuring e-government service value with the E-GOVSQUAL-RISK
model, Business Process Management Journal, 14(5): 724-737
Rowley, J. (2006), An analysis of the e-service literature: towards a research agenda, Internet Research, 16(3): 339-359
Rowley, J. (2011), e-Government stakeholders - Who are they and what do they want? International Journal of Information Management, doi:10.1016/j.ijinfomgt.2010.05.005
Seddon, P. (1997), A Respecification and Extension of the DeLone and McLean Model of IS Success,
Information Systems Research, 8(3): 240-253
Seel, C. and Thomas, O. (2007), Process Performance Measurement for E-Government: A Case Scenario from the German Ministerial Administration, Journal of Systemics, Cybernetics and Informatics, 5(3): 23-29
Tan, C-W., Benbasat, I. and Cenfetelli, R. (2008), Building Citizen Trust towards e-Government Services: Do High Quality Websites Matter? in Proceedings of the 41st Hawaii International Conference on System Sciences, Waikoloa, Big Island, Hawaii, January 7-10, 2008, New York: IEEE Publishing
Teo, T., Srivastava, S. and Jiang, L. (2007), Assessing E-Government Success: Integrating Trust and DeLone and McLean's IS Success Model, Academy of Management Meeting 2007 (AOM 2007), August 3-8, 2007, Philadelphia, Pennsylvania, U.S.A.
Udo, G., Bagchi, K. and Kirs, P. (2008), Assessing Web Service Quality Dimensions: The E-Servperf Approach, Issues in Information Systems, 11(2): 313-322
van Dijk, J., Peters, O. and Ebbers, W. (2008), Explaining the acceptance and use of government internet services: A multivariate analysis of 2006 survey data in the Netherlands, Government Information Quarterly, 25(3): 379-399
Van Ryzin, G., Muzzio, D., Immerwahr, S., Gulick, L. and Martinez, E. (2004), Drivers and Consequences of Citizen Satisfaction: An Application of the American Customer Satisfaction Index Model to New York City, Public Administration Review, 64(3): 331-341
Verdegem, P. and Verleye, G. (2009), User-centered E-Government in practice: A comprehensive model for measuring user satisfaction, Government Information Quarterly, 26(3): 487-497
Wang, L., Bretschneider, S. and Gant, J. (2005), Evaluating web-based e-government services with a citizen-centric approach, in Proceedings of the 38th Hawaii International Conference on System Sciences, 3-6 January 2005, Hawaii
Wang, Y-S. (2003), The adoption of electronic tax filing systems: an empirical study, Government Information Quarterly, 20(4): 333-352
Wang, Y-S. and Liao, Y-W. (2008), Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success, Government Information Quarterly, 25(4): 717-733
Webb, H. and Webb, L. (2004), SiteQual: an integrated measure of web site quality, Journal of Enterprise Information Management, 17(6): 430-440
Weerakkody, V. and Dhillon, G. (2008), Moving from e-government to t-government: A study of
process reengineering challenges in a UK local authority context, International Journal of
Electronic Government Research, 4(4): 1-16.
Yoo, B. and Donthu, N. (2001), Developing a Scale to Measure the Perceived Quality of an Internet Shopping Site (Sitequal), Quarterly Journal of Electronic Commerce, 2(1): 31-46
Yu, C-C. (2008), Building a Value-Centric e-Government Service Framework Based on a Business Model Perspective, Lecture Notes in Computer Science, Vol. 5184: 160-171, DOI: 10.1007/978-3-540-85204-9_14
Yu, C-C. and Wang, H-I. (2008), Towards Refining Digital Divide Strategies via Strategy Gap Analysis - The Method and Case Study, The Journal of International Management Studies, 3(2), August 2008
Zarei, B. and Ghapanchi, A. (2008), Guidelines for government-to-government initiative architecture in developing countries, International Journal of Information Management, 28(4): 277-284
Zhang, X. and Prybutok, V. (2005), A Consumer Perspective of E-service Quality, IEEE Transactions on Engineering Management, 52(4): 461-477
Appendix
Table a: E-government service evaluation models and performed tests

Study | Model/Indicators | Performed test
Alsaghier, Ford, Nguyen and Hexel (2009) | E-government trust model: intention to engage; perceived risk; trust in government; disposition to trust; familiarity; institution-based trust; perceived website quality; perceived ease of use; and perceived usefulness | Q-methodology
Berner and Unisys (2006) | Satisfaction with e-government: correlation between internet experience and awareness/use/future use | Bivariate analysis
Carter and Belanger (2004) | Adapted TAM + DOI + web trust models | Multiple regression analysis
Chang et al. (2005) | Adapted TAM + DeLone and McLean quality indicators | Multiple regression analysis
Gilbert et al. (2004) | Adapted TAM model + SERVQUAL | Multiple regression analysis
Henriksson et al. (2007) | eGwet: instrument questions grouped into six categories to evaluate the quality of government websites: security/privacy; usability; content; services; citizen participation; and features (the presence of commercial advertising, external links and advanced search capabilities) | Conceptual model
Horan and Abhichandani (2006) | EGOVSAT: e-government satisfaction model consisting of utility; efficiency; customization; reliability (whether the website functions appropriately in terms of technology as well as accuracy of content); and flexibility | Confirmatory factor analysis (CFA)
Hu et al. (2005) | E-government success appraisal model covering seven major variables, five of them within e-government: system quality; information and service quality; the foundation and environment of e-government; perceived usefulness; and user satisfaction. Together these five variables produce impact on users and impact on government, through which the goal of e-government is realized | Conceptual model
Hung, Chang and Yu (2006) | Theory of planned behavior (TPB) model developed to evaluate user acceptance. Variables include perceived usefulness; perceived ease of use; subjective norms; perceived behavioral control and attitudes; perceived risk; personal innovativeness; and trust. Measurements of behavioral intention are derived from Taylor and Todd | SEM and normal statistics (CFA)
Kaisara and Pather (2009) | e-SQ dimensions (information quality; security/trust; communication; site aesthetics; design; access) | Descriptive statistics
Kim et al. (2005) | g-CSI: Customer Satisfaction Index for e-government, integrating the Korean National Customer Satisfaction Index (NCSI) and the American Customer Satisfaction Index (ACSI). Perceived quality (information; process; customer; service; budget execution; and management innovation) and user expectation lead to user satisfaction, which moderates user complaints and other outcomes such as trust and reuse | Sensitivity analysis (the difference in error when a feature is removed versus left in place) as a measure for feature selection
Magoutas and Mentzas (2010) and Magoutas et al. (2010) | MAQM (Model for Adaptive Quality Measurement): evaluates portal and e-service quality as perceived by users in an adaptive manner; comprises ontologies covering quality aspects, questions and questionnaires, portal characteristics, and problems encountered by users while using the portal | Conceptual model
Mohamed, Hussin and Hussein (2009) | End-user computing satisfaction model (EUCS: content; accuracy; format; ease of use; timeliness) | Confirmatory and structural equation model (AMOS)
Rotchanakitumnuai (2008) | E-GOVSQUAL-RISK model: service quality (service design; website design; technology support; and user support) and perceived risk (performance risk; privacy risk; social risk; time risk; and financial risk) | In-depth interviews and content analysis
Tan et al. (2008) | Adapted TAM model + SERVQUAL indicators + trust indicators | Partial least squares
Teo, Srivastava and Jiang (2007) | Trust in e-government websites related to their consequent success as defined by DeLone and McLean (seven-point Likert scales); questionnaires distributed to university students in Singapore | Partial least squares
Seel and Thomas (2007) | Reference model based on ratios to evaluate process performance. Indicators include total documents sent out; number of defective units; average cycle time; number of corrections; average correction time; percentage of rework-related costs; total process costs per unit; and percentage of efforts | Numbers, ratios, percentages and values
University of Bath (2001) and Barnes and Vidgen (2006) | eQual: covers information quality; site design; trust; empathy; and usability. Users rate target sites against each of a range of qualities and rate each quality for importance | Quantitative and qualitative tests (e.g., ANOVA, frequencies, mean, median)
Van Ryzin et al. (2004) | Adapted ACSI model | Partial least squares
Verdegem and Verleye (2009) | E-government acceptance model. Aspects perceived as extremely important for e-government service delivery: communication about services; recency of information; security; help or guidance; personal contact; and centralization/integration. The indicators cluster into three groups: (1) access to the service; (2) use of the service; (3) impact of the service | Exploratory data analysis and logistic regression
Wang (2003) | Adapted TAM model | Confirmatory factor analysis
Papadomichelaki and Mentzas (2009) | e-GovQual: 25 quality attributes (55 questions) classified under four quality dimensions: reliability; efficiency; citizen support; and trust | Confirmatory factor analysis
Yu (2008) | Value-based model built on the Balanced Scorecard, Strategy Map and strategy gap analysis | Gap analysis
Yu and Wang (2008) | Strategy gap analysis based on the Balanced Scorecard and Strategy Map models | Gap analysis
Table b: Some of the suggested models to evaluate e-service quality

Author/s | Model | Dimensions of e-service quality
bizrate.com | Bizrate.com | Ease of ordering; web site performance; privacy policies; shipping and handling; on-time delivery; product selection; product representation; customer support; product information; price
consumerreports.org | E-Ratings | Credibility (privacy, security, customer service and disclosure); usability (design and navigation of the web site)
Li, Tan and Xie (2002) | Li, Tan and Xie (2002) | Tangibles; integration of communication; quality of information; reliability; empathy; responsiveness; assurance
Loiacono, Watson and Goodhue (2000) | WebQual | Information fit to task; business process; response time; intuitiveness; substitutability; visual appeal; interaction; flow; integrated communication; innovativeness; trust; design
Parasuraman et al. (2005) | E-RecS-QUAL | Fulfillment; compensation; responsiveness; efficiency; system availability; contact; privacy
webbyawards.com | Webby Awards | Content; visual design; interactivity; structure and navigation; functionality; overall experience
Wolfinbarger and Gilly (2002, 2003) | .comQ/eTailQ | Web site design; reliability; customer service; privacy
worldbestwebsites.com | World's Best Websites | Functionality (accessibility, speed and bandwidth sensitivity, HTML quality, navigation and links, and legality); design (graphic design, user friendliness, aesthetics, alignment, layout and integration); content (purpose, human interactivity, information process, verbal expression and attention to detail); originality (creativity, distinctiveness and vision); professionalism (customer service, values and focus of message)
Yoo and Donthu (2001) | SITEQUAL | Processing speed; aesthetic design; responsiveness; ease of use; interactivity
Zeithaml et al. (2002) | E-S-QUAL | Compensation; fulfillment; responsiveness; efficiency; contact; reliability; privacy
Table c: The criteria pool for website evaluation

Factors/criteria | Number of supporting studies
Ease of navigation | 49
Content relevancy and usefulness | 44
Appealing and consistent style | 44
Logical structure | 39
Security protection | 38
Interactive communications | 37
Ease of online transaction | 35
User-friendly interface | 34
Comprehensive content coverage | 33
Loading and processing speed | 32
Up-to-date content | 31
Proper multimedia | 30
Well and quick linkage | 29
Customized service | 28
Easy to understand and read | 27
Searching mechanism | 26
Ease of access | 25
Privacy policy | 25
Quick response to customer | 25
Reliable and innovative system | 24
Accuracy | 24
Customer service support | 23
Easy to find target information | 22
Online assistance and help | 16
Data retrieve mechanism | 14
Playfulness | 13
Convenient payment methods | 12
Know the present location | 10
Overview of selected items | 6
Easy to cancel or modify order | 5

Source: Adapted from Chiou, W., Lin, C. and Perng, C. (2010), A strategic framework for website evaluation based on a review of the literature from 1995-2006, Information & Management, 47(5-6): 282-290