
Telematics and Informatics 29 (2012) 314–323. doi:10.1016/j.tele.2011.11.004


Evaluating portal performance: A study of the National Higher Education Fund Corporation (PTPTN) portal

Sulaiman Ainin *, Shamshul Bahri, Asri Ahmad
Faculty of Business and Accountancy, University of Malaya, 50603 Kuala Lumpur, Malaysia

* Corresponding author. Tel.: +60 3 79673853; fax: +60 3 79673810. E-mail addresses: ainins@um.edu.my (S. Ainin), esbi@um.edu.my (S. Bahri), asri_ahmad1@yahoo.com (A. Ahmad).

Article history: Received 5 May 2011; received in revised form 3 November 2011; accepted 5 November 2011; available online 23 November 2011.

Keywords: Portal performance; User satisfaction; Malaysia; Information quality; System quality; Service quality; Perceived usefulness

Abstract: This study examines the performance of the National Higher Education Fund Corporation (PTPTN) portal. Performance is viewed in terms of user satisfaction (i.e. from the students' perspective). The study incorporates three constructs (system, information and service quality) introduced by DeLone and McLean, as well as perceived usefulness, first introduced in the technology acceptance model. Empirical data were collected using a survey questionnaire administered to students at two universities in Malaysia. The study illustrates that, in general, the students are satisfied with the portal's performance. Perceived usefulness was found to be the most significant factor influencing their level of satisfaction. The findings will enable PTPTN to enhance the portal's performance.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Portals have been increasingly employed to manage communication between different stakeholders in an organization or initiative. An example of such an initiative is the portal of Malaysia's National Higher Education Fund Corporation (PTPTN). Realizing the importance of higher education to a country's national performance (Barr, 2004), the corporation was established in 1997 to provide loans to students pursuing their studies in local institutions of higher learning. The loan allows students to pay their tuition fees and part of their daily subsistence for the duration of their study. The scheme therefore provides greater opportunities for Malaysians to continue their education. In 2008 there were 1.2 million students who had obtained education loans, amounting to RM26 billion.
In line with developments in Internet technology and the Malaysian government's e-government initiative, PTPTN launched its portal (www.ptptn.gov.my). Whereas students previously had to apply for the loan using an OMR form, the portal allows them to apply for their loan online. In addition, the portal enables students to view the status of their application, the offer letter and the contractual documents. Although the services provided by the portal are relatively limited, it has not escaped complaints, particularly when the system was first implemented. Many of the complaints pertained to data quality and service quality. Several avenues have been used to convey the complaints, such as the daily newspapers and, online, PTPTN's 'complaints online' facility and e-forum. In order to address these complaints, a study was conducted to evaluate the level of students' satisfaction with the PTPTN portal. In addition, the study aimed to identify which aspects of the portal the students are most and least satisfied with. This knowledge would help the organization refine the portal to suit the students' needs.


This study also adds to the small number of empirical studies on portal performance (Urbach et al., 2010).
The paper proceeds with the theoretical background section, in which the key constructs of the study and the hypotheses are developed. The methodology section describes the procedures used for data collection, the operationalization of the constructs and the tests conducted on the proposed model. The results are reported in the findings section. The paper concludes with a discussion of the findings, limitations and contributions of the research.

2. Theoretical background and hypotheses development

A review of the literature illustrates that portal evaluations have been carried out using two main models, i.e. end user satisfaction (Mohamed et al., 2009; Lee et al., 2009) and the DeLone and McLean success model (Masrek, 2007; Hussein et al., 2005). Urbach et al. (2010) extended the DeLone and McLean model with two additional constructs, namely process quality and collaboration quality, as these two constructs are said to be specific to employee portals. Meanwhile, Tojib et al. (2008) proposed three variables (information quality, system quality and system design quality) with nine dimensions to investigate the performance of business-to-employee portals. All these studies measure portal performance from the users' perspective. Users are now more directly involved with the systems as they navigate them, typically via an interactive user interface, thus assuming more responsibility for their own applications. As such, the ability to capture and measure end-user satisfaction serves as a tangible surrogate measure for determining the performance of any IS function, service or application deployed within an organization (Ives et al., 1983), including the PTPTN portal.
User satisfaction refers to the successful interaction between the information system and its users. User satisfaction provides a significant surrogate for the critical product of the information system (Longinidis and Gotzamani, 2009), which cannot be measured directly, namely changes in organizational effectiveness. Bailey and Pearson (1983) define end user satisfaction as the sum of one's feelings or attitudes towards a variety of factors affecting a situation. In addition, Zviran et al. (2005) viewed user satisfaction, in terms of system use and acceptance, as the practical measure of IS success. According to Gatchalian (1999), end user satisfaction is a measure of success in a highly competitive market and of end users' understanding of a product's features and characteristics. Consequently, because the aim of the study was to evaluate the students' level of satisfaction with the portal, we chose the user satisfaction construct as the dependent variable for the study. Using this construct, we were able to gauge the students' level of satisfaction and suggest ways in which the organization may enhance its portal. Although other constructs such as usage could replace user satisfaction as the dependent variable, they would not enable us to reach the aim of the study. Furthermore, use of the system is mandatory, so the usage construct would be less informative.
A review of the literature illustrates that many instruments have been used to measure user satisfaction. This study focuses on two well-established instruments/models: the end user computing satisfaction (EUCS) instrument by Doll and Torkzadeh (1988) and the DeLone and McLean success model (1992, 2003). EUCS has five constructs (content, accuracy, format, ease of use, and timeliness), whereas the DeLone and McLean model has three main independent constructs: system quality, information quality and service quality. Both models have been adapted by many researchers. For example, Azadeh et al. (2009) analysed end-user satisfaction in an Iranian power holding company, while Mohamed et al. (2009) evaluated officers' and directors' satisfaction with Malaysia's electronic government systems. Lee et al. (2009) assessed end-user satisfaction with campus portal services in accordance with the method developed previously by Doll and Torkzadeh (1988).
Masrek (2007) evaluated students' satisfaction with a university portal using the DeLone and McLean model, while Wixom and Todd (2005) proposed a model integrating user satisfaction with technology acceptance. Subsequently, Ong et al. (2009) validated a model to measure user satisfaction with question answering systems. In both the Wixom and Todd and the Ong et al. studies, three constructs were adapted from DeLone and McLean: system quality, information quality and service quality. Seddon and Kiew (1996) and Fan and Fang (2006) tested the DeLone and McLean model and found substantial support for the linkages among system quality, information quality, and user satisfaction.

2.1. System quality

System quality is "concerned with whether there are bugs in the system, the consistency of user interface, ease of use, quality of documentation, and sometimes, quality and maintainability of program code" (1997, p. 246). According to Roldán and Leal (2006), system quality refers to the desired characteristics of the IS itself, which produces the information, and it is related to the quality of the IS output. DeLone and McLean (2003) and Aasheim et al. (2007), among others, highlighted that system quality is recognized through technical features of the network and the IT equipment itself. Accordingly, previous research has identified features such as reliability, response time, accuracy, ease of integration, flexibility and functionality as fundamental facets of system quality (Hu, 2003). Roldán and Leal (2006) found that the system quality of an executive IS exerted a significant positive influence on end-user satisfaction. Since system quality has a significant effect on end-user satisfaction, it is hypothesized that:

H1. System quality will have a significant, positive relationship with User Satisfaction Level.

2.2. Information quality

Information quality is concerned with issues such as the timeliness, accuracy, relevance and format of the information generated by an information system (Madu and Madu, 2002). Information is power, and when it falls into the wrong hands, an organization's competitive advantage can be severely compromised. Similarly, users are becoming increasingly concerned with information integrity. This is defined as the condition in which information or programs are preserved for their intended purpose, including the accuracy and completeness of information systems and the data maintained within those systems.
If the information presented is deemed inaccurate, misleading or biased, users will not be inclined to use the system. Therefore, a high level of confidence in the information presented is important. If the confidence level is particularly low, users will find themselves spending more time justifying or verifying the information instead of applying the insights gathered for decision-making. This simply means that the data presented must be "fit for use" and meet the underlying objectives (Kumar and Ballou, 1998). Additionally, data quality can be measured by its accuracy, completeness, consistency and timeliness (Ballou and Pazer, 1985).
Findings from previous research indicate that information quality is positively related to user satisfaction with information systems (DeLone and McLean, 2003; Almutairi and Subramaniam, 2005). It is therefore hypothesized that user satisfaction is correlated with information quality; hence the second hypothesis is:

H2. There is a significant relationship between information quality and user satisfaction.

2.3. Service quality

Service quality is determined by the difference between customers' expectations of a service and their perceptions of the service they actually receive (Parasuraman et al., 1985). Understanding the attributes of service that customers use to evaluate and define quality service can help organizations develop more effective ways of improving services (Rowley, 1998). Parasuraman et al. (1985) referred to these attributes as "determinants of quality" and identified ten general characteristics which constitute quality in service: reliability, responsiveness, competence, access, courtesy, communication, credibility, security, understanding the customer, and tangibles.
The relationship between service quality and end user satisfaction has attracted considerable interest from researchers in the field of IS (Luarn et al., 2005; Kim et al., 2005) and has long been recognized as playing a crucial role in both the successful use of a firm's IS and company performance, especially today when the market is highly competitive. Studies have found a positive correlation between service quality and user satisfaction with information systems (DeLone and McLean, 2003; Luarn et al., 2005; Kim et al., 2005). Chung and Dauw (2009) also found that service quality has the strongest influence on user satisfaction with health information systems (HIS). It is therefore hypothesized that:

H3. Service quality will have a significant, positive relationship with User Satisfaction Level.

In addition to the DeLone and McLean and EUCS models, Wixom and Todd (2005) and Ong et al. (2009) proposed a model that combines user satisfaction and the technology acceptance model (TAM). One of the TAM constructs that is applicable to portal evaluation is perceived usefulness.

2.4. Perceived usefulness

Perceived usefulness is defined as the degree to which a person believes that using a particular system would enhance his or her task performance (Davis, 1989) and, eventually, his or her level of satisfaction. Teo et al. (1999) explained that perceived usefulness is correlated with messaging, downloading, browsing and purchasing activities, and it is hence applicable to this study. They further stressed that continued usage of the web without any specific purpose may decline over time once the novelty effect of a website wears off.

This may imply that even though a system may seem easy to use and informative, it may later be ignored if it does not provide functionality such as the loan balance and the loan disbursement schedule. Thus the following hypothesis is developed:

H4. Perceived usefulness will have a significant, positive relationship with User Satisfaction Level.

Based on the discussion above, the following research model was developed (Fig. 1).

[Fig. 1. Research model, adapted from Wixom and Todd (2005) and Ong et al. (2009): service quality (SRQ), system quality (SQ), information quality (IQ) and perceived usefulness (PU) are hypothesized (H1–H4) to influence the user satisfaction level.]

3. Methodology

The empirical data for this study were collected using a survey questionnaire. The questionnaire consists of two sections, A and B. Section A consists of four sub-sections, i.e. system quality, information quality, service quality and perceived usefulness. Altogether there were six items on information quality, eight each on system quality and service quality, and seven items on perceived usefulness. A seven-point Likert scale ranging from (1) strongly disagree to (7) strongly agree was used to rate the extent to which respondents agree with the statements. The items on information quality, system quality, service quality, and perceived usefulness were taken from previous studies (Parasuraman et al., 1985, 1988; Madu and Madu, 2002; Devaraj et al., 2002; Wixom and Todd, 2005; Ong et al., 2009). User satisfaction was measured using a single item, 'Overall, I am satisfied with the system'. The items are listed in Appendix A. Although the items in the questionnaire are based on past studies, a number of them were reworded to suit the research context and the respondents, who are students (as explained in the following paragraphs). In Section B, the respondents were required to indicate their demographic profile: gender, status (first year, or second year and above), institution type, programme level, ethnicity, years of computer experience and years of internet experience. These variables were measured using a closed-ended multiple choice format.
The questionnaire was then pre-tested by giving it to 10 staff of Prokhas, the vendor of the PTPTN portal; the same company conducted the User Acceptance Test (UAT) to validate the portal's content, scope, and purpose. Feedback was obtained to gauge the ambiguity and content validity of the questions used in this study, and changes (in the wording of some items) were made based on the feedback. The questionnaire was then pilot tested by distributing it to 30 students who are users of the portal. The pilot test results were analyzed; the loadings for all reflective items exceeded the recommended value of 0.6 and the composite reliability values exceeded the recommended value of 0.7 (Hair et al., 2010). The final version of the questionnaire was then developed and distributed.
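For reference, the composite reliability criterion mentioned above is conventionally computed from the standardized loadings of a construct's k items; the expression below is the standard formula from the measurement literature (it is not reproduced in the paper itself), and a construct passes the check when CR is at least 0.7:

CR = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k}\left(1 - \lambda_i^2\right)}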
The targeted respondents were students at both public and private higher education institutions (HEIs) in the Klang Valley. The Klang Valley was chosen as it has the highest number of HEIs in Malaysia (www.mohe.my). The institutions were approached and, after several deliberations, it was decided that the questionnaire would be distributed to students from one public university and one private university. The universities were chosen based on their willingness to participate. The sampling procedure adopted in this research was convenience sampling, with a pre-planned sample size of 400 respondents, i.e. 200 from each university. The target respondents were students who had used the PTPTN portal.
The questionnaires were distributed by hand to the students. They were handed to the students when they visited the respective university's Student Affairs Office to collect their loan contracts from the PTPTN officers. Some of the students completed the questionnaire on the spot while others returned it to the officers after several days. In total, 258 questionnaires were returned, mostly by students who completed the survey on the spot. Of the responses received, 10 were invalid or incomplete and were subsequently excluded from the analysis, which was carried out using the SPSS software. Generally, the response rate was rather high (64.5%), as most of the respondents returned the questionnaire immediately after filling it in; non-response bias is therefore not a major concern. Moreover, respondents were given tokens of appreciation after completing the questionnaire. The results of the analysis are described in the following section.

4. Research findings

4.1. Demographic profile of respondents

The demographic profile of the respondents is presented in Table 1. The number of male respondents was lower than the number of female respondents, which is reflective of the student population in both universities. A majority of the respondents were Malay, followed by Chinese; this is quite representative of the actual national population composition. Forty-six percent of the respondents were from the private university while the remainder were from the public university, although it was initially targeted to have an equal number of respondents from each type of institution. In terms of programme level, the highest percentage of respondents were studying for their Bachelor degree (53.6%), closely followed by students at the Certificate or Diploma level (43.2%). With regard to student status, more than three quarters of the respondents were in their first year. This is expected as most students apply for the loan in their first year of study. Almost half of the respondents (49.2%) access the PTPTN portal from their own home, followed by students using the college's WiFi or internet facility (26.6%).

Table 1
Demographic profile of respondents.

Variable                       Value description         Frequency    %
Gender                         Male                      79           31.9
                               Female                    169          68.1
Institution type               Public university         114          46.0
                               Private university        134          54.0
Programme level                Certificate/diploma       107          43.2
                               Bachelor degree           133          53.6
                               Postgraduate              8            3.2
Student status                 First year                196          79.0
                               Second year and above     52           21.0
Ethnicity                      Malay                     157          63.3
                               Chinese                   65           26.2
                               Indian                    11           4.4
                               Others                    15           6.1
Computer experience            None                      12           4.8
                               Less than one year        40           16.2
                               One to four years         60           24.2
                               More than four years      136          54.8
Years of internet experience   None                      12           4.9
                               Less than one year        30           12.1
                               One to four years         75           30.2
                               More than four years      131          52.8
Access location                Home                      122          49.2
                               College                   66           26.6
                               Internet café             54           21.8
                               Office                    5            2.0
                               Others                    1            0.4

Slightly more than half of the respondents have more than four years of computing experience, followed by those with one to four years (24.2%). A similar pattern was found for Internet usage experience: nearly fifty-three percent of the students have more than four years of experience in cyberspace. This is expected, as most of the students are from Generation Y (born after 1980), for whom information technology knowledge is common; the so-called tech-savvy generation. These figures illustrate that the respondents are in a position to answer the questionnaire, as they have adequate Internet and computer knowledge to assess the portal's success.

4.2. Constructs influencing level of satisfaction

Factor analysis was conducted to identify the underlying constructs that were deemed important in determining the overall level of user satisfaction among students who have experience of using this portal. Principal component analysis was used as the method of extraction and Varimax was selected as the rotation method. The Kaiser rule was used to decide the number of factors to extract: components with an eigenvalue greater than one were retained. To avoid cross-loadings and to determine whether the factors extracted were similar to those used by Igbaria and Tan (1997), a cut-off of 0.5 or greater on one factor and 0.35 or lower on the other factors was employed. To determine sampling adequacy, the KMO and Bartlett's tests were carried out. The KMO value of 0.956 indicates that the sample is adequate for factor analysis; this is further confirmed by Bartlett's test, with a significance level of p = 0.000. Four components with eigenvalues exceeding 1 were extracted, which together explain 71.70% of the total variance. The components are: Factor 1, service quality; Factor 2, system quality; Factor 3, perceived usefulness; and Factor 4, information quality. The details are presented in Table 2.
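As a minimal sketch of this extraction procedure, the steps above could be reproduced in Python with the factor_analyzer package, assuming the 29 item responses are held in a pandas DataFrame (the file name and column layout below are illustrative, not taken from the paper's data):

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical file of 7-point Likert responses, one column per questionnaire item (SRQ1..PU7)
items = pd.read_csv("ptptn_survey_items.csv")

# Sampling adequacy: Bartlett's test of sphericity and the KMO measure
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Kaiser rule: count eigenvalues of the correlation matrix that exceed 1
fa_unrotated = FactorAnalyzer(rotation=None, method="principal")
fa_unrotated.fit(items)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Principal component extraction with Varimax rotation and the retained number of factors
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(3))

The rotated loadings produced this way correspond to Table 2, with the 0.5/0.35 cut-offs then applied by inspection.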

4.3. Reliability analysis

The results show that the coefficient alpha values for all the measured variables were above 0.9, indicating that all the questionnaire scales have adequate internal consistency reliability. Hence, based on the reliability analysis and the factor analysis, all the items in the questionnaire were retained for further analysis. In addition, a common method bias analysis was conducted, as the study relies on a single key informant (Bagozzi and Yi, 1991). The existence of common method bias was assessed by performing Harman's single factor test. This approach utilizes an exploratory factor analysis based on principal axis factoring with oblique rotation; common method bias exists if the analysis results in a single-factor solution, or if one factor accounts for the majority of the covariance among the measures (Podsakoff et al., 2003). The results produced a four-factor solution with eigenvalues greater than 1; thus common method bias is not an issue for this study.
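A minimal sketch of these two checks, assuming the same hypothetical items DataFrame as in the previous snippet (the construct groupings mirror Appendix A; factor_analyzer's 'minres' extraction is used here as an approximation of principal axis factoring):

import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("ptptn_survey_items.csv")   # hypothetical respondents-by-items file

def cronbach_alpha(scale: pd.DataFrame) -> float:
    # Cronbach's alpha for one construct (respondents x items block)
    k = scale.shape[1]
    item_variance_sum = scale.var(axis=0, ddof=1).sum()
    total_variance = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variance_sum / total_variance)

constructs = {
    "SRQ": [f"SRQ{i}" for i in range(1, 9)],
    "SQ": [f"SQ{i}" for i in range(1, 9)],
    "IQ": [f"IQ{i}" for i in range(1, 7)],
    "PU": [f"PU{i}" for i in range(1, 8)],
}
for name, cols in constructs.items():
    print(name, round(cronbach_alpha(items[cols]), 3))

# Harman's single factor test: force a one-factor solution and check how much
# of the variance that single factor captures (a common rule of thumb is < 50%).
fa = FactorAnalyzer(n_factors=1, rotation=None, method="minres")
fa.fit(items)
_, proportion, _ = fa.get_factor_variance()
print("Variance captured by a single factor:", round(proportion[0], 3))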

Table 2
Rotated component matrix.

Code    SRQ     SQ      PU      IQ
SRQ1    .613
SRQ2    .642
SRQ3    .716
SRQ4    .806
SRQ5    .730
SRQ6    .801
SRQ7    .760
SRQ8    .647
SQ1             .626
SQ2             .707
SQ3             .677
SQ4             .652
SQ5             .617
SQ6             .603
SQ7             .693
SQ8             .660
IQ1                             .611
IQ2                             .629
IQ3                             .803
IQ4                             .716
IQ5                             .718
IQ6                             .698
PU1                     .728
PU2                     .707
PU3                     .732
PU4                     .749
PU5                     .710
PU6                     .584
PU7                     .563
Number of items                 8       8       7       6
Eigenvalues                     16.48   2.06    1.24    1.02
Percent of variance explained   56.83   7.10    4.27    3.50

Table 3
Pearson correlation analysis.

Variables               Service quality   System quality   Information quality   Perceived usefulness   Overall satisfaction
Service quality         1.00
System quality                            1.00
Information quality                                        1.00
Perceived usefulness                                                             1.00
Overall satisfaction    0.663**           0.658**          0.641**               0.809**                1.00

** Correlation is significant at the 0.01 level.


4.4. Factors influencing PTPTN portal performance

The results in Table 3 show the magnitude of the relationships between the constructs and overall satisfaction (portal performance), as well as the relationships among the constructs. In general, the magnitude of these relationships is fairly strong and all are positive. The coefficients are statistically significant at the 0.01 level (2-tailed). Satisfaction is most highly correlated with perceived usefulness (PU) at 0.809, followed by service quality (SRQ) at 0.663, system quality (SQ) at 0.658 and information quality (IQ) at 0.641.
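A minimal sketch of this correlation analysis, assuming construct scores have been computed as the mean of each construct's items and stored together with the overall satisfaction item in a DataFrame (file and column names below are illustrative):

import pandas as pd
from scipy import stats

# Hypothetical file with one row per respondent: SRQ, SQ, IQ, PU construct means and USL (overall satisfaction)
scores = pd.read_csv("ptptn_construct_scores.csv")

for construct in ["SRQ", "SQ", "IQ", "PU"]:
    r, p = stats.pearsonr(scores[construct], scores["USL"])
    print(f"{construct} vs overall satisfaction: r = {r:.3f}, p = {p:.4f}")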

Table 4
Regression analysis.

Predictors (independent variables)    Standardized coefficient (beta)    t    Significance (p)


Service quality 0.068 1.139 0.256
System quality 0.048 0.669 0.504
Information quality 0.025 0.351 0.726
Perceived usefulness 0.703 9.828 0.000

Dependent variable: overall satisfaction, adjusted R2 = 0.66.

Table 5
Level of satisfaction.

Constructs Mean Std. deviation Range Min. Max.


Overall, I am satisfied with the system 4.4177 1.77750 6.00 1.00 7.00
Service quality 4.4299 1.34167 6.00 1.00 7.00
System quality 4.2969 1.34854 6.00 1.00 7.00
Information quality 4.6695 1.44370 6.00 1.00 7.00
Perceived usefulness 4.3605 1.43115 6.00 1.00 7.00

Regression analysis was carried out to assess the predictive power of the predictors (independent variables), i.e. system quality, information quality, service quality and perceived usefulness, in explaining the variance of the dependent variable, i.e. user satisfaction. The regression analysis used the enter method. Table 4 depicts the summarized results of the regression analysis and indicates which constructs are statistically significant. The F-value for the model is 111.866 and the model's R2 is 0.66, which means that the constructs are able to explain 66% of the variation in user satisfaction. This is reasonably sufficient to describe the variance in the model. The coefficients explain the contribution of each construct to the model.
The standardized coefficient (beta) for perceived usefulness (0.703) is the highest among the predictors, which indicates that perceived usefulness is the most important variable in predicting user satisfaction. The results also show that perceived usefulness is significant (p = 0.000) at the 0.05 significance level, indicating that there is a significant relationship between perceived usefulness and user satisfaction. Surprisingly, information quality, system quality and service quality are not statistically significant in explaining the variance in user satisfaction, even though the correlation analysis showed positive relationships between these three variables and user satisfaction.
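A minimal sketch of the regression reported in Table 4, reusing the hypothetical construct-score DataFrame from the previous snippet; z-scoring all variables first makes the OLS coefficients comparable to the standardized betas reported above:

import pandas as pd
import statsmodels.api as sm

scores = pd.read_csv("ptptn_construct_scores.csv")    # hypothetical file
predictors = ["SRQ", "SQ", "IQ", "PU"]

z = (scores - scores.mean()) / scores.std(ddof=1)     # standardize all variables
X = sm.add_constant(z[predictors])                    # 'enter' method: all predictors entered at once
model = sm.OLS(z["USL"], X).fit()

print(model.summary())                                # betas, t-values, p-values, F-statistic
print("Adjusted R2:", round(model.rsquared_adj, 3))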

4.5. Association between the demographic factors and the overall satisfaction (performance)

An analysis was carried out to identify the relationship between satisfaction (dependent variable) and the respondents' demographic profile (independent variables). The value of R = 0.102 indicates that the strength of the relationship between the independent and dependent variables is very weak. The R2 value of 0.010 suggests that only 1.0% of the variance in end-user satisfaction is explained by the demographic variables in this sample, i.e. student status (first year, or second year and above), years of computer experience and years of internet experience. This implies that 99.0% of the variance in end-user satisfaction is explained by factors other than the demographic variables included in this study.

4.6. Level of satisfaction

From Table 5, it can be seen that, in general, the students are satisfied with the portal's performance, as the mean values for all the constructs are above 4, the midpoint of the seven-point scale. The respondents rated the portal's information quality the highest, followed by its service quality. Nevertheless, PTPTN should enhance the portal further, as some respondents perceived the performance of the portal to be very poor (indicated by the minimum value of 1, i.e. strongly disagree).

5. Discussion

The correlation analysis (Table 3) illustrates that all the independent variables (system quality, information quality, perceived usefulness and service quality) are positively related to the dependent variable (end-user satisfaction). This finding suggests that an increase in any of the independent variables will be followed by an increase in user satisfaction. However, when regression analysis was carried out to assess the predictive power of the predictors (independent variables) in explaining the variance of the dependent variable, i.e. user satisfaction, the findings were different.

The findings suggest that system quality, information quality and service quality are weak predictors of end-user satisfaction. As a result, they contradict the findings of previous studies (DeLone and McLean, 2003; Luarn et al., 2005; Kim et al., 2005) which found those three variables to strongly predict end-user satisfaction.
We would like to offer some explanations for this contradiction. First, the students who use this portal may not highly value system quality because it is a very simple system to use. Although the students did have grouses about the system, it has very few problems such as breakdowns. Because the system is easy enough to use and has very few problems, system quality may not strongly affect user satisfaction. Second, the end users may also value information quality less because the portal does not offer a lot of information in the first place. The information available in the portal includes how to apply for the student loan and the results of the application. That small amount of information, however, is adequate for students to prepare their loan application. As a result, information quality has never presented any problem to the students and therefore does not significantly influence their satisfaction with the system. Third, the students who use this portal may also perceive the system's service quality as less important because it does not provide a direct service to the students. The portal is just a means for them to apply for the PTPTN study loan. The students are more interested in the quick processing and delivery of their loans, which depends on non-system factors such as the availability of funds and organizational processes.
On the other hand, the relationship between perceived usefulness and user satisfaction was statistically significant (the significance value of 0.000 is less than p = 0.05) and thus supported. This finding supports Wixom and Todd (2005) and Ong et al. (2009), who included the perceived usefulness construct from TAM in discussing user satisfaction. We would like to offer an explanation for this finding. The ability of students to enrol in an undergraduate programme in Malaysia depends highly on the loan. Many families have neither the means nor the savings to enable their children to continue their education at the university level. Without the loans offered by PTPTN, many of these students would not be able to afford a university education. Because the portal provides the tool that enables them to apply for the loans, students would perceive this as the most useful function of the portal, hence the significance of the finding.
Meanwhile, the correlation analysis conducted to examine the association between the demographic factors (student status, years of computer experience and years of internet experience) and the users' satisfaction level shows a very weak positive relationship. Additionally, the F-value calculated from the ANOVA did not meet the significance level; its p-value of 0.467 indicates that the result is not significant and the model is a poor fit. As such, it can be concluded that there is no significant association between the demographic factors and the user satisfaction level. We suspect that the multiple regression analysis may become unstable when two or more predictor variables are strongly related to one another (Kumar et al., 2009). Furthermore, the multiple regression analysis produced parameter estimates that have ordinal-level properties. As a result, the only inference that can be made concerns their rank order of co-variation with the dependent variable.

6. Conclusion

This article provides minor theoretical contributions but major practical contributions. Theoretically, it provides an adapted model to evaluate the success of a portal. Practically, the study has enabled the organization to gauge the level of end-user satisfaction with its portal. Findings from the study suggest that the users are generally satisfied with the portal. They also suggest that one factor, perceived usefulness, is the most important factor for the students' overall satisfaction. These findings would enable PTPTN's management to identify the most appropriate actions to improve the system and the processes related to the students' loan management. For example, the management has to identify ways in which the organizational processes of loan approval can be expedited.
This study, however, has several weaknesses. First, the use of multiple regression did not produce many significant findings. Second, the sample was limited to only two universities (one public and one private) in the Klang Valley. The generalizability of the findings may be limited, as the sample is rather small compared to the overall population (15 public and almost 50 private universities). Third, the number of factors employed in this study is also limited compared to similar studies by Urbach et al. (2010) and Tojib et al. (2008). For example, Tojib et al. investigated user satisfaction with a portal via five constructs: usefulness, confidentiality, ease of use, convenience of access and portal design.
Based on these limitations, we would like to propose refinements for future research. First, instead of using the multiple regression technique, future research could consider the dominance analysis approach. This approach permits the direct comparison of measures within a model and permits inferences concerning an attribute's direct effect (i.e. when considered by itself), total effect (i.e. when considered with all other attributes), and partial effect (i.e. when considered with various combinations of other predictors). More importantly, it allows the level of influence of one attribute to be compared directly with that of the other attributes. Second, it would also be worthwhile to conduct an importance-performance analysis. This framework was first introduced by Martilla and James (1977) to assist in understanding customer satisfaction as a function of both expectations concerning the significant attributes ("importance") and judgments about their performance ("performance").
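As a rough, hypothetical illustration of how such an importance-performance analysis could be set up on construct scores like those used in this study (a suggestion for future work, not an analysis performed in the paper), importance can be proxied by each construct's correlation with overall satisfaction and performance by its mean score, after which each construct falls into one of the four classic IPA quadrants:

import pandas as pd

scores = pd.read_csv("ptptn_construct_scores.csv")        # hypothetical: SRQ, SQ, IQ, PU, USL columns
constructs = ["SRQ", "SQ", "IQ", "PU"]

importance = scores[constructs].corrwith(scores["USL"])   # proxy for importance
performance = scores[constructs].mean()                   # proxy for performance

imp_cut, perf_cut = importance.mean(), performance.mean() # grand means split the quadrants
for c in constructs:
    if importance[c] >= imp_cut and performance[c] < perf_cut:
        quadrant = "concentrate here"
    elif importance[c] >= imp_cut:
        quadrant = "keep up the good work"
    elif performance[c] < perf_cut:
        quadrant = "low priority"
    else:
        quadrant = "possible overkill"
    print(f"{c}: importance={importance[c]:.2f}, performance={performance[c]:.2f} -> {quadrant}")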

Appendix A

The constructs and items.

Construct                 Item                                                                                          Code
System quality            The registration process is simple                                                            SQ1
                          Instructions on how to use the system are directly available                                  SQ2
                          Information is provided through frequently asked questions and answers                        SQ3
                          Information required is found with a minimum number of clicks                                 SQ4
                          A standard navigation bar, home button and back/forward button are available on every page    SQ5
                          Navigation is consistent and standardized                                                     SQ6
                          Scrolling through pages and text is kept to a minimum                                         SQ7
                          It is easy to recover from errors I make when using the system                                SQ8
Information quality       Information provided in the system is complete                                                IQ1
                          Information provided in the system is easy to understand                                      IQ2
                          Information provided in the system is personalised                                            IQ3
                          Information in the system is comprehensive                                                    IQ4
                          Information in the system is secured                                                          IQ5
                          Terms and conditions of your loan application are accessible                                  IQ6
Service quality           The system provides confirmation of acceptance immediately                                    SRQ1
                          The system provides a preview of information entered before submission                        SRQ2
                          The contact centre provides prompt services to users                                          SRQ3
                          The contact centre staff have the knowledge to do their work efficiently and effectively      SRQ4
                          The system is available 24 hours a day, 7 days a week                                         SRQ5
                          The contact centre phone number and e-mail link are available                                 SRQ6
                          Queries or complaints are resolved within 24 hours                                            SRQ7
                          The system provides data security protection, i.e. a password to access detailed information  SRQ8
Perceived usefulness      I find it easy to access this system                                                          PU1
                          The amount of information displayed on the screen is adequate                                 PU2
                          The sequence of obtaining and making transactions is clear                                    PU3
                          The layout of pages makes tasks easier                                                        PU4
                          The system makes my life easier                                                               PU5
                          The rate at which the information is displayed is fast enough                                 PU6
                          The description/explanation in the website for accomplishing my task is adequate              PU7
User satisfaction level   Overall, I am satisfied with the system                                                       USL

References

Aasheim, C., Gowan, J.A., Reichgelt, H., 2007. Establishing an assessment process for a computing program. Information Systems Education Journal 5 (1)
(accessed from http://isedj.org/5/1/ on 30 April 2011).
Almutairi, H., Subramaniam, G.H., 2005. An empirical application of the Delone and McLean model in the Kuwait private sector. Journal of Computer
Information Systems 45 (3), 113–122.
Azadeh, A., Sangari, M.S., Songhori, M.J., 2009. An empirical study of the end-user satisfaction with information systems using the Doll and Torkzadeh
instrument. International Journal of Business Information Systems 4 (3), 324–339.
Bailey, J.E., Pearson, S.W., 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Science 29 (5), 530–545.
Ballou, D.P., Pazer, H.L., 1985. Modeling data and process quality in multi-input, multi-output information systems. Management Science 31 (2), 150–162.
Bagozzi, R.P., Yi, Y., 1991. Multitrait–multimethod matrices in consumer research. Journal of Consumer Research 17, 426–439.
Barr, N., 2004. Higher education funding. Oxford Review of Economic Policy 20 (2), 264–283.
Chung, H.T., Dauw, S.Z., 2009. A study on the effect of work environment perception on user satisfaction in health information systems: HISs quality as
mediator. International Journal of Information Technology and Management 8 (2), 196–213.
Davis, F.D., 1989. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13 (3), 319–340.
DeLone, W.H., McLean, E.R., 1992. Information systems success: the quest for the dependent variable. Information Systems Research 3 (1), 60–95.
DeLone, W.H., McLean, E.R., 2003. The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems 19 (4), 9–30.
Devaraj, S., Fan, M., Kohli, R., 2002. Antecedents of B2C channel satisfaction and preference: validating e-commerce metrics. Information Systems Research 13 (2), 316–333.
Doll, W.J., Torkzadeh, G., 1988. The measurement of end-user computing satisfaction. MIS Quarterly 12 (2), 259–274.
Fan, J., Fang, K. 2006. ERP implementation of information systems success: a test of Delone and McLean’s Model, PICMET 2006 Proceedings, 9–13 July 2006,
Istanbul Turkey.

Gatchalian, M.M., 1999. Quality assessment through statistically-based sensory evaluation methods. The TQM Magazine 11 (6), 389–396.
Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E., 2010. Multivariate Data Analysis. Prentice-Hall, Upper Saddle River, NJ.
Hu, P.J.-H., 2003. Evaluating telemedicine systems success: a revised model, Proceedings of the 36th Hawaii International Conference on System Sciences
(HICSS 03), January 6–9, Big Island, Hawaii.
Hussein, R., Selamat, H., Karim, A.N.S., 2005. The impact of technological factors on information systems success in the electronic government context. The
Second International Conference on Innovations in Information Technology. September 26–28 2005. Dubai, UAE.
Igbaria, M., Tan, M., 1997. The consequences of information technology acceptance on subsequent individual performance. Information & Management 32
(3), 113–121.
Ives, B., Olson, M., Baroudi, J.J., 1983. The measurement of user information satisfaction. Communications of the ACM 26 (10), 785–793.
Kim, Y.J., Eom, M.I.I., Ahn, J.J., 2005. Measuring IS service quality in the context of the service quality-user satisfaction relationship. Journal of Information
Technology Theory and Application 7 (2), 53–70.
Kumar, M., Kee, F.T., Manshor, A.T., 2009. Determining the relative importance of critical factors in delivering service quality of banks: an application of
dominance analysis in SERVQUAL model. Managing Service Quality 19 (2), 211–228.
Kumar, T.G., Ballou, D.P., 1998. Examining data quality. Communications of the ACM 41 (2), 54–58.
Lee, H.S., Choi, Y.H., Jo, N.O., 2009. Determinants affecting user satisfaction with campus portal services in Korea. Journal of Internet Banking and Commerce 14 (1) (accessed from http://www.arraydev.com/commerce/jibc/ on 30 April 2011).
Longinidis, P., Gotzamani, K., 2009. ERP user satisfaction issues: insights from a Greek industrial giant. Industrial Management & Data Systems 109 (5), 628–
645.
Luarn, P., Lin, T., Lo, P., 2005. Non-enforceable implementation of enterprise mobilization – An exploratory study of the critical success factors. Industrial
Management and Data Systems 106 (6), 786–814.
Madu, C.N., Madu, A.A., 2002. Dimensions of e-quality. International Journal of Quality & Reliability Management 19 (3), 246–258.
Martilla, J.A., James, J.C., 1977. Importance-performance analysis. Journal of Marketing 41 (1), 77–79.
Masrek, M.N., 2007. Measuring campus portal effectiveness and the contributing factors. Campus-Wide Information Systems 24 (5), 342–354.
Mohamed, N., Hussin, H., Hussein, R., 2009. Measuring users’ satisfaction with Malaysia’s electronic government systems. Electronic Journal of e-
Government 7 (3), 283–294 (available online at www.ejeg.com).
Ong, C.S., Day, M.Y., Hsu, W.L., 2009. The measurement of user satisfaction with question answering systems. Information & Management 46, 397–403.
Parasuraman, A., Berry, L.L., Zeithaml, V.A., 1988. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64 (1), 12–40.
Parasuraman, A., Berry, L.L., Zeithaml, V.A., 1985. A conceptual model of service quality and its implications for future research. Journal of Marketing 49 (4),
41–50.
Podsakoff, P., MacKenzie, S., Lee, J., Podsakoff, N., 2003. Common method biases in behavioral research: a critical review of the literature and recommended
remedies. Journal of Applied Psychology 88 (5), 879–903.
Roldán, J.L., Leal, A., 2006. A validation test of an adaptation of the DeLone and McLean’s model in the Spanish EIS Field. In: Cano, J.J. (Ed.), Critical Reflections
on Information Systems: A Systemic Approach. Idea Group Publishing, Hershey, PA.
Rowley, J., 1998. New perspectives on service quality (SERVQUAL and SERVPERF instruments). Library Association Record 98, 416.
Seddon, P.B., Kiew, M.Y., 1996. A partial test and development of the Delone and McLean model of IS success. Australian Journal of Information Systems 4
(1).
Teo, T.S.H., Lim, V.K.G., Lai, R.Y.C., 1999. Intrinsic and extrinsic motivation in internet usage. Omega 27, 25–37.
Tojib, D.R., Sugianto, L., Sendjaya, S., 2008. User satisfaction with business-to-employee portals: Conceptualization and scale development. European Journal
of Information Systems 17, 649–667.
Urbach, N., Smolnik, S., Riempp, G., 2010. An empirical investigation of employee portal success. Journal of Strategic Information Systems 19, 184–206.
Wixom, B.H., Todd, P.A., 2005. A theoretical integration of user satisfaction and technology acceptance. Information Systems Research 16 (1), 85–102.
Zviran, M., Glezer, C., Avni, I., 2005. User satisfaction from commercial web sites: the effect of design and use. Information & Management 43 (1), 157–178.
