Evaluating Overall Online Service Quality and Customer Satisfaction of EDUGATE Portal at King Saud University
Abstract: Organisations are interested in deploying quality function deployment (QFD) as an essential activity in evaluating their web sites. Because QFD, as a design technique, primarily attempts to carry the voice of the customer through every planning and design activity, the voice of the customer is given central importance in this research. Although many online service quality evaluation models are available in the research literature, the majority have been reported to lack coherence between the hypothesis-testing model and the underlying conceptual model. To provide such referential adequacy, especially at the level of mature research (as opposed to academic research), a portal-based online service quality instrument called WePoServQual is proposed. The WePoServQual instrument is developed from the service quality model of Parasuraman et al. (1988) and tested for its consistency, as well as its measurement and evaluation capacity, using EDUGATE, an educational portal of King Saud University, Riyadh, Saudi Arabia. For this purpose, an online survey was posted to collect customer service quality evaluation data. Responses from 48 subjects were collected within the King Saud University community, covering only the Deanships of the College of Administrative Sciences and Computer Information Sciences, e-Transactions and Communications (IT), and Student Registration. The results indicate that WePoServQual is an effective instrument for evaluating overall service quality. However, the relationship between service quality and customer satisfaction is found to be questionable.

Keywords: E-Service Quality, Edugate, Evaluating Educational Portals, Evaluating Portal Service Quality, King Saud University, Quality Function Deployment.

1. Introduction
As Sahut and Kucerova (2003) elucidated, quality function deployment (QFD) is a distinguished service design technique that primarily attempts to carry the voice of the customer through every planning and design activity. Taking the current trend of customer-centric approaches into consideration, our QFD evaluation approach results in a set of service quality dimensions and items formulated on the basis of service quality gap theory (Parasuraman et al., 1985; Parasuraman et al., 1991). It also helps provide decision support to portal development project managers for further improving portal service quality towards better customer satisfaction. Yang and Fang (2004) mentioned that


listening to customer voices is the initial step in planning service quality improvement endeavours. In turn, identifying customer perceptions of service quality and their satisfaction or dissatisfaction can provide a frame of reference for online service providers to self-evaluate their service performance. Analyzing the voice of the customer and building a framework for transforming it into practice are the leading steps of the methodology, and the most important steps in regulating the quality level of portal services (Sahut and Kucerova, 2003).

Zeithaml is considered one of the pioneers who introduced the concept of electronic service quality (e-SQ) and examined the service quality of web sites, as well as their role in delivering service quality to customers. She defined e-SQ and web site service quality as the extent to which a web site facilitates efficient and effective service delivery (Zeithaml, 2002; Al-Mushasha and Hassan, 2011).

Web portals are increasingly being used for various tasks in different domains. Since web portals can be used to provide information, services and applications to the customer, studying their service quality is an important prerequisite for their successful implementation and performance. ISO distinguishes internal and external quality perspectives (Herrera et al., 2010), and these perspectives can also be applied to study the service quality of an educational web portal.

Online service quality is a much studied concept. There is considerable evidence that user expectations and perceptions of self-service and online service quality differ. However, improvement in online service quality can be achieved by making a web site more self-service oriented. Universities are at the forefront of this view, implementing web portals for a wide range of online transactions used by a wide range of stakeholders. Stakeholders in a university include students, faculty, staff, administrators, the government and the wider society in which the university operates (Sahney et al., 2004). A research study on the service quality of web portals is proposed to be conducted in the King Saud University environment. Our proposed study can offer service providers and organisations a number of insights, such as how to improve the relationship between faculty, staff and students, and how customers incline towards the adoption of new technology when their expectations are satisfied. A considerable negative effect can be predicted if organisations are slow in adopting a customer-centric viewpoint and persist in providing interfaces that are inconsistent with customer perceptions (Tate, Evermann, Hope and Barnes, 2009).


1.1 King Saud University
King Saud University is the premier institution of higher education in the city of Riyadh in the Kingdom of Saudi Arabia. It was established in the year 1957 to enhance knowledge and learning capabilities towards the nation's growth and well-being. It plays a contributory role as a source of skilled professionals to meet the needs of the nation in various fields of academics such as medicine, engineering, science, agriculture, humanities, languages and Islamic culture. In addition to its role in teaching and research, it practices vital functions towards health care and private sector development (King Saud University web site).

1.2 EDUGATE: Online Academic Portal
EDUGATE is an online academic portal of King Saud University (Cap 252, 2010). It is a comprehensive, user-friendly system that enables students to use many services relevant to their course registrations. Its services are categorized mainly into five modules: Academic Calendar, Courses Schedule, Major Plans, Graduation Document and Student Reports Validation. Through EDUGATE, students can monitor their academic progress, modify, confirm and print their schedules, view transcripts/grades, edit their profile and more. They can also provide feedback and evaluation about their instructors (Student_Guide V5, 2010). Instructors can monitor the academic progress of their students, mark absences and award marks for the assignments performed by the students. Although the course registration system is fully automated, overriding certain regulations and exceptions requires the intervention and assistance of departmental officials (Student_Guide V5, 2010). The username and password are required to be obtained initially from the e-Transactions and Communications Deanship once a person is associated with King Saud University as either a student or a staff member; hence the user types are categorized as student and staff.

Some of the hosting-server-side statistics of the EDUGATE portal of King Saud University are as follows. EDUGATE is ranked #8,367 on the World Wide Web; usually, the lower the rank, the more popular the web site. Its PageRank is given as 5/10, with 104,750 unique visitors per day and 398,051 pageviews. The siteglimpse.com web site reports that the daily ad revenue and estimated revenue are not applicable to the web site, though the site is estimated to earn at least $476 USD per day from advertising revenues, to an overall value of $347,951 USD (Statmyweb, 2012). The average page load time is reported as 0 seconds.

The web portal address is www.edugate.ksu.edu.sa. The server is located at Dallas, Texas, United States, with an SEO score of 61.9% (Statmyweb, 2012).

2. Aim and Objectives of the Research
The aim of the research is to evaluate King Saud University customer satisfaction with the provided EDUGATE portal web site using service quality as an instrument. Service quality is the difference between the customer's expectations and perceptions (Parasuraman et al., 1985). Customer satisfaction is defined as "the levels of service quality performance that meet users' expectations" (Wang and Shieh, 2006, p. 197). Satisfaction is an important measure of service quality (Filiz, 2007; Somaratna, Peiris and Jayasundara, 2010), and service quality has also been defined as a component of user satisfaction (Somaratna, Peiris and Jayasundara, 2010). Service quality and customer satisfaction are considered two big research paradigms and are introduced as synonyms within the service business, adopted by many researchers especially while evaluating overall service quality and overall customer satisfaction (Somaratna, Peiris and Jayasundara, 2010). Somaratna, Peiris and Jayasundara (2010) presented an argument on the relationship between satisfaction and performance: since user satisfaction is defined as the emotional reaction to a specific transaction or service encounter, user satisfaction may or may not be directly related to the performance of the web portal services on a specific occasion. A customer could give a positive answer to a query about feeling satisfied as a result of pleasant encounters, or, conversely, give a negative answer as a result of dissatisfaction caused by an upsetting or angry encounter with an otherwise positively influencing service quality feature (Somaratna, Peiris and Jayasundara, 2010). So the relationship between service quality and customer satisfaction is a complex one. Though the relationship and nature of these customer evaluations of service quality and satisfaction remain unclear, expectations and perceptions are considered key instruments in empirical studies of service quality and customer satisfaction.

Five perspectives of quality evaluation are mentioned by Lovelock and Wirtz (2007, p. 418): (i) transactional/encounter based, (ii) product based, (iii) user based, (iv) manufacturing based and (v) value based (Kabir and Carlsson, 2010). Since it is posited (Ziethaml et al., 1990; Altman and Hernon, 1998) that the customer or user is the best judge of service quality, a user-based evaluation approach has been adopted in this research for examining the level of service quality through evaluating the EDUGATE web portal. The Service Gap Analysis model (of Parasuraman et al., 1985) presents five gaps, of which the first four are identified as functions of the way in which the service is delivered from the service provider to the

customer, while the fifth gap is connected to the customer.

3. Purpose of the Study
The purpose of this study is to evaluate the electronic service quality of the EDUGATE portal towards determining customer satisfaction, and to suggest to the service providers how to improve electronic service quality based on the proposed service quality criteria. The service quality criterion is the proposition of a portal-based e-service quality instrument having a set of proposed service quality dimensions. In this research, the aim is to evaluate overall customer satisfaction, and the objectives are set as the evaluation of service quality and the evaluation of the relationship existing between overall service quality and customer satisfaction, based on a user-based service quality measurement approach.

We classified the total users of the EDUGATE portal into two categories: (i) Service Providing Users and (ii) Service Requesting Users. Service Providing Users are otherwise called internal customers, and Service Requesting Users the external customers (AlSudairi and Vasista, 2013) of the organisation. It is important for a service organisation to define the level of quality at which the web portal can be operated (Kabir and Carlsson, 2010). We assumed the perceptions of the service providers of the EDUGATE portal (e.g. staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies) to be the expectations of the (internal) customers, or providers, who define the level of quality of the portal initially, while the perceptions of the service requesters, i.e. the rest of the faculty, staff and students of King Saud University, are taken as the actual perceptions of the (external) customers. In short, our service quality is defined as the difference between the perceptions of the internal customers and the perceptions of the external customers.

Research Questions
1) To what extent is the EDUGATE portal satisfying the needs of faculty, staff and students?
2) What do users of the EDUGATE portal perceive to be the key quality dimensional attributes of web-based portal services?
3) Which attributes of web service quality do users perceive to be relatively important?
4) Is there a statistically significant predictive relationship between portal service quality and user satisfaction?

It is now required to confirm these proposed e-service quality dimensions based on the

empirical or quantitative test results.

4. Research Methodology for Evaluating the EDUGATE Portal
The research methodology involves five sequential stages to achieve the purpose and research aim. The five stages undertaken are: conceptual model development; dimension and item generation; content validation with extensive literature review; exploratory study; and confirmatory study (Tojib, Sugianto and Sendjaya, 2008). The research is to statistically test the developed portal service quality instrument (WePoServQual) to determine overall service quality and overall satisfaction, in terms of identifying whether an online service provider can influence its customers' use of a portal-based web site through a set of web site design features that match the service quality dimensions derived from the service quality gap model. A theoretical framework is proposed based on the strategy developed by AlSudairi (2012). Though z-scores or t-scores and p-values can basically help in finding the effectiveness of the use of the proposed WePoServQual instrument, confirmatory factor analysis is required to confirm the proposed dimensions, viz. (i) Accessibility, (ii) Responsiveness, (iii) Functional Usefulness, (iv) Usability, (v) Safety, (vi) Convenience and (vii) Realization, with their corresponding (30) items.

5. Antecedents of Online Service Quality
Online service providers exercise considerable latitude in designing their online offerings and web site interactivity to enable or subvert various features that match the basic service quality gap factors (Parasuraman et al., 1985). WePoEServQual is an instrument for measuring web portal e-service quality. It examines seven (7) dimensions, viz. Accessibility, Responsiveness, Functional Usefulness, Usability, Safety, Convenience and Realization. The seven dimensions are fundamentally derived from the Service Quality Gap Model proposed by Parasuraman et al. (1985), which identified four service-provider-side gaps, viz. (i) Service Information Gap, (ii) Service Communications Gap, (iii) Service Standards Gap and (iv) Service Performance Gap, which together equal the customer-side service quality gap. A web portal service quality instrument is proposed that refers to the different service quality models reviewed (Seth, Deshmuh and Vrat, 2005), such as Gronroos (1982), Parasuraman et al. (1985), Spreng and Mackoy (1996), Teas (1993), Berkley and Gupta (1994)'s IT alignment model and Davis et al. (1989)'s Technology Acceptance Model. Further, the derived dimensions and items are proposed based on correlations elicited from the extensive research literature. A tabular form containing the dimensions and the corresponding references to the authors and their research literature is given in Table 1.

Table 1. Proposed Portal Service Quality Evaluating Dimensional Features and Supporting Authors (e-service quality features supported by authors). The seven dimensions tabulated are: 1. Accessibility; 2. Responsiveness; 3. Functional Usefulness; 4. Usability; 5. Safety; 6. Convenience; 7. Realization. Supporting authors cited against the dimensions include Parasuraman et al. (1985), Zeithaml et al. (2002), Loiacono et al. (2002), Johnston (1995), Davis (1989), Doll et al. (1994), Pitt and Watson (1996), Griffith and Krampf (1998), Wilcox (1999), Yoo and Donthu (2001), Seigel and Madnick (2001), Yang and Jun (2002), Wolfinbarger and Gilly (2002), Gant and Gant (2002), Brian Caulfield (2002), Papazouglou (2003), van Enginlen (2004), Kuo et al. (2005), Kulakarni (2006), Abdullah (2006), Quesenbery (2007), Tojib, Sugianto and Sendjaya (2008), Tate et al. (2009), Erradi, Anand and Kulkarni, Wooldridge, Jennings and Kinny (2000), Rouvoy, Eliassen, Hallsteinsen and Lorenzo (2009), Hoffman and Novak (2009), and Zhu, Zhonghua and Erfeng (2010).

6. Research Method and Tools
A mixed method of conducting research is adopted because it employs a combination of qualitative and quantitative methods. According to Tashakkori and Creswell (2007, p. 4), mixed-method research is research in which the investigator collects and analyzes data, integrates the findings and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry. Among the four types of mixed-method research mentioned by Creswell (2008, p. 557), it is the

fourth method, i.e. the Exploratory Mixed Method design, that has been adopted. An exploratory design collects data sequentially in two phases: it first explores the phenomenon, followed by quantitative data to explain the relationships found in the qualitative data. However, some studies (Lociacono et al., 2000) have not clearly mentioned that they adopted mixed methods of research and truly explored and derived their service quality dimensions from underlying conceptual theories and models. Only a limited number of instruments in the research literature (Tate and Evermann, 2009; Hu, Zhao and Guo, 2009) can claim a theoretical basis for their suggested service quality dimensions. For example, Tan, Benbasat and Cenfetelli (2013) have partly adopted such a methodology in the context of e-government service quality. A number of researchers have evaluated the quality of site-based educational services using variations of the ServQual instrument (Holdford and Reinders, 2001; Sherry, Bhat, Beaver and Linng, 2004); unfortunately, the ServQual, e-SQ and web quality instruments are not directly applicable to the online educational environment (Shaik, Lowe and Pinegar, 2006). Hence it was felt necessary to design and propose a general-purpose new instrument, called the WePoEServQual instrument, based on the service quality gap model, to measure electronic service quality especially in the case of portals, and to verify its applicability in the context of educational web portals within a university environment.

A survey on the EDUGATE portal is designed for the purpose of gathering opinions about the portal's service quality. A self-administered questionnaire has been designed adopting a summated scale to capture responses using a 5-point Likert scale against thirty (30) items. Bertram (2006) defines a Likert scale as "a psychometric response scale primarily used in questionnaires to obtain participants' preferences or degree of agreement with a statement or set of statements. Respondents are asked to indicate their level of agreement with a given statement by way of an ordinal scale." Likert-type scales are used to gather responses in the social sciences, marketing, business and other disciplines related to attitudes, emotions, opinions, personalities and descriptions of people's environments (Gliem and Gliem, 2003). Though there are many variations of the scale, with ranges such as 4-point, 5-point, 6-point, 7-point and 9-point scales, the 5-point scale is seen as the most commonly used. A summated scale, as the name implies, contains multiple items. Each item in a scale is represented in the form of either a statement or a question, and respondents are asked to rate this statement or question on a five-option Likert-type scale (Gliem and Gliem, 2003). The proposed thirty (30) multiple items will be combined, or summed, to produce scores for each of the seven (7) corresponding service quality dimensions under which these items are grouped. Typically, respondents are instructed to select one of the five responses: 'strongly agree', 'agree',

'undecided or neutral', 'disagree' and 'strongly disagree'. The specific responses to the items are combined so that individuals with the most favourable opinions will have the highest scores while individuals with the least favourable opinions will have the lowest scores (Gliem and Gliem, 2003). The dimensions and items considered for building the questionnaire are adopted from the WePoEServQual instrument, which consists of seven electronic service dimensions and thirty items selected from the literature review given in Table 1. These dimensions and items are also derived from, and mapped to, the four service quality gaps proposed by Parasuraman et al. (1985). The thirty items of the seven dimensions are transcribed and transformed into corresponding questions, customized to suit both service providers and service requesters.

The questionnaire is posted online at questionpro.com. QuestionPro is web-based software to create and distribute surveys online; it takes care of collecting and recording the responses, and has built-in tools for statistical data analysis and viewing of the results obtained from the survey data. Results can be monitored in real time, and reports can be exported to Word, Excel or SPSS-acceptable formats with graphical displays of bar and pie charts (QuestionPro web site, 2013).

Developing measurement instruments entails the principle of deduction, an approach that allows the research to be conducted with reference to hypotheses and ideas inferred from theory. It involves devising measures for the abstract concepts, i.e. operationalization of the concepts, with the conviction that a testable hypothesis will thereby allow explanations of laws or principles to be assessed (Bryman, 2008, p. 13). Considering formal conceptual definitions and properties derived from the abstract concepts is a necessary condition for validating the constructs of the hypothesis (Wacker, 2004).

7. Research Variables
This section explains the various types of research variables used for hypothesis testing, with their corresponding codification. The 30 items, the 7 dimensions, overall service quality and overall customer satisfaction are considered the electronic service quality research variables, with codes assigned for statistical computation purposes.
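The summated-scale scoring just described can be sketched in a few lines of Python. The dimension names follow the instrument's seven dimensions and the item codes match those listed in the next section, but the groupings and responses below are hypothetical, chosen only to illustrate the mechanics:

```python
# Illustrative sketch of summated 5-point Likert scoring (not the paper's code).
# Item codes and responses are invented for the example.

LIKERT = {"strongly disagree": 1, "disagree": 2, "undecided": 3,
          "agree": 4, "strongly agree": 5}

def dimension_scores(responses, dimensions):
    """Sum the item ratings within each dimension to produce summated scores."""
    return {dim: sum(responses[item] for item in items)
            for dim, items in dimensions.items()}

# Two illustrative items per dimension (the real instrument has 30 items).
dimensions = {"Accessibility": ["ABDI", "ADCI"],
              "Responsiveness": ["RHSPL", "RFUS"]}
responses = {"ABDI": LIKERT["agree"], "ADCI": LIKERT["strongly agree"],
             "RHSPL": LIKERT["undecided"], "RFUS": LIKERT["disagree"]}

scores = dimension_scores(responses, dimensions)
# Accessibility: 4 + 5 = 9; Responsiveness: 3 + 2 = 5
```

Respondents with the most favourable opinions accumulate the highest dimension totals, exactly as the summated-rating logic above requires.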

Independent Variables
Service quality is considered the independent variable of measure. The thirty independent research variables are: (i) ABDI, (ii) ADCI, (iii) ABMW, (iv) AFUM, (v) AOCC, (vi) RHSPL, (vii) RFUS, (viii) RPRG, (ix) RMSG, (x) FRI, (xi) FUDI, (xii) FSCI, (xiii) FVTS, (xiv) USC, (xv) UCSC, (xvi) UOHL, (xvii) UMMI, (xviii) SCA, (xix) SAA, (xx) SPA, (xxi) ST, (xxii) CMIC, (xxiii) CAS, (xxiv) CCC, (xxv) CPCS, (xxvi) CAWS, (xxvii) CTFS, (xxviii) CPAR, (xxix) RSDSA and (xxx) RSAPL.

Dependent Variables
The seven service quality dimensions are coded as: (i) ACC, (ii) RESP, (iii) USB, (iv) FU, (v) SFT, (vi) CONV and (vii) RLZ. Overall Portal Service Quality is coded as OPSQ, and Overall Customer Satisfaction is coded as OCS.

Hypotheses Declaration
Customer satisfaction is considered the dependent variable on Overall Portal Service Quality. Irrespective of whether the relationship existing is linear or non-linear, it is hypothetically proposed that the increase in customer satisfaction is proportional to the increase in service quality: Overall Portal Service Quality has a positive influence on Overall Customer Satisfaction.

Fig. 1. A Theoretical Framework for (Self-Service Oriented) Web Portal Service Influencing Online Service Quality and Customer Satisfaction.
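The proposed positive influence of OPSQ on OCS can be probed with an ordinary least-squares fit; a positive slope is consistent with the hypothesis. This is a minimal sketch with invented scores, not data from the study:

```python
# Sketch: least-squares fit of satisfaction (OCS) on service quality (OPSQ).
# The score pairs below are hypothetical illustration data only.

def ols_fit(x, y):
    """Return (intercept a, slope b) for the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

opsq = [2.0, 3.0, 3.5, 4.0, 4.5]   # hypothetical overall-quality scores
ocs  = [2.2, 2.9, 3.6, 4.1, 4.4]   # hypothetical satisfaction ratings

a, b = ols_fit(opsq, ocs)
# b > 0 would be consistent with the hypothesis that satisfaction
# rises with service quality.
```

In the full study this fit would be accompanied by a significance test on the slope, as discussed in the regression-analysis remarks later in the paper.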

Hypothesis 1. H01: There is no difference between (internal) customer expectations and (external) customer perceptions. H1: External customers have higher expectations than internal customers. Proposed statistical methods: mean values, z-value (and/or t-value) and p-value interpretation.

Hypothesis 2. H02: The seven dimensions of the portal-based web portal cannot influence overall service quality, and hence it cannot be claimed that they constitute the instrument which measures the Overall Portal Service Quality (self-service oriented). H2: Each of the seven dimensions positively influences overall service quality, and hence they are said to constitute the instrument which measures the Overall Portal Service Quality (self-service oriented). Proposed statistical methods: internal consistency and validation; a reliability test using the Cronbach's Alpha value for the overall portal quality and service, and the Spearman-Brown Prophecy for testing consistency within each service quality gap of the instrument.

Hypothesis 3. H03: Overall Customer Satisfaction is not influenced by Overall Portal Service Quality. H3: Overall Customer Satisfaction is influenced by Overall Portal Service Quality. Proposed statistical method: Confirmatory Factor Analysis, confirming that each factor of the instrument has a positive influence on overall service quality and overall customer satisfaction.

8. Population
There are two kinds of population: (i) Service Providers and (ii) Service Requesters. The Service Providers of the EDUGATE portal include staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies. We targeted an approximate number of staff from all these Deanships, estimated as nnnnnnn.
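The Spearman-Brown Prophecy named among the reliability methods above is a one-line formula; the sketch below shows its classic split-half use. The correlation value is invented for illustration:

```python
# Sketch of the Spearman-Brown Prophecy formula (standard psychometrics,
# not code from the paper). With factor = 2 it gives the classic
# split-half reliability correction.

def spearman_brown(r: float, factor: float = 2.0) -> float:
    """Predicted reliability when a test is lengthened by `factor`."""
    return (factor * r) / (1 + (factor - 1) * r)

# Hypothetical split-half correlation of 0.6 between the two halves
# of a service-quality scale:
predicted = spearman_brown(0.6)   # (2 * 0.6) / (1 + 0.6) = 0.75
```

The corrected value estimates the reliability of the full-length scale from the correlation between its two halves.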

The total Service Requesters of the EDUGATE portal include all faculty, staff and students of King Saud University. The Ministry of Higher Education web site reports the total enrollment as 66,174 and the total staff as 5,994 during the year 2010 (MoHE web site, August 2010). It is expected that the size of the population of service providers is significantly less than the size of the service requester population. The targeted distribution population is expected from the College of Business Administration and the College of Computers and Information Systems.

8.1 Testing for the Mean of a Population
Two methods can be described corresponding to the size of the population that is intended to be targeted, for the two kinds of population, viz. (i) service providers and (ii) service requesters. The population mean is usually denoted by the symbol μ. If the population mean values of the providers and of the requesters are denoted PMP and PMR respectively, then the Population Mean Difference (PMD) = PMP − PMR.

Corresponding to the size of the sample, the following criteria have been set, and a summary of the choice of statistical table is given (Anonymous, undated):
- Population standard deviation known, large or small sample: Standard Error = σ/√n (for the population); normal tables (z-value and p-value).
- Population standard deviation unknown: Standard Error = s/√n (for the sample); normal tables (z-test) for large samples, t-tables (t-test) for small samples.
- Population variances and distributions not known (assumption: the distributions of the two groups are the same under the null hypothesis): the non-parametric Mann-Whitney U test (rather than the unpaired t-test), or the Chi-Square or G-test (n > 5); for very small samples (n ≤ 5), Fisher's Exact Test. To conduct the Fisher test, a demarcation-level benchmark is required to be set to categorize responses into guessed correctly and guessed incorrectly (for the calculation of the p-value).

Test statistics are compared with the critical values of the normal distribution at a specific significance level to decide whether or not to reject the null hypothesis (H0). The following are some of the useful and relevant equations for calculating the test statistics.
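The test-selection criteria summarized above can be encoded as a simple decision function. This is a sketch of the table's logic only; in particular, the cutoff of 30 for "large" samples is a common convention assumed here, not a value given in the text:

```python
# Sketch of the statistical-table selection criteria described above.
# The n >= 30 "large sample" threshold is an assumed convention.

def choose_test(sigma_known: bool, n: int,
                distributions_known: bool = True) -> str:
    """Return the suggested test for comparing two groups of size n."""
    if not distributions_known:
        # Variances/distributions unknown: fall back to non-parametric tests.
        if n <= 5:
            return "Fisher's exact test"
        return "Mann-Whitney U (or chi-square/G-test)"
    if sigma_known:
        return "z-test (normal tables)"      # SE = sigma / sqrt(n)
    # Sigma estimated from the sample: z for large n, t otherwise.
    return "z-test (normal tables)" if n >= 30 else "t-test (t-tables, n-1 df)"
```

For example, an unknown population standard deviation with a dozen respondents per group points to the t-test with n − 1 degrees of freedom.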

1. As it is not possible to gather the complete population data of King Saud University, a sample will be collected using a questionnaire-based survey posted online using one of the free online survey tools such as questionpro.com or surveymonkey.com.

2. The population mean μ is estimated using the sample mean, usually denoted by the symbol x̄. This estimate tends to miss by an amount called the standard error (SE) of the mean, which is calculated as: Standard Error = σ/√n. When we go for sample collections, the standard deviation of the population is usually not known; in such cases it must be estimated using the sample standard deviation.

3. Calculate the SEP and SER values (the standard error for the service provider sample and for the service requester sample, respectively) based on the above equation for the standard error.

4. Compute the measure of variability, or the standard error of the difference between the service provider and service requester means: Measure of Variability (MoV) or Standard Error of the Difference (SED) = √(SEP² + SER²).

5. Now the test statistic: Z = [Samples Mean Difference (SMD) − 0]/SED, since under the null hypothesis we expect there should not be any difference. Calculation of the test statistic (z-value) is valid only for large sample sizes.

6. Calculate the p-value: a p-value is the probability of obtaining a sample outcome, given that the value stated in the null hypothesis is true. The p-value of the sample outcome is compared to the level of statistical significance; when a null hypothesis is rejected, a result is significant (Privitera, 2012).

7. Calculating the t-value: in many real-world cases of hypothesis testing, the sample size obtained is too small, and in such cases t-test values are calculated instead of z-test values, using the t-distribution with n − 1 degrees of freedom. To calculate t, first the sample standard deviation is required: s = √[Σ(x − x̄)²/(n − 1)], where n − 1 is the degrees of freedom. The standard error of the mean is then SE = s/√n, and t = (x̄ − μ0)/(s/√n), where μ0 is the null hypothesis mean value. While the standard normal distribution is used for calculating the p-value in a single-sample z-test, the t-distribution must be used for a single-sample t-test; in fact, the t-distribution with df = ∞ is identical to the standard normal distribution (Weaver, 2011). Usually, standard values are available in tabular form for both the normal distribution and the t-distribution.
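Steps 3 to 7 above translate directly into code. The sketch below implements the same formulas with the standard library; the means, standard errors and ratings used in the example are hypothetical:

```python
# Sketch of the test statistics from steps 3-7 (standard formulas,
# illustrative inputs only).
import math

def standard_error(s: float, n: int) -> float:
    """SE = s / sqrt(n), with s estimating sigma when it is unknown."""
    return s / math.sqrt(n)

def z_two_sample(mean_p, se_p, mean_r, se_r):
    """Steps 4-5: SED = sqrt(SEP^2 + SER^2); Z = (SMD - 0) / SED."""
    sed = math.sqrt(se_p ** 2 + se_r ** 2)
    return (mean_p - mean_r) / sed

def t_one_sample(sample, mu0):
    """Step 7: t = (xbar - mu0) / (s / sqrt(n)), with n - 1 df."""
    n = len(sample)
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    return (xbar - mu0) / (s / math.sqrt(n)), n - 1

# Hypothetical provider/requester means and standard errors:
z = z_two_sample(mean_p=4.2, se_p=0.3, mean_r=3.7, se_r=0.4)

# Hypothetical small sample of ratings against a null mean of 3:
t, df = t_one_sample([4, 5, 3, 4], mu0=3)
```

The resulting z or t value is then compared against the critical value at the chosen significance level, as described in step 6.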

8.2 Significance of Cronbach's Alpha
Cronbach's Alpha is a tool for assessing the reliability of measurement scales, for example of service quality. It determines the internal consistency, or average correlation, of the items in a survey instrument in order to gauge its reliability (Santos, 1999). As individuals attempt to quantify constructs which are not directly measurable, they often use multiple-item scales and summated ratings to quantify those constructs (Gliem and Gliem, 2003). Summated scales are often used in survey instruments to probe the underlying constructs that the researcher wants to measure. The constructs, usually represented by some variable name, consist of indexed responses to dichotomous or multi-point questionnaires, which are later summed to arrive at a resultant score associated with a particular respondent. These variables derived from test instruments are declared reliable only when they provide stable and consistent responses over repeated administrations of the test. Usually a reliability coefficient above 0.7 is considered sufficient for exploratory studies (Nunnally, 1967). Various types of measurement scales, such as the nominal scale, ordinal scale, interval scale and ratio scale, are described in Sekaran and Bougie (2009).
Data on the various multi-item constructs representing the different components of service quality and customer satisfaction are required to be tested for reliability and validity by computing Cronbach's Alpha values (Taiwo, Salim and Downe, 2011). Santos (1999) explained the use of the ALPHA option of the PROC CORR procedure from the SAS statistical tool to assess and improve upon the reliability of variables derived from summated scales; Cronbach's Alpha can also be calculated using other statistical tools such as SPSS and Microsoft Excel. There are some web sites where the value of Cronbach's Alpha can be computed online, and ready-made Microsoft Excel documents are also available. A Factor Analysis may be required to be performed to assess convergent validity; all individual loadings are recommended to be greater than or equal to 0.5 (Hair et al., 1998). The significance of the gap between perceived satisfaction and the importance of all the service quality dimensions can be obtained from t-test results (Mohammed and Alhamadani, 2011). The relationship between service quality and levels of customer satisfaction can be tested by conducting a regression analysis (Loke, 2011). EDUGATE Portal Evaluation has involved gathering data about 30 items, represented by 30 research variables grouped under 7 e-service quality dimensions.
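Concretely, Cronbach's Alpha can be computed directly from a raw response matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below uses a tiny made-up Likert data set for illustration; a real analysis would feed in the full 30-item survey responses.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    # responses: one row per respondent, one column per survey item
    # (e.g. 1-5 Likert scores). Formula:
    #   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    k = len(responses[0])
    item_columns = list(zip(*responses))            # transpose to items
    item_var_sum = sum(pvariance(col) for col in item_columns)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Made-up 4-respondent, 3-item data (illustration only, not survey data):
data = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(data), 3))  # 0.975 -> above the 0.7 threshold
```

This is the same quantity the SPSS, SAS PROC CORR ALPHA, and Excel-based calculators mentioned above report.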

9. Research Sample
This study is conducted based on a pilot survey sample. The statistical society of the present research (also called the targeted population) is considered as: staff from the Deanship of e-Transactions and Communication, the Deanship of Admissions and Registration, the Deanship of Admission and Registration and Student Affairs in the province south of Riyadh, and the Deanship of Graduate Studies, for determining the Service Providers sample size; and faculty, staff and students of the College of Administrative Sciences and the College of Computer Science and Information Systems, for determining the Service Requesters sample size. It is proposed either (a) to use the random sampling technique, assuming the data as continuous data, or (b) to use the stratified sampling technique, assuming categorized samples, following the statistical guidelines mentioned or discussed in this paper. The selection of which assumption to confirm will depend on the facts of the pilot survey. So, in order to estimate the size of the statistical sample, it is required to conduct a pilot survey based study; the size of the statistical sample can then be estimated based on the calculations performed on the data collected from the pilot survey based study, using Cochran's formula (Barlett, Kotrlik and Higgins, 2001).
Cochran's sample size formula for continuous data is given as:
n0 = (t)^2 * (s)^2 / (d)^2
where
n0 = the required sample size;
t = value for the selected alpha level of .025 in each tail = 1.96 (the alpha level of 0.05 indicates the level of risk the researcher is willing to take that the true margin of error may exceed the acceptable margin of error);
s = estimate of the standard deviation in the population = nnn (estimate of the standard deviation for a 5-point scale, calculated by using 5 [the inclusive range of the scale] divided by 4 [the number of standard deviations that include almost all (approximately 98%) of the possible values in the range]);
d = acceptable margin of error for the mean being estimated = nnn (number of points on the primary scale multiplied by the acceptable margin of error; points on the primary scale = 5; acceptable margin of error = nnn [the error the researcher is willing to accept]).
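Using the worked definitions above for a 5-point scale, the formula can be evaluated as in the following sketch. The defaults t = 1.96, s = 5/4 and d = 5 * 0.05 fill the placeholder values as illustrative assumptions, not the study's actual choices.

```python
def cochran_n0(t=1.96, s=5 / 4, d=5 * 0.05):
    # n0 = t^2 * s^2 / d^2, Cochran's formula for continuous data.
    # Defaults follow the worked definitions in the text for a 5-point
    # scale: s = range/4 = 1.25 and d = 5 * 0.05 = 0.25. These fill the
    # "nnn" placeholders and are assumptions, not the study's values.
    return (t ** 2) * (s ** 2) / (d ** 2)

print(round(cochran_n0(), 2))  # 96.04 returned responses required
```
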

If the computed sample size exceeds 5% of the targeted population (targeted population * 0.05 = nn), Cochran's correction formula should be used to calculate the estimated minimum returned sample size:
n = n0 / (1 + n0/targeted population)
When the oversampling scenario has to be considered (keeping in mind the uncertainties of losses of questionnaires, ignored distributed questionnaires, forgetting to answer the questionnaires, etc.), the adjusted sample size can be estimated as:
nadj = n / 0.65
where the anticipated return rate is assumed as 65%, considered based on prior research experience (Cochran, 1977).
10. Plan for Hypothesis Testing
Hypothesis testing, or significance testing, is a method for testing a claim or hypothesis about a parameter in a population, using data measured in a sample (Privitera, 2012). The steps involved in hypothesis testing are (Keller, 2005; Groebner et al., 2009):
(i) There are two hypotheses: 1. the Null hypothesis and 2. the Alternate hypothesis.
(ii) Testing hypotheses begins with the assumption that the null hypothesis is true.
(iii) Determine the mean value of the sample from the service providers' population for constructing the Null hypothesis and the Alternative hypothesis. For example, the Null hypothesis can be expressed as: H0 = the average portal service quality is believed at least EQUAL TO the mean value calculated from the providers' sample; or, the population mean difference between service providers and service requesters is assumed to be zero. The Alternative hypothesis is the opposite of the Null hypothesis. For example, the Alternative hypothesis can be expressed as: H1 = the average portal service quality is NOT EQUAL TO the mean value calculated from the providers' sample; or, the population mean difference between service providers and service requesters is not equal to zero. It is set as NOT EQUAL TO because we do not want to be biased; this is what is required to be determined.
(iv) The goal of the process is to determine whether there is enough evidence to infer that the alternate hypothesis is true. The average portal service quality calculated as the mean value from the service providers has to be set as the benchmarking value: the mean value of service quality calculated from the service providers is to be compared against the mean value of the service quality calculated from the service requesters.
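The correction and oversampling adjustments above can be sketched as follows. The targeted population of 400 is an assumed figure for illustration only.

```python
def returned_sample_size(n0, population):
    # Cochran's correction, applied when n0 exceeds 5% of the population:
    #   n = n0 / (1 + n0/population)
    if n0 > 0.05 * population:
        return n0 / (1 + n0 / population)
    return n0

def adjusted_for_losses(n, return_rate=0.65):
    # nadj = n / 0.65, using the 65% anticipated return rate from the text.
    return n / return_rate

# Assumed targeted population of 400 (illustration only):
n = returned_sample_size(96.04, population=400)
print(round(n))                       # minimum returned sample size
print(round(adjusted_for_losses(n)))  # questionnaires to distribute
```
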

(v) Two possible errors can be made in any test: a Type I error occurs when we reject a true null hypothesis, and a Type II error occurs when we do not reject a false null hypothesis. The probabilities of Type I and Type II errors are P(Type I error) = α and P(Type II error) = β.
(vi) Collect the data, then calculate and provide inferences by computing the mean value of the sample, summarizing the data into a test statistic, and using the test statistic to determine the z-value and/or t-value and the p-value. Two possible decisions can be made here: (1) Less than (SQP < SQR) concludes that there is enough evidence to support the alternative hypothesis; it means the portal possesses better service quality. (2) Greater than (SQP > SQR) concludes that there is not enough evidence to support the alternative hypothesis; it means the portal does not possess better service quality.
11. Data Analysis and Results
11.1 Discussion on Internal Consistency
High-quality tests are important to evaluate the validity and reliability of the proposed instrument in a research study. Internal consistency is concerned with the interrelatedness of the proposed set of items: it describes the extent to which all the proposed items measure the same concept or construct, and it ensures that the research study is reliable and valid against the proposed items of the service quality measurement (Tavakol and Dennick, 2011). Cronbach's Alpha is a commonly employed index of test reliability; the proposed WePoServQual Instrument is taken to be valid for measuring web portal electronic service quality when the Cronbach Alpha value is more than 0.7 (Gliem and Gliem, 2003, p. 87). For this purpose, the Cronbach Alpha value is calculated against two sets of data, viz. the service providers' data and the service requesters' data, using the Excel-based Reliability Calculator created by Siegle. Reliability has been calculated for testing consistency against (i) overall service quality, (ii) the service quality gaps (viz. the service communication gap, service information gap, service standards gap and service performance gap) and (iii) the service quality dimensions.
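Step (vi) — summarizing the two sample means into a test statistic and making a decision — can be sketched as below. The Likert scores are made up for illustration (not the EDUGATE samples), and a z-test is used for brevity; with samples this small a t-test with t critical values would be the statistically proper choice.

```python
import math
from statistics import mean, pvariance

def two_sample_z(provider_scores, requester_scores, alpha=0.05):
    # Compare the providers' benchmark mean with the requesters' mean,
    # as in steps (iii)-(vi): z = (mean1 - mean2) / SE of the difference.
    n1, n2 = len(provider_scores), len(requester_scores)
    diff = mean(provider_scores) - mean(requester_scores)
    se = math.sqrt(pvariance(provider_scores) / n1
                   + pvariance(requester_scores) / n2)
    z = diff / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p, p < alpha  # True -> reject H0 (the "no difference" claim)

# Made-up Likert scores for illustration (not the EDUGATE survey data):
providers = [4, 3, 4, 5, 4]
requesters = [3, 3, 4, 2, 3, 4, 3, 2]
z, p, reject = two_sample_z(providers, requesters)
```
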

Validating Internal Consistency of WePoServQual Instrument
(Cronbach's Alpha values; the Service Provider column shows 5 subjects, with the 48 manipulated subjects in parentheses; the Service Requester column shows 48 subjects.)

Construct | Service Provider | Service Requester | Inference/Remarks/Comments
Overall Service Quality | 0.884 (0.890) | 0.921 | The instrument is said to be reflecting consistency against overall service quality. Acceptable from both provider and requester perspectives.

Service Quality Gaps
Service Performance Gap | 0.930 (0.937) | 0.795 | Consistent against the service performance gap. Acceptable from both.
Service Standards Gap | 0.831 (0.854) | 0.835 | Consistent against the service standards gap. Acceptable from both.
Service Information Gap | 0.571 | 0.827 | Q13 (Item 13) is to be deleted. Consistent against the service information gap.
Service Communication Gap | -1 (-1.05) | 0.706 | No improvement even after deleting or adding items. Inconsistent against the service communication gap from the providers' perspective, but from the service requesters' perspective it proved consistent; acceptable from the requesters' side and unacceptable from the providers' perspective.

Service Quality Dimensions
Realization | 0.571 (0.652) | 0.795 | Consistent against realization. Acceptable from the requester side but not acceptable from the provider side.
Usability | 0.857 (0.876) | 0.747 | Consistent against usability. Acceptable from both sides.
Functional Usefulness | 0.825 (0.827) | 0.838 | Consistent against functional usefulness; Q13 (Item 13) deleted. Acceptable from both sides.
Responsiveness | 0.93 (0.942) | 0.75 | Consistent against responsiveness; Q7 (Item 7) deleted for service providers, no item deleted for service requesters. Acceptable from both sides.
Convenience | — | 0.711 | Consistent against convenience. Acceptable from both sides.
Safety | 0.855 | 0.462 | Consistent against safety; Q18 (Item 18) deleted for the service providers' perspective, no item deleted for the service requesters' perspective. Acceptable from the providers' side and questionable from the requesters' side.
Accessibility | -1.025 | 0.600 | Inconsistent against accessibility; Item 4 deleted from the service providers' perspective, no item deleted from the service requesters' perspective. Questionable from the requesters' side and unacceptable from the providers' side, but retained as it contributes in a positively influencing manner to the service communication gap as well as to overall service quality.

The WePoServQual Instrument is thus validated against internal consistency. The providers' survey data analysis of the Edugate Portal of King Saud University reported an overall Cronbach's Alpha value of 0.884 against 5 subjects and 0.890 against 48 randomly manipulated subjects. The requesters' survey data analysis of the Edugate Portal reported a value of 0.921 against 48 subjects. More importantly, alpha is grounded in the tau-equivalent model, which assumes that each test item measures the same latent trait on the same scale. As the number of test items (= 30) is significant in number, the data does not violate the assumption of tau-equivalence and hence does not underestimate the reliability. So our proposed model, with its set of 30 items, is a valid and reliable e-service quality instrument for measuring the web portal service quality of education portals.
11.2 Discussion on the Difference between Customer Expectations and Customer Perceptions
Initially it was planned and executed to obtain two stratified samples (from the providers' perspective and from the requesters' perspective). However, we did not consider the providers' sample, as we could get responses from only 5 subjects, whereas we could get 48 responses from service requesters (customers) through the online survey (by QuestionPro). QuestionPro provides default calculated values of the mean, standard deviation and standard error for all the posed statements on the Likert scale 1-5, for questions on a varied range of scale, as well as for dichotomous questions. However, the focus of the statistical calculations has been limited to validating the proposed WePoServQual Instrument as well as evaluating the Edugate portal based on this proposed instrument. So, only the customers' data is considered for hypothesis testing. We considered the mean value of the 30 items of the instrument as the benchmark to validate the customer responses against each of these 30 items. The results show a negative gap score; it means the customers perceived better service quality of the EDUGATE portal than expected (where the expected value is set as the overall mean of the perceived scores). The service quality of the EDUGATE portal matched the expectations of the customers with 81% probability at the 95% two-tailed confidence level from the service quality dimensions perspective, and with 69% probability at the 95% two-tailed confidence level from the service quality gaps perspective.
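The benchmark-and-gap computation described above can be sketched as follows. The per-item perception means are invented for illustration; the study itself used the 30 WePoServQual items.

```python
from statistics import mean

def item_gaps(item_means):
    # "Expected" benchmark = overall mean of the perceived item scores,
    # as in the text; each item's gap = expected - perceived, so a
    # negative gap means the item is perceived better than expected.
    benchmark = mean(item_means)
    return benchmark, [benchmark - m for m in item_means]

# Assumed per-item perception means (illustration, not the 30 EDUGATE items):
benchmark, gaps = item_gaps([3.8, 4.1, 3.9, 4.3])
better_than_expected = [g for g in gaps if g < 0]  # items beating the benchmark
print(round(benchmark, 3), len(better_than_expected))
```

Note that with this construction the gaps sum to zero across all items, so the interesting output is the per-item (or per-dimension) pattern of negative gaps rather than their total.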

As a part of triangulation, we applied a technique including two questions posted in the questionnaire: In your opinion, (i) what is the overall service quality, and (ii) what is your satisfaction towards the Edugate portal services? A report was generated to find the relationship between overall service quality and customer satisfaction from the reports wizard of QuestionPro, by pivoting the service quality against the customer satisfaction. The chi-square based results indicated that there is no relationship between the two. z-statistics were also generated for finding this relationship, with each of overall service quality and customer satisfaction as a variable; these results match the chi-square test results, indicating no relationship between overall service quality and customer satisfaction. The results are in compliance with the arguments of Somaratna, Peiris and Jayasundara (2010), as mentioned in the section 2-Aim and Objectives of the Research of this document.
Conclusions
Based on the results of hypothesis testing, it can be concluded that the WePoServQual instrument can be used effectively for measuring and evaluating portal-based online service quality. The overall service quality is reported as Acceptable at the 95% confidence level. However, there is no customer satisfaction, and the relationship between service quality and customer satisfaction is found questionable. In order to claim its consistency, it is suggested that the instrument also be tested against other domains, such as Online Banking Services and Online Government Services.
References
Abdullah, F. (2006). The development of HEdPERF: a new measuring instrument of service quality for the higher education sector. International Journal of Consumer Studies, Vol. 30, No. 6, pp. 569-581.
Al-Mushasha, N. F. and Hassan, S. (2011). Chapter 19: A Model for Mobile Learning Service Quality in University Environment. In Khalil and Weippl (Eds.), Innovations in Mobile Multimedia Communications and Applications: New Technologies. IGI Global, USA.
AlSudairi (2012). Suggesting Service Quality Model for E-Governance. Journal of Theoretical and Applied Information Technology, Vol. 38, No. 1.
AlSudairi and Vasista (2013). E-Service Quality Strategy: Achieving Customer Satisfaction in Online Banking. Full paper accepted at ECEG13, Como, Italy, June 13-14.
Altman, E. and Hernon, P. (1998). Service Quality and customer satisfaction do matter. American Libraries, Vol. 29, pp. 53-55.

(1977). Service quality of University Library: a survey amongst students at Osmangazi University and Anadolu University. [Online] www. Learning and Performance Journal. NJ: Pearson Prentice Hall.palgrave. (2001). Calculating. Multivariate Data Analysis.com/EDUGATE. W. http://www. Springer-Verlag [Online] (Accessed on March 26. 1. 1-19.com/business/taylor/taylor1/lecturers/lectures/handouts/hChap7. 2013) Gliem.ua.edu.rs/~kristina//topic-dane-likert.istanbul. T. C. Groebner et. Development of an instrument to assess student perceptions of the quality of Pharmaceutical Education. and Gliem. 6 ed. I and Calero. J.matf.22 Anonymous (Undated). W. G. (2008).. Hair. conducting and evaluating quantitative and qualitative research. E. 125131.doc (Accessed on March 18. Holdford. W. 2013) Cochran.pdf (Accessed on March 24. J. http://eidergisi.. Moraga. pp.edu. 2013) Bryman. Cap 252 Project. (1998). R. http://cap252ksu. New York: John Wiley & Sons.siteglimpse. Tatham. Sampling techniques (3rd ed.tr/sayi5/iueis5m1.ksu. American Journal of Pharmaceutical Education. 5th Ed. Organisational Research: Determining Appropriate Sample Size in Survey Research.ksu. 1-10. (2008). 08-04-2013 . 3rd Edition. C. W. (2010). D. [Online] http://poincare. (2001). Bertram. Midwest Research to Practice conference in Adult.bg. (2006). (2003). 2013) Barlett. A. Chapter 10: Hypothesis testing. and Renders. Jr. A.sa (Accessed on March 19. Anderson. Prentice-Hall.pdf (Accessed on March 19. Upper Saddle River. (2007). C. Quality in Use Model for Web Portals (QiUWeP). E. Creswell. Kotrlik. Interpreting and Reporting Cronbach‟s Aplha Reliability Coefficient for Likert-Type Scales.dlsi. A. . Proceedings of the 10th International Confernece on Current Trends in Web Engineering. Z.sa. R. al. Prentice-Hall. J. International. 3rd Edition.pdf (Accessed on March 19. (2005). EDUGATE. R. Educational research: Planning. pp. Cap 252 (2010). M. http://gplsi. Likert Scales. Information Technology.. 43-50. 
Upper Saddle River.es/congresos/qwe10/fitxers/QWE10_Herrera. N J.. Caballero. EDUGATE Portal Address. CPSC 681 – Topic Report. and Higgins. F. p.wordpress. L. Business Statistics: A Decision-Making Approach. J.com/2010/09/king-saud- university.edu. and Community Education.). 2013) Filiz. Continuing. No. Vol. Inc. Oxford University Press. and Black. Social Research Methods. 1. Ekonometri ve Istatistik Say.pdf 2013). D. C. 65. J. Hypothesis Testing. Herrera. R. 19.ac.files.

Hu, Zhao and Guo (2009). AHP and CA Based Evaluation of Website Information Service Quality: An Empirical Study on High-Tech Industry Information Centre Web Portals. Journal of Service Science and Management, pp. 168-180.
Kabir, M. H. and Carlsson, T. (2010). Service Quality – Expectations, perceptions and satisfaction about Service Quality at Destination Gotland: A case study. Master thesis in Business Administration, Gotland University, Sweden.
Keller, G. (2005). Chapter 11: Introduction to hypothesis testing. In Statistics for Management and Economics. South Western-Cengage Learning.
King Saud University (2012). IT Student Guide V5 (Nov 2012). [Online] http://ccis.ksu.edu.sa/sites/ccis.ksu.edu.sa/files/Student_Guide%2520V5%2520(Nov-2012).pdf (Accessed on March 19, 2013)
King Saud University. Academic Affairs: KSU Registration System (EDUGATE). [Online] www.ksu.edu.sa (Accessed on March 19, 2013)
Loiacono, E., Watson, R. and Goodhue, D. (2002). WEBQUAL: A Measure of web site quality. Marketing Educators Conference: Marketing Theory and Applications, Vol. 13, pp. 432-437.
Loke, S., Taiwo, A., Salim, H. and Downe, A. (2011). Service Quality and Customer Satisfaction in a Telecommunication Service Provider. International Conference on Financial Management and Economics, IPEDR Vol. 11, IACSIT Press, Singapore.
Lovelock, C. and Wirtz, J. (2011). Services Marketing – People, Technology, Strategy. 7th Edition. Pearson Prentice Hall.
MoHE web site (2010). [Online] http://www.mohe.gov.sa/en/studyinside/Government-Universities/Pages/KSU.aspx
Mohammed and Alhamadani (2011). Service Quality Perspectives and Customer Satisfaction in Commercial Banks working in Jordan. Middle Eastern Finance and Economics, Iss. 14, Euro Journals Publishing, Inc.
Nunnally, J. (1967). Psychometric Theory. McGraw-Hill, New York, NY.
Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. Journal of Marketing, Vol. 49, No. 4, pp. 41-50.
Parasuraman, A., Berry, L. L. and Zeithaml, V. A. (1991). Understanding customer expectations of service. MIT Sloan Management Review (April 15, 1991). [Online] http://sloanreview.mit.edu/article/understanding-customer-expectations-of-service/ (Accessed on March 18, 2012)
Privitera, G. (2012). Student Study Guide with SPSS Workbook for Statistics for the Behavioural Sciences: Chapter 8: Introduction to Hypothesis Testing. SAGE Publications, Inc.
Pyrczak, F. and Bruce, R. R. Writing empirical research reports: a basic guide for students of the social and behavioural sciences. Pyrczak Publishing, 162 pages.

QuestionPro web site (2013). How it works? [Online] http://www.questionpro.com/home/howItWorks.html (Accessed on March 19, 2013)
Sahney, S., Banwet, D. K. and Karunes, S. (2004). Conceptualising total quality management in Higher Education. The TQM Magazine, Vol. 16, No. 2, pp. 145-159.
Sahut, J. M. and Kucerova, Z. (2003). Enhance Internet Banking Service Quality with Quality Function Deployment Approach. The Journal of Internet Banking and Commerce. [Online] http://www.arraydev.com/commerce/jibc/0311-09.htm (Accessed March 18, 2013)
Santos, J. R. A. (1999). Cronbach's Alpha: A Tool for Assessing the Reliability of Scales. Journal of Extension, Vol. 37, No. 2.
Sekaran, U. and Bougie, R. (2009). Research Methods for Business – A Skill Building Approach. 5th Edition. NY: Wiley & Sons Inc., 448 pages.
Seth, N., Deshmukh, S. G. and Vrat, P. (2005). Service quality models: a review. International Journal of Quality & Reliability Management, Vol. 22, Iss. 9, pp. 913-949.
Shaik, N., Lowe, S. and Pinegar, K. (2006). DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs. Online Journal of Distance Learning Administration, Vol. IX, No. 2, University of West Georgia Distance Education Centre. [Online] http://www.westga.edu/~distance/ojdla/summer92/shaik92.htm (Accessed March 18, 2013)
Sherry, C., Bhat, R., Beaver, B. and Ling, A. (2004). Students as customers: The expectations and perceptions of local and international students. HERDSA conference proceedings.
Siegle, D. Reliability Calculator. [Online] http://www.gifted.uconn.edu/siegle/research/Instrument%20Reliability%20and%20Validity/reliabilitycalculator2.xls (Accessed March 23, 2013)
Somaratna, S. D., Peiris, C. N. and Jayasundara, C. (2010). User expectations versus user perception of service quality in University libraries: a case study. ICULA 2010. [Online] http://archive.cmb.ac.lk/research/bitstream/70130/168/1/ccj4.pdf (Accessed on March 19, 2013)
Statmyweb (2012). [Online] http://www.statmyweb.com/site/edugate.ksu.edu.sa (Accessed on March 19, 2013)
Tan, C. W., Benbasat, I. and Cenfetelli, R. (2013). IT Mediated customer service content and delivery in Electronic Governments: An Empirical Investigation of the Antecedents of Service Quality. MIS Quarterly, Vol. 37, No. 1, pp. 77-109.
Tashakkori, A. and Creswell, J. W. (2007). The new era of mixed methods. Journal of Mixed Methods Research, Vol. 1, No. 1, pp. 3-7.

Tate et al. (2007). Perceived Service Quality in a University Web Portal: Revising the E-Qual Instrument. Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS'07), IEEE.
Tate et al. Chapter 4: Stakeholder Expectations of Service Quality in a University Web Portal. In D. Oliver et al. (Eds.), Self-Service in the Internet Age. Springer-Verlag London Limited. DOI 10.1007/978-1-84800-207-4_4.
Tavakol, M. and Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, pp. 53-55.
Tojib, D. R., Sugianto and Sendjaya, S. (2008). User satisfaction with business-to-employee portals: conceptualization and scale development. European Journal of Information Systems, Vol. 17, pp. 649-667.
Wacker, J. (2004). A theory of formal conceptual definitions: Developing theory-building measurement instruments. Journal of Operations Management, Vol. 22, pp. 629-650.
Wang and Shieh (2006). The relationship between service quality and customer satisfaction: the example of CJCU library. Journal of Information & Optimization Sciences, Vol. 27, No. 1, pp. 193-209.
Weaver, B. Hypothesis Testing Using z- and t-tests. [Online] http://www.angelfire.com/wv/bwhomedir/notes/z_and_t_tests.pdf (Accessed March 19, 2013)
Yang, Z. and Fang, X. (2004). Online Service Quality Dimensions and their relationships with satisfaction. International Journal of Service Industry Management, Vol. 15, No. 3, pp. 302-326.
Zeithaml, V. A., Parasuraman, A. and Malhotra, A. (2002). Service Quality delivery through web sites: A critical review of extant knowledge. Journal of the Academy of Marketing Science, Vol. 30, No. 4, pp. 362-375.
Zeithaml, V. A., Parasuraman, A. and Malhotra, A. An Empirical Examination of the Service Quality-Value-Loyalty Chain in Electronic Channels. Working Paper, University of North Carolina.
