The current issue and full text archive of this journal is available at
www.emeraldinsight.com/0968-4883.htm
Measuring student satisfaction at a UK university
Jacqueline Douglas, Alex Douglas and Barry Barnes
Faculty of Business and Law, Liverpool John Moores University, Liverpool, UK
Abstract
Purpose – The purpose of this paper is to report on the design and use of a questionnaire to measure
student satisfaction at Liverpool John Moores University’s Faculty of Business and Law.
Design/methodology/approach – The paper utilised the concept of the service-product bundle to
design the survey questionnaire and then used SPSS and Quadrant Analysis to analyse the results to
determine which aspects of the University’s services were most important and the degree to which
they satisfied the students.
Findings – The most important aspects were those associated with teaching and learning, while the
least important were those associated with the physical facilities.
Practical implications – The concept of the service-product bundle is a valid and reliable tool for
the design of a satisfaction survey and segments a University’s service offering in such a way as to
allow management to target resources at those areas that are perceived to be low satisfaction and high
importance. The questionnaire can be utilised in most education establishments.
Originality/value – Utilising the concept of the service-product bundle places responsibility for
questionnaire content and design firmly on the service provider rather than the user.
Keywords Service levels, Higher education, Students, Surveys, United Kingdom
Paper type Research paper
Introduction
Students’ opinions about all aspects of academic life are now sought by educational
institutions worldwide, generally in the form of a satisfaction feedback questionnaire.
It is this student satisfaction survey, within the context of Liverpool John Moores
University’s Faculty of Business and Law, that this paper addresses.
In the UK, Higher Education (HE) students were considered to be the “primary
customers” of a University (Crawford, 1991), even before they were liable for the
payment of “up-front” tuition fees. Students are the direct recipients of the service
provided, i.e. a three-year degree programme made up of a number of modules at each
level. As if to confirm this status of the “student as customer”, the Higher Education
Funding Council for England (HEFCE) has introduced a National Student Survey. This
survey is aimed at final year students to seek their views on a number of aspects of
teaching, assessment and support provided by their university and its courses. The
results will ultimately be used by Government and Funding Bodies to produce league
tables of university performance. The position of a university in any league tables will
impact ultimately on its image. Image has a strong impact on the retention of current
students and the attraction of potential students (James et al., 1999). Indeed recruitment
and retention of students has been moved to the top of most universities’ agendas by
HEFCE due to their desire to increase the UK student population in line with
Government targets. Poor retention rates may have adverse funding consequences for
institutions (Rowley, 2003a). This paper takes the view that student satisfaction,
retention and recruitment are closely linked. Thus student satisfaction has become an
extremely important issue for universities and their management. The aim is to try to
maximise student satisfaction, minimise dissatisfaction and therefore retain students
and so improve the institution’s performance across a number of league tables.

Quality Assurance in Education, Vol. 14 No. 3, 2006, pp. 251-267. Emerald Group Publishing Limited, ISSN 0968-4883. DOI 10.1108/09684880610678568
A number of previous research studies (see for example, Galloway, 1998 and
Banwet and Datta, 2003) into student perceptions of quality/satisfaction have utilised
the SERVQUAL framework (Parasuraman et al., 1988). However, SERVQUAL has
been much criticised over the years (see for example, Buttle, 1996; Asubonteng et al.,
1996; Pariseau and McDaniel, 1997; Aldridge and Rowley, 1998). Taking these
criticisms into consideration the questionnaire used in the satisfaction survey asked
only for perceptions of performance of a range of service aspects (as well as
importance) but did not aim to collect data associated with expectations. Indeed, the
survey questionnaire was designed around the concept of the service-product bundle.
This concept is discussed in the next section.
The last bullet point was the rationale behind the survey undertaken for the particular
research project described in this paper.
Keeping customers satisfied is what leads to customer loyalty. Research conducted
by Jones and Sasser Jr (1995) into thirty organisations from five different markets
found that where customers have choices the link between satisfaction and loyalty is
linear; as satisfaction rises, so too does loyalty. However, in markets where competition
was intense they found a difference between the loyalty of satisfied and completely
satisfied customers. Put simply, if satisfaction is ranked on a 1-5 scale from completely
dissatisfied to completely satisfied, the 4’s – though satisfied – were six times more
likely to defect than the 5’s.
Customer loyalty manifests itself in many forms of customer behaviour. Jones and
Sasser Jr (1995) grouped ways of measuring loyalty into three main categories:
(1) intent to re-purchase;
(2) primary behaviour – organisations have access to information on various
transactions at the customer level and can track five categories that show actual
customer re-purchasing behaviour; viz, recency, frequency, amount, retention,
and longevity; and
(3) secondary behaviour – e.g. customer referrals, endorsements and spreading the
word are all extremely important forms of consumer behaviour for an
organisation.
Translating this into university services, this covers intent to study at a higher level
within the same institution, how frequently and recently a student used ancillary
services, such as the library, catering and IT services, and lastly the willingness to
recommend the institution to friends, neighbours and fellow employees.
Banwet and Datta (2003) believed that satisfied customers are loyal, and that satisfied
students were likely to attend another lecture delivered by the same lecturer or opt for
another module or course taught by her/him. In their survey of 168 students who
attended four lectures delivered by the same lecturer, covering perceived service
quality, importance and post-visit intentions, they found that students placed more
importance on the outcome of the lecture (knowledge and skills gained, availability of
class notes and reading material, coverage and depth of the lecture and teacher’s
feedback on assessed work) than any other dimension. This supports the findings of
Schneider and Bowen (1995) who deduced that the quality of the core service influences
the overall quality of the service perception. For universities the core service delivery
method is still the lecture. Overall Banwet and Datta (2003) found that students’
intentions to re-attend or recommend lectures were dependent on their perceptions of
quality and the satisfaction they got from attending previous lectures. This is
supported by the research of Hill et al. (2003) who utilised focus groups to determine
what quality education meant to students. The most important theme was the quality
of the lecturer, including classroom delivery, feedback to students during the session
and on assignments, and the relationship with students in the classroom.
Research by Tam (2002) to measure the impact of Higher Education (HE) on
students’ academic, social and personal growth at a Hong Kong university found that
as a result of their university experience students had changed intellectually, socially,
emotionally and culturally. This growth was evidenced as students progressed from
one year to another as their university career developed. Is this also the case with
students’ perceptions of service quality and satisfaction? A number of researchers have
suggested that this might indeed be the case (Hill, 1995; O’Neil, 2003) although
obtaining valid and reliable data to support such a stance is difficult. This study aims
to determine if there are differences in those aspects of a university service that
students consider important, as well as their satisfaction levels, associated with their
year/level of study, i.e. first, second and third.
Methodology
A quantitative survey was designed to elicit student satisfaction levels across the
University’s service offerings. The questionnaire consisted of 60 questions informed by
previous research studies and subdivided into the various categories of the
service-product bundle, including lecture and tutorial facilities, ancillary facilities,
the facilitating goods, the explicit service and the implicit service. At the end students
were asked for their overall satisfaction rating and whether they would recommend the
University to a prospective student. The satisfaction questions were preceded by a
series of demographic questions that would allow the sample population to be
segmented. These included, inter alia, questions regarding gender, age, level of study,
mode of study and country of origin.
Participation in the survey was entirely voluntary and anonymous. The length and
complexity of the questionnaire was influenced, in part, by the balance between the
quest for data and getting students to complete the survey.
The questionnaire was piloted among 100 undergraduate volunteers. The length of
time it took them to complete the survey was noted and at the end they were asked for
any comments regarding the validity and reliability of individual questions. They were
also asked if there was anything “missing” from the questionnaire. Based on the
feedback received a number of questions were amended and the design of the
questionnaire altered slightly. It took on average 12 minutes to complete the
questionnaire.
In order to get as large and representative a sample as possible, core modules from
programmes in all five Schools within the Faculty of Business and Law at all three
undergraduate levels were targeted. Staff teaching these modules were approached
and permission sought to utilise 15 minutes of their lecture time in order to explain the
rationale behind the survey and to persuade students to complete the survey in class.
Generally this “personal touch” was successful in eliciting a good response. Over the
two weeks during which the survey was undertaken, only one person refused to
complete the questionnaire.
Researchers are divided as to whether or not determinants of satisfaction should be
weighted by their importance because different attributes may be of unequal
importance to different people (Angur, 1998; Harvey, 1995; Patterson and Spreng,
1997). In this study both satisfaction and importance were measured.
There is no such thing as the perfect rating scale. However, some produce more
reliable and valid results than others. Devlin et al. (1993) determined that a good rating
scale should have, inter alia, the following characteristics:
. minimal response bias;
. discriminating power;
. ease of administration; and
. ease of use by respondents.
In order to accommodate these characteristics, the rating scale contained five points
with well-spaced anchor points representing the possible range of opinions about the
service. The scale contained a neutral category and the negative categories were
presented first (to the left).
Thus, undergraduates were required to respond utilising a 5-point Likert scale of 1
to 5, where 1 is very unsatisfactory, 2 is unsatisfactory, 3 is neutral (neither satisfactory
nor unsatisfactory), 4 is satisfactory and 5 is very satisfactory. This type of scale
provides a common basis for responses to items concerned with different aspects of the
University experience. The importance that students place on each criterion was
measured utilising a 5-point Likert scale, where 1 is very unimportant, 2 is
unimportant, 3 is neutral (neither important nor unimportant), 4 is important and 5 is
very important. Respondents were asked to tick the box next to the number that
represented their opinion on each item.
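The two scales just described reduce each completed questionnaire to a pair of 1-5 codes per item. A minimal stdlib-Python sketch (the response codes below are invented, for a single hypothetical item) of how such codes aggregate into the per-item means used in the later analysis:

```python
from statistics import mean

# Hypothetical ticks for one item ("teaching ability of staff") on the
# two 5-point scales described above:
# 1 = very unsatisfactory/unimportant ... 5 = very satisfactory/important.
satisfaction_codes = [4, 5, 3, 4, 4, 2, 5]
importance_codes = [5, 5, 4, 5, 4, 5, 5]

# Per-item mean ratings on the 1-5 scales.
item_satisfaction = mean(satisfaction_codes)
item_importance = mean(importance_codes)

print(round(item_satisfaction, 2), round(item_importance, 2))  # → 3.86 4.71
```

Repeating this over all 60 questions yields one (importance, satisfaction) pair per service aspect, which is the input to the Quadrant Analysis described next.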
A sample of 865 students, from a Faculty total of 3,800, was surveyed.
The questionnaires were analysed using SPSS v. 11, and Quadrant Analysis was
conducted in order to determine those areas perceived as being the least satisfactory
with the greatest importance rating.
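Quadrant Analysis here amounts to classifying each service aspect by its mean importance and mean satisfaction against a pair of cut-points. The study used SPSS; the sketch below re-expresses the classification in plain Python with invented item means, and using the grand means as cut-points is an assumption (the scale midpoint of 3.0 is another common convention):

```python
from statistics import mean

# Hypothetical per-item mean ratings on the paper's 5-point scales:
# item name -> (mean importance, mean satisfaction). Illustrative only.
items = {
    "Teaching ability of staff": (4.8, 3.9),
    "Subject expertise of staff": (4.7, 4.1),
    "IT facilities": (4.5, 3.2),
    "Vending machines": (2.1, 2.4),
    "Decoration of lecture rooms": (2.3, 3.5),
}

def quadrant(importance, satisfaction, imp_cut, sat_cut):
    """Classify an item into the paper's quadrants:
    A = high importance, low satisfaction; B = high/high;
    C = low importance, low satisfaction; D = low/high."""
    if importance >= imp_cut:
        return "A" if satisfaction < sat_cut else "B"
    return "C" if satisfaction < sat_cut else "D"

# Cut-points: grand means of the per-item means (an assumed convention).
imp_cut = mean(i for i, _ in items.values())
sat_cut = mean(s for _, s in items.values())

for name, (imp, sat) in items.items():
    print(f"{name}: quadrant {quadrant(imp, sat, imp_cut, sat_cut)}")
```

Items landing in quadrant A (high importance, low satisfaction) are the candidates for management attention identified by the analysis.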
Finally, respondent focus groups were assembled to discuss some of the issues that
required more in-depth analysis and which, due to constraints of space and time, were
not explicitly asked about in the original survey.
Results
A total of 864 questionnaires were returned, although not all had complete data sets.
Table I details the demographic mix of the respondents.
Based on all student responses, the most important (i.e. list of the top ten starting
from the highest value) and least important (i.e. list of the bottom ten starting from the
lowest value) aspects of the University service are shown in Table II.
As can be seen from Table II the most important areas of the University services are
those associated with learning and teaching. Interestingly, given the recommendations
of a Government White Paper (HEFCE et al., 2003) that from 2006 all newly recruited
university teaching staff should obtain a teaching qualification that incorporates
agreed professional standards, the most important aspect of the service is the teaching
ability of staff, closely followed by their subject expertise. The consistency of teaching
quality irrespective of the teacher is also considered by the respondents as important,
recognising that teaching quality can be variable. The students also recognise the
importance of the lecture and tutorial, which is not surprising given that for most
universities that is still the core service offering and is very much linked to the teaching
ability and subject knowledge of staff. Teaching and learning support materials were
also ranked highly, particularly supplementary handout materials and the use of
Blackboard for enhancing student learning. These are mostly associated with the
explicit service delivered to the students and the facilitating goods.

Table I. Demographic mix of respondents (%)
Gender: Male 46; Female 54
Nationality: Home (UK) 89; European Union (EU) 4; International (non-EU) 7
Mode of study: Full-time 80; Part-time 8; Sandwich(a) 12
Level of study: Level 1 30; Level 2 48; Level 3 22
Note: (a) Sandwich students are those whose programme of study includes a year in industry
With regard to facilities, students have ranked the importance of IT facilities very
highly, reflecting the usefulness of connection to the Internet for research purposes and
software packages for producing high quality word-processed documentation for
coursework assignments and dissertations. This links well with the high ranking of the
Learning Resource Centre where IT facilities can be accessed and books and journals
sourced in “hard” copy or electronic copy.
Table II also shows those areas of the service that students find relatively
unimportant. These are mostly associated with the lecture and tutorial facilities and
the ancillary services, for example, layout and decoration of lecture and tutorial
facilities, catering facilities and vending machines.
A further analysis was undertaken to determine whether different segments of the
respondent population had similar or different rankings of the University services’
attributes with regard to importance and unimportance.
With regard to mode of study, Table III shows the rankings for students studying
full-time with the University. Whilst acknowledging the fact that 80 per cent of the
sample population is full-time students, the rankings of those service aspects
considered most important are very similar to those for the sample population as a
whole, the only difference being that “supplementary tutorial materials” replaces
“approachability of staff”. Once again the majority of aspects considered least
important are associated with the facilities and ancillary services.
When the views of part-time students are considered, a number of interesting
differences in their priorities are worthy of discussion. Table IV shows the rankings of
service aspects for part-time students. IT facilities drop from third to tenth in
their importance rankings, perhaps indicative of the fact that they have access to IT
facilities at work and/or at home, thus rendering it less important relative to other
aspects of service. Blackboard (a virtual learning environment that allows teaching
staff to make learning and other material available via the internet), on the other hand
rises from 10th to 7th in importance indicating its usefulness as a teaching aid for
students who do not attend the University on a daily basis and who may miss classes
due to work or family commitments. Interestingly, the “helpfulness of technical staff”
is considered unimportant, again reflecting their access to such help at work or a
greater level of expertise on their part through working with IT on a daily basis.
Figure 1. Satisfaction and importance grid: Quadrant A – high importance and low satisfaction; Quadrant B – high importance and high satisfaction; Quadrant C – low importance and low satisfaction; Quadrant D – low importance and high satisfaction.

Table V. Service aspects by satisfaction-importance quadrant.
Conclusions
Based on the results of this comprehensive study of students studying within the
Faculty of Business and Law at Liverpool John Moores University (LJMU) it is clear
that many of the physical aspects of the University services are not important with
regards to student satisfaction. This finding supports previous findings by Schneider
and Bowen (1995), Banwet and Datta (2003) and Hill et al. (2003) all of whom found that
the most important aspects of a university’s service offerings were associated with the
core service, i.e. the lecture, including the attainment of knowledge, class notes and
materials and classroom delivery. Furthermore, the findings also confirm the research
of Price et al. (2003) in that it seems that the University’s physical facilities influence
students’ choice. However, once here it is the quality of the teaching and learning
experience that is of importance. Fortunately, LJMU has a state-of-the-art Learning
Resource Centre equipped with scores of computer stations fitted with the latest
software and linked to the Internet. The Faculty of Business and Law also has a new
technologically advanced Lecture Theatre and a large IT suite. These aspects of the
facilities can be, and indeed are, used to attract students to the Faculty of Business and
Law, for example during Open Days. However, once students have enrolled, it is the
quality of the teaching and learning that will cause satisfaction or dissatisfaction and
they are prepared to tolerate, to a large extent, “wobbly tables” and paint flaking off
walls as long as the teaching they receive is at an acceptable level. This may have
implications for management responsible for resource allocations to various areas of
the University services and infrastructure.
Student feedback tends to confirm that they do receive high quality teaching
from staff with high levels of expertise in their various academic disciplines. The
lecture and tutorial are the core service provided by the university, and it is what
goes on in these classrooms that determines student satisfaction with the explicit
service. They are prepared to tolerate to a large extent deficiencies in the physical
aspects of the facilities as long as the teaching they receive is perceived to be at
an acceptable level.
The focus groups confirmed the findings of Banwet and Datta (2003) that students
“vote with their feet” based on their experiences in lectures and are more likely to enrol
on an optional module delivered by a teacher perceived as providing good teaching.
However, in line with Coles (2002) large classes are likely to cause dissatisfaction.
The explicit service is aided (and abetted) by the facilitating goods such as
PowerPoint presentation slides, supplementary handout materials and the
recommended textbooks. Given the large numbers of students taking most modules
there will never be sufficient textbooks available in the LRC to satisfy demand.
With regard to quality improvement it may be worthwhile introducing explicit
standards of service to various aspects of the University services. These would cover
most “moments of truth”. For example, teaching staff would undertake to respond to
all student e-mails within 48 hours and aim to provide feedback on coursework
assignments within 15 working days. Similar standards could be introduced in the
LRC and administration offices and could encompass internal and external service
level agreements. Management can also be involved in such service standards by
guaranteeing that tutorial classes will not have more than 20 students. It is also the
responsibility of management to provide the resources necessary to meet any
standards.
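The suggested standards above (48-hour e-mail responses, 15-working-day coursework feedback) lend themselves to simple monitoring. A hedged Python sketch, with hypothetical dates and helper names, of how such checks might be expressed:

```python
from datetime import datetime, timedelta

# Service standards suggested in the text: e-mail replies within 48 hours,
# coursework feedback within 15 working days.
EMAIL_SLA = timedelta(hours=48)
FEEDBACK_SLA_WORKING_DAYS = 15

def email_within_standard(received, replied):
    """True if the reply met the 48-hour standard."""
    return replied - received <= EMAIL_SLA

def working_days_between(start, end):
    """Count Mon-Fri days strictly after `start`, up to and including `end`."""
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# Hypothetical example: e-mail received Monday morning, answered next day.
received = datetime(2006, 3, 6, 9, 0)
replied = datetime(2006, 3, 7, 17, 0)
print(email_within_standard(received, replied))  # → True
```

The same working-day counter can be compared against `FEEDBACK_SLA_WORKING_DAYS` to flag late coursework feedback; both thresholds are the ones proposed in the text, not an existing University system.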
Universities world-wide are now competing for students both nationally and
internationally. In order to recruit and retain students they should aim to enhance
student satisfaction and reduce student dissatisfaction. This can only be achieved if all
the services that contribute to “academic life” are delivered to a suitable standard. The
students are the sole judges of whether or not this has been achieved; therefore student
satisfaction surveys should be undertaken on a regular basis and a university’s service
offering adapted accordingly.
Glossary
EU    European Union
HE    Higher Education
HEFCE Higher Education Funding Council for England
IT    Information Technology
LJMU  Liverpool John Moores University
LRC   Learning Resource Centre
References
Aldridge, S. and Rowley, J. (1998), “Measuring customer satisfaction in higher education”,
Quality Assurance in Education, Vol. 6 No. 4, pp. 197-204.
Angur, M.G. (1998), “Service quality measurements in a developing economy: SERVQUAL
versus SERVPERF”, Journal of Customer Service in Marketing Management, Vol. 4 No. 3,
pp. 47-60.
Asubonteng, P., McCleary, K.J. and Swan, J.E. (1996), “SERVQUAL revisited: a critical review of
service quality”, The Journal of Services Marketing, Vol. 10 No. 6, pp. 62-81.
Banwet, D.K. and Datta, B. (2003), “A study of the effect of perceived lecture quality on
post-lecture intentions”, Work Study, Vol. 52 No. 5, pp. 234-43.
Buttle, F. (1996), “SERVQUAL: review, critique, research agenda”, European Journal of
Marketing, Vol. 30 No. 1, pp. 8-32.
Carlzon, J. (1989), Moments of Truth, HarperCollins, New York, NY.
Coles, C. (2002), “Variability of student ratings of accounting teaching: evidence from a Scottish
business school”, International Journal of Management Education, Vol. 2 No. 2, pp. 30-9.
Crawford, F. (1991), Total Quality Management, Committee of Vice-Chancellors and Principals,
occasional paper (London, December), cited in Hill, F.M. (1995), “Managing service quality
in higher education: the role of the student as primary consumer”, Quality Assurance in
Education, Vol. 3 No. 3, pp. 10-21.
Dale, B.G. (2003), Managing Quality, 4th ed., Blackwell Publishing, Oxford.
Deming, W.E. (1982), Out of the Crisis, Massachusetts Institute of Technology, Cambridge, MA.
Devlin, S.J., Dong, H.K. and Brown, M. (1993), “Selecting a scale for measuring quality”,
Marketing Research, Vol. 5 No. 3, pp. 12-17.
Dillon, W.R., Madden, T.J. and Firtle, N.H. (1993), Essentials of Marketing Research, Irwin,
Boston, MA.
Galloway, L. (1998), “Quality perceptions of internal and external customers: a case study in
educational administration”, The TQM Magazine, Vol. 10 No. 1, pp. 20-6.
Gold, E. (2001), Customer Service: A Key Unifying Force for Today’s Campus, Netresults,
National Association of Student Personnel Administrators, 22 January, available at:
www.naspa.org/netresults, cited in Banwet, D.K. and Datta, B. (2003), “A study of the
effect of perceived lecture quality on post-lecture intentions”, Work Study, Vol. 52 No. 5,
pp. 234-43.
Harvey, L. (1995), “Student satisfaction”, The New Review of Academic Librarianship, Vol. 1,
pp. 161-73.
HEFCE, UUK and SCP (2003), Final Report of the TQEC on the Future Needs and Support for
Quality Enhancement of Learning and Teaching in Higher Education, TQEC, London.
Hill, F.M. (1995), “Managing service quality in higher education: the role of the student as
primary consumer”, Quality Assurance in Education, Vol. 3 No. 3, pp. 10-21.
Hill, Y., Lomas, L. and MacGregor, J. (2003), “Students’ perceptions of quality in higher
education”, Quality Assurance in Education, Vol. 11 No. 1, pp. 15-20.
James, D.L., Baldwin, G. and McInnis, C. (1999), Which University? The Factors Influencing the
Choices of Prospective Undergraduates, Centre for the Study of Higher Education,
Melbourne.
Jones, T. and Sasser, W.E. Jr (1995), “Why satisfied customers defect”, Harvard Business Review,
November-December, pp. 88-99.
Low, L. (2000), Are College Students Satisfied? A National Analysis of Changing Expectations,
Noel-Levitz, Iowa City, IA, cited in Banwet, D.K. and Datta, B. (2003), “Study of the effect of
perceived lecture quality on post-lecture intentions”, Work Study, Vol. 52 No. 5, pp. 234-43.
Martilla, J.A. and James, J.C. (1977), “Importance – performance analysis”, Journal of Marketing,
January, pp. 77-9.
O’Neill, M. (2003), “The influence of time on student perceptions of service quality: the need for
longitudinal measures”, Journal of Educational Administration, Vol. 41 No. 3, pp. 310-24.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), “SERVQUAL: a multiple-item scale for
measuring consumer perceptions of service quality”, Journal of Retailing, Vol. 64 No. 1,
pp. 12-24.
Pariseau, S.E. and McDaniel, J.R. (1997), “Assessing service quality in business schools”,
International Journal of Quality and Reliability Management, Vol. 14 No. 3, pp. 204-18.
Patterson, P.G. and Spreng, R.A. (1997), “Modelling the relationship between perceived value,
satisfaction and repurchase intentions in a business-to-business service context: an
empirical examination”, International Journal of Service Industry Management, Vol. 8
No. 5, pp. 414-34.
Price, I., Matzdorf, F., Smith, L. and Agahi, H. (2003), “The impact of facilities on student choice
of university”, Facilities, Vol. 21 No. 10, pp. 212-22.
Rowley, J. (2003a), “Retention: rhetoric or realistic agendas for the future of higher education”,
The International Journal of Educational Management, Vol. 17 No. 6, pp. 248-53.
Rowley, J. (2003b), “Designing student feedback questionnaires”, Quality Assurance in
Education, Vol. 11 No. 3, pp. 142-9.
Sasser, W.E., Olsen, R.P. and Wyckoff, D.D. (1978), Management of Service Operations, Allyn
and Bacon, Boston, MA.
Schneider, B. and Bowen, D.E. (1995), Winning the Service Game, Harvard Business School Press,
Boston, MA.
Sohail, M.S. and Shaikh, N.M. (2004), “Quest for excellence in business education: a study of
student impressions of service quality”, The International Journal of Educational
Management, Vol. 18 No. 1, pp. 58-65.
Tam, M. (2002), “Measuring the effect of higher education on university students”, Quality
Assurance in Education, Vol. 10 No. 4, pp. 223-8.
Appendix. Questionnaire design
The questionnaire was designed around the concept of the service-product bundle described
earlier. It therefore contained questions relating to the physical facilities/facilitating goods, the
explicit service and the implicit service (see Figure A1).

Figure A1. The implicit service