
Longitudinal Student Evaluations Of A Postgraduate Unit Using Importance-

Performance Analysis

Steven Pike & Ingrid Larkin, Queensland University of Technology

Abstract

In the competitive tertiary education market, a consumer orientation is essential. Management
must assess the perceptions of prospective students and monitor the (dis)satisfaction levels of
current customers. This study reports the results of a rare longitudinal investigation of student
satisfaction using a technique that has been underutilised in the education marketing
literature. Importance-performance analysis (IPA) was used to monitor the expectations of,
and satisfaction with, a core postgraduate business unit during Semester 1, 2005. The study
represents the first stage of a four-semester trial of IPA as a tool to monitor satisfaction. This
first stage documents key benchmarks against which amendments to the unit, based on student
feedback, can be measured over time through a series of cross-sectional longitudinal surveys
in Semester 2, 2005 and Semesters 1 and 2, 2006. In this stage the IPA matrix identified five
aspects of the unit that were important to students, but on which the unit’s performance
requires improvement.

Key words

Student satisfaction, importance-performance analysis, longitudinal.

Introduction

There can be few other industry sectors as demanding as tertiary education when it comes to
achieving customer satisfaction. The potential for critical incidents to occur during encounters
with administrative, academic, library, security and hospitality staff, which can impact on
satisfaction, is high. Furthermore, many such encounters are in group situations such as a
lecture or tutorial, in which the customer-student has little if any control. In Australia, tertiary
education is a competitive market (James 2001), and so a market orientation is as necessary
for university management as it is for other services. Such an orientation recognises that
achieving organisational goals requires an understanding of the needs of the market and
delivering satisfaction more effectively than rivals (Kotler, Adam, Brown and Armstrong
2003). Two different research approaches are required to effectively monitor this process, if
service delivery decisions are to be made with the customer in mind. Management must
assess the perceptions held of the university by prospective students (see for example Lawley
and Blight 1997), as well as track the (dis)satisfaction levels of existing students (see for
example McInnis and James 1999). This paper is concerned with the issue of gaining a better
understanding of (dis)satisfaction with the delivery of a core postgraduate unit in a business
course.

Expectations of tertiary education can only be realised after consumption. Therefore,
perceptions play an important role in the decision process, and may only have a tenuous
relationship with fact (Reynolds 1965). For example, it has even been suggested that most
Australian undergraduate applicants’ perceptions of university reputations are based on “very
flimsy hearsay evidence” (Baldwin and James 2000, p. 147). Thus, an insight into how well a
university unit is perceived across a range of attributes is incomplete without an evaluation of
student expectations. Satisfaction results from expectations about important attributes and
then perceived performance on those attributes (Myers and Alpert 1968). However, in the
authors’ combined experience across a number of institutions, evaluations of units and
teaching have omitted any measure of student expectations.

Importance-performance analysis (IPA) was first reported in the marketing literature by
Martilla and James (1977), and has been under-reported in the higher education literature. The
technique considers both the importance of product/service attributes to the individual as well
as the perceived product performance across the same range of attributes. In IPA, importance
and performance are analysed separately, rather than summed as in Fishbein’s (1967) multi-
attribute model. This is important, since an identical summed score could represent either high
importance/low performance or low importance/high performance (Ennew, Reed and Binks
1993). IPA’s versatility has been demonstrated in a range of applications, including the
evaluation of: breakfast food brands (Sethna 1982), national competitiveness (Leong and Tan
1992), therapeutic recreation services (Kennedy 1986), communication effectiveness
(Richardson 1987), a new sports complex (Bartlett and Einar 1992), dental practices (Nitse
and Bush 1993), employee satisfaction (Graf, Hemmasi and Nielsen 1992, Havitz, Twynam
and DeLorenzo 1991, Novatorov 1997, Williams and Neal 1993), tourism policy (Evans and
Chon, 1989), banking (Ennew, Reed and Binks 1993), operations improvement priorities
(Slack, 1994), and short break holiday destinations (Pike 2002). The greatest strength of IPA
is that the simplicity and power of the matrix facilitate managerial decision-making. As
shown in Figure 1, the IPA matrix represents two dimensions and four quadrants.
The Y-axis plots respondents’ importance ratings of attributes, and the X-axis plots perceived
product performance on the same attributes. The goal is to identify the attributes in Quadrants
1 and 2.

Figure 1 – IPA Matrix

                         Quadrant 1               Quadrant 2
                         Concentrate here         Keep it up
     Importance
                         Quadrant 3               Quadrant 4
                         Low Priority             Possible Overkill

                                    Performance

Source: Martilla and James (1977)
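
To make the quadrant logic concrete, the following minimal Python sketch (ours, not part of
the original study) assigns an attribute to a quadrant using cross hairs placed at the mid-point
of the paper's 1-7 scales; the two example mean pairs are the Week 10 means reported later in
Table 1 for 'lecturer performance' and 'tutor performance'.

```python
# Illustrative sketch only: quadrant assignment with cross hairs at the
# scale mid-point (4 on a 1-7 scale). Function and variable names are ours.

MIDPOINT = 4.0  # cross-hair position on both axes


def ipa_quadrant(importance, performance, midpoint=MIDPOINT):
    """Assign an attribute to one of Martilla and James' (1977) quadrants."""
    if importance >= midpoint:
        return "Q1 Concentrate here" if performance < midpoint else "Q2 Keep it up"
    return "Q3 Low priority" if performance < midpoint else "Q4 Possible overkill"


# Week 10 means from Table 1: attribute 22 'lecturer performance' (5.6, 5.0)
# and attribute 21 'tutor performance' (5.0, 3.5).
print(ipa_quadrant(5.6, 5.0))  # -> Q2 Keep it up
print(ipa_quadrant(5.0, 3.5))  # -> Q1 Concentrate here
```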

In the education field the technique has been reported in an evaluation of business schools
(Ford, Joseph and Joseph 1999), tertiary students’ perceptions of service quality (Wright and
O’Neill 2002), and perceptions of a regional university campus (Pike 2004). Of interest to this
study is the potential efficacy of IPA as a tool for tracking service improvements and
customer evaluations over time. One example of this was reported by Guadagnolo (1985),
who used the method to evaluate a 10 kilometre running race over three consecutive years.
Recommendations from the first year’s study were implemented, and then tracked for
improved performance in the following year. The purpose of this research was to trial IPA in
a longitudinal evaluation of student perceptions of a core postgraduate business unit. The
authors are not aware of this approach being reported in the education marketing literature. It
was felt IPA would enable i) the identification of gaps between expectations and performance,
and ii) the monitoring, over time, of the effectiveness of any changes made to the unit in
response to the feedback. The project was initiated when a new unit coordinator and two new
tutors commenced their involvement. Other than a third tutor, who had been involved with
the unit previously, this was a new teaching team, and thus an opportune time to benchmark
perceptions.

Method

Students of a postgraduate marketing unit were invited to participate in the research in
Semester 1, 2005. Advance notice describing the purpose of the study and the longitudinal
design was provided to all students 10 days prior to the first lecture, and a copy of the
research information sheet was posted to the unit web site. Of the 107 students initially
enrolled in the unit, 90 attended the first lecture and 85 students participated in the first
questionnaire. Of these, 60 were female and 25 male, 66 were full time and 17 part time, 61
were international and 23 were domestic students. The questionnaire was based on the
university’s standard ‘Evaluation of unit/teaching’ instrument. Whereas this instrument would
usually be administered at the end of semester to measure performance, it was adapted here to
ask students to rate the importance of the 20 attributes. A seven-point scale was used,
anchored at ‘not important’ (1) and ‘very important’ (7). A zero non-response option was also
provided in case any students were unsure about any particular attribute. Two open ended
questions were included to elicit any other important attributes that were not included in the
scale items.

A second and final questionnaire was administered to the same group during the Week 10
lecture. This instrument again asked students to rate the importance of the attributes and then
the performance of the unit and teaching. Regarding the latter, two additional attributes were
included due to the unexpected departure of one of the tutors during Week 9, which resulted
in a number of disgruntled students. A seven-point scale was used, anchored at ‘not satisfied’
(1) and ‘very satisfied’ (7). By this stage, 98 students remained enrolled in the unit, of which
73 participated in the second questionnaire. Of these, 52 were female and 21 male, 59 were
full time and 14 part time, while 51 were international students and 22 were domestic.
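
As an illustrative note on scoring, the sketch below shows one way the zero non-response
option could be handled when computing attribute means. The paper does not state how
non-responses were treated, so the exclusion rule here is an assumption, not the study's
documented procedure.

```python
# Sketch under an assumption the paper does not state: that the zero
# 'unsure' non-response option is excluded from the mean rather than
# being counted as a rating of zero.
def attribute_mean(ratings):
    """Mean of 1-7 ratings; zero non-responses are treated as missing."""
    valid = [r for r in ratings if 1 <= r <= 7]
    return sum(valid) / len(valid) if valid else float("nan")


print(attribute_mean([7, 6, 0, 5]))  # zero excluded -> 6.0
```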

Results

The mean attribute importance and unit performance ratings are listed in Table 1, where
two issues are apparent. First, all of the attribute importance means were lower in Week 10
than in Week 1. The reason(s) for this were not apparent, but may be related to the next point.
Second, all of the attribute performance means were lower than the attribute importance
means. It is suggested this was a result of two critical incidents, the most serious of which
was the withdrawal of one of the tutors during Week 9. For Week 1 attribute importance,
independent-samples t-tests did not reveal any significant differences in the attribute
importance ratings by gender, full time/part time, or international/domestic. However, for
Week 10 attribute importance, independent-samples t-tests indicated significant differences at
p<.05 between full time/part time for eight items, and between international/domestic for three
items. The full list of scale items is shown at the following link:
http://www.talss.qut.edu.au/service/EVALUATION/index.cfm?fa=getFile&rNum=657235.
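
For readers unfamiliar with the procedure, a sketch of the reported subgroup comparison
follows. The ratings below are hypothetical placeholders, as the study's raw data are not
reproduced here; only the form of the test matches what is reported above.

```python
# Hypothetical data only: an independent-samples t-test comparing one
# attribute's Week 10 importance ratings by enrolment status, mirroring
# the form of the comparison reported in the paper.
from scipy import stats

full_time = [5, 6, 5, 7, 4, 6, 5]  # placeholder ratings, full-time students
part_time = [4, 3, 5, 4, 3, 4]     # placeholder ratings, part-time students

t_stat, p_value = stats.ttest_ind(full_time, part_time)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < .05
```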

The IPA matrix for the Week 10 results is shown in Figure 2, where the scale mid-point was
used to place the cross hairs. This graphically highlights in Quadrant 1 those attributes
deemed important to the class, but where the unit was perceived to perform relatively poorly.
The aim should be to initiate action that will improve the perceived performance on the
attributes over time. Pleasingly, even though the performance means were all lower than the
importance means, the majority of items are plotted in Quadrant 2. This indicates that in
general the unit performed higher than the scale mid-point. The highest rated attribute in
terms of importance (mean 5.6, Std. 1.3) and performance (mean 5.0, Std. 1.7) was ‘lecturer
performance’. Not surprisingly, given the tutor withdrawal, the worst performing attribute was
‘tutor performance’ (mean 3.5, Std. 2.1). This was also reflected in numerous qualitative
comments.
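
A matrix of this kind can be reproduced with standard plotting tools. The sketch below is an
illustration, not the paper's own figure code: only the two labelled points use Week 10 means
from Table 1, the cross hairs sit at the scale mid-point, and the axis limits follow Figure 2.

```python
# Sketch of a Week 10 style IPA matrix using matplotlib.
import matplotlib.pyplot as plt

points = {  # attribute: (performance, importance), Week 10 means from Table 1
    "lecturer performance": (5.0, 5.6),
    "tutor performance": (3.5, 5.0),
}

fig, ax = plt.subplots()
for label, (perf, imp) in points.items():
    ax.scatter(perf, imp)
    ax.annotate(label, (perf, imp))
ax.axvline(4, linestyle="--")  # performance cross-hair at the scale mid-point
ax.axhline(4, linestyle="--")  # importance cross-hair
ax.set_xlabel("Performance")
ax.set_ylabel("Importance")
ax.set_xlim(3, 7)
ax.set_ylim(3, 7)
plt.show()
```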

Table 1 – Attribute means

Attribute    Importance   Std.    Importance   Std.    Performance   Std.
             Week 1               Week 10              Week 10
1            5.8          1.5     5.3          1.5     4.5           1.4
2            5.7          1.3     5.2          1.5     4.6           1.3
3            6.1          1.1     5.1          1.6     4.2           1.5
4            5.7          1.3     5.2          1.6     4.4           1.4
5            5.7          1.3     4.9          1.7     4.1           1.7
6            6.1          1.1     5.1          1.7     3.6           1.7
7            5.9          0.9     5.1          1.7     3.7           1.6
8            6.1          1.4     5.4          1.2     4.8           1.3
9            5.6          1.3     5.1          1.6     4.0           1.4
10           5.6          1.1     4.9          1.5     4.1           1.6
11           6.1          1.3     5.1          1.7     4.0           1.6
12           6.0          0.8     4.9          1.8     4.2           1.6
13           6.4          1.0     5.2          1.6     4.5           1.7
14           6.0          1.0     5.0          1.7     4.2           1.7
15           6.3          1.0     5.2          1.7     3.8           1.8
16           6.2          1.0     5.1          1.6     4.1           1.5
17           6.1          1.0     5.3          1.4     4.7           1.5
18           5.8          1.2     5.0          1.6     3.7           1.5
19           5.8          0.9     5.1          1.5     4.2           1.4
20           6.1          0.9     5.3          1.5     4.2           1.5
21           n/a          n/a     5.0          1.9     3.5           2.1
22           n/a          n/a     5.6          1.3     5.0           1.7
Grand mean   5.9                  5.1                  4.2

Figure 2 – Week 10 IPA matrix

[Figure omitted: scatter plot of attribute importance (Y-axis, 3 to 7) against performance
(X-axis, 3 to 7), with cross hairs at the scale mid-point.]

Discussion

Importance-performance analysis, using a standard ‘Student evaluation of unit/teaching’
instrument, was used to analyse student expectations of a postgraduate unit relative to
perceptions of the performance of the unit. The results highlighted clear gaps between the
means for attribute importance and performance. It is argued that, without the attribute
importance ratings, the unit
performance ratings could be misleading. The paper highlights the occurrence of two critical
incidents during the semester, one of which had a direct impact on a quarter of the class. The
IPA matrix graphically highlights the areas where corrective action is required. In this regard
the results provide benchmarks against which the effectiveness of any changes to the unit can be
monitored over time.

At the time of writing the intent is to adapt teaching and learning approaches during Semester
2, 2005 and Semesters 1 and 2, 2006. IPA will again be employed at the beginning and end of
each of these semesters. While the research remains a work in progress, the first results
indicate IPA is a useful technique for benchmarking perceptions of a student cohort.
However, it remains to be seen whether IPA will prove valuable for tracking the unit over
time with different cohorts. For example, the majority of students in the Semester 1 cohort
were female, international, full time students. A different mix of part time, domestic and male
students might, for example, have generated different expectations and performance
perceptions. The paper represents a work in progress; its contribution to the literature lies in
addressing the paucity of longitudinal analyses of student satisfaction.

References

Baldwin, G., & James, R. (2000). The market in Australian higher education and the concept
of student as informed consumer. Journal of Higher Education Policy and
Management. 22(2): 139-148.
Bartlett, P., & Einar, A.E. (1992). Analysis of the design function of an adult softball complex
in a new public recreational park. Journal of Park and Recreation Administration.
10(1): 71-81.
Ennew, C. T., Reed, G. V., & Binks, M. R. (1993). Importance-performance analysis and the
measurement of service quality. European Journal of Marketing. 27 (2): 59-70.
Evans, M. R., & Chon, K. (1989). Formulating and evaluating tourism policy using
importance-performance analysis. Hospitality Education & Research. 13 (2): 203-213.
Fishbein, M. (Ed.) (1967). Readings in Attitude Theory and Measurement. New York: John
Wiley & Sons.
Ford, J. B., Joseph, M., & Joseph, B. (1999). Importance-performance analysis as a strategic
marketing tool for service marketers: the case of service quality perceptions of
business students in New Zealand and the USA. The Journal of Services Marketing.
13(2): 171-186.
Guadagnolo, F. (1985). The importance-performance analysis: an evaluation and marketing
tool. Journal of Park and Recreation Administration. 3(2): 13-22.
James, R. (2001). Understanding prospective student decision making in higher education and
the implications for marketing strategies. Marketing Education Conference. Sydney.
October.
Kennedy, D. W. (1986). Importance-performance analysis in marketing and evaluating
therapeutic recreation services. Therapeutic Recreation Journal. 20(3): 30-36.
Kotler, P., Adam, S., Brown, L., & Armstrong, G. (2003). Principles of Marketing. (2nd Ed).
Pearson Education Australia.
Lawley, M., & Blight, D. (1997). International students: reasons for choice of an overseas
study destination. 11th Annual Australian International Education Conference.
Melbourne. September.
Leong, S. M., & Tan, C. T. (1992). Assessing national competitive superiority: An
importance-performance matrix approach. Marketing Intelligence & Planning. 10(1):
42-48.
Martilla, J. A., & James, J. C. (1977). Importance-performance analysis. Journal of
Marketing. 41(1): 77-79.
McInnis, C., & James, R. (1999). Transition from secondary to tertiary: a performance study.
Higher Education Series. Department of Training and Youth Affairs Higher Education
Division. Report 36. August.
Myers, J. H., & Alpert, M. I. (1968). Determinant buying attitudes: meaning and
measurement. Journal of Marketing. 32(October): 13-20.
Nitse, P. S., & Bush, R. P. (1993). An examination of retail dental practices versus private
dental practices using an importance-performance analysis. Health Marketing
Quarterly. 11(1/2): 207-221.
Novatorov, E. V. (1997). An importance-performance approach to evaluating internal
marketing in a recreation centre. Managing Leisure. 2: 1-16.
Pike, S. (2002). The use of importance-performance analysis to identify determinant short
break destination attributes in New Zealand. Pacific Tourism Review. 6(1): 23-33.
Pike, S. (2004). The use of repertory grid analysis and importance-performance analysis to
identify determinant attributes of universities. Journal of Marketing for Higher
Education. 14 (2): 1-18.

Reynolds, W. H. (1965). The role of the consumer in image building. California
Management Review. Spring: 69-76.
Richardson, S. L. (1987). An importance-performance approach to evaluating communication
effectiveness. Journal of Park and Recreation Administration. 5(4): 71-83.
Sethna, B. N. (1982). Extensions and testing of importance-performance analysis. Business
Economics. September: 28-31.
Slack, N. (1994). The importance-performance matrix as a determinant of improvement
priority. International Journal of Operations & Production Management. 14(5): 59-
75.
Williams, A. E., & Neal, L. L. (1993). Motivational assessment in organisations: an
application of importance-performance analysis. Journal of Park and Recreation
Administration. 11(2): 60-71.
Wright, C., & O’Neill, M. (2002). Service quality evaluation in the higher education sector:
an empirical investigation of students’ perceptions. Higher Education Research &
Development. 21(1).
