
High Educ (2010) 60:441–458

DOI 10.1007/s10734-010-9308-8

Developing generic skills through university study: a study of arts, science and engineering in Australia

Paul B. T. Badcock • Philippa E. Pattison • Kerri-Lee Harris

Published online: 11 February 2010


© Springer Science+Business Media B.V. 2010

Abstract This study examined relationships between important aspects of a university education and the assessment and development of generic skills. A sample of 323 students
enrolled in single or double arts, engineering and/or science degrees from a research-
intensive university in Australia were administered the Graduate Skills Assessment to
measure four generic skills—critical thinking, interpersonal understandings, problem
solving and written communication. As expected, students’ grade point average was
generally found to be significantly related to scores for all four skill scales both within each
discipline area and across the total sample. Reporting of academic achievement through the
GPA therefore provides some measure of students’ generic skill levels. However, since
relationships were modest, GPA should be considered an imperfect indicator of levels of
generic skills attainment. In addition, we found only limited evidence that students’ skill levels increased with progression through their studies, with study length being consistently related only to problem solving. Finally, our analyses revealed significant, interdisciplinary variations in students’ skill scores. Results are discussed with respect to theoretical, practical and methodological implications.

Keywords Higher education · Undergraduate study · Generic skills · Graduate attributes · Generic skill development · Student assessment

Introduction

For more than 20 years, there has been increasing responsibility placed upon the higher
education (HE) sector to produce versatile and adaptable graduates able to meet the
changing demands of the work environment (Kemp and Seagraves 1995; Leckey and
McGuigan 1997). The research literature, political rhetoric and public commentary argue
that employers require graduates to be confident communicators, to work effectively in
teams, to be critical thinkers, problem solvers, and able to initiate and respond to change

P. B. T. Badcock (✉) · P. E. Pattison · K.-L. Harris
The University of Melbourne, Melbourne, VIC, Australia
e-mail: pbadcock@unimelb.edu.au


(Harvey et al. 1997; Holmes 2001; Crebert et al. 2004). There is also evidence that many
employer groups value such skills more highly than specialist subject knowledge and
numeracy (Hesketh 2000; NBEET 1992).
This has led, in turn, to increasing pressure for universities to equip their students with
‘generic’ skills—skills or attributes beyond disciplinary content knowledge—that can be
broadly applied across different contexts (Drummond et al. 1998; Barrie 2006). Among the
more widely cited generic skills and graduate attributes are critical thinking, problem
solving, interpersonal skills, a capacity for logical and independent thought, communica-
tion and information management skills, intellectual curiosity and rigor, creativity, ethical
awareness and practice, integrity and tolerance (Bath et al. 2004).

The complex issues associated with teaching generic skills

Despite the evident value of the capacity to transfer skills across domains and adapt to new
situations, generic skills and their acquisition raise a complex set of issues. First, there are
issues of definition. The term is used in various ways and has numerous synonyms
(Clanchy and Ballard 1995). In some cases, different levels of these skills are identified,
and there are widespread differences between employers, academics and government
bodies in terms of how they are defined and how the significance of particular skills is
interpreted (Bennett et al. 1999). In Australia, university statements concerning generic
skills stress the relevance of such skills to both the world of work and life in general,
emphasising graduates’ roles as global citizens and effective members of society (Bowden
et al. 2000).
Second, the separability of generic and discipline-based skills has been questioned both
theoretically and empirically (Barrie et al. 2009; Nusche 2008), and there is no well-developed tradition within HE of assessing them independently. Rather, generic skills
are typically assumed to develop in conjunction with the development of knowledge and
skills within a discipline area (Drummond et al. 1998), and while assessment tasks might
include generic skills among the listed objectives and explicit criteria, levels of attainment
of particular generic skills are rarely reported separately from discipline content
knowledge.
Third, there are challenges with the teaching of generic skills within university cur-
ricula. It is important to acknowledge that universities recognise the value of generic skills
and the role of tertiary education in their development (Barrie 2004). Many university
teachers consider some of the skills labeled ‘generic’—such as writing skills and critical
thinking—to be central to learning in their disciplines. The challenge remains, however, in
balancing the teaching of discipline-specific knowledge and skills with the development of
more transferable skills, and integrating both within university curricula (Barrie et al.
2009).
Finally, there are widespread claims that universities are failing to adequately prepare
graduates for the workplace. Within the United Kingdom, for example, the Quality in
Higher Education (QHE) Project has reviewed stakeholder perceptions extensively since
1992 and revealed a considerable gap between employer expectations of graduate skill
levels and HE skill provision (Leckey and McGuigan 1997). This situation is far from
peculiar to Britain. Concerns that universities are failing to produce graduates with the
necessary skills to meet the demands of a rapidly changing job market have also been
voiced in Europe (Leckey and McGuigan 1997), America (O’Brien 1997) and Australia
(BHERT 1992; Candy and Crebert 1991; ICAA 1994; NBEET 1992; ACNielsen Research
Services 1998).


Governments have responded to this perceived ‘generic skills gap’ through a range of
policy initiatives (Leckey and McGuigan 1997; Drummond et al. 1998). In Australia, the
need for HE institutions to attend to generic skills has been strongly emphasised by the
Higher Education Council (HEC 1992), and reinforced through the development of the
Australian Universities Quality Agency (Bath et al. 2004), the external quality auditing
processes it has implemented (AUQA 2006), and government reports promoting institu-
tional reporting of students’ generic skill outcomes (DEST 2002). Of particular signifi-
cance was the introduction in 2006 of a national, performance-based incentive scheme that
included generic skills as one of seven performance indicators (Harris and James 2006;
Harris 2007).
Arguably, evaluating the contribution of higher education to the development of generic
skills is dependent upon effective assessment measures. In Australia, one particular dataset
has dominated the evaluation of generic skills development at both national and institu-
tional levels. The Course Experience Questionnaire (CEQ) is one component of a national,
systematic and annual survey of recent graduates (Harris and James 2006). The CEQ asks
graduates to comment on various aspects of their experience of university study. Since its
inception in 1992, the CEQ has played a significant role in institutional planning around
teaching and learning. CEQ data is a common feature of university evaluation cycles, and
was featured in the recent national review of higher education as an important performance
indicator for the sector (Bradley et al. 2008).
The CEQ includes a ‘generic skills scale’—a series of questions through which
respondents self-assess the extent to which their course of university study has contributed
to the development of particular generic skills (Wilson et al. 1997). Beyond this subjective
measure of achievement, however, there have been no large-scale systematic attempts to
directly assess generic skill development in Australian universities through use of more
objective measures. With this in mind, a recently developed test, the Graduate Skills
Assessment (GSA), may not only influence the future evaluation of university teaching in
Australia (Bradley et al. 2008), but may also contribute directly to assessment of student
learning within universities. The GSA is intended to provide a standardised, objective
measure of four generic skills—critical thinking, problem solving, interpersonal skills, and
written communication (Hambur et al. 2002)—and is designed to test skill levels inde-
pendent of disciplinary content and curricula. The test has parallels in other standardised
testing recently implemented in higher education, such as the Collegiate Learning
Assessment in the United States (Benjamin and Chun 2003).
In summary, in a competitive HE environment, increased attention is being paid to the
teaching and measurement of generic skills in universities across Australia (OECD 2008).
A recent initiative of the OECD is set to further focus attention on this issue. The inter-
national Assessing Higher Education Learning Outcomes (AHELO) project is exploring
various possibilities for standardised assessment of student learning outcomes, including
generic skills.

The assessment of generic skills within the curriculum

The extent to which generic skills can and should be further incorporated into discipline-
based curricula is contested. Among university teaching staff there is some agreement that
students do develop generic skills during the course of their studies (Drummond et al.
1998); and skills such as critical thinking, written and oral communication, problem
solving and teamwork are often cited as important learning outcomes from undergraduate
study (Bath et al. 2004). Despite this, disciplinary ‘content’ often takes precedence in


university teaching and assessment, particularly in undergraduate courses. In many ways this is to be expected, as fields of study are distinguished by the specific knowledge and
skills that students must develop and learn to apply (Becher and Trowler 2001). In addi-
tion, many disciplines report an increasingly ‘crowded curriculum’ due to the explosion of
disciplinary knowledge (Barnett and Coate 2005), limiting attention to generic skills.
However, to the extent that generic skill development does form part of a university
course, it needs to be incorporated into assessment. Ensuring that university assessment—
and therefore students’ grades—is aligned to the intended learning outcomes is critical
(Biggs and Tang 2007). Arguably, effective assessment both drives and reports on student
learning (Carless et al. 2006; James et al. 2002), and grades are a signal to employers and
others about a student’s level of academic achievement and potential readiness for the
workplace. Therefore, if the development of generic skills is truly prioritised in university
teaching, it should be addressed by academic assessment.

A study in an Australian university

This paper reports a study undertaken in an Australian university in 2007, involving undergraduate students in arts, science and engineering. The study takes advantage of the
GSA as a means of measuring generic skills, independent of course content assessment,
and then uses it to examine the relationship between students’ academic grades and levels
of generic skills. It also examines the cumulative influence of university study on particular
generic skills, using the number of study units completed by students from a range of year
levels as the measure of university study. Finally, the study provides new information on
the varying generic skills profiles associated with undergraduate courses in the fields of
arts, science and engineering.
If generic skill levels are effectively assessed within university courses, levels of
generic skill competence should be reflected in students’ grades. This study investigated
the extent to which various generic skills—as measured by the GSA—were reflected by
overall academic assessment within three discipline areas. Specifically, the study inves-
tigated the relationships between students’ GPA and their scores for the critical thinking,
interpersonal understandings, problem solving and written communication scales of the
GSA. For example, if skills in written communication are a prominent feature of assess-
ment in a particular course, it is anticipated that there will be a high correspondence
between students’ course (academic) grades and their scores on the Written Communi-
cation scale of the GSA.
In a study previously undertaken by ACER (Hambur et al. 2002), the relationship
between GSA scores and GPA was separately investigated for eleven universities. In all
but one case the ACER study showed significant correlations between students’ GSA and
GPA scores. On the assumption that students who perform well academically should also
perform well on the GSA, the authors used the correlation findings to argue for the validity
of the GSA. Further, they argued that the fact that this correlation existed despite the
disciplinary diversity of each sample was testament to the ‘generic’ nature of the test.
In many ways the present study is complementary to the ACER study. Whereas ACER
were testing the validity of the GSA against other measures of student achievement, the
present study accepts the demonstrated validity of the GSA (Hambur et al. 2002) and
employs the test as an objective measure of generic skills to examine the extent to which
such skills are reflected in students’ GPA.
Notably, the Hambur et al. (2002) study also identified interdisciplinary differences in
students’ generic skill scores, suggesting that different skills may be enhanced through


participation in different fields of study. To address this issue, we also explored the extent
to which certain generic skills are emphasised by particular disciplines through investi-
gating whether students’ generic skill profiles differed across the three discipline areas
involved in our study.
This study makes a significant contribution to advancing our understanding of the
assessment and development of generic skills in higher education. First, unlike the Hambur
et al. (2002) study, it investigates the relationship between student grades and generic skill
levels both within and across a number of distinct discipline areas, and it does so after
controlling statistically for the potentially confounding influence of length of university
study. Similarly, it examines the influence of ‘time at university’ on skill acquisition over
and above relationships between academic achievement and skill scores. It also builds
upon the work of Hambur et al. (2002) by examining whether study within a certain
discipline area is associated with differential skill enhancement. Here, we compared the
GSA skill profiles of students from our larger sample with the profiles of students enrolled
in single or double arts, engineering and/or science degrees, respectively.
Finally, Hambur et al. (2002) have identified significant relationships between GSA
performance and age, gender, and academic competence at entry level (assessed via ter-
tiary entrance scores; TES). Furthermore, they demonstrate covariation between entering
competence and GPA. Our own investigation reflects a more controlled design than the
Hambur et al. (2002) study by adjusting statistically for the effects of these variables on
generic skill levels. Specifically, our inclusion of these variables in our data analyses
enabled us to isolate relationships between skill scores and students’ GPA and length of
study from the potentially confounding influence of entering competence, age and gender;
and control for covariation between these confounds and our variables of interest.

Method

The institution in this study is a large, research-intensive university in Australia, offering a comprehensive range of undergraduate programs across the arts, sciences, and professional
disciplines such as medicine, architecture and engineering.

Participants

Undergraduate students were recruited from specific discipline areas to take part in this
study during Semesters One and Two, 2007. Potential participants expressed interest in the
study by responding to advertisements placed on student notice-boards throughout the
University, electronic advertisements sent by administrative departments to relevant stu-
dent bodies via e-mail, and an advertisement listed on the University Careers Online
service, an internet resource listing occupational vacancies, volunteer work and research
projects for students of the University. Of the 600 students initially expressing interest, 323
(54%) qualified for inclusion as arts, engineering and/or science students and took part in
the study. Their tertiary entrance scores (TES), University unit enrolment details and
academic grades were obtained from University records, while age, gender and discipline
area were provided by participating students.
Participating students were enrolled in a range of undergraduate degrees, including both
single (e.g. Bachelor of Arts) and double (e.g. Bachelor of Engineering/Science; Bachelor
of Science/Commerce) degrees. Students were classified into three discipline groups for
the purposes of analysis—Arts (31%), Engineering (39%), and Science (50%) (Table 1).


One in five participating students were enrolled in double degrees involving two of these
disciplines. For analyses restricted to each distinct discipline area, these students were
included in both groups.
Participants were also recruited from a range of year levels, including students enrolled
in their first, penultimate and final years of tertiary study. Participants ranged in age from
18 to 29 years, with a mean age of 20.5 years and no significant difference in age profile
between disciplines. The study involved 141 male and 182 female students, with pro-
nounced yet representative differences in gender distributions between the three discipline
groups (Table 1).

The Graduate Skills Assessment

The Graduate Skills Assessment (GSA) is a five-scale, objective measure of undergraduate
students’ generic skill levels (Hambur et al. 2002). The test includes a multiple-choice
component, comprising eighty-three items, and two writing tasks. The multiple-choice
component assesses three generic skills—critical thinking, interpersonal understandings
and problem solving. The written component measures argument writing and report
writing skills. The average scores for these two scales are used to produce a single written
communication skill score. Participants are given 2 h to complete the multiple-choice
component and an hour to complete the writing tasks. Scores for each of the scales are
calculated by ACER. As an incentive to participate, all students in the study were provided
with an authorised GSA report, free of charge.
Students completed the GSA under supervised conditions and in accordance with the
testing requirements specified by ACER. Testing occurred on two dates in 2007. Com-
pleted GSA tests were returned to ACER for scoring.

Table 1 Participating students by discipline groups, year level, age and gender

                                                                 Year level(b) (n)     Age (years)      Gender (female %)
Group         n     %     Degree combination(a) (n)              F     P     L         Mean    SD       This study   Uni(c)

Total         323   100                                                                20.5    1.98     56.3         57
Arts          100   31    A 54, A/E 11, A/S 18, A/O(d) 17        35    35    30        20.5    1.89     75.0         72
Engineering   126   39    E 53, E/S 34, A/E 11, E/O(d) 28        35    79    13        20.8    1.89     37.3         24
Science       160   50    S 94, A/S 18, E/S 34, S/O(d) 14        49    58    52        20.5    2.04     57.5         56

a Degree description provided by students: A arts, E engineering, S science, O other
b Year level: F first, P penultimate (2nd or 3rd, depending upon degree), L last
c 2005 census data
d Typically commerce


Analysis

The study involved a cross-sectional, correlational design that included four independent
variables: self-designated enrolment in one or more of the three discipline areas (i.e., arts,
engineering and science); participants’ levels of academic achievement at the University
(assessed via Grade Point Average, an average of all marks on a 0–100 scale for subjects
completed at University); total length of study at the University (assessed via the total
number of subject points completed by each student); and entering competence (assessed
via TES scores). Dependent variables were participants’ scores for the four generic skill
dimensions of the GSA (i.e., critical thinking, interpersonal understandings, problem
solving and written communication). The GSA scales were nominated as dependent
variables on purely analytical grounds. Specifically, this allowed us to control statistically
for potentially confounding relationships between the independent variables, and derive
their unique contributions to explaining variance in GSA scores. As noted, previous work
suggests that GSA performance can vary as a function of age and gender (Hambur et al.
2002). Accordingly, these student characteristics were included in all analyses as covari-
ates. Data were analysed using the SPSS statistical package (Version 15).
Unstandardised regression coefficients (B), standardised regression coefficients (β), 95% confidence intervals (95% CI), semi-partial correlations squared (sr²), and R² statistics are reported for various analyses. Note that unstandardised regression coefficients indicate the units of change in the dependent variable associated with a one-unit change in each predictor. Furthermore, semi-partial correlations squared indicate effect sizes—that is, the percentage of response variance in each GSA scale that is uniquely explained by each predictor (i.e., over and above the influence of all other predictors). Finally, R² statistics indicate the amount of response variance for the GSA scale explained by all of the predictors. Significant values indicate that, taken together, the predictors reliably enhance prediction of scores for the relevant GSA scale.
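
To make these quantities concrete, the following is a minimal sketch, in Python with statsmodels rather than the SPSS package actually used, of how each reported statistic can be computed. It is an illustration under assumed, hypothetical column names (with gender coded numerically, e.g. 0/1), not the authors’ procedure; sr² is obtained as the drop in R² when a predictor is removed from the full model.

```python
import pandas as pd
import statsmodels.api as sm

def regression_report(df, outcome, predictors):
    """Report B, beta, 95% CI, sr2 (unique) and R2 for one GSA scale."""
    X = sm.add_constant(df[predictors])
    full = sm.OLS(df[outcome], X).fit()
    rows = []
    for p in predictors:
        # Standardised coefficient: B rescaled by sd(predictor) / sd(outcome).
        beta = full.params[p] * df[p].std() / df[outcome].std()
        # Squared semi-partial correlation: unique variance explained by p,
        # computed as the drop in R2 when p is removed from the full model.
        reduced = sm.OLS(df[outcome],
                         sm.add_constant(df[[q for q in predictors if q != p]])).fit()
        lo, hi = full.conf_int().loc[p]  # 95% CI by default
        rows.append((p, full.params[p], beta, lo, hi,
                     full.rsquared - reduced.rsquared))
    report = pd.DataFrame(rows, columns=["predictor", "B", "beta",
                                         "ci_lower", "ci_upper", "sr2_unique"])
    report.attrs["R2"] = full.rsquared  # variance explained by all predictors jointly
    return report
```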

Results

Relationships of entering competence, academic achievement and length of study with generic skill levels

To examine and control for relationships between entering competence and participants’
skill levels, we entered students’ TES scores as the predictor in a series of regression
analyses, with the four GSA scales as respective dependent variables (Tables 2, 3). We
used GPA and ‘Total Subject Points’ to examine relationships between academic
achievement and length of study at the University over and above the influence of entering
competence. ‘Gender’ and ‘Age’ were entered as covariates to control statistically for the
influence of these student characteristics upon the dependent variables. Note that the reduced sample (n = 202) reflects the number of participants with available TES scores. In order to include those students who did not complete their secondary education in Victoria (i.e., those without a TES score), the same analyses were then performed without TES as a predictor (Tables 4, 5).
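
Continuing the hypothetical sketch above (the data frame and column names remain assumptions), these two sets of analyses correspond to fitting each GSA scale twice, once restricted to students with a TES score and once with TES omitted:

```python
scales = ["critical_thinking", "interpersonal_understandings",
          "problem_solving", "written_communication"]
with_tes = ["tes", "gpa", "total_subject_points", "gender", "age"]
without_tes = ["gpa", "total_subject_points", "gender", "age"]

for scale in scales:
    # n = 202: only students with a TES score (Tables 2 and 3)
    regression_report(data.dropna(subset=["tes"]), scale, with_tes)
    # n = 318: all students, TES excluded (Tables 4 and 5)
    regression_report(data, scale, without_tes)
```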
As expected, after controlling for the influence of entering competence, length of
university study, age and gender, GPA was significantly, positively and uniquely related to
all four of the generic skills scales (Tables 2, 3, 4 and 5). Furthermore, while total subject
points was significantly related to problem solving only (Table 3), this variable was


Table 2 Standard multiple regression of participants’ TES, grade point average, total subject points, gender and age on the critical thinking and interpersonal understandings scales of the GSA (n = 202)

                       Critical thinking                               Interpersonal understandings
Predictors             B        β      95% CI            sr²           B        β      95% CI            sr²
                                       Lower    Upper    (unique)                      Lower    Upper    (unique)
TES                    .461**   .207   .125     .797     .03           .246     .134   -.029    .521     .01
Grade point average    .398**   .257   .170     .626     .05           .290**   .227   .102     .477     .04
Total subject points   .002     .020   -.015    .018     .00           .002     .036   -.011    .016     .00
Gender                 .523     .023   -2.467   3.513    .00           5.057**  .265   2.606    7.509    .07
Age                    .317     .050   -.992    1.307    .00           -.060    -.012  -1.003   .883     .00
                       R² = .16**                                      R² = .17**

* p < .05; ** p < .01

Table 3 Standard multiple regression of participants’ TES, grade point average, total subject points, gender and age on the problem solving and written communication scales of the GSA (n = 202)

                       Problem solving                                 Written communication
Predictors             B        β      95% CI            sr²           B        β      95% CI            sr²
                                       Lower    Upper    (unique)                      Lower    Upper    (unique)
TES                    .637**   .239   .249     1.026    .04           .214*    .159   .010     .418     .02
Grade point average    .414**   .223   .150     .678     .04           .276**   .294   .137     .416     .07
Total subject points   .022*    .220   .003     .040     .02           .002     .049   -.007    .012     .00
Gender                 -2.268   -.082  -5.727   1.191    .01           .580     .041   -1.243   2.404    .00
Age                    -.500    -.070  -1.830   .830     .00           -.106    -.030  -.807    .595     .00
                       R² = .22                                        R² = .16**

* p < .05; ** p < .01

significantly and positively related to critical thinking, problem solving and written communication scores after excluding entering competence from analyses. Females obtained significantly higher scores for the interpersonal understandings dimension than males (Tables 2, 4). There was also some evidence of males outperforming females on the problem solving scale (Table 5), although this difference was not apparent once TES was taken into account (Table 3).


Table 4 Standard multiple regression of participants’ grade point average, total subject points, gender and age on the critical thinking and interpersonal understandings scales of the GSA (n = 318)

                       Critical thinking                               Interpersonal understandings
Predictors             B        β      95% CI            sr²           B        β      95% CI            sr²
                                       Lower    Upper    (unique)                      Lower    Upper    (unique)
Grade point average    .481**   .315   .320     .642     .10           .305**   .231   .163     .446     .05
Total subject points   .014*    .165   .003     .025     .02           .008     .109   -.002    .017     .01
Gender                 .254     .011   -2.133   2.640    .00           3.892**  .199   1.796    5.988    .04
Age                    -.314    -.055  -1.060   .432     .00           -.220    -.045  -.875    .436     .00
                       R² = .11**                                      R² = .09**

* p < .05; ** p < .01

Table 5 Standard multiple regression of participants’ grade point average, total subject points, gender and age on the problem solving and written communication scales of the GSA (n = 318)

                       Problem solving                                 Written communication
Predictors             B        β      95% CI            sr²           B        β      95% CI            sr²
                                       Lower    Upper    (unique)                      Lower    Upper    (unique)
Grade point average    .592**   .319   .403     .782     .10           .321**   .312   .212     .429     .10
Total subject points   .027**   .269   .014     .040     .05           .009*    .163   .002     .017     .02
Gender                 -3.315*  -.120  -6.125   -.505    .01           .374     .024   -1.241   1.988    .00
Age                    -.843    -.122  -1.722   .035     .00           -.475    -.124  -.979    .029     .01
                       R² = .18**                                      R² = .12**

* p < .05; ** p < .01

Relationships of academic achievement and length of study with generic skill levels within discipline groups

To investigate whether academic achievement and length of study were significantly related to students’ skill levels within each distinct discipline area, the above analyses
were repeated for each of the three discipline groups. Tables 6 and 7 show the results of
these analyses, examining relationships between each of the predictors and single or
double arts (n = 97), engineering (n = 125) and science (n = 156) students’ scores for
each of the four GSA dimensions. TES scores were omitted from these analyses because
of low power.


Table 6 Standard multiple regression of students’ grade point average, total subject points, gender and age on the critical thinking and interpersonal understandings GSA scales as a function of discipline area

                        Critical thinking                               Interpersonal understandings
Predictors              B        β      95% CI            sr²           B        β      95% CI            sr²
                                        Lower    Upper    (unique)                      Lower    Upper    (unique)
Arts
  Grade point average   .597**   .349   .266     .929     .12           .222     .155   -.054    .498     .02
  Total subject points  .014     .164   -.008    .036     .01           .007     .094   -.012    .025     .00
  Gender                .223     .009   -4.770   5.216    .00           2.535    .120   -1.619   6.689    .01
  Age                   -.111    -.019  -1.604   1.382    .00           1.226    .251   -.016    2.468    .04
                        R² = .14**                                      R² = .15**
Engineering
  Grade point average   .407**   .274   .149     .665     .07           .314**   .238   .091     .536     .05
  Total subject points  .014     .194   -.004    .032     .02           .014     .205   -.002    .029     .02
  Gender                1.408    .062   -2.455   5.270    .00           4.653**  .232   1.324    7.983    .05
  Age                   -.469    -.081  -1.887   .948     .00           -1.428*  -.276  -2.650   -.206    .04
                        R² = .11**                                      R² = .16**
Science
  Grade point average   .486**   .314   .250     .721     .09           .298**   .238   .102     .495     .05
  Total subject points  .022**   .231   .006     .039     .04           .013     .167   -.001    .027     .02
  Gender                -1.174   -.048  -4.895   2.547    .00           3.364*   .169   .257     6.470    .03
  Age                   -.160    -.027  -1.191   .870     .00           -.168    -.035  -1.029   .692     .00
                        R² = .15**                                      R² = .09**

* p < .05; ** p < .01
Table 7 Standard multiple regression of students’ grade point average, total subject points, gender and age on the problem solving and written communication GSA scales as a function of discipline area

                        Problem solving                                 Written communication
Predictors              B        β      95% CI            sr²           B        β      95% CI            sr²
                                        Lower    Upper    (unique)                      Lower    Upper    (unique)
Arts
  Grade point average   .527*    .227   .083     .971     .05           .287**   .284   .094     .480     .08
  Total subject points  .040**   .345   .010     .070     .07           .018**   .351   .005     .031     .07
  Gender                -6.116   -.179  -12.800  .568     .03           -1.170   -.078  -4.070   1.730    .01
  Age                   -.308    -.039  -2.307   1.690    .00           -.191    -.056  -1.058   .676     .00
                        R² = .16**                                      R² = .17**
Engineering
  Grade point average   .627**   .346   .330     .924     .11           .292**   .282   .120     .464     .08
  Total subject points  .026*    .291   .006     .047     .04           .017**   .334   .005     .029     .05
  Gender                3.687    .134   -.763    8.137    .02           1.653    .105   -.914    4.220    .01
  Age                   -1.727*  -.243  -3.360   -.094    .03           -1.519** -.431  -2.462   -.575    .07
                        R² = .20**                                      R² = .19**
Science
  Grade point average   .696**   .413   .461     .930     .16           .290**   .305   .143     .437     .09
  Total subject points  .037**   .353   .020     .053     .09           .009     .155   -.001    .019     .02
  Gender                -2.787   -.104  -6.489   .915     .01           -.966    -.064  -3.287   1.355    .00
  Age                   -.575    -.089  -1.601   .450     .01           -.122    -.033  -.765    .521     .00
                        R² = .29**                                      R² = .12**

* p < .05; ** p < .01

As seen in Tables 6 and 7, results indicated minimal disciplinary variation in the relationship of academic achievement with the four GSA skill scales.
Significant, positive relationships were identified between GPA and all four generic skill
levels within each discipline area, with the solitary exception of a non-significant finding
between GPA and interpersonal understandings within arts. Furthermore, examination of
semi-partial correlations squared (sr²) indicated that effect sizes for significant relation-
ships were quite varied, such that GPA explained between 4 and 16% of the variation in
GSA scores across the different analyses.
With respect to relationships between length of study and GSA scores, results dem-
onstrated some interdisciplinary differences. While total subject points was positively
related to problem solving scores across all three discipline groups, this variable was only
significantly associated with critical thinking among science students; and written com-
munication among arts and engineering students. Notably, effect sizes for significant
relationships ranged from 4 to 9%.

Disciplinary differences in generic skill scores

To compare the generic skill levels of students from each of the three distinct discipline areas
with the skill levels of the total sample, a series of regression analyses were conducted using
the four skill scales as respective dependent variables, and enrolment in arts, engineering and
science as respective predictors. This enabled us to examine the contribution of discipline
area to the prediction of GSA test scores. Scores for students enrolled in double degrees that
included disciplines other than arts, engineering or science were also included in analyses to
control for the influence of enrolment in other discipline areas. In addition, GPA, length of
study, age and gender were included in each analysis to control statistically for relationships
between these factors and the dependent variables. Table 8 presents the means and standard deviations (SD) for the four GSA scales, reported by discipline area.

Table 8 Means and standard deviations for the four GSA skill scales as a function of discipline area (n = 318) and means from a national study of Australian undergraduate student skill scores

                               Arts                        Engineering                 Science
                               This study    National      This study    National      This study    National
                               (n = 97)      study(a)      (n = 125)     study(a)      (n = 156)     study(a)
                                             (n = 1335)                  (n = 1017)                  (n = 1604)
GSA scale                      Mean    SD    Mean          Mean    SD    Mean          Mean    SD    Mean
Critical thinking              49.57*  11.04 43.0          44.94*  10.97 39.0          46.69   12.56 42.0
Interpersonal understandings   46.30*  9.28  44.0          41.06*  9.71  37.0          43.66   9.87  42.0
Problem solving                39.97   14.84 36.0          43.83   13.39 39.0          40.70   13.24 41.0
Written communication          43.71   6.61  41.5          40.51*  7.63  37.0          42.79*  7.56  39.5

* p < .05
a To facilitate comparison between the results for the sample used in this study and the wider Australian university student population, results from research conducted by ACER are included (Butler, January 7, personal communication). The ACER study involved 34 universities across Australia and included 1,335 Arts students, 1,017 Engineering/Architecture students, and 1,604 Mathematics/Science students. With the exception of Problem Solving scores for Science students, GSA means for our sample were between 2 and 6 points higher than those reported by ACER


After controlling for the influence of GPA, length of study, age, gender and enrolment in a degree other than arts, engineering and/or science, analyses revealed a significant, positive relationship between enrolment in a single or double arts degree and both critical thinking (t[5, 313] = 2.723, p < .01) and interpersonal understandings (t[5, 313] = 2.477, p < .05), indicating that relative to other students, these participants tended to obtain higher scores on these scales. Examination of semi-partial correlations squared indicated that over and above the influence of all other predictors, enrolment in arts accounted for approximately 2% of the variance in both of these scales. Analyses also revealed that when compared to other students, those enrolled in single or double engineering degrees tended to score significantly lower on critical thinking (t(313) = -2.792, p < .01), interpersonal understandings (t(313) = -3.163, p < .01) and written communication (t(313) = -4.465, p < .01), with enrolment in this discipline area explaining approximately 2, 3 and 5% of the variance in these scales, respectively. Finally, a significant, positive association was identified between written communication (t(313) = 1.997, p < .05) and enrolment in a single or double science degree, accounting for approximately 1% of response variance. All other relationships were not significant.
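
A minimal sketch of this design follows, again under hypothetical variable names, since the paper does not give the exact coding: discipline enrolment enters as 0/1 indicator variables alongside the covariates, and each indicator’s t statistic tests its unique association with the scale.

```python
import statsmodels.formula.api as smf

# Hypothetical 0/1 indicators: arts, engineering, science, other_discipline
# (the last flags double degrees involving a discipline outside these three).
model = smf.ols(
    "critical_thinking ~ arts + engineering + science + other_discipline"
    " + gpa + total_subject_points + gender + age",
    data=data,
).fit()
print(model.tvalues[["arts", "engineering", "science"]])   # t statistic per discipline
print(model.pvalues[["arts", "engineering", "science"]])   # corresponding p values
```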

Discussion

This study examined three aspects of generic skills and higher education—whether aca-
demic assessment provides a measure of generic skill level; to what extent the experience
of university contributes to skill development; and the extent of differences in generic skill
levels between disciplines. Despite the inherent limitations of such a study, which will be
discussed in the coming sections, the results provide insight into each of these areas.

Assessment results as an indicator of generic skill levels

Students’ GPA was related to their skill levels in all four of the generic skills tested using
the GSA. These results demonstrate that, to some extent, University assessment provides
an indication of student generic skill levels (as measured by the GSA). Moreover, with the
exception of a non-significant correlation between GPA and interpersonal understandings
within arts, statistically significant, positive relationships were identified between GPA and
all four GSA scales within each discipline area. Such findings are encouraging, as they
demonstrate that the grading systems particular to distinct discipline areas each incorporate an element of generic skills assessment.
This finding extends results from an earlier ACER study (Hambur et al. 2002) that also
reported a correlation between GSA and GPA scores. However, unlike Hambur et al.
(2002), the present study investigated relationships between grades and generic skill levels
both within and across different discipline areas, and controlled for the potentially con-
founding influence of entering competence, length of study, age and gender.
As discussed earlier, there is considerable pressure for HE institutions to provide
graduates and employers with a reliable index of ‘work-readiness’, and to this end,
demonstrating a correspondence between the sorts of skills sought by employers and
reports of graduates’ achievement (i.e., grades) is vital. In this context it is of some concern
that the relationships between GPA and GSA scores were only modest. GPA was mod-
erately related to three of the GSA scales, uniquely explaining 10% of the variation in
students’ critical thinking, problem solving and written communication scores. Further-
more, after controlling for relationships between entering competence and skill scores,


these values dropped to between 4 and 5%; and within each discipline area, effect sizes for
significant relationships were varied, with GPA explaining between 4 and 16% of the
variation in skill scores. The findings indicate that in general, between 90 and 95% of the
variation in students’ GSA scores was not explained by average student grades. This
suggests that GPA may be a poor mechanism for reporting on students’ generic skill levels.
It should be acknowledged, however, that GPA conflates students’ achievement across a
range of subject types with often highly specific intended learning outcomes and priorities.
The development of generic skills is, of course, only one component of the learning
outcomes for most University courses, and the importance of assessing discipline-specific knowledge and skills should not be overlooked. With this in mind, it is perhaps
unsurprising that GPA provides only a coarse indicator of students’ levels of generic skill
attainment.

The influence of the university experience on the development of generic skills

The study found only limited evidence that students in their later years of study demon-
strated higher skill levels when compared with students in their earlier years of study.
While it is encouraging that an increase in total subject points was related to an increase in
skill scores for the critical thinking, problem solving and written communication scales
across the total sample, after controlling for the influence of entering competence, length of
study was only related to problem solving. Furthermore, effect sizes were all low, with
length of study explaining no more than 5% of the variation in GSA scores. Finally, fewer
significant relationships were identified between total subject points and skill scores within
each discipline group. Specifically, while total subject points was positively related to
problem solving scores across all three discipline groups, this variable was only signifi-
cantly associated with critical thinking among science students; and written communica-
tion among arts and engineering students. These results contrast somewhat with those
reported by Hambur et al. (2002), where significant differences were demonstrated
between all of the mean GSA skill scores of first vs. third year arts and maths/science
students. They do not present data, however, for engineering students.
This finding raises an important issue for the institution. The University aims for the university experience—encompassing academic studies and other aspects of university life—to develop students’ skills in these areas. It would therefore be expected that students at the end of their undergraduate degree would exhibit higher skill levels than first year students. That the evidence for this is limited in some courses is therefore of concern. Specifically, our results question the assumption that generic skill development is an inevitable outcome of time spent studying at university, and as discussed, this speaks to an issue that has received considerable attention both within and beyond HE institutions (Leckey and McGuigan 1997).
There are, however, a number of methodological issues that must be borne in mind. The main limitation of this study relates to our use of a cross-sectional design. This approach is susceptible to cohort effects, and is therefore subject to the confounding influence of individual differences across the samples of first and later year students. Our findings must
therefore be interpreted with caution. Indeed, a more accurate assessment of generic skill
development over time requires the use of longitudinal work to track changes in students’
generic skill levels as they progress through their respective years of study. This is an
important avenue for future research.
Second, it may be that the GSA is not sufficiently sensitive to detect improvements.
Indeed, since the GSA is designed to assess skills in a way that is independent of


knowledge acquired at tertiary level, its assessments may be focussed on skills that can be
acquired earlier rather than later in students’ educational careers, and that may be relatively
well-developed for students entering a research-intensive University with highly selective
intakes.
Third, it is possible that important influences—external to study load and not factored
into this study—confounded the results. That is, by restricting our analyses to identify
unique relationships between total subject load and skill scores, we did not account for
potentially crucial interactions between length of study and other variables linked to
generic skill development. As Pascarella and Terenzini (1991) point out, skill development
within the HE context is not influenced by any one factor in isolation, but is facilitated by
complex interactions between a number of different factors—such as students’ academic
ability, their degree of involvement in the wider university environment, their exposure to
particular types of pedagogical and extracurricular experiences, and the quality of their
interactions with staff and students. This study did not take such interactions into account,
and its inability to capture the differential impact of these factors may explain the lack of
significant results.

Disciplinary differences in generic skills

This study identified some important differences between the skill scores of students from
different discipline areas, suggesting variation in the extent to which particular generic
skills are developed in arts, science, and engineering. On average, arts students tended to
score more highly on critical thinking and interpersonal understandings, engineering stu-
dents scored lower on all GSA scales except problem solving, whilst science students fared
better on the written communication scale. There are several possible explanations for
these differences.
As can be seen from lists of learning objectives for different subjects and courses,
different disciplines place different emphases on particular skills. It is quite possible, then,
that the differences in GSA scores reflect a genuine difference in learning priorities.
Through its influence upon course content and structure, an emphasis on the importance of
critical thinking skills in arts (Goyette and Mullen 2006), for example, is likely to influence
the development of a study program in order to more effectively foster such skills. A
second possibility is that the learning environments and pedagogical practices in different
disciplines are conducive to different forms of skill development. For example, many arts
subjects involve extensive interactive tutorial participation, and this may account for the
higher levels of interpersonal understandings in this student group. Similarly, the need for
both report and argument writing in science may explain the higher written communication
scores among science students. Another important consideration is that disciplinary dif-
ferences in GSA skill scores may be due to intrinsic differences in student characteristics.
That is, differential skill scores may reflect general differences in the ‘sorts’ of students
who are attracted to, and enrol in, each discipline area (Hambur et al. 2002). It should be
noted that the study’s limitations, described earlier, apply to this discussion also. Further,
voluntary participation resulted in samples that were not representative of the disciplinary
population (see also Table 1), and this may have influenced these results.
In any case, the results presented here support Bath et al.’s (2004) contention that it is
crucial to be sensitive to the particular disciplinary contexts responsible for skill provision,
and suggest that further research into the particular sorts of HE experiences responsible for
increased skill scores within certain discipline areas is warranted. Despite the likely
multiple factors at play, and the intrinsic limitations of the study, these findings further


suggest that attention needs to be given to the extent to which undergraduate study
develops generic skills. It may be that further comparison between discipline-specific
priorities, assessment methods, and teaching and learning activities would yield valuable
information in this regard.

Concluding remarks

While it is acknowledged that the GSA is an imperfect measure of generic skills (Hambur
et al. 2002; Chanock et al. 2004; Cleary et al. 2007), the findings of this study demonstrate
that the link between university study and the development of a broad range of generic
skills cannot simply be assumed. If undergraduate studies are to contribute to generic skill
development, teaching and learning activities need to be purposefully designed. If aca-
demic grades are to measure and report on generic skill levels, assessment criteria need to
reflect this. And if other opportunities for skills to develop are a feature of the University
experience, students need to be able to document and report on their achievements in these
areas.

Acknowledgments The authors would like to acknowledge the invaluable contributions of all members of the project team: Ms. Barbara Hammond, Prof. Richard James, Assoc. Prof. Steve James, Assoc. Prof. Michelle Livett, Dr. Graham Moore and Assoc. Prof. Harald Sondergaard. This project was funded from part of the Teaching and Learning Performance Fund allocation provided to the University of Melbourne by the Australian Government Department of Education, Employment and Workplace Relations.

References

ACNielsen Research Services. (1998). Research on employer satisfaction with graduate skills—interim
report (evaluations and investigations report 98/8). http://www.dest.gov.au/archive/highered/eippubs/
eip98-8/execsum.htm. Accessed 23 Nov 2006.
Australian Universities Quality Agency (AUQA). (2006). Audit manual version 3. http://www.auqa.edu.au/qualityaudit/auditmanuals/auditmanual_v3/audit_manual_3.pdf. Accessed 22 Jan 2007.
Barnett, R., & Coate, K. (2005). Engaging the curriculum in higher education. Buckingham: The Society for
Research into Higher Education.
Barrie, S. C. (2004). A research-based approach to generic graduate attributes policy. Higher Education
Research and Development, 23, 261–275.
Barrie, S. C. (2006). Understanding what we mean by the generic attributes of graduates. Higher Education:
The International Journal of Higher Education and Educational Planning, 51, 215–241.
Barrie, S., Hughes, C., & Smith, C. (2009). The national graduate attributes project: Key issues to consider
in the renewal of learning and teaching experiences to foster graduate attributes. Sydney: Australian
Learning and Teaching Council.
Bath, D., Smith, C., Stein, S., & Swann, R. (2004). Beyond mapping and embedding graduate attributes:
Bringing together quality assurance and action learning to create a validated and living curriculum.
Higher Education Research and Development, 23, 313–328.
Becher, T., & Trowler, P. R. (2001). Academic tribes and territories (2nd ed.). Buckingham: The Society for
Research into Higher Education.
Benjamin, R., & Chun, M. (2003). A new field of dreams: The collegiate learning assessment project. Peer
Review, 26–29.
Bennett, N., Dunne, E., & Carre, C. (1999). Patterns of core and generic skill provision in higher education.
Higher Education, 37, 71–93.
Biggs, J., & Tang, C. (2007). Teaching for quality learning at university (3rd ed.). New York: Society for
Research into Higher Education & Open University Press.
Bowden, J., Hart, G., King, B., Trigwell, K., & Watts, O. (2000). Generic capabilities of ATN university
graduates. http://www.clt.uts.edu.au/ATN.grad.cap.project.index.html. Accessed 23 Nov 2006.


Bradley, D., Noonan, P., Nugent, H., & Scales, B. (2008). Review of Australian Higher Education final
report. Canberra: Department of Education, Employment and Workplace Relations, Commonwealth
Government of Australia.
Business/Higher Education Round Table (BHERT). (1992). Educating for excellence (commissioned report
no. 2). Camberwell: BHERT.
Candy, P. C., & Crebert, R. G. (1991). Ivory tower to concrete jungle: The difficult transition from the
academy to the workplace as learning environments. Journal of Higher Education, 62, 570–592.
Carless, D., Joughin, G., Liu, N.-F., et al. (2006). How assessment supports learning: Learning-oriented
assessment in action. Hong Kong: Hong Kong University Press.
Chanock, K., Clerehan, C., Moore, T., & Prince, A. (2004). Shaping university teaching towards mea-
surement for accountability: Problems of the graduate skills assessment test. Australian Universities
Review, 47(1), 22–29.
Clanchy, J., & Ballard, B. (1995). Generic skills in the context of higher education. Higher Education
Research and Development, 14, 155–166.
Cleary, M., Flynn, R., Thomasson, S., Alexander, R., & McDonald, B. (2007). Graduate employability
skills. Canberra: Precision Consultancy. http://www.dest.gov.au/NR/rdonlyres/E58EFDBE-BA83-
430E-A541-E91BCB59DF1/18858/GraduateEmployabilitySkillsFINALREPORT.pdf. Accessed 23
Jan 2007.
Crebert, G., Bates, M., Bell, B., Patrick, C. J., & Cragnolini, V. (2004). Developing generic skills at
university, during work placement and in employment: Graduates’ perceptions. Higher Education
Research and Development, 23, 147–165.
Department of Education, Science and Training (DEST). (2002). Striving for quality: Learning, teaching
and scholarship. Canberra: DEST.
Drummond, I., Nixon, I., & Wiltshire, J. (1998). Personal transferable skills in higher education: The
problems of implementing good practice. Quality Assurance in Education, 6, 19–27.
Goyette, K. A., & Mullen, A. L. (2006). Who studies the arts and sciences? Social background and the
choice and consequences of undergraduate field of study. The Journal of Higher Education, 77,
497–538.
Hambur, S., Rowe, K., & Luc, L.T. (2002). Graduate skills assessment: Stage one validity study. http://
www.acer.edu.au/tests/university/gsa/documents/GSA_Validity_Study.pdf. Accessed 8 Oct 2006.
Harris, K.-L. (2007). A critical examination of a new Australian performance-based incentive fund for
teaching excellence. In K.-L. Harris & B. Longden (Eds.), Funding higher education: A question of
who pays? (pp. 60–76). Amsterdam: EAIR.
Harris, K.-L., & James, R. (2006). The course experience questionnaire, graduate destinations survey and
the learning and teaching performance fund in Australian higher education. Public Policy for
Academic Quality. Research Program, UNC. http://www.unc.edu/ppaq/CEQ_final.html. Accessed 8
Apr 2008.
Harvey, L., Geall, V., & Moon, S. (1997). Graduates’ work: Implications of organizational change for the
development of student attributes. Industry and Higher Education, 11, 287–296.
Hesketh, A. J. (2000). Recruiting an elite? Employers’ perceptions of graduate education and training.
Journal of Education and Work, 13, 245–271.
Holmes, L. (2001). Reconsidering graduate employability: The ‘‘graduate identity’’ approach. Quality in
Higher Education, 7, 111–119.
Institute of Chartered Accountants in Australia (ICAA). (1994). Chartered accountants in the 21st century.
Sydney: ICAA.
James, R., McInnis, C., & Devlin, M. (2002). Assessing learning in Australian universities: Ideas, strategies
and resources for quality in student assessment. Australian Universities Teaching Committee.
http://www.cshe.unimelb.edu.au/assessinglearning/index.html. Accessed 1 Sept 2008.
Kemp, I. J., & Seagraves, L. (1995). Transferable skills—can higher education deliver? Studies in Higher
Education, 20, 315–328.
Leckey, J. F., & McGuigan, M. A. (1997). Right tracks–wrong rails: The development of generic skills in
higher education. Research in Higher Education, 38, 365–378.
National Board of Employment, Education and Training (NBEET). (1992). Skills required of graduates:
One test of quality in Australian Higher Education (Higher Education Council Commissioned Report
No. 20). Canberra: Australian Government Publishing Service.
Nusche, D. (2008). Assessment of learning outcomes in higher education: A comparative review of selected
practices. OECD Education Working Paper No. 15. www.oecd.org/dataoecd/14/8/40257354.pdf.
Accessed 10 Sept 2008.
O’Brien, T. (1997). Life after college: employment, further education, lifestyle for recent grads. AAHE
Bulletin, 50, 7–10.


Organisation for Economic Co-operation and Development (OECD). (2008). OECD feasibility study for the
international assessment of higher education learning outcomes (AHELO). http://www.oecd.org/
edu/ahelo. Accessed 9 Sept 2008.
Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty
years of research. San Francisco, CA: Jossey-Bass Publishers.
Wilson, K., Lizzio, A., & Ramsden, P. (1997). The development, validation and application of the course
experience questionnaire. Studies in Higher Education, 22(1), 33–53.

