D. Royce Sadler
To cite this article: D. Royce Sadler (2016) Three in-course assessment reforms to improve higher
education learning outcomes, Assessment & Evaluation in Higher Education, 41:7, 1081-1099,
DOI: 10.1080/02602938.2015.1064858
Introduction
This article is mainly about cognitive capabilities that are important in most
academic fields: proficiency in thinking, reasoning, synthesising, conceptualising,
evaluating and communicating. These ‘higher order’ capabilities form a subset of
what are also variously called ‘intended learning outcomes’ (Biggs and Tang 2011)
or some combination of ‘generic’, ‘graduate’ or ‘higher education’ with ‘competencies’, ‘skills’ or ‘attributes’. With the rapid expansion of higher education worldwide, it is natural to ask about the extent to which all students can demonstrate
adequate levels of such ‘higher order’ capabilities by the time they graduate. But
what is meant by ‘adequate’? This is the fundamental question. A number of
agencies and commentators referenced in the next section have alleged that, while
many graduates do achieve desired standards, many others do not.
This article is based on the premise that the most logical, direct and appropriate
site for developing capabilities is within the courses that constitute degree programmes. Research by Jones (2009, 2013) has demonstrated that interpretations of
competences differ from field to field, sometimes widely. This is the nature of
disciplines. However, there are reasonable grounds for believing that capabilities
developed thoroughly in one context – a particular course or sequence of courses –
normally have a transferable element to them. This allows them to be reconfigured and repurposed for use in other contexts at other times. As Strathern (1997, 320), an
anthropologist, explained it:
In making transferable skills an objective, one cannot reproduce what makes a skill
work, i.e. its embeddedness … what is needed is the very ability to embed oneself in
diverse contexts, but that can only be learnt one context at a time … if you embed
yourself in site A you are more likely, not less, to be able to embed yourself in site B.
But if in Site A you are always casting around for how you might do research in B or
C or D, you never learn that. There is a lesson here for disciplines … Somehow we
have to produce embedded knowledge: i.e. insights that are there for excavating later,
when the context is right, but not until then … we have not to block or hinder … the
organism’s capacity to use time for the absorption of information … time-released
knowledge or delayed-reaction comprehension. [Capitalization in the original]
Reforming three particular assessment practices would increase the likelihood that
more students, especially those currently at the minimum ‘pass’ level, would achieve
the levels expected of all graduates. The three form a mutually interdependent pack-
age. They are the design and specification of assessment tasks; the requirements for
a pass; and the design of course assessment programmes. Wherever these are not
currently being practised as aspects of normal institutional quality assurance, they
amount to reforms that require enabling changes to be made elsewhere in the
learning environment.
Context
Two widely read books by Bok (2006) and Arum and Roksa (2010) describe
unevenness in graduate outcomes as perceived in the USA. Bok (2006, 7–8) wrote
‘Survey after survey of students and recent graduates shows that they are remarkably
pleased with their college years’. Overall, they also ‘achieve significant gains in
critical thinking, general knowledge, moral reasoning, quantitative skills, and other
competencies’. At the same time, and fully compatible with that, ‘colleges and universities, for all the benefits they bring, accomplish far less for their students than they should. Many seniors graduate without being able to write well enough to satisfy their employers’ (8) by expressing themselves ‘with clarity, precision, … style
and grace’ (82). ‘Many cannot reason clearly or perform competently in analysing
complex, nontechnical problems, even though faculties rank critical thinking as the
primary goal of a college education’ (8). ‘The ability to think critically – to ask
pertinent questions, recognise and define problems, identify the arguments on all
sides of an issue, search for and use relevant data, and arrive in the end at carefully
reasoned judgments – is the indispensable means of making effective use of
information’ (109).
Here, Bok has raised quite specific concerns. They may be valid to a greater or
lesser extent for particular institutions, academic degree programmes or component
courses – there is usually no independent way of telling. However, his portrayal of
the situation in the USA resonates with similar concerns raised in other countries.
These are reflected in the number of national and international discussions, policies,
projects, regulations, instruments and forms of cooperation aimed at assuring graduate outcomes (Australian Learning and Teaching Council 2010; Bergan and Damian
2010; Blömeke et al. 2013; Coates 2014; Dill and Beerkens 2013; Douglass,
Thomson, and Zhao 2012; Lewis 2010; Sadler 2013b; Shavelson 2010, 2013;
Tremblay 2013; Williams 2010). Part of the overall unease is because, globally,
higher education has expanded rapidly without matching increases in public funding
directed specifically towards teaching.
Despite what may seem an overwhelming challenge, progress could be made by
ensuring that the course grades entered on students’ academic transcripts can be
trusted to represent adequate levels of the expected graduate competencies. Across a
full degree programme, the transcript reports student performance on a large range
of demanding tasks, in a wide variety of courses, studied over a considerable period
of time, and covering substantial disciplinary and professional territory. Specialised
tests of graduate competencies are not set up to do this (Shavelson 2013). If third
parties are to draw reasonably robust conclusions about a graduate’s acquired overall
capability or competence, the grades on transcripts must be trustworthy.
and analyse how students responded to the tasks. This process is passive and
distinctly different from that used to score responses. What is sought is at least a
partial diagnosis of any deficiencies in the task design or specifications. Where at
least some responses technically do fall within a literal interpretation but are much
simpler than was intended, it may not have been imagined that such interpretations
would be possible. At the opposite extreme is a response that really ‘captures’ what
was intuitively hoped for but not fully conceptualised when the specifications were
written. Capitalise on that for the future.
The final check is to consult students themselves (Hounsell 1987), rather than try
to infer what they ‘must have been thinking’ as they went about the task. This is the
only independent source that can confirm or disconfirm their understanding or reactions (Alderson 1986). What went on in their heads while they were working out
how to respond to the task, and then during the planning and production phases?
Were they surprised by how the quality of their work was appraised?
Table 1. Five grade descriptors for the lowest level of achievement in a course for which credit can be counted towards the degree. Conditions may apply.

Item 1. Designator*: 50–59 Pass. Descriptor: Satisfactory. Demonstrates appreciation of subject matter and issues. Addresses most of the assessment criteria adequately but may lack in depth and breadth. Often work of this grade demonstrates only basic comprehension or competency. Work of this grade may be poorly structured and presented. (Monash University)

Item 2. Designator*: D (D+, D, D-). Descriptor: Earned by work that is unsatisfactory but indicates some minimal command of the course materials and some minimal participation in class activities that is worthy of course credit toward the degree. (Harvard University, College of Arts and Sciences)

Item 3. Designator*: 40–49 3rd Pass. Descriptor: Acceptable attainment of most intended learning outcomes, displaying a qualified familiarity with a minimally sufficient range of relevant materials, and a grasp of the analytical issues and concepts which is generally reasonable, albeit insecure. (University of Stirling)

Item 4. Designator*: E. Descriptor: Sufficient: A performance that meets the minimum criteria, but no more. The candidate demonstrates a very limited degree of judgement and independent thinking. (University of Oslo)

Item 5. Designator*: D. Descriptor: Deficient in mastery of course material; originality, creativity, or both apparently absent from performance; deficient performance in analysis, synthesis, and critical expression, oral or written; ability to work independently deficient. (Dartmouth College)

*Grade code as entered on academic transcript.
Accumulation of marks
In theory, a course grade is meant to represent a student’s level of capability attained
by the end of a course: ‘grading … is the assignment of a letter or number to indicate the level of mastery the student has attained at the end of a course of study’
(Schrag 2001, 63). It is literally the out-come that goes on record. This is entirely
consistent with the customary (and legitimate) way of expressing intended learning
outcomes: ‘By the end of this course, students should …’ Whether the actual path
of learning is smooth or bumpy, and regardless of the effort the student has (or has
not) put in, only the final achievement status should matter in determining the course
grade (Sadler 2009, 2010b). However, in many higher education institutions,
accumulating marks or points for work assessed during a period of learning (continuous assessment) is the prevailing practice, mandated or at least endorsed by the
institution. Readily available software provides bookkeeping tools for it. These make
it easy to progressively ‘bank’ marks, then weight and process them at the point of
withdrawal for conversion into the course grade.
The common arguments for accumulation are essentially instrumentalist
(Isaksson 2008). The purpose is not so much to help learners attain adequate levels
of complex knowledge and skills by the end of a course, as to keep them working
examination conditions could well include making time limits generous (within reasonable limits) and allowing review time and re-examination (with an accompanying fee if necessary). If it is objected that all students in a course should perform
under identical conditions, the reply is straightforward. Students with special needs
typically have accommodations made for them, but within any course, some students may be just below the threshold at which special accommodations would
apply. In addition, the quality of a student’s response as appraised against standards
rather than against other students’ work is a clearer indicator of their capability than
the speed of task completion.
Two observations apply regardless of the mode or medium of response: efficiency and sampling. An efficient plan results in high levels of valid achievement
information relative to the costs of getting it – including time in setting and marking
student work, and administrative overheads. Appropriate sampling involves coverage across both the course subject matter (a preoccupation with many examiners)
and the range of relevant intended higher order outcomes. These two together are
somewhat analogous to evaluating the economic potential of a mineral deposit by
drilling a series of cores into a prospective ore body to test its lateral extent and its
richness (Whateley and Scott 2006). Emphasising depth in thinking and precision in
expression may well result in higher quality but more condensed outputs.
Goal setting
Extensive research over several decades in a wide variety of field and laboratory settings has investigated the impact that so-called hard goals have on task performance.
Progressive reviews of this work are available in Locke et al. (1981), Locke et al.
(1990), and the first and last chapters of Locke and Latham (2013). Hard goals are
specific and clear rather than general or vague, difficult and challenging rather than
simple or easy, and closer to the upper limit of a person’s capacity to perform than their initial level of performance. Goals that require students to stretch generally lead to substantial gains in performance. They act to focus attention, mobilise effort and increase persistence at a task. In contrast, do-your-best goals often
fare little better than having no goals at all. As one would expect, the degree of
improvement is moderated by other factors, including the complexity of the task, the
learner’s ability, the strategies employed and various contextual constraints (Locke
et al. 1981). However, the general conclusion is that ‘an individual [cannot] thrive
without goals to provide a sense of purpose … If the purpose is neither clear nor
challenging, very little gets accomplished’ (Locke and Latham 2009, 22).
Arranging the learning environment so that all students have an adequate grasp
of the higher order outcomes stated in course outlines is a clear imperative for universities and colleges. Setting standards that some students initially see as tough –
and possibly even unfair or coercive, depending on their initial expectations – is part
of that. Serious students adapt pragmatically to hard constraints provided the settings
are known, fair and relevant. The consequences of a hard-earned pass are highly
positive in terms of both credit towards the degree and personal sense of accomplishment. Carried out ethically, hard goals work constructively for the student in
both the short and the long term (Sadler 2014b).
Genuine achievement for which a student works hard and produces a high
quality result brings about levels of fulfilment and confidence that come only from
possessing deep and thorough knowledge of some body of worthwhile material, or
attaining proficiency in high-level professional skills. The terms pleasure, satisfaction, motivation and accomplishment have many nuanced and overlapping meanings, but there is little doubt about the legitimacy of ‘pleasure as a by-product of
successful striving’ (Duncker 1941, 391). This is categorically different from, in the
modern context, having satisfying experiences in the classroom (although the two
may co-occur) or experiencing success in winning against others. For some students
more than others, developing this type of personal capital demands substantial
striving and struggling – and induces considerable stress. However, little by way of
significant and enduring learning comes cheaply, and experiencing success at something that was originally thought to be out of reach brings a distinctive personal
reward, a palpable sense of accomplishment. Not to insist on a demonstration of an
adequate level of higher order capabilities is to deprive students of both an important
stimulus to achieve and the satisfaction of reaching a significant goal.
Inhibitors of change
Some inhibitors are conceptual in nature. One of these consists of the multiple
meanings attached to the term ‘standard’. Add to that a limited awareness of the
need for externally validated anchorage points for standards generally – and passing
grades in particular (Sadler 2011, 2013a). Others have to do with assessment practices that detract from the integrity of course marks and grades. Some have been
criticised in the literature for decades (Elton 2004; Oppenheim, Jahoda, and James
1967; Sadler 2009), but they are now so deeply embedded in assessment cultures
they are resistant to change. In addition, new practices keep coming along and are
added incrementally. Accepted uncritically, these often become popular through
being labelled as ‘innovative’ or ‘best practice’. They are defended strongly by academic teachers, students and administrators, and may even be mandated in institutional assessment policies. Accumulating marks is but one example. That they
reduce the integrity of course grades goes largely unheralded.
Whether hard goals are actually set and enforced depends on a variety of other
factors as well, some of which are related to the grading dispositions of individual
academics. At successively higher levels in the chain of authority, the freedom of
academics to make significant changes depends on: an enabling and supportive context provided by academic department heads and programme directors; the fixedness
of the prevailing assessment traditions, grading policies and academic priorities; and
requirements externally set by governments or accrediting agencies.
the students’ level of understanding. The underlying drive is to ensure the least
possible discomfort or stress for students (Fiamengo 2013).
At the institutional policy level lie: curriculum freedom that allows students the
flexibility to pick and choose courses from a wide range to make up a substantial
part of a degree programme; and credit transfer policies and recognition of prior
learning that impose few restrictions. These decrease the effectiveness of coherent
sequences of courses specifically designed to promote development of higher order
outcomes, which require considerable time and multiple encounters to mature properly. Institutional factors are also influenced by financial considerations, particularly
continuity in total income from student fees and government funding. In principle,
assuring the quality of all graduates, maintaining student entry levels and ensuring a
satisfactory enrolments-based income stream are not incompatible. However, in practice, academic achievement standards can be compromised to avoid rebalancing
internal resource allocations to prioritise teaching.
At the scale of individual academics, the following statements all have their
origins in conversations with academics in universities in different countries. They
reveal a range of problematic dispositions and attitudes related to passing courses.
‘Many students are low-entry or from disadvantaged backgrounds. They have limited ability to achieve well on higher order outcome measures, but they nevertheless benefit greatly from the experience of higher education’. ‘Students who put in substantial effort no doubt learn something of importance in the process and therefore
deserve to pass’. ‘While it is disappointing when students submit low-level work,
there is no guarantee those students would gain employment directly in the fields of
their degrees anyway’. ‘Students who fail courses suffer adverse personal and social
consequences, such as loss of face, additional fees and delay to graduate earnings,
so avoiding failing grades is important’. ‘When students have to pay substantial fees,
they expect to pass and in any case would appeal against failure’. ‘All grade results
are reviewed by the Assessment Review Committee and, with very few exceptions,
approved without amendment’. ‘Consistent with the principle of academic freedom,
professors must be free to decide, according to their own professional judgments,
the grades to be assigned’. ‘Creative ways are found for students to earn enough
marks for them to at least pass, with scaffolding and active coaching there to help’.
‘Students these days need a qualification even if it means they are not truly qualified
at the end. In any case, graduates learn most of what they need to know after
graduation’. ‘Cutting out cumulative assessment and instead, grading according to
serious standards would produce high failure rates and consequential loss of income.
The institution would not tolerate that’.
Finally, ‘I know I am generous in grading, but I need to keep my teaching
evaluation scores up so I can look forward to tenure’. Whether there is a causal link
between grades and teaching evaluations is debated, but
Regardless of the true relationship between grades and teaching evaluations … that
many instructors perceive a positive correlation between assigned grades and student
evaluations of teaching has important consequences when there also exists a perception
that student course evaluations play a prominent role in promotion, salary, and tenure
decisions. (Johnson 2003, 49)
Most of these comments amount to admissions that things as they exist may not be
as they ought to be, but, by implication, not much can be done about it. Addressing
inflated pass rates at their source by raising actual achievement levels is the only
valid means of ensuring grade integrity. No amount of tinkering with other variables,
and no configuration of proxy measurements, will make the difference required.
Conclusion
In recent decades, the focus for evaluating teaching quality has been heavily
weighted towards inputs (student entry levels, participation rates, facilities, resources
and support services) and a select group of outcomes (degree completions, employability, starting salaries and student satisfaction, experience or engagement). Conspicuously absent is anything to do with actual academic achievement in courses. This has allowed a number of sub-optimal assessment practices to become normalised into assessment cultures. One of the consequences is that too many students
have been able to graduate without the capabilities expected of graduates, yet this is
not necessarily apparent from their transcripts.
The focus in this article is on student outcomes rather than inputs, with particular
emphasis on the higher order capabilities of students. Many students fail to master
these, yet they gain credit in course after course and eventually graduate. Directly
addressing the deficient aspects of assessment culture and practice could radically
alter this state of affairs, but it would require a transformation in thinking and practice on the part of many academics. The ultimate aim is to ensure that all students
accept a significant proportion of the responsibility for achieving adequate levels of
higher order outcomes. Bluntly put, no student would be awarded a pass in a course
without being able to demonstrate these levels. For some students, this would
necessitate a major change in their priorities. For academics, both their assessment
practices and the nature of the student–teacher relationship would change.
Undoubtedly, determination to pursue this end would have significant washback
effects on teaching, learning, and course and programme objectives, but that is
intended. The likelihood of success depends on finding a rational, ethical and affordable way to do it. This may require re-engineering some parts of the transition path,
creating other parts from scratch, and reworking priorities, policies and practices to
a considerable extent. In particular, it would entail rebalancing institutional resource
allocations in order to cater for student cohorts that have become much more
diversified. Except for aims geared narrowly to economic and employment considerations, this goal is broadly consistent with older and many recent statements of
the real purposes of higher education.
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
D. Royce Sadler is a senior assessment scholar in the School of Education, University of
Queensland. His research interests are in assessment and grading policies and practice in
higher education, especially the role of assessment in improving learning and capability,
academic achievement standards and the competence of graduates.
References
Alderson, J. C. 1986. “Innovations in Language Testing?” In Innovations in Language Testing: Proceedings of the IUS/NFER Conference, edited by M. Portal, 93–105. Windsor: NFER-Nelson.
Arum, R., and J. Roksa. 2010. Academically Adrift: Limited Learning on College Campuses.
Chicago, IL: University of Chicago Press.
Australian Learning and Teaching Council. 2010. Learning and Teaching Academic Standards Project – Final Report. Strawberry Hills, NSW: Australian Learning and Teaching Council.
Bergan, S., and R. Damian, eds. 2010. Higher Education for Modern Societies: Competences and Values. Higher Education Series No. 15. Strasbourg: Council of Europe Publishing.
Biggs, J. B., and C. Tang. 2011. Teaching for Quality Learning at University: What the
Student Does. 4th ed. Maidenhead: Open University Press.
Blömeke, S., O. Zlatkin-Troitschanskaia, C. Kuhn, and J. Fege, eds. 2013. Modeling and
Measuring Competencies in Higher Education: Tasks and Challenges. Rotterdam: Sense
Publishers.
Bloom, B. S. 1974. “Time and Learning.” American Psychologist 29 (9): 682–688.
doi:10.1037/h0037632.
Bok, D. 2006. Our Underachieving Colleges: A Candid Look at How Much Students Learn
and Why They Should Be Learning More. Princeton, NJ: Princeton University Press.
Brookhart, S. M. 1991. “Grading Practices and Validity.” Educational Measurement: Issues
and Practice 10 (1): 35–36. doi:10.1111/j.1745-3992.1991.tb00182.x.
Budé, L., T. Imbos, M. W. van de Wiel, and M. P. Berger. 2011. “The Effect of Distributed Practice on Students’ Conceptual Understanding of Statistics.” Higher Education 62 (1): 69–79. doi:10.1007/s10734-010-9366-y.
Coates, H., ed. 2014. Higher Education Learning Outcomes Assessment: International Perspectives. Vol. 6, Series: Higher Education Research and Policy. Frankfurt am Main: Peter Lang.
Conway, M. A., G. Cohen, and N. Stanhope. 1992. “Very Long-Term Memory for Knowledge Acquired at School and University.” Applied Cognitive Psychology 6 (6): 467–482. doi:10.1002/acp.2350060603.
Dill, D. D., and M. Beerkens. 2013. “Designing the Framework Conditions for Assuring Academic Standards: Lessons Learned about Professional, Market, and Government Regulation of Academic Quality.” Higher Education 65 (3): 341–357. doi:10.1007/s10734-012-9548-x.
Douglass, J. A., G. Thomson, and C.-M. Zhao. 2012. “The Learning Outcomes Race: The
Value of Self-Reported Gains in Large Research Universities.” Higher Education 64 (3):
317–335. doi:10.1007/s10734-011-9496-x.
Duncker, K. 1941. “On Pleasure, Emotion, and Striving.” Philosophy and Phenomenological Research 1 (4): 391–430. doi:10.2307/2103143.
Ebbinghaus, H. 1885. Memory: A Contribution to Experimental Psychology. Translated by H. A. Ruger and C. E. Bussenius. 1913. New York: Teachers College, Columbia University.
Edwards, J. M., and K. Trimble. 1992. “Anxiety, Coping and Academic Performance.” Anxiety, Stress & Coping: An International Journal 5 (4): 337–350. doi:10.1080/10615809208248370.
Elton, L. 2004. “A Challenge to Established Assessment Practice.” Higher Education Quarterly 58 (1): 43–62. doi:10.1111/j.1468-2273.2004.00259.x.
Emig, J. 1977. “Writing as a Mode of Learning.” College Composition and Communication
28 (2): 122–128. doi:10.2307/356095.
Entwistle, N. 1995. “Frameworks for Understanding as Experienced in Essay Writing and in Preparing for Examination.” Educational Psychologist 30 (1): 47–54. doi:10.1207/s15326985ep3001_5.
Fiamengo, J. 2013. “The Fail-Proof Student.” Academic Questions 26 (3): 329–337.
doi:10.1007/s12129-013-9372-5.
Frith, C. D. 2014. “Action, Agency and Responsibility.” Neuropsychologia 55 (1): 137–142.
doi:10.1016/j.neuropsychologia.2013.09.007.
Gardner, S., and H. Nesi. 2013. “A Classification of Genre Families in University Student
Writing.” Applied Linguistics 34 (1): 25–52. doi:10.1093/applin/ams024.
Grunert O’Brien, J., B. J. Millis, and M. W. Cohen. 2008. The Course Syllabus: A
Learning-Centered Approach. San Francisco: Jossey-Bass.
Hernández, R. 2012. “Does Continuous Assessment in Higher Education Support Student
Learning?” Higher Education 64 (4): 489–502. doi:10.1007/s10734-012-9506-7.
Hounsell, D. 1987. “Essay Writing and the Quality of Feedback.” In Student Learning:
Research in Education and Cognitive Psychology, edited by J. T. E. Richardson, M. W.
Eysenck and D. Warren-Piper, 109–119. Milton Keynes: Open University Press.
Isaksson, S. 2008. “Assess As You Go: The Effect of Continuous Assessment on Student
Learning During a Short Course in Archaeology.” Assessment & Evaluation in Higher
Education 33 (1): 1–7. doi:10.1080/02602930601122498.
Johnson, V. E. 2003. Grade Inflation: A Crisis in College Education. New York: Springer-Verlag.
Jones, A. 2009. “Redisciplining Generic Attributes: The Disciplinary Context in Focus.”
Studies in Higher Education 34 (1): 85–100. doi:10.1080/03075070802602018.
Jones, A. 2013. “There is Nothing Generic about Graduate Attributes: Unpacking the Scope of Context.” Journal of Further and Higher Education 37 (5): 591–605. doi:10.1080/0309877X.2011.645466.
Lewis, R. 2010. “External Examiner System in the United Kingdom.” In Public Policy for
Academic Quality: Analyses of Innovative Policy Instruments, edited by D. D. Dill and
M. Beerkens, 21–36. Dordrecht: Springer.
Lindgren, R., and R. McDaniel. 2012. “Transforming Online Learning through Narrative and
Student Agency.” Educational Technology & Society 15 (4): 344–355.
Locke, E. A., and G. P. Latham. 2009. “Has Goal Setting Gone Wild, or Have Its Attackers
Abandoned Good Scholarship?” Academy of Management Perspectives 23 (1): 17–23.
doi:10.5465/AMP.2009.37008000.
Locke, E. A., and G. P. Latham, eds. 2013. New Developments in Goal Setting and Task
Performance. New York: Routledge.
Locke, E. A., G. P. Latham, K. J. Smith, and R. E. Wood. 1990. A Theory of Goal Setting
and Task Performance. Englewood Cliffs, NJ: Prentice Hall.
Locke, E. A., K. N. Shaw, L. M. Saari, and G. P. Latham. 1981. “Goal Setting and Task Performance: 1969–1980.” Psychological Bulletin 90 (1): 125–152. doi:10.1037/0033-2909.90.1.125.
Nicol, D. J., and D. Macfarlane-Dick. 2006. “Formative Assessment and Self‐Regulated
Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher
Education 31 (2): 199–218. doi:10.1080/03075070600572090.
Oppenheim, A. N., M. Jahoda, and R. L. James. 1967. “Assumptions Underlying the Use of University Examinations.” Higher Education Quarterly 21 (3): 341–351. doi:10.1111/j.1468-2273.1967.tb00245.x.
Osman, M. 2014. Future-Minded: The Psychology of Agency and Control. Basingstoke:
Palgrave-Macmillan.
Pacherie, E. 2008. “The Phenomenology of Action: A Conceptual Framework.” Cognition
107 (1): 179–217. doi:10.1016/j.cognition.2007.09.003.
Park, C. 2003. “In Other (People’s) Words: Plagiarism by University Students – Literature
and Lessons.” Assessment & Evaluation in Higher Education 28 (5): 461–488.
doi:10.1080/02602930301677.
Roese, N. 1997. “Counterfactual Thinking.” Psychological Bulletin 121 (1): 133–148.
doi:10.1037/0033-2909.121.1.133.
Rohrer, D., and H. Pashler. 2010. “Recent Research on Human Learning Challenges Conventional Instructional Strategies.” Educational Researcher 39 (5): 406–412. doi:10.3102/0013189X10374770.
Sadler, D. R. 1989. “Formative Assessment and the Design of Instructional Systems.”
Instructional Science 18 (2): 119–144. doi:10.1007/BF00117714.
Sadler, D. R. 2009. “Grade Integrity and the Representation of Academic Achievement.”
Studies in Higher Education 34 (7): 807–826. doi:10.1080/03075070802706553.
Sadler, D. R. 2010a. “Beyond Feedback: Developing Student Capability in Complex Appraisal.” Assessment & Evaluation in Higher Education 35 (5): 535–550. doi:10.1080/02602930903541015.
Sadler, D. R. 2010b. “Fidelity as a Precondition for Integrity in Grading Academic Achievement.” Assessment & Evaluation in Higher Education 35 (6): 727–743. doi:10.1080/02602930902977756.
Sadler, D. R. 2011. “Academic Freedom, Achievement Standards and Professional Identity.”
Quality in Higher Education 17 (11): 103–118. doi:10.1080/13538322.2011.554639.
Sadler, D. R. 2013a. “Assuring Academic Achievement Standards: From Moderation to Calibration.” Assessment in Education: Principles, Policy and Practice 20 (1): 5–19.
doi:10.1080/0969594X.2012.714742.
Sadler, D. R. 2013b. “Making Competent Judgments of Competence.” In Modeling and Measuring Competencies in Higher Education, edited by S. Blömeke, O. Zlatkin-Troitschanskaia, C. Kuhn and J. Fege, 13–27. Rotterdam: Sense Publishers.
Sadler, D. R. 2013c. “Opening up Feedback: Teaching Learners to See.” In Reconceptualising
Feedback in Higher Education: Developing Dialogue with Students, edited by S. Merry,
M. Price, D. Carless and M. Taras, 54–63. London: Routledge.
Sadler, D. R. 2014a. “Learning from Assessment Events: The Role of Goal Knowledge.” In
Advances and Innovations in University Assessment and Feedback, edited by C. Kreber,
C. Anderson, N. Entwistle and J. McArthur, 152–172. Edinburgh: Edinburgh University
Press.
Sadler, D. R. 2014b. “The Futility of Attempting to Codify Academic Achievement
Standards.” Higher Education 67 (3): 273–288. doi:10.1007/s10734-013-9649-1.
Schatzki, T. R. 2001. “Practice Mind-ed Orders.” In The Practice Turn in Contemporary
Theory, edited by T. R. Schatzki, K. K. Cetina and E. von Savigny, 50–63. London:
Routledge.
Schrag, F. 2001. “From Here to Equality: Grading Policies for Egalitarians.” Educational Theory 51 (1): 63–73. doi:10.1111/j.1741-5446.2001.00063.x.
Seligman, M. E. P., P. Railton, R. F. Baumeister, and C. Sripada. 2013. “Navigating Into the
Future or Driven by the Past.” Perspectives on Psychological Science 8 (2): 119–141.
doi:10.1177/1745691612474317.