This guide has been designed to help you make better use of technology to manage the assessment and feedback process. It will help you improve academic practice and
the business processes that support it.
Throughout the guide we use the term electronic management of assessment (EMA). This describes the way technology can support the management of the
entire life cycle of assessment and feedback activity, including the electronic submission of assignments, marking, feedback and the return of marks and feedback to
students.
Supporting guides
For an introduction to EMA, read our supporting guide, electronic management of assessment (available via Wayback Machine)
[http://web.archive.org/web/20220119073630/https://www.jisc.ac.uk/guides/electronic-assessment-management] .
Our guide on EMA systems and processes (available via Wayback Machine) [http://web.archive.org/web/20221007114836/www.jisc.ac.uk/guides/electronic-
management-of-assessment-processes-and-systems] gives guidance for higher education institutions on improving business processes and choosing information
systems to support assessment and feedback.
Our enhancing assessment and feedback with technology for FE and skills guide [https://www.jisc.ac.uk/guides/enhancing-assessment-and-feedback-with-technology]
shows how technology can add value to assessment and feedback processes and provides practical advice and guidance including a number of effective practice
examples.
For FE and skills, our guide assessment for learning: a tool for benchmarking your practice in FE and skills (pdf)
[https://repository.jisc.ac.uk/6706/1/assessment_benchmarking_feandskills.pdf] is a hands-on tool to help colleges and providers self-assess their assessment
practices.
For universities and colleges, our 2019 guide, how to enhance student learning, progression and employability with e-portfolios [/guides/e-portfolios] , includes evidence
of the value of e-portfolios in enhancing assessment practices.
Podcasts
Listen to our podcast (mp3) [https://repository.jisc.ac.uk/6308/1/Benefits_of_EMA.mp3] to find out more about the benefits that institutions are achieving through the
electronic management of assessment or read the full text transcript (pdf) [https://repository.jisc.ac.uk/6303/1/Podcast_EMA_benefits_transcript_v2.pdf] .
Listen to our podcast (mp3) [https://repository.jisc.ac.uk/6309/1/Using_Jisc_EMA_resources.mp3] to find out more about how institutions are making use of our
assessment resources or read the full text transcript (pdf) [https://repository.jisc.ac.uk/6306/1/Podcast_Jisc_resources_transcript_v4.pdf] .
CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]
If your role involves managing administrative processes or IT systems you may find that this is the most helpful route into the resources for you.
Specifying [/guides/transforming-assessment-and-feedback/specifying]
Setting [/guides/transforming-assessment-and-feedback/setting]
Supporting [/guides/transforming-assessment-and-feedback/supporting]
Submitting [/guides/transforming-assessment-and-feedback/submitting]
Reflecting [/guides/transforming-assessment-and-feedback/reflecting]
Managing an assessment and feedback transformation project
Transforming your assessment and feedback practice with the help of technology is a major change initiative requiring strong leadership, project management skills and
the ability to engage stakeholders and manage that change. We have a number of resources that can help you plan and implement this type of project.
For FE and skills, our guide assessment for learning: a tool for benchmarking your practice in FE and skills
[https://repository.jisc.ac.uk/6706/1/assessment_benchmarking_feandskills.pdf] is a hands-on tool to help colleges and providers self-assess their assessment
practices.
Our changing assessment and feedback practice [/guides/changing-assessment-and-feedback-practice] guide gives a brief overview of the topic and links to many
other resources
Our guide on electronic management of assessment in higher education: processes and systems [/guides/electronic-management-of-assessment-processes-and-
systems] will help with process improvement and system change
Our project management [/guides/project-management] guide covers everything you need to know about taking a structured approach to planning and organising
your project with a comprehensive set of project management templates for you to use
Our change management [/guides/change-management] guide takes you through finding the right approach to change for your own organisational culture. We put
particular emphasis on stakeholder engagement [/guides/change-management/stakeholder-engagement] and on techniques such as appreciative inquiry
[/guides/change-management/appreciative-inquiry] which prove to be effective in changing assessment and feedback practice
Our guidance on developing a project baseline [/guides/transforming-assessment-and-feedback/project-baseline] will help you evaluate the impact of EMA projects
Approaches to change
Different institutions have approached organisational change to make better use of EMA in different ways. Here are a few examples linked to our case studies:
The University of Exeter [https://repository.jisc.ac.uk/5589/3/collaborate.pdf] developed a series of tools to ensure that its assessment practices help students develop
employability skills.
Queen's University Belfast [https://ema.jiscinvolve.org/wp/2015/03/04/ema-case-study-queens-university-belfast/] uses an appreciative inquiry approach to help its
academic schools identify what they do well in assessment and feedback and what needs to change. Central staff then provide support to implement the technology
supported solutions that best meet their needs.
Not all of our users implement EMA organisation-wide. In this short podcast [https://repository.jisc.ac.uk/6311/1/Getting_started_with_EMA.mp3] Bryony Olney from the
University of Sheffield talks about how she organised an EMA pilot in her department. This case study is also available as a full text transcript
[https://repository.jisc.ac.uk/6307/1/Podcast_getting_started_with_EMA_transcript_v2.pdf] .
Further resources
Our EMA blog features a number of other case studies [https://ema.jiscinvolve.org/wp/category/case-studies/] looking at transformation across various aspects of the
assessment and feedback lifecycle.
You can also read the following case studies which cover a range of areas around assessment:
Viewpoints as a catalyst for change [https://repository.jisc.ac.uk/5598/3/viewpoints.pdf] - Harper Adams and Cardiff Metropolitan universities.
Sheffield Hallam University is undertaking a university-wide change programme over three years to enhance the assessment experience for students, staff and the
university as a whole. It has used the assessment and feedback lifecycle as part of an ‘Assessment Essentials [https://academic.shu.ac.uk/assessmentessentials/] ’
resource to support staff through this process.
Improvement may mean a measurable gain in time, cost, quality etc, but qualitative evidence that the experience of certain stakeholders has improved can be equally valid. By
developing a baseline you ensure that you understand the current state of play before you try to change it.
The baseline is both a component of your evaluation plan and a precursor to it, as it can play an important role in helping define the scope of your project.
A rough outline of relevant project activities might include the following steps:
Identify how you will measure improvement and what sources of evidence you will collect
Step seven is by far the largest project element and will consume the most time and resources but baselining and evaluation are the activities that show the project was
worth doing. They assume increasing importance in the current climate - baselining can help you tackle the right issues in the correct way, involving the right stakeholders.
Evaluation ensures you deliver the expected benefits and capture the essential learning for your next project.
Getting project scope right – it gives you an opportunity to refine the scope of your project. You will realise you can’t solve a particular problem without tackling one or
more related issues.
Identifying project stakeholders – you can avoid finding a “skeleton in the closet” further down the line in the form of a stakeholder you should have consulted but
missed.
Managing and communicating project scope – baselining helps you manage stakeholders’ project expectations. You may need to clarify that certain issues are out of
scope to avoid disappointment.
Challenging myths – baselining activity can reveal myths that need challenging before you can move forward. Often they relate to unspoken assumptions about what
aspects of practice, processes and systems can and can’t be changed; "We’ve always done it that way" is neither a reason nor a justification.
Showing evidence of improvement – you can’t show how far you have travelled unless you know where you started.
You need to beware of ‘paralysis by analysis’ - don’t get so bogged down describing the way you do things now that you run out of time to improve them. Equally, however,
you need to be aware that involving other stakeholders is a big step towards getting ownership and buy-in for the eventual solutions.
| Aspect of current practice | Key questions | Types of evidence |
| --- | --- | --- |
| Strategy and policy | What strategies and policies have a bearing on assessment and feedback? | Core institutional documents |
| | What does the vocabulary indicate about how this is approached/perceived? | Committee structures |
| | Where does responsibility/authority sit within the organisation? | Membership of relevant committees |
| | Who is involved? (eg, IT) | Timetables |
| Stakeholders | What is the level of stakeholder satisfaction? | National Student Survey (NSS) |
| | | Rich pictures |
Who are the audiences for the report? You may find the report a useful way of engaging other stakeholders
How do you want each set of stakeholders to respond to the report eg,
- note and approve
- understand the theoretical basis of your project
- actively engage with your project
- use as a lever for change
- take other specific action
What type of presentation/media will best get your message across to each set of stakeholders eg,
- graphs and figures
- comparison with other benchmarks
- authentic user experiences such as audio/video interviews
- citation of academic research
What if my project isn’t the only thing that could impact over the life of the project?
This is probably the case in very many projects. In learning and teaching related areas it’s notoriously difficult to attribute any kind of simplistic cause and effect because
there are so many different factors at play.
Many projects may involve scaling up innovations that have been trialled previously so the project teams already have a good idea where they expect to see their
interventions having an impact.
It’s important that your baseline captures aspects that are directly related to your intervention. You should agree your evaluation plan with stakeholders and capture evidence
that is credible and relevant.
The more ambitious your project the more difficult it will be to find simple cause/effect relationships. If you are looking to effect institutional transformation then you may
expect to see changes to institutional strategy, policy and structures.
For examples of how others have approached baselining see a range of resources and examples from previous projects
[http://jiscdesignstudio.pbworks.com/w/page/46422956/Example baseline reports] .
The assessment and feedback lifecycle is an academic model showing a high level view of the academic processes involved in assessment and feedback. It is intended to
be pedagogically neutral ie, it is more concerned with asking questions and stimulating thought than having a basis in any particular pedagogic stance.
The model can apply to both formative and summative assessment and to any scale of learning from a three year degree to a short course that takes place over a single
day. It covers all assessment and feedback practice whether or not materials are in digital format or supported by information systems.
Lifecycle stages
The eight main stages in the lifecycle apply equally to further and higher education. At a more detailed level the processes also include:
Assessment scheduling
Submission of assignments
Tracking of submissions
Academic integrity
Examinations
Marks recording
Within these processes there are variations between further and higher education with student tracking against outcomes, predefined by awarding bodies, being of great
significance in FE. HE has its own set of quality assurance procedures around marking.
Another important feature of the lifecycle is that it is iterative from both an institutional and student perspective.
The reflecting [/guides/transforming-assessment-and-feedback/returning-feedback] element of the lifecycle is the final stage of one iteration. Learner reflection on the
outcomes of one assignment should influence how they approach the next, and staff reflection on the outcomes of a cohort should influence the next iteration of course
delivery.
Such a model needs to recognise that there is no such thing as a "one-size-fits-all" approach (usually even within a single institution). It is a framework to stimulate
discussion and can be used for many purposes such as:
A means of helping individual stakeholders take a holistic view of assessment and feedback activities
The model is central to promoting shared understanding and dialogue amongst all of the many practitioners who collaborated with us on the production of this guide.
See how Sheffield Hallam University adapted the model to create its assessment essentials [http://academic.shu.ac.uk/assessmentessentials/] .
Listen to our podcast [http://repository.jisc.ac.uk/6297/1/The_assessment_lifecycle_v2.mp3] to find out more about the development of the lifecycle and how others have
used it, or read the full text transcript [http://repository.jisc.ac.uk/6304/1/Podcast_lifecycle_transcript_v5.pdf] .
The lifecycle is one route into this guidance. You will find a full description of each lifecycle element along with common challenges faced by institutions in getting this
aspect to work well, how to support it with technology and resources to highlight good practice.
If your role involves managing administrative processes or IT systems you may find that this is the most helpful overview for you.
Case studies
We have a range of examples of how different institutions have applied electronic management of assessment (EMA) to parts of the lifecycle:
Specifying
A stage of the assessment and feedback lifecycle
Specifying is the process of determining the details of a course or programme of study and consequently the assessment strategy within it.
In further education much of this will be prescribed by an awarding body but in higher education there is a lot of freedom of choice. Details of each module will be recorded
in a specification, ideally online, although paper-based processes continue to exist in many institutions.
Specifying takes place following a new course proposal or when an existing course undergoes periodic review. Additionally there will be a process of making minor
modifications if there is a desire to change the assessment approach.
At the specifying stage you will normally determine the type of assignment, give an idea of the scale eg, a 4,000 word essay, and indicate its value as a percentage of the
overall marks for that module.
Some types of assignment can model things that students may have to do in the workplace and help them develop future employability skills. Other types of assignment
may equally demonstrate a grasp of course content but without evidencing a range of other transferable skills.
Manchester Metropolitan University's guidance on specifying and assessment strategy [http://www.celt.mmu.ac.uk/assessment/design/types.php] asks whether the
assessment type will enable students to demonstrate the learning outcomes and whether you will look forward to marking it. This will help you to consider assessment design, as a
poorly designed assignment might leave you marking numerous identical submissions. A well-designed brief can generate originality and individuality in student
approaches.
Assignments that demand some individuality of approach also make it much more difficult for students to plagiarise.
How might we use technology at the specifying stage and what are the benefits?
At this stage of the lifecycle you will require some form of course management system that contains the definitive version of course and module specifications. This can
provide the following benefits:
A central view so that all stakeholders see the same version of the information
Information that can be re-used for many purposes including course and module handbooks
A curriculum overview showing the relationship between assessment and learning outcomes at module and programme level
Tip: Clarity around the specification stage is extremely important. Consider a central database of course and module information.
Modularisation of the curriculum means that learning and assessment are broken down into chunks. Often those chunks don't build back up to a close match with a course
or programme's desired learning outcomes. This is exacerbated by the difficulty of gaining a clear programme-level overview.
Institutions that have done some basic curriculum analytics often discover that some learning outcomes are assessed multiple times whilst others are not assessed at all.
Many realise that they are over-assessing causing unnecessary work for staff and stress for students.
Tip: Analyse the number of assessments per module of similar size and the number of times you are assessing each learning outcome.
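The kind of curriculum analytics this tip describes can be sketched as a simple tally, assuming module specifications are available as structured data. The field names and records below are invented for illustration:

```python
from collections import Counter

# Hypothetical extract from a course management system: each assessment
# record lists its module and the learning outcomes it claims to assess.
assessments = [
    {"module": "BIO101", "outcomes": ["LO1", "LO2"]},
    {"module": "BIO101", "outcomes": ["LO1"]},
    {"module": "BIO102", "outcomes": ["LO3"]},
    {"module": "BIO102", "outcomes": ["LO1", "LO3"]},
]

# Number of assessments per module.
per_module = Counter(a["module"] for a in assessments)

# Number of times each learning outcome is assessed. Outcomes tallied many
# times suggest over-assessment; outcomes defined in the specification but
# absent from the tally (eg, an "LO4") are not assessed at all.
per_outcome = Counter(lo for a in assessments for lo in a["outcomes"])

print(per_module)   # eg, Counter({'BIO101': 2, 'BIO102': 2})
print(per_outcome)  # eg, Counter({'LO1': 3, 'LO3': 2, 'LO2': 1})
```

Even a rough count like this can surface the imbalances described above before a full curriculum review.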
Because specifying and major reviews of these specifications happen infrequently, changes during the periods between reviews are inevitable. For new courses there may
be a considerable time lag (of one to two years) between course validation and initial delivery. Often this results in course delivery by new staff who have little ownership of
the original design.
There are quality processes to manage changes in the interim but staff often find the processes so arduous that they find ways to implement change "under the radar" of
the formal minor modifications processes. This exacerbates the issues around having the correct version of information.
Tip: Develop clear and simple processes for ongoing course improvement. This ensures that academics can keep courses up-to-date on the basis of lessons learnt and changing student needs.
There is a lot of risk aversion in relation to assessment design. Staff fear being too creative in case their assessment is too challenging and brings down average marks for
a cohort, or they incur the disapproval of external examiners. Students don't like being guinea pigs in any aspect of their learning and particularly not in relation to
assessment. This is in spite of the fact that more flexible and creative assessment design can help to ensure fairness and inclusivity.
Differences in marks relating to factors like gender or ethnicity can be caused as much by the assessment design as the actual marking process. Technology has a role to
play here in ensuring that the range and size of file types that lend themselves to e-submission is not a limiting factor in the choice of assignment type.
Tip: Staff development should emphasise the benefits of using varied assessment types. Student induction should include assessment literacy development at an early stage.
The specifying stage of the lifecycle causes a particular set of problems in FE and skills. This is due to the complexity of awarding body criteria for assessing against
particular learning outcomes and the frequency with which the specifications can change.
Tip: Colleges using Moodle can incorporate Grade Tracker [http://ema.jiscinvolve.org/wp/2014/08/07/ema-tool-available-from-bedford-college/] to configure and track progress against BTEC, City & Guilds and A/AS
level qualifications in a single system.
The University of Hertfordshire's guidance outlines how to apply assessment for learning principles
[http://jiscdesignstudio.pbworks.com/w/file/fetch/68646815/ITEAM%20UH%20Assessment%20Principles%20and%20Guidance%20August%202013.pdf] to
assessment design
The University of Bradford's programme assessment strategies project generated this short guide on programme focused assessment
[http://www.pass.brad.ac.uk/short-guide.pdf]
Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]
Setting
A stage of the assessment and feedback lifecycle
Whilst the overall assessment strategy and approach is specified very early in the lifecycle, setting assignment details needs to happen each time a group of students takes
a particular module. This is often known as an instance of delivery.
At this point students receive details, usually in the form of an assignment brief, about precise topics, deadlines, learning outcomes assessed, marking criteria, and
feedback arrangements.
As a member of academic or administrative staff you need to be clear about how the work will be marked (see our section on marking and feedback workflows) and any
deadlines for the return of marks and feedback. This means being clear about marking criteria and grading schemes and also any penalties for non-compliance with the
stated requirements.
In the case of an overly long submission, for example, is there a fixed penalty or will you only mark up to the word limit? Similarly, what are the penalties for late submission
and how will you deal with extenuating circumstances?
If you are using online submission you may need to give guidance on file format. You may also have specific requirements regarding naming conventions to ensure
anonymity.
At this point you need to consider assessment scheduling to ensure workload for both students and staff is appropriately distributed.
How might we use technology at the setting stage of the lifecycle and what are the benefits?
You should make use of online templates for assignment briefs and marking rubrics, and use digital information about the curriculum to model assignment scheduling and
to present information about deadlines.
Depending on the systems you have available, you could offer students a personalised calendar showing deadlines for assignment submission and the return of marks and
feedback. This can provide the following benefits:
Write a statement of requirements in your own words and check it out with other students
Write down what you think is required and take this to the tutor for comment
Ask the tutor if he/she has any completed examples of the kind of work you are asked to do. Make it clear that you are not going to copy from these and that you are
mainly interested in the approach
Students from other year groups may be a good source of advice about what makes a good or poor piece of work
Check out published writing in the assignment area to see how to present arguments and develop writing style. Remember however that what you write for your assignment must
be in your own words and not copied from other sources.
The words in assessment criteria and grade descriptors are often quite opaque and dense for students (and staff), and are rich in tacit understandings and disciplinary
discourse. They require sophisticated interpretive skills.
This exercise engages students in making meaning from the criteria and discussing the notion of quality.
Exercise
In small groups they share their different interpretations, write up the best suggestions on flip chart paper, pin these on the walls, and wander around to see how other groups
have interpreted the criteria.
The lecturer facilitates refining the criteria and grade descriptors in class or online. This provides a student-friendly set of criteria for programmes/modules and/or tasks.
Intended outcomes
Tip: Establish a common template for assignment briefs to capture essential information and present it to students in a consistent way for every assignment.
Assessment bunching is a common issue. It is a problem for individual students when a number of assessment deadlines fall closely together, meaning that the student
has less time to spend on each assignment and produces poorer quality submissions.
There is also a lack of opportunity for the student to receive formative feedback on one assignment and use this in a developmental way to help with future assignments.
Even where a course or programme is managed in such a way that assessment bunching is not a particular problem for individuals, it can pose a problem at institutional
level. Manchester Metropolitan University undertook some modelling from its coursework submission database and identified significant peaks in assignment submissions
(the highest being around 17,000 individual submissions due in a single week at the end of March 2012).
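Modelling of this kind can be approximated by bucketing assignment deadlines by week, as in this minimal sketch. The deadline data here is invented; in practice it would come from a coursework submission database:

```python
from collections import Counter
from datetime import date

# Invented sample of assignment deadlines, one entry per expected submission.
deadlines = [
    date(2012, 3, 26), date(2012, 3, 27), date(2012, 3, 30),
    date(2012, 4, 16), date(2012, 5, 14),
]

# Bucket deadlines by ISO (year, week). Weeks with unusually many
# submissions due are the bunching peaks and candidates for rescheduling.
per_week = Counter(d.isocalendar()[:2] for d in deadlines)
peak_week, peak_count = per_week.most_common(1)[0]
print(peak_week, peak_count)  # the busiest week and its deadline count
```

Plotting the weekly counts over a whole academic year makes institutional peaks of the kind MMU identified immediately visible.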
Tip: Model the curriculum to verify sufficient formative development opportunities for learners and ensure that bunching does not occur. Consider defining a maximum number of summative assessments for modules of
a particular size.
For feedback to be useful it needs to be received at a point where students can act on it. It also needs to explain the extent to which they have met the learning outcomes
and what they need to do in order to achieve a better grade next time.
Feedback is however often left to the discretion of the individual academic. Inconsistencies in approach and ineffective feedback are often not picked up by unit or course
leaders until it is too late.
Tip: Define an overall feedback strategy at the specifying stage. Ensure assignment briefs outline what type of feedback students can expect to receive and when and how they should act on it.
What resources can help?
The University of Hertfordshire assessment timelines tool (view via UK Web Archive)
[https://www.webarchive.org.uk/wayback/en/archive/20150529100437/http://jiscdesignstudio.pbworks.com/w/page/30631817/ESCAPE%20-
%20Assessment%20timelines] aids planning by outlining the consequences of assessment timing
Manchester Metropolitan University has developed guidance on assessment grading, criteria and marking
[http://www.mmu.ac.uk/academic/casqe/regulations/docs/assessment_procedures.pdf]
The University of Reading A-Z of assessment methods [http://www.reading.ac.uk/web/FILES/eia/A-Z_of_Assessment_Methods_FINAL_table.pdf] can help you
choose the most appropriate type of assessment
Rogo [http://www.nottingham.ac.uk/rogo/index.aspx] is an open source tool, developed by the University of Nottingham with support from Jisc, that can deliver a
range of online assessments
The University of Wisconsin has published a useful set of rubrics [http://www.uwstout.edu/soe/profdev/rubrics.cfm#cooperative] for different types of assessment
Time to assess learning outcomes in e-learning (TALOE [http://taloetool.up.pt/] ) is a web-based tool with associated guidance to help match suitable assessment
types to learning outcomes.
Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]
Supporting
A stage of the assessment and feedback lifecycle
This component looks specifically at supporting students in the period between setting and submission of assignments ie, while they are in the process of completing an
assignment.
It is separate from the more general support needed for the business processes and technologies throughout the lifecycle, although it does have a relationship with the
broader digital literacies agenda for both staff and students.
What are we trying to achieve?
This stage is about helping each student do their best work for each assignment. However the real purpose is developing students' assessment literacy
[/guides/transforming-assessment-and-feedback/assessment-literacies] so that they understand what is involved in the process of making academic judgements.
Ultimately we are trying to turn students into independent and self-regulated learners who are able to monitor and evaluate their own learning. Assessment preparation
should ensure appropriate scaffolding to facilitate this.
How might we use technology at the supporting stage of the lifecycle and what are the benefits?
The information sources we suggest you create at earlier stages of the lifecycle are invaluable in supporting students with consistent information. Technology can provide
formative development opportunities and include online quizzes and testing. Electronic voting systems (also known as personal response systems or clickers) can test
understanding of a topic or gather feedback from students during teaching sessions.
Technology can provide formative feedback on draft assignments - this may be in the form of tutor feedback, peer feedback or self-development such as the use of
academic integrity checking tools.
Consistency of information sources (see the specifying [/guides/transforming-assessment-and-feedback/specifying] and setting [/guides/transforming-assessment-
and-feedback/setting] stages) helps staff to provide consistent information in direct contact with students
A digital overview of the curriculum helps students understand their individual learning pathway, particularly how one assignment relates to others
Formative opportunities such as online quizzes and testing can help consolidate learning
Opportunities for self and peer reflection can help with deeper learning.
Tip: Use the same terminology when giving support to students in class, via email or by other means.
Students may focus too much on the assignment in hand rather than understanding where this piece of learning fits into the overall learning outcomes for their course or
programme of study. This results in researching the subject in a narrow way where they think they will gain the most marks.
Tip: Provide students with an overview of their learning pathway to help them understand how what they learn from one assignment will feed into future assignments and their overall development.
Be clear about the overall learning outcomes for the course and transferable, employability skills or graduate attributes that they are expected to develop.
Students may view each assignment as a one-off to be forgotten once it is completed. This may partly be a problem of mindset and not understanding how the different
elements of the course hang together. It can also be due to a curriculum that doesn't offer sufficient opportunities for formative development.
Another problem is a curriculum that doesn't provide sufficient time for feedback on formative activities to influence student work on their final submission, or for feedback
on one summative assignment to influence the next.
Tip: Build regular opportunities for formative development into the curriculum. Support activities might include assignment tutorials, submission of drafts and formative quizzes with online feedback which students can
take in their own time.
Providing formative opportunities can sometimes add further complications to the set up steps for EMA information systems eg, the system needs to distinguish between
draft submissions that are for feedback only and the final submission for marking. Similarly, submissions should not be flagged as having unoriginal content simply
because a draft of the same piece of work has previously been submitted.
Tip: Make sure that staff managing EMA systems know when a particular assignment has a draft phase. Have clear guidance about how to set the system up to manage drafts.
Some students may have particular needs eg, dyslexia or another disability, or may not have English as their first language. You should think about making curriculum
and assessment practice as inclusive as possible from the design stage eg, using technologies such as lecture capture to aid student revision and offering alternative
formats for assignments wherever possible. You may however still need to provide special services for certain types of learner.
Tip: Make sure each assignment brief makes it clear to learners where they can get help for any special needs.
A personal tutoring system is a means of ensuring that a student's long term development needs are catered for. Often the personal tutor is removed from the marking
process so features such as anonymity can be preserved. Effective personal tutoring does however require a means of allowing the personal tutor to see a full view of
feedback. Currently this is problematic in many systems.
Oxford Brookes University's guide provides advice for students on how to do better on their assignments (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552644]
The University of Hertfordshire's at a glance guide shows how electronic voting systems (EVS) in different disciplines
[http://jiscdesignstudio.pbworks.com/w/file/63296787/ITEAM%20Case%20studies%20mapped%20Afl%20Feb%202013.docx] helped support its assessment for
learning principles
Our case study from Ayrshire College outlines how a lecturer designed a multi-media comic book [http://www.rsc-scotland.org/?p=3962] to help creative arts students
engage better with formative assessment tasks
Our case study from Perth College shows how smartphones and QR codes [http://www.rsc-scotland.org/?p=232] engaged hairdressing and beauty therapy students
with formative assessment tasks supporting enquiry based learning, self directed learning, group work and peer evaluation.
Case study: marking exercise - University of Winchester
An easy and effective way of orienting students to allocate effort in an appropriately focused way, in relation to assessment demands, is a classroom exercise in which
students mark three or four good, bad and indifferent assignments from students from the previous year (with their permission, and made anonymous).
Students should read and allocate a mark to each example without discussion, then discuss their marks and reasons for allocating these with two or three other students
who have marked the same assignments.
The tutor then reveals the marks the assignments actually received, and why, in relation to the criteria and standards for the course. Finally, provide two more assignment
examples for the students to mark, with their now enhanced understanding of the criteria.
Students undertaking such exercises have gained one grade higher for their course than they would otherwise have done, for an investment of about 90 minutes in the
marking exercise. This advantage occurs in a subsequent course. It is hard to imagine a more cost-effective intervention.
Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]
Submitting
A stage of the assessment and feedback lifecycle
CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]
This is the process of students handing over their completed assignment to the appropriate person so that marking and/or feedback can take place. It may involve taking a
completed piece of work to a physical location or submitting something electronically (e-submission).
A receipting system indicates that a piece of work has been submitted or that an ephemeral assignment such as a presentation or dance performance has actually taken
place.
Clear deadlines also help in managing staff workload. Some institutions view student anonymity as an important means of ensuring fairness in the marking process and
the ability to handle anonymous submissions can save time and complicated workarounds later in the process.
In using e-submission we try to make the process as easy as possible for students and avoid them having to make a journey to campus just for this purpose. We also try to
streamline administration, making it readily possible to see who has and hasn't submitted, to undertake academic integrity checking and to distribute the assignments to
markers.
How might we use technology at the submitting stage of the lifecycle and what are the benefits?
E-submission is rapidly becoming the norm. Features can include receipting, academic integrity checking and support for managing anonymity, distribution of work to
markers and the application of penalties for late submission.
This is the area of the lifecycle where the benefits of EMA for students are most widely understood and accepted. For staff and the institution also, these include:
Automatic proof of receipt and avoidance of anxiety about missing assignments in the postal system
Electronic reminders about deadlines and improved clarity about turnaround times for marking
That is not however to say that institutions have already ironed out all of the issues around this area; technical, process, pedagogic and cultural issues do remain.
There are also limits on the type of assignment suitable for e-submission. Where the nature of the physical artefact is important, such as a piece of sculpture, a digital
representation may never be an acceptable alternative. Similarly some pieces of assessed work may be quite ephemeral eg, a dance performance or an oral examination.
Tip: Think carefully about where digital technology can help. Even if the assignment can't be submitted electronically do you need a digital record that submission took place or, in the case of presentation and
performance, would a digital recording help with marking and feedback?
Case study: digitising thought - Abertay University
Abertay University is keen to make the most of digital technology wherever it can and has adopted an innovative approach to assessing art and design work.
Professor Louis Natanson, head of the school of arts, media and computer games, told us that with a traditional portfolio a lot of the work of interpreting the student's
thought processes actually falls back on the lecturer who needs to try and make sense of what they are presented with.
By using a digital portfolio, the student is required to make decisions about how to present their work in the same way they would have to make decisions when deciding how
to display a range of paintings in a physical space. The process of thinking this through and documenting the thought process is what actually achieves the learning outcome.
There is often no need for the university to store the physical artefacts meaning significant cost savings on the amount of storage space needed for arts and design subjects.
Ensuring that a submission is made in time for the assignment deadline can be stressful for students. E-submission avoids the need for artificial deadlines such as the time
when the departmental office closes.
However greater flexibility has other implications, such as the need for technical support outside normal office hours. When submission systems become mission-critical,
any technical issues can have serious repercussions. System downtime beyond institutional control has been commonplace in recent years, and there are issues with
submissions timing out when students have a slow internet connection.
When technical problems occur and students are unclear about whether or not their submission has been successful, it is a natural instinct to resubmit (often multiple
times) which exacerbates server load problems.
Tip: Separate the physical act of submission from any other part of the workflow such as academic integrity checking or distribution to markers. E-submission systems require a holding area for submissions to be
received, acknowledgement sent to students, and the submission held in a cache until the next part of the workflow commences.
Distinguish between receipting and verification eg, an assignment may be received on time but not actually be a valid submission.
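The separation described in the tip can be sketched as a simple holding area: the act of submission only records the upload and issues a receipt, while verification runs as a later, independent step. This is an illustrative sketch only - the class and method names are assumptions, not any real system's API:

```python
import hashlib
import time
from collections import deque

class SubmissionInbox:
    """Minimal holding area: receipting is decoupled from the rest of the workflow."""

    def __init__(self):
        self.holding = deque()   # submissions wait here until the next step runs

    def submit(self, student_id: str, payload: bytes) -> dict:
        """Accept the upload immediately and issue a receipt; do no other work."""
        received_at = time.time()
        receipt = hashlib.sha256(f"{student_id}:{received_at}".encode()).hexdigest()[:12]
        self.holding.append({"student": student_id, "payload": payload,
                             "received_at": received_at, "receipt": receipt})
        return {"receipt": receipt, "received_at": received_at}

    def verify_next(self) -> tuple:
        """Later, separately: check the held file is actually a valid submission
        (eg non-empty) - received on time is not the same as valid."""
        item = self.holding.popleft()
        valid = len(item["payload"]) > 0
        return item["receipt"], valid

inbox = SubmissionInbox()
ack = inbox.submit("s1234", b"essay text ...")
print("receipt issued:", ack["receipt"])
receipt, valid = inbox.verify_next()
print("valid submission:", valid)  # True
```

Because `submit` does nothing beyond timestamping and acknowledging, a failure in academic integrity checking or marker distribution cannot prevent the student receiving their receipt.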
Submitting is the part of the lifecycle where any agreed extensions to deadlines need to be managed. It is also important to know about any extenuating circumstances.
This is largely a matter of institutional policy: some organisations believe they have clear policies but find that interpretation varies widely between different parts of the
institution.
The variability of approaches makes it difficult for system suppliers to build in functionality for managing extensions and extenuating circumstances and/or
penalties for late submission.
Even when institutions do have a clear and consistent approach, they are often not able to change the product settings to match their policy. Human behaviour adds a
further dimension to the problem; predictably, many students leave submission until the last possible moment, and a late submission may be recorded when a student
starts a submission at 23:59 for a midnight deadline but the upload does not complete until 00:01.
Tip: Develop a clear institutional policy on submission and examples of how this can be consistently applied in different scenarios. Follow consistent procedures in the event of system failure.
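One way to make such a policy concrete in system terms - purely as an illustration, and assuming the institution chooses to judge lateness by when the upload started rather than when it finished:

```python
from datetime import datetime

# Sketch of one possible policy (an assumption, not a sector standard): record
# both timestamps, but base the lateness decision on the upload start, so a
# student who begins at 23:59 for a midnight deadline is not penalised for a
# slow connection.
def is_late(deadline: datetime, upload_started: datetime) -> bool:
    return upload_started > deadline

deadline = datetime(2024, 5, 1, 0, 0)       # midnight deadline
started = datetime(2024, 4, 30, 23, 59)     # upload began in time
completed = datetime(2024, 5, 1, 0, 1)      # but finished after the deadline

print(is_late(deadline, started))    # False: on time under this policy
print(is_late(deadline, completed))  # True, if completion time were used instead
```

Whichever timestamp the institution chooses, recording both makes the policy auditable and the decision reproducible in disputes.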
A need for student anonymity can cause issues from this stage in the process onwards. Sometimes the problem is due to human error eg, students inserting their own
name into the filename even though they have been requested not to do so. In other cases, where there is full anonymity, the problem is identifying which students have not
submitted or identifying students who have special needs or extenuating circumstances.
Maintaining anonymity can also be an issue in the later stages of the lifecycle.
Tip: Before deciding that anonymity is a requirement, be clear what purpose it serves and how to apply it consistently. For example there are some subjects such as performance art where anonymous assessment is
impractical.
In some cases staff training on good assessment design and avoiding unconscious bias may better serve your needs.
Try the e-Assignment [http://www.southampton.ac.uk/assignments/] open source tool for the submission, marking and feedback of student work
Related themes
Assessment literacy [/guides/transforming-assessment-and-feedback/assessment-literacies] Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]
This is a key stage in the lifecycle when student work is formally evaluated against a set of predefined assessment criteria with marks and feedback provided.
Feedback and marking are separate entities and serve quite distinct purposes. In some cases assessments may be purely formative and feedback given with no mark
associated. In this section we have addressed the more complex scenario where an assignment will have both marks and feedback. The importance of purely formative
feedback for students' long-term development must however be noted.
There may be multiple evaluators involved in assessing the work of a single student - the reasons for this are discussed further below. Here we look at a situation where
teaching staff provide feedback and marks and consider peer review and assessment as separate themes (also under the lifecycle section on reflecting).
There are many distinct tasks under this element of the lifecycle including distributing work to markers, marking itself, production of feedback and collation and verification
of marks.
It's possible to carry out all of these tasks electronically but in practice many institutions still use a combination of online, off-line and paper-based processes.
The processes used to carry out these tasks are designed with two further objectives in mind:
Particularly in HE, internal quality assurance processes will generally demand that marking and feedback is carried out in a certain way for a particular assignment. The
main features of such processes are:
Double or second marking to give additional scrutiny to the work of individual students particularly for high stakes assignments.
There are four main models of marking and feedback in the UK. We have defined these models based on research undertaken at the University of Manchester and validated
with a wide range of HE providers as part of our EMA project [/rd/projects/electronic-management-of-assessment] . The models are:
Early moderation
Assignments are marked and marks and feedback recorded. A sample of assignments is then submitted to a moderator (or occasionally multiple moderators) who verifies
that marking is consistent across all of the different markers involved. Any anomalies are reconciled prior to feedback and marks being released to students.
The sample used is usually a percentage that should include any fails, firsts or borderline marks.
Late moderation
The process is as described above except that feedback (and possibly provisional marks as well) can be released to students prior to the moderation process taking place.
Double-blind marking
The work of each student is evaluated by two assessors who are each working blind to the marks and feedback given by the other. The two markers can work in parallel
provided the technical settings of the marking tool used allows for this. In this case the student will usually receive two marks and two sets of feedback on the same piece
of work.
The choice of approach to marking generally depends on the overall value of the assignment and its weighting in relation to overall marks for a particular programme of
study. Some form of second marking and moderation can however be a staff development opportunity for new members of staff.
How might we use technology at the marking and production of feedback stage of the lifecycle and what are the benefits?
Technology can support all aspects of marking and feedback (often termed e-marking and e-feedback). Marking can take place online and increasingly off-line and
feedback can be provided in digital formats including text, audio and video. The benefits of e-marking and e-feedback are most evident for academic staff and include:
Improved clarity of marking and feedback and the ability to include lengthy comments at the appropriate point in the text
Ability to give qualitatively different feedback in different media eg, audio feedback for language programmes for intonation and pronunciation.
The benefits are also significant for students and there is indeed considerable student demand for e-feedback. These include:
Easy storage increasing the likelihood that feedback will be reviewed at a later date
Improved quality of feedback as tutors spend less time repeating common comments and concentrate more on the individual aspects of the assignment
Marks and feedback are different entities and need to be handled differently but technology platforms tend to conflate the two. Additionally most commercial systems don't
provide functionality to meet the needs of each of the different roles in common marking and moderation processes. In practice this causes the following difficulties:
Inability to support blind marking ie to hide the first marker's comments from a second marker
Risk of second markers and external examiners overwriting or deleting comments made by an earlier marker
Difficulties in recording decisions taken during the moderation process eg mark before and after moderation and reason it was changed
Tip: Ensure your marking and feedback processes are not over-complicated. Use our workflow models to compare your own practice to sector norms and cut out any unnecessary steps. Use the model and system
specification to discuss your requirements with your system supplier to inform their future development plans.
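The difficulties listed above stem from systems conflating records that could be kept apart. A hypothetical data model - not any vendor's schema - in which each marker's mark and feedback are distinct records, and moderation decisions are appended to an audit trail rather than overwriting the original mark:

```python
from dataclasses import dataclass, field

@dataclass
class MarkRecord:
    marker: str
    mark: float
    visible_to: set = field(default_factory=set)  # roles allowed to see it (supports blind marking)

@dataclass
class ModerationEvent:
    moderator: str
    mark_before: float
    mark_after: float
    reason: str

@dataclass
class AssessedWork:
    student: str
    marks: list = field(default_factory=list)
    feedback: dict = field(default_factory=dict)   # marker -> comments, kept per marker
    moderation_log: list = field(default_factory=list)

    def moderate(self, moderator: str, new_mark: float, reason: str):
        """Record the change and its reason; the earlier mark is never deleted."""
        before = self.marks[-1].mark
        self.moderation_log.append(ModerationEvent(moderator, before, new_mark, reason))
        self.marks.append(MarkRecord(moderator, new_mark))

work = AssessedWork(student="anon-042")
work.marks.append(MarkRecord("marker_a", 62.0, visible_to={"moderator"}))
work.feedback["marker_a"] = "Good structure; argument needs more evidence."
work.moderate("moderator_x", 65.0, "Marker consistently severe against cohort.")

print(work.marks[-1].mark)                 # 65.0, the current mark
print(work.moderation_log[0].mark_before)  # 62.0, still recoverable
```

Because second markers append rather than edit, the risks of overwritten comments and undocumented moderation decisions disappear by construction.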
Despite the benefits of e-marking and e-feedback, institutions often defer to the preferences of individual academics when it comes to how they carry out the task. This
means institutions find themselves supporting academics who use a variety of different tools for marking and feedback, as well as those who still work on paper.
There are some general issues around the ability of systems to deal with mathematical and scientific or musical notation but otherwise the issue is really down to personal
preference as to whether or not tutors like to mark on screen.
For those who are prepared to undertake e-marking there is also a distinction between online or off-line marking with the former being currently better supported by
systems than the latter. Some staff prefer familiar tools such as Microsoft Word but find the lack of integration with other EMA systems a drawback eg, the need to return
work to each student separately involving an email per student and compromising anonymity.
Getting used to online marking tools may take a while but there is good evidence that integrated tools save time on the overall marking process. Deciding which tools to use
is not necessarily a straightforward matter if the institution supports multiple tools, as the attractiveness of each may vary according to the type of assignment and
marking process.
Tip: Emphasising the benefits of online marking and getting internal champions to share their experiences can be an effective way of bringing more staff on board without needing to be strongly directive.
Whilst a lot of effort is expended on comparing and moderating marks, it is less common for programme teams to discuss feedback given to students. Consequently
approaches can vary greatly with some feedback being much more comprehensive and useful than other examples.
Feedback can take many forms. Praise and feedback on content is less effective in the long term than feedback on skills and self-regulatory abilities. The latter are more
likely to develop autonomy in learning and an ability to make evaluative judgements without the support of a teacher. Clarifying what purpose feedback is expected to serve
and analysing tutor feedback therefore needs to become normal practice for academic staff.
Tip: Our feedback and feed forward guide [/guides/feedback-and-feed-forward] provides more ideas on giving effective feedback and tools for monitoring and comparing feedback.
Giving feedback can be very time consuming for academic staff and some staff may question whether the time spent is worthwhile if they are not confident that students
are using and acting upon the feedback. See our section on reflecting [/guides/transforming-assessment-and-feedback/reflecting] for more help on this topic.
Tip: Make use of time saving tools such as comment banks to avoid repeating frequently used comments. Try giving audio feedback if you type slowly - this can save time and be more meaningful and personal.
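A comment bank need not be sophisticated to save time. A minimal illustrative sketch - the codes and comment texts here are invented examples:

```python
# Hypothetical comment bank: store frequently used comments once and expand
# short codes while marking, combined with an individual remark per student.
COMMENT_BANK = {
    "ref": "Check your referencing against the departmental style guide.",
    "struct": "Use topic sentences to signpost the structure of your argument.",
    "crit": "Move beyond description: evaluate the sources you cite.",
}

def expand(codes, personal_note=""):
    """Combine banked comments with an individual remark for this student."""
    comments = [COMMENT_BANK[c] for c in codes]
    if personal_note:
        comments.append(personal_note)
    return "\n".join(comments)

print(expand(["ref", "crit"], "Your section on methodology was excellent."))
```

The time saved on repetitive comments can then go into the personal note, which is the part students are least likely to have seen before.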
Read our case study on embedding electronic assessment management [http://repository.jisc.ac.uk/5595/3/e-affect.pdf] at Queen’s University Belfast
Electronic Feedback is a marking assistant developed by Philip Denton of Liverpool John Moores University. The application uses MS Excel and Word to generate
student reports in print and email messages. To obtain a copy of this free tool contact p.denton@ljmu.ac.uk [mailto:p.denton@ljmu.ac.uk]
Read Manchester Metropolitan University's suggestions for trying something new with feedback [http://www.celt.mmu.ac.uk/feedback/try.php?tryid=6]
The Sounds Good project explored using digital audio to give student feedback [http://docs.google.com/viewer?
a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzb3VuZHNnb29kdWt8Z3g6M2ZhNTYxZDU5MjM5ZmZiOA]
Read the University of Oxford's guide on how to create audio feedback [https://weblearn.ox.ac.uk/access/content/group/info/howto/Audio_feedback.pdf]
The universities of Reading and Plymouth have a useful website based on their experiences with video feedback [http://www.reading.ac.uk/videofeedback/]
Related themes
Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] Marking practice [/guides/transforming-assessment-and-feedback/marking-practice]
Recording grades
A stage of the assessment and feedback lifecycle
Students should be able to see how marks are arrived at in relation to the criteria, so as to understand the criteria better in future. They should be able to understand why the grade they
got is not lower or higher than it actually is.
One way to do this is to use the sentence stems: “You got a better grade than you might have done because you ...” and “To have got one grade higher you would have had to ...”.
Transforming the Experience of Students through Assessment (TESTA)
What does recording grades involve?
The assessment and feedback life cycle (adapted from an original by Manchester Metropolitan
University)
The end point of this stage is the culmination of marking and moderation processes - a single grade is recorded against each piece of work. In practice this can involve a
number of separate tasks such as collating the marks from different assessors who may be marking in various tools and/or on paper; profiling the marks in order to identify
a sample to be moderated; reconciling anomalies and formally approving the marks via some form of board.
There are therefore a lot of iterative relationships with the marking and production of feedback [/guides/transforming-assessment-and-feedback/feedback-production] .
Institutional regulations will determine who records the grade, how this is verified and in which system it is stored. However, in most cases, the student record system is the
definitive source of grading information.
In order to do this there has to be quality assurance of the original marking process and a procedure for reconciling any anomalies.
How might we use technology at the recording grades stage of the lifecycle and what are the benefits?
Ideally there would be a seamless workflow whereby the work of each marker would be picked up from the marking tool they used, submitted for profiling and analysis and
then transferred to the system that records the final mark along with any associated audit trail.
In practice most institutions are still a long way from achieving this and there is a lot of manual intervention in most cases. Some institutions have however developed their
own marks recording systems. EMA can provide the following benefits:
Storing marks in digital format can make collation and profiling easier even when a number of different systems are involved
The problems of manual intervention are often exacerbated by academics who do not trust central systems to remain editable as needed, and so keep marks elsewhere 'under
their control' until all adjustments have been made and marks have been verified. In many cases the moderation process is carried out on shared drives and by exchanging
emails back and forth.
Tip: Review the workflows in your institution and identify the most effective process, bearing in mind the tools that you have available. Promote the benefits of adhering to the standard process and using central
information systems.
Tip: When reviewing workflows bear in mind the need for an audit trail; compliance is another driver behind the need for centralised information sources.
The ways in which systems record and store marks can cause issues for many institutions whose grading schemes do not match the way the software is configured eg, an
institution may have a letter grading scheme whereas their IT systems can only support percentage marks. There are also concerns about the rounding of numeric marks
and the possibility that double rounding of marks in different systems can give an inaccurate result.
Tip: Determine your grading policy by academic requirements not system capabilities. Use our system specification to discuss your requirements with your system supplier and to inform their future development plans.
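The double-rounding concern is easy to demonstrate: a raw mark of 63.46 rounds to 63 in one step, but rounding to one decimal place in a first system and then to an integer in a second yields 64. A sketch using Python's decimal module, assuming half-up rounding (institutional rounding rules vary):

```python
from decimal import Decimal, ROUND_HALF_UP

def r(value, places):
    """Round a Decimal mark half-up to the given number of decimal places."""
    q = Decimal(1).scaleb(-places)   # quantum: 1, 0.1, ...
    return value.quantize(q, rounding=ROUND_HALF_UP)

raw = Decimal("63.46")
once  = r(raw, 0)        # round the raw mark a single time        -> 63
twice = r(r(raw, 1), 0)  # 63.46 -> 63.5 in one system, -> 64 in the next

print(once, twice)  # 63 64
```

A one-mark discrepancy is immaterial mid-range but can move a student across a grade boundary, which is why a single, agreed rounding point in the workflow matters.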
Related theme
Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance]
This stage informs students about the outcomes of an assessed piece of work. Marks and feedback can either be returned together or separately, and marks may be
provisional until confirmed by some form of academic board.
Marks and feedback may be returned in a variety of formats depending on the nature of the assignment and how it was submitted for assessment. Formats vary from
handwritten comments on scripts through a range of digital formats including audio and video feedback to ephemeral forms such as verbal feedback.
The process may be fully automated with information being delivered to students on a specified deadline after the submission date. Alternatively it may be entirely manual:
there remain instances where students can only obtain feedback on demand by making an appointment with their tutor.
This stage however is about returning marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] which doesn't necessarily mean that
students understand and act upon the outcomes. We discuss this further in the section on reflecting [/guides/transforming-assessment-and-feedback/reflecting] .
How might we use technology at this stage of the lifecycle and what are the benefits?
Technology can be used to post marks and feedback direct to individual students without the need for manual intervention. This can provide the following benefits:
Reduced workload as compared to distributing scripts or sending marks by other means such as email
Guarantee that marks and feedback will be available on the stated deadline
Ability to push the information direct to students or at least to alert them that it is available for viewing
Evidence that students make more use of feedback that is stored electronically.
In the case of feedback you need to bear in mind that it will only be useful if it is received in time to impact on subsequent assignments (see the section on setting
[/guides/transforming-assessment-and-feedback/setting] ). Conversely students not collecting/viewing feedback is a major concern for staff and investigation has shown
that this can sometimes be due to lack of clarity that feedback is available for students to view.
The problem may exist even when feedback is available electronically as not all systems have an automated alert facility to notify students that the feedback is ready.
Tip: Outline your approach to feedback and the deadline by which it will be available to students at the setting stage and ensure this is made clear.
A common concern voiced by academic staff is that students are heavily focused on marks and grades and often ignore feedback altogether. The converse argument
voiced by students is that the feedback is often received too late to be useful.
The disaggregation of marks and feedback can address both of these issues. It allows feedback to be released while marks are still undergoing moderation so that
feedback is more timely. Students may be required to give some evidence of having at least viewed the feedback before they are allowed to see their mark.
Where such approaches have been implemented they have been shown to be of value but barriers remain in many cases. For example, one issue is the inability of systems
to support separate release of marks and feedback - such an approach often necessitates amendments to academic regulations.
Tip: Consider disaggregating the release of marks and feedback and think about how you might require students to show evidence of having engaged with the feedback before they see their mark.
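Where the platform allows it, the gating idea can be sketched as follows - an illustrative model, not a feature of any particular VLE, in which the mark is only revealed once the student has acknowledged the feedback:

```python
class ReleaseGate:
    """Release feedback first; reveal the mark only after acknowledgement."""

    def __init__(self, mark: int, feedback: str):
        self._mark = mark
        self.feedback = feedback
        self.acknowledged = False

    def acknowledge_feedback(self):
        self.acknowledged = True

    @property
    def mark(self):
        if not self.acknowledged:
            raise PermissionError("View and acknowledge your feedback first.")
        return self._mark

gate = ReleaseGate(68, "Strong analysis; referencing needs attention.")
print(gate.feedback)          # feedback is available immediately
gate.acknowledge_feedback()
print(gate.mark)              # 68, only after acknowledgement
```

Even this simple gate addresses both complaints at once: feedback can go out while moderation continues, and students must at least open it before seeing the grade.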
Case study: activities to encourage reflection on feedback
Manchester Metropolitan University has the following suggestions for getting students to reflect on feedback:
Idea one
Separate marks and feedback so that you give back the feedback one week and the mark the next.
Ask students to reflect on the interpretation of feedback by getting them to predict the mark they got from the feedback you’ve given. You could even offer them a bonus
mark, say five per cent for an accurate prediction. You need to make rules for the attribution of the extra marks eg, the date by which the predicted mark has to be submitted
and the degree of precision required.
A final activity could be to have a brief class discussion about why the predictions were or were not accurate. You may think that this is taking valuable classroom time away
from the programme content. However by helping students to engage with the outputs of the unit as well as the input, you will help them to improve their understanding and
performance.
Idea two
If you have two assignments in the same unit try using some of the marks available in the second assignment to reward students who show how they have acted on the
feedback in the first assignment.
This could be asking them to provide a simple statement at the end of the assignment which explains what they did in response to the feedback and indicating where the
evidence for improvement can be found in the second submission. Any marks you give for this should be on the quality of the statement rather than their improvement which
will be marked anyway as part of the second assignment.
The format of feedback has a considerable impact in terms of how easy it is for students to use. Handwriting is notoriously difficult to decipher (and there are also privacy
concerns around hard copy assignments being left in pigeonholes to be collected). Many students still say that they value hard copy feedback but few of them have organised
approaches to its storage and retrieval meaning they are likely to look at it once and then never go back to it.
It is generally the case that students find electronic feedback easier to store and retrieve therefore they are more likely to look at it again.
Tip: Provide all feedback in digital format. There is good evidence to show this greatly increases the number of students who actually look at and make effective use of feedback.
Tip: If your system doesn't support disaggregation of marks and feedback, use a workaround ie, not filling in the mark field and then emailing the mark later, after students have reviewed the feedback.
Glasgow College developed an application called Examview [http://www.rsc-scotland.org/?p=551] to pull assessment marks from the student record system direct to
the VLE avoiding the need for duplicate mark entry
Sheffield Hallam University has a guide to achieving the three week turnaround [http://academic.shu.ac.uk/assessmentessentials/wp-
content/uploads/2015/09/Achieving-the-3-Week-Turnaround.pdf]
Related theme
Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]
Reflecting
A stage of the assessment and feedback lifecycle
This is one of the most important stages of the lifecycle. Real student learning takes place through an iterative process of reflecting on how progress matches against
learning outcomes. It is also where staff review the outcomes of various assignments in order to continuously improve curriculum design and delivery.
In a similar vein academic staff need to engage with student feedback and statistics on the performance of particular cohorts.
How might we use technology at the reflecting stage of the lifecycle and what are the benefits?
It can be used to store feedback and make it accessible to students and staff. We can also use it to support self-reflection on a portfolio of work and dialogue around
feedback, whether this is staff/student dialogue or peer to peer dialogue.
E-feedback improves the quality of feedback and consequently the self dependency of learners
Peer review activities help students understand the process of making academic judgements
E-portfolios can aid self-reflection and be used to present student skills to future employers
Digital feedback permits forms of auditing and analysis that can support staff development planning
Digitally available marks and feedback are prerequisites for various forms of learning analytics including assessment analytics.
What are the common problems?
It can be difficult to gain an overview of student feedback to support long term development. Feedback on individual assignments is generally stored at module level and it
is difficult for students and tutors alike to get an overall view of feedback across a particular student's programme of study.
This is problematic for personal tutors who need to understand how students perform across a range of units but may not teach on any of those units and don't have
access to any of the marks or feedback.
There is good research evidence to show that an effective combination of self-reflection and peer review may make the biggest difference to student learning and future
employability (see our sections on peer review [/guides/transforming-assessment-and-feedback/peer-review] and student self-assessment and reflection
[/guides/transforming-assessment-and-feedback/self-reflection] ).
In spite of this, peer review activities are unfamiliar to many students, who can be uncomfortable with the approach. This is partly due to the notion that formal
education is about learning from experts, so students often don't value working with peers.
Tip: Ensure that both student induction and the supporting stage of the lifecycle emphasise the benefits of peer learning to students, in particular developing critical thinking and communication skills and their relevance
to the world of work.
Academic staff often have concerns that giving better feedback and, more specifically engaging in dialogue around feedback, necessarily means more work. Our evidence
suggests that where e-marking and e-feedback tools are used effectively, time is saved on more routine and repetitive elements of the process and this time can be used
for giving better quality feedback and engaging in dialogue.
The use of generic feedback on common issues can save repeating the same comments many times.
Tip: When thinking about effectiveness consider the overall workflow. When evaluating time-saving tools, allow staff time to familiarise themselves before measuring hours spent.
Make full use of functions such as comment banks to save repeating frequently used phrases.
The University of Westminster's making assessment count project emphasised student self reflection
[http://jiscdesignstudio.pbworks.com/w/page/23495173/Making%20Assessment%20Count%20Project] . Supporting resources include a project report
[http://www.jisc.ac.uk/media/documents/programmes/curriculumdelivery/mac_final_reportV5.pdf] and the Feedback+
[https://sites.google.com/a/my.westminster.ac.uk/feedback-plus/home] tool.
The University of Dundee interACT [http://jiscdesignstudio.pbworks.com/w/page/50671082/InterACT%20Project] project placed great emphasis on creating the
conditions for dialogue around feedback and has produced a range of resources to help others
Read our case study on using technology to promote feedback dialogue at the University of Dundee
Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design]
Developing academic practice [/guides/transforming-assessment-and-feedback/academic-practice]
For each theme we discuss why it is important, what are the common problems and how applying technology might help. We also relate each theme back to the different
stages of the lifecycle so you can see exactly what you need to be thinking about and doing at each stage of the process.
We point to case studies of good practice and a range of free tools that you can use and adapt in your own context.
Assessment design
"Good assessments create a good educational experience, set out high expectations, foster appropriate study behaviours and stimulate students’ inquisitiveness, motivation and
interest for learning."
University of Hertfordshire
Survey data such as that from the national student survey (NSS) regularly shows that students are less satisfied with assessment and feedback than with any other aspect
of the HE experience. Good assessment design is at the heart of improving this aspect of the learning experience and achieving better learning outcomes overall.
Good design should make the assessment experience inspiring and motivating for both students and staff. It should create a positive climate that encourages interaction
and dialogue. Assessment should appear relevant and authentic and wherever possible allow students to draw on their personal experience and to exercise choice with
regard to topics, format and timing of assessment.
There should be effective mechanisms for generating high quality feedback and ensuring that learners understand and act on feedback. Reflective skills should be
developed that help students direct and regulate their own learning and support the learning of their peers.
Characteristics of a learning environment that supports assessment for learning
[https://www.plymouth.ac.uk/uploads/production/document/path/2/2729/RethinkingFeedbackInHigher
©Kay Sambell
How might we use technology to support assessment design and what are the benefits?
Curriculum management systems can help give an overview of assessment forms and patterns across a range of modules in order to aid programme focused
assessment. This ensures that the desired learning outcomes of the overall study programme are effectively addressed.
Online assessment briefs and grading criteria make it easy for students to find information about what learning outcomes are being assessed, how they will be assessed
and what the standards are.
Technology can help to ensure parity and fairness of assessment by providing alternative formats of information and other support for students with a disability (see our
section on inclusive assessment) [/guides/transforming-assessment-and-feedback/inclusive-assessment] . It can also provide students with a choice of formats to
deliver an assignment. Assessment formats that are novel and interesting encourage creativity, enquiry and participation.
Generating feedback in digital formats can speed up the process and make it more usable by students and easier to store and refer to in the future.
Technology can also support peer review and assessment, and is usually necessary to enable the use of such techniques with large class sizes.
How does assessment design relate to the assessment and feedback lifecycle?
It relates closely to where you are specifying [/guides/transforming-assessment-and-feedback/specifying] the overall assessment strategy for the programme of study.
You also need to refer to these intentions at the setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-
feedback/supporting] stages to ensure that you effectively implement your intended approach for each cohort of students.
Because the lifecycle is an iterative process, at the reflecting [/guides/transforming-assessment-and-feedback/reflecting] stage you should consider whether to make any
changes to assessment design for future instances of delivery.
Taking a principled approach to assessment and feedback practice
An approach that many Jisc projects have found central to improving assessment and feedback practice is defining the educational principles that underpin assessment and feedback in their institution.
By defining shared educational values, academics, learning technologists and those responsible for quality assurance and administration have worked together to look at
whether their principles are genuinely reflected in practice. Where improvement is required they have moved forward on the basis of a shared understanding of what is
fundamentally important.
In a short guide, Why use assessment and feedback principles? [http://www.reap.ac.uk/TheoryPractice/Principles.aspx] Professor David Nicol highlights the fact that they
can:
Help put important ideas into operation through strategy and policy
Provide a common language
Provide a reference point for evaluating change in the quality of educational provision
Summarise and simplify the research evidence for those who don't have time to read all the research literature.
Principles need to be written in a way that requires action rather than passive acceptance if they are to effect change. You also need to bear in mind that generic principles
can be interpreted in various ways. For example, the principle ‘help clarify what good performance is’ can be implemented in ways that are teacher-centric or in ways that
actively engage students.
A starting point for many institutions has been to review the well-known re-engineering assessment practices (REAP) [http://www.reap.ac.uk/] principles from the University
of Strathclyde.
Well planned assessment helps students to reflect on their own learning and self-assess. Assessments should encourage effective learning behaviours ie deep not
surface, understanding not just memory. These include spending appropriate time on tasks, with effort spread across topics and weeks, and making links across knowledge
domains.
Stimulates dialogue
Good assessment supports the development of a learning community and provides opportunities for students to engage in dialogue about their learning. Teachers should
also have an opportunity to engage in dialogue with students and colleagues to help them shape their teaching and engage in staff, module and programme development.
Stage one
Decide on the intended learning outcomes. What should the students be able to do on completion of the course, and what underpinning knowledge and understanding will
they need in order to do it that they could not do when they started?
Stage two
Devise the assessment task(s). If you have written precise learning outcomes this should be easy: the assessment simply tests whether or not students can satisfactorily
demonstrate achievement of the outcomes.
Stage three
Devise the learning activities necessary (including formative assessment tasks) to enable the students to satisfactorily undertake the assessment task(s). These stages
should be conducted iteratively, with each stage informing the others to ensure coherence.
The likelihood that more than one iteration might occur reflects the need to ensure what is sometimes referred to as 'alignment' between the learning outcomes at
programme level and those at module level; in other words to ensure that the learning outcomes at programme level are actually being addressed through the combination of
modules.
Our video outlines how the University of Strathclyde implemented new models of assessment practice:
The University of Hertfordshire's guidance outlines how to apply assessment for learning principles
[http://jiscdesignstudio.pbworks.com/w/file/fetch/68646815/ITEAM%20UH%20Assessment%20Principles%20and%20Guidance%20August%202013.pdf] to
assessment design. Their activity cards can help with designing assessment for learning
[http://jiscdesignstudio.pbworks.com/w/file/68762994/ITEAM%20AfL%20activity%20cards.docx] .
The Quality Assurance Agency (QAA)'s guide gives recommendations on how to implement institutional change in assessment and feedback practices
[http://www.enhancementthemes.ac.uk/docs/publications/transforming-assessment-and-feedback.pdf?sfvrsn=12] .
Read the study on the influence of disciplinary assessment patterns on student learning [https://www.tandfonline.com/doi/pdf/10.1080/03075079.2014.943170] .
The University of Bradford's programme assessment strategies project generated a short guide on programme focused assessment
[http://www.pass.brad.ac.uk/short-guide.pdf] and a series of accompanying case studies [http://www.pass.brad.ac.uk/case-studies.php]
Oxford Brookes University's guide outlines how to take a social constructivist approach to assessment in three easy steps (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552649]
Birmingham City University's assessment design checklist [http://repository.jisc.ac.uk/6194/1/BCU_Assessment_Checklist_and_guide.pdf] is a useful tool and
reference guide.
[1] Rust, C., O’Donovan, B. & Price, M. (2005), ‘A social constructivist assessment process model: how the research literature shows us this could be best practice’,
Assessment and Evaluation in Higher Education, Vol. 30, No. 3, pp. 233-241
In assessment terms, evaluating group assignments can save academic staff time, depending on the approach taken. Establishing a fair and appropriate means of
allocating marks for group assessment can, however, prove challenging. It's possible to allocate a single mark to a whole group, which may often mirror working life, where a
whole team shares in the success or failure of a project.
Alternatively you may choose to allocate individual marks based on the contribution of each student. Other approaches include a combination of the two ie, an overall mark
for the group with a certain percentage allocated for individual contributions.
Finally, although this may counter the concept of group work, you could set each student an individual piece of work based on the topic addressed by the group.
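The combination approach described above reduces to simple weighted arithmetic. A minimal sketch, in which the 70/30 split, the marks and the student names are hypothetical illustrations rather than figures from this guide:

```python
# Illustrative sketch of the 'combination' marking approach: an overall group
# mark, with a percentage of the final mark reserved for individual contribution.
# The 70/30 split and all marks below are hypothetical, not taken from the guide.

GROUP_WEIGHT = 0.7        # share of the final mark common to the whole group
INDIVIDUAL_WEIGHT = 0.3   # share based on each student's own contribution

def final_mark(group_mark: float, individual_mark: float) -> float:
    """Combine a shared group mark with an individually assessed mark."""
    return GROUP_WEIGHT * group_mark + INDIVIDUAL_WEIGHT * individual_mark

# Same group mark (65), different individual contributions
group_mark = 65.0
individual_marks = {"student_a": 80.0, "student_b": 60.0, "student_c": 50.0}

for student, mark in individual_marks.items():
    print(student, round(final_mark(group_mark, mark), 1))
```

Adjusting the two weights shifts the balance between shared accountability and recognition of individual effort.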
In choosing the best solution you need to think about what learning outcomes are being addressed. If the process of arriving at and/or presenting the final outcome is
important then you are unlikely to adequately assess the learning outcomes by simply looking at the finished product.
If it's important to understand the dynamics of how the group worked and what each individual contributed, some form of self or peer review can be helpful.
Students may be unfamiliar with group work and therefore feel anxious about it. This problem can be exacerbated by the fact that group working is best suited to longer,
more complex assignments which may account for a significant percentage of the overall marks.
You will need to consider the issue during assessment design and when thinking about assessment patterning and scheduling [/guides/transforming-assessment-and-
feedback/pattern-and-scheduling] . This will ensure that students have sufficient practice at component tasks before they undertake a high-stakes assignment.
Students often resent approaches which allocate a single mark to the whole group. In particular stronger students can feel they have been let down by the weaknesses of
others. Allocating individual marks is a way of getting round this but issues of fairness can still arise eg, timid students who do a lot of research may not get full recognition
if the final outcome is an oral presentation.
Cultural issues can be a barrier to group working. In some cases, multicultural groups may be a real asset in achieving learning outcomes particularly when requiring
students to confront situations they may face in a working environment.
However it is likely that multicultural groups will take longer to form effective communication channels and working relationships so you will need to factor this in. Research
suggests that a period of about four months is the point at which distinctions between the performance of homogenous and culturally diverse groups disappear.[1]
How might we use technology in assessing group work and what are the benefits?
It can be used in the following ways:
Facilitate collaboration between group members eg, upload and comment on wiki contributions or use social media channels to stay in contact and organise activities
Enhance the fairness of evaluating individual contributions - providing facilities for students to keep a reflective log or e-portfolio reveals their individual contribution to
a group project
Support peer review to acknowledge the differential contribution of the individuals involved.
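The peer review point above is what tools such as WebPA automate. A simplified sketch of the general idea, where the ratings, names and the exact scaling rule are illustrative assumptions rather than WebPA's published algorithm:

```python
# Simplified peer-moderation sketch in the spirit of tools like WebPA:
# each member rates every member's contribution, and a student's final mark is
# the shared group mark scaled by how their received ratings compare to the
# group average. Numbers and the scaling rule are illustrative only.

def peer_moderated_marks(group_mark, ratings):
    """ratings maps each rater to the scores they gave: {rater: {ratee: score}}."""
    students = list(ratings)
    # Total score each student received across all raters (including themselves)
    received = {s: sum(r[s] for r in ratings.values()) for s in students}
    mean_received = sum(received.values()) / len(students)
    # A factor of 1.0 corresponds to an average contribution
    return {s: round(group_mark * received[s] / mean_received, 1)
            for s in students}

ratings = {
    "ana": {"ana": 4, "ben": 3, "cem": 2},
    "ben": {"ana": 5, "ben": 3, "cem": 2},
    "cem": {"ana": 4, "ben": 4, "cem": 1},
}
print(peer_moderated_marks(60, ratings))
```

Students whose peers rate them above average end up above the group mark, and vice versa, which acknowledges differential contribution without abandoning the shared mark.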
How does this theme relate to the assessment and feedback lifecycle?
At the specifying stage [/guides/transforming-assessment-and-feedback/specifying] you will identify that group work is important to achieving learning outcomes. At the
setting stage [/guides/transforming-assessment-and-feedback/setting] you will think about scheduling activities, particularly so that students can practice component
elements before they undertake high-stakes summative assessment.
Throughout the supporting stage [/guides/transforming-assessment-and-feedback/supporting] you will help students develop the skills they need to work effectively in
groups.
Tips for assessing group work (adapted from the work of Professor Graham Gibbs)
Allocate differential marks to individual students to increase fairness and avoid freeloading
Aim for a group size of four to six - the maximum group size should be eight
Our case study from Edinburgh College shows how social media and project management tools [http://www.rsc-scotland.org/?p=2566] support group assignments
to reduce workload and integrate assessment across different units in the curriculum.
Footnotes
[1] Watson, W. E., Kumar, K. & Michaelsen, L. K. (1993) Cultural diversity’s impact on group process and performance: comparing culturally homogeneous and culturally diverse task groups, The Academy of Management
Journal, 36(3), pp. 590–602.
Assessment literacy
"Students need to be given the opportunity to take part in the processes of making academic judgements to help them develop ‘appropriate evaluative expertise themselves’ and make
more sense of and take greater control of their own learning."
University of Dundee
The term assessment literacy is still uncommon. We talk increasingly about study skills, graduate attributes and digital literacies but none of these fully addresses student
understanding of, and engagement in, the overall assessment process.
"Assessment literacy is an iterative process, and therefore course design and implementation should provide unhurried opportunities and time within and across programmes to
develop complex knowledge and skills, and to create clear paths for progression."
Higher Education Academy
A review of study skills materials available online shows a distinct emphasis on developing assessment technique through essay writing, presentation and preparing for
exams rather than understanding the nature and purpose of assessment and feedback practice. More integrated approaches that emphasise graduate attributes or
employability skills as key course outcomes can still fail to make the connection between the development of these skills and assessment practice.
The issue is however by no means confined to HE. A common observation in Ofsted reports on failing colleges is that there is insufficient use of pre-course assessments
to plan and teach to meet the needs of individual learners.
Peer review is an activity that can be very beneficial in developing assessment literacy because it engages students with assessment criteria and enables them to practice
making evaluative judgements. It can however be an unfamiliar and sometimes uncomfortable activity for many students so it is important to outline the purpose and
benefits of such techniques at an early stage.
Technology can be used to support activities such as peer review to help develop assessment literacy.
Text matching tools that generate an originality report for each assignment can be used to support the development of academic writing skills such as appropriate
referencing and citation. Using the tools in a formative way with students can be more productive than simply using them to assist with the detection of plagiarism.
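A toy sketch of the text matching idea behind such originality reports: compare word n-grams in a submission against source texts and report the proportion that match. Real services use vast corpora and far more sophisticated matching, so everything here, including the sample sentences, is illustrative:

```python
# Toy illustration of the text-matching idea behind originality reports:
# count how many word trigrams in a submission also occur in source texts.
# Real tools use vast corpora and smarter matching; this is purely illustrative.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text, ignoring case."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_percentage(submission, sources, n=3):
    """Percentage of the submission's n-grams found in any source text."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    source_grams = set().union(*(ngrams(s, n) for s in sources))
    return 100.0 * len(sub & source_grams) / len(sub)

source = "assessment literacy is an iterative process across programmes"
submission = "assessment literacy is an iterative process that students develop"
print(f"{match_percentage(submission, [source]):.0f}% matched")
```

Walking through a report like this with students, rather than treating the percentage as a verdict, is what makes the formative use described above productive.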
At the submitting [/guides/transforming-assessment-and-feedback/submitting] stage, students can make use of originality reports generated by text matching tools to
check their referencing and citation.
Find out more about Bath Spa and Winchester universities' student fellow scheme [http://jiscdesignstudio.pbworks.com/w/page/51251270/FASTECH%20Project]
including a video of student fellows talking about their experiences
Our case study from Cumbernauld College outlines how the college developed targeted formative and summative assessment tasks in Moodle [http://www.rsc-
scotland.org/?p=2465] to improve grammar
Oxford Brookes University's guide outlines how to improve your students' performance in 90 minutes (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552287]
Student led individually created courses
Edinburgh College of Art uses a model called Student Led Individually Created Courses (SLICCs) to embed assessment for learning approaches and employability into the curriculum. Students create their own course, self-reflect and formatively self-assess their own learning with supervision by tutors.
Student project proposals must detail the learning activities together with how they will evidence the set learning outcomes (which are the same for all students and include
employability learning outcomes). Tutors sign off the academic viability of the proposal. Students must re-interpret the learning outcomes in their own words in their proposal
and this aids student understanding of what is required of them and how they will be assessed.
Students have to regularly evidence and articulate their learning as it unfolds (aligned to the set learning outcomes), using an e-portfolio and digital artefacts. Tutors do not
formally lecture in this model but they provide regular formative feedback via the e-portfolio based on the principle that feedback requires students to take action.
SLICCs is also linked to the university’s Edinburgh Award [http://www.employability.ed.ac.uk/Student/EdinburghAward/] , enabling a certificate of recognition from the
university to be gained and an entry on their Higher Education Achievement Report (HEAR).
Peer assisted learning (PAL) schemes
Many universities run PAL schemes which allow students to provide cross-year support to one another. One example is the scheme implemented at Bournemouth University
[https://www.bournemouth.ac.uk/students/learning/peer-assisted-learning] .
Students as change agents
Bath Spa and Winchester Universities use paid student fellows to act as change agents, co-developers and co-researchers in developing their assessment practice.
In training their first set of student fellows the universities introduced the students to the institutions' educational principles and to current thinking about assessment practice
from the research literature, as well as taking them through the overall process. This gave the students a much broader base on which to draw than simply their own prior
experience and gave them a different understanding of processes that had previously seemed complicated and incomprehensible.
A number of student fellows recognised the "naivety" with which they had previously viewed some aspects of the process.
Footnotes
[1] For a discussion of these issues see Wingate, U. (2006) Doing away with 'study skills'. Teaching in Higher Education, vol 11, No. 4, October 2006, pp 457-469.
[http://embeddingskills.hud.ac.uk/sites/embeddingskills.hud.ac.uk/files/Wingate.pdf]
You must distribute student effort fairly evenly across all important topics rather than concentrating it at particular times of the year. This needs to be combined with a
developmental approach where assignments build on prior learning to become increasingly complex and demanding over time (see the information on programme focused
assessment in our section on assessment design).
This can be a difficult balancing act. Too many similar assignments become trivial yet too much variation may not allow students sufficient practice at each type of
assessment. It may be more difficult for students to see the relevance of feedback when the next assignment is significantly different in form.
Over-assessment can have a detrimental effect on student attainment as with too many different assignments to complete, students cannot concentrate sufficient effort
on each one. This is a particular problem if combined with 'assessment bunching' where the deadlines to submit assignments fall closely together.
Where there is too much emphasis on summative assessment, students may overly focus on the final mark and feel less inclined to read and act on feedback. The problem is
exacerbated when tutors have so many assignments to mark that they are unable to return marks and feedback in time to inform the students' approach
to the next assignment (see also the section on feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] ).
How might we use technology and what are the benefits?
It can help students to engage with assessment criteria and standards through the use of online templates for assignment briefs and marking rubrics.
Digital information about the curriculum can model assignment scheduling and present information about deadlines.
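As a sketch of how deadline data can surface scheduling problems, the fragment below flags 'assessment bunching' by looking for several deadlines inside a short window. The module names, dates and thresholds are invented for illustration:

```python
# Illustrative sketch: use digital deadline data to flag 'assessment bunching',
# where several submission dates fall within the same short window.
# Module names, dates, window and threshold are all invented for illustration.

from datetime import date, timedelta

def find_bunching(deadlines, window_days=7, threshold=3):
    """Return clusters of assignments whose deadlines fall within window_days."""
    items = sorted(deadlines.items(), key=lambda kv: kv[1])
    clusters = []
    for i in range(len(items)):
        # Assignments due on or after this deadline, but within the window
        window = [name for name, d in items
                  if timedelta(0) <= d - items[i][1] <= timedelta(days=window_days)]
        if len(window) >= threshold and window not in clusters:
            clusters.append(window)
    return clusters

deadlines = {
    "Essay (Module A)":  date(2024, 5, 3),
    "Report (Module B)": date(2024, 5, 6),
    "Quiz (Module C)":   date(2024, 5, 8),
    "Lab (Module D)":    date(2024, 6, 20),
}
print(find_bunching(deadlines))
```

A programme team could run a check like this across all modules before publishing an assessment schedule, rather than discovering bunching from student complaints.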
The University of Greenwich programme mapper [https://sites.google.com/site/mapmyprogramme/home] can help you manage staff and student workload
The University of South Wales' video outlines the creation of online assessment diaries for students.
[http://jiscdesignstudio.pbworks.com/w/file/fetch/67926972/Assessment_Diary.mp4]
Learning and teaching practice does not stand still. Whilst underlying good practice principles may have a relatively long validity, the changing technical landscape regularly
offers new ways of implementing good practice.
Even the underpinning principles themselves require review at certain intervals. For example, recent research into approaches such as peer review reveals benefits that you
should take into account when updating assessment strategies.
Risk aversion is also a significant issue. Staff don't want to take risks with something as important as assessment practice. They may fear the opinion of external quality
assessors or worry that innovative or unfamiliar types of assignment may prove problematic for students and bring down the grade point average.
"New tutors often have a limited feel for what good feedback looks like or what standard of feedback, in terms of length and specificity, is expected. They may concentrate on proving
their superior knowledge to the student rather than focussing on improving the students’ work in future."
Transforming the Experience of Students through Assessment (TESTA)
Culture is also an issue. Approaches to assessment and feedback are often highly personal. Academic staff often don't view themselves as having regular working hours.
The fact that they often complete marking and feedback off campus suggests that it's done in their own time and they should therefore have freedom of choice in how they
do it.
Feedback in particular has been described as taking place in a black box with little or no discussion amongst programme teams about approaches and the types of
feedback given by individual academics. In these circumstances it is not surprising to find inconsistent approaches and variable quality.
Despite rigorous processes to ensure quality and standards, marking and grading is still a subjective matter. New lecturers tend to rely more heavily on written criteria but
also to mark more harshly. More experienced lecturers may develop tacit and personalised standards of marking which are not necessarily shared across the whole
programme or department.
Profiling feedback
Evidence from feedback audits undertaken by Jisc projects shows that typical feedback profiles may differ considerably between institutions. In our audits the 'typical' profile in each sample skewed towards a particular type of feedback:
In sample A (postgraduate online distance learning programme in medical education) 95% of feedback related to content and 72% related to the immediate task
In sample B (a range of masters level courses in education) praise statements were the most common element of summative feedback; little advice was given, and what
advice there was tended to be short term rather than longer term
In sample C (modern languages) a higher percentage of comments concentrated on weaknesses rather than strengths.
Good feedback looks at strengths and weaknesses and helps students define specific actions for improvement which feeds into future assignments. In the resources below
you will find some tools to help you undertake your own feedback audit.
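A feedback audit of the kind profiled above can be supported with very simple tooling: tag each comment with one or more categories, then compute the profile as percentages. The categories and comments below are invented for illustration, not drawn from the samples above:

```python
# Sketch of a simple feedback audit in the spirit of the profiles above:
# tag each feedback comment with categories, then report the percentage of
# comments in which each category appears. Tags and comments are invented.

from collections import Counter

def feedback_profile(tagged_comments):
    """tagged_comments: list of (comment, {categories}) -> {category: % of comments}"""
    counts = Counter()
    for _comment, categories in tagged_comments:
        counts.update(categories)
    total = len(tagged_comments)
    return {cat: round(100 * n / total) for cat, n in counts.items()}

audit = [
    ("Good use of sources.",                 {"praise", "content"}),
    ("Argument drifts in section 2.",        {"weakness", "task"}),
    ("Plan your structure before drafting.", {"advice", "future"}),
    ("Clear introduction.",                  {"praise", "task"}),
]
print(feedback_profile(audit))
```

Comparing such profiles across markers or modules makes imbalances, for example praise with no forward-looking advice, visible and discussable.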
E-marking and e-feedback can be quicker and more user-friendly, both for staff and students, than traditional methods (see our section on marking practice
[/guides/transforming-assessment-and-feedback/marking-practice] ).
There are a range of online tools available to support staff development by helping with feedback auditing and supporting tutor self-development and we point to some of
these in our resources section.
How does developing academic practice relate to the lifecycle?
The reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] covers not only student reflection but also staff reflection both on their own practice and
student performance to inform future iterations of similar courses.
University College London's feedback profiling tool [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Feedback-profiling-tool.pdf] and supporting guidance
[http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Guidelines-for-using-the-feedback-profiling-tool.pdf] enables individuals or teams to analyse feedback and
encourages reflection
Consensus marking exercise (adapted from the work of TESTA)
The TESTA project highlighted student awareness of variations between markers. Students identify ‘hawks’ and ‘sparrows’ on programme teams and often choose modules accordingly.
Use two previously assessed scripts at the same level but with different marks
Invite the programme team to an hour long meeting
Ask colleagues to brainstorm what they are looking for in this assignment – this should be fresh and intuitive rather than orthodox
Ask colleagues to read and mark the pieces
Collect initial marks in a hat – written but anonymous
Good practice in feedback monitoring
The Open University (OU) has the highest ratings of any university for feedback in the National Student Survey [http://www.thestudentsurvey.com/] even though all
of its teaching takes place at a distance. The OU gives all of its tutors training on how to give feedback. They provide exemplars of good feedback and advice on using the 'OU sandwich’ of positive comments, advice on how to
improve, followed by an encouraging summary.
The OU also monitors the standard of feedback that tutors provide to students. An experienced staff tutor samples new tutors’ marking. If they see feedback that falls below
accepted standards (for example too brief to be understandable) or is inappropriate (eg, overly critical with little advice on how to improve), they will contact the tutor for a
discussion. That tutor’s feedback will go on a higher level of monitoring until it's seen to improve.
Walsall College, which has an outstanding Ofsted rating, takes a similar approach. It has a number of ‘coaches’ who sample feedback for each subject area and provide staff
development for any tutors whose feedback is felt to be inappropriate for the level of study - particularly that which focuses only on the assessment criteria and not on
longitudinal development. Careful monitoring of feedback provided by new tutors takes place in the early days.
Both the OU and Walsall College operate strict rules for the timely return of feedback and monitor tutor adherence to this.
In other sections of this guide (see particularly setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-
feedback/supporting] ) we state the importance of achieving clarity around assessment criteria and standards. However in some cases the notion of fixed assessment
criteria and grading structures may run counter to the work environment.
In a business context working out exactly what the client requires, and what are the most crucial parts of the brief, can often be challenging but key to success. This
underlines the value of engaging learners in defining assessment criteria and making evaluative judgements.
This raises the challenging question: Are we being too specific in detailing exactly how students get marks from our assessments? Should part of the assessment be the
task of working out which are the more crucial parts of the assessment itself?
"It seems that assessment in business is a cumulative and ongoing process with mostly undefined criteria that must be independently discovered, so in order to make our own
assessments more authentic we may need to reduce the level of definition we create about mark allocation."
University of Exeter
The development of self-regulated learning, perhaps best expressed as the skills needed for lifelong learning, should be one of the key aims of assessment. However, the lifelong learning approach is often neglected in assessment design.
Other terms that may be used to describe this type of assessment include: integrated, work-focused, experiential, work-related, contextual, alternative and situated.
Another issue is that students are often not very good at recognising the transferable skills they have developed and articulating these to potential employers.
Providing rich evidence of employability skills (through audio and video recording devices, webcams, e-portfolios)
Enabling learners to capture and reflect on the process of learning (through e-portfolios, blogs, video annotation software)
Capturing work-related performance for appraisal by a tutor or mentor (through audio and video recording devices, webcams)
Creating opportunities for employment-related assessments that are difficult to create in the classroom (eg, virtual worlds, online simulated professional and vocational environments)
Supporting peer assessment and review (using software tools such as Peerwise [http://peerwise.cs.auckland.ac.nz/] or WebPA [http://webpaproject.lboro.ac.uk/] )
Mapping opportunities for acquiring and assessing wider employability skills across complex curricula, eg, medicine (using mind mapping and curriculum mapping tools)
Mapping assessments and learning outcomes against employability outcomes; making these visible to all stakeholders (via curriculum databases, virtual learning
environments or VLEs, learning portals).
At the setting [/guides/transforming-assessment-and-feedback/setting] stage you will select topics and problems that relate to real world situations and clarify how
specific learning outcomes relate to a broader set of skills and competencies (you may call these graduate attributes).
The reflecting [/guides/transforming-assessment-and-feedback/reflecting] stage is likely to be of particular importance with both self and peer reflection as important
features of assessment practice that seek to enhance employability.
Birmingham City University uses videos embedded in a 3-D graphical representation of a town called Shareville [http://shareville.bcu.ac.uk/index.php] enabling
students to develop the professional skills needed in the real world
The College of West Anglia set up an award-winning media production company and internet TV station, Springboard TV [http://www.springboardtv.com/] ,
remodelling the curriculum in the process to enable Media BTEC and diploma learners to be assessed on real-world projects
Our Keele University case study (pdf) [http://repository.jisc.ac.uk/7335/1/Jisc_e-portfolio_Keele2019.pdf] in our e-portfolio guide, how to enhance student learning,
progression and employability with e-portfolios (2019) [/guides/e-portfolios] , shows the importance of preparing students for the reflective practice required for
registration with a professional body
Our guide on e-portfolios, how to enhance student learning, progression and employability with e-portfolios (2019) [/guides/e-portfolios] , includes up to date evidence from UK colleges and universities. See, for example,
our case studies from Abertay (pdf) [http://repository.jisc.ac.uk/7332/1/Jisc_eportfolio_Abertay2019.pdf] and Nottingham Trent (pdf) [http://repository.jisc.ac.uk/7336/1/Jisc_eportfolio_NTU2019.pdf] universities.
Problem: Set a real world problem as the core assessment task, supported by real world data
Purely academic learning might require a theoretical problem in order to test a theoretical understanding. In employment, though, problems tend to be very real, and data rarely comes in coherent, standardised forms. It is usually in 'messier' formats that need to be interpreted to be of use. Using a real world problem and real world data helps to develop skills in analysis, interpretation and evaluation.
Time: Move to a more distributed pattern of assessment; consider introducing 'surprise' points
Assessments are often delivered in the form of one summative assessment, eg an exam or essay, at the end of a period of formal learning. In
employment however, ‘assessment’ or evaluation points tend to occur frequently. In addition, timing is often out of individual control, and consequently
it can be necessary to juggle competing tasks at short notice. Using multiple assessment points helps to develop reflective thinking, whilst ‘surprise’
points support task prioritisation.
Collaboration: Create teams of students who work together to complete the assessment, and encourage collaboration
Many forms of assessment require working alone, yet employment invariably requires some form of collaboration and team work, and often with
unknown and perhaps even challenging individuals. Encouraging students to work collaboratively and in teams improves their ability to negotiate and
discuss, and develops their understanding of team roles and role flexibility.
Review: Include peer and/or self-review explicitly within the assessment
Typically the review of assessments (ie, feedback) in formal education is only provided by teaching staff. In employment, however, much of the review
process comes in multiple forms, eg, informal peer feedback from colleagues, formal and informal reviews from clients, and self-review of personal
performance. Including peer and/or self-review explicitly within an assessment helps students to develop critical thinking skills, and encourages
articulation and evidencing.
Structure: Use a light structure that leaves students to organise the work themselves
Most thinking on assessment suggests that there should be explicit guidance to students concerning how and where marks are attained. However in
employment part of the challenge for the individual and/or team is the structuring of the work that needs to be completed. Tasks need to be identified,
processes decided, and priorities allocated. Using a light structure approach encourages students to plan tasks and goals in order to solve a bigger
problem, strengthening their project management and prioritisation skills.
Audience: Ask students to produce their work for an audience other than the marker
In higher education the audience for an assessment is implicitly the academic that sets it, who will naturally be already aligned in some way with the
course and/or module. This contrasts with employment, where the audience can be peers, but is more often the client or another external third party,
with different values, priorities and expectations. Having to think for a different audience on an assessment provokes greater reflective thinking, and
requires new types of synthesis.
Feedback provides information to learners about where they are in relation to their learning goals so that they can evaluate their progress, identify gaps or misconceptions
in their understanding and take remedial action. Generated by tutors, peers, mentors, supervisors, a computer, or as a result of self-assessment, feedback is a vital
component of effective learning.
While feedback focuses on a student’s current performance, and may simply justify the grade awarded, feed forward looks ahead to subsequent assignments and offers
constructive guidance on how to do better. A combination of both feedback and feed forward helps ensure that assessment has a developmental impact on learning.
Effective feedback should also stimulate action on the part of the student. The most effective practice treats feedback as an ongoing dialogue and a process rather than a
product.
Regularity
Feedback needs to be quite regular and hence on relatively small chunks of course content to be useful. One piece of detailed feedback on an extended essay or design
task after ten weeks of study is unlikely to support learning across a whole course very well.
Approaches to feedback
Academic staff can often ignore the need to discuss feedback approaches. The feedback given can then be inconsistent or weak in other ways, eg, skewed towards a particular type of observation (such as praise), or short term and too focused on the assignment in hand rather than truly developmental.
A common Ofsted report observation in failing colleges is that tutors (and workplace assessors in the case of apprentices) don’t provide feedback to students that helps
them understand how they can improve.
Missed opportunities
Students often don't collect or read feedback. This is sometimes because it arrives too late to be useful. In some cases the problem is as simple as the students not
realising the feedback is available.
Alternatively students can focus on the overall mark and not understand the benefits of making use of the feedback. We discuss this further in the section on returning
marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] where we suggest that disaggregating marks and feedback can encourage
students to engage better with the feedback.
Passive learners
Students can often be passive recipients of feedback, viewing it as the tutor's role to deliver feedback to them, rather than understanding the need for them to engage in
meaningful dialogue around the feedback to aid their development.
Lack of overview
It can often be difficult for both students and tutors (especially tutors with a pastoral or personal tutoring role) to gain an overview of feedback. This may be because
feedback is online but stored at a module level or because it is in a more ephemeral format such as paper or verbal feedback.
Understanding feedback
Feedback that appears self-evident to tutors may be difficult for students to understand. There could be difficulties with the format, eg, indecipherable handwriting, or with how the feedback is expressed and contextualised.
The Open University asked a group of students to work through their feedback [http://www.open.ac.uk/blogs/efep/?page_id=523] talking out loud about what they
understood and what they didn't understand.
Improve clarity about marking and feedback deadlines and provide students with a personalised schedule
Help students store and access feedback easily if it's provided in a digital format
Support individual learners or subjects where certain digital formats are more suitable eg, audio feedback for language courses
Help teachers make efficiency gains by using a feedback format that best suits them eg, a slow typist may provide better quality feedback by making an audio
recording.
During the setting stage [/guides/transforming-assessment-and-feedback/setting] ensure that the overall submission and marking schedules allow for timely
feedback that can inform the next assignment. Inform students about how to make use of feedback and also provide formative opportunities
At the marking and production stage [/guides/transforming-assessment-and-feedback/feedback-production] generate feedback for students
When returning marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] adopt an approach that is most likely to engage students
At the reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] engage in dialogue with students about their feedback and reflect on the feedback
you have given, how useful it's been and any changes you should make in the future.
Sheffield Hallam University has a set of useful case studies and guides [http://academic.shu.ac.uk/assessmentessentials/marking-and-feedback/feedback/] on
different ways to give feedback
The Sounds Good website highlights how to provide better feedback using audio [http://sites.google.com/site/soundsgooduk/]
Manchester Metropolitan University's guidance gives staff ideas for changing feedback practice [http://www.celt.mmu.ac.uk/feedback/index.php]
University College London (UCL) produced a useful feedback profiling tool [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Feedback-profiling-tool.pdf]
and supporting guidance [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Guidelines-for-using-the-feedback-profiling-tool.pdf]
The TESTA project generated resources including a feedback guide for lecturers [http://www.testa.ac.uk/index.php/resources/category/7-best-practice-guides] and
feedback guide for students [http://www.testa.ac.uk/index.php/resources/category/7-best-practice-guides]
Our Moray College, UHI case study outlines how audio feedback improved learner engagement [http://www.rsc-scotland.org/?p=5423] .
Technology, Feedback, Action at Sheffield Hallam University
In this project the university evaluated how a range of technical interventions might encourage students to engage with their feedback and formulate actions to improve future learning.
Delivering feedback electronically offered considerable benefits including greater control for students over how and when they reviewed their feedback. Electronic storage
made it more likely that students would revisit the feedback in future.
Creating feedback dialogue at the University of Dundee
The interACT project placed great emphasis on creating the conditions for dialogue around feedback:
"Neglecting dialogue can lead to dissatisfaction with feedback. The transmission model of feedback ignores these factors and importantly the role of the student in learning from the feedback. Simply providing feedback does not ensure that students read it, understand it, or use it to promote learning."
Interventions on a postgraduate online programme in medical education included the requirement for students to submit a compulsory cover sheet with each assignment
reflecting on how well they think they met the criteria and indicating how previous feedback has influenced this assignment.
Following feedback from the tutor they were then invited to log onto a wiki (this is optional) and include a reflection on the following four questions:
How well does the tutor feedback match with your self-evaluation?
Inclusive assessment
Why is inclusive assessment important?
Inclusivity is a very important factor in assessment design as fair assessment must reflect the needs of a diverse student body. The Quality Assurance Agency's (QAA) UK quality code for higher education [http://www.qaa.ac.uk/quality-code] has a series of indicators that reflect sound practice. Indicator ten states:
Through inclusive design wherever possible, and through individual reasonable adjustments wherever required, assessment tasks provide every student with an equal opportunity to
demonstrate their achievement.
In order to provide all students with an equal opportunity to demonstrate their learning, you need to consider the different means of demonstrating a particular learning
outcome. Ensuring that students have variety in assessment and some individual choice, eg, in the topic or in the method/format of the assessment, can lead to overall
enhancement of the assessment process to benefit all students.
Assessment procedures and methods must be flexible enough to allow adjustments to overcome any substantial disadvantage that individual students could experience.
Considering school holidays and the impact on students with childcare responsibilities when setting deadlines
Considering students' previous educational background and providing support for unfamiliar activities eg, for students unused to group work
Considering the needs of students with disabilities - our guide on making assessments accessible [/guides/making-assessments-accessible] can help with this
Setting up alternative arrangements for students with particular needs can create a sense of a two-tier system that singles out students with special needs. Try to make sure that a single process can accommodate students with additional needs.
Most people are well aware of the need to consider students with disabilities but may give less consideration to cultural, religious and domestic factors. As an example, a case study for a management course set in a brewery may not be the most appropriate choice if an alternative scenario would achieve the learning outcomes equally well.
Technology is also particularly important in helping to meet the needs of learners with disabilities - find out more about assistive technologies in our guide on making assessments accessible [/guides/making-assessments-accessible]
Roehampton University's project explored the development of equivalent assessment for students with disabilities
[http://www.plymouth.ac.uk/uploads/production/document/path/2/2538/Whats_it_worth.pdf]
Manchester Metropolitan University's DEMOS project outlined how to assess disabled students without breaking the law
[http://www.celt.mmu.ac.uk/ltia/issue4/wray.shtml]
Read Birmingham City University's guide to Moodle accessibility for students with specific learning difficulties
[http://repository.jisc.ac.uk/6195/1/BCU_Moodle_Accessibility_Guidelines.pdf]
The University of Bristol has a useful discussion paper on Ethical issues in technology enhanced assessment [http://www.bristol.ac.uk/media-
library/sites/education/migrated/documents/ethicalissues.pdf]
Marking practice
Why is marking practice important?
Marking and the validation of marks (verifying, analysing, moderating) ensure fairness and the consistent application of standards. However this takes up a considerable amount of academic staff time. It's also a high-stakes activity with serious repercussions if it goes wrong.
Many of the most comprehensive marking tools are best suited to online marking and there are more hurdles to overcome for people who wish to use them in situations
where internet access is not available.
The discourse of resistance is also highly personalised eg, some older members of staff may cite eye-strain as an issue with online marking. Others of the same age group
prefer e-marking, and use technology to adapt to their personal needs and make reading easier.
It's important therefore that staff who work in situations outside of their formal workplace are aware of how to set up and use their equipment in order to avoid musculoskeletal problems and eye-strain. Extended use of laptops in particular is likely to cause problems.
Past experience
Tools to support e-marking have recently become considerably more user-friendly. Academic staff who tried to use such technologies a while ago and reverted to paper
marking because they found them difficult may take some persuading to try the new generation of tools.
Case study: resolving access issues with Grademark at the University of Hull
In 2014, the University of Hull received complaints from a small number of academic users who struggled to use the university-approved marking tool, Turnitin Grademark, on their university PCs.
When viewing the originality report characters appeared fuzzy and difficult to read for any length of time despite none of the users having a particular visual impairment.
Testing showed that the quality of the original PDF assignment submission deteriorated when used through Turnitin Grademark. This was specifically an issue with the
rendering engine of the marking software rather than a browser issue.
Further testing showed that the problem was resolved when marking was undertaken using the iPad app (which uses a different rendering engine to both the browser and the Turnitin web engine).
Banks of frequently used comments to quickly insert for pointing out grammatical errors or the need for a citation
Rubrics with marking criteria in a pre-defined matrix which improve marking consistency and may enable a quick calculation of marks and grades
Other standalone tools which may still capture annotations in digital format include the mark-up features in Microsoft Word or the use of digital pens.
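The rubric-based mark calculation such tools perform can be illustrated with a minimal sketch. The criteria, weights and level scale below are hypothetical examples, not taken from any particular marking product, which would define these in its own pre-set matrix:

```python
# Hypothetical weighted rubric: criterion -> (weight, maximum level).
# Weights sum to 1.0 so a full-marks script scores 100.
RUBRIC = {
    "argument": (0.4, 5),
    "evidence": (0.4, 5),
    "presentation": (0.2, 5),
}

def rubric_mark(levels: dict) -> float:
    """Convert the level awarded for each criterion into a percentage mark."""
    mark = 0.0
    for criterion, (weight, max_level) in RUBRIC.items():
        mark += weight * (levels[criterion] / max_level) * 100
    return round(mark, 1)

print(rubric_mark({"argument": 4, "evidence": 3, "presentation": 5}))  # 76.0
```

Because the matrix is fixed in advance, every marker applies the same weightings, which is where the consistency gain comes from.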
One of the most significant benefits of digital approaches is improved clarity for students who don't have to struggle to read handwriting.
Academic staff may need some time to familiarise themselves with new tools but many find the processes to be more efficient and more rewarding as many of the
repetitive elements have been removed. The real efficiencies are however most evident when you consider the overall process. Other benefits include:
The convenience of not having to collect and carry large quantities of paper
The University of Northampton has a useful guide to setting up your computer workstation
[https://nile.northampton.ac.uk/bbcswebdav/orgs/LearnTech/User%20Guides/Staff%20guides/Online%20Marking%20HS%20Advice.pdf] for safe and comfortable
online marking
The University of York's reading on screen [https://readingonscreen.wordpress.com/] guidance is for use by both staff and students
Queen Margaret University's guide for tutors outlines how to mark and give feedback using the Turnitin Grademark product.
[http://www.qmu.ac.uk/cap/TELPdfFiles/GrademarkTutorGuide2014.pdf]
Good practice in e-marking (adapted from guidance produced by the University of Huddersfield)
Concentrate on the use of the tool to provide dialogic feedback (rather than a monologue) to facilitate conversation with students about their work. You can do this by:
Never use generic comments alone to annotate student work - always include 'bubble' comments written specifically for that student. Redeploy time by offering more bespoke comments and support
Make use of first person pronouns when engaging with the student’s work (‘I found this sentence difficult to interpret’ or ‘the opening paragraph you have offered here is
really compelling: I’m feeling motivated to read on’)
Invite students to ask their tutor to focus on specific aspects of their work in feedback
Use colour coding to distinguish strengths from weaknesses and ensure that all students’ work has at least some aspects of their work highlighted in both categories
Avoid building generic comments that replicate bad habits common in paper marking such as a tick or a comment that simply says ‘good’ or ‘vague’
Agree a shared approach with colleagues to using the marking tool that offers consistency to students while still allowing tutors to benefit from the flexibility that the tool
offers.
A suggested strategy for implementing e-marking (adapted from work at the University of Huddersfield - see the report on their Evaluating the Benefits of Electronic Assessment Management (EBEAM) project [http://jiscdesignstudio.pbworks.com/w/page/50671451/EBEAM%20Project] for more information)
Those who are happy to mark electronically should be encouraged to do so while academics who prefer to mark on paper are supported by provision of a print-out of student
submissions. This strategy will lead to e-marking spreading organically but only if there is simultaneous pressure provided by strategic policy decisions.
This pressure comes in the form of change agency from early adopters, from administrative systems which reward academic staff who adopt e-marking (eg, by lightening
their assessment administration load) and finally from student demand. The aim is to achieve critical mass whereby e-marking becomes established as the norm.
This allows it to become not just a student expectation but an entitlement and makes those who are reluctant to mark electronically the exceptions rather than the rule.
To achieve this critical mass, the bulk of academic staff (ie, those who are neither early adopters nor especially resistant) need to find it easier and more rewarding to move to electronic marking than to stay in a paper-based system. This middle group is therefore the most important.
To achieve the goal of maximising electronic management of assessment, it's important to build a strategy and a system which provides each group with the support they need. It must also offer rewards and apply pressure in a consistent way so that moving away from paper-based marking and into e-marking makes the most sense to as many as possible.
Due to the differing attitudes of these three groups to e-marking, an effective strategy must be sensitive to them all.
Peer assessment
Why is peer assessment important?
Peer assessment is closely related to peer review and is a variant of that approach: students mark or grade the work of other students.
Many of the earliest attempts to develop peer activities concentrated on peer assessment. Experience shows that peer assessment is often more difficult than peer review
to implement successfully. This is largely because students lack confidence in their own and their peers' ability to undertake grading. This undermines their trust in the
fairness of the assessment process and their satisfaction with it, which can erode learning benefits.
The benefits of the peer approach derive mainly from thinking about and evaluating others' work rather than grading it. That is not to say that peer assessment can't be
undertaken successfully and we introduce some examples throughout this guide.
Differences between tutor and peer marks can cause dissatisfaction and lack of confidence in the approach.
How might we use technology and what are the benefits?
As with peer review [/guides/transforming-assessment-and-feedback/peer-review] , the use of suitable software is essential for effective peer assessment. The overall
benefits are similar to those outlined in the section on peer review. Specific benefits include:
Giving credit to individual students on group projects - those who contribute more earn higher marks
Grading a student's abilities against key skills such as leadership, communication and report writing
Permitting automatic final grade calculation, helping to avoid transcription errors from manipulating data in spreadsheets.
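The automatic grade calculation can be sketched with a simplified, WebPA-style weighting. The names, ratings and capping rule below are illustrative assumptions rather than WebPA's exact algorithm, and real tools add safeguards this sketch omits, such as handling missing ratings and moderating self-assessment:

```python
# Simplified peer-moderation sketch (loosely WebPA-style): each student rates
# every team member's contribution, each assessor's ratings are normalised so
# they sum to 1, and the summed fractions scale the group mark per member.

def peer_weighted_marks(ratings: dict, group_mark: float) -> dict:
    """ratings: {assessor: {member: rating}} -> {member: individual mark}."""
    weights = {}
    for given in ratings.values():
        total = sum(given.values())
        for member, rating in given.items():
            # Fraction of this assessor's credit awarded to this member
            weights[member] = weights.get(member, 0.0) + rating / total
    # With everyone rating everyone, an 'average' member has weight 1.0,
    # so their individual mark equals the group mark (capped at 100)
    return {m: round(min(group_mark * w, 100.0), 1) for m, w in weights.items()}

# Hypothetical ratings from a three-person team awarded a group mark of 60
ratings = {
    "ana": {"ana": 3, "ben": 4, "cal": 2},
    "ben": {"ana": 3, "ben": 3, "cal": 3},
    "cal": {"ana": 4, "ben": 4, "cal": 1},
}
print(peer_weighted_marks(ratings, 60))  # ana and ben above 60, cal below
```

Because the whole calculation is derived from the submitted ratings, no marks are re-keyed by hand, which is where the protection against transcription errors comes from.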
It may be a feature of the supporting [/guides/transforming-assessment-and-feedback/supporting] stage where purely formative peer review activities take place ahead of
introducing an assessment element. Peer assessment forms part of producing marking and feedback [/guides/transforming-assessment-and-feedback/feedback-
production] - it may generate a considerable volume of useful feedback very easily and, in some cases, peer marks may count towards the mark for a summative
assessment.
It's also an important feature of the reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] as the critical analysis and evaluation produced by these
activities is the source of deep learning.
Find out more about the WebPA tool [http://webpaproject.lboro.ac.uk/] - download the latest version and access detailed guidance for instructors and students
Read the University of Hertfordshire's peer assessment initiative case study [http://jiscdesignstudio.pbworks.com/w/page/28195425/ESCAPE+-
+Peer+Assessment+(Assessment+TIP's)]
Peer review
" ...developing students’ capacity for the making of evaluative judgements about their own and others’ work is weakly developed in higher education, even though these skills are highly
valued in all aspects of life beyond university."
University of Strathclyde PEER project
The topic is important because producing peer feedback helps students develop critical thinking skills and make evaluative judgements based on the assignment criteria. In
giving and receiving feedback, students develop skills that help prepare them for future professional practice and helps them understand the process of making academic
judgements.
Receiving peer feedback can be a valuable supplement to tutor feedback and enable students to reflect on things they may not have thought about. Research however
shows that students giving feedback generally see the benefits in terms of their own development, even if the work they are reviewing is weak, whereas significant numbers
of students receive peer reviews in a more passive fashion and find them unhelpful.
When reviewing the work of others students inevitably make comparisons with the work they have produced themselves and gain an understanding of the strengths and
weaknesses of different approaches.
The study also suggested that the best way to enhance learning is by making peer review a platform for the development of critical thinking across a whole programme of
study, rather than as an occasional task in a module or course.
Tip: Peer review doesn't have to criticise; it might involve suggesting something to improve an assignment or highlighting an issue or perspective that is missing.
Despite the many benefits, peer review is equally unfamiliar to some academic staff and many staff fear either student dissatisfaction, increased workload or both as a
result of introducing peer review activities.
Staff often raise concerns about students plagiarising the work of others but you can generally overcome these with well-designed peer review activities.
Tip: Following peer review ask students to comment on their own assignment without altering it so they can't plagiarise.
Giving and receiving feedback (adapted from the University of Strathclyde's PEER project which identified tools to support peer review.)
Helps to address lack of understanding in assignment tasks - constructing feedback requires students to actively engage with assessment criteria
Students develop disciplinary expertise and writing skills through regular evaluation. This process complements and elaborates on teacher and peer feedback
Stimulates self-reflection and results in the transfer of learning to students' own work. They see different approaches and recognise that they can achieve quality in a
variety of ways
If handled sensitively, engaging students in feedback in a safe and trusting environment can help develop social cohesion and learning communities. Peer feedback moves
away from learning and assessment as a private activity
Helps students develop the ability to appraise their own work. In this way, peer review directly helps students to become more independent and effective at self-regulating
their own learning.
The benefits for students receiving feedback from peers:
Peers often provide feedback that is easier to understand than the teacher's as it's written in a language that's more accessible
Students might receive more feedback than is possible from a single teacher
They learn how different readers interpret their work. This is important for developing communication skills where anticipating the reader response is important
Peer review might save some teacher time and reduce the need for extensive teacher feedback, or allow teachers to target feedback.
"Being involved in peer feedback, then, didn’t just keep students on track by telling them what they had done well or aspects they had missed. It also helped some reframe their views of
feedback as a dialogic, participative process, and helped them begin to recognise the importance of taking deep approaches to learning and viewing the subject matter through a
different lens."
Sambell (2011)
The immediacy, frequency and volume of software-supported peer feedback are likely to make up for any difference in quality between peer and tutor feedback.
The University of Strathclyde's peer review study concluded that software is not only beneficial but necessary to support peer review because:
Students generally value anonymity which would be difficult to achieve without software support
It would be unreasonable to expect academic staff to administer peer review manually due to the extra workload this would cause.
Despite the many benefits there appears room for further improvement in current peer software systems. In particular many systems (especially those that are open
source) don't offer easy integration with VLEs. Improved support for managing student groups would also be beneficial.
It may feature at the supporting stage where activities are purely formative. Peer review also forms part of producing marking and feedback as it may generate a considerable volume of useful feedback very easily.
It may be a feature of the supporting stage where purely formative peer review activities take place ahead of introducing an assessment element. Peer assessment forms
part of producing marking and feedback - it may generate a considerable volume of useful feedback very easily and, in some cases, peer marks may count towards the
mark for a summative assessment.
Professor David Nicol's paper developing students' ability to construct feedback [http://www.enhancementthemes.ac.uk/docs/publications/developing-students-
ability-to-construct-feedback.pdf?sfvrsn=30] forms part of the University of Strathclyde's PEER project
Oxford Brookes University's guidance outlines making peer feedback work in three easy steps (pdf) [http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?
id=2147552652]
It's extremely challenging to measure something as complex as a learning experience and to compare it across different institutions. The issues are complicated and
beyond the scope of this guide; however, our aim is to support transformational assessment practice, based on sound educational principles, to enhance students' prospects.
The approach to assessment suggested in this guide, and many of the examples of good practice, are a far cry from the traditional approaches supported by the many
existing regulations, standards and marking schemes. The 2007 Burgess group report noted that a single summative judgement is increasingly irrelevant and inappropriate
given more flexible curricula, different forms of study including work-based learning, and more diverse assessment practices.
"All of this has given rise to a dramatic increase in the diversity of assessment practices, beyond the traditional examinations at the end of a year, or years, of study, and is designed to
capture a wider range of student achievement in greater depth. Assessment is increasingly complicated with much more use of continuous assessment and assessment of
achievements and progress where the criteria and the mark distributions are both very different from conventional examinations (such as projects, dissertations, shows and
performance).
Increasingly different types of achievements are being assessed – involving for example both knowledge and skills – which simply cannot be added together in a meaningful way. The
steering group concluded that there is a need to do justice to this wide range of experience by allowing a wider recognition of achievement instead of spending considerable time and
effort attempting to fit these into a single summative judgement."
Beyond the honours degree classification: Burgess Group final report [http://www.hear.ac.uk/sites/default/files/Burgess_final2007.pdf]
Teachers may combine marks in ways that fail to represent the different types of learning outcome achieved through each individual component. Different assessment
formats (such as coursework compared with examinations) and different disciplinary customs and practices may distort marks. Many experts question whether it's
possible to distinguish the quality of work to a precision of one percentage point.
Communicating standards
Standards are only useful when they are valid and understood. Issues have been identified with how standards are communicated and understood by students and staff.
We consider this in the sections on the setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-
feedback/supporting] elements of the lifecycle, and suggest some useful approaches to staff development in the section on developing academic practice
[/guides/transforming-assessment-and-feedback/academic-practice] .
The process of making academic judgement requires a certain amount of tacit knowledge. In the sections on feedback and feed forward [/guides/transforming-
assessment-and-feedback/feedback] and on peer review [/guides/transforming-assessment-and-feedback/peer-review] we look at creating the conditions for dialogue
that enables students to better understand the process of making academic judgements.
Group size and diversity also have an impact on how easy or difficult the students may find it to work together. All of these factors must be taken into account when
evaluating group work and determining its equivalence to other types of assignment. We offer guidance on this in the section on assessing group work
[/guides/transforming-assessment-and-feedback/group-work] .
A diverse student population has diverse learning needs and some students may be unable to take particular assignments due to various types of disability. We look at the
issue of alternative formats and equivalence in the section on inclusive assessment [/guides/transforming-assessment-and-feedback/inclusive-assessment] .
Digital storage of marks and feedback can simplify analysis to identify anomalies. It can also permit auditing and profiling of feedback to support staff development.
Digital records of learning outcomes can produce richer evidence of student achievement, such as the diploma supplement and the higher education achievement record (HEAR),
which students can show to potential employers.
Technologies such as e-portfolios allow students to build up a rich picture of their skills and achievements when seeking employment. They can then take these forward to
support continuous professional development throughout their working lives.
Technology can also be used to allow students a range of alternative formats for tackling an assignment. This choice may encourage creativity, better engagement and
support students who are unable to complete particular forms of assignment due to a disability.
At the marking and production of feedback [/guides/transforming-assessment-and-feedback/feedback-production] stage, you may implement quality assurance
measures such as second marking and moderation.
Verification and validation of marks will take place during the recording grades stage [/guides/transforming-assessment-and-feedback/recording-grades] . The reflecting
stage [/guides/transforming-assessment-and-feedback/reflecting] involves considering programme and module outcomes against other comparators to see whether
improvements can be made for the future.
What resources can help?
The Quality Assurance Agency's guide outlines the role of assessment in safeguarding academic standards
[http://www.qaa.ac.uk/en/Publications/Documents/understanding-assessment.pdf]
The Burgess Group's final report [http://www.hear.ac.uk/sites/default/files/Burgess_final2007.pdf] discusses the issues around summative assessment in higher
education
In 2007 a group of leading academics brought together by the Assessment Standards Knowledge exchange (ASKe) produced 'assessment standards: a manifesto for
change' [http://www.brookes.ac.uk/aske/assessment-standards-manifesto/] which underpins the Higher Education Academy's framework for transforming assessment
in HE. [https://www.heacademy.ac.uk/sites/default/files/downloads/transforming-assessment-in-he.pdf]
Research shows that a combination of student self-reflection and peer review is most likely to result in deeper learning. Helping students better understand their own level
of achievement is likely to reduce costly and time-consuming appeals and complaints.
The aim is to create a learning experience in which students can take responsibility for setting their own learning goals and evaluating progress in reaching those goals.
Responding to feedback
The means of capturing self-assessment and reflection also needs to facilitate dialogue around that reflection. For example, an assignment cover sheet can be a useful
reflective tool, but simply giving students a form to fill in doesn't necessarily challenge a teacher-centric approach.
One university found that rather than undertaking self-assessment, students used cover sheets to write a 'shopping list' of what they wanted from tutors. A more effective
solution involved closing the feedback loop by asking students to keep a reflective journal giving their response to the feedback.
Nicol (2010) [2] notes that developing the capacity to critically self-evaluate the quality or impact of work may be implicit in most university curricula, although it's almost
never explicitly stated as a learning outcome. He argues that doing so would significantly change the organisation and delivery of the curriculum.
For example there would be a much greater emphasis on self and peer processes and putting learners in control as co-contributors to the curriculum.
Online quizzes with automated, interactive feedback offer self-assessment opportunities before attempting an assignment
Screen capture software can demonstrate how to use assessment criteria, clarify goals and standards in an accessible way
Online dialogue through blogging, fora, email, internet messaging and wikis can provide opportunities to test and correct understanding, enabling the incremental
development of self-monitoring and self-evaluative skills
E-portfolios facilitate peer-to-peer, peer-to-tutor dialogue, private reflection and, in some cases, assignment submission and receipt
Audio and video feedback offer richer, more personalised feedback. Audio recorded podcasts also provide an efficient approach to giving feed forward to large groups
However, the focus need not be on individual technologies. Increasingly, curriculum designers draw on combinations of technologies to provide a learning environment that
continuously promotes self-monitoring, self-evaluation and reflection on progress.
The reflecting [/guides/transforming-assessment-and-feedback/reflecting] stage of the lifecycle is an iteration that students will cover many times in evaluating their
progress.
Our case study from Glasgow Clyde College shows how games help medical administration students self-assess
[http://web.archive.org/web/20150809162258/http://www.rsc-scotland.org/?p=3684] in relation to difficult terminology.
Footnotes
[1] See for example Nicol, D. & Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, Vol 31(2), 199-218.
[2] Nicol, D. (2010) The Foundation for Graduate Attributes: developing self-regulation through self and peer assessment. Quality Assurance Agency for Higher Education.
http://www.enhancementthemes.ac.uk/pages/docdetail/docs/publications/the-foundation-for-graduate-attributes-developing-self-regulation-through-self-assessment
Work-based assessment
Why is work-based assessment important?
Many students opt to learn in a work-based environment rather than on a university or college campus. This may be to support their study or continually update their skills
after an initial period of education.
Many institutions develop expertise in collaboration with employers to deliver and assess learning in work-based environments. This type of education can demand
different modes of curriculum delivery and assessment to meet the needs of the workplace host.
Work-based learning commonly takes three forms:
Work placements to gain experience of working environments while in full-time education or training, ie a sandwich course, year in industry or apprenticeship in a
professional setting
Learning at work, eg the acquisition or renewal of skills while in post, plus any workforce development initiated by the employer
Learning through work, ie re-engagement with education or training to achieve a better standing at work, using the workplace as a learning environment or point of
reference
See also our section on employability and assessment [/guides/transforming-assessment-and-feedback/employability] which looks at ways in which traditional college
and university courses can enhance a student's future employment prospects.
Learners on work placement are likely to be assessed by workplace mentors for competencies/skills, and by institutional tutors on their ability to relate theory to practice.
Assessors in the two different locations must have a mutual understanding of the assessment criteria and standards, and use a common vocabulary to aid student
understanding.
Equivalence of standards, or at least similar grade distributions, may be difficult to achieve if those marking in the workplace do not also mark in an academic context.
Learning at work (such as continuing professional development) can involve activities such as self and peer evaluation which may be unfamiliar for many students (see
also our sections on peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] and peer review [/guides/transforming-assessment-and-
feedback/peer-review] ).
Both learning at work and learning through work can combine online learning with learning and assessment activities in the workplace. Study patterns may not fit the
traditional academic year, and suitable computers must be available in the workplace for the intended forms of assessment.
A common observation in Ofsted reports on failing colleges is that too few apprentices have their skills assessed in a place of work.
Technology can support work-based assessment in a number of ways:
Capture of workplace skills in situ (digital video, audio, still photography, webcams)
Contextualised assessment management (mobile access to competency maps and assessment records)
Delivery, assessment and accreditation of short courses in any location (e-portfolios, VLEs)
Convenient, secure submission, return and storage of assignments (online assessment management tools)
Asynchronous and synchronous communication with tutors, peers and workplace mentors (voice boards, VLEs, e-portfolios, social networking tools, blogs).
At the supporting stage [/guides/transforming-assessment-and-feedback/supporting] you will need to ensure that students are aware of all the possible sources of
support and help them to make full use of collaborative tools so they don't feel a sense of isolation. You will also need to ensure clear and robust arrangements for
submitting assignments.
The reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] will be of particular importance with both self and peer reflection as important features
of assessment practice in these contexts.
What resources can help?
The University of Exeter discusses how to prepare students for personal development reviews in the workplace
[http://collaboratevoices.blogspot.co.uk/2012/04/assessment-in-workplace.html]
Assessment, feedback and accreditation: our video shows how the University of Derby supports work-based learning with technology, including the accreditation of
work-based assessors as university lecturers
Assessment and learning in practice: our video shows how five universities collaborated to transform assessment in practice settings through the use of a shared
competency map and mobile devices
E-portfolio implementation: our video shows how the University of Wolverhampton used the PebblePad e-portfolio tool for delivering and assessing short courses to SMEs
E-portfolios for work-based assessment: our video shows how Thanet College uses e-portfolios for work-based assessment