
Advice

Transforming assessment and feedback with technology


Provides ideas and resources to help colleges and universities enhance the entire assessment and feedback lifecycle

About this guide


Published: 9 October 2015

Updated: 20 April 2016

This guide has been designed to help you make better use of technology to manage the assessment and feedback process. It will help you improve academic practice and
the business processes that support it.

Throughout the guide we use the term electronic management of assessment (EMA) frequently. This describes the way technology can support the management of the
entire life cycle of assessment and feedback activity, including the electronic submission of assignments, marking, feedback and the return of marks and feedback to
students.

Supporting guides
For an introduction to EMA, read our supporting guide, electronic management of assessment (available via Wayback Machine)
[http://web.archive.org/web/20220119073630/https://www.jisc.ac.uk/guides/electronic-assessment-management] .

Our guide on EMA systems and processes (available via Wayback Machine) [http://web.archive.org/web/20221007114836/www.jisc.ac.uk/guides/electronic-management-of-assessment-processes-and-systems] gives guidance for higher education institutions on improving business processes and choosing information
systems to support assessment and feedback.

Our enhancing assessment and feedback with technology for FE and skills guide [https://www.jisc.ac.uk/guides/enhancing-assessment-and-feedback-with-technology]
shows how technology can add value to assessment and feedback processes and provides practical advice and guidance including a number of effective practice
examples.

For FE and skills, our guide assessment for learning: a tool for benchmarking your practice in FE and skills (pdf)
[https://repository.jisc.ac.uk/6706/1/assessment_benchmarking_feandskills.pdf] is a hands-on tool to help colleges and providers self-assess their assessment
practices.

For universities and colleges, our 2019 guide, how to enhance student learning, progression and employability with e-portfolios [/guides/e-portfolios] , includes evidence
of the value of e-portfolios in enhancing assessment practices.

Podcasts
Listen to our podcast (mp3) [https://repository.jisc.ac.uk/6308/1/Benefits_of_EMA.mp3] to find out more about the benefits that institutions are achieving through the
electronic management of assessment or read the full text transcript (pdf) [https://repository.jisc.ac.uk/6303/1/Podcast_EMA_benefits_transcript_v2.pdf] .

Listen to our podcast (mp3) [https://repository.jisc.ac.uk/6309/1/Using_Jisc_EMA_resources.mp3] to find out more about how institutions are making use of our
assessment resources or read the full text transcript (pdf) [https://repository.jisc.ac.uk/6306/1/Podcast_Jisc_resources_transcript_v4.pdf] .

Explore the lifecycle


The assessment and feedback life cycle (adapted from an original by Manchester Metropolitan
University)

CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]

If your role involves managing administrative processes or IT systems you may find that this is the most helpful route into the resources for you.

Explore an overview [/guides/transforming-assessment-and-feedback/lifecycle] , or view each individual area of the lifecycle:

Specifying [/guides/transforming-assessment-and-feedback/specifying]

Setting [/guides/transforming-assessment-and-feedback/setting]

Supporting [/guides/transforming-assessment-and-feedback/supporting]

Submitting [/guides/transforming-assessment-and-feedback/submitting]

Marking and production of feedback [/guides/transforming-assessment-and-feedback/feedback-production]

Recording grades [/guides/transforming-assessment-and-feedback/recording-grades]

Returning marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback]

Reflecting [/guides/transforming-assessment-and-feedback/reflecting]

Managing an assessment and feedback transformation project
Transforming your assessment and feedback practice with the help of technology is a major change initiative requiring strong leadership, project management skills and
the ability to engage stakeholders and manage that change. We have a number of resources that can help you plan and implement this type of project.

For FE and skills, our guide assessment for learning: a tool for benchmarking your practice in FE and skills
[https://repository.jisc.ac.uk/6706/1/assessment_benchmarking_feandskills.pdf] is a hands-on tool to help colleges and providers self-assess their assessment
practices.

Our changing assessment and feedback practice [/guides/changing-assessment-and-feedback-practice] guide gives a brief overview of the topic and links to many
other resources

Our guide on electronic management of assessment in higher education: processes and systems [/guides/electronic-management-of-assessment-processes-and-systems] will help with process improvement and system change

Our project management [/guides/project-management] guide covers everything you need to know about taking a structured approach to planning and organising
your project with a comprehensive set of project management templates for you to use

Our change management [/guides/change-management] guide takes you through finding the right approach to change for your own organisational culture. We put
particular emphasis on stakeholder engagement [/guides/change-management/stakeholder-engagement] and on techniques such as appreciative inquiry
[/guides/change-management/appreciative-inquiry] which prove to be effective in changing assessment and feedback practice

Our guidance on developing a project baseline [/guides/transforming-assessment-and-feedback/project-baseline] will help you evaluate the impact of EMA projects

Approaches to change
Different institutions have approached organisational change to make better use of EMA in different ways. Here are a few examples linked to our case studies:

Strong central steer

Manchester Metropolitan University [https://ema.jiscinvolve.org/wp/2014/08/06/transforming-assessment-feedback-for-institutional-change-traffic-at-mmu/] decided that a radical rethink offered greater benefit than incremental change. The university reviewed all its policies, procedures and practice relating to assessment and created institution-wide standards supported by new and improved information systems.

Focus on process improvement

The University of Huddersfield [https://jiscdesignstudio.pbworks.com/w/file/66830875/EBEAM%20Project%20report.pdf] recognised the clear benefits of EMA and undertook business improvement underpinned by the philosophy "If you can get a machine to do it, get a machine to do it." The university stopped short of imposing e-marking on academics but strongly encouraged the practice, not least by lightening the administrative workload of those who adopted it.

Keele University [https://ema.jiscinvolve.org/wp/2014/08/06/technology-supporting-assessment-and-feedback-at-keele/] identified three recommended assessment and feedback processes to help make best use of their available technologies whilst allowing some variation to meet individual preferences.

Specific pedagogic stance

The University of Hertfordshire [https://ema.jiscinvolve.org/wp/2015/01/12/ema-supporting-assessment-for-learning-at-the-university-of-hertfordshire/] has agreed a set of assessment for learning principles that underpin curriculum design institution-wide and applied a range of tools and technologies to support the approach.

University College London Institute of Education [https://ema.jiscinvolve.org/wp/2015/01/12/ema-supporting-effective-feedback-at-the-institute-of-education-now-part-of-ucl/] wanted to improve learners' longitudinal development. Deciding that engagement with feedback was central to meeting its goals, it made policy and system changes in
this area.

The University of Exeter [https://repository.jisc.ac.uk/5589/3/collaborate.pdf] developed a series of tools to ensure that its assessment practices help students develop
employability skills.

Grass roots developments

Bath Spa and Winchester universities [https://repository.jisc.ac.uk/5597/3/fastech.pdf] use a scheme where student fellows research ways of enhancing the student experience of assessment and feedback using technology. Working with staff, they try out and evaluate new approaches and act as advocates for those that offer proven learning gains.

Queen's University Belfast [https://ema.jiscinvolve.org/wp/2015/03/04/ema-case-study-queens-university-belfast/] uses an appreciative inquiry approach to help its
academic schools identify what they do well in assessment and feedback and what needs to change. Central staff then provide support to implement the technology-supported solutions that best meet their needs.

Not all of our users implement EMA organisation-wide. In this short podcast [https://repository.jisc.ac.uk/6311/1/Getting_started_with_EMA.mp3] Bryony Olney from the
University of Sheffield talks about how she organised an EMA pilot in her department. This case study is also available as a full text transcript
[https://repository.jisc.ac.uk/6307/1/Podcast_getting_started_with_EMA_transcript_v2.pdf] .

Further resources
Our EMA blog features a number of other case studies [https://ema.jiscinvolve.org/wp/category/case-studies/] looking at various transformation aspects of the
assessment and feedback lifecycle.

You can also read the following case studies which cover a range of areas around assessment:

Embedding EMA [https://repository.jisc.ac.uk/5595/3/e-affect.pdf] - Queen's University Belfast

Using technology to promote feedback dialogue [https://repository.jisc.ac.uk/5596/3/interact.pdf] - University of Dundee

Viewpoints as a catalyst for change [https://repository.jisc.ac.uk/5598/3/viewpoints.pdf] - Harper Adams and Cardiff Metropolitan universities.

Sheffield Hallam University is undertaking a university-wide change programme over three years to enhance the assessment experience for students, staff and the
university as a whole. It has used the assessment and feedback lifecycle as part of an ‘Assessment Essentials [https://academic.shu.ac.uk/assessmentessentials/] ’
resource to support staff through this process.

Developing your project baseline


What is a baseline?
This is a starting point against which you can show that your project has delivered a tangible improvement.

It may imply a measurable improvement in time, cost, quality etc but qualitative evidence that the experience of certain stakeholders has improved can be equally valid. By
developing a baseline you ensure that you understand the current state of play before you try to change it.

The baseline is a component of your evaluation plan and also a precursor to it, as it can play an important role in helping to define the scope of your project.

A rough outline of relevant project activities might include the following steps:

Outline your project definition

Define the baseline

Refine your project definition in the light of the outcomes of baselining

Identify where you hope to make improvements

Identify how you will measure improvement and what sources of evidence you will collect

Design your evaluation plan

Conduct the project and post-project evaluation

Compare the end result with the baseline.

Step seven is by far the largest project element and will consume the most time and resources, but baselining and evaluation are the activities that show the project was
worth doing. They assume increasing importance in the current climate: baselining helps you tackle the right issues in the right way, involving the right stakeholders.

Evaluation ensures you deliver the expected benefits and capture the essential learning for your next project.

Why capture a baseline?


The benefits of capturing a baseline include:

Getting project scope right – it gives you an opportunity to refine the scope of your project. You may realise you can't solve a particular problem without tackling one or
more related issues

Identifying project stakeholders – you can avoid finding a “skeleton in the closet” further down the line in the form of a stakeholder you should have consulted but
missed

Managing and communicating project scope – baselining helps you manage stakeholders’ project expectations. You may need to clarify that certain issues are out of
scope to avoid disappointment.

Challenging myths – baselining activity can reveal myths that need challenging before you can move forward. Often these relate to unspoken assumptions about what
aspects of practice, processes and systems can and can't be changed; "we've always done it that way" is neither a reason nor a justification

Showing evidence of improvement – you can’t show how far you have travelled unless you know where you started.

What should be included?


There are no hard and fast rules; you need to decide what is appropriate for your project. Here we suggest aspects of the current situation that you might want to look at.
Remember that a baseline is just that – at this stage you are describing a current state not trying to solve problems immediately.

You need to beware of ‘paralysis by analysis’ - don’t get so bogged down describing the way you do things now that you run out of time to improve them. Equally however
you need to be aware that involving other stakeholders is a big step towards getting ownership and buy in for the eventual solutions.
For each aspect of current practice, consider the key questions and the types of evidence you could collect.

Strategy and policy
Key questions: What strategies and policies have a bearing on assessment and feedback? What does the vocabulary indicate about how this is approached/perceived? Where does responsibility/authority sit within the organisation?
Types of evidence: core institutional documents; committee structures; membership of relevant committees

Process
Key questions: How do we do it now? How does reality match the formal process? What workarounds do we need and how often? How long does it take? Who is involved? What is the level of take-up where systems/innovations are optional? Where are the bottlenecks? When is information difficult to obtain/not timely?
Types of evidence: process maps; usage stats; interviews; service level agreements (SLAs)

Infrastructure
Key questions: What institutional infrastructure (IT, physical estate, support services) supports the activity? Is the infrastructure under- or over-used? Can the infrastructure meet demand at all times? How well are elements of the infrastructure integrated?
Types of evidence: system inventories; timetables; usage stats; architecture diagrams; user feedback

Stakeholders
Key questions: What is the level of stakeholder satisfaction? Are the right stakeholders involved? Does responsibility/authority sit in the right areas? Is there effective communication between all stakeholders?
Types of evidence: National Student Survey (NSS); survey data; interviews; focus groups; rich pictures

How should you present the baseline?


There are no hard and fast rules; however, here are a few things to consider:

Who are the audiences for the report? You may find the report a useful way of engaging other stakeholders

How do you want each set of stakeholders to respond to the report eg,
- note and approve
- understand the theoretical basis of your project
- actively engage with your project
- use as a lever for change
- take other specific action

Do you require different report versions for different audiences?

What type of presentation/media will best get your message across to each set of stakeholders eg,
- graphs and figures
- comparison with other benchmarks
- authentic user experiences such as audio/video interviews
- citation of academic research

What if my project isn't the only thing that could have an impact over its lifetime?
This is probably the case in very many projects. In learning and teaching related areas it's notoriously difficult to attribute simple cause and effect because
there are so many different factors at play.

Many projects may involve scaling up innovations that have been trialled previously so the project teams already have a good idea where they expect to see their
interventions having an impact.

It’s important that your baseline captures aspects that are directly related to your intervention. You should agree your evaluation plan with stakeholders and capture evidence
that is credible and relevant.

The more ambitious your project the more difficult it will be to find simple cause/effect relationships. If you are looking to effect institutional transformation then you may
expect to see changes to institutional strategy, policy and structures.

You may even expect to see changes in institutional culture [http://jisccdd.jiscinvolve.org/wp/2011/11/09/tracks-in-the-snow-finding-and-making-sense-of-the-evidence-for-institutional-transformation/] as evidenced by interactions between different stakeholders and by vocabularies.

How do I get started with baselining?


For FE and skills, our guide assessment for learning: a tool for benchmarking your practice in FE and skills
[http://repository.jisc.ac.uk/6706/1/assessment_benchmarking_feandskills.pdf] is a hands-on tool to help colleges and providers self-assess their assessment practices.

For examples of how others have approached baselining see a range of resources and examples from previous projects
[http://jiscdesignstudio.pbworks.com/w/page/46422956/Example baseline reports] .

The assessment and feedback lifecycle


An overview of the stages involved in the academic assessment and feedback lifecycle.

The assessment and feedback lifecycle is an academic model showing a high level view of the academic processes involved in assessment and feedback. It is intended to
be pedagogically neutral ie, it is more concerned with asking questions and stimulating thought than having a basis in any particular pedagogic stance.

The model can apply to both formative and summative assessment and to any scale of learning from a three year degree to a short course that takes place over a single
day. It covers all assessment and feedback practice whether or not materials are in digital format or supported by information systems.

Lifecycle stages
The eight main stages in the lifecycle apply equally to further and higher education. At a more detailed level the processes also include:

Assessment scheduling

Submission of assignments

Tracking of submissions

Extension requests and approvals

Academic integrity

Academic misconduct processes

Examinations

Marks recording

Moderation and external examining

Student progress tracking.

Within these processes there are variations between further and higher education with student tracking against outcomes, predefined by awarding bodies, being of great
significance in FE. HE has its own set of quality assurance procedures around marking.

Another important feature of the lifecycle is that it is iterative from both an institutional and student perspective.

The reflecting [/guides/transforming-assessment-and-feedback/reflecting] element of the lifecycle is the final stage of one iteration. Learner reflection on the
outcomes of one assignment should influence how they approach the next, and staff reflection on the outcomes of a cohort should influence the next iteration of course
delivery.

The lifecycle framework


The lifecycle offers a ready means of mapping business processes and potential supporting technologies against the key academic stages. It provides a framework to gain
a holistic picture of institution-wide activity and a means of encouraging dialogue between different types of stakeholders who may have a silo view of only part of the
lifecycle.

Such a model needs to recognise that there is no such thing as a "one-size-fits-all" approach (usually even within a single institution). It is a framework to stimulate
discussion and can be used for many purposes such as:

A means of helping individual stakeholders take a holistic view of assessment and feedback activities

A prompt to support academic decision making during curriculum development

A starting point for process review and improvement

A starting point for a technology roadmap

A means of clarifying requirements to system suppliers.

The model was central to promoting shared understanding and dialogue amongst the many practitioners who collaborated with us on the production of this guide.

Using the lifecycle


The assessment and feedback lifecycle was originally developed by Manchester Metropolitan University [http://www2.mmu.ac.uk/] and it has been used and adapted by
many other institutions since. You are free to use the model for your own purposes, provided you cite the Creative Commons licence.

See how Sheffield Hallam University adapted the model to create its assessment essentials [http://academic.shu.ac.uk/assessmentessentials/] .

Listen to our podcast [http://repository.jisc.ac.uk/6297/1/The_assessment_lifecycle_v2.mp3] to find out more about the development of the lifecycle and how others have
used it, or read the full text transcript [http://repository.jisc.ac.uk/6304/1/Podcast_lifecycle_transcript_v5.pdf] .

The lifecycle is one route into this guidance. You will find a full description of each life cycle element along with common challenges faced by institutions in getting this
aspect to work well, how to support it with technology and resources to highlight good practice.
If your role involves managing administrative processes or IT systems you may find that this is the most helpful overview for you.

Case studies
We have a range of examples of how different institutions have applied electronic management of assessment (EMA) to parts of the lifecycle:

Queen's University, Belfast [http://ema.jiscinvolve.org/wp/2015/03/04/ema-case-study-queens-university-belfast/]

Institute of Education, University College London [http://ema.jiscinvolve.org/wp/2015/01/12/ema-supporting-effective-feedback-at-the-institute-of-education-now-part-of-ucl/]

University of Hertfordshire [http://ema.jiscinvolve.org/wp/2015/01/12/ema-supporting-assessment-for-learning-at-the-university-of-hertfordshire/]

Bedford College [http://ema.jiscinvolve.org/wp/2014/08/07/ema-tool-available-from-bedford-college/]

Keele University [http://ema.jiscinvolve.org/wp/2014/08/06/technology-supporting-assessment-and-feedback-at-keele/]

Manchester Metropolitan University [http://ema.jiscinvolve.org/wp/2014/08/06/transforming-assessment-feedback-for-institutional-change-traffic-at-mmu/]

Walsall College [http://ema.jiscinvolve.org/wp/2014/06/29/end-to-end-ema-at-walsall-college/]

Specifying
A stage of the assessment and feedback lifecycle

What does specifying involve?



Specifying is the process of determining the details of a course or programme of study and consequently the assessment strategy within it.

In further education much of this will be prescribed by an awarding body, but in higher education there is considerable freedom of choice. Details of each module will be recorded
in a specification, ideally online, although paper-based processes persist in many institutions.

Specifying takes place following a new course proposal or when an existing course undergoes periodic review. Additionally there will be a process of making minor
modifications if there is a desire to change the assessment approach.

At the specifying stage you will normally determine the type of assignment, give an idea of the scale eg, a 4,000 word essay, and indicate its value as a percentage of the
overall marks for that module.

What are we trying to achieve at the specifying stage?


We are trying to show that students can demonstrate achievement of the desired learning outcomes for each module or other piece of learning they have undertaken.
There are likely to be many different ways to demonstrate that a learning outcome has been achieved so it's important to be creative and innovative at this stage.

Some types of assignment can model things that students may have to do in the workplace and help them develop future employability skills. Other types of assignment
may equally demonstrate a grasp of course content but without evidencing a range of other transferable skills.

Manchester Metropolitan University's guidance on specifying and assessment strategy [http://www.celt.mmu.ac.uk/assessment/design/types.php] asks whether the
assessment type will enable students to demonstrate the learning outcomes and whether you will look forward to marking it. This will help you to consider assessment design, as a
poorly designed assignment might leave you marking numerous identical submissions. A well-designed brief can generate originality and individuality in student
approaches.

Assignments that demand some individuality of approach also make it much more difficult for students to plagiarise.

How might we use technology at the specifying stage and what are the benefits?
At this stage of the lifecycle you will require some form of course management system that contains the definitive version of course and module specifications. This can
provide the following benefits:

A central view so that all stakeholders see the same version of the information

Information that can be re-used for many purposes including course and module handbooks

A curriculum overview showing the relationship between assessment and learning outcomes at module and programme level

A means of comparing programmes to identify effective practice

A source of inspiration for staff designing new programmes


What are the common problems?
In many institutions a lot of course and module specification information is still either paper-based or held in a range of local systems. This leads to problems in knowing
which is the correct version. As a result it can be difficult to generate accurate information that flows through the lifecycle and is readily reusable for a variety of purposes
and stakeholders.

Tip: Clarity around the specification stage is extremely important. Consider a central database of course and module information.
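As an illustration of the tip above, a single authoritative store of module specifications can be as simple as one database table that every downstream system reads from. This is a minimal sketch only; the table name, columns and module data are all invented for illustration, not taken from any particular institution's system:

```python
import sqlite3

# Hypothetical schema: one definitive record per module specification.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE module_spec (
        module_code       TEXT PRIMARY KEY,
        title             TEXT NOT NULL,
        version           INTEGER NOT NULL,  -- bumped on each approved modification
        assessment_type   TEXT,              -- eg 'essay', 'portfolio', 'exam'
        weighting_percent INTEGER            -- value as % of overall module marks
    )
""")
conn.execute(
    "INSERT INTO module_spec VALUES (?, ?, ?, ?, ?)",
    ("ENG101", "Introduction to Engineering", 1, "essay", 40),
)

# Every stakeholder (handbooks, VLE, timetabling) queries the same version,
# so there is never any doubt about which copy of the specification is correct.
row = conn.execute(
    "SELECT title, version FROM module_spec WHERE module_code = ?", ("ENG101",)
).fetchone()
print(row)
```

Holding the version number alongside the record also gives the minor-modifications process a clear audit trail: a change is only real once the central version is incremented.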
Modularisation of the curriculum means that learning and assessment are broken down into chunks. Often those chunks don't build back up to a close match with a course
or programme's desired learning outcomes. This is exacerbated by the difficulty of getting a clear programme-level overview.

Institutions that have done some basic curriculum analytics often discover that some learning outcomes are assessed multiple times whilst others are not assessed at all.
Many realise that they are over-assessing, causing unnecessary work for staff and stress for students.

Tip: Analyse the number of assessments per module of similar size and the number of times you are assessing each learning outcome.
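Once assessment-to-outcome mappings are held digitally, the analysis suggested in the tip above can be scripted in a few lines. The mapping below is invented for illustration; in practice it would come from the central course management system:

```python
from collections import Counter

# Hypothetical mapping of a module's assignments to the learning
# outcomes (LOs) each one assesses.
assessments = {
    "essay_1": ["LO1", "LO2"],
    "exam": ["LO1", "LO3"],
    "presentation": ["LO1"],
}
all_outcomes = ["LO1", "LO2", "LO3", "LO4"]

# Count how many times each outcome is assessed across all assignments.
coverage = Counter(lo for los in assessments.values() for lo in los)

over_assessed = [lo for lo in all_outcomes if coverage[lo] > 1]
not_assessed = [lo for lo in all_outcomes if coverage[lo] == 0]
print(over_assessed)  # ['LO1']
print(not_assessed)   # ['LO4']
```

Run across every module of similar size, the same counts reveal both over-assessment (here LO1, assessed three times) and gaps (LO4, never assessed), which is exactly the pattern institutions report finding.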
Because specifying and major reviews of specifications happen infrequently, changes in the periods between reviews are inevitable. For new courses there may
be a considerable time lag (of one to two years) between course validation and initial delivery. Often this results in courses being delivered by new staff who have little ownership of
the original design.

There are quality processes to manage changes in the interim but staff often find the processes so arduous that they find ways to implement change "under the radar" of
the formal minor modifications processes. This exacerbates the issues around having the correct version of information.

Tip: Develop clear and simple processes for ongoing course improvement. This ensures that academics can keep courses up-to-date on the basis of lessons learnt and changing student needs.
There is a lot of risk aversion in relation to assessment design. Staff fear being too creative in case their assessment is too challenging and brings down average marks for
a cohort, or they incur the disapproval of external examiners. Students don't like being guinea pigs in any aspect of their learning and particularly not in relation to
assessment. This is in spite of the fact that more flexible and creative assessment design can help to ensure fairness and inclusivity.

Differences in marks relating to factors like gender or ethnicity can be caused as much by the assessment design as the actual marking process. Technology has a role to
play here in ensuring that the range and size of file types that lend themselves to e-submission is not a limiting factor in the choice of assignment type.

Tip: Staff development should emphasise the benefits of using varied assessment types. Student induction should include assessment literacy development at an early stage.
The specifying stage of the lifecycle causes a particular set of problems in FE and skills. This is due to the complexity of awarding body criteria for assessing against
particular learning outcomes and the frequency with which the specifications can change.

Tip: Colleges using Moodle can incorporate Grade Tracker [http://ema.jiscinvolve.org/wp/2014/08/07/ema-tool-available-from-bedford-college/] to configure and track progress against BTEC, City & Guilds and A/AS
level qualifications in a single system.

What resources can help with the specifying stage?


The University of Ulster's Viewpoints staff development materials [http://wiki.ulster.ac.uk/display/VPR/Home] aid curriculum design with an emphasis on
assessment and feedback

The University of Hertfordshire's guidance outlines how to apply assessment for learning principles
[http://jiscdesignstudio.pbworks.com/w/file/fetch/68646815/ITEAM%20UH%20Assessment%20Principles%20and%20Guidance%20August%202013.pdf] to
assessment design

The University of Bradford's programme assessment strategies project generated this short guide on programme focused assessment
[http://www.pass.brad.ac.uk/short-guide.pdf]

Blackboard produced a useful rubric [http://www.blackboard.com/resources/catalyst-awards/bbexemplarycourserubric_nov2013.pdf] to aid course design

Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]

Assessment literacy [/guides/transforming-assessment-and-feedback/assessment-literacies] Assessment patterning and scheduling [/guides/transforming-assessment-and-feedback/pattern-and-scheduling]

Employability and assessment [/guides/transforming-assessment-and-feedback/employability] Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]

Inclusive assessment [/guides/transforming-assessment-and-feedback/inclusive-assessment] Peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment]

Peer review [/guides/transforming-assessment-and-feedback/peer-review] Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance]

Student self-reflection [/guides/transforming-assessment-and-feedback/self-reflection] Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]

Setting
A stage of the assessment and feedback lifecycle

What does setting involve?


Whilst the overall assessment strategy and approach are specified very early in the lifecycle, setting assignment details needs to happen each time a group of students takes
a particular module. This is often known as an instance of delivery.

At this point students receive details, usually in the form of an assignment brief, about precise topics, deadlines, learning outcomes assessed, marking criteria, and
feedback arrangements.

What are we trying to achieve?


The purpose of setting is to achieve clarity for both students and staff: what is required, in what format, by when and how it will be assessed.

As a member of academic or administrative staff you need to be clear about how the work will be marked (see our section on marking and feedback workflows) and any
deadlines for the return of marks and feedback. This means being clear about marking criteria and grading schemes and also any penalties for non-compliance with the
stated requirements.

In the case of an overly long submission for example, is there a fixed penalty or will you only mark up to the word limit? Similarly what are the penalties for late submission
and how will you deal with extenuating circumstances?

If you are using online submission you may need to give guidance on file format. You may also have specific requirements regarding naming conventions to ensure
anonymity.

At this point you need to consider assessment scheduling to ensure workload for both students and staff is appropriately distributed.

How might we use technology at the setting stage of the lifecycle and what are the benefits?
You should make use of online templates for assignment briefs and marking rubrics, and use digital information about the curriculum to model assignment scheduling and
to present information about deadlines.

Depending on the systems you have available, you could offer students a personalised calendar showing deadlines for assignment submission and the return of marks and
feedback. This can provide the following benefits:

Clarity for students about what is required for each assignment

Clarity for students and staff about deadlines

Ability to manage both staff and student workload

Consistent and effective approach to feedback

Effective quality assurance mechanisms.
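As an illustration, digitally held deadline information could be surfaced to students as an iCalendar feed that most calendar applications can subscribe to. The data structure and field names below are hypothetical, not taken from any particular EMA system:

```python
from datetime import datetime

# Hypothetical curriculum data: assignment deadlines pulled from an EMA system.
deadlines = [
    {"module": "ENG101", "title": "Essay 1", "due": datetime(2016, 3, 14, 23, 59)},
    {"module": "HIS205", "title": "Source analysis", "due": datetime(2016, 3, 21, 23, 59)},
]

def to_ics(events):
    """Render assignment deadlines as a minimal iCalendar (RFC 5545) feed."""
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//example//EN"]
    for e in events:
        lines += [
            "BEGIN:VEVENT",
            "SUMMARY:{} - {} due".format(e["module"], e["title"]),
            "DTSTART:{}".format(e["due"].strftime("%Y%m%dT%H%M%S")),
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    # iCalendar requires CRLF line endings
    return "\r\n".join(lines)

feed = to_ics(deadlines)
```

A feed like this could be filtered per student so each learner sees only the deadlines for modules they are enrolled on.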

What are the common problems?


Students are often confused about exactly what is required of them and staff find themselves repeating the same information many times. The University of Strathclyde
[https://www.strath.ac.uk/] has the following tips for students on what is required before they begin an assignment:

Write a statement of requirements in your own words and check it out with other students

Write down what you think is required and take this to the tutor for comment

Ask the tutor if he/she has any completed examples of the kind of work you are asked to do. Make it clear that you are not going to copy from these and that you are
mainly interested in the approach

Students from other year groups may be a good source of advice about what makes a good or poor piece of work

Check out published writing in the assignment area on how to present arguments and writing style. Remember however that what you write for your assignment must
be in your own words and not copied from other sources.

Case study: criteria crunching at the University of Winchester

Rationale

Students rarely ingest and internalise the meaning of criteria and/or grade descriptors. They may read module handbooks, containing carefully explained statements of assessment criteria, but find them difficult to understand and apply to tasks.

The words in assessment criteria and grade descriptors are often quite opaque and dense for students (and staff), and are rich in tacit understandings and disciplinary
discourse. They require sophisticated interpretive skills.

This exercise engages students in making meaning from the criteria and discussing the notion of quality.

Exercise

Students read through the assessment criteria


Individually they rewrite in their own words using whatever genre they are at ease with – from academic text to poetry to recipe instructions to rap

In small groups they share their different interpretations, write up the best suggestions on flip chart paper, pin them on walls, and wander around to see how other groups have interpreted the criteria
The lecturer facilitates refining the criteria and grade descriptors in class or online. This provides a student-friendly set of criteria for programmes/modules and/or tasks.
Intended outcomes

Question the language of criteria and grade descriptors

Question the instrumental use of the same


Students creatively engage with the criteria in a critical way
Students and staff engage in dialogue about the meaning of criteria and quality.

Tip: Establish a common template for assignment briefs to capture essential information and present it to students in a consistent way for every assignment.
Assessment bunching is a common issue. When a number of assessment deadlines fall closely together, individual students have less time to spend on each assignment and produce poorer quality submissions.

There is also a lack of opportunity for the student to receive formative feedback on one assignment and use this in a developmental way to help with future assignments.
Even where a course or programme is managed in such a way that assessment bunching is not a particular problem for individuals, it can pose a problem at institutional
level. Manchester Metropolitan University undertook some modelling from its coursework submission database and identified significant peaks in assignment submissions
(the highest being around 17,000 individual submissions due in a single week at the end of March 2012).

Such peaks place considerable strain on administrative processes and IT systems.

Tip: Model the curriculum to verify sufficient formative development opportunities for learners and ensure that bunching does not occur. Consider defining a maximum number of summative assessments for modules of
a particular size.
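The kind of modelling Manchester Metropolitan University undertook can be sketched simply: count the submissions due in each week and flag the peaks. The data and threshold below are illustrative only:

```python
from collections import Counter
from datetime import date

# Illustrative deadline data drawn from a coursework submission database:
# a large end-of-term peak plus a smaller mid-term cluster.
deadlines = [date(2012, 3, 26)] * 17000 + [date(2012, 2, 6)] * 3000

def weekly_peaks(dates, threshold=10000):
    """Count submissions due per ISO week and flag weeks above a threshold."""
    per_week = Counter(d.isocalendar()[:2] for d in dates)  # (year, week)
    return {week: n for week, n in per_week.items() if n > threshold}

peaks = weekly_peaks(deadlines)
```

Running this over real deadline data would identify the weeks where administrative and IT load needs managing, before the bunching occurs.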
For feedback to be useful it needs to be received at a point where students can act on it. It also needs to explain the extent to which they have met the learning outcomes
and what they need to do in order to achieve a better grade next time.

Feedback is however often left to the discretion of the individual academic. Inconsistencies in approach and ineffective feedback are often not picked up by unit or course
leaders until it is too late.

Tip: Define an overall feedback strategy at the specifying stage. Ensure assignment briefs outline what type of feedback students can expect to receive and when and how they should act on it.
What resources can help?
The University of Hertfordshire assessment timelines tool (view via UK Web Archive)
[https://www.webarchive.org.uk/wayback/en/archive/20150529100437/http://jiscdesignstudio.pbworks.com/w/page/30631817/ESCAPE%20-
%20Assessment%20timelines] aids planning by outlining the consequences of assessment timing

A webinar from the University of Hertfordshire discusses the efficiency of assessment


[http://jiscdesignstudio.pbworks.com/w/page/51924065/Efficiency%20of%20assessment%20-%20initial%20thinking%20from%20the%20ITEAM%20project] and
introduces their assessment resource calculator
[http://jiscdesignstudio.pbworks.com/w/file/67218607/ITEAM%20Assessment%20Comparitor%2026%20Nov%202012.xlsx]

The University of Hertfordshire has developed an example set of assessment criteria


[http://jiscdesignstudio.pbworks.com/w/file/67218330/University%20Assessment%20Criteria%2029-04-13.docx]

Manchester Metropolitan University has developed guidance on assessment grading, criteria and marking
[http://www.mmu.ac.uk/academic/casqe/regulations/docs/assessment_procedures.pdf]

The University of Reading A-Z of assessment methods [http://www.reading.ac.uk/web/FILES/eia/A-Z_of_Assessment_Methods_FINAL_table.pdf] can help you
choose the most appropriate type of assessment

Rogo [http://www.nottingham.ac.uk/rogo/index.aspx] is an open source tool, developed by the University of Nottingham with support from Jisc, that can deliver a
range of online assessments

The University of Wisconsin has published a useful set of rubrics [http://www.uwstout.edu/soe/profdev/rubrics.cfm#cooperative] for different types of assessment

Time to assess learning outcomes in e-learning (TALOE [http://taloetool.up.pt/] ) is a web-based tool with associated guidance to help match suitable assessment
types to learning outcomes.

Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]

Assessment literacy [/guides/transforming-assessment-and-feedback/assessment-literacies] Assessment patterning and scheduling [/guides/transforming-assessment-and-feedback/pattern-and-scheduling]

Employability and assessment [/guides/transforming-assessment-and-feedback/employability] Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]

Inclusive assessment [/guides/transforming-assessment-and-feedback/inclusive-assessment] Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance]

Student self-reflection [/guides/transforming-assessment-and-feedback/self-reflection] Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]

Supporting
A stage of the assessment and feedback lifecycle

What does supporting involve?



This component looks specifically at supporting students in the period between setting and submission of assignments ie, while they are in the process of completing an
assignment.

It is separate from the more general support needed for the business processes and technologies throughout the lifecycle, although it does have a relationship with the
broader digital literacies agenda for both staff and students.
What are we trying to achieve?
This stage is about helping each student do their best work for each assignment. However the real purpose is developing students' assessment literacy
[/guides/transforming-assessment-and-feedback/assessment-literacies] so that they understand what is involved in the process of making academic judgements.

Ultimately we are trying to turn students into independent and self-regulated learners who are able to monitor and evaluate their own learning. Assessment preparation
should ensure appropriate scaffolding to facilitate this.

How might we use technology at the supporting stage of the lifecycle and what are the benefits?
The information sources we suggest you create at earlier stages of the lifecycle are invaluable in supporting students with consistent information. Technology can provide
formative development opportunities and include online quizzes and testing. Electronic voting systems (also known as personal response systems or clickers) can test
understanding of a topic or gather feedback from students during teaching sessions.

Technology can provide formative feedback on draft assignments - this may be in the form of tutor feedback, peer feedback or self-development such as the use of
academic integrity checking tools.

This can provide the following benefits:

Consistency of information sources (see the specifying [/guides/transforming-assessment-and-feedback/specifying] and setting [/guides/transforming-assessment-
and-feedback/setting] stages) helping staff to provide consistent information in direct contact with students

A digital overview of the curriculum helps students understand their individual learning pathway, particularly how one assignment relates to others

Formative opportunities such as online quizzes and testing can help consolidate learning

Opportunities for self and peer reflection can help with deeper learning.

What are the common problems?


Students may not always understand what is required of them (see the 'setting' stage of the lifecycle). This problem can be exacerbated when staff use a range of
academic terms for the same thing and sometimes even the same words for different things eg, rubric/marking schema/marks sheet/cover sheet etc.

Tip: Use the same terminology when giving support to students in class, via email or by other means.
Students may focus too much on the assignment in hand rather than understanding where this piece of learning fits into the overall learning outcomes for their course or
programme of study. This results in researching the subject in a narrow way where they think they will gain the most marks.

Tip: Provide students with an overview of their learning pathway to help them understand how what they learn from one assignment will feed into future assignments and their overall development.
Be clear about the overall learning outcomes for the course and transferable, employability skills or graduate attributes that they are expected to develop.
Students may view each assignment as a one-off to be forgotten once it is completed. This may partly be a problem of mindset and not understanding how the different
elements of the course hang together. It can also be due to a curriculum that doesn't offer sufficient opportunities for formative development.

Another problem is a curriculum that doesn't provide sufficient time for feedback on formative activities to influence student work on their final submission, or for feedback
on one summative assignment to influence the next.

Tip: Build regular opportunities for formative development into the curriculum. Support activities might include assignment tutorials, submission of drafts and formative quizzes with online feedback which students can
take in their own time.
Providing formative opportunities can sometimes add further complications to the set up steps for EMA information systems eg, the system needs to distinguish between
draft submissions that are for feedback only and the final submission for marking. Similarly, submissions should not be flagged as having unoriginal content simply
because a draft of the same piece of work has previously been submitted.

Tip: Make sure that staff managing EMA systems know when a particular assignment has a draft phase. Have clear guidance about how to set the system up to manage drafts.
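As a minimal sketch of the distinction such a system needs to make (all names are illustrative, not a real product's API), drafts can be kept feedback-only and excluded from originality matching against the same student's final work:

```python
# Sketch of an assignment inbox that separates draft submissions (feedback
# only) from final submissions (marked), and excludes a student's own drafts
# from the originality-checking corpus for their final piece.

class AssignmentInbox:
    def __init__(self, has_draft_phase):
        self.has_draft_phase = has_draft_phase
        self.submissions = []  # (student, text, is_draft)

    def submit(self, student, text, is_draft=False):
        if is_draft and not self.has_draft_phase:
            raise ValueError("this assignment has no draft phase")
        self.submissions.append((student, text, is_draft))

    def for_marking(self):
        """Only final submissions go forward to markers."""
        return [(s, t) for s, t, d in self.submissions if not d]

    def originality_corpus(self, student):
        """Exclude the student's own submissions when checking their work."""
        return [t for s, t, d in self.submissions if s != student]

inbox = AssignmentInbox(has_draft_phase=True)
inbox.submit("anna", "draft text", is_draft=True)
inbox.submit("anna", "final text")
inbox.submit("ben", "ben final")
```

Real EMA products handle this in different ways, but whatever the mechanism, staff setting up the assignment need to know the draft phase exists.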
Some students may have particular special needs eg, dyslexia or other disability or may not have English as their first language. You should think about making curriculum
and assessment practice as inclusive as possible from the design stage eg, using technologies such as lecture capture to aid student revision and offering alternative
formats for assignments wherever possible. You may however still need to provide special services for certain types of learner.

Tip: Make sure each assignment brief makes it clear to learners where they can get help for any special needs.
A personal tutoring system is a means of ensuring that a student's long term development needs are catered for. Often the personal tutor is removed from the marking
process so features such as anonymity can be preserved. Effective personal tutoring does however require a means of allowing the personal tutor to see a full view of
feedback. Currently this is problematic in many systems.

What resources can help?


The University of Derby's fit to submit? [https://uodpress.wordpress.com/fit-to-submit-assignment-checklist-2/] checklist helps students avoid common mistakes in
their coursework.

Oxford Brookes University's guide provides advice for students on how to do better on their assignments (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552644]

The University of Hertfordshire's at a glance guide shows how electronic voting systems (EVS) in different disciplines
[http://jiscdesignstudio.pbworks.com/w/file/63296787/ITEAM%20Case%20studies%20mapped%20Afl%20Feb%202013.docx] helped support its assessment for
learning principles

Our case study from Ayrshire College outlines how a lecturer designed a multi-media comic book [http://www.rsc-scotland.org/?p=3962] to help creative arts students
engage better with formative assessment tasks

Our case study from Perth College shows how smartphones and QR codes [http://www.rsc-scotland.org/?p=232] engaged hairdressing and beauty therapy students
with formative assessment tasks supporting enquiry based learning, self directed learning, group work and peer evaluation.

Case study: marking exercise - University of Winchester

An easy and effective way of orienting students to allocate effort in an appropriately focused way, in relation to assessment demands, is a classroom exercise in which students mark three or four good, bad and indifferent assignments from students from the previous year (with their permission, and made anonymous).

Students should read and allocate a mark to each example without discussion, then discuss their marks and reasons for allocating these with two or three other students
who have marked the same assignments.

The tutor then reveals the marks the assignments actually received, and why, in relation to the criteria and standards for the course. Finally, provide two more assignment
examples for the students to mark, with their now enhanced understanding of the criteria.

Students undertaking such exercises have gained one grade higher for their course than they would have otherwise, for the investment of about 90 minutes in the marking
exercise. This advantage occurs in a subsequent course. It is hard to imagine a more cost-effective intervention.
Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Assessing group work [/guides/transforming-assessment-and-feedback/group-work]

Assessment literacy [/guides/transforming-assessment-and-feedback/assessment-literacies] Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]

Inclusive assessment [/guides/transforming-assessment-and-feedback/inclusive-assessment] Peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment]

Peer review [/guides/transforming-assessment-and-feedback/peer-review] Student self-reflection [/guides/transforming-assessment-and-feedback/self-reflection]

Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]

Submitting
A stage of the assessment and feedback lifecycle

What does submitting involve?



This is the process of students handing over their completed assignment to the appropriate person so that marking and/or feedback can take place. It may involve taking a
completed piece of work to a physical location or submitting something electronically (e-submission).

A receipting system indicates that a piece of work has been submitted or that an ephemeral assignment such as a presentation or dance performance has actually taken
place.

What are we trying to achieve?


The process is formalised to ensure compliance with a stated submission deadline. This is largely to ensure that all students have the same amount of time to complete
the assignment.

Clear deadlines also help in managing staff workload. Some institutions view student anonymity as an important means of ensuring fairness in the marking process and
the ability to handle anonymous submissions can save time and complicated workarounds later in the process.

In using e-submission we try to make the process as easy as possible for students and avoid them having to make a journey to campus just for this purpose. We also try to
streamline administration making it readily possible to see who has and hasn't submitted, to undertake academic integrity checking and distribute the assignments to
markers.

How might we use technology at the submitting stage of the lifecycle and what are the benefits?
E-submission is rapidly becoming the norm. Features can include receipting, academic integrity checking and support for managing anonymity, distribution of work to
markers and the application of penalties for late submission.

This is the area of the lifecycle where the benefits of EMA for students are most widely understood and accepted. Benefits for students, staff and the institution include:

Convenience and time savings of not having to travel to hand in assignments

Avoidance of printing costs for students

Automatic proof of receipt and avoidance of anxiety about missing assignments in the postal system

Improved confidence provided by the privacy, safety and security of e-submission

Confidence of knowing work is backed up

Electronic reminders about deadlines and improved clarity about turnaround times for marking

Submission deadlines not constrained by office hours

A sense that this is simply normal practice in a digital age.

That is not however to say that institutions have already ironed out all of the issues around this area; technical, process, pedagogic and cultural issues do remain.

What are the common problems?


Commercial systems used for e-submission have limitations on the type and size of files that can be submitted. It can also be problematic for such systems to handle
group submissions eg, the outcomes of a joint project.

There are also limits on the type of assignment suitable for e-submission. Where the nature of the physical artefact is important, such as a piece of sculpture, a digital
representation may never be an acceptable alternative. Similarly some pieces of assessed work may be quite ephemeral eg, a dance performance or an oral examination.
Tip: Think carefully about where digital technology can help. Even if the assignment can't be submitted electronically do you need a digital record that submission took place or, in the case of presentation and
performance, would a digital recording help with marking and feedback?

Case study: digitising thought

Abertay University is keen to make the most of digital technology wherever it can and has adopted an innovative approach to assessing art and design work.

Professor Louis Natanson, head of the school of arts, media and computer games, told us that with a traditional portfolio a lot of the work of interpreting the student's
thought processes actually falls back on the lecturer who needs to try and make sense of what they are presented with.

By using a digital portfolio, the student is required to make decisions about how to present their work in the same way they would have to make decisions when deciding how
to display a range of paintings in a physical space. The process of thinking this through and documenting the thought process is what actually achieves the learning outcome.

There is often no need for the university to store the physical artefacts, meaning significant savings on the storage space needed for art and design subjects.

Ensuring that a submission is made in time for the assignment deadline can be stressful for students. E-submission avoids the need for artificial deadlines such as the time
when the departmental office closes.

However, greater flexibility has other implications, such as the need for technical support outside normal office hours. When submission systems become mission-critical,
any technical issues can have serious repercussions. System downtime beyond institutional control has been commonplace in recent years, and there are issues with
submissions timing out when students have a slow internet connection.

When technical problems occur and students are unclear about whether or not their submission has been successful, it is a natural instinct to resubmit (often multiple
times) which exacerbates server load problems.

Tip: Separate the physical act of submission from any other part of the workflow such as academic integrity checking or distribution to markers. E-submission systems require a holding area for submissions to be
received, acknowledgement sent to students, and the submission held in a cache until the next part of the workflow commences.
Distinguish between receipting and verification eg, an assignment may be received on time but not actually be a valid submission.
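The decoupling described in the tip can be sketched as a holding queue: submissions are received and receipted immediately, and later workflow steps (integrity checking, distribution to markers) drain the queue separately. All names here are illustrative, not any product's API:

```python
from collections import deque
from datetime import datetime

class SubmissionHoldingArea:
    """Decouple receipt of submissions from downstream processing."""

    def __init__(self):
        self.queue = deque()   # cache of received but unprocessed submissions
        self.receipts = {}     # student -> time of receipt

    def receive(self, student, payload):
        received_at = datetime.now()
        self.queue.append((student, payload, received_at))
        # Receipt confirms arrival only; validity is verified separately.
        self.receipts[student] = received_at
        return received_at

    def drain(self, process):
        """Later workflow step (eg integrity check) pulls from the cache."""
        processed = []
        while self.queue:
            processed.append(process(self.queue.popleft()))
        return processed

area = SubmissionHoldingArea()
area.receive("s101", "essay.docx")
checked = area.drain(lambda item: item[0])  # placeholder downstream step
```

The key design point is that a failure in integrity checking or marker distribution never prevents a student from submitting and getting a receipt.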
Submitting is the part of the lifecycle where any agreed extensions to deadlines need to be managed. It is also important to know about any extenuating circumstances.
This is largely a matter of institutional policy: some organisations believe they have clear policies but find that interpretation varies widely between different parts of the
institution.

The variability of approaches makes it difficult for system suppliers to build in functionality to apply coding for managing extensions and extenuating circumstances and/or
penalties for late submission.

Even when institutions do have a clear and consistent approach, they are often not able to change the product settings to match their policy. Human behaviour adds a
further dimension to the problem; predictably many students leave submission until the last possible moment, and a late submission may be recorded when a student
starts a submission at 23:59 for a midnight deadline but the upload does not complete until 00:01.

Tip: Develop a clear institutional policy on submission and examples of how this can be consistently applied in different scenarios. Follow consistent procedures in the event of system failure.
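One possible consistent rule (a policy choice to be agreed institutionally, not a universal standard) is to judge lateness by when the upload started rather than when it completed, so a slow connection does not penalise a student who began in time. A sketch:

```python
from datetime import datetime

def is_late(upload_started, upload_completed, deadline, grace_on_start=True):
    """Apply one consistent lateness rule.

    With grace_on_start=True, a submission whose upload began before the
    deadline counts as on time even if the transfer finished after it
    (eg started 23:59, completed 00:01). This is an illustrative policy,
    not the rule any particular system enforces.
    """
    reference = upload_started if grace_on_start else upload_completed
    return reference > deadline

deadline = datetime(2016, 3, 14, 0, 0)     # midnight deadline
started = datetime(2016, 3, 13, 23, 59)
finished = datetime(2016, 3, 14, 0, 1)
```

Whichever rule is chosen, the essential point is that it is written down, applied identically across the institution, and built into the system configuration.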
A need for student anonymity can cause issues from this stage in the process onwards. Sometimes the problem is due to human error eg, students inserting their own
name into the filename even though they have been requested not to do so. In other cases, where there is full anonymity, the problem is identifying which students have not
submitted or identifying students who have special needs or extenuating circumstances.

Maintaining anonymity can also be an issue in the later stages of the lifecycle.

Tip: Before deciding that anonymity is a requirement, be clear what purpose it serves and how to apply it consistently. For example there are some subjects such as performance art where anonymous assessment is
impractical.

In some cases staff training on good assessment design and avoiding unconscious bias may better serve your needs.
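Where anonymity is adopted, one illustrative approach is to derive a stable anonymous code from the student identifier plus a per-assignment secret: markers see only codes, while administrators can still resolve non-submitters or extenuating circumstances. This is a hypothetical sketch, not a description of any product:

```python
import hashlib

def anon_code(student_id, assignment_secret):
    """Derive a stable, non-identifying marking code for one assignment."""
    digest = hashlib.sha256(
        "{}:{}".format(assignment_secret, student_id).encode()
    )
    return digest.hexdigest()[:8].upper()

def non_submitters(enrolled, submitted_codes, assignment_secret):
    """Admin-side check that never exposes identities to markers."""
    return [s for s in enrolled
            if anon_code(s, assignment_secret) not in submitted_codes]

enrolled = ["s1", "s2"]
codes_received = {anon_code("s1", "week9-secret")}
missing = non_submitters(enrolled, codes_received, "week9-secret")
```

Using a per-assignment secret means codes cannot be correlated across assignments, which also limits the damage if a code is ever linked to a name.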

What resources can help?


The University of Oxford's why use online submission?
[http://jiscdesignstudio.pbworks.com/w/page/38552439/Cascade%20Why%20use%20online%20assignment%20submission] guide outlines the benefits of online
submission

Try the e-Assignment [http://www.southampton.ac.uk/assignments/] open source tool for the submission, marking and feedback of student work

Related themes
Assessment literacy [/guides/transforming-assessment-and-feedback/assessment-literacies] Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]

Marking and production of feedback


A stage of the assessment and feedback lifecycle

What does marking and production of feedback involve?


This is a key stage in the lifecycle when student work is formally evaluated against a set of predefined assessment criteria with marks and feedback provided.

Feedback and marking are separate entities and serve quite distinct purposes. In some cases assessments may be purely formative, with feedback given and no mark
associated. In this section we address the more complex scenario where an assignment has both marks and feedback. The importance of purely formative
feedback for students' long-term development must however be noted.

There may be multiple evaluators involved in assessing the work of a single student - the reasons for this are discussed further below. Here we look at a situation where
teaching staff provide feedback and marks and consider peer review and assessment as separate themes (also under the lifecycle section on reflecting).

There are many distinct tasks under this element of the lifecycle including distributing work to markers, marking itself, production of feedback and collation and verification
of marks.

It's possible to carry out all of these tasks electronically but in practice many institutions still use a combination of online, off-line and paper-based processes.

What are we trying to achieve?


The main purposes of this lifecycle stage are:

To give a piece of work a final grade in relation to the assessment criteria

To provide feedback to aid student longitudinal development.

The processes used to carry out these tasks are designed with two further objectives in mind:

To ensure that the work of each student is marked fairly

To ensure consistency of approach across different cohorts of students.

Particularly in HE, internal quality assurance processes will generally demand that marking and feedback is carried out in a certain way for a particular assignment. The
main features of such processes are:

Moderation to ensure a consistent approach across different markers

Double or second marking to give additional scrutiny to the work of individual students particularly for high stakes assignments.

There are four main models of marking and feedback in the UK. We have defined these models based on research undertaken at the University of Manchester and validated
with a wide range of HE providers as part of our EMA project [/rd/projects/electronic-management-of-assessment] . The models are:

Early moderation
Assignments are marked and marks and feedback recorded. A sample of assignments is then submitted to a moderator (or occasionally multiple moderators) who verifies
that marking is consistent across all of the different markers involved. Any anomalies are reconciled prior to feedback and marks being released to students.

The sample used is usually a percentage that should include any fails, firsts or borderline marks.

Late moderation
The process is as described above except that feedback (and possibly provisional marks as well) can be released to students prior to the moderation process taking place.

Double-blind marking
The work of each student is evaluated by two assessors, each working blind to the marks and feedback given by the other. The two markers can work in parallel
provided the technical settings of the marking tool allow for this. In this case the student will usually receive two marks and two sets of feedback on the same piece
of work.

Open second marking


As above, the work of each student is evaluated by two assessors but in this case the marking takes place sequentially with the role of the second marker being to validate
the work of the first. The student receives a single mark and set of feedback. In some cases the two markers will collaborate to agree the mark.

The choice of approach to marking generally depends on the overall value of the assignment and its weighting in relation to overall marks for a particular programme of
study. Some form of second marking and moderation can however be a staff development opportunity for new members of staff.
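The double-blind model in particular hinges on a visibility rule that not all tools support: neither marker may see the other's mark until both have marked. A minimal sketch of that rule (illustrative names, not a real system's API):

```python
# Sketch of the visibility rule behind double-blind marking: each marker
# records a mark without seeing the other's, and both marks are revealed
# only once every marker has finished.

class DoubleBlindMarking:
    def __init__(self, markers):
        self.markers = set(markers)
        self.marks = {}

    def record(self, marker, mark):
        if marker not in self.markers:
            raise ValueError("unknown marker")
        if marker in self.marks:
            raise ValueError("already marked")
        self.marks[marker] = mark

    def visible_to(self, marker):
        """Before completion a marker sees only their own mark."""
        if len(self.marks) < len(self.markers):
            return {m: v for m, v in self.marks.items() if m == marker}
        return dict(self.marks)
```

Open second marking would relax the rule (the second marker sees the first's work), while moderation models apply a similar gate only to the sampled scripts.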

How might we use technology at the marking and production of feedback stage of the lifecycle and
what are the benefits?
Technology can support all aspects of marking and feedback (often termed e-marking and e-feedback). Marking can take place online and increasingly off-line and
feedback can be provided in digital formats including text, audio and video. The benefits of e-marking and e-feedback are most evident for academic staff and include:

Convenience of not collecting and carrying large quantities of paper

Convenience of electronic filing

Security of having work backed up on an online system

Ability to moderate marks without having to physically exchange paper

Increased speed and efficiency of being able to reuse common comments

Improved morale through not having to write out repeated comments

Convenience of undertaking originality checking in the same environment as marking

Improved clarity of marking and feedback and the ability to include lengthy comments at the appropriate point in the text

Improved consistency of marking

Ability to add audio comments and annotations as well as typed comments

Ability to give qualitatively different feedback in different media eg, audio feedback on intonation and pronunciation for language programmes.

The benefits for students are also significant, and there is considerable student demand for e-feedback. They include:

Improved clarity, particularly not having to decipher handwriting


Improved privacy as compared to having paper assignments distributed in pigeonholes

Easy storage increasing the likelihood that feedback will be reviewed at a later date

Improved quality of feedback as tutors spend less time repeating common comments and concentrate more on the individual aspects of the assignment

What are the common problems?


Our research found this to be the most problematic component of the lifecycle as it is the area where the fit between institutional processes and the functionality of
commercially available systems is least well matched.

Marks and feedback are different entities and need to be handled differently, but technology platforms tend to conflate the two. Additionally, most commercial systems don't
provide functionality to meet the needs of each of the different roles in common marking and moderation processes. In practice this causes the following difficulties:

Inability to release feedback to students separately from their marks

Inability to support blind marking ie to hide the first marker's comments from a second marker

Risk of second markers and external examiners overwriting or deleting comments made by an earlier marker

Difficulties in recording decisions taken during the moderation process eg, the mark before and after moderation and the reason for any change

Tip: Ensure your marking and feedback processes are not over-complicated. Use our workflow models to compare your own practice to sector norms and cut out any unnecessary steps. Use the model and system
specification to discuss your requirements with your system supplier to inform their future development plans.
Despite the benefits of e-marking and e-feedback, institutions often defer to the preferences of individual academics when it comes to how they carry out the task. This
means institutions end up supporting academics who use a variety of different tools for marking and feedback, as well as those who still work on paper.

There are some general issues around the ability of systems to deal with mathematical and scientific or musical notation but otherwise the issue is really down to personal
preference as to whether or not tutors like to mark on screen.

For those who are prepared to undertake e-marking there is also a distinction between online and off-line marking, with the former currently better supported by
systems than the latter. Some staff prefer familiar tools such as Microsoft Word but find the lack of integration with other EMA systems a drawback eg, the need to return
work to each student separately, involving an email per student and compromising anonymity.

Getting used to online marking tools may take a while but there is good evidence that integrated tools save time on the overall marking process. Deciding which tools to use
is not necessarily a straightforward matter if the institution supports multiple tools, as the attractiveness of each may vary according to the type of assignment and
marking process.

Tip: Emphasising the benefits of online marking and getting internal champions to share their experiences can be an effective way of bringing more staff on board without needing to be strongly directive.
Whilst a lot of effort is expended on comparing and moderating marks, it is less common for programme teams to discuss the feedback given to students. Consequently
approaches can vary greatly, with some feedback being much more comprehensive and useful than others.

Feedback can take many forms. Praise and feedback on content is less effective in the long term than feedback on skills and self-regulatory abilities. The latter are more
likely to develop autonomy in learning and an ability to make evaluative judgements without the support of a teacher. Clarifying what purpose feedback is expected to serve
and analysing tutor feedback therefore needs to become normal practice for academic staff.

Tip: Our feedback and feed forward guide [/guides/feedback-and-feed-forward] provides more ideas on giving effective feedback and tools for monitoring and comparing feedback.
Giving feedback can be very time consuming for academic staff and some staff may question whether the time spent is worthwhile if they are not confident that students
are using and acting upon the feedback. See our section on reflecting [/guides/transforming-assessment-and-feedback/reflecting] for more help on this topic.

Tip: Make use of time saving tools such as comment banks to avoid repeating frequently used comments. Try giving audio feedback if you type slowly - this can save time and be more meaningful and personal.
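The comment bank idea mentioned in the tip above can be sketched in a few lines of Python. This is purely illustrative: the comment codes, their wording and the `expand_comments` helper are invented here, and real EMA tools provide equivalent functionality built in.

```python
# A minimal comment bank: reusable feedback snippets keyed by short codes.
# The codes and wording are invented for illustration only.
COMMENT_BANK = {
    "REF": "Check your referencing style against the departmental guide.",
    "STRUCT": "Consider signposting the argument with clearer topic sentences.",
    "EVID": "This claim needs supporting evidence from the literature.",
}

def expand_comments(codes, personal_note=""):
    """Turn a list of comment codes into a single piece of feedback text."""
    lines = [COMMENT_BANK[c] for c in codes if c in COMMENT_BANK]
    if personal_note:
        lines.append(personal_note)  # individual comment appended at the end
    return "\n".join(lines)

# Reused comments plus one personalised remark for this student
feedback = expand_comments(["STRUCT", "EVID"], "Good use of case studies.")
```

The point of the design is that the repetitive wording is written once and reviewed for quality, leaving the marker's time free for the individual remark.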

What resources can help?


A report from the University of Huddersfield looks at staff and student attitudes to EMA
[http://jiscdesignstudio.pbworks.com/w/file/66830875/EBEAM%20Project%20report.pdf] and describes the strategies used to effect large scale change

Read our case study on embedding electronic assessment management [http://repository.jisc.ac.uk/5595/3/e-affect.pdf] at Queen’s University Belfast

Electronic Feedback is a marking assistant developed by Philip Denton of Liverpool John Moores University. The application uses MS Excel and Word to generate
student reports in print and email messages. To obtain a copy of this free tool contact p.denton@ljmu.ac.uk [mailto:p.denton@ljmu.ac.uk]

Read Manchester Metropolitan University's suggestions for trying something new with feedback [http://www.celt.mmu.ac.uk/feedback/try.php?tryid=6]

The Sounds Good project explored using digital audio to give student feedback [http://docs.google.com/viewer?
a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxzb3VuZHNnb29kdWt8Z3g6M2ZhNTYxZDU5MjM5ZmZiOA]

Read the University of Oxford's guide on how to create audio feedback [https://weblearn.ox.ac.uk/access/content/group/info/howto/Audio_feedback.pdf]

The universities of Reading and Plymouth have a useful website based on their experiences with video feedback [http://www.reading.ac.uk/videofeedback/]

Related themes
Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] Marking practice [/guides/transforming-assessment-and-feedback/marking-practice]

Peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] Peer review [/guides/transforming-assessment-and-feedback/peer-review]

Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance]

Recording grades
A stage of the assessment and feedback lifecycle
Students should be able to see how marks are arrived at in relation to the criteria, so as to understand the criteria better in future. They should be able to understand why the grade they
got is not lower or higher than it actually is.

One way to do this is to use the sentence stems: “You got a better grade than you might have done because you ...” and “To have got one grade higher you would have had to ...”.
Transforming the Experience of Students through Assessment (TESTA)
What does recording grades involve?
The assessment and feedback life cycle (adapted from an original by Manchester Metropolitan
University)

CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]

The end point of this stage is the culmination of marking and moderation processes - a single grade is recorded against each piece of work. In practice this can involve a
number of separate tasks such as collating the marks from different assessors who may be marking in various tools and/or on paper; profiling the marks in order to identify
a sample to be moderated; reconciling anomalies and formally approving the marks via some form of board.
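As a sketch of the collation and anomaly-flagging tasks described above, the following fragment collates two markers' marks and flags large discrepancies for moderation. The student IDs, marks, averaging rule and 10-point tolerance are all invented for illustration; the actual rules are set by institutional regulations.

```python
# Collate two markers' marks per student and flag any pair whose difference
# exceeds a tolerance, so those scripts can be sampled for moderation.
# Data and the tolerance value are illustrative only.
first_marks = {"s001": 62, "s002": 48, "s003": 71}
second_marks = {"s001": 65, "s002": 60, "s003": 70}
TOLERANCE = 10

collated = {}
for sid in first_marks:
    m1, m2 = first_marks[sid], second_marks[sid]
    collated[sid] = {
        "agreed": (m1 + m2) / 2,               # simple averaging, one possible rule
        "moderate": abs(m1 - m2) > TOLERANCE,  # flag anomalies for review
    }
```

Holding the marks digitally in one structure like this is what makes the profiling step (selecting the moderation sample) a query rather than a manual trawl through spreadsheets and paper.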

There are therefore a lot of iterative relationships with the marking and production of feedback [/guides/transforming-assessment-and-feedback/feedback-production] stage.

Institutional regulations will determine who records the grade, how this is verified and in which system it is stored. However, in most cases, the student record system is the
definitive source of grading information.

What are we trying to achieve?


The purpose of this stage is to give each summative assignment a definitive grade that describes how well the student has met the criteria for the assignment and hence
the learning outcomes.

In order to do this there has to be quality assurance of the original marking process and a procedure for reconciling any anomalies.

How might we use technology at the recording grades stage of the lifecycle and what are the
benefits?
Ideally there would be a seamless workflow whereby the work of each marker would be picked up from the marking tool they used, submitted for profiling and analysis and
then transferred to the system that records the final mark along with any associated audit trail.

In practice most institutions are still a long way from achieving this and there is a lot of manual intervention in most cases. Some institutions have however developed their
own marks recording systems. EMA can provide the following benefits:

Recording marks in digital format can help avoid transcription errors

Storing marks in digital format can make collation and profiling easier even when a number of different systems are involved

An online audit trail makes quality assurance processes easier.

What are the common problems?


As academics mark using many different tools (including paper), marks often have to be transcribed from one system or medium to another, and
transcription errors such as mixing up ones and sevens are extremely common.

The problems of manual intervention are often exacerbated by academics who don't trust that central systems can be edited as needed, and so keep marks elsewhere 'under
their control' until all adjustments have been made and marks have been verified. In many cases the moderation process is carried out on shared drives and by exchanging
emails back and forth.

Tip: Review the workflows in your institution and identify the most effective process, bearing in mind the tools that you have available. Promote the benefits of adhering to the standard process and using central
information systems.

See our guide on process improvement [/guides/process-improvement] for further guidance.


In most cases it is insufficient to simply record a mark and know that it has been adjusted as a result of moderation: there needs to be an audit trail of the actual marks
before and after moderation and the reason for the change. Currently this is a weakness in EMA systems.

Tip: When reviewing workflows bear in mind the need for an audit trail; compliance is another driver behind the need for centralised information sources.
The ways in which systems record and store marks can cause issues for many institutions whose grading schemes do not match the way the software is configured eg, an
institution may have a letter grading scheme whereas its IT systems can only support percentage marks. There are also concerns about the rounding of numeric marks
and the possibility that double rounding of marks in different systems can give an inaccurate result.
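The double-rounding concern can be demonstrated with a small worked example. The mark value is invented, and Python's `decimal` module is used so that the rounding rule is explicit rather than subject to floating-point behaviour:

```python
from decimal import Decimal, ROUND_HALF_UP

raw = Decimal("67.46")  # illustrative raw mark

# One system rounds to one decimal place, then a second system
# rounds that stored value to a whole number.
step1 = raw.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)           # 67.5
double_rounded = step1.quantize(Decimal("1"), rounding=ROUND_HALF_UP)  # 68

# Rounding once from the raw mark gives a different result.
single_rounded = raw.quantize(Decimal("1"), rounding=ROUND_HALF_UP)    # 67
```

Here the two-step route pushes 67.46 up to 68, whereas rounding once gives 67 - potentially a grade boundary difference, which is why marks should be passed between systems at full precision and rounded only once.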

Tip: Determine your grading policy by academic requirements not system capabilities. Use our system specification to discuss your requirements with your system supplier and to inform their future development plans.

What resources can help?


Read our briefing paper on the Learning Tools Interoperability [http://publications.cetis.ac.uk/wp-content/uploads/2012/05/LTI-Briefing-Paper.pdf] specification - a way of
seamlessly connecting learning applications and remote content to virtual learning environments.

Related theme
Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance]

Returning marks and feedback


A stage of the assessment and feedback lifecycle
What does returning marks and feedback involve?
The assessment and feedback life cycle (adapted from an original by Manchester Metropolitan
University)

CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]

This stage informs students about the outcomes of an assessed piece of work. Marks and feedback can either be returned together or separately, and marks may be
provisional until confirmed by some form of academic board.

Marks and feedback may be returned in a variety of formats depending on the nature of the assignment and how it was submitted for assessment. Formats range from
handwritten comments on scripts, through digital formats including audio and video feedback, to ephemeral forms such as verbal feedback.

The process may be fully automated with information being delivered to students on a specified deadline after the submission date. Alternatively it may be entirely manual:
there remain instances where students can only obtain feedback on demand by making an appointment with their tutor.

What are we trying to achieve?


It may be self-evident that students must be informed about the outcomes of assessed work but it is equally important that they are given information about their strengths
and weaknesses so that they can improve their performance for future assignments.

This stage, however, is about returning marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] , which doesn't necessarily mean that
students understand and act upon the outcomes. We discuss this further in the section on reflecting [/guides/transforming-assessment-and-feedback/reflecting] .

How might we use technology at this stage of the lifecycle and what are the benefits?
Technology can be used to post marks and feedback direct to individual students without the need for manual intervention. This can provide the following benefits:

Reduced workload as compared to distributing scripts or sending marks by other means such as email

Guarantee that marks and feedback will be available on the stated deadline

No need for students to physically collect marked assignments

Ability to push the information direct to students or at least to alert them that it is available for viewing

Evidence that students make more use of feedback that is stored electronically.

What are the common problems?


The return of marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] is a key issue for students, and the timeliness of feedback is a
major source of dissatisfaction. The issue is more about clarity than about absolute deadlines: a number of institutions have told us students don't really mind whether the
deadline is 20 days or 30 days as long as there is clarity.

In the case of feedback, you need to bear in mind that it will only be useful if it is received in time to have an impact on subsequent assignments (see the section on setting
[/guides/transforming-assessment-and-feedback/setting] ). Conversely, students not collecting or viewing feedback is a major concern for staff, and investigation has shown
that this can sometimes be due to a lack of clarity that feedback is available for students to view.

The problem may exist even when feedback is available electronically as not all systems have an automated alert facility to notify students that the feedback is ready.

Tip: Outline your approach to feedback and the deadline by which it will be available to students at the setting stage and ensure this is made clear.
A common concern voiced by academic staff is that students are heavily focused on marks and grades and often ignore feedback altogether. The converse argument
voiced by students is that the feedback is often received too late to be useful.

The disaggregation of marks and feedback can address both of these issues. It allows feedback to be released while marks are still undergoing moderation so that
feedback is more timely. Students may be required to give some evidence of having at least viewed the feedback before they are allowed to see their mark.

Where such approaches have been implemented they have been shown to be of value but barriers remain in many cases. For example, one issue is the inability of systems
to support separate release of marks and feedback - such an approach often necessitates amendments to academic regulations.

Tip: Consider disaggregating the release of marks and feedback and think about how you might require students to show evidence of having engaged with the feedback before they see their mark.
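Where a platform doesn't support this natively, the underlying 'view feedback before mark' logic amounts to a simple gate, sketched below. The class and method names are invented for illustration and are not part of any particular EMA system.

```python
# A toy model of adaptive release: the mark stays hidden until the
# student has opened their feedback. Names are illustrative only.
class AssignmentResult:
    def __init__(self, mark, feedback):
        self._mark = mark
        self._feedback = feedback
        self.feedback_viewed = False

    def view_feedback(self):
        """Record that the student has engaged with the feedback."""
        self.feedback_viewed = True
        return self._feedback

    def view_mark(self):
        """Release the mark only after the feedback has been viewed."""
        if not self.feedback_viewed:
            raise PermissionError("Please read your feedback first.")
        return self._mark

result = AssignmentResult(64, "Strong analysis; referencing needs work.")
```

In a real system the "feedback viewed" event might instead be a timestamp or a short student response, giving staff evidence of engagement as well as gating the mark.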

Case study: activities to encourage reflection on feedback
Manchester Metropolitan University has the following suggestions for getting students to reflect on feedback:
Idea one

Separate marks and feedback so that you give back the feedback one week and the mark the next.

Ask students to reflect on the interpretation of feedback by getting them to predict the mark they got from the feedback you’ve given. You could even offer them a bonus
mark, say five per cent for an accurate prediction. You need to make rules for the attribution of the extra marks eg, the date by which the predicted mark has to be submitted
and the degree of precision required.

A final activity could be to have a brief class discussion about why the predictions were or were not accurate. You may think that this is taking valuable classroom time away
from the programme content. However by helping students to engage with the outputs of the unit as well as the input, you will help them to improve their understanding and
performance.

Idea two
If you have two assignments in the same unit, try using some of the marks available in the second assignment to reward students who show how they have acted on the
feedback in the first assignment.

This could involve asking them to provide a simple statement at the end of the assignment which explains what they did in response to the feedback and indicates where the
evidence for improvement can be found in the second submission. Any marks you give for this should be based on the quality of the statement rather than the improvement itself,
which will be marked anyway as part of the second assignment.

The format of feedback has a considerable impact in terms of how easy it is for students to use. Handwriting is notoriously difficult to decipher (and there are also privacy
concerns around hard copy assignments being left in pigeonholes to be collected). Many students still say that they value hard copy feedback, but few of them have organised
approaches to its storage and retrieval, meaning they are likely to look at it once and then never go back to it.

It is generally the case that students find electronic feedback easier to store and retrieve, and they are therefore more likely to look at it again.

Tip: Provide all feedback in digital format. There is good evidence to show this greatly increases the number of students who actually look at and make effective use of feedback.
Tip: If your system doesn't support disaggregation of marks and feedback, use a workaround, eg not filling in the mark field and then emailing the mark later once students have reviewed the feedback.

What resources can help?


Sheffield Hallam University's Technology, Feedback, Action!
[http://evidencenet.pbworks.com/w/page/19383525/Technology%2C%20Feedback%2C%20Action%21%3A%20Impact%20of%20Learning%20Technology%20on%20Stu
project tried ‘adaptive release’ where students engaged with their feedback before receiving their grade

Glasgow College developed an application called Examview [http://www.rsc-scotland.org/?p=551] to pull assessment marks from the student record system direct to
the VLE avoiding the need for duplicate mark entry

Sheffield Hallam University has a guide to achieving the three week turnaround [http://academic.shu.ac.uk/assessmentessentials/wp-
content/uploads/2015/09/Achieving-the-3-Week-Turnaround.pdf]

Related theme
Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]

Reflecting
A stage of the assessment and feedback lifecycle

What does reflecting involve?


The assessment and feedback life cycle (adapted from an original by Manchester Metropolitan
University)

CC BY-NC-SA [http://creativecommons.org/licenses/by-nc-sa/3.0]

This is one of the most important stages of the lifecycle. Real student learning takes place through an iterative process of reflecting on how progress matches against
learning outcomes. It is also where staff review the outcomes of various assignments in order to continuously improve curriculum design and delivery.

What are we trying to achieve?


The aim of this stage is to ensure that students engage with their feedback and use it to improve their future performance.

In a similar vein academic staff need to engage with student feedback and statistics on the performance of particular cohorts.

How might we use technology at the reflecting stage of the lifecycle and what are the benefits?
Technology can be used to store feedback and make it accessible to students and staff. We can also use it to support self-reflection on a portfolio of work and dialogue around
feedback, whether this is staff/student dialogue or peer to peer dialogue.

This can provide the following benefits:

E-feedback improves the quality of feedback and consequently the self-dependency of learners

Peer review activities help students understand the process of making academic judgements

Peer review activities can reduce staff workload

E-portfolios can aid self-reflection and be used to present student skills to future employers

Digital feedback permits forms of auditing and analysis that can support staff development planning

Digitally available marks and feedback are prerequisites for various forms of learning analytics including assessment analytics.
What are the common problems?
It can be difficult to gain an overview of student feedback to support long term development. Feedback on individual assignments is generally stored at module level and it
is difficult for students and tutors alike to get an overall view of feedback across a particular student's programme of study.

This is problematic for personal tutors who need to understand how students perform across a range of units but may not teach on any of those units and don't have
access to any of the marks or feedback.

There is good research evidence to show that an effective combination of self-reflection and peer review may make the biggest difference to student learning and future
employability (see our sections on peer review [/guides/transforming-assessment-and-feedback/peer-review] and student self-assessment and reflection
[/guides/transforming-assessment-and-feedback/self-reflection] ).

In spite of this, peer review activities are unfamiliar to many students and they can be uncomfortable with the approach. This is partly due to the notion that formal
education is about learning from experts, and students often don't value working with peers.

Tip: Ensure that both student induction and the supporting stage of the lifecycle emphasise the benefits of peer learning to students, in particular developing critical thinking and communication skills and their relevance
to the world of work.
Academic staff often have concerns that giving better feedback and, more specifically, engaging in dialogue around feedback necessarily means more work. Our evidence
suggests that where e-marking and e-feedback tools are used effectively, time is saved on more routine and repetitive elements of the process and this time can be used
for giving better quality feedback and engaging in dialogue.

The use of generic feedback on common issues can save repeating the same comments many times.

Tip: When thinking about effectiveness, consider the overall workflow. Allow staff time to familiarise themselves with time-saving tools when evaluating them, before measuring hours spent.
Make full use of functions such as comment banks to save repeating frequently used phrases.

What resources can help?


Our video on reflecting on feedback [https://www.youtube.com/watch?v=RU34b6ahqJY] shows how the University of Westminster improved student engagement
with feedback by encouraging self-reflection.

The University of Westminster's making assessment count project emphasised student self reflection
[http://jiscdesignstudio.pbworks.com/w/page/23495173/Making%20Assessment%20Count%20Project] . Supporting resources include a project report
[http://www.jisc.ac.uk/media/documents/programmes/curriculumdelivery/mac_final_reportV5.pdf] and the Feedback+
[https://sites.google.com/a/my.westminster.ac.uk/feedback-plus/home] tool.

The University of Dundee interACT [http://jiscdesignstudio.pbworks.com/w/page/50671082/InterACT%20Project] project placed great emphasis on creating the
conditions for dialogue around feedback and has produced a range of resources to help others

Read our case study on using technology to promote feedback dialogue at the University of Dundee

Sheffield Hallam University's 'Technology, Feedback, Action!'
[http://evidencenet.pbworks.com/w/page/19383525/Technology%2C%20Feedback%2C%20Action%21%3A%20Impact%20of%20Learning%20Technology%20on%20Stu
project generated a range of resources to help others improve student engagement with feedback.

Related themes
Assessment design [/guides/transforming-assessment-and-feedback/assessment-design] Developing academic practice [/guides/transforming-assessment-and-feedback/academic-practice]

Employability and assessment [/guides/transforming-assessment-and-feedback/employability] Feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback]

Peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] Peer review [/guides/transforming-assessment-and-feedback/peer-review]

Quality assurance and standards [/guides/transforming-assessment-and-feedback/quality-assurance] Student self-reflection [/guides/transforming-assessment-and-feedback/self-reflection]

Work-based assessment [/guides/transforming-assessment-and-feedback/work-based-assessment]

Assessment and feedback themes


The assessment themes look at topics that cut across many stages of the lifecycle [/guides/transforming-assessment-and-feedback/lifecycle] . If you have an academic
or staff development role, you may find that this is the most helpful route into the resources for you.

For each theme we discuss why it is important, what the common problems are and how applying technology might help. We also relate each theme back to the different
stages of the lifecycle so you can see exactly what you need to be thinking about and doing at each stage of the process.

We point to case studies of good practice and a range of free tools that you can use and adapt in your own context.

Assessment design
"Good assessments create a good educational experience, set out high expectations, foster appropriate study behaviours and stimulate students’ inquisitiveness, motivation and
interest for learning."
University of Hertfordshire

Why is assessment design important?


Assessment and feedback is the cornerstone of formal education and forms a significant part of both academic and administrative workload.

Survey data such as that from the National Student Survey (NSS) regularly shows that students are less satisfied with assessment and feedback than with any other aspect
of the HE experience. Good assessment design is at the heart of improving this aspect of the learning experience and achieving better learning outcomes overall.

Good design should make the assessment experience inspiring and motivating for both students and staff. It should create a positive climate that encourages interaction
and dialogue. Assessment should appear relevant and authentic and, wherever possible, allow students to draw on their personal experience and to exercise choice with
regard to topics, format and timing of assessment.

There should be effective mechanisms for generating high quality feedback and ensuring that learners understand and act on feedback. Reflective skills should be
developed that help students direct and regulate their own learning and support the learning of their peers.
Characteristics of a learning environment that supports assessment for learning
[https://www.plymouth.ac.uk/uploads/production/document/path/2/2729/RethinkingFeedbackInHigher

©Kay Sambell, all rights reserved

What are the common problems?


In a modular curriculum each module tends to be assessed separately. This can lead to a range of problems:

The assessment methods used in short modules are limited


They don't really allow for formative approaches and the assessment of learning outcomes that focus on slowly-developed, complex, and high order skills and
understanding.

Students concentrate on each module independently


They tick off each module in turn and fail to see the links between them. Staff can be similarly focused on their own module responsibilities.

We tend to over-assess


This causes issues of staff and student workload. Where mapping exercises are carried out at programme level, they often reveal that some learning outcomes are assessed multiple times whilst others are
missed completely.

Strong focus on traditional assessment formats


These approaches, such as essays and unseen exams, are not necessarily the best means of testing desired learning outcomes and developing the skills that students
need for their future working lives (see our section on employability and assessment).

Students don't understand what is expected of them


Simply telling them about criteria and standards is not enough; the overall assessment design must offer effective opportunities to engage with the criteria and
opportunities to practise different activities before they are assessed (see also our section on assessment patterning and scheduling).

Feedback mechanisms are often ineffective


Feedback may arrive too late to be useful and it may be short term and corrective rather than addressing skills development that is relevant to the overall learning outcomes
of the course (see our section on feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] ).

Reflective skills are under developed


Research has shown that developing reflective skills, particularly using peer review, is one of the best ways of enhancing learning but such techniques are still underused
(see our sections on peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] , peer review [/guides/transforming-assessment-and-
feedback/peer-review] and student self-reflection [/guides/transforming-assessment-and-feedback/self-reflection] ).

How might we use technology to support assessment design and what are the benefits?
Curriculum management systems can help give an overview of assessment forms and patterns across a range of modules in order to aid programme focused
assessment. This ensures that the desired learning outcomes of the overall study programme are effectively addressed.

Online assessment briefs and grading criteria make it easy for students to find information about what learning outcomes are being assessed, how they will be assessed
and what the standards are.

Technology can help to ensure parity and fairness of assessment by providing alternative formats of information and other support for students with a disability (see our
section on inclusive assessment) [/guides/transforming-assessment-and-feedback/inclusive-assessment] . It can also provide students with a choice of formats to
deliver an assignment. Assessment formats that are novel and interesting encourage creativity, inquisitive enquiry and participation.

Generating feedback in digital formats can speed up the process and make it more usable by students and easier to store and refer to in the future.

Technology can also support peer review and assessment, and is usually necessary to enable the use of such techniques with large class sizes.

How does assessment design relate to the assessment and feedback lifecycle?
It relates closely to where you are specifying [/guides/transforming-assessment-and-feedback/specifying] the overall assessment strategy for the programme of study.
You also need to refer to these intentions at the setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-
feedback/supporting] stages to ensure that you effectively implement your intended approach for each cohort of students.

Because the lifecycle is an iterative process, at each stage of reflecting [/guides/transforming-assessment-and-feedback/reflecting] you consider whether to make any
changes to assessment design for future instances of delivery.

Taking a principled approach to assessment and feedback practice

An approach that many Jisc projects have found central to improving assessment and feedback practice is defining the educational principles that underpin assessment and feedback in their institution.

By defining shared educational values, academics, learning technologists and those responsible for quality assurance and administration have worked together to look at
whether their principles are genuinely reflected in practice. Where improvement is required they have moved forward on the basis of a shared understanding of what is
fundamentally important.
In a short guide, Why use assessment and feedback principles? [http://www.reap.ac.uk/TheoryPractice/Principles.aspx] Professor David Nicol highlights the fact that they
can:

Help put important ideas into operation through strategy and policy
Provide a common language

Provide a reference point for evaluating change in the quality of educational provision
Summarise and simplify the research evidence for those who don't have time to read all the research literature.
Principles need to be written in a way that requires action rather than passive acceptance if they are to effect change. You also need to bear in mind that generic principles
can be interpreted in various ways. For example, the principle ‘help clarify what good performance is’ can be implemented in ways that are teacher-centric or in ways that
actively engage students.

A starting point for many institutions has been to review the well-known re-engineering assessment practices (REAP) [http://www.reap.ac.uk/] principles from the University
of Strathclyde.

Assessment for learning principles
(adapted from those used at the University of Hertfordshire)

Good practice in assessment for learning:

Engages students with the assessment criteria


Assessment should be used to help reinforce expected standards. The criteria should be communicated in clear, easily accessible language, free of jargon, which students
can understand, whilst setting high expectations of the learner. Interactions should help students engage with the assessment criteria.

Supports personalised learning


Students each have individual needs, motivation and interests. In order to provide all students with opportunities to demonstrate their learning, the profile of assessments
must anticipate the needs of a diverse student body. Good practice ensures that students have variety in the assessments and where possible gives them some individual
choice. For example in the topic of an assessment or in the method/format of the assessment.

Ensures feedback leads to improvement


Feedback is an essential aspect of assessment. For feedback to be effective it needs to be prompt and make sense to the students so that they can develop their learning
and feed this into their future practice. Good feedback provides a commentary on the student’s submission, offers advice on how work could be developed and provides
opportunities for students to demonstrably engage with the feedback. Feedback should also feed forward.

Focuses on student development


Assessment has a significant influence on student motivation and the ways in which students approach their learning. Timely, meaningful assessment develops the students’
interests, motivations and encourages them to engage in their study to meet the learning outcomes.

Well planned assessment helps students to reflect on their own learning and to self-assess. Assessments should encourage effective learning behaviours, ie deep not surface learning, understanding not just memorisation. These include spending appropriate time on tasks, with effort spread across topics and weeks, and making links across knowledge domains.

Assessment should encourage positive beliefs and build student confidence.

Stimulates dialogue
Good assessment supports the development of a learning community and provides opportunities for students to engage in dialogue about their learning. Teachers should
also have an opportunity to engage in dialogue with students and colleagues to help them shape their teaching and engage in staff, module and programme development.

Considers staff and student effort


Good assessment should distribute student effort across the study-period and topic areas and demand an appropriate amount of student effort. Good assessment design
will not overload students or teachers and will ensure there is adequate time for teachers to create and deliver feedback in ways that support student learning.

Designing programme or module assessment in three easy stages
(adapted from understanding assessment [http://www.qaa.ac.uk/en/Publications/Documents/understanding-assessment.pdf] by the Quality Assurance Agency (QAA))

Stage one
Decide on the intended learning outcomes. What should the students be able to do on completion of the course, and what underpinning knowledge and understanding will
they need in order to do it that they could not do when they started?

Stage two
Devise the assessment task(s). If you have written precise learning outcomes this should be easy, because the assessment simply tests whether or not students can satisfactorily
demonstrate achievement of the outcomes.

Stage three
Devise the learning activities necessary (including formative assessment tasks) to enable the students to satisfactorily undertake the assessment task(s). These stages
should be conducted iteratively, with each stage informing the others to ensure coherence.

The likelihood that more than one iteration might occur reflects the need to ensure what is sometimes referred to as 'alignment' between the learning outcomes at
programme level and those at module level; in other words to ensure that the learning outcomes at programme level are actually being addressed through the combination of
modules.

Our video outlines how the University of Strathclyde implemented new models of assessment practice:

What resources can help?


The University of Ulster's Viewpoints staff development materials [http://wiki.ulster.ac.uk/display/VPR/Home] aid curriculum design with an emphasis on
assessment and feedback.
Read our case study on transforming assessment and feedback practice [http://repository.jisc.ac.uk/5598/3/viewpoints.pdf] at Harper Adams University and Cardiff
Metropolitan University.

The University of Hertfordshire's guidance outlines how to apply assessment for learning principles
[http://jiscdesignstudio.pbworks.com/w/file/fetch/68646815/ITEAM%20UH%20Assessment%20Principles%20and%20Guidance%20August%202013.pdf] to
assessment design. Their activity cards can help with designing assessment for learning
[http://jiscdesignstudio.pbworks.com/w/file/68762994/ITEAM%20AfL%20activity%20cards.docx] .

The Quality Assurance Agency (QAA)'s guide gives recommendations on how to implement institutional change in assessment and feedback practices
[http://www.enhancementthemes.ac.uk/docs/publications/transforming-assessment-and-feedback.pdf?sfvrsn=12] .

Read the study on the influence of disciplinary assessment patterns on student learning [https://www.tandfonline.com/doi/pdf/10.1080/03075079.2014.943170] .

The University of Bradford's programme assessment strategies project generated a short guide on programme focused assessment
[http://www.pass.brad.ac.uk/short-guide.pdf] and a series of accompanying case studies [http://www.pass.brad.ac.uk/case-studies.php]

Oxford Brookes University's guide outlines how to take a social constructivist approach to assessment in three easy steps (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552649]

Birmingham City University's assessment design checklist [http://repository.jisc.ac.uk/6194/1/BCU_Assessment_Checklist_and_guide.pdf] is a useful tool and
reference guide.

[1] Rust, C., O’Donovan, B. & Price, M. (2005), ‘A social constructivist assessment process model: how the research literature shows us this could be best practice’,
Assessment and Evaluation in Higher Education, Vol. 30, No. 3, pp. 233-241

Assessing group work


Why is assessing group work important?
Group learning techniques using approaches such as collaborative, enquiry-based and problem-based learning can be pedagogically effective. Working in groups also helps
students develop a range of transferable skills that are useful in the world of work (see also the section on employability and assessment).

In assessment terms evaluating group assignments can save academic staff time depending on the approach taken. Establishing a fair and appropriate means of
allocating marks for group assessment can however prove challenging. It's possible to allocate a single mark to a whole group which may often mirror working life where a
whole team shares in the success or failure of a project.

Alternatively you may choose to allocate individual marks based on the contribution of each student. Other approaches include a combination of the two ie, an overall mark
for the group with a certain percentage allocated for individual contributions.

Finally, although this may counter the concept of group work, you could set each student an individual piece of work based on the topic addressed by the group.

In choosing the best solution you need to think about what learning outcomes are being addressed. If the process of arriving at and/or presenting the final outcome is
important then you are unlikely to adequately assess the learning outcomes by simply looking at the finished product.

If it's important to understand the dynamics of how the group worked and what each individual contributed, some form of self or peer review can be helpful.

What are the common problems?


"If students understand why group work is being used, understand the assessment system, are collaborative and ethical in their behaviour and possess sophisticated group work skills,
then only minimal assessment mechanisms may be necessary as safeguards. In the end it is the creation of a healthy learning milieu that can contribute most to solving group work
assessment problems."
Professor Graham Gibbs

Students may be unfamiliar with group work and therefore feel anxious about it. This problem can be exacerbated by the fact that group working is best suited to longer,
more complex assignments which may account for a significant percentage of the overall marks.

You will need to consider the issue during assessment design and when thinking about assessment patterning and scheduling [/guides/transforming-assessment-and-
feedback/pattern-and-scheduling] . This will ensure that students have sufficient practice at component tasks before they undertake a high-stakes assignment.

Students often resent approaches which allocate a single mark to the whole group. In particular stronger students can feel they have been let down by the weaknesses of
others. Allocating individual marks is a way of getting round this but issues of fairness can still arise eg, timid students who do a lot of research may not get full recognition
if the final outcome is an oral presentation.

Cultural issues can be a barrier to group working. In some cases, multicultural groups may be a real asset in achieving learning outcomes particularly when requiring
students to confront situations they may face in a working environment.

However it is likely that multicultural groups will take longer to form effective communication channels and working relationships so you will need to factor this in. Research
suggests that a period of about four months is the point at which distinctions between the performance of homogenous and culturally diverse groups disappear.[1]

How might we use technology in assessing group work and what are the benefits?
Technology can be used in the following ways:

Facilitate collaboration between group members, eg by enabling students to upload and comment on wiki contributions or use social media channels to stay in contact and organise activities

Enhance the fairness of evaluating individual contributions - providing facilities for students to keep a reflective log or e-portfolio reveals their individual contribution to
a group project

Support peer review to acknowledge the differential contribution of the individuals involved.

How does this theme relate to the assessment and feedback lifecycle?
At the specifying stage [/guides/transforming-assessment-and-feedback/specifying] you will identify that group work is important to achieving learning outcomes. At the
setting stage [/guides/transforming-assessment-and-feedback/setting] you will think about scheduling activities, particularly so that students can practice component
elements before they undertake high-stakes summative assessment.

Throughout the supporting stage [/guides/transforming-assessment-and-feedback/supporting] you will help students develop the skills they need to work effectively in
groups.
Tips for assessing group work
(adapted from the work of Professor Graham Gibbs)

Allocate differential marks to individual students to increase fairness and avoid freeloading
Form an ideal group size of four to six - the maximum group size should be eight

Mixed ability groups work best: streaming disadvantages weaker students


Culturally homogenous groups do better on short assignments: multicultural groups need about four months working together to achieve their best results
Group work generally produces higher marks because overall it has more resources and time to apply to the task

It generally produces a narrower spread of marks than individual assignments

What resources can help?


Read Professor Graham Gibbs' paper on assessing group work and find out more about getting the most from group work assessment in a guide, both on the Oxford
Brookes University website [http://www.brookes.ac.uk/aske/groupwork-assessment/]

Our case study from Edinburgh College shows how social media and project management tools [http://www.rsc-scotland.org/?p=2566] support group assignments
to reduce workload and integrate assessment across different units in the curriculum.

Footnotes
[1] Watson, W. E., Kumar, K. & Michaelsen, L. K. (1993) Cultural diversity’s impact on group process and performance: comparing culturally homogeneous and culturally diverse task groups, The Academy of Management
Journal, 36(3), pp. 590–602.

Assessment literacy
"Students need to be given the opportunity to take part in the processes of making academic judgements to help them develop ‘appropriate evaluative expertise themselves’ and make
more sense of and take greater control of their own learning."
University of Dundee

Why is assessment literacy important?


Assessment literacy refers to understanding the process of making academic judgements, how this may be achieved, and the benefits and limitations of different approaches. Both staff
and students need to develop this understanding; we cover staff assessment literacy in the section on developing academic practice. Here, we look at
student assessment literacy.

The term assessment literacy is still uncommon. We talk increasingly about study skills, graduate attributes and digital literacies but none of these fully addresses student
understanding of, and engagement in, the overall assessment process.

What are the common problems?


HE in particular has been slow to move away from its traditional position at the pinnacle of a highly selective pyramid, in which students arrive already equipped with the skills they need
to succeed at this level of study. This often results in a remedial approach whereby support for study skills is bolted on to the curriculum to address the needs of weaker students,
rather than an inclusive approach which engages students as active participants in making academic judgements.[1]

"Assessment literacy is an iterative process, and therefore course design and implementation should provide unhurried opportunities and time within and across programmes to
develop complex knowledge and skills, and to create a clear paths for progression."
Higher Education Academy

A review of study skills materials available online shows a distinct emphasis on developing assessment technique through essay writing, presentation and preparing for
exams rather than understanding the nature and purpose of assessment and feedback practice. More integrated approaches that emphasise graduate attributes or
employability skills as key course outcomes can still fail to make the connection between the development of these skills and assessment practice.

The issue is however by no means confined to HE. A common observation in OFSTED reports on failing colleges is that there is insufficient use of pre-course assessments
to plan and teach to meet the needs of individual learners.

Peer review is an activity that can be very beneficial in developing assessment literacy because it engages students with assessment criteria and enables them to practice
making evaluative judgements. It can however be an unfamiliar and sometimes uncomfortable activity for many students so it is important to outline the purpose and
benefits of such techniques at an early stage.

How might we use technology and what are the benefits?


Providing online information about assessment criteria and marking rubrics and so on makes the information readily accessible to students.

Technology can be used to support activities such as peer review to help develop assessment literacy.

Text matching tools that generate an originality report for each assignment can be used to support the development of academic writing skills such as appropriate
referencing and citation. Using the tools in a formative way with students can be more productive than simply using them to assist with the detection of plagiarism.

How does assessment literacy relate to the lifecycle?


At the specifying [/guides/transforming-assessment-and-feedback/specifying] and setting [/guides/transforming-assessment-and-feedback/setting] stages you will
think about how assignment tasks allow students to develop and practice skills with the tasks becoming progressively more complex. Throughout the supporting
[/guides/transforming-assessment-and-feedback/supporting] stage, from induction onwards, you will seek to engage students with the overall process of assessment and
develop their capacity for making academic judgements.

At the submitting [/guides/transforming-assessment-and-feedback/submitting] stage, students can make use of originality reports generated by text matching tools to
check their referencing and citation.

What resources can help?


Our video outlines how Bath Spa and Winchester Universities improved assessment and feedback practice with the aid of student fellows:

Find out more about Bath Spa and Winchester universities' student fellow scheme [http://jiscdesignstudio.pbworks.com/w/page/51251270/FASTECH%20Project]
including a video of student fellows talking about their experiences
Our case study from Cumbernauld College outlines how the college developed targeted formative and summative assessment tasks in Moodle [http://www.rsc-
scotland.org/?p=2465] to improve grammar

Oxford Brookes University's guide outlines how to improve your students' performance in 90 minutes (pdf)
[http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?id=2147552287]

The Assessment Futures [http://www.uts.edu.au/research-and-teaching/teaching-and-learning/assessment-futures/overview] website has some useful guidance on developing students' assessment skills.

Student led individually created courses

Edinburgh College of Art uses a model called Student Led Individually Created Courses (SLICCs) to embed assessment for learning approaches and employability into the curriculum. Students create their own course, self-reflect and formatively self-assess their own learning with supervision by tutors.

Student project proposals must detail the learning activities together with how they will evidence the set learning outcomes (which are the same for all students and include
employability learning outcomes). Tutors sign off the academic viability of the proposal. Students must re-interpret the learning outcomes in their own words in their proposal
and this aids student understanding of what is required of them and how they will be assessed.

Students have to regularly evidence and articulate their learning as it unfolds (aligned to the set learning outcomes), using an e-portfolio and digital artefacts. Tutors do not
formally lecture in this model but they provide regular formative feedback via the e-portfolio based on the principle that feedback requires students to take action.

SLICCs is also linked to the university’s Edinburgh Award [http://www.employability.ed.ac.uk/Student/EdinburghAward/] , enabling students to gain a certificate of recognition from the
university and an entry on their Higher Education Achievement Report (HEAR).

Peer assisted learning (PAL) schemes

Many universities run PAL schemes which allow students to provide cross-year support to one another. Schemes such as that implemented at Bournemouth University
[https://www.bournemouth.ac.uk/students/learning/peer-assisted-learning] serve to:

Support student learning


Foster cross-year support for students with new students supported by those from the year above
Enhance students' experience of university life

Encourage collaborative learning rather than competitive learning


Create a safe environment where students are encouraged to ask questions
Help students gain insight into the requirements of their course and their lecturers' expectations

Encourage active and independent learning


Improve student retention and achievement
Give PAL leaders opportunities to revisit their prior learning

Students as change agents

Bath Spa and Winchester Universities use paid student fellows to act as change agents, co-developers and co-researchers in developing their assessment practice.

In training their first set of student fellows the universities introduced the students to the institutions' educational principles and to current thinking about assessment practice
from the research literature, as well as taking them through the overall process. This gave the students a much broader base on which to draw than simply their own prior
experience and gave them a different understanding of processes that had previously seemed complicated and incomprehensible.

A number of student fellows recognised the "naivety" with which they had previously viewed some aspects of the process.

Footnotes
[1] For a discussion of these issues see Wingate, U. (2006) Doing away with 'study skills'. Teaching in Higher Education, vol 11, No. 4, October 2006, pp 457-469.
http://embeddingskills.hud.ac.uk/sites/embeddingskills.hud.ac.uk/files/W... [http://embeddingskills.hud.ac.uk/sites/embeddingskills.hud.ac.uk/files/Wingate.pdf]

Assessment patterning and scheduling


"A comparison of the way students respond to different assessment environments has shown that it is not when criteria are spelled out in detail for each assignment that students are
clear about goals and standards, but when they get plenty of practice at the same kind of assignment with good written and oral feedback, so they come to understand, over time, what
is expected."
Transforming the Experience of Students through Assessment (TESTA)

Why is assessment patterning and scheduling important?


This is an essential component of good assessment design [/guides/transforming-assessment-and-feedback/assessment-design] . Poorly ordered or badly timed assessment
can undermine real learning opportunities.

You must distribute student effort fairly evenly across all important topics rather than concentrating it at particular times of the year. This needs combining with a
developmental approach where assignments build on prior learning to become increasingly complex and demanding over time (see the information on programme focused
assessment in our section on assessment design).

This can be a difficult balancing act. Too many similar assignments become trivial yet too much variation may not allow students sufficient practice at each type of
assessment. It may be more difficult for students to see the relevance of feedback when the next assignment is significantly different in form.

What are the common problems?


These include over-assessment and too much emphasis on summative assessment without sufficient formative opportunities.

Over-assessment can have a detrimental effect on student attainment as with too many different assignments to complete, students cannot concentrate sufficient effort
on each one. This is a particular problem if combined with 'assessment bunching' where the deadlines to submit assignments fall closely together.

Where there is too much emphasis on summative assessment, students may focus overly on the final mark and feel less inclined to read and act on feedback. This is
exacerbated when tutors have so many assignments to mark that they are unable to return marks and feedback in a timely manner to inform the students' approach
to the next assignment (see also the section on feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] ).
How might we use technology and what are the benefits?
It can help students to engage with assessment criteria and standards through the use of online templates for assignment briefs and marking rubrics.

Digital information about the curriculum can model assignment scheduling and present information about deadlines.

How does assessment patterning and scheduling relate to the lifecycle?


This is a curriculum design issue that relates to the specifying stage [/guides/transforming-assessment-and-feedback/specifying] . It's most relevant at the setting stage
[/guides/transforming-assessment-and-feedback/setting] when the precise requirements for a specific instance of curriculum delivery will be put in place.

What resources can help?


The University of Hertfordshire's assessment timelines tool (view via UK Web Archive)
[https://www.webarchive.org.uk/wayback/en/archive/20150529100437/http://jiscdesignstudio.pbworks.com/w/page/30631817/ESCAPE%20-
%20Assessment%20timelines] outlines the impact of assessment timing

The University of Greenwich programme mapper [https://sites.google.com/site/mapmyprogramme/home] can help you manage staff and student workload

TESTA's guidance outlines revised assessment patterns that work [http://www.testa.ac.uk/index.php/resources/category/7-best-practice-guides]

TESTA's guidance 10 steps to auditing a programme [http://www.testa.ac.uk/index.php/resources/research-tool-kits/category/11-researchtoolkits] includes a completed audit example showing good points and areas for improvement from an anonymised course

The University of South Wales' video outlines the creation of online assessment diaries for students.
[http://jiscdesignstudio.pbworks.com/w/file/fetch/67926972/Assessment_Diary.mp4]

Developing academic practice


Why is developing academic practice important?
It's just as important for staff to reflect on their assessment and feedback practice as for students to consider the outcomes of particular assessments.

Learning and teaching practice does not stand still. Whilst underlying good practice principles may remain valid for a long time, the changing technical landscape regularly
offers new ways of implementing them.

Even the underpinning principles themselves require review at certain intervals. For example, recent research into approaches such as peer review reveals benefits that you
should take into account when updating assessment strategies.

What are the common problems?


Time is an important factor. Staff overloaded by traditional assessment and feedback practices don't have time to step back and reflect on how to do things differently and
better. This is a particular issue where the underlying processes are poor; academic staff often describe themselves as being on a treadmill where they identify underlying,
badly designed business processes but don't have the time or analysis skills to redesign them.

Risk aversion is also a significant issue. Staff don't want to take risks with something as important as assessment practice. They may fear the opinion of external quality
assessors or worry that innovative or unfamiliar types of assignment may prove problematic for students and bring down the grade point average.

"New tutors often have a limited feel for what good feedback looks like or what standard of feedback, in terms of length and specificity, is expected. They may concentrate on proving
their superior knowledge to the student rather than focussing on improving the students’ work in future."
Transforming the Experience of Students through Assessment (TESTA)

Culture is also an issue. Approaches to assessment and feedback are often highly personal. Academic staff often don't view themselves as having regular working hours.
The fact that they often complete marking and feedback off campus suggests that it's done in their own time and they should therefore have freedom of choice in how they
do it.

Feedback in particular has been described as taking place in a black box with little or no discussion amongst programme teams about approaches and the types of
feedback given by individual academics. In these circumstances it is not surprising to find inconsistent approaches and variable quality.

Despite rigorous processes to ensure quality and standards, marking and grading is still a subjective matter. New lecturers tend to rely more heavily on written criteria but
also to mark more harshly. More experienced lecturers may develop tacit and personalised standards of marking which are not necessarily shared across the whole
programme or department.

Profiling feedback

Evidence from feedback audits undertaken by Jisc projects shows that typical feedback profiles may differ considerably between institutions. In our audits the 'typical' profile in each sample skewed towards a particular type of feedback:

In sample A (postgraduate online distance learning programme in medical education) 95% of feedback related to content and 72% related to the immediate task

In sample B (a range of masters level courses in education) praise statements were the most common element of summative feedback; there was little advice given and
that advice was short rather than longer term
In sample C (modern languages) a higher percentage of comments concentrated on weaknesses rather than strengths.
Good feedback looks at strengths and weaknesses and helps students define specific actions for improvement that feed into future assignments. In the resources below
you will find some tools to help you undertake your own feedback audit.

How might we use technology and what are the benefits?


It can streamline business processes and simplify workflows by automating many manual tasks. Having assignments in digital format offers many advantages over the
issues associated with copying, distributing and storing material on paper.

E-marking and e-feedback can be quicker and more user-friendly, both for staff and students, than traditional methods (see our section on marking practice
[/guides/transforming-assessment-and-feedback/marking-practice] ).

A range of online tools is available to support staff development by helping with feedback auditing and supporting tutor self-development; we point to some of these in our resources section.
How does designing assessment practice relate to the lifecycle?
The reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] covers not only student reflection but also staff reflection both on their own practice and
student performance to inform future iterations of similar courses.

What resources can help?


Our video case study outlines how Queen's University, Belfast changed assessment and feedback practices using appreciative inquiry approaches:

University College London's feedback profiling tool [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Feedback-profiling-tool.pdf] and supporting guidance
[http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Guidelines-for-using-the-feedback-profiling-tool.pdf] enables individuals or teams to analyse feedback and
encourages reflection

The Open Mentor tool [http://omtetra.ecs.soton.ac.uk/wordpress/] supports tutor reflection on feedback

The Feedback Analysis Chart for Tutors [http://jiscdesignstudio.pbworks.com/w/page/68306171/Analysing%20assignment%20feedback%20-%20The%20FACT%20method] (FACT) is an evaluation tool based on feedback 'depth'

The University of Dundee's action plan template [http://jiscdesignstudio.pbworks.com/w/file/fetch/53155411/Action%20Plan_Template_April2012.pdf] assists with developing assessment and feedback practice.

Consensus marking exercise (adapted from the work of TESTA)

The TESTA project highlighted student awareness of variations between markers. They identify ‘hawks’ and ‘sparrows’ on programme teams and often choose modules accordingly.

This exercise strengthened shared standards and consistency between markers:

Use two previously assessed scripts at the same level but with different marks
Invite the programme team to an hour long meeting

Ask colleagues to brainstorm what they are looking for in this assignment – this should be fresh and intuitive rather than orthodox
Ask colleagues to read and mark the pieces
Collect initial marks in a hat – written but anonymous

Discuss the pieces to gain impressions


Agree a consensus mark, and compare with initial marks

Revise criteria or agree to meet again to discuss other assessed work.
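The comparison step at the heart of the exercise can be sketched numerically. The example below uses invented marks purely for illustration: it measures how far the anonymous initial marks spread, and how far on average they sit from the agreed consensus.

```python
from statistics import mean

# Hypothetical anonymous initial marks collected 'in the hat' for one script.
initial_marks = [52, 58, 61, 64, 70]
consensus_mark = 62  # mark agreed after discussion

# Range of the blind marks, and their average distance from the consensus.
spread = max(initial_marks) - min(initial_marks)
average_gap = mean(abs(m - consensus_mark) for m in initial_marks)

print(f"Range of initial marks: {spread} marks")
print(f"Average distance from consensus: {average_gap:.1f} marks")
```

A wide range before discussion, narrowing after it, is exactly the signal the exercise is designed to surface and to discuss when revising criteria.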


The team tried the exercise without the anonymous marks. A respected academic spoke first and dismissed one piece that had previously been double marked and given two
first class marks. Most of the team adjusted to this and gave the work 50s and low 60s with no-one daring to give it a first. This demonstrates the value of the anonymous
initial marking.

Good practice in feedback monitoring

The Open University (OU) has the highest ratings of any university for feedback in the National Student Survey [http://www.thestudentsurvey.com/] even though all of its teaching takes place at a distance. The OU gives all of its tutors training on how to give feedback. They provide exemplars of good feedback and advice on using the 'OU sandwich’ of positive comments, advice on how to improve, followed by an encouraging summary.

The OU also monitors the standard of feedback that tutors provide to students. An experienced staff tutor samples new tutors’ marking. If they see feedback that falls below
accepted standards (for example too brief to be understandable) or is inappropriate (ie, overly critical with little advice on how to improve), they will contact the tutor for a
discussion. That tutor’s feedback will go on a higher level of monitoring until it's seen to improve.

Walsall College, which has an outstanding Ofsted rating, takes a similar approach. It has a number of ‘coaches’ who sample feedback for each subject area and provide staff
development for any tutors whose feedback is felt to be inappropriate for the level of study - particularly that which focuses only on the assessment criteria and not on
longitudinal development. Careful monitoring of feedback provided by new tutors takes place in the early days.

Both the OU and Walsall College operate strict rules for the timely return of feedback and monitor tutor adherence to this.

Employability and assessment


Why is employability and assessment important?
Universities and colleges are under increasing pressure from government and regulators, as well as fee-paying students and their families, to demonstrate that their
courses can enhance a student's future employment prospects. They are also keen to be the first port of call for employers seeking to develop their workforce capabilities
(see also our section on work-based assessment).

What are the common problems?


Issues include a perceived lack of alignment between common assessment practice and the formative ways in which professionals develop throughout their careers.
Learning providers continue to place considerable emphasis on key episodes of summative assessment. In contrast, professional development tends to be an ongoing process related to gathering and making sense of formative feedback from a range of stakeholders, including clients as well as peers.

In other sections of this guide (see particularly setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-feedback/supporting] ) we stress the importance of achieving clarity around assessment criteria and standards. However, in some cases the notion of fixed assessment criteria and grading structures may run counter to the work environment.

In a business context working out exactly what the client requires, and what are the most crucial parts of the brief, can often be challenging but key to success. This
underlines the value of engaging learners in defining assessment criteria and making evaluative judgements.

This raises the challenging question: Are we being too specific in detailing exactly how students get marks from our assessments? Should part of the assessment be the
task of working out which are the more crucial parts of the assessment itself?
"It seems that assessment in business is a cumulative and ongoing process with mostly undefined criteria that must be independently discovered, so in order to make our own
assessments more authentic we may need to reduce the level of definition we create about mark allocation."
University of Exeter

The development of self-regulated learning, perhaps best expressed as the skills needed for lifelong learning, should be one of the key aims of assessment. However, the lifelong learning approach is often neglected in assessment design.

Defining employability assessment types


There are issues around defining and describing types of assessment that support employability which can act as cultural barriers to improving pedagogic practice. The
term authentic assessment is frequently used in connection with employability. However this can receive a negative reaction from many academic staff who feel the term
inappropriately suggests that many other forms of assessment are not valid in their own context.

Other terms that may be used to describe this type of assessment include: integrated; work-focused; experiential; work-related; contextual; alternative; and situated.

Feedback in employment settings


In the section on feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] we challenge the notion that feedback is something to be
delivered by tutors to students. In employment settings professionals also require skills not only in interpreting feedback but also in giving feedback (see also the sections
on peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] and peer review [/guides/transforming-assessment-and-feedback/peer-review]
).

Another issue is that students are often not very good at recognising the transferable skills they have developed and articulating these to potential employers.

How might we use technology and what are the benefits?


Technology can help achieve a greater employability focus in assessment practice in various ways by:

Providing rich evidence of employability skills (through audio and video recording devices, webcams, e-portfolios)

Enabling learners to capture and reflect on the process of learning (through e-portfolios, blogs, video annotation software)

Capturing work-related performance for appraisal by a tutor or mentor (through audio and video recording devices, webcams)

Creating opportunities for employment-related assessments that are difficult to create in the classroom (eg, virtual worlds, online simulated professional and vocational
environments)

Supporting scenario-based assessment (through online diagnostic tools, computer-generated/marked assessments)

Supporting peer assessment and review (using software tools such as Peerwise [http://peerwise.cs.auckland.ac.nz/] or WebPA [http://webpaproject.lboro.ac.uk/] )

Mapping opportunities for acquiring and assessing wider employability skills across complex curricula, eg medicine (using mind mapping and curriculum mapping
tools)

Mapping assessments and learning outcomes against employability outcomes; making these visible to all stakeholders (via curriculum databases, virtual learning
environments or VLEs, learning portals).

How does employability and assessment relate to the lifecycle?


Enhancing employability is very much a curriculum design issue. At the specifying [/guides/transforming-assessment-and-feedback/specifying] stage you will think about the ways in which the methods of assessment resemble practices that students may encounter in a work environment.

At the setting [/guides/transforming-assessment-and-feedback/setting] stage you will select topics and problems that relate to real world situations and clarify how
specific learning outcomes relate to a broader set of skills and competencies (you may call these graduate attributes).

The reflecting [/guides/transforming-assessment-and-feedback/reflecting] stage is likely to be of particular importance with both self and peer reflection as important
features of assessment practice that seek to enhance employability.

What resources can help?


Our enhancing student employability guide [/guides/enhancing-student-employability-through-technology-supported-assessment-and-feedback] is a good starting point on how the curriculum can help develop the skills needed in the working world. You can also look at these individual resources from the guide:
Read our case study on enhancing employability through assessment [http://repository.jisc.ac.uk/5589/3/collaborate.pdf] at the University of Exeter and watch the
associated video below

Birmingham City University uses videos embedded in a 3-D graphical representation of a town called Shareville [http://shareville.bcu.ac.uk/index.php] enabling
students to develop the professional skills needed in the real world

The College of West Anglia set up an award-winning media production company and internet TV station, Springboard TV [http://www.springboardtv.com/] ,
remodelling the curriculum in the process to enable Media BTEC and diploma learners to be assessed on real-world projects

Our Keele University case study (pdf) [http://repository.jisc.ac.uk/7335/1/Jisc_e-portfolio_Keele2019.pdf] in our e-portfolio guide, how to enhance student learning,
progression and employability with e-portfolios (2019) [/guides/e-portfolios] , shows the importance of preparing students for the reflective practice required for
registration with a professional body

Our guide on e-portfolios, how to enhance student learning, progression and employability with e-portfolios (2019) [/guides/e-portfolios] , includes up-to-date evidence from UK colleges and universities. See, for example,
our case studies from Abertay (pdf) [http://repository.jisc.ac.uk/7332/1/Jisc_eportfolio_Abertay2019.pdf] and Nottingham Trent (pdf) [http://repository.jisc.ac.uk/7336/1/Jisc_eportfolio_NTU2019.pdf] universities.

Further useful resources


Read our case study [http://repository.jisc.ac.uk/6250/6/Technology_for_employability_-_HE_case_studies.PDF] on how the University of Edinburgh incorporated the
employability agenda into core learning activities and assessed learning outcomes

See also our case studies [http://repository.jisc.ac.uk/6251/9/Technology_for_employability_-_FE_and_Skills_case_studies.PDF] relating to employability and assessment in FE and skills at the City of Glasgow College and Portland College

Read case studies in our guide, how to enhance student learning, progression and employability with e-portfolios [/guides/e-portfolios] :
Nottingham Trent University (pdf) [http://repository.jisc.ac.uk/7336/1/Jisc_eportfolio_NTU2019.pdf]

Sheffield Hallam University (pdf) [http://repository.jisc.ac.uk/7337/1/Jisc_e-portfolio_SHU2019.pdf]

Abertay University (pdf) [http://repository.jisc.ac.uk/7332/1/Jisc_eportfolio_Abertay2019.pdf]


Designing work integrated assessment

The University of Exeter's designing a work integrated assessment model focuses on six areas when designing assessment to support employability.

The model's dimensions, with brief guidance for each, are set out below.

Problem / data: Set a real world problem as the core assessment task, supported by real world data

Purely academic learning might require a theoretical problem in order to test a theoretical understanding. In employment, though, problems tend to be very real, and data rarely comes in coherent, standardised forms. It is usually in 'messier' formats that need to be interpreted to be of use. Using a real world problem and real world data helps to develop skills in analysis, interpretation and evaluation.

Time: Move to a more distributed pattern of assessment; consider introducing ‘surprise’ points

Assessments are often delivered in the form of one summative assessment, eg an exam or essay, at the end of a period of formal learning. In
employment however, ‘assessment’ or evaluation points tend to occur frequently. In addition, timing is often out of individual control, and consequently
it can be necessary to juggle competing tasks at short notice. Using multiple assessment points helps to develop reflective thinking, whilst ‘surprise’
points support task prioritisation.

Collaboration: Create teams of students who work together to complete the assessment, encourage collaboration

Many forms of assessment require working alone, yet employment invariably requires some form of collaboration and team work, and often with
unknown and perhaps even challenging individuals. Encouraging students to work collaboratively and in teams improves their ability to negotiate and
discuss, and develops their understanding of team roles and role flexibility.

Review: Include peer and/or self-review explicitly in the assessment process

Typically the review of assessments (ie, feedback) in formal education is only provided by teaching staff. In employment, however, much of the review
process comes in multiple forms, eg, informal peer feedback from colleagues, formal and informal reviews from clients, and self-review of personal
performance. Including peer and/or self-review explicitly within an assessment helps students to develop critical thinking skills, and encourages
articulation and evidencing.

Structure: Lightly structure the overall assessment; reward student approaches

Most thinking on assessment suggests that there should be explicit guidance to students concerning how and where marks are attained. However in
employment part of the challenge for the individual and/or team is the structuring of the work that needs to be completed. Tasks need to be identified,
processes decided, and priorities allocated. Using a light structure approach encourages students to plan tasks and goals in order to solve a bigger
problem, strengthening their project management and prioritisation skills.

Audience: Aim to set explicit audiences for each assessment point

In higher education the audience for an assessment is implicitly the academic that sets it, who will naturally be already aligned in some way with the
course and/or module. This contrasts with employment, where the audience can be peers, but is more often the client or another external third party,
with different values, priorities and expectations. Having to think for a different audience on an assessment provokes greater reflective thinking, and
requires new types of synthesis.

Download a full copy of the model [http://repository.jisc.ac.uk/6196/1/Exeter_dimensions_model.pdf]


Feedback and feed forward


Why is feedback and feed forward important?
"Conventionally, feedback is conceptualised as an issue of ‘correction of errors’ or ‘knowledge of results’. Much more important is how the provision of feedback affects student learning
behaviour - how feedback results in students taking action that involves, or does not involve, further learning."
Transforming the Experience of Students through Assessment (TESTA)

Feedback provides information to learners about where they are in relation to their learning goals so that they can evaluate their progress, identify gaps or misconceptions
in their understanding and take remedial action. Generated by tutors, peers, mentors, supervisors, a computer, or as a result of self-assessment, feedback is a vital
component of effective learning.

Feedback should be constructive, specific, honest and supportive.

While feedback focuses on a student’s current performance, and may simply justify the grade awarded, feed forward looks ahead to subsequent assignments and offers
constructive guidance on how to do better. A combination of both feedback and feed forward helps ensure that assessment has a developmental impact on learning.

Effective feedback should also stimulate action on the part of the student. The most effective practice treats feedback as an ongoing dialogue and a process rather than a
product.

What are the common problems?


Timeliness
Feed forward can only be effective if it's timely ie, received at a point when meaningful action can be taken. High-stakes assessments are often set towards the end of a
module, term or semester, reducing opportunities for students to apply any feedback they receive.

Regularity
Feedback needs to be quite regular, and hence on relatively small chunks of course content, to be useful. One piece of detailed feedback on an extended essay or design task after ten weeks of study is unlikely to support learning across a whole course.

Approaches to feedback
Academic staff can often ignore the need to discuss feedback approaches. The feedback given can then be inconsistent or weak in other ways, eg skewed towards a particular type of observation (such as praise), or short term and too focused on the assignment in hand rather than truly developmental.

A common Ofsted report observation in failing colleges is that tutors (and workplace assessors in the case of apprentices) don’t provide feedback to students that helps
them understand how they can improve.

Missed opportunities
Students often don't collect or read feedback. This is sometimes because it arrives too late to be useful. In some cases the problem is as simple as the students not
realising the feedback is available.

Alternatively students can focus on the overall mark and not understand the benefits of making use of the feedback. We discuss this further in the section on returning
marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] where we suggest that disaggregating marks and feedback can encourage
students to engage better with the feedback.

Passive learners
Students can often be passive recipients of feedback, viewing it as the tutor's role to deliver feedback to them, rather than understanding the need for them to engage in
meaningful dialogue around the feedback to aid their development.

Lack of overview
It can often be difficult for both students and tutors (especially tutors with a pastoral or personal tutoring role) to gain an overview of feedback. This may be because
feedback is online but stored at a module level or because it is in a more ephemeral format such as paper or verbal feedback.

Understanding feedback
Feedback that appears self-evident to tutors may be difficult for students to understand. There could be difficulties with the format, eg indecipherable handwriting, or with how the feedback is expressed and contextualised.

The Open University asked a group of students to work through their feedback [http://www.open.ac.uk/blogs/efep/?page_id=523] talking out loud about what they
understood and what they didn't understand.

How might we use technology and what are the benefits?


Technology can:

Improve clarity about marking and feedback deadlines and provide students with a personalised schedule

Help students store and access feedback easily if it's provided in a digital format

Support individual learners or subjects where certain digital formats are more suitable eg, audio feedback for language courses

Help teachers make efficiency gains by using a feedback format that best suits them eg, a slow typist may provide better quality feedback by making an audio
recording.

How does feedback and feed forward relate to the lifecycle?


This theme runs right through the lifecycle:

At the specifying stage [/guides/transforming-assessment-and-feedback/specifying] think about your approach to feedback

During the setting stage [/guides/transforming-assessment-and-feedback/setting] ensure that the overall submission and marking schedules allow for timely
feedback that can inform the next assignment. Inform students how to make use of feedback and also provide formative opportunities

At the marking and production stage [/guides/transforming-assessment-and-feedback/feedback-production] generate feedback for students

When returning marks and feedback [/guides/transforming-assessment-and-feedback/returning-feedback] adopt an approach that is most likely to engage students

At the reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] engage in dialogue with students about their feedback and reflect on the feedback
you have given, how useful it's been and any changes you should make in the future.

What resources can help?


Our feedback and feed forward guide [/guides/feedback-and-feed-forward] is a good starting point on the use of technology to support student progression. You can also look at these individual resources from the
guide:
Our case study [http://repository.jisc.ac.uk/5596/3/interact.pdf] outlines the use of technology to improve feedback dialogue at the University of Dundee

Our video explores reconceptualising feedback:

Further useful resources


Gibbs and Simpson's Assessment Experience Questionnaire [http://www.testa.ac.uk/index.php/resources/research-tool-kits/category/11-researchtoolkits] (AEQ)
helps teachers to diagnose how well their course assessment supports student learning

Sheffield Hallam University has a set of useful case studies and guides [http://academic.shu.ac.uk/assessmentessentials/marking-and-feedback/feedback/] on
different ways to give feedback

The Sounds Good website highlights how to provide better feedback using audio [http://sites.google.com/site/soundsgooduk/]

Manchester Metropolitan University's guidance gives staff ideas for changing feedback practice [http://www.celt.mmu.ac.uk/feedback/index.php]

University College London (UCL) produced a useful feedback profiling tool [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Feedback-profiling-tool.pdf]
and supporting guidance [http://assessmentcareers.jiscinvolve.org/wp/files/2013/02/Guidelines-for-using-the-feedback-profiling-tool.pdf]

The TESTA project generated resources including a feedback guide for lecturers [http://www.testa.ac.uk/index.php/resources/category/7-best-practice-guides] and
feedback guide for students [http://www.testa.ac.uk/index.php/resources/category/7-best-practice-guides]

Our Moray College, UHI case study outlines how audio feedback improved learner engagement [http://www.rsc-scotland.org/?p=5423] .
Technology, Feedback, Action at Sheffield Hallam University

In this project the university evaluated how a range of technical interventions might encourage students to engage with their feedback and formulate actions to improve future learning.

Delivering feedback electronically offered considerable benefits including greater control for students over how and when they reviewed their feedback. Electronic storage
made it more likely that students would revisit the feedback in future.

Read the full research report
[http://evidencenet.pbworks.com/w/page/19383525/Technology%2C%20Feedback%2C%20Action!%3A%20Impact%20of%20Learning%20Technology%20on%20Students'%20Engageme
and access the outputs from the project.

Creating feedback dialogue at the University of Dundee

The interACT project placed great emphasis on creating the conditions for dialogue around feedback:

"Neglecting dialogue can lead to dissatisfaction with feedback. The transmission model of feedback ignores these factors and importantly the role of the student in learning from the feedback. Simply providing feedback does not ensure that students read it, understand it, or use it to promote learning."

Interventions on a postgraduate online programme in medical education included the requirement for students to submit a compulsory cover sheet with each assignment
reflecting on how well they think they met the criteria and indicating how previous feedback has influenced this assignment.

Following feedback from the tutor they were then invited to log onto a wiki (this is optional) and include a reflection on the following four questions:

How well does the tutor feedback match with your self-evaluation?

What did you learn from the feedback process?


What actions, if any, will you take in response to the feedback process?
What if anything is unclear about the tutor feedback?
The project evaluation report [http://jiscdesignstudio.pbworks.com/w/file/70106568/AF_Strand%20A_Final%20Evaluation%20Report%20interACT_06092013.doc] found
that these activities promoted desirable behaviours in both students and tutors and that participants detected a qualitative improvement in learning.

Inclusive assessment
Why is inclusive assessment important?
Inclusivity is a very important factor in assessment design as fair assessment must reflect the needs of a diverse student body. The Quality Assurance Agency's (QAA) UK quality code for higher education [http://www.qaa.ac.uk/quality-code] has a series of indicators that reflect sound practice. Indicator ten states:

Through inclusive design wherever possible, and through individual reasonable adjustments wherever required, assessment tasks provide every student with an equal opportunity to
demonstrate their achievement.

In order to provide all students with an equal opportunity to demonstrate their learning, you need to consider the different means of demonstrating a particular learning
outcome. Ensuring that students have variety in assessment and some individual choice, eg, in the topic or in the method/format of the assessment, can lead to overall
enhancement of the assessment process to benefit all students.

Assessment procedures and methods must be flexible enough to allow adjustments to overcome any substantial disadvantage that individual students could experience.

Inclusive practice means:

Ensuring that an assessment strategy includes a range of assessment formats

Ensuring assessment methods are culturally inclusive

Considering religious observances when setting deadlines

Considering school holidays and the impact on students with childcare responsibilities when setting deadlines

Considering students' previous educational background and providing support for unfamiliar activities eg, for students unused to group work

Considering the needs of students with disabilities - our guide on making assessments accessible [/guides/making-assessments-accessible] can help with this

What are the common problems?


Course teams that fail to consider inclusivity at the design stage can find themselves making a range of one-off assessment adjustments throughout the course to meet the needs of particular students. Often such modifications come at a cost and run the risk of introducing an element of bias into the process.

Setting up alternative arrangements for students with particular needs can create a sense of a two-tier system that singles out students with special needs. Try to make sure that a single process can accommodate students with additional needs.

Most people are well aware of the need to consider students with disabilities but may give less consideration to cultural, religious and domestic factors. As an example, a case study for a management course set in a brewery may not be the most appropriate choice if an alternative scenario would achieve the learning outcomes equally well.

How might we use technology and what are the benefits?


Technology can support choice and flexibility in assessment by allowing students to produce assignments in a range of different media.

It's also particularly important in helping to meet the needs of learners with disabilities - find out more on assistive technologies in our guide on making assessments accessible [/guides/making-assessments-accessible].

How does inclusive assessment relate to the lifecycle?


You are most likely to consider inclusive assessment practice in the specifying [/guides/transforming-assessment-and-feedback/specifying] and setting
[/guides/transforming-assessment-and-feedback/setting] stages. You will also think about this in a contextual way while supporting a specific cohort of students.

What resources can help?


Our guide can help with making assessments accessible [/guides/making-assessments-accessible]
Manchester Metropolitan University's Equality and Diversity in Learning and Teaching [http://www.celt.mmu.ac.uk/ltia/Vol9Iss1/index.php] (Vol 9 Issue 1, Autumn
2012) features the themes 'equal, diverse, accessible'

Roehampton University's project explored the development of equivalent assessment for students with disabilities
[http://www.plymouth.ac.uk/uploads/production/document/path/2/2538/Whats_it_worth.pdf]

Manchester Metropolitan University's DEMOS project outlined how to assess disabled students without breaking the law
[http://www.celt.mmu.ac.uk/ltia/issue4/wray.shtml]

Read Birmingham City University's guide to Moodle accessibility for students with specific learning difficulties
[http://repository.jisc.ac.uk/6195/1/BCU_Moodle_Accessibility_Guidelines.pdf]

The University of Bristol has a useful discussion paper on Ethical issues in technology enhanced assessment [http://www.bristol.ac.uk/media-
library/sites/education/migrated/documents/ethicalissues.pdf]

Marking practice
Why is marking practice important?
Marking and the validation of marks (verifying, analysing, moderating) ensure fairness and the consistent application of standards. However, they take up a considerable amount of academic staff time. Marking is also a high-stakes activity with serious repercussions if it goes wrong.

What are the common problems?


Personal approaches
Approaches to marking are often highly personal. Academic staff may not view themselves as having regular working hours, and the fact that they often complete marking and feedback at home or elsewhere off campus suggests that it's done in their own time and that they should therefore have freedom of choice in how they approach it. Staff often resist institutional attempts to enforce a uniform approach.

Many of the most comprehensive marking tools are best suited to online marking and there are more hurdles to overcome for people who wish to use them in situations
where internet access is not available.

The discourse of resistance is also highly personalised - eg, some older members of staff may cite eye-strain as an issue with online marking, while others of the same age
group prefer e-marking and use technology to adapt to their personal needs and make reading easier.

Ensuring a safe working environment


Colleges and universities have a statutory duty to ensure that staff have a safe working environment, including the appropriate and proper set-up of computer workstations and
related furniture. The same applies to staff who are contracted to work from home, but not necessarily to those who choose to do so.

It's important therefore that staff who work in situations outside of their formal work place are aware of how to set up and use their equipment in order to avoid
musculoskeletal problems and eye-strain. Extended use of laptops in particular is likely to cause problems.

Past experience
Tools to support e-marking have recently become considerably more user-friendly. Academic staff who tried to use such technologies a while ago and reverted to paper
marking because they found them difficult may take some persuading to try the new generation of tools.

Case study: resolving access issues with Grademark at the University of Hull

In 2014, the University of Hull received complaints from a small number of academic users who struggled to use the university-approved marking tool, Turnitin Grademark, on their university PCs.

When viewing the originality report, characters appeared fuzzy and difficult to read for any length of time, despite none of the users having a particular visual impairment.
Testing showed that the quality of the original PDF assignment submission deteriorated when viewed through Turnitin Grademark. This was specifically an issue with the
rendering engine of the marking software rather than a browser issue.

Further testing showed that the problem was resolved when marking was undertaken using the iPad app (which uses a different rendering engine from both the browser and the
Turnitin web engine).

How might we use technology and what are the benefits?


There are various technologies that can speed up marking at the same time as delivering more effective feedback. The most comprehensive tools may include:

Banks of frequently used comments that can be quickly inserted, for example to point out grammatical errors or the need for a citation

Rubrics with marking criteria in a pre-defined matrix, which improve marking consistency and may enable a quick calculation of marks and grades

Integration with text matching tools to support academic integrity checking.

Other standalone tools which may still capture annotations in digital format include the mark-up features in Microsoft Word or the use of digital pens.
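The rubric-based mark calculation mentioned above can be sketched in a few lines. This is only an illustration: the criterion names, weights and band scores below are invented, not taken from any particular marking tool.

```python
# Hypothetical rubric: the band scores and criterion weights are invented
# for illustration -- real marking tools let you define these per assignment.
BANDS = {"excellent": 80, "good": 65, "adequate": 50, "weak": 35}
WEIGHTS = {"argument": 0.40, "evidence": 0.35, "referencing": 0.25}

def rubric_mark(selections):
    """Convert the band a marker selects for each criterion into an
    overall mark using the pre-defined weights."""
    return round(sum(weight * BANDS[selections[criterion]]
                     for criterion, weight in WEIGHTS.items()), 1)
```

Selecting 'good' for argument, 'excellent' for evidence and 'adequate' for referencing gives 0.40 × 65 + 0.35 × 80 + 0.25 × 50 = 66.5, calculated the same way for every script - which is where the consistency benefit comes from.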

One of the most significant benefits of digital approaches is improved clarity for students who don't have to struggle to read handwriting.

Academic staff may need some time to familiarise themselves with new tools, but many find the process more efficient and more rewarding once the
repetitive elements have been removed. The real efficiencies are, however, most evident when you consider the overall process. Other benefits include:

The convenience of not having to collect and carry large quantities of paper

The convenience of electronic filing

The security of having work backed up on an online system

The ability to moderate marks without having to physically exchange paper.

How does marking practice relate to the lifecycle?


It relates specifically to the marking and production of feedback stage [/guides/transforming-assessment-and-feedback/feedback-production] which we discuss further
in that section.
What resources can help?
Our case study from the University of South Wales outlines the use of Grademark for online marking and feedback
[http://jiscdesignstudio.pbworks.com/w/file/fetch/67927160/Grademark%20%281%29.mp4]

This case study [http://academic.shu.ac.uk/assessmentessentials/wp-content/uploads/2015/09/Inline-Crocodoc-Marking.pdf] from Sheffield Hallam University
shows the benefits of online marking and annotation of student work

The University of Northampton has a useful guide to setting up your computer workstation
[https://nile.northampton.ac.uk/bbcswebdav/orgs/LearnTech/User%20Guides/Staff%20guides/Online%20Marking%20HS%20Advice.pdf] for safe and comfortable
online marking

The University of York's reading on screen [https://readingonscreen.wordpress.com/] guidance is for use by both staff and students

Queen Margaret University's guide for tutors outlines how to mark and feedback using the Turnitin Grademark product.
[http://www.qmu.ac.uk/cap/TELPdfFiles/GrademarkTutorGuide2014.pdf]

Good practice in e-marking (adapted from guidance produced by the University of Huddersfield)

Concentrate on using the tool to provide dialogic feedback (rather than a monologue) that facilitates conversation with students about their work. You can do this in several ways:

Never use generic comments alone to annotate student work - always include ‘bubble’ comments written specifically for that student, redeploying the time saved to offer more
bespoke comments and support
Use first person pronouns when engaging with the student’s work (‘I found this sentence difficult to interpret’ or ‘the opening paragraph you have offered here is
really compelling: I’m feeling motivated to read on’)
Invite students to ask their tutor to focus on specific aspects of their work in feedback
Use colour coding to distinguish strengths from weaknesses and ensure that every student’s work has at least some aspects highlighted in both categories

Avoid building generic comments that replicate bad habits common in paper marking, such as a tick or a comment that simply says ‘good’ or ‘vague’
Agree a shared approach with colleagues to using the marking tool that offers consistency to students while still allowing tutors to benefit from the flexibility that the tool
offers.

A suggested strategy for implementing e-marking

(adapted from work at the University of Huddersfield - see the report on their Evaluating the Benefits of Electronic Assessment Management (EBEAM) project
[http://jiscdesignstudio.pbworks.com/w/page/50671451/EBEAM%20Project] for more information)

Academic staff attitudes to e-marking fall into three main groups:

Innovators or early adopters who have migrated enthusiastically to e-marking


Those who have approached it more cautiously (likely to be the bulk of staff)
Reluctant adopters or those who have tried it and then moved back to paper marking.
An effective way of managing the transition to e-marking is to allow each of these groups to continue marking in the way that they feel most comfortable. The movement
from paper to e-marking will then happen organically (probably over several academic years), but this gradual process will generate the least disgruntlement and hostility.

Those who are happy to mark electronically should be encouraged to do so while academics who prefer to mark on paper are supported by provision of a print-out of student
submissions. This strategy will lead to e-marking spreading organically but only if there is simultaneous pressure provided by strategic policy decisions.

This pressure comes in the form of change agency from early adopters, from administrative systems which reward academic staff who adopt e-marking (eg, by lightening
their assessment administration load) and finally from student demand. The aim is to achieve critical mass whereby e-marking becomes established as the norm.

This allows it to become not just a student expectation but an entitlement and makes those who are reluctant to mark electronically the exceptions rather than the rule.

To achieve this critical mass, the bulk of academic staff (ie, those who are neither early adopters nor especially resistant) need to find it easier and more rewarding to move to
electronic marking than to stay in a paper-based system. This middle group is therefore the most important.

To achieve the goal of maximising electronic management of assessment, it's important to build a strategy and a system which provides each group with the support they
need. It must also offer rewards and apply pressure in a consistent way so that moving away from paper-based marking to e-marking makes the most sense to as
many people as possible.

Due to the differing attitudes of these three groups to e-marking, an effective strategy must be sensitive to them all.

Peer assessment
Why is peer assessment important?
Peer assessment is a variant of peer review in which students mark or grade the work of other students.

Many of the earliest attempts to develop peer activities concentrated on peer assessment. Experience shows that peer assessment is often more difficult than peer review
to implement successfully. This is largely because students lack confidence in their own and their peers' ability to undertake grading. This undermines their trust in the
fairness of the assessment process and their satisfaction with it which can erode learning benefits.

The benefits of the peer approach derive mainly from thinking about and evaluating others' work rather than grading it. That is not to say that peer assessment can't be
undertaken successfully and we introduce some examples throughout this guide.

What are the common problems?


They are similar to those experienced with peer review - the approach can be unfamiliar to students and uncomfortable for them. There are however additional issues to
consider with marking and grading:

Risk of collusion or even bullying

Differences between tutor and peer marks can cause dissatisfaction and lack of confidence in the approach.
How might we use technology and what are the benefits?
As with peer review [/guides/transforming-assessment-and-feedback/peer-review] , the use of suitable software is essential for effective peer assessment. The overall
benefits are similar to those outlined in the section on peer review. Specific benefits include:

Introducing summative assessment of group learning even with large classes

Giving credit to individual students on group projects - those who contribute more earn higher marks

Grading of a student’s abilities against key skills such as leadership, communication and report writing

Promoting fairness through the privacy and anonymity of online marking

Permitting automatic final grade calculation, which helps to avoid the transcription errors that arise from manipulating data in spreadsheets.
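The automatic grade calculation mentioned above can be sketched as follows. This is a simplified, hypothetical version of the kind of peer-moderation scheme used by tools such as WebPA (real tools also handle non-submission, score capping and weighting between peer and tutor marks); the scoring scale is invented for illustration.

```python
def peer_moderated_marks(group_mark, ratings):
    """Scale a shared group mark by each member's peer-assessed contribution.

    ratings[assessor][assessee] = score the assessor gave that member
    (self-assessment included). Each assessor's scores are normalised so
    that every assessor distributes the same total influence.
    """
    members = list(ratings)
    share = {m: 0.0 for m in members}
    for scores in ratings.values():
        total = sum(scores.values())
        for assessee, score in scores.items():
            share[assessee] += score / total
    # A member who received an average share ends up with a weighting of
    # 1.0 -- ie, exactly the group mark.
    return {m: round(group_mark * share[m] * len(members) / len(ratings), 1)
            for m in members}
```

If every member rates all contributions equally, each receives the group mark unchanged; a member whose peers consistently rate them as contributing more receives proportionately more.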

How does peer assessment relate to the lifecycle?


Like peer review, you need to consider peer assessment at many stages of the lifecycle. For it to be most effective, consider it at the specifying stage
[/guides/transforming-assessment-and-feedback/specifying] and introduce the approach early in a programme of study so that the activities become
increasingly demanding as the students develop the necessary skills.

It may be a feature of the supporting [/guides/transforming-assessment-and-feedback/supporting] stage where purely formative peer review activities take place ahead of
introducing an assessment element. Peer assessment forms part of producing marking and feedback [/guides/transforming-assessment-and-feedback/feedback-
production] - it may generate a considerable volume of useful feedback very easily and, in some cases, peer marks may count towards the mark for a summative
assessment.

It's also an important feature of the reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] as the critical analysis and evaluation produced by these
activities is the source of deep learning.

What resources can help?


Our video highlights WebPA - an online tool that facilitates peer moderated group-work marking.

Find out more about the WebPA tool [http://webpaproject.lboro.ac.uk/] - download the latest version and access detailed guidance for instructors and students

Read the case study [https://www.cs.auckland.ac.nz/~j-hamer/peer-assessment-using-Aropa.pdf] from the University of Auckland

Read the University of Hertfordshire's peer assessment initiative case study [http://jiscdesignstudio.pbworks.com/w/page/28195425/ESCAPE+-
+Peer+Assessment+(Assessment+TIP's)]

Peer review
" ...developing students’ capacity for the making of evaluative judgements about their own and others’ work is weakly developed in higher education, even though these skills are highly
valued in all aspects of life beyond university."
University of Strathclyde PEER project

Why is peer review important?


It's a process where students review other students' work and provide feedback on it. Usually this involves students producing feedback reviews for peers and receiving
feedback reviews from others.

The topic is important because producing peer feedback helps students develop critical thinking skills and make evaluative judgements based on the assignment criteria. In
giving and receiving feedback, students develop skills that help prepare them for future professional practice and help them understand the process of making academic
judgements.

Giving and receiving feedback


Both producing and receiving feedback can significantly enhance student learning, although in qualitatively different ways. Giving feedback is a very proactive
process, requiring students to review and think about the assignment criteria and make comparative judgements.

Receiving peer feedback can be a valuable supplement to tutor feedback and enable students to reflect on things they may not have thought about. Research, however,
shows that students giving feedback generally see the benefits in terms of their own development, even if the work they are reviewing is weak, whereas significant numbers
of students receive peer reviews in a more passive fashion and find them unhelpful.

When reviewing the work of others, students inevitably make comparisons with the work they have produced themselves and gain an understanding of the strengths and
weaknesses of different approaches.

Peer review study


An in-depth peer review study [http://www.reap.ac.uk/PEER/Project.aspx] at the University of Strathclyde concluded that attempts to deliver more and better quality
feedback to students would have limited impact. To enhance learning, students need to be engaged in the process of making such academic judgements for themselves.

The study also suggested that the best way to enhance learning is by making peer review a platform for the development of critical thinking across a whole programme of
study, rather than as an occasional task in a module or course.

What are the common problems?


Peer review can be an unfamiliar and sometimes uncomfortable activity for many students. The notion that formal education is about learning from experts is deeply
ingrained in the student psyche and attitudes towards peer review can vary depending on a student's previous educational background and culture.

Tip: Peer review doesn't have to criticise; it might involve suggesting something to improve an assignment or highlighting an issue or perspective that is missing.

Despite the many benefits, peer review is equally unfamiliar to some academic staff, and many fear student dissatisfaction, increased workload or both as a
result of introducing peer review activities.

Staff often raise concerns about students plagiarising the work of others but you can generally overcome these with well-designed peer review activities.

Tip: Following peer review ask students to comment on their own assignment without altering it so they can't plagiarise.

Giving and receiving feedback (adapted from the University of Strathclyde's PEER project, which identified tools to support peer review)

The benefits for students giving feedback to peers:


It involves constructing meaning - giving feedback is a demanding cognitive activity, and students can't be passive when giving feedback whereas they can be passive in using
the feedback they receive

It helps to address a lack of understanding of assignment tasks - constructing feedback requires students to actively engage with assessment criteria
It develops disciplinary expertise and writing skills through regular evaluation, a process that complements and elaborates on teacher and peer feedback
It stimulates self-reflection and results in the transfer of learning to students' own work - they see different approaches and recognise that they can achieve quality in a
variety of ways
If handled sensitively, it can help develop social cohesion and learning communities by engaging students in feedback in a safe and trusting environment - peer feedback moves
away from learning and assessment as a private activity
It helps students develop the ability to appraise their own work - in this way, peer review directly helps students to become more independent and effective at self-regulating
their own learning.
The benefits for students receiving feedback from peers:

Peers often provide feedback that is easier to understand than the teacher's as it's written in more accessible language

Students might receive more feedback than is possible from a single teacher
They learn how different readers interpret their work. This is important for developing communication skills where anticipating the reader response is important
Peer review might save some teacher time and reduce the need for extensive teacher feedback, or allow teachers to target feedback.

"Being involved in peer feedback, then, didn’t just keep students on track by telling them what they had done well or aspects they had missed. It also helped some reframe their views of
feedback as a dialogic, participative process, and helped them begin to recognise the importance of taking deep approaches to learning and viewing the subject matter through a
different lens."
Sambell (2011)

How might we use technology and what are the benefits?


Software tools can support learners in devising questions for peers, marking, reviewing, moderating or giving feedback on each other’s work. They provide benefits such as:

Increasing the scalability of peer assessment

Engaging learners in spending time with assessment criteria

Developing learners’ evaluative and digital literacy skills

Enabling activities to take place in any location at any time

Providing confidential and immediately collated results

Supporting group work and independent learning.

The immediacy, frequency and volume of software supported peer feedback is likely to make up for any difference in quality between peer and tutor feedback.

The University of Strathclyde's peer review study concluded that software is not only beneficial but necessary to support peer review because:

Students generally value anonymity which would be difficult to achieve without software support

It would be unreasonable to expect academic staff to administer peer review manually due to the extra workload this would cause.

Despite the many benefits, there appears to be room for further improvement in current peer review software systems. In particular, many systems (especially those that are open
source) don't offer easy integration with VLEs. Improved support for managing student groups would also be beneficial.

How does peer review relate to the lifecycle?


You need to consider peer review at many stages of the lifecycle. For it to be most effective, think about it at the specifying stage and introduce the approach early in a
programme of study so that the activities become increasingly demanding as the students develop the necessary skills.

It may be a feature of the supporting stage where purely formative peer review activities take place ahead of introducing an assessment element. Peer review also forms
part of producing marking and feedback - it may generate a considerable volume of useful feedback very easily and, in some cases, peer reviews may contribute to the
mark for a summative assessment.

What resources can help?


The University of Strathclyde's PEER project contains useful information and guidance on how to undertake peer review [http://www.reap.ac.uk/PEER.aspx]

Professor David Nicol's paper developing students' ability to construct feedback [http://www.enhancementthemes.ac.uk/docs/publications/developing-students-
ability-to-construct-feedback.pdf?sfvrsn=30] forms part of the University of Strathclyde's PEER project

Oxford Brookes University's guidance outlines making peer feedback work in three easy steps (pdf) [http://www.brookes.ac.uk/WorkArea/DownloadAsset.aspx?
id=2147552652]

Quality assurance and standards


Why are quality assurance and standards important?
Assessment is not only central to the learning experience - it results in a summative judgement that impacts a student's future life chances. For this reason assessment
practices in colleges and universities are subject to rigorous quality assurance mechanisms.

It's extremely challenging to measure something as complex as a learning experience and to compare it across different institutions. The issues are complicated and
beyond the scope of this guide; however, our aim is to support transformational assessment practice, based on sound educational principles, to enhance students' prospects.

The approach to assessment suggested in this guide, and many of the examples of good practice, are a far cry from the traditional approaches supported by the many
existing regulations, standards and marking schemes. The 2007 Burgess group report noted that a single summative judgement is increasingly irrelevant and inappropriate
given more flexible curricula, different forms of study including work-based learning, and more diverse assessment practices.
"All of this has given rise to a dramatic increase in the diversity of assessment practices, beyond the traditional examinations at the end of a year, or years, of study, and is designed to
capture a wider range of student achievement in greater depth. Assessment is increasingly complicated with much more use of continuous assessment and assessment of
achievements and progress where the criteria and the mark distributions are both very different from conventional examinations (such as projects, dissertations, shows and
performance).

Increasingly different types of achievements are being assessed – involving for example both knowledge and skills – which simply cannot be added together in a meaningful way. The
steering group concluded that there is a need to do justice to this wide range of experience by allowing a wider recognition of achievement instead of spending considerable time and
effort attempting to fit these into a single summative judgement."
Beyond the honours degree classification: Burgess Group final report [http://www.hear.ac.uk/sites/default/files/Burgess_final2007.pdf]

What are the common problems?


They are complex. Assessing learning is inevitably difficult and becomes more so when teachers try to innovate against frameworks and structures that may no longer
be fit for purpose. Here we focus on some of the issues that you can address by following the resources and good practice in this guide.

Too much emphasis on final grades


The persistence of a single summative judgement drives both students and staff towards a fixation with the final grade and tactical behaviours to the detriment of broader
and deeper learning. The emphasis on marking and grading suggests a view of assessment as an instrument of measurement rather than a means of supporting learning.

Our section on assessment design [/guides/transforming-assessment-and-feedback-with-technology/assessment-design] suggests ways of better supporting
assessment for learning. The section on feedback and feed forward [/guides/transforming-assessment-and-feedback/feedback] further develops those ideas.

Assessing the right things


Assessment must be valid and reliable, but many experts believe that validity is compromised by increasingly simplistic approaches to reliability. The quest for reliability drives
teachers to assess small and unambiguous chunks of learning rather than attempting to address more complex issues of knowledge.

Teachers may combine marks in a way that fails to represent the different types of learning outcome achieved through each individual component. Different assessment
formats (such as coursework compared with examinations) and different disciplinary customs and practices may distort marks. Many experts question whether it's
possible to distinguish the quality of work to a precision of one percentage point.

In the sections on assessment design [/guides/transforming-assessment-and-feedback-with-technology/assessment-design] and assessment patterning and
scheduling [/guides/transforming-assessment-and-feedback/pattern-and-scheduling] we look at the effectiveness of programme-level learning outcomes
and at aligning individual assignments so that students get the practice they need to build confidence and take on increasingly complex tasks.

Communicating standards
Standards are only useful when they are valid and understood. Issues have been identified with how standards are communicated and understood by students and staff.
We consider this in the sections on the setting [/guides/transforming-assessment-and-feedback/setting] and supporting [/guides/transforming-assessment-and-
feedback/supporting] elements of the lifecycle, and suggest some useful approaches to staff development in the section on developing academic practice
[/guides/transforming-assessment-and-feedback/academic-practice] .

The process of making academic judgement requires a certain amount of tacit knowledge. In the sections on feedback and feed forward [/guides/transforming-
assessment-and-feedback/feedback] and on peer review [/guides/transforming-assessment-and-feedback/peer-review] we look at creating the conditions for dialogue
that enables students to better understand the process of making academic judgements.

Assessing collaborative assignments


Assessing group work raises particular issues of fairness and standards. In general, a group has more resources and overall elapsed time at its disposal and so should
produce better work on the same assignment topic than an individual could. In practice, however, groups tend to work on more complex tasks than the assignments
intended for individual study.

Group size and diversity also has an impact on how easy or difficult the students may find it to work together. All of these factors must be taken into account when
evaluating group work and determining its equivalence to other types of assignment. We offer guidance on this in the section on assessing group work
[/guides/transforming-assessment-and-feedback/group-work] .

A diverse student population has diverse learning needs and some students may be unable to take particular assignments due to various types of disability. We look at the
issue of alternative formats and equivalence in the section on inclusive assessment [/guides/transforming-assessment-and-feedback/inclusive-assessment] .

Relevance to employment skills


In spite of the continued emphasis employers place on degree classifications in particular, it's unlikely that a single summative judgement will prove helpful to employers
when selecting future members of their workforce. In the section on employability and assessment [/guides/transforming-assessment-and-feedback/employability] we
look at how students can be helped to develop and evidence a range of transferable skills.

How might we use technology and what are the benefits?


It can foster clarity and transparency about the curriculum so that those with overall responsibility for programme design have a clear overview of the learning outcomes
and assessment methods for different courses. This in itself can facilitate dialogue that leads to sharing good practice and effective approaches.

Digital storage of marks and feedback can simplify analysis to identify anomalies. It can also permit auditing and profiling of feedback to support staff development.
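As one illustration of the analysis that digital mark storage enables, the sketch below flags markers whose average awarded mark sits unusually far from the cohort average so that a moderation panel can follow up. The marker names, marks and tolerance threshold are all invented for this example.

```python
from statistics import mean

def flag_marking_anomalies(marks_by_marker, tolerance=10):
    """Return markers whose average mark differs from the overall average
    of marker averages by more than `tolerance` marks -- a crude screen
    that moderation or second-marking processes can follow up on."""
    averages = {marker: mean(marks) for marker, marks in marks_by_marker.items()}
    overall = mean(averages.values())
    return sorted(marker for marker, avg in averages.items()
                  if abs(avg - overall) > tolerance)
```

A real analysis would account for cohort differences and assignment difficulty before attributing an anomaly to the marker, but even a simple screen like this is far quicker than leafing through paper mark sheets.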

Digital records of learning outcomes can produce richer student achievement evidence such as the diploma supplement and higher education achievement record (HEAR)
which students can show to potential employers.

Technologies such as e-portfolios allow students to build up a rich picture of their skills and achievements when seeking employment. They can then take these forward to
support continuous professional development throughout their working lives.

Technology can also be used to allow students a range of alternative formats for tackling an assignment. This choice may encourage creativity, better engagement and
support students who are unable to complete particular forms of assignment due to a disability.

How do quality assurance and standards relate to the lifecycle?


This is a theme that runs throughout. There will be formal programme and module approval processes at the specifying stage [/guides/transforming-assessment-and-
feedback/specifying] and detailed definition of marking and grading approaches at the setting stage [/guides/transforming-assessment-and-feedback/setting] .

At the marking and production of feedback [/guides/transforming-assessment-and-feedback/feedback-production] stage, you may implement quality assurance
measures such as second marking and moderation.

Verification and validation of marks will take place during the recording grades stage [/guides/transforming-assessment-and-feedback/recording-grades] . The reflecting
stage [/guides/transforming-assessment-and-feedback/reflecting] involves considering programme and module outcomes against other comparators to see whether
improvements can be made for the future.
What resources can help?
The Quality Assurance Agency's guide outlines the role of assessment in safeguarding academic standards
[http://www.qaa.ac.uk/en/Publications/Documents/understanding-assessment.pdf]

The Burgess Group's final report [http://www.hear.ac.uk/sites/default/files/Burgess_final2007.pdf] discusses the issues around summative assessment in higher
education

In 2007 a group of leading academics brought together by the Assessment Standards Knowledge exchange (ASKe) produced 'assessment standards: a manifesto for
change' [http://www.brookes.ac.uk/aske/assessment-standards-manifesto/] which underpins the Higher Education Academy's framework for transforming assessment
in HE. [https://www.heacademy.ac.uk/sites/default/files/downloads/transforming-assessment-in-he.pdf]

Student self reflection


Why is student self reflection important?
A key goal of formative assessment and feedback is to help students develop as independent learners capable of monitoring and regulating their own learning. Simply
providing feedback does not achieve this. It's only when learners actively engage with the assessment criteria and process of evaluating performance against those criteria
that they are able to use feedback in a way that leads to improvement. [1]

Research shows that a combination of student self-reflection and peer review is most likely to result in deeper learning. Helping students better understand their own level
of achievement is likely to reduce costly and time-consuming appeals and complaints.

The aim is to create a learning experience in which students can take responsibility for setting their own learning goals and evaluating progress in reaching those goals.

What are the common problems?


Students' ability to self-assess and regulate their learning is often undermined by a transmission model that treats students as passive recipients of feedback delivered by
tutors. This creates a mindset that assessment is the tutor's responsibility.

Responding to feedback
The means of capturing self-assessment and reflection also needs to facilitate dialogue around that reflection. For example, an assignment cover sheet can be a useful
reflective tool, but simply giving students a form to fill in doesn't necessarily challenge a teacher-centric approach.

One university found that rather than undertaking self-assessment, students used cover sheets to write a 'shopping list' of what they wanted from tutors. A more effective
solution involved closing the feedback loop by asking students to keep a reflective journal giving their response to the feedback.

Integrating self-assessment and reflection


The tools used to support self-assessment and reflection need to be easy to integrate into everyday learning and working practices. The University of Westminster
developed an open source tool to support student reflection and found that the stand-alone nature of the tool was a barrier to take-up; further work was required to
make it LTI compliant so that it could integrate with other learning platforms.

Nicol (2010) [2] notes that the capacity to critically self-evaluate the quality or impact of work may be implicit in most university curricula, although it's almost
never explicitly stated as a learning outcome. He argues that stating it explicitly would significantly change the organisation and delivery of the curriculum.

For example, there would be a much greater emphasis on self and peer processes and on putting learners in control as co-contributors to the curriculum.

How might we use technology and what are the benefits?


Technology can play a significant part in enabling the development of self-monitoring and self-evaluative skills. Examples include:

Online quizzes with automated, interactive feedback - they offer self-assessment opportunities before attempting an assignment

Screen capture software can demonstrate how to use assessment criteria, clarify goals and standards in an accessible way

Online dialogue through blogging, fora, email, internet messaging and wikis can provide opportunities to test and correct understanding, enabling the incremental
development of self-monitoring and self-evaluative skills

E-portfolios facilitate peer-to-peer, peer-to-tutor dialogue, private reflection and, in some cases, assignment submission and receipt

Audio and video feedback offer richer, more personalised feedback. Audio recorded podcasts also provide an efficient approach to giving feed forward to large groups

However, the focus need not be on individual technologies. Increasingly, curriculum designers draw on combinations of technologies to provide a learning environment that
continuously promotes self-monitoring, self-evaluation and reflection on progress.

How does self-reflection relate to the lifecycle?


At the specifying [/guides/transforming-assessment-and-feedback/specifying] and setting [/guides/transforming-assessment-and-feedback/setting] stages you will
think about designing activities that encourage self-reflection on individual assignments. At the supporting [/guides/transforming-assessment-and-feedback/supporting]
stage you will promote the value of self-assessment and reflection and provide good practice guidance to students.

The reflecting [/guides/transforming-assessment-and-feedback/reflecting] stage of the lifecycle is an iteration that students will cover many times in evaluating their
progress.

What resources can help?


Our video shows how the University of Westminster improved student engagement with feedback by encouraging self-reflection:

The University of Westminster's Making Assessment Count project
[http://web.archive.org/web/20190606060103/http://jiscdesignstudio.pbworks.com/w/page/23495173/Making%20Assessment%20Count%20Project] emphasised
student self-reflection. Outputs included a project report [http://www.jisc.ac.uk/media/documents/programmes/curriculumdelivery/mac_final_reportV5.pdf] and the
Feedback+ [https://sites.google.com/a/my.westminster.ac.uk/feedback-plus/home] tool.

Our case study from Glasgow Clyde College shows how games help medical administration students self-assess
[http://web.archive.org/web/20150809162258/http://www.rsc-scotland.org/?p=3684] in relation to difficult terminology.
Footnotes
[1] See for example Nicol, D. & Macfarlane-Dick, D. (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, Vol 31(2), 199-218.

[2] Nicol, D. (2010) The foundation for graduate attributes: developing self-regulation through self and peer assessment. Quality Assurance Agency for Higher Education.
[http://www.enhancementthemes.ac.uk/pages/docdetail/docs/publications/the-foundation-for-graduate-attributes-developing-self-regulation-through-self-assessment]

Work-based assessment
Why is work-based assessment important?
Many students opt to learn in a work-based environment rather than on a university or college campus. This may be to support their study, or to continually update their
skills after an initial period of education.

Many institutions develop expertise in collaboration with employers to deliver and assess learning in work-based environments. This type of education can demand
different modes of curriculum delivery and assessment to meet the needs of the workplace host.

Work-based learning may involve:

Work placements to gain experience of working environments while in full-time education or training ie, a sandwich course, year in industry or apprenticeship in a
professional setting

Learning at work eg, acquisition or renewal of skills while in post, plus any workforce development initiated by the employer

Learning through work ie, re-engagement with education or training to achieve a better standing at work using the workplace as a learning environment or point of
reference

See also our section on employability and assessment [/guides/transforming-assessment-and-feedback/employability] which looks at ways in which traditional college
and university courses can enhance a student's future employment prospects.

What are the common problems?


Assessment in work-based learning should resemble real-world challenges as closely as possible, preferably those arising from authentic workplace issues and problems.
It demands greater flexibility and collaboration over the timing, location and mode of assessments than institutionally based provision, and in some cases it may require a
shift in thinking over the value of collaborative and reflective outputs in summative assessment.

Work-based learning contexts require different approaches to assessment.

Learners on work placement are likely to be assessed by workplace mentors for competencies/skills, and by institutional tutors on their ability to relate theory to practice.
Assessors in the two different locations must have a mutual understanding of the assessment criteria and standards, and use a common vocabulary to aid student
understanding.

Equivalence of standards, or at least similar grade distributions, may be difficult to achieve if those marking in the workplace do not also mark in an academic context.

Learning at work (such as continuing professional development) can involve activities such as self and peer evaluation which may be unfamiliar for many students (see
also our sections on peer assessment [/guides/transforming-assessment-and-feedback/peer-assessment] and peer review [/guides/transforming-assessment-and-
feedback/peer-review] ).

Both learning at work and learning through work can involve a combination of online learning and workplace-based learning and assessment activities. Study patterns may
not fit the traditional academic year, and computers suitable for the intended forms of assessment must be available in the workplace.

A common observation in Ofsted reports on failing colleges is that too few apprentices have their skills assessed in a place of work.

How might we use technology and what are the benefits?


Technology can enable the work environment to be a location for assessing specialist and professional skills in ways that are authentic and convenient. It can also facilitate
collaboration and give students who might otherwise feel isolated the sense of being part of a learning community.

Further benefits include:

Capture of workplace skills in situ (digital video, audio, still photography, webcams)

Immediate learning reflection (internet-connected mobile devices, e-portfolios)

Efficient collaboration between tutors and workplace assessors (web conferencing)

Contextualised assessment management (mobile access to competency maps and assessment records)

Delivery, assessment and accreditation of short courses in any location (e-portfolios, VLEs)

Convenient, secure submission, return and storage of assignments (online assessment management tools)

Online access to feedback/feed forward [/guides/transforming-assessment-and-feedback/feedback] (podcasts, voice boards)

Asynchronous and synchronous communication with tutors, peers and workplace mentors (voice boards, VLEs, e-portfolios, social networking tools, blogs).

How does work-based assessment relate to the lifecycle?


At the specifying stage [/guides/transforming-assessment-and-feedback/specifying] you will design assessments where the method and topics relate to the particular
work context. During the setting stage [/guides/transforming-assessment-and-feedback/setting] you will tailor the assessment schedule to the particular learner profile
and ensure there is clarity and consistency in the requirements for all stakeholders including academics, workplace assessors and students.

At the supporting stage [/guides/transforming-assessment-and-feedback/supporting] you will need to ensure that students are aware of all the possible sources of
support and help them to make full use of collaborative tools so they don't feel a sense of isolation. You will also ensure clear and robust arrangements for submitting
assignments.

The reflecting stage [/guides/transforming-assessment-and-feedback/reflecting] will be of particular importance with both self and peer reflection as important features
of assessment practice in these contexts.
What resources can help?
The University of Exeter discusses how to prepare students for personal development reviews in the workplace
[http://collaboratevoices.blogspot.co.uk/2012/04/assessment-in-workplace.html]

See also our case studies [http://repository.jisc.ac.uk/6251/9/Technology_for_employability_-_FE_and_Skills_case_studies.PDF] relating to employability and
assessment in FE and skills, for example at S&B Autos Automotive Academy, Bristol.

Our video, Assessment, feedback and accreditation, shows how the University of Derby supports work-based learning with technology, including the
accreditation of work-based assessors as university lecturers.

Our video, Assessment and learning in practice, shows how five universities collaborated to transform assessment in practice settings through the
use of a shared competency map and mobile devices.

Our video, E-portfolio implementation, shows how the University of Wolverhampton used the PebblePad e-portfolio tool for delivering and
assessing short courses to SMEs.

Our video, E-portfolios for work-based assessment, shows how Thanet College uses e-portfolios for work-based assessment.
