
SPECIFICATIONS

In building operational assessments, the selection of texts or recordings will be guided by the specifications and by their
potential as sources for generating tasks and items (Alderson et al., 1995, p. 43).
Designers and writers have to choose whether it is best to:
- Craft new material to use as input.
- Adapt found source material to match the specifications.
- Find materials that can be used unaltered in the assessment.

Chapter 4 discussed how Bachman and Palmer’s (2010) task characteristics framework could be applied to:
- Assessing reading and listening abilities (Alderson, 2000; Buck, 2001)
- Assessing vocabulary and grammar (Read, 2000; Purpura, 2004)

Bejar et al. (2000) and Weir (2005a) have gone a step further in bringing together the contexts within which language
tasks are undertaken and the different cognitive processes that are involved in carrying them out.
Khalifa and Weir (2009) set out how cognitive processes are reflected in the texts employed as input for the reading
tests in the Cambridge English language examinations. By using questionnaires, verbal reports by test takers and eye
tracking tools, researchers have explored how purpose and context shape the reading and listening process and how
closely these processes resemble each other in the different contexts of the tests and in the target language use
domains.
For most purposes, it is preferable to use texts and recordings obtained from sources in the real-world contexts that
the assessment is intended to reflect.

In practice, there may need to be a trade-off between what is desirable from a theoretical standpoint and the practical
constraints on the kinds of texts or recordings that can be used in the assessment.

Material taken directly from a target language use domain, even when very carefully chosen, is inevitably less likely to
match the detailed specifications than purpose-written or scripted material.
However, invented texts are much more likely to differ significantly from the material that learners might encounter in
real-world language use.

Much of the listening that language learners carry out in real-life situations involves spontaneous, unscripted speech,
which is not easy to find in recorded form. Although unrehearsed, unedited recordings of spontaneous conversations
are available, they may not be easy to follow.

The recording may be authentic, but the involvement on the part of the assessee is not.
Recordings such as news broadcasts, lectures, interviews, dramas, or public announcements:
- Are rarely entirely impromptu
- Include rehearsed, semi-scripted or even fully scripted speech
- Are quite dissimilar to impromptu, unscripted speech
⇒ Rehearsed, directed and edited broadcasts have different characteristics from everyday conversation.

Alderson (2000), Buck (2001), and Davidson and Lynch (2002) suggest that if suitable recordings cannot be found to
use as input for listening assessments, an alternative is to make recordings that simulate real-world language use.
Buck (2001) also recommends a range of strategies, including:
- Scenarios where performers improvise a transaction or conversation based on an outline rather than a script
- Interviews with proficient speakers following a set pattern of questions
⇒ These should be based on recordings of genuine speech, adapted as required, then re-recorded by actors if stricter
control is needed to compensate for the limitations of scripts (Buck, 2001).
Writers naturally have more scope for adapting written than spoken material, since words can be inconspicuously
removed from or added to a written text in a way that is impossible with audio or video recordings.
Changes writers may make to a found text in preparing assessments include reshaping it to better match the text
characteristics in the specifications and adjusting it to better accommodate the intended questions or tasks.

For example, to meet the specifications, writers may wish to:
- Make cuts or add sentences to adjust the length of an input text
- Take an extract from a longer text
- Take out any references to parts of the text that have been removed
- Change allusions to people, places or events
- Take out content that could offend or upset some assessees.

When preparing tasks, writers may:
- Need to revise and reorganize a text to produce the number of items called for
- Shift ideas from one location in the text to another
- Remove repeated ideas
- Eliminate ambiguities
- Change the wording and structure of sentences to get rid of obvious hints
Adapting spoken material is generally more awkward than adapting written texts. Although audio and video editing
software makes adjustments to a recording relatively easy, any change to the wording will generally require
re-recording the material.

Brindley et al. (1997) provide an example of the kinds of dilemmas that face designers of listening assessments. They
tried to use authentic recordings taken from the target language use domain of daily life in Australia for a test
assessing the English of immigrants to Australia. However, some recordings contained too much or too little
information, or would have required extensive contextualization.

The recordings ultimately employed in the test were either invented or adapted from broadcast material and
re-recorded using actors.

The key advantage of using scripts, and the reason why they remain so popular, is that they give the designer greater
control over the content of a recording.
The level of the learners also plays an important part in decisions about the kinds of recording that can be
used.

However, heavy reliance on scripted material will deny learners opportunities to become accustomed to
the features of more natural speech. Designers will need to carefully balance these considerations
according to the purpose of the assessment.
ASSESSING GRAMMAR AND VOCABULARY
Assessments that focus on knowledge of words and phrases, the formal properties of language and the ways in which
manipulating these affects basic propositional meaning cover an important but restricted proportion of language
processing.
⇒ Assessment of grammar and vocabulary knowledge is therefore an assessment of a restricted range of the processes
involved in receptive, interactive and productive language use.

One important reason why this may be worth assessing is that problems in accessing the vocabulary or grammar in a
text or recording could be the source of difficulty in comprehension.
⇒ Such assessments may have diagnostic value, revealing what learners need to learn.
Another reason is that these assessments offer potential advantages in practicality and reliability. For example:
- Large numbers of items covering a wide range of language points can be administered in a relatively short time
- Scoring can be made very straightforward, or automatic

Such assessments provide quite effective measurement, but only of a limited aspect of general language
ability. If teachers and learners concentrate only on grammar and vocabulary, they may study only
disjointed fragments of language, overlooking knowledge of discourse, pragmatic and sociolinguistic
features.

Traditionally, such assessments have been administered mainly in the written mode. However, there are good reasons
for assessing spoken as well as written word and phrase recognition and production.
