
3

HOW TO MAKE TEACHING MATERIALS AND TESTS FOR PRAGMATICS

In this chapter, I will look at different instruments that can be used to create
activities for teaching and testing L2 pragmatics. Most of these instruments were
developed for pragmatics research, but they can be used just as well for teaching
and testing purposes. Two broad types of tools exist: receptive tools, which check
learners’ understanding of pragmatic meaning, and productive tools, which stimulate learners to practice the creation of pragmatic meaning. Receptive tools
include metapragmatic judgment and multiple-​choice tasks, whereas productive
tools include discourse completion tasks, role plays, and elicited conversations, as
shown in Figure 3.1.
This chapter will focus on how to design these instruments and what kind of
information they can provide to teachers and testers. In the following chapters,
I will draw on these instruments for teaching and testing learners at different proficiency levels.

3.1 The Basics: How to Establish Context


Teaching materials and test tasks for L2 pragmatics and interactional competence need to meet one basic but crucial requirement: they must establish context. Pragmatics is all about language use in context, and the whole point of
pragmatics is that context influences how we talk. We speak differently to friends
than to people we barely know; we speak differently when chatting to people at
a party than in a business meeting; we speak differently when talking face to face
or online; and we speak differently depending on what we are talking about and
trying to achieve. Any pragmatics activities or tasks need to take these factors into
account, just like grammar activities need to make clear that a plural or a simple
past is required.
DOI: 10.4324/9780429260766-3
34 Teaching Materials and Tests

FIGURE 3.1 Tools for teaching and testing L2 pragmatics: receptive tools (metapragmatic judgments, multiple choice) and productive tools (discourse completion, role play, elicited conversation)

Three main aspects of context need to be included:

1. the physical context (where – when – how);
2. the social context (who to whom); and
3. the goal of the interaction (why or for what purpose).

The physical context includes the location where the communication happens
(an office, a hospital ward, a living room), the mode of communication (face to
face, over the phone, online, email), and, where relevant, the time: knocking on a
friend’s door and asking them for help at 2 am requires different pragmatics than
doing so at 2 pm.
The social context needs to make clear the social roles of the interlocutors,
e.g., student and professor, housemate and housemate, father and son. It also
needs to make clear the settings for the social context variables in Brown and
Levinson’s (1987) framework: Power, (Social) Distance, and Imposition. Power is
usually implied in the social relationship, e.g., a manager is higher in power than
an employee, and a teacher is higher in power than a student, whereas housemates,
friends, or colleagues are equal in power.
By contrast, Social Distance is not automatically clear from people’s respective
social roles, and needs to be described in the task. For example, strangers and other
people who have never met before have high social distance whereas people who
have a high degree of social contact with each other (friends, housemates, family
members) have low social distance. People who do not know each other well but
have something in common (a new co-​worker, a classmate you have not spoken
to before, a second cousin you have only seen a couple of times) have medium
social distance.
Degree of imposition is a very important driver of the politeness level of the
utterance and must be clear from the situation. As outlined in Chapter 2, high
imposition means high “cost” to the hearer in terms of money, energy, time, or
social reputation. To put it generally, the more they have to go out of their way
to do things they would not ordinarily do, the greater the imposition. Borrowing
your friend’s laptop for an hour is a mid-​level imposition: they do not have to do
very much to comply with the request, but they cannot use their laptop during
that time, so there is some cost, but not too much. This situation becomes high

imposition if they also need their laptop to finish an urgent task. It becomes low
imposition if they do not need the computer at the moment, or they have a
backup. Putting this information in the task prompt allows the teacher, tester, or
researcher to influence the imposition level.
Especially in testing and research settings, it is very important to make sure
that test takers or research participants perceive the social context settings in the
same way as the tester or researcher. Cultural differences in perceptions can lead
to distortions, so scenarios should always be piloted with members of the target community.
Finally, the goal of the interaction needs to be spelled out. What is the speaker
trying to achieve? In real-world talk, people know what they want, but in
pragmatics tasks, we need to tell learners/​participants what the point of the talk is.
Including these three major considerations may sound like the task will end up being very lengthy, but the challenge is to keep it concise. That is actually not
as hard as it sounds because not everything needs to be spelled out in great detail.
The following is a scenario for a discourse completion task, designed to elicit a
request:

Request item (adapted from Roever, 2005)


Eric works as a waiter in a restaurant. He is supposed to work this afternoon
but he hasn’t been feeling well lately and wants to go and see his doctor. His
colleague Nikita is not working this afternoon so Eric decides to ask her
to take his shift. Eric is sitting next to Nikita during their morning break.
What would Eric probably say?

The physical context is implied as being the workplace setting (restaurant) because
Eric and Nikita are both at work. Since they are sitting next to each other, the
mode of interaction is face-to-face talk. The time is sometime in the morning but
this is not crucial for this situation.
The social relationship between Eric and Nikita is given as colleagues, which
implies low social distance and no power difference. The degree of imposition is
probably medium since Nikita will have to change whatever plans she might have
to accept Eric’s request (which is a cost to her) but at the same time she gets the
benefit of working an extra shift (extra pay).
The goal of the interaction is clearly specified as Eric wanting Nikita to take
his shift, and the scenario also provides a reason for his request. This is important
because in the real world, people do not make requests without a reason. Not
including a reason in the scenario would mean that respondents need to make it
up, which would put extra pressure on them and is not realistic.
As a general guideline, the description of the communicative situation should
be about the length of one paragraph. Anything longer than a paragraph runs the
risk of becoming a reading comprehension test, and respondents are just not likely

to read it carefully. It is certainly possible to enhance the description with pictures or video but for everyday classroom purposes or low-cost tests, a brief, clear but
comprehensive written description is sufficient.
Since respondents need to picture the situation and feel invested in their
response, it is generally a good idea to have male and female names represented in
the scenarios. A male and a female could be included in most scenarios with some
being male–​male or female–​female.
This gender-balance goal may require some modification under certain
circumstances, most commonly if respondents are from cultures where it would
be odd for men and women to talk to each other unless they are family. For
example, in Al-​Gahtani and Roever’s role play studies (Al-​Gahtani & Roever,
2012, 2018), a male role play interlocutor interacted with participants from a
Saudi Arabian background. In Saudi culture, it would be inappropriate for men
and women to talk to each other unless they are family members so all role play
participants were male and the scenarios were written as an interaction between
two men.
Finally, it is a very good idea to pilot scenarios with a small number of people,
ideally people similar to the research participants or test takers, but just friends
would already be helpful. This is to ensure that the tasks make sense to readers and
are not ambiguous or strange. For classroom use, piloting is usually unrealistic and
unnecessary as there are no stakes attached to the task, and even if one “falls flat,”
teachers can just drop it.
The following sections will describe different task types, starting with receptive tasks.

3.2 Metapragmatic Judgments


Metapragmatic judgment tasks allow learners to show their awareness of pragmatic norms, especially appropriateness, which is why they are also sometimes
called “appropriateness judgments.” Since they do not require production, which
tends to be more challenging for learners than comprehension, these tasks can
come close to eliciting “pure” pragmatic awareness with little effect of proficiency
other than the proficiency required to read the situation description and target
utterance. They are useful for teachers as the first type of exercise after introducing
a new pragmatic target feature.
Metapragmatic judgment is mostly used for speech acts since a speech act
can be judged for its degree of appropriateness. At the same time, respondents
may not agree on exactly how (in)appropriate a particular utterance is: some may
find it slightly problematic whereas others may think it is clearly inappropriate.
This is a problem for using metapragmatic judgments in language testing, but less
so for classrooms, where differences in judgment can be the starting point for a
discussion on different pragmatic perceptions. However, some learners may feel
frustrated by not being provided with a “correct” answer.

3.2.1 Types of Metapragmatic Judgment Tasks


A metapragmatic judgment task typically consists of a scenario and a Likert
scale where learners indicate their judgment of the appropriateness of the target
utterance. The following task is based on the example in section 3.1 above.

Metapragmatic judgment task with under-​politeness


Eric works as a waiter in a restaurant. He is supposed to work this after-
noon but he hasn’t been feeling well lately and wants to go and see
his doctor. His colleague Nikita is not working this afternoon so Eric
decides to ask her to take his shift. Eric is sitting next to Nikita during
their morning break.
Eric: “Nikita, you need to take my shift this afternoon.”
How appropriate is Eric’s utterance?
Completely appropriate
Mostly appropriate
Somewhat appropriate
Mostly inappropriate
Completely inappropriate

Eric’s request is overly direct and lacks a reason so it would probably be judged
to be mostly inappropriate or completely inappropriate. This illustrates that
metapragmatic judgment tasks do not necessarily have one correct answer: most
respondents would probably agree that Eric’s utterance is not polite enough but
they might disagree as to whether it is “completely inappropriate” or “mostly
inappropriate,” and there is no objective way to decide who is right.
Metapragmatic judgments can also be designed to be overly polite and
have a more specific rating scale that incorporates over-​politeness, like the
following task:

Metapragmatic judgment task with over-​politeness


Carl is at a coffee shop and wants to buy a muffin. When it is his turn to
order, the server says to him: “Hi, what can I get you?”
Carl: “Hi, I’m really sorry to trouble you, but I’m wondering if you’d mind
giving me a blueberry muffin?”
How polite is Carl’s utterance?
Much too polite
A little too polite
Just right
A little too impolite
Much too impolite

Carl’s utterance is much too polite for the situation since the degree of imposition
is quite small; after all, it is the server’s job to sell muffins. Interestingly, overpolite
items were quite difficult in the metapragmatic judgment section of Roever et al.
(2014a) so over-​politeness seems to be harder to detect for learners.
A variation on metapragmatic judgment is to ask learners to correct the
inappropriate utterance, which adds a productive element to the task. The
following example shows a metapragmatic judgment similar to the ones used by
Roever et al. (2014a).

Metapragmatic judgment item with correction


Carol and her close friend Bob have not seen each other for several months.
Carol has invited Bob for dinner at her house, and greets him at the door.
Carol: “Hi! Come in, it’s great to see you after so long!”
Bob: “Hi! What is there to eat?”
Was Bob’s utterance appropriate?
 Yes
 No
If not, what should Bob say?
_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​___​_​_​_​_​
If production is not wanted, possible response options could be given in multiple-​
choice format:

Metapragmatic judgment item with response options


Carol and her close friend Bob have not seen each other for several
months. Carol has invited Bob for dinner at her house, and greets him at
the door.
Carol: “Hi! Come in, it’s great to see you after so long!”
Bob: “Hi! What is there to eat?”
Was Bob’s utterance appropriate?
 Yes
 No
If not, what should Bob say? Check all correct answers.
1. “Hi, great to see you too.”
2. “Hi, thanks for coming over.”
3. “Hi, never mind that.”
4. “Hi, I can’t wait to have dinner.”

Instead of possible alternative utterances, learners can also be asked to indicate what is wrong with the target utterance, and show their degree of confidence on a
Likert scale, as shown in the following example (based on Ellis & Roever, 2020b):

Metapragmatic judgment item


Pierre and Sarah are colleagues who work in the same office but don’t know
each other well. Pierre is working on a complicated spreadsheet, and would
like Sarah to check it.
Pierre: “Sarah, could you have a look at my spreadsheet?”
Sarah: “No, I don’t have time.”
Is Sarah’s answer appropriate?
 Yes
 No

If test takers choose “yes,” the test moves to the next item. If they choose “no,”
there are two follow-​up questions:

Follow-​up questions
If you think it is not appropriate, please explain why.
A. Sarah should have been more polite.
B. Sarah should not refuse a colleague’s request.
C. Sarah’s reason is not good enough to refuse.
D. Sarah should have been more direct.
How certain are you of your response?
 Very certain
 Quite certain
 Not certain

This design avoids learners having to produce utterances, which might be difficult
because of proficiency limitations, but it means more work for the designer in
creating the response options.
Metapragmatic judgment tasks can also include several target utterances situated
in one scenario, which reduces the need to create a large number of scenarios.
Here is an example:

Metapragmatic judgment of multiple utterances in one scenario


While at work in the office, Mark accidentally knocks over a stack of papers
on his colleague Sally’s desk. The papers scatter all over the floor. What
would Mark probably say to Sally?
Here are some things Mark might say. Indicate how appropriate each one is.

Appropriate  Inappropriate
1. “Oops, my bad.” 5 4 3 2 1
2. “Oh no, I’m so sorry.” 5 4 3 2 1
3. “Sorry, Sally.” 5 4 3 2 1
4. “Gosh, how clumsy of me. I’m really sorry.” 5 4 3 2 1
5. “How did that happen? I’ll help you pick it up.” 5 4 3 2 1
6. “Damn, I’m really sorry. I didn’t mean to do that.” 5 4 3 2 1

3.2.2 Procedure: Administering Metapragmatic Judgment Tasks


The administration of metapragmatic judgment tasks is fairly easy. They can be
administered on paper as a teacher-​designed classroom activity, or on a computer
for larger-​scale tests or to integrate more “bells and whistles,” e.g., audio or video
support.
Since the effort involved for participants is limited to reading and making a
judgment, a fairly large number of items is possible. For example, Hudson et al. (1995) had 24 metapragmatic judgment items in their paper-based test,
Bardovi-Harlig and Dörnyei (1998) had 20 scenarios in their video-based instrument, and Roever et al. (2014a) also had a total of 20 metapragmatic judgment
items with audio support in their computer-​based test across two sections. While
item numbers should probably be around 20 for testing and research purposes,
they can be substantially smaller for classroom activities.

3.2.3 Resources and Further Readings

The variations on metapragmatic judgment tasks shown above are only some possible instantiations. Surprisingly few complete instruments are available. Bardovi-Harlig and Dörnyei (1998) have made their entire video-based instrument available
on IRIS (www.iris-​database.org/​). The testing instrument used by Hudson
et al. (1995) is fully reproduced in the appendix of their book. Takimoto (2009)
shows examples of different kinds of metapragmatic judgment tasks, with additional samples available on IRIS. Li (2014) shows some samples of metapragmatic
judgments in L2 Chinese pragmatics.
Perhaps surprisingly, there are also no extensive theoretical discussions of
metapragmatic judgments. Culpeper et al. (2018) devote just a subchapter to
metapragmatic judgments, and Taguchi and Roever (2017) even less.

3.3 Multiple-​choice Tasks


Multiple-​choice tasks have limited utility in L2 pragmatics teaching and testing.
By their very nature, they require that one answer choice be entirely correct and
the others entirely wrong, while still looking plausible enough to attract less competent learners. In pragmatics, decisions about appropriateness are rarely entirely
black or white, right or wrong. Appropriateness is a matter of degree, which is
why multiple-​choice tasks are not normally used for speech acts or extended

interactions. They can be used more profitably to investigate recognition of routine formulae or comprehension of implicature.
It is typical to have three to five response options for multiple-​choice items,
including one correct answer, with the rest being incorrect answers (distractors). The correct answer should appear in different positions across items so that a respondent who always chooses the same option does not end up getting a perfect
score. Incorrect answers must be clearly incorrect but still attractive enough to be
chosen. Distractors that are so “off the wall” that they are obviously wrong are
useless and make it easy for respondents to strategically narrow down the number
of possible answers, which means they can get an item correct due to test-​taking
strategies rather than knowledge of pragmatics. Creating plausible, attractive but
unambiguously incorrect distractors is one of the perennial challenges of making
multiple-​choice items.
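When items are delivered on a computer, the positional variation recommended above can be automated by shuffling the answer options at presentation time while keeping track of the key. The following Python sketch is purely illustrative; the function name and sample options are assumptions, not part of any published instrument:

```python
import random

def present_item(options, correct):
    """Shuffle the answer options so the correct answer's position
    varies across items, and return the key's new index."""
    shuffled = options[:]  # copy, so the original item bank is untouched
    random.shuffle(shuffled)
    return shuffled, shuffled.index(correct)

options = ["That's okay.", "No bother.", "It's nothing.", "Don't mention it."]
shuffled, key = present_item(options, "That's okay.")
assert shuffled[key] == "That's okay."  # the answer key follows the shuffle
```

Because the key index is recomputed after each shuffle, scoring remains correct no matter where the right answer lands.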
As pragmatic knowledge differs for routine formulae and implicature, multiple-​
choice items for the two aspects of pragmatics are constructed somewhat differently and discussed separately below.

3.3.1 Creating Multiple-​choice Items for Routines


For routine formulae, teachers and testers are usually interested in finding out
whether learners can recognize the appropriate formula for a particular situation.
Multiple-choice items therefore consist of a scenario and several possible formulae, of which only one is correct in the scenario. Here is an example from
Roever (2005):

Routines item from Roever (2005, p. 129)


In a crowded subway, a woman steps on Jake’s foot. She says “I’m sorry.”
What would Jake probably say?
1. “That’s okay.”
2. “No bother.”
3. “It’s nothing.”
4. “Don’t mention it.”

The scenario description for routines items can be kept fairly short with no great
elaboration of the social relationship, since the point of the item is to trigger the
situationally typical formula from a limited range of possibilities, rather than carefully craft a highly recipient-designed response.
The challenge with any multiple-​choice item is the creation of plausible but
clearly incorrect distractors in addition to the correct option. For routine formulae,
a good way to create distractors is to use formulae that exist but do not fit the
situation. In the example above, the first response option is a correct response to
an apology but the three distractors are responses to expressions of gratitude. This

item was of medium difficulty in Roever (2005), with 57% of test takers answering it correctly.
As with all multiple-​choice items, no one response option should stick out,
e.g., by being much longer than the others. In the apology item above, all response
options are approximately the same length. Another way to make response options
similar is to start them all off the same way as in the following example:

Routines item from Roever (2005, p. 127)


Jane is at the beach and wants to know what time it is. She sees a man with
a watch.
What would Jane probably say?
1. “Excuse me, can you say the time?”
2. “Excuse me, how late is it?”
3. “Excuse me, what’s your watch show?”
4. “Excuse me, do you have the time?”

This item was somewhat difficult in Roever’s (2005) test with 39% of test takers
answering it correctly. Unlike the subway item, the answer choices for the watch
item are not responses to an initiating utterance but they are utterances that open
a conversation. This is demonstrated by each of them starting with “Excuse me,”
which also serves to make them look more similar.

Another interesting feature of this item is the second distractor, which was
designed to be attractive to L1 German respondents (who made up a substantial
subset of Roever’s test-taker sample) since it is a direct translation of the equivalent German routine “Wie spät ist es?” Using direct translations is a useful way
to create attractive distractors but can of course only be done in foreign language
settings where all respondents share the same L1.
It is notable that the other distractors are not derived from existing routine formulae but were created for this item. This is not ideal but using routine formulae
to make distractors is not always feasible.

3.3.2 Creating Multiple-​choice Items for Implicature


For implicature, the focus of teaching and testing is learners’ comprehension of
implied meaning. Items therefore need to show a target utterance that includes
an implicature followed by possible meanings of the target for learners to identify
the correct interpretation. Similar to items for routine formulae, implicature items
only require fairly short and simple scenario descriptions, but unlike routines
items, there needs to be a brief dialogue to establish the implied meaning. Here is
an example item from Roever (2005):

Implicature item from Roever (2005, p. 122)


Jack is talking to his housemate Sarah about another housemate, Frank.
Jack: “Do you know where Frank is, Sarah?”
Sarah: “Well, I heard music from his room earlier.”
What does Sarah probably mean?
1. Frank forgot to turn the music off.
2. Frank’s loud music bothers Sarah.
3. Frank is probably in his room.
4. Sarah doesn’t know where Frank is.

The implied meaning in the above example is that Frank is probably in his room,
making the third answer choice the correct answer. The type of implicature in
the example above is idiosyncratic implicature (Bouton, 1988, 1999) or non-conventional implicature (Taguchi, 2009, 2011), which does not follow a particular fixed pattern. This was an easy item on Roever’s (2005) test, which 79% of
test takers answered correctly.
Taguchi (2008, 2009) suggests the category of conventional implicature, which
includes routine formulae as well as commonly used sequential means for implying
pragmatic meaning. Here is an example targeting implied refusal:
Implicature item for implied refusal
Mark and Sally are students who share a house. Sally has finished her university work in the morning, and has nothing else to do. They are talking
in the living room.
Sally: “Do you want to do something this afternoon? Catch a movie maybe?”
Mark: “Well, I’ve got a paper due tomorrow.”
Sally: “Come on, it’s just a couple of hours.”
Mark: “I really need to get this paper done today.”
What does Mark mean?
1. He does not want to go to a movie today.
2. He will be finished with his paper in about two hours.
3. He wants to do something with Sally in the afternoon.
4. He wants Sally to help him with his paper.

In addition to idiosyncratic implicature, Bouton identifies another implicature type, formulaic implicature, which encompasses sarcasm and irony, as in the
following example (Roever, 2005):

Formulaic implicature item from Roever (2005, p. 123)


José and Tanya are professors at a college. They are talking about a student, Derek.
José: “How did you like Derek’s essay?”
Tanya: “I thought it was well-​typed.”
What does Tanya probably mean?
1. She did not like Derek’s essay.
2. She likes it if students hand in their work type-​written.
3. She thought the topic Derek had chosen was interesting.
4. She doesn’t really remember Derek’s essay.

The item above was notably more difficult for learners than the idiosyncratic
implicature item (see the first example in section 3.3.2). Only 31% identified
the first response option as the correct answer. It is a general finding that implicature
based on irony or sarcasm is more difficult for learners, possibly because it has the
extra cognitive step of recognizing that the implied meaning is the opposite of the
surface meaning.
Just like with routine formulae, the creation of distractors is difficult for implicature items. Taguchi (2005) suggests three principles of distractor design: one distractor should represent the opposite of the correct answer, one distractor should relate to the final utterance, and one should relate to the dialogue as a whole.
Roever constructed his distractors along these lines for his formulaic implicature
item shown above: the first one is the correct answer, the second one relates to the
final utterance, and the third one is the opposite of the correct answer. Only the
fourth one does not follow Taguchi’s principle and is designed as the interlocutors
misunderstanding each other or being deliberately obfuscating.

3.3.3 Creating Multiple-​choice Items for Speech Acts


There is some work on multiple-​choice items for speech acts but these are
very difficult to design and not recommended. Hudson et al. (1995) included
some multiple-​choice request, refusal, and apology items in their test battery but
Yamashita (1996) reported low reliabilities for these items in her Japanese adaptation of the test, as did Yoshitake (1997), who used the original. Tada (2005) attained acceptable reliability for video-supported multiple-choice speech act
items, and Liu (2006) described a bottom-​up item development process, which
generated items that seemed to work well, though McNamara and Roever (2006)
questioned their construct validity.
It is in fact extremely difficult to design distractors for speech act items that
are clearly unacceptable in the target speech community but still plausible and
attractive. As an example, take the scenario about Eric, the waiter, in section 3.2.1
above. In the following example, this item has been turned into a multiple-​choice

item with the first answer choice as the intended correct answer and the others
as distractors.

Multiple-​choice speech act item


Eric works as a waiter in a restaurant. He is supposed to work this afternoon
but he hasn’t been feeling well lately and wants to go and see his doctor. His
colleague Nikita is not working this afternoon so Eric decides to ask her
to take his shift. Eric is sitting next to Nikita during their morning break.
What would Eric probably say?
1. “Nikita, I’m not feeling great. Can you take my shift this afternoon?”
2. “Nikita, you need to take my shift this afternoon.”
3. “Nikita, I’ve got a shift this afternoon. Do you want it?”
4. “Nikita, can you do me a favor? Would you be able to take my shift this
afternoon?”

The first response option seems completely fine, including a reason and an
indirect request with a modal and interrogative. The second response looks
much less acceptable as it is a bit too much of a command given the equal power
relationship between Eric and Nikita, and it is lacking an explanation. However,
the directness of it can also be read as expressing great urgency, or maybe Eric
is just given to a bit of drama. The missing explanation could be taken care of
by Nikita asking “why, what’s up?” in the next turn, thereby giving Eric space
to explain.
Response option 3 is even more borderline: it seems to ignore the imposition
on Nikita of having to change her plans this afternoon and it actually makes it
sound like taking Eric’s shift is in Nikita’s interest. There is no explanation of why
Eric wants Nikita to take his shift, but since Eric casts the request as an offer, is a
reason really required? Some respondents might feel that it is not polite enough
in disregarding the imposition and not giving a reason but others might find it
unproblematic as casual talk between colleagues. Also, if Nikita really wanted to
know, she might ask in the next turn.
Response option 4 goes in the direction of over-politeness, and some
respondents might feel that it is too formal given Eric’s and Nikita’s relationship.
But it is not grossly inappropriate, and if Eric and Nikita have larger social distance because they do not know each other well, or there is a significant age gap
between them, a more polite utterance may be justified.
Of course, it would be possible to create response options that are clearly too
direct or too arcane (“Nikita, take my shift this afternoon.” /​“Nikita, are you
doing anything this afternoon?”) but they would be too obviously off the mark,
just like a vastly overpolite option (“Nikita, I’m so sorry to bother you, but I was
just wondering if it would be at all possible for you to do me a huge favor and
take my shift this afternoon?”). Overall, it is best not to use multiple-​choice tasks
for speech acts.

3.3.4 Procedure: Administering Multiple-​choice Tasks


While multiple-​choice tasks are tricky to develop, their big advantage lies in the
ease of administration and scoring. They can be quickly and easily administered on
paper or a computer, and many survey or test design websites and systems allow
automatic scoring, sometimes even with immediate feedback to the learner or
test taker.
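The automatic scoring such systems perform is typically dichotomous: one point for each item where the chosen option matches the key. A minimal Python sketch, with invented item identifiers for illustration:

```python
def score_test(responses, answer_key):
    """Score a multiple-choice test dichotomously: one point per item
    where the chosen option number matches the answer key."""
    return sum(1 for item, choice in responses.items()
               if answer_key.get(item) == choice)

answer_key = {"routines_01": 1, "routines_02": 4, "implicature_01": 3}
responses  = {"routines_01": 1, "routines_02": 2, "implicature_01": 3}
print(score_test(responses, answer_key))  # prints 2 (two items correct)
```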
Because they are receptive tasks that just require learners to tick the correct
answer, larger numbers are possible. Roever (2005) had 12 implicature and 12
routines tasks in his web-based test battery, and Taguchi (2011) had 32 implicature items in her battery. For learners at B1 level, one minute per item is challenging but doable, keeping the total time required to 30–40 minutes (including
instructions and a couple of practice items).

3.3.5 Beyond Multiple Choice: Multi-​response Tasks


Multi-​response tasks can be viewed as multiple-​choice tasks with several correct
answers, or extended True/​False tasks. They are not common in tests but they
can be used for classroom activities. For example, a scenario could be given and learners choose which routine formula is likely to occur from a number of options, as in the following example, which is an adaptation of the example in section 3.3.1 above.

Multi-​response routines item


In a crowded subway, a woman steps on Jake’s foot. She says “I’m sorry.”
Which of the following would Jake probably say (Y) or not say (N)?
1. “That’s okay.”
2. “Don’t mention it.”
3. “Don’t worry about it.”
4. “I could care less.”
5. “No biggie.”
6. “What’s up?”
7. “It’s fine.”
8. “Sure thing.”

Options 1, 3, 5, and 7 are all likely in the scenario, and an item like this encourages
learners to compare several routines and their relationship to a scenario.
A similar item type could be used for implicature, as shown in the following
example:

Multi-​response implicature item


Sally and Bob attended a lecture on ancient Greek philosophy.

Sally: “That lecture was fascinating.”


Which of Bob’s responses show agreement (A), and which show disagreement (D)?
1. Bob: “Well, if you say so.”
2. Bob: “I couldn’t get enough of it.”
3. Bob: “It was alright.”
4. Bob: “You can say that again.”
5. Bob: “Fascinating is a big word.”
6. Bob: “I was totally into it.”

Options 2, 4, and 6 indicate agreement in this task.

3.3.6 Resources and Further Readings


For routines, Roever (2005) includes his entire test of second language routine formulae in the appendix of his book. Bardovi-Harlig (2009) gives a list
of formulae and scenarios that she used for her study but she does not include
distractors. Taguchi, Li and Xiao (2013) show the steps in designing a routines test of Mandarin Chinese as a second language.

3.4 Discourse Completion Tasks


Discourse completion tasks (DCTs) are the most long-​standing of productive
pragmatics tasks. In their classic format, they consist of a scenario and a gap for
respondents to write their response, as in the following example, which is an adaptation of the Eric, the waiter, scenario.

DCT item (adapted from Roever, 2005, p. 130)


Eric works as a waiter in a restaurant. He is supposed to work this afternoon
but he hasn’t been feeling well lately and wants to go and see his doctor. His
colleague Nikita is not working this afternoon so Eric decides to ask her
to take his shift. Eric is sitting next to Nikita during their morning break.
What would Eric probably say?
_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​_​___​_​_​_​_​_​​_​_​_​_​_​_​_​

In the first 20 or so years of L2 pragmatics work, DCTs were overwhelmingly popular. Nearly every productive study employed them, and analytic schemes were
developed to analyze results, most notably the coding manual from the largest
DCT study of all, the Cross-Cultural Speech Act Realization Project (CCSARP;
Blum-​Kulka et al., 1989). CCSARP used DCTs with eight request scenarios and
eight apology scenarios to collect data from 1,088 participants in seven countries
(Australia, US, England, Canada, Denmark, Germany, Israel), and then coded the
data to find cross-​cultural differences in speech act strategies.
