
System 26 (1998) 163-174

Computer-aided error analysis


Estelle Dagneaux, Sharon Denness, Sylviane Granger*
Centre for English Corpus Linguistics, Université Catholique de Louvain, Collège Erasme,
Place Blaise Pascal 1, B-1348 Louvain-la-Neuve, Belgium

Abstract

This article introduces the technique of Computer-aided Error Analysis (CEA), a new
approach to the analysis of learner errors which it is hoped will give new impetus to Error
Analysis research, and re-establish it as an important area of study. The data used to
demonstrate the technique consist of a 150,000-word corpus of English written by French-
speaking learners of intermediate and advanced level. After the initial native speaker correc-
tion, the corpus is annotated for errors using a comprehensive error classification. This stage
is a computer-aided process supported by an 'error editor'. The error-tagged corpus can then
be analyzed using standard text retrieval software tools and lists of all the different types of
errors and error counts can be obtained in a matter of seconds. In this study, we use the
corpus to analyze progress rate between intermediate and advanced level students along a
range of grammatical and lexical variables, with interesting results. Initial research suggests
that this approach provides one way of discovering important features of learner writing, in
particular areas of persistent difficulty. As such, we believe it could play an important role in
the development of the next generation of pedagogical tools, ones which incorporate more
'learner aware' information. © 1998 Elsevier Science Ltd. All rights reserved.

1. Introduction

There is no doubt that error analysis (EA) has represented a major step forward in
the development of Second Language Acquisition (SLA) research, but equally, it is
also true that it has failed to fulfil all its promises. In this article we aim to demon-
strate that recognizing the limitations of EA does not necessarily spell its death. We
propose instead that it should be reinvented in the form of computer-aided error
analysis (CEA), a new type of computer corpus annotation.

*Corresponding author. E-mail: granger@lige.ucl.ac.be

0346-251X/98/$19.00 © 1998 Elsevier Science Ltd. All rights reserved.


PII: S0346-251X(98)00001-3

2. Traditional error analysis

In the 1970s, EA was at its height. Foreign language learning specialists, viewing
errors as windows onto learners' interlanguage, produced a wealth of error typologies,
which were applied to a wide range of learner data. Spillner's bibliography
(Spillner, 1991) bears witness to the high level of activity in the field at the time.
Unfortunately, EA suffers from a number of major weaknesses, among which the
following five figure prominently.
• Limitation 1: EA is based on heterogeneous learner data;
• Limitation 2: EA categories are fuzzy;
• Limitation 3: EA cannot cater for phenomena such as avoidance;
• Limitation 4: EA is restricted to what the learner cannot do;
• Limitation 5: EA gives a static picture of L2 learning.
The first two limitations are methodological. With respect to the data, Ellis (1994)
highlights "the importance of collecting well-defined samples of learner language so
that clear statements can be made regarding what kinds of errors the learners pro-
duce and under what conditions" and regrets that "many EA studies have not paid
enough attention to these factors, with the result that they are difficult to interpret
and almost impossible to replicate" (p. 49). The problem is compounded by the
error categories used which also suffer from a number of weaknesses: they are often
ill-defined, rest on hybrid criteria and involve a high degree of subjectivity. Terms
such as "grammatical errors" or "lexical errors", for instance, are rarely defined,
which makes results difficult to interpret, as several error types--prepositional errors
for instance--fall somewhere in between and it is usually impossible to know in
which of the two categories they have been counted. In addition, the error typologies
often mix two levels of analysis: description and explanation. Scholfield (1995)
illustrates this with a typology made up of the following four categories: spelling
errors, grammatical errors, vocabulary errors and L1 induced errors. As spelling,
grammar and vocabulary errors may also be L1 induced, there is an overlap between
the categories, which is "a sure sign of a faulty scale" (Scholfield, 1995, p. 190).
The other three limitations have to do with the scope of EA. EA's exclusive focus
on overt errors means that both non-errors, i.e. instances of correct use, and non-use
or underuse of words and structures are disregarded. It is this problem in fact that
Harley (1980, p. 4) is referring to when he writes: "[It] is equally important to
determine whether the learner's use of 'correct' forms approximates that of the
native speaker. Does the learner's speech evidence the same contrasts between
the observed unit and other units that are related in the target system? Are there
some units that he uses less frequently than the native speaker, some that he does
not use at all?" In addition, the picture of interlanguage depicted by EA studies is
overly static: "EA has too often remained a static, product-oriented type of
research, whereas L2 learning processes require a dynamic approach focusing on the
actual course of the process" (van Els et al., 1984, p. 66).
Although these weaknesses considerably reduce the usefulness of past EA studies,
they do not call into question the validity of the EA enterprise as a whole but highlight

the need for a new direction in EA studies. One possible direction, grounded in the fast
growing field of computer learner corpus research, is sketched in the following section.

3. Computer-aided error analysis

The early 1990s saw the emergence of a new source of data for SLA research: the
computer learner corpus (CLC), which can be defined as a collection of machine-
readable natural language data produced by L2 learners. For Leech (1998) the
learner corpus is an idea 'whose hour has come': "it is time that some balance was
restored in the pursuit of SLA paradigms of research, with more attention being
paid to the data that the language learners produce more or less naturalistically".
CLC research, though sharing with EA a data-oriented approach, differs from it
because "the computer, with its ability to store and process language, provides the
means to investigate learner language in a way unimaginable 20 years ago" (Leech,
1998, p. xvii).
Once computerized, learner data can be submitted to a wide range of linguistic
software tools--from the least sophisticated ones, which merely count and sort, to
the most complex ones, which provide an automatic linguistic analysis, notably part-
of-speech tagging and parsing (for a survey of these tools and their relevance
for SLA research, see Meunier, 1998). All these tools have so far been exclusively
used to investigate native varieties of language and though they are proving to be
extremely useful for interlanguage research, the fact that they have been conceived
with the native variety in mind may lead to difficulties. This is particularly true of
grammar and style checkers. Several studies have demonstrated that current check-
ers are of little use for foreign language learners because the errors that learners
produce differ widely from native speaker errors and are not catered for by the
checkers (Granger and Meunier, 1994; Milton, 1994). Before one can hope to pro-
duce "L2 aware" grammar and style checkers, one needs to have access to compre-
hensive catalogues of authentic learner errors and their respective frequencies in
terms of types and tokens. This is where EA comes in again, not traditional EA, but
a new type of EA, which makes full use of advances in CLC research.
The system of CEA developed at Louvain has a number of steps. First, the learner
data is corrected manually by a native speaker of English who also inserts correct
forms in the text. Next, the analyst assigns to each error an appropriate error tag (a
complete list of all the error tags is documented in the error tagging manual) and
inserts the tag in the text file with the correct version. In our experience efficiency is
increased if the analyst is a non-native speaker of English with a very good knowl-
edge of English grammar and preferably a mother tongue background matching
that of the EFL data to be analyzed. Ideally the two researchers--native and non-
native--should work in close collaboration. A bilingual team heightens the quality
of error correction, which nevertheless remains problematic because there is regularly
more than one correct form to choose from. The inserted correct form should
therefore rather be viewed as one possible correct form--ideally the most plausible
one--than as the one and only possible form.

The activity of tag assignation is supported by a specially designed editing software
tool, the "error editor". When the process is complete, the error-tagged files
can be analyzed using standard text retrieval software tools, thereby making it pos-
sible to count errors, retrieve lists of specific error types, view errors in context, etc.
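The counting and retrieval step described above can be sketched in a few lines. We assume here the tagged format visible in Fig. 1 — an error code in parentheses, the erroneous form, and the correction between dollar signs; the regular expression and function names are our own illustration, not part of the Louvain toolset.

```python
import re
from collections import Counter

# One annotation = (CODE) erroneous form $correction$ -- this pattern is our
# reading of the format shown in Fig. 1, not the project's own tool.
TAG = re.compile(r"\((?P<code>[A-Z]+)\)\s*(?P<error>.*?)\s*\$(?P<corr>[^$]*)\$")

def extract_errors(text):
    """Return (code, erroneous form, correction) triples from tagged text."""
    return [(m["code"], m["error"], m["corr"]) for m in TAG.finditer(text)]

sample = ("a herd of tiny (FS) braun $brown$ cows was grazing quietly, "
          "(XVPR) watching at $watching$ the toy train")
errors = extract_errors(sample)
counts = Counter(code for code, _, _ in errors)   # error counts per category
```

Once the triples are extracted, any further tabulation (counts per category, lists of a given error type) is a one-liner over the result.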
The main aim of this EA system has always been to ensure consistency of analysis.
The ideal EA system should enable researchers working independently on a range of
language varieties to produce fully comparable analyses. For this reason, a purely
descriptive system was chosen, i.e. the errors are described in terms of linguistic
categories. A categorization in terms of the source of the error (L1 transfer, over-
generalization, transfer of training, etc.) was rejected because of the high degree of
subjectivity involved. One error category runs counter to this principle. This is the
category of "false friends", which groups lexical errors due to the presence of a
formally similar word in the learner's L1. This category was added because of our
special interest in this type of error at Louvain. However, it is important to note that
the distinction is made at a lower level of analysis within a major "descriptive"
category--that of lexical single errors.
The error tagging system is hierarchical: error tags consist of one major category
code and a series of subcodes. There are seven major category codes: Formal,
Grammatical, LeXico-grammatical, Lexical, Register, Word redundant/word miss-
ing/word order and Style. These codes are followed by one or more subcodes, which
provide further information on the type of error. For the grammatical category, for
instance, the first subcode refers to the word category: GV for verbs, GN for nouns,
GA for articles, etc. This code is in turn followed by any number of subcodes. For
instance, the GV category is further broken down into GVT (tense errors), GVAUX
(auxiliary errors), GVV (voice errors), etc. The system is flexible: analysts can add or
delete subcodes to fit their data. To test the flexibility of the system, which was
initially conceived for L2 English, we tested it on a corpus of L2 French. The study
showed that the overall architecture of the system could be retained with the addi-
tion (or deletion) of some subcategories, such as GADJG (grammatical errors
affecting adjectives and involving gender).
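The hierarchical scheme can be modelled as nested code tables. The tables below list only a few of the seven major categories and subcodes as an illustration (the full inventory lives in the error tagging manual), and the decomposition function is our own sketch of how a tag such as GVAUX splits into its parts:

```python
# Illustrative code tables -- analysts can add or delete entries, which is
# how the system's flexibility (e.g. adding GADJG for L2 French) works.
MAJOR = {"F": "Formal", "G": "Grammatical", "X": "Lexico-grammatical", "L": "Lexical"}
WORD_CLASS = {"V": "verb", "N": "noun", "A": "article"}

def decompose(tag):
    """Split an error tag such as 'GVAUX' into (major category, word class, subcode)."""
    major = MAJOR[tag[0]]
    if tag[0] == "G" and len(tag) > 1:
        word_class = WORD_CLASS.get(tag[1], tag[1])
        subcode = tag[2:] or None
        return (major, word_class, subcode)
    return (major, tag[1:] or None, None)
```

For example, `decompose("GVAUX")` yields the grammatical category, the verb word class and the auxiliary subcode.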
Descriptive categories such as these are not enough to ensure consistency of
analysis. Researchers need to know exactly what is meant by "grammatical" or
"lexico-grammatical", for instance. In addition, they need to be provided with clear
guidelines in case errors--and there are quite a few--allow for more than one
analysis. To give just one example, is *an advice to be categorized as GA (grammatical

There was a forest with dark green dense foliage and pastures where a herd of tiny (FS)
braun $brown$ cows was grazing quietly, (XVPR) watching at $watching$ the toy train going
past. I lay down (LS) in $on$ the moss, among the wild flowers, and looked at the grey and
green (LS) mounts $mountains$. At the top of the (LS) stiffest $steepest$ escarpments, big
ruined walls stood (WM) 0 $rising$ towards the sky. I thought about the (GADJN) brutals
$brutal$ barons that (GVT) lived $had lived$ in those (FS) castels $castles$. I closed my
eyes and saw the troops observing (FS) eachother $each other$ with hostility from two (FS)
opposit $opposite$ hills.

Fig. 1. Sample of error-tagged text.



article error) or as XNUC (lexico-grammatical error involving the count/uncount
distinction in nouns)? Obviously both options are defensible but if consistency is
the aim, analysts need to opt for one and the same analysis. Hence the need for
an error tagging manual, which defines, describes and illustrates the error tagging
procedures.
Insertion of error tags and corrections into the text files is a very time-consuming
process. An MS Windows error editor, UCLEE (Université Catholique de Louvain
Error Editor) was developed to speed up the process.1 By clicking on the appro-
priate tag from the error tag menu, the analyst can insert it at the appropriate point
in the text. Using the correction box, he/she can also insert the corrected form with
the appropriate formatting symbols. If necessary, the error tag menu can be changed
by the analyst. Fig. 1 gives a sample of error-tagged text and Fig. 2 shows the screen
which is displayed when the analyst is in the process of error editing a GADVO
error, i.e. an adverb order error (easily jumping capitalized in the text for clarity's
sake). The figure displays the error tag menu as well as the correction box.


Fig. 2. Error editor screen dump.

1 UCLEE was written by John Hutchinson from the Department of Linguistics, University of
Lancaster. It is available from the Centre for English Corpus Linguistics together with the Louvain Error
Tagging Manual (Dagneaux et al., 1996).

complemented by other (XNPR) approaches of $approaches to$ the subject. The written
are concerned. Yet, the (XNPR) aspiration to $aspiration for$ a more equitable society
can walk without paying (XNPR) attention of $attention to$ the (LSF) circulation $traffic$.
could not decently take (XNPR) care for $care of$ a whole family with two half salaries
be ignored is the real (XNPR) dependence towards $dependence on$ television.
are trying to affirm their (XNPR) desire of $desire for$ recognition in our society.
such as (GA) the $a$ (XNPR) drop of $drop in$ meat prices. But what are these
decisions by their (XNPR) interest for $interest in$ politics. As a conclusion we can
hope to unearth the (XNPR) keys of $keys to$ our personality. But (GVT) do scientists
and (GVN) puts $put$ (XNPR) limits to $limits on$ the introduction of technology in their
This dream, or rather (XNPR) obsession of $obsession for$ power of some leaders can

Fig. 3. Output of search for XNPR.

Once inserted into the text files, error codes can be searched using a text retrieval
tool. Fig. 3 is the output of a search for errors bearing the code XNPR, i.e. lexico-
grammatical errors involving prepositions dependent on nouns.
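Any standard search tool can perform this retrieval, but a few lines of code reproduce the Fig. 3 style of output. The sample lines below are shortened, hand-copied versions of the concordance above; the function is our own sketch, not the retrieval software used in the project.

```python
import re

def retrieve(code, lines):
    """Return every line containing an error tagged with the given code."""
    pattern = re.compile(r"\(%s\)" % re.escape(code))
    return [line for line in lines if pattern.search(line)]

lines = [
    "complemented by other (XNPR) approaches of $approaches to$ the subject",
    "can walk without paying (XNPR) attention of $attention to$ the traffic",
    "I lay down (LS) in $on$ the moss, among the wild flowers",
]
hits = retrieve("XNPR", lines)   # keeps the first two lines only
```

Because the tag sits inline next to the error, the retrieved line is already a concordance line showing the error in context.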

4. Error tagging a corpus of French learner EFL writing

Error-tagged learner corpora are a valuable resource for improving ELT materials.
In this section we will report briefly on some preliminary results of a research project
carried out in Louvain, in which error tagging has played a crucial role. Our aim is
to highlight the potential of CEA and its advantages over both traditional EA and
word-based (rather than error-based) computer-aided retrieval methods.
The aim of the project was to provide guidelines for an EFL grammar and style
checker specially designed for French-speaking learners. The first stage consisted in
error tagging a 150,000-word learner corpus. Half of the data was taken from the
International Corpus of Learner English database, which contains approximately
2 million words of writing by advanced learners of English from 14 different mother
tongue backgrounds (for more information on the corpus, see Granger, 1996, 1998).
From this database we extracted 75,000 words of essays written by French-speaking
university students. It was then decided to supplement this corpus with a similar-
sized corpus of essays written by intermediate students. 2 The reason for this was
two-fold: first, we wanted the corpus to be representative of the proficiency range
of the potential users of the grammar checker; and second, we wanted to assess
students' progress for a number of different variables.
Before turning to the EA analysis proper, a word about the data is in order. In
accordance with general principles of corpus linguistics, learner corpora are compiled
on the basis of strict design criteria. Variables pertaining to the learner (age, language
background, learning context, etc.) and the language situation (medium, task type,
topic, etc.) are recorded and can subsequently be used to compile homogeneous

2 "Intermediate" here is to be interpreted as "higher intermediate". Learners in this group are
university students of English who have had approximately 1 year of English at university level
and 4 years at school.

Fig. 4. Breakdown of the major error categories: Grammar 32%; Lexis 30%; Word
redundant/word order/word missing 12%; Form 9%; Lexico-Grammar 7%; Register 5%;
Style 5%.

corpora. The two corpora we have used for this research project differ along one
major dimension, that of proficiency level. Most of the other features are shared: age
(approximately 20 years old), learning context (EFL, not ESL), medium (writing),
genre (essay writing), length (approximately 500 words, unabridged). Although, for
practical reasons, the topics of the essays vary, the content is similar in so far as they
are all non-technical and argumentative. One of the major limitations of traditional
EA (Limitation 1) clearly does not apply here.
A fully error-tagged learner corpus makes it possible to characterize a given learner
population in terms of the proportion of the major error categories. Fig. 4 gives this
breakdown for the whole 150,000-word French learner corpus. While the high pro-
portion of lexical errors was expected, the number of grammatical errors, in what
were untimed activities, was a little surprising in view of the heavy emphasis on
grammar in the students' curriculum. A comparison between the advanced and the
intermediate group in fact showed very little difference in the overall breakdown.
A close look at the subcategories brings out the three main areas of grammatical
difficulty: articles, verbs and pronouns. Each of these categories accounts for
approximately a quarter of the grammatical errors (27% for articles and 24%
for pronouns and verbs, respectively). Further subcoding provides a more detailed
picture of each of these categories. The breakdown of the GV category brings out
GVAUX as the most error-prone subcategory (41% of GV errors). A search for
all GVAUX errors brings us down to the lexical level and reveals that can is the
most problematic auxiliary. At this stage, the analyst can draw up a concordance of
can to compare correct and incorrect uses of the auxiliary in context and thereby get
a clear picture of what the learner knows and what he/she does not know and
therefore needs to be taught. This shows that CEA need not be guilty of Limitation
4: non-errors are taken on board together with errors.
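The drill-down described here, from GV to GVAUX to the individual auxiliary, is easy to automate once the corpus is error-tagged. In the sketch below the error triples are invented for illustration; in practice they would be parsed out of the tagged files.

```python
from collections import Counter

# Invented sample of (code, erroneous form, correction) triples -- stand-ins
# for the output of the tagging stage, not real corpus data.
gvaux_errors = [
    ("GVAUX", "can", "could"),
    ("GVAUX", "can", "may"),
    ("GVAUX", "must", "have to"),
]

# Count how often each auxiliary is involved in a GVAUX error.
by_auxiliary = Counter(err for _, err, _ in gvaux_errors)
most_problematic = by_auxiliary.most_common(1)[0][0]
```

The same `Counter` pattern works one level up (subcategory shares within GV) and one level down (individual word forms), which is what makes the hierarchical tag set convenient to query.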
Another limitation of traditional EA (Limitation 5) can be met if corpora repre-
senting similar learner groups at different proficiency levels are compared. 3 In our
3 It is true however that all forms of EA are basically product-oriented rather than process-oriented
since the focus is on accuracy in formal features. In our view, the two approaches should not be viewed as
opposed but as complementary: form-based activities can--indeed should--be integrated into the overall
process of writing skill development.

Table 1
Error category breakdown in intermediate and advanced learners

Error category              Progress rate (%)   Errors: intermediate   Errors: advanced
Word redundant              82.1                 84                     15
Lexis: single               63.9                 1048                   378
X: complementation          63.3                 120                    44
Register                    62.8                 801                    298
Style                       61.8                 325                    124
Form: spelling              57.2                 577                    247
Form: morphology            57.1                 56                     24
Lexis: phrases              56.7                 467                    202
Lexis: connectives          56.1                 528                    232
Grammar: verbs              55.1                 534                    240
Grammar: adjectives         52.9                 34                     16
Grammar: articles           50.4                 581                    288
Word order                  48.4                 126                    65
Word missing                44.3                 115                    64
Grammar: pronouns           40.7                 477                    283
X: dependent preposition    39.4                 198                    120
Grammar: word class         36.9                 65                     41
Grammar: nouns              27.2                 250                    182
Grammar: adverbs            22.0                 82                     64
X: countable/uncount.       15.7                 83                     70

Total                                            6551                   2997

project, we compared French-speaking university students of English at two differ-
ent stages in their curriculum separated by a 2-year gap. Table 1 gives the results of
this comparison with the error categories classified in decreasing order of progress
rate. The table shows that there is an undeniable improvement with respect to the
overall number of errors: advanced students have half as many errors as the inter-
mediate ones. However, it also shows that progress rate differs markedly according
to the error category. It ranges from 82.1% to only 15.7% and the average progress
rate across the categories is 49.7%.
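The progress rate in Table 1 is simply the percentage drop in error counts from the intermediate to the advanced corpus; since the two subcorpora are of comparable size (roughly 75,000 words each), raw counts can be compared directly. A two-line check against the table:

```python
def progress_rate(intermediate, advanced):
    """Percentage drop in errors from intermediate to advanced corpus."""
    return round(100 * (intermediate - advanced) / intermediate, 1)

word_redundant = progress_rate(84, 15)   # top row of Table 1: 82.1
count_uncount = progress_rate(83, 70)    # bottom row of Table 1: 15.7
```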
For ELT purposes such results provide useful feedback. For us, it was particularly
interesting and heartening to note that the lexico-grammatical category that we
spent most time on--that of complementation--fared much better (63.3%) than the
other two--dependent prepositions (39.4%) and count/uncount nouns (15.7%),
which we clearly have to focus on more in future. Similarly, a comparison of the
breakdown of the GV category in the two groups (Figs. 5 and 6) shows that
the order of the topmost categories is reversed: auxiliaries is the most prominent GV
category in the intermediate group but is superseded by the tense category in the
advanced group. In other words, progress is much more marked for auxiliaries (67%)
than for tenses (35%). This again has important consequences for syllabus design.
As the learner data is in machine-readable form, text retrieval software can be
used to search for specific words and phrases and one might wonder whether this

Fig. 5. Number of verb errors in intermediate corpus (GVAUX: 239; GVT: 150; GVNF: 57;
GVN: 54; GVM: 21; GVV: 13). GVAUX: auxiliary errors; GVT: tense errors; GVNF:
finite/non-finite errors; GVN: concord errors; GVM: morphology errors; GVV: voice errors.

Fig. 6. Number of verb errors in advanced corpus (GVT: 98; GVAUX: 78; GVNF: 34;
GVN: 18; GVM: 8; GVV: 4).

method might not be a good alternative to the time-consuming process of error
tagging. A search for can, for instance, would retrieve all the instances of the word--
erroneous or not--and the analyst could easily take this as a starting-point for
analysis. However, this system suffers from a number of weaknesses. It is really
only applicable to closed class items, such as prepositions or auxiliaries. When the list of
items is limited, it is possible to search for all the members of the list and analyze the
results on the basis of concordances. Things are much more complex when it comes
to open class items. The main question here is to know which words one should
search for. It is a bit like looking for a needle in a haystack! To find the needle(s) the
analyst can fall back on a number of methods. He/she can compare the frequencies
of words or phrases in a native and non-native corpus of similar writing. Items

which are significantly over- or underused may not always be erroneous but often
turn out to be lexical infelicities in learner writing. This technique has been used with
success in a number of studies 4 which clearly shows that CLC research, whether it
involves error tagging or not, has the means of tackling the problem of avoidance in
learner language, something traditional EA failed to do. Yet most lexical errors
cannot be retrieved on the basis of frequency counts. Obviously teachers can use
their intuitions and search for words they know to be problematic, but this deprives
the learner corpus of a major strength, namely its heuristic power, its potential to
help us discover new things about learner language. A fully error-tagged corpus
provides access to all the errors of a given learner group, some expected, others
totally unexpected. So, for instance, errors involving the word as proved to involve
many fewer instances of the type shown in Example 1, which are illustrated in all
books of common errors, than of the type shown in Example 2, which are not.
• Example 1. *As (like) a very good student, I looked up in the dictionary ...
• Example 2 . . . . with university towns *as (such as) Oxford, Cambridge or
Leuven...
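The frequency-comparison method mentioned above can be sketched as follows. Published studies apply significance tests such as chi-square or log-likelihood before drawing conclusions; the plain ratio below is only a first filter, and all the counts are invented for the example.

```python
def relative_freq(count, corpus_size):
    """Occurrences per word of running text."""
    return count / corpus_size

def overuse_ratio(learner_count, learner_size, native_count, native_size):
    """Ratio > 1 suggests overuse by learners, < 1 suggests underuse."""
    return (relative_freq(learner_count, learner_size)
            / relative_freq(native_count, native_size))

# e.g. a connector occurring 300 times in 150,000 learner words against
# 80 times in 200,000 native words of comparable writing
ratio = overuse_ratio(300, 150_000, 80, 200_000)
```

Items flagged this way are not necessarily erroneous, which is exactly the point made above: over- and underuse capture infelicities and avoidance that error counts alone miss.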
Error tagging has two other major advantages over the text retrieval method.
First, it highlights non-native language forms, which the retrieval method fails to do.
To investigate connector usage in learner writing, one can use existing lists of con-
nectors such as that by Quirk et al. (1985) and search for each of them in a learner
corpus. However, this method will fail to uncover the numerous non-native con-
nectors used by learners, such as those illustrated in the following sentences:
• Example 3. In this point of view, television may lead to indoctrination and
subjectivity ...
• Example 4. But at the other side, can we dream about a better transition ...
• Example 5. According to me, the root cause for the difference between men and
women is historical.
Second, error tagging reveals instances when the learner has failed to use a word--
article, pronoun, conjunction, connector, etc.--something the retrieval method
cannot hope to achieve, since it is impossible to search for a zero form! In the error
tagging system, the zero form can be used to tag the error or its correction. In our
study, an interesting non-typical use of the subordinating conjunction that was
revealed in this way. In Example 6 use of that would be strongly preferred; in
Example 7, it would usually not be included.
• Example 6. It was inconceivable *0 (that) women would have entered uni-
versities.
• Example 7. How do you think *that (0) a man reacts when he hears that a
woman...
The figures revealed that the learners were seven times more likely to omit that, as
in Example 6, than they were to inappropriately include it and one can reasonably

4 For studies of learner grammar, lexis and discourse using this method, see Granger (1998).

assume that these French learners are overgeneralizing that omission. Their atten-
tion needs to be drawn to the fact that there are restrictions on that omission, and
that it is only frequent in informal style with common verbs and adjectives (Swan,
1995, p. 588).
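Because the zero form is itself tagged, the "seven times" comparison reduces to two pattern counts over the tagged files. The sketch below assumes, following the conventions visible in Fig. 1, that an omitted word is annotated as 0 with its correction and a superfluous word with a 0 correction; the tag codes and sample text are invented for illustration.

```python
import re

# Invented tagged sample: (WM) = word missing, (WR) = word redundant are our
# assumed codes for the two directions of the that error.
tagged = ("It was inconceivable (WM) 0 $that$ women would have entered universities. "
          "She said (WM) 0 $that$ it was late. "
          "How do you think (WR) that $0$ a man reacts?")

omitted = len(re.findall(r"\(WM\) 0 \$that\$", tagged))     # that left out
redundant = len(re.findall(r"\(WR\) that \$0\$", tagged))   # that wrongly used
```

A word-based search could never produce the `omitted` count, since there is no surface form to search for; only the tag makes the zero form retrievable.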

5. Conclusion

EA was--and still is--a worthwhile enterprise. Its basic tenets still hold but its
methodological weaknesses need to be addressed. More than 25 years ago, in his
plea for a more rigorous analysis of foreign language errors, Abbott (1980) called
for "the rigour of an agreed analytical instrument".5 CEA, which has inherited the
methods, tools and overall rigour of corpus linguistics, brings us one such instru-
ment. It can be used to generate comprehensive lists of specific error types, count
and sort them in various ways and view them in their context and alongside in-
stances of non-errors. It is a powerful technique which will help ELT materials
designers produce a new generation of pedagogical tools which, being more "learner
aware", cannot fail to be more efficient.

Acknowledgements

We gratefully acknowledge the support of the Belgian National Scientific
Research Council (FNRS) and the Ministry of Education of the French-speaking
Community of Belgium. We would also like to thank the anonymous reviewer for
insightful comments on an earlier version of this article.

References

Abbott, G., 1980. Towards a more rigorous analysis of foreign language errors. IRAL 18(2), 121-134.
Dagneaux, E., Denness, S., Granger, S., Meunier, F., 1996. Error Tagging Manual. Version 1.1. Centre
for English Corpus Linguistics, Université Catholique de Louvain, Louvain-la-Neuve.
Ellis, R., 1994. The Study of Second Language Acquisition. Oxford University Press, Oxford.
Granger, S., 1996. Learner English around the world. In: Greenbaum, S. (Ed.). Comparing English
Worldwide. Clarendon Press, Oxford, pp. 13-24.
Granger, S., 1998. The computer learner corpus: a versatile new source of data for SLA research. In:
Granger, S. (Ed.). Learner English on Computer. Addison Wesley Longman, London, pp. 3-18.
Granger, S., Meunier, F., 1994. Towards a grammar checker for learners of English. In: Fries, U., Tottie,
G. (Eds.). Creating and Using English Language Corpora. Rodopi, Amsterdam, pp. 79-91.
Harley, B., 1980. Interlanguage units and their relations. Interlanguage Studies Bulletin 5, 3-30.
Leech, G., 1998. Learner corpora: what they are and what can be done with them. In: Granger, S. (Ed.).
Learner English on Computer. Addison Wesley Longman, London, pp. xiv-xx.

5 Cf. Abbott (1980, p. 121): "It is difficult to avoid the conclusion that without the rigour of an agreed
analytical instrument, researchers will tend to find in their corpuses ample evidence of what they expect to
find".

Meunier, F., 1998. Computer tools for the analysis of learner corpora. In: Granger, S. (Ed.). Learner
English on Computer. Addison Wesley Longman, London, pp. 19-37.
Milton, J., 1994. A corpus-based online grammar and writing tool for EFL learners: a report on work in
progress. In: Wilson, A., McEnery, T. (Eds.). Corpora in Language Education and Research. Unit for
Computer Research on the English Language, Lancaster University, pp. 65-77.
Quirk, R., Greenbaum, S., Leech, G., Svartvik, J., 1985. A Comprehensive Grammar of the English
Language. Longman, London.
Scholfield, P., 1995. Quantifying Language. Multilingual Matters, Clevedon.
Spillner, B., 1991. Error Analysis. A Comprehensive Bibliography. John Benjamins, Amsterdam.
Swan, M., 1995. Practical English Usage. Oxford University Press, Oxford.
van Els, T., Bongaerts, T., Extra, G., van Os, C., Janssen-van Dieten, A.M., 1984. Applied Linguistics and
the Learning and Teaching of Foreign Languages. Edward Arnold, London.
