
MASARYK UNIVERSITY BRNO

Faculty of Education
Department of English Language and Literature

Testing Grammar: Using Multiple Choice Tests


versus Translation

Bachelor thesis

Brno 2007

Supervisor: Dr. Rita Collins, Ed.D.
Written by: Lenka lbkov
I declare that I worked on this thesis independently, using only the primary and
secondary sources listed in the bibliography.

I agree with this bachelor thesis being deposited in the Library of the Faculty of
Education at Masaryk University and being made available for study purposes.


Lenka lbkov

Acknowledgements

I would like to thank Dr. Rita Collins, Ed.D., for her kind guidance and valuable,
professional advice.

CONTENTS

1. Introduction ... 6
2. Language testing ... 8
   2.1 Introduction to language testing ... 8
      2.1.1 Reasons for testing ... 8
      2.1.2 What is a good test? ... 9
   2.2 Writing tests ... 10
      2.2.1 Rules of designing tests ... 10
      2.2.2 Administration ... 11
      2.2.3 Test instructions to students ... 12
   2.3 Direct and indirect test item types ... 13
      2.3.1 Multiple choice tests ... 14
      2.3.2 Constructing multiple choice items ... 16
   2.4 Describing learners ... 17
      2.4.1 Age ... 17
      2.4.2 Language levels ... 18
      2.4.3 Individual differences ... 18
   2.5 Scoring and interpreting test scores ... 18
      2.5.1 Summarising the scores ... 19
   2.6 Translation ... 20
3. Testing grammar ... 22
   3.1 Introduction to the research: motivation and the methods ... 22
   3.2 Why do I test? ... 23
   3.3 Administration of the multiple choice test and the translation ... 24
   3.4 A sample of tested students and conditions of testing ... 25
   3.5 The research analysis ... 26
      3.5.1 Total scores of multiple choice tests versus scores of translation ... 27
      3.5.2 Different order of filling in multiple choice tests and translation ... 28
      3.5.3 Gaps between the individual results ... 31
      3.5.4 Choceň versus Havlíčkův Brod ... 32
4. Conclusion ... 36
Resumé ... 38
Bibliography ... 40
Appendices: sample tests ... 41

1. Introduction

I have been working as a teacher of English for five years. During that time I have
noticed that it is necessary to test children in Czech schools; if we did not test them at
all, they would not prepare for English lessons. On the other hand, this does not mean
that testing is the main way to make students learn more. Positive motivation is far more
important, although good results in tests may also be a kind of positive motivation.
My students have used translation as a practising and testing device since the fourth
grade and they are quite successful at it. They usually translate from Czech into English
in writing and from English into Czech orally, just to check that they have understood
well. In contrast, they are more successful at completing multiple choice items, since
they may also guess and it is probably easier work for them. I have found many times
that students are able to choose the correct answer within a sentence, yet even a short
time later, if they are asked to say or write the whole sentence in English without seeing
it in the test, they frequently make many mistakes.
That is the reason why I decided to investigate how this process would turn out in
other classes. I chose the eighth grade because the students there should be familiar with
the two basic tenses which I focused on: Present Simple and Present Continuous. As
those two tenses have been taught since the fifth grade, I expected quite high scores with
both of the testing means used: multiple choice tests and translation.
I would like to prove that the translation method is a very useful kind of testing
because it shows more about students' knowledge and their ability to apply the language
than multiple choice tests do. That is also why I expect the students to achieve a higher
score in the multiple choice tests than in creating their own complete sentences: in the
first case they may simply guess.
In addition, there is another issue on which I have focused: the first half of the
sample will fill in the multiple choice items first and after that translate the whole
sentences; the second half will do it the opposite way. Here I would like to prove that
the students who fill in the multiple choice test first and the translation afterwards will
reach a higher score than the students who do it the opposite way, as they may remember
the sentence constructions from the test or simply recall the grammar, which helps them
in the translation.

There are 132 students being tested; all of them attend the eighth grade of primary
school and they come from two schools. Their level should be the same according to the
Czech curricula. I would also like to compare the results of the two groups of students
from the two schools and see whether there are any noticeable differences, though there
should not be any great difference, since all the students attend general classes without
any specialisation at ordinary primary schools.

2. Language testing

2.1 Introduction to language testing

2.1.1 Reasons for testing


According to Heaton's study (Classroom Testing 9), there are many reasons for
giving a test, but however the teacher prepares a test, he or she must think of the real
purpose of the test which is given to the students.
One of the most important reasons is to find out how well the students have
mastered the language areas and skills that have just been taught. These tests look back
and are called Progress tests. If the teacher tests what has recently been taught and also
practised, then there is an expectation that students will score fairly high marks. If this
does not happen and many students fail, something must be wrong: either with the
teaching itself, the materials, or the students' understanding of the syllabus. Progress
tests at the end of a unit or a semester should also reflect progress, not failure. The
teacher should try to give progress tests regularly, but it is very important to avoid
over-testing, which de-motivates the students. "The best progress test is one which
students do not recognise as a test but see as simply an enjoyable and meaningful
activity," states Heaton (9).
On the other hand, a classroom test does not focus only on students' knowledge of,
or problems with, the language; furthermore, it is concerned with evaluation for the
purpose of enabling teachers to increase their own effectiveness by making adjustments
in their teaching. As a result, such classroom tests should enable groups of students or
individuals in the class to benefit more. The inquiry helps the teacher to select and
evaluate the effectiveness of the syllabus as well as the methods and materials that he or
she uses. Furthermore, the test results may indicate whole areas of the language syllabus
which have not been understood well by the class. If, for example, eight or more
students make a similar mistake in the Present Perfect Tense, the teacher should take
this problem area into account when planning further teaching.
In addition, if the teacher wants to look for and diagnose students' real weaknesses
and problems with the language, he or she will use Diagnostic tests, usually as part of
a progress test. In order to find out what students' weaknesses are, teachers must be
systematic when designing the test and select areas where problems are likely to be.
Diagnostic tests may test grammar, of course, but they may certainly also be used to
diagnose difficulties involving language skills and sub-skills, etc. Moreover, diagnostic
tests are essential if teachers want to evaluate their teaching. Teachers may also evaluate
the materials they are using according to the results of the diagnostic tests. Problems and
difficulties found in this way may lead to the improvement of the teacher's presentation
and explanation, or to the provision of more practice for the students.
Further, Heaton notes that another important function of testing is to encourage
students. Because most people like doing things at which they are good, the test should
be made to show students' progress; on the other hand, students will usually be good at
the things which they like. This means that a well-constructed classroom test may be
used effectively to motivate students. What is more, if the details of their performance
are given as soon as possible after the test, the students should be able to learn from
their weaknesses. Heaton concludes that in these ways a good test can be used as a
valuable teaching device (Heaton, Classroom Testing 9-13, Writing English Language
Tests 6-7).
To conclude this overview of test types, Placement tests and Proficiency tests should
also be mentioned. Placement tests are commonly used for placing new students in the
right class or for dividing students into groups. These tests usually test grammar and
vocabulary knowledge and assess students' productive and receptive skills. Harmer
describes Proficiency tests: "They give a general picture of students' knowledge and
ability. They are frequently used as stages people have to reach if they want to be
admitted to a foreign university, get a job, or obtain some kind of certificate" (Harmer,
The Practice of English Language Teaching 321).

2.1.2 What is a good test?

Jeremy Harmer, in the chapter "Testing students" (322), mentions two basic
characteristics of a good test: validity and reliability. According to Harmer, a test is
valid if it tests what it is supposed to test. It is not valid, for example, if it tries to test
writing ability with an essay question that requires specialist knowledge of history or
biology. In addition, to be reliable, a good test should give consistent results. For
example, if the same group of students took the same test twice within two days, without
reflecting on the first test before they wrote it again, they should get the same results on
each occasion. In practice, these principles should be kept: make the test

instructions absolutely clear, restrict the scope for variety in the answers, and make sure
that test conditions remain constant (Harmer 322).
According to Davies and Pearse's study (Success in English Teaching 172), there
are further principles of validity and reliability. First, they describe validity as the
principle that the grammar, vocabulary, and functional content of a test should be
carefully selected on the basis of the course syllabus. This is logical and fair: if the
learners have not practised the Passive Voice, they should not be tested on it. Further,
validity also means that the exercises and tasks in a test should be similar to those used
in the course. If the students have never practised translating in the lessons, they should
not have to translate a passage in the test.
Then they explain the principles of reliability; a specific test exercise or task is
normally reliable when:
- The instructions are clear and unambiguous for all the students.
- The exercise or task controls to some extent how the students respond; for
example, it should be clear in "fill the gap" exercises whether a single word or
a phrase is required.
- There are no errors in the test; for example, if the students have to select the
best answer (a, b, c, or d), there should not actually be two or more acceptable
answers.
(Davies and Pearse, Success in English Teaching 173)

Finally, they conclude that the reliability of a test also depends on its length and on
how it is administered. A long test is usually more reliable than a short one: any test
provides a sample of a student's English, and a large sample is, of course, more reliable
(Davies and Pearse 172-174).

2.2 Writing Tests

2.2.1 Rules of designing tests

Concerning the rules of writing tests, Harmer says that there are a number of things
the teacher needs to do before designing a test and then giving it to students (327).

First, the teacher needs to have in mind the context in which the test takes place. He
or she has to decide how much time should be given to the test taking, when and where
it will take place.
Second, the teacher has to list what he or she wants to include in the test. Moreover,
it is important to include a representative sample from across the whole list. Thus,
students' success or failure with those items will be a good indicator of how well they
have learnt all of the language they have studied.
Another point concerns the balance of the elements in a test. For example, a 200-
item multiple choice test with one short real-life writing task suggests that the teacher
thinks multiple choice questions are a better way of finding out about students'
knowledge than writing tasks would be. Therefore, the amount of space and time we
give to the various elements may also reflect their importance in our teaching (Harmer
328).
The fourth rule is connected with the previous one: while balancing the elements of
the test, we should also think about how many marks are given to each section of the
test, which again shows the importance of each element.
Lastly, to avoid problems with the test items, the teacher can get fellow teachers to
try them out. They frequently spot problems which the teacher is not aware of, and they
may come up with possible answers and alternatives that have not been anticipated
(Harmer 327-328).

2.2.2 Administration

According to Heaton's study (Writing English Language Tests 167), a test must be
practical; in other words, it must be fairly straightforward to administer. It is not unusual
for the teacher to be so absorbed in the actual construction of test items that, as a result,
he or she overlooks the practical considerations of the test.
The first important thing the teacher should bear in mind is the length of time
available for the administration of the test, which should be planned carefully since it is
frequently misjudged even by experienced test writers. The teacher must take into
account not only the administration of the test but also the reading of the test
instructions, the collection of the answer sheets, etc. That is why he or she should give
the students sufficient time for all those activities. In addition, it may be very useful to
try out the test on a small but representative group of testees.
Further, Heaton mentions another practical consideration, which concerns the
answer sheets and the stationery used. There are two ways to enter the answers: the
students may be asked to enter their answers directly on the question paper (e.g. by
circling the letter of the correct option), or they write their answers on a separate answer
sheet. However, there can be disadvantages in using a separate answer sheet; insufficient
thought may be given to possible errors arising from the mental transfer of the answer
from the context of the item on the question paper to the answer sheet itself. Confusion
appears, for example, if the distractors are listed vertically on the question paper, and so
are the questions themselves, while the possible answers on the answer sheet are
arranged horizontally:
Example: The question and distractors: You'd already left by seven
o'clock, .................. you?
A. didn't
B. weren't
C. hadn't
D. haven't
The possible answers on the answer sheets:
1. A B C D 2. A B C D 3. A B C D
On the other hand, the use of separate answer sheets is strongly recommended when
large numbers of students are being tested.
Afterward, Heaton concludes: "A final point concerns the presentation of the test
paper itself. It should be printed or typewritten and appear neat, tidy and aesthetically
pleasing. Nothing is worse and more disconcerting to the students than an untidy test
paper, full of misspellings, omissions and corrections" (Writing English Language Tests
168-169).

2.2.3 Test instructions to students

There are a few rules about test instructions, which should be followed:
- All instructions are clearly written.
- Examples are given.
- All students are able to follow the instructions.

- Grammatical terminology is avoided so students may be able to perform all the
required tasks without having any knowledge of formal grammar.
- If students are instructed to put a tick opposite the correct answer, an example of
what is meant by the word "tick" should be given (e.g. ✓).
Heaton describes why those rules are essential if the teacher wants to give the students a
good test (168).
First, the clear instructions are important since most students taking any test are
working under certain mental pressure and concentrating on complicated instructions
may be very difficult. Giving examples is also essential to help the students understand
well what they are asked to do.
Second, concerning the third point above, if not all students are able to follow the
instructions, the test will be neither reliable nor valid.
Last, grammatical terms should be rewritten, since knowledge of formal grammar is
not being tested (e.g. the term "pronouns" should be replaced by words like "the
following" and followed by examples).
Heaton further admits that sometimes it is difficult to avoid writing clumsy
instructions consisting of complex sentences (169). One possible solution may be
cutting complex sentences into short ones. Another solution is to use the students' first
language when the test group is monolingual. However, this is recommended only at the
elementary level (Writing English Language Tests 168-170).

2.3 Direct and Indirect test item types

There are two basic types of test items according to Harmer's distinction: direct and
indirect test items (322). A test item is direct if it either asks students to perform the
communicative skill which is being tested, or it tests receptive skills, and it tries to be as
much like real-life language use as possible, i.e. it includes tasks which deal with
features of real life. In real life, when people speak or write, they generally do so for a
real purpose, because they need something or because they are interested in the topic of
conversation and want to add their own ideas. Here is an example of a task testing
writing skills:
Some businesses now say that no one can smoke cigarettes in or even
near any of their offices. Some governments have banned smoking in all
public places, whether outside or inside. This is a good idea but it also
takes away some of our freedom. Do you agree or disagree? Give reasons
for your answer.
(Harmer, The Practice of English Language Teaching 325)
Tests of reading and listening skills should also reflect real life. Role-playing where
students perform tasks such as introducing themselves may be a good example of
testing speaking skills.
On the other hand, indirect test items try to measure the knowledge that lies beneath
a student's receptive and productive skills. It is discovered through more controlled
items, which are often quicker to design and, crucially, easier to mark, and which
produce greater scorer reliability.
Indirect items include multiple choice tests, cloze procedure, transformation,
paraphrase, sentence re-ordering, sentence fill-ins, finding errors in sentences, and
choosing the correct form of a word.
The cloze procedure, in its purest form, means the omission of every nth word in a
text (somewhere between every fifth and tenth word). Because the procedure is random,
it avoids test designer failings. The randomness of the omitted words also means that
almost anything may be tested (e.g. grammar, collocations, fixed phrases, reading
comprehension, etc.).
Transformation and paraphrase mean rewriting sentences in a slightly different
form:
I am sorry that I didn't get her an anniversary present.
I wish ............................................................................
In order to complete this successfully, students have to understand the first sentence,
and they have to know how to construct an equivalent sentence which is grammatically
possible.
Sentence re-ordering means putting words in the right order to make appropriate
sentences, which tells the teacher a lot about students' knowledge of syntax and lexico-
grammatical elements (Harmer 323-325).

2.3.1 Multiple choice tests

Concerning the types of indirect test items, Harmer notes that, although there is a
wide range of indirect test possibilities, certain types are in common use, such as
multiple choice questions (MCQs).

Here is the example:
How .................. sugar do you take in your coffee?
A little B few C much D many
A multiple choice item must have only one correct answer (Heaton, Classroom
Testing 96), which seems to be common sense, but it is very easy to write an item with
two correct answers. The item above, for example, has two correct answers: A as well
as the expected C.
Harmer also admits a number of problems with multiple choice questions. First,
they are extremely difficult to write well, especially as regards the design of the
incorrect choices. Second, it is possible to train students in the technique, so trained
students will probably be more successful than those who have not been trained in it.
Finally, while students' ability to answer multiple choice questions may be trained and
improved, this may not actually improve their English (Harmer 323).
Hughes also refers to the difficulty of writing multiple choice items successfully
(61). He says that, in his experience, multiple choice tests produced for use within
institutions are often full of faults. Common among these faults are: more than one
correct answer; no correct answer; clues in the options as to which is correct (e.g. a
difference in length); and lastly, the fact that the chosen answer (A, B, C, or D) is easy
to show to other students nonverbally.
Hughes doubts whether the time saved in administration and scoring outweighs the
time that has to be spent on successful test preparation. On the other hand, he also adds
the most obvious advantages of multiple choice tests: scoring can be perfectly reliable,
and it is possible to include more items than would otherwise be possible, since the
student has only to make a mark on the paper (Hughes 59-61).
However, for many years multiple choice questions were considered an ideal test
instrument for measuring students' knowledge of grammar and vocabulary (Harmer
323).
In addition, Heaton describes multiple choice questions as a device that tests the
ability to recognise sentences which are grammatically correct (96). However, this
ability is not the same as the ability to produce correct sentences. The teacher must
remember this limitation and then he or she can still find multiple choice items useful
for certain purposes, especially on a progress test, and they may be useful for finding
out more about the difficulties which students have with certain areas of grammar.

Further, wherever possible, test items should be set in context. If the teacher wants
to concentrate on a certain area of grammar, he or she should put the item into a short
two-line dialogue. This is better than providing no context at all. Thus, the item becomes
more meaningful:
Can I get you anything?
.................... a pen and a piece of paper.
A I like B I'll like C I'd like D I'm liking
It is also possible to write only three options instead of four (Heaton, Classroom Testing
96-97).

2.3.2 Constructing multiple choice items

There are four steps of constructing multiple choice items described by Heaton
(Writing English Language Tests 37-39):
Step 1: The first step is to know what the teacher wants to test and to have the sentence
testing that. The teacher may think the sentence up or use sentences and errors made by
students in their free composition and open-ended answers to questions.
I like tea but I haven't / don't like coffee.
Step 2: Next the teacher writes out the sentence substituting a blank for the area being
tested and then writes in the correct option and the distractor which the student has
provided.
I like tea but I ....................... coffee.
A don't like B haven't like
Step 3: Now the teacher adds two more distractors. Again, he or she may go to the
written work of the students to provide these distractors. But if he or she cannot find any
suitable errors without too much difficulty, the teacher uses his or her own experience
and knowledge of the target and native languages. Of course, it is necessary to be very
careful not to give more than one correct option. The positions of the distractors may
also be changed.
I like tea but I ....................... coffee.
A doesn't like B don't like C haven't like D like
Step 4: Finally the teacher should check to be sure that other options are not correct and
ask his or her fellow teachers to try the test to find possible imperfections.

2.4 Describing learners

There are three basic factors which influence our decisions about how and what to
teach: age, level, and individual differences. As these factors are important for teaching
language, they will also be important for assessment and testing (The Practice of
English Language Teaching 37).

2.4.1 Age

Students of different ages have different needs, competences and cognitive skills.
For example, children of primary age are better educated through play, whereas with
adult learners there may be greater use of abstract thought. According to age, students
may be divided into three groups: young children, adolescents, and adult learners. These
three groups differ in many ways, especially in the learning process.
First, young children usually respond to meaning even if they do not understand
individual words, and they often learn indirectly rather than directly. On the other hand,
teachers must take due note of young children's limited attention span: they may lose
interest after ten minutes or so.
Second, adolescents, as the methodologist Penny Ur suggests, are in fact the best
language learners (Ur 1996: 286). On the other hand, they are often much less motivated
than young children and they very often have discipline problems. However, if teenagers
are engaged, they have a great capacity to learn, a great potential for creativity, and they
love doing things which interest them.
Last, teachers of adult learners know that these learners can engage in abstract
thought and have a lot of life experience to draw on, which also enables teachers to use
a wide range of activities with them. On the other hand, teachers must take into account
that such learners may have experienced failure or criticism at school, which can make
them anxious about making mistakes and under-confident about learning (The Practice
of English Language Teaching 40).

2.4.2 Language levels

According to Harmer's distinction, students are generally described at three levels:
beginner, intermediate, and advanced (44). Between beginner and intermediate, we often
talk about an elementary level. The intermediate level is further divided into lower,
upper, and even mid-intermediate. At this point, public examinations help us to place
students at the right levels. The scores they get, together with our own experience and
intuition, allow us to use level labels with discrimination. According to these levels, we
adapt our teaching techniques, testing devices, language, topics, and other issues.

2.4.3 Individual differences

Concerning students' individual differences, teachers face different learner types and
styles. They should satisfy the many different students in front of them and focus on
their individual strengths with appropriately designed activities. Testing devices should
also aim at producing the best results for each of the students. Furthermore, teachers
have to recognise students as individuals as well as members of a group. Even when
classes have been established according to students' levels, not everyone in the group
will have the same knowledge of English. Looking at students' scores on different tests,
together with monitoring their progress through observation, will help us to ascertain
their language level and will tell us who needs more or less help in the class (The
Practice of English Language Teaching 48).

2.5 Scoring and interpreting test scores

Concerning grammar tests, there is an important rule which must be kept: it must be
absolutely clear what each item is testing, and points must be awarded for that only. For
example, when the teacher wants to test the knowledge of articles, all available points
should be awarded for that; nothing should be deducted for non-grammatical errors or
for errors in grammar which is not being tested. For instance, a student should not be
penalised for a missing third person -s when the item is testing relative pronouns. To
conclude, for valid and reliable scoring of grammar items, careful preparation of the
scoring key is necessary (Hughes 145).

2.5.1 Summarising the scores

Marks awarded by counting the number of correct answers in the test are known as
raw marks. Another possibility is to make a histogram, where the vertical dimension
indicates the number of students scoring within a particular range of scores and the
horizontal dimension shows what these ranges are. It makes clear what the outcome of
setting particular pass marks will be (how many students will pass, fail, or be on the
borderline). For these reasons, it is always advisable to make such a diagram, at least for
totals (Hughes 157).

[Figure: histogram of the scores of 108 students on a 15-item test. The horizontal axis
shows the score (1-15); the vertical axis shows the number of candidates.]

There are many other ways to summarise and analyse grammar test scores, for
example, measures of central tendency, measures of dispersion, or calculating the index
of difficulty of individual items (Heaton, Writing English Language Tests 174-180).
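As a brief illustration, these are the standard formulas behind the measures just
mentioned (they are common statistical definitions, not quotations from Heaton): the
mean, the standard deviation, and the facility value, which corresponds to the index of
difficulty of a single item.

    \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
    s = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}, \qquad
    FV = \frac{R}{N}

Here x_i is the score of the i-th student, N is the number of students, and R is the number
of students who answered the given item correctly; the closer FV is to 1, the easier the
item.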

2.6 Translation

Alan Maley, who edited a series of resource books for teachers, introduces Duff's
book about translation. Translation has long been considered uncommunicative, boring,
pointless and difficult, and it has the disadvantage of being too closely associated with
grammar. Only recently, Maley adds, as the communicative movement has begun to run
short of ideas, has there been an interest in traditional practices such as translation.
Methodologists have asked the question: could it be that it offers some useful purpose
after all? "Judging by the activities in Alan Duff's book, the answer has to be yes,"
admits Alan Maley. He further adds that its great originality lies in shifting the emphasis
from learning translation as a set of skills to using translation as a means of promoting
language learning (Maley 3).
Duff describes five possible reasons why translation has fallen out of favour with
the language teaching community (5):
1. It develops only two skills, reading and writing; it is not a communicative
activity.
2. It is not suitable for classroom work because the students must do it on their
own; it is also time-consuming.
3. It is associated with a different kind of language and with the use of literary or
scientific texts, so it is not suitable for the general needs of the language learner.
4. Use of the mother tongue is required, and this is undesirable.
5. It is boring, both to do and to correct.
On the other hand, to support his own position, Duff describes several reasons to
use translation in the classroom (6-10).
First, he supports the use of the mother tongue. All people have a mother tongue, or
first language, which shapes their way of thinking. Translation may help students to
better understand the influence of one language on the other, and as a result it enables
them to explore the potential of both languages: their strengths and weaknesses.
Second, translation is a natural and necessary activity. Outside the classroom, in
offices, banks and shops, there is a need for translation all the time. Why not inside the
classroom? Moreover, language competence is a two-way, not a one-way, system: we
must be able to communicate both ways, into and from the foreign language. In
textbooks the emphasis is on working in the foreign language, yet there should be more
emphasis on how to communicate back into the mother tongue, and translation is
definitely suitable for practising this skill.
In addition, translation develops three qualities essential to all language learning:
accuracy, clarity and flexibility. It trains the student to search (flexibility) for the most
appropriate words (accuracy) to convey what is meant (clarity). The teacher can select
material which illustrates particular aspects of language with which the students have
difficulty in English (e.g. prepositions, if-clauses, the passive). As a result, the students
can see the link between language (grammar) and usage.
To conclude, translators will always be needed. Without them, there would be no
summit talks, no Cannes Film Festival, no Olympic Games, etc. It is either professionals
or students of language who do all this necessary work, and only translation in schools
can give them the training they need (Duff 6-10).
On the whole, Duff states: "Yet today translation is largely ignored as a valid
activity for language practice and improvement. And even where it is still retained it
tends to be used not for language teaching, but for testing" (Translation 5).

3. Testing Grammar

3.1 Introduction to the research: motivation and the methods

The practical part is focused on two common testing devices: Multiple choice tests
and Translation; and two basic tenses are being tested: Present Simple and Present
Continuous.
Multiple choice tests are not so commonly used at primary schools, but they are
used as a device for entrance exams at secondary schools, language schools, higher
schools and universities; therefore pupils and students should be used to them. As I have
shown in the previous chapter, according to Harmer's experience and study, students
may be trained to manage multiple choice tests well, which is actually advantageous for
them if they want to pass an entrance exam. On the other hand, this type of testing is not
of much use to teachers who want to measure students' knowledge, achievement and
progress.
Translation, on the other hand, may reveal that the students are not able to create
sentences as a whole, although they may succeed in choosing the correct answers in
multiple choice tests, which may suggest that they are simply good at guessing.
Nevertheless, this kind of testing may be useful practice for getting familiar with basic
grammatical items and for remembering the basic rules and all the components of, for
example, the Present Simple.
In fact, there are many objections against using the translation method. First,
students should not translate word by word from Czech into English, which they usually
try to do at the beginning, as in most cases this is not even a possible way of using the
language. Second, students should start to think about what they want to say directly in
English as soon as possible, which means being able to imagine the whole phrase in
English (e.g. it is not really possible to translate the Czech sentence "Jak se jmenuješ?"
word by word; students must know the English version, which in Czech sounds like
"Jaké je tvoje jméno?"). Lastly, it is considered a difficult task, especially for young
children, to translate whole sentences instead of only filling in gaps or choosing the right
options.
According to my experience as a student as well as a teacher, translation is
commonly used in Czech schools, and in addition, there are language books in which we
can find translation exercises in each chapter (e.g. Zábojová, E., Peprník, J., and
Nangonová, S., Angličtina pro jazykové školy). I personally consider translation a very
useful and practical method; moreover, I agree with Duff's opinion that language
competence is a two-way, not a one-way, system and we need to be able to communicate
both ways: into and from the foreign language (Duff 6).
As a teacher I have used the translation method from the very beginning, as a
learning practice as well as a testing device. During the learning and practising process,
the children should be able to create whole sentences on their own and also translate
sentences from English into Czech. When the children do an exercise in their
workbooks, I usually ask them to translate the sentences into Czech, since many times,
when I did not ask them to translate, I found out that they did not know at all what they
had written about or what the sentences meant, even though they had completed them
with the correct word.
I also apply multiple choice tests, even though I endorse Heaton's opinion that
multiple choice items test the ability to recognise sentences that are grammatically
correct. This ability is also very useful in the learning process, as long as we realise that
it is not the ability to produce correct sentences. If we are aware of this limitation, we
can find multiple choice tests really useful for certain purposes. I use them especially
when I want to find out more about the difficulties in certain areas of grammar or to test
formal parts of grammar, such as the third person singular in the Present Simple.
To conclude, the hypothesis of the research is: although students choose the correct
options in the multiple choice test and may achieve a high score, they will achieve a
lower score when translating the same sentences which appear in the multiple choice
test. As a result, I would like to show that translation reveals more about students'
knowledge of the language.

3.2 Why do I test?

In my opinion and experience, there is a general need to test students in the Czech
Republic, especially at a younger age. I have observed that students' effort to study hard
has declined rapidly in recent years. Furthermore, I think that if teachers did not test
students' knowledge, they would notice neither students' progress and achievement nor
their study effort.
In addition, testing is also a useful device for finding any problems students have
with the language. I do not think teachers test to see students' failure; all teachers want,
or should want, to see students' success. Especially after presenting and practising a
language item, the teacher likes to see that students have understood it well. Moreover,
the results of the tests may point to the teacher's failure: for example, if most of the
students did not do well in the test, it may mean the teacher did not present and explain
the language item in a suitable way, or that the students need more practice to manage it
well.
I also do not think that testing causes students fear or makes them so nervous that
they are not able to manage it well. Of course, there are exceptions: for example,
students with learning difficulties should be tested carefully and mainly orally, so as not
to be stressed because of their disability. On the other hand, some students may be
stressed more if they are tested orally; there must be an individual approach to students,
which helps us to choose the right way of testing, but there must be some testing.

3.3 Administration of the multiple choice test and the translation

During the process of administration I used language structures that are generally
taught with Present Simple and Present Continuous, i.e. basic verbs, common
expressions of time and other vocabulary which the students should know. I also used
the book Essential Grammar in Use by Raymond Murphy. I find this book very well
organised, very useful and full of examples of real-life language.
There are twenty-five sentences in the multiple choice test. The test is divided into
three parts: the first part is focused on the Present Simple tense, the second on the
Present Continuous, and the third one concerns both tenses mixed together. To be
specific, there are eight sentences in each of the first two parts and nine sentences in the
last one. I think the test is well arranged and, of course, clear instructions and an
example are given at the beginning. Each sentence of the test, for which the students
should choose the correct answer, is given in context so that it is better understood. In
the third part, the context or an illustrated situation helps the students choose the correct
tense. The students tick the correct answer, which is always only one of the options A,
B, C, D.
I did not prepare a separate answer sheet, which would have made the test easier to
evaluate, since I wanted the students to concentrate on the language and grammar, not
on the way of answering.

The translation part is organised in the same way: eight sentences for Present
Simple, eight for Present Continuous and nine sentences for both tenses. The sentences
used are those from the multiple choice test. To be clear, I chose the sentences important
for the tense, not those which just accompanied them to create the context. The context
is therefore not always given; on the other hand, the sentences are basic, so they should
be easily understandable. I did not want to include anything tricky, since I did not intend
to cause the students to fail.

3.4 A sample of tested students and conditions of testing

There were 132 students tested. All of them attended the eighth grade of primary
school; however, they come from two different schools.
One of the schools is the primary school in Havlíčkův Brod, Wolker Street. It was
founded in 1970 and it is one of five primary schools in that town. There are 654
students and 59 members of teaching staff. It is a school with extended teaching of
foreign languages, which means that the students are divided into language and
non-language classes. The students from the non-language classes follow the common
curricula. On the other hand, the students who attend the language classes start learning
English in the third grade, have a higher number of English lessons, and start learning
another foreign language in the sixth grade.
I chose the students from the non-language classes for my research, since the
primary school in Choceň does not have language classes and there was a need for the
students to be at the same level.
As I have mentioned, the primary school in Choceň is the other school where I
found a sample of tested students. There are 570 students and 30 members of teaching
staff. English is taught from the third grade onwards, and students have three or four
English lessons a week (three lessons in the 3rd, 7th and 8th grades and four lessons in
the 4th, 5th, 6th and 9th grades). There are no language classes, and German is offered
only as an optional subject.
The students were not given both parts, i.e. the multiple choice test and the
translation, at the same time. Half of the students filled in the test first and in the next
lesson, i.e. at least a day later, they translated the sentences. The second half of the
students did it the opposite way, i.e. the translation first and the test in the next lesson.
There was a thirty-minute period for each part of the testing, but anybody who needed
more time was allowed to continue writing after the limit.

3.5 The research analysis

During the process of evaluating, I followed Hughes's opinion and recommendations
for scoring a test: a student should not be penalised for mistakes in issues which are not
being tested (Hughes 145). That is why, in the translation, I did not lower the scores for
vocabulary mistakes which were not a direct part of the grammar being tested. In the
multiple choice test, on the other hand, there were just two possibilities: either right or
wrong.
Here are examples of the most common mistakes which appeared in the translation.
These mistakes are not reflected in the scores:
- "play on the piano"
- "klavier" instead of "piano"
- "english"
- "coffee": many different possibilities
- "teeth": many different possibilities
- "floor": many of the students did not know the word
- "on the garden"
- "be quiet": many different possibilities
- "meal" instead of "meat"
- "dinner": also many possibilities

The two most frequent problems were with the verb constructions "have a shower"
and "have dinner", which I had to count among the mistakes that lowered the scores,
since they were part of the grammar being tested.
The multiple choice tests were much easier to evaluate. This may be the reason why
this kind of testing is popular and common, especially in the case of a high number of
students. Twenty-five points represents the highest score in the multiple choice test, i.e.
each correct option was awarded one point. The students could get the same number of
points in the translation, i.e. each correct sentence was awarded one point. A sentence
had to be completely correct, or it could include only the tolerated mistakes shown
above.

When I was analysing the research results I was dealing with four different points of
view. First, I focused on the total score of the multiple choice test versus the results of
the translation. Second, I was interested in possible differences depending on what the
students did first. Next, I focused on gaps in the results of each individual student and
lastly, I was concerned with different results of the two schools.

3.5.1 Total scores of multiple choice tests versus scores of translation

Points                  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
Students (MCT)          0  0  0  0  0  0  0  5  3  7 10  9  7  5  7  8  9  4  4  5  9  8 10 13  3  6
Students (translation)  7 13 12  8 10  5 14  2  2  4  1  2  2  7  5  9  4  5  6  6  2  3  1  1  1  0

[Figure: bar chart of the number of students achieving each score from 0 to 25 points in
the multiple choice test (mct) and in the translation (tr.).]

There is a significant difference in the results of these two testing devices. The
average of correct answers in the multiple choice tests is 16.33 whereas in the
translation it is 8.87.
In the graph we can also see that all students answered at least 7 questions correctly
in the multiple choice test, in contrast with the translation, where 30% of the students
failed with 0-3 correct answers, which is quite an alarming situation. To illustrate, 5% of
the students failed completely and scored 0, 10% of the students had only one correct
answer, 9% had two, and 6% had three correct answers. The number of students who
had just 4-6 correct answers is also high (22%), which makes 52% of the students who
achieved fewer than seven points.
On the other hand, 4.5% of the students reached the highest score in the multiple
choice test, about 2% reached twenty-four points, and 10% of the students had
twenty-three points, which together makes nearly a sixth of all the students. In the
translation, by contrast, the students did not do so well: nobody achieved the highest
score, one of the 132 students had twenty-four correct answers, another one twenty-
three, one student reached twenty-two points, and two students achieved twenty-one
points.
There is another significant difference that comes out of the research. I expected,
since the students are in the eighth grade, that most of them would have at least half of
the sentences correct in the translation, but this did not happen. 62% of the students
reached fewer than thirteen points in the translation, which means that only 38% of them
had thirteen or more correct answers, compared with the results of the multiple choice
tests, where 31% of the students achieved twelve or fewer points whereas 69% had
thirteen or more correct answers.
To conclude this general comparison, I confirmed the hypothesis that the students
would do better in the multiple choice tests than in the translation. It means that even
though they are able to choose the correct answer from the options, many of them are
not able to create complete sentences.
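The averages and percentages above can be recomputed directly from the score
distribution table at the beginning of this section. The following short Python sketch
(assuming the distribution has been transcribed correctly) illustrates the calculation:

    # Score distribution from the table above: index = number of points (0-25),
    # value = number of students with that score.
    mct = [0, 0, 0, 0, 0, 0, 0, 5, 3, 7, 10, 9, 7, 5, 7, 8, 9, 4, 4, 5,
           9, 8, 10, 13, 3, 6]
    tr = [7, 13, 12, 8, 10, 5, 14, 2, 2, 4, 1, 2, 2, 7, 5, 9, 4, 5, 6, 6,
          2, 3, 1, 1, 1, 0]

    def average(dist):
        """Weighted mean score: total points divided by total students."""
        return sum(p * n for p, n in enumerate(dist)) / sum(dist)

    def share_below(dist, limit):
        """Percentage of students scoring fewer than `limit` points."""
        return 100 * sum(dist[:limit]) / sum(dist)

    print(round(average(mct), 2))      # 16.33 correct answers on average
    print(round(average(tr), 2))       # 8.87 correct answers on average
    print(round(share_below(tr, 13)))  # about 62% below 13 points in translation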

3.5.2 Different order of filling in multiple choice tests and translation

The students were divided into two groups and there was a different order of filling
in the multiple choice test and the translation. Sixty-six students did the test first and the
translation afterwards, usually in the next lesson. The other sixty-six students did it the
opposite way.

ZŠ CHOCEŇ                        ZŠ HAVLÍČKŮV BROD
Group 1 Group 2 Group 1 Group 2

MCT TR TR MCT MCT TR TR MCT

1. 25 20 2 11 1. 8 2 17 23
2. 21 17 7 22 2. 15 1 13 20
3. 10 1 20 23 3. 12 5 3 22
4. 16 1 17 23 4. 7 4 15 23
5. 19 6 0 8 5. 14 3 15 21
6. 13 6 2 13 6. 16 10 6 20
7. 11 4 4 11 7. 14 0 15 21
8. 20 14 1 15 8. 10 2 13 20
9. 12 4 3 12 9. 9 1 4 14
10. 25 22 1 7 10. 16 3 14 23
11. 18 15 3 12 11. 13 0 17 23
12. 19 13 12 14 12. 17 6 16 21
13. 16 15 12 18 13. 9 1 6 24
14. 15 6 6 15 14. 21 21 15 25
15. 20 16 4 13 15. 21 15 18 22
16. 23 14 6 16 16. 16 8 18 22
17. 10 9 1 10 17. 11 5 4 19
18. 12 11 1 14 18. 15 5 19 23
19. 12 3 2 13 19. 18 6 2 9
20. 7 1 5 16 20. 16 13 6 20
21. 10 2 6 16 21. 22 14 24 25
22. 15 3 13 23 22. 11 0 21 25
23. 9 1 4 9 23. 11 9 16 21
24. 10 8 19 23 24. 9 7 15 23
25. 11 4 19 22 25. 7 2 15 21
26. 15 1 23 23 26. 10 0 17 23
27. 19 16 19 20 27. 17 4 11 17
28. 8 2 18 20 28. 12 6 0 9
29. 15 5 21 24 29. 10 2 19 25
30. 11 9 18 22 30. 18 6 18 22
31. 14 3 18 22 31. 10 1 2 10
32. 7 2 19 24 32. 11 0 14 22
33. 17 9 13 19 33. 14 6 13 20
Avg. 14.70 7.97 9.67 16.76 Avg. 13.33 5.09 12.76 20.55

Average results         Group 1            Group 2
                        MCT      TR        TR       MCT
ZŠ Choceň               14.70    7.97      9.67     16.76
ZŠ Havlíčkův Brod       13.33    5.09      12.76    20.55
Overall                 14.02    6.53      11.21    18.65

I expected that the students who filled in the test first would do better in the
translation. I supposed so because they may have recalled the grammar rules during the
test, or they may have remembered the sentence constructions from the test. In fact, this
did not happen. As we can see in the chart, the average number of correct answers in the
translation is higher when the students worked on it first: 11.21 correct answers, whereas
it is only 6.53 correct answers for the students who did the translation after the multiple
choice test. I can only guess why this happened. Since the "translation first" students
also achieved better results in the multiple choice tests (18.65 versus 14.02), it may
simply mean that they are generally smarter or just better at English. On the other hand,
the "multiple choice test first" students achieved a lower score in both testing forms,
which may mean they are worse at English. However, in both groups we can find
excellent as well as poor results.
There may also be another reason. Since the students got the sentences in Czech and
translated them to English for the first test, and then they had to complete the same
sentences for the multiple choice test, it seems from the scores that having exposure to
the sentences in Czech helped. It may be that in Czech they are able to understand the
meaning better and thus, do better when having to complete the sentences in the second
test (multiple choice test).
To conclude, according to these research results I would say that a higher score did
not depend much on the order of the test and the translation but more on knowledge of
English; further research could confirm whether the Czech language really helps the
students in working with English.

3.5.3 Gaps between the individual results

When I was correcting the tests and the translations, I was also interested in the
individual differences in the results, not only in the total averages.

[Figure: bar chart of the number of students by the size of the gap (0-19 points) between
their multiple choice test score and their translation score.]

Almost all of the students did better in the multiple choice test than in the
translation; in only two cases is there no difference between the two results. These two
students reached 23 and 21 points respectively in both testing parts.
Even though generalisation here is quite a difficult task, I would say that the
students who had a smaller gap between their results did better in both testing forms. By
contrast, the students whose gaps are 12-19 points had poor results in the translation.
To demonstrate, here are some numbers. First, we focus on the gaps of 1-5 points.
There are 36 such students, which makes 27% of all students. Out of this number, 11
students had quite poor results in the translation and, surprisingly, also in the multiple
choice test (e.g. 10-9; 9-7; 7-4; 7-2). It means that two thirds of these 36 students
achieved a high number of points in both the translation and the test (e.g. 25-22; 24-21;
23-19).
At the other end of the chart, there are 19 students, about 14%, who achieved a high
number of points in the multiple choice test compared with a low number of points in
the translation. Their gaps are 12-19 points. I found the most interesting results in the
last two positions: one student reached 22 points in the multiple choice test compared
with only 3 points in the translation; the other achieved as many as 24 points in the test
but only 6 points in the translation. These two students are probably good at guessing in
tests but not good at English.
Lastly, we focus on the gaps of 6-11 points. The highest number of students belongs
to this category: 75 students, which makes about 57%. There are many variations in the
results. To illustrate, 16% of all students had a gap of six points, and in this six-point-gap
category we can find good results in both testing forms (about two thirds) as well as poor
results (about one third).
To conclude this analysis, there were many variations among both good and poor
results. However, I would say that the students with a smaller gap in their results were
generally better at both testing forms than those students with gaps of, for example, 14
points or more.
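The gaps discussed above can be derived directly from the paired scores in the table in
section 3.5.2. A minimal Python sketch follows; the list of pairs shows only the first few
students from the ZŠ Choceň Group 1 column as an illustration, and the full analysis
would use all 132 pairs:

    # (multiple choice test score, translation score) for each student;
    # only the first few pairs from the table in 3.5.2 are shown here.
    pairs = [(25, 20), (21, 17), (10, 1), (16, 1), (19, 6)]

    gaps = [mct - tr for mct, tr in pairs]

    # Bucket the gaps into the same bands as in the analysis above.
    buckets = {"0": 0, "1-5": 0, "6-11": 0, "12-19": 0}
    for gap in gaps:
        if gap == 0:
            buckets["0"] += 1
        elif gap <= 5:
            buckets["1-5"] += 1
        elif gap <= 11:
            buckets["6-11"] += 1
        else:
            buckets["12-19"] += 1

    # With all 132 pairs this should reproduce the counts above:
    # 2, 36, 75 and 19 students.
    print(buckets)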

3.5.4 Choceň versus Havlíčkův Brod

I intended to investigate whether there are any significant differences in the results
of the students from Choceň and the students from Havlíčkův Brod. Nevertheless, I did
not expect any great dissimilarities, since the students are taught according to the same
curricula and have the same number of English lessons, even though it may happen that
there are more talented students in one school than in the other.

Score (MCT)    ZŠ Choceň    ZŠ Havlíčkův Brod
0-3            0            0
4-6            0            0
7-9            7            8
10-12          15           11
13-15          13           7
16-18          8            9
19-22          13           19
23-25          10           12

[Figure: multiple choice test results: number of students per band of correct answers
(0-3 up to 23-25) for ZŠ Choceň and ZŠ Havlíčkův Brod.]

There were 66 students from Choceň and 66 students from Havlíčkův Brod. As we
can see in the graph, there are not many variations between these two schools, yet there
are some.
The main dissimilarity in the multiple choice test results can be found in the interval
of 19-22 correct answers, where there is a difference of six students, which is about
4.5%. A higher number of the students who reached 16-25 points are from Havlíčkův
Brod. On the other hand, more students from Choceň achieved 10-15 points. Nobody
had the lowest scores, i.e. 0-6 correct answers, and in the band of 7-9 correct answers
there is a variation of only one student.
Judging by the average number of correct answers in the multiple choice tests, the
students from Havlíčkův Brod reached better results. The average is 15.73 correct
answers in Choceň and 16.94 in Havlíčkův Brod, which is about a one-point difference.
However, I would not see this as a significant difference that could mean there are more
talented and clever students in Havlíčkův Brod.

Score (TR)     ZŠ Choceň    ZŠ Havlíčkův Brod
0-3            21           19
4-6            14           15
7-9            5            3
10-12          3            2
13-15          7            14
16-18          7            8
19-22          8            4
23-25          1            1

[Figure: translation results: number of students per band of correct answers (0-3 up to
23-25) for ZŠ Choceň and ZŠ Havlíčkův Brod.]

The averages of the translation results are more comparable. The students from
Choceň had 8.82 correct answers on average compared with 8.92 in Havlíčkův Brod,
which is hardly any difference. The main dissimilarity appeared in the interval of 13-15
points, i.e. seven students from Choceň versus fourteen students from Havlíčkův Brod.
On the other hand, Choceň balanced the average when eight of its students reached
19-22 points, whereas in Havlíčkův Brod there were just four students with this number
of correct answers.
On the whole, as I expected, the results in Choceň are comparable to the results in
Havlíčkův Brod if we consider the averages. Nevertheless, if we return to the results
which formed the averages according to the order of filling in, there is a significant
difference between the two schools. If we compare the results of the students who did
the translation first, and these students came from both schools, we find quite a
difference.
Although they all did better in the translation than the students who did the test first,
the students from Havlíčkův Brod reached 12.76 correct answers on average whereas the
students from Choceň reached only 9.67. This group from Havlíčkův Brod also achieved
better results in the multiple choice test, with 20.55 versus 16.76 in Choceň. It must have
been a very smart group.
The students from Choce, on the other hand, had better results in the multiple
choice tests done first. It makes 14.70 versus 13.33 in the test and they also succeeded
better in the translation done after, which is 7.97 in Choce versus 5.09 in Havlkv
Brod. But the results are not comparable to the previous group from Havlkv Brod.
From another point of view, when we consider these four groups, Havlkv Brod
placed the first and last position whereas Choce placed the second and the third
position, which makes their general averages comparable.
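The four subgroup averages compared here result from splitting the students both by school and by the order in which they filled in the two tasks. A minimal sketch of that grouping might look as follows; the records are again invented examples, not the collected results.

    from collections import defaultdict

    # Invented records: (school, order of filling in, multiple choice score, translation score)
    records = [
        ("Havlickuv Brod", "translation first", 21, 13),
        ("Havlickuv Brod", "test first",        14,  5),
        ("Chocen",         "translation first", 17, 10),
        ("Chocen",         "test first",        15,  8),
    ]

    groups = defaultdict(list)
    for school, order, mct, translation in records:
        groups[(school, order)].append((mct, translation))

    for (school, order), results in groups.items():
        avg_mct = sum(m for m, _ in results) / len(results)
        avg_translation = sum(t for _, t in results) / len(results)
        print(f"{school:15} | {order:17} | MCT {avg_mct:5.2f} | translation {avg_translation:5.2f}")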

4. Conclusion

On the whole, I would like to briefly restate the aim of this work: I wanted to prove
that testing by multiple choice tests does not give a truthful picture of students' language
knowledge. The reasons are evident. Students may simply guess the right answer from
the options, or the options themselves may remind them of the grammar rules. Producing
correct complete sentences without such help, on the contrary, is more demanding. In the
translation we may then find out that even students in the eighth grade are not able to use
the right personal pronoun or the right form of the verb "to be".
Therefore, I applied these two testing devices to 132 students of the eighth grade
and confirmed the hypothesis. As I had expected, there was a significant difference between
the results in the multiple choice test and in the translation. The students were much
more successful in the test than in the translation. Some surprising individual
differences appeared; for example, one student reached 22 points in the multiple choice
test but only 3 points in the translation. In this case I would say the student is very
good at guessing. On the other hand, there were four students who reached 22 points in
the test and 18 points in the translation, which I take to mean that these students have a good
knowledge of the language, especially of its grammar. Most of the students had an
individual gap of 6-11 points in their results, but we could say that the smaller the gap, the better
they usually were.
Next, as I had expected, the results of the two schools were comparable when taken
as averages. In the translation the averages were nearly the same; in the test there was
only about one point's difference in favour of the school in Havlíčkův Brod. Nevertheless, some
dissimilarities appeared. The group of students who did the translation first and
came from Havlíčkův Brod had far better results in the translation than the other groups,
a difference of about three points compared to the corresponding group from Choceň.
The hypothesis that the students who did the multiple choice test first would be
more successful was not confirmed. It was a very surprising finding, and further research
would be needed to find out whether it was just luck or whether those students really are better at
English.
To conclude, I must say it was a very interesting experience to do this research, and I
am happy that many students succeeded in the translation very well. On the other hand,
the number of students whose translation was poor was quite high, and the mistakes they
made were in basic grammar forms. Those students would need more practice and
should probably also work on their English more at home.
As a result, I think this work has proved that teachers need to include
translation in their class work as well as in their testing.

Resumé

This bachelor thesis deals with testing grammar. In the theoretical part (chapter 2), I
am concerned with some general questions about testing: whether we should test or not and
why, how to create a suitable test which shows students' real knowledge and finds out
what it is supposed to find out, etc. Next, there are some illustrations of different types of test. I
focus mainly on the multiple choice test and describe its use, administration and
evaluation. Lastly, there is a consideration of translation as a device useful during
the learning process.
The practical part (chapter 3) analyses the research, which is divided into four smaller
investigations. I compare the students' results scored in the multiple choice test
and in the translation (from Czech into English). The results were very
interesting: the students succeeded better in the test, but the individual differences
varied widely.
There is also an analysis of the results reached by the students who did the
translation first and the test afterwards, and of the results of those students who did it the
opposite way. This was the most surprising result of the research and it would need further
investigation. Finally, there is also a comparison of the results of the two schools (Choceň
and Havlíčkův Brod), which shows small differences in some respects.

Tato bakalářská práce se zabývá testováním gramatiky. V teoretické části (kapitola
2) se věnuji obecným otázkám, které se týkají testování: zda testovat či ne a proč, jak
vytvořit správný test, který vypovídá o skutečných vědomostech studentů a zjišťuje, co
zjišťovat má, atd. Dále zde uvádím ukázky různých druhů testů. Zejména se zaměřuji na
multiple choice test, což je test s výběrem z více možností, a popisuji jeho využití,
sestavení a vyhodnocení. Závěr teoretické části pojednává o překladu jako prostředku
užitečném při výuce jazyka.
Praktická část (kapitola 3) rozebírá výzkum, který je rozdělen do čtyř menších
výzkumů. Porovnávám zde výsledky dosažené studenty v multiple choice testu a
v překladu (z českého jazyka do angličtiny). Zjištěné výsledky byly velmi zajímavé,
studenti uspěli v testu lépe, ale individuální rozdíly byly rozmanité.
Je zde také analýza výsledků dosažených studenty, kteří dělali překlad jako první a
test poté, a výsledky studentů, kteří měli opačný postup. Toto byl nejpřekvapivější
výsledek celého výzkumu a bylo by zde potřeba dalšího šetření. Nakonec se zabývám
porovnáním výsledků dvou zmiňovaných škol (Choceň a Havlíčkův Brod), které
vykazuje v některých případech drobné odchylky.


Appendices
MULTIPLE CHOICE TEST

Choose the correct answer. Put a circle round the letter of the correct answer.
Example: What is your hobby?
We love films. We .. to the cinema a lot.
a) goes b) go c) going d) are going

1. PRESENT SIMPLE
1. He's playing the piano again!
Yes, he .. the piano every day.
a) is play b) play c) plays d) playing

2. I .. my job.
Why?
It's very boring.
a) don't like b) likes c) like d) doesn't like

3. .. English?
Yes, he speaks English very well.
a) do your brother speak b) speak your brother c) does your brother speaks
d) does your brother speak

4. Where .. live?
They live in London.
a) do they b) do we c) they d) do

5. Excuse me, where is the bank?


I'm sorry, I .. .
a) not understand b) understand c) do understand d) don't understand

6. Coffee?
No, thanks. I never .. coffee.
a) don't drink b) drink c) drinks d) not drink

7. How often .. TV?


Every day.
a) you watch b) watch you c) do watch d) do you watch

8. What .. do?
He's a teacher.
a) he b) do he c) does he d) he does

2. PRESENT CONTINUOUS
1. Where's Tom?
He .. a bath.
a) has b) having c) have d) is having

2. Lucy, can you help me?


Wait a minute, I .. my teeth.
a) cleaning b) am cleaning c) clean d) don't clean

3. I'm sorry. .. long?
That's all right.
a) You're waiting b) Are you waiting c) Waiting you d) Wait you

4. Do I need an umbrella?
No, it .. .
a) is rain b) rains c) isn't raining d) is raining

5. Look at those people!


They .. on the floor.
a) are sit b) sitting c) is sitting d) are sitting

6. Listen! Can you hear it?


Yes, somebody .. .
a) singing b) is singing c) are singing d) sing

7. Where .. ?
To the city centre.
a) they going b) going are they c) are they going d) going they

8. Where are the children?


They .. in the garden.
a) are playing b) play c) playing d) plays

3. PRESENT SIMPLE AND PRESENT CONTINUOUS


1. Please be quiet.
Why?
I .. .
a) work b) am working c) working d) am work

2. Take an umbrella with you.


.. ?
a) Rain it b) Raining c) It is raining d) Is it raining

3. Is she tired?
Yes, she .. to go home.
a) wants b) wanting c) is wanting d) want

4. They .. milk.
Really?
a) not liking b) are not liking c) likes d) don't like

5. Where is Paul?
In the kitchen. He .. dinner.
a) cooking b) is cooking c) cook d) cooks

6. I play the piano very well.


Really? I .. the piano at all.
a) am not b) don't play c) don't playing d) am not play

7. Mark is a vegetarian.
He .. meat.
a) eat b) not eat c) isn't eating d) doesn't eat

8. What do you usually do at weekends?


I usually .. a bike.
a) riding b) ride c) am riding d) rides

9. (on the phone)


We .. dinner now. Can you phone later, please?
a) having b) are having c) have d) are have

TRANSLATION
Translate these sentences into English.
Example:
Chodíme hodně do kina.
We go to the cinema a lot.

1. PRESENT SIMPLE
1. On hraje na klavír každý den.

2. Nemám rád svoje zaměstnání.

3. Mluví tvůj bratr anglicky?

4. Kde bydlí? (oni)

5. Nikdy nepiji kafe.

6. Nevím.

7. Jak často se díváš na televizi?

8. Co dělá? (ptáme se na zaměstnání - ona)

2. PRESENT CONTINUOUS
9. Počkej minutku, čistím si zuby.

10. Omlouvám se, čekáš dlouho?

11. Neprší.

12. Oni sedí na podlaze.

13. Někdo zpívá.

14. Kam jdou?

15. Tom se koupe.

16. Oni si hrají na zahradě.

3. PRESENT SIMPLE AND PRESENT CONTINUOUS


17. Buď tiše, pracuji.

18. Prší?

19. Ona chce jít domů.

20. Nemají rádi mléko.

21. Kde je Paul? Vaří večeři.

22. Nehraji na klavír. (vůbec)

23. On nejí maso. (vůbec)

24. (Na otázku: Co děláš obvykle o víkendu) Obvykle jezdím na kole.

25. Večeříme. (teď právě)

