True or False
Advantages: Many items can be administered in a relatively short time. Moderately easy to write and easily scored.
Disadvantages: Limited primarily to testing knowledge of information. Easy to guess correctly on many items, even if material has not been mastered.

Multiple Choice
Advantages: Can be used to assess a broad range of content in a brief period. Skillfully written items can measure higher order cognitive skills. Can be scored quickly.
Disadvantages: Difficult and time consuming to write good items. Possible to assess higher order cognitive skills, but most items assess only knowledge. Some correct answers can be guesses.

Matching
Advantages: Items can be written quickly. A broad range of content can be assessed. Scoring can be done efficiently.
Disadvantages: Higher order cognitive skills difficult to assess.

Short Answer
Advantages: Many can be administered in a brief amount of time. Relatively efficient to score. Moderately easy to write items.
Disadvantages: Difficult to identify defensible criteria for correct answers. Limited to questions that can be answered or completed in a few words.

Essay
Advantages: Can be used to measure higher order cognitive skills. Easy to write questions. Difficult for respondent to get correct answer by guessing.
Disadvantages: Time consuming to administer and score. Difficult to identify reliable criteria for scoring. Only a limited range of content can be sampled during any one testing period.
C. Based on Orientation and The Way to Test
Language testing is divided into two types based on orientation: language competence tests and language performance tests. A language competence test involves components of language such as vocabulary, grammar, and pronunciation, while a performance test involves the basic skills in English: writing, speaking, listening, and reading. Language testing is also divided into two types based on the way of testing: direct testing and indirect testing. In direct testing, the process of eliciting students' competence uses the basic skills (speaking, writing, listening, or reading), while in indirect testing the elicitation process does not use the basic skills.
From the explanation above, language testing can be divided into four types based on orientation and the way of testing: direct competence tests, indirect competence tests, direct performance tests, and indirect performance tests.
                      Direct    Indirect
Competence/system     I         II
Performance           III       IV
Measuring language proficiency is a complex process that necessitates the use of valid and reliable language
testing tools. Language assessments take various forms depending on the skill or proficiency level being
tested. In this post, we'll describe and define different types of language testing so you can better understand
the ways you, your students, or your employees can accurately measure their language proficiency.
There are five main types of language assessments — aptitude, diagnostic, placement, achievement, and
proficiency tests.
1. Aptitude Tests
Aptitude refers to a person's capacity for learning something. Language aptitude tests assess a person's
ability to acquire new language skills. Because of the nature of these tests, they are more general than most
other language tests and don't focus on a particular language. Instead, they assess how quickly and
effectively a person is able to learn new language skills.
An employer might use an aptitude test to select the best employees to take language courses so they can aid
in the setup of a new international branch or provide bilingual customer service.
2. Diagnostic Tests
Diagnostic tests are aimed at diagnosing the state of a person's abilities in a certain area — in this case, their
language abilities. In contrast to achievement and proficiency tests, diagnostic tests are typically given at the
start of a language learning course or program.
On a diagnostic test, most test-takers encounter questions or tasks that are outside the scope of their abilities
and the material they're familiar with. The results of the test reveal the strengths and weaknesses in one's
language abilities. Having a student's diagnostic test results can help teachers formulate lesson plans that fill
the gaps in the student's current capabilities. Students can also use diagnostic tests to determine which areas
they need to work on in order to reach a higher level of proficiency.
3. Placement Tests
Placement tests share some similarities with diagnostic tests. They are used for educational purposes and are
administered before a course or program of study begins. In this case, the application is a bit different.
Educators and administrators use placement tests to group language learners into classes or study groups
according to their ability levels.
A university may give a placement test to determine whether a new French major needs to take introductory
French courses or skip over some courses and begin with more advanced classes. Placement tests are also an
important type of test in English language teaching at the university level, since international students
typically come in with different English-learning backgrounds and proficiency levels.
4. Achievement Tests
An achievement test evaluates a student's language knowledge to show how their learning has progressed.
Unlike diagnostic, aptitude, and placement tests, achievement tests only cover information the student
should have been exposed to in their studies thus far.
Achievement tests are typically given after a class completes a certain chapter or unit or at the conclusion of
the course. A language teacher may give a final exam at the end of the semester to see how well a student
has retained the information they were taught over the course of the semester. Achievement tests are
typically graded and are meant to reflect how well the language tester is performing in their language
learning studies.
5. Proficiency Tests
Proficiency refers to a person's competency in using a particular skill. Language proficiency tests assess a
person's practical language skills. Proficiency tests share some similarities with achievement tests, but rather
than focusing on knowledge, proficiency tests focus on the practical application of that knowledge.
Proficiency tests measure a language user’s comprehension and production against a rating scale such as
the ACTFL, ILR, and CEFR scales.
Whereas most of the tests we've looked at are primarily associated with academic contexts, proficiency tests
are useful in a variety of settings. Anyone can take a language proficiency test, regardless of how they
learned the language and where they believe they are in their level of competency. Proficiency tests
accurately measure the candidate's ability to use a language in real-life contexts.
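As a simple illustration of reporting a result against a rating scale, a numeric score can be mapped to a band via cut scores. The sketch below uses the real CEFR level labels, but the 0-100 scale and the cut scores themselves are invented for this example; real CEFR ratings are not derived from a single number.

```python
# Map a numeric proficiency score (0-100) to a CEFR band.
# The band labels are the real CEFR levels; the cut scores are
# hypothetical values chosen only for this illustration.
CEFR_CUTS = [(90, "C2"), (75, "C1"), (60, "B2"), (45, "B1"), (30, "A2"), (0, "A1")]

def cefr_band(score):
    for cut, band in CEFR_CUTS:
        if score >= cut:
            return band
    return "A1"

print(cefr_band(82))  # C1
```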
Another way to understand language testing is in terms of language skills. Though you may ask someone
whether they "know" a certain language, that general term consists of several distinct skills. The four skills
involved in language proficiency are listening, speaking, reading, and writing.
These skills can be categorized by their direction and method of communication. Listening and reading are
both ways of receiving language input, whereas speaking and writing are both ways of producing language
output. These pairs differ from each other when it comes to the direction of communication. The items
within each pair, however, differ by their method of communication. Listening and speaking both involve
oral communication while reading and writing involve written communication.
1. Listening
Listening skills in a particular language involve understanding oral communication. When people acquire
their first language as babies, listening to their parents and others speaking around them is the initial step
toward comprehension and listening ability. Some people also acquire a second language through
immersion, with their listening skills developing earliest.
2. Speaking
People often refer to speaking a language in a general way that encompasses multiple ways of using a
language. For example, they may say they speak a certain language when a more accurate statement would
be that they are able to communicate in it using all four of the communicative skills. Speaking is a specific
skill, however, which, along with listening, is required to negotiate meaning in a conversation. Speaking
requires communication in real time and may be one of the most challenging to develop yet most valuable of
the four skills.
3. Reading
Comprehension of oral language and written language are two very different skills. The reading skill
involves understanding the meaning of written language. A person may be able to speak a language with a
high level of proficiency but be completely unable to read it, while others may find it easier to read than
speak since they can consume and process the language at their own pace.
The degree of difficulty in learning to read in a second language partly depends on how similar or dissimilar
the writing system is from that of a person's first language. For example, most European languages use the
Latin alphabet, the world's most widely used alphabetic writing system, making letters appear similar on the
page. Therefore, a native English speaker may be able to learn to read in Spanish relatively easily. However,
a knowledge of the Latin alphabet won't help you understand Arabic script or Chinese characters. Reading
tests can help you determine your proficiency in reading a language.
4. Writing
Writing comes with the same challenges involved in reading since writing systems vary across languages.
Learning to write in a second language that uses a completely different system from the one you're familiar
with can be especially challenging. Writing doesn't come as naturally as speech, even in acquiring our first
language, so it can be a challenging skill for language learners. This is why students often take writing
courses in their first language throughout their educational careers.
Language Assessment
I. LANGUAGE ASSESSMENT
A. INTRODUCTION
Many language teachers harbour a deep mistrust of tests and testers since it is
undeniable that a great deal of language testing is of poor quality. Language tests oftentimes
have a harmful effect on teaching and learning and fail to measure what it is they intend to
measure.
2. Backwash
WHAT IS A BACKWASH?
Basically, backwash refers to the effect of testing on teaching and learning. These effects may be either harmful or beneficial.
Harmful backwash, as the name suggests, refers to the negative effects of testing on teaching and learning activities.
Examples:
• If a test is regarded as important, if the stakes are high, preparation for it can dominate the
teaching and learning activities
• Test content and testing are not appropriate to measure the intended learning outcomes
• Students developing negative attitudes toward tests
Sometimes, however, backwash can be beneficial, thus giving positive impacts on teaching and
learning.
Examples:
• It helps in monitoring students' performance
• It gives teachers the chance to assess their own teaching performance and make room for improvement
• It helps to determine whether to revise the curriculum and teaching styles for the
betterment of teaching and learning activities
Generally, backwash is the impact of assessment on teaching and learning.
3. The Need for Tests
§ One conclusion that might be drawn from why tests are so mistrusted by language teachers, and how this mistrust is often justified, is that we might be better off without language tests.
§ Teaching is, after all the primary activity; if testing comes in conflict with it, then it is
testing that should go, especially when it has been admitted that so much testing provides
inaccurate information.
§ Teaching systems need dependable measures of language ability to provide information
about the achievement of groups of learners, without which it is difficult to see how rational
educational decisions can be made.
§ Even without considering the possibility of bias, we have to recognize the need for a common yardstick, which tests provide, in order to make meaningful comparisons.
4. Reasons for Testing
a. Finding out about progress
· The type of test to be given will depend very much on the purpose in testing. One should always ask oneself about the real purpose of the test to be given to the students.
· One major reason is to find out how well the students have mastered the language
areas and skills which have been taught. These tests look back at what students have
achieved and are called progress tests, the most important kinds of tests for a teacher.
· Progress tests are expected to produce a cluster of high marks. But if most students fail, something must have been wrong with the teaching, the syllabus or the materials.
· It also acts as a safeguard against hurrying on to complete a syllabus or textbook
regardless of what the students are actually achieving--or failing to achieve.
· A teacher has to avoid over-testing, although one should try to give progress tests regularly.
· The best progress test is one which students do not recognize as a test but see as
simply an enjoyable and meaningful activity.
b. Encouraging students
· This is one important function of a teacher-made test, to encourage students.
· In learning a foreign language, it is often very difficult indeed for us to judge our own progress.
· A classroom test can help to show students the progress which they are
undoubtedly making. It can serve to show them each set of goals which they have
reached on their way to fluency.
c. Finding out about learning difficulties
· In teaching, sometimes we concentrate on following the syllabus and ignore the needs of our students, leading to failure.
· A good diagnostic test helps us to check our students’ progress for specific
weaknesses and problems they may have encountered. One must be systematic when
designing the test and must select areas where we think there are likely to be problems
or weaknesses.
· Usually a diagnostic test forms part of another type of test especially a classroom
progress test. As such, it is useful to regard diagnostic testing as an ongoing part of the
teaching and testing process.
· When marking a diagnostic test, one should try to identify and group together a student’s marks on particular areas of language.
· Diagnostic tests of all kinds are essential if we wish to evaluate our teaching. One
can also evaluate the syllabus, the course book and the materials used.
· Whatever the reason, a classroom test can enable teachers to locate difficulties
and to plan appropriate remedial teaching.
d. Finding out about achievement
· An achievement test is also like a progress test, but it is usually designed to cover a longer period of learning than a progress test.
· Although achievement tests should attempt to cover as much of the syllabus as possible, the contents of the test will not reflect all that has been learned.
· A test of achievement measures a student’s mastery of what should have been
taught.
· It is concerned with covering a sample which accurately represents the contents
of a syllabus or a course book.
e. Placing Students
· A placement test enables us to sort students into groups according to their
language ability at the beginning of a course.
· The most important part of the test should consist of questions directly concerned
with the specific language skills.
· A placement test should try to spread out the student’s scores as much as
possible. In this way, it’s possible to divide students into several groups according to
their various ability levels.
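The spreading-and-grouping idea above can be sketched as sorting placement scores and splitting the ranked list into ability groups. This is a minimal sketch; the student names, scores, and three-way split are hypothetical, and real placement decisions would also weigh the specific language skills tested.

```python
# Divide students into ability groups by ranking their placement
# scores and splitting the ranked list into equal-sized groups.
def place_students(scores, n_groups=3):
    ranked = sorted(scores.items(), key=lambda kv: kv[1])
    size = -(-len(ranked) // n_groups)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

scores = {"Ana": 42, "Ben": 88, "Cho": 65, "Dee": 30, "Eli": 71, "Fay": 55}
for level, group in enumerate(place_students(scores), start=1):
    print(f"Group {level}: {[name for name, _ in group]}")
```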
f. Selecting Students
· The purpose of this test is to compare the performances of all the candidates and
select only the best. A selection test is often referred to as norm-referenced.
· A good selection test will usually spread out students’ scores over most of the
scale that one is using (0%-100%).
· Selection tests are rarely set by the class teacher. They are usually set by outside
examining bodies, and they have a washback effect (the way an exam or test
influences teaching and learning in the classroom). Teachers will gear their teaching
closely to the exam; if the exam is good, this is useful, but if it is bad, it has a
damaging effect on teaching.
g. Assessing proficiency
· Proficiency tests are used to measure how well suited candidates will be to performing a certain task or following a specific course.
· This test has different parts which candidates can choose to do according to their
different purposes.
· In designing a proficiency test, one should pay careful attention to those language
areas and skills which the candidate will need.
· The main concern is to find out only the degree of success of someone rather than
comparing the abilities of the various candidates.
· A criterion-referenced test is used to find out whether a student can perform a
particular task or not.
B. KINDS OF TESTS AND TESTING
The four types of test are proficiency tests, achievement tests, diagnostic tests and placement tests. This
categorization will prove useful both in deciding whether an existing test is suitable for a particular purpose
and in writing appropriate new tests where these are necessary.
1. Proficiency Test
o ‘proficient’ means having sufficient command of the language for a particular purpose.
o designed to measure people’s ability in a language regardless of any training they may have had in that particular language.
o based on a specification of what candidates have to be able to do in the language in order to be considered proficient.
But there are some proficiency tests that are more general:
§ This test functions to show whether the candidates have reached a certain standard with
respect to a set of specified objectives.
§ This proficiency test should have detailed specifications saying just what it is the
candidates have demonstrated that they can do.
§ Despite differences in content and level of difficulty, all proficiency tests have one thing in common: they are not based on courses that candidates may have previously taken.
2. Achievement Tests
In contrast to proficiency tests, achievement tests are directly related to language courses, their
purpose being to establish how successful individual students, groups of students, or the courses
themselves have been in achieving the objectives.
· In the view of some testers, the content of a final achievement test should be based directly
on a detailed course syllabus or on the books and other materials used. This has been referred
to as the “syllabus-content approach.” It has an obvious appeal, since the test only contains
what it is thought that the students have actually encountered, and thus can be considered in
this respect at least, a fair test.
· If the syllabus is badly designed or the books and other materials are badly chosen, the
results of a test can be very misleading.
· Successful performance on the test may not truly indicate successful achievement of
course objectives.
Examples:
· A course may aim to develop a reading ability in German, but the test may limit itself to
the vocabulary that the students are known to have met.
· A course intended to prepare students for a university study in English, but the syllabus
(and so the course and the test) may not include listening (with note taking) to English delivered
in lecture style on topics of the kind that the students will have to deal with at a university.
Test results will fail to show what the students have achieved in terms of course objectives.
The alternative approach is to base the test content directly on the objectives of the course. This makes it possible for performance on the test to show just how far students have achieved those objectives.
This in turn puts pressure on those responsible for the syllabus and for the selection of books and
materials to ensure that these are consistent with the course objectives.
Now it might be argued that basing the test content on objectives rather than on
course content is unfair to students. If the course content does not fit well with objectives,
they will be expected to do things for which they have not been prepared. In a sense this is
true. But in another sense it is not. If a test is based on the content of a poor or inappropriate
course, the students taking it will be misled as to the extent of achievement and the quality of
the course.
Progress Achievement Tests are tests that are intended to measure the progress that the students
are making.
· One alternative way of measuring progress would be to establish a series of well-defined short-term objectives.
· These should make a clear progression toward the final achievement test based on course
objectives.
· Teachers should feel free to set their own ‘pop quizzes.’ These will serve both to make a rough check on students’ progress and to keep students on their toes.
· Since such tests will not form part of formal assessment procedures, their construction and scoring need not be too rigorous.
· They can, however, reflect the particular ‘route’ that an individual teacher is taking
towards the achievement of objectives.
3. Diagnostic Assessment
· Diagnostic assessment can be the teacher’s basis of planning of what to do next in the
teaching and learning process.
· The teacher will be able to design classroom activities that address their actual
learning needs if he knows students’ strengths and weaknesses.
· They are intended primarily to ascertain what learning still needs to take place. At the
level of broad language skills this is reasonably straightforward. We can be fairly confident
of our ability to create tests that will tell us that someone is particularly weak in, say,
speaking as opposed to reading in a language.
But it is not easy to obtain a detailed analysis of a student’s command of grammatical structures—
something that would tell us, for example, whether she or he had mastered the present perfect/past tense
distinction in English. In order to be sure of this, we would need a number of examples of the choice the
student made between the two structures in every different context that we thought was significantly
different and important enough to warrant obtaining information on.
The lack of good diagnostic tests is unfortunate. They could be extremely useful for individualized
instruction or self-instruction. Learners would be shown where gaps exist in their command of the
language, and could be directed to sources of information, exemplification and practice.
Well-written computer programs will ensure that the learner spends no more time than is absolutely
necessary to obtain the desired information, and without the need for a test administrator. Whether or not
they become generally available will depend on the willingness of individuals to write them and of
publishers to distribute them.
Placement Tests
· Are intended to provide information that will help to place students at the stage of the
teaching programme most appropriate to their abilities. Typically they are used to assign
students to classes at different levels.
· Placement tests can be bought, but this is to be recommended only when the
institution concerned is sure that the test being considered suits its particular teaching
programme. One possible exception is placement tests designed for use by language
schools, where the similarity of popular textbooks used in them means that the schools’
teaching programmes also tend to resemble each other.
· The placement tests that are most successful are those constructed for particular
situations. They depend on the identification of the key features at different levels of
teaching in the institution. They are tailor-made rather than bought off the peg. This usually
means that they have been produced ‘in house’. The work that goes into their construction is
rewarded by the saving in time and effort through accurate placement.
Direct Testing
Testing is said to be direct when it requires the student to perform precisely the skill
we wish to measure.
Examples:
· If we want to know how well students can write compositions, we get them to write compositions.
· If we want to know how well they speak, we get them to speak.
The tasks, and the texts that are used, should be as authentic as possible. Every effort is made to make them as realistic as possible.
One advantage of direct testing is that it is relatively straightforward to create the conditions which will elicit the behaviour on which to base our judgments.
Indirect Testing
Indirect testing attempts to measure the abilities that underlie the skills in which we
are interested.
§ The main appeal of indirect testing is that it seems to offer the possibility of testing a representative sample of a finite number of abilities which underlie a potentially indefinitely large number of manifestations of them.
§ The main problem with indirect tests is that the relationship between performance on them and
performance of the skills in which we are usually more interested tends to be rather weak in strength and
uncertain in nature.
Example:
Speaking tests where students respond to tape-recorded stimuli, with their own responses being recorded
and later scored. These tests are semi-direct in the sense that, although not direct, they simulate direct
testing.
Discrete Point Testing
§ refers to the testing of one element at a time, item by item. This might, for
example, take the form of a series of items, each testing a particular grammatical
structure.
Integrative testing
§ requires the candidate to combine many language elements in the completion of a task, such as writing a composition or taking notes while listening to a lecture.
Discrete point tests will almost always be indirect, while integrative tests will tend to be
direct. However, some integrative testing methods, such as the cloze procedure, are indirect.
Norm-Referenced Testing
· it is a test that indicates how a pupil’s performance compares to that of other pupils.
(Santos, R 2007)
Norm-Referenced vs. Criterion-Referenced Testing
1. Norm-referenced Tests are used to determine the achievements of individuals in comparison with the
achievements of other individuals who take the same test.
2. In norm-referenced test, the quality of achievement of a student is determined by the distance of his
score from the mean or median.
3. Norm-referenced tests are designed to produce variability among individuals. To achieve this, items of
varying difficulty are included in the test. Variability among the scores reflects good measurement, while
homogeneity indicates poor measurement to some extent.
4. In norm-referenced tests, non-discriminating items, such as items that are too easy, too difficult, or
ambiguous, are removed or improved. Hence, sampling of test items is allowed and utilized.
5. In norm-referenced tests, relative placement indices are used to describe the relative placement of
scores. Such indices are absolute ranks, quartiles, means, medians, and the like.
6. In norm-referenced tests, learners may be allowed to tackle a higher-level learning task although
they have not mastered the preceding learning task very well.
1. Criterion-referenced tests are used to determine the achievements of individuals in comparison with a
criterion, usually an absolute standard.
2. In a criterion-referenced test, the quality of achievement of a student is determined by the distance of
his score from the established criterion.
3. Criterion-referenced tests are used to determine the level of skill or knowledge of individuals, i.e.,
whether they are capable or qualified to apply such skill or knowledge.
4. In criterion-referenced tests, too-easy or too-difficult items are not removed; rather, they should be
included if they truly reflect what is being measured.
5. In criterion-referenced tests, an individual's score is simply above or below the standard or criterion.
6. In criterion-referenced tests, a pupil is not supposed to tackle a higher learning task if he has not
passed the standard set for the preceding learning task.
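The two interpretations can be contrasted on one set of scores: a norm-referenced reading reports each score's distance from the mean (a z-score), while a criterion-referenced reading only checks it against a fixed standard. The scores and the 70-point criterion below are hypothetical, chosen only for this sketch.

```python
from statistics import mean, stdev

scores = [55, 62, 70, 78, 85]
criterion = 70  # hypothetical absolute standard

m, s = mean(scores), stdev(scores)
for x in scores:
    z = (x - m) / s                                 # norm-referenced: distance from the mean
    status = "pass" if x >= criterion else "fail"   # criterion-referenced: above/below standard
    print(f"score={x} z={z:+.2f} criterion={status}")
```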
Objective Tests – If no judgment is required on the part of the scorer, then the scoring is objective.
These are tests that require one and only one possible answer. The scoring of this type of test is easy because
there is one-to-one correspondence of examinees’ answers with what is specified in the key answers. The
objectivity of scoring means that when one rater checks the paper today and another will check the same set
of papers tomorrow, the scores will always be the same. To elaborate further, no matter who checks the
paper at different times and settings, similar results are gathered.
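The one-to-one correspondence with a key means objective scoring reduces to a direct comparison, which is why any rater, at any time, obtains the same total. A minimal sketch; the item labels and answers are hypothetical.

```python
# Objective scoring: each answer is right or wrong by direct
# comparison with the answer key, so every rater gets the same total.
def score(answers, key):
    return sum(1 for item, correct in key.items() if answers.get(item) == correct)

key     = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
answers = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}
print(score(answers, key))  # 3
```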
Subjective Test – If judgment is called for, the scoring is said to be subjective. There are different
degrees of subjectivity in testing. The impressionistic scoring of a composition may be considered more
subjective than the scoring of short answers in response to questions on a reading passage. These are tests
that require a tedious scoring task (e.g., essay test). The difficulty of scoring makes it possible that one
checker may rate one piece of work differently from that of the other. This is a challenge in the use of
subjective tests. To objectivize the quantification of subjective tests, the teacher should develop rubrics for
scoring.
Computerized adaptive tests (CAT, sometimes called computer-adaptive tests) are designed to adjust their
level of difficulty, based on the responses provided, to match the knowledge and ability of the test taker.
In most paper and pencil tests, the candidate is presented with all the items, usually in ascending order of
difficulty, and is required to respond to as many of them as possible. This is not the most economical way of
collecting information on someone’s ability. People of high ability (in relation to the test as a whole) will
spend time responding to items that are very easy for them – all, or nearly all, of which they will get correct.
We would have been able to predict their performance on these items from their correct response to more
difficult items. Similarly, we could predict the performance of people of low ability on difficult items, simply
by seeing their consistently incorrect response to easy items. There is no real need for strong candidates to
attempt easy items, and no need for weak candidates to attempt difficult items.
Computer adaptive testing offers a potentially more efficient way of collecting information on people’s
ability. All candidates are presented initially with an item of average difficulty. Those who respond correctly
are presented with a more difficult item; those who respond incorrectly are presented with an easier item. The
computer goes on in this way to present individual candidates with items that are appropriate for their
apparent level of ability (as estimated by their performance on previous items), raising or lowering the level of
difficulty until a dependable estimate of their ability is achieved. Oral interviews are typically a form of
adaptive testing, with the interviewer’s prompts and language being adapted to the apparent level of the
candidate.
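The adaptive loop described above can be sketched as follows. This is a simplified model built on assumptions: a fixed bank of difficulty levels 1-9, a one-step up/down adjustment rule, and a fixed number of items, whereas real CAT systems typically update an ability estimate using item response theory and stop when that estimate is dependable.

```python
# Minimal sketch of a computer-adaptive test loop: start at average
# difficulty, step up after a correct answer and down after an
# incorrect one, and report the final difficulty level reached.
def adaptive_test(respond, levels=range(1, 10), start=5, n_items=6):
    level = start
    for _ in range(n_items):
        correct = respond(level)   # administer an item at this difficulty
        if correct and level < max(levels):
            level += 1             # correct -> harder item next
        elif not correct and level > min(levels):
            level -= 1             # incorrect -> easier item next
    return level

# Simulated candidate who answers correctly on items up to difficulty 7.
print(adaptive_test(lambda difficulty: difficulty <= 7))  # 7
```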
https://www.teachingenglish.org.uk/article/testing-assessment
• Some students become so nervous that they can't perform and don't give a true account of their knowledge or ability.
• Other students can do well with last-minute cramming despite not having worked throughout the course.
• Once the test has finished, students can just forget all that they had learned.
• Students become focused on passing tests rather than learning to improve their language skills.
A test can give the teacher valuable information about where the students are in their learning and can
affect what the teacher will cover next. They will help a teacher to decide if her teaching has been effective
and help to highlight what needs to be reviewed. Testing can be as much an assessment of the teaching as
of the learning.
Tests can give students a sense of accomplishment as well as information about what they know and what
they need to review.
o In the 1970s students in an intensive EFL program were taught in an unstructured conversation course.
They complained that even though they had a lot of time to practise communicating, they felt as if they
hadn't learned anything. Not long afterwards a testing system was introduced and helped to give them a
sense of satisfaction that they were accomplishing things. Tests can be extremely motivating and give
students a sense of progress. They can highlight areas for students to work on and tell them what has and
hasn't been effective in their learning.
Tests can also have a positive effect in that they encourage students to review material covered on the
course.
o At university I experienced this first hand: I always learned the most before an exam. Tests can encourage
students to consolidate and extend their knowledge.
Tests are also a learning opportunity after they have been taken. The feedback after a test can be invaluable
in helping a student to understand something she couldn't do during the test. Thus the test is a review in
itself.
Try to make the test a less intimidating experience by explaining to the students the purpose for the test and
stress the positive effects it will have. Many may have very negative feelings left over from previous bad
experiences.
Give the students plenty of notice and teach some revision classes beforehand.
Tell the students that you will take into account their work on the course as well as the test result.
Be sensitive when you hand out the results. I usually go through the answers fairly quickly, highlight any
specific areas of difficulty and give the students their results on slips of paper.
Emphasise that an individual should compare their results with their own previous scores, not with those of
others in the class.
"Are the test results consistent with the work that the students have done on the course? Why/why not?"
"Did I manage to create a non-threatening atmosphere?"
All of this will help the teacher to improve the evaluative process for next time.
Alternatives to testing
Using only tests as a basis for assessment has obvious drawbacks. They are 'one-off' events that do not
necessarily give an entirely fair account of a student's proficiency. As we have already mentioned, some
people are more suited to them than others. There are other alternatives that can be used instead of or
alongside tests.
Continuous assessment
Teachers give grades for a number of assignments over a period of time. A final grade is decided on a
combination of assignments.
Portfolio
A student collects a number of assignments and projects and presents them in a file. The file is then used as
a basis for evaluation.
Self-assessment
The students evaluate themselves. The criteria must be carefully decided upon beforehand.
Teacher's assessment
The teacher gives an assessment of the learner for work done throughout the course including classroom
contributions.
Conclusions
Overall, I think that all the above methods have strengths and limitations and that tests have an important
function for both students and teachers. By trying to limit the negative effects of tests we can try to ensure
that they are as effective as possible. I don't think that tests should be the only criteria for assessment, but
that they are one of many tools that we can use. I feel that choosing a combination of methods of
assessment is the fairest and most logical approach.