
1. Proficiency tests are designed to measure “people’s ability in language” (Hughes, 2003). These tests are made by experts, and they usually measure overall proficiency and the general mental processes involved in performing specific tasks. One example of this type of test is the TOEFL.

2. Achievement tests are designed to examine students to find out whether they have accomplished the course’s objectives, or to verify whether they have acquired the competencies specified for the course. Achievement tests are sometimes designed by teachers; however, some course books come with an evaluation pack in the supplementary materials, which includes achievement tests. One example of this type of test is the one we are going to design in this unit.

3. Placement tests are designed by experts, and their purpose is to help students and teachers determine the best level at which students should start their learning process. These tests are designed to elicit what previous knowledge students have before being admitted to a language course. One example of this type of test is the one students have to take when they want to join UABC’s language school.

4. Diagnostic tests can be designed by experts or by teachers, and they are intended to show students’ strengths and weaknesses in language competence or general performance. One example of this kind of test is the one administered to students in some schools halfway through the course. These tests contrast with achievement tests because they focus on general proficiency rather than on specific objectives, as achievement tests do.

The explanation.

Direct testing vs. Indirect testing.


Direct testing: When we want to measure specific skills or abilities, we go directly to the aspect we want to measure. In this way we can measure students’ achievement of the unit’s contents. Example: if we want to know whether students have learned to write a letter, we can ask them to write a letter and thus evaluate that specific skill.

Indirect testing: If we want to measure students’ level of linguistic competence and the underlying mental processes involved in language comprehension and production, we have to design contextualized items that elicit responses which can help us determine whether or not students are performing certain processes. Example: if we want to know whether a student can differentiate the main idea of a text from its supporting ideas, we can ask several comprehension questions. The answers can be oral or written; what we measure is comprehension, not writing skills or pronunciation mistakes.
Discrete point vs. Integrative testing.

Discrete point: If our purpose is to measure only one language element at a time, we use discrete-point testing. This kind of testing is often done indirectly, because the language elements are isolated from other elements that might interfere with the testing; for example, pronunciation mistakes could interfere with an otherwise correct comprehension answer.

Integrative testing: If our purpose is to measure integrated skills, we usually ask students to produce a certain product (letters, dialogues, monologues, etc.) in which they can show the complete array of skills involved in language production. This kind of testing is often done directly.

Norm-referenced vs. criterion-referenced testing.

Norm-referenced: If we want to measure people’s competence or performance but do not need an absolute score, only to know who performed best and who performed worst, we can establish the best performance as the norm against which to classify the rest of the test-takers.

Criterion-referenced testing: A table of specifications is necessary to establish the criteria for this kind of testing. Nowadays we talk about competencies; these competencies need to be well explained and defined so that students know what level of knowledge they are expected to have before the test is administered.
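The contrast between the two approaches can be sketched in a few lines of code. This is only an illustration with invented names, scores, and a hypothetical cutoff of 70 points; it is not part of any real test.

```python
# Hypothetical scores for a small group of test-takers.
scores = {"Ana": 72, "Ben": 85, "Carla": 60, "Diego": 85, "Eva": 48}

# Norm-referenced: each score is judged relative to the group.
# The best performance becomes the norm, and the rest of the
# test-takers are simply ranked against it.
norm_ranking = sorted(scores, key=scores.get, reverse=True)

# Criterion-referenced: each score is judged against a fixed,
# pre-established criterion (here, an assumed cutoff of 70 points
# taken from a table of specifications), regardless of how the
# rest of the group performed.
CRITERION = 70
criterion_result = {name: ("pass" if s >= CRITERION else "fail")
                    for name, s in scores.items()}

print(norm_ranking)
print(criterion_result)
```

Note that under the norm-referenced view the same student could rank first in a weak group and last in a strong one, while the criterion-referenced verdict depends only on the fixed cutoff.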

Objective vs. Subjective testing.

Objective testing: When we mark tests, we are supposed to determine whether the test-taker is right or wrong; if instead we evaluate responses as good or bad, we might end up marking students’ responses according to our mood and thus affect their scores. The best way to maintain objectivity is to design tests with discrete-point items, in an indirect kind of testing, so that the person who marks the test can avoid impressionistic judgment.
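Objective marking of discrete-point items amounts to comparing each response against an answer key, so every marker arrives at the same score. The key and the student responses below are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical answer key for a discrete-point multiple-choice section.
answer_key = {1: "b", 2: "a", 3: "d", 4: "c"}

def mark_objectively(responses):
    """Count items answered correctly against the key.

    Each item is simply right or wrong, so any marker produces
    the same score, with no impressionistic judgment involved.
    """
    return sum(1 for item, ans in responses.items()
               if answer_key.get(item) == ans)

student = {1: "b", 2: "c", 3: "d", 4: "c"}
print(mark_objectively(student))  # 3 of the 4 items match the key
```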

Subjective testing: When we have to make a judgment based on our impressions, we call it subjective testing. According to Hughes, there are several degrees of subjectivity. Marking an integrative kind of test is usually done subjectively: if we test the speaking skill directly, we might base our marks on the impression we get of the speaker’s fluency, accuracy, intonation, and overall competence. In this way, we could end up comparing the “best speakers and the worst speakers”, as in norm-referenced testing, instead of using pre-established criteria.
