
EFFECTS on LEARNING

Negative effects
1. Backwash
- From the teachers’ perspective, in an aligned system, the intended learning outcomes form the central
pillar; from the students’ perspective, however, the curriculum is defined by assessment. As a result,
students learn what they think will be tested.
- Tends to occur in an exam-dominated system, where exam strategies are more important than knowledge
2. Hidden Curriculum
- What happens behind the formal curriculum. Students construct their own meaning and understanding of
the curriculum from implicit and explicit messages about what counts in assessment. Once students work
out what the hidden curriculum is, they can approach their learning more strategically and time-efficiently.
As a result, they may know how to pass exams without truly understanding the material.
Positive effects
1. Encouraging and facilitating learning
2. Using Backwash positively to encourage appropriate learning
- This may happen when tests are well-planned, aligned, and designed to measure the full range of identified
outcomes
3. Developing a deeper understanding of the material and a wider background knowledge by changing the way
assessment is conducted and adding activities that engage students
4. Raising awareness of both learning and assessment
TYPES of ASSESSMENT
1. Direct testing
- Testing a particular skill by getting the student to perform that skill
- e.g. Testing whether someone can write a discursive essay by asking them to write one
- The argument is that this kind of test is more valid because it tests the outcome itself, not just the individual
skills and knowledge that the test-taker needs to deploy
2. Indirect testing
- Trying to test the abilities which underlie the skills we are interested in
- e.g. Testing whether someone can write a discursive essay by testing their ability to use contrastive markers,
modality, hedging etc.
- Although this kind of test is less able to show whether the individual skills can be combined, it is easier
to mark objectively
3. Discrete-point testing
- A test format with many items requiring short answers which each target a defined area
- e.g. placement tests are usually of this sort with multiple-choice items focused on vocabulary, grammar,
functional language etc.
- These sorts of tests can be very objectively marked and need no judgement on the part of the markers
4. Integrative testing
- Combining many language elements to do the task
- e.g. Public examinations contain a good deal of this sort of testing, with marks awarded for various elements:
accuracy, range, communicative effect, etc.
- Although the task is integrative, the marking scheme is designed to make the marking non-judgemental by
breaking down the assessment into discrete parts
5. Subjective marking
- The marks awarded depend on someone's opinion or judgement
- e.g. marking an essay on the basis of how well you think it achieved the task
- Subjective marking has the great disadvantage of requiring markers to be very carefully monitored and
standardised to ensure that they all apply the same strictness of judgement consistently
6. Objective marking
- Marking where only one answer is possible - right or wrong
- e.g. machine marking a multiple-choice test completed by filling in a machine-readable mark sheet
- This obviously makes the marking very reliable but it is not always easy to break language knowledge and
skills down into digital, right-wrong elements.
7. Analytic marking
- The separate marking of the constituent parts that make up the overall performance
- e.g. breaking down a task into parts and marking each bit separately (see integrative testing, above)
- This is very similar to integrative testing but care has to be taken to ensure that the breakdown is really into
equivalent and usefully targeted areas
8. Holistic marking
- Different activities are included in the overall description to produce a multi-activity scale
- e.g. marking an essay on the basis of how well it achieves its aims (see subjective marking, above)
- The term holistic refers to seeing the whole picture, and such marking has the same drawbacks as
subjective marking, requiring monitoring and standardisation of markers.
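The contrast between objective and analytic marking above can be sketched in code. A minimal illustration (the answer key, criteria, and weights here are invented for the example):

```python
# Objective marking: each item has exactly one right answer, so any
# marker (human or machine) produces the same score every time.
def mark_objectively(answer_key, responses):
    return sum(1 for key, given in zip(answer_key, responses) if key == given)

# Analytic marking: the performance is broken into constituent parts
# and each part is scored on its own band, then the bands are combined.
def mark_analytically(band_scores, weights):
    return round(sum(band_scores[part] * weights[part] for part in band_scores), 2)

key = ["b", "a", "d", "c"]
print(mark_objectively(key, ["b", "a", "c", "c"]))  # 3 (three of four items correct)

essay = {"accuracy": 3, "range": 4, "task achievement": 4}        # bands out of 5
weights = {"accuracy": 0.4, "range": 0.3, "task achievement": 0.3}
print(mark_analytically(essay, weights))  # 3.6
```

The objective scorer needs no judgement at all, while the analytic scorer still depends on a human assigning each band; only the combination step is mechanical.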

Difference between Testing and Assessment

TESTING
- Testing is a form of assessment but, as we shall see, it comes in all shapes and sizes.
- A bi-weekly progress test is part of evaluation, although learners may see it as assessment. When testing is
formal and externally administered, it is usually called examining.
- External, and informal or formal, classroom-based.

ASSESSMENT
- When we are talking about giving people a test and recording scores etc., we would normally refer to this.
- When someone else measures success for us (as in teaching a lesson), it is called assessment.
- External and formal.

Effects on Teaching

Positive effects
- Identifying the level of students.
- Feedback for improvement.
- Identifying gaps in knowledge.
- Driving the curriculum in ways that replicate the test's format, content, and cognitive requirements.

Negative effects
- Ignoring knowledge which is not related to the test.
- Reducing instructional time.

Diagnostic test:
- a test set early in a program to plan the syllabus.
- can be Quizzes, surveys, checklists, discussion boards, etc.
- discover the learner's strengths and weaknesses for planning purposes.
- used to determine a learner's proficiency level in English before they begin a course.
- helps the teacher to identify gaps in learners’ understanding.

Barrier Test:
- used to test the learner's current level.
- given before the course.
- helps the teacher to know whether the learner is ready for the new course or not.

Aptitude Test:
- not a test for which a person can study.
- generally used for job placement, college program entry.
- used to determine an individual's skill or propensity.
- used to test a learner’s general ability to learn a language rather than the ability to use a particular language.
- used to assess how learners are likely to perform in an area in which they have no prior training or knowledge.

Proficiency Test:
- used to determine the language level of the learner.
- often takes the same form as a public examination.
- used to test a learner’s ability in the language regardless of any course they may have taken.
- can be used for placement.

Progress Test
- an assessment given during a course that measures how a school and its students are performing – student by
student, class by class and year by year.

Achievement Test
- an assessment of developed knowledge or skill.
- used to assess the learners' cognitive abilities in the course.
- an end-of-course or end-of-week test (even a mid-lesson test).
- measures a learner's performance at the end of a period of study to evaluate the effectiveness of the programme.
- evaluates a learner's language knowledge to show how their learning has progressed.

CRITERIA OF A TEST:
1. reliability: a reliable test is one which will produce the same result if it is administered again.
- Circumstances
Make an effort to see that conditions (noise levels, room temperature, level of distraction, etc.) are kept stable.
- Marking
The more subjectively a test is marked, the more carefully markers must be standardised; the fewer the
markers, the easier this is to do. Ensure at least double marking of most work.
- Uniformity: parallel versions of the same test should be genuinely equivalent.
- Quantity
The more evidence, the more reliable the judgement. The disadvantage is that marking takes longer and
test-takers tire.
- Constraints
Free tasks allow test-takers to avoid errors, whereas controlled tasks let you gauge more reliably whether the
targets can be successfully achieved. A structured-response test with a clear rubric forces test-takers to
produce language in specific areas.
- Make rubrics clear
Any misunderstanding of what is required undermines reliability. Learners vary in their familiarity with certain
task types, so making the rubric clear helps level the playing field.
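The double-marking point can be made concrete: before combining two markers' scores, check how closely they agree. A small sketch (the scores and the one-mark tolerance are invented for illustration):

```python
# Double marking: two markers score the same scripts independently.
# Close agreement suggests the markers are well standardised; large
# disagreements flag scripts that need a third marking.
def check_double_marking(marker_a, marker_b, tolerance=1):
    flagged = [i for i, (a, b) in enumerate(zip(marker_a, marker_b))
               if abs(a - b) > tolerance]
    agreement = 1 - len(flagged) / len(marker_a)
    return agreement, flagged

a = [14, 12, 17, 9, 15]   # marker A's scores out of 20
b = [13, 12, 14, 9, 16]   # marker B's scores for the same five scripts
rate, disputed = check_double_marking(a, b)
print(rate)      # 0.8 – within one mark on four of five scripts
print(disputed)  # [2] – script 2 differs by three marks
```

A low agreement rate is a signal to re-standardise the markers before any scores are reported.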

2. validity: a valid test tests what you think it is testing, so its results are meaningful.
- Face validity
Students won't perform at their best in a test they don't trust to be properly assessing what they can do.
The environment matters too (e.g. a formal event held in silence with no cooperation between test-takers).
- Content validity
The test should contain only what has been taught, without extraneous material; coverage plays a role here.
- Predictive validity
A test with predictive validity tells you how well learners will perform later – in the tasks set and in the
examination the lessons are preparing them for.
- Concurrent validity
To check a test against an established one, administer both tests to a large group and compare the results;
parallel results are a sign of good concurrent validity.
- Construct validity
A construct is something that happens in the brain; it has nothing to do with constructing a test. To have
high construct validity, a test-maker must be able to answer, succinctly and consistently, the question of
what construct each item is testing.

3. practicality: the test is deliverable in practice


- administration
+ The test should not be too complicated or too complex to conduct.
+ The test should be quite simple to administer.
- scoring/ evaluation
+ The scoring/evaluation process should fit into the time allocation.
+ The test should be accompanied by scoring rubrics, key answers, and so on to make it easy to evaluate.
- design: based on time, money, space, equipment
+ appropriately utilizes available material resources.
+ the test can be created and delivered within the time available (e.g. 10–15 minutes), with parallel versions
(different test codes) kept lightweight and scalable.

4. Discrimination:
- refers to the ability a test has to distinguish clearly, and quite finely, between different levels of
learner.
- If a test is too simple, most of the learners in a group will get most of it right, which is good for boosting
morale but poor if you want to know who is best and worst at certain tasks.
- If a test is too difficult, most of the tasks will be poorly achieved and your ability to discriminate between the
learners' abilities in any area will be compromised.
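Discrimination is often quantified with a simple item-discrimination index: compare how the strongest and weakest groups of test-takers did on each item. A sketch using the conventional upper-group/lower-group formula (the counts are invented):

```python
# Discrimination index: D = (upper-group correct - lower-group correct) / group size.
# D close to +1 means the item separates strong from weak learners well;
# D close to 0 means the item is too easy or too hard to discriminate.
def discrimination_index(upper_correct, lower_correct, group_size):
    return (upper_correct - lower_correct) / group_size

# Nine of the ten strongest learners got the item right; three of the ten weakest did.
print(discrimination_index(9, 3, 10))    # 0.6 – a well-discriminating item

# An item everyone answers correctly cannot tell you who is best or worst.
print(discrimination_index(10, 10, 10))  # 0.0
```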

What is Bloom’s taxonomy?


Taxonomy is an orderly classification of items according to systematic relationships (low to high, small to big,
simple to complex).
In one sentence, Bloom’s Taxonomy is a hierarchical ordering of cognitive skills that can, among countless other
uses, help teachers teach and students learn.
Bloom’s Taxonomy can be used to:
+ Create assessments
+ Frame discussions
+ Plan lessons
+ Evaluate the complexity of assignments
+ Design curriculum maps
+ Support self-assessment

a/ The original version

The original version was created by Benjamin Bloom in 1956.
The framework was revised in 2001 by Lorin Anderson and David Krathwohl, producing the revised Bloom's Taxonomy.

Exam Matrix
The types of questions:
1. Knowledge-based questions
2. Comprehension questions
3. Low-level application questions
4. High-level application questions

Total number of questions: 50 (100%)
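The percentages in the matrix follow directly from the question counts; a quick arithmetic check:

```python
# Question counts per level, as given in the exam matrix (50 questions in total).
matrix = {
    "knowledge-based": 16,
    "comprehension": 16,
    "low-level application": 12,
    "high-level application": 6,
}
total = sum(matrix.values())
print(total)  # 50
for level, count in matrix.items():
    print(f"{level}: {count} questions ({100 * count // total}%)")
# knowledge-based: 16 questions (32%)
# comprehension: 16 questions (32%)
# low-level application: 12 questions (24%)
# high-level application: 6 questions (12%)
```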


a. Knowledge-based questions: 16 questions (32%)
- Require simple recall of previously learned information without a deeper understanding, application, or analysis.
- Test the student's ability to recall the pronunciation of specific words, specifically where the stress falls in each
word.
- (It doesn't require interpretation, application, or higher-order analysis—it simply tests what the student
remembers about how each word is pronounced)
- Require a direct answer: There's no need for interpretation or application here.
- It's a direct query into their stored knowledge on the topic.
"Stress" area:
Question 18: Stress in words with more than 2 syllables
Question 19: Stress in words with 2 syllables
"Pronunciation" Area:
Question 20: "-ed" pronunciation
Question 21: Vowel pronunciation
b. Comprehension questions: 16 questions (32%)
- Recognizing synonyms or antonyms requires both understanding the given word and translating that
understanding to select another word with a similar meaning or opposite meaning from the options provided.
- This goes beyond mere recall; it demands a deeper comprehension of each word's meaning in context.
- Understanding words in context is a more advanced skill than recognizing their standalone definitions.
Synonyms" area:
Question 22: Indicate the synonym of “Glad”
Question 23: Indicate the synonym of “Understand”
"Antonyms" Area:
Question 25: Indicate the antonym of “dull”

c. Low-level application questions: 12 questions (24%)


- The reason is that the students are required to use their understanding of verb forms (which they would have
learned and understood previously) and apply it to a new context or situation
- In this case, completing the given sentence with the appropriate verb form.
- The learner isn't just remembering the verb forms (which would be the lowest level).
- They aren't simply understanding or explaining the differences between the verb forms either.
Question 14: Reduced Adverbial Clause
d. High-level application questions: 6 questions (12%)
- The question isn't asking students to merely remember the definition of the phrase "two of a kind."
- Nor is it about just understanding the phrase in isolation.
- They have to use their understanding of the phrase's meaning and context to make the correct choice among the
provided options.
- They're actively applying their knowledge to discern the correct answer, rather than simply explaining or
describing the meaning of the phrase.
- Move beyond simple recall or definition recognition.
- Require students to understand the contextual meaning of a phrase ("two of a kind") and identify its opposite.
- The phrase "two of a kind" is idiomatic, (which means its meaning can't be easily deduced just by understanding
the individual words)
- Recognizing the opposite of such a phrase demands a deeper grasp of English idioms and phrases.
- The provided options aren't straightforward antonyms.
Antonyms" area:
Question 24: Indicate the synonym of the phrase “Two of a kind”
The phrase is an idiom
