A test is a formal instrument used to measure what learners know or can do.
What are tests for?
To inform learners and teachers of the strengths and weaknesses of the process.
To motivate learners to review or consolidate specific material.
To create a sense of accomplishment/success.
To guide the planning/development of the ongoing teaching process.
To determine if (and to what extent) the objectives have been achieved.
To encourage improvement.
Depending on purpose: Screening/Selection/Admission, Placement, Proficiency, Aptitude, Diagnostic, Achievement, Progress.
Depending on characteristics: Direct/Indirect tests, Discrete-point/Integrative tests, Criterion-referenced/Norm-referenced tests, Objective/Subjective tests, Speed/Power tests, Knowledge/Skill tests.

Depending on purpose:
Screening/Selection/Admission: To determine whether a person has the required behavior to be successful in a specific program (not based on objectives), e.g. the IPC's admission test.
Placement: To determine the level at which a person should be placed within a program (designed by the institution), e.g. the CVA's placement test.
Proficiency: To know if a person shows overall proficiency in a language, compared to native speakers in real-life contexts, e.g. the TOEFL test.
Aptitude: To identify a person's talent for something specific; the suitability of a candidate for a specific program of instruction.
Diagnostic: Refers to entrance behavior or previous knowledge. Used to determine strengths and weaknesses and to guarantee that potential problems will be corrected (performed by the teacher).
Achievement: To know if a determined objective has been covered successfully.
Progress: To check improvement achieved according to a referential point in a program.
Depending on characteristics:
Direct Tests: Test the skill itself. Students perform exactly what we want to test. Indirect Tests: Test abilities related to the skills we are interested in, e.g. assess grammar and spelling through a written exercise.
Discrete point: Focus on restricted areas of the target language, e.g. a cloze exercise on verb tenses. Integrative tests: Measure overall language proficiency, e.g. oral interviews assess fluency, pronunciation, content, grammar, comprehension, etc.
Criterion-referenced: Exams describe what a person can do in relation to the course objectives or predefined criteria; there is no comparison between students. Norm-referenced: Exams compare one person's performance with that of many others (established from the total population's results; comparison to the average).
Objective tests: No judgment involved. Answers are either right or wrong. (e.g. Yes/no question items) Subjective tests: Judgment and opinion involved. No right or wrong answer. (e.g. opinion/discussion items)
Speed test: Easy items in a very short time; assesses speed of performance and strategy, e.g. scanning exercises. Power test: Difficult items with ample time; assesses knowledge.
Knowledge tests: Assess the language components, e.g. grammar quizzes. Skill tests: Assess the skills, e.g. listening quizzes.
Specific guidelines: The way the test is designed and organized.
Moderation of mark scheme: The way in which teachers set the scoring of the test.
Standardization of examiners: The way in which examiners guarantee common criteria for correction.
Weir, C.
Specific Guidelines
Moderation of tasks: Seeking feedback; revision made by other teachers.
Level of difficulty: The tasks in a test should be arranged from easy to difficult; starting with the most difficult task will lead the weakest learners to give up early. An item is easy if 75% of students answer it correctly, average if 50% answer it correctly, and difficult if only 25% answer it correctly (determined through a pilot test).
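As a concrete illustration of the pilot-test thresholds above, here is a minimal sketch (function names and data are hypothetical, not from the source) that classifies items by the proportion of students who answered them correctly:

```python
# Illustrative sketch: classifying pilot-test items by facility value,
# using the thresholds from the guidelines (75% easy, 50% average, 25% difficult).

def facility(correct_answers, total_students):
    """Proportion of students who answered the item correctly."""
    return correct_answers / total_students

def difficulty_label(p):
    """Map a facility value to the labels used in the guidelines."""
    if p >= 0.75:
        return "easy"
    if p >= 0.50:
        return "average"
    return "difficult"

# Hypothetical pilot-test results: item -> students answering correctly (of 20)
pilot = {"item 1": 18, "item 2": 11, "item 3": 4}
for item, correct in pilot.items():
    print(item, difficulty_label(facility(correct, 20)))
```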
Discrimination: A test should allow candidates at different levels to perform according to their abilities. A variety of tasks ranging from easy to difficult should point out the differences between good and weak learners. The number of difficult tasks should be limited, and they should go at the end of the test.
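The guidelines do not give a formula for discrimination, but one common convention (an assumption here, not the author's method) is the upper-lower index: the proportion of top-scoring students who got an item right minus the proportion of bottom-scoring students who did. A minimal sketch:

```python
# Hypothetical sketch: the classic upper-lower discrimination index.
# This formula is a common psychometric convention, not taken from the source.

def discrimination_index(upper_correct, lower_correct, group_size):
    """(top-group proportion correct) minus (bottom-group proportion correct)."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical: of 10 strong and 10 weak students, 9 vs. 3 got the item right.
d = discrimination_index(9, 3, 10)
print(d)  # a high value means the item separates good and weak learners well
```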
Appropriate sample: The test should present a representative sample of the objectives, activities and tasks taught or used in the classroom.
Overlap: Occurs when content is assessed more than once. It should be avoided, both because reassessing content yields an unrepresentative sample and to prevent visual and mental overload for students.
Clarity of tasks: Instructions should be simple and unambiguous, providing a clear indication of what the task demands from the student. Instructions should never be more difficult than the task.
Questions and texts: The selection of questions and texts depends on the purpose and the formats chosen by the designer of the test. Again, the difficulty should not lie in the question but in the task. Conversely, questions should not be too simple, obvious, or answerable from world knowledge alone.
Timing: Testers should give students a reasonable amount of time to complete the test, since too little time will yield unreliable results. Students should be aware of the time set for each part of the test. The time allotted should reflect the importance and difficulty of what is being assessed. To determine an appropriate time, teachers can pilot the test with a group of a similar level or draw on similar evaluative experiences in the classroom.
Layout: Presentation, printing, spacing, font size, style, and numbering formats (a, b, c; I, II, III; 1, 2, 3). The layout should be consistent, and single parts should be arranged on the same page.
Bias: Bias can result from experiential, cultural or knowledge-based factors. Teachers should avoid items or topics inclined to give an unfair advantage to a particular group of students. Conversely, teachers should also avoid tasks or issues so obscure that candidates might have no frame of reference within which to process and comprehend what is being asked.
Moderation of Mark Scheme
Acceptable response/variations.
Subjectivity in productive tasks.
Weighting (balance between items/tasks and scores).
Computation: The data and results should be easy to compute, and the manipulation of numbers convenient: simple for students and teachers to conceive and process.
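To illustrate weighting and convenient computation together, here is a minimal sketch with hypothetical sections, scores, and weights (none taken from the source): each raw score is scaled by its weight so the total comes out of 100 and can be checked by hand.

```python
# Illustrative sketch: a simple weighted mark scheme that is easy to compute.
# Sections, scores, and weights are hypothetical.

sections = {            # section -> (raw score, max score, weight out of 100)
    "listening": (8, 10, 25),
    "grammar":   (15, 20, 25),
    "writing":   (12, 20, 50),
}

# Scale each section to its weight, then add: the result is out of 100.
total = sum(raw / maximum * weight for raw, maximum, weight in sections.values())
print(total)  # 68.75
```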
Avoidance of muddied measurement: The use of a skill should not interfere with the measurement of another.
Accessibility/intelligibility of mark scheme: Easy and convenient to access, use and understand.

Standardization of Examiners
Agreement on criteria: by teachers and students.
Trial assessment: to assess difficulty and potential problems.
Review of procedures: related to the test.
Follow-up checks: Notes or reports on the results of the tests (to improve or consolidate them).
Review Questions
1. The formal instrument to measure what learners can do or know about something is known as:
a. Assessment
b. Evaluation
c. Test
d. Checklist
2. The test applied to know if a person has the required behavior to be successful in a specific program is called:
a. Placement
b. Admission
c. Aptitude
d. Proficiency
3. The test applied to know if a determined objective has been covered successfully is known as:
a. Achievement
b. Screening
c. Progress
d. Diagnostic
4. The TOEFL is an example of what kind of test?
a. Admission
b. Proficiency
c. Progress
d. Placement