
TABLE OF SPECIFICATION
Definition
• A table of specification is a chart that provides a graphic representation of the content of a course or curriculum elements and the educational objectives.
• A table of specification is a two-way chart that describes the topics to be covered in a test and the number of items or points associated with each topic.
What are the benefits of a table of specification?

• Clarify learning outcomes
• Ensure content coverage
• Match methods of instruction
• Help in assessment planning
Things to take into account when building a table of specification

A table of specification is designed based on:

• Course learning outcomes/objectives.
• Topics covered in class.
• Amount of time spent on those topics.
• Methods of instruction.
• Assessment plan.
Constructing the table of specification

The table is guided by the content of the curriculum:

• Followed by learning outcomes/objectives.
• Followed by Bloom's taxonomy and its levels, keeping in mind the content and learning outcomes.
• Followed by methods of instruction matched with the content, learning outcomes, and time spent on each topic. The percentages should add back up to 100%.
• Finally, the assessment plan is added, keeping in mind the content, learning outcomes, and time spent on instruction.
TOS EXAMPLE

Course content                       Knowledge 30%   Comprehension 40%   Application 30%   No. of items
Assessment                           1               3                   4                 8
Measurement                          2               4                   3                 9
Evaluation                           1               2                   1                 4
Comparison b/t evaluation and test   1               1                   1                 3
Test                                 1               1                   1                 3
Validity and Reliability             2               1                   1                 4
Total                                8               12                  11                31
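The allocation behind a table like the one above can be sketched programmatically: each cell is roughly the total item count multiplied by the topic weight and the cognitive-level weight. The topic weights and item total below are illustrative assumptions, not the figures from the example table.

```python
# Sketch: distributing test items across a two-way table of
# specification. All weights here are illustrative assumptions.

def allocate_items(total_items, topic_weights, level_weights):
    """Return a {topic: {level: item_count}} table, allocating items
    in proportion to topic weight x cognitive-level weight."""
    table = {}
    for topic, tw in topic_weights.items():
        table[topic] = {
            # Rounding means cells may need small manual adjustments
            # so that row and column totals match the plan exactly.
            level: round(total_items * tw * lw)
            for level, lw in level_weights.items()
        }
    return table

level_weights = {"Knowledge": 0.30, "Comprehension": 0.40, "Application": 0.30}
topic_weights = {"Assessment": 0.25, "Measurement": 0.30, "Evaluation": 0.45}

for topic, row in allocate_items(20, topic_weights, level_weights).items():
    print(topic, row, "row total:", sum(row.values()))
```

Because of rounding, the computed cells are a starting point; the test designer adjusts them so the row and column totals work back to the planned percentages.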
Reliability

• The reliability of an assessment tool is the extent to which it consistently and accurately measures learning.
• When the results of an assessment are reliable, we can be confident that repeated or
equivalent assessments will provide consistent results. This puts us in a better
position to make generalized statements about a student’s level of achievement,
which is especially important when we are using the results of an assessment to
make decisions about teaching and learning, or when we are reporting back to
students and their parents or caregivers. No results, however, can be completely
reliable. There is always some random variation that may affect the assessment, so
educators should always be prepared to question results.
• Factors which can affect reliability:
• The length of the assessment – a longer assessment generally produces more reliable
results.
• The suitability of the questions or tasks for the students being assessed.
• The phrasing and terminology of the questions.
• The consistency in test administration – for example, the length of time given for the
assessment, instructions given to students before the test.
• The design of the marking schedule and moderation of marking procedures.
• The readiness of students for the assessment – for example, a hot afternoon or
straight after physical activity might not be the best time for students to be assessed.
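In practice, the reliability of a test's scores is often estimated with an internal-consistency statistic such as Cronbach's alpha. A minimal sketch, using made-up item scores (each inner list holds one item's scores across the same group of students):

```python
# Sketch: Cronbach's alpha, a common internal-consistency
# estimate of reliability. The score data below is invented
# purely for illustration.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned by student."""
    k = len(item_scores)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    total_scores = [sum(per_student) for per_student in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(total_scores))

# Three items scored 0/1 for four students (illustrative data).
scores = [
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
]
print(round(cronbach_alpha(scores), 3))  # values closer to 1.0 indicate more consistent items
```

A longer assessment generally raises this statistic, which is one way to see why test length affects reliability, as noted above.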
Validity

• Educational assessment should always have a clear purpose. Nothing will be
gained from assessment unless the assessment has some validity for the
purpose. For that reason, validity is the most important single attribute of a
good test.
• The validity of an assessment tool is the extent to which it measures what it
was designed to measure, without contamination from other characteristics.
For example, a test of reading comprehension should not require
mathematical ability.
• There are several different types of validity:
• Face validity: do the assessment items appear to be appropriate?
• Content validity: does the assessment content cover what you want to
assess?
• Criterion-related validity: how well do the scores correspond to an external criterion, such as another established measure of the same ability?
• Construct validity: are you measuring what you think you're measuring?
