
Assessment and Evaluation of Learning 1

Apply principles in constructing and interpreting alternative / authentic forms of high quality assessment.

PART I – CONTENT UPDATE

BASIC CONCEPTS

Test – an instrument designed to measure any characteristic, quality, ability, knowledge or skill. It is composed of items in the area it is
designed to measure.

Measurement – a process of quantifying the degree to which someone/something possesses a given trait, i.e., a quality, characteristic,
or feature

Assessment – a process of gathering and organizing quantitative or qualitative data into an interpretable form to have a basis for
judgement or decision-making.
- It is a prerequisite to evaluation. It provides the information which enables evaluation to take place

Evaluation – a process of systematic interpretation, analysis and appraisal or judgement of the worth of organized data as basis for
decision-making. It involves judgement about the desirability of changes in students.

Traditional Assessment – it refers to the use of pen-and-paper objective tests

Alternative Assessment – it refers to the use of methods other than pen-and-paper objective tests, which include performance tests,
projects, portfolios, journals, and the like.

Authentic Assessment – it refers to the use of assessment methods that simulate true-to-life situations. These could be written tests
that reflect real-life situations or performance tasks that parallel what we experience in real life.

PURPOSES OF CLASSROOM ASSESSMENT

1. Assessment FOR Learning – this includes three types of assessment done before and during instruction. These are
placement, formative and diagnostic.
a. Placement—done prior to instruction
 Its purpose is to assess the needs of the learners to have basis in planning for a relevant instruction
 Teachers use this assessment to know what their students are bringing into the learning situation and use this as a
starting point for instruction.
 The results of this assessment place students in specific learning groups to facilitate teaching and learning
b. Formative—done during instruction
 This assessment is where teachers continuously monitor the students’ level of attainment of the learning
objectives (Stiggins, 2005)
 The results of this assessment are communicated clearly and promptly to the students for them to know their
strengths and weaknesses and the progress of their learning.
c. Diagnostic – done during instruction
 This is used to determine students’ recurring or persistent difficulties
 It searches for the underlying causes of students’ learning problems that do not respond to first aid treatment
 It helps formulate a plan for detailed remedial instruction

2. Assessment OF Learning – this is done after instruction. This is usually referred to as the summative assessment
 It is used to certify what students know and can do and the level of their proficiency or competency
 Its results reveal whether or not instruction has successfully achieved the curriculum outcomes
 The information from assessment of learning is usually expressed as marks or letter grades
 The results of which are communicated to the students, parents, and other stakeholders for decision-making
 It is also a powerful factor that could pave the way for educational reforms

3. Assessment AS Learning – this is done for teachers to understand and perform well their role of assessing FOR and OF
learning. It requires teachers to undergo training on how to assess learning and be equipped with the following competencies
needed in performing their work as assessors.

Standards for Teacher Competence in Educational Assessment


(Developed by the American Federation of Teachers, the National Council on Measurement in Education, and the National Education Association)
1. Teachers should be skilled in choosing assessment methods appropriate for instructional decisions
2. Teachers should be skilled in developing assessment methods appropriate for instructional decisions
3. Teachers should be skilled in administering, scoring and interpreting the results of both externally-produced and teacher-
produced assessment methods
4. Teachers should be skilled in using assessment results when making decisions about individual students, planning teaching,
developing curriculum, and school improvement
5. Teachers should be skilled in developing valid pupil grading procedures which use pupil assessments
6. Teachers should be skilled in communicating assessment results to students, parents, other lay audiences, and other educators
7. Teachers should be skilled in recognizing unethical, illegal, and otherwise inappropriate assessment methods and uses of
assessment information

PRINCIPLES OF HIGH QUALITY CLASSROOM ASSESSMENT

Principle 1: Clarity and Appropriateness of Learning Targets


 Learning targets should be clearly stated, specific, and centered on what is truly important
Learning Targets

(McMillan, 2007; Stiggins, 2007)

Knowledge Student mastery of substantive subject matter

Reasoning Student ability to use knowledge to reason and solve problems

Skills Student ability to demonstrate achievement-related skills

Products Student ability to create achievement-related products

Affect/Disposition Student attainment of affective states such as attitudes, values, interests and self-efficacy

Principle 2: Appropriateness of Methods


Assessment Methods

Objective Supply: Short Answer, Completion
Objective Selection: Multiple Choice, Matching, True/False
Essay: Restricted Response, Extended Response
Performance-Based: Presentations, Papers, Projects, Athletics, Demonstrations, Exhibitions, Portfolios
Oral Question: Oral Examinations, Conferences, Interviews
Observation: Informal, Formal
Self-Report: Attitude Surveys, Sociometric Devices, Questionnaires

Learning Targets and their Appropriate Assessment Methods

Target     | Objective | Essay | Performance-Based | Oral Question | Observation | Self-Report
Knowledge  |     5     |   4   |         3         |       4       |      3      |      2
Reasoning  |     2     |   5   |         4         |       4       |      2      |      2
Skills     |     1     |   3   |         5         |       2       |      5      |      3
Products   |     1     |   1   |         5         |       2       |      4      |      4
Affect     |     1     |   2   |         4         |       4       |      4      |      5

Note: Higher numbers indicate better matches (e.g., 5 = high, 1 = low)

Modes of Assessment

Traditional
Description: the paper-and-pen test used in assessing knowledge and thinking skills
Examples: standardized and teacher-made tests
Advantages: scoring is objective; administration is easy because students can take the test at the same time
Disadvantages: preparation of the instrument is time-consuming; prone to guessing and cheating

Performance
Description: a mode of assessment that requires actual demonstration of skills or creation of products of learning
Examples: practical tests; oral and aural tests; projects, etc.
Advantages: preparation of the instrument is relatively easy; measures behavior that cannot be feigned
Disadvantages: scoring tends to be subjective without rubrics; administration is time-consuming

Portfolio
Description: a process of gathering multiple indicators of student progress to support course goals in a dynamic, ongoing, and collaborative process
Examples: working portfolios; show portfolios; documentary portfolios
Advantages: measures students' growth and development; intelligence-fair
Disadvantages: development is time-consuming; rating tends to be subjective without rubrics

Principle 3: Balance
 A balanced assessment sets targets in all domains of learning (cognitive, affective, and psychomotor) or domains of
intelligence (verbal-linguistic, logical-mathematical, bodily-kinesthetic, visual-spatial, musical-rhythmic, interpersonal-
social, intrapersonal-introspective, naturalist-physical world, existential-spiritual)
 A balanced assessment makes use of both traditional and alternative assessment

Principle 4: Validity
A. Validity – is the degree to which the assessment instrument measures what it intends to measure. It also refers to the
usefulness of the instrument for a given purpose. It is the most important criterion of a good assessment instrument.
Ways in Establishing Validity
1. Face Validity -- is done by examining the physical appearance of the instrument
2. Content Validity – is done through a careful and critical examination of the objectives of assessment so that it reflects the
curricular objectives
3. Criterion-related Validity – is established statistically such that a set of scores revealed by the measuring instrument is
correlated with the scores obtained in another external predictor or measure. It has two purposes: concurrent and predictive.
a. Concurrent validity – describes the present status of the individual by correlating the sets of scores obtained from
two measures given concurrently
b. Predictive validity – describes the future performance of an individual by correlating the sets of scores obtained
from two measures given at a longer time interval
4. Construct Validity – is established statistically by comparing psychological traits or factors that theoretically influence
scores in a test.
a. Convergent validity – is established if the instrument correlates with a measure of a similar trait other than the one it
is intended to measure, e.g., a critical thinking test may be correlated with a creative thinking test
b. Divergent validity – is established if the instrument describes only the intended trait and not other traits, e.g., a
critical thinking test may not be correlated with a reading comprehension test.
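Since criterion-related validity is established by correlating two sets of scores, it can be illustrated with a short computation. This is a minimal sketch in Python: the `pearson_r` helper is written from the standard product-moment formula, and the score lists are invented for illustration, not real test data.

```python
# Hedged sketch: criterion-related validity via Pearson r.
# The score lists below are made-up illustrations, not real test results.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Concurrent validity: scores on a new test vs. an established test
# taken at about the same time (hypothetical data).
new_test = [10, 14, 18, 22, 30]
criterion = [12, 15, 20, 21, 28]
print(round(pearson_r(new_test, criterion), 2))  # → 0.99
```

An r close to +1 would suggest strong agreement with the criterion measure; values near 0 would suggest the instrument does not track it. For predictive validity the criterion scores would simply be gathered after a longer time interval.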

Principle 5: Reliability
Reliability – refers to the consistency of scores obtained by the same person when retested using the same instrument or its parallel form.

Method – Type of Reliability Measure – Procedure – Statistical Measure

1. Test-Retest – measure of stability. Give a test twice to the same group, with any time interval between tests from several minutes to several years. (Pearson r)

2. Equivalent Forms – measure of equivalence. Give parallel forms of the test with a close time interval between forms. (Pearson r)

3. Test-Retest with Equivalent Forms – measure of stability and equivalence. Give parallel forms of the test with an increased time interval between forms. (Pearson r)

4. Split-Half – measure of internal consistency. Give a test once, then score equivalent halves of the test, e.g., odd- and even-numbered items. (Pearson r and the Spearman-Brown Formula)

5. Kuder-Richardson – measure of internal consistency. Give the test once, then correlate the proportion/percentage of students passing and not passing each item. (Kuder-Richardson Formula 20 and 21)
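The split-half and Kuder-Richardson procedures above can be sketched as short computations. This is an illustrative Python sketch with made-up 0/1 item data; the helper names are my own, and a real analysis would normally use a vetted statistics package.

```python
# Hedged sketch of two internal-consistency estimates (invented data).

def spearman_brown(r_half):
    """Step up a half-test correlation to full-test reliability."""
    return 2 * r_half / (1 + r_half)

def kr20(item_matrix):
    """Kuder-Richardson Formula 20.
    item_matrix: rows = students, columns = dichotomous (0/1) item scores."""
    k = len(item_matrix[0])                  # number of items
    totals = [sum(row) for row in item_matrix]
    n = len(totals)
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # total-score variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n       # proportion passing item j
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Split-half: if the odd/even half-scores correlate at 0.60,
# the estimated full-test reliability is about 0.75.
print(round(spearman_brown(0.6), 2))  # → 0.75

# KR-20 on a tiny hypothetical 4-student, 3-item score matrix.
items = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(round(kr20(items), 2))  # → 0.75
```

Note that both estimates come from a single administration, which is why these methods suit Ms. Alvin's situation in item 13 of Part II, where only one test sitting is available.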

Principle 6: Fairness
A fair assessment provides all students with an equal opportunity to demonstrate achievement. The keys to fairness are as
follows:
 Students have knowledge of learning targets and assessment
 Students are given equal opportunity to learn
 Students possess the pre-requisite knowledge and skills
 Students are free from teacher stereotypes
 Students are free from biased assessment tasks and procedures.

Principle 7: Practicality and Efficiency


When assessing learning, the information obtained should be worth the resources and time required to obtain it. The factors
to consider are as follows:
 Teacher Familiarity with the Method. The teacher should know the strengths and weaknesses of the method and how to
use them.
 Time Required. Time includes construction and use of the instrument and the interpretation of results. Other things being
equal, it is desirable to use the shortest assessment time possible that provides valid and reliable results.
 Complexity of the Administration. Directions and procedures for administration should be clear, requiring little time
and effort.
 Ease of Scoring. Use scoring procedures appropriate to your method and purpose. The easier the procedure, the more
reliable the assessment is.
 Ease of Interpretation. Interpretation is easier if there was a plan on how to use the results prior to assessment.
 Cost. Other things being equal, the less expense used to gather information, the better.

Principle 8: Continuity
 Assessment takes place in all phases of instruction. It could be done before, during and after instruction.
Activities Occurring Prior to Instruction
 Understanding students’ cultural backgrounds, interests, skills, and abilities as they apply across a range of learning domains
and/or subject areas;
 Understanding students’ motivations and their interests in specific class content;
 Clarifying and articulating the performance outcomes expected of pupils; and
 Planning instruction for individuals or groups of students
Activities Occurring During Instruction
 Monitoring pupil progress toward instructional goals;
 Identifying gains and difficulties pupils are experiencing in learning and performing;
 Adjusting instruction;
 Giving contingent, specific, and credible praise and feedback;
 Motivating students to learn; and
 Judging the extent of pupil attainment of instructional outcomes.
Activities Occurring After Appropriate Instructional Segment (e.g. lesson, class, semester grade)
 Describing the extent to which each student has attained both short- and long-term instructional goals;
 Communicating strengths and weaknesses based on assessment results to students, and parents or guardians;
 Recording and reporting assessment results for school-level analysis, evaluation, and decision-making;
 Analyzing assessment information gathered before and during instruction to understand each student’s progress to date and to
inform future instructional planning;
 Evaluating the effectiveness of instruction; and
 Evaluating the effectiveness of the curriculum and materials in use.

Principle 9: Authenticity
Features of Authentic Assessment
 Meaningful performance task
 Clear standards and public criteria
 Quality products and performance
 Positive interaction between the assessee and assessor
 Emphasis on meta-cognition and self-evaluation
 Learning that transfers
Criteria of Authentic Achievement (Burke, 1999)
1. Disciplined Inquiry – requires in-depth understanding of the problem and a move beyond knowledge produced by others to a
formulation of new ideas.
2. Integration of Knowledge – considers things as a whole rather than fragments of knowledge
3. Value Beyond Evaluation – what students do has some value beyond the classroom

Principle 10: Communication


 Assessment targets and standards should be communicated.
 Assessment results should be communicated to their important users
 Assessment results should be communicated to students through direct interaction or regular ongoing feedback on their
progress.

Principle 11: Positive Consequences


 Assessment should have a positive consequence to students; that is, it should motivate them to learn.
 Assessment should have a positive consequence on teachers; that is, it should help them improve the effectiveness of their
instruction

Principle 12: Ethics


 Teachers should free the students from harmful consequences of misuse or overuse of various assessment procedures, such as
embarrassing students and violating students’ right to confidentiality.
 Teachers should be guided by laws and policies that affect their classroom assessment.
 Administrators and teachers should understand that it is inappropriate to use standardized student achievement tests to measure
teaching effectiveness.

PERFORMANCE- BASED ASSESSMENT


Performance-based assessment is a process of gathering information about students’ learning through actual demonstration
of essential and observable skills and creation of products that are grounded in real world contexts and constraints. It is an assessment
that is open to many possible answers and judged using multiple criteria or standards of excellence that are pre-specified and public.

Reasons for Using Performance-Based Assessment


 Dissatisfaction with the limited information obtained from selected-response tests
 Influence of cognitive psychology, which demands not only declarative but also procedural knowledge
 Negative impact of conventional tests, e.g., high-stakes assessment, teaching to the test
 It is appropriate in experiential, discovery-based, integrated, and problem-based learning approaches
Types of Performance-based Task
1. Demonstration-type – this is a task that requires no product. Examples: constructing a building, cooking demonstrations,
entertaining tourists, teamwork, presentations
2. Creation-type – this is a task that requires tangible products. Examples: project plan, research paper, project flyers

Methods of Performance-based Assessment


1. Written open-ended – a written prompt is provided (Formats: essays, open-ended tests)
2. Behavior-based – utilizes direct observations of behaviours in situations or simulated contexts (Formats: structured—a
specific focus of observation is set at once; and unstructured—anything observed is recorded and analyzed)
3. Interview-based – examinees respond in one-to-one conference setting with the examiner to demonstrate mastery of the
skills (Formats: structured—interview questions are set at once; and unstructured—interview questions depend on the flow of
conversation)
4. Product-based – examinees create a work sample or a product utilizing the skills / abilities (Formats: restricted—products of
the same objective are the same for all students; and extended—students vary in their products for the same objective)
5. Portfolio-based – collections of works that are systematically gathered to serve many purposes

How to Assess a Performance


1. Identify the competency that has to be demonstrated by the students with or without a product.
2. Describe the task to be performed by the students either individually or as a group, the resources needed, time allotment and
other requirements to be able to assess the focused competency.
7 Criteria in Selecting a Good Performance Assessment Task
 Generalizability – the likelihood that the students’ performance on the task will generalize to comparable tasks.
 Authenticity – the task is similar to what the students might encounter in the real world as opposed to encountering only in
the school
 Multiple Foci -- the task measures multiple instructional outcomes
 Teachability – the task allows one to master the skill that one should be proficient in
 Feasibility – the task is realistically implementable in relation to its cost, space, time, and equipment requirements
 Scorability – the task can be reliably and accurately evaluated.
 Fairness – the task is fair to all the students regardless of their social statuses or gender.
3. Develop a scoring rubric reflecting the criteria, levels of performance and the scores.

PORTFOLIO ASSESSMENT
Portfolio Assessment is also an alternative to pen-and-paper objective test. It is a purposeful, ongoing, dynamic, and
collaborative process of gathering multiple indicators of the learner’s growth and development. Portfolio assessment is also
performance-based but more authentic than any performance-based task.

Reasons for Using Portfolio Assessment


Burke (1999) recognizes the portfolio as another type of assessment and considers it authentic because of the following
reasons:
 It tests what is really happening in the classroom.
 It offers multiple indicators of students’ progress.
 It gives the students responsibility for their own learning.
 It offers opportunities for students to document reflections of their learning.
 It demonstrates what the students know in ways that encompass their personal learning styles and multiple intelligences.
 It offers teachers a new role in the assessment process.
 It allows teachers to reflect on the effectiveness of their instruction.
 It provides teachers the freedom to gain insights into students’ development or achievement over a period of time.

Principles Underlying Portfolio Assessment


There are underlying principles of portfolio assessment: content, learning, and equity principles.
1. Content principle suggests that portfolios should reflect the subject matter that is important for the students to learn.
2. Learning principle suggests that portfolios should enable the students to become active and thoughtful learners.
3. Equity principle explains that portfolios should allow students to demonstrate their learning styles and multiple intelligences.

Types of Portfolios
Portfolios could come in three types: working, show, or documentary.
1. The working portfolio is a collection of student’s day-to-day works which reflect his/her learning
2. The show portfolio is a collection of student’s best works
3. The documentary portfolio is a combination of a working and a show portfolio

Steps in Portfolio Development

[Figure: a numbered cycle of portfolio-development steps; the legible labels include Set Goals, Collect, Select, and Organize.]
DEVELOPING RUBRICS
Rubric is a measuring instrument used in rating performance-based tasks. It is the “key to corrections” for assessment tasks
designed to measure the attainment of learning competencies that require demonstration of skills or creation of products of learning. It
offers a set of guidelines or descriptions for scoring different levels of performance or qualities of products of learning. It can be used in
scoring both the process and the products of learning.

Similarity of Rubric with Other Scoring Instruments

Rubric is a modified checklist and rating scale.


1. Checklist
 Presents the observed characteristics of a desirable performance or product
 The rater checks the trait/s that has/have been observed in one’s performance or product
2. Rating Scale
 Measures the extent or degree to which a trait has been satisfied by one’s work or performance
 Offers an overall description of the different levels of quality of a work or a performance
 Uses 3 to more levels to describe the work or performance although the most common rating scales have 4 or 5
performance levels

Below is a comparison of a rubric with a checklist and a rating scale: a checklist shows the observed traits of a work or
performance, a rating scale shows the degree of quality of a work or performance, and a rubric combines features of both.

Types of Rubrics
Holistic Rubric
Description: describes the overall quality of a performance or product; only one rating is given to the entire work or performance
Advantages: allows fast assessment; provides one score to describe the overall quality of work; can indicate the general strengths and weaknesses of the work or performance
Disadvantages: does not clearly describe the degree to which each criterion is satisfied; does not permit differential weighting of the qualities of a product or performance

Analytic Rubric
Description: describes the quality of a performance or product in terms of identified dimensions and/or criteria, which are rated independently to give a better picture of the quality of the work or performance
Advantages: clearly describes whether each criterion has been satisfied; permits differential weighting of the qualities of a product or performance; helps raters pinpoint specific areas of strength and weakness
Disadvantages: more difficult to construct; more time-consuming to use

Important Elements of a Rubric


Whether the format is holistic or analytic, the following information should be made available in a rubric.
 Competency to be Tested – this should be a behavior that requires either a demonstration or creation of products of learning
 Performance Task – the task should be authentic, feasible, and have multiple foci
 Evaluative Criteria and their Indicators – these should be made clear using observable traits
 Performance Levels – these levels could vary in number from 3 upward
 Qualitative and Quantitative Descriptions of each Performance Level – these descriptions should be observable and
measurable

Guidelines When Developing Rubrics


 Identify the important and observable features or criteria of an excellent performance or quality product
 Clarify the meaning of each trait or criterion and the performance levels.
 Describe the gradations of quality product or excellent performance
 Aim for an even number of levels to avoid the central tendency source of error.
 Keep the number of criteria reasonable enough to be observed or judged
 Arrange the criteria in the order in which they will likely be observed
 Determine the weight / points of each criterion and the whole work or performance in the final grade
 Put the descriptions of a criterion or a performance level on the same page
 Highlight the distinguishing traits of each performance level.
 Check if the rubric encompasses all possible traits of a work
 Check again if the objectives of assessment were captured in the rubric.
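The weighting guideline above (determining the points of each criterion in the final grade) can be sketched as a small scoring helper for an analytic rubric. The criteria names, weights, and four-level scale below are hypothetical examples, not prescribed values.

```python
# Hedged sketch: combining analytic-rubric ratings into one score.
# Criteria, weights, and the 4-level scale are invented for illustration.

RUBRIC = {                     # criterion -> weight (weights sum to 1.0)
    "content accuracy": 0.40,
    "organization": 0.30,
    "delivery": 0.30,
}
MAX_LEVEL = 4                  # four performance levels: 1 = lowest, 4 = highest

def weighted_score(ratings, rubric=RUBRIC, max_level=MAX_LEVEL):
    """Combine per-criterion level ratings into a single percentage score."""
    total = 0.0
    for criterion, weight in rubric.items():
        level = ratings[criterion]
        if not 1 <= level <= max_level:
            raise ValueError(f"rating for {criterion!r} out of range")
        total += weight * (level / max_level)   # weight each criterion's share
    return round(100 * total, 1)

print(weighted_score({"content accuracy": 4, "organization": 3, "delivery": 2}))
# → 77.5
```

Using an even number of levels, as the guidelines suggest, forces the rater off the midpoint and so avoids the central-tendency source of error.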

PART II – ANALYZING TEST ITEMS

Directions: Read and analyze each item and select the correct option that answers each question. Analyze the items using the first 5
items as your sample. Write only the letter of your choice in your answer sheet.

1. Who among the teachers described below is doing assessment?


a. Mrs. Bautista who is administering a test to her students.
b. Mr. Ferrer who is counting the scores obtained by the students in his test
c. Ms. Leyva who is computing the final grade of the students after completing all their requirements
d. Prof Cuevas who is planning for a remedial instruction after knowing that students perform poorly in her test.
Analysis:
The correct answer is C because computing the final grade involves gathering and organizing the students’ requirements into an
interpretable form (a grade) that serves as a basis for judging performance. Option A refers to testing, which is one of the techniques
used when assessing learning. Option B refers to measurement because it involves the quantification of data, such as counting the
scores obtained in a test. Option D refers to evaluation because it involves judgement (i.e., students perform poorly) and decision-
making (i.e., planning for remedial instruction).

2. Mr. Fernandez is judging the accuracy of these statements. Which statements will he consider as correct?
I. Test is a tool to measure a trait.
II. Measurement is the process of qualifying a given trait.
III. Assessment is the gathering of quantitative and qualitative data.
IV. Evaluation is the analysis of quantitative and qualitative data for decision making.
a. I and II only c. I, II, and III
b. III and IV only d. I, III and IV
Analysis:
The correct answer is D because the first, third, and fourth statements are correct. The first correctly describes a test, and the
third correctly describes assessment. The fourth is likewise a correct description of evaluation. Only the second statement is wrong,
because measurement is the process of quantifying, not qualifying, data.

3. If I have used the most authentic method of assessment, which of these procedures should I consider?
a. Traditional Test c. Written Test
b. Performance-based Assessment d. Objective Assessment
Analysis:
The correct answer is B because, among the four methods presented, only performance-based assessment requires actual
demonstration of skills or creation of products of learning, which simulates what we really need to do in real life. Options A, C,
and D are all pen-and-paper tests, which usually require only low-level thinking skills. What such exams capture is easily
forgotten after the exam because it is usually just memorized without being applied in real life.

4. After doing the exercise on verbs, Ms. Borillo gave a short quiz to find out how well the students have understood the lesson.
What type of assessment was done?
a. Summative Assessment c. Diagnostic Assessment
b. Formative Assessment d. Placement Assessment
Analysis:
The correct answer is B, formative assessment, since the purpose of the assessment is to find out what the students have
understood from the exercises about the lesson presented in the form of a quiz. The result of formative assessment gives immediate
feedback about the students’ learning for the day. Option A, summative test, covers a broad range of lessons usually in the form of
Final Test or Achievement Test. Option C, diagnostic test, aims to determine recurring problems that should be an input to remedial or
any follow up lesson. Option D, placement assessment, is more on determining the area or group a learner is most fit in order to
receive an appropriate instruction.

5. Who among the teachers below performed a diagnostic assessment?


a. Ms. Santos who asked questions when the discussion was going on to know who among her students understood
what she was trying to emphasize.
b. Mr. Colubong who gave a short quiz after discussing thoroughly the lesson to determine the outcome of instruction
c. Ms. Ventura who gave a 10-item test to find out the specific lessons which the students failed to understand
d. Mrs. Lopez who administered a readiness test to the incoming grade one pupils.
Analysis:
The correct answer is C, diagnostic assessment, since the purpose of the assessment is to find out what the students failed to
understand that would require remedial instruction. Options A and B are formative assessment while D is placement assessment.

6. You are assessing FOR learning. Which of these will you likely do?
a. Giving grades to students
b. Reporting to parents the performance of their child
c. Recommending new policies in grading students
d. Assessing the strengths and weaknesses of students

7. Ms. Saplan is planning to do an assessment OF learning. Which of these should she include in her plan considering her
purpose for assessment?
a. How to give immediate feedback to student’s strengths and weaknesses
b. How to determine the area of interest of students
c. How to certify student’s achievement
d. How to design one’s instruction

8. You targeted that after instruction, your students should be able to show their ability to solve problems with speed and
accuracy. You then designed a tool to measure this ability. What principle of assessment did you consider in this situation?
a. Assessment should be based on clear and appropriate learning targets of objectives
b. Assessment should have a positive consequence on student’s learning
c. Assessment should be reliable
d. Assessment should be fair.

9. Ms. Ortega tasked her students to show how to play basketball. What learning target is she assessing?
a. Knowledge c. Skills
b. Reasoning d. Products

10. Mr. Ravelas made an essay test for the objective “Identify the planets in the solar system.” Was the assessment method used
the most appropriate for the given objective? Why?
a. Yes, because essay test is easier to construct than objective test
b. Yes, because essay test can measure any type of objective
c. No, he should have conducted oral questioning
d. No, he should have prepared an objective test.

11. Mr. Cidro wants to test students’ knowledge of the different places in the Philippines, their capitals and their products, and so
he gave his students an essay test. If you were the teacher, would you do the same?
a. No, the giving of an objective test is more appropriate than the use of essay
b. No, such method of assessment is inappropriate because essay is difficult
c. Yes, essay test could measure more than what other tests could measure
d. Yes, essay test is the best in measuring any type of knowledge

12. What type of validity does the Pre-board Examination possess if its results can explain how the students will likely
perform in their Licensure Examination?
a. Concurrent c. Construct
b. Predictive d. Content

13. Ms. Alvin wants to determine if the students’ scores in their Final Test are reliable. However, she has only one set of the test and
her students are already on vacation. What test of reliability can she employ?
a. Test-Retest c. Equivalent Forms
b. Kuder Richardson Method d. Test-Retest with Equivalent Forms

Refer to this case in answering items 14-15


Two teachers of the same grade level have set the following objectives for the day’s lesson: At
the end of the period, the students should be able to:
A. Construct a bar graph;
B. Interpret bar graphs.

To assess the attainment of the objectives, Teacher A required the students to construct a bar
graph from a given set of data and then asked them to interpret it using a set of questions as
guide. Teacher B presented a bar graph and then asked the students to interpret it, also using
a set of guide questions.

14. Whose practice is acceptable based on the principles of assessment?
a. Teacher A c. Both Teacher A and B
b. Teacher B d. Neither Teacher A nor Teacher B

15. Which is true about the given case?
a. Objective A matched with performance-based assessment while B can be assessed using the traditional pen-and-
paper objective test
b. Objective A matched with traditional assessment while B can be assessed using a performance-based method
c. Both objective A and B matched with performance-based assessment
d. Both objective A and B matched with traditional assessment

16. In the context of the Theory of Multiple Intelligence, which is a weakness of the paper-pencil test?
a. It puts the non-linguistically intelligent at a disadvantage
b. It is not easy to administer
c. It utilizes so much time
d. It lacks reliability

17. Mr. Umayam is doing a performance-based assessment for the day’s lesson. Which of the following will most likely happen?
a. Students are evaluated in one sitting.
b. Students do an actual demonstration of their skill
c. Students are evaluated in the most objective manner
d. Students are evaluated based on varied evidences of learning

18. Ms. Despi rated her students in terms of appropriate and effective use of some laboratory equipment and measurement tools
and the students’ ability to follow the specified procedures. What mode of assessment did Ms. Despi use?
a. Portfolio Assessment
b. Journal Assessment
c. Traditional Assessment
d. Performance-Based Assessment

19. Mrs. Hilario presented the lesson on baking through a group activity so that the students will not just learn how to bake but
also develop their interpersonal skills. How should this lesson be assessed?
I. She should give the students an essay test explaining how they baked the cake
II. The students should be graded on the quality of their baked cake using a rubric
III. The students in the group should rate the members based on their ability to cooperate in their group activity.
IV. She should observe how the pupils perform their task.
a. I, II and III only c. I, II, IV only
b. II, III, and IV only d. I, II, III, and IV

20. If a teacher has set objectives in all domains of learning targets and which could be assessed using a single performance task,
what criterion in selecting a task should she consider?
a. Generalizability c. Multiple Foci
b. Fairness d. Teachability

21. Which term refers to the collection of students’ products and accomplishments in a given period for evaluation purposes?
a. Diary c. Anecdotal record
b. Portfolio d. Observation report

22. Mrs. Catalan allowed the students to develop their own portfolio in their own style as long as they show all the non-
negotiable evidences of learning. What principle in portfolio assessment explains this practice?
a. Content Principle c. Equity Principle
b. Learning Principle d. Product Principle

23. How should the following steps in portfolio assessment be arranged logically?
I. Set targets
II. Select evidences
III. Collect evidences
IV. Rate Collection
V. Reflect on Evidences
a. I, II, III, IV, V c. I, II, III, V, IV
b. I, III, II, V, IV d. I, III, V, II, IV

24. Which could be seen in a rubric?
I. Objectives in the higher levels of cognitive behaviour
II. Multiple criteria in assessing learning
III. Quantitative descriptions of the quality of work
IV. Qualitative descriptions of the quality of work
a. I and II only c. I, II, and III
b. II, III and IV only d. I, II, III, and IV

25. The pupils are to be judged individually on their mastery of the singing of the national anthem and so their teacher let them
sing individually. What should the teacher use in rating the performance of the pupils considering the fact that the teacher has
only one period to spend evaluating her 20 pupils?
a. Analytic c. Either holistic or analytic
b. Holistic d. Both holistic and analytic

PART III – ENHANCING TEST TAKING SKILLS

Directions: Enhance your test taking skills by answering the items below. Write only the letter of the best answer.

1. Mrs. Pua is judging the worth of the project of the students in her Science class based on a set of criteria. What process
describes what she is doing?
a. Testing c. Evaluating
b. Measuring d. Assessing

2. Mrs. Acebuche is differentiating measurement from evaluation. Which statement explains the difference?
a. Measurement is assigning a numerical value to a given trait while evaluation is giving meaning to the numerical
value of the trait
b. Measurement is the process of gathering data while assessment is the process of quantifying the data gathered
c. Measurement is the process of quantifying data while evaluation is the process of organizing data
d. Measurement is a pre-requisite of assessment while evaluation is the pre-requisite of testing

3. Ms. Ricafort uses alternative methods of assessment. Which of the following will she NOT likely use?
a. Multiple Choice Test c. Oral Presentation
b. Reflective Journal Writing d. Developing Portfolios

4. Ms. Camba aims to measure a product of learning. Which of these objectives will she most likely set for her instruction?
a. Show positive attitude towards learning common nouns
b. Identify common nouns in a reading selection
c. Construct a paragraph using common nouns
d. Use a common noun in a sentence

5. The students of Mrs. Valino are very noisy. To keep them busy, they were given any test available in the classroom and then
the results were graded as a way to punish them. Which statement best explains if the practice is acceptable or not?
a. The practice is acceptable because the students behaved well when they were given test
b. The practice is not acceptable because it violates the principle of reliability
c. The practice is not acceptable because it violates the principle of validity
d. The practice is not acceptable since the test results are graded

6. Ms. Delos Angeles advocates assessment FOR learning. Which will she NOT likely do?
a. Formative Assessment c. Placement Assessment
b. Diagnostic Assessment d. Summative Assessment

7. At the beginning of the school year, the six-year-old pupils were tested to find out who among them could already read. The
result was used to determine their sections. What kind of test was given to them?
a. Diagnostic c. Placement
b. Formative d. Summative

8. The grade six pupils were given a diagnostic test in addition and subtraction of whole numbers to find out if they can proceed
to the next unit. However, the results of the test were very low. What should the teacher do?
a. Proceed to the next lesson to be able to finish all the topics in the course
b. Construct another test parallel to the given test to determine the consistency of the scores
c. Count the frequency of errors to find out lessons that the majority of students need to learn
d. Record the scores then inform the parents about the very poor performance of their child in mathematics

9. Mrs. Nogueras is doing an assessment OF learning. At what stage of instruction should she do it?
a. Before instruction c. Prior to instruction
b. After instruction d. During the instructional process

10. Mr. Cartilla developed an Achievement Test in Math for his grade three pupils. Before he finalized the test, he examined
carefully if the test items were constructed based on the competencies that have to be tested. What test of validity was he
trying to establish?
a. Content validity c. Predictive validity
b. Concurrent validity d. Construct validity

11. Mrs. Robles wants to establish the reliability of her achievement test in English. Which of the following activities will help
achieve her purpose?
a. Administer two parallel tests to different groups of students
b. Administer two equivalent tests to the same group of students
c. Administer a single test but to two different groups of students
d. Administer two different tests but to the same group of students

Refer to the situation below in answering items 12 and 13.

A teacher set the following objectives for the day’s lesson:
At the end of the period, the students should be able to:
a. Identify the parts of a friendly letter;
b. Construct a friendly letter using MS Word; and
c. Show interest towards the day’s lesson
To assess the attainment of the objectives, Ms. Cidro required the students to construct a
friendly letter and have it encoded at their Computer Laboratory using MS Word. The
letter should inform one’s friend about what one has learned in the day’s lesson and how
one felt about it.

12. Which is NOT true about the given case?
a. Ms. Cidro practices a balanced assessment
b. Ms. Cidro’s assessment method is performance-based
c. Ms. Cidro needs a rubric in scoring the work of the students
d. Ms. Cidro’s assessment targets are all in the cognitive domain

13. If Ms. Cidro has to make a scoring rubric for the students’ output, what format is better to construct considering that
she has limited time to evaluate their work?
a. Analytic Rubric c. Either A or B
b. Holistic Rubric d. Neither A nor B

14. The school principal has 3 teacher applicants all of whom graduated from the same institution and are all licensed teachers.
She only needs to hire one. What should she do to choose the best teacher from the three?
I. Give them a placement test
II. Interview them on why they want to apply in the school
III. Let them demonstrate how to teach a particular lesson
IV. Study their portfolios to examine the quality of their outputs when they were in college
a. I and II c. I, III, and IV
b. II and III d. II, III and IV

15. What should be done first when planning for a performance-based assessment?
a. Determine the “table of specifications” of the tasks
b. Set the competency to be assessed
c. Set the criteria in scoring the task
d. Prepare a scoring rubric

16. To maximize the amount spent for performance-based assessment, which one should be done?
a. Plan a task that can be used for instruction and assessment at the same time.
b. Assess one objective for one performance task
c. Set the objectives only for cognitive domains
d. Limit the task to one meeting only

17. Who among the teachers below gave the most authentic assessment task for the objective “Solve word problems involving
the four basic operations”?
a. Mrs. Juliano who presented a word problem involving the four fundamental operations and then asked the pupils to
solve it
b. Mrs. Mandia who asked her pupils to construct a word problem for a given number sentence that involves four
fundamental operations and then asked them to solve the word problem they constructed
c. Mrs. Manalang who asked her pupils to construct any word problem that involves the four fundamental operations
and then asked them to show how to solve it
d. Mrs. Pontipedra who asked her pupils to construct any word problem that involves the four fundamental operations,
then grouped them in pairs so that each pair exchanged problems and helped solve each other’s problem

18. Which is WRONG to assume about traditional assessment?
a. It can assess individuals objectively
b. It can assess individuals at the same time
c. It is easier to administer than performance test
d. It can assess fairly all the domains of intelligence of an individual

19. Which statement about performance-based assessment is FALSE?
a. It emphasizes mere process
b. It also stresses doing, not only knowing
c. It accentuates on process as well as product
d. Essay tests are an example of performance-based assessments

20. Under which assumption is portfolio assessment based?
a. Portfolio assessment is a dynamic assessment
b. Assessment should stress the reproduction of knowledge
c. An individual learner is adequately characterized by a test score
d. An individual learner is inadequately characterized by a test score

21. Which is a good portfolio evidence of a student’s acquired knowledge and writing skills?
a. Project c. Reflective Journal
b. Test Results d. Critiqued Outputs

22. When planning for portfolio assessment, which should you do first?
a. Set the targets for portfolio assessment
b. Exhibit one’s work and be proud of one’s collection
c. Select evidences that could be captured in one’s portfolio
d. Reflect on one’s collection and identify strengths and weaknesses

23. Which kind of rubric is BEST to use in rating students’ projects done for several days?
a. Analytic c. Either holistic or analytic
b. Holistic d. Both holistic and analytic

24. Which is NOT TRUE of an analytic rubric?
a. It is time consuming
b. It is easier to construct than the holistic rubric
c. It gives one’s level of performance per criterion
d. It allows one to pinpoint the strengths and weaknesses of one’s work

25. Mrs. Bacani prepared a rubric with 5 levels of performance described as 5- excellent, 4- very satisfactory, 3- satisfactory, 2-
needs improvement, 1- poor. After using this rubric with these descriptions, she found out that most of her students had a
rating of 3. Even those who are evidently poor in their performance had a rating of satisfactory. Could there be a possible
error in the use of the rubric?
a. Yes, the teacher could have committed the generosity error
b. Yes, the teacher could have committed the central tendency source of error
c. No, it is just common to see more of the students having a grade of 3 in a 5-point scale
d. No, such result is acceptable as long as it has a positive consequence to students
