
ASSESSMENT OF STUDENT LEARNING 1
Course Requirements / Grading System:
Quizzes 35%
Major Exams 40%
Class Participation 15%
Assignments / Projects 10%
__________
100%
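
As a worked illustration of how these weights combine, here is a minimal sketch in Python; the component scores are hypothetical, and any rounding or transmutation rules applied in practice are not reproduced.

# Illustration only: hypothetical component averages (0-100) combined
# using the course weights listed above.
weights = {
    "Quizzes": 0.35,
    "Major Exams": 0.40,
    "Class Participation": 0.15,
    "Assignments / Projects": 0.10,
}
scores = {  # assumed averages for one student
    "Quizzes": 88,
    "Major Exams": 82,
    "Class Participation": 95,
    "Assignments / Projects": 90,
}
final_grade = sum(weights[c] * scores[c] for c in weights)
print(f"Final grade: {final_grade:.2f}")  # 30.8 + 32.8 + 14.25 + 9.0 = 86.85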
• Missed exam policy: A student is allowed to make up only two missed exams, provided that the reason is valid. If another exam is missed, the student will get a score of zero on that exam. The special exam is usually scheduled after the Finals and covers all topics of the entire course.

• Aside from academic deficiency, other grounds for a failing grade are: a. cheating during an examination; b. absence for 20% of the total number of class hours for one semester (dropped).
COURSE OUTLINE:
1 – Shift of Educational Focus from Content to Learning Outcomes
a. Outcomes-based Education: Matching Intentions with
Accomplishments
b. The Outcomes of Education
2 – Measurement, Assessment and Evaluation in Outcomes-Based
Education
a. Measurement
b. Assessment
c. Evaluation
3 – The Outcomes of Student Learning
a. Program objectives and Student Learning Outcomes
b. The Three Types of Learning
c. Domain 1: Cognitive (Knowledge)
d. Domain 2: Psychomotor (Skills)
e. Domain 3: Affective (Attitude)
4 – Assessing Student Learning Outcomes
a. Principles of Good Practice in Assessing Learning Outcomes
b. Samples of Supporting Student Activities
c. The Outcomes Assessment Phases in the Instructional Cycle
d. Variety of Assessment Instruments
• Objective Examinations
• Essay Examinations
• Written Work
• Portfolio Assessment
• Assessment Rubrics – Holistic and Dimensional / Analytical
• Competencies/Skills Assessment from Beginner to Proficiency Level
e. Assessment of Learning Outcomes in the K to 12 Program
5 – Development of Varied Assessment Tools: Knowledge and Reasoning
• Types of Objective Tests
• Planning a Test and Constructing a Table of Specifications (TOS)
• Constructing a True-False Test
• Multiple Choice Tests
• Matching Type and Supply Type Items
• Essays
6 – Item Analysis and Validation
• Item Analysis
• Validation
• Reliability

7 – Performance-Based Tests
• Introduction
• Performance-based Tests
• Performance Tasks
• Rubrics and Exemplars
• Creating Rubrics
• Automating Performance-based Tests
8 – Grading System
• Norm-referenced Grading
• Criterion-referenced Grading
• Four Questions in Grading
• What Should Go Into a Student's Grade
• Standardized Test Scoring
• Cumulative and Averaging Systems of Grading
• K to 12 Grading System: Reproduced from DepEd Order No. 31, s. 2012
• Alternative Grading System
I – SHIFT OF EDUCATIONAL FOCUS FROM CONTENT TO LEARNING
OUTCOMES:
Reduced to the barest components, the educative
process happens between the teacher and the
student. Education originated from the terms
“educare” or “educere,” which mean “to draw out.”
Ironically, however, for centuries we succeeded in
perpetuating the belief that education is a “pouring
in” process wherein the teacher was the infallible
giver of knowledge and the student was the passive
recipient.
The advent of technology caused a change of
perspective in education, nationally and
internationally. The teacher ceased to be the sole
source of knowledge. With knowledge explosion,
students are surrounded with various sources of facts
and information accessible through user-friendly
technology. The teacher has become a facilitator of
knowledge who assists in the organization,
interpretation and validation of acquired facts and
information.
Outcomes-Based Education: Matching Intentions with Accomplishments

The change in educational perspective is called Outcomes-Based Education (OBE), which has three characteristics:
1. It is student-centered; it focuses on student learning outcomes.
2. It is faculty-driven; it encourages faculty responsibility for teaching, assessing program outcomes and motivating participation from the students.
3. It is meaningful; it provides data to guide the teacher in making valid and
continuing improvement in instruction and assessment activities.
To implement outcomes-based education on the subject or
course level, the following procedure is recommended:
Identification of the educational objectives of the
subject/course.
Listing of learning outcomes specified for each subject/course objective, guided by Bloom's taxonomy of educational objectives, which is grouped into three domains:
• Cognitive (knowledge) – mental skills
• Psychomotor (skills) – manual or physical skills
• Affective (attitude) – growth in feelings or emotions
Drafting the outcomes assessment procedure. This procedure will enable the teacher to determine the degree to which the students are attaining the desired learning outcomes. It identifies, for every outcome, the data that will be gathered, which will guide the selection of the assessment tools to be used and at what point assessment will be done.
The Outcomes of Education

Two Types of Outcomes: Immediate and Deferred Outcomes

Immediate outcomes are competencies/skills acquired upon completion of a subject, a grade level, a segment of the program, or of the program itself.

Deferred outcomes refer to the ability to apply cognitive, psychomotor and affective skills/competencies in various situations many years after completion of a subject, grade level or degree program.
MEASUREMENT, ASSESSMENT AND EVALUATION IN OUTCOMES-BASED EDUCATION

Measurement is the process of determining or describing the attributes or characteristics of physical objects, generally in terms of quantity. In the field of education, the measurement process becomes difficult; hence, we need to specify the learning outcomes to be measured.
For instance, knowledge of the subject matter is often
measured through standardized test results. In this case, the
measurement procedure is testing (objective type).
The same concept can be measured in another way. We can ask
a group of experts to rate a student’s (or teacher’s) knowledge
of the subject matter on a scale of 1 to 5, with 1 being the lowest and 5 the highest. In this procedure, perception (subjective type) is used to measure knowledge of the subject matter.

Objective measurements are more stable than subjective measurements in the sense that repeated measurements of the same quantity or quality of interest will produce more or less the same outcomes.
An educational variable is a measurable characteristic of a student. Indicators are the building blocks of educational measurement upon which all other forms of measurement are built. A group of indicators constitutes a variable, and a group of variables forms a construct or a factor. The variables which form a
factor correlate highly with each other but have low correlations
with variables in another group.

In educational measurement, we shall be concerned with indicators, variables and factors of interest in the field of education.
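
As a minimal sketch of the indicator-variable-factor idea described above (all scores and groupings below are invented for illustration), indicators can be averaged into a variable, and the correlation between variables suggests whether they belong to the same factor:

# Illustration only: hypothetical indicator (item) scores for five students.
from statistics import correlation, mean  # requires Python 3.10+ for correlation()

quiz1  = [80, 85, 70, 90, 60]
quiz2  = [82, 88, 72, 91, 65]
essay1 = [75, 80, 68, 88, 58]
essay2 = [78, 83, 70, 90, 62]

# A "variable" is built from its indicators (here, a simple average per student).
knowledge = [mean(pair) for pair in zip(quiz1, quiz2)]
reasoning = [mean(pair) for pair in zip(essay1, essay2)]

# Variables that correlate highly with each other point to a common factor.
print(f"r(knowledge, reasoning) = {correlation(knowledge, reasoning):.2f}")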
Assessment is the process of gathering evidence of students' performance over a period of time to determine learning and mastery of skills. Assessment requires review of journal entries, written work, presentations, research papers, essays, stories written, test results, etc.
The overall goal of assessment is to improve student learning and to provide students, parents and teachers with reliable information regarding student progress and the extent of attainment of the expected learning outcomes. Assessment of skill attainment is relatively easier than assessment of understanding and other mental abilities.
Evaluation originates from the root word “value” and so when we
evaluate, we expect our process to give information regarding the
worth, appropriateness, goodness, validity or legality of something
for which a reliable measurement has been made.

Evaluation is a process designed to provide information that will help us make a judgment about a particular situation. The end result of evaluation is to adopt, reject or revise what has been evaluated.

Objects of evaluation include instructional programs, schools, projects, teachers, students, and educational goals. Evaluation involves data collection and analysis, using both quantitative and qualitative methods.
Evaluations are often divided into two broad
categories: FORMATIVE AND SUMMATIVE
Formative evaluation is a method of judging the worth of a program while the program activities are in progress. The focus is on the process.

Summative evaluation is a method of judging the worth of a program at the end of the program activities. The focus is on the result.
THE OUTCOMES OF STUDENT
LEARNING

Program Objectives and Student Learning Outcomes


In the past, teachers were subject-matter centered. Their concern for outcomes was secondary to the completion of the planned content for the subject. Program objectives, by contrast, state what competencies, knowledge or other characteristics the graduates should possess. For example, the graduate of the BEEd/BSEd program is one who has a full understanding of child development and who possesses the competency to apply such understanding in planning the methods and activities in class so that the pupils will show the desired learning outcomes. The previously mentioned objectives are two of the several learning objectives of the BEEd/BSEd program.

From the educational objectives, learning


outcomes may be drafted with a statement
opener such as “students can…”, completing the statement, whenever possible, with concrete, active verbs like: “demonstrate a wide range of teaching skills,” “apply learned theories in practice teaching,” and “illustrate alternative teaching methods.”
The Three Types of Learning
The three domains of educational objectives (cognitive, psychomotor and affective) are organized into categories or levels, from the simplest behavior to the most complex. To ensure that the learning outcomes are measurable, demonstrable and verifiable, the outcomes should be stated using concrete, active verbs.

Domain I: Cognitive (Knowledge)


Categories/Levels
1. Remembering – recall of previously learned information
2. Understanding – comprehending the meaning, translation and interpretation of instructions; stating a problem in one's own words
3. Applying – using what was learned in the classroom in similar new situations
4. Analyzing – separating materials or concepts into component parts to understand the whole
5. Evaluating – judging the value of an idea, object or material
6. Creating – building a structure or pattern; putting parts together

LEARNING OBJECTIVES ARRANGED HIERARCHICALLY (most complex at the top, simplest at the base):
Creating
Evaluating
Analyzing
Applying
Understanding
Remembering
Domain II: Psychomotor (Skills)
Categories/Levels
1. Observing – active mental attention to a physical activity
2. Imitating – attempt to copy a physical behavior
3. Practicing – performing a specific activity repeatedly
4. Adapting – fine-tuning the skill and making minor adjustments to attain perfection
Domain III: Affective (Attitude)
Categories/Levels
1. Receiving – being aware of or sensitive to something and being willing to listen or pay attention
2. Responding – showing commitment to respond in some measure to the idea or phenomenon
3. Valuing – showing willingness to be perceived as valuing or favoring certain ideas
4. Organizing – arranging values into priorities, creating a unique value system by comparing, relating and synthesizing values.
ASSESSING STUDENT LEARNING OUTCOMES

Outcomes assessment is the process of gathering information on whether the instruction, services and activities that the program provides are producing the desired student learning outcomes.

Principles of Good Practice in Assessing Learning Outcomes:


1. The assessment of student learning starts with the
institution’s mission and core values.
2. Assessment works best when the program has a clear statement of objectives aligned with the institutional mission and core values.
3. Outcomes-based assessment focuses on the student activities that will still be relevant after formal schooling concludes.
4. Assessment requires attention not only to outcomes but also and
equally to the activities and experiences that lead to the attainment
of learning outcomes.
5. Assessment works best when it is continuous, on-going and not
episodic.

Samples of Supporting Student Activities


Student Learning Outcome #1: Students can organize information from secondary sources as the basis of a research topic.
Student Learning Outcome #2: Students apply principles of logical thinking and persuasive argument in writing.
Student Learning Outcome #3: Students write multiple-page essays complying with standard format and style.

Variety of Assessment Instruments:


6. It is best to use a variety of assessment instruments or tools
when assessing student learning outcomes.
a. Objective Examinations (multiple choice, true or false, matching, simple recall). The advantage is that teachers are familiar with this format, although constructing high-quality questions may be difficult.
b. Essay examinations allow for student individuality and
expression although it may not cover an entire range of
knowledge.
c. Written work (reports, papers, research projects, reviews, etc.). The disadvantage is that plagiarism may occur and written work is difficult to quantify.
d. Portfolio Assessment. Portfolios may either be longitudinal
or best-case/thematic portfolio.
e. Assessment Rubrics. A rubric is an authentic assessment tool which measures students' work. Its three common characteristics are: (1) it emphasizes a stated objective; (2) performance is rated along a range; and (3) it includes specific performance characteristics arranged in levels or degrees to which a standard has been met.
Two major types of Rubrics:
• Holistic – covers the instrument as a whole; students receive an
overall score based on a pre-determined scheme. This is
criterion-based, with performance levels like most acceptable, very acceptable, barely acceptable and unacceptable.

• Dimensional/Analytic – yields sub-scores for each dimension, as well as a cumulative score which is the sum, either weighted or unweighted. It utilizes multiple indicators of quality for academic tasks that involve more than one level of skill or ability. (A small scoring sketch follows the samples note below.)
f. Competencies/skills Assessment from Beginner to Proficiency Level
Skills acquisition undergoes phases from beginner to proficiency
level. This may be illustrated in assessing cognitive and psychomotor
skills as demonstrated in the combination of “An Adaptation of the
Motor Skills acquisition” by Patricia Benner applied to the
“Assessment of Critical Thinking and of Technological Skills” by
Hernon and Dugan.

Please refer to the samples of holistic and dimensional/analytical rubrics and of competency/skills assessment.
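
As referenced under the dimensional/analytical rubric above, here is a minimal scoring sketch; the dimensions, ratings and weights are hypothetical, and setting all weights equal gives the unweighted case.

# Illustration only: hypothetical rubric dimensions, ratings (1-4) and weights.
ratings = {"Content": 4, "Organization": 3, "Mechanics": 2}
weights = {"Content": 0.5, "Organization": 0.3, "Mechanics": 0.2}

# Unweighted cumulative score: the plain sum of the sub-scores.
unweighted = sum(ratings.values())

# Weighted cumulative score: each sub-score multiplied by its dimension weight.
weighted = sum(weights[d] * ratings[d] for d in ratings)

print(f"Unweighted: {unweighted}, Weighted: {weighted:.1f}")  # Unweighted: 9, Weighted: 3.3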
Assessment of Learning Outcomes in the K to 12 Program
(DepEd Order No. 31, s. 2012)

The assessment process is holistic, with emphasis on the


formative or developmental purpose of quality assurance in
student learning. It is also standards-based as it seeks to ensure
that teachers will teach according to the standards and
students will aim to meet or even exceed the standards. The
students’ attainment of standards in terms of content and
performance is, therefore, critical evidence of learning.
The assessment shall be done at four levels which are an
adaptation of the cognitive levels for learning. Weights are
assigned to the levels.
Level of Assessment        Percentage Weight
Knowledge                  15%
Process or Skills          25%
Understanding(s)           30%
Products/Performances      30%
Total                      100%
The levels are defined as follows:
Knowledge – refers to the substantive content of the curriculum.
The facts and information that the student acquires.
Process – refers to cognitive operations that the student performs
on facts and information for the purpose of constructing
meanings and understandings. This level is assessed
through activities or tests of analytical ability.
Understandings – refer to enduring big ideas, principles and
generalizations inherent to the disciplines, which may be
assessed using the facets of understanding. Assessment at
this level should require the ability to synthesize, generalize
and judge accordingly.
Products/Performances – refer to real-life application of
understanding as evidenced by the student’s performance
of authentic tasks. At this level students are expected to
be able to apply what has been learned in contrived or
real situations.
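
As a worked illustration of the weights in the table above, the sketch below combines hypothetical level scores into a single rating; the scores are assumptions, and the transmutation to the level descriptors prescribed by DepEd Order No. 31, s. 2012 is not reproduced here.

# Illustration only: hypothetical percentage scores per assessment level,
# combined using the weights shown in the table above.
weights = {
    "Knowledge": 0.15,
    "Process or Skills": 0.25,
    "Understanding(s)": 0.30,
    "Products/Performances": 0.30,
}
scores = {  # assumed class standing per level (0-100)
    "Knowledge": 90,
    "Process or Skills": 85,
    "Understanding(s)": 80,
    "Products/Performances": 88,
}
final_rating = sum(weights[k] * scores[k] for k in weights)
print(f"Final rating: {final_rating:.2f}")  # 13.5 + 21.25 + 24.0 + 26.4 = 85.15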
DEVELOPMENT OF VARIED ASSESSMENT TOOLS: KNOWLEDGE
AND REASONING
Types of Objective Tests:
a) True-False items
b) Multiple-choice type items
c) Matching items
d) Enumeration and Filling of Blanks
e) Essays
The first four types are used to test the first four to five levels of educational objectives, while the last (essays) is used for testing higher-order thinking skills (HOTS).

Development of objective tests requires careful planning and expertise


in terms of actual test construction. Essays are easier to construct than the other types of tests, but the difficulty with which objective grades are derived from essay examinations often discourages teachers from using this particular form of examination in actual practice.
Planning a Test and Construction of a Table of Specifications (TOS)

The important steps in planning for a Test are:


• Identifying test objectives
• Deciding on the type of objective test to be prepared
• Preparing a Table of Specifications (TOS)
• Constructing the draft test items
• Try-out and validation
Identifying Test Objectives.
An objective test, if it is to be comprehensive, must cover the
various levels of Bloom’s Taxonomy. Each objective consists of a
statement of what is to be achieved and, preferably, by what percentage of the students.

Deciding on the type of objective test.


The test objectives guide the kind of objective test that will be
designed and constructed by the teacher. For instance, for the
first 4 levels, we may want to construct a multiple-choice type of
test while for application and judgment, we may opt to give an
essay test or a modified essay test.
Preparing a Table of Specifications (TOS)
A TOS is a test map that guides the teacher in constructing a test. It
ensures that there is a balance between items that test lower level
thinking skills and those which test higher order thinking skills (or a
balance between easy and difficult items) in the test. (see samples)
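
One common way to fill in a TOS is to allocate the test items to the cognitive levels in proportion to the emphasis each level received during instruction. The sketch below is illustrative only; the emphasis percentages and the 40-item test length are assumptions, not prescriptions.

# Illustration only: allocate items across cognitive levels in proportion
# to an assumed instructional emphasis for each level.
emphasis = {
    "Remembering": 0.20,
    "Understanding": 0.25,
    "Applying": 0.25,
    "Analyzing": 0.15,
    "Evaluating": 0.10,
    "Creating": 0.05,
}
total_items = 40  # hypothetical test length

tos = {level: round(share * total_items) for level, share in emphasis.items()}
for level, items in tos.items():
    print(f"{level:<13} {items} items")
# Rounding can leave the total slightly off the target; adjust the largest cell if needed.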
Constructing the Test Items
The actual construction of the test items follows the TOS. As a
general rule, it is advised that the actual number of items to be
constructed in the draft should be double the desired number of items.
Item Analysis and Try-out
The test draft is tried out on a group of pupils or students. The purpose of this is to determine: a) the item characteristics through
item analysis; and b) characteristics of the test itself – validity,
reliability, and practicality.
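
Two standard item statistics computed at this stage are the difficulty index (the proportion of examinees who answered the item correctly) and the discrimination index (the proportion correct in the upper group minus that in the lower group). The sketch below uses hypothetical responses, and the upper/lower grouping (commonly the top and bottom 27%) is an assumption.

# Illustration only: hypothetical 1/0 (correct/incorrect) responses to one item.
upper_group = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # assumed top 27% of examinees
lower_group = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # assumed bottom 27% of examinees

# Difficulty index: proportion of the examinees (here, both groups) who got the item right.
difficulty = (sum(upper_group) + sum(lower_group)) / (len(upper_group) + len(lower_group))

# Discrimination index: upper-group proportion correct minus lower-group proportion correct.
discrimination = sum(upper_group) / len(upper_group) - sum(lower_group) / len(lower_group)

print(f"Difficulty index:     {difficulty:.2f}")      # 11/20 = 0.55
print(f"Discrimination index: {discrimination:.2f}")  # 0.80 - 0.30 = 0.50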
Constructing a True-False Test
Binomial-Choice tests are tests that have only 2 options such as
true or false, right or wrong, good or better and so on. Although
correction formulas exist, it is best that the teacher ensures that a
true-false item is able to discriminate properly between those who
know and those who are just guessing. A modified true-false test
can offset the effect of guessing by requiring students to explain
their answer and to disregard a correct answer if the explanation is
incorrect.
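
One widely used correction-for-guessing formula, given here as a sketch and not necessarily the one intended by the text, is R - W/(k - 1), where R is the number of right answers, W the number of wrong answers, and k the number of options per item; for a true-false item (k = 2) it reduces to R - W. The numbers below are hypothetical.

# Illustration only: standard correction-for-guessing formula, R - W / (k - 1).
def corrected_score(rights: int, wrongs: int, options: int = 2) -> float:
    """For true-false items (options = 2) this reduces to rights - wrongs."""
    return rights - wrongs / (options - 1)

# Hypothetical case: 40 items, 30 right, 8 wrong, 2 omitted.
print(corrected_score(rights=30, wrongs=8, options=2))  # 30 - 8 = 22.0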

Rules in Constructing a True-False Test


1. Do not give a hint (inadvertently) in the body of the question.
2. Avoid using the words “always”, “never”, “often”, and other
adverbs that tend to be either always true or always false.
3. Avoid long sentences as these tend to be “true.” Keep sentences
short.
4. Avoid trick statements with some minor misleading word or spelling
anomaly, misplaced phrases, etc. A wise student who does not know
the subject matter may detect this strategy and thus get the answer right.
5. Avoid quoting verbatim from reference materials or textbooks. This
practice sends the wrong signal to the students that it is necessary to memorize the textbook word for word and, thus, the acquisition of higher-level thinking skills is not given due importance.
6. Avoid specific determiners or give-away qualifiers. Students quickly
learn that strongly worded statements are more likely to be false than
true, for example, statements with never, no, all, or always.
Moderately worded statements are more likely to be true than false.
Statements with qualifiers such as many, often, sometimes, generally or frequently are more likely to be true.
