
PRAYER

ENERGIZER
JULY AN SERRANO
CLASSROOM RULES
RESPECT
RESPONSIVE
RESPONSIBLE
GREETINGS
LESSON OBJECTIVES
At the end of the lesson, the students can:

• Explain the basic concepts and principles in educational assessment;

• Discuss the role of assessment in making instructional decisions to improve teaching and learning; and

• Reflect on and discuss the applications and implications of assessment to teaching and learning.
PRE-ACTIVITY
K.W.L

What I KNOW What I WANT to know What I LEARNED


UNIT 1
PRELIMINARY CONCEPTS AND
RECENT TRENDS

Prepared by:

Bongcayao, Dianne Grace


Casano, Fretchie Ann P.
Gopita, Jake A.
Pagaling, Jemverson
Polinar, Sim Angelo
Serrano, July An
Solmia, Neslyn
What is
educational
assessment?
Educational assessment seeks to
determine how well students are learning
and is an integrated part of the quest for
improved education. It provides feedback
to students, educators, parents,
policymakers, and the public about the
effectiveness of educational services
(National Research Council).
What are the principles and
indicators of assessment of
student learning?
Principle 1: The Primary Purpose of Assessment is to Improve Student
Learning.
Principle 2: Assessment for Other Purposes Supports Student Learning.
Principle 3: Assessment Systems Are Fair to All Students.
Principle 4: Professional Collaboration and Development Support
Assessment.
Principle 5: The Broad Community Participates in Assessment
Development.
Principle 6: Communication About Assessment Is Regular and Clear.
Principle 7: Assessment Systems Are Regularly Reviewed and Improved.
Principle 1: The Primary Purpose
of Assessment is to Improve
Student Learning.
• Assessment systems provide useful
information about whether students have
reached important learning goals and about the
progress of each student. They employ practices
and methods that are consistent with learning
goals, curriculum, instruction, and current
knowledge of how students learn.
Principle 2: Assessment for
Other Purposes Supports
Student Learning.
• Assessment systems report on and certify student
learning and provide information for school
improvement and accountability by using practices
that support important learning. Important
decisions, such as high school graduation, are made
on the basis of information gathered over time, not
on a single assessment.
Principle 3: Assessment
Systems Are Fair to All
Students.
• Assessment systems, including instruments,
policies, practices, and uses, are fair to all students.
Assessment systems ensure that all students
receive fair treatment in order not to limit students'
present and future opportunities.
Principle 4: Professional
Collaboration and Development
Support Assessment.
• Knowledgeable and fair educators are essential
for high-quality assessment. Assessment systems
depend on educators who understand the full range
of assessment purposes, use appropriately a variety
of suitable methods, work collaboratively, and
engage in ongoing professional development to
improve their capability as assessors.
Principle 5: The Broad
Community Participates in
Assessment Development.
• Assessment systems draw on the community's
knowledge and ensure support by including
parents, community members, and students
together with educators and professionals with
particular expertise in the development of the
system.
Principle 6: Communication About
Assessment Is Regular and Clear.
• Educators in schools, districts, and states clearly and
regularly discuss assessment system practices and
student and program progress with students, families,
and the community. Educators and institutions
communicate in ordinary language the purposes,
methods, and results of assessment. They focus on
reporting on what students know and are able to do,
what they need to learn to do, and what will be done
to facilitate improvement. They report achievement
data in terms of agreed-upon learning goals.
Principle 7: Assessment Systems
Are Regularly Reviewed and
Improved.
• Assessment systems are regularly reviewed and
improved to ensure that they are educationally
beneficial to all students. Assessment systems must
evolve and improve. Even well-designed systems must
adapt to changing conditions and increased
knowledge. Reviews are the basis for making decisions
to alter all or part of the assessment system.
Reviewers include stakeholders in the education
system and independent expert analysts.
Types of Classroom
Assessment
Assessment for
Learning (Formative
Assessment)
This is used by the teacher to find out the extent of what you know and what you can do, and thereby see the gaps you might have. It is also referred to as formative assessment; its results serve as evidence of whether you have achieved the learning targets set by the teacher.

Examples of Formative Assessment:
• Quizzes, pre-tests, and post-tests
Assessment of
Learning (Summative
Assessment)
It is usually given toward the end of a course or a unit in a semestral term. It refers to strategies designed to confirm what students know, determine whether they have met curriculum outcomes or the goals of their individualized programs, or certify proficiency and make decisions about students' future or placement.

Examples of Summative Assessment:
• Pre-final, midterm, and final examinations
comparing assessment for learning &
assessment of learning
Assessment FOR Learning (FA)
1. Checks learning to determine what to do next, and then provides suggestions of what to do; teaching and learning are indistinguishable from assessment.
2. Usually uses detailed, specific descriptive feedback in a formal or informal report.
3. Is used continually by providing descriptive feedback.

Assessment OF Learning (SA)
1. Checks what has been learned to date.
2. Usually compiles data into a single number, score, or mark as part of a formal report.
3. Is presented in a periodic report.
Assessment as
Learning
Assessment as learning develops and supports students' metacognitive skills. As students engage in peer and self-assessment, they learn to make sense of information, relate it to prior knowledge, and use it for new learning. Students develop a sense of ownership and efficacy when they use teacher, peer, and self-assessment feedback to make adjustments, improvements, and changes to what they understand.
Purpose of Educational
Assessment
Assessment is used to:

Inform and guide teaching and learning.
A good classroom assessment plan gathers evidence of student learning that informs teachers' instructional decisions. It provides teachers with information about what students know and can do.

Help students set learning goals.
Students need frequent opportunities to reflect on where their learning is at and what needs to be done to achieve their learning goals.
Purpose of Educational
Assessment
Assessment is used to:

Assign report card grades.
Grades provide parents, employers, other schools, governments, post-secondary institutions, and others with summary information about student learning.

Motivate students.
Research (Davies, 2004; Stiggins et al., 2004) has shown that students will be motivated and confident learners when they experience progress and achievement, rather than the failure and defeat associated with being compared to more successful peers.
The Assessment
Process
An effective classroom assessment:
• Addresses specific outcomes in the program of studies.
• Shares intended outcomes and assessment criteria with students prior to the assessment activity.
• Assesses before, during, and after instruction.
• Employs a variety of assessment strategies to provide evidence of student learning.
• Provides frequent and descriptive feedback to students.
• Ensures students can describe their progress and achievement and articulate what comes next in their learning.
• Informs teachers and provides insight that can be used to modify instruction.
MEASUREMENT
• Thorndike and Hagen (1960) define measurement as "the
process of quantifying observations and/or descriptions
about quality or attribute of a thing or person."

Step 1: Identifying and defining the quality or attribute that is to be measured.
Step 2: Determining a set of operations by which the attribute may be made manifest and perceivable.
Step 3: Establishing a set of procedures or definitions for translating observations into a quantitative statement of degree or amount.
MEASUREMENT
• McMillan (1997) stated that measurement involves using observation, rating scales, or any other non-test device that secures information in a quantitative form.

• Gredler (1997) defined measurement as the process of making empirical observations of some attribute, characteristic, or phenomenon and translating those observations into quantifiable or categorical form according to clearly specified procedures or rules.

• Educational measurement refers to the process of determining a quantitative or qualitative academic attribute of an individual or group of individuals.
Testing
• A test refers to a tool, technique, or method intended to measure students' knowledge or their ability to complete a particular task. In this sense, testing can be considered a form of assessment. Tests should meet some basic requirements, such as validity and reliability.
Testing
1.) Standardized Testing
- is the process of trying out the test on a group of people to see the scores which are typically obtained.
- is a test administered and scored in a consistent manner (Popham, 2003).
- refers to tools designed to measure student performance relative to all others taking the same test.

Types of Standardized Testing
(1.1) Norm-referenced testing - It measures performance relative to all other students taking the same test.
(1.2) Criterion-referenced testing - It measures factual knowledge of a defined body of material (e.g., a multiple-choice exam).
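The difference between the two types can be illustrated with a short sketch: a norm-referenced result reports where a score falls relative to the other test-takers, while a criterion-referenced result compares the score to a fixed cut score. The scores, cut score, and function names below are hypothetical, for illustration only.

```python
def percentile_rank(score, all_scores):
    # Norm-referenced view: performance relative to everyone
    # who took the same test.
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

def meets_criterion(score, cut_score=75):
    # Criterion-referenced view: performance against a fixed
    # standard, regardless of how other students did.
    return score >= cut_score

scores = [55, 60, 64, 68, 72, 75, 82, 88, 90, 95]
print(percentile_rank(82, scores))  # standing among peers
print(meets_criterion(82))          # pass/fail against the cut score
```

The same raw score can look strong or weak under norm-referencing depending on the group, but its criterion-referenced verdict depends only on the cut score.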
Testing
2.) High Stakes Testing
- are tests used to make important decisions about
students. These include whether students should be
promoted, allowed to graduate, or admitted to
programs.
- designed to measure whether or not content and
performance standards established by the state have
been achieved.
- testing becomes high stakes when the outcomes are
used to make decisions about promotion, admissions,
graduation, and salaries.
Major Theories Underlying
Test-Based Accountability on
High-stakes Tests
1. Motivational theory - is the predominant theory underlying test-based accountability.

2. The theory of alignment - holds that system-wide improvement is most likely to occur if educators align the major components of the educational system (standards, curriculum, and assessments) surrounding schools so that they reinforce each other.

3. Information theory - maintains that student performance data are useful for teachers and administrators to make decisions about students and programs, and that providing such data to local educators and giving them incentives to improve their performance will guide classroom and organizational decision-making.
Major Theories Underlying
Test-Based Accountability on
High-stakes Tests

4. Symbolism theory - has also contributed to the growth and prevalence of high-stakes testing. In this model, the accountability system is seen to signal important values to stakeholders and, in particular, the public.
EVALUATION
•Evaluation
- is a process of summing up the results of
measurements or tests, giving them some meaning
based on value judgments (Hopkins and Stanley, 1981).

•Educational evaluation
- is the process of characterizing and appraising some
aspect or aspects of an educational process. It is a
systematic determination of merit, worth, and
significance of something or someone using criteria
against a set of standards.
- is a professional activity that individual educators
need to undertake if they intend to continuously review
and enhance the learning they are endeavoring to
facilitate.
Distinctions of Tests
1. Objective Test and Subjective Test
a. Objective Test.
It is a type of test in which two or more evaluators give an examinee the
same score because the answer is specific.
b. Subjective Test.
It is a type of test in which the scores are influenced by the judgment of the
evaluators because the answer is not specific.

2. Supply Test and Fixed-Response Test


a. Supply Test.
It is a type of test that requires the examinees to supply an answer, such as
an essay test, completion test or short answer test.
b. Fixed-Response Test.
It is a type of test that requires the examinees to select an answer from a
given option such as multiple-choice test, matching type test, or true-false
test.
3. Mastery Test and Survey Test
a. Mastery Test.
This type of achievement test measures the degree of mastery of a limited set of learning outcomes, using criterion-referencing to interpret the result.
b. Survey Test.
This type of test measures students' general achievement over a broad range of learning outcomes, using norm-referencing to interpret the result.

4. Speed Test and Power Test
a. Speed Test.
It is designed to measure the number of items an individual can complete over a certain period of time.
b. Power Test.
It is designed to measure the level of performance rather than the speed of response.
5. Standardized Test and Teacher-Made Test/Non-Standardized Test
a. Standardized Test.
This test provides exact procedures for controlling the method of administration and scoring, with norms and data concerning the reliability and validity of the test.
b. Teacher-Made Test/Non-Standardized Test.
This test is prepared by classroom teachers based on the contents stated in the syllabi and the lessons taken by students.

6. Achievement Test and Aptitude Test
a. Achievement Test.
It is designed to measure the knowledge and skills students learned in school or to determine the academic progress they have made over a period of time.
b. Aptitude Test.
Aptitude tests are "forward-looking" in that they typically attempt to forecast or predict how well students will do in a future educational or career setting.
7. Diagnostic Test and Placement Test
a. Diagnostic Test.
It helps teachers and learners to identify strengths and
weaknesses.
b. Placement Test.
It is designed to help educators place a student into a
particular level or section of a language curriculum or
school.
Some Other Types of Tests
1. Intelligence Test.
This test measures the Intelligence Quotient (IQ) of an individual, classifying it as genius, very superior, high average, average, low average, borderline, or mentally defective.
2. Personality Test.
This test measures the ways in which an individual interacts with other individuals, or the roles an individual has assigned to himself and how he adapts in society.
3. Prognostic Test.
It is a test designed to predict how well one is likely to do in a language course.
Some Other Types of Tests
4. Performance Test.
It is a measure that requires the examinee to accomplish a learning task involving minimal verbal accomplishment or none at all.
5. Preference Test.
This test measures the vocational or academic interests of an individual, or aesthetic judgments, by forcing the examinee to make choices between members of paired items or groups of items.
6. Scale Test.
This test is a series of items arranged in order of difficulty.
HIGH QUALITY
ASSESSMENT
COMPONENTS
HIGH QUALITY ASSESSMENT
COMPONENTS
High quality assessment takes the
massive quantities of
performance data and translates
that into meaningful, actionable
reports that pinpoint current
student progress, predict future
achievement, and inform
instruction.
HIGH QUALITY ASSESSMENT
COMPONENTS
1. CLEAR PURPOSE
The purpose of assessment is to gather
relevant information about student
performance or progress, or to determine
student interests to make judgments
about their learning process.
HIGH QUALITY ASSESSMENT
COMPONENTS
2. CLEAR AND APPROPRIATE
LEARNING TARGETS
• Assessment should be clearly stated and
specified and centered on what is truly
important.
•Assessment can be made precise,
accurate and dependable only if what are
to be achieved are clearly stated and
feasible.
Cognitive Domain
Bloom's Taxonomy is a hierarchical ordering of cognitive
skills that can help teachers teach and students learn.
Bloom's Taxonomy was created by Benjamin Bloom in
1956.
The framework was revised in 2001 by Lorin Anderson
and David Krathwohl, yielding the revised Bloom's
Taxonomy. The most significant change was the removal
of "Synthesis" and the addition of "Creation" as the
highest level of Bloom's Taxonomy.
Cognitive Domain
Revised Bloom’s Taxonomy
Psychomotor Domain
The psychomotor
domain includes
physical movement,
coordination, and
use of the motor-
skill areas.
Development of
these skills requires
practice and is
measured in terms
of speed, precision,
distance,
procedures, or
techniques in
execution.
Affective Domain
Affective learning is
demonstrated by behaviors
indicating attitudes of
awareness, interest,
attention, concern, and
responsibility, ability to
listen and respond in
interactions with others,
and ability to demonstrate
those attitudinal
characteristics or values
which are appropriate to
the test situation and the
field of study.
HIGH QUALITY ASSESSMENT
COMPONENTS
3. APPROPRIATE METHODS
Assessment methods are techniques,
strategies, tools and instruments for
collecting information to determine the
extent to which the students
demonstrate the desired learning
outcomes.
Common Methods in
Assessing Cognitive
Learning Targets
1. Written-Response Instrument
a. Essay Test. It gives students a chance to organize, evaluate, and think, and is therefore often very effective for measuring how well students have learned.

b. Objective Test. This test requires students to select the correct response from several alternatives or to supply a word or short phrase to answer a question or complete a statement.
Common Methods in
Assessing Cognitive
Learning Targets
i. Multiple-Choice Test. It is the most versatile and useful type, capable of testing, among other things, the ability to interpret diagrams, sketches, tables, graphs, and related material.
ii. Matching-Type Test. It is useful in testing recognition of the relationships between pairs of words, or between words and definitions.
iii. Short-Answer Test. It allows for greater specificity in testing while still providing some opportunity for student creativity.
iv. Completion Test. These questions usually consist of sentences in which one or more key words have been left blank for students to complete.
v. True-False Test. It is easy to write and grade and is used only for testing factual recall.
Common Methods in
Assessing Cognitive
Learning Targets
2. Oral Questioning. This method involves the teacher probing students to think about what they know regarding a topic.

a. Open-Ended Questions. These are questions that allow someone to give a free-form answer.
b. Closed-Ended Questions. These can be answered with "Yes" or "No," or they have limited sets of possible answers.
Common Methods in Assessing
Affective Learning Targets
1. Self-Report. It essentially requires an individual to provide an account of his attitude or feelings toward a concept, idea, or people. It is also called "written reflections".

2. Semantic Differential (SD) Scale. Semantic Differential scales try to assess an individual's reaction to specific words, ideas, or concepts in terms of ratings on bipolar scales defined with contrasting adjectives at each end.
Common Methods in Assessing
Affective Learning Targets
3. Thurstone Scale. Thurstone is considered the father of attitude measurement; his scale addresses the issue of how favorable an individual is with regard to a given issue.

4. Likert Scale. This requires individuals to tick a box to report whether they "strongly agree", "agree", are "undecided", "disagree", or "strongly disagree" in response to a large number of items concerning an attitude object.

5. Checklist. It consists of simple items that the student or teacher marks as "absent" or "present".
Common Methods in Assessing
Psychomotor Learning Targets
1. Performance Test. Performance test is a form of testing that requires students to perform a task
rather than select an answer from a ready-made list. For example, a student may be asked to generate
scientific hypotheses, solve math problems, or conduct research on an assigned topic.

2. Observation. Observation is a process of systematically viewing and recording students while they
work, for the purpose of making programming and instruction decisions. Observation can take place at
any time and in any setting.

3. Product Rating Scale. A product rating scale is a tool used for assessing end products of the
performance usually in the form of projects.
ADEQUATE
SAMPLING
Sampling facilitates the assessment process when programs/classes have large numbers of students and it is not feasible to assess all students. Furthermore, sampling may be useful when assessing artifacts (objects created by students during the course of instruction, which must be lasting, durable, public, and materially present) that take a long time to review.
Census vs. Sampling
Assessing the entire population is called a census whereas assessing
only part of the population is called a sample.
Example of Using a Census:
• An Honors section of Music Appreciation ends the course with four students, each of whom is required to write a 10-15 page paper. All four of the course's outcomes are to be assessed by the paper using a rubric. An evaluation group reads all four student papers.
Example of Using a Sample:
• The English Department runs five sections of Critical Thinking Through Argument involving 98 students. Two of the course's four outcomes are to be assessed by an 8-10 page paper scored by a rubric. The English Department selects 20 papers randomly from the five sections.
Sampling Procedures
Before evaluating artifacts or data for the SLO, you must:
1. Decide whether you will use a sample or the whole population.
2. Choose an appropriate sample size based on percentage, artifact size, and complexity.
3. Choose an appropriate sampling method.

Determining Sample Size
Whether or not to sample, and the size of the sample, depend on three factors, all of which must be kept in mind when making sampling decisions:
1. Length and complexity of the assignments. If the assignment or artifact is at a capstone level (e.g., a research project), then a smaller percentage of students might be chosen.
2. The number of students in the class. If your class has fewer than 100 students, then you should consider using a larger percentage or the entire population.
3. The number of teachers serving as evaluators. If the school has only three teachers as evaluators, then a smaller sample size would be more appropriate, depending on the complexity of the assignment.
Common Types of Sampling
There are a variety of sampling methods. Simple random, stratified, systematic, and cluster sampling are four common and appropriate sampling methods for institutional assessment activities.

1. Simple Random Sampling. You randomly select a certain number of students or artifacts.
2. Stratified Sampling. Students are sorted into homogeneous groups, and then a random sample is selected from each group.
3. Systematic Sampling. You select every nth (e.g., 7th, 9th, 20th) student or artifact from a list.
4. Cluster Sampling. You randomly select clusters or groups (e.g., classes or sections), and you evaluate the assignments of all the students in those randomly selected clusters or groups.
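The four methods can be sketched in code. The roster below is a hypothetical version of the 98-paper, five-section scenario from the earlier example; the section counts and sample sizes are illustrative assumptions, not prescriptions.

```python
import random

# Hypothetical roster: 98 student papers, each tagged with its section (0-4).
papers = [{"id": i, "section": i % 5} for i in range(98)]

# 1. Simple random sampling: pick 20 papers at random from the whole roster.
simple = random.sample(papers, 20)

# 2. Stratified sampling: sort papers into homogeneous groups (sections),
#    then draw 4 at random from each group.
strata = {}
for p in papers:
    strata.setdefault(p["section"], []).append(p)
stratified = [p for group in strata.values() for p in random.sample(group, 4)]

# 3. Systematic sampling: take every 5th paper from the ordered list.
systematic = papers[::5]

# 4. Cluster sampling: randomly pick 2 whole sections and evaluate
#    every paper in those sections.
chosen_sections = random.sample(range(5), 2)
cluster = [p for p in papers if p["section"] in chosen_sections]

print(len(simple), len(stratified), len(systematic), len(cluster))
```

Note the trade-off: stratified sampling guarantees every section is represented, whereas a simple random sample of the same size might miss a section entirely.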
OBJECTIVITY
Objectivity is a noun that means a lack of bias, judgment, or prejudice. Maintaining one's objectivity is the most important job of a teacher during the assessment process. The meaning of objectivity is easy to remember when you see the word "object" embedded within it. Objectivity in assessment refers to the nature of data gathered through an assessment process.
OBJECTIVITY
So a test is considered objective when it provides for the elimination of the scorer's personal opinion and biased judgment. In this context, there are two aspects of objectivity which should be kept in mind while constructing a test.
i. Objectivity in scoring. Objectivity of scoring means the same person or different persons scoring the test at any time arrive at the same result without any chance of error. To be objective, a test must necessarily be so worded that only the correct answer can be given to it.
ii. Objectivity in interpretation of test items by the testee. By item objectivity we mean that the item must call for a definite single answer. Well-constructed test items should lend themselves to one and only one interpretation by students who know the material involved.
Purposes of Objectivity
1. To avoid bias.
2. To ensure accurate conclusions or results.
3. To ensure outcomes purely based on facts.

Characteristics of Objectivity
1. Based on scientific facts rather than on one's opinion.
2. Factual, free from personal bias.
3. Judgment based on observable phenomena, uninfluenced by emotions or personal prejudices.
4. Being objective is to do something that is not primarily about oneself, but for the world itself.
5. Has multi-dimensional viewing.
6. Its results and data are based on continuous testing, then demonstrated or confirmed by a third party.
Recent Trends
and Focus
Refers to the current developments and areas of emphasis in the field of education.
Recent Trends
and Focus
1. Accountability and Fairness:
- This trend is about ensuring that everyone involved in the educational process, from teachers to administrators, is held accountable for students' learning outcomes. Fairness, on the other hand, is about providing equal opportunities for all students, regardless of their backgrounds or circumstances. It is about making sure that every student has the resources and support they need to succeed.
Recent Trends
and Focus
2. Standards-Based Education:
- Refers to systems of instruction, assessment, grading, and academic reporting that are based on students demonstrating understanding or mastery of the knowledge and skills they are expected to learn as they progress through their education.

- Is a method of evaluating student skill mastery. SBE is intended to help students, families, and teachers understand accurately how students are doing as they work on developing their skills.

- This is an approach to education that focuses on ensuring that students meet specific standards or benchmarks. These standards are often set by national or state education departments and are used to measure students' progress and achievement. In a standards-based system, the goal is for all students to achieve a certain level of proficiency in each subject area.
Recent Trends
and Focus
3. Outcome-Based Education:
- It means that the assessment process must be aligned with the learning outcomes: it should support the learners in their progress (formative assessment) and validate the achievement of the intended learning outcomes at the end of the process (summative assessment).

- This is a learning model that focuses not on what the students are taught, but on what they can do after they've been taught. It is about setting clear, measurable outcomes that students are expected to achieve by the end of their education. These could include specific skills, knowledge, or attitudes. The emphasis is on the final product, rather than the learning process itself.
Recent Trends
and Focus
4. Item Response Theory:
- In its simplest implementation, the Rasch model, it leverages the answers given by a set of students to a set of assessment items to estimate the skill level of each student and the difficulty of each item.

- This is a theory used in the creation and analysis of tests. It is a way to measure a person's abilities or traits based on their responses to specific questions or "items". The theory assumes that the probability of a correct response to an item is a function of both the person's ability and the characteristics of the item. This theory is often used in educational testing to create fair and accurate assessments.
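Under the Rasch model mentioned above, the probability of a correct answer depends only on the gap between the student's ability and the item's difficulty, both expressed on the same logit scale. A minimal sketch (the function name and example values are illustrative, not from the source):

```python
import math

def rasch_probability(theta, b):
    """Probability that a student of ability theta answers an item
    of difficulty b correctly, under the one-parameter (Rasch) model:
    P = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability exactly matches difficulty, the chance is 50%.
print(rasch_probability(0.0, 0.0))   # 0.5
# A stronger student on the same item has a higher chance...
print(rasch_probability(2.0, 0.0))
# ...and a harder item lowers the chance for the same student.
print(rasch_probability(0.0, 2.0))
```

Fitting theta for each student and b for each item to the observed answer matrix is what the estimation procedure in an IRT analysis does.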
Thank
You!
Prepared by:

Bongcayao, Dianne Grace


Casano, Fretchie Ann P.
Gopita, Jake A.
Pagaling, Jemverson
Polinar, Sim Angelo
Serrano, July An
Solmia, Neslyn