Abstraction - Heading Final
- The learning target should be clearly stated and must be focused on student learning objectives
rather than teacher activity (Gabuyo, 2012).
- Learning targets need to be stated in behavioral terms, or terms that denote something
(De Guzman-Santos, 2007).
- Learning targets clearly state what the child will be learning in all subject areas.
Cognitive Targets
Bloom’s (1954) hierarchy of educational objectives at the cognitive level:
1. Knowledge- the acquisition of facts, concepts, and theories (De Guzman-Santos, 2007)
2. Comprehension- the understanding of the meaning of facts and concepts
3. Application- the transfer of knowledge from one concept to another (De Guzman-Santos, 2007)
4. Analysis- the breaking down of a concept or idea into its components (De Guzman-Santos, 2007)
5. Synthesis- the combining of components or ideas into a new whole
6. Evaluating and Reasoning- valuing and judgment, or putting the “worth” of a concept or
principle (De Guzman-Santos, 2007)
- Once the learning targets are clearly set, it is necessary to determine an appropriate
assessment procedure or method. The general categories of assessment methods or
instruments are discussed below (De Guzman-Santos, 2007).
- The type of test used should always match the instructional objectives or learning outcomes of
the subject matter posed during the delivery of the instruction. Teachers should be skilled in
choosing and developing assessment methods appropriate for instructional decisions. (Gabuyo,
2012)
Assessment Methods
Written-Response Instruments
- Include objective tests (multiple choice, true-false, matching, or short answer), essays,
examinations, and checklists. (De Guzman-Santos, 2007)
Objective tests are appropriate for assessing the various levels of the hierarchy of
educational objectives.
Multiple choice tests in particular can be constructed in such a way as to test
higher order thinking skills.
Essays, when properly planned, can test the student’s grasp of the higher-level
cognitive skills, particularly in the areas of application, analysis, synthesis, and
judgment.
Performance Tests
- One of the most frequently used measurement instruments is the checklist. A performance
checklist consists of a list of behaviors that make up a certain type of performance (e.g., using
a microscope, typing a letter, solving a mathematics problem, and so on). (De Guzman-
Santos, 2007)
- It is used to determine whether or not an individual behaves in a certain (usually desired) way
when asked to complete a particular task. If a particular behavior is present when an
individual is observed, the teacher places a check opposite it on the list. (De Guzman-Santos,
2007)
Oral Questioning
- The ancient Greeks used oral questioning extensively as an assessment method. Socrates
himself, considered the epitome of a teacher, was said to have handled his classes solely
based on questioning and oral interactions. (De Guzman-Santos, 2007)
- Oral questioning is an appropriate assessment method when the objectives are: (a) to assess
the student’s stock knowledge and/or (b) to determine the student’s ability to communicate
ideas in coherent verbal sentences. (De Guzman-Santos, 2007)
- While oral questioning is indeed an option for assessment, several factors need to be
considered when using this option. (De Guzman-Santos, 2007)
Validity
- Refers to the degree to which a test actually measures what it tries to measure. (Asaad, 2004)
- Refers to the appropriateness of score-based inferences; or decisions made based on the
students’ test results. The extent to which a test measures what it is supposed to measure.
(Gabuyo, 2012)
- Means the degree to which a test or measuring instrument measures what it intends to measure.
The validity of a measuring instrument has to do with its soundness, what the test measures, its
effectiveness, and how well it can be applied. (Calmorin)
9. Ambiguity
- Ambiguous statements in test items contribute to misinterpretations and
confusion.
- Ambiguity sometimes confuses the bright students more than the poor
students, causing the items to discriminate in a negative direction.
Face Validity
- This is done by examining the test to find out if it is a good one. In judging
face validity, there should be at least three knowledgeable persons who are
qualified to pass judgment on the appropriateness, suitability, and
mechanics in the construction of the tests. There is no common numerical
method for face validity.
Content Validity
- Like face validity, it is done by examining the test by at least three
knowledgeable persons, but this time there is a more serious query to find
out if the test really measures what it seeks to measure.
- In judging content validity, one should look at both the subject matter
covered in the test and the type of behavior to be measured.
Construct Validity
- Construct validity refers to the degree to which the test can be described
psychologically. Here, it is assumed that there are factors that make up a
psychological construct.
Concurrent Validity
- Concurrent validity refers to the degree to which the test correlates with a
criterion, which is set up as an acceptable measure or standard other than
the test itself. The criterion is always available at the time of testing.
r = [n∑XY − (∑X)(∑Y)] / √{[n∑X² − (∑X)²][n∑Y² − (∑Y)²]}
where n is the number of students, X the scores on the test, and Y the scores on the criterion measure.
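The Pearson r used for concurrent validity can be computed with a short script. This is a minimal sketch using only standard Python; the function name and the score data below are invented for illustration only.

```python
# Hypothetical illustration: correlating scores on a new test (X) with scores
# on an established criterion measure (Y) taken at about the same time.
# The data values are made up for demonstration purposes.

def pearson_r(x, y):
    """Pearson product-moment correlation using the raw-score formula."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = ((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)) ** 0.5
    return numerator / denominator

# New test scores and criterion scores for the same five students
new_test = [10, 12, 15, 18, 20]
criterion = [11, 13, 14, 17, 21]
print(round(pearson_r(new_test, criterion), 2))
```

A coefficient close to 1.0 would indicate strong agreement between the new test and the criterion, supporting concurrent validity.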
Create a study outline / study notes / fact sheets on this part. Do not forget to insert in-text
citation and write the full citation in the Reference part located at the portion.
Reliability
- Refers to the consistency of measurement; that is, how consistent test results or other assessment
results are from one measurement to another. A test is reliable when it can be used to predict
practically the same scores when the test is administered twice to the same group of students,
with a reliability index of 0.61 or above. (Gabuyo, 2012)
- Refers to the consistency and accuracy of the test. A reliable test, therefore, should yield
essentially the same scores when administered twice to the same students. For a teacher-made
test, a reliability index of 0.50 and above is acceptable. (Asaad, 2004)
3. Objective scoring
- Reliability is greater when a test can be scored objectively; that is, a student
who takes the test should obtain the same score regardless of who happens
to be the examiner or corrector of the test.
5. Limited time
- A test in which speed is a factor is more reliable than a test administered
over a longer time.
KR-20
Formula:
Mean: X̄ = ∑X / N
Variance: SD² = ∑(X − X̄)² / N
KR-20: rxx = [n / (n − 1)] × [(SD² − ∑piqi) / SD²]
where N is the number of students, n is the number of items, pi is the proportion of students who
answered item i correctly, and qi = 1 − pi.
KR-21
KR-21: rxx = [n / (n − 1)] × [1 − X̄(n − X̄) / (n·SD²)]
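The KR-20 computation can be sketched in a few lines of Python. This is a hypothetical illustration: the function name and the 0/1 score matrix below are invented, and the population variance is used, matching the SD² definition above.

```python
# KR-20 from a matrix of dichotomous item scores (1 = correct, 0 = incorrect);
# rows are students, columns are items. The matrix is invented for demonstration.

def kr20(scores):
    n_items = len(scores[0])
    n_students = len(scores)
    totals = [sum(row) for row in scores]               # each student's total score
    mean = sum(totals) / n_students
    sd2 = sum((t - mean) ** 2 for t in totals) / n_students  # population variance
    sum_pq = 0.0
    for i in range(n_items):
        p = sum(row[i] for row in scores) / n_students  # proportion answering item i correctly
        sum_pq += p * (1 - p)                           # q = 1 - p
    return (n_items / (n_items - 1)) * ((sd2 - sum_pq) / sd2)

scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 2))
```

For this sample matrix the routine yields 0.80, which would be an acceptable reliability index under the thresholds cited above.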
Task Designing
- Is the creation of a simulation or activity task that will allow learners to demonstrate the
knowledge, skills, and attitudes that they have acquired (De Guzman & Santos, 2007)
- It allows the students to demonstrate the knowledge and skills they have acquired through the
activity given by the teacher (Navarro & De Guzman-Santos, 2013)
Scoring Rubrics
- An assessment tool that guides the evaluation of the products or processes of students’ efforts
(Gutierrez, 2008)
- A scoring scale used to assess student performance along a task-specific set of criteria (Navarro
& De Guzman, 2013)
- A rating system by which teachers can determine at what level of proficiency a student is able to
perform a task (Gabuyo & Dy, 2013)
Ways to state Product-Oriented Learning Competencies (Navarro & De Guzman- Santos, 2013)
Level 1: Does the finished product or project illustrate the minimum expected parts or
functions? (Beginner/ Novice Level)
Level 2: Does the finished product or project contain additional parts and functions on top of the
minimum requirements which tend to enhance the final output? (Skilled Level)
Level 3: Does the finished product contain the basic minimum parts and functions, have
additional features on top of minimum, and is aesthetically pleasing? (Expert Level)
Task Designing
- The design of the task in this context depends on what the teacher desires to observe as outputs
of students. The concepts that may be associated with task designing include (Navarro & De
Guzman- Santos, 2013)
A. Complexity- the level of complexity of the project needs to be within the range of
ability of the students.
B. Appeal- the project or activity must be appealing to the students; it should be interesting
enough to encourage the students to pursue the task to completion.
C. Creativity- the project needs to encourage students to exercise creativity and divergent
thinking.
D. Goal-Based- the project must be produced in order to attain a learning objective.
Scoring Rubrics
- Are descriptive scoring schemes developed by teachers or evaluators to guide the
analysis of the products or processes of students’ efforts (Brookhart, 1999, as cited by Navarro
& De Guzman-Santos, 2013)
- May be used to evaluate a broad range of activities (Corpuz, 2021)
- An assessment tool that guides the evaluation of the products of students’ efforts (Gutierrez,
2008)
Steps of Krathwohl’s Taxonomy of Affective Domain (Navarro & De Guzman- Santos, 2013)
Receiving- being aware of or attending to something in the environment
Responding- showing some new behaviors as a result of experience
Valuing- showing some definite involvement or commitment
Organization- integrating a new value into one’s general set of values, giving it some
ranking among one’s general priorities.
Characterization by value- acting consistently with the new value
Motivation in education can have several effects on how students learn and on their behavior
toward the subject matter (Ormrod, as cited by Navarro & De Guzman-Santos, 2013). It can:
Direct behavior toward particular goals
Lead to increased effort and energy
Increase initiation of, and persistence in, activities
Enhance cognitive processing
Determine what consequences are reinforcing
Lead to improved performance
Self-efficacy – is an impression that one is capable of performing in a certain manner or attaining
certain goals
It is the belief (whether or not accurate) that one has the power to produce that effect.
Teacher Observation- is one of the essential tools for formative assessment, used to record student
behaviors that indicate the presence of targeted affective traits (Cajigal & Mantuano, 2014)
Steps in using observation:
1. Determine in advance how specific behaviors relate to the target.
2. List the student behaviors and actions.
3. Classify and create a separate list of the positive student behaviors and another list for
the negative student behaviors.
Approach behavior (positive behavior) – results in more direct, more frequent, and
more intense contact.
Avoidance behavior (negative behavior) – results in less direct, less
frequent, and less intense contact.
4. Decide whether to use an informal (unstructured) or formal (structured)
observation.
Things to consider when the teacher observation method is used to assess affect
Student Self-Report – self-report (written reflections) is the most common measurement tool; it
essentially requires an individual to provide an account of his/her attitudes or feelings toward a
concept, idea, or people.
Student Interview- it is like observation, but here there is an opportunity for teachers to have
direct involvement with the student, wherein teachers can probe and respond for better
understanding.
Surveys and Questionnaires
Constructed-Response Format- is a straightforward approach of asking
students about their affect by responding to a simple statement or question.
Selected-Response Format- is important when considering traits that are
personal, such as values and self-concept. This format is
considered an efficient way of collecting information.
Checklist for using Student Self-Report to Assess Affect (McMillan, 2007, as cited by Cajigal
& Mantuano, 2014)
Peer Rating (appraisal) – is seen as relatively inefficient in terms of the nature of conducting, scoring,
and interpreting peer ratings.
Two methods in conducting peer rating
Guess who approach
Socio-metric approach
Checklist- is one of the effective formative assessment strategies used to monitor specific skills,
behaviors, or dispositions of individuals or groups of students (Burke, 2009, as cited by Cajigal &
Mantuano, 2014)
Rating Scales- a rating scale is a set of categories designed to elicit information about a quantitative
attribute in the social sciences (Navarro & De Guzman-Santos, 2013)
According to Nitko (2001), as cited by Cajigal & Mantuano (2014), rating scales can be used for
teaching purposes and assessment.
1. Rating scales help students understand the learning targets/ outcomes and to focus student’s
attention to performance.
2. A completed rating scale gives specific feedback to students about their strengths and
weaknesses with respect to the targets against which they are measured.
3. Students not only learn the standards but also may internalize the set standards.
4. Ratings help to show each student’s growth and progress.
Likert Scale- a list of clearly favorable and unfavorable attitude statements is provided (Cajigal &
Mantuano, 2014)
- Is a summated rating in response to a large number of items concerning an attitude object
or stimulus (Navarro & De Guzman-Santos, 2013)
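The summated-rating idea behind the Likert scale can be sketched in code. This is a minimal illustration, not a prescribed procedure: the function name, item positions, and ratings below are invented, and unfavorable statements are reverse-scored before summing so that a high total always reflects a favorable attitude.

```python
# Hypothetical summated Likert scoring: each statement is rated from
# 1 (strongly disagree) to 5 (strongly agree). Unfavorable statements are
# reverse-scored (5 -> 1, 4 -> 2, ...) before summing.

def likert_total(responses, unfavorable, points=5):
    total = 0
    for i, rating in enumerate(responses):
        if i in unfavorable:
            rating = points + 1 - rating  # reverse-score unfavorable items
        total += rating
    return total

# Ratings for four statements; statements at positions 1 and 3 are unfavorable
ratings = [5, 2, 4, 1]
print(likert_total(ratings, unfavorable={1, 3}))
```

After reverse-scoring, all four responses point in the favorable direction, so the printed total (18 out of a possible 20) indicates a strongly favorable attitude.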
Semantic Differential Scale- uses adjective pairs that provide anchors for feelings or beliefs that are
opposite in direction and intensity (Cajigal & Mantuano, 2014)
- Tries to assess the individual’s reaction to specific words, ideas, or
concepts in terms of ratings on bipolar scales defined with contrasting adjectives at each end
(Navarro & De Guzman-Santos, 2013)