Principle of High-Quality Assessment 1: Clarity of Learning Targets

- The learning target should be clearly stated and must focus on student learning objectives
rather than on teacher activity (Gabuyo, 2012).
- Learning targets need to be stated in behavioral terms, or terms that denote something
observable (De Guzman-Santos, 2007).
- Learning targets state clearly what the child will be learning in all subject areas.

Cognitive Targets
Bloom’s (1954) hierarchy of educational objectives at the cognitive level:
1. Knowledge- the acquisition of facts, concepts and theories (De Guzman-Santos, 2007)

Sample behavioral terms


Define, Describe, Identify, Label, List, Match, Name, Outline, Recall, Recite, Select, State

2. Comprehension- involves cognition or awareness of the interrelationships of facts and
concepts (De Guzman-Santos, 2007)

Sample behavioral terms


Convert, Defend, Discriminate, Distinguish, Explain, Extend, Estimate, Generalize, Infer, Paraphrase, Predict, Summarize

3. Application- the transfer of knowledge from one concept to another (De Guzman-Santos, 2007)

Sample behavioral terms


Change, Compute, Demonstrate, Develop, Employ, Modify, Operate, Organize, Prepare, Produce, Relate, Solve, Transfer, Use

4. Analysis- the breaking down of a concept or idea into its components (De Guzman-Santos, 2007)

Sample behavioral terms


Break down, Deduce, Diagram, Differentiate, Distinguish, Illustrate, Infer, Outline, Point out, Relate, Separate out, Subdivide

5. Synthesis- the opposite of analysis and entails putting together the components in order to
summarize the concept (De Guzman-Santos, 2007)

Sample behavioral terms


Categorize, Compile, Compose, Create, Design, Devise, Formulate, Rewrite, Summarize

6. Evaluating and Reasoning- valuing and judging, or placing the “worth” on a concept or
principle (De Guzman-Santos, 2007)

Sample behavioral terms


Appraise, Compare, Contrast, Conclude, Criticize, Defend, Justify, Interpret, Support, Validate

Principle of High-Quality Assessment 2: Appropriateness of Assessment Method

- Once the learning targets are clearly set, it is necessary to determine an appropriate
assessment procedure or method. The general categories of assessment methods or
instruments are discussed below (De Guzman-Santos, 2007).
- The type of test used should always match the instructional objectives or learning outcomes of
the subject matter posed during the delivery of instruction. Teachers should be skilled in
choosing and developing assessment methods appropriate for instructional decisions (Gabuyo,
2012).

Assessment Methods

 Written-Response Instruments
- Include objective tests (multiple-choice, true-false, matching, or short-answer), essays,
examinations, and checklists (De Guzman-Santos, 2007).
 Objective tests are appropriate for assessing the various levels of the hierarchy of
educational objectives.
 Multiple-choice tests in particular can be constructed in such a way as to test
higher-order thinking skills.
 Essays, when properly planned, can test the student’s grasp of the higher-level
cognitive skills, particularly in the areas of application, analysis, synthesis, and
judgment.

 Product Rating Scales


- A teacher is often tasked to rate products. Examples of products that are frequently rated in
education are book reports, maps, charts, diagrams, notebooks, essays, and creative
endeavors of all sorts (De Guzman-Santos, 2007).
- An example of a product rating scale is the classic ‘handwriting’ scale used in the California
Achievement Test, Form W (1957) (De Guzman-Santos, 2007).

 Performance Tests
- One of the most frequently used measurement instruments is the checklist. A performance
checklist consists of a list of behaviors that make up a certain type of performance (e.g., using
a microscope, typing a letter, solving a mathematics problem, and so on) (De Guzman-Santos,
2007).
- It is used to determine whether or not an individual behaves in a certain (usually desired) way
when asked to complete a particular task. If a particular behavior is present when an
individual is observed, the teacher places a check opposite it on the list (De Guzman-Santos,
2007).

 Oral Questioning
- The ancient Greeks used oral questioning extensively as an assessment method. Socrates
himself, considered the epitome of a teacher, was said to have handled his classes solely
through questioning and oral interactions (De Guzman-Santos, 2007).
- Oral questioning is an appropriate assessment method when the objectives are (a) to assess
the student’s stock knowledge and/or (b) to determine the student’s ability to communicate
ideas in coherent verbal sentences (De Guzman-Santos, 2007).
- While oral questioning is indeed an option for assessment, several factors need to be considered
when using this option (De Guzman-Santos, 2007).

 Observation and Self-Report


- A tally sheet is a device often used by teachers to record the frequency of student behaviors,
activities or remarks. (De Guzman-Santos, 2007)
- A self-checklist is a list of several characteristics or activities presented to the subjects of a
study. (De Guzman-Santos, 2007)
- Self-checklists are often employed by teachers when they want to diagnose or to appraise the
performance of students from the point of view of the students themselves. (De Guzman-
Santos, 2007)
- Observation and self-reports are useful supplementary assessment methods when used in
conjunction with oral questioning and performance tests. (De Guzman-Santos, 2007)

Principle of High-Quality Assessment 3: Validity

- Refers to the degree to which a test actually measures what it tries to measure (Asaad, 2004).
- Refers to the appropriateness of score-based inferences, or decisions made based on the
students’ test results; the extent to which a test measures what it is supposed to measure
(Gabuyo, 2012).
- Means the degree to which a test or measuring instrument measures what it intends to measure.
The validity of a measuring instrument has to do with its soundness, what the test measures, its
effectiveness, and how well it could be applied (Calmorin, 2004).

Factors that Affect the Validity of a Test


There are some factors that greatly affect the validity of a test. These include the following:

1. Inappropriateness of the test items


- Measuring the understanding, thinking skills, and other complex types of
achievement with test forms that are appropriate only for measuring
factual knowledge will invalidate the results.

2. Directions of the test items


- Directions that are not clearly stated as to how the students should respond to the
items and record their answers will tend to lessen the validity of the test
items.

3. Reading vocabulary and sentence structure


- Vocabulary and sentence structures that do not match the level of the
students will result in the test measuring reading comprehension or
intelligence rather than what it intends to measure.

4. Level of difficulty of the test item


- When the test items are too easy or too difficult, they cannot discriminate
between the bright and the poor students, thus lowering the validity of
the test.

5. Poorly constructed test items


- Test items which unintentionally provide clues to the answer will tend to
measure the students’ alertness in detecting clues, and the important aspects
of student performance that the test is intended to measure will be
affected.

6. Length of the test


- A test should be of sufficient number of items to measure what it is
supposed to measure. If a test is too short to provide a representative
sample of the performance that is to be measured, validity will suffer
accordingly.

7. Arrangement of the test items


- Test items should be arranged in increasing difficulty. Placing difficult
items early in the test may cause mental blocks and may take up too
much of the students’ time; hence, students are prevented from reaching
items they could easily answer. Improper arrangement may therefore
also affect validity by having a detrimental effect on students’ motivation.

8. Pattern of the answers


- A systematic pattern of correct answers will enable students to guess the
answers, and this will again lower the validity of the test.

9. Ambiguity
- Ambiguous statements in test items contribute to misinterpretations and
confusion.
- Ambiguity sometimes confuses the bright students more than the poor
students, causing the items to discriminate in a negative direction.

Methods of Establishing Validity


There are five ways in which we can determine the validity of a test. These are face validity,
content validity, construct validity, concurrent validity, and predictive validity (Calmorin, 2004).

 Face Validity
- This is done by examining the test to find out if it is a good one. In judging
face validity, there should be at least three knowledgeable persons who are
qualified to pass judgment on the appropriateness, suitability, and
mechanics in the construction of the tests. There is no common numerical
method for face validity.

 Content Validity
- Like face validity, it is done by examining the test by at least three
knowledgeable persons, but this time there is a more serious query to find
out if the test really measures what it seeks to measure.
- In judging content validity, one should look at both the subject matter
covered in the test and the type of behavior to be measured.

 Construct Validity
- Construct validity refers to the degree to which the test can be described
psychologically. Here, it is assumed that there are factors that make up a
psychological construct.

 Concurrent Validity
- Concurrent validity refers to the degree to which the test correlates with a
criterion, which is set up as an acceptable measure or standard other than
the test itself. The criterion is always available at the time of testing.

Types of validity (Calmorin, 2004)

 Concurrent Validity Computation
- Is described by the relevance of a test to different types of criteria, such as
through judgment and systematic examination of relevant course syllabi
and textbooks, pooled judgment of subject matter experts, statements of
behavioral objectives, analysis of teacher-made test questions, among
others (Calmorin, 2004).
 Predictive Validity Computation
- Is commonly used in evaluating achievement tests. A well-constructed
achievement test should cover the objectives of instruction, not just its
subject matter (Calmorin, 2004).

Pearson Product-Moment Coefficient of Correlation

r = [nΣXY − (ΣX)(ΣY)] / √{[nΣX² − (ΣX)²][nΣY² − (ΣY)²]}

Where: r = coefficient of correlation (Pearson r)
X = valid criterion
Y = test score
n = total number of pairs of scores
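
A minimal sketch of this computation in Python; the paired criterion measures and test scores below are hypothetical, made up purely for illustration.

    # Pearson product-moment correlation (r) via the raw-score formula above.
    from math import sqrt

    def pearson_r(x, y):
        n = len(x)
        sum_x, sum_y = sum(x), sum(y)
        sum_xy = sum(xi * yi for xi, yi in zip(x, y))
        sum_x2 = sum(xi ** 2 for xi in x)
        sum_y2 = sum(yi ** 2 for yi in y)
        numerator = n * sum_xy - sum_x * sum_y
        denominator = sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
        return numerator / denominator

    criterion = [85, 78, 92, 70, 88]   # X: valid criterion (hypothetical)
    scores = [82, 75, 95, 68, 90]      # Y: test scores (hypothetical)
    print(round(pearson_r(criterion, scores), 2))

An r close to +1 indicates that the test orders students much as the criterion does, which is the kind of evidence concurrent and predictive validity look for.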

 Content Validity Checklist


- Means the extent to which the content or topic of the test is truly
representative of the course. It involves, essentially, the systematic
examination of the test content to determine if it covers a representative
sample of the behavior domain to be measured.
- It is very important that the behavior domain to be tested be
systematically analyzed to make certain that all major aspects are covered
by the test items, and in the correct proportions (Calmorin, 2004).


Principle of High-Quality Assessment 4: Reliability

- Refers to the consistency of measurement; that is, how consistent test results or other assessment
results are from one measurement to another. We can say that a test is reliable when it yields
practically the same scores when administered twice to the same group of students, with a
reliability index of 0.61 or above (Gabuyo, 2012).
- Refers to the consistency and accuracy of the test. A reliable test, therefore, should yield
essentially the same scores when administered twice to the same students. For a teacher-made
test, a reliability index of 0.50 and above is acceptable (Asaad, 2004).

Factors that Affect Reliability


There are some factors that greatly affect the reliability of the test. These include the following
(Asaad, 2004):

1. Length of the test


- Reliability is higher when there are more items in a given test,
because the test then involves a larger sample of the material covered.

2. Moderate item difficulty


- Reliability is increased when the test items are of moderate difficulty because
this spreads the scores over a greater range than when a test is composed of
mainly difficult or mainly easy items.

3. Objective scoring
- Reliability is greater when tests can be scored objectively; that is, a student
who takes the test should obtain the same score regardless of who happens
to be the examiner or corrector of the test.

4. Heterogeneity of the student group


- Reliability is higher when test scores are spread over a wide range of abilities;
measurement errors are then smaller than those for a group that is more
homogeneous in ability.

5. Limited time
- A test in which speed is a factor is more reliable than a test conducted
with a longer time allowance.

Methods of Establishing Reliability


There are five methods of estimating the reliability of a measuring instrument. These methods
include the Test-Retest Method, Parallel Forms Method, Split-Half Method, KR-20, and KR-21 (Calmorin, 2004).

 Test-Retest Method


- The same measuring instrument is administered twice to the same group of
students and the correlation coefficient is determined. The limitations of
this method are: (1) when the time interval is short, the respondents may recall
their previous responses, which tends to make the correlation coefficient high; (2)
when the time interval is long, such factors as unlearning and forgetting, among
others, may occur and may result in a low correlation for the measuring instrument; and (3)
regardless of the time interval separating the two administrations, other varying
environmental conditions such as noise, temperature, lighting, and other
factors may affect the correlation coefficient of the measuring instrument.

Spearman rank correlation coefficient or Spearman rho


Formula (3.1): rs = 1 − (6ΣD²) / (N³ − N)
Where: rs = Spearman rho
ΣD² = sum of the squared differences between ranks
N = total number of cases

To apply the foregoing formula (3.1), the steps are as follows:


Step 1. Rank the scores of respondents from highest to lowest in the first set of administration (X1) and
mark this rank as Rx. The highest score receives the rank of 1; the second highest score, 2; the third
highest score, 3; and so on.
Step 2. Rank the second set of scores (Y) in the same manner as in Step 1 and mark as Ry.
Step 3. Determine the difference in ranks for every pair of ranks.
Step 4. Square each difference to get D².
Step 5. Sum the squared differences to find ΣD².
Step 6. Compute the Spearman rho (rs) by applying formula (3.1).
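
A minimal sketch of these steps in Python, applied to two administrations of the same test (the test-retest method). The two score sets are hypothetical, made up purely for illustration; tied scores receive the average of the ranks they occupy, a common convention not spelled out in the steps above.

    def ranks(scores):
        """Steps 1-2: rank from highest to lowest (rank 1 = highest), averaging ties."""
        order = sorted(scores, reverse=True)
        return [order.index(s) + order.count(s) / 2 + 0.5 for s in scores]

    def spearman_rho(x, y):
        rx, ry = ranks(x), ranks(y)                          # Steps 1-2
        sum_d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))   # Steps 3-5
        n = len(x)
        return 1 - (6 * sum_d2) / (n ** 3 - n)               # Step 6, formula (3.1)

    first = [88, 75, 93, 60, 79]    # X1: first administration (hypothetical)
    second = [85, 80, 95, 58, 70]   # Y: second administration (hypothetical)
    print(round(spearman_rho(first, second), 2))             # 0.9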

 Parallel Form Method


- Parallel or equivalent forms of a test may be administered to the same group of
students, and the paired observations correlated. “In estimating reliability by the
administration of parallel or equivalent forms of a test, criteria of parallelism is
required” (Ferguson & Takane, 1989).

 Split-Half Method


- The test in this method may be administered once, but the test items are divided
into two halves. The common procedure is to divide the test into odd and even
items.
- The two halves of the test must be similar but not identical in content, number of
items, difficulty, means, and standard deviations.
Formula: rwt = 2(rht) / (1 + rht)
Where: rwt = reliability of the whole test; rht = reliability of the half test
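
A minimal sketch of the method in Python; the item-response matrix is hypothetical (1 = correct, 0 = wrong), made up purely for illustration, and Pearson r between the odd-item and even-item half scores stands in for rht.

    from statistics import correlation  # Pearson r (Python 3.10+)

    # Rows = students; columns = items answered correctly (1) or wrongly (0).
    responses = [
        [1, 1, 0, 1, 1, 0, 1, 1],
        [1, 0, 1, 1, 0, 0, 1, 0],
        [0, 1, 1, 0, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 0, 0, 1],
        [0, 0, 1, 0, 0, 1, 0, 0],
    ]

    # Divide the test into odd and even items and total each half per student.
    odd_half = [sum(row[0::2]) for row in responses]
    even_half = [sum(row[1::2]) for row in responses]

    rht = correlation(odd_half, even_half)   # reliability of the half test
    rwt = (2 * rht) / (1 + rht)              # reliability of the whole test
    print(round(rwt, 2))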

 KR-20

The steps in applying the Kuder-Richardson Formula 20 are as follows:


Step 1. Compute the variance (SD²) of the test scores for the whole group.
Step 2. Find the proportion passing each item (pi) and the proportion failing each item (qi). For
instance, twelve of the fourteen students passed or got the correct answer for item 1 (pi = 12/14 = 0.86),
and two students failed item 1 (qi = 2/14 = 0.14, or qi = 1 − pi = 1 − 0.86 = 0.14).
Step 3. Multiply pi and qi for each item, i.e., 0.86 × 0.14 = 0.1204, and sum over all items. This gives
the Σpiqi value.
Step 4. Substitute the calculated values in formula 2.3.

Formulas:
Mean: X̄ = ΣX / N
Variance: SD² = Σ(X − X̄)² / N
KR-20: rxx = [N / (N − 1)] × [(SD² − Σpiqi) / SD²], where N here is the number of items in the test
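
A minimal sketch of Steps 1-4 in Python; the 5-student × 8-item response matrix is hypothetical (1 = correct, 0 = wrong), made up purely for illustration.

    # Rows = students; columns = items answered correctly (1) or wrongly (0).
    responses = [
        [1, 1, 0, 1, 1, 0, 1, 1],
        [1, 0, 1, 1, 0, 0, 1, 0],
        [0, 1, 1, 0, 1, 1, 1, 1],
        [1, 1, 1, 1, 1, 0, 0, 1],
        [0, 0, 1, 0, 0, 1, 0, 0],
    ]
    n_students = len(responses)
    n_items = len(responses[0])

    # Step 1: variance (SD^2) of the total scores for the whole group.
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_students
    variance = sum((t - mean) ** 2 for t in totals) / n_students

    # Steps 2-3: proportion passing (pi) and failing (qi = 1 - pi) each item,
    # multiplied and summed to give the sum of pi*qi.
    sum_piqi = 0.0
    for item in range(n_items):
        pi = sum(row[item] for row in responses) / n_students
        sum_piqi += pi * (1 - pi)

    # Step 4: substitute the calculated values in the KR-20 formula.
    kr20 = (n_items / (n_items - 1)) * ((variance - sum_piqi) / variance)
    print(round(kr20, 2))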

 KR-21
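For reference, the standard Kuder-Richardson Formula 21 estimates reliability from the mean (X̄), the variance (SD²), and the number of items (N) alone, under the assumption that all items are of roughly equal difficulty:

KR-21: rxx = [N / (N − 1)] × [1 − X̄(N − X̄) / (N × SD²)]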

Process-Oriented Performance-Based Assessment

Process-Oriented Learning Competencies


- Process-oriented performance-based assessment is concerned with the actual task performance
rather than the output or product of the activity (Navarro & De Guzman-Santos, 2013).
- It is the procedure that a student uses to complete a task (Raagas, 2010).

Types of Learning Competencies (Navarro & De Guzman-Santos, 2013)


1. Simple Competency- contains one skill or ability.
2. Complex Competency- contains two or more skills.
Process in Developing Process-Oriented Learning Competencies (Gabuyo & Dy, 2013)
1. Identify the competencies that are suitable for performance-based assessment.
2. Create a list of learning outcomes that specifies the knowledge, skills, habits of mind, and social
skills that are appropriate for performance assessment.
Structure of Process-Oriented Learning Competencies (Navarro & De Guzman-Santos, 2013)
 Task/ Identified Task
 Objectives
 Learning Competencies

Task Designing
- Is a simulation or activity to create a task that will allow learners to demonstrate the knowledge,
skills, and attitudes that they have acquired (De Guzman-Santos, 2007).
- It allows the students to demonstrate the knowledge and skills they have acquired through the
activity given by the teacher (Navarro & De Guzman-Santos, 2013).

Standards for Designing a Task (Navarro & De Guzman-Santos, 2013)

 Identify an activity that would highlight the competencies to be evaluated.
 Identify an activity that would entail more or less the same sets of competencies.
 Find a task that would be interesting and enjoyable for the students.

Scoring Rubrics
- An assessment tool that guides the evaluation of the products or processes of students’ efforts
(Gutierrez, 2008)
- A scoring scale used to assess student performance along a task-specific set of criteria (Navarro
& De Guzman-Santos, 2013)
- A rating system by which teachers can determine at what level of proficiency a student is able to
perform a task (Gabuyo & Dy, 2013)

Components of a Scoring Rubric (Navarro & De Guzman-Santos, 2013)


 Criteria- characteristics of good performance on a task
Criteria Selection (Corpuz, 2021)
 Focus on important aspects of the performance
 Should reflect observable and measurable expectations relative to the task
 Should be different from each other

 Should be stated in a precise, unambiguous language


 Performance Level- a rating scale that identifies students’ level of mastery within each criterion
Fewer levels of performance should be included in a scoring rubric initially because they are:
 Easier and quicker to administer
 Easier to explain to students and others
 Easier to expand than larger rubrics are to shrink
 Descriptors- spell out what is expected of students at each level of performance for each
criterion

Types of Scoring Rubric (Navarro & De Guzman-Santos, 2013)


 Analytic- articulates levels of performance for each criterion so the teacher can assess student
performance on each criterion.
 Holistic- assigns a level of performance by assessing performance across multiple criteria as a
whole.
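
A minimal sketch of the difference in Python; the criteria and levels are hypothetical, made up purely for illustration.

    # Analytic rubric: the rater assigns one level (say 1-4) per criterion,
    # so strengths and weaknesses can be reported criterion by criterion.
    analytic_scores = {"content": 4, "organization": 3, "delivery": 2}
    print(analytic_scores, "total:", sum(analytic_scores.values()))

    # Holistic rubric: the rater weighs all criteria together and assigns a
    # single overall level, with no per-criterion breakdown.
    holistic_score = 3
    print("overall level:", holistic_score)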

Product-Oriented Performance-Based Assessment

Product-Oriented Learning Competencies


- Student performance can be defined as targeted tasks that lead to a product or overall learning
outcome. Products include a wide range of student works that target specific skills (Navarro & De
Guzman-Santos, 2013).
- Is a tangible outcome that may be the result of completing a process (Raagas, 2010).
- Requires students to demonstrate multiple levels of metacognitive skills, which require the use of
complex procedural skills for creating an authentic product (Cajigal & Mantuano, 2014).

Ways to State Product-Oriented Learning Competencies (Navarro & De Guzman-Santos, 2013)
 Level 1: Does the finished product or project illustrate the minimum expected parts or
functions? (Beginner/ Novice Level)
 Level 2: Does the finished product or project contain additional parts and functions on top of the
minimum requirements which tend to enhance the final output? (Skilled Level)
 Level 3: Does the finished product contain the basic minimum parts and functions, have
additional features on top of minimum, and is aesthetically pleasing? (Expert Level)

Task Designing
- The design of the task in this context depends on what the teacher desires to observe as outputs
of the students. The concepts that may be associated with task designing include (Navarro & De
Guzman-Santos, 2013):
A. Complexity- the level of complexity of the project needs to be within the range of
ability of the students.
B. Appeal- the project or activity must be appealing to the students and interesting
enough to encourage them to pursue the task to completion.
C. Creativity- the project needs to encourage students to exercise creativity and divergent
thinking.
D. Goal-Based- the project must be produced in order to attain a learning objective.

Scoring Rubrics
- Are descriptive scoring schemes that are developed by teachers or evaluators to guide the
analysis of the products or processes of students’ efforts (Brookhart, 1999, as cited in Navarro & De
Guzman-Santos, 2013)
- May be used to evaluate a broad range of activities (Corpuz, 2021)
- An assessment tool that guides the evaluation of the products of students’ efforts (Gutierrez,
2008)

Components of Scoring Rubric


 Criteria- a set of criteria that serves as the basis for evaluating a student’s output (Gutierrez, 2012)
Criteria Setting (Navarro & De Guzman-Santos, 2013)
Quality
Creativity
Comprehensiveness
Accuracy
Aesthetics

 Performance Level- a scale of numerical values on which to rate each criterion (Gutierrez, 2008)
Setting of Performance level (Corpuz, 2021)
There is no specific number of levels
Each level has an adjective word that describes the performance level
 Descriptors- standards of excellence accompanied by examples (Gutierrez, 2008)

Types of Scoring Rubric


 Analytic Rubric- articulates levels of performance for each criterion so the teacher can
assess a student’s performance on each criterion (Corpuz, 2021)
 Holistic Rubric- assigns a level of performance by assessing performance across multiple
criteria as a whole (Corpuz, 2021)

Steps in Developing a Scoring Rubric (Gutierrez, 2008)


A. Identify the qualities that will be looked for in a student’s output.
B. Define the criteria from the top to the bottom level of performance.
C. Assign a numerical value to each level of performance.

Assessment in the Affective Domain


- The affective domain describes learning objectives that emphasize a feeling tone, an emotion, or
a degree of acceptance or rejection (Navarro & De Guzman-Santos, 2013).

Taxonomy in the Affective Domain


- Contains a large number of objectives in the literature expressed as interests, attitudes,
appreciations, values, and emotional sets or biases (Krathwohl et al., 1964, as cited in Navarro
& De Guzman-Santos, 2013).

Steps of Krathwohl’s Taxonomy of the Affective Domain (Navarro & De Guzman-Santos, 2013)
 Receiving- being aware of or attending to something in the environment
 Responding- showing some new behaviors as a result of experience
 Valuing- showing some definite involvement or commitment
 Organization- integrating a new value into one’s general set of values, giving it some
ranking among one’s general priorities.
 Characterization by value- acting consistently with the new value

Affective Learning Competencies


- Are often stated in the form of instructional objectives, such that (Navarro & De Guzman-
Santos, 2013):
 Instructional objectives are specific, measurable, short-term, observable student behaviors.
 Objectives are the foundation upon which you can build lessons and assessments that you can
prove meet your overall course or lesson goals.
 Think of objectives as tools you use to make sure you reach your goals.
 The purpose of objectives is not to restrict spontaneity or constrain the vision of education
in the discipline, but to ensure that learning is focused clearly enough that both students and
teachers know what is going on, and so learning can be objectively measured.

Behavioral Verbs Appropriate for the Affective Domain


 Receiving- accept, attend, develop, recognize
 Responding- complete, comply, cooperate, discuss, examine, obey, respond
 Valuing- accept, defend, devote, pursue, seek
 Organization- codify, discriminate, display, order, organize, systematize, weigh
 Characterization- internalize, verify

Focal Concepts to be considered:


Attitudes- a mental predisposition to act that is expressed by evaluating a particular entity with some
degree of favor or disfavor.
- Attitudes can influence the way we act and think in the social communities we belong to. Their
components are:
A. Cognitions- our beliefs, theories, expectancies, cause-and-effect beliefs, and
perceptions relative to the focal object.
B. Affect- our feelings with respect to the focal object, such as fear, liking, or anger.
C. Behavioral intentions- our goals, aspirations, and expected responses to the
attitude object.
D. Evaluations- often considered the central component of attitudes.

Motivation- is a reason, or set of reasons, for engaging in a particular behavior, especially human
behavior as studied in psychology and neuropsychology.

Abraham Maslow’s hierarchy of human needs:


 Physiological: food, clothing, shelter
 Safety and Security: home and family
 Social: being in a community
 Self-esteem: recognition, achievement
 Self-actualization: self-understanding, self-acceptance

Frederick Herzberg’s Two-Factor Theory (Motivator-Hygiene Theory)


 Motivators (e.g., challenging work, recognition, responsibility), which give positive satisfaction,
and
 Hygiene factors (e.g., status, job security, salary, and fringe benefits), which do not motivate if
present, but if absent will result in demotivation.

Clayton Alderfer’s ERG Theory (Existence, Relatedness, and Growth)

 Expanded Maslow’s hierarchy of needs
 Existence category- physiological and safety needs, the lower-order needs
 Relatedness category- love and (external) self-esteem needs
 Growth category- self-actualization and (internal) self-esteem needs

Motivation in education can have several effects on how students learn and behave towards the
subject matter (Ormrod, 2003, as cited in Navarro & De Guzman-Santos, 2013). It can:
 Direct behavior towards particular goals
 Lead to increased effort and energy
 Increase initiation of, and persistence in, activities
 Enhance cognitive processing
 Determine what consequences are reinforcing
 Lead to improved performance

Two kinds of Motivation


 Intrinsic Motivation- occurs when people are internally motivated to do something because it
either brings them pleasure, they think it is important, or they feel that what they are learning is
morally significant
 Extrinsic Motivation- comes into play when a student is compelled to do something because of
factors external to him/her (like money or good grades)

Self-efficacy- is an impression that one is capable of performing in a certain manner or attaining
certain goals.

It is the belief (whether or not accurate) that one has the power to produce that effect.


Development of Affective Assessment Tools


- Assessment tools in the affective domain, those which are used to assess attitudes, interests,
motivation, and self-efficacy, have been developed.

Methods of Assessing Affective Targets


Three considerations in assessing affect:
1. Emotions and feelings change quickly, especially for young children and during
early adolescence.
2. Use as many varied approaches to measuring the same affective trait as possible.
3. Decide what type of data or results is needed: individual or group data?

Teacher Observation- is one of the essential tools for formative assessment to record students’ behavior
that indicates the presence of targeted affective traits (Cajigal & Mantuano, 2014)
Steps in using observation:
1. Determine in advance how specific behaviors relate to the target.
2. List the student behaviors and actions.
3. Classify and create a separate list of the positive student behaviors and another list for
the negative student behaviors.
 Approach behavior (positive behavior)- results in direct, frequent, and more
intense contact.
 Avoidance behavior (negative behavior)- results in less direct, less
frequent, and less intense contact.
4. Decide whether to use informal (unstructured) or formal (structured) observation.
Things to be considered if the teacher observation method will be used to assess affect

 Determine behaviors to be observed in advance


 Record students’ important data such as time, date, and place
 If unstructured, record brief descriptions of relevant behavior
 Keep interpretations separate from descriptions
 Record both positive and negative behaviors
 Have as many observations of each student as necessary
 Avoid personal bias
 Record the observations immediately
 Apply a simple and efficient procedure

Student Self-Report- Self-report (written reflection) is the most common measurement tool, which
essentially requires an individual to provide an account of his/her attitude or feelings toward a concept,
idea, or people.

Student Interview- it is like observation, but here there is an opportunity for teachers
to have direct involvement with the student, wherein teachers can probe and respond for better
understanding.
Surveys and Questionnaires
 Constructed-Response Format- a straightforward approach of asking
students about their affect by responding to a simple statement or question.
 Selected-Response Format- an important option when considering
traits that are personal, such as values and self-concept. This format is
considered to be an efficient way of collecting information.

Checklist for Using Students’ Self-Reports to Assess Affect (McMillan, 2007, as cited in Cajigal &
Mantuano, 2014)

 Keep measures focused on specific affective traits


 Establish trust with students
 Match response format to the trait being assessed
 Ensure anonymity if possible
 Keep questionnaires brief
 Keep items short and simple
 Avoid negatives and absolutes
 Write items in present tense
 Avoid double-barreled items

Peer Rating (appraisal)- is seen as relatively inefficient in terms of the nature of conducting, scoring, and
interpreting peer ratings.
Two methods of conducting peer rating:
Guess-who approach
Socio-metric approach

Affective Assessment Tools


The affective domain encompasses behaviors in terms of attitudes, beliefs, and feelings

Checklist- is one of the effective formative assessment strategies to monitor specific skills, behaviors,
or dispositions of individuals or groups of students (Burke, 2009, as cited in Cajigal & Mantuano, 2014)

Criteria for checklist


- In planning the criteria that will be used in a checklist, the criteria must be aligned with the
outcomes that need to be observed and measured.

Checklists should be utilized because they:

a. Make a quick and easy way to observe and record skills, criteria, and behaviors prior to a final
test or summative evaluation.
b. Provide information to teachers if there are students who need help, so as to avoid failing.
c. Provide formative assessments of students’ learning and help teachers monitor if students
are on track with the desired outcomes.

Steps in construction of a checklist (Navarro & De Guzman-Santos, 2013)


1. Enumerate all the attributes and characteristics you wish to observe relative to the concept
being measured.
2. Arrange these attributes as a “shopping” list of characteristics.
3. Ask the students to mark those attributes or characteristics which are present and to leave
blank those that are not.

Rating Scales- a set of categories designed to elicit information about a quantitative attribute in the
social sciences (Navarro & De Guzman-Santos, 2013)

According to Nitko (2001), as cited in Cajigal & Mantuano (2014), rating scales can be used for
teaching purposes and assessment:
1. Rating scales help students understand the learning targets/outcomes and focus students’
attention on performance.
2. A completed rating scale gives specific feedback to students on their strengths and
weaknesses with respect to the targets against which they are measured.
3. Students not only learn the standards but also may internalize the set standards.
4. Ratings help show each student’s growth and progress.

Types of Rating Scales (Cajigal & Mantuano, 2014)


 Numerical Rating Scales- translate the judgments of quality or degree into numbers
 Descriptive Graphic Rating Scales- replace an ambiguous single word with short behavioral
descriptions of the various points along the scale

Likert Scale- a list of clearly favorable and unfavorable attitude statements is provided (Cajigal &
Mantuano, 2014)
- Is a summated rating in response to a large number of items concerning an attitude object
or stimulus (Navarro & De Guzman-Santos, 2013)

Likert scales are derived as follows (Navarro & De Guzman-Santos, 2013); a sketch of the scoring
step follows this list:

1. You pick the individual items to include.
2. You choose how to scale each item.
3. You ask your target audience to mark each item.
4. You derive a respondent’s score by adding the values that the respondent marked on each item.
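
A minimal sketch of step 4 in Python; the statements, the five-point scale, and the responses are hypothetical, made up purely for illustration, and negative statements are reverse-scored (a common convention) so that a higher total always means a more favorable attitude.

    SCALE = {"SA": 5, "A": 4, "U": 3, "D": 2, "SD": 1}  # Strongly Agree ... Strongly Disagree

    # (statement, is_positive) pairs -- step 1: the items picked for the scale.
    items = [
        ("I enjoy solving mathematics problems.", True),
        ("Mathematics makes me anxious.", False),
        ("I look forward to mathematics class.", True),
    ]

    def summated_score(marks):
        """Step 4: add the scaled values the respondent marked on each item."""
        total = 0
        for (statement, is_positive), mark in zip(items, marks):
            value = SCALE[mark]
            if not is_positive:      # reverse-score negative statements
                value = 6 - value
            total += value
        return total

    # One respondent's marks, one per item (step 3).
    print(summated_score(["A", "D", "SA"]))   # 4 + (6 - 2) + 5 = 13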

Steps in constructing a Likert scale instrument (Cajigal & Mantuano, 2014)

1. Write a series of statements expressing positive and negative opinions toward the attitude
object.
2. Select the best statements (at least 10), with a balance of positive and negative opinions, and
edit as necessary.
3. List the statements, combining the positive and negative, and put the letters of the five-point
scale to the left of each statement for easy marking.
4. Add the directions, indicating how to mark the answer, and include a key at the top of the
page if letters are used for each statement.
5. Some prefer to drop the undecided category so that respondents are forced to indicate
agreement or disagreement.

Semantic Differential Scale- uses adjective pairs that provide anchors for feelings or beliefs that are
opposite in direction and intensity (Cajigal & Mantuano, 2014)
- Tries to assess the individual’s reaction to specific words, ideas, or
concepts in terms of ratings on bipolar scales defined with contrasting adjectives at each end
(Navarro & De Guzman-Santos, 2013)

Sentence Completion- captures whatever comes to mind from each student.

You might also like