
Chapter 1: High Quality Assessment

High-quality assessment takes the massive quantities of performance data and translates them into meaningful, actionable reports that pinpoint current student progress, predict future achievement, and inform instruction (techlearning.com).

Cognitive Targets
1. Knowledge
2. Comprehension
3. Application
4. Analysis
5. Synthesis
6. Evaluation

Psychomotor Targets

Skills, Competencies, and Abilities Targets

Products, Outputs, and Projects Targets

Appropriateness of Assessment Methods

Written-Response Instruments - a type of test that requires students to select the correct response from several alternatives or to supply a word or short phrase to answer a question or complete a statement (Gabuyo, 2012). It includes objective tests (multiple choice, true or false, matching type, and the like), essays, and checklists. This is used to assess the cognitive aspect.

Product Rating Scales - a teacher is often tasked to rate products. In education, book reports, maps, diagrams, portfolios, and the like are some of the products used to assess students in line with the lesson.

Performance Tests - one of the most frequently used measurement instruments is the checklist, which the teacher uses to check the skills or results that students are expected to achieve by the end of the lesson. A performance test is an assessment in which students are asked to perform real-world tasks and demonstrate meaningful application of essential knowledge and skills. It can appropriately measure learning objectives that focus on the ability of the students to demonstrate skills or knowledge in real-life situations (Gabuyo, 2012).

Oral Questioning - a type of assessment that is very appropriate for assessing the stock knowledge and speaking skills of the students.

Observation and Self-Reports - these are useful supplements when used in conjunction with oral questioning and performance tests. Such methods can offset the negative impact of the anxiety students may feel during oral questioning or actual performance observation. Student responses may be used to evaluate both performance and attitude. Assessment tools include sentence completion, Likert scales, checklists, and holistic scales (Gabuyo, 2012). The teacher observes how students carry out certain activities, attending either to the process or to the product. Student behavior during performance is systematically monitored, described, classified, and analyzed (Gabuyo, 2012).
Properties of Assessment Methods
Validity - according to Asaad and Hailaya (2004), validity refers to the degree to which a test actually measures what it intends to measure. The validity of a test concerns what the test measures and how well it does so.

Reliability - refers to the consistency and accuracy of the test. It answers the question, "Does the test yield the same or similar score rankings (all other factors being equal) consistently?" (Asaad and Hailaya, 2004).

CHAPTER 2: TYPES OF ASSESSMENT

Traditional and Authentic Assessment


Traditional assessment mainly pertains to paper-and-pencil tests that are
usually true/false, matching type, or multiple choice exams (National Council
of State Supervisors for Languages, n.d.). These tests are usually standardized.
These tests measure what learners can do at a particular time (Cajigal &
Mantuano, 2014), with particular focus on what they have memorized and
recalled. Traditional assessment activities are "contrived," meaning they are deliberately manufactured to fit the intended learning outcomes. Here, teachers are the ones who organize the development of solutions. Traditional
assessment deals with indirect evidence of learning as it only measures, as
stated above, knowledge and comprehension levels of the student’s cognitive
ability.
Authentic Assessment, on the other hand, reveals students' learning, students'
achievements, and students’ attitudes through testing their higher cognitive
skills of analysis and creativity (Cajigal & Mantuano, 2014). Authentic
Assessment is also called Performance Assessment because it places students
in situations where they can come up with solutions based on real life. Here,
they have to use their acquired knowledge and skills in the real world. In
short, this type of assessment requires the learner to perform in realistic
situations. Students participate in specific tasks, interviews, or performances
that are appropriate to the audience and setting (National Council of State
Supervisors for Languages, n.d.).

Formative Evaluation and Summative Evaluation


Formative Evaluation pertains to assessment for learning. Assessment for learning is done basically to improve students' learning outcomes. Teaching and learning plans are based on the results of formative assessment because it gives feedback on the effectiveness of teaching and further points out what students need (Cajigal & Mantuano, 2014). This feedback can then be used to modify teaching and learning activities. This type of assessment bridges the gap between students' current performance and the articulated learning goals through immediate constructive dialogue and feedback through which instructional adjustments are made (de Guzman & Adamos, 2015). It tracks students' learning progress during instruction by gathering data while a program is being developed, for the purpose of guiding progress (Reganit, Elicay & Laguerta, 2010). Formative evaluation occurs at three points of instruction: 1) during instruction, 2) between lessons, and 3) between units. Most formative assessment occurs during instruction. Teachers can carry out formative evaluation by giving quizzes, conducting observations, and conducting student self-assessments.
Summative Assessment is used in Assessment of Learning wherein students’
achievements are reflected in relation to curricular learning standards.
Traditionally, this type of assessment is done at the end of a chapter or unit to
find out student achievement (Cajigal & Mantuano, 2014). Summative
assessments are usually traditional paper and pencil tests such as unit tests,
long tests, exams, or essays. Summative assessment is used to determine the
extent to which the instructional objectives have been achieved, and is used for
assigning course grades as a way of certifying student mastery of the intended
learning outcomes (Reganit, Elicay & Laguerta, 2010).

Norm and Criterion-Referenced Assessment


Norm-Referenced Assessment describes a student’s performance in class
compared to his peers. The principal use of norm-referenced assessment is
survey testing wherein individuals are measured based on their differences in
achievement (Cajigal & Mantuano, 2014). This assessment typically focuses on a broad range of achievement, and items are selected so that they provide maximum differentiation among individuals to attain reliable ranking. Here, easy items are usually eliminated from the test. Percentile rank is used in this type of assessment, wherein the level of performance is determined by relative position
in a group.
Criterion-Referenced Assessment describes a student's performance based on
predefined and absolute standards or outcomes (Cajigal & Mantuano, 2014).
The main goal of this type of assessment is to test students’ mastery through
comparing their performance to a clearly specified achievement domain. Unlike
norm-referenced assessment, this focuses on specific and limited learning
tasks. Here, a percentage score is used, wherein the level of performance is determined by how well the standard is achieved.

Contextualized and Decontextualized Assessment


Similar to authentic assessment, contextualized assessment focuses on how students construct functioning knowledge and how they apply this knowledge in the real-world context of the subject (Cajigal & Mantuano, 2014). This type of assessment uses procedures that are authentic in nature, and the performance tasks given to students reflect learning goals.
On the other hand, decontextualized assessment uses tasks that focus on
reflecting declarative or procedural knowledge. The tasks given here do not necessarily have to connect to the real-world context of the discipline
area. Teachers must be wary of overemphasizing declarative knowledge. They
have to also assess students’ functional knowledge or the knowledge and skills
that emerge from their declarative knowledge.
Analytic and Holistic Assessment
Analytic assessment is a type of assessment that refers to feedback-giving that is specific in nature. Here, students are tested on each important aspect of a specific performance task. With this, the performance is looked at in its parts rather than as a whole (Cajigal & Mantuano, 2014).
Holistic assessment pertains to a global approach in giving feedback and
scoring. Here, the teacher develops mental responses to a student’s work or
performance and provides a grade through supporting it with valid
justifications (Cajigal & Mantuano, 2014). The main goal of this type of
assessment is to enhance students' personal strengths because they are able to effectively develop their decisive and investigative skills.
Chapter 3: PRODUCT-ORIENTED PERFORMANCE-BASED ASSESSMENT
Assessing student performance in ESL/EFL classrooms is one of the
biggest concerns educators face. What does it mean to give students a grade?
Does a passing grade mean they really have communicative proficiency in the
language? The increasing emphasis on competency means that educators must
devise methods that measure proficiency and the ability to perform. Educators
must find ways to measure whether a student can use information to confront
real-world tasks successfully.
Many teachers have traditionally relied on some sort of test to assess
learning. The problem with this approach is that while tests may assess how
much information a student has retained, they do not often evaluate how well a
student can use this knowledge to perform a task. Multiple-choice exams, for
instance, make it difficult to measure language competency demanding more
than recall of the subject matter (Brualdi, 1998; Roediger, 2005). The emphasis
on the assessment of competencies demands a new way of thinking about how
to evaluate students and requires varied forms of assessments to determine the
extent to which students can actually use knowledge to complete tasks.
Process-oriented performance-based assessment evaluates the actual task performance. It does not emphasize the output or product of the activity. This assessment aims to know what processes a person undergoes when given a task.
Task Designing
Concepts that may be associated with task designing include:
1. Complexity – the task needs to be within the range of the students' ability. Tasks that are too simple are uninteresting; tasks that are too complicated are frustrating.
2. Appeal – Projects should be interesting enough so that students are
encouraged to pursue to complete the task.
3. Creativity – Think out of the box (divergent thinking). The project should
lead to exploring various possible ways of presenting the output.
4. Goal-Based – Bear in mind that the project is produced in order to attain a learning objective. Projects are assigned not just for the sake of producing something but for reinforcing learning.
Types of Performance-Based Assessments
Performance-based assessment, as the name implies, measures how well
a student actually performs while using learned knowledge. It may even require
the integration of language and content area skills (cf. Brualdi, 1998; Valdez
Pierce, 2002). The key is the determination of how well students apply
knowledge and skills in real life situations (Frisby, 2001; McTighe & Ferrara,
1998; Wiggins, 1998). The successful use of PBAs depends on using tasks that
let students demonstrate what they can actually do with language.
There are several types of performance-based assessment from which to
choose: products, performances, or process-oriented assessments (McTighe &
Ferrara, 1998). A product refers to something produced by students providing
concrete examples of the application of knowledge. Examples can include
brochures, reports, web pages and audio or video clips. These are generally
done outside of the classroom and based on specific assignments.
Process-oriented assessments provide insight into student thinking,
reasoning, and motivation. They can provide diagnostic information when students are asked to reflect on their learning and set goals to improve it.
Examples are think-alouds, self/peer assessment checklists or surveys,
learning logs, and individual or pair conferences (McTighe & Ferrara, 1998).
1. Presentation

One easy way to have students complete a performance-based activity is


to have them do a presentation or report of some kind. This activity could be done by individual students, which takes time, or in collaborative groups.
The basis for the presentation may be one of the following:
1.a Providing information
1.b Teaching a skill
1.c Reporting progress
1.d Persuading others
2. Portfolios
Student portfolios can include items that students have created and
collected over a period of time. Art portfolios are for students who want to apply to art programs in college. Another example is when students create a portfolio of their written work that shows how they have progressed from the beginning to the end of the class. The writing in a portfolio can be from any discipline or a combination of disciplines.
3. Performances
Performances allow students to show how they can apply knowledge and
skills under the direct observation of the teacher. These are generally done in
the classroom since they involve teacher observation at the time of
performance. Much of the work may be prepared outside the classroom but the
students “perform” in a situation where the teacher or others may observe the
fruits of their preparation. Performances may also be based on in-class
preparation. They include oral reports, skits and role-plays, demonstrations,
and debates (McTighe & Ferrara, 1998).
Dramatic performances are one kind of collaborative activity that can be used as a performance-based assessment. Students can create, perform, and/or provide a critical response. Examples include dance, recitals, dramatic enactments, and prose or poetry interpretation.
4. Exhibits and Fairs
Teachers can expand the
idea of performance-based activities by creating exhibits or fairs for students to
display their work. Examples range from history fairs to art exhibitions.
Students work on a product or item that will be exhibited publicly. Exhibitions
show in-depth learning and may include feedback from viewers.
5. Debate
A debate in the classroom is one form of performance-based learning
that teaches students about varied viewpoints and opinions. Skills associated
with debate include research, media and argument literacy, reading
comprehension, evidence evaluation, public speaking, and civic skills. There
are many different formats for debate. One is the fishbowl debate in which a
handful of students form a half circle facing the other students and debate a
topic. The rest of the classmates may pose questions to the panel.

Performance-based Assessment: Key Points


 PBA is an alternative form of assessment that moves away from traditional paper-and-pencil tests (Ferman, 2005).

 Performance-based assessment is one in which the teacher observes and makes judgments about the student's demonstration of skills or competency in making a product, constructing a response, or making a presentation (McMillan, 2007).

 Performance-based assessment provides a basis for the teacher to evaluate both the effectiveness of the process and procedure used and the product resulting from the performance of a task (Linn, 1995).
CHAPTER 4- DESIGNING MEANINGFUL PERFORMANCE-BASED
ASSESSMENT

Having learned the nature of performance-based assessment, including its characteristics, types, advantages, and limitations, the next step is to design it so that it is aligned with the learning goals. Focusing on the knowledge and skills targeted, you will need to think of tasks that must be performed authentically.
Clearly, comprehensive planning and designing of performance-based
assessment should be taken into consideration.
Designing performance based assessment entails critical processes
which start from the tasks that the teacher wants to assess. A well-designed
performance assessment helps the student to see the connection between
knowledge, skills, and abilities they have learned from the classroom, including
the experiences which help them to construct their own meaning of knowledge.
The following steps will guide you in developing a meaningful performance assessment, both process and product, that matches the desired learning outcomes.
1. DEFINING THE PURPOSE OF ASSESSMENT

The first step in designing performance-based assessment is to define the


purpose of assessment. Defining the purpose and target of assessment provides information on what students need to perform in a given task. By identifying the purpose, teachers are able to easily identify the strengths and weaknesses of the students' performance. The purpose must be specified at the beginning of the process so that the proper kinds of performance criteria and scoring procedures can be established.
Basically, the teacher should select those learning targets that can be assessed through performance and that fit the plan, along with the assessment techniques to be utilized for measuring other complex skills and performances.
1.1 Four Types of Learning Targets Used in Performance Assessment

In defining the purpose of the assessment, learning targets must be carefully identified and taken into consideration. Performance assessment primarily uses four types of learning targets: deep understanding, reasoning, skills, and products.
a. Deep Understanding
- involves students meaningfully in hands-on activities so that their
understanding is rich and more extensive than what can be attained by
traditional paper-and-pencil assessment.
- Focuses on the use of knowledge and skills

b. Reasoning
- Essential with performance assessment as the students demonstrate
skills and construct products.
- Typically, students are given a problem to solve or are asked to make a decision or produce some other outcome.

c. Skills
- Students are required to demonstrate communication, presentation, and
psychomotor skills. These targets are ideally suited to performance
assessment.

Psychomotor skills
- Describe clearly the physical action required for a given task.
d. Products
- Are completed works, such as term papers, projects, and other
assignments in which students use their knowledge and skills.
1.2 Process and Product-Oriented Performance-Based Assessment
 In defining the purpose of assessment, the teacher should identify
whether the students will have to demonstrate a process or a product.
 If the learning outcomes deal with procedures that you can specify, then the focus is on process assessment. In assessing the process, it is essential that assessment be done while the students are performing the procedure or steps.
 Learning targets that require students to demonstrate a process focus on procedural assessment.

Usually, the learning objectives start with a general competency, which is the main target of the task, followed by specific competencies that are observable in the target behavior. This can also be observed in defining the purpose of assessment for product-oriented performance-based assessment.
Assessment of products must be done when students may produce high-quality products in a variety of ways; sometimes the method or sequence does not make much difference, as long as the product is the focus of the assessment.
2. IDENTIFYING PERFORMANCE TASKS

 Having a clear understanding of the purpose of assessment, the next


step is to identify performance tasks which measure the learning target
you are about to assess. Some targets imply that the tasks should be
structured; others require unstructured tasks.
 The performance needs to be identified so that students know what tasks to perform and by what criteria they will be judged. In this case, a task description must be prepared to provide a listing of the specifications of the task and to elicit the desired performance from the students.
 Tasks should be meaningful and must let the students be personally
involved in doing and creating the tasks. The tasks should be of high
value, worth teaching to, and worth learning as well.
 In creating performance tasks, one should specify the learning targets,
the criteria by which you will evaluate performance, and the instructions
for completing the task.
2.1 Suggestions for Constructing Performance Tasks

The development of high-quality performance assessments that effectively


measure complex learning outcomes requires attention to task development
and to the ways in which performances are rated. Linn (1995) suggested ways
to improve the development of tasks:
1. Focus on learning outcomes that require complex cognitive skills
and student performances. Tasks need to be developed or selected in
light of important learning outcomes. They should be used primarily to
assess learning outcomes that are not adequately measured by less time-
consuming approaches.

2. Select or develop tasks that represent both the content and the
skills that are central to important learning outcomes. The
specification of assumed content understandings is critical in ensuring
that a task functions as intended.

3. Minimize the dependence of task performance on skills that are irrelevant to the intended purpose of the assessment task. The key here is to keep the assessment focused on its intended purpose.
4. Provide the necessary scaffolding for the students to be able to
understand the task and what is expected. Challenging tasks often
involve ambiguities and require students to experiment, gather
information, formulate hypotheses, and evaluate their own progress in
solving a problem.

5. Construct task directions so that the student’s task is clearly


indicated. Vague directions can lead to such a diverse array of performances that it becomes impossible to rate them in a fair or reliable
fashion.

6. Clearly communicate performance expectations in terms of criteria


by which the performances will be judged. Specifying the criteria to be
used in rating performance helps clarify task expectations for a student.
Explaining the criteria that will be used in rating performances not only
provides students with guidance on how to focus their efforts, but helps
to convey priorities for learning outcomes.

3. DEVELOPING SCORING SCHEMES


There are different useful ways to record the assessment of students' performance. A variety of tools can be used depending on the nature of the performance being assessed. As a teacher, you need to critically examine the task to be performed and match it with the assessment tools to be utilized. Some ways of assessing students' performance include anecdotal records, interviews, direct observations using checklists or Likert scales, and rubrics, especially for performance-based assessment.
3.1 RUBRICS AS AN ASSESSMENT TOOL
Rubrics nowadays have been widely used as an assessment tool in various
disciplines, most especially in the field of education. Different authorities
have defined rubrics, viz:
 Set of rules specifying the criteria used to find out what the students know and are able to do (Musial, 2019).
 Scoring tool that lays out specific expectations for an assignment (Stevens & Levi, 2005).
 A scoring guide that uses criteria to differentiate between levels of student proficiency (McMillan, 2007).
 Descriptive scoring schemes that are developed by teachers or evaluators to guide the analysis of the products or processes of students' efforts (Brookhart, 1999).
 The scoring procedures for judging students' responses to performance tests (Popham, 2011).

A rubric that is used to score students’ responses to a performance


assessment has three important features:
EVALUATIVE CRITERIA. These are the factors to be used in determining the quality of a student's response.
DESCRIPTIONS OF QUALITATIVE DIFFERENCES FOR EVALUATIVE CRITERIA. For each evaluative criterion, a description must be supplied so that qualitative distinctions in students' responses can be made using the criterion.
AN INDICATION OF WHETHER A HOLISTIC OR ANALYTIC SCORING APPROACH IS TO BE USED. The rubric must indicate whether the evaluative criteria are to be applied collectively in the form of holistic scoring or on a criterion-by-criterion basis in the form of analytic scoring.
3.2 TYPES OF RUBRICS
ANALYTIC RUBRIC. It requires the teacher to list and identify major
knowledge and skills which are critical in the development of process and
product tasks. It identifies specific and detailed criteria prior to assessment.
Teachers can easily assess the specific concept, understanding, skill, or product as a separate component. Each criterion in this kind of rubric receives a separate score, thus providing better diagnostic information and feedback for the students as a form of formative assessment.
HOLISTIC RUBRIC. It requires the teachers to make a judgment about the
overall quality of each student response. Each category of the scale contains
several criteria that are given a single score representing an overall rating. This provides a reasonable summary in which traits are efficiently combined and scored quickly with only one score, thus limiting the precision of the results and providing little specific information about the students' performance and what needs further improvement.
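To make the contrast between the two rubric types concrete, here is a minimal Python sketch (not drawn from the cited sources); the criterion names and the four-point scale are hypothetical examples chosen only for illustration.

```python
# Minimal sketch: analytic vs. holistic rubric scoring (hypothetical criteria and scale).

ANALYTIC_CRITERIA = ["content", "organization", "delivery"]  # assumed example criteria

def analytic_score(ratings):
    """ratings: dict mapping each criterion to a score on a 1-4 scale.
    Returns the per-criterion scores plus their total, preserving the
    diagnostic detail that analytic rubrics are meant to provide."""
    missing = [c for c in ANALYTIC_CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"No rating given for: {missing}")
    total = sum(ratings[c] for c in ANALYTIC_CRITERIA)
    return {"per_criterion": ratings, "total": total}

def holistic_score(overall_rating):
    """overall_rating: a single 1-4 judgment of the whole performance.
    Only one number is kept, so no criterion-level feedback is available."""
    return {"overall": overall_rating}

# Example usage with hypothetical ratings.
print(analytic_score({"content": 4, "organization": 3, "delivery": 3}))
print(holistic_score(3))
```

The analytic version keeps a score for every criterion (useful as formative feedback), while the holistic version collapses everything into one overall judgment.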
3.3 RUBRIC DEVELOPMENT
Stevens and Levi’s Introduction to rubrics (2005) enumerated the steps
in developing a rubric. Basically, rubrics are composed of task description,
scale, dimensions, and description of dimensions.
Task description
It involves the performance of the students. Tasks can be taken from
assignments, presentations, and other classroom activities. Usually, task
descriptions are set when defining performance tasks.
Scale
It describes how well or poorly any given task has been performed and determines to what degree the student has met a certain criterion. Generally, it
is used to describe the level of performance.
Dimensions
It is a set of criteria that serves as the basis for evaluating student output or performance. The dimensions of a rubric lay out how the task is divided into its important components, which also serve as the basis for scoring the students.
Description of the dimensions
Each dimension should contain a description of the level of performance as a standard of excellence, accompanied by examples. This allows both the teachers and the students to identify the level of expectation and which dimensions must be given emphasis.
4. RATING THE PERFORMANCE
The main objective of rating the performance is to be objective and
consistent. Be sure also that the scoring system is feasible as well. In most of
the classroom situations, the teacher is both the observer and the rater. If
there are important instructional decisions to be made, additional raters must be considered in order to make the scoring fairer.
Some common errors in rating should be avoided: personal bias and the halo effect. McMillan (2007) stated that personal bias results in three kinds of error: the generosity error occurs when the teacher tends to give higher scores; the severity error results when the teacher uses the low end of the scale and underrates student performances; and the central tendency error occurs when students are rated in the middle of the scale. On the other hand, the halo effect occurs when
the teacher’s general impression of the students affects scores given on
individual traits or performance.
Chapter 5: Affective Learning Targets
In this chapter the following topics will be discussed:

1. Importance of Affective Targets


2. Affective Traits and Learning Targets
3. Attitude Targets
4. Value Targets
5. Motivation Targets
6. Academic self-concept targets
7. Social Relationship Targets
8. Classroom Environment Targets
9. Affective Domain of the Taxonomy of Educational Objectives

Students’ academic performance is often measured by looking through


limited aspects only. According to Tanner (2011), aptitudes and attitudes of
students has to assessed also as these are part of students’ entire academic
performance.
Harter (1998) and Lefrancois (1994) stated that the learner’s attitude
towards his academic tasks influence his achievement. This what we call
attitude is related to the individual’s affective domain- and the student’s
affective side take effect on how he performs in the class.
The article entitled About Students' Attitudes on Learning, from the Ministry of Education in Guyana, expresses that attitudes generally alter every person's life, including students' education, because students' attitudes towards learning determine their ability and willingness to learn. If negative attitudes remain unchanged, students are unlikely to continue their education beyond what is required of them. In other words, students may experience lethargy in continuing their studies, which may result in dropouts or chronic absences. If students' attitude and academic performance have a very significant relationship, then it is surely necessary to identify the factors affecting students' affect and to take action immediately. The purpose of doing this is not just for students to like the activities but for them to have the motivation to do them better. Attitude measures are part of the broader category of personality measures, an area of assessment that is significant since information on personality characteristics helps in predicting how a specific learner is likely to respond to a certain learning situation.
I. Importance of Affective Targets

Ormrod (2004) established the clear connection between cognitive and affective targets. Fraser (2004) said that a student can perform effectively if he is emotionally involved with his activities. According to him, severe anxiety destroys learning, while greater positive motivation leads students to perform at their maximum.
Jill Staake (2019), in her article What Teachers Need to Know About Childhood Depression, noted that according to the statistics of the CDC (Centers for Disease Control and Prevention) on children's mental health, more than three percent of children aged 3-17 have been diagnosed with clinical depression, and six percent of those aged 12-17 have also been diagnosed. Staake listed several manifestations of this condition.
Here are some of them:
 Student experiences difficulty in concentrating or doing the
assignment
Depression makes it hard to focus and takes away motivation to
complete tasks. These symptoms mean that students suffer
academically.
Example:
Jake has always been a good student who strives not to miss any opportunity to recite in class. But just last semester, he did not even bother to answer any of the teacher's questions, even the simple ones. When asked by his teacher, he said he just got tired and does not want to be as involved in recitations anymore.
This example also leads us to the second manifestation.
 Seeming sad, tired, and uninterested in activities they enjoyed before.

Sadness doesn’t really mean frequent crying. It often looks like general
disinterest in things especially those that used to be fun. Childhood depression
often causes sleep disturbances, too which leads to low energy.Even though
the linkage of affect and students’ learning performance is well-established,
there remains a very little systematic assessment that is applied in classroom
instruction, according to McMillan, Workman &Myra 1998; and Stiggins and
Conklin, 1992. Despite of teachers’ knowledge in view of the importance of
students’ behavior towards learning, there is still no formal affective
assessment conducted.
McMillan (2007) stated the primary reasons: (1) school routines are organized around subject areas, and (2) assessment of affective targets is fraught with difficulties. It is tough to determine what affective targets are appropriate for all students, and defining attitudes, behaviours, and values is hard by nature.
The second reason is that conducting affective assessment among students has many potential sources of error, leading to a low level of reliability. Students may not provide honest answers at the time of assessment. Affective assessment may also fail to present dependable information, as results can be influenced by temporary moods. Students may also make up answers only to please their teachers.
The above-mentioned reasons may be enough for some not to pursue affective assessment, but it is still deemed necessary because of the following benefits:
Students will be able to:
 Attain effective learning
 Become productive in society
 Attain occupational and vocational productivity and satisfaction
 Maximize the motivation to learn both at present and in the future
 Stay in school rather than dropping out

An effective affective assessment begins with determining the appropriate


affective targets.
II. Affective Traits and Learning Targets

Affect was defined by Hohn (1995) as a variety of traits and dispositions that are different from knowledge, reasoning, and skills. The term technically means the emotions or feelings that one has toward someone or something.

These traits include:
- Attitude
- Values
- Self-concept
- Citizenship
They are often considered to be non-cognitive, but they include more than emotions or feelings. Most kinds of student affect involve both emotion and beliefs.
Here are some examples of affective targets (McMillan, 2007):

Trait - Description
Attitudes - Predisposition to respond favourably or unfavourably to specified situations, concepts, objects, institutions, or persons
Interests - Personal preference for certain kinds of activities
Values - Importance, worth, or usefulness
Opinions - Beliefs about specific occurrences and situations
Preferences - Desire to select one object over another
Motivation - Desire and willingness to be engaged in behaviour, including intensity of involvement
Academic Self-Concept - Self-perception of competence in school and learning
Self-esteem - Attitude towards oneself; degree of self-respect
Locus of control - Self-perception of whether failures are controlled by the student or by external forces
Altruism - Willingness and propensity to help others
Below is a sample of an affective assessment tool:

Check the box that corresponds to your answer. Make sure to answer the following statements with all honesty in order to obtain accurate information.

(Response options: Strongly Disagree / Disagree / Agree / Strongly Agree)

1. I feel good about my work at school.
2. I easily get along with my group-mates during group projects or activities.
3. I am proud of my ability to cope with difficulties at work.
4. When I feel uncomfortable at school, I know how to respond to the situation.
5. I can tell that others at school are glad with my cooperation.
In the succeeding parts of this chapter, some of these affective traits will be discussed in line with setting effective targets or outcomes. These traits have been studied and found to be contributory factors in student learning.

II.A Attitude Targets


Attitude is an internal state that influences what students are likely to do.
–McMillan, 2007
This internal state can, to some degree, determine a positive or negative, favourable or unfavourable reaction towards an object, situation, person, group of objects, and the like.
Attitude is conditional on subjects, teachers, other students, homework, and other objects or persons. Most often, one can identify the positive or negative attitudes that a person intends to foster, or at least keep track of, because these attitudes are related to current and future behaviour.
Some of these attitudes are listed below.
Positive Attitude (PA) toward: learning, classroom rules, teachers
Negative Attitude (NA) toward: cheating, cutting classes, dropping out

Forsyth (1999) listed the three components that make up an attitude:
1. An affective component of positive or negative feelings
2. A cognitive component describing worth or value
3. A behavioural component indicating a willingness or desire to engage in
particular actions.
Hence,
a. the affective component consists of the emotion or feeling associated with an object or person (e.g., a good or bad feeling, enjoyment, comfort);
b. the cognitive component is an evaluative belief (thinking of something as useful or worthy).

This trifocal conceptualization has an important implication for


identifying attitude targets. This will also determine the focus of the
assessment, whether it is on feelings, thoughts or behaviour.

II.B Value Targets

Values refer either to end states of existence or to modes of conduct that


are desirable or sought. – Rokeach, 1973
 End states of existence refer to the conditions and aspects of oneself and the kind of world that a person desires, such as a safe life, world peace, or freedom.
 Modes of conduct refer to what a person believes to be appropriate in his daily life.
 McMillan (2007) suggested that in setting value targets, it is a must to stick to non-controversial values and only to those related to academic learning and educational goals.

Here are some examples of non-controversial values:

Value - Sample Value Target
Honesty - Students should learn honesty in their dealings with others.
Integrity - Students should firmly observe their code of values.
Freedom - Students should believe that democratic countries must provide the maximum level of freedom to their citizens.

McMillan (2007) and Popham (2005) suggested other non-controversial values aside from those mentioned above, such as kindness, generosity, and perseverance.
Both of them believed that there should be a limit to the number of affective traits targeted and assessed.
“It is better to do an excellent job assessing a few important traits than to try
assessing many traits casually.”
II.C Motivation Targets
McMillan (2007) expressed that motivation is the extent to which students are
involved in trying to learn which includes:
a. Initiation of learning
b. Student’s commitment and persistence
c. Intensity of effort exerted

This is the determined engagement in learning in order to gain mastery of


knowledge or skills.
Motivation can be organized based on the Expectancy x Value framework (Brophy, 2004; Pintrich & Schunk, 2002). This model holds that motivation is determined by the student's expectancy of success, which rests on self-efficacy (the student's self-perception of capability), and by value (the student's perception of the importance and relevance of the activity); a short illustrative sketch follows the examples below.
McMillan (2007) suggests that motivation targets should focus on self-efficacy and value, differentiated by academic subject and type of learning (knowledge or understanding).
Below are examples of motivation targets:
1. Students will believe that they are capable of learning how to write a simple computer program using Java (self-efficacy).
2. Students will believe that it is important to learn how to write a simple computer program using Java (value).
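Here is the illustrative sketch mentioned above: a minimal Python expression of the multiplicative idea behind the Expectancy x Value framework. It is not drawn from Brophy (2004) or Pintrich and Schunk (2002); the 0-to-1 scaling and the function name are assumptions made only for this example.

```python
# Minimal sketch: the multiplicative idea behind Expectancy x Value.
# If either expectancy (self-efficacy) or value is low, predicted motivation is low.
# The 0.0-1.0 scaling is a hypothetical convention chosen for this example.

def motivation_index(expectancy, value):
    """expectancy: self-perceived capability to succeed (0.0-1.0).
    value: perceived importance/relevance of the activity (0.0-1.0).
    Returns a simple multiplicative index of motivation."""
    return expectancy * value

print(round(motivation_index(0.9, 0.1), 2))  # capable but sees no relevance -> 0.09 (low)
print(round(motivation_index(0.9, 0.8), 2))  # capable and sees relevance -> 0.72 (high)
```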

To assess motivation effectively, teachers must know students' reasons for doing things and the influences on their actions.

There are actually two kinds of motivation:

1. Intrinsic motivation - motivation that is driven by passion and joy
2. Extrinsic motivation - motivation that is driven by the desire for a reward or by fear of punishment

Most importantly, positive behavior is attributed to those students who are motivated by the need to understand and master the task.

Chapter 6: Development of Affective Assessment Tools

The cognitive and affective domains are inseparable aspects of a learner; each complements the other. Proper, ongoing assessment of the affective domain (students' attitudes, values, dispositions, and ethical perspectives) is essential in any effort to improve academic achievement and the quality of the educational experience. However, teachers focus more on the cognitive level than on the affective domain, which is left behind because fewer assessment tools exist for the affective domain.

Methods of Assessing Affective Targets

According to McMillan, there are three feasible methods of assessing affective traits and dispositions: teacher observation, student self-report, and peer rating. There are many psychological measures that assess affective traits, but because of the sophistication of such instruments, teachers rely mostly on observation and student self-report.
There are three things a teacher must consider in assessing affect:
1. Emotions and feelings change quickly, especially in children and early adolescents – the teacher must do several repeated assessments to determine the affective targets.
2. Use as many varied approaches to measuring the same affective trait as possible – it is better not to rely on a single method because the limitations inherent in that method can significantly distort the results of assessing learners' affective targets.
3. Decide what type of data or result is needed – is it individual or group data? What the purpose of the assessment is will influence the method that must be used.
Development of Assessment Tools
Assessment tools in the affective domain, those which are used to assess
attitudes, interests, motivations and self-efficacy, have been developed.
Teacher observation – in using observation, the first thing to do is to
determine in advance how specific behaviors relate to the target. It starts with a vivid definition of the trait, followed by a list of student behaviors and actions that correspond to the positive and negative dimensions of the trait. After the list has been developed, the teacher needs to decide whether to use an informal, unstructured observation or a formal, structured one.
 Unstructured observation – this is normally open-ended; no checklist or rating scale is used.
 Structured observation – more time is needed since checklists or rating forms have to be made, as these will be used to record observations.
Self-Report - It is the most common measurement tool in the affective domain.
It essentially requires an individual to provide an account of his/her attitude or
feelings toward a concept or idea or people. It is sometimes called “written
reflections”.
Peer rating - In peer assessment, a collaborative learning technique, students
evaluate their peers’ work and have their work evaluated by peers. Often used
as a learning tool, peer assessment gives students feedback on the quality of
their work, often with ideas and strategies for improvement. At the same time, evaluating peers' work can enhance the evaluators' own learning and self-confidence. Peer involvement personalizes the learning experience, potentially motivating continued learning.
Rating Scale
Rating scales help students understand the learning targets/outcomes and focus their attention on performance. They also give students specific feedback on their strengths and weaknesses with respect to the targets against which they are measured. Ratings help to show each student's growth and progress.
Types of Rating Scale:
 Numerical Rating Scales – translate the judgment of quality or degree into numbers.
 Descriptive Graphic Rating Scales – replace ambiguous single words with short behavioral descriptions of the various points along the scale.
 Likert Scale – a list of clearly favorable and unfavorable attitude statements is provided for students to respond to.
 Semantic Differential Scale – uses adjective pairs that provide anchors for feelings or beliefs that are opposite in direction and intensity.
 Sentence Completion – captures whatever comes to mind from each student.
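As an illustration of how Likert-scale responses might be turned into a simple attitude score, here is a minimal Python sketch (not from the cited sources). The four-point coding, the item numbering, and the reverse-scoring of negatively worded items are assumptions for the example, not a prescribed procedure.

```python
# Minimal sketch: scoring a 4-point Likert attitude scale.
# Assumptions: responses are coded 1 = Strongly Disagree ... 4 = Strongly Agree,
# and negatively worded items are reverse-scored before averaging.

LIKERT_POINTS = 4

def score_likert(responses, reverse_items=()):
    """responses: dict of item_id -> coded response (1..4).
    reverse_items: item_ids whose wording is negative and must be reverse-scored.
    Returns the mean item score as a simple attitude index."""
    adjusted = []
    for item, value in responses.items():
        if not 1 <= value <= LIKERT_POINTS:
            raise ValueError(f"Item {item} has an out-of-range response: {value}")
        if item in reverse_items:
            value = (LIKERT_POINTS + 1) - value  # 1<->4, 2<->3
        adjusted.append(value)
    return sum(adjusted) / len(adjusted)

# Example usage with five hypothetical items; item 5 is negatively worded.
answers = {1: 4, 2: 3, 3: 4, 4: 2, 5: 1}
print(round(score_likert(answers, reverse_items={5}), 2))  # -> 3.4
```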

Chapter 7 Nature of Portfolio Assessment


Learning Key Points
A portfolio is a systematic process and a purposeful collection of student work to document the student's learning progress, efforts, and achievement towards the attainment of learning outcomes. Portfolios can be used for many
purposes. (1) Portfolios give students the opportunity to direct their own
learning; (2) Portfolios can be used to determine students’ level of achievement;
(3) Portfolios can be used to understand how students think, reason, organize,
investigate and communicate; (4) Portfolios can be used to communicate
student efforts, progress toward accomplishing learning goals and
accomplishments; and (5) Portfolios can be used to evaluate and improve
curriculum and instruction.
There are different types of portfolios you will encounter in assessing performance in your classroom: (1) Showcase portfolio; (2) Documentation portfolio; (3) Process portfolio; (4) Product portfolio; and (5) Standards-based portfolio.
A Showcase Portfolio displays the students' best work. This type of
portfolio is based on the students’ personal criteria rather than the criteria of
their teacher. Students select their best work and reflect thoughtfully on its
quality.
A Documentation Portfolio displays changes and accomplishments related to
academic performance over time. The assembled work samples provide evidence of student growth and also offer meaningful opportunities for student self-evaluation.
A Process Portfolio shows, as its primary goal, the steps and/or the results of a completed project or task.
The Product Portfolio is similar to the process portfolio except that its focus is
on the end product rather than on the process in which the product was
developed.
The Standards-Based Portfolio collects evidence that links student achievement to particular learning standards. It focuses on specific standards that are predetermined by the teacher and discussed with the students at the start of the school year.
A portfolio has distinct elements that are expected to be included among the outputs of the students, viz: (a) Cover sheet; (b) Table of contents; (c) Work samples; (d) Dates of work samples; (e) Drafts; (f) Self-assessment; (g) Future goals; and (h) Others' comments and assessments.
CHAPTER 8
Designing and Evaluating Portfolio Assessment in the Classroom
Objectives:
a. Identify the steps in developing portfolio assessment;
b. Determine the different types of portfolio evaluation and their functions; and
c. Explain the importance of designing and evaluating portfolio assessment.

What is portfolio assessment?


Portfolio assessment is a term with many meanings, and it is a process that
can serve a variety of purposes. A portfolio is a purposeful collection of student
work that has been selected and organized to show student learning progress
(developmental portfolio) or to show samples of the student's best work (showcase portfolio). Portfolio assessment can be used in addition to other assessments or as the sole source of assessment; it has even been used as part of the National Board for Professional Teaching Standards assessment of expert teachers.
Steps for Developing Portfolio Assessment
1. Identify overall purpose and focus.
2. Identify the physical structure.
3. Determine the appropriate organization and sources of content.
4. Determine student reflection guidelines.
5. Identify and evaluate scoring criteria.

PORTFOLIO EVALUATION
Student Evaluation
Many advocates of this function believe that a successful portfolio
assessment program requires the ongoing involvement of students in the
creation and assessment process. Portfolio design should provide students with
the opportunities to become more reflective about their own work, while
demonstrating their abilities to learn and achieve in academics. Portfolios can
bridge this gap by providing a structure for involving students in developing and understanding criteria for good work and, through the use of critical thinking and self-reflection, enabling students to apply these criteria to their own work and that of other students. Through the use of portfolios,
students are regularly asked to examine how they succeeded or failed or
improved on a task, or set goals for future work. No longer is the learning just about the final product, evaluation, or grade; it becomes more focused on
students developing metacognitive skills that will enable them to reflect upon
and make adjustments in their learning in school and beyond.
How will the portfolio be used for student evaluation?
If the purpose of evaluation is to demonstrate growth, the teacher may
want to make judgments about the evidence of progress periodically and
provide feedback to students or make note of them for his or her own records.
The student could also self-assess progress shown or not shown, goals met or
not met. On a larger scale, an evaluation of the contents within the portfolio
may be conducted by the teacher, by peers, or by external evaluators for the
purpose of judging completion of SLOs, standards, or other requirements.
Regardless of the purpose, however, the criteria must be fully and carefully
defined and transparent to all. This is usually best done through the use of a
rubric. Giving students a voice in defining success criteria gives them
ownership in the process.

There are three possible levels of assessments within the portfolio evaluation
process:
• the work samples selected
• student reflections on the work samples
• the portfolio itself

Portfolio Assessment as A Tool for Teacher Evaluation


How is portfolio assessment connected to teacher evaluation?
A portfolio-based system is one plausible way to assess teacher performance through evidence of student growth. Portfolio assessment has the potential to improve the complex task of student assessment, making it possible to document the unfolding process of teaching and learning over time. A successful portfolio assessment that provides evidence of student growth for the purposes of teacher evaluation:
• Includes clearly defined student learning objectives
• Begins with a pre-assessment to gauge student learning
• Is ongoing rather than representative of a single point in time
• Allows a window into process as well as products
• Provides opportunities for students to revisit and revise, guided by
evaluation criteria
• Allows for diverse means of demonstrating competency
• Serves as a demonstration of student strengths
• Includes student reflection, decision-making and goal setting
• Provides tangible evidence of students' knowledge, skills, abilities, and
growth
• Involves student choice
• Includes student evaluation and progress monitoring
• Provides a means for managing and evaluating multiple assessments
for each student (variety- pre/post, formative, audio, video, essays, letters,
journals, self-assessments, reflections, drawings, graphs, etc.)
• Includes an audience
• Allows students the opportunity to communicate, present and discuss
their learning with teachers, parents, community and/or experts

Student-Teacher Conferences


It is important for teachers and students to work together to prioritize
the criteria that will be used as a basis for assessing and evaluating student
progress. During the instructional process, students and teachers work
together to identify significant pieces of work and the processes required for the
portfolio. As students develop their portfolio, they are able to receive feedback
from peers and teachers about their work. Because of the greater amount of
time required for portfolio projects, there is a greater opportunity for
introspection and collaborative reflection. This allows students to reflect and
report about their own thinking processes as they monitor their own
comprehension and observe their emerging understanding of subjects and
skills. The portfolio process is dynamic and is affected by the interaction
between students and teachers.
A portfolio can serve many purposes: It can highlight or celebrate the
progress a student has made; it can capture the process of learning and
growth; it can help place students academically; or, it can even simply
showcase the final products or best work of a student. Ultimately, a portfolio is
not just the pile of student work that accumulates over a quarter, semester or
year. Instead, it is a very intentional process: both teacher and student must
be clear about the story the portfolio will be telling, and both must believe that
the selection of and reflection upon their work serves one or more meaningful
purposes.

CHAPTER 9: GRADING AND REPORTING SYSTEM


Assessing the learning of students is done during and after instruction and is achieved in a number of ways. As teachers assess the learning of students, a corresponding grade is given. Summarizing the variety of information collected from different types of assessment and coming up with a standardized numerical grade or brief report is one of the challenges in grading.
The guiding premises in developing a grading and reporting system are as follows:
1. The primary goal of grading and reporting is communication.
2. Grading and reporting are integral parts of the instructional process.
3. Good reporting is based on good evidence.
4. Changes in grading and reporting are best accomplished through the
development of a comprehensive reporting system.

These premises must be taken into consideration in developing and


implementing the grading and reporting systems to have a meaningful output
and help in the attainment of the student learning objectives.
1. K to 12 Grading of Learning Outcomes
The K to 12 assessment system is learner-centered and carefully considers the learning environment. It includes 21st-century skills such as research, analytical/critical thinking, practical skills, and creativity as part of the indicators. Both cognitive and non-cognitive skills are part of the assessment.
Formative assessment, also known as "Assessment for Learning," is given importance to ensure learning. Learners are encouraged to take part in the process of self-assessment, which is known as "Assessment as Learning." Summative assessments, known as "Assessment of Learning," are also part of the K to 12 curriculum assessment.
A wide variety of traditional and authentic assessment tools and techniques is prescribed by the K to 12 curriculum and should be utilized for a valid, reliable, and realistic assessment of learning. Both kinds of assessment give greater importance to assessing understanding and skills development rather than to the mere accumulation of content.
Assessment will be standards-based to ensure that there is standardization
in teaching and learning. Assessment will be done in four levels and will be
weighted accordingly based on the order (DepEd Order No. 31, s. 2012) issued
by the Department of Education.

Four levels of Assessment:


Knowledge - the essential content of the curriculum, the facts and information that the student acquires.
Process - cognitive acts that the student does on facts and information to come
up with meanings and understandings.
Understanding - lasting big ideas, principles and generalizations that are
fundamental to the discipline which may be assessed using the facets of
understanding.
Products/Performances - real-life application of understanding as shown by
the student’s performance of authentic tasks.
The assigned weight per level of assessment:

Level of Assessment - Percentage Weight
Knowledge - 15%
Process or Skills - 25%
Understanding - 30%
Products/Performances - 30%
TOTAL - 100%

The prescribed levels of proficiency, which have equivalent numerical values, describe the student's performance at the end of the quarter. The proficiency level is computed from the sum of all the performances of the student in the various levels of assessment. Each level is described as follows:
 Beginning. The student struggles with his/her understanding; prerequisite and fundamental knowledge and skills have not been acquired or developed adequately.
 Developing. The student possesses the minimum knowledge and skills and
core understanding but needs help throughout the performance of
authentic tasks.
 Approaching Proficiency. The student has developed the fundamental
knowledge and skills and core understandings, and with little guidance
from the teacher and/or with some assistance from peers, can transfer
these understandings through authentic performance tasks.
 Proficient. The student has developed the fundamental knowledge and
skills and core understandings, and can transfer them independently
through authentic performance tasks.
 Advanced. The student exceeds the core requirements in terms of
knowledge, skills and core understandings, and can transfer them
automatically and flexibly through authentic performance tasks.
Numerical value of the proficiency level

Level of Proficiency         Equivalent Numerical Value
Beginning                    74% and below
Developing                   75-79%
Approaching Proficiency      80-84%
Proficient                   85-89%
Advanced                     90% and above

Source: DepEd Order 31, s. 2012
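A minimal Python sketch of how a computed rating might be mapped to its proficiency descriptor; the function name is hypothetical, but the cut-off points follow DepEd Order 31, s. 2012.

def proficiency_level(rating):
    """Return the proficiency descriptor for a numerical rating (percentage)."""
    if rating >= 90:
        return "Advanced"
    elif rating >= 85:
        return "Proficient"
    elif rating >= 80:
        return "Approaching Proficiency"
    elif rating >= 75:
        return "Developing"
    return "Beginning"

print(proficiency_level(86.05))   # Proficient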

2. The Effects of Grading on Students


Studies have been made over the years on how grades and teachers’ comments written on students’ papers might affect students’ achievement. Page investigated this matter and concluded that grades can have a beneficial effect on student learning when they are accompanied by specific or individualized comments from the teacher.
The relevance of Page’s study:
1. Grades can be used in positive ways to enhance students’ achievement and performance, even though they are not indispensable to teaching and learning.
2. It showed that positive effects can be gained with relatively little effort on the part of teachers. Stamps or stickers with standard comments could easily be produced for teachers to use, yet they have a significant positive effect on students’ performance.
3. Building a Grading and Reporting System
3.1 The Basis of Good Reporting is Good Evidence
Grading and reporting should provide high-quality information that interested persons can understand and use, whatever format is preferred or required of the teacher. Critical evidence of student learning is the basis of such high-quality information. Evaluation experts stress that the more important the decisions about students and the broader their implications, as with the decisions involved in grading, the more good evidence must be ready at hand. Even the most detailed and high-tech grading and reporting system is useless when good evidence is absent. Three qualities contribute to the goodness of the evidence gathered on student learning: validity, reliability and quantity.

3.2 Major Purposes of Grading and Reporting

 To communicate the achievement status of students to parents and others


 To provide information that students can use for self-evaluation
 To select, identify or group students for certain educational paths or
programs
 To provide incentives for students to learn
 To evaluate the effectiveness of instructional programs
 To provide evidence of students’ lack of effort or inappropriate responsibility
3.3 Grading and Reporting Methods
3.3.1 Letter Grades
Letter grades are the most common and best known of all grading methods, typically using a five-level grading scale with letter-grade descriptors. The true meaning of a letter grade is not always clear because what the teacher intends to communicate with a particular letter grade and what parents interpret that grade to mean are often not the same. Most schools include a legend on the reporting form in which each letter has a corresponding explanatory word or phrase to give more clarity. Descriptors must be carefully chosen to avoid additional complications and misunderstanding.
3.3.2 Percentage Grades
Percentage grades are the ultimate multi-category grading method, with values that can range from 0 to 100. They are generally more popular among high school teachers than among elementary teachers.

3.3.3 Standards-Based Grading


Four steps in developing standards-based grading (Guskey & Bailey, 2001), with a small illustrative sketch after the list:
1. Identify the major learning goals or standards that students will be
expected to achieve at each grade level or in each course of study.
2. Establish performance indicators for the learning goals.
3. Determine graduated levels of quality for assessing each goal or standard.
4. Develop reporting tools that communicate teachers’ judgments of
students’ learning progress and culminating achievement in relation to the
learning goals and standards.
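As a rough, hypothetical illustration of these steps (not taken from Guskey & Bailey), a teacher’s judgments per standard could be recorded against agreed quality levels as in this Python sketch; the standards, level names and data structure are all assumptions.

# Hypothetical graduated quality levels for each standard (step 3)
LEVELS = ["Beginning", "Developing", "Proficient", "Exemplary"]

# Hypothetical learning goals/standards for one course (step 1)
# mapped to the teacher's judgment of the student's progress (step 4)
report = {
    "Reads grade-level texts with comprehension": "Proficient",
    "Writes a coherent multi-paragraph composition": "Developing",
    "Communicates ideas clearly in oral presentations": "Exemplary",
}

for standard, judgment in report.items():
    assert judgment in LEVELS          # judgments must use the agreed levels
    print(f"{standard}: {judgment}")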
3.3.4 Pass/Fail Grading
In pass/fail grading, the number of grade categories is reduced to just two, pass or fail, making it the simplest alternative grading method available to educators. It was originally introduced in college-level courses so that students would give more importance to learning and less to the grades they attained.
Summary of the different grading methods:

Letter Grade
  Advantages: Convenient; concise; familiar.
  Disadvantages: Broad, sometimes unclear indication of performance; often includes a jumble of factors including effort and improvement.

Percentage Grade
  Advantages: Easy to calculate, record, and combine; familiar.
  Disadvantages: Broad, sometimes unclear indication of performance; false sense of difference between close scores; high scores do not necessarily signify mastery.

Standards-Based
  Advantages: Focus on high standards for all students; pre-established performance levels.
  Disadvantages: May not reflect student learning in many areas; does not include effort or improvement.

Pass/Fail
  Advantages: Simple; consistent with mastery of learning.
  Disadvantages: Little discrimination in performance; less emphasis on high performance.
4. Developing an Effective Reporting System
Three aspects of communication must be considered to determine the purposes to be included in a reporting system:
a. What information or messages do we want to communicate?
b. Who is the primary audience for that message?
c. How would we like that information or message to be used?

5. Tools for Comprehensive Reporting System


1. Report Cards
2. Notes Attached to Report Cards
3. Standardized Assessment Report
4. Phone Calls to Parents
5. Weekly/Monthly Progress Reports
6. School Open-Houses
7. Newsletter to Parents
8. Personal Letter to Parents
9. Evaluated Projects or Assignments
10. Portfolios or Exhibits of Students’ Work
11. Homework Assignments
12. Homework Hotlines
13. School Web Pages
14. Parent-Teacher Conferences
15. Student-Teacher Conferences
16. Student-Led Conference

6. Guidelines for Better Practice


A guide on how to utilize grading and reporting systems effectively:
1. Begin with a clear statement of purpose.
2. Provide accurate and understandable descriptions of learning.
3. Use grading and reporting to enhance teaching and learning.

Do’s and Don’ts of Effective Grading

DO: Use well-thought-out professional judgments.
DON’T: Depend entirely on number crunching.

DO: Try everything you can to score and grade fairly.
DON’T: Allow personal bias to affect grades.

DO: Grade according to pre-established learning targets and standards.
DON’T: Grade on the curve using the class as the norm group.

DO: Clearly inform students and parents of grading procedures at the beginning of the semester.
DON’T: Keep grading procedures secret.

DO: Base grades primarily on student performance.
DON’T: Use effort, improvement, attitudes, and motivation for borderline students.

DO: Rely most on current information.
DON’T: Penalize poorly performing students early in the semester.

DO: Mark, grade, and return assessments to students as soon as possible and with as much feedback as possible.
DON’T: Return assessments weeks later with little or no feedback.

DO: Review borderline cases carefully; when in doubt, assign the higher grade.
DON’T: Be inflexible with borderline cases.

DO: Convert scores to the same scale before combining.
DON’T: Use zero scores indiscriminately when averaging grades.

DO: Weight scores before combining.
DON’T: Include extra credit assignments that are not related to the learning targets.

DO: Use a sufficient number of assessments.
DON’T: Rely on one or two assessments for a semester grade.

DO: Be willing to change grades when warranted.
DON’T: Lower grades for cheating, misbehaving, tardiness, or absence.

(McMillan, 2007)
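Two of these guidelines, converting scores to the same scale before combining and weighting scores before combining, can be illustrated with a small Python sketch; the assessments, point totals and weights below are hypothetical.

# (raw score earned, total possible points, weight) for each assessment
assessments = [
    (42, 50, 0.30),   # quiz
    (81, 90, 0.40),   # unit test
    (18, 20, 0.30),   # performance task
]

# Convert each raw score to a common percentage scale, then apply its weight.
combined = sum((raw / total) * 100 * weight for raw, total, weight in assessments)
print(round(combined, 2))   # 88.2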

7. Planning and Implementing Parent-Teacher Conference


The parent-teacher conference is the most common way teachers communicate with parents about student progress. It may be initiated by either the teacher or the parent, depending on its purpose.

Two types of parent-teacher conferences:


a. Group Conference
These are conducted at the beginning of the year to communicate school and class policies, class content, evaluation procedures, expectations, and procedures for getting in touch with teachers.
b. Individual Conferences
These are conducted to discuss an individual student’s achievement, progress, or difficulties.
A conference is not a lecture-type gathering or meeting; it is a conversation. It is important that the conference be well planned and that the teacher come prepared with all the information needed on the areas pertaining to the student that will be discussed with the parents. Listening to parents will help the teacher understand the student better. Thus, parent-teacher conferences entail hard work to be successful. To ensure that the objective of the parent-teacher conference is met, preparations for the logistics as well as for the face-to-face encounter between teacher and parents must be carried out.

Chapter 10: Statistics and the Computer: Tools for Analyzing Assessment Data
General Objectives:
A) To define statistics,
B) To identify the different statistical tools; and
C) To distinguish tabular and graphical presentation of data.
Statistics Defined
Statistics is the study concerned with representation and interpretation of
chance outcomes that occur in a planned scientific investigation. Statistical
methods are those procedures used in the collection, presentation, analysis,
and interpretation of data.
Two Statistical Methods
√Descriptive Statistics- It comprises those methods concerned with collecting
and describing a set of data so as to yield meaningful information. The
construction of tables, charts, graphs, and other relevant computations in
various newspapers and magazines usually fall in the area categorized as
descriptive statistics.
√Inferential Statistics- It comprises those methods concerned with the
analysis of a subset of data leading to predictions or inferences about the entire
set of data. The generalizations associated with statistical inferences are always
subject to uncertainties, since we are dealing only with partial information
obtained from a subset of the data of interest.

Statistical Tools
√Measures of Central Tendency
•Mean. The mean is the arithmetic average, and it is probably the measure of central tendency that you are most familiar with. Calculating the mean is very simple: you just add up all of the values and divide by the number of observations in your dataset.
•Median. The median is the middle value. It is the value that splits the dataset
in half.
•Mode. The mode is the value that occurs the most frequently in your data set.
On a bar chart, the mode is the highest bar. If the data have multiple values
that are tied for occurring the most frequently, you have a multimodal
distribution. If no value repeats, the data do not have a mode.
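To show how these three measures can be computed with a computer, here is a minimal Python sketch using the standard library’s statistics module; the scores are hypothetical.

import statistics

scores = [78, 85, 85, 90, 70, 85, 92, 88]   # hypothetical test scores

print(statistics.mean(scores))      # arithmetic average: 84.125
print(statistics.median(scores))    # middle value of the sorted scores: 85.0
print(statistics.mode(scores))      # most frequently occurring score: 85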
√Measures of variability
A measure of variability is a summary statistic that represents the amount of
dispersion in a dataset. How spread out are the values? While a measure of
central tendency describes the typical value, measures of variability define how
far away the data points tend to fall from the center. We talk about variability
in the context of a distribution of values. A low dispersion indicates that the
data points tend to be clustered tightly around the center. High dispersion
signifies that they tend to fall further away.
•Range. The range of a dataset is the difference between the largest and
smallest values in that dataset.
•Interquartile Range. The interquartile range is the middle half of the data that
is in between the upper and lower quartiles. In other words, the interquartile
range includes the 50% of data points that fall between Q1 and Q3.
•Variance. Variance is the average squared difference of the values from the
mean. Unlike the previous measures of variability, the variance includes all
values in the calculation by comparing each value to the mean. To calculate
this statistic, you calculate a set of squared differences between the data points
and the mean, sum them, and then divide by the number of observations.
Hence, it’s the average squared difference.
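A brief Python sketch of these measures of variability, using the same hypothetical scores; statistics.pvariance divides by the number of observations, matching the description of variance above, and statistics.quantiles (Python 3.8 or later) supplies the quartiles needed for the interquartile range.

import statistics

scores = [78, 85, 85, 90, 70, 85, 92, 88]   # hypothetical test scores

data_range = max(scores) - min(scores)           # range = largest - smallest
q1, q2, q3 = statistics.quantiles(scores, n=4)   # quartiles Q1, Q2 (median), Q3
iqr = q3 - q1                                    # interquartile range
variance = statistics.pvariance(scores)          # average squared difference from the mean

print(data_range, round(iqr, 2), round(variance, 2))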
√Standard Scores
The standard score (more commonly referred to as a z-score) is a very useful
statistic because it (a) allows us to calculate the probability of a score occurring
within our normal distribution and (b) enables us to compare two scores that
are from different normal distributions. The standard score does this by
converting (in other words, standardizing) scores in a normal distribution to z-
scores in what becomes a standard normal distribution.
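A minimal sketch of standardization in Python, assuming the mean and standard deviation are computed from the same hypothetical scores; each score is converted with z = (x - mean) / standard deviation.

import statistics

scores = [78, 85, 85, 90, 70, 85, 92, 88]     # hypothetical test scores
mean = statistics.mean(scores)
std_dev = statistics.pstdev(scores)           # population standard deviation

# z = (x - mean) / standard deviation
z_scores = [(x - mean) / std_dev for x in scores]
print([round(z, 2) for z in z_scores])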
√Indicators of Coefficient of Correlation
Statistical correlation is measured by what is called the coefficient of
correlation (r). Its numerical value ranges from +1.0 to -1.0. It gives us an
indication of both the strength and direction of the relationship between
variables.
In general, r > 0 indicates a positive relationship, r < 0 indicates a negative
relationship and r = 0 indicates no relationship (or that the variables are
independent of each other and not related). Here r = +1.0 describes a perfect
positive correlation and r = -1.0 describes a perfect negative correlation.
The closer the coefficients are to +1.0 and -1.0, the greater the strength of the
relationship between the variables.
As a rule of thumb, the following guidelines on strength of relationship are
often useful (though many experts would somewhat disagree on the choice of
boundaries).

Value of r                      Strength of relationship
-1.0 to -0.5 or 0.5 to 1.0      Strong
-0.5 to -0.3 or 0.3 to 0.5      Moderate
-0.3 to -0.1 or 0.1 to 0.3      Weak
-0.1 to 0.1                     None or very weak
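The sketch below computes Pearson’s r for two hypothetical sets of values with statistics.correlation (available in Python 3.10 and later) and interprets it using the rule-of-thumb boundaries above; the variable names and data are assumptions.

import statistics

study_hours = [1, 2, 3, 4, 5, 6, 7, 8]          # hypothetical predictor
test_scores = [60, 65, 70, 72, 78, 85, 88, 90]  # hypothetical outcome

r = statistics.correlation(study_hours, test_scores)   # Pearson's r

def strength(r):
    """Interpret r using the rule-of-thumb boundaries given above."""
    a = abs(r)
    if a >= 0.5:
        return "Strong"
    if a >= 0.3:
        return "Moderate"
    if a > 0.1:
        return "Weak"
    return "None or very weak"

print(round(r, 2), strength(r))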

Computer: Aid in statistical computing and data presentation


√Tabular Presentation of Data
Frequency Distribution - a large mass of data is assessed by grouping the data into different classes and then determining the number of observations that fall in each class. Data presented in the form of a frequency distribution are called grouped data.
Parts of the Complete Frequency Distribution Table
•Class Limits- the smallest and largest values that can fall in a given class
interval. The smallest number is called the lower limit and the larger number is
the upper limit.
•Class Boundaries - the true limits of a class interval, obtained by decreasing the lower class limit by 0.5 and increasing the upper class limit by 0.5. The result for the lower limit is called the lower boundary and the result for the upper limit is called the upper boundary.
•Class Frequency- the number of observations falling in a particular class and
is denoted by the letter f.
•Class Width- the numerical difference between the upper and lower class
boundaries of a class interval.
•Class mark or Class midpoint- the midpoint between the upper and lower
class boundaries or class limits of a class interval.
•Relative Frequency- relative frequency of each class is obtained by dividing the
class frequency by the total frequency.
•Relative Frequency Percentage- relative frequency distribution of each class is
obtained by dividing the class frequency by the total frequency, and multiplied
by 100%.
•Cumulative Frequency- the total frequency of all values less than the upper
class boundary of a given class interval.
•Cumulative Frequency Percentage - the cumulative frequency of a class divided by the total frequency, multiplied by 100%.
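As an illustration of how a computer can build these columns from raw scores, here is a minimal Python sketch; the scores, the class limits and the class width of 10 are hypothetical choices.

# Hypothetical raw scores grouped into classes 41-50, 51-60, ..., 91-100
scores = [45, 52, 58, 61, 63, 67, 70, 72, 75, 75, 78, 81, 84, 88, 91, 95]
lower_limits = range(41, 100, 10)
total = len(scores)

cumulative = 0
print("Limits    Boundaries    f    rf      cf")
for lower in lower_limits:
    upper = lower + 9
    freq = sum(lower <= x <= upper for x in scores)    # class frequency
    cumulative += freq                                 # cumulative frequency
    rel_freq = freq / total                            # relative frequency
    print(f"{lower}-{upper}    {lower - 0.5}-{upper + 0.5}    {freq}    {rel_freq:.2f}    {cumulative}")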
√Graphical Presentation of Data
Graphical Representation is a way of analyzing numerical data. It exhibits the
relation between data, ideas, information and concepts in a diagram. It is easy
to understand and it is one of the most important learning strategies. It always
depends on the type of information in a particular domain. There are different
types of graphical representation. Some of them are as follows:
Most Common Graphical Presentations
•Line Graphs. Line graphs are used to display continuous data and are useful for predicting future events over time.
•Bar Graphs. A bar graph is used to display categorical data and compares the categories using solid bars to represent the quantities.
•Histograms. A histogram uses bars to represent the frequency of numerical data organized into intervals. Since all the intervals are equal and continuous, all the bars have the same width.
•Line Plot. A line plot shows the frequency of data on a given number line. An ‘x’ is placed above the number line each time that value occurs.
•Circle Graph. Also known as a pie chart, it shows the relationship of the parts to the whole. The whole circle represents 100%, and each category occupies the sector corresponding to its specific percentage, such as 15% or 56%.
•Stem and Leaf Plot. In a stem and leaf plot, the data are organized from the least value to the greatest value. The digits of the least place value form the leaves and the digits of the next place value form the stems.
•Box and Whisker Plot. This plot summarizes the data by dividing it into four parts. The box and whiskers show the range (spread) and the middle (median) of the data.
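If the third-party plotting library matplotlib is available, some of these graphs can be produced directly from raw scores, as in the hypothetical Python sketch below; the data and the pass mark of 75 are assumptions.

import matplotlib.pyplot as plt

scores = [45, 52, 58, 61, 63, 67, 70, 72, 75, 75, 78, 81, 84, 88, 91, 95]

fig, axes = plt.subplots(1, 3, figsize=(12, 4))

# Histogram: bars show the frequency of scores falling in equal intervals
axes[0].hist(scores, bins=range(40, 101, 10), edgecolor="black")
axes[0].set_title("Histogram")

# Circle graph (pie chart): parts of a whole, e.g. passed vs. failed counts
passed = sum(x >= 75 for x in scores)
axes[1].pie([passed, len(scores) - passed], labels=["Passed", "Failed"], autopct="%1.0f%%")
axes[1].set_title("Circle Graph")

# Box and whisker plot: spread (range) and middle (median) of the data
axes[2].boxplot(scores)
axes[2].set_title("Box and Whisker Plot")

plt.tight_layout()
plt.show()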
REFERENCES:
Arter, J. and Spandel, V. (1992). Using portfolios of student work in instruction and assessment. National Council on Measurement in Education (NCME) Instructional Module.
Calmorin, L. (2011). Chapter 2: Authentic Assessment and Assessment of Process and Product. Assessment of Student Learning 2, First Edition, pp. 31-32.
Cajigal, Ronan M. & Mantuano, Maria Leflor D. (2014). Assessment of Learning 2. 776 Aurora Blvd., cor. Boston St., Cubao, Quezon City, Manila, Philippines: Adriana Publishing Co., Inc.
Covington, M. (1992). Making the grade: A self-worth perspective on motivation and school reform. New York: Cambridge University Press.
Education Consumer Guide. (1993, November). Student portfolios: Classroom uses. Retrieved April 14, 2014 from http://www2.ed.gov/pubs/OR/ConsumerGuides/classuse.html
Frost, J. (2018). Measures of central tendency. Retrieved from https://statisticsbyjim.com/basics/measures-central-tendency-mean-median-mode/
Gabuyo, Yonardo A. (2012). Assessment of Learning I. 84-86 R. Florentino St., Sta. Mesa Heights, Quezon City: Rex Book Store, Inc.
McTighe, J. & Ferrara, S. (1998). Performance-Based Assessment in the Classroom. Educational Assessment, p. 7. Retrieved from http://jaymctighe.com/wordpress/wp-content/uploads/2011/04/Performance-Based-Assessment-in-the-Classroom.pdf
Nadu, T. et al. (2020). Graphical representation - types, rules, principles and merits. Retrieved from https://byjus.com/maths/graphical-representation/
Navarro, R. & Santos, R. Chapter 5: Product-Oriented, Performance-Based Assessment; Authentic Assessment of Learning Outcomes. Assessment of Learning 2, Second Edition, pp. 44-48.
Reganit, Arnulfo R., et al. (2010). Assessment of Learning 1 (Cognitive Learning). 839 EDSA, South Triangle, Quezon City: C & E Publishing, Inc.
Walpole, R. E. (2002). Introduction to statistics. Third edition. Jurong, Singapore: Macmillan Publishing Co., Inc.
Griffith, W. I. & Lim, Hye-Yeon (2012). Performance-Based Assessment: Rubrics, Web 2.0 Tools and Language Competencies. Mextesol Journal, Vol. 36, No. 1. Retrieved from http:/mextesl.net/journal/indexphp?page=&id_article=108
Wilson, L. T. (2009). Statistical correlation. Retrieved from https://explorable.com/statistical-correlation
