
Don Mariano Marcos Memorial State University


La Union, Philippines

ASSESSMENT OF LEARNING I

DR. VERONICA B. CARBONELL



ASSESSMENT OF LEARNING 1

INTRODUCTION

This is a 3-unit course that focuses on the principles, development and utilization of conventional assessment tools to improve the teaching-learning process. It emphasizes the use of testing for measuring knowledge, comprehension and other thinking skills. It allows students to go through the standard steps in test construction for quality assessment. It includes competencies contained in the Trainers’ Methodology I of TESDA.

OBJECTIVES

After studying the module, you should be able to:

1. show understanding of the basic concepts of assessment, measurement and evaluation;
2. analyze and derive information from test results to monitor/evaluate learner progress and achievement;
3. demonstrate skills in reporting students’ learning progress/achievement accurately and in using strategies for constructive feedback to improve learner performance; and
4. match learning outcomes with the appropriate assessment method.

DIRECTIONS/ MODULE ORGANIZER

There are four lessons in the module. Read each lesson carefully then
answer the exercises/activities to find out how much you have benefited from it.
Work on these exercises carefully and submit your output to your instructor.

In case you encounter difficulty, discuss this with your instructor during the face-to-face meeting. If not, contact your instructor at the College of Education office.

Good luck and happy reading!!!


Course Outline and Timeframe

Time Frame   Course Content/Subject Matter
Week 1       Assessment and Evaluation in Education
Week 2       Measurement
Week 3       Roles of Assessment
Week 4       Appropriateness and Alignment of Assessment Methods to Learning Outcomes
Week 5       Types of Assessment Methods
Week 6       Development of Assessment Tools: Knowledge and Reasoning
Week 7       Table of Specification
Week 8       Item Analysis
Week 9       MIDTERM EXAMINATION
Week 10      Practicality and Efficiency Scores
Week 11      Ethics
Week 12      Validity
Week 13      Reliability
Week 14      Norm-Referenced Grading and Criterion-Referenced Grading
Week 15      Cumulative and Averaging Systems of Grading
Week 16      Policy Guidelines on Classroom Assessment for the K to 12 Basic Education
Week 17      Steps in Grade Computation, Promotion and Retention at the End of the School Year
Week 18      FINAL EXAMINATION

References:

De Guzman, Estefania S., et al. (2015). Assessment of Learning 1. Cubao, Quezon City, Philippines: Adriana Printing Co., Inc.

Department of Education. (2015). DepEd Order No. 8, s. 2015: Policy Guidelines on Classroom Assessment for the K to 12 Basic Education Program. Retrieved April 15, 2015 from http://www.deped.gov.ph/wp-content/upload

Lucas, M. & Corpuz, B. (2014). Facilitating learning: A metacognitive process (4th ed.). Quezon City, Philippines: Lorimar Publishing, Inc.

Goff, L., et al. (2015). Learning outcomes assessment: A practitioner's handbook. Higher Education Quality Council of Ontario.

Kizlik, B. (2014). Measurement, assessment, and evaluation in education. Retrieved from https://www.cloud.edu/Assets/PDFs/assessment/Assessment%20_%20Evaluation_Measurement.pdf

http://www.nwlink.com/~donclark/hrd/bloom.html

https://www.slideshare.net/vsk84/domains-of-learning-56492381

MODULE I
INTRODUCTION

Lesson 1   Assessment and Evaluation in Education
Lesson 2   Measurement
Lesson 3   Roles of Assessment
Lesson 4   Appropriateness and Alignment of Assessment Methods to Learning Outcomes

Lesson 1
Assessment and Evaluation in Education

Assessment

Assessment comes from the Latin word assidēre which means “to sit
beside a judge”. This implies that assessment is tied up with evaluation.
Miller, Linn & Gronlund (2009) define assessment as any method utilized
to gather information about student performance. Black and Wiliam (1998, p.82)
gave a lengthier definition emphasizing the importance of feedback and
signifying its purpose. They stated that assessment pertains to all “activities
undertaken by teachers – and by their students in assessing themselves – that
provide information to be used to modify the teaching and learning activities in
which they are engaged”. This means that assessment data direct teaching in
order to meet the needs of the students. It should be pointed out, however, that assessment is not just about collecting data. These data are processed, interpreted and acted upon. They help teachers make informed decisions and judgments to improve teaching and learning. Assessment is a continuous process used to identify and address problems in teaching methods, the learning milieu, student mastery and classroom management. Hence, it is no surprise that assessment subsumes measurement and leads to evaluation.
Tests are a form of assessment. However, the term “testing” appears to have a negative connotation among educators and seems somewhat threatening to learners, so the term “assessment” is preferred. While a test gives a snapshot of a student’s learning, assessment provides a bigger and more comprehensive picture. It should now be clear that not all assessments are tests. Although many educators still focus on traditional tests, schools implementing an outcome-based teaching and learning (OBTL) approach are now putting more emphasis on performance tasks and other authentic assessments such as portfolios, observations, oral questioning and case studies. These are non-test assessment techniques.

Nature of Assessment

Assessment is a process that can be placed in two broad categories: measures of maximum performance and measures of typical performance (Miller, Linn and Gronlund, 2009). Originally, Cronbach made this classification for personnel selection tests.

Maximum performance is achieved when learners are motivated to


perform well. Assessment results from maximum performance manifest what students can do at their level best – their abilities and achievements. In this category, students are encouraged to aim for a high score. Of course, there are factors that affect a student’s optimal performance, such as noise and other distractions. Since teachers have direct control over the testing environment, they can act to reduce or eliminate such factors.

In contrast, a measure of typical performance shows what students will do or choose to do. It reflects how a learner’s ability is demonstrated on a regular basis. Hence, it is more focused on the learner’s level of motivation rather than his or her optimal ability.

Examples of measures of maximum performance are achievement and


aptitude tests. An achievement test is a measure of an individual’s competency
in a specific area. Spelling tests, arithmetic tests and periodical tests are typical
examples of classroom achievement tests. The National Achievement Test (NAT)
administered annually by the Department of Education to Grade 6 and Grade 10
students is a standardized test designed to determine the achievement level of
students in five subjects: English, Mathematics, Science, Filipino, and Araling
Panlipunan. An aptitude test measures a learner’s ability or capacity to learn. It conveys to teachers and other evaluators how a learner is likely to perform in school – his or her propensity to succeed.

As for measures of typical performance, these include attitude, interest


and personality inventories; observation techniques; and peer appraisals. Personality and interest inventories assess a learner's disposition, interests and potential career preferences. Examples of these are the Myers-Briggs Type Indicator (MBTI) and the Strong Interest Inventory (SII).
Observation is used by teachers to document what happens inside the
classroom. Data are gathered by watching, listening and recording students'
performance and behavior. Some observational techniques include scoring
rubrics; anecdotal records; portfolios; checklists and rating scales. Rubrics are
scoring guides or sets of criteria used to assess students’ skills or level of
understanding. Anecdotal records are notes containing a teacher’s observation
of how students learn and perform, as well as how they interact with peers.
These are brief notes focused on specific outcomes, observed during the lesson
or after the student has completed a performance or product. They contain
records of a learner’s progress and pattern of behavior. A portfolio is a selection
of student work, purposefully chosen to reveal the student’s learning progress
over time (growth portfolio); his/her best works (showcase portfolio) or
document the learner’s achievement for grading (evaluation portfolio). In
portfolio assessment, the most critical element is the learner’s reflection upon
the quality and growth of his/her work. The reflection sheets contain the learner’s
reasons for the selection, the strengths and weaknesses in the chosen sample of
work, the learner’s assessment of his/her own self-efficacy and personal
strategies to improve and attain the learning outcomes. Checklists and rating scales are tools to systematically record observations about what students know and what they can actually do relative to the stated outcomes. Kubiszyn &
Borich (2010) categorized them as rubrics for performance assessment. A
checklist usually uses a yes/no, present/absent or complete/incomplete format
in marking a student’s performance or execution of specific steps in a list. Nitko
and Brookhart (2007) identified four types of checklists: procedure checklist;
product checklist; behavior checklist; and self-evaluation checklist. When a TLE
teacher observes if a student follows the correct steps in the use and storage of
an electric mixer, a procedure checklist is used. A behavior checklist is used
when observing students in an oral presentation if they stand upright, maintain
eye contact, speak loudly, enunciate clearly, etc. Projects call for a product checklist, while a self-evaluation checklist goes well with a portfolio. In accomplishing a self-evaluation checklist, learners undertake a thoughtful review of their performance. Finally, a rating scale indicates the extent of the behavior, skills and strategies displayed by the learner. Unlike a checklist, it attaches quality to the elements of a process or product. English teachers can
use a rating scale to assess their learners’ listening skills. Rating scales are used
in developing a grading rubric.
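To make the contrast concrete, here is a minimal, hypothetical sketch in Python; the presentation criteria, the marks and the ratings are invented for illustration and are not from the module. It shows how a behavior checklist records only the presence or absence of each behavior, while a rating scale attaches a quality level to the same elements.

# Hypothetical illustration: a behavior checklist records only presence/absence,
# while a rating scale attaches a quality level to each criterion.

# Behavior checklist for an oral presentation: each criterion is marked yes (True) or no (False).
checklist = {"stands upright": True, "maintains eye contact": True,
             "speaks loudly": False, "enunciates clearly": True}

# Rating scale for the same presentation: each criterion is judged on a 1 (poor) to 5 (excellent) scale.
rating_scale = {"stands upright": 4, "maintains eye contact": 5,
                "speaks loudly": 2, "enunciates clearly": 4}

observed = sum(checklist.values())                                 # behaviors observed: 3 of 4
average_quality = sum(rating_scale.values()) / len(rating_scale)   # average quality: 3.75 of 5

print(f"Checklist: {observed}/{len(checklist)} behaviors observed")
print(f"Rating scale: average quality {average_quality:.2f} out of 5")

The checklist answers only "was the behavior shown?", while the rating scale also answers "how well?", which is why rating scales are the building blocks of grading rubrics.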

Purposes of Assessment

There are three interrelated purposes of assessment. Knowledge of these purposes and how they fit into the learning process can result in more effective classroom assessment.

1. Assessment for Learning (AfL)

Assessment for Learning pertains to diagnostic and formative


assessment tasks which are used to determine learning needs, monitor
academic progress of students during a unit or block of instruction and
guide instruction. Students are given on-going and immediate descriptive
feedback concerning their performance. Based on assessment results,
teachers can make adjustments when necessary in their teaching
methods and strategies to support learning. They can decide whether
there is a need to differentiate instruction or design more appropriate
learning activities to clarify and consolidate students’ knowledge,
understanding and skills. Examples of AfL are pre-tests, written
assignments, quizzes, concept maps, focused questions, among others.

2. Assessment as Learning (AaL)

Assessment as Learning employs tasks or activities that provide


students with an opportunity to monitor and further their own learning – to
think about their personal learning habits and how they can adjust their
learning strategies to achieve their goals. It involves metacognitive
processes like reflection and self-regulation to allow students to utilize


their strengths and work on their weaknesses by directing and regulating
their learning. Hence, students are responsible and accountable for their
own learning. Self and peer-assessment rubrics and portfolios are
examples of AaL. AaL is also formative and may be given at any phase of the learning process (DepEd Order 8, s. 2015).

3. Assessment of Learning (AoL)

Assessment of Learning is summative and done at the end of a unit,


task, process or period. Its purpose is to provide evidence of a student’s
level of achievement in relation to curricular outcomes. Unit tests and final projects are typical examples of summative assessment. AoL is used for grading, evaluation and reporting purposes. Evaluative feedback on the student’s proficiency level is given to the student concerned, as well as to his/her parents and other stakeholders. AoL provides the foundation for decisions on students’ placement and promotion.

Evaluation

Evaluation comes in after the data have been collected from an assessment task. According to Russell and Airasian (2012), evaluation is the process of judging the quality of a performance or course of action. As its etymology indicates (from the French word évaluer), evaluation entails finding the value of an educational task. This means that assessment data gathered by the teacher have
to be interpreted in order to make sound decisions about students and the
teaching-learning process. Evaluation is carried out both by the teacher and
his/her students to uncover how the learning process is developing.

Relationship Among Measurement, Test and Evaluation

Figure 1.1 displays a graphical relationship among the concepts of


measurement, test and evaluation (Bachman, 1990). It shows that while tests
provide quantitative measures, test results may be used for evaluation or
otherwise. Likewise, there are non-tests that yield quantitative measures which
can be used for evaluative purposes or research. It is clear in the diagram that
tests are considered measurements simply because they yield numerical scores.
They are forms of assessment because they provide information about the
learner and his/her achievement. However, tests comprise only a subset of
assessment tools. There are qualitative procedures like observations and
interviews that are used in classroom assessment. They add more dimension to
evaluation.

Area 1 is evaluation that does not involve measurement or tests. An


example is the use of qualitative descriptions to describe student performance.
Observations are non-test procedures which can be used to diagnose learning
problems among students. Area 2 refers to non-test measures for evaluation.
Ranking used by teachers in assigning grades is an example of a non-test
measure for evaluation. Area 3 is where all three converge. Teacher-made tests fall in this region. Area 4 pertains to non-evaluative test measures. Test scores used in correlational studies are examples of these. Studies have been conducted on the relationship between test scores and motivation, test scores and family income, etc. Finally, area 5 pertains to non-evaluative non-test measures
like assigning numerical codes to responses in a research study. An example
would be nominal scales used in labeling educational attainment.

Relevance of Assessment

Assessment is needed for continued improvement and accountability in all


aspects of the educational system. In order to make assessment work for everyone, students, teachers and other players in the education system should have an understanding of what assessment provides and how it is used to explain the dynamics of student learning.

Students

Through varied learner-centered and constructive assessment


tasks, students become actively engaged in the learning process. They
take responsibility for their own learning. With the guidance of the
teacher, they can learn to monitor changes in their learning patterns.
They become aware of how they think, how they learn, how they
accomplish tasks and how they feel about their own work. These redound to higher levels of motivation, self-concept and self-efficacy (Mikre, 2010) and ultimately better student achievement (Black & Wiliam, 1998).

Teachers

Assessment informs instructional practice. It gives teachers


information about a student’s knowledge and performance base. It tells them how their students are currently doing. Assessment results can
reveal which teaching methods and approaches are most effective. They
provide direction as to how teachers can help students more and what
teachers should do next.
As a component of curriculum practice, assessment procedures
support instructors’ decisions on managing instruction, assessing student
competence, placing students to levels of education programs, assigning
grades to students, guiding and counselling, selecting students for education opportunities and certifying competence (Mikre, 2010).

Parents

Education is a shared partnership. Following this tenet, parents


should be involved in the assessment process. They are a valued source of assessment information on the educational history and learning habits of their children, most especially for pre-schoolers who do not yet understand their developmental progress. In return, teachers should
communicate vital information to parents concerning their children’s
progress and learning.
Additionally, assessment data can help identify needs of children
for appropriate intervention. For instance, when results of the School
Readiness Year-end Assessment (SReYA) for kindergarten are shared with
parents, they can use the information to design home-based activities to
supplement their children’s learning.

Administrators and Program Staff

Administrators and school planners use assessment to identify


strengths and weaknesses of the program. They designate program
priorities, assess options and lay down plans for improvement. Moreover,
assessment data are used to make decisions regarding promotion or
retention of students and arrangement of faculty development programs.

Policymakers

Assessment provides information about students’ achievements


which in turn reflect the quality of education being provided by the
school. With this information, government agencies can set or modify
standards, reward or sanction schools and direct educational resources.
The Commission on Higher Education, in line with its quality assurance program, has shut down substandard academic programs of schools with low graduation rates and low passing rates in licensure examinations.
Assessment results also serve as basis for formulation of new laws.
A current example is RA 10533, otherwise known as the K to 12 Enhanced
Basic Education Act of 2013. The rationale for the implementation of this
law was the low scores obtained by Filipino pupils in standardized tests
such as the National Achievement Test (NAT) and international tests like the TIMSS (Trends in International Mathematics and Science Study).
Assessment plays a vital role in the K to 12 program. In
kindergarten, children are given a School Readiness Yearend Assessment
(SReYA) in the mother tongue to assess readiness across the different
developmental domains aligned with the National Early Learning Framework. School-based Early Grade Reading Assessment (EGRA) and Early Grade Math Assessment (EGMA) in the mother tongue are given in grade 1, and EGRA in English and Filipino in grade 3. National


achievement tests are conducted in key stages to assess readiness of
learners for subsequent grade/year levels. To help students choose specializations in senior high school, they will undergo several assessments to uncover their strengths and weaknesses. Among these is the National
Career Assessment Examination (NCAE). The National Basic Education
Competency Assessment (NBECA) completes the assessment stages. It
measures the attainment of the K to 12 standards. As we can see, there
are mechanisms in place to monitor the quality of basic education in the
country under the new K to 12 BEC. Assessment data provide a basis for
evaluative decisions and policy formulation to sustain or improve the
program and adapt to emerging needs.

THINK!

A. Interpretive Exercise
Below is a portion of the memorandum from the
Department of Education. Read the DepED guidelines
and answer the questions that follow.
DO 5, s. 2013 – Policy Guidelines on the Implementation of the School Readiness Year-End Assessment (SReYA) for Kindergarten

1. Pursuant to Republic Act (RA) No. 10157 otherwise known as


the Kindergarten Education Act, Kindergarten Education as
the first stage of compulsory and mandatory formal
education is vital for the holistic development of the Filipino
child.

2. Kindergarten Education is hereby institutionalized as part of
basic education, which was made effective starting School Year
(SY) 2011-2012 following the Standards and Competencies for
Five-Year Old Filipino Children. Along with the implementation
of this curriculum, an assessment tool is deemed necessary.
Thus, the School Readiness Year-End Assessment (SReYA) was
restructured and contextualized into 12 dominant languages
(Mother Tongue). The tool is intended to assess the
performance level of all kindergarten pupils in the
elementary school system across different developmental
domains aligned with the National Early Learning Framework.

3. The SReYA aims to:


a. assess children’s readiness across the different
developmental domains (Physical Health and Well-being,
Motor Development, Mathematics, Language and Literacy,
Sensory Perceptual, Physical and Social Environment,
Character and Values Development, and Socio-Emotional
Development);
b. utilize the results as basis for providing appropriate
interventions to address specific needs of the children;
and
c. share with parents the results as basis for helping them
come up with home-based activities for their
supplemental learning.

4. The assessment shall not be treated as an achievement test


or final examination. Hence, no child shall be refused entry
to Grade 1 based on the results of this assessment.

Questions:

1. What assessment is mentioned in the memorandum?


What is the purpose of giving such assessment?

2. How would you classify the assessment in terms of its nature? What
type of test is it?

3. Is this a graded assessment? Why or why not?

4. What is the relevance of the assessment to students, teachers, parents


and the school?

Lesson 2

MEASUREMENT

Measurement comes from the Old French word mesure which means “limit
or quantity”. Basically, it is a quantitative description of an object’s
characteristics or attribute. In science, measurement is a comparison of an
unknown quantity to a standard. There are appropriate measuring tools to
gather numerical data on variables such as height, mass, time, temperature,
among others. In the field of education, what do teachers measure and what
instruments do they use?

Teachers are particularly interested in determining how much learning a


student has acquired compared to a standard (criterion-referenced) or in reference to other learners in a group (norm-referenced). They measure particular elements of
learning like their readiness to learn, recall of facts, demonstration of specific
skills, or their ability to analyze and solve applied problems. They use tools or
instruments like tests, oral presentation, written reports, portfolios and rubrics
to obtain pertinent information. Among these, tests are the most pervasive.

A quantitative measure like a score of 30 out of 50 in a written


examination does not hold meaning unless interpreted. Measurement stops once
a numerical value is ascribed. Making a value judgment belongs to evaluation.

Testing

Testing is a formal, systematic procedure for gathering information


(Russell and Airasian, 2012). A test is a tool comprised of a set of questions
administered during a fixed period of time under comparable conditions for all
students (Miller, Linn and Gronlund, 2009). It is an instrument used to measure a construct and make decisions. Educational tests may be used to measure the learning progress of a student, which is a formative purpose, or to cover a more extended time frame comprehensively, which is summative.

Teachers score tests in order to obtain numerical descriptions of students’


performance. Examples of measures are raw scores and percentages obtained in
tests. For example, Nico’s score of 16 out of 20 items in a completion-type quiz in Araling Panlipunan (Social Studies) is a measure of his cognitive knowledge on a particular topic. This indicates that he got 80% of the items correct. This is an objective way of measuring a student’s knowledge of the subject matter.


Another method is through perception, which is less stable because of its subjectivity. For
instance, a teacher can rate a student’s knowledge about history using a scale of
1 to 5. Subjective types of measurement are useful especially in quantifying
latent variables like creativity, motivation, commitment, work satisfaction,
among others.
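As a simple illustration of where measurement ends and evaluation begins, the short Python sketch below converts a raw score into a percentage (the measure, like Nico's 80%) and then applies a judgment rule. The 75% passing mark and the descriptive labels are assumptions made for this example, not values prescribed by the module.

# Hypothetical sketch: measurement yields the number; evaluation adds the judgment.

def percentage_score(raw: int, items: int) -> float:
    """Measurement: describe performance numerically (e.g., 16 of 20 -> 80.0)."""
    return 100.0 * raw / items

def judge(percent: float, passing_mark: float = 75.0) -> str:
    """Evaluation: attach a value judgment to the measure (cut-off is assumed)."""
    return "satisfactory" if percent >= passing_mark else "needs remediation"

nico = percentage_score(16, 20)   # 80.0 -- this is the measure
print(nico, judge(nico))          # "80.0 satisfactory" -- this is the evaluation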

Tests are the most dominant form of assessment. Concerns about their effectiveness in measuring and evaluating learning are resolved if questions target and reflect learning outcomes and cover the different learning domains. Tests are traditional assessments. They may not be the best way to measure how much students have learned, but they still provide valuable information about students' learning and their progress.

Types of Test

For a long time, tests have been an integral part of education. However, it is important to note that they are not the be-all and end-all of education. Nonetheless, we acknowledge their significance as a source of information in helping teachers provide the best learning experience for their students.

There are several typologies of tests. The successful use of a test depends
on the purpose and the construct to be measured. An objective test cannot be
used to gather opinions or determine students’ position on a social issue. An oral
test cannot be used to ascertain the writing skills of students. A personality test cannot appropriately diagnose learning disabilities. An understanding of the
types of tests is beneficial to get the most out of them.

According to Mode of Response

In terms of the way responses are made, a test may be oral, written or
performance-based. In an oral test (viva voce), answers are spoken. Hence, it
can be used to measure oral communication skills. It may also be used to check
students’ understanding of concepts, theories and procedures. Unlike written
tests, it is minimally discriminatory and more inclusive, especially for learners who are dyslexic (Huxham, Campbell and Westwood, 2012). Plagiarism is less likely. But it consumes time and may be stressful for some students (Huxham, Campbell and Westwood, 2012). It favors extroverted and eloquent students. It is not
appropriate for abstract reasoning tasks. Written tests, on the other hand, are
activities wherein students either select or provide a response to a prompt.
Among the forms of written assessments are alternate response (true/false),
multiple choice, matching, short- answer, essays, completion and identification.
A written test has strong points. It can be administered to a large group at one
time. It can measure students’ written communication skills. It can also be used
to assess lower and higher levels of cognition provided that questions are
phrased appropriately. It enables assessment of a wide range of topics. Despite
some criticisms, written tests are generally fair and efficient. Performance tests are activities that require students to demonstrate their skills or ability to perform specific actions. More aptly called performance assessments, they include problem-based learning, inquiry tasks, demonstration tasks, exhibits, presentation tasks and capstone performances. These tasks are designed to be authentic, meaningful, in-depth and multidimensional. However, cost and efficiency are some of the drawbacks.

According to Ease of Quantification of Response

As to the way of scoring, a test may be classified as objective or subjective. An objective test can be corrected and quantified quite easily. Scores can be
readily compared. It includes true-false, multiple choice, completion and
matching items. The test items have a single or specific convergent response. In
contrast, a subjective test elicits varied responses. A test question of this type
may have more than one answer. Subjective tests include restricted and
extended-response essays. Because students have the liberty to write their
answer to a test question, it is not easy to check. Answers to this type of test
are usually divergent. Scores are likely to be influenced by personal opinion or
judgment by the person doing the scoring.

According to Mode of Administration

An individual test is given to one person at a time. Individual cognitive


and achievement tests are administered to gather extensive information about
each student’s cognitive functioning and his/her ability to process and perform
specific tasks. They can help identify intellectually gifted students. Likewise,
they can also pinpoint those with learning disabilities (LDs). LDs are neurological
disorders that impede a learner’s ability to store, process or produce
information properly. Testing can aid in identifying learners who are struggling
in reading (dyslexia), math (dyscalculia), writing (dysgraphia), motor skills
(dyspraxia), language (dysphasia), or visual or auditory processing. Aside from
assessment data obtained from a wide array of given tasks, the teacher can also
observe individual students closely during the test to gather additional
information.

A group test is administered to a class of students or group of examinees


simultaneously. It was developed to address the practical need of testing. The
test is usually objective and responses are more or less restricted. It does not
lend itself for in-depth observations of individual students. There is less
opportunity to establish rapport or help students maintain interest in the test.
Additionally, students are assessed on all items of the test. Students may
become bored with easy items and anxious over difficult ones. Information
obtained from group tests is not as comprehensive as that from individual tests.

According to Test Constructor

Classified based on the constructor, a test may either be standardized or


non-standardized. Miller, Linn and Gronlund (2009) enumerated four properties
that differentiate standardized test from classroom or informal test: learning
outcomes and content measured; quality of test items; reliability; and
administration and scoring interpretation.

Standardized tests are prepared by specialists who are versed in the principles of assessment. They are administered to a large group of students or examinees under similar conditions. Scoring procedures and interpretations are
consistent. There are available manuals and guides to aid in the administration
and interpretation of test results. Because of high validity and reliability, they
can be used for a long period of time provided they are used for whatever they
were intended for. Results are generally consistent. Commonly, standardized
test consist of multiple choice items used to distinguish between students.
Results of standardized tests serve as an indicator of instructional effectiveness
and a reflection of the school’s performance.

Non-standardized tests are prepared by teachers who may not be adept
at the principles of test construction. At times, teacher-made tests are
constructed haphazardly due to limited time and lack of opportunity to pre-test
the items or pilot test. Compared to standardized test, the quality of items is
uncertain, or if known, they are generally lower. Non-standardized tests are
usually administered to one or a few classes to measure subject or course
achievement. One or several test formats are used; hence items may not be
entirely objective. Test items are not thoroughly examined for validity. Scores
are not subjected to any statistical procedure to determine reliability. Unlike a
standardized test, it is not intended to be used repeatedly for a long time. There
are no established standards for scoring and interpreting results.

According to Mode of Interpreting Results

Tests that yield norm-referenced interpretations are evaluative instruments that measure a student’s performance in relation to the performance of a group on the same test. Comparisons are made and a student’s
relative position is determined. For instance, a student may rank third in the
class of fifty. Examples of norm-referenced tests are teacher-made survey tests and interest inventories. Standardized achievement tests also fall under this type.
Tests that allow criterion-referenced interpretations describe each student’s performance against an agreed-upon or pre-established criterion or
level of performance. The criterion is not actually a cutoff score but rather the
domain of subject matter – the range of well-defined instructional objectives or
outcomes. Nonetheless, in a mastery test, the cut score is used to determine
whether or not a student has achieved mastery of a given unit of instruction.
Surprisingly, the methods for setting a cut score for a test vary, therefore
making it somewhat subjective.

You will find that some educators classify tests as norm- or criterion-referenced tests. However, Popham (2011) stressed that there are no such things. Instead, he clarified that these are interpretations of student performance.
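A minimal Python sketch, using made-up scores and an assumed cut score of 75, of how the same set of test results can be read in the two ways described above: a norm-referenced interpretation locates a student relative to the group, while a criterion-referenced interpretation checks each student against the pre-established criterion.

# Hypothetical scores for a 100-point test; names and cut score are invented for illustration.
scores = {"Ana": 92, "Ben": 78, "Carla": 85, "Dino": 70, "Ella": 88}

# Norm-referenced interpretation: position relative to the group.
ranked = sorted(scores, key=scores.get, reverse=True)
print("Rank order:", ranked)
print("Carla ranks", ranked.index("Carla") + 1, "out of", len(ranked))   # 3rd out of 5

# Criterion-referenced interpretation: performance against an assumed cut score.
CUT_SCORE = 75
for student, score in scores.items():
    status = "mastered" if score >= CUT_SCORE else "not yet mastered"
    print(f"{student}: {score} -> {status}")

Note that Carla's rank would change if the group changed, but her mastery status depends only on the cut score, which is the point of the distinction.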

According to Nature of Answer

The following are popular types of tests classified according to the


construct they are measuring: personality, intelligence, aptitude, achievement,
social relationships and occupational competence.

Personality tests were first developed in the 1920s, initially intended to


aid in the selection of personnel in the armed forces. Since then, quite a number
of personality tests were developed. A personality test has no right or wrong
answer, but it measures one’s personality and behavioral style. It is used in
recruitment as it aids employers in determining how a potential employee will
respond to various work-related activities. Apart from evaluation and staffing, it
is also used in career guidance, in individual and relationship counseling and in
diagnosing personality disorders. In schools, personality tests determine
personality strengths and weaknesses. Personality development activities can
then be arranged for students.

Achievement tests measure students’ learning as a result of instruction and training experiences. When used summatively, they serve as a basis for promotion to the next grade. In contrast, aptitude tests determine a student’s potential to learn and do new tasks. The College Scholastic Aptitude Test by the Center for Educational Measurement, Inc. measures student ability and predicts
success in college. A career aptitude test aids in choosing the best line of work
for an individual based on his/her skills and interests. At this point, we may ask, “Is there a relationship between aptitude and achievement?” If an aptitude test is administered prior to instruction and results of an achievement test are obtained after instruction, then it can be investigated whether aptitude predicts achievement.

Intelligence tests measure learners’ innate intelligence or mental ability.


The first modern intelligence test was published in 1905 by Alfred Binet and
Theodore Simon. Intelligence tests have continually evolved because of efforts
to accurately measure intelligence. They have been used extensively as predictors of academic achievement. Intelligence tests contain items on verbal

predictor of academic achievement. Intelligence tests contain items on verbal
comprehension, quantitative and abstract reasoning, among others, in
accordance with some recognized theory of intelligence. For instance, Sternberg constructed a set of multiple-choice questions grounded in his Triarchic Theory of
Human Intelligence. The intelligence test taps into the three independent
aspects of intelligence: analytic, practical and creative.

A sociometric test measures interpersonal relationships in a social group.


Introduced in the 1930s, the test allows learners to express their preferences in
terms of likes and dislikes for other members of the group. It includes peer
nomination, peer rating and sociometric rankings of social acceptance. For
instance, a child may be asked to nominate three students whom they would like to
play with, or rate them accordingly.

A trade or vocational test assesses an individual’s knowledge, skills and


competence in a particular occupation. A trade test may consist of a theory test
and a practical test. Upon successful completion of the test, the individual is given a certification of qualification. Trade tests can likewise be used to determine the effectiveness of training programs (e.g., the National Certificates given by TESDA).

Functions of Testing

One needs to be aware of the purposes of testing in order to select the


most appropriate type of test. Tests can be classified into four interrelated
categories: instructional, administrative, program evaluation and research, and
guidance (Hopkins, 1998). Each is discussed briefly below.

A. Instructional Functions

1. Tests facilitate the clarification of meaningful learning objectives.
When constructing tests, teachers are reminded to go back to the
learning objectives. If they are committed to these, teaching-learning
and assessment tasks provide mutual support.
2. Tests provide a means of feedback to the instructor and the student.
They can be used for self-diagnosis. Students can assess their own
learning and performance. Test results guide teachers in adjusting
their pedagogical practices to match students’ learning styles. The
impact of a test on teaching and learning is called washback. The
effect may be beneficial or harmful as teachers and learners tend to
tailor instructional activities and learning processes to the demands of
the test.
3. Tests can motivate learning. In a meta-analysis – a study that
examined similarities and differences across several studies on
classroom testing in schools in the United States – it was shown that
frequent testing increases academic preparation (study time) and
academic achievement (Bangert-Drowns, Kulik & Kulik, 1991; Basol and
Johanson, 2009). Frequent testing also produces a more positive
attitude among students, who expressed a more favorable opinion
about instruction (Bangert-Drowns, Kulik & Kulik, 1991).
4. Tests can facilitate learning. The effects of testing have been studied
by researchers, indicating improved performance when learners are
given the opportunity to practice retrieval before taking the final
test. For instance, in a study conducted by Lipko-Speed, Dunlosky and
Rawson (2014) among 5th graders in Science, students who were
pre-tested with feedback performed better in subsequent tests. Be
that as it may, while tests boost learning of concepts, the use of
other assessment strategies is needed to achieve mastery. A
test-restudy practice method called successive relearning, conducted
at appropriate intervals, can bring about long-term retention
(Lipko-Speed, Dunlosky and Rawson, 2014).

5. Tests are a useful means of overlearning. Overlearning means continued
study, review, interaction or practice of the same material even after
concepts and skills have been mastered. Preparation for a scheduled
test induces overlearning. While overlearning helps in retention, its
effect dissipates over time (Rohrer, et al., 2004).

B. Administrative Functions

1. Tests provide a mechanism of quality control. Through tests, a school
can determine the strengths and weaknesses of its curricula.
Administrators can then devise ways to improve outcomes and
assessment, implement them and check for improvements.
2. Tests facilitate better classification and decision-making. Test results allow
administrators to group students according to their ability level.
Through a classification system, schools can assign or transfer students
to a gifted or remedial program.
3. Tests can increase the quality of selection decisions. In using tests for
classification purposes, schools can then select students for specific
programs. This is true when admitting students for senior high school
or college. Through testing, a teacher can select students who would
benefit, for instance, in tutorial classes or remedial programs.
4. Tests can be a useful means of accreditation, mastery or certification.
Tests provide a means of certifying knowledge and skills. In the K to 12
curriculum, a senior high school student who completed a technical-
vocational-livelihood track in grade 12 may obtain a National
Certificate Level II provided he/she passes the competency-based
assessment of the Technical Education and Skills Development
Authority (TESDA). This certification will enable students to land jobs


after high school. Another example is the Licensure Examination for
Teachers (LET) conducted by the Professional Regulation Commission
(mandated by Republic Act No. 7836). LET passers are issued licenses
making them eligible to practice their profession.

C. Research and Evaluation

Tests are useful for program evaluation and research. Tests are
utilized in studies that determine the effectiveness of new pedagogical
techniques. Research on teaching and learning innovations, like the
effectiveness of technology-enhanced learning (tablet computing and
the flipped classroom), is carried out using tests and other assessment
techniques to collect data. Evaluators also utilize assessment data to
determine the impact and success of their programs.

D. Guidance Functions

Tests can be of value in diagnosing an individual’s special aptitudes and


abilities. The aim of guidance is to enable each individual to understand
his/her abilities and interests and develop them so that he/she can take
advantage of educational, vocational and personal opportunities. In
school, the guidance department evaluates learners’ scholastic aptitude,
achievement, interests and personality. By giving intelligence tests,
aptitude tests and personality inventories, along with interviews and
counselling sessions, a guidance counselor can help students develop their
study and time management skills, choose which program of study to
take, and select a career path to follow.

THINK!

As a college student, you underwent several


assessments in basic education. Recall from your own personal
experience an assessment that you think was truly meaningful
to you. Explain why it was so. Explain the nature and purpose of
that particular assessment.

Lesson 3

Roles of Assessment

There are four roles of assessment used in the instructional process.


Miller, Linn and Gronlund (2009) identified these as functional roles of assessment in classroom instruction. Analogously, Nitko (1994) enumerated these as instructional decisions supported by tests.

1. Placement Assessment
Placement assessment is basically used to determine a
learner’s entry performance. Done at the beginning of instruction,
teachers assess through a readiness pre-test whether students possess
prerequisite skills needed prior to instruction. If prerequisite skills are
lacking, then the teacher can provide learning experiences to help
them develop those skills. If students are ready, then the teacher can
proceed with instruction as planned. An example of a readiness pre-test
is an arithmetic test given to students who are about to take elementary
algebra.
Placement assessment is also used to determine if students have
already acquired the intended outcomes. A placement pre-test contains items that measure the knowledge and skills of students with reference to the learning targets. If students do not fare well, the teacher can proceed
with the planned instruction. However, if students have already achieved
the learning outcomes, then the teacher may advance the students to
higher cognitive level. This suggests that the teacher designs more
complex problems or activities for the students.

2. Formative Assessment

There is now a shift from a testing culture to an assessment culture characterized by the integration of assessment and instruction (Dochy, 2001). This is where formative assessment comes in.
Formative assessment mediates the teaching and learning processes. It is
learner-centered and teacher-directed. It occurs during instruction. It is
context-specific since the context of instruction determines the appropriate classroom assessment technique. Consider the following examples. The 'muddiest point' is a technique that can be used to address gaps in learning. The technique consists of asking students at the end of a lesson to scribble down their answer to the question, “What is the muddiest point in the lecture, discussion, assignment or activity?” Another
is a ‘background knowledge probe’ which is a short and simple
questionnaire given at the start of a new lesson to uncover students' preconceptions. From these we can see that formative assessment is used
as feedback to enhance teaching and improve the process of learning. It is
an on-going process, hence learners regularly receive feedback. And how does this work? For instance, a teacher provides comments and suggestions on an essay on climate change submitted by one of his/her students. The student revises his/her work before it is finally assessed.
Other types of formative assessments include question and answer during
discussion, assignments, short quizzes and teacher observations. Results
of formative assessments are recorded for the purpose of monitoring
students’ learning progress. However, these are not used as bases for
students’ marks.

Positive Effects of Formative Assessment

Black and Wiliam (1998) cited a body of evidence showing that formative assessment can raise the standards of achievement. Utaberta & Hassanpour (2012) enumerated the positive effects of formative assessment. They are as follows:

• Reactivates or consolidates prerequisite skills or knowledge prior to introducing material;
• Focuses attention on important aspects of the subject;
• Encourages active learning strategies;
• Gives students opportunities to practice skills and consolidate learning;
• Provides knowledge of outcomes and corrective feedback;
• Helps students monitor their own progress and develop self-evaluation skills;
• Guides the choice of further learning activities to increase performance; and
• Helps students to feel a sense of accomplishment.

Attributes of an Effective Formative Assessment

1. Learning Progressions. Learning progressions should clearly communicate the sub-goals of the ultimate learning goal.
2. Learning Goals and Criteria for Success. Learning goals and criteria for
success should be clearly identified and communicated to students.
3. Descriptive Feedback. Students should be provided with evidence-
based feedback that is linked to the intended instructional outcomes
and criteria for success. Hattie & Timperley (2007) constructed a
model of feedback to enhance learning. Refer to Figure 2.2.
Discrepancies (or gaps) in the students’ current actual performance
and desired goal attainment can be reduced by both teacher and
students through effective feedback that answers three vital
questions: Where am I going? How am I going? Where to next? To
discourage students from rote and superficial learning and incite them
to do more, assessment feedback must address all three questions
previously mentioned. Effective feedback can operate on any of four levels: task, process, self-regulation and self.

4. Self- and Peer-Assessment. Both self- and peer-assessment are important for providing students an opportunity to think metacognitively about their learning.

5. Collaboration. A classroom culture in which teachers and students are partners in learning should be established.

Figure 2.2. A model of feedback to enhance learning (Hattie & Timperley, 2007)

Purpose: To reduce discrepancies between current understanding/performance and a desired goal.

The discrepancy can be reduced by:

Students
• Increased effort and employment of more effective strategies, or
• Abandoning, blurring, or lowering the goals

Teachers
• Providing appropriately challenging and specific goals
• Assisting students to reach them through effective learning strategies and feedback

Effective feedback answers three questions:
Where am I going? (the goal) – Feed Up
How am I going? – Feed Back
Where to next? – Feed Forward
Each feedback question works at four levels:

Task Level – How well tasks are understood/performed
Process Level – The main processes needed to understand/perform tasks
Self-regulation Level – Self-monitoring, directing, and regulating of actions
Self Level – Personal evaluations and affect (usually positive) about the learner

3. Diagnostic Assessment

Diagnostic assessment is intended to identify learning


difficulties during instruction. A diagnostic test for instance can
detect commonly held misconceptions in a subject. Contrary to
what others believe, diagnostic tests are not merely given at the
start of instruction. It is used to detect causes of persistent
learning difficulties despite the pedagogical remedies applied by
the teacher. This is not used as part of a student’s mark of
achievement.

4. Summative Assessment

Summative assessment is done at the end of instruction to


determine the extent to which the students have attained the
learning outcomes. It is used for assigning and reporting grades or
certifying mastery of concepts and skills. An example of a
summative assessment is the written examination at the end of the
school year to determine who passes and who fails.

There is another form of assessment called interim


assessment. Interim assessments have the same purpose as
formative assessments, but these are given periodically throughout
the school year. They prepare students for future assessments. For
example, to predict which students are on course to succeed in
the national achievement test or a high school/college admission test,
the school gives interim tests to students every eight weeks.
Interim assessments fall between formative and summative
assessments. They allow comparison of assessment results to aid in
decision-making at the micro (classroom) and meso (school and


district) levels. As such, interim assessments are instructional, predictive and evaluative. They are differentiated from
instructionally embedded formative assessments that are given
frequently or summative assessments that have greater scope and
longer cycle duration.

THINK!

Identify the assessment function illustrated by the


following:
1. Entrance examination: Placement
2. Daily Quiz:_________________
3. Unit Test:__________________
4. Periodical Test:_____________
5. District wide Test:___________
6. National Achievement Test:___________________
7. Test of English as a Foreign Language(TOEFL)
____________________
8. Licensure Examination for Teachers:_____________
9. Attitudinal Test:___________________
10. IQ Test:__________________________

Lesson 4
Appropriateness and Alignment of Assessment Methods to Learning Outcomes

Overview

What principles govern assessment of learning? Chappuis, Chappuis & Stiggins (2009) delineated five standards of quality assessment to inform sound instructional decisions: (1) clear purpose; (2) clear learning targets; (3) sound assessment design; (4) effective communication of results; and (5) student involvement in the assessment process.

Classroom assessment begins with the question, “Why are you assessing?” The answer to this question gives the purpose of assessment, which was discussed in Lesson 1. The next question is, “What do you want to assess?” This pertains to the student learning outcomes – what the teachers would like their students to know and be able to do at the end of a section or unit. Once targets or outcomes are defined, the question becomes, “How are you going to assess?” This refers to the assessment tools that can measure the learning outcomes. Assessment methods and tools should be parallel to the learning targets or outcomes to provide learners with opportunities that are rich in breadth and depth and promote deep understanding. In truth, not all assessment methods are applicable to every type of learning outcome, and teachers have to be skillful in the selection of assessment methods and designs. Knowledge of the different levels of assessment is paramount. For example, if a learning outcome in an English subject states that students should be able to communicate their ideas verbally, then assessing their skill through a written essay will not allow learners to demonstrate that stated outcome.

Lesson 4 deals with the second and third assessment standards identified by Chappuis, Chappuis & Stiggins (2009). It covers learning outcomes and assessment
methods, and how they are aligned.
Intended Learning Outcome (ILO)

Identifying Learning Outcome

A learning outcome pertains to a particular level of knowledge, skills and values


that a student has acquired at the end of a unit or period of study as a result of
his/her engagement in a set of appropriate and meaningful learning experiences.
An organized set of learning outcomes helps teachers plan and deliver appropriate instruction and design valid assessment tasks and strategies. Anderson, et al. (2005) listed five steps in student outcome assessment: (1) create learning outcome statements; (2) design teaching/assessment activities to achieve these outcome statements; (3) implement teaching/assessment activities; (4) analyze data on individual and aggregate levels; and (5) reassess the process. This lesson centers on steps 1 and 2. Hence, to comprehend the principle of appropriateness of assessment methods to learning outcomes, we need to revisit the taxonomy of learning domains and look at the different methods.

TAXONOMY OF LEARNING DOMAINS

Learning outcomes are statements of performance expectations in three areas: cognitive, affective and psychomotor. These are the three broad domains of learning characterized by changes in a learner's behavior. Within each domain are levels of expertise that drive assessment. These levels are listed in order of increasing complexity. Higher levels require more sophisticated methods of assessment, but they facilitate retention and transfer of learning (Anderson, et al., 2005). Importantly, all learning outcomes must be capable of being assessed and measured. This may be done using direct and indirect assessment techniques.

A. Cognitive (Knowledge-based)

Table 3.1 below shows the levels of cognitive learning originally devised by Bloom, Engelhart, Furst, Hill & Krathwohl in 1956 and revised by Anderson, Krathwohl et al. in 2001 to produce a two-dimensional framework of Knowledge and Cognitive Processes and to account for twenty-first century needs by including metacognition. It is designed to help teachers understand and implement a standards-based curriculum. The cognitive domain involves the development of knowledge and intellectual skills. It answers the question, "What do I want learners to know?" The first three levels are lower-order, while the next three levels promote higher-order thinking.
Krathwohl (2002) stressed that the revised Bloom's taxonomy is used not only to classify the instructional and learning activities employed to achieve the objectives, but also to classify the assessments used to determine how well learners have attained and mastered those objectives.

Marzano & Kendall (2007) came up with their own taxonomy composed of three systems (the Self system, the Metacognitive system, and the Cognitive system) and the Knowledge domain. Their Cognitive system has four levels: Retrieval, Comprehension, Analysis, and Knowledge Utilization. The Retrieval level corresponds to the Remembering level in the revised Bloom's Taxonomy. Comprehension entails synthesis and presentation: relevant information is taken and then organized into categories. Analysis involves the processes of matching, classifying, error analysis, generalizing and specifying. The last level, Knowledge Utilization, comprises decision-making, problem solving, experimental inquiry and investigation – processes essential in problem-based and project-based learning.

Table 3.1 Cognitive Levels and Processes (Anderson, et al., 2001)

Remembering (Retrieving relevant knowledge from long-term memory)
  Processes: Recognizing, Recalling
  Verbs: define, describe, identify, label, list, match, name, outline, reproduce, select, state
  Sample competency: Define the four levels of mental processes in Marzano & Kendall's cognitive system.

Understanding (Constructing meaning from instructional messages, including oral, written, and graphic communication)
  Processes: Interpreting, Exemplifying, Classifying, Summarizing, Inferring, Comparing, Explaining
  Verbs: convert, describe, distinguish, estimate, extend, generalize, give examples, paraphrase, rewrite, summarize
  Sample competency: Explain the purpose of Marzano & Kendall's New Taxonomy of Educational Objectives.

Applying (Carrying out or using a procedure in a given situation)
  Processes: Executing, Implementing
  Verbs: apply, change, classify (examples of a concept), compute, demonstrate, discover, modify, operate, predict, prepare, relate, show, solve, use
  Sample competency: Write a learning objective for each level of Marzano & Kendall's cognitive system.

Analyzing (Breaking material into its constituent parts and determining how the parts relate to one another and to an overall structure or purpose)
  Processes: Differentiating, Organizing, Attributing
  Verbs: analyze, arrange, associate, compare, contrast, infer, organize, solve, support (a thesis)
  Sample competency: Compare and contrast the thinking levels in the revised Bloom's Taxonomy and Marzano & Kendall's cognitive system.

Evaluating (Making judgments based on criteria and standards)
  Processes: Checking, Critiquing
  Verbs: appraise, compare, conclude, contrast, criticize, evaluate, judge, justify, support (a judgment), verify
  Sample competency: Judge the effectiveness of writing learning outcomes using Marzano & Kendall's cognitive system.

Creating (Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure)
  Processes: Generating, Planning, Producing
  Verbs: classify (infer the classification system), construct, create, extend, formulate, generate, synthesize
  Sample competency: Design a classification scheme for writing learning outcomes using the levels of the cognitive system developed by Marzano & Kendall.

Whatever taxonomy you choose, be it the revised Bloom's or the Marzano & Kendall classification, it should help you categorize learning outcomes, which is crucial in designing and developing assessments. As a case in point, consider this learning outcome in science: "Design an experiment to determine the factors that affect the strength of an electromagnet." It is aimed at the highest level of cognition in the revised Bloom's Taxonomy. In Marzano and Kendall's Taxonomy, it is directed at 'Knowledge Utilization'. Now, consider this multiple-choice item:

Which of the following factors does not affect the strength of an electromagnet?
A. Diameter of the coil
B. Direction of windings
C. Nature of the core material
D. Number of turns in coil

The item does not allow learners to attain the level of performance expressed in the learning outcome. The performance verb 'design' calls for a constructed-response assessment (a performance or product), not a selected-response test. You will learn more about these methods as you go along in this chapter.
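To make the alignment idea concrete, here is a minimal, purely illustrative Python sketch (not part of the module, and not a prescribed tool): it guesses the revised Bloom level of an outcome statement from its leading action verb and then suggests a broad assessment format. The abbreviated verb lists are drawn from Table 3.1; the function name, the verb-to-level lookup, and the level-to-format mapping are assumptions made only for illustration.

```python
# Hypothetical helper (illustration only): map the leading action verb of a
# learning outcome to a revised Bloom level, then suggest a broad assessment
# format. Verb lists are abbreviated from Table 3.1; the level-to-format
# mapping below is an assumption, not a rule from this module.

BLOOM_VERBS = {
    "Remembering":   {"define", "identify", "label", "list", "match", "name", "state"},
    "Understanding": {"describe", "explain", "summarize", "paraphrase", "estimate"},
    "Applying":      {"apply", "compute", "demonstrate", "solve", "use", "operate"},
    "Analyzing":     {"analyze", "compare", "contrast", "differentiate", "organize"},
    "Evaluating":    {"appraise", "criticize", "evaluate", "judge", "justify"},
    "Creating":      {"construct", "create", "design", "formulate", "generate"},
}

# Higher-order levels generally call for constructed-response or performance
# assessment; lower-order levels can often be checked with selected-response items.
HIGHER_ORDER = {"Analyzing", "Evaluating", "Creating"}


def classify_outcome(outcome: str):
    """Return (bloom_level, suggested_assessment_format) for an outcome statement."""
    first_verb = outcome.strip().lower().split()[0]
    for level, verbs in BLOOM_VERBS.items():
        if first_verb in verbs:
            suggestion = ("constructed-response / performance task"
                          if level in HIGHER_ORDER
                          else "selected-response test")
            return level, suggestion
    return "Unclassified", "review the outcome statement manually"


if __name__ == "__main__":
    outcome = ("Design an experiment to determine the factors that affect "
               "the strength of an electromagnet")
    print(classify_outcome(outcome))
    # Expected: ('Creating', 'constructed-response / performance task')
```

A verb alone cannot settle the level in every case (many verbs appear at more than one level in Table 3.1), so a lookup like this is only a starting point for the professional judgment the lesson describes.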

B. Psychomotor (Skill-based)
The psychomotor domain focuses on physical and mechanical skills involving coordination of the brain and muscular activity. It answers the question, "What actions do I want learners to be able to perform?"
Dave (1970) identified five levels of behavior in the psychomotor domain: Imitation, Manipulation, Precision, Articulation, and Naturalization. In his taxonomy, Simpson (1972) laid down seven progressive levels: Perception, Set, Guided Response, Mechanism, Complex Overt Response, Adaptation and Origination. Meanwhile, Harrow (1972) developed her own taxonomy with six categories organized according to degree of coordination: Reflex movements, Basic fundamental movements, Perceptual abilities, Physical abilities, Skilled movements, and Non-discursive communication. Table 3.2 displays the levels of the psychomotor domain, combining the taxonomies built by Simpson, Dave and Harrow.

Table 3.2 Taxonomy of the Psychomotor Domain

Observing (Active mental attention to a physical event)
  Verbs: describe, detect, distinguish, differentiate, relate, select
  Sample competency: Relate music to a particular dance step.

Imitating (Attempted copying of a physical behavior)
  Verbs: begin, display, explain, move, proceed, react, show, state, volunteer
  Sample competency: Demonstrate a simple dance step.

Practicing (Trying a specific physical activity over and over)
  Verbs: bend, calibrate, construct, differentiate, dismantle, fasten, fix, grasp, grind, handle, measure, mix, organize, operate, manipulate, mend
  Sample competency: Display several dance steps in sequence.

Adapting (Fine-tuning; making minor adjustments in the physical activity in order to perfect it)
  Verbs: arrange, combine, compose, construct, create, design, originate, rearrange, reorganize
  Sample competency: Perform a dance showing new combinations of steps.

C. Affective (Values, Attitudes and Interests)

The affective domain emphasizes emotional knowledge. It tackles the question, "What do I want learners to think or care about?"
Table 3.3 presents the classification scheme for the affective domain developed by Krathwohl, Bloom and Masia in 1964. The affective domain includes factors such as student motivation, attitudes, appreciations and values.

Table 3.3 Taxonomy of the Affective Domain (Krathwohl, et al., 1964)

Receiving (Being aware of or attending to something in the environment)
  Verbs: asks, chooses, describes, follows, gives, holds, identifies, locates, names, points to, selects, sits erect, replies, uses
  Sample competency: Listen attentively to a volleyball introduction.

Responding (Showing some new behavior as a result of experience)
  Verbs: answer, assist, comply, conform, discuss, greet, help, label, perform, practice, present, read, recite, report, select, tell, write
  Sample competency: Assist voluntarily in setting up volleyball nets.

Valuing (Showing some definite involvement or commitment)
  Verbs: complete, describe, differentiate, explain, follow, form, initiate, invite, join, justify, propose, read, report, select, share, study, work
  Sample competency: Attend optional volleyball matches.

Organizing (Integrating a new value into one's general set of values, giving it some ranking among one's general priorities)
  Verbs: adhere, alter, arrange, combine, compare, complete, defend, explain, generalize, identify, integrate, modify, order, organize, prepare, relate, synthesize
  Sample competency: Arrange his/her own volleyball practice.

Internalizing Values (Characterization by a value or value complex; acting consistently with the new value)
  Verbs: act, discriminate, display, influence, listen, modify, perform, practice, propose, qualify, question, revise, serve, solve, use, verify
  Sample competency: Join intramurals to play volleyball twice a week.

THINK!

Activity 1: TAXONOMY CLASSIFICATION


Determine which domain and level of learning are targeted by the following learning competencies taken from the Basic Education curriculum guides. For your information, the term competency has various meanings; its descriptions range from that of a broad overarching attribute to that of a very specific task (Kennedy, Hyland, & Ryan, 2009). This activity is important because your choice of assessment method is contingent on the learning domains and levels of the learning outcomes and competencies.
Learning Competencies (indicate the Domain and the Level for each)

1. Identify parts of a microscope and their functions.  Domain: ________  Level: ________
2. Employ analytical listening to make predictions.  Domain: ________  Level: ________
3. Exhibit correct body posture.  Domain: ________  Level: ________
4. Recognize the benefit of patterns in special products and factoring.  Domain: ________  Level: ________
5. Infer that body structures help animals adapt and survive in their particular habitat.  Domain: ________  Level: ________
6. Differentiate linear inequalities in two variables from linear equations in two variables.  Domain: ________  Level: ________
7. Follow written and verbal directions.  Domain: ________  Level: ________
8. Perform jumping over a stationary object several times in succession, using forward-and-back and side-to-side movement patterns.  Domain: ________  Level: ________
9. Compose musical pieces using a particular style of the 20th century.  Domain: ________  Level: ________
10. Describe movement skills in response to sound.  Domain: ________  Level: ________
11. Prove statements on triangle congruence.  Domain: ________  Level: ________
12. Work independently and with others under time constraints.  Domain: ________  Level: ________
13. Design an individualized exercise program to achieve personal fitness.  Domain: ________  Level: ________

Activity 2: Sequencing

The taxonomies of the Cognitive, Psychomotor and Affective domains have hierarchical levels. For each vignette below, arrange the learning competencies using the hierarchy, from lowest to highest.
Domain: Cognitive
Topic A: Quadratic Equations
_______ (a) Solve quadratic equations by factoring.
_______ (b) Describe a quadratic equation.
_______ (c) Compare the four methods of solving quadratic equations.
_______ (d) Differentiate a quadratic equation from other types of equation
in terms of form and degree.
_______ (e) Formulate real-life problems involving quadratic equations.
_______ (f) Examine the nature of roots of a quadratic equation.
Domain: Cognitive
Topic B: Mechanical energy

_______ (a) Decide whether the total mechanical energy remains the same
during a certain process.
_______ (b) Create a device that shows conservation of mechanical
energy.
_______ (c) State the law of conservation of energy.
_______ (d) Explain energy transformation in various activities or
events.
_______ (e) Perform activities to demonstrate conservation of
mechanical energy.
_______ (f) Determine the relationship among the kinetic, gravitational
potential and total mechanical energies for a mass at any point
between maximum potential energy and maximum kinetic energy.

Domain: Psychomotor
Topic C: Basic sketching
_______ (a) Watch how tools are selected and used in sketching.
_______ (b) Create a design using combinations of lines, curves and
shapes.
_______ (c) Draw various lines, curves, and shapes.
_______ (d) Set the initial draw position.

Domain
Topic D:
_______ (a) Write down important details of the story pertaining to
character, setting and events.
_______ (b) Share inferences, thoughts and feelings based on the short
story.
_______ (c) Relate story events to personal experience.
_______ (d) Read carefully the short story.
_______ (e) Examine thoughts on the issues raised in the short story

Activity 3:
Before you can match the appropriate assessment method to a learning outcome, you have to be familiar with the types of assessment methods and activities. Match the description in Column A with the correct method in Column B. Write the letter of the correct answer before the item number.

Column A
____ 1. Student writes a restricted or extended response to an open-ended question.
____ 2. Teacher monitors students' behavior in class as well as the classroom climate.
____ 3. Student evaluates his/her performance at a learning task in relation to a learning outcome.
____ 4. Student demonstrates his/her skills based on authentic tasks.
____ 5. Student chooses a response provided by the teacher or test developer.
____ 6. Student gives a short answer by completing a statement or labelling a diagram.

Column B
a. Brief constructed response
b. Essay
c. Observation
d. Oral question
e. Performance assessment
f. Selected response
g. Self-assessment

Activity 4: ASSESSMENT SCENARIOS


For each of the following situations, indicate which assessment method provides the best match. In determining the appropriate method, apply the revised Bloom's Taxonomy. Justify your choice in one or two statements.
1. Mr. Dasas wants to know if his students can identify the different parts of a flower.
2. Mr. Bunquin wants to find out if his students can examine the quality of education in the country.
3. Mrs. Geronimo wants to check if her students can build a useful 3D object using recycled materials.
4. Ms. De la Cruz wants to determine if her Grade 1 pupils can write smoothly and legibly.
5. Ms. Uy wants to check if her students can subtract two-digit numbers.
6. Ms. Alonsabe wants her students to think of, write down and solve three challenging situations where ratio and proportion can be applied in real life.
7. Mr. Balmeo needs to know if his students can construct a frequency distribution table after he demonstrated the procedure.
8. Mrs. Dayao wants to see if her students have grasped the important elements of the story before continuing to the next lesson.

SUMMATIVE TEST
