
URDANETA CITY UNIVERSITY
Owned and operated by the City Government of Urdaneta
College of Teacher Education

PROFESSIONAL ENHANCEMENT 2
1st Semester Academic Year 2023-2024
REVIEW MATERIAL IN ASSESSMENT IN LEARNING

Assessment refers to the process of gathering, describing, or quantifying information about student performance. It includes paper-and-pencil tests, extended responses, and performance assessments, the last of which are usually referred to as "authentic assessment" tasks.

Measurement is a process of obtaining a numerical description of the degree to which an individual possesses a particular characteristic. Measurement answers the question "How much?"

Evaluation refers to the process of examining the performance of students. It also determines whether or not a student has met the lesson's instructional objectives.

A test is an instrument or systematic procedure designed to measure the quality, ability, skill, or knowledge of students by giving a set of questions in a uniform manner. Since a test is a form of assessment, tests also answer the question "How does an individual student perform?"

Testing is a method used to measure the level of achievement or performance of the learners. It also refers to the administration,
scoring and interpretation of an instrument designed to elicit information about performance in a sample of a particular area of
behavior.

TYPES OF MEASUREMENT

There are two ways of interpreting student performance in relation to classroom instruction: norm-referenced tests and criterion-referenced tests.

A Norm-referenced Test is a test designed to measure the performance of a student compared with other students. Each individual is compared with other examinees and assigned a score, usually expressed as a percentile, a grade-equivalent score, or a stanine. Student achievement is reported for broad skill areas, although some norm-referenced tests do report achievement for individual skills.

The purpose is to rank students with respect to the achievement of others in broad areas of knowledge and to discriminate between high and low achievers.

A Criterion-referenced Test is a test designed to measure the performance of students with respect to some particular criterion or standard. Each individual is compared with a predetermined set of standards for acceptable achievement; the performance of the other examinees is irrelevant. A student's score is usually expressed as a percentage, and student achievement is reported for individual skills.

The purpose is to determine whether each student has achieved specific skills or concepts, and to find out how much students know before instruction begins and after it has finished.

Common characteristics of Norm-referenced Tests and Criterion-referenced Tests (Linn et al., 1995)

1. Both require a specification of the achievement domain to be measured.


2. Both require a relevant and representative sample of test items.
3. Both use the same types of test items.
4. Both use the same rules for item writing (except for item difficulty).
5. Both are judged by the same qualities of goodness (validity and reliability).
6. Both are useful in educational assessment.

Differences between Norm-referenced Tests and Criterion-referenced Tests:

Norm-Referenced Tests:
1. Typically cover a large domain of learning tasks, with just a few items measuring each specific task.
2. Emphasize discrimination among individuals in terms of their relative level of learning.
3. Favor items of average difficulty and typically omit very easy and very hard items.
4. Interpretation requires a clearly defined group.

Criterion-Referenced Tests:
1. Typically focus on a delimited domain of learning tasks, with a relatively large number of items measuring each specific task.
2. Emphasize description of what learning tasks individuals can and cannot perform.
3. Match item difficulty to the learning tasks, without altering item difficulty or omitting easy and hard items.
4. Interpretation requires a clearly defined and delimited achievement domain.

TYPES OF ASSESSMENT

There are four types of assessment in terms of their functional role in relation to classroom instruction. These are the placement
assessment, diagnostic assessment, formative assessment and summative assessment.

A. Placement Assessment is concerned with the entry performance of students. The purpose of placement evaluation is to determine the prerequisite skills, the degree of mastery of the course objectives, and the best mode of learning.
B. Diagnostic Assessment is a type of assessment given before the instruction. It aims to identify the strengths and
weaknesses of the students regarding the topics to be discussed. The purpose of diagnostic assessment is:
1. to determine the level of competence of the students;
2. to identify the students who already have knowledge of the lessons; and
3. to determine the causes of learning problems and formulate a plan for remedial action.
C. Formative Assessment is a type of assessment used to monitor the learning progress of the students during and after
instruction. Purposes of formative assessment:
1. to provide feedback immediately to both student and teacher regarding the successes and failures of learning;
2. to identify the learning errors that are in need of correction; and
3. to provide information to the teacher for modifying instruction and improving learning.
D. Summative Assessment is a type of assessment usually given at the end of a course or unit. Purposes of summative
assessment:
1. to determine the extent to which the instructional objectives have been met;
2. to certify student mastery of the intended outcome and used for assigning grades;
3. to provide information for judging appropriateness of the instructional objectives; and
4. to determine the effectiveness of instruction.

MODES OF ASSESSMENT

A. Traditional Assessment
1. Assessment in which students typically select an answer or recall information to complete the assessment. Tests may be standardized or teacher-made, and may be multiple-choice, fill-in-the-blanks, true-false, or matching type.
2. Indirect measures of assessment, since the test items are designed to represent competence by extracting knowledge and skills from their real-life context.
3. Items on standardized instruments tend to test only the domain of knowledge and skill, to avoid ambiguity for the test takers.
4. One-time measures that rely on a single correct answer to each item. There is limited potential for traditional tests to measure higher-order thinking skills.
B. Performance Assessment
1. Assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of
essential knowledge and skills.
2. Direct measures of student performance because tasks are designed to incorporate contexts, problems, and solution
strategies that students would use in real life.
3. Present ill-structured challenges, since the goal is to help students prepare for the complex ambiguities of real life.
4. Focus on processes and rationales. There is no single correct answer; instead, students are led to craft polished, thorough, and justifiable responses, performances, and products.
5. Involve long-range projects, exhibits, and performances that are linked to the curriculum.
6. The teacher is an important collaborator in creating tasks, as well as in developing guidelines for scoring and interpretation.

C. Portfolio Assessment
1. A portfolio is a collection of a student's work specifically selected to tell a particular story about the student.
2. A portfolio is not a pile of student work that accumulates over a semester or a year.
3. A portfolio contains a purposefully selected subset of student work.
4. It measures growth and development of students.

Factors to Consider when Constructing Good Test Items

A. Validity is the degree to which the test measures what it intends to measure. It is the usefulness of the test for a given purpose. A valid test is always reliable.
B. Reliability refers to the consistency of scores obtained by the same person when retested using the same instrument or one that is parallel to it.

C. Administrability refers to the uniform administration of the test to all students, so that the scores obtained will not vary due to factors other than differences in the students' knowledge and skills. There should be clear provisions for instructions for the students, the proctors, and even the scorer who will check the test.
D. Scorability. The test should be easy to score: the directions for scoring are clear, and an answer sheet and answer key are provided.
E. Appropriateness. The test item that the teacher constructs must assess the exact performances called for in the learning
objectives. The test item should require the same performance of the student as specified in learning objectives.
F. Adequacy. The test should contain a wide sampling of items to determine the educational outcomes or abilities, so that the resulting scores are representative of total performance in the areas measured.
G. Fairness. The test should not be biased to the examinees. It should not be offensive to any examinee subgroups. A test
can only be good if it is also fair to all test takers.
H. Objectivity represents the agreement of two or more raters or test administrators concerning the score of a student. If two raters who assess the same student on the same test cannot agree on the score, the test lacks objectivity and neither judge's score is valid; thus, lack of objectivity reduces test validity in the same way that lack of reliability influences validity.

TABLE OF SPECIFICATION

A table of specification is a device for describing test items in terms of content and process dimensions, that is, what a student is expected to know and what he or she is expected to do with that knowledge. Each item is described by a combination of content and process in the table of specification.

Sample One-Way Table of Specification in Linear Function

Content | Number of Class Sessions | Number of Items | Test Item Distribution
1. Definition of Linear Function | 2 | 4 | 1-4
2. Slope of a Line | 2 | 4 | 5-8
3. Graph of Linear Function | 2 | 4 | 9-12
4. Equation of Linear Function | 2 | 4 | 13-16
5. Standard Forms of a Line | 3 | 6 | 17-22
6. Parallel and Perpendicular Lines | 4 | 8 | 23-30
7. Applications of Linear Functions | 5 | 10 | 31-40
TOTAL | 20 | 40 | 1-40

Example: Number of items for the topic "Definition of Linear Function"

Number of class sessions = 2
Desired total number of items = 40
Total number of class sessions = 20

Number of items = (number of class sessions × desired total number of items) / total number of class sessions
= (2 × 40) / 20
= 4
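
As a quick check, the same allocation can be computed for every topic at once. The following Python sketch is ours, not from the handout; it simply applies the formula above to the sample table:

```python
# Sketch: allocating items in a one-way table of specification.
# Item counts per topic are proportional to class sessions.
topics = [
    ("Definition of Linear Function", 2),
    ("Slope of a Line", 2),
    ("Graph of Linear Function", 2),
    ("Equation of Linear Function", 2),
    ("Standard Forms of a Line", 3),
    ("Parallel and Perpendicular Lines", 4),
    ("Applications of Linear Functions", 5),
]

desired_total_items = 40
total_sessions = sum(sessions for _, sessions in topics)  # 20

start = 1
for name, sessions in topics:
    # Number of items = (class sessions x desired total items) / total sessions
    items = round(sessions * desired_total_items / total_sessions)
    end = start + items - 1
    print(f"{name}: {items} items (items {start}-{end})")
    start = end + 1
```

Running this reproduces the table exactly: 4, 4, 4, 4, 6, 8, and 10 items, covering items 1-40.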
Sample Two-Way Table of Specification in Linear Function

Cognitive levels: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation.

Content | Class Hours | Items by Cognitive Level | Total
1. Definition of Linear Function | 2 | 1, 1, 1, 1 | 4
2. Slope of a Line | 2 | 1, 1, 1, 1 | 4
3. Graph of Linear Function | 2 | 1, 1, 1, 1 | 4
4. Equation of Linear Function | 2 | 1, 1, 1, 1 | 4
5. Standard Forms of a Line | 3 | 1, 1, 1, 1, 1, 1 | 6
6. Parallel and Perpendicular Lines | 4 | 2, 2, 2, 2 | 8
7. Applications of Linear Functions | 5 | 1, 1, 3, 2, 3 | 10
TOTAL | 20 | Knowledge 4, Comprehension 6, Application 8, Analysis 8, Synthesis 7, Evaluation 7 | 40

ITEM ANALYSIS

Item analysis refers to the process of examining students' responses to each item in a test. An item has either desirable or undesirable characteristics: an item with desirable characteristics can be retained for subsequent use, while an item with undesirable characteristics is either revised or rejected.

Three criteria in determining the desirability and undesirability of an item:

a. difficulty of an item
b. discriminating power of an item
c. measures of attractiveness

The difficulty index (DF) is the proportion of students in the upper and lower groups who answered an item correctly, computed as the average of the two group proportions. In a classroom achievement test, the desired indices of difficulty are not lower than 0.20 nor higher than 0.80, with an average index of difficulty from 0.30 or 0.40 to a maximum of 0.60.

DF = (PUG + PLG) / 2

where PUG = proportion of the upper group who got the item right
      PLG = proportion of the lower group who got the item right

Level of Difficulty of an Item

Index Range Difficulty Level


0.00 – 0.20 Very Difficult
0.21-0.40 Difficult
0.41-0.60 Moderately Difficult
0.61-0.80 Easy
0.81-1.00 Very Easy
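
A minimal Python sketch of this computation, mirroring the formula and the level table above (the function names and example proportions are ours, for illustration only):

```python
# Sketch: difficulty index DF = (PUG + PLG) / 2, classified per the table above.
def difficulty_index(p_upper: float, p_lower: float) -> float:
    """Average of the proportions of the upper and lower groups who got the item right."""
    return (p_upper + p_lower) / 2

def difficulty_level(df: float) -> str:
    # Boundaries follow the level table above.
    if df <= 0.20:
        return "Very Difficult"
    elif df <= 0.40:
        return "Difficult"
    elif df <= 0.60:
        return "Moderately Difficult"
    elif df <= 0.80:
        return "Easy"
    return "Very Easy"

df = difficulty_index(0.27, 0.18)   # illustrative proportions
print(df, difficulty_level(df))     # 0.225 Difficult
```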

Index of Discrimination

The discrimination index is the difference between the proportion of high-performing students who got the item right and the proportion of low-performing students who got the item right. The high- and low-performing groups are usually defined as the upper 27% and the lower 27% of students based on total examination score. The discrimination index is the degree to which the item discriminates between the high-performing and low-performing groups in relation to scores on the total test. Indices of discrimination are classified into positive discrimination, negative discrimination, and zero discrimination.

Positive Discrimination – if the proportion of the students who got an item right in the upper performing group is greater than the
proportion of the low performing group.

Negative Discrimination – if the proportion of the students who got an item right in the low performing group is greater than the
students in the upper performing group.

Zero Discrimination – if the proportion of the students who got an item right in the upper performing group and low performing
group are equal.

Discrimination Index Item Evaluation


0.40 and up Very Good Item
0.30 – 0.39 Reasonably good item but possibly subject to improvement
0.20 – 0.29 Marginal item, usually needing and being subject to improvement
0.19 and below Poor item, to be rejected or improved by revision

Maximum discrimination is the sum of the proportions of the upper and lower groups who answered the item correctly. The maximum possible discrimination occurs when half or fewer of the combined upper and lower groups answer an item correctly.

Discriminating Efficiency is the index of discrimination divided by the maximum discrimination.

Notations: PUG= proportion of the upper group who got an item right

PLG= proportion of the lower group who got an item right

Di = discrimination index

DM = maximum discrimination

DE = discrimination efficiency

Formulas: Di = PUG − PLG

DM = PUG + PLG

DE = Di / DM

Example: Eighty students took an examination in Algebra. On item number 6, six students in the upper group and four students in the lower group got the correct answer. Find the discriminating efficiency.

Given: Number of students who took the exam = 80

27% of 80 = 21.6 or 22, which means that there are 22 students in the upper performing group and 22 students in the
lower performing group.

PUG = 6/22 = 27%

PLG = 4/22 = 18%

Di = PUG − PLG = 27% − 18% = 9%

DM = PUG + PLG = 27% + 18% = 45%

DE = Di / DM = 0.09 / 0.45 = 0.20 or 20%

This can be interpreted as: on average, the item discriminates at 20% of the potential of an item of its difficulty.
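
The same computation as a small Python sketch reproducing the worked example (the variable names are ours):

```python
# Sketch: discrimination index, maximum discrimination, discriminating efficiency.
upper_correct, lower_correct = 6, 4   # students answering item 6 correctly
group_size = round(0.27 * 80)         # 27% of 80 examinees = 22 per group

p_ug = upper_correct / group_size     # ~0.27
p_lg = lower_correct / group_size     # ~0.18

d_index = p_ug - p_lg                 # discrimination index Di ~ 0.09
d_max = p_ug + p_lg                   # maximum discrimination DM ~ 0.45
d_eff = d_index / d_max               # discriminating efficiency DE = 0.20

print(f"Di = {d_index:.2f}, DM = {d_max:.2f}, DE = {d_eff:.2f}")
```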

VALIDITY OF A TEST

Validity refers to the appropriateness of score-based inferences, or of decisions made based on students' test results; it is the extent to which a test measures what it is supposed to measure.

Important Things to Remember About Validity



1. Validity refers to the decisions we make, and not to the test itself or to the measurement.

2. Like reliability, validity is not an all or nothing concept; it is never totally absent or absolutely perfect.

3. A validity estimate, called a validity coefficient, refers to a specific type of validity. It ranges from 0 to 1.

4. Validity can never be finally determined; it is specific to each administration of the test.

TYPES OF VALIDITY

1. Content Validity – a type of validation that refers to the relationship between a test and the instructional objectives; it establishes that the test's content matches what the test is supposed to measure. Things to remember about content validity:

a. The evidence of the content validity of your test is found in the Table of Specification

b. This is the most important type of validity to you, as a classroom teacher.

c. There is no coefficient for content validity. It is determined judgmentally, not empirically.

2. Criterion-related Validity – a type of validation that refers to the extent to which scores from a test relate to theoretically similar measures. It is a measure of how accurately a student's current test score can be used to estimate a score on a criterion measure, such as performance in courses, classes, or another measurement instrument.

a. Concurrent Validity – a type of validation that requires correlating the predictor or concurrent measure with the criterion measure. Using this, we can determine whether a test is useful to us as a predictor or as a substitute (concurrent) measure. The higher the validity coefficient, the better the validity evidence of the test. In establishing concurrent validity evidence, no time interval is involved between the administration of the new test and the criterion or established test.

b. Predictive Validity – a type of validation that refers to a measure of the extent to which a person's current test results can be used to estimate accurately what that person's performance on another criterion, such as later test scores, will be at a later time.

3. Construct Validity – a type of validation that refers to a measure of the extent to which a test measures a hypothetical and unobservable variable or quality, such as intelligence, math achievement, or performance anxiety. It is established through intensive study of the test or measurement instrument.

Factors Affecting the Validity of a Test Item

1. The test itself.


2. The administration and scoring of a test.
3. Personal factors influencing how students respond to the test.
4. Validity is always specific to a particular group.

Ways to Reduce the Validity of the Test Item

1. Poorly constructed test items


2. Unclear directions
3. Ambiguous items
4. Reading vocabulary too difficult
5. Complicated syntax
6. Inadequate time limit
7. Inappropriate level of difficulty
8. Unintended clues
9. Improper arrangement of items

RELIABILITY OF A TEST

Reliability refers to the consistency of measurement, that is, how consistent test results or other assessment results are from one measurement to another. We can say that a test is reliable when it yields practically the same scores when administered twice to the same group of students, with a reliability index of 0.50 or above.

Factors Affecting the Reliability of a Test

1. Length of the test


2. Moderate item difficulty

3. Objective scoring
4. Heterogeneity of the student group
5. Limited time

Four Methods of Establishing Reliability

1. Test-Retest Method. A type of reliability determined by administering the same test twice to the same group of students, with a time interval between tests. The two sets of scores are correlated using the Pearson product-moment correlation coefficient (r), and this correlation coefficient provides a measure of stability: it indicates how stable the test results are over a period of time.
2. Equivalent-Form Method (Parallel or Alternate). A type of reliability determined by administering two different but equivalent forms of the test to the same group of students in close succession. The equivalent forms are constructed from the same set of specifications, that is, similar in content, type of items, and difficulty. The two sets of scores are correlated using the Pearson product-moment correlation coefficient (r), and the correlation coefficient provides a measure of the degree to which generalization about students' performance from one assessment to another is justified. It measures the equivalence of the tests.
3. Split-Half Method. Administer the test once and score two equivalent halves of the test. To split the test into equivalent halves, the usual procedure is to score the even-numbered and the odd-numbered items separately, which provides two scores for each student. The two half-test scores are correlated using the Spearman-Brown formula, and this correlation coefficient provides a measure of internal consistency. It indicates the degree to which consistent results are obtained from the two halves of the test.
4. Kuder-Richardson Formula. Administer the test once, score the total test, and apply the Kuder-Richardson formula. The Kuder-Richardson formula is applicable only in situations where students' responses are scored dichotomously, and it is therefore most useful with traditional test items that are scored as right or wrong. KR-20 provides a reliability estimate that reflects the degree to which the items in the test measure the same characteristic. (The related KR-21 formula further assumes that all items are of equal difficulty.) A computational sketch of methods 3 and 4 follows this list.
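
A minimal Python sketch of the split-half method (with the Spearman-Brown step-up) and KR-20, assuming a small made-up matrix of dichotomously scored items; the data and function names are illustrative, not from the handout:

```python
# Sketch: split-half reliability (Spearman-Brown) and KR-20 for 0/1-scored items.
from statistics import mean, pvariance

def pearson_r(x, y):
    # Pearson product-moment correlation of two score lists.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_matrix):
    # Score odd- and even-numbered items separately, correlate the halves,
    # then step up the half-test correlation with Spearman-Brown: 2r / (1 + r).
    odd = [sum(row[0::2]) for row in item_matrix]
    even = [sum(row[1::2]) for row in item_matrix]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

def kr20(item_matrix):
    # KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
    k = len(item_matrix[0])
    totals = [sum(row) for row in item_matrix]
    pq = sum((p := mean(col)) * (1 - p) for col in zip(*item_matrix))
    return (k / (k - 1)) * (1 - pq / pvariance(totals))

# Rows = students, columns = items scored right (1) or wrong (0); made-up data.
scores = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]
print(split_half_reliability(scores), kr20(scores))
```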

RUBRICS

A rubric is a scoring scale and instructional tool used to assess the performance of students against a task-specific set of criteria. It contains two essential parts: the criteria for the task and the levels of performance for each criterion. It provides teachers an effective means of student-centered feedback and evaluation of students' work, and it enables teachers to provide detailed and informative evaluations of student performance.

Rubrics are especially important when you are measuring the performance of students against a set of standards or a predetermined set of criteria. Through the use of scoring rubrics, teachers can determine the strengths and weaknesses of the students and thereby enable the students to develop their skills.

Types of Rubrics

1. Holistic Rubrics – do not list separate levels of performance for each criterion. Rather, a holistic rubric assigns a level of performance by assessing the multiple criteria as a whole; in other words, all the components are put together.
Advantages: quick scoring; provides an overview of students' achievement.
Disadvantages: does not provide detailed information about student performance in specific areas of content and skills; it may be difficult to settle on one overall score.

Example:
3 – Excellent Researcher
 includes 10-12 sources
 no apparent historical inaccuracies
 can easily tell where the sources of information were drawn from
 all relevant information is included

2 – Good Researcher
 includes 5-9 sources
 few historical inaccuracies
 can tell with difficulty where information came from
 bibliography contains most relevant information

1 – Poor Researcher
 includes 1-4 sources
 cannot tell from which source information came
 bibliography contains very little information

2. Analytic Rubrics – the teacher or rater identifies and assesses components of a finished product. An analytic rubric breaks down the final product into component parts, and each part is scored independently. The total score is the sum of the ratings for all the parts being assessed. In analytic scoring, it is very important for the rater to treat each part separately, to avoid bias toward the whole product.
Advantages: more detailed feedback; scoring is more consistent across students and graders.
Disadvantage: time-consuming to score.
Example:
Criteria | Limited (1) | Acceptable (2) | Proficient (3)
Made good observations | observations are absent or vague | most observations are clear and detailed | all observations are clear and detailed
Made good predictions | predictions are absent or irrelevant | most predictions are reasonable | all predictions are reasonable
Appropriate conclusion | conclusion is absent or inconsistent with observations | conclusion is consistent with most observations | conclusion is consistent with observations
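
Scoring with an analytic rubric is simply independent per-criterion ratings that are then summed. A minimal sketch, using the criterion names from the example above (the sample ratings are ours):

```python
# Sketch: representing the analytic rubric above as data and scoring a submission.
LEVELS = {1: "Limited", 2: "Acceptable", 3: "Proficient"}
CRITERIA = ["Made good observations", "Made good predictions",
            "Appropriate conclusion"]

def score_submission(ratings: dict[str, int]) -> int:
    # Each criterion is rated independently to avoid bias toward the whole product.
    for criterion in CRITERIA:
        level = ratings[criterion]
        print(f"{criterion}: {level} ({LEVELS[level]})")
    return sum(ratings[c] for c in CRITERIA)

total = score_submission({
    "Made good observations": 3,
    "Made good predictions": 2,
    "Appropriate conclusion": 2,
})
print(f"Total: {total} / {3 * len(CRITERIA)}")   # Total: 7 / 9
```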

Advantages of Using Rubrics

When assessing the performance of students using performance-based assessment, it is very important to use scoring rubrics. The advantages of using rubrics in assessing students' performance are:
1. Rubrics allow assessment to become more objective and consistent;
2. Rubrics clarify the criteria in specific terms;
3. Rubrics clearly show the students how work will be evaluated and what is expected;
4. Rubrics promote student awareness of the criteria to use in assessing peer performance;
5. Rubrics provide useful feedback regarding the effectiveness of the instruction; and
6. Rubrics provide benchmarks against which to measure and document progress.

Steps in Developing Rubrics

1. Identify your standards, objectives, and goals for your students. A standard is a statement of what students should know or be able to perform, and your assessment should indicate whether your students have met it. Also know your goals for instruction: what are the intended learning outcomes?
2. Identify the characteristics of a good performance on that task, the criteria. When the students perform or present their work, it
should indicate that they performed well in the task given to them; hence, they met that particular standard.
3. Identify the levels of performance for each criterion. There are no guidelines regarding the number of levels of performance; it varies according to the task and your needs. A rubric can have as few as two levels of performance or as many as the teacher can develop, provided the rater can sufficiently discriminate among student performances on each criterion. Through these levels of performance, the teacher or rater can provide more detailed feedback about student performance, and it becomes easier for both teacher and students to identify the areas needing improvement.

PERFORMANCE BASED ASSESSMENT

Performance-based assessment is a direct and systematic observation of the actual performances of students based on predetermined performance criteria. It is an alternative form of assessment that represents a set of strategies for the application of knowledge, skills, and work habits through the performance of tasks that are meaningful and engaging to students.

Framework of Assessment Approaches

Selection Type | Supply Type | Product | Performance
True-False | Completion | Essay, story, or poem | Oral presentation of report
Multiple-Choice | Label a diagram | Writing portfolio | Musical, dance, or dramatic performance
Matching Type | Short answer | Research report | Typing test
 | Concept map | Portfolio exhibit, art exhibit | Diving
 | | Writing journal | Laboratory demonstration
 | | | Cooperation in group work

Forms of Performance Based Assessment

1. Extended Response Task


a. Activities for single assessment may be multiple and varied.
b. Activities may be extended over a period of time.
c. Products from different students may be different in focus.
2. Restricted-Response Tasks
a. Intended performances are more narrowly defined than in extended-response tasks.
b. Questions may begin like a multiple-choice or short-answer stem, but then ask for an explanation or justification.
c. May have introductory material like an interpretive exercise, but then ask for an explanation of the answer, not just the
3. Portfolio is a purposeful collection of student work that exhibits the student’s efforts, progress and achievements in one or
more areas.

Uses of Performance Based Assessment

1. Assessing complex cognitive outcomes such as analysis, synthesis, and evaluation.
2. Assessing non-writing performances and products.
3. The teacher must carefully specify the learning outcomes and construct an activity or task that actually calls them forth.

Focus of Performance Based Assessment

Performance Based Assessment can assess the process, product or both depending on the learning outcomes. It also involves doing
rather than just knowing about the activity or task. The teacher will assess the effectiveness of the process or procedures and the
product used in carrying out the instruction.

Use the process when:

1. There is no product;
2. The process is orderly and directly observable;
3. Correct procedures/steps are crucial to later success;
4. Analysis of procedural steps can help in improving the product; and
5. Learning is at the early stage.

Use the product when:

1. Different procedures result in an equally good product;


2. The procedures are not available for observation;
3. The procedures have been mastered already; and
4. Products have qualities that can be identified and judged.

Assessing the Performance

The final step in performance assessment is to assess and score the students’ performance. To assess the performance of the
students, the evaluator can use checklist approach, narrative or anecdotal approach, rating scale approach, and memory approach.
The evaluator can give feedback on students’ performance in the form of a narrative report or a grade. There are different ways to
record the results of performance-based assessment:

1. Checklist Approach. Checklists are observation instruments that divide a performance into its elements; the teacher indicates only whether or not certain elements are present in the performance.
2. Narrative/Anecdotal Approach is a continuous description of student behavior as it occurs, recorded without judgment or
interpretation. The teacher will write narrative reports of what was done during each of the performances. From these
reports, teachers can determine how well their students met their standards.
3. Rating Scale Approach is a checklist that allows the evaluator to record information on a scale, noting finer distinctions than just the presence or absence of a behavior. The teacher indicates to what degree the standards were met; usually, teachers use a numerical scale (see the sketch after this list).

4. Memory Approach. The teacher observes the students performing the tasks without taking any notes, then uses the information from memory to determine whether or not the students were successful. This approach is not recommended for assessing student performance.
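
To make the contrast between the first and third recording approaches concrete, here is a minimal sketch; the element names and the 1-3 scale are illustrative assumptions, not from the handout:

```python
# Sketch: recording one student's performance with a checklist vs. a rating scale.
# Element names and the 1-3 scale are illustrative assumptions.
elements = ["states the problem", "follows procedure", "interprets results"]

# Checklist approach: record only whether each element is present.
checklist = {"states the problem": True, "follows procedure": True,
             "interprets results": False}

# Rating scale approach: record the degree to which each standard was met.
RATINGS = {1: "not met", 2: "partially met", 3: "fully met"}
rating_scale = {"states the problem": 3, "follows procedure": 2,
                "interprets results": 1}

for e in elements:
    print(f"{e}: present={checklist[e]}, "
          f"rating={rating_scale[e]} ({RATINGS[rating_scale[e]]})")
```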

PORTFOLIO ASSESSMENT

Portfolio assessment is the systematic, longitudinal collection of student work created in response to specific, known instructional objectives and evaluated in relation to the same criteria. A student portfolio is a purposeful collection of student work that exhibits the student's efforts, progress, and achievements in one or more areas. The collection must include student participation in selecting the contents, the criteria for selection, the criteria for judging merit, and evidence of student self-reflection.

Comparison of Portfolio and Traditional Forms of Assessment

Traditional Assessment | Portfolio Assessment
Measures student's ability at one time | Measures student's ability over time
Done by the teacher alone; students are not aware of the criteria | Done by the teacher and the students; the students are aware of the criteria
Conducted outside instruction | Embedded in instruction
Assigns student a grade | Involves student in own assessment
Does not capture the student's language ability | Captures many facets of language learning performance
Does not include the teacher's knowledge of the student as a learner | Allows for expression of the teacher's knowledge of the student as a learner
Does not give student responsibility | Students learn how to take responsibility

Three Types of Portfolio

1. Working Portfolio
It is also known as a “teacher-student portfolio”; as the name implies, it is a project “in the works.” It contains work in progress as well as finished samples of work used by students and teachers to reflect on process. It documents the stages of learning and provides a progressive record of student growth. This is an interactive teacher-student portfolio that aids communication between teacher and student.
The working portfolio may be used to diagnose student needs. With it, both student and teacher have evidence of the student's strengths and weaknesses in achieving learning objectives, information that is extremely useful in designing future instruction.

2. Showcase Portfolio
It is also known as a best-works portfolio or display portfolio. This kind of portfolio focuses on the student's best and most representative work; it exhibits the student's best performance. A best-works portfolio may document student efforts with respect to curriculum objectives; it may also include evidence of student activities beyond school.
It is just like an artist's portfolio, where a variety of work is selected to reflect breadth of talent. Hence, in this portfolio, the student selects what he or she thinks is representative work.
The most rewarding use of student portfolios is the display of the students' best work, the work that makes them proud. This encourages self-assessment and builds students' self-esteem. The pride and sense of accomplishment that students feel make the effort well worthwhile and contribute to a culture of learning in the classroom.

3. Progress Portfolio
It is also known as a Teacher Alternative Assessment Portfolio. It contains examples of students' work of the same types collected over a period of time, which are utilized to assess their progress.

Uses of Portfolios

1. It can provide both formative and summative opportunities for monitoring progress toward reaching identified outcomes.
2. Portfolios allow students to document aspects of their learning that do not show up well in traditional assessments.
3. Portfolios are useful to showcase periodic or end of the year accomplishments of students such as in poetry, reflections on
growth, samples of best works, etc.
4. Portfolios may also be used to facilitate communication between teachers and parents regarding their child’s achievement and
progress in a certain period of time.
5. Administrators may use portfolios for national competency testing, to grant high school credit, and to evaluate educational programs.
6. Portfolios may be assembled for a combination of purposes, such as instructional enhancement and progress documentation. A teacher reviews students' portfolios periodically and makes notes for revising instruction for the next year's use.

According to Mueller (2010), there are seven steps in developing student portfolios, outlined below.

1. Purpose: What is the purpose(s) of the portfolio?


2. Audience: For what audience(s) will the portfolio be created?
3. Content: What samples of student work will be included?
4. Process: What processes (e.g., selection of work to be included, reflection on work, conferencing) will be engaged in during
the development of the portfolio?
5. Management: How will time and materials be managed in the development of the portfolio?
6. Communication: How and when will the portfolio be shared with pertinent audiences?
7. Evaluation: If the portfolio is to be used for evaluation, when and how should it be evaluated?

Guidelines for Assessing Portfolios

1. Include enough documents (items) on which to base judgment.


2. Structure the contents to provide scorable information.
3. Develop judging criteria and a scoring scheme for raters to use in assessing the portfolios.
4. Use observation instruments such as checklists and rating scales when possible to facilitate scoring.
5. Use trained evaluators or assessors.
