
NAME: Maychel R. Castillo
COURSE: Assessment of learning (Answered Modules)
SUBMITTED TO: Dr. Randy Joy M. Ventayen

CHAPTER 1

Evaluative Exercise 0.1

Directions: Based on the learning that you have acquired from the orientation and the
course syllabus that you have read, reflect on the following: (Write your reflections on the
space provided).

1. How do you see yourself after finishing this course subject?

I will be able to provide proper feedback to learners, teach them to accept that feedback
positively, and help them use the information it contains to improve their work.

2. How relevant is your understanding of the vision, mission, goals and objectives of a
certain school?

It helps me set my priorities while in school and also encourages me to be productive in my
studies.

3. Differentiate the vision from mission.

The vision focuses on the future goals of the organization, while the mission describes what you
do, who you do it for, and the benefits it provides.

4. Is the course syllabus relevant to you? How? Give specific examples.


Yes, because it summarizes the content of the subject and gives the learner a hint of what will be
studied. With a syllabus you know the sequence of the lessons, you can research upcoming
lessons in advance, and it becomes easier to understand a given topic.
1.1 Pre-Test

Directions: Answer the following questions briefly on the space provided.


1. Have you heard about outcomes-based education (OBE)? If yes, where did you
hear it?

No

2. In your own understanding, how do you define outcomes-based education?

It’s an alternative way of teaching.

3. How relevant is measurement to you? Why?

It helps me know how effective my teaching methods are for the learners, because it
guides me in modifying those methods so that my learners can understand more easily.

4. Is assessment different from evaluation? Why did you say so?

Yes, because assessment is an ungraded, continuous process that improves the learner's path
toward learning, while evaluation focuses on grades and can be used as a final review to judge
the quality of instruction. In short, evaluation provides closure and judgment.

1.5 Evaluative Exercise

Directions: After reading and understanding the different definitions of Outcomes-Based
Education, synthesize them to form your own definition. Write your definition on the space
provided. Post it on your Facebook account. Take a screenshot, print it, and attach it to
your portfolio.

Outcomes-Based Education is a process that focuses on demonstrating the results of the
learning that students attain in a course. It centers on what students can do with their
learning experiences. It is therefore a deeper way of learning.

1.10 Evaluative Exercise

Directions: Enumerate the immediate outcomes of your course program (e.g. BSEd, BIT,
etc.) on the space provided.

Bachelor of Science in Hospitality Management

1. Integrate knowledge and analyze and manage different aspects of hotels and restaurants
at a global level.
2. Enhance comprehension and identification of culinary terminology.
3. Apply the basic principles of analytical thinking and problem solving when examining
hospitality management issues.
4. Demonstrate the ability to read, listen, and clearly express themselves using written, oral,
and visual methods to communicate effectively with superiors, coworkers, customers, and
members of the community.
5. Handle hotel bookings, inquiries, and related tasks.
6. Demonstrate the leadership, teamwork, and interpersonal skills needed for managing diverse
and global hospitality operations.
1.15 Research Activity

Directions: On the space provided, find and write the definitions of the following terms:

1. Formative Evaluation
 Formative evaluations aim to gain quick feedback about the effectiveness
of current instructional strategies with the explicit goal of enhancing teaching during the target
course. The focus of formative evaluation is on soliciting feedback that enables timely
revisions to enhance the learning process. Formative evaluations are designed to provide
information to help instructors improve their online instruction. Formative evaluations may be
conducted at any time throughout the instructional process to monitor the value and impact of
instructional practices or to provide feedback on teaching strengths and challenges.

2. Summative Evaluation
Summative evaluations emphasize an overall judgment of one’s effectiveness in teaching
online. Conducted at the end of a course or program, the focus of summative evaluations is to
measure and document quality indicators for decision-making purposes. Although information
gained from summative evaluations may be used to improve future teaching performance, the
information is not provided in a timely fashion to provide opportunities for revision or
modification of instructional strategies while the teaching and learning is still in progress.

1. How do you differentiate Outcomes-Based Education (OBE) from Understanding by
Design (UbD)?

Outcomes-Based Education is a model that enables students to analyze and gain a deeper
understanding of the course, and to demonstrate the outcomes of their learning experiences.
To accomplish this, Understanding by Design serves as the curriculum framework, aligned to
the degree program, that enables students to acquire the desired competencies.
2. Are you in favor of the implementation of OBE? Why or why not?

Yes, because with this system we can determine the depth of a child's learning,
which is not based only on memorizing.
1.17 Reflection

Directions: On the space provided, write in narrative form the learning experience you
had after reading and accomplishing the exercises and activities in this chapter.

Chapter 1 of this module covers an interesting and informative topic. It gave me insights into
Outcomes-Based Education (OBE) and Understanding by Design (UbD), as well as the relevance of
measurement to student learning, through which I discovered the difference between assessment and
evaluation. I used to believe that assessment and evaluation were the same but, upon reading this
chapter, I realized that assessment is a process of gathering evidence of students' performance to
determine their learning, and it continuously improves the learners' path toward learning, while
evaluation provides closure and judgment. I also encountered the definition of Outcomes-Based
Education and its relation to UbD, which made me realize that teachers' instruction should build a way
for students to analyze, interpret, and apply all their accumulated learning experiences. Knowing this
helps me better understand the kind of learning that should be given to learners, because students do
not need to memorize all the time; rather, they must acquire the knowledge needed to reach the
intended learning outcomes.

CHAPTER 2

Directions: Answer the following questions briefly on the space provided or do what is
being asked for.
1. Define outcomes of student learning on your own understanding.

Answer: It is the combination of knowledge, abilities, and understanding that students gain: what they
know, what they are able to do, and how they can apply all the learning they attain after completing
the course.

2. Reviewing your knowledge in the Principles of Teaching, how important are the domains
of learning in the life of a teacher?

Answer: The development and execution of lessons by teachers is an important part of the teaching
process. It is important to understand that there are various types of learners with different needs, so
different approaches must be followed in preparing and implementing lessons to ensure those needs
are addressed. The three domains of learning help teachers identify the most effective way of
teaching their students.

Directions: The following are examples of learning outcomes; on the second column, write the
domain in which each outcome is classified and on the third column the level/category to which
the learning outcome belongs.

Learning Outcome | Domain | Level/Category
1. Formulate a procedure to follow in preparing for class demonstration | Cognitive | Application
2. Formulate a new program | Cognitive | Synthesis
3. Perform repeatedly with speed and accuracy | Psychomotor | Physical abilities
4. Listen to others with respect | Affective | Valuing
5. Select the most effective among a number of solutions | Cognitive | Analysis
6. Watch a more experienced performer | Psychomotor | Perceptual abilities
7. Know the rules and practice them | Cognitive | Application
8. Show ability to resolve problems/conflicts | Affective | Characterization
9. Apply learning principles in studying pupil behavior | Cognitive | Application
10. Recite prices of commodities from memory | Cognitive | Knowledge
Directions: Read the EDCOM Report of 1991 on the internet and be able to answer the
following questions.
1. What is EDCOM Report of 1991?

Answer: The Congressional Commission on Education (EDCOM) was created to review and assess
Philippine education. Its report assessed the state of education in the Philippines. EDCOM was
created by a Joint Resolution of the Eighth Philippine Congress on June 17, 1990. It consulted every
component of the stakeholder group, including parents, teachers, school administrators, DECS
officials, business sectors, LGUs, NGOs, civic organizations, religious leaders, workers, and the
marginalized sectors (e.g., farmers).

2. What are the salient findings of EDCOM Report of 1991?


Answer: In 1991, the Philippines was said to have one of the most expanded school systems in the
world. With a participation rate of up to 97.78% at the elementary level, the country was close to
attaining universal elementary education. On the other hand, the Philippines posted an 89% literacy
rate, though functional literacy stood at only 73%.
Summary of Findings
1. Too Little Investment in Education
2. Disparities in Access to Education
3. Low Achievement
4. High Drop-out Rates in Less Developed Communities
5. Special Needs Neglected
6. Limited ECE and NFE Services
7. Schooling Length and Class Interruptions, Less Quality
8. Inadequate Science and Technology Education
9. Ineffective Vocational Education
10. Bilingual Education Affects Learning
11. Manpower Mismatch
12. Irrelevance of Education
13. Incompetent Training and Instruction
14. Ineffective and Inefficient Organization

Directions: On the space provided, write in narrative form the learning experience you had
after reading and accomplishing the exercises and activities in this chapter.

Answer: This chapter refreshed my knowledge of the three domains of learning: the cognitive, affective,
and psychomotor domains, and of how important they are for teachers in bringing out the utmost
learning outcomes in students. By reading the EDCOM Report of 1991, I learned the importance of
assessment in learning. The report sheds light on the problems of our educational system, such as:
 high dropout rates, especially in rural areas;
 poor access to special education (SPED) because of the limited number of special schools and SPED
centers in the country;
 insufficient government investment in our education system, including inadequate investment in
teaching materials and learning resources in primary education institutions; and
 poor management of our education establishment, among others.
With the aid of this study, it is easier to make recommendations for improving the education system in our
country.

CHAPTER 3

Directions: Answer the following questions briefly on the space provided.

1. How good are you in assessment? Provide a specific situation.

Answer: Right now I can say that I am still in the process of finding my own technique when it
comes to assessment. I will ask students to answer questions such as "Why" and "How" to draw
out more in-depth thinking. For example: How did you come up with that solution, and why do you
think it is more effective?

2. What instrument or tool do you use when you assess? And why?

Answer: Rubrics, because you can give a set of criteria for assessing knowledge, and it is
easier to rate the score when there are criteria for assessing the students' output.

3. Is it difficult to assess student learning outcomes? Why?

Answer: Yes, because there are so many factors to consider in assessing student learning outcomes.
For example, as teachers we must consider students' learning styles and multiple intelligences, and
come up with a variety of ways of assessing student learning in order to arrive at an effective
teaching method and attain better student learning outcomes.

Directions: Write Five (5) Student Learning Outcomes in line with your field of
specialization and also write Three (3) Supporting Student Activities for each Student
Learning Outcome.

1. Contextualization of Knowledge
Students will… Identify, formulate, and solve problems using appropriate information and approaches.

 Demonstrate their understanding of major theories, approaches, concepts, and current and classical
research findings in the area of concentration.
 Demonstrate an understanding of the basics of performing mensuration and calculation.
 Familiarize themselves with the table of weights and measures in baking by applying basic
mathematical operations in calculating weights and measures to measure dry and liquid ingredients
accurately.

2. Praxis and Technique
Students will… Utilize the techniques, skills, and modern tools necessary for practice.
 Demonstrate professional and ethical responsibility.
 Appropriately apply laws, codes, regulations, standards that protect the health and safety of the
public.
 Use tools and bakery equipment, and prepare tools and equipment for specific baking purposes.

3. Critical Thinking
Students will… Recognize, describe, predict, and analyze systems behavior.
 Evaluate evidence to determine and implement best practice.
 Examine technical literature, resolve ambiguity and develop conclusions.
 Synthesize knowledge and use insight and creativity to better understand and improve systems.

4. Occupational Health and Safety Procedures
Students will… Maintain occupational health and safety awareness.
 Identify hazards and risks.
 Evaluate hazards and risks.
 Control hazards and risks.

5. Research and Communication
Students will… Retrieve, analyze, and interpret the professional and lay literature, providing
information to both professionals and the public.
 Propose original research: outlining a plan, assembling the necessary protocol, and performing the
original research and Design and conduct experiments, and analyze and interpret data.
 Communicate effectively through written reports, oral presentations and discussion.
 Work in multi-disciplinary teams and provide leadership on materials-related problems that arise in
multi-disciplinary work.

Directions: Read the article entitled “Ensuring Educational Quality Means Assessing Learning”
written by Kathrael Kazin and David G. Payne at
https://www.ets.org/Media/Education_Topics/pdf/11707_AsSeenIn.pdf. Be able to answer the
following questions:
1. What is the relationship between educational quality and learning assessment?
The relationship lies in the pivotal role of assessment in attaining educational quality:
without assessing student learning outcomes, there is no reliable way to measure and
demonstrate educational quality.

2. What is a culture of evidence?

It is a framework for helping colleges and universities define aspirations for student learning and
translate them into measurable outcomes, developed over the past two years by the Educational
Testing Service through a series of white papers.

Directions: On the space provided, write in narrative form the learning experience you had after reading and
accomplishing the exercises and activities in this chapter.

After nights of reading, analyzing, and researching, I came up with my own questions. First,
what is outcomes-based education or, in short, OBE? What can this educational strategy do
for the current education programs of our country? Can we really achieve educational
quality through it?
Spady (1993) has a ready answer for the first question. OBE means clearly focusing and organizing
everything in an educational system around what is essential for all students to be able to do
successfully at the end of their learning experiences. This means starting with a clear picture of what
is important for students to be able to do, then organizing the curriculum, instruction and
assessment to make sure that learning ultimately happens.
Accordingly, OBE has six important features, namely: active learners; continuous assessment;
critical thinking, reasoning, reflection and action; integration of knowledge, learning
relevant/connected real life situations; learner-centered and educator/facilitator use
group/teamwork; and learning programs seen as guides that allow educators to be innovative and
creative in designing programs and activities.
As a teaching strategy, the following guideposts should be properly considered by mentors (Killen,
2000): The main focus should be on learning rather than teaching; students cannot learn if they do
not think; thinking is facilitated and encouraged by the processes that you use to engage students
with the content, as well as by the content itself; the subject does not exist in isolation — you have
to help students make links to other subjects; and you have a responsibility to help students learn
how to learn. It is learner-centered.
Given the situation of the world today, OBE would be of great help in implementing the
different education programs in our country, as an essential educational strategy used
by mentors to ensure academic success for themselves and for their students.
To set an example, take the K to 12 curriculum. As an aspiring educator, I am for the
continued offering of the program despite the strong objections from some sectors of our society,
both today and before it was implemented. Surely the additional two years of schooling for basic
education will enhance the academe-industry partnership scheme, and furthermore we will be
producing graduates equipped with the world-class quality, skills, critical thinking, and learning
that countries around the world already expect.
In undertaking the training, students are placed in the various strands available in the local
schools based on their respective interests. Prior to graduation, the students are immersed in
various companies and agencies whose specialization is related to their chosen expertise.
Likewise, prior to undergoing the immersion process, which is almost the same as the
on-the-job training or internship program of college students, the students are subjected to
testing at the Technical Education and Skills Development Authority (TESDA) Testing Centers.
The qualified and deserving students are awarded a National Certification (NC), attesting to
their capabilities based on the knowledge instilled and the skills acquired through the hands-on
activities of the vocation. Of course, their mentors are the ones subjected first to TESDA
testing. They have to have national certifications up to Level 4. Likewise, they must undergo
rigid training dubbed Training Management. Hence, passers or achievers of this endeavor are
qualified enough to handle SHS courses or subjects.
In addition, at both the high school and collegiate levels, even in graduate schools, mentors are
required to implement OBE. Most of the textbooks and references in use at present are
prepared by writers and authors following OBE guidelines from the DepEd and the Commission
on Higher Education.
The final outcome of this program should be assessed after graduation of the first and second
batches of Grade 12 students. The prospective graduates are expected to be skillful enough and
ready to enter the world of work. Having credentials of NC-2, they are qualified to obtain jobs locally
and abroad.
This OBE phenomenon is an advantage especially for graduates who are not keen on going to
college. Hence, the period of schooling prior to working has been shortened by two years. Further,
the resource-poor parents will have their children engage in productive jobs since they are well-
trained and skillful enough to handle the assigned jobs in the companies of their destination.
On the other hand, if the graduates of senior high school intend to go to college, they will be
knowledgeable and very proficient in the courses of their interest since they are OBE trained prior to
college endeavors. Likewise, there is a congruency on the strands of their training and their
collegiate undertakings, if the strands in their secondary program and the collegiate course to be
taken are properly aligned.
So, the questions posed at the beginning have been answered: OBE is an essential educational
strategy used by mentors to ensure academic success for themselves and their learners. OBE is
very much needed in the learning process of our educational program, and through OBE the
program will consequently succeed and achieve quality education. We are moving into true
diversity of learning and teaching; it is one way to welcome and discover the true nature of each
individual, so that we can supervise, cultivate, stimulate, and nourish it, giving it an environment
where it can fully grow, reach its highest potential, and truly bloom in every walk of life.
Completing the exercises and activities engaged my understanding, thinking, assessing, and
creativity, helping me analyze each question and put the words together into a whole idea I could
explain. During this course it has shaped my thinking, understanding, and creativity in how I
myself can come up with something new in teaching in order to develop student learning
outcomes hand in hand. It may not be as easy as the traditional way of assessing learning
outcomes, but it is a great help for us aspiring teachers to dig deeper into our learners, because
learning can be borderless, and in doing so we embrace diversity in our teaching methods and,
most especially, for our students.

CHAPTER 4

Directions: Answer the following questions briefly on the space provided or do what is
being asked for.
1. List down the instruments or tools that you can use when you assess.
1. Graphic Organizers. These are tools to visually represent thoughts, ideas, knowledge,
and concepts. They help to organize thoughts and to promote understanding. This
section contains sample graphic organizers and some examples of how they were
successfully used by schools for various purposes.
2. Review and Reflection Tools. These enable learners to review and reflect on their
knowledge, progress, and what they have learnt and achieved during a unit, topic, or
project.
3. Feedback Tools. These enable learners to provide feedback on their work and
performance. They also include strategies for teachers to increase the wait time when
asking questions in class.
4. Rubrics. These are printed sets of criteria for assessing knowledge, performance, or a
product and for giving feedback. The following tools are examples of rubrics and how
they are used in schools.

2. How relevant are the assessment tools that you have listed down in no. 1?

It is best to use a variety of assessment instruments or tools when assessing student
learning because no single instrument or tool can give you the best results. These tools are
for immediate use with students in the classroom. They are suitable for use in many
different contexts and are aimed at improving assessment practices. A range of schools
have used some of these tools, and their feedback and suggestions for use are included.

3. Have you tried constructing tests? How difficult is it? Discuss by citing your
specific experience.

After this activity, yes, I have now tried constructing a test. It was difficult, especially
since it was my first time and I did not have a specific guideline on what to include, so
I based it on the tests I took during my college days, when we were the ones taking the
tests. It is important that in constructing tests we incorporate different kinds of
questions to cater to the different ways our students can show what they have learned
over the span of our lessons. It can also give both students and teachers a window for
assessment.
Directions: Enumerate examples of graphic organizers. Just in case the space is not enough,
use the space at the back pages of the book.
 Five-Paragraph Essay. Help students write five-paragraph essays with a graphic organizer.
 Analogy Organizer. Use this analogy organizer when teaching new concepts to your class.
 Steps in a Process
 Triple Venn Diagram
 KWL Chart (Version 3)
 Three Paragraph Main Idea and Details Chart
 Cause and Effect
 Alphabet Organizer

Directions: Enumerate examples of review and reflection tools. Just in case the space is not
enough, use the space at the back pages of the book.

1. The 5R framework for reflection. This framework takes you through Reporting, Responding,
Relating, Reasoning, and Reconstructing.
2. The CARL framework of reflection. This framework takes you through Context, Action,
Results, and Learning.
3. The four F's of active reviewing. This framework takes you through Facts, Feelings, Findings,
and Future.

Directions: Enumerate examples of feedback tools. Just in case the space is not enough, use
the space at the back pages of the book.
 Webshop Reviews
 Traditional Surveys
 Community Feedback
 Visual Feedback
 Website Feedback Forms

Directions: Enumerate examples of rubrics. Just in case the space is not enough, use the
space at the back pages of the book.
1. Rubrics as Checklists
These basic rubric examples ensure that all parts of the assignment are present. They help
students keep track of each element of a project. Checklists also let teachers see whether a
student fully participated in an assignment, but they aren’t as informative as other rubrics.

Example of a Checklist Rubric


Checklists are useful in all subject areas because they’re versatile and easy to
understand. As long as each part of an assignment is present, the student receives full
credit. An example of a science project checklist includes a column for students to check
their work before turning it in.
2. Holistic Rubrics
A general rubric that lists a few levels of performance is a holistic rubric. These rubrics
usually combine criteria for a certain score into one level. Holistic rubrics include more
information than a checklist and make good formative assessment tools.

Example of a Holistic Rubric


The typical A-F grading system is one example of a holistic rubric in which many skills
are combined for one score. Here is another example of a holistic rubric for an oral
presentation in social studies.
3. Analytic Rubrics
An analytic rubric assesses each aspect of an assignment. It awards a designated
number of points to each part which adds up to the student’s final score. Projects with
analytic rubrics take longer to grade, but they are informative to teachers as
summative assessment tools.
Example of an Analytic Rubric
Analytic rubrics are useful in any subject in which the teacher needs to monitor
discrete skills. Check out an example of an analytic rubric for a language arts literary
essay.
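The point-summing behavior of an analytic rubric described above can be sketched in a few lines of Python. This is only an illustration; the criterion names and point values are hypothetical examples, not taken from any actual rubric.

```python
# Sketch of analytic-rubric scoring: each criterion earns points
# separately, and the parts add up to the student's final score.
# Criterion names and point values are hypothetical examples.

def analytic_score(awarded, max_points):
    """Sum the points awarded per criterion against the rubric maximums."""
    total = sum(awarded.get(criterion, 0) for criterion in max_points)
    possible = sum(max_points.values())
    return total, possible

rubric = {"content": 10, "organization": 5, "mechanics": 5}
student = {"content": 8, "organization": 4, "mechanics": 5}

earned, possible = analytic_score(student, rubric)
print(f"{earned}/{possible}")  # prints "17/20"
```

Because each criterion is scored independently, the teacher can report the per-criterion breakdown as well as the total, which is what makes analytic rubrics informative but slower to grade.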
4. Developmental Rubrics
While other types of rubrics measure one assignment or project, a developmental rubric tracks a
student’s overall progress toward proficiency. These continuum rubrics can span one standard,
one subject, or one skill. Developmental rubrics are more time-consuming for teachers than
analytic rubrics, but they are the most informative type of assessment tool.

Example of a Developmental Rubric

The Common Core standards are an example of a developmental rubric with benchmarks over
each grade level. Standards-based grading systems are becoming more common in modern
classrooms. Check out an example of a developmental rubric designed to keep track of
elementary math skills in operations and algebraic thinking.

Directions: Make a 20-Item True/False Test

True or False?
1. It's okay to thaw perishable food like frozen chicken and beef on the kitchen counter or in the sink.
2. Foods should be put away in the fridge or freezer within two hours.
3. Milk and eggs can be stored in the refrigerator door.
4. One of the best ways to prevent contaminating foods is to wash your hands.
5. You can tell if food is still safe to eat by smelling it.
6. Keep raw foods and cooked foods separate.
7. Fruits and vegetables should be washed, even if you are peeling them.
8. The temperature at which you cook leftovers doesn’t matter because it’s already cooked.
9. Steaks can be eaten rare.
10. Pregnant women, infants, seniors and people with a weakened immune system are at greater risk of
developing foodborne illness.
11. A strong fishy or iodine smell indicates that the shrimp are still fresh.
12. Frozen crabmeat should be treated like any other frozen fish.
13. Clams become tough and rubbery if overcooked.
14. Shrimps like other shellfish, become tough and rubbery when cooked at low temperature.
15. Frozen breaded fish can be fried without thawing
16. Never use an electrical appliance if your hands are wet or if you’re standing on a wet floor.
17. If you spill something on the floor, leave it there for someone else to clean it.
18. During a kitchen lab, you should intermingle with other groups.
19. When you open a hot oven, you should stick your hand in to check whether it is hot.
20. Store chemicals away from food and out of children’s reach.

Directions: Construct a 20-item Multiple Choice Test

1. What utensil is used to transfer little or lots of cooked pasta to a waiting plate without a mess?

a) Pasta spoon or server

b) Serving spoon

c) Soup ladle

d) Two-tined fork

2. It is used to level off ingredients when measuring and to spread frosting and
sandwich filling.

a) Scraper

b) Spatula

c) Serving spoon

d) Ladle

3. It is a chamber or compartment used for cooking, baking, heating, or drying.

a) Skillet

b) Microwave oven

c) Oven

d) Toaster

4. Which tool is used to chop, blend, mix, whip, puree, grate, and liquify all kinds of food?

a) Mixer
b) Beater

c) Juicer

d) Blender
5. It is good for baking but not practical for stovetop or surface cooking. You need
extra care when using it. What is it?

a) Teflon

b) Double broiler

c) Cast iron

d) Glass

6. It is a Filipino dessert made primarily from coconut milk.

a) Cuchinta

b) Maja blanca

c) Palitaw

d) Sapinsapin

7. Which kitchen layout is the most flexible and most popular, as it provides a compact work triangle?

a) Single Wall/Pullman Kitchen

b) G-shaped kitchen

c) L-shaped Kitchen

d) U-shaped Kitchen

8. This style of kitchen makes it easy for one cook to maneuver.

a) Island option

b) Corridor/Galley Kitchen

c) U-shaped Kitchen

d) G-Shaped Kitchen

9. It is ideal for apartments and smaller homes.


a) L-shaped Kitchen

b) Corridor/Galley Kitchen

c) Single Wall/Pullman Kitchen

10. It adds color and flavoring to native delicacies.

a) Butter and margarine

b) Vanilla

c) Cheese

d) Cinnamon

11. The following are advantages of quaternary ammonium compounds; which one is a
disadvantage?

a) Slow destruction of microorganisms

b) Non-irritating

c) Odorless

d) Colorless

12. The process of removing food and other types of soil from a surface such as a dish, glass, or
cutting board.

a) Cleaning

b) Sanitizing

c) Washing

d) Heating

13. Which factor must be considered when using a chemical sanitizer?

a) Steam

b) Radiation

c) Heat

d) Concentration
14. A cleaning agent used to remove heavy accumulations of soil that are difficult to remove with
detergents.

a) Acid cleaners

b) Abrasive cleaner

c) Detergents

d) Iodine

15. A kind of knife used to section raw meat, poultry, and fish. It can be used as a cleaver to
separate small joints or to cut bones.

a) Boning knife

b) Chef’s Knife

c) Butcher knife

d) Citrus Knife

16. ¼ cup : 60 ml


1/3 cup : ____ ml

a) 80 ml

b) 45 ml

c) 85 ml

17. 1 pound :16 oz


2 pounds : _____oz

a) 32 oz

b) 12 oz

c) 8 oz

18. 250 °F : 120°C


125 °F : _____°C

a) 70°C

b) 40°C
c) 60°C

19. 1,000 grams : 1 k


500 grams : _____ k.

a) ¾ k

b) ¼ k

c) ½ k
20. A kitchen work triangle is formed by imaginary lines that connect the:
a) Sink, oven, range

b) Refrigerator, island, range

c) Sink, range, refrigerator

d) Island, refrigerator, and sink

Directions: Construct a 20-item Matching Type Test

MATCHING TYPE
Directions: With hazard risks as bases, match Column A with Column B. Write the letters only. Use a separate sheet for your answers.

A                                                        B
_____ 1. Used for baking elegant and special cakes.      A. Rolling pin
_____ 2. Used to flatten or roll the dough.              B. 0.035 ounces
_____ 3. Used to form desired designs on cakes.          C. Slips, falls
_____ 4. A unit of measurement of mass which is
         equal to ½ oz.                                  D. Sift
_____ 5. A unit of weight equal to 28.35 grams.          E. Shock
_____ 6. 5 whole eggs                                    F. Mortar and pestle
_____ 7. 1 gram                                          G. Change in body function
_____ 8. Used to pound and grind ingredients.            H. Cause harm
_____ 9. Used to bake individual custard cakes.          I. Determinant of health
_____ 10. 16 ounces                                      J. Unsafe workplace
_____ 11. Electricity                                    K. Vibration
_____ 12. Benzene                                        N. Utility tray
_____ 13. Wet floor                                      O. Lump
_____ 14. Hazards                                        P. Shortenings
_____ 15. Work                                           Q. 1 pound
_____ 16. Safety hazards                                 R. Custard cup
_____ 17. Butter or fats used to make pastry crispier.   U. 1 cup
_____ 18. A firm, irregular mass                         V. Ounces
_____ 19. Used to hold ingredients together              W. Pound
_____ 20. Separating coarse particles in the ingredients X. Pastry tip
                                                         Y. Bundt pan
Directions: Construct a 5-item Essay Test.

1. The dimensions for knife cuts are standard industrywide. Discuss the importance of
following these standards in your operation and the cost of ignoring these standards
during food preparation.

2. Selecting a reputable food vendor can have a major impact on the success of your
business. What are some things a restaurant should look for in a food vendor before
purchasing food through that vendor or awarding that vendor a purchasing contract?

3. Develop and describe an emergency action plan for a kitchen. What rules and
procedures will you establish? What events do you consider an emergency? What
jobs and tasks need to be established and assigned in the event of each type of
emergency? How will you train your employees to be ready for such emergencies?
Describe your rationale for these decisions.

4. There are four acceptable thawing methods that can be used in the kitchen (under
running water, refrigeration, as part of the cooking process, and in a microwave).
Discuss appropriate foods, scenarios, and conditions for which you would use each
thawing method. Also discuss the principles behind each thawing method to show a
complete understanding of how each method works to thaw the food.

5. Discuss the advantages and disadvantages of primal cuts, such as a rough beef rib,
versus buying portion control products such as portioned ribeye steaks. What factors
would play into your decision when deciding between the two options? Discuss the
trade-off for selecting either option.

Directions: Read the article entitled “Student Portfolios as an Assessment Tool” written
by Emma McDonald at http://www.educationworld.com/a_curr/columnists/mcdonald/
mcdonald025.shtml. Be able to answer the following questions:
1. How does a student portfolio assess the academic performance of a child?

Set a goal, or purpose, for the portfolio. The goal should be tied to how you plan to use the
portfolio. Take some time to think about what kind of data you want to collect and how
teachers plan to use it. Next, determine how – or if – teachers will grade the portfolios. If the
purpose is merely to collect work samples to pass along to another teacher or parent, there
is no need to actually grade the portfolios. If, however, teachers are looking for an overall
mastery of skills, they will want to grade the work collected. The most efficient way to grade
a portfolio is through a rating scale. If teachers are looking for specific skills, they might
begin with a checklist. That checklist will ensure that all necessary pieces are included.
Each teacher must determine what skills or learning are to be evaluated through student
portfolios. One thing to keep in mind is that, although many portfolios reflect long-term
projects completed over the course of a semester or year, it does not have to be that way.
Educators can have students create portfolios of their work for a particular unit. That
portfolio might count as a project for that particular topic of study. The next unit might not
include the use of a portfolio as an assessment tool. There is no need to collect work in a
portfolio, give an end-of-unit test, and have students complete a major project in connection
with the unit. All three activities are tools to evaluate student learning, and it's overkill for
both teachers and students to use all three.

2. How reliable are student portfolios as an assessment tool?

The portfolio is not the easiest type of assessment to implement, but it can be a very
effective tool. Portfolios show the cumulative efforts and learning of a particular student over
time. They offer valuable data about student improvement and skill mastery. Along with
student reflection, that data provides valuable information about how each student learns
and what is important to him or her in the learning process.

Directions: On the space provided, write in narrative form the learning experience you
had after reading and accomplishing the exercises and activities in this chapter.

Assessment is a tool used in the classroom every day. It is used to measure a student’s
mastery of a skill or knowledge of a given subject. It is also what demonstrates to the
teacher what the students have learned. Educators use that information to determine if they
need to re-teach to a specific student, group, or the entire class. They can also use that
information to determine the pace of their teaching. Assessments are important because, as
future teachers, we need to know what difficulties our students have and what needs to be
refined for them. While I do believe in assessment and feel that it is one of the key
components of teaching, I am more concerned with a child’s process of learning rather than
the overall product that comes from it. This is where grades come in for me. Grades
determine the students’ level of mastery on a subject, nothing more. Grades should not be
the exclusive indicators that a student has learned the information that is presented to them.
It is the things a student learns along the way that truly matter and sometimes cannot be
measured.
Prior to teaching a unit, I believe it is useful to incorporate surveys and diagnostic
assessments to determine what our students understand before instruction.

Observation, combined with anecdotal records, is essential, especially in the early grades.
By observing and keeping track of these observations, teachers are able to tell a lot about
their students. For example, they can see how they interact socially with other peers as well
as how well they carry out a given task. In grades K-3, for example, the first years of school
are an ideal age group for this. The early childhood stage is a time when children develop the most.
They are developing physically, cognitively, and psychologically. Due to this, I feel it would
be more beneficial to assign performance tasks rather than tests. By having students carry
out performance tasks, students can demonstrate they have learned what has been taught
and also show they are learning real life skills that will help them throughout their daily lives.
To achieve this, I see the need to develop and conduct both formative and summative
assessments on a regular basis.
Formative assessments are ongoing assessments such as records of students’
performance, observations, checklists, and rating scales. These are ongoing records that are
used by teachers to improve instructional strategies in the classroom and direct instruction.
According to the textbook, these assessments monitor students' progress during instruction
and learning activities and include feedback and opportunities to improve. Some informal
assessments may take the form of regular classroom activities such as class work, journals,
essays, play-based assessment, or student participation. I plan to utilize them to determine
where students are at the time of learning. The results will indicate the pace of instruction,
and I can modify the way I present the information accordingly.

Summative assessments are used to evaluate the effectiveness of the academic programs
taught. They can be used to determine whether or not students have mastered specific skills
or grasped certain concepts. During this semester I learned that summative assessments
are administered at the end of teaching a lesson or a unit as an indicator of what the
students know and are able to do after the instruction is completed. These assessments will
vary in form, such as quizzes, tests, presentations, essays, and growth portfolios. I plan to
incorporate a variety of assessments in instruction because doing so gives students an
opportunity to demonstrate what they know as a result of the lesson, not based on the type
of assessment. While one child may not be good at tests, they might be good at
presentations, and vice versa. This diverse outlook on assessment is what enables students
to be successful learners.
As future teachers, we have some control over the types of assessments, formative and
summative, that we will administer to our students. However, there are other types of
formal assessments that are data driven and based on statistics. These assessments
include norm-referenced tests and criterion-referenced tests used at the district, state,
and/or national level. Formal assessments can be given to students to test their performance
against other children in their age group and grade level. They may also be given to identify
a student's strengths and weaknesses in comparison to peers. These large-scale tests may
have some benefits, but in our perfect world, they would not exist. These tests are presented
in one way and do not allow students to demonstrate their mastery or understanding,
but only their ability to 'fit a mold'.

In some articles, both guest speakers admitted that, at present, the classroom revolves
around assessments. Everything students do in the classroom today ultimately leads to
progressive assessments and final evaluative assessments. It saddens me that school has
become this assessment ‘bubble’. I believe that we are surrounded by assessment and
since it is a crucial part of our education system, we need assessment in the classroom, but
we should do it in a way that is strictly meant to monitor students' progress and whether we
are teaching effectively.

Often, teachers have a hard time agreeing on what practices are ethical when it comes to
determining a grade. It is a general consensus that grades are a powerful symbol and have
the capacity to impact students in a positive or negative way, since they can represent
different things for different people. The use of rubrics and checklists is important to me.
These checklists and rubrics should be shared with the students constantly and discussed
at length so that students are aware of what they will be graded on and what each aspect
of their grade is based upon. Grades, as I stated earlier, should be limited to the level of
knowledge within the subject that has already been taught, not the student's ability to read
the instructions. I also believe there should be different grades for different things; for
example, a grade for a language arts performance should not include points for unrelated
items.
On a college course syllabus, there is a section that includes all of the elements needed for
a final grade with a total worth for each section. Each of these elements has a rubric of its
own that composes the overall final grade for that assignment. This method of grading is
what I would like to incorporate for students, making the necessary modifications based on
their age group, grade level, school policy, and district policy. I believe that academic skills
are a separate grade from social, emotional, and community skills. If students know what to
expect, a grade will not come as a surprise to them. Students will be able to know what is
required of them to get a passing score or a mastery score.
I also believe in giving students an opportunity to redo an assignment if they believe they
can do better. With the redo, however, they will also include an explanation as to why they
believed they could do better the second time around. I will not take points away with a redo
because the goal is not to penalize students for making an effort to succeed but rather to
help them see where they could improve and to notice said improvement.
If I assess what I teach, my grade results should be valid. If my results are reliable, and they
indicate that the class did poorly on one exam, the assessment indicates that there is
something that my students didn’t understand and, therefore, something I didn’t effectively
teach. In this instance, I should re-examine my grading policy and adjust it to reflect what the
students were able to do, not what I was not able to teach. My philosophy is not only that
assessment is vital for the classroom, but using the results appropriately is crucial to the
continuation of effective teaching.
It is my goal to make assessment and grading a positive element of my classroom for both
my students and me. I want to give many opportunities for my students to do well and
achieve mastery as well as become the best student they can be. Students should not just
be measured by the end result. Learning is a process and I believe that it is in this process
that true learning occurs. Aside from being graded on the basic facts, students need to be
measured on how well they apply their knowledge. Assessment will be a huge part of my
classroom; however, in the future, as I answer the calling of becoming a teacher, I will place
more importance on a student's performance and progress than on a factual test. Down
the road, these students will need the skills learned during their early years. A multiple-
choice question isn’t what is going to help them in the long run. However, the process they
used to learn and decide upon the answer will.

CHAPTER 5

Directions: Answer the following questions briefly on the space provided or do what is being asked for.

1. Do you think that the previous tests you have answered, given by your teachers from the time you entered formal schooling, are valid and reliable? Why do you think so?

I do think that the previous tests I have answered, which were given by our teachers from
the time I entered formal schooling, are valid and reliable. There may be different
approaches now, with studies and research on how to improve the quality of education
through different assessments, but because we were living in that time, it was already fixed
in our minds that that was the valid and reliable way to assess us as students. For me, those
methods worked, and it also crossed my mind that they were the only means available to
teachers to validate the learning capacity of their students.

2. In a Venn diagram, from your own idea, give the similarities and differences of validity and reliability.

(Venn diagram labels: Validity / Reliability)
Reliability and validity are both about how well a method measures something: Reliability
refers to the consistency of a measure (whether the results can be reproduced under the
same conditions). Validity refers to the accuracy of a measure (whether the results really do
represent what they are supposed to measure).
Reliability refers to how consistent the results of a study are or the consistent results of a
measuring test.
This can be split into internal and external reliability. Internal reliability refers to how
consistent the measure is within itself. A personality test should produce the same results
every time for the same participant.

External reliability refers to how consistent the results are when the same procedures are
carried out for a test. For example, if a research study takes place, the results should be
almost replicated if the study is replicated.
Validity refers to whether the study or measuring test is measuring what it claims to
measure. Internal validity refers to whether it is exclusively the independent variable causing
the change or whether there are confounding variables. External validity refers to how well
the laboratory study can be generalised to real-life settings.

5.9 Research Activity

Directions: Be able to define each of the types of validity. Write your answer on the space provided.

Types of Validity

1. WHAT IS CONSTRUCT VALIDITY?

Construct validity refers to the general idea that the realization of a theory
should be aligned with the theory itself. If this sounds like the broader definition of
validity, it’s because construct validity is viewed by researchers as “a unifying
concept of validity” that encompasses other forms, as opposed to a completely
separate type.

It is not always cited in the literature, but, as Drew Westen and Robert Rosenthal
write in “Quantifying Construct Validity: Two Simple Measures,” construct validity
“is at the heart of any study in which researchers use a measure as an index of a
variable that is itself not directly observable.”

The ability to apply concrete measures to abstract concepts is obviously important
to researchers who are trying to measure concepts like intelligence or kindness.
However, it also applies to schools, whose goals and objectives (and therefore
what they intend to measure) are often described using broad terms like “effective
leadership” or “challenging instruction.”

Construct validity ensures the interpretability of results, thereby paving the way for
effective and efficient data-based decision making by school leaders.

2. WHAT IS CRITERION VALIDITY?


Criterion validity refers to the correlation between a test and a criterion that is
already accepted as a valid measure of the goal or question. If a test is highly
correlated with another valid criterion, it is more likely that the test is also valid.

Criterion validity tends to be measured through statistical computations of
correlation coefficients, although it's possible that existing research has already
determined the validity of a particular test that schools want to collect data on.

3. WHAT IS CONTENT VALIDITY?


Content validity refers to the actual content within a test. A test that is valid in
content should adequately examine all aspects that define the objective.

Content validity is not a statistical measurement, but rather a qualitative one. For
example, a standardized assessment in 9th-grade biology is content-valid if it
covers all topics taught in a standard 9th-grade biology course.

Warren Schillingburg, an education specialist and associate superintendent,
advises that determination of content validity “should include several teachers
(and content experts when possible) in evaluating how well the test represents the
content taught.”

While this advice is certainly helpful for academic tests, content validity is of
particular importance when the goal is more abstract, as the components of that
goal are more subjective.

School inclusiveness, for example, may not only be defined by the equality of
treatment across student groups, but by other factors, such as equal opportunities
to participate in extracurricular activities.

Despite its complexity, the qualitative nature of content validity makes it a
particularly accessible measure for all school leaders to take into
consideration when creating data instruments.

5.11 Research Activity

Directions: Be able to define each of the types of reliability. Write your answer on the space provided.

TYPES OF RELIABILITY
The reliability of an assessment refers to the consistency of results. The most basic
interpretation generally references something called test-retest reliability, which is
characterized by the replicability of results. That is to say, if a group of students takes a test
twice, both the results for individual students, as well as the relationship among students’
results, should be similar across tests.
However, there are two other types of reliability: alternate-form and internal
consistency.
1. Alternate form is a measurement of how test scores compare across two similar
assessments given in a short time frame. Alternate form similarly refers to the
consistency of both individual scores and positional relationships.
2. Internal consistency is analogous to content validity and is defined as a measure of
how the actual content of an assessment works together to evaluate understanding
of a concept.

5.12 Evaluative Exercise

Directions: Answer the following question on the space provided.

1. What is an expectancy table? Describe the process of constructing an expectancy table. When do we use an expectancy table?

An expectancy table is a two-way table showing the relationship between two tests.
Helmstadter (1964) notes that an expectancy table provides the probabilities indicating that
students achieving a given test score will behave in a certain way in some second situation.
Steps in Creating an Expectancy Table
Step 1
For each NRT score, count the number of students at SSS Levels that correspond to that
NRT. For example, an NRT of “83” would be counted under SSS level 4.
Notice that the NRT percentiles are in ranges of 1-25, 26-50, and so on. With these ranges,
the process is more efficient.
Step 2
Sum the rows and columns. For example, in row 51-75, there are 6 cases (1+5 = 6).
Calculate the percentage of cases in each cell for each row. For example, in row 51-75,
there is one case under SSS Level 1 and 5 cases under Level 3. Divide each cell with the
Row Total. 1/6 = 17% and 5/6 = 83%.
Continue calculating the percentages for all the other cells.
Converting Frequencies to Percentages
Interpretation of Expectancy Information
The percentages indicate the probability of attaining a score based on performance of
another score.
The percentages are not “guarantees.” They are only probabilities. They answer questions,
such as:
How likely is it that a student at a particular FCAT NRT will attain a certain level on the FCAT
SSS?
Information from expectancy tables can be used to help teachers differentiate instruction by
addressing the academic needs of individual students.
Creating Expectancy Tables for Curriculum-Based Measurements
You may want to use an expectancy table to establish predictive validity for the reading
probes (CBMs) that you develop to monitor your students’ progress in reading.
The scores in the left-hand column are the base-line Word Per Minute Correct scores for the
nine sixth-grade students you have chosen to monitor. The scores in the right column
represent those same students’ most recent FCAT NRT percentile rank in reading.
If you want to find out whether your probes are making a fairly accurate prediction of FCAT
reading scores, you can create an expectancy table
In the left-hand column, we entered a range of reading probe scores.
Across the top row, we entered a range of FCAT NRT scores.
Then we entered the number of students who scored within each range. Example: In the
bottom row of student data, we see the student from our table above who scored 125 WPM
on his reading probes and scored in the 27th percentile on the FCAT NRT entered as a
number 1. This student is the only one who scored in both the 26-50 NRT range and the
100-125 WPM range.
We then divided the total number of students in each cell by the total number in their row to
get the percentage of students scoring in that range.
Teachers can use the information from one test to help predict the performance level
on another test; that is, an expectancy table can be used to display predictive validity
data.
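The row-percentage arithmetic in Step 2 above can be sketched in a few lines of Python. The score ranges and cell counts here are hypothetical, not taken from an actual FCAT data set; only the 51-75 row mirrors the 1/6 and 5/6 example in the text:

```python
# Convert an expectancy table's cell frequencies into row percentages,
# as in Step 2 above. Ranges and counts are hypothetical.

# rows: baseline score ranges; columns: frequencies per outcome level
counts = {
    "51-75": [1, 0, 5, 0],  # e.g., SSS Levels 1-4 for this NRT range
    "76-99": [0, 2, 1, 1],
}

def row_percentages(row):
    """Divide each cell frequency by the row total and round to whole percents."""
    total = sum(row)
    return [round(100 * cell / total) for cell in row]

for label, row in counts.items():
    print(label, row_percentages(row))
```

For the 51-75 row this reproduces the 1/6 = 17% and 5/6 = 83% figures given above; as the text stresses, these percentages are probabilities, not guarantees.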

2. What is the relationship between validity and reliability? Can a test be reliable and yet not valid? Illustrate.

Reliability and validity are both about how well a method measures something: Reliability
refers to the consistency of a measure (whether the results can be reproduced under the
same conditions). Validity refers to the accuracy of a measure (whether the results really do
represent what they are supposed to measure).
A test can be reliable, meaning that the test-takers will get the same score no matter when
or where they take it, within reason of course. But that doesn’t mean that it is valid or
measuring what it is supposed to measure. A test can be reliable without being valid.
However, a test cannot be valid unless it is reliable. For example, when a man wrongly
reports his date of birth consistently, it may be reliable but not valid.
3. Discuss the different measures of reliability. Justify the use of each measure in the context of measuring reliability.

There are four general classes of reliability estimates, each of which estimates reliability
in a different way. They are:

1. Inter-Rater or Inter-Observer Reliability: Used to assess the degree to
which different raters/observers give consistent estimates of the same
phenomenon.
2. Test-Retest Reliability: Used to assess the consistency of a measure
from one time to another.
3. Parallel-Forms Reliability: Used to assess the consistency of the
results of two tests constructed in the same way from the same
content domain.
4. Internal Consistency Reliability: Used to assess the consistency of
results across items within a test.

Let’s discuss each of these in turn.

Inter-Rater or Inter-Observer Reliability

Whenever you use humans as a part of your measurement procedure, you have to
worry about whether the results you get are reliable or consistent. People are notorious
for their inconsistency. We are easily distractible. We get tired of doing repetitive tasks.
We daydream. We misinterpret.
So how do we determine whether two observers are being consistent in their
observations? You probably should establish inter-rater reliability outside of the context
of the measurement in your study. After all, if you use data from your study to establish
reliability, and you find that reliability is low, you’re kind of stuck. Probably it’s best to do
this as a side study or pilot study. And, if your study goes on for a long time, you may
want to reestablish inter-rater reliability from time to time to assure that your raters
aren’t changing.
There are two major ways to actually estimate inter-rater reliability. If your measurement
consists of categories – the raters are checking off which category each observation
falls in – you can calculate the percent of agreement between the raters. For instance,
let’s say you had 100 observations that were being rated by two raters. For each
observation, the rater could check one of three categories. Imagine that on 86 of the
100 observations the raters checked the same category. In this case, the percent of
agreement would be 86%. OK, it’s a crude measure, but it does give an idea of how
much agreement exists, and it works no matter how many categories are used for each
observation.
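The percent-of-agreement computation just described can be sketched as follows. The two raters' category choices are fabricated so that 86 of the 100 observations agree, matching the example:

```python
# Percent agreement between two raters who each assign one of three
# categories to 100 observations. The ratings are fabricated so that
# the raters agree on 86 of the 100 observations.
rater_1 = ["A"] * 86 + ["B"] * 10 + ["C"] * 4
rater_2 = ["A"] * 86 + ["C"] * 10 + ["B"] * 4

def percent_agreement(ratings_1, ratings_2):
    """Share of observations on which both raters chose the same category."""
    matches = sum(1 for a, b in zip(ratings_1, ratings_2) if a == b)
    return 100 * matches / len(ratings_1)

print(percent_agreement(rater_1, rater_2))  # 86.0
```

As the text says, this is a crude measure, but it works no matter how many categories are used for each observation.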
The other major way to estimate inter-rater reliability is appropriate when the measure is
a continuous one. There, all you need to do is calculate the correlation between the
ratings of the two observers. For instance, they might be rating the overall level of
activity in a classroom on a 1-to-7 scale. You could have them give their rating at
regular time intervals (e.g., every 30 seconds). The correlation between these ratings
would give you an estimate of the reliability or consistency between the raters.
You might think of this type of reliability as “calibrating” the observers. There are other
things you could do to encourage reliability between observers, even if you don’t
estimate it. For instance, I used to work in a psychiatric unit where every morning a
nurse had to do a ten-item rating of each patient on the unit. Of course, we couldn’t
count on the same nurse being present every day, so we had to find a way to assure
that any of the nurses would give comparable ratings. The way we did it was to hold
weekly “calibration” meetings where we would have all of the nurses' ratings for several
patients and discuss why they chose the specific values they did. If there were
disagreements, the nurses would discuss them and attempt to come up with rules for
deciding when they would give a “3” or a “4” for a rating on a specific item. Although this
was not an estimate of reliability, it probably went a long way toward improving the
reliability between raters.

Test-Retest Reliability
We estimate test-retest reliability when we administer the same test to the same sample
on two different occasions. This approach assumes that there is no substantial change
in the construct being measured between the two occasions. The amount of time
allowed between measures is critical. We know that if we measure the same thing twice
that the correlation between the two observations will depend in part on how much time
elapses between the two measurement occasions. The shorter the time gap, the higher
the correlation; the longer the time gap, the lower the correlation. This is because the
two observations are related over time – the closer in time we get the more similar the
factors that contribute to error. Since this correlation is the test-retest estimate of
reliability, you can obtain considerably different estimates depending on the interval.
Parallel-Forms Reliability
In parallel forms reliability you first have to create two parallel forms. One way to
accomplish this is to create a large set of questions that address the same construct
and then randomly divide the questions into two sets. You administer both instruments
to the same sample of people. The correlation between the two parallel forms is the
estimate of reliability. One major problem with this approach is that you have to be able
to generate lots of items that reflect the same construct. This is often no easy feat.
Furthermore, this approach makes the assumption that the randomly divided halves are
parallel or equivalent. Even by chance this will sometimes not be the case. The parallel
forms approach is very similar to the split-half reliability described below. The major
difference is that parallel forms are constructed so that the two forms can be used
independent of each other and considered equivalent measures. For instance, we might
be concerned about a testing threat to internal validity. If we use Form A for the pretest
and Form B for the posttest, we minimize that problem. It would even be better if we
randomly assign individuals to receive Form A or B on the pretest and then switch them
on the posttest. With split-half reliability we have an instrument that we wish to use as a
single measurement instrument and only develop randomly split halves for purposes of
estimating reliability.

Internal Consistency Reliability


In internal consistency reliability estimation we use our single measurement instrument
administered to a group of people on one occasion to estimate reliability. In effect we
judge the reliability of the instrument by estimating how well the items that reflect the
same construct yield similar results. We are looking at how consistent the results are for
different items for the same construct within the measure. There are a wide variety of
internal consistency measures that can be used.

Average Inter-item Correlation


The average inter-item correlation uses all of the items on our instrument that are
designed to measure the same construct. We first compute the correlation between
each pair of items, as illustrated in the figure. For example, if we have six items we will
have 15 different item pairings (i.e., 15 correlations). The average inter-item correlation
is simply the average or mean of all these correlations. In the example, we find an
average inter-item correlation of .90 with the individual correlations ranging from .84
to .95.
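To make this concrete, here is a small sketch with made-up Likert-style responses from five respondents on six items; it computes all 15 pairwise correlations and averages them.

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Hypothetical data: one row per respondent, one column per item
data = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 1],
]
items = list(zip(*data))              # transpose: one tuple of scores per item

pairs = list(combinations(items, 2))  # 6 items -> 15 pairings
avg_inter_item = sum(pearson(a, b) for a, b in pairs) / len(pairs)
print(len(pairs), round(avg_inter_item, 2))
```

The data here are fabricated for illustration, so the resulting average will differ from the .90 quoted in the text.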

Average Item-total Correlation


This approach also uses the inter-item correlations. In addition, we compute a total
score for the six items and use that as a seventh variable in the analysis. The figure
shows the six item-to-total correlations at the bottom of the correlation matrix. They
range from .82 to .88 in this sample analysis, with the average of these at .85.
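Continuing the same idea, a sketch of the item-total computation, again with invented responses from five respondents on six items: the total score serves as the "seventh variable."

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

data = [  # hypothetical: rows are respondents, columns are six items
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 1],
]
totals = [sum(row) for row in data]   # total score = the "seventh variable"

item_total = [pearson(item, totals) for item in zip(*data)]
average_item_total = sum(item_total) / len(item_total)
print([round(r, 2) for r in item_total], round(average_item_total, 2))
```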

Split-Half Reliability
In split-half reliability we randomly divide all items that purport to measure the same
construct into two sets. We administer the entire instrument to a sample of people and
calculate the total score for each randomly divided half. The split-half reliability estimate,
as shown in the figure, is simply the correlation between these two total scores. In the
example it is .87.
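A sketch of one split-half estimate on the same kind of made-up data. Note that many texts then apply the Spearman-Brown correction, 2r/(1+r), to estimate the reliability of the full-length instrument, since each half is only half as long as the real test.

```python
import random
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

data = [  # hypothetical responses: rows are respondents, columns are six items
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 1],
]

random.seed(42)                       # fixed seed so the random split is repeatable
cols = list(range(6))
random.shuffle(cols)
half_a, half_b = cols[:3], cols[3:]   # randomly divided halves of the item set

score_a = [sum(row[i] for i in half_a) for row in data]
score_b = [sum(row[i] for i in half_b) for row in data]

split_half = pearson(score_a, score_b)
full_length = 2 * split_half / (1 + split_half)   # Spearman-Brown correction
print(round(split_half, 2), round(full_length, 2))
```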
Cronbach’s Alpha (a)
Imagine that we compute one split-half reliability and then randomly divide the items into
another set of split halves and recompute, and keep doing this until we have computed
all possible split half estimates of reliability. Cronbach’s Alpha is mathematically
equivalent to the average of all possible split-half estimates, although that’s not how we
compute it. Notice that when I say we compute all possible split-half estimates, I don’t
mean that each time we go and measure a new sample! That would take forever. Instead,
we calculate all split-half estimates from the same sample. Because we measured all of
our sample on each of the six items, all we have to do is have the computer analysis do
the random subsets of items and compute the resulting correlations. The figure shows
several of the split-half estimates for our six item example and lists them as SH with a
subscript. Just keep in mind that although Cronbach’s Alpha is equivalent to the
average of all possible split half correlations we would never actually calculate it that
way. Some clever mathematician (Cronbach, I presume!) figured out a way to get the
mathematical equivalent a lot more quickly.
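That quicker route is the standard computational formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A sketch with invented data for five respondents on six items:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

data = [  # hypothetical responses: rows are respondents, columns are six items
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 1],
]
k = len(data[0])                                          # number of items
item_variances = sum(variance(list(col)) for col in zip(*data))
total_variance = variance([sum(row) for row in data])

alpha = k / (k - 1) * (1 - item_variances / total_variance)
print(round(alpha, 2))
```

When the items hang together well, the total-score variance greatly exceeds the sum of the item variances and alpha approaches 1.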

5.13 Web Video

Directions: Watch a Video entitled “Using Item Analysis in


Blackboard Learn“ at
https://en-us.help.blackboard.com/Learn/Instructor/Tests_Pools_Surveys/120_Item_Analysis.
Be able to answer the following questions on the space provided.
1. Was the message of the host clear? Were you able to
grasp the main idea?
Yes and Yes

2. What are the salient points of the host? Enumerate.


The Item Analysis output consists of four parts: A
summary of test statistics, a test frequency distribution, an
item quintile table, and item statistics.

5.14 Reflection

Directions: On the space provided, write in narrative form the learning


experience you had after reading and accomplishing the exercises
and activities in this chapter.

Teachers should be skilled in developing, administering, scoring, interpreting, and using the
results of assessment methods appropriate for instructional decisions. Teachers should be
skilled at analyzing the quality of each assessment technique they use and at communicating
assessment results to others, including developing valid pupil grading procedures.
Item analysis was taught in this class and used on each exam that the students took. I
had never before seen item analysis used on an exam, and I am now a strong believer in
doing this for my future students. Item analysis is for the benefit of teachers and students,
and finds badly written items, items that don’t connect to the objectives, miskeyed items,
and particularly difficult items.
My output is an example of a made-up item analysis of a multiple-choice question. While
item analysis has plenty of benefits, one prominent negative aspect is the time that it takes.
If I were to do item analysis by hand on a 50-question exam, it would take me a very long
time. Luckily, there are computer programs that can do this for you. I hope to learn how to
use them, as well as perfect how to do it myself, before I become a teacher.
Furthermore, in this topic I learned how to plan a test and build a test, mostly for
multiple-choice items, though the same principles apply to true-false, matching, and
short-answer items. By analyzing results, you can refine your testing. At the classroom
level, item analysis will tell you which questions the students were all guessing on; if you
find a question that most of them found very difficult, you can reteach that concept. Doing
item analysis on pretests is also useful: if you find a question they all got right, you need
not waste more time on that area, and examining the wrong answers they choose helps you
identify common misconceptions, which you cannot tell just from the score on the total test
or from the class average. At the individual level, you can isolate the specific errors a
particular child made. After you have planned these tests, written sound questions, and
analyzed the results, you will know more about these students than they know about
themselves.
For professional development, doing the occasional item analysis will help teach you how
to become a better test writer, and you are also documenting how good your evaluation is,
which is useful for dealing with parents or principals if there is ever a dispute. Once you
start bringing out these statistics, parents and administrators will be more inclined to
believe that you know what you are talking about when a student fails. If a parent says,
“I think your question stinks,” you can answer, “According to the item analysis, this
question appears to have worked well.” In fairness, face validity takes priority over
statistics any day, and if the analysis shows that the question really does not work, you
should already have dropped it.
If I think about it, it is important to do this analysis before you hand the test back to the
student, let alone before the parent sees it. This is one area where even many otherwise
very good classroom teachers fall down: they think they are doing a good job of evaluation,
but without doing item analysis they cannot really know. Part of being a professional is
going beyond the illusion of doing a good job to finding out whether you really are. Many
teachers do not know what to do and address the question only indirectly, waiting for
complaints from students, parents, and perhaps other teachers.
Item analysis can be a powerful technique available to instructors for the guidance and
improvement of instruction. For this to be so, the items analyzed must be valid measures of
instructional objectives. Further, the items must be diagnostic; that is, knowledge of which
incorrect options students select must be a clue to the nature of the misunderstanding, and
thus prescriptive of appropriate remediation. In addition, instructors who construct their
own examinations may greatly improve the effectiveness of test items and the validity of
test scores if they select and rewrite their items on the basis of item performance data.
Such data are available to instructors who have their examination answer sheets scored at
the computer laboratory scoring office. This chapter described major concepts related to
item analysis, including validity, reliability, item difficulty, and item discrimination,
particularly in relation to criterion-referenced tests. It discussed how these concepts can be
used to revise and improve items and listed general guidelines for test development.
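As a small illustration of two of the statistics mentioned, item difficulty and item discrimination, here is a sketch using made-up responses to a single item, with students already ordered from highest to lowest total test score (real item-analysis programs typically use the top and bottom 27% rather than halves).

```python
# 1 = answered correctly, 0 = incorrectly; rows ordered by total test score
item_responses = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]

n = len(item_responses)
difficulty = sum(item_responses) / n          # proportion correct (the p-value)

upper = item_responses[: n // 2]              # top-scoring half of the class
lower = item_responses[n // 2:]               # bottom-scoring half
discrimination = sum(upper) / len(upper) - sum(lower) / len(lower)

print(difficulty, discrimination)             # 0.6 and 0.4 for this made-up data
```

A positive discrimination index means the stronger students got the item right more often; a value near zero or negative flags an item worth reviewing.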

CHAPTER 6

Directions: Answer the following questions briefly on the space provided.


1. How do you assess the performance of students other than
objective tests or written tests?
The following are some of the other ways to assess the
performance of students.
Performance Tasks
Performance tasks are hands-on activities that require
students to demonstrate their ability to perform certain actions.
This category of assessment covers an extremely wide range
of behaviors, including designing products or experiments,
gathering information, tabulating and analyzing data,
interpreting results, and preparing reports or presentations.

Senior Projects
Senior projects are distinct from written assessments and
performance tasks because they are cumulative, i.e., they
reflect work done over an extended period rather than in
response to a particular prompt. The term senior project is
used here to identify a particular type of culminating event in
which students draw upon the skills they have developed over
time. It has three components: a research paper, a product
or activity, and an oral presentation, all associated with a
single career-related theme or topic.
Portfolios
Like a senior project, a portfolio is a cumulative assessment
that represents a student’s work and documents his or her
performance. However, whereas a senior project focuses on a
single theme, a portfolio may contain any of the forms of
assessments described above plus additional materials such
as work samples, official records, and student-written
information.

COMPARING SELECTED-RESPONSE AND ALTERNATIVE


ASSESSMENTS
For decades, selected-response tests (multiple-choice,
matching, and true-false) have been the preferred technique
for measuring student achievement, particularly in large-scale
testing programs. In one form or another, selected-response
measures have been used on a large scale for seventy-five
years. Psychometricians have developed an extensive theory
of multiple-choice testing, and test developers have
accumulated a wealth of practical expertise with this form of
assessment.

2. Have you encountered the word, “rubrics” before? What’s your idea
about it?
Yes, I have encountered the word rubrics before. My idea was that a
rubric gives clear instructions on how our outputs are going to be graded
and serves as the basis for assessing our activities.

Directions: Give an example in each of the two major types of Rubrics, namely:
a. Holistic Rubric
Breakfast in Bed: Holistic Rubric
Score 4: All food is perfectly cooked, presentation surpasses expectations, and the
recipient is kept exceptionally comfortable throughout the meal.
Score 3: Food is cooked correctly, the meal is presented in a clean and well-organized
manner, and the recipient is kept comfortable throughout the meal.
Score 2: Some food is cooked poorly, some aspects of presentation are sloppy or unclean,
or the recipient is uncomfortable at times.
Score 1: Most of the food is cooked poorly, the presentation is sloppy or unclean, and the
recipient is uncomfortable most of the time.

b. Dimensional/Analytical Rubric
Expected Results and Outcomes
Content Standard: The learner demonstrates understanding of the basic concepts and
principles underlying the process and delivery of cooking:
 Process flow in various methods of cooking: frying, steaming, boiling, baking,
stewing, sautéing, roasting, etc.
 Project plan
 Four M’s (Manpower, Machine, Materials, Methods) of production in cooking
 Evaluation of products
 Cost of production
 Pricing of products
Performance Standard: The learner produces marketable original/new meal products
following the principles underlying the process and delivery in cooking.
Essential Understanding: Applying the basic concepts and principles underlying the
process and delivery in cooking is essential to producing marketable meal products.
Essential Question: Why do we need to understand the basic concepts and principles
underlying the process and delivery in cooking?

The Assessment Process


Product/Performance: Marketable original/new meal products following the basic concepts
and principles underlying the process and delivery in cooking.
Instructions: Please rate the different aspects of the evidences of understanding of the
process and delivery in cooking a marketable original or new meal product. Please check
all the criteria/indicators evident and put the level of performance appropriate to the given
work. You can put NA (Not Applicable criterion/indicator) or add an appropriate one. This is
still in draft form, so please feel free to improve it.
Criteria at the Level of Understanding
(Performance levels: 4 = Excellent, 3 = Very Satisfactory, 2 = Satisfactory, 1 = Poor)

1. Explanation (Evidence: Oral Presentation/Project Plan)
Explained the basic concepts and principles underlying the process and delivery in cooking
with the following traits: Clear; Comprehensive; Coherent; With scientific basis; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 to 3 of the given indicators.
1: FAILED to satisfy 4 or more of the given indicators.

2. Interpretation (Evidence: Oral Presentation/Product)
Shows the significance of the process and delivery of cooking in producing new products
with the following traits: Clear; Original; Creative; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 of the given indicators.
1: FAILED to satisfy 3 or more of the given indicators.

3. Application (Evidence: Work Plan/Exhibition of Work)
Exhibits a marketable meal product following the process and delivery in cooking,
satisfying the following traits: Original; Creative; With nutritive value; Cost-efficient; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 to 3 of the given indicators.
1: FAILED to satisfy 4 or more of the given indicators.

4. Perspective (Evidence: Oral Presentation)
Compares and contrasts various methods and techniques in cooking with the following
traits: Clear; Concise; Appropriate; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 of the given indicators.
1: FAILED to satisfy 3 or more of the given indicators.

5. Empathy (Evidence: Work Plan/Oral Presentation/Reflective Journal)
Considers the return of producing marketable meal products satisfying the following traits:
Profitable; Good quality; Open; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 of the given indicators.
1: FAILED to satisfy 3 or more of the given indicators.

6. Self-Knowledge (Evidence: Oral Presentation/Reflective Journal)
Reflects on the production process of cooking meal products with the following traits:
Clear; Confident; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy the first of the given indicators.
2: FAILED to satisfy the second of the given indicators.
1: Satisfied none of the given indicators.
Criteria at the Level of Performance
(Performance levels: 4 = Excellent, 3 = Very Satisfactory, 2 = Satisfactory, 1 = Poor)

1. Marketability (Evidence: Product)
The meal product satisfies the following traits: Good taste; Good appearance; Affordable
price; Good packaging; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 to 3 of the given indicators.
1: FAILED to satisfy 4 or more of the given indicators.

2. Originality (Evidence: Product)
The meal product satisfies the following traits: Unique; With new value added; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy the second only of the given indicators.
2: FAILED to satisfy the first only of the given indicators.
1: Satisfied none of the given indicators.

3. Compliance with Standards (Evidence: Demonstration/Observation)
Cooking satisfies the following traits: Used correct tools for cooking; Used the correct
equipment for cooking; Prepared correctly all the ingredients for cooking; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 of the given indicators.
1: FAILED to satisfy 3 or more of the given indicators.

4. Application of Procedure (Evidence: Demonstration/Observation)
Cooking was done with the following traits: Followed correct food handling procedures;
Followed safe food handling procedures; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy the second only of the given indicators.
2: FAILED to satisfy the first only of the given indicators.
1: Satisfied none of the given indicators.

5. Observance of Work Habits (Evidence: Demonstration and Observation)
Cooking was done with the following traits: Tie hair back; Wash hands before and
throughout the handling of food; Left the kitchen clean; All dishes were washed, dried, and
put in proper places; Tables and countertops were clean and dry; Others.
4: Satisfied ALL of the identified indicators checked above.
3: FAILED to satisfy 1 of the given indicators.
2: FAILED to satisfy 2 to 4 of the given indicators.
1: FAILED to satisfy 5 or more of the given indicators.

6. Speed/Time
4: Cooked/prepared the food ahead of the time set.
3: Cooked/prepared the food just in time for the presentation.
2: Cooked/prepared the food while the presentation is going on.
1: Cooked/prepared the food after everyone has presented.

Final Score

Cooking Class Rubric (1 pt = Poor, 2 pts = Fair, 3 pts = Good)

Recipe Preview
Poor (1): Student did not pay attention while the recipe was previewed in class.
Fair (2): Student paid sporadic attention to the recipe preview.
Good (3): Student attended to the entire recipe preview.

Preparation
Poor (1): Student did not wash hands or tie hair back; did not rewash hands after touching
hair, face, etc.
Fair (2): Student did not complete both requirements: failed to wash hands or to tie hair
back.
Good (3): Student washed hands properly at the beginning and throughout the laboratory
and tied hair back.

Cooperation
Poor (1): Student worked only with prodding, did not participate in all tasks, and did not
demonstrate a willingness to work.
Fair (2): Student worked but complained, refused non-preferred tasks, or quit before all
tasks were complete.
Good (3): Student demonstrated a willingness to complete all tasks, including clean-up
tasks; worked steadily through the laboratory and participated in all kitchen tasks.

Skill Practice
Poor (1): Student did not practice the demonstrated techniques for food preparation.
Fair (2): Student used some of the demonstrated techniques but did not pay attention to
details.
Good (3): Student used the demonstrated techniques for food preparation during the
laboratory and paid attention to details.

Safety
Poor (1): Did not follow safety rules, did not use safe food handling techniques, did not use
kitchen equipment in a safe manner, and did not clean up during preparation to prevent
accidents.
Fair (2): Student tried to use equipment safely and correctly but was careless at times and
did not always follow the rules; attempted to follow safe food handling procedures.
Good (3): Student demonstrated safe and correct use of all kitchen equipment used for the
laboratory and followed safe food handling procedures.

Clean-Up
Poor (1): The student left unwashed items; counters and tables were not cleaned well; dirty
towels and dishrags were left lying about in the kitchen lab.
Fair (2): Student washed, dried, and put away dishes but left counters unwashed and
tables dirty; laundry may or may not have been picked up.
Good (3): Student left the kitchen clean; all dishes were washed, dried, and put away;
tables and countertops were clean and dry; all laundry was gathered up and put in the
wash area.

Directions: List down the guidelines in creating a rubric.

A good rubric needs to be designed with care and precision in order to truly help teachers
distribute and receive the expected work.
Steps to Create a Rubric
The following six steps will help to use a rubric for assessing an essay, a project, group
work, or any other task that does not have a clear right or wrong answer.
Step 1: Define Your Goal
Before you can create a rubric, you need to decide the type of rubric you’d like to use, and
that will largely be determined by your goals for the assessment.
Ask yourself the following questions:
How detailed do I want my feedback to be?
How will I break down my expectations for this project?
Are all of the tasks equally important?
How do I want to assess performance?
What standards must the students hit in order to achieve acceptable or exceptional
performance?
Do I want to give one final grade on the project or a cluster of smaller grades based on
several criteria?
Am I grading based on the work or on participation? Am I grading on both?
Once you’ve figured out how detailed you’d like the rubric to be and the goals you are trying
to reach, you can choose a type of rubric.
Step 2: Choose a Rubric Type
Although there are many variations of rubrics, it can be helpful to at least have a standard
set to help you decide where to start. Here are two that are widely used in teaching as
defined by DePaul University’s Graduate Educational department:
Analytic Rubric: This is the standard grid rubric that many teachers routinely use to assess
students’ work. This is the optimal rubric for providing clear, detailed feedback. With an
analytic rubric, criteria for the students’ work is listed in the left column and performance
levels are listed across the top. The squares inside the grid will typically contain the specs
for each level. A rubric for an essay, for example, might contain criteria like “Organization,
Support, and Focus,” and may contain performance levels like “(4) Exceptional, (3)
Satisfactory, (2) Developing, and (1) Unsatisfactory.” The performance levels are typically
given percentage points or letter grades and a final grade is typically calculated at the end.
The scoring rubrics for the ACT and SAT are designed this way, although when students
take them, they will receive a holistic score.
Holistic Rubric: This is the type of rubric that is much easier to create, but much more difficult
to use accurately. Typically, a teacher provides a series of letter grades or a range of
numbers (1-4 or 1-6, for example) and then assigns expectations for each of those scores.
When grading, the teacher matches the student work in its entirety to a single description on
the scale. This is useful for grading multiple essays, but it does not leave room for detailed
feedback on student work.
Step 3: Determine Your Criteria
This is where the learning objectives for your unit or course come into play. Here, you’ll need
to brainstorm a list of knowledge and skills you would like to assess for the project. Group
them according to similarities and get rid of anything that is not absolutely critical. A rubric
with too many criteria is difficult to use! Try to stick with 4-7 specific subjects for which you’ll
be able to create unambiguous, measurable expectations in the performance levels. You’ll
want to be able to spot the criteria quickly while grading and be able to explain them quickly
when instructing your students. In an analytic rubric, the criteria are typically listed along the
left column.
Step 4: Create Your Performance Levels
Once you have determined the broad levels you would like students to demonstrate mastery
of, you will need to figure out what type of scores you will assign based on each level of
mastery. Most ratings scales include between three and five levels. Some teachers use a
combination of numbers and descriptive labels like “(4) Exceptional, (3) Satisfactory, etc.”
while other teachers simply assign numbers, percentages, letter grades or any combination
of the three for each level. You can arrange them from highest to lowest or lowest to highest
as long as your levels are organized and easy to understand.
Step 5: Write Descriptors for Each Level of Your Rubric
This is probably the most difficult step in creating a rubric. Here, you will need to write short
statements of your expectations underneath each performance level for every single criterion.
The descriptions should be specific and measurable. The language should be parallel to
help with student comprehension and the degree to which the standards are met should be
explained.
Again, to use an analytic essay rubric as an example, if your criterion was “Organization” and
you used the (4) Exceptional, (3) Satisfactory, (2) Developing, and (1) Unsatisfactory scale,
you would need to write the specific content a student would need to produce to meet each
level. It could look something like this:
Organization:
4 (Exceptional): Organization is coherent, unified, and effective in support of the paper’s
purpose and consistently demonstrates effective and appropriate transitions between ideas
and paragraphs.
3 (Satisfactory): Organization is coherent and unified in support of the paper’s purpose and
usually demonstrates effective and appropriate transitions between ideas and paragraphs.
2 (Developing): Organization is coherent in support of the essay’s purpose, but is ineffective
at times and may demonstrate abrupt or weak transitions between ideas or paragraphs.
1 (Unsatisfactory): Organization is confused and fragmented. It does not support the
essay’s purpose and demonstrates a lack of structure or coherence that negatively affects
readability.
A holistic rubric would not break down the essay’s grading criteria with such precision. The
top two tiers of a holistic essay rubric would look more like this:
6 = Essay demonstrates excellent composition skills including a clear and thought-provoking
thesis, appropriate and effective organization, lively and convincing supporting materials,
effective diction and sentence skills, and perfect or near perfect mechanics including spelling
and punctuation. The writing perfectly accomplishes the objectives of the assignment.
5 = Essay contains strong composition skills including a clear and thought-provoking thesis,
but development, diction, and sentence style may suffer minor flaws. The essay shows
careful and acceptable use of mechanics. The writing effectively accomplishes the goals of
the assignment.
Step 6: Revise Your Rubric
After creating the descriptive language for all of the levels (making sure it is parallel, specific
and measurable), you need to go back through and limit your rubric to a single page. Too
many parameters will be difficult to assess at once, and may be an ineffective way to assess
students’ mastery of a specific standard. Consider the effectiveness of the rubric, asking for
student understanding and co-teacher feedback before moving forward. Do not be afraid to
revise as necessary. It may even be helpful to grade a sample project in order to gauge the
effectiveness of your rubric. Adjust the rubric if need be before handing it out; once it is
distributed, it will be difficult to retract.
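Once the descriptors are settled, turning the levels awarded on an analytic rubric into a grade is simple arithmetic. A minimal sketch, where the criteria names, weights, and awarded levels are all invented for illustration:

```python
# Hypothetical analytic rubric: level awarded per criterion (scale 1-4)
awarded = {"Organization": 4, "Support": 3, "Focus": 3}
weights = {"Organization": 0.4, "Support": 0.3, "Focus": 0.3}  # weights sum to 1.0
max_level = 4

# Weighted average of the levels, expressed as a percentage of the top level
weighted_level = sum(awarded[c] * weights[c] for c in awarded)
percent = weighted_level / max_level * 100
print(round(percent, 1))
```

A holistic rubric needs no such arithmetic: the single matched description already is the grade.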

Directions: Read an article entitled “What is Performance Task?” by J.


McTighe at https://blog.performancetask.com/what-is-a-performance-task-
part-1-9fa0d99ead3b#.vt1x28shr Answer the following questions on the
space provided.
1. What is a performance task?
A performance task is any learning activity or assessment that asks
students to perform to demonstrate their knowledge, understanding and
proficiency.
2. What are the characteristics of Performance Tasks?
Characteristics of Performance Tasks
While any performance by a learner might be considered a performance
task (e.g., tying a shoe or drawing a picture), it is useful to distinguish
between the application of specific and discrete skills (e.g., dribbling a
basketball) from genuine performance in context (e.g., playing the game
of basketball in which dribbling is one of many applied skills). Thus, when
I use the term performance tasks, I am referring to more complex and
authentic performances.
Here are seven general characteristics of performance tasks:
1. Performance tasks call for the application of knowledge and skills, not
just recall or recognition.
In other words, the learner must actually use their learning to perform.
These tasks typically yield a tangible product (e.g., graphic display, blog
post) or performance (e.g., oral presentation, debate) that serve as
evidence of their understanding and proficiency.
2. Performance tasks are open-ended and typically do not yield a
single, correct answer.
Unlike selected- or brief constructed- response items that seek a “right”
answer, performance tasks are open-ended. Thus, there can be different
responses to the task that still meet success criteria. These tasks are
also open in terms of process; i.e., there is typically not a single way of
accomplishing the task.
3. Performance tasks establish novel and authentic contexts for
performance.
These tasks present realistic conditions and constraints for students to
navigate. For example, a mathematics task would present students with
a never-before-seen problem that cannot be solved by simply “plugging
in” numbers into a memorized algorithm. In an authentic task, students
need to consider goals, audience, obstacles, and options to achieve a
successful product or performance. Authentic tasks have a side benefit
— they convey purpose and relevance to students, helping learners see
a reason for putting forth effort in preparing for them.
4. Performance tasks provide evidence of understanding via transfer.
Understanding is revealed when students can transfer their learning to
new and “messy” situations. Note that not all performances require
transfer. For example, playing a musical instrument by following the
notes or conducting a step-by-step science lab require minimal transfer.
In contrast, rich performance tasks are open-ended and call for “higher-order
thinking” and the thoughtful application of knowledge and skills in
context, rather than a scripted or formulaic performance.
5. Performance tasks are multi-faceted.
Unlike traditional test “items” that typically assess a single skill or fact,
performance tasks are more complex. They involve multiple steps and
thus can be used to assess several standards or outcomes.
6. Performance tasks can integrate two or more subjects as well as 21st
century skills.
In the wider world beyond the school, most issues and problems do not
present themselves neatly within subject area “silos.” While performance
tasks can certainly be content-specific (e.g., mathematics, science, social
studies), they also provide a vehicle for integrating two or more subjects
and/or weaving in 21st century skills and Habits of Mind. One natural way
of integrating subjects is to include a reading, research, and/or
communication component (e.g., writing, graphics, oral or technology
presentation) to tasks in content areas like social studies, science,
health, business, health/physical education. Such tasks encourage
students to see meaningful learning as integrated, rather than something
that occurs in isolated subjects and segments.
7. Performances on open-ended tasks are evaluated with established
criteria and rubrics.
Since these tasks do not yield a single answer, student products and
performances should be judged against appropriate criteria aligned to the
goals being assessed. Clearly defined and aligned criteria enable
defensible, judgment-based evaluation. More detailed scoring rubrics,
based on criteria, are used to profile varying levels of understanding and
proficiency.

Directions: On the space provided, write in narrative form the learning
experience you had after reading and accomplishing the exercises and
activities in this chapter.

At the beginning of this exercise, even though I had already read about some
of these topics, especially rubrics, accomplishing the exercises and
activities still brought new discoveries for me, particularly in constructing
a rubric myself; it is not easy, but it is fun and challenging. Before, I
thought that exams and quizzes were just a way for our teachers to evaluate
whether we remembered or learned anything over the span of a lesson, but it
was eye-opening to learn that through those activities, exams, projects, and
quizzes, information about our strengths and weaknesses is assessed and made
known to our professors, and it gives them information they can integrate
into their learning objectives. It is fulfilling to know that in every little
thing our teachers do, the learning, progress, development, and growth of
their students is their main priority; it makes me cherish this profession
even more.
In addition, the following are things that crossed my mind while
accomplishing the exercises and activities. In everything we use, there is
always a benefit and a limitation that we can identify through the process of
using it, especially for instruction.
Benefits of Rubrics
Rubrics contribute to student learning and program improvement in a number of
ways— some obvious, others less so.
Rubrics make the learning target more clear. If students know what the learning
target is, they are better able to hit it (Stiggins, 2001). When giving students a
complex task to complete, such as building an architectural model or putting
together a portfolio of their best photographs, students who know in advance what
the criteria are for assessing their performance will be better able to construct
models or select photographs that demonstrate their skills in those areas.

Rubrics guide instructional design and delivery. When teachers have carefully
articulated their expectations for student learning in the form of a rubric, they are
better able to keep the key learning targets front and center as they choose
instructional approaches and design learning environments that enable students
to achieve these outcomes (Arter & McTighe, 2001).

Rubrics make the assessment process more accurate and fair. By referring to a
common rubric in reviewing each student product or performance, a teacher is
more likely to be consistent in his or her judgments. A rubric helps to anchor
judgments because it continually draws the reviewer’s attention to each of the key
criteria so that the teacher is less likely to vary her application of the criteria from
student to student. Furthermore, when there are multiple raters (e.g., large lecture
classes that use teaching assistants as graders), the consistency across these
raters is likely to be higher when they are all drawing on the same detailed
performance criteria. Additionally, a more prosaic benefit is the decided decrease
in student complaints about grades at semester’s end.

Rubrics provide students with a tool for self-assessment and peer feedback.
When students have the assessment criteria in hand as they are completing a
task, they are better able to critique their own performances (Hafner &
Hafner, 2004). A hallmark of a professional is the ability to accurately and insightfully
assess one’s own work. In addition, rubrics can also be used by classmates to
give each other specific feedback on their performances. (For both psychometric
and pedagogical reasons, we recommend that peers give only formative feedback
that is used to help the learner make improvements in the product or
performance, and not give ratings that are factored into a student’s grade.)

Rubrics have the potential to advance the learning of students of color, first
generation students, and those from non-traditional settings. An often
unrecognized benefit of rubrics is that they can make learning expectations or
assumptions about the tasks themselves more explicit (Andrade & Ying, 2005). In
academic environments we often operate on unstated cultural assumptions about
the expectations for student performance and behavior and presume that all
students share those same understandings. However, research by Lisa Delpit
(1988) and Shirley Heath (1983), for example, highlights the many ways that
expectations in schools are communicated through subtle and sometimes
unrecognizable ways for students of color or non-native English speakers who
may have been raised with a different (but valid) set of rules and assumptions
about language, communication, and school performance itself.

Limitations of Rubrics
While well-designed rubrics make the assessment process more valid and
reliable, their real value lies in advancing the teaching and learning process. But
having a rubric doesn’t necessarily mean that the evaluation task is simple or
clear-cut. The best rubrics allow evaluators and teachers to draw on their
professional knowledge and to apply it in ways that keep the rating process
from falling victim to personality variations or the limitations of human
information processing.

A serious concern with rubrics, however, is how long it takes to create them,
especially writing the descriptions of performances at each level. With that in
mind, rubrics should be developed for only the most important and complex
assignments. Creating a rubric that is used to determine whether students can
name the parts of speech would be like using a scalpel to cut down a tree: Good
instrument, wrong application.

Another challenge with rubrics is that if poorly designed they can actually diminish
the learning process. Rubrics can act as a straitjacket, preventing creations other
than those envisioned by the rubric-maker from unfolding. (“If it is not on the
rubric, it must not be important or possible.”) The challenge then is to create a
rubric that makes clear what is valued in the performance or product—without
constraining or diminishing them. On the other hand, the problem with having no
rubric, or one that is so broad that it is meaningless, is to risk having an evaluation
process that is based on individual whimsy or worse—unrecognized prejudices.
Though not as dangerous as Ulysses’ task of steering his ship between the two
fabled monsters of Greek mythology, Scylla and Charybdis, a rubric-maker faces
a similar challenge in trying to design a rubric that is neither too narrow nor too
broad.

While not a panacea, the benefits of rubrics are many—they can advance student
learning, support instruction, strengthen assessment, and improve program
quality.
CHAPTER 7

Directions: Answer the following questions briefly on the space provided.


1. What is your knowledge about the grading system?
The primary purpose of the grading system is to clearly, accurately, consistently, and
fairly communicate learning progress and achievement to students, families, postsecondary
institutions, and prospective employers. Also, a grading system is a method used by teachers
to assess students' educational performance. In early times, a simple marking procedure was
used by educators.
2. Is the grading system important in a certain curriculum? Defend
your answer by citing a specific example.
Yes, because for students like me it is one validation of my hard work throughout the
entire school year; it uplifts my feelings and gives me motivation to strive harder. It also
allows me to assess myself in the particular areas where I should improve and keep on
track.
Directions: Define the following terms and be able to give example of
each. Write your answer on the space provided.
1. Norm-Referenced Grading
In norm-referenced systems students are evaluated in relationship to one
another (e.g., the top 10% of students receive an A, the next 30% a B, etc.).
This grading system rests on the assumption that the level of student
performance will not vary much from class to class.
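A quota scheme like this can be sketched in a few lines of Python. The sketch below is illustrative only: the function name is hypothetical, and the C and D quotas beyond the example's 10% A and 30% B are assumptions, not a prescribed standard.

```python
def norm_referenced_grades(scores, quotas=(("A", 0.10), ("B", 0.30),
                                           ("C", 0.40), ("D", 0.20))):
    """Assign letters by rank: top 10% get A, next 30% B, and so on.

    `scores` maps student name -> numeric score; `quotas` is an assumed
    breakdown for illustration.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    grades = {}
    i = 0
    for letter, fraction in quotas:
        count = round(fraction * n)
        for name, _ in ranked[i:i + count]:
            grades[name] = letter
        i += count
    # Students left over by rounding fall into the last bracket.
    for name, _ in ranked[i:]:
        grades[name] = quotas[-1][0]
    return grades
```

Note that a student's letter depends only on rank, so identical raw scores can earn different letters in different classes, which is exactly the assumption the definition above rests on.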

2. Criterion-Referenced Grading
In criterion-referenced systems students are evaluated against an absolute
scale (e.g. 95-100 = A, 88-94 = B, etc.). Normally the criteria are a set number
of points or a percentage of the total. Since the standard is absolute, it is
possible that all students could get As or all students could get Ds.
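An absolute scale lends itself to a simple threshold function. This is a minimal Python sketch: the A and B cutoffs follow the example above, while the C, D, and F cutoffs are illustrative assumptions.

```python
def criterion_referenced_grade(score):
    """Map a numeric score to a letter against an absolute scale.

    95-100 = A and 88-94 = B come from the example; lower brackets
    are assumed for illustration.
    """
    if score >= 95:
        return "A"
    if score >= 88:
        return "B"
    if score >= 80:
        return "C"
    if score >= 70:
        return "D"
    return "F"
```

Because the standard is absolute rather than relative, nothing prevents every student in a class from earning the same letter.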

Marilla D. Svinicki (2007) of the Center for Teaching
Effectiveness of the University of Texas at Austin poses four intriguing
questions relative to grading. Based on your understanding, answer
the following questions:
1. Should grades reflect absolute achievement level or achievement
relative to others in the same class?
Grades on both individual assessments and report cards should
reflect students’ achievement of performance standards on intended
learning outcomes. … It will do no good to base grades on
achievement if students don’t understand what it is they are
supposed to be achieving.

2. Should grades reflect achievement only or nonacademic
components such as attitude, speed and diligence?
It is a very common practice to incorporate such things as
turning in assignments on time into the overall grade in a
course, primarily because the need to motivate students to
get their work done is a real problem for instructors. Also it
may be appropriate to the selection function of grading that
such values as timeliness and diligence be reflected in the
grades. External users of the grades may be interpreting the
mark to include such factors as attitude and compliance in
addition to competence in the material.

The primary problem with such inclusion is that it makes
grades even more ambiguous than they already are. It is
very difficult to assess these nebulous traits accurately or
consistently. Instructors must use real caution when
incorporating such value judgments into final grade
assignment. Two steps instructors should take are (1) to
make students aware of this possibility well in advance of
grade assignment and (2) to make clear what behavior is
included in such qualities as prompt completion of work and
neatness or completeness.

In my own understanding, grades should reflect both academic
and non-academic components. Yes, non-academic components
are hard to put into numbers, but they stay with the person,
just like the progress and growth that academic components
assess. What a student has gained academically only shows
what he or she learned during the years of study; as a
teacher, it would make you even happier if grades also
reflected how much students value the things they have
learned from you, because that shows both the academic and
non-academic sides of a student.

3. Should grades report status achieved or amount of growth?

This is a particularly difficult question to answer. In many beginning classes,
the background of the students is so varied that some students can achieve
the end objectives with little or no trouble while others with weak backgrounds
will work twice as hard and still achieve only half as much. This dilemma
results from the same problem as the previous question, that is, the feeling
that we should be rewarding or punishing effort or attitude as well as
knowledge gained.

A positive aspect of this foreknowledge is that much of the uncertainty which
often accompanies grading for students is eliminated. Since they can plot
their own progress toward the desired grade, the students have little
uncertainty about where they stand.

There are many problems with “growth” measures as a basis for grades, most
of them being related to statistical artifacts. In some cases the ability to
accurately measure entering and exiting levels is shaky enough to argue
against change as a basis for grading. Also many courses are prerequisite to
later courses and, therefore, are intended to provide the foundation for those
courses. “Growth” scores in this case would be disastrous.

Nevertheless, there is much to be said in favor of “growth” as a component in
grading. We would like to encourage hard work and effort and to
acknowledge the existence of different abilities. Unfortunately, there is no
easy answer to this question. Each instructor must review his or her own
philosophy and content to determine if such factors are valid components of
the grade.

4. How can several grades on diverse skills combine to give a single mark?

The basic answer is that they can't, really. The results of instruction are so
varied that the single mark is really a “Rube Goldberg” as far as indicating
what a student has achieved. It would be most desirable to be able to give
multiple marks, one for each of the variety of skills which are learned. There
are, of course, many problems with such a proposal. It would complicate an
already complicated task. There might not be enough evidence to reliably
grade any one skill. The “halo” effect of good performance in one area could
color the ratings of the others. Still, it is worth considering how that can
be done, even though currently the system does not lend itself to any
satisfactory answers.
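The usual compromise, for all the limitations the passage describes, is a weighted average of the separate skill grades. A minimal Python sketch (the skill names and weights are hypothetical, chosen only for illustration):

```python
def single_mark(skill_grades, weights):
    """Collapse several per-skill grades into one mark via a weighted
    average -- the kind of "Rube Goldberg" combination the passage
    describes. Both dicts map skill name -> number.
    """
    total_weight = sum(weights[s] for s in skill_grades)
    return sum(skill_grades[s] * weights[s] for s in skill_grades) / total_weight
```

Whatever weights are chosen, the single number hides which skill produced it, which is precisely the ambiguity the passage objects to.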

Directions: Discuss the following alternative grading systems:

1. Teacher Feedback and Live Commentary
Students are given verbal and written feedback immediately, as work is being
completed. Think of it as live scoring without the scores: no letters or
numbers, just feedback.
Make an audio recording of the event, or a video, or livestream it. Make it a
“podcast” (private, so basically an audio file) for parents to listen to with
their child on the drive to school each morning. Live feedback for critical
learning objectives is difficult and not sustainable for every assignment,
but it is far more enlightening for all stakeholders, and thus a valid
alternative to the letter grade in spots.

2. Self-Assessment
In social psychology, self-assessment is the process of looking at oneself in
order to assess aspects that are important to one’s identity. It is one of the
motives that drive self-evaluation, along with self-verification and self-
enhancement.

Self-assessment is a powerful mechanism for enhancing learning. It
encourages students to reflect on how their own work meets the goals set for
learning concepts and skills. It promotes metacognition about what is being
learned, and effective practices for learning. It encourages students to think
about how a particular assignment or course fits into the context of their
education. It imparts reflective skills that will be useful on the job or in
academic research.

Most other kinds of assessment place the student in a passive role. The
student simply receives feedback from the instructor or TA. Self-assessment,
by contrast, forces students to become autonomous learners and to think
about what they should be learning. Having learned self-assessment skills,
students can continue to apply them in their careers and in other contexts
throughout life.

While self-assessment cannot reliably be used as a standalone grading
mechanism, it can be combined with other kinds of assessment to provide
richer feedback and promote more student “buy-in” for the grading process.
For example, an instructor might have students self-assess their work based
on a rubric, and assign a score. The instructor might agree to use these self-
assigned grades when they are “close enough” to the grade the instructor
would have assigned, but to use instructor-assigned grades when the self-
grades are not within tolerance.
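The “close enough” policy described above can be sketched as a small function. The function name and the 5-point tolerance are illustrative assumptions, not values from the text.

```python
def reconcile_grade(self_grade, instructor_grade, tolerance=5):
    """Accept the student's self-assigned grade when it falls within
    `tolerance` points of the instructor's assessment; otherwise the
    instructor's grade stands.
    """
    if abs(self_grade - instructor_grade) <= tolerance:
        return self_grade
    return instructor_grade
```

The tolerance is the policy lever: a wide tolerance maximizes student buy-in, while a narrow one keeps grades anchored to the instructor's judgment.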

Self-assessment can also be combined with peer assessment to reward
students whose judgment of their own work agrees with their peers'. In
Calibrated Peer Review, students are asked to rate the work of three of their
peers, and then to rate their own work on the same scale. Only after they
complete all of these ratings are they allowed to see others’ assessments of
their own work. CPR assignments are often configured to award points to
students whose self-ratings agree with peers’ ratings of their work. The
Coursera MOOC platform employs a similar strategy. Recently, a “calibrated
self-assessment” strategy has been proposed that uses self-assigned scores
as the sole grading mechanism for most work, subject to spot-checks by the
instructor. Self-assigned grades are trusted for those students whose spot-
checked grades are shown to be valid; students whose self-assigned grades
are incorrect are assigned a penalty based on the degree of misgrading of
their work.
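The spot-check logic of calibrated self-assessment can be sketched as follows. The penalty rate, and the choice to scale the penalty linearly with the degree of misgrading, are illustrative assumptions rather than details from the text.

```python
def spot_checked_grade(self_score, instructor_score=None, penalty_rate=0.5):
    """Calibrated self-assessment sketch: self-assigned scores stand
    unless the work was spot-checked; a misgraded submission is
    penalized in proportion to the degree of misgrading.
    """
    if instructor_score is None:
        # Not spot-checked: the self-assigned score is trusted.
        return self_score
    error = abs(self_score - instructor_score)
    # Assumed rule: subtract a fraction of the grading error, floored at 0.
    return max(0.0, instructor_score - penalty_rate * error)
```

Tying the penalty to the size of the error gives students an incentive to grade themselves accurately rather than generously.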

In self-assessment, as in other kinds of assessment, a good rubric is essential
to a good review process. It will include detailed criteria, to draw students'
attention to important aspects of the work. The criteria should mention the
goals and keywords of the assignment, so that students will focus on their
goals in assessment as well as their writing.

3. Take Cues from Gaming

Many teachers struggle to smoothly incorporate games into lessons due to
time and logistical issues, yet see game-based learning (GBL) as a way to
engage students and appeal to diverse learning styles.
Research has continuously shown such advantages. For example, video
games stimulate an increase in midbrain dopamine to help store and recall
information, according to a 2014 article in the journal Learning, Media and
Technology.
But when admins and teachers don’t seamlessly introduce a game, students
may be slow to adopt it and reap these benefits.
Here are five steps to integrating game-based learning into your
classroom:
1. Determine the Purpose of Game-Based Learning
Deciding how you’ll use a game will narrow your search, helping you find an
appropriate one.
Before researching, determine if you want to use a game for:
Intervention – If a student is struggling to demonstrate understanding of core
material, you may consider using a game to address his or her trouble spots.
The game you choose should therefore deliver content that adjusts itself to
player knowledge and learning style. This should help the student gain a
better understanding of difficult material.
Enrichment – As students master core material, you may want a game that
presents content through different media. For example, it may give questions
through text, audio, images and more. This should encourage students to
challenge themselves as they explore new ways to process the content.
Reinforcement – Instead of using games to teach and engage individual
students, entire classes can play to reinforce curriculum content. This can also
make game-based learning a group activity. Some games have multiplayer
features and students may naturally compete against each other to earn
higher scores.
Keeping these factors in mind will likely hasten the process of finding a game
that meets both teacher and student needs.
2. Play the Game Yourself, Making Sure It Is Aligned with Learning Goals

Playing the game in question will help you determine if it’s aligned with
learning goals you’ve set.
After finding a game you think is appropriate, play it and make note of:
Teacher Control – Many educational games offer teachers the ability to
control content and adjust settings for individual students. For example, some
let you match questions to in-class material, delivering them to specific
players.
Intuitiveness – Whether it’s a physical or video game, it should be easy to use.
Students should challenge themselves by processing and demonstrating
knowledge of the content – not by stressing over how the game works.
Engagement – Based on the content and how it’s presented, determine if
students will enjoy the game. If it’s engaging, students should inherently want
to play and, as a result, learn.
Content Types – To accommodate diverse learning styles, the game should
offer different types of content. For example, an educational math video game
may present questions as graphs, numbers and word problems.
Content Levels – To address diverse trouble spots and aptitudes, the game
should use differentiated instruction principles to adapt content to each player.
For example, a language video game may focus more on pronouns with one
student than another.
Paying attention to these criteria while playing should help you decide if the
game properly supports learning goals.
3. Ensure It Meets Expectations from Parents

Getting buy-in from other teachers or admins may be needed before finalizing
your game selection, but parents should also know about your game-based
learning plans.
This opens the door to parent participation which, according to oft-cited
research from the National Committee for Citizens in Education, is one of the
most accurate predictors of student success:
The family makes critical contributions to student achievement, from earliest
childhood through high school … When schools engage parents and students,
there are significant effects. When parents are involved at school, not just at
home, children do better in school and they stay in school longer.
What’s more, you probably don’t want kids telling unaware parents they
played an hour of games in class. They may not think of games with
educational value.
Sending a letter home, explaining the game's benefits and possibly providing
your email address, may alleviate these concerns.
Providing this sort of clear communication should smooth the implementation
process from both a teacher and administrative perspective.
4. Dedicate Time to Consistent In-Class Play
Sporadic game-based learning may not allow students to reach learning goals
as effectively as consistent, scheduled play time. What’s more, it may not be
as engaging as possible.
For example, a study published in the journal Educational Technology and
Society found that a structured 40-minute period of educational game play
was associated not only with faster recall but also with improved
problem-solving skills.
In a classroom with 1:1 device use, make time for game-based learning
activities by:
Including game time as a designated activity in your lesson plan, not an
afterthought
Using a game as an entry ticket, drawing student attention to the lesson’s
topic
Using a game as an exit ticket, allowing students to reflect
In a classroom with limited device use, make time for game-based learning
activities by:
Focusing more on non-digital games, such as board games with educational
value
Creating learning stations, one of which is playing a device-based game
Playing team games, letting students play in pairs or groups
These options should make it easier to designate time for educational play,
seamlessly incorporating game-based instruction into class.
5. Assess Progress Throughout Play, Informing Instruction

Collecting data from the games you implement can uncover student trouble
spots and aptitudes, helping you shape in-class instruction.
Data collection will vary depending on the purpose and nature of a game in
question.
Usually, it involves one of the following methods:
In-Game Reports – Some educational video games feature in-game reports
for teachers, which record student performance. For example, charts will
contain each player’s marks for a series of questions, letting you click to see
more details.
Self-Reports – For physical games, or video games without reporting features,
you can encourage students to take ownership of their progress through self-
reporting. Create a Google Forms spreadsheet for each student. Then, ask
them to provide updates.
Class Discussions – After playing team games, conducting a class-wide
discussion allows each group to share difficulties, progress and
accomplishments.
This final step of incorporating game-based learning will give you the
information needed to adjust lessons and activities, addressing trouble spots
and building on new knowledge.
Examples of Game-Based Learning Options for Your Classroom
1. Video Games

As devices become more readily accessible throughout classrooms, many
video games offer comprehensive game-based learning experiences.
Students can play some through downloadable programs, whereas others are
accessible online.
For example, Prodigy is a free online math game that’s aligned with CCSS,
TEKS, MAFS and Ontario curricula for grades 1 to 8. You can change the
focus of questions to supplement lessons and homework, running reports to
examine each student’s progress.
As well as adjusting questions to address student trouble spots, the game
generates math problems that use words, charts, pictures and numbers.

2. Adaptations of Common Games


Preparation time varies, but you can create spins on popular games to
supplement lessons and units.
For example, you can transform tic-tac-toe into a math game. Start by dividing
a sheet into squares – three vertical by three horizontal. Instead of leaving
them blank, put an equation or word problem in each that tests a different
ability.
Similarly, you can create your own version of a game that asks fact-based
questions, such as Trivial Pursuit.
Introducing this sort of game-based learning in the classroom not only
engages students, but doesn’t force you to rely on computers and other digital
devices.
3. Original Games
You’re not limited to adaptations and video games. You can create original,
interactive content.
This is possible by crafting quiz-, board- and team-based games, as well as
using software such as QuoDeck to design and customize simple digital
games.
Doing so allows you to completely customize the game-based learning
experience.
Game-Based Learning vs. Gamification

As you work to smoothly implement games into your teaching strategy, it
helps to understand the differences between gamification and game-based
learning.
This is because they are often confused, and each requires a separate
approach to introduce.
Here are the main differences:
Whereas games have defined rules and objectives, gamification may just be a
series of tasks with rewards such as points
There is a chance of losing in a game but, to motivate students, gamification
may not present this possibility
Although playing a game may be inherently rewarding, gamification may not
offer intrinsic rewards
Whereas building a game can be hard and expensive, gamification is usually
easier and cheaper
Content is typically morphed to fit the story and scenes of a game, but you
can add game-like features to your content without making changes to it
Keeping these points in mind should ensure your approach falls in the realm
of game-based learning.

Final Thoughts About Introducing Game-Based Learning in the Classroom

Along with the examples and discussion about gamification, use this
step-by-step guide to smoothly implement game-based learning in the
classroom.
Students should be quick to adopt a given game, and enjoy its benefits as
they work to meet learning goals.
Look forward to a more engaged classroom as a result.

Directions: Read the article entitled “School Grading System
Measures Success and Empowers Parents” by Brenda Duplantis at
http://articles.orlandosentinel.com/2013-08-11/news/os-ed-school-
grading-system-good-081113-20130809_1_grading-system-
school-grading-parents. Answer the following questions on the space
provided.
1. How do grading systems measure success according to the writer?
The A-F grading system in Florida has empowered parents to understand
how each school is doing based on student learning and graduation rates
— things that are really important to parents. The grades keep schools
accountable and provide parents, students, educators and the
community with an accurate picture of their local school’s quality of
education.

Because we all need feedback to help us improve, and when schools
earn a low grade, it matters. Teachers, school leaders, parents and
students join together to work toward a clear, tangible goal. This drive
creates a unified purpose — the yearning to accomplish something larger
than ourselves.

Holding schools accountable for student learning should not be viewed
as a point of contention, but rather an invitation for improvement as well
as an opportunity to learn from what is working in schools across the
state. With the A-F grading system, teachers and schools are able to
identify strengths, pinpoint weaknesses and home in to create a plan to
improve learning.

Just like parents ask — or, as my 13-year-old would say, harass — their
children about grades and what they are learning in school, we need to
ask our schools questions. What are you doing to engage my child in
school? What are your grades?

The bottom line is that A-F grading is universally understood. If a child is
attending a D or F school, parents work to figure out how they can help
that school improve, or they look for other options to meet the needs of
their child. The grading system gives parents the tools to make these
choices in their children’s education.

When kids understand you have high expectations for them and a plan to
help them reach those goals, they will work to meet the challenge. When
schools know failure is not an option for your kids, they will, in turn, rise
to meet their challenge.

In the end, Florida's school grades are about accountability, expecting
great things from our students, and giving families and communities the
chance to support their local schools. I'm one Florida parent who
appreciates the honesty and transparency of the grades. In short, it is a
collaborative effort among all the individuals involved: the school, the
educators, the students, the community, and the families, especially the
parents.

2. How do grading systems empower parents?

The A-F grading system in Florida has empowered parents to understand
how each school is doing based on student learning and graduation rates
— things that are really important to parents.

The bottom line is that A-F grading is universally understood. If a child is
attending a D or F school, parents work to figure out how they can help
that school improve, or they look for other options to meet the needs of
their child. The grading system gives parents the tools to make these
choices in their children’s education.

Directions: On the space provided, write in narrative form the learning
experience you had after reading and accomplishing the exercises
and activities in this chapter.
The issues of grading and reporting on student learning continue to
challenge educators. However, more is known at the beginning of the
twenty-first century than ever before about the complexities involved
and how certain practices can influence teaching and learning. To
develop grading and reporting practices that provide quality
information about student learning requires clear thinking, careful
planning, excellent communication skills, and an overriding concern
for the well-being of students. Combining these skills with current
knowledge on effective practice will surely result in more efficient and
more effective grading and reporting practices.

When it comes to the grading system, I have always had mixed feelings
about it. First, like any other student, I know it is important, but is it
the only way students can be assessed? Is it the memorization of the
lesson that we are after, or what remains in our minds after years of
studying? Second, is it biased to give credit for the effort and values of
students? Third, what do we want our students to become as they face the
world, and what kind of person are we nurturing when we reduce them to
numbers?
MAYCHEL R. CASTILLO
#673 M. Castillo St. Libsong West Lingayen, Pangasinan
Mobile No.: 09673701177
Email Address: castillomaychel@gmail.com

EXPERIENCES:
March 2015 – May 2018
GOLDILOCKS (Cashier)
Golden Ties and Food Resources
Lucao district, Dagupan City, Pangasinan

May 2019 – July 2020
CHOWKING (Assistant Manager)
Lingayen Chow Delight Inc.
Avenida Rizal Cor. Nicanor St.
Lingayen, Pangasinan

ACHIEVEMENTS:

Commercial Cooking NC II
El Jardine Hotelier Training Center Inc.
3rd Floor Teachers Building, Maramba Boulevard
Lingayen, Pangasinan
August 2013

Computer Hardware Service NC II
Northern Philippines Learning, Training and Assessment Center
3rd Floor Teachers Building, Maramba Boulevard
Lingayen, Pangasinan

TRAININGS:

Commercial Cooking NC II
El Jardine Hotelier Training Center Inc.
3rd Floor Teachers Building, Maramba Boulevard
Lingayen, Pangasinan
August 2013

In-House Training (150 hours)
PSU Hostel (Training Center)
PSU, Alvear St. Lingayen, Pangasinan
November 2011-October 2012
Computer Hardware Service NC II
Northern Philippines Learning, Training and Assessment Center
3rd Floor Teachers Building, Maramba Boulevard
Lingayen, Pangasinan

Hospitality and Tourism Students Summit
Sison Auditorium, Lingayen, Pangasinan
September 22, 2013
Event Management and Front Office Seminar

H2O
Pasay, Manila
September 10, 2013

Bar Tour Seminar
T.G.I. Friday’s
Quezon City, Manila
September 29, 2013

Latest Trends and Strategies in Tourism Marketing
President Hotel, Lingayen, Pangasinan
September 4, 2012

The Tour Guide Profession
President Hotel, Lingayen, Pangasinan
September 4, 2012

Fine Dining Management
President Hotel, Lingayen, Pangasinan
September 4, 2012

Banquet Catering and Services
Baguio Country Club, Baguio City
September 1, 2011

EDUCATIONAL BACKGROUND:

TERTIARY:

Institution : Pangasinan State University - Lingayen Campus
Alvear St., Poblacion, Lingayen, Pangasinan
Course : Bachelor of Science in Hospitality Management
VOCATIONAL COURSE:

Institution : Northern Philippines Learning, Training and Assessment Center
3rd Floor Teachers Building, Maramba Boulevard, Lingayen, Pangasinan
Course : Computer Hardware Service NC II

SECONDARY:

Caloocan High School
10th Avenue, Grace Park, Caloocan City
2005-2009

PRIMARY:

Grace Park Elementary School Unit-I
7th Avenue, Grace Park, Caloocan City
1999-2005

PERSONAL BACKGROUND:

Date of Birth: May 10, 1993
Place of Birth: Caloocan City
Citizenship: Filipino
Gender: Female
Civil Status: Single
Age: 25
Height: 5’1
Weight: 70 kg
Religion: Roman Catholic
Language: Pangasinan, Tagalog, English

I hereby certify that the above-given information is true and correct to
the best of my knowledge.

MAYCHEL R. CASTILLO
Signature of Applicant
