
Chelsea George

Fall 2022

FRIT 7236
Technology-Based Assessment and Data Analysis
Key Assessment

Part 1 – Multiple Choice

Standard: SS5H4 Explain America’s involvement in World War II


Objective: Students will be able to describe the major events of WWII, including Pearl Harbor.
Level and Type: 1A - Remember Factual Knowledge

1. What is the name for the event in which Japanese forces attacked an American naval base,
causing the United States to enter the war?

A. Iwo Jima
B. Blitzkrieg
C. Pearl Harbor
D. The Manhattan Project

Level and Type: 1A - Remember Factual Knowledge

2. American forces dropped atomic bombs on which two Japanese cities?

A. Hiroshima and Nagoya
B. Tokyo and Nagoya
C. Okazaki and Tokyo
D. Nagasaki and Hiroshima

Objective: Students will be able to recognize how the events of WWII affected women's
relationship with the workforce.
Level and Type: 2B - Understand Conceptual Knowledge

3. How did the United States' decision to join WWII affect women and the workforce?

A. The economy was strengthened because women stayed at home and fulfilled a domestic role.
B. The economy was strengthened because women fulfilled jobs that were once only for men.
C. The economy suffered because women refused to join the workforce.
D. The economy suffered because women became soldiers and left the country to fight
opposing forces.

Multiple Choice Assessment Plan

Improving Item Reliability


Reliability refers to objectivity when creating content and consistency in grading over time
(Nitko & Brookhart, 2014). Reliability ensures that results are not skewed and that the
assessment tests what it is supposed to in a fair manner. When assessments are reliable, they can
be used repeatedly over an extended period of time. One of the most effective ways to improve
item reliability for multiple choice questions is to create a scoring rubric, which allows the
assessor to ensure scoring consistency over time.

Improving Item Validity


Validity refers to the alignment between an assessment and a lesson's standards and goals
(Nitko & Brookhart, 2014). Alignment among standards, objectives, and questions is one of the
most effective ways to maintain validity. Test directions must be clear enough for students to
understand exactly what is being asked of them. An assessment blueprint is a useful tool to help
the test maker touch on different levels of the taxonomy and ensure that students are using
higher-level thinking skills across an array of different question types.

Differentiation of Instruction
Differentiation can be defined as tailoring content or assessments to each student's personal or
collective learning needs (Nitko & Brookhart, 2014). Data collected from formative assessments
allows the teacher to determine what information needs to be differentiated. This ensures that
each student is provided with the support they need in order to meet the teacher's defined
objectives or the state's standards. Differentiation can be achieved through conferences with a
student, small-group and other collaborative learning opportunities, enrichment activities, or
remediation.

Improving Student Learning


Student learning is improved when teaching and assessment styles are differentiated for each
student. This type of assessment is difficult to differentiate. The best approach for improving
student learning in this scenario would be to conference with students, either individually or in a
group, based on the questions that were collectively missed. Explaining to students how and why
they missed a question will help them with their test-taking skills and allow them to better
understand the content.

Improving Future Assessments


It is important to keep in mind that using an assessment blueprint of the taxonomy is greatly
beneficial when creating an assessment. It is possible to create multiple choice questions that sit
higher on the taxonomy, but doing so is very difficult. Given the format of a multiple choice
question, it would be difficult for the teacher to design an opportunity for students to "create" or
"evaluate" content. In the future, the assessor might find it best to use multiple choice questions
for the 1A, 1B, 2A, and 2B sections of the taxonomy. This would ensure the best results when
students are taking the assessment.

Part 2 – Essay and Short Answer


Standard: SS5H4 Explain America’s involvement in World War II

Short Answer Questions

Objective: Students will be able to name the primary political figures of WWII.
Level and Type: 1A - Remember Factual Knowledge
Question: Who was the primary political leader of the United Kingdom during World War II?

Objective: Students will be able to describe German aggression in Europe.
Level and Type: 1A - Remember Factual Knowledge
Question: What was the name of the symbol that German soldiers used to identify people of
Jewish descent?

Objective: Students will be able to describe how the Tuskegee Airmen contributed to WWII.
Level and Type: 2B - Understand Conceptual Knowledge
Question: Describe one way in which African American men were able to participate in the war.

Essay Questions

Objective: Students will be able to explain how WWII affected economies.
Level and Type: 4B - Analyze Conceptual Knowledge
Question: Suppose the United States of America never experienced the Great Depression. How
do you think this might have affected American involvement in WWII? Explain your thoughts
below; be sure to mention how WWII impacted the American economy.

Level and Type: 4B - Analyze Conceptual Knowledge
Question: Describe what might have happened to Germany's economy if Adolf Hitler had never
become chancellor. Be sure to mention the effects of the war-guilt clause from WWI's Treaty of
Versailles.

Objective: Students will be able to debate the impact of dropping atomic bombs on Japan.
Level and Type: 5B - Evaluate Conceptual Knowledge
Question: How would the war in Asia have been affected if the United States had never dropped
atomic bombs on Japanese cities? Do you think there was a better way to stop Japanese forces
from advancing into China? Be sure to answer both questions and explain your thoughts.

Essay and Short Answer Assessment Plan

Improving Item Reliability


Reliability refers to objectivity when creating content and consistency in grading over time
(Nitko & Brookhart, 2014). Short answer questions are reliable since they require students to
identify facts, and facts do not change. However, essay questions may not be as reliable year
after year. Because each class's needs vary from year to year, the teacher might have to cover
different content or teach in a unique way. To improve item reliability for these items, the
assessor should create a scoring rubric. This eliminates or decreases the possibility of the halo
effect or carryover effect influencing scoring from one item to another.

Improving Item Validity


Validity ensures that test items are aligned to objectives and standards (Nitko & Brookhart,
2014). If there is no alignment, test results are not valid. To improve validity, the teacher should
use measurable verbs when determining the language of the learning objectives and test items.
Furthermore, creating an assessment blueprint will help the teacher ensure that test items are
aligned to standards.

Differentiation of Instruction
Essay questions are heavily based on the use of language. To support English learners (ELs),
the teacher can supply sentence starters, a basic outline, transitional words, or even a word bank.
To support students with disabilities (SWDs), the teacher can read the question prompts aloud.
The teacher can also provide students with a tool to help them transcribe their thoughts without
the tool doing the writing for them.

Improving Student Learning


Differentiation of instruction is the best way to improve student learning. Small group instruction
based on collective needs (disability or ability) and partner activities would be a great way to
prepare students for these types of test items. The teacher can also improve student learning by
teaching students how to answer questions based on the measurable verb.

Improving Future Assessments


Test results would be analyzed after students complete this test. The grade level would
collectively look for common themes among missed questions. Teachers would then target this
content instructionally and review test items to ensure alignment to objectives and clear
language.

Part 3 – Higher Order Thinking

Standards:
S5P1. Obtain, evaluate, and communicate information to explain the differences between a
physical change and a chemical change.
MGSE5.OA.1 Use parentheses, brackets, or braces in numerical expressions, and evaluate
expressions with these symbols

Objective: Students will be able to evaluate information to explain the phases of matter.
Level and Type: 5B - Evaluate Conceptual Knowledge

10. Tyler makes lemonade and puts it in a large pitcher (Figure A). Tyler adds ice until the
lemonade rises to the top of the pitcher (Figure B). To keep the drink cold, he puts the pitcher
into a 40-degree refrigerator. Hours later he comes back to find there is no more ice and the
lemonade has overflowed (Figure C). Explain what happened to the ice. Be sure to describe
whether a physical or chemical change might have occurred.

Objective: Students will be able to evaluate and interpret problems using the mathematical
order of operations (PEMDAS).
Level and Type: 4C - Analyze Procedural Knowledge

11. Determine and explain the error in Madison's thinking.

Level and Type: 5C - Evaluate Procedural Knowledge

12. Explain how Madison could have used the order of operations to solve this problem and
achieve a correct answer.
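Madison's worked problem appears as a figure in the original document and is not reproduced
here. As a purely hypothetical illustration of the kind of error items 11 and 12 target, consider
an expression evaluated strictly left to right versus by the order of operations:

```latex
% Hypothetical example; the assessed problem itself is in a figure
% that is not reproduced here.
% Following the order of operations (PEMDAS):
5 + 3 \times (8 - 2) = 5 + 3 \times 6 = 5 + 18 = 23
% A common student error: working strictly left to right, which in
% effect regroups the expression as
(5 + 3) \times (8 - 2) = 8 \times 6 = 48
```

Contrasting the two groupings also reinforces MGSE5.OA.1, since the error can be expressed as
a misplaced set of parentheses.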

Higher Order Thinking Question Assessment Plan

Improving Item Reliability


Reliability refers to objectivity when creating content and consistency in grading over time
(Nitko & Brookhart, 2014). Due to the open-ended nature of higher order thinking questions, it
is important that the assessor develops scoring rubrics to ensure consistency in scoring and
fairness within the grading process. Rubrics can be shared with students prior to the assessment
to help guide their responses. The teacher can also provide opportunities for students to work in
small groups and answer HOT questions with a scoring rubric.
Improving Item Validity
Validity ensures that test items are aligned to objectives and standards (Nitko & Brookhart
2014). The assessor can create an assessment blueprint to ensure that test items are directly
correlated to the standards and objectives that were taught in class. Each item on this assessment
is grounded in an objective based on the state standards.

Differentiation of Instruction
For English learners and students with disabilities, the assessment should include simple
language and be free of any extraneous information. The test should also be read aloud to
students to ensure clear understanding. The assessor might also provide these students with
sentence starters or a word bank since the structure of open-ended response items is heavily
language based. If a student struggles with dysgraphia, the assessor might accept an oral
response while grading against the same scoring rubric.

Improving Student Learning


To improve student learning it is pertinent that the teacher explicitly teaches analytical skills.
The teacher should ensure that students are taught how to interpret different types of graphs,
charts, tables, maps, etc. Students should be taught to identify trends and how to recognize
irrelevant information. To help students develop critical thinking skills, the teacher should pose
real life scenarios that the students might encounter and ask them to collaboratively find
solutions. As a whole group the class can compare solutions and engage in discussion to
determine the best answer. Walking through these types of problems with peers will significantly
enhance student performance.

Improving Future Assessments


In a series of questions based on the same prompt, the teacher might want to include multiple
choice questions to help guide students closer towards critical thinking. Also, to improve this
assessment for future use the assessor must identify common misconceptions and address them
accordingly. To help student understanding, the teacher can address misunderstandings in a
whole group discussion or in individual conferences with students. Lastly, the assessor should
administer the test to a practice group to ensure that it is comprehensible and attainable for
students to complete.

Part 4 – Performance

Standard: S5L3. Obtain, evaluate, and communicate information to compare and contrast
the parts of plant and animal cells. b. Develop a model to identify and label parts of a plant
cell (membrane, wall, cytoplasm, nucleus, chloroplasts) and of an animal cell (membrane,
cytoplasm, and nucleus).

Objective: Students will be able to compare types of cells while using a graphic organizer.
Level and Type: 4A - Analyze Factual Knowledge

Using a Venn Diagram, compare the characteristics of plant cells to animal cells. Please use the
provided terms from the word bank. Each term will be used at least once; some may be used
more than once.

You will be assessed on your ability to organize information. Please see the checklist to guide
your work.

Objective: Students will be able to organize information in a newsletter to explain the anatomy
of a plant cell.
Level and Type: 6A – Create Factual Knowledge

Using the completed Venn Diagram, create a 1-page newsletter explaining what each part of the
plant cell does to contribute to the cell's overall function and structure. An example outline is
displayed. Your newsletter should include each of the terms listed under the plant cell section of
your Venn Diagram. Write in complete sentences and include a graphic for each part of the
plant cell. This can be completed on a piece of paper or in a digital format.

You will be assessed on your ability to synthesize and explain information clearly. Please see
the scoring progressions rubric to guide your work.

Objective: Students will be able to construct a model of a plant cell (membrane, wall,
chloroplasts, cytoplasm, and nucleus) by using organized information (Venn Diagram).
Level and Type: 3A – Apply Factual Knowledge; 6B – Create Conceptual Knowledge

Using your completed Venn Diagram as a guide, create a 3D model of a plant cell. All materials
to create the model will be provided for you.

You will be assessed on your ability to synthesize information and create an accurate model.
Please see the scoring progressions rubric to guide your work.

Performance Assessment Plan

Improving Item Reliability


Reliability refers to objectivity when creating content and consistency in grading over time
(Nitko & Brookhart, 2014). Performance tasks can be relatively open-ended in nature. To help
students understand what is being assessed, it is paramount to have a scoring rubric. For the first
assessment item, a checklist would suffice since comparison is something that can be measured
easily. However, for the second and third assessment items, a scoring rubric with scaled
progressions would serve as the most effective tool for the assessor. For these items, I would
suggest a scoring rubric with four progressions: does not meet standards, approaching standards,
meets standards, and exceeds standards. Developing this type of rubric for this type of
assessment item is the most ethical way to assess a student's performance. Also, teachers should
work collaboratively during PLC meetings to define behavioral descriptors for each progression.
Lastly, it would be best practice to have more than one person assess student work with the
scoring rubric.

Improving Item Validity


Validity ensures that test items are aligned to objectives and standards (Nitko & Brookhart
2014). For example, in the first test item students are required to compare and are provided with
a Venn Diagram to complete their task. Because Venn Diagrams are a common tool used to
compare ideas, this task achieves validity. To continue to create valid assessment items, the
assessor can use the backwards design method while planning a unit of instruction. To do so, the
first thing the assessor must do is create an achievable objective for the unit or lesson. After the
objective is defined, as opposed to creating a learning experience to match the objective, the
assessor will create the assessment. The assessor should constantly refer back to the objective
while making the assessment item and define the criteria for the scoring rubric. Once the scoring
rubric has been created, it will be rather easy to make tasks that align to the objectives. I've found
this to be one of the most effective ways to attain validity.

Differentiation of Instruction
Differentiation of instruction allows each student's unique learning abilities to be capitalized
on while also focusing on the areas in which they can grow. Some accommodations that might
be included for students with disabilities, gifted students, and/or English learners include the
following:
1. Task 1: Providing unlabeled visuals of each cell type and asking students to label them;
removing the word bank.
2. Task 2: Providing headings and sentence starters to help structure writing, or allowing
students to explain their thoughts verbally while the teacher transcribes. Another
accommodation might be to provide students with the pictures and headings for the newsletter
and ask them to match a provided description to the correct picture.
3. Task 3: A model will already be provided for students who struggle with fine motor skills;
they will simply have to label each part of the cell appropriately. Students will also have the
ability to work with a partner for this task.
George

Improving Student Learning


To improve student learning the teacher must ensure that whole group lessons are not the
primary mode of instruction. For success on a performance-based assessment, students must
have the opportunity to perform, create, organize, construct, etc. It would be best to use tangible
materials and ideas since performance-based tasks often require the use of tools or materials. In
planning, the teacher should include varied opportunities for students to discuss ideas with their
peers. Students should also be introduced to the idea of improving their work samples based on a
scoring rubric. The teacher should create opportunities for students to work alongside the rubric,
submit their work sample, receive revisions, and make necessary adjustments based on the
teacher’s comments and scoring rubric. Conferencing before and after assessments is another
way for teachers to formatively assess what a student has mastered and what they are still
struggling with.

Improving Future Assessments


To improve these assessment items, the grade level would come together to identify the skills
and test items students struggled with the most (comparing, organizing, constructing,
explaining, etc.). This analysis would prompt the grade level to either revise or remove the
assessment item. The grade level should also consult the corresponding scoring rubric to
determine if it truly measured the learning objective. Based on student performance, teachers
would also create a plan for instruction and for clarifying misconceptions.
