
Student Name

Father Name
Program
Registration / Student I.d
Assignment
Semester
Course

Q1: What are the types of assessment? Differentiate between assessment for learning, assessment
of learning, and assessment as learning.
Formative assessment is used to monitor student learning and to provide ongoing feedback
that teachers can use to improve their teaching and that students can use to improve their
learning.
Summative assessment, by contrast, is used to evaluate student learning at the end of an
instructional unit against a particular standard or benchmark.
Their definitions make clear that the two assessment types are not intended to measure
learning in the same way. So let's look at the main differences between them.
The differences between formative and summative assessment
Difference 1
The first major difference is when the assessment takes place in a learner's learning process.
As the definitions already suggest, formative assessment is an ongoing process: assessment
takes place during learning, not just once but several times.
Summative assessment takes place at a different moment: not during the process, but after it.
The assessment takes place after the completion of a chapter or unit.
Difference 2
There is also a major difference between the two in how you obtain an accurate picture of a
student's knowledge.
With formative assessment, you try to find out whether a student is doing well or needs help
by monitoring the learning process.
When you use summative assessment, you assign grades. The grades tell you whether the
student achieved the learning goal or not.
Difference 3
The goals of the two assessment types lie miles apart. With formative assessment, the aim is
to improve student learning. To do this, you need to be able to give meaningful feedback.
With summative assessment, the purpose is to evaluate student achievement.
So: do you want to help your students get better at something, or do you simply want them to
clear the same bar over and over again?

Difference 4
Remember when I said that formative assessment occurs several times during the learning
process, and summative assessment at the end of a chapter or course? This also defines the
size of the content packages being assessed.
Formative assessments cover small content areas. Example: three formative assessments for a
single chapter.
Summative assessments cover complete chapters or content areas. Example: just one test at the
end of a chapter. The package of learning material is much bigger now.
Difference 5
You may have guessed the last difference already. Formative assessment treats assessment as a
process. This way, the teacher can see a student grow and steer the student in an upward
direction.
With summative assessment it is harder to guide the student in the right direction: the
assessment has already taken place. That is why summative assessment is considered more of a
"product".
Examples of formative assessment
Formative assessments can be class discussions, exit tickets, quick written responses, and so
on. But you can make them even more engaging. Consider three examples.
In response to a question or topic prompt, students write three different summaries: one of
10-15 words, one of 30-50 words, and one of 75-100 words.
3-2-1 countdown activity: give your learners cards to write on, or they can answer orally.
Students must respond to three different statements: 3 things you did not know before, 2
things that surprised you about this topic, and 1 thing you want to start doing with what you
have learned.
One-minute papers are usually done at the end of the lesson. Students answer a short
question in writing. The question typically focuses on the main point of the lesson, the most
surprising idea, the most confusing part of the topic, or a question about the topic that
might appear on the next test.
Examples of summative assessment
Most of you have used summative assessments throughout your teaching. And that is normal:
education changes slowly, and giving students grades is easy to do.
Examples of summative assessments are mid-term exams, end-of-unit or end-of-chapter tests,
final projects or papers, district or regional benchmark tests, and scores used for school and
student accountability.

So, that was it for this post. I hope you now know the difference between the two and can
decide which assessment strategy you will use in your teaching. If you want to learn more
about getting started with formative assessment, it is worth looking further into the building
blocks of formative assessment.
Q2: What do you know about taxonomy of educational objectives? Write in detail.
Information and quotations in this summary, except where otherwise noted, are drawn from
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice,
41(4), 212-218. Krathwohl participated in the creation of the original Taxonomy and was a
co-author of the revised Taxonomy.
“The Taxonomy of Educational Objectives is a framework for classifying statements of what
we expect or intend students to learn as a result of instruction. The framework was conceived
as a means of facilitating the exchange of test items among faculty at various universities in
order to create banks of items, each measuring the same educational objective (p. 212).”
The Taxonomy of Educational Objectives provides a common language with which to
discuss educational goals.
Bloom’s Original Taxonomy
Benjamin Bloom of the University of Chicago developed the Taxonomy in 1956 with the
help of several educational measurement specialists.
Bloom saw the original Taxonomy as more than a measurement tool. He believed it could
serve as a:
Common language about learning goals to facilitate communication across persons, subject
matter, and grade levels;
Basis for determining in a particular course or curriculum the specific meaning of broad
educational goals, such as those found in the currently prevalent national, state, and local
standards;
Means for determining the congruence of educational objectives, activities, and assessments
in a unit, course, or curriculum; and
Panorama of the range of educational possibilities against which the limited breadth and
depth of any particular educational course or curriculum could be contrasted (Krathwohl,
2002).
Bloom’s Taxonomy provided six categories that described the cognitive processes of
learning: knowledge, comprehension, application, analysis, synthesis, and evaluation. The
categories were meant to represent educational activities of increasing complexity and
abstraction.

Bloom and associated scholars found that the original Taxonomy addressed only part of the
learning that takes place in most educational settings, and developed complementary
taxonomies for the Affective Domain (addressing values, emotions, or attitudes associated
with learning) and the Psychomotor Domain (addressing physical skills and actions). These
can provide other useful classifications of types of knowledge that may be important parts of
a course.
The Affective Domain
Receiving
Responding
Valuing
Organization
Characterization by a value or value complex
From Krathwohl, Bloom, & Masia. Taxonomy of Educational Objectives, the Classification
of Educational Goals. Handbook II: Affective Domain. (1973).
Psychomotor Domain
Reflex movements
Basic-fundamental movements
Perceptual abilities
Physical abilities
Skilled movements
Non-discursive communication
From Harrow. Taxonomy of psychomotor domain: a guide for developing behavioural
objectives. (1972).
The Revised Taxonomy
Bloom’s Taxonomy was reviewed and revised by Anderson and Krathwohl, with the help of
many scholars and practitioners in the field, in 2001. They developed the revised Taxonomy,
which retained the same goals as the original Taxonomy but reflected almost half a century of
engagement with Bloom’s original version by educators and researchers.
Original vs. Revised Bloom’s Taxonomy
The revised Taxonomy restated the original noun categories as verbs and reordered the top
levels: Knowledge became Remember [1], Comprehension became Understand [2], Application became
Apply, Analysis became Analyze, Evaluation became Evaluate, and Synthesis became Create, which
now sits at the top of the hierarchy.
[1] Unlike Bloom’s original “Knowledge” category, “Remember” refers only to the recall of
specific facts or procedures
[2] Many instructors, in response to the original Taxonomy, commented on the absence of the
term “understand”. Bloom did not include it because the word could refer to many different
kinds of learning. However, in creating the revised Taxonomy, the authors found that when
instructors used the word “understand”, they were most frequently describing what the original
taxonomy had named “comprehension”.
Structure of the Cognitive Process Dimension of the Revised Taxonomy
One major change of the revised Taxonomy was to address Bloom’s very complicated
“knowledge” category, the first level in the original Taxonomy. In the original Taxonomy,
the knowledge category referred both to knowledge of specific facts, ideas, and processes (as
the revised category “Remember” now does), and to an awareness of possible actions that can
be performed with that knowledge. The revised Taxonomy recognized that such actions
address knowledge and skills learned throughout all levels of the Taxonomy, and thus added
a second “dimension” to the Taxonomy: the knowledge dimension, comprising factual,
conceptual, procedural, and metacognitive knowledge.
Structure of the Knowledge Dimension of the Revised Taxonomy
Factual knowledge – The basic elements that students must know to be acquainted with a
discipline or solve problems in it.
Conceptual knowledge – The interrelationships among the basic elements within a larger
structure that enable them to function together.
Procedural knowledge – How to do something; methods of inquiry; and criteria for using
skills, algorithms, techniques, and methods.
Metacognitive knowledge – Knowledge of cognition in general, as well as awareness and
knowledge of one’s own cognition.
The two dimensions – knowledge and cognitive – of the revised Taxonomy combine to create
a taxonomy table with which written objectives can be analysed. This can help instructors
understand what kind of knowledge and skills are being covered by the course to ensure that
adequate breadth in types of learning is addressed by the course.
Structure of Observed Learning Outcomes (SOLO) taxonomy
Like Bloom’s taxonomy, the Structure of Observed Learning Outcomes (SOLO) taxonomy
developed by Biggs and Collis in 1982 distinguishes between increasingly complex levels of
understanding that can be used to describe and assess student learning. While Bloom’s
taxonomy describes what students do with information they acquire, the SOLO taxonomy
describes the relationship students articulate between multiple pieces of information.
Atherton (2005) provides an overview of the five levels that make up the SOLO taxonomy:

Pre-structural: here students are simply acquiring bits of unconnected information, which
have no organization and make no sense.
Unistructural: simple and obvious connections are made, but their significance is not grasped.
Multistructural: a number of connections may be made, but the meta-connections between
them are missed, as is their significance for the whole.
Relational level: the student is now able to appreciate the significance of the parts in relation
to the whole.
Extended abstract: the student is making connections not only within the given subject area,
but also beyond it, and is able to generalize and transfer the principles and ideas underlying
the specific instance.

Q.3 How will you define attitude? Elaborate its components.


Attitudes represent our evaluations, preferences, or rejections based on the information we
receive.
An attitude is a learned tendency to think or act in a certain way toward something or some
situation, usually accompanied by emotion. It is a learned state of responding consistently to
something.
Attitudes may concern evaluations of people, issues, objects, or events. Such evaluations are
often positive or negative, but they can also be uncertain at times.
Attitude is a way of thinking, and it shapes how we relate to the world at work and outside
of work. Researchers also suggest that several different components make up attitudes.
One can see this by looking at the three aspects of attitude: cognition, affect, and behaviour.
The three components of attitude are:
The cognitive component.
The affective component.
The behavioural component.
The cognitive component
The cognitive component of attitude refers to the beliefs, ideas, and attributes we associate
with an object. It is the opinion or belief segment of an attitude, and it refers to the part
of the attitude that relates to a person's general knowledge.
These often surface as generalizations, such as ‘every child is cute’, ‘smoking is a health
hazard’, etc.

The affective component
The affective component is the emotional or feeling segment of an attitude.
It concerns how something makes a person feel: the feelings or emotions brought to the
surface about something, such as fear or hate. Using the above example, someone might hold an
attitude of love toward all children because they find them cute, or hate smoking because it
is harmful to health.
The behavioural component
The behavioural component of an attitude involves a person's tendency to behave in a
particular way. It refers to the part of the attitude that reflects a person's intention in
the short or long run.
Using the above example, the behavioural attitude might be expressed as ‘I cannot wait to hold
that child’ or ‘we had better keep those smokers out of the library’, etc.
Attitude is thus made up of three components: the cognitive component, the affective or
emotional component, and the behavioural component.
Basically, the cognitive component is based on information or knowledge, whereas the affective
component is based on feelings.
The behavioural component reflects how attitude affects the way we act or behave. Viewing
attitude in terms of these components helps in understanding its complexity and the potential
relationship between attitudes and behaviour.
For the sake of clarity, however, remember that the term attitude essentially refers to the
affective part of the three components.
In an organization, attitudes are important for goal achievement. Each of these components is
quite different from the others, and they can build on one another to shape our attitude and,
therefore, affect how we relate to the world.

Q.4 What are the types of multiple choice questions? Also write their advantages and disadvantages.
Multiple choice questions are a common way to measure student comprehension and recall.
Cleverly designed and applied, multiple choice questions can make a test more robust and
accurate. By the end of this section, you will be able to create several kinds of multiple
choice items and know when to use them in your tests. Let us first consider the advantages and
disadvantages of using multiple choice questions; knowing the pros and cons will help you
decide when to use them in your exam.
Advantages
Allow assessment of a wide range of learning objectives
Their objective nature limits scoring bias
Students can respond to many items quickly, allowing broad sampling and coverage of content
Difficulty can be manipulated by adjusting the similarity of the distractors
Efficient to administer and score
Patterns of incorrect responses can be analyzed
Less influenced by guessing than true-false items
Disadvantages
Provide limited feedback for correcting errors in student understanding
Tend to focus on lower-level learning objectives
Results may be influenced by reading ability or test-wiseness
Constructing good items is time-consuming
Unable to assess the ability to organize and express ideas
A multiple choice item consists of a question or incomplete statement (called the stem)
followed by 3 to 5 answer options. The correct answer is called the key, while the incorrect
answer options are called distractors.

Example: This is the most common type of item used in testing. It requires students to
select one answer from a short list of alternatives.

True-False (distractor)
Multiple choice (key)
Short answer (distractor)
Essay (distractor)
Following these tips will help you develop a wide range of high quality multiple choice
questions for your exam.
Formatting tips
List 3-5 answer options in a vertical column beneath the stem.
Put the answer options in a logical order (chronological, numerical) if one exists, to aid
readability.
Writing tips
Use clear, precise, simple language so that wording does not give students unintended clues
(avoid humour, jargon, and clichés).
Each item should represent a complete thought and be written as a coherent sentence.
Avoid absolute or vague qualifiers (all, none, never, always, often, sometimes).
Avoid negatively worded items; if one is necessary, highlight the negative word.
Make sure there is only one possible interpretation and one correct or best answer.
The stem should be written so that students could answer the question without looking at the
options.
All answer options should be clearly written and similar in content, length, and grammatical
form.
Make distractors plausible and equally attractive to students who are unfamiliar with the
material.
Ensure that items and answer options are independent; do not give away or hint at the answer
in a distractor or in another item.
Avoid “all of the above” or “none of the above” if possible, and especially when you ask for
the best answer.
Include the bulk of the content in the stem, not in the options.
The stem should include any words that would otherwise be repeated in each answer option.
Examples
Look at the examples below and keep in mind the tips you just read. As you look at each one,
decide whether it is a good example or needs improvement.
As a public health nurse, Susan tries to identify previously unrecognized health risk factors
or conditions in people with no symptoms. This type of intervention can best be described as
A. Case management
B. Health education
C. Advocacy
D. Screening
E. None of the above
This item needs improvement. “None of the above” should not be used as an option when asking
for the “best” answer.
Critical pedagogy
A. A teaching and learning approach based on a feminist perspective that embraces equality
by identifying and overcoming oppressive practices.
B. Is a method of teaching and learning based on a socio-political perspective that combines
equality by overcoming oppressive practices.
C. It is a way of teaching and learning based on the real day-to-day teaching / learning of
students and teachers rather than what may or may not happen.

D. A method of teaching and learning based on growing awareness of how dominant thought
patterns enter modern society and differentiate the lens of the context in which one looks at
the world around us.
This item needs improvement because the repeated words should be included in the stem. The
stem should read: "Critical pedagogy is a method of teaching and learning based on:"
Katie weighs 11 kg. You have an order for ampicillin sodium 580 mg IV q 6 hours. What is her
daily dose of ampicillin as prescribed?
A. 1160 mg
B. 1740 mg
C. 2320 mg
D. 3480 mg
This example is well written and structured.
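As a quick check on the key (assuming the item is asking for the total amount of drug given
per day): a dose ordered every 6 hours is given 4 times per day, so 580 mg × 4 = 2320 mg,
which corresponds to option C.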
The research design that provides the best evidence for causal relationships is:
A. Experimental design
B. Control group
C. Quasi-experimental design
D. Evidence-based practice
This example contains grammatical cues and inconsistencies in how the options are phrased.
Additionally, not all of the distractors are equally plausible.
The head nurse wrote the following evaluation note: Carol has been a nurse in the
postoperative department for 2 years. She has excellent planning and clinical skills in
managing a patient's condition. She understands situations as a whole and is ready to take on
greater responsibility for coordinating patient care.
Using the Dreyfus model of skill acquisition, identify the stage that best describes Carol's
performance.
A. Novice
B. Advanced beginner
C. Competent
D. Proficient
E. Expert
This is a good example.

Multiple choice questions are widely used in testing because of their objective nature and
ease of administration. To take full advantage of these benefits, it is important to make sure
that your questions are well written.
Q.5 Construct a test, administer it and ensure its reliability.
Reliability is a measure of the consistency of a metric or a method.
Every metric or method we use, including things such as identifying usability problems in an
interface and expert judgments, should be assessed for reliability.
In fact, before you can establish validity, you need to establish reliability.
Here are the four most common ways of measuring the reliability of any test or metric:
Inter-rater reliability
Test-retest reliability
Parallel-forms reliability
Internal consistency reliability
Because reliability has its history in educational measurement (think standardized testing),
much of the vocabulary we use to describe reliability comes from the testing literature. But
do not let bad memories of standardized tests keep you from applying these concepts to
measuring customer attitudes. These four methods are the most common ways of measuring the
reliability of any empirical method or metric.
Inter-Rater Reliability
The extent to which raters or observers respond the same way to a given phenomenon is one
measure of reliability. Where there is judgment there is disagreement.
Even well-trained experts disagree with one another when observing the same thing.
Kappa and the correlation coefficient are two common measures of inter-rater reliability.
Examples include:
Evaluators identifying interface problems
Experts rating the severity of a problem
For example, we have found that the average inter-rater reliability for usability experts
rating the severity of usability problems was r = .52. You can also measure intra-rater
reliability, where you correlate multiple judgments from the same observer. In the same study,
we found that the average intra-rater reliability when judging problem severity was r = .58
(which is fairly low reliability).
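As an illustration only (the code and the rating data below are hypothetical and not taken
from the study mentioned above), here is a minimal Python sketch of the two inter-rater
statistics named here, kappa and the correlation coefficient:

# Hypothetical example of two common inter-rater statistics:
# Cohen's kappa for categorical judgments, Pearson r for numeric severity ratings.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Two evaluators classify the same eight usability findings (made-up data).
rater_a = ["problem", "problem", "no problem", "problem", "no problem", "problem", "problem", "no problem"]
rater_b = ["problem", "no problem", "no problem", "problem", "no problem", "problem", "problem", "problem"]
kappa = cohen_kappa_score(rater_a, rater_b)   # 1.0 = perfect agreement, 0 = chance-level agreement

# The same two evaluators rate the severity of eight problems on a 1-5 scale (made-up data).
severity_a = [3, 4, 2, 5, 1, 4, 3, 2]
severity_b = [3, 5, 2, 4, 2, 4, 2, 2]
r, _ = pearsonr(severity_a, severity_b)       # inter-rater correlation

print(f"Cohen's kappa: {kappa:.2f}")
print(f"Severity correlation: r = {r:.2f}")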
Test-Retest Reliability
Do participants provide the same set of responses when nothing about their experience or
their attitudes has changed? You do not want your measurement system to fluctuate when all
other things are held constant.
Have a set of participants answer a set of questions (or perform a set of tasks). Later (at
least a few days later, typically), have them answer the same questions again. When you
correlate the two sets of measures, look for a high correlation (r > 0.7) to establish
test-retest reliability.
As you can see, there is extra effort and planning involved: you need participants to agree
to answer the same questions twice. Few questionnaires have reported test-retest reliability
(largely because of these logistics), but with the growth of online research we should
encourage more of this type of measurement.
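A minimal sketch of the test-retest computation described above, using hypothetical
satisfaction ratings from ten participants measured in two sessions a few days apart:

from scipy.stats import pearsonr

# Hypothetical 7-point satisfaction ratings from the same ten participants,
# collected in session 1 and again a few days later in session 2.
time_1 = [6, 5, 7, 4, 6, 3, 5, 6, 4, 7]
time_2 = [6, 5, 6, 4, 7, 3, 5, 5, 4, 7]

r, p = pearsonr(time_1, time_2)
# A correlation above roughly 0.7 is commonly taken as evidence of test-retest reliability.
print(f"Test-retest reliability: r = {r:.2f} (p = {p:.3f})")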

Parallel-Forms Reliability


Getting the same or very similar results from slight variations in the question wording or
the testing method also establishes reliability. One way to achieve this is to create, say, 20
items that all measure the same construct (satisfaction, loyalty, usability), administer 10 of
the items to one group and the other 10 to another group, and then correlate the results. You
are looking for a high correlation and no statistically significant difference between the
groups.
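As a hedged illustration (hypothetical data, and only one common variant of the procedure, in
which the same participants complete both 10-item forms), the correlation step might look like
this:

from scipy.stats import pearsonr

# Hypothetical total scores for eight participants who completed both
# 10-item forms built to measure the same construct.
form_a_totals = [42, 35, 48, 30, 44, 38, 41, 33]
form_b_totals = [40, 36, 47, 31, 45, 36, 42, 35]

r, _ = pearsonr(form_a_totals, form_b_totals)
# A high correlation between the two forms supports parallel-forms reliability.
print(f"Parallel-forms reliability: r = {r:.2f}")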

Internal Consistency Reliability

This is by far the most commonly used measure of reliability in applied settings. It is
popular because it is the easiest to compute with software: it requires only one
administration of the items to a single sample of data. The most common index of internal
consistency reliability is Cronbach's alpha (sometimes called coefficient alpha).

It measures how consistently participants respond to one set of items. You can think of it as
a kind of average of the correlations among the items. Cronbach's alpha ranges from 0.0 to
1.0. Since the late 1960s, the generally accepted minimum level of reliability has been 0.70;
in practice, however, high-quality questionnaires aim to exceed 0.90. For example, the SUS has
a Cronbach's alpha of 0.92.

In general, the more items you have, the more internally consistent the instrument tends to
be, so one way to increase internal consistency reliability is to add items to your
questionnaire. Because there is often a strong need to keep questionnaires short, however,
internal reliability often suffers. If you have only a few items, and consequently lower
internal reliability, a larger sample size helps offset the loss in reliability.
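A minimal sketch of how Cronbach's alpha can be computed by hand for a hypothetical 5-item
questionnaire answered by six respondents (the data are invented; the formula compares the sum
of the item variances with the variance of the total score):

import numpy as np

# Hypothetical responses: rows are respondents, columns are items (1-5 scale).
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 2, 3, 3],
    [5, 5, 4, 5, 4],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
])

k = scores.shape[1]                              # number of items
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the total score

# Cronbach's alpha = k / (k - 1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")          # 0.70 is the conventional minimum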

Here are a few things to keep in mind about measuring reliability:
Reliability is the consistency of a measure or method over time.
Reliability is necessary, but not sufficient, for establishing a method or metric as valid.
There is no single measure of reliability; instead, there are four common ways of assessing
the consistency of responses.
You will want to use as many measures of reliability as you can (although in most cases one
is sufficient for understanding the reliability of your measurement system).
Even if you cannot collect reliability data, be aware of the ways in which low reliability can
affect the quality of your measurements, and ultimately their validity.
