2. Talk about the principles of language assessment: ‘when’, ‘why’, ‘what’ of assessment.
3. Outline the types of assessment. Overall review.
4. Compare formative and summative assessment; subjective and objective assessment.
5. Describe the peculiarities of self-assessment and peer-assessment. Outline the self-assessment
tools.
6. Describe counseling as a self-assessment tool. Give examples.
7. Talk about continuous assessment and diagnostic assessment. Give examples.
8. Characterize tests as a form of assessment. Talk about different types of tests.
9. Compare norm-referenced and criterion-referenced tests. Talk about advantages and
disadvantages of testing.
10. Give an overview of the main test parameters necessary for the effective use of testing
procedures.
11. Define validity, its types and its necessity for the assessment procedures.
12. Define reliability, its types and necessity for the assessment procedures.
13. Define practicality and talk about it in the context of administering tests.
14. Outline the effects of backwash and spin off. Compare positive and negative backwash effects.
15. Characterize micro and macro skills of reading and listening and talk about the importance of
this distinction for the assessment. Give examples to illustrate your point of view.
16. Characterize micro and macro skills of writing and speaking and talk about the importance of
this distinction for the assessment. Give examples to illustrate your point of view.
17. Talk about assessing speaking. Outline speaking sub-skills, the types of speaking performance
(imitative, intensive, responsive, interactive, extensive), and the common assessment tasks.
18. Talk about assessing writing: writing sub-skills, the types of writing performance (imitative,
intensive, responsive, extensive) and the tasks for assessment.
19. Compare analytic and holistic scales as tools to assess productive skills.
20. Talk about assessing reading: reading sub-skills and possible assessment tasks.
21. Talk about assessing listening: listening sub-skills, types of listening performance (intensive,
responsive, selective, extensive) and types of assessment tasks.
22. Talk about psychological assessment and the ways to perform it in the classroom.
23. Outline the role of portfolios and e-portfolios as a method of assessment.
24. Describe portfolio as a self-assessment and self-reflection tool for teachers in the course of their
professional development.
Testing is a form of assessment, where the aim is to determine what the learner
knows, compared with teaching, which refers to the imparting of or sharing of (in
our context) linguistic knowledge.
Testing—procedure
Test-technique
In the educational context, the verb ‘to evaluate’ often collocates with terms
such as:
Assessment occurs when judgments are made about a learner’s performance, and
entails gathering and organizing information about learners in order to make
decisions and judgments about their learning.
Assessment is thus the process of collecting information about learners using
different methods or tools (e.g. tests, quizzes, portfolios, etc.).
Formative assessment:
It is process-oriented and is also referred to as ‘Assessment for Learning’. It is an ongoing process to monitor learning, the aim of which is to provide feedback to improve teachers’ instruction methods and improve students’ learning.
Summative assessment:
It is product-oriented and is often referred to as ‘Assessment of Learning’. It is used to measure student learning progress and achievement at the end of a specific instructional period.
Alternative assessment:
It is also referred to as authentic or performance assessment. It is an alternative to traditional assessment that relies only on standardized tests and exams. It requires students to do tasks such as presentations, case studies, portfolios, simulations, reports, etc. Instead of measuring what students know, alternative assessment focuses on what students can do with this knowledge.
Simply put, a test refers to a tool, technique or a method that is intended to measure students’ knowledge or their ability to complete a particular task. In this sense, testing can be considered a form of assessment. Tests should meet some basic requirements, such as validity and reliability.
Validity refers to the extent to which a
test measures what it is supposed to
measure.
Reliability refers to the consistency of test scores when administered on different
occasions.
When do we assess?
Assessment may take place before a language course begins, at the beginning of the
course, during the course on specific occasions or on an on-going basis, or at the
end. It may also take place afterwards.
Why do we assess?
This is obviously linked to “when”. It may be done in order to place the learner or
to advise them on what kind of course or work they should be doing. It may be
diagnostic in nature, in order to analyze the learners’ needs and decide what to
teach, to plan the course and decide on appropriate learning activities. It may be to
gauge progress during the course or, at the end of course, it may be to assess
whether the student has learnt what was taught in the course. Or we may want to
assess their language proficiency in general, for example to advise them on
whether they are ready to do a public exam. These different reasons for assessment of
course overlap.
What do we assess?
• Knowledge of lexis or grammar
• Their ability in the four skills
• Their ability to carry out certain kinds of real-life tasks which may involve
different language knowledge and skills
• Their progress
• Their behavior
• Their participation
• Their attitude
• Their suitability for doing a particular job or course
Lexis
—ability to spell words correctly (e.g. a dictation)
—selecting the most appropriate word in a given context (e.g. 4-option multiple
choice)
—knowledge of word formation (e.g. transform a base word and use it in its
appropriate form or complete a sentence)
—ability to use appropriate vocabulary in a given context (e.g. cloze test—
complete gaps with no options to choose from)
Listening
—ability to extract key information from a text (e.g. sentences which the learner has to decide are true or false according to the text)
—Ability to understand detailed information (e.g. complete sentences with
information heard on the recording)
—ability to ascertain attitudes and relationships between people (e.g. matching speakers’ comments to the correct speaker)
3)Outline the types of assessment. Overall review.
Teacher, peer, student himself, administrators, examinations
1. Formal/informal assessment
• Formal assessment—results in a grade. Ss are assessed under strict test conditions, where the individual student cannot communicate with others, where they have to complete exam tasks in a specific time and are allowed no other aids. In external exams, ss know the date, they can’t use phones, etc.
3. Self-assessment
4. Peer-assessment
5. Discrete point or discrete item test
you test one specific element —>to test only present perfect
Discrete point or discrete item test—we may use one to see how well our students have understood and can apply their knowledge of specific items of language, for example the present perfect or ways of making suggestions; end-of-unit tests in coursebooks commonly assess progress in this way.
Discrete item (or discrete point) tests are tests which test one element of language at a time. For
example, the following multiple choice item tests only the learner's knowledge of the correct past form
of the verb sing:
They have the advantages of often being practical to administer and mark, and objective in terms of
marking. However, they show only the learner's ability to recognise or produce individual items - not
how s/he would use the language in actual communication. In other words, they are
inevitably indirect tests - they provide evidence of the learners' ability to recognise or produce certain
specific elements of the language, but do not demonstrate how they might actually use them (or
anything else) in communication. Learners’ abilities are inferred rather than demonstrated.
Integrative tests, on the other hand, may be either direct or indirect. The use of the
term integrative indicates that they test more than one skill and/or item of knowledge at a time.
Dictation is an integrative test, because it involves listening skills, writing skills, recognition of specific
language items, grammar (eg in order to distinguish whether /əv/ should be written as have or of) and
so on. Dictation is still, however, an indirect test.
Many integrative tests are also direct tests - they ask the learner to demonstrate their ability to perform a specific "real life" communicative task by asking them to actually do it. They therefore demonstrate the learner's ability to use the language in actual communication.
Subjective assessment is a form of questioning which may have more than one
correct answer (or more than one way of expressing the correct answer). A
subjective test is evaluated by giving an opinion. Subjective tests are more
challenging and expensive to prepare, administer and evaluate correctly, but they
can be more valid.
Examples:
extended-response questions
essays
tests of writing ability are often subjective because they require an examiner
to give an opinion on the level of the writing.
In the classroom
Learners preparing for a subjective writing test, for example a letter of complaint,
need to think about their target audience, since they are being asked to produce a
whole text. Teachers can help them by emphasizing the importance of analyzing
the question and identifying the key points of content, register, and format.
5)Describe the peculiarities of self-assessment and peer-assessment. Outline the self-assessment tools.
Self-assessment is where learners assess their language proficiency and evaluate
their performance, rather than a teacher doing it. The criteria must be carefully
decided upon beforehand.
Scripts consist of specific questions that are structured into a clear progression of
steps, to guide learners in how best to achieve a task. A script can help students to
assess whether they are on the right track to completing the task, and supports them in adjusting their learning behavior according to the directions of the scripted questions.
A learning journal is a place for students to reflect in writing about how their
learning is going, what they need help with, and the effectiveness of different
strategies for learning. Teachers need to provide regular, short periods of time for
writing in the journal, with guiding questions to support self-assessment.
Examples:
Discussion.
Kinesthetic Assessments (Ss are acting – like labs)
Learning & Response Logs. (aka mind map but more detailed)
Observations.
Online Quizzes & Polls.
6)Describe counseling as a self-assessment tool. Give examples.
Non-linguistic assessment.
Counseling sessions
• We use this term to refer to a meeting between a student, or it could be a group
of students, and the teacher to discuss their work, their objectives or assessment
results.
• This is clearly a way of involving the students in assessing their performance and
therefore encouraging them to take on responsibility for their learning. If done on
an individual basis it offers what may be quite a rare opportunity for the teacher
to find out about the student as an individual.
• As a result of a counseling session you may find yourself adjusting your
assessment of a student, perhaps because they tell you about smth that may have
affected their performance in a test or in class, for example.
A progress / formative test is administered during the course. The test aims to
find out how well students have grasped what has been taught on the course so far.
In other words, the test content is based on the teaching content, not on other
things. As a result of this test, the teacher and the learners see how they are
progressing. They can be used to help the teacher and the learners themselves set
their own learning goals.
a) Formative assessment is used to improve the quality of future learning - ie to help the
teacher and learners decide how successful their learning has been up to that point, what
needs recycling and consolidation before they can move on, whether different learning
strategies need to be introduced and used, etc. Examples of tests with a formative purpose
are diagnostic tests and progress tests.
b) Summative assessment, on the other hand, evaluates the success of past learning in terms
of pass/fail or various forms of grades. They show to what extent the learner has or hasn't
achieved the standards required by the programme. Examples of summative tests
include achievement tests and proficiency tests.
A diagnostic test
A diagnostic test is used at the beginning of the course to find out what the students know and what they don’t know. It’s more finely tuned than a placement test and the content refers to what the students should know or will need to know at this particular level. Based on the test results the teacher or course planner can then ensure that the course content is relevant to the students’ needs; in other words, it will teach them what they don’t already know, i.e. it checks the gaps.
To help a teacher plan the contents of a course and the type of syllabus and suggest the range of activities and techniques. This diagnostic test is used at the beginning of a course once students have enrolled. Its content reflects what the ss should know or will need to know for the level they have been placed in.
A proficiency test
A proficiency test focuses on what students are capable of doing in a foreign
language, regardless of the teaching programme. It is used to assess whether a
student meets the general standard. These types of tests are often set by external
bodies such as examining boards. They may be used within schools to see for
example whether students are at the required level to enter for and pass a public
exam—IELTS, TOEFL (put you on the scale of your proficiency). Cambridge
exams testing what I know, my level (not related to the course I taught at the Uni).
This exam is likely to test candidates’ general proficiency and general ability in a
language, regardless of a specific type of teaching programme. The teacher or
school may ask ss to take a mock exam in order that they have an idea of their
strengths and weaknesses relative to the requirements and level of the actual exam.
Alternative to testing:
• Continuous assessment—teachers give grades for a number of assignments over
a period of time. A final grade is decided on a combination of assignments.
Techniques of continuous assessment: short tests, quizzes, projects, observations,
written/oral tasks.
• Self-assessment—ss evaluate themselves. The criteria must be carefully decided
upon beforehand.
Self-assessment tools: learner diaries, records of learning, checklists, can-do
statements, assessing writing and speaking via scales, marking own work using
keys.
• Teacher’s assessment—the teacher gives an assessment of the learner for work
done throughout the course including classroom contributions.
• Portfolio—a student collects a number of assignments and projects and presents them in a file; the file is then used as a basis for evaluation.
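The first alternative above, continuous assessment, combines grades for several assignments into one final grade. A minimal sketch of that combination as a weighted average (the assignment names, scores and weights here are invented for illustration, not from any real syllabus):

```python
# Continuous assessment sketch: a final grade decided on a combination
# of assignments, each with its own weight. All values are illustrative.

def final_grade(scores, weights):
    """Weighted average of assignment scores (0-100 scale)."""
    if set(scores) != set(weights):
        raise ValueError("every assignment needs a weight")
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total_weight

scores = {"quiz": 70, "project": 85, "oral_task": 90}
weights = {"quiz": 0.2, "project": 0.5, "oral_task": 0.3}
print(round(final_grade(scores, weights), 1))  # 83.5
```

Changing the weights changes which kind of work counts most, which is exactly the decision a teacher makes when setting up continuous assessment.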
What do tests do?
Competence testing is used to measure candidates’ acquired capability to
understand and produce a certain level of foreign language, defined by
phonological, lexical, grammatical, sociolinguistic and discourse
constituents.
Performance testing includes direct, systematic observation of an actual
student performance or examples of student performances and rating of that
performance according to pre-established performance criteria. Students are
assessed on the result as well as the process engaged in a complex task or
creation of a product. A performance test measures performance on tasks
requiring the application of learning in an actual or simulated setting.
Diagnostic testing seeks to identify those areas in which a student needs
further help. These tests can be fairly general, and show, for example,
whether a student needs particular help with one of the four language skills;
or they can be more specific, seeking to identify weaknesses in a student’s
use of grammar.
Psychometric testing is aimed at measuring psychological traits such as personality, intelligence, aptitude, ability, knowledge and skills, and makes specific assumptions about the nature of the ability tested. It includes a lot of discrete point items.
Achievement testing is used to determine whether or not students have
mastered the course content and how they should proceed. The content of
these tests, which are commonly given at the end of the course, is generally
based on the course syllabus or the course textbook.
Progress testing is used at various stages throughout a language course to
determine learners’ progress up to that point and to see what they have
learnt.
Proficiency testing is used to measure learners’ general linguistic
knowledge, abilities or skills without reference to any specific course. Two
types:
1. Some of these tests are intended to show whether students or people outside
the formal educational system have reached a given level of general
language ability.
2. Others are designed to show whether candidates have sufficient ability to be
able to use a language in specific areas such as medicine, tourism, etc. Such
tests are often called Specific Purposes tests.
Marking system
Objective (there’s a clear answer, and every marker would give the same
marks to the same question)
Subjective (marking depends largely on the personal decision of the marker;
different markers might have different marks for the same question)
Test construct
Communicative language ability
Speaking ability
Fluency
Literacy
A test of spoken fluency will test the following features:
Rate of speech
Number of hesitations
Extent to which it causes strain to the listener
Errors
Criteria vs marks
What is the aim of the progress test?
To give encouragement that something is being done well
To point out areas where learners need to improve
Thus, giving marks may be not the most effective way of assessment, especially
when skills are being tested.
Criterion-referenced test
A criterion-referenced test judges students against pre-defined criteria rather than against other test-takers, using test scores to generate statements about what students can do. Criterion-referenced assessment often uses quizzes. Its main objective is to check whether students have learned the topic or not.
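The contrast with norm-referenced testing can be sketched in a few lines: a criterion-referenced decision compares each score against a fixed criterion, while a norm-referenced one compares it against the rest of the group. The 60% pass mark, the top-half rule and the scores below are invented for illustration:

```python
# Criterion-referenced: judge each score against a fixed pass mark.
# Norm-referenced: judge each score against the other test-takers.
# Pass mark, top-half rule and scores are illustrative assumptions.

def criterion_referenced(scores, pass_mark=60):
    return {name: score >= pass_mark for name, score in scores.items()}

def norm_referenced(scores):
    """Pass the top half of the group, whatever their absolute scores."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    top_half = set(ranked[: len(ranked) // 2])
    return {name: name in top_half for name in scores}

scores = {"Ann": 75, "Ben": 65, "Kim": 40, "Lee": 62}
print(criterion_referenced(scores))
print(norm_referenced(scores))
# Lee reaches the fixed criterion (62 >= 60) but is not in the top half,
# so Lee passes the criterion-referenced test and fails the norm-referenced one.
```

The point of the contrast: under criterion referencing everyone can pass; under norm referencing a fixed proportion always fails, regardless of how well the group did.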
10)Give an overview of the main test parameters necessary for the effective use of testing procedures.
Construct validity: Does the test measure the concept that it’s intended to measure?
Content validity: Is the test fully representative of what it aims to measure?
Face validity: Does the content of the test appear to be suitable to its aims?
Predictive validity: Does the test predict candidates’ future performance?
Practicality relates to how easy and convenient it is to administer the test, based on straightforward practical considerations. Here you would need to consider
questions such as the materials available, the time it would take to mark the test,
and how easy it would be to produce the materials.
So, for example, if you want to use video for testing listening skills, have you got
enough copies of the video and enough viewing screens for all the classes that
need it?
11)Define validity, its types and its necessity for the assessment
procedures.
Validity is defined as the extent to which the instrument measures what it purports
to measure (It tells you how accurately the method measures something).
If a method measures what it claims to measure, and the results closely correspond
to real-world values, then it can be considered valid. There are four main types of
validity:
Construct validity: Does the test measure the concept that it’s intended to
measure?
Content validity: Is the test fully representative of what it aims to measure?
Face validity: Does the content of the test appear to be suitable to its aims?
Predictive validity: Does the test predict candidates’ future performance?
Content validity means that the test tests what it actually sets out to test.
Face validity: the test should appear to the students and the teacher to test what it’s supposed to. It refers to how the test appears to the users. For example, if you aim to test a student’s ability to read and understand whole texts, it might appear strange to do this by giving them a multiple-choice grammar test.
Predictive validity is concerned with the degree to which a test can predict
candidates’ future performance.
12) Define reliability, its types and necessity for the assessment
procedures.
Reliability.
Reliability is defined as the extent to which any measurement tool (a
questionnaire, test, observation) produces the same results on repeated trials. In
short, it is the stability or consistency of scores over time or across raters.
Test reliability: a test can be said to be reliable if the same students, with the same
amount of knowledge, taking the same test at a different time, would get more or
less the same results. The closer the results, the more reliable the test is.
It’s unlikely that teachers designing tests will need to test this kind of reliability,
but it’s a very important factor in externally assessed exams such as the
Cambridge exams, where it’s vital that there is not a large discrepancy in the level
of difficulty of the exam each examining session.
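One crude way to picture test reliability is to compare the same students’ scores from two administrations of the same test and see how close they are; the smaller the average gap, the more reliable the test. The names, scores and any notion of an acceptable gap below are invented for illustration:

```python
# Test reliability sketch: the same students take the same test twice;
# the closer the two sets of scores, the more reliable the test.
# Names and scores are illustrative assumptions.

def mean_score_gap(first, second):
    """Average absolute difference between two administrations."""
    return sum(abs(first[s] - second[s]) for s in first) / len(first)

first_run  = {"Ann": 72, "Ben": 55, "Kim": 81}
second_run = {"Ann": 70, "Ben": 58, "Kim": 79}

gap = mean_score_gap(first_run, second_run)
print(f"mean gap: {gap:.1f} points")  # a small gap suggests consistent scores
```

Real reliability studies use correlation coefficients rather than a raw gap, but the idea is the same: repeated measurement of the same knowledge should give close results.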
Scorer reliability means that different markers would give the same marks to the
same tests. This is easy with discrete item tests such as multiple choice, if there is
only one correct answer and the markers mark accurately. But with, for example, a
piece of writing, the marking may be more subjective, particularly if the marker
knows the students who did the test and is influenced by what they can usually
produce.
To improve scorer reliability, you can use things such as clear guidelines for marking (e.g. criteria and points awarded), standardization meetings to compare sample tests and agree on what constitutes the different grades, or double marking, where two teachers mark each piece of work to produce an average mark. Clear instructions for tasks are also important here.
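Double marking in particular lends itself to a small sketch: two markers’ scores are averaged, and any script where they disagree badly is flagged for a third opinion. The script names, scores and the 10-point threshold are hypothetical:

```python
# Scorer reliability via double marking: average two markers' scores and
# flag any script where they disagree by more than a set threshold.
# The threshold and the scores are illustrative assumptions.

def double_mark(marker_a, marker_b, threshold=10):
    results = {}
    for script in marker_a:
        a, b = marker_a[script], marker_b[script]
        results[script] = {
            "final": (a + b) / 2,
            "needs_third_marker": abs(a - b) > threshold,
        }
    return results

marker_a = {"essay1": 68, "essay2": 80}
marker_b = {"essay1": 64, "essay2": 55}

for script, result in double_mark(marker_a, marker_b).items():
    print(script, result)
# essay2 shows a 25-point gap between markers, so it is flagged
```

The flagging step is what standardization meetings then resolve: the two markers (or a third) discuss the discrepant script against the agreed criteria.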
13) Define practicality and talk about it in the context of administering
tests.
Practicality
This relates to how easy and convenient it is to administer the test, based on straightforward practical considerations. Here you would need to consider
questions such as the materials available, the time it would take to mark the test,
and how easy it would be to produce the materials.
So, for example, if you want to use video for testing listening skills, have you got
enough copies of the video and enough viewing screens for all the classes that
need it?
It might be considered a good idea to give students individual interviews to judge
their speaking ability, but with a class of say 15 students whose lessons are 90 min
long, twice a week, you would need to either have a number of extra teachers
available during teaching hours to carry out the test, or the test would take a lot of
time out of class time and might need to be carried out over several lessons, which might not be satisfactory from the point of view of either the students or the institution. What would the other students be doing while their classmates are being interviewed? These kinds of practical issues can narrow the choices available when we decide the best way to test our students’ skills.
14)Outline the effects of backwash and spin off. Compare positive and
negative backwash effects.
Washback/backwash refers to the effect that the test has on the teaching programme that leads up to it, and can be both positive and negative.
Example: ss don’t want to do anything except what they need for the ЕГЭ. If you give students smth from FCE, they may tell you: “I don’t need it, there are no such tasks in the ЕГЭ”.
• Tests may create negative backwash i.e. you adjust your teaching to suit what is
in the test, doing lots of “test practice” rather than focusing on learners’ needs
from a longer term or broader perspective;
• Learners may not bother with class activities or homework because they know
it’s only the final test result that counts.
• High anxiety, test for the sake of test, cheating, bad grades may discourage.
In some circumstances, students studying for a test may spend most of their
classroom time doing practice tests and developing the skills and strategies they
need to do well in the final assessment, rather than developing the broader skills
which they may need in order to use the language in a communicative setting
outside the classroom. A teacher may decide to limit the lesson content to only
those structures and areas of lexis, or types of written and spoken texts, which will
appear in the exam. On the other hand, students who are preparing to take an
examination may be better motivated to study outside class, to do homework tasks,
to practice their writing skills in the case of examinations with a written component, and to develop other skills such as deducing meaning from context
which will improve their overall language ability and future studies in the
language.
15)Characterize micro and macro skills of reading and listening and talk
about the importance of this distinction for the assessment. Give
examples to illustrate your point of view.
Often is taken for granted
Acquired by the age of 5-7 y.o.
Reading skill is often used for assessing speaking, writing (as a stimulus for
the test-taker response)
Reading as a process is unobservable
Issues:
Students don’t “see” reading as a skill
Students aren’t used to using any strategies
Students are overwhelmed by the amount of unknown vocabulary
Students have low reading rate
Students don’t read extensively outside the classroom
Students don’t see any improvement
The importance of listening
Closely connected with speaking
It’s rare to find just a listening test
Listening isn’t a clearly observable skill: we can assess the result of listening, but not the process
Context: outside the text
Co-text: what’s around in the text (the gap and its surroundings: collocations, phrasal verbs, etc.), e.g. goal: set a goal / achieve a goal
16)Characterize micro and macro skills of writing and speaking and talk
about the importance of this distinction for the assessment. Give
examples to illustrate your point of view.
How to score imitative speaking tasks? Scoring scale for repetition tasks:
2 acceptable pronunciation
1 comprehensible, partially correct pronunciation
0 silence, seriously incorrect pronunciation
5. Extensive (monologue)
Speaking strategies
1. Turn-taking
2. Paralinguistics
3. The co-operative principle (negotiating meaning)
4. Hedging (I’m not sure, euphemism (I think you’re not so clever))
5. Ellipsis
6. Buying time (thanks for the question …)
7. Adjacency
8. Openings and closings
9. Topic sensitivity
10.Stress and intonation
11.Backchannel devices
12.Prefabricated chunks
Issues with assessing speaking
4. No speaking task is capable of isolating the single skills of oral production
5. How to design elicitation techniques? How to prevent the test taker from avoiding the target language by paraphrasing?
6. Tasks become more open-ended: how should we score them?
To assess speaking in Movers we use a holistic scale.
18)Talk about assessing writing: writing sub-skills, the types of writing
performance (imitative, intensive, responsive, extensive) and the tasks
for assessment.
3. Expressing relationships between parts of a written text through cohesive devices [especially through grammatical ones]
introducing an idea
developing an idea
concluding an idea
explicitly
implicitly
narrative
argument
1. Representative tasks
Specify all possible content (genres, functions, styles, types of texts,
formality, levels, etc)
Include a representative sample of the specified content (NB: the
desirability of wide sampling has to be balanced against practicality)
1.Imitative writing
Fundamental, basic skill- ability to write letters, words, very brief sentences,
and use punctuation
Mastering the mechanics of writing (handwritten texts)
Form is primary
*The problem appears with the rise of personal computers, tablets, and phones
Writing scale
2. Grammatically and lexically correct
1. Either grammatically or lexically correct
0. Both grammatically and lexically incorrect
3. Responsive writing
Performing at a limited discourse level, connecting sentences into paragraphs, a sequence of several paragraphs
Tasks respond to pedagogical directives, lists of criteria, outlines
The writer has mastered the fundamentals of sentence-level grammar and is more focused on the discourse conventions that will achieve the objectives of the written text
Form-focused attention is at discourse level, with a strong emphasis on context and meaning
4. Extensive writing
Successful management of all the processes and strategies of writing for all
purposes
Focus on achieving a purpose, organizing and developing ideas logically,
using details to support/illustrate ideas, demonstrating syntactic and lexical
variety
Engagement in the process of writing through multiple drafts
Focus on grammatical form is limited to occasional editing or proofreading a
draft
Responsive and extensive writing
The test taker is freed from the strict control of intensive writing; test takers are involved in composing, or real writing, rather than a display of writing
There’s a choice of topics, lengths, styles, language
Higher-end production level
1. Holistic scale
Primary trait scale (If the class or the assignment focuses on a particular aspect of writing, or a specific linguistic
form, or the use of a certain semantic group, primary trait scoring allows the instructor and the students to focus their feedback,
revisions and attention very specifically.)
2. Analytical scale
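The difference between the two scale types can be sketched like this: a holistic scale yields one overall band chosen by the rater, while an analytic scale scores each criterion separately and combines them. The criteria, weights and 0-5 bands below are invented, not from any real rubric:

```python
# Analytic vs holistic scoring sketch. The criteria, weights and band
# range are illustrative assumptions, not a real rating scale.

def analytic_score(ratings, weights):
    """Weighted combination of per-criterion ratings (each 0-5)."""
    return sum(ratings[c] * weights[c] for c in ratings) / sum(weights.values())

def holistic_band(overall_impression):
    """A single overall band (0-5) assigned directly by the rater."""
    return overall_impression

ratings = {"content": 4, "organization": 3, "grammar": 5, "vocabulary": 4}
weights = {"content": 2, "organization": 1, "grammar": 1, "vocabulary": 1}

print(round(analytic_score(ratings, weights), 2))  # 4.0
print(holistic_band(4))                            # 4
```

The analytic version shows where the two marks come from (organization pulled the score down, grammar pulled it up), which is exactly the feedback value that a single holistic band cannot provide.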
Competence vs performance in assessment
We assess competence, but observe performance
Performance doesn’t always indicate true competence (emotional test
anxiety, memory block, illness, bad night’s rest, etc)
Given the fallibility of the results of a single performance, consider at least two performances
How can it be done?
Several tests that are combined to form an assessment
A single test with multiple test tasks to account for learning styles and
performance variables
In-class and extra-class graded work
Alternative forms of assessment (self-, peer-, portfolios, observations, etc)
Multiple measures will always give you a more reliable and valid assessment than a single measure
Reading skill
Often is taken for granted
Acquired by the age of 5-7 y.o.
Reading skill is often used for assessing speaking, writing (as a stimulus for
the test-taker response)
Reading as a process is unobservable
Which words come to your mind when you think about teaching/assessing
reading?
Bottom-up + top-down approaches
Schemata
Content
Genre
Comprehension
Issues:
Students don’t “see” reading as a skill
Students aren’t used to using any strategies
Students are overwhelmed by the amount of unknown vocabulary
Students have a low reading rate
Students don’t read extensively outside the classroom
Students don’t see any improvement
Coffield et al. (2004a, 2004b) set out to determine whether any of these theories
could be used by educators, and suggested that classifying people into a fixed set
of characteristics may be counter-productive to learning.
• Visual learners like to see words or images
• Auditory learners like to hear words or sentences
• Kinesthetic emotional learners like to involve their feelings and emotions
• Kinesthetic motor learners like to do or touch something
Learning styles
1. Various factors influence a person’s preferred style. For example, social
environment, educational experiences, or the basic cognitive structure of the
individual.
2. A typical presentation of Kolb’s two continuums is that the east-west axis
is called Processing Continuum (how we approach a task), and the north-
south axis is called the Perception Continuum (our emotional response, or
how we think or feel about it).
Acquiring information (S vs N)
Sensing: matter-of-fact; empirical/practical; dislikes fuzzy problems;
specialist/functional perspective; present-oriented.
Intuitive: generates ideas; enjoys new jobs; insight into complex problems;
Gestalt (top-down) perspective; future-oriented.
Making Judgments (T vs F)
Thinking: tough-minded; analytic, quantitative; clear criteria; impersonal,
detached; task-oriented; correct–incorrect.
Feeling: value-centered; people-oriented; personal perspective; warmth,
over-committed; good–bad.
Establishing Goals (J vs P)
Judging: output-oriented; “time is money”; prefers action to analysis;
implementation-oriented.
Perceiving: takes on many projects; overload; “look before you leap”;
emphasizes diagnosis.
On most traditional tests, students are asked to work within a set time frame.
This offers little or no opportunity for the test-taker to think about, reflect
upon, and judge their own work.
The portfolio approach developed from the concept of reflective practice. A
portfolio isn’t just a collection or folder of all the students’ handouts or
material; the essential consideration for the teacher and her/his students is
what exactly they want to include in the portfolios. Portfolios give students
the opportunity to reflect on their learning so that they may evaluate their
progress in a course or programme.
For results-oriented learning, tests are good;
for process-oriented learning, portfolios.
• Portfolio development is increasingly cited as a viable alternative to standardized
testing
• Portfolio assessment is aimed at determining student achievement and
competences
the class but their own experience. In this section too, learners may include any
plans they have for taking an English exam, visiting an English-speaking country,
or having English-speaking visitors at home.
Rationale
• Raising learners’ awareness of the many different ways English can be learnt and
practiced.
• Moving away from the idea that English is only a school subject, towards seeing
that it is useful and necessary outside the classroom.
• Sharing experiences and finding out about what others do.
• Reading their own classmates’ stories in English is useful for reviewing
grammar and vocabulary.
3. The Dossier
This is a collection of course work which shows learners’ level of English. It may
include corrected class or homework, tests and exams or any other piece of work
which illustrates where the learner is at. In this part of an LP, a learner may
include voice or video recordings or any part of project work which they have done.
Advantages of Language Portfolios.
1. They enhance learners’ motivation by providing something personal and tangible
which they can build up and develop over the course.
2. They help learners to reflect on their own learning and achievement by asking
them to make choices, review, compare and organize their own work.
3. They enable learners to look for new cultural experiences by opening their
eyes to the possibilities available to them. Part of portfolio work involves
“show and tell” sessions where learners talk about their experiences and look at
other portfolios.
4. From a teacher’s point of view, portfolios lead to greater learner autonomy
since they involve self-assessment, learner responsibility and parent
involvement.
5. Learners can work in their own time and at their own pace on different
sections of the LP.
6. Parents get to see the progress of their kids.
Secondary teachers should realize that portfolio collection and assessment in the
secondary classroom, as in other education levels, is driven by the purpose and
organization of ESL instruction (Batzle, 1992)
Before launching portfolio systems, educators must answer the following
questions:
• What specifically are the objectives that teachers have for particular time periods
(the six weeks, semester, year)?
• Is the curriculum for ESL students focused on language development only? Or
does it include content skills and knowledge?
Portfolio contents
• Language
• Skills
• Content knowledge and skills
• Effort and performance
• Works of different nature (drawings, written texts, audio recordings, videos,
DIY, etc.)
• “Secondary evidence” (checklists, interviews, observations by the teacher)
Assessment and Evaluation
• At one level, assessment involves ongoing, periodic reflection on growth by
both teacher and student. This assessment may involve a teacher’s cursory
review of the portfolio, noting changes in specific areas across specific time
periods, that culminates in a short evaluation.
• At another level, assessment involves a more formal examination of what
students say, do, and think, using locally developed rubrics such as a checklist.
Assessments are conducted to provide a basis for some final evaluation of the
students. This evaluation may eventually be recorded as a grade, rating or mark
of some kind for each dimension or subject assessed.
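A locally developed checklist of this kind can be sketched as a simple pass/threshold rule. The criteria and the 75% threshold below are invented for illustration; a real checklist would be negotiated locally, as the text stresses.

```python
# Sketch of a locally developed portfolio checklist turned into a final mark.
# The criteria and the pass threshold are assumptions for illustration.

CHECKLIST = [
    "shows progress across the period",
    "includes multiple drafts",
    "contains self-reflection",
    "demonstrates syntactic and lexical variety",
]

def evaluate(portfolio_checks: dict[str, bool], threshold: float = 0.75) -> str:
    """Return a final evaluation from the fraction of criteria met."""
    met = sum(portfolio_checks[c] for c in CHECKLIST)
    ratio = met / len(CHECKLIST)
    return "satisfactory" if ratio >= threshold else "needs development"

sample = {c: True for c in CHECKLIST}
sample["includes multiple drafts"] = False
print(evaluate(sample))  # 3 of 4 criteria met → "satisfactory"
```

In practice each checklist dimension would usually be reported separately (a mark "for each dimension assessed") rather than collapsed into one label; the collapse here only illustrates the final-evaluation step.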