
CRITICAL JOURNAL REVIEW

Lecturer :
Sisila Fitriany Damanik, S.S., M.Hum.

Subject :

ARRANGED BY :

Tungunedo Manalu

ENGLISH LITERATURE DEPARTMENT
FACULTY OF LANGUAGES AND ARTS
STATE UNIVERSITY OF MEDAN
2023

PREFACE

First of all, praise and gratitude be to God, for it is by His blessings that the author
was able to complete this Critical Journal Review (CJR) assignment.

In this review, we examine the article "What to look for in ESL admission tests:
Cambridge certificate exams, IELTS and TOEFL."

We believe in critical thinking, and this journal reflects the dedication of scholars
and listeners to understanding our complex world.

We are thankful to the authors who shared their insights and to our readers. We invite
you to read, question, and broaden your horizons with us. The author hopes that, together,
we can improve our understanding of the world. Thank you for joining us in this pursuit of
critical thinking.

The author is fully aware that this critical journal review has shortcomings and is far
from perfect. Therefore, the author welcomes criticism and suggestions for improving
future critical journal reviews, remembering that nothing becomes perfect without
constructive feedback.

Medan, September 2023



CHAPTER I

INTRODUCTION

1.1 Background

A critical journal review has several roles, including promoting reflection on your learning or
listening, encouraging the analysis and assessment of content, fostering critical thinking,
documenting your thoughts, facilitating communication of your insights, and aiding in the
development of skills, especially in listening and analysis.

1.2 The Purpose of a Critical Journal Review (CJR)

The purpose of a critical journal review is to engage in reflective thinking, analyze and
evaluate content, develop critical thinking skills, document personal insights, facilitate
communication of those insights, and enhance various skills, including listening and analytical
abilities.

1.3 Identity of the Journal

● Title: What to look for in ESL admission tests: Cambridge certificate exams, IELTS
and TOEFL

● Authors: Micheline Chalhoub-Deville and Carolyn E. Turner

● Publication date: 2000

● ISSN: 0346-251X
CHAPTER II

RESULT

2.1 Method

The method presented in this journal is a scoring method, described as follows.


Scoring in the adaptive Listening and Structure sections is cumulative. The final
section score depends on item difficulty and the number of items answered correctly.
Correct answers to more difficult questions carry more weight than correct answers to
easier ones. Item difficulty is estimated using the three-parameter item response theory
model. Scores in the linear Reading section are based on the number of correct
answers, but adjusted for potential discrepancies in the individualized sets of reading passages.
Two independent raters evaluate essays in the Writing section. Essays are rated using
a six-point scale. Section scores are converted into scaled scores (ETS, 1998).

The three scaled section scores contribute equally to the total scaled score.
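
For readers who are not familiar with the three-parameter item response theory (3PL) model
mentioned above, a standard formulation of that model (given here only as general background,
not quoted from the reviewed article) expresses the probability that a test taker with ability θ
answers item i correctly as:

P_i(θ) = c_i + (1 − c_i) / (1 + exp(−a_i(θ − b_i)))

where a_i is the item's discrimination, b_i its difficulty, and c_i its guessing (lower-asymptote)
parameter. Under such a model, a correct answer to an item with a high difficulty b_i raises the
estimated ability θ more than a correct answer to an easy item, which is consistent with the
statement above that harder questions carry more weight in the cumulative score.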

2.2 Summary

Historically, large-scale English as a second language (ESL) admission testing has been
dominated by two test batteries: the Cambridge exams, sponsored by the University of
Cambridge Local Examinations Syndicate (UCLES), and the Test of English as a Foreign
Language (TOEFL), sponsored by the Educational Testing Service (ETS).
Each paper of the Cambridge certificate exams presents a variety of tasks
with regard to both the type of input and the type of response. For example, according to
the "CPE Handbook" (UCLES, 1998a), the Listening section includes three
tasks. The first task involves listening to a debate with a multiple-choice response format;
the second task requires listening to a radio interview with sentence completion as the
response format; and the third comprises listening to a discussion with a matching
response format. Research such as that by Shohamy (1984) and Chalhoub-Deville
(1995) supports including a variety of tasks and response types (see also Bachman,
1990).
IELTS differs from the Cambridge exams in that published reports recognize the
need to address reliability and include information to that effect. For example, IELTS
manuals describe a detailed approach to the certification of interviewers/assessors
for the speaking test and raters for the writing component that requires recertification
procedures every two years.
CHAPTER III

ADVANTAGE AND DISADVANTAGE

3.1 Advantage

Enhancing education: the authors explain in detail the points the article conveys, covering
the introduction, reliability, validity considerations, purpose, content, method, conclusion,
and references.

3.2 Disadvantage

(For the Cambridge certificate exams, see Section 3.4 of the article.) In short, IELTS
publications need to provide more documentation of rater reliability, the reliability of the
instrument, and the ensuing scores.

Another aspect of reliability to consider with admission tests is the dependability of
decisions made based on cut-off scores. Often, institutions do not carry out any
systematic local investigation to decide upon cut-off scores, but instead base
their decisions on little supporting data and documentation.

CHAPTER IV

CONCLUSION

Scores on language proficiency tests like the TOEFL, IELTS, and the Cambridge certificate
exams are used to determine whether or not a student will be admitted to an academic
institution. Therefore, it is essential that the results reflect high-quality data. It is the
responsibility of those who create large-scale tests, like those examined in this article, to
build instruments that adhere to professional standards, to keep researching the characteristics
of their tests and the results they produce, and to make test manuals, user guides, and
research papers available to the general public.
Additionally, test users have a duty. As required by the "Standards" (AERA et al., 1999),
test creators must "provide information on the strengths and weaknesses of their
instruments," but ultimately it is the user's obligation to utilize and interpret tests
appropriately.

CHAPTER V

REFERENCES

Clapham, C., 2000. Assessment for academic purposes: where next? System 28, 511–521.
Crystal, D., 1997. English as a Global Language. Cambridge University Press, Cambridge.
Eignor, D., Taylor, C., Kirsch, I., Jamieson, J., 1998. Development of a Scale for Assessing the Level of Computer Familiarity of TOEFL Test Takers (TOEFL Research Report No. 60). Educational Testing Service, Princeton, NJ.
ETS, 1998. Computer-Based TOEFL: Score User Guide. Educational Testing Service, Princeton, NJ.
Fulcher, G., 1996. Testing tasks: issues in task design and the group oral. Language Testing 13, 23–51.
Fulcher, G., 2000. The 'communicative' legacy in language testing. System 28, 483–497.
IELTS (International English Language Testing System), July 1996. IELTS Annual Report: 1995. UCLES, The British Council, and IDP Education Australia, Cambridge.
Ingram, D.E., Wylie, E., 1993. Assessing speaking proficiency in the International English Language Testing System. In: Douglas, D., Chapelle, C. (Eds.), A New Decade of Language Testing Research. TESOL, Alexandria, VA, pp. 220–234.
Jamieson, J., Taylor, C., Kirsch, I., Eignor, D., 1998. Design and Evaluation of a Computer-Based TOEFL Tutorial (TOEFL Research Report No. 62). Educational Testing Service, Princeton, NJ.
Kachru, B.B. (Ed.), 1992. The Other Tongue: English Across Cultures. University of Illinois Press, Urbana, IL.
Kirsch, I., Jamieson, J., Taylor, C., Eignor, D., 1998. Computer Familiarity among TOEFL Test Takers (TOEFL Research Report No. 59). Educational Testing Service, Princeton, NJ.
Lado, R., 1961. Language Testing. McGraw-Hill, New York.
Messick, S., 1989. Validity. In: Linn, R.L. (Ed.), Educational Measurement, 3rd Edition. American Council on Education/Macmillan, New York, pp. 13–103.
Messick, S., 1996. Validity and washback in language testing. Language Testing 13, 241–256.
Pierce, B., 1994. The test of English as a foreign language: developing items for reading comprehension. In: Hill, C., Parry, K. (Eds.), From Testing to Assessment: English as an International Language. Longman, New York, pp. 39–60.

Quirk, R., Widdowson, H.G. (Eds.), 1985. English in the World: Teaching and Learning the Language and Literatures. Cambridge University Press, Cambridge.
Shohamy, E., 1984. Does the testing method make a difference? The case of reading comprehension. Language Testing 1, 147–170.
Shohamy, E., Reves, T., Bejarano, Y., 1986. Introducing a new comprehensive test of oral proficiency. English Language Teaching Journal 40, 212–222.
Spolsky, B., 1995. Measured Words. Oxford University Press, Oxford.
Taylor, C., Jamieson, J., Eignor, D., Kirsch, I., 1998. The Relationship Between Computer Familiarity and Performance on Computer-Based TOEFL Test Tasks (TOEFL Research Report No. 61). Educational Testing Service, Princeton, NJ.
UCLES, 1997. FCE Handbook. University of Cambridge Local Examinations Syndicate, Cambridge.
UCLES, 1998a. CPE Handbook. University of Cambridge Local Examinations Syndicate, Cambridge.
UCLES, 1998b. CAE Report. University of Cambridge Local Examinations Syndicate, Cambridge.
UCLES, 1998c. Producing Cambridge EFL Examinations: Key Considerations and Issues. University of Cambridge Local Examinations Syndicate, Cambridge.
UCLES, 1999a. CAE Handbook. University of Cambridge Local Examinations Syndicate, Cambridge.
UCLES, 1999b. IELTS Annual Review: 1998/1999. University of Cambridge Local Examinations Syndicate, The British Council, and IDP Education Australia, Cambridge.
Wall, D., 1997. Impact and washback in language testing. In: Clapham, C., Corson, D. (Eds.), Language Testing and Assessment, Encyclopedia of Language and Education, Vol. 7. Kluwer, Dordrecht, pp. 291–302.
Wall, D., 2000. The impact of high-stakes testing on teaching and learning: can this be predicted or controlled? System 28, 499–509.
