
Assessing Writing 51 (2022) 100607


Editorial

Over the past few years, we have all in our own circumstances confronted the disruptions, the adversities, and the challenges posed
by the pandemic. We hope this period of disruption is coming to an end and that a time for reconnecting and exploration is ahead. With
this hope, we enter 2022 with a nascent sense of optimism.
Our optimism is, in part, buoyed by Elsevier’s continued commitment to Assessing Writing. We see this commitment in a number of
“firsts” for the journal. These include a new editorial position, a new free access feature, and the anticipated publication of two special
issues.
Given the increased number of submissions to Assessing Writing, Elsevier agreed to create an Associate Editor position for the
journal. We are very pleased to announce that Tracey Hodges is joining Assessing Writing as Associate Editor. In addition to publishing
numerous articles in Assessing Writing, Dr. Hodges has served for a number of years on our Editorial Board. She will support us as the
two editors-in-chief by handling manuscripts associated with her areas of expertise. We look forward to the energy and insights Dr.
Hodges will bring to the editorial team.
This year Elsevier also initiated a new feature for the journal: The Editors’ Choice. Each quarter, we as editors-in-chief will select a
number of articles from the most recent issue of the journal to be designated free access for that quarter. These articles will be chosen to highlight the quality and diversity of scholarship published in the journal. This new feature demonstrates Elsevier’s
commitment to expanding access to the scholarship published in Assessing Writing.
This year will also see, for the first time in a single calendar year, the publication of two special issues. Beverly Baker and Atta Gebril are guest editing the “Special Issue on the Assessment of Languages other than English (LOTE)”. We are hopeful that this issue will show what the assessment of writing in English can learn from the assessment of writing in other languages.
Shulin Yu and Icy Lee are guest editing the “Special Issue on Writing Assessment and Feedback Literacy.” The call for abstracts for this
SI remains open until February 28th (see the journal home page for more information). We thank these four Editorial Board members
for their important contributions to the journal.
In another first, Volume 51 is the largest volume in the 28-year history of Assessing Writing. This volume includes 16 full-length articles and 5 book reviews. The full-length articles have been penned by authors from China, the United States, Australia, Colombia, Iran, New Zealand, and the United Kingdom. The topics explored in this volume are similarly diverse. Across the volume, articles help us to explore the complex interactions that mediate writing ability and its expression in textual form. Socio-cognitive patterns are explored alongside an array of intrapersonal factors (cognitive processes, dispositions, engagement, metacognition, mindset, reflexivity, self-regulation) and textual features (modality, syntactic complexity, vocabulary) that collectively underscore the enormous complexity involved in making sound inferences from performance assessment data.
Rock describes a rubric design process in which experts extracted descriptions of target language use from sample texts produced
by test-takers. The study harnessed disagreement between raters to develop a more robust, inclusive rating scale. Rock found that the
use of sample texts helped ensure a rubric that was free of generic and vague language.
Li and Huang report on an investigation into the relationship between organization and the overall quality of a holistically scored essay. They found that when assessing holistically scored essays, most participants ignored issues of organization while focusing more on language issues. Further, they found that attention to these issues shifted with writing quality, with organization being less of a focus when scoring lower-quality papers.
Xu examines the connection between mindsets (growth and fixed), feedback orientation (seeking and avoiding), and self-regulated
learning (cognitive, metacognitive, social behavior, and motivational regulation). Xu found that growth mindsets were positively
associated with feedback seeking orientation, and with all four self-regulated learning strategies. Comparing findings from this study
with similar studies conducted with other non-Chinese populations, Xu observed that both cultural and contextual factors may also
impact students’ use of self-regulated learning strategies.
Ryan, Khosronejad, Barton, Myhill, and Kervin examine self-reflexivity of elementary student writers. They present eight case
studies from a study of 570 Australian elementary student writers to demonstrate the associations between students’ self-assessments of their self-reflexive behavior, their teachers’ assessments of this behavior, and the conditions that shape the formation of their writing
practices. They found that teachers and students differed in their assessments of a student’s self-reflexive behavior. A key implication
of their findings is that a dialogic approach to assessment is necessary to better understand the student-as-writer and by extension the
pedagogical approaches that will best help that student. Categorizing students according to four reflexive modes—communicative,
autonomous, meta-reflexive, and fractured—they demonstrate how important it is for teachers to shape learning conditions to account
for variations in students’ reflexive modalities.
Teng, Wang, and Zhang investigated self-regulation strategy use among student writers and its impact on performance. They found that students in higher grade levels reported greater use of self-regulatory writing strategies than did students in lower grades. Female students used these strategies more than male students did. They observe that the development of self-regulatory writing strategies is multidimensional and dynamic, making it important to understand individual differences when promoting the use of these strategies.
Zhang and Hyland looked into student behavioral, affective, and cognitive engagement with feedback. In this study 33 Chinese
post-secondary students engaged with automated, peer, and teacher feedback as they completed a writing task. Zhang and Hyland
found engagement with feedback to be a complex, dynamic process requiring students to devote time, invest effort, regulate emotional
and attitudinal reactions, and employ cognitive and metacognitive strategies to receive, process, and act upon the feedback received.
They further found that an integrated approach to feedback supported student engagement. Their study highlights the need for inclusive pedagogies that create supportive learning environments.
Hosseinporr and Kazemi examined the composing strategies of 58 high- and low-performing post-secondary Iranian ESL/EFL student writers. They found that among these students the use of metacognitive strategies was a significant predictor of English writing ability. Low-performing students were unaware of many metacognitive strategies, a finding that highlights the importance of teaching metacognitive strategies in EFL/ESL writing classes.
Stiess, Krishnan, Kim, and Olson focused on text-based analytical writing quality, seeking to understand this writing construct, how its dimensions predict overall writing quality, and how these dimensions are expressed across intrapersonal factors such as English language proficiency, sex, and race/ethnicity. They report that in the context of text-based analytical writing, writing quality is a multidimensional construct consisting of three factors: Ideas/Structure, Evidence of Use, and Language Use. Female students outperformed male students on all three dimensions.
Tan, Fan, Braunstein, and Lane-Holbert looked at the role that linguistic, cultural, and substantive patterns play in shaping student responses to writing prompts. They compared the performance of 70 Chinese undergraduate English majors against that of 66 American undergraduate students responding to the same essay prompt. They found clear differences between the groups with respect to the expression of these patterns in their essays. Noting that 35–58% of the overall variance in writing assessment scores can be attributed to cultural features (Shermis, Shneyderman, & Attali, 2008), they highlight the importance of these patterns and their impact on how writing is scored, especially in international testing contexts.
Chamorro investigated how 38 Colombian pre-service teachers’ cognitive processes were affected by assessment modality (computer-based or paper-based). She found that assessment medium impacted test-takers’ cognitive processes: paper-based assessments triggered greater macro-planning of content and organization, while computer-based assessments triggered greater micro-level planning and after-writing revision. These findings carry implications for the inferences regarding writing ability, specifically with respect to planning and revision, that can be made given the response modality a test-taker is afforded.
Sarte and Gnevsheva examined ESL students’ use of noun modifiers when responding to selected writing tasks. They examined the influence of topic and writers’ developmental stage on this use. They found that the use of noun modifiers differed by proficiency level; at the same time, the demands of the task, and not just the proficiency of writers, impacted the types of linguistic features produced. These findings have implications for prompt design and caution against making inferences about proficiency independent of an examination of prompt effects.
Durrant and Durrant focused on register appropriateness in children’s written vocabulary development. Their results show that the development of academic vocabulary is discipline-specific. In particular, they highlight the differences between writing in science and writing in English classes. They caution that complex effects can underlie broad quantitative trends, requiring careful analysis to enable valid interpretations of those trends.
Zhang and Lu examined genre effects on the predictive power of syntactic complexity indices for L2 writing quality. They observed
that genres associated with differing communicative functions impact syntactic complexity. They caution that the predictive power of syntactic complexity indices needs to be systematically examined for genre effects, and that the diversity of testing and rating conditions also warrants closer examination.
Gong, Zhang, and Li investigated keyboarding fluency, the demographic, linguistic, and socio-economic factors that are associated with fluency, the relationship between fluency and writing performance, and the impact of fluency on text production and editing behaviors. They found that the relationship between keyboarding skill and writing performance was not uniform, and that the association between these two factors strengthened when keyboarding fluency fell below a certain threshold, which differed by genre and cognitive load. A key implication of this work, then, is that poor keyboarding fluency does impact writing performance up to a point, and that beyond that point, as fluency increases, keyboarding fluency no longer constrains writing performance. Their findings have implications for the use of keystroke logging as a measure of writing proficiency.
Kim and Kessler examined the relationship between lexical bundles in Chinese undergraduate student writing and writing quality. They
found that the use of lexical bundles was influenced by both prompt and task features. They also found that higher-scoring writers used more prompt-dependent bundles and a more diverse range of lexical bundle types. Their study raises cautions about the automated scoring of impromptu essay assessments; the authors observe, “When some L2 English learners are placed into an impromptu written assessment situation, the recurring formulaic strings of language they will produce in their writing likely will be highly prompt dependent, and utilize minimal discourse-organizing and referential bundles.”
Ouyang, Jiang, and Liu compared how well traditional measures of syntactic complexity and measures of dependency distance were able to differentiate between writers’ proficiency levels. They conclude that dependency distance measures were more effective than traditional syntactic complexity measures at discriminating between beginner, intermediate, and advanced learners. Their study further reveals the challenges of using NLP systems for assessing writing ability.
We are pleased to include reviews of five books in this issue: Toward a reconceptualization of second language classroom assessment:
Praxis and researcher-teacher partnership; Language Aptitude: Advancing Theory, Testing, Research and Practice; Critical Perspectives on
Global Englishes in Asia: Language Policy, Curriculum, Pedagogy and Assessment; Scoring Second Language Spoken and Written Performance:
Issues, Options and Directions; and Assessment rubrics decoded: An educator’s guide.
As always, we would like to thank our Editorial Board and our reviewers for their continuing support of this journal. Your work is so
important to building our profession, and to supporting our authors in maximizing the quality of manuscripts we publish. Finally, we
offer our continued appreciation to Elsevier for its ongoing investment in our research community.
David Slomp, Martin East

Reference

Shermis, M. D., Shneyderman, A., & Attali, Y. (2008). How important is content in the ratings of essay assessments? Assessment in Education: Principles, Policy & Practice, 15(1), 91–105. https://doi.org/10.1080/09695940701876219
