Meta-Analysis
Abstract
Written corrective feedback for improving L2 writing skills has been a debatable issue
for more than two decades. The aims of this meta-analysis are to (1) provide a
quantitative measure of the effect of computer-generated written feedback for im-
proving L2 writing skills and (2) verify how moderators (i.e., adopted technology, task
types, and learners’ language proficiency) mitigate the effectiveness of corrective
feedback provided by computer technology for developing the L2 students’ writing
fluency and accuracy. A comprehensive search was performed to collect the pop-
ulation of computer-mediated corrective feedback (CMCF) studies. The effect sizes
were calculated for 14 primary studies with L2 participants (N = 1220). The findings
indicate a large overall effect of CMCF (d = 1.21). A medium overall effect was found in
using automated writing evaluation (AWE) technology for writing skills, whereas a large
effect size was determined in using non-AWE technology. The results further indicate a
large overall effect in using CMCF for both writing fluency and accuracy. As for the
proficiency level of moderators, the results indicate a large overall effect in using CMCF
among beginners and intermediate learners, whereas the overall effect is small among
advanced learners. Limitations and recommendations for future studies are also raised
in this study.
1 College of Languages and Translation, Najran University, Najran, Saudi Arabia
Corresponding Author:
Mohammed Ali Mohsen, College of Languages and Translation, Najran University, Najran 987654, Saudi
Arabia.
Email: mmohsen@gmail.com; mamohsen@nu.edu.sa
2 Journal of Educational Computing Research 0(0)
Keywords
computer-mediated corrective feedback, writing accuracy, writing fluency, second
language, meta-analysis
Introduction
Written corrective feedback (WCF) has been a central issue in second language
acquisition (SLA) research. Interest in WCF is prompted mainly by its affordance of allowing
learners to notice their language faults and rectify these errors in their successive
revised drafts, thereby leading to L2 writing development (e.g., Karim & Nassaji, 2020;
Zhai & Ma, 2021; Zhang & Zhang, 2018; Xu & Zhang, 2021). Although teachers' WCF
is generally accurate and helps improve students' writing ability,
teachers cannot provide WCF to large numbers of learners in the classroom as
instantaneously as computers can; even delivering delayed feedback is time consuming and
effortful for teachers. To address this issue, many
technological tools have been launched to support the teachers' role in providing
corrective feedback on learners' writing. Modern technology has provided immediate
synchronous and asynchronous corrective feedback to language learners, which can
benefit L2 writing accuracy and development (Shintani & Aubrey, 2016). In the current
work, two skills of second language writing are examined from a meta-analytic
viewpoint: (1) writing accuracy to ensure that students can master grammar, punc-
tuation, and spelling and (2) writing fluency to deal with the students’ mastery of
coherence, content organization, and appropriate use of vocabulary.
One of the potential gains of modern technological advancement is the creation of
automated writing evaluation (AWE). AWE is built with natural language processing
(NLP), artificial intelligence (AI), or latent semantic analysis to provide
L2 learners with advanced corrective feedback that goes beyond polishing language
accuracy, further helping them to improve their macro-skills, such as coherence, organization,
and language content, which usually only human teachers can address
(Hockly, 2019; Li et al., 2016; Stevenson & Phakiti, 2014; Zhai & Ma, 2021; Zhang &
Zhang, 2018). These cutting-edge technologies provide immediate feedback to
L2 writers, enabling them to trace the progress of their writing by flagging collocational
errors, linking students to language corpora to identify correctly used
expressions, and integrating peers' and teachers' feedback with the AWE feedback
(Zhang & Zhang, 2018). AWE has the potential to analyze students’ errors and provide
meaningful feedback not only in low-level form but also in terms of writing devel-
opment content and rhetoric improvement (Cotos, 2014). This potential of AWE to aid
L2 content development has been accomplished through the rapid development of
technological devices in the modern age, enabling the program “to work by comparing
a written text to a large database of writing of the same genre, written in answer to a
specific prompt or rubric" (Hockly, 2019, p. 82). Examples of well-known AWE
programs are Criterion, My Access, Pigai, and e-rater®, which have been investigated
for their potential to foster L2 writing accuracy and development. Studies have explored the efficacy
of direct and indirect WCFs on L2 writing development. Direct WCF refers to the
explicit knowledge about the errors and the immediate correction performed by a
teacher, a peer, and/or a computer by providing error location and giving the correct
answer (i.e., recast), whereas indirect feedback refers to notifying learners that an error
has been made (Sarré et al., 2019; Van Beuningen, 2010; Zhang, 2021). Indirect CF can
be classified as meta-linguistic feedback (highlighting the grammatical rule and providing
examples) or as indirect error location, which indicates the occurrence and
number of errors using codes or symbols, such as an asterisk (Lee, 2017). Other
researchers, such as Lee (2017) and Zhang (2021), have classified WCF into focused
feedback (correcting single types of learners’ errors), mid-focused feedback (correcting
multiple types of learners’ errors), or unfocused feedback (correcting comprehensive
errors committed by learners). The current meta-analysis examines the efficacy of WCF,
regardless of its type, as delivered through CMCF to aid the students' writing
accuracy and development, and how different moderators can impact L2 writing improvement.
Proficiency level is one of the moderators used in L2 research to investigate what types
of learners can benefit from the WCF provided by the teacher or computer (Li et al., 2016;
Ranalli, 2018; Saricaoglu, 2019; Xu & Zhang, 2021). Computer-generated WCF does
not consider the proficiency level factor, the learners' previous educational experience,
the participants' L2 cultural background, or their familiarity with the L2 (Ranalli,
2018). In the literature, only a few studies have attempted to bridge this gap, some of
them investigating how learners with different proficiency levels can process the
computer-generated WCFs (Bitchener & Ferris, 2012; Li et al., 2016; Ranalli, 2018;
Saricaoglu, 2019; Xu & Zhang, 2021). Furthermore, the research indicates that advanced
learners tend to benefit more considerably from content feedback because this can help
improve their revised drafts, as they are mainly concerned with improving their writing
fluency more than accuracy (Xu & Zhang, 2021). By contrast, beginning students tended to
use WCF to improve their writing accuracy, such as grammar, spelling, and punctuation,
and they were reported to be well motivated to engage with WCF by submitting
several revised drafts (Xu & Zhang, 2021). Li et al. (2016) revealed that low-level
L2 students perceive WCF as useful in polishing grammatical and mechanical errors,
whereas high-level learners have expressed low-perceived usefulness and reported that
their WCFs were formulaic and vague, as they found the feedback related to language
accuracy was out of context of their actual needs (i.e., their needs were beyond L2 form
correction). The research on learners' cognitive processes in L2 writing has found that
beginning students struggle to write fluently owing to a lack of
linguistic resources, encounter great difficulty in writing, and are preoccupied
with polishing mechanical errors; by contrast, experienced writers tend
to focus on improving their L2 writing content (Barkaoui, 2016; Mohsen, 2021; Révész
et al., 2019). Indeed, more studies are needed to robustly answer the questions that
instructors and pedagogues may raise: Who would benefit from CMCF? How can CMCF
address the needs of L2 learners with different proficiency levels?
Methods
Design
The author followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses
(PRISMA) guidelines (2020 version, as reported by Page et al., 2021) in the current meta-analysis.
Literature Search
Several steps have been undertaken to identify the preliminary studies that tackle the
target scope of the current meta-analysis:
(a) The author consulted three databases covering the studies on SLA. These
databases are Scopus, Educational Information Resource Center (ERIC), and
Clarivate Web of Science (Social Science Citation Index). These databases
permit researchers to retrieve full records of the studies, such as titles, key-
words, and abstracts. Many keywords were inputted into these well-known
databases, such as Computer-Corrective feedback, Computer-mediated feed-
back, Criterion, Automated evaluation program, computer-generated feed-
back, My access, e-rater, Writing Roadmap, Write to Learn and Summary
Street error-correction task, automated essay evaluation, automated essay
scoring, and writing evaluation technology, L2 writing, or second language
writing.
(b) The author searched in applied linguistics journals, computer-assisted language
learning (CALL) journals, and educational technology journals, as proposed by
Smith and Lafford (2009). See Appendix A for the details.
(c) A manual search was conducted in Google Scholar to check whether the search
was comprehensive and whether any studies had been missed that were not covered
by the target journals and the consulted databases. Google Scholar is a generic
database that covers journals that are not indexed in ERIC, Scopus, or Web of
Science (Vitta & Al-Hoorie, 2020).
(d) I consulted references and bibliographies from articles on systematic reviews
and meta-analysis studies (e.g., Bahari, 2021; Kang & Han, 2015; Stevenson &
Phakiti, 2014; Strobl et al., 2019).
A study was included in the meta-analysis when it met the following criteria:

(a) The target study investigates the effect of any WCF provided by computer
technology during an L2 writing task.
(b) The independent variable should be any type of WCF provided by computer
technology to aid L2 writing, either for L2 grammatical accuracy or writing
fluency or both.
(c) A study must hold a control group that either received peer or teacher’s
corrective feedback or did not receive any feedback.
(d) The target study should be reported in English.
(e) A study should contain an experimental group in which a type of WCF was
manipulated and compared with a control group, such as a group with peers
and/or teachers’ WCF or zero feedback.
(f) If the article abstract does not contain sufficient information about the study
design, then the full article should be consulted.
Articles were excluded from the meta-analysis pool when one of the following
criteria applied:
(a) A study examined CMCF, but no control group was maintained. Some studies
reported a case study group or contained experimental studies for different
types of WCFs provided by computer.
(b) Means and SDs were not reported. Some studies reported only frequencies and
percentiles.
The literature search, which was not restricted by timespan, was concluded in
December 2020. The search yielded 1128 reports across all of the research
outlets checked. Only 14 studies that recruited L2 participants (N = 1220)
and could meet the inclusion criteria were selected. These studies are marked with an
asterisk (*) in the reference list. Figure 1 illustrates the literature search process.
Adopted Technology. Two categories were identified for this moderator. If the study used
AWE, a type of technology designed with AI, then it was coded as "AWE"; if the study
used any other technology, then it was coded as "non-AWE."
Task Type. Two categories were identified for the task type moderator. Writing fluency
refers to WCF that tackles the writing content, such as coherence, structural organi-
zation, and lexical appropriateness. Writing accuracy refers to form, such as spelling,
grammatical issues, and punctuation.
Learners' TL Proficiency. Learners' target language proficiency level was used either as
an independent variable or as a covariate in the studies that passed the inclusion criteria.
Learners’ target language proficiency level was coded as one of the following three
levels: beginners, intermediate, and advanced. The code was determined on the basis of
the participants’ background information, as provided in the primary studies. The
original labels used by the researchers to classify the participants into different levels
were retained, and no inferences were made about this feature.
Two coders were involved in coding the studies. When a disagreement occurred
between the coders, it was resolved through consensus. The inter-rater reliability was .90.
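For illustration, inter-rater agreement of this kind is often quantified with Cohen's kappa, which corrects raw agreement for chance. The sketch below uses hypothetical moderator codes for ten studies, not the study's actual coding data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters who coded the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement is derived from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical "AWE" vs "non-AWE" codes assigned by two coders.
a = ["AWE", "AWE", "non-AWE", "AWE", "non-AWE", "non-AWE", "AWE", "non-AWE", "AWE", "AWE"]
b = ["AWE", "AWE", "non-AWE", "AWE", "non-AWE", "AWE", "AWE", "non-AWE", "AWE", "AWE"]
print(round(cohens_kappa(a, b), 2))  # 9/10 raw agreement yields kappa of 0.78
```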
Interpretation of Effect Size. The data were analyzed using Comprehensive Meta-Analysis
(version 3). The Q statistic, which is used for significance
testing across several mean values, was used to determine the heterogeneity
among the sampled study properties. In addition, the
moderator analyses allowed for the determination of how different factors may affect the observed effects.
Moreover, the I-squared statistic was used to show that the variation was not due to
chance but rather to the heterogeneity of the sample (Higgins et al., 2003). A low I-squared
value indicates non-significant variance, whereas an increasing value indicates
heterogeneity: a value of 25% represents a low I-squared statistic, 50% a medium one,
and 75% a high one. In the
analysis, the confidence interval (CI) of 95% was used to test the statistical trust-
worthiness of the individual and averaged effect sizes. If the CIs include a zero, then the
calculated effect size may be due to chance; it may also be that the true effect size is zero
and thus not trustworthy. The random effect size was selected in this meta-analysis
because the random effect model entails a relatively strong conceptual motivation. A
fixed effect model assumes that the study effects are homogeneous, or the samples have
only a single population effect size. By contrast, the random effect model directly
estimates heterogeneity as a variance estimate (Oswald & Plonsky, 2010).
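The quantities described above can be illustrated with a short sketch. The code below pools hypothetical standardized mean differences (not the study's data) under a DerSimonian-Laird random-effects model and reports the pooled d, its 95% CI, Q, and I-squared.

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I-squared."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Between-study variance (tau^2), then random-effects weights.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1 / (v + tau2) for v in variances]
    d_re = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    ci = (d_re - 1.96 * se, d_re + 1.96 * se)           # zero inside the CI => effect not trustworthy
    return d_re, ci, q, i_squared

d, ci, q, i2 = random_effects_pool([0.4, 1.1, 0.8, 1.5], [0.05, 0.08, 0.04, 0.10])
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), Q = {q:.2f}, I-squared = {i2:.1f}%")
```

With these illustrative inputs, the CI lower bound stays above zero and I-squared falls in the "medium to high" band discussed above.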
Publication Bias
A funnel plot was created (Figure 2) in this meta-analysis to ascertain whether
availability bias was present (i.e., whether the retrieved studies are skewed toward
significant results). In a funnel plot, studies with large sample sizes, given their small sampling
error and high precision values, appear towards the top of the graph and tend to cluster
near the mean effect size. The studies with small sample sizes have greater sampling
error and lower precision; thus, they tend to appear towards the bottom of the graph. If
no availability bias is found, then the studies will be symmetrically distributed around
the mean. If availability bias is present, the small studies will be concentrated on the
right side of the mean. The funnel plot of this meta-analysis shows the following
patterns: First, the larger sample studies (those with higher precision values) are
generally evenly distributed around the mean and appear towards the upper part of the
funnel. Second, the effect sizes at the bottom of the plot are likewise fairly
evenly distributed around the mean.
This trend in the current meta-analysis indicates a normal distribution of the mean
values. The funnel plot also shows some dots on the right side of the aggregated mean
values because few studies have large effect sizes. However, in general, the funnel
plot presents a symmetrical distribution around the mean, which indicates the absence
of publication bias.
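The article relies on visual inspection of the funnel plot; a complementary numeric check sometimes used for funnel asymmetry is Egger's regression test, in which the standardized effect is regressed on precision and an intercept far from zero signals asymmetry. The sketch below is illustrative only, with hypothetical data rather than the studies analyzed here.

```python
def egger_intercept(effects, standard_errors):
    """Least-squares fit of standardized effect (d/SE) on precision (1/SE).
    An intercept near zero is consistent with a symmetric funnel plot."""
    y = [d / se for d, se in zip(effects, standard_errors)]
    x = [1 / se for se in standard_errors]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
            sum((xi - mean_x) ** 2 for xi in x)
    return mean_y - slope * mean_x  # the regression intercept

# Hypothetical symmetric data: effects scatter evenly around d = 1.0,
# with smaller studies (larger SEs) no more extreme than larger ones.
effects = [1.2, 0.8, 1.1, 0.9, 1.0]
ses = [0.30, 0.30, 0.15, 0.15, 0.10]
print(round(egger_intercept(effects, ses), 2))  # intercept near zero: no asymmetry
```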
Results
A total of 22 effect sizes from 14 studies (highlighted with an asterisk in the list of
references) were analyzed, including the effect size, the standard error, and the 95% CI
of each effect size. These studies involved 1220 participants.
Figure 2. Availability bias: Funnel plot of precision by standard difference in mean values.
Table 1. Overall effect of CMCF: k, point estimate, standard error, 95% confidence
interval (lower and upper limits, 2-tail), z-value, p-value, and heterogeneity
statistics (Q-value, df, p-value, I-squared, mean d).
Table 2 reports the moderator analyses for the contextual variables. The CIs of many subgroups in this
meta-analysis seldom overlapped, indicating statistically significant differences
between their effects.
The first moderator was the type of technology used in evaluating writing skills. The
technology types used in the primary studies were categorized into two types: AWE and
non-AWE. The results indicate that the use of non-AWE interventions produced
substantially larger effects than the ones that used AWE. As shown in Table 2, the effect
size is significantly larger (d = 1.44) for non-AWE treatments compared with AWE
treatments (d = .58).
The second moderator was the type of task in writing skills. The task types were
categorized into two in this study: writing fluency (writing improvement) and writing
accuracy (grammatical competence). The results indicate that the use of CMCF to improve L2 writing skills has
large effect sizes for both task types. As shown in Table 2, the effect size is large for
writing fluency (d = 1.25) and for writing accuracy (d = 1.17).
The third moderator was learners’ target language proficiency level. The proficiency
levels were categorized into three groups in this study: beginner, intermediate, and
advanced. As shown in Table 2, the effect sizes for beginner and intermediate learners
are large (d = 1.03 and 1.80, respectively), whereas the effect size is small (d = .27) for
advanced learners. In addition, the CIs were positive in the three categories, suggesting
that the language learners who used computer feedback to improve their writing skills
performed better than those who did not use the technology.

Table 2. Moderator analyses.

| Moderators | Categories | k | Lower limit | Upper limit | z-value | p-value | Q-value | p-value | I-squared | Mean (d) |
|---|---|---|---|---|---|---|---|---|---|---|
| Technology adopted | AWE | 6a | -.204 | .812 | 1.173 | .241 | 30.52 | 0 | 83.62 | 0.58 |
| | Non-AWE | 16a | .641 | 1.601 | 4.57 | .000 | 122.4 | 0 | 87.74 | 1.44 |
| Task type | Writing fluency | 11 | .487 | 1.561 | 3.73 | .000 | 95.53 | 0 | 89.53 | 1.25 |
| | Writing accuracy | 11 | .168 | 1.094 | 2.673 | .008 | 56.32 | 0 | 82.24 | 1.17 |
| L2 learners' proficiency | Beginner | 3 | .108 | 1.902 | 2.197 | .028 | 15.82 | 0 | 87.36 | 1.03 |
| | Intermediate | 11 | .753 | 2.148 | 4.076 | .000 | 107.31 | 0 | 90.68 | 1.80 |
| | Advanced | 5 | .179 | .502 | 3.799 | .000 | 3.57 | .466 | .0000 | 0.27 |
| | NA | 3 | -.879 | 1.364 | .424 | .671 | 18.36 | 0 | 89.11 | 0.77 |

a. Titles of these studies are reported in Appendix B.
Discussion
This meta-analysis study explores the overall effect of computer-generated WCF to
show the learners’ errors as a way of improving their L2 writing in their subsequent
revised drafts. This research also seeks to understand if moderators, such as task type,
learners’ proficiency level, and type of adopted technology, can mitigate the general
effect of CMCF on L2 writing outcomes. The results of this meta-analysis demonstrate an
overall large effect of CMCF over the traditional WCF, indicating that L2 learners are
supported using CMCF to aid their writing development. The findings of this meta-
analysis are consistent with the meta-analysis of Kang and Han (2015), revealing that
the WCF’s overall effect on L2 grammatical accuracy is moderate to large. However,
the magnitude of effect sizes in this study is different from those reported in similar
meta-analyses on WCF. For example, Wisniewski et al. (2020) reported a medium
effect size (d = 0.48) for the feedback on student learning. The findings of the current
study demonstrate the large effect size of CMCF over the traditional corrective
feedback for developing L2 writing accuracy and fluency. A plausible interpretation for
these positive findings is that CMCF assists students in noticing the feedback provided
by the computer, helps them identify their errors (content and form), and consequently enables them to avoid these
errors in their subsequent writing drafts (Karim & Nassaji, 2020; Li et al., 2016). These
findings also corroborate interactionist theory, which holds that learners treat CMCF
as scaffolding: they can interact with the feedback provided by a computer
and negotiate meaning until they ensure that their subsequent
writing attempts are correct, thereby leading to L2 writing automaticity (Ellis, 2009;
Heift & Hegelheimer, 2017; Long, 1996). Learners who encounter salient language
errors can attend to the language input, and the learning becomes automatized as they
attend to these language errors in their subsequent learning modules (Krashen, 1981).
The positive findings on CMCF in this meta-analysis align with other
findings in the literature, demonstrating the positive impact of CMCF on enhancing
L2 writing improvement by identifying the students’ weaknesses in different aspects of
L2 writing, helping them to address these errors in their subsequent revisions or drafts
(Karim & Nassaji, 2020; Sarré et al., 2019; Zhang, 2021).
The second research question addresses the effects of the three moderator variables
on the use of CMCF to improve writing. In terms of the technology used in the
treatments, the findings suggest that both types of CMCF yield significant advantages
over traditional feedback. However, the non-AWE interventions significantly
outperformed the AWE interventions in improving L2 writing fluency and accuracy. This
result can be attributed to the AWE studies that have been included in this meta-analysis
pool; that is, the AWE studies examined L2 development, whereas the non-AWE studies
investigated L2 writing accuracy, except for one that focused on L2 writing
fluency. Another reason is the small number of aggregated effect sizes (k = 6) for the
AWE studies analyzed in this meta-analysis; the small number may have skewed the
results. By contrast, the number of aggregated effect sizes for non-AWE studies is high
(k = 16), thus yielding different effect sizes.
As for task type as a moderator, the results indicate that CMCF can significantly
improve L2 grammatical competence (accuracy) more than L2 writing fluency. Clearly,
micro-level errors can be accurately polished by a computer, as it is easy for technology
to identify these types of errors and give indirect or direct grammatical, orthographical,
and punctuation feedback. However, the use of CMCF to handle content errors seems to
be a difficult task; incidentally, these kinds of errors can be accurately identified by
human teachers. Although rapid technological advancements enhanced by AI and NLP
can simulate the work of human teachers, such tools seem to assist human teachers
rather than replace their corrective feedback.
Advanced learners tend to benefit less from receiving CMCF, as demonstrated
in the current study. By contrast, beginning and intermediate students have shown great
progress in their learning performance as a result of attending to CMCF. A
possible reason is that beginning and intermediate learners lack automaticity in
L2 competence, and they focus on micro-level corrective feedback, such as on
grammar, spelling, and punctuation, provided by computers (Barkaoui, 2016; Révész
et al., 2019). In contrast to Kang and Han's (2015) meta-analysis, the current study
found that advanced learners find CMCF less useful, whereas beginning-level
students typically engage with computer-generated feedback. This difference can
be ascribed to the focus of Kang and Han (2015) who examined the overall effect on
language accuracy that matched the low-level students’ concerns because they lacked
writing automaticity. Nonetheless, the current meta-analysis is in line with the findings
of Xu and Zhang (2021), who showed that beginning learners tend to be highly engaged
in CMCF interaction, are keen to address the errors suggested by
computers, and pay close attention to improving their micro-level revisions in successive
drafts. By contrast, advanced learners may not show interest in addressing the low-level
corrective feedback suggested by computers, as they tend to be more occupied with
high-level corrective feedback to improve the content and discourse level of their
writing (Xu & Zhang, 2021). This result matches the findings of Révész et al. (2019)
and Mohsen (2021) who reported that experienced writers have mastered their writing
accuracy, and they may overlook CMCF concerns related to mechanics because their
automaticity in writing form is higher than that of their counterparts. As a result, their
working memory resources are free to focus on the high-cognitive level, such as
generating and organizing ideas, maintaining cohesion and coherence, and keeping the
idea flow from one section to another (Barkaoui, 2016; Mohsen & Qassem, 2021).
Conclusion
Whether CMCF can enhance L2 writing accuracy and fluency has been a debatable
issue among scholars for nearly two decades; this meta-analysis contributes to the
literature by summarizing the quantitative findings on the question. Previous meta-analyses
focused on how corrective feedback generated by teachers or peers can aid L2 writing
in terms of language accuracy. The relevant technology was first incorporated in
L2 learning to aid language accuracy, as it is easy for designers to set algorithms to
show language form errors and help learners identify their grammatical,
orthographic, and punctuation faults. Therefore, the majority of software programs
were constructed to improve language accuracy; as a result, many studies have
examined the potential of the technology to aid L2 learners’ grammatical competence.
Newer advancements in modern technology also have the potential to aid language
fluency to a certain extent. The
technological advancements that manipulate AI and NLP have helped designers to
determine how language content can be improved by developing new programs to
address the aforementioned gap. The current meta-analysis found that
CMCF has a large effect on L2 writing accuracy and fluency. As expected, the effect on computer-detectable
language accuracy is significantly higher than that on CMCF-aided language
fluency. The type of learners exposed to CMCF can determine the type of task to
be aided by CMCF. The current findings suggest that advanced learners can benefit
more from feedback for improving language fluency, whereas beginning and
intermediate learners utilize the corrective feedback related to language accuracy
improvement because they are more concerned with polishing their errors. As for the
efficiency of adopted technology as a moderator, the results suggest that AWE-
manipulated studies obtained a medium overall effect in writing scores. By contrast,
for studies that manipulated non-AWE (corrective feedback for micro-skill level of
L2 writing), the effect size was large. The difference indicates that even if AWE can
simulate the error detection of human teachers at the macro-skill level, the tool cannot
entirely replace human teachers.
Pedagogical Implications
In light of the current study findings, several pedagogical implications can be
highlighted. First, the teachers' role is crucial in flagging the students' macro-skill errors, as
technology cannot fully detect all of the students' errors. Therefore, teachers'
intervention is necessary alongside the feedback provided by technology (Mohsen &
Alshahrani, 2019). Unlike advanced learners who show less interest in CMCF
interaction, beginning learners are more interested in attending to WCF as a way of
improving their language accuracy, particularly by addressing CMCF in their re-
vised drafts. These scenarios will require instructors to consider individual dif-
ferences when manipulating AWE or non-AWE in their students’ curriculum.
Second, teachers should monitor the students’ learning processes during L2 writing
involvement and elicit their difficulties when attending to computer-generated
feedback. As of this writing, scholars have yet to understand how much effort
learners invest in a computer-aided writing task and to what extent they can interact
with and address the CMCF.
Appendix A
CALICO https://journals.equinoxpub.com/CALICO
CALL-EJ http://callej.org/
Computer-Assisted Language Learning https://www.tandfonline.com/toc/ncal20/current
JALT Journal https://jalt-publications.org/jj
Language Learning and Technology https://www.lltjournal.org/
ReCALL https://www.cambridge.org/core/journals/recall
Appendix B
Acknowledgments
The author would like to thank the editor and three anonymous reviewers for their valuable
comments during the peer review stage of this article. My sincere appreciation goes to Dr. Hassan
Mahdi for his insightful views on the statistical analysis. The author thanks the Deanship of
Scientific Research at Najran University for funding this study through a grant research code
(NU/-/SEHRC/10/941).
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship,
and/or publication of this article: This research is supported by Najran University (NU/-/SEHRC/
10/941).
ORCID iD
Mohammed Ali Mohsen https://orcid.org/0000-0003-3169-102X
References
AbuSeileek, A. F. (2013). Using track changes and word processor to provide corrective feedback
to learners in writing. Journal of Computer Assisted Learning, 29(4), 319–333. https://doi.
org/10.1111/jcal.12004.
AbuSeileek, A., & Abualsha’r, A. (2014). Using peer computer-mediated corrective feedback to
support EFL learners’ writing. Language Learning & Technology, 18(1), 76–95. http://llt.
msu.edu/issues/february2014/abuseileekabualshar.pdf.
Al-Olimat, S. I., & AbuSeileek, A. F. (2015). Using computer-mediated corrective feedback
modes in developing students’ writing performance. Teaching English with Technology,
15(3), 3–30.
Mohsen 21
Hosseini, S. B. (2012). Asynchronous computer-mediated corrective feedback and the correct use
of prepositions: Is it really effective? Turkish Online Journal of Distance Education, 13(4),
95–111.
Huang, S., & Renandya, W. A. (2018). Exploring the integration of automated feedback among
lower-proficiency EFL learners. Innovation in Language Learning and Teaching, 14(1),
15–26. https://doi.org/10.1080/17501229.2018.1471083.
Kang, E., & Han, Z. (2015). The efficacy of written corrective feedback in improving L2 written
accuracy: A meta-analysis. The Modern Language Journal, 99(1), 1–18. https://doi.org/10.
1111/modl.12189.
Karim, K., & Nassaji, H. (2020). The revision and transfer effects of direct and indirect
comprehensive corrective feedback on ESL students’ writing. Language Teaching Re-
search, 24(4), 519–539. https://doi.org/10.1177/1362168818802469.
Krashen, S. (1981). Second language acquisition and second language learning. Pergamon
Press.
Lai, Y.-H. (2010). Which do students prefer to evaluate their essays: Peers or computer program.
British Journal of Educational Technology, 41(3), 432–454. https://doi.org/10.1111/j.1467-
8535.2009.00959.x.
Lee, L. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer.
Li, S., Zhu, Y., & Ellis, R. (2016). The effects of the timing of corrective feedback on the
acquisition of a new linguistic structure. The Modern Language Journal, 100(1), 276–295.
https://doi.org/10.1111/modl.12315.
Link, S., Mehrzad, M., & Rahimi, M. (2020). Impact of automated writing evaluation on teacher
feedback, student revision, and writing improvement. Computer Assisted Language
Learning. https://doi.org/10.1080/09588221.2020.1743323.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. SAGE Publications.
Long, M. H. (1996). The role of the linguistic environment in second language acquisition. In
W. Ritchie, & T.K. Bhatia (Eds.), Handbook of Second Language Acquisition
(pp. 413–468). Academic Press. https://doi.org/10.1016/b978-012589042-7/50015-3.
Mohsen, M. A. (2021). L1 versus L2 writing processes: What insight can we obtain from a
keystroke logging program? Language Teaching Research. https://doi.org/10.1177/
13621688211041292.
Mohsen, M. A., & Alshahrani, A. (2019). The effectiveness of using a hybrid mode of automated
writing evaluation system on EFL students’ writing. Teaching English with Technology,
19(1), 118–131.
Mohsen, M. A., & Qassem, M. (2021). Analyses of L2 learners’ text writing strategy: Process-
oriented perspective. Journal of Psycholinguistic Research, 49(3), 435–451. https://doi.org/
10.1007/s10936-020-09693-9.
Oswald, F. L., & Plonsky, L. (2010). Meta-analysis in second language research: Choices and
challenges. Annual Review of Applied Linguistics, 30, 85–110. https://doi.org/10.1017/
s0267190510000115.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., &
Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting
systematic reviews. BMJ: British Medical Journal. https://doi.org/10.1136/bmj.n71.
Polio, C., Fleck, C., & Leder, N. (1998). “If I only had more time:” ESL learners’ changes in
linguistic accuracy on essay revisions. Journal of Second Language Writing, 7(1), 43–68.
https://doi.org/10.1016/s1060-3743(98)90005-4.
Ranalli, J. (2018). Automated written corrective feedback: How well can students make use of it?
Computer Assisted Language Learning, 31(7), 653–674. https://doi.org/10.1080/09588221.
2018.1428994.
Révész, A., Michel, M., & Lee, M. (2019). Exploring second language writers’ pausing and
revision behaviors: A mixed-methods study. Studies in Second Language Acquisition,
41(3), 605–631.
Saricaoglu, A. (2019). The impact of automated feedback on L2 learners’ written causal ex-
planations. ReCALL, 31(2), 189–203. https://doi.org/10.1017/s095834401800006x.
Sarré, C., Grosbois, M., & Brudermann, C. (2019). Fostering accuracy in L2 writing: Impact of
different types of corrective feedback in an experimental blended learning EFL course.
Computer Assisted Language Learning. https://doi.org/10.1080/09588221.2019.1635164.
Sauro, S. (2009). Computer-mediated corrective feedback and the development of L2 grammar.
Language Learning & Technology, 13(1), 96–120. http://llt.msu.edu/vol13num1/sauro.pdf.
Schmidt, R. W. (1990). The role of consciousness in second language learning. Applied Lin-
guistics, 11(2), 129–158. https://doi.org/10.1093/applin/11.2.129.
Shintani, N., & Aubrey, S. (2016). The effectiveness of synchronous and asynchronous written
corrective feedback on grammatical accuracy in a computer-mediated environment. The
Modern Language Journal, 100(1), 296–319. https://doi.org/10.1111/modl.12317.
Smith, B., & Lafford, B. A. (2009). The evaluation of scholarly activity in computer-assisted
language learning. Modern Language Journal, 93(1), 868–883. https://doi.org/10.1111/j.
1540-4781.2009.00978.x.
Stevenson, M., & Phakiti, A. (2014). The effects of computer-generated feedback on the quality
of writing. Assessing Writing, 19, 51–65. https://doi.org/10.1016/j.asw.2013.11.007.
Storch, N. (2010). Critical feedback on written corrective feedback research. International
Journal of English Studies, 10(2), 29–46. https://doi.org/10.6018/ijes/2010/2/119181.
Strobl, C., Ailhaud, E., Benetos, K., Devitt, A., Kruse, O., Proske, A., & Rapp, C. (2019). Digital
support for academic writing: A review of technologies and pedagogies. Computers &
Education, 131, 33–48. https://doi.org/10.1016/j.compedu.2018.12.005.
Swain, M. (2004). Verbal protocols: What does it mean for research to use speaking as a data
collection tool? In M. Chaloub-Deville, C. Chapelle, & P. Duff (Eds.), Inference and
generalizability in applied linguistics: Multiple research perspectives. John Benjamins.
Tang, J., & Rich, C. S. (2017). Automated writing evaluation in an EFL setting: Lessons from
China. JALT CALL Journal, 13(2), 117–146. https://doi.org/10.29140/jaltcall.v13n2.215.
Truscott, J. (1996). The case against grammar correction in L2 writing classes. Language
Learning, 46(2), 327–369. https://doi.org/10.1111/j.1467-1770.1996.tb01238.x.
Van Beuningen, C. (2010). Corrective feedback in L2 writing: Theoretical perspectives, em-
pirical insights, and future directions. International Journal of English Studies, 10(2), 1–27.
https://doi.org/10.6018/ijes/2010/2/119171.
Vitta, J. P., & Al-Hoorie, A. H. (2020). The flipped classroom in second language learning: A
meta-analysis. Language Teaching Research. https://doi.org/10.1177/1362168820981403.
24 Journal of Educational Computing Research 0(0)
Wang, Y.-J., Shang, H.-F., & Briody, P. (2013). Exploring the impact of using automated writing
evaluation in English as a foreign language university students’ writing. Computer Assisted
Language Learning, 26(3), 234–257. https://doi.org/10.1080/09588221.2012.655300.
Ware, P. (2014). Feedback for adolescent writers in the English classroom: Exploring pen-and-
paper, electronic, and automated options. Writing & Pedagogy, 6(2), 223–249. https://doi.
org/10.1558/wap.v6i2.223.
Wisniewski, B., Zierer, K., & Hattie, J. (2019). The power of feedback revisited: A meta-analysis of
educational feedback research. Frontiers in Psychology, 10, 3087. https://doi.org/10.3389/
fpsyg.2019.03087.
Xu, J., & Zhang, S. (2021). Understanding AWE feedback and English writing of learners with
different proficiency levels in an EFL classroom: A sociocultural perspective. The Asia-
Pacific Education Researcher. https://doi.org/10.1007/s40299-021-00577-7.
Yeh, S.-W., & Lo, J.-J. (2009). Using online annotations to support error correction and corrective
feedback. Computers & Education, 52(4), 882–892. https://doi.org/10.1016/j.compedu.
2008.12.014.
Zhai, N., & Ma, X. (2021). Automated writing evaluation (AWE) feedback: A systematic in-
vestigation of college students’ acceptance. Computer Assisted Language Learning. https://
doi.org/10.1080/09588221.2021.1897019.
Zhang, T. (2021). The effect of highly focused versus mid-focused written corrective feedback on
EFL learners’ explicit and implicit knowledge. System, 99, 102493. https://doi.org/10.1016/
j.system.2021.102493.
Zhang, Z., & Zhang, Y. (2018). Automated writing evaluation system: Tapping its potential for
learner engagement. IEEE Engineering Management Review, 46(3), 29–33. https://doi.org/
10.1109/emr.2018.2866150.