
Computers & Education 131 (2019) 33–48


Digital support for academic writing: A review of technologies and pedagogies☆
Carola Strobl a,∗, Emilie Ailhaud b, Kalliopi Benetos c, Ann Devitt d, Otto Kruse e, Antje Proske f, Christian Rapp g

a University of Antwerp, Department of Translators and Interpreters, Prinsstraat 18, 2000, Antwerpen, Belgium
b University of Lyon 2, Laboratory Dynamique du Langage, 14 avenue Berthelot, 69007, Lyon, France
c University of Geneva, TECFA Educational Technologies Unit, Bd du Pont-d'Arve 42, 1205, Genève, Switzerland
d Trinity College Dublin, the University of Dublin, School of Education, Dublin 2, Ireland
e Zurich University of Applied Sciences, Language Competence Centre, Theaterstr. 17, 8401, Winterthur, Switzerland
f TU Dresden, Psychology of Learning and Instruction, Zellescher Weg 17, 01062, Dresden, Germany
g ZHAW, School of Management and Law, Center for Innovative Teaching and Learning, St.-Georgen-Platz 2, 8400, Winterthur, Switzerland

ARTICLE INFO

Keywords:
Distance education and telelearning
Evaluation of CAL systems
Interactive learning environments
Pedagogical issues

ABSTRACT

This paper presents a review of the technologies designed to support writing instruction in secondary and higher education. The review covers tools to support first and second language writers and focuses on instructional affordances based on their technological specifications. Previous studies in this field centred on Automated Writing Evaluation, Automated Essay Scoring and the rarer Intelligent Tutoring Systems, addressing mainly essay writing needs in US secondary school instruction. With technology-enhanced learning becoming more ubiquitous and widespread, new technologies and tools catering to a broader range of genres, pedagogical settings, and approaches are emerging. We present a systematic analysis of 44 tools across 26 quantitative and qualitative features related to writing processes, pedagogical approaches, feedback modalities and technological specifications. The results uncover an imbalance of available tools with regard to supported languages, genres, and pedagogical focus. While a considerable number of tools support argumentative essay writing in English, other academic writing genres (e.g., research articles) and other languages are under-represented. With regard to the pedagogical focus, automated support for revising on the micro-level targeting factual knowledge (e.g., grammar, spelling, word frequencies) is well represented, whereas tools that support the development of writing strategies and encourage self-monitoring to improve macro-level text quality (e.g., argumentative structure, rhetorical moves) are infrequent. By mapping the state of the art and specifying direction for further research and development, this review is of interest to researchers, policymakers, tool developers, and practitioners of writing instruction in higher and secondary education institutions.


☆ Author's note: The co-authors contributed equally to the research underlying the paper and/or the manuscript. Therefore, their names are listed in alphabetical order.
∗ Corresponding author.
E-mail addresses: carola.strobl@uantwerpen.be (C. Strobl), emilieailhaud@gmail.com (E. Ailhaud), Kalliopi.Benetos@unige.ch (K. Benetos), DEVITTAN@tcd.ie (A. Devitt), otto.kruse@gmx.net (O. Kruse), antje.proske@tu-dresden.de (A. Proske), Christian.Rapp@gmx.net (C. Rapp).

https://doi.org/10.1016/j.compedu.2018.12.005
Received 19 September 2018; Received in revised form 13 December 2018; Accepted 15 December 2018
Available online 21 December 2018

1. Introduction

1.1. Digital writing aids and instructional approaches to writing

In this paper, we explore a new generation of mostly web-based approaches that connect writing technology with richer instructional support for writers than before. We build on previous reviews, such as that of Allen, Jacovina, and McNamara (2016), who proposed a classification of three different computer-supported help functions for writers: Automated Writing Evaluation (AWE), Automated Essay Scoring (AES) and Intelligent Tutoring Systems (ITS). These have to be seen against the background of a long line of technological developments, starting in the 1980s with the first word processors on personal computers, such as WordStar and WordPerfect, which made this technology available to everyone.
Arguably, the introduction of word processors has been the most sustainable part of the digitization of writing (Mahlow & Dale, 2014; Sharples & Pemberton, 1990), followed by subsequent innovations extending their functionality and providing additional assistance to writers, such as formatting devices, pagination, spelling and grammar checkers, thesauri and synonym finders, search and replace, tracking and commentary functions, outline tools, and index generators. When network and internet technologies entered the field, the interactivity of writing environments increased, allowing for more communication, feedback and collaboration among writers. With the most recent developments in cloud computing, online services for writers have become available that draw on technologies from corpus analysis, Natural Language Processing (NLP) and Latent Semantic Analysis (LSA), offering linguistic help to writers through various kinds of automated text evaluation. These new opportunities connect writing technology with writing provision in the form of individualized process instruction, focused linguistic support, automated feedback, and intelligent tutoring.
In parallel to this technological development, writing has become a major teaching field at all levels of education, constituting a major component of teaching and learning across the disciplines. The development of new learning technologies is at least partially driven by the need to individualize the teaching of writing in reaction to the high number of disciplines, skills, contexts, genres, and audiences involved. The traditional theories of writing, such as the process approach introduced by Emig (1971) and the cognitive process models of writing in the tradition of Hayes and Flower (Flower & Hayes, 1981; Hayes & Flower, 1980), still shape our understanding of the individual strategies writers can follow (Pritchard & Honeycutt, 2006). The more recent socio-cognitive models stress such qualities of writing as its social connectivity, the process of meaning making, the interaction with readers and the development of "voice" (Bazerman, 2016). The new technologies, however, do not simply support teaching but change the nature of the writing process itself as well as the ways writing can be learned and taught.
Much writing research is devoted to instructional models of writing (Graham, 2006; MacArthur, 2016), and feedback for writers
(Beach & Friedrich, 2006; Jackel, Pearce, Radloff, & Edwards, 2017; Nelson & Schunn, 2009). Writing in educational contexts is
strongly influenced by the disciplinary differences in epistemological and communicative style (Langer & Applebee, 1987; Walvoord
& McCarthy, 1990; Poe, Lerner, & Craig, 2010), as well as by the high number of different genres used for teaching (Nesi & Gardner,
2012) which account for the great diversity of purposes, procedures, and skills involved in student writing.
Approaches to improve writing proficiency have been widely discussed and assessed in the past, for instance, by Graham and Perin (2007) as well as Kellogg and Raulerson (2007). Allen et al. (2016) noted that the most effective support measures for novice writers seem to be strategy instruction, extended practice and individualized feedback. These measures, however, require a substantial amount of added time and cost that is often not available in institutional settings (Allen et al., 2016). Consequently, questions arose as to what extent writing instruction and training can be supported electronically (Allen et al., 2016; Crossley & McNamara, 2016). Scaling as a means of leveraging resources became a desirable aim of writing tool construction (Rapp & Kauf, 2018). Various types of writing technologies have been developed to respond to these needs by using, for example, AES to reduce teachers' grading workload and AWE to provide not only summative, but also formative feedback (Foltz, Streeter, Lochbaum, & Landauer, 2013).
While promising, the rise of AES and AWE also triggered debates about issues such as the reliability of scoring engines (Attali, 2013) and their pedagogical value and desirability (Li, Link, & Hegelheimer, 2015). Cotos (2015) warns against a potential misuse of AES and AWE systems as substitutes for human instruction and formative feedback, pointing out the potential of rhetorical feedback to scaffold thinking about writing rather than generalized feedback based on standards. This focus on scaffolding also characterises ITS (Steenbergen-Hu & Cooper, 2014), although these systems seem to offer more adaptability to the learning progress of students. While ITS are spreading in other disciplinary fields, few exist in the area of writing support (e.g., W-Pal) (Roscoe, Allen, Weston, Crossley, & McNamara, 2014). The literature on writing instruction and on effective assessment for learning practices more broadly converges in specifying the following as critical to student learning: ample opportunities for practice and high-quality, timely feedback which clarifies what good performance is and feeds forward into future work (Nicol & Macfarlane-Dick, 2006; Sambell, McDowell, & Montgomery, 2013; Yang & Carless, 2013).

1.2. Contribution of this study to the field

In contrast to earlier overview studies, the present review examines a much broader range of technologies and pedagogies. This is due to the bottom-up approach that was adopted to expand the scope in terms of educational technologies, approaches and supported languages. Given the rapid evolution in the field of computer-enhanced writing technology, a critical review of existing support instruments focusing on their instructional affordances is of interest to both practitioners and researchers in the area. The present review looks at existing technologies through an educational lens, sketching their impact on our understanding of the teaching of writing today as well as their potential impact on the design and development of future writing technologies. The term tool in this review is used in its broadest sense to refer to writing technologies, regardless of their breadth and scope. According to Peraya (1999), a tool is any device that uses digital technology to mediate
some function of teaching and learning, ranging from a digital learning platform, environment or software and its services, features and components, to the specific functionalities it might provide. According to this definition, the term "tool" in this study may refer to platforms,
programs and apps, or specific functional units within platforms or programs.
The research goals of this review are twofold. First, we aim to provide a snapshot of the variety of currently existing systems, identifying the areas and instructional settings that are well covered. This enables us both to enlarge the existing taxonomy of online writing support systems and to define gaps to be addressed by future research and development. To this end, a comprehensive framework for the evaluation and description of writing support tools was needed. The development of such a framework, which did not previously exist, formed our second research goal. In accordance with our focus on instruction, we based our framework on theoretical approaches regarding human-computer interaction and writing instruction. Special attention was paid to feedback provision, which plays an important role in supporting a process-oriented and thinking-intensive approach to writing instruction.
The article is structured along these two research goals. In the method section, the development of our evaluation framework and the data selection process for the review are described. Then, the results of the review of 44 tools are presented across the 26 features included in the evaluation framework. In the discussion section, an evaluation of both the framework we developed and the breadth of the writing support tools will be presented, with the aim of broadening the taxonomy of writing support tools and formulating future directions for research and development in the field.

2. Materials and methods

2.1. Data collection and selection

As many writing tools are not publicly accessible and only some of them are described in the research literature, we followed a twofold search strategy (see Fig. 1). We systematically scrutinised literature databases and search engines in English, French, Spanish, German, and Dutch (e.g., the PsycINFO database, ERIC, Dialnet, BNTL, Google Scholar, Zotero). This search yielded 62 results. In parallel, an online survey was sent out at the beginning of 2017 to several research communities concerned with writing instruction (European Literacy Network, European Association of Teaching Academic Writing, European Association of Computer Assisted Language Learning, Computer Assisted Language Instruction Consortium, Writing SIG of the European Association of Research on Learning and Instruction), to writing centres and to our individual professional networks. This survey, asking for information about electronic support tools for writers, yielded 76 responses naming 67 different tools. Additional responses were received by email. After eliminating duplicates, we added 27 tools to the list established through the literature search, resulting in 89 tools from the combined search.

Fig. 1. Flowchart of data selection process.


Included in our review were tools and learning environments supporting student writing in higher education, in some cases also including final-year students in secondary education. All digital tools offering instructional or tutorial support for writing were retained, whether or not they were combined with online communication facilities or with automated or personal feedback. As our focus is on higher education, we excluded tools that are intended solely for primary and secondary education. Furthermore, we excluded tools whose sole focus is checking one of the following features: grammar, spelling, style, or plagiarism. Technologies without an instructional focus, such as pure online text editors and tools, platforms or content management systems aiming only at essay scoring, resource provision, or submission management, were also excluded, as were non-interactive tutorials. Applying these exclusion criteria reduced the number of tools to 44, which were retained for the fine-grained analysis of features.

2.2. Coding framework

The 44 tools were analysed using a coding framework with 26 features subdivided into four categories, namely (1) general
specifications, (2) instructional approaches, (3) technical specifications, and (4) feedback specifications. Defining the features relevant to
digitally supporting academic writing is a first step towards a conceptual research framework. Consequently, this review establishes a
coding framework that quantifies analysis results for as many features as possible. To this end, we extended the coding framework
proposed by Allen et al. (2016). To ensure the relevance of the selected features and to address the potential ambiguity introduced by subjective inference, the coding scheme was developed in a three-step iterative process with feedback from all seven members of the research group (Glaser, 1965). The strength of this research group, formed within the European Literacy Network programme for the purpose of this review, lies in its multidisciplinary and multilingual expertise, which was bundled to ensure a broad view of the topic.
The initial coding scheme was established and presented in a grid format within an online collaboration tool. To ensure a common
understanding of the pre-defined classification options, validation help texts were added to the columns of individual features where
deemed necessary. To test the clarity and completeness of the features and classification options in the coding scheme relative to our
research focus, each member of the group independently used the first version of the coding scheme for the analysis of two or three
tools, resulting in a coverage of 20 different tools in this first evaluation round. All comments and amendment suggestions were
gathered and then discussed by the whole group over Skype. As a result, several features and classification option definitions in the
help texts were refined. Also, features that turned out to be uninformative were deleted, e.g. scalability, which is often irrelevant for
web-based environments.
The refined version of the coding framework was then used for the classification of the 44 tools retained for systematic analysis.
First, every group member independently classified 5–10 tools. Then, three sub-groups of two or three members were formed for an
inter-rater agreement process. A second reviewer independently assessed 10% of the tools already classified by a first reviewer in his
or her subgroup. Using the constant comparison approach (Glaser, 1965), all discrepancies and other problems encountered in the
classification procedure were discussed and documented by the subgroups. In a final virtual meeting, these uncertainties were
resolved by the whole group. Afterwards, all group members again checked their original classifications with regard to the discussed
features and categories, making amendments where necessary.
The final coding scheme is presented in Appendix D. We provide detailed descriptions and rationales for the respective coding
features and categories. Parts (1)–(3) of the coding scheme were filled in for all tools (n = 44), while part (4) was filled in only for the
tools that provide feedback to writers (n = 28). The nine features of the general specifications category convey basic objective information that can often be retrieved from the developer's/provider's description, for example on a website or in research papers. This
information allows us to understand the tools' stated use and intent. The eight features of the instructional approaches subcategory
include information about the underlying pedagogical concept. The classification categories draw on established pedagogical frameworks and meta-analyses in the field, including Anderson et al. (2001) and Graham and Perin (2007). Usually, this information
could only be retrieved through an inspection of the tool in use (or, if not available, of a demo version). To support inter-rater
reliability, for all features that required an interpretation by the reviewer and/or background knowledge, help texts and definitions
for data validation were provided. The three features of the technical specifications category relate to information about technologies,
backend data and context of the tool. The feedback specifications category is based upon Narciss' (2013) Interactive Tutoring Feedback
Model. According to this model, at least three facets have to be considered to determine the nature and quality of a feedback message:
scope and function, content, and presentation. We translated this framework into seven features included in our coding framework.
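As an illustration of how such a coding framework can be operationalised for analysis, the following sketch shows one way to record a single tool's classification as structured data. It is not part of the original study materials; all feature and option names are abbreviated, hypothetical examples rather than the full 26-feature scheme.

```python
# Illustrative sketch only: recording one tool's classification against the
# four framework categories. Feature and option names below are abbreviated,
# hypothetical examples, not the authors' actual coding grid.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ToolClassification:
    name: str
    general: Dict[str, str] = field(default_factory=dict)        # e.g., tool category, support language
    instructional: Dict[str, str] = field(default_factory=dict)  # e.g., text level focus, targeted subtask
    technical: Dict[str, str] = field(default_factory=dict)      # e.g., technology used, context
    feedback: Optional[Dict[str, str]] = None                    # filled in only for tools providing feedback

example = ToolClassification(
    name="Example tool",
    general={"tool_category": "AWE", "support_language": "English"},
    instructional={"text_level_focus": "micro", "targeted_subtask": "revising and/or editing"},
    technical={"context": "web-based"},
    feedback={"source": "computer only", "delivery": "upon student request"},
)
```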

2.3. Statistical analyses

Data was gathered and organised with MS Excel, Version 2016. To prepare for statistical analysis, data input by individual reviewers was edited (spelling variants, synonyms, extra words added to a selected category option) or recoded to include new options into the pre-defined classification options. Descriptive and quantitative analyses of the writing tool features were carried out in SPSS, Version 22. To test for influences of features on distributions of outcome variables, chi-square tests were applied. All statistical tests were two-sided; significance was assumed if p < .05. Data was visualized in tables and bar charts. In all cases, results of the chi-square tests were reported and complemented with data visualizations to allow for a transparent judgement of data quality and effect sizes. In a follow-up meeting, all statistical results were discussed and interpreted by the whole group, and implications and future directions for tool development were formulated.
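The chi-square analyses reported below were run in SPSS. Purely as an illustration of the procedure, the following sketch shows how a chi-square test of independence on a contingency table of tool counts could be computed with Python and SciPy; the counts are hypothetical placeholders, not the data of this study.

```python
# Illustrative sketch: chi-square test of independence on a hypothetical
# contingency table of tool counts (rows: text level focus; columns:
# targeted subtask). These counts are placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [1, 18, 3],   # micro focus:        planning/drafting, revising/editing, whole process
    [6,  2, 3],   # macro focus
    [1,  4, 6],   # macro + micro focus
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}, N = {observed.sum()}) = {chi2:.2f}, p = {p:.3f}")
```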
In the following section, the most salient results from these analyses will be described and discussed per main category of the analysis framework, i.e., general and technical background information about the tools, information about the instructional approach to writing, and information about automated feedback provision.


3. Results

Below, the results obtained from the quantitative analysis of the 44 tools will be presented per main section of the framework.
Each subsection starts with an overview table displaying the names of the features (in the header row), the names of the underlying
pre-defined classification options (in the columns below the respective features), ranked by the number of times they have been
selected by the reviewers (in parentheses), or sometimes ranked by logical principles. The text highlights salient results with regard to
individual features, significant relationships between two or more features with appropriate graphic representations and any additional information noted by reviewers where relevant.

3.1. General specifications

The general specifications section contains information about the target public, language of instruction and support, general tool
category and aims (writing genre, writing domain), as well as information about the usage policy and any validation efforts (see
Table 1).

Table 1
Overview of general specification features and numerical results per classification option.

Feature 1 (Public: Educational level): Undergraduate (14); Graduate (3); Secondary (4); Senior secondary/freshmen (4); Other (4); not specified/unknown (15)
Feature 2 (Target language): English (31); German (4); Spanish (1); Danish (1); Dutch (1); Swedish (1); Independent (5)
Feature 3 (Support language): English (36); German (3); Spanish (1); Danish (1); Dutch (1); Other (2)
Feature 4 (Public: L1/L2): L1 (17); L2 (6); L1 + L2 (16); not specified/unknown (5)
Feature 5 (Tool category): AWE (26); IWP (16); ITS (1); AWE + ES (1)
Feature 6 (Genre): Essay (11); Research paper (10); Other (10); not specified/unknown (13)
Feature 7 (Domain): No specific domain (37); Specific domain* (7)
Feature 8 (Policy): Openly accessible (16); Free registration (6); Paid registration (14); Restricted access (domain users) (16); not specified/unknown (2)
Feature 9 (Validation efforts): Available (21); Unavailable (23)

* The following specific (combinations of) academic/writing domains were annotated one time each: Social sciences; Business and technological communication; Applied Linguistics, Computer Science, and Material Science; Humanities and Liberal arts; Science; writing across the curriculum (WAC) and critical thinking; German for specific purposes.

In accordance with review criteria, the selection focused on writing tools for higher education. 17 of the 44 tools target higher
education explicitly, with three of them targeting graduate students. Eight tools explicitly targeting secondary students were included
in the review as we found them to be relevant for higher education purposes as well. The same applies to the remaining 15 tools, for
which no education level target was specified.
70% of the 44 tools reviewed offer support in English (31) with a few German (4) and some language independent (adaptable)
tools (5), targeting users writing in their mother tongue (L1) (17) or mainly L1 writers without excluding writers in a foreign or
second language (L2) (16). Few tools (6) target L2 users specifically.
Over half the tools reviewed provide AWE on text input (27), one of which is in combination with AES. Only one tool, WritingPal,
was classified as an ITS. A previously unrecognized category emerged during our analysis: Interactive Writing Platforms (IWP). These
are tools that provide writing prompts and pedagogical scaffolds for learners but do not process the input (e.g. C-SAW, Rationale).
This new category included 16/44 tools.
Writing genres, where specified, were distributed between essay (11) and research paper (10) with the vast majority (37/44) not
specifying any particular domain of studies. With one exception aimed at science writing (DiSci), domains specified are in the
humanities or social sciences (social science, linguistics, business, humanities, psychology).
Tools are available under varied conditions and policies. Some commercial products available under paid-only access (14/16) offer some free access through trial or limited versions, while some of the openly accessible tools (16/44) offer additional services for a premium. A further 12 tools restrict access to users of a particular domain address (6) or to registered users (6). The six tools available only to particular domain users were all developed within academic research projects and made available to students or researchers within these institutions.
Nearly half of the tools reported publicly available validation efforts of some kind. Table 1 only provides generic information;
specific references for each tool can be found in the full data set (see Appendix A, available electronically).

37
C. Strobl et al. Computers & Education 131 (2019) 33–48

3.2. Instructional approaches

The instructional approaches category groups information regarding the particular pedagogical focus and the type of writing support each tool provides. The results of this category are presented in three subsections organised according to significant interactions between features (see Table 2).

Table 2
Overview of instructional approach features and numerical results per classification option.

Feature 10 (Text level focus): Micro (22); Macro (11); Macro + micro (11)
Feature 11 (Instructional setting): Self-directed learning (20); Along with in-class instruction (5); After in-class instruction (1); Flexible (9); Not specified (9)
Feature 12 (Targeted subtask): Whole writing process (12); Planning and/or drafting* (8); Revising and/or editing (12); Revising (5); Editing (7)
Feature 13 (Targeted skills and strategies): Factual knowledge (15); Procedural knowledge (9); Conceptual knowledge (3); Meta-cognitive knowledge (2); Combination (14); not specified/unknown (1)
Feature 14 (Instructional practice): Scaffolding student writing (24); Explicit teaching of skills/processes/knowledge/strategies (8); Process writing approach (5); Instruction given with feedback (2); Combination (5)
Feature 15 (Digital interaction support): Learner-Teacher/Tutor (8); Learner-Learner (1); Learner-Teacher + Learner-Learner (6); None (28)
Feature 16 (Adaptability: learner): Minor adaptations possible (9); Major adaptations possible (7); Fully adaptable (6); None (15); not specified/unknown (2)
Feature 17 (Adaptability: teacher): Minor adaptations possible (8); Major adaptations possible (4); Fully adaptable (1); None (31)

* This category was collapsed for the sake of overview from the following categories (n in brackets): "Prewriting: Planning" (1), "Prewriting: Other" (1), "Drafting" (1), "Planning and Drafting" (5).

3.2.1. Text level focus, targeted subtask, and targeted skills and strategies
From a selection of writing process models (Bereiter & Scardamalia, 1987; Hayes & Flower, 1980), we identified the writing
subtask options to be prewriting, planning, drafting, revising, and editing. A minority of tools support prewriting, drafting and planning
tasks (8/44). More than half the tools focus only on editing and revising tasks (24/44). The remaining 12 tools support the user
throughout the whole writing process.
The targeted subtasks supported by the tools were found to correlate significantly with the targeted skills and strategies and with the main text level focus, as revealed by chi-square tests. As shown in Fig. 2, tools focusing on the micro text level only (words, phrases, or sentences) (22/44) mostly target editing and revising tasks, while those focusing on the macro level only (paragraphs, whole text) (11/44) address planning and drafting tasks. The remaining eleven tools that focus on both macro and micro text levels also tend to cover the whole writing process (χ2 (8, N = 44) = 36.43, p < .001).
A statistically significant relationship was also found between the subtask focus of the 44 tools and the skills and strategies they support (χ2 (20, N = 44) = 45.01, p = .001). We observed an uneven distribution of targeted skills over the six domains: factual (15), procedural (9), conceptual (3), and metacognitive (2) knowledge, and affective and social skills.1 The latter two never form the predominant focus of a writing tool, but are only targeted as secondary skills in combination with one or more knowledge domains. As would be expected, most tools focusing on the whole writing process also support a combination of skills and strategies (10/12) (see Fig. 3). The tools focused on procedural knowledge (9) are spread across pre- and post-writing tasks, while 14/15 of those related to factual knowledge or metacognitive reflection are found within the editing and revising post-task groups. Few tools cover only conceptual (3) or metacognitive (2) skills and strategies across different tasks.

3.2.2. Instructional setting, instructional practice, adaptability, and targeted subtask


The instructional settings in which the tools aim to be used were found to be coherent with the instructional practice they employ
and with the support provided for teacher and/or learner adaptability. While the largest group of tools is specifically intended for self-
directed use and learning (20/44), only six tools are specifically intended to support teachers and students during or after class
instruction.
The instructional practices employed by the tools tend to correlate with the intended instructional settings (χ2 (16, N = 44) = 24.1, p = .087).

1 Our taxonomy of targeted skills domains builds upon Anderson et al. (2001). Please refer to Appendix 3.3 for more information on the individual domains.


Fig. 2. Text level focus by targeted subtask.

Fig. 3. Targeted subtask focus by targeted skills/strategies.

The majority of the tools employ a scaffolding approach to support writing, which ties in with the predominant self-
directed use. It is also noteworthy that only seven tools offer explicit instruction of skills, processes, knowledge or strategies.
Given the comparatively low number of tools that are intended for use in classroom settings (15/44), it follows that most tools (31/44) do not allow for adaptations to be made by teachers. The relationship between adaptability by the teacher and instructional setting was found to be significant (χ2 (12, N = 44) = 27.02, p < .01). A chi-square test of independence also revealed a significant relationship between adaptability by the teacher and instructional practice (χ2 (12, N = 44) = 27.20, p < .01). The tools that allow adaptations according to teachers' preferences mostly employ a scaffolding approach (5), followed by tools adopting a process writing approach (3). As many tools are intended to be used in a self-directed context (20/44), adaptability by learners was more prevalent. Slightly more than half of the tools allow some adaptation by learners (23/44), and this also showed a tendency towards a significant relationship with instructional setting (χ2 (16, N = 44) = 24.50, p < .08), but not with instructional practice.
Furthermore, we detected significant relationships between learner adaptability, on the one hand, and the tools' subtask focus and basic tool category (AWE, ITS or IWP; see Table 1), on the other. Learner adaptability is less prevalent when the tool's focus is on post-writing tasks such as editing or revising than when the focus involves prewriting, drafting and planning (χ2 (4, N = 44) = 10.14, p < .05) (see Fig. 4). In the same vein, AWE tools are significantly less likely to offer any options for learners to adapt them to individual preferences (17/27) than ITS and IWP tools (12/16) (χ2 (8, N = 44) = 15.45, p < .05).

3.2.3. Digital interaction support, instructional practice, and targeted subtasks


The types of human-computer interaction supported in technology enhanced learning and instruction can be learner-tutor,
learner-learner (collaboration or peer instruction), or learner-content interactions (Moore, 1989). Danesh, Bailey and Whisenand
(2015) introduce teacher-interface interactions to include interactions with the system to create and manage learning interactions. In
our evaluation framework, we only consider human-human exchanges as interactions, while teacher/learner-interface interactions
are considered as part of the adaptability features (see above). Only 16/44 tools had any interactivity available. Of these, eight
allowed interactions between student and teacher and six included interactions between learners and between learners and teachers.


Fig. 4. Writing subtask focus by learner adaptability.

Fig. 5. Digital interaction support by supported stage in writing process.

Fig. 6. Digital interaction support by instructional practice.

Only one tool allowed interactions solely between learners. Interaction options focus on the writing product (as opposed to the writing process). There is a significant relationship between the availability of human-human interaction and the stage in the writing process that is supported by the tool (χ2 (2, N = 43) = 8.46, p < .05) (see Fig. 5).
Finally, we also detected a significant relationship between the instructional practice adopted by the tools and the type of digital interaction available (χ2 (4, N = 43) = 10.53, p < .05) (see Fig. 6). 18/23 tools intended to scaffold student writing offer no support for human-human interaction, thus scaffolding student writing only through interactions with the system. Only 6/24 scaffold student writing through learner-teacher or learner-tutor interactions.

3.3. Technical specifications

The three features included in this section (ID 18 to 20) contain information about the technical specifications of the evaluated
tools, as far as this information was available to the reviewer (see Table 3).


Table 3
Overview of technical specification features and numerical results per classification option.

Feature 18 (Technology used): NLP (9); List-based pattern matching (5); LSA (1); Other (5); None (5); not specified/unknown (19)
Feature 19 (Backend data): Corpus (8); Word lists (6); Combination (8); not specified/unknown (22)
Feature 20 (Context): Web-based (36); Stand-alone software (5); Web-based or stand-alone (3)

The tools reviewed are based on a variety of technologies and use a range of back-end data for generating feedback; however, the majority do not readily reveal this information. A number of tools (9) appear to use some level of NLP analysis, several use list-based pattern matching (5) and a few use corpora (6) or wordlists (6). While many use some linguistic knowledge or technology, the degree or quality of linguistic or corpus analysis is not always evident. A general move to web-based application delivery is also reflected in the results of this survey. Most of the tools (39/44) are web-based (36) or have a web-based option (3). Only five are standalone software applications. A handful of tools were noted for offering additional visual representations or graphic elements to scaffold writing or interactions (CohVis, C-SAW, Rationale, Open Essayist) or commended for their particularly good graphics (Scribo).2

3.4. Feedback specifications

This section contains information about the feedback that is provided by 28 out of the 44 tools under review. The feedback
specification features were based on Narciss’ (2013) Interactive Tutoring Feedback Model (see Table 4).

Table 4
Overview of feedback specification features and numerical results per classification option.

Feature 21 (Source): Computer only (22); Computer + human (6)
Feature 22 (Focus): Product (17); Process (7); Self-regulation (3); Combination (1)
Feature 23 (Tutoring component): Language correctness (7); Genre (4); Content and structure (4); Strategies/techniques (3); Combination (10)
Feature 24 (Specificity level): Whole text (5); Text section (3); Paragraph (1); Sentence (4); Word (2); Combination (13)
Feature 25 (Provision): Simultaneous (whole text) (16); Per text level (2); One comment at a time (7); not specified/unknown (3)
Feature 26 (Delivery): Automatically (11); Upon student request (14); Upon teacher request (2)

Of the 44 tools under review, 28 provide some kind of automated feedback, with six of them allowing for (optional) additional feedback provided by a human rater, i.e., a teacher, tutor or peer. An interesting trend in the latter category is specialty e-feedback systems that help speed up the feedback process by providing preset feedback codes or comments on writing (e.g., Corpuscript). Moreover, some tools allow feedback givers to integrate links to other sources of information, such as websites or course management systems.
With respect to feedback focus, we distinguish feedback on the written product (17) from guidance that helps writers manage the writing process (e.g., how to start structuring your ideas) (7) or boosts their self-regulation skills (e.g., how to avoid writer's block) (3). The tutoring component is in line with this focus, as indicated by a chi-square test of independence approaching significance between the two features (χ2 (16, N = 28) = 24.81, p < .07). Tools focusing on the written product provide information on non-target-like language use (11) and/or the appropriateness of content and structure (6), whereas tools focusing on the writing process or self-regulation provide tutoring with regard to demands or conventions of the genre (7) and/or writing strategies and techniques (5).3
2 This information was gathered in an open-answer text field for "additional observations", which can be found in the last column of the evaluation framework (see Appendix A).
3 The total exceeds 28, as ten tools provide feedback on more than one component.


Moreover, there is a close relationship between feedback focus and tutoring component, on the one hand, and the writing phase or subtask the tool is intended to support (see Table 2, feature 12), on the other. A chi-square test of independence, with the "subtask" feature collapsed into the three subcategories "pre-writing or during-writing", "post-writing" and "whole writing process", shows significant relationships between subtask and feedback provision (χ2 (10, N = 44) = 21.90, p < .05), and between subtask and delivery (χ2 (8, N = 44) = 23.11, p < .01). Feedback tends to be given simultaneously (i.e., at once) on the finalised written product, upon
student request, and focuses predominantly on language use and text content and/or structure. This corresponds with the observation
that tools focusing on pre- or during-writing tasks typically do not provide automated feedback (only 1/8), whereas two thirds of the
tools supporting post-writing tasks (16/24) and nearly all tools supporting the whole writing process (11/12) do provide feedback
(see Table 3).
Language-related feedback, as would be expected, is especially prominent in - but not restricted to - tools that are designed to
support L2 writers, and targets such diverse features as spelling and grammar (6/10), sentence fluency or length (3/10), word choice
(3/10), style (2/10), clarity of expression (2/10), and voice (2/10). Other uniquely occurring language-related feedback content
includes lexical sophistication (expressed as a level score referring to the Common European Framework of Reference for Languages in
Lärka), collocations (DiSci), congruence of adjacent words for gender and number (Deutsch-uni online), phrasing (escribo), and wordiness and eggcorns (Grammark). Feedback targeting text content and structure typically addresses the overall organisation and
representation of ideas (4/6) and/or coherence and cohesion (2/6). In terms of fostering metacognitive reflection about content via
automated feedback, the approach taken by Open Essayist deserves special attention. This tool presents a visual representation of the
core ideas in the produced text, identifying key words and sentences, and encouraging the users to compare this representation with
their own mental representation of the intended text. Genre-related tutoring components of automated feedback include general
conventions (e.g., citation rules) (3/7), academic language use (e.g., passive voice and nominalisation) (4/7), and rhetorical move-
step analysis based on Swales (1990) (Research Writing Tutor). With regard to feedback on the writing process targeting strategy
development, the user can find help on task completion (WriteToLearn) or a revision plan and related activities (C-SAW).
Feedback was also classified according to its level of granularity, or "specificity". Nearly half of the tools (13) deliver feedback at different levels, while the others target either the word (2), the sentence (4), the paragraph (1), the section (3), or the whole text (5). With regard to the latter category, it is worth mentioning that some tools restrict the number of words that can be processed, and in a few cases this restriction depends on the kind of access (free vs. paid, see 3.1). When feedback is provided on the whole text, rubrics or traits are sometimes used to address several aspects of the writing (e.g., WriteToLearn). The number of drafts that can be stored in the system, with the feedback on every draft remaining available for comparison, varies from 1 up to 29 (PEG Writing). Systems that deliver feedback at higher or at different levels of granularity mostly provide all feedback simultaneously to the user (17/28). When feedback is provided successively, the provision is organised per level
of granularity (e.g., Klinkende Taal, Writing Aid English) or per text section (e.g., Scientific Writing Assistant, Thesis Writer).
A few tools offer interesting features with regard to modality, which were noted by the reviewers in an additional free-text column. Automated feedback provided in the online tools is not restricted to the written mode, but can take several shapes, ranging from colour codes (e.g., Research Writing Tutor, Lärka) to clickable symbols (e.g., CorpuScript, Deutsch-Uni online) to graphic representations of the text (e.g., C-SAW) or of a readability index (e.g., Right Writer). A tool that stands out in its use of graphic representations is CohVis, which depicts cohesion gaps as non-connected nodes within a concept map of the whole text.

4. Discussion and conclusion

This combined discussion and conclusion section is structured along the two main research aims of this study. First, we will
critically evaluate usability aspects of the framework that was established for the analysis of online writing support tools. This is
followed by a discussion of the main findings, highlighting provision and gaps in provision across existing tools as well as future
directions for tool development and research in the field.

4.1. Evaluating the framework for analysis

The comprehensive framework that was established to investigate ICT-based support tools for academic writing proved to be fit
for purpose for a quantitative descriptive analysis. The primary contribution of the framework is in the integration of theoretical
perspectives to interrogate the full range of functionalities of a tool. The framework includes dimensions deriving from writing
process theory, Assessment for Learning (AfL), web-based learning theory, computer-assisted language learning, L1 and L2 writing, as
well as a usability and design focus. The integration of these domains in the framework for analysis provides a shared platform for
discussion and a basis to generate insights that cross traditional disciplinary boundaries. For example, the integration of an AfL
perspective on feedback with writing process theory has identified a key gap in the feedback mechanisms of most writing tools in the
area of feedback on process and strategy rather than product, potentially focusing writers on outcomes rather than the learning
process.
In terms of future applications of the analysis framework, it was established in an iterative design-evaluation cycle and we
acknowledge the need for any tool to remain responsive to its context of use. New tools and new theoretical perspectives could
require additional features or refinement of existing parameters. In particular, this framework is limited in terms of its representation
of the economic and technological dimensions of writing support tools. The economic contexts of writing support tools are very
diverse. While some are tools built in a research context and may be released for use by the public, others are designed and marketed
for commercial purposes. Information regarding the design process, tool architecture and validation studies underpinning a tool may be proprietary and not publicly available, or it may be well disseminated through publication. Given the inconsistent information
available on these topics, these dimensions of the framework were not fully developed. A second area where the framework could be
extended is with regard to the usability of tools. A full scale usability study for all tools was beyond the scope of this project and
usability information was also inconsistent across different tools. Therefore, this dimension of tool analysis was not fully explored
through the framework.
Despite these limitations, the framework which focuses on instructional and learning factors provides depth and breadth in its
perspectives. This integrated view delivers a comprehensive overview of functions and functionality for our target public of prac-
titioners, researchers, as well as technology and learning designers.

4.2. Evaluating the breadth of provision of writing support tools

Our second research goal was to provide a snapshot of the variety and provision of existing systems. This goal can be subdivided into three sub-goals: (a) identification of the areas and/or instructional settings that are well covered by the existing range of support tools, (b) identification of gaps to be addressed in the development of future tools and (c) refinement of the existing taxonomy of writing support systems. The tools we examined provide learners with opportunities for practice and feedback of varying kinds, two key elements to support the development of student writing (Graham, Hebert, & Harris, 2015; Kellogg & Raulerson, 2007) and to provide a holistic approach to AfL in higher education (Sambell et al., 2013). This can certainly be a valuable asset for students, providing a low-risk environment where they can write and receive close-to-immediate feedback.
The observed predominance of tools for English-language writing instruction may reflect the need to prepare students for higher education, or to help them achieve the level of writing competence expected in higher education that may be lacking upon entrance to particular programs in the US and the UK. With regard to scientific domain and writing genres, our results show a tendency towards writing tools being used to improve basic academic writing skills rather than using writing as a pedagogical approach for learning within or about a domain (writing-to-learn). The low number of tools that allow teachers to adapt them implies that most tools aim to support the development of writing skills through practice outside the classroom, with little instructor intervention. It may further indicate an intent to use these tools to compensate for the lack of class time, or for differences in the time students need to master certain skills, rather than to support teaching practices.
Furthermore, our findings indicate that most tools provide evaluation and feedback on written outputs at the revising and editing phases of the writing process. Within this group, the majority of tools provide feedback on the micro level, i.e. on sentence- and word-level errors with respect to a native-speaker norm. Some, but relatively few, provide macro-level feedback on discourse structure, text coherence and cohesion, or genre features. None provide feedback on the message of the learners' texts, e.g. to what extent they address a particular task or the success of a particular argument. This is an acknowledged shortfall of existing AES and hence AWE tools (Allen et al., 2016). Effective feedback (Nicol & Macfarlane-Dick, 2006; Yang & Carless, 2013) on writing should clarify what good writing looks like and should feed forward into future work (i.e., identify what could be done better next time). Micro-level feedback can provide this, but only to a limited extent. Macro-level feedback on discourse structure and rhetorical moves, when combined with supports for text structuring, can both clarify what good looks like in terms of text structure and feed forward to improve this dimension. Currently, while automated support for revision on the micro-level targeting factual knowledge is well represented, tools that support the development of writing strategies and encourage self-monitoring to improve macro-level text quality are rare. In the same vein, the relatively scarce support for human interaction is predominantly aimed at evaluating products rather than scaffolding processes.
In maintaining a focus on micro-level, form-focused aspects of learner texts, the majority of tools address the learners' written products rather than the writing process. Graham, Harris, and Chambers' (2016) meta-analysis of writing instruction identifies explicit instruction in writing strategies as having the largest effect sizes in improving writing, yet most of the tools focus on emulating models and on writing assessment through machine feedback, strategies that in that study had the smallest effect sizes. The tools which provide support for the whole writing process address this area of need with a focus on scaffolding writing and providing feedback.
The introduction of a technology into instruction implies introducing a "transactional distance" into learning interactions that can impede achieving desired learning outcomes (Moore, 1997). The distance of the "teaching-learning" transaction can be decreased through devices that enable dialogue and communication between various actors. Writing tools can serve to further reduce this distance by supporting learner-tutor and learner-learner interactions (as in the case of IWP) and by providing salient learner-interface and teacher-interface learning interactions (as in the case of AWE and ITS). The structure of technology-mediated instruction (flexibility, personalisation of guidance, adaptability) is a second variable that can affect transactional distance. The third variable is the instructional technology's ability to support learner autonomy and self-directed learning. As regards facilitating dialogue, the majority of tools are marketed for self-directed learners, assuming no external instructional or social context for the learner. The findings indicate that systems overall provide very little support for interactivity between users (learners and teachers), perhaps impacting on transactional distance. Within a social constructivist theory of learning, the dialogic process is essential to learning. Dialogic feedback, or engaging in dialogue in relation to feedback, is a key feature of effective feedback and develops students' self-regulation skills (Nicol & Macfarlane-Dick, 2006). Where interactivity is available, it is focused on the written product and could facilitate this dialogic feedback process on students' writing. Similarly, the lack of adaptability of most systems is problematic as regards transactional distance. Adaptability tends to be a feature of systems supporting pre-writing tasks without automated feedback options. While this is useful, feedback which cannot be targeted to learners' needs or focus for development can limit the capacity of students to engage with and use the feedback provided.
These reflections on interactivity and adaptability serve to emphasise the importance of the instructional context for the use of these tools, even where they are marketed as tools for the self-directed learner. Graham et al. (2015) indicate that computer-generated
feedback linked to instruction has an effect size of about 0.38 in an elementary school context. While the findings here were for
schools, they indicate the importance of the overall pedagogical context for the writing tools. The tools examined offer extensive
opportunities for practice and confidence building and some are rich in feedback of different kinds, as advocated by Sambell et al.
(2013) for optimum AfL conditions in higher education. The extent to which they develop students’ ability to direct their own
learning and evaluate their own progress (Sambell et al., 2013) may be dependent on their overall instructional context for use.

4.3. Refining the taxonomy for writing support systems

As noted above, the classification of writing systems in Allen et al. (2016) was integrated in our analysis framework, identifying
tools as AES, AWE or ITS systems. Arising from our analysis, we suggest an addition to this classification to include Interactive
Writing Platforms (IWP) focusing on planning, pre-writing and drafting processes (see Fig. 7). In addition, based on the broad range
observed within the AWE category, we propose a refinement to distinguish those systems which provide relatively simple feedback on
low-level aspects of writing (“micro-AWE”) from those which provide more sophisticated feedback on low-level elements as well as
on the whole text as a unit of cohesive and coherent discourse (“macro-AWE”) (see Fig. 8).
Within this taxonomy, an ITS stands at the interface of IWPs and AWEs, where automated analysis of written products (AWE) is used to provide tailored support to students on the writing process (IWP). This revised taxonomy (see Fig. 8) allows us to clearly delineate the gaps in provision noted above. In the current review of writing support tools, the majority constitute micro-AWE tools. There is a gap in provision of macro-AWE tools. There is also a gap in support of the writing process, which is partially addressed by some IWP platforms. These gaps serve to highlight the development resources needed to realise ITS systems that can support and develop student writing.

Fig. 7. Basic tool category by text level focus.

Fig. 8. Proposal for a refined taxonomy of online tools for writing support.


4.4. Future directions

This paper provides a timely insight into the range and coverage of the writing support tools available. With this structured analysis of 44 writing support tools across 26 descriptive features, we have identified some key areas for potential development:

• provision for support in languages other than English, as the majority of research papers produced in secondary and tertiary education is written in the national or regional language;
• a stronger focus on macro-level feedback focusing on the writing goals and genres;
• stronger integration of strategy instruction, ideally linked to feedback;
• greater inter-user interactivity to support scaffolding and feedback processes;
• more use of multimodal options for feedback provision;
• increased adaptability for learners or teachers to target individual needs, e.g. focused feedback on specific elements or scaffolding
for learners to use feedback (Hattie & Timperley, 2007).

In the absence of automated processing that can provide macro-level, message-focused feedback linked to strategy
instruction and writing scaffolds, the instructional context in which writing tools are used remains crucial. Writing tools can facilitate this context
through a greater focus on adaptability and interaction for both learners and teachers. To inform development, writing research needs
to examine the effectiveness of tools over different time scales, both in longitudinal studies and in experimental studies of activity
during the writing process, and to examine the qualitative aspects of using technology to support writing. This paper paints the current
landscape of technology to support writing instruction and points to where the field can expand its horizons in the future.

Acknowledgements

This work was supported by the European Union COST Action IS1401 (https://www.is1401eln.eu/en/working-groups/working-
group-3/). We would also like to thank Sarah Huffman (Iowa State University) and Lieve De Wachter (Catholic University of Leuven)
for their invaluable contribution to an early stage of this project. Last but not least, we would like to thank the anonymous reviewers
for their comments that greatly helped to improve the manuscript.

Appendices

A. Evaluation framework

Data available electronically in the open access repository TARA. Name: COST ELN IS1401 Writing Support Tools Review
Database. URL: http://hdl.handle.net/2262/85522

B. List of all tools included in the detailed analysis

Academic Vocabulary
Article Writing Tool
AWSuM
C-SAW (Computer-Supported Argumentative Writing)
Calliope
Carnegie Mellon prose style tool
CohVis
Corpuscript
Correct English (Vantage Learning)
Criterion
De-Jargonizer
Deutsch-uni online
DicSci (Dictionary of Verbs in Science)
Editor (Serenity Software)
escribo
Essay Jack
Essay Map
Gingko
Grammark
Klinkende Taal
Lärka
Marking Mate (standard version)
My Access!
Open Essayist

Paper rater
PEG Writing
Rationale
RedacText
Research Writing Tutor
Right Writer
SWAN (Scientific Writing Assistant)
Scribo - Research Question and Literature Search Tool
StyleWriter
Thesis Writer
Turnitin (Revision Assistant)
White Smoke
Write&Improve
WriteCheck
Writefull
WriteLab
Writer's Workbench
WriteToLearn
Writing Aid English
Writing Pal

C. Glossary of terminology used in the article

Term Definition

Automated feedback  Feedback provided by a computer.
Drafting  Written translating process: the writer “translates” the previously planned information into written words and sentences.
Editing  The function of this process is to detect and correct violations of writing conventions and inaccuracies of meaning.
Feedback giver  Human feedback source (teacher, tutor, or peer).
NLP (Natural Language Processing)  Branch of artificial intelligence exploring how computers can be used to understand and manipulate human language.
Planning  The function of this process is to generate and organize ideas, setting goals, using information retrieved from long-term memory and the task environment.
Prompt  Brief passage of text providing a topic idea for a piece of writing.
Revising  The function of this process is to evaluate and to edit the text produced so far in order to improve it.
Self-monitoring  The writer's ability to think about the effectiveness of the writing strategies used, during or after writing.

D. Evaluation framework: Features, classification options, and validation help text.

This appendix lists the 26 features of the evaluation framework, subdivided into four tables (one for each main category of the
framework). The list numbers correspond to the feature IDs as represented in the framework and in the results section of the article.
The feature names are followed by a list of pre-defined classification options (separated by semicolons) and validation help text
(in parentheses), where applicable.

D1. General specifications;
D2. Instructional approaches;
D3. Technical specifications;
D4. Feedback specifications.

Appendix D1. Features, classification options, and validation help text of the evaluation framework. Category “General specifications”.

ID Feature name Classification options (+ Validation help text, where applicable)

1 Public: Educational level  Secondary; Freshmen; Undergraduate; Graduate; Other; Not defined. (For whom is the tool (mainly) intended/useful in your opinion?)
2 Target language English; French; German; Spanish; Dutch; Finnish; Swedish; Danish; Japanese; Other. (What language(s) does the tool support for
written input? If "other", please specify)
3 Support language English; French; German; Spanish; Dutch; Finnish; Swedish; Danish; Japanese; Other. (In which language(s) is the interface/the
instruction available? If "other", please specify)
4 Public: L1/L2 L1; L2; L1+L2; Not specified. (Is the tool mainly intended/appropriate for use by native speakers (L1) or language learners (L2,
independent of proficiency level)?)
5 Tool category AWE (Automated Writing Evaluation: Provision of formative feedback on written text);
IWP (Interactive Writing Platform: Prompts learner input, providing pedagogical scaffolding, but does not process the input);
ITS (Intelligent Tutoring System: Mainly strategy-oriented writing aid);
AWE + ES (Essay Scoring: Provision of summative feedback).

6 Genre Essay; Research paper; Other; Not defined. (What is the type of text the tool supports? If "Other", please specify)
7 Domain Free response (Is the tool tailored towards a specific research domain? If the answer is "yes", please specify)
8 Policy Openly accessible; Accessible upon free registration; Accessible upon paid registration; Restricted access (domain users only);
Unknown. (Is the tool openly accessible (and free to use)?)
9 Validation efforts - Free response - (publication references)

Appendix D2. Features, classification options, and validation help text of the evaluation framework. Category “Instructional approaches”.

ID Feature name Classification options (+ Validation help text, where applicable)

10 Text level focus Macro-level ("Higher-order concerns": Content, Structure, Coherence, …);
Micro-level ("Lower-order concerns": Grammar, Punctuation, Vocabulary, …);
Macro + Micro-level.
11 Instructional setting Along with in-class instruction; Self-directed learning; Both.
12 Targeted subtask of the writing process  Planning; Other prewriting activities; Drafting; Revising; Editing; Whole writing process.
13 Targeted skills and strategies Factual knowledge (The basic elements a student must know to be acquainted with a discipline or solve problems in it
(knowledge of terminology, knowledge of specific details and elements, e.g., features of a coherent argument));
Conceptual knowledge (The interrelationships among the basic elements within a larger structure that enable them to
function together (knowledge of classifications and categories, knowledge of principles and generalizations, knowledge of
theories, models, and structures, e.g., differentiate coherent and non-coherent arguments));
Procedural knowledge (How to do something, methods of inquiry, and criteria for using skills, algorithms, techniques, and
methods (knowledge of subject-specific skills and algorithms, knowledge of subject-specific techniques and methods,
knowledge of criteria for determining when to use appropriate procedures, e.g., how to build a coherent argument));
Metacognitive knowledge (Knowledge of cognition in general as well as awareness and knowledge of one's own cognition
(strategic knowledge, knowledge about cognitive tasks, including appropriate contextual and conditional knowledge, self-
knowledge, e.g. strategies to build a coherent argument, reflect upon own writing process));
Affective (e.g., cope with writing anxiety);
Social (e.g., provide appropriate peer feedback).
14 Instructional practice Process writing approach (Extended opportunities for writing: students engage in cycles of planning, translating, and
reviewing. They write for real purposes and audiences, with some of their writing projects occurring over an extended period
of time. Students' ownership of their writing is stressed, as is self-reflection and evaluation. Students work together
collaboratively, and teachers create a supportive and nonthreatening writing environment. Personalized and individualized
writing instruction is provided through mini-lessons, writing conferences, and teachable moments);
Explicit teaching of skills, processes, or knowledge (Sustained, direct, and systematic instruction designed to facilitate
student mastery (e.g., grammar instruction, sentence combining technique, strategy instruction, summarization techniques,
teaching knowledge about the structure of specific types of text));
Scaffolding students' writing (Providing some form of assistance that helps the student carry out one or more processes
involved in writing (e.g., structuring how a student carries out a particular writing process, having peers help each other as
they compose, providing students with feedback on their performance, focusing students' attention on specific aspects of the
task, providing a model of what the end product should look like)).
15 Digital interaction support Learner-teacher/tutor; Learner-learner; Combination; None. (Does the tool support interaction of several actors involved
in the writing process?)
16 Adaptability to preferences of the learner  Fully adaptable; Major adaptations; Minor adaptations; Not adaptable. (Can learners adapt content and/or learning path according to their individual preferences? E.g., first read an example and then the explanation or vice-versa)
17 Adaptability to preferences of the teacher/tutor  Fully adaptable; Major adaptations; Minor adaptations; Not adaptable. (Can teachers adapt content and/or the learning path of the tool according to their instructional focus/intention?)

Appendix D3. Features, classification options, and validation help text of the evaluation framework. Category “Technological specifications”.

ID Feature name Classification options (+ Validation help text, where applicable)

18 Technology used  Coh-Metrix (Tool that calculates a readability index of a text); NLP (Natural Language Processing, e.g., syntactic parser, part-of-speech recognition, calculation of lexical density/sophistication, …); List-based pattern-matching; LSA (Latent Semantic Analysis/indexing, i.e., semantic interpretation of text input); Other; Unknown. (What kind of technology is used to process (and evaluate) written input? See the illustrative sketch after this table.)
19 Backend data Word lists; Dictionary; Corpus; Example solutions for self-evaluation; Combination; Other; Unknown; Not applicable (no feedback on or
analysis of user input).
20 Context Web-based; Stand-alone software; Web-based or stand-alone.
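As a rough illustration of the difference between the technology categories in feature 18, the hedged sketch below computes lexical density (the proportion of content words), one simple NLP-derived measure, using a small invented function-word list; list-based pattern matching, by contrast, would only look tokens up in such lists to flag them. Real tools would rely on part-of-speech tagging and much larger lexical resources; the word list and function name here are invented for the example.

# Illustrative only: lexical density as content words / all words. The
# FUNCTION_WORDS set is a tiny invented stand-in for a real stop-word resource.
import re

FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "in", "on", "to", "for",
    "with", "is", "are", "was", "were", "be", "been", "it", "this", "that",
}

def lexical_density(text):
    """Return the share of tokens that are not function words."""
    tokens = re.findall(r"\b\w+\b", text.lower())
    if not tokens:
        return 0.0
    content_words = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content_words) / len(tokens)

print(round(lexical_density("The results of the experiment were analysed in detail."), 2))  # -> 0.44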

Appendix D4. Features, classification options, and validation help text of the evaluation framework. Category “Feedback specifications”.

ID Feature name Classification options (+ Validation help text, where applicable)

21 Feedback Source Computer; Computer + Human.


22 Feedback Focus  Product (Is a text product (or parts of a text) correct/incorrect? May include directions to improve a text); Procedure (Feedback aimed at the processes/strategies/techniques used to create a product); Self-regulation (Addresses the way students monitor, direct, and regulate actions toward the writing/writing goal); Combination.
23 Tutoring Component  Language correctness; Genre; Content; Combination.

24 Specificity level Whole text; Text section; Paragraph; Sentence; Word; Combination.
25 Provision Simultaneous (whole text); Per text level; One comment at a time.
26 Delivery Automatically; Upon student request; Upon teacher request.

References

Allen, L. K., Jacovina, M. E., & McNamara, D. S. (2016). Computer-based writing instruction. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.). Handbook of writing
research (pp. 316–329). (2nd ed.). New York, NY: Guilford.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., et al. (2001). A taxonomy for learning, teaching, and assessing: A
revision of Bloom's taxonomy of educational objectives (abridged edition). New York, NY: Longman.
Attali, Y. (2013). Validity and reliability of automated essay scoring. In M. D. Shermis, & J. Burstein (Eds.). Handbook of automated essay evaluation: Current applications
and new directions (pp. 181–198). New York, NY: Routledge.
Bazerman, C. (2016). What do sociocultural studies of writing tell us about learning to write? In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.). Handbook of writing
research (pp. 11–23). (2nd ed.). New York, NY: Guilford.
Beach, R., & Friedrich, T. (2006). Response to writing. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.). Handbook of writing research (pp. 222–234). New York, NY:
Guilford.
Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Mahwah, NJ: Lawrence Erlbaum Associates.
Cotos, E. (2015). Automated writing analysis for writing pedagogy: From healthy tension to tangible prospects. Writing & Pedagogy, 7(2–3), 197–231. https://doi.org/
10.1558/wap.v7i2-3.26381.
Crossley, S. A., & McNamara, D. S. (Eds.). (2016). Adaptive educational technologies for literacy instruction. London: Routledge.
Danesh, A., Baily, A., & Whisenand, T. (2015). Technology and instructor-interface interaction in distance education. International Journal of Business and Social Science,
6(2), 29–47.
Emig, J. (1971). The composing process of twelfth graders. Urbana, IL: National Council of Teachers of English Press.
Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition & Communication, 32(4), 365–387.
Foltz, P. W., Streeter, L. A., Lochbaum, K. E., & Landauer, T. K. (2013). Implementation and applications of the intelligent essay assessor. London: Routledge.
Glaser, B. (1965). The constant comparative method of qualitative analysis. Social Problems, 12(4), 436–445. https://doi.org/10.2307/798843.
Graham, S. (2006). Strategy instruction and the teaching of writing: A meta-analysis. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.). Handbook of writing research
(pp. 187–207). New York, NY: Guilford.
Graham, S., Harris, K. R., & Chambers, A. B. (2016). Evidence-based practice and writing instruction: A review of reviews. In C. A. MacArthur, S. Graham, & J.
Fitzgerald (Eds.). Handbook of writing research (pp. 211–226). (2nd ed.). New York, NY: Guilford.
Graham, S., Hebert, M., & Harris, K. (2015). Formative assessment and writing: A meta-analysis. The Elementary School Journal, 115(4), 523–547. https://doi.org/10.
1086/681947.
Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99(3), 445–476.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487.
Hayes, J. R., & Flower, L. (1980). Identifying the organization of the writing process. In L. W. Gregg, & E. R. Steinberg (Eds.). Cognitive processes in writing (pp. 3–30).
Hillsdale, NJ: Erlbaum.
Jackel, B., Pearce, J., Radloff, A., & Edwards, D. (2017). Assessment and feedback in higher education: A review of literature for the higher education academy.
Retrieved from https://research.acer.edu.au/higher_education/53.
Kellogg, R. T., & Raulerson, B. A. (2007). Improving the writing skills of college students. Psychonomic Bulletin & Review, 14(2), 237–242. https://doi.org/10.3758/
BF03194058.
Langer, J. A., & Applebee, A. N. (1987). How writing shapes thinking: A study of teaching and learning. Urbana, IL: NCTE.
Li, J., Link, S., & Hegelheimer, V. (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language
Writing, 27, 1–18. https://doi.org/10.1016/j.jslw.2014.10.004.
MacArthur, C. A. (2016). Instruction in evaluation and revision. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.). Handbook of writing research (pp. 272–287). New
York, NY: Guilford.
Mahlow, C., & Dale, R. (2014). Production media: Writing as using tools in media convergent environments. In E.-M. Jakobs, & D. Perrin (Eds.). Handbook of writing and
text production (pp. 209–230). Berlin: De Gruyter.
Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1–7. https://doi.org/10.1080/08923648909526659.
Moore, M. G. (1997). Theory of transactional distance. In D. Keegan (Ed.). Theoretical principles of distance education (pp. 22–38). London: Routledge.
Narciss, S. (2013). Designing and evaluating tutoring feedback strategies for digital learning environments on the basis of the interactive tutoring feedback model.
Digital Education Review, (23), 7–26.
Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37, 375–401.
https://doi.org/10.1007/s11251-008-9053-x.
Nesi, H., & Gardner, S. (2012). Genres across the disciplines. Student writing in higher education. Cambridge: Cambridge Applied Linguistics.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher
Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090.
Peraya, D. (1999). Médiation et médiatisation: Le campus virtuel. Hermès, La Revue, 25, 153–167.
Poe, M., Lerner, N., & Craig, J. (2010). Learning to communicate in science and engineering: Case studies from MIT. Cambridge, MA: MIT Press.
Pritchard, R. J., & Honeycutt, R. L. (2006). The process approach to writing instruction: Examining its effectiveness. In C. A. MacArthur, S. Graham, & J. Fitzgerald
(Eds.). Handbook of writing research (pp. 275–292). New York, NY: Guilford Press.
Rapp, C., & Kauf, P. (2018). Scaling academic writing instruction: Evaluation of a scaffolding tool (Thesis Writer). International Journal of Artificial Intelligence in
Education, 28(1), 1–16. https://doi.org/10.1007/s40593-017-0162-z.
Roscoe, R. D., Allen, L. K., Weston, J. L., Crossley, S. A., & McNamara, D. S. (2014). The Writing Pal intelligent tutoring system: Usability testing and development.
Computers and Composition, 34, 39–59. https://doi.org/10.1016/j.compcom.2014.09.002.
Sambell, K., McDowell, L., & Montgomery, C. (2013). Assessment for learning in higher education. London: Routledge.
Sharples, M., & Pemberton, L. (1990). Starting from the writer: Guidelines for the design of user-centred document processors. Computer Assisted Language Learning,
2(1), 37–57.
Steenbergen-Hu, S., & Cooper, H. (2014). A meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning. Journal of
Educational Psychology, 106(2), 331–347. https://doi.org/10.1037/a0034752.
Swales, J. M. (1990). Genre analysis: English in academic and research settings. Cambridge: Cambridge University Press.
Walvoord, B. E. F., & McCarthy, L. P. (Eds.). (1990). Thinking and writing in college: A naturalistic study of students in four disciplines. Urbana, IL: National Council of
Teachers of English.
Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18(3), 285–297. https://doi.
org/10.1080/13562517.2012.719154.
