
Tseng, S.-S., & Er, E. (2024). Maximizing the impact of dialogic peer feedback on student engagement: The role of regulation support. Educational Technology & Society, 27(2), 133-148. https://doi.org/10.30191/ETS.202404_27(2).RP04

Maximizing the impact of dialogic peer feedback on student engagement: The role of regulation support
Sheng-Shiang Tseng1* and Erkan Er2
1Department of Education and Futures Design, Tamkang University, Taiwan // 2Department of Computer Education and Instructional Technology, Middle East Technical University, Turkey // u9241346@gmail.com // erkane@metu.edu.tr
*Corresponding author

(Submitted February 21, 2023; Revised July 3, 2023; Accepted July 9, 2023)

ABSTRACT: The absence of instructional support during peer feedback often prevents students from engaging with it. This study adopted regulated dialogic feedback as the instructional foundation and investigated its impact on students’ engagement using an experimental research design. Students in the experimental group (n = 26) performed the feedback practice with a regulated dialogic feedback approach through three phases, each involving a different regulation type: 1. negotiation and coordination of feedback activities involving socially shared regulation of learning (SSRL); 2. feedback provision and discussion to support its uptake involving co-regulation of learning (CoRL); and 3. translation of feedback into task progress involving self-regulation of learning (SRL).
Students in the control group (n = 25) performed the feedback practice without the regulated dialogic feedback
approach in an online discussion forum. The study lasted for 10 weeks for both groups. The research data
included students’ responses to peer feedback engagement surveys, students’ learning behaviors, and the
transcripts of interviews with students. The survey results show that the regulated dialogic feedback led to higher cognitive, emotional, and behavioral engagement with the feedback practice. Based on the feedback-
related learning behaviors, this study found that the SSRL, SRL, and CoRL processes can potentially promote
cognitive, emotional, and behavioral engagement. Extreme case analysis demonstrated how the support for
SSRL, SRL, and CoRL promoted engagement in all three areas (cognitive, behavioral, and emotional).
Pedagogical implications were provided for creating engaging dialogic feedback practices.

Keywords: Peer feedback, Dialogic feedback, Student engagement, Regulation of learning

1. Introduction
Feedback practice is a common instructional activity in online education to help students improve their learning
and perform better. It has a strong influence on online learning by encouraging effective online learning
behaviors (Li & Huang, 2023), promoting achievement in online learning (Cheng et al., 2015), and helping
learners stay focused, connected, and engaged (Oncu, 2015). Peer feedback is often perceived as more accessible
compared to teacher feedback as it tends to use simpler language, avoiding complex academic terms (Falchikov,
2005). Another important advantage of peer feedback is that students feel less threatened when discussing and
criticizing their peers’ comments or opinions, especially in online learning environments (DiGiovanni & Nagaswami, 2001). Despite its potential for supporting student learning, peer feedback practices have often been criticized as
having a limited impact on learning. One reason for their ineffectiveness is that peer feedback is mostly
implemented as a one-way interaction (Carless, 2020), where students simply post comments to praise peers or
correct misconceptions. One-way feedback does not offer sufficient opportunities for students to fully understand
and benefit from the feedback they receive, resulting in a lack of perceived value for the feedback activity and
reduced engagement. Rather than treating peer feedback as a one-way information transfer, framing it as a
dialogic process can lead to more powerful learning experiences (Carless & Boud, 2018; Zhu & Carless, 2018).
Dialogic peer feedback allows for bidirectional interactions between students and leads to more effective
feedback processes, where the feedback is moved forward to create an impact on learning (Wegerif, 2006;
Carless & Winstone, 2020).

Previous studies have made attempts to make feedback a dialogic process through different tools such as
discussion forums (Carless, 2016), mobile applications (Soh & Ho, 2014), and videos (Charteris & Smardon,
2013). However, a major weakness of previous research is that the tools were mostly developed without a solid
instructional foundation to support the dialogic feedback mechanisms (e.g., Noroozi et al., 2020; Hepplestone et
al., 2011; Hsia et al., 2016; Shang, 2022). These tools were utilized mainly to leverage the advantages of the
online modality, such as the ability to send and receive feedback without time limitations and to review feedback
at the recipient’s convenience (Hepplestone et al., 2011). Despite the utility of these tools in facilitating feedback
tasks for students (e.g., uploading their work or posting feedback comments), the lack of an instructional
foundation often leads to poor engagement with dialogic peer feedback. This can include a failure to provide
constructive feedback, a tendency to skim or read comments passively, and a lack of motivation to post feedback
comments or participate in follow-up discussions (Sinclair & Cleland, 2007; Wasson & Vold, 2012). However,
most educational studies were limited to examining the effects of peer feedback on students’ learning
performance, such as deep learning (Filius et al., 2018; Konert et al., 2012), writing performance (Novakovich,
2016), and motivation and self-efficacy (Hsia et al., 2016). Few studies have considered the role of instructional
design in facilitating dialogic peer feedback practices. Consequently, there is a critical gap in the literature
regarding specific instructional strategies that can promote student engagement with dialogic feedback, and this
issue remains underdeveloped and understudied (Trevelyan & Wilson, 2012).

Drawing on Hadwin et al.’s (2017) model of regulation, a regulated dialogic feedback approach that incorporates systematic regulation support was implemented and examined for its effectiveness in enhancing student engagement. To the best of our knowledge, this study represents one of the first efforts to leverage the regulation model to enhance dialogic feedback. The regulated dialogic feedback was composed of three regulatory phases. In the first phase, socially shared regulation of learning (SSRL) played an essential role as students negotiated and coordinated feedback activities, while in the second phase, co-regulation of learning (CoRL) was involved as students provided and discussed the feedback. In the last phase, where students translated feedback into task progress, self-regulation of learning (SRL) was critical. To identify the effects of the regulated approach, a
control group was created in which students completed the same feedback activity using a discussion forum
without the regulation support. This allowed for a comparison between the effects of the regulated dialogic
approach and the traditional approach on student engagement with the peer feedback activity. This study aims to
examine and explain the impact of regulation support during dialogic feedback on student engagement by
addressing the following research questions:
(a) Does the regulated dialogic feedback lead to higher student engagement with dialogic peer feedback?
(b) How does the regulated dialogic feedback promote student engagement with dialogic peer feedback?

2. Background
2.1. Student engagement with peer feedback

Student engagement with peer feedback is composed of three elements: cognitive engagement, behavioral
engagement, and emotional engagement (Appleton et al., 2008; Zheng & Yu, 2018). Cognitive engagement with
feedback practice refers to the cognitive effort of evaluating the peers’ work to provide useful feedback (Zhang
& Hyland, 2018). Cognitive engagement is associated with the effort spent in making sense of the peer feedback
received and using peer feedback to reflect on personal work. Behavioral engagement with feedback practice
refers to the number of feedback items provided and the time spent on producing the feedback (Price et al.,
2011). It also entails the percentage of improvements made to one’s own work in
accordance with the feedback received. Emotional engagement involves affective reactions and attitudes toward
feedback practice (such as motivation to provide feedback or perceptions of peer feedback) (Zhang & Hyland,
2018). The three elements of student learning engagement are crucial factors for successful peer feedback
practice (Bonnel, 2008). Students who are highly engaged in peer feedback practice can deepen their own
learning, construct knowledge relevant to their learning, and further develop the skills to set learning objectives
of their own (Trevelyan & Wilson, 2012).

The instructional strategy embedded in the feedback activity is one of the significant factors that influence
engagement with peer feedback (Dunworth & Sanchez, 2016; Mutch, 2003; Van Zundert et al., 2010). For
example, Van Zundert et al. (2010) used observational learning as an instructional strategy to help novice
students become better feedback providers. The results indicated that students were better at giving correct and
explicit feedback based on the evaluation criteria through observational learning. In addition, a learner-centered
feedback design was used to promote students’ engagement with feedback (To, 2022), showing improvement in
students’ responsibility, evaluative judgement, and psychological safety. These studies revealed the value of the
instructional approach in helping students become active and engaged in peer feedback.

2.2. Regulated dialogic feedback and its implementation

Dialogic feedback involves “interactive exchanges in which interpretations are shared, meanings are negotiated,
and expectations are clarified” (Zhu & Carless, 2018, p. 90). In other words, feedback as a dialogic activity
allows two-way interactions among students to help them construct meaning from feedback and create new
knowledge through an ongoing dialogue (Zhu & Carless, 2018). Although most studies indicate that utilizing a
dialogic approach can improve the effectiveness of instructor feedback, implementing this approach in the
context of peer feedback still presents a challenge (Er et al., 2021). An example of this challenge is the difficulty
of engaging students in a productive dialogue with each other, which may require appropriate support
mechanisms.

This study employs a regulated dialogic approach (Er et al., 2021) that considers regulation to be the processes
and strategies used by individuals or groups to manage and control their cognitive, behavioral, and emotional
states to achieve desired outcomes in a particular task. In other words, regulation was implemented as an integral
part of the dialogic peer feedback process to promote students’ cognitive, behavioral, and emotional engagement.
The approach entails three distinct phases to engage students in dialogic feedback: (1) negotiation and
coordination of feedback activities, (2) feedback provision and discussion to support its uptake, and (3)
translation of feedback into task progress. These phases involve different modes of regulation: socially shared regulation of learning (SSRL), co-regulation of learning (CoRL), and self-regulation of learning (SRL), respectively (Hadwin et al., 2017). An online feedback platform was developed to guide students through these
phases. In the following sections, each of these three phases is elaborated, and their implementations are
described.

2.2.1. Phase 1. Negotiation and coordination of feedback activities

The first phase, involving SSRL, aims to establish a mutual agreement on the quality of the peer work and plan
the feedback provision. It focuses on establishing a shared perspective on the peer work, followed by creating a
collective feedback plan (e.g., the focus of the feedback, changes to be suggested, and daily contributions to the
dialogue). During this phase, students have the opportunity to share their opinions about the assessed work’s
quality and negotiate the feedback that will be given in the next phase.

The implementation of this phase in the feedback platform involves several tasks that facilitate the regulation. First,
students conduct a self-assessment by scoring their work using the rubric provided by the course instructor.
Similarly, their peers review the work and assign scores (see Figure 1). Once both assessments are complete, the
feedback platform allows students to compare their assigned scores and discuss any inconsistencies by posting
comments (see Figure 2). They can make necessary adjustments and reassess the work if needed. Finally,
students can take quick notes to plan the feedback they will provide to their peers (see Figure 3).
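
The platform’s source code is not published; purely as an illustration, the following Python sketch shows one way the rubric scores from self- and peer-assessment could be stored and compared so that conflicting criteria are flagged for discussion (cf. Figure 2). All names (RubricAssessment, conflicting_criteria, the criterion labels) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class RubricAssessment:
    """Scores assigned to one piece of work by one assessor (self or peer)."""
    assessor_id: str
    role: str                                              # "self" or "peer"
    scores: dict[str, int] = field(default_factory=dict)   # criterion -> score

def conflicting_criteria(assessments: list[RubricAssessment], tolerance: int = 0) -> list[str]:
    """Return the rubric criteria whose scores differ across assessors by more than
    `tolerance`, i.e., the items a platform could highlight for negotiation."""
    criteria = set().union(*(a.scores.keys() for a in assessments))
    flagged = [c for c in criteria
               if max(a.scores[c] for a in assessments if c in a.scores)
               - min(a.scores[c] for a in assessments if c in a.scores) > tolerance]
    return sorted(flagged)

# Example: a self-assessment and one peer assessment of the same lesson plan
self_a = RubricAssessment("s01", "self", {"goals": 4, "activities": 3, "assessment": 4})
peer_a = RubricAssessment("s02", "peer", {"goals": 4, "activities": 2, "assessment": 3})
print(conflicting_criteria([self_a, peer_a]))   # ['activities', 'assessment']

Flagging only the criteria with diverging scores keeps the negotiation focused on genuine disagreements rather than on the whole rubric.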

Figure 1. Assessing a submitted work (or one’s own work) using a rubric

Note. Students are required to select a score for each criterion on the rubric, as determined by the instructor, to
assess the work assigned. Once a score is assigned to each criterion, students must press the “Complete
Assessment” button.

Figure 2. Comparing the scores assigned for the same work

Note. Students can compare the scores assigned to a student work by all reviewers. Conflicting scores are
highlighted for possible discussion and negotiation.

Figure 3. Taking a quick note

Note. Students can take quick notes while assessing their peers’ work to collaboratively plan feedback provision
with other reviewers.

2.2.2. Phase 2. Feedback provision and discussion to support its uptake

The second phase involves the provision of the planned feedback, followed by a discussion among the students
to support the uptake of feedback. To facilitate this discussion, two-way communication should be enabled to
ensure that students have a clear understanding of their peers’ feedback. The ultimate goal of this phase is for
students to collaboratively develop an action plan (in other words, co-regulate each other) that outlines the
intended revisions and improvements to their work based on an accurate understanding of the feedback they have
received.

The implementation of this phase entails several tasks that promote co-regulation among the students. As Figure
4 shows, students post their feedback on the peer work shared as a Google Doc on the feedback platform. They
can refer to the quick notes they took in the first phase to plan their feedback and update the status of their notes
as complete once they have provided the related feedback (see Figure 5). This task encourages students to assess
how closely their feedback plans align with their actual feedback comments, allowing them to evaluate the
quality of their feedback (Boud & Molloy, 2013). For the discussion of the feedback, students can reply to
feedback comments to ask for clarifications and reach a consensus on how to revise their own work. After the
discussion, students create an action plan that delineates the intended revisions and improvements to their work
based on the feedback they received from their peers (see Figure 6).
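
As with the previous phase, the following is only an illustrative sketch (not the platform’s actual implementation) of how the quick notes and action items could be modeled so that each planned revision stays linked to the feedback comment it responds to; all field names are assumptions.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class QuickNote:
    """A planned feedback point from Phase 1, marked complete once the
    corresponding comment has been posted (cf. Figure 5)."""
    text: str
    completed: bool = False

@dataclass
class ActionItem:
    """A revision action agreed on during the feedback discussion (cf. Figure 6)."""
    name: str
    feedback_comment_id: str           # the peer comment this action responds to
    due: Optional[date] = None         # optional expected completion date
    difficulty: Optional[str] = None   # optional difficulty level, e.g., "easy"/"hard"
    progress: float = 0.0              # 0.0-1.0, updated during Phase 3

def unaddressed_notes(notes: list[QuickNote]) -> list[str]:
    """Quick notes not yet turned into posted feedback, prompting the reviewer to
    check how closely the plan matches the comments actually given."""
    return [n.text for n in notes if not n.completed]

Linking each action item to a specific comment is what allows the later revision phase to trace every change back to the dialogue that motivated it.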

Figure 4. Posting feedback on student work

Note. Using Google Docs, students can view peers’ work and post their feedback as comments directly on the
document.

Figure 5. Viewing quick notes and updating their status during feedback provision

Note. While posting feedback comments, students can refer to the quick notes taken in the previous phase to
remind themselves about the feedback plan. They can view and mark them as complete once corresponding
feedback is posted.

Figure 6. Creating actions based on peer feedback

Note. Students can create specific action items during feedback interactions with peers by naming the action,
selecting the related feedback comment, and optionally setting an expected completion date and difficulty level.
This allows for concrete steps to improve work based on feedback received.

2.2.3. Phase 3. Translation of feedback into task progress

The last phase involves students incorporating the planned revisions, based on the action plan they developed in
the previous phase, to improve their work. Since this phase involves individual effort, SRL plays a critical role in
completing the revisions and finalizing the work as planned. Therefore, it is important to empower students to
monitor and evaluate their progress on the revisions. With this goal in mind, several features have been
incorporated into the feedback platform to facilitate this phase. First, the feedback platform presents the action plan to guide students in making revisions to their work and lets them update the progress on each action as they advance
with the revisions (see Figure 7). Students can also monitor their overall progress through a visualization that
showcases the relationship between their daily progress and the revisions they’ve made (see Figure 8). This
visualization also highlights the planned completion date for each action, providing students with a clear picture
of their progress toward achieving their goals.
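
For illustration only, the daily series behind a monitoring chart such as the one in Figure 8 could be derived from time-stamped revision events as sketched below; the event format is an assumption, not the platform’s actual data model.

from collections import Counter
from datetime import date

def daily_revision_counts(revision_log: list[tuple[str, date]]) -> dict[date, int]:
    """Aggregate revision events (action_id, day) into per-day counts, the kind of
    series a progress line chart would plot against the planned completion dates."""
    counts = Counter(day for _, day in revision_log)
    return dict(sorted(counts.items()))

# Example: three revisions tied to two action items over two days
log = [("a1", date(2023, 4, 3)), ("a2", date(2023, 4, 3)), ("a1", date(2023, 4, 4))]
print(daily_revision_counts(log))   # two revisions on April 3, one on April 4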

Figure 7. Viewing the actions when revising the work

Note. When revising their work, students can access the list of the actions that they created in the preceding
phase. They can view the latest progress on each action and update it according to the revisions performed.

Figure 8. Monitoring the progress on an action

Note. Once a student has clicked on the “Monitoring” button in Figure 7, a line chart is displayed that enables
students to track their progress over time. The chart allows for a daily comparison of the number of revisions
incorporated into their progress, aiding students in assessing their performance.

3. Methods
3.1. Participants

Fifty-one students were recruited from two undergraduate courses titled Curriculum Design and Development
from a university in Asia to participate in this study. Before the research, the participants were provided with a
consent form that outlined the research purposes, their right to withdraw without any consequences, and the
confidentiality of their data. All the students signed the consent form before participating in this study. The study
was approved by the IRB (NO. NCKU HERC-E-111-199-2) to ensure that the study adhered to ethical
guidelines.

The participants were randomly divided into a control group (10 males, 15 females) and an experimental group (8 males, 18 females). Both groups had experienced a peer feedback activity using the online
discussion board before participating in this research. In the control group, the peer feedback activity was
conducted with dialogic feedback without regulation support through the online discussion board of the LMS. In
the experimental group, the peer feedback activity was implemented with regulated dialogic feedback using an
online feedback platform developed by the researchers of the study. Members of the control group were provided
with the same learning materials and had the opportunity to practice the same learning tasks after the study was
completed (Weeks 11 to 18), so that both groups received equal access to the intervention.

3.2. Research design

The research study lasted for 10 weeks. In Week 1, the students were randomly assigned to the control or
experimental group, introduced to peer feedback practice, and asked to sign a consent form acknowledging their
willingness to participate in this study. From Weeks 2 to 10, the experimental group participated in the peer
feedback practice that utilized the regulated dialogic feedback approach, which was facilitated by an online
feedback platform. Meanwhile, the control group took part in the same activity, where the dialogic feedback
approach was facilitated via a discussion forum with no regulation support.

Table 1. Design of learning tasks

Weeks        Control group (Dialogic feedback            Experimental group (Dialogic feedback
             without regulation support)                 with regulation support)
Week 1       Introduction of learning tasks & research purposes
Weeks 2-5    Assign scores to personal and peer work     Assign scores to personal and peer work
             None                                        Discuss the scores
             None                                        Create quick notes to plan feedback
Weeks 6-8    None                                        Review the quick notes for feedback provision
             Provide feedback comments                   Provide feedback comments
             Reply to feedback comments                  Reply to feedback comments
             None                                        Create actions to plan revision
Weeks 9-10   None                                        Review the actions for revisions
             Make revisions                              Make revisions

Table 1 shows the learning tasks that the control and experimental groups completed over the 10 weeks of the study. The students in both groups learned three topics about lesson planning: instructional goals, instructional activities, and assessment. The three topics were taught in parallel and iteratively during Weeks 2, 3, 4, 6, 7, and 9, whereas the control and experimental groups received different instructional support during Weeks 5, 8, and 10. Specifically, in Week 5, the control group submitted individual
work to an online discussion board, while the experimental group submitted individual work to the regulated
feedback platform. After the submissions, the control group students in pairs used a rubric to perform self- and
peer-assessment and posted the scores on the discussion board. The experimental group performed the same
assessment tasks but using the regulated feedback platform. Additionally, the experimental group was asked to
compare the assigned scores, discuss inconsistencies in the scores to reach a consensus, and take quick notes to
plan the feedback that they would provide. In Week 8, both groups provided their feedback and replied to
feedback comments through the discussion forum (control group) or the regulated feedback platform
(experimental group). During this phase, to guide the feedback provision, the experimental group was further
asked to review the quick notes they took earlier. In addition, students were asked to create action items to plan
revisions (see Figure 5 and Figure 6). Finally, in Week 10, the students from both groups revised their work
based on the feedback comments they received.

3.3. Data collection and analysis

The data collected included (a) students’ responses to peer feedback engagement surveys, (b) the logs of student
actions in the regulated feedback platform, and (c) the transcripts of student interviews. Pre- and post-feedback
engagement surveys were administered to the control and experimental groups to examine the differences in
feedback engagement between the groups before and after the study (RQ1). The feedback engagement survey
was adapted from the Engagement Scale (Fredricks et al., 2005), which measures behavioral, emotional, and
cognitive engagement. The Engagement Scale was originally developed to measure student school engagement
but was later adapted to measure student engagement in online or blended learning (e.g., Henrie et al., 2015;
Tseng, 2021). Given its focus on “dialogic feedback,” this study examined learner engagement in three
dimensions (cognitive, behavioral, and emotional). The researchers modified the items of the Engagement Scale
to measure student engagement levels in feedback practice (see Appendix A). For example, the item “I follow
the rules at school” was revised to “I follow the rules of the feedback tasks” for behavioral engagement. The item
“I am interested in the work at school” was revised to “I am interested in providing peer feedback” for emotional
engagement. The item “I study at home even when I don’t have a test” was revised to “I provide peer feedback to
peers even when I do not have to” for cognitive engagement.

Cronbach’s alpha values were computed to assess the reliability of the scale, and the results showed acceptable to good reliability: 0.72 for behavioral engagement, 0.83 for emotional engagement, and 0.84 for cognitive
engagement. The validity of the scale was examined by two experts in peer feedback practices and learning
engagement. The pre- and post-survey data were analyzed using MANOVA to identify any significant
differences between the experimental and control groups in feedback engagement before and after the study. The
MANOVA was used to determine the differences between multiple dependent variables affected by changes in
one or more independent variables. In this study, the dependent variables were the sub-scores of engagement in
the three dimensions (cognitive, behavioral, and emotional). The independent variable was the regulation support
(regulation support vs. no regulation support). A homogeneity test was performed, showing that the data did not violate the assumptions of MANOVA.
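
The analysis scripts are not reported in the paper; a minimal sketch of how the reliability and MANOVA computations could be reproduced in Python with pandas and statsmodels is given below. The file name and column names are placeholders, not the actual dataset.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one sub-scale; columns are the item responses, rows are students."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# df: one row per student, a 'group' column ("control"/"experimental"), and the
# post-survey sub-scores 'cognitive', 'behavioral', 'emotional' (hypothetical names).
df = pd.read_csv("post_engagement.csv")                       # hypothetical file
manova = MANOVA.from_formula("cognitive + behavioral + emotional ~ group", data=df)
print(manova.mv_test())                                       # Wilks' lambda, F, and p per effect
# alpha = cronbach_alpha(item_df)                             # item_df: item-level columns of one sub-scale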

Table 2. Categorization of learning activities in the regulated dialogic feedback

Regulated dialogic feedback tasks            Associated learning activity
Phase 1 (involving SSRL)
  Judge the work at hand                     Visiting the self-assessment page
                                             Visiting the peer-assessment page
  Reach a consensus                          Assigning scores to self-work
                                             Assigning scores to peers’ work
                                             Comparing the scores
  Plan participation in feedback practice    Creating quick notes to plan feedback
Phase 2 (involving CoRL)
  Provide and discuss the feedback           Posting feedback
                                             Viewing and setting quick notes as complete during feedback provision
                                             Posting replies
  Generate plans for revision                Creating action items to plan revision
Phase 3 (involving SRL)
  Make revisions                             Viewing the action items for revision
                                             Visiting the revision page
                                             Making revisions

Extreme case analysis was performed with two students to gain deeper insights into how students were supported
with the regulated dialogic feedback to promote engagement with peer feedback. Extreme case sampling was
used to select the students from the two groups, as Patton (1990) indicated that “more can be learned from
intensively studying extreme or unusual cases than can be learned from statistical depictions of what the average
case is like” (p. 170). By analyzing extreme cases, the study could identify the factors of the regulated dialogic
feedback that contributed to the high and low levels of engagement in feedback practice. Therefore, one student
was selected to represent a large improvement in engagement levels, while the other student showed minimal improvement. In this analysis, the log data of the two students, recorded automatically by the feedback platform, were used. The log data captured students’ learning behaviors during the peer feedback practice. Log
data about students’ peer feedback activities have been extensively utilized to derive valid indicators of student
engagement (Khosravi & Cooper, 2017; Zhang et al., 2022). When these indicators are grounded in a theoretical
framework, their validity as engagement indicators can be enhanced (Fincham et al., 2019). In the current study,
the log data were determined and selected based on the framework of the regulated dialogic feedback grounded
in Hadwin et al.’s (2017) regulation model (see Table 2). Student log data that were irrelevant to the feedback
tasks (such as logins and profile views) were filtered out, and only those that were linked to the regulated
feedback practice (such as reaching a consensus and generating plans for revision) were included in the analysis.
In this way, we could investigate the specific student learning behaviors during the regulated dialogic feedback
practice that contributed to student engagement with peer feedback. In addition, individual semi-structured
interviews were conducted with the two students. The interviews lasted around 20 minutes and were guided by
the following questions: “Can you describe how you completed each phase of feedback practice?”, “Which one
did you spend lots of time and energy to complete? Why?” and “How did each phase of feedback practice
promote your learning engagement?” The interview transcripts were analyzed through an inductive approach that
involved the following steps: (1) organizing and reading through data, (2) coding data, (3) generating themes, (4)
interrelating the themes, and (5) interpreting the themes (Attride-Stirling, 2001).
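
As a sketch of the filtering step described above, the mapping below ties log event types to the phases of Table 2 and counts the remaining task-related actions per student; the event-type names are illustrative, since the platform’s log schema is not published.

import pandas as pd

# Assumed event types, grouped by the phases of the regulated dialogic feedback (Table 2)
EVENT_PHASE = {
    "visit_self_assessment": "Phase 1 (SSRL)",
    "visit_peer_assessment": "Phase 1 (SSRL)",
    "assign_self_scores": "Phase 1 (SSRL)",
    "assign_peer_scores": "Phase 1 (SSRL)",
    "compare_scores": "Phase 1 (SSRL)",
    "create_quick_note": "Phase 1 (SSRL)",
    "post_feedback": "Phase 2 (CoRL)",
    "view_or_complete_quick_note": "Phase 2 (CoRL)",
    "post_reply": "Phase 2 (CoRL)",
    "create_action_item": "Phase 2 (CoRL)",
    "view_action_items": "Phase 3 (SRL)",
    "visit_revision_page": "Phase 3 (SRL)",
    "make_revision": "Phase 3 (SRL)",
}

def phase_counts(log: pd.DataFrame) -> pd.DataFrame:
    """Drop events unrelated to the feedback tasks (logins, profile views, ...) and
    count the remaining actions per student and phase, the raw material for
    Tables 5 and 7 (log columns assumed: 'student_id', 'event')."""
    log = log[log["event"].isin(EVENT_PHASE)].copy()
    log["phase"] = log["event"].map(EVENT_PHASE)
    return log.groupby(["student_id", "phase"]).size().unstack(fill_value=0)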

4. Findings
4.1. Effects of regulated dialogic feedback

The pre-engagement survey data were analyzed using MANOVA to determine whether there were pretreatment
differences between the control and experimental groups. The results of the analysis showed no significant
difference in engagement with peer feedback between the control and experimental groups (Wilks’ lambda = 0.98, F = 0.25, p = .86) before the study. However, the analysis of the post-survey results revealed a significant difference in the peer feedback engagement between the control and experimental groups (Wilks’ lambda = 0.85,
F = 2.85, p = .04). ANOVA was performed to examine the statistical differences in each of the engagement
variables between the two groups. The univariate ANOVA results revealed that the students who experienced the
regulated dialogic feedback demonstrated significantly higher cognitive engagement (F = 5.84, p = .02),
behavioral engagement (F = 6.38, p = .02), and emotional engagement (F = 5.59, p = .02) during the dialogic
feedback practice (see Table 3 and Table 4).
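
A hedged sketch of these univariate follow-up tests, including the partial eta squared values reported in Table 4, is shown below (statsmodels; the column names are placeholders consistent with the earlier sketch).

import statsmodels.api as sm
from statsmodels.formula.api import ols

def univariate_anova(df, dv: str):
    """One-way ANOVA of a single engagement sub-score on group membership, with
    partial eta squared computed from the sums of squares (cf. Table 4)."""
    model = ols(f"{dv} ~ C(group)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    ss_effect = table.loc["C(group)", "sum_sq"]
    ss_error = table.loc["Residual", "sum_sq"]
    table.loc["C(group)", "partial_eta_sq"] = ss_effect / (ss_effect + ss_error)
    return table

# for dv in ("cognitive", "behavioral", "emotional"):
#     print(univariate_anova(df, dv))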

Table 3. Descriptive statistics of students’ engagement with peer feedback practice after the study
Engagement with feedback practice Groups N Mean SD
Behavioral engagement Control group 25 3.56 0.62
Experimental group 26 4.01 0.67
Emotional engagement Control group 25 3.34 0.74
Experimental group 26 3.82 0.79
Cognitive engagement Control group 25 3.56 0.56
Experimental group 26 3.95 0.59
Note. Control group (Dialogic feedback without regulation); Experimental group (Regulated dialogic feedback).

Table 4. Univariate effects for the engagement with peer feedback practice after the study
Learning engagement with feedback practice    SS    df    MS    F    Sig.    Partial η²
Behavioral engagement 2.65 1 2.65 6.38 .02 .12
Emotional engagement 3.48 1 3.48 5.59 .02 .10
Cognitive engagement 1.95 1 1.95 5.84 .02 .11

4.2. Roles of the regulated dialogic feedback for engagement

Twenty-six students in the experimental group practiced the regulated peer feedback. Their actions during the
regulated feedback practice were captured through the platform’s logs to investigate the factors that contributed
to student engagement with peer feedback. Table 5 shows the mean and standard deviation of the student
learning actions in the three phases of the regulated dialogic feedback: Phase 1 (SSRL), Phase 2 (CoRL), and
Phase 3 (SRL). The results show that students demonstrated the highest engagement in Phase 3, followed by
Phase 1 and Phase 2 based on the mean scores. Further discussion of how the three phases promote student
engagement in the dialogic feedback practice is presented through extreme cases below.

Table 5. Descriptive statistics about the student actions performed in the regulated feedback platform
Dialogic phases Number of tasks Frequency Mean SD
Phase 1 (SSRL) 6 1090 181.67 23.72
Phase 2 (CoRL) 4 455 113.75 10.76
Phase 3 (SRL) 3 860 286.66 16.40
Note. Frequency: Total number of student actions performed. Mean: Average number of student actions per task
(i.e., frequency divided by number of tasks). SD: Standard deviation of the number of actions performed by
students.

Two extreme case students were selected to demonstrate how the regulated dialogic feedback promoted student
engagement in terms of cognition, behaviors, and emotions (see Table 6 and Table 7). S-LI represents the student
who made less improvement in cognitive, emotional, and behavioral engagement, while S-MI represents the
student with more improvement in engagement. The interview transcripts were utilized to provide explanations
for high or low levels of engagement observed among the extreme case students.
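
For illustration, extreme case sampling on the engagement gains could be operationalized as follows; the exact selection rule is not specified in the paper, so averaging the three sub-score gains is an assumption.

import pandas as pd

def extreme_cases(pre: pd.DataFrame, post: pd.DataFrame) -> tuple[str, str]:
    """Identify the students with the largest and the smallest overall gains in
    engagement (post minus pre, averaged over the three sub-scores), in the spirit
    of Patton's (1990) extreme case sampling. Column names are illustrative."""
    dims = ["cognitive", "emotional", "behavioral"]
    gains = (post.set_index("student_id")[dims]
             - pre.set_index("student_id")[dims]).mean(axis=1)
    return gains.idxmax(), gains.idxmin()   # (most improvement, least improvement)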

Table 6. S-MI and S-LI learning engagement with peer feedback practice
Pre Post Post-Pre
A student with more improvement (S-MI) Cognitive engagement 3.18 4.55 1.37
Emotional engagement 2.67 4.17 1.50
Behavioral engagement 2.75 4.00 1.25
A student with less improvement (S-LI) Cognitive engagement 3.36 3.91 0.55
Emotional engagement 2.67 3.33 0.66
Behavioral engagement 3.50 4.50 1.00

Table 7. S-MI and S-LI learning behaviors in the SSRL, CoRL, and SRL phases
Associated behaviors S-MI S-LI
Phase 1 (SSRL)
Judge the work at hand Visit the self-assessment page 25 12
Visit the peer assessment page 31 14
Reach a consensus Assign scores to self-work 1 1
Assign scores to peers’ work 2 5
Compare the scores 28 9
Plan participation in feedback practice Create quick notes to plan feedback 5 5
Phase 2 (CoRL)
Provide and discuss the feedback Visit the quick notes to guide feedback provision 27 9
Post feedback 8 8
Post replies 13 0
Generate plans for revision Create action items to plan revisions 1 1
Phase 3 (SRL)
Make revisions View the actions for revision 17 9
Visit the revision page 10 11
Make revisions 12 3

4.2.1. Phase 1 (SSRL)

The purpose of the first phase is for students to establish a mutual understanding regarding the quality of the peer work by visiting the self- and peer-assessment pages, assigning scores to their own and peers’ work, and
comparing the scores received from self-assessment and peer-assessment. Table 7 shows that S-MI visited the
peer-assessment and self-assessment pages and compared the scores more often than S-LI. S-MI considered
visiting the peer assessment page pivotal in achieving this goal, explaining, “I visited the peer assessment page
several times to read the assessment criteria and peer work, as I would like to provide trustworthy and helpful
feedback.” S-LI also acknowledged the peer assessment page, which provided explicit instructions and a rubric,
saying, “This is the first time that I really know how to comment and assess peer work because of the instruction
and especially rubric.” She further added that “I am not a smart student and did not actually understand what to
do with peer feedback. But I enjoy the activity this time. The rubric is helpful. It allows me to understand what
elements...I should focus on while evaluating my work my and peers’.” These results highlight the importance of
understanding the rubric and assessment criteria to students’ cognitive engagement in dialogic peer feedback
practice and their ability to give reliable and constructive feedback to peers.

In addition, comparing scores was the second most frequent activity during the SSRL phase for both S-MI and S-LI. S-MI stated that comparing the scores was interesting and awkward, as she found large differences in the scores: her peers gave much lower scores to her work than the score she had assigned herself. S-MI
stated that “I went back to re-check the peer assessment criteria… and discuss with him.” The results show that
contradicting scores from peers prompted S-MI to re-examine and discuss the peer feedback criteria together
with her classmates. However, S-LI had a much lower frequency of comparing scores than S-MI. When asked if
she discussed the grading criteria with peers, she responded, “Not much. I spent less time comparing the scores,
as my peer gave me a higher score than I expected. I felt fine.” The results indicate that discrepancies in
students’ assessment scores can raise students’ curiosity and motivation to discuss the goals and criteria of
providing peer feedback, which could be an essential factor in promoting S-MI’s engagement with feedback
practice.

4.2.2. Phase 2 (CoRL)

The second phase aims to create opportunities for students to provide and discuss feedback by viewing the notes
for feedback provision, posting feedback, and posting replies. Table 7 shows that S-MI had a higher frequency of
viewing the quick notes in the feedback than S-LI. As S-MI stated, “Notes are useful because I can go back to
them to see if I forgot something I want to say and make sure how I want to say it during the feedback process.
This reduces my anxiety.” In addition, she indicated that “while creating notes to plan feedback, I need to think
how to provide constructive peer feedback by taking peers’ feelings and perspectives.” Therefore, S-MI could
develop empathy and confidence that her feedback was valuable to and critical for peers. The results demonstrate
that for S-MI, this phase of the feedback process fostered her emotional engagement by reducing her anxiety and
increasing her confidence and empathy. Creating and viewing notes was the preparatory work to help her feel
more invested in the feedback process and more connected to the value of her contributions to her peers.

The other difference was found in the task of posting replies to peer feedback. Posting replies was documented
when the students replied to each of the received feedback comments. Table 7 reveals that S-MI posted 13
replies to the peer feedback, and S-LI did not reply to any of the feedback. S-MI appreciated being able to post
replies and regarded it as an opportunity to interact with peers and discuss her work with them. Accordingly, she
said, “I liked talking to peers about my work to clarify the misunderstandings between each other.” These
discussions helped her figure out how to revise her work based on the feedback. The results suggest that allowing
students to discuss feedback with peers may contribute to higher cognitive and behavioral engagement in
understanding the meaning of feedback interactively. The findings highlight the potential benefits of fostering
peer discussion in the feedback process. By actively engaging in discussions with peers, students may gain a
deeper understanding of the feedback received, clarify any misunderstandings, and ultimately make more
informed decisions on how to revise their work. However, S-LI explained that she “spent less time viewing peer
comments and replying to comments, because the comments she received were all positive and very
encouraging.” It was found that S-LI and her feedback provider were close to each other, revealing friendship and face-saving issues in the feedback practice. Students tend to provide positive comments to peers in order not to
hurt their classmates’ feelings, especially if they know each other. This highlights the importance of maintaining
anonymity in peer feedback for a productive dialogic process.

4.2.3. Phase 3 (SRL)

The SRL phase allows students to take control of their learning by viewing the actions for revision, visiting the revision
page, and making revisions. Table 7 reveals that S-MI had a higher frequency of viewing the action plan and
making revisions. This was the first time the two students tried the features of creating and reviewing planned
actions for revising their work. S-MI indicated that “the actions for revision were meaningful for me. I did not
make those action plans alone, but they (action plans) came from the discussion with my peers.” Therefore, she
valued the action plan constructed through the dialogic feedback process and viewed the action plans several
times when making revisions. She also added that “the actions for revision were just like a plan for me to
conceptualize the steps, goals, to improve my work so I can make my revision better.” Following the action
plans, she spent time breaking down the revision process into structured and manageable steps during the
revision process. The results demonstrate that the action plans promoted behavioral engagement in making
revisions based on the feedback received. In contrast, S-LI spent less time viewing the actions for revisions,
stating, “I cannot see the value of the action plans for revision, since I had known what I had to do for my
revision.” A possible reason for S-LI’s perception could be her lack of discussion with her peers. Table 7 shows
that she did not post replies to any of the feedback comments she received. This suggests that peer discussion during the feedback process may be needed for students to see the value of the action plans incorporated in peer feedback activities and to engage with them emotionally.

5. Discussion and conclusion


This study investigated the effects of regulation support on students’ engagement with dialogic feedback by
integrating Hadwin et al.’s (2017) regulation model, which involves different levels of regulation: SSRL, CoRL, and SRL. The first research question examined whether the regulated dialogic feedback leads to higher student engagement with dialogic peer feedback. The findings indicate that students with regulation support
demonstrated significantly higher cognitive, behavioral, and emotional engagement than those who performed
the dialogic feedback without regulation support. In other words, when dialogic feedback is integrated with
regulation support, students are more capable of seeing the value of the dialogic feedback practice, and they spend
more time and exert more cognitive effort in completing feedback tasks.

This study contributes to the literature by addressing the second research question on how regulated dialogic
feedback promotes student engagement with dialogic peer feedback. Grounded in Hadwin et al.’s (2017) model
of regulation, past studies have concluded that CoRL was the essential factor in promoting student engagement
and learning performance through peer interaction, while SSRL was rarely discussed (e.g., Zhan et al., 2022).
This study found that SSRL was another dominant factor in improving student engagement with dialogic
feedback. SSRL originally refers to “transactive exchanges amongst group members” (Panadero, 2017, p. 16). In
the current study, SSRL served as a preparatory stage for students to construct and negotiate shared goals,
strategies, and criteria for providing peer feedback through internal (self-assessment) and external (peer
assessment) resources. The mutual understanding of the shared grading criteria, defined as assessment literacy
(Price et al., 2011), was particularly essential to promote engagement with dialogic feedback, as Cartney (2010)
indicated that students could engage with feedback practice when they understand and reach a consensus on
grading criteria. The development of assessment literacy was supported with SSRL, where feedback providers
and evaluators could negotiate the assessment scores to socially construct the knowledge to interpret and apply
assessment results for feedback provision.

CoRL differs from SSRL in how students collaborate during the dialogic feedback. CoRL entails one student
taking the lead, while SSRL focuses on the joint efforts of students to coordinate the feedback. In the study,
CoRL was prominent when the students led the discussion by posting replies to elicit explanations or clarify
misunderstandings from feedback providers. During the feedback discussion, the students became active
learners, evaluating the feedback and employing it to reflect on their learning, which was evident in their high
behavioral and cognitive engagement with dialogic feedback. The findings correspond to Zheng et al. (2018),
who showed that students who discussed feedback content with peer providers were more engaged in providing
feedback. Additionally, this study found that not every student was actively leading the discussion during the
dialogic feedback. For example, S-LI did not respond to any peer comments because the comments she received
were uniformly positive and did not identify areas of improvement to refine her work. The lack of constructive criticism could be attributed to her peer’s lack of knowledge or a desire to be nice and avoid conflict. The results aligned
with previous studies (Alemdag & Yildirim, 2022; Wu & Schunn, 2023), suggesting that peer review training
and anonymity may help reduce students’ cognitive and psychosocial demands during the peer feedback process.
Future studies are encouraged to integrate these two features into the dialogic feedback practice.

In the SRL phase, the students with higher engagement recognized the value of building an action plan to organize their feedback provision and revisions beforehand. The role of planning, together with goal-setting, has been emphasized by Alemdag and Yildirim (2022) as a means to improve self-assessment and feedback provision. This study
added that planning was also beneficial for students’ uptake of feedback. The act of creating action plans
stimulated the students to make judgments about the received feedback and to monitor their progress in making
revisions based on their action plans. Based on the decision-making and monitoring activities, the students could
develop self-regulatory skills to promote their engagement during the dialogic feedback.

5.1. Implications

The findings of this study suggest several pedagogical implications for creating engaging dialogic feedback
practices. First, regulated dialogic feedback can be impactful in guiding student engagement with feedback. Only
providing students with the technology to enable feedback provision and discussion may not unlock the full
potential of dialogic feedback. Second, an important step in regulated dialogic feedback should be the
preparation phase, where students work to understand the rubric (the instructor’s expectations), assess the
peer work, and exchange ideas with each other to align their perspectives on the quality of the work. The
importance of the preparation phase for peer feedback practice is evident in the students’ statement: “I am not a
smart student and did not actually understand what to do with peer feedback. But I enjoy the activity this time.
The rubric is helpful. It allows me to understand what elements... I should focus on while evaluating my work my
and peers.” This step set the stage for students to continue with fruitful engagement and dialogue in the later
stages of the feedback practice. For example, with a clear understanding of the evaluation criteria, students can
easily agree on the feedback and provide coherent comments that complement each other, thus creating a bigger
impact on student learning.

Moreover, providing mechanisms that encourage and enable students to act upon feedback is essential. Students
usually face problems moving the feedback forward. Giving students opportunities to determine the actions to be
taken for each peer feedback comment can help them realize the power of feedback and lead to impactful
feedback practices, as the student indicated that “the actions for revision were just like a plan for me to
conceptualize the steps, goals, to improve my work so I can make my revision better.” Last, as the students stated
in the interviews, assigning scores and providing comments on peer work might hurt a classmate’s feelings, especially if the students know each other. This finding, aligned with Yu and Hu (2017), shows that the face issue was a major concern in the feedback practice. When receiving critical feedback, people often value and try to save their own and others’ face. To mitigate the effect of the face issue in the peer feedback
practice, an option for anonymous feedback can be considered.

5.2. Limitations and future research

Despite its contributions, this study has several limitations. First, while the extreme case analysis provides
critical insights into the effects of the regulated approach on student engagement, it would be beneficial to
supplement the findings with more qualitative data to better understand the underlying mechanisms. Therefore,
future research should incorporate more qualitative data and analysis to further support and expand upon our
findings. Second, while student learning actions are commonly utilized in the field of learning analytics (Gasevic
et al., 2015), the learning action logs obtained from the regulated feedback platform could have provided more detailed insights if they had been complemented with students’ discussion forum activities. However, due to the
unavailability of logs from the discussion forum, it was not possible to include students’ discussions in this
study. Future studies could benefit from incorporating a more comprehensive analysis of learning traces. Finally,
this study focused on students’ cognitive, emotional, and behavioral engagement and did not investigate social
engagement during the dialogic feedback process. As a follow-up to this research, a more in-depth analysis of
students’ interactions can examine how social engagement may influence students’ interpretations and uptake of
peer feedback. This will provide a better understanding of how students engage with the platform and how to
optimize its design to promote greater engagement.

Acknowledgement
We would like to express our gratitude to the National Science and Technology Council, Taiwan, for its financial support of this project under project numbers MOST 109-2511-H-032-002-MY2 and NSTC 111-2628-H-032-001-MY2. The research of the second author has been fully funded by the European Union’s
Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement 793317.

References
Alemdag, E., & Yildirim, Z. (2022). Effectiveness of online regulation scaffolds on peer feedback provision and uptake: A
mixed methods study. Computers & Education, 188, 104574. https://doi.org/10.1016/j.compedu.2022.104574
Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and
methodological issues of the construct. Psychology in the Schools, 45(5), 369–386.
Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative research, 1(3), 385–405.
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment &
Evaluation in Higher Education, 38(6), 698–712.
Bonnel, W. (2008). Improving feedback to students in online courses. Nursing Education Perspectives, 29(5), 290–294.
Carless, D. (2020). Longitudinal perspectives on students’ experiences of feedback: A need for teacher–student
partnerships. Higher Education Research & Development, 39(3), 425–438.
Carless, D. (2016). Feedback as dialogue. In M. Peters (Ed.), Encyclopedia of Educational Philosophy and Theory (pp. 1–6).
Springer. https://doi.org/10.1007/978-981-287-532-7
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment &
Evaluation in Higher Education, 43(8), 1315–1325.
Carless, D., & Winstone, N. E. (2020). Teacher feedback literacy and its interplay with student feedback literacy. Teaching in
Higher Education, 1–14. https://doi.org/10.1080/13562517.2020.1782372
Cartney, P. (2010). Making changes to assessment methods in social work education: Focusing on process and
outcome. Social Work Education, 29(2), 137–151.

Charteris, J., & Smardon, D. (2013). Second look–second think: A fresh look at video to support dialogic feedback in peer
coaching. Professional Development in Education, 39(2), 168–185.
Cheng, K. H., Liang, J. C., & Tsai, C. C. (2015). Examining the role of feedback messages in undergraduate students’ writing
performance during an online peer assessment activity. The internet and higher education, 25, 78–84.
DiGiovanni, E., & Nagaswami, G. (2001). Online peer review: An alternative to face-to-face? ELT Journal, 55(3), 263–272.
Dunworth, K., & Sanchez, H. S. (2016). Perceptions of quality in staff-student written feedback in higher education: A case
study. Teaching in Higher Education, 21(5), 576–589.
Er, E., Dimitriadis, Y., & Gašević, D. (2021). A Collaborative learning approach to dialogic peer feedback: A theoretical
framework. Assessment & Evaluation in Higher Education, 46(4), 586–600.
Falchikov, N. (2005). Improving assessment through student involvement. Routledge–Falmer.
Fincham, E., Whitelock-Wainwright, A., Kovanović, V., Joksimović, S., van Staalduinen, J. P., & Gašević, D. (2019).
Counting clicks is not enough: Validating a theorized model of engagement in learning analytics. In Proceedings of the 9th
international conference on learning analytics & knowledge (pp. 501–510). https://doi.org/10.1145/3303772.3303775
Filius, R. M., de Kleijn, R. A., Uijl, S. G., Prins, F. J., van Rijen, H. V., & Grobbee, D. E. (2018). Strengthening dialogic peer
feedback aiming for deep learning in SPOCs. Computers & Education, 125, 86–100.
Fredricks, J. A., Blumenfeld, P., Friedel, J., & Paris, A. (2005). School engagement. In What do children need to
flourish? (pp. 305–321). Springer.
Hadwin, A. F., Järvelä, S., & Miller, M. (2017). Self-regulation, co-regulation and shared regulation in collaborative learning
environments. In D. H. Schunk & J. Greene (Eds.), Handbook of self-regulation of learning and performance (2nd ed., pp.
83–106). Routledge.
Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A
review. Computers & Education, 90, 36–53.
Hepplestone, S., Holden, G., Irwin, B., Parkin, H., & Thorpe, L. P. (2011). Using technology to encourage student
engagement with feedback: A literature review. Research in Learning Technology, 19(2), 117–127.
Hsia, L. H., Huang, I., & Hwang, G. J. (2016). Effects of different online peer–feedback approaches on students’ performance
skills, motivation and self–efficacy in a dance course. Computers & Education, 96, 55–71.
Khosravi, H., & Cooper, K. M. L. (2017). Using learning analytics to investigate patterns of performance and
engagement in large classes. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science
Education (pp. 309–314). https://doi.org/10.1145/3017680.3017711
Konert, J., Richter, K., Mehm, F., Göbel, S., Bruder, R., & Steinmetz, R. (2012). PEDALE - A peer education diagnostic and
learning environment. Educational Technology & Society, 15(4), 27–38.
Li, L.-Y., & Huang, W.-L. (2023). Effects of undergraduate student reviewers’ ability on comments provided, reviewing
behavior, and performance in an online video peer assessment activity. Educational Technology & Society, 26(2), 76–93.
Mutch, A. (2003). Exploring the practice of feedback to students. Active learning in higher education, 4(1), 24–38.
Novakovich, J. (2016). Fostering critical thinking and reflection through blog‐mediated peer feedback. Journal of Computer
Assisted Learning, 32(1), 16–30.
Noroozi, O., Hatami, J., Bayat, A., van Ginkel, S., Biemans, H. J., & Mulder, M. (2020). Students’ online argumentative peer
feedback, essay writing, and content learning: Does gender matter? Interactive Learning Environments, 28(6), 698–712.
Oncu, S. (2015). Online peer evaluation for assessing perceived academic engagement in higher education. Eurasia Journal
of Mathematics, Science & Technology Education, 11(3), 535–549.
Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in
Psychology, 8, 1–28. https://doi.org/10.3389/fpsyg.2017.00422
Patton, M. (1990). Qualitative evaluation and research methods. Sage.
Price, M., Handley, K., & Millar, J. (2011). Feedback: Focusing attention on engagement. Studies in higher education, 36(8),
879–896.
Shang, H. F. (2022). Exploring online peer feedback and automated corrective feedback on EFL writing
performance. Interactive Learning Environments, 30(1), 4–16.
Sinclair, H. K., & Cleland, J. A. (2007). Undergraduate medical students: who seeks formative feedback? Medical
Education, 41(6), 580–582.

Soh, O. K., & Hong-Fa, H. (2014). Students’ perceptions towards the use of dialogic feedback in mobile applications for
students’ writing: A qualitative case study. Journal of e-Learning and Knowledge Society, 10(3), 37–49.
Tseng, S. S. (2021). The influence of teacher annotations on student learning engagement and video watching
behaviors. International Journal of Educational Technology in Higher Education, 18, 1–17.
To, J. (2022). Using learner-centred feedback design to promote students’ engagement with feedback. Higher Education
Research & Development, 41(4), 1309–1324.
Trevelyan, R., & Wilson, A. (2012). Using patchwork texts in assessment: Clarifying and categorising choices in their use.
Assessment & Evaluation in Higher Education, 37(4), 487–498.
Yu, S., & Hu, G. (2017). Understanding university students’ peer feedback practices in EFL writing: Insights from a case
study. Assessing Writing, 33, 25–35.
Van Zundert, M., Sluijsmans, D., & Van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and
future directions. Learning and instruction, 20(4), 270–279.
Wasson, B., & Vold, V. (2012). Leveraging new media skills in a peer feedback tool. The Internet and Higher
Education, 15(4), 255–264.
Wegerif, R. (2006). A dialogic understanding of the relationship between CSCL and teaching thinking skills. International
Journal of Computer-Supported Collaborative Learning, 1(1), 143–157.
Wu, Y., & Schunn, C. D. (2023). Assessor writing performance on peer feedback: Exploring the relation between assessor
writing performance, problem identification accuracy, and helpfulness of peer feedback. Journal of Educational
Psychology, 115(1), 118–142.
Zhang, Z. V., & Hyland, K. (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing
Writing, 36, 90–102.
Zhang, J., Huang, Y., & Gao, M. (2022). Video features, engagement, and patterns of collective attention allocation: An
open flow network perspective. Journal of Learning Analytics, 9(1), 32–52.
Zhan, Y., Wan, Z. H., & Sun, D. (2022). Online formative peer feedback in Chinese contexts at the tertiary level: A critical
review on its design, impacts and influencing factors. Computers & Education, 176, 104341.
https://doi.org/10.1016/j.compedu.2021.104341
Zheng, Y., & Yu, S. (2018). Student engagement with teacher written corrective feedback in EFL writing: A case study of
Chinese lower-proficiency students. Assessing Writing, 37, 13–24.
Zheng, L., Cui, P., Li, X., & Huang, R. (2018). Synchronous discussion between assessors and assessees in web-based peer
assessment: Impact on writing performance, feedback quality, meta-cognitive awareness and self-efficacy. Assessment &
Evaluation in Higher Education, 43(3), 500–514.
Zhu, Q., & Carless, D. (2018). Dialogue within peer feedback processes: Clarification and negotiation of meaning. Higher
Education Research & Development, 37(4), 883–897.

Appendix A. The engagement survey
This survey is to help you diagnose your engagement in the peer feedback practice. Please circle the number
after each statement below according to the degree of your agreement/acceptance of the statement. The numbers
represent the following levels of agreement: 5=Strongly agree; 4=Agree; 3=Hard to say; 2=Disagree; 1=Strongly
disagree.

Engagement        Fredricks et al. (2005) original item                 Modified item (each rated 1 2 3 4 5)
Behavioral        I pay attention in class.                             I pay attention to the peer feedback practice.
engagement        When I am in class, I just act as if I am working.    When I am doing peer feedback practice, I just act as if I am working.
                  I follow the rules at school.                         I follow the rules of the feedback tasks.
                  I get in trouble in school.                           I get in trouble in feedback practice.
Emotional         I feel excited by the work at school.                 I feel excited when providing peer feedback.
engagement        I feel happy in school.                               I feel happy when providing peer feedback.
                  I feel bored in school.                               I feel bored when providing peer feedback.
                  I like being in school.                               I like being at providing peer feedback.
                  I am interested in the work at school.                I am interested in providing peer feedback.
                  My classroom is a fun place to be.                    Providing feedback is a fun activity.
Cognitive         I study at home even when I don't have a test.        I provide peer feedback to peers even when I do not have to.
engagement        I try to watch TV shows about things we are           I try to state my comments based on the rubrics to help peers improve their work.
                  doing at school.
                  I check my schoolwork for mistakes.                   I check peer feedback content for the improvement of my work.
                  I read extra books to learn more about things         I provide extra feedback to better improve peer work.
                  we do in school.
                  If I don't know what a word means when I am           If I do not know my feedback content, I do something to figure it out,
                  reading, I do something to figure it out, like        like asking peer for clarification.
                  look it up in the dictionary or ask someone.

