Parul Sharma Unit4 Final
Abstract:
This paper reviews how student engagement can be measured in various
classroom settings. First, we review how often engagement should be
measured during the learning process and how to balance gathering
engagement metrics from students against minimizing distractions to their
learning. Second, we review how to motivate students to leave engagement
feedback. We then review the physical forms that feedback can take and the
benefits and drawbacks of each. Lastly, we review the importance of
collaboration and of sharing engagement data. We discuss the debates over
collaboration on unsupervised quizzes, free-form versus content-anchored
feedback, and whether grading should be used as a form of motivation. This
analysis can inform better judgements on how to gather engagement data in
future learning environments in the manner that is most efficient and
effective for both students and professors.
Introduction
Non-verbal cues such as eye contact, a raised hand, or a simple look of
concern or confusion were once enough to show a professor how engaged a
student was in class. The classroom setting, however, has changed with the
advent of technology, and many of the traditional engagement techniques
used in physical classrooms are now obsolete. With the growth of online
distance learning and of large lecture halls, students can no longer count on
one-on-one attention or on the benefit of a professor being physically
present while they learn.
How Often Should Engagement Be Measured? Limiting distraction from
the learning process while optimizing the data that is analyzed.
When considering engagement tools and deciding on an engagement
measurement technique for a classroom setting, many factors differ and
must be taken into account on a case-by-case basis. One factor is how often
engagement should be measured. The more frequently student engagement
is measured, the more data there is to analyze and to use to pinpoint how
the learning experience can be improved. However, more frequent
engagement measurements could affect the learning process itself. Various
studies have researched the effect of task-switching on the brain and the
effectiveness of various alert modalities for signaling a task switch to the
user.
1. Mental Workload During Task Execution
The overall takeaway from this research paper [1] is that not all tasks
are created equal: tasks vary in the amount of mental workload they
require. Depending on the academic subject, different amounts of cognitive
work such as storing, recalling, and reasoning may be needed. For example,
understanding comments and document text requires language
comprehension. Making edits to a piece of writing requires language
processing. Finding a file within a file system requires the cognitive process
of recall.
This reflects how the effort of comprehending a lecture depends heavily
on its content. For example, providing real-time feedback for a physics class
may require more or less effort than providing real-time feedback for a
philosophy lecture, due to the differing mental work each subject demands.
Regardless of the content, however, all lecture videos contain audio and
visual elements, and because of the audio element, most online lecture
videos demand a fair amount of language-comprehension workload. A
one-way ANOVA with subtask as a factor reveals that certain subtasks, such
as reasoning, induce more mental workload than store, while other subtasks,
such as store and recall, induce similar workload levels. [1]
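To make the analysis concrete, the sketch below shows how a one-way
ANOVA with subtask as the factor can be run with standard statistical
tooling. The subtask names follow [1], but the workload scores are invented
placeholders for illustration, not data from the study.

```python
# Minimal sketch of a one-way ANOVA with subtask as the factor,
# mirroring the style of analysis described in [1]. The workload
# scores are invented placeholders, not data from the study.
from scipy import stats

# Hypothetical mental-workload scores recorded while participants
# performed each cognitive subtask.
store = [2.1, 2.4, 2.0, 2.3, 2.2]
recall = [2.2, 2.5, 2.1, 2.4, 2.3]
reasoning = [3.0, 3.4, 3.1, 3.3, 3.2]

# H0: all subtasks induce the same mean workload.
f_stat, p_value = stats.f_oneway(store, recall, reasoning)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value indicates that at least one subtask differs (here,
# reasoning); pairwise follow-up tests (e.g., Tukey's HSD) would show
# store and recall inducing similar workload levels.
```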
2. Task-Switching Costs for Alert Modalities
While the act of task-switching and its effect on mental workload have
been discussed, the physical form of the task-switching interruption has not.
Interruptions to signal a task switch may come in different modalities.
Free-form feedback offers certain advantages and insights into how
lectures can be improved in the future. In one research study, the muddy
points system originally used in smaller physical classrooms was applied to
online lecture environments. Muddy points, in their original use, are
feedback that each student leaves on an index card after a lecture: teachers
hand out index cards at the end of class, and students are given time to
reflect and to write the muddiest (least clear) point on the card to hand in
as they leave. [6] While this format works for smaller classes, physically
collecting index cards in large lecture halls is not feasible, and, as discussed
in the article, aggregating the responses on the index cards is often a
time-consuming process.
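Digital collection removes much of this aggregation cost. As a hypothetical
illustration (the comments and the naive normalization rule below are
invented, not from the cited study), tallying digitally collected muddy points
takes only a few lines:

```python
# Minimal sketch: tallying digitally collected muddy points, the step
# that is time-consuming with physical index cards. The comments and
# the normalization rule are invented for illustration.
from collections import Counter

comments = [
    "The proof of theorem 2",
    "the proof of Theorem 2",
    "Slide 7 notation",
    "slide 7 notation",
    "The proof of theorem 2!",
]

def normalize(text: str) -> str:
    """Naive normalization: lowercase, keep letters/digits/spaces."""
    return "".join(c for c in text.lower() if c.isalnum() or c.isspace()).strip()

tally = Counter(normalize(c) for c in comments)
for point, count in tally.most_common():
    print(f"{count}x {point}")
```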
The Mudslide system takes this digital approach further by spatially
anchoring each muddy point to a location on a lecture slide. [6] This spatial
anchor for each muddy point, however, affected the process of commenting
on non-spatial aspects of the video.
Students, overall, preferred the spatially anchored feedback over the
free-form feedback. Hence, the conclusion can be made that a free-form
text box provides flexibility and freedom but requires more effort to analyze.
The main drawback of spatially anchored feedback, however, is that it limits
the flexibility to leave feedback on what is said rather than on what is
shown. As noted in the publication, 16% of students described wanting to
point to a moment in time, or to what was being said over a slide, instead of
to the text or images on the slide. This indicates that what is missing from
Mudslide is the ability to indicate confusion related to what is being said in
an online lecture video, rather than what is being shown. In fact, one
student remarked that this was especially problematic when teachers spent
a long time talking over a slide with little information on it. Teachers also
reported that although Mudslide's visual point-and-click interface was
suggestive of spatially oriented feedback, many students nonetheless
provided feedback on non-spatial and/or holistic aspects of the lecture.
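This tension between anchoring to what is shown and anchoring to what is
said suggests that a feedback record could support either. The sketch below
is a hypothetical data model, not part of Mudslide or any cited system: a
muddy point that may carry a spatial anchor (a slide position), a temporal
anchor (a moment in the audio), or neither (free-form).

```python
# Hypothetical data model for muddy-point feedback, illustrating how
# spatial anchors (as in Mudslide) and temporal anchors (the feature
# students asked for) could coexist. Not from the cited work.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MuddyPoint:
    student_id: str
    comment: str
    slide_index: Optional[int] = None  # which slide, if any
    slide_xy: Optional[Tuple[float, float]] = None  # click position on the slide
    video_time_s: Optional[float] = None  # moment in the audio track

    def is_spatial(self) -> bool:
        """True if the confusion points at something shown on a slide."""
        return self.slide_xy is not None

    def is_temporal(self) -> bool:
        """True if the confusion points at something said at a moment."""
        return self.video_time_s is not None

# A free-form point has neither anchor; a Mudslide-style point has a
# slide position; a "what was being said" point has only a timestamp.
freeform = MuddyPoint("s1", "Lost after the midpoint of the lecture.")
spatial = MuddyPoint("s2", "This equation is unclear.", 4, (0.62, 0.38))
temporal = MuddyPoint("s3", "The aside about units confused me.", video_time_s=754.0)
```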
Conclusion
This paper reviewed engagement tools and methods that can be
applied to virtual classrooms and learning environments, emphasizing what
has worked in physical environments and how it can be adapted for virtual
ones. While it seems least promising that wearables will be used to alert
students to take an online quiz or to leave feedback, other research, such as
the Muddy Points system adapted for online lecture videos, shows promise
for gathering engagement data in virtual learning environments.
Acknowledgments:
I would like to acknowledge and thank my Revision Club peers for their
assistance in editing this paper, Professor Akbari for his guidance and helpful
feedback, and my capstone project advisor, Professor Schedlbauer, for
motivating and inspiring me to analyze, research, and provide solutions for
problems that exist in bridging people and technology.
References:
[1] Shamsi T. Iqbal, Piotr D. Adamczyk, Xianjun Sam Zheng, and Brian P. Bailey. 2005. Towards
an index of opportunity: understanding changes in mental workload during task execution. In
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '05).
ACM, New York, NY, USA, 311-320.
[2] Ewald Strasser, Astrid Weiss, Roland Buchner, and Manfred Tscheligi. 2013. Alert in the
cleanroom: testing alerting modalities for a task guiding interface. In CHI '13 Extended
Abstracts on Human Factors in Computing Systems (CHI EA '13). ACM, New York, NY, USA,
661-666. DOI=http://dx.doi.org/10.1145/2468356.2468473
[3] Andreas Groß, Sebastian Voigt, Thomas Janda, and Christoph Meinel. 2010. Technology
based motivation of students to collaborate by digital annotations during real lecture and learning
sessions. In Proceedings of the 38th Annual ACM SIGUCCS Fall Conference: Navigation and
Discovery (SIGUCCS '10). ACM, New York, NY, USA, 61-64.
DOI=http://dx.doi.org/10.1145/1878335.1878351
[4] Amy E. Kulesza. Student Success Indicators Associated With Clicker-Administered Quizzes
in an Honors Introductory Biology Course.
[5] Jonathan D. Kibble. Use of unsupervised online quizzes as a formative assessment in a
medical physiology course: effects of incentives on student participation and performance.
[6] Elena L. Glassman, Juho Kim, Andrés Monroy-Hernández, and Meredith Ringel Morris.
2015. Mudslide: A Spatially Anchored Census of Student Confusion for Online Lecture Videos.
In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
(CHI '15). ACM, New York, NY, USA, 1555-1564.
DOI=http://dx.doi.org/10.1145/2702123.2702304