
Computers & Graphics 112 (2023) 1–12


Special Section on EG2022 Edu Best Papers

Teaching the basics of computer graphics in virtual reality



Birte Heinemann, Sergej Görzen, Ulrik Schroeder
Learning Technologies Research Group, RWTH Aachen University, Germany

Article history:
Received 1 November 2022
Received in revised form 18 February 2023
Accepted 6 March 2023
Available online 11 March 2023

Keywords:
Computer graphics education
Virtual reality
Multimodal learning analytics
Rendering pipeline
Technology-enhanced learning

Abstract: New technology such as virtual reality can help computer graphics education, for example, by providing the opportunity to illustrate challenging 3D procedures. RePiX VR is a virtual reality tool for computer graphics education that focuses on teaching the core ideas of the rendering pipeline. This paper describes the development and two initial evaluations, which aimed to strengthen the usability, review requirements for different stakeholders, and build infrastructure for learning analytics and research. The integration of learning analytics raises the question of appropriate indicators, to be approached through exploratory data analysis. In addition to learning analytics, the evaluation includes quantitative techniques to get insights about usability and didactical feedback. This paper discusses advanced aspects of learning in VR and looks specifically at movement behavior. According to the evaluations, even learners without prior experience can utilize the VR tool to pick up the fundamentals of computer graphics.

© 2023 Elsevier Ltd. All rights reserved.

∗ Corresponding author. E-mail address: heinemann@cs.rwth-aachen.de (B. Heinemann).
https://doi.org/10.1016/j.cag.2023.03.001

1. Introduction

Learning technologies have been evolving for years. In the first half of 2016, the first consumer-grade virtual reality headset marked a milestone for business and science. Recently, this technology has also been entering the field of education and training. Teachers and researchers are discovering this technology's possibilities and challenges for teaching and learning. This general trend in e-learning can also be observed in computer graphics (CG) teaching [1].

The first efforts to technically support CG teaching were made early on [1]. Looking at the process since then, it is noticeable that comparatively many different tools are used, and a large part of these was not explicitly designed for use in education [1]. To date, no standard way has been established regarding what principles are used to teach computer graphics content. On the other hand, many approaches and ideas are showing up in small applications, and we observe more and more web-based solutions, e.g. [2]'s application Rayground. The future can perhaps unite both trends, as WebXR (formerly WebVR) can offer positive aspects from both worlds [3,4], although developments are not yet production-ready [5].

The use of VR in teaching has already been investigated with different questions and overall shows multiple positive effects on the learning process and motivation, e.g. [6–11]. [8] discuss various benefits and provide an overview of the state of the research, as well as the challenges; their overall verdict is favorable. For example, they conclude that VR (and AR) contribute to ‘‘Develop[ing] students' higher order thinking skills encouraging learning by design’’. Furthermore, they summarize positive effects on engagement, self-learning, multi-sensory learning, spatial ability, and others.

A challenge not directly related to the technical part of VR is researching VR learning applications compared to other (traditional) learning situations, see [12]. Systematic processes to integrate VR into teaching and to design learning applications are still at the beginning [13,14]. Learning analytics (LA) is a way to study and quantify learning effectiveness [15], which offers the chance to scale research extensively without requiring additional time from learners (as test subjects). Additionally, VR offers the possibility to collect multimodal data, which can be used to understand, predict, and quantify learning [16]. Apart from learning, other complex constructs such as engagement, collaboration quality, and expertise are also researched using multimodal learning data [17]. Multimodal learning analytics (MMLA) is one approach to systematically analyze and compare the success of different pedagogical strategies and tools in CG education, since this is a gap in previous research [1].

In this extended journal paper (original: [18]), we present the results of the analysis of a tool specifically created for computer science education, teaching the basics of introductory computer graphics — the rendering pipeline. The rendering pipeline is a fundamental part of introductory courses [19]. Using the presented theory-based VR application, learners can interactively experience the different steps of the graphics pipeline and see how images are generated from models. The open-source application offers the possibility to investigate the learning process of CG concepts and learning in virtual reality. Furthermore, it could be used to research influencing factors like motivation, spatial ability, and the effects of feedback. The integration of multimodal learning analytics provides opportunities for researchers, learners, and teachers alike. This contribution extends the theoretical derivation of the development with further arguments, such as the factor of visual–spatial ability, and with a deeper analysis of the collected data, focusing on the motion data.
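To make the kind of motion-data analysis meant here concrete, the sketch below derives one simple movement indicator, the total path length of a learner's tracked position, from logged samples. This is an illustrative Python example with made-up data, not the project's actual analysis code or log format.

```python
import math

def path_length(samples):
    """Total distance travelled over a sequence of (x, y, z) positions."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

# Hypothetical head positions (in metres) sampled while a learner
# walks between two exhibits of the guided tour.
positions = [(0.0, 1.7, 0.0), (0.5, 1.7, 0.0), (0.5, 1.7, 1.0), (1.5, 1.7, 1.0)]
print(path_length(positions))  # 0.5 + 1.0 + 1.0 = 2.5
```

Comparable indicators (time spent per stage, head-rotation counts, fixation durations) can be computed from the same logs and related to learning outcomes in an exploratory analysis.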
This article is divided into eight sections. This introduction, motivation, and overview of the contents are followed by the second section, in which a framing in the current state of research from different perspectives (computer graphics didactics, virtual reality in educational research, and learning analytics) is made. Afterwards, the approach and methodology of the development, which builds on the aforementioned preliminary work and theoretical foundations, is briefly described. In the fourth section, the technical foundations are presented with a focus on the learning analytics process; based on these considerations, the developed application RePiX VR is presented there as well. The fifth section then follows with the evaluation methodology that was used in two studies. The results of the studies are presented in section six and discussed in section seven; additionally, the discussion contains the limitations and recommends future research. The last section summarizes the paper's findings, giving a synthesis of key points.

2. Related work & foundations

First, a summary of learning analytics and the benefits of VR is given in this part, followed by an explanation of the role of computer graphics in computer science curricula. Then, the current state of research on computer graphics education technologies, especially in connection with XR, is presented.

Chatti et al. define Learning Analytics (LA) as ‘‘a TEL [technology-enhanced learning] research area that focuses on the development of methods for analyzing and detecting patterns within data collected from educational settings and leverages those methods to support the learning experience’’ [20]. The integration of LA can provide information about the learning process, which offers interesting opportunities for researchers, teachers, and learners [15,21]. To get a deeper understanding of (complex) processes like learning, researchers gather data on learners' behaviors and learning outcomes to generate insights and improve the learner's experience regarding methodology and didactics. The resulting areas of educational data mining and learning analytics converged in various fields like multimodal learning analytics, acknowledging that a more holistic view is required when technology-enhanced learning emerges beyond traditional desktop setups. Terms like self-regulated learning, adaptive experiences, and learner activation are often discussed in that field and influence the work described here.

VR encourages student engagement with the learning material [22,23], especially when used in conjunction with learning analytics [24]. In VR, game-based learning, simulations, and exergaming are possible. VR could be used for more action-oriented applications, such as teacher training or hard disk updates [10]; however, in this case, VR provides the opportunity to make abstract domain knowledge interactive. VR could make fundamental computer graphics concepts, including the rendering pipeline, more accessible to everyone. Another aspect of connecting VR with computer graphics is how VR provides a unique perspective on three-dimensional learning subjects. Other research showed indicators that spatial ability moderates control and active learning, which both have positive effects on learning outcomes [25]. For this, students were compared between two groups — one with high values in spatial ability characteristics and one with low values in spatial ability. When investigating different factors that have an impact on learning outcomes, e.g. motivation, presence, and others, it could be shown that ‘‘control and active learning’’ – i.e. giving learners control over learning and actively involving them – is influenced by the learner characteristic spatial ability (illustrated in Fig. 1). This could only be found in the 3D variant, whereas in the desktop variant the spatial characteristic has no modifying influence on the mediators control and active learning.

Fig. 1. Relation between learning outcomes and selected factors particularly connected to virtual reality. More factors in original research: [25].

In 2020, virtual reality and three-dimensional graphics were identified as emerging topics in computer curricula [26]. One of the components of the software fundamentals – which is one of six categories, collected by different computing communities – is ‘‘Graphics and Visualization’’ [26]. Basic rendering and fundamental graphics techniques are, for example, referred to as ‘‘affine and coordinate system transformations’’, ‘‘polygonal representations’’, and ‘‘the graphics pipeline’’ [26]. In an examination of 20 curricula, Balreira found that 95% of CG courses cover these topics.

The visual–spatial ability is an example of a significant influencing factor in teaching and learning in virtual reality as well as with computer graphics. Visual–spatial ability (VSA) is defined as ‘‘the ability to comprehend and conceptualize visual representations and spatial relationships in learning and the performance of tasks such as [. . . ] reading maps, navigating mazes, conceptualizing objects in space from different perspectives, and executing various geometric operations’’ [27]. As it relates to the learning objectives in CG courses, VSA may be a determinant of academic achievement. For instance, another empirical study in the fields of mathematics, engineering, and science demonstrates the link between VSA and success [28].

In addition to the link to the learning content, VSA is yet another element important for VR learning. In the context of desktop VR applications, spatial ability has been discussed and identified as a factor influencing the learning results [25]. Given that the learning material was not directly related to VSA, this finding should be appraised in terms of the influence of VSA generally (the task was dissecting the anatomy of a virtual frog). Besides this, research showed that there are signs that the influence of control and active learning on learning outcomes was moderated by spatial ability, while active learning and control themselves had a positive impact on learning outcomes [25]. Furthermore, Dalgarno and Lee have shown that three-dimensional virtual learning environments can support learning tasks that lead to increased spatial knowledge representations [29].

There are still few educational computer graphics applications for virtual reality (VR), augmented reality (AR), and mixed reality (MR), which mark different points on the reality–virtuality continuum [30]. The virtual platform called ‘‘Mental Vision’’ by Peternier et al. [31] fuses virtual reality and computer graphics. To make it simpler for learners to master the material, they use modules and tutorials. They use a CAVE or mobile devices and teach programming. In contrast to this approach, we do not wish to concentrate on

imparting programming knowledge and do not use a CAVE or mobile devices. ‘‘GetiT - Gamified Training Environment for Affine Transformations’’ is another project investigating immersive VR. Within the environment, you can discover affine transformations (ATs). According to evidence from Oberdörfer and Latoschick, learning in VR is preferable to control groups learning with desktop 3D. Regardless of the specific learning material, using VR to teach can motivate students and change them from passive to active students [7]. The recent (but only web-based) application ‘‘Rayground’’, by the Computer Graphics Group of Athens University, is useful for teaching ray tracing paradigms. It intends to instruct students in a WebGL-based programmable graphics pipeline. This is another analogous strategy [2,32].

In summary, from the above examples and the discussions, it can be concluded that VR is suitable for computer graphics education. It supports a constructivist approach, and learning analytics is, alongside traditional research methods in education, a way to evaluate the environments.

3. Didactic conceptualization & development

The systematic and human-centered procedure we used to develop the open-source tool is described in another publication [13]. Summarized, we followed an agile approach with periodic tests, so-called Design-Make-Learn cycles [33], which are embedded in a Design-Based Research (DBR) process [34]. We followed recommendations for best practices, such as [35]. These recommendations aid in creating efficient VR applications that make the most of VR's capabilities while maximizing the potential of constructivist learning theories and situated, active, and creative learning. The learning analytics approach is based on investigating the needs of various stakeholders, namely developers, researchers, students, and teachers, as described in [13]. The intended audience includes both interested high school students and introductory courses at universities.

The theoretical and systematic planning ended in creating a guided tour for the computer graphics basics. Learners are accompanied by a (virtual) host as they explore the various components and steps of a simplified version of the rendering pipeline. Further steps in development were the prioritization of the individual stages and the depth of the content to be learned, which is oriented to the lecture at RWTH Aachen University. A brief overview is depicted in Fig. 2, followed by a detailed explanation in Section 4. The result is a mixture of interactive and non-interactive stages, especially simplifying the Rasterizer step with only four simpler stages: Rasterization, (Local) Lighting, Texturing and Visibility Test.

Fig. 2. Short overview of steps created for RePiX VR (Version Jan. 2022): (1) Application, (2) 3D Geometry, (3) 3D Transformation, (4) Clipping, (5) Rasterization, (6) (Local) Lighting, (7) Texturing, (8) Visibility Test and (9) Image.

4. Technical implementation & learning analytics

The reference model for learning analytics by Chatti et al. [20] served as the foundation for the planning of the quantitative evaluation and feedback process for learners and educators, since it offers an orientation framework for developing from various viewpoints and aids in allocating weights to various factors. The dimensions What? (Data and Environments), How? (Methods), Why? (Objectives), and Who? (Stakeholders) help to address challenges and plan sustainable learning analytics structures.

In the dimension of What?, the decision for the ‘‘Experience API’’ (xAPI) as data format was made, with ‘‘Learning Locker’’ as a compatible Learning Record Store for the data collection [36]. Learning Locker is an open-source Learning Record Store, a data warehouse application that can receive, store, and return xAPI statements [37] via HTTP requests. The xAPI specification permits defining user activities in human-readable statements formatted as a triplet: {actor, verb (action), object (activity)}, such as ‘‘Emma fixated pointable object’’, which, together with the stored extension, comprises the details of what Emma was engaged in, like ‘‘Emma spent 500 ms examining the tiger's mesh’’. Eye tracking was one of the learning analytics modalities we used. Along with tracking eye movements, we monitored controller actions, the user's position, and head movements (nodding and shaking), in order to get a diverse multimodal picture of our learners and their approach.

The characteristic of xAPI data, which is carefully specified and allows for the unique identification of verbs and objects, is essential to meet the FAIR principles for scientific data; see [38] for more information. In a nutshell, the FAIR principles help to comprehensively characterize the data and to optimize its reuse and exchange by following principles that make the data Findable, Accessible, Interoperable, and Reusable. The goal of this FAIRification is to allow us to combine data and metadata to get a deeper understanding of learners, learning, and technology.

Fig. 3. The first step is a tutorial explaining the controller, here teleportation.

Using xAPI is also interesting for comparability to a desktop version (using mouse and keyboard instead of VR) or a future extension with other learning opportunities, e.g. learning videos or quizzes in Moodle, collecting data across platforms in the same format, as suggested in [39].
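To make the statement format concrete, the sketch below assembles such a fixation statement in Python. The verb and activity IDs and the extension URI are invented placeholders (the xAPI specification requires URIs in these positions), not the project's actual vocabulary.

```python
import json

# Hedged example of an xAPI statement: actor-verb-object triplet plus a
# result extension carrying the fixation duration. All IDs are placeholders.
statement = {
    "actor": {"name": "Emma", "mbox": "mailto:emma@example.org"},
    "verb": {
        "id": "https://example.org/xapi/verbs/fixated",
        "display": {"en-US": "fixated"},
    },
    "object": {
        "id": "https://example.org/xapi/activities/tiger-mesh",
        "definition": {"name": {"en-US": "pointable object: tiger's mesh"}},
    },
    "result": {
        "extensions": {
            "https://example.org/xapi/extensions/fixation-duration-ms": 500
        }
    },
}

# Serialized, this would form the body of an HTTP POST to the Learning
# Record Store's statements endpoint.
payload = json.dumps(statement)
```

Because every verb and object is identified by a URI, statements from the VR and desktop variants (or from Moodle) can be stored side by side and queried uniformly.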
A web application, described in more detail in [40], was built that controls connections to the Learning Record Store, Unity, and settings. It was created for researchers and interested educators to configure the learning experience and the learning analytics, for example, to choose which objects should trigger a statement to be written when the user looks at them. This is important so as not to collect more data than necessary (privacy by design — collecting no data is the default setting). Web socket connections link instances of our VR learning application to the web application (ReCoPa — Researcher Companion Panel) and supply their trackable scene objects. The user can choose pertinent scene elements for tracking and start the calibration for the (optional) eye-tracking with different privacy options. Tracking can be started and stopped with the panel.

Initially created for the HTC Vive Pro Eye, the learning application described in this paper was developed using Unity 2019.4.8f1 in conjunction with Microsoft Visual Studio 2019 for C# scripts and the SteamVR plugin. An Intel Core i7-8750H processor running at 2.20 GHz, 64 GB of RAM, an NVIDIA GeForce RTX 2070 graphics card, and Windows 10 Education 64-bit were used in the studies. The LA tools, the Researcher Companion Panel, and its server were developed using JetBrains WebStorm 2020.2, NodeJS v12.13.1, and Nuxt.js v2.14.4.

RePiX VR: The guided tour

RePiX VR – short for Rendering Pipeline eXperience in VR – is a guided tour that takes a student through each step of a simplified rendering process in a massive neutral setting while being accompanied by an avatar as host (brief demo: https://youtu.be/U77hR7udyak). The host is a robot, apparent in some figures: Subfigures (2)–(9) of Figs. 2 and 3. The tour starts with a brief lesson, as shown in Fig. 3, in which the learner is introduced to the buttons, the teleport function, the locomotion technique, and the laser pointer for selecting scene items. Then, the nine steps of the condensed rendering pipeline are given, as previewed in Fig. 2. We describe here the version from January 2022. Innovations not yet present in the evaluations are marked with this symbol: ✬.

1. Application: A demonstration of the computations of the translation, rotation, and scale matrix of a 3D object. For example, a red cube transforms step by step (translation, rotation, scale), resulting in its TRS matrix (Fig. 2.1). ✬ After this demonstration, the learner can practice this knowledge by combining the matrices of a small solar system. The matrices are colored (e.g., the rotation matrix is green) to identify their responsibility.

2. 3D Geometry: An introduction to the basics of a 3D mesh is given. A Stanford Bunny mesh (without surfaces) rotates in front of the learner. Afterward, learners must build their first triangle by placing its colored vertices (visualized as small spheres) in the VR space using the VR controller and a gradient color picker stuck on its trackpad. Finally, the three vertices have to be connected to a triangle and can be extended to a more complex object (Fig. 2.2). Thus, the learners can observe the color interpolation of a triangle and see its index by looking at a text label stuck above each vertex.

3. 3D Transformation: In this step, the frustum-based projection is introduced. The learners can adjust its influencing factors (far plane, near plane, field of view, and aspect ratio) by interacting with the frustum via laser-pointer casting from the VR controller. A semi-transparent, blue pyramid-shaped mesh represents the frustum. For demonstration, a few 3D objects are placed inside this pyramid to represent a 3D scene. The learners can move the far and near plane of the frustum mesh along its z-axis by grabbing/selecting them with the laser-pointer (Fig. 2.3). The field of view and aspect ratio of the frustum can be adjusted with 3D sliders. Those also define the width and height of the pyramid-shaped mesh (the frustum).

4. Clipping: In this step, the normalized device coordinates (NDC) concept is explained using a unit-sized semi-transparent cube combined with transformed mesh copies of the 3D objects placed inside the frustum mesh from the previous stage. Mesh primitives outside this unit-sized cube are invisible because they are clipped (discarded). However, to demonstrate them visually, they are tinted cyan by enabling this preview via pressing a specific button on the user's VR controller (Fig. 2.4). Like in a real-world rendering pipeline, the mesh transformation of the 3D scene depends on the frustum and its values. Thus, in this stage, learners can observe the results of the frustum-based transformation by interacting with the pyramid-shaped mesh again. Clipped fragments are discarded (or tinted) in real time. This way, the learners can observe which fragments are discarded.

5. Rasterization: A textual explanation combined with a 2D illustration of the rasterization principle introduces the learners to the rasterization stage. Algorithms, like the Digital Differential Analyzer algorithm, are only mentioned but not explained in more detail. The illustration shows the detection of fragments (potential pixels) of a 3D scene (Fig. 2.5). ✬ An interactive version of this stage with more details is currently under development.

6. (Local) Lighting: The idea of local lighting is explained in this stage. The directional light source (represented by the sun) is turned on and off to demonstrate this idea. Then, the learners are invited to seek the light source in the sky. ✬ Further, the learners are invited to place flashlights and light bulbs with different colors and beam sizes to discover

the influence of different light sources (Fig. 2.6). With the help of a user interface, the learners can choose between Phong and Blinn shading, as well as vertex and fragment shaders. In addition, they can toggle the ambient, diffuse, and specular modes.

7. Texturing: In the texturing stage, a ‘‘naked’’ (untextured) copy of the tour guide is placed next to it. The corresponding texture is displayed on a UV coordinate system on a board on the other side, next to the tour guide (Fig. 2.7). The learners can go closer to the UV map and see an overlaid 2D mesh grid (UV map) on the texture. ✬ The newest version allows the learner to paint on the UV map or on the ‘‘naked’’ version of the robot (like spray cans) to explore the connection between mesh and texture.

8. Visibility Test: To explain the concept of the rendering pipeline's z-buffer, textual explanations are displayed. These aim to transfer the principle of deciding between visible fragments (pixels on screen) and non-visible (discarded) fragments. The final chosen fragments are illustrated by displaying the resulting image. It is located where the NDC representation stood in the stage before (Fig. 2.8).

9. Image: In the final stage, the learners can interact with the frustum-based mesh again. This time, they can see not only the result of the clipping stage, but also the final rendered image displayed in front of the NDC (represented again by the unit-sized cube) (Fig. 2.9). By using the laser-pointer on this image plane, the opacity of the image plane gets reduced, so that the learners can observe the transformed mesh inside the unit-sized cube. This way, the learners can directly compare the resulting image with its transformed 3D geometry laid out behind it. In addition, clipped primitives can be shown by tinting them cyan. This way, learners can reflect directly on how they fit with the final image. The different variants of possible user perspectives can be seen in Fig. 4.

Fig. 4. The user can trigger different states of the clipping mechanism for the final image. Changes from left to right: tint clipped fragments, make the image transparent, hide clipped fragments, show the final image.

Generally speaking, each step includes explanations and animations, and some steps include tasks and VR interactions. A panel with an overview of each step is set up in front of the tour host for orientation. Each stage features a button that can be used to launch the associated learning activity. Only the previously visited stages or the next one on the tour schedule are available to the learners. By doing this, we can guarantee linear navigation while still allowing free study of the currently available stages.

Especially the steps with interactive tasks should invite the learner to explore and to try things; their composition refers to constructivist learning theories. The student is given explanations of the potential interactions and brief interaction tasks to help them become accustomed to them. After this, the tour guide extends an invitation to explore the stage. Finally, the students can practice the things they have learned.

Additional control panels are used for a task if needed, such as at the 3D Geometry stage. This panel lets the controller buttons switch between functions like ‘‘put vertices’’, ‘‘construct triangles’’ (connecting vertices), ‘‘move vertices’’, and ‘‘remove vertices’’. Another small task, which invites the user to interact and experiment with the tool, is the modification of the frustum values taught in the 3D Transformation stage. The changes are directly shown as a change in the 2D view. The shape of the frustum depends on the values of the near plane, far plane, aspect ratio, and field of view. The user can also use these functions in the clipping and the final stage and choose between different styles, which visualize the results of this stage.

5. Evaluation methodology

The application has gone through different design-based research (DBR) cycles; the process is described in [13]. We will discuss the results of two evaluations at different stages of development. Smaller experiments were carried out while the application was being developed. Two larger-scale experiments were done in 2020 and 2021 to enhance the usability, the learning process, and the learning analytics components. The enhanced interaction in the second evaluation could be important for the interpretation. Another vital piece of information concerns the circumstances under which the data collection took place, because careful safety procedures were needed due to the unique circumstances in the academic years of 2020 and 2021. In addition to the novelty effect of VR, this face-to-face session was a great exception to other learning opportunities at that time, as our regulations did not include presence meetings. These special circumstances may have had an impact on students' perceptions and feelings, as reported by Knight et al., who found, for example, missing social experiences caused by the pandemic [41].

In total, we evaluated 32 persons; in the first evaluation, eight people participated, all of whom tested the VR variant. In the second evaluation, we varied between a desktop version (mouse and keyboard instead of VR) with 14 people and the VR version with ten people.

The length of each session varied from 35 to 75 min. The ages of participants ranged from 25 to 40. In all the tests, the first assignment was to (nearly) independently examine the rendering pipeline as part of the guided tour. When people had concerns about how the controllers worked, we offered assistance. The participants were told to share their ideas while they were exploring. A free interview was conducted after the instruction, which was delivered using the think-aloud technique [42,43]. As described in Section 4, we collected eye fixations on various objects, all controller actions, the user's position, and nodding and shaking of the head, which combine to a high-resolution data collection compared to other educational research and provide a high range of potential targets for exploratory analysis in learning analytics, similar to the procedure in [44].

This inductive approach aims to derive models and theories underlying the data. Moreover, an inductive approach allows for aggregating the raw data. The goals of this approach correspond to those of the ‘‘general inductive approach’’, whose approaches

can be applied to the mix of qualitative and quantitative data we collected [45]. Fig. 5 displays a photo obtained during the first evaluation.

Fig. 5. A test person creates an object with vertices and triangles in the Geometry step. The task was to recreate the small polygon behind the self-made turquoise object.

5.1. First evaluation

The initial evaluation in 2020 was intended to uncover technical issues, test usability, and validate the content's difficulty. Four subjects with little prior knowledge were explicitly tested for usability and interaction. These four persons also had no experience with VR and no prior knowledge of computer graphics; generally, their knowledge of computer science was at an early undergraduate level. To ease the pressure due to the limited pre-knowledge, we instructed the people that they did not have to go through all the details. We asked not only the novices but also four experts to evaluate RePiX VR, to get information concerning the content. Additional content feedback – not part of this evaluation – was obtained during development in collaboration with computer graphics experts. The four expert participants had experience in user research, learning, and VR; their task was to provide feedback from different perspectives. They provided feedback on learning materials, usability, lessons learned, and learning and user experience.

Overall, four novices and four experts with different perspectives gave feedback, checked development progress, and offered hints for the next iteration. Two experts evaluated the first version of the Researcher Companion Panel [40], as it should only be used by someone with experience in learning analytics and

data in the lab, on the other hand, were collected in a standardized manner. After a welcome, methodological introduction, and clarification about the experiment, the students were let into the VR application and could learn there without a time limit. They were instructed to express their thoughts, and we helped with questions about how to use the application. Without being asked, we only helped during the tutorial. We collected the set of log and sensor data without gaze data, which was not possible due to technical issues. After the experience, students were asked about their impressions and about specific observations, if any. They also had time to reflect on usability. Content-related questions were discussed after the study. Feedback from students who used the desktop version was obtained in a lecture exercise. An investigator guide can be found in the supplementary material.

6. Results

We conducted an exploratory and descriptive analysis of the logged interaction, gaze, and motion data without considering statistical reliability due to the small number of participants. Feedback from the interviews and the evaluation of the think-aloud methodology provide hints for the ongoing advancements in this early development phase.

First, a few general results will be presented. Subsequently, the two test runs will be explained individually. In both studies, consistently favorable user impressions showed the potential of a guided tour, especially for learners unfamiliar with computer
graphics. Also, persons without a computer science background
prospective research design.
or ‘‘young’’ students achieved basic learning objectives and could
describe the overall process (understand) and could illustrate
5.2. Second evaluation (apply) steps of the pipeline, especially the interactive tasks. For
the six taxonomy levels (remember, understand, apply, analyze,
In the subsequent more considerable evaluation, we asked evaluate and create) see [46].
students enrolled in a computer science course at the RWTH All test persons, independent of prior knowledge, agreed that
Aachen University to test the environment. Everyone took part the virtual tour host was helpful and the application’s visu-
in the master’s course in learning technologies. Due to pandemic alizations are quite detailed. In both versions, we used small
restrictions, we had to develop a second version of the pipeline videos showing interactions from a first-person perspective to
quickly, which can be used from a desktop computer at home. help learners with the interactions and to demonstrate usage.
This way, it was possible for all of the students to participate This feature is based on other findings and suggestions, e.g. by
and collect data for the lecture projects. In order to compare the Sutcliffe and Kaur [47]. They found a usability issue with ap-
interactions between the two platforms, we also gathered inter- proaching and orienting the objects. They proposed that hurdles
action data in the desktop version. Four of the ten VR students in interactions could be avoided with small in-time video demon-
previously learned about computer graphics or virtual reality. strations. We had not planned to systematically investigate this
Therefore we will specifically refer to them in the results section. feature but noticed mixed feedback for the animation videos.
The students assigned themselves to the groups (VR and desk- Some learners understood the function immediately, while others
top). Particularly with the desktop version, we were unable to only understood the videos because they asked about it while
directly control usage, as these were conducted unsupervised. The being in VR or noticing reflectively in the follow-up interviews.
6
B. Heinemann, S. Görzen and U. Schroeder Computers & Graphics 112 (2023) 1–12

Lastly, before going into details of both evaluations, a finding


from an early pilot trial is presented, that had an impact on
the evaluations presented here because it formed the basis for
some decisions concerning usability. Auto-play for instructions
and texts was a feature in the very first development iteration but
was found to be not user-friendly. The dialogue system had to be
adjusted. However, differing prior knowledge makes it difficult to
establish a standard pace for the dialogue with the robot. Users
want to manage the speed individually. During these early tests,
we found out, that we need to check usability heuristics, like
the one from Nielsen [48], for the design of the user interfaces, Fig. 6. Comparison of the nodding and shaking behavior of VR users.
because with the earliest prototypes, we could interpret some
users’ expressions as an overload. They were unsure about the
‘‘visibility of system status’’. As a result, it was decided to display in terms of the use of controllers. A graphic and an example of
the panel with the current step permanently and to use a bar to interaction with the frustum showing three typical users from
visualize the progress in each step of the rendering pipeline. different groups is shown in [18].
Besides the foundations for usability in VR, Sutcliffe and Kaur Lastly, five (out of eight) learners remained in the application
proposed walkthrough questions to evaluate VR user interfaces, after accomplishing all of the tasks to experiment with the tool
e.g. using questions like, ‘‘Can the user form or remember the and to observe the scene from different positions. This raised the
task goal?’’ [47]. Moreover, Sutcliffe et al. produced HCI design question about the movement for the second evaluation.
heuristics for evaluating VR applications [49]. The questions from
the proposed walk-through could almost without exception be Results of the second evaluation
answered with yes, e.g. by the beforehand tutorial and explicit We collected about 147.000 statements from 24 persons in the
instructions of the robot. Only the question: ‘‘Can the user decide Learning Record Store. The evaluations demonstrate that VR users
what to do next?’’ was not quite clear for the very first interac- spend more time overall using the application (29 min on aver-
tion with the panel (change to the second step of the pipeline), age). Compared to VR users, desktop users spent nine minutes
therefore attention was paid to this in the evaluation. less and had less variation in specific application steps. VR users
The feedback we received from the users is consistent with nod substantially more often than they shake their heads (see
findings from Wang et al. [50]: ‘‘77% of usability feedbacks can Fig. 6), according to a preliminary analysis of head movements.
be mapped to Nielsen’s heuristics’’. Wang et al. mapped dis- Additional visualizations and calculations can be found in the
cussions and reviews to HCI design heuristics. Besides the find- supplementary material (third link).
ings for Nielsen’s heuristics, only few themes could be linked In line with expectations, the data also reveals that depending
to other heuristic evaluations, presumed because of the goals of on the level of interactivity, the stages are used for varying
the concrete VR application. We could not create a meaningful amounts of time. However, the data also shows that VR learn-
direct mapping between the statements of the users and other ers frequently engage with tasks (specifically three-dimensional
heuristics either. Still, heuristic 11 (‘‘Clear turn-taking, between ones). Therefore, VR users spent six times as long in the Geometry
user and system initiative’’.) is interesting for our approach to stage compared to desktop users. In Application is the animation
designing a guided tour and will be discussed in the relevant of matrices and the visualization of a red cube manipulated by
section. the matrices (as already shown in Fig. 2.1): VR users have been
In conclusion, the pilot studies showed that usability is crucial looking at these three times, following how the values change
for VR learning applications and heuristics like Nielsen’s could and the cube is adjusted. There is also a longer time spent in
help to design user-friendly applications right from the beginning, the final stage, where VR users check the final 2D image from
which is in line with Wang et al. [50]. different angles and re-perform the familiar interactions from
the previous steps. This is especially remarkable regarding how
Results of the first evaluation little new information is provided in the final stage (Image stage:
An average of 4.500 xAPI statements were gathered for each Fig. 2.9).
test learner during a test session. Even though this data is insuf- There are only two steps in which the desktop users spent
ficient to make predictions about the learning behavior and the more time than their fellow students in the VR group: Tutorial
fit of material and content, we were nevertheless able to identify and Rasterization. The tutorial took desktop users much longer.
trends and test some basic statistical techniques, e.g. distribution From the quantitative data, we can also see that users do not have
of the different types of the xAPI statements. The results could significant differences in movement, with the exception that the
be seen in [18] and the supplementary material. For example, the transformation step has more user movement than other steps.
triangle construction appears to be the most engaging interaction. Learners quickly discovered they could look at the scene from
Besides vertices and triangles all other objects were selected at every angle, which was done by most VR learners (at least in the
least once. final step). Figs. 7 and 8 show heatmaps of the movement and
We saw that complex interactions and visualizations, like also teleportation behavior of the users subject 13 and subject 22.
the frustum transformation, are the most exciting and frequently These visualizations show the top view of the virtual learning
used, which is in line with the vocal expressions, while the learn- environment. It also contains the objects visible in the largest
ers used the application and tried to verbalize their thoughts. The number of steps, the unit-sized cube, and the frustum with the
more interactive the content, the more engaging it was; the users example objects, which can be seen in Fig. 9. As these visu-
stayed longer beyond the completion of the intended task, most alizations show the learner’s overall movement progress, some
visible at the geometry stage. common spots from the tutorial guidance (as already shown in
With regard to the two specific groups, the test revealed Fig. 3) can be identified on the right of the picture. Considering
that prior knowledge is essential. Both, VR experience level and these, the following differences between these two prototypical
background had an impact on the user’s movement and usage. examples can be seen: Subject 13 has moved more than subject
The novices did not move that much and were not experimental 22. Subject 13 not only shows more different spots but also a
7
B. Heinemann, S. Görzen and U. Schroeder Computers & Graphics 112 (2023) 1–12

Fig. 9. The final result of the rendering pipeline (Image stage). On the left is the
interactable frustum, in the middle is the tour guide Kyle behind the navigation
menu, and on the right is the unit-sized cube representing the NDC. In front of
it, the resulting image and the clipped (tinted cyan) fragments of the Stanford
bunny.

Fig. 7. Heatmap of the teleportation and movement behavior of subject 13.


One example is the follow-up of the already explained usabil-
ity issues found with the ‘‘Visibility of system status’’ heuristic:
While the persistent display of the application’s steps was intu-
itively understood, a few learners indicated that they overlooked
the progress bar.
Details and a resulting issue list can be traced in Gitlab, see
supplementary material. After exploring the content and stage
animations the learners offered different ideas for further de-
velopment. In particular, the four students who already knew
the rendering pipeline provided feedback. No urgent need for
changes was identified, instead, enhancements were suggested.
The overall expressed desire for interactive tasks for the not-yet-
interactive stages will be addressed in Section 7.2.
Mixed reviews were given by the fourteen persons using the
desktop version. While specific interactions are simpler to use,
others make the benefits of the VR application evident. For in-
stance, because of the difficulty of placing triangles in the 3D
space, desktop users generally constructed less in the 3D Geome-
try stage (four vs. ten triangles).
Finally, we made an unintended observation at the end of
the summer semester of 2021. As already reported in [18], the
user tests were at the beginning of the semester. The exam took
Fig. 8. Heatmap of the teleportation and movement behavior of subject 22.
place in fall. The study was not part of the computer graphics
introduction, many students were still able to recall informa-
tion from the application for the oral exam. They used it to
longer stay, which is indicated by the darker color. Subject 13 explain general concepts of learning technologies, which suggests
used the teleporting function 19 times; subject 22 used it eleven persistent positive learning outcomes even after a longer period.
times. Values range from four to 23 teleports with a median of 19.
The connected spots show that there was also a major physical 7. Discussion
movement (which could also be very small teleports if only the
visualization is considered, but could be excluded based on the This section is divided into three parts, the first discusses the
data). Subject 13 also teleported inside the frustum and observed results from the previous section and some general VR in educa-
the scene from different positions. Subject 22 has moved less; tion topics. The second one deals with the evaluations’ findings
only the spot on the start position has long residence time. that have already been included in the newest versions of RePiX
The visualizations from other learners can be explored in the VR. Then third, additional adjustments, open research questions,
supplementary material. The data is openly accessible and it is and limitations of this work are explained and discussed.
also possible to filter by individual stages.
The Think-Aloud protocol revealed that learners with visual 7.1. Interpretation of the results
impairments might find it challenging to read the text panels,
two persons have directly or subsequently remarked on this issue. Different sorts of feedback and analytics data were gathered.
In addition, they stated that reading may be challenging due to The learners had varying perceptions of the learning material and
the robot’s modest motions behind the semi-transparent panel. content. The learning analytics data shows indicators of different
Usability notes were largely collected through the Think-Aloud types of users, not only interacting with objects but also moving
protocol to create a mapping like done by Wang et al. [50]. They around. The different learner movement profiles could be related
mainly concern the heuristics ‘‘Visibility of system status’’, ‘‘User to the cognitive load of the novel teaching environment and/or
control and freedom’’ and ‘‘Recognition rather than recall’’ [48]. to learning types, or personal characteristics like spatial ability.
8
B. Heinemann, S. Görzen and U. Schroeder Computers & Graphics 112 (2023) 1–12

However, for statistical evidence, we collected too few data sets In this study, we had no learners complaining of symptoms
in these evaluations. of cybersickness [54]. Since learners are familiar with this or
Interpreting the collected gestures is challenging. Under some similar concepts, it was addressed in the introduction before the
circumstances, such as after the learners read the text displayed VR experience and interviews. Sickness is a typical limitation
on a panel or if they completed a task, we may interpret nodding and drawback discussed in the field of education and virtual
as an indication of understanding. Nods/shakes are defined by reality [10]. Research from multiple directions aims to minimize
the amplitude and the number of detected movements. Personal cybersickness; a recent systematic survey aggregated guidelines
calibration could optimize the detection. Furthermore, the inter- that could help to design VR applications [54]. For example, the
pretation of the data is difficult, because these gestures can also locomotion technique in RePiX VR (walking and teleportation)
have different meanings and this can lead to different execu- was selected to reduce movement-induced sickness. In contrast,
tion [51], which is currently not mapped. Another difficulty is that movements that do not match the sensory expectations should
some stages lead to shaking the head repeatedly from right to be avoided [54].
left, such as stage Application. Because the learners could observe
how the values of matrices change, and how the TRS matrix is 7.2. Already processed feedback
calculated, the shaking of the users’ heads as a ’failure to under-
stand’ metric is currently not a reliable indicator. Nevertheless, Feedback is presented with the consequent implications,
there is a noticeable discrepancy between nodding and shaking which were part of the DBR cycles that this research is based on.
that could be attributed to the interaction with the robot and the A brief overview of the new features of the stages has already
overall feeling of understanding, especially in combination with been shown in Section 4.
the personal statements collected in the qualitative data. The biggest request from the learners was to make all steps in-
The last stage is intended to connect the knowledge of all teractive. Current versions of the application provide more inter-
stages. This seems to invite users to experiment and perform their active tasks, e.g., experimenting with light bulbs and spotlights to
tasks, such as reshaping the frustum to keep the bunny out of create shadow, drawing on the UV map or the texture-less robot
the visible area and to test the result of cutting it up. From a to observe the connection, building a solar system to discover the
didactic perspective, it is interesting to see, that learners perform mathematics, and investigating rasterization to experience the
repetitions of previously taught concepts, such as interactions conversion from vectors to a raster image.
with the frustum. The experts’ wish to broaden the 3D Geometry and Transforma-
An important note from the qualitative interviews concerns tion steps and provide students with more information about the
the readability of the texts on the panel. We are actively re- computation of the TRS matrix was integrated with further itera-
working the scene to take this requirement into account. One tions. The desire to experiment with matrices corroborates with
additional person reported that reading was taxing, and in keep- Oberdörfer et al. [55] and Huang et al. [35]. Huang et al. calls for
ing with the concepts of multimedia learning, voice could be extensive interaction in VR, while Oberdörfer et al. created an ap-
added in the upcoming version to support reading [52]. plication (GetiT) to teach the transformations (already described
For the very first interaction with the panel showing the in Section 2). This request’s outcome is already evident in the
pipeline overview, we could not answer the question ‘‘Can the learning application’s description in this article (see Section 4).
user decide what to do next?’’ positively [47]. The interaction to The most recent version has a solar system where students can
navigate to the second step of the pipeline was difficult for some change the matrices on their own.
learners, therefore attention was paid to this in the evaluation. Currently in the Learn phase of the iterative design cycle
A more powerful cue like a flashing button could reinforce the of [33], are new versions of the steps: Lighting, Rasterization, and
textual hint of the robot. Texturing. Additionally, these steps are requested to be interactive
Another usability issue connects the two previous paragraphs. in accordance with the findings of our evaluations. It is noticeable
Heuristic 11 [49], which proposes clear turn-taking, between user that the didactic reduction in light and shadow, as well as textur-
and system initiative, is natural to achieve in the guided tour with ing, is easier than in the rasterization step. Reducing the content
a robot as host. Nevertheless, improvement can be achieved by to heterogeneous prior knowledge and groups of people remains
having learners clearly notice when they have completed a task, an open challenge.
when the robot then takes over the turn and starts talking. This
can also help clarify what to do next. 7.3. Limitations and directions for future research
When interpreting the results collected with the desktop ver-
sion, it is important to note that we do not aim for media Open questions are to what extent an improvement of the
comparison studies that dominate research on VR and AR [53]. recognition of gestures can lead to a better understanding of
The desktop version is an addition and should be viewed with a the learning processes and their optimization. Our study shows
critical eye due to the current optimization for the VR version. that it is worth examining nodding (and shaking) as indicators
Learners report, that the setting is more engaging for users of of comprehension in subsequent research and correlating them
the VR application, which should be considered for data inter- systematically with other factors, such as the focused elements. In
pretation in addition to usability issues in the desktop version. particular, the reading interactions with the robot are interesting
Furthermore, only students in the VR condition were instructed learning moments.
to Think-Aloud. On the one hand, the environment itself might Another open question arises from the unintentional observa-
be more motivating to explore the triangles. All in all the general tion described in Section 6. Which effect have interactive tasks
learning context should be considered different. on learners with various learning motivations? Another open
Compared to other steps, the non-interactive stages Raster- question concerns the described animations demonstrating the
ization may have been less engaging, especially for VR users. interactions. Some learners understood them immediately, which
Nevertheless, desktop learners spent more time there, which is could be related to the preknowledge in VR and/or gaming. The
in contrast to the other steps. It may be easier to take the time suspected connection can possibly be traced back to the conver-
to read content on the computer. In stages like Geometry, the sations with the student.
environment itself might be more motivating to experiment with An open question for future research is if the VR applica-
the triangles and the learning context is different. tion takes longer compared to the Desktop version because the
9
B. Heinemann, S. Görzen and U. Schroeder Computers & Graphics 112 (2023) 1–12

students interact with the content more intensely and if this must also address the challenge of creating genuinely equivalent
increases the intensity of their learning process. How much might control groups for some questions. Future research could investi-
the increased time in the application be related to zeal, which (at gate the findings of Janßen et al. which showed that personality
best) moderates the overall course learning outcomes? Another traits, like extraversion or agreeableness, influence the user expe-
question is: Do we have to provide more assistance to certain rience (UX) in VR learning scenarios and that UX impacts learning
groups of students than to others, such as those who have weak outcomes [58].
visual–spatial skills, as indicated by [28]? Diverse audiences are a challenge in introductory graphics
Future plans include expanding the application to participa- courses [59]. One strategy is to pay close attention to assessment,
tory VR, where multiple visitors interact to enjoy the tour in learning activities and the learning outcomes, see [60]. Taking key
a shared virtual space. For more information, see [56]. Huang points from both sources, teaching material should be designed
et al. indicate that it is asserted that virtual reality has enormous to be modular to maintain the greatest possible flexibility. There-
promise for cooperative and collaborative learning [35]. However, fore, RePiX VR is open-source; we invite teachers and researchers
participatory VR requires more didactic considerations. How can to contribute with ideas or to participate actively. VR can promote
the tasks be modified to allow for collaborative action? How can self-regulated learning; with the data we collect, we can produce
we ensure that everyone has equal opportunities and that the adaptive learning scenarios that help deal with heterogeneous
group work is distributed fairly? preknowledge.
Another open point of discussion is whether more telepor-
tation points could encourage learners to observe the 3D en- CRediT authorship contribution statement
vironment more closely and possibly achieve a better learning
effect. The number of participants needs to be larger to prove
Birte Heinemann: Conceptualization, Methodology, Valida-
this statistically. However, one assumption for the different user
tion, Formal analysis, Investigation, Data curation, Writing – origi-
behavior is that at least some people forgot that they could
nal draft, Writing – review & editing, Visualization, Project admin-
teleport during the session. So the idea was generated to offer
istration. Sergej Görzen: Conceptualization, Methodology, Valida-
more teleportation spots. Furthermore, the positions of desktop
tion, Formal analysis, Investigation, Data curation, Software, Writ-
users will be recorded in the future to measure differences and
ing – review & editing, Visualization. Ulrik Schroeder: Resources,
conduct a more thorough investigation.
Supervision.
Although usability has been enhanced incrementally, there are
still certain areas that require improvement. For instance, the
controller’s trackpad can modify the vertices’ color. Only some Declaration of competing interest
people initially found this feature. The pointing pattern we em-
ployed as an interaction method is another problem. Although it The authors declare that they have no known competing finan-
seems to be intuitive, various interactions, such as altering the far cial interests or personal relationships that could have appeared
and near plane, need to work more accurately. Short development to influence the work reported in this paper.
cycles and rapid tests will be employed to enhance these usability
issues (as well as the usability of the desktop version). Data availability
One limitation of the two studies presented is that the stu-
dents we tested are a special group in special conditions due to It is shared in the paper (supplementary material)
the pandemic, the latter was already reported in Section 5. Our
students chose a lecture about learning technologies; some were Appendix A. Supplementary material
familiar with the content and gave good feedback concerning the
material, and some were novices. In addition, computer graphics
is a non-mandatory area in our computer science program. There- 1. Link to Repository of the application RePiX VR: https://
fore, it is not completely certain that particular interests will lead gitlab.com/learntech-rwth/repix-vr/repix-vr-app
to different results for those choosing computer graphics as an 2. Link to Demo Video: https://youtu.be/U77hR7udyak
elective area in their studies. 3. Link to Data & Learning Analytics (created using Python) &
Experimental Guide for the Second Study: https://doi.org/
8. Conclusion 10.18154/RWTH-2022-10631

This study demonstrated the outcomes of a guided tour that References


interactively explores the rendering pipeline and computer
graphic basics. Both analyses demonstrate that an iterative de- [1] Suselo T, Wünsche BC, Luxton-Reilly A. Technologies and tools to support
velopment approach, as described in [33], results in user-friendly teaching and learning computer graphics: A literature review. In: Proceed-
VR environments. Furthermore, the students who tested our ap- ings of the twenty-first Australasian computing education conference. New
York, NY, USA: Association for Computing Machinery; 2019, p. 96–105.
plication have reacted positively across the board. Together with http://dx.doi.org/10.1145/3286960.3286972.
the results from [18], in which the interaction with particular [2] Vasilakis AA, Papaioannou G, Vitsas N, Gkaravelis A. Remote teaching ad-
objects was highlighted, the evaluation of the learning analytics vanced rendering topics using the rayground platform. IEEE Comput Graph
data shows different usage patterns. These are partly attributable Appl 2021;41(5):99–103. http://dx.doi.org/10.1109/MCG.2021.3093734.
to different prior experiences with VR. [3] Matahari T. WebXR asset management in developing virtual reality learn-
ing media. Indonesian J Comput, Eng Des (IJoCED) 2022;4(1):38. http:
The next step is to do further evaluations with a larger sample //dx.doi.org/10.35806/ijoced.v4i1.253.
size. The relatively high acquisition expenses and, for instance, [4] Pathak R, Simiscuka AA, Muntean GM. An adaptive resolution scheme for
cybersickness are limiting factors for implementation in large- performance enhancement of a web-based multi-user VR application. In:
scale educational applications [12,57]. Stojšić et al. [12] explicitly 2021 IEEE international symposium on broadband multimedia systems
and broadcasting. 2021, p. 1–6. http://dx.doi.org/10.1109/BMSB53066.2021.
mention the research gap: many studies on immersive VR in
9547069.
educational contexts are conducted under laboratory conditions [5] Li K, Wang S. Development and application of VR course resources
and not in authentic teaching/learning situations. Furthermore, to based on embedded system in open education. Microprocess Microsyst
effectively assess the effects of the VR learning environment, we 2021;83:103989. http://dx.doi.org/10.1016/j.micpro.2021.103989.

10
B. Heinemann, S. Görzen and U. Schroeder Computers & Graphics 112 (2023) 1–12

[6] Martín-Gutiérrez J, Mora CE, Añorbe-Díaz B, González-Marrero A. Virtual technologies trends in education. EURASIA J Math, Sci Technol Educ 2017;13(2):469–86. http://dx.doi.org/10.12973/eurasia.2017.00626a, Publisher: EURASIA.
[7] Pantelidis VS. Reasons to use virtual reality in education and training courses and a model to determine when to use virtual reality. Themes Sci Technol Educ 2009;12.
[8] Papanastasiou G, Drigas A, Skianis C, Lytras M, Papanastasiou E. Virtual and augmented reality effects on K-12, higher and tertiary education students' twenty-first century skills. Virtual Real 2019;23(4):425–36. http://dx.doi.org/10.1007/s10055-018-0363-2.
[9] Scrivner O, Madewell J, Buckley C, Perez N. Best practices in the use of augmented and virtual reality technologies for SLA: Design, implementation, and feedback. In: Carrió-Pastor ML, editor. Teaching language and teaching literature in virtual environments. Singapore: Springer; 2019, p. 55–72. http://dx.doi.org/10.1007/978-981-13-1358-5_4.
[10] Zender R, Knoth AH, Fischer MH, Lucke U. Potentials of virtual reality as an instrument for research and education. I-Com 2019;18(1):3–15. http://dx.doi.org/10.1515/icom-2018-0042, Publisher: De Gruyter.
[11] Asad MM, Naz A, Churi P, Tahanzadeh MM. Virtual reality as pedagogical tool to enhance experiential learning: A systematic literature review. Educ Res Int 2021;2021:1–17. http://dx.doi.org/10.1155/2021/7061623.
[12] Stojšić I, Ivkov-Džigurski A, Maričić O. Virtual reality as a learning tool: How and where to start with immersive teaching. In: Daniela L, editor. Didactics of smart pedagogy. Cham: Springer International Publishing; 2019, p. 353–69. http://dx.doi.org/10.1007/978-3-030-01551-0_18.
[13] Heinemann B, Görzen S, Schroeder U. Systematic design of effective learning scenarios for virtual reality. In: 2022 International conference on advanced learning technologies. Bucharest, Romania; 2022, http://dx.doi.org/10.1109/ICALT55010.2022.00107.
[14] Fowler C. Virtual reality and learning: Where is the pedagogy?: Learning activities in 3-D virtual worlds. Br J Educ Technol 2015;46(2):412–22. http://dx.doi.org/10.1111/bjet.12135.
[15] Gedrimiene E, Silvola A, Pursiainen J, Rusanen J, Muukkonen H. Learning analytics in education: Literature review and case examples from vocational education. Scand J Educ Res 2020;64(7):1105–19. http://dx.doi.org/10.1080/00313831.2019.1649718.
[16] Siemens G. What are learning analytics?. 2010, URL http://web.archive.org/web/20180803152224/http://www.elearnspace.org/blog/2010/08/25/what-are-learning-analytics/.
[17] Worsley M. Multimodal learning analytics' past, present, and potential futures. In: CrossMMLA@LAK. 2018.
[18] Heinemann B, Görzen S, Schroeder U. RePiX VR - learning environment for the rendering pipeline in virtual reality. In: Bourdin J-J, Paquette E, editors. Eurographics 2022 - Education papers. Reims, France; 2022, p. 8. http://dx.doi.org/10.2312/eged.20221040.
[19] Balreira DG, Walter M, Fellner DW. What we are teaching in introduction to computer graphics. In: Bourdin JJ, Shesh A, editors. EG 2017 - Education papers. The Eurographics Association; 2017, http://dx.doi.org/10.2312/eged.20171019, ISSN: 1017-4656.
[20] Chatti MA, Dyckhoff AL, Schroeder U, Thüs H. A reference model for learning analytics. Int J Technol Enhanced Learn 2012;4(5/6):318. http://dx.doi.org/10.1504/IJTEL.2012.051815.
[21] Ihantola P, Vihavainen A, Ahadi A, Butler M, Börstler J, Edwards SH, et al. Educational data mining and learning analytics in programming: Literature review and case studies. In: Proceedings of the 2015 ITiCSE on working group reports. ITICSE-WGR '15, New York, NY, USA: Association for Computing Machinery; 2015, p. 41–63. http://dx.doi.org/10.1145/2858796.2858798.
[22] Allcoat D, von Mühlenen A. Learning in virtual reality: Effects on performance, emotion and engagement. Res Learn Technol 2018;26. http://dx.doi.org/10.25304/rlt.v26.2140.
[23] Nesenbergs K, Abolins V, Ormanis J, Mednis A. Use of augmented and virtual reality in remote higher education: A systematic umbrella review. Educ Sci 2021;11(1):8. http://dx.doi.org/10.3390/educsci11010008.
[24] Banihashem SK, Aliabadi K, Pourroostaei Ardakani S, Delaver A, Nili Ahmadabadi M. Learning analytics: A systematic literature review. Interdiscipl J Virtual Learn Med Sci 2018;9(2). http://dx.doi.org/10.5812/ijvlms.63024.
[25] Ai-Lim Lee E, Wong KW, Fung CC. How does desktop virtual reality enhance learning outcomes? A structural equation modeling approach. Comput Educ 2010;55(4):1424–42. http://dx.doi.org/10.1016/j.compedu.2010.06.006.
[26] ACM Association for Computing Machinery, IEEE Computer Society, editors. Computing curricula 2020: Paradigms for global computing education. ACM; 2020, http://dx.doi.org/10.1145/3467967.
[27] Visual–spatial ability – APA dictionary of psychology. 2018, URL https://dictionary.apa.org/visual-spatial-ability. (visited on 30 July 2021).
[28] González Campos JS, Sánchez-Navarro J, Arnedo-Moreno J. An empirical study of the effect that a computer graphics course has on visual-spatial abilities. Int J Educ Technol Higher Educ 2019;16(1):41. http://dx.doi.org/10.1186/s41239-019-0169-7.
[29] Dalgarno B, Lee MJW. What are the learning affordances of 3-D virtual environments? Br J Educ Technol 2010;41(1):10–32. http://dx.doi.org/10.1111/j.1467-8535.2009.01038.x.
[30] Milgram P, Takemura H, Utsumi A, Kishino F. Augmented reality: A class of displays on the reality-virtuality continuum. In: Das H, editor. Telemanipulator and telepresence technologies. SPIE; 1995, p. 282–92. http://dx.doi.org/10.1117/12.197321, URL http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=981543.
[31] Peternier A, Vexo F, Thalmann D. The mental vision framework - A platform for teaching, practicing and researching with computer graphics and virtual reality. In: Pan Z, Cheok AD, Müller W, El Rhalibi A, editors. Transactions on edutainment I. Lecture notes in computer science, Springer; 2008, p. 242–60. http://dx.doi.org/10.1007/978-3-540-69744-2_19.
[32] Vitsas N, Gkaravelis A, Vasilakis AA, Vardis K, Papaioannou G. Rayground: An online educational tool for ray tracing. In: Romero M, Sousa Santos B, editors. Eurographics 2020 - Education papers. The Eurographics Association; 2020, http://dx.doi.org/10.2312/eged.20201027, ISSN: 1017-4656.
[33] Jerald J. The VR book: Human-centered design for virtual reality. Association for Computing Machinery and Morgan & Claypool; 2015.
[34] Anderson T, Shattuck J. Design-based research: A decade of progress in education research? Educ Res 2012;41(1):16–25. http://dx.doi.org/10.3102/0013189X11428813, Publisher: American Educational Research Association.
[35] Huang HM, Liaw SS. An analysis of learners' intentions toward virtual reality learning based on constructivist and technology acceptance approaches. Int Rev Res Open Distrib Learn 2018;19(1):91–115.
[36] Documentation of experience API (or xAPI): A specification for learning technology. 2018, URL https://xapi.com/. (visited on 18 January 2022).
[37] Documentation of learning locker: A conformant open source learning record store (LRS). 2015, URL https://docs.learninglocker.net/welcome/. (visited on 30 July 2021).
[38] Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. The FAIR guiding principles for scientific data management and stewardship. Sci Data 2016;3(1):160018. http://dx.doi.org/10.1038/sdata.2016.18, URL https://www.nature.com/articles/sdata201618, Publisher: Nature Publishing Group.
[39] Judel S, Schroeder U. EXCALIBUR LA - An extendable and scalable infrastructure build for learning analytics. In: 2022 International conference on advanced learning technologies. 2022, p. 155–7. http://dx.doi.org/10.1109/ICALT55010.2022.00053.
[40] Heinemann B, Ehlenz M, Görzen S, Schroeder U. xAPI made easy: A learning analytics infrastructure for interdisciplinary projects. Int J Online Biomed Eng (IJOE) 2022;18(14):99–113. http://dx.doi.org/10.3991/ijoe.v18i14.35079.
[41] Knight H, Carlisle S, O'Connor M, Briggs L, Fothergill L, Al-Oraibi A, et al. Impacts of the COVID-19 pandemic and self-isolation on students and staff in higher education: A qualitative study. Int J Environ Res Public Health 2021;18(20):10675. http://dx.doi.org/10.3390/ijerph182010675, URL https://www.mdpi.com/1660-4601/18/20/10675, Publisher: Multidisciplinary Digital Publishing Institute.
[42] Ericsson KA, Simon HA. Protocol analysis: Verbal reports as data. Cambridge, MA, US: The MIT Press; 1984, p. 426.
[43] Grossman T, Fitzmaurice G, Attar R. A survey of software learnability: Metrics, methodologies and guidelines. In: Proceedings of the SIGCHI conference on human factors in computing systems. CHI '09, New York, NY, USA: ACM; 2009, p. 649–58. http://dx.doi.org/10.1145/1518701.1518803.
[44] Gibson D, de Freitas S. Exploratory analysis in learning analytics. Technol, Knowl Learn 2016;21(1):5–19. http://dx.doi.org/10.1007/s10758-015-9249-5.
[45] Thomas DR. A general inductive approach for analyzing qualitative evaluation data. Am J Eval 2006;27(2):237–46. http://dx.doi.org/10.1177/1098214005283748, Publisher: SAGE Publications Inc.
[46] Krathwohl DR. A revision of Bloom's taxonomy: An overview. Theory Into Pract 2002;41(4):212–8. http://dx.doi.org/10.1207/s15430421tip4104_2, Publisher: Routledge.
[47] Sutcliffe AG, Kaur KD. Evaluating the usability of virtual reality user interfaces. Behav Inf Technol 2000;19(6):415–26. http://dx.doi.org/10.1080/014492900750052679.
[48] Nielsen J. Enhancing the explanatory power of usability heuristics. In: Proceedings of the SIGCHI conference on human factors in computing systems. CHI '94, New York, NY, USA: Association for Computing Machinery; 1994, p. 152–8. http://dx.doi.org/10.1145/191666.191729.

[49] Sutcliffe AG, Poullis C, Gregoriades A, Katsouri I, Tzanavari A, Herakleous K. Reflecting on the design process for virtual reality applications. Int J Hum–Comput Interaction 2019;35(2):168–79. http://dx.doi.org/10.1080/10447318.2018.1443898, Publisher: Taylor & Francis.
[50] Wang W, Cheng J, Guo JL. Usability of virtual reality application through the lens of the user community: A case study. In: Extended abstracts of the 2019 CHI conference on human factors in computing systems. CHI EA '19, New York, NY, USA: Association for Computing Machinery; 2019, p. 1–6. http://dx.doi.org/10.1145/3290607.3312816.
[51] Wagner P, Malisz Z, Kopp S. Gesture and speech in interaction: An overview. Speech Commun 2014;57:209–32. http://dx.doi.org/10.1016/j.specom.2013.09.008, URL https://www.sciencedirect.com/science/article/pii/S0167639313001295.
[52] Mayer RE. Multimedia learning. Cambridge University Press; 2001, http://dx.doi.org/10.1017/CBO9781139164603, URL https://www.cambridge.org/core/books/multimedia-learning/E9595926786F5DEA326A3774D2F23DB2.
[53] Buchner J, Kerres M. Media comparison studies dominate comparative research on augmented reality in education. Comput Educ 2023;195:104711. http://dx.doi.org/10.1016/j.compedu.2022.104711.
[54] Ramaseri Chandra AN, El Jamiy F, Reza H. A systematic survey on cybersickness in virtual environments. Computers 2022;11(4). http://dx.doi.org/10.3390/computers11040051, URL https://www.mdpi.com/2073-431X/11/4/51.
[55] Oberdörfer S, Latoschik ME. Knowledge encoding in game mechanics: Transfer-oriented knowledge learning in desktop-3D and VR. Int J Comput Games Technol 2019;2019:7626349. http://dx.doi.org/10.1155/2019/7626349, Publisher: Hindawi.
[56] Velho L, Giannella J, Carvalho L, Lucio D. VR tour: Guided participatory meta-narrative for virtual reality exploration. Rev GEMInIS 2019;10(2):122–40.
[57] LaViola JJ. A discussion of cybersickness in virtual environments. ACM SIGCHI Bull 2000;32(1):47–56. http://dx.doi.org/10.1145/333329.333344.
[58] Janßen D, Tummel C, Richert A, Isenhardt I. Towards measuring user experience, activation and task performance in immersive virtual learning environments for students. In: Allison C, Morgado L, Pirker J, Beck D, Richter J, Gütl C, editors. Immersive learning research network. Communications in computer and information science, Springer International Publishing; 2016, p. 45–58. http://dx.doi.org/10.1007/978-3-319-41769-1_4.
[59] Fairén M, Pelechano N. Introductory graphics for very diverse audiences. In: Bourdin J-J, Cerezo E, Cunningham S, editors. Eurographics 2013 - Education papers. The Eurographics Association; 2013, http://dx.doi.org/10.2312/conf/EG2013/education/009-010, ISSN: 1017-4656.
[60] Biggs JB, Tang CS-k. Teaching for quality learning at university: What the student does. 4th ed. SRHE and Open University Press imprint, Maidenhead, England; New York, NY: McGraw-Hill, Society for Research into Higher Education & Open University Press; 2011.
