
2013 Seventh International Conference on Next Generation Mobile Apps, Services and Technologies

Using the ADL Experience API for Mobile Learning,
Sensing, Informing, Encouraging, Orchestrating
Christian Glahn
International Relations and Security Network
Swiss Federal Institute of Technology
Zurich, Switzerland
christian.glahn@sipo.gess.ethz.ch

Abstract: The new ADL Experience API is an attempt to achieve better interoperability between different types of educational systems and devices. This new specification is designed to link sensor networks for collecting and analyzing learning experiences in different contexts. This paper analyses how the concepts of the Experience API were integrated into a mobile learning application and how the app uses learning analytics functions based on the collected data for informing the learners about their learning performance, for encouraging them to actively use the app, and for orchestrating and sequencing the learning resources.

Keywords: interoperability; mobile learning; software system design for mobile services; standardization; learning analytics

I. INTRODUCTION

Learning analytics is a new approach to using statistics for supporting learning processes [1]. Several educational use cases have been discussed in the literature. Glahn, Specht, and Koper [2] discussed a framework for contextualizing visualizations of learning activities to support learner engagement in informal learning, and subsequently showed the effects of different contextualized learning analytics functions on active and passive learner engagement [3]. Kalz, Drachsler, Van Bruggen,
Hummel et al. [4] reported on the use of learning analytics
on peer traces for supporting the way-finding of students in
complex resource structures for competence development.
Florian, Glahn, Drachsler, Specht et al. [5] identified
different social planes in the activity repositories of
learning management systems through learning analytics.
Analytical functions for adaptive navigation and adaptive
visualization have been frequently discussed [6]. Baepler
and Murdoch [7] address the use of data mining
approaches for improving educational interventions of
teachers and instructors. This special form of learning analytics is also referred to as "teaching analytics", which indicates
that the inferred information is unavailable to the learners.
Aljohani and Davis [8] presented a learning analytics
framework for mobile devices in formal educational
contexts.
Two forms of learning analytics can be identified in the literature: process analytics and object-data analytics. Process analytics refer to approaches that focus on descriptive elements of the learning process [2,5]. Object-data analytics address learner-created resources [9].
The use of learning analytics has received some
attention in the educational technology industry.
Therefore, the ADL Initiative has supported the TinCan
project [10] in order to increase the interoperability
between sensing approaches that can feed into learning

978-0-7695-5090-9/13 $26.00 © 2013 IEEE


DOI 10.1109/NGMAST.2013.55


analytics functions. The project has resulted in the ADL Experience API (XAPI) [11].
This contribution analyses the interplay of learning
analytics with the XAPI in mobile learning environments.
It presents a generic architecture for utilizing the XAPI in
mobile mash-up learning environments. The architecture is
analyzed based on four use cases of the Mobler Cards app
[12].
This article introduces the XAPI in the following section. It discusses its features for mobile learning contexts and describes a generic architecture for utilizing the XAPI in mobile learning mash-ups. Finally, it analyzes learning analytics functions on data that is available through the XAPI for identifying learning activities, for providing information on the learning performance, for engaging learners in learning processes, and for orchestrating these processes.
II. THE ADL EXPERIENCE API

The ADL XAPI is a new specification that is part of the newly initiated ADL Training and Learning Architecture (TLA) [13]. The TLA aims to overcome the limitations of the present SCORM specification [14].
The objective of the XAPI is to express, store, and exchange statements about learning experiences. The specification has two primary parts. The first part focuses on the syntax of the data format, while the second part defines the characteristics of learning record stores (LRS). LRSes serve as data endpoints that can safely collect and exchange learning activity traces.
Activity statements are generated by so-called activity providers (APs). APs are sensor networks that are aware of, and able to identify, individual actors. An AP is referred to as the "authority" of an activity statement. The authority is used to verify the validity of a statement.
Experience statements are at the core of the XAPI. These statements form activity streams that provide a trajectory of the learning activities. The activity streams are the foundation for any learning analytics approach. The XAPI data format describes an experience statement with the following 11 attributes.
- Unique identifier
- Actor
- Verb
- Object
- Result
- Context
- Timestamp
- Stored (internal recording timestamp)
- Authority
- (Protocol) Version
- Attachments
All information in XAPI statements can be separated into meta-data, descriptive information, and complementary data. The unique identifier, the internal recording timestamp, and the protocol version are such meta-data at the activity level.
Descriptive information includes all information that describes an activity. The minimal descriptive information of an XAPI activity statement is the actor-verb-object triple. The actor is typically a learner, the verb refers to the activity that is performed, and the object is the learning object or tool that has been provided to perform the activity. This triple can optionally be extended by information about the result of the activity (e.g., a score or a link to a resource), the timestamp that records the time when the activity was performed, the authority that observed and verified the performance, and the context that defines the setting in which an activity has been performed.
The XAPI differentiates between the core context and
the wider context. The core context includes the
instructor(s), the direct peers involved in an activity
(team), the learning environment (platform), the language
that was used in the performance, and a framing statement
for an activity (e.g., the course that relates to the activity).
The extended context includes a set of data-records about
the wider context of a learning activity. This wider context
is not explicitly specified and can include the location of
the learner, the wider (social) relations, the duration of an
activity, environmental factors (e.g., temperature or noise
level), etc. The format and the content of the wider context are specific to the AP and are not subject to the interoperability of the data format.
Complementary data refers to additional and optional data that is process- or activity-specific and can be in any arbitrary format. Therefore, the complementary information is only of limited use for learning process analytics. The attachments are complementary data at the level of the activity statement.
The XAPI data format is complemented by the
functional specification of the LRS. The LRS is the data
endpoint for accepting and providing activity statements as
activity streams. Activity streams are sequences of activity
statements.
An LRS is defined by two interfaces:
- Statement interface
- Document interface
The statement interface is responsible for recording the
activity statements. This LRS interface is the prime point
for adding statements to the LRS or filtering the present
statements. This interface is used for connecting sensor
networks or for exchanging data between sub-systems of a
complex learning environment.
The document interface enables APs to store key-value pairs in the LRS. These "documents" can be used to provide application-specific attribution to a statement. Consequently, all documents are associated with one learning activity statement in the LRS. This interface handles three types of documents:
- State interface
- Activity profile interface
- Agent profile interface


The state interface allows APs to store their internal state relative to the present activity statement. The activity profile interface allows annotating activity-specific information to a statement. Through the agent profile interface it is possible to store key-value pairs related to an agent (actor). The agent information can potentially be shared across activities of the same agent.
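The statement and document interfaces map onto a small set of REST resources in the XAPI specification (/statements, /activities/state, /activities/profile, /agents/profile). The following sketch only builds the corresponding request URLs; the base URL, the parameter values, and the class itself are illustrative.

```python
# Sketch of the LRS resources behind the statement and document
# interfaces, expressed as URL builders. The base URL is a placeholder;
# no authentication or network code is shown.
from urllib.parse import urlencode

class LRSEndpoints:
    def __init__(self, base_url):
        self.base = base_url.rstrip("/")

    def statements(self, **filters):
        """Statement interface: add or filter activity statements."""
        url = f"{self.base}/statements"
        return f"{url}?{urlencode(filters)}" if filters else url

    def state(self, activity_id, agent, state_id):
        """State documents: internal AP state for one activity."""
        q = urlencode({"activityId": activity_id, "agent": agent, "stateId": state_id})
        return f"{self.base}/activities/state?{q}"

    def activity_profile(self, activity_id, profile_id):
        """Activity profile documents: activity-specific annotations."""
        q = urlencode({"activityId": activity_id, "profileId": profile_id})
        return f"{self.base}/activities/profile?{q}"

    def agent_profile(self, agent, profile_id):
        """Agent profile documents: key-value pairs tied to an actor."""
        q = urlencode({"agent": agent, "profileId": profile_id})
        return f"{self.base}/agents/profile?{q}"
```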
III. A GENERIC EXPERIENCE API ARCHITECTURE FOR MOBILE AND CONTEXTUAL LEARNING

This section analyses the integration of the XAPI into mobile learning environments. The generic XAPI architecture consists of the following three components:
- Activity provider (AP)
- Learning Record Store (LRS)
- Activity consumer (AC)
The XAPI data format is used to exchange the activity
streams between these components (Figure 1).
Figure 1: Basic XAPI architecture (an activity provider and an activity consumer, connected to the LRS by XAPI activity streams)


In complex setups both the AP and the AC can also
refer to an LRS. This enables cascading and distributed
structures for processing the learning activities. An
example for such a structure can be the integration of
mobile devices, a virtual learning environment (VLE), and
an organizational e-portfolio system (Figure 2).
Figure 2: Cascading LRS setup (a mobile app with sensor network and local LRS, a VLE with its LRS, and an organizational e-portfolio system with its LRS)


Such a cascading architecture supports the requirements of mobile and contextual learning settings. These requirements typically include cross-device synchronization (Figure 3) and offline support. Furthermore, mobile devices can utilize the integrated sensors for completing contextual information of a learning activity.
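Because every statement carries a unique identifier, cross-device synchronization can merge the activity streams of several devices without duplicating experiences. A minimal sketch, assuming statements are dictionaries with the id and stored attributes described in Section II:

```python
# Merge the activity streams of two devices. Statements are immutable,
# so duplicates are resolved by their unique identifier and the first
# copy seen is kept; the result is ordered by the recording timestamp.
def merge_streams(local, remote):
    merged = {s["id"]: s for s in local}
    for s in remote:
        merged.setdefault(s["id"], s)
    return sorted(merged.values(), key=lambda s: s["stored"])
```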
Figure 3: Device synchronization setup (two mobile apps, each with sensor network and local LRS, synchronizing through the LRS of a VLE)

Finally, mobile and contextual learning scenarios typically combine external tools with conventional web-based VLEs. These tools are often optimized for specific devices that need to interact flexibly in different organizational contexts. Additionally, lifelong learning opportunities suggest that learners are active in several learning environments at the same time. Therefore, mobile learning applications have to manage learning activity statements for different educational contexts while they provide unique learning experiences (Figure 4).
Figure 4: Multiple educational context setup (one mobile app with sensor network and local LRS connected to the LRSes of two different VLEs)

IV. THE MOBLER CARDS APP

Mobler Cards is a smart phone app for supporting flashcard learning. The app is considered a mobile learning mash-up as it integrates with VLEs and, thus, can be seamlessly embedded into existing instructional designs for blended and online learning. As a mash-up it provides the opportunity to be automatically reconfigured for applications in different organizational environments, and it offers a two-way synchronization between the app and the VLE.
Mobler Cards uses the question pool feature that is part of many test and assessment engines of contemporary LMS. This feature is also part of the IMS QTI specification, which allows exchanging interoperable learning material for the use with Mobler Cards.
From a learner perspective the app has two modes: the exercise mode and the statistics mode.
The exercise mode provides the learning experiences for the learners in a seemingly random sequence of exercises. Each exercise refers to one test-item in the related question pool. In order to complete an exercise the learners have to respond to the presented test-items. The responses are evaluated towards their degree of correctness. The results can be either wrong, partially correct, or correct (excellent). The result is immediately presented to the learner together with the correct solution and the provided answer. Figure 5 shows an example of the provided feedback. Through this feedback learners can assess their errors. Additionally, the feedback can be enriched through textual feedback if it has been defined for the respective test-item.

Figure 5: Example for the Mobler Cards Feedback

The statistics mode provides learners with course-specific performance metrics. This data represents the learners' performances in the exercise mode (Figure 6). The performance statistics provide the learners with information about basic metrics that indicate their performance on four dimensions: the number of test-items (cards) handled, the average score for the items, which also includes partially correct responses, the number of correctly answered test-items, and the average speed for answering the items. The displayed values are relative to the present day. A second indicator visualizes whether the value represents an improvement of the learning performance or not.

Figure 6: Mobler Cards Statistics Screen

In addition to the performance statistics the app offers two achievements in the form of effort-based learning badges. While the performance statistics are moving targets that are suitable for monitoring the learning performance, the achievements show only the progress until their requirements are met. Once a learner meets the criteria for a badge it remains achieved.


Although the app uses the IMS QTI format for retrieving the learning material from a VLE, it does not provide a mobile assessment environment. Instead, it facilitates course-related practicing through casual exercises. This educational scenario prevents Mobler Cards from fully implementing the IMS QTI protocol, which considers only assessment scenarios. Therefore, Mobler Cards uses the XAPI for storing and exchanging information about the learning activities between the mobile device and the VLE.

V. SENSING, SYNCHRONISING, AND MONITORING

The presented app supports mobile casual learning, which encourages learning in environments that are uncommon for learning. The existing prototype does not support any location-based features or other forms of contextualization towards the physical environment of the learner. Consequently, it uses two sensors that are closely related to the interaction of the learner with the app:
- Answering performance sensor
- Achievement sensor
Both sensors create activity statements in the app's internal LRS. In this process the app utilizes the fact that many aspects of the related activities are constant from the perspective of the app. The two variables that need to be actively registered are the learning object and the result of the activity. For the performance sensor the learning object refers to the test-item that has been handled. For the achievement sensor the learning object refers to the related badge. The result of the answering performance is the score that has been reached by a learner's response. The score can be either 0 for wrong responses, 0.5 for partially correct responses, or 1 for correct responses. This value is also used for determining which feedback should be provided to a learner. In the case of the achievements there is no such result. Additionally, the app tracks the time a learner needed to respond to the given challenge. The duration for handling a test-item is defined as the time interval from receiving the question/problem statement until the learner confirms the answer in order to receive the feedback. This information is stored as extended contextual data.
The actor is defined by the learner profile that is
provided by the VLE during authentication. Because
mobile devices are typically personal devices this can be
considered as constant during a session.
Mobler Cards has only two performances that characterize the learning experiences with the app. As a result, the app-specific verbs for the activity statements are either "attempt" for answering test-items or "achieved" if the requirements for an achievement have been reached. An attempt is only registered for completed test-items, but not if a learner decides to skip the item.
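Under these constraints the answering performance sensor only has to fill in the variable parts of a statement. The following sketch assumes the dictionary layout from Section II; the verb URI, the duration extension key, and the helper name are illustrative, not the app's actual code:

```python
# Sketch of the answering-performance sensor: only the learning object
# (the test-item) and the result (score 0, 0.5, or 1) vary per activity;
# the response duration is kept as extended contextual data.
import uuid
from datetime import datetime, timezone

# Illustrative verb URI; the paper does not publish the app's verb IDs.
VERB_ATTEMPT = {"id": "http://example.org/verbs/attempt", "display": {"en-US": "attempt"}}

def attempt_statement(actor, item_id, score, duration_s):
    assert score in (0, 0.5, 1), "wrong, partially correct, or correct"
    return {
        "id": str(uuid.uuid4()),
        "actor": actor,                          # learner profile from the VLE
        "verb": VERB_ATTEMPT,
        "object": {"objectType": "Activity", "id": item_id},
        "result": {"score": {"scaled": score}},
        "context": {"extensions": {              # wider (extended) context
            "http://example.org/xapi/duration": duration_s,
        }},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```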
The app has an LRS component integrated. This LRS is used for collecting activity statements from the two application sensors, for synchronizing with a backend VLE, and for later processing even if no network connection to a backend VLE can be established. The synchronization with a VLE is currently used for synchronizing the learning progress across different devices that might be available to a learner. The same data can be used for monitoring the learning progress in a course and for planning appropriate support interventions when necessary. Furthermore, the app-internal LRS keeps track of which activity statements relate to which of the connected VLEs.
In addition to the standard features, the LRS integrates a learning analytics component that is used for providing information about the progress to the learner as well as for orchestrating the test-items. This component aggregates the learning activity statements for enabling the learners to monitor their personal progress, to encourage and support them in setting learning goals, as well as to orchestrate the seemingly random sequence of the selection of test-items.
The following sections describe the learning analytics functions as they are used for the different features of the app.
VI. INFORMING
The statistics view is the primary view for informing the learners about their performance. It indicates the performance in handling the provided test-items on four dimensions:
- Amount of test-items handled (cards handled)
- Average score reached
- Number of correctly answered items (progress)
- Average duration for handling the test-items
These dimensions are calculated and displayed relative to a time frame. This time frame is defined by a starting time and an offset.
The statistics mode uses two starting times. The first starting time is the time when the learner requested the statistics. This time is used for presenting the current performance and refers to the performance time frame. The second starting time relates to the last active day. The last active day refers to the latest 24-hour interval that has activity statements for the selected course. The related starting time is relative to the smallest 24-hour offset from the request time. This second time frame is used for the indicator and is therefore called the reference time frame.
The offset defines the relative duration of the interval. In Mobler Cards this offset is set to 24 hours.
The performance statistics only consider the attempt statements in the app's LRS. The following analytics functions are applied for calculating the presented values:
- The amount of test-items handled is given by the number of statements in the LRS within the selected time frame.
- The average score is the mean score found in the statements' result attribute. As the calculated score can only be within 0 and 1, the average score provides an indicator for the relative amount of correctly, partially correctly, and wrongly answered test-items.
- The progress value states how many of the items in the question pool were correctly answered relative to the total amount of items in the question pool. This is the number of statement objects that have activity statements with a score of 1, without considering how many times these objects were handled within the time frame.
- The speed dimension refers to the mean duration that was required for responding to the test-items. This uses the duration attribute of the extended context of the activity statement.
Each dimension is processed for the performance and the reference time frames.
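Assuming the statement layout sketched earlier, the four dimensions reduce to plain aggregations over the attempt statements of one time frame. Field names, the duration extension key, and the function itself are illustrative:

```python
# The four statistics dimensions as aggregations over the attempt
# statements of one time frame. pool_size is the number of test-items
# in the question pool; field names follow the earlier sketches.
def statistics(statements, pool_size):
    scores = [s["result"]["score"]["scaled"] for s in statements]
    durations = [s["context"]["extensions"]["http://example.org/xapi/duration"]
                 for s in statements]
    handled = len(statements)
    # objects answered correctly at least once, regardless of repetitions
    correct_objects = {s["object"]["id"] for s in statements
                       if s["result"]["score"]["scaled"] == 1}
    return {
        "cards_handled": handled,
        "average_score": sum(scores) / handled if handled else 0.0,
        "progress": len(correct_objects) / pool_size,
        "average_duration": sum(durations) / handled if handled else 0.0,
    }
```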

VII. ENCOURAGING

The achievements are in many respects similar to the performance statistics. The main difference is that an achievement remains persistent once it has been completed. Mobler Cards offers two achievements to demonstrate the concept: the Stack Handler achievement and the Card Burner achievement.
The Stack Handler can be achieved by answering all test-items in a course, which may contain several question pools. Therefore, the analytical function tests if all test-items of a course are referred to in the object attribute in the LRS. This function relies on additional information from the so-called content-broker component in order to determine which test-items can be considered. The Stack Handler analytics reports the number of test-items considered for the analysis and the number of corresponding objects found in the LRS. This learning badge is achieved as soon as both values match. This achievement has no constraining time frame.
The Card Burner can be achieved by responding to at least 100 test-items within a 24-hour time frame. This uses the same analytical function as the cards-handled dimension of the statistics view. This badge is achieved as soon as the amount of handled test-items matches the predefined yardstick. While the values on the statistics view are calculated relative to the request time, the Card Burner achievement must consider the activity time for matching. Thus, the LRS triggers this analysis whenever a new attempt is recorded.
Because learning achievements are persistent in the context of a course, they are also considered as learning experiences, while the performance metrics are of plainly informative character. Therefore, the analytical function submits an activity statement to the LRS that indicates that a learner has "achieved" the respective badge as soon as the achievement goal is matched.

VIII. ORCHESTRATING

Finally, Mobler Cards uses a weight-based selection algorithm for orchestrating the sequence of the items. The objective of the sequencing was that the learners receive the test-items in a semi-random sequence. In this sequence the items that were previously answered incorrectly will appear more often than those a learner has already mastered. This weight accumulates over time in a way that items that are repeatedly answered correctly will rarely be presented to the learners. However, the algorithm is required not to exclude any questions completely.
In order to achieve this selection mode, the following learning analytics function is applied to the LRS. For each object of attempt activities the LRS is requested to return information about:
- The number of recorded attempts (n),
- The total score across all attempts (S),
- The score of the last attempt (s), and
- The number of other attempts since the last attempt with the object (t).
The app calculates the weight of each learning object by using these parameters. Over time each object loses weight with every attempt on another object. These parameters are complemented with the number of available test-items in a course (i_course). The basic object weight (w) is computed based on the dampening function (1). This weight considers the overall performance on a test-item (n, S), the most recent activity (s), and a dampening timeframe before a test-item re-appears.
A test-item is only included into the sequencing if it falls below the threshold of 1, or if no other items below this threshold are available.

IX. CONCLUSIONS

This paper analyzed how the concepts of the ADL XAPI specification can be applied in a mobile learning app for smart phones that integrates with other educational infrastructures. The Mobler Cards app case illustrated concepts for linking learning analytics functions on top of the data in an XAPI-compliant LRS. These functions underpin features for informing learners about their performance, engaging them through learning badges, and for orchestrating and sequencing learning activities depending on the learners' previous interactions.
The present analysis indicates the potential of this new specification and touches on use cases of learning analytics in distributed and mobile environments. These use cases show how local, application-centered sensor networks can be utilized and embedded into a framing educational scenario. Currently, the presented use cases can only be implemented by custom functions of devices or applications. As the XAPI specification receives more attention in the educational technology industry, it can be expected that more explicit and interoperable approaches will be requested for defining advanced filtering and analytics functions for complex educational scenarios.

ACKNOWLEDGMENT


The research related to this paper has been conducted at the International Relations and Security Network (ISN) at the ETH Zürich, Switzerland, funded by the ADL Co-Labs and awarded by the Office of Naval Research Global (ONRG) under the grant no. N62909-12-1-7022. The views expressed herein are solely those of the author and do not represent or reflect the views of any academic, government or industry organization mentioned herein.
REFERENCES
[1] T. Elias, "Learning Analytics: Definitions, Processes and Potential." http://learninganalytics.net/LearningAnalyticsDefinitionsProcessesPotential.pdf
[2] C. Glahn, M. Specht, and R. Koper, "Smart indicators to support the learning interaction cycle." International Journal of Continuing Engineering Education and Lifelong Learning, 18(1), 98-117.
[3] C. Glahn, M. Specht, and R. Koper, "Visualisation of interaction footprints for engagement in online communities." Journal of Educational Technology & Society, 12(3), 44-57.
[4] M. Kalz, H. Drachsler, W. Van der Vegt, J. Van Bruggen, C. Glahn, and R. Koper, "A Placement Web-Service for Lifelong Learners." In K. Tochtermann & H. Maurer (Eds.), Proceedings of the 9th International Conference on Knowledge Management and Knowledge Technologies (pp. 289-298). September 2-4, 2009, Graz, Austria: Verlag der Technischen Universität Graz.
[5] B. Florian, C. Glahn, H. Drachsler, M. Specht, and R. Fabregat, "Activity-based learner-models for Learner Monitoring and Recommendations in Moodle." In C. D. Kloos, D. Gillet, R. M. Crespo García, F. Wild, & M. Wolpers (Eds.), Towards Ubiquitous Learning: 6th European Conference on Technology Enhanced Learning, EC-TEL 2011 (pp. 111-124). 2011, Heidelberg, Berlin: Springer.
[6] P. Brusilovsky, "Adaptive Hypermedia." User Modelling and User-adapted Interaction, 11(1-2), 87-110.
[7] P. Baepler and C. J. Murdoch, "Academic Analytics and Data Mining in Higher Education." International Journal for the Scholarship of Teaching and Learning, 4(2), 2010.
[8] N. Aljohani and H. Davis, "Learning Analytics in Mobile and Ubiquitous Learning Environments." In Proceedings of the 11th International Conference on Mobile and Contextual Learning 2012, Helsinki, Finland, October 16-18, 2012.
[9] M. Kalz, "Placement Support for Learners in Learning Networks." October 16, 2009, Heerlen, The Netherlands: Open University of the Netherlands, CELSTEC.
[10] ADL Initiative, "Project TinCan." http://www.adlnet.gov/tla/tin-can
[11] ADL Initiative, "Experience API," 2013. https://github.com/adlnet/xAPI-Spec/blob/master/xAPI.md
[12] C. Glahn, "Supporting Learner Mobility in SCORM-Compliant Learning Environments with ISN Mobler Cards." In Proceedings of the 1st Workshop on Mobile Learning in Security and Defence Organizations (mADL 2012). 15 Oct. 2012, Helsinki, Finland.
[13] ADL Initiative, "Training & Learning Architecture (TLA)," 2013. http://www.adlnet.gov/tla
[14] Advanced Distributed Learning (ADL) Initiative, "Sharable Content Object Reference Model (SCORM) 2004 4th Edition Run-Time Environment (RTE) Version 1.1." Alexandria: ADL Initiative.