
Int. J. Learning Technology, Vol. 4, Nos. 1/2, 2009

SAMOS: a model for monitoring students’ and
groups’ activities in collaborative e-learning

Angel A. Juan* and Thanasis Daradoumis


Department of Computer Sciences,
Multimedia and Telecommunication
Open University of Catalonia
Rambla del Poblenou, 156
08018 Barcelona, Spain
Fax: +34 933 568 822
E-mail: ajuanp@uoc.edu
E-mail: adaradoumis@uoc.edu
*Corresponding author

Javier Faulin
Department of Statistics and Operations Research
Public University of Navarre
Campus Arrosadia
31006 Pamplona, Navarra, Spain
E-mail: javier.faulin@unavarra.es

Fatos Xhafa
Department of Languages and Informatics Systems
Technical University of Catalonia
Jordi Girona 1–3
08034 Barcelona, Spain
E-mail: fatos@lsi.upc.edu

Abstract: We address the issue of monitoring students’ and groups’ activities
in online collaborative learning environments. Such monitoring can provide
valuable information to online instructors, who may guide and support the
development of efficient collaborative learning projects. We have developed
and tested an information system, Student Activity Monitoring using Overview
Spreadsheets (SAMOS), which facilitates the automatic generation of weekly
monitoring reports derived from data contained in server log files. These
reports provide online instructors with visual information regarding students’
and groups’ activity, thus allowing for a quick and easy classification
of students and groups according to their activity level. That way, entities
with a low activity level are easily identified, making it possible to offer
them just-in-time assistance. Furthermore, instructors can use
complementary qualitative monitoring reports, issued by the students
themselves, to anticipate potential problems – such as student dropouts or
possible conflicts inside the groups – and take operational and tactical decisions
to handle them.

Copyright © 2009 Inderscience Enterprises Ltd.



Keywords: collaborative learning; online learning; educational technology;
monitoring students’ activity; educational data analysis; just-in-time assistance;
learning technology.

Reference to this paper should be made as follows: Juan, A.A., Daradoumis,
T., Faulin, J. and Xhafa, F. (2009) ‘SAMOS: a model for monitoring students’
and groups’ activities in collaborative e-learning’, Int. J. Learning Technology,
Vol. 4, Nos. 1/2, pp.53–72.

Biographical notes: Dr. Angel A. Juan is an Associate Professor of Simulation
and Data Analysis at the Computer Sciences Department of the Open
University of Catalonia (Spain). He is also a Lecturer at the Technical
University of Catalonia. He holds a PhD in Industrial Engineering (UNED), an
MS in Information Technology (Open University of Catalonia) and an MS in
Applied Computational Mathematics (University of Valencia). His research
interests include computer simulation, applied data analysis and mathematical
e-learning. He has published several papers in international journals, books and
proceedings regarding these fields. As a Researcher, he has been involved in
several international research projects.

Dr. Thanasis Daradoumis holds a PhD in Computer Science from the
Polytechnic University of Catalonia (Spain), a Master’s in Computer Science
from the University of Illinois (USA) and a Bachelor’s degree in Mathematics
from the University of Thessaloniki (Greece). He is an Associate Professor at
the Department of Computer Sciences, Multimedia and Telecommunication of
the Open University of Catalonia. His research focuses on e-learning and
network technologies, web-based instruction and evaluation, distributed and
adaptive learning, CSCL, CSCW, interaction analysis and grid technologies.
He is Co-Director of the DPCS Research Laboratory (http://dpcs.uoc.es/).

Dr. Javier Faulin is an Associate Professor of Operations Research and
Statistics at the Public University of Navarre (Spain). He is also a Lecturer at
the UNED (Pamplona, Spain). He holds a PhD in Economics from the
University of Navarre (Pamplona, Spain), an MS in Operations Management,
Logistics and Transportation from UNED (Madrid, Spain) and an MS in
Mathematics from the University of Zaragoza (Zaragoza, Spain). He has
extended experience in distance and web-based teaching at several European
universities. His research interests include logistics and simulation modelling
and analysis. He has published several papers in international journals, books
and proceedings.

Dr. Fatos Xhafa received his PhD in Computer Science from the Polytechnic
University of Catalonia (Spain), where he is currently an Associate Professor at
the Department of Languages and Informatics Systems. His research interests
include parallel algorithms, combinatorial optimisation, approximation and
metaheuristics, distributed programming, grid and P2P computing. He has
published in leading international journals and served in the Organising
Committees of many conferences and workshops. He is also a member of the
Editorial Board of several international journals, including the International
Journal of Computer-Supported Collaborative Learning, Grid and Utility
Computing and Autonomic Computing.

1 Introduction

Information technologies offer new ways to communicate, collaborate and participate in
learning processes. Since technology is changing the methods through which education is
delivered and knowledge is constructed, colleges and universities across the world are
confronting several transformations which affect the nature of the courses and degree
programmes they offer. These technological innovations have also driven the growth of
distance learning opportunities, as students who are time bound (due to job or travel
difficulties) or place bound (due to geographic location or physical disabilities) can now
access courses and degree programmes at their convenience. Because of the rapid growth
of distance and global education, e-learning models are currently practised widely all
over the world. As Seufert et al. (2002) point out, “E-learning models can provide high
quality educational offerings at the same time they allow for convenient and flexible
learning environments without space, distance or time restrictions.”
Moreover, educational technologies facilitate the shifting from a traditional
educational paradigm – centred on the figure of a masterful instructor – to an emergent
educational paradigm which considers students as active and central actors in their
learning process. In this new paradigm, students learn, with the help of instructors,
technology and other students, what they will potentially need in order to develop their
future academic or professional activities. The instructor’s role, therefore, is moving from
one related to a knowledge transmission agent to another related to a specialist agent who
designs the course, guides, assists and supervises the student’s learning process
(Engelbrecht and Harding, 2005; Simonson et al., 2003).
In online learning environments like Moodle, Sakai, WebCT, Blackboard or EClass,
instructors provide students with course core materials and, additionally, with
complementary learning resources such as web links, overhead presentations,
software-based simulations, self-assessment tests, research articles and Java applets. At
the same time, they set up individual or collaborative learning activities to guide the
learning process, providing assistance at different levels while moderating and supporting
discussions in either small group or class forums. Online students, in turn, are encouraged
to use these resources, participate in learning activities and engage in collaborative tasks
where they have the opportunity to express ideas, discuss course topics and work out
complex deliverables.
In this paper we present Student Activity Monitoring using Overview Spreadsheets
(SAMOS), an information system designed to help online instructors to efficiently
monitor students’ and groups’ activities in e-collaborative scenarios. We first discuss the
importance of monitoring individual and group activity in the context of collaborative
e-learning. After that, we present the existing research work in this area and set up the
goals of our work, which aim to take current research one step further. Subsequently, we
describe the specific e-learning context (the collaborative e-learning scenario) in which
the monitoring model is built and the global functional scheme of our model. Then, we
explain how the monitoring reports are generated and used for producing awareness
information for decision support purposes. Finally, we analyse the usefulness of the
SAMOS information system in enhancing participation, collaboration and performance in
the whole learning process.

2 The need for monitoring activity in collaborative e-learning

Despite the benefits that internet-based education can offer both to students and
instructors, it also presents some important challenges. Typically, any type of distance
education programme presents higher dropout rates than more conventional programmes
due to different factors (Levy, 2007; Sweet, 1986; Tyler-Smith, 2005; Xenos et al.,
2002). The nature of distance education can create a sense of isolation in learners, and
students can feel disconnected from the instructor, the rest of the class and even the
institution. This is often the case among adult students who must combine
work, family and academic activities. It is necessary, then, that instructors provide
just-in-time guidance and assistance to students’ activities and also that they provide
regular feedback on those activities. Furthermore, communication among students needs
to be facilitated and promoted by instructors, who must encourage students’ participation
in the web spaces devoted to that function.
Unfortunately, it is very difficult and time consuming for instructors to thoroughly
track all the activities performed by each student in these e-learning environments. It is
even much more complex to figure out the interactions taking place among students
and/or groups of students, that is:
• to identify actors – the groups’ leaders and followers
• to detect students that are likely to drop out of the course
• to perceive possible group-internal conflicts or malfunctions before it is too late to
manage these problems efficiently.

Monitoring students’ and groups’ activities can help one to understand these interactions
and anticipate these potential problems, which, in turn, can give important clues on how
to organise learning exercises more efficiently and thus achieve better learning outcomes
(Daradoumis et al., 2006b; Dillenbourg, 1999).
Monitoring reports can be used by instructors to easily track learners’ online
behaviour and each group’s activity at specific milestones, to gather feedback from the
learners and to scaffold groups with a low degree of activity. Regulation has a time
dimension: instructors have to know both the groups’ and the students’ activity
performance as the learning process develops. The regulation process can thus be a
means for instructors to provide just-in-time assistance according to the groups’ and
students’ needs.

3 Review of existing research

Due to its importance, several works in the Computer-Supported Collaborative Learning
(CSCL) literature, and especially the ones related to online collaborative learning, have
addressed the monitoring issue from different perspectives. Even so, they all have a
very limited scope and do not address the most practical issues. Rather, they are concerned
with conceptual aspects of regulation (Guerrero et al., 2005; Jeong, 2004; Joyes and
Frize, 2005). Other studies have adopted ad hoc support strategies
(Zhang et al., 2004).

There is also a wide variety of proposed methods to monitor group and individual
activity in online collaborative learning. These methods include statistical analysis, social
network analysis and monitoring through shared information and objects (Martínez et al.,
2003; Mazza and Milani, 2005; Reffay and Chanier, 2002). Moreover, there exist some
differences as regards the sources of information used for monitoring: log files of
synchronous and asynchronous communication, bulletin boards, electronic discussion
information reports, video, etc.
As some authors recognise, instructors participating in online learning environments
have very little support in terms of integrated means and tools to monitor and evaluate
students’ activity (Jerman et al., 2001; Zumbach et al., 2002). As a consequence, this
monitoring process constitutes a difficult task which demands a lot of resources and
expertise from educators.
‘Regulation’ is a term that has been commonly used during the last few years
for controlling the learning process. Regulation approaches support collaboration by
taking actions ‘on the fly’ during the interaction (Jermann and Dillenbourg, 2008).
Regulating or monitoring interaction is a complex skill that requires a quick appraisal of
the situation based on the comparison of the current interaction with a model of the
desired interaction (Chen, 2003). Computational methods for coaching the interaction
encounter difficulties in the automatic detection of meaningful patterns of action as well
as in the definition of indicators that define a model of productive collaborative
interaction. A comprehensive review of the various technological means that can
support regulating processes can be found in Soller et al. (2005). In addition, a
further comparative study examines the effect of scaffolding learning components in
a computerised environment for students solving qualitative science problems in a
simulation of laboratory experiments (Fund, 2007).
Successful learning seems to depend on more rather than less intense involvement
of the teacher. In this sense, Vreman-de Olde and De Jong (2006) implemented
appropriate scaffolding (support) programmes – operative, integrated and strategic
– using four support components that were found to be effective in computerised
learning environments: structure, reflection, subject-matter and enrichment. The
programmes provided appropriate worksheets (for each student and for each
problem) and were found to be a suitable way of scaffolding students both in the overall
structure and in the specific reasoning steps. This enabled regulation of the problems to
be solved in each lesson.
Other research works have investigated the way graphical feedback affects
monitoring and evaluation of learners’ activity in online learning environments, where
teachers have limited access to group processes, either because they do not have access to
transcripts of online discussions, or because the number of groups they support is too
large. For instance, Zumbach et al. (2004) considered feedback a crucial aspect of the
learning process, and so they suggested the generation of both interaction and problem-
solving feedback. Their system collects data about participation, motivation – through
self-reports – and problem-solving capabilities, and feeds it back to the learners in an
aggregated manner. In their studies, they analysed the effect of feedback on students’
performance during two different course periods. On the one hand, providing feedback in
a short-term learning practice did not present a positive effect on knowledge growth;
however, it did have positive results regarding motivation and participation. On the other
hand, in a long-term course that lasted over four months, motivational feedback showed
an effective influence on group functioning. A different type of feedback can be related
to task performance. In general, it has been observed that feedback has a real effect on
knowledge acquisition and the problem solution quality after several weeks, and it leads
to a higher degree of reflection concerning students’ organisation and coordination.
Losada et al. (1990) also found that providing students with group process feedback
contributes to significantly increasing the number of interactive sequences in a CSCL
environment. They also proposed the use of time series analysis to detect patterns of
interactive sequences in these environments.
Moreover, Janssen et al. (2007) tested a method for visualising the participation of
groups of three to four members in a CSCL environment dedicated to inquiry learning.
An important result of this experiment was that the visualisation did not positively impact
the performance of the groups. However, the feedback impacted the generation of more
fruitful and longer dialogues, which in turn contributed to more efficient information
transfer and coordination activities, as well as to a more equal participation by group
members. Groups which were supported with feedback were also more proactive in the
planning of their group process. Morch et al. (2005) focused on how groups of dispersed
people can work and learn together through enriched web-based environments
capable of capturing important information from group interaction; so far, however,
such environments fail to provide a useful and easily handled representation of this
information, which may have a complex structure.
In addition to the task or group functioning type of feedback, there is the need
to consider social (e.g., group cohesion) and affective (e.g., motivation) indicators
that provide a different but very useful dimension of group dynamics and member
relationships (Vassileva and Sun, 2007). In this line, the approach developed by Reimann
and Zumbach (2003) urges individual members to indicate/express their motivation
on their own, calculates ‘individual motivation’ over time and presents it via a graph
per person, whereas Cheng and Vassileva (2005) present it via a ‘status in the society’
– gold, bronze, silver member – which in turn produces both a hierarchical membership
and a reward system.
Regulation and scaffolding can be applied beyond e-learning environments, such as in
face-to-face classroom interaction. To this end, DiMicco et al. (2007) developed an
approach that provides visual real-time automated feedback to small groups of four
members who interact face to face to solve a hidden profile task. The report produced
about student participation, which included detailed instructions about the functioning of
the tool and its applicability to the task, proved to be beneficial to the students. However,
the evaluation and analysis process followed is rather subjective. In this line, Liu and Kao
(2007) carried out a study that proposed a design of classrooms that incorporates a
personal workspace and a public workspace. Students use handheld devices as their
private workspace and work with peers in the public workspace with shared displays
through their handheld devices. The proposed shared display groupware promoted a
shared understanding of the workspace and increased awareness of partner actions.
Finally, the work of Collazos et al. (2003) proposed an integration of different
sources of data in the analysis and evaluation of different collaborative learning processes
in order to promote positive interdependence among group members; however, they
based their findings on laboratory experiments. In fact, existing approaches
have not yet managed to meet regulation needs satisfactorily, since most of them focus on
experimental situations, which do not exactly reflect the issues and problems of a real
situation. Our work faces this challenge by developing a new approach – a monitoring
system – that has been used and tested in real, long-term and quite complex online
collaborative learning settings.

4 Objectives and scope of our work

The proliferation of web-based collaborative learning practices in most e-learning
environments, as a means of enhancing students’ involvement and interaction, together
with the significant increase in the number of students involved, has made it necessary
to develop and provide new means of support for online instructors, who need
nonintrusive and automatic ways to get feedback on learners’ progress in order to better
follow their learning process and assess the online course’s effectiveness. Designing efficient
monitoring tools for online collaborative environments is certainly a complex task. This
is partly due to the lack of practical models that have already been tested in real situations
involving a considerable number of students, groups and instructors. Therefore, the main
goal of this work is to develop, implement and test a practical information system that
allows instructors in our e-learning environment to efficiently monitor students’ and
groups’ activities in collaborative e-learning courses.
Even though the model presented in this paper has been designed to meet our
environment’s specific requirements, it can serve as a conceptual framework for tracking
groups’ and individuals’ activity in any e-learning environment. In particular, it can be
especially useful in those collaborative e-learning courses that:
• span over one or more semesters
• involve a large number of groups and students that need to carry out a continuous
and intensive collaborative activity
• need to analyse and evaluate specific situations at different granularity levels, e.g., at
the instructor level, to determine students’ active participation, task contribution and
performance, group functioning, and conflict detection and resolution; and at the
course manager level, to study dropout rates, course success and students’ global
satisfaction, as well as students’ motivation and the class’s overall social cohesion.

5 The collaborative e-learning scenario

In order to design our monitoring system for our e-learning environment, we considered a
common scenario where groups of students have to develop long-term projects, which are
problem-based collaborative learning practices. Such projects are organised in terms of
several phases, each of them corresponding to a target goal. The instructional design of
each target goal includes several learning tasks, adequately linked to each other, which
students should carry out individually (such as readings) or collaboratively (such as group
activities and exercises) in order to achieve the corresponding goal. In addition, the
design of some target goals also involves the realisation of specific asynchronous debates
at the group or class level, aimed at reaching decisions on a set of specific questions. These
projects are carried out in the scope of several distance learning undergraduate courses,
which typically run over a period of 15 weeks. Each of these courses involves one
academic coordinator, several instructors (one for each virtual class) and the class of
students (about 50 per class) distributed among different online groups of three to five
members each (Figure 1).

Figure 1 The collaborative e-learning scenario (see online version for colours)

As a way to support the collaborative e-learning practices, we used the Basic Support for
Cooperative Work (BSCW) system,1 a web-based groupware tool that enables
asynchronous and synchronous collaboration over the web (Bentley et al., 1997). This
system, like any other similar online collaborative environment, offers shared workspaces
that groups can use to store, manage, jointly edit and share documents, to realise threaded
discussions, etc. Additionally, the BSCW server keeps log files which contain all the
actions (events) performed by group members on shared workspaces, as well as detailed
information about these actions: user identification, event type, timestamp, associated
workspace, affected objects, etc.
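As a rough illustration of the kind of per-event records just described, such a log entry could be parsed along the following lines. This is a minimal Python sketch; the tab-separated field layout is a hypothetical simplification, not the actual BSCW log format:

```python
from datetime import datetime

# Hypothetical tab-separated layout: user_id, event_type, timestamp,
# workspace, affected_object. Real BSCW log files are structured
# differently; this sketch only illustrates the fields named in the text.
def parse_log_line(line):
    user_id, event_type, timestamp, workspace, obj = line.rstrip("\n").split("\t")
    return {
        "user": user_id,
        "event": event_type,
        "time": datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S"),
        "workspace": workspace,
        "object": obj,
    }

sample = "u042\tCreateDocument\t2009-03-12 10:15:00\tgroup3-workspace\treport_v1.doc"
event = parse_log_line(sample)
print(event["user"], event["event"], event["workspace"])
# → u042 CreateDocument group3-workspace
```

Once events are available in this structured form, they can be stored in a database and aggregated per student, group, week and event type, as the following sections describe.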
Even though most e-learning environments offer some simple monitoring tools, they
are still very limited for practical purposes and do not meet the information needs of
online instructors. Enhanced regulation facilities can play an important instructional and
social task since they lead participants to think about the activity being performed, to
promote collaboration through the exchange of ideas that may arise, to propose new
resolution mechanisms, and to justify and refine their own contributions and thus acquire
new knowledge (Stahl, 2006). As a matter of fact, the developers of the BSCW system
recognise the need for powerful monitoring models and tools. To this end, in the first
stage, our model will make use of quantitative data contained in BSCW log files to
automatically generate specific, on-demand, visual reports that summarise relevant
information regarding students’ and groups’ activity. Subsequently, these reports can be
combined and contrasted with qualitative data derived from self-, peer and group
evaluation reports, which are designed and produced at particular milestones and provide
specific complementary information about task contribution and performance, group
functioning and group cohesion/socialisation. This helps the academic agents – the
instructors or course managers – to interpret the quantitative reports and the learners’
situation in a more effective and contextualised way. As a consequence, we provide an integrated
approach and a more complete information system capable of managing different types of
information, which can in turn be combined and converted into knowledge that can be
used for offering different types of regulation facilities. The next section describes all the
steps involved in the construction of the whole monitoring system.

6 The global scheme of the monitoring system

Figure 2 shows the global scheme of the monitoring system that we developed and tested
in real collaborative learning settings. The general functioning of this model is as follows:
1 Students perform activities in the collaborative web spaces assigned to their
working group: they post or read notes in forums, send or read e-mails, upload
or download documents, manage folders and files, etc. Each of these activities
can be considered an event of a certain type which has been carried out by a
particular student at a certain time and web space. Since one of the main goals
of this work is to identify students and groups with a low activity level, we will
consider all kinds of academic actions that students perform on the web server
and that are registered in the server log files. In some previous work, we discuss
theoretical frameworks where activity types are categorised and individually
analysed (Daradoumis et al., 2006a–b).
2 The events generated by students are registered in log files at the server
(a BSCW server in our case, but it could be any other server such as Moodle, Sakai
or e-GroupWare).
3 A specific Java application, called EICA, is used to automatically read and process
new incoming log files and to store the extracted data into a single persistent
database on the corresponding server.
4 The database files are then processed by the SAMOS application. SAMOS was
specifically designed and developed as a practical Excel/VBA tool that uses Excel’s
numerical, graphical and programming capabilities (Albright, 2006; Zapawa, 2005)
to generate weekly reports which summarise group and student activity levels in a
graphical manner. The details regarding the design of these reports, which represent
the core part of our model, are explained in Section 7.
5 The server automatically sends these reports to the instructors by e-mail.
6 The instructors receive and analyse these reports, looking for groups and
students that seem to be ‘at risk’: students with low activity levels, who are
likely to be nonparticipating students and potential dropouts, and groups with
low activity levels, which may be malfunctioning groups.
7 These results are then combined and contrasted with the qualitative self-, peer and
group evaluation reports which are generated by the students themselves. Notice that
quantitative reports can act as real-time alarms that help to detect problems before
they get out of control: qualitative reports usually take some time to be processed by
the instructor while quantitative ones give real-time information about the learning
process. This way, instructors can use quantitative reports first to focus their
attention on a subset of qualitative reports that are likely to need a quick intervention
from them.
8 Once the groups and students at risk have been detected, and the specific problems
have been identified and classified according to whether they refer to task, group
functioning or group social cohesion, the instructors contact them to offer specific
guidance and support towards the best development and completion of their projects.
The specific actions to be performed by the instructors depend on the characteristics
of the current learning activity and the type of problem detected. In any case, the
important point here is that the instructors become aware of the low-activity
problems as soon as they appear and, therefore, they can react on time, which adds
value to their role as supervisors of the learning process. This way, students and
groups at risk receive just-in-time and efficient guidance and support for them to
enhance and continue their individual or collaborative work more successfully.
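Steps 4 and 6 above can be sketched as follows: weekly event counts are aggregated per group, and any group whose current-week activity and historical weekly average both fall below the class means is flagged as ‘at risk’. This is a simplified Python sketch under assumed data shapes; the actual SAMOS tool is an Excel/VBA application and works with per-member averages and richer report formats:

```python
from statistics import mean

# events: (group_id, week_number) pairs, as might be extracted from the
# log database (a hypothetical, simplified shape).
def at_risk_groups(events, current_week):
    counts = {}
    for group, week in events:
        counts.setdefault(group, {}).setdefault(week, 0)
        counts[group][week] += 1
    # X = events generated during the current week; Y = average events per week
    x = {g: weeks.get(current_week, 0) for g, weeks in counts.items()}
    y = {g: mean(weeks.values()) for g, weeks in counts.items()}
    x_bar, y_bar = mean(x.values()), mean(y.values())
    # 'At risk': below both the current-week mean and the historical mean
    return [g for g in counts if x[g] < x_bar and y[g] < y_bar]

events = [("G1", 1), ("G1", 1), ("G1", 2), ("G1", 2), ("G1", 2),
          ("G2", 1), ("G2", 2),
          ("G3", 1), ("G3", 1), ("G3", 2), ("G3", 2)]
print(at_risk_groups(events, current_week=2))
# → ['G2']
```

Applying the same two thresholds symmetrically, rather than only to the below-both-means case, yields a four-way classification of groups, which is how the weekly reports present the data.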

Figure 2 General scheme of our monitoring model (see online version for colours). The diagram depicts the flow: (1) students generate events (actions); (2) events are registered in log files on the web server; (3) data preprocessing (EICA) loads them into the database server; (4) reports are generated (SAMOS); (5) quantitative reports are sent to instructors by e-mail; (6) quantitative reports are analysed by instructors; (7) group leaders send qualitative reports to instructors; (8) instructors contact students ‘at risk’ to offer support.

7 The SAMOS monitoring reports

The design of our system is based on our ten years’ practical experience as online
instructors and also on the informational needs that some colleagues from different
knowledge areas have suggested to us. In line with this, we chose to generate weekly
monitoring reports with the aim of showing a small set of graphs that could be easily and
quickly understood by instructors, so that they did not have to invest extra time in
analysing data. For flexibility and efficiency reasons, we decided that these graphs
should contain only critical information about groups’ and students’ activity levels.
Furthermore, they should provide instructors with a rough classification for each kind of
entity – groups and students – according to their corresponding activity levels.
Specifically, they should allow instructors to easily identify those groups and students
that tend to maintain significantly low activity levels, and consequently to point out
those entities that are likely to need just-in-time guidance and assistance. These graphs
should also provide information about the historical evolution of each group’s activity
with respect to the rest of the class groups, as well as information about the historical
evolution of each student’s activity with respect to the rest of the group members. Having
these considerations in mind, we designed four charts, as follows:
1 groups’ classification graph – This chart (Figure 3) is a scatter plot of the following
two variables: X = ‘average number of events per member that have been generated
by group i during this (current) week’ (i = 1,2,…, n) and Y = ‘average number of
events per member that have been generated by group i during an average week’.
The plot also includes the straight lines x = x̄ and y = ȳ, which divide the graph
into four quadrants, Q1 to Q4. That way, points in Q1 can be seen as representing
heading groups since their activity levels are above the two activity means – current
week and average week; points in Q2 can be considered lowering groups, since
even though historically their activity level has been above the activity level for an
average week, their current activity level is below the average; points in Q3
represent those groups which are below the two activity means – currently and
historically – and, therefore, they can be considered groups at risk, since they are
the most likely to suffer from low task contribution, group malfunctioning, lack of
social cohesion and eventually from student dropouts; finally, points in Q4 can be
seen as improving groups, since even though their activity level has been historically
below the mean, their level has been above the mean during the current week, so
they are experiencing some improvement in their activity level. Note that these
interpretations become stronger as the distance between the considered point and the
dividing lines increases; e.g., given its distance from the x̄ line, there is
good evidence that the activity of the group in Q4 improved considerably during the
current week.
2 students’ classification graph – This chart is similar to the one before. The only
difference is that now the points represent students instead of groups. Therefore, this
graph allows an easy identification of those students who are at risk – that is,
students whose activity levels are below the current week average and below the
historical week average. Analogously to what happened with the groups, students
can also be classified as improving, lowering or heading depending on the quadrant
they belong to.

3 group’s activity-evolution graph – There is a chart for each group of students


(Figure 4). For any given group, the corresponding chart shows:
• a time series representing the group’s actual historical evolution – that is, the
number of events generated by the group each week
• two smooth bands which provide the lower (LQ) and higher (HQ) quartiles
associated with the distribution of the weekly events generated by all
groups – this way, it can be immediately checked whether the group is
performing above the third quartile, below the first one, or in between
• an exponentially smoothed line, using a smoothing factor of α = 0.3 (Berk and
Carey, 2000), which gives a forecast for the group’s activity for the following
week. This chart allows the instructor not only to follow but also to predict the
group’s evolution throughout the course.

4 group members’ accumulated activity graph – There is also a chart for each group
member. Given a group, the corresponding graph shows the percentage contribution
of each member with respect to the total activity developed by the group until the
current week (Figure 5). From this chart, group leaders and non-participating
members can be easily identified, allowing instructors to immediately activate
policies aiming at preventing negative situations such as inefficient or unbalanced
distribution of task contribution or student abandonment. Such a policy concerns the
production of a specific qualitative report by the group coordinator, which helps
the instructor identify and evaluate exactly what the real problem is and thus take the
best decision about how to support and guide the needed students and the group
as a whole.
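To make the logic behind these four charts concrete, the following sketch shows how the quadrant labels, the accumulated contribution percentages and the α = 0.3 exponentially smoothed forecast might be computed from weekly event counts. Function names, the handling of points lying exactly on a mean line, and the initialisation of the smoothing are our own illustrative choices, not details specified by SAMOS.

```python
from statistics import mean

def classify(current_week, average_week):
    """Assign each entity (group or student) to a quadrant of the
    classification graph: Q1 heading, Q2 lowering, Q3 at risk,
    Q4 improving. Points exactly on a mean line are treated as
    'above' it (an arbitrary choice)."""
    x_bar, y_bar = mean(current_week), mean(average_week)
    labels = []
    for x, y in zip(current_week, average_week):
        if x >= x_bar:
            labels.append("heading" if y >= y_bar else "improving")
        else:
            labels.append("lowering" if y >= y_bar else "at risk")
    return labels

def cumulative_shares(events_by_member):
    """Percentage contribution of each member to the group's total
    accumulated activity up to the current week (as in Figure 5)."""
    total = sum(events_by_member.values())
    return {m: 100.0 * e / total for m, e in events_by_member.items()}

def smoothed_forecast(weekly_events, alpha=0.3):
    """Exponential smoothing with factor alpha = 0.3; the running
    smoothed value doubles as the forecast for the following week."""
    forecast = weekly_events[0]          # initialise with first observation
    for observed in weekly_events[1:]:
        forecast = alpha * observed + (1 - alpha) * forecast
    return forecast
```

For example, `classify([10, 2, 5, 8], [8, 6, 3, 4])` labels the four groups heading, lowering, at risk and improving respectively.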

Figure 3 Groups’ classification graph (see online version for colours)

Figure 4 Group activity graph (see online version for colours)



Figure 5 Group members’ accumulated activity graph with no problem detection (see online
version for colours)

As shown in Figure 5, there is a somewhat uneven distribution of the activity presented
by two members, 'mfarrenyf' and 'ajove', when compared with the other two members.
This situation becomes especially evident after Week 8. However, this does not raise
an alarm for the instructor, since the activity line of each member tends to become more
stable after Weeks 3 and 4. Note that the graph represents the accumulated events
produced by each member since the beginning of the groupwork, so it is normal that
there appear gaps or significant differences in the members’ activity during the first two
or three weeks. If a member’s activity gets stabilised afterwards, even though there can
be small fluctuations up or down, there is probably no problem to worry about. This
fact can be checked furthermore against the qualitative self-, peer and group reports
generated at specific milestones. The role of these reports is very important for leading to
more contextualised problem detection and more efficient decision making. In this case,
these reports indicated that the two most active members, 'mceron' and 'jcasasusd',
seemed to act as the group leaders in the sense that they were taking the initiative in most
phases of the project development and presenting higher activity levels in
communicational and organisational aspects of the group work. Consequently, the
contribution, participation and involvement of the other two members were normal, i.e.,
no real problem was detected in task, group functioning or group social/cohesion
aspects, thus confirming that the overall group work and learning process were
evolving smoothly.
Figure 6, on the contrary, shows a different group situation which illustrates the
power and, at the same time, the simplicity of our system in helping instructors to detect
insufficient interaction on time and intervene to correct it. In particular, we can observe
two critical points in the group members’ activity graph. The first one refers to student
‘avilage’, whose activity starts and continues to be low for the first two weeks. This sends
an alarm to the instructor to figure out what the problem is with the student. The fact that
the learning activity is encountered at the beginning of the quarter urges the instructor to
apply a policy according to which the instructor contacts the student immediately and
tries to solve the problem through discussion and orientation. The student was indeed

confused in the beginning and did not manage to adapt either to the group dynamics and
organisation or to the project demands. Had the student been left alone in this
delicate situation, she would most probably have dropped out of the course after a while.
However, the timely detection of the problem by the system and the prompt guidance of
the instructor proved to be effective, as shown by the improvement and normalisation of
the students’ activity in the following weeks. Although the student’s performance was not
outstanding and she did not reach the same level as her peers, she finally managed to
complete the project successfully, a fact which is preferable to dropping out.

Figure 6 Group activity graph with problem detection for some members (see online version
for colours)

The second case worth explaining concerns the activity evolution of a student,
'aferrert', who starts well but after Week 4 begins to show a continuous decrease
in activity. As we said above, since the system shows the accumulated activity of a
member, it produces an alarm only when an abrupt activity decline occurs or when
there is a smooth but continuous activity slowdown. The latter occurs in the case of this
student, whose activity level changes negatively in Week 4 and then presents a slight but
continuous decrease for three more weeks. As a consequence, this prompted the
instructor's intervention in Week 7. After examining the qualitative reports, the instructor
identified that the problem concerned some aspects of group work, such as reduced
contribution both to task and group functioning. Again, the prompt identification and
treatment of the problem caused a positive reaction in the student for the next three
weeks, which resulted in an improvement in her degree of contribution and overall
performance. Though her activity started to go down again in the last four weeks, this
decline was quite mild and had no consequences for either her contributing behaviour or
the overall project development. Finally, Figure 6 shows another interesting detail:
it seems that student 'cserrato' acted as a leader in the sense that, at least partially,
this student took the reins of the group at all levels, i.e., task, group functioning and
social aspects.

In sum, as a result of the continuous and effective monitoring reports facilitated by
SAMOS and the efficient policies applied by the instructor, the group managed to fulfil
the project goals successfully, both at individual and group levels.

8 Model validation

After ten years of experience as e-learning instructors, we strongly believe in the value
that tools such as SAMOS can add to the online teaching process. To support this belief
with some empirical evidence, we decided to carry out an experiment in order to test
whether the information provided by SAMOS may significantly influence groups’ and
students’ performance in real collaborative learning situations. For that purpose, we
defined the following three indicators:
1 percentage of sampled groups which finished their project according to its initial
specifications (PGF)
2 percentage of sampled groups which received a positive evaluation of their project at
the end of the semester (PGP)
3 percentage of sampled groups which experienced dropouts (PGD) – that is, some of
the group members abandoned the course during the semester.
The idea was to compare the historical values associated with each of the former
indicators with the corresponding values obtained after the implementation of
SAMOS. Should we find significant differences (hopefully improvements) between
before-SAMOS and after-SAMOS values, we would be able to conclude that some
empirical observations support the goodness of our model. Otherwise, we would not have
enough empirical evidence of SAMOS adding value to the online teaching process. The
methodology used to perform this comparison was as follows:
1 At the beginning of the second semester of the 2006/2007 academic year, a
random sample of size n = 40 was drawn from the population of student groups
that were participating in several collaborative e-learning courses.
2 During the semester, the instructors of these selected groups were provided with
weekly reports generated by SAMOS, so that they could detect students and groups
at risk and provide them with just-in-time guidance and support. They did not
receive any special training, just a one-hour online session to introduce the system
and to give some examples about how to analyse the different graphs.
3 Meanwhile, we used historical data associated with those selected instructors to
calculate before-SAMOS values for indicators PGF, PGP and PGD, which we
denoted p⁰_PGF, p⁰_PGP and p⁰_PGD respectively. The second column in Table 1 shows
the calculated values for these indicators.
4 At the end of the semester, we used data obtained from the randomly selected groups
to calculate the after-SAMOS values for indicators PGF, PGP and PGD. We denoted
these values by p¹_PGF, p¹_PGP and p¹_PGD respectively. The third column in Table 1
shows the calculated values for these indicators, where numbers between parentheses

represent absolute frequencies. Notice that, as expected, these values always improved
on the corresponding values calculated in the previous step (those obtained from
historical data): higher for PGF and PGP, lower for PGD.
5 Next, we used statistical inference techniques (Montgomery and Runger, 2006) to
perform three two-sided hypothesis tests (one per indicator) over the differences
between before- and after-SAMOS values. The tests are:
• H0: p⁰_PGF − p¹_PGF = 0 versus HA: p⁰_PGF − p¹_PGF ≠ 0
• H0: p⁰_PGP − p¹_PGP = 0 versus HA: p⁰_PGP − p¹_PGP ≠ 0
• H0: p⁰_PGD − p¹_PGD = 0 versus HA: p⁰_PGD − p¹_PGD ≠ 0.

Notice that since we used data from the same set of instructors (for both the
before- and after-SAMOS calculations), we minimised the 'instructor factor', i.e., the
possibility that significant differences between before- and after-SAMOS indicators
were mainly due to the instructors' ability and skills to detect and prevent potential
problems rather than to the usefulness of the information provided by the system.
6 Then, we calculated the p-values associated with each of these tests, which are
registered in Column 4 of Table 1.
7 Using the former p-values and a standard significance level of α = 0.05, the
corresponding result for each test was registered in the last column of Table 1.
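The paper does not spell out the exact test statistic used in step 5, but a simple two-sided one-sample proportion z-test, treating each historical before-SAMOS proportion as a known reference value, reproduces the PGF p-value reported in Table 1; the authors' exact variant (and hence the other p-values) may differ slightly. The sketch below illustrates this assumed approach:

```python
from math import sqrt, erf

def two_sided_proportion_test(p_hat, p0, n):
    """Two-sided z-test of H0: p = p0 versus HA: p != p0 for an
    observed sample proportion p_hat out of n trials, using the
    normal approximation with variance p0 * (1 - p0) / n."""
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    # standard normal CDF at |z| via the error function
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2.0)))
    return 2 * (1 - phi)

# PGF: 75% of the n = 40 sampled groups finished vs. 55% historically
p_value = two_sided_proportion_test(0.75, 0.55, 40)
print(round(p_value, 3))  # 0.011, matching Table 1
```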
As can be concluded from the results in Table 1, the tests associated with indicators PGF
and PGD were statistically significant, meaning that statistical evidence supports the idea
that the information provided by SAMOS contributed to significantly enhance the PGF
and PGD indicators in the collaborative e-learning scenario where the experiment was
carried out. Also, even though we failed to reject the null hypothesis in the PGP test,
this was a borderline decision (since the p-value was only slightly above the selected
significance level). Finally, notice that the high attrition rate itself (43% in the
before-SAMOS condition and 25% in the after-SAMOS condition) may have a direct
impact on the ability of the groups to finish their projects, i.e., the significant reduction in
attrition itself might be enough to significantly increase the groups’ ability to complete
their projects.

Table 1 Hypothesis tests about the population proportions

Index i   p⁰ᵢ (before-SAMOS,   p¹ᵢ (after-SAMOS,        p-value   Test result (α = 0.05)
          historical data)     random sample n = 40)
1. PGF    55%                  75% (30)                 0.011     Reject H0
2. PGP    49%                  65% (26)                 0.056     Fail to reject H0
3. PGD    43%                  25% (10)                 0.025     Reject H0

9 Future work and conclusions

Since a pilot version of the SAMOS system has been validated in a real scenario, we are
now focusing on the development of an open source version of this system. This version
of SAMOS will be based on Apache, MySQL and PHP. Also, several versions of the
EICA application are being developed, one for each major online platform other than
BSCW (Moodle, Sakai, WebCT, etc.). This will allow running the same version of
SAMOS over any of these e-learning environments.
In this work we coped with two major related problems in distance learning courses:
1 ensuring that students will reach a satisfactory level of involvement in the
learning process
2 avoiding high dropout rates caused by the lack of adequate support and guidance.
These problems are even more critical in collaborative e-learning scenarios, where
individual dropouts or individual low-level involvements could force groups to lose
cohesion, face anxiety or spend too much time and effort rearranging their activities,
which may cause a slowdown or even a breakdown of the group’s activity.
Regulating students’ and groups’ activity can be very useful in identifying
nonparticipating students or groups with unbalanced task contribution, group
malfunctioning or social problems. This identification process, in turn, allows instructors
to intervene whenever necessary to ensure and enhance student’s involvement and
performance in the collaborative learning process.
The SAMOS monitoring system model presented in this paper has been successfully
used to track groups’ and students’ activities in several undergraduate online courses
offered in a web-based learning environment. These courses involve long-term,
project-based collaborative learning practices. Weekly monitoring reports are used by
instructors to easily track down the students’ and groups’ activities at specific milestones,
gather feedback from the learners and scaffold groups with a low degree of activity, or
groups with specific problems. SAMOS has proved to be an innovative monitoring tool
for our online instructors, since it provides them with prompt and valuable information
which adds value to their role as supervisors/regulators of the learning process and allows
them to offer just-in-time guidance and assistance to students and groups. In our opinion,
this model can serve as a practical framework for other universities offering collaborative
e-learning courses.

Acknowledgements

This work has been partially supported by the Spanish Ministry of Science and
Innovation under grants EA2007-0310 and EA2008-0151. It has been also supported by
the UOC Innovation Vice-rectorate under grant IN-PID0702.

References
Albright, C. (2006) VBA for Modelers: Developing Decision Support Systems Using Microsoft
Excel, Duxbury Press.
Bentley, R., Appelt, W., Busbach, U., Hinrichs, E., Kerr, D., Sikkel, S., Trevor, J. and Woetzel, G.
(1997) ‘Basic support for cooperative work on the World Wide Web’, International Journal of
Human-Computer Studies, Vol. 46, No. 6, pp.827–846.
Berk, K. and Carey, P. (2000) Data Analysis with Microsoft Excel, Duxbury Press.
Chen, M. (2003) ‘Visualizing the pulse of a classroom’, Proceedings of the Eleventh ACM
International Conference on Multimedia, Berkeley, California, USA, pp.555–561.
Cheng, R. and Vassileva, J. (2005) ‘User- and community-adaptive rewards mechanism for
sustainable online community’, User Modeling, pp.332–336.
Collazos, C., Guerrero, L., Pino, J. and Ochoa, S. (2003) ‘Collaborative scenarios to promote
positive interdependence among group members’, in J. Favela and D. Decouchant (Eds.)
Proc. of the 9th Int. Workshop on Groupware (CRIWG 2003), Grenoble-Autrans, France,
LNCS 2806, Springer-Verlag, pp.247–260.
Daradoumis, T., Martínez, A. and Xhafa, F. (2006a) ‘A layered framework for evaluating online
collaborative learning interactions’, International Journal of Human-Computer Studies,
Vol. 64, No. 7, pp.622–635.
Daradoumis, T., Xhafa, F. and Juan, A. (2006b) ‘A framework for assessing self, peer and group
performance in e-learning’, Self, Peer, and Group Assessment in Elearning, IGI Global,
pp.279–294.
Dillenbourg, P. (Ed.) (1999) Collaborative Learning: Cognitive and Computational Approaches,
Elsevier Science.
DiMicco, J.M., Hollenbach, K.J., Pandolfo, A. and Bender, W. (2007) ‘The impact of
increased awareness while face-to-face’, Special issue on ‘Awareness systems design’,
Human-Computer Interaction, Vol. 22, No. 1.
Engelbrecht, J. and Harding, A. (2005) ‘Teaching undergraduate mathematics on the internet.
Part 1: technologies and taxonomy’, Educational Studies in Mathematics, Vol. 58, No. 2,
pp.235–252.
Fund, Z. (2007) ‘The effects of scaffolded computerized science problem-solving on achievement
outcomes: a comparative study of support programs’, Journal of Computer Assisted Learning,
Vol. 23, pp.410–424.
Guerrero, L., Madariaga, M., Collazos, C., Pino, J. and Ochoa, S. (2005) ‘Collaboration for
learning language skills’, Proceedings of 11th International Workshop on Groupware
CRIWG’05, Pernambuco, Brazil, pp.284–291.
Janssen, J., Erkens, G., Kanselaar, G. and Jaspers, J. (2007) ‘Visualization of participation: does
it contribute to successful computer supported collaborative learning?’, Computers and
Education, Vol. 49, No. 4, pp.1037–1065.
Jeong, A. (2004) ‘The combined effects of response time and message content on growth patterns
of discussion threads in computer-supported collaborative argumentation’, Journal of Distance
Education, Vol. 19, No. 1, pp.36–53.
Jermann, P. and Dillenbourg, P. (2008) ‘Group mirrors to support interaction regulation in
collaborative problem solving’, Computers & Education, Vol. 51, No. 1, pp.279–296.
Jerman, P., Soller, A. and Muhlenbrock, M. (2001) ‘From mirroring to guiding: a review of state
of the art technology for supporting collaborative learning’, Proceedings of EuroCSCL,
Maastricht, The Netherlands, pp.324–331.
Joyes, G. and Frize, P. (2005) ‘Valuing individual differences within learning: from face-to-face to
online experience’, International Journal of Teaching and Learning in Higher Education,
Vol. 17, No. 1, pp.33–41.
Levy, Y. (2007) ‘Comparing dropouts and persistence in e-learning courses’, Computers &
Education, Vol. 48, pp.185–204.

Liu, C. and Kao, L. (2007) ‘Do handheld devices facilitate face-to-face collaboration? Handheld
devices with large shared display groupware to facilitate group interactions’, Journal of
Computer Assisted Learning, Vol. 23, No. 4, pp.285–299.
Losada, M., Sánchez, P. and Noble, E. (1990) ‘Collaborative technology and group process
feedback: their impact on interactive sequences in meetings’, Proceedings of the 1990 ACM
Conference on Computer-Supported Cooperative Work, Los Angeles, California, USA,
pp.53–64.
Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E. and De la Fuente, P. (2003) ‘Combining
qualitative and social network analysis for the study of social aspects of collaborative
learning’, Computers and Education, Vol. 41, No. 4, pp.353–368.
Mazza, R. and Milani, C. (2005) ‘Exploring usage analysis in learning systems: gaining insights
from visualizations’, Proceedings of the 12th International Conference on Artificial
Intelligence in Education (AIED), Amsterdam.
Montgomery, D. and Runger, G. (2006) Applied Statistics and Probability for Engineers,
New York, NY: John Wiley & Sons.
Morch, A., Jondahl, S. and Dolonen, J. (2005) ‘Supporting conceptual awareness with pedagogical
agents’, Information Systems Frontiers, Vol. 7, No. 1, pp.39–53.
Reffay, C. and Chanier, T. (2002) ‘Social network analysis used for modelling collaboration
in distance learning groups’, Proceedings of Intelligent Tutoring System Conference (ITS’02),
Juin, France, pp.31–40.
Reimann, P. and Zumbach, J. (2003) ‘Supporting virtual learning teams with dynamic feedback’, in
K.T. Lee and K. Mitchell (Eds.) The “Second Wave” of ICT in Education: From Facilitating
Teaching and Learning to Engendering Education Reform, Hong Kong: AACE, pp.424–430.
Seufert, S., Lechner, U. and Stanoevska, K. (2002) ‘A reference model for online learning
communities’, International Journal on E-Learning, Vol. 1, No. 1, pp.43–54.
Simonson, M., Smaldino, S., Albright, M. and Zvacek, S. (2003) Teaching and Learning at a
Distance, Upper Saddle River, NJ: Prentice Hall.
Soller, A., Martinez, A., Jermann, P. and Muehlenbrock, M. (2005) ‘From mirroring to guiding:
a review of state of the art technology for supporting collaborative learning’, International
Journal of Artificial Intelligence in Education, Vol. 15, No. 4.
Stahl, G. (2006) Group Cognition: Computer Support for Building Collaborative Knowledge,
Acting with Technology Series, Cambridge, MA: MIT Press.
Sweet, R. (1986) ‘Student drop-out in distance education: an application of Tinto’s model’,
Distance Education, Vol. 7, pp.201–213.
Tyler-Smith, K. (2005) ‘Early attrition among first time eLearners: a review of factors that
contribute to drop-out, withdrawal and non-completion rates of adult learners undertaking
elearning programmes’, Journal of Online Learning and Teaching, Vol. 2, No. 2, pp.1–5,
http://jolt.merlot.org/Vol2_No2_TylerSmith.htm.
Vassileva, J. and Sun, L. (2007) 'An improved design and a case study of a social visualization
encouraging participation in online communities', Proceedings of the 13th International
Workshop on Groupware (CRIWG 2007), pp.72–86.
Vreman-de Olde, C. and De Jong, T. (2006) ‘Scaffolding learners in designing investigation
assignments for a computer simulation’, Journal of Computer Assisted Learning, Vol. 22,
pp.63–73.
Xenos, M., Pierrakeas, C. and Pintelas, P. (2002) ‘A survey on student dropout rates and dropout
causes concerning the students in the course of Informatics of the Hellenic Open University’,
Computers & Education, Vol. 39, pp.361–377.
Zapawa, T. (2005) Excel Advanced Report Development, New York, NY: Wiley.
Zhang, J., Chen, Q., Sun, Y. and Reid, D.J. (2004) ‘Triple scheme of learning support design for
scientific discovery learning based on computer simulation: experimental research’, Journal of
Computer Assisted Learning, Vol. 20, pp.269–282.

Zumbach, J., Hillers, A. and Reimann, P. (2004) ‘Supporting distributed problem-based learning:
the use of feedback mechanisms in online learning’, in T.S. Roberts (Ed.) Online
Collaborative Learning: Theory and Practice, London: Information Science Publishing,
pp.86–102.
Zumbach, J., Muehlenbrock, M., Jansen, M., Reimann, P. and Hoppe, U. (2002)
‘Multi-dimensional tracking in virtual learning teams: an exploratory study’, Proceedings of
the Conference on Computer Supported Collaborative Learning CSCL-2002, Boulder,
Colorado, pp.650–651.

Note
1 http://bscw.fit.fraunhofer.de/
