1/2, 2009 53
Javier Faulin
Department of Statistics and Operations Research
Public University of Navarre
Campus Arrosadia
31006 Pamplona, Navarra, Spain
E-mail: javier.faulin@unavarra.es
Fatos Xhafa
Department of Languages and Informatics Systems
Technical University of Catalonia
Jordi Girona 1–3
08034 Barcelona, Spain
E-mail: fatos@lsi.upc.edu
Dr. Fatos Xhafa received his PhD in Computer Science from the Polytechnic
University of Catalonia (Spain), where he is currently an Associate Professor at
the Department of Languages and Informatics Systems. His research interests
include parallel algorithms, combinatorial optimisation, approximation and
metaheuristics, distributed programming, grid and P2P computing. He has
published in leading international journals and served in the Organising
Committees of many conferences and workshops. He is also a member of the
Editorial Board of several international journals, including the International
Journal of Computer-Supported Collaborative Learning, Grid and Utility
Computing and Autonomic Computing.
SAMOS: a model for monitoring students’ and groups’ activities 55
1 Introduction
Despite the benefits that internet-based education can offer both to students and
instructors, it also presents some important challenges. Typically, any type of distance
education programme presents higher dropout rates than more conventional programmes
due to different factors (Levy, 2007; Sweet, 1986; Tyler-Smith, 2005; Xenos et al.,
2002). The nature of distance education can create a sense of isolation in learners, and
students can feel disconnected from the instructor, the rest of the class and even the
institution. This is often the case among adult students who must combine
work, family and academic activities. It is necessary, then, that instructors provide
just-in-time guidance and assistance to students’ activities and also that they provide
regular feedback on those activities. Furthermore, communication among students needs
to be facilitated and promoted by instructors, who must encourage students’ participation
in the web spaces devoted to that function.
Unfortunately, it is very difficult and time consuming for instructors to thoroughly
track all the activities performed by each student in these e-learning environments. It is
even more complex to figure out the interactions taking place among students
and/or groups of students, that is:
• to identify actors – the groups’ leaders and followers
• to detect students that are likely to drop out of the course
• to perceive possible group-internal conflicts or malfunctions before it is too late to
manage these problems efficiently.
Monitoring students’ and groups’ activities can help instructors understand these
interactions and anticipate these potential problems, which, in turn, can give important
clues on how to organise learning exercises more efficiently and thus achieve better
learning outcomes (Daradoumis et al., 2006b; Dillenbourg, 1999).
Monitoring reports can be used by instructors to easily track down the learners’ online
behaviour and a group’s activity at specific milestones, gather feedback from the learners
and scaffold groups with a low degree of activity. Regulation has a time dimension, that is,
instructors have to know both the groups’ and students’ activity performance as the
learning process develops. The regulation process can thus be a means for instructors to
provide just-in-time assistance according to the groups’ and students’ needs.
There is also a wide variety of proposed methods to monitor group and individual
activity in online collaborative learning. These methods include statistical analysis, social
network analysis and monitoring through shared information and objects (Martínez et al.,
2003; Mazza and Milani, 2005; Reffay and Chanier, 2002). Moreover, there exist some
differences as regards the sources of information used for monitoring: log files of
synchronous and asynchronous communication, bulletin boards, electronic discussion
information reports, video, etc.
As some authors recognise, instructors participating in online learning environments
have very little support in terms of integrated means and tools to monitor and evaluate
students’ activity (Jerman et al., 2001; Zumbach et al., 2002). As a consequence, this
monitoring process constitutes a difficult task which demands a lot of resources and
expertise from educators.
‘Regulation’ is a term that has been commonly used during the last few years
for controlling the learning process. Regulation approaches support collaboration by
taking actions ‘on the fly’ during the interaction (Jermann and Dillenbourg, 2008).
Regulating or monitoring interaction is a complex skill that requires a quick appraisal of
the situation based on the comparison of the current interaction with a model of the
desired interaction (Chen, 2003). Computational methods for coaching the interaction
encounter difficulties in the automatic detection of meaningful patterns of action as well
as in the definition of indicators that define a model of productive collaborative
interaction. A comprehensive review of the various technological means that can
support regulating processes can be found in Soller et al. (2005). In addition, a
further comparative study examines the effect of scaffolding learning components in
a computerised environment for students solving qualitative science problems in a
simulation of laboratory experiments (Fund, 2007).
Successful learning seems to depend on more rather than less intense involvement
of the teacher. In this sense, Vreman-de Olde and De Jong (2006) implemented
appropriate scaffolding (support) programmes – operative, integrated and strategic
– using four support components that were found to be effective in computerised
learning environments: structure, reflection, subject-matter and enrichment. The
programmes provided appropriate worksheets (for each student and for each
problem) and were found to be a suitable way of scaffolding students both in the overall
structure and in the specific reasoning steps. This enabled regulation of the problems to
be solved in each lesson.
Other research works have investigated the way graphical feedback affects
monitoring and evaluation of learners’ activity in online learning environments, where
teachers have limited access to group processes, either because they do not have access to
transcripts of online discussions, or because the number of groups they support is too
large. For instance, Zumbach et al. (2004) considered feedback a crucial aspect of the
learning process, and so they suggested the generation of both interaction and problem-
solving feedback. Their system collects data about participation, motivation – through
self-reports – and problem-solving capabilities, and feeds it back to the learners in an
aggregated manner. In their studies, they analysed the effect of feedback on students’
performance during two different course periods. On the one hand, providing feedback in
a short-term learning practice did not present a positive effect on knowledge growth;
however, it did have positive results regarding motivation and participation. On the other
hand, in a long-term course that lasted over four months, motivational feedback showed
situation. Our work faces this challenge by developing a new approach – a monitoring
system – that has been used and tested in real, long-term and quite complex online
collaborative learning settings.
In order to design our monitoring system for our e-learning environment, we considered a
common scenario where groups of students have to develop long-term projects, which are
problem-based collaborative learning practices. Such projects are organised in terms of
several phases, each of them corresponding to a target goal. The instructional design of
each target goal includes several learning tasks, adequately linked to each other, which
students should carry out individually (such as readings) or collaboratively (such as group
activities and exercises) in order to achieve the corresponding goal. In addition, the
design of some target goals also involves the realisation of specific asynchronous debates
at the group or class level, aimed at reaching decisions on a set of specific questions. These
projects are carried out in the scope of several distance learning undergraduate courses,
which typically run over a period of 15 weeks. Each of these courses involves one
academic coordinator, several instructors (one for each virtual class) and the class of
students (about 50 per class) distributed among different online groups of three to five
members each (Figure 1).
Figure 1 The collaborative e-learning scenario (see online version for colours)
As a way to support the collaborative e-learning practices, we used the Basic Support for
Cooperative Work (BSCW) system,1 a web-based groupware tool that enables
asynchronous and synchronous collaboration over the web (Bentley et al., 1997). This
system, like any other similar online collaborative environment, offers shared workspaces
that groups can use to store, manage, jointly edit and share documents, to realise threaded
discussions, etc. Additionally, the BSCW server keeps log files which contain all the
actions (events) performed by group members on shared workspaces, as well as detailed
information about these actions: user identification, event type, timestamp, associated
workspace, affected objects, etc.
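The exact on-disk layout of BSCW log records varies by server version; purely as an illustration, assuming a simple tab-separated format with the fields listed above, one event record could be parsed like this (field names and format are assumptions, not the BSCW specification):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """One action recorded in the server log (fields as described in the text)."""
    user_id: str
    event_type: str
    timestamp: datetime
    workspace: str
    obj: str

def parse_log_line(line: str) -> Event:
    # Hypothetical tab-separated layout: user, event type, ISO timestamp,
    # workspace, affected object.
    user, etype, ts, space, obj = line.rstrip("\n").split("\t")
    return Event(user, etype, datetime.fromisoformat(ts), space, obj)
```

For example, `parse_log_line("amartin\tCreateDocument\t2007-03-12T10:15:00\tgroup3\treport.doc")` yields an `Event` with its fields split out and the timestamp parsed.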
Even though most e-learning environments offer some simple monitoring tools, they
are still very limited for practical purposes and do not meet the information needs of
online instructors. Enhanced regulation facilities can play an important instructional and
social task since they lead participants to think about the activity being performed, to
promote collaboration through the exchange of ideas that may arise, to propose new
resolution mechanisms, and to justify and refine their own contributions and thus acquire
new knowledge (Stahl, 2006). As a matter of fact, the developers of the BSCW system
recognise the need for powerful monitoring models and tools. To this end, in the first
stage, our model will make use of quantitative data contained in BSCW log files to
automatically generate specific, on-demand, visual reports that summarise relevant
information regarding students’ and groups’ activity. Subsequently, these reports can be
combined and contrasted with qualitative data derived from self-, peer and group
evaluation reports, which are designed and produced at particular milestones and provide
specific complementary information about task contribution and performance, group
functioning and group cohesion/socialisation. This qualitative information helps the
academic agent – the instructor or course manager – to interpret the quantitative reports
and the learners’ situation in a more effective and contextualised way. As a consequence,
we provide an integrated
approach and a more complete information system capable of managing different types of
information, which can in turn be combined and converted into knowledge that can be
used for offering different types of regulation facilities. The next section describes all the
steps engaged in the construction of the whole monitoring system.
Figure 2 shows the global scheme of the monitoring system that we developed and tested
in real collaborative learning settings. The general functioning of this model is as follows:
1 Students perform activities in the collaborative web spaces assigned to their
working group: they post or read notes in forums, send or read e-mails, upload
or download documents, manage folders and files, etc. Each of these activities
can be considered an event of a certain type which has been carried out by a
particular student at a certain time and web space. Since one of the main goals
of this work is to identify students and groups with a low activity level, we will
consider all kinds of academic actions that students perform on the web server
and that are registered in the server log files. In some previous work, we discuss
theoretical frameworks where activity types are categorised and individually
analysed (Daradoumis et al., 2006a–b).
2 The events generated by students are registered in log files at the server
(a BSCW server in our case, but it could be any other server such as Moodle, Sakai
or e-GroupWare).
3 A specific Java application, called EICA, is used to automatically read and process
new incoming log files and to store the extracted data into a unique persistent
database in the corresponding server.
4 The database files are then processed by the SAMOS application. SAMOS was
specifically designed and developed as a practical Excel/VBA tool that uses Excel’s
numerical, graphical and programming capabilities (Albright, 2006; Zapawa, 2005)
to generate weekly reports which summarise group and student activity levels in a
graphical manner. The details regarding the design of these reports, which represent
the core part of our model, are explained in Section 7.
5 The server automatically sends these reports to the instructors by e-mail.
6 The instructors receive these reports and analyse them, looking for groups and
students that seem to be ‘at risk’, i.e., students with low activity levels, who are
likely to be nonparticipating students and potential dropouts, and groups with low
activity levels, which may turn out to be malfunctioning groups.
7 These results are then combined and contrasted with the qualitative self-, peer and
group evaluation reports which are generated by the students themselves. Notice that
quantitative reports can act as real-time alarms that help to detect problems before
they get out of control: qualitative reports usually take some time to be processed by
the instructor while quantitative ones give real-time information about the learning
process. This way, instructors can use quantitative reports first to focus their
attention on a subset of qualitative reports that are likely to need a quick intervention
from them.
8 Once the groups and students at risk have been detected, and the specific problems
have been identified and classified according to whether they refer to task, group
functioning or group social cohesion, the instructors contact them to offer specific
guidance and support towards the best development and completion of their projects.
The specific actions to be performed by the instructors depend on the characteristics
of the current learning activity and the type of problem detected. In any case, the
important point here is that the instructors become aware of the low-activity
problems as soon as they appear and, therefore, they can react on time, which adds
value to their role as supervisors of the learning process. This way, students and
groups at risk receive just-in-time and efficient guidance and support for them to
enhance and continue their individual or collaborative work more successfully.
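At their core, steps 1–4 above reduce to counting events per student per week and rolling them up to groups. A minimal sketch of this aggregation (in Python rather than the Excel/VBA used by SAMOS; the input shape is a simplification of the parsed log, not the system’s actual data model):

```python
from collections import defaultdict

def weekly_activity(events, group_of):
    """Count events per (student, week) and per (group, week).

    events: iterable of (user_id, week_number) pairs extracted from the
    server log; group_of: mapping from each user to their group id.
    """
    per_student = defaultdict(int)
    per_group = defaultdict(int)
    for user, week in events:
        per_student[(user, week)] += 1
        per_group[(group_of[user], week)] += 1
    return per_student, per_group
```

These weekly counts are exactly what the report graphs described in the next section plot, per student and per group.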
Figure 2 General scheme of our monitoring model (see online version for colours)
[The figure depicts the monitoring pipeline: 1. students generate events (actions) on the
web server; 2. events are registered in log files; 3. data pre-processing (EICA) feeds a
database server; 4. reports generation (SAMOS) delivers the reports to the instructor.]
The design of our system is based on our ten years’ practical experience as online
instructors and also on the informational needs that some colleagues from different
knowledge areas have suggested to us. In line with this, we chose to generate weekly
monitoring reports with the aim of showing a small set of graphs that could be easily and
quickly understood by instructors, so that they did not have to invest extra time in
analysing data. For flexibility and efficiency reasons, we decided that these graphs
should contain only critical information about groups’ and students’ activity levels.
Furthermore, they should provide instructors with a rough classification for each kind of
entity – groups and students – according to their corresponding activity levels.
Specifically, they should allow instructors to easily identify those groups and students
that maintain significantly low activity levels, and consequently point out
those entities that are likely to need just-in-time guidance and assistance. These graphs
should also provide information about the historical evolution of each group’s activity
with respect to the rest of the class groups, as well as information about the historical
evolution of each student’s activity with respect to the rest of the group members. Having
these considerations in mind, we designed four charts, as follows:
1 groups’ classification graph – This chart (Figure 3) is a scatter plot of the following
two variables: X = ‘average number of events per member that have been generated
by group i during this (current) week’ (i = 1,2,…, n) and Y = ‘average number of
events per member that have been generated by group i during an average week’.
The plot also includes the straight lines x = x̄ and y = ȳ (the means of the two
variables), which divide the graph into four quadrants, Q1 to Q4. That way, points
in Q1 can be seen as representing
heading groups since their activity levels are above the two activity means – current
week and average week; points in Q2 can be considered lowering groups, since
even though historically their activity level has been above the activity level for an
average week, their current activity level is below the average; points in Q3
represent those groups which are below the two activity means – currently and
historically – and, therefore, they can be considered groups at risk, since they are
the most likely to suffer from low task contribution, group malfunctioning, lack of
social cohesion and eventually from student dropouts; finally, points in Q4 can be
seen as improving groups, since even though their activity level has been historically
below the mean, their level has been above the mean during the current week, so
they are experiencing some improvement in their activity level. Note that these
interpretations become stronger as the distance between a point and the
dividing line grows, e.g., considering its distance from the x̄ line, there is
good evidence that the activity of the group in Q4 has clearly improved during the
current week.
2 students’ classification graph – This chart is similar to the one before. The only
difference is that now the points represent students instead of groups. Therefore, this
graph allows an easy identification of those students who are at risk – that is,
students whose activity levels are below the current week average and below the
historical week average. Analogously to what happened with the groups, students
can also be classified as improving, lowering or heading depending on the quadrant
they belong to.
4 group members’ accumulated activity graph – There is also a chart for each group
member. Given a group, the corresponding graph shows the percentage contribution
of each member with respect to the total activity developed by the group until the
current week (Figure 5). From this chart, group leaders and non-participating
members can be easily identified, allowing instructors to immediately activate
policies aiming at preventing negative situations such as inefficient or unbalanced
distribution of task contribution or student abandonment. Such a policy concerns the
production of a specific qualitative report by the group coordinator, which helps
the instructor identify and evaluate exactly what the real problem is and thus take the
best decision about how to support and guide the needed students and the group
as a whole.
Figure 5 Group members’ accumulated activity graph with no problem detection (see online
version for colours)
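The charts above rest on two small computations: the quadrant assignment behind the classification graphs, and each member’s share of the group’s accumulated activity. A minimal sketch of both (SAMOS itself is an Excel/VBA tool; the function names here are illustrative, while the quadrant labels follow the text):

```python
def classify(x, y, x_mean, y_mean):
    """Quadrant label for one entity: x is its average events per member this
    (current) week, y the same figure for an average week."""
    if x >= x_mean and y >= y_mean:
        return "heading"    # Q1: above both means
    if x < x_mean and y >= y_mean:
        return "lowering"   # Q2: historically active, below the mean this week
    if x < x_mean and y < y_mean:
        return "at risk"    # Q3: below both means
    return "improving"      # Q4: historically low, above the mean this week

def classify_all(activity):
    """activity: {entity: (x, y)}; the dividing lines are the sample means."""
    xs = [x for x, _ in activity.values()]
    ys = [y for _, y in activity.values()]
    x_mean, y_mean = sum(xs) / len(xs), sum(ys) / len(ys)
    return {e: classify(x, y, x_mean, y_mean) for e, (x, y) in activity.items()}

def contribution_shares(events_per_member):
    """Each member's percentage of the group's accumulated activity."""
    total = sum(events_per_member.values())
    return {m: (100.0 * n / total if total else 0.0)
            for m, n in events_per_member.items()}
```

A member whose share is far below an even split of the group total is the kind of non-participating member the accumulated-activity graph makes visible at a glance.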
confused in the beginning and did not manage to join and adapt herself either to the group
dynamics and organisation or to the project demands. If the student was left alone in this
delicate situation, she would most probably have dropped out of the course after a while.
However, the timely detection of the problem by the system and the prompt guidance of
the instructor proved to be effective, as shown by the improvement and normalisation of
the student’s activity in the following weeks. Although the student’s performance was not
outstanding and she did not reach the same level as her peers, she finally managed to
complete the project successfully, a fact which is preferable to dropping out.
Figure 6 Group activity graph with problem detection for some members (see online version
for colours)
The second case which is worth explaining concerns the activity evolution of a student,
‘aferrert’, who starts well but after Week 4 starts to present a continuous decrease
in activity. As we said above, since the system shows the accumulated activity of a
member, it produces an alarm only when an abrupt activity decline occurs or when
there is a smooth but continuous activity slowdown. The latter occurs in the case of this
student, whose activity level changes negatively in Week 4 and then presents a slight but
continuous decrease for three more weeks. As a consequence, this provokes the
instructor’s intervention in Week 7. After examining the qualitative reports, the instructor
identified that the problem concerned some aspects of group work, such as reduced
contribution both to task and group functioning. Again, the prompt identification and
treatment of the problem caused a positive reaction in the student for the next three
weeks, which resulted in improving her degree of contribution and overall performance.
Though her activity started to go down again in the last four weeks, this decline
was quite mild and had no consequences, either for her contributing behaviour or for
the overall project development. Finally, Figure 6 shows another interesting detail:
it seems that student ‘cserrato’ was a leader in the sense that, at least partially, this
student assumed the reins of the group at all levels, i.e., task, group functioning and
social aspects.
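The alarm rule just described – fire on an abrupt weekly drop, or on a mild but sustained decline – might be sketched as follows (the thresholds are assumptions for illustration; the paper does not specify concrete values):

```python
def should_alert(weekly_counts, abrupt_drop=0.5, slow_weeks=3):
    """Flag a member whose activity falls abruptly or declines for several weeks.

    weekly_counts: event counts, one per week, oldest first.
    abrupt_drop: fractional fall below the previous week that counts as abrupt.
    slow_weeks: consecutive declining weeks that count as a sustained slowdown.
    """
    if len(weekly_counts) < 2:
        return False
    prev, curr = weekly_counts[-2], weekly_counts[-1]
    if prev > 0 and curr < prev * (1 - abrupt_drop):
        return True  # abrupt decline in the latest week
    declines = 0
    for a, b in zip(weekly_counts, weekly_counts[1:]):
        declines = declines + 1 if b < a else 0  # count trailing declines
    return declines >= slow_weeks  # smooth but continuous slowdown
```

Under these assumed thresholds, the ‘aferrert’ case above – a slight but continuous decrease over three weeks – would trigger the second branch.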
8 Model validation
After ten years of experience as e-learning instructors, we strongly believe in the value
that tools such as SAMOS can add to the online teaching process. To support this belief
with some empirical evidence, we decided to carry out an experiment in order to test
whether the information provided by SAMOS may significantly influence groups’ and
students’ performance in real collaborative learning situations. For that purpose, we
defined the following three indicators:
1 percentage of sampled groups which finished their project according to its initial
specifications (PGF)
2 percentage of sampled groups which received a positive evaluation of their project at
the end of the semester (PGP)
3 percentage of sampled groups which experienced dropouts (PGD) – that is, some of
the group members abandoned the course during the semester.
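As a sketch, the three indicators are simple proportions over the sampled groups (the per-group record fields below are hypothetical, introduced only for illustration):

```python
def indicators(groups):
    """PGF, PGP and PGD as percentages over a sample of group records.

    groups: list of dicts with boolean fields 'finished' (project completed
    to its initial specifications), 'positive' (positive final evaluation)
    and 'dropout' (at least one member abandoned the course).
    """
    n = len(groups)
    pgf = 100.0 * sum(g["finished"] for g in groups) / n
    pgp = 100.0 * sum(g["positive"] for g in groups) / n
    pgd = 100.0 * sum(g["dropout"] for g in groups) / n
    return pgf, pgp, pgd
```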
The idea was to compare the historical values associated with each of the former
indicators with the corresponding values obtained after the implementation of
SAMOS. Should we find significant differences (hopefully improvements) between
before-SAMOS and after-SAMOS values, we would be able to conclude that some
empirical observations support the goodness of our model. Otherwise, we would not have
enough empirical evidence of SAMOS adding value to the online teaching process. The
methodology used to perform this comparison was as follows:
1 At the beginning of the second semester of the 2006/2007 academic year, a
random sample of size n = 40 was drawn from the population of student groups
that were participating in several collaborative e-learning courses.
2 During the semester, the instructors of these selected groups were provided with
weekly reports generated by SAMOS, so that they could detect students and groups
at risk and provide them with just-in-time guidance and support. They did not
receive any special training, just a one-hour online session to introduce the system
and to give some examples about how to analyse the different graphs.
3 Meanwhile, we used historical data associated with those selected instructors to
calculate before-SAMOS values for indicators PGF, PGP and PGD, which we
denoted p0_PGF, p0_PGP and p0_PGD respectively. The second column in Table 1 shows
the calculated values for these indicators.
4 At the end of the semester, we used data obtained from the randomly selected groups
to calculate the after-SAMOS values for indicators PGF, PGP and PGD. We denoted
these values by p1_PGF, p1_PGP and p1_PGD respectively. The third column in Table 1
shows the calculated values for these indicators, where numbers between parentheses
represent absolute frequencies. Notice that, as expected, these values were always
greater than the corresponding values calculated in the previous step (those obtained
from historical data).
5 Next, we used statistical inference techniques (Montgomery and Runger, 2006) to
perform three two-sided hypothesis tests (one per indicator) over the differences
between before- and after-SAMOS values. The tests are:
• H0: p0_PGF − p1_PGF = 0 versus HA: p0_PGF − p1_PGF ≠ 0
• H0: p0_PGP − p1_PGP = 0 versus HA: p0_PGP − p1_PGP ≠ 0
• H0: p0_PGD − p1_PGD = 0 versus HA: p0_PGD − p1_PGD ≠ 0.
Notice that since we used data from the same set of instructors (for both the
before- and after-SAMOS calculations), we minimised the ‘instructor factor’, i.e., the
possibility that significant differences between before- and after-SAMOS indicators
were mainly due to the instructors’ ability and skills to detect and prevent potential
problems instead of to the usefulness of the information provided by the system.
6 Then, we calculated the p-values associated with each of these tests, which are
registered in Column 4 of Table 1.
7 Using these p-values and a standard significance level of α = 0.05, the
corresponding result for each test was registered in the last column of Table 1.
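Each of these is a standard two-proportion z-test; as a sketch, the statistic and two-sided p-value for one indicator could be computed like this (using the usual pooled-proportion formula; the example counts are illustrative, not the paper’s Table 1 values):

```python
import math

def two_proportion_test(x0, n0, x1, n1):
    """Two-sided z-test of H0: p0 - p1 = 0 for two independent samples.

    x0/n0: 'successes' and sample size before SAMOS;
    x1/n1: the same after SAMOS. Returns (z, p_value).
    """
    p0, p1 = x0 / n0, x1 / n1
    pooled = (x0 + x1) / (n0 + n1)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1))
    z = (p0 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_value
```

For instance, `two_proportion_test(20, 40, 30, 40)` compares a 50% before-rate against a 75% after-rate over two samples of 40 groups and rejects H0 at α = 0.05.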
As can be concluded from the results in Table 1, the tests associated with indicators PGF
and PGD were statistically significant, meaning that statistical evidence supports the idea
that the information provided by SAMOS contributed to significantly enhance the PGF
and PGD indicators in the collaborative e-learning scenario where the experiment was
carried out. Also, even though we failed to reject the null hypothesis in the PGP test, this
non-rejection was a borderline decision (since the p-value was almost equal to the selected
significance level). Finally, notice that the high attrition rate itself (43% in the
before-SAMOS condition and 25% in the after-SAMOS condition) may have a direct
impact on the ability of the groups to finish their projects, i.e., the significant reduction in
attrition itself might be enough to significantly increase the groups’ ability to complete
their projects.
Since a pilot version of the SAMOS system has been validated in a real scenario, we are
now focusing on the development of an open source version of this system. This version
of SAMOS will be based on Apache, MySQL and PHP. Also, several versions of the
EICA application are being developed, one for each major online platform other than
BSCW (Moodle, Sakai, WebCT, etc.). This will allow running the same version of
SAMOS over any of these e-learning environments.
In this work we coped with two major related problems in distance learning courses:
1 ensuring that students will reach a satisfactory level of involvement in the
learning process
2 avoiding high dropout rates caused by the lack of adequate support and guidance.
These problems are even more critical in collaborative e-learning scenarios, where
individual dropouts or individual low-level involvements could force groups to lose
cohesion, face anxiety or spend too much time and effort rearranging their activities,
which may cause a slowdown or even a breakdown of the group’s activity.
Regulating students’ and groups’ activity can be very useful in identifying
nonparticipating students or groups with unbalanced task contribution, group
malfunctioning or social problems. This identification process, in turn, allows instructors
to intervene whenever necessary to ensure and enhance students’ involvement and
performance in the collaborative learning process.
The SAMOS monitoring system model presented in this paper has been successfully
used to track groups’ and students’ activities in several undergraduate online courses
offered in a web-based learning environment. These courses involve long-term,
project-based collaborative learning practices. Weekly monitoring reports are used by
instructors to easily track down the students’ and groups’ activities at specific milestones,
gather feedback from the learners and scaffold groups with a low degree of activity, or
groups with specific problems. SAMOS has proved to be an innovative monitoring tool
for our online instructors, since it provides them with prompt and valuable information
which adds value to their role as supervisors/regulators of the learning process and allows
them to offer just-in-time guidance and assistance to students and groups. In our opinion,
this model can serve as a practical framework for other universities offering collaborative
e-learning courses.
Acknowledgements
This work has been partially supported by the Spanish Ministry of Science and
Innovation under grants EA2007-0310 and EA2008-0151. It has been also supported by
the UOC Innovation Vice-rectorate under grant IN-PID0702.
References
Albright, C. (2006) VBA for Modelers: Developing Decision Support Systems Using Microsoft
Excel, Duxbury Press.
Bentley, R., Appelt, W., Busbach, U., Hinrichs, E., Kerr, D., Sikkel, S., Trevor, J. and Woetzel, G.
(1997) ‘Basic support for cooperative work on the World Wide Web’, International Journal of
Human-Computer Studies, Vol. 46, No. 6, pp.827–846.
Berk, K. and Carey, P. (2000) Data Analysis with Microsoft Excel, Duxbury Press.
Chen, M. (2003) ‘Visualizing the pulse of a classroom’, Proceedings of the Eleventh ACM
International Conference on Multimedia, Berkeley, California, USA, pp.555–561.
Cheng, R. and Vassileva, J. (2005) ‘User- and community-adaptive rewards mechanism for
sustainable online community’, User Modeling, pp.332–336.
Collazos, C., Guerrero, L., Pino, J. and Ochoa, S. (2003) ‘Collaborative scenarios to promote
positive interdependence among group members’, in J. Favela and D. Decouchant (Eds.)
Proc. of the 9th Int. Workshop on Groupware (CRIWG 2003), Grenoble-Autrans, France,
LNCS 2806, Springer-Verlag, pp.247–260.
Daradoumis, T., Martínez, A. and Xhafa, F. (2006a) ‘A layered framework for evaluating online
collaborative learning interactions’, International Journal of Human-Computer Studies,
Vol. 64, No. 7, pp.622–635.
Daradoumis, T., Xhafa, F. and Juan, A. (2006b) ‘A framework for assessing self, peer and group
performance in e-learning’, Self, Peer, and Group Assessment in Elearning, IGI Global,
pp.279–294.
Dillenbourg, P. (Ed.) (1999) Collaborative Learning: Cognitive and Computational Approaches,
Elsevier Science.
DiMicco, J.M., Hollenbach, K.J., Pandolfo, A. and Bender, W. (2007) ‘The impact of
increased awareness while face-to-face’, Special issue on ‘Awareness systems design’,
Human-Computer Interaction, Vol. 22, No. 1.
Engelbrecht, J. and Harding, A. (2005) ‘Teaching undergraduate mathematics on the internet.
Part 1: technologies and taxonomy’, Educational Studies in Mathematics, Vol. 58, No. 2,
pp.235–252.
Fund, Z. (2007) ‘The effects of scaffolded computerized science problem-solving on achievement
outcomes: a comparative study of support programs’, Journal of Computer Assisted Learning,
Vol. 23, pp.410–424.
Guerrero, L., Madariaga, M., Collazos, C., Pino, J. and Ochoa, S. (2005) ‘Collaboration for
learning language skills’, Proceedings of 11th International Workshop on Groupware
CRIWG’05, Pernambuco, Brazil, pp.284–291.
Janssen, J., Erkens, G., Kanselaar, G. and Jaspers, J. (2007) ‘Visualization of participation: does
it contribute to successful computer supported collaborative learning?’, Computers and
Education, Vol. 49, No. 4, pp.1037–1065.
Jeong, A. (2004) ‘The combined effects of response time and message content on growth patterns
of discussion threads in computer-supported collaborative argumentation’, Journal of Distance
Education, Vol. 19, No. 1, pp.36–53.
Jermann, P. and Dillenbourg, P. (2008) ‘Group mirrors to support interaction regulation in
collaborative problem solving’, Computers & Education, Vol. 51, No. 1, pp.279–296.
Jermann, P., Soller, A. and Muehlenbrock, M. (2001) ‘From mirroring to guiding: a review of state
of the art technology for supporting collaborative learning’, Proceedings of EuroCSCL,
Maastricht, The Netherlands, pp.324–331.
Joyes, G. and Frize, P. (2005) ‘Valuing individual differences within learning: from face-to-face to
online experience’, International Journal of Teaching and Learning in Higher Education,
Vol. 17, No. 1, pp.33–41.
Levy, Y. (2007) ‘Comparing dropouts and persistence in e-learning courses’, Computers &
Education, Vol. 48, pp.185–204.
Liu, C. and Kao, L. (2007) ‘Do handheld devices facilitate face-to-face collaboration? Handheld
devices with large shared display groupware to facilitate group interactions’, Journal of
Computer Assisted Learning, Vol. 23, No. 4, pp.285–299.
Losada, M., Sánchez, P. and Noble, E. (1990) ‘Collaborative technology and group process
feedback: their impact on interactive sequences in meetings’, Proceedings of the 1990 ACM
Conference on Computer-Supported Cooperative Work, Los Angeles, California, USA,
pp.53–64.
Martínez, A., Dimitriadis, Y., Rubia, B., Gómez, E. and De la Fuente, P. (2003) ‘Combining
qualitative and social network analysis for the study of social aspects of collaborative
learning’, Computers and Education, Vol. 41, No. 4, pp.353–368.
Mazza, R. and Milani, C. (2005) ‘Exploring usage analysis in learning systems: gaining insights
from visualizations’, Proceedings of the 12th International Conference on Artificial
Intelligence in Education (AIED), Amsterdam.
Montgomery, D. and Runger, G. (2006) Applied Statistics and Probability for Engineers,
New York, NY: John Wiley & Sons.
Morch, A., Jondahl, S. and Dolonen, J. (2005) ‘Supporting conceptual awareness with pedagogical
agents’, Information Systems Frontiers, Vol. 7, No. 1, pp.39–53.
Reffay, C. and Chanier, T. (2002) ‘Social network analysis used for modelling collaboration
in distance learning groups’, Proceedings of the Intelligent Tutoring Systems Conference (ITS’02),
France, pp.31–40.
Reimann, P. and Zumbach, J. (2003) ‘Supporting virtual learning teams with dynamic feedback’, in
K.T. Lee and K. Mitchell (Eds.) The “Second Wave” of ICT in Education: From Facilitating
Teaching and Learning to Engendering Education Reform, Hong Kong: AACE, pp.424–430.
Seufert, S., Lechner, U. and Stanoevska, K. (2002) ‘A reference model for online learning
communities’, International Journal on E-Learning, Vol. 1, No. 1, pp.43–54.
Simonson, M., Smaldino, S., Albright, M. and Zvacek, S. (2003) Teaching and Learning at a
Distance, Upper Saddle River, NJ: Prentice Hall.
Soller, A., Martinez, A., Jermann, P. and Muehlenbrock, M. (2005) ‘From mirroring to guiding:
a review of state of the art technology for supporting collaborative learning’, International
Journal of Artificial Intelligence in Education, Vol. 15, No. 4.
Stahl, G. (2006) Group Cognition: Computer Support for Building Collaborative Knowledge,
Acting with Technology Series, Cambridge, MA: MIT Press.
Sweet, R. (1986) ‘Student drop-out in distance education: an application of Tinto’s model’,
Distance Education, Vol. 7, pp.201–213.
Tyler-Smith, K. (2005) ‘Early attrition among first time eLearners: a review of factors that
contribute to drop-out, withdrawal and non-completion rates of adult learners undertaking
elearning programmes’, Journal of Online Learning and Teaching, Vol. 2, No. 2, pp.1–5,
http://jolt.merlot.org/Vol2_No2_TylerSmith.htm.
Vassileva, J. and Sun, L. (2007) ‘An improved design and a case study of a social visualization
encouraging participation in online communities’, Proceedings of the International Workshop on
Groupware (CRIWG 2007), pp.72–86.
Vreman-de Olde, C. and De Jong, T. (2006) ‘Scaffolding learners in designing investigation
assignments for a computer simulation’, Journal of Computer Assisted Learning, Vol. 22,
pp.63–73.
Xenos, M., Pierrakeas, C. and Pintelas, P. (2002) ‘A survey on student dropout rates and dropout
causes concerning the students in the course of Informatics of the Hellenic Open University’,
Computers & Education, Vol. 39, pp.361–377.
Zapawa, T. (2005) Excel Advanced Report Development, New York, NY: Wiley.
Zhang, J., Chen, Q., Sun, Y. and Reid, D.J. (2004) ‘Triple scheme of learning support design for
scientific discovery learning based on computer simulation: experimental research’, Journal of
Computer Assisted Learning, Vol. 20, pp.269–282.
Zumbach, J., Hillers, A. and Reimann, P. (2004) ‘Supporting distributed problem-based learning:
the use of feedback mechanisms in online learning’, in T.S. Roberts (Ed.) Online
Collaborative Learning: Theory and Practice, London: Information Science Publishing,
pp.86–102.
Zumbach, J., Muehlenbrock, M., Jansen, M., Reimann, P. and Hoppe, U. (2002)
‘Multi-dimensional tracking in virtual learning teams: an exploratory study’, Proceedings of
the Conference on Computer Supported Collaborative Learning CSCL-2002, Boulder,
Colorado, pp.650–651.
Note
1 http://bscw.fit.fraunhofer.de/