AL-Smadi, M., Gütl, C. & Chang, V. (2011). Addressing e-Assessment Practices in e-Learning Activities: A Review. In S. Barton et al. (Eds.), Proceedings of Global Learn Asia Pacific 2011 (pp. 448-459). AACE. Retrieved from http://www.editlib.org/p/37209.

Addressing e-Assessment Practices in Modern Learning Settings: A Review
Mohammad AL-Smadi
Institute for Information Systems and New Media (IICM), TU-Graz, Austria
msmadi@iicm.edu

Christian Gütl
School of Information Systems, Curtin University of Technology, Perth, WA, Australia
Institute for Information Systems and New Media (IICM), TU-Graz, Austria
cguetl@iicm.edu

Vanessa Chang
School of Information Systems, Curtin University of Technology, Perth, WA, Australia
vanessa.chang@curtin.edu.au
Abstract

In today's "information age", learners grow up with technology dominating most of their life activities. Given this dominance, it is no surprise that learning and educational strategies have been influenced by the emerging use of technology. A new information age has appeared in which information and communication technology plays a main role in education and the learning society. As a result, learning and educational systems have become more modern and global. Assessment, as a main part of any learning system, has evolved over the last fifty years, and educators have had to cope with the changes in educational systems. This paper discusses the need for enhanced assessment activities that can be flexibly integrated into the learning process. Moreover, it provides a historical overview of e-assessment and its evolution, covering emerging forms of assessment and their practices in the new culture of e-learning and e-assessment. To this end, learning activities have to be linked with assessment to make learning more exciting, which in turn attracts learners and engages them in the learning process.
Introduction

Recently, a new age of information has appeared in which information and communication technology plays a main role in education and the learning society. As a result, modern learning settings built on learner-centered practices have become more dominant. A new culture of assessment has arisen that integrates measurement with instruction in order to assess cognitive skills (e.g. problem solving, critical thinking), meta-cognitive skills (e.g. self-reflection and self-evaluation), social skills (e.g. leading discussions and working within groups), and affective aspects (e.g. internal motivation and self-efficacy) (Dochy & McDowell, 1997). In this new culture of assessment, students play major roles, and new forms of assessment have been adapted to suit the learning styles of modern learners. Such forms include interviews, performance assessment, exhibitions, portfolio assessment, process and product assessment, directed assessment, authentic assessment, alternative assessment, collaborative assessment, and self- and peer-assessment (Dochy & McDowell, 1997; Shepard, 2000).

In the so-called "information age", learners grow up with technology dominating most of their life activities. They use technology anywhere, anytime, and they face the challenge of needing to be engaged and motivated in their learning (Prensky, 2001). The emergence of Web 2.0 and the influence of Information and Communication Technology (ICT) have fostered e-learning that is more interactive, challenging, and situated. As a result, learners feel empowered when they are engaged in collaborative learning activities and self-directed learning. Learners are also provided with e-learning systems that maintain their social identity and situated learning experience. Given the different learning styles of students, educators face the challenge of having to develop assessments that appraise the students' learning process. Assessment forms provided in e-learning activities have to be adapted so that they can foster effective types of learning such as reflective learning, experiential learning, and socio-cognitive learning (Dochy & McDowell, 1997; Elliott, 2008).

Our research group at Graz University of Technology (TUG) is collaborating with other researchers in an EU-funded project entitled "Adaptive Learning via Intuitive/Interactive, Collaborative and Emotional System" (ALICE), running from 2010 to 2012. TUG is responsible for developing the "New Forms of Assessment" work package, in which an integrated model for e-assessment based on rich learning experience is provided. The model will mainly identify possible tools, practices, and guidelines for providing enhanced forms of e-assessment for complex learning objects (CLO) such as serious games and simulations, virtualized collaborative learning, storytelling, and the consideration of affective or emotional aspects. An extensive literature search will be conducted on e-assessment in general and e-assessment practices in particular to support and reinforce the ALICE work package. In this paper, assessment models are discussed, a short history of e-assessment is provided, and emerging forms of e-assessment are explained.
Assessment Models
The questions that educators often ask about assessment are whether all assessment forms share the same framework or architecture, and what features the various forms have in common. Pellegrino, Chudowsky, and Glaser (2001) provided what they called the assessment triangle, which describes three key elements of assessment in general. The first element is cognition: a model of learning and assessment in the domain that represents how students build knowledge and develop competence. The second element is observation: a set of beliefs about the kinds of observations that are constructed based on the situations and tasks provided to students so they can interact with them and build their knowledge and skills. Observations provide evidence of students' competencies. The third element is interpretation: the process of reasoning about evidence of competence achievement based on the observations.

Evidence-centered assessment design (ECD) (Almond et al., 2002; Mislevy et al., 2004) is a framework that explains the structures of assessment arguments, their elements and processes, and the interrelationships among them. The ECD framework consists of five layers: (1) domain analysis, (2) domain modelling (design patterns), (3) the conceptual assessment framework (CAF) (templates and task specifications), (4) assessment implementation, and (5) assessment delivery (the four-process delivery system). As this project focuses on a "New Forms of Assessment" work package, the third layer, the CAF, is examined in detail and elaborated in this paper. The CAF describes the assessment arguments, which are sketched in design patterns, in terms of the kinds of elements and processes required to implement an assessment that embodies those arguments. The CAF consists of a set of models, each providing specifications linked to a critical question. For example, the Student Model attempts to answer "What are we measuring?"; the Evidence Model attempts to answer "How do we measure it?"; the Task Model is linked to "Where do we measure it?"; the Assembly Model is linked to "How much do we need to measure?"; and finally, the Presentation Model attempts to answer "How does it look?".
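The mapping between the five CAF models and their guiding questions can be sketched as a small lookup structure. The following Python sketch is purely illustrative: the class and helper names are our own and are not part of the ECD literature.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: `CafModel` and `question_for` are hypothetical
# names, not part of the ECD specification.

@dataclass
class CafModel:
    name: str
    question: str                     # the critical question the model addresses
    specifications: list = field(default_factory=list)

# The five CAF models and their guiding questions (after Almond et al., 2002)
CAF_MODELS = [
    CafModel("Student Model", "What are we measuring?"),
    CafModel("Evidence Model", "How do we measure it?"),
    CafModel("Task Model", "Where do we measure it?"),
    CafModel("Assembly Model", "How much do we need to measure?"),
    CafModel("Presentation Model", "How does it look?"),
]

def question_for(model_name: str) -> str:
    """Look up the critical question a given CAF model answers."""
    for model in CAF_MODELS:
        if model.name == model_name:
            return model.question
    raise KeyError(model_name)
```

In an assessment-authoring tool, such a structure could anchor each authored artifact (rubric, item template, layout) to the CAF model it belongs to.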
Moreover, these five models describe the requirements for the objects in the assessment delivery system. The Delivery System Model describes the collection of student, evidence, task, assembly, and presentation models necessary for the assessment and how they will work together.

Almond et al. (2002) describe a four-process architecture whose processes are common features across different forms of assessment. These processes are activity selection, presentation, response processing, and summary scoring. The creation of the assessment task starts with the activity selection process, where the administrator (instructor) selects and sequences tasks from the task/evidence composite library (a database of possible tasks, their descriptions, materials, rules, and evidence parameters). Following this, the information is sent to the presentation process, which delivers the assessment task to the participant (student). Relevant materials can be retrieved from the task/evidence composite library, for instance an assessment paper (traditional assessment) or images and audio/video files (e-assessment). The presentation process records the student's response as a work product, which can be a paper script or a computer file, and then delivers this work product to the response processing step for evaluation. The evaluation may consist of a simple scoring process or a more complex series of evaluations of the student's responses. The evaluations are then passed to the summary scoring process, which updates the scoring record. The scoring record contains all the judgements
about students' knowledge, skill levels, and abilities based on pre-defined evidence provided for all tasks. According to Almond et al. (2002), separating the Response Processing step from Summary Scoring is vital to an evidence-based focus in assessment design and supports reuse of the task in multiple contexts. Two types of feedback can be delivered based on this architecture: Task-Level Feedback, which represents the immediate feedback based on student responses independently of evidence from other tasks, and Summary Feedback, which reports the accumulated scores from the scoring record, based on task evidence, to the participant (student). According to Brinke et al. (2007), Almond's four-process conceptual assessment framework (CAF) has a limitation: it was designed for computer-based assessment and is directed mainly at the execution phase of assessment. The CAF views assessment as a process involving two main roles: an administrator, who sets up and maintains the assessment, and a participant (student), whose competence, skills, and knowledge are to be assessed.
Brinke et al. (2007) have constructed an educational model for assessment in which they cover new types of assessment. The model is designed around different sub-models, each representing a different stage in the assessment process: assessment design, item construction, assessment construction, assessment run, response rating, decision making, and feedback to the assessment design stage. These sub-models are required to adapt the assessment activities across the whole assessment process. According to Brinke et al. (2007), the model can be used to enrich the IMS QTI (Question and Test Interoperability) specification with more features, especially for the 'assessment' and 'section' parts of the specification. Moreover, it can be used to fill in the gaps between the IMS QTI specification and other related specifications such as IMS LD (Learning Design) by providing directions for using both specifications to address teaching, learning, and assessment. However, the model has some limitations, as it does not discuss the statistical and psychometric information covered in Almond's four-process model.

Another useful framework is the Framework Reference Model for Assessment (FREMA, http://www.frema.ecs.soton.ac.uk). FREMA was the principal deliverable of the FREMA research project, which ran from April 2005 until October 2006. The project was funded by the Joint Information Systems Committee (JISC) as part of its e-learning framework (E-Framework) programme. FREMA explains and visualizes possible activities and entities related to the e-learning assessment domain. The framework uses concept maps to visualize assessment components and their interrelationships in a way that explains possible assessment services, standards, organizations, and use cases (Millard et al., 2005). The FREMA website provides interactive Flash® components to demonstrate the assessment domain. FREMA's resources and activities have been defined in consultation with the e-assessment community in the UK (Millard et al., 2005). A useful pair of views, the Noun Map and the Verb Map, represents the related assessment resources and activities. The Noun Map explains the possible assessment resources as well as the stakeholders and their roles in the assessment cycle. The Verb Map represents the possible processes of assessment and what people can do in the context of e-assessment.

Another example of an assessment framework is the Service Oriented Framework for Assessment (SOFA). SOFA was designed by AL-Smadi et al. (2009) as part of their research on flexible and standardized e-assessment systems.
The authors suggested a flexible e-assessment system from the architectural point of view, where the system can be used as a standalone e-assessment system or be integrated with other systems such as LMSs and authoring tools. They therefore distinguish between two levels of standardization in flexible e-assessment systems and their possible standards and specifications. The first level is an external level, which represents possible services and standards that can be used to integrate the assessment system with external user agents such as LMSs. The second level is an internal level of standardization, which covers e-assessment-related services and their possible standards. SOFA concentrates on service-oriented architecture (SOA), where services should be designed to be standard-conformant. SOA fosters e-assessment systems with a flexible architecture, so that they can be used as standalone systems or be integrated with other systems and tools.

The SOFA abstraction layers are discussed as follows. Users and Systems represent the external users, tools, and systems that may interact with the e-assessment system. The Interface layer handles the external communications between the e-assessment system and other external systems, users, and tools; it should be underpinned by a set of specifications and standards in order to facilitate integration and communication between the core e-assessment system and external user agents. Assessment Services represent the fundamental services of any e-assessment system; the services here perform the main functionality of the assessment process, from authoring items to exchanging them. Special interfaces enable interaction between these services and the assessment portal and users. The specifications and standards of the internal level determine which e-assessment services are used. The assessment services have been identified based on the FREMA processes concept map. Common Services are a lower level of services that
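The separation SOFA describes between an external interface layer and internal assessment services can be sketched roughly as follows. All class, method, and action names here are hypothetical; SOFA itself prescribes the layering and standards conformance, not any particular code shape.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of SOFA-style layering: an external interface layer
# dispatches requests from user agents (e.g. an LMS) to internal services.

class AssessmentService(ABC):
    """Internal service implementing one part of the assessment process."""
    @abstractmethod
    def handle(self, request: dict) -> dict: ...

class ItemAuthoringService(AssessmentService):
    def handle(self, request: dict) -> dict:
        # Store a new assessment item; a real service would validate it
        # against an internal standard such as IMS QTI before accepting.
        return {"status": "created", "item": request["item"]}

class InterfaceLayer:
    """External layer: routes requests from external user agents to the
    appropriate internal assessment service."""
    def __init__(self):
        self.services = {"author": ItemAuthoringService()}

    def dispatch(self, action: str, request: dict) -> dict:
        return self.services[action].handle(request)

# An external system (say, an LMS) submits an item through the interface layer.
lms_request = {"item": {"id": "q1", "text": "2 + 2 = ?"}}
response = InterfaceLayer().dispatch("author", lms_request)
```

Because external agents only ever see the interface layer, the internal services behind it can be replaced or re-standardized without breaking the integrations, which is the flexibility SOFA's two levels of standardization aim for.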
