
Testing a taxonomy to facilitate e-assessment

Bobby Elliott, Scottish Qualifications Authority

Abstract

Online writing, in various forms, is common in education but there has been relatively little research into ways of analysing and evaluating its quality. This paper reports on original research into ways of assessing online writing. It addresses two research questions: 1. Can a traditional taxonomy be used to analyse the academic quality of online writing? and 2. Can a taxonomic approach be used to aid the assessment of online writing? The paper uses content analysis techniques to study two forums used within a postgraduate online learning programme and reports on the nature of learning in this environment. The techniques used are well-known methods of analysing academic discourse: a revised version of Bloom's taxonomy proposed by Anderson and Krathwohl (2001) and the practical inquiry model proposed by Garrison, Anderson and Archer (2000). This paper seeks to assess their suitability for the online domain. The paper concludes that traditional taxonomies can be used to analyse online writing and that the use of a taxonomy would improve assessment practice. It recommends the creation of an e-taxonomy for this purpose, based on traditional taxonomies but modified to better reflect the new affordances of the online environment.

Keywords

Taxonomy; e-assessment; online writing; e-taxonomy.

Introduction

The use of computers in education has a long tradition. Higher Education (HE) has used asynchronous communication tools (such as discussion boards) since the late twentieth century. More recently, a range of Web 2.0 tools, such as blogs and wikis, have been added to this toolset. Students enjoy the use of technology in their classes (Clarke, Flaherty and Mottner 2001) and its use is supported by a wealth of pedagogical research, such as Vygotsky's theories on the construction of knowledge (1987) and Kolb's emphasis on the importance of learning through participation (1984). Subsequent research has highlighted the importance of the social construction of knowledge (see, for example, Pea 1993).

A large body of empirical evidence supports the use of computers in the learning process. A meta-analysis carried out by the Department of Education in the United States (US Department of Education 2009)[1] concluded that students who took all or part of the class online performed better, on average, than those taking the same course through traditional face-to-face instruction. The study reported that instruction which combined online learning with face-to-face learning was most effective.

[1] Over 1000 research studies were included in this meta-analysis. Most related to the post-school sector; few studies have been carried out for younger learners.


These findings were true irrespective of the means of implementation [of online learning] and were found to be applicable irrespective of content or learners.[2]

This research paper looks at one particular form of online learning: that which involves the use of online discussion boards. Online discussion boards are one instance of what has been variously described as online writing, digital writing, digital learning narratives and Web 2.0 writing, which can take various forms including online forums, blogs, wikis, social networks, instant message logs and virtual worlds. Many of the issues raised in this paper are relevant to all forms of online writing.

This new medium provides new affordances: new ways of utilising the medium to communicate and collaborate. For example, online writing may make visible such things as cooperation, collaboration and self-reflection; the learners' thought processes are also more apparent (Knox 2007). The inclusion of multimedia (such as audio and video) is straightforward. Referencing (hyperlinking) to related resources or information is simple. The asynchronous nature of many online communications makes the time and place of contributions more flexible, and provides more wait time (Berliner 1987) to improve opportunities for reflective writing. The dialogue may have an audience beyond the walls of the university (perhaps at campus, national or global level). The scope of the tasks that may be set can be greater, with faculty setting large-scale projects such as collaborative book writing or large-scale software construction (Gray et al. 2009). Authenticity can be improved by tackling real-world issues and seeking feedback from peers and experts across the world (Gray et al. 2009). The potential for producing authentic, large-scale, co-constructed, continuously improved, media-rich works of local, national or international interest is unique.

These new affordances have implications for assessment. They provide an opportunity to assess skills that were previously considered difficult or impossible to assess. For example, collaboration has long been recognised as an important skill but is rarely assessed due to inherent difficulties in measuring this competence through traditional methods. The new affordances offer opportunities to set new kinds of assessment activities: real activities with real value rather than the contrived tasks often used to assess learners' knowledge and skills (Gray et al. 2009). Traditional assessment has been criticised as being artificial or trivial or irrelevant (see, for example, Brown et al. 1997). The online environment offers the prospect of addressing these criticisms as well as rethinking current practice.

Guidance on best practice in online writing is slowly emerging. For example, Crème and Lea (2003) include specific advice on writing for the internet. Digital scholarship is an emerging topic in academia (see, for example, Weller 2011). These publications provide guidance and advice about online writing and, tangentially, provide an informal way of describing and analysing it. However, to date, few academics have addressed the specific issue of the formal analysis of online writing. The verbal nature of online interaction makes it suitable for traditional content analysis techniques, which are normally used in the analysis of oral or written interviews, but these were designed to elicit meaning from long narratives and have been criticised, even in this context, for their complexity and subjectivity.
[2] The report noted that the benefits of online learning may not necessarily relate to the medium but could be the effects of other factors such as time spent learning, the curriculum, or pedagogy.


More appropriate approaches have been suggested. Mercer (2000) developed a socio-cultural discourse analysis that categorised online contributions as disputational, cumulative or exploratory, and this has been applied, with some success, by other academics (see, for example, Littleton and Whitelock 2005).

The absence of an online pedagogy, an e-pedagogy, has been noted (Elliott 2009), but the lack of a formal taxonomy for describing online activities is, arguably, even more fundamental. Online writing is a relatively recent form of communication, and its educational assessment is evolving. Current practice in the creation of rubrics to assess students' writing has been criticised, with many marking schemes found to have low validity and fidelity and to be confused about what it is they are assessing: "There appears to be confusion about what faculty is seeking to assess. Many of the assessment criteria relate to knowledge, skills and attitudes that have little, or nothing, to do with learning objectives. Many rubrics focus on participation rather than achievement, and inputs (such as effort) rather than outputs (such as understanding)" (Elliott 2010).

The adoption of a formal taxonomy for online writing could provide similar benefits to those that were realised by the adoption of Bloom's taxonomy in the middle of the twentieth century. It would provide a common framework and shared vocabulary for describing online writing and would facilitate its assessment.

This paper seeks to highlight and develop the debate about the assessment of online writing. Specifically, it explores the practicality of using traditional methods of analysing academic discourse and seeks to determine their efficacy as a means of assessing it. The research questions were:
1. Can a traditional taxonomy be used to analyse the learning that takes place in an online environment?
2. Can a taxonomic approach be used to aid the assessment of online writing?

Research methods

Two forums were selected to address the two research questions. Both forums were part of the University of Edinburgh's master's degree in e-learning, which has been offered by the university since 2006.[3] The forums were part of the following courses within that programme:
Introduction to digital environment for learning (IDEL)
Understanding learning in an online environment (ULOE).

These courses were selected because they both required students to use an associated online discussion forum, and one was assessed (ULOE) while one was not (IDEL), permitting them to be compared with respect to the impact of assessment. In total, 33 students undertook these courses between September and December 2009; 21 participated in the IDEL course and 12 in the ULOE course. Each course lasted twelve weeks. In each course, some time was dedicated to assessment towards the end of the course; this varied from one non-teaching week in IDEL (week 12) to two non-teaching weeks in ULOE (weeks 11 and 12).

[3] The programme was piloted in 2004.


The IDEL course is an introductory course, normally, but not exclusively, undertaken by students new to the programme. The ULOE course is generally undertaken by more experienced students. ULOE was assessed using four elements, one of which related to learners' participation in the associated course forum. This contributed 10% to the overall course grade. A rubric, based on that described by Rovai (2000), was used to assess each student. The other assessed elements were: a learning needs analysis (20%), a reflective report (20%) and an essay (50%).

In total, 1380 messages were posted across the two forums: 723 on IDEL and 657 on ULOE. A significant number of these were social messages and, as such, were excluded from the analysis, leaving 491 messages on the IDEL forum and 545 messages on the ULOE forum (1036 messages in total).[4]

Research design

The analysis of the forums took a quantitative and qualitative approach. From a quantitative perspective, various metrics were computed, such as the total number of posts and the mean number of words per post. From a qualitative perspective, the academic quality was measured in terms of two attributes:
1. an analysis of the types of cognitive activities
2. an analysis of the critical thinking.

The first analysis was done using a modified version of Bloom's taxonomy (1956) proposed by Anderson and Krathwohl (2001) as the coding frame. The second analysis used the practical inquiry model proposed by Garrison, Anderson and Archer (2001). These techniques were selected because they are widely used and understood in the academic community. Bloom's taxonomy is long established and is the most widely used classification system in education; the Anderson and Krathwohl version is derived from Bloom's and is generally accepted as the de facto contemporary version of the original taxonomy. The practical inquiry model is not so universally known and used, but it is considered a standard method of critical analysis and, as such, is a good choice for the purposes of this research.

Anderson and Krathwohl's taxonomy

In 2001, Anderson and Krathwohl proposed a revision to Bloom's taxonomy to "incorporate new knowledge and thought into the framework [...] now that we know more about how children develop and learn" (xxii). Their taxonomy for learning, teaching and assessment (hereafter referred to as the taxonomy) used Bloom's original taxonomy as one dimension in a two-dimensional depiction of cognitive abilities (table 1).

[4] A significantly larger number of messages on the IDEL forum were social messages compared to the ULOE contributions.


Table 1: Anderson and Krathwohl's taxonomy for learning, teaching and assessment

Knowledge dimension: A. Factual knowledge; B. Conceptual knowledge; C. Procedural knowledge; D. Meta-knowledge
Cognitive dimension: 1. Remember; 2. Understand; 3. Apply; 4. Analyse; 5. Evaluate; 6. Create

Each message was analysed using the revised taxonomy as a coding frame. Each message was individually reviewed and placed in one of the cells in table 1. Each message was assigned a code, ranging from A1 to D6, to represent its coordinates in the table. To aid coding, the framework was populated with additional information to assist with the placing of each message in an appropriate cell (table 3). This additional information was based on the advice provided by the original authors but customised to reflect the dialogues that typically occur in the online environment. The table attempts to capture some of the unique affordances provided by an online forum, such as the ease with which a student can hyperlink to online resources. So, for example, providing a simple link to an online resource is considered as remembering factual knowledge (A1); summarising an online discussion is an example of analysing conceptual knowledge (B4); and relating an online resource to a learning theory is an example of understanding meta-knowledge (D2). Table 3 is essentially an attempt to update the taxonomy to encompass the new affordances of online writing.

The unit of analysis was a message. Both the taxonomy and practical inquiry methods are macro-analysis tools, best applied at message level rather than to smaller units (such as sentences or paragraphs). A message is also a natural unit of analysis since it is clearly delineated and is consciously packaged by the learner. The selection of a complete message as the unit of analysis also simplifies the process of content analysis, and obviates one of the main criticisms of traditional approaches (complexity).

Practical inquiry model

The practical inquiry model proposed by Garrison, Anderson and Archer (2000) was used to analyse the critical thinking on the forums. This model is well grounded in theory, building on Dewey's (1933) approach to learning. It has four phases:
1. triggering event
2. exploration
3. integration
4. resolution.
Table 2 describes these phases in more detail.

International Journal of e-Assessment vol.3 no.1 2013

Table 2: Practical inquiry model of critical analysis

Phase 1 (Triggering event): An issue or problem is identified; this may be a learning challenge initiated by the teacher or an observation posted by a learner.
Phase 2 (Exploration): The social exploration of ideas. In this phase, learners seek to comprehend the problem or issue, moving between discourse with other learners and critical reflection.
Phase 3 (Integration): Meaning is constructed from the ideas and comments produced in the previous phase. In this phase, ideas are assessed and connections are made between ideas.
Phase 4 (Resolution): In this phase, conclusions are reached and some consensus is created between learners.

This model was designed to identify critical thinking, which Garrison et al. defined as "the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse in a critical community of inquiry" (Garrison, Anderson and Archer 2000). To aid the model's use as a classification tool for online forums, it was augmented to provide additional guidance on its application in an online environment (figure 1). Note that the model is circular, since the output from the resolution stage can be used as a triggering event.


Table 3: Coding table for content analysis of cognition with exemplification of online writing

A. Factual knowledge
1. Remember: State relevant facts. Provide references/hyperlinks. Ask relevant questions. Describe relevant personal experiences. Give factual replies to messages. Suggest relevant resources. Provide appropriate links.
2. Understand: Explain facts. Check facts. Clarify questions. Correct data/factual information. Classify facts. Summarise facts/data. Exemplify facts. Suggest related resources.
3. Apply: Perform calculations. Relate factual information to self or others. Apply factual information to self or others. State opinion in relation to facts provided. Suggest use of resources to self or others.
4. Analyse: Determine point of view about facts/factual information. Check facts/information for accuracy. Determine bias in data/factual information. Select data. Organise data. Discriminate between facts. Compare facts. Summarise data. Provide factual summary.
5. Evaluate: Apply criteria to factual information. Assess factual information using criteria. Evaluate resources.
6. Create: Generate factual information. Produce hypotheses about factual information. Create factual scenarios. Create criteria for evaluating factual information. Evaluate personal experience factually. Create original data. Create digital resource with factual use.

B. Conceptual knowledge
1. Remember: State theories/concepts. Reference concepts/ideas. Quote ideas/concepts. State systems. State relationships.
2. Understand: Explain concept/system in own words. Seek clarification about idea/concept. Give examples of idea. Describe systems. Make links between ideas. Ask questions about ideas/positions. Agree or disagree with reasons. Provide metaphors for concept. Seek clarification about concept/system.
3. Apply: Follow a system. Apply a method. Identify relationships. Relate concept to self. Describe personal experience of concept. Apply concept to self. Suggest uses of concept. Suggest resources linked to concept.
4. Analyse: Compare ideas/concepts/systems. Break down ideas/systems. Agree or disagree with concepts/methods with reasons. Summarise a discussion/concept/system.
5. Evaluate: Judge a system or relationship. Assess a concept using criteria. Apply criteria to an idea or system. Challenge an idea/position using criteria or references.
6. Create: Make a relationship. Devise a system. Synthesise a discussion. Provide an insight. Propose changes to a system or relationship. Create criteria to evaluate an idea/system. Create an original idea/method.

C. Procedural knowledge
1. Remember: State a procedure or method. Refer to a relationship or method. Describe how to do something.
2. Understand: Discuss a procedure or method. Clarify a procedure. Describe a procedure or method. Describe a relationship. Ask questions about method/relationship.
3. Apply: Follow a procedure. Relate procedure/method to self. Describe personal use of system/procedure. Describe personal experience of system/method. Relate conceptual knowledge to a procedure/method.
4. Analyse: Break down a procedure or method. Identify components within a procedure. Compare steps/stages. Compare methods of doing something. Summarise a procedure.
5. Evaluate: Judge a procedure/system using criteria or references. Assess the criteria for a procedure. Propose justified changes to a system/procedure. Write criteria for judging a procedure or method.
6. Create: Devise a procedure or method. Propose a new system.

D. Meta-knowledge
1. Remember: Describe learning theories. State facts about self in relation to course content. State facts about nature of knowledge. State facts about learning. Refer to learning strategies.
2. Understand: Describe learning strategies. Explain the nature of knowledge. Describe personal limitations or abilities related to learning or course content. Relate resource to learning theory.
3. Apply: Use knowledge of self to further learning. Apply learning strategies. Reflect on professional practice related to course. Relate technology to own learning.
4. Analyse: Analyse self using criteria. Identify personal strengths and weaknesses. Apply learning strategies. Identify bias in self or learning theory.
5. Evaluate: Compare learning strategies. Evaluate self using criteria. Judge own learning. Assess learning strategies. Assess resource for learning.
6. Create: Devise a learning strategy. Modify an existing learning strategy. Create a new resource for learning.


Figure 1: Critical inquiry model used to encode forums with exemplification of online writing

Every message in both forums was encoded T (trigger), E (explore), I (integrate) or R (resolve) to denote the type of critical thinking it contained. Inter-rater reliability was computed for both methods. The results of this analysis are reported in the findings section below.

The students' consent for this research was sought prior to them commencing the courses. It is therefore possible, if unlikely, that this may have influenced their contributions to the forums.[5]

Findings[6]
Extracting the taxonomy data and comparing both courses in this respect produced table 4. Figure 2 illustrates these data sets graphically.
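To illustrate how message codes such as A1 or B4 can be aggregated into summary percentages of the kind reported in table 4, the following Python sketch tallies a handful of hypothetical codes by knowledge dimension. The codes, the function name and the resulting percentages are invented for illustration only and are not drawn from the study's data.

```python
from collections import Counter

# Rows and columns of the Anderson and Krathwohl grid used as the coding frame.
KNOWLEDGE = {"A": "Factual", "B": "Conceptual", "C": "Procedural", "D": "Meta"}
COGNITIVE = {"1": "Remember", "2": "Understand", "3": "Apply",
             "4": "Analyse", "5": "Evaluate", "6": "Create"}

def tally(codes):
    """Count coded messages by knowledge dimension (the row of the grid),
    returning rounded percentages of all coded messages."""
    counts = Counter(code[0] for code in codes if code[0] in KNOWLEDGE)
    total = len(codes)
    return {KNOWLEDGE[row]: round(100 * counts[row] / total) for row in KNOWLEDGE}

# Hypothetical codes for a handful of messages (A1 = remember factual
# knowledge, B4 = analyse conceptual knowledge, D2 = understand meta-knowledge).
sample = ["A1", "B4", "B2", "D2", "C3", "B4"]
print(tally(sample))  # {'Factual': 17, 'Conceptual': 50, 'Procedural': 17, 'Meta': 17}
```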


[5] Anonymity was guaranteed. Only aggregated data would be presented.
[6] Detailed coding tables are available from the author on request.



Table 4: Summary of forums using taxonomy analysis

        Factual   Conceptual   Procedural   Meta   None
IDEL    21%       22%          23%          2%     32%
ULOE    11%       59%          18%          5%     7%

The IDEL forum was significantly more factual and slightly more procedural than the ULOE forum. One in five posts on the IDEL forum stated basic facts or unsubstantiated factual opinions; this was double the rate of the ULOE forum. Neither forum exhibited much meta-knowledge.

The striking difference is the amount of conceptual knowledge in each forum (figure 2). Slightly more than one in five messages (22%) in the IDEL forum related to conceptual knowledge; the most common sub-class was understanding conceptual knowledge, with 52 occurrences. This compares with almost 60% of messages in ULOE relating to conceptual knowledge; the most common sub-class was analysing conceptual knowledge, with 121 occurrences.

Another striking difference is the proportion of posts that were not academic in nature. Almost one in three posts in the IDEL forum was a social message or a request for support. This compares to one in sixteen in the ULOE forum.

Extracting the critical inquiry data from the tables produces the summary in table 5.[7] These data are illustrated in figure 3.

Figure 2: Comparison of forums using taxonomy

[7] The percentages do not add up to 100 since social and technical messages were excluded.



Table 5: Analysis of forums based on critical thinking

        Trigger   Explore   Integrate   Resolve
IDEL    17%       27%       20%         4%
ULOE    24%       27%       23%         9%

The analysis of critical thinking produced fewer differences than the taxonomy analysis. In fact, the forums were broadly equivalent in terms of this analysis. The main differences relate to the number of trigger questions and the number of messages that sought to resolve discussions. ULOE included (proportionately) more trigger-type questions than IDEL. Similarly, and more significantly, ULOE included twice as many resolve-type messages as IDEL. However, both percentages were low, indicating that few (student) messages, in either forum, sought to summarise or synthesise discussions.

Figure 3: Comparison of critical thinking in the forums

Inter-rater reliability

To check the reliability of the analysis, a second rater categorised the data using the same coding frames. Cohen's kappa coefficient (Kc) (Cohen 1960) was used to determine the extent of the agreement between the raters (and, hence, the reliability of the analysis). The resulting kappa values were:
Kc (taxonomy) = 0.607
Kc (practical inquiry) = 0.64.
While both values were at the very low end of the category, these results indicate substantial agreement (Landis and Koch 1977) between the raters and imply that the analysis methods were objectively applied and produced reliable results. The kappa value for the practical inquiry model was, predictably, higher than that for the analysis by taxonomy, but the taxonomy analysis produced, in the writer's view, surprisingly reliable results given the relative complexity of this approach.
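For readers unfamiliar with the statistic, Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance; a value of 1 indicates perfect agreement. A minimal Python sketch of the calculation is shown below; the message codes are hypothetical and do not reproduce the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa for two raters' nominal codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of messages coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for ten messages using the practical inquiry categories
# (T = trigger, E = explore, I = integrate, R = resolve).
rater_1 = ["T", "E", "E", "I", "E", "R", "T", "I", "E", "I"]
rater_2 = ["T", "E", "I", "I", "E", "R", "E", "I", "E", "R"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # about 0.58 for these illustrative codes
```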


Conclusions

The conclusions from the study are presented as answers to the research questions, which are restated below. It is acknowledged that the study was small and that rigorous statistical methods were not applied to the findings. However, the conclusions and recommendations are not controversial and confirm the findings of larger studies (such as the positive effects of assessment on student discourse). The research does, nevertheless, unequivocally identify the potential benefits of a taxonomic approach to the assessment of online writing.

1. Can a taxonomy be used to analyse the learning that takes place in an online environment?

The content analysis methods employed in this study proved to be practical means of analysing the academic interactions on the forums. The analysis by taxonomy worked well, providing an insight into the nature (and types) of academic discourse in each forum. The critical analysis method also appeared to be effective, albeit producing less illuminating results. A significant conclusion from this study is that the proposed analysis by taxonomy is a workable and (apparently) effective means of analysing online discussions.

The content analysis by taxonomy proved to be an effective tool for analysing the cognitive quality of online discussions. The results of this analysis are illustrated in figure 2. However, the traditional Anderson and Krathwohl taxonomy requires revision to incorporate the new affordances of digital writing. Table 3 begins the process of customising the taxonomy for this purpose, but further work is needed to produce a truly updated taxonomy for the online environment. Like Bloom's original taxonomy, and subsequent revisions of it, an updated taxonomy would have important implications for assessment, providing a framework for the analysis of online writing and the construction of rubrics.

The critical analysis method also worked well, although it produced less differentiated results. It is the conclusion of this study that the practical inquiry model is a useful way of categorising contributions (as trigger, explore, integrate or resolve-type contributions) but that it has less direct application to assessment than the taxonomy. However, the associated terminology has the potential to improve pedagogy and aid rubric construction. The practical inquiry model also confirmed the quality of the discussions. Both forums included a significant number of messages that spanned the triggering event, exploration and integration phases of this model; the resolution phase was less evident. The practical inquiry model proved to be a simple and reliable method of categorising contributions and one that can be easily applied to various forms of online writing.

Both methods of content analysis (the taxonomy and the practical inquiry model) provide a higher-level view of online discourse than traditional content analysis methods. Both methods were easily applied, produced rapid results (compared with traditional content analysis methods) and appeared to have high reliability.

2. Can a taxonomic approach be used to aid the assessment of online writing?

13

International Journal of e-Assessment vol.3 no.1 2013

Assessment appeared to have two effects on the quality of discussion: it improved quality, as measured by the level of academic discourse taking place, and it altered the nature of messages. The findings show that the assessed forum had a significantly higher proportion of messages relating to course concepts than the non-assessed forum. Also, the assessed forum had significantly fewer non-academic (social and support) messages. However, this cannot be conclusively attributed to assessment. Other variables, such as the nature of the course contents or the prior experience of the student groups, may have played a significant role. But the apparent positive effects of assessment seen in this small study are consistent with previous larger studies.

Recommendations

Based on these conclusions, the following recommendations are made.

Recommendation 1: Traditional taxonomies can be used to analyse online writing

This study used a traditional taxonomic approach to the analysis of online forums and found that this produced worthwhile results. The reasons for using taxonomies appear to be as valid in the online domain as in the traditional classroom. The application of the taxonomy provided a framework for describing learner contributions and worked well as a means of analysing and classifying online writing, which has implications for the assessment of this activity (see below). The taxonomy operates at a macro (whole message) level and, in consequence, obviates some of the drawbacks of traditional content analysis methods. Unsurprisingly, it was discovered that traditional taxonomies need to be customised for the online environment, to take full account of the unique affordances of digital writing. This study showed that such customisations are possible and, once done, greatly assist with the process of analysing contributions.

Recommendation 2: An e-taxonomy should be created for the purpose of analysing online writing and improving assessment practice

Recommendation 1 stated that traditional taxonomies require customisation to embrace the unique affordances of online writing. The updated taxonomy could be described as an e-taxonomy. An e-taxonomy has the potential to improve online practice in much the same way that Bloom's original taxonomy had a positive impact on classroom practice. The original taxonomy provided teachers with a consistent way of defining curricula and constructing (and marking) examination papers. An e-taxonomy could confer similar benefits on the online learning environment. It would improve assessment practice by providing a common framework (and vocabulary) for appraising learner contributions to online discussions and has the potential to improve validity and reliability. An e-taxonomy would improve assessment practice by standardising the language used to describe and assess learner activity in the online domain. Such a taxonomy has the potential to improve professional practice in the twenty-first century in a similar way to the improvements effected by the original taxonomy in the twentieth century.



References

Anderson, L.W., and D.R. Krathwohl. 2001. A taxonomy for learning, teaching and assessing. New York: Addison Wesley Longman.

Berliner, D.C. 1987. But do they understand? In Educator's handbook: A research perspective, ed. V. Richardson-Koehler, 259-93. New York: Longman.

Bloom, B.S., ed. 1956. Taxonomy of educational objectives: The classification of educational goals. Susan Fauer Company, Inc.

Brown, George, Joanna Bull and Malcolm Pendlebury. 1997. Assessing student learning in higher education. London: Routledge.

Clarke, I., T.B. Flaherty and S. Mottner. 2001. Student perceptions of educational technology tools. Journal of Marketing Education 23, 3: 169-77.

Cohen, Jacob. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20, 1: 37-46.

Crème, Phyllis, and Mary R. Lea. 2003. Writing at university: A guide for students. Open University.

Dewey, J. 1933. How we think: A restatement of the relation of reflective thinking to the educative process. Rev. edn. Boston: D.C. Heath.

Dewey, J. 1897. My pedagogic creed. School Journal 54: 77-80.

Elliott, B. 2010. A review of rubrics for assessing online discussions. Paper presented at the CAA Conference 2010. http://www.scribd.com/doc/33378944/A-Review-ofRubrics-for-Assessing-Online-DiscussionsCAA-Conference-2010 (accessed February 4, 2013).

Garrison, D.R., T. Anderson and W. Archer. 2001. Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education 15, 1: 7-23.

Garrison, D.R., T. Anderson and W. Archer. 2000. Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education 2, 2: 1-19.

Gray, K., J. Waycott, M. Hamilton, J. Richardson, J. Sheard and C. Thompson. 2009. Web 2.0 authoring tools in higher education learning and teaching: New directions for assessment and academic integrity. Discussion paper for national roundtable on 23 November 2009. Melbourne: Australian Learning and Teaching Council.

Knox, E.L. 2007. The rewards of teaching online. http://www.h-net.org/aha/papers/Knox.html (accessed January 4, 2010).

Landis, J.R., and G.G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics 33, 1: 159-74.

Littleton, K., and D. Whitelock. 2005. The negotiation and co-construction of meaning and understanding within a postgraduate online learning community. Learning, Media and Technology 30, 2: 147-64.

Pea, R.D. 1993. Practices of distributed intelligence and designs for education. Cambridge: Cambridge University Press.



Rovai, A.P. 2000. Online and traditional assessments: What is the difference? The Internet and Higher Education 3, 3: 141-51.

Rovai, A.P. 2003. Strategies for grading online discussions: Effects on discussions and classroom community in Internet-based university courses. Journal of Computing in Higher Education 15, 1: 89-107.

US Department of Education. 2009. Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf (accessed February 4, 2013).

Vygotsky, L.S. 1987. Problems of general psychology. Vol. 1 of The collected works of L.S. Vygotsky. New York: Plenum.

Weller, Martin. 2011. The digital scholar: How technology is transforming scholarly practice. London: Bloomsbury Academic.

