Small To Big Before Massive: Scaling Up Participatory Learning Analytics
Daniel T. Hickey
Indiana University
Bloomington, IN 47401
01-812-856-2344
dthickey@indiana.edu

tmcilmoi@indiana.edu

Xinyi Shen
Beijing Normal University
Beijing 100875, China
86-159-0100-1713
shenxinyi1128@gmail.com
ABSTRACT
This case study describes how course features and individual & social learning analytics were scaled up to support participatory learning. An existing online course was turned into a big open online course (BOOC) offered to hundreds. Compared to typical open courses, relatively high levels of persistence, individual & social engagement, and achievement were obtained. These results suggest that innovative learning analytics might best be scaled (a) incrementally, (b) using design-based research methods, (c) focusing on engagement in consequential & contextual knowledge, and (d) using emerging situative assessment theories.

General Terms
Algorithms, Measurement, Performance, Design.

Keywords
Personalized learning, learning analytics, assessment, social learning analysis, analytic approaches.

2. RESEARCH CONTEXT
This approach to personalized learning is rooted in prior design-based studies of educational multimedia [14, 16], educational videogames [1], and English language instruction [13]. That research yielded a general set of design principles and local theories [4] for enacting them. Starting in 2008, these ideas were used to design and refine two online graduate-level courses: Learning & Cognition in Schools and Assessment in Education. Both courses served students with very diverse experiences & ambitions, and both brought with them detailed expectations for disciplinary content coverage.

These refinements resulted in a set of participatory design principles for fostering productive forms of individual and social engagement in disciplinary knowledge [5] while also consistently impacting individual understanding (as assessed with classroom performance assessments) and aggregated achievement (as measured with conventional tests). Both courses were organized around wikifolios that every learner could see and comment on, and around texts that were challenging for less-experienced learners. The first author taught each course online at least once per year using the Sakai CMS. These features and analytics (a) were reasonably efficient with up to 30 students, (b) supported extensive levels of individual and shared disciplinary engagement, (c) generated enduring understanding of targeted course concepts, and (d) resulted in significant and substantial gains in student achievement [15].

The Assessment course was scaled up to a 12-week open course using Google Course Builder (Version 1.4) starting September 2013. The assignments, interactions, assessments, and analytics were all revised to be manageable with up to 500 participants. Ultimately, 460 registered for the course, including 8 who also enrolled for official university credit and agreed to complete all optional assignments. Some participants held doctoral degrees in education while others had no prior coursework in education.

In addition to the scaling done in preparation for the BOOC, more scaling took place across the BOOC as the features and analytics were streamlined and/or automated. Some streamlining involved making more efficient use of the instructor's time, as the weekly analytics and feedback were handed off to the teaching assistant (TA, the second author) and in some cases to an intern (the third author). Other streamlining involved moving from the cumbersome manual examination of wikifolio pages, first to downloaded spreadsheets and then to automated algorithms. Other scaling (not discussed here) is continuing as entire features are redesigned (and sometimes reconceptualized) to function more autonomously.

Generally speaking, the courses were designed and refined to align immediate real-time feedback as individual wikifolios were posted, close feedback across the entire set of wikifolios, proximal reflections on wikifolios, and distal assessment of achievement and post-hoc examinations of engagement.
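The shift from manually examining wikifolio pages to scripted analysis of downloaded spreadsheets can be illustrated with a minimal sketch. The file name and column headers below are hypothetical stand-ins; the paper does not specify the actual format of the Course Builder exports.

```python
import csv
from collections import Counter

def weekly_comment_counts(path="wikifolio_comments.csv"):
    """Count comments per participant per week from a course export.

    Assumes one row per comment with hypothetical columns
    'participant' and 'week'.
    """
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(row["participant"], int(row["week"]))] += 1
    return counts

if __name__ == "__main__":
    counts = weekly_comment_counts()
    participants = {p for (p, _) in counts}
    # Flag participants who went silent in a given week -- the kind of
    # check previously done by eye, page by page.
    for week in range(1, 13):  # the BOOC ran for 12 weeks
        silent = [p for p in sorted(participants) if counts[(p, week)] == 0]
        print(f"week {week:2d}: {len(silent)} silent participants")
```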
Comparing each saved version of a wikifolio to the next revealed an average change of 140 words, with spikes at the transitions to Part Two and to Part Three.
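A computation like that 140-word average can be sketched as follows, assuming each wikifolio's saved versions are available as text in chronological order; the data structure here is hypothetical.

```python
def word_change(versions):
    """Mean absolute change in word count between successive versions."""
    counts = [len(v.split()) for v in versions]
    deltas = [abs(b - a) for a, b in zip(counts, counts[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

def course_average(versions_by_student):
    """versions_by_student: hypothetical mapping from each participant
    to their wikifolio snapshots in chronological order."""
    means = [word_change(v) for v in versions_by_student.values() if len(v) > 1]
    return sum(means) / len(means) if means else 0.0
```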
In the small course, students were instructed to summarize the ideas in order of relevance and include a rationale for the ordering. In the BOOC, participants instead rearranged text boxes containing summaries of the ideas (and sometimes links to additional information) and provided a rationale in a text box below. This simplified the process and generated data for social feedback.
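Because the BOOC captured each participant's ordering as data, peers' rankings could be aggregated rather than read one page at a time. A minimal sketch of such aggregation for social feedback, with illustrative item names that are not from the course:

```python
from collections import defaultdict

def mean_ranks(rankings):
    """rankings: list of item-ID lists, one per participant,
    each ordered from most to least relevant."""
    positions = defaultdict(list)
    for order in rankings:
        for pos, item in enumerate(order, start=1):
            positions[item].append(pos)
    return {item: sum(p) / len(p) for item, p in positions.items()}

# Example: three participants rank the same three (hypothetical) concepts.
peers = [["validity", "reliability", "bias"],
         ["reliability", "validity", "bias"],
         ["validity", "bias", "reliability"]]
for item, rank in sorted(mean_ranks(peers).items(), key=lambda kv: kv[1]):
    print(f"{item}: mean rank {rank:.2f}")
```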
Notably, 33% of comments made reference to context (that of the wikifolio author, of the commenter, or sometimes both). Wikifolios received an average of 2.9 comments, and that average was relatively constant across the course. Post hoc analysis of one third of the wikifolios revealed that 65% included an initial question and that 55% of those questions received a response. Coding of comments was streamlined by a subsystem that output comment text and context, allowing faster examination of the impact of new or modified assignment features on engagement.
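That subsystem is described only as outputting comment text and context. A sketch of that kind of export, assuming comments are stored as simple records with hypothetical field names, might pair each comment with the fields a human coder needs:

```python
import csv

def export_for_coding(comments, path="comments_to_code.csv"):
    """comments: iterable of dicts with hypothetical keys
    'week', 'assignment', 'author', 'commenter', and 'text'."""
    fields = ["week", "assignment", "author", "commenter", "text", "code"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for c in comments:
            writer.writerow({**c, "code": ""})  # coder fills in this column
```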
In the small course, students awarded stamps to peers' work and warranted their claim that the artifact, exchange, or comment was exemplary. The wikifolio homepage indicated which student earned the most such stamps each week. In the BOOC, participants simply clicked a box to promote a wikifolio, which then asked them to add a warrant. The names of promoters and their warrants were then displayed on the wikifolio. The participant in each group with the most promotions each week was acknowledged in the weekly feedback. The group members with the most promotions for each of the three units and for the entire course were awarded leader versions of the digital badges described below. Participation in peer promotion averaged 67%, increasing from 51% in the first week to 77% in the last week.
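Once promotions are stored as records, identifying each group's weekly leader is a small aggregation. A sketch with hypothetical field names:

```python
from collections import Counter, defaultdict

def weekly_leaders(promotions):
    """promotions: iterable of dicts with hypothetical keys
    'week', 'group', and 'promotee'.

    Returns {(week, group): participant with the most promotions}.
    """
    tallies = defaultdict(Counter)
    for p in promotions:
        tallies[(p["week"], p["group"])][p["promotee"]] += 1
    return {key: counter.most_common(1)[0][0]
            for key, counter in tallies.items()}
```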
Because of the workload, the open-ended items were dropped in the BOOC, which instead included three 20-item unit exams and a 30-item comprehensive final. While item-level feedback was not provided, participants were allowed to take each exam twice within three hours and the final twice within four hours. Participants were required to complete the exams and the final to earn digital badges, but the original 80% criterion was relaxed. Participants could choose whether their exam performance appeared on their digital badges, and exam scores were factored into final course grades for the for-credit students. Average scores for non-credit participants were 84%, 76%, 78%, and 75%, while average scores for the for-credit students were 88%, 83%, 78%, and 82%.
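The paper does not state how the two permitted attempts were scored; assuming the better attempt counted, the logic might look like this sketch (records and field names are hypothetical):

```python
def best_scores(attempts):
    """attempts: iterable of (participant, exam, score) tuples,
    with at most two attempts per participant per exam.

    Returns {(participant, exam): best score}.
    """
    best = {}
    for participant, exam, score in attempts:
        key = (participant, exam)
        best[key] = max(score, best.get(key, 0.0))
    return best

# Example: a second attempt replaces the first only if it is higher.
records = [("p1", "unit1", 0.70), ("p1", "unit1", 0.85), ("p2", "final", 0.78)]
assert best_scores(records)[("p1", "unit1")] == 0.85
```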
4. CONCLUSIONS
In light of the low completion and engagement rates reported for university-sponsored MOOCs [2], the BOOC appeared quite promising. Of the 160 participants who completed the first assignment, 60 (37%) completed the course. In contrast to the declining engagement among completers in most MOOCs, engagement levels across the BOOC units were stable (comments) or increased (endorsements and promotions). BOOC completers reported working an average of 7.5 hours per week, with a high of 30 hours. Open-ended comments were overwhelmingly positive; one participant who completed his MS and PhD at top-ranked universities deemed the BOOC the best graduate-level education course he had ever taken. Reassuringly, there were virtually no meltdowns like those in one widely cited failure to foster more participatory personalized learning at a massive scale [20].

The success of this course supports four suggestions for scaling up participatory learning. The first suggestion is that scaling should be done incrementally. With around 100 participants completing weekly assignments, it was possible to gradually streamline the learning analytics by handing them off to a TA and making use of spreadsheets and other resources. With thousands of learners, this useful iterative refinement and the insights that followed would have been impossible. While this brief paper does not allow elaboration, efforts are already underway to further streamline most BOOC features and analytics to allow their use in automated and/or massive contexts.

The second suggestion is that efforts to scale should consider design-based research methods [4]. The specific effort described here was part of a larger ongoing effort to refine and exemplify a set of more general participatory design principles. In translating these general principles into specific features and analytics, more specific design principles are emerging alongside knowledge of the contextual factors that help make them possible. Together, these general principles, specific principles, specific features, and contextual factors offer useful knowledge for broader efforts and the efforts of others.

The third suggestion is that the design principles used to scale participatory learning should embrace emerging situative theories of assessment [8, 11, 12]. These theories provide a coherent framework for aligning informal assessment during activities, semi-formal assessment in reflections, and formal assessment in exams.

The final suggestion is that the design and refinement of interactive features and analytics should focus on contextual and consequential knowledge. Doing so appears to be a scalable way to foster the natural forms of disciplinary engagement [5] that can also foster learning of the factual, procedural, and conceptual knowledge that courses are typically accountable for.
5. ACKNOWLEDGEMENTS
This research was supported by a gift from Google to Indiana University. Garrett Poortinga and Thomas Smith contributed directly to many of the features and learning analytics described in this paper. Rebecca Itow contributed to key aspects of the design research and the writing of this manuscript, and Retno Hendryanti supported the instruction described here.
6. REFERENCES
[1] Barab, S., Zuiker, S., Warren, S., Hickey, D., Ingram-Goble, A., Kwon, E., Kouper, I., & Herring, S., 2007. Situationally embodied curriculum: Relating formalisms and contexts. Science Education, 91, 750-782.
[2] Brinton, C. G., Chiang, M., Jain, S., Lam, H., Liu, Z., & Wong, F. M. F., 2013. Learning about social learning in MOOCs: From a statistical analysis to a generative model. arXiv preprint, http://arxiv.org/abs/1312.2159.
[3] Brown, J. S., & Adler, R. P., 2008. Open education, the long tail, and Learning 2.0. EDUCAUSE Review, 43, 1, 16-20.
[4] Cobb, P., Confrey, J., Lehrer, R., & Schauble, L., 2003. Design experiments in educational research. Educational Researcher, 32, 1, 9-13.
[5] Engle, R. A., & Conant, F. R., 2002. Guiding principles for fostering productive disciplinary engagement: Explaining an emergent argument in a community of learners classroom. Cognition and Instruction, 20, 3, 399-483.
[6] Gee, J. P., 2004. Situated language and learning: A critique of traditional schooling. Psychology Press, New York.
[7] Greeno, J. G., 1998. The situativity of knowing, learning, and research. American Psychologist, 53, 1, 5-26.
[8] Greeno, J. G., & Gresalfi, M. S., 2008. Opportunities to learn in practice and identity. In Moss, P. (Ed.), Assessment, equity, and opportunity to learn. Teachers College Press, New York, 170-199.
[9] Gresalfi, M., et al., 2009. Virtual worlds, conceptual understanding, and me: Designing for consequential engagement. On the Horizon, 17, 1, 21-34.
[10] Hickey, D. T., 2003. Engaged participation vs. marginal non-participation: A stridently sociocultural model of achievement motivation. Elementary School Journal, 103, 4, 401-429.
[11] Hickey, D. T., 2012. A gentle critique of formative assessment and a participatory alternative. In Noyce, P., & Hickey, D. T. (Eds.), New frontiers in formative assessment. Harvard Education Press, Cambridge, MA, 207-222.
[12] Hickey, D. T., Ingram-Goble, A., & Jameson, E., 2009. Designing assessments and assessing designs in virtual educational environments. Journal of Science Education and Technology, 18, 187-208.
[13] Hickey, D. T., McWilliams, J. T., & Honeyford, M. A., 2011. Reading Moby-Dick in a participatory culture: Organizing assessment for engagement in new media. Journal of Educational Computing Research, 44, 4, 247-273.
[14] Hickey, D. T., Taasoobshirazi, G., & Cross, D., 2012. Assessment as learning: Enhancing discourse, understanding, and achievement in innovative science curricula. Journal of Research in Science Teaching, 49, 1240-1270.
[15] Hickey, D. T., & Rehak, A., 2013. Wikifolios and participatory assessment for engagement, understanding, and achievement in online courses. Journal of Educational Multimedia and Hypermedia, 22, 4, 229-263.
[16] Hickey, D. T., & Zuiker, S. J., 2012. Multi-level assessment for discourse, understanding, and achievement in innovative learning contexts. The Journal of the Learning Sciences, 22, 4, 1-65.
[17] Ito, M., et al., 2010. Hanging out, messing around, and geeking out: Kids living and learning with new media. MIT Press, Cambridge, MA.
[18] Jenkins, H., 2009. Confronting the challenges of participatory culture: Media education for the 21st century. The MIT Press, Cambridge, MA.
[19] Middendorf, J., & Pace, D., 2004. Decoding the disciplines: A model for helping students learn disciplinary ways of thinking. New Directions for Teaching and Learning, 98, 1-12.
[20] Oremus, W., 2013. Online class on how to teach online goes laughably awry. Slate (February 5, 2013).
[21] Siemens, G., 2005. Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2, 1, 3-10.
[22] Woolf, B. P., 2010. A roadmap for education technology. Computing Research Association, Washington, DC.