ABSTRACT
The use of digital badges for peer-credentialing web-shared work offers the
promise of extending classroom learning beyond explicit course objectives
and evaluations. This pilot study of peer-awarded badges examines the
results of an online graduate course where students voted on and evaluated
the web-shared work of their colleagues on different criteria than those used by
the instructor, encouraging critical review of colleagues, extending student
learning in lateral ways, and suggesting activities for later course work.
Although voting was anonymous, student evaluation results were quite con-
sistent across the class and students appeared to have extended their per-
spective on course content areas (emerging technologies in this case) through
the process of peer review. Students' communications within the course were warm and cordial, a tone that may have been enhanced by learning about the personal and professional interests of colleagues through this peer-review process.
BACKGROUND
The emergence of web-based, digital badges (http://openbadges.org/) presents new possibilities for course enhancement and extension. Badges can serve to credential peer-reviewed, web-shared work, extending classroom learning beyond explicit course objectives and evaluations.
The course that was studied was “Learning with Emerging Technology: Theory
and Practice,” a foundational course offered within a new online master’s degree
program in emerging technologies. Students joined the course from a variety of
different professional backgrounds and experiences but all were required to
be open to learning with and using technologies. The course was intended to
serve as an introductory, survey-level course providing participants with learning
theories and practices related to the vast array of rapidly-surfacing, e-mediated
learning scenarios. In each of the four basic course modules, students studied
and summarized a relevant learning theory after examining research, readings,
videos, and websites containing applied and theoretical information. Students
conducted additional research on each module topic in reference to their
professional or personal interests. A fifth module required the students to col-
laborate in teams to summarize and reflect on the course content in each module.
They presented their collaborative summaries within the synchronous, virtual
environment of Second Life (www.secondlife.com). For the “practice” compo-
nent of the course in each of the four core modules, students created an emerging-
technology, exploratory project in an open-ended, sandbox way—that is,
students were given basic tutorials and required to mock up initial explorations
of their interests within each technology. Students developed these projects using
common, web-based approaches: presenting via PowerPoint and Prezi; creating
websites; developing shared videos with YouTube; and using Facebook for
institutional promotion. Within each of the four basic course modules, these
exploratory projects were shared with the instructor and colleagues through a
web link in their assignment posting.
Nine students participated in the course, ranging in age from 25 to 55 years. The
students came with a range of backgrounds and interests, from training developers
in formal environments, to independent “artistic” web and media developers, to
a K12 non-English language teacher, to a business instructor within a fashion
institute, and to several adult-education developers. Students voluntarily signed a
consent form approved by the college’s Institutional Review Board that allowed
their work to serve as data for the analysis provided within this article.
The course directions established that different criteria were being used in the instructor evaluation versus the peer evaluation, and made it clear that
although students were asked to be diligent in their peer review, the peer badge
outcomes would be shared but would not be used in determining the course grade.
In an example of one of the rubrics that the instructor used (see Table 1), it is
evident that the expectations of the instructor were functional and introductory.
Although “attractive” is listed as a criterion, the emphasis was on beginning-level skills and on the role that video might play in a learning environment (as shown within this YouTube evaluation rubric, which scored each criterion from 0, no evidence, to 10, excellent, with points counting toward a percentage of the grade).
By contrast, the students were given the badge categories and descriptors that
are shown in Table 2. By design, the tone and approach used in the peer review
was less formal, more open-ended, holistic, and mindful of the shared nature
of the various e-media artifacts created. The instructor generated a Google Drive
Form that served as the input for each peer evaluation. The form used the
categories in Table 2 for the badge evaluation and there were three areas on
which the first three badges were based. Aesthetics was the first area for each
of the four review assignments (of a presentation, a website, a YouTube video, an institutional Facebook page); the other two categories addressed creativity or
quality and a parameter unique to the media. By the fourth review, the instructor
added a category called “Try New” to ask students to reflect on the willingness
of the peer being reviewed to be adventuresome with the technology. Again,
the peer review was intended to encourage: learning from the work of others,
becoming a stronger community through understanding the efforts and work of
other students, and experiencing the process of badging as a model for possible
future use.
In designing the procedure for peer review, since the digital badging tools themselves (http://badg.us) were still under development and would not be used until the completion of the class, the instructor began by creating
Google Drive Forms that prompted for the three categories, and eventually, the
fourth category. The link to the evaluation form was placed within the section of
the online course where the student posted the link to their assignment artifact
(the website, YouTube, and so on). Once the students opened the Google Form,
they were prompted for their name and were then required to evaluate the three
or four categories on a 1 to 4 scale (No-Go to Gold). An optional area was
provided for comments, but a student could submit the form after completing only the badge ratings.
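To make the collection step concrete, the following is a minimal sketch, in Python, of the kind of record each form submission effectively produced. The field names, the BadgeLevel enum, and the example values are illustrative assumptions for this article, not the actual schema of the course's Google Form.

    from dataclasses import dataclass
    from enum import IntEnum
    from typing import Optional


    class BadgeLevel(IntEnum):
        """The 1-to-4 voting scale described above."""
        NO_GO = 1
        PEWTER = 2
        SILVER = 3
        GOLD = 4


    @dataclass
    class PeerVote:
        """One voting scenario: a student rates a peer's artifact on three
        (or, for the fourth artifact, four) badge categories."""
        voter: str
        reviewee: str
        artifact: str                         # e.g., "presentation", "website"
        aesthetics: BadgeLevel                # category 1 in every review
        category_2: BadgeLevel                # creativity/quality, varied by artifact
        category_3: BadgeLevel                # a criterion unique to the medium
        try_new: Optional[BadgeLevel] = None  # fourth category, last review only
        comment: str = ""                     # optional free-text feedback


    # A hypothetical submission; the comment may be empty, since only the
    # badge ratings were required.
    vote = PeerVote(voter="student_1", reviewee="student_2", artifact="website",
                    aesthetics=BadgeLevel.GOLD, category_2=BadgeLevel.SILVER,
                    category_3=BadgeLevel.SILVER)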
METHODOLOGY
As noted above, built into the design of the course and the peer review of the badge-able components was a process that allowed data to be gathered into a form (in Google Drive). The data were later brought into an Excel spreadsheet for analysis.
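As an illustration of that step, assuming the form responses were exported as a CSV file (the file names below are hypothetical), a few lines of Python can move the data into an Excel workbook; this is one possible route, not necessarily the authors' exact procedure.

    import pandas as pd

    # Hypothetical file names: the Google Form responses exported as a CSV,
    # then written to an Excel workbook for the analysis reported below.
    # (Writing .xlsx files requires the openpyxl package.)
    responses = pd.read_csv("badge_votes.csv")
    responses.to_excel("badge_votes.xlsx", index=False)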
FINDINGS
This section starts with an overall comparison of the instructor’s versus the
students’ ratings to illustrate the differences in the reviews that were conducted.
It then presents the most salient findings from the analysis of students’ votes on
the different badges, considering both the prompted criteria and the open-ended
comments. The Discussion section then seeks to draw conclusions from
the patterns that emerged.
As suggested in the previous section, it does seem that the students and the
instructor were assessing the artifacts on different dimensions. How
did the students vote categorically (by badge level) and how consistent was the
voting by different students for the same peer?
Over the course of the four badge cycles studied here, 133 voting scenarios were generated; in each scenario, a student voted for a peer on three or four categories for one of the four different web-based assignments. Students optionally provided a written comment (as discussed in the following section). Students were required to vote for three colleagues for each of the four artifacts, a minimum of 108 voting scenarios across the nine students (9 students × 4 artifacts × 3 reviews); the overall vote count at the end showed roughly 24% more votes than required by this minimum, with the extra votes randomly distributed among the student voters.
When voting, the values submitted were 1, 2, 3, or 4 (No Go, Pewter, Silver, or Gold). The mean vote was 3.2, and the mode and median were both 3. Table 4 shows how the 430 independent votes cast (three per scenario for the first three badge-voted artifacts and four per scenario for the fourth) were distributed among the badge levels.

Table 4. Votes Cast by Badge Category

Badge category    Number of votes    % in this category
1 – No Go                16                  4%
2 – Pewter               43                 16%
3 – Silver              190                 40%
4 – Gold                181                 42%
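As a consistency check on these figures, the short Python sketch below rebuilds the 430 individual votes from the Table 4 counts and recomputes the summary statistics; the counts are taken directly from Table 4, and the rest is standard-library code.

    from statistics import mean, median, mode

    # Vote counts per badge level, as reported in Table 4.
    counts = {1: 16, 2: 43, 3: 190, 4: 181}  # No Go, Pewter, Silver, Gold

    # Expand the counts into the 430 individual votes.
    votes = [level for level, n in counts.items() for _ in range(n)]

    print(len(votes))             # 430
    print(round(mean(votes), 1))  # 3.2, the reported average
    print(median(votes))          # 3.0, the reported median
    print(mode(votes))            # 3, the reported mode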
Looking more specifically at the way the badge votes were cast, Table 5 reports, for each student voted on, the average vote in each category. In the first three artifact votes there were three categories. Category 1 was a vote on the overall aesthetics of the artifact. Categories 2 and 3 were the second and third votes cast and differed slightly among the four artifacts; overall, these addressed the importance, quality, and creativity of the
artifact for the intended audience (students were developing their artifacts for an actual or hypothetical audience and had stated that audience within the course). For the last vote, on an institutional Facebook page, students used the first three categories and also voted on a fourth category, the willingness of the student to try new territory with the web media; thus, votes were cast for this category only in the last artifact vote.
As shown in Table 5, students' ratings clustered around similar average values for the different categories, and differences from student to student were evident but small. In addition to the data above, Table 6 provides
a further analysis, reporting the total votes cast for each student, the average of all
the criteria across the four artifacts, the range of votes, and the standard deviation
among the votes given by colleagues for each student. Considering that the
standard deviation is never greater than 0.9, the students were reasonably con-
sistent in the way they evaluated their colleagues. Note too that all votes were
cast directly to the instructor, all reports to the students were done anonymously,
and the badge awards were announced after the assignment was completed and
graded by the instructor.
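As a brief sketch of how such a summary could be computed, assuming the votes were available in a long-format table (the column names and the example values below are placeholders, not data from the study), pandas can derive the per-student vote count, average, range, and standard deviation that Table 6 reports.

    import pandas as pd

    # Hypothetical long-format vote table: one row per individual category vote.
    df = pd.DataFrame({
        "reviewee": ["student_2", "student_2", "student_9", "student_9"],
        "value":    [4, 3, 2, 3],
    })

    # Per-student summary in the spirit of Table 6.
    summary = df.groupby("reviewee")["value"].agg(
        total_votes="count",
        average="mean",
        vote_range=lambda s: s.max() - s.min(),
        std_dev="std",
    )
    print(summary)

Applied to the full vote table, the std_dev column corresponds to the per-student standard deviation that, as noted above, never exceeded 0.9.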
Mindful that the peer reviewers were looking beyond the strict confines of the course's instructional and evaluation expectations, and that they were focused on creativity and impact on the audience, it is not surprising that students 4 and 9 did not achieve scores as high as some of their colleagues did. These two students
came from more traditional, instructional-education backgrounds. By contrast,
students 2 and 5, who were voted higher, had come from backgrounds with more
creative, exploratory components.
As the students voted to evaluate their colleagues' work within the different
badge categories, there was an optional area for providing comments. As noted
above, students voted with an understanding that their votes and comments would
be shared anonymously with colleagues. Students were encouraged to support
their colleagues with additional comments, ideas, or suggestions that could help
their colleagues grow and improve. Among the 133 voting scenarios over these
four reviews, a total of 73 comments were added by eight of the nine students; one student gave no comments and one gave only a single comment (commenting was optional). Overall, almost 55% of the voting scenarios incorporated comments.
The majority of the comments were substantive, offering specific positive
feedback or critique and sometimes offering specific suggestions for improve-
ment. Four comments were either neutral or non-specific (e.g., “This is good”) and one was not related to the student being reviewed; these comments were therefore removed from the tally for the analysis below. Also, for the tally shown in
Tables 7 and 8, some categories were aggregated. For example, some negative
comments were qualified by a positive secondary comment or a suggestion
for improvement; however, if the overall tone of the comment was negative, it
was included with the negative tally. A similar technique was applied to the
aggregation of the positive comments in both charts and data tables.
As evident in Table 7, students 3 and 6 were most liberal with their positive
comments; students 1, 2, 5, and 8 also offered suggestions for improvement.
The students who offered the majority of the comments (2, 3, 4, 5, 6, and 8)
provided comments in more than one category, suggesting a commitment to the
process of giving constructive and useful feedback.
Referring to Table 6, the student with the lowest average score from peers
(student 9) also provided very few comments to colleagues, and received five directly negative comments in return. Student 6 likewise received a low score from the votes cast by colleagues; however, this student reviewed the work of colleagues in a careful manner, as is evident in the number of detailed
comments given. Although it is not possible to say with certainty from the
data within the course how well this student would work in the future, the interest
of student 6 in reviewing colleagues' work critically suggests an interest in understanding and evaluating the work being surveyed. Indeed, at the end of
the course this student attended an optional virtual session where this student
talked about some very innovative ideas for a final project within the larger
master’s program, suggesting that the student had learned from the work of
reviewing and being reviewed by colleagues, and from the general interest and
commitment that the student had for the course.
The students' comments were generally specific, sincere, and helpful (whether positive or negative in tone).
DISCUSSION
In reviewing the findings, the authors began from their own questions and
examined the data and comments to see if and how the pilot badging process
served the course objectives and these researchers' interests. In examining the badge-development process itself, at this point in the evolution of learning management systems, the authors were able to generate lessons learned that can inform their future work and, hopefully, the work of readers.
Interesting discoveries along the way suggested other uses for the peer badging
process in terms of strengthening students’ experiences even beyond the course
on a programmatic level, generating student interest that could be a possible
segue to future courses.
Students appeared to have looked beyond the confines of the course expec-
tations, examining colleagues’ work in rich ways, and making connections to
their own interests.
In studying the interactions within the course, particularly within the badging
process, the authors were challenged, as are other researchers such as Chang and Lee (2007), to determine if and how the course itself could encourage positive
interactions, sharing, and caring among the students. The course under study was
designed with multiple layers of discussion and interactivity—discussion boards,
virtual synchronous meetings, virtual field trips, presentations in Second Life,
shared video work via YouTube—thus making it difficult to attribute the apparent
connectedness among students to any one component. However, an examination
of the comments given with the votes suggests an ongoing concern for, and understanding of, the work that students were developing. Although students were not required to keep in mind the audience of the student they chose to review, comments frequently made specific reference to the effect on the intended audience expressed in the web-based communication: “I enjoyed the last part
and how it tied all the concepts together. You see how learning is fun and the effect
of learning a second language is positive.” Students took the extra time to suggest
specific improvements; 15 comments, about 20% of the comments offered, focused
on concrete and specific recommendations for improvements. With more than half
of the voting scenarios including some level of comments (that were shared
anonymously with students after votes were cast), these students’ extended efforts
gave evidence of caring about the goals, ideas, and needs of their colleagues.
Many of these students may eventually incorporate these peer evaluations into their own work. During this course, students did not simply read about badging; they reviewed colleagues' work, voted on
different criteria, and extended recommendations and comments. Furthermore,
they observed who received the awards and on what dimensions the awards
were received. As one student reflected within the course: “What happens
when no badges are awarded to an individual? This could potentially discourage
learning as well as encourage it.” As these students consider the role of badges
in their own work, they will remember and reflect on the experience of being
reviewed themselves.
These adult students, enrolled in a graduate course within a larger graduate program, are busy individuals with complex lives. Peer review, leading to the
distribution of badges, was required within the course but it was not “evaluated”
per se beyond a pass/fail assessment. Did the students demonstrate a commit-
ment to the peer review process itself? The analysis of the data in the Findings
section suggests several ways that responsible engagement was evident:
• Not only did students meet the basic review requirement, they frequently reviewed more than the required number of peers; over 20% additional voting was noted. A fourth, optional review took an extra time commitment, and since students had to submit their own work and then review the work of others within the same week's timeframe, the commitment to a peer review and vote was non-trivial. Also, more than half of the votes were submitted with specific, careful, and personal comments (and comments were optional), suggesting a commitment to their peers and
to the learning of the peers. (The comments were shared even if no badge
was awarded.)
• An examination of the votes cast sheds further light on the depth of analysis
among the students and suggests care and discernment when casting votes.
Table 4 does show that the ratings on individual criteria
are skewed towards the higher valuations, with only 20% of the ratings in the
1 or 2 range. However, the comments show a more robust and detailed
range of discrimination and support; Table 7 shows that 43% of the students’
comments suggested ways to improve or pointed explicitly to areas of defi-
ciency, indicating students’ willingness to encourage better efforts.
• Students were also fairly consistent in their valuations of the overall efforts of
their peers. The graphic and tabular views of the average vote for each student on the different criteria in Table 5, and the data on the spread of scores given (the standard deviation in Table 6), show that students were reasonably close to their colleagues in their independent and anonymously reported votes. Although the instructor did not issue a specific evaluation to students on the badge criteria, in general the instructor's valuations would have been closely aligned with the overall valuations of the collective class responses.
Overall, the students showed commitment to the process and to quality reviews
and educational discrimination in their responses.
REFERENCES
Abramovich, S., Schunn, C., & Higashi, R. (2013). Are badges useful in education?
It depends upon the type of badge and expertise of learner. Educational Technology
Research & Development, 61(2), 217-232.
Carey, K. (2012). A future full of badges. Chronicle of Higher Education, 58(32), 60.
Chang, H. M., & Lee, S. T. (2007). Explaining computer supported collaborative learning
(CSCL) by the caring construct. In R. Carlsen et al. (Eds.), Proceedings of Society
for Information Technology & Teacher Education International Conference 2007
(pp. 992-995). Chesapeake, VA: AACE.
Coppola, N., Hiltz, S., & Rotter, N. (2004). Building trust in virtual teams. IEEE Trans-
actions on Professional Communication, 47(2), 95-104.
Davidson, C. (2011, March 21). Why badges work better than grades. Retrieved June 24,
2013 from http://hastac.org/blogs/cathy-davidson/why-badges-work-better-grades
Duncan, A. (2011). Digital badges for learning. Retrieved June 24, 2013 from http://www.
ed.gov/news/speeches/digital-badges-learning
Finklestein, J. (2012). Digital badges and micro-credentialing. ELI Webinars. Retrieved
June 24, 2013 from https://educause.adobeconnect.com/_a729300474/p9t3eudt0qt/?
launcher=false&fcsContent=true&pbMode=normal
Hartnett, M., St. George, A., & Dron, J. (2012). Examining motivation in online distance
learning environments: Complex, multifaceted, and situation-dependent. The Inter-
national Review of Research in Open and Distance Learning. Retrieved June 24, 2013
from http://www.irrodl.org/index.php/irrodl/article/view/1030/1954
Hickey, D. (2012). Intended purposes versus actual function of digital badges. Retrieved
June 24, 2013 from http://hastac.org/blogs/slgrant/2012/09/11/intended-purposes-
versus-actual-function-digital-badges
Khaddage, D. F., Baker, R., & Knezek, G. (2012). If not now! When? A mobile badge
reward system for K-12 teachers. In P. Resta (Ed.), Proceedings of Society for
Information Technology & Teacher Education International Conference 2012
(pp. 2900-2905). Chesapeake, VA: AACE.