
J. EDUCATIONAL TECHNOLOGY SYSTEMS, Vol. 42(2) 87-105, 2013-2014
doi: http://dx.doi.org/10.2190/ET.42.2.b
© 2013, Baywood Publishing Co., Inc. http://baywood.com

EXPLORING BADGING FOR PEER REVIEW, EXTENDED LEARNING AND EVALUATION, AND REFLECTIVE/CRITICAL FEEDBACK WITHIN AN ONLINE GRADUATE COURSE

EILEEN A. O’CONNOR, PH.D.
AMY MCQUIGGE

Empire State College, State University of New York

ABSTRACT

The use of digital badges for peer-credentialing web-shared work offers the promise of extending classroom learning beyond explicit course objectives and evaluations. This pilot study of peer-awarded badges examines the results of an online graduate course in which students voted on and evaluated the web-shared work of their colleagues using different criteria than those used by the instructor, encouraging critical review of colleagues, extending student learning in lateral ways, and suggesting activities for later course work. Although voting was anonymous, student evaluation results were quite consistent across the class, and students appeared to have extended their perspective on course content areas (emerging technologies in this case) through the process of peer review. Students exchanged warm, cordial communications within the course that may have been enhanced by learning about the personal and professional interests of colleagues through this peer-review process.

BACKGROUND
The emergence of web-based, digital badges (http://openbadges.org/) presents
new possibilities for course enhancements and extension. Badges can serve to demonstrate skills, experience, and learning beyond course credentials, while demonstrating academic and professional development to potential and current schools, employers, or communities (Hickey, 2012). From great hope (Duncan, 2011; Stansbury, 2011; Young, 2012) to guarded caution (Abramovich, Schunn, & Higashi, 2013), the digital badge movement has generated interest from K12 to higher education (Carey, 2012; Khaddage, Baker, & Knezek, 2012). In general,
digital badges attempt to recognize skills, achievements, and learning that happens
inside or outside of the formal classroom. For example, in some situations, learners may not be provided with assessments of their informal activities, such as completing a MOOC (massive open online course) (Simonite, 2013), volunteering in the community, or sharing expert-level knowledge of a content
area in a blog or other online media. Digital badges can be used by instructors,
employers, or peers to acknowledge such achievement. Accomplishment recog-
nition has long been granted by K12 teachers or college/university faculty,
but peers can now award similar recognition. Badge earners, regardless of
where, when, or how they have acquired their badges, can choose to post them
in public web-accessible locations, such as the Open Badges Backpack (from
http://www.openbadges.org/). This allows earners to share their skills, achieve-
ments, and learning through social media sites and with a wider audience. One
of the advantages of digital badging is that data and information about the achievement, or evidence of the achievement itself, can be associated with the badge, thereby informing the badge viewer about who issued the badge, when it was issued, what evaluation criteria were used, and so on. Badges can also be granular—that is, built upon a series of sequential
smaller steps, which can demonstrate a set of competencies that emphasize the
growth within a knowledge domain (Open Badges, 2013).
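
As a rough sketch of the information that can travel with a badge, the structure below is only illustrative; the field names are hypothetical, not the actual Open Badges schema:

```python
# A rough sketch of the metadata a digital badge can carry with it.
# Field names are illustrative, NOT the actual Open Badges schema.
badge = {
    "name": "Silver - Aesthetics",
    "issuer": "Empire State College, SUNY",     # who issued the badge
    "issued_on": "2013-05-15",                  # when it was issued
    "criteria": "Interesting and useful; solid display of expertise",
    "evidence": "http://example.edu/artifacts/website",  # the achievement itself
}
```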
For the purpose of this study, the authors were particularly interested in
students receiving acknowledgment by peers for a diverse range of web skills
that were beyond the course-specific evaluation. Later, these students would be
able to post these badges to the Open Badges Backpack or to social media sites,
if they so choose. While grades can provide a level of recognition, digital badges
can be issued by peers for a wide range of skills not normally evaluated in a
graduate-level classroom (Davidson, 2011). These badges could also be a useful
tool for employers to identify these students’ work in the field of emerging
technologies (the content covered within this course). Furthermore, evaluating
their colleagues’ work for possible professional recognition could create an
intrinsic value, thus serving as a motivational force for effective participation
within the course (Hartnett, St. George, & Dron, 2012).
As reported within this article, the authors (the instructor and the
researcher) sought to explore whether the application of a “badging”
approach and the motivation of peer-based awards (Schenke, Tran, &
Hickey, 2013) could facilitate the more amorphous course goals, such as building a professional-community atmosphere, extending learning beyond course-specific constraints (Finklestein, 2012), and encouraging an examination of the badging process itself. This article will address the way that the
concept of badging was integrated into an online graduate course, the results
that were most evident, and what these results might indicate about the value of
the peer-badging process. It will conclude with areas for further study, lessons
learned, and suggestions for instructors considering peer-review badging within
their own learning environments.

BACKGROUND ON THE COURSE IN GENERAL AND THE RESEARCH QUESTIONS ASKED

The course that was studied was “Learning with Emerging Technology: Theory
and Practice,” a foundational course offered within a new online master’s degree
program in emerging technologies. Students joined the course from a variety of
different professional backgrounds and experiences but all were required to
be open to learning with and using technologies. The course was intended to
serve as an introductory, survey-level course providing participants with learning
theories and practices related to the vast array of rapidly-surfacing, e-mediated
learning scenarios. In each of the four basic course modules, students studied
and summarized a relevant learning theory after examining research, readings,
videos, and websites containing applied and theoretical information. Students
conducted additional research on each module topic in reference to their own professional or personal interests. A fifth module required the students to collaborate in teams to summarize and reflect on the course content in each module.
They presented their collaborative summaries within the synchronous, virtual
environment of Second Life (www.secondlife.com). For the "practice" component of the course in each of the four core modules, students created an emerging-technology exploratory project in an open-ended, sandbox way—that is, students were given basic tutorials and required to mock up initial explorations of their interests within each technology. Students developed these projects using
common, web-based approaches: presenting via PowerPoint and Prezi; creating
websites; developing shared videos with YouTube; and using Facebook for
institutional promotion. Within each of the four basic course modules, these
exploratory projects were shared with the instructor and colleagues through a
web link in their assignment posting.
Nine students, ranging in age from 25 to 55 years, participated in the course. The students came with a range of backgrounds and interests: from training developers in formal environments, to independent "artistic" web and media developers, to a K12 non-English language teacher, to a business instructor within a fashion institute, and to several adult-education developers. Students voluntarily signed a
consent form approved by the college’s Institutional Review Board that allowed
their work to serve as data for the analysis provided within this article.

During the design and subsequent implementation of this course, the


authors (the instructor and the researcher, who has expertise in the emerging
area of badging) developed a pragmatic approach that would allow them to pilot
the “concept” of badging while they worked to assemble the actual digital badges
by the end of the course. As explained above, the course served to assemble a
disparate group of students into an environment where they could begin to
explore and understand the principles and practices behind the rapidly-developing
e-tools for communication, community, and learning. The course was not a
design course, an evaluation course, or an in-depth technology course. However,
to the extent that the course could begin the students’ journey towards their own
design, development, and evaluation of e-mediated environments, the course
could initiate the type of thinking and analysis that could be developed further in the other master's courses within the larger program.
Thus, as detailed below, the researcher and instructor integrated a peer-review
process that generated badges for students within the course activities. As the
authors developed and later assessed the course, they formulated questions:
• Can the badging process itself extend the learning in lateral ways, that
is, engage students beyond the specific learning outcomes intended within
the course?
• Can the use of peer review and badges within the course itself serve as an example of an emerging conceptual framework for learning, evaluation, and motivation made possible by the advances in web-based interfaces?
• Can the peer review process strengthen student connections?
• Will the process of peer review (toward the generation of badges) be used
by the students in a diligent and thoughtful manner?

On the Specific Peer-Review Process Embedded within the Course Itself
At the outset of the course, the e-badging process itself was under development
and improvement; the researcher was evaluating the best approach to use when
actually providing the badges at the end of the course. Thus, the instructor decided
to create a simulated peer-review with e-tools that were readily available. By the
end of the course, the researcher was able to issue e-badges for the students who
had earned this level of peer achievement. For the purpose of creating a badging
process within the course, the instructor designed the course components to
establish an independent review by peers of their colleagues’ work. Badges were
announced based on the results from the peer review of web-based assignments for
each course module. Within each course module, students were asked to:
1. post their assignment artifact (a presentation, a website, a YouTube, a
Facebook institutional page) via a link within the course;
2. review the work of at least three classmates; and
3. provide an anonymous badge decision and an optional comment for each assignment artifact that was reviewed.

Comparing Instructor and Student Evaluations

The course directions established that different criteria were being used in the instructor evaluation versus the peer evaluation, and they also made it clear that although students were asked to be diligent in their peer review, the peer badge outcomes would be shared but would not be used in determining the course grade.
An example of one of the rubrics that the instructor used (see Table 1) makes it evident that the expectations of the instructor were functional and introductory. Although "attractive" is listed within one criterion, the emphasis was on beginning-level skills and on the role that video might play in a learning environment (as shown within this YouTube evaluation rubric).
By contrast, the students were given the badge categories and descriptors that are shown in Table 2.
Table 1. Example Rubric Used by Instructor (the Scores Were Provided in a Spreadsheet)
Rating scale: 0 = no evidence, 1 = little evidence . . . 10 = excellent

| Criterion | % of grade | Points(a) | Comments |
|---|---|---|---|
| Creates an attractive YouTube that communicates your desired intention in a clear, non-rambling way | 40 | 10 | |
| Has a clear central purpose that is evident in the emphasis within the video and is shown in a bulleted way within the video at some point | 15 | 10 | |
| Uses basic editing skills with cutting, transitions, and titles/captions perhaps | 25 | 10 | |
| Posts the YouTube using either a public or unlisted link (NOT a private link) | 5 | 10 | |
| Creates a link-to or embeds-in your website (optional) | NA | NA | |
| Reviews and comments on several colleagues' YouTubes | 5 | 10 | |
| Posts within the required time | 10 | 10 | |
| Total points possible = 100 | 100 | Actual points earned | |

(a) Points earned out of 10 possible points.

Table 2. Badge Categories Used for the Peer Review of the Various Web-Presented Artifacts

No Go (1): Won't even make the grade for the assignment's minimum criteria in this aspect.
Pewter (2): Minimally acceptable for the assignment but nothing noteworthy.
Silver (3): Interesting and useful; solid display of expertise on this criterion.
Gold (4): Wow, I am learning and taking notes here—a great job; I'll have my friends visit here too.

By design, the tone and approach used in the peer review was less formal, more open-ended, holistic, and mindful of the shared nature of the various e-media artifacts created. The instructor generated a Google Drive Form that served as the input for each peer evaluation. The form used the categories in Table 2 for the badge evaluation, and there were three areas on which the first three badges were based. Aesthetics was the first area for each of the four review assignments (of a presentation, a website, a YouTube, a Facebook institutional page); the other two categories addressed creativity or quality and a parameter unique to the medium. By the fourth review, the instructor added a category called "Try New" to ask students to reflect on the willingness of the peer being reviewed to be adventuresome with the technology. Again, the peer review was intended to encourage learning from the work of others, becoming a stronger community through understanding the efforts and work of other students, and experiencing the process of badging as a model for possible future use.

In designing the procedure for peer review, because the digital badging tools themselves (http://badg.us) were very much under development and would not be used until the completion of the class, the instructor began by creating Google Drive Forms that prompted for the three categories and, eventually, the fourth category. The link to the evaluation form was placed within the section of the online course where students posted the link to their assignment artifact (the website, YouTube, and so on). Once the students opened the Google Form, they were prompted for their name and were then required to evaluate the three or four categories on a 1 to 4 scale (No Go to Gold). An optional area was provided for comments, but students could submit the form after completing only the badge components.
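
A minimal sketch of what one completed form response amounted to under the process described above; the field names are assumptions, not the actual form labels:

```python
# A minimal sketch of one peer-review response as collected by the Google
# Drive Form described above; field names are assumptions, not the form's.
BADGE_SCALE = {1: "No Go", 2: "Pewter", 3: "Silver", 4: "Gold"}

vote = {
    "reviewer": "Student 3",      # students were prompted for their name
    "reviewee": "Student 7",
    "artifact": "YouTube",        # presentation, website, YouTube, or Facebook page
    "ratings": {"Aesthetics": 4, "Quality": 3, "Creativity": 3},  # 1-4 scale
    "comment": "",                # optional; the form could be submitted without it
}

# Only the badge components were required; the comment could stay empty.
assert all(r in BADGE_SCALE for r in vote["ratings"].values())
```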

METHODOLOGY
As noted above, built into the design of the course and the peer review of the badge-able components was a process that allowed data to be gathered into a form (in Google Drive). The data were later brought into an Excel spreadsheet for extended quantitative analysis and were examined categorically for a qualitative study of the open-ended comments. A pivot table in Excel allowed the quick review of multiple quantitative scenarios to determine if any patterns might emerge; the descriptive components were broken into categories for some aspects of the study and were examined more holistically for meaning and implications related to other parts of the study. The authors attempted to review this qualitative and quantitative data in an open method adapted from aspects of the grounded theory approach of Strauss and Corbin (1990), in which the data and descriptive information themselves were used to suggest the outcomes of this pilot badging effort. The findings gained from the analysis of the votes and comments of the students were coupled with insights from the online discussions, statements within other course assignments, and the virtual synchronous course meetings (in Second Life) and were integrated into the conclusions and suggestions that are addressed later in this article.
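
As a sketch of the kind of aggregation the Excel pivot table supported, the same review can be expressed in Python with pandas; the column names below are assumptions rather than the authors' actual spreadsheet layout:

```python
import pandas as pd

# Hypothetical long-format export of the form responses; column names
# are assumptions, not the authors' actual spreadsheet layout.
votes = pd.DataFrame({
    "reviewee": ["Student 1", "Student 1", "Student 2", "Student 2"],
    "category": ["Aesthetics", "Quality", "Aesthetics", "Quality"],
    "rating":   [3, 4, 4, 4],    # 1 = No Go ... 4 = Gold
})

# Mirrors the Excel pivot table: average rating per student per category.
pivot = votes.pivot_table(index="reviewee", columns="category",
                          values="rating", aggfunc="mean")
print(pivot)
```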

FINDINGS

This section starts with an overall comparison of the instructor’s versus the
students’ ratings to illustrate the differences in the reviews that were conducted.
It then presents the most salient findings from the analysis of students’ votes on
the different badges, considering both the prompted criteria and the open-ended comments. The discussion section that follows seeks to draw conclusions from the patterns that emerged.

Comparing the Instructor’s and Students’ Rankings

The instructor's and students' evaluations were based on different criteria, as shown in the background section above and illustrated in Tables 1 and 2. As presented in Table 3, the instructor's ordering was determined by ranking the aggregated final grade created from evaluating all graded assignments using the weighting given in the course syllabus; the peer-review order was determined by ranking the aggregated average of all the criteria among all the different artifacts. Thus, it does appear that two different lenses were used in looking at the student work. The instructor was looking for basic, entry-level work; the student-awarded badges were assigned for creativity, quality of the artifacts' appeal to a future audience, and aesthetics. (Note: although the instructor did not actually assess the work on the dimensions given to the students when they determined badge categories for the different artifacts, in general the instructor agreed with the students' assessments.) Table 3 appears to reinforce the perspective that the criteria reviewed by students and by the instructor were substantially different.

Table 3. Instructor v. Peer Rankings (1 is highest; duplicate entries were students ranked at the same level)

| | Instructor ranking | Peer ranking |
|---|---|---|
| Student 1 | 5 (lowest performer) | 4 |
| Student 2 | 2 | 1 (highest performer) |
| Student 3 | 2 | 3 |
| Student 4 | 4 | 6 |
| Student 5 | 3 | 1 |
| Student 6 | 2 | 7 (lowest performer A) |
| Student 7 | 4 | 3 |
| Student 8 | 1 (highest performer) | 3 |
| Student 9 | 3 | 7 (lowest performer B) |
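
The peer-review ordering in Table 3 can be derived from the aggregated averages with a standard ranking in which ties share a rank; a small sketch, using three of the peer averages later reported in Table 6:

```python
import pandas as pd

# Aggregated peer averages for three students (values from Table 6);
# this is a sketch of the ranking step, not the authors' actual workbook.
peer_avg = pd.Series({"Student 2": 3.6, "Student 5": 3.6, "Student 3": 3.3})

# Highest average gets rank 1; tied students share a rank, as in Table 3.
ranks = peer_avg.rank(method="min", ascending=False).astype(int)
print(ranks)   # Student 2 -> 1, Student 5 -> 1, Student 3 -> 3
```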

Considering How the Students Voted for the Badges

As suggested in the previous section, it does seem that the students and the instructor were assessing the artifacts on different dimensions. How did the students vote categorically (by badge level), and how consistent was the voting by different students for the same colleague?

Over the course of the four badge cycles studied here, 133 voting scenarios were generated; that is, in each scenario a student voted on a peer's work in three or four categories, across the four different web-based assignments. Students optionally provided a written comment (as discussed in the following section). Students were required to vote for three colleagues for each of the four artifacts; the overall vote count at the end showed 24% more votes than required by the minimum, with the extra votes distributed randomly among the student voters.
When voting, the values submitted were 1, 2, 3, or 4 (No Go, Pewter, Silver, or Gold). The mathematical average value of the votes was 3.2, and the mode and median were both 3. Of the total 430 independent votes given (three per scenario for the first three badge-voted artifacts and four for the fourth artifact), the distribution of votes cast across the different badge types is shown in Table 4.

Table 4. Total Votes Cast by the Various Badge Types

| Badge category | Number of votes | % in this category |
|---|---|---|
| 1 – No Go | 16 | 4% |
| 2 – Pewter | 43 | 10% |
| 3 – Silver | 190 | 44% |
| 4 – Gold | 181 | 42% |
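
The descriptive statistics reported above follow directly from the counts in Table 4; a quick arithmetic check:

```python
from statistics import median, mode

# Vote counts per badge level, taken from Table 4.
counts = {1: 16, 2: 43, 3: 190, 4: 181}
votes = [level for level, n in counts.items() for _ in range(n)]

assert len(votes) == 430
print(round(sum(votes) / len(votes), 1))   # 3.2, the reported average
print(median(votes), mode(votes))          # 3.0 and 3, the reported median and mode
for level, n in counts.items():
    print(level, f"{n / 430:.0%}")         # 4%, 10%, 44%, 42%
```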

Looking more specifically at the way the badge votes were cast, as shown in Table 5, for each student who was voted on, the votes were averaged for each different category. In the first three artifact votes there were three categories. Category 1 was a vote on the overall aesthetics of the artifact. Categories 2 and 3 were the second and third votes cast and differed slightly among the four artifacts; overall, these addressed the importance, quality, and creativity of the artifact for the intended audience (students were developing their artifacts toward an actual or hypothetical audience; they had stated their intended audience within the course). For the last vote, on an institutional Facebook page, students used the first three categories and also voted on a fourth category—the willingness of the student to try new territory with the web media; thus, votes were cast for this category in only the last artifact vote.

Table 5. Peer Ratings Given to Students on the Different Criteria
As shown in Table 5, it appears that the ratings students received clustered around the same average value for the different categories and that differences were evident (but small) from student to student. In addition to the data above, Table 6 provides
a further analysis, reporting the total votes cast for each student, the average of all
the criteria across the four artifacts, the range of votes, and the standard deviation
among the votes given by colleagues for each student. Considering that the
standard deviation is never greater than 0.9, the students were reasonably con-
sistent in the way they evaluated their colleagues. Note too that all votes were
cast directly to the instructor, all reports to the students were done anonymously,
and the badge awards were announced after the assignment was completed and
graded by the instructor.
Mindful that the peer reviewers were looking beyond the strict confines of the course instruction and evaluation expectations, and that they were focused on creativity and impact on audience, it is not surprising that students 4 and 9 did not achieve scores as high as some of their colleagues did. These two students came from more traditional, instructional-education backgrounds. By contrast, students 2 and 5, who were voted higher, came from backgrounds with more creative, exploratory components.

Analyzing Quantitative Aspects of the Comments Made by Students

As the students voted to evaluate their colleagues' work within the different badge categories, there was an optional area for providing comments. As noted above, students voted with the understanding that their votes and comments would be shared anonymously with colleagues. Students were encouraged to support their colleagues with additional comments, ideas, or suggestions that could help their colleagues grow and improve.

Table 6. More Data on Student Voting Results

| | Total votes for student | Average vote score | Lowest vote received | Highest vote received | Std Dev |
|---|---|---|---|---|---|
| Std 1 | 57 | 3.2 | 1 | 4 | 0.9 |
| Std 2 | 43 | 3.6 | 2 | 4 | 0.6 |
| Std 3 | 62 | 3.3 | 2 | 4 | 0.7 |
| Std 4 | 44 | 3.0 | 1 | 4 | 0.9 |
| Std 5 | 34 | 3.6 | 3 | 4 | 0.5 |
| Std 6 | 59 | 3.2 | 1 | 4 | 0.8 |
| Std 7 | 53 | 3.5 | 3 | 4 | 0.5 |
| Std 8 | 43 | 3.3 | 1 | 4 | 0.7 |
| Std 9 | 35 | 2.5 | 1 | 4 | 0.9 |
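
A sketch of the per-student aggregation behind Table 6, again over a hypothetical long-format vote table (column names assumed):

```python
import pandas as pd

# Hypothetical long-format votes, as in the earlier sketch; values illustrative.
votes = pd.DataFrame({
    "reviewee": ["Std 1"] * 4 + ["Std 2"] * 3,
    "rating":   [3, 4, 1, 4, 4, 3, 4],
})

# Mirrors Table 6: vote count, mean, range, and spread for each student.
summary = votes.groupby("reviewee")["rating"].agg(
    total="count", average="mean", lowest="min", highest="max", std="std")
print(summary.round(1))
```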
Among the 133 voting scenarios over these four reviews, a total of 73 comments were added by eight of the nine students; one student gave no comments and one gave only a single comment (commenting was optional). Overall, almost 55% of the voting scenarios incorporated comments.
The majority of the comments were substantive, offering specific positive feedback or critique and sometimes offering specific suggestions for improvement. Four comments were either neutral or non-specific (e.g., "This is good") and one was not related to the student being reviewed; for the analysis below, these comments were removed from the tally. Also, for the tally shown in Tables 7 and 8, some categories were aggregated. For example, some negative comments were qualified by a positive secondary comment or a suggestion for improvement; however, if the overall tone of the comment was negative, it was included with the negative tally. A similar technique was applied to the aggregation of the positive comments in both charts and data tables.
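
The aggregation rule described above (a mixed comment is counted by its overall tone) could be tallied as follows; the labels here are illustrative, not the authors' actual coding scheme:

```python
from collections import Counter

def overall_tone(codes):
    """Tally a comment by its overall tone: a comment that mixes negative
    critique with a positive aside still counts as negative overall."""
    return "negative" if "negative" in codes else "positive"

# Illustrative coded comments, not the study's actual coding scheme.
coded_comments = [
    ["positive"],
    ["negative", "suggestion"],   # negative overall, despite the suggestion
    ["positive", "suggestion"],
]
print(Counter(overall_tone(c) for c in coded_comments))
# Counter({'positive': 2, 'negative': 1})
```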
As evident in Table 7, students 3 and 6 were most liberal with their positive
comments; students 1, 2, 5, and 8 also offered suggestions for improvement.
The students who offered the majority of the comments (2, 3, 4, 5, 6, and 8)
provided comments in more than one category, suggesting a commitment to the
process of giving constructive and useful feedback.

Table 7. Types of Comments Given By the Different Students



Table 8. Type of Comments Given To the Different Students

Referring to Table 6, the student with the lowest average score from peers (student 9) also provided very few comments to colleagues. Student 9 also received five directly negative comments from colleagues. Student 6 also received a low score from the votes cast by colleagues; however, this student reviewed the work of colleagues in a careful manner, as evident in the number of detailed comments given. Although it is not possible to say with certainty from the data within the course how well this student would work in the future, the interest of student 6 in reviewing colleagues' work critically suggests an interest in understanding and evaluating the work being surveyed. Indeed, at the end of the course this student attended an optional virtual session where the student talked about some very innovative ideas for a final project within the larger master's program, suggesting that the student had learned from the work of reviewing and being reviewed by colleagues, and from the general interest and commitment that the student had for the course.

Noting the Tenor of the Comments

The students’ comments were generally specific, sincere, and helpful (whether
positive or negative in tone). A sample of a few comments illustrates the general
tenor of the comments:
• In a YouTube comment: "Amazing! Visual, visual, visual! No matter what type of learner, there was something in the video for them. Made me want to
go out and learn an instrument. [Name of student removed] is a natural speaker
too. Her voice was soothing, relaxed and real.”
• “I enjoyed the last part and how it tied all the concepts together. Seeing how
learning is fun and the effect of learning a second language is positive.”
• In a Facebook comment, useful and kindly written: “A little text editing,
I wouldn’t start the About section with ‘This is a Page about.’ I would start,
‘Exploring emerging.’”
• Not all comments were positive, but they were all supportive: “A bit long
and somewhat repetitive, but would certainly appeal to its intended audience.”
Or, “I liked the ‘woman on the street’ segments. The sound was a little off,
I liked the concepts!”
Students also often indicated specifically what they had learned, which encouraged them to expand their own work in the future:
• “Nice use of video plug in—I did it too after her example”
• “Where in SL did you find the keyboard? The address or a way to find it
would be useful for your viewers who might want to try it out.”
• “I loved the pics, pet education links, and the therapy dog link. I needed
this info for my dog.”
• “I learned something from viewing it about MY presentation! Totally clear
what the environment is.”
• “I also liked the wallpaper. (I tried to find that and couldn’t.)”
• “Made me want to go out and learn an instrument.”
• “I will try and follow the instructions that were detailed here. Thanks.”
The range, specificity, and expanded learning suggested by the comments indicate a dedicated, invigorated group of students who were willing to support their colleagues.

DISCUSSION
In reviewing the findings, the authors began from their own questions and examined the data and comments to see if and how the pilot badging process served the course objectives and these researchers' interests. In examining the badge-development process itself, at this current time in the evolution of learning management systems, the authors were able to generate lessons learned that can inform their future work and, hopefully, the work of readers. Interesting discoveries along the way suggested other uses for the peer-badging process in terms of strengthening students' experiences beyond the course at a programmatic level, generating student interest that could be a possible segue to future courses.

Exploring the Research Questions


The authors used a grounded theory approach in examining the course and
the badging process, determining the results that “emerged” from the actual
work of the students. They had initiated the study to determine how emerging
approaches to assessment (badging) might enhance and extend learning and
encourage community-building.

Extending Learning Beyond the Specific and Expected Course Outcomes

Consonant with most instructors' aspirations, these authors valued a plethora of learning outcomes that went beyond what could reasonably be accomplished within the course. Given that the primary goal of the course was to introduce students to educational theories that could support effective use of and planning for learning and community-building through e-means, the authors integrated the badging process into the course as a way to encourage students to consider the design and effectiveness of web-based artifacts in a reflective and somewhat intuitive manner, in advance of their own future coursework where they might choose to design their own badging systems. The authors asked whether participating in the badging process itself extended the learning in these lateral ways:
• An initial question concerns how much the badge-level evaluation differed from the instructor evaluation. As evident in Tables 1 and 2, the style, scoring process, and criteria for evaluation were significantly different. In addition, the very different rankings produced by the students' overall web-artifact evaluations versus the instructor's evaluations (Table 3) indicate no predictable pattern or logic and suggest that different dimensions of the web-based artifacts were truly being considered by students and by the instructor.
• Because of the required viewing of the work of peers, students had to visit and evaluate artifacts, making specific judgments in the three or four categories for each voting scenario. The 133 total voting scenarios show that over 20% more votes were cast than were required within the course. Students' efforts required that they open, review, and evaluate numerous web-based artifacts, thus being exposed to a wide range of expressions and ideas from colleagues who had needs, audiences, backgrounds, talents, and experiences other than their own.
• As noted in the previous section, students gave rich, supportive, helpful
comments to colleagues, noteworthy since comments were an optional com-
ponent of the voting process. Frequently, students indicated directly how
they were learning from the work of their colleagues, as evident from this
statement within a larger comment shown above: “I learned something
from viewing it about MY presentation!”

Students appeared to have looked beyond the confines of the course expec-
tations, examining colleagues’ work in rich ways, and making connections to
their own interests.

Encouraging Student Connectedness

In studying the interactions within the course, particularly within the badging process, the authors were challenged, as are other researchers such as Chang and Lee (2007), to find if and how the course itself could encourage positive interactions, sharing, and caring among the students. The course under study was designed with multiple layers of discussion and interactivity—discussion boards, virtual synchronous meetings, virtual field trips, presentations in Second Life, shared video work via YouTube—thus making it difficult to attribute the apparent connectedness among students to any one component. However, an examination of the comments given with the votes suggests an ongoing concern for and understanding of the work that students were developing. Although students were not required to keep in mind the audience of the student they chose to review, comments frequently made specific references to the effect on the intended audience being addressed in the web-based communication: "I enjoyed the last part and how it tied all the concepts together. You see how learning is fun and the effect of learning a second language is positive." Students took the extra time to suggest specific improvements; 15 comments, about 20% of the comments offered, focused on concrete and specific recommendations for improvements. With more than half of the voting scenarios including some level of comment (shared anonymously with students after votes were cast), these students' extended efforts gave evidence of caring about the goals, ideas, and needs of their colleagues.

Developing a Conceptual Framework for an Emerging Concept—from Direct Experience

Although students may have benefited personally from peer-reviewed systems, such as the Internet Movie Database (www.imdb.com) or other consumer peer-review sites, the concept of generating badges from peer reviews of web-based products was most likely a new approach for these students. As reported by Finklestein (2012) during a webinar where he considered how instructors could "leverage digital badges to build ongoing relationships with learners, foster engagement, and acknowledge learning that often goes unrewarded or unrecognized," the process of engagement itself within the course modeled the application of badging. Just as Finklestein reported, the immersion in badging will hopefully encourage reflection on, and application in, these students' future development of learning environments. It was not possible to assess within the short confines of the course whether the process modeled will be used by the students; however, the intention is for the instructor and the researcher to work with these students in the future to determine if students adapt and
implement these peer evaluations into their own work. During this course, students did not simply read about badging; they reviewed colleagues' work, voted on different criteria, and extended recommendations and comments. Furthermore,
they observed who received the awards and on what dimensions the awards
were received. As one student reflected within the course: “What happens
when no badges are awarded to an individual? This could potentially discourage
learning as well as encourage it.” As these students consider the role of badges
in their own work, they will remember and reflect on the experience of being
reviewed themselves.

Considering Student Commitment to and Diligence within the Badging Process Itself

These adult students, enrolled in a graduate course that was part of a larger graduate program, are busy individuals with complex lives. Peer review, leading to the
distribution of badges, was required within the course but it was not “evaluated”
per se beyond a pass/fail assessment. Did the students demonstrate a commit-
ment to the peer review process itself? The analysis of the data in the Findings
section suggests several ways that responsible engagement was evident:
• Not only did students meet the basic review requirement, they frequently reviewed more than the required number of peers; over 20% additional voting was noted. If a student did a fourth, optional review, this required an extra time commitment from the student. And, since students had to submit their own work and then review the work of others within the same week's timeframe, the commitment to a peer review and vote was non-trivial. Also, more than half of the voting scenarios were submitted with specific, careful, and personal comments (and comments were optional), suggesting a commitment to their peers and to the learning of those peers. (The comments were shared even if no badge was awarded.)
• An examination of the votes cast sheds further light on the depth of analysis among the students and suggests care and discernment when casting votes. Table 4 does show that the ratings on individual criteria are skewed towards the higher valuations, with only about 14% of the ratings in the 1 or 2 range. However, the comments show a more robust and detailed range of discrimination and support; Table 7 shows that 43% of the students' comments suggested ways to improve or pointed explicitly to areas of deficiency, indicating students' willingness to encourage better efforts.
• Students were also fairly consistent in their valuations of the overall efforts of their peers. The graphic and tabular view of the average vote for each student on the different criteria in Table 5, and the data on the spread of scores given (the standard deviation in Table 6), show that students were reasonably close to their colleagues in their independent and anonymously reported votes. Although the instructor did not issue a specific evaluation to students on the badge criteria, in general the instructor's valuations would have been closely aligned with the overall valuations of the collective class responses.
Overall, the students showed commitment to the process, to quality reviews, and to educational discrimination in their responses.

Considering Implications and Cautions for Course Dynamics
The process of piloting badges as peer recognition of achievement that goes beyond course objectives appears to be of sufficient value to continue to improve, refine, and re-evaluate effective ways to embed badging in future courses. However, embedding a non-instructor evaluation within a course could risk the safety, security, and sense of fairness that students develop within the course. One of the students pondered how badges would work with her own students, mostly 20-year-old students learning about retail costs and considerations in the fashion industry: "They are used to be winners." Thus, instructors must weigh the impact of a peer-review system on the carefully balanced trust that they try to inculcate within courses (Coppola, Hiltz, & Rotter, 2004).

And, although the instructor was most explicit that course evaluation and grading were based on the instructor evaluation only, the recent development of badges required the instructor and the researcher to be quite involved in the development and distribution of badges (while waiting for the promised more turn-key systems to emerge and stabilize). Could the instructor's grading be unduly influenced by peer reviews within the course? Instructors must be very clear that the peer review is detached from any grading consequences, should that be the case, or must explain the role of the badges and peer review within the intended course schema if the peer review is to factor into the course evaluation. Also, as automated badge systems become more available, these systems could remain in an open, course-separated area to allow for ongoing web-artifact updating and peer voting.

NEXT STEPS AND EXTENDED APPLICATIONS


The initial findings from this pilot study of a peer-reviewed, digital badging system that evaluated non-course objectives suggest that students were dutiful and careful with their evaluations and were critically, respectfully, and consistently supportive of their peers. Hopefully, the process of reviewing the work of colleagues can extend course learning beyond the planned objectives, preparing students for later courses with a richer background of ideas and experiences gained through this reflection and evaluation process. And, of course, students will have the option to display their badges for external recognition (as publishing options become more established). The authors intend to solicit student comments and recommendations and to find whether there is any concrete evidence of the use of lateral learning later in the graduate program. They are designing a survey to be distributed two semesters after the course has concluded, thereby gaining a longitudinal perspective on digital badges in these students' learning, application, and credential-sharing. They also want to design a more deliberate study of the effect of the badging process on the dynamics and relationships within the course.
With the peer review of web-based artifacts becoming increasingly feasible, and with the possible rewards it offers for beyond-course competencies, instructors may find value in peer-badging for supplementing and expanding their course goals and objectives. Instructors may want to consider creating a pilot study of focused applications within a course to test the value of peer-badging systems in the context of their course. They should also be mindful of the complexities of trust and confidence established within courses to determine if and how badges might support their educational, inter-personal, and inter-student goals and objectives.

REFERENCES
Abramovich, S., Schunn, C., & Higashi, R. (2013). Are badges useful in education?
It depends upon the type of badge and expertise of learner. Educational Technology
Research & Development, 61(2), 217-232.
Carey, K. (2012). A future full of badges. Chronicle of Higher Education, 58(32), 60.
Chang, H. M., & Lee, S. T. (2007). Explaining computer supported collaborative learning
(CSCL) by the caring construct. In R. Carlsen et al. (Eds.), Proceedings of Society
for Information Technology & Teacher Education International Conference 2007
(pp. 992-995). Chesapeake, VA: AACE.
Coppola, N., Hiltz, S., & Rotter, N. (2004). Building trust in virtual teams. IEEE Trans-
actions on Professional Communication, 47(2), 95-104.
Davidson, C. (2011, March 21). Why badges work better than grades. Retrieved June 24,
2013 from http://hastac.org/blogs/cathy-davidson/why-badges-work-better-grades
Duncan, A. (2011). Digital badges for learning. Retrieved June 24, 2013 from http://www.
ed.gov/news/speeches/digital-badges-learning
Finklestein, J. (2012). Digital badges and micro-credentialing. ELI Webinars. Retrieved
June 24, 2013 from https://educause.adobeconnect.com/_a729300474/p9t3eudt0qt/?
launcher=false&fcsContent=true&pbMode=normal
Hartnett, M., St. George, A., & Dron, J. (2012). Examining motivation in online distance
learning environments: Complex, multifaceted, and situation-dependent. The Inter-
national Review of Research in Open and Distance Learning. Retrieved June 24, 2013
from http://www.irrodl.org/index.php/irrodl/article/view/1030/1954
Hickey, D. (2012). Intended purposes versus actual function of digital badges. Retrieved
June 24, 2013 from http://hastac.org/blogs/slgrant/2012/09/11/intended-purposes-
versus-actual-function-digital-badges
Khaddage, D. F., Baker, R., & Knezek, G. (2012). If not now! When? A mobile badge
reward system for K-12 teachers. In P. Resta (Ed.), Proceedings of Society for
Information Technology & Teacher Education International Conference 2012
(pp. 2900-2905). Chesapeake, VA: AACE.

Open Badges. (2013). Open badges. Retrieved from http://openbadges.org


Schenke, K., Tran, C., & Hickey, D. (2013). Re-mediating assessment: Design principles
for motivating learning with digital badges. Re-mediating assessment. Retrieved
June 24, 2013 from http://remediatingassessment.blogspot.com/2013/06/design-
principles-for-motivating.html
Simonite, T. (2013). As data floods in, massive open online courses evolve. MIT
Technology Review. Retrieved June 24, 2013 from http://www.technologyreview.
com/news/515396/as-data-floods-in-massive-open-online-courses-evolve/
Stansbury, M. (2011). Digital badges could help measure 21st-century skills. eSchool News, 14(10), 1-36.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory pro-
cedures and techniques. Newbury Park, CA: Sage.
Young, J. R. (2012). “Badges” earned online pose challenge to traditional college diplomas.
Education Digest: Essential Readings Condensed for Quick Review, 78(2), 48-52.

Direct reprint requests to:


Eileen O’Connor
Empire State College, SUNY
111 West Ave.
Saratoga Springs, NY 12866-2307
e-mail: Eileen.OConnor@esc.edu
