
Evaluating Music Performance: Politics, Pitfalls, and Successful Practices

Author(s): NANCY H. BARRY


Source: College Music Symposium, Vol. 49/50 (2009/2010), pp. 246-256
Published by: College Music Society
Stable URL: https://www.jstor.org/stable/41225250
Accessed: 20-01-2020 10:07 UTC

Evaluating Music Performance: Politics,
Pitfalls, and Successful Practices
NANCY H. BARRY

Background

Evaluation is an integral component of music performance and instruction, an activity in which musicians engage at many levels, ranging from processes that are very informal and spontaneous to very formal processes that occur within highly structured
settings. Informal evaluation includes activities such as the self-evaluation that occurs
throughout the process of music making (constantly listening to and adjusting and/or
correcting one's performance). In contrast, a traditional jury setting in which the per-
former receives written feedback (and a grade) from a panel of faculty is an example
of a much more formal evaluation process.
Performance evaluation in the arts presents a conundrum. On one hand, artistic
performance is inherently subjective - a matter of individual taste. On the other hand,
demonstrated mastery of certain technical standards is expected of students in the arts.
This situation is explained in A Philosophy for Accreditation in the Arts Disciplines,
a statement of the National Association of Schools of Music, National Association of
Schools of Art and Design, National Association of Schools of Theatre, and National
Association of Schools of Dance:

Of course, evaluation of works of art, even by professionals, is highly subjective, especially with respect to contemporary work. Therefore, there is a
built-in respect for individual points of view. At the same time, in all of the
arts disciplines, there is recognition that communication through works of art
is impossible unless the artist possesses a significant technique in his or her
chosen medium. Professional education in the arts disciplines must be grounded
in the acquisition of just such a technique.1

While it is likely that none of us would deny the importance of employing fair and
valid evaluation practices with our students, there is much controversy concerning just
exactly how musical performance should be evaluated.
This paper explores some of the key topics related to music performance evaluation
including significant political and social issues, as well as some pitfalls and concerns.
The paper concludes with a discussion of selected performance evaluation tools and
procedures that have been used successfully in music-performance settings.

1National Association of Schools of Music, A Philosophy, 4.


Accountability in Higher Education

A High Stakes Political Environment


Numerous political and social forces are increasing the pressure to evaluate students in ways that can be documented and quantified. The federal education policy "No Child Left Behind" is built upon "rigorous accountability,"2 calling for government mandated testing in public schools across the nation, with implications in terms of rewards for schools that perform well and punitive measures for those that perform below average. Now it seems that the administration has expanded its interest in accountability to include higher education, as evidenced by U.S. Secretary of Education Margaret Spellings' statement in the Action Plan for Higher Education: "Over the years, we've invested tens of billions of dollars of taxpayer money and just hoped for the best. We deserve better."3 According to Spellings, improved techniques for measuring the achievement of college students "are needed to help bring important information to parents and students in their college decision making process . . ." and to "assist institutions to better diagnose problems and target resources to . . ."4
On March 22, 2007, the U.S. Department of Education issued the following press release:

"Secretary Spellings leads national higher education tran


Washington, D.C. Secretary hosts national dialogue on m
able, accountable and accessible to more Americans."5

The summit focused on action items around five key recommendations of the Commission on the Future of Higher Education to improve accessibility, affordability, and accountability:

• Aligning K-12 and higher education expectations;
• Increasing need-based aid for access and success;
• Using accreditation to support and emphasize student learning;
• Serving adults and other non-traditional students;
• And enhancing affordability, decreasing costs, and promoting transparency.

The U.S. Department of Education's interest in creating a culture of "accountability and transparency throughout higher education"6 has brought increased emphasis upon data-driven assessment in colleges and universities.

2The White House, No Child Left Behind.
3Spellings, Action Plan.
4U.S. Department of Education, Press Release, September 28, 2007.
5U.S. Department of Education, Press Release, March 22, 2007.
6Ibid.


Litigation in Higher Education: The Student as Consumer

Growing concerns about litigation are also serving to heighten administrators' interest in being able to quantify and document evaluations of student work.7 An important trend that has emerged in policy and legal discussions is the treatment of the student as a "consumer":

Education has always been a commodity to be bought and sold; the true danger
lies in the move to a 'rights-based' culture where students (and politicians) see
education merely as something to be 'consumed' rather than as an activity in
which to participate. Whilst the law seems thus far to have been something of a
bulwark against this movement, it remains an open question as to whether this
will continue to be the case if Higher Education institutions do not themselves
act more proactively in challenging this damaging view of education.8

For example, at my own university, we were informed in 2007 that faculty are ex-
pected to maintain records not only of final grades awarded, but also records showing
exactly how those final grades were calculated. The thinking behind this, of course, is
to be able to provide a defense in the event that a student contests the grade at some
point in the future.

Pitfalls

Taking a consumer approach to higher education is fraught with potential problems. The argument has been made that treating students as consumers sets up a culture of entitlement that inhibits the inherent motivation to learn:

Not only are students corrupted by such a system, but faculty are, as well. They
are not doing their job to impart knowledge and intellectual virtue to the students
when they envision their classes as providing "customer satisfaction." They
cheat the students of the education they should be obtaining. By violating the
internal goods of higher education, they are no longer acting as educators, but
as clerks in an "education market" - and thus, they are behaving unethically.
The consumer model even corrupts parents, if those parents call and complain
about their son's or daughter's poor grades not from concern for the student's
improvement, but in the spirit of "doing whatever it takes" to get John or Jane
through school.9

7Lake, "Tort Litigation."
8Kaye and Birtwistle, "Criticizing the Image," 85.
9Potts, "The Consumerist Subversion," 63.


The general issue of accountability also poses challenges for higher education, including:

• The danger of higher education being micromanaged for political or financial purposes, or both;10
• The negative impact on faculty and staff morale;11
• The challenge of measuring higher education outcomes in a reliable and valid manner.

However, given the political climate, it is likely that legislated accountability will gain increasing emphasis in higher education:

Accountability literature in public administration indicates that the American political system has a long-standing, fundamental culture of distrust of government and other public sector institutions. As a result, the public and their political representatives are preoccupied with accountability and continually seek new control mechanisms to achieve it. So, the notion that accountability might just "go away" doesn't appear very likely, especially since higher education didn't appear to "stay tuned" to the concerns of its stakeholders when it escalated tuition and other fees in the 1980s. Many of the criticisms of higher education unfortunately have some validity and the voices of critics need to be heard and actions taken to deal with them.12

Evaluating music performance in the college music setting presents particular challenges with respect to balancing the subjective, personal nature of artistic performance with the need to maintain some degree of consistency and objectivity in order to grade students fairly. In today's political climate, and with the student increasingly viewed as a consumer, music teachers more than ever need to utilize sound, well-documented procedures for carrying out and documenting music performance evaluation.

Evaluating Music Performance

The art and science of evaluation involves two basic concepts: validity and reliability. The evaluation must be valid in that it measures what it is intended to measure. (Having, for example, a student perform a series of major scales would be a very valid way of evaluating certain technical skills, but it would probably not be the most appropriate way to evaluate the student's mastery of performance practice.) In contrast, reliability relates to the consistency of measurement. Intrajudge reliability pertains to the consistency of one faculty member's judgments (would the same performance receive the same grade at different times). In settings involving multiple judges, such as juries and competitions, reliability can also be examined across different judges.

10American Federation of Teachers, Accountability.
11Grantham, Accountability.
12Ibid., 2.


Statistically, reliability can be expressed as a ratio of agreement; thus, the higher the rate of agreement among different judges, the more reliable the evaluation. Research in music-performance evaluation has yielded mixed results. Some studies have revealed that faculty evaluations of student performances can be unreliable13 and even biased on the basis of influences such as extramusical factors,14 performance order,15 and even the performer's attractiveness.16 Reliability tends to be quite high, however, when well-constructed, criterion-specific rating scales and rubrics are used.17
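To make this notion of interjudge agreement concrete, here is a minimal sketch in Python that computes pairwise percent agreement for a hypothetical three-judge jury; the judge labels, the 1-5 rating scale, and every score are invented for illustration and are not drawn from the studies cited above.

```python
from itertools import combinations

# Hypothetical ratings (1-5 scale) that three judges assigned to the same
# five student performances. All values are invented for illustration.
ratings = {
    "judge_a": [4, 3, 5, 2, 4],
    "judge_b": [4, 3, 4, 2, 4],
    "judge_c": [5, 3, 4, 2, 3],
}

def percent_agreement(scores_a, scores_b):
    """Proportion of performances on which two judges gave identical ratings."""
    matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return matches / len(scores_a)

# Average pairwise agreement: a crude stand-in for interjudge reliability.
pairs = list(combinations(ratings, 2))
overall = sum(percent_agreement(ratings[a], ratings[b]) for a, b in pairs) / len(pairs)

for a, b in pairs:
    print(f"{a} vs {b}: {percent_agreement(ratings[a], ratings[b]):.2f}")
print(f"Average pairwise agreement: {overall:.2f}")
```

Published studies typically report more formal indices (for example, correlation-based or intraclass coefficients), but the underlying logic is the same: the more the judges agree, the more reliable the evaluation.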

Examples of Tools and Techniques

In the language of measurement and evaluation, music performance assessment can be a type of "authentic" assessment - authentic in the sense that the assessment authentically reflects the nature of the instruction. However, one should not assume that all performance-based assessment is authentic. This authenticity is only achieved when the assessment is aligned with the materials and activities of the learning process. A student who spends 95% of her applied lesson time working on scales and etudes, but whose grade is based upon jury performance of a sonatina that was prepared largely on her own, would experience an evaluation that is performance-based, but not authentic. Performance assessment may focus on a product (e.g., a music competition in which the final performance is evaluated) or a process (e.g., evaluating the student's growth over time). In many cases, music teachers are interested in both product and process. In all cases, it is important that everyone involved shares a clear understanding of how the evaluation is intended to function. Performance evaluation can be highly subjective. However, the use of well-designed evaluation tools improves reliability and validity and reduces subjectivity and bias. Three commonly used types of performance assessment tools are checklists, rating scales, and rubrics:

• Checklists are lists of behaviors or skills used to indicate whether each behavior or skill has been observed.
• Rating scales permit teachers to indicate the frequency or degree to which a behavior or skill is exhibited.
• Rubrics are rating scales that are specifically used for scoring results of performance assessments.

Checklists provide a quick snapshot of the student's achievement of specific performance goals. Checklists are best when used during the early stages of the learning process to provide a quick indication of strengths and weaknesses and to provide a convenient tool for student self-assessment.

13Bergee, "Faculty Interjudge"; Thompson and Williamon, "Evaluating Evaluation."
14Bergee and Westfall, "Stability."
15Bergee and Platt, "Influence."
16Ryan and Costa-Giomi, "Attractiveness Bias"; Wapnick, Mazza, and Darrow, "Effects."
17Bergee, "Faculty Interjudge"; Zdzinski and Barnes, "Development."


Figure 1 shows a sample checklist developed to provide feedback on piano accompaniment and song leading skills.

Figure 1. Piano Accompaniment and Song Leading Checklist (ten-item checklist; individual items not reproduced here).
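As a rough illustration of how such a checklist might be captured for record-keeping, the sketch below represents a few observed/not-observed items and tallies them; the item wording and the helper function record_checklist are invented for this example and do not reproduce the items in Figure 1.

```python
# A hypothetical accompaniment/song-leading checklist. The item wording is
# invented for illustration; it does not reproduce the items in Figure 1.
checklist_items = [
    "Plays accompaniment with steady tempo",
    "Maintains eye contact while leading the song",
    "Gives a clear starting pitch and cue",
]

def record_checklist(observations):
    """observations: dict mapping item -> True (observed) / False (not observed)."""
    for item in checklist_items:
        mark = "X" if observations.get(item, False) else " "
        print(f"[{mark}] {item}")
    observed = sum(observations.get(item, False) for item in checklist_items)
    print(f"{observed}/{len(checklist_items)} behaviors observed")

record_checklist({
    "Plays accompaniment with steady tempo": True,
    "Gives a clear starting pitch and cue": True,
})
```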

A rating scale is a useful tool for indicating the level of achievement or skill along a continuum such as from beginning to exemplary, or from never to always. Figure 2 shows an example of a rating scale for percussion.


Figure 2. Sample rating scale. Adapted from Augustana College (http://www.augie.edu/dept/music/percexam.pdf).
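In the same spirit, a rating scale can be represented as a labeled continuum onto which each criterion is scored. The endpoint labels below come from the article's own example ("beginning" to "exemplary"); the intermediate labels and the percussion criteria are invented placeholders, not the content of the Figure 2 scale.

```python
# Achievement continuum: endpoints from the text; middle labels are invented.
SCALE = ["Beginning", "Developing", "Accomplished", "Exemplary"]

# Hypothetical percussion criteria rated along the continuum (index into SCALE).
ratings = {
    "Snare drum roll quality": 2,
    "Timpani tuning accuracy": 1,
    "Mallet technique": 3,
}

for criterion, level in ratings.items():
    print(f"{criterion}: {SCALE[level]} ({level + 1}/{len(SCALE)})")
```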

A rubric achieves the same basic purpose as a rating scale, but provides more detailed information by including specific descriptions of each level along the achievement continuum. While more challenging to construct, the detailed information provided by the rubric offers very useful feedback to students. Rubrics are also quite useful as a way of communicating performance expectations to other individuals charged with judging the performance. Figure 3 shows a sample rubric for assessing preparatory-level piano performance.


Figure 3. Sample Assessment Rubric for a Preparatory Piano Performance (developed by the author's piano pedagogy graduate student). The rubric describes four levels of achievement (Not Yet, Almost, Meets, Exceeds) for dimensions including memory and security of cues, posture and distance from the keyboard*, tempo, dynamics, fingering**, and pedaling.

*Proper distance from the piano = the distance necessary for the arms to be fully, but naturally, extended.
**Standard fingerings = those fingerings marked in the score.
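To show how a rubric like the one in Figure 3 might be encoded so that scores can be recorded consistently across judges, here is a minimal sketch; the dimension names, level descriptors, and point values are simplified paraphrases or outright inventions, not the exact wording of the author's rubric.

```python
# Achievement levels in the order used by the Figure 3 rubric.
LEVELS = ["Not Yet", "Almost", "Meets", "Exceeds"]

# A drastically simplified rubric. Dimension names and descriptors are
# paraphrased/invented for illustration, not copied from the original figure.
rubric = {
    "Memory": [
        "More than two memory cues or hesitations",
        "No more than two cues or hesitations",
        "No cues; hesitates no more than once",
        "No cues and never hesitates",
    ],
    "Tempo": [
        "Rushes or drags more than twice",
        "Rushes or drags no more than twice",
        "Stays with a steady beat throughout",
        "Steady beat with stylistically appropriate flexibility",
    ],
}

def score_performance(ratings):
    """ratings: dict mapping dimension -> chosen level name. Returns 0-3 per dimension."""
    scores = {dim: LEVELS.index(level) for dim, level in ratings.items()}
    total = sum(scores.values())
    return scores, total, len(ratings) * (len(LEVELS) - 1)

scores, total, maximum = score_performance({"Memory": "Meets", "Tempo": "Exceeds"})
print(scores, f"{total}/{maximum}")
```

Keeping the descriptors alongside the level names keeps the scoring criteria visible to every judge, which is the main reliability advantage a rubric offers over a bare rating scale.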


In actual practice, music performance evaluation instruments range from simple comment forms to very detailed rubrics such as those used by Bands of America for adjudication of solo and group performance.18 The examples shown above were presented in order to illustrate some of the types of music performance evaluation instruments that are available. I do not recommend a one-size-fits-all approach to music performance evaluation; the specific function of the evaluation as well as the culture of the institution must be taken into account. The following guidelines are offered as a first step toward developing an evaluation approach that works within a particular context.

Step-by-Step Guide to Developing Performance Evaluation Instruments

1. Develop a list of specific dimensions of the performance to be evaluated. This must be done collaboratively, in order to ensure a shared understanding among all those involved in judging the performance.
2. Narrow the list down to only those dimensions that are most important. The number of dimensions will vary with the nature and complexity of the performance task. However, it is important to strike a balance between including enough dimensions to allow for meaningful feedback and keeping the number of dimensions small enough to be manageable.
3. Write clear descriptions of specific performance criteria for each dimension. Keep in mind that more specific performance criteria tend to produce more reliable evaluation results both within and across judges.
4. Plan to pilot test the evaluation instrument to assess it against the following criteria:
   a. Ease of use. Do judges find it effective or awkward, ambiguous, or tedious?
   b. Effective communication tool. Does the evaluation instrument communicate clearly to students?
   c. Validity. Do students and faculty agree that it measures what it's supposed to measure?
   d. Reliability. Does the evaluation instrument yield consistent results?
5. Make adjustments as needed and continue to monitor the effectiveness of the evaluation instrument. Do not be discouraged if it takes several drafts and repeated trials to develop a valid and reliable performance evaluation instrument.
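For step 4d in particular, a quick way to examine pilot-test data is to compare, dimension by dimension, how often judges' ratings agree; dimensions with low agreement are candidates for clearer descriptors or for removal when the list is narrowed in the next revision. The sketch below assumes hypothetical pilot scores on a 0-3 rubric scale and an arbitrary agreement threshold; nothing here comes from an actual pilot study.

```python
# Hypothetical pilot-test scores (0-3 rubric levels) from two judges who
# rated the same six performances. All numbers are invented for illustration.
pilot_scores = {
    "Memory":   {"judge_a": [3, 2, 3, 1, 2, 3], "judge_b": [3, 2, 3, 1, 2, 3]},
    "Tempo":    {"judge_a": [2, 2, 3, 1, 2, 3], "judge_b": [2, 2, 3, 1, 2, 2]},
    "Dynamics": {"judge_a": [1, 3, 2, 0, 2, 3], "judge_b": [3, 1, 3, 2, 0, 1]},
}

def agreement(a, b):
    """Proportion of performances on which the two judges chose the same level."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Flag dimensions whose interjudge agreement falls below an arbitrary threshold.
for dimension, judges in pilot_scores.items():
    rate = agreement(judges["judge_a"], judges["judge_b"])
    flag = "  <- revise descriptors?" if rate < 0.7 else ""
    print(f"{dimension}: agreement {rate:.2f}{flag}")
```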

While both formal and informal evaluations are inherent and essential aspects of music learning and performance, the particulars of how to carry out evaluation as well as how the results of evaluation should be used remain controversial. Regardless of how we might feel about the political overtones associated with accountability and the way that performance evaluation functions within the college music setting, surely we can agree that a carefully-planned approach to music performance evaluation can serve as a useful tool for faculty and students.

18See the Bands of America website for sample copies of detailed evaluation rubrics and forms: http://www.bands.org/public/resourceroom/adjudication (site requires user login).



References

American Federation of Teachers. Accountability in Higher Education: A Statement by the Higher Education Program and Policy Council. Washington, DC: American Federation of Teachers, 2000. http://www.aft.org/pubs-reports/highered/Accountability.pdf. Accessed October 12, 2007.
Benson, Cynthia. "Comparison of Students' and Teachers' Evaluations and Overall Perceptions of Students' Piano Performances." Texas Music Education Research (1995). http://www.tmea.org/OSOCollege/Researchben1995.pdf. Accessed November 7, 2008.
Bergee, Martin J. "Faculty Interjudge Reliability of Music Performance Evaluation." Journal of Research in Music Education 51, no. 2 (2003): 137-50.
Bergee, Martin J. and Melvin C. Platt. "Influence of Selected Variables on Solo and Small-Ensemble Festival Ratings." Journal of Research in Music Education 51, no. 4 (2003): 342-53.
Bergee, Martin J. and Claude R. Westfall. "Stability of a Model Explaining Selected
Extramusical Influences on Solo and Small-Ensemble Festival Ratings." Journal
of Research in Music Education 53, no. 4 (2005): 358-74.
Grantham, Marilyn. Accountability in Higher Education: Are There "Fatal Errors" Embedded in Current U.S. Policies Affecting Higher Education? Fairhaven, MA: American Evaluation Association, 1999. http://danr.ucop.edu/eeeaea/Accountability in Higher Education Summary.htm. Accessed October 12, 2007.
Kaye, Timothy S., Robert D. Bickel, and Tim Birtwistle. "Criticizing the Image of the Student as Consumer: Examining Legal Trends and Administrative Responses in the US and UK." Education and the Law 18, nos. 2-3 (2006): 85-129.
Lake, Peter F. "Tort Litigation in Higher Education." Journal of College and University
Law 27, no. 2 (2000): 255-311.
National Association of Schools of Music, National Association of Schools of Art and Design, National Association of Schools of Theatre, and National Association of Schools of Dance. A Philosophy for Accreditation in the Arts Disciplines. Reston, VA: National Association of Schools of Music, 1997. http://nasm.artsaccredit.org/index.jsp?page=Philosophy%20for%20Accreditation. Accessed October 12, 2007.
Potts, Michael. "The Consumerist Subversion of Education." Academic Questions 18,
no. 3 (2005): 54-64.
Ryan, Charlene and Eugenia Costa-Giomi. "Attractiveness Bias in the Evaluation of
Young Pianists' Performances." Journal of Research in Music Education 52, no. 2
(2004): 141-54.
Spellings, Margaret. Action Plan for Higher Education: Improving Accessibility, Affordability and Accountability. Washington, D.C.: U.S. Department of Education,

2006. http://www.ed.gov/about. Accessed October 12, 2007.
Thompson, Sam and Aaron Williamon. "Evaluating Evaluation: Musical Performance Assessment as a Research Tool." Music Perception 21, no. 1 (2003): 21-41.
U.S. Department of Education. Press Release. Washington, D.C.: U.S. Department of Education, March 22, 2007. http://www.ed.gov/news/pressreleases/2007/03/03222007.html. Accessed October 12, 2007.
U.S. Department of Education. Press Release. Washington D.C.: U.S. Department of
Education, September 28, 2007. http://www.ed.gov/news/pressreleases/2007/09/
09282007.html. Accessed October 12, 2007.
Wapnick, Joel, Jolan Kovacs Mazza, and Alice Ann Darrow. "Effects of Performer
Attractiveness, Stage Behavior, and Dress on Evaluation of Children's Piano Per-
formances." Journal of Research in Music Education 48, no. 4 (2000): 323-35.
The White House. No Child Left Behind. Washington, DC: The White House, President George W. Bush. http://www.whitehouse.gov/news/reports/no-child-left-behind.html. Accessed October 12, 2007.
Zdzinski, Stephen F. and Gail V. Barnes. "Development and Validation of a String
Performance Rating Scale." Journal of Research in Music Education 50, no. 3
(2002): 245-55.
