
Lorna Gonzalez and Lisa Tremain

ED 202H / Writing Program Administration

Annotated Bibliography

May 2, 2011

Introduction

This annotated bibliography represents our efforts toward educating
ourselves about writing assessment and course placement practices for
incoming students. Some key search terms and questions included:

• Retention - University
• Directed Self-Placement / Self-efficacy
• Accuplacer and FYC placement standardized exams
• WPA as authority & who makes decisions about writing programs?
  o Who gets to make decisions about these things (placement)?
• Machine-scored placement exams?

With our search, we hoped to learn whether a correlation had been
established between placement in first-year composition and student
retention. In WPA literature, authors were open to but skeptical of machine-
scored assessments, and all agreed that human intervention in some form is
a necessary component of student placement. Of the readings we selected,
those dealing with retention and student satisfaction stressed the
importance of student-advisor interaction and student choice in course
selection. What follows is more specific annotation on the individual books,
chapters, and articles.

Annotated Bibliography

Condon, W. (2009). Looking beyond judging and ranking: Writing
assessment as a generative practice. Assessing Writing, 14, 141–156.

Condon’s article presents an alternative to more “reductive forms of writing
assessment” through his argument for assessments which stem from
“generative” prompts that enable test-takers to examine and reflect upon
their own experiences as writers and learners (141-2). In the first half of the
article, Condon acknowledges that the financial savings of timed-essay tests
(often used for placement purposes, such as the SAT and ACCUPLACER)
“assure that the lowest form of assessment provides the appearance of
thoroughness and that the greatest economy will prevail” (142). However,
these types of tests prioritize placement over process and construct—and
the student products resulting from these types of high stakes “throw away”
tests don’t generate any useful data for research purposes or student
reflection and learning. Condon notes that in systematizing reductive forms
of assessment, universities inscribe values of writing which are underscored
by “the least interesting and the least useful potential product of an
assessment: the score, the ranking, the placement” (142). In fact, Condon
implicates the economics of placement in the increased use of machine-
scored essay tests, despite the fact that machine scoring is an even more
reductive form of an already reductive assessment. In the second half of the
article, Condon describes the process used at Washington State University to
move away from “typical” writing placement essay prompts—ones which
asked students to respond to a reading or argue a position—to a generative
type of prompt which calls upon writers to share their individual experiences
as learners and writers. WSU’s FYC placement prompts are specifically
generative since they ask writers to reflect upon two of six institutional
learning goals, which are intended to be incorporated into the general
education program and into the courses offered by each department. WSU’s
prompt asks students for two essays: one in which students discuss and
analyze influential courses and/or teachers, and another in which students
identify and reflect on learning experiences outside the classroom that they
feel will help them achieve the learning goals. Condon describes
various factors that make these types of prompts beneficial, including the
more obvious fact that they are reflective in nature. Additional advantages
of a generative prompt include two important distinctions: first, universities
“are of necessity” moving in the direction of specifying clear learning
outcomes and should base assessments on them, and second, these tests
provide a “robust set of data” for researchers to study in terms of learning,
writing, and reflection as well as information about the learning styles and
interests of test takers (145). Condon concludes that moving toward a more
generative type of assessment is in the institution’s and writing program’s
“enlightened self-interest” (153).

Corso, G. (2006). The role of the writing coordinator in a culture of placement
by ACCUPLACER. In P. F. Ericsson & R. Haswell (Eds.), Machine Scoring of
Student Essays (pp. 154-165). Utah: Utah State University Press.

Corso begins this chapter with a telling narrative about her history with
writing program administration and student placement. Worth mentioning is
that Corso’s institution had, at one time, placed students by using a holistic
essay scoring method, but a consultant recommended a switch to
Accuplacer, which allowed advisors to make placement decisions in a more
timely manner. At the time of this publication, Corso’s advocacy for
adjustments to placement procedures included implementing a portfolio
appeals process for students. In fact, between 2003 and 2006, Corso’s
university utilized Accuplacer on a pilot basis and members of a placement
committee found that humans often needed to intervene on placement
results from Accuplacer. Though Accuplacer meets a need for enrolling
many students in a short time period, Corso suggests that it is a flawed
system, both because of the need for human intervention and the devaluing
of writing as a process. In fact, the misplacements that occurred as a result
of Accuplacer scoring disrupted weeks of instruction while students moved
to other courses. Throughout this chapter, Corso notes how human
intervention became more proactive over time, but that it remained a
necessary component to offset Accuplacer’s misplacement recommendations.
Likewise, she notes that her specific role has been to actively advocate for
student needs and the school mission by reading essays and consulting with
students and advisors each year that Accuplacer is utilized for placement.

Geiser, S. & Santelices, M. V. (2007). Validity of high-school grades in
predicting student success beyond the freshman year: High-school record
vs. standardized tests as indicators of four-year college outcomes.
Research & Occasional Paper Series: CSHE.6.07. Accessed from
http://cshe.berkeley.edu/publications/publications.php?id=265 on 29 April
2011.

The authors of this study sought to examine the relative uses of high school
grades and standardized admissions tests for predicting students’ long-term
performance in post-secondary school. Using the University of California’s
student database, the researchers sampled 80,000 first-time freshmen and
used multi-level data modeling to analyze the data for indicators of long-
term success (defined as graduation and cumulative four-year GPA).
Despite many inconsistencies with secondary school grading techniques and
weights, the researchers determined the high school record to have strong
“superiority” over standardized tests in predicting long-term success in
college. Though this study does not directly address first-year composition
placement techniques, it stands to reason that its findings do not support
standardized testing as an adequate course placement or retention
technique.

Gere, A., Aull, L., Green, T. & Porter, A. (2010). Assessing the validity of
directed self-placement at a large university. Assessing Writing, 15, 154–176.

In this article, Gere, Aull, Green, and Porter examine an established directed
self-placement (DSP) program at the University of Michigan, applying
Messick’s (1989/1995) definition of validity, which focuses on interpretations
and actions stemming from assessment results and, thus, the meaning(s)
placed upon assessment by the institution or writing program. The study
design was meant to determine “the degree to which the implementation of
DSP at the University of Michigan between 2003 and 2008 led students…to
take writing courses that were appropriate for them” (155). The authors’
argument highlights the gap in DSP research which reveals that while several
articles “touch on issues of reliability,” no research exists which examines
the “validity of DSP in systematic terms, [where validity should] link
evidence with social and personal consequences and values” (156). Thus,
the researchers use various data sources to examine the validity of DSP at U
of M, including student scores and decisions, DSP online questions,
interviews and surveys. In order to examine the empirical data and
theoretical positions about DSP at U of M, Gere et al. use the six aspects of
validity as designed by Messick to inform the interpretation of results from
the data: 1) the extent to which the content of the DSP questions aligns with
the FYC writing construct at U of M, 2) the extent to which DSP questions
and students’ responses to them are theoretically grounded, 3) the extent to
which the scoring of DSP surveys aligns with the construct of FYC writing at U
of M, 4) the extent to which DSP scores generalize across time, student
populations, and the construct of FYC at the U of M, 5) the extent to which
the scores on other assessment measures (such as course grades) correlate
with the scores on the DSP questions, and 6) the implicit values in and
consequences of interpreting and using DSP scores. The authors found that
through using Messick’s definition of validity, a number of weaknesses in the
DSP program at U of M were clarified and/or revealed. In response to each of
the research questions, framed around one of six aspects of validity
described above, the researchers found that there were several
“disconnects” between the DSP and the first-year writing courses that
students chose to take (170). While these findings show that DSP and
students’ needs and desires did not correlate in terms of validity, the authors
highlight that any DSP program must be examined within the local context
of its college or university. A second implication of this study is that DSP had
perhaps been “under-conceptualized” at the U of M, including the way it
positioned students, who often resist “easy categorization.” Gere, Aull,
Green, and Porter ultimately argue that a fully conceptualized view of DSP
would both “avoid boxing students into narrow categories” and encourage a
deeper consideration of validity-oriented assessments of the DSP program
itself (175).

Harrington, S., Fox, S., & Hoge, T. (1998). Power, partnership, and
negotiations: The limits of collaboration. WPA Journal, 21(2/3), 52–64.

Harrington, Fox, and Hoge describe the complicated nature of writing
program administration through an examination of two different views of
writing program leadership: one, Ann Ruggles Gere’s view that a single
figure serves as WPA but functions in an inevitably complex role and must
hold “multiple subject positions”; and the other, Barbara Cambridge and
Ben McClelland’s call for a collaborative leadership structure of the writing
program which emphasizes partnership with faculty within the program and
across campus. Both views of WPA leadership underscore the idea that the
WPA is a “dynamic figure who enables other research,” whether it be the
research of the position and the program (as Gere promotes) or the focus on
“innovative faculty partnerships” (highlighted by Cambridge and McClelland)
(3). While the key difference between the two views outlined in the
article illuminates the difference between administrator and administration,
Harrington et al. argue that neither view addresses “how such partnerships
come to be created in a hierarchical university environment, how power
(even the decentralized, facilitated kind) is acquired, and how collaboration
works on a daily basis” (53-4). The authors point out that theories of WPA
work must address the difficulties of what it means to create networked
partnerships as well as the ways in which first year writing is not always
acknowledged as a complex form of teaching and scholarship. The second
half of their article focuses on theories of WPA work, including complicating
collaboration by looking at the different forms that it might take, how titles of
WPA work (administrator vs. coordinator, for example) and faculty rank
contextualize the nature of campus partnerships, and how programs and
WPAs can (and do) manage conflict, both within the program and the larger
institution. One suggestion promoted by the authors comes from Cambridge
and McClelland’s emphasis on “twin citizenship” in a writing program, where
tenure-track faculty’s focus on the profession and the discipline is maximized
and valued, and non-tenure-track faculty’s focus on the local community,
including students and teachers on campus (to whom they see themselves
as “more responsible”), is equally valued. “Twin citizenship” enables
“better decisions to be made” since all program partners are invested, in
some way, in the two “worlds” in which the writing program is situated.
Ultimately, whether the program focuses on administration through
collaborative leadership or an administrator who occupies multiple and
complex roles, Harrington, Fox, and Hoge argue that partnerships inside and
outside the program must be constantly “(re)negotiated” and that WPA work
must “foreground the ways that conflict, power, and citizenship sometimes
unite and sometimes brush up against each other” (63). As the WPA holds a
specific subject position within these negotiations, a focus on power,
partnership, and negotiation will help sustain and evolve the writing program
model.

Jones, E. (2006). Accuplacer’s essay-scoring technology. In P. F. Ericsson & R.
Haswell (Eds.), Machine Scoring of Student Essays (pp. 93-113). Utah: Utah
State University Press.

Jones’ suspicion of Accuplacer’s essay-scoring system, WritePlacer Plus, leads
him and others to holistically score essays also scored by Accuplacer in order
to determine the reliability and validity of the results. He argues, then, that
Accuplacer is simultaneously reliable and invalid, citing “the exaggerated
value placed upon sheer length and the undervaluing of problems that have
to do with readability” (p. 100). To test the exaggerated value of length,
Jones pasted and submitted two separate essays combined as one
document, which scored a ten. Separately, the essays contained arguments
that contradicted each other, and each received a score of seven out of a
maximum of twelve. Despite the conflicting arguments, the combined essays
scored higher, thus confirming the length hypothesis. To test the readability
hypothesis, Jones corrected low-scoring essays and resubmitted them, only
to receive incremental improvement in the scores. In another test of
coherence, Jones randomly reordered twenty-one sentences in an essay that
received an original score of eight. The new, nonsensical essay scored a
seven. Though the scoring system provided consistent results, Jones
concludes that WPAs should not place students based upon essay scores
alone, but he also recognizes the limitation of his study in discussions of the
validity of other methods, including portfolios and directed self-placement.
This study may serve as one instance where the pilot requested of CSULL
has already been carried out by other programs and therefore need not
be replicated at great expense to our school.

Lewiecki-Wilson, C., Sommers, J. & Tassoni, J. P. (2000). Rhetoric and the
writer’s profile: Problematizing directed self-placement. Assessing Writing, 7,
165–183.

Lewiecki-Wilson, Sommers, and Tassoni’s article presents an argument in
favor of directed student self-placement (DSP) over other forms of placement
for first-year composition (FYC), including computer-based essay tests.
However, Lewiecki-Wilson et al. narrow this argument such that the local
contexts of the institution and its writing program must be included in the
design of the DSP program since any “form of assessment chosen is a
rhetorical [social] act” (165). Using the local context of their two-year
college, the authors describe the design of the “Writer’s Profile” as a
strategic form of FYC placement. In the Writer’s Profile, students self-select
different types of their own writing to submit to the college and writing
program faculty (two readers per profile) assess these materials by
discussing and negotiating together in order to refer the student to the most
appropriate writing course. But this article is not merely a description of their
process of using the Writer’s Profile as a form of placement; it is also a
critique of universalized assessment practices at
two- and four-year colleges that do not “recognize that a student is not a
universal category, nor a transparent construct” (168). Despite the evolution
of various assessment types in public universities, the authors describe
objective placement-focused essay tests (e.g. COMPASS and ACCUPLACER)
as part of a “backward-moving current” in assessment, particularly prevalent
in two-year colleges. While the article describes the Writer’s Profile as
creating greater equity between students, instructors and administrators and
fostering changes in instruction as faculty review the profiles, the authors
realize that this type of placement assessment is likely unrealistic at large
institutions, particularly due to its expense. Nevertheless, in considering
assessment as a rhetorical social act, one which generates “dialectical
interactions” between campus stakeholders, including students, writing
instructors, staff, and administrators, Lewiecki-Wilson, Sommers and Tassoni
frame the importance of the local context in developing any campus
placement exam or high stakes writing exam, since “assessment speaks” to
all campus stakeholders about inscribed values and practices of college-
based writing.

Roberts, J. & Styron, R., Jr. (2010). Student satisfaction and persistence:
Factors vital to student retention. Research in Higher Education Journal, 6,
1-18.

The authors, citing Hagedorn (2005), first identify four categories of student
retention: institution, system, academic discipline, and course. For this
study, they focus on retention in a specific academic department of concern
by comparing survey results of students enrolled in spring 2008 with the
enrollment statuses of students enrolled for fall 2008. The authors
determined that social connectedness and faculty approachability were
dependent variables that significantly differed between students classified
by whether or not they returned to the university in Fall 2008. From these
findings, they make recommendations for retention strategies, including
cohort models of student support and increased faculty-student collaboration
and social interaction. Their emphasis on students’ engagement in their
academic lives conflicts with Noel-Levitz’s recommendation to automate
course enrollment. In fact, it raises concerns about the adverse effects of
replacing an effective and social placement option (DSP) with an automated
one.

Whithaus, C. (2005). Teaching and Evaluating Writing in the Age of
Computers and High-stakes Testing. London: Lawrence Erlbaum Associates.

While much of Whithaus’ book focuses on multimedia composing and how to
develop these skills in students through institutional writing program design,
a significant portion of his discussion also centers on arguing for the types of
large-scale assessments that are needed in the digital age, including FYC
placement. While acknowledging that many FYC placement programs
do use digital modes of testing and scoring, Whithaus points out that
these assessments are often universalized, narrowly defined, and not
situated in authentic “communication activities” (xxvii). In various chapters,
Whithaus argues that a more appropriate methodology for assessment
should be embedded in the situated practice of writers and writing, including
incorporation of the multimodal. Specifically, Whithaus suggests that
colleges and universities consider new systems of evaluation which
acknowledge that effective communication occurs through interacting with
other writers and thinkers. Additionally, students’ writing and learning
should be studied and described by faculty through multimodal and
multimedia constructs; assessment practices should involve distribution
among multiple readers and various audiences; and tests should assess
student skill levels in relation to particular writing purposes (150). An
electronic writing portfolio, Whithaus suggests, is the most comprehensive
assessment for evaluating student writing, and promotes evaluation as
looking at “what and how students learn” rather than “deficits judged by
outdated print-based standards” (151). Whithaus acknowledges that his
argument for situated
evaluation will present many difficulties for a WPA or program looking to
institute change in writing assessment practices. He describes tackling
these difficulties as much like the revision practices student writers need to
embrace, ones which often require “deep conceptual change.” Furthermore,
he argues that any situated evaluation change must occur at the grassroots
level, and include a full examination of existing research, including that
which comes from “actual classrooms” (152). Whithaus’ discussion includes
many references to the overt actions and undercurrents of campus politics in
assessment practices in the age of technology. He hopes to help students
“flip the instruments of the institution back on themselves” by including
assessment practices in which students are asked to pursue “subjects of
inquiry and methods of evaluation [which are] relevant to their academic
lives or professional careers” (52-3).

2009 National Research Report: Academic Advising highly important to
students. Accessed from https://www.noellevitz.com/papers-research-higher-education/2009/academic-advising-highly-important-students
on 21 April 2011.

Using a “Noel-Levitz Student Satisfaction Inventory” survey instrument as
their data-gathering method, employees of Noel-Levitz, Inc. aggregated data
from more than 550,000 students at two- and four-year public, private, and
professional colleges and universities. Their resulting report touts academic
advising as the most important aspect of student satisfaction because of its
high ranking across institution types. From there, they make broad
suggestions for developing an effective academic advising plan. Some of
these suggestions include: begin the planning process; set goals; design an
information system; discuss student participation, etc. (pp. 4-5). They take
these steps and suggest corresponding tasks (e.g. “develop a consensual
definition of advising”) and “major questions” to ask when developing this
plan. However, none of the suggestions are grounded in other research,
scholarly literature, or theory of any kind. The company depends on its
authority as a national corporation in the business of education as the basis
for these recommendations. Additional reports on student retention
(https://www.noellevitz.com/papers-research-higher-education/2009/2009-
student-retention-practices-and-strategies-report) and measuring student
success utilize similar poll-recommendation techniques for reporting
information.

Specific Noel-Levitz Resources

• Noel-Levitz web site - They are a consulting firm. In their “About Us”
section, they “strategically align” with publishing companies
Bedford/St. Martin’s, Prentice Hall, and Cengage Learning.
• Some of their materials on retention - They use rhetoric of data-driven
approaches.
• A search of the Noel-Levitz site for Accuplacer yielded zero results.
• At their conference on student recruitment and enrollment, the keynote
speakers come from recognized universities, including LaGuardia
Community College (home of Darrin Cambridge & his electronic
portfolios) and San Diego State University.
• Their offices are in Iowa City, Iowa and Denver, Colorado.

A search of CompPile for “Accuplacer” yielded the following results:

• The most recent piece was from 2006 and the oldest was from 1990.

Other Noel-Levitz/Accuplacer Info:

• Businessweek’s page on Noel-Levitz as a private business.
• University of Iowa has a Noel-Levitz building. Noel-Levitz headquarters
also seem to be in Iowa.
