Lorna Gonzalez and Lisa Tremain
ED 202H / Writing Program Administration
Annotated Bibliography
May 2, 2011
Introduction
This annotated bibliography represents our efforts toward educating ourselves about writing assessment and course placement practices for incoming students. Some key search terms and questions included:
- Retention - University
- Directed Self-Placement / Self-efficacy
- Accuplacer and FYC placement standardized exams
- WPA as authority & who makes decisions about writing programs?
  - Who gets to make decisions about these things (placement)?
- Machine-scored placement exams?

With our search, we hoped to learn whether a correlation had been established between placement in first-year composition and student retention. In WPA literature, authors were open to but skeptical of machine-scored assessments, and all agreed that human intervention in some form is a necessary component of student placement. Of the readings we selected, those dealing with retention and student satisfaction stressed the importance of student-advisor interaction and student choice in course selection. What follows is more specific annotation on the individual books, chapters, and articles.
Annotated Bibliography
Condon, W. (2009). Looking beyond judging and ranking: Writing assessment as a generative practice. Assessing Writing, 14, 141-156.

Condon's article presents an alternative to more "reductive forms of writing assessment" through his argument for assessments which stem from "generative" prompts that enable test-takers to examine and reflect upon their own experiences as writers and learners (141-2). In the first half of the article, Condon acknowledges that the financial savings of timed-essay tests (often used for placement purposes, such as the SAT and ACCUPLACER) "assure that the lowest form of assessment provides the appearance of thoroughness and that the greatest economy will prevail" (142). However, these types of tests prioritize placement over process and construct, and the student products resulting from these high-stakes "throw away" tests don't generate any useful data for research purposes or for student reflection and learning. Condon notes that in systematizing reductive forms of assessment, universities inscribe values of writing which are underscored by "the least interesting and the least useful potential product of an assessment: the score, the ranking, the placement" (142). In fact, Condon implicates the economics of placement in the increased use of machine-scored essay tests, despite the fact that machine scoring is an even more reductive form of an already reductive assessment. In the second half of the article, Condon describes the process used at Washington State University to move away from "typical" writing placement essay prompts (ones which asked students to respond to a reading or argue a position) toward a generative type of prompt which calls upon writers to share their individual experiences as learners and writers. WSU's FYC placement prompts are specifically generative since they ask writers to reflect upon two of six institutional learning goals, which are intended to be incorporated into the general education program and into the courses offered by each department. WSU's prompt asks students for two essays: one sample where students discuss and analyze influential courses and/or teachers, and the other where students identify and reflect on learning experiences outside the classroom that they feel will help them achieve the learning goals. Condon describes various factors that make these types of prompts beneficial, including the more obvious fact that they are reflective in nature. Additional advantages of a generative prompt include two important distinctions: first, universities are "of necessity" moving in the direction of specifying clear learning outcomes and should base assessments on them; second, these tests provide a "robust set of data" for researchers to study in terms of learning, writing, and reflection, as well as information about the learning styles and interests of test takers (145). Condon concludes that moving toward a more generative type of assessment is in the institution's and writing program's "enlightened self-interest" (153).

Corso, G. (2006). The role of the writing coordinator in a culture of placement by ACCUPLACER. In P. F. Ericsson & R. Haswell (Eds.), Machine Scoring of Student Essays. Utah: Utah State University Press, 154-165.

Corso begins this chapter with a telling narrative about her history with writing program administration and student placement. Worth mentioning is that Corso's institution had, at one time, placed students by using a holistic essay scoring method, but a consultant recommended a switch to Accuplacer, which allowed advisors to make placement decisions in a more timely manner. At the time of this publication, Corso's advocacy for adjustments to placement procedures included implementing a portfolio appeals process for students. In fact, between 2003 and 2006, Corso's university utilized Accuplacer on a pilot basis, and members of a placement committee found that humans often needed to intervene on placement results from Accuplacer. Though Accuplacer meets a need for enrolling many students in a short time period, Corso suggests that it is a flawed system, both because of the need for human intervention and because it devalues writing as a process. In fact, the misplacement that occurred as a result of Accuplacer scoring disrupted weeks of instruction while students moved to other courses. Throughout this chapter, Corso notes how human intervention became more proactive over time, but that it remained a necessary component to offset Accuplacer's misplacement recommendations. Likewise, she notes how her specific role has been active advocacy for student needs and the school mission by reading essays and consulting with students and advisors each year that Accuplacer is utilized for placement.

Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes. Research & Occasional Paper Series: CSHE.6.07. Accessed from http://cshe.berkeley.edu/publications/publications.php?id=265 on 29 April 2011.

The authors of this study sought to examine the relative usefulness of high school grades and standardized admissions tests for predicting students' long-term performance in post-secondary school. Using the University of California's student database, the researchers sampled 80,000 first-time freshmen and used multi-level data modeling to analyze the data for indicators of long-term success (defined as graduation and cumulative four-year GPA). Despite many inconsistencies with secondary school grading techniques and weights, the researchers determined the high school record to have strong "superiority" over standardized tests in predicting long-term success in college. Though this study does not directly correlate with first-year