Engl 810 – PAB Entry 3

Hassel, Holly, and Joanne Baird Giordano. “First-Year Composition Placement at Open-Admission, Two-Year Campuses: Changing Campus Culture, Institutional Practice, and Student Success.” Open Words: Access and English Studies 5.2 (2011): 29–59. Web.

http://www.citruscollege.edu/lc/testing/PublishingImages/Accuplacer.jpg

A student takes the ACCUPLACER so he can be placed “effectively” into English and math courses that are right for him.

In this article, Holly Hassel and Joanne Baird Giordano address one of the enduring issues in English studies, particularly in ESL and developmental English classes: placement tests, the exams meant to place students into a particular English course based on their skills and abilities as writers and thinkers. According to Hassel and Baird Giordano, over 92 percent of community colleges use some form of placement exam; 62 percent use the ACCUPLACER, 42 percent use Compass, and many schools use some combination of the two. These tests, then, become the primary method of assessing student readiness for first-year composition (30). Unfortunately, these tests tend to measure outcomes that “do not reflect the complex demands of academic discourse in the first college year” (30) or are disconnected from the learning outcomes the college writing program sets for students (30-31). For example, while standardized tests are good at measuring sentence correction, reading comprehension, and other quantifiable skills, they do not test these abilities in action or in problem-solving scenarios. Even more problematic, these placement tests are poor at correctly placing students: while only 1-3 percent of students at competitive universities ended up re-taking first-year composition, 25-35 percent of community college students did. This outcome suggests that such tests measure little more than a student’s “test-taking skills” (38).

Hassel and Baird Giordano note that the data they researched matches the experience at their own community college in Wisconsin nearly identically. Their school places students into English courses based on a single test score; as a result, somewhere between 20 and 35 percent of students ended up re-taking their first-year English courses due to misplacement and subsequent poor performance (34). They argue that a better system would take a multipronged approach to assessing student levels and placement (36). The approach they suggest includes looking at ACT or SAT scores, which would identify which students (they say around 50 percent) are under-prepared (37); asking students to complete a writing sample, assessed against the writing program’s learning outcomes (39); examining high school curriculum and grades, particularly for students with “borderline placement profiles” (41); and using surveys and student self-assessments, which ask students to gauge their own readiness for college reading and writing courses and are particularly useful for older returning students. In this approach, students can meet with English faculty, learn about the courses, and make an educated decision (41-42).

Hassel and Baird Giordano note that this endeavor, while work-intensive, is much more successful than the test model. They observed that “students who remained in good standing at the end of the fall semester significantly increased over the implementation of this approach” (44). While they acknowledge the difficulty and labor such an approach requires, particularly for large institutions, they recommend gradually increasing the number of measures used for placement over time (53).

Last week I wrote about the history of ESL students in the composition classroom, and one of the major ways students were oppressed was through placement tests that sorted them into or out of the “right” group. As Paul Kei Matsuda noted, these tests were given to students under the assumption that “language differences can be effectively removed from mainstream composition courses” (642). Clearly, language differences cannot be removed or “fixed,” and perhaps should not be removed from the language class. As Janice Lauer notes in chapter 2 of English Studies: An Introduction to the Discipline, in 1974 CCCC adopted the resolution “Students’ Right to Their Own Language,” in which the organization affirmed that students should be able to use the patterns, dialects, and varieties of speaking and writing in which they find their own style and identity (120). This was a huge turn for a discipline that had frequently been non-inclusive toward speakers of other languages. Placement testing, therefore, would seem to run directly counter to this resolution: by asking students to read, think, and write in one particular way, students who may be excellent thinkers and writers are relegated to one part of the college, while good test takers are relegated to courses for which they may not be prepared. Placement testing, then, is not only ineffective, as Hassel and Baird Giordano point out, but it also denies students the right to their own language, misplacing them based on testable markers that say little about actual writing and thinking skill.

What Hassel and Baird Giordano present in this piece is particularly interesting and raises a major question that needs answering: do placement tests work? Their clear answer, based on the historical data, is no. They have found, however, that a multipronged approach can be effective. Yet this leaves us with another question: how can we successfully implement their method across colleges big and small, when faculty are already overtaxed and often underpaid? While this approach clearly works well for a school with only 1,400 students like the authors’, how can it be applied to a very large school like NOVA, where thousands of students take Composition I and II each semester? I do not think this paper leaves us with an answer that makes this feasible yet. From personal experience, even checking the prerequisite and developmental histories of each of my 100-plus students is time-consuming, stressful, and riddled with errors and discrepancies. The thought of examining multiple test scores, meeting with students, and reading writing samples seems insurmountable. The next question, then, is: if we get rid of placement tests, how do we quickly and more effectively replace them?

Works Cited

Hassel, Holly, and Joanne Baird Giordano. “First-Year Composition Placement at Open-Admission, Two-Year Campuses: Changing Campus Culture, Institutional Practice, and Student Success.” Open Words: Access and English Studies 5.2 (2011): 29–59. Web.

Matsuda, Paul Kei. “The Myth of Linguistic Homogeneity in U.S. College Composition.” College English 68.6 (2006): 637-651. Print.

Lauer, Janice. “Rhetoric and Composition.” English Studies: An Introduction to the Discipline. Ed. Bruce McComiskey. Urbana: NCTE, 2006. 106-152. Print.