Volume 12, Issue 1: 2019

Beyond Tradition: Writing Placement, Fairness, and Success at a Two-Year College

by Jessica Nastal, Prairie State College

This archival study analyzed the impact of a writing skills placement test at a minority-serving community college. With special emphasis on the 1,029 students who placed into the lowest level of developmental writing, attention was given both to performance (grades and grade point average) and to student placement (in terms of sex and race/ethnicity) from 2012-2016. With findings indicating that the placement test placed an undue burden on Black students, the case study is used to raise questions about success and how it is formulated, as well as about the instrumental value of the case for next-generation fairness measures at two-year colleges.

Keywords: college composition, fairness, two-year colleges, writing assessment, writing placement


Introduction

Complexity of Placement

Placement is fraught with conceptual and operational complexity. Decades of research demonstrate that indirect measures do not capture students’ writing ability, yet two- and four-year institutions nationwide continue to use them because they are inexpensive and quick (Huddleston, 1954; Isaacs, 2018; Stein, 2016; Williamson, 1994). When used for placement, timed writing exams are similarly judged invalid as a result of their constraints on the writing construct (Bereiter, 2003; Faigley, Cherry, Jolliffe, & Skinner, 1985). Moreover, many scholars resist automated writing evaluation because “writing to a machine is not writing at all” (Herrington & Moran, 2001, 2012; Perelman, 2012). Directed or Guided Self-Placement is held up as a model of more ethical and accurate placement because it relies on students actively participating in the placement process (Royer & Gilles, 2003), but it is perceived by some as a time-consuming and expensive model that shifts the decision-making burden onto students who may be uninformed (Gere, Aull, Green, & Porter, 2010; Gere, Aull, Perales-Escudero, Lancaster, & Vander Lei, 2013; Jones, 2008; Ketai, 2012; Schendel & O’Neill, 1999). Challenge methods seem promising as a way of making the best of a bad situation (Peckham, 2009, 2010). Still others suggest omitting placement altogether (Elbow, 2012) and offering institutional support for students enrolled in credit-bearing courses (Adams, Gearhart, Miller, & Roberts, 2009), but some faculty are wary when local governments encourage co-requisite models as a cost-savings measure.

Although placement into composition is still a widespread practice, Haswell (2005) and Smith (1992) have pointed out it is often a judgment call (see Kane, 2006, p. 24, on value judgment) in which decisions are made without sufficient evidence. Students may perform better the second time they take a test, so the test-retest reliability of their scores on placement exams comes into question (see Haswell, 2004, for a summary of the research). Placement tests that rely on multiple-choice questions about grammar, usage, and mechanics offer little information about the writing students actually do, so their construct validity is suspect. And, far too often, we know little about the sub-group analyses that are prerequisite to score interpretation and use. Processes that require students to make an informed decision are criticized for pushing the responsibility onto students, or for putting too much of a financial strain on writing programs and universities.

Placement in Two-Year Colleges

Placement in two-year colleges is especially complicated, in part because of our complex mission as a uniquely American institution:

Community colleges are centers of educational opportunity. They are an American invention that put publicly funded higher education at close-to-home facilities, beginning nearly 100 years ago with Joliet Junior College. Since then, community colleges have been inclusive institutions that welcome all who desire to learn, regardless of wealth, heritage, or previous academic experience. The process of making higher education available to the maximum number of people continues to evolve at 1,167 public and independent community colleges—1,600 when branch campuses are included. (American Association of Community Colleges, 2017)

Yet, we also pit open admissions policies focused on educational access against restrictive gatekeeping practices. We invite in students from typically underserved communities and penalize them for their linguistic, cultural, and socio-economic markers. We know an initial placement decision follows students throughout their entire academic career, determines their time-to-degree, affects their financial obligations, and provides them with an institutional identity—and that the majority of students in community colleges (52% [Complete College America (CCA), 2012, p. 6]) place into at least one developmental course. Very few then go on to complete college-level courses, certificates, and degrees (see Chen, 2016). If we believe in the mission of offering access to high-quality education for all students, then we have an obligation to ensure our practices—including our placement practices—reflect that mission.

The Death of the COMPASS Exam

In 2015, ACT announced it would no longer offer the COMPASS test for placement into English and math because of its limited predictive ability: “A thorough analysis of customer feedback, empirical evidence and postsecondary trends led us to conclude that ACT COMPASS is not contributing as effectively to student placement and success as it had in the past” (Fain, 2015). Responding to two influential studies out of the Community College Research Center at Columbia University’s Teachers College (Belfield & Crosta, 2012; Scott-Clayton, 2012), ACT acknowledged “limitations in measuring college readiness” (Fain, 2015). While efficient, it was ineffective and offered limited predictive ability, as Scott-Clayton (2012) found. The traditional view of placement (as discussed in the Introduction of this special issue), in other words, failed.

The death of the COMPASS exam presents a kairotic moment for teacher-scholar-practitioners nationwide to enact meaningful changes and move toward a more valid and fair method, particularly for typically underserved communities. It is within this complex environment that two-year colleges and open-access institutions must define writing placement success and then identify methods that allow opportunity structures to be created for diverse students that, in turn, facilitate success (Merton, 1938, 1996). Now, two-year colleges nationwide have the opportunity to change how incoming students begin their academic careers, and to begin implementing more equitable and fair practices.

Case Study

The aim of this study was to understand how placement scores related to student success for a diverse student population. Of primary concern was attention to fairness; to address this concern, we examined relationships between placement scores, grades in placed courses, GPA, and demographic factors. The sample was drawn from first-time undergraduate students at a community college in the Midwest, and included five years (2012-2016) of placement scores and student grades. We disaggregated data, examined correlations of grades and GPAs, analyzed differences between subgroups, and performed survival analysis. This study contributes to writing assessment studies by analyzing consequences of placement at a two-year college with a diverse student body. The findings show students—particularly Black students—who placed into the lowest-level developmental writing class rarely went on to pass the gateway course, confirming patterns discussed in the scholarship, and raising questions about how writing assessment can work toward achieving fairness.

Literature Review

The Placement Problem

Placement is often seen as a necessary and democratizing tool: It is necessary to determine which students stand to benefit from the most intervention, and it is democratic because students are not prevented from admission based on their writing abilities (White, 1995). It has been used to identify which students need the most instructional support. It sorts students into various ability levels to help teachers target their instruction and to ensure students receive instruction that will build on their current skills and encourage them to develop into college-level critical thinkers, readers, and writers. Writing studies scholars have historically claimed placement tests can fulfill equity missions, that the developmental writing programs into which 1.7 million students nationwide (CCA, 2017) place “[serve] to help underprepared students succeed instead of washing them out” (White, 1995, p. 76). Placement testing is assumed to help students because it directs them to developmental writing courses, which can offer students more “guidance and support” (White, 1995, p. 77) before they enter the presumably more difficult college-level course.

But, placement has been divisive in the literature. Some scholars claim high-stakes assessments are all too often instruments of White middle-class values that penalize students from historically underserved communities—“students of color, multilingual students, and working class students” (Inoue, 2014, p. 330). Trachsel (1992), for instance, has argued, “Educational tests are more apt to function as mechanisms that enable an educated elite to impose exclusive standards upon academic aspirants” (p. 22) and, even more dramatically, that large-scale standardized tests are “instruments of social tyranny” (p. 22). Crowley (1996) viewed such tests as exclusionary tools: “In the current mean-spirited political climate, I doubt whether we serve ‘new students’ well by using mass examinations to segregate them into classrooms that can readily be identified as remedial or special” (p. 90). Bailey, Jaggars, and Jenkins (2015) have more recently claimed the purpose of traditional placement testing is, in fact, “to identify some group of students who will be kept out of a college-level program of study, or whose entry will at least be delayed” (p. 23, emphasis original).

English placement testing has also proven to be inaccurate: It frequently results in over-remediation, affecting anywhere from 14% (Smith, 1993) to 64% (Haswell, 2004) of students (see also Bailey et al., 2015), and it offers limited predictive ability (Haswell, 2004). Furthermore, placement disproportionately affects historically underserved communities, who populate developmental writing courses at the highest rates, fail those courses at the highest rates, and are least likely to persist to the college-level course or to complete a degree. Today, more than 20 years after Crowley’s argument, we are still struggling to answer her question: “Can we serve diverse student bodies well through placement?” The dilemma remains whether to advance a student who is not ready for challenging coursework or to retain the student and risk losing them altogether.

The placement problem draws attention to a number of assumptions that undergird the process and calls its necessity and role into question (see Schmitz & delMas, 1991). Some of these assumptions refer to the placement process itself—that placement procedures can sort students by writing abilities; that the procedure adequately represents the local curriculum; that “placement scores contribute to the prediction of course grades” (Schmitz & delMas, 1991, p. 40). Other assumptions refer to the purpose of testing incoming students’ writing ability—that sorting students by writing ability will improve their success in writing courses; that “scores accurately represent a student’s standing within an academic domain” (Schmitz & delMas, 1991, p. 39); that students’ writing abilities differ from each other enough to warrant different courses; that students’ differing writing abilities can be identified in a placement procedure; and that examining students’ writing ability provides enough relevant information about other courses that serve as “major hurdles in the introductory college curriculum” (Bailey et al., 2015). Still other assumptions refer to the course sequence: It is best to place students into the highest-level course they are able to succeed in (see Kane, 2006, p. 24); developmental coursework, offering students the most instructional support, serves students well; and the course sequence is therefore best built hierarchically. An overview of placement, including the assumptions built into the traditional algorithmic model represented by Willingham (1974), is discussed at length in the Introduction to this special issue.

Social Consequences of Assessment

Validity. While the focus of this study is not an explication of validity in placement testing (see Messick, 1988), an analysis of the validity argument justifying placement (see Kane, 2006, 2013; Lederman, 2018; Slomp, 2016), or a presentation of validation evidence (see Elliot, 2015; Kelly-Riley & Elliot, 2014), it is important to note validation nevertheless provides a useful framework for a discussion of ethics, fairness, and justice in writing assessment. Messick (1989) argued for validity to be understood through the interpretations and uses of test scores, which brings attention to the ethical implications of validation and of assessment—the subject of the present study. Furthermore, he argued, “Validity judgments are value judgments” (Messick, 1989, p. 10, emphasis original) “that often trigger[] score-based actions and serve[] to link the construct measured to questions of applied practice and social policy” (p. 9); that is, test score use relates not only to one specific action but to an array of implications, judgments, and social consequences based on labels, connotations, actions, and subsequent lines of inquiry pursued. In placement, for instance, a test score is used to determine the first writing course a student is eligible for. It may also result in a label designation, institutional tracking, and/or subsequent academic opportunities offered or withheld. He argued the social consequences of test score use are so central to validation that, if adverse consequences exist, the test used may be invalid (Messick, 1989, p. 11), a claim Elliot (2015) has recently expanded on. Ultimately, focusing on interpretation and use of scores can humanize assessment, an argument Poe has made (Allen, 2016, p. 6), and encourage practitioners to consider how assessment affects equity of opportunity (Elliot, 2015).

Ethics and Fairness. While validity plays a significant role in informing this case study, emphasis is on evidence related to fairness, “defined as the identification of opportunity structures created through maximum construct representation. Constraint of the writing construct is to be tolerated only to the extent to which benefits are realized for the least advantaged” (Elliot, 2015, §1.3). Elliot (2015) drew on social justice theories to emphasize: If an assessment practice is biased, then we must identify “opportunity structures leading to the advancement of opportunity to learn” (§1.4).

A theory of ethics of writing assessment requires attention to how students are affected by our practices. In his articulation of the theory, Elliot (2016) demanded practitioners attend to writing assessment through the lens of morality (§3.4.1), a radical shift from the objectivity privileged in the 1970s and the efficiency-based models highlighted throughout the history of writing assessment. An ethical approach underscores the utter importance of assessment as a human endeavor.

An ethical and fair approach to writing assessment requires examination of the writing construct itself (see American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014, p. 215; Banks et al., 2018; Slomp, 2016). Writing assessment should mirror the curriculum it serves, and the curriculum should offer students the opportunity to develop their ideas and communicate their thinking in “their own patterns and varieties of language—the dialects of their nurture or whatever dialects in which they find their own identity and style” (Conference on College Composition and Communication [CCCC], 1974). This approach is further aligned with the new vision statement for the National Council of Teachers of English (NCTE, 2017):

NCTE and its members will apply the power of language and literacy to actively pursue justice and equity for all students and the educators who serve them. As the nation’s oldest organization of pre-K through graduate school literacy educators, NCTE has a rich history of deriving expertise and advocacy from its members’ professional research, practice, and knowledge. Today, we must more precisely align this expertise to advance access, power, agency, affiliation, and impact for all learners.

Enacting NCTE’s (2017) mission requires methods of assessment that fully represent the writing construct and methods of analysis that consider social consequences. As Slomp (2016) urged, the first step to understand the consequences of assessment is to disaggregate the data to examine how groups of students perform (disparate impact analysis is one way to do this; see Poe & Cogan, 2016; Poe, Elliot, Cogan, & Nurudeen, 2014). Doing so identifies the students “least advantaged” (Elliot, 2016) by the process and informs practitioners about how their assessment processes affect students. If any groups of students are “demonstrably disenfranchised from the process” (Elliot as cited in Allen, 2016, p. 1), we must revise methods and develop opportunity structures “to advance access, power, agency, affiliation, and impact for all learners” (NCTE, 2017), including traditionally underrepresented communities and those disproportionately negatively affected by local placement practices. Toth (2018) has labeled this line of inquiry “validation for social justice.”

This background establishes the standpoint context for the present case study.

Research Questions

Prairie State College (PSC) is like countless other schools that used a combination of the COMPASS exam and a locally-developed writing task to place students into their required composition courses. The English Department was dissatisfied with using COMPASS but continued doing so for the familiar reasons: It was time- and cost-efficient (Elliot, Deess, Rudniy, & Joshi, 2012), and it placed students “reasonably well” (Smith, 1992). Faculty perceived that students who placed into the lowest-level First-Year Writing (FYW) course typically needed the most hands-on literacy instruction. Few students in the course were invited to bump up to the next course in the sequence, and few acted upon that suggestion. While pass rates in the college-level FYW course had hovered around 51% since at least 2009, the Department saw this as a function of student failure, or of circumstances impeding students’ success, rather than as an error in placement. But, for a group of humanities faculty dedicated to the student body they serve and familiar with critiques of purchased tests, it seemed wrong to use COMPASS, even in part, to determine students’ writing abilities. The Department believed COMPASS had poor face validity and little relationship to the work of critical reading, writing, revision, and collaboration students are asked to do in their FYW courses. That belief, combined with PSC’s status as a Predominantly Black Institution (PBI) and an Emerging Hispanic Serving Institution (EHSI), as well as the extinction of COMPASS in 2016, offered the Department an opportunity to develop a new method of placement that places students at least as well as COMPASS did, that better represents the writing construct, and that works toward advancing justice and opportunity to learn (Banks et al., 2018).

The present study undertook the following research questions:

  1. What do archival data reveal about placement methods and their relationship to success for a diverse student population?
  2. What is the instrumental value of this study in terms of how an emphasis on fairness in assessment can enable practitioners to “better attend to the needs of the diversity of students in our classrooms” (Kelly-Riley & Whithaus, 2016)?

Through these questions, I sought to understand how placement affected the student body at PSC, to perform an analysis that holds “the achievement of justice and advancement of opportunity as equal aims of assessment” (Banks et al., 2018), and to consider how this study might add to the body of scholarship within writing studies on assessment at two-year colleges.

Description of Placement Process and Curriculum

Placement Decision-Making Process

Each new student at PSC is required to take a placement test in math, English (writing), and reading. In some instances, ACT/SAT scores place students directly into college-level courses, and AP scores can exempt students from English 101. The English Department has autonomy to determine appropriate placement and exit procedures for composition courses. From 2012-2016, it used COMPASS Reading scores with timed writing samples, based on locally-designed writing prompts, to determine students’ placement. Readers were members of the English Department; most were adjunct faculty. Until October 2016, this combination of COMPASS Reading/PSC writing score was used to determine students’ placement into FYW as well as whether students needed a supplemental, developmental reading course (RDG 098) to be taken in conjunction with the lowest-level developmental writing course (ENG 098). Developmental courses are non-credit bearing at PSC. Figure 1, below, shows the placement decision guidelines.

Figure 1

When the COMPASS Reading score was low enough, it nullified the writing score; that is, if students earned a reading score of 40-59, they would be placed into developmental reading and writing courses, regardless of how high their writing score was. If students scored below 40 on COMPASS Reading, they earned a “no placement” and were, instead, encouraged to enroll in non-credit literacy courses. A reading score of 60-100 meant students did not need to take RDG 098 and were eligible for ENG 099 (developmental writing) or ENG 101 (college-level writing), depending on their performance on the writing test.
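
To make these guidelines concrete, here is a minimal sketch of the decision logic described above. The reading-score bands follow the prose; the writing-score cutoff separating ENG 099 from ENG 101 appears only in Figure 1, so the value used here is purely illustrative, as are the function name and course labels.

```python
def place_student(reading_score: int, writing_score: int) -> str:
    """Sketch of the 2012-2016 PSC placement guidelines described above.

    Reading bands follow the prose; WRITING_CUTOFF is a hypothetical value,
    since the exact writing-score boundary appears only in Figure 1.
    """
    WRITING_CUTOFF = 70  # illustrative only; not the College's actual cutoff

    if reading_score < 40:
        # A very low reading score results in no placement at all.
        return "No placement: referred to non-credit literacy courses"
    if reading_score <= 59:
        # A reading score of 40-59 nullifies the writing score.
        return "RDG 098 (developmental reading) + ENG 098 (developmental writing)"
    # A reading score of 60-100: the writing score determines the course.
    if writing_score >= WRITING_CUTOFF:
        return "ENG 101 (college-level writing)"
    return "ENG 099 (developmental writing)"


if __name__ == "__main__":
    # A high writing score cannot overcome a low reading score under these rules.
    print(place_student(reading_score=55, writing_score=95))
    print(place_student(reading_score=75, writing_score=60))
```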

Composition Course Descriptions

Table 1 identifies the four courses into which students are placed.

Table 1

The lowest-level developmental English course, ENG 098, “Foundations of College Writing,” is designed to support students who have demonstrated weakness primarily in grammar, language, and usage. The course description for ENG 098 emphasizes knowledge of Standardized Edited American English (SEAE) (Inoue, 2014) conventions: Students will learn to avoid common errors with words and sentences and to combine correct sentences to produce clear, organized writing. On campus, the sense is that students who place into ENG 098 have weak literacy skills, are poor and reluctant readers, and have limited awareness of how writing works. ENG 099 focuses on English Language Arts, with emphasis on writing and reading. The course has had a number of iterations in the past few years and is currently offered mostly in a concurrent enrollment model to support students in proceeding to the gateway course (see Evans, 2018, for further discussion). ENG 101 is a writing process course that required an end-of-semester portfolio during the time period of this study.

Method

Through IRB-approved archival research at PSC, a medium-sized, public, suburban two-year college in the Midwest, this study examined five years of placement scores (2012-2016) to answer the research questions. To answer the first question—What do archival data reveal about placement methods and their relationship to success for a diverse student population?—the Institutional Research office provided anonymized student records that included a unique ID (not related to student ID number), grades in placed course, term GPA, and demographic categories, including sex and race/ethnicity (see Kelly-Riley, Elliot, & Rudniy, 2016). This information collection system is congruent with the Integrated Postsecondary Education Data System (IPEDS, 2016). Home language, Pell Grant, veteran, disabilities, and first-generation college status were not included in the data but would offer additional insight into the success of the local population of students. Sarah Klotz at Michigan State University, for instance, has also successfully included foster youth in disaggregation of institutional data (personal communication, June 27, 2018). Additionally, students who earned an FW grade were excluded from the study. These are students who are no longer active in the course and have not dropped; however, including them would offer insight into attrition in the courses.

I worked with a senior scholar to prepare, analyze, and disaggregate the data. We examined how students overall and by race/ethnicity and sex performed on the placement exam and in their placed courses, and how those performances correlated with their GPA. With assistance from a computer and data science specialist, we performed the survival analysis to determine how students who began in the lowest-level developmental course proceeded through the composition sequence. I then disaggregated the survival analysis to determine how student subgroups proceeded through the sequence.
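
A minimal sketch of this kind of disaggregation, assuming a pandas DataFrame of anonymized records with hypothetical column names and toy values; it illustrates the approach rather than the study’s actual analysis scripts.

```python
import pandas as pd

# Illustrative records standing in for the anonymized file provided by
# Institutional Research; column names and values are assumptions, not the actual schema.
records = pd.DataFrame({
    "student_id":     [1, 2, 3, 4, 5, 6],
    "course":         ["ENG 098", "ENG 098", "ENG 099", "ENG 099", "ENG 101", "ENG 101"],
    "grade":          ["C", "F", "B", "W", "A", "D"],
    "race_ethnicity": ["Black", "Black", "Hispanic/Latinx", "White", "Black", "White"],
    "sex":            ["F", "M", "F", "M", "F", "M"],
    "term_gpa":       [2.1, 0.8, 3.0, 1.5, 3.6, 1.9],
})

PASSING = {"A", "B", "C"}  # C or better is the prerequisite grade
records["passed"] = records["grade"].isin(PASSING)

# Pass rates disaggregated by course and race/ethnicity, then by course and sex.
print(records.groupby(["course", "race_ethnicity"])["passed"].mean())
print(records.groupby(["course", "sex"])["passed"].mean())
```

Grade-to-GPA correlations and the survival analysis described below follow the same pattern of working on subsets of these records.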

The data analysis offered empirical evidence from which to draw inferences in answer to the second research question—What is the instrumental value of this study in terms of how an emphasis on fairness in assessment can enable us to “better attend to the needs of the diversity of students in our classrooms?” (Kelly-Riley & Whithaus, 2016). As Slomp (2016) and colleagues called for, disaggregation of data is imperative, “so score interpretation and use can be clearly understood for all groups and each individual within those groups” (§ “A Role for Ethics”).

Results

Student Characteristics

In 2016, PSC’s student body (N = 4,699) was 54% Black/African American, 18% Hispanic/Latinx, and 21% White/Caucasian; female students represented 59% of the population; male students, 41% (IPEDS, 2016). Table 2 and Figure 2, below, show demographic information for PSC in general and in composition courses during the study period of 2012-2016.

Table 2

Figure 2

The sample included students enrolled in composition courses (ENG 098, ENG 099, ENG 101) from Spring 2012 through Summer 2016, for a total of 14 semesters of data. Of the 11,054 student records examined in FYW, 56% were female (slightly underrepresented from the total College population), and 44% were male (slightly overrepresented). Black students represented 68% of the sample (overrepresented from the College population), Hispanic/Latinx students represented 10% (underrepresented), and White students represented 16% (underrepresented).

In the sample, 1,127 students enrolled in ENG 098. In terms of sex, 49% were female, and 51% were male. In terms of race/ethnicity, 82% were Black, 6% were Hispanic/Latinx, and 6% were White. ENG 099 had 2,765 students enrolled, reflecting that placement scores directed more students into this course. In terms of sex, 55% were female, and 45% were male. In terms of race/ethnicity, 75% were Black, 10% were Hispanic/Latinx, and 10% were White. ENG 101 had 7,162 students enrolled, again reflecting how placement scores were used. In terms of sex, 57% were female and 43% were male. In terms of race/ethnicity, 63% were Black, 11% were Hispanic/Latinx, and 20% were White. The demographic makeup of ENG 101 most closely mirrored the population at PSC overall. Black students, however, were enrolled in developmental courses at a higher rate than Hispanic/Latinx and White students were.

Grade Distributions

Table 3, below, presents grade distribution in the sample overall and for each writing course. In the three courses overall, the most commonly-awarded grades were B (21%), C (21%), and W (20%).

Table 3

While grades are shown in Table 3, Figures 3, 4, and 5, below, offer a visual frequency representation of grade distribution in each writing course. Lower grades have been intentionally placed to the left.

English 098

In English 098, 1,127 students enrolled and were active. Figure 3 illustrates grade distribution.

Figure 3

The most commonly awarded grades were F (n = 328; 29%) and C (n = 321; 28%), evidence of bi-modality and in violation of Gaussian distribution (see Kelly-Riley et al., 2016). Slightly more students passed the course with an A, B, or C (n = 577, 51%) than earned a D, F, or W (n = 550, 49%). Additional analysis revealed that Black students passed at 48%; Hispanic/Latinx students passed at 78%; and White students passed at 67%. More female students (n = 315, 57%) than male students (n = 262, 46%) passed the course.

English 099

In English 099, 2,765 students were active. Figure 4 illustrates the grade distribution.

Figure 4

The most commonly awarded grades were C (n = 674; 24%) and B (n = 592; 21%). While there is no evidence of bi-modality, the sharp increase in grades of F over grades of D violates the Gaussian distribution. More students passed the course with an A, B, or C (n = 1,750; 63%) than earned a D, F, or W (n = 1,015; 37%). Additional analysis revealed Black students passed at 60% (n = 1,234); Hispanic/Latinx students passed at 73% (n = 198); and White students passed at 78% (n = 220). More female students (n = 1,010; 66%) than male students (n = 739; 60%) passed the course.

English 101

In English 101, 7,162 students were active. Figure 5 illustrates the grade distribution.

Figure 5

The most commonly awarded grades were W (n = 1,804; 25%) and B (n = 1,510; 21%). A, C, and F were all awarded at roughly the same frequency (17%; n = 1,269 for A, n = 1,234 for C, n = 1,218 for F). Very few students were awarded a D grade (n = 127; 2%). Again, there is violation of the Gaussian distribution. More students passed the course with an A, B, or C (n = 4,013; 56%) than earned a D, F, or W (n = 3,149; 44%). Additional analysis revealed that Black students passed at 50% (n = 2,235); Hispanic/Latinx students passed at 64% (n = 493); and White students passed at 72% (n = 1,048). More female students (n = 2,341; 57%) than male students (n = 1,668; 54%) passed the course.

Correlations

Tables 4 and 5 present correlations relating to grades and student GPA for the overall population and for all sub-groups. The correlation ranges used in analyses and discussions are as follows: high positive correlations = 1.0 to 0.70; medium positive correlations = 0.69 to 0.30; and low positive correlations = 0.29 to 0.00 (see Kelly-Riley et al., 2016, p. 102).

Table 4

Table 5

The overall sample shown in Table 4 and all sub-groups shown in Table 5 demonstrate that course grades reached medium-to-high, statistically significant correlations with student GPA. This important finding provides perspective on the relationship of course grades to concurrent college measures.

The overall sample in Table 4 showed moderate, statistically significant correlations between course grade and GPA. Correlations, however, were notably lower in ENG 098 (0.38), rose in ENG 099 (0.51), and reached a high positive correlation in ENG 101 (0.72). Table 5 provides a more granular analysis. Statistically significant correlations were of medium strength in ENG 098 (ranging from 0.33 for Black students to 0.44 for White students) and ENG 099 (ranging from 0.48 for both male and Black students to 0.53 for female students). High, statistically significant correlations were reached in ENG 101 (ranging from 0.67 for White students to 0.72 for female students).
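
As an illustration of how a grade-to-GPA correlation could be computed and labeled with the bands defined above, here is a minimal sketch; the numeric grade coding, function names, and column names (matching the earlier sketch) are assumptions rather than the study’s actual procedure.

```python
import pandas as pd

# Assumed numeric coding of letter grades; W grades are simply excluded here.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def label_correlation(r):
    """Label a coefficient with the bands used above (Kelly-Riley et al., 2016, p. 102)."""
    r = abs(r)
    if r >= 0.70:
        return "high"
    if r >= 0.30:
        return "medium"
    return "low"

def grade_gpa_correlation(records, course):
    """Pearson correlation between course grade and term GPA for one course."""
    sub = records[records["course"] == course].copy()
    sub["grade_points"] = sub["grade"].map(GRADE_POINTS)
    sub = sub.dropna(subset=["grade_points", "term_gpa"])
    r = sub["grade_points"].corr(sub["term_gpa"])  # Pearson by default
    return r, label_correlation(r)
```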

As shown in Table 5, comparisons of course grade and GPA show statistically significant differences between sexes and among races/ethnicities in every course. Overall, female students scored higher than male students at statistically significant levels. Black students scored lower than both Hispanic/Latinx and White students at statistically significant levels. No significant difference was noted between Hispanic/Latinx and White students.

Statistically Significant Difference Analysis

Table 6 reveals statistically significant results in both sex and race/ethnicity.

Table 6

With the exception of GPA in ENG 098, statistically significant differences were observed as women recorded higher grades and GPA in the composition courses. In terms of race/ethnicity analysis, Black students performed more poorly, at statistically significant levels, than White and Hispanic/Latinx students in both coursework and GPA in each course. Hispanic/Latinx students performed more poorly, at statistically significant levels, than White students in terms of GPA in ENG 099 and grades and GPA in ENG 101.
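
The article does not name the test statistic behind Table 6; as one hedged illustration of how such a sub-group comparison could be run, the sketch below applies Welch’s independent-samples t-test to grade points by sex for a single course (column names and grade coding as in the earlier sketches).

```python
import pandas as pd
from scipy import stats

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}  # assumed coding, as above

def compare_grades_by_sex(records, course):
    """Welch's t-test on grade points for female vs. male students in one course."""
    sub = records[records["course"] == course].copy()
    sub["grade_points"] = sub["grade"].map(GRADE_POINTS)
    female = sub.loc[sub["sex"] == "F", "grade_points"].dropna()
    male = sub.loc[sub["sex"] == "M", "grade_points"].dropna()
    # equal_var=False gives Welch's test, which does not assume equal group variances.
    t_stat, p_value = stats.ttest_ind(female, male, equal_var=False)
    return t_stat, p_value
```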

Survival Analysis

Performing survival analysis reveals information about the impact of the lowest-level developmental writing course and the placement procedure. Survival analysis “is a collection of statistical procedures for data analysis for which the outcome variable of interest is time until an event occurs” (Kleinbaum & Klein, 2012). Traditional survival studies can be used to examine, for example, how long a population stays alive or stays out of prison (Kleinbaum & Klein, 2012). Survival analysis is useful here to determine how one population of students—those enrolled in the lowest-level developmental writing course—survives in the writing sequence. That is, how many students who begin in the lowest level pass the college-level course?

To perform the analysis, we began with all students who took ENG 098 and determined who passed the course with a C or better, the prerequisite for the next course. Then, we reviewed the students who began in ENG 098 and continued on to ENG 099, determined who passed with a C or better, and so on through the three-course sequence. We omitted ENG 102 (Composition II) from this analysis because not all students are required to take it, whereas ENG 099 is a prerequisite for most courses across the college and ENG 101 is a requirement for all associate degrees.
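
A minimal sketch of this sequence-survival computation, assuming one row per student per course attempt and the hypothetical column names used in the earlier sketches; it mirrors the steps described above rather than reproducing the study’s actual code. Disaggregating by race/ethnicity or sex amounts to running the same function on subsets of the records.

```python
import pandas as pd

PASSING = {"A", "B", "C"}                      # C or better is the prerequisite grade
SEQUENCE = ["ENG 098", "ENG 099", "ENG 101"]   # ENG 102 omitted, as noted above

def sequence_survival(records):
    """Track the cohort that began in ENG 098 through the three-course sequence."""
    began = set(records.loc[records["course"] == SEQUENCE[0], "student_id"])
    cohort = began
    for course in SEQUENCE:
        attempts = records[(records["course"] == course)
                           & (records["student_id"].isin(cohort))]
        enrolled = set(attempts["student_id"])
        passed = set(attempts.loc[attempts["grade"].isin(PASSING), "student_id"])
        print(f"{course}: enrolled {len(enrolled)}, passed with C or better {len(passed)}")
        cohort = passed  # only passers are eligible to continue; non-enrollers fall out at the next step
    # Survival rate: share of the original ENG 098 cohort that ultimately passed ENG 101.
    return len(cohort) / len(began)
```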

Of the students who began in ENG 098 (n = 1,029), 56% passed (n = 573). Of those students, 192 did not enroll in ENG 099—a 34% loss. Of the 66% who continued to ENG 099 (n = 381), 72% (n = 294) passed the course. Of those students, 81 did not enroll in ENG 101—a 28% loss. The survival rate of students who began in the lowest-level writing course and succeeded in the college-level course was 12%; in other words, of the students who began ENG 098, 12% ultimately passed ENG 101.

Disaggregated survival analysis. The disaggregated survival analysis offers further information to understand how the placement procedure affected groups of students, shown in Table 7.

Table 7

Black students. ENG 098 (n = 1,029) was primarily populated by Black students (n = 834), of whom 53% passed the class (n = 441). Of these, 67% (n = 294) started ENG 099—a 33% loss. Of those students, 74% (n = 217) passed ENG 099. Of the students who passed ENG 099, 69% (n = 150) began ENG 101—a 31% loss. Of those students, 53% (n = 79) passed ENG 101. Overall, 9% of the Black students who began in ENG 098 passed ENG 101.

Hispanic/Latinx students. Of the Hispanic/Latinx students who began in ENG 098 (n = 62), 76% (n = 47) passed the class. 60% (n = 28) continued to ENG 099—a 40% loss. Of those who continued to ENG 099, 89% (n = 25) passed that course. From there, 72% (n = 18) continued to ENG 101—a 28% loss. 56% (n = 10) passed. Overall, 16% (n = 10) of the Hispanic/Latinx students who began in ENG 098 passed ENG 101.

White students. Of the White students who began in ENG 098 (n = 65), 72% (n = 47) passed the course. 64% (n = 30) of these students continued to ENG 099—a 36% loss. Of those who continued, 97% (n = 29) passed; of whom 76% (n = 22) then continued to ENG 101—a 24% loss. 82% (n = 18) passed. Overall, 28% (n = 18) of the White students who began in ENG 098 passed ENG 101.

Female students. For female students in the sample (n = 520), 60% (n = 314) who began ENG 098 passed it; 68% (n = 215) of those students began ENG 099—a 32% loss. Of those students, 80% (n = 171) passed the class. From there, 74% (n = 127) began ENG 101—a 26% loss. 57% (n = 72) passed it. Overall, 14% (n = 72) of the female students who began ENG 098 passed ENG 101.

Male students. Of the male students who began ENG 098 (n = 509), 51% (n = 259) passed the course; 64% (n = 166) then began ENG 099—a 36% loss. Of those who began ENG 099, 74% (n = 123) passed it. From there, 70% (n = 86) began ENG 101—a 30% loss. 57% (n = 49) passed it. Overall, 10% (n = 49) of the male students who began in ENG 098 passed ENG 101.

Discussion

Archival Data, Placement Methods, Success, and Diverse Students (Research Question 1)

The archival data from this study reveal important patterns for the local institution to consider and offer additional information to the national body of scholarship regarding student success in post-secondary education.

The correlations in the sample were moderate overall, which is expected with a heterogeneous population. It appears that the correlations became stronger as students progressed through the writing course sequence. When the data are disaggregated, however, and reviewed alongside the survival analysis, it becomes clear the correlations became stronger when the student population became more homogenous.

Black students. Black students were least successful in the entire composition course sequence, at statistically significant levels, as compared to their peers. In ENG 098, the gap in success rates was 30 percentage points compared to Hispanic/Latinx students and 19 percentage points compared to White students—a result that is particularly concerning because Black students were overrepresented in the course. In the sample, more than 80% of students in ENG 098 were Black. The survival analysis offers additional insight: Of the Black students who did pass ENG 098, one third did not continue to the second developmental course.

In ENG 099, Black students’ success differed at statistically significant levels as compared to their Hispanic/Latinx peers (who were 13 percentage points more successful) and White peers (18 points more successful). And, again, Black students experienced around a one-third loss to the gateway composition course, ENG 101. In ENG 101, Black students’ success differed at statistically significant levels: a gap of 14 percentage points compared to their Hispanic/Latinx peers and 22 points compared to their White peers.

The survival analysis starkly shows how few Black students persist through three composition courses: Only 79 students (9%) successfully completed the credit-bearing course in the study.

Hispanic/Latinx students. Of the three race/ethnicity groups examined in this study, Hispanic/Latinx students were most successful in ENG 098, where 76% passed the course. Despite this course-level success, however, they experienced the highest drop in persistence to the second developmental course; in fact, the 40% loss experienced by Hispanic/Latinx students from ENG 098 to ENG 099 was the highest among any demographic group across the sequence. In ENG 101, Hispanic/Latinx students’ success rate (56%) was lower, at statistically significant levels, than their White peers’ (82%).

White students. White students were most successful, at statistically significant levels, in ENG 101 as compared to their Black and Hispanic/Latinx peers. White students demonstrated the highest survival rates in the sequence (28%, 12 percentage points higher than Hispanic/Latinx students and 19 points higher than Black students). They also demonstrated the highest persistence from ENG 099 to ENG 101, where around one quarter of students were lost (compared to close to one third for both Black and Hispanic/Latinx students).

Patterns of loss. The archival data from this study confirm patterns of loss identified throughout the scholarship. After three composition courses, only 12% of students passed what is identified as the gateway composition course. Patterns observed in the current study also appear, for example, in recent research conducted by Complete College America, in which Zaback, Carlson, Laderman, and Mann (2016) found,

Success in the first college-level gateway course for those students who complete the remedial sequence also varies dramatically by race. In particular, far fewer Black students go on to complete the gateway course (or college level courses associated with their remedial needs) within two years, while Asian students are highly likely to complete the gateway course. The gaps are less pronounced for white and Hispanic students. Across both race and subject, far too few students of nearly any background complete their associated gateway courses within two years of entry, which significantly impacts their ability to complete a degree on time. (p. 8)

In particular, Table 7 demonstrates a fine-grain pattern of loss throughout the writing sequence across demographic categories. Arrows on the table indicate flashpoints where a substantial number of students were lost. The first arrow is located in ENG 098, where nearly half of the students who began the course (n = 1,029) did not pass it (573 passed); Black students and male students were most negatively affected. The second arrow is between ENG 098 and ENG 099, where only 66% of those who passed continued. Finally, the third arrow is located in ENG 101, where close to half of the students who began the course (n = 213) did not pass it (121 passed).

One of the assumptions embedded in placement and writing course sequences is that most students who pass a course will continue to the next (see the Introduction of this special issue for further discussion of Willingham’s sequential model). The archival data from PSC disrupt that assumption. This study offers evidence to support Bailey et al.’s (2015) argument in their discussion of an unpublished study of students enrolled in Achieving the Dream programs:

Many students who fail to complete their sequences do so without failing or withdrawing from a remedial course. They either never show up for their first remedial course…or do not return after completing one of the lower courses in the sequence.... Additionally, some students complete the sequence but do not enroll in a college-level course. (p. 121)

This is simply more loss than the College—than any college—can bear, particularly in an era of decreasing enrollment, increased scrutiny of developmental education programs, and national completion agendas.

Attending to the Needs of Diverse Students (Research Question 2)

Placement tests are used to determine the best-fit course for students, the course that will challenge and encourage students as they build upon their current skills (see Kane, 2006). If the placement method is successful, it is therefore reasonable to presume most students will earn at least a C in the placed course—even with the caveat that grades are an unreliable measure because they are affected by a number of factors throughout a semester. The number of students who failed ENG 098 at PSC suggests the combined COMPASS/local test may have had limited predictive ability, the same limitation that led ACT to eliminate the COMPASS exam (see Fain, 2015). If that were so, the placement procedure did not place students “reasonably well,” given that such a significant number of students earned DFW grades (n = 4,714; 42%). The failure rate could also suggest misalignment between what the test measured and what students encountered in the curriculum. If so, it was a poor method for determining students’ preparedness for the FYW curriculum at PSC. The high failure rate for the lowest-level developmental writing course is troubling on its own. Combined with the demographic information—that more than 80% of the class is Black—and the survival analysis—that 91% of Black students who began in ENG 098 did not pass ENG 101—it is devastating.

This study suggests it remains difficult, however, to definitively identify whether a placement procedure has been successful and, in so doing, raises the question of whether such a procedure is necessary. An accurate placement test at PSC, for instance, would need to cover knowledge of conventions (ENG 098), reading and writing (ENG 099), and writing processes (ENG 101). Only a test covering these three areas could claim construct ties to the curriculum. If the placement test offered an incomplete representation of the writing curriculum, then of course it could only offer an approximate suggestion of where to place students.

Two-year colleges are sites of innovation, and, like our two-year college peers, the English Department at PSC has implemented a number of innovations in an effort to improve student success in their required writing courses. Over the years, the Department has changed methods of course, program, exit, and placement assessments; changed curricula; changed course structure; and focused on professional development. Despite these interventions—based on decades of scholarship in writing studies—students have not been very successful on the placement procedure or throughout the composition sequence (see Bossone, 1967 for a comparison). The end of COMPASS provided the Department with an opportunity to revamp its placement procedure. It is currently using a locally-designed, reading-and-writing integrated procedure and is taking part in regional conversations about the viability of alternative placement processes.

The archival data in this study reveal devastating loss in the writing course sequence. In so doing, the study begins to address a gap in writing studies scholarship by showing what has happened at one minority-serving institution; however, it cannot offer the reasons why some students succeed and some fail. A placement score and a course grade, even a course grade analyzed alongside term GPA, offer an incomplete picture of a student’s experience. The findings raise a number of questions about why students are unsuccessful in the writing sequence, especially when the Department is trying so hard to do it right. For instance, numbers decline for each student group who passed a course and continued to the next—what happened? The survival rate might offer further support for the remediation critique offered by Scott-Clayton (2012) and others (e.g., Community College Research Center, 2012): It is possible few students succeeded throughout the writing sequence because they were under-placed and over-remediated. The higher correlations between ENG 101 grades and students’ overall GPA, together with the low survival rates, could support Morante’s (1987) claim that students with the most extenuating circumstances are most likely to place into (and, perhaps, be unsuccessful in) developmental coursework, and support the assertion that students’ failure may also be a result of their material conditions (see Sullivan, 2008). Perhaps those who passed a course did not proceed to the next because the opportunity costs were too great—a reality many in developmental writing studies have acknowledged, which is why they are committed to reducing the amount of time a student spends in the writing sequence (see Evans, 2018).

Furthermore, there is an assumed learning sequence in the curriculum at PSC, as at countless institutions, mirroring Willingham’s (1974) model (see Introduction) and challenged by current scholarship. That is, we know students do not need to master knowledge of SEAE conventions before they can master writing processes—in fact, facility with conventions is just one aspect (among eight others, according to a recent Institute of Education Sciences study [Graham et al., 2016, p. 3]) of a robust writing construct. It is possible the placement procedure worked reasonably well, but the sequence of courses and curriculum has not served students well. Students from traditionally underserved communities do, indeed, fill developmental classes at PSC; however, as Inoue (2014) and others have suggested, failure in writing assessments and on placement tests may very well be “a result of ‘social inequities, not personal failings’ (Otte & Mlynarczyk, 2010, p. 8)”; “the inherent racism in basic writing programs and concepts” (Fox 1993, 1999; Jones, 1993); and “the relationship between the kinds of language used by students (often marked by culture, class, gender, and race) and dominant, White, middle-class, academic discourse (Horner & Lu, 1999)” (p. 330). Ultimately, traditional placement tests so drastically under-represent the writing construct, and so consistently privilege White, middle-class language practices, that they become tests of test-taking. Students who do well are students who have had the resources and opportunities to become familiar with the testing apparatus and who speak a language variety similar to the one being tested.

The difficulty in offering analysis is the crux of the placement dilemma: Placement is used to identify the students who are most at risk of failing the college-level writing course and to support their learning in an effort to prevent their failure. It does not necessarily identify what to do when students fail their placed course, or fail out of the writing sequence, or pass their placed course but do not enroll in the next, because those students do not follow the sequential placement model. In the traditional placement model, completion rates indicate how prepared students are to embark on the work of college. When Black and Hispanic/Latinx students underperform on placement measures or in writing courses, it is seen as a result of their K-12 education (see the Chicago Public Schools’ recent civil rights claim against the state of Illinois); when poor students do so, it is seen as a result of the demands on their time. To some, when students underperform, it is evidence they do not belong in the institution; at two-year colleges, they may be re-routed to adult literacy courses, but for many students, this is where their path to higher education ends. In the traditional model, students’ failure is seen as their failure, not a failure of the system or curriculum, a point Poe and Inoue (2016) made well in their discussion of assessment for justice. They explained that, in traditional assessment systems,

Decisions are objectified, leaving the outcomes to individuals who experience the personal responsibility of assessment—responsibility that rests on students to wind their way through courses and additional assessment mazes. Here is where we can apply a lesson from Iris Young: The rhetoric that accompanies assessment—like poverty—“encourages an isolated, atomistic way of thinking about individuals” (23). Like the personal responsibility discourse of poverty that attempts to “isolate the deviant and render them particularly blameworthy for their condition,” assessment practices often isolate “failure” (23). In doing so, “the application of paternalistic and punitive policies” becomes justified (23). (Poe & Inoue, 2016, p. 122)

The field has used methods that result in traditionally underserved communities failing on placement tests (placing into developmental coursework), failing the placed course, and failing to proceed in the writing sequence. These failures have been used as evidence of students’ weak writing abilities, lack of preparation for college-level work, and unfitness for higher education.

In the present study, only 12% of students who placed into the lowest-level developmental course passed the college-level course, despite taking two courses designed to help them do just that. Almost all students enrolled in ENG 098 were Black, and 91% of the Black students who began there never passed the college-level course. One argument, based on the traditional model, is that the students who failed the lowest-level course are not prepared for college and therefore do not belong in college. That argument is unacceptable because it rests on a judgment that may or may not be supported by the construct model at play or by the observed patterns in the data. It is just as likely that the placement test does not capture the curriculum at the local institution—and, hence, results in over-remediation—as it is that the students are not prepared for the curriculum. Even if the model were robust, these results must prompt action. The aim of education is advancement, not reifying disadvantage and replicating racist social structures. We simply must do better.

Conclusion

In his discussion of the ethics and science of assessment, Messick (1989) argued, “We must inquire whether the potential and actual social consequences of test interpretation and use are not only supportive of the intended testing purposes, but at the same time are consistent with other social values” (p. 8). Most of us involved in higher education—particularly at two-year colleges, open-access institutions, and new majority-serving institutions—believe in the mission of advancing equity through education. Many of our assessment practices, however, work directly against that mission and, instead, indicate we do not believe our student bodies belong in higher education. Placement practices that purport to honor students’ individual instructional needs have proven time and time again to fail many students, and those students do not continue in their college education.

The hope embedded in writing assessment is that students, instructors, administrators, and other stakeholders can improve teaching and learning through this expression of our values about writing (e.g., Broad, 2003; Huot, 2002). All too often, however, it has been a method of convenience that upholds racist and classist social structures. It may have taken the end of COMPASS for many to examine how placement has been used to persist in penalizing historically underserved communities; however, we can use this moment as an opportunity to develop more equitable and just practices.

A placement method that is more successful—that better serves students—might not exist. If placement cannot predict course-level or -sequence success, or if placement serves to penalize students outside the White middle-class mainstream, then it is worth radically reconsidering or eliminating the apparatus (see Haswell & Elliot, 2017). I am drawn to the argument that a successful placement method would have each student experience the gateway course (see Belfield, 2014). I believe it is worth exploring the implications of the argument that an admitted student is an already-qualified student, even in open-access institutions. In practice, an institution could forego placement altogether, enroll each student in the gateway course, and dramatically redesign supports for students and instructors (further discussion is presented in Poe, Nastal, & Elliot’s Reflection of this special issue).

Ultimately, teacher-scholar-practitioners must study archival data and track student success to determine patterns of placement and progression, with special attention to sub-group analysis, before making programmatic decisions. At this moment, after the end of COMPASS and in the ethical turn in writing assessment studies, I believe those of us at two-year colleges, open-access institutions, and universities alike can radically reconceive what placement should be to advance opportunity to learn for all students.

Author Note

Jessica Nastal (jnastal@prairiestate.edu) is Associate Professor of English at Prairie State College. She serves on the Illinois Community College Board Placement Standards Workgroup. She is Developmental Editor for The Journal of Writing Analytics and an editorial board member for Composition Studies. In 2013-2014, she was named a “Person Who Has Helped You The Most” by adult and transfer students at University of Wisconsin-Milwaukee, and she was awarded the Distinguished Dissertation Fellowship for (Re)Envisioning Placement for 21st Century Writing Programs. Portions of this study drew on her dissertation.

Acknowledgements

This project was completed with the support of Marie Hansel, Vice President of Academic Affairs; Elighie Wilson, Dean of Liberal Arts and Social Sciences; the English Department; and the Institutional Research office at Prairie State College. Alex Rudniy, University of Scranton, offered assistance with survival analysis. Christie Toth, my fellow co-authors in this special issue, Mya Poe, and three anonymous Journal of Writing Assessment reviewers provided insightful feedback throughout the preparation of this manuscript. Thank you to Diane Kelly-Riley and Carl Whithaus, JWA editors, for their support of this project and for their work in advancing fairness in writing assessment. Special thanks are reserved for Norbert Elliot, who, in addition to establishing a theory of ethics for writing assessment, provided the data analysis and has been with me every step of the way.


References

Accelerated Learning Program. (n.d.). What is ALP? Retrieved from http://alp-deved.org/what-is-alp-exactly/

Adams, P., Gearhart, S., Miller, R., & Roberts, A. (2009). The Accelerated Learning Program: Throwing open the gates. Journal of Basic Writing, 28(2), 50-69.

Allen, N. (2016). The ethics of writing assessments: Moving from exclusion to opportunity. Council Chronicle, 25(3), 6-9.

American Association of Community Colleges. (2017). About community colleges. Retrieved from http://www.aacc.nche.edu/AboutCC/Pages/default.aspx

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press.

Banks, W. P., Burns, M. S., Caswell, N. I., Cream, R., Dougherty, T. R., Elliot, N., … Warwick, N. (2018). The braid of writing assessment, social justice, and the advancement of opportunity: Eighteen assertions on writing assessment with commentary. In M. Poe, A. B. Inoue, & N. Elliot (Eds.), Writing assessment, social justice, and the advancement of opportunity. Retrieved from https://wac.colostate.edu/docs/books/assessment/braid.pdf

Belfield, C. R. (2014). Improving assessment and placement at your college: A tool for institutional researchers. New York, NY: Columbia University, Teachers College, Community College Research Center. Retrieved from https://ccrc.tc.columbia.edu/media/k2/attachments/improving-assessment-placement-institutional-research.pdf

Belfield, C. R., & Crosta, P. M. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). New York, NY: Columbia University, Teachers College, Community College Research Center.

Bereiter, C. (2003). Foreword. In M. D. Shermis & J. C. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. vii-x). Mahwah, NJ: Lawrence Erlbaum.

Blake, M. F., MacArthur, C. A., Mrkich, S., Philippakos, Z. A., & Sancak-Marusa, I. (2016). Self-regulated strategy instruction in developmental writing courses: How to help basic writers become independent writers. Teaching English in the Two-Year College, 44(2), 158-175.

Bossone, R. M. (1967). Remedial English in junior colleges: An unresolved problem. College Composition and Communication, 18(2), 88-93.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Chen, X. (2016). Remedial coursetaking at U.S. public 2- and 4-year institutions: Scope, experiences, and outcomes (NCES 2016-405). U.S. Department of Education. Washington, DC: National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch

Community College Research Center. (2012, February 28). Thousands of community college students misplaced into remedial classes, new studies suggest. Retrieved from http://ccrc.tc.columbia.edu/pressreleases/thousands_of_community_college_students_misplaced_into_remedial_classes_new_studies_suggest_1036.html

Complete College America. (2012). Remediation: Higher education’s bridge to nowhere. Washington, DC.

Complete College America. (2017). Corequisite education: Spanning the completion divide. Washington, DC. Retrieved from http://completecollege.org/spanningthedivide/#home

Conference on College Composition and Communication. (1974). Students’ right to their own language. College Composition and Communication, 25(3), 1-18.

Crowley, S. (1996). Response to Edward M. White. Journal of Basic Writing, 15(1), 88-91.

Elbow, P. (2012). Good enough evaluation: When is it feasible and when is evaluation not worth having? In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 301-323). New York: Hampton Press.

Elliot, N. (2015). Validation: The pursuit. College Composition and Communication, 66(4), 668-687.

Elliot, N. (2016). A theory of ethics for writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=98

Elliot, N., Deess, P., Rudniy, A., & Joshi, K. (2012). Placement of students into first-year writing courses. Research in the Teaching of English, 46(3), 285-313.

Evans, J. (2018). To live with it: Assessing an accelerated basic writing program from the perspective of teachers. The Basic Writing e-Journal, 14(1). Retrieved from https://bwe.ccny.cuny.edu/Evans.htm

Faigley, L., Cherry, R. D., Jolliffe, D. A., & Skinner, A. M. (1985). Assessing writers’ knowledge and processes of composing. Norwood, NJ: Ablex.

Fain, P. (2015, June 18). Finding a new COMPASS. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2015/06/18/act-drops-popular-compass-placement-test-acknowledging-its-predictive-limits

Gere, A. R., Aull, L., Perales-Escudero, M. D., Lancaster, Z., & Vander Lei, E. (2013). Local assessment: Using genre analysis to validate directed self-placement. College Composition and Communication, 64(4), 605-633.

Gere, A. R., Aull, L., Green, T., & Porter, A. (2010). Assessing the validity of directed self-placement at a large university. Assessing Writing, 15(3), 154-176.

Graham, S., Bruch, J., Fitzgerald, J., Friedrich, L., Furgeson, J., Greene, K., … Smither Wulsin, C. (2016). Teaching secondary students to write effectively (NCEE 2017-4002). Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education. Retrieved from the NCEE website: http://whatworks.ed.gov

Haswell, R. (2004). Post-secondary entrance writing placement: A brief synopsis of research. Retrieved from http://comppile.org/profresources/writingplacementresearch.htm

Haswell, R. (2005). Post-secondary entrance writing placement. Retrieved from http://comppile.org/profresources/placement.htm

Haswell, R., & Elliot, N. (2017). Innovation and the California State University and Colleges English Equivalency Examination, 1973-1981: An organizational perspective. Journal of Writing Assessment, 10(1). Retrieved from http://www.journalofwritingassessment.org/article.php?article=118

Herrington, A., & Moran, C. (2001). What happens when machines read our students’ writing? College English, 63(4), 480-499.

Herrington, A., & Moran, C. (2012). Writing to a machine is not writing at all. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 219-232). New York: Hampton Press.

Huddleston, E. M. (1954). Measurement of writing ability at the college entrance level: Objective vs. subjective testing techniques. Journal of Experimental Education, 22, 165-213.

Huot, B. (2002). (Re)articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.

Inoue, A. B. (2014). Theorizing failure in US writing assessments. Research in the Teaching of English, 48(3), 330-352.

Integrated Postsecondary Education Data System. (2016). Prairie State College. National Center for Education Statistics. Retrieved from http://nces.ed.gov/collegenavigator/?q=prairie+state+college&s=all&id=148007

Isaacs, E. J. (2018). Writing at the State U: Instruction and administration at 106 comprehensive universities. Louisville, CO: Utah State University Press.

Jones, E. (2008). Self-placement at a distance: Challenge and opportunities. WPA: Writing Program Administration, 32(1), 57-75.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17-64). Westport, CT: American Council on Education and Praeger.

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1-73.

Kelly-Riley, D., & Elliot, N. (2014). The WPA Outcomes Statement, validation, and the pursuit of localism. Assessing Writing, 21, 89-103.

Kelly-Riley, D., Elliot, N., & Rudniy, A. (2016). An empirical framework for eportfolio assessment. International Journal of ePortfolio, 6(2), 95-116. Retrieved from http://www.theijep.com/pdf/IJEP224.pdf

Kelly-Riley, D., & Whithaus, C. (2016). Introduction to a special issue on a theory of ethics for writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=99

Ketai, R. L. (2012). Race, remediation, and readiness for college writing: Reassessing the ‘self’ in directed self-placement. In A. B. Inoue & M. Poe (Eds.), Race and writing assessment (pp. 141-154). New York, NY: Peter Lang.

Kleinbaum, D. G., & Klein, M. (2012). Survival analysis. New York, NY: Springer.

Lederman, J. (2018). Writing assessment validity: Adapting Kane's argument-based validation approach to the assessment of writing in the post-process era. Journal of Writing Assessment, 11(1). Retrieved from http://www.journalofwritingassessment.org/article.php?article=128

Merton, R. K. (1938). Social structure and anomie. American Sociological Review, 3, 672-682.

Merton, R. K. (1996). Opportunity structure: The emergence, diffusion and differentiation of a sociological concept, 1930s–1950. In F. Adler & W. S. Laufer (Eds.), The legacy of anomie theory: Advances in criminological theory (pp. 3-78). New Brunswick, NJ: Transaction Publishers.

Messick, S. J. (1988). Validity. In R. L. Linn (Ed.), Educational measurement (pp. 13-104). New York: Macmillan.

Messick, S. J. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5-11.

Morante, E. A. (1987). A primer on placement testing. New Directions for Community Colleges, 59, 55-63.

National Council of Teachers of English. (2017). NCTE vision statement. Retrieved from http://www.ncte.org/mission/vision

Peckham, I. (2009). Online placement in first-year writing. College Composition and Communication, 60(3), 517-540.

Peckham, I. (2010). Online challenge versus offline ACT. College Composition and Communication, 61(4), 719-745.

Perelman, L. (2012). Mass-market writing assessments as bullshit. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 425-437). New York: Hampton Press.

Poe, M., & Cogan, J. (2016). Civil rights and writing assessment: Using the disparate impact approach as fairness methodology to evaluate social impact. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=97

Poe, M., Elliot, N., Cogan, J., & Nurudeen, T. (2014). The legal and the local: Using disparate impact analysis to understand the consequences of writing assessment. College Composition and Communication, 65(5), 588-611.

Poe, M., & Inoue, A. B. (2016). Social justice and writing assessment: An idea whose time has come [Guest editor introduction to special issue on writing assessment and social justice]. College English, 79(2), 115-122.

Prairie State College. (2014). Catalog. Retrieved from http://prairiestate.edu/apply-reg-pay/how-to-enroll/catalogs/index.aspx#Catalog2014_16

Royer, D. J., & Gilles, R. (2003). Directed self-placement: Principles and practices. Cresskill, NJ: Hampton Press.

Schendel, E., & O’Neill, P. (1999). Exploring the theories and consequences of self-placement through ethical inquiry. Assessing Writing, 6(2), 199-227.

Schmitz, C. C., & delMas, R. C. (1991). Determining the validity of placement exams for developmental college curricula. Applied Measurement in Education, 4(1), 37-52.

Scott-Clayton, J. (2012). Do high stakes placement exams predict college success? (CCRC Working Paper No. 41). New York, NY: Community College Research Center. Retrieved from http://ccrc.tc.columbia.edu/publications/high-stakes-placement-exams-predict.html

Slomp, D. (2016). Ethical considerations and writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://www.journalofwritingassessment.org/article.php?article=94

Smith, W. L. (1992). The importance of teacher knowledge in college composition placement testing. In J. R. Hayes (Ed.), Reading empirical research studies: The rhetoric of research (pp. 289-316). Norwood, NJ: Ablex.

Smith, W. L. (1993). Assessing the reliability and adequacy of placement using holistic scoring of essays as a college composition placement test. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 142-205). Cresskill, NJ: Hampton Press.

Stein, Z. (2016). Social justice and educational measurement: John Rawls, the history of testing, and the future of education. London: Routledge.

Sullivan, P. (2008). Measuring “success” at open admissions institutions: Thinking carefully about this complex question. College English, 70(6), 618-632.

Toth, C. (2018). Directed self-placement at “democracy’s open door”: Writing placement and social justice in community colleges. In M. Poe, A. B. Inoue, & N. Elliot (Eds.), Writing assessment, social justice, and the advancement of opportunity. Fort Collins, CO: The WAC Clearinghouse and University Press of Colorado. Retrieved from https://wac.colostate.edu/docs/books/assessment/chapter4.pdf

Trachsel, M. (1992). Institutionalizing literacy: The historical role of college entrance examinations in English. Carbondale, IL: Southern Illinois University Press.

White, E. M. (1995). The importance of placement and basic studies. Journal of Basic Writing, 14(2), 75-84.

Williamson, M. (1994). The worship of efficiency: Untangling theoretical and practical considerations in writing assessment. Assessing Writing, 1, 147-174.

Willingham, W. W. (1974). College placement and exemption. New York, NY: College Entrance Examination Board.

Zaback, K., Carlson, A., Laderman, S., & Mann, S. (2016). Serving the equity imperative: Intentional action toward greater student success. Boulder, CO: State Higher Education Executive Officers Association. Retrieved from http://www.sheeo.org/sites/default/files/2016_SHEEO_CCA_ServingEquityImperative.pdf