Volume 12, Issue 1: 2019

Directed Self-Placement at Two-Year Colleges: A Kairotic Moment

by Christie Toth, University of Utah

As national reform efforts are reshaping community college policies with the goal of improving degree completion rates, many two-year colleges are rethinking longstanding course placement processes. Directed Self-Placement (DSP) has emerged as one increasingly visible and viable option for placing students into introductory English and mathematics courses. However, higher education researchers advocating placement reform demonstrate little familiarity with the extensive scholarly literature on DSP in writing studies. To date, that literature has focused almost exclusively on four-year institutions, with few studies of DSP at two-year colleges. This article begins to address these gaps by (a) reviewing writing studies scholarship on DSP to identify key theoretical insights that are missing in the community college placement reform literature and (b) presenting findings from semi-structured interviews with implementation leaders at 12 two-year colleges that have attempted DSP. These findings document a more extensive record of DSP for writing placement at two-year colleges than has previously been visible in published scholarship and demonstrate that DSP can be successful in these institutional settings. These findings also reveal distinctive considerations, challenges, and opportunities for DSP at open admissions two-year colleges that warrant greater attention from placement reformers and writing assessment scholars.


In February 2017, researchers at Columbia University’s Community College Research Center (CCRC) published the latest in a series of influential working papers on placement assessment (Barnett & Reddy, 2017). Titled “College Placement Strategies: Evolving Considerations and Practices,” this paper offered an overview of current approaches to mathematics and “English” course placement at community colleges across the United States. It echoed the consensus among CCRC-affiliated researchers—many associated with foundation-funded community college reform efforts—that the widespread practice of placing students using high-stakes, single-score tests may result in significant misplacement, particularly under-placement into unnecessary “developmental” coursework (Bailey, Jaggars, & Jenkins, 2015; Bailey, Jeong, & Cho, 2010; Belfield & Crosta, 2012; Hodara, Jaggars, & Karp, 2012; Hughes & Scott-Clayton, 2011; Scott-Clayton, 2012; Scott-Clayton, Crosta, & Belfield, 2014). Among the alternatives the paper presented was directed self-placement (DSP). Barnett and Reddy (2017) defined DSP as an approach in which “students may be permitted to place themselves into the course level of choice, usually informed by the results of placement testing, a review of their high school performance, and/or information about college-level expectations in math and English” (p. 10). The inclusion of DSP in this CCRC paper suggests that, in our era of community college reform, DSP is emerging as a feasible approach to writing placement.

Such was not the case at the beginning of the decade when DSP was rarely discussed in higher education research. In writing studies, scholars characterized DSP as virtually nonexistent at two-year colleges (Sullivan, 2008a), and the literature included just one dated case study of DSP at a community college (Tompkins, 2003). Faculty and administrators often doubted whether DSP could work with their open admissions policies, diversely prepared students, and limited institutional resources. The lack of published scholarship suggested DSP was untried, and therefore risky, in two-year college settings (Giordano & Hassel, 2016). In our current reform-minded moment, however, many colleges are rethinking their approaches to placement, and some are contemplating DSP (Klausman et al., 2016; Toth, 2018). This study demonstrates that DSP has a more extensive record at two-year colleges than has been documented to date. I present findings from interviews with leaders at 12 two-year colleges that have adopted DSP for writing placement, showing that DSP has been implemented successfully at these institutions. However, these findings also reveal a number of distinctive considerations for DSP in open admissions two-year colleges that have not been fully accounted for in the writing assessment literature, which is based almost entirely on studies conducted at four-year institutions.

Most of the two-year college DSP initiatives included in this study were led by English faculty and informed by writing assessment scholarship. In “College Placement Strategies,” however, Barnett and Reddy (2017) demonstrated no awareness that the term directed self-placement derives from writing studies (Royer & Gilles, 1998) or that writing assessment scholars have produced several decades of research on DSP. Researchers and reformers unfamiliar with our disciplinary literature risk advocating and implementing placement processes that do not align with DSP’s theoretical underpinnings and fail to address important equity concerns. While reform-minded higher education researchers have helped create what I call a “kairotic moment” for DSP at two-year colleges, engagement with writing studies scholarship is essential for developing writing placement processes that are effective and fair.

Literature Review

Defining DSP

Two decades after writing studies scholars Royer and Gilles (1998) introduced the term directed self-placement, DSP has become a widely accepted approach to writing placement at four-year institutions. The core idea of DSP is this: Given the opportunity to reflect on their own writing experiences in relation to the literacy expectations, course options, and other academic supports available at the institution they are entering, students can and should choose the writing course that best suits their preparation and learning preferences (Royer & Gilles, 1998, 2003b). The twin fundamentals of DSP are thus guidance and choice.

There are, however, pervasive misconceptions about these fundamentals. For example, contrary to Barnett and Reddy’s (2017) characterization, Florida’s 2013 legislation ending mandatory placement testing does not in and of itself constitute a statewide shift to a DSP “policy” (p. 10)—indeed, in a recent white paper on developmental education reforms, the Two-Year College English Association (TYCA) referred to such legislation as “undirected self-placement” (Hassel et al., 2015, p. 239). While Barnett and Reddy (2017) claimed that “early descriptive data from Florida indicate that directed self-placement leads to much higher enrollment in introductory college-level courses in English and math but lower pass rates for these courses” (p. 10), the study they cited (Hu et al., 2016) made no mention of directed self-placement. Likewise, the Florida legislation in question did not reallocate funding from developmental education toward other forms of advising, student support, curricular redesign, or reductions in class size to allow for differentiated instruction: It was, ultimately, a measure to cut costs associated with developmental education rather than an improvement to placement. On the other hand, using self-assessment instruments to place students without giving them an explicit course choice (e.g. Balay & Nelson, 2012; Lewiecki-Wilson, Sommers, & Tassoni, 2000) is not directed self-placement, either. Without grounding in the writing assessment literature, colleges risk developing nominal DSP processes that do not align with DSP principles.

Another common misconception is that DSP is one specific instrument or procedure. At community colleges, this misunderstanding is likely exacerbated by the predominance of prepackaged tests that purport to relieve faculty of the “burden” of overseeing placement. In fact, DSP processes vary from institution to institution depending on local curricular configurations, student populations, and available resources. Institutions that have published about their DSP procedures report using various combinations of:

  • explanatory handouts, checklists, and/or web content (e.g., Blakesley, Harvey, & Reynolds, 2003; Kenner, 2016; Ketai, 2012; Klausman et al., 2016; Royer & Gilles, 1998)
  • paper or online questionnaires that yield placement recommendations (e.g., Das Bender, 2011; Frus, 2003; Gere, Aull, Green, & Porter, 2010; Gere, Aull, Perales, Escudero, & Vander Lei, 2013; Jones, 2008; Kenner, 2016; Klausman et al., 2016; Toth & Aull, 2014)
  • in-person group orientations or individual advising sessions (e.g., Bedore & Rossen-Knill, 2004; Chernekoff, 2003; Crusan, 2006; Gere et al., 2010; Klausman et al., 2016; Royer & Gilles, 2003a; Tompkins, 2003)
  • self-assessment in relation to an actual writing task, sample course readings, and/or examples of course writing assignments and successful student responses (e.g., Gere et al., 2010; Jones, 2008; Kenner, 2016; Klausman et al., 2016; Pinter & Sims, 2003; Toth & Aull, 2014)

Thus, DSP is not a single procedure, product, or algorithm, but rather a set of principles grounded in student choice that can be implemented in a variety of ways with varying consequences in local contexts. Those implementations often evolve over time as student bodies and curricula change and as new technologies and theoretical insights emerge.

Categories of Evidence

Developing a theoretically sound DSP process requires considering several different categories of evidence that educational measurement and writing assessment theorists often discuss through the trinitarian model of fairness, validity, and reliability/precision (Elliot, 2015, 2016; White, Elliot, & Peckham, 2015). The writing studies scholarship offers a number of important insights regarding these categories of evidence as they relate to DSP.

Fairness. Ethical concerns have been central to scholarly debates about DSP since its inception. Early DSP advocates invoked the work of democratic and critical pedagogues like Dewey, Freire, Shor, and hooks to argue for the importance of valuing students’ knowledge of their own writing experiences, respecting their agency in course decisions, and encouraging them to take responsibility for their education (Blakesley, 2002; Chernekoff, 2003; Pinter & Sims, 2003; Royer & Gilles, 1998, 2000, 2003b). More recently, scholars have suggested that DSP has the potential to supplant placement practices that have long privileged White, middle-class students, fostering more equitable writing assessment that advances social justice goals (Gomes, 2018; Inoue, 2009a; Kenner, 2016; Ketai, 2012; Toth, 2018). These arguments resonate with recent conversations about ethics in writing assessment, which articulate definitions of fairness that account for differential impact on structurally disadvantaged students (Elliot, 2016; Elliot et al., 2016; Poe & Inoue, 2016; Poe, Inoue, & Elliot, 2017; Slomp, 2016). As I (2018) have observed in my chapter “Directed Self-Placement at ‘Democracy’s Open Door’: Writing Placement and Social Justice in Community Colleges,” which draws on the same dataset as this article, such arguments may be particularly appealing to community college faculty who are often motivated by commitments to broadening educational access and opportunity, supporting social and economic mobility, and preparing students for various forms of democratic participation and civic engagement.

On the other hand, writing assessment scholars have also raised ethical concerns about DSP. Several have expressed doubts about novice writers’ capacity to assess themselves in relation to a writing context they have yet to enter (Balay & Nelson, 2012; Bedore & Rossen-Knill, 2004; Condon et al., 2001; Giordano & Hassel, 2016; Neal & Huot, 2003; Nicolay, 2002; Schendel & O’Neill, 1999). They have questioned the fairness of shifting the responsibility for placement onto students, asserting that faculty are better positioned to evaluate students’ preparation in relation to the demands of the curriculum (Condon et al., 2001; Nicolay, 2002; Schendel & O’Neill, 1999). Some have argued that the risks of such burden-shifting may be particularly acute for students whose self-concepts as writers have been negatively informed by their histories with school-based assessment, histories often shaped by race, ethnicity, language background, class, gender, age, and/or (dis)ability (Das Bender, 2011; Schendel & O’Neill, 1999; Toth, 2018). DSP materials and processes can also tacitly privilege White, middle-class language practices, behaviors, and values, thereby reproducing racialized constructs of “developmental” writers (Ketai, 2012).

Toth and Aull (2014) have suggested that these important ethical concerns should shape rather than dissuade DSP development. Indeed, Naynaha (2016) has argued that categorical skepticism about DSP in two-year colleges reflects a “paternalistic” disregard for the decision-making capacities of the racially, linguistically, and socioeconomically diverse students who attend these institutions (p. 199). My (2018) review of the empirical literature on DSP outcomes for diverse learners, presented in the “‘Democracy’s Open Door’” chapter, supports the assertion that DSP can offer a fairer approach to placement than single-score, high-stakes standardized tests. However, achieving that potential hinges on careful, critical design and ongoing collection of local validation evidence related to fairness (Elliot, 2016).

Validity. In the writing assessment literature, debates about DSP have often focused on evidence related to validity (Harrington, 2005; Neal & Huot, 2003; Schendel & O’Neill, 1999). Drawing on the work of educational measurement scholars like Messick (1989, 1995) and Kane (2006, 2013), writing assessment scholars have argued that evidence for the validity of a DSP process inheres not in the instrument per se, but rather in its representation of the valued local construct of writing and in the consequences of using a particular DSP process in local context (Gere et al., 2010, 2013; Toth & Aull, 2014). In this line of thinking, any DSP process must be validated locally to determine how well it aligns with the college’s conceptualization of writing and how its use affects curriculum, pedagogy, and student outcomes (see Gere et al., 2010, 2013; Inoue, 2008, 2009a for models). Local validation must also examine the impact of DSP on the diverse groups of students the institution enrolls (Das Bender, 2011; Inoue, 2008, 2009a, 2009b; Ketai, 2012; Klausman et al., 2016; Schendel & O’Neill, 1999; Toth, 2018). Such validation should be systematic, ongoing, and situated within broader programmatic assessment and institutional initiatives (Inoue, 2009a; Klausman et al., 2016; Toth, 2018).

We now have two decades of disciplinary scholarship on the local consequences of adopting various forms of DSP. Most studies have found that, on average, first-year writing outcomes under DSP are as good as or better than those under previous mandatory placement processes, at least as measured by course completion rates and/or average course grades, and students often report high levels of satisfaction with the DSP process and their course decisions (Bedore & Rossen-Knill, 2004; Blakesley, 2002; Blakesley et al., 2003; Chernekoff, 2003; Cornell & Newton, 2003; Crusan, 2006; Inoue, 2009a, 2009b; Jones, 2008; Kenner, 2016; Klausman et al., 2016; Pinter & Sims, 2003; Royer & Gilles, 1998; Tompkins, 2003; Toth, 2018). There are, however, several reasons to be cautious about this consensus. First, faculty may be less likely to write about unsuccessful DSP initiatives at their institutions: We have little scholarship on DSP failures. Second, the constructs and consequences of DSP are shaped by a range of factors related to implementation, including institutional resources and support (Gere et al., 2010).

Third—and bearing directly on the need for evidence related to fairness—published studies often have not disaggregated student outcomes data by race, gender, or other legally protected categories to determine whether DSP has differential impact (see Inoue, 2009b; Klausman et al., 2016; Poe & Cogan, 2016; Poe, Elliot, Cogan, & Nurudeen, 2014). A few studies have offered promising evidence that DSP can result in more equitable outcomes for women, students of color, first-generation college students, and/or multilingual students than mandatory placement, at least in some processes and local contexts (Blakesley et al., 2003; Inoue, 2009a, 2009b; Kenner, 2016). However, there is a pressing need for greater scholarly attention to the experiences of structurally disadvantaged students in the design and validation of DSP. As I (2018) discuss in my “‘Democracy’s Open Door’” chapter, this concern is particularly urgent at open-admissions two-year colleges, given the diverse student populations they serve (see also Klausman et al., 2016).

Largely absent from the CCRC placement literature—but crucial for considering the local consequences of any assessment’s use—has been discussion of the pedagogical consequences of DSP. Royer and Gilles (1998) suggested that DSP improves student attitudes in basic/developmental writing courses, which fosters a more positive teaching and learning environment. They also asserted that DSP motivates students who place themselves into more advanced classes to demonstrate they have made an appropriate choice (see also Kenner, 2016). For them, the question was not whether students are capable of making the “correct” course choice, but how the act of choosing shapes students’ orientations toward their writing courses and how those orientations might, in turn, reshape the pedagogical context. To the extent that adopting DSP impacts writing class sizes and fill rates, course scheduling, and staffing decisions, it can also create pedagogical consequences that may be difficult to anticipate.

Pedagogical consequences also extend to the messages placement sends to students about the writing context they are entering. Several DSP advocates have echoed Harrington’s (2005) observation that placement “is most students’ first contact with the theory and practice of first-year writing programs…[W]e need to think less about placement as mechanism and more about placement as an opportunity to communicate” (p. 12). In this rhetorical orientation, DSP offers the opportunity to introduce students to the local construct of writing, model the pedagogical approaches they will encounter in their writing classes, and encourage students to reflect on their own writing experiences and learning preferences in relation to the curriculum they are entering (e.g., Gere et al., 2010, 2013; Toth, 2018; Toth & Aull, 2014). From this perspective, mandatory placement based on multiple-choice usage tests communicates misleading messages to students about how writing will be theorized, valued, and assessed in their college courses.

While Barnett and Reddy (2017) made passing reference to issues of validity in “College Placement Strategies” (p. 15), they largely reduced these considerations to a question of “accuracy,” stating, “[a]n accurate placement mechanism will direct students who are college-ready to college-level work, while referring students who are academically underprepared to developmental coursework” (p. 3). This easy dichotomy between “college-ready” and “academically underprepared” allows little room for the kinds of critical questions writing studies scholars have raised about monolithic constructs of “college-level writing” or the material conditions that influence students’ performance (Sullivan, 2006, 2008b). Likewise, it fails to address the inequities that result from labeling students “underprepared” without acknowledging that such categories do not inhere in students themselves but are, rather, produced by our assessments and the classed, raced, colonial language ideologies that undergird them (e.g. Bartholomae, 1993; Cushman, 2016; Fox, 1993, 1999; Otte & Mlynarczyk, 2010). Poe and Inoue (2016) have argued that we need a “sociocultural model of validity” (p. 118) to develop writing assessments that further social justice. Such models require us to interrogate constructs like “college-level,” “preparedness,” and “accuracy” in our writing placements, rather than taking them for granted or defining them in universal, decontextualized terms.

Reliability/Precision. The notion of “accuracy” shades into the third major evidentiary category, which the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014) refer to as “reliability” or “precision.” Elliot (2015) has defined reliability as “the consistency of scores across replications of a measurement procedure” (p. 680), and White, Elliot, and Peckham (2015) have suggested that reliability might best be understood as “an important element (but not the controlling factor) of validity” (p. 23). In their original article on DSP, Royer and Gilles (1998) asserted that students’ responses to their questionnaire were “very reliable—more reliable, we believe than…faculty’s holistic responses to anonymous and impromptu student writing” (p. 63). However, the conventional trinitarian emphasis on reliability in writing assessment can seem to demand a numerical score, which likely contributes to the widespread assumption that any given DSP process must involve a questionnaire. Questionnaires are the most obvious way to “score” student self-assessments or self-inventories to yield a predictable placement recommendation.

Questionnaires can be a useful way to guide student self-assessment in relation to the local curriculum and construct of writing. However, it is also easy for discussions of DSP questionnaires to replicate the kind of binary algorithmic thinking about writing placement advanced by Willingham (1974)—thinking that Toth, Nastal, Hassel, and Giordano (2019) have critiqued for its decontextualized, linear assumptions about writing and learning. Furthermore, focusing narrowly on questionnaires that yield “reliable” placement recommendations risks losing sight of the broader principles of DSP: communicating with students about the writing context they are entering and encouraging self-reflection and agency. A well-developed DSP process invites students to consider a range of factors regarding their prior writing knowledge and experiences, learning preferences, and the values and practices associated with college-level writing at their new institution. We should not assume that a numerical score—or, for that matter, a questionnaire—is a necessary part of that process.
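To make this point concrete, the following sketch (in Python, with wholly invented items, point values, and cutoff that do not reproduce any institution's instrument) illustrates the kind of threshold-based questionnaire scoring described above; as the surrounding discussion cautions, such a score is at most one component of a DSP process, not DSP itself.

```python
# Hypothetical sketch of a threshold-based DSP questionnaire score.
# The items, point values, and cutoff below are invented for illustration
# and do not reproduce any institution's instrument. As the surrounding
# discussion notes, such a score can inform a recommendation, but the
# course choice in DSP remains the student's.

QUESTIONNAIRE = [
    # (self-assessment statement, points added for a "yes" response)
    ("In high school, I regularly wrote essays of several pages.", 2),
    ("I am comfortable reading and summarizing college-level texts.", 2),
    ("I often revise my writing in response to feedback.", 1),
    ("I feel confident developing and supporting my own argument.", 1),
]

COLLEGE_LEVEL_CUTOFF = 4  # invented threshold for this sketch


def recommend_course(yes_answers: list[bool]) -> str:
    """Turn questionnaire responses into a non-binding course recommendation."""
    score = sum(points for (_, points), yes in zip(QUESTIONNAIRE, yes_answers) if yes)
    if score >= COLLEGE_LEVEL_CUTOFF:
        return "Recommended: college-level first-year writing"
    return "Recommended: developmental writing with additional support"


# Example: a student answering yes, yes, no, yes scores 5 points and receives
# the college-level recommendation, which they may accept or decline.
print(recommend_course([True, True, False, True]))
```

Reducing DSP to such a score is precisely what the passage above warns against: the recommendation can guide, but the communicative and reflective dimensions of placement belong to the broader process, and the decision remains with the student.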

A related concern in the CCRC literature is the notion of efficiency. Given their limited resources and need to register large numbers of open admissions students quickly, two-year colleges are often looking for fast, predictable, and easy-to-interpret placement mechanisms. In many cases, community colleges seek to reduce logistical burdens on students by enabling them to test and register for classes during a single visit to campus. Decades ago, Williamson (1994) warned writing assessment scholars against the “worship of efficiency,” arguing that a preoccupation with the practical concerns of testing can contribute to undertheorization of assessment instruments and processes. That warning seems prescient at community colleges, where ill-conceived notions of efficiency at the point of entry have led to widespread use of high-stakes, single-score tests for writing placement—often, multiple-choice tests of grammar and usage—resulting in patterns of misplacement that can reduce student persistence to degree completion (Bailey et al., 2010; Belfield & Crosta, 2012; Hodara et al., 2012; Hughes & Scott-Clayton, 2011; Scott-Clayton, 2012; Scott-Clayton et al., 2014).

The writing studies literature offers a mixed portrait regarding the “efficiency” of DSP. Royer and Gilles (1998) argued that DSP is less resource-intensive than other forms of placement, particularly those that require faculty to evaluate samples of student writing. However, scholars like Bedore and Rossen-Knill (2004) have countered that ensuring students make an informed self-placement decision requires advising that is just as resource-intensive as mandatory placement. Elliot’s (2016) definition of fairness insists on the provision of resources to support the students least advantaged by an assessment, suggesting that our focus should be on determining how to allocate resources most beneficially rather than on reducing costs. For two-year colleges, the central question may be whether available resources are better spent on proprietary tests or on developing theoretically sound, locally validated, fair approaches to placement.

DSP at Two-Year Colleges

The two published accounts of DSP in two-year colleges prior to this study (Klausman et al., 2016; Tompkins, 2003) reported mostly positive outcomes. Tompkins (2003) found that students in his small-scale DSP pilot who would have been placed into developmental writing but chose the college-level course earned grades of A or B at higher rates than students who placed into those courses through standardized test scores, although DSP participants also had a course withdrawal rate that was higher than the institutional average. Likewise, Mid Michigan College saw an increase in first-year writing exit portfolio completion rates after implementing DSP, suggesting that this change did not result in large-scale over-placement and, in conjunction with other curricular reforms, may have helped improve student outcomes (Klausman et al., 2016). As I (2018) reported in my “‘Democracy’s Open Door’” chapter, and summarize in more depth below, my interviews with DSP leaders at 12 two-year colleges suggested similarly positive outcomes (pp. 162–163). However, as I also discussed in that chapter, none of the available outcomes data on DSP in two-year colleges are disaggregated by race, gender, age, or other salient categories to determine whether DSP has differential impact for some groups in these settings—a shortcoming that undercuts our ability to evaluate claims that DSP furthers social justice at open-admissions two-year colleges.

The DSP processes described by both Tompkins (2003) and Klausman et al. (2016) involved versions of what Tompkins calls the “decision zone” model (p. 197). This approach begins with standardized test scores. Students who score on either the low or high end of the test are placed into developmental or college-level courses, respectively, and those who score in the middle range are eligible to choose their course through DSP. At Mid Michigan, the decision zone was noncompulsory, with students who scored either low or high on the ACCUPLACER Reading test “strongly advised” into courses (Klausman et al., 2016, p. 146). However, as Inoue (2014) has warned on the Writing Program Administrators (WPA) listserv, there are reasons to be cautious about combining standardized test scores with DSP:

It has been my experience running and programmatically assessing a large DSP…that trying to run two kinds of placement systems, with two very different theoretical frameworks and assumptions about students and their agency in the placement system, may cause you problems -- i.e. the DSP won’t work very well when students get contradictory messages. DSPs work the best when students get consistent messages about what power they have in the entire placement ecology.

Thus, the decision zone model might sustain competing assessment ideologies and send conflicting messages to students and other stakeholders. As I discuss below, many two-year colleges use versions of the decision zone, and this model warrants much more careful theoretical consideration than it has received to date.

Although published scholarship on DSP in two-year colleges is scant, the placement paradigm at these institutions is shifting. A decade ago, findings from the TYCA Research Committee’s member survey suggested that DSP was practically non-existent at two-year colleges (Ostman, 2013; Sullivan, 2008a). More recently, however, TYCA’s “White Paper on Placement Reform” (Klausman et al., 2016) has presented DSP as a viable placement process. The “White Paper” did not recommend any single placement approach, stating that such decisions should be made in local contexts. Instead, the document asserted that all writing placement should be: (a) grounded in disciplinary knowledge, (b) developed by local faculty whose work is recognized and compensated by their institution, (c) sensitive to effects on diverse student populations, (d) assessed and validated locally, and (e) integrated into campus-wide efforts to improve student success (p. 136). The “White Paper” suggested DSP was one option for writing placement that, when done well, could align with these principles. In order to advance our understanding of what “done well” might mean, I turn to findings from my interview-based study of DSP at 12 two-year colleges.

Methods

My initial goal for this study was to develop a comprehensive list of two-year colleges that had attempted DSP. I began in early 2015 by reviewing published scholarship, searching the archives of the WPA listserv, and following up on prior conversations with two-year college colleagues. After gaining study approval from the University of Utah’s Institutional Review Board, I posted calls for information on the WPA and Council of Basic Writing (CBW) listservs and sent direct email queries to the entire TYCA membership through the National Council of Teachers of English (NCTE) mailing list. Through my conversations with initial respondents, I used snowball sampling to identify additional colleges that had tried DSP. Ultimately, I identified 17 two-year colleges that have attempted DSP since the late 1990s.[1] Twelve had active DSP processes: three had been using DSP for more than a decade, two had been using it for five years, and seven were piloting new DSP processes. Via email, I invited contacts at all 17 institutions to participate in one-time semi-structured interviews about their experiences, and I interviewed everyone who agreed to participate.

In total, I conducted 12 interviews: nine with English faculty (eight individuals and one pair) and three with college administrators, one of whom had been a member of the English faculty at the time of his college’s DSP pilot. The participants included six men and seven women, all White, ranging in age from their early 30s to mid-60s. They were employed at colleges in eight different U.S. states located in the Northeast, Mid-Atlantic, Midwest, Intermountain West, and West Coast. All participants were assured that neither they nor their colleges would be identified in publications resulting from this study, and all were given the opportunity to review and respond to a draft of this article. The interviews were conducted by phone or video conference and were audio-recorded and transcribed for analysis. Ten participants provided copies of their DSP materials (questionnaires, course descriptions, and/or sample texts), and five sent reports or slide presentations detailing DSP outcomes at their institutions. Table 1 presents institutional demographic data from IPEDS (2016) for the 12 colleges included in the study, as well as their DSP implementation status at the time of the interview.

Table 1

I analyzed the transcripts using a grounded theory approach (Corbin & Strauss, 2008), iteratively reviewing the data, memoing, and coding in the online program Dedoose. This approach enabled me to identify themes in institutional experiences based on participant descriptions. The findings I present here derive from seven broad code categories that emerged through this process: rationale for DSP, policy context for DSP, institutional context for DSP, DSP leadership, DSP process, DSP outcomes, and challenges with DSP.

These methods present several limitations. First, I rely on participant self-report. While all interviewees were leaders in campus DSP initiatives, they inevitably had their own perspectives shaped by their disciplinary knowledge, their professional roles and experiences, and other aspects of their identities and personalities. Furthermore, four interviewees described initial DSP implementation processes that had been launched more than 10 years before, which meant they were drawing on relatively distant memories and the notes and reports they had saved. I was also reliant on participants for whatever DSP materials and outcomes data they were able to provide, which varied from one site to another. Thus, these findings reflect themes in the perceived opportunities and challenges of DSP at 12 two-year colleges, as described by the faculty and administrators most directly involved in its implementation.

Findings and Discussion

As noted above, I (2018) have presented findings related to DSP and social justice in my “‘Democracy’s Open Door’” chapter. In this Journal of Writing Assessment article, I focus on findings that highlight other distinct and under-discussed considerations for two-year college DSP. I arrange four key themes from these interviews chronologically, paralleling the arc of DSP implementation (see Table 2). For each theme, I present the findings and a brief discussion.

Table 2

Motivations for DSP

Dissatisfaction. Prior to adopting DSP, all 12 of the institutions in this study placed students using a standardized multiple-choice test, a faculty-scored impromptu writing sample, or both. Participants at eight of the 12 colleges indicated that their interest in DSP emerged from dissatisfactions with those previous systems. Five objected to prior placement systems on disciplinary grounds, explaining that these processes did not align with what they understood to be sound writing assessment principles. Six commented on issues relevant to construct validity, stating that their previous placement processes did not reflect their department curricula or values and/or failed to account for the full range of writing abilities expected in their courses. Six were concerned about misplacement, particularly for specific groups of students, such as students of color and those who were multilingual, older/returning, and/or in the first generation of their families to attend college (for a more detailed discussion, see Toth, 2018).

Institutional and policy change. DSP initiatives often emerged when these longstanding dissatisfactions intersected with other institutional changes. For the seven colleges whose DSP initiatives had been launched within the last five years, these changes were linked to broader policy shifts geared toward improving student degree completion. As one participant said, “Especially as [the state] is moving towards performance-based funding for community colleges, it’s not just about our enrollment. It’s about getting [student] completions. If we’re going to get completions, we have to make our processes more efficient. We have to move them through.” Two colleges were in the midst of major reorganizations to help “move students through,” with pre-college-level reading and writing courses being relocated from a separate developmental unit into the English department. Such restructuring brought increased attention to curricular alignment between developmental and college-level courses, which in turn prompted reconsideration of placement. At six institutions, placement reform was spurred by other changes to developmental education that have been taking place at community colleges across the country, such as integrating reading and writing instruction, condensing course sequences, and implementing accelerated learning programs (see Adams, Gearhart, Miller, & Roberts, 2009; Hassel et al., 2015; Hern, 2012).

At five colleges, participants reported that these changes were fueled by institutional involvement with non-profit organizations like Achieving the Dream, funding from large foundations, and/or the influence of reform-oriented higher education research emerging from academic centers like the CCRC. In five cases, developmental education reforms affecting placement were also a response to state-level policies either encouraging or mandating change. As one participant characterized it,

The community college went through a really long period of quiet that you could measure in geological time. Now, it’s like the volcanoes are erupting and, whoa, we have to do all of this stuff. We’re in a period of huge change right now.

Calls among reform-minded community college researchers to adopt multiple measures placement (Belfield & Crosta, 2012; Hodara et al., 2012; Hughes & Scott-Clayton, 2011; Jaggars, Hodara, Cho, & Xu, 2015; Scott-Clayton, 2012; Scott-Clayton et al., 2014) were particularly helpful in creating the conditions for DSP. One participant reported,

[We’ve] been trying to get the state to allow us to do something besides standardized test placement for a long, long, long time. Then suddenly, a few years ago, they did Complete College [State], which is based on Complete College America. One of the recommendations was multiple measures placement, and so it created this opportunity for us.

The rise of multiple measures enabled four colleges to uproot placement based on single-score standardized tests. They were then able to implement DSP for at least a subset of incoming students by situating self-placement within a “multiple measures” framework. For one participant’s college, the transitions from single-score placement to multiple measures to DSP unfolded over time:

Changing our use of COMPASS reading, doing the transcript-based placement, I wasn’t thinking, ‘Okay, we’re going to eventually move to DSP.’ I was like, ‘Let’s move to something more ethical than COMPASS.’ Then I think our placement became a session [rather than a test]…DSP seems like a natural progression, and there seem to be places and a real way to do DSP.

In some cases, then, adopting multiple measures could be a first step in placement reform that enabled an incremental move to DSP.

Pedagogy and fairness. Echoing themes in the writing studies literature, several participants viewed DSP as an attractive placement alternative because of its perceived pedagogical benefits. Four described DSP as an opportunity to “communicate” or “converse” with students, which they hoped would encourage students to be more invested in their writing courses and enter whichever course they chose with improved attitudes and motivation. Six participants noted DSP’s potential to foster reflection and greater writing self-awareness, and six described DSP as a way to “empower” students with greater control over their own education or to foster their sense of “agency” and “self-efficacy.” Three characterized DSP as a more “ethical” approach to placement, in part because it was honest about the limitations of the institution’s ability to assess students’ writing capacities (see Toth, 2018).

Participants also had a keen interest in DSP’s potential to improve student outcomes. Two commented on DSP’s ability to account for many otherwise unmeasured aspects of students’ learning preferences, study skills, and motivations, as well as competing demands on their time—what the higher education literature refers to as “non-cognitive” factors (e.g. Barnett & Reddy, 2017; Scott-Clayton, 2012). Four noted the value of giving all students the opportunity to choose the additional time and support offered in developmental writing courses, and five were particularly concerned with reducing perceived patterns of under-placement. Two participants specifically mentioned CCRC studies regarding the negative consequences of placement into unnecessary developmental coursework, which they found troubling given their colleges’ predominantly low-income and often racially and linguistically diverse students. In sum, most participants reported adopting DSP because it offered a promising corrective to perceived problems and injustices in their placement processes (see Toth, 2018).

Discussion. Many of the participants in this study had longstanding misgivings about their colleges’ approaches to writing placement, misgivings grounded in disciplinary knowledge, pedagogical principles, and mounting concerns about equity for students from structurally disadvantaged backgrounds. DSP offered an attractive alternative rooted in the pedagogical and ethical frameworks of writing studies. These findings demonstrate that, in recent years, faculty members’ ability to change their colleges’ placement practices has been linked to the “kairotic moment” created by broader community college reform pressures, which are destabilizing “business as usual” in writing placement (Klausman et al., 2016, p. 138). Some participants in this study were able to take advantage of such destabilization and redesign placement to better align with disciplinary knowledge and their departments’ writing curricula.

The rhetorical term kairos refers to a propitious moment, to recognizing and seizing opportunity to make change by choosing effective words (and other actions) at an appropriate or advantageous time. In the 2010s, completion-oriented reforms driven by mega-philanthropies and higher education researchers have been reshaping developmental education at two-year colleges (Hassel et al., 2015). As Warnke and Higgins (2018) have observed, these reform pressures are typically “framed in terms of neoliberal efficiency” (p. 363) and often feature corporate sponsorships, software “solutions,” and zealous rhetoric that arouse suspicion in many writing faculty. The findings of this DSP study affirm Warnke and Higgins’s argument that, without relinquishing skepticism about the provenance and neoliberal agenda of these reform efforts, we might recognize the kairotic openings they provide to implement more theoretically sound and socially just approaches to placement.

Implementing DSP

Leadership. At 11 of the 12 colleges in this study, English faculty led DSP development and implementation; the three DSP programs that had been in place for over a decade were all faculty-driven. DSP initiatives were often led by faculty who had assumed disciplinary authority over writing placement. As one participant said, “I think [writing placement] should stay with the content subject matter expert, which will be the English faculty.” Nine faculty participants described involvement in professional activities that suggested sustained engagement with the discipline of writing studies. They reported learning about DSP through conference attendance, disciplinary publications, professional listservs, interactions with writing faculty at other institutions (both two- and four-year), and/or other forms of networking within the writing studies community.

Some faculty were also engaged in research that bolstered their professional authority. One participant explained his administrators’ receptivity to piloting DSP by saying,

I had some credibility at the college. I had worked on placement and done some studies. I think what helped a lot was that I was on a statewide taskforce related to placement. This wasn’t my first foray into researching placement. They knew that I was coming with a background.

Engaging with writing assessment at scholarly, professional, and policy levels could thus be both intellectually and rhetorically useful for faculty seeking to reform placement at their institutions.

Stakeholders. Although faculty participants were leaders in their colleges’ DSP initiatives, many were also quick to note the importance of support from key administrators and the cooperation or collaboration of other stakeholders on campus, such as adult basic and developmental education instructors, testing center personnel, and student services staff. In the words of one participant,

It’s important to have partnerships to get [DSP] going. You can’t just do it all by yourself. Even if you do get it going, it’s important to maintain the partnerships and the communications in order to have it be a successful program. If I’m not on the same page with advising, our DSP doesn’t work well.

Indeed, several participants pointed to the particular importance of ensuring that advising staff understood the principles as well as the procedures of DSP (see also Klausman et al., 2016). Two participants with longstanding DSP processes emphasized that working with advisors to facilitate conversations about DSP should be ongoing, in-person, and dialogic. Otherwise, staff turnover and changing institutional conditions could lead to confusion over DSP’s purposes and principles, particularly its overriding commitment to student choice.

Challenges. Participants described a number of challenges initiating DSP at their institutions. Some encountered ideological resistance from administrators, staff, and/or fellow faculty paralleling themes in the writing studies literature. These stakeholders sometimes doubted students’ ability to self-assess their own writing preparation or willingness to make course choices based on anything other than expedience. Some faculty were concerned that DSP would undermine “standards” in college-level courses, and others were hesitant to let go of the perceived “efficiency” of standardized tests. Likewise, some stakeholders were concerned about the evidentiary basis for DSP, particularly the lack of research from open admissions institutions (see Giordano & Hassel, 2016). Writing studies scholarship on DSP was not always viewed as credible compared to the more scientific-seeming materials offered by testing companies.

Indeed, stakeholder doubts had the power to negatively affect DSP pilots, becoming a kind of self-fulfilling prophecy. At the one institution where DSP was administrator-driven—and funded by a foundation grant aimed at “disrupting” traditional approaches to developmental education—faculty were skeptical about the preparation of students in the decision zone. As the administrator participant reported, “In the very early stages, we identified for the faculty who the DSP students were. What we found is there was almost a bias against them…After the first semester, we quit informing the faculty who the DSP students were, and that had a better result.” In other words, students who would have placed into developmental courses under this college’s previous ACCUPLACER-based process but chose to enter college-level courses were assigned lower average grades when their writing teachers knew who they were. Those differences disappeared once faculty could no longer distinguish “DSP students” from those who placed directly into the college-level courses via ACCUPLACER.

Participants also identified other implementation challenges. Two described difficulties achieving a departmental consensus about the local construct of writing that should inform their DSP materials. As one complained,

If we’re going to be, through DSP, saying, “This is what 101 is,” the English Department has to decide on what that is…That’s one of my frustrations. There’s been this English curriculum committee going on for two years. I just feel like…“Come up with something coherent though so we can tell students!”

Reliance on externally-devised placement tests often allows departments to overlook or ignore fundamental disagreements among faculty about the purposes and values of the writing courses they teach. As Toth and Aull (2014) have observed, DSP initiatives can surface such tensions.

Discussion. These findings about DSP implementation foreground the importance of faculty leadership in writing placement reform. While successful DSP implementation required the involvement of a range of stakeholders, including administrators and advisors, faculty brought the disciplinary expertise and the presence in actual writing classrooms that kept DSP theoretically coherent and grounded in local curriculum. However, these findings highlight how what Jensen (2017) has called the “uneven professionalization” of two-year college English faculty (p. 24) presents challenges for implementing DSP. These faculty often have less professional autonomy and disciplinary authority over institutional decision-making than their four-year counterparts (Toth, Griffiths, & Thirolf, 2013). Administrators do not always honor faculty input or decisions about placement practices, particularly when forces for change are emanating from outside the institution, and not all two-year college English faculty have the professional preparation, disciplinary grounding, material support, or desire to take on such leadership roles (Griffiths, 2017). When disparate disciplinary preparation, uneven professionalization, and overreliance on contingent faculty labor result in an incoherent writing curriculum—when, as Klausman (2008) has suggested, two-year colleges have “a collection of writing classes, not a program” (p. 239)—there may be no clear local construct of writing around which to design a DSP process.

On the other hand, these findings show that professional activity beyond the classroom—attending conferences, participating in multi-institutional initiatives, and conducting research—provided some faculty in the study with important “footing” from which to assert their professional authority in placement reform (Toth et al., 2013, p. 100; see also Griffiths, 2017). The experiences of these participants suggest the value of engaging with disciplinary networks for gaining professional authority—indeed, such engagement may be the best hope for seizing this kairotic moment for placement reform. These findings also suggest the importance of addressing the systemic underrepresentation of two-year college English faculty in disciplinary professional spaces (Hassel & Giordano, 2013; Toth, 2014). In order to achieve its social justice goals, writing studies needs to ensure that two-year college faculty are a well-integrated part of the disciplinary community.

DSP Processes

Decisions. Without a prepackaged assessment product, the colleges in this study needed to make a number of local decisions about their DSP processes. They had to determine what DSP should cost and how it would be funded, as well as whether to start with a small DSP pilot study or implement at scale. The responsibilities of different stakeholders needed to be negotiated and, in some cases, revisited over time. Implementers had to develop sustainable plans for validating and assessing their DSP processes. And, as I will discuss in this section, they also had to decide which students would be eligible for DSP; the nature of DSP materials and interactions; and logistical issues, such as the time, location, and platform on which students would complete the DSP process.

Eligibility. It is possible to have “universal” DSP at open-admissions two-year colleges. Five of the institutions in this study—including two that had sustained DSP for over a decade—used the same DSP procedure for all incoming students. Seven colleges, however, used some version of Tompkins’s (2003) decision zone model. In this approach, not all incoming students were eligible to participate in DSP. Rather, a subset of students—those scoring in a defined middle range on standardized tests or other multiple measures criteria—had the option of choosing their writing course through DSP. Those who did not meet the eligibility requirements were mandatorily placed into courses.
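For readers unfamiliar with the mechanics, the decision-zone logic described above can be sketched as follows. This is a purely hypothetical illustration in Python, with invented cutoff scores rather than any participating college's actual criteria.

```python
# Illustrative sketch of decision-zone eligibility, not any participating
# college's actual procedure; the cutoff scores are invented. Students below
# the lower cutoff are placed into developmental writing, students above the
# upper cutoff into the college-level course, and only students in the middle
# band choose their own course through DSP.

DEV_CUTOFF = 40      # hypothetical lower bound of the decision zone
COLLEGE_CUTOFF = 75  # hypothetical upper bound of the decision zone


def placement_route(test_score: int) -> str:
    """Return how a student enters a writing course under a decision-zone model."""
    if test_score < DEV_CUTOFF:
        return "Mandatory placement: developmental writing"
    if test_score > COLLEGE_CUTOFF:
        return "Mandatory placement: college-level writing"
    return "Decision zone: student chooses a course through DSP"


for score in (25, 60, 90):
    print(score, "->", placement_route(score))
```

Only students in the middle band receive a choice, which is the feature that Inoue (2014) and the discussion below identify as sending mixed messages about student agency.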

Some participants expressed satisfaction with their decision zone approach and gave no indication of plans to change. Participants at two colleges, though, saw the decision zone as an initial means of challenging longstanding placement practices and creating a provisional opening for student agency. One described incrementally expanding DSP eligibility to include students who couldn’t be placed using other multiple measures or who participated in academic support programs, saying, “We’re going to shave off groups of students who can effectively use DSP.” Another had an even more ambitious long-term goal: “I would like to scale down the use of ACCUPLACER, even get rid of it, and do a much larger scale of Directed Self-Placement.” Thus, some saw decision zone DSP as a way to demonstrate the viability of self-placement to skeptical stakeholders, expand student eligibility, and, in the long run, push out placement tests entirely.

Logistics. In addition to determining who would be eligible to complete DSP, colleges also had to determine the nature of the choice being offered. At one institution, students were choosing between multiple levels of developmental writing. At seven colleges, students had a binary choice between developmental and college-level courses. At three colleges with recently reformed curricula, students were choosing between developmental courses, college-level writing with acceleration or supplemental support, and/or the college-level course without additional support (see Adams et al., 2009; Hassel et al., 2015).

Colleges also needed to decide how that decision-making process would be structured, including when, where, and in the context of what institutional interactions DSP would take place. In these open admissions settings, those considerations presented some distinctive challenges. Unlike selective four-year institutions, many colleges could not count on all students being able or willing to complete DSP processes online before arriving on campus. Only two took that approach, and both enforced “compliance” by blocking students’ writing course registration until they had completed DSP. Most colleges conducted the entire DSP process on-site: Five included DSP as part of new-student orientation, and seven had students make their course choice during a one-on-one conversation with an advisor. However, resource constraints could present a barrier to that approach. Three participants indicated that there were not enough staff at their institutions for all eligible students to make their DSP decision in consultation with an advisor.

Materials. DSP materials at all 12 colleges included an overview of the placement process and an explanation of the course options. Eleven structured their DSP around some kind of self-inventory or self-assessment questionnaire. Four also included at least one sample text intended to give students a sense of the reading expectations in college-level courses. Three colleges included sample assignments from first-year composition courses to show students the nature of the writing tasks they should expect. Colleges that provided such sample materials often included self-assessment questions that encouraged students to reflect on their preparation for completing similar reading and/or writing tasks.

Only three colleges asked students to produce writing as part of the DSP process. While one institution was still developing its DSP process and had not yet decided on a prompt, the other two required students to write a reflective or argumentative piece explaining their DSP course choice. Such pieces, which resembled Lewiecki-Wilson et al.’s (2000) self-reflective Writer’s Profile, could be produced in a single sitting. These context-relevant writing tasks communicated a message about the value the curriculum placed on writerly self-reflection, and they provided colleges with insight into the reasoning behind students’ course selections.

Platform. The platform on which colleges presented DSP varied. One college informed students of their DSP options via letter prior to registration, and two presented DSP options during orientation with accompanying print materials. The other nine colleges in the study used online platforms for facilitating DSP. Two had web-based questionnaire interfaces built by institutional IT staff, and another administered its questionnaire through the survey software Qualtrics. Two colleges created interactive DSP modules through course management systems, and two used Learner Web, an online platform for self-paced learning originally developed for adult literacy programs. The remaining two colleges used versions of the Write Class, a web-based multiple measures “course-matching” tool developed by writing studies researchers at Boise State University (Ruecker, Shepherd, Estrem, & Brunk-Chavez, 2017).

Discussion. These findings confirm that there is no single procedure, instrument, or platform for DSP at two-year colleges. Rather, institutions developed their DSP processes locally, based on their curricular options, student populations, institutional resources, and the demands of writing placement in the context of open admissions. Most included some kind of questionnaire or self-inventory, and some included sample readings or assignments that illustrated the local construct of writing. The provision of such sample materials aligns with recent writing studies literature advocating DSP processes that invite students to self-assess in relation to the genres and literacy practices they can expect to engage in college (Gere et al., 2010, 2013; Toth & Aull, 2014). Gere and her colleagues (2010, 2013) argue for DSP processes that structure self-assessment around an actual writing task that simulates local genre expectations. However, few two-year colleges in the study seemed to find this kind of task necessary or feasible. That choice may be a function of the challenges of administering DSP in open admissions settings. The requirement to write a lengthy (intimidating) essay might (differentially) dissuade some two-year college students from enrolling. The nature of writing tasks in two-year college DSP—and the consequences of their use—are areas that warrant further research.

These findings also demonstrate that one of the most distinctive characteristics of DSP at two-year colleges is the prevalence of the decision zone model. That pattern may reflect the fact that, for over a decade, the only published literature on DSP in two-year colleges was Tompkins’s (2003) case study. However, in a recent email exchange, Tompkins himself wrote, “I chose to pilot with the ‘decision zone’ because policy allowed for innovation with students whose scores fell there, but I would have preferred to open it to all students” (personal communication, November 5, 2017). He agreed that there are reasons to think carefully about this approach. First, as Inoue (2014) has suggested, it maintains contradictory assessment ideologies and sends mixed messages about students’ capacity to choose. The example of faculty bias toward “DSP students” suggests that maintaining these conflicting ideologies can undermine the effectiveness of DSP. Second, a decision zone established through products like ACCUPLACER determines whether or not students are capable of making their own course selection based on a negligible difference in score on tests that can have disparate impact. Defining the decision zone using multiple measures may mitigate the second concern, but not the first. Finally, decision zone DSP does little to divert resources being spent on commercial tests back toward placement that more accurately conveys the writing program’s values and expectations.

However, based on these study findings, we might consider whether the decision zone can serve as a pragmatic, provisional option at open admissions two-year colleges. At several colleges in this study, implementing DSP institution-wide would have been unfeasible given stakeholder skepticism: The choice was not between decision zone DSP and universal DSP, but rather between decision zone DSP and no DSP at all. And decision zone DSP did seem to reduce patterns of under-placement. It might therefore be seen as a strategic intervention in universal mandatory placement. Those implementing decision zone DSP could consider it a starting point—a foot in the door—rather than the ideal model; they could look for opportunities to gradually expand the number of students who get a choice in their writing placement. By collecting data on the consequences of DSP and continuously revising the placement process, those colleges might work toward universal DSP incrementally.

Consequences of DSP

Student outcomes. As I (2018) wrote in my “‘Democracy’s Open Door’” chapter, participants at all seven colleges with outcomes data expressed enthusiasm about the consequences of adopting DSP. Five reported a reduction in the number of students enrolled in developmental courses, which they interpreted as a beneficial corrective to under-placement; one college was eliminating the lowest-level course in its developmental sequence in response to these shifts. Another college indicated that enrollments in developmental courses remained about the same under DSP. Only one college found that enrollments in developmental courses increased. The faculty participant at this college stated that his department interpreted this outcome as an indication that, under DSP, students who wanted more time and feedback on their writing were able to make that choice for themselves; he did not indicate what evidence the department drew on to reach this interpretation.

While each institution varied in the types of data it collected and the specificity of what participants were able or willing to share, six of the seven colleges with outcomes data saw increased “student success” after adopting DSP, at least as measured by average course grades and/or completion of the required college-level writing course. Importantly, none saw evidence that students were systematically over-placing themselves in ways that undermined their persistence—a finding in line with DSP outcomes reported at four-year institutions. The three colleges that surveyed students found most were satisfied with the course they had selected and responded positively to having a choice in their writing placement (for a more detailed institutional breakdown of these outcomes, see Toth, 2018, pp. 162–163).

Change over time. Five of the seven colleges with outcomes data had active DSP programs when these interviews were conducted, and all five reported making adjustments or changes to their DSP processes over time. Four described revising or refining the items on their DSP questionnaires based on data showing which questions best predicted student outcomes. Four described changing the sample reading and writing assignments and/or explanatory materials to make them more accessible or relevant to students. For one college, advisor feedback based on DSP-related interactions with students was central to that revision process.

All five colleges also reported revising DSP as their curricula and student populations evolved. Three reported making changes related to policy shifts, including the introduction of new course options like accelerated learning, integrated reading and writing, and first-year learning communities, as well as the advent of state-mandated student orientation. Two participants described challenges to established DSP processes posed by the rapid growth of dual/concurrent enrollment programs. These departments found themselves suddenly overseeing placement for large numbers of high schoolers seeking to enroll in first-year writing courses. This change raised questions about how to craft DSP processes appropriate to adolescents’ personal and intellectual development, prior literacy experiences, and available course options.

Discussion. Adopting DSP had positive consequences at most of the colleges in this study. However, none of the colleges had disaggregated outcomes data by race, gender, age, or other demographic categories to determine whether their DSP processes resulted in more equitable placement than previous systems, or whether the use of DSP had disparate impact on any groups of students. Thus, the experiences of these early adopter institutions give us little sense of whether DSP can result in more socially just writing placement at open-admissions two-year colleges, or whether some approaches to DSP in these settings yield more equitable outcomes than others. There is a pressing need for more research into how various forms of DSP impact students from structurally disadvantaged backgrounds in local two-year college contexts (for an extended discussion of these issues, see Toth, 2018).

Conclusion

In “A Theory of Ethics for Writing Assessment,” Elliot (2016) defines “fairness” as “the identification of opportunity structures created through maximum construct representation under conditions of constraint” (emphasis mine). In other words, fair writing assessment aims to create possibilities for meaningful learning and educational advancement by operating from the most comprehensive representation of the local writing construct possible in contexts where time and material resources are inevitably limited. Elliot adds the important caveat that “constraints” should be tolerated “only to the extent to which benefits are realized for the least advantaged”: If the limitations on assessment practices imposed by our material conditions are having disparate impact on structurally disadvantaged groups, those constraints are unacceptable, and we must work to change them.

By this definition, DSP seems to offer affordances for fairness that many approaches to two-year college writing placement lack. It creates new opportunities for incoming students to enter college-level writing courses while retaining those students’ opportunity to access the additional instruction offered by various “developmental” support options. It can expand the construct of writing that students encounter in the placement process to better reflect the range of rhetorical, processual, and metacognitive capacities the college values. And it appears that DSP can provide these benefits under the resource constraints at many open admissions two-year colleges, although we do not yet know enough about its consequences for structurally disadvantaged students. Some approaches to DSP may be fairer in local two-year college contexts than others, and that fairness may be contingent on the resources colleges are willing to allocate to placement processes. However, the findings of this study should help us move beyond the question of whether DSP can work at two-year colleges and turn our attention to expanding its opportunity structures, maximizing its construct representation, and realizing its benefits for the least advantaged students in these settings.

I have argued that our current era of reform presents a kairotic moment for challenging writing placement practices that were long taken for granted but are now revealed to be unfair. While we should be wary of the neoliberal emphasis on “efficiency” and the narrow language and literacy ideologies pervasive in the discourses of community college reform, we might, as Warnke and Higgins (2018) suggest, recognize the possibilities of “interest convergence” with powerful parties (p. 368). We might use these reform discourses as strategic cover to pursue our own disciplinary social justice agenda: drawing on their best research to identify structural inequalities, redistributing the material resources reform efforts have garnered, and siphoning their rhetorical power in spaces where our own disciplinary discourses carry little influence. We might seize this moment to become what Warnke and Higgins (2018) have labeled critical reformers (p. 365). TYCA has called on two-year college faculty to see writing placement as a site for what Andelora (2013) and Sullivan (2015) call teacher-scholar-activism, as one domain beyond the classroom in which faculty might implement disciplinarily grounded practices that sustain educational access and advance equity (Hassel et al., 2015; Klausman et al., 2016). The heretofore unrecognized intellectual and administrative labor of two-year college faculty reported in this study demonstrates precedents and possibilities for DSP, offering teacher-scholar-activists both guidance and choice.

Author note: Christie Toth is an assistant professor in the University of Utah’s Department of Writing & Rhetoric Studies. She collaborates with two-year college colleagues, both locally and nationally, on inter-institutional initiatives, scholarship, and policy documents related to writing instruction and community colleges.

References

Adams, P. D., Gearhart, S., Miller, R., & Roberts, A. (2009). The Accelerated Learning Program: Throwing open the gates. Journal of Basic Writing, 28(2), 50–69.

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington: American Educational Research Association.

Andelora, J. (2013). Teacher/scholar/activist: A response to Keith Kroll’s “The End of the Community College English Profession.” Teaching English in the Two-Year College, 40(3), 302–307.

Bailey, T., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community colleges. Cambridge: Harvard University Press.

Bailey, T., Jeong, D. W., & Cho, S. W. (2010). Referral, enrollment, and completion in developmental education sequences in community colleges. Economics of Education Review, 29(2), 255–270.

Balay, A., & Nelson, K. (2012). Placing students in writing classes: One university’s experience with a modified version of directed self-placement. Composition Forum, 25.

Barnett, E. A., & Reddy, V. (2017). College placement strategies: Evolving considerations and practices (CAPR Working Paper). New York: Columbia University.

Bartholomae, D. (1993). The tidy house: Basic writing in the American curriculum. Journal of Basic Writing, 12(1), 4–21.

Bedore, P., & Rossen-Knill, D. F. (2004). Informed self-placement: Is a choice offered a choice received? WPA: Writing Program Administration, 28(1–2), 55–78.

Belfield, C., & Crosta, P. M. (2012). Predicting success in college: The importance of placement tests and high school transcripts (CCRC Working Paper No. 42). Community College Research Center, Columbia University.

Blakesley, D. (2002). Directed self-placement in the university. WPA: Writing Program Administration, 25(3), 9–39.

Blakesley, D., Harvey, E. J., & Reynolds, E. J. (2003). Southern Illinois University Carbondale as an institutional model: The English 100/101 stretch and directed self-placement program. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 207–241). Cresskill: Hampton Press.

Chernekoff, J. (2003). Introducing directed self‐placement to Kutztown University. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 127–147). Cresskill: Hampton Press.

Condon, W., Glade, F., Haswell, R., Johnson-Shull, L., Kelly-Riley, D., Leonhardy, G., … Wyche, S. (2001). Whither? Some questions, some answers. In R. H. Haswell (Ed.), Beyond outcomes: Assessment and instruction within a university writing program (pp. 191–205). Westport: Ablex.

Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage.

Cornell, C. E., & Newton, R. D. (2003). The case of a small liberal arts university: Directed self‐placement at DePauw. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 149–178). Cresskill: Hampton Press.

Crusan, D. (2006). The politics of implementing online directed self-placement for second language writers. In P. K. Matsuda, C. Ortmeier-Hooper, & X. You (Eds.), The politics of second language writing: In search of the promised land (pp. 205–221). West Lafayette: Parlor Press.

Cushman, E. (2016). Decolonizing validity. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=92

Das Bender, G. (2011). Assessing generation 1.5 learners: The revelations of directed self-placement. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 371–384). Cresskill: Hampton Press.

Elliot, N. (2015). Validation: The pursuit. College Composition and Communication, 66(4), 668–687.

Elliot, N. (2016). A theory of ethics for writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=98

Elliot, N., Slomp, D., Poe, M., Cogan, J. A., Broad, B., & Cushman, E. (2016). Forum: Issues and reflections on ethics and writing assessment. Journal of Writing Assessment, 9(1).

Fox, T. (1993). Standards and access. Journal of Basic Writing, 12(1), 37–45.

Fox, T. (1999). Defending access: A critique of standards in higher education. Portsmouth: Boynton/Cook.

Frus, P. (2003). Directed self-placement at a large research university: A writing center perspective. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 179–192). Cresskill: Hampton Press.

Gere, A. R., Aull, L., Green, T., & Porter, A. (2010). Assessing the validity of directed self-placement at a large university. Assessing Writing, 15(3), 154–176.

Gere, A. R., Aull, L., Perales, M. D., Escudero, Z. L., & Vander Lei, E. (2013). Local assessment: Using genre analysis to validate directed self-placement. College Composition and Communication, 64(4), 605–633.

Giordano, J. B., & Hassel, H. (2016). Unpredictable journeys: Academically at-risk students, developmental education reform, and the two-year college. Teaching English in the Two-Year College, 43(4), 371–390.

Gomes, M. (2018). Writing assessment and responsibility for colonialism. In M. Poe, A. B. Inoue, & N. Elliot (Eds.), Writing assessment, social justice, and the advancement of opportunity (p. forthcoming). Boulder: University Press of Colorado.

Griffiths, B. (2017). Professional autonomy and teacher-scholar-activists in two-year colleges: Preparing new faculty to think institutionally. Teaching English in the Two-Year College, 45(1), 47–68.

Harrington, S. (2005). Learning to ride the waves: Making decisions about placement testing. WPA: Writing Program Administration, 28(3), 9–29.

Hassel, H., & Giordano, J. B. (2013). Occupy writing studies: Rethinking college composition for the needs of the teaching majority. College Composition and Communication, 65(1), 117–139.

Hassel, H., Klausman, J., Giordano, J., O’Rourke, M., Roberts, L., Sullivan, P., & Toth, C. (2015). TYCA White Paper on Developmental Education Reforms. Teaching English in the Two-Year College, 42(3), 227–243.

Hern, K. (2012). Acceleration across California: Shorter pathways in developmental English and math. Change: The Magazine of Higher Learning, 44(3), 60–68.

Hodara, M., Jaggars, S. S., & Karp, M. M. (2012). Improving developmental education assessment and placement: Lessons from community colleges across the country (CCRC Working Paper No. 51). Community College Research Center, Columbia University.

Hu, S., Park, T., Woods, C., Richard, K., Tandberg, D., & Bertrand Jones, T. (2016). Probability of success: Evaluation of Florida’s developmental education redesign based on cohorts of first-time-in-college students from 2009-10 to 2014-15. doi:10.13140/RG.2.1.2292.8888

Hughes, K. L., & Scott-Clayton, J. E. (2011). Assessing developmental assessment in community colleges. Community College Review, 39(4), 327–351.

Inoue, A. B. (2008). Program Assessment and DSP Validation Study: First-Year Writing and Its Pilot DSP. Retrieved from https://www.fresnostate.edu/artshum/english/documents/undergrad/FYW%20Program%20Assessment-AY07-08v4.pdf

Inoue, A. B. (2009a). Self-assessment as programmatic center: The first year writing program and its assessment at California State University, Fresno. Composition Forum, 20.

Inoue, A. B. (2009b). The technology of writing assessment and racial validity. In C. Schreiner (Ed.), Handbook of research on assessment technologies, methods, and applications in higher education (pp. 97–120). Hershey: Information Science Reference.

Inoue, A. B. (2014, September 17). Re: DSP Rubrics [Electronic mailing list message]. Retrieved from https://lists.asu.edu/

Jaggars, S. S., Hodara, M., Cho, S.-W., & Xu, D. (2015). Three accelerated developmental education programs: Features, student outcomes, and implications. Community College Review, 43(1), 3–26.

Jensen, D. (2017). Tilting at windmills: Refiguring graduate education in English to prepare future two-year college professionals (Doctoral dissertation). University of Nebraska, Lincoln.

Jones, E. (2008). Self-placement at a distance: Challenge and opportunities. WPA: Writing Program Administration, 32(1), 57–75.

Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17-64). Washington: American Council on Education.

Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.

Kenner, K. (2016). Student rationale for self-placement into first-year composition: Decision making and directed self-placement. Teaching English in the Two-Year College, 43(3), 274–289.

Ketai, R. L. (2012). Race, remediation, and readiness: Reassessing the “self” in directed self-placement. In A. B. Inoue & M. Poe (Eds.), Race and Writing Assessment (pp. 141–154). New York: Peter Lang.

Klausman, J. (2008). Mapping the terrain: The two-year college writing program administrator. Teaching English in the Two-Year College, 35(3), 238–251.

Klausman, J., Toth, C., Swyt, W., Griffiths, B., Sullivan, P., Warnke, A., … Roberts, L. (2016). TYCA White Paper on Placement Reform. Teaching English in the Two-Year College, 44(2), 135–157.

Lewiecki-Wilson, C., Sommers, J., & Tassoni, J. P. (2000). Rhetoric and the writer’s profile: Problematizing directed self-placement. Assessing Writing, 7(2), 165–183.

Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5–11.

Messick, S. (1995). Standards of validity and the validity of standards in performance assessment. Educational Measurement: Issues and Practice, 14(4), 5–8.

Naynaha, S. (2016). Assessment, social justice, and Latinxs in the US community college. College English, 79(2), 196–201.

Neal, M., & Huot, B. (2003). Responding to directed self-placement. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 243–255). Cresskill: Hampton Press.

Nicolay, T. F. (2002). Placement and instruction in context: Situating writing within a first-year program. WPA: Writing Program Administration, 25(3), 41–59.

Ostman, H. (2013). Writing program administration and the community college. Anderson, SC: Parlor Press.

Otte, G., & Mlynarczyk, R. (2010). Basic writing. West Lafayette: Parlor Press.

Pinter, R., & Sims, E. (2003). Directed self-placement at Belmont University: Sharing power, forming relationships, fostering reflection. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 107–125). Cresskill: Hampton Press.

Poe, M., & Cogan, J. A. (2016). Civil rights and writing assessment: Using the disparate impact approach as a fairness methodology to evaluate social impact. Journal of Writing Assessment, 9(1).

Poe, M., Elliot, N., Cogan, J. A., & Nurudeen, T. G. (2014). The legal and the local: Using disparate impact analysis to understand the consequences of writing assessment. College Composition and Communication, 65(4), 588–611.

Poe, M., & Inoue, A. B. (2016). Toward writing as social justice: An idea whose time has come. College English, 79(2), 119–126.

Poe, M., Inoue, A. B., & Elliot, N. (Eds.). (2017). Writing assessment, social justice, and the advancement of opportunity. Boulder: University Press of Colorado.

Royer, D., & Gilles, R. (1998). Directed self-placement: An attitude of orientation. College Composition and Communication, 50(1), 54–70.

Royer, D., & Gilles, R. (2000). Basic writing and directed self-placement. Basic Writing e-Journal, 2(2).

Royer, D., & Gilles, R. (Eds.). (2003a). Directed self-placement: Principles and practices. Cresskill: Hampton Press.

Royer, D., & Gilles, R. (2003b). The pragmatist foundations of directed self-placement. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 49–72). Cresskill: Hampton Press.

Ruecker, T., Shepherd, D., Estrem, H., & Brunk-Chavez, B. (2017). Retention, persistence, and writing programs. Boulder: University Press of Colorado.

Schendel, E., & O’Neill, P. (1999). Exploring the theories and consequences of self-assessment through ethical inquiry. Assessing Writing, 6(2), 199–227.

Scott-Clayton, J. (2012). Do high-stakes placement exams predict college success? (CCRC Working Paper No. 41). Community College Research Center, Columbia University.

Scott-Clayton, J., Crosta, P. M., & Belfield, C. R. (2014). Improving the targeting of treatment: Evidence from college remediation. Educational Evaluation and Policy Analysis, 36(3), 371–393.

Slomp, D. (2016). An integrated design and appraisal framework for ethical writing assessment. Journal of Writing Assessment, 9(1).

Sullivan, P. (2006). An essential question: What is “college-level” writing? In P. Sullivan & H. Tinberg (Eds.), What is “college-level” writing? (pp. 1–28). Urbana: National Council of Teachers of English.

Sullivan, P. (2008a). An analysis of the National TYCA Research Initiative Survey, Section II: Assessment practices in two-year college English programs. Teaching English in the Two-Year College, 36(1), 7–26.

Sullivan, P. (2008b). Measuring “success” at open admissions institutions: Thinking carefully about this complex question. College English, 70(6), 618–632.

Sullivan, P. (2015). The two-year college teacher-scholar-activist. Teaching English in the Two-Year College, 42(4), 327–350.

Tompkins, P. (2003). Directed self-placement in a community college context. In D. Royer & R. Gilles (Eds.), Directed self-placement: Principles and practices (pp. 193–206). Cresskill: Hampton Press.

Toth, C. (2014). Unmeasured engagement: Two-year college English faculty and disciplinary professional organizations. Teaching English in the Two-Year College, 41(4), 335–353.

Toth, C. (2018). Directed self-placement at “democracy’s open door”: Writing placement and social justice in community colleges. In M. Poe, A. B. Inoue, & N. Elliot (Eds.), Writing assessment, social justice, and the advancement of opportunity (pp. 139–172). Boulder: University Press of Colorado.

Toth, C., & Aull, L. (2014). Directed self-placement questionnaire design: Practices, problems, possibilities. Assessing Writing, 20, 1–18.

Toth, C., Griffiths, B., & Thirolf, K. (2013). “Distinct and significant”: Professional identities of two-year college English faculty. College Composition and Communication, 65(1), 90–116.

Toth, C., Nastal, J., Hassel, H., & Giordano, J. B. (2019). Writing assessment, placement, and the two-year college. Journal of Writing Assessment, 12(1). 

Warnke, A., & Higgins, K. (2018). A critical time for reform: Empowering interventions in a precarious landscape. Teaching English in the Two-Year College, 45(4), 361–384.

White, E. M., Elliot, N., & Peckham, I. (2015). Very like a whale: The assessment of writing programs. Boulder: University Press of Colorado.

Williamson, M. (1994). The worship of efficiency: Untangling theoretical and practical considerations in writing assessment. Assessing Writing, 1(2), 147–173.

Willingham, W. W. (1974). College placement and exemption. New York: College Entrance Examination Board.

[1] Since closing data collection for this study in January 2016, I have heard from five additional community colleges that have piloted DSP.