The Micropolitics of Pathways: Teacher Education, Writing Assessment, and the Common Core
by J. W. Hammond and Merideth Garcia, University of Michigan
Within writing assessment scholarship, disciplinary discussions about the politics of pathways regularly question how reforms mediate education and affect education actors. This article complements and complicates these conversations by attending to the micropolitics of pathways: how local education actors mediate reform-related standards, and, in the process, pave what they believe to be locally-meaningful pathways. Taking the Common Core State Standards (CCSS) as our point of departure, our study centers on one important site for micropolitical work that has, to date, gone unstudied in CCSS-focused writing assessment research: teacher education, which involves coordination between secondary and postsecondary actors who might differently interpret and engage with externally-imposed reforms. Our findings suggest that while standards may be politically intended to mediate education and standardize pathways, teachers micropolitically interpret and repurpose those standards—strategically drawing on them as a means to communicate about local writing instruction and assessment. For this reason, we argue conversations about pathway-related reforms can benefit from adopting a micropolitical perspective, sensitive to the participation of teachers in locally constructing and maintaining educational pathways.
Keywords: Micropolitics of pathways; Common Core State Standards; Teacher education; Writing assessment; Local interpretation
Education reform often focuses on redesigning and managing educational pathways. By introducing standards, assessments, or curricula, these reforms seek to regulate the flow of students across grade levels and school sites—in the process, managing student advancement, opportunity, and attainment. Whether we look to past struggles over the American curriculum (Kliebard, 2004), or to present-day resistance to large-scale testing-related reform (Stein, 2016) and systematic over-testing (Lazarín, 2014), it seems the politics of pathways are never fully settled and are never far from our classrooms.
Writing assessment scholars are no strangers to these politics. They have written extensively, and often critically, about high-stakes, standardized testing-related reforms (e.g., Gallagher, 2011; Hillocks, 2002; Poe, 2008)—including the Common Core State Standards (CCSS) and its attendant large-scale assessments (e.g., Addison, 2015; Jacobson, 2015). In both pushing for curricular alignment and introducing assessments that purport to measure “college readiness,” the CCSS participates in paving the pathways students navigate in and between courses—including secondary-postsecondary pathways (Addison, 2015; Bailey, Jaggars, & Jenkins, 2015, pp. 139-141). The CCSS is intended to articulate education institutions, classrooms, and actors:
High standards that are consistent across states provide teachers, parents, and students with a set of clear expectations to ensure that all students have the skills and knowledge necessary to succeed in college, career, and life upon graduation from high school, regardless of where they live. (Common Core State Standards Initiative, “Frequently Asked Questions” n.d., p. 1)
In this manner, “The new standards … provide a way for teachers to measure student progress throughout the school year and ensure that students are on the pathway [emphasis added] to success in their academic careers” (Common Core State Standards Initiative, “What Parents Should Know” n.d., para. 3).
To date, writing assessment scholarship has raised significant concerns about the CCSS (e.g., Addison, 2015; Clark-Oates, Rankins-Robertson, Ivy, Behm, & Roen, 2015; Ruecker, Chamcharatsri, & Saengngoen, 2015). The pathways it promises are, many have argued, too rigidly or narrowly constructed; however, for all their supposed pathway-defining power, these standards are neither self-interpreting nor self-implementing. “Policy directives—at whatever level of education—do not execute themselves” (Gallagher, 2011, p. 463). Here is the tension at the core of the CCSS, and of pathway-defining standards, generally: Standards like the CCSS are never as autonomous or agentive as sometimes imagined; they are largely contingent on interpretation and implementation by the very actors they are intended to coordinate and perhaps constrain. In the words of Bridges-Rhoads and Van Cleave (2016), “we (and all teachers) create the meaning of the Standards in every instructional moment” (p. 271, emphasis in original).
Our article dwells on this tension. Turning to the CCSS, we explore the micropolitics of pathways, by which we mean the ways education actors negotiate and mediate pathway-related reforms. That is to say, we consider how the CCSS’s impacts on articulations, assessments, and curricula are micropolitically shaped by teachers. To borrow Gallagher’s (2011) turn of phrase, “being there matters” (p. 468). Even as the CCSS affords teachers a common vocabulary, its local meanings and effects remain reliant on local education actors—each of whom might have a different interpretation of the CCSS and its value. Homogenizing educational projects like the CCSS are always alloyed with heterogeneous local perspectives, assumptions, and aims. While perhaps obscured by standardizing efforts, local differences are not erased by them. Our work seeks to restore to view the active and strategic participation of teachers in the micropolitics of pathways.
To this end, our research centers on an aspect of education that, while gestured to (e.g., Ruecker et al., 2015), remains unstudied in CCSS-oriented writing assessment scholarship: teacher education work. English Language Arts (ELA) teacher education is a professional space that articulates K-12 and postsecondary actors who might have different beliefs about writing assessment, goals for writing education, and interpretations of writing standards. As such, this space is a useful one for writing assessment scholars interested in how different educators interact with and through pathway-related standards and assessments, like those the CCSS advances. Teacher education helpfully highlights micro-level engagements with the politics of pathways, drawing our attention to the local meanings of standards and the limits of pathway-standardizing efforts. The process of teacher education requires pre-service (“student”) teachers to navigate and negotiate novel organizational and professional expectations. As such, micropolitics are an important feature of teacher education and induction work (see, e.g., Kelchtermans & Ballet, 2002). Student teachers learn to engage with pathway-related reforms, while—at the same time—experienced educators explicitly guide them through this process. Our article draws on qualitative analysis of interviews conducted with nine educators engaged in secondary ELA teacher education: three field instructors, three mentor teachers, and three student teachers coordinated through a teacher education program at a large Midwestern university (henceforth, Midwestern University). These actors give voice to the micropolitical pathway work teachers routinely do when engaging with standards like the CCSS—work that existing writing assessment scholarship has remained largely silent on.
So much is said about the CCSS and its effects on American education—shaping (perhaps narrowing) curricula, constructing (perhaps constraining) postsecondary pathways—that we might forget that, when abstracted from local contexts and enactments, these standards virtually cease to exist. Pathways are stabilized micropolitically—if they are stabilized at all. The work undertaken here supports deeper understanding of the politics of pathways by taking seriously the ways educators always already mediate and actively participate in these politics. To be clear at the outset, our article is by no means intended as an endorsement or rejection of the politics of pathways promoted by the CCSS—though writing assessment scholars have persuasively advocated for (at least) healthy skepticism where the politics of externally-mandated standards and assessments are concerned (e.g., Addison, 2015; Gallagher, 2011; Hillocks, 2002; see also Brass, 2014). As former secondary ELA teachers ourselves, we sympathize with this advocacy; we are, at the same time, committed to underscoring the micropolitical importance of teachers in our conversations about the politics of pathways. If we want to change pathways, we must do more than change standards or assessments—we must, at a minimum, also change teachers’ minds. In the next section, we detail the importance of considering micropolitical engagements with externally-imposed standards (§2.0). We then outline methods (§3.0) and findings (§4.0) for our study, before concluding with a discussion of some implications of our findings for writing assessment scholarship (§5.0).
The ascendancy of national standards-and-assessment reform initiatives (like the CCSS) is only a recent entry in a saga that stretches back over a century (see Addison & McGee, 2015)—the story of complex pathways, diverse teacher practices, and how reformers have sought to manage them. In the past century, new assessment technologies (including writing scales, rubrics, holistic scoring, and automated essay scoring) have emerged in response to the perceived problem of heterogeneity (i.e., unreliability) across teacher assessments of student writing (Elliot, 2005). New pathway-related reforms have likewise proliferated, promising increased consistency, commonness, and standardization.
Still, educational complexity is not so easily tamed; the pathways that reforms put in place are seldom as stable and standardized as intended. This is as true for postsecondary reforms as it is for those primarily targeting K-12 education. To give one recent, community college-focused example, Bailey, Jaggars, and Jenkins (2015) have suggested student outcomes can be raised through adoption of what they call “guided pathways,” which provide students with directive guidance and a more focused curriculum—using faculty and advisors to coordinate (or guide) students “instead of letting students find their own paths through college” (p. 16). We might think of the guided pathways approach as something of a spiritual successor to the CCSS—at least to the extent that both reforms propose to manage the complexity of the curricular paths students take. Finding much promise in the guided pathways idea, Rose (2016) has nevertheless reminded us of “how messy and unpredictable the process of reform can be” (para. 12), noting that reforms relying on articulation between faculty members can run into particular challenges: “faculty can have quite different beliefs about concepts like ‘improving students’ lives.’ And some of these differing beliefs can present resilient barriers to change” (para. 18). Reform initiatives can only standardize so much; where their pathways lead is always partly contingent on the assumptions and aims of the teachers who maintain them.
Rose (2016) underscores that the politics of pathways—our overt contestation over the paths structured for students—can be complicated or confounded by the ways educators interpret and engage with reform initiatives, something Blase (2005) has called the micropolitics of educational change. The term “micropolitics” has been used in education research to account for the heterogeneity, dissensus, and complexity at the core of education work. In Achinstein’s (2002) words, “Micropolitical theories … spotlight individual differences, goal diversity, conflict, uses of informal power, and the negotiated and interpretive nature of organizations (Ball, 1987; Blase, 1987, 1991; Hall & Spencer-Hall, 1982)” (p. 423). Adopting a “micropolitical perspective” (Blase, 1991) increases our awareness that educator behavior is not fully shaped and determined by the structures educators participate in; instead, educators partly shape those structures through “the use of formal and informal power … to achieve their goals” (p. 11; see also Achinstein, 2002; Blase, 2005). This interpretive influence of teachers touches virtually every aspect of educational practice. The complex process of socializing new teachers is micropolitical (Kelchtermans & Ballet, 2002), as is the messy act of collaboration (Achinstein, 2002; Adamson & Walker, 2011) and the often-overlooked strategic work of interpreting standards and reforms (März & Kelchtermans, 2013; see also Dover, Henning, & Agarwal-Rangnath, 2016).
Thus, what Rose (2016) identified as messy behavior could be thought of as micropolitical—actions that can serve both “as a positive, facilitative force and as a negative, impeding force” (Blase, 2005, p. 269, emphasis in original) depending on the assumptions, aims, and actions of educators. Though not drawing explicitly on the vocabulary of micropolitics, Shulman (1999) made much the same point when he wrote, “initiatives to reform our schools will surely founder if they ignore the centrality of … teacher conceptions, beliefs, and practices. Educational change must always be mediated [emphasis added] through the minds and motives of teachers” (p. vii; see also Gallagher, 2011, p. 463). Writing assessment research on standards-and-testing reforms has said much about mediation—but the bulk of this commentary has been devoted to the ways these reforms mediate education, or are mediated by assessment artifacts and textbooks, and not the ways teachers mediate reforms. It is true enough that standards are media—what Jacobson (2015) has called “a genre of policy action” that reformats education (para. 34; see also Poe, 2008). It is true, too, that these standards are articulated through “genres of implementation” like high-stakes tests and textbooks that give standards form and force (Jacobson, 2015, para. 34; see also Brass, 2014; Hillocks, 2002). At the same time, this object-oriented focus risks backgrounding teachers, who are at least as important as tests and textbooks for mediating pathway-related standards. Assessment technologies that mediate the CCSS are, themselves, partly contingent on use and interpretation by human actors (see Jacobson, 2015, para. 43). While we might find that “the type of writing assessment mandated by the state will influence the writing instruction that high school students experience” (O’Neill, Murphy, Huot, & Williamson, 2005, p. 104; see also Hillocks, 2002), the nature and extent of this influence remains “mediated by teachers’ beliefs and attitudes” (Troia & Graham, 2016, p. 1738)—a point we return to in our findings (§4.0) and discussion (§5.0) sections.
Yet despite disciplinary understandings that teachers are important mediators of educational change (Blase, 2005; Gallagher, 2011), and that teachers’ beliefs and perceptions affect their teaching (Hillocks, 1999), teacher interpretation and negotiation of the CCSS remains understudied. Work of this kind is essential, for “there are more scholars theorizing about the CCSS than those who are actually collecting and analyzing data from teachers who are responsible for implementing the standards” (Ajayi, 2016, p. 3). To date, larger-scale research on teacher perceptions of the CCSS suggests teachers hold broadly positive views of the CCSS (Matlock et al., 2016), and of the writing and language standards specifically (Troia & Graham, 2016). Even several years into CCSS adoption and implementation, teachers report widespread unfamiliarity with the ELA CCSS-related assessments (Troia & Graham, 2016; see also Ajayi, 2016); they also hold conflicted views that those assessments “are more rigorous than their prior state writing tests” but “fail to address important aspects of writing development and do not accommodate the needs of students with diverse writing abilities” (Troia & Graham, 2016, p. 1740). Perhaps understandably, in trying to take the general measure of emerging teacher engagements with the CCSS, this existing scholarship has focused on broad patterns in teacher perceptions of the CCSS, seldom digging deeper into the messiness of these perceptions—or how teachers micropolitically engage with and locally instantiate the CCSS.
In practice, policies and reforms are always messy micropolitical matters. Our sense is that the micropolitical messiness of teacher perspectives and beliefs is too often tidied away by means of Likert scales and large-scale surveys. Inspired by the insights of recent interview-based CCSS studies pertaining to writing and literacy (e.g., Ruecker et al., 2015; Murphy & Haller, 2015), our project foregrounds and preserves some of the messiness and heterogeneity of teacher perceptions, complementing and complicating (not contradicting) the impressive and growing body of larger-scale research on teacher perceptions of the CCSS (e.g., Matlock et al., 2016; Troia & Graham, 2016) and teacher perceptions of testing, generally (e.g., Barnes, Fives, & Dacey, 2017; O’Neill et al., 2005). To this end, our findings below (§4.0) lend much-needed attention to the micropolitics of teacher engagements with the CCSS, foregrounding teacher agency in locally reinterpreting and negotiating the CCSS—and how to assess it.
We collected data for this IRB-approved, qualitative, interview-based study between April and June of 2015 from three professional subgroups of teachers. In this section, we describe the professional responsibilities of these three groups (§3.1), provide context for the school sites (§3.2), and explain the data collection and analysis practices for the research (§3.3).
Participants worked together at three different high school sites in professional triads, each composed of a field instructor, a mentor teacher, and a student teacher. Participants and the sites associated with them were assigned pseudonyms beginning with the same letter, chosen to alliteratively signal and clarify relationships. Sites and participants in Triad A begin with “A” (Amanda, Anne, and Alicia at Allendale High); those in Triad B with “B” (Barbara, Brenda, and Brandon at Bardstown High); and those in Triad C with “C” (Caleb, Cathy, and Cal at Clayville High) (Appendix A, Table A1).
3.1.1 Field Instructors. Recruitment began at Midwestern University by emailing field instructors in its ELA teacher education program. Three instructors expressed interest in participating—Amanda, Barbara, and Caleb (Appendix A, Table A2). All had previously been secondary ELA teachers. We asked them to recommend participants from student and mentor teacher pairs in their cohorts. Field instructors facilitated communication between the university and mentor teachers, supported student teachers in a weekly course relevant to placement experiences, conducted at least three classroom observations of student teachers, and attended beginning- and end-of-semester meetings with student and mentor teachers. They completed evaluations required for teacher certification, and frequently wrote recommendation letters for students’ applications to teaching jobs and graduate programs.
3.1.2 Mentor Teachers. This study included three mentor teachers—Anne, Brenda, and Cathy (Appendix A, Table A3)—from among those recommended by our field instructors. Mentor teachers opened their classrooms to student teachers and field instructors, providing student teachers the opportunity to observe instruction daily and, for part of the year, to take responsibility for two or more classes. They guided student teachers in preparing lessons according to school and state requirements, and helped student teachers apply abstract content and procedural knowledges to real workplaces. Mentor teachers completed two formal evaluations of student teacher performance for inclusion in the student teacher’s certification application.
3.1.3 Student Teachers. We drew on data from three student teachers—two (Alicia and Brandon) enrolled in the undergraduate teacher certification program, and one (Cal) in a Master’s level certification program (Appendix A, Table A4). The Master’s program placed students for the entire school year, while the undergraduate program placed students for one semester. Student teachers in both programs observed mentor teachers daily, coordinating with them to plan and enact instructional units (usually spanning four to six weeks) in at least two classes. They submitted unit plans to their field instructors for feedback and evaluation, and scheduled their field instructors’ observations to showcase developing instructional skills.
Secondary school sites were located in the same state as Midwestern University, a public Research I university whose teacher education program is accredited by the Teacher Education Accreditation Council. While a CCSS adoptee throughout data collection and the writing of this article, this state articulated its standards to and through a standardized test other than the Partnership for Assessment of Readiness for College and Careers (PARCC) and Smarter Balanced Assessment Consortium (SBAC) assessments. During the data collection period, Midwestern University hosted 22 secondary-level student teachers and partnered with a number of secondary school sites, including the three represented in our study.
3.2.1 Allendale High. Alicia described Allendale High as having a “relatively homogeneous” student population, and Anne explained “we are about 1200 students, 9 through 12. And we serve, primarily it’s suburban, upper-middle class or affluent families.” She added, “primarily we’re a school full of white students, but we do pull from a lot of other populations,” and that Allendale “tends to pull from students whose parents have found a way to land in the neighboring city and get themselves into the district.” State information indicates around 10% of the testing population scored not-proficient on the statewide standardized test given to the 268 11th-graders enrolled in Allendale during the 2014-2015 school year.
3.2.2 Bardstown High. Brenda said that Bardstown High had “about 1900 students there, so it’s large ... and it’s pretty homogenous,” serving a “mostly white” and “middle to middle-upper class” student population. She explained that parents selected Bardstown because “the scores are very high here … last year we had the number one AP scores in the state.” Brandon concurred: “It’s one of the best schools in the state, and it’s probably, I would say, probably considered one of the best schools in the Midwest for public schools.” State information indicates around 12% of the testing population scored not-proficient on the statewide standardized test given to the 464 11th-graders enrolled in Bardstown during the 2014-2015 school year.
3.2.3 Clayville High. Cathy described Clayville High as “a small alternative education setting with at-risk students in an urban setting. We have about 235 students total that range in age from 14 to 25.” In Cal’s account, Clayville primarily served students who “have been kicked out … or for other disruptive reasons have left their high school, and they are now here. Extremely homogenous—99%, just about, African American. All are high needs, high trauma.” State information indicates around 81% of the testing population scored not-proficient on the statewide standardized test given to the 44 11th-graders enrolled in Clayville during the 2014-2015 school year.
We conducted one-on-one semi-structured interviews (Appendix B) ranging from 30 minutes to an hour. (One participant, Alicia, submitted responses in written form.) Filler words (e.g., “um,” “uh”) were excised during transcription. We began by independently coding the data, attending to how participants interpreted and mediated the CCSS through their classrooms, paying particular attention to writing instruction and assessment. We returned to the data iteratively and collaboratively to tease out nuanced differences within and across participant responses. This analytic approach was supplemented with memos and notes shared between and reviewed by both researchers. At all analysis stages, we sought to document and learn from the diversity evident in participant accounts, rather than evaluate their comparative merits and omissions. Consequently, our work does not account for the myriad effects the CCSS might, in reality, have had—on pathways, curricula, and assessments—beyond those participants raised. Evaluating teacher perspectives and casting our analytic focus beyond them are crucially important projects, but they are not ours here.
Sensitive to the intended pathway-consolidating function of the CCSS, participants described the CCSS as having the potential to put teachers and students across the country (in Barbara’s words) “on the same page”—a phrase used also on the CCSS’s official webpage: “With students, parents, and teachers all on the same page [emphasis added] and working together toward shared goals, we can ensure that students make progress each year and graduate from high school prepared to succeed in college, career, and life” (Common Core State Standards Initiative, “Read the Standards” n.p.). Yet while each of our participants reported using the CCSS in some way in their curricular planning, none held identical perceptions of the CCSS, and none described using it (or locally assessing it) in quite the same way. Importantly, none of our participants reported that the CCSS fully determined the educational pathways their own students traveled down. Instead, participants reported micropolitically interpreting and repurposing the CCSS—drawing strategically on the standards to supplement and support the local pathways they already had in mind for students.
Cathy, for instance, asserted that standards themselves are—without local curation, negotiation, and interpretation—improper guides for the educational pathways traveled by students. “I think they’re [the CCSS] too restrictive,” she told us, adding:
I think standards in general are too restrictive. The needs of students change based on the environment the students live in and the environment that they’re going to be going into. If it’s a college prep school, standards might be, you know, … a little bit, they should be higher expectations. Students that are just going to go out into the world and just want to find jobs, and they just want their high school diploma so that they can have that, they’re [the standards] not as important. And sometimes … life lessons are more important than a school standard.
Here, Cathy’s claim was not just that education must be calibrated to the needs of students, but also, more specifically, that pathways precede standards, not proceed from them—and that teachers appraise the uses and usefulness of standards against the backdrop of the pathways they already imagine for students. Cathy reported that as 11th- or 12th-graders entered her Clayville High classes, “they come to me sometimes and all they need is one English class to graduate, but they’re only reading at a third or fourth grade level.” Her solution was not to abandon externally-developed standards entirely, but to curate or retrofit them to serve local needs and preexisting pathways. Cathy confided that rather than covering the whole of the grade-level standards, she preferred to “take one or two standards and teach the crap out of ’em” because she “would rather have them [students] master a few than half-master all of them.” Specifically, she focused on “the writing standards,” judging these to be in closest alignment with student needs.
This is not to say that the CCSS had no uses or effects for the participants we interviewed—far from it. The CCSS provided a medium and common vocabulary for managing professional communication in schools, discursively articulating their classrooms to broader educational imperatives (§4.1). Teachers were able to micropolitically coordinate and collaborate around the CCSS, and could use the language of the CCSS to authorize—rather than constrain—their instructional choices. In other words, their interactions with the CCSS were not passive, but strategic and rhetorical. Importantly for our purposes, this was true also where writing assessment was concerned (§4.2). Perhaps because they did not regard the CCSS as creating or determining students’ educational pathways, our participants did not imagine CCSS-related large-scale assessments to be the ultimate arbiters of student success in the ELA classroom. Instead, our participants devised assessments that aligned with their local aims and goals. In many cases, the CCSS seemed as much a pretext for these decisions as it was a prompt for them.
Our participants described the CCSS as a medium for managing communication with stakeholders and—by extension—signaling professional participation in the collective enterprise of American education. In this way, they framed the CCSS less as a reform that imposes pathways in (and between) schools than as a kind of rhetorical instrument teachers could use when describing the local instructional pathways they constructed. The CCSS afforded teachers in our study a common vocabulary for making local education pathways externally legible. In other words, teachers engaged with the CCSS micropolitically, leveraging its vocabulary to satisfy the complex professional requirement to make instruction-and-assessment decisions intelligible and palatable to an audience made up of multiple stakeholders. In the work of teacher education, this professional requirement involved (at minimum) communication between student teachers, mentor teachers, and field instructors. However, as many participants indicated, the common language found in the CCSS also had a communicative reach that extended beyond the professional triads we interviewed for our study. Standards may be media, but they are media teachers can strategically use.
4.1.1 Curricular Curation and Communication. Alicia noted that adoption of the CCSS and its terminology was not the same as adopting a new set of practices or goals. Instead, she appeared to regard the CCSS as a kind of institutional prosthesis for teachers, helpfully facilitating professional conversation by providing teachers with the terms necessary to voice what they were already doing. She told us, “The CCSS seems to allow quality teachers access to the language that describes the skills that they were likely already teaching in their classroom all along.” Underpinning this perception was the idea that the standards represented little more than what good teachers are always already doing in their classrooms—albeit, perhaps, without the vocabulary to make their positive practices known to stakeholders. “My classmates and I, generally, agreed that it [the CCSS] is a document that only suggests skills and lessons that ‘good teaching’ should have anyway,” she wrote. This sense was shared by field instructors Barbara and Caleb, the former of whom reported that, among teachers, “nobody’s bothered by it [the CCSS]”; nodding to the communicative uses of the standards, she asked, “what’s the big deal? It’s nice that everybody’s on the same page.”
As a general touchstone for talking about “good teaching,” the common language afforded by the CCSS was micropolitically useful to our participants, who drew on it to warrant their instructional decisions. Brandon, for his part, described the CCSS as micropolitically “beneficial” as a kind of professional lingua franca, enabling him to enter disciplinary conversations about professional expectations and practices. He confided,
when I got to student teaching, when my mentor teacher started talking about Common Core, I wasn’t like, deer in the headlights, or anything like that. I could discuss it with them. I talked to the principal a lot, and we had departmental meetings and things like that, and I wasn’t just sitting there completely with a blank stare on my face. I could contribute to the conversation….
The communicative uses of the CCSS were evident also in Brandon’s lesson planning. When he and his mentor teacher developed a unit plan for Huckleberry Finn, they started with themes that they wanted to emphasize—“such as friendship, love, and trust, empathy. [Be]cause those are the things that again, I just think they’re so important for kids to learn about”—then moved to the final assessments they would locally implement: a group presentation on banned books, a multiple-choice test on Huckleberry Finn, and a portfolio “compiled of a bunch of things” students had composed throughout the unit. After the text, themes, and summative assessments were settled, Brandon and his mentor “went through all the Common Core Standards” and matched them to lessons where “they’ll fit in.” In other words, Brandon strategically curated the standards, selecting those that best matched his goals and assessments to signal compliance. Cal, too, discussed the CCSS in terms of its communicative uses. He relied on the CCSS not to define the curricular path he paved for students, but instead to signal to outsiders that this path was an appropriate, standards-approved one.
Cal’s sense, though, was less that the CCSS opened professional doors than that it closed opportunities for professional censure. Speaking of the CCSS, Cal described the negative assumptions that might accompany failure to draw on the vocabulary of the CCSS: “if you don’t necessarily have it embedded into … you know your lesson plans and everything you’re doing, then you’re not necessarily an effective teacher.” Like Brandon and Alicia, Cal found that facility with the CCSS provided him a way to signal professional growth to the field instructor who supervised his lesson planning. As Cal told us, though, this lesson planning was a subtle rhetorical task: “while he wants to make sure that I know them [the standards], he also wants to make sure that I don’t know them, if that makes … any sense. It’s like, ‘Use them, but don’t necessarily be pigeon-holed by them.’” Instead of letting the CCSS narrow his curriculum, Cal mined the CCSS for pieces that were relevant, “tak[ing], like, bits and … two or three of them, varying on the grade level, and apply[ing] them for my unit plan on a weekly basis.” What his description touched on was a pattern in the way our participants engaged with the CCSS: They rhetorically used the standards, and actively resisted being used by them.
Cal’s mentor teacher, Cathy, also described curating the standards. She said of the CCSS, “After reading them, they’re pretty straightforward, and they can be kind of twisted however you need to use them.” Cathy went so far as to suggest that rhetorical engagement with the CCSS—strategically selecting, interpreting, and negotiating the standards—ought to be “a mandatory class,” because such training would facilitate pedagogical self-awareness and develop a capacity to communicate about the educational pathways locally maintained in the classroom. Cathy argued teachers do not need to demonstrate equal fidelity to all educational standards, but
it is important to know what you don’t like. And it’s important to be able to explain why. … I think every teacher needs to be educated to the point of being experts on these [the standards] because that’s the only way we can get around them if we need to.
No classroom is an island; educational pathways are the shared jurisdiction of multiple stakeholders and, as such, must be negotiated. Using the standards as a shared local language secured for our participants a kind of self-determination that could only come with persuasively communicating with outside stakeholders.
4.1.2 A Common Language for Proving and Improving Pedagogy. Indeed, like many educators (including one of the present writers), Cathy is both a teacher and a parent. As a parent, she appreciated that the CCSS provided a “much more user-friendly” means of communicating with her daughter’s teachers about the educational pathways they locally supported. In this way, too, the CCSS provided a common terrain for communicating about—and rhetorically contesting—the local paths teachers paved. Anne (Alicia’s mentor teacher) described herself as taking comfort in the communicative affordances of the CCSS when responding to parents who were “more demanding about the sorts of tasks that their kids do.” Anne gestured to the CCSS as a means to allay parents’ concerns that their students were not on an appropriate educational path: “I find it easier to say, ‘Look at all the things, the Common Core things, we’re doing!’” For Anne, making the connection between her lessons and the standards was a documentary process that demonstrated the validity of what she was already doing. Amanda—working as a field instructor with Anne and Alicia—expressed a similar, if stronger, conviction about the demonstrative potential of the CCSS. In her estimation, one major problem confronting teachers was the need to “prove” the validity of local instructional decisions to external stakeholders—a communicative requirement she had met through recourse to a “big old curriculum binder” in her own (pre-CCSS) instructional days. 
For this reason, she taught her student teachers to employ the CCSS as a warrant for the decisions they made, insisting, “what they teach should always be able to be proven—I use that word—you know, with the Common Core….” In this insistence, Amanda seemed to echo Cal’s commentary on the uses of the CCSS: within a professional community that had adopted the CCSS, to fail to speak the language of the standards was to court sanction—to be left without a persuasive micropolitical means of proving oneself to skeptical outsiders.
Brenda (Brandon’s mentor teacher) stressed that a “helpful thing [about the CCSS] is that it provides a common language,” replacing what might be thought of as the normal, Babel-like diversity of teacher vocabularies for describing the same practices. For example, she noted that what
we used to call an “assertion”—people know it as a “thesis”—but it is so clear in the Common Core that you have to have a “claim.” … It goes back to that language thing. It’s very helpful, I mean, all teachers talk about a “claim” now, you can read online, and everyone uses the word “claim,” and it’s logical. You make a claim, you have to support it.
In this account, the terminological mess of teaching was brought under greater control by adoption of the CCSS’s community-articulating vocabulary. As a National Writing Project-trained participant in district-wide writing-specific workgroups, Brenda was perhaps particularly sensitive to the diversity of ways different groups of teachers described the same essential practices and processes. Importantly, Brenda’s description of the CCSS here positioned it less as a reform of practice than of what we call that practice—with teachers adopting a new, shared convention for existing staples of their local work.
This terminological shift alone was one that Brenda believed beneficial. She held that educational pathways already in place would be more easily navigable by students, who would “have language that transfers from Teacher A to Teacher B.” This affordance of the CCSS was one voiced regularly in the interviews we conducted, with participants noting that a shared vocabulary lent educational pathways at least a superficial increase in intelligibility and coherence for students. When Brenda discussed movements between the classrooms of Teachers A and B, she was thinking specifically of vertical course alignment, with students advancing from “English 11A” to the next course (“English 11B”) in a sequence. However, other participants (like Anne and Caleb) noted the benefits of this shared vocabulary for students who, in Anne’s words, “have to be mobile.” As Caleb put it, an easily-navigable lateral movement from school to school is essential “so you can live in a country where people move a lot.”
Our participants held that the common vocabulary of the CCSS promoted local communication and pathway intelligibility—aiding students as they moved from class to class, teacher to teacher, and school to school. Crucially, though, the use of the CCSS as a micropolitical medium for communication did not also result in standardized, interchangeable courses. The local content of the pathways that participants maintained was neither imposed nor determined by the CCSS. Teachers navigated and micropolitically curated the CCSS, such that these seemingly “common” standards were interpreted and repurposed to suit teachers’ uncommon, locally-determined goals, including where assessment was concerned—a topic we turn to next (§4.2).
When asked how they assessed whether students were mastering the CCSS, no participants offered large-scale standardized testing as a possibility. Instead of using the large-scale, seemingly high-stakes testing closely associated with the CCSS to guide their classrooms, participants consistently described developing local, often low-stakes writing assessments to appraise CCSS mastery. Participants expressed a range of perspectives concerning the standardized large-scale tests associated with the CCSS, but interestingly, none reported their classrooms as being fully captive to them. Rather than follow a narrow curricular pathway determined by the CCSS, the teachers in this study curated the CCSS, strategically determining which standards to emphasize and how best to assess student mastery of them.
4.2.1 Mentor Teachers. All mentor teachers had some practical knowledge of prior and emerging state-mandated standardized tests: Anne and Brenda had seen the SBAC test in professional development and practice-test contexts, and Cathy served as the test coordinator for her campus. Anne considered the performance tasks facilitated by the SBAC’s adaptive testing a “big change. Instead of, you know, 25 multiple choice questions about grammar, we’re looking at higher-order thinking skills….” This increased rigor and complexity was not, by her account, an unproblematic good: “I’m thinking, ‘Gosh! This is really, really cumbersome in its … task, not only just physically, but also mentally.’ You know? And it—to me, it involves a lot of … technology skill that I don’t know that our students have.” Rather than calibrating her classes to CCSS-aligned large-scale tests, though, Anne reported another (more local) way the CCSS was instantiated: formative classroom assessment. “I don’t really prioritize essay grading. Instead, I try to prioritize whatever small chunks I can do and give them [students] feedback on immediately,” Anne admitted. Such a trade-off was consistent with one of her personally-held “goals as a teacher”—that is, “to get feedback to [her] kids in a more meaningful and timely way.” She took CCSS-adoption as an occasion to develop a local system for appraising student writing (in small chunks, rather than essays) that reflected her own aims and beliefs as an educator.
Having observed an SBAC practice implementation, Brenda thought the test “was fabulous. I thought it was amazing.” Praising the test for its explicit alignment, Brenda regretted the state’s decision to pursue a different—potentially less-explicitly aligned—testing system, stating, “I was really sad that we didn’t go to it.” Even voicing this support, Brenda described her classroom not as caught in the thrall of large-scale standardized test preparation, but instead as backward designed (see Wiggins & McTighe, 2005) against meaningful, locally-developed summative goals. Brenda’s school developed and implemented local CCSS-aligned “common final exams in the core classes” like English. In Brenda’s own classes, essays were assessed against a rubric targeting “focus correction areas” that “come from the Writing Core.” Brenda’s writing rubric was reframed, not narrowed, to reflect the CCSS’s commonly-shared vocabulary: “to be honest, I don’t think it’s [the rubric] anything extraordinarily different [from past, pre-CCSS rubrics], but the wording and the language is going to match the wording and the language on the Common Core. In fact, it might even list the strand.” In this way, the CCSS was micropolitically leveraged to signal (rather than determine) local writing assessment priorities—marking, in new terminology, the pathway Brenda had already set.
Cathy, too, reported that state-mandated large-scale tests had “gotten much more difficult” in response to the CCSS, but faulted the CCSS for misalignment to her students’ needs: “I really think … it focuses too much … towards the testing. I think that impacts students a lot, because our kids, … they need to be ready for the real world, and Common Core does not always address those needs.” Departing from other mentor teachers, Cathy said of the CCSS, “Yes, it’s raising the bar to a higher level, but sometimes students need more than that.” Here, “raising the bar” was equated to something decidedly less than what students needed. Cathy identified the “need to write an argumentative essay” as something that, for her students, perhaps “isn’t really important for their life needs. Can they write a resumé? That’s more important. Do they know how to look up a job application and fill that out? That’s more important for these kids.” In this rendition, the CCSS pulled too far in the direction of what were perceived to be postsecondary writing needs at the expense of more immediately valuable (professional) writing-related skills.
“I don’t look at the standards as standards, I look at them as suggestions,” Cathy claimed; “They’re a good place to start, but they can go either way. You can advance them, take it to the next step, or you can cut things out.” In a way, our study’s mentor teachers all practiced what Cathy preached here—micropolitically mediating the CCSS in the service of preexisting commitments and beliefs regarding assessment. They used the CCSS as license to increase and explore their preferred formative assessment strategies (Anne), leveraged the CCSS’s vocabulary to validate assessment practices they believed effective (Brenda), and strategically determined which standards were emphasized, how, and to what ends (Cathy).
4.2.2 Student Teachers. Echoing their mentor teachers, student teachers discussed negotiating and interpreting the CCSS by means of local assessment. Asked if the CCSS dictated or shaped classroom evaluations of student performance, Alicia stressed the multiple ways (beyond standardized testing) assessment could locally instantiate the CCSS:
it is all about the type of interpretation you take toward the CCSS. I believe that our assessments and classroom evaluations of student performance should be loosely based off of the skills that are offered by the CCSS. However, I do not think that this means that teachers need to merely provide a standardized test assessing these skills. Quite the opposite, in fact. I believe that teachers should provide final assessments that ask students to use a wide range of tasks the CCSS focuses on (e.g. using evidence to support a claim, determining the central idea of a text, etc.).
Consistent with this line of thinking, Brandon spoke favorably about state-mandated standardized testing and its ability to help “comparison throughout the U.S. education to be … a little more accurate,” but was careful to claim that such testing ought not drive curriculum, because “if you master the Common Core Standards, you’re going to do well on standardized testing.” Instead, when he discussed the local meaning the CCSS had for guiding assessment, Brandon thought not of state-mandated large-scale tests, but instead—like Brenda—talked of teachers in a department sharing common tests “based on the Common Core Standards.” Appropriate assessment was a local matter; as a baseline for describing “good teaching,” the CCSS provided local actors the vocabulary necessary to collaboratively develop (and discuss) shared local tests.
In stark contrast to Alicia and Brandon, Cal believed the CCSS was “dumbing down education,” and worried formative and individualized assessments might be crowded out because “people could be kind of pigeon-holed to only have a couple of assessments that actually show that students are following the Common Core standard….” Crucially, Cal’s concerns related less to his classroom than to those of other teachers more “willing to kind of take it [the CCSS] by law….” By contrast, Cal’s classes prioritized individualized writing assessment (“different benchmarks” indexed to individual students), privileging feedback-rich writing instruction grounded in the “notion of writing as rewriting”—all while carving out space for preparing students to create resumés (a local need identified by Cathy). Even under the CCSS, essay writing and assessment could serve, for Cal, as tools for teaching deeper life skills. “‘Listen,’” he exhorted an imagined audience of his students,
"Your writing can be something that can be extremely superb, but it’s something … that you have to be willing to work on, meaning that you need a work ethic for [it], and it has to be something that you have to realize that you have to accept criticism for and seek it out for that." And, hopefully, again, [students will] kind of retain that to their life.
Moving between the field instructors at Midwestern University and the mentor teachers at their high school placements, student teachers micropolitically negotiated the CCSS in the process of developing assessments they believed best supported student learning. In response to the CCSS’s standardizing potential, they insisted on the need for multiple measures of student progress (Alicia); imagined that collaboratively-developed, standards-aligned assessments would naturally prepare students for success on large-scale standardized tests (Brandon); and advocated individualized rather than standardized assessment, tailored to student needs and preexisting pathways (Cal).
4.2.3 Field Instructors. Field instructors had limited direct knowledge of the CCSS-related large-scale standardized tests and instead reported on what they gleaned secondhand in placement sites. Interestingly, though, all three field instructors expressed some form of support for the CCSS’s large-scale standardized tests—their perspectives underpinned by a more general sense that the CCSS represented little more than (in Caleb’s words) “a nice minimum” good teachers always already meet: “I remember the first time I read through ’em [the CCSS], … my feeling was, ‘Well if you’re not doing these things, what the heck are you doing in English class?’ … These are just the things that you should be doing.” Broadly speaking, the field instructors thought of the CCSS as aligning with their own micropolitical sense of what was normal or appropriate for “good” teaching. With this understanding in mind, field instructors expected the effects of CCSS-aligned standardized testing to be positive or unobtrusive—altering local practices only in cases of curricular negligence or ineptitude.
Within this context, Caleb referenced the idea that tests drive curriculum—a familiar concern in writing assessment scholarship (e.g., Hillocks, 2002)—but regarded this possibility as a feature rather than a flaw. Comparing the sample CCSS-aligned large-scale tests with previous state tests he had observed as a high school teacher, he told us,
this [CCSS-related assessment] seems more difficult, so more rigorous, more focused on critical thinking and synthesis of information, and I know that a lot of times, tests drive curriculum, so, I mean, I think the hope is that curriculum will become more rigorous than they were—than it was under [previous] state standards….
Importantly, though, when Caleb spoke about tests driving curriculum, he was not envisioning rote test preparation. His sense was, to the contrary, that quality instruction was already aligned with, and preparatory for, CCSS-aligned large-scale tests. Considering himself “ideologically aligned” with what he considered the CCSS’s emphasis on teaching with “big questions” in mind, Caleb claimed “teaching those types of lessons well ensures that kids are going to do fine on the assessment.” Caleb attributed the controversy over the CCSS large-scale tests to “misconceptions” in the wake of a weak introduction: “the way it [the CCSS] was sort of rolled out and implemented didn’t really promote a lot of clarity, and I think some parents are refusing to let their kids test, and some states are opting out….”
For their parts, Barbara and Amanda—neither of whom had seen a CCSS-aligned sample test—expressed regret that politics had complicated CCSS-aligned testing. Barbara’s central complaint about the new testing regime seemed to be that state-level political forces had been unwilling to commit to standards and tests long enough for schools to gauge requirements and prepare adequately—a kind of politics of pathways that exchanged student futures for political pride. The state, she argued,
was reluctant at first to go with the Core, and it’s like, you know, "Everybody else is on board, why…?" You know, the legislature again, feeling like, "We have to be autonomous. We don’t need to have the Core. We can have our own guidelines." And it’s like, "Why?" … And the same thing with the testing….
Amanda, by contrast, regretted that for all the CCSS’s promised commonness, its official large-scale tests remained plural, fragmented into the SBAC, PARCC, and other state-determined CCSS-related tests. “I thought … we would all have the same standardized test—which I hate—but if you’ve got to have ’em, I’d just as soon it’s the same thing, you know, across the board,” she maintained.
Whereas Barbara framed the problem of political equivocation in terms of local ability to adapt to new tests, Amanda’s concern seemed more that pluralization of CCSS-related large-scale tests contradicted the spirit of the CCSS: “I feel like we’re kind of wavering now. Like with the [state-specific CCSS-aligned test], why are we doing…? Why don’t we, h[ave]—I thought this was going to be like a national test, where everybody took the same test.” Potential articulations and pathways proliferated, Amanda worried, complicating the standardizing promise of the CCSS through tests that mediated and stabilized its meaning in different ways. Indeed, while none of the field instructors regarded the CCSS’s large-scale standardized tests as having an undue constraining effect on classroom instruction or assessment, all of them identified these tests as plagued by overtly political problems. The problems identified with these assessments were less a matter of local, micropolitical engagement than of macropolitical controversy and chaos—with local interpretation and navigation complicated by a confusing rollout (Caleb), state-level indecision (Barbara), and a national inability to adopt a single, standardized assessment (Amanda).
When this study’s participants communicated with one another about instruction and assessment, they invoked the CCSS as an articulating document. However, beneath the veneer of unity provided by the CCSS, we found substantive disagreement both within and across teacher groups, indicating that pathway-related reforms and the consensus they seek to impose are always fraught with local, micropolitical dissensus. For example, one might expect that field instructors would share an orientation toward the CCSS, using it to evaluate student teachers’ lesson plans in a state that had adopted the CCSS. Yet among our field instructors, Amanda insisted that every lesson plan build on a specific standard, Barbara encouraged student teachers to plan around skills and match standards to them retroactively, and Caleb reported that his student teachers attended more closely to selecting texts than to standards when planning. Consider, too, micropolitical dissensus within Caleb’s Clayville triad: Where Caleb viewed the CCSS as introducing more rigor to the curriculum, Cal saw it as “dumbing down education,” and Cathy explained that students needed more than what the standards offered. More generally, participants reported developing locally-meaningful lessons and assessments, and strategically curating the “common core” of standards to match uncommon local needs. We might say, playing on the terminology favored by Bailey et al. (2015), that within the putatively “guiding” structure offered by the CCSS, these participants took a “cafeteria-style” approach.
Teacher beliefs, assumptions, and aims regarding writing, assessment, and education informed the ways our participants performed and instantiated the CCSS—with different participants instantiating the CCSS in different ways. Our findings suggest that even among a small number of closely-connected teachers and teacher educators, the CCSS and its related assessments took on a multiplicity of meanings. This heterogeneity worked beneath the surface—and sometimes under the cover—of implied standardization; it was articulated through the common language of the standards. In this way, even a supposedly “common core” of standards can be as uncommon, as plural as the local actors articulating them. These differences point to the importance of conducting research that is attentive to the micropolitical work of local actors in interpreting and implementing pathway-related reforms. Where the politics of pathways are concerned, we miss much of the action—and deny much of teachers’ agency—if we focus on standards themselves as determining the educational pathways students take. In the remainder of this section, we briefly discuss the potential importance of this micropolitical perspective for shifting how we think about professional secondary-postsecondary partnerships (§5.1) and how we talk about teacher engagements with pathway-related standards and assessments (§5.2), before concluding with some limitations of our work and how future research might redress them (§5.3).
When suggesting productive responses to the CCSS, writing assessment scholarship often recommends attention to teacher training, centering teacher perspectives, and some form of closer, more meaningful articulation between writing studies specialists and K-12 teachers (e.g., Addison, 2015; Clark-Oates et al., 2015). Consider, as one example, Ruecker et al.’s (2015) suggestions from the 2015 special issue of the Journal of Writing Assessment dedicated to the CCSS—released just months after completion of our own data collection for this paper. Ruecker et al. argued “teacher perceptions provide readers a situated perspective of the implementation of the CCSS that is often lost as politicians, test makers, and other individuals fight over the value of the CCSS and the continued push for high-stakes standardized assessment” (2015, para. 3). What is lost, we might say, is the micropolitical perspective teachers bring with them. Suggesting “co-constructed workshops” and “co-teaching in both high school and college classes” as possible paths forward (2015, para. 54), Ruecker et al. reminded readers that close collaboration between secondary and postsecondary educators (including those involved in teacher education programs) has potential “to improve writing instruction for all students”—noting that “it is important to ensure that these relationships are collaborative and not top-down” (2015, para. 53).
We agree with this recommendation, and believe that an explicitly micropolitical perspective is helpful for more fully thinking it through. For instance, we might be led to ask: What does it mean to productively and non-hierarchically navigate around (or through) differences in perception, where standards and assessments are concerned? After all, what it means to “improve writing instruction for all students” is a matter subject to micropolitical negotiation. Because teacher education work is a space where secondary and postsecondary actors already collaborate closely—negotiating the meanings of standards, assessments, and the pathways they participate in—additional teacher education-centric research might aid writing assessment scholars in better understanding the micropolitics of teacher collaboration and conflict (see also Achinstein, 2002). Such research might also assist writing assessment scholars in understanding how professional articulations (e.g., secondary and postsecondary practitioner partnerships) affect the pathways students end up navigating in and between school sites. Relatedly, where writing assessment research is concerned, co-authorship with practicing teachers (see, e.g., Clark-Oates et al., 2015) is another way productive secondary-postsecondary partnerships can be pursued—one that, perhaps, provides an additional means by which the micropolitical work of those teachers can be made more visible within our disciplinary conversations.
Moreover, as an existing space where secondary and postsecondary actors are partnered, teacher education work can also benefit from explicit adoption of a micropolitical perspective. It might be helpful for teacher education to explicitly frame engagement with externally-mandated standards as a rhetorical, micropolitical process, training teachers to strategically curate and repurpose those standards, so that (in Cathy’s words) they “can get around them if [they] need to.” Foregrounding the micropolitical dimension of pathway-related reforms in this way could help secondary and postsecondary actors in developing what Kelchtermans and Ballet (2002) have called “micro-political literacy,” supporting them as they grapple with externally-mandated standards and assessments—pathway work that teachers (like those in our study) routinely participate in. While the future of the CCSS, like all reform initiatives, may be uncertain, it is worth remembering that the politics of pathways neither began with the CCSS, nor are these politics likely to end with it. For this reason, we have good cause to expect that—whatever reform initiatives the future holds—there will continue to be a need for teacher training that is sensitive to the micropolitics of pathways.
As postsecondary teacher educators and compositionists, we prize the impact that research and training in writing have had on our own practices, and on outcomes for our students. We also know firsthand, through our own secondary teaching experience, that many teachers do teach to the test—sometimes against their wishes or better judgment. We worry, though, that when writing assessment scholarship under-acknowledges the agentive role teachers have in mediating pathway-related reforms (see §2.0), we risk constructing secondary teachers as passive—perhaps even as unconsciously or uncritically adopting approaches to writing and writing assessment dictated by state-mandated testing imperatives. For instance, Hillocks (2002) seems to have explained the fact that his research “found few vocal critics of the [state-mandated] assessments and no one who made a systematic, articulate criticism of the state’s writing assessment” (p. 197) with the idea that the educators interviewed often lacked the criticality or training necessary to resist the influence of reductive state-mandated assessments. “Few teachers had special training in composition and rhetoric that might enable them to conduct a detailed critique of the [state-mandated] assessments,” Hillocks wrote, adding, “Indeed, it is much more common for the state assessment to become the theory of writing upon which teachers base their teaching” (p. 198).
Our study recommends caution where this kind of equation is concerned. Many of our participants did not express a particular distaste for the CCSS or its associated state-mandated assessments—including those participants with special training. Indeed, Brenda—who was National Writing Project-trained and taught a high school class on college writing—stands out as perhaps the participant most enthusiastic about the SBAC. Moreover, our participants embraced externally-mandated standards while interpreting them in ways that matched their local instructional goals, assessment preferences, and the writing constructs they privileged. With these insights from teacher education actors in mind, our study suggests a view of teachers not as passive cogs within the political machinery of pathway-related reforms, but instead as micropolitical mediators who make strategic use of those reforms. Micropolitics are not only in play when teachers resist or subvert reform initiatives; teacher support for or reclaiming of standards and assessments can also be an informed, strategic matter (see, e.g., Dover et al., 2016; März & Kelchtermans, 2013). When we reframe teachers as micropolitical actors, we increase the likelihood that our ways of talking about teachers and their perceptions—even those we roundly disagree with—are ways that honor, rather than displace, the intellectual agency of teachers.
There are, of course, clear limitations to our work here. For one, the CCSS is by no means the only set of standards with which educators engage; more work can be done to discuss the ways multiple, overlapping sets of standards and programs (e.g., Advanced Placement, International Baccalaureate) complicate the politics of pathways in local school settings. Also, because our study focuses on teacher perceptions, we did not triangulate interview data with classroom artifacts and observation data. As a result of this limitation, our research does not examine whether or how participant accounts correspond to outsider/researcher perceptions of classroom realities. Beyond our small, demographically-unrepresentative sample and the resultant non-generalizability of our findings, our interviews were not longitudinal, and omitted perspectives from other important actors and stakeholders—including students. Where engagements with the CCSS and related assessments are concerned, more can and should be done to track the ways that perspectives of all relevant actors change or remain stable over time.
Additionally, while explicit consideration of social justice concerns has been beyond the scope of the present project, it is important to remember that any efforts to define student needs and pave educational pathways are freighted with ethical significance. Recent writing assessment scholarship has underscored the need to consider our assessment practices within a social justice framework, critically questioning how our practices define and structure opportunity (see, e.g., Elliot, 2016; Poe & Inoue, 2016). We believe there is a need for future work that brings micropolitics, social justice, and writing assessment literatures into closer conversation. Particularly promising in this respect would be research critically considering how teachers’ local interpretations and repurposing of pathway-related standards participate in promoting (or impeding) educational opportunity (see also Dover et al., 2016).
We conclude with one avenue for future work that we believe would be a compelling addition to writing assessment research agendas regarding pathway-related reforms. On the whole, our participants displayed a degree of nonchalance where the CCSS was concerned. As Brenda told us plainly: “I guess, in the scheme of all things to be concerned about, this [the CCSS] is just not high on my list.” What was high on her list? Brenda reported that her school’s recent switch to a trimester system—purportedly made to save money—had the kind of dramatic impact on writing instruction and assessment that we, as researchers, initially expected to hear about when participants discussed the CCSS:
It’s [the trimester system] caused us to decrease the amount we write. The class size has gone up, the time in which to teach writing has gone down, and unless you want to grade papers every single night and virtually give up your family life at home, during the school year, you’re not teaching writing as much because with immediate feedback—how do you do that?
While time and course-load constraints might not be at the top of all K-12 teachers’ concerns, we feel there is some promise in coupling our consideration of pathway-related reforms and their effects with questions calibrated to gauge those effects relative to (or as they intersect with) other local constraints and imperatives. Expanding our research in this way promises a means for more meaningfully appraising the impacts of standards like the CCSS. It also affords us a clearer sense of which additional micropolitically-relevant factors impact local writing instruction and assessment—factors that might otherwise be underemphasized in our conversations about the politics of pathways.
J. W. Hammond is a doctoral candidate in the Joint Program in English and Education at the University of Michigan, where he researches writing assessment history, theory, and technology. His published work appears in the Encyclopedia of educational philosophy and theory (co-authored with Pamela A. Moss), the edited collection Teaching and learning on screen: Mediated pedagogies (co-authored with Merideth Garcia), and in the (forthcoming) collection Writing assessment, social justice, and the advancement of opportunity.
Merideth Garcia is a doctoral candidate in the Joint Program in English and Education at the University of Michigan, where she researches the intersection of digital and critical literacy practices. Her current project is an account of how teachers and teenagers co-construct meanings for technology in high school classrooms. Her published work appears in Teaching and Learning on Screen: Mediated Pedagogies (co-authored with J. W. Hammond), and in the collection Neil Gaiman in the 21st Century.
Queries regarding this article should be addressed to J. W. Hammond, Joint Program in English and Education, University of Michigan, 610 E. University, Room 4204, Ann Arbor, MI 48109. Contact: email@example.com
This research was supported in part by a University of Michigan Rackham Pre-candidacy research grant. We are grateful to our anonymous JWA reviewers for their instructive comments, and also to JWA’s editors, Diane Kelly-Riley and Carl Whithaus, for their patience and guidance. Particular thanks are owed to Norbert Elliot for the (characteristically generous) mentorship and recommendations he provided as we began drafting this article; to Christie Toth, for the (consistently good-natured) advice and support she provided us as we revised our work; and to Chandra Alston, without whose sponsorship our research would have been impossible.
References
Achinstein, B. (2002). Conflict amid community: The micropolitics of teacher collaboration. Teachers College Record, 104(3), 421-455.
Adamson, B., & Walker, E. (2011). Messy collaboration: Learning from a learning study. Teaching and Teacher Education, 27(1), 29-36.
Addison, J. (2015). Shifting the locus of control: Why the common core state standards and emerging standardized tests may reshape college writing classrooms. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=82
Addison, J. & McGee, S. J. (2015). To the core: College composition classrooms in the age of accountability, standardized testing, and common core state standards. Rhetoric Review, 34(2), 200-218.
Ajayi, L. (2016). High school teachers’ perspectives on the English language arts common core state standards: An exploratory study. Educational Research for Policy and Practice, 15(1), 1-25.
Bailey, T. R., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press.
Barnes, N., Fives, H., & Dacey, C. M. (2017). U.S. teachers’ conceptions of the purposes of assessment. Teaching and Teacher Education, 65, 107-116.
Blase, J. (1991). The micropolitical perspective. In J. Blase (Ed.), The politics of life in schools: Power, conflict, and cooperation (pp. 1-18). Thousand Oaks, CA: Corwin Press.
Blase, J. (2005). The micropolitics of educational change. In A. Hargreaves (Ed.), Extending educational change: International handbook of educational change (pp. 264-277). Dordrecht, The Netherlands: Springer.
Brass, J. (2014). Reading standards as curriculum: The curricular and cultural politics of the common core. Journal of Curriculum and Pedagogy, 11(1), 23-25.
Bridges-Rhoads, S., & Van Cleave, J. (2016). #theStandards: Knowledge, freedom, and the common core. Language Arts, 93(4), 260-272.
Clark-Oates, A., Rankins-Robertson, S., Ivy, E., Behm, N., & Roen, D. (2015). Moving beyond the common core to develop rhetorically based and contextually sensitive assessment practices. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=88
Common Core State Standards Initiative. (n.d.). Frequently asked questions. Retrieved from http://www.corestandards.org/wp-content/uploads/FAQs.pdf
Common Core State Standards Initiative. (n.d.). Read the standards. Retrieved from http://www.corestandards.org/read-the-standards/
Common Core State Standards Initiative. (n.d.). What parents should know. Retrieved from http://www.corestandards.org/what-parents-should-know/
Dover, A. G., Henning, N., & Agarwal-Rangnath, R. (2016). Reclaiming agency: Justice-oriented social studies teachers respond to changing curricular standards. Teaching and Teacher Education, 59, 457-467.
Elliot, N. (2005). On a scale: A social history of writing assessment in America. New York, NY: Peter Lang.
Elliot, N. (2016). A theory of ethics for writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=98
Gallagher, C. W. (2011). Being there: (Re)making the assessment scene. College Composition and Communication, 62(3), 450-476.
Hillocks, G., Jr. (1999). Ways of thinking, ways of teaching. New York, NY: Teachers College Press.
Hillocks, G., Jr. (2002). The testing trap: How state writing assessments control learning. New York, NY: Teachers College Press.
Jacobson, B. (2015). Teaching and learning in an “audit culture”: A critical genre analysis of common core implementation. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=85
Kelchtermans, G., & Ballet, K. (2002). The micropolitics of teacher induction. A narrative-biographical study on teacher socialisation. Teaching and Teacher Education, 18(1), 105-120.
Kliebard, H. M. (2004). The struggle for the American curriculum (3rd ed.). New York, NY: Routledge.
Lazarín, M. (2014, October). Testing overload in America’s schools. Retrieved from Center for American Progress website: https://cdn.americanprogress.org/wp-content/uploads/2014/10/LazarinOvertestingReport.pdf
März, V., & Kelchtermans, G. (2013). Sense-making and structure in teachers’ reception of educational reform. A case study on statistics in the mathematics curriculum. Teaching and Teacher Education, 29, 13-24.
Matlock, K. L., Goering, C. Z., Endacott, J., Collet, V. S., Denny, G. S., Jennings-Davis, J., & Wright, G. P. (2016). Teachers’ views of the common core state standards and its implementation. Educational Review, 68(3), 291-305.
Murphy, A. F., & Haller, E. (2015). Teachers’ perceptions of the implementation of the literacy common core state standards for English language learners and students with disabilities. Journal of Research in Childhood Education, 29(4), 510-527.
O’Neill, P., Murphy, S., Huot, B., & Williamson, M. M. (2005). What teachers say about different kinds of mandated state writing tests. Journal of Writing Assessment, 2(2), 81-108.
Poe, M. (2008). Genre, testing, and the constructed realities of student achievement. College Composition and Communication, 60(1), 141-152.
Poe, M., & Inoue, A. B. (2016). Toward writing assessment as social justice: An idea whose time has come. College English, 79(2), 119-126.
Rose, M. (2016, June 23). Reassessing a redesign of community colleges. Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2016/06/23/essay-challenges-facing-guided-pathways-model-restructuring-two-year-colleges
Ruecker, T., Chamcharatsri, B., & Saengngoen, J. (2015). Teacher perceptions of the impact of the common core assessments on linguistically diverse high school students. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=87
Shulman, L. (1999). Foreword. In G. Hillocks, Jr., Ways of thinking, ways of teaching (pp. vii-x). New York, NY: Teachers College Press.
Stein, Z. (2016). Social justice and educational measurement: John Rawls, the history of testing, and the future of education. New York, NY: Routledge.
Troia, G. A., & Graham, S. (2016). Common core writing and language standards and aligned state assessments: A national survey of teacher beliefs and attitudes. Reading and Writing, 29(9), 1719-1743.
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Alexandria, VA: ASCD.
Participant Information Tables
Semi-Structured Interview Questions
[NOTE: Questions marked with * were only asked of student teachers.]
Teacher training and experience:
- How many years of experience do you have teaching?
- *Did you have any experience teaching/working in an educational context before entering your certification program? If so, tell me something about those experiences.
- What were you responsible for in this educational context?
- What kinds of guides did you use or were you given in this context?
- In particular, were you responsible for planning lessons/activities?
- What kinds of guides did you use or were you given to plan activities?
- Briefly describe the institution you are [*student] teaching at (large, small, urban, rural, demographically homogeneous or diverse).
- What grades/subjects have you taught (or do you plan to teach)?
- Briefly describe your teacher training experience.
- *What is your program like? What do you feel your program is best preparing you to do?
Lesson planning and assessment:
- Describe your lesson planning process for me.
- How do you go about planning what students will do each day?
- How do you decide what material(s) students will cover?
- How do you assess whether students have achieved the learning goals you set out for them?
Knowledge of the CCSS:
- *Were you familiar with the CCSS before you began your teacher certification program? If so, what did you know about them?
- How did you first hear about the Common Core State Standards?
- What do you know about the organization that created the CCSS?
- Have you read the document? In what form (online, printed, condensed, complete)?
- Have you received any training or professional development in using the CCSS? If so, describe what you took away from that experience.
Beliefs and attitudes regarding the CCSS:
- What value (if any) do you think the CCSS have for classroom teachers?
- What concerns (if any) do you have about how the CCSS might affect classroom teachers?
- What value (if any) do you think the CCSS have for students?
- What concerns (if any) do you have about how the CCSS might affect students?
- Briefly describe any additional ways in which you think the CCSS might be valuable.
- Briefly describe any additional concerns you have about the CCSS and its effects.
Assessment and the CCSS:
- How do you evaluate (in class) whether students have met the CCSS?
- Do you think classroom evaluations of student performance are shaped or dictated by the CCSS? How?
- What do you know about the state-wide tests in development for measuring the CCSS?
- Have the state-wide tests for evaluating student progress changed in response to the CCSS? How?
- Have you ever implemented standards other than the Common Core? If so, do the CCSS seem the same or different from previous standards? Explain.
- Have the procedures used to evaluate you as a teacher been shaped or dictated by the CCSS? How?
CCSS and relevant social groups:
- *How would you describe your field instructor’s knowledge of the CCSS?
- *How would you describe your mentor teacher’s knowledge and implementation of the CCSS?
- How useful do you think the CCSS are for new teachers versus experienced teachers?
- Do you think the CCSS play a different role in the education of students in different kinds of programs (like advanced placement, regular classes, or remedial classes)? If so, can you walk me through the differences?
- What else would you like me to know about your thoughts on the CCSS?