Volume 8, Issue 1: 2015

Shifting the Locus of Control: Why the Common Core State Standards and Emerging Standardized Tests May Reshape College Writing Classrooms

by Joanne Addison

In 2010 the Common Core State Standards, a set of outcomes-based standards detailing core skills for K-12 English Language Arts and Math classrooms across the US, were released. This was followed by the release of related standards-based assessments, most notably the large-scale standardized tests developed through the Partnership for Assessment of Readiness for College and Career (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). Because the Standards and their attendant standardized tests are limited to the K-12 curriculum, they are generally thought of as something happening within our elementary and secondary schools, not something that may have a direct effect on how we teach writing at the college level. By mapping the increased control of professional development networks for teachers by private philanthropists and testing companies, vertical alignment of K-20 standardized tests, and new approaches to funding education reform and research, we can begin to see how and why the Standards and emerging standardized tests will reshape our college writing classrooms. Understanding this shift is crucial to reasserting teacher agency at all levels of the curriculum and reinforcing assessment as primarily a teaching and learning practice, not a system of accountability and control.


The Common Core State Standards (CCSS), released in 2010, mark the first time the United States has successfully moved to adopt nation-wide learning standards for our K-12 classrooms in almost every state.1 The CCSS are self-described as “a set of high-quality academic standards in mathematics and English language arts/literacy (ELA). These learning goals outline what a student should know and be able to do at the end of each grade” (Common Core State Standards Initiative [CCSS], 2015). While the Standards do not dictate use of any specific curriculum or pedagogy, they do aim to standardize what is taught in all of our public schools. Initially, forty-three states, the District of Columbia, and four territories adopted the Standards in full. And with the Standards come new high-stakes standardized tests, such as those developed through the Partnership for Assessment of Readiness for College and Career (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). Twenty-one states and the U.S. Virgin Islands are using the SBAC standardized test, and thirteen states are using the PARCC standardized test. The most significant lure used to encourage adoption of the Standards and new standardized tests was increased eligibility for President Obama’s Race to the Top grants—a $4.35 billion initiative (United States Department of Education, 2009, p. 2).

Many of us are more familiar with the controversies ignited by adoption of the Standards than with the Standards themselves. Critiques of the Standards fall into three broad categories: (1) the Standards are not developmentally appropriate—that is, in many instances they require children to master skills that they are not cognitively able to master at the specified age; (2) the Standards fail to address the most significant issues in education today—for example, the impact of poverty on educational opportunity; and (3) federally mandated accountability tied to the Standards has led to a massive increase in dollars and time spent on standardized testing, significantly decreasing instructional time and funds for other initiatives. So much criticism has been leveled against the Common Core State Standards, PARCC, and SBAC that many are simply waiting for the Standards to fail and the tests to be abandoned. As of this writing, two states have dropped the Standards and four are considering doing so. Further, PARCC initially counted twenty-six states among its ranks (now down to thirteen) while SBAC counted thirty-one (now down to twenty-one).2 But I would argue that waiting for failure is not the best route: even as criticism persists, teachers are being trained, textbooks and rubrics revised, and billions of dollars spent ensuring adherence to the Standards. And this money is not simply being spent by school districts on instructional materials; it is also being spent by private philanthropists and testing companies to build networks of influence that reach far into higher education, networks that are shifting the locus of control for education away from teachers and local public school systems and toward testing companies and other private entities that use assessment primarily as a tool of accountability and control.

Linda Adler-Kassner and Susanmarie Harrington (2010) have been tracking the accountability movement as it took form in the 1970s, with a focus on recent efforts to achieve accountability through standardization as it applies to the teaching of writing. As they assert, current uses of “accountability” in education reform debates do not allow for the “outcomes-focused efforts implied in teacher-driven, bottom-up work that takes into account students, teachers, and programs” (Adler-Kassner & Harrington, 2010, p. 84). Instead, corporate interests increasingly drive the accountability movement, diminishing teacher agency. In her 2012 article “The Companies We Keep or The Companies We Would Like to Try to Keep: Strategies and Tactics in Challenging Times,” Adler-Kassner outlined how five specific organizations are working to shape the accountability framework. At the time of its publication she noted that the role of the Common Core State Standards in shaping the accountability movement was still unfolding (Adler-Kassner, 2012, p. 122). We have now reached a point where we can consider, with some clarity, just what role the Standards play in the accountability movement at all levels of the curriculum.

In this article I will focus specifically on the ways that the Common Core State Standards are being positioned as the most well-funded and pervasive effort to date at ensuring accountability through standardization not just in our K-12 classrooms but increasingly in our college classrooms as well. By mapping the increased control of professional development networks for teachers by private philanthropists and testing companies, vertical alignment of K-20 standardized tests, and new approaches to funding education reform and research, we can begin to see how and why the Standards and emerging standardized tests will reshape our college writing classrooms. Understanding this shift is crucial to reasserting teacher agency at all levels of the curriculum and reinforcing assessment as primarily a teaching and learning practice, not a system of accountability and control.

How Did We Get Here from There?

Because the Standards and their attendant standardized tests are limited to the K-12 curriculum, they are generally thought of as something happening within our elementary and secondary schools, not something that may have a direct effect on how we teach writing at the college level. But the NCTE Policy Brief, “How Standardized Tests Shape—and Limit—Student Learning” (2014), details the effects of high-stakes standardized tests “includ[ing] changing the nature of teaching, narrowing the curriculum, and limiting student learning. English language Arts (ELA) teachers and their students feel these effects with special force because literacy is central in most standardized tests” (p. 1). These effects will not be left in the halls of our high schools upon graduation but rather carried with students into our classrooms. This carryover will be assured in states that allow the standardized tests designed to measure student command of the Common Core State Standards to be used as admission and placement tests for college. My home state of Colorado is one of many states using the tests developed by PARCC, which assess student learning as benchmarked by the Standards, for college admission and placement. As the Colorado Department of Higher Education makes clear on its website:

640 colleges and universities have committed to participate in PARCC. These colleges and universities, including many flagship universities and most of the largest state systems, have pledged to participate in the development of the new college-ready assessments in mathematics and English language arts/literacy and have signed on to ultimately use these tests as college placement tools [emphasis added] . . . . Colorado’s new higher education admissions and remediation policies allow institutions to use PARCC scores for both course placement and admissions purposes [emphasis added] (2015).

Similarly, in SBAC member states, almost 200 colleges have agreed to use SBAC test results for college placement. Tony Alpert, Smarter Balanced Executive Director, stated: “This is a game changer” (Smarter Balanced Assessment Consortium, 2015). Perhaps this game-changer could be viewed more positively if the tests had been developed in collaboration with teachers, grown out of new understandings of best practices and alignment of K-20 vertical curricula, and been employed as just one small part of an educational system’s overall practice of assessment and inquiry. In the case of tests such as that developed through PARCC, nothing could be further from the reality:

The English Language Arts Work Group for the CCSSI [Common Core State Standards Initiative], for instance, consisted of fourteen members, ten of whom were associated with ACT, Achieve, or the College Board (these members had titles such as Senior Test Development Associate; Associate Vice President, Educational Planning and Assessment System Development, Education Division; and Senior Director, Standards and Curriculum Alignment Services). The group also included representatives of three companies: America’s Choice (an educational ‘solutions provider’ owned by Pearson); Student Achievement Partners, LLC; and VockelyLang, LLC (a marketing firm). Rounding out the group was a lone retired English professor, well known for her work on the National Assessment of Educational Progress. Exactly zero practicing teachers served on this work group (Gallagher & Turley, 2012, p. 10).

The composition of the English Language Arts Work Group signals that, very early on, the development and implementation of the Standards were designed to shift the locus of control over education away from teachers and toward private philanthropists and testing companies. While a growing body of literature is beginning to uncover the ways in which the Standards are shifting control of the curriculum away from teachers and school districts and putting it in the hands of testing companies and private philanthropists at the K-12 level, much less attention has been paid to the ways in which the Standards and related efforts are fostering a similar shift in the locus of control for instruction at the college level. In fact, significant effort has been spent on presenting a picture of higher education as an informed supporter of the Standards.

For example, just as criticism of the Common Core State Standards was reaching new heights, the organization Higher Ed for Higher Standards was formed. As stated on their website: “The mission of Higher Ed for Higher Standards is to elevate the higher ed voice in support of efforts by K-12 educators to implement college- and career-ready standards, including the Common Core Standards” (Higher Ed for Higher Standards, 2015). Higher Ed for Higher Standards is a project of the Collaborative for Student Success, which many view as little more than a public relations arm of the CCSS because the same organizations designated as backers of the CCSS, most prominently the Bill and Melinda Gates Foundation, are also backers of the Collaborative for Student Success. Higher Ed for Higher Standards lists among its supporters organizations representing hundreds of public and private institutions, including the American Association for Colleges and Universities, the Association of Public and Land Grant Universities, and the State Higher Education Executive Officers Association. Additionally, they name as supporters several prominent college and university presidents, chancellors, and other academic officers from every state.3 In effect, it seems as if the higher education community has strongly positioned itself in support of the CCSS. But how many of us have read the Standards or examined the standardized tests designed to tell us what students know and can do by the time they reach our classrooms? How many of us have been given the opportunity to have a voice in defining our institutions’ positions on the Standards or on the use of PARCC and SBAC for college admissions and placement in our classrooms? Shifting the course being set for us requires understanding the ways in which initial and ongoing monetary support for the Standards is altering the nature of educational reform through a well-funded system of networks designed to continually reinforce assessment efforts framed as accountability measures.

In order to secure funding to launch the Common Core State Standards, Gene Wilhoit, former Executive Director of the Council of Chief State School Officers, and David Coleman, current head of the College Board (which oversees the SAT), convinced Bill Gates in 2008 that our educational system, and thus our potential workforce, was falling dangerously behind other countries, and that academic standards varied so dramatically between states that forty percent of college freshmen needed remedial classes (Layton, 2014). Gates not only agreed to become a primary investor in the CCSS; his foundation also continues to fund a system of networks to ensure the success of the CCSS at all levels. The National Writing Project is one example of an existing network being used to this end. In 2011, Sharon J. Washington, Executive Director of the National Writing Project, issued a statement informing members that President Obama had signed a bill eliminating direct federal funding for the National Writing Project. As she explained: “This decision puts in grave jeopardy a nationwide network of 70,000 teachers who, through 200 university-based Writing Project sites, provide local leadership for innovation and deliver localized, high-quality professional development to other educators across the country in all states, across subjects and grades” (2011). The defunding of the National Writing Project occurred at the same time President Obama began arguing for a historic $4 billion to fund his Race to the Top grant program, calling into question the framing of the Writing Project’s defunding as a matter of fiscal necessity. While the federal government ultimately restored a small amount of funding to the National Writing Project, many local sites across the U.S. are now funded in part by the Bill and Melinda Gates Foundation in return for their support of the Common Core State Standards. Describing a few of the grants geared toward the NWP illustrates not only this point but also how networks of influence are being established by private philanthropists.

In 2010 the National Writing Project received a $550,000 grant from the Bill and Melinda Gates Foundation. Teams of teachers from local sites throughout California and Massachusetts, as well as from Louisville, Boise, and Oakland (MI), were expected to “create a model for classroom teachers in writing instruction across the curriculum that will support students to achieve the outcomes of the Common Core Standards” (National Writing Project, 2010). In 2011 the Bill and Melinda Gates Foundation awarded $3,095,593 in grant money to local sites of the National Writing Project, including one at my own university, to “create curricula models for classroom teachers in writing instruction that will support students to achieve the outcomes of the newly state-adopted Common Core Standards” (“Denver Writing Project,” 2011). More recently, in 2014 the Bill and Melinda Gates Foundation announced the Assignments Matter grant program:

This grant opportunity is meant to introduce large numbers of teachers to the Literacy Design Collaborative (LDC) and its tools for making and sharing writing assignments. Specifically, we will introduce teachers to the LDC task bank and jurying rubric, tools meant to support teachers in creating clear and meaningful writing prompts. (National Writing Project, 2014)

Enlisting local sites of the National Writing Project in the work of the Literacy Design Collaborative is of direct importance to those of us concerned with writing assessment. In 2013, the Bill and Melinda Gates Foundation earmarked $12,000,000 to “incubate an anchor Literacy Design Collaborative (LDC) organization to further expand reach and impact [of the Common Core State Standards]” (Bill and Melinda Gates Foundation, 2013). The LDC claims to put “educators in the lead,” but educators are only allowed to lead within the relatively narrow parameters of rubrics designed and approved by the Collaborative. That is, what educators are leading is the development of assignments that are as closely aligned as possible with the Common Core State Standards. Educators do not, however, seem to be allowed to lead when it comes to constructing assessments for students or for themselves. For example:

[LDC] has created a process to validate the CCRS alignment of LDC-created content. The SCALE-created “jurying” process looks at how richly the tasks and modules engage academic content and build CCRS-aligned skills. Jurying can provide guidance on how to improve each module and is used to identify modules that are ready to share, as well as to spotlight those that reach the standards for “exemplary” that are in the LDC Curriculum Library (Literacy Design Collaborative, “Overview,” 2015).

Furthermore, teachers are expected to use the LDC-developed rubrics when assessing student work:

After a module’s instructional plan is taught and students’ final products (their responses to the teaching task) are collected, teachers score the work using LDC rubrics that are focused on key CCRS-aligned features as well as on the disciplinary knowledge shown in each piece. Visit the Rubric page for more information (Literacy Design Collaborative, “What Results,” 2015).

It is difficult to assess the LDC’s reach, although the organization claims to have “enabled” tens of thousands of teachers to prepare students for the twenty-first-century workforce. And even the briefest internet search reveals a long list of school districts, nonprofits, unions, and others that advocate the LDC approach to professional development. Indeed, with a $12,000,000 initial investment from the Bill and Melinda Gates Foundation, the LDC has the resources needed to incentivize and build professional development activities that are highly regulated and closely aligned with the CCSS, primarily by way of writing assessment activities, regardless of the local needs of an educational community. Such organizations may quickly position themselves to rival long-standing professional organizations such as the National Council of Teachers of English and efforts such as NCTE’s Read-Write-Think project.

Charting the Future of U.S. Higher Education

It is the depth of this system of networks that is important to understand as we make our way forward, particularly as recent changes in the role of private philanthropists shift assessment away from a practice of teaching and learning inquiry and toward one of accountability and control. Adler-Kassner, Addison and McGee, and others make clear that 2006 can be considered a turning point in the level of accountability and control through standardized testing that higher education increasingly faces. In 2006, then-Secretary of Education Margaret Spellings released the Spellings Commission’s report, “A Test of Leadership: Charting the Future of U.S. Higher Education,” which counseled a “robust culture of accountability” in higher education (p. 20):

We believe that improved accountability is vital to ensuring the success of all the other reforms we propose. Colleges and universities must become more transparent about cost, price, and student success outcomes, and must willingly share this information with students and families. Student achievement, which is inextricably connected to institutional success, must be measured by institutions on a ‘value-added’ basis that takes into account students’ academic baseline when assessing their results. This information should be made available to students, and reported publicly in aggregate form to provide consumers and policymakers an accessible, understandable way to measure the relative effectiveness of different colleges and universities. (p. 4)

In their summary, the Commission noted, “According to the most recent National Assessment of Adult Literacy. . .the percentage of college graduates deemed proficient in prose literacy has actually declined from 40 to 31 percent in the past decade” (2006, p. 3). And in their recommendations, the Commission “urge[d] these institutions to develop new pedagogies, curricula and technologies to improve learning. . .” (2006, p. 5). Administrators and educators were told that one primary way to improve learning is through the use of value-added standardized tests.

It seems our institutions of higher education agree. In response to the Spellings Commission, an alliance of more than 300 colleges and universities, as well as organizations such as the Association of Public and Land-grant Universities and ACT, formed the Voluntary System of Accountability (Perez-Pena, 2012). This represents over 50% of AASCU and APLU members (Voluntary System of Accountability, 2015). Members of the Voluntary System of Accountability approve of and encourage the use of the ETS Proficiency Profile, the Collegiate Assessment of Academic Proficiency (ACT), and the Collegiate Learning Assessment (CLA) to measure the value added to a student’s academic growth by attending a specific institution.

Examining one of these tests can tell us more about the networks and current reach of the CCSS and its primary investor, the Bill and Melinda Gates Foundation. The CLA is a performance-based standardized test designed to measure an institution’s effect on improving student writing and critical thinking skills. It is generally administered to a sample of college freshmen and seniors to measure change in student abilities over the course of a traditional academic career, or the value added by attending a specific institution in terms of a student’s improved writing and critical thinking skills. Use of the CLA has steadily increased since its inception in 2002, and it now counts 700 higher education institutions among its ranks. Of note is that CAE (Council for Aid to Education), the organization that administers the CLA, is working with organizations developing Common Core State Standards assessments to ensure alignment between their standardized tests and those used at the college level, including the CLA. This effort is known as the Common Core State Standards Validation through Assessment Project, and it is funded by the Bill and Melinda Gates Foundation (Steedle, Zahner, & Patterson, 2013).
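To make the value-added logic concrete, the sketch below illustrates one generic way such a calculation can work: predict senior performance from entering-student characteristics and treat the gap between observed and predicted senior scores as an institution’s “value added.” This is a simplified illustration of the general idea only, not the CLA’s or CAE’s actual scoring model; the data, variable names, and coefficients are entirely hypothetical.

    # A simplified, hypothetical illustration of a "value-added" calculation:
    # seniors' observed scores are compared against the scores predicted from
    # entering-student characteristics. This is NOT the CLA/CAE model, only a
    # generic sketch of the value-added idea, using simulated data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of institutions

    # Simulated institution-level averages for entering students and seniors.
    freshman_score = rng.normal(1000, 100, n)
    hs_gpa = rng.normal(3.2, 0.3, n)
    senior_score = 200 + 0.9 * freshman_score + 40 * hs_gpa + rng.normal(0, 30, n)

    # Predict expected senior performance from entering characteristics.
    X = sm.add_constant(np.column_stack([freshman_score, hs_gpa]))
    expected = sm.OLS(senior_score, X).fit().predict(X)

    # "Value added" = observed senior score minus the score predicted from
    # the institution's entering students.
    value_added = senior_score - expected
    print(value_added[:5])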

For this project, CAE developed a standardized test of college readiness that was administered to entering college freshmen. Outcome data such as grades in first-year courses as well as high school and college GPA were analyzed at the end of the students’ first year of college. The test used for this project was similar to the performance-based CLA currently in use, although closely aligned with the Common Core State Standards. The goal of the project was to determine whether mastery of the Standards, as measured by CCSS-aligned standardized test scores, does in fact predict success in college at the end of the first year.

Thirty-two professors were recruited to participate in this project, including nine English professors. It is important to note that all of the professors were participants in CLA Performance Task Academies—professional development workshops focusing on the use of performance-based assessments in the classroom. Perhaps of more importance is that the report doesn’t list the professors’ areas of teaching or research, so we have no way of knowing if this group included writing specialists, let alone experts in writing assessment. The project relied on these nine professors to identify which of the ELA Standards were most important to include in a test of college readiness. As the authors of the report themselves made clear, this sample “raises concerns about the statistical precision of results as well as the generalizability of results to other professors” (Steedle et al., 2013, p. 9).

Once the professors identified the most important benchmarks in the ELA Standards that should be included in a test of college readiness, the Common Core State Standards Validation Assessment (CCSSVA) was developed and administered to 749 students at universities representing a range of selectivity, size, and diversity. In the end, higher scores on the CCSSVA proved to be a significant predictor of a student receiving a B or higher in college composition, although “there were plenty of students who performed relatively poorly on the CCSSVA test and still obtained high grades in their courses” (Steedle et al., 2013, p. 53). Somewhat buried in this study is another result that should be the real focus of our discussions around high-stakes standardized tests: “Results from this study revealed that high school GPA was the single best predictor of first-year GPA” (Steedle et al., 2013, p. 61)—not the results of the Common Core State Standards Validation Assessment.

In a much larger study recently conducted by the National Association for College Admission Counseling, “Defining Promise: Optional Standardized Testing Policies in American College and University Admissions” (Hiss & Franks, 2012), we learn of the same result. This study set out to uncover whether standardized testing leads to the most useful predictive results or whether it limits the pool of applicants “who would succeed if they could be encouraged to apply” (Hiss & Franks, 2012, p. 2). Using data for 123,000 students at twenty private colleges and universities, six public universities, five minority-serving schools, and two arts schools, the researchers found “the differences between submitters [of ACT or SAT scores] and non-submitters are five one-hundredths of a GPA point, and six-tenths of one percent in graduation rates. By any standard, these are trivial differences” (Hiss & Franks, 2012, p. 3). Quite telling is the finding that students who don’t submit test scores are more likely to be first-generation, minorities, women, Pell Grant recipients, and students with learning differences (Hiss & Franks, 2012, p. 3). As the authors concluded, “There are dramatic choices to be made: the numbers are quite large of potential students with strong high school GPAs who have proved themselves to everyone except the testing agencies” (Hiss & Franks, 2012, p. 61).

An earlier study by Saul Geiser and Maria Veronica Santelices in 2007 found the same result when trying to determine the single best predictor of college success. Using data from nearly 80,000 students in the University of California system, Geiser and Santelices found not only that high school GPA (HSGPA) was the single best predictor of student success as measured by first-year college GPA, but also that the predictive strength of HSGPA persisted throughout students’ college careers. Even a study conducted by the College Board itself in 2008, using data from close to 150,000 students, concluded that the single best predictor of college success was HSGPA (Kobrin, Patterson, Shaw, Mattern, & Barbuti, 2008, p. 5). Geiser and Santelices suggested this may be because “Whereas standardized test scores are usually based on only one or two test administrations, HSGPA is based on repeated sampling of student achievement over time in a variety of academic settings” (2007, p. 16). Not only is HSGPA based on repeated sampling in a variety of settings, it also reflects a range of formative and summative assessments, that is, the use of multiple measurement tools by a variety of people over a relatively long period of time. This creates the conditions for HSGPA to represent a robust assessment of student achievement, as opposed to a standardized test score, which is a single measurement taken at one specific moment in time.
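For institutions that want to test these claims against their own data (a question I return to in the recommendations below), a minimal sketch of such a comparison follows. It assumes a hypothetical institutional dataset with columns hs_gpa, test_score, and first_year_gpa; the setup is my own illustration of how the relative predictive strength of two measures might be compared, not the procedure used in any of the studies cited here.

    # A minimal, illustrative sketch (not the cited studies' actual methods) of
    # comparing the predictive strength of high school GPA and a standardized
    # test score for first-year college GPA. The file and column names are
    # hypothetical placeholders for an institutional research extract.

    import pandas as pd
    import statsmodels.api as sm

    # One row per first-year student: hs_gpa, test_score, first_year_gpa.
    df = pd.read_csv("first_year_cohort.csv")

    # Standardize the predictors so their coefficients are directly comparable.
    predictors = df[["hs_gpa", "test_score"]]
    z = (predictors - predictors.mean()) / predictors.std()

    # Regress first-year GPA on both predictors at once.
    model = sm.OLS(df["first_year_gpa"], sm.add_constant(z)).fit()
    print(model.summary())

    # Zero-order correlations as a second look at each predictor on its own.
    print(df[["hs_gpa", "test_score", "first_year_gpa"]].corr()["first_year_gpa"])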

No single tool can capture the complexities that lead to student success. The diversity of assessments reflected in HSGPA is extremely important not just as an indicator of college readiness; it should also be important as an indicator of a student’s ability to write in multiple contexts for multiple audiences under a variety of conditions. Monolithic assessment projects such as that of the Literacy Design Collaborative, and the movement to align our most prominent standardized tests with the CCSS, such as the realignment of the CLA discussed earlier, threaten the very diversity of assessment that leads to critical thinkers and flexible writers. And because this alignment involves not just standardized tests designed for college admissions and placement but also standardized tests used to measure growth during the college years, this homogenization may lead to the same narrowing of the curriculum and limiting of student learning at the college level that has been documented at the K-12 level.

Perhaps even more interesting is that when the College Board researched the predictive strength of the SAT in the 2008 study referenced above, it also set out to determine whether any one of the individual SAT sections (Math, Critical Reading, and Writing) was better than the others at predicting college success. Ultimately, the Writing section was found to be the best predictor of college success (Kobrin et al., 2008, p. 6). Relatedly, in 2007 Geiser and Santelices found that while HSGPA was consistently the single best predictor of college success, the second best single predictor was the SAT II Writing score, “for both first and fourth-year college grades. . . [for all] UC campuses, and academic fields, with only minor exceptions” (p. 12).4

It is not at all clear how findings such as these figure into the SAT redesign set for 2016, given that, as David Coleman, president of the College Board and a major architect of the Common Core State Standards, has explained, the redesign will make the SAT essay section optional. To be clear, I am not advocating for timed impromptu standardized tests. Les Perelman and others have convincingly argued that these types of tests have significant issues, such as a timeframe so limited that it allows neither the critical reading and integration of substantial texts nor significant planning and revision—two factors found to consistently improve student writing (Perelman, 2008, p. 28). Still, it is curious that even though the SAT writing test has been shown to be the second best single predictor of college success after HSGPA, the essay exam will be made optional because “the writing test added in 2005 ‘has not contributed significantly to the overall predictive power of the exam’” (Jaschik, 2014).

Reversing the shift

What may be clear by now is that much of this would not be possible without recent significant shifts in the funding landscape. Research published in 2011 by Cassie Hall traced fundamental shifts in philanthropic giving to higher education. In the past, foundations generally provided direct grants to institutions, focused on capital construction, academic research, or programmatic efforts. But as Hall revealed, “a new philanthropic approach to higher education has been developing over the past decade—one that emphasizes broad-scale reform initiatives and advocates for systemic changes through policy work” (2011, p. 2). This new approach is referred to as “advocacy philanthropy,” and Hall argued that it will have a significant impact on higher education (2011, p. 50). There are several possible benefits, concerns, and emerging outcomes of this shift. Hall counted among the benefits the fact that foundations draw attention to important problems, use grant initiatives to bring key actors together, and scale reforms to a level that can result in relatively swift and substantive change (2011, pp. 96-100). Among the concerns are a lack of external accountability, the stifling of innovation (and, I would add, diversity) through large-scale, prescriptive grants, and an unprecedented level of influence over state and federal policies (2011, pp. 96-100). Hall posited that among the emerging outcomes are less available money for field-initiated academic research, a shift from a local to a national focus, and a growing lack of trust in higher education (2011, pp. 83-92).

We can see how this withdrawal of trust from higher education institutions is enacted in the ways in which teacher and student agency are increasingly restricted and assessment is used as a tool of accountability and control. As Hall (2011) concluded, “the Bill and Melinda Gates Foundation and the Lumina Foundation for Education have taken up a set of methods—strategic grantmaking, public policy advocacy, the funding of intermediaries, and collaboration with government—that illustrate their direct and unapologetic desire to influence policy and practice in numerous higher education arenas” (p. 109). Certainly, this exploration of why the Common Core State Standards and emerging standardized tests may reshape college composition classrooms bears out the claims made by Hall. Indeed, as she documented, “college ready funding has been the largest funding priority for the Gates Foundation” in the United States (Hall, 2011, p. 36). This funding is not simply about direct support for the development of the Common Core State Standards but about establishing dominant support networks designed to control assessment and policies in our schools, districts, and communities nation-wide.

We need to respond to this shift in the locus of control in ways that reassert student and teacher agency at all levels of the curriculum. A significant part of doing so involves reinforcing assessment primarily as a practice of teaching, learning, and inquiry, not a system of accountability and control, as NCTE has asserted: “Assessments should aid learning, not merely audit it. Assessment for accountability purposes is necessary, but assessments are most valuable when they are locally constructed, provide immediate and useful feedback, and involve students in meaningful activities” (2014 NCTE Education Policy Platform).

As a first step in this process, those of us in higher education need to begin asking questions of our home institutions:

  1. Where do we stand on Common Core State Standards and the use of standardized tests such as PARCC and SBAC for college admissions and placement?
  2. Have we joined any consortia related to CCSS and standardized testing, and does doing so improve teaching and learning at our institution and within our local communities?
  3. Have we conducted analyses similar to those of Geiser and Santelices on the predictive strength of various measures of college success for our own student body, especially in relation to underrepresented minority students (including LD students) who are typically disadvantaged by such tests?
  4. If we are using value-added standardized tests such as CLA, in what ways are assessments at the institutional level used to inform a recursive process to build context-specific assessments that inform teaching and learning in the classroom through meaningful professional development opportunities?

Also from the base of our home institutions and local contexts, instead of allowing umbrella groups such as Higher Ed for Higher Standards to imply wholesale adoption of the CCSS by our institutions of higher education, we should exercise the 15% guideline in order to help shape the standards. The 15% guideline allows individual states to add 15% more content to the CCSS that they feel is important. Privately funded organizations such as Achieve argue that states should be “judicious” about adding content, as “a literal interpretation by states of the 15% guideline (that is, 15% added at every grade level and in each subject) would undermine the very reason the states developed the Common Core Standards in the first place” (Achieve, 2010, p. 22). Keeping in mind that the states did not develop the Common Core Standards in the first place, I would argue that we work to intervene in the implementation of the CCSS precisely by adding 15% content that is rooted in best practices as outlined by NCTE, CCC, and others—especially when it comes to scaffolding college-ready writers.

At the national level, we need to work with the leaders of our professional organizations to find ways to improve the strength of our voice and recommendations. One example of such an effort is NCTE’s Rapid Response Assessment Task Force, formed in the summer of 2014. Further, we are likely to improve our reach through strategic partnerships with other professional organizations to sponsor the sorts of networks that supporters of the Common Core State Standards have established to effect change at all levels. For example, how might we expand the reach of NCTE’s Read-Write-Think initiative through our work with local school districts and the teachers in our classrooms? Relatedly, in order to strengthen our voice we must be willing to publish in forums not traditionally recognized and rewarded by higher education in promotion and tenure but increasingly important in rebuilding public trust and moving beyond standardized test scores as the most visible measure of the work we do in writing classrooms at all levels. Indeed, Mike Rose called us to write for the public in his 2012 CCCC Exemplar Award acceptance speech: “Frame a career that along with the refereed article and research monograph includes and justifies the opinion piece and the blog commentary—and craft a writing style that is knowledgeable and keenly analytic and has a public reach” (p. 542).

Finally, where we do find the Common Core State Standards to be a moment of opportunity to shape positive change at the K-12 and college level, we must also find ways to intervene in their implementation, impact, and needed revisions. As argued by Christopher Gallagher and Eric Turley in Our Better Judgment: Teacher Leadership for Writing Assessment (2012), we need to place teachers’ professional judgment at the center of education and help establish them as leaders in assessment. Doing so requires re-envisioning formative and summative assessment as a process of inquiry at the heart of the work of teachers.

 

Notes

  1. Arguments have been made for nation-wide curricula or standards since at least the establishment of the Department of Education in 1867 (later demoted to an Office and then elevated again to a Department), although staunch adherence to states’ rights has kept such efforts at bay until now.
  2. Initially, twelve states belonged to both testing consortia.
  3. A full list of members can be found here: http://higheredforhigherstandards.org/supporters/
  4. Prior to 2005, the College Board offered an SAT writing subject test that included an essay exam. In 2005 the SAT was redesigned so that an essay exam was included in the SAT itself. The latest redesign, set for release in 2016, will make the essay exam optional.

 

 

References

Achieve, Inc. (2010). On the road to implementation. Retrieved from http://www.achieve.org/files/FINAL-CCSSImplementationGuide.pdf

Addison, J., & McGee, S. J. (2015). To the core: College composition classrooms in the age of accountability, standardized testing, and Common Core State Standards. Rhetoric Review, 34(2), 200-218.

Adler-Kassner, L. (2012). The companies we keep or the companies we would like to try to keep: Strategies and tactics in challenging times. WPA: Writing Program Administration, 36(1), 119-140.

Adler-Kassner, L., & Harrington, S. (2010). Responsibility and composition’s future in the twenty-first century: Reframing accountability. College Composition and Communication, 62(1), 73-99.

Bill and Melinda Gates Foundation. (2013, July). Literacy Design Collaborative, Inc. Retrieved from http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2013/07/OPP1088077

Colorado Department of Education. (2015). Colorado Measures of Academic Success. Retrieved from https://www.cde.state.co.us/communications/cmasfactsheet

Common Core State Standards Initiative. (2015). About the standards. Retrieved from http://www.corestandards.org/about-the-standards

Denver Writing Project awarded Gates Foundation Grant to develop curricula for local teachers. (2011, November 3). The Dean’s Notes. Retrieved from http://clas.ucdenver.edu/deansNotes/news/denver_writing_project_awarded_gates_foundation_grant_to_develop_curricula_for_local_teachers

Gallagher, C. W., & Turley, E. D. (2012). Our better judgment: Teacher leadership for writing assessment. Urbana, IL: National Council of Teachers of English.

Geiser, S., & Santelices, M. V. (2007). Validity of high-school grades in predicting student success beyond the freshman year: High-school record vs. standardized tests as indicators of four-year college outcomes (Rep. No. CSHE.6.07). Retrieved from http://www.cshe.berkeley.edu/sites/default/files/shared/publications/docs/ROPS.GEISER._SAT_6.13.07.pdf

Hall, C. (2011). ‘Advocacy Philanthropy’ and the public policy agenda: The role of modern foundations in American higher education. (Thesis). Retrieved from UMI. (1500712)

Higher Ed for Higher Standards. (2015). Mission & principles. Retrieved from http://higheredforhigherstandards.org/about/principlesmission/

Hiss, W., & Franks, V. (2012). Defining promise: Optional standardized testing policies in American college and university admissions. National Association for College Admission Counseling. Retrieved from http://www.nacacnet.org/research/research-data/nacac-research/Documents/DefiningPromise.pdf

Jaschik, S. (2014, March 5). A new SAT. Inside Higher Ed. Retrieved from https://www.insidehighered.com/news/2014/03/05/college-board-unveils-new-sat-major-overhaul-writing-exam

Kobrin, J., Patterson, B., Shaw, E., Mattern, K., & Barbuti, S. (2008). Validity of the SAT® for predicting first-year college grade point average (Rep. No. 2008-5). Retrieved from College Board website http://professionals.collegeboard.com/profdownload/Validity_of_the_SAT_for_Predicting_First_Year_College_Grade_Point_Average.pdf

Layton, L. (2014, June 7). How Bill Gates pulled off the swift Common Core revolution. Washington Post.

Literacy Design Collaborative. (2015). Overview. Retrieved from http://ldc.org/how-ldc-works/overview

Literacy Design Collaborative. (2015). What results? Retrieved from http://ldc.org/node/13

National Council of Teachers of English. (2014). 2014 NCTE education policy platform. Retrieved from http://www.ncte.org/positions/statements/2014policyplatform

National Council of Teachers of English. (2014). How standardized tests shape—and limit—student learning: An NCTE policy research brief. Council Chronicle, 24(2), 1-3.

National Writing Project. (2014, August 14). Assignments Matter grant opportunity. Retrieved from http://www.nwp.org/cs/public/print/events/768?x-t=sites_eos.view

National Writing Project. (2010, November 1). National Writing Project to create teaching models to improve writing instruction. Retrieved from http://www.nwp.org/cs/public/print/resource/3337

Perelman, L. (2008). Information illiteracy and mass market writing assessments. College Composition and Communication, 60(1), 128-141.

Perez-Pena, R. (2012, April 7). Trying to find a measure for how well colleges do. New York Times. Retrieved from http://www.nytimes.com/2012/04/08/education/trying-to-find-a-measure-for-how-well-colleges-do.html

Rose, M. (2012). 2012 CCCC Exemplar Award acceptance speech. College Composition and Communication, 64(3), 542-544.

Smarter Balanced Assessment Consortium. (2015, April 15). Close to 200 colleges and universities to use Smarter Balanced scores as part of placement. Retrieved from http://www.smarterbalanced.org/news/close-200-colleges-universities-use-smarter-balanced-scores-part-placement/

Spellings, M. (2006). A test of leadership: Charting the future of U.S. higher education. Washington, D.C.: U.S. Department of Education.

Steedle, J. T., Zahner, D., & Patterson, J. A. (2013). Common Core State Standards Validation through Assessment (CCSSVA) report. Retrieved from Council for Aid to Education website: http://cae.org/images/uploads/pdf/CAE_CCSS_Executive_Summary.pdf

United States Department of Education. (2009, November). Race to the Top executive summary. Retrieved from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf

Voluntary System of Accountability. (n.d.). College portrait participants by state. Retrieved from http://www.voluntarysystem.org/participants

Washington, S. J. (2011, March 6). Congress, Obama cut funding for National Writing Project. Retrieved from http://www.nwp.org/cs/public/print/resource/3507

 

Author Biography

Joanne Addison is an Associate Professor in the Department of English at the University of Colorado Denver. Her research focuses on educational policy and practice at the institutional and national level as well as empirical research in writing studies and online learning. She has published her work in Rhetoric Review, College Composition and Communication, Computers and Composition and other journals as well as a number of edited collections.