The Path to Competency-Based Certification: A Look at the LEAP Challenge and the VALUE Rubric for Written Communication
by Jennifer Grouling, Ball State University
Although originally designed by writing professionals, AAC&U’s VALUE Written Communication rubric is one small part of a larger national vision for higher education. This article traces that vision through multiple AAC&U publications from 2002-2017 to demonstrate how advocacy-based philanthropy and competency-based education have shifted the VALUE initiative away from institutionally based assessment toward national accountability. With the General Education Maps and Markers (GEMs) pathway initiative of 2015 and the creation of the VALUE Institute national scoring database in 2018, the VALUE rubrics may be used to compare writing instruction across universities, to facilitate state-wide transfer agreements, and to certify students’ degree completion. In this shift, much of the rubric’s original value for writing studies is lost: a rubric used on a national scale cannot be modified for local context. I argue that experts in writing assessment need greater awareness of the impact of these large-scale movements on the use of rubrics for writing instruction in higher education.
Keywords: rubrics, competency-based education, advocacy-based philanthropy, AAC&U, VALUE rubrics
Since their introduction in 2009, the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics by the American Association of Colleges and Universities (AAC&U) have made a significant impact on assessment in higher education. According to Sullivan (2015), between 2010 and 2015, “more than 34,500 individuals from more than 6,500 institutions” downloaded one or more of the VALUE rubrics (p. 7). According to the National Institute for Learning Outcomes Assessment’s (NILOA) survey of provosts in 2013, 69% of colleges and universities use rubrics as an assessment tool, and NILOA’s preliminary data from 2017 show that 44% of institutions use the VALUE rubrics specifically (Kinzie & Kuh, 2017). The VALUE rubrics are so pervasive at this point that it is impossible to present a comprehensive view of the project. However, the history and politics of the VALUE rubrics are of prime importance to writing professionals, particularly the rubric for written communication, which may be assessment practitioners’ and administrators’ first—and sometimes only—exposure to the field of Rhetoric and Composition/Writing Studies.
Written Communication is one of 16 VALUE rubrics, and, in many ways, it represents the field’s values about writing. Designed by writing scholars who examined existing composition rubrics, the VALUE Written Communication rubric closely resembles the Council of Writing Program Administrators (WPA) Outcomes Statement (Anderson, Anson, Townsend, & Yancey, 2013, p. 95). In addition to sharing terminology with the WPA Outcomes, such as genre, context, and conventions, both the WPA Outcomes and the VALUE rubrics initially seemed to share a common purpose—programmatic unity and consistency without uniformity. So far, the VALUE Written Communication rubric has been praised as a valuable alternative to standardized testing.
However, a close examination of the rhetoric of AAC&U’s initiatives over time shows that the VALUE rubrics, like the standardized tests before them, are heading toward the comparison of writing instruction across universities, courses, and students. As the planned revision of the rubrics occurs in 2018 (Rhodes, 2017), this is a key moment to examine the history and rhetoric of AAC&U, the current use of the rubrics, and the future direction of the VALUE initiative. Reading the reports within the framework of the current political climate of accountability and competency-based education can give writing professionals the awareness needed to respond to these changes in higher education assessment. This article seeks to address (a) how the VALUE rubrics relate to AAC&U’s larger initiatives, particularly their General Education Maps and Markers (GEMs) pathway initiative, (b) how the use of the VALUE rubrics has shifted toward competency-based education, and (c) how writing professionals might respond to these changes in higher education assessment.
Austerity, Advocacy-Based Philanthropy, and Accountability
Composition scholars have expressed concern about the political connections between the Common Core, advocacy-based philanthropy, and accountability in higher education (Addison 2015; Adler-Kassner, 2012, 2017; Moore, O’Neill, & Crow, 2016; Welch & Scott, 2016). However, much of this critique has focused on standardized testing rather than on the use of rubrics. Rubrics have been critiqued for their use in both classroom and programmatic assessment but are only beginning to be used and critiqued as tools for national or state-wide assessment. I frame my reading of AAC&U’s VALUE initiative within writing studies’ larger discussion about rubrics, the rhetoric of austerity, and the politics of advocacy-based philanthropy.
Some compositionists dismiss rubrics up front, particularly national-scale rubrics such as VALUE. Broad (2003) proclaimed that “the method and technology of the rubric now appear dramatically at odds with our ethical, pedagogical, and political commitments” (p. 2). Specifically, he objected to the use of rubrics in writing program assessment because “it documents only a small fraction of the rhetorical values at work there” (p. 12). Rubrics, by their nature, reify certain values and leave out others. This problem is magnified when discussing state-wide or national rubrics rather than rubrics designed for specific writing programs. Anson, Dannels, Flash, and Housley Gaffney (2012) argued that big rubrics like VALUE “wear the guise of local application” (para. 3). Part of this guise is the illusion that common terminology can be agreed upon, particularly without extended discussion. For example, Thaiss and Zawacki (2006) noted that faculty in different disciplines agreed writing should be “concise” (p. 499) but disagreed over what concise meant. Resolving these differences takes intense effort, as demonstrated by Colombini and McBride (2012), and is one of the reasons many writing scholars argue for locally negotiated assessments. Huot (2002) established that good “writing assessment must be site-based and locally controlled” (p. 19). Unfortunately, his hope that focusing on local assessment would prevent more standardized assessments seems to have gone unrealized. Fourteen years later, technology allows for data mining, which can take writing created within highly contextualized courses and for local programmatic assessment and use it in large-scale comparisons between schools (Moore et al., 2016, p. 30).
Behind this push toward data that can be used for accountability and comparability is the Educational Intelligence Complex (EIC), which Adler-Kassner (2017) defined as “a collection of NGOs (nongovernment organizations), granting agencies, businesses, consulting firms, policy institutes, actions, and actors” (p. 320). According to Adler-Kassner (2017), the narrative of the EIC is that there is a problem with higher education, and the solution to this problem is personalized education through technological innovation (p. 320). This narrative supports what Welch and Scott (2016) called a “rhetoric of austerity” (p. 3), in which higher education is pushed toward efficiency, and students are seen as “wise consumers” (p. 3) who seek the “fastest route to a degree” (p. 3). Some refer to this approach as competency-based education, which Gallagher (2016) defined as “a highly individualized educational approach in which students amass credentials through demonstrated competencies, usually in a self-paced manner, rather than through ‘seat time’” (p. 22). According to the EIC narrative, technology will allow each student to choose and design the quickest, “best” route through their own education. Although presented as “personalized” education, the goals of this approach are standardized; only the pathway to the goals varies.
Included in the EIC are advocacy-based philanthropists, such as the Bill and Melinda Gates Foundation and the Lumina Foundation, whose work impacts both secondary and post-secondary education. Known for its support of the Common Core Standards, the Gates Foundation has taken this focus on competency-based education into higher education through its funding of pathways initiatives. For example, Gates has repeatedly funded pathways initiatives by Jobs for the Future, Inc.: first pathways programs for high school students (2007), then programs for adult learners (2010), and most recently (2016) grants for colleges to “close equity and achievement gaps by scaling evidence-based practices and policies related to guided pathways” (Gates Foundation Grants Database, n.d.). This trajectory represents a shift from concern over pathways into higher education to pathways through higher education.
The involvement of advocacy-based philanthropy in pathways through college lends urgency to concerns previously expressed by composition scholars over connections between the Common Core and the possible standardization of college curricula (Addison, 2015; Moore et al., 2016). Adler-Kassner (2012) showed similarities between the American Legislative Exchange Council’s (ALEC) legislative language, higher education’s Voluntary System of Accountability (VSA), and the Degree Qualifications Profile (DQP), all of which have connections with the Lumina Foundation (pp. 122-123). In another example, Addison (2015) outlined how the Gates Foundation’s 2014 grant program “Assignments Matter” gave grants to teachers in the Literacy Design Collaborative to create writing assignments that fit the Common Core. These assignments were meant to be collected in a database and assessed with a common rubric (Addison, 2015, “How Did We Get Here from There?,” para. 6). This example bears a striking resemblance to what is currently happening in higher education with the VALUE rubric, as this article will show. Both pathways initiatives and the VALUE movement add credence to Addison’s (2015) claim that advocacy-based philanthropy is moving beyond the Common Core toward control over educational policy and assessment in higher education (“How Did We Get Here from There?,” para. 3).
The EIC and advocacy-based philanthropists are successful, in part, because they are responding to current public concern about whether higher education is worth its increased costs. Parents and students want to be able to compare their choices for both quality and cost efficiency. In addition to advocacy-based philanthropists, accreditors are starting to respond to this exigence. While accreditors have previously allowed individual institutions to make assessment decisions, Moore et al. (2016) believe accreditors are being pushed in a new direction by a hostile public and the increased availability of technology (pp. 22-23). For example, Moore et al. (2016) noted that the Western Association of Schools and Colleges (WASC) has begun to use both the VSA and the DQP for their ease in comparing schools (p. 24). Technology has made this comparison easier. The VSA allows the public and policymakers to digitally compare universities on a number of factors, including costs and demographics, as well as assessment procedures and results. Initially, the system used measures such as the Collegiate Learning Assessment (CLA) test to compare schools, but more recently the VALUE rubrics—particularly the Written Communication and Critical Thinking rubrics—have become tools for schools to report assessment data on this national scale (VSA, 2012). This may represent a positive shift away from standardized testing; however, the context of austerity and accountability also marks a shift in how the VALUE rubrics are used compared with their original design in 2009.
Introduction to the VALUE Rubrics
The Liberal Education and America’s Promise (LEAP) initiative began in 2005 and established essential learning outcomes for liberal education that were meant to be assessed by the VALUE rubrics. In 2010, Adler-Kassner and O’Neill praised the LEAP initiative for presenting “an alternative to the technocratic narrative located in reports like the Spellings Commission” (p. 85). They also lauded the VALUE Written Communication rubric for reflecting good writing assessment (as established in previous scholarship) in three key ways: It was developed by writing professionals, it asks raters to look at contextual materials such as assignment sheets, and it stresses the need to adapt the rubric for local contexts (Adler-Kassner & O’Neill, 2010, p. 173). Although these features were present at the rubric’s creation, the VALUE initiative has increasingly shifted from the rubrics as tools to be adapted for local assessment toward their use as instruments of national, competency-based education.
First, it is important to understand the original design and purpose of the VALUE rubrics. In 2008, AAC&U began the development of 16 rubrics, which were released in 2009 (McConnell & Rhodes, 2017, p. 10). The rubric development teams referenced existing rubrics and repeatedly tested and revised the VALUE rubrics during the development phase (Rhodes & Finley, 2013, pp. vii-1). Adler-Kassner was one of a team of writing scholars who worked for 18 months on the VALUE Written Communication rubric. She explained that the rubric “could be used inside and outside of writing classrooms to evaluate actual course artifacts collected in an electronic portfolio” (Adler-Kassner & O’Neill, 2010, p. 173). Rhodes (2010) stressed that the rubrics were meant for institutional-level assessment and should be changed to reflect local outcomes and missions (p. 2). Institutional context was key to the original design and purpose of the VALUE rubrics, and AAC&U prided itself on giving faculty control over the assessment process.
Including expert faculty in the development of the rubric also helped to create a common language for assessment based on expert knowledge. In keeping with best practice, the VALUE Written Communication rubric begins with a full page that includes a definition of written communication, framing language for the rubric, and a glossary of terms used in the rubric. The glossary defines terms such as “genre” that are key in writing studies. In addition, the dimensions of the VALUE Written Communication rubric match the values of writing studies. They include “Context and Purpose of Writing, Content Development, Genre and Disciplinary Conventions, Sources and Evidence, Control of Syntax and Mechanics” (AAC&U, 2009). The rubrics are not holistic; the scores for each dimension of the rubric should be seen as separate and not added for a total score (McConnell & Rhodes, 2017, p. 28). Thus, an evaluation of an artifact might show strong control of syntax and mechanics but weak content development rather than overall proficiency in writing.
The rubrics were also designed to show how students overall might progress throughout their undergraduate education rather than assess individual proficiency or assign grades. Unlike many rubrics that have a scale from “insufficient” to “highly proficient,” the VALUE rubrics all contain the levels of Capstone, Milestones (2 levels), and Benchmark. They were designed with the highest Capstone category first to show that institutions should design curriculum to encourage all students to reach this level (McConnell & Rhodes, 2017, p. 26). These levels were meant to represent a trajectory of student learning over a college education, not to correspond directly to standing or grades (Rhodes, 2010, p. 3). It is also important to note that the distance between these points is not meant to be equal as it would be in a true numerical scale (McConnell & Rhodes, 2017, p. 27). Again, this distinguishes the VALUE rubrics from traditional grading rubrics, as well as from most holistic rubrics.
Although the rubrics call for modification on a local scale, AAC&U proceeded with national-scale testing. In 2011, AAC&U began large-scale testing through the Multi-State Collaborative (MSC), a partnership with the State Higher Education Executive Officers (SHEEO). In 2014, the Minnesota Collaborative and the Great Lakes College Association (GLCA) joined the testing of the VALUE rubrics (McConnell & Rhodes, 2017, p. 10). In 2017, AAC&U released its “proof of concept” study, On Solid Ground, in which it presented the results of scoring 21,189 student artifacts gathered from 92 MSC institutions with the Critical Thinking, Quantitative Literacy, or Written Communication rubrics (McConnell & Rhodes, 2017, p. 12). In this study, AAC&U concluded that “faculty can effectively use common rubrics to evaluate student work products—even those produced from courses outside their area of expertise” (McConnell & Rhodes, 2017, p. 16). In other words, AAC&U believes that its inter-rater reliability rates are high enough to show that faculty agree on writing quality. However, since the Benchmark, Milestone, and Capstone descriptors on the rubric are not equidistant, inter-rater reliability is a complex calculation. In addition, in keeping with previous studies (Thaiss & Zawacki, 2006), faculty may view the criteria of the rubric as generic but actually interpret them differently. Rather than engage with these complexities, On Solid Ground presents the results of the MSC scoring as confirmation of AAC&U’s push toward national-scale rubrics.
Two years before the results of the MSC, in 2015, Sullivan had already claimed: “The key to the VALUE assessment approach is the creation of common rubrics that can summarize levels of student achievement across different academic fields and institutions” (pp. 5-6, emphasis original). In this same publication, he forecast that by 2018-2020 “the VALUE approach will become the national higher education student learning and institutional outcome assessment standard” (Sullivan, 2015, p. 9). Even without data showing that common rubrics were effective, AAC&U had shifted its position to advocate for the VALUE rubrics as common tools to be used across institutions rather than as tools to be adapted and used locally. To understand this shift away from local adaptation of the rubrics and toward competency-based education more completely, it is important to see how the rhetoric of AAC&U has changed over time.
AAC&U Initiatives and Reports: Changing Rhetorical Focus
To analyze the vision of AAC&U and how it has moved toward competency-based education, I drew from multiple AAC&U reports published between 2002 and 2017. The reports of AAC&U are short publications of approximately 30-50 pages that read as a mix of data-driven argument, sales pitch, and how-to book. Data are rarely presented neutrally; rather, they are used to support AAC&U’s vision. Ideals for higher education are laid out, and sometimes steps to follow the vision are provided. Case studies back up the success of the vision, and rarely—but occasionally—call to mind potential problems with a certain approach. Some publications are individually authored, some have selections by different authors, and some are simply credited to the organization as a whole. However, even in the case of multiple authors, a unified organizational view rather than individual opinion is conveyed.
From their 2002 publication Greater Expectations to their 2015 GEMs initiative, AAC&U presents a cohesive narrative in which they solve the problems of higher education today. Together, these documents discuss the process of creating the LEAP Outcomes and the VALUE rubrics but focus only on success and consensus, erasing the actual messiness of the process. They also incorporate the DQP by the Lumina Foundation as seamlessly as if it were originally one of their own initiatives. In so doing, AAC&U creates a coherent narrative of progress and success, selling their approach to higher education administrators, policy makers, faculty, and the general public. Although elements of the pathways perspective are present from the beginning of these initiatives, this section traces how the language and focus of the publications move from local adaptation to national competency-based education over the span of 15 years. Observing this trajectory both shows why the purpose of the VALUE initiative has shifted and anticipates how the rubrics might be used in the future.
Greater Expectations (2000-2006)
AAC&U’s vision for 21st-century education most notably began with the Greater Expectations initiative (2000-2006). Although this vision did not call directly for a pathways approach to higher education, it did pave the way for one long before Bailey, Jaggars, and Jenkins’ (2015) book. Greater Expectations (AAC&U, 2002) set forth a vision for the “New Academy” where college is focused on goals rather than specific courses and credits (p. xiii). Key to this vision was the development and assessment of outcomes for general education, but initially these outcomes were specific to individual universities. In her afterword to the 2004 publication, Leskes (2004) emphasized that “the faculty hold primary responsibility for outcomes, curriculum, pedagogy, and assessment because all these elements comprise teaching and learning” (p. 25). She encouraged faculty to work locally by looking at institutional mission when building outcomes and curriculum (Leskes, 2004, p. 25). By setting up faculty as a part of the solution rather than the problem, AAC&U set itself apart from narratives such as those in the Spellings Commission report that argued for wide-scale change to education through testing and often blamed faculty for current problems.
Rather than lay out the outcomes themselves, the Greater Expectations National Panel presented a model for how to develop outcomes for individual institutions. In the Appendix of their 2005 publication, Leskes and Wright included a Step-by-Step Checklist for assessment that begins with understanding the local institutional context and mission and leads to defining learning goals based on this information (pp. 45-46). They also presented examples of institutions that were already using this approach, such as Portland State University. In particular, Portland State was praised for their use of rubrics to assess general education and capstone experiences, which would later become key to AAC&U’s vision (Leskes & Wright, 2005, pp. 31-32). The Greater Expectations National Panel called for both secondary and collegiate educators to create stable goals for learning, build curriculum around those goals, and assess the learning based on them; they also left a good deal of agency to specific institutions (AAC&U, 2002, pp. 49-50).
The LEAP Essential Learning Outcomes (2005-2015)
The next phase of AAC&U’s initiative began to shift the focus to national outcomes, although the assessment of those outcomes was still to be determined by individual universities. LEAP represented a 10-year initiative (2005-2015) “to align the goals for college learning with the needs of the new global century” (Kuh, 2008, p. v). Released in 2007, the LEAP “essential learning outcomes” were designed by the LEAP National Leadership Council, a team of “educational, business, community and policy leaders” (AAC&U, 2017a, p. vii). AAC&U (2007b) presented the LEAP essential learning outcomes as “an emerging consensus—about what kinds of learning Americans need from college” (p. 2).
The four essential learning outcomes are broad categories with a set of bullet points under each: Knowledge of Human Cultures and the Physical and Natural World, Intellectual and Practical Skills, Personal and Social Responsibility, and Integrative Learning (AAC&U, 2007a, p. 3). Written communication falls under the heading of “Intellectual and Practical Skills,” and AAC&U (2007b) reported that 73% of employers wanted more focus on written communication in college. They stressed that these outcomes can be achieved in many different ways by different types of institutions and curricula, maintaining their initial focus on local adaptation rather than standardization (AAC&U, 2007b, p. 4).
High-Impact Practices (2008-present)
In 2008, AAC&U began to suggest best educational practices to meet the LEAP essential learning outcomes, calling them high-impact practices (HIPs). HIPs were introduced as “widely tested” teaching practices that benefit “college students from many backgrounds” (Kuh, 2008, p. 9). The simplified version of AAC&U’s narrative is that if all students had access to HIPs, all students could achieve the LEAP Outcomes. George Kuh, author of AAC&U’s 2008 report on HIPs, stated:
Today when I am asked, what one thing can we do to enhance student engagement and increase student success? I now have an answer: make it possible for every student to participate in at least two high-impact activities during his or her undergraduate program, one in the first year, and one taken later in relation to the major field. (p. 19)
His publication lists 10 HIPs, including first-year seminars, writing-intensive courses, and capstone projects. The e-portfolio was added as the 11th HIP in 2016 (AAC&U, 2017c). Although not initially listed as a HIP, the e-portfolio was a part of AAC&U’s vision from early in the process. In fact, the 2008 HIP publication introduces the VALUE initiative as “the LEAP e-portfolio project” to “assess students’ cumulative achievement of these essential learning outcomes” (Kuh, 2008, p. 3). While the HIPs are indeed well-established practices in higher education, their release signaled AAC&U’s move from a wide vision of higher education that could be incorporated differently at different institutions to a vision of best teaching practices across the nation.
As previously mentioned, the DQP was not originally designed by AAC&U, but was instead published by the Lumina Foundation in 2014. Both AAC&U publications and the DQP publications presented the LEAP and DQP initiatives as complementary; however, the DQP is more clearly linked to competency-based education. While the language used in AAC&U’s LEAP initiative revolved around “outcomes,” the DQP used the word “proficiencies.” In the Appendix of the DQP, the authors aligned this word with the competency-based movement in education, but argued that competencies focus on single learning experiences while proficiencies are developed over time through multiple experiences (Adelman, Ewell, Gaston, & Schneider, 2014, p. 33). They presented their approach as the evolution of competency-based education, saying “degree recipients should be proficient in their fields of study and, more generally, as students, not simply competent” (Adelman et al., 2014, p. 33, emphasis original).
This move solidified the shift away from local practice toward national comparability across institutions and laid the groundwork for the future use of the VALUE rubrics. After the release of the DQP, the overall language in AAC&U publications shifted to include more references to the word proficiencies, with little attempt to distinguish proficiencies from outcomes. Rather, the publications simply refer to “outcomes and proficiencies” together, often implying that the LEAP essential outcomes and the DQP should be viewed as the same. At the time of this writing, AAC&U’s (2017b) website states: “The DQP incorporates virtually all of AAC&U’s Essential Learning Outcomes.” Since the VALUE rubrics were designed to assess the LEAP Outcomes, and the LEAP Outcomes became synonymous with the DQP, the VALUE rubrics, too, were seen as a logical tool for assessing whether students had gained the proficiencies in the DQP. Even though the rubrics were released five years before the DQP, the VALUE website states that the rubrics are aligned with the DQP because the DQP is aligned with LEAP (AAC&U, 2017d). Thus, the language of competency-based education seamlessly entered the publications and vision of AAC&U.
With their guided pathways initiative GEMs in 2015, AAC&U fully embraced competency-based education. This project is funded heavily by the Bill and Melinda Gates Foundation and provides “guidelines for the competency-based learning movement” (Schneider, 2015a, p. vi). GEMs was presented as the application of the DQP to the “design of general education programs and their connections to the major” (Schneider, 2015a, p. vi). The GEMs initiative further shifted the focus to proficiency and transfer: “The GEMs initiative provides a proficiency-based, portable approach to general education that is designed to help all students develop mastery of essential skills, knowledge, and capacities that are relevant to their lives, motivations, and goals” (AAC&U, 2015, pp. 6-7). Like other pathway models, GEMs is meant as an alternative to the “cafeteria” model of education where students select general education credits “a la carte” and strive to get the courses “out of the way” (Schneider, 2015b, p. v).
GEMs is hailed as “the centerpiece of a family of projects” that AAC&U has dubbed “the LEAP Challenge” (AAC&U, 2015, p. vii). The key that links together the many initiatives under the LEAP Challenge umbrella is the concept of “signature work,” which is defined as a cumulative project designed by students on a problem of their choice. Signature work should be completed over the course of at least one semester and may be a part of a capstone course, an internship, or other experiential learning opportunity (Gaston, 2015, p. 6). It is meant to demonstrate the LEAP Outcomes/DQP proficiencies and be scored using the VALUE rubrics. The maps of GEMs refer to the idea that students choose a route through their coursework toward their signature work as early as when they complete the college application. Markers are meant to show progress along that path (Gaston, 2015, p. 8). Signature work can be a positive way to culminate student education; however, within the LEAP Challenge, it is framed not only as an opportunity for integrative learning, but as a point of certification that a degree is complete.
Future of the VALUE Rubrics
Even though the VALUE rubrics were originally meant to be adapted for local context, their design lends them to the goals of the DQP and GEMs initiative. For example, since the rubrics were designed to show progression over a degree rather than outcomes within a particular course, they can be interpreted as certifying proficiencies at the end of a degree. In addition, their basis in national LEAP Outcomes sets them up to be used for national comparison of institutions. AAC&U has recently announced a new project for 2018, the VALUE Institute, which will likely shift the role of the rubrics more toward national comparison and common usage and away from local adaptation. The VALUE Institute’s goal is to “establish the most comprehensive resource for direct and indirect evidence of learning beyond high school in the US” (AAC&U, 2017a). For a fee of $5,000-$7,000, universities will be able to upload 100 student artifacts, have them scored by certified raters, and receive a report that “will provide actionable information” that can be used for “external validation of local campus learning assessment information” (AAC&U, 2017e). In this model, local assessment does not completely go away, but national—external—assessment is key for communicating with external stakeholders. The VALUE Institute represents a collaboration with Indiana University’s Center for Postsecondary Research (home of the National Survey of Student Engagement), and Lumina has just announced $1,725,000 in start-up funding to establish the database (AAC&U, 2017a). In a December 20, 2017 press release (AAC&U, 2017a), Lumina’s strategy director, Amber Garrison Duncan, is quoted as saying: “The VALUE Institute represents a major step forward to provide credible, transparent, and up-to-date evidence about the quality of learning in different type [sic] of credential pathways.”
The database will also provide resources for assignment design and for transfer programs, “to help students achieve and demonstrate key learning outcomes across guided learning pathways as part of general education or the majors” (AAC&U, 2017e). The VALUE Written Communication rubric will be one of six initial rubrics available for schools to select. Because student work will be scored on a national scale, there will be no opportunity to adapt the rubric to local institutional missions. Rather, the purpose will be comparison among institutions, though the scores could also be used to certify students’ work upon entering the university, transferring between schools, or completing a degree.
The main features that Adler-Kassner and O’Neill (2010) praised about the VALUE Written Communication rubric—such as the use of contextual documents and the adaptation for local context—have already begun to disappear and may cease to exist in the new vision of the VALUE Institute. The rubric directly states in its framing language: “Evaluators using this rubric must have information about the assignments or purposes for writing that guided the writer’s work” (AAC&U, 2009). Also, the framing language encourages “adaptations and additions” (AAC&U, 2009) that correspond with the context of individual colleges and universities. However, as Anderson et al. (2013) stated, “Nothing about adapting outcomes to local contexts is easy” (p. 102). By providing a common rubric and commonly trained raters, the VALUE Institute feeds the need for efficiency and fits with the goals of the EIC. When common rubrics are used for transfer credit, evaluation of student signature work, or comparison between institutions—all of which the VALUE Institute could facilitate—local adaptation and information about the context of the artifacts are impossible to maintain.
One force driving this loss of context is the call for clearer transfer agreements between schools. As a part of the movement toward efficiency and quick pathways through education, AAC&U has begun to explore the implications of the VALUE rubrics for transfer and articulation agreements at the state level (McConnell & Rhodes, 2017, p. 510). The VALUE Institute could facilitate quick and direct comparison between courses at different institutions. This could decentralize the transfer credit process, taking decisions about which courses transfer out of the hands of WPAs. This increased pressure for coursework to conform to the VALUE rubric may even begin to dictate curriculum. Baker (2016) reported that, as of 2011, 21 states had statewide transfer agreements, and many were looking to streamline transfer further through joint curricula between community colleges and state schools (pp. 629-630). First-year writing is one of the courses most affected by such agreements: “By ensuring that English 101 at a community college covers the same content as ENG 101 at a 4-year school, transfer requirements are clearer and it is easier for students to create a complete 4-year course plan” (Baker, 2016, p. 629). Rhodes and Finley (2013) argued that transfer is one reason to use the rubrics as written, acknowledging that they could “help facilitate transfer based on actual evidence of achievement, rather than just the number of credits earned” (p. 21). Such agreements will also discourage adaptation of the rubrics for individual programs and institutions.
In some states, VALUE rubrics—either in their original form or in a modified but state-wide version—are already being used to assess common curricula across institutions. As of 2016, the GT Pathways Curriculum in Colorado had approved 1,200 courses for transfer credit state-wide, including writing courses at both the first-year and advanced levels. Although the VALUE rubrics are not required as a part of the curriculum for these courses, the program has released a modified VALUE Written Communication rubric that may be used. The rubric—unlike the original VALUE rubric—uses numerical columns that correspond not only with certain standards for performance but also with class standing. Macgillivray (2016) explained that Level 1 on the rubric is a benchmark for first-year students in a GT Pathways course. In this way, criteria for Level 1 on the rubric, such as “demonstrates minimal attention to context, audience, purpose, and to the assigned task(s),” become not a starting point but a level of competency for first-year students (Colorado Department of Education, n.d.). Similarly, in Idaho, the General Education Framework has established four competency areas, including written communication, that can be certified using rubrics. The Idaho State Board of Education’s policies from 2014 explained: “To ensure transferability, the Committee reviews competencies and rubrics for institutionally-designated General Education categories; final approval resides with the Board.” Finally, in conjunction with LEAP Texas and the VALUE rubrics, the state of Texas has developed a Core Curriculum and core objectives that span institutions. In contrast with the rubrics’ original use at the institutional level, LEAP Texas advocates for their use in the classroom to guide pedagogy toward certain skills that will be evaluated with the rubrics (Carter, 2016, p. 4). The drive toward efficiency of transfer is already working against the value of local assessment.
So far, however, policy makers have carefully danced around the issue of dictating curricula. Yet, if student artifacts scored by the VALUE rubric are used to certify the transferability of these courses, it follows that the artifacts submitted must fit the rubric. This is currently an issue AAC&U has run into with its national test of the rubrics. For example, some raters whom I interviewed for a larger study reported scoring reflective papers about internships that didn’t show evidence of research. However, AAC&U mentions internships as an example of signature work. One solution to this inconsistency is the use of signature assignments that are required regardless of course content. As far back as 2013, AAC&U discussed the way the rubrics impact assignment design by advocating for “signature assignments” that are designed to meet the VALUE rubrics (Rhodes & Finley, 2013). Anson et al. (2012) argued that large-scale generic rubrics like the VALUE rubrics lead to poor pedagogy and “stereotypical assignments that best match the generality of the criteria, reifying vague, institutional-level standards but misaligning pedagogy and assessment” (“Going Local,” para. 2). However, AAC&U does not seem to share this concern and has presented common assignment design as a valuable practice. Rhodes and Finley’s (2013) publication dedicated a chapter to developing common signature assignments in order to address the concern that faculty members are not creating assignments that fit the outcomes or align with the rubrics. Furthermore, in AAC&U’s (2017a) press release, Rhodes is quoted as saying that the VALUE Institute will provide the type of information needed for “cost effective changes in what we are already doing, including assignment design.” It is unclear what Rhodes means here, but, when viewed in the context of using the VALUE rubrics to evaluate transfer equivalencies, it could very well mean common signature assignments across institutions.
Using the VALUE rubrics to establish transfer credit is one way that AAC&U may become more involved in students’ paths through college, but the GEMs pathways model may also standardize students’ paths out of college. In this model, the VALUE rubrics are used to “assess student accomplishment” on “widely endorsed proficiencies” (Schneider, 2015a, p. vii). Coursework in this model is not about an experience but about students gaining skills that will allow them to accomplish signature work that will meet learning proficiencies for degree completion. In this type of competency-based education, “The content of learning is no longer important; it’s the development of strategies that will lead to career success” (Adler-Kassner, 2012, p. 128). Adler-Kassner (2012) argued that the DQP positioned writing only as a tool, leading to the erasure of writing studies as a discipline (p. 130). Despite careful definitions of writing within the framing language of the VALUE Written Communication rubric, when used in conjunction with the DQP and signature work, writing becomes merely a certifiable skill.
While Adler-Kassner expressed concern over the way competency-based education would shift the definition of writing, Rose (2016) saw an opportunity for writing specialists to steer a large-scale discussion of what makes good writing (pp. 61-62). In particular, Rose (2016) noted the opportunity to argue for valid placement measures within the context of increased concern over dual-credit and transfer curriculum (p. 62). The shift in the use of the VALUE rubrics and their upcoming revision also allows writing specialists a chance to contribute to the conversation. However, it is unclear at this point who would be scoring signature work under GEMs and what role writing specialists might be able to play in this process. The schools presented as case studies in the LEAP Challenge publication have the resources needed to support pathways and student signature work through either one-on-one faculty advisors or full faculty committees, most of which get credit toward tenure, course releases, or grants to work with senior projects (Peden, Reed, & Wolfe, 2017). In these cases, writing specialists would have a chance for valuable conversations across campus about writing and a chance to help students develop signature work in writing studies itself. However, the reality is that many schools don’t have this type of flexibility in adjusting faculty loads to accommodate student signature work.
Gaston (2015) recognized that assessing signature work could add to the workload of faculty, and in the absence of such support he presented “technology-supported evaluation” (p. 28) as one possible solution to this problem. He advocated for digital innovations to free up faculty time to focus on “inquiry, analysis, evidence-based reasoning, reflection on values, and collaborative problem solving” over “content delivery” (Gaston, 2015, p. 29). The introduction to GEMs (AAC&U, 2015) also promised that the initiative will “set the standard to ensure that the digital revolution is used to facilitate students’ inquiry-based learning and projects” (p. ix). In the absence of local resources, schools may turn to a database like the one proposed in the VALUE Institute to evaluate senior signature work and certify the completion of the bachelor’s degree rather than burden already-busy faculty with assessing signature work. In these cases, signature work would be taken out of institutional context and scored on a national level. This solution fits with the EIC’s overall narrative that technology can solve the current needs of higher education. Students who need to graduate on time will have a quick, technological solution for certifying the completion of their degree.
In addition, policymakers and the public can quickly see how well graduates and institutions are accomplishing their goals by using technology funded by the EIC. In 2015, Sullivan predicted “the VALUE initiative will someday tell us about how well graduates are prepared for the challenges of work and life in the twenty-first century and which institutions foster the greatest learning gains” (p. 10). In this prediction, Sullivan (2015) captured not only the need to quickly evaluate the preparation of graduates but also the need to compare institutions when doing so. In addition to assessing student work, the VALUE Institute may be used to compare writing instruction across higher education through the Voluntary System of Accountability (VSA). The VSA (n.d.) College Portrait is designed to offer policy makers and the public “straightforward, flexible, comparable information on the undergraduate experience, including the reporting of student learning outcomes.” The VALUE Institute (n.d.) website reports that the VALUE rubrics have been approved as national standards for accountability by the VSA, and some institutions are already reporting assessment results from scoring with the VALUE Written Communication rubric. The VSA Administration and Reporting Guidelines (2012) encouraged universities to report the results of their assessments using both the VALUE Written Communication and Critical Thinking rubrics.
The VSA does allow for some adaptation of the rubric language as long as the categories and descriptors remain the same, but this may change if artifacts are assessed externally through the VALUE Institute. Although the program has “voluntary” in its very title, Hawthorne (2008) expressed concerns that, like accreditation, the VSA system would be “expected and, de facto, an essential practice for legitimate institutions of higher education” (Background section, para. 3). Reports that give student scores from the VALUE Written Communication rubric for the purpose of comparing the strength of writing instruction between colleges and universities may lead to more attention to writing instruction, but it may not be the kind of attention writing scholars want. Rather than defining successful instruction for our own programs and universities, we may see the quality of our work misrepresented by external, national standards. In addition, Moore et al. (2016) feared the connections between the VSA and SIR II, a nationalized faculty evaluation service, would lead to standardized evaluation of writing faculty (p. 27).
There are clearly advantages to using the VALUE Written Communication rubric over the traditional use of test scores for evaluating institutions, courses, and students. However, it is concerning that a rubric designed to be modified to fit individual institutional missions has now become a key tool for competency-based education and national certification of student learning. Writing professionals need to be attuned to the way this rubric represents, and narrowly defines, our discipline to a national audience. WPAs should research how the rubric is being used both at our own institutions and at a national level. Rubrics are political documents, and the political context of their use can shift even when the text of the rubric does not. For Turley and Gallagher (2008), a rubric can be good or bad depending on its use in different contexts (p. 89). As shown throughout this article, there has been a shift in the way the VALUE rubrics are now being used to support competency-based education versus their original design for local, adapted assessment. This does not negate the possible use of the rubric for individual writing programs, but WPAs should be aware of the ways that external stakeholders may view the rubric as a tool for national comparison.
According to Rhodes (2017), the VALUE rubrics are set to be revised in 2018. Writing assessment scholars can—and should—get involved in this process and in conversations about rubric use at our home institutions. However, as Adler-Kassner (2012) so aptly stated: “These movements are larger, more powerful, and better funded than any writing teachers, or even any group of writing teachers, will ever be” (p. 136). It is a struggle to know what to do when faced with large-scale assessment movements by organizations such as AAC&U, funded by philanthropists such as Gates and Lumina. Awareness of the current issues and trends is certainly a part of that battle; however, I close by building on the suggestions of others with the VALUE initiative in mind.
Addison and McGee (2010) suggested writing scholars establish more collaborations like the one between NSSE and the WPA. They stated:
While it is important for individual researchers to serve as consultants to the College Board, the Department of Education, and the like, it is equally important that we explore useful collaborations wherein we put ourselves as an organization in the position of defining the questions asked, thus giving visibility to our work and helping define the issues. (p. 169)
With the upcoming VALUE Institute database, which is already connected to NSSE through the Indiana University Center for Postsecondary Research, writing researchers have another opportunity to make such a connection. Colleges and universities will upload writing artifacts from across the country to this database, giving us a wealth of data that can be used for our own purposes and studies. However, this data must be used in conjunction with other data in our field, and the combination of data should be used to push the national conversation in a productive direction.
For example, On Solid Ground showed a potentially troubling trend: writing artifacts lacked evidence of source material. A zero score on a VALUE rubric indicates that the artifact showed no evidence related to that criterion, and 12-16% of all writing artifacts received zero scores in the area of “Sources and Evidence” (McConnell & Rhodes, 2017, p. 39). This could lead to the development of signature assignments that involve clearer use of sources and evidence; however, looking at multiple data points may illuminate more about the nature of this problem. According to the data from NSSE that Addison and McGee (2010) presented, 91% of college students completed writing assignments that asked them to “analyze or evaluate” (p. 154) source material. Thus, the problem might lie not with the assignments being given but with the artifacts being collected. Writing experts can encourage this type of triangulation rather than acting on the data from the VALUE Institute alone.
Finally, I will reiterate the suggestions of Rose (2013), Addison (2015), and Webber (2017) that writing assessment experts publish our studies outside of our own field. Addison (2015) argued that publishing outside of traditional academic forums was key to “rebuilding public trust and moving beyond standardized test scores as the most visible measure of the work we do in writing classrooms at all levels” (“Reversing the Shift,” para. 6). Webber (2017) offered Perelman’s attack on machine scoring in the Boston Globe as one such successful public critique and called for similar efforts (p. 134). Rose (2013) also urged writing scholars to read in different fields and to “listen for the currents in the news that touch your work” (p. 543). Writing scholars need to continue to inform ourselves of the ways that higher education initiatives, such as those by AAC&U, and the public perception of such initiatives tie in with larger educational narratives of writing, accountability, and advocacy.
Jennifer Grouling is an Assistant Professor and Director of the Writing Program at Ball State University. Her research focuses on writing assessment—particularly the use of rubrics, teacher response, TA prep, and gaming. Her current project is an institutional ethnography of the use of the VALUE rubrics.
I would like to acknowledge my colleague in English Education, Jeff Spanke, who collaborated on an earlier draft of this article and continued to provide a sounding board for the project; my writing group comrade, Tim Lockridge, who provided invaluable feedback; and the editors and reviewers for the Journal of Writing Assessment who worked with me through multiple iterations of this article.
Addison, J. (2015). Shifting the locus of control: Why the Common Core State Standards and emerging standardized tests may reshape college writing classrooms. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=82
Addison, J., & McGee, S. J. (2010). Writing in high school/writing in college: Research trends and future directions. College Composition and Communication, 62(1), 147-179. doi:10.2307/27917889
Adelman, C., Ewell, P., Gaston, P., & Schneider, C. G. (2014). The Degree Qualifications Profile. Indianapolis, IN: Lumina Foundation.
Adler-Kassner, L. (2012). The companies we keep or the companies we would like to try to keep: Strategies and tactics in challenging times. Writing Program Administration, 36(1), 119-140. Retrieved from http://wpacouncil.org/archives/36n1/36n1adler-kassner.pdf
Adler-Kassner, L. (2017). 2017 CCCCs Chairs Address: Because writing is never just writing. College Composition and Communication, 69(2), 317-340. Retrieved from http://cccc.ncte.org/library/NCTEFiles/Resources/Journals/CCC/0692-dec2017/CCC0692Address.pdf
Adler-Kassner, L., & O’Neill, P. (2010). Reframing writing assessment to improve teaching and learning. Logan, UT: Utah State University Press.
American Association of Colleges and Universities, National Panel. (2002). Greater expectations: A new vision for learning as a nation goes to college. Washington, DC: AAC&U.
American Association of Colleges and Universities, Greater Expectations Project on Accreditation and Assessment. (2004). Taking responsibility for the quality of the baccalaureate degree. Washington, DC: AAC&U.
American Association of Colleges and Universities, National Leadership Council. (2007a). College learning and the new global century. Washington, DC: AAC&U.
American Association of Colleges and Universities, National Leadership Council. (2007b). College learning and the new global century: Executive summary with findings from employer survey. Washington, DC: AAC&U.
American Association of Colleges and Universities. (2009). Written communication VALUE rubric. Retrieved from https://www.aacu.org/value/rubrics/written-communication
American Association of Colleges and Universities. (2015). General Education Maps and Markers: Designing meaningful pathways to student achievement. Washington, DC: AAC&U.
American Association of Colleges and Universities. (2017a). AAC&U receives $1.7M grant from Lumina Foundation to support the launch of the VALUE Institute [Press release]. Retrieved from https://www.aacu.org/press/press-releases/2017/lumina-value-institute
American Association of Colleges and Universities. (2017b). Degree Qualifications Profile. Retrieved from https://www.aacu.org/qc/dqp
American Association of Colleges and Universities. (2017c). ePortfolios. Retrieved from https://www.aacu.org/eportfolios
American Association of Colleges and Universities. (2017d). VALUE FAQ. Retrieved from https://www.aacu.org/value-faqs
American Association of Colleges and Universities. (2017e). The VALUE Institute: Learning outcomes assessment at its best. Retrieved from https://www.aacu.org/VALUEInstitute
Anderson, P., Anson, C. M., Townsend, M., & Yancey, K. B. (2013). Beyond composition: Developing a national outcomes statement for writing across the curriculum. In N. N. Behm, G. R. Glau, D. H. Holdstein, D. Roen, & E. M. White (Eds.), The WPA Outcomes Statement: A decade later (pp. 88-106). Anderson, SC: Parlor Press LLC.
Anson, C. M., Dannels, D. P., Flash, P., & Housley Gaffney, A. L. (2012). Big rubrics and weird genres: The futility of using generic assessment tools across diverse institutional contexts. The Journal of Writing Assessment, 5(1). Retrieved from http://www.journalofwritingassessment.org/article.php?article=57
Bailey, T., Smith Jaggars, S., & Jenkins, D. (2015). Redesigning America’s community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press.
Baker, R. (2016). The effects of structured transfer pathways in community colleges. Educational Evaluation and Policy Analysis, 38(4), 626-646. doi:10.3102/0162373716651491
Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.
Carter, D. (2016). VALUE rubrics: Valuable tools for improving teaching and learning [white paper]. LEAP Texas. Retrieved from http://leaptx.org/wp-content/uploads/2016/02/White-Paper_VALUE-Rubrics-Valuable-Tools-for-Improving-Teaching-and-Learning-Part-1.pdf
Colorado Department of Education. (n.d.). Guaranteed Transfer (GT) Pathways General Education Curriculum. Retrieved from http://highered.colorado.gov/academics/transfers/gtpathways/curriculum.html
Colombini, C. B., & McBride, M. (2012). “Storming and norming”: Exploring the value of group development models in addressing conflict in communal writing assessment. Assessing Writing, 17(4), 191-207.
Gaston, P. L. (2015). General Education transformed: How we can, why we must. Washington, DC: AAC&U.
Gallagher, C. (2016). Our Trojan horse: Outcomes assessment and the resurrection of competency-based education. In N. Welch & T. Scott (Eds.), Composition in the age of austerity (pp. 21-34). Logan, UT: Utah State University Press.
Gates Foundation Grants Database. (n.d.). Accessed 2017. Retrieved from https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database
Hawthorne, J. (2008). Accountability and comparability: What’s wrong with the VSA approach? Liberal Education, 94(2). Retrieved from https://www.aacu.org/publications-research/periodicals/accountability-comparability-whats-wrong-vsa-approach
Huot, B. (2002). (Re)Articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press. Retrieved from http://digitalcommons.usu.edu/usupress_pubs/137
Idaho State Board of Education. (2014). Governing policies and procedures. Section III: Postsecondary affairs. N. Statewide General Education. Retrieved from https://boardofed.idaho.gov/board-policies-rules/board-policies/higher-education-affairs-section-iii/iii-n-general-education/
Kinzie, J., & Kuh, G. (2017, October). A national view of the field: 2017 NILOA provost survey results. Presentation at the Assessment Institute, Indianapolis, IN.
Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: AAC&U.
Leskes, A. (2004). Afterword. In American Association of Colleges and Universities, Greater Expectations Project on Accreditation and Assessment, Taking responsibility for the quality of the baccalaureate degree. Washington, DC: AAC&U.
Leskes, A., & Wright, B. D. (2005). The art & science of assessing general education outcomes. Washington, DC: AAC&U.
Macgillivray, I. K. (2016, June 24). Letter. Colorado Commission on Higher Education. Retrieved from https://highered.colorado.gov/academics/transfers/gtPathways/Criteria/Content/GT_Pathways_Content_Criteria&Competencies_Information.pdf
McConnell, K. D., & Rhodes, T. L. (2017). On solid ground: VALUE report 2017. Retrieved from http://www.aacu.org/sites/default/files/files/FINALFORPUBLICATIONRELEASEONSOLIDGROUND.pdf
Moore, C., O’Neill, P., & Crow, A. (2016). Assessing for learning in an age of comparability: Remembering the importance of context. In W. Sharer, T. A. Morse, M. F. Eble, & W. P. Banks (Eds.), Reclaiming accountability: Improving writing programs through accreditation and large-scale assessments (pp. 17-34). Logan, UT: Utah State University Press.
Peden, W., Reed, S., & Wolfe, K. (2017). Rising to the LEAP Challenge: Case studies of integrative pathways to student signature work. Washington, DC: AAC&U.
Rhodes, T. L. (Ed.). (2010). Assessing outcomes and improving achievement: Tips and tools for using rubrics. Washington, DC: AAC&U.
Rhodes, T. L. (2017, October). On solid ground: Assessment for learning. Presentation at Assessment Institute, Indianapolis, IN.
Rhodes, T. L., & Finley, A. (2013). Using the VALUE rubrics for improvement of learning and authentic assessment. Washington, DC: AAC&U.
Rose, M. (2013). 2012 CCCC Exemplar Award acceptance speech. College Composition and Communication, 64(3), 542-544. Retrieved from http://www.jstor.org/stable/43490770
Rose, S. (2016). Understanding accreditation’s history and role in higher education: How it matters to college writing programs. In W. Sharer, T. A. Morse, M. F. Eble, & W. P. Banks (Eds.), Reclaiming accountability: Improving writing programs through accreditation and large-scale assessments (pp. 52-64). Logan, UT: Utah State University Press.
Schneider, C. G. (2015a). Foreword. In American Association of Colleges and Universities, General Education Maps and Markers: Designing meaningful pathways to student achievement. Washington, DC: AAC&U.
Schneider, C. G. (2015b). Foreword. In P. L. Gaston (Ed.), General education transformed: How we can, why we must (pp. v-ix). Washington, DC: AAC&U.
Sullivan, D. F. (2015). The VALUE breakthrough: Getting the assessment of student learning in college right. Washington, DC: AAC&U.
Thaiss, C., & Zawacki, T. M. (2006). Engaged writers and dynamic disciplines. Portsmouth, NH: Boynton/Cook.
Turley, E. D., & Gallagher, C. W. (2008). On the “uses” of rubrics: Reframing the great rubric debate. The English Journal, 97(4), 87-92. Retrieved from http://www.jstor.org/stable/30047253
VALUE Institute. (n.d.). Learning outcomes assessment at its best. Retrieved from http://valueinstitute.indiana.edu
Voluntary System of Accountability. (n.d.). History of the VSA. Retrieved from http://www.collegeportraits.org/about/vsa_history
Voluntary System of Accountability. (2012). Administration and reporting guidelines: AAC&U VALUE rubric: Demonstration project. Retrieved from https://cp-files.s3.amazonaws.com/32/AAC_U_VALUE_Rubrics_Administration_Guidelines_20121210.pdf
Webber, J. (2017). Toward an artful critique of reform: Responding to standards, assessment, and machine scoring. College Composition and Communication, 69(1), 118-145.
Welch, N., & Scott, T. (Eds.). (2016). Composition in the age of austerity. Logan, UT: Utah State University Press.