Volume 10, Issue 1: 2017

Innovation and the California State University and Colleges English Equivalency Exam, 1973-1981: An Organizational Perspective

by Richard Haswell and Norbert Elliot

This article examines the origin and development of the English Equivalency Exam (EEE) used by California State University and Colleges between 1973 and 1981. Although this episode in the history of writing assessment has been well documented, the EEE bears revisiting through the lens of an organizational perspective, with special attention to the process of innovation. Attention to management processes and the contexts in which they occur can inform the perspectives of professionals in language assessment and strengthen their commitment to action undertaken on behalf of students.


Shape and revise the map within the mind.
—Gerhard Friedrich (1957)

In U.S. education, three social forces exert a distinct pressure on all stakeholders: requirements for high school graduation (in terms of course completion and test performance); articulation agreements between high schools and post-secondary institutions (in processes of admissions, placement, and progression); and the curricular consequences of these requirements and agreements (as course objectives are broadened or narrowed). Students; their parents and guardians; the classroom teachers who deliver the curriculum and administrators who coordinate educational efforts; state and federal legislators who allocate funding; and workforce leaders demanding unique knowledge, skills, and attitudes—each community has a shared interest in a system that faces unprecedented challenges of an increasingly diverse student population (Hussar & Bailey, 2017) and expansive voter constituency (Teixeira, Frey, & Griffin, 2015).

It is helpful to recall that historically these communities are partly structured around educational models formed in the 1960s under plans for a Great Society, a new deal offered by Lyndon B. Johnson early in his presidency in response to racial segregation and attendant poverty. Chief among these plans to shape equity were those of Clark Kerr, President of the University of California from 1958 to 1967. Trained as a labor economist, Kerr led the state—and a nation watching with great interest—toward an innovative structure for student access to higher education. In what became known as the California Master Plan of 1960, Kerr designed “a treaty,” as he recalled in 1991, so that each population segment—especially high school graduates, who would be guaranteed a pathway to the state’s community and state colleges—“could then make its own detailed plans” (p. 366).

Today, how is “innovative structure for student access to higher education” to be taken? How understood, how questioned, how bettered? We believe important issues concerning educational innovation may be raised by taking an organizational or managerial perspective. We also believe it will be helpful to examine closely one “detailed plan” functioning within Kerr’s new “treaty”: the English Equivalency Exam (EEE) of the California State University and Colleges (CSUC—California State University dropped the “and Colleges” in 1983). The EEE was administered from 1973 to 1981. It arose during a period of enormous U.S. educational growth. Here, we can witness exactly how school graduation requirements, articulation agreements, and curricular consequences worked together within a particular innovative structure that holds lessons for us as the second decade of the 21st century comes to an end.

Coincidentally, in 1960 Kerr was uniquely positioned to provide an organizational perspective. He had a background of research in industrial relations and a broad grasp of the history of U.S. education. He identified three distinct periods of American educational organization. The first, under colonial government, began with the founding of Harvard College in 1636 and William and Mary College in 1693, with Reformation governance structures of lay trustees, not the professoriate. The second period, from Reconstruction to the early 20th century, was characterized by service and utility, especially in the land grant universities, on whose boards leaders of agriculture and industry were increasingly placed. The third period, from 1960 to 1980, was transforming post-secondary education into a system that was, as Kerr termed it, “more a part of the totality of American life” (p. xiii). In 1960, there were three million students in post-secondary education. In 1980, there were 12 million students.

From reformation to service to egalitarianism, U.S. education has been imagined, transformed, and broadened through its association with industry. While this fact has been examined in terms of the dangers of commercialization in the academy (Bok, 2003) and the instantiation of neoliberal ideology in writing assessment (Gallagher, 2011), one fact holds true: The oldest corporation in the Western Hemisphere is the Harvard Corporation, and every academic institution has followed a reformation impulse in ensuring that leadership does not fall solely into the hands of any group that does not embrace industry in all its meaning. The question, then, is not why writing assessment scholars would examine innovation models associated, for instance, with organized labor; rather, the question is why so little has been done to better understand the historically pervasive referential frame of organization itself.

After all, the language and practice of organizations and the management of innovation are not incidental to writing assessment. Academics may like to think of a writing test as an index of an individual’s knack with words, an index more or less fair, germane, trustworthy, instructional, and generalizable. But writing tests are also more or less feasible, novel, trendy, sellable, and profitable. They are educational measures, true enough. But they are also commercial goods—artifacts that are designed, produced, advertised, and sold. Sometimes they are damaged, and sometimes they are discontinued. This entrepreneurial frame, centered in innovation, suits tests that are purchased from a testing firm and run by it as much as those assessments that are designed and run by faculty. All writing assessment comes with a dollar price. Sometimes the value is good, and sometimes it is poor, but one principle is universal: In the United States, writing assessment is an artifact of organizational life and, as such, much may be gained by a focus on innovation.

The case of the EEE at CSU has the advantage of having had its history told more than once from different angles (e.g., Elliot, 2005, pp. 203-206; White, 1984, 1993, 2001; Klompien, 2012, pp. 65-83; Miller, 2016, pp. 124-139). The operationalization of the EEE is also thoroughly documented in its extensive initial proposal (English Council, 1972) and in White’s series of annual reports from 1973 to 1981. The first report (White, 1973) is especially detailed, exhaustive, and forthright; as a case study in innovation, the EEE is exemplary.

The case of the EEE also has the advantage of reminding us of the importance of sample selection as a way to control threats to the validity of our inferences. An empirical undertaking, the history of writing assessment may best be accompanied by calls for action when historical contexts of the past resonate with those of the present and its immediate future. In the case of the EEE, Kerr’s 1991 recollection of growth holds true—although recent data show it to have been an underestimation. According to the National Center for Education Statistics (NCES), in 1961 the total post-secondary enrollment was 4,145,065; in 1980, there were 12,096,895 students. For a comparable recent span, NCES (Hussar & Bailey, 2017) projected 20,207,000 students in post-secondary institutions in 2014 and 22,290,000 in 2025 (Table 15, p. 60). This incremental present growth is nothing like the period described by Kerr from 1960 to 1980, in which unprecedented numbers of young people entered college and public colleges expanded dramatically to meet the enrollment demand. A closer look, however, reveals an equally unprecedented demographic shift. In the immediate future, NCES projects an increase of 22% for students who are Black between 2014 and 2025 (2.8 million vs. 3.4 million), accompanied by an increase of 32% for students who are Hispanic between 2014 and 2025 (3.2 million vs. 4.2 million). This growth is accompanied by a comparatively stable increase of only 3% for students who are White between 2014 and 2025 (11.2 million vs. 11.5 million) (Hussar & Bailey, 2017, Figure 21, p. 27). While the total population growth is dissimilar in the two periods compared—that is, between 1960 and 1980 and between 2014 and 2025—the student sub-groups reflect seismic shifts. Thus, comparison between the two time periods is justified. As we will demonstrate at the end of our study, attention to innovation is a promising way to conceptualize writing assessment from 1960 to 1980; today, innovations accompanying social justice frameworks appear to be a promising way to understand the impact of changes that will continue to occur over the next two decades in U.S. education. Surely, it is appropriate to situate a range of recent books—such as those by Bailey, Jaggars, and Jenkins (2015) on pathways in community colleges and by Zwick (2017) on college admissions—as acknowledgement of, and response to, just these demographic shifts.
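For readers who wish to check these projections, the percentages follow from the standard growth-rate calculation applied to the rounded NCES figures; small differences from the reported percentages reflect rounding to the nearest hundred thousand (the notation below is ours, not NCES’s):

\[
\text{growth rate} = \frac{N_{2025} - N_{2014}}{N_{2014}}, \qquad
\frac{3.4 - 2.8}{2.8} \approx 0.21, \qquad
\frac{4.2 - 3.2}{3.2} \approx 0.31, \qquad
\frac{11.5 - 11.2}{11.2} \approx 0.03.
\]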

In this paper, we present an analysis of the CSUC EEE from 1973 to 1981. Our focus is on innovation, with special attention paid to the preliminary considerations undertaken prior to implementation, to the inception itself, and to the aftermath. In terms of the value of the case, we propose a new role for writing-studies historiography based on a desire for actionable history—the identification of historical patterns used to inform our perspectives and strengthen our commitment to action undertaken on behalf of students. Beyond usefulness, ours is an actionable history.

A Case Study of Innovation

When the EEE was first given on May 12, 1973, at the 19 CSUC campuses, more than four thousand students had paid $15 each to take it, hoping to buy with their money and talent credit for six hours of first-year English. Their exam essays, two apiece and over 8,000 in all, were holistically scored 5 weeks later by 60 teachers, paid $300 each, from all 19 campuses—a daunting enterprise, by any index, and all the more so in that the test was developed and administered by the CSUC English Council (EC)—a quasi-independent organization run by faculty for faculty. A year earlier, when the EC learned that their proposal to create the exam had been approved and funded, they “looked at one another in dismay” (White, 2001, p. 315).

The dismay connected directly with innovation. When those looks were exchanged, legacies of Lyndon B. Johnson’s Great Society Program were still in play, and change remained the order of the day. Richard M. Nixon, a conservative, had just begun his second term, and in January the U.S. Supreme Court had struck down state abortion bans in Roe v. Wade (1973). In terms of state policy in 1973, Virginia Gray published a study of innovative policies in the states, with special attention to education, welfare, and civil rights. “In the past decade political scientists have witnessed an outpouring of literature on state policy outputs” (Gray, 1973, p. 1174), she observed. To better understand this new phenomenon, she turned to models of innovation—“generally defined as an idea perceived as new by an individual; the perception takes place after invention of the idea and prior to the decision to adopt or reject the new idea” (Gray, 1973, p. 1174)—and found that innovation was driven by time and circumstance.

California was the best imaginable place for educational innovation in the 1970s. White, the EC, and the other players we will shortly introduce—Glenn Dumke, Vernon Hornback, Carolyn Shrodes, and Gerhard Friedrich—were all agents in Kerr’s master plan for higher education. An administrator who spoke the language of business and industry (Soo & Carson, 2004), Kerr was the architect of what has become for many states the received view of educational hierarchy: an elite set of campuses (in this case, the University of California) holds the role of top research institutions; more numerous sites (here, the California State University campuses) manage undergraduate students; and many two-year colleges (California Community College campuses) provide vocational and transfer-oriented degrees. A system invented to serve the influx of children born in the 1950s would achieve efficient management: Education would be open to all, and efforts would not be duplicated. So popular was this plan that Kerr landed on the cover of Time on October 17, 1960.

What ingenuity and labor would it take to fabricate a valid examination under this system and administer it to thousands of students applying to the largest and most far-flung state system of higher education?

Traditionally, scholarly interest has centered on the EC’s rejection of indirect testing and embrace of direct testing to award advance credit in writing, their use of a holistic method to score the essays, the local politics (administration vs. faculty), or the ideology that underlay it all (“the neoliberal unconscious,” Miller, 2016, p. 19). But our eye is on the pragmatics needed to create and maintain the exam as a commercially viable product. The central rationale for returning once again to the history of the EEE is that similar pragmatics motivate and propel any large-scale assessment of writing but have rarely been addressed by our profession.

Fortunately, organizational management scholars have long studied the pragmatics and dynamics entailed in developing and implementing a commercial innovation within a complex organization. They have proposed models that are roughly sequential, which therefore provide us with a ready entry into an historical narrative of the EEE. From three sources in particular (Daft, 1982; Kaufman & Tödtling, 2001; Boons, Montalvo, Quist, & Wagner, 2013), we have devised our own model of steps.

A Model of Innovation

Our model comprises the steps treated in the sections that follow: stimulus, invention, relocation, ownership, affordance, method, implementation, audit, fallout, and reiteration. Coppola and Elliot (2007) also proved useful, with their perspective on college writing assessment as a technology that can be innovated, developed, commercialized, and transferred to other settings via the stages of prospecting, developing, trial, and adoption.

This model of innovation offers a new angle on the EEE, allowing new historical insights. As we will see, while the scoring method turned out to be perhaps the EC’s most costly decision, it was the least innovative. The most innovative step may have been the recruitment and management of a large number of people from a wide variety of academic levels to develop and implement the examination. Team effort, of course, is a hallmark of organizational innovation, but it is not much of an academic innovation. We will also see that credit may need to be given to individuals lost in the historical shuffle. These new findings should not surprise us. In the post-WWII history of language testing, when academic test developers have faced the question of scoring method, they have picked ready-made over newly made most of the time. And when new examination programs were first imposed, usually it was through the effort of a few intrepid, committed, and rather isolated individuals. How all this worked out with the EEE over its 9-year run is complex, intriguing, and insightful.

Stimulus

First, a word about the EC of the CSUC. The EC had been established in 1961, when it was known as the Council of State College English Department Chairmen and Coordinators of Freshman English. It was supported by member dues and had acquired affiliate status with the California Association of Teachers of English, and it therefore enjoyed some autonomy from the CSUC system. In 1971, the membership consisted of the chair and two other members (usually including the composition coordinator) of each CSUC department of English, a president with a 2-year term, a few other officers, and liaisons with community colleges, the University of California, and the secondary schools. The full EC met twice a year, spring and fall, and the Executive Committee (president, ex-president, vice-president, and secretary-treasurer) met more often as needed (Hornback, 1973).

In mid-summer of 1971, the EC learned that Glenn Dumke, Chancellor of the CSUC system, had arranged a “pilot project” with the presidents of CSU Bakersfield and CSU San Francisco, in which their entering students would be invited to take the College Level Examination Program (CLEP)—a testing program designed to award college credit for learning acquired outside the classroom, including workplace experience—free of charge. With the Educational Testing Service (ETS) acting as the research vendor, the College Board (CB) had announced its CLEP in 1968. Marketing for the test included a television spot featuring Abraham Lincoln in an employment office. “You are looking for an executive position,” the agent says, “but what about college?” When Lincoln replies, “Well, I’ve done a lot of reading and studying,” the agent munches on a sandwich and concludes that, well, Lincoln isn’t really executive material. “Make learning pay,” the voice-over concludes. “You can earn college credit for what you have already learned on your own” (CB, n.d.). Offered were 15 subject examinations in common undergraduate courses and a general examination covering five basic areas: English, humanities, math, natural science, and social science. CLEP was a commercial success, catching the wave of “individualized” and “competency-based” education that arose in the late 1960s. By 1973, some 1,300 colleges were accepting CLEP performance for advanced credit (Caldwell, 1973). Incidentally, it still is a commercial success. Today, some 2,900 colleges grant credit for CLEP examinations at $80 per examination and $10 more if the college requires an essay (CB CLEP, 2017).

In 1971, Chancellor Dumke decided that, on the English part of the CLEP general test, if students met or exceeded the 25th percentile score at San Francisco or the 50th percentile at Bakersfield, they would receive six hours of college credit in English at those CSUC institutions (Hornback, 1973; Whitaker, 1972). If students passed all five parts of the CLEP general exam, they would receive two semesters of credit, in effect entering college with sophomore standing. The CB and the ETS were privy, if not parent, to the plan and had agreed to administer and score the new exams without charge to help spread word. That was only good business practice.

The EC was appalled that six hours of college English credit could be earned in an hour via 100 multiple-choice questions. While educational measurement specialists would reference terms such as construct validity to justify limited use of Number 2 pencils, the gleaming efficiency of the bubble-and-booklet system failed to pass everyone else’s smirk test. Later study supported the EC. In the Journal of Higher Education, Edward Caldwell (1973), Director of Testing at the University of South Florida, noted that a student needed to answer only 32 of the 100 multiple-choice questions to achieve the 25th percentile in English and reported that specialists had found almost a quarter of the English questions not of college level (p. 700). To achieve six hours of college credit in English, matriculating students at CSU San Francisco needed only to answer correctly seven of the 68 remaining college-level questions. By most standards, the CLEP English was a profitable but shoddy product.

But the EC waited until the outcomes were made public in September of 1971. At CSU San Francisco, 41% of CLEP takers earned credit for two semesters of first-year English, and 38% earned sophomore status (Hornback, 1973, p. 24). Individual members of the EC—such as president Vernon Hornback of CSU Sacramento and executive committee member Carolyn Shrodes of CSU San Francisco—complained about the advance credit in newspaper articles with headlines flourishing the phrase “Instant Sophomores.” (The phrase, as we will see, was borrowed.) And at their annual fall meeting in October, the EC approved a memorandum of protest to be sent to Chancellor Dumke. Both in public and in private, the EC argued that the CLEP testing for English was invalid, that a test earning advance credit in first-year English had to require an essay written by the student, not just a response to machine-scorable items.

None of this, it is important to point out, was necessarily stimulus for the EC’s EEE of 1973. In fact, in October of 1971, the EC pressured the Chancellor to erase an innovation, not to implement one (“at the time we were only concerned with protecting our writing programs from this serious attack,” White, 2001, p. 313). Indeed, for the CSUC system, it was Dumke’s CLEP project that was the innovation. It furthered an agenda in higher education that was relatively new, one directed at economic needs and explicitly couched in the language of management.

In the late 1960s, colleges and universities were facing criticism much like the Cold War challenge schools had faced 10 to 15 years earlier. This time, the worry was not that students were learning so little that the nation was losing out to Communism. It was that students were trapped in a stodgy and redundant curriculum, costing them time and the colleges money. Whereas in the 1950s and 1960s schools were accused of laxness, in the 1960s and 1970s colleges were accused of inefficiency. Higher education needed to streamline and diversify its old lockstep, 120-credit-hours-to-graduate curriculum so that college applicants, whose numbers kept increasing, could enter college readily and graduate quickly. Students should not be held captive to fossilized curricula; rather, they should be allowed to apply toward a degree individual capabilities acquired by other means—pre-collegiate testing, credit for extra-curricular work experience, credit for correspondence courses. All were ways to accelerate learning and lower expenses through curricular innovation.

While most of the earlier criticism of the schools originated from outside the schools, this later criticism of higher education originated from within—from within and from above. Typically the call for innovation and change came top down, from trustees, boards, presidents, chancellors, and provosts. Master planner Clark Kerr (1971) had chaired a Carnegie Commission report that was probably the single most influential advocate of competency-based education reform. Called Less Time, More Options: Education Beyond the High School, released in 1969 and published in 1971, it exposed the duplication of high-school and lower-division college learning and recommended a 3-year bachelor of arts. In a talk at a Southern Regional Education Board convention, given July 17, 1971, Kerr made no apologies about the top-down nature of the Commission’s proposed reform. Because university faculty would only resist it, “External encouragement, I’m sorry to say, not only can assist this process of innovation, but may be necessary” (Kerr, 1971, p. 30).

Kerr also drew the attention of his audience to a Wall Street Journal article, published a few days earlier and entitled “Instant Sophomores” (Ehrich, 1971). The piece, noted Kerr (1971), discussed the Carnegie Commission report and divulged that the CSUC system was studying the Carnegie proposals “and planning in the very near future to introduce some aspects of them” (p. 29). Kerr (1971) may have had knowledge of Chancellor Dumke’s CLEP “pilot project,” which would end up supported not only by the CB but by a grant from the Carnegie Foundation. In a speech to the CSUC Trustees in January, 1971, called “A New Approach to Higher Education,” Dumke himself had argued that implementing his proposed innovations for the CSU system would take “A flow of wise men and women from the outside [to] demolish any false ivory tower concepts” (p. 5). The stimulus for innovation was established—the current managerial framework was outmoded—but the stakeholder roles were undefined.

As such, part of the “shock” or “outrage” felt by the EC on hearing about Dumke’s (1971) CLEP plans was anger that he had initiated the project without consulting or even notifying the teachers and directors of the English programs impacted. Certainly, the poor construct validity of the CLEP English exam helped stimulate the EC’s protest to the Chancellor. Their antipathy to bubble tests was strong. But stronger was the EC’s antipathy to top-down managerial fiat. What could they do? They had to meet the administration’s argument, whose premise was profit-and-loss. They had to become managers themselves.

Invention

With the EC’s initial act—their memorandum of protest in the fall of 1971—the Council “took on the chancellor’s office and won,” at least according to EC chair Vernon Hornback two years later (1973, p. 24). The act would end up, eventually, with the EC replacing the Chancellor’s CLEP with a test of their own making. But Chancellor Dumke may have felt a bit victorious himself. As far as we know, the 1971 memorandum has not survived, but apparently it agreed with Dumke that some CSU entering students came “already knowing what we had to teach” (White, 2001, p. 314), a concession that gave the Chancellor a way out and the EC a way in. Dumke agreed to postpone further CLEP testing and established within the Chancellor’s office a committee on equivalency testing. Thereupon followed many months of “negotiation” (Klompien, 2012, p. 71). In early summer 1972, a bargain was finalized: The EC would study the feasibility of creating a better examination for awarding entering students advance credit in English (both literature and composition), and the Chancellor would fund that enterprise with $40,000. It was an eye-opening investment. (For a comparison of current buying power, multiply 1971 U.S. dollars by five.)

Chancellor Dumke was no dummy. He and the EC had found stasis, and their ground of agreement was the need for an equivalency examination that would save the CSUC system money, one not only more valid but as economical as a fully operational CLEP examination run by ETS. Fees to the CB would decrease, university revenue would increase, and curricular costs would be cut as enrollment in first-year English courses declined and as students graduated more quickly. From the Chancellor’s point of view, the agreement took the chance that one cost-saving innovation would be replaced with another even more profitable innovation.

All this sounds improbably mercenary. But not to the ears of CSUC administrators in the late 1960s. The many-voiced proposals for competency-based education—often from politicians who held the purse strings—were driven by one unequivocal motive: to save tax-funded institutions money. Because college expenditures were not fully covered by tuition and other revenue such as donations and investments, the state of California was supporting public higher education to the tune of millions annually. Increases in enrollment would only increase costs. If undergraduate degrees were reduced from four to three years, Clark Kerr told his audience of administrators and accreditors in 1971, by 1980 California would be saving from five hundred to eight hundred million dollars (p. 28).

All of the proposals that Chancellor Dumke laid before the CSUC Board of Trustees in January of 1971 were money savers: accelerated 3-year degree, advanced placement, advanced credit, challenge exams for all courses, off-campus extension courses, budgets based on student-faculty ratio, ceilings on degree requirements, better utilization of space through night and Saturday classes, merger or discontinuance of academic programs, and full cost of instruction charged to students taking electives not used for their degree or not making satisfactory progress toward that degree. This last would bring in an income of ten to fifteen million dollars by 1974, according to the Chancellor. Dumke was not anti-faculty. He insisted that these proposed changes could not be effected without the support of CSU’s teachers. “As a faculty member myself,” he said, “who spent more than a decade in the classroom, I look upon these proposals as vastly rewarding to all faculty members who are fundamentally concerned with the end product—the educated graduate” (Dumke, 1971, p. 8). Note, though, that he classified the student and the education as an “end product.” “While my chief reason for these proposals is educational excellence,” Dumke (1971) concluded, “significant eventual savings should result” (p. 7).

In tandem, the EC agreed to invent not only an advance-credit test that was educationally more valid, but also one that would save money for the CSUC system. Many writing teachers, as the EC soon found out, did not embrace equivalency testing in any form, not for writing courses. First-year writing was designed to improve the writing of any student enrolled in it. How could future writing improvement be measured by a test? But in allowing that some entering students might already know “what we had to teach,” the EC had implicitly affirmed their belief in some kind of equivalency between test and course for the student, and between cost and savings for the system.

The second equivalency they produced. In September of 1971, Dumke’s CLEP test had examined, free of charge, 1,294 CSUC students and awarded 531 of them six semester units of credit for first-year English—a degree-shortening and cost-effective move that easily could be measured in full-time equivalent units. In May of 1973, the first EEE examined 4,071 students at $15 a head and awarded 1,139 CSUC students six semester units of credit. As the EC report of this first implementation stressed, “the credit hours earned cost the State of California [in supporting the test] much less than the usual expense for instruction” (White, 1973, p. 6). In 1971, the EC may have won a victory over the Chancellor’s office, but in 1973 Chancellor Dumke probably did not feel a loss.

Relocation

Inventors with an idea need space to develop it. The space, however, is of a peculiar kind, one that helps free inventors from money worries and helps hold them on task. With the EEE, the crucial relocation was from the Chancellor’s office to the EC. That didn’t happen in one jump. The interim space was Dumke’s committee on equivalency testing located within his office, a committee instructed to reconsider the protean question of CSUC equivalency testing in all academic subjects. Wisely, the committee created a sub-committee composed mainly of EC members to look at equivalency testing in English, and finally gave the task of creating a test fully to the EC. The EC, as it turned out, was an ideal incubator space.

It would have been easy for the EC to delegate decisions on equivalency to the individual campuses in the system. To a degree, the University of California had long done that with their Subject A testing. Instead, the EC maintained their autonomy from both the CSUC system’s central administration and the system’s 19 English departments. Of course, at steps along the way, the EC secured approval from the administration and from those departments. But they took it upon themselves to control the research, development, and implementation of the test—and kept that control over the 9 years the test was given. In some ways, this centralization of power within an ancillary and quasi-independent organization was one of the most efficient steps in the creation of the EEE, although the move was not exactly innovative. Other large locally run language testing operations had used the same organizational dynamics, for instance the local education authorities in Britain, the Board of Examinations for the College of the University of Chicago, and the Bay Area Writing Project.

Ownership

In part, the EC was an ideal vessel for creating the EEE because it had already faced a number of external challenges and in doing so had “developed an almost fraternal unity” (Hornback, 1973, p. 25). It had fought, for instance, for lower class sizes, lower teaching loads, and better credentialing for school teachers. It built internal solidarity through these battles against outside attacks. More bonding was generated by the new challenge of creating a pathfinding exam. Certainly, camaraderie prospered with the writing of the fall 1972 report that laid the groundwork for the exam. As EC chair Vernon Hornback (1973) recounted a year later, the report

was the product of hundreds of hours of consultation between Edward White and Jesse Ritter, the Council's Credit by Exam Committee, and the English faculty from all but one or two of the system's nineteen departments, plus two days of discussion at two different Council meetings. When the Council adopted, by unanimous vote, our position on credit by examination, it was the expression of the informed will of the nineteen departments of English in the system. (p. 25)

In educational institutions, we should not forget, there is always a higher will. Faculty may feel a sense of ownership of faculty-run projects, but the real owner, behind the curtains, is the central administration. The EEE found this out in the end, as will be seen.

Affordance

Hornback (1973) touched on one of the truly innovative aspects of the EEE: the orchestration of a remarkable number of people both inside and outside the EC. In no other way could such a colossal faculty-run enterprise have been achieved. True, the first exam was funded by an office of the Chancellor called New Program Development and Evaluation. And ETS, which had seen its CLEP exams unexpectedly jettisoned, remained on board and collaborated in constructing the replacement. Edward White travelled to Princeton and got advice from Paul Diederich at ETS, but other ETS people also assisted: Albert Serling, director of CLEP; Alan Seder, director of the ETS office at Berkeley; Richard Harsh, director of the ETS office at Los Angeles. Chief among this group was Serling, a motorcycle-riding member of the Humanities Department at ETS who was as knowledgeable as he was collegial. Like all members of the department, Serling had general training in statistical analysis (M. Fowles, personal communication, November 13, 2017), and White benefitted directly from that experience. As White recollected,

Al Serling spent a week or so in San Bernardino, educating me about testing during the summer of 1971 (or was it 1972?); I took him sailing on Lake Arrowhead . . . it was the start of a real friendship. I stayed at his house in Princeton and he got me on several key committees . . . he gave me a private tutorial on assessment that I am grateful for to this day. (White, personal communication, October 9 and October 19, 2017)

The connection between CSU and ETS was more than formal and might accurately be described as a knowledge-exchange process important to establishing the conditions—in this case, the need for statistical analysis of results—necessary for the innovation to occur.

ETS handled the physical logistics—printing exam copies, keeping them secure, delivering them to the 19 testing sites, and redirecting them to the two faculty scoring sites. William Cowell, ETS statistical analyst for the Advanced Placement exams, ran the numbers and supplied a report, free of charge. In repayment, the first and following EEEs bought the 90-minute CLEP short-answer test in literature for the objective half of the exam.

At base, the relationship of ETS and the EC was a business partnership. In 1972-1973, ETS had sensed which way the wind was blowing and began devising an essay as part of the English CLEP. The fall 1972 EC report said it had “great hopes” (EC, 1972, p. 16) for a CLEP essay. But it would not be ready in time for the first EEE, and the essay would have had to be scored by CSUC personnel anyway. When the CLEP essay was ready in fall of 1973, the EC voted not to use it, having already spent money and time devising and administering their own essay exam. The EC, however, always imagined ETS as co-investor in the new exam: Klompien (2012) reported,

In our interview in 2007, [Edward] White was careful to inform me that the individuals he was working with at ETS were those who collaborated with the CSU and were interested in partnering on designing a test that both parties could endorse and support. (p. 78)

And profit from.

Still, it was inside people, CSUC personnel, who did most of the hard work. The names are so many that only a few can be mentioned. Edward White, CSU San Bernardino, became chair of the exam committee and co-director of the elaborate enterprise. Richard Lid of CSU Northridge served as the other co-director, handling budget and facilities. Richard Cantey of CSU Long Beach prepared the test manuals. Rex Burbank, at CSU San Jose, served as one of the two “essay question leaders,” who oversaw construction of the essay topics, evaluated the pre-tests, selected table leaders and scorers, devised scoring rubrics, selected sample essays, supervised the scoring, helped devise cutting scores, and prepared a final report on the scoring.

The other essay question leader was Gerhard Friedrich. Friedrich is one of the undersung heroes of post-WWII writing assessment, and we need to pause to consider his contributions. He is usually mentioned as playing the dramatic role of the “mole” in the CSU Chancellor’s office, the Dean of Academic and Resource Planning who in the summer of 1971 leaked to the EC the confidential news about Dumke’s CLEP “project” (Elliot, 2005; White, 2001). But his part in the implementation of the EEE was much greater than that. He is a good example of the company man who helps innovate and produce a commercial product, and who becomes anonymous not long after the product hits the market.

In 1971, Gerhard Friedrich was 55 years old. He had been born in 1916 in Graudenz, Germany, and apparently grew up in that conflicted town on the Vistula River, downstream toward the Baltic Sea. After the Treaty of Versailles, with a changed name and a new country, Grudziadz, Poland, was a dangerous place in which either to have a German name or to be pro-Polish. Friedrich’s 1957 book of poems, The Map Within the Mind, speaks of his “defiance of Hitler, imprisonment and exile” (inside dustcover). We do not know when he came to the U.S. and became a citizen. Quite likely, it was around 1937, when he would have been 21 years old. Whenever it was, he adapted quickly. He completed his Ph.D. at the University of Minnesota in short order (1947-1951), and then taught at Haverford College (1951-1958) and Cedar Crest College (1958-1961) in Pennsylvania. In 1961, he moved to the position of Chair of Humanities at CSU Fullerton (then called Orange State College).

By 1971, he had published his dissertation on Moby Dick, scholarly essays on a range of writers such as Drayton, Hawthorne, Joyce, and Dreiser, and two collections of his own poetry. But he was more than a literature scholar. He also had a national reputation as an expert in English pedagogy, curriculum, and administration—especially concerning textbook use, honors classes, foreign-language requirements, school-college articulation, synthesis of literature and composition, and departmental administration. In 1962 alone, he was a member of the CCCC Executive Committee, associate chair of the NCTE Committee on High-School Articulation, and a participant in the Allerton Park seminar that produced a much-used set of resolutions for English department chairs. In 1965, when Glenn Leggett, chair of NCTE’s College Section, wanted to start a high-level discussion on the undergraduate English curriculum, he arranged to meet with eight people before the annual national convention, an elite group that included James R. Squire (Executive Secretary of NCTE), John H. Fisher (Executive Secretary of the Modern Language Association)—and Gerhard Friedrich (Leggett, 1965).

More germane to the history of the EEE, Friedrich was nationally known as a spokesman for the CB’s Advanced Placement exams. He had been a part of the literature and composition examinations from their start in 1954, probably drawn into the venture through Robert U. Jameson, director of the AP, who was Friedrich’s department chair at Haverford College. Year after year, Friedrich read AP exams, and by 1958 he had risen to chair of the AP Composition and Literature committee. He published pieces arguing the benefits and the concerns of the AP program (Friedrich, 1959, 1970), and he kept the English profession aware of changes in AP examination methods through workshops at the annual NCTE and CCCC conventions. At these workshops, as chair, presenter, or consultant, he met most of the current luminaries in essay evaluation, for instance Paul Diederich in 1956, Albert Kitzhaber in 1962, and Richard Lloyd-Jones in 1965. So, in 1971, he would have been known as the go-to expert on English equivalency examinations, not only in the CSUC system but throughout California educational circles. Within the EC, he was respected as its second president (1962-1964), who had gained the Council affiliate status with the California Association of Teachers of English (CATE), an accomplishment assisted, no doubt, by the fact that at the time he was also the fourth president of CATE (1963-1964).

His contributions to the EEE were crucial and continued through the second administration in 1974. As CSUC Dean of Academic and Resource Planning, he talked the Chancellor into postponing more CLEP testing until a system-wide committee on equivalency testing considered the matter. Friedrich was made chair of that committee, and so he helped establish the sub-committee of EC members to study the situation in English and to create their own equivalency test in the spring of 1972. As the sub-committee met during the fall to discuss, among other matters, their choice of essay scoring method, Friedrich’s “behind the scenes support was crucial” (E. M. White, email to authors, personal communication, February 2017).

Friedrich was made one of the two “essay question leaders” for the exam, helped create questions for it, and with Rex Burbank was sent to Rider College in New Jersey to observe another AP scoring just before the EEE was given (White, 1973, p. 36). For the first EEE administration, he oversaw the holistic scoring for one essay and wrote the section in the 1973 report on that scoring (White, 1973, pp. 35-37). He performed the same services for the 1974 exam. Imagine, a vice chancellor of the largest state university system in the nation selecting sample papers, training holistic scorers, overseeing scoring, and adjusting discrepant scores!

Method

Probably Friedrich’s expertise contributed to the EC’s choice to adapt AP holistic scoring for their exam. He had a long and intimate familiarity with AP-style holistic scoring of essays, and in 1961 he had been the “chief reader” in Frederick Godshalk’s (1961) “skim or holistic” experiment to select topics for the ETS’s Writing Sample, a short-lived project to provide colleges with an essay written during AP testing. The EC’s choice of scoring method was not a given, however. Edward White’s research into scoring methods was thorough. During the summer of 1972, he corresponded, “sometimes at considerable length,” with many English testing experts in the U.S. and England (EC, 1972, p. 2; White, 1973, p. 125), and in the fall of 1972 the EC’s testing committee had “considerable discussion of the various grading scales that have been used in the past” (White, 1973, p. 32).

Analytic scoring, of course, was well known. But the EC also knew about open holistic scoring, which did not rely on a rubric or range-finder essays and which pooled the scores from three or more independent readers. White had written to James Britton in England, who had studied that method empirically (Britton, Martin, & Rosen, 1966), and it had been used for Godshalk’s 1961 “skim or holistic” experiment. The EC would also have known about ETS’s later, more constrained holistic scoring, which trained readers and resolved discrepant scores, because they had read Godshalk, Swineford, and Coffman’s 1966 report (EC, 1972, p. 13). Further, they could have known about primary-trait methods because White had also written to Rexford Brown, who in 1972 was involved in developing that method of scoring essays for the second National Assessment of Educational Progress scheduled for 1974. As for the method the EC adopted, AP-style holistic scoring, it was familiar both because Friedrich and White had met with Diederich, who had created, employed, and researched a form of it at the University of Chicago (1943-1949), and because Friedrich had long been applying and directing it at AP readings. The EC may even have known about the adaptation of AP holistic scoring at Sir Francis Drake High School in Marin County in the late 1960s because Friedrich knew Albert “Cap” Lavin, who administered that program. Lavin’s colleague Kate Blickhahn, ground-breaking missionary for holistic scoring, had met with the EC in 1972 (Hornback, 1973). (This history is fully explored in Haswell & Elliot, in progress, 2018.)

However it came about, the EC rejected analytic scoring of essays and open holistic scoring, and instead chose the more constrained AP holistic method. The decision must have been based, at least in part, on cost. Analytic and primary-trait scoring would have been more expensive if only in terms of scorer time per essay. So would open-pooled evaluation, because it required three or more readers per essay, making reader salaries prohibitive. In addition, the AP model of scoring, which ETS had run and studied for over a decade, did not require further validation. The EC simply did not have the time or the money to pre-test their method, as the National Assessment of Educational Progress was doing with primary-trait scoring for their second round of testing in 1974-1975.

Cost-wise, adapting AP scoring, which had undergone two decades of refinement, was just smart. The scoring guide would be patterned after the AP’s “rubric,” with no gridded scaling of criteria. AP’s current 9-point scale was reduced to a 6-point scale, criterion referenced to the extent that scores of 4 and above would earn “credit,” 3 and below “no credit.” Each essay would have two independent readers, and a discrepancy of two or more points would be adjusted through a third reader or a group consultation. Adjacent scores were just added. All this was “in accordance with Advanced Placement models” (in White, 1973, p. 36), as Friedrich himself noted. The first two EEE reports even use the AP terms “rubric” and “scoring key” (e.g., White, 1973, p. 36; White, 1974, p. 19), terms that White would later discard in favor of “scoring guide.”
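To make the mechanics of the adapted procedure concrete, here is a minimal sketch in Python (our illustration, not anything the EC or ETS used) of the two-reader rule described above: each reader scores on the 6-point scale, identical or adjacent scores are simply summed, and a gap of two or more points is flagged for a third reading or group consultation. How credit lines were finally drawn on the combined scale is not modeled here.

    def combine_scores(reader1, reader2):
        """Combine two independent readings on the 6-point scale.

        Identical or adjacent scores are summed; a discrepancy of two or
        more points is flagged for a third reading or group consultation,
        following the adapted AP procedure sketched above.
        """
        for score in (reader1, reader2):
            if not 1 <= score <= 6:
                raise ValueError("scores must fall on the 6-point scale")
        if abs(reader1 - reader2) >= 2:
            return {"combined": None, "needs_resolution": True}
        return {"combined": reader1 + reader2, "needs_resolution": False}

    # A 4 and a 5 are adjacent, so they sum to 9 with no further reading;
    # a 2 and a 5 are discrepant and go to a third reader.
    print(combine_scores(4, 5))   # {'combined': 9, 'needs_resolution': False}
    print(combine_scores(2, 5))   # {'combined': None, 'needs_resolution': True}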

Implementation

This emphasis on economy does not mean that the EEE scrimped on quality to save costs. (By today’s standards, AP scoring of essays itself was rather labor intensive.) The two essay topics received a trial run as the third assignment in a composition class at CSU Northridge, scored by the EEE rubric, and the results were compared with the grades the students had received on their previous essays (White, 1973, p. 44). On June 5, Friedrich, Lid, Burbank, and White read through 100 test essays, selected sample papers, and discussed criteria. The question leaders, Friedrich and Burbank, created tentative scoring guides, and the morning of the next day adjusted them with the table leaders, who then met and practiced with their readers all that afternoon. This adjustment of the guides helped create a sense of solidarity and ownership among the teacher readers (White, 2017). “Live” scoring the next day was constantly checked, with random batches of papers collected and read blindly by the table and question leaders. Out-of-line readers were tapped, re-instructed, and sometimes re-normed with sample papers. Discrepant scores were adjusted, sometimes by groups formed of the ablest readers. Later, failing essays authored by students with high objective-test scores were re-read. The implementation was highly professional, time consuming, and, as White (1973) noted, “enormously expensive” (p. 77).

Audit

Using the terminology of educational measurement, White’s 1973 report was quick to make claims about the relationship between evidence gathering and curricular consequences: “The existence of a valid and reliable test demanding writing, as part of a freshman English equivalency examination, is likely to have substantial and positive effects on teaching and learning in the schools as well as in the colleges” (p. 6). Even the table leaders used the language in their reports. Rex Burbank noted with satisfaction in his report that “generally the papers were graded with a very high degree of uniformity, reliability, and validity” (White, 1973, p. 45).

All told, the AP-style holistic reading cost $49,348, accounting for almost 65% of the total expenditure on the exam. In comparison, the fee paid to ETS for the objective CLEP half of the exam was only $28,000. The message was clear to English teachers at other colleges wanting to take in hand their own equivalency or placement exams. Holistic scoring, done with know-how and without short-cuts, was hard work and not cheaper than objective testing. The good news for CSU was that the $15 student exam fee paid for a full 80% of the total cost of the examination. That meant the system had spent only $9 per credit hour earned, a figure that probably represents a savings compared to instructional costs, as the EC (1972) report documents.
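A rough back-of-the-envelope check, using only the figures reported in this section and treating the reported percentages as approximate, shows how these proportions fit together:

\[
4{,}071 \times \$15 = \$61{,}065, \qquad
\frac{\$61{,}065}{0.80} \approx \$76{,}000 \ \text{(implied total cost)}, \qquad
\frac{\$49{,}348}{\$76{,}000} \approx 65\% \ \text{(share spent on the holistic reading)}.
\]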

Fallout

Inherent in large-population testing is anxiety over the unexpected, which usually results in unforeseen expenditures. The more sizeable the population and the more innovative the test, the greater the anxiety. In a voluntary exam such as the EEE, will enough students show up with fees in hand—that is, will enough customers buy the new product? Will interrater reliability be high enough to justify the scores—that is, will the product pass quality assessment? Bad news in either case could torpedo the test, especially if it were authorized on a trial basis.

With the first EEE, the most disconcerting surprise, actually a double surprise, involved the student essays that the EC hoped to collect from CSUC teachers at the end of the second semester of the English sequence, to provide a criterion reference for awarding advance credit. Surely, EEE students qualifying for six hours of English credit should be writing at least as well as enrolled students at the end of the first-year English sequence. The first surprise was that fewer than half of the course papers promised by teachers showed up. Four of the participating institutions sent none at all. White (1973) saw “very clear evidence of the residual resentment most English departments continue to feel towards equivalency testing” (p. 58). The EC, of course, had been aware all along that many of the test’s customers, the faculty themselves, did not buy into any kind of equivalency examination of student writing proficiency.

The second surprise was the poor quality of the end-of-second-semester papers that teachers did hand over. Many of the CSUC college essays would not have received EEE credit. With good reason, White’s (1973) report conjectured that “the motivation of the students writing the norm sample at the end of their class work was far below that of the students taking the test, for credit” (p. 51). The EC should not have been surprised. As early as 1956, the same phenomenon had been seen with the “reference papers” written by college students that ETS had collected to check the validity of AP scores (Swineford, 1956). Two years later, the problem had not disappeared, and eventually AP stopped gathering the comparison sample (Jameson, 1980, p. 21). In 1970, Friedrich had commented on the AP comparisons and attributed part of the discrepancy to differences in student-writer motivation, but he added course prestige as another explanation. AP exam takers feel themselves part of high school AP courses, and that confers a “prestige-lending privilege,” whereas college first-year English students may feel part of “the least specialized and the least rewarding of chores” (Friedrich, 1970, p. 12).

The issue was not minor because it involved possible false advertising. The EC had found that they could not say exactly to what their merchandise, their “equivalency exam,” was equivalent.

Reiteration

Nevertheless, in the eyes of the Chancellor’s office, the EEE passed muster, and they continued to support the EC’s administration of the test for another eight years. For the EC, the success raised old and new problems. With the second testing, in 1974, to assure a sufficient number of “norm papers,” the comparison essays were drawn from courses taught by one of the question leaders, Rex Burbank at San Jose. This time, the sample was large enough, but the same problem recurred: performance low in comparison with EEE passing quality. Was it because the students at the end of the year-long course came from sections “normally not taken by English majors and hence may be presumed to be skewed low in writing ability” (White, 1974, p. 41)? A shaky excuse, and the EC was faced with the “grim curricular implications” (White, 1974, p. 50) that standards in CSUC first-year English courses needed to be raised. From a commercial viewpoint, the situation was just as grim. Should next year’s potential customers be informed that the test they were purchasing to earn advance credit for two first-year English courses required a level of writing higher than that needed to pass the actual courses?

Another problem for the reiteration was creating essay prompts that were new yet equivalent to the previous year’s prompts. The issue, in this case, was product batch consistency. In the second exam, the student pass rate turned out to be lower. Was the run of students less proficient, or were the topics harder, and how could that question be answered? Faculty who manage their own testing often cannot match professional testing companies, which have the time, the money, and the expertise to run topic equivalency studies.

With sellable goods, innovation, if successful, must turn into replication, and with replication comes burnout—potentially fatal and, in actuality, hard to fight. With the second EEE in 1974, the EC introduced some changes. The protocol was made more efficient, for instance, by keypunching results for computer analysis. And there was a certain turnover that brought in new people. William Robinson of CSU San Francisco joined to create essay topics (some years later, he would help develop an innovative rising-junior writing exam at San Francisco). But for the sake of efficiency and cost, “In almost every respect, procedures followed those of the previous year, and the same personnel were involved in the same capacities” (White, 1974, p. iii). Although each new topic had a new scoring guide, it reproduced the structure and basic criteria of the old one and in some places copied the guide verbatim. The same question leaders and table leaders were used. All this duplication saved money.

However, one third of the assembly-line workers—the readers—did not come back. A short lifespan is one of the consistent patterns in the history of faculty-led tests using holistic scoring of student essays, but how long such exams last often has much to do with the avoidance of worker burnout, with the enjoyment and bonding of the readers. Just as the process of innovation helped the EC unify, the process of implementation—with its common task, norming, table talk, and break-time socializing—helped bring together teachers from the ends of an educational system that spanned a thousand miles. In the last paragraph of his last EEE report, Edward White (1981) predicted that “the ‘in-service training’ by-product of English Equivalency will be the most important educational impact of the program.”

Actionable History

As White (2001) recollected, “The general success of the English Equivalency Examination led directly to faculty direction of the English Placement Test [EPT] that the CSU developed five years afterwards, a program that affected ten times as many students and faculty and that was widely imitated around the country” (p. 318). The EPT was launched in fall of 1977, so for five years CSU was running two examinations connected with entering-student writing proficiency. But the “success” of neither the EEE nor the EPT guaranteed its lasting production. With the EEE, all it took was the appointment of a new administrator in the Chancellor’s office, namely a new director for New Program Development and Evaluation, the office that was funding the testing. After the 1980 exam, she removed White as director of the EEE, and after one more year the EEE ceased. Remembering back, White notes the unwritten rule that “faculty leadership in assessment is transitory while career administrators build and maintain programs so they can move on . . . to bigger salaries” (E. M. White, email to authors, personal communication, February 2017). While individuals on the team are left behind, the owners of innovation proceed according to their own lights.

History shows us that local testing typically does not reiterate for very long, certainly not as long as Tums, Band-Aids, or AP English. In theory, faculty-run testing can adjust for the negative effects of replication, but, as we have said, in reality the testing itself is owned by the higher administration, the managers of the innovation. As we have noted, the EEE lasted only nine years.

As for the EPT, it had a longer production run, exactly 40 years. On August 2, 2017, once again those managers ended local California writing assessments in higher education. In this case, it was by an executive order from Timothy P. White, Chancellor of the CSU system. In the fall of 2018, he declared, new baccalaureate credit-bearing courses would be introduced to strengthen written communication in English courses as well as quantitative reasoning in mathematics courses. Standard practice—admitting students, testing them, and placing those falling below a determined cut score into non-credit remedial courses—would be discontinued. In place of the tests, first-year skills assessment would be made from multiple measures of academic proficiency: course grades in high school English and mathematics/quantitative reasoning, high school grade point average, grades in collegiate courses, ACT scores, SAT scores, AP scores, International Baccalaureate scores, SAT subject tests, and Smarter Balanced Assessment/Early Assessment Program scores. As the Chancellor noted, “Effective with this executive order, the English Placement Test (EPT) and the Entry-Level Mathematics (ELM) Test shall not be offered, and the EPT and ELM committees are discontinued” (White, 2017). Because the Chancellor had explicitly established that the CSU system was “committed to providing students an equitable opportunity to succeed” (White, 2017), remediation and the tests used to designate those in need had become an outmoded innovation associated with the potential for disparate impact (Poe & Cogan, 2016). But notice that the CSU system does not pay for any of the new multiple measures.
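
Executive Order 1110 lists the measures but publishes no algorithm for combining them. Purely to make the shape of a multiple-measures rule concrete, the sketch below invents one; every threshold, field name, and category label is hypothetical rather than CSU policy, and, in keeping with the order, no profile is routed into stand-alone non-credit remediation.

```python
# A hypothetical multiple-measures placement rule. The executive order lists
# the measures but no algorithm; all cutoffs here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    hs_gpa: float                      # high school grade point average
    english_course_grade: float        # last high school English grade, 4.0 scale
    sat_erw: Optional[int] = None      # SAT Evidence-Based Reading and Writing
    act_english: Optional[int] = None
    ap_english: Optional[int] = None   # AP English exam score

def placement(a: Applicant) -> str:
    """Return a placement category from whichever measures are available."""
    if a.ap_english is not None and a.ap_english >= 3:
        return "credit-bearing composition, possible credit"
    strong_test = (a.sat_erw or 0) >= 550 or (a.act_english or 0) >= 22
    strong_record = a.hs_gpa >= 3.0 and a.english_course_grade >= 3.0
    if strong_test or strong_record:
        return "credit-bearing composition"
    # No stand-alone remediation: weaker profiles receive support, not exclusion.
    return "credit-bearing composition with co-requisite support"

print(placement(Applicant(hs_gpa=2.7, english_course_grade=2.5, sat_erw=480)))
```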

In a semiotic shift associating the presence of the test with barriers preventing equitable opportunity to succeed, the CSU managers deployed what Clayton Christensen (1997) would term a disruptive technology—an innovation that holds the potential to displace established competitors. If high school English course grades and associated in-place tests could do what the EPT had done, then why not let the curriculum and tests that had already been taken do the work at a fraction of the time and cost? Indeed, the executive order is itself a form of disruptive innovation (Christensen, Horn, & Johnson, 2008; Christensen & Raynor, 2003). In disruptive innovation,

the process by which a sector that has previously served only a limited few because its products and services were complicated, expensive, and inaccessible, is transformed into one whose products and services are simple, affordable, and convenient and serves many no matter their wealth or expertise. (Christensen, Horn, Caldera, & Soares, 2011, p. 2)

As forms of disruptive innovation, common-sense markers such as grade-point average and AP scores have the potential to move rapidly and relentlessly across markets to displace the established testing competition.

Back in 1981, “disruptive innovation”—the idea, not the words—might well have been in the mind of the director for New Program Development and Evaluation. Why not shift the burden, and improve the validity, from a purchased exam to one designed by local faculty? In 2017, a new question: for advanced standing, why not just shift the burden, the time, and the money entirely to the student? So, in the end, CLEP won the market. Largely for economic reasons, a commercial firm beat out a faculty enterprise. It is possibly the most common tale in the history of post-WWII writing assessment.

We can still ask, however, why the EEE sold for as long as it did. The answer must lie beyond the bonding of workers who produced it, lie within the product itself. Early on—May 14, 1973, to be exact—in a letter to CSUC English professors asking them for “norming” samples from their second-semester English students, White (1973) wrote, “this is the first time English professors have assumed responsibility for the reliability and total administration of such a test, and the program thus becomes an example of professional activity without precedent” (p. 55). Not true, to be sure (the undergraduate English faculty at the University of Chicago had run a holistically scored equivalency test in the 1950s), but not without its truth: the persisting worth of the EEE centered somewhere in professional responsibility.

But responsibility to what? Our analysis begins with White’s 1993 observation that in early holistic scoring the entrepreneurial nature of testing could take over. The words bear repeating: “The few scholars and test administrators who were using holistic scoring were using all their energies to confront the problems of cost and scoring reliability, as practical aspects of the large testing programs they were supervising” (White, 1993, p. 82). The EEE went beyond that, as White (1984) himself declared in his essay “Holisticism.” The method of holistic scoring may have achieved some pragmatic ends, making “the direct testing of writing practical and relatively reliable” (White, 1984, p. 408), and it may have achieved some indirect social ends, bringing “together English teachers to talk about the goals of writing instruction” (White, 1984, p. 408), but beyond that “it embodies a concept of writing that is responsible in the widest sense” (White, 1984, p. 408). It was responsible to its product, which was responsible to its advertised use.
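
White’s reference to cost and scoring reliability points to checks that holistic-scoring programs of the era commonly ran. The sketch below is a generic illustration of one such check rather than the EEE’s documented protocol: exact and adjacent agreement between two independent readings on a six-point scale, with larger discrepancies sent to a third reading. The score pairs are hypothetical.

```python
# A minimal sketch of a common reliability check for holistic scoring, not the
# EEE's documented procedure. Score pairs below are hypothetical.
def agreement(pairs):
    """pairs: list of (reader1, reader2) holistic scores on a 1-6 scale."""
    exact = sum(1 for a, b in pairs if a == b)
    adjacent = sum(1 for a, b in pairs if abs(a - b) == 1)
    discrepant = [p for p in pairs if abs(p[0] - p[1]) > 1]  # sent to a third reader
    n = len(pairs)
    return exact / n, (exact + adjacent) / n, discrepant

pairs = [(4, 4), (3, 4), (5, 3), (2, 2), (6, 5), (4, 2), (3, 3), (5, 5)]
exact, exact_or_adjacent, to_resolve = agreement(pairs)
print(f"exact: {exact:.0%}, exact+adjacent: {exact_or_adjacent:.0%}, "
      f"discrepancies for third reading: {len(to_resolve)}")
```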

In part, the EEE lasted because it was a quality product for its time, just as the EPT was. The EEE was an annually validated test that used an essay to test essay-writing skill, and that skill is a good in itself. Back in 1970, Gerhard Friedrich had defended that core criterion. An essay, he argued, is part of the testing of all AP subjects because it serves “to tease and test the mind at work, to let it explain itself in the process of tackling a given task, to produce not simply correct answers to be checked off, but evidence of how and how well it arrives at conclusions” (Friedrich, 1970, p. 21).

Still, no matter how good, the EEE offered an edgy, tottery product, owing its precarious balance to fractious powers outside itself. Reports were written and research was published, but there is little evidence that systematic review was used to investigate how student sub-groups were impacted by the test or what could be done to improve the way that test served its diverse students. It is within this perspective that we see the emergence of the social justice turn in writing assessment.
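
One form such systematic review could have taken, and a routine first step in disparate-impact analysis today, is a comparison of subgroup pass rates against the highest-passing group, often screened with the four-fifths rule of thumb. The sketch below uses entirely hypothetical figures.

```python
# A minimal sketch of a subgroup impact check: compare each group's pass rate
# to the highest-passing group and flag ratios below 0.80 (the "four-fifths"
# rule of thumb). All figures are hypothetical.
def impact_ratios(results):
    """results: dict mapping group name -> (number passing, number tested)."""
    rates = {g: passed / tested for g, (passed, tested) in results.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

hypothetical = {
    "Group A": (480, 800),
    "Group B": (150, 400),
    "Group C": (90, 300),
}
for group, ratio in impact_ratios(hypothetical).items():
    flag = "below 0.80, review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```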

In terms of periodization, a 2012 replication study by Inoue and Poe of White and Thomas’s 1981 study of racial minorities and writing skills assessment in the CSUC revealed that sub-group differences persisted but that standardization had become entrenched. As Inoue and Poe (2012) found,

In Fresno many junior high and elementary schools put “troubled” and English language learners in the same classroom, creating a room where 80%-90% of the students do not speak English, or have great difficulties with English and learning, in an all English-speaking classroom. Perhaps this practice is a kind of triage or a response to the teacher resources available (if a teacher speaks Mandarin, it makes sense to put students who primarily speak that language in her classroom). Of course, the curriculum does not change, so failure becomes the norm, and students simply get pushed through. It is from these classrooms that many of our Hmong, Latino/a, and African American students come. Should the EPT be redesigned to accommodate these local conditions? (p. 355)

If evidence exists of such efforts, we have not been able to locate it. To borrow an image from Friedrich’s early poetry published in 1957, the EEE and its successor sold a map of a map of a map, the last being “the map within the mind.” Interiority is of limited use when trying to find a way out. And as any bookstore manager will tell you, maps have a short shelf life.

When maps no longer capture contemporary geography, they disappear. By 2017, the standard practice of remediation through high-stakes testing had fallen on the wrong side of history. Since 2012, the Complete College America program has published research documenting that 1.7 million beginning students are placed into remediation annually and that most of these students will not graduate. In light of such information and its applicability to the local setting, academic managers have been implementing alternatives to remediation in which the admitted student is viewed as a qualified student, and resources are reallocated to co-requisite courses, smaller class sizes, and additional tutorial support (Complete College America, 2015). In its most recent publication, the Complete College America (2016) initiative could not be more explicit: “End mandatory placement into stand-alone remediation” (p. 12). If we are indeed in a new era marked by a social justice turn in writing assessment (Inoue, 2015; Kelly-Riley & Whithaus, 2016; Poe, 2012; Poe & Inoue, 2016; Poe, Inoue, & Elliot, 2018), we can understand why the new disruptive technology of using in-place information wins out over stand-alone innovations, with their many costs, both economic and social. When attention shifts to fairness and justice, the aim of innovative practice shifts from systems to their impact.

As our account of the EEE at CSU demonstrates, writers who focus on histories of writing assessment fall along a continuum in which three distinct impulses may be identified. In the first, there are writers who focus on history in its own right, whose primary aim is to rescue lost documents and their creators from the fog of time. On a Scale: A Social History of Writing Assessment in America (Elliot, 2005) is just such a book in all but its final chapter. Following a second impulse, there are those who call for a serviceable sense of history. Huot, O’Neill, and Moore (2010), for instance, famously called for a usable past for writing assessment. As they correctly proposed, by understanding the importance of assessment historically, we can use it more effectively in the present. As they conclude, “One thing we know for sure: if we work actively to create a productive culture of assessment around the teaching of writing and the administration of writing programs, the future of writing assessment will be much different from its past” (Huot et al., 2010, p. 512). That we are already witnessing such shifts has been established by Behizadeh and Engelhard (2015) in their identification of resonances between the educational measurement and writing studies communities in terms of theories and procedures related to validity evidence. The third impulse, a call for actionable history, is evident in the present article and in a study under final revision by the authors (Haswell & Elliot, 2018). In what we hope will be a new tradition, the recovered past is both reflectively usable and heuristically actionable. Specifically, the history of writing assessment becomes a way to chart, in detail, paths for action. If we are indeed approaching an age of justice in writing assessment—one in which assessment fairness is defined as the identification of opportunity structures (Elliot, 2016)—then the extension from usefulness into action is worth exploring. Perhaps this shift will also recast what Morris, Greve, Knowles, and Huot (2015) found in their study of 34 books published on writing assessment within a 38-year period: The ownership of writing assessment research by writing assessment scholars can be seen as “limited, with a constant agenda being set outside the purview of writing teachers and administrators” (p. 132). Adoption of an actionable justice and fairness framework allows writing assessment scholars to circumvent solipsism by attending in innovative ways to broad social issues that have consequences for all.

As our case study of CSUC’s EEE has demonstrated, attention to organizational practices provides a realpolitik to accompany contemporary discussions of score interpretation and score use (Kane, 2013). These rubber-meets-the-road approaches both inform our perspectives and strengthen our commitment to action undertaken on behalf of students. As the present case illustrates, the time-honored distinction between purchased and local assessments is not as useful a perspective as close observation of the processes of innovation and the social conditions used to incorporate and displace such assessments. As well, the case illustrates the moral culpability that occurs when a test is used for a purpose it was not designed to support, without validation of the intended new use. The question of the EEE was not, in fact, one of multiple-choice tests versus direct writing assessments. The case was one of a test designed to award college credit for workplace experiences being used to place students into academic courses. A quick look at the College Board (n.d.) video makes one wonder what Chancellor Dumke had in mind when, in 1971, he decided that admitted students would be awarded college credit based on their performance on the English part of the CLEP general test, an examination designed to give college credit for workplace experiences. How often have we witnessed such inappropriate test use and the damage it can do? Perhaps now is the time to make such histories actionable and prevent the injustices that, as history reveals, are sure to occur.

To return to the case at hand: In the future, writing assessment scholars may be well served by examining episodes in the history of writing assessment through models of innovation such as the one we have used here. From such perspectives, attention may be given to the material conditions of our students and, perhaps, future actions can be taken, offering innovations that, in turn, may be used to structure opportunities for them. Creating social value by establishing new techniques for writing assessment can, as history shows, disrupt an antiquated testing market and its shopworn network of values.

Disruption is a process. It is time to begin.

Author Note

Richard Haswell is Haas Professor Emeritus of English at Texas A&M University-Corpus Christi, and co-editor of CompPile: An Inventory of Writing Studies. Norbert Elliot is Research Professor at University of South Florida and editor-in-chief of Journal of Writing Analytics. They are completing a new book, Holistic Scoring of Writing: A Theory, A History, A Reflection.

References

Bailey, T., Jaggars, S. S., & Jenkins, D. (2015). Redesigning America’s community colleges: A clearer path to student success. Cambridge, MA: Harvard University Press.

Behizadeh, N., & Engelhard, G. (2015). Valid writing assessment from the perspectives of the writing and measurement communities. Pensamiento Educativo. Revista de Investigación Educacional Latinoamericana, 52(2), 34-54.

Bok, D. (2003). Universities in the marketplace: The commercialization of higher education. Princeton, NJ: Princeton University Press.

Boons, F., Montalvo, C., Quist, J., & Wagner, M. (Eds.). (2013). Sustainable innovation and business models. [Special issue]. Journal of Cleaner Production, 45.

Britton, J. N., Martin, N. C., & Rosen, H. (1966). Multiple marking of English compositions: An account of an experiment (Schools Council Examinations Bulletin No. 12). London, UK: Her Majesty's Stationery Office.

Caldwell, E. (1973). Analysis of an innovation (CLEP). Journal of Higher Education, 44(9), 698-702.

Carnegie Commission on Higher Education. (1971). Less time, more options: Education beyond the high school. New York, NY: McGraw-Hill.

Christensen, C. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Boston, MA: Harvard Business School Press.

Christensen, C., Horn, M. B., Caldera, L., & Soares, L. (2011). Disrupting college: How disruptive innovation can deliver quality and affordability to postsecondary education. Washington, DC: Center for American Progress. Retrieved from http://www.americanprogress.org/issues/2011/02/pdf/disrupting_college.pdf

Christensen, C., Horn, M. B., & Johnson, C. W. (2008). Disrupting class: How disruptive innovation will change the way the world learns. New York, NY: McGraw Hill.

Christensen, C., & Raynor, M. E. (2003). The innovator’s solution: Creating and sustaining successful growth. Boston, MA: Harvard Business School Press.

College Board. (n.d.). CLEP: A public service message from the College Board. Retrieved from https://www.youtube.com/watch?v=6F02IMkpsAA

College Board, College Level Examination Program. (2017). Retrieved from https://clep.collegeboard.org/

Complete College America. (2012). Remediation: Higher education’s bridge to nowhere. Retrieved from http://completecollege.org/docs/CCA-Remediation-final.pdf

Complete College America. (2015). Corequisite remediation: Spanning the completion divide. Washington, DC: Complete College America. Retrieved from http://completecollege.org/spanningthedivide/#home

Complete College America. (2016). New rules: Policies to strengthen and scale the game changers. Washington, DC: Complete College America. Retrieved from http://completecollege.org/wp-content/uploads/2016/11/NEW-RULES.pdf

Coppola, N. W., & Elliot, N. (2007). A technology transfer model for program assessment in technical communication. Technical Communication, 54(4), 459-474.

Daft, R. L. (1982). Bureaucratic versus nonbureaucratic structure and the process of innovation and change. In S. B. Bacharach (Ed.), Perspectives in organizational sociology: Theory and research (pp. 129-166). Greenwich, CT: JAI Press.

Dumke, G. S. (1971). A new approach to higher education . . . for the California State Colleges. ERIC ED 056 643.

Ehrich, T. L. (1971, July 15). Instant sophomores. Wall Street Journal, pp. 1, 20.

Elliot, N. (2005). On a scale: A social history of writing assessment in America. New York, NY: Peter Lang.

Elliot, N. (2016). A theory of ethics for writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=98

English Council of the California State University and Colleges. (1972). Equivalency testing in college freshman English: A report and a proposal. Los Angeles, CA: English Council.

Friedrich, G. (1957). The map within the mind. New York, NY: Exposition Press.

Friedrich, G. (1959). Benefits to English departments of the Advanced Placement program. College Composition and Communication, 10(1), 11-14.

Friedrich, G. (1970). Advanced placement: Some concerns and principles. College Board Review, 78, 20-21.

Gallagher, C. W. (2011). Being there: (Re)making the assessment scene. College Composition and Communication, 62(3), 450-476.

Godshalk, F. (1961). Internal ETS memo to Henry Chauncey, dated January 30. Princeton, NJ: Educational Testing Service.

Gray, V. (1973). Innovation in the states: A diffusion study. The American Political Science Review, 67(4), 1174-1185.

Haswell, R. H., & Elliot, N. (2018). Holistic scoring of writing: A theory, a history, a reflection. Manuscript in preparation.

Hornback, V. T., Jr. (1973). On building an effective regional professional association. ADE Bulletin, 38, 23-25.

Huot, B. A., O’Neill, P., & Moore, C. (2010). A usable past for writing assessment. College English, 72(5), 495–517.

Hussar, W. J., & Bailey, T. M. (2017). Projections of education statistics to 2025 (NCES 2017-019). Washington, DC: U.S. Department of Education and National Center for Education Statistics.

Inoue, A. B. (2015). Antiracist writing assessment ecologies: Teaching and assessing writing for a socially just future. Fort Collins, CO: WAC Clearinghouse; Anderson, SC: Parlor Press.

Inoue, A. B., & Poe, M. (2012). Racial formations in two writing assessments: Revisiting White and Thomas’ findings on the English Placement Test after thirty years. In N. Elliot & L. Perelman (Eds.) Writing assessment in the 21st century: Essays in honor of Edward M. White (pp. 343-361). Cresskill, NJ: Hampton Press.

Jameson, R. U. (1980). An informal history of the AP readings, 1956-76. New York, NY: Advanced Placement Program of the College Board.

Kane, M. T. (2013). Validating the interpretation and uses of test scores. Journal of Educational Measurement, 50(1), 1–73.

Kaufmann, A., & Tödtling, F. (2001). Science-industry interaction in the process of innovation: The importance of boundary-crossing between systems. Research Policy, 30(5), 791-804.

Kelly-Riley, D., & Whithaus, C. (2016). A theory of ethics for writing assessment [Special issue]. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=99

Kerr, C. (1971). Less time, more options. In Education for the future: Reform or more of the same? Proceedings of the 20th SREB Legislative Work Conference, Atlanta, GA, July 14-17 (pp. 25-30). Atlanta, GA: Southern Regional Education Board. ERIC ED 056 640.

Kerr, C. (1991). The great transformation in higher education: 1960-1980. Albany, NY: State University of New York Press.

Klompien, K. J. (2012). Speaking truth to power: A history of the California State University English Council. (Doctoral dissertation). Indiana, PA: Indiana University of Pennsylvania.

Leggett, G. (1965). Counciletter: An interim report. College English, 26(3), 233-235.

Miller, K. L. (2016). The rhetoric of writing assessment (Unpublished doctoral dissertation). Reno, NV: University of Nevada.

Morris, W., Greve, C., Knowles, E., & Huot, B. (2015). An analysis of writing assessment books published before and after the year 2000. Teaching English in the Two-Year College, 43(2), 118-140.

Poe, M. (2012). Diversity and international writing assessment [Special issue]. Research in the Teaching of English, 48(3).

Poe, M., & Inoue, A. B. (2016). Toward writing assessment as social justice: An idea whose time has come [Special issue]. College English, 79(2), 119-126.

Poe, M., Inoue, A. B., & Elliot, N. (Eds.). (2018). Writing assessment, social justice, and the advancement of opportunity. Fort Collins, CO: WAC Clearinghouse; Boulder, CO: University Press of Colorado.

Roe v. Wade, 410 U.S. 113 (1973).

Soo, M., & Carson, C. (2004). Managing the research university: Clark Kerr and the University of California. Minerva, 42(3), 215-236.

Smith, R. F. (1976). Report of the Chief Reader. In College Entrance Examination Board Advanced Placement Examination: English. New York, NY: College Board.

Swineford, F. (1956). College Entrance Examination Board Advanced Placement Tests. Princeton, NJ: Educational Testing Service.

Teixeira, R., Frey, W. H., & Griffin, R. (2015). States of change: The demographic evolution of the American electorate, 1974-2060. Washington, DC: Center for American Progress, American Enterprise Institute, & Brookings Institution.

Whitaker, U. (1972). Credit by examination at San Francisco State College. College Board Review, 83, 12-16.

White, E. M. (1973). Comparison and contrast: The 1973 State University and Colleges Freshman English Equivalency Examination. Los Angeles, CA: English Council of the California State University and Colleges.

White, E. M. (1974). Comparison and contrast: The 1974 State University and Colleges Freshman English Equivalency Examination. Los Angeles, CA: English Council of the California State University and Colleges.

White, E. M. (1981). Comparison and contrast: The 1980 and 1981 State University and Colleges Freshman English Equivalency Examination. Los Angeles, CA: English Council of the California State University and Colleges.

White, E. M. (1984). Holisticism. College Composition and Communication, 35(4), 400-409.

White, E. M. (1993). Holistic scoring: Past triumphs, future challenges. In M. M. Williamson & B. A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 79-108). Cresskill, NJ: Hampton Press.

White, E. M. (2001). The opening of the modern era of writing assessment: A narrative. College English, 63(3), 306-320.

White, E. M., & Thomas, L. L. (1981). Racial minorities and writing skills assessment in the California State University and Colleges. College English, 43(3), 276-283.

White, T. P. (2017, August 2). Memorandum: Assessment of academic preparation and placement in first-year general education written communication and mathematics/quantitative reasoning courses, Executive Order 1110. Retrieved from http://www.calstate.edu/eo/EO-1110.html

Zwick, R. (2017). Who gets in? Strategies for fair and effective college admissions. Cambridge, MA: Harvard University Press.