Volume 4, Issue 1: 2011

Situating writing assessment practices: A review of A Guide to College Writing Assessment

by Kristen Getchell

O'Neill, P., Moore, C., & Huot, B. (2009). A guide to college writing assessment. Logan, UT: Utah State University Press.

In A Guide to College Writing Assessment, Peggy O'Neill, Cindy Moore, and Brian Huot provide composition scholars, writing program administrators, and assessment professionals with a broad overview of current assessment theory and practice. With a general history of writing assessment and a focus on common administrative assessment practices such as placement, proficiency examinations, programmatic assessment, and faculty evaluation, the book helps build a foundation for productive discussions about how and why faculty and administrators do what they do. What might in many ways appear to be a practical manual is really a broad scholarly look at a complex field. The book is a sound resource for professionals in composition who want to develop appropriate, responsible, and defensible assessment practices, as well as an accessible and insightful introduction to important work in the field.

A Guide to College Writing Assessment is unique in its design. The first four chapters provide a theoretical overview of the foundations of the field; the book then becomes increasingly practical in its last four chapters and in the appendix, which offers sample writing assessment documents for placement, proficiency evaluation, and program assessment.

The book is ambitious in scope, providing an overview of an entire field, and each chapter is written with a coherence that would allow it to stand on its own. For this discussion, I have grouped the final four chapters into two sections; in addition to focusing on the specific message of each chapter, I want to consider how each fits with the book's larger themes of assessment. Additionally, as someone with more than a passing interest in writing assessment, I have tried to understand the volume as a coherent statement on fundamental aspects of writing assessment.

Complex Foundations
The book's first chapters provide an introduction to the foundations of current writing assessment through discussions of its history, theory, and context. One feature that sets this work apart from other writing assessment guides is the authors' willingness to engage with and address the complex history of writing assessment, in particular its early connections to educational measurement. Many composition scholars would prefer to avoid discussing the field's early ties to practices such as large-scale intelligence testing and indirect, multiple-choice testing of writing ability. It is not a past that compositionists are necessarily proud of, and over the past several decades the field of writing assessment has evolved away from these techniques. The authors of this book, however, not only acknowledge this history but also emphasize its importance for understanding current work in writing assessment. Indeed, it is important to note that while the field of composition was moving away from these practices, the field of educational measurement was also moving toward a more rhetorical understanding of assessment.

In addition to discussing the historical connection between educational measurement theory and writing assessment theory, the authors use terms such as "validity" and "reliability" to emphasize this connection and to provide examples of how educational measurement theory should continue to inform compositionists. This work begins in Chapter 2, "Historicizing Writing Assessment," which addresses issues of reliability and validity through an historical lens that is essential to making sense of these complex terms, and which emphasizes O'Neill's (2003) claim that the term validity is often "misconstrued" by writing assessment scholars in composition (p. 49). By understanding these terms in relation to their complex history, scholars and practitioners can begin to understand their importance to current assessment practice. The chapter also traces the paths of reliability and validity from their early days in educational measurement to the ways they are used today in writing assessment.

In Chapter 3, "Considering Theory," the authors provide clear, current definitions of validity and reliability as they review important literature and research that connected the terms to composition. Outlining both the way these terms are used and the way they are viewed in relationship to each other, the authors demonstrate how those in composition can use educational measurement theory and apply it to work in college writing assessment. While these connections have been previously articulated in writing assessment literature (Cherry and Meyer, 1993; Huot, 1996, 2002; Moss, 1992, 1994, 1998; Smith, 1992, 1993; Williamson, 1993; Yancey, 1999) they are generally omitted from works whose audience is not comprised of writing assessment specialists.

The authors continue to develop this connection between educational measurement and college writing assessment through their discussion, in both Chapters 3 and 4, of the work William Smith conducted at the University of Pittsburgh in the 1990s. Smith's work forms a bridge between the past and the present of writing assessment in two interesting ways. First, it moves the terms validity and reliability out of the hands of testing specialists, where they had previously rested, and into the hands of writing teachers and administrators. At the time, Smith was investigating the ways these terms influenced the writing placement program at his university. His primary concern was not defending a program but investigating the appropriateness of its placement decisions through multiple strands of validation research. Interestingly, this notion of appropriateness in Smith's work paralleled the work being done by Cronbach (1988) and Messick (1989) in educational measurement, which began to shift validity away from the test itself and toward the inferences, interpretations, and, in Smith's context, decisions made on the basis of a measure like the standard writing sample used for placement into first-year writing courses. The work of Messick, and later (and noticeably absent from the theoretical discussion in this book) Michael Kane (1992, 2006), is important for building a bridge between educational measurement and composition because both describe validation as an argument, thus conceptualizing writing assessment as a rhetorical act, something composition scholars and WPAs can find complementary to their own work.

The second important way that Smith's work brings the reader from the past to the present is that it begins to set up a framework for validation research in writing assessment. As mentioned in Chapter 3, Smith's research was not a direct response to outside pressure from administrators; instead, it emerged as a genuine inquiry into the procedures he had developed for placement into his university's writing program. Smith's work provides models of what validation inquiry can accomplish and of the ways it can shape practice for those who develop assessments. This is one place in the book where the recurrent theme of the importance of a research agenda emerges: Smith's work reminds writing assessment practitioners that, as we move into the future, it will be essential to conduct validation inquiry into our assessment practices on our own terms.

In addition to providing an historical perspective, A Guide to College Writing Assessment discusses the importance of understanding theory and context when developing ethical assessments. The last four chapters of the book are devoted to four common assessment situations: placement, proficiency testing, program evaluation, and faculty evaluation. As the authors note, while these four types of assessment occur outside of the classroom, their implications are often felt in the classroom; in many ways, they are an extension of classroom goals and practices. In addition, the authors contend that a focus on such out-of-classroom assessments will provide evidence about the effectiveness of what goes on inside the classroom.

Placement and Proficiency
Again, I have divided the discussion of these four chapters into two sections: assessments that evaluate students directly, and assessments we use to evaluate teaching practices. In Chapter 5, the authors provide a brief overview of the multiple ways of placing students into first-year writing courses. Returning to the idea of context, they remind us that each university has different placement needs. In addition, efficiency demands vary from school to school and play a critical, and sometimes unfortunate, role in shaping our assessment practices (Williamson, 1994). The authors review a variety of models, including directed self-placement (DSP) and automated scoring.

One of the most critical parts of the placement chapter is its focus on validation inquiry and on the means to evaluate placement programs responsibly. In their discussion of theory in Chapter 3, the authors cite the work of Samuel Messick, who emphasized that validity rests in the decisions being made as a result of a test. It is imperative, then, that we continue to conduct research that investigates the appropriateness of our decisions and, even further, how these placement decisions affect students and classroom dynamics. The chapter provides models of validation inquiry and support for faculty looking to conduct validation studies of their own. According to the authors, "Unless we can present evidence that all of the choices of a specific placement program provide sound educational experiences for the students who take such courses, we will have difficulty making an argument that our placement programs provide valid decisions-that is, it provides sound educational experiences for students" (p. 90). This is important for placement administrators to consider when they look to provide evidence of the effectiveness of their programs.

This discussion of the need for evidence continues in Chapter 6, on proficiency testing. Assessment scholars often encounter faculty members or writing program administrators looking to create proficiency programs that will provide them with data on student success or proficiency. Chapter 6 asks why universities choose to develop proficiency programs, given what a complex endeavor it is, and what information they seek when they test students. The chapter reviews different types of proficiency testing as it builds on the idea that an understanding of purpose is of primary importance in developing such testing. The authors emphasize that proficiency testing should emerge contextually, out of the specific needs of the university. As with Chapter 5 on placement, this chapter also emphasizes the importance of validation inquiry as a component of any strong testing program.

Assessing Programs and Faculty
The next two chapters move from student-focused writing assessments toward the ways faculty and WPAs can use assessment to inform their own practices. Chapter 7, "Conducting Writing Program Assessments," and Chapter 8, "Evaluating Writing Faculty and Instruction," address two types of evaluation that are often overlooked in the writing assessment literature: programmatic assessment and writing faculty evaluation. A recent experience of mine underscores how important it is for these types of assessment to be examined in our scholarship and taken up in books like this one. I describe the experience because I believe it is not unique.

At a recent faculty meeting, the dean of the college spoke about the state of affairs at our university and used terms like "accountable," "data," "effectiveness," and "restructuring." While he stopped short of dictating how the department should conduct its assessments, he made it clear that the best interests of our department, and of others at the university, were closely tied to a specific kind of assessment. O'Neill, Moore, and Huot point out that we need to be equipped and ready to respond to outside assessment demands in order to maintain control over our practices. In Chapters 7 and 8, the authors discuss ways to develop ongoing assessments of writing programs and to gather information about faculty practices for individuals, departments, and outside critics. In the current economy, college and university administrators are cutting programs they view as ineffective and pressuring departments to produce evidence that their programs are worthwhile. Writing program administrators and writing faculty cannot ignore these pressures, and they cannot assume that their conviction that their programs are invaluable to the university will be enough to protect them.

Chapters 7 and 8 offer tools to help writing program administrators and faculty find the most appropriate, sound, and informative ways to assess their programs. Continuing within the framework of connecting theory and context, these chapters provide an in-depth discussion of the logistical and methodological concerns facing those looking to implement programmatic assessment at their universities. The authors also attend to financial and staffing considerations, offering practical insight into funding assessments and generating enough support to carry them out.

I had not anticipated devoting an entire section of this review to the appendices of A Guide to College Writing Assessment, but I found them a significant point for consideration. My first response was that the appendix seemed out of place and a bit contradictory, considering the extensive attention the authors pay throughout each chapter to encouraging contextual assessment. Why would the authors provide blank sample assessment forms after emphasizing the importance of local assessment development? As I considered the book holistically, however, I realized that it is a strong demonstration of the intersection between theory and practice, appendices included. The samples were not provided for practitioners to apply blindly to their own assessment needs, as anyone who has read the discussion of context in Chapter 4 would understand. As the authors explain in their discussion of theory, "theory is fundamental to practices and our revision of practice" (p. 36). The samples are not meant to stand alone but to work side by side with the theory provided in the book. The documents that make up the appendix give practitioners a conduit for making the connection from theory, as clearly outlined in this book, to their own practice.

Continuing the notion of revising and reflecting on practice, one of the book's themes is inquiry. Regardless of the specific method of assessment, the book emphasizes the need to continually evaluate assessment programs to ensure that they are functioning according to current theory and the goals of writing programs. A Guide to College Writing Assessment not only emphasizes an understanding of theory as the responsibility of writing teachers and assessment designers, it also speaks to the need for a research agenda that informs practitioners about students, practices, and programs. It is this kind of data that can be provided to outsiders who look for evidence of faculty and program effectiveness.

All too often in English departments, writing programs, and doctoral and master's programs in composition, assessment gets left to the "assessment people." This book helps break down that division. Every scholar working in or entering the field of composition has a responsibility to understand the work being done in writing assessment. Whether as a common handbook for faculty or a graduate course textbook, A Guide to College Writing Assessment is an ideal resource for experienced and developing scholars of the field.

Cherry, R.D., & Meyer, P.R. (1993). Reliability issues in holistic assessment. In M.M. Williamson & B.A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 109-141). Creskill, NJ: Hampton Press.

Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549-566.

Huot, B. (2002). (Re)Articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.

Kane, M. (1992). An argument-based approach to validity. Psychological Bulletin, 112, 527-535.

Kane, M. (2006). Validation. In R.L. Brennan (Ed.), Educational measurement (4th ed., pp. 21-65). New York: American Council on Education and Macmillan.

Messick, S. (1989). Validity. In R.L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: American Council on Education and Macmillan.

Moss, P. A. (1992). Shifting conceptions of validity in educational measurement: Implications for performance assessment. Review of Educational Research, 62, 229-258.

Moss, P.A. (1994). Can there be validity without reliability? Educational Researcher, 23(2), 5-12.

Moss, P.A. (1998). Response: Testing the test of a test. Assessing Writing, 5, 111-112.

Smith, W.L. (1992). The importance of teacher knowledge in college composition placement testing. In J.R. Hayes (Ed.), Reading empirical research studies: The rhetoric of research (pp. 289-316). Norwood, NJ: Ablex.

Smith, W.L. (1993). Assessing the reliability and adequacy of placement using holistic scoring of essays as a college composition placement test. In M.M. Williamson & B.A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 142-205). Cresskill, NJ: Hampton Press.

Williamson, M.M. (1993). An introduction to holistic scoring: The social, historical, and the theoretical context for writing assessment. In M.M. Williamson & B.A. Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 1-44). Cresskill, NJ: Hampton Press.

Williamson, M.M. (1994). The worship of efficiency: Untangling theoretical and practical considerations in writing assessment. Assessing Writing, 1, 147-174.

Yancey, K.B. (1999). Looking back as we look forward: Historicizing writing assessment. College Composition and Communication, 50, 483-503.