
Volume 4, Issue 1: 2011

Introduction to Volume 4

by Brian Huot

The Effect of Scoring Order on the Independence of Holistic and Analytic Scores

by Nancy Robb Singer and Paul LeMahieu

Abstract
Conventional wisdom and common practice suggest that, to preserve the independence of holistic judgments, holistic scoring should precede analytic scoring. However, little is known about the effects of scoring order on the scores obtained, or whether true holistic scoring is even possible for a scorer who has already been trained to provide analytic scores and will be asked to do so as well. This research explores the independence of scores and the effects of scoring order on those judgments. Our analysis shows statistically significant differences in mean scores under the two conditions (holistic scoring preceding analytic and the reverse), with holistic scores more nearly replicating "pure" holistic scoring only when holistic scoring precedes analytic scoring. This research affirms that when readers will be asked to score both ways, holistic scoring should precede analytic scoring. It also offers insights into the cognitive processes scorers engage as they score holistically and analytically.

Reframing Reliability for Writing Assessment

by Peggy O'Neill

Abstract
This essay provides an overview of the research and scholarship on reliability in college writing assessment from the author's perspective as a composition and rhetoric scholar. It argues for reframing reliability by drawing on traditions from the fields of college composition and educational measurement, with the goal of developing a more productive discussion about reliability as we work toward a unified field of writing assessment. In making this argument, the author uses the concept of framing to contend that writing assessment scholars should develop a shared understanding of reliability. That shared understanding begins with the values--such as accuracy, consistency, fairness, responsibility, and meaningfulness--that we have in common with others, including psychometricians and measurement specialists, rather than with a focus on methods. Traditionally, reliability has been framed by the statistical methods and calculations associated with positivist science, although psychometric theory has moved beyond this perspective. Over time, the author argues, if we can shift the frame associated with reliability, we can develop methods to support assessments that lead to the improvement of teaching and learning.

Validity inquiry of race and shared evaluation practices in a large-scale, university-wide writing portfolio assessment

by Diane Kelly-Riley

Abstract
This article examines the intersections of students' race with the evaluation of their writing abilities in a locally developed, context-rich, university-wide, junior-level writing portfolio assessment that relies on faculty articulation of standards and shared evaluation practices. The study employs sequential regression analysis to identify how faculty raters operationalize their definition of good writing within this university-wide writing portfolio assessment and, in particular, whether students' race accounts for any of the variability in faculty's assessment of student writing. The findings suggest that there is a difference in student performance by race, but that student race does not contribute to faculty's assessment of students' writing in this setting. However, the findings also suggest that faculty employ only a limited set of the criteria published by the writing assessment program and use non-programmatic criteria--including perceived demographic variables--in their operationalization of "good writing" in this writing portfolio assessment. This study provides a model for future validity inquiry into emerging context-rich writing assessment practices.

A prominent feature analysis of seventh-grade writing

by Sherry S. Swain, Richard L. Graves, and David T. Morse

Abstract
The present study identified the characteristics of seventh-grade writing produced in an on-demand state assessment situation. The subjects were 464 seventh graders in three middle schools in the southeastern United States. The research team included 12 English language arts teachers. The analysis yielded some 32 prominent features, 22 positive and 10 negative. The features were correlated with state assessment scores, which ranged from 1 to 4. Of the 22 positive features, 14 correlated positively with the assessment scores; of the 10 negative features, 8 correlated negatively with the assessment scores. The study also found 108 statistically significant (p < .001) intercorrelations among the features. From the features themselves, a formula was devised to create a prominent feature score for each paper, with scores ranging from 3 to 21. The prominent feature scores were also significantly correlated with assessment scores (r = .54). Whereas statewide assessment scoring assigns numerical values to student writing, prominent feature analysis derives numerical values from specific rhetorical features. These results may be helpful to classroom teachers in the assessment and diagnosis of student writing and to professionals who lead staff development programs for teachers.

Out of the box: A review of Ericsson and Haswell's (Eds.) Machine Scoring of Student Essays: Truth and Consequences

by Elliot Knowles

Situating writing assessment practices: A review of A Guide to College Writing Assessment

by Kristen Getchell