Welcome to the Journal of Writing Assessment
Check out JWA's Reading List for reviews of relevant writing assessment publications.
The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and non-educational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment. Please refer to the submission guidelines on this page for author information.
The Journal of Writing Assessment's online ISSN is 2169-9232.
The Journal of Writing Assessment is proud of and grateful for the support of the following organizations:
DEPARTMENT OF ENGLISH
COLLEGE OF LETTERS, ARTS & SOCIAL SCIENCES
Volume 8, Issue 1: July 2015
To aggregate or not? Linguistic features in automatic essay scoring and feedback systems
by Scott A. Crossley, Kristopher Kyle, and Danielle S. McNamara
This study investigates the relative efficacy of using linguistic micro-features, the aggregation of such features, and a combination of micro-features and aggregated features in developing automatic essay scoring (AES) models. Although the use of aggregated features is widespread in AES systems (e.g., e-rater; IntelliMetric), very little published data exists demonstrating the superiority of this method over the use of linguistic micro-features or a combination of both micro-features and aggregated features. The results of this study indicate that AES models composed of micro-features, and models combining micro-features and aggregated features, outperform AES models composed of aggregated features alone. The results also indicate that AES models based on micro-features, or on a combination of micro-features and aggregated features, provide a greater variety of features with which to give formative feedback to writers. These results have implications for the development of AES systems and for providing automatic feedback to writers within these systems.
Book review: Henry Chauncey: An American Life by Norbert Elliot
by Bob Broad, Illinois State University
If you want to read a history of writing assessment as it developed during the 20th century within the narrow and specialized confines of the Educational Testing Service (ETS), you can’t do better than Norbert Elliot’s On a Scale: A Social History of Writing Assessment in America (2005). If your curiosity about ETS is not satisfied by that enormously careful and detailed history, and if you want to gain a close-up, intimate understanding of the person one author called ETS’s “first president and abiding institutional deity” (Owen, 1985, p. 1), then you can’t do better than Elliot’s new biography, Henry Chauncey: An American Life.
Later in this review, I will revisit those last two "ifs."