
About

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement as well as other relevant topics. Articles are welcome from a variety of areas including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment.

Please refer to the submission guidelines on this page for information for authors.

Articles

Editor’s Introduction: Contract Grading, Portfolios, and Reflection

The articles in this issue examine the continuing use and development of contract grading in college and high school writing courses (DasBender et al. and Watson); time and labor as important influences on assessment outcomes despite most often being treated as outside the construct of writing (Del Principe); and the treatment of reflection within writing assessment theory and practice (Ratto Parks).

Contract Grading and the Development of an Efficacious Writerly Habitus

Contract grading has been shown to reduce stress and anxiety, promote self-directed learning, and disrupt unjust educational norms (Cowan, 2020; Inoue, 2019; Medina & Walker, 2018). Yet there is growing recognition of challenges associated with the approach, including the unintended effects of deemphasizing grades (Inman & Powell, 2018) and the possibility that labor-based contracts, in particular, may put some students at a disadvantage (Carillo, 2021). This article reports selected findings from an IRB-approved, multi-semester, comparative study of labor-based and labor-informed contract grading in first-year writing courses at a large private research university. The study affirms several findings from existing research on contract grading. In particular, it shows that the approach mitigates students’ stress and anxiety and increases their overall satisfaction with grading. Contract grading shifts the assessment ecology of the first-year writing classroom so that the challenges and rewards of writing take priority over the pressures and limitations of grades (see Inoue, 2015). Drawing on the literature on self-efficacy (Bandura, 1977, 1994, 1997; Pajares, 2003), the authors theorize that contract grading encourages students to develop an efficacious writerly habitus grounded in self-motivated effort, increased confidence, and heightened understanding of writing as a mode of thinking.


  • 1 supplemental ZIP

Achieving High Goals: The Impact of Contract Grading on High School Students' Academic Performance, Avoidance Orientation, and Social Comparison

This article examines American high school students’ (N=439) self-worth protection behaviors, maladaptive coping mechanisms, and academic performance under a contract grading system, an approach that has been understudied in contemporary secondary classrooms. The quantitative analysis revealed that under the contract grading system, 97% of students (n=421) earned a passing grade (i.e., A, B, or C) on the assessment and 90% (n=390) fulfilled the contract by reaching mastery (A) or proficiency (B). Compared to the previous year, students with prior experience were 19% more likely to earn an A and 16% more likely to earn a B under the grading contract despite increased workload demands. The qualitative analysis of 40 semi-structured interviews revealed that performance improved as a result of the contract’s clarity of purpose, which limited task avoidance and facilitated task-oriented effort toward a desirable goal. Students enrolled in regular courses experienced the most significant grade improvement, owing to clear expectations that helped them direct their effort toward the right tasks. The findings of this study lead to a call to action for teachers to implement contract grading in high school classrooms to clarify work expectations, improve task-oriented effort, and help students set and achieve high goals.

Time as a “Built-In Headwind”: The Disparate Impact of Portfolio Cross-Assessment on Black TYC Students

This study of a departmental portfolio cross-assessment practice sheds light on factors that appear to influence assessment outcomes for Black students and helps to tease out some of the reasons why this assessment ecosystem has a disparate impact on them. The findings, drawn from student outcomes data and student survey data, suggest that it isn’t only, or even primarily, Black students’ linguistic variety that leads to higher failure rates. The writing qualities most commonly flagged on Black students’ failing portfolios are likely related to the very different material conditions in which they write their papers. These conditions challenge the framing of “time” and “labor” as neutral, non-racially-inflected resources to which all students have equal access, resources that are rarely conceptualized as part of the construct of writing ability. As TYCs across the country reform their placement mechanisms for greater access and equity and place more and more students of color into their credit-bearing FYC 1 courses, we have an ethical obligation to watch for disparate impact created by our pre-existing assessment ecosystems.

What Do We Reward in Reflection? Assessing Reflective Writing with the Index for Metacognitive Knowledge

Reflection is a staple of contemporary writing pedagogy and writing assessment. Although the power of reflective writing has long been understood in writing studies, the field has not made progress on articulating how to assess that reflective work. Developed at the crossroads of research in reflection and metacognition, the Index for Metacognitive Knowledge (IMK) is designed to help writing researchers, teachers, and students articulate what is being rewarded in the assessment of reflection and to clarify the role of metacognitive knowledge in critical reflective writing. The IMK was used to code final portfolio introductions from first-year writing courses in order to analyze the distribution of the three kinds of metacognitive knowledge (declarative, procedural, and conditional) and to explore the quality and complexity of students’ metacognitive knowledge. Inter-rater reliability testing showed the IMK to be highly reliable, with a Fleiss’ kappa of .834. The IMK offers researchers, teachers, and students language with which to explore the unique work of reflective writing in order to develop more metacognitively rich observations. It provides a framework to explain the evolving complexity of students’ reflective writing and to assess and describe the impacts of other pedagogical interventions.

  • 1 supplemental ZIP
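
For readers unfamiliar with the reliability statistic reported in the IMK abstract above, the sketch below shows one common way to compute Fleiss’ kappa for multiple raters coding the same texts, using the statsmodels library. The ratings array and category codes are hypothetical placeholders, not data from the study; this is a generic illustration of the calculation, not the authors’ procedure.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical example: rows are coded portfolio excerpts (subjects),
# columns are raters, and values are category codes
# (e.g., 0 = declarative, 1 = procedural, 2 = conditional).
ratings = np.array([
    [0, 0, 1],
    [2, 2, 2],
    [1, 1, 1],
    [0, 1, 0],
    [2, 2, 1],
])

# Convert the subjects-by-raters matrix into the subjects-by-category
# count table that fleiss_kappa expects.
table, categories = aggregate_raters(ratings)

kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")
```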