Volume 11, Issue 1: 2018

Write Outside the Boxes: The Single Point Rubric in the Secondary ELA Classroom

by Jenna Wilson, Westville High School

The conversation around writing assessment in educational settings has been shaped by research, practice, and legislation over the last 100 years. This article focuses on secondary writing assessment, where instructors are typically constrained by local and statewide requirements. The debate between rubrics (such as the traditional analytic and holistic) and narrative feedback assessment has shaped secondary writing instruction and assessment, but it has largely been driven by stakeholders outside of the classroom. This article presents the single point rubric (SPR) as a possible tool to work against the problematic applications of analytic and holistic rubrics without the commitment of time, focus, and energy that narrative feedback assessment demands. Rooted in decades-old concepts of grid grading, the SPR combines the formulaic, time-saving components of rubrics with the differentiated, individualized components of narrative response and grading via detailed feedback. Though the SPR is not an answer to every problem involved with writing assessment, it offers a largely neglected tool for teachers who want to individualize writing assessment while remaining concise and efficient.


“That is not a rubric.”

Those words hung in the air between me and the angry father of a very bright student, a man who was sure of three things: I had not given his daughter a rubric, my assessment tool was irrational, and I had made it impossible for his daughter to feel successful.

About a month before this parent-teacher conference, his daughter’s class had received a writing assignment paired with a single point rubric (SPR). Many of my grade-centric honors students were initially worried about not having a categorized list of qualifications for an “A paper,” but most of them followed my advice to communicate with me and take risks while writing in order to earn a high grade. This student still felt uncomfortable two weeks into the project, so she approached me with a request: “Can I have the other side of the rubric? The part for ‘above standard’?” Her eyes were pleading as I told her that it didn’t exist. I only make SPRs.

At the end of the unit, this student earned the highest grade on the assignment of any of my students, juniors and seniors alike. I never doubted she was capable of that. Neither did her father. But throughout the whole writing process, this student was terrified. Despite being a self-described writer, and even having been published in a collection of teen poetry, she didn’t think she could do well on a writing assignment unless she was given an analytic rubric with specific parameters for excellence. Regardless of these concerns, she wrote a powerful and professional essay: she took risks, asked questions, and thought deeply about her own choices as a writer. This type of situation is the exact reason I use SPRs instead of their holistic and analytic siblings.

Even now, I come back to that parent-teacher conference often. “That is not a rubric.” He wasn’t wrong. But he also wasn’t right. The SPR is, indeed, a rubric; as Rebecca Howell (2014) of Charleston Southern University points out, a rubric should “formulate standards for achievement, provide an objective way to grade work, and make expectations clear to students, particularly when used as a guide while completing the task at hand” (p. 401). In her 2006 defense of holistic and analytic rubrics, Vicki Spandel explained, “Using a rubric well is an interactive, interpretive process, in which a teacher's wisdom, insight, experience, and judgment play an important role” (p. 20). She also stated that rubrics provide a common base for discussing writing and keep evaluation criteria public. Because the SPR does all of these things, this father was not correct in stating that this was “not a rubric.”

However, there are also many things the SPR does not do. It does not attempt to define the ways a student can write “above average” or “excellent” work the way an analytic rubric does. It does not reduce the entirety of a piece of writing to one or two descriptive sentences matched with a score the way a holistic rubric does. It does not attempt to quantify the power of writing; instead, the SPR provides a space to communicate which parts of the writing were particularly powerful, which parts were competent but unremarkable, and which parts fell short. So, in that sense, and based on what this student needed in order to “feel successful,” this father was right: this was not a rubric.

The SPR does not contain the series of boxes my student and her father expected. The term SPR is relatively new, but the concept has been around for decades. In 2010, Jarene Fluckiger of the University of Nebraska at Omaha explained that “the single point rubric has only one set of criteria, or ‘one point,’ and that is the list of criteria which shows proficient competence appropriate to the grade or learning context” (p. 19). The SPR is formatted like an analytic rubric, but with only three levels of performance: Inadequate, Proficient, and Excellent. In my classroom, I call them Below Standard, At Standard, and Above Standard. The key difference between the SPR and the analytic rubric is that only the “Proficient” or “At Standard” section is filled in for the student. “Below Standard” and “Above Standard” are empty boxes, signifying that each student can individually find ways to rise above or fall below the proficiency level. After researching classrooms in which teachers used the SPR regularly, Fluckiger noted that students assessed with the SPR showed greater achievement, stronger self-assessment skills, and higher-quality final drafts.
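To make that structure concrete, one row of an SPR might look something like the sketch below. The criterion wording here is my own invention for illustration; as I discuss later, an actual SPR is best built with students and tailored to the assignment.

Below Standard                At Standard                                    Above Standard
(left blank; the teacher      The writer states a clear, arguable            (left blank; the teacher
notes how the writing         thesis and supports it with relevant,          notes how the writing
fell short)                   well-chosen evidence.                          rose above)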

Before I explain more about why and how I use SPRs, it’s important to explain why the analytic and holistic rubrics failed me. I wish I could say the SPR is my own idea and creation, but that’s not true. I wish I could say that I always understood the moral and practical issues with holistic and analytic rubrics, but that’s also not true. I actually found the SPR completely by accident because I thought I was using analytic rubrics incorrectly. I thought I was messing up a perfectly simple tool, so I took to the internet in search of a magical answer explaining how I could do better. Instead, I found a blog post that described the three basic types of rubrics (Gonzalez, 2014) – the first time I’d ever heard of the SPR. I slowly realized that I wasn’t using the analytic rubric incorrectly; it was the wrong tool for the writing response and assessment that my students deserved.

The context of my courses and of my students’ lives determines what form of response and assessment they deserve. It is important to note that I learned all of this while working with a general population of students who were building the foundation of their writing skills. For three years of teaching high school sophomores, seniors, and 8th graders, the analytic and holistic rubrics failed to work in my course context. I discovered the SPR shortly before I moved into a position teaching a general population of high school juniors and seniors in a rural school district stricken by poverty and a lack of resources. Even my most driven and motivated learners struggled with concepts they’d need in the college courses they dreamed of taking, while most of my learners had no desire to pursue education beyond the high school diploma. My duty is to prepare each student for the goals they’ve set and the goals they might one day set. Beyond that, every person needs to be able to communicate ideas effectively. Writing never goes away. The context of my courses – English literature and composition for high school juniors and seniors – alongside the context of their lives dictated my need for an assessment tool that allowed personalized feedback without draining me on every assignment. Of course, college First-Year Composition courses and content-specific courses have their own contexts to consider. An introductory social sciences course and an advanced mathematics course will both involve writing, but certainly not the same writing, and not necessarily for the same purpose.

Promises and Problems

Like most teachers, I originally leaned towards rubrics for a variety of reasons, which can be considered the “promises” that rubrics make. The first of these promises is clarity in expectations. Rubrics, particularly the more heavily detailed analytic rubrics, make the criteria on which a writing assignment will be assessed very clear to students. Though the interpretation of the rubric’s language can cause problems, researchers have found that delivering any kind of rubric to students before they engage in drafting tends to lead to a higher percentage of student writers performing well on the assignment (Howell, 2014). However, many researchers and scholars in the field of writing assessment recognize that the research on this topic is not yet strong enough to outweigh the concerns that accompany this kind of clarity.

While it is true that rubrics make assessment criteria very clear, analytic and holistic rubrics can also constrain and limit student writers. This makes sense: A student working towards a “good grade” is going to feel safer if she believes she is doing only, and exactly, what the rubric marks as “excellent” or “10/10” material. She is not likely to take the risk of doing something not on the rubric because she doesn’t know where that piece might fit in the puzzle. This is why my student wanted the descriptions for an “A paper.” She wanted to do exactly what I described because it was safe and sure. When I refused to provide this, she was left to her own devices. She didn’t believe she was capable of writing a strong argument without assistance, but she did just that.

As Turley and Gallagher (2008) pointed out in their article “On the Uses of Rubrics: Reframing the Great Rubric Debate,” rubrics originated from an attempt to standardize and quantify writing assessment for educators. The idea is that a well-constructed rubric will allow an assessor to mark which threshold of achievement the student has met, average the numbers, and arrive at a final score. Assessment like this takes much less time, limits the subjective nature of teacher input, and churns out final grades quickly and succinctly. This concept is great in theory, but unfortunately for teachers of writing, not so simple in practice.

In fact, many educators have found that analytic and holistic rubrics take up even more time than assessment would otherwise require, and that they limit a teacher’s ability to respond to student writing. This was absolutely true of my experience in those few years when I tried repeatedly to mold my classroom around these assessment tools. Maja Wilson (2006) recounted many moments in her book Rethinking Rubrics when she was faced with a piece of writing that did not fit her grading rubric at all: Sometimes, the writing was astounding but earned a poor grade on the rubric; at other times, the writing was weak and thoughtless but earned a high grade. Like Wilson, I spent days of my early career scratching my head over these problems. I read writing that hit all of the highest-rated analytic boxes but felt lifeless. The writer didn’t care about her product. How could a reader care about it, either?

But still, mathematically, it earned an A, the grade secondary education has established as marking an excellent product that far surpasses the average expectations of a C. I didn’t know why this was happening, but I knew that my assessment tools did not reward creative and critical thinking in writing. They rewarded only the act of following directions. In the event that a student did write something phenomenal or something drastically too weak, the boxes I had built on that rubric rarely served as an adequate explanation for why. Wilson and I are not the only teachers who have been in this position. After the hours necessary to create an analytic rubric and fill out, in detail, the characteristics of each possible grade for each possible criterion, teachers then shift to the agonizing debate over which grade to apply when the writing doesn’t fit perfectly into one of the little boxes on the page. It doesn’t seem that these rubrics save much time at all.

One thing is abundantly clear: When rubrics are focused on quantifying student writing, saving time, and making assessment easier, they are not focused on individual student writers improving and growing. However, teachers are (or should be) focused on the challenge of helping the writers in front of them. Traditional analytic and holistic rubrics constrain and sometimes eliminate a student writer’s ability to personalize her success, ushering her instead into a neatly structured box to meet the A or B requirements. To borrow a phrase from Wilson (2007), “The purpose of writing is to create a response in the reader’s mind… [Rubrics] tear at the foundations of the rhetorical heart of writing, reducing student essays to an exercise in purposelessness” (p. 64). Providing only a generic response to writing, be it above, at, or below the standard expectation, means doing a disservice to our students and to the integrity of writing as a process.

Roots of Rubrics

The birth of the analytic and holistic rubrics came during a tumultuous time in the history of writing assessment. Around the 1940s, direct assessment of writing was all but eradicated. Tests of writing ability for college entrance and beyond were indirect, consisting of multiple-choice questions on topics related to writing. In 1961, researchers at the Educational Testing Service experimented with writing assessment by giving 53 “distinguished readers” 300 student papers to grade on a scale of 1 to 9. Every paper received five or more scores (Diederich, French, & Carlton, 1961), and the researchers quickly realized that this assessment was unreliable.

Had Diederich et al. (1961) dug more deeply, they might have learned why this assessment was unreliable. They might have considered the context of the writing, the background of the distinguished readers, and the assessment criteria provided. Unfortunately, they did not study the cause of the unreliable scores. Instead, they studied the comments the scorers made and boiled writing assessment down to five major criteria: form, flavor, ideas, mechanics, and wording. Thus the analytic rubric was born, out of a gross oversimplification of a complex issue. That complexity has only grown in the decades since, and it cannot be oversimplified away in my classroom, nor in any other responsible writing classroom.

The holistic rubric was created soon after the analytic, and it quickly became a favorite for large-scale standardized testing, although the analytic rubric has also seen consistent use. Psychometricians and testmakers approve of the holistic rubric because it is commonly considered a reliable assessment tool (White, 1985), but teachers in the secondary English language arts (ELA) classroom typically find it problematic.

Despite the issues with these tools, they did offer a few significant benefits: As Bob Broad (2003) pointed out in What We Really Value, they legitimized the direct assessment of writing, created a common language for writing assessment, and attempted to streamline the assessment process. Most teachers were curious when given these tools and soon adapted them for classroom use. But with time comes clarity. Soon enough, teachers realized that the analytic and holistic rubrics were not the answers they’d been waiting for. Though their direct assessments of writing were now validated by testing firms and educational organizations (Broad, 2003), teachers still did not have a standardized, reliable tool for assessing student writing, even as those same firms and organizations considered the problem solved. In addition to answering multiple-choice questions about commas and verbs, students were indeed now writing on standardized tests, but those assessing the tests were too far removed from the context of the assessment to offer a genuine evaluation. Over the years, most secondary teachers have found again and again just how unhelpful and detrimental these tests and their rubrics have become for their learners.

Structural Differences – and Similarities

Figure 1. Traditional analytic rubric. Though teachers individualize their rubrics to the context of their assignments and courses, I chose to use a rubric created through the Rubistar generator, which is hosted by the University of Kansas and available at www.rubistar.4teachers.org. It is my belief that Rubistar represents the average analytic rubric design concept and assessment criteria.

Figure 2. Holistic rubric for persuasive essays. Reprinted from Sample holistic rubric for essays, by University of Maryland Baltimore County, Faculty Development Center, n.d. Retrieved from https://fdc.umbc.edu/files/2013/01/SAMPLE-HOLISTIC-RUBRIC-FOR-ESSAYS.pdf

Figure 3. Single point rubric. This is an example of a rubric I use in my classroom. It was generated with a section of my junior English 3 class, which contained 27 students, most of whom had never written a research paper longer than two or three pages before this unit. The minimum length for the assignment was 1,500 words, and the SPR was generated after reading and personally responding to multiple mentor texts, including student research papers from across the grading spectrum and other persuasive documents such as blog posts, editorials, and essays.

Many differences and similarities are noticeable in this juxtaposition. First, note the differences in structure. The analytic rubric in Figure 1, which reflects a standards-based grade system, attempts to outline each of the four possible levels of success on each criterion. This leaves little space on the assessment tool for the reader’s response other than the box intended to house the numerical score. The holistic rubric in Figure 2 contains a section for overall comments, but only after grouping all criteria together into one numerical value with a small percentage range. The SPR in Figure 3 leaves space for commentary on each criterion and space for overall comments at the end. Of course, all three of these tools could be formatted and visually structured in a variety of ways, but the three figures here represent the average or most common style.

With any assessment, the explanation of the criteria is arguably the most important information a student receives. This means wording matters. The analytic rubric in Figure 1 is so specific, even in its attempts to leave room for options, that a grade-centric student would quickly feel limited. Take, for example, the analytic rubric’s section on the attention grabber. A student working to earn an A would consider only four options for her paper: a strong statement, a relevant quotation, a statistic, or a question. Anecdotes, jokes, metaphors, and comparisons between like topics can also start strong essays, but those options aren’t included. Also noteworthy is that one vital characteristic of a failing attention grabber is that it is “not interesting.” This is a deeply personal determination, yet the teacher’s interest becomes the student’s concern when she is handed this rubric. This element of reader interest, which a secondary student may equate with the teacher’s interest, comes from the attempt to break down every possible option for each criterion. The rubric tries to list every possible form of success and failure by using general terminology like “interesting,” but that is simply not possible.

The wording of the holistic rubric in Figure 2 does just the opposite. Instead of breaking down every option for each criterion, the holistic rubric lumps them all together. Particularly notable is that the C essay is “adequate in most areas, but exceptional in none.” I’ve read hundreds of student papers in my career that were adequate in most areas, exceptional in a few, and downright lacking in a couple; or exceptional in everything except for spelling; or unacceptable in everything except for a profound use of figurative language; and so on. With the holistic rubric, the evaluation of the writing doesn’t have the opportunity to reflect these details – but those details can be the difference between a student seeing a balance of strengths and weaknesses in her writing and assuming that her writing has only weaknesses.

In the SPR in Figure 3, the wording is, of course, my own. Though this rubric was established alongside one specific class, most of the rubrics used in my classroom look quite similar to this one, as these are the criteria my district encourages me to emphasize to my students. Unlike the analytic rubric, the SPR leaves almost limitless options for surpassing the “At Standard” level. If a student aims to accomplish the basic goal and move on, she has instructions for the “At Standard” paper. If a student desires an A, like the student in my opening anecdote, she has to work harder, and the openness of this rubric challenges her to determine how to rise above the standard. The way one student succeeds will differ from the way another does. She needs to self-assess and ask herself if she has done more than what each criterion requires. She needs to take her peer-response day seriously and ask her readers what they think. She needs to ask me, her teacher and guide, if her ideas are working along the way. She needs to take risks while thinking creatively and critically. Is that not ultimately the goal when teaching students how to write?

Solutions: The Single Point Rubric

The only assessment tool I have found that delivers on some of these promises is the SPR. Instead of writing blanket statements in boxes and hoping students write between those lines, the SPR provides individualized feedback. In 1981, Peter Elbow (1998) first published Writing With Power, in which he detailed the differences between criterion-based feedback and reader-based feedback. The former focuses on whether the writing includes the information and criteria it was intended to include, and the latter offers the reader an opportunity to tell the writer how he or she received the writing. Personal reactions from audience members aren’t valued only by philosophical expressivists. Wilson (2006) suggested teachers make their responses transparent, writing down “what goes on in [their] minds” (p. 63) and providing detailed responses to the writing. White (1994), historically a fan of the holistic rubric, also argued for detailed feedback whenever possible, even if used alongside a rubric. Huot (1996) encouraged writing assessment to “emphasize the context of the texts being read, the position of the readers, and the local, practical standards teachers and other stakeholders establish for written communication” (p. 561). Many other scholars echo these calls. Students learn more about writing when their writing is individually assessed by someone in their shared context.

The SPR can serve as a bridge between the teacher’s desire to provide detailed feedback and the administrative need for clear, straightforward evaluation. It also meets the needs of student writers. In 1994, White published “Issues and Problems in Writing Assessment” in Assessing Writing. In this article, White set out to detail what the different stakeholders in writing assessment value and require. Of course, he illustrated the deeply entrenched battle between what teachers value and what testmakers and government bodies value, but he also detailed the needs of students in writing assessment. In summary, White (1994) argued students need writing assessment that

…stresses the social and situational context of the writer… provides maximum and speedy feedback to the student… breaks down the complexity of writing into focused units which can be learned in sequence and mastered by study… produces data principally for the use of learners and teachers… and largely ignores surface features of dialect and usage, focusing on critical thinking and creativity. (p. 23)

Despite being more than twenty years old, this article is still largely representative of what students need from writing assessment. The needs of the student often get lost in the debate among other stakeholders, yet students are the first to suffer when negative change occurs.

The key to creating writing assessment that reflects the needs of students lies in their first need: stressing the social and situational context of the writer. When a teacher’s local writing assessment embraces the student’s context and situation, the teacher becomes the only stakeholder capable of providing speedy and detailed feedback to the student. This feedback can be rooted in the student’s growth, writing process, and ability. The context of the course itself is also vital to keep in mind at this juncture; an instructor focused on building skills applicable to writing in a variety of genres and an instructor focused on using writing as a vehicle to assess the student’s understanding of a given concept could have different outcomes, even if the same student is writing for both. Social and situational context, in this way, is not limited to the student’s life experiences; it extends to her purpose for writing that product at that time and the outcome she hopes to achieve.

One of the strongest characteristics of the SPR is its ability to meet the third criterion White (1994) listed: It breaks the complexity of writing down into smaller, manageable units that can be learned in sequence and mastered by study. By design, the SPR breaks larger, more intricate projects down, and the dissection can get as detailed as necessary. With some students, one criterion can focus on the introduction, another on the support for the thesis, and a third on the conclusion. If students are struggling with introducing topics, the SPR can go deeper and break down the traditional components of an introduction – engaging the reader, previewing the argument, stating the thesis, and so forth.

The beautiful thing about this depth of analysis is that it lets students demonstrate distinctly different pathways to mastery of a skill. Consider the concept of engaging the audience in an introduction. If an analytic rubric states that an excellent introduction will begin with a thoughtful remark on the topic, students may feel discouraged from beginning with a joke, an analogy, or a historical example. By simply stating the basic criteria, the SPR leaves those doors open. As a student works toward mastery on a given criterion, she can move toward mastering each unit of the writing project at her own pace.

Elbow suggested something along the lines of the SPR in his 2000 book Everyone Can Write. He called it “grid grading,” and his grids did precisely what the SPR does: They listed different criteria and graded each on three levels (weak, satisfactory, strong). Elbow argued that the default grade should be “satisfactory,” but that anything above or below deserves a detailed written explanation. This is precisely how feedback on the SPR works. Teachers are saved from writing the same thing on every paper and have the space and freedom to write what the student deserves to know about her individual strengths and weaknesses.

The SPR’s unique combination of numerical and narrative feedback allows it to produce data for learners and teachers. Learners can see and track their progress on various criteria, and teachers have a visual representation of what might need to be re-taught. If every learner falls below the standard on a given criterion, the red flags are visible with a quick skim of the graded rubrics. Even if every student struggled in different ways, the teacher can easily see how many students struggled and which concepts deserve more time and practice. This seems to be the theory behind the analytic rubric, but that theory rarely pulls through in practice. Furthermore, applying a quick calculation to the SPR can produce a percentage score, which in my experience matches the letter grade I would give the work. Assume “below standard” can earn up to 1 point, “at standard” earns up to 1.5 points, and “above standard” earns up to 2 points. The example here has 10 criteria. If a student hit every standard without rising above or falling below, the arithmetic is simple: 1.5 × 10 = 15 points earned out of a possible 20 (2 × 10), and 15/20 = 0.75, or 75% – a C grade. If the student, like most, lands at different levels across the rubric, change the numbers accordingly. That said, the calculation is not always necessary in evaluation.
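For teachers who keep grades in a spreadsheet or a small script, this arithmetic is easy to automate. Below is a minimal sketch in Python of the point scheme just described; the point values match my example, while the function name, variable names, and rating labels are hypothetical conveniences of my own.

# A rough sketch of the SPR percentage calculation described above.
# Point values follow the example in the text: up to 1 point for
# "below standard," up to 1.5 for "at standard," and up to 2 for
# "above standard."

POINTS = {"below": 1.0, "at": 1.5, "above": 2.0}
MAX_PER_CRITERION = 2.0  # "above standard" is the ceiling for any criterion

def spr_percentage(ratings):
    """Turn one rating per criterion ("below", "at", or "above")
    into a percentage score."""
    earned = sum(POINTS[r] for r in ratings)
    possible = MAX_PER_CRITERION * len(ratings)
    return 100 * earned / possible

# Ten criteria, every one met at standard: 15 of 20 points, a 75% (C) grade.
print(spr_percentage(["at"] * 10))  # 75.0

# The same rubric with three criteria rated above standard instead:
print(spr_percentage(["above"] * 3 + ["at"] * 7))  # 82.5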

As far as surface errors and dialect issues go, the SPR keeps teachers honest to the balance they establish for their students when announcing evaluation criteria. If I am going to focus on capitalization of proper nouns, comma use and placement, variety in sentence structure, use of first- and second-person pronouns, proper infinitives, and sentences that do not end in prepositions, I need to share all of that with my students before evaluating them. As mentioned earlier, the SPR works best when made with students, not for students. Most composition teachers refrain from focusing so heavily on grammatical issues like these when establishing criteria for writing projects; however, a paper in a high school or college course that fails to meet the standard expectation on every grammatical issue listed could easily overwhelm the evaluator, who might then lose sight of the other criteria. That just isn’t fair to the student, and I’d venture to guess that most teachers of writing would agree. That is why many scholars call for teachers to remain true to the criteria they establish (e.g., Broad, 2003; Elbow, 1998, 2000; Howell, 2014; Spandel, 2006).

If surface errors have only one line on the SPR, they cannot bring the entire grade down. Mathematically, it’s impossible. That is not to say a teacher wouldn’t be within her rights to tell a student that a piece of writing did not meet the expectations for that performance level, but that should be a new conversation in which both parties participate wholly. As long as the majority of the rubric focuses on the concepts underlying the purpose of the writing, the SPR will not allow any one criterion to overrule the creative and critical thinking in the writing.

The SPR also establishes a shared language for writing assessment between the learner and the instructor, which was one of the perceived benefits of the two preceding rubrics. What the SPR embraces, however, is that this shared language will differ from one context to another. Of course, instruction will be required for students to understand what “engage the audience” means, but that would be the case regardless of assessment practice. The standard or proficient rating offers students an understanding of what needs to be done, and teachers can use mentor texts and examples to show what it looks like at varying levels. This is already fairly common classroom practice; the difference is that those excellent and powerful examples don’t need to fit into boxes. Students can pull on their own experiences and ideas fearlessly, as long as they communicate with their teacher and peer audiences all along. In one class of 20 students, readers will like and dislike different things within the same piece of writing. This discussion shows students how their style and voice as writers can differ from others’ – a necessary lesson. It also helps the teacher establish what is vital to include or omit within the context of the course objectives and the assignment itself.

Finally, the SPR is much more likely to streamline and simplify grading than any other rubric. I don’t think I’m alone in cramming comments into the margins of analytic rubrics, or writing paragraph after paragraph explaining a score on a holistic rubric. Written feedback is necessary, but it must be intentional. As a junior and senior English teacher, I find that the majority of my students hit most standards and rise above a few on any given assignment. Narrative feedback, when done alone, requires me to write the same thing for every student who hits a standard but doesn’t rise above it. On the SPR, I can circle or check that box and move on. This allows me to focus on the parts of the writing that rise above or fall flat. When I find writing that rises above every standard, I don’t have trouble finding the energy to write detailed feedback, and I don’t think I’m alone in that.

I also get to explain to students how their writing, which came from their own minds and hearts, both succeeded in the assignment and impacted me as a reader. I get to remind them of their own talent and skill in detail, with feedback tailored to their work. Not only is this more personal for the writer than circling the “10/10” box, but it shows them how they’ve succeeded in writing, not just how they’ve succeeded on this one assignment. For the student who hits above standard in only one or two criteria, it shows that she does have skills that excel and gives her a map for rising above the other standards, too.

Conclusion

I do not write this article with the intention to argue that I’ve solved the problems in writing assessment. Call me a cynic, but I’m not sure that’s even possible. Instead, I write to share my experience with a tool I didn’t find until I’d struggled for years with tools that just didn’t work within my context. Learning how to use the SPR felt like magic. It was like finding a hammer after trying to drive nails with the end of a screwdriver: a sweet, sweet relief.

The relief of finding an assessment tool that helped my students succeed and helped me assess them in the way they deserved, without draining me, was second to none. Though I’d love to commit to the full, personalized narrative feedback Wilson (2006, 2007) passionately advocates, I have over 130 students in any given semester, and many of them are at similar ability and experience levels. That leaves me writing similar responses at least half of the time I assess any project, with individualized comments on a few different criteria for each student. Each learner deserves every detail of individualized response, but I am not a superhero. I don’t have superhuman strength or energy, and evaluating writing is an easy way to burn out quickly. Personally, I need the SPR’s listed basic criteria to shorten the time I spend writing. Doing so makes the time I do spend writing feedback so much more powerful – for me and for the student. This is the most enriching and fulfilling assessment I can provide, and, in my experience, it works.

I recognize that this article is rooted in just that – my experience. Empirical evidence on the use of the SPR is limited. While other scholars like Elbow (2000) have used the same concept with different terms and found success, at the time of this writing, very few studies containing empirical evidence on the use of the SPR had been published. This might be because so many stakeholders outside of the secondary classroom value the analytic and holistic rubric. Regardless, I hope researchers with more resources than I have can grab this baton from my hand and take the lead. We cannot allow empirical research on writing assessment to fall stagnant or even to become limited by the most commonly accepted assessment tools. The SPR works for me and for the other teachers I know who have adapted it, but that’s not enough. I look forward to reading, one day, what other scholars are able to learn.

One of the key factors of writing assessment, which testmakers ignore but multiple scholars stress, is context – both the student’s and the learning situation’s. White (1994) explained that students need testing that honors the context of their writing and learning. Huot (1996) explained that writing assessment cannot be responsible without acknowledging context. Until teachers can convince the College Board, the Educational Testing Service, and government institutions to rely on contextualized assessment instead of nationally standardized assessment, I worry that solutions to the issues in standardized writing assessment will remain slim to none. Even with an SPR, I do not believe a timed, standardized writing sample will truly capture the majority of a student’s writing ability. But the theories behind these standardized exams are responsible for the analytic and holistic rubrics. If the writing assessments our students are forced to take are unreliable or irresponsible, then our best hope is to teach them to understand their own writing and to think like writers. The SPR allows me to do this when I invite students to help me build it, to use it to evaluate example writing, and to reflect on and assess their own writing with the standards we’ve used all along.

A colleague of mine once shared that she thinks transfer of knowledge has been eradicated. She pressed her fingers to her temples, exhausted, and told me she knows these students learned these skills in their junior English class in the first quarter, but they did not apply them in her junior history class in the second quarter. The issue isn’t the skills they did or did not learn; the issue is the context. If students learn to follow instructions listed in boxes on the page, they learn the “what,” but not the “why.” They can’t transfer that knowledge because the boxes in the other class’s rubric say different things; the instructions are different.

The boxes will be different no matter where they go. In different classes, in different schools, in different jobs – the boxes will always say different things. Teaching students to read boxes helps no one. Instead, we need to teach them to create writing that reflects critical thinking, creativity, careful consideration of the writing’s purpose, awareness of the intended audience, and a sense of themselves as writers. The student with the joke in her introduction is much more likely to engage the reader with her own voice and her own honest writing than if she tries to force a thoughtful comment because that is what the boxes told her to do. If she practices this technique, if she self-assesses, if she reflects on her choices as a writer and their effects on her audience, she will soon understand the “why.”

Analytic and holistic rubrics are comfortable. Most administrators accept them without blinking. Countless websites will generate them with only a few sentences of guidance. They’ve survived for decades since their tumultuous birth, and they’ll survive as long as teachers keep using them. I don’t believe I’m alone in questioning their use. It’s scary to change assessment techniques, and it’s difficult to reflect on our own practice. But, as my student who requested the “other side” of the rubric learned, taking those risks and trusting our own ability pays off far more than playing it safe.

References

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Utah State University Press.

Diederich, P., French, J., & Carlton, S. (1961). Factors in judgments of writing ability (Report No. RB-61-15). Educational Testing Service Research Bulletin.

Elbow, P. (1998). Writing with power: Techniques for mastering the writing process (2nd ed.). Oxford University Press.

Elbow, P. (2000). Everyone can write: Essays toward a hopeful theory of writing and teaching writing. Oxford University Press.

Fluckiger, J. (2010). Single point rubric: A tool for responsible student self-assessment. The Delta Kappa Gamma Bulletin, 18-25.

Gonzalez, J. (2014, May 1). Know your terms: Holistic, analytic, and single-point rubrics [Web log post]. Retrieved from https://www.cultofpedagogy.com/holistic-analytic-single-point-rubrics/

Howell, R. J. (2014). Grading rubrics: Hoopla or help? Innovations in Education and Teaching International, 51(4), 400-411.

Huot, B. (1996). Towards a new theory of writing assessment. College Composition and Communication, 47(4), 549-566.

Spandel, V. (2006). Speaking my mind: In defense of rubrics. The English Journal, 96(1), 19-22.

Turley, E. D., & Gallagher, C. (2008). On the uses of rubrics: Reframing the great rubric debate. English Journal, 97(4), 87-92.

University of Maryland Baltimore County, Faculty Development Center. (n.d.). Sample holistic rubric for essays. Retrieved from https://fdc.umbc.edu/files/2013/01/SAMPLE-HOLISTIC-RUBRIC-FOR-ESSAYS.pdf

White, E. (1985). Teaching and assessing writing: Recent advances in understanding, evaluating, and improving student performance. Jossey-Bass.

White, E. (1994). Issues and problems in writing assessment. Assessing Writing, 1(1), 11-27.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Heinemann.

Wilson, M. (2007). Why I won’t be using rubrics to respond to students’ writing. English Journal, 96(4), 62-66.