Volume 9, Issue 2: 2016

Multimodal Assessment as Disciplinary Sensemaking: Beyond Rubrics to Frameworks

by Ellery Sills, University of Nevada, Reno

This study argues that organizational studies scholar Karl Weick’s concept of sensemaking can help to integrate competing scales of multimodal assessment: the pedagogical attention to the purposes, motivations, and needs of composing students; the programmatic desire for consistent outcomes and expectations; and the disciplinary mandate to communicate collective (though not necessarily consensual) values to composition scholars and practitioners. It addresses an ongoing debate about the prevalence of common or generic rubrics in conducting multimodal assessment; while some scholars argue that multimodal assessment is compatible with common, and even print-oriented, programmatic rubrics, others insist that only assignment-specific, context-driven assessments can account for the rich diversity of multimodal processes and texts. By contrast, this study argues that adopting sensemaking theory directs multimodal assessment efforts toward cross-programmatic and disciplinary frameworks: plastic, scalable assessment categories that can be adapted to local contexts. An analysis of current multimodal assessment research and practice demonstrates how emergent sensemaking frameworks are integrating global (cross-programmatic) and local (classroom- or assignment-specific) scales of assessment.

Keywords: sensemaking, multimodal assessment


Multimodal composition has greatly troubled the use of print-centered frameworks to understand writing. In a recent reflection, Edward White (2013) recounted the loss of meaning that accompanied his exposure to multimodal composition:

Most remarkable to me is the way digital writing has changed the construct of writing itself, a realization that has truly shaken my world. If I knew anything at all, I knew what writing was and how to teach and assess it. And writing took place in a print environment, so I assumed. Now I can do that no longer. (para. 3)

Multimodal composition has challenged how the field of rhetoric and composition/writing studies thinks about writing; moreover, it has challenged how we think about assessment. Many composition instructors engaged in multimodal assessment report struggling with the absence of immediately suitable models and frameworks with which to make sense of the work being assessed. This struggle has informed an ongoing debate about the prevalence of common or generic rubrics across programs and institutions. While some argue that multimodal assessment is compatible with common, and even print-oriented, programmatic rubrics (Alford, 2009; Burnett et al., 2014; Murray, Sheets, & Williams, 2009), others insist that only assignment-specific, context-driven assessments can account for the rich diversity of multimodal processes and texts (Anson et al., 2012; Gallagher, 2014; Neal, 2011; Penrod, 2005). There is a sense, in this debate, of conflicting scales: the relative micro-scale of the individual classroom (the purposes, motivations, and needs of composing students) and the relative macro-scale of institutions (the desire for consistent outcomes and expectations, the desire to ensure that a program has instituted valid and reliable assessment across its courses). The question is, can they be reconciled? How can we develop multimodal composing outcomes that satisfy assessment needs on both scales?

To answer this question, there is another scale to consider: the disciplinary. As Jonathan Alexander and Jacqueline Rhodes have warned in On Multimodality: New Media in Composition Studies, despite widespread acknowledgment that composing with new media has changed the field, scholars and practitioners continue to treat the relationship between composition studies and multimodality as that of “a disciplinarity threatened by genres that do not yet even have names” (2014, p. 45). If composition instructors and programs continue to make use of common rubrics for multimodal assessment, it may be because cross-programmatic disciplinary and professional bodies in composition studies have been slow to offer alternatives to many of the disciplinary values traditionally expressed in writing assessment—coherence, voice, word choice, mechanics, and other terms and concepts more commonly associated with alphanumeric writing than with other modes and media. Amid this disciplinary ambiguity, common print-oriented rubrics often fill the void for composition classrooms and programs.

I will argue, here, that organizational studies scholar Karl Weick’s concept of sensemaking offers a possible first step toward integrating pedagogical, programmatic, and disciplinary scales. According to Weick, who is largely responsible for popularizing sensemaking in organizational studies, sensemaking “involves the ongoing retrospective development of plausible images that rationalize what people are doing” (Weick, Sutcliffe, & Obstfeld, 2005, p. 409). Sensemaking quite literally means what it says: It is the process by which individuals and groups “make sense” of the situations around them. Analyses of sensemaking usually begin with the experience of individual sensemakers, since sensemaking shares with ethnomethodology a commitment to studying interpretation, action, and communication at the micro-level of local practices (Weick, 1995, p. 13). At the same time, scholars of sensemaking are also interested in how local sensemaking practices scale up into collective practices.

Sensemaking is enactive, which is to say that, through the articulation of sensemaking practices, “situations, organizations, and environments are talked into existence” (Weick, 1995, p. 30). Given this enactive quality, sensemaking can offer strategies to cultivate institutional ecologies of multimodal assessment; as Weick articulated, “sensemaking is the feedstock of institutionalization” (1995, p. 36). Instead of thinking in terms of generic rubrics, or protean criteria that change from assignment to assignment, sensemaking theory helps to build cross-programmatic and disciplinary frameworks for multimodal assessment—plastic, scalable assessment categories that can be adapted to local contexts. The following analysis of multimodal assessment research and practice suggests that attention to such frameworks is presently being “talked into existence.”                   

Writing Rubrics

Debate about the use of rubrics in writing assessment has only recently been tied to questions of multimodal assessment. Multimodal composition emerged as a key subject for writing studies in the late 1990s and early 2000s, after theorists such as Gunther Kress, Bill Cope and Mary Kalantzis, Cynthia Selfe, Pamela Takayoshi, Anne Wysocki, Carl Whithaus, and others stressed the centrality of digital and other new media technologies to contemporary communication. Accordingly, these theorists often critiqued the field’s continued bias toward print-centric pedagogy, research, and assessment.

These developments, perhaps not coincidentally, arose alongside the post-positivist critique of rubrics in writing assessment theory. In the early 2000s, critics like Bob Broad, Maja Wilson, and Patricia Lynne argued that rubrics were too reductive to satisfactorily express “what we really value” in writing. Broad (2003) asserted, “in their pursuit of their normative and formative purposes, traditional rubrics achieve evaluative brevity and clarity. In so doing, they surrender their descriptive and informative potential: responsiveness, detail, and complexity in accounting for how writing is actually evaluated” (p. 2). Broad introduced dynamic criteria mapping, a “streamlined form of qualitative inquiry” that developed criteria in response to emergent contextual considerations, as a viable alternative to rubrics (2003, p. 13). This critique of rubrics’ apparent lack of contextuality has only intensified over time; most recently, Anson, Dannels, Flash, and Gaffney (2012) have asserted that “rubrics reflecting generalized standards wear the guise of local application, fooling us into believing that they will improve teaching, learning, and both classroom and larger-scale assessment” (para. 3).

Even as rubrics face a good deal of criticism, a desire for programmatic, cross-programmatic, and disciplinary guidance regarding assessment persists. According to Alford (2009), even in local programmatic adoptions of dynamic criteria mapping, rubrics were designed because “both students and faculty, adjunct faculty in particular, wanted and needed something they felt was more specific and concrete to help them understand what the values and outcomes really meant” (p. 46). The result is a felt need for assessment methods that are transparent and readily available yet attentive to local context.

The Dilemma of Multimodal Rubrics

This desire for cross-programmatic and disciplinary guidance is especially acute in multimodal assessment, further complicating the longstanding debate. Indeed, a number of studies have suggested the continued prevalence of rubrics in assessing multimodal work. According to Anderson et al. (2006), 80% of surveyed instructors favored rubric criteria (as well as reflection papers) to assess multimodal compositions (p. 71). In terms of assessment theory, scholars like Burnett et al. (2014) argued that “multimodal composition curricula can productively use programmatic rubrics when those rubrics are part of an ecology of assessment that prioritizes feedback and adaptation” (Defining a programmatic ecology of assessment section, para. 2). They claimed, furthermore, that “rubrics can have value by focusing on selected rhetorical factors to assess multimodal artifacts, can provide an environment for programmatic consistency, and can help manage the enormity of a teacher’s workload as it scales up” (Defining a programmatic ecology of assessment section, para. 5). These arguments indicate that a desire for flexibility, consistency, and scalability informs instructors’ persistent use of rubrics for multimodal assessment.

Other studies, however, indicate that not all programmatic rubrics are designed to address multimodal processes and products, making their use especially challenging. For example, in 2009, Elizabeth Murray, Hailey Sheets, and Nicole Williams conducted a survey of composition instructors, several of whom worked at Ball State University, an institution that required program-wide print-based rubrics to assess students’ new media compositions. In spite of this requirement, instructors reported increasingly seeking different ways to make sense of new media work; when asked how they assessed multimodal projects, only 7% of instructors reported using their writing program’s print-based rubric, whereas 37% reported using another rubric and 31% reported using “other” means of assessment (Murray et al., 2009, Survey Results: Question #6 section). Murray et al. attributed this to “confusion about how a writing rubric used to grade traditional alphabetic essays could be applied to multimodal projects…” (Significance of Survey Results section). Along these lines, the Multimodal Assessment Project (MAP), a nationwide assessment effort organized by the National Writing Project, found generic print-oriented rubrics “named areas of writing performance that were still very relevant for multimodal texts, but in each case very important areas of practice were still invisible” (Jimerson, 2011). One of MAP’s chief concerns, for instance, was the rubrics’ persistent emphasis upon the discrete “text” to the detriment of examining its context (Wahleithner, 2014). Without an understanding of the context, multimodal texts literally did not “make sense”—a considerable disadvantage when undertaking their assessment.

The debate reflected in these different studies reveals an enduring dilemma: Although instructors are receptive to using programmatic rubrics for guidance in multimodal assessment, they discover that such rubrics are frequently not designed with such guidance in mind. The “ecology of assessment that prioritizes feedback and adaptation,” championed by Burnett et al. (2014), is absent. At the same time, however, instructors historically have found writing programs’ professional development opportunities offer “little help in conceptualizing multimodal assignments, assessing student responses, or securing the hardware needed to undertake such assignment” (Anderson et al., 2006, p. 79). These opportunities, then, do not necessarily meet composition instructors’ needs regarding multimodal assessment.

Sensemaking in Multimodal Assessment

Sensemaking offers a cross-programmatic strategy for encouraging practices of retrospective interpretation, communication, and action conducive to successful multimodal assessment. In place of the persistent demand for ready-made programmatic rubrics, it offers the alternative of adaptable, context-sensitive yet scalable assessment frameworks. The subsequent analysis will demonstrate how sensemaking assessment frameworks can integrate global and local scales of assessment.

Sensemaking Theory

Before engaging in this analysis, it is necessary to explicate sensemaking theory and research in more detail. Sensemaking theories draw inspiration from pragmatic, phenomenological, symbolic interactionist, and ethnomethodological traditions; in the spirit of these traditions (particularly the experiential philosophy of William James), these theories understand sensemaking as the confrontation of “ongoing, unknowable, streaming of experience” and the pursuit of provisional “answers to the question ‘what’s the story?’” (Weick et al., 2005, p. 410). People engage in sensemaking not purely for its own sake, but in order to guide action and articulate meaning to others involved in action: “Sensemaking is central [in guiding human behavior] because it is the primary site where meanings materialize that inform and constrain identity and action” (Weick et al., 2005, p. 409). Because of this focus on the relationship between meaning and action, sensemaking begins in the middle of things; it does not presume a stable, unchanging world where everything is always running smoothly, but a dynamic world marked by disruption and flux.

Accordingly, Weick and others stress that sensemaking finds its “genesis” in the experience of “disruptive ambiguity” (Weick et al., 2005, p. 413). For example, in the steady and sustained work of organizations, an “event” not previously encountered temporarily halts action. As a result, a felt need emerges to make sense of the event using available frameworks, or, failing that, to construct new frameworks:

Explicit efforts at sensemaking tend to occur when the current state of the world is perceived to be different from the expected state of the world, or when there is no obvious way to engage the world. In such circumstances there is a shift from the experience of immersion in projects to a sense that the flow of action has become unintelligible in some way. To make sense of the disruption, people look first for reasons that will enable them to resume the interrupted activity and stay in action. These “reasons” are pulled from frameworks such as institutional constraints, organizational premises, plans, expectations, acceptable justifications, and traditions inherited from predecessors. If resumption of the project is problematic, sensemaking is biased either toward identifying substitute action or toward further deliberation. (Weick et al., 2005, p. 409)

The need for frameworks to make sense of disruption cannot be emphasized enough. As Weick noted, we perpetually use frameworks to make sense of the ongoing stream of experience we encounter: “sensemaking involves placing stimuli into some kind of framework…when people put stimuli into frameworks, this enables them to ‘comprehend, understand, explain, attribute, extrapolate and predict’” (Weick, 1995, p. 4). Frameworks partake of sensemaking’s enactive quality; seen in this light, a framework is a relatively stable enactive process, produced and maintained by the close interaction of people and their environments, that “directs interpretations” (Weick, 1995, p. 4). However, the close relationship between sensemaking and frameworks becomes especially “visible when predictions break down” (Weick, 1995, p. 5). When predictions or frameworks are no longer entirely reliable, their hold on interpretation loosens. Practitioners must improvise, “simultaneously interpret[ing] their knowledge with trusted frameworks, yet mistrust[ing] those very same frameworks by testing new frameworks and new interpretations” (Weick et al., 2005, p. 412).

The testing of new frameworks and new interpretations is accomplished through the sensemaking practices of noticing and bracketing, labeling, and retrospection. Magala (1997) observed that, when confronted with ambiguous data, professionals make note of the data in passing and bracket it for later attention and analysis, “inventing a new meaning (interpretation) for something that has already occurred during the organizing process, but does not yet have a name, has never been recognized as a separate autonomous process, object, event” (p. 324, emphasis added). Weick et al. (2005), building on this observation, asserted that the inventive process begun with noticing and bracketing ambiguities develops via retrospective interpretation and labeling. Retrospective labeling provisionally translates initial ambiguities into “cues” for “plausible” coordinated action: “Thus, the ways in which events are first envisioned immediately begins the work of organizing because events are bracketed and labeled in ways that predispose people to find common ground” (Weick et al., 2005, p. 411). In other words, while sensemaking might begin with individual actors envisioning events in different ways, subsequent labeling allows for increasingly widespread and cooperative (if not consensual) communication, action, and organization. In this way, individual and group sensemaking “scales up” into the articulation of provisional categories across disciplinary and professional networks. This understanding of the relationship between interpretation and action lends itself to discussing assessment work.

Assessment as Sensemaking

Understanding writing assessment and outcomes design as sensemaking underscores the vital roles of interpretation, action, and communication to assessment work. As Scott and Brannon pointed out, “Assessments codify particular value systems. Conceptions fundamental to writing pedagogy…are also fundamental to writing assessments” (2013, p. 277). By means of assessment, disciplinary practitioners interpret the student work they receive; they act (respond and evaluate); they articulate inherited and emergent disciplinary values through the communication and institutionalization of assessment rubrics, regimes, and frameworks. In the case of composition studies, writing assessments articulate (differing but nonetheless collaborative) values about what constitutes (good) writing.                       

When describing assessment as sensemaking, however, we must be careful not to equate assessment solely with evaluation, since evaluation frequently reduces context. Weick et al. (2005) cautioned, “[s]ensemaking is about the interplay of action and interpretation rather than the influence of evaluation on choice” (p. 409). Understanding assessment as the “interplay of action and interpretation” (Weick et al., 2005, p. 409) suggests a richer, more complex engagement with students’ composing processes and products. Assessment as sensemaking entails the priority of meaning over evaluation; it requires interpretation of students’ texts that, moving beyond evaluation’s acontextual judgment of “right” or “wrong,” affords a greater latitude for student agency—acknowledging that “students might represent the teacher’s task one way but carry it out another way” (Prior, 1998, p. 250).

In order to provide such latitude, assessment categories are necessarily flexible and provisional, responding to the unique contributions afforded by students’ compositions. For example, the National Writing Project’s Multimodal Assessment Project (MAP) categories demonstrate this flexibility. In discussing a category such as “artifact,” MAP team member Elyse Eidman-Aadahl pointed out that “the way the description of the artifact is written [in the MAP framework] is to say that part of the success of any artifact is how it operates in a real communicative situation and moment…An artifact is always experienced in context” (Wahleithner, 2014). Instead of being evaluated in isolation, then, as it might be in a generic rubric, “artifact” can come to mean different things within a sensemaking framework. This does not simply mean that such a category can be interpreted differently by different people. As Weick (1995) explained, sensemaking includes interpretation, but is not synonymous with it: “Most descriptions of interpretation focus on some kind of text. What sensemaking does is address how the text is constructed as well as how it is read. Sensemaking is about authoring as well as reading” (p. 7). In other words, assessing students’ multimodal composing is not so much about what a composition means within a fixed interpretive scheme, but rather what a composition can do within a “real communicative situation and moment.” Thus, assessment as sensemaking allows greater attention to the contextual processes informing the construction of the assessed text, and by extension, the opportunity to generate new assessment values attuned to these contexts.

Understanding assessment as sensemaking has particular relevance for multimodal assessment. Emily Wierszewski (2013) noted that the rhetorical dexterity afforded by multimodal compositions privileges an interpretive, rather than narrowly evaluative, response:

As Takayoshi cautioned, we should take care to avoid emphasizing through our responses that there is a “right” and a “wrong” way to arrange a multimodal text. Instead, we should take care to do as Wysocki (2004) has implored us as we respond: “generosity too must enter, so that we approach different-looking texts with the assumption not that mistakes were made but that choices were made and are being tried out and on” (p. 23). A focus on the rights and wrongs of form cannot account for the kinds of choice, creativity, and experimentation demanded by multimodal pedagogical models.

Wysocki’s “generous reading” of multimodal texts emphasizes “how any text—like its composers and readers—doesn’t function independently of how it is made and in what contexts,” and foregrounds these questions of construction and context in the process of interpretation (2004, p. 15). However, as Jody Shipka warns us, composition teachers can rarely perform such generous reading at leisure (2011, p. 113). In reality, writing assessment must meet the demands of “a million things that go on” (Weick et al., 2005, p. 411), including student expectations for prompt feedback and grading, the expectations of other interested stakeholders, institutional and programmatic goals, means and outcomes statements, and so on. In other words, multimodal assessment, like all assessment, must swiftly and imperfectly aim to organize the “flux” of an “ongoing stream of experience.” It must rely on established and emerging assessment frameworks to make sense of multimodal composition’s “disruptive ambiguity.”

Multimodal Assessment as Breakdown

As discussed earlier, multimodal composition has greatly troubled the use of print-centered frameworks to understand writing; as a result, many composition instructors engaged in multimodal assessment report struggling with the absence of immediately suitable models and frameworks with which to make sense of the work being assessed. Emily Wierszewski’s 2013 study “‘Something Old, Something New’: Evaluative Criteria in Teacher Responses to Student Multimodal Texts” is instructive in revealing how the sensemaking of multimodal assessment relies upon trusted print-based frameworks, yet also begins to develop new assessment frameworks. It also suggests the role of disciplinary and professional networks in acting upon and articulating these new frameworks.

In the 2013 study, Wierszewski sought to discover “what print values do teachers use when they assess student multimodal works, and what kinds of criteria seem to be unique to new, multimodal pedagogies.” In pursuing this question, Wierszewski built on the prior claims of multimodal composition scholars that “teachers must take into account that multimodality is different from print in profound ways and transform what they know about rhetorical effectiveness” (Wierszewski, 2013, Literature review section, para. 3). Wierszewski asked eight composition instructors to respond to their own students’ multimodal texts in sixty-minute verbal protocols. She then compared their evaluative responses to Connors and Lunsford’s 1993 “taxonomy of teachers’ written responses on print essays,” pointing out “such a comparison was necessary to identify and analyze teachers’ new or repurposed values—values that did not fit anywhere on Connors and Lunsford’s [print-based] spectrum.”

Intriguingly, in responding to their students’ texts, instructors engaged in sensemaking practices very similar to the ones described in Weick et al. (2005). Instructors interpreted texts using trusted, print-based frameworks. Wierszewski (2013) noted, “the top four most frequent evaluative comment types in this study—formal arrangement, overall, organization, and audience—all overlapped with categories found in Connors and Lunsford’s data set.” Composition instructors thus relied on familiar rhetorical categories, such as organization, arrangement, audience, and (less frequently) purpose and sentence structure, in order to evaluate the texts.

Where these familiar categories fell short, however, the instructors adopted strategies of noticing, bracketing, and labeling anything different or unique about the multimodal texts. Wierszewski (2013) found that “half of the time teachers were not engaged in evaluation but were explaining what they were doing or making sense of the student’s text as a reader.” Instructors responded to features they noticed, as opposed to directly evaluating these features, in order to temporarily bracket and return to them for more contextual interpretation. Indeed, Wierszewski even speculated that one of the most frequent evaluative comment types also found in Connors and Lunsford—“overall” comments about students’ general performance—“may suggest teachers’ uncertainty about how to name the things that they find effective or less effective in student multimodal work” (2013, emphasis added). Here, too, instructors appeared to be relying on improvisational interpretation of multimodal elements that did “not yet have a name” in multimodal assessment.

When instructors did make more specific evaluative statements about the unique multimodal elements within their students’ texts, they engaged in a labeling and categorizing of these elements that proved strikingly different from Connors and Lunsford’s print-based categories. For instance, Wierszewski (2013) identified one key multimodal comment type as “creativity,” with several comments focusing on “the use of creative or inventive approaches to the assignment, including remarks about choice, originality, and thoughtfulness.” Another unique category, “multimodality,” addressed “the relationship between the modalities in a text.” Thus, for example, one instructor commented on how a student’s use of music “enters into conversation with the video shots.” Each of these categories, creativity and multimodality, remains fairly general and inclusive. However, as Weick et al. pointed out, that is to the advantage of the sensemaking process: “categories have plasticity because they are socially defined, [and] because they have to be adapted to local circumstances…” (2005, p. 411). In other words, in categorizing the features of their students’ texts, instructors were attempting to address the local contexts of the specific works, but they were also looking ahead to a “functional deployment” of these categories. The categories were thus being constructed, implicitly, with the work of disciplinary organizing—“coordinating” with colleagues and “distributing” workable categories to others—in mind (Weick et al., 2005, p. 411).

Indeed, Wierszewski (2013) herself contributes to this coordination and distribution of new multimodal assessment categories. Through her coding and analysis of instructors’ responses, she refines the makeshift categories the instructors have used and, in asserting that “teachers are actively developing concepts and criteria foreign to print essays as they respond to multimodal texts,” articulates these categories as assessment values requiring “future research or other scholarship.” Wierszewski’s contribution indicates that the retrospective sensemaking of individual instructors engaged in assessment does not, in and of itself, enact and sustain disciplinary recognition of these categories as legitimate. Rather, there is a need for articulation, or organizing through communication, to “talk” disciplinary frameworks into existence.                    

Assessment as Articulation

Articulation is defined as “the social process by which tacit knowledge is made explicit or more usable” (Weick et al., 2005, p. 413). In discussing how articulation plays out in practice, Weick et al. (2005) described a nursing scenario in which multiple practitioners must negotiate an act of collective sensemaking: “the second nurse absorbs the complexity of the situation . . . by holding both a nurse’s and doctor’s perspectives of the situation while identifying an account of the situation that would align the two” (p. 413). This effort to “identif[y] an account of the situation that would align” multiple sensemaking experiences is key to bringing a sensemaking framework into multimodal assessment. Multimodal assessment, as sensemaking, is not a process confined to the minds of individual practitioners. Once sensemaking practices of noticing and bracketing, labeling, presumption and retrospect have produced “plausible stories” about emergent multimodal categories and values, these stories are articulated across writing programs, departments, academic journals and websites, and professional associations (Weick et al., 2005, p. 415). In order for these plausible stories to emerge in the first place, however, initial efforts at articulation must work to “align” multiple practitioners’ sensemaking experiences. These efforts do not presume that such alignment produces uniformity of judgment; rather, they work toward what Susan Leigh Star has called “cooperation without consensus” (1993).

Weick et al. (2005) have elaborated upon how cooperation without consensus plays out within a sensemaking framework:

The rhetoric of “shared understanding,” “common sense,” and “consensus,” is commonplace in discussions of organized sensemaking. However, the haunting questions remain: Are shared beliefs a necessary condition for organized action [and] is the construct of collective belief theoretically meaningful [?]...When information is distributed among numerous parties, each with a different impression of what is happening, the cost of reconciling these disparate views is high, so discrepancies and ambiguities in outlook persist. Thus, multiple theories develop about what is happening and what needs to be done, people learn to work interdependently despite couplings loosened by the pursuit of diverse theories, and inductions may be more clearly associated with effectiveness when they provide equivalent rather than shared meanings. (pp. 417-418)

This association of articulation with “equivalent rather than shared meanings,” and “cooperation without consensus,” bears a great deal of resemblance to post-positivist writing assessment theorists’ critique of rubrics. In What We Really Value, for example, Broad (2003) encouraged “changing the name of large-group discussions that precede live grading from ‘standardization,’ ‘calibration,’ or ‘norming’ to ‘articulation’” (p. 129). As he noted, terms like “standardization” assume that some kind of assessment consensus has been reached; “articulation,” however, opens the door to “exploring how and why evaluators disagree” since, in assessment approaches such as Dynamic Criteria Mapping, instructors are prompted to “voice what they value in their students’ work” (Broad, 2003, p. 129, emphasis added). This emphasis on multiple voices, rather than a single monolithic norm, sounds very much like “equivalent rather than shared meanings.” Similarly, Maja Wilson (2006) argued that one of the most significant anxieties historically driving writing assessment has been the “fear of disagreement” and that “[o]ur most trusty assessment tool—the rubric—was created to manufacture consensus” (p. 53). Sensemaking theory shares these theorists’ skepticism of consensus as the ideal for group dynamics; where it improves upon their approaches, however, is in its potential to “scale up” the cooperative aspects of articulation to the disciplinary level. In this way, the emergence of new sensemaking frameworks can help to integrate local and global scales of assessment; after these frameworks have been “talked into existence,” they can be made both plastic and scalable, transparent and adaptable to local contexts.

To provide an example of how a multimodal assessment sensemaking framework can be articulated successfully at multiple scales, I turn to the National Writing Project’s Multimodal Assessment Project (MAP). MAP, a group of teachers and researchers engaged in multimodal pedagogy organized by the National Writing Project’s Digital Is initiative, was tasked with a central question: “What would it look like if the language of assessment was closely aligned with the language used by the creators and readers of digital compositions?” (Multimodal Assessment Project Group, 2013, emphasis added). Articulation, then, was at the forefront of this project, insofar as there was a felt need to speak a new “language” of multimodal assessment that could plausibly align the sensemaking processes of multimodal assessors and composers alike. Moreover, in asserting that “the language of assessment can inform—and build upon—discussions more often associated with interaction, instruction, and text creation than with evaluation,” the MAP project affirms, in sensemaking fashion, the priority of meaning over evaluation in assessment.

The MAP Group rejected both generic rubrics and idiosyncratic assignment-specific criteria in favor of a scalable approach. As the Group reported in the 2013 account of their work,

The more we looked at examples of young people’s work, the more we listened to conversations among authors and teachers…the more the language of [a] full set of domains—and not just the narrow language of tools or single artifacts or demands of singular assignments—seemed vital.

Finding the language of print-based rubrics too limiting, MAP articulated a multimodal assessment framework encompassing five broad domains: the artifact, or the final product of composers; the context, or the rhetorical dimensions guiding an artifact’s creation; the substance, or “the overall quality or significance of the ideas presented”; process management and technique, or “the processes, capacities and skills involved in planning, creating, and circulating multimodal artifacts”; and habits of mind, or the “patterns of behavior and attitudes” encouraged by students’ involvement in multimodal composing. These domains, according to the MAP Group, were vital because they operated at both macro- and micro-scales:

[These domains] have a resonance with the guidelines and outcomes developed by the Conference on College Composition and Communication (2009), National Council of Teachers of English (2009), and Council of Writing Program Administrators (2011). Yet the MAP domains also begin at that intimate level of student-teacher and writer-reader interaction. (Multimodal Assessment Project Group, 2013)

In other words, the domains emerged from an effort to align the sensemaking of teachers and students, composers and audience, and “scale up” these alignments.

Context is an especially promising assessment domain for discussing sensemaking’s potential to integrate local and global scales of assessment. An assessment framework oriented toward meaning “shifts the attention away from individual decision makers toward a point ‘out there’ where context and individual action overlap…” (Snook, 2001, p. 206). This shift in attention is perhaps most evident in how the MAP Group uses the domain of “context” to make sense of a local “Microblogging in Character” classroom assignment. For this assignment, ninth-grade students in an American Studies classroom were asked to tweet the thoughts of characters from The Things They Carried as chapters from the text were being read aloud. As the MAP Group notes, “[If] you take the series of microblog posts…out of context, at best, they appear to be confusing and at worse they could be considered nonsensical.” When working out of an assessment framework that privileges the text and sees multimodal composition (at best) solely in terms of technical attributes and design, this assignment can appear as a “disruptive ambiguity.” None of these microblog posts “makes sense” in isolation; a tweet such as “Pull yourself together” will not necessarily fit into traditional rubric categories such as ideas and content, organization, sentence structure, word choice, and so on. Such posts acquire meaning only in relation to the entire series of posts, The Things They Carried, the collaborative groups from which these posts emerged, and the poems inspired by these tweets that the students wrote subsequently.

In embracing the challenge to make sense of this ambiguity, MAP works to move past an assessment framework (seen especially in large-scale assessment) that emphasizes evaluating the composed artifact to the exclusion of other dimensions:

In most classroom activities, the artifact usually takes precedence as the primary object of assessment, sometimes followed by the process used to create the artifact…In the “Microblogging in Character” activity, both were important, but context—how deeply students entered into the constraints, affordances, and opportunities of the environment and tools surrounding the creation of their artifacts—was perhaps even more important with relation to student learning and the empathy they developed with characters (Multimodal Assessment Project Group, 2013).

When taking the context of the students’ tweets into account, “the short pieces of writing take on a different significance—both as individual pieces and as a larger collaborative project” (Multimodal Assessment Project Group, 2013). Considering the relationship between the tweets and the poems means that context “becomes a space for dialogue, a space where we want to see the assignment, the brainstorming activities, and the ‘final’ artifacts as a developmental chain” (Multimodal Assessment Project Group, 2013). In other words, context becomes a space for the alignment and scaling up of sensemaking frameworks. At the same time that individual designers and instructors articulate what multimodal activities and artifacts mean in their local contexts, disciplinary frameworks align these local articulations using flexible assessment categories. Making sense of the “Microblogging in Character” activity is a local, context-specific act of assessment, but such sensemaking also relies on a cross-programmatic, disciplinary assessment domain—the very attentiveness to context at the local level—in order to encourage local efforts at framing meaning.

Conclusion

As a form of sensemaking, multimodal assessment confronts the “disruptive ambiguity” of new multimodal composition. In doing so, it seeks to avoid the one-size-fits-all approach of generic rubrics, but also the immobility of assignment-specific criteria. It acknowledges, in other words, the need to align and “scale up” disparate sensemaking perspectives into new frameworks of meaning, frameworks that “[advance] ambiguity as a disciplinary access point” (Selber, 2014, p. 432). In making use of sensemaking frameworks, instructors and administrators can rely on the flexibility and scalability of emergent multimodal categories (as in MAP’s assessment domains), while at the same time deploying and refining such categories in context-specific ways in individual classrooms and programs. In articulating the priority of meaning over evaluation, sensemaking highlights the central challenge of multimodal assessment: to recognize that multimodal assessment is not about asking “What makes that artifact better than another artifact?” but, rather, about making sense of what multimodal artifacts and processes do in various contexts—classroom, program, or discipline.

 

Author Note

Ellery Sills is a lecturer in English at The University of Nevada, Reno. His research interests include multimodal composing and assessment, writing program administration, writing across the curriculum, composition pedagogy, and critical theory.

Correspondence concerning this article should be addressed to Ellery Sills, 1255 Jones St, Reno, NV 89503. Email: esills@unr.edu.

 

References

Alford, B. (2009). DCM as the assessment program at Mid Michigan College. In B. Broad, L. Adler-Kassner, B. Alford, & J. Detweiler (Eds.), Organic writing assessment (pp. 37-51). Logan, UT: Utah State University Press.

Anderson, D., Atkins, A., Ball, C., Homicz Millar, K., Selfe, C., & Selfe, R. (2006). Integrating multimodality into composition curricula: Survey methodology and results from a CCCC Research Grant. Composition Studies, 34(2), 59-84. Retrieved from http://techstyle.lmc.gatech.edu/wp-content/uploads/2012/08/Anderson-et-al.-2006.pdf

Anson, C., Dannels, D. P., Flash, P., & Gaffney, A. L. H. (2012). Big rubrics and weird genres: The futility of using generic assessment tools across diverse instructional contexts. The Journal of Writing Assessment, 5(1). Retrieved from http://www.journalofwritingassessment.org/article.php?article=57

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Burnett, R. E., Frazee, A., Hanggi, K., & Madden, A. (2014). A programmatic ecology of assessment: Using a common rubric to evaluate multimodal processes and artifacts. Computers and Composition, 31, 53-66. http://dx.doi.org/10.1016/j.compcom.2013.12.005

Connors, R., & Lunsford, A. (1993). Teachers’ rhetorical comments on student papers. College Composition and Communication, 44(2), 200-223.

Gallagher, C. (2014). Staging encounters: Assessing the performance of context in students’ multimodal writing. Computers and Composition, 31, 1-12. http://dx.doi.org/10.1016/j.compcom.2013.12.001

Jimerson, L. (2011). The NWP Multimodal Assessment Project. Digital Is. Retrieved from http://digitalis.nwp.org/resource/1577                           

Magala, S. J. (1997). The making and unmaking of sense. Organization Studies, 18(2), 317-338.

Multimodal Assessment Project Group (2013). Developing domains for multimodal writing assessment: The language of evaluation, the language of instruction. In H. McKee & D. DeVoss (Eds.), Digital writing assessment and evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae

Murray, E. A., Sheets, H. A., & Williams, N. A. (2009). The new work of assessment: Evaluating multimodal compositions. Computers and Composition Online. Retrieved from http://cconlinejournal.org/murray_etal/index.html

Neal, M. (2011). Writing assessment and the revolution in digital texts and technologies. New York, NY: Teachers College Press.

Penrod, D. (2005). Composition in convergence: The impact of new media on writing assessment. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Prior, P. (1998). Writing/disciplinarity. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Rhodes, J., & Alexander, J. (2014). On multimodality: New media in Composition Studies. Urbana, IL: National Council of Teachers of English.

Scott, T., & Brannon, L. (2013). Democracy, struggle, and the praxis of assessment. College Composition and Communication, 65(2), 273-298.

Selber, S. (2014). Institutional dimensions of academic computing. In C. Lutkewitte (Ed.), Multimodal composition: A critical sourcebook (pp. 427-447). Boston, MA: Bedford/St. Martin’s. (Original work published 2009.)

Shipka, J. (2011). Toward a composition made whole. Pittsburgh, PA: University of Pittsburgh Press.

Snook, S. (2001). Friendly fire. Princeton, NJ: Princeton University Press.

Star, S. L. (1993). Cooperation without consensus in scientific problem solving: Dynamics of closure in open systems. In S. Easterbrook (Ed.), CSCW: Cooperation or conflict? (pp. 93-106). London, UK: Springer-Verlag.

Wahleithner, J. M. (2014). The National Writing Project’s Multimodal Assessment Project: Development of a framework for thinking about multimodal composing. Computers and Composition, 31, 79-86. http://dx.doi.org/10.1016/j.compcom.2013.12.004

Weick, K. (1995). Sensemaking in organizations. Thousand Oaks, CA: SAGE.  

Weick, K., Sutcliffe, K. M., & Obstfeld, D. (2005). Organizing and the process of sensemaking. Organization Science, 16(4), 409-421.

White, E. (2013). Afterword. In H. McKee & D. DeVoss (Eds.), Digital writing assessment & evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae

Wierszewski, E. (2013). “Something old, something new”: Evaluative criteria in teacher responses to student multimodal texts. In H. McKee & D. DeVoss (Eds.), Digital writing assessment and evaluation. Logan, UT: Computers and Composition Digital Press/Utah State University Press. Retrieved from http://ccdigitalpress.org/dwae

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.

Wysocki, A. F. (2004). Opening new media to writing: Openings and justifications. In A. F. Wysocki, J. Johnson-Eilola, C. L. Selfe, & G. Sirc (Eds.), Writing new media: Theory and applications for expanding the teaching of composition (pp. 1-41). Logan, UT: Utah State University Press.