Volume 8, Issue 1: 2015

Teacher Perceptions of the Impact of the Common Core Assessments on Linguistically Diverse High School Students

by Todd Ruecker, Bee Chamcharatsri, and Jet Saengngoen

Any discussion of the Common Core State Standards (CCSS) is incomplete without an understanding of the assessments that go along with them, since test makers were an integral part of the panels designing the standards. The Partnership for Assessment of Readiness for College and Careers (PARCC) is one of the two consortia (along with Smarter Balanced) developing tests for multiple states that align with the CCSS. This article shows how teachers perceive the impact of the CCSS and high-stakes assessment, particularly the PARCC, on a linguistically diverse school in the Southwestern U.S. The authors begin by reviewing work focused on how the creators of the CCSS and the associated assessments have overlooked ELL student populations. They then present findings from a multi-year study involving teacher interviews and classroom observations, focusing particularly on the following: psychological effects on students, the challenges of developing a literacy test for a heterogeneous population, accessibility and accommodations, and computer-based administration.


Developed over a relatively brief 18 months, the Common Core State Standards (CCSS) have moved from being a state-driven initiative emerging out of the National Governors Association to a dominant nationwide force shaping the U.S. educational system. As the largest U.S. education reform since the ill-fated No Child Left Behind Act (NCLB), which raised the profile of high-stakes assessment in education across the U.S., the CCSS has been questioned by those on both the political left and right. With test makers playing an integral role in designing the standards from the beginning (see Zancanella & Moore, 2014; Newkirk, 2013), it has been clear that the era of high-stakes assessment brought on with NCLB continues with the arrival of the CCSS.

Of particular concern for the authors of this piece, and for anyone working with large numbers of English Language Learners (ELL), is the way that this group of students was largely ignored in the creation of the standards. Because ELLs are barely mentioned in the standards themselves, many are concerned about whether their needs will be adequately assessed in the two major assessments aligned with the standards, the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC). Students officially designated as ELLs continue to be one of the fastest growing student populations, comprising 9.1%, or 4.4 million students, as of 2011. In some states, these percentages are much larger; ELLs comprise 23.2% of the population in California, 16.1% in New Mexico, and 14.9% in Texas (NCES, 2014). These numbers do not include all students who speak English as a second language, since some have already tested out of the designation or were overlooked in placement screening. For instance, 21.3% of secondary school students (and 71.7% of Latina/o secondary school students) report speaking a language other than English at home (NCES, 2010). In short, students who speak English as a second or additional language are not a population that should come as an afterthought; rather, they need to be a central consideration in the development of standards and their associated assessments.

In this article, we aim to shed some light on how ELLs have been both considered and ignored in the development of the CCSS and aligned assessments, with a focus on how CCSS-related assessment is impacting a linguistically diverse high school in the Southwest. We focused our project particularly on teacher perceptions of this impact, conducting multiple interviews with ten teachers in English, social studies, and science as we conducted classroom observations and collected a variety of other materials related to testing and instruction. Teacher perceptions provide readers a situated perspective on the implementation of the CCSS that is often lost as politicians, test makers, and other individuals fight over the value of the CCSS and the continued push for high-stakes standardized assessment. The following research questions guided our work:

  1. What are teachers’ perceptions of the proposed PARCC assessments and their impact on ELLs? How are these perceptions shaped by school context?
  2. How are ELLs impacted by design and accommodation choices made by the creators of the PARCC and SBAC?

Overview of the CCSS, High-Stakes Testing, and ELLs

The CCSS has been critiqued from a few different directions for failing to consider increasingly diverse student populations (e.g., Coleman & Goldenberg, 2012). This matters when discussing the PARCC and SBAC because these assessments are designed to align with the standards (Boals, Kenyon, Blair, Cranley, Wilmes, & Wright, 2015). DelliCarpini and Alonso (2013) noted that the standards provide “no specific guidelines for the education of ELLs or other nontraditional populations” (p. 91). The standards themselves offer only a brief mention of ELLs:

It is also beyond the scope of the Standards to define the full range of supports appropriate for English language learners and for students with special needs. At the same time, all students must have the opportunity to learn and meet the same high standards if they are to access the knowledge and skills necessary in their post-high school lives. Each grade will include students who are still acquiring English. For those students, it is possible to meet the standards in reading, writing, speaking, and listening without displaying native-like control of conventions and vocabulary. (CCSS, 2015b, p. 6)

Since the CCSS were premised upon the idea of helping make every student college and career ready, the omission of ELLs may speak to the limitations of nationwide standards and assessment movements. As noted by NCTE, “there is no one profile for an ELL student, nor is one single response adequate to meet their educational goals and needs” (p. 2). The CCSS creators recognized this challenge when writing, “States and districts recognize that there will need to be a range of supports in place to ensure that all students, including those with special needs and English language learners, can master the standards” (CCSS, 2015c, para. 8).

Although the CCSS does not require the use of particular texts, the creators released a list of exemplar texts by grade level, a list that may reveal the creators’ bias. Gangi and Benfer (as cited in Strauss, 2014) noted that, “Of 171 texts recommended for elementary children in Appendix B of the CCSS, there are only 18 by authors of color, and few books reflect the lives of children of color and the poor” (para. 2). This bias is further evident in CCSS (2015b) statements such as “Demonstrate knowledge of eighteenth-, nineteenth- and early-twentieth-century foundational works of American literature” (p. 38) and “Delineate and evaluate the reasoning in seminal U.S. texts” (p. 40). When it comes to assessment, the problem becomes how the CCSS wording and recommendations are operationalized. As noted by Abedi and Gándara (2006), student test performance can be affected by both cognitive and non-cognitive factors, such as motivation. When the texts on an assessment closely reflect the lives of particular students, the performance of students culturally distant from the exam content can suffer disproportionately. Bunch, Walqui, and Pearson (2014) similarly pointed to the importance of motivation in supporting learner comprehension, another aspect of literacy that has been largely overlooked by proponents of the CCSS and is not mentioned in the Standards themselves.

Despite these concerns, scholars like Fillmore (2014) see value in the CCSS in that they raise the bar for all students, including ELLs who have often been given a reductive education:

I argue that not only can ELLs handle higher standards and expectations, but that more complex materials are in fact precisely what they have needed, and lack of access to such materials is what has prevented them from attaining full proficiency in English to date. (p. 624)

In saying this, however, Fillmore noted that classroom support is essential to help bring students up to these standards, support that is not defined specifically for this population. CCSS-connected proponents of “close reading” have advocated for examining texts in isolation, without additional background information provided for readers. Bunch, Walqui, and Pearson (2014) found fault with this approach, noting that for ELLs, many of whom come from different cultural backgrounds, textual background is perhaps even more vital than it is for other student populations. They noted,

Texts do not exhibit difficulty by themselves: it is a matter of who readers are and what they bring to reading tasks; what the broader environmental factors and sociocultural context entail; what the activity structures are; and, perhaps most importantly, what kinds of classroom supports are available. (p. 551)

As we will discuss below, the teachers we interviewed for this project, like those in other studies, often do not find fault with the CCSS; rather, they take issue with the continuation of high-stakes standardized testing from the NCLB era. The NCLB era brought a number of studies showing how high-stakes testing impacted literacy instruction in schools with high numbers of minority students, including ELLs (e.g., Assaf, 2006; Booher-Jennings, 2005; Eick & Valli, 2010; McCarthey, 2008). For instance, McCarthey (2008) explained, “teachers and students in low-income schools have less power to resist the law and are monitored to a greater degree than teachers in high-income schools” (p. 464). In the four schools she studied, McCarthey found that teachers at high-income schools could teach more genres of writing because they faced less pressure to focus on test preparation. Similarly, in a study conducted at a 99% Latina/o high school on the U.S.-Mexico border, where 40% of students were classified as limited English proficient, Ruecker (2013) found that the threat of the school being shut down and reorganized into a charter school led to a curriculum largely focused on test preparation. Because the Texas test at the time encouraged one type of writing, a personal narrative, this is primarily what was taught throughout the students’ first three years of high school. Booher-Jennings (2005) wrote about how one Texas school facing accountability pressures placed priority on the “bubble kids,” in other words, those kids who hadn’t passed the test previously but stood a chance of doing so. This shift led to strategic decisions to focus additional instructional interventions on particular students; students who were likely to fail the test, on the other hand, were increasingly referred to special education because their scores did not count against the school. Special education referrals had doubled since the implementation of the accountability system.

While the CCSS has arrived with the promise of helping ensure that all students are given similar learning opportunities, tying it to high-stakes testing risks repeating the mistakes of the NCLB era. We now turn to discussing our study, beginning by providing context on the school and our state, a state that has increased the pressures tied to high-stakes testing in the first part of the CCSS era.

Methodology

Context and Participants

The study was conducted at Enchanted High School (EHS), a school of just under 2,000 students located in a metropolitan district in the Southwestern U.S. EHS was one of the most linguistically diverse schools in the state, something that has traditionally been a point of pride with teachers and administration. Approximately 57.2% of the students were Hispanic, 18.2% Caucasian, 12.5% Native American, 8.5% African American, and 3.6% Asian/Pacific Islander. Approximately 95% of students were eligible for free and reduced lunches and 15% were classified as ELL. The school population was largely composed of resident multilingual students.

In describing the context, we would be remiss not to include the political contexts surrounding the district and the school. Our conservative Governor appointed an Education Secretary with no classroom teaching experience who has connections to influential organizations like Chiefs for Change and the Foundation for Excellence in Education, organizations with close ties to Jeb Bush and his goals for education reform, which focus on measuring school performance through standardized assessments and promoting alternatives like charter schools. Together they have called attention to the ways teachers have been failing students, turning to high-stakes testing as the answer. The Secretary released a new teacher evaluation model in which test scores account for 50% of a teacher’s evaluation (administrator observations count for 25% and other measures, such as attendance, count for the remaining 25%). The state did take some steps toward acknowledging that schools like EHS, which serve large numbers of ELLs and economically disadvantaged students, are different from others, explaining that it used Value Added Modeling (VAM) to account for differences among students. According to a state representative, VAM means that “Teachers teaching every level of student have an equal opportunity to be successful” because scores account for student differences such as ELL status and free/reduced lunch eligibility (citation omitted for anonymity).
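To make the logic of value-added adjustment concrete, below is a minimal, hypothetical sketch in Python. It is not the state’s actual model, whose full specification we do not have; the covariates, coefficients, and data are invented for illustration. The idea it demonstrates is simply this: regress current scores on prior scores and student characteristics, then treat the average unexplained residual of a teacher’s students as that teacher’s “value added.”

```python
# Minimal, hypothetical sketch of a value-added model (VAM).
# NOT the state's actual specification; it only illustrates the idea of
# "accounting for student differences" before attributing gains to teachers.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic students: prior score, ELL status, lunch eligibility, teacher.
n = 1000
prior = rng.normal(50, 10, n)
ell = rng.integers(0, 2, n)
lunch = rng.integers(0, 2, n)
teacher = rng.integers(0, 20, n)  # 20 hypothetical teachers

# Synthetic current scores: driven by prior score and covariates,
# plus a small per-teacher effect and noise.
teacher_effect = rng.normal(0, 2, 20)
current = (5 + 0.9 * prior - 3.0 * ell - 2.0 * lunch
           + teacher_effect[teacher] + rng.normal(0, 5, n))

# Step 1: regress current scores on prior score and covariates.
X = np.column_stack([np.ones(n), prior, ell, lunch])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: residuals are the score differences the covariates cannot explain.
residuals = current - X @ beta

# Step 3: a teacher's "value added" is the mean residual of their students.
for t in range(20):
    print(f"teacher {t:2d}: estimated value added = "
          f"{residuals[teacher == t].mean():+.2f}")
```

Even in this idealized setup, the adjustment is only as fair as the covariates it includes, which helps explain why, as we discuss below, teachers at schools like EHS remained skeptical that VAM truly leveled the playing field.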

For the participants, we recruited teachers in department meetings and conducted interviews in a space of their choice, typically their classrooms. The participants, whose teaching experience ranged from 2 to 21 years, included English Language Arts (ELA) faculty (N=7) and science and social studies teachers (N=3), all teaching in areas impacted by the CCSS literacy standards (see Table 1 for a list of participants; we include limited information to protect teacher identity). Nine of the teachers we interviewed were White and one was Latina. The dedicated ESL teacher declined to participate in the study; however, given the linguistic diversity of the school, all the teachers below had extensive experience working with ELL students.

Table 1: List of Participants

Name | Gender | Years of Teaching | Subject(s)
Eli | Male | >8 | English, Special Education
Susan | Female | >8 | English, History, Special Education
Teresa | Female | >8 | English, Creative Writing
Vicky | Female | <8 | English
George | Male | >8 | English
Magda | Female | >8 | English
Zach | Male | >8 | Science
Jasmine | Female | >8 | Social Studies
Bria | Female | <8 | English
Charles | Male | >8 | Social Studies

Data Collection and Analysis

Our primary data sources included teacher interviews, classroom observations, classroom materials, and sample PARCC and SBAC tests. We conducted two semi-structured, 60- to 90-minute interviews with each teacher, separated by approximately a year. The interviews included questions about teaching background, perceptions of the CCSS, and attitudes towards high-stakes assessment and its impact on ELL students. The follow-up interviews were similar to the first but also focused on changes over the past year regarding administration, teacher morale, and evolving understandings of the CCSS and the PARCC. All interviews were fully transcribed. Observations were conducted by individual investigators, who took notes, collected teaching materials, and photographed items on classroom walls. Sample tests were taken from the PARCC and SBAC websites.

In collecting and analyzing data, we employed a critical language testing lens influenced by Shohamy (2001), aimed at unpacking the ideologies of testing and how assessment tools impact the lives of the participating teachers and their teaching practices. We paired this with grounded theory, which “captures the abductive logic through which [researchers] explore the social or natural world through practical engagements with it, derive working models and provisional understandings, and use such emergent ideas to guide further empirical explorations” (Atkinson & Delamont, 2008, p. 300; Glaser & Strauss, 1967). Each researcher separately read all interviews recursively, forming analytical notes guided by these frameworks as well as by the research questions stated above. We then met to compare our notes, jointly identifying the trends in the data that were used to organize the findings.

Findings

The CCSS are premised on the notion that all students taught with the same standards will achieve great things, something alluded to by Magda in our first interview, where she noted that the standards are “putting everybody on the equal playing field.” However, teachers at EHS were clearly concerned about how a standardized education system with a heavy emphasis on high-stakes testing would impact ELLs. These concerns spanned a few different areas: psychological effects on ELLs, the challenge of creating a literacy test for such a culturally and linguistically heterogeneous country, accessibility options for ELLs, and the implications of computer-based testing for ELLs.

Psychological Effects on Students

Teachers generally felt that assessments like the PARCC did more harm than good for ELLs. They pointed to the psychological effects on students of repeatedly being told they were failures. For instance, Magda noted, “I'm worried about my kids. I'm worried about the fact that they’re making strides and the test is going to bring them down because I think about the scores that they’re going to give the kids.”

Teresa explained the impact on students of being over-tested and repeatedly told they are below grade level:

If you’re a language learner and you take four major assessments and on every one of them you’re told how low your score is, you know, [it] doesn't seem like you’re being valued. I tried to tell my students...I’ll say you speak English much better than I speak Spanish, you know, I gotta give you credit for that you have two languages. That’s an incredible gift...that’s not the message you get when you take the assessment and you get your score.

Bria had a similar feeling, noting,

I’m looking out at this whole classroom of kids taking the SBA and predominantly they are Hispanics who do not speak English as a first language [who] failed this test the first time and are retaking it, you know, four or five months later and it’s the same issue. So I, that, I think that it sets a very sort of negative environment for some of these kids who have 3.5 GPAs but are freaking out about whether or not they are able to graduate and get a diploma...I think that it shows a lot of students like this that their type of intelligence or what they know and the work that they put in is not valued.

Both teachers recognized that ELL students bring a variety of competencies to the classroom, competencies that are not recognized by the PARCC or other large-scale assessments. Instead, as Teresa further explained, the tests put students on very unequal playing fields with serious implications for their success in school: “I have one student in my class and she’s a wonderful young lady she’s been here four weeks in this country and she’s supposed to take you know an assessment at the end of the year to determine her graduation and she has to make the same score as students who’ve been here all their life.”

Teachers who had spent time with the PARCC were shocked at the difficulty and length of the passages students had to read before answering a particular question. In the words of George,

So I think it’s a huge disservice to kids who to be able to have tests in these high-stake situations and be able to understand everything that they read; pages and pages of reading. Like reading a passage that is three pages long on the computer and have to flip back and forth and figure out the answers, and have the second question connected to it after flipping back and then have a second piece of reading and a video to watch and be able to figure out all of that when it’s all straight English and it’s very complex.

While this length was justified in the name of “text complexity,” teachers found that it tended to further marginalize ELLs, who often struggled with the basics of reading and would likely shut down after seeing too many unfamiliar words or pages of text. George, after spending time with the PARCC, found it more challenging than the AP exam. He felt it would hinder ELL student progress and “drop their confidence...make them feel like...guess I’m not very smart or I just don’t understand.” Other teachers similarly commented on the length of the test and the reading passages and felt that students would disengage from what they were doing.

Developing a Test for a Heterogeneous Population

While the PARCC and the SBAC are premised on the idea that we need one test to measure students across the country, we found this notion problematic both in our analysis of the PARCC and SBAC tests and in our interviews with teachers. For this discussion, it is important to draw attention to the way the two consortia choose texts for reading- and writing-related items.

PARCC (2015c) stated that “passage finders will locate authentic texts for the PARCC Summative Assessments for English Language Arts/Literacy” (p. 4). In contrast, SBAC (2015a) noted “The sample English language arts/literacy items and performance tasks include a mixture of published and commissioned reading passages and sources” (para. 3). SBAC has indicated a preference for published texts, however, noting that the CCSS calls for “high-quality, increasingly challenging literary and informational texts” (2015a, para. 3).

While this adherence to authentic texts seems valuable, it raises concerns of accessibility for ELLs because of a dependence on more archaic texts. In general, licensing texts can be expensive, so it is understandable that PARCC would prefer public domain texts, which are typically published before 1923. This bias is evident from a look at the 11th grade sample tests available on their website, which include the following, among others (PARCC, 2015d):

  • Are We Causing Antibiotic Resistance by Trying to Prevent It? (blog entry, November 29, 2012) by Beth Skwarecki
  • Cranford (1853) by Elizabeth Gaskell
  • Declaration of Independence (1776) by Thomas Jefferson
  • Frankenstein; or The Modern Prometheus (1818) by Mary Shelley
  • Heart of Darkness (1899) by Joseph Conrad
  • The Autobiography of an Ex-Colored Man (1912) by James Weldon Johnson

The 11th grade SBAC (2015a) practice test includes the following:

  • Article excerpts from the magazines Family Life and City Times
  • Big Book of Pop Culture: How to Guide for Young Artists (2007) by Hal Niedzviecki
  • How are Invasive Species Introduced? (commissioned) by Franklin Black
  • Invasive Species in the Great Lakes (commissioned) by Temh Patel
  • Life of Pi (2001) by Yann Martel

While both consortia note that the sample items are only a small selection of all the passages and text items available, the impact of relying primarily on public domain original texts is evident from this brief look. The passage taken from Elizabeth Gaskell’s Cranford, for instance, includes lines like “For obtaining clear and correct knowledge of everybody’s affairs in the parish; for keeping their neat maid-servants in admirable order” and “the last gigot, the last tight and scanty petticoat in wear in England, was seen in Cranford—and seen without a smile.” Passages such as these depend on words like “maid-servants,” “parish,” and “petticoat,” words more likely to be familiar to students from a certain culture and region. While still complex, the sample SBAC passages are culturally and linguistically more accessible to students, dealing with contexts and topics (like waste and recycling, invasive species, and a narrative of a boy stuck on a boat with a tiger) that are more likely to motivate students from different backgrounds. The differences between the two tests seem to stem in part from SBAC’s decision to use a mixture of commissioned and published texts, which possibly lessens some of the faithfulness to authenticity but arguably makes the test more broadly accessible.

After seeing the PARCC (along with previous standardized assessments), teachers recognized bias in terms of the included texts. Zach noted how a test designed for people in a place like Wisconsin would involve different language than a test designed for students living in New Mexico. Similarly, George said, “They might not understand an idiom that...comes from the Southeast that here in the Southwest we don’t use…” Just as students in Massachusetts might wonder what a pueblo, acequia, or llano are, students in New Mexico have less familiarity with lawns, ships, subways, or references to places in the Northeast and Europe.

Vicky, a White teacher originally from the Midwest, was able to see the cultural distance from her students and the implications this had: “I was just like err Charles Dickens, really? My kids don’t really relate to Charles Dickens cause I’ve never; they don’t learn that until 10th grade for one. Umm, you know, it’s just tough because they don’t have the experience. I had to get 50 years under my belt to get the experiences where I can appreciate other cultures. I grew up in white bread middle America. And that’s what I read. That seems to be what PARCC goes for right now.” Vicky speculated that texts closer to and more engaging for the students would impact their performance: “I know culturally, we could design the reading around something that they were concerned about, something from their culture. It would be a huge difference in the test score.”

Accessibility and Accommodations

The goal driving the CCSS and aligned assessments is to hold all students to the same standards to support learning and comparability in assessment; however, it is important to note that the PARCC and SBAC differ in the accommodations they provide for ELLs, with the most prominent differences shown in Table 2 (Heitin, 2014; PARCC, 2015a; SBAC, 2015b).

Table 2: PARCC and SBAC Accommodations

Types of Accommodation | PARCC | SBAC
Test read aloud | Allowed at all grade levels; requires notation of accommodations on grade reports | Allowed at grade 6 or higher; no notation requirement
Glossary | Only paper dictionaries allowed unless a hard copy is unavailable | Online dictionaries available in Spanish, Vietnamese, Arabic, Tagalog, Ilokano, Cantonese, Mandarin, Korean, Punjabi, Russian, and Ukrainian
Test translation | Available in Spanish | Available in Spanish

Note. Neither consortium will translate the test into other languages unless specific states request and pay for the translations.

Teachers raised concerns about the scope of accommodations provided for students at their school. Teresa touched on how teachers were initially attracted to the school because of its diversity but that assessments like the PARCC and surrounding policies strip away the value of this diversity:

I LOVE diversity here. I'm so excited at EHS that I have so many students from wonderful background. They’re rich in culture and language. It’s one of the reasons I like working here...I feel bad because these assessments that...they’ve given to us, they're translated into Spanish, which is nice for those students who come from Spanish speaking countries. They’re not translated into Vietnamese. They’re not translated into Swahili. They’re not translated into Arabic.

George touched on this as well: “I just think we’re doing many kids a disservice whose first language is not English...especially for if it’s other than Spanish. Because perhaps they can get a Spanish version of it, but no other language is available.” EHS was one of the most linguistically diverse high schools in the state; however, as Teresa noted, any PARCC translations would be restricted to Spanish unless the state provided funding for additional translations, which did not seem likely. Also, teachers found that many students who might have qualified for accommodations (such as having a teacher read the test aloud to them) did not seek them out because of the bureaucracy involved and/or a lack of awareness.

Teachers were concerned about repeated testing with students and the lack of accommodations that recognized their multiliterate backgrounds. Bria described one situation when proctoring a test:

A kid raised his hand and asked me if I could read it to him in Spanish, or if I could translate in Spanish because I speak okay Spanish. And of course I had to say no because we aren’t allowed to and he was like, I can’t [do this test]. You didn’t pass it the first time because of the language issues. He’s having to take it again, the same issue is there, the language issue.

Technology and the PARCC

Another area where teachers found the PARCC impacting instruction was its computer-based administration. While the actual time required to take the PARCC compared to the previous state test was not necessarily greater, it was the first large-scale test in the state to be administered on computers. To rotate the whole school through the computer labs to take the tests, testing would interrupt the school schedule for six weeks (see Figure 1 for the March schedule, which covered only half of the PARCC testing period).

Figure 1. March PARCC Testing Schedule.

During this time, periods were extended from 80 to 120 minutes and teachers were asked not to introduce much new content. Also, some teaching periods were altered and students missed classes due to the modified schedule. Bria, for instance, had to skip her unit on civil disobedience literature because of the reduced instructional time. She explained how she typically used the computer lab to teach process writing and for students to conduct research, something she could not do during testing season. After testing was over, the effects lingered. As Bria noted,

I think that affects the whole school though because we’re encouraged to use technology all the time and if we don’t have access to that for six weeks, you know, it’s gonna make a significant impact and then when we do have access to it again. Every teacher in the school is fighting for it cause we didn't have it for those six weeks.

As Bria explained, it is hard for teachers to incorporate technology in writing instruction when they lack access to computer labs. While teachers at more affluent schools might ask students to bring laptops, this was not a feasible option at EHS.

Another implication of online assessments is connected with their ability to facilitate machine-based scoring. The PARCC is no exception, as the group is already conducting validity testing on machine-based scoring. Bria recalled an incident in class: “we were watching I, Robot...and one kid goes ‘Are robots gonna be grading our tests?’ And I was like, well I mean, ‘Yes.’” Although Bria found some humor in the student’s insight, she also found it sad in how it took away from the purpose of teaching and writing: “That’s interesting to me because then you’re not even really teaching. It’s not even a human response to your writing.”

The final issue regarding the PARCC administration concerns students’ technological literacies, something that should be irrelevant to the administration and scoring of the exam. Students at schools like EHS often have less experience with technology at home than their counterparts in more affluent schools and communities. Nonetheless, teachers were prohibited from helping students with the technology during the exam, as noted by Bria:

The fact that we can’t help them with little things like where’s the highlight button, and these are things that we went over beforehand but still kids forget, you know. And it’s not like you're helping them with content. You’re just, I just don't see why you can’t assist them in just the process to make it easier for everyone involved.

Teachers would practice with students beforehand to address some of these issues, but they found that students forgot what they learned about the technology during the actual testing.

Discussion

The CCSS were introduced with the idea that every student would graduate from high school college and career ready. The teachers we talked to were generally supportive of the CCSS, saying that it gave them flexibility in their teaching and matched much of what they had already been doing (e.g., having students engage in close reading and use textual evidence to back up their responses). However, all teachers agreed that the continued emphasis on high-stakes testing undercut some of the CCSS benefits for ELL students. While the CCSS creators may be faulted for not fully considering ELL students, the “Application of Common Core State Standards for English Language Learners” document posted on the CCSS website provides insight into what shape the CCSS and associated assessments might take:

ELLs are a heterogeneous group with differences in ethnic background, first language, socioeconomic status, quality of prior schooling, and levels of English language proficiency. Effectively educating these students requires diagnosing each student instructionally, adjusting instruction accordingly, and closely monitoring student progress. (CCSS, 2015a, p. 1)

Ironically, while the writers of this document recognized the heterogeneity of ELL student populations, the assessments aligned with the CCSS continue to reflect a flawed one-size-fits-all model.

Writing assessment experts like Crusan (2010) and Huot (2002) have made the argument that assessment is most effective and useful when implemented locally. Huot (2002) noted that contemporary approaches to assessing writing “recognize the importance of context, rhetoric, and other characteristics integral to a specific purpose and institution. The procedures are site-based, practical, and have been developed and controlled locally” (p. 94). With the aim of measuring all students nationally, the PARCC and SBAC creators have drawn on a variety of local agents in the design process, organized bias and review committees to facilitate text selection, and run validity testing based on different demographic groups. However, because of the heterogeneous nature of the U.S. population and the huge differences among teachers and schools, they will likely fail in assessing all students fairly.

Our findings indicate that the CCSS-aligned assessments may ultimately be at fault for disparate impact, which Poe, Elliot, Cogan, and Nurudeen (2014) defined as “the unintended racial differences in outcomes resulting from facially neutral policies or practices that on the surface seem neutral” (p. 593). The assessments, like the CCSS, purport to put every student on a similar playing field; however, the teachers we talked with saw very real ways the tests marginalized their school overall and especially the ELL students within it. Similarly, despite the promises of the Value Added Modeling promoted by the state, it seems that teachers at schools with higher numbers of ELL students are being disproportionately penalized by the new evaluation system tied to test scores, a system that threatens to penalize such schools and their students and that has, in part, caused an exodus of teachers to schools with smaller ELL populations.

In general, we observed strong teachers who enjoyed working with ELL students and were able to recognize the competencies they brought to the classroom. This was revealed in some of the quotes earlier, where teachers, like composition and L2 writing scholars (e.g., Guerra, 2016; Horner et al., 2011; Medina, 2013; Ruecker, 2015), recognize the multiliteracies and different competencies that their ELL students bring to the classroom but see how non-locally designed assessments fail to capture these competencies. For instance, the PARCC writing rubric has two categories, “Reading Comprehension and Written Expression” and “Knowledge of Language and Conventions,” which imply that writers will be largely evaluated on a particular organizational style and “conventions of standard English at an appropriate level of complexity” (PARCC, 2015b). The rubric and the people (or the computers aligned with them) scoring essays have validity implications; as Murphy and Yancey (2008) have written, “When the linguistic and rhetorical patterns of the students’ home culture differ in important ways from the expectations of readers and the patterns that scoring rubrics value, students’ scores may be adversely affected” (p. 452). Similarly, Inoue (2009) made a strong argument that tests promote a certain set of values over another and that racial validity should be a central concern of any assessment creator. We found that students are set up for failure via assessments that repeatedly tell them they are inadequate. At minimum, this can threaten their intrinsic motivation; at worst, it can lead to amotivation, a real concern among teachers who regularly saw students completely disengage from the tests they were taking.[5] We agree with Inoue (2009) that “Racism in writing assessment is not an individualized attitude problem, a prejudice. It is a social, historical, and structural problem of our technologies, describing privileges and inequalities that produce racial formations” (p. 112).

In terms of computer technology, the largest impact of the PARCC on literacy instruction at EHS concerned the online nature of the tests. As Hochleitner and Kimmel (2014) have pointed out, the minds behind the CCSS appeared to assume that computer technology would be widely available in every school; schools across the nation have consequently had to reorganize schedules in order for students to complete tests. While this has the advantage of boosting investment in school technology, it also raises concerns about how this new technology will be used. With the restructured testing schedule aimed at rotating the school through computer labs, teachers at EHS had to cut some units from their lesson plans to make sure that students had access to computers for their tests. During the six weeks dedicated to PARCC testing, teachers were not able to visit the labs with their classes outside of testing, limiting the opportunities for teachers to prepare their students to be 21st century communicators. While the shift to online testing has increased technology investment in schools across the country, we wonder how much the technology will be used for the drilling style of learning prevalent in schools that serve large numbers of minority students (Banks, 2006). The shift also means that tests will be more easily graded by machines, which both teachers and students found problematic because they were no longer writing for an actual audience. While recognizing the potential of automated assessment for L2 writers, Weigle (2013) found that automated assessment systems may not be attuned to the differences between L1 and L2 writing in the same way that human raters are; moreover, she noted concerns about which human raters are used to calibrate the scoring system, as a biased sample in this regard can lead to a biased automated scoring system.
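To illustrate why automated scoring raises these concerns, consider the minimal, hypothetical scorer sketched below. It is not PARCC’s actual engine (operational systems are far more sophisticated and proprietary); it simply shows how a scorer built on surface features alone never engages with a writer’s ideas, one mechanism by which rhetorically strong L2 writing could be undervalued.

```python
# Minimal, hypothetical sketch of a surface-feature essay scorer.
# NOT PARCC's actual scoring engine; it only illustrates the concern
# that scorers relying on surface proxies never read a writer's ideas.
import re

def surface_features(essay: str) -> dict:
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sent_len": len(words) / max(len(sentences), 1),
    }

def naive_score(essay: str) -> float:
    """Score from 1.0 to 6.0 using surface features alone."""
    f = surface_features(essay)
    score = 1.0
    score += min(f["n_words"] / 150, 2.0)        # rewards sheer length
    score += min(f["avg_word_len"] / 3.0, 1.5)   # rewards longer words
    score += min(f["avg_sent_len"] / 12.0, 1.5)  # rewards longer sentences
    return round(min(score, 6.0), 1)

# A short, plainly worded essay scores low no matter how strong its ideas:
print(naive_score("Tests should value what students know. "
                  "My two languages help me think in new ways."))
```

A human rater might reward the insight in that second sentence; a scorer like this one only counts and measures, which is the asymmetry Weigle (2013) warns about.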

Implications for Writing Professionals

From the above discussion, it is evident that we have strong reservations about the design and use of national standardized tests, especially in the way that they can negatively impact the educational pathways of ELL students. Despite the existence of bias review committees for both the PARCC and SBAC purporting to ensure texts are “bias-free” and “sensitive to diverse cultures” (PARCC, 2014, p. 15), we have seen little evidence that the overall representation of texts on the exams mirrors the diversity of worlds that our students inhabit, including the heterogeneity of ELL student populations--a detail recognized in the CCSS ELL document, which noted that these students need individually tailored instruction. While we agree that the most important assessment is done locally, we recognize the continued desire for national and international comparisons of student performance in core subjects, and understand that pressure for large-scale assessment will not disappear anytime soon. As researchers and teachers familiar with the disparate impact that high-stakes assessments have had on ELL students and their schools and teachers in recent years, we would like high-stakes assessments to more clearly value diverse cultures in reading and writing tasks so students are recognized for the competencies they bring to the classroom.

With the understanding that high-stakes standardized testing will continue to play some role in our education system, we heed Adler-Kassner (2012), who has noted that writing studies professionals need to take action to help change the system and “develop strategies to navigate a course through these challenging times” (p. 130). While sitting back and critiquing changes like the teacher evaluation system in our state or the bias of exams like the SBAC and PARCC is relatively easy, we are always conscious of ways to effect change. On one level, our work focuses on challenging the use of high-stakes assessment as a force externally imposed on schools like EHS; on another level, we seek ways to improve the exams in place so ELLs are assessed more fairly alongside their non-ELL counterparts. We envision this work taking a few different forms.

Form Partnerships to Improve Learning for ELLs

Writing faculty can partner with both teacher education programs as well as with high schools and teachers to improve writing instruction for all students, including ELLs, in an era of high-stakes testing. These partnerships can collectively think through creative ways to incorporate the most recent developments in writing instruction into K-12 teacher preparation as well as in high school classrooms. As part of these partnerships, it is important that college writing faculty spend time in high schools in order to better understand the working conditions of teachers in terms of areas like teaching load, class sizes, technology access, testing mandates, and more. As discussed elsewhere, it is important to ensure that these relationships are collaborative and not top-down; when college faculty go in with the idea that they know better, this tends to sour the relationship and limit productivity (Alemán, Perez, & Oliva, 2013; Ruecker, 2014).

These partnerships can lead to co-constructed workshops for both high school and college writing teachers that are focused on topics like building habits of mind (see the Framework for Success in Postsecondary Writing), providing feedback effectively and efficiently to ELL and L2 writers, developing rubrics, peer feedback, and seeking agency in a constrained teaching environment. They can also involve co-teaching in both high school and college classes, which will help teachers--at all levels--think collaboratively for ways to align high school and college writing curricula. These partnerships may also be advocacy-oriented, with postsecondary educators and researchers harnessing the ethos of higher education to join secondary teachers in protests and other forms of resistance to high-stakes testing regimes.

Ensure Test Makers Create a Better Test

This recommendation involves some risk, as we have seen test makers eager to claim they consult experts situated in K-12 and postsecondary contexts without fully utilizing their recommendations. However, we also understand that involvement in the process can help change tests for the better, which matters because high-stakes tests have been shown to affect literacy instruction, both positively and negatively, through washback (Taylor, 2005). One of the authors of this piece has been involved in various PARCC bias review and other meetings, continually drawing the consortium’s attention to ELL students and the heterogeneity of students across the diversity of PARCC states. This has been a frustrating process because he felt that his suggestions were often ignored or pushed back against; however, he has witnessed incremental progress such as the inclusion of more Native American authors in the exam.

Another area where assessment and writing experts can help improve the CCSS-aligned exams is in regard to computer-based testing. As it currently stands, it seems the primary goal of moving testing online is to ease grading and to support the rise of machine scoring of writing. As Abedi (2014) has written, online testing has huge potential for universal design, well beyond traditional paper-based tests, where this was next to impossible. With this in mind, we would like to see test makers think creatively with technology. They should explore how technology can be used to make the tests adaptable to different student populations rather than merely a time- and cost-saving measure. For instance, the SBAC uses a computer adaptive design that chooses questions based on answers to previous ones, a mechanism sketched below. While this adaptive design is a step in the right direction, we would like to see nationwide tests that also adapt to particular regions in terms of the types of passages used and the vocabulary they contain. As several of our interviewees noted, a test drawing on texts from a particular region will favor a particular set of students because vocabulary in different parts of the country can vary significantly. We would also like to see the testing agencies continue to expand the technology-supported accommodations that they offer students, including the accessibility of online glossaries in all student languages, not just the most common ones. Finally, teachers should not be restricted by testing protocols from helping students navigate test technology. As we noted earlier, this prohibition penalizes a disproportionate number of ELLs who have limited access to computer technology at home.
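For readers unfamiliar with how computer-adaptive testing works, the following is a minimal, hypothetical sketch. It is not SBAC’s actual engine; the item bank, ability model (a simple Rasch/1PL form), and crude update rule are invented for illustration. The core loop is the point: each answer updates an ability estimate, and the next question is chosen to match it. One could imagine the same loop additionally conditioning item selection on region, which is the kind of adaptation we call for above.

```python
# Minimal, hypothetical sketch of computer-adaptive item selection.
# NOT SBAC's actual engine; it only illustrates the general idea of
# choosing each question based on answers to previous ones, here with
# a simple Rasch (1PL) ability estimate and a crude step update.
import math
import random

# Hypothetical item bank: each item has a difficulty on the logit scale.
item_bank = {f"item_{i}": random.uniform(-3, 3) for i in range(50)}

def prob_correct(ability, difficulty):
    """Rasch model: probability a student of given ability answers correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, unused):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate (the most informative item under the Rasch model)."""
    return min(unused, key=lambda name: abs(item_bank[name] - ability))

def run_adaptive_test(true_ability, n_items=10, step=0.5):
    ability = 0.0                      # start from an average estimate
    unused = set(item_bank)
    for _ in range(n_items):
        item = next_item(ability, unused)
        unused.remove(item)
        # Simulate the student's response to the chosen item.
        correct = random.random() < prob_correct(true_ability, item_bank[item])
        # Crude update: step the estimate up on a correct answer, down otherwise.
        ability += step if correct else -step
    return ability

print("final ability estimate:", run_adaptive_test(true_ability=1.2))
```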

Final Thoughts

The CCSS hold promise in helping students in a variety of contexts access an equally challenging education that will prepare them for college writing or other career ambitions. We are supportive of the push towards analytical and source-based writing, something that students will encounter throughout college. While disappointed in the way that ELLs seemed to be an afterthought in writing the standards, and in how the standards fail to disrupt the Anglo-dominated canon traditionally taught in English classes, we appreciate that the creators have taken some steps to recognize the heterogeneity of student populations, including ELLs. However, tying the CCSS to high-stakes assessments threatens to perpetuate the status quo begun with No Child Left Behind, a status quo that continues to label ELL students and their schools and teachers as inadequate and failing.

 

Notes

1. Our understanding of ELL mirrors that of Limited English Proficient students as stated in NCLB: a student whose native language is not English and/or who lacks the English proficiency to meet proficiency levels on state assessments. States define and test ELLs differently, so there is no one universal definition. As evident from this opening discussion, we are adopting a broad understanding of ELL, which includes students who come from households where languages besides English are spoken (often referred to in education and applied linguistics as linguistic minorities). We recognize that ELLs are disproportionately students of color, with 18% of Latina/o and 17% of Asian students having difficulty with English compared to 1% of White students (NCES, 2010).

2. All names are pseudonyms. This project received IRB approval from the relevant boards.

3. For detailed portraits of students like those at EHS, see Ruecker (2015).

4. The state test that preceded the PARCC; it was still being given while the PARCC was being fully implemented.

5. Our understanding of motivation is shaped by Vallerand’s (1997) work. Vallerand found that intrinsic motivation leads to higher levels of engagement and creativity in a task because it involves interest in the task itself, while extrinsic motivation focuses on what might result from completing the task. Amotivated people have little interest in the task, something that often results from the task feeling too difficult (Vallerand, 1997, p. 282).

6. It is heartening to see the issue of over testing gain more traction, with President Obama recently pushing to cap the amount of testing time at 2% of instructional time (Zernike, 2015). PARCC has also reduced the amount of testing time for the second year of its administration. In hearing these discussions on reducing testing time, it is important to be aware of the reality depicted in Figure 1 and discussed above: The time it takes for students to take a test does not reflect the time testing disrupts instruction at a particular school.

 

References

Abedi, J. (2014). The use of computer technology in designing appropriate test accommodations for English language learners. Applied Measurement in Education, 27(4), 261-272. doi: 10.1080/08957347.2014.944310

Abedi, J., & Gándara, P. (2006). Performance of English Language Learners as a subgroup in large-scale assessment: Interaction of research and policy. Educational Measurement: Issues and Practice, 25(4), 36-46. doi: 10.1111/j.1745-3992.2006.00077.x

Adler-Kassner, L. (2012). The companies we keep or the companies we would like to keep: Strategies and tactics in challenging times. WPA: Writing Program Administration, 36(1), 119-130.

Alemán, E., Perez, J., & Oliva, N. (2013). Adelante en Utah: Dilemmas of leadership and college access in a university-school-community partnership. Journal of Cases in Educational Leadership, 16(3), 7-30. doi: 10.1177/1555458913498476

Assaf, L. (2006). One reading specialist’s response to high-stakes testing pressures. The Reading Teacher, 60(2), 158-167.

Atkinson, P., & Delamont, S. (2008). Analytic perspectives. In N. K. Denzin & Y. S. Lincoln (Eds.), Collecting and interpreting qualitative materials (3rd ed., pp. 285-311). Thousand Oaks, CA: Sage.

Banks, A. (2006). Race, rhetoric, and technology: Searching for higher ground. Mahwah, NJ: Lawrence Erlbaum.

Boals, T., Kenyon, D. M., Blair, A., Cranley, M. E., Wilmes, C., & Wright, L. J. (2015). Transformation in K-12 English language proficiency assessment changing contexts, changing constructs. Review of Research in Education, 39(1), 122-164. doi: 10.3102/0091732x14556072

Booher-Jennings, J. (2005). Below the bubble: “Educational Triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231-268.

Bunch, G. C., Walqui, A., & Pearson, P. D. (2014). Complex text and new common standards in the United States: Pedagogical implications for English learners. TESOL Quarterly, 48, 533-559. doi: 10.1002/tesq.175

Coleman, R., & Goldenberg, C. (2012). The CCSS challenge for ELLs. Principal Leadership, 12(5), 46-51.

Common Core State Standards (2015a). Application of Common Core State Standards for English Language Learners. Retrieved from www.corestandards.org/assets/application-for-english-learners.pdf

Common Core State Standards (2015b). English Language Arts Standards. Retrieved from http://www.corestandards.org/wp-content/uploads/ELA_Standards1.pdf

Common Core State Standards (2015c). Read the standards. Retrieved from http://www.corestandards.org/read-the-standards/

Council of Writing Program Administrators, National Council of Teachers of English, and National Writing Project (2011). Framework for success in postsecondary writing. Retrieved from http://wpacouncil.org/framework

Crusan, D. (2010). Assessment in the second language writing classroom. Ann Arbor, MI: University of Michigan Press.

DelliCarpini, M., & Alonso, O. (2013). Working with English Language Learners: Looking back, moving forward. English Journal, 102(5), 91-93.

Eick, C., & Valli, L. (2010). Teachers as cultural mediators: A comparison of the accountability era to the assimilation era. Critical Inquiry in Language Studies, 7(1), 54-77.

Fillmore, L. W. (2014). English Language Learners at the crossroads of educational reform. TESOL Quarterly, 48: 624-632. doi: 10.1002/tesq.174

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New Brunswick, NJ: Aldine Transaction.

Guerra, J. (2016). Language, culture, identity and citizenship in college classrooms and communities. New York, NY: Routledge.

Heitin, L. (2014). Testing plans differ on accommodations. Education Week, 33(29), 30-33.

Hochleitner, T., & Kimmel, A. (2014). Technology. In F. M. Hess & M. Q. McShane (Eds.), CCSS meets education reform: What it all means for politics, policy, and the future of schooling (pp. 140-161). New York: Teachers College Press.

Horner, B., Lu, M. Z., Royster, J. J., & Trimbur, J. (2011). Language difference in writing: Toward a translingual approach. College English, 73(3), 303-321.

Huot, B. (2002). (Re)articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.

Inoue, A. (2009). The technology of writing assessment and racial validity. In C. Schriener (Ed.) Handbook on assessment technologies, methods, and applications in higher education. Hershey, PA: IGI Global.

McCarthey, S. J. (2008). The impact of No Child Left Behind on teachers’ writing instruction. Written Communication, 25(4), 462-505.

Medina, C. (2013). Nuestros refanes: Culturally relevant writing in Tucson high schools. Reflections, 13(1), 52-79.

Murphy, S., & Yancey, K. B. (2008). Construct and consequence: Validity in writing assessment. In C. Bazerman (Ed.) Handbook of research on writing: History, society, school, individual, text (pp. 365-385). New York, NY: Routledge.

National Center for Education Statistics (2014). Fast Facts. Retrieved from https://nces.ed.gov/fastfacts/display.asp?id=96

National Center for Education Statistics (2010). Status and trends in the education of racial and ethnic minorities. Retrieved from https://nces.ed.gov/pubs2010/2010015/tables/table_8_2a.asp

Newkirk, T. (2013). Holding on to good ideas in a time of bad ones: Six literacy principles worth fighting for. Boston, MA: Heinemann.

Partnership for Assessment of Readiness for College and Careers. (2014). Item guidelines for ELA/literacy PARCC summative assessment. Retrieved from http://www.parcconline.org/sites/parcc/files/Updated%20Formatted%20Item%20Guidelines%20.pdf

Partnership for Assessment of Readiness for College and Careers. (2015a). PARCC accessibility features and accommodations manual. Retrieved from http://www.parcconline.org/parcc-accessibility-features-and-accommodations-manual

Partnership for Assessment of Readiness for College and Careers (2015b). PARCC scoring rubric for prose constructed response items. Retrieved from http://parcc.pearson.com/resources/practice-tests/english/Grade6-11-ELA-LiteracyScoringRubric-July2015.pdf.

Partnership for Assessment of Readiness for College and Careers (2015c). Passage selection guidelines for the PARCC summative assessments, Grades 3-11, in ELA/Literacy. Retrieved from http://www.parcconline.org/assessments/test-design/ela-literacy/test-specifications-documents

Partnership for Assessment of Readiness for College and Careers (2015d). Practice tests. Retrieved from http://parcc.pearson.com/practice-tests/.

Poe, M., Elliot, N., Cogan Jr., J. A., & Nurudeen Jr., T. G. (2014). The legal and the local: Using disparate impact analysis to understand the consequences of writing assessment. College Composition and Communication, 65(4), 588.

Ruecker, T. (2013). High-stakes testing and Latina/o students: Creating a hierarchy of college readiness. Journal of Hispanic Higher Education, 12(3), 303-320.

Ruecker, T. (2015). Transiciones: Pathways of Latinas and Latinos writing in high school and college. Logan, UT: Utah State University Press.

Shohamy, E. (2001). Democratic assessment as an alternative. Language Testing, 18(4), 373-391. doi: 10.1177/026553220101800404

Smarter Balanced Assessment Consortium. (2015a). Sample items and performance tasks. Retrieved from http://www.smarterbalanced.org/sample-items-and-performance-tasks/

Smarter Balanced Assessment Consortium. (2015b). Usability, accessibility, and accommodations guidelines. Retrieved from http://www.smarterbalanced.org/wordpress/wp-content/uploads/2014/08/SmarterBalanced_Guidelines.pdf

Strauss, V. (2014, September 16). How CCSS’s recommended books fail children of color. The Washington Post. Retrieved from http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/09/16/how-common-cores-recommended-books-fail-children-of-color/

Taylor, L. (2005). Washback and impact. ELT Journal, 59(2), 154-155. doi: 10.1093/eltj/cci030

Vallerand, R. J. (1997). Toward a hierarchical model of intrinsic and extrinsic motivation. Advances in Experimental Social Psychology, 29, 271-360. doi: 10.1016/s0065-2601(08)60019-2

Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing, 18(1), 85-99. doi: 10.1016/j.asw.2012.10.006

Zancanella, D., & Moore, M. (2014). The origins of CCSS: Untold stories. Language Arts, 91(4), 271-277.

Zernike, K. (2015, October 24). Obama administration calls for limits on testing in schools. The New York Times. Retrieved from http://nyti.ms/1PJiOPw

Author Biographies

Todd Ruecker is an assistant professor of English at the University of New Mexico, where he coordinates assessment for the College of Arts and Sciences. His work regularly crosses disciplinary boundaries, and he has published extensively on the transitions of Latina/o writers from high school to college. He has published articles in respected composition, education, and applied linguistics journals, including TESOL Quarterly, College Composition and Communication, Journal of Hispanic Higher Education, Critical Inquiry in Language Studies, and Writing Program Administration. His book, Transiciones: Pathways of Latinas and Latinos Writing in High School and College, was published by Utah State University Press in early 2015.

Pisarn Bee Chamcharatsri, PhD, received his doctorate in Composition and TESOL (C&T) from Indiana University of Pennsylvania (IUP). He is an assistant professor with a joint appointment at the University of New Mexico in the Department of Language, Literacy, and Sociocultural Studies (LLSS) and the Department of English. His research interests include emotions and writing, second language writing, identity construction of ESL/EFL learners, language policy, world Englishes, and food writing. His publications appear in L2 Journal, Journal of English as an International Language, and Asian EFL Journal.

Jet Saengngoen is a Ph.D. student in Educational Linguistics in the Department of Language, Literacy, and Sociocultural Studies (LLSS) at the University of New Mexico. He received his master’s degree in Teaching English to Speakers of Other Languages (TESOL) from Southern Illinois University Carbondale, USA. His research interests include bilingualism, adult second language acquisition, and corrective feedback usage in language instruction.

Appendix 1: Interview Protocols

Initial Interview

Implementation and Teacher Preparation

  1. When did you first hear about the Common Core State Standards (CCSS)? Can you share your reactions when you first heard about their implementation?
  2. In what ways do you imagine the implementation of the CCSS will impact your preparation for your classes?
  3. What types of preparation do you have to make in response to the CCSS?
  4. How do you perceive the CCSS compared to NCLB?
  5. One of the largest changes the CCSS have proposed in terms of literacy instruction has been more non-fiction reading and writing across the curriculum.
    1. For ELA teachers: How do you feel about the impact on ELA classes’ traditional focus on narrative writing and literary analysis? What do you think about the potential of incorporating writing instruction throughout the curriculum? What will be the impact of these changes on students’ college readiness?
    2. For science and social studies teachers: How do you feel about the impact on reading and writing in your classes?

Assessment

  1. What role do you anticipate assessment playing under the CCSS compared to the role it played under No Child Left Behind? Do you anticipate more assessment? Less? About the same? Are the assessments similarly high-stakes? Please say more about the assessment tools you anticipate.
  2. We understand that New Mexico is one of 22 states joining the Partnership for Assessment of Readiness for College and Careers (PARCC), which aims “to develop a common set of K-12 assessments in English and math anchored in what it takes to be ready for college and careers.” What knowledge do you have of this organization and what are your feelings about their mission?
  3. How have assessments under NCLB impacted your teaching? What kind of impacts do you anticipate with proposed new assessments?
  4. What impact have assessments traditionally had on your ELL students, and how do you anticipate this impact changing with the adoption of PARCC assessments?
  5. In what ways have you seen assessments contributing to or hindering the literacy development of your ELL students?

ELLs

  1. In what ways do the CCSS propose to help ELLs gain literacy skills?
  2. The CCSS offer few specifics regarding ELLs, but they claim, “For those students, it is possible to meet the standards in reading, writing, speaking, and listening without displaying native-like control of conventions and vocabulary” (p. 6). What are your thoughts about this statement?
  3. In what ways do you anticipate the CCSS affecting the way schools serve ELL student populations?
  4. CCSS supporters have adopted a mantra of “college and career readiness.” College-going and success rates of ELLs and other minority populations have consistently been lower than the national average. What are your thoughts on how diverse student populations are served by the CCSS in terms of college readiness?
  5. What supports does the school provide in conjunction with the CCSS to help ELLs prepare for college success?
  6. As discussed earlier, the CCSS promise a shift toward more non-fiction reading and writing in the upper grades. How do you think this shift will affect the instruction of ELL students?

Follow Up Interview

Looking Back

  1. After one year of planning and implementing the CCSS, what are your reactions to the standards?
  2. Last time we talked with you, you had a ______ attitude towards the CCSS. Could you explain how, if at all, your general perceptions have changed since then?
  3. Last time we talked, you felt that the CCSS provided ______ support for ELL students. Has your understanding of how the CCSS impacts and/or supports the literacy development of ELL students changed since then?
  4. Over the past year,
    1. how has the role of assessments changed, if at all, at HS? In your classroom?
    2. how has the implementation of new assessments and/or the CCSS impacted your teaching, if at all?
    3. how has writing instruction changed for students at HS?
    4. how has the morale of teachers changed at HS?
    5. how does the new principal compare to the old one in implementing the CCSS?
    6. how have the writing assignments changed in your classes?
    7. how have the reading assignments changed in your classes?
    8. how has your knowledge of the CCSS changed?
    9. how has your knowledge of PARCC changed?
    10. what kind of training related to the CCSS have you attended?
    11. how have the school supports for ELL students changed?

Observed Instruction

[Remind the teacher of the general content/focus of the observed classes]

  1. How representative of your instruction were the classes we observed? Were they typical? Were they different in some way?
  2. Explain the thinking that went into the design of the observed classes.
  3. How did this particular class connect with or support CCSS goals?
  4. Is there anything else that you would like to say or add?
