So I’ve learned tons of stuff after organizing a cross-institutional grading session between our county high schools, our local community college, and our nearest four-year university. We’re still sorting through the data, and most of what I’ve learned so far concerns how we should have constructed the exercise in the first place. I’ll report back once we’ve figured out enough to share sensible insights.
In the meantime, the actual experience of grading against the SAT rubric with teachers from different “walks of teaching life” left me rather breathless. I sat with a community college ESOL teacher and a four-year state college adjunct, within earshot of the chair of the community college English department. Suffice it to say, the norming process exists because teachers vary greatly.
Effective norming takes a long time, but I know it can be done properly. Essentially, the norming process requires each teacher to set aside his or her personal biases and share a standard of quality as measured by the rubric. In my English department, we norm for the state writing test (a very basic, minimal writing assessment), but we don’t norm with each other for on-level student writing. We have a rubric for on-level student writing, written who knows when by whomever, and we all apply it variably.
Now, since no test needed assessment, actual norming between teachers wasn’t even a goal. Instead, we used that paradigm to sit down together with the SAT rubric and look at essays community college students had written on the first day of Comp English 101, to study how differently we grade. As the choir I preach to knows, the idea of rubric grading is to grade “holistically,” with no single attribute truly outweighing the others. Yeah. Right. Sure. Of course. It can be a painful process.
Here’s my overall conclusion: Grading holistically isn’t easy because our teaching experiences create biases in the ways we grade. The more a teacher works with students struggling with grammar, the less likely that teacher is to judge grammar errors as “interfering with an essay’s meaning.” If a teacher spends her day mostly with AP or advanced English majors, she will see fewer fragments and fewer subject-verb agreement errors. It’s the nature of the tracking beast. For teachers who work with writers who do make those errors frequently, it’s like being accustomed to the dialect of a beloved multilingual friend—after a while, it’s hard to even notice the accent because the ear adjusts.
According to insights gleaned during our grading experiment, some teachers who mostly teach higher-level writers also gave more credit for “figurative language,” especially if the essay used strong standard English—language I more often considered “tangential bologna.” Now don’t get me wrong, I enjoy my share of poetic language in an argument, but I don’t prefer a list of similes to a list of supportive examples clearly illustrating a sub-point of a thesis. For me, similes work best when used as part of the argument rather than instead of the argument. Some of my fellow teachers who work with more filtered students, while agreeing that the similes in the writing sample took the place of the argument, felt that the use of visual imagery deserved recognition.
My own bias for a strong argument creates my struggle with remembering that a rubric’s list of attributes is not hierarchical; most rubrics start with a description of the argument and end with a description of the mechanics. I tend to think the argument counts more than the use of the comma anyway, so that list order reinforces the bias in my brain. I can shake it when I norm for state test grading, but in general, in my own classes, I recognize that I weigh the argument more than the punctuation. Some of my peers confessed their total abhorrence for shifting voice—half a discussion about “everyone” and half a discussion about “I.” I sheepishly doubt I even notice such a voice shift, since the absence of pervasive second person makes me so giddy with joy that I suspect I fail to see beyond it.
What does any of this mean? Honestly, we’re still sorting that out. We all sat down together to see how everyone grades a paper, and we’ve statistically verified our different points of view. The next step requires data analysis and discussion; since almost everyone has end-of-semester grading coming in, I’m not exactly sure when we’ll draw our conclusions. Personally, the experience makes me glad I don’t have to grade holistically all the time. It makes sense for big exams, like the SAT and AP, but without a norming session, I struggle to shake how accustomed I am to seeing “definitely” spelled “defiantly” (yes, it used to bother me, but after thousands of exposures, I’m like a grammar cockroach…). Without a norming session, I retain my preference for a strong, logical response to the prompt peppered with comma splices over a punctuation-perfect song and dance. I’d argue we all put a “hole” somewhere in holistic, and I’d like to believe that identifying my bias and reflecting upon it will improve the way I grade.