Thursday, May 30, 2013

Standardized testing: When a picture is NOT worth 1,000 words.

“Could you take my picture
‘cuz I won’t remember…”
(“Take a Picture,” Filter)
An article by Motoko Rich in the New York Times makes the claim that it is easier to achieve gains in math than in reading, as indicated by test scores.

I repeat, as indicated by test scores.

On the surface, this makes some degree of intuitive sense; math questions are typically about “the right answer,” and it is easier to program students with a repeatable mechanical set of steps to reach a right answer to a math problem than it is to train the mind to perform the feats of abstract thought necessary to satisfy the typical question on most ELA assessments.

Rich illustrates this phenomenon thus:
During a fifth-grade reading class, students read aloud from “Bridge to Terabithia,” by Katherine Paterson. Naomi Frame, the teacher, guided the students in a close reading of a few paragraphs. But when she asked them to select which of two descriptions fit Terabithia, the magic kingdom created by the two main characters, the class stumbled to draw inferences from the text. 
Later, in math class, the same students had less difficulty following Bridget McElduff as she taught a lesson on adding fractions with different denominators. At the beginning of the class, Ms. McElduff rapidly called out equations involving two fractions, and the students eagerly called back the answers.
But I just can’t get over the grotesque leaps of illogic that this article makes.  A few samples:
“[T]eachers are finding it easier to help students hit academic targets in math than in reading, an experience repeated in schools across the country,” which raises the question: Are the “academic targets” actual learning and demonstration of mastery and understanding, or are they simply test scores? Not the same thing.
“[S]tudents … in KIPP middle schools for three years had test scores that indicated they were about 11 months — or the equivalent of more than a full grade level — ahead of the national average in math,”  which makes the implicit assumption, signaled by that word again – indicated – that test scores are a reliable and predictable and robust indicator of actual mathematical achievement.

“Studies have repeatedly found that ‘teachers have bigger impacts on math test scores than on English test scores,’ said Jonah Rockoff, an economist at Columbia Business School,” which at least comes right out and admits that it’s only talking about test scores, and not about the actual educational experience of the child.

“Over time, teachers hope to develop the same results in reading that they have produced in math. Already, students at high school campuses in the Uncommon network in Brooklyn and Newark post average scores on SAT reading tests that exceed some national averages,” which, again, emphasizes the primacy of test scores FIRST, students SECOND.
This is not as it should be. For one thing, I categorically reject the notion that a single standardized test score in any way equates to the child's experience over a year's worth of math classes. New York State is a sterling example of the pedagogically self-destructive reductio ad absurdum of this line of thinking, as I blogged recently (excerpted here; click on this link to read the whole post):
When the focus is “doing what you have to do to get students to pass,” the focus is not on learning, building understanding, or scaffolding for continued study. The focus is instead on a short-term, quick-fix, half-assed solution that benefits NO ONE, and the illusion of the higher pass rate, suggesting a much higher quality of education than what is actually transpiring, is convincing enough for education stakeholders to look the other way.
With regard to the New York State Algebra Regents [state-standardized final assessment], [what you see] is not [what you get]. On the New York State Algebra Regents exam, students are given a score out of 100, a "scaled score," and it is very easy (most people, even many teachers and guidance counselors, make this mistake) to think of this as a percentage – that an 83 on the Regents means you got 83% on the test. That is SO not the case.

View this chart for the June 2011 administration of the NYS Algebra Regents. A total of 87 points were available to be earned by students on the exam. 65% of 87 is about 56.5, the number of points out of 87 that a student would need to earn for the Regents Exam score to actually reflect the percentage that student got correct. But oh, no. A [minimum] passing grade of 65 is granted for successfully earning a whopping 31 points (the "raw score"). That's a passing grade for UNDER 36% correct. And what's worse, standard policy in New York State is that even if a student fails the entire course, all four academic quarters, a passing grade on the Regents Exam de facto earns full passing credit for the course.

A student could do nothing from September through May, cram for the Regents exam in June, do a 40% job, and PASS THE WHOLE YEAR based on that 40% on that one end-of-year assessment. And the Algebra teacher gets to brag about how high her passing rate is. And she will, no doubt.
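For anyone who wants to check that arithmetic for themselves, here is a minimal sketch (in Python, purely for illustration; the state's actual raw-to-scaled conversion is a published lookup chart, not a formula, and the two constants below are simply the June 2011 figures quoted above):

    # Sanity check on the June 2011 NYS Algebra Regents figures cited above.
    # The official raw-to-scaled conversion is a lookup chart published by the
    # state; these two constants are just the numbers pulled from that chart.
    MAX_RAW_POINTS = 87       # total points available on the exam
    PASSING_RAW_SCORE = 31    # raw score that converts to a scaled "65"

    actual_percent_correct = PASSING_RAW_SCORE / MAX_RAW_POINTS * 100
    raw_points_for_true_65 = 65 * MAX_RAW_POINTS / 100

    print(f"{PASSING_RAW_SCORE} of {MAX_RAW_POINTS} points is {actual_percent_correct:.1f}% correct.")
    print(f"An honest 65% would require about {raw_points_for_true_65:.2f} raw points.")
    # Output: 35.6% correct earns the scaled "65"; a true 65% would take ~56.55 points.

In other words, the "passing 65" sits more than twenty raw points below what an actual 65% would require.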
As I have blogged, and re-posted, many times before:
Popular myth and the standardized-test culture of American public education would have one think that education is about the massive and rapid accumulation of content, the purpose of which is to succeed on a state test; the error of this thinking is that it relegates the education process itself to a secondary status, making the test score the “prize” of education. Nothing could be more removed from the truth. We have become a nation of pure data, of test scores and dropout rates, ciphers which are at best simplified abstractions of critically important ideas – but raw numbers do not tell the whole story. Any educational process or policy that has at its heart the notion that it is the data that needs to be treated, and not the students, is fundamentally flawed.
Motoko Rich doesn’t realize this, but she accidentally stumbles on these points early in her article in the words of David Javiscas, a seventh-grade reading teacher at Troy Prep Middle School, near Albany (emphasis mine):
Helping students to puzzle through different narrative perspectives or subtext or character motivation, though, can be much more challenging. “It could take months to see if what I’m teaching is effective,” he said.
You get that, Ms. Rich? It could take months. Teaching is a process; learning is a process. A test is at best a snapshot (not a movie), and sadly, most of this nation’s "cameramen" are not themselves educators, and are not fit to wield the cameras.

Have you ever taken a bad photograph of a normally photogenic person? When my family goes to Sears or JC Penney to get family portraits done, the photographer typically takes multiple shots of the same pose. Why? Because not every snapshot reflects the right image. How is a test any different?

And this article's insistence, both explicit and implicit, that test scores are the de facto equivalent of achievement is a sad sign of the times; thanks to all this Race to the Top-fueled testing and the way students are taught nowadays, we have to take their picture, 'cuz they won't remember…

Hey, Dad, what do you think about your schools now?


I'm just A.S.K.ing...

3 comments:

  1. Great analogy about the picture. I'm glad people such as yourself are speaking out about the limits of standardized testing!

  2. Thanks. And "Take a Picture" is a pretty cool song too, though I'll admit I'm not 100% sure I know exactly what the song's really about...

  3. I am extremely impressed with your writing abilities. Thanks for this great share.
