Wednesday, May 16, 2012

When Everything Goes South

Bad news in Florida.
Preliminary results released Monday indicate that just 27 percent of fourth-graders earned a passing score of 4.0 or better (out of 6) on the writing test. A year ago, 81 percent scored 4.0 or better. . . .
Passing scores plummeted from 81 percent to 27 percent for fourth-graders and showed similar drops in eighth and 10th grades, according to statewide results released by the Department of Education.

(Aforementioned preliminary results here.)


I did what I always do when I come across an item of note, whether it be a dime on the sidewalk or a pineapple in the headlines: I called a friend.


We compared our reactions, which were predictably identical (and comically so, since we spoke simultaneously and in the same lexicon--not only are we friends of many years' standing, but we worked together for years and are in the habit of discussing our work often, a habit all the more agreeable as we share so many opinions):
1. What up with the scoring? 
and/or 
2. What up with the test construction? 
and/or 
3. What up with the cohort?
and/or
4. What up with the cut scores?



Them's the usual suspects. Occam's Razor.


As it happens, there was a change in the cut scores recently. The state DOE is (reportedly--I didn't talk to anyone myself) taking the line that "the results of prior years were artificially high and these are the real ones." The state did, however, turn around and decide to lower the bar so more students would pass.
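(A quick illustration of why the placement of the bar matters so much--the score distribution below is one I made up, nothing like Florida's actual numbers: when lots of essays bunch up just below the cut, moving the cut by half a point can swing the pass rate by tens of percentage points.)

```python
# Invented score distribution on the 1-6 scale, purely for illustration
# (this is NOT Florida's actual data). Each entry is (score, essay count).
distribution = [
    (1.0, 2), (1.5, 3), (2.0, 6), (2.5, 10), (3.0, 18),
    (3.5, 25), (4.0, 20), (4.5, 10), (5.0, 4), (5.5, 1), (6.0, 1),
]

total = sum(count for _, count in distribution)

def pass_rate(cut):
    """Percent of essays scoring at or above the cut."""
    passed = sum(count for score, count in distribution if score >= cut)
    return 100 * passed / total

# With many essays bunched between 3.0 and 4.0, a half-point change in
# the cut moves the pass rate dramatically.
for cut in (4.0, 3.5, 3.0):
    print(f"pass rate with cut score {cut}: {pass_rate(cut):.0f}%")
```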


Take a look at the exemplar writing sets. These show examples of student essays at each score point level.


And then there may be other factors beyond the cut score. Never dismiss the possibility that someone, somewhere, made a mistake. It happens.


According to Stuart R. Kahl, Ph.D., in a paper for Measured Progress,
A test score estimates something--a student's mathematical proficiency, perhaps. It is an estimate because it is based on a small sampling of the universe of items that could have been included on the test. Further, a test score is affected by factors other than the student's mathematical proficiency, such as: how well or motivated the student feels, whether there were distractions or interruptions during the testing session, and whether the student made good or bad guesses, to name a few. These factors, which can all be sources of measurement error, explain the difference between a student's calculated score on a particular test and that student's hypothetical "true" score. That true score, forever unknown, would reflect the student's real level of proficiency.
Estimate, inference--these are the words we must use when we talk about what we suppose a student knows or is able to do, whenever those suppositions are based on the results of a test.
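(Not part of Kahl's paper--just a back-of-the-envelope sketch with invented numbers--but if you want to see what "observed score = true score + error" looks like in practice, here is a tiny simulation. The true score, the size of the error, and the 1-6 scale with its 4.0 cut are all assumptions made up for illustration.)

```python
import random

# Classical test theory in one line: observed score = true score + error.
# Every number here is invented for illustration.
random.seed(1)

true_score = 4.2      # the student's hypothetical "true" proficiency (1-6 scale)
retests = 10          # imagine the same student sitting the test ten times

observed = []
for _ in range(retests):
    error = random.gauss(0, 0.5)                     # guessing luck, mood, distractions...
    score = min(6.0, max(1.0, true_score + error))   # keep it on the 1-6 scale
    observed.append(round(score, 1))

print("observed scores:", observed)
print("average:", round(sum(observed) / len(observed), 2))
# Any single administration is only an estimate of the true score; with a
# 4.0 cut, the same student can pass on one sitting and fail on the next.
print("sittings at or above 4.0:", sum(s >= 4.0 for s in observed), "of", retests)
```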


Florida may sit herself down to rest on the lowered bar. Or there may be an investigation. Such an investigation would crawl through the maze of scoring operations.


If nothing turns up there, the investigation could lumber over to the construction of the test itself. And if the test construction proves immaculate, there may be something going on with the cohort.


Or maybe it really was just the cut scores. ("Just." As if that's nothing.)


In other news: the latest on the pineapple here, along with newly reported mistakes on the New York State math tests. There's no glee in reporting that. We all of us get tarred with that brush, even if we have nothing to do with that particular test.


That's what's going on today. Who knows what tomorrow will bring.




Read more here: http://www.miamiherald.com/2012/05/14/2799146/fcat-writing-scores-plummet.html#storylink=cpy
