National Institute for Literacy
 

[Assessment] To fudge or not to fudge

Katrina Hinson khinson at future-gate.com
Fri Mar 10 07:50:28 EST 2006


That's a big question. I had to think about this one for a while before I could even begin to frame a response. I agree that it's not a new story and that it probably happens more than people realize. My personal opinion is that any time you tie funding primarily to performance, there are bound to be issues regarding data collection, and I'm not sure there is a "neat" solution that will address the problem.

I also think there are gaps in the data itself. I teach in addition to other duties I have at my institution, and often when I record outcomes, I don't feel they're a true indicator of a student's performance - a student may have made progress, but not enough to move up a level or to meet a goal or performance indicator, and progress like that isn't accounted for. Another problem we've encountered with the data itself is that if a student marks "find a job" as a goal and ends up joining the military, the goal isn't counted as met because of the way the data is cross-referenced with ESC agencies, yet I think most people would agree that joining the military definitely qualifies as getting a job.

I also think there are issues with the data collection instruments themselves: while they may have been validated at some point, they perhaps need to be reviewed to see whether they are capturing the right data to show a program's real performance, or whether more needs to be taken into account when determining if a program is doing well. I don't think raw data alone can ever truly show a program's strengths and weaknesses.

I'm still digesting this topic. It definitely warrants thought.

Regards
Katrina Hinson


>>> marie.cora at hotspurpartners.com >>>

That is the question...

Hello all! Not too long ago, I received an email question regarding
submitting accurate data to the states and the Feds. It appeared that
the person was being pressured to make up data (assessment scores) so
that the outcomes of the program looked better.

I bet this story is not new to you - either because you have heard about
it, or perhaps because it has happened to you.

So I have some questions now:

If programs fudge their data, won't that come back to haunt us all?
Won't that skew standards and either force programs to under-perform or
keep them from reaching performance levels that have been set too
steep? Why would you want to fudge your data? At some point, most
likely, the fudge will be revealed, don't you think?

We don't have nationwide standards - so if programs within states are
reporting data in any which way, we won't be able to compare ourselves
across states, will we?

Since states all have different standards (and some don't have any),
can't states report in ways that make them appear to be out-doing
other states, when perhaps they are not at all?

I'm probably mushing 2 different and important things together here:
the accurate data part, and the standards part ("on what do we base our
data") - but this is how it's playing out in my mind. Not only do we
sometimes struggle with providing accurate data (for a variety of
reasons: it's complex, it's messy, we feel pressure, sometimes things
are unclear, etc.), but we do not have institutionalized standards
across all states for all to be working in parallel fashion.

What are your thoughts on this?

Thanks,
marie cora
Assessment Discussion List Moderator




