[Assessment] To fudge or not to fudge

Andrea Neilson ADNEILS at k12.carr.org
Wed Mar 8 14:12:26 EST 2006


Marie,

You've hit the mark with me on all the points you've raised regarding
data. Having worked as the LWIS coordinator for the past 6 years, I've
seen data collection requirements become more detailed and certainly
more complex. When we went from scanning to online data entry, we
suddenly had more restrictions, instant error notifications, and a
series of "if this, then that" rules to implement. I don't consider
myself a data geek, but I really appreciate the more clearly defined
and precise directives. I value our program's integrity and feel secure
in our reporting because it is in line with our state's requirements.

However, I still very often feel we're comparing apples to oranges when
we look at our performance across fiscal years, against other providers,
and against the state averages. I often wonder what practices other
providers are using in regard to data management, intake and assessment,
and data quality (not to mention instruction and professional
development). In the bigger picture, it would make sense to me if these
practices were also either more clearly directed and defined or, at the
very least, shared and discussed among all providers so we can learn
"what works."

I'm a first-time responder to this list, and perhaps, using this
platform, we might share legitimate program practices that result in
these anticipated outcomes?

Thanks,
Andrea Neilson
Intake Assessment Specialist
Carroll Adult Learning Connection
Carroll County Public Schools
410-751-3680 ext. 221


>>> marie.cora at hotspurpartners.com 3/8/06 12:08:31 PM >>>

That is the question...

Hello all! Not too long ago, I received an email question regarding
submitting accurate data to the states and the Feds. It appeared that
the person was being pressured to make up data (assessment scores) so
that the program's outcomes would look better.

I bet this story is not new to you - either because you have heard
about it, or perhaps because it has happened to you.

So I have some questions now:

If programs fudge their data, won't that come back to haunt us all?
Won't that skew standards and either force programs to under-perform or
prevent them from reaching performance levels that have been set too
steep? Why would you want to fudge your data? At some point, the fudge
will most likely be revealed, don't you think?

We don't have nationwide standards - so if programs within states are
reporting data in any which way, we won't be able to compare ourselves
across states, will we?

Since states all have different standards (and some don't have any),
states can report in ways that make them appear to be out-doing other
states, when perhaps they are not at all?

I'm probably mushing two different and important things together here:
the accurate data part, and the standards part ("on what do we base our
data") - but this is how it's playing out in my mind. Not only do we
sometimes struggle with providing accurate data (for a variety of
reasons: it's complex, it's messy, we feel pressure, sometimes things
are unclear, etc.), but we also do not have institutionalized standards
across all states so that all of us can work in parallel fashion.

What are your thoughts on this?

Thanks,
marie cora
Assessment Discussion List Moderator



