National Institute for Literacy
 

[Assessment 238] Re: Thoughts on apples and oranges

Gopalakrishnan, Ajit Ajit.Gopalakrishnan at ct.gov
Wed Mar 15 09:48:11 EST 2006


My comments on some of the issues are below.



Thanks.



Ajit



Ajit Gopalakrishnan

Connecticut Department of Education

25 Industrial Park Road

Middletown, CT 06457

Phone: (860) 807-2125

Fax: (860) 807-2062

Email: ajit.gopalakrishnan at ct.gov



________________________________

From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov]
On Behalf Of Marie Cora
Sent: Tuesday, March 14, 2006 11:14 AM
To: assessment at nifl.gov
Subject: [Assessment 234] Thoughts on apples and oranges



Hi everyone,



I'm going to pick up where the discussion left off last week - we were
exploring some of the frustrations with standards, reported data, and
goals.



Several of you noted that because of the lack of national standards,
it's tough or next to impossible to compare performance across programs
or states.

Actually, I think that the NRS accomplishes just that by outlining a
nationally agreed-upon set of skill levels and level descriptors. The
NRS does not prescribe what standards a state has to use. It simply
tries to compare the performance outcomes resulting from whatever
standards the state is using.



Yet part of what the federal system does is compare states to one
another when identifying recipients of things like incentive grants and
so forth.

I believe the federal system does not compare states to each other to
determine incentive awards. That determination is based on a state's
performance relative to its own performance targets. There is some level
of state-to-state comparison in the process of establishing performance
targets.



States are required to report on how they are able to show gain via
pre- and post-test scores - but as Andrea and Susan pointed out in their
posts, there is no standardized method for showing this gain - each
state creates its own benchmarks.

I don't believe this statement is correct. The NRS benchmarks don't vary
by state. The standardized method for showing gain is the NRS, which
requires the use of approved standardized assessments.
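
To make that concrete for readers new to the NRS, here is a minimal
sketch of how a pre/post level-gain determination works in principle.
The level names, cut scores, and function names below are invented for
illustration only; the real conversions come from each approved
assessment's published NRS tables.

# Hypothetical sketch: mapping scale scores to educational functioning
# levels and checking for a level gain. Cut scores and level names are
# invented for illustration, not taken from any approved assessment.

# Lowest scale score that places a student in each level.
LEVEL_CUTS = [
    (0,   "Beginning Literacy"),
    (200, "Beginning Basic"),
    (210, "Low Intermediate"),
    (220, "High Intermediate"),
    (235, "Low Adult Secondary"),
    (245, "High Adult Secondary"),
]

def placement_level(score):
    """Index of the highest level whose cut score this score meets."""
    level = 0
    for i, (cut, _name) in enumerate(LEVEL_CUTS):
        if score >= cut:
            level = i
    return level

def shows_level_gain(pre_score, post_score):
    """True if the post-test places the student at least one
    educational functioning level above the pre-test."""
    return placement_level(post_score) > placement_level(pre_score)

# Example: a pre-test of 205 places the student at Beginning Basic and
# a post-test of 218 at Low Intermediate, so a level gain is shown.
print(shows_level_gain(205, 218))  # True

Because every state maps its scores into the same set of levels, the
gain counts become comparable even when the underlying standards and
assessments differ.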



What can we do about this? We need a national set of standards. But
before that? Jane noted that state standards should be indicated within
the submitted data - do any states do this? (Probably not, because they
are not required to.) Would this help? Let's think this possibility
through a little...



Susan described several scenarios for us in which one aspect necessarily
must suffer in order for another aspect to be recognized (feel familiar
to you?). I hear this lament constantly: 'so as a program director, do
I make sure my numbers work so I can continue to get funded to run my
program, or do I not compromise the integrity of the teaching/learning
process but run the risk of not showing good data?' (and then my
program loses its funding, so integrity becomes a moot point). However,
we must have an accountability system; I really don't believe anyone
wants to throw money around without real proof that it's not being
wasted. One of you noted that reform, then, must happen at the root - at
the NRS. What would that look like?

I have not read the original scenarios but have heard similar feelings
expressed by some practitioners. This tends to be a no-win situation for
those practitioners. My feeling is that state and local administrators
need to be completely committed to maximizing the value of standardized
assessments and to using data for decision making (not just test-score
data but also data on recruitment, retention, attendance, goal setting,
etc., including data that might sometimes be very unflattering).



Varshna - you asked if the NRS/DOE requirements include data validation
as DOL's requirements do - not to my knowledge - but can anyone speak
to this question? It's a good one. Varshna - do you believe that such
data validation helps with the "apples/oranges" issue? How so?

The NRS does require an extensive data validation checklist that looks
at data structures, data systems, policy requirements, edit checks, and
professional development. States submit this checklist along with their
end-of-year data submission.
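
For readers unfamiliar with what "edit checks" look like in practice,
here is a minimal sketch of a few record-level checks of the kind a
state data system might run before an end-of-year submission. The field
names and the 300-600 scale range are my own invented examples, not the
actual federal checklist items.

# Hypothetical sketch of record-level edit checks on student data.
# Field names and valid ranges are invented for illustration.

from datetime import date

def edit_check_errors(record):
    """Return a list of problems found in one student record."""
    errors = []

    # A post-test should not be dated before the pre-test.
    if record["post_test_date"] < record["pre_test_date"]:
        errors.append("post-test dated before pre-test")

    # Scores should fall inside the instrument's valid scale range.
    for field in ("pre_score", "post_score"):
        if not 300 <= record[field] <= 600:
            errors.append(field + " outside valid scale range")

    # Attendance hours should be non-negative.
    if record["attendance_hours"] < 0:
        errors.append("negative attendance hours")

    return errors

record = {
    "pre_test_date": date(2005, 9, 12),
    "post_test_date": date(2006, 1, 20),
    "pre_score": 412,
    "post_score": 455,
    "attendance_hours": 86,
}
print(edit_check_errors(record))  # [] -> record passes these checks

Checks like these catch data-entry and system problems before they
distort the aggregate performance numbers a state reports.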



Finally, Katrina - you brought up the 'gaps in data' issue and cited the
"unanticipated" goals situation as an example. This is also something
we need to address: a student may change a goal, or achieve a goal that
was not specifically set at the outset of the learning process. This
happens all the time and is normal behavior: shifting and changing your
goals based on your experience and progress is a natural part of
learning. But often these goals get lost, don't get counted, or cannot
be counted, because our system does not give us a way to show
increments, for example.

Yes, currently the goal-based outcomes relative to employment,
postsecondary education, and diploma attainment are the ones of greatest
importance for NRS reporting. States can establish accountability
systems that reward (monetarily or otherwise) other outcomes if they so
choose.





What do we need? National standards? Is that the most important thing
that will help combat these issues?



A different way to capture learning? What would that look like?
Remember that the needs of the funder and the public are quite different
from the needs of the teacher and student - and both are legitimate
needs.



What are your thoughts on these issues?



Thanks,

marie cora

Assessment Discussion List Moderator

marie.cora at hotspurpartners.com








