National Institute for Literacy
 

[Assessment] Legitimacy of alternative tools

Marie Cora marie.cora at hotspurpartners.com
Sun Feb 5 08:30:23 EST 2006


Hi Ajit and everyone,

Yeah, that's a good question you posed to me, Ajit: you're right that I do think of 'alternative' as referring to any assessment that is not multiple choice. Actually, the terms I use in my head to separate this stuff out are selected response and constructed response.

Selected response describes that situation precisely: the person must choose (select) from a set of answers (responses) the one they think is right.

That's pretty tightly wrapped up in terms of what that means: you get
the list of answers, you look at the choices, you determine which item
on the list you think is right.

Constructed response also describes the situation precisely: the person must recall info or build (construct) the answer to a particular question for themselves. No choices are given for the person to consider - they are not selecting anything. The other thing that is hugely useful about this term is that it is not prescriptive about how large or small the constructed response must be. So for example, many people think that a 'performance assessment' (which is a constructed response, because you are demonstrating your performance) must necessarily entail something big, lengthy, intense, etc. But in fact, a constructed response might entail just one word (as long as you are not selecting that word from a list). Here's a great example: you know what a 'cloze' exercise is? Those fill-in-the-blank worksheets that can test you on vocab or grammar? Well, that is a performance assessment, even though you are only filling in one word here and there.

I like to think about these notions this way because they are free of other distractions - for example, there is no mention of standardization with selected or constructed response; that is a whole other step in the process. And if you continue to think about selected response as 'multiple choice', then I bet you a dime you just fall back on equating multiple choice with the TABE - and that is just not correct at all. While the TABE is an EXAMPLE of a multiple choice test, one does not equal the other.

A couple of questions back to you Ajit and to all the subscribers:

- Ajit, you made some really thoughtful comments in your arguments
against using authentic assessment - what do others think of Ajit's
point of view?

- Ajit, you said: "In my opinion, at least some non-multiple choice
assessments should be standardized so that they can be used to broaden
the array of assessments available for state-level
reporting/accountability."

Folks - can anyone give us any examples of what Ajit describes above?
Let's see if we can develop a growing list of the assessments being used
that are different - I'll start by adding the REEP Writing Rubric to the
list - it is standardized, it is a constructed response test, and at
least Massachusetts uses it for reporting writing gains to the feds.

Also, Andrea Wilder (post on 2/3) suggested that we use 'Assessment' as the umbrella term for all types of 'tests', but that we divide that into sub-headings listing the various types, and include information on who wants the data from a given test and who gets that data. We do have some info listed on types of tests and costs, but we don't have a whole lot of info on who actually gets the test data and what it gets used for. What do folks think about this?...I'm intrigued....

Robin Millar (post on 2/3) describes a guided portfolio in use in Manitoba that sounds interesting: it has several levels to it. Robin - are parts of the portfolio standardized? The whole thing? Does the portfolio include both selected response and constructed response types of assessments and info?

Ok, enough chatter from me for a Sunday morning. Hope everyone is
having a lovely weekend, and see you again tomorrow,

Marie cora
Assessment Discussion List Moderator

For definitions see:
http://wiki.literacytent.org/index.php/Assessment_Information#Assessment_Glossary

For a bunch of details and info on commercially available assessments (which do not yet discuss the uses of data, but should!), go to:
http://wiki.literacytent.org/index.php/Commercially_Available_Assessment_Tools

To help me develop the Wiki section on Alternative Assessment, go to:
http://wiki.literacytent.org/index.php/Alternative_Assessment

To make informed choices about test selection, go to:
http://wiki.literacytent.org/index.php/Selecting_Assessment_Tools



-----Original Message-----
From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov]
On Behalf Of Gopalakrishnan, Ajit
Sent: Saturday, February 04, 2006 12:42 PM
To: The Assessment Discussion List
Subject: Re: [Assessment] Legitimacy of alternative tools

Marie, et al,

By "alternative", I presume you mean that these assessment options are
an alternative to multiple-choice assessments. Is that a fair inference?
I sometimes refer to alternative assessments as non-multiple choice
assessments, just to make clear what I am talking about.


From my perspective, referring to them as authentic seems to muddy this discussion. Webster provides the following two definitions of authentic, which may help to illustrate my thinking:
a) worthy of acceptance or belief as conforming to or based on fact <paints an authentic picture of our society>
b) true to one's own personality, spirit, or character


So for example, a student's CASAS scale score in math (say 212) from a multiple choice test may be worthy of acceptance as a measure of that person's math ability. An analysis of the test item responses may even provide greater information about the person's strengths and weaknesses. However, neither can say much about how the student perceives the relation of "math" to his/her own personality and life. Two students at entry might both achieve a score of 207 in math for very different reasons. One student might have liked math and viewed herself as capable of learning math, but just not used it for many years. The other student might have never liked math, generally seen herself as having other strengths, but been forced to use math as part of her job. To ascertain this type of information, the teacher might have to talk to the student and find out the student's past experiences with math, the student's perceptions of its importance in his/her life, etc. Then, a custom assessment/project can be designed that is meaningful and authentic to that particular student.


From my perspective, all standardization (whether of multiple-choice or non-multiple choice assessments) will to some extent reduce the authenticity for the student. The CASAS system attempts to address this by providing assessments that are relevant to adults and based in various contexts (life skills, employability skills, workforce learning, citizenship, etc.) so that students can be assessed in contexts that are somewhat authentic to their experiences and goals.

Therefore, I prefer the term alternative assessments because then we can
focus our discussion on the differences between multiple choice
assessments and non-multiple choice assessments.

There is no question that non-multiple choice assessments can be
legitimate and have many strengths.
For example, Connecticut is currently piloting a CASAS workplace speaking assessment. This is a standardized assessment designed to let ESL learners who are currently employed demonstrate their listening and speaking abilities in a workplace context. Compared to the CASAS listening multiple-choice assessments, which we have used over the years, the speaking assessment offers the instructor the potential to gain a greater understanding of a student's strengths and weaknesses. Students also seem to enjoy taking the assessment. However, it needs to be administered one-on-one, unlike the listening assessment, which can be group administered. The speaking assessment also places a greater training and certification burden on the test administrator and scorer. We have experienced many of these challenges with our statewide implementation of the CASAS Functional Writing Assessment over the past few years. Kevin alluded to some of those challenges, such as maintaining scorer certification and interrater reliability. The scoring rubrics used in the writing and speaking assessments can be valuable tools for classroom instruction.

In my opinion, at least some non-multiple choice assessments should be
standardized so that they can be used to broaden the array of
assessments available for state-level reporting/accountability.

Thanks.
Ajit

Ajit Gopalakrishnan
Education Consultant
Connecticut Department of Education
25 Industrial Park Road
Middletown, CT 06457
Tel: (860) 807-2125
Fax: (860) 807-2062
ajit.gopalakrishnan at po.state.ct.us
<mailto:ajit.gopalakrishnan at po.state.ct.us>
________________________________

From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov]
On Behalf Of Marie Cora
Sent: Thursday, February 02, 2006 11:52 AM
To: Assessment Discussion List
Subject: [Assessment] Legitimacy of alternative tools

Hi Bruce and everyone,

Bruce, you said:

"I think putting forth the strengths and legitimacy of tools such as
portfolios, outcome checklists, holistically scored writing samples, etc
is a good way to go."

This sounds to me like a very good path to go down. I think people would have a lot to say and share about alternative tools, their uses, and their strengths. It would be a great exercise to list them all out and discuss the strengths, uses, and limitations of each one.

What questions do folks have about alternative assessments? Using them, seeking them out, developing them: whatever area most intrigues you.

What can folks share with the rest of us in terms of "the strengths and legitimacy" of alternative tools such as portfolios, checklists, analytic/holistic scoring, rubric use, writing samples, and intake/placement processes?

Are any of the tools you use standardized? Not standardized? Do you
think that this is important? Why or why not?

Are any of the tools used for both classroom and program purposes?

I have other questions for you, but let's leave it at that for right
now. Let us hear what your thoughts are. We're looking forward to it.

Thanks,

marie cora
Assessment Discussion List Moderator





