
[Assessment 1956] Response to Forrest Chisman's Comments on ABE Assessments

Schneider, Jim

jschneider at eicc.edu
Sat May 30 17:10:40 EDT 2009


<Forrest Chisman wrote: What I DO wonder is what the value added of fine grained assessments is for a teaching force that is overwhelmingly part-time, semi-skilled, unsupervised, paid only for contact hours, and meets with students who attend on an intermittent basis 3-6 hours per week. Most of them are lucky if they know who their learners ARE on any given day.>


We have used CASAS assessments for the past 13 years. Although one can quibble about the particulars of ANY assessment (and I have plenty to quibble about), a failure to provide insightful information for instructors isn't one of my quibbles. Between the competency reports and the content standard reports, the CASAS assessments provide a wealth of information about the strengths and weaknesses of our learners.

Dr. Chisman's "wonder" couldn't be more spot on. The ABE accountability system (NRS) is built "as if" the ABE system were funded and operated in as fully professional a manner as the K-12 system. IF it were, the assessments could and would be used as they are intended. So long as the ABE system continues to be marginalized at the national, state, and institutional levels, there is little need to develop an even more elaborate charade than the one we are currently operating under.




-----Original Message-----
From: assessment-bounces at nifl.gov on behalf of Forrest Chisman
Sent: Sat 5/30/2009 1:03 PM
To: 'The Assessment Discussion List'
Subject: [Assessment 1953] Re: DIBELS

Jean,



Actually I'm not a teacher and never have been. I'm a policy guy. What I DO
wonder is what the value added of fine grained assessments is for a teaching
force that is overwhelmingly part-time, semi-skilled, unsupervised, paid
only for contact hours, and meets with students who attend on an
intermittent basis 3-6 hours per week. Most of them are lucky if they know
who their learners ARE on any given day. You'd certainly have to invest in a
lot of staff training if they were going to use fine grained assessments,
and if you didn't they would probably misuse the tools. But wouldn't the
same investment in staff training be better used for other purposes? More
than that, I'm always concerned that more precision in almost anything
doesn't necessarily create better results. Sometimes "close enough" works
best. NAAL certainly was precise, but as far as I can tell it has had little
or no impact on the field, and the more one learns about it, the less one
knows. It's sort of like the complex "risk management" programs developed by
financial institutions to evaluate derivatives. They sure were precise, but
they created a false sense of certainty that brought down the American
economy. I'm not a Luddite, but I do believe in calibrating our means with
our aims. Just because we CAN do it doesn't mean it's worth doing - except
possibly for purposes of research to find out what we SHOULD be doing.



Forrest Chisman

Vice President

Council for Advancement of Adult Literacy



From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of Jean Marrapodi
Sent: Saturday, May 30, 2009 11:31 AM
To: 'The Assessment Discussion List'
Subject: [Assessment 1952] Re: DIBELS



Forrest-

Yes, you are overworked, but imagine having something that meets the
mark... and being able to know what the mark is for each learner. That
might eliminate wasted effort. We can dream, right?

Jean



From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of Forrest Chisman
Sent: Friday, May 29, 2009 6:53 PM
To: 'The Assessment Discussion List'
Subject: [Assessment 1947] Re: DIBELS



Given John's reading on this, are we sure we need an additional AE test?
That is, would there be much value added in instructional terms in being
more fine grained than the additional tests - particularly because AE
teachers are fairly autonomous and over-worked as it is?



Forrest Chisman



From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of Sabatini, John
Sent: Friday, May 29, 2009 2:39 PM
To: The Assessment Discussion List
Subject: [Assessment 1945] Re: DIBELS



I recommend you also keep after us at ETS. Our team has developed a set of
component/diagnostic measures for use with adult literacy learners and has
piloted them with ABE adults in our intervention studies and also with
adolescent struggling readers. We also do not yet have national norms for
you, but we have the measures (computerized), so we are probably getting
close. A national norming sample is an expensive, somewhat complicated
thing to assemble - probably most of the expense of any published test is
getting the norming sample and analyzing it.


Part of our delay is the odd circumstance of adult basic education. As the
NAAL FAN report shows, most of the adults in the country are probably off
the top of the chart for basic skills - or at least high enough that one is
not likely to provide additional basic skills testing. So the normative
sample we need is relative to a special population - ABE students and ELL
students. Otherwise, all we'll learn is that all of the ABE students are in
the bottom 25th percentile in the country. The tests need to be designed to
discriminate well among that bottom 25th percentile. So, we would most
likely need to sample from literacy programs across the country - and make
them take more tests! Then, it would still not be sound to generalize to
non-program adults, but that might not be too large a problem. Perhaps we
can generate some momentum among providers for a study of that kind.



John





_____

From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of Jean Marrapodi
Sent: Friday, May 29, 2009 2:04 PM
To: 'The Assessment Discussion List'
Subject: [Assessment 1944] Re: DIBELS

Thanks Bob!

I contacted DIBELS as well, and they said they had no studies regarding
adults. Seeing that there would be no place to enter data for adult scores
makes a big difference in deciding whether this would be usable at all for
my population. I'm trying to find something that can help the teachers
assess and track this lowest-level population at a granular level without
reinventing the wheel. DIBELS has the granularity of the skills I'm looking
for.



So much of our adult literacy material is paper-and-pencil and
labor-intensive. I have the Bader, which is complex and not user-friendly.
I have a slew of individual diagnostics for word recognition, vocabulary
meaning, phonemic analysis, etc. I'm looking for something simple.



I wonder if I could reopen that discussion with DIBELS, and if it would be
worth it.

I've also contacted CAL about a project they did with bilingual
Spanish/English K-2 children learning to read. We have a slew of new
Spanish-speaking learners who arrived with minimal education in L1, so in
many ways they parallel these children. Of course in many ways they don't,
but what's out there for them? Our lowest-literacy folks have so few
diagnostics available that we need to borrow across the realms of K-12.









Jean Marrapodi, PhD, CPLP

teacher by training, learner by design
jmarrapodi at applestar.org
mobile: 401.440.6165
www.applestar.org













From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of Hughes, Robert
Sent: Thursday, May 28, 2009 12:25 PM
To: The Assessment Discussion List
Subject: [Assessment 1933] Re: DIBELS



A year or so ago, I decided to see if DIBELS could be adapted to adult ed
settings. I contacted the researchers who designed it at the University of
Oregon (https://dibels.uoregon.edu/) to see what they thought. They seemed
to think that the measures would be appropriate, and I agree with John's
assessment below that there could be some uses.



Here's the rub, though. The way that CBMs like DIBELS work is that they
rely on input from large numbers of users to generate norms that teachers
can use to assess individual students. This works well because DIBELS
gathers scores from a wide range of users all over the country. The number
and diversity of users yields a natural sample that supports a fairly
accurate and constantly updated norming process. DIBELS is, therefore,
normed to the K-12 population pretty well.
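
As a rough illustration of the arithmetic behind that kind of norming (this
is not DIBELS' actual procedure, and every number is invented), a percentile
rank is just the share of pooled scores at or below a given student's score:

    # Hypothetical sketch of percentile norming from pooled user scores.
    def percentile_rank(pooled_scores, score):
        """Percent of pooled scores at or below `score`."""
        at_or_below = sum(1 for s in pooled_scores if s <= score)
        return 100.0 * at_or_below / len(pooled_scores)

    pooled = [12, 18, 22, 25, 31, 34, 40, 47, 55, 61]  # invented pooled scores
    print(percentile_rank(pooled, 34))                 # -> 60.0 (60th percentile)

The more users contribute scores, the more stable and current that pooled
sample becomes, which is the appeal of the crowd-sourced approach.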



There isn't a category for entering adult learner scores into DIBELS, and
that needs to be added before it can be appropriately normed for adults.
And rather than grade-level norms, someone would have to generate norms
that are more closely aligned to the norms we use in adult ed. My brief
discussion with the DIBELS folks suggests that they aren't averse to doing
this -- but that people haven't approached them with the request. I'm
guessing that if enough people start contacting them, they might respond.



Bob H.



Bob Hughes, Ed.D.
Associate Professor of Adult Education
Seattle University
410 Loyola Hall

901 12th Ave

PO Box 222000, Seattle WA 98122

Ph: 206-296-6168
E-mail: rhughes at seattleu.edu



_____

From: assessment-bounces at nifl.gov on behalf of Sabatini, John
Sent: Thu 5/28/2009 6:42 AM
To: The Assessment Discussion List
Subject: [Assessment 1929] Re: DIBELS

Hi,



I'd also recommend the following references for thinking about how to
assess fluency with adult learners. The first two are actually from the
4th-grade special studies of oral reading conducted by NAEP. The reason to
look at them is to see how the authors constructed the
fluency/prosody/expressiveness subscale and to understand a bit about the
distinctions between rate (words per minute), accuracy (percentage
correct), and words correct per minute. As the Wayman report points out,
4th grade is a key developmental year for the strength of the relationship
between oral reading and comprehension in children. The national sampling
is sound. The Wayman article introduces all the variations of oral reading
tasks and what aspects might matter in choosing one. One can also read
nearly anything by Tim Shanahan.
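
To make that three-way distinction concrete, here is a rough sketch of the
arithmetic; the numbers are invented, and this is not how NAEP or DIBELS
actually scores a reading:

    # Illustrative only: rate, accuracy, and words correct per minute (WCPM).
    def oral_reading_scores(words_attempted, errors, seconds):
        minutes = seconds / 60.0
        rate = words_attempted / minutes                   # words per minute
        accuracy = 100.0 * (words_attempted - errors) / words_attempted  # % correct
        wcpm = (words_attempted - errors) / minutes        # words correct per minute
        return rate, accuracy, wcpm

    # e.g., 200 words attempted with 10 errors in 2.5 minutes:
    print(oral_reading_scores(200, 10, 150))  # -> (80.0, 95.0, 76.0)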



DIBELS has been an exemplar of the curriculum-based measurement (CBM)
approach. The goal of that research has been to use fluency-type measures
as a proxy for predicting reading comprehension. Interestingly, the focus
has been less on the subgoal/subskill of improving children's reading
fluency. The DIBELS technical reports still provide some useful benchmarks
for thinking about the development of reading rate and fluency, but as the
previous post notes, be cautious about applying any rules as-is with
adults. They do continue to improve the technical aspects.



Of course, we continue to recommend you look at the NCES Basic Skills
report that was just published, as we gave a national sample of some 19,000
adults two passages -- one at about the 2nd-6th grade level and another at
the 7th-8th grade level. While we cannot at present create a normative
scale for those particular passages, as we develop further reports the
results can serve as a guide to expectations for adult readers. Our
research team here is also conducting research on adult reading fluency,
though we don't have particular assessments to recommend at this time.
Hopefully, we'll have more helpful reports out there for you soon.



I think one of the main purposes of reading fluency assessment with adults
is to monitor the improvement of accuracy, rate, and
fluency/prosody/expressiveness (referred to here, I think, as chunking for
syntax and grammar) over time with texts of increasing challenge. So the
point is to repeat the activity over time, recording rate, accuracy, and
ease, to see whether there is improvement. I don't trust readability
formulas for equating texts - don't expect any two texts with the same
readability index to be of equal difficulty in terms of reading rate for
any adult. However, adults and most readers are roughly consistent in
their reading rates across a relatively wide variety of texts - until the
texts get so difficult that the individual is struggling with every word.
I actually prefer picking easy texts relative to the adult's word reading
ability when monitoring continuous text reading fluency. There are
separate measures one can use for word recognition and decoding.
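
A minimal sketch of that kind of record keeping follows; the sessions and
numbers are invented, and a real log would also note which text was used
and how challenging it was:

    # Illustrative progress log for repeated oral readings over time.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        date: str
        words_attempted: int
        errors: int
        seconds: float

        @property
        def wcpm(self):
            # words correct per minute for this session
            return (self.words_attempted - self.errors) / (self.seconds / 60.0)

    log = [
        Reading("2009-03-02", 180, 14, 160),   # invented sessions
        Reading("2009-04-06", 180, 9, 150),
        Reading("2009-05-04", 180, 6, 140),
    ]

    for r in log:
        accuracy = 100.0 * (r.words_attempted - r.errors) / r.words_attempted
        print(f"{r.date}: {r.wcpm:.0f} wcpm, {accuracy:.0f}% accurate")

Looking across the rows for rising rate and accuracy (and noting ease) is
the kind of repeated, informal monitoring described above.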



Finally, McShane's report applies this to adults.



John





Daane, M. C., Campbell, J. R., Grigg, W. S., Goodman, M. J., & Oranje, A.
(2005). Fourth-grade students reading aloud: NAEP 2002 special study of oral
reading (No. NCES 2006-469). Washington, DC: U.S. Department of Education,
Institute of Education Sciences, National Center for Education Statistics.

Pinnell, G. S., Pikulski, J. J., Wixson, K. K., Campbell, J. R., Gough, P.
B., & Beatty, A. S. (1995). Listening to children read aloud: Data from
NAEP's Integrated Reading Performance Record (IRPR) at grade 4 (No.
NAEP-23-FR-04; NCES-95-726). Princeton, NJ: Educational Testing Service.

Samuels, S. J. (2006). Toward a model of reading fluency. In S. J. Samuels &
A. E. Farstrup (Eds.), What research has to say about fluency instruction
(pp. 24-46). Newark, DE: International Reading Association.

Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007).
Literature synthesis on curriculum-based measurement in reading. The Journal
of Special Education, 41(2), 85-120.



McShane, S. (2005). Applying research in reading instruction for adults:
First steps for teachers. Washington, DC: National Center for Family
Literacy, National Institute for Literacy.





_____

From: assessment-bounces at nifl.gov [mailto:assessment-bounces at nifl.gov] On
Behalf Of SALandrum at aol.com
Sent: Thursday, May 28, 2009 6:53 AM
To: assessment at nifl.gov
Subject: [Assessment 1926] Re: DIBELS

I may be wrong, but I don't think it has been scaled for adults. Below is
from their webpage.



The DIBELS were developed as criterion-referenced measures, but national
norms have also been developed.

DIBELS are criterion-referenced because each measure has an empirically
established goal (or benchmark) that changes across time to ensure students'
skills are developing in a manner predictive of continued progress. The
goals/benchmarks were developed following a large group of students in a
longitudinal manner to see where students who were "readers" in later grades
were performing on these critical early literacy skills when they were in
Kindergarten and First grade so that we can make predictions about which
students are progressing adequately and which students may need additional
instructional support. This approach is in contrast with normative measures
which simply demonstrate where a student is performing in relation to the
normative sample, regardless of whether that performance is predictive of
future success.

For your convenience, district-level norms or percentiles are generated at
each benchmark data collection period so schools/districts can make
decisions about student performance in relation to the local context of
students who have received, generally, the same type of instructional
experiences. National norms, generated with all the students in the DIBELS
Data System as of 2002, are also posted within the Technical Reports section
of the website in Technical Report #9.

You can see how the benchmark goals are used by going to our Technical
Reports <https://dibels.uoregon.edu/techreports/index.php> page and
downloading the following report:

Good, R. H., Simmons, D. S., Kame'enui, E. J., Kaminski, R. A., & Wallin, J.
(2002). Summary of decision rules for intensive, strategic, and benchmark
instructional recommendations in kindergarten through third grade (Technical
Report No. 11). Eugene, OR: University of Oregon.



Susan Landrum
Certified Barton Tutor
Central Georgia Technical College
slandrumcgtcedu at gmail.com



In a message dated 5/28/2009 6:42:03 A.M. Eastern Daylight Time,
jmarrapodi at applestar.org writes:

I'm going out on a limb here.

Lots of folks in the K-5 world use DIBELS (https://dibels.uoregon.edu/) for
reading assessment in the primary grades. It is fairly granular. Is there
any history or applicability for use with adult low-literacy learners? It's
fairly intensive to learn to administer, but it does measure a lot of the
subskills we are looking at: alphabetics, fluency, comprehension, and
vocabulary. In the teacher discussions on teachers.net, one of the
complaints was the timing of tasks for young children, which I can see
could create undue stress. Often elementary materials are problematic for
adults, but this one comes well researched.



I'm just wondering about it, so I thought I'd toss it into the mix this week
to see what you all thought.







Jean Marrapodi, PhD, CPLP

teacher by training, learner by design
jmarrapodi at applestar.org
mobile: 401.440.6165
www.applestar.org
















