
Trying to Find a Measure for How Well Colleges Do

Various standardized tests exist to gauge secondary school achievement. A similar system for judging and comparing colleges may be taking shape.

How well does a college teach, and what do its students learn? Rankings based on the credentials of entering freshmen are not hard to find, but how can students, parents and policy makers assess how well a college builds on that foundation?

What information exists has often been hidden from public view. But that may be changing.

In the wake of the No Child Left Behind federal education law, students in elementary, middle and high schools take standardized tests whose results are made public, inviting anyone to assess, however imperfectly, a school’s performance. There is no comparable trove of public data for judging and comparing colleges.

Pieces of such a system may be taking shape, however, with several kinds of national assessments — including, most controversially, standardized tests — gaining traction in recent years. More than 1,000 colleges may be using at least one of them.

“There’s a real shift in attitudes under way,” said David C. Paris, executive director of the New Leadership Alliance for Student Learning and Accountability, a coalition of higher education groups. “We used to hear a lot more of, ‘The value of college can’t be measured,’ and now we hear more of, ‘Let’s talk about how we can measure.’ ”

In January, the New Leadership Alliance released guidelines calling on colleges to systematically “gather evidence of student learning” — though not explicitly advocating standardized tests — and release the results. The report was endorsed by several major organizations of colleges and universities.

Advocates say the point is not to measure how each college’s students perform after four years, which depends heavily on the caliber of students it enrolls in the first place, but to see how much they improve along the way. The concern is less about measuring knowledge of chemistry or literature than about harder-to-define skills like critical thinking and problem-solving.

That vision still faces major obstacles. Colleges that use standardized tests vary widely in what they test, and in how and when they test it. And many that administer those tests or national surveys keep the results to themselves.

“I’d love for all the data to be public,” said Jennifer Carney, director of program evaluation at the Jack Kent Cooke Foundation, which conducts education research. But, she added, that would inevitably lead to some colleges manipulating the figures in pursuit of a higher standing, just as some have done with existing ranking systems.

In the best-known college rankings, by U.S. News & World Report, up to 40 percent of a college’s score is based on its reputation among educators and its selectivity in admitting students. Other factors include several indirect indicators of what happens in classrooms, like student retention, graduation rates and class sizes, but no direct measures of learning.

Critics of standardized tests say they are too narrow and simplistic.

“I’m not sure any standardized test can effectively measure what students gain in problem-solving, or the ability to work collaboratively,” said Alice P. Gast, president of Lehigh University.

In 2008, the Consortium on Financing Higher Education, a group of some of the nation’s most prestigious colleges and universities — including all of the Ivy League — issued a lengthy manifesto saying that what its students learn becomes evident over decades and warning against a “focus on what is easily measured.”

Many of those same colleges participate in the National Survey of Student Engagement, which measures many factors that educators say are good, though indirect, indicators of learning, like how many hours students spend studying and how much they interact with professors. The survey began with a handful of colleges in 2000 and drew more than 700 last year. Each college can see its own results and those of a group of comparable colleges, but the scores are not made public.

The view from state-supported colleges has been shaped in part by pressure from policy makers to show, through standardized tests, what taxpayer dollars are accomplishing. Texas, a leader in the movement, has required its state colleges to administer tests since 2004, and it makes the outcomes public.

Testing advocates have also gained ammunition from books calling into question the quality of American colleges, notably “Academically Adrift,” by Richard Arum and Josipa Roksa, published last year. It was based on a study showing that more than one-third of students made no significant gain in critical thinking skills after four years of college.

But the concept of universal assessment got its biggest boost in 2006, from the findings of a commission appointed by Margaret Spellings, then the education secretary. The report said that learning “must be measured by institutions on a ‘value added’ basis that takes into account students’ academic baseline,” and that the results must be made available to everyone “to measure the relative effectiveness of different colleges and universities.”

That prompted talk that the federal government might mandate standardized testing, as it did for public schools with No Child Left Behind in 2001.

“That’s what gave this issue urgency,” said Christine M. Keller, executive director of the Voluntary System of Accountability, an alliance of more than 300 state colleges that was created in response to the Spellings Commission. “No one wanted the government imposing a standard.”

Her group has approved three competing tests and asks its member colleges to use one of them and post the scores on the group’s Web site.

They are the ETS Proficiency Profile, from Educational Testing Service; the Collegiate Assessment of Academic Proficiency, produced by ACT; and the Collegiate Learning Assessment (used in the “Academically Adrift” study), from the Council for Aid to Education, a research group.

“These instruments are not without controversy,” Ms. Keller said. “It’s still very much a work in progress.”

While more blanks are filled in each year, the “college portraits” on the group’s site remain spotty. Some member colleges have not posted test scores so far, while others posted scores two or three years ago and have not updated them. Typically, only a small sample of students takes a test, and the way samples are chosen varies from campus to campus, making comparisons harder.

Some major systems, like the University of California, do not participate in the organization, while others, like the State University of New York, are represented by only a few campuses.

Selective colleges, public and private, complain of a “ceiling effect” in standardized testing: their students enter with scores so close to the top of the scale that it is especially hard for them to show improvement.

“It does not, in my opinion, measure value added very well for our kind of institution,” said Neal E. Armstrong, a vice provost of the University of Texas at Austin. “Our freshmen come in with very high aptitude and critical thinking skills.”

Even so, the use of the three tests in the voluntary system has boomed, with each company claiming several hundred client colleges, most of which do not make their scores public. Test authors acknowledge that colleges often use them in less-than-optimal ways, like skipping some sections, testing only once every few years (often to satisfy an accreditation agency) and not measuring student growth over time.

If officials and experts in higher education can reach a broad consensus about how to measure learning, such measurement will become routine, and results will be made public “only if consumers demand it,” said Ms. Carney, of the Cooke Foundation.