
Scientists Offer New Formula to Predict Career Success

First there was the “impact factor.” Then came the “h-index.” Now, for those who believe that scientific prowess can be measured by statistical metrics, comes the Acuna-Allesina-Kording formula.

The formula, outlined on Wednesday in the journal Nature, is intended to improve upon the h-index—a tally of a researcher’s publications and citations—by adding a few more numerical measures of a scientist’s publishing history to allow for predictions of future success.

The idea, said the paper’s senior author, Konrad P. Kording, an associate professor of physical medicine and rehabilitation at Northwestern University, is to help universities and grant-making agencies “fund someone who will have high impact in the future.”

Kording readily admitted his method—tweaking the h-index by adding numbers such as years of publication and number of distinct journals—cannot be perfect and shouldn’t be a substitute for thoughtful human analysis of a researcher’s past writings and future goals. But even careful subjective reviews have their limits, especially in the real world of deadline pressures and global competition, Kording said.

“Both ways of evaluating people,” he said, “have advantages and disadvantages.”

Despite that caution, the idea put forth by Kording, along with Daniel E. Acuna, a postdoctoral researcher in his lab, and Stefano Allesina, an assistant professor of ecology and evolution at the University of Chicago, is already generating some of the same divisions that have surrounded the impact factor and the h-index.

“It’s really disturbing,” said Robert H. Austin, a professor of physics at Princeton University, referring to the trend in which he now sees hiring panels requiring candidates to state their h-index, and applicants sometimes offering it at the top of their résumés.

The focus on the h-index, said John M. Drake, an associate professor of ecology at the University of Georgia, is leading researchers to choose popular and established topics that are likely to win citations from other authors. That, he said, “would seem to be the opposite of creativity.”

The forerunner of the h-index, the impact factor, was devised in the 1950s by Eugene Garfield, a librarian pursuing a doctorate in structural linguistics. His simple formula, which divides the citations a journal’s articles from the two previous years receive in a given year by the number of articles the journal published in those two years, sparked a revolution in establishing the reputations of the journals and the scientists who write for them.
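
For readers who want the arithmetic spelled out, a minimal sketch of that two-year calculation follows; the function name and the figures are illustrative, not drawn from any actual journal.

```python
# Sketch of a journal's two-year impact factor for a given year.
# The numbers below are invented for illustration only.

def impact_factor(citations_this_year, articles_prev_two_years):
    """Citations received this year by articles from the two previous
    years, divided by the number of articles published in those years."""
    return citations_this_year / articles_prev_two_years

# A journal that published 200 articles over the two previous years and
# whose articles drew 500 citations this year scores 2.5.
print(impact_factor(500, 200))  # 2.5
```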

It also spawned resentment, with scientists complaining of the biases it creates and the gamesmanship it encourages. Among the more blatant cases of alleged abuse, journals eager to raise their impact factor have suggested that prospective writers cite articles the journals themselves had previously published.

Then came the h-index, created in 2005 by Jorge E. Hirsch, a professor of physics at the University of California at San Diego. It’s also a simple measure, but aimed at ranking researchers rather than journals: a scientist’s h-index is the largest number h such that h of his or her papers have each been cited at least h times.
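
In code, that definition amounts to sorting a researcher’s papers by citation count and finding the point where rank overtakes citations; a minimal sketch, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4:
# four papers have at least 4 citations each, but there are not five
# papers with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```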

While it quickly gained popularity, the h-index suffered from the sense that it mostly confirmed past success. That’s because the scientists with the largest h-indexes tended to be renowned researchers late in their careers.

Hirsch recognized the problem and wrote a follow-up analysis in 2007 in which he tested whether several variants of the index could better predict a scientist’s future career prospects. His conclusion, published in PNAS, the Proceedings of the National Academy of Sciences, was that none did.

Kording and his team are now suggesting otherwise. Their proposal adds to the h-index by including a scientist’s total number of articles, the number of years since the first one was published, the number of distinct journals in which those articles have appeared, and the number of “top” journals among them.
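
The published predictor combines those quantities in a weighted sum; the sketch below shows only the general shape of such a formula, with placeholder weights rather than the coefficients Acuna, Allesina, and Kording actually report in Nature.

```python
# Shape of a career predictor built on top of the h-index: a weighted
# combination of simple counts from a publishing record. The weights and
# the example record are invented placeholders, not the values published
# in the Nature paper.

def predicted_future_h(h, n_articles, years_since_first,
                       n_journals, n_top_journals,
                       weights=(1.0, 0.05, -0.05, 0.05, 0.1),
                       intercept=1.0):
    w_h, w_a, w_y, w_j, w_t = weights
    return (intercept
            + w_h * h
            + w_a * n_articles
            + w_y * years_since_first
            + w_j * n_journals
            + w_t * n_top_journals)

# Hypothetical early-career researcher: h-index 8, 20 articles, 6 years
# since the first paper, 10 distinct journals, 2 of them "top" journals.
print(predicted_future_h(8, 20, 6, 10, 2))
```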

The new formula, Kording said, has proved more than twice as accurate as the h-index for predicting the future success of researchers in the life sciences. He was driven to pursue the formula, he said, out of a basic curiosity about science and how it works.

“I’m a scientist,” he said, “so as a scientist I can’t avoid basically asking myself about the future, about my career, about the career of friends, about the careers of people that I know, and which factors drive good science.” He also acknowledged the “basic angst” among scientists wondering whether their work will ultimately prove useful to society or worthless.

Hirsch is less impressed. After reviewing the Acuna-Allesina-Kording paper, he said the factors added to his h-index appeared to have little meaningful effect. He suggested the additional factors had been devised by “optimizing the coefficients” for a particular set of authors covered by the paper. He said the predictive powers would not hold up for a wider set of test cases.

“I would expect that dropping the h-index from the formula would have a major effect, and dropping any of the other criteria a minor effect,” Hirsch told The Chronicle.

Another critic of the overall use of such statistical measurements, Anurag A. Agrawal, a professor of ecology and evolutionary biology at Cornell University, said he’s gradually realizing that metrics such as impact factor and h-index will soon grow obsolete.

Eventually, Agrawal said, technology will allow better direct measurements of the value of each piece of published research. It won’t be so important to estimate a researcher’s value in five or 10 years, when data can show almost immediately how many people have actually downloaded and used a published journal article, he said.

Kording also believes the future will improve the degree to which statistical measures complement subjective human evaluations. He acknowledged that impact factor and h-index can be “gamed” and can create harmful and perverse incentives. But he said similar problems faced Google, and the reason the company’s search engine has survived and prevailed is that its engineers are constantly adjusting it when they learn how people seek to manipulate its results.

“Basically, if you have strong statisticians that build metrics,” he said, “these guys will always be ahead of those people that game it.”

As for the effects of such improved metrics, Kording anticipates a world in which universities and grant-making agencies work more efficiently, to the betterment of science.

Because better metrics may help all sides recognize young talent, “it might make the rich universities richer and the poor universities poorer,” he said. And rather than punishing creative adventurers who dare to tread into areas not yet recognized by other scientists, better systems of talent evaluation might mean that a person with the talent to lead an intellectual revolution actually gets the money needed to do it, he said.

“In a system where we have better metrics,” Kording said, “the people who could make most out of the money are more likely to get it.”
