Nature | News

Computer model predicts academic success

Algorithm based on publications finds that first-author articles in leading journals matter most.


Image caption: Female biologists are less likely to become principal investigators than are male biologists with comparable publication records, a statistical model found. (Credit: Norma Jean Gargasz/Getty Images)

The mantra 'publish or perish' is drilled into every early-career scientist — and for good reason, a computer model suggests. The most important predictor of success for a young biomedical scientist is the number of first-author papers published in journals with high impact factors early in a researcher's career, according to the formula.

The model, created by computer scientist Lucas Carey of Pompeu Fabra University in Barcelona and his collaborators, also found that, even after correcting for publication records, working at a highly ranked university and being male are predictors of academic success.

The results appear today in Current Biology¹, and the team has built a website, pipredictor.com, which tells early-career scientists whether their publication records are ahead of the norm.

Together with David van Dijk at the Weizmann Institute of Science in Rehovot, Israel, and Ohad Manor at the University of Washington in Seattle, Carey analysed the publication records of more than 25,000 biomedical researchers who first published between 1996 and 2000. Just 6.2% of these scientists went on to become a principal investigator (PI) within 13 years, where the team defined a PI as someone who has published at least three papers as the last-named author. Their algorithm used machine-learning techniques to determine which of more than 200 different metrics were the best predictors of success, and produced a formula that compares the odds of two individuals from the same cohort becoming a PI. Given such a pair, the formula correctly picks the more successful researcher 83% of the time.
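The pairwise use of such a formula can be illustrated with a toy logistic model. Everything below is invented for illustration: the feature names, weights and bias are hypothetical (the paper's actual formula combines more than 200 metrics), and only the general shape is taken from the description above, namely a score built from publication features that is compared between two researchers from the same cohort.

```python
import math

# Hypothetical weights, loosely echoing the reported findings: first-author
# papers and journal impact factor dominate, middle-author papers count
# for little. These numbers are illustrative, not the paper's formula.
WEIGHTS = {
    "first_author_papers": 0.9,
    "mean_journal_impact_factor": 0.6,
    "middle_author_papers": 0.05,
    "years_publishing": 0.1,
}
BIAS = -4.0  # shifts typical scores toward a low base rate of becoming a PI

def pi_probability(features):
    """Logistic model: P(PI) = 1 / (1 + exp(-(w.x + b)))."""
    score = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

def more_likely_pi(a, b):
    """Pairwise comparison, as the formula is used: which of two
    researchers from the same cohort has the higher predicted odds?"""
    return "A" if pi_probability(a) > pi_probability(b) else "B"

researcher_a = {"first_author_papers": 5, "mean_journal_impact_factor": 8.0,
                "middle_author_papers": 2, "years_publishing": 6}
researcher_b = {"first_author_papers": 2, "mean_journal_impact_factor": 3.0,
                "middle_author_papers": 10, "years_publishing": 6}
print(more_likely_pi(researcher_a, researcher_b))  # prints "A"
```

The 83% figure reported in the paper is the accuracy of exactly this kind of pairwise comparison over held-out pairs, not the accuracy of predicting any one individual's fate.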

Middle-author obscurity

According to the formula, the quantity of publications in a scientist’s first eight years, and the impact factors of the journals in which they appear, are highly influential factors. But it is the first-author publications that count most, adds Carey. “It’s really bad to be a middle author,” he says.

However, first authors who publish with a large number of co-authors start to lose their advantage, Carey explains. “Our data suggest you shouldn’t get trapped in big projects,” he adds. And other measures of quality — such as the citations a paper receives above the norm of the journal — fade beside the importance of the journal impact factor, even though many scientists castigate this metric as a flawed way of judging the quality of researchers.

Being male is also a positive predictor for becoming a PI, the results suggest. On average, having an identical publication record but being a woman lowers the chance of success by 7%, for example, says van Dijk. Those at leading universities also had much higher chances of becoming a PI — although when this effect was untangled from good publication records, university rank on its own was not particularly influential.

Some scientists defied the predictions, becoming PIs even though they never published in high-impact journals. But those authors had twice as many first-author papers as their competitors who didn't become PIs — suggesting that enough first-author papers can compensate, the team found.

“The paper shows that academic success has clear predictable features, and decomposes cleverly the main factors that contribute to it. In many ways, it really indicates the powerful role of appropriate metrics in gauging academic success,” comments Albert-László Barabási, a network theorist at Northeastern University in Boston, Massachusetts, who has published work on predicting academic success at the level of individual papers.  

“Most of the findings line up reasonably well with career advice that young researchers are given all the time,” says Zen Faulkes, a biologist at the University of Texas–Pan American in Edinburg.

But the method has its limits. The formula has been tested only on data about biomedical scientists in the 1996–2000 cohort, so the factors for success might be different today. And the model's apparently high predictive accuracy may be misleading, notes Peter van den Besselaar, a researcher who studies the management of science at the Free University of Amsterdam. It is relatively easy to distinguish the low-performers who will leave academia from the high-performers who are likely to become PIs, he says; the hard part is divining who will be successful among just the top 15% of scientists (of whom fewer than half will become PIs). And the model's definition of a PI does not necessarily line up with what people might think of as academic success, such as becoming head of a lab or gaining tenure.

Still, the analysis is useful, Faulkes adds, because it documents the disadvantage of female researchers, and shows that the journal impact factor still holds a lot of power.

Carey says that the team is now working on computer models that try to predict who might get tenure, and who will be awarded a grant. “From my experience, what university hiring committees care about is mainly how many papers you have and whether you have funding — and all the funding agencies care about is your publication record,” he says.

Journal name: Nature
DOI: 10.1038/nature.2014.15337

References

  1. Van Dijk, D., Manor, O. & Carey, L. B. Curr. Biol. http://dx.doi.org/10.1016/j.cub.2014.04.039 (2014).



Comments

4 comments

  1. Sui Huang
    A very interesting and heroic study – but the results should not be misconstrued as having identified the metric of “true” academic or scholarly success. It merely, as many commenters have alluded to, exposes the corruption of the system: the self-referential nature of academic success. The most telling and disappointing finding is this: “And other measures of quality — such as the citations a paper receives above the norm of the journal — fade beside the importance of the journal impact factor, even though many scientists castigate this metric as a flawed way of judging the quality of researchers.” If “becoming a PI” as a surrogate metric of academic achievement depends on “publication in high-impact journals”, and if academic recruiters who control the former and editors who control the latter both share the same set of metrics, then we have a self-referential, self-fulfilling prophecy. No wonder prediction is good! http://www.ncbi.nlm.nih.gov/pubmed/23386501 I still would like to believe that citation number is closer to the “true” historical impact, and hence to actual scholarly achievement: clearly, the same number of citations is worth more for a paper published in a small journal than for a paper published in a high-IF journal. Why does “truly” good research need to piggyback on high-IF journals to have impact? Good papers speak for themselves! Come on, fellow scientists, let’s set the bar higher! From now on, I will “normalize” the citation numbers of papers by faculty candidates by the IF of the journals in which they are published. Perhaps this would be a better measure of “true” impact. Of course that would devalue many glamorous journals, and would be politically impossible to propose loudly.
  2. Stephen Curry
    Given the well-known obsession with journal impact factors and the timescale of the study, the results are hardly surprising. A system that depends over-much on impact factors rewards those who play that particular game. The much more important question is how to ensure that the PIs one hires are not just productive, but also good collaborators, good mentors and good teachers. In this regard, the final paragraph of van Dijk et al.'s paper is rather telling: "Our results suggest that currently, journal impact factor and academic pedigree are rewarded over the quality of publications, which may dis-incentivize rapid communication of findings, collaboration and interdisciplinary science." This shows how the over-zealous application of narrow measures of 'quality' can be damaging to the research enterprise, and is a message that I hope will not be lost in the discussion of this paper. It is rather unfortunate that the authors have built their 'PIPredictor' web-tool, since I fear this will only encourage the malpractice of judging science by the journals where it is published.
  3. Harsha Radhakrishnan
    I don't think this web-tool is going to change a practice that's been going on for a while now.
  4. David Colquhoun
    I'm afraid that these observations merely serve to show the corruption of science that has happened as a consequence of the publish-or-perish culture and reliance on impact factors. It shows that people are too lazy to read the papers of job applicants, but instead rely on surrogate measures. One manifestation of this is the emphasis on grant income. At Queen Mary College London and at King's College London people are being fired if they don't generate enough grant income, regardless of the fact that different sorts of research need very different amounts of money. Less than £200k/year and you're out. Since most grant expenditure consists of salaries in many areas of work, people are forced to employ as many people as possible, whether or not they are needed. As a result, very few get permanent jobs. They are employed for the benefit of the university (and the PI), not for the benefit of science or, perish the thought, for the benefit of the employee. This has led to the current 'inverted pyramid' structure that harms both science and people. PIs have no time to do the science they were employed to do, but spend their time writing grant applications, lobbying glamour journals and travelling the world to hawk their wares. It would certainly help if glamour journals were competed out of existence. There is now a real chance that this might happen. The advent of the web and of the open-access movement is already giving rise to alternative methods of publication in which it is cheap or even free to publish, and which are free to read. They also have comments sections which allow post-publication peer review. Given that peer review simply doesn't work to ensure quality, especially at the lower end, that has to be a good development. It remains to be seen how long Elsevier and NPG can withstand this onslaught. It's sad that learned-society journals may also go, but it's probably inevitable.
One thing is sure: science needs to be liberated from the perverse incentives imposed by greedy university bureaucrats and greedy publishers. There are reasons to be optimistic that this could happen.
