Eyes on the prize are blind to reality

Scientists’ quest for publication in journals with high impact factors is widely perceived as one of the more refractory barriers to the fuller adoption of open access, which I believe to be in the best interests of science.

But the barrier problem is complicated. Some of its dimensions were teased out in a debate on ‘Open Science and the Future of Publishing’ held at Oxford at the end of February this year that involved publishers, funders and proponents of open access. The video of the debate is well worth watching (especially the first 45 min – though there’s a nice summary over at F1000). I was particularly struck by the comments made by Alison Mitchell of the Nature Publishing Group (NPG) (from 10:50 – 19:12) which started a train of thought about their gold-standard publication, Nature. See if you think I’m going off the rails.

The problem

Publication in Nature is a highly coveted prize. Many labs will literally pop champagne corks if their paper is accepted by one of the most prestigious scientific journals in the world. I shared in this wonderful feeling once, though it was back in 1993 and the memory, alas, is fading.

Nature’s prestige is hard-earned. The journal’s editorial staff and volunteer peer reviewers sift through thousands of submissions and select fewer than 10% for the privilege of occupying a few pages in the weekly publication. This rigorous filtration, argues the publisher, ensures that the journal serves up a quality product to the scientific community.

The success of the journal — and its authors — is one reason cited for their resistance to open access modes of publishing. Although NPG offers several open access options through the group's various titles and permits deposition of author-formatted manuscripts in PubMed Central 6 months after publication in Nature, it does not offer Gold open access options for the vast majority of its Nature-branded journals (the notable exceptions being Nature Communications and Scientific Reports*).

Alison Mitchell defends this stance by arguing that Nature’s reader/author ratio is so high — considerably higher than ‘normal’ scientific journals — that it does not make sense to charge the journal running costs to authors, which they estimate would work out at up to £30,000 per paper if a switch to full open access were to be made.

This is a reasoned position, but I would still like to pick it apart because some of it doesn’t make sense to me.

For a start, part of that £30k fee would presumably be needed to cover the costs of the very good front matter that appears in Nature and occupies about a third of the journal. The front matter includes news, commentary, feature articles (analysing scientific trends and matters arising in science policy and education) and summaries or highlights of the papers appearing in that week's issue. These items are mostly written by Nature staffers or commissioned from academics. There is no case for charging the costs of writing them to the authors of Nature's scientific papers, although I would be loath to see the front matter disappear from the journal. I suspect these are the pages that most people read. Let's face it: although Nature is a general science journal, few these days have the learning to be able to profit from all its articles. The spread is too great and the divisions between specialisms, unfortunately, are too deep (a point I will return to later).

Part of Nature's predicted high open access charges also reflects the very high rejection rate — above 90% — which means that the journal processes many more articles than are eventually published. Nature relies on skilled editorial staff — at PhD level or above — and the selectivity imposed by them and their reviewers to ensure quality and maintain the prestige of the Nature brand. The careful sifting is reflected in its impact factor, which, at 36.101, is one of the highest in the business.
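For anyone who wants to see the arithmetic behind that headline figure, here is a minimal sketch of the standard two-year impact factor calculation. The numbers used are purely illustrative, not Nature's actual totals.

```python
# Minimal sketch of the standard two-year impact factor calculation.
# The figures used below are purely illustrative, not Nature's actual totals.

def two_year_impact_factor(citations_received: int, citable_items: int) -> float:
    """Citations received this year to items published in the previous two
    years, divided by the number of citable items from those two years."""
    return citations_received / citable_items

# e.g. roughly 1,700 citable items over two years attracting about 61,000
# citations would give an impact factor in the mid-thirties.
print(round(two_year_impact_factor(61_000, 1_700), 3))  # 35.882
```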

This latter point bears closer inspection, particularly if one has the bigger picture of science in mind.

First, most, if not all, of the papers rejected by Nature will eventually be published elsewhere, though only after the delay caused by cycles of rejection and resubmission as authors chasing impact factors work their way down the journal rankings. The chase retards the dissemination of scientific information — and can be exhausting and demoralising for authors.

Second, despite all the careful sifting, Nature's system is incapable of picking winners reliably, a problem that was highlighted in the past week in BBC4's Beautiful Minds documentary on Andre Geim, who shared the 2010 Physics Nobel prize with Konstantin Novoselov for the discovery of graphene (catch it if you can — wonderful). As revealed in the programme, Nature rejected Geim's and Novoselov's initial paper** on graphene — twice.

Nature’s failure in this case is not the particular fault of anyone at the journal; it simply represents the intrinsic difficulty of forecasting from a slew of submissions which ones will go on to spark the greatest interest in the scientific community. The problem is not simply anecdotal; it is widespread. Nature’s impact factor is dominated by a minority of the papers that it publishes, as the journal itself has acknowledged. A 2005 editorial revealed that fully 89% of the citations to work published in the journal in 2004 derived from just 25% of papers; at the other end of the citations distribution, over half the Nature papers from that year had fewer than 20 citations.

It is genuinely difficult to pick winners: the skewed distribution is in fact typical of most journals, whatever their ranking, as shown by Per Seglen in a fascinating analysis performed back in 1992. Despite its rigorous selection procedures, Nature appears statistically no better at determining the relative quality of its submissions than other journals; it wins at the impact factor game because the brand ensures that the average quality of the submissions is higher.
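To illustrate the point, here is a minimal sketch that assumes a log-normal citation distribution (the shape and its parameters are illustrative assumptions, not fitted to Nature's actual citation data). It shows how the mean, which is what the impact factor tracks, is propped up by a small fraction of highly cited papers while the median sits far below it.

```python
# Minimal sketch of why a skewed citation distribution makes the journal
# mean (what the impact factor tracks) unrepresentative of a typical paper.
# The log-normal shape and its parameters are illustrative assumptions,
# not fitted to Nature's actual citation data.
import numpy as np

rng = np.random.default_rng(seed=0)
citations = rng.lognormal(mean=2.5, sigma=1.2, size=1000)  # 1000 notional papers

mean_cites = citations.mean()
median_cites = np.median(citations)
top_quarter = np.sort(citations)[-len(citations) // 4:]
share_from_top_quarter = top_quarter.sum() / citations.sum()

print(f"mean citations:   {mean_cites:.1f}")
print(f"median citations: {median_cites:.1f}")
print(f"share of citations earned by the top 25% of papers: {share_from_top_quarter:.0%}")
```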

What this means — and this has long been recognised — is that the journal impact factor is not a reliable indicator of the quality or influence of a particular paper. Nature knows this, and publicly bemoans the misuse of journal impact factors in the assessment of individuals. And yet it cannot help itself from trumpeting its success in a full page advert whenever the latest impact factor calculation is published, or from dangling the statistic in the faces of prospective authors.

But the journal is not particularly to blame for this. Their stance is all of a piece with a scientific culture that has grown to over-value journal rankings. Everyone in the business knows what publication in Nature means. It is an accolade that we seek out because the system — largely devised and run by scientists — recognises and rewards winners of this prize with funding and promotion. We might wish that it were otherwise but wishing only works in fairy tales.

Playing the game makes fools of us all. We chase prizes that our critical faculties and our mathematical analyses have long demonstrated to be awarded prematurely and inaccurately. Worse still, running after these prizes slows us down.

Surely we can do better?

A proposal

We certainly still need to weigh and judge the scientific output of our peers. This is necessary to determine the distribution of funds and preferment. But rather than relying on the inaccurate shorthand of impact factors, we need to reserve judgement until after publication. I am not suggesting that we abandon pre-publication peer review, which I think serves a useful function in filtering and improving the published literature. But it would be better to fast-track the publication of papers and then to arrive at a more considered judgement of their quality by assessing how well they had been put to use — downloaded, cited, commented on, criticised or lauded — by the scientific community. Adoption of full open access would facilitate this by allowing the whole community to be involved. There is a burgeoning industry of post-publication assessment that can better harness the wisdom of the community and which, done well, could provide more accurate judgements than a handful of peer reviewers.
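To make the idea concrete, here is a minimal sketch of how article-level signals might be rolled up into a single post-publication score. The signal names, the log-scaling and the weights are hypothetical choices for illustration, not a worked-out scheme.

```python
# Minimal sketch of combining article-level signals into one
# post-publication score. The signal names, log-scaling and weights are
# hypothetical choices for illustration, not a worked-out scheme.
import math
from dataclasses import dataclass

@dataclass
class ArticleMetrics:
    downloads: int
    citations: int
    comments: int

def post_publication_score(m: ArticleMetrics,
                           w_downloads: float = 0.2,
                           w_citations: float = 0.6,
                           w_comments: float = 0.2) -> float:
    """Weighted sum of log-scaled signals, so that one very large count
    cannot dominate the score outright. The weights would need community
    agreement, and the scheme would need safeguards against gaming."""
    return (w_downloads * math.log1p(m.downloads)
            + w_citations * math.log1p(m.citations)
            + w_comments * math.log1p(m.comments))

# A paper downloaded 3,200 times, cited 15 times and commented on 4 times:
print(round(post_publication_score(ArticleMetrics(3200, 15, 4)), 2))
```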

We could even buttress this system by a more formal and more extensive procedure of prize-giving — prizes are important to us — which would do a better job of recognising achievement than publication in a high-ranking journal. Such prizes could replace the incentivising function of the glamour publications in stimulating the productivity of research groups. Again, if they were judged by the wider community, by the people who actually make use of the published literature, such awards would have the merit of being a fairer and more thorough assessment of scientific work.

This is a radical proposal but it has many advantages. It is more equitable; it clears the way for open access; it speeds up the process of peer-review and publication; it accelerates science.

The proposal may threaten the business model of Nature and break with the journal’s long and venerable tradition of publishing ground-breaking work. But I think there is still a place for truly multi-disciplinary journals in this new landscape. As I mentioned already, no-one can properly read and profit from the breadth of papers that Nature currently publishes; the constraints of the abbreviated format of Nature papers and the deep specialisation that characterises modern science make this impossible. That is unfortunate for science since new discoveries are often made at the intersections between fields: Geim exemplifies this with his field-hopping success.

I would like to suggest, therefore, that Nature re-models itself as a platform for scientists to write a separate version of their ground-breaking research (published in long form elsewhere) that is intelligible to scientists outside their field. In doing so they would need to include the broad context of the work and to unshackle the text from excessive jargon. I can see the grimaces already but this would be a good exercise for authors — obliged to think large — and provide a valuable stimulus for interdisciplinary research within the scientific community. One might take this even further and include at least a lay summary that could be appreciated by a non-scientist readership. Nature might then rediscover its original mission.

I don’t for a moment suppose that this proposal will be taken seriously at NPG. But I thought it was worth thinking about.


*Thanks to Graham Steel for pointing out my omission of these titles in the original post.

**Geim's and Novoselov's paper was eventually accepted by Science. It currently has over 8000 citations according to Google Scholar.

My thanks to the various people who sent me a copy of Seglen’s paper which I could not access from my institution.


56 Responses to Eyes on the prize are blind to reality

  1. Graham Steel says:

    Re. “….Nature, it does not offer Gold open access options for any of its Nature branded journals”.

    What about Nature Communications:- http://www.nature.com/ncomms/index.html

  2. David Stephens says:

    Good post as always Stephen.

    Initial thoughts… isn't what you propose essentially done through citations, which of course have their own problems as a metric? Plenty of people read papers but won't comment on them on journal websites, me included. I certainly can't find the time to comment in any useful way on 1/10th of the papers I read. Doesn't mean the others are not valuable or interesting, just that I wasn't able, willing, or ready to comment on/about them at the time.

    The publication of a short form with a longer form elsewhere is also something that I believe PNAS now does… albeit with publication of both forms in PNAS.
    There are also many other journals that operate the kind of model you describe, including (but not restricted to) some new ones such as Cellular Logistics and Autophagy.

    Personally I hate these pieces. I can't see the point and have always declined to write them. I would also expect them to dilute one's own citations – splitting them between the original paper and the "mini review".

    I'd also suggest that more prizes = dilution of their value… better that the prize givers place less emphasis on the journals in which the recipient publishes…

    • Stephen says:

      Well, it’s certainly true that none of these proposals has been worked out in great detail. This is more of a form of thinking out loud.

      However, I see a more formal system of prizes as a way of drawing attention to work that is genuinely valued by the community. I don't trust simply counting citations as the measure of value but am looking to additional measures (I think comments will come eventually though they have not worked in the past) that would have to include assessment from people who know the field and have read the paper.

      As for whether more prizes would mean dilution I’d say not. At the moment, as I argue in the post, publication in a high IF journal is already considered a prize but the award mechanism is flawed — because it relies on too few people.

      I've not seen the long/short form combinations that you mention — got any links? There are problems here since I can see reluctance on the part of authors. But if Nature retained its prestige that might be a way for ambitious scientists to really show their mettle — by demonstrating how their work was important not only in their own field, but potentially in others.

      • David Stephens says:

        Thanks Stephen,

        Do find myself agreeing with you re cites, I just don't know a simple solution. I'd just speculate that social media is not it (given the relatively poor take-up in academia).

        One example of dual commenting….
        http://www.pnas.org/content/108/31/12746.abstract
        And their own mini review in
        http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3265929/

        Or this:

        ERK regulates Golgi and centrosome orientation towards the leading edge through GRASP65.
        Bisel B, Wang Y, Wei JH, Xiang Y, Tang D, Miron-Mendoza M, Yoshimura S, Nakamura N, Seemann J.
        J Cell Biol. 2008 Sep 8;182(5):837-43.
        PMID: 18762583

        And

        Remodeling of the Golgi structure by ERK signaling.
        Wei JH, Seemann J.
        Commun Integr Biol. 2009;2(1):35-6.
        PMID: 19704864 (free PMC article)

        There seems to be a raft of Landes journals, Communications in Integrative Biology included, widely pushing this now. I need to be persuaded on the value…

        • Stephen says:

          These are interesting developments. I agree many might see it as a tiresome overhead when they might rather be getting on with the next piece of research. But it does add real value to the scientific literature. That's why I think there would be interest in having such digests/papers in a single journal and that, for the most exciting stuff, Nature would be the place to do it. To give them their due, NPG has launched a whole series of review journals in recent years which might be considered to be part of this trend but I wonder how inter-disciplinary they are considered to be.

  3. Alejandro says:

    What is really needed is a prize after having done a great job.

  4. Graham Steel says:

    And of course more recently, Scientific Reports:- http://www.nature.com/srep/faqs/srep-faqs.html

    Other than those two, you are correct.

    “Are other Nature-branded journals going to introduce an open access option?

    No — there are no plans to introduce an open access option on any other established Nature-branded title. In these cases, self-archiving provides an alternative solution, and NPG has a progressive self-archiving policy. NPG's services and policies ensure that authors can fully comply with the public access requirements of major funding bodies worldwide — for more information visit http://www.sherpa.ac.uk/romeo/"

    • Stephen says:

      Thanks for the correction Graham – I’d overlooked those, even though Nature Communications was mentioned in Alison Mitchell’s comments. I’ll amend the post. It’s certainly true that NPG is offering some OA options (as I had said).

  5. Pingback: Ninth Level Ireland » Blog Archive » Eyes on the prize are blind to reality

  6. Ross Mounce says:

    “I would like suggest therefore that Nature re-models itself as a platform for scientists to write a separate version of their ground-breaking research (published in long form elsewhere) that is intelligible to scientists outside their field”

    Isn’t this what Nature already does? (I’m arguing this point semi-seriously btw)

    Take a look at most palaeontological articles in Nature – the print version of the article can only run to such a brief length (say, 3 pages max.) that there is barely anything in there; I believe this is commonly bemoaned within the palaeo community. The actual paper is often tucked away in upwards of 80 pages of electronic supplementary materials, which may or may not have been as thoroughly peer-reviewed.

    I suggest your proposal has already happened, just rather unannounced.

    • Stephen says:

      I don’t agree. The advent of Supplementary Material (a boon or a curse, depending on your point of view) hasn’t resulted in a major shift in the way that the printed article is written. My proposal was aimed specifically to encourage authors to write with inter-disciplinarity in mind so that any scientist picking up Nature would be able to access (intellectually) most, if not all the articles.

      To a degree, Nature does a pretty good job of this already via the News and Views commentaries (often written by paper reviewers I believe) which give a useful summary and contextualisation that is not in the paper. But the N&V pieces only cover a minority of the papers published in any one week. I guess I would be arguing for authors to do this themselves within their own Nature paper.

  7. Rich says:

    It's definitely valuable to think about how best to harness post-publication peer review, and free ourselves from under the yoke of journal impact factors!

    I wonder if we could implement some kind of voting system, like Reddit (with up and down votes) or Google +1, to identify papers worth awarding a prize or recognition. Obviously, any system would need careful consideration – for instance, I think it would be necessary to ensure one vote per person, and voting wouldn't be anonymous (to prevent gaming the system).

    • Stephen says:

      Yes – but one would have to think very carefully about strategies to prevent people gaming the system. This is an underlying concern for anyone thinking about post-publication assessment. I’m not sufficiently web-savvy to come up with particular answers to that one. But Google appears to do a decent job of preventing web-sites from gaming their search results (though they’ve recently come under fire for breaking their own system by populating search results with hits from G+!).

  8. Jan Jensen says:

    “There is a burgeoning industry of post-publication assessment that can better harness the wisdom of the community and which, done well, could provide more accurate judgements than a handful of peer reviewers.”

    One example is compchemhighlights.org which is an overlay journal in the area of theoretical chemistry. It uses blogger.com and was set up in a few hours (gathering the editorial board is another matter), so anyone can in principle do this for other areas.

    • Stephen says:

      Really interesting – basically you have recruited scientists to blog about the latest papers in their field?

      • Jan Jensen says:

        Right, except that I am very careful not to call it a blog. It's an overlay journal with editors, so the incentive to contribute to CCH is the prestige associated with being an editor.

        • Stephen says:

          I fully understand your reluctance to use the dreaded b-word which still has a low currency value with many in the academic community (more’s the pity). ;-)

    • Mike says:

      This is a quote I wanted to highlight as well. I'm not yet convinced that shifting the system from ≥2 (and it is generally >2) pre-publication reviewers to n = ? post-publication reviewers adds clarity or ease of access to the "important" science we should all be reading.

      The advent of PLoS One (in general, a good thing as it has forced others to rethink financial publication strategies) and similar innovations also requires excellent, user friendly filtering tools with high (universal?) uptake, to deal with accessing the enormous increase in published literature. This hasn’t yet happened, in my opinion.

      Pre-publication peer review serves as an important filter (which I know you agree with, Stephen). Post-publication review won’t get round all the existing problems and may introduce others, e.g., cronyism (which may occur with pre-pub review anyway) or generating even more (soft) literature to work through. Two different bloggers may take very different views of the same article.

      Don’t get me wrong, I think it’s great that you’re coming up with and highlighting alternatives here, getting the ball rolling. My main concern is legitimate filtering of the massive literature content, which I don’t think we’ve cracked yet. Sorry I’ve not come up with any constructive ideas here!

  9. Rich says:

    The overlay journal idea is great. Maybe I should start a subreddit for schizophrenia research…

  10. Joerg Heber says:

    I'd just like to add an example to the proposal you are making where authors should write summaries of their papers. I know of at least one example that appeared in Science back in 2007:

    http://dx.doi.org/10.1126/science.1149338

    You need to either look at the pdf or go to the “Author’s Summary”.

    Though I am not sure what happened to this scheme. But such considerations are certainly not new…

    • Stephen says:

      Thanks Joerg – didn't know about that example. Seems to have been discontinued. But although it addresses the part about making the science more accessible, that scheme does not tackle the central problem of breaking the grip of the IF (which, to be clear, I do not expect publishers to solve by themselves).

      • Joerg Heber says:

        Oh, on the IF, I think this will solve itself once there is a commonly accepted article metrics-based scheme – which may or may not be implemented by a publisher or by some other party (as the IF is).

  11. In my view possibly the most powerful way for scientists to change the system would be through boycott – not reviewing for the fashion journals, and not publishing in them – in short, rediscovering the value of your own field’s high-quality open-access specialist journals and supporting them. If the majority of people did it, then the obvious corollary would be that people would no longer be judged on their number of Nature/Science/et al papers they had, because no one of quality would be racking them up any more. Boycotts were tried before with limited success (I’m remembering the early days of PLoS). To get everyone on side, globally, would take a hell of a lot of activism.

    Not that I'm averse to activism…

    • Stephen says:

      Ah yes, I remember…

      Boycotts certainly add to the mix. The current Elsevier boycott has been tremendously powerful in raising awareness around this issue. But given the sheer scale of the scientific (nay, academic) community, I can’t see them being effective on their own. Which is why I wanted to explore the scope for re-organising our incentives. The Wellcome Trust has promised to get tough on its funded scientists who fail to publish via OA and I suspect that the RCs will fall into line. But repeatedly in conversation with my colleagues I come up against the “Ah, but…” line in reference to impact factors. There is a degree of enslavement to IFs that is going to be very difficult to shrug off.

  12. What might be helpful is an international conference on ‘how to turn the oil tanker’, with anyone who’s anyone in attendance. Really high-profile.

  13. Pingback: More on publishing and impact factor. « Åse Fixes Science

  14. Iain says:

    Worth watching the progress of F1000 Research – a concerted effort by Vitek Tracz, the founder of Biomed Central, to create a new OA journal that makes a success of post-pub peer review.

  15. Pingback: Harvard: we have a problem | Reciprocal Space

  16. Bob O'H says:

    But rather than relying on the inaccurate shorthand of impact factors, we need to reserve judgement until after publication.

    Err, impact factors are post-publication judgements. Of course, the choice of a journal is pre-publication, and journals have reputations, but I’m not sure the reputation is primarily determined by impact factor.

    I’m not convinced by post-publication assessment, largely because it’s unregulated, so we don’t know why a paper is popular: good papers can be overlooked and poor papers become popular depending on whether they are seen by the “right” people (i.e. people like PZed Myers). If you award prizes by a popular vote, I don’t see how you’re going to avoid this. If you do it with a panel, how is that different from pre-publication peer review?

    • Stephen says:

      I disagree, Bob. IFs may be assessed retrospectively but they are *not* post-publication assessments of a particular piece of submitted work. The authors are, in effect, getting the benefit of the previous work of other scientists.

      The argument here is that we need mechanisms of assessment that look at how good or important or influential a piece of work is. I do agree with you that there are technical problems in devising a system that is accurate but I do see potential in being able to gather article-level metrics. Of course one would have to try to prevent gaming of the system, and to avoid simplistic dependencies on metrics or stats (or popularity contests). The involvement of a panel (e.g. to determine prize-giving) would be better than pre-publication peer-review because it could involve more people and it would have more information at its disposal.

  17. Bob O'H says:

    Stephen – what I meant about IFs being post-publication is that they are calculated from post-publication statistics. I think it's a mistake to confuse a statistic with how it is used: logically any statistic that's used to measure how good a scientist is now will be pre-publication too, because it's also going to borrow from the past, in the same way.

    On prize panels, you’re asking for senior scientists to do even more. AFAIKS you either have a small number of prizes, which becomes horribly elitist and is asking to be gamed (you know what competition is like with the REF), or you give out so many prizes that it becomes impossible to administer. Can you really find a happy medium between these two?

  18. Bob O'H says:

    Oh, and an additional thought (which raises a problem with all post-publication metrics). Because they are post-publication they will take time to accumulate. For junior scientists this will pressurise them to publish earlier: they can't afford to wait to get a publication out because then they'll be too late to get a prize that will mean they can get a post-doc. So, whilst the pressure to get good publications is the same, there is now an additional time pressure to get publications out early.

    • Stephen says:

      Hi Bob – good, challenging comments as ever! Maybe I expressed myself badly but I don't think I'm confused. What I'm arguing is that the assignment of an IF (a post-pub statistic) for the journal to a particular paper on publication — by dint of acceptance of the manuscript — is a premature 'prize' for that particular piece of work. On average, a paper accepted in Nature is likely to be better than one accepted in a lower-ranking journal, say JBC. But the point is that such averages are no guide to the particular merit of any given paper. The problem arises because such prizes are now taken too seriously. I think we probably both agree on that.

      As for a solution, a way to break the hold of IFs over careers and funding decisions, well I don’t pretend to have all or indeed any of the answers. The suggestion of prizes was to draw attention to the fact that publication in Nature is already a kind of prize. I’m not sure how a prize system would work out in practice but it’s definitely worth considering. There are concerns about post-publication metrics but this is a fast moving area and Joerg above (who is a senior editor at Nature Materials) doesn’t regard them with a completely jaundiced eye. Neither do I.

      I would envisage any prize system as offering a large number of prizes — after all, Nature awards several hundred per year. Yes, it is an overhead, though breaking the grip of impact factors should reduce the number of rejections and resubmissions in the system and so save some work. Perhaps the editorial boards of each journal could review their published output after 12 months and make an assessment of the top 100 or something like that — perhaps informed also by metrics and reader comments.

  19. Stephen says:

    And this article raises another potential problem with IFs: How journal rankings can suppress interdisciplinary research. The analysis has been done in the field of Business and Management; I honestly can't say how it might transfer to the sciences but I had touched on the issue of fostering interdisciplinarity in my post.

  20. Pingback: Open access, peer review, grants and other academic conundrums

  21. Frank says:

    I have just discovered the Frontiers series of journals and they seem to have something a little similar to what you propose.

    The Frontiers Evaluation System uses analytic tools to “automatically track down every article’s views and downloads: during 3 months, the Frontiers platform analyzes the reading activity on an article based on the inputs of the entire Frontiers Community”. It also “provides the basis for the distillation of published articles in what is known as the Frontiers Tiering System”.

    Under this, "the top 10% articles in a tier are democratically selected for review as prestigious higher tier articles. The authors of the selected articles are therefore invited to revise their research article in a review style focused on the original discovery and with the support of the Frontiers peer review. Focused Reviews and Frontiers Commentaries aim at the broader audience of a field community".

    It is a bit Byzantine, but an interesting model.

    http://www.frontiersin.org/about/journalseries

    • Stephen says:

      Byzantine is the word. Although their declared aim is to foster scholarly and public communication, it took me a while to figure out how the journal works! I think this is the best place to start.

      But you’re right — this is an innovative form of community based, reasonably regulated post-publication review. The strength is the 2-tier nature of the process whereby 10% of the articles are selected (based on aggregated community judgements) to be re-worked into a fresh review-type article with the collaboration of the journal editors.

  22. Stephen says:

    Have just come across this: The BioMed Central Annual Research Awards – which are awarded for “high quality research made freely available through open access publishing”.

    I’d like to see something like this operate for all journals – as a way to confer prestige.

  23. Frank says:

    The Cozzarelli prize (PNAS) and the Newcomb Cleveland prize (Science) are similar.

    http://www.pnas.org/site/misc/cozzarelliprize.shtml
    http://www.aaas.org/aboutaaas/awards/newcomb/

  24. Pingback: Willetts’ Speech on Open Access: Analysis | Reciprocal Space

  25. Pingback: New Bragging Rights Central Post | VWXYNot?

  26. Pingback: Open Access: Money and Data talk and say the same thing? | Reciprocal Space

  27. Pingback: Impact Factor Boxing 2012 « O'Really?

  28. Mark Thompson says:

    Great news re the Willetts announcement – though still much further to go.

    One idea for how this could work is as follows: since we exist in a REF-skewed environment, for the moment at least, why not encourage HEFCE to recognise Journal Editorial Boards (wherein the ongoing prestige and review capability of the journal inheres) rather than the Journal titles? So for example, if the entire Board of a prestige journal migrates to an Open Access title, then HEFCE will award recognition to this new title, rather than the old, 'closed' one? The academics will follow suit, since this is where the current REF incentive system points them.

    All it would take would be a clear policy statement from HEFCE, in line with the government’s commitment to Open Access – and the rest would follow.

    Would this be workable?

  29. Pingback: UK Government Goes For Broke on Open Access | Reciprocal Space

  30. Pingback: Olympic Science: The Long Jump to Conclusions « O'Really?

  31. Pingback: Sick of Impact Factors | Reciprocal Space

  32. Pingback: Length of journal title as indicator of impact? « Alexander Brown .info

  33. Pingback: Impact factors — RCUK provides a chance to act | Reciprocal Space

  34. Pingback: The Schekman Manoeuvre | Reciprocal Space