Taxpayers deserve value for money from research funding | Stephen Curry and Imran Khan | Science | guardian.co.uk

Imran Khan and Stephen Curry present a case for accountable science here:

Taxpayers deserve value for money from research funding | Stephen Curry and Imran Khan | Science | guardian.co.uk.

This is in response to Ananyo Bhattacharya’s piece from last week.

My own take: Khan and Curry are still caught up in thinking of blue skies (or basic) research as something fundamentally different from applied research. So, they call for a balanced approach to funding both. But the main problem is seeing blue skies research as something without impact and limiting impact instead to applied research. This is a failure of imagination about what ‘impact’ could mean, as well as about the nature of research more generally. Can we all please get beyond the basic-applied dichotomy? Still, it is an improvement over Bhattacharya’s reactive attitude, which replies with an immediate and forceful ‘No!’ to the question of impact.

This entry was posted in Accountability, Broader Impacts, Future of the University, Metrics, Peer Review.

18 Responses to Taxpayers deserve value for money from research funding | Stephen Curry and Imran Khan | Science | guardian.co.uk

  1. If I gave the impression I thought blue-skies research was ‘without impact’ then it was unintentional. The difficulty rather, as I see it, is in forecasting impact — something widely recognised as problematic (particularly if it is supposed to be in quantifiable terms). Even retrospective assessments are difficult, as all the committees assembled to prepare REF submissions in the UK are finding.

    But before I say any more, I guess I’d better read that article you linked to!

  2. Thanks to @stephencurry [edit: @stephen_curry] for the comment. For those of you on Twitter, I suggest you follow him — quite an interesting and active academic tweeter.

    It is certainly possible I read too much into your article. Here are the bits that led me to my interpretation:

    There may be residual difficulties at the Engineering and Physical Sciences Research Council (EPSRC), but at Imperial College last month, the chief executive of the Biotechnology and Biological Sciences Research Council (BBSRC) was happy to publicly affirm that, when its panels of scientists meet to decide on grant awards, the quality of the science therein is taken as the most important criterion for success. Impact statements assessing future potential are only considered in the case of a tie-break between two applications of equal scientific merit.

    The solutions to many of today’s challenges may well come ultimately from the funding of basic research, but there is also a place for directed endeavours. There is no question of abandoning blue-skies research, but rather of finding an agreed balance between curiosity-driven science and applied work. In a democratic society that is something that scientists and politicians and the public should be talking about, but – by and large – we do not.

    As scientists and campaigners we will continue to stand up and say that basic research must be protected, not only because of its track record of success but also its intrinsic worth. Ideas, not economies, are what really inspire us.

    Nowhere do you say that basic research is “without impact” — that’s over-interpretation on my part.

    With regard to the issue of measuring impact, I wonder whether you would agree that measuring the impact of applied research is somehow easier than measuring the impact of basic research. Though perhaps Pielke and Byerly have influenced your thinking on this?

    As for ex ante impact assessment using peer review, perhaps you will find this article interesting: http://www.peerevaluation.org/read/libraryID:28351.

  3. Thanks for the recommendation, though people will find me at @stephen_curry on twitter.

    As for measuring the impact of applied versus basic (or should I say pure) research, I guess the former is easier but is not without its difficulties. The trouble is that it is still very hard to track the development of new technologies or products from a piece of original scientific research, in part because many factors affect the development of new technologies and in part because the information you need is hidden (under the veil of commercial confidentiality). So while most criticism has been directed at efforts to forecast impact (though that’s a misreading of the policy), even retrospective assessments are problematic. Dunno how you square all that with the key issue of democratic accountability!

    • Sorry for the mistake in your Twitter handle — I’ve edited the original comment.

      As for your point that most of the criticism of impact requirements has been directed at “efforts to forecast impact (though that’s a misreading of the policy),” I agree on both counts. But it’s the second point that is most telling.

      Criticizing a policy such as RCUK’s Pathways to Impact on the grounds that impacts are impossible to predict displays either pseudoreasoning or a genuine misunderstanding of the policy. RCUK explicitly states that they do not expect proposers and reviewers to predict the impact of the proposed research. Instead, the requirement is that they have given some thought to the potential impact, to who might be the ‘beneficiaries’ of that impact, and to how to ensure that the potential impactees (ugh) get the message.

      Being able to give such an account and actually making the attempt to ensure that those who might be impacted by the research understand what the impact might be and how to deal with it is enough to ensure democratic accountability on the front end (that is, ex ante).

      One more point — that proposers have thought through the research in a similar fashion is really all we ask in terms of scientific merit, as well. We don’t ask them to predict what the outcome of the research will be. We ask that they think about possible outcomes, and what they will do if this happens rather than that, and so forth.

  4. Ananyo says:

    Hi Britt
    “But it is still an improvement over Bhattacharya’s reactive attitude, which replies with an immediate and forceful ‘No!’ to the question of impact.”
    Ouch.

    I’m not sure that you could fairly interpret my blog post as a ‘No’ to impact – in the sense of being against the idea that science should have (important) social and economic impact. I, in fact, argue that it does – and that adding an ‘impact statement’ to a grant application doesn’t help the process but rather contributes to the ‘over promising’ on benefits that Kieron points out has been going on since 1945. It would be fair to criticise me for overstating the benefits of ‘investigator led’/basic/blue skies research, though I don’t think the evidence on that is unequivocal or that there’s a clear answer.

    As for a false dichotomy – perhaps – but I’m not sure what else you would call the sort of research I’m talking about – which involves answering a question that has scientific merit (when do snails elect to reproduce asexually vs sexually?) but has little in the way of foreseeable economic (or indeed social!) impact. Luke G suggested the term ‘investigator-led’ for this sort of work – and I’d be happy to stick with that. But it strikes me as a little different from, say, a drug company saying – examine how this drug binds to these proteins for us and tell us the results. Sure, there’s a spectrum of science between these and beyond them, but surely there is a distinction? I’m probably displaying my ignorance but if so, I’m happy to be enlightened further.

    • Hi, Ananyo:

      Welcome to another interesting and active tweeter I heartily recommend. You can follow him — let’s make sure to get it right this time — here: @Ananyo.

      Yes, the resounding ‘no’ to impact that I attribute to you is not a denial that science does and should have social or economic impact.

      Instead, the ‘no’ you voice — and please do correct me if I am mistaken — is to impact policies or impact requirements such as RCUK’s Pathways to Impact (an ex ante requirement similar to the US NSF’s Broader Impacts Criterion) or the Research Excellence Framework, aka the REF (an ex post requirement for which we have not even a rough equivalent in the US).

      As I understand it, your reasoning for being against such impact policies is that they are either misconceived in the first place (since, say, expecting a malacologist to be able to foresee the potential societal or economic impacts of research on snail reproduction is unreasonable) or they are misapplied.

      As for the first objection, I refer to my replies to @stephen_curry, above, as well as to this article (http://www.peerevaluation.org/read/libraryID:28351), which addresses the issue of predicting impact.

      As for the second objection — that impact policies may be incorrectly applied or badly implemented — I totally agree that this may happen. But then I think the criticism should be directed at specific aspects of a specific policy, rather than at the very idea of having an impact policy at all.

      As for our poor malacologist, who seems at a real disadvantage compared to our chemist who works for a drug company, my opinion is that the malacologist ironically has more of an opportunity where ex ante impact criteria are concerned. It is up to her to figure out what the broader context of her research is, who might benefit, what the benefit might be, and how to make that happen. With the chemist working for the drug company, there is a little less room to maneuver (though creativity should still be employed).

      What I am suggesting is an attitude shift. Instead of seeing the impact glass as half empty, researchers should see it as half full. This is what I characterize as taking an entrepreneurial attitude toward impact (http://cas-csid.cas.unt.edu/?p=2846).

  5. Ananyo says:

    “It is up to her to figure out what the broader context of her research is, who might benefit, what the benefit might be, and how to make that happen.”

    But what if she can’t? Perhaps she’s simply not imaginative enough? And even if she does, what are the chances that this impact turns out to be ‘real’? Aren’t you encouraging academics to take part in what is essentially an exercise in deception (deceiving themselves, the public and the funders)?

    • Well, I hate to appear hardhearted about it; but if she either can’t figure out the potential impacts of her work or refuses to think about it, then she ought not to be funded out of the public coffer.

      RCUK do leave researchers the option, I believe, of ticking a box that says something like ‘this research has no societal or economic impacts’ — but good luck getting that funded. And that seems just to me.

      Would you hesitate to mark down a proposal for lacking scientific merit?

      I am definitely not encouraging academics to take part in an exercise of deception. In fact, I think proposals that suggest outlandish impacts ought not to be taken seriously (just as we would act in the case of a proposal that promises some outlandish scientific outcome).

      The idea would be to approach impact accounts with the same sort of rigor that we use in approaching the question of scientific merit.

      The new US NSF Merit Review Criteria move in this direction, by the way: http://cas-csid.cas.unt.edu/?p=2840. I think this is wise.

  6. Hmm…let me play taxpayer (kind of easy, since I am).

    If a scientist can’t figure out the broader context of his research, I don’t think he deserves public funding. I’m not in the business of contributing to the advancement of his career, but in supporting the increase in knowledge and understanding that comes from his work.

    If a scientist is insufficiently able to imagine the impact of her research, I think whoever trained her was negligent, if not incompetent. If one can understand why a particular experiment contributes to the leading edge of one’s field, then one should be able to articulate the secondary outcomes of the research (what new things can be done, potential benefits to the public, additional people trained in X, new efficiencies in research/applications/education, etc.).

    If you don’t think proposed impact measures can capture this, that’s an argument I want to hear. The problem is that it’s easy to characterize the rhetoric around this discussion as “trust us and leave us alone,” or “we’ll simply be deceiving people,” which I’m not sure even Haldane would be sanguine about. If people think impact measures will be seen as deceptive, it’s hard to see how the absence of measures will be taken as more honest, much less transparent.

    • David Bruggeman is another science policy twitterati (@p_phronesis) for those following Twitter.

      You might enjoy this dialogue (http://cas-csid.cas.unt.edu/?p=2842) though @Ananyo will have further evidence of my philosophic tendency to deal too roughly with interlocutors (sorry again, Gas Station Without Pumps). I should try to be more polite. We philosophers are just used to treating each other roughly.

  7. Thanks, Britt, for the pointer. It would seem that some researchers think that feats of creative thinking about the future of their work are either beyond their ken or irresponsible (or both). And while I can see why you might think a list would be too restrictive, I think the resistance of research communities to impact exercises suggests they could benefit from some guidance. Maybe a list is the wrong form for that guidance, but I can see many researchers (and review panels, for that matter) opting to do nothing when given wide latitude to do something. In my day job I often have to start the creative process in our volunteers by giving them something to work with. While they have the skills and knowledge to provide the needed analysis, they aren’t terribly inclined to work in a vacuum.

    Perhaps some case development from existing research could be useful (though it would be hard to avoid including current hindsight that was unavailable at the time of the work). There’s plenty of history, plenty of tacit knowledge, embedded in scientists for articulating the intellectual merit of their work. While there may be tacit knowledge somewhere that deals with broader impacts, I think the behavior of scientists doesn’t internalize that knowledge so much as hide it (put it in a black box if you’re needing an STS cliche right about now).

    It’s kind of like teaching music theory to someone who learned by ear and/or imitation. They may know what they’re doing, but they can’t effectively explain it to someone, even to themselves.

    • David,

      Point taken. And I agree.

      I fully support the idea that scientists and engineers ought to be trained in thinking about the broader impacts of their research.

      The policy that NSF is now left to develop regarding the Broader Impacts Criterion ought to include provisions for such training. In fact, if you look at the ‘policy’ subsection of Sec. 526 of the America COMPETES Reauthorization Act of 2010, you’ll see that Congress thinks so, too. It says that NSF must develop a policy that:

      (4) encourages institutions of higher education and other nonprofit education or research organizations to develop and provide, either as individual institutions or in partnerships thereof, appropriate training and programs to assist Foundation-funded principal investigators at their institutions in achieving the goals of the Broader Impacts Review Criterion as described in subsection (a); and

      (5) requires principal investigators applying for Foundation research grants to provide evidence of institutional support for the portion of the investigator’s proposal designed to satisfy the Broader Impacts Review Criterion, including evidence of relevant training, programs, and other institutional resources available to the investigator from either their home institution or organization or another institution or organization with relevant expertise.

      Now, let’s see what happens.

      But, in my opinion, if scientists look at this as something they have to do (taking on a reactive compliance mentality), rather than as something they ought to do, both for themselves and for society (taking an entrepreneurial attitude), the training won’t work. First, scientists need to begin to own the Broader Impacts Criterion.

  8. Pingback: Blogosphere erupts over science’s “Faustian bargain” « Purse String Theory

  9. It seems only a slight exaggeration to observe that research scientists approach anything that takes them away from the bench as prompting a reactive compliance mechanism, and possibly as an infringement on their sacred autonomy. They don’t want to own the Broader Impacts Criterion, so someone’s going to have to make them.

    Given the lack of consideration the National Science Board gave to the public in giving in to aggrieved bench scientists while consulting with ‘the community’, I don’t see the Board or the Foundation as willing or interested in following through, as they strike me as pushovers.

    • We shall see. I think they (NSF) will do something good. I think they get it. I also think, if the NSB report is any indication, that they’ve thought it through pretty well.

      I don’t expect perfection — not even sure what that would be. But I do think they have learned some lessons over the past 15 years about broader impacts.

      I also think scientists (some, not all) will surprise us. They’ve yet to receive good training in addressing broader impacts. Let’s see what happens if NSF supports training activities. And let’s try to make sure we learn some lessons from training in ethics and RCR!

      Personally, I think it is important, even if broader impacts training is mandatory, that people not feel like they’re being MADE to comply. Let’s see if we can finesse that one, now that there will be (if they follow America COMPETES Reauthorization of 2010) a requirement for such training.
