The GRC’s first output is a consensus on six common principles for using merit review (peer review) to assess and fund scientific research projects – including transparency, integrity, impartiality and confidentiality. “For funding agencies, peer review is our bread and butter. We have to pick the best ideas and people in the most transparent and ethical manner,” Suresh says.
The preamble to the six principles is quite interesting, in that it explicitly links the best research with societal benefits:
Research funding agencies worldwide identify and support scientific research that creates new knowledge and benefits society. Trusted with government funding, these agencies are publicly accountable for their funded research efforts. As stewards of the public trust, these institutions must demonstrate excellence in the assessment of proposed research and be responsive to program objectives. Rigorous and transparent scientific merit review helps to assure that government funding is appropriately expended on the most worthy projects to advance the progress of science and address societal challenges.
This highlights the role of peer review in responding to societal demands for accountability, as well as in responding to the demand from the scientific community that peer review fund the most worthy projects. Unfortunately for funding agencies, many scientists continue to resist the idea that broader societal impact should factor into funding decisions.
CSID has argued not only that criteria for broader societal impact have an important role to play in science funding decisions, but also that (1) they have a role to play in peer review processes and (2) scientists would do well to embrace such criteria.
It is important to note that the peer review process itself is usually only part of the actual funding decision. If proposers and reviewers continue to resist such criteria, the criteria will simply be moved elsewhere in the decision-making process, beyond the reach of peer reviewers. This removes the opportunity for scientists to contribute to the discussion of what counts as a broader societal impact. Somewhat surprisingly, then, resistance to impact criteria is actually a threat to scientific autonomy.
CSID is also co-organizing a workshop in Dalian, China on Peer Review, Research Integrity, and the Governance of Science. Although the themes of our workshop overlap a great deal with those of the Global Summit, there is a notable difference in approach. Whereas the Global Summit included only representatives of science funding agencies, CSID continues to hold that the best results will come from including not only funding agency officials but also members of the research community. CSID has held a series of workshops that involve both groups.
We hope that the GRC will seriously consider including researchers in future workshops, as ESF did last December in connection with their efforts to outline pan-European peer review principles. Researchers are, after all, the ‘users’ of the peer review system. Just as funding agencies expect researchers to account for the impact of their work by discussing what the impact will be, who will be impacted, and how they will work to ensure the impact will take place, funding agencies should do the same. In this case, turnabout is more than fair play — it’s the smart play.