NSF Tweaks Its Merit Review Rules


The National Science Board has made two subtle but potentially important changes in how grant applications are reviewed at the National Science Foundation (NSF). And while those procedural changes may seem relevant only to those hoping to win NSF funding, they also add to the never-ending debate about how best to measure the results of federally funded research.

A new report from NSF's oversight body, approved last month, attempts to clear up ambiguous language on how proposal writers and reviewers should interpret a criterion, adopted in 1997, that asks reviewers to evaluate the "broader impacts" of the proposed research. To help applicants and reviewers with this second of two criteria used to evaluate proposals, NSF guidelines currently provide eight examples of possible outcomes. They range from attracting more women and minorities into science to fostering ties between academia and industry. The list has become a de facto definition of broader impacts: in effect, a blueprint of the activities investigators believe NSF is most likely to fund.

That's a false assumption, says the science board, and one that imposes unnecessary restrictions on the creativity of investigators. Instead, the board stipulates that reviewers should use the same five metrics that they use to assess how well a proposal meets NSF's first criterion—the proposal's intellectual merit. The metrics include the significance of the idea and whether the investigator is qualified and has the resources to carry out the work.

In its report, the board says the new language will give researchers more freedom:

In the final analysis, NSB believes that the Intellectual Merit and Broader Impacts review criteria together capture the important elements that should guide the evaluation of NSF proposals. Because of the great breadth and diversity of research and education activities that are supported by NSF, the Board has decided not to recommend a specific set of activities related to Broader Impacts, just as it would not recommend particular types of research (emphasis added). Those decisions are best left to the PIs [principal investigators] to describe and to the NSF to evaluate.

The second change addresses the thorny question of how to measure the "broader impacts" that have resulted from a grant. Publication of new results is the widely accepted yardstick for the agency's ability to spot scientifically worthy ideas. But there's no such consensus around the second criterion. Some investigators believe NSF shouldn't be using the criterion at all, saying that it imposes an unnecessary burden on scientists already hard-pressed to obtain grants.

While strongly endorsing the continued use of the criterion, the board throws a bone to its critics. The report says that it's not always fair to expect a researcher to show how his or her grant has met a societal goal such as broadening participation or commercializing a scientific discovery. The scope of the activity may be too small, or the time frame too short. Instead, it may make more sense to evaluate the success of a cluster of grants given out by an entire NSF program, or to examine the combined efforts of multiple grants at a particular university. As the report points out:

NSB notes that assessing the effectiveness and impact of outcomes of these activities one project at a time may not be meaningful, particularly if the size of the activity is limited. Thus, assessing the effectiveness of activities designed to advance broader societal goals may best be done at a higher, more aggregated, level than the individual project. Large, campus-wide activities or aggregated activities of multiple PIs could lend themselves to assessment, which should be supported by NSF.

NSF hopes to put the new guidelines into effect in January 2013 as part of the next version of its Grant Proposal Guide. That will allow time for the agency to hold workshops for both staffers and the community on how the changes will be implemented.

In the meantime, we'd like to hear what you think of the board's new approach to evaluating broader impacts. Is it a good idea? Will it change how you write or review a proposal?
