What Scientists and Engineers Are Saying
Before adopting the current merit review criteria, NSF and the National Science Board (NSB), the policy branch of NSF, created a Task Force to provide recommendations regarding the proposed new criteria. NSF published the Task Force’s recommendations on the Web, through press releases, and through direct contact with universities and professional associations, receiving around 300 responses from the scientific and engineering community. The responses raised several concerns about the new criteria, including what the Task Force termed the issue of “weighting” the criteria: respondents perceived the intellectual merit criterion as more important than the broader impacts criterion, or perceived the broader impacts criterion as irrelevant, ambiguous, or poorly worded. Respondents also expressed concern that, for much basic research, it is impossible to make meaningful statements about the potential usefulness of the research.
In 2001, after reviewing these responses in greater detail, the NAPA Report on the new merit review criteria asserted that “the concept of broader social impact raises philosophical issues for many reviewers – in particular, reviewers who see their task as exclusively one of assessing the intellectual merit of proposals” (p. 14, authors’ emphasis).
Since the February 2001 NAPA Report, there have been repeated calls for clarification of the broader impacts criterion, the most persistent of which come from the reports of the Committees of Visitors (COVs), outside experts who provide feedback to NSF on various aspects of program-level operations and outcomes of NSF-funded research. Among the operations about which COVs provide feedback is a program’s adherence to the merit review process, including its use of both merit review criteria, with special focus on the extent of each program’s use of the broader impacts criterion.
In 2005, 13 NSF programs underwent COV review. Of the 13 COV reports, 9 indicate problems with the interpretation and application of the broader impacts criterion. Such indications run the gamut from the simple: “Criterion I always addressed; criterion II not always addressed” (Molecular and Cellular Biosciences Division, p. 4), to the complex: “The selection of proposal jackets was designed [to] show up how program officers are implementing the considerations of review criterion #2. As expected, in the borderline cases chosen, there were some cases where the influence of criterion 2 evaluation becomes evident. These cases were in the minority, however. It seems evident that for research proposals, the intellectual criterion #1 is predominant in the judgment. The COV does not find fault with this, but feels obliged to point out that proposal actions do not show equal attention to arguments based on the two criteria. Despite the clear instructions found in the Grant Proposal Guide and in the Announcements of Opportunity issued by UARS, there is still some uncertainty about the importance of criterion #2” (Upper Atmospheric Research Section, p. 8).
The 2005 COV Report on the Division of Electrical and Communications Systems reinforces the idea that mere quantity is not a sufficient measure of quality: “The jackets indicate that the individual reviews are increasingly responding to the guidance and addressing both intellectual merit and broader impact. Compliance is now virtually 100%. However, the interpretation of the ‘broader impacts’ criterion and relative weight given to the requirement is inconsistent across panels. In several cases, this criterion is given very brief attention by the PI and reviewer” (p. 9).
Two 2005 COVs provide somewhat droll accounts: the Report for the Division of Astronomical Sciences notes some “idiosyncrasies” in the application of the broader impacts criterion, such as simply not addressing it: “There is evidence from examination of proposals submitted during the FY 2002-04 period of some idiosyncrasies in the reviewers’ application of the ‘broader impacts’ criterion. Specifically, some reviews received during this period simply do not address Criterion II at all, while in other cases reviewers identified broader impacts on behalf of proposers who did not explicitly address this criterion in their proposals. Obtaining proper balance within the written narrative of proposals is integral to the integrity of the peer-review process” (p. 29); while the Report for the Deep Earth Processes Section calls for further clarification (but not more emphasis) of the broader impacts criterion: “Implementation of the intellectual merit criterion is straightforward, but how is the broader impact criterion used? . . . More clarity about this criterion would be useful (but not more emphasis)” (p. 15).
The 2005 COV Report on the Information Technology Research Priority Area takes a broad view: “One of the most consistent concerns expressed within the context of process and management was the inconsistency of proposers, reviewers, panels and NSF Program Directors, in addressing both merit criteria and in particular, the broader impact merit criterion. This problem was most serious with respect to the small proposals and represents a Foundation-wide concern that goes well beyond the ITR program. While the Foundation has worked hard to help define what is meant by ‘broader impacts’, there are still widely varying interpretations that often lead to confusion in the review process. Within this context many on the team considered it critical to emphasize the importance of extending the definition of this review criteria to include the need for broader participation of under-represented groups” (p. 4).
The 2005 COV Report on the Division of Integrative Organismal Biology offers the following advice: “The COV recommends that the NSF continue to stress the importance of review criterion 2 to both investigators and reviewers. This might involve disseminating information about the kinds of activities that could be included under this criterion, and making a concerted effort to change the existing culture of the scientific community, which tends to give exclusive importance to criterion 1” (p. 5).
For 2006, only two COV reports have been released thus far. The 2006 COV Report on the Division of Physics notes “significant improvement” in the use of the broader impacts criterion, but adds: “This is in great measure the result of an effort to educate both the proposers and the reviewers on Merit Criterion 2 by the program directors. The results are clear, but we encourage the effort to continue” (p. 4). Finally, the 2006 COV Report on the Division of Design and Manufacturing Innovation remarks that the use of the broader impacts criterion “remains an issue in the research community that is not specific to DMI. The PIs and reviewers do not understand the rationale behind the requirement of broader impacts and therefore remain confused” (p. 5).
The preceding comments were taken only from the most recent (2005 and 2006) COV Reports, but they are representative of COV Reports since the new criteria were put in place. Moreover, they indicate that the confusion regarding the interpretation and application of the broader impacts criterion noted in the 2001 NAPA Report remains a persistent issue.
Calls for clarification of the criterion are a natural reaction to the poor quality of responses it elicits from PIs and reviewers: if people have trouble responding to the broader impacts criterion, perhaps the criterion is unclear. What the COV Reports make clear, however, is that, as the 2001 NAPA Report emphasized, problems surrounding the interpretation and application of the broader impacts criterion are not simple cases of misunderstanding the language used to express it, and that the discomfort of proposers and reviewers confronted with the demand to give an account of, and to judge, the “broader impacts” of proposals has deeper roots. That the COVs feel “obliged” to report uneven attention to the broader impacts criterion even though they “do not find fault” with it, that the complete omission of responses to the criterion is described as an “idiosyncrasy” rather than an aberration or even a violation, that they are careful to note that “more clarity” need not entail “more emphasis” – such remarks reveal that improving the quality of responses to the broader impacts criterion requires more than clarifying the language used to express it: it requires a cultural change within the scientific community.
All COV Reports cited are available online at: http://www.nsf.gov/od/oia/activities/cov/.