NIH Regional Consultation Meeting on Peer Review

Meeting Summary

September 12, 2007 – Chicago

Meeting Context and Review of Ongoing Activities
Dr. Lawrence Tabak
Director, National Institute of Dental and Craniofacial Research, NIH; Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

NIH is conducting a self-study, in partnership with the scientific community, to strengthen peer review during these changing times. We are witnessing an increasing breadth and complexity of science and greater reliance on interdisciplinary approaches. All of this creates new challenges for the system of support for biomedical research, and we all agree that peer review is a key component of that system.

What we are looking at here goes beyond peer review to the entire system NIH uses to support science. We must continue to adapt to rapidly changing fields of science and ever-growing public health challenges, while at the same time ensuring that the processes we use are both efficacious and efficient for applicants and reviewers alike. And we must continue to draw upon the most talented reviewers.

We are seeking input from the very broad scientific community, including investigators, scientific societies, grantee institutions, and voluntary health organizations. We have also solicited a great deal of input from our own staff at NIH. Two working groups have been formed to guide this effort: a working group of the Advisory Committee to the NIH Director (ACD), co-chaired by Dr. Yamamoto and myself; and the NIH Steering Committee Working Group on Peer Review, co-chaired by Dr. Jeremy Berg (Director of the National Institute of General Medical Sciences) and myself.

The Center for Scientific Review (CSR) has a number of ongoing initiatives, including shortening the review cycle, immediate assignment of applications to integrated review groups (IRGs), realignment of study sections, electronic reviews, and reducing the size of applications. We are working in concert with CSR to ensure we are not working at cross-purposes.

We are now in the diagnostic phase of this review. A Request for Information (RFI) just closed this week. In this RFI, we solicited information about the challenges of the NIH system of research support, the challenges of peer review per se, and solutions to those challenges. We asked some questions about the core values of the NIH peer review process, about the specific criteria that are used as well as the scoring methods, and about whether the process should be the same for persons at different stages of their scientific career pathway. If you have not yet had a chance to give us your input, please do so.

In addition to this RFI:

  • Dr. Zerhouni, Dr. Yamamoto, and I have held two deans’ teleconferences.
  • Today’s meeting will be followed up by meetings in New York, Washington, D.C., and San Francisco.
  • We have asked the ACD Working Group to select persons to serve as science liaisons to further enhance outreach to stakeholders in specific fields and areas of science.
  • We have created a common website to allow everyone to submit feedback.
  • Within NIH, we have held three consultative meetings and have solicited information from institutes and centers about prior experiments related to peer review.
  • We have looked carefully at the rich literature on peer review and at the approaches of other agencies.
  • We are engaging individuals who are experts in psychometric analysis so that we can model different approaches to peer review from that perspective.

Beginning in February 2008, NIH leadership will consider input from the RFI and both working groups and will determine next steps, including pilots. In March 2008, we will begin to design the pilots and associated evaluations, and then develop an implementation plan. Successful pilots will be expanded, leading to the development of a new NIH peer review policy.

Goals for the Meeting
Dr. Keith Yamamoto
Executive Vice Dean, School of Medicine, UCSF; Professor, Cellular/Molecular Pharmacology and Biochemistry/Biophysics, UCSF; Co-Chair of the Working Group of the Advisory Committee to the NIH Director (ACD) on NIH Peer Review

The title of my presentation is a quote from Dr. Zerhouni: “Fund the best science, by the best scientists, with the least administrative burden.” Another of my favorite quotes dates back to 1946, the year the NIH Peer Review system was rolled out. The Surgeon General at that time said: “The only possible source for adequate support of our medical schools and medical research is the taxing power of the federal government . . . such a program must assure complete freedom for the institutions and the individual scientists in developing and conducting their research work.” These are pretty refreshing words from a federal official.

If we flash forward 60 years or so, another notable person, Tom Cech, had this to say about the way the system works: “Discovery and innovation are to some extent taking place in spite of, rather than because of, the current policies and practices of major biomedical funding agencies.” That is the challenge.

Peer review is the only system for funding the best science, but it is important to keep in mind that intrinsic conflicts are built into peer review; that these won’t go away, no matter what we do, because of the nature of peers reviewing peers; and that we must design systems that address these conflicts in some way. These conflicts are reviewer self-interest and reviewer conservatism.

The nature of research has changed and will continue to change. The good news about biomedical research is that we understand enough to break down the barriers that we used to put between fields, between disciplines, between experimental systems and between experimental approaches. And so we are seeing applications that are much broader in scope and greater in complexity, and that require multiple subspecialty expertise.

The culture of the review system also has changed, driven in part by the doubling and then flattening of the NIH budget, which contributed to an explosion of applications. With that has come a vast increase in the number of reviewers, most of whom serve ad hoc. The ad hoc part of the mix has exploded in part because of the need for appropriate expertise. Study sections have proportionately fewer senior scientists than before; in fact, recent years have seen a big increase in the number of pre-tenured scientists recruited into the review process. And it is at least the perception of some that the process has become somewhat adversarial, rather than one in which the reviewers try to take an even view.

What this really says, and the reason we are all here, is that NIH criteria must develop and adapt. We need bold thinking in all areas and especially in a meeting like this one, where no ideas are off the table. I challenge all of you to think in as expansive a way as possible, to bring your ideas forward; they will be considered. Here are a number of areas where you might do some thinking:

  • Review Criteria and Focus: You know that the NIH has been staunchly dedicated to a project-based focus; other agencies are people-focused. Should the NIH stay with that? Is there some reason to try to do both in some cases?
  • Application Structure and Content: You heard that the CSR is already experimenting with changes in the content and organization of applications. Are there ways to think about that process that would address some of the conflicts we have heard about?
  • Reviewer Mechanisms and Mechanics: Is the old study section concept still a useful one? If so, how can it be adapted, adopted, or changed so that we can recover the kind of culture that those of us who were involved in that process years ago enjoyed? Should we employ more of an editorial board set-up? That is, a small number of people on a study section would serve as editors who would send the application to experts around the country and ask for comment pertaining to their particular areas of expertise.
  • Reviewers and Review Culture: Are there ways to get senior scientists back in the process and away from the thinking that they have lifetime immunity once they have served one term?

Other areas need bold thinking as well, such as how applications are scored. Let’s try to find a system that is rigorous and fair, that supports both incremental and innovative research, and that is efficient for applicants and reviewers alike. We look forward to your feedback.

Statements/Proposals from External Scientific Community Offering Specific Strategies or Tactics for Enhancing NIH Peer Review and Research Support

Eswar Krishnan, MD MPH, Assistant Professor of Medicine, University of Pittsburgh

I would like to see a three-stage review process. The first step would involve the investigator submitting a three-page proposal that explains the specific aims, innovativeness, and significance, with a small proportion of space devoted to methods. If an investigator can articulate this in 2 or 3 days and write up a proposal, he or she should be able to send it out and get an idea of whether it is a go or a no-go. This first screening process would be Internet-based, anonymized, and reviewed by several individuals across the country.

The second step would be to identify those applications in the top 20 to 25 percent and invite those individuals to submit a much more detailed proposal that includes the practical aspects of the research. In addition, the PIs would be allowed to present the proposal at the full study section meeting and be available to answer questions about the research, so that you are not judging a piece of paper; you are judging individuals who are passionately doing what they normally do.

Burton F. Dickey, MD, Clifton D. Howe Distinguished Professor of Pulmonary Medicine; Chair, Department of Pulmonary Medicine, MD Anderson Cancer Center

Opening the scientific review process by abolishing anonymous review and inviting applicant and outside comments during a defined period could self-correct poor-quality reviews. Reviewers posting non-anonymous reviews would be motivated to provide thoughtful, substantive, and defensible critiques. If logical or scientific errors were made in a critique, they could be addressed in an effective forum, and ideally in a timeframe that wouldn’t require a 9-month turnaround. Reviewers who repeatedly made inappropriate criticisms or failed to identify serious flaws would become apparent and could be moved off the study section. Conversely, ad hoc reviewers who repeatedly hit the nail on the head could be invited to serve.

The mechanics of such a process might work as follows:

  • After submission of a grant application, when it is posted for reviewers to read, it could also be accessible to anyone with an NIH Commons username and password.
  • Application reviews would be posted by reviewers several weeks before study section, as they are now, but would be similarly accessible to anyone with NIH Commons access. A period of several weeks would then ensue for publicly posted comments by the applicant, reviewers, and others. This period would end at least one week before study section to allow reviewers time to digest the comments before the meeting.
  • At the study section meeting, the reviewers would summarize their own and the posted critiques, and could amend their own critiques, as is currently done after viewing the critiques of other reviewers prior to study section. This session might be viewed by webcam, again accessible to anyone with NIH Commons access, but with no opportunity for comment, so the meetings need not last any longer than they do currently.
  • Scoring by reviewers should probably also be non-anonymous, so that anyone who attempted to score outside the range recommended by the primary and secondary reviewers would explain his or her reasoning during the discussion period rather than take an action that could shift the final score without explanation or accountability.
  • The operations of individual study sections could be observed by a committee of NIH staff and outside scientists to compare best practices and to identify study sections that appear to be functioning poorly, with, for example, excessive grandstanding, inappropriate alliances, or misidentification of poor-quality or high-quality reviews. A poorly functioning study section might have its chair replaced, its Scientific Review Administrator (SRA) replaced, or be dissolved.

The potential problems can all be dealt with. For example, there would be concerns about disclosing preliminary results, but the online posting could serve as a public record of priority, just like a published manuscript. There could also be concerns about disclosing plans to competitors, but that already happens in the current system, without the protection that public evidence of priority could give. Review might take more time in view of the increased care likely to be exercised by reviewers, but fewer applications could be assigned to each reviewer by eliminating tertiary reviewers, since there would be additional comments from the wider community.

Sunita Dodani, MD PhD, Medical College of Georgia

I have several recommendations (the details of which were submitted to NIH) and will touch on the most important ones here:

  • Have separate peer reviews for applications from new investigators and established investigators.
  • Assign more weight to the science and methods (scoring guidelines are in written statement).
  • Shorten the time to review and provide a summary statement to new investigators.
  • Score each section of an application to reflect quality (details in statement).
  • Increase the number and size of R03 awards to enable new investigators to perform more pilot studies.
  • Establish new study sections on interdisciplinary, applied, and global research to help new investigators propose these types of research projects.
  • Avoid appointing junior faculty to study sections (exceptions can be made).

Primal de Lanerolle, PhD, Professor, University of Illinois at Chicago

I have four ideas for making the review process more objective, equitable, and efficient:

  • Increase the size of R01s. NIH has been reducing the size of R01s to make the funds go further, but smaller grants have resulted in more and more applications as Dr. Yamamoto pointed out. There should be fewer applications from individual investigators if R01s are bigger. But, to ensure fewer applications, make it harder to have more than one grant. This could be done by multiplying the Priority Score of the second grant by 1.1, the third grant by 1.3, and so forth.
  • Reward productivity, but define it first by developing a formula based on publications. Select 500 recently funded renewal applications and calculate the Average Impact Factor; then calculate the Individual Impact Factor for each renewal application. Divide the Average Impact Factor by the Individual Impact Factor and multiply the Priority Score by this ratio (a worked sketch follows this statement). This would reward or penalize applications based on productivity without affecting the peer review process. The definition or formula should be transparent, and every investigator should be able to calculate whether he or she is going to get funded by this mechanism.
  • Triage the front end of grants, not the back end. If the Individual Impact Factor is greater than the Average Impact Factor, fund it and get it out of the system. We are wasting far too much time and money reviewing resubmissions.
  • Establish an NIH Extramural Scholars Program, with the following characteristics: (1) Each scholar would receive $400,000 per year in direct costs for 4 years; (2) initially, only renewal applications would be eligible; (3) success would be based on productivity in the previous funding period; (4) there would be strong disincentives for an Extramural Scholar to apply for additional R01s, PPGs, or SCOREs; and (5) the scholar’s institution would be obligated to provide at least 50% of the scholar’s salary.

In order to fund this program, ask Congress to increase the NIH budget by a modest 2.5% each year for the next 5 years. The key to the plan is that NIH would match each increase, and the funds would be placed in a special account used solely to support Extramural Scholars. Thus, the funds available in the program would increase by 5% each year to a total of 25% in the fifth year.
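
To make the arithmetic of these proposals concrete, here is a worked sketch. The symbols are introduced purely for illustration; the statement above supplies only the multipliers, the impact-factor ratio, and the percentages, and NIH priority scores are assumed to run from 1.0 (best) to 5.0 (worst).

    S'_k = S_k \times m_k, \qquad m_1 = 1.0,\ m_2 = 1.1,\ m_3 = 1.3, \dots

where S_k is the priority score of an investigator’s k-th concurrent grant; since lower scores are better, the multiplier penalizes each additional award. The productivity adjustment is

    S' = S \times \frac{IF_{avg}}{IF_{ind}}

so an applicant whose Individual Impact Factor (IF_ind) is twice the Average Impact Factor (IF_avg) would see a priority score of 2.0 improved to 1.0, while one at half the average would see it worsened to 4.0. For the funding plan, a 2.5% congressional increase matched by NIH adds 2.5% + 2.5% = 5% of the budget to the special account each year, so the funds available grow by 5% per year, reaching 5 x 5% = 25% in the fifth year.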

Tema Fridman, PhD, University of Tennessee

My proposal for change in the NIH grant submission and review procedures involves a two-tier submission system and an upgrade of the reviewer assignment system.

  • In the first round of a two-tier submission system, only the scientific proposal, the team, and the resumes would be submitted. This part would go through the normal review discussion. If the proposal passes scientific review, the rest of the required information is then submitted, and the same assigned reviewers would review the budget and other criteria.

Technically, it would be very easy to introduce these minimal changes to the existing procedure. The review discussion is naturally centered on the scientific merits of the proposals, with the rest of the information, such as budget, animal subjects, etc., receiving assessment input from only the three assigned reviewers. It is hard to imagine a situation where extra expertise would be necessary to assess these items.

Following the conclusion of the review panel, top applicants, perhaps in slight excess of the cutoff, would be invited to submit the rest of the information within a short time, such as 2 weeks to 2 months. Then, for each returning proposal, the same three assigned reviewers would submit a written assessment of the budget and other criteria within 2 weeks.

  • The keyword system used to assign reviewers is outdated. One way to remedy this is a self-organized system, in which reviewers would submit their specializations at three hierarchical levels – for example, Level I: Physics, Mathematics, Biophysics; Level II: Algorithms, Computational Sciences, Computational Modeling, Proteomics, Theory (Experiment); Level III: Mass Spectrometry, Molecular Structure, Protein Structure, DNA Structure.
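
A minimal sketch of how such a self-organized assignment system might work appears below. The data model, level weights, and matching rule are illustrative assumptions made for this sketch, not part of the proposal; it simply matches a proposal to the reviewers whose registered keywords overlap most, weighting the more specific levels more heavily.

    # Minimal sketch of a self-organized reviewer-assignment system.
    # Level weights and the matching rule are illustrative assumptions.
    from dataclasses import dataclass

    LEVEL_WEIGHTS = {1: 1.0, 2: 2.0, 3: 4.0}  # more specific levels count more

    @dataclass
    class Profile:
        name: str
        keywords: dict  # level (1-3) -> set of keywords at that level

    def match_score(reviewer, proposal):
        """Weighted keyword overlap between a reviewer and a proposal."""
        return sum(
            LEVEL_WEIGHTS[level] * len(reviewer.keywords.get(level, set()) & kws)
            for level, kws in proposal.keywords.items()
        )

    def assign_reviewers(proposal, reviewers, n=3):
        """Pick the n reviewers with the highest weighted overlap."""
        return sorted(reviewers, key=lambda r: match_score(r, proposal), reverse=True)[:n]

    reviewer = Profile("R1", {1: {"Biophysics"}, 2: {"Proteomics"}, 3: {"Mass Spectrometry"}})
    proposal = Profile("P1", {1: {"Biophysics"}, 2: {"Proteomics"}, 3: {"Protein Structure"}})
    print(match_score(reviewer, proposal))  # 3.0: overlap at Levels I and II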

Toshio Narahashi, PhD, John Evans Professor of Pharmacology, Northwestern University Feinberg School of Medicine

The most critical factor in the breakdown of the peer review system is the lack of an effective judge for grant review. For a manuscript submitted to a journal for possible publication, the chief editor serves as a judge: he or she makes a final decision as to which of the author’s claims and the reviewers’ comments are correct. There is no such judicial role in the study section. The chair of the study section is in effect a coordinator and cannot possibly serve as a judge, which would require reading all proposals.

Currently, 50% or more of the proposals in a study section are triaged, and the critiques of a triaged proposal from three reviewers are never scrutinized by the other members of the study section. For a number of reasons, the selection of reviewers tends to shift toward younger scientists, who often are not experienced enough to assess the overall picture or significance of the proposed research. This is one of the major reasons many PIs complain about the quality of reviews. To rectify this situation, I propose limiting R01 proposals to 10 pages and abolishing the triage system. Any excellent proposal can be condensed to 10 pages. These arrangements would permit many members of the study section to take at least a quick look at every proposal, and the critiques from the three reviewers would be exposed to all study section members.

Joel Schwartz, DMD DMSc, Professor of Oral Maxillofacial Pathology, University of Illinois at Chicago

I propose a small grant with milestones, a new mechanism that would allow young investigators and investigators with innovative ideas to obtain funding. My proposed system would offer $35,000 for the first year if certain requirements are fulfilled (see written statement). Another characteristic of this system would be a quick review by a small ad hoc committee that would meet 6 times a year. These reviewers would have to have a PhD or another advanced degree and demonstrated expertise in the area they would be reviewing.

Part of the requirements for initial small grant funding would be based upon a need or an area of interest designated by NIH as requiring further study. A clear definition of the hypothesis and the need for this kind of research, and letters of support from interdisciplinary team members also would be required. We are trying to encourage the sciences to work together to develop a better view of how they are going to attack a specific scientific problem, so a clear statement of the level and detail of interaction and cooperation between these individuals is also required. Additionally, a letter from the institution would need to clearly state the laboratory and monetary support available to the investigator.

This $35,000 would be provided short-term. Investigators could return for a second small amount of money after reaching certain milestones.

Gene Webb, PhD, Planning Manager, Dean’s Offices of Biological Science, University of Chicago

NIH has taken bold steps to assure the timely independence of promising postdoctoral fellows through the Pathway to Independence Awards. Currently, postdoctoral fellows under this program identify a postdoctoral mentor, apply for the grant, and, once the grant is awarded, have 18 months in which to secure a tenure-track position. To address this problem of a compressed timeline, we propose that a novel dual-institution, postdoctoral mentor/faculty sponsor-colleague structure be allowed in instances where a candidate can identify a potential faculty position at the outset of the application process. The application would be shepherded to the NIH by the applicant, the mentor, the sponsor-colleague, and the high-level administrative offices of the institution offering the independent position. Only after the NIH awards the Pathway to Independence grant would this career plan be set in motion.

This kind of proposal offers at least four benefits:

  • The proposed research program would not only fit the infrastructure of the postdoctoral mentor’s laboratory and institution but would also effectively leverage the infrastructure and strengths of the institution in which the bulk of the work would be completed.
  • With the NIH providing a view of its investment in the awardee at the outset, this modification could spur not only the offer of an independent position but also early and clear articulation of institutional investment in the awardee.
  • These candidates would likely be significantly younger than most assistant professors, and therefore additional junior-faculty mentoring should be considered. These candidates would be observed by Dean-level officials from the outset and their progress supported early by the institution.
  • This model would allow a smooth transition between institutions and minimize disruptions to research progress by the awardee.

This plan has four characteristics that should assure awardee independence:

  • The tenure-track institution must provide input not only from the sponsor-colleague but also from dean-level administrators.
  • The sponsor-colleague research program cannot overlap with the awardee’s program.
  • The awardee’s research program must be part of a logical and articulated departmental/institutional growth model.
  • The tenure-track institution must promise at least a standard start-up package so the awardee has the opportunity to be independent.

Allison Hubel, PhD, University of Minnesota

The organizational structure of the NIH is centered around diseases or organ systems. As a result, the peer review system is largely organized in the same manner, which has led to the neglect of several areas of scientific inquiry that directly benefit the NIH mission. Study sections, composed principally of those whose research programs center on diseases or organ systems, are given grants to review from these “orphan areas,” and more often than not, these proposals are triaged. Thus, the existing peer review system becomes an obstacle to funding much-needed research in areas that are not disease- or organ-related.

Drug delivery and imaging are two fields that are not specifically disease- or organ-related. Both fields are well represented at NIH, and both have multiple standing study sections and frequent program announcements soliciting research in the field. This type of structure has resulted in tangible improvements in the diagnosis and treatment of disease. The success of the drug delivery and imaging initiatives also points toward the potential benefit of supporting research that is not organ- or disease-based.

Not all “orphan areas” are that fortunate. For example, although preservation of cells, tissues, and gametes is an area of importance to many initiatives at NIH, there are no standing study sections on cell preservation, and researchers in the field are at best ad hoc reviewers if they participate at all in the review process. Reviewers familiar with a specific cell type are more likely to evaluate a proposal on preservation than someone who is actually a researcher in the field. At present, only 12 R01 mechanism grants are actively funded in the area of cell preservation. It is not hard to conclude that the lack of representation in the peer review system results in a lack of funding in a given field.

One option is to have a group of scientists in a given field petition the NIH. The scientists could write a brief (2-3 page) white paper outlining the background of the field, significant challenges, stakeholders at NIH, and relevance to human health. The paper could be written in a manner that could easily generate a program announcement in the field. This type of document could be drafted and revised using online editing tools, permitting easy linkage to other references that could augment the overall content of the paper (e.g., a “wiki” approach). The white paper would then be sent to the stakeholders at NIH and could be reviewed quickly and electronically via the NIH internal review system. If there were sufficient interest in the program, and if stakeholders agreed to support an initiative in the area, a draft program announcement and a study section roster would be developed and posted for comment.

Open Discussion – Introduction

Dr. Tabak

I would like to share with you some emerging themes that we have been hearing, and I will explain how we arrived at them. Most people chose to use the website, but others opted to use e-mail or to send letters to Dr. Yamamoto, me, the NIH Director, or other members of the committee. Thus far, I have reviewed these e-mails and hard-copy letters but not anything that came through the website, so these emerging ideas are really a subset. They are not in any priority order, nor do they represent what NIH wants to do. They are presented only to facilitate further discussion.

Review Criteria and Focus; Application Structure

  • Reviewing the project versus funding the person
  • Retrospective versus prospective reviews
  • Separate application modes and review criteria for projects that lack preliminary data or precedent from those that are extensions of current work

Reviewer Mechanisms/Mechanics
New models of review

  • Two-stage technical subject matter/editorial board model
  • Electronic review
  • Virtual electronic applicant/reviewer dialogue to answer questions
  • Different types of review for different types of science
    1. Clinical research – Can’t mix basic research with clinical expertise on a panel; trials tend to be new submissions, which historically have low success rates
    2. Interdisciplinary research – Content expertise vs. big picture; interpreters (editorial board?)
    3. SBIR/STTR – Academics usually not the “right” persons to review small business
  • Have investigators designate one application as their “primary” application; use different criteria to review/fund “non-primary” applications
  • Provide more useful feedback to applications from new investigators, including clearer ranking for those who are unscored; eliminate “triage” (for new investigators)
  • Rethink the design of an original submission in view of low A0 success rates (“clogs” the queue)
  • Pre-applications to provide rapid identification and separation of competitive from non-competitive ideas, and meaningful advice to A0 applicants

Reviewers and Review Culture
Maximization of review(er) quality

  • How much information to provide reviewers for appropriate context (“firewall”)
  • Incentives for reviewers
  • Mandatory service?
  • More flexible service
  • Increased support?
  • Rating the reviewers/SRAs
  • Don’t publish reviewer identities (e.g., NSF) vs. identify reviewers

Scoring
Scoring issues: percentiles; binning; triaging; more information from scores desired

  • Consider psychometrics to devise voting scales and to group scores that are not statistically different
  • Consider adding additional dimensions to extract more information from review, e.g., two scores: application as received/ best potential score

Other
Limit the % effort that can be recovered on grants by principal investigators / increase the % effort required to be a principal investigator to 50+%

Open Discussion

Applicant Feedback

  • I propose a mechanism for grants that don’t get discussed and are now called unscored, a label that tells the applicant nothing. Everybody should get feedback on the perception of the study section and how their grant fared. What we are now encouraging people with grants that are not very good to do is to revise them and come back when they really have no chance, and that creates more grants to review in the future, which builds the backlog. If we discourage those people by telling them from the start that their ideas are no good, then we would limit the number of grants coming in and conduct a better review process.
  • It would be very valuable if applicants could see the posted reviews that apply to them before the meeting actually happens; the review panel could then discuss whether it agrees with the applicant’s rebuttal. You could at least eliminate some obvious errors by allowing the applicant an opportunity to respond. This would also shorten the review cycle, because you wouldn’t necessarily see a grant coming back just to fix something that could have been taken care of in advance.
  • We need to score all the applications and give the scores back to the PIs.
  • Is there a way that we can get it right the first time? That we can say: This concept is good; let’s approve it or give feedback so that only two cycles are involved, not three.

Applications

  • I suggest a 5-page application that is focused on the innovation and the ideas themselves. Eliminate the methods and the preliminary data.
  • Reducing the page limit to 10 probably is not a good idea, because with 10 pages for an R01 you sometimes don’t get all the information you need to review the application fully.
  • From a reviewer’s point of view, reading 25 pages is a lot of work. A balance between 25 pages and a very short 10 pages should be struck.
  • Twenty-five pages may be too long, but I worry about making it too short. A longer application particularly helps people who are changing fields, because it gives them more of a chance to explain what they really are doing.
  • Twenty-five pages works very well.
  • It is my understanding that the reason for pushing for shorter grants is so that more applications can be assigned to reviewers. Instead of having 10 grants at a study section, reviewers are now going to have 15 to 17, so you’re still asking a lot of work from the reviewers, and I am not sure that’s going to substantially improve the quality of the reviews.
  • As we reduce the number of pages in the proposal itself, are we increasing the number of pages that are coming back in the appendices?
  • In Europe, applicants are required to write very few pages; things are much more general, and these are basically “trust me” grants. The people who have lots of experience and lots of connections get funded; it is very difficult for new investigators or people outside the system to get funded at all. So I would suggest that while ten 25-page grants are burdensome to review, it is almost as difficult to get a good review of a 10-page proposal. A shorter grant proposal sometimes encourages this “trust me” attitude, and I think it would encourage the funding of more experienced investigators over new ones.
  • The proposal should be critiqued based on its merit or concept; I question whether we really need all the details of the methods.
  • I propose the idea of blind review, where the identity of the applicant is hidden from the reviewers; I believe that DOD has a system like this. Applicants would submit a very short two-page idea that is given to a review panel with no knowledge of who submitted it, and a subset of applicants would be invited to submit a whole proposal.

Electronic/Offsite vs. Face-to-Face Reviews

  • Regarding the electronic review, I just participated in one, and it took me more time to get three grants done that way than if it had been done face to face. In the case of at least one grant, it was essential to have some interaction between the reviewers around the table.
  • If you start reviewing grants like you review papers, it’s going to bring in vindictive natures and allow people to do unethical things. The face-to-face approach keeps that in check.
  • Meetings for study section people are important because it’s an opportunity to actually review the reviewers.
  • Face time is a much greater incentive for me than an honorarium.
  • In trying to fix the system, we may be doing some things that take away reviewer accountability, including electronic reviews and telephone conference reviews, where people no longer come together. In face-to-face meetings, a dynamic develops among colleagues within a study section, and they begin to understand not only how each of them thinks but also how each grades grant applications.

Evaluation

  • I like the idea of being able to go back at the end of the meeting and ask a study section: Did you do what you meant to do?
  • I think it’s a good thing that everyone knows who reviews proposals. I value correct, accurate reviews even if they are negative about my proposal. What really bothers me is the reviews that are not correct, not an accurate reflection of what the presentation was. So while the open review concept may swing the pendulum too far, maybe we can consider a way of evaluating the review process itself. Set up a feedback system and use it to screen reviewers.
  • As we change things, we need to apply the scientific method to looking at those changes. Are we achieving the goals we want to achieve? What kind of benchmarks are we going to use?
  • Have we measured the repeatability of our scoring system? Before we decide whether the system is broken, we ought to apply scientific methods to quantitate that.
  • We really need to have some ways to evaluate what is a good grant, what is productive, what is innovation. We need to have some quantitative markers that we can interject into the system.
  • It is not too early to start thinking about how one would evaluate the pilots. There should be a strong qualitative component to that.

Expertise

  • If a research project has clinical relevance, someone needs to be involved who can judge whether that project is feasible and relevant, whether that is a dentist, a physician, a nurse, or even someone from industry.
  • I’d like to encourage and increase efforts to include the knowledgeable and willing members of target populations such as persons living with HIV, particularly for prevention or clinical trial grants. As reviewers we found that their expertise, particularly around recruitment and retention of target populations, can be very valuable.
  • Have a truer peer review by including people with input on the clinical relevance of these proposals.

Funding

  • As Tony Scarpa has mentioned in his numerous slide presentations, the single most dramatic change in the last 50 years is the level of salary that must be paid to PIs. These personnel costs are the biggest investment, and a newer one is tuition costs. There is no mechanism in ongoing grants to pay for these burdens. NIH needs to come to grips with this problem and perhaps devise a policy that deals with PI salaries and related issues.
  • There are really three groups of investigators: Young, no previous grants; mid-level, one grant; senior, two or more grants. My recommendation is to have separate pay lines that are set automatically by NIH for these three groups of people.
  • We all want to encourage new investigators; the problem is, which part of the pie does the money come from? I suggest setting a lower limit on the percentage of effort (or an upper limit on the grant amount) so that the people who propose a grant will expend enough effort to carry out the project.
  • The reason internal resources are devoted to generating proposals is the cash flow back to the institution. If NIH just reduced the maximum percentage that could be supported on NIH proposals, institutions would devote fewer of their internal dollars to generating more NIH proposals.
  • Peer review is a filter through which submitted applications must pass. What we are trying to decide here is how to perfect this filter. But in fact, given human nature, the more constraints you introduce, the more scientific ideas you will lose. The solution is to give more money. Relax the filter, and only then will you get the best ideas.
  • NIH needs to make gradual changes to cap the amount of PI salary that can be charged to a grant.
  • Let’s look at indirect costs as a major source of cutting.

Innovation

  • While I consider innovation very important, it should not necessarily be demanded in the review process. If it is demanded, it must be very carefully and fully described to the reviewers.
  • The review process is often constrained by the five criteria that were developed several years ago. The distinction between significance and innovation does a disservice to research that builds on what we already know. Many things can be significant without being particularly innovative, and some research is diminished when innovation is treated as equivalent to significance. One way to get around that is to adjust the criteria to the mechanism.

Junior Investigator Reviews

  • We should review senior scientists and junior scientists a bit differently.
  • It would be useful to review junior investigators separately, maybe within the same meetings, so that the study section turns its focus to new investigators rather than considering them intermittently.

Logistics

  • Study sections need only meet one and a half days instead of two.
  • Don’t shorten the review meetings to only one day.
  • Have a smaller number of grants reviewed by ad hoc reviewers or junior people so these individuals will not be burdened with 12 or 13 grants.

Miscellaneous

  • Consider having one-year training grants for people who want to be career research nurses. Consider having research nurses review grants; put them on committees.
  • As the peer review process changes, I hope that the composite of review reflects the priorities of the NIH in the broadest sense.
  • The problem with a lot of institutions is that they are more impressed with grant dollars and indirect costs than they are interested in scholarship. I suggest thinking about ways in which some of the control is returned back to the PIs.
  • Keeping the anonymity in the process is very important.
  • Most councils are very reluctant to change study section decisions. I suggest that we need to make people on council more aware of the problems and have them police the comments from the study sections a little more, because there are some significant problems. We have all received strange comments, and there is no recourse.
  • NIH should fund the projects, not the persons.
  • Get mentors to train the applicants to write better grants so we don’t need to go through so many grants.
  • If an open market force emerged between the funding agency and the reviewer, we might in effect get consulting firms that provide feedback to investigators in areas considered to be of critical need and that keep track of where the government is in its funding priorities. This might reduce the time spent by the average investigator, who in many ways is spending less time on research than ever before.
  • I would be a bit cautious about making sure that everything has clinical relevance. The microbiology revolution came out of research that never could have been funded on clinical relevance. The opportunity for serendipity, luck, and just blue-sky investigation should be encouraged.
  • I am not sure what the logic was in abandoning the R29 mechanism, but in a sense a good idea was thrown away for bad reasons. The problem was the way the mechanism was developed, so we want to revisit the R29 mechanism but with more realistic criteria (and that might also apply to some of the K awards that are essentially only salary, very little support money, yet often require 75 percent effort).
  • Most of our study section time should be spent on those individuals between the top 5 percent and the bottom 50 percent.

Resubmissions

  • Standardize the review of revised applications. Have a portfolio system in which two people would be assigned to an investigator and would follow that investigator through the entire process.
  • When people review resubmissions, in most cases they should be given the entire pile of everything that happened before that.

Scoring/Ranking

  • The scoring system has to change. We have to scrap the 1.0-to-5.0 scale and use something else, such as a cumulative score or a value-added/value-subtracted score.
  • One person’s 2.0 is another person’s 1.6. I strongly recommend descriptors for each of the decimal points.
  • Score all applications.
  • When the reviewer does not know the outcome of the scoring during the discussion, it is very difficult to make objective judgments. My suggestion is that at the end of the study section, the scores be brought out and all reviewers be allowed to reevaluate them.
  • If we had real-time scoring we could equalize the way grants get scored and then rank the best to the worst and see what actually is going to get funded.
  • Imagine a proposal review process that doesn’t attempt to assign scores but instead categorizes applications as “exceptional,” “extremely good,” “good,” and “not good.” If the funding rate is 10 percent, there would have to be less than 10 percent demand in “exceptional.” The “extremely goods” would be funded on a lottery basis: if that category were oversubscribed by 10 to 1, then one-tenth of them would get funded. The “goods” would get some review comments, and the “not goods” would be discouraged. Advantages: ease of review, less intense competition, more innovation.
  • The compression of scores makes it exceedingly difficult to evaluate the most meritorious grants.
  • Weighting grant scores would provide a lot of flexibility at NIH in terms of weighting innovation versus approach in certain mechanisms.
  • We should examine the repeatability of all scores, using a scientific method.
  • In our own study section, the scores become compressed, leading to some of the problems people have mentioned, where the score on a revision ends up higher than the score from the previous review, because all the proposals being considered are revisions and there is some relative scoring among them. I hope the process allows the study section to fine-tune its approach, whether by ranking, submitter feedback during the process, or other means, so that it can become more and more accurate.
  • I’d like to propose an informal system of ranking grants rather than scoring them. For example, at the end of the meeting, you go back and compare: if we are going to fund 10 researchers, are these the 10 that we really want to fund? Also, Ken Deal has come up with a proposal suggesting a sort of bidding, where you rank proposals from 1 to 5, put all your 1s in one bin, and then, as a review panel, consider those in the top bin.

Staffing Panels

  • To attract senior investigators to return, even as ad hoc reviewers, the idea of serving twice a year is great. I think I’ve heard Dr. Scarpa mention the idea of attracting reviewers by giving them 6 or 8 more months on their own current grants; maybe that should be considered as a kind of bonus for spending a lot of time reviewing these grants.
  • To increase the participation of senior investigators, increase the number of participants in study sections so that reviewers need to serve only twice per year.
  • If you want more experienced people to serve on study sections, perhaps you just haven’t re-asked those who already have served. Many of us who are elderly would be very happy to serve on study sections again. Send out a campus-wide appeal for senior investigators, full professors at institutions, and members of societies. Go to the American Physiological Society or some of the other societies and ask for senior people to do reviews.
  • There is a great disincentive for people to be permanent members of study sections. We have too many ad hoc reviewers who don’t understand the process and don’t understand the dynamic that goes on in the review process. We need to provide some incentive, small or large: either allow reviewers to serve shorter terms when they have grants coming up for renewal, or give them an extra year or two as a kind of goodwill payment for their considerable efforts.
  • We probably ought to lean on all PIs to serve a mandatory term.
  • New study section members and ad hoc or junior people are often more open to new ideas and less impressed with the buddies they have worked with and seen at meetings over the years.
  • Require all PIs of R01s to serve on at least one review group per year.
  • If ad hoc members seem to be excellent reviewers, move them on as permanent members.

Training Reviewers

  • Reviewers are not given adequate advice and training on how to be study section members. They should not be reviewing grants without adequate training.
  • One of the big things that we have lost in study sections is attention to the concept behind the grant. We need to educate study sections to return to their real mission, which is to look for new, innovative science.
  • We mentioned junior investigators and the difficulty they have getting funded. Including junior investigators in review groups is an important part of their education. We talk about training for study sections, and I think serving could provide that training.
  • The online training that is available is good. In addition, would it be possible to offer training regionally or at professional meetings?
  • New R01 investigators could be invited to observe a study section. This could serve as a training session and as an opportunity to improve their own grants.

Closing Remarks

Dr. Yamamoto

I thank all of you for your participation, time, energy, and ideas. These are exactly the kinds of contributions we need most to make this process work.

Don’t feel that as this meeting closes your opportunity for comment is also closing. You know how to find us; we are going to be at this for some time, and we have other regional meetings ahead. We will be collecting feedback like this and examining it for the rest of the calendar year, at which point the two committees will come together and turn some of these ideas into experiments. As we decide which ideas to experiment with, we will be considering their consequences. Each of these points has at least one other side, and we will be depending on you to point out to us what those are.

Dr. Tabak

Thanks to my colleagues who staffed the flip charts; I appreciate their efforts. Thanks also to the contract support team, who have been very busy capturing everything. I am very appreciative of all of your efforts.

We all heard a lot of good ideas here and a great deal of honesty, which is very important. As Dr. Yamamoto said, this is an ongoing process. I can’t hide from you; it is pretty easy to contact me. Please send me additional comments, suggestions, etc. Thank you all very much.
