
Framework for Program Assessment (Evaluation and Review)[1]

Contact Information:
Linda E. Kupfer, Evaluation Officer
Fogarty International Center
National Institutes of Health
kupferl@mail.nih.gov
Initial Document: December 2002
Modified: April 2005

 

A Performance-Based Review Process


    I. Goals and Objectives of Assessment

Goals:

The goals of assessment at the Fogarty International Center (FIC) are to:

  • Provide the tools and information necessary to improve each FIC-sponsored program in pursuit of the FIC mission.
  • Document the progress and successes of the programs.
  • Provide new directions for FIC programs.
  • Identify the role of the programs in fulfilling the FIC mission: the Fogarty International Center promotes and supports scientific research and training internationally to reduce disparities in global health.
  • Identify commonalities among FIC programs.

    A. Guiding Principles:


    • Assessment is a continuous quality-improvement and review process.
    • The primary responsibility for continuous assessment, reporting and analysis rests at the Program Officer (PO) level.
    • Assessment will focus on outputs, outcomes, and impacts, and on mechanisms to ensure that these occur. While reporting of metrics (number of trainees achieving advanced degrees, number of publications, etc.) is necessary, meeting stated metric goals can become a check-off exercise with little accomplished. Reviews will therefore go beyond metrics and will depend on the basic principle of external peer review and recommendations. Evaluation, on the other hand, will include a major component of data collection and analysis.
    • The assessment process will consider innovation, flexibility and risk-taking positively.
    • Programs must be assessed against their own goals and objectives, taking into account fiscal resources and granting mechanisms.
    • Review and evaluation will use retrospective measurements of the achievements over a certain time period (eventually a cyclical period) based in part on measured quantitative outputs, outcomes and impacts (metrics), as well as success stories and more qualitative outputs, outcomes and impacts. This information will be used to make recommendations for the future.

    B. Specific Objectives:


    • To stimulate the performance of programs at FIC and to encourage innovative approaches to address problems and issues relating to global health disparities.
    • To demonstrate sound stewardship of federal funds and the programs they support.
    • To produce guidance for program officers and FIC management to strengthen programs, improve performance, enhance funding decisions, demonstrate public health and economic benefits, promote sound program policies, and evaluate mature programs.
    • To provide mechanisms to identify program accomplishments to FIC, NIH, HHS, funding agencies, national and international partners, and the US Congress.
    • To identify, share, and stimulate best management practices to improve performance across the FIC programs as a whole.
    • To publish the results of the reviews and evaluations in peer-reviewed journals.



      II. Elements and Basis for Review and Evaluation

    The review and evaluation process is a continuum over a period of time (to be agreed to). It begins with the FIC Strategic Plan. Program plans, in the form of a well-developed Request for Applications (RFA) and Program Announcements (PAs), are then developed with the input of the stakeholder community. The program officer will be in charge of ensuring that the appropriate stakeholder community is involved in the development of the program plan and the RFA. The program officer will monitor the progress of trainees and projects and may visit a project to interact with its management team, faculty, staff, institutional administration, and constituents. If mutually decided, a specialized team of experts can visit a project to advise it and make recommendations about specific elements and/or issues (a review visit). This type of intervention can help a project correct itself mid-course rather than wait until the end of the project to be terminated for its weaknesses. The process will culminate with a visit by a group of experts, a Review Panel (RP), during year 4 or 5 of the program (this will differ from program to program and will depend on the program cycle) or at another appropriate time in the program. During year 9 or 10 of the program, a program evaluation will take place that will include data collection and analysis by a contractor who specializes in evaluation.

    A key to effective program review is the degree to which the review is normalized to the resources, objectives and program planning of the individual program. Given that each program has different financial resources, utilizes different talent pools with various specialties, faces different issues in host countries, works under unique institutional policies, and uses different approaches to reducing global health disparities, the review should be tailored to take program variability into account.

    A. Program Development:


    The foundation for individual program review is a well-developed program plan that culminates in an RFA. Importantly, planning a program will normally require a two-year lead time to allow sufficient input, partnership development, and administrative review. Each program has its own RFA, which can act as a strategic plan for that program. The RFA is keyed to the FIC and HHS strategic plans as well as to the strategic plans of the program partners. The importance of planning cannot be stressed enough. It can be based on experience, past program results, and stakeholder needs and expectations. Each program should have a plan developed that addresses its goals and objectives. Although this plan need not be formalized and written down, having a written form will ensure continuity for the program. The program plan can be informed through consultations, workshops, and meetings. It should specifically address resource needs, managing the program to meet those needs, and data needs, including data gathering, analysis, and storage.

    A program plan, reflecting the input of management and constituents, will include:

    • Vision and focus of where the program is heading and why;
    • Backgrounds on issues and mechanisms for establishing priorities for investment of resources; and
    • Goals, objectives, and performance milestone targets that provide guidance for evaluating program performance.

    Planning is fundamental to program assessment. The understanding, communication, and data-collection processes necessary to meet the basic goals of the program must be developed. A program should be reassessed, with new planning (planning workshops, planning meetings, etc.), every five years or as appropriate. Network meetings can also be used as part of the continuous review and planning for a program.

    B. Self-Assessment Process


    Each program should conduct self-assessment and analysis on a regular basis between the formal program assessments. Each program's self-assessment will be based on performance milestones unique to that program, as well as the criteria given below for all programs. Annual self-assessment can be accomplished at network meetings or following the submission of progress reports from the projects under the program. It is important that the self-assessment include identification of results, potential problems, and corrective mechanisms. Self-assessment and program analysis serve as a checkpoint in preparation for the program review and program evaluation, which will occur at regular intervals. Analysis of program data should be conducted in conjunction with self-analysis. In some cases, both collection and analysis of program data may need to be contracted out.

    Data collected by the program could include:

    • Major Research Accomplishments--Publications in high-profile journals; citations; training of trainees; successful new grant applications; presentations at international meetings (and abstracts).
    • Career accomplishments--Tracking the path and impact of graduates who have entered a health field, research, academia or government; percentage of trainees returning to country of origin (brain drain issue); membership on scientific or policy committees; membership on advisory panels; analysis against control groups.
    • Clinical Benefits--Improved understanding of new or existing diseases; improved tools to detect, diagnose, treat, and prevent disease; development of treatment or treatment regime for disease.
    • Institutional Changes--Creation of networks; collaboration among labs; building of infrastructure (labs, departments, research groups); provision of critical mass; establishment of political support for the institution or project; establishment of a lab as a regional center.
    • Changing the Research/Health Care Agenda--Documentation of the changes in approach to solving global health care issues (e.g. laws impacted or changed, policies created or altered, awareness altered; media attention), better public health programs.
    • Information Use--Documentation of how, when and in what way information was used by the target constituents to implement and/or change the ways they conduct business, use resources and/or change the quality of life, improve health and treat disease.
    • Qualitative Effects--Qualitative description of impact of program on training, health, and social effects--success stories.
    • Other

    C. Reporting Framework



    The key to continuous assessment is regular communication between the PI and the PO. Periodic reporting by the PI should be a routine part of this communication in order to document accomplishments and impacts in meeting program goals. It is this mechanism that specifically allows qualitative measures of accomplishment to be addressed, such as health and/or economic gains made through implementation of program results. Reporting following significant project events should be mandated (e.g., publications in refereed journals, significant research findings, health care advances resulting from FIC grant activities, technical reports, workshops, special events). Fogarty is currently working on a standard format of quantitative and qualitative measurements that will allow analysis across many programs.
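
    As a purely illustrative sketch, and not an FIC system or specification, the kind of standard, structured reporting record described above could be represented as follows; every field name below is hypothetical:

    # Illustrative sketch only: a hypothetical structured record for periodic PI
    # reporting, combining quantitative metrics with qualitative success stories.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProgressReport:
        program_id: str                  # hypothetical program identifier
        reporting_period: str            # e.g., "2004, first half"
        publications: List[str] = field(default_factory=list)        # refereed citations
        trainees_active: int = 0
        degrees_conferred: int = 0
        new_grants_awarded: int = 0
        significant_events: List[str] = field(default_factory=list)  # workshops, findings
        success_stories: List[str] = field(default_factory=list)     # qualitative impacts

    # Example: one periodic report from a PI to the PO.
    report = ProgressReport(
        program_id="EXAMPLE-01",
        reporting_period="2004, first half",
        publications=["Doe J. et al. (2004), example citation"],
        trainees_active=12,
        degrees_conferred=2,
        success_stories=["Trainee-led laboratory designated a regional reference center"],
    )

    A structured record of this kind would make the quantitative fields directly comparable across programs while preserving the qualitative narrative alongside them.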



      III. Assessment Criteria

    A. Criteria for Assessment

    Continuing assessment is designed to strengthen, improve, and enhance the impact of FIC. There are several important criteria that reflect the effectiveness of FIC programs and establish benchmarks that describe expected performance levels:

    Areas of Assessment:
      Program Planning
      Program Management
        Project Selection
        Recruiting Talent
        Program Components
        Institutional Setting
        Human Subjects and Fiscal Accountability
      Partnerships and Communication
      Results

    Each is described in detail below:

    1. Program Planning

    Effective programs will use the strategic planning framework of the FIC, as well as that of any partners, as a basis for developing their RFA, reflecting the needs of the U.S. scientific community and host countries as identified in collaboration with stakeholders such as other government agencies, foreign scientists, and experts in the field. Effective planning may also involve regional programs. Partnerships with other agencies and organizations are considered important. Program plans will be reviewed annually and amended as necessary. These changes will be communicated to all involved parties (FIC administration, NIH partners, PIs, etc.). Sufficient time should be allotted to the planning process to maximize input and RFA preparation. Program planning will involve input from all constituencies important to the program.

    Suggested Indicators of Performance
    • Evidence of a planning process and a plan (priority determination, clear articulation)
    • Relevance of program to FIC, NIH IC, HHS strategic plans
    • Stakeholder involvement (numbers, duration, roles) in planning
    • Integration of input into planning
    • Reevaluation of program goals over time
    • Strategic planning process
    Suggested Questions
    • What was the strategic planning process?
    • What role do stakeholders have in setting the goals? The priorities?
    • Who provided input for the initiative? How were stakeholders identified? How were they involved?
    • How are modifications to the initiative implemented?
    • Are the goals ambitious, risk-taking goals? Do they convey vision?
    • How do goals fit into FIC, NIH, HHS strategic plans and initiatives?
    2. Program Management
      a. Project Selection:   The program incorporates an excellent and relevant peer-review process, selecting those proposals that receive consistently high marks for merit, application, and priority fit. The selection/review process should take into account host country needs in the program's scientific area. The program officer's role should be well defined.

      Suggested Indicators of Performance:
    • Review process including: composition of panels, review criteria, quality of feedback to PI, amount of time allowed for review, conflict of interest issues and involvement of program officer.

      Suggested Questions
    • Under what institute/center did the review take place?
    • Is the composition of the review panel appropriate to the program?
    • If the program was interdisciplinary in nature, was the panel adequate to address all facets of the program?
    • Are the review criteria appropriate, and does the panel employ them? Were international issues taken into account?
    • What was the role of the program officer in the selection of the panel? In the review?

      b. Recruiting Talent:   Every program will attract a variety of talent. The best efforts will involve the best talent. The program must have mechanisms in place to identify and attract the best and most appropriate talent available.

      Suggested Indicators of Performance
    • Recruitment of new/young investigators; recruitment of foreign investigators; success rate; minority applications; interdisciplinary teams; turnover of investigators

      Suggested Questions
    • How does the program advertise its RFA?
    • How does the program make certain its RFA attracts new talent, international talent and interdisciplinary teams?

      c. Program Components:   Each program is made up of various projects that together form the program. It is the role of the PO to ensure that the various program components have a chance to interact and gain experience from one another. The whole program should have a greater effect than the sum of its parts.

      Suggested Indicators of Performance
    • Network meetings; other meetings or venues at which PIs and/or trainees get together

      Suggested Questions
    • Are there networking opportunities available under the Program?
    • What are some successful interactions that have been encouraged?

    d. Institutional Setting:  Programs vary in their institutional setting and institutional support. The program should be well supported by both the academic institution(s) involved and the federal institutions involved. There must be appropriate business practices available at both the domestic and the foreign institution for grant implementation to go smoothly.

    Suggested Indicators of Performance

    • Matching funds; mentorship support; laboratory support; administrative support and good business practices

    Suggested Questions
    • Does the institution provide additional or matching funds for the program?
    • How supportive is the institution of the program?
    • How involved is the administration of the institution with the program?

      e. Human Subjects and Fiscal Accountability:   Programs should demonstrate that they have appropriate mechanisms in place to account for federal funds and are properly documenting protocol reviews for human subjects.

      Suggested Indicators of Performance
    • Presence of operational IRB; good accounting practices; good documentation practices; assurance that all intended funding is reaching the foreign collaborator and the trainees.

      Suggested Questions
    • Is there need for IRB review in this program? If so, does the institution (US and foreign) have a functional IRB? What are its credentials? Have they reviewed projects under this program?
    • What role does the foreign institution play with regard to accounting under this project? How well are expenses documented? Is the funding reaching the foreign collaborator and the trainees? Is the funding being used to support the agreed activities?

    3. Partnerships and Communication
      a. Partnerships:   Partnerships (federal, national, and international) are essential to addressing global health issues. They should be attracted, nurtured, and maintained, and they will be examined during the assessment process.

      Suggested Indicators of Performance
    • Numbers of partnerships; different types of partnerships (NIH, HHS, other federal, international, interdisciplinary, NGOs, industry); involvement of partners in the development of strategic goals; funds from partners; cost of partnership

      Suggested Questions
    • How were partnerships developed? What role did management play?
    • Do the partners provide a significant contribution in funding or human resources?
    • Could the effort have succeeded without the partnership?
    • Has the program established long-term relationships that continue to be productive?
    • What is the cost/benefit ratio of the partnerships?

      b. Communications:   To be fully successful, scientific results must be communicated to the user community and utilized. During the assessment, the program's link to the user community will be reviewed, and the implementation of the science into policy or other working frameworks will be assessed.

      Suggested Indicators of Performance
    • Appropriate community input into the strategic planning; informational meeting/training sessions held with community; involvement of community on advisory board of program; involvement of community in selection of trainees; involvement of program in the community; demographics of contacts and efforts; requests for information, presentations; community needs surveys; user community feedback (mechanisms and tracking)

      Suggested Questions
    • Has the program defined its user community? Are they identified in the RFA? Do the projects have plans to interact with the user community?
    • Are needs assessments of the community conducted?
    • How does the program maintain contacts with the user community?
    • What methods and tools does the program use to transmit scientific findings and results? How effective are they? Is the program at the forefront of using new technologies to improve its information-transfer capabilities? Does the program present results and findings in ways useful to the community?
    • What role do users have in reviewing the progress of the program?
    • What are the communication efforts the program makes?
    • How satisfied is the user community? Are they getting the information they need? When they need it? If not, why not?
    • How do programs assess their effectiveness in working with the user community?
    • Do the programs have flexibility to adjust and react to unanticipated events that require new research and outreach activities?

    4. Results of the Program

    Depending upon the age of a program, significant results will fall into different categories. The following should be documented, reported, analyzed, and evaluated:

      a. Program Inputs--The total of the resources put into the program (funds and in-kind input from partners nationally and internationally--any "enabling resources").

      b. Program Outputs--The program must be managed to produce program outputs, which are the immediate, observable products of research and training activities, such as publications, patent submissions, citations, and degrees conferred. In the best sense, quantitative indices of output are tools for the program: they allow POs and PIs to track changes, highlight progress, and spot potential problems. Trends and variations in output may be much more significant than observations of the steady state. Fogarty may eventually use some of these data for benchmarking purposes. (expected for younger and older programs)

      Suggested Indicators of Performance
    • Number and list of publications (journal articles, book chapters, reports, etc.); number and list of presentations; number of trainees and fields of training; number and type of degrees/certificates earned; number and list of meetings and attendance at meetings.

      Suggested Questions
    • What types of publications have been produced, and how have they been utilized and distributed? Are the publications a direct result of the training?
    • What types of students have been trained, in what areas, and what degrees have been earned?
    • What meetings have been held? Who attended? What areas were discussed? Was any evaluation conducted?

      c. Program Outcomes--Longer-term results to which a program is designed to contribute, such as strengthened research capacity within the U.S. and foreign laboratories, effective transfer of scientific principles and methods, and success in obtaining or attracting further scientific and/or international support. (expected for more mature programs)

      Suggested Indicators of Performance
    • Number of laboratories started; number of new grants or new funding procured; scientific methods discovered (number and type); scientific departments started or strengthened; awards received; careers enhanced.

      Suggested Questions
    • In what scientific areas were laboratories started? Was this capability totally lacking, or is it supplemental? Do the labs support training? Are they well funded and supported by the institution? What percentage of their time do the PIs spend conducting research vs. administration and other duties? Is the laboratory a direct result of the training?
    • What scientific principles were developed? Who is using them? Are they used internationally? Is the methodology a direct result of the training?
    • Where does the new grant funding, and new funding in general, come from? National or international sources? Is the new research funding a direct result of the training?
    • Did any trainees or PIs receive awards as a result of training? If so, list and describe how training influenced this.
    • Did the training influence any trainees' careers? How? Were they all promotions?

      d. Program Impacts--The total consequences of the program, including unanticipated benefits. These can include the influence of research activities on clinical or public health practice or on health policy, success in establishing a sustainable career structure, effects on the career paths of trainees, changes in health care systems, and alterations in health care laws. Demonstrating impacts requires more complex analysis and synthesis of multiple lines of evidence of both a quantitative and qualitative nature. (expected for the most mature programs)

      Suggested Indicators of Performance
    • New policies adopted or advanced; new clinical procedures adopted; new career structures in place; alteration of health care systems; alteration of health care laws

      Suggested Questions
    • What were the new policies adopted as a result of training provided by this program? How was the trainee or training involved with the policy?
    • What new health practice was adopted as a result of training and how was this linked to the training?
    • Were any health laws changed as a result of the program and how did this come about?
    • Are there any economic impacts that can be demonstrated as a result of training? Environmental impacts? Health care impacts (laws, policies, systems, etc.)? How do these relate to the training?
    • Are there any success stories (using the metrics described and others as needed)? How do these relate to the training?
    • Is impact local? National? Regional? International?
    • Are partners involved in impact? Who are they and how are they involved?



      IV. Assessment Roles

    A. Role of the Fogarty International Center Advisory Board (FICAB) and FIC Administration

    The review and evaluation process and schedule should be proposed at the program officer level and approved at the FIC administration level. It is anticipated that the Advisory Board (AB) will play a key role in assessment, either by chairing or co-chairing the program reviews or by participating in the review teams in some official capacity. The Program Review Panels (PRPs) can thus be considered subcommittees of the FIC Advisory Board, which is chartered under the Federal Advisory Committee Act (FACA). Reports developed by the review panels will be approved and distributed by FIC administration in conjunction with the FIC Advisory Board. FIC will annually communicate the results of all FIC assessments to the Director of NIH, the Secretary of HHS, and the Congress.

    B. The Role of the Program Officer (PO)

    The FIC has ultimate responsibility for the excellence and effectiveness of FIC programs. The PO will be responsible for the day-to-day assessment and analysis of the program progress. The PO will work with the Evaluation Officer to analyze program progress, synthesize program results, and to set up the review or evaluation. Together they will determine the appropriate outside experts to be part of the review as well as determine specifics of the review (e.g., dates, sites, presentations, and agenda).

    C. Role of the Evaluation Officer (EO)

    The evaluation officer, in coordination with the FIC POs and the FIC administration, will be responsible for setting the annual schedule for review and evaluation. She will apply for all funds for reviews and evaluations and will work out all budgets with the POs. She will work with the PO to set the agenda and schedule for the reviews and will provide training for review chairs and members. She will work with the review panel to conduct the review and write the final report, and with FIC administration on the annual assessment report to the Director of NIH, the Secretary of HHS, and the Congress. She will schedule an annual meeting of FIC staff to discuss all the assessments that have taken place in a given year. She will work with other NIH ICs and other experts on assessment to ensure that the Fogarty assessments remain current, and she will serve as the planner and interface for program evaluations. The EO will also be available to work with the PO on program analysis and synthesis of program results.

    D. Program Advisory Visit--Make-up and Role

    Program advisory visits are designed more informally, to enable program officers to make informed mid-course corrections to projects or programs in their portfolios. They should be small in scale and targeted to a specific question or set of questions that the program officer feels needs to be addressed. They do not need to be led by an FIC Advisory Board member, but that is an option. There should be a summary report following each advisory visit.

    E. Program Review Panels (PRPs)--Make-up and Role

    At five-year intervals, a visiting committee, the Program Review Panel (PRP), will conduct a formal review of the FIC programs using the formal framework and criteria given in Section III. The panel will be made up of 4-8 members: at least one, and as many as two, FICAB members, and 3 to 6 experienced administrators and decision-makers, health care professionals, and scientists, as well as people experienced in program review from other disciplines as appropriate. The PRP can include, but is not limited to, persons such as:

    • Deans or associate deans of appropriate colleges or universities
    • World-renowned scientists in appropriate fields
    • Executives of national and international health care or related agencies
    • Executives of national or international health care NGOs
    • Officers of appropriate commercial and industrial entities
    • Recognized medical practitioners in appropriate fields
    • Expert international scientists or administrators who are stakeholders or partners in the program
    • Scientists from partner institutions (ICs)
    • Representatives with fiscal expertise (e.g., persons involved with grants management)

    PRP members should be highly respected and recognized in their fields. Panel membership should be jointly determined and agreed to by FIC staff and the AB, as well as the evaluation officer. An individual respected by all parties, very familiar with FIC objectives and programs, and with a longer-term commitment to FIC should chair the PRP.

    Using any and all material available and necessary to conduct its review, the PRP should do the following:

    • To document and report on the program's overall productivity and accomplishments relative to FIC's mission and goals and to the program's RFA and level of support.
    • To assess the program's overall scientific or educational strength (e.g., by the significance of scientific or public health related advances and impacts, the rigor of the planning process, the level to which the best talent and resources have been brought to bear on the program's goals and objectives and the success in meeting them, the rigor of the self-assessment process, and publications, patents, and other metrics of output).
    • To assess the effectiveness of the program's management in meeting stated goals and objectives and in providing overall leadership for the program.
    • To assess the program's partnerships and linkages, both nationally and internationally.
    • To assess the program's position and role in its host institution and host country.
    • To assess, considering all the above, the program's potential for growth.
    • Based on these assessments, the PRP should provide the PO and FIC management a comprehensive written report that documents the program's strengths and weaknesses, makes specific suggestions for program improvement, reports program accomplishments, and provides an overall assessment using the criteria developed in Section III. The PRP shall have a draft assessment report ready upon leaving the program assessment. A final report, the responsibility of the PRP Chair, shall be due to the PO and the evaluation officer within 30 working days of the review exercise. Upon receiving the report, the PO will have a reasonable time, 21 working days, to review it, make factual comments, and if necessary write a response. A final version of the report with the PO's input is due to the FIC administration within 60 working days of the review. Upon the approval of the FIC administration, the report will become part of the official record of the program.




      [1] For the purposes of this paper, the term assessment is defined as the valuation of a program or procedure made by experienced persons according to their discretion. The process of assessment can be accomplished either through a review or an evaluation. An evaluation is defined here as a large-scale, semi-quantitative judgment of a program done after a significant period of program operation; a review is defined as a more qualitative inspection of a program conducted after a relatively short period of program operation.
