Advisory Committee on Interdisciplinary, Community-Based Linkages Meeting

April 13-15, 2003
Washington Terrace Hotel
Washington, D.C.

Minutes

Approved as final:  June 2003

Advisory Committee Members in Attendance:
Helen Caulton-Harris, MA, MEd
Katherine Flores, MD
Lawrence Harkless, DPM
Teresa Hines, MPH
Richard Oliver, PhD
Ricardo Perez
Joseph Scaletti, PhD
Sabra Slaughter, PhD
Richard Wansley, PhD

Note: The members in attendance did not constitute a quorum.

HRSA Staff:
Lynn R. Wegman, MPA
Jennifer Donovan
Louisiana Jones

Overview

The focus of the Advisory Committee meeting was the use of performance measures and outcomes to demonstrate program effectiveness.  The Advisory Committee heard the following presentations on this topic:

  • Discussion of March 18-19 Bureau of Health Professions (BHPr) Outcomes Meetings
    Richard Wansley and Lynn Wegman
  • Insights on Policymakers’ Perceptions of Performance Measures and Outcomes
    Robert H. Bradner, Holland and Knight, former Chief of Staff for Congressman John E. Porter
  • Health Resources and Services Administration (HRSA) Update
    Dennis Williams, Deputy Administrator, HRSA
  • BHPr Update, Development of New BHPr Strategic Plan, Overview of Maternal and Child Health Bureau’s Title V Information System
    CAPT Kerry Nesseler, RN, MS, Associate Administrator for Health Professions, HRSA
  • BHPr’s Performance Measurement Workgroup
    Janet Schiller, EdD
  • Evaluation and Performance Measurement: Philanthropy’s Perspective
    Laura Leviton, PhD, Senior Program Officer, Robert Wood Johnson Foundation
  • Current BHPr Data Collection and Evaluation Efforts
    Lou Coccodrilli, BHPr, HRSA
  • Agency for Healthcare Research and Quality (AHRQ), National Quality Measures Clearinghouse
    Jean Slutsky, Acting Director, Center for Practice and Technology Assessment, AHRQ

Based on the presentations, Advisory Committee members identified topic areas for further exploration.

Future Activities

  • Second Report

The Advisory Committee’s second report is currently being cleared by HRSA.

  • Bioterrorism Report

Two summaries (a long version and a short version) were developed of the Advisory Committee’s December 2002 meeting that addressed bioterrorism.  The Advisory Committee agreed that the long version should be incorporated into the Committee’s third report.  The short, two-page version will be sent to Secretary Thompson to keep him informed of the Committee’s activities in this area.  A cover letter accompanying the short version will encourage the Administration to consider the Advisory Committee’s recommendations in future efforts related to bioterrorism.

  • Future Meetings

At the September 2003 meeting, the Advisory Committee will address the topic of Cultural Competency and Diversity, with a focus on how it relates to health status outcomes. The Advisory Committee suggested that the following topics be addressed at the meeting:

  • When a person is culturally competent, does that impact health status outcomes?
  • Why do Titles VII and VIII programs focus on training people from minority communities? 
  • Who are we talking about when we say underrepresented minorities?

Possible speakers included representatives from the Office of Minority Health, Indian Health Service, and the Centers of Excellence.

Other possible topics considered for future meetings were:

  • Sustainability/Self Sufficiency; and
  • Collaboration (with the NHSC and Community Health Centers, and how it reflects the President’s Community Health Center Initiative).

Members also suggested additional activities that might be incorporated into future meetings.  These included:

  • Collect examples from programs that demonstrate impact on health outcomes; and
  • Have outgoing members provide an evaluation of the effectiveness of the committee for inclusion in the third report.

In preparation for the September 2003 meeting, the following steps will be taken:

  • Send minutes from the April meeting to Advisory Committee members;
  • Conference call (mid-June) – gain input from members on the outcomes meeting, prioritize the list of topic areas, and clarify the purpose of the August call; and
  • Conference call (early August) – prepare for the September meeting.
  • New Advisory Committee Members

Four candidates have been invited to fill existing vacancies on the committee, and their responses should be received in the near future.  Ten possible candidates are currently being reviewed by the Agency.  These ten new members would replace existing members.  It is possible that 14 new members will attend the September 2003 meeting.  Current Advisory Committee members expressed concern about the impact of such a large influx of new members on the work of the committee, especially since the committee’s third report will be due shortly after the September meeting.  Committee members asked BHPr to consider including both old and new members at the September meeting or delaying the seating of the new members until after the third report has been completed.

The Advisory Committee also considered ways to orient new members.  Providing new members with an existing or former member as a “mentor” was suggested.

Issues for Further Exploration

Advisory Committee members identified topic areas that should be further explored as part of an ongoing discussion of performance measures and outcomes.

  1. Outcome measurement should be used for identifying unmet needs.

Discussion:  The data that programs are required to collect and report are not always the most useful to the programs in terms of conducting needs assessment activities and making adjustments to increase program effectiveness.  Outcome measurement should focus on program enhancement, not just measure program effort (bean counting).

  2. Endorse establishing a statement of common purpose and overarching goals.

Discussion:  The programs lack an overarching purpose, and there is no consensus on what they should do.  It is hard to evaluate the programs without an overarching goal.

  3. Take process steps in developing measures that ultimately address changes in health status.
  4. Establish logic models that permit inferences (using current data).
  5. Seek out that which we can take credit for now.

Discussion:  Explore what outcomes the programs can take credit for now, especially in areas that are unique to the program, such as cultural competency training or workforce diversity.  The data that are currently collected do not allow for the measurement of outcomes.

  6. Develop base of “evidence” (clearinghouse).

Discussion:  A database needs to be developed that links what programs do to outcomes in health status (look to existing data and establish a clearinghouse of this evidence).

  7. Create a central repository of reported data and develop a report for each program (to show impact).
  8. Administration/management of data should be internalized within the Bureau rather than handled by an external contractor.
  9. Establish measurable objectives for each grant program, and set a protocol for mid-course adjustments.

Discussion:  What are the criteria for specific programs?  They need to be reviewed to see if they are appropriate and if it is possible to identify measurable outcomes.

  10. Solicit/establish database of community response/end users.

Discussion:  A mechanism is needed for community (end users) input on their needs and what is working.

  11. Capture innovative approaches and establish means to disseminate this information, as with clinical trials (could be a searchable web database).

Discussion: The innovations developed by the programs are as important as other outcomes.  However, there isn’t always a way to capture and report program efforts in this area.  “Numbers are not important, it is the innovation.”

  12. Establish a protocol to identify and describe “unexpected” outcomes.

Discussion:  Programs need to be able to report unexpected outcomes and the implications these outcomes have for the program. 

  13. Explore economic parameters/impact of training programs (part of database and “evidence”).

Discussion:  It is a problem that the programs’ outcomes are not understandable to policymakers (Congress/OMB).    Currently the “story” of the programs is not being told effectively. There is a need to dispel the myth that these programs would be continued in the private sector if the Federal subsidies were removed.

  14. Construct a “story” that really describes the impact of programs (base the “story” on data and reliable inferences).
  15. Dispel the myth that without Federal support, program activities would still be conducted.
  16. Estimate the impact of loss associated with reduction or elimination of Federal grant programs/activities.

Discussion:  In the current funding environment, characterized by cuts at all levels (Federal, State, etc.), programs face a dilemma.  If they demonstrate effectiveness with less money, there may not be strong motivation to restore lost funding.

  17. Collect “anecdotal” accounts that help explain data.
  18. Identify partners and related effects/impact resulting from these partnerships.

Discussion:  The programs are part of an entire network of providers and programs that have an effect on health status.  Often this impact has a prevention focus, such as the CHIPS program.

  19. Identify examples/models of linking training to direct services (internal/external through partnerships).
  20. Current grantees should be involved in the process to identify outcome measures.
  21. Translate findings and outcomes into “easily digestible” statements.
  22. Encourage BHPr to create a glossary of terms for evaluation/outcomes measurement.
  23. Establish agreed-upon definitions of activities/outcomes to be measured (e.g., cultural competency).
  24. Coordinate evaluation measurement between different sources of support (dialogue between Federal/State/local/private).
      • The Federal process for developing protocols should recognize other funders as stakeholders.
      • It is incumbent on the grantees to recognize the data needs of the funders.

Discussion: Most of the grantees have multiple means of support and there needs to be some consistency across funders about outcomes (to reduce reporting burden).  State representatives may need to be involved in the process of identifying outcomes.

  25. Request that the Federal Agency establish guidelines for reasonable costs associated with evaluation/outcomes measurement for the grantees (as a direct cost).
  26. Cost-share match can be calculated either as a negotiated rate or as actual costs.
  27. Program outcomes and outputs should shape policy recommendations/strategic directions.

Discussion:  What is the purpose of the outcome measures, and how do they get translated into subsequent action?  Some advisory committee members believed that outcomes are a measurement for determining whether a program has accomplished what it said it would accomplish.  Other advisory committee members see outcome measures as a means of improving programs (though not as a way of learning to do the wrong things better).

  28. Build flexibility into authorization and administration of grants (part of funds).
  29. Outcome measurements should be linked to actions that improve program efforts.
  30. Characterize the Advisory Committee’s recommendations to meet short-term needs (OMB) and longer-term needs.

Discussion: Who is the audience for the outcomes?  Congress? OMB? What is currently being reported is not selling the program.   The Advisory Committee will have to identify what can and cannot be done in the short and long term.  The Advisory Committee needs to acknowledge the value of the programs but also acknowledge that it agrees with OMB that there needs to be greater emphasis on outcomes that demonstrate the impact of the programs.

  31. Develop a means to broaden the base of stakeholders who buy into outcome measurement and quality improvement, including local/field input.

Discussion:  Does the community know about the OMB report and its findings?  Knowledge of the report would motivate grantees in the field to become more outcome-oriented and provide information to help tell the story.

  32. Identify common performance measures and then develop unique local performance goals.

Presentations

Discussion of March 18-19 BHPr Outcomes Meetings

Richard Wansley and Lynn Wegman

BHPr is currently developing a strategic plan that includes looking at the Bureau’s overall mission.  A key aspect of this effort is the development of logic models, which serve as tools for identifying program outcomes.  These efforts are modeled after similar planning activities carried out by HRSA’s Maternal and Child Health Bureau (MCHB).  BHPr anticipates that this will be a five-year process and will work with stakeholders to ensure that appropriate measures are identified.

As part of this process, BHPr held two one-day meetings in March focusing on program performance and measuring outcomes.  Participants at the first meeting included chairpersons of Bureau advisory committees, including Richard Wansley.  Participants on the second day represented various stakeholder groups.

The meetings provided an opportunity to discuss roles in addressing the emerging challenges facing health care providers and those that train them.  The focus was to explore how outcomes can be measured and how this activity would fit in the process of developing a strategic plan.  The emphasis on outcomes is in part due to the change in leadership at HRSA.  It is also timely given the recent Office of Management and Budget (OMB) report on BHPr programs that found some of the programs to be ineffective.

At the meetings, participants discussed the mission and the function of BHPr.  Crafting a mission statement for BHPr is a challenge because of the diverse programs funded by the Bureau.  As a result, both policymakers and the community are sometimes unclear about the purpose of the health professions training programs.  There were mixed opinions about the Bureau’s mission.  Some felt that the Bureau’s mission is too broad, while others thought it was broad in an appropriate way.  Some wanted specific populations (unserved, underserved, and vulnerable) to be identified in the mission.

Also at the meetings, participants were asked to identify challenging aspects of creating outcome measures.  These include:

  • Creating a measurement with common elements that can cross programs;
  • Creating measures that reflect changes in health status in the population, which are difficult to develop for health training programs (and may not be appropriate); and
  • Use of long-term outcomes vs. short-term indicators.

Insights on Policymakers’ Perceptions of Performance Measures and Outcomes

Robert H. Bradner, Holland and Knight, former Chief of Staff for Congressman John E. Porter

Drawing on his experience as Chief of Staff to a member of Congress, Mr. Bradner was asked to discuss factors that influence Congress in evaluating the outcomes of programs.  Mr. Bradner provided an overview of the Federal funding process, including the roles of the Administration and Congress, and discussed how the current budgetary environment, with deficit spending and a push by the Administration for tax cuts, means that many programs are in danger of having their funding reduced or eliminated.

Information about specific programs and their effectiveness needs to be provided in a clear, concise manner.  Policymakers, both in Congress and the Administration, are inundated with information on an extremely wide range of programs.  Supporters of specific programs need to be able to explain the importance and impact of their program in one or two pages.   In addition, if they can demonstrate how the program is of value to constituents, it can “bring the issue home” to a member of Congress.  Highlighting how a program has specifically impacted a person’s life, through Congressional testimony or letters from constituents, is also an effective way of communicating to policymakers the value of a program.  In addition, programs need to highlight their unique aspects.  For example, the health professions training programs need to stress why training people from underserved areas is important.

OMB plays a role in keeping the Federal budget down (in line with the President’s budget) and focuses on the cost effectiveness of programs (bang for the buck).  However, others, such as committee staff in Congress, are also extremely influential.  What is important to remember is that a relatively small number of people make decisions about the status of a program, whether in Congress or at OMB.  Also, not every source of information carries the same weight with every audience.  While the Administration may rely on assessments by OMB, Congress is likely to pay more attention to a General Accounting Office (GAO) report.  However, even though a GAO report is likely to carry more weight, a negative OMB report still must be addressed.  It can act as a drag on a program and prevent it from moving forward.  It also provides evidence to those who do not support a program.

Health Resources and Services Administration (HRSA) Update

Dennis Williams, Deputy Administrator, HRSA

Quality, affordable health care for all Americans is a goal for the Administration.  Key to this effort is HRSA’s five-year Community Health Centers Initiative, which will add 1,200 new and expanded health center sites by 2006.  Funding has also been increased for the National Health Service Corps, which offers scholarships and loan repayment plans to clinicians who agree to serve in health centers or in other locations serving underserved communities.  

In light of the OMB report, health professions training programs will need to focus on documenting their outcomes, which might include the following activities:

  • Focus on the value of the programs today, in the current health care environment, such as the need for providers in underserved areas as community health centers are expanded.
  • Provide statistics, not anecdotal data, that convey the importance of the programs.
  • The emphasis used to be on the number of health care providers available; now it is on the quality of the providers.  The programs play a role in enhancing the quality of health care providers but do not do a good job of documenting it.
  • Community health center expansion needs a mechanism to expand the availability of qualified and culturally competent health care providers.  The programs fill this role.

BHPr Update, Development of New BHPr Strategic Plan, Overview of Maternal and Child Health Bureau’s Title V Information System

CAPT Kerry Nesseler, RN, MS, Associate Administrator for Health Professions, HRSA

BHPr is currently in the process of developing a new strategic plan that will include a new mission and functions.

Current Mission and Functions

Mission: To increase health care access by assuring a health professions workforce that meets the needs of the public.

Functions:

  • Develop the health professions workforce through research, analysis, and planning;
  • Improve distribution and diversity of health professionals to rural/urban underserved areas;
  • Improve the quality of health professions practice and education; and
  • Focus on key 21st century professions issues (geriatrics, genetics, diversity/distribution, distance learning, bioterrorism).

The strategic direction of the Bureau is to:

1)  Establish a Bureau presence (focus will be on health care professions in general);

2)  Document the Bureau’s significant impact on the quality of care of the population – develop outcome measures (for example, the Bureau needs to take more credit for the impact of culturally competent health care providers); 

3)  Link the Bureau’s activities and performance to the Presidential initiatives (Community Health Centers expansion);

4)   Improve the use of data to measure program success;

5)  Build positive and creative Bureau leadership;

6)  Obtain employees’ and partners’ input into the Bureau’s strategic direction; and

7)  Create strong partnerships with States, communities, organizations, and the health sector.

In the OMB Report, concerns were raised about three of the Bureau’s programs.

Health Professions

  • Disagreement regarding the clear and focused purpose of the program.
  • Differences in the purpose of the program between the Agency and others:
      • Agency – to address the failure of the market to distribute health providers to all areas of the country and to serve all population groups.
      • Others – to help rural areas place providers or to subsidize schools.
  • The program does not regularly use performance data to improve outcomes.
  • GAO noted in 1997 that effectiveness has not been shown and will be difficult to measure without common goals and outcome measures.

Nursing Education Loan Repayment Program

  • No evidence is available to indicate the program’s overall impact.
  • The program’s national impact on nurse vacancies and staffing is not known.
  • Further work is needed to improve the measurement of key outcomes.

National Health Service Corps

  • The program purpose is clear and is designed to have a unique and significant impact.
  • The program ensures clinicians honor their service agreements with the government and uses additional performance information to improve outcomes.
  • Evaluation indicates the program is effective at increasing care access.
  • The program lacks outcome information and robust targets.
  • Greater flexibility in the allocation of funds between scholarships and loan repayment could further improve efficiency.

Through the new strategic plan, the Bureau will be focusing on documenting its significant impact on the quality of care.  Specifically, it will look at:

  • Workforce planning and analysis (the right people);
  • High quality education (the right skills);
  • Equitable distribution (the right places); and
  • Performance/outcome measures (the right outcomes).

Criteria exist for selecting performance measures that meet the needs of various audiences.  Under these criteria, measures should be:

  • Relevant to health professions and Bureau activities;
  • Understandable to policymakers and the public;
  • Linked to outcome measures; and
  • Generally available from the majority of programmatic areas.

The strategic planning process carried out by HRSA’s MCHB provides a model for the efforts currently underway at BHPr.  MCHB’s Title V Information System (TVIS) was designed to:

  • Report on performance of grantees (financial, program, outcomes);
  • Provide consistency and comparability of data across grantees;
  • Increase timeliness of data;
  • Increase quality of data;
  • Increase ability to track improvements over time;
  • Increase access of the data to grantees, policymakers, and the public; and
  • Demonstrate how Federal funds leverage other program resources (State, local and private).

Prior to TVIS, there was no consistent format for developing program objectives and comparing progress.  MCHB identified 18 performance measures and six outcome measures.  For each measure, States set their own targets.  In addition, States identified seven to ten measures of their own, depending on needs within their State.  All of the data submitted by grantees are posted on MCHB’s web site in a searchable database.  This allows comparison across programs and provides an incentive for grantees to provide complete data in a timely manner.  Detail sheets for each performance measure, prepared by MCHB, explain what is being measured (each identifies the numerator and denominator, the source of the data, the objective, and how much has been accomplished against five-year targets).
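
As an illustration of how such a detail sheet reduces to simple arithmetic, the sketch below computes a measure’s rate from its numerator and denominator and compares it to a State-set five-year target.  The class, field names, and example figures are hypothetical, not drawn from TVIS:

    from dataclasses import dataclass

    @dataclass
    class PerformanceMeasure:
        """Hypothetical TVIS-style detail sheet: rate = numerator / denominator."""
        name: str
        numerator: int    # events meeting the measure's definition
        denominator: int  # eligible population
        target: float     # State-set five-year target, expressed as a rate

        def rate(self) -> float:
            return self.numerator / self.denominator

        def meets_target(self) -> bool:
            return self.rate() >= self.target

    # Invented example: a State reporting progress on one measure.
    m = PerformanceMeasure("Children receiving recommended screening", 8200, 10000, 0.80)
    print(f"{m.name}: {m.rate():.1%} (target met: {m.meets_target()})")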

Because the block grants funded by MCHB are similar, the process of determining outcomes was somewhat less complex than efforts currently underway in the BHPr.  BHPr may develop a set of outcomes, 50 measures for example, from which grantees would be able to draw a subset.

Advisory Committee members provided feedback on the process, including concerns and recommendations.

  • Outcome measures must be reflective of the size of the program.  Small programs should not be held accountable for large performance measures (aggregate data can be collected at the Bureau level that reflect overall program effort).
  • There needs to be an effort to educate about what are realistic outcomes (acceptable numbers in terms of effort and expense).  However, while standards are needed in terms of realistic outcomes, it is also necessary to recognize the unique situations that may impact a grantee’s ability to achieve outcomes.
  • By focusing on performance measures, effort and resources may be concentrated in these areas, which results in doing fewer things with greater intensity. The result is that less overall is accomplished and that the nature of the programs may change. Instead of telling how much is accomplished, outcomes should focus on telling how well it is accomplished.
  • One of the things that will come from this process is an understanding of the partners who are responsible for health status changes (what linkages are in play, and what it really takes to change health status).
  • There needs to be an emphasis on local needs, and funders and programs need to look at who makes up the health care workforce.  For example, in some States, programs can’t train community workers because they are not considered health care professionals.  Outcome measures need to reflect who is doing the work.
  • Programs identify barriers to care as they educate health professionals.  This should be reflected when measures are developed (identify barriers to care and how they impact the performance of services).
  • In the MCHB process, baseline data was available to provide a foundation to the process.  Is there similar data that can be used to get the process started for health professions? 

BHPr’s Performance Measurement Workgroup

Janet Schiller, EdD

BHPr’s Performance Measurement Workgroup is involved in the Bureau’s efforts in this area, which include:

  • The development of a new strategic plan, which requires performance measures that collectively articulate the intended outcomes of its programs and meaningfully relate them to Agency, Bureau, and Department strategic goals; and
  • The development of performance measures that are clear, well-defined, easily understood by all, and supported by a data system that collects high-quality data with a minimum of burden to the public.

The workgroup includes representatives from across the Bureau, and the process runs parallel to the Bureau’s strategic planning initiative.  The goal of the workgroup’s efforts is to develop a data and performance measurement system appropriate to support the new strategic plan and the OMB Program Assessment Rating Tool (PART) measures (a new process for rating programs).

Logic models are a key element of the Bureau’s strategic plan.  A logic model is a way of graphically displaying a program’s needs (underserved areas and gaps in knowledge), mission, strategies, resources, performance measures, and outcomes.  Not everything in the logic model is measurable.  It spells out in reasonable detail all the things a program does and what is accomplished, and it tells the story in a linear, graphic way.  The logic models are intended to help each program better articulate its intended outcomes, and serve as a tool for identifying potential performance measures and associated data needs.  Completed logic models are also intended to facilitate discussions with policymakers outside the Bureau, and improve the public’s understanding of the Bureau’s programs.  Logic models are well suited to the Bureau’s diverse programs since they help to clearly articulate differences yet show where several programs are striving toward a similar outcome.
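
One way to picture the structure being described is as a linear chain of stages, only some of which are directly measurable.  The sketch below is a hypothetical illustration; the stage names and entries are invented for this example and are not BHPr’s actual models:

    # Hypothetical logic model represented as an ordered chain of stages.
    logic_model = {
        "needs": ["provider shortages in underserved areas"],
        "mission": "train providers who practice where they are needed",
        "resources": ["grant funds", "partner training sites"],
        "strategies": ["community-based clinical rotations"],
        "outputs": ["trainees completing rotations each year"],      # measurable
        "outcomes": ["graduates practicing in underserved areas"],   # measurable
        "impact": ["improved health status in served communities"],  # often not directly measurable
    }

    # Tell the story "in a linear, graphic way": print each stage in order.
    for stage, entries in logic_model.items():
        print(f"{stage:>10}: {entries}")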

The workgroup will be looking at data that are already available (Medicare and Medicaid data).  The logic model is an approach that helps tell the story without requiring the collection of more data or the use of data that are difficult to analyze.

Once the logic models are complete, the workgroup will begin to look at performance measures that capture the desired outcomes.  They will rely, when possible, on published evaluation materials, but the logic models will also help in the identification of areas where evaluations are needed to document program outcomes.

Advisory Committee members provided feedback on the Bureau’s efforts to develop logic models.

  • There are many overlaps within a grantee’s programs (different funding sources) which make them more cost-effective.  It would be great if these overlaps could be reflected in the logic models.
  • Logic models should also reflect qualitative data, such as client satisfaction.
  • Capture effective systems and collaboration that have been developed, in addition to health status indicators.
  • BHPr should create a dictionary of terminology so that people can discuss this in a consistent manner. 
  • The community (people in the field) should be involved in the process so they can provide input.  This will facilitate buy-in for the use of the new outcome measures.

Evaluation and Performance Measurement: Philanthropy’s Perspective

Laura Leviton, PhD, Senior Program Officer, Robert Wood Johnson Foundation

Dr. Leviton provided insight into the Robert Wood Johnson Foundation’s (RWJ) approach to program evaluation and monitoring.  RWJ is one of the top foundations funding health care in the United States.  As opposed to public sector funding, foundation funding is characterized as being more flexible and personal.  Foundations are answerable only to their Board of Directors and the Internal Revenue Service (IRS).  Foundations often make either strategic or responsive grants.  Strategic grants are longer-term and implementation-oriented.

RWJ likes to leverage investments by attracting funding partners.  An evaluation can influence the attraction and participation of other funding.   It is important to evaluate the strategy, not the grantee, and evaluation resources should be directed to finding out if the strategy works.

RWJ is currently focusing on eight areas that make up its portfolio.

1)  Improving quality of medical care;

2)  Reducing disparities in health care;

3)  Increasing the endorsement of political elites for coverage of uninsured;

4)   Improving care at bedside;

5)   Smoking cessation and prevention;

6)   Substance abuse treatment;

7)   Childhood obesity prevention; and

8)   Strengthening public health leadership and capacity.

Because these priorities are new, RWJ is in the process of setting numerical objectives for grantees.  An example of how RWJ measures outcomes is provided by its efforts in smoking cessation.  RWJ’s objective was to increase the excise tax on cigarettes in all 50 States.  Existing research indicated that raising the tax prevents children from smoking.  From the logic model that was developed, RWJ knew by how much taxes needed to be raised to see results and established objectives for monitoring.  Because they knew the linkage existed, they did not need to document decreases in smoking among children to demonstrate impact.

An example of how the foundation quantifies outcomes for established programs is the Scholars in Health Policy Program.  The goal of the program is to attract experts from other disciplines to study health policy.  The evaluation was based on the experience of the fellows.  RWJ looked at publications per year (both mainstream and health policy), time spent in each activity in old and new areas, and long-term results including appointments, lectures, and talks.  Another example is the Minority Medical Education Program.  Evaluation indicated that participants were accepted to medical school at a higher rate than non-participants.  Currently, RWJ is looking at the long-term impact (how many enrollees finished medical school, what those who were not accepted went on to do, etc.).

In addition, other key aspects of program evaluation were identified:

  • Performance monitoring is easiest to initiate from the beginning of a program.  It is much harder to implement mid-stream.
  • Many of the programs are carried out at the local level and national measures dilute the impact.  For example, HIV prevention efforts target very specific populations and measurements can be very focused.  With these programs, the “dose response” of the intervention is important (intensity, etc.).
  • Funders need to be aware of the response burden (data reporting burden) and coordinate it with other funders (Federal, State, local, and private sector).
  • A logic model is only part of an evaluation process.  Funders also need to be concerned about evaluability assessments, in which the cyclical process and program reality (what is getting accomplished) are considered.  This can become a living document that people can respond to and use to agree in advance about what can be measured.

Resources Suggested by Dr. Leviton

Joseph Wholey, Harry Hatry, and Kathryn Newcomer (1994), Handbook of Practical Program Evaluation

Michael Scriven (1994), Evaluation Thesaurus (Revised Edition)

Current BHPr Data Collection and Evaluation Efforts

Lou Coccodrilli, BHPr, HRSA

Mr. Coccodrilli discussed the Bureau’s data collection and evaluation efforts, including the Comprehensive Performance Management System (CPMS)/Uniform Progress Report (UPR).  He provided examples of how the Bureau uses some of the data provided by grantees.  The Bureau receives requests about its programs from various parties (the Agency, the Administration, and Congress) and can use the CPMS/UPR data to respond to these requests.

The amount and diversity of data create some challenges for BHPr.  The Bureau is building infrastructure to manage the data and training staff to use the data management software.  Problematic areas identified included:

  • Changes in data management contractors;
  • Varied reporting requirements (it takes about three years to fully populate the database for an entire cohort);
  • Section on special topics does not provide quantitative data (very limited data which staff has to follow up on);
  • Grantees can get different answers about what and how much information to provide (from project officers vs. data contractor);
  • Definitions can be unclear and lead to putting people in wrong categories;
  • Some instructions are unclear;
  • Totals of trainees across tables may be inconsistent;
  • Grantees may not collect all data asked for (age, gender, race/ethnicity); and
  • Use of the academic year as the reporting period creates some problems.

An issue BHPr asked the Advisory Committee to consider is whether the use of Social Security Numbers would be an effective way to track trainees over time.

Agency for Healthcare Research and Quality (AHRQ), National Quality Measures Clearinghouse

Jean Slutsky, Acting Director, Center for Practice and Technology Assessment, AHRQ

The Agency for Healthcare Research and Quality (AHRQ) provides evidence-based information on health care outcomes, quality, cost, use, and access.  AHRQ’s mission is to support research designed to improve the outcomes and quality of health care, reduce its costs, address patient safety and medical errors, and broaden access to effective services.  Given AHRQ’s role in identifying evidence-based outcomes, Ms. Slutsky provided comments on outcomes research and information on AHRQ’s activities.

It may take as long as 17 years to turn 14 percent of original research to the benefit of patient care.  Factors include negative results, lack of numbers, and incorrect indexing. 

In the United States, there remain large gaps between the care people should receive and the care they do receive.  Quality problems include the overuse, misuse, and underuse of care.  These quality problems can be addressed through the following:

  • Structure (facilities) – are the right elements in place to be able to provide quality care?
  • Process (right treatment) – are the right things done to the right people at the right time?
  • Outcomes (does the patient get better) – is the result as good as it should have been, given current knowledge?

In terms of evaluation, process measures are appropriate when outcomes may not be available for a long time.  The process that leads to the development of outcomes includes biomedical research, clinical trials, outcomes and effectiveness research, evidence-based medicine, and clinical practice guidelines.  Such quality measurement (assurance and improvement) leads to organizational change, leadership, and public input and choice.  A quality measure is a mechanism that enables the user to quantify the quality of a selected aspect of care by comparing it to a criterion.  A subtype of a quality measure is a clinical performance measure.
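
As a hypothetical illustration of that definition (the measure, figures, and criterion below are invented, not drawn from AHRQ), a quality measure can be expressed as an observed rate compared against a criterion:

    def quality_measure(events: int, eligible: int, criterion: float) -> tuple[float, bool]:
        """Quantify a selected aspect of care as a rate and compare it to a criterion."""
        rate = events / eligible
        return rate, rate >= criterion

    # Invented example: eligible patients who received a recommended treatment,
    # compared against a criterion of 90 percent.
    rate, meets = quality_measure(events=412, eligible=450, criterion=0.90)
    print(f"Observed rate {rate:.1%}; meets criterion: {meets}")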

The measurement of quality of care can help consumers and patients, purchasers and clinicians make appropriate choices.  It can also be used for improvement (identify areas where change is important) and for accountability.

The National Quality Measures Clearinghouse (NQMC), sponsored by AHRQ, is a public repository for evidence-based quality measures and measure sets. The URL for the Clearinghouse is: http://www.qualitymeasures.ahrq.gov.

Minutes accepted as final by the Advisory Committee on June 23, 2003.

 

