Monitoring and Evaluating Medicaid Fee-for-Service Care Management Programs: User's Guide

Chapter 2. Getting Started

Completing a timely and detailed Medicaid program evaluation can be a daunting and expensive task.10 Nevertheless, the results are well worth the effort and investment.

It is never too early to start thinking about evaluation. Indeed, it is important to consider the evaluation even as you begin to design your CM program, because design decisions affect everything from the data that will be available for assessment to the design of the evaluation itself. Planning early enables you to carefully consider how to maximize resources, potentially saving money and time. Evaluation should not be thought of as a one-time-only activity, however. Ongoing monitoring and assessment activities are also important, and data needs for these activities should be considered early on.

As you begin planning for the evaluation of your Medicaid CM program, there are a number of important factors to consider as you balance a rigorous methodology with a feasible evaluation plan. This chapter is organized around several key steps for getting started.

Getting Started Action Steps

  1. Identify core program goals and evaluation questions.
  2. Identify which data are needed and which are available.
  3. Establish an evaluation timeframe.
  4. Determine how much funding is available.
  5. Select evaluators.

Action Step 1: Identify Core Program Goals and Evaluation Questions

Goals

The first critical step in launching CM evaluation efforts is to identify your core program goals and what you hope to accomplish through performance monitoring and evaluation efforts.

Although they operate CM programs in very different contexts, States generally want to demonstrate value; that is, they want to know that their CM program is improving the health of enrollees while yielding an economic return. Value is subjective, and there is no single measure that assesses it.11 For the purposes of evaluation, you will want to identify both a group of measures and a credible methodology that allows you to assess whether your CM program is demonstrating "value" as defined in your State's context. In completing this step, it is critical to maintain realistic expectations for what your CM program can accomplish.

States invest in evaluation efforts for many different reasons, including:

  • Determining whether the CM program is successful. States want to know whether their CM programs are successful in meeting their core goals. In making this determination, they want to understand what would have happened had they not implemented their CM program.
  • Program management and quality improvement efforts. States may rely on their evaluation as a tool for managing their programs, making programmatic adjustments, and improving quality.
  • Budget and cost-containment efforts. Some States implement CM programs with explicit savings targets and outcomes expectations and need to evaluate whether those expectations have been met. For example, in Washington12 and Texas,13 State officials were given cost savings goals through legislation. As a result, one important aim of their evaluation efforts is an assessment of whether specific cost savings have been achieved.
  • Funding and reauthorization. Some States invest in evaluation efforts so that they can make a case to CMS or the State legislature for ongoing or increased funding or program reauthorization. Evaluations may also respond to specific waiver requirements or legislative inquiries.
  • Expansions. Some States aim to expand their CM programs to new populations or new areas of the State by demonstrating quality improvements and cost savings.
  • CM vendor procurements and contract renewals. States may seek to make adjustments in their CM delivery systems or make contracting decisions based on evaluation findings.

A thorough and balanced evaluation should have explicit goals. New evaluation questions may arise over time as discoveries are made and concerns are voiced.14 Flexibility in defining the scope of the evaluation is also important. You will find that conducting a comprehensive evaluation is often an iterative process.

Evaluation Questions

States generally want their evaluations to answer multiple questions such as:

  • Have health outcomes and quality of care improved?
  • Have we achieved gross and net cost savings to Medicaid?
  • Have we achieved a positive return on investment? (A simple sketch of these calculations follows this list.)
  • Was the chosen CM approach (as implemented) less costly to Medicaid than the available alternatives (e.g., the prior CM model or alternative CM approaches)?
  • Compared with the available alternatives, did the chosen CM approach (as implemented) result in higher immediate costs to Medicaid but significantly higher quality that is likely to reduce future expenditures considerably?
  • Have enrollees demonstrated better self-management?
  • Have enrollees and providers expressed satisfaction with the program?
  • Have enrollees used more preventive and primary care services and fewer acute care services, such as emergency department visits and inpatient services?
  • Have we reduced health disparities?
  • Have providers adhered to evidence-based practice guidelines?
  • What was the impact of factors other than the CM program (e.g., provider incentives) that may cause the same outcomes targeted by the CM program (and, thus, may explain some of the "impact" being attributed to the program)?
  • Has the program reached the intended population?
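Several of the cost questions above come down to simple arithmetic once gross savings and program costs have been measured. The following is a minimal sketch with hypothetical figures (not drawn from any State program); note that conventions vary, and some evaluations report a gross savings-to-cost ratio instead.

```python
# Illustrative only: hypothetical figures, not results from any State program.
gross_savings = 4_200_000  # avoided Medicaid medical spending vs. the comparison group ($)
program_costs = 3_000_000  # CM vendor fees, administration, and evaluation ($)

net_savings = gross_savings - program_costs  # $1,200,000
roi = net_savings / program_costs            # 0.40: $0.40 returned per $1 invested

print(f"Net savings: ${net_savings:,}")
print(f"Return on investment: {roi:.2%}")
```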

Factors that Make Evaluation Difficult

Evaluating a CM program can be challenging, and multiple factors make evaluation of State Medicaid CM programs difficult. It is important to keep these complicating factors in mind as you plan, conduct, and present your evaluation. Some of these factors are:

  • Limited resources (staff time, money), combined with pressure to produce findings quickly, create constraints.
  • Decisions by State legislatures can constrain evaluation design when officials dictate implementation requirements. For example, some States may not have the opportunity to run pilot programs.
  • The timeline used for evaluating the impact of CM programs affects the results obtained. Some interventions will take longer than 1 year to show significant results, while savings seen immediately may decrease over time. This occurs because of the complex interplay of program implementation factors (e.g., the natural phase of program "ripening" or "ramp-up" until full effectiveness is achieved), disease factors (e.g., time and effort are invested now to prevent disease-related complications in the future), and timing (e.g., an influenza vaccination campaign will have large early effects, particularly if the program is launched in late summer, it is an epidemic influenza year, and efficacious vaccine is available for delivery to clients).
  • Outcomes can vary by population or disease group. For example, a Medicaid CM program for patients with asthma could result in cost savings at the same time that a program for patients with diabetes in the same State could experience net losses. These differences in impact could occur for a variety of reasons, including the population targeted and the mechanism by which impacts are achieved. Some programs target groups that have been underserved and may uncover significant unmet needs that can drive costs up in the short term.
  • The particular characteristics of the Medicaid population can further complicate the evaluation process. With frequent "churning" in Medicaid enrollment, evaluations need to distinguish between populations that are CM eligible versus CM enrolled versus CM engaged. An "intention to treat" analysis, which is discussed in Chapter 3, is one way States can address these differences.
  • A number of methodological issues can arise, including the use of a control group, selection bias, and "regression to the mean," all of which are discussed in Chapter 3 (a brief simulation of regression to the mean follows this list). While control groups ideally should be used in all evaluations to isolate program effects, appropriate control groups may be challenging to identify and evaluate.
  • If clear and specific standards for data collection and sharing have not been incorporated into a vendor-run program, both acquiring and compiling data can be difficult.
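To make the "regression to the mean" concern concrete, the simulation below (an illustrative sketch, not a method prescribed by this guide) selects the most expensive enrollees in one year and shows that their average costs fall the next year with no intervention at all, simply because extreme values tend to move back toward the average.

```python
import random

random.seed(0)

# Each enrollee has a stable underlying annual cost plus year-specific random variation.
base = [random.gauss(5000, 1500) for _ in range(10_000)]
year1 = [b + random.gauss(0, 3000) for b in base]
year2 = [b + random.gauss(0, 3000) for b in base]

# Target the top 10% most expensive enrollees in year 1 (a common CM selection rule).
cutoff = sorted(year1)[int(0.9 * len(year1))]
selected = [i for i, cost in enumerate(year1) if cost >= cutoff]

mean_y1 = sum(year1[i] for i in selected) / len(selected)
mean_y2 = sum(year2[i] for i in selected) / len(selected)
print(f"Selected group, year 1 mean: ${mean_y1:,.0f}")
print(f"Selected group, year 2 mean: ${mean_y2:,.0f}  (lower, with no intervention)")
```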

Considerations

A comprehensive evaluation may attempt to address all of the above questions and more. You will need to manage expectations carefully, and you may find that you need to narrow the scope of your evaluation questions to respond to specific and time-sensitive program management, budgetary, or legislative pressures. In addition, you may need to focus your list of evaluation questions in light of other key considerations, such as data availability, timeframe, and funding, as discussed below. With limited resources, it is advisable to develop a hierarchy of evaluation questions, distinguishing between process and outcome questions and organizing them according to other key considerations such as feasibility, relevance, and cost.

Some questions may be relevant within a few months of program implementation; others may not be answerable early on. It is important to distinguish among these questions and ask different questions at different points along the way. For example, for some conditions cost savings may take time to develop, so you may need to identify intermediate outcomes that are suggestive of future cost savings. In addition, different diseases have different natural histories and can be affected by seasonal changes; therefore, they will have different timeframes for showing clinical and financial results. States should seek clinical guidance on when evaluation questions are relevant for particular conditions.


Action Step 2: Identify Which Data You Need and Which Data Are Available

States generally want to access a range of data sources to look at how care is provided, health outcomes, satisfaction, and financial measures. For successful evaluation efforts, the goal is to efficiently use a portfolio of measures. For example, you may find that combining certain qualitative and quantitative data sources, rather than a broad array of sources, is an efficient way to provide a comprehensive picture of your CM program.

Table 1 summarizes which types of data are generally used with which types of measures and identifies a few advantages and challenges for each data source. You will need to carefully specify your data needs based on the specific intervention you are measuring. Keep in mind that there are tradeoffs associated with different data choices.

You may not need data on every enrollee to estimate the effects of your CM program. Carefully selected random samples of adequate size may be sufficient for many measures. Drawing adequate samples of Medicaid enrollees, however, can sometimes be challenging: the pool from which you want to draw your sample may be too small as a result of factors such as attrition, loss to followup, and difficulty contacting enrollees.
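As a rough guide to what "adequate size" means, the standard formula for estimating a proportion can be computed directly. The sketch below is illustrative and uses a hypothetical enrollee roster; in practice you would inflate the sample to offset expected nonresponse and attrition.

```python
import math
import random

def sample_size_for_proportion(margin_of_error=0.05, z=1.96, p=0.5):
    """Enrollees needed to estimate a proportion within +/- margin_of_error
    at 95% confidence (z = 1.96); p = 0.5 is the conservative assumption."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

n = sample_size_for_proportion()                      # 385
enrollee_ids = [f"ID{i:06d}" for i in range(12_000)]  # hypothetical roster
random.seed(42)                                       # reproducible draw
sample = random.sample(enrollee_ids, round(n * 1.5))  # oversample 50% for nonresponse
```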

Table 1: Potential data sources and types of measures

Administrative claims data
  • Advantages: Data available for the full population; standard reporting formats used.
  • Challenges: Coding practices and robustness of data can change over time (e.g., data may become more complete and reliable as claims submissions improve); certain clinical conditions or events may be underreported or underdiagnosed; inconsistencies in reliability and validity; missing data.
  • Measures: Quality (process); quality (outcome); utilization; cost.

Medical records
  • Advantages: Rich clinical data that are not available through administrative data; electronic medical records offer an opportunity to collect more clinical data at low cost.
  • Challenges: May be expensive and labor-intensive to collect; reporting formats may vary; care must be taken to remove patient identifiers.
  • Measures: Quality (process); quality (outcome).

CM administrative records
  • Advantages: Data on CM interventions; flexibility to modify to meet reporting and evaluation needs.
  • Challenges: Limited to interventions for which the care manager has records.
  • Measures: Quality (process); utilization.

Surveys
  • Advantages: Primary means of capturing satisfaction data; validated tools such as the Consumer Assessment of Healthcare Providers and Systems (CAHPS)15 are available; random sampling is possible.
  • Challenges: Must ensure sample size and response rate are sufficient; potential for response bias.
  • Measures: Satisfaction.

Data Availability

An extremely important consideration in the early stages of designing an evaluation is which types of data sources are available. Medicaid programs generally rely on administrative claims data for fee-for-service program monitoring and evaluation. Although claims data are generally the easiest data to access, there can be significant variability and time lags before such data are available. In addition, some aspects of care may not be easily captured in administrative records. For example, some services like immunizations may not generate a claim.

If it is necessary to collect new data for the evaluation, planning for that collection should begin early. You may find that other sources of data, such as medical records, other administrative records (e.g., client self-reported data, clinical case management reports, and financial reports), and satisfaction surveys, greatly enhance your evaluation.

If you have a vendor-run CM program, it is especially important to negotiate access to data for your evaluation, as well as the details of the measures and methods involved. Your vendor contract will need to specify how and with what frequency data are exchanged, and you may want to consider including financial penalties for noncompliance. You should also arrange for independent validation and verification of any vendor-reported results.

Action Step 3: Establish an Evaluation Timeframe

Timeframe Covered by the Evaluation

You will need to address the question of what time period to cover in your evaluation. It can be challenging to determine whether your program is sufficiently mature for evaluation and whether the outcomes you are looking for can be measured with reliable data.16 States should be aware that evaluations based on short followup periods (1 to 2 years) are potentially misleading and may overstate costs or miss long-term benefits.17 At a minimum, you should allow at least 3 years of program experience. The most robust evaluations are often ongoing, iterative, and capture data at various points in time, taking into account factors other than CM that may be responsible for the outcomes.18

CM program design (e.g., pilots, phased-in enrollment, or voluntary enrollment [opt-in/opt-out]) influences the evaluation methods you use and the time period that should be covered by the evaluation. For example, if there was a start-up period during which individuals gradually enrolled in your CM program, the evaluation timeframe will need to take into account any lag times that occurred during enrollment and adjust for them appropriately. In addition, if you are doing a pre-post comparison, any policy and program changes over time will influence how you define your baseline period. Also, if the program is initially piloted in one region or enrollment is phased in, identifying a comparable reference group of eligible people who did not enroll may make evaluation easier. This is discussed in more detail in Chapter 3.
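As an illustration of how an enrollment ramp-up can be handled in a pre-post comparison, the pandas sketch below excludes a hypothetical 6-month lag from the post period. The file name, column names, and dates are assumptions for illustration; this is an unadjusted comparison, and Chapter 3 discusses stronger designs.

```python
import pandas as pd

# Hypothetical input: one row per enrollee per covered month (including $0 months),
# with columns enrollee_id, month, paid_amount.
claims = pd.read_csv("monthly_claims.csv", parse_dates=["month"])

baseline = claims[(claims["month"] >= "2004-07-01") & (claims["month"] < "2005-07-01")]
# Assume a July 2005 launch; skip the first 6 months as ramp-up (tune to your program).
post = claims[(claims["month"] >= "2006-01-01") & (claims["month"] < "2007-01-01")]

def pmpm(df):
    """Per-member-per-month cost: total paid divided by member-months (rows)."""
    return df["paid_amount"].sum() / len(df)

print(f"Baseline PMPM: ${pmpm(baseline):,.2f}")
print(f"Post PMPM:     ${pmpm(post):,.2f}")
```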

Lessons from the Field

The University of Washington conducted an early clinical evaluation of Washington's CM program, which was phased in between April and July 2002. The evaluation period was defined as July 2002 through October 2003. The evaluators acknowledged that a longer evaluation period could have been beneficial and that the full effects of CM may not have been evident in the 16-month timeframe.12,19 Washington's actuarial evaluation of cost savings was conducted after the first year of CM, and it has been conducted in each subsequent year (allowing for 6 months of data run-out). The actuarial study compares a 1-year pre-implementation baseline to a 1-year post-implementation period.

Because of resource constraints, new Medicaid programs and policies are often rolled out in stages by aid category or geographic region. Indiana took advantage of such a natural opportunity, using a randomized controlled trial (RCT) in two large, urban group practices and an observational analysis of staggered implementation with repeated measures in the central, northern, and southern regions of the State. Figure 1 illustrates how Indiana rolled out its program in relation to the RCT.

In another example, Virginia began operating a voluntary (opt-in) CM pilot program in June 2004 and used it as a model for their new, voluntary (opt-in) CM program that was implemented in January 2006. In order to use a pre-CM reference group to evaluate their new program, Virginia is considering using a pre-June 2004 pilot baseline. In addition, the State may convert their new CM program to an enrollment opt-out arrangement in the future, which will influence their decisions about an evaluation design.20

Timeline for Conducting the Evaluation

It is very important that you try to anticipate potential future evaluation needs early on and develop a realistic evaluation timeline. If you are operating your CM program under a waiver, CMS has certain evaluation requirements. Programs authorized under 1915(b) waivers must be independently assessed for the first two waiver periods (2 years each). If your program is vendor-run, you should identify clear timeframes for your vendor's major evaluation efforts, while also building in some flexibility in your vendor contracts to respond to unanticipated evaluation needs.

Your timeline will need to account for data completion time: many States need to allow at least 6 to 8 months for claims data run-out. Texas, for example, set deadlines for claims data extractions that were 7 months following the end of the period being evaluated. You may want to use the time during which you are waiting for data completion to set up your evaluation and test-run your measures.
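Building the run-out into the project plan is simple date arithmetic. The sketch below, a minimal standard-library example, computes an extraction date 7 months after the end of an evaluation period, mirroring the Texas convention described above (the helper clamps to the first of the month for simplicity).

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the first day of the month `months` after d (clamped for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

period_end = date(2006, 12, 31)         # last day of the period being evaluated
extraction = add_months(period_end, 7)  # earliest extraction after 7 months' run-out
print(extraction)                       # 2007-07-01
```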

Considerations: Ongoing Program Monitoring

It is also important to plan for ongoing program monitoring to identify opportunities for continuous quality improvement. You will want to regularly ask whether your CM program is accomplishing what you had hoped. While it makes sense to conduct an independent and thorough program evaluation every few years, you should also plan to continually monitor your program by evaluating performance on a subset of your measures on a monthly, quarterly, semiannual, or annual basis. Measures that you might use for ongoing program monitoring include, for example, physician visits, pharmacy utilization, hospital admissions, emergency department (ED) use, and HbA1c and retinal screens for patients with diabetes. You will want to consider how and when you report the findings of your ongoing program monitoring efforts. For example, North Carolina provides quarterly feedback to providers based on their analysis of administrative data and annual feedback based on chart reviews.21
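A small recurring report can cover a monitoring subset like this. The pandas sketch below, using hypothetical file and column names, computes one of the measures mentioned above: ED visits per 1,000 member-months by calendar quarter.

```python
import pandas as pd

# Hypothetical inputs: one row per ED claim; one row per enrollee per covered month.
ed_claims = pd.read_csv("ed_claims.csv", parse_dates=["service_date"])   # enrollee_id, service_date
member_months = pd.read_csv("member_months.csv", parse_dates=["month"])  # enrollee_id, month

ed_by_qtr = ed_claims.groupby(ed_claims["service_date"].dt.to_period("Q")).size()
mm_by_qtr = member_months.groupby(member_months["month"].dt.to_period("Q")).size()

# ED visits per 1,000 member-months, aligned on quarter.
rate = (ed_by_qtr / mm_by_qtr * 1000).round(1)
print(rate)
```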
