U.S. Department of Health and Human Services

Methodological Issues in the Evaluation of the National Long Term Care Demonstration

Executive Summary

Randall S. Brown

Mathematica Policy Research, Inc.

July 1986


This report was prepared under contract #HHS-100-80-0157 between the U.S. Department of Health and Human Services (HHS), Office of Social Services Policy (now the Office of Disability, Aging and Long-Term Care Policy) and Mathematica Policy Research, Inc. For additional information about the study, you may visit the DALTCP home page at http://aspe.hhs.gov/daltcp/home.htm or contact the office at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, SW, Washington, DC 20201. The e-mail address is: webmaster.DALTCP@hhs.gov. The DALTCP Project Officer was Robert Clark.


In September 1980, the National Long Term Care Demonstration--known as channeling--was initiated by the U.S. Department of Health and Human Services. It was to be a rigorous test of comprehensive case management of community care as a way to contain the rapidly increasing costs of long term care for the elderly while providing adequate care to those in need. The key goal was to enable elderly persons, whenever appropriate, to stay in their own homes rather than entering nursing homes.

Two models of channeling were tested, each implemented in five sites. Under the basic case management model, the channeling project assumed responsibility for helping clients gain access to needed services and for coordinating the services of multiple providers. This model provided a small amount of additional funding to fill in gaps in existing programs. But it relied primarily on what was already available in each community, thus testing the premise that the major difficulties in the current system were problems of information and coordination which could be largely solved by client-centered case management.

The financial control model differed from the basic model in several ways. The primary difference was that it established a funds pool to ensure that services could be allocated on the basis of need and appropriateness rather than on the eligibility requirements of specific categorical programs. The pooled funds could be used to purchase a broader range of community services than were covered by Medicare and Medicaid. Case managers were responsible for determining the amount, duration, and scope of services paid for out of the funds pool, subject to limits on the amount that could be spent on any one case.

The goal of the evaluation was to determine the impact of channeling on several key outcomes.

Research on these topics has been conducted over the past two years, culminating in a series of final reports.

The credibility of the estimates obtained depends heavily on the methodology used to obtain them. Many previous studies of case management or community service programs have methodological flaws that raise serious doubts about the findings, including use of poorly matched comparison groups, restriction in the number and diversity of sites examined, small sample sizes, and inadequate data (see Kemper et al., 1986 for a review of previous studies). Thus, one of the initial purposes of the channeling demonstration was to provide a sound methodological basis for assessing the impacts of such programs.

The purpose of this report is to describe the methodology used throughout the channeling evaluation and to document the major analytical issues that posed potential threats to the credibility of the analysis. Because these topics are quite technical and affect all areas of the analysis, they received relatively little attention in the various final reports on specific channeling impacts. This document provides a more thorough explanation of the estimation procedures and test statistics employed, and summarizes our investigations of specific methodological issues.

Although the discussion of technical issues contained here is more comprehensive than that provided in the final reports on channeling impacts, it is not a detailed review of all of the analyses conducted. Such a document would be extremely long and would make the important issues less accessible to readers. Rather, the discussion here is intended to describe the methodological concerns that we had, how we examined them, and the conclusions we came to, with relatively little presentation of the statistical evidence. In the discussions of specific methodological issues we indicate how the interested reader can obtain more detailed documentation of the results of these investigations.

Issues concerning the overall design of the evaluation are not addressed here. Thus, there is no assessment of the generalizability of the evaluation results to settings other than the 10 sites in which the demonstration was implemented (see Kemper et al., 1986 and Carcagno et al., 1986 for discussion of this topic). Nor does this report attempt to provide a guide for future evaluations on which design features are essential and what pitfalls should be avoided. Although that would be useful, it requires consideration of the economic and political costs of incorporating such features, which deviates too far from the statistical topics on which this report is focused.

The full report is organized into five chapters. Chapter I introduces the report. Chapter II describes the basic design of the evaluation, including the data sources and sample sizes. Chapter III presents the statistical methodology used to estimate channeling impacts, along with the test statistics and assessment strategy used to draw inferences about whether observed treatment/control differences were attributable to channeling or to chance. Chapter IV briefly describes eight specific methodological problems that arose and were examined. Finally, Chapter V recaps the primary methodological findings and gives an overall assessment of the methodology used in the analysis.