Administration for Children and Families, U.S. Department of Health and Human Services
Office of Community Services -- Asset Building: Strengthening Families, Building Communities

Assets for Independence Act Evaluation:
Design Phase, Concept Paper
February 16, 2000

1. Evaluation Objectives

    Statutory Mandate for Evaluation
    Factors to Evaluate and Corresponding Evaluation Activities
    Issues of Site Selection and Timing

 

This report provides the conceptual framework for the evaluation of the Assets for Independence Act. We focus here on the research questions to be addressed in the evaluation, the evaluation methods appropriate for addressing these questions, and the measures to be used in the research. This paper does not detail the data collection strategies that might be appropriate for obtaining these measures. Such strategies will be developed and tested during the remainder of the twelve-month evaluation design phase, which extends through August 2000.

The Assets for Independence Act (AFIA) provides federal funds for the operation of individual development account (IDA) programs at the state and local levels, subject to requirements regarding who can participate and how the accounts will be financed and structured. The premise of the Act is that the offer of matching funds and the provision of program support services (counseling and training) will promote savings, enable participants to purchase homes, start businesses, and advance their education, and thus improve their lives.
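To make the account mechanics concrete, the short sketch below computes how participant savings and matching funds might accumulate in an IDA. The monthly deposit and the 2:1 match rate are purely illustrative assumptions, not parameters drawn from the Act; actual match rates and caps are set by the individual programs within the statutory limits.

```python
# Illustrative IDA accumulation under a hypothetical 2:1 match.
# The deposit schedule and match rate are assumptions for exposition,
# not parameters drawn from the Assets for Independence Act.

def ida_balance(monthly_deposit: float, months: int, match_rate: float) -> dict:
    """Return participant savings, matching funds, and the combined total."""
    savings = monthly_deposit * months
    match = savings * match_rate
    return {"savings": savings, "match": match, "total": savings + match}

if __name__ == "__main__":
    # A participant saving $25 a month for 24 months at a 2:1 match:
    print(ida_balance(25.0, 24, 2.0))
    # -> {'savings': 600.0, 'match': 1200.0, 'total': 1800.0}
```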

The primary question that the AFIA evaluation must address can be stated as follows:

Has the Assets for Independence Act led to programs that achieved their intended results in a cost-effective manner?

From this basic question, a series of further questions follows, as listed below.

(1) Were programs implemented as intended under the Act?

(a) What programs were created or expanded as a result of the Act?

(b) Did these programs indeed conform to the provisions of the Act?

(2) Did the participants improve their lives in expected ways as a result of the program?

(a) Did the participants experience improved outcomes?

What were the savings rates of individuals based on demographic characteristics including gender, age, family size, race or ethnic background, and income?

What were the economic, civic, psychological, and social effects of asset accumulation? How did such effects vary among different populations or communities?

(b) Did the improved outcomes among participants result from the program?

What were the effects of incentives and organizational or institutional support on savings behavior?

What were the effects of individual development accounts on savings rates, homeownership, level of postsecondary education attained, and self-employment?

How do such effects vary among different populations and communities?

(3) Are the programs cost-effective?

(a) What were the benefits and costs associated with these programs?

(b) Do the program benefits outweigh their associated costs?

What are the potential returns to the Federal Government and to other public sector and private sector investors in individual development accounts over a 5-year and 10-year period of time?
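This last question is, at bottom, a present-value calculation. The sketch below illustrates the form such a calculation could take over the 5-year and 10-year horizons; the cash-flow figures and the 3 percent discount rate are hypothetical placeholders, not estimates produced by the evaluation.

```python
# Hypothetical present-value comparison of program costs and benefits
# over 5- and 10-year horizons. All dollar figures and the discount
# rate are illustrative assumptions, not evaluation estimates.

def npv(cash_flows: list[float], rate: float) -> float:
    """Net present value of year-indexed cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

if __name__ == "__main__":
    rate = 0.03  # assumed annual discount rate
    costs = [-1000.0, -200.0, -200.0, -200.0, -200.0]  # assumed per-participant costs
    benefits = [0.0, 0.0, 0.0] + [800.0] * 7           # assumed benefits from year 3 on

    print(f"5-year net return:  {npv(costs[:5], rate) + npv(benefits[:5], rate):,.2f}")
    print(f"10-year net return: {npv(costs, rate) + npv(benefits, rate):,.2f}")
```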

With this as the general evaluation framework, the remainder of this first section describes the statutory mandate for evaluation (as contained in Section 414 of the Assets for Independence Act), the relevant evaluation methods, and issues of site selection and timing of the evaluation activities to be conducted following completion of the design phase.

Statutory Mandate for Evaluation

Section 414 of the Assets for Independence Act establishes the statutory mandate for evaluation of AFI-funded projects, as follows:

Section 414(a)—"In General"—indicates that, within 10 months of legislative enactment, the HHS Secretary is to "contract with an independent research organization to evaluate the demonstration projects conducted under this title, individually and as a group."

Section 414(b)—"Factors to Evaluate"—specifies the factors that the research organization is to address "in evaluating any demonstration project" conducted under the Act.

Section 414(c)—"Methodological Requirements"—states that, in at least one site, the research organization will "use control groups to compare participants with nonparticipants" and that the evaluation work will be based on both quantitative analysis and qualitative assessments, with the latter including in-depth interviews.

Section 414(d)—"Reports by the Secretary"—indicates that the HHS Secretary will provide to the Congress (1) "interim reports" each year summarizing the results of annual progress reports submitted by each grantee to the Secretary and (2) a "final report setting forth the results and findings of all reports and evaluations" undertaken with respect to the Act.

Section 414(e)—"Evaluation Expenses"—sets aside 2 percent of the congressionally appropriated amount for the Act to carry out the evaluation activities called for under Section 414.



Factors to Evaluate and Corresponding Evaluation Activities

With respect to the design of the evaluation, it is Sections 414(b) and 414(c) that are most important. In Exhibit 1-1, we have listed the "factors to evaluate" from Section 414(b), as rows in a matrix. The columns of this matrix indicate the evaluation activities that we have identified as relevant in carrying out the statutory mandate, as follows:

  • process analysis
    on-site observation of program operations, interviewing of program staff, and examination of written materials to determine how the program was implemented and how the program operates
  • in-depth participant interviews
    lengthy personal interviews with program participants to examine their understanding of the program, their reasons for participating, and their experiences as participants
  • program and participant tracking and monitoring
    collection and analysis of information regularly maintained about the status of program participants, the flow of funds into and out of the accounts, and administrative operational details, including staffing and costs
  • experimental impact analysis
    collection and analysis of information on program-eligible persons randomly assigned to a treatment group (participating in the program) or a control group (not participating in the program), for the purpose of estimating the effects of the program on its participants. Random assignment yields groups that are comparable on both observable and unobservable traits, so that any differences in observed outcomes between the two groups can be attributed to the program. (A minimal sketch of this estimator appears after this list.)
  • nonexperimental impact analysis
    collection and analysis of information on persons participating in the program and a comparison group of persons identified in data as not participating in the program, for the purpose of estimating the effects of the program on its participants. Statistical techniques are used to take account of demographic and socioeconomic differences between the groups and thus to isolate the effects of the program.
  • benefit-cost analysis
    collection and analysis of information on the benefits of the program to its participants and the costs of the program to the federal government (i.e., to federal taxpayers), to other public sponsors (including state and local), and to private funders.
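As a minimal sketch of the experimental logic, assuming simulated data and an arbitrary $200 true effect: with random assignment, the impact estimate reduces to the treatment-control difference in mean outcomes. Nothing here is actual evaluation data.

```python
# Minimal sketch of an experimental impact estimate: with random
# assignment, the program effect is estimated by the treatment-control
# difference in mean outcomes. All data here are simulated.
import random
import statistics

random.seed(0)

# Randomly assign 1,000 program-eligible persons: half to the treatment
# group (offered the program), half to the control group.
ids = list(range(1000))
random.shuffle(ids)
treated = set(ids[:500])

# Simulated 12-month savings outcomes, with an assumed $200 true effect.
outcome = {i: random.gauss(300, 100) + (200 if i in treated else 0) for i in ids}

impact = (statistics.mean(v for i, v in outcome.items() if i in treated)
          - statistics.mean(v for i, v in outcome.items() if i not in treated))
print(f"Estimated program impact on savings: ${impact:,.2f}")
```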

The following sections of this report focus in turn on these research activities. Within Exhibit 1-1, a check mark (✓) indicates those evaluation activities that are relevant for each factor to evaluate.

Exhibit 1-1. Factors to Evaluate and Corresponding Evaluation Activities

In evaluating any demonstration project conducted under this title [the Assets for Independence Act], the research organization shall address the following factors:

(1) The effects of incentives and organizational or institutional support on savings behavior
    ✓ Process analysis (especially at experimental site(s))
    ✓ In-depth participant interviews (especially at experimental site(s))
    ✓ Experimental impact analysis (treatment-control differences)
    ✓ Nonexperimental impact analysis (participant-nonparticipant differences)

(2) The savings rates of individuals based on demographic characteristics including gender, age, family size, race or ethnic background, and income
    ✓ Program and participant tracking and monitoring
    ✓ Experimental impact analysis (treatment group data)
    ✓ Nonexperimental impact analysis (participant-nonparticipant differences)

(3) The economic, civic, psychological, and social effects of asset accumulation, and how such effects vary among different populations or communities
    ✓ In-depth participant interviews
    ✓ Experimental impact analysis (treatment group data)
    ✓ Nonexperimental impact analysis (participant-nonparticipant differences)

(4) The effects of individual development accounts on savings rates, homeownership, level of postsecondary education attained, and self-employment, and how such effects vary among different populations or communities
    ✓ Process analysis (especially at experimental site(s))
    ✓ In-depth participant interviews (especially at experimental site(s))
    ✓ Experimental impact analysis (treatment-control differences)
    ✓ Nonexperimental impact analysis (participant-nonparticipant differences)

(5) The potential financial returns to the Federal Government and to other public sector and private sector investors in individual development accounts over a 5-year and 10-year period of time
    ✓ Process analysis (cost estimates)
    ✓ Program and participant tracking and monitoring (cost estimates)
    ✓ Experimental impact analysis (benefit estimates)
    ✓ Nonexperimental impact analysis (benefit estimates)
    ✓ Benefit-cost analysis

(6) The lessons to be learned from the demonstration projects and if a permanent program of individual development accounts should be established
    ✓ Process analysis
    ✓ In-depth participant interviews
    ✓ Program and participant tracking and monitoring
    ✓ Experimental impact analysis
    ✓ Nonexperimental impact analysis
    ✓ Benefit-cost analysis

Among the six "corresponding evaluation activities" shown in Exhibit 1-1, three follow directly from the statutory requirements. These are: in-depth participant interviews, experimental analysis of effects, and benefit-cost analysis. Section 414(c) makes specific reference to in-depth participant interviews. The same subsection also calls for experimental analysis of effects, through the requirement for control groups, which is conventionally interpreted to mean the construction of a research sample with cases randomly assigned to treatment and control groups. Benefit-cost analysis is also a necessary research component, as the fifth factor to evaluate identified in Section 414(b) calls for the evaluation to address the "potential financial returns to the Federal Government and to other public sector and private sector investors."

The inclusion in this design of the other three evaluation activities identified in Exhibit 1-1 (process analysis, program and participant tracking and monitoring, and nonexperimental analysis of effects) requires brief explanation.

Process analysis is a valuable counterpart to any quantitative analysis of program effects. One needs to establish how the program was implemented, whether the program was in fact implemented as intended in any given site, and whether the program operates differently across sites. Information about the specific workings of the program, especially as it pertains to the interface with program participants, is valuable, if not essential, to the interpretation of program effects, whether estimated experimentally or nonexperimentally. Cost information obtained through on-site staff interviews is also essential to the benefit-cost analysis.

Program and participant tracking and monitoring is a means of obtaining administrative data on program participation and costs, account usage, and participant outcomes. In the AFI programs, the information system in use by virtually all sites will presumably be MIS IDA-the Management Information System for Individual Development Accounts. The data collected through this system (or any other equivalent) will indicate the pattern among participants of deposits into accounts and assets purchased from the accounts. The system also provides some program cost information.
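The actual MIS IDA record layout is not reproduced here, but the sketch below shows a hypothetical transaction record of the general kind such a tracking system maintains, and how the flow of funds into and out of accounts could be summarized from it. All field names and values are illustrative assumptions, not the MIS IDA schema.

```python
# Hypothetical account-transaction records of the kind a tracking system
# such as MIS IDA would maintain. Field names are illustrative
# assumptions, not the actual MIS IDA schema.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Transaction:
    participant_id: str
    date: str        # ISO date, e.g. "2000-10-15"
    amount: float    # positive = deposit, negative = withdrawal
    purpose: str     # e.g. "deposit", "home purchase", "education"

def net_deposits(transactions: list[Transaction]) -> dict[str, float]:
    """Sum the flow of funds into and out of each participant's account."""
    totals: dict[str, float] = defaultdict(float)
    for t in transactions:
        totals[t.participant_id] += t.amount
    return dict(totals)

if __name__ == "__main__":
    log = [
        Transaction("P001", "2000-10-15", 25.0, "deposit"),
        Transaction("P001", "2000-11-15", 25.0, "deposit"),
        Transaction("P001", "2001-06-01", -40.0, "education"),
    ]
    print(net_deposits(log))  # {'P001': 10.0}
```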

The nonexperimental impact analysis is one possible approach, in addition to an experimental design, to estimating the impact of the program on savings and asset accumulation. In a nonexperimental analysis, the effects of AFI programs would be derived by using available national databases to compare the experiences of AFI program participants with the experiences of similar individuals and families in the population of AFI nonparticipants. Statistical methods would be used to adjust for differences between participants and nonparticipants in demographic and socioeconomic characteristics that affect savings and asset accumulation, in order to isolate the effects that are specific to the program.
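A minimal sketch of such a statistical adjustment appears below, on simulated data: savings are regressed on a participation indicator plus covariates. The covariates, effect size, and use of ordinary least squares are assumptions for exposition; the actual analysis would draw on national survey data and might use richer adjustment methods such as matching.

```python
# Minimal sketch of a nonexperimental (regression-adjusted) impact
# estimate: regress savings on a participation indicator plus
# demographic covariates. All data here are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated covariates: income differs between participants and
# nonparticipants, so an unadjusted comparison would be biased.
participant = rng.integers(0, 2, n)
income = 15000 + 4000 * participant + rng.normal(0, 3000, n)
family_size = rng.integers(1, 6, n)

# Simulated outcome with an assumed $150 true program effect.
savings = 0.02 * income - 20 * family_size + 150 * participant + rng.normal(0, 80, n)

# OLS: savings ~ 1 + participant + income + family_size
X = np.column_stack([np.ones(n), participant, income, family_size])
beta, *_ = np.linalg.lstsq(X, savings, rcond=None)
print(f"Regression-adjusted impact estimate: ${beta[1]:,.2f}")
```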



Issues of Site Selection and Timing

Section 414 of the Act says little about matters of site selection and the timing of evaluation activities. These issues are raised here at the outset and will then be addressed later, in the final section of this paper.

As to site selection, Section 414(a) calls for research activities that serve the following purpose:

to evaluate the demonstration projects conducted under this title, individually and as a group, including evaluating all qualified entities participating in and sources providing funds for the demonstration projects conducted under this title.

As noted earlier, Sections 414(b) and 414(c) then establish the factors to evaluate and the methodological requirements, respectively, to be used "in evaluating any demonstration project conducted under this title," where control groups are to be used "for at least one site."

We interpret this language to mean that the evaluation design should provide for the following:

  • In all funded sites, there will be some level of evaluation activity: the program and participant monitoring and tracking that can be provided through MIS IDA or its equivalent.[1]
  • In one site, and most likely only one site, an experimental design will be implemented.
  • In the experimental site and also in some limited number of additional sites, the following evaluation methods will be applied: process analysis, in-depth participant interviews, and benefit-cost analysis.

As to matters of timing, the Act requires the Secretary to submit interim reports to the Congress annually, by the end of March 2000 and every 12 months thereafter, "until all demonstration projects conducted under this title are completed." The final evaluation report is then due "not later than 12 months after the conclusion of all demonstration projects conducted under this title."

As shown in Exhibit 1-2, because projects are funded for five years and because there are five cohorts of funded projects, it is not until September 2008 that "all demonstration projects conducted under this title are completed."[2] The final evaluation report would then come one year later (2009), if one interprets the statutory language literally.

This literal interpretation, however, seems an implausible one. The Congress will presumably want evaluation findings in a timely manner for purposes of reauthorizing the Act, which passed in October 1998 and will come up for reauthorization during 2003. This suggests that interim evaluation reports would be submitted in March of 2000, 2001, and 2002, with a final evaluation report submitted in March 2003.

Under this scenario, the final evaluation report (if submitted in March 2003) would encompass operational data through September 2002, including:

  • three years of operation of the first cohort of grantees (October 1999-September 2002);
  • two years of operation of the second cohort of grantees (October 2000-September 2002); and
  • one year of operation of the third cohort of grantees (October 2001-September 2002).
Exhibit 1-2. Assets for Independence Act Grantees—Project Periods, by Cohort

Grantee   FY        Date of Grant   Project Period
Cohort    Funding   Announcement    1st Year            2nd Year            3rd Year            4th Year            5th Year
First     1999      Sep 1999        Oct 1999-Sep 2000   Oct 2000-Sep 2001   Oct 2001-Sep 2002   Oct 2002-Sep 2003   Oct 2003-Sep 2004
Second    2000      Sep 2000        Oct 2000-Sep 2001   Oct 2001-Sep 2002   Oct 2002-Sep 2003   Oct 2003-Sep 2004   Oct 2004-Sep 2005
Third     2001      Sep 2001        Oct 2001-Sep 2002   Oct 2002-Sep 2003   Oct 2003-Sep 2004   Oct 2004-Sep 2005   Oct 2005-Sep 2006
Fourth    2002      Sep 2002        Oct 2002-Sep 2003   Oct 2003-Sep 2004   Oct 2004-Sep 2005   Oct 2005-Sep 2006   Oct 2006-Sep 2007
Fifth     2003      Sep 2003        Oct 2003-Sep 2004   Oct 2004-Sep 2005   Oct 2005-Sep 2006   Oct 2006-Sep 2007   Oct 2007-Sep 2008
Within this projected schedule, the time available to conduct an experimental evaluation in one or more sites is quite limited, in the following respects. Once an experimental site is selected and is ready to serve participants under an experimental regime, it will likely take one year to enroll a research sample of adequate size. Moreover, the startup of sample enrollment could not itself begin until some time after HHS has contracted with an independent research organization to conduct the Section 414 evaluation activities; this lead time would be necessary to establish the procedures for baseline interviewing and random assignment. At the very earliest, this evaluation contract would be awarded in September 2000, immediately following the year-long evaluation design phase. This best-case scenario implies that the experimental sample would be enrolled from October 2000 through September 2001, with follow-up data extending through September 2002. The members of the research sample would thus be observed for a period of 12 to 24 months, depending on their month of random assignment.

Note that only the grantees in the first or second funding cohorts would be candidates for the experimental site, if experimental findings are to be available for Congress to consider in its reauthorization of the Act. For sites in the third cohort, follow-up data collection would need to end at the close of the first project year, a year that would be required simply to enroll the research sample.



Notes

[1] Nonexperimental impact analysis, to the extent that it makes use of national data, may also potentially include information for all funded sites.

[2] Nonexperimental impact analysis, to the extent that it makes use of national data, may also potentially include information for all funded sites.

 
