Evaluation of Abstinence Education Programs Funded Under Title V, Section 510:
Interim Report

Chapter III:
The Foundation for Assessing the Impacts of Abstinence Education Programs

In 1997, Congress authorized, and its staff requested, a scientifically rigorous impact evaluation of the abstinence education programs funded under Title V Section 510 of the Social Security Act.  Policymakers, school officials, community leaders, program staff, and parents all want to know the extent to which particular program strategies succeed.  They want to know for whom these strategies work and to understand the ingredients of that success.  They also want to gather information that will guide program improvement for any groups identified as not responding well to particular strategies.

The early operational findings discussed in Chapter II provide a critical foundation for subsequent reports to address these questions of program effectiveness.  Much has been learned about school and community responses to the abstinence education funding, the range and nature of coalitions formed, the design and operational experiences of the programs, and the responses of youth and their parents.  Evidence on the impacts of the abstinence education programs, however, is not currently available, because obtaining definitive and rigorous evidence on program impacts is a complicated, long-term process.

Over the past four years, the evaluation effort has laid the foundation for a careful, comprehensive, and rigorous assessment of program impacts.  The research team has selected five targeted programs for the impact evaluation, built the partnerships needed to conduct the evaluation, enrolled samples large enough to support reliable estimates of the impacts of each program, and designed and implemented a rich and multipronged data collection strategy to support the evaluation of each program.  The impact evaluation will build on this foundation to determine the extent to which the abstinence programs in the evaluation achieve six specific goals:

  1. Strengthen knowledge and attitudes supportive of abstinence
  2. Induce more youth to embrace abstinence as a personal goal
  3. Reduce sexual activity among youth
  4. Persuade sexually experienced youth to become or remain abstinent
  5. Lower the risk of STDs
  6. Lower the risk of nonmarital pregnancies

Evidence on attainment of these goals is being developed through a scientifically rigorous impact evaluation design, careful and comprehensive data collection, and detailed and deliberate analysis and reporting.  The impact evaluation design avoids the limitations of most prior research on abstinence education programs.  Few previous studies, for example, used rigorous experimental research designs to generate program and control groups.  Those that did use experimental designs usually randomized entire classrooms or schools rather than individual students, which severely reduced their effective sample sizes.(1)  Few were able to use independent professional data collectors.  Finally, few were able to track outcomes of their sample members over an extended follow-up period.  Consequently, results usually pertain to outcomes of youth before they reached the age when many were engaging in sexual activity.
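The loss of effective sample size under classroom- or school-level randomization can be quantified with the standard design-effect formula, DEFF = 1 + (m − 1)ρ, where m is the average cluster size and ρ the intraclass correlation (Kish 1965).  A minimal sketch of this calculation (the cluster size and correlation below are illustrative values, not figures from the evaluation):

```python
def effective_sample_size(n_students: int, cluster_size: float, icc: float) -> float:
    """Effective sample size under cluster randomization (Kish 1965).

    DEFF = 1 + (m - 1) * icc, where m is the average cluster size and
    icc is the within-cluster (intraclass) correlation.
    """
    deff = 1.0 + (cluster_size - 1.0) * icc
    return n_students / deff

# Illustrative values only: 700 students randomized in classrooms of 25,
# with a within-class correlation of 0.10.
n_eff = effective_sample_size(700, 25, 0.10)
print(round(n_eff))  # far fewer "effective" observations than the 700 enrolled
```

Even a modest within-class correlation shrinks the effective sample substantially, which is why randomizing individual students, as this evaluation does, preserves far more statistical power.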


Scientific Rigor in the Study Design

The scientific rigor of the impact study design rests on four key elements.  First, the study begins with the selection of strong, well-implemented, replicable program models.  Second, it uses a rigorous experimental design to create program and control groups within each site.  Third, the sample enrollment period was long enough to generate sample sizes adequate to support reliable impact estimates.  Finally, the evaluation includes a follow-up period long enough to ensure that relevant changes in behavioral outcomes can be measured.

The impact evaluation examines five programmatic strategies geared to the needs of the local communities (Table 3).  Measuring impacts for a range of program models promotes the goal of identifying and documenting abstinence education strategies appropriate to varied local needs and contexts.  For example, the Florida and Wisconsin programs serve mainly youth from single-parent households; these programs are intensive and include strong components on relationship development and maintenance, as well as understanding and appreciation for the institution of marriage.  In the Mississippi program site, many youth live in large, multigenerational households isolated from the broader community.  The program in this community is delivered through the schools and emphasizes both basic knowledge development and components focused on managing peer pressure.  Youth in the South Carolina and Virginia programs live in communities that mirror “middle America.”  The program in Virginia is a low-cost, school-based intervention, while the one in South Carolina is a more comprehensive and intensive youth development initiative.  These choices of program strategies reflect community characteristics and perceptions of how best to serve youth, given local needs and the resources and constraints of the partner schools.

One implication of the variation in program interventions and services is that it is not possible to reach a single judgment about the efficacy of abstinence education.  Such a judgment would only be possible if there were a single, well-defined intervention, one that could vary in its “dosage” across sites but is similar in nature across all sites.  In the case of the Section 510 abstinence education programs, however, the interventions and services vary considerably across program sites and sometimes even within a program site.  In the absence of definitive evidence on the efficacy of a specific abstinence education approach, this variation is a natural result of the funding opportunities available through Title V Section 510.  In addition, the variation in the abstinence education programs provides the opportunity to learn about the effectiveness of different programmatic strategies.

Table 3:
Program Interventions and Services Received by the Control Groups

Program Location | Program Intervention | Control Group Services
FL (Miami) | Elective class offered daily, all year, to girls in middle schools (ReCapturing the Vision and Vessels of Honor) | Other elective class
MS (Clarksdale) | Mandatory weekly, year-long abstinence education curriculum (Revised Postponing Sexual Involvement and Sex Can Wait) | Regular health class
SC (Edgefield) | Five-session mandatory curriculum with voluntary enrollment in weekly or biweekly character clubs (Heritage Keepers) | Five-session mandatory abstinence curriculum without character clubs
VA (Powhatan) | 36-session mandatory curriculum (Reasonable Reasons to Wait; The Art of Loving Well; and Choosing the Best) | Regular health class
WI (Milwaukee) | Voluntary after-school program; two hours daily, all year, for multiple years (Families United to Prevent Teen Pregnancy) | Regular after-school programs; no special services

The impact evaluation uses an experimental design.  In an experimental design study, program slots are filled by youth who are selected at random from a larger pool of eligible and appropriate youth (Figure 2).  Random assignment procedures divide youth into a program group that has access to the abstinence education program and a control group that does not receive the program, but may receive regular or alternative services.  The contrast in services being studied varies depending both on the nature and intensity of the program services and the experiences of the control group (see Table 3).

Figure 2. Study Sample Enrollment and Tracking
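The random assignment step just described can be sketched in a few lines.  The identifiers, group sizes, and function below are illustrative only, not the evaluation's actual procedure:

```python
import random

def randomize(eligible_youth, n_program_slots, seed=None):
    """Randomly split an eligible pool into program and control groups.

    Every youth has the same chance of selection, so the two groups
    differ systematically only in access to the program.
    """
    rng = random.Random(seed)
    pool = list(eligible_youth)
    rng.shuffle(pool)               # random order; no self-selection possible
    program = pool[:n_program_slots]
    control = pool[n_program_slots:]
    return program, control

# Illustrative: 700 eligible youth, 371 program slots (as in the largest site).
youth_ids = [f"Y{i:04d}" for i in range(700)]
program, control = randomize(youth_ids, 371, seed=1)
print(len(program), len(control))  # 371 329
```

Because selection is driven only by chance, any later difference in average outcomes between the two groups can be attributed to program access rather than to preexisting differences.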

Longitudinal tracking of both the program and control group youth begins at the time of sample enrollment and continues for 18 to 36 months, depending on the time of initial enrollment.  The comparison of outcomes for these two groups over time provides the basis for judging impacts of the program.

The experimental design offers the best means of measuring, with a known degree of certainty, how successful the programs are overall and how well they serve key subgroups of youth within a site.  This is because, with careful implementation, the only systematic difference between the program and control youth should be their access to the program.  As a result of the random assignment, the program and control groups have similar demographic and background characteristics within any study site (Figure 3) and they are exposed to a common school and community context.

Figure 3. Demographic and Background Characteristics are Similar for Program and Control Youth Within Each Site.

However, the characteristics of sample youth vary across study sites due to a combination of factors, including program targeting practices and differences in program and community characteristics.  For example, the average age of youth at the time of sample enrollment ranges from 10 in the Wisconsin program site, which delivers its services through an after-school program, to 13 in the Virginia program site, which serves exclusively eighth graders.  The proportion of sample youth who are non-Hispanic black ranges from a low of 12 percent in the Virginia program site to over 80 percent in two other programs, one of which operates in a rural southern community, the other in an inner-city setting.  The proportion living in two-parent families ranges from 37 percent to more than 75 percent.

Random assignment generates, in each study site, program and control groups consisting of youth who, on average, are subject to similar family rules and express similar attitudes and values about abstinence before the program group is exposed to abstinence education services (Figure 4).  For example, the proportion of youth who say their parents have strict rules about companions they spend time with varies across sites between 15 and 45 percent, but is similar for program and control youth within each site.  Between 62 and 83 percent of sample youth in each study site reported believing that “having sex as an unmarried teen would make it harder to subsequently have a good marriage,” and between 16 and 35 percent hold the view that “having sex is a way to tell someone you love them.”  In all cases, however, the views of program and control youth are nearly identical within each site.

Figure 4. Family Rules and Attitudes about Teen Sex are Similar for Program and Control Youth at Sample Enrollment.
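One standard way to verify the kind of baseline equivalence shown in Figures 3 and 4 is a two-proportion z-test on each baseline measure.  The sketch below uses made-up counts, not the evaluation's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sample z-statistic for a difference in proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts: about 30% of program youth vs. 32% of control youth
# agreeing with a baseline attitude item in a site of 700.
z = two_proportion_z(111, 371, 105, 329)
print(abs(z) < 1.96)  # True: no significant baseline difference at the 5% level
```

With successful random assignment, such tests should show no more significant baseline differences than chance alone would produce.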

A major advantage of the random assignment design is that it protects against selection bias in the impact estimates for the individual programs studied.  Other evaluation designs are vulnerable to selection bias, which can seriously undermine the credibility of their results.  Some evaluations, for example, have relied on comparisons of outcomes for participants in “elective” programs and youth at the same site who, for some reason, do not participate.  Others compare outcomes for program youth with youth who responded to local or national surveys.  In both cases, there is a strong possibility that the participants differ in some preexisting but unobservable way from the comparison group.  These preexisting differences may lead to biased estimates of program impacts.

Pre-post comparison designs have other defects.  Comparisons of measures for participant groups before and after their involvement in a program can be affected not only by the program but also by natural maturation effects.  For example, data from the National Longitudinal Survey of Adolescent Health show that the percentage of teens who have ever had sex increases from 9.6 percent at age 13 to 19.6 percent at age 14.  Thus, using a pre-post design to measure program impacts on abstinence would seriously bias the results toward estimates of no impacts or possibly even adverse impacts.
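The maturation problem can be illustrated with the figures just cited: with no intervention at all, a naive pre-post comparison over one year would still register a large change.  A sketch of the arithmetic:

```python
# Prevalence of ever having had sex, from the national figures cited above.
prev_age_13 = 0.096
prev_age_14 = 0.196

# With no program at all, a pre-post comparison over one year would register
# a 10-percentage-point "increase" in sexual activity -- pure maturation,
# which a pre-post design cannot separate from any program effect.
naive_change = prev_age_14 - prev_age_13
print(f"{naive_change:.1%}")  # 10.0%
```

An experimental design avoids this confound because maturation affects program and control youth alike, so it cancels out of the comparison between groups.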

Studies that rely on comparison samples drawn from existing survey databases can be weakened by both bias and unreliability.  Some studies, for example, compare program participants with respondents to the Youth Risk Behavior Survey or the National Longitudinal Survey of Adolescent Health.  Such study designs have the added complications arising from noncomparability of survey instruments, data collection methods, and timing of the data collection (Santelli et al. 2000).

Carefully designed and implemented experimental design studies can both overcome these weaknesses and offer unanticipated bonuses for programs and policymakers.  When program resources are not sufficient to serve everyone, many youth will not receive the abstinence education program services regardless of whether there is an experimental-design evaluation.  Random assignment is often fairer than commonly used practices for allocating scarce program slots, such as “first come, first served” or referral systems.  Random assignment designs also can provide valuable information about the magnitude of unmet demand for the program services.  Assuming that the evaluation design is implemented so that programs operate at capacity, the size of the control group provides a lower-bound estimate of unmet demand.  At the same time, the operational experience with outreach and recruitment provides qualitative information regarding how thorough and successful the outreach efforts are and may suggest ways to strengthen future outreach efforts.

One limitation of a random assignment design for measuring program impacts arises if any of the programs has major spillover effects.  If, for example, youth who are assigned to the program group interact with youth in the control group in ways that transfer the benefits of the program intervention to peers in the control group, the random assignment study design will underestimate program impacts.  Similarly, if the presence of an intervention in the school or community significantly alters the overall school or community climate in important ways, this could lead to underestimates of program impacts.  The overall judgment of the evaluation team is that, for each of the five sites included in the impact evaluation, spillover effects are expected to be very small in relation to the direct effects on those who participate in the program.  Nonetheless, this is an issue that has received ongoing attention by the evaluation team and that is addressed in the follow-up surveys with students.(2)

The impact evaluation has large sample sizes of between 400 and 700 youth per site.  Large samples protect against the possibility of failing to detect true program impacts simply because the study lacks statistical power.  They make it possible to conclude that, if no statistically significant impacts are detected on sexual activity or on risks of STDs or pregnancy, one of two conditions holds:  (1) the program truly had no impact, or (2) any impact was too small to be of importance to policymakers or practitioners.

What constitutes a sample size large enough to detect true impacts depends in large part on the nature of the program.  Generally, low-intensity or short programs have smaller impacts and, thus, require larger sample sizes to ensure that true impacts are picked up in the analysis.  The opposite is generally true of programs that are longer or more intensive.

The originally planned one-year period of sample enrollment for the evaluation was extended to three years in order to generate samples large enough to ensure detecting meaningful program effects and to avoid false claims of no effects.  Final sample sizes per site are expected to vary between 443 (280 program/163 control) and 700 (371 program/329 control) students.  Table 4 presents estimates of changes in outcomes the study will be able to detect using reasonable standards of statistical power and precision, given these sample sizes and given national estimates of the prevalence for selected outcomes.  For example, the study will be able to detect true program impacts on the percentage of students who are sexually experienced of 7.2 percentage points or larger in the site with 700 youth in the study sample and of 11.2 percentage points or larger in the site with 443 youth in the sample.

Table 4:
Minimum Detectable Changes in Outcomes

Outcome Measure (Wave 3) | Estimated Prevalence of Outcome(a) | Minimum Change Detectable(b), Largest Sample | Minimum Change Detectable(b), Smallest Sample
Taken Virginity Pledge | 14.9% | ±6.0% | ±9.3%
Sexually Experienced | 24.1% | ±7.2% | ±11.2%
Abstinent at Follow-up(c) | 86.5% | ±5.8% | ±8.9%
At Risk of Pregnancy(d) | 17.3% | ±6.4% | ±9.8%
Sample Sizes | | 700 | 443
  • Program Group | | 371 | 280
  • Control Group | | 329 | 163
Notes:
a.  These estimates are based on computations from the National Longitudinal Survey of Adolescent Health data.  National prevalence estimates for youth at different ages have been weighted by the age distribution of the Title V Section 510 abstinence education program evaluation sample in the construction of these estimates.

b.  Minimum detectable differences are calculated based on the actual sample sizes, adjusted for anticipated nonresponse to follow-up surveys.  A 95 percent confidence interval and an 80 percent power requirement were used.

c.  Defined as never had sexual intercourse or not sexually active in past 90 days.

d.  Defined as sexually experienced and did not use a highly effective method of contraception at last intercourse.

To guard against errors that might arise based on findings from small sample sizes with low statistical power, no impact evaluation results will be released until data for the full study sample are available.  Results based on just the first one or two years of sample enrollment would run a risk of missing true impacts simply because of small sample sizes.
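The minimum detectable changes reported in Table 4 follow from the standard power calculation for a difference in proportions.  The sketch below uses the textbook normal approximation (two-sided 5 percent test, 80 percent power) and so will not exactly reproduce the table's figures, which also adjust for anticipated nonresponse:

```python
import math

def minimum_detectable_effect(p, n_program, n_control,
                              z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable effect for a difference in proportions.

    p        -- expected prevalence of the outcome
    z_alpha  -- critical value for a two-sided 5 percent test
    z_beta   -- normal quantile corresponding to 80 percent power
    """
    se = math.sqrt(p * (1 - p) * (1 / n_program + 1 / n_control))
    return (z_alpha + z_beta) * se

# Largest site (700 youth: 371 program, 329 control), prevalence 24.1%.
mde = minimum_detectable_effect(0.241, 371, 329)
print(f"{mde:.3f}")  # roughly 0.09, i.e. on the order of 9 percentage points
```

The formula makes the tradeoff concrete: halving the sample inflates the standard error, and thus the smallest impact the study can reliably detect, by roughly a factor of the square root of two.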

The study sample is being followed for up to 36 months.  The data collection schedule balances the need to release study findings at the earliest point possible with the importance of ensuring that study findings offer reliable guidance for policy and practice decisions.  Two waves of follow-up surveys are planned.  The wave 2 follow-up survey is being administered 6 to 12 months after initial study enrollment (when the wave 1 baseline survey was administered), and the wave 3 follow-up survey will be administered between 18 and 36 months after enrollment.  The interval between sample enrollment and the wave 3 survey depends on the age of youth at enrollment and the latest calendar date when surveys can be administered given the reporting schedule.  Under this plan, it is possible to analyze both short-term impacts on knowledge, attitudes, and intentions of youth related to abstinence and longer-term impacts on behavior.

Because so few youth engage in sexual activity before entering high school, outcome estimates based on wave 2 data from the middle-school years would miss program impacts on behaviors that most often emerge at later ages.  Indeed, a shortcoming of previous abstinence education evaluations has been a follow-up period that does not extend beyond the middle school years.  Nationally, only 12 percent of males and 8 percent of females under age 13 have ever had sex (tabulations of the National Longitudinal Survey of Adolescent Health).  It is important to have the data collection period extend as long as possible in order to measure behavioral outcomes at ages when the prevalence of the behavior is high enough that changes in behavior can be observed.

The follow-up period for this evaluation is such that almost two-thirds of the study sample will be 14 to 18 years of age by the time of the wave 3 follow-up, and no youth will be younger than age 12.  Even with the extended follow-up period, however, only six percent of the study sample will have reached ages 18 and 19, when over half their peers are expected to be sexually active.  To address the potential need for an even longer follow-up, the data collection procedures and plans for the evaluation are designed to accommodate one, if resources were to become available.


Careful and Thorough Data Collection Plans and Procedures

Plans and procedures for the data collection in the impact evaluation are designed to capture the high-quality data needed for a thorough evaluation.  A conceptual framework for the program intervention strategies, which is consistent with the main theories of adolescent behavior discussed earlier, dictates the data collected and the timing of those data (Figure 5).

Figure 5. Conceptual Framework for Evaluating Abstinence Education Programs.

This framework acknowledges that the decisions youth make regarding sexual activity and other risk-taking behavior (Column IV) depend critically on a range of antecedent factors (Column I), including demographic and background characteristics of the youth, characteristics of their parents and their families, and the school and community context in which they have been raised.  For all youth, these antecedent factors are mediated by current parental attitudes, values, and supports; the attitudes, knowledge and relationships of the youth; and the current school and community context in which youth live (Column III).

There are two means by which the abstinence education programs (or any other intervention) can potentially alter the key outcomes of interest.  One is by directly altering youth behavior.  The other is through affecting the natural mediating factors, for example, by providing parents with knowledge and tools to better guide their children in sound decisionmaking; by changing the attitudes, knowledge, and relationships of youth in ways that reduce their inclination to engage in risk-taking behaviors; or by changing the school and community climate in ways that create stronger expectations and support for abstinence.

The first wave of student surveys administered near the time of enrollment in the evaluation study gathers information on the antecedents of teen sexual activity and baseline values of the natural mediating factors (Columns I and III).  Wave 2 and wave 3 surveys gather information to mark changes in the natural mediating factors and the key outcomes (Columns III and IV).

A number of critical issues relate to the design and administration of these surveys to support the rigor of the impact study.  These include:

The rights and privacy of sample members and their parents are paramount.  Only youth whose parents have given active parental consent for their child to participate in the study are included in the study sample.  Moreover, youth themselves must actively consent to each wave of data collection.  The privacy of student responses is protected through a rigorous system that relies on professional, independent data collectors; that permits no personal identifying information on any survey form or data file containing survey responses; that maintains secure data files; and that has the protection of a Federal Certificate of Confidentiality (HRSA-00-15).

Survey questions were selected with attention to issues of the validity and reliability of the core constructs for the evaluation.  Each question included in any of the three surveys has been mapped to one of the core constructs in the conceptual framework (Figure 5 above).  Moreover, in determining the particular questions that would be asked to address each construct, careful attention was paid to the experience of prior studies with similar populations, including the validity and reliability of measures for different target populations and when questions were administered through different data collection modes.  For example, questions about school and family draw heavily on the National Longitudinal Survey of Youth and the National Education Longitudinal Study of 1988; questions on youth attitudes about sexual activity draw heavily on questions used in prior studies of abstinence education programs, such as Values and Choices (Olsen et al. 1991), Teen Aid (Weed et al. 1998), Responsible Social Values Program (Adamek 1993), Best Friends (Best Friends Foundation 1997), and Sex Respect (Weed and Olsen, no date); questions about other risk-taking behaviors, such as drinking and using drugs, draw heavily on questions in the Youth Risk Behavior Surveillance Survey (Centers for Disease Control and Prevention 1993) and the National Longitudinal Survey of Youth (Card 1993);  and questions about romantic relationships and actual sexual experiences draw on the National Longitudinal Survey of Adolescent Health (Udry and Bearman 1998), the National Survey of Family Growth (Card 1993), and the Youth Risk Behavior Surveillance Survey (Centers for Disease Control and Prevention 1993).

Each of the survey questionnaires was pretested with small groups of youth.  After revisions, they were then reviewed by key staff in the five programs participating in the impact evaluation, by the Federal Office of Management and Budget, and by the University of Pennsylvania’s Institutional Review Board.  In addition, staff from various constituent groups and policy organizations reviewed the survey questionnaires, provided useful insights, and made helpful suggestions.

Youth may not want to report sensitive and socially undesirable information.  Some respondents may feel uncomfortable reporting accurate information on questions about sexual intercourse and may distort their responses in the direction that they perceive as socially desirable.  Moreover, the problem of underreporting behavior that is considered socially undesirable may be exacerbated for youth who participate in abstinence programs, given the strong and unequivocal message of these programs.

To minimize the underreporting of sensitive behaviors, as well as to protect the privacy of the study sample, the evaluation uses self-administered surveys, maintains the strictest standards of confidentiality, and informs respondents of these protections.(3)  The data collection procedures ensure that no one from the local schools — including teachers, administrators, and counselors — has access to students’ survey responses.  School and program staff are not allowed to participate in the data collection; trained interviewers conduct all survey data collection and focus groups.  As soon as the student surveys are completed, the interviewers immediately separate student contact information from the surveys and remove the surveys from the school grounds.

Before the students complete the surveys, the interviewers assure all respondents that their answers will be kept confidential and will not be shared with anyone.  The consent forms sent home to parents, as well as the assent forms given to students, make it clear that no individual-level data from the surveys will be reported.  Rather, information on individual students will be combined into groups for analysis and reporting purposes.

Survey administration methods protect student privacy

  • Most students complete the surveys by themselves in the presence of trained interviewers who can answer questions about the survey administration.  Younger sample members and those with poor reading skills have the survey read to them, but they mark their own responses.
  • Trained, professional interviewers employed by Mathematica Policy Research, Inc., conduct all survey data collection.
  • All surveys are removed from the school premises immediately upon completion.
  • No personal identifying information is included on the survey instruments.
  • A Federal Certificate of Confidentiality protects the student data.

Youth may have different definitions of abstinence.  The primary goal of the Section 510 abstinence education programs is to persuade youth to abstain from sexual activity.  Thus, it is very important that survey questions accurately measure this outcome.

Survey questions on abstinence from sex are difficult to design, since abstinence means different things to different people.  Some consider abstinence to mean refraining from all intimacy except for kissing and holding hands, while others consider abstinence as anything except sexual intercourse.  Participation in abstinence education programs may lead some youth to change their definitions of what constitutes sexual activity and abstinence.  Failure to address such program-induced changes in definitions could result in a downward bias in the reporting of abstinence by program youth relative to control youth and thereby limit the detection of true program impacts.

It is essential to ask in the clearest way possible about specific behaviors of greatest interest.  To have reliable measures of sexual activity, the evaluation survey instruments measure whether study youth have ever had sexual intercourse.  Since program and control youth are likely to have the same understanding, on average, of what sexual intercourse is, this measure has greater reliability than survey questions that ask simply about abstinence from sexual activity.

Outcome measures must be age appropriate.  The survey and administration methods for the study are sensitive to the social and emotional development of sample youth.  The abstinence education programs target youth in their preadolescent and adolescent years, and measurement of outcomes must reflect that age span.  Some programs serve youth as young as third or fourth grade.  Measures of program impacts for preadolescent youth may be quite different from those for adolescent youth.  For example, questions related to sexual intercourse are not appropriate for preadolescent youth, given the low prevalence of the behavior and, more important, the inappropriateness of such questions for that age group.

The evaluation survey instruments for youth below grade seven do not ask whether the respondent has had sexual intercourse.

The survey was designed to avoid contamination of the abstinence message.  The Section 510 abstinence education programs promote a strong message that teenagers should postpone sexual activity until marriage.  The programs do not promote use of contraception, on the premise that such information is inconsistent with program goals and sends a mixed message to youth.

However, a careful evaluation must measure the main outcomes of an abstinence education program.  The evaluation must be able to measure whether program participants do or do not abstain from sex and whether program participants do or do not engage in behaviors that risk pregnancy and exposure to STDs.  This requires that the survey questions about sex measure similar behaviors for the program and control youth and be detailed enough to measure exposure to risks of pregnancy and STDs.  Moreover, it is critical that the study’s informed consent procedures are consistent with asking youth these sensitive questions.

Accurate assessment of whether programs affect risk of STDs and pregnancy must take into account the behaviors of those youth who become sexually active.  Among youth who are sexually active, exposure to unwanted pregnancy and STDs depends, among other factors, on the use of condoms or other contraceptives.  Therefore, the evaluation survey instruments ask a limited number of questions about use of condoms and other contraceptives.  These questions are seen by and are asked only of youth who have already stated that they have had sex, and they are designed so that they do not provide information that the abstinence programs themselves avoid communicating.


Future Analysis and Reporting Plans

Over the three-year sample enrollment period, which ended in fall 2001, the evaluation team secured cooperation from 3,300 youth and their parents to participate in the impact evaluation.  To date, 3,081 of these youth have completed the wave 1 survey.  The wave 2 survey has been administered to those youth enrolled during the first two study years, with 1,791 completing this survey thus far.  In spring 2002, the wave 2 survey will be administered to the remaining sample, and the wave 3 survey will be administered to those who enrolled in the study sample during the first year of sample enrollment.  Sample youth will continue to be tracked through surveys and, in some cases, school records through fall 2003.  Furthermore, program operations and community context will be monitored continuously throughout the remainder of the study period to support the evaluation.

Table 5:
Interview Schedule and Sample Sizes, by Time of Sample Enrollment

Sample Enrollment | Total N | Wave 1 | Wave 2 | Wave 3
Fall ’99/Spring ’00 | 1,040 | Fall ’99/Spring ’00 | Fall ’00 | Spring/Fall ’02
Fall ’00 | 901 | Fall ’00 | Spring ’01 | Fall ’03
Fall ’01 | 1,359 | Fall ’01 | Spring ’02 | Fall ’03
Total Number | 3,300 | 3,081 | 2,970(a) | 2,805(a)

a.  Estimated number of completed surveys.

A report on the effects of the programs in achieving their short-term goals of changing knowledge, attitudes, and near-term behavioral choices will be completed in early 2003 once wave 2 survey data are available for the full study sample.  The final study evaluation report will be completed in summer 2005.  During intervening periods, the study team will prepare a limited number of special-focus reports that address particular questions of interest to Congress or the U.S. Department of Health and Human Services.



Endnotes

1.  When classrooms or schools are the unit of randomization, the “effective sample size” is substantially lower than it would be if students were the unit of randomization.  This is because of the high within-class or within-school correlation (Kish 1965).

2.  This issue was of sufficient concern during the study design that an external review of the design was commissioned to ensure that there was strong professional support for the random assignment approach adopted for the study.

3.  A methodological experiment was conducted to assess whether using personal data-recording devices increased reporting of sensitive behaviors.  It did not have any such effect for the evaluation sample.

