Search for Programs to Help Youth

How to Use the Program Tool

The Program Tool provides you with information about program designs that successfully deal with risky behaviors. You can replicate these strategies to meet your local needs. The Program Tool database contains risk factors, protective factors, and programs that have been evaluated and found to work.

Click on the “by Risk Factor” or “by Protective Factor” button to produce a list of those respective factors. Then click on a particular factor to generate a list of programs that deal with that factor and to see what ages are served. Next, click on the program name, and you will find details about that program; for example, a description of the intervention, a summary of the evaluation, an effectiveness rating, a contact from whom to obtain additional information, and, if available, technical assistance resources. You can also click on the “All Programs” button to produce an alphabetical list of all programs in the database.

You can form a continuum of effective programs (by age range of clients served) by clicking on the box adjacent to each program’s title, scrolling to the bottom of the program matrix, and then selecting the option labeled “Move Checked Programs to the Top of the List.” This allows you to view the selected programs together in a single array.
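
Conceptually, the Program Tool performs simple filtering and reordering over a database of tagged program records. The sketch below is purely illustrative (Python, with invented field names and made-up program names; it is not the Program Tool’s actual implementation): it mimics filtering by risk factor and moving checked programs to the top of the list.

```python
from dataclasses import dataclass, field


@dataclass
class Program:
    """A hypothetical program record, roughly mirroring the fields shown in the tool."""
    name: str
    risk_factors: set = field(default_factory=set)
    protective_factors: set = field(default_factory=set)
    age_range: tuple = (0, 17)   # ages served
    checked: bool = False        # whether the user checked the box next to the title


def by_risk_factor(programs, factor):
    """Like the 'by Risk Factor' view: programs that address the chosen factor."""
    return [p for p in programs if factor in p.risk_factors]


def move_checked_to_top(programs):
    """Like 'Move Checked Programs to the Top of the List': checked programs first,
    then ordered by the youngest age served."""
    return sorted(programs, key=lambda p: (not p.checked, p.age_range[0]))


# Example usage with made-up entries
catalog = [
    Program("Program A", {"early aggression"}, {"school bonding"}, (5, 10), checked=True),
    Program("Program B", {"substance use"}, {"parental involvement"}, (12, 17)),
    Program("Program C", {"early aggression"}, {"mentoring"}, (11, 14), checked=True),
]

for program in move_checked_to_top(by_risk_factor(catalog, "early aggression")):
    print(program.name, program.age_range)
```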

Frequently Asked Questions

Answers to Frequently Asked Questions About Federally Sponsored, Online, Evidence-Based Program Selection Tools

1. How do I nominate a program for inclusion in the Program Tool?

We are pleased to review research findings on potential new programs. To be considered, programs must focus on one of the following problem behaviors:

  • Academic problems
  • Aggression/violence
  • Youth gang involvement
  • Alcohol, tobacco, and other drug use
  • Delinquency
  • Family functioning
  • Sexual activity/exploitation
  • Trauma exposure

Eligible programs can include but are not limited to delinquency prevention, probation, youth courts, restitution programs, community service, school-based programs, conflict resolution, parent training, mentoring, restorative justice, and home confinement.

The intervention must (a) explicitly aim to prevent or reduce a problem behavior in a universal or selective juvenile population; or (b) if not explicitly aimed at reducing or preventing a problem behavior, apply to a juvenile population at risk for problem behaviors. For example, an academic enrichment program would not qualify if applied to a universal population, but would be eligible if directed toward an at-risk population. Conversely, a tobacco cessation curriculum applied to a universal population would be eligible because it directly affects tobacco use. A juvenile is defined as anyone under the age of 18. A study is also acceptable if the treatment sample includes both juveniles and adults.

The study design must involve a comparison condition. A comparison condition can be (a) no treatment, (b) treatment as usual, (c) a placebo treatment, (d) a straw-man alternative treatment, or (e) an earlier time period. Thus, eligible study designs may include experimental designs with random assignment, nonequivalent-comparison-group quasiexperimental designs, and quasiexperimental one-group pretest–posttest studies. Nonexperimental and case study designs are specifically excluded.

Please submit the following information on programs to be considered for inclusion in the Helping America’s Youth (HAY) database:

  • Detailed description of the program intervention, target population, and target setting.

  • Detailed information on the research methodology used to evaluate the program’s effectiveness.

  • Detailed information on the evaluation outcomes and findings, including findings on research objectives and performance.

  • Contact address, phone number, and e-mail for further information on the program.

Submissions will be reviewed and scored along several dimensions, including (1) the degree to which the program is based on a clear, well-articulated theory or conceptual framework; (2) the ability of the research design to establish a causal association between the treatment and outcome; and (3) the degree to which the evaluation findings support the program treatment.

Please send this information, including journal articles and evaluation materials, to:

Dr. Stephen Gies
Development Services Group, Inc.
7315 Wisconsin Avenue, Suite 800 East
Bethesda, MD 20814

Click here for full instructions and a nomination form.

2. What does “evidence-based” mean?

Today, the term “evidence-based” is part of the vernacular of prevention science. In general, the term “evidence-based” and similar terms—“research-based,” “science-based,” “model” programs, and “effective” programs—are used interchangeably to describe programs that have demonstrated empirical success in preventing problem behaviors.

3. What other evidence-based program directories are available?

Model Programs Guide (MPG)

The MPG is an easy-to-use informational resource tool that offers a database of scientifically proven programs to address a variety of youth problems, including the following: delinquency; violence; youth gang involvement; alcohol, tobacco, and drug use; academic difficulties; family functioning; trauma exposure; sexual activity/exploitation; and mental health issues.

Sponsor Agency: U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention

Online at: http://www.dsgonline.com/mpg2.5/mpg_index.htm

National Registry of Evidence-based Programs and Practices (NREPP)

The NREPP is a searchable online registry of mental health and substance abuse interventions for youth and other populations that have been reviewed and rated by independent reviewers.

Sponsor Agency: U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration

Online at: http://www.nrepp.samhsa.gov/index.htm

What Works Clearinghouse

The What Works Clearinghouse (WWC) collects, screens, and identifies studies of the effectiveness of educational interventions (programs, products, practices, and policies). It provides educators, policymakers, researchers, and the public with a central and trusted source of scientific evidence on what works in education.

Sponsor Agency: U.S. Department of Education, Institute of Education Sciences

Online at: http://www.whatworks.ed.gov

4. Why are there so many different federally sponsored online resources that assist practitioners and communities in implementing evidence-based programs?

There are four federally sponsored online resources that assist practitioners and communities in implementing evidence-based programs:

  • Helping America’s Youth (HAY) Community Guide, administered by the White House

  • Model Programs Guide (MPG), administered by the Office of Juvenile Justice and Delinquency Prevention

  • What Works Clearinghouse (WWC), administered by the U.S. Department of Education’s Institute of Education Sciences

  • National Registry of Evidence-based Programs and Practices (NREPP), supported by the Substance Abuse and Mental Health Services Administration

While each of these resources is similar in terms of functionality, each Web site differs in style and, most important, substance. For instance, the HAY Community Guide is an online resource to help communities assess their needs and resources and link them to effective programs to help at-risk youth in their neighborhoods and towns. The guide focuses on a variety of youth problem behaviors, including academic problems, aggression/violence, substance abuse, family functioning, gang activity, sexual activity/exploitation, and trauma exposure. The Model Programs Guide is a database of scientifically proven programs that address a variety of youth problems, including delinquency; violence; youth gang involvement; alcohol, tobacco, and drug use; academic difficulties; family functioning; trauma exposure; sexual activity/exploitation; and mental health issues. The WWC, on the other hand, collects, screens, and identifies studies on the effectiveness of educational interventions (i.e., programs, products, practices, and policies). The NREPP is a searchable online registry of mental health and substance abuse interventions for youth and other populations that have been reviewed and rated by independent reviewers.

5. Can a particular program be included in more than one of the four federally sponsored online resources?

Yes. Some programs may address more than one type of problem behavior. Therefore, a program may appear in more than one of the four federally sponsored online resources.

6. Are there differences in the rating systems used for the federally sponsored online resources?

Helping America’s Youth (HAY) Community Guide

The Helping America’s Youth (HAY) Program Tool features evidence-based programs that prevent and reduce delinquency and other problem behaviors (e.g., drug and alcohol use) among youth up to age 20. The Program Tool includes information on programs that have been evaluated using scientific techniques and that have demonstrated a statistically significant decline in the targeted negative outcomes.

To be eligible for inclusion in the database, candidate programs must demonstrate results in accordance with widely accepted scientific criteria for program effectiveness. Programs in the database fall into one of the following categories:

  • “Level 1” programs have been scientifically demonstrated to prevent youth problem behaviors, or to reduce risk factors and enhance protective factors for those behaviors, using a research design of the highest quality (i.e., an experimental design with random assignment of subjects).

  • “Level 2” programs have been scientifically demonstrated to prevent youth problem behaviors, or to reduce risk factors and enhance protective factors, using either an experimental or a quasiexperimental research design with a comparison group, with the evidence suggesting program effectiveness.

  • “Level 3” programs display a strong theoretical base and have been demonstrated to prevent youth problem behaviors, or to reduce risk factors and enhance protective factors for these problems, using limited research methods (with at least single-group pre- and post-treatment measurements). The evidence associated with these programs appears promising but requires confirmation using more rigorous scientific techniques.

The overall rating is derived from four summary dimensions of program effectiveness: the conceptual framework of the program, program fidelity, strength of the evaluation design, and empirical evidence demonstrating the prevention or reduction of problem behaviors.

To be eligible for inclusion in the HAY Program Tool, programs must meet the following criteria:

1. The study must investigate the effects of a prevention or intervention program designed to address problem behaviors or conditions that place youth at risk for juvenile delinquency and other problem behaviors. The program must focus on one of the following problem behaviors: delinquency; violence; youth gang involvement; alcohol, tobacco, and drug use; family functioning; trauma exposure; or sexual activity/exploitation. Other problem behaviors, such as physical health problems and injuries, are excluded.

2. The program must (a) explicitly aim to prevent or reduce a problem behavior in a universal or selective juvenile population or (b) if not explicitly aimed at reducing or preventing a problem behavior, apply to a juvenile population at risk for problem behaviors.

3. The study design must involve a comparison condition. A comparison condition can be (a) no treatment, (b) treatment as usual, (c) a placebo treatment, (d) a straw-man alternative treatment, or (e) an earlier time period. Thus, eligible study designs may include experimental designs with random assignment, nonequivalent-comparison-group quasiexperimental designs, and quasiexperimental one-group pretest–posttest studies. Nonexperimental and case study designs are specifically excluded.

The following federal agencies worked together to identify programs for the HAY Program Tool: the U.S. Department of Health and Human Services, the U.S. Department of Justice, the U.S. Department of Education, the U.S. Department of Labor, the U.S. Department of Agriculture, the U.S. Department of Housing and Urban Development, the Office of National Drug Control Policy, and the Corporation for National and Community Service. Program reviews were completed by Development Services Group, Inc., and the Institute for Intergovernmental Research.

Model Programs Guide (MPG)

HAY and the MPG use the same rating system (though each gives the three categories different labels; see below). The other two federally sponsored online resources use different rating systems to assess programs. Below is a brief description of each rating system.

HAY and the MPG rate programs along four dimensions of effectiveness: (1) conceptual framework, (2) program fidelity, (3) evaluation design, and (4) empirical evidence. The score for each dimension of effectiveness and the overall effectiveness score are used to classify programs into one of three categories. The categories (or levels) summarize the research base that supports a particular program. They include the following (a simplified sketch of how evaluation design maps to level appears after the list):

  • Level 1 (Exemplary in MPG). In general, when implemented with a high degree of fidelity, these programs demonstrate robust empirical findings, using a reputable conceptual framework and an evaluation design of the highest quality (experimental).

  • Level 2 (Effective in MPG). In general, when implemented with sufficient fidelity, these programs demonstrate adequate empirical findings, using a sound conceptual framework and an evaluation design of high quality (quasiexperimental).

  • Level 3 (Promising in MPG). In general, when implemented with minimal fidelity, these programs demonstrate promising (yet perhaps inconsistent) empirical findings, using a reasonable conceptual framework and a limited evaluation design (single group pretest–posttest) that requires causal confirmation using more appropriate experimental techniques.
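
As a rough illustration of how the level definitions above turn on evaluation design, the sketch below (hypothetical function and design labels, not the actual review instrument) assigns a level from the design type alone; the real rating combines scores on all four dimensions.

```python
# An illustrative sketch, not the official HAY/MPG scoring instrument:
# it maps the evaluation-design dimension onto the three levels described above.

MPG_LABELS = {1: "Exemplary", 2: "Effective", 3: "Promising"}


def level_from_design(design: str) -> int:
    """Assign a level from the strength of the evaluation design alone.
    The actual rating also weighs conceptual framework, program fidelity,
    and empirical evidence before a final category is assigned."""
    if design == "experimental_with_random_assignment":
        return 1
    if design == "quasiexperimental_with_comparison_group":
        return 2
    if design == "single_group_pretest_posttest":
        return 3
    raise ValueError("design not eligible for inclusion")


level = level_from_design("quasiexperimental_with_comparison_group")
print(f"Level {level} ({MPG_LABELS[level]} in MPG)")  # Level 2 (Effective in MPG)
```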

What Works Clearinghouse (WWC)

The WWC rates the effects of an intervention in a given outcome domain as positive, potentially positive, mixed, no discernible effects, potentially negative, or negative. The rating of effectiveness takes into account four factors: the quality of the research design, the statistical significance of the findings, the size of the difference between participants in the intervention and comparison conditions, and the consistency in findings across studies. A simplified sketch of these decision rules follows the list below.

  • Positive Effects: Strong evidence of a positive effect with no overriding contrary evidence. (Two or more studies showing statistically significant positive effects, at least one of which met WWC evidence standards for a strong design. No studies showing statistically significant or substantively important negative effects.)

  • Potentially Positive Effects: Evidence of a positive effect with no overriding contrary evidence. (At least one study showing a statistically significant or substantively important positive effect. No studies showing a statistically significant or substantively important negative effect, and fewer studies showing indeterminate effects than showing statistically significant or substantively important positive effects.)

  • Mixed Effects: Evidence of inconsistent effects. (Either of the following: at least one study showing a statistically significant or substantively important positive effect and at least one study showing a statistically significant or substantively important negative effect, but no more such negative studies than positive studies; or at least one study showing a statistically significant or substantively important effect and more studies showing an indeterminate effect than showing a statistically significant or substantively important effect.)

  • No Discernible Effects: No affirmative evidence of effects. (None of the studies shows a statistically significant or substantively important effect, either positive or negative.)

  • Potentially Negative Effects: Evidence of a negative effect with no overriding contrary evidence. (At least one study showing a statistically significant or substantively important negative effect. No studies showing a statistically significant or substantively important positive effect OR more studies showing statistically significant or substantively important negative effects than showing statistically significant or substantively important positive effects.)

  • Negative Effects: Strong evidence of a negative effect with no overriding contrary evidence. (Two or more studies showing statistically significant negative effects, at least one of which is based on a strong design. No studies showing statistically significant or substantively important positive effects.)
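
To make the logic of these categories easier to follow, here is a simplified decision-rule sketch in Python. It is an approximation for illustration only (hypothetical function and argument names), not the WWC’s official procedure, which also applies detailed evidence standards and effect-size rules.

```python
# A simplified sketch of the WWC rating categories listed above, based only on
# counts of studies; not the WWC's official procedure.

def wwc_rating(n_pos, n_neg, n_indeterminate, strong_pos=False, strong_neg=False):
    """n_pos / n_neg: studies with statistically significant or substantively
    important positive / negative effects; n_indeterminate: studies with
    indeterminate effects; strong_pos / strong_neg: whether at least one such
    study met WWC evidence standards for a strong design."""
    if n_pos >= 2 and strong_pos and n_neg == 0:
        return "Positive Effects"
    if n_neg >= 2 and strong_neg and n_pos == 0:
        return "Negative Effects"
    if n_pos >= 1 and n_neg == 0 and n_indeterminate < n_pos:
        return "Potentially Positive Effects"
    if n_neg >= 1 and (n_pos == 0 or n_neg > n_pos):
        return "Potentially Negative Effects"
    if (n_pos >= 1 and n_neg >= 1 and n_neg <= n_pos) or \
       (n_pos + n_neg >= 1 and n_indeterminate > n_pos + n_neg):
        return "Mixed Effects"
    if n_pos == 0 and n_neg == 0:
        return "No Discernible Effects"
    return "Not classifiable under this simplified summary"


print(wwc_rating(n_pos=3, n_neg=0, n_indeterminate=1, strong_pos=True))
# -> Positive Effects
```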

National Registry of Evidence-based Programs and Practices (NREPP)

NREPP staff and the program developer generally work together to prepare the application for review. Each of the reviewers independently reviews the materials provided and calculates ratings using the predefined review criteria. Programs are reviewed along two dimensions: (1) quality of research and (2) readiness for dissemination. Both are rated on a scale of 0.0 to 4.0; a small illustrative sketch follows the list below.

  • The quality of research rating summarizes the amount and general quality of the evidence supporting the conclusion that the intervention, rather than other factors, produced the reported results or outcomes. Higher scores indicate stronger, more compelling evidence. Each outcome is rated separately. This is because interventions may target multiple outcomes (e.g., alcohol use, marijuana use, behavior problems in school), and the evidence supporting the different outcomes may vary.

  • The readiness for dissemination rating summarizes the amount and general quality of the resources available to support the use of the intervention. Higher scores indicate that more and higher-quality resources are available. This rating applies to the intervention as a whole.
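
As a concrete illustration of this two-dimension structure, the sketch below (an invented data structure, not NREPP’s actual system) records one quality-of-research score per targeted outcome and a single readiness-for-dissemination score for the whole intervention, each on the 0.0 to 4.0 scale.

```python
# A hypothetical record structure, not NREPP's actual system, showing how the
# two rating dimensions relate to a program: one quality-of-research score per
# outcome and a single readiness-for-dissemination score, each from 0.0 to 4.0.

from dataclasses import dataclass, field


@dataclass
class NreppStyleReview:
    program: str
    # one quality-of-research rating per targeted outcome
    quality_of_research: dict = field(default_factory=dict)
    # one readiness-for-dissemination rating for the intervention as a whole
    readiness_for_dissemination: float = 0.0

    def validate(self):
        """Check that every score falls on the 0.0-4.0 scale."""
        for outcome, score in self.quality_of_research.items():
            assert 0.0 <= score <= 4.0, f"{outcome} rating out of range"
        assert 0.0 <= self.readiness_for_dissemination <= 4.0


# Example with made-up program name, outcomes, and scores
review = NreppStyleReview(
    program="Example Program",
    quality_of_research={"alcohol use": 3.2, "behavior problems in school": 2.8},
    readiness_for_dissemination=3.5,
)
review.validate()
print(review.quality_of_research)
```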

7. How do I decide which program is right for my community?

Communities are encouraged to conduct a planning process to identify their risk and protective factors and other needs before deciding which program is right for them. Selected programs should address the identified risk factors and provide the protective factors the community needs. (This is why the HAY Program Tool lists programs by risk and protective factors.) Contact information for program developers and technical assistance providers is provided, and communities are encouraged to contact these individuals to obtain additional program implementation information. Simply because a program has worked for one population does not mean it will work everywhere. It is important to understand how the proposed population is similar to or different from the population on which the program was tested.

8. What else should I consider when selecting a program from the Program Tool?

It is important to implement a program as closely as possible to the way it was designed. Programs that are poorly implemented may not have the desired positive effect on youth behavior, and it may be better not to implement a program at all than to implement it poorly. It will generally take time, funding, and effort to implement a program and train staff on the program model. Coalitions must secure adequate funding and have staff in place for proper program implementation. A strategic plan should be developed, and a system for monitoring progress should be in place, before implementation begins. For relevant strategic planning and progress-monitoring tools, search the helpful links feature on the guide at http://guide.helpingamericasyouth.gov/helpfullinks/default.cfm.