About the What Works in Reentry Clearinghouse


Introduction

The What Works in Reentry Clearinghouse offers easy access to important research on the effectiveness of a wide variety of reentry programs and practices. It provides a user-friendly, one-stop shop for practitioners and service providers seeking guidance on evidence-based reentry interventions, as well as a useful resource for researchers and others interested in reentry. What Works in Reentry Clearinghouse was developed for the National Reentry Resource Center by the Council of State Governments Justice Center (CSG Justice Center) and the Urban Institute (UI), with funding provided by the U.S. Department of Justice’s Bureau of Justice Assistance through the Second Chance Act.


Project History

One of the primary objectives of the National Reentry Resource Center is to help reentry policymakers and practitioners identify and understand evidence-based practices and to integrate these practices into their reentry efforts. What Works in Reentry Clearinghouse has been developed in the interest of furthering this goal. At the outset of this project, the Urban Institute (UI), in partnership with the John Jay College of Criminal Justice’s Prisoner Reentry Institute (PRI), developed a framework outlining procedures for identifying, evaluating, and categorizing reentry research. This framework was developed after comprehensive review and analysis of the methodology of 32 similar “what works” projects in a variety of fields.

In the spring of 2010, the CSG Justice Center, UI, and PRI convened a roundtable with over 50 nationally recognized researchers and experts in the criminal justice and reentry fields to review and enhance the framework. Attendees provided guidance on the project’s goals, scope, methodology, study inclusion criteria, and website design and structure, among other aspects of the project. UI researchers used advisors’ recommendations to develop the current criteria for identifying and classifying research according to the outcomes measured in the study, the strength of these findings, and the rigor of the study design. UI researchers also developed a coding instrument to record key information from each study reviewed.

With this methodology in place, UI researchers then undertook a systematic review of the reentry research literature, identifying approximately 2,500 publications that were then screened for eligibility and categorized by topic area. During this process, studies that did not meet the inclusion criteria were excluded; a list of these studies is included here. UI researchers then began the process of coding and assessing the studies that did meet the eligibility criteria, beginning with the topic areas of Employment, Housing, Mental Health, and Brand Name Programs. At present, UI continues to identify, review, and code additional research that will be added to What Works in Reentry Clearinghouse over time.

CSG Justice Center staff have led the development of the project’s website, which offers concise assessments of the existing body of research on each intervention, as well as more detailed descriptions of study methods and findings for those interested. The website also allows users to search for information by a range of criteria, including topical area, target population, and community or program setting, among others. As information is added to the site, the ability to search and navigate by these criteria will expand. The CSG Justice Center will continue to update the site and improve user access as additional content is uploaded.


Methodology Overview

The Urban Institute uses a rigorous review process to determine which studies are included in What Works in Reentry Clearinghouse. This review process takes into account both the focus and the methodology of the study. In order to be included, a study must evaluate whether a particular program, practice, or policy improves outcomes for people returning to the community from incarceration. The study must evaluate the impact of the intervention on at least one of a number of relevant outcomes, including recidivism, substance use, housing, employment, and mental health. Studies that do not evaluate an intervention’s impact on one of these reentry-related outcomes are not eligible for inclusion. Furthermore, studies that do not strictly examine the impact of an intervention on a reentry population (i.e., individuals returning to the community from incarceration) are also excluded. Finally, studies using strictly qualitative methods to explore program impacts are ineligible for inclusion, as they do not provide measurable, quantitative information about the impact of a program on concrete outcomes.

If these initial conditions are met, a study must also satisfy a minimum set of standards in terms of methodological rigor. Similar to criteria used in other “what works” projects, the study must employ either (a) random assignment or (b) quasi-experimental methods with matched groups or statistical controls for differences between groups, and the sample size must be at least 30 individuals in both the treatment and comparison groups. Furthermore, the study must have been either conducted by an independent researcher (i.e., not the same person or group that developed the program) or have been published in a peer-reviewed journal. Studies meeting these minimum criteria are further divided into two categories of study rigor (High or Basic). Studies that did not meet the inclusion criteria were excluded; a list of these studies is included here.

Below, we provide answers to some commonly asked questions about our methodological process and criteria, as well as definitions for some key research terms used throughout What Works in Reentry Clearinghouse.


Frequently Asked Questions

How is What Works in Reentry Clearinghouse organized?

What Works in Reentry Clearinghouse is organized into three levels, starting broad and moving into greater degrees of detail: focus area pages (e.g., employment), intervention pages (e.g., work release), and evaluation pages (e.g., a specific study of a work release program).

Focus area pages

Each focus area page includes a series of text boxes that summarize each intervention falling within the focus area. For example, the Employment focus area page includes boxes for work release, prison industries, and a number of “unique” employment programs. Each intervention box includes a description of the intervention, as well as a rating symbol for each of the studies that evaluated that intervention. The symbol indicates, for each study, whether the intervention had an impact on recidivism, and also indicates the rigor of the study (see below for a description of our study rating system). Also accessible from each focus area page is a written summary of all the research falling under that focus area.

Intervention pages

A more detailed summary of each intervention and its related research studies is provided on each intervention page. A table at the top of the page summarizes the findings from all of the eligible studies that evaluated the intervention. The table lists each type of outcome that was evaluated (such as recidivism or employment) and uses symbols to indicate how many studies evaluated that outcome, as well as the rigor (High or Basic) and the findings of each study. For example, for recidivism, the chart will show every study that evaluated recidivism outcomes and will indicate for each study whether the program had a beneficial effect, a harmful effect, or no effect on recidivism. This chart succinctly summarizes the research and allows viewers to easily see how the intervention performed on each outcome. Each intervention page also includes a series of boxes listing and briefly summarizing the individual studies that evaluated the intervention. In addition, from the intervention pages, viewers can access a list of the studies that were reviewed but that did not meet our criteria for inclusion.

Evaluation pages

Finally, What Works in Reentry Clearinghouse also includes a web page for each evaluation, or study, that met eligibility criteria. Evaluation pages contain detailed descriptions of the evaluation, including information on the study population, methods, limitations, findings, and implementation fidelity (as available). As noted above, each study receives a rating for both rigor (High or Basic) and outcomes (i.e., whether the intervention appeared to be effective at improving a particular outcome, such as recidivism). The ratings system is integrated throughout the site to help viewers easily see the rigor and outcomes of each study.


How do I navigate What Works in Reentry Clearinghouse?

There are two ways to access information in What Works in Reentry Clearinghouse: by browsing the site (starting with one of the focus areas and moving down into greater levels of detail, as described above), or by performing a basic or advanced search. Click “advanced search” in the search box to perform a customized search.


How does the research review process work?

A trained member of the Urban Institute coding team first screens each study for eligibility, then, after establishing that all criteria have been met, enters relevant information on the intervention, study methods, and outcomes into a coding instrument. The coder then assigns ratings for both the rigor of the study and the findings (i.e., whether the intervention was found to be effective), according to established criteria. Finally, the coder writes a summary of the study and its findings. A Ph.D.-level senior researcher then reviews the study, ratings, and summary and provides input or suggested changes. For a more detailed description of the coding and review process, please click here.


What qualifies as a “reentry intervention”?

For the purposes of What Works in Reentry Clearinghouse, a reentry intervention is considered to be any program, practice, or policy that serves a population returning to society from incarceration. It does not need to be specifically designed for an incarcerated population, or even a criminal justice population, to be included (for example, we include a study of the effects of Medicaid benefits on recidivism). However, it does need to be used with an incarcerated or recently incarcerated population. Interventions that do not serve people who are currently incarcerated or who recently served time in a correctional facility are not included. For example, programs are not considered reentry interventions if they serve as alternatives to incarceration or divert individuals away from the criminal justice system. Studies of drug courts, mental health courts, and most probation programs and victim-offender mediation programs, for example, are not included in this effort. For ratings of programs that serve criminal justice populations but are not focused on a reentry population, please see the Crime Solutions website sponsored by the U.S. Department of Justice’s Office of Justice Programs.


What types of outcomes are considered in reviewing reentry interventions?

Generally, the primary goals of reentry interventions are three-fold: to reduce recidivism, increase public safety, and improve the lives of those returning from incarceration (these three are strongly linked). Given the importance of these objectives, What Works in Reentry Clearinghouse only evaluates research that examines post-release outcomes, which could include either recidivism or other outcomes that are important for successful reentry, namely substance use, housing, employment, and mental health. Outcomes that are not related to the reentry process – such as inmate conduct within the correctional institution – are not considered in What Works in Reentry Clearinghouse. We also do not consider intermediate outcomes, such as the extent to which participants receive services or attend programs in the community, because these may not necessarily lead to better long-term outcomes and reduced recidivism.


Why isn’t qualitative research included in the study ratings?

What Works in Reentry Clearinghouse only reviews studies that measure quantitative differences in reentry outcomes between a treated group and at least one comparison group. Studies using only qualitative methods are not reviewed, unless they provide information on implementation fidelity as a companion study to a quantitative evaluation. While qualitative studies constitute an important body of research in the reentry field, they do not provide clear, quantifiable assessments of intervention effectiveness. Therefore, such studies are outside the scope of this project.

An exception is made for process (or implementation) evaluations that are conducted jointly with quantitative studies included in What Works in Reentry Clearinghouse. Process evaluations use qualitative and, in some cases, quantitative methods to evaluate how an intervention was implemented and the degree of fidelity to the original program design. Wherever available, key findings from these studies are provided on the evaluation pages and summarized on the intervention pages.


What methodological standards must studies meet to be included in What Works in Reentry Clearinghouse?

To be included in What Works in Reentry Clearinghouse, studies must meet a minimum standard of rigor to ensure that the findings are reliable. Elsewhere on this website, we provide a list of the studies that did not meet this minimum standard and are not included in What Works in Reentry Clearinghouse. This basic study rigor standard consists of three elements: Studies must (1) use either a randomized design or a quasi-experimental design incorporating matching or statistical controls for pre-existing group differences, (2) include a sample size of at least 30 individuals in each group (treatment and control), and (3) either be conducted by independent evaluators (individuals who have not been involved in the design or delivery of the program), or be published in a peer-reviewed journal. These methodological restrictions are in place to ensure that the findings of each study can truly be attributed to the intervention itself, and not to other factors, such as chance, researcher bias, or a lack of similarity between the treatment and control groups. Furthermore, studies published prior to 1980 are not included in What Works in Reentry Clearinghouse because their findings may not apply in today’s societal and correctional context.


How do we rate the rigor of a study?

Studies receive one of two methodology ratings: High or Basic. See the chart below for the requirements of the basic and high rigor ratings. The basic level simply requires studies to meet the same three methodological requirements that all studies must meet to be included in What Works in Reentry Clearinghouse (as described above). The high level of rigor imposes an additional set of requirements.

The final requirement listed in the chart – the Senior Reviewer Rating – allows the project’s senior reviewer (a Ph.D.-level researcher) to use his or her discretion to modify the study rating downward where necessary. For example, a study may technically meet all of the requirements for the High or Basic rigor rating, but the senior reviewer may determine that it nonetheless has serious flaws, such as poor implementation fidelity or inadequate measures to ensure that the treatment and comparison groups are similar to one another. In these cases, the senior reviewer may elect to assign the study a rating of Basic rather than High rigor, or to exclude it from What Works in Reentry Clearinghouse entirely.

Rating: High
  Design: Either (1) a randomized controlled trial (RCT) or (2) a quasi-experimental design (QED) with matched groups and/or statistical controls for group differences
  Sample Size: At least 100 in both the treatment and control groups; total number of participants is at least 200
  Attrition: Must not present a significant threat to the study’s validity (a senior reviewer makes this determination)
  Follow-up Period: Outcomes must be tracked for at least one year following the individual’s completion of the program or release from incarceration
  Independence: Study must be an independent evaluation (i.e., not conducted by individuals involved in program development or delivery), OR study must be published in a peer-reviewed journal
  Date Published: 1980 or later
  Senior Reviewer Rating: The senior reviewer must certify that the study is of high rigor and that threats to validity are minimal

Rating: Basic
  Design: Either (1) an RCT or (2) a QED with matched groups and/or statistical controls for group differences
  Sample Size: At least 30 in both the treatment and control groups; total number of participants is at least 60
  Attrition: No requirement
  Follow-up Period: No requirement
  Independence: Study must be an independent evaluation (i.e., not conducted by individuals involved in program development or delivery), OR study must be published in a peer-reviewed journal
  Date Published: 1980 or later
  Senior Reviewer Rating: The senior reviewer must certify that the study is of sufficient rigor and that threats to validity are minimal

If a study fails to meet the minimum criteria described in the table above, it is excluded from the Clearinghouse. To view a list of the studies that did not qualify for inclusion, click here.
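As a rough illustration, the decision rule in the table above can be sketched in code. The function below is a hypothetical simplification (the actual determination involves coder and senior-reviewer judgment, and the parameter names here are invented for this sketch):

```python
def rate_study_rigor(design, n_treatment, n_comparison, independent,
                     peer_reviewed, year_published, followup_months,
                     attrition_ok, reviewer_certifies_high):
    """Return 'High', 'Basic', or 'Excluded' for a study, following a
    simplified version of the criteria in the table above."""
    # Threshold criteria every included study must meet
    eligible_design = design in ("RCT", "QED")  # QED assumed to use matching/controls
    credible_source = independent or peer_reviewed
    if not (eligible_design and credible_source and year_published >= 1980):
        return "Excluded"
    if n_treatment < 30 or n_comparison < 30:
        return "Excluded"
    # Additional requirements for the High rating
    if (n_treatment >= 100 and n_comparison >= 100
            and n_treatment + n_comparison >= 200
            and followup_months >= 12
            and attrition_ok
            and reviewer_certifies_high):
        return "High"
    return "Basic"
```

For example, under this sketch a peer-reviewed quasi-experiment with 45 and 50 participants per group would rate Basic, while an independent RCT with 120 and 130 participants, a two-year follow-up, acceptable attrition, and senior-reviewer certification would rate High.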


How do we arrive at our ratings of study findings?

For each study included in What Works in Reentry Clearinghouse, we assign a rating to each broad type of outcome that the study measured (such as recidivism, employment, housing, substance abuse, or mental health). The rating indicates whether or not the study found evidence that the intervention affected participants’ outcomes in each area. To arrive at these ratings, we consider all of the study’s findings relating to each outcome as a whole. For example, a study may measure recidivism in multiple ways, such as rearrest, reconviction, reincarceration, technical violations, self-report, etc. Rather than providing a separate rating for each of these outcomes, we rate the study’s findings on recidivism by considering all of these outcomes together. Similarly, substance abuse, mental health, employment, and housing outcomes may each be measured in many ways, and we consider all findings within each area as a whole. For each type of outcome, studies receive one of the following ratings:


  • Strong evidence of effectiveness: The study findings show a consistent pattern indicating that the treatment group experienced better post-release outcomes in the area of interest than the comparison group, and most or all of these findings are statistically significant.

  • Modest evidence of effectiveness: Study findings are mixed: some findings indicate that the treatment group experienced significantly better outcomes than the comparison group, but other findings show no significant differences; or findings indicate that the treatment group experienced better outcomes on several measures, but these differences generally failed to reach statistical significance.

  • No evidence of an effect: The study findings show very few or no significant differences between groups.

  • Modest evidence of a harmful effect: Study findings are mixed: some findings indicate that the comparison group experienced significantly better outcomes than the treatment group, but other findings show no significant differences; or findings indicate that the comparison group experienced better outcomes on several measures, but these differences generally failed to reach statistical significance.

  • Strong evidence of a harmful effect: The study findings show a consistent pattern indicating that the comparison group experienced better post-release outcomes than the treatment group, and most or all of these findings are statistically significant.

These outcome ratings, along with ratings on the study methodology, have been translated into a visual ratings system that is used throughout the site, as described below.
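The five categories above can be approximated with a simple decision rule. The sketch below is a hypothetical simplification: it counts significant findings in each direction, whereas actual ratings also involve coder judgment about the overall pattern of results:

```python
def rate_outcome(findings):
    """Assign one of the five evidence ratings to a set of findings for a
    single outcome area. Each finding is a (direction, significant) pair,
    where direction is 'treatment' (favors the treatment group),
    'comparison' (favors the comparison group), or 'none'."""
    n = len(findings)
    sig_t = sum(1 for d, s in findings if d == "treatment" and s)
    sig_c = sum(1 for d, s in findings if d == "comparison" and s)
    if sig_t and not sig_c:
        # Mostly significant and consistent -> strong; mixed -> modest
        return ("strong evidence of effectiveness" if sig_t >= (n + 1) // 2
                else "modest evidence of effectiveness")
    if sig_c and not sig_t:
        return ("strong evidence of a harmful effect" if sig_c >= (n + 1) // 2
                else "modest evidence of a harmful effect")
    return "no evidence of an effect"
```

For instance, a study with significant reductions in rearrest and reconviction and a nonsignificant reduction in reincarceration would rate as strong evidence of effectiveness under this rule, while one significant finding among several null findings would rate as modest.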


How do I interpret the ratings system?

The primary goal of the ratings system is to quickly convey what research has shown about the effectiveness of a given intervention at reducing recidivism or improving other outcomes. It is important to note that the rating system only applies to individual studies and not to the intervention as a whole. In other words, we provide ratings for studies and their findings, but we do not rate the effectiveness of an intervention itself. We provide the study ratings so that website users can come to their own conclusions about promising or effective interventions based on the existing research.

The ratings system is designed to highlight two key aspects of a study: the study rigor and the study’s findings on a particular type of outcome (such as recidivism or employment). A circular symbol indicates that researchers rated the rigor as High, and a diamond symbol indicates a rigor rating of Basic. The color green indicates that the intervention had a beneficial effect on participant outcomes, while no color indicates there was no significant impact, and red indicates that the intervention had a harmful effect. The ratings system also accounts for the strength of the findings: a fully-filled-in symbol indicates a rating of strong evidence, while a partially-filled-in symbol indicates a rating of modest evidence. Each study receives one rating for every type of outcome measured. For example, a study may show strong evidence of an effect on recidivism, but no evidence of an effect on employment.

[Image: legend explaining the What Works ratings symbols]

This ratings system is woven into each page throughout the site. On each Focus Area page, there is a series of boxes for all the interventions that fall within that focus area. In each box, we provide the recidivism ratings for each study that evaluated that intervention. (The reason that we provide only the recidivism ratings and not the ratings for other outcomes is that, while almost every study in What Works in Reentry Clearinghouse measured recidivism outcomes, far fewer studies examined other outcomes, such as employment or substance abuse. Thus, recidivism is the best summary measure of the findings on an intervention’s effectiveness.) In addition, a table at the top of each intervention page summarizes all the findings from every evaluation of that intervention. This table provides all applicable outcome ratings, not just recidivism ratings. On the evaluation pages, visitors can read about why a study received a particular outcome and methodology rating.


What is the difference between the What Works in Reentry Clearinghouse and CrimeSolutions.gov?

The What Works in Reentry Clearinghouse and CrimeSolutions.gov projects serve as complementary resources for the field, with CrimeSolutions.gov providing topical information on effective criminal justice programs, and the What Works in Reentry Clearinghouse providing topical information on effective reentry programs. Despite the different topic areas, the projects share many common goals and characteristics.

The What Works in Reentry Clearinghouse and CrimeSolutions.gov serve a similar function of providing users with online access and analysis of research studies from social science evaluations. To this end, both resources rely on evidence from high quality evaluations that use either experimental or quasi-experimental designs. Furthermore, both employ systematic review techniques and multiple reviewers to assess criteria related to the quality of evaluation design, evaluation outcomes, and program implementation. Both strive to improve practitioner decision making by providing a range of practical information related to each intervention, program, or practice.

The primary difference between these two resources is that the What Works in Reentry Clearinghouse was developed to address the evidence base related to reentry and the specific information needs of those working in this field. CrimeSolutions.gov has a far more general focus that includes all of criminal justice, juvenile justice, and crime victim services. In large part because of the difference noted above, these two resources employ slightly different evidence review procedures, apply somewhat different weighting criteria, and use different methods to display information.

What Works in Reentry and CrimeSolutions.gov are both funded by the U.S. Department of Justice. What Works in Reentry is an initiative of the National Reentry Resource Center, which is administered by the Bureau of Justice Assistance and funded through the Second Chance Act.

Learn more about specific evidence review procedures and methods for What Works in Reentry Clearinghouse

Learn more about specific evidence review procedures and methods for CrimeSolutions.gov

Key Research Terms

Attrition

Attrition refers to the loss of study participants over time. There are two primary types of attrition: program and study attrition. Program attrition occurs when individuals enrolled in the program gradually drop out, are terminated, or leave the program for other reasons. Study attrition occurs when researchers lack information on specific individuals due to missing administrative data, an inability to get in touch with them, or other reasons. These two types of attrition are distinct because, for example, an individual may have left the program, but researchers may still have information on his or her outcomes, so they can still include him or her in the study if desired.

Researchers can handle program attrition in a number of ways – they may include everyone who enrolled in the program as part of the treatment group (regardless of whether they completed it), they may exclude all drop-outs from the study, or they may use a strategy that falls somewhere in the middle. If researchers opt to exclude drop-outs, this can create a problem known as “creaming,” in which only people with the motivation and ability to finish the program are included in the treatment group (a type of selection bias). To protect against creaming, researchers can use an intent-to-treat analysis, in which everyone whom the program intended to treat is kept in the treatment group for analysis purposes, even if they later dropped out. This type of analysis is more rigorous than analyses in which drop-outs are removed. However, intent-to-treat analyses can have the opposite effect of creaming, since only some of the treatment group members actually received the whole program; it could be argued that, if all participants had received the full dosage of the program, they would have had better outcomes. In other words, intent-to-treat analysis provides a more conservative estimate of the program’s effects. In What Works in Reentry Clearinghouse, studies are not eligible if they unilaterally exclude drop-outs from the treatment group, because studies that use this method may be biased in favor of the treatment group.
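The difference between an intent-to-treat analysis and one that excludes drop-outs can be seen in a small numerical sketch (all data below are invented for illustration):

```python
# Hypothetical recidivism outcomes (1 = rearrested, 0 = not) for ten
# people assigned to a program, plus a flag for who completed it.
enrolled  = [0, 0, 1, 0, 1, 0, 0, 1, 1, 1]
completed = [True, True, False, True, False,
             True, True, False, True, False]

def recidivism_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# Intent-to-treat: every enrollee stays in the treatment group.
itt = recidivism_rate(enrolled)

# Completers-only: drop-outs excluded, risking "creaming".
completers = recidivism_rate(
    [o for o, done in zip(enrolled, completed) if done])

print(f"intent-to-treat: {itt:.2f}, completers only: {completers:.2f}")
```

In this made-up sample the completers-only rate (0.17) looks far better than the intent-to-treat rate (0.50), illustrating how excluding drop-outs can bias results in favor of the program.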

Study attrition can also create differences between study groups. This type of attrition is particularly problematic if the rate of attrition is different for the treatment group than for the comparison group. This is known as differential study attrition. For example, if researchers try to get in touch with everyone in the sample and are able to find all of the people in the treatment group, but only half of the people in the comparison group, this would likely create a difference between groups – given that the people who are easier to find might be the ones with more stable housing. Excluding all the people that researchers were unable to find can create bias between groups. Differential study attrition can be problematic for both randomized controlled trials and quasi-experiments.


Implementation Fidelity

Implementation fidelity refers to the degree to which a program was implemented as designed. In the “real world,” things often do not go as planned – participants may attend classes only once a week instead of twice, for example, or instructors may not follow the curriculum closely. The extent to which a program adheres to its original design is important for researchers in determining whether study findings are the result of the program or the result of unknown factors. For instance, if a study evaluates a program that was not implemented properly and finds no effect on participant outcomes, researchers cannot be sure whether these findings are due to a problem with the program design itself or to a problem with how it was implemented. What Works in Reentry Clearinghouse provides information about implementation fidelity where available. Unfortunately, however, many studies provide limited or no information in this area.


Matching

Matching is a statistical technique used to pair each individual in the treatment group with an individual in the comparison group who is similar on certain demographic characteristics and other variables selected by the researchers, such as age, race, gender, and criminal history. This technique assumes that the researchers have a large pool of individuals who might be included in the comparison group. To select a comparison sample from this larger pool, researchers determine which individuals are best matched to individuals in the treatment group. One well-known type of matching uses a propensity score – a composite variable that takes into account several different characteristics – to match individuals in the comparison group to members of the treatment group. By matching the treatment and comparison groups on these individual-level characteristics, researchers can have greater confidence that the study’s results are due to the intervention and not to differences between groups. Matching is typically used in quasi-experimental designs and can be used alone or in combination with multivariate regression to further control for differences between groups.
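A minimal sketch of nearest-neighbor matching on a composite score follows. The data and scoring weights are invented for illustration; a real propensity score would come from a model such as a logistic regression predicting program participation:

```python
# Hypothetical individuals, described as (age, number of prior convictions).
treatment = [(25, 3), (40, 1), (33, 5)]
pool      = [(24, 3), (52, 0), (39, 1), (31, 6), (28, 2), (45, 4)]

def score(person):
    """Stand-in for a propensity score: a single composite number
    combining the matching variables (weights are arbitrary here)."""
    age, priors = person
    return 0.02 * age + 0.1 * priors

# Greedy one-to-one nearest-neighbor matching, without replacement:
# each treated person gets the unused pool member with the closest score.
available = list(pool)
matches = []
for t in treatment:
    best = min(available, key=lambda c: abs(score(c) - score(t)))
    matches.append((t, best))
    available.remove(best)

comparison_group = [c for _, c in matches]
print(comparison_group)  # [(24, 3), (39, 1), (31, 6)]
```

Note how each selected comparison member resembles their treated counterpart on both age and prior convictions, which is the point of the technique.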


Multivariate Regression

Multivariate regression is a statistical technique used to control for pre-existing differences between the treatment and control groups, such as differences in demographic characteristics and criminal history. It increases confidence that differences in outcomes are due to the program or treatment, and not to these pre-existing differences. Multivariate regression allows researchers to hold these factors constant in order to see the effect of the treatment independent of these characteristics.
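To illustrate, the small example below (hypothetical data) contrasts a naive difference in group means with a regression-adjusted treatment effect, using residualization so that only simple slope formulas are needed:

```python
# Hypothetical data: months employed after release, a treatment dummy,
# and age (the pre-existing difference we want to control for).
outcome = [4, 6, 5, 8, 9, 7, 10, 12]
treated = [0, 0, 0, 0, 1, 1, 1, 1]
age     = [20, 25, 30, 35, 22, 27, 32, 37]

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

def residuals(x, y):
    b, a = fit_line(x, y)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Naive comparison: difference in group means, ignoring age.
naive = (sum(o for o, t in zip(outcome, treated) if t) / treated.count(1)
         - sum(o for o, t in zip(outcome, treated) if not t) / treated.count(0))

# Adjusted effect: residualize both outcome and treatment on age, then
# regress residual on residual (this equals the treatment coefficient in
# a regression of outcome on treatment and age).
adjusted, _ = fit_line(residuals(age, treated), residuals(age, outcome))

print(f"naive difference: {naive:.2f}, age-adjusted effect: {adjusted:.2f}")
```

Here the naive difference (3.75 months) shrinks to 3.29 once age is held constant, showing how regression separates the treatment effect from a pre-existing group difference.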


Quasi-Experimental Design

In a quasi-experimental study, participants are not randomly assigned to groups. Instead, researchers make use of naturally occurring groups. For example, a quasi-experiment may include a treatment group consisting of individuals who were eligible for a program, while the comparison group consists of individuals who were not eligible for some reason. Unlike a randomized controlled trial, quasi-experiments are more susceptible to selection bias, meaning that any differences in outcomes may be due to pre-existing differences between groups rather than to the intervention. A carefully implemented quasi-experimental design can reduce this possibility. Constructing a comparison group from individuals on the program’s wait list is often considered a strong design, because those on the wait list are both eligible for the program and (presumably) willing to participate, since they signed up, increasing the likelihood that they will be similar to the individuals who do participate (i.e., the treatment group). A more problematic type of quasi-experiment involves comparing inmates who volunteered to participate in the program with inmates who did not, because those who volunteered may be more motivated to succeed.

Researchers can use statistical techniques to minimize the potential for pre-existing differences between groups, such as matching or multivariate regression. That said, researchers are unable to control for what are known as “unobserved differences” between the two groups – i.e., factors that the researchers have no way of knowing about or measuring (such as social support or motivation). Thus, randomized controlled trials are still generally stronger than even a well-designed quasi-experiment.


Randomized Controlled Trial

In a randomized experiment, or randomized controlled trial, researchers randomly assign eligible participants to either the treatment group (which receives the intervention) or the control group (which does not). Randomization, if carried out properly, eliminates the possibility of selection bias and helps ensure that any significant differences in outcomes between the two groups are due to the intervention itself rather than to pre-existing differences between the participants. This is widely considered the strongest type of research design and the one that provides the best evidence for the effectiveness or ineffectiveness of an intervention. However, this research design is only successful if group assignment is truly random and the study design is implemented as intended.
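The core of random assignment is simple enough to sketch directly: shuffle the eligible participants and split the list in half, so that group membership does not depend on any characteristic of the participants. The participant labels and seed below are illustrative only.

```python
# Minimal sketch: randomly assigning eligible participants to treatment
# or control. With proper randomization, no characteristic of a
# participant influences which group they end up in.

import random

random.seed(42)  # fixed seed so the split is reproducible for the example
eligible = [f"participant_{i}" for i in range(10)]

shuffled = eligible[:]
random.shuffle(shuffled)
treatment = shuffled[: len(shuffled) // 2]
control = shuffled[len(shuffled) // 2:]

print(len(treatment), len(control))  # 5 5
```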


Selection Bias

Selection bias occurs when study participants are selected for the treatment or control group in a systematic manner rather than randomly. Selection bias may result in differences between study groups that could skew the findings. There are many types of selection bias. Self-selection bias occurs when a program enrolls individuals who volunteer—indicating they might be more motivated to succeed—whereas individuals in the comparison group did not volunteer for the program. Another type of selection bias occurs when program staff inadvertently choose a certain type of individual to participate in the program, thereby leading to differences between the groups. For example, program staff might select individuals whom they perceive as easy to work with, potentially leading to a treatment group that is predisposed to success. These types of selection bias reduce the likelihood that the study’s findings are due to the intervention rather than to pre-existing differences between groups—and as a result can make it difficult to accurately determine the program’s impact.

Researchers can take a variety of measures to reduce the influence of selection bias. Utilizing a randomized design should theoretically eliminate the possibility of selection bias, since individuals are selected at random for the treatment or control group. If random assignment is not possible, researchers can use statistical techniques, such as matching or multivariate regression, to partially control for differences between groups.


Statistical Significance

When researchers want to determine if a program or intervention had an effect, they look for statistically significant differences between the treatment and comparison groups. Statistical significance indicates that the differences were probably not due to chance or accident. In the field of statistics, the probability that a difference in outcomes occurred by chance alone is referred to as the “p-value.” Typically, the p-value must fall below a certain threshold (known as an “alpha level”) in order for the finding to be considered statistically significant. The smaller the p-value, the less likely the result is due to chance. In What Works in Reentry Clearinghouse, we require a p-value of less than or equal to 0.05, or five percent, to consider a finding significant (in other words, we use an alpha level of .05). However, we do note situations in which a finding approaches significance but does not quite reach .05.

In calculating the p-value, both the size of the difference and the sample size are taken into account. A large finding may not achieve statistical significance if the sample is too small (i.e., with a small number of people participating in the study, it is possible that the finding was due to coincidence alone). Alternatively, a small finding may achieve significance if the sample is large enough (i.e., if many people participated in the study, it is unlikely that the effect—though small—occurred by chance). Although a finding of statistical significance can give confidence that a result was not due to chance, it does not necessarily mean that the observed difference was due to the intervention. Differences in outcomes may be due to differences between the treatment and comparison groups that existed prior to receiving the intervention.


Study Groups: Treatment, Control, and Comparison

When researchers evaluate the effectiveness of an intervention, they typically compare outcomes for people who have received the treatment to outcomes for those who have not. The individuals who participate in the program are commonly referred to as the treatment group. The group that has not received the treatment is called the comparison group or control group. Researchers strive to ensure that the groups are as similar as possible, in terms of demographic characteristics, criminal history, etc. The goal is that the only major difference between groups is whether they received the treatment. This way, differences in outcomes can be attributed to the intervention and not to any other factors. Throughout What Works in Reentry Clearinghouse, we use the term “control group” to refer to a group that was randomly assigned to the control condition as part of a randomized controlled trial, and we use the term “comparison group” to refer to the comparison condition in a quasi-experimental design.


Validity

In the field of statistics, researchers refer to two main types of validity: internal validity and external validity. Internal validity refers to the degree to which the study’s findings are trustworthy and can be replicated within the study population. High internal validity means that the treatment (and not other factors) likely led to the observed outcomes, and that the same or similar results would occur if the study were replicated with the same population. Threats to internal validity are factors that may skew the study’s findings, such as selection bias, attrition, and pre-existing differences between groups. What Works in Reentry Clearinghouse identifies and evaluates threats to internal validity for each of the studies included on the website. If threats to internal validity are too severe for the findings to be trustworthy, the study is not included in What Works in Reentry Clearinghouse.

External validity refers to the degree to which the research findings apply to various contexts and populations. External validity can be threatened if the treatment group is not representative of the types of individuals that the program actually intends to target, or if the setting in which the program was evaluated is unique and may not apply to the “real world.” In an effort to ensure that the studies it includes are applicable to a variety of contexts and populations, What Works in Reentry Clearinghouse reports on the demographic and criminal history characteristics of the study population, the place(s) where the program was evaluated, and any other information that may affect external validity. We focus on studies conducted in the United States, and we exclude studies published prior to 1980 and studies that use populations not returning from incarceration in an effort to ensure that study findings are applicable to today’s reentry context.


Variables

The term “variable” refers to anything that can vary or change. Studies involve two types of variables – independent and dependent. The independent variable is the treatment or intervention (i.e., the element under study), while the dependent variables are the outcomes that might vary depending on whether someone received the treatment or not. For example, enrollment in a cognitive-behavioral program could be the independent variable, while recidivism rates would be the dependent variable.

Another class of variables that is important to consider is covariates. These are factors like gender, age, race, risk of recidivism, length of time incarcerated, or anything else that could vary among participants and could be responsible for differences in the dependent variable. In other words, outcomes could be due to these other factors and not to the treatment itself. Researchers use statistical techniques such as matching and multivariate regression to minimize the influence of these factors on outcomes, in order to isolate the effect of the treatment. However, they are unable to do so for variables that are not measured – such as motivation, attitude, or personality – and that might impact outcomes. These are known as unobserved variables.
