
What Works Clearinghouse


WWC Procedures and Standards Handbook
Version 2.0 – December 2008

Foreword

The mission of the Institute of Education Sciences’ (IES) What Works Clearinghouse is to be a central and trusted source of scientific evidence for what works in education. By reviewing and synthesizing scientific evidence, the What Works Clearinghouse (WWC) helps fulfill IES’s overall mission to bring “rigorous and relevant research, evaluation and statistics to our nation's education system.”1 IES is part of the U.S. Department of Education, and the WWC is housed within the institute’s National Center for Education Evaluation and Regional Assistance.

A distinguishing feature of the WWC is that it does not directly assess programs, policies, or practices, but instead reviews and reports on the findings from existing research. Whereas Consumer Reports, for example, will bring together a set of products and compare and contrast their features using various standards (in effect yielding an assessment of product “quality”), the WWC reviews extant research about programs, policies, or practices and assesses the “quality” of the research. Based on the research that meets particular standards, the WWC then reports on what the research indicates about the effectiveness of the program, policy, or practice, which can be abbreviated as the “intervention.”

Educators who want to know whether an intervention is effective can read a WWC report and know that it reflects a thorough review of the research literature on that intervention and a critical assessment of the evidence that research presents, following a transparent approach to synthesizing the evidence that culminates in a rating of effectiveness. If some of the research meets WWC standards, the resulting report provides both summaries and details of the research findings; otherwise, the report indicates the lack of evidence meeting WWC standards. The reports also note that an absence of evidence of effectiveness does not mean that an intervention is ineffective; it means that the evidence is not clear either way. Educators and researchers who want to know more about how the WWC reached its assessment will find full details and explanations in the intervention reports. These details can be checked by others and, indeed, are verified through the IES peer review process.

The WWC generates a wide range of products. Intervention reports assess all studies of a specific intervention within a topic area, rating each of them based on the WWC evidence standards. Topic reports compile the information from intervention reports in a topic area and enable WWC users to easily compare the ratings of effectiveness and sizes of effects for numerous interventions in one area. WWC quick reviews are designed to provide education practitioners and policymakers with timely and objective assessments of the quality of the research evidence for recently released research papers and reports. Finally, based on reviews of research and the expert opinions and experiences of a panel of nationally recognized experts, practice guides contain practical recommendations for educators to address challenges in their classrooms and schools.

This handbook describes the structure and processes that the WWC uses for its reviews. It brings together into one place the standards the WWC uses to assess research. The handbook necessarily is a work in progress because it describes WWC standards and processes at a point in time. The WWC continues to develop new standards, and the handbook will be revised as major new features are finalized. Currently, the handbook does not discuss practice guides, which also use WWC standards to identify strong studies; however, practice guide panels are also encouraged to introduce other forms of evidence.

The handbook details the components of the review process, including defining the topic area, identifying all potential research papers that fit the topic area, screening in the eligible papers, defining and prioritizing interventions within the topic area, reviewing the studies of the intervention, producing intervention reports, and proceeding through several rounds of quality assurance before finalizing reports. Review topic areas are identified through a collaborative process combining input from policymakers, researchers, and experts in the field. The topic areas are organized around key student outcomes, with special attention given to academic outcomes, though topic areas might also be organized around non-academic outcomes. Topic areas currently under review include Beginning Reading, Dropout Prevention, Early Childhood Education, Elementary School Math, English Language Learners, and Middle School Math.

Reviews within a WWC topic area are undertaken by teams led by principal investigators who are supported by deputy principal investigators, coordinators, and teams of reviewers who are trained and certified to conduct reviews. Principal investigators are charged with overall authority for crafting the review protocol and for decisions about how standards are interpreted by reviewers. In addition, challenging technical issues are brought to the attention of the deputy WWC director and the WWC's technical team.

The protocol is at the heart of a topic-area review, detailing the process to be used to identify the studies that will be examined as part of the review of a given topic area and the specific outcomes that will be examined. The protocol specifies the time period over which studies are to be included, the outcomes to be examined in the review, and keyword strategies for the literature search. It also specifies the data items that will be scrutinized to assess comparison-group equivalence. The literature search strategy begins with keywords but is ultimately designed to identify all studies purporting to be about the effectiveness of an intervention, which are then screened to determine whether they fall within the review according to the protocol. A long list of study abstracts can become a much shorter list as these screens are applied.

Research studies that fall within the protocol are then reviewed using standards. The key role of standards is to provide a transparent basis for determining whether studies provide causal evidence. Findings in reports are based only on studies meeting standards (or studies “meeting standards with reservations,” a WWC term meaning that some aspect of the study merits caution in interpreting the findings). In addition, the WWC adjusts some reported findings to correct for issues that arise with some frequency in research. For example, some studies have more than one analytic level (such as schools and students), and the studies are designed at one level but analyzed at the other. Most frequently, studies are designed by matching schools or classrooms but are analyzed as if they had been designed by matching students. This mismatch of levels yields a well-known overstatement of the statistical precision of effect estimates, and the WWC applies a correction to adjust for it. Another adjustment is applied because testing multiple outcomes inflates the chance of finding statistically significant effects by chance alone.
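The handbook's later chapters give the exact formulas for these adjustments; as a rough sketch of the underlying ideas, a clustering correction deflates a student-level t statistic by a design-effect factor driven by the intraclass correlation, and a Benjamini-Hochberg step-up procedure controls the false discovery rate across multiple outcomes. The function names below, the equal-cluster-size assumption, and the 0.20 default intraclass correlation are illustrative choices, not the WWC's published specification:

```python
import math

def cluster_adjusted_t(t, n_total, n_clusters, icc=0.20):
    """Deflate a student-level t statistic for clustering (sketch).

    t          -- t statistic from the mismatched student-level analysis
    n_total    -- total number of students (N)
    n_clusters -- number of schools/classrooms; equal cluster sizes assumed
    icc        -- intraclass correlation; 0.20 here is an assumed default
    """
    n = n_total / n_clusters  # average cluster size
    num = (n_total - 2) - 2 * (n - 1) * icc
    den = (n_total - 2) * (1 + (n - 1) * icc)
    return t * math.sqrt(num / den)

def benjamini_hochberg(p_values, alpha=0.05):
    """Flag which of several outcomes remain significant after a
    step-up false-discovery-rate correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Largest rank k whose ordered p-value is at or below (k/m) * alpha
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_k = rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            significant[i] = True
    return significant
```

With 200 students in 10 schools and an intraclass correlation of 0.20, a naive t of 2.5 shrinks to roughly 1.1, which illustrates how sharply clustering can overstate precision when it is ignored.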

The main outcome of the review effort is an intervention report, which synthesizes the findings into a rating of effectiveness and reports the basis on which the rating was given. The WWC uses an approach for rating the evidence that emphasizes the preponderance of evidence across studies that meet standards (or meet standards with reservations). Interventions can be rated as having positive effects, potentially positive effects, mixed effects, no discernible effects, potentially negative effects, or negative effects. The two middle categories—mixed effects and no discernible effects—have different meanings. A rating of “mixed effects” means that some of the research reports positive effects and some of it reports negative effects. A rating of “no discernible effects” means that the research that meets standards consistently reports statistically insignificant or numerically small effects.

Finally, reports synthesize evidence into a summary number, the effect size, which is presented as an “improvement index.” Reports also characterize how much evidence was reviewed, rating the “extent of evidence” as small or as medium to large.
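The improvement index translates a standardized effect size into a percentile gain: the expected change in percentile rank of an average comparison-group student if that student had received the intervention. A minimal sketch of that translation, assuming a normal distribution of outcomes (the function name is illustrative):

```python
from statistics import NormalDist

def improvement_index(effect_size):
    """Convert a standardized effect size (e.g., Hedges' g) into a
    percentile-gain improvement index: the normal-CDF percentile of the
    effect size, minus the comparison-group baseline of 50."""
    return NormalDist().cdf(effect_size) * 100 - 50
```

For example, an effect size of 0.25 corresponds to an improvement index of about +10: an average comparison-group student at the 50th percentile would be expected to score near the 60th percentile with the intervention.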

We hope the handbook is useful. Users who want to provide feedback about it can contact us at http://ies.ed.gov/ncee/wwc/help/webmail.

1 The quote is from http://ies.ed.gov/. IES was established as part of the Education Sciences Reform Act of 2002.

