Sustaining Your Prevention Initiative

Selected Resources


Selected Resources on Program Sustainability

Abbott, M., Walton, C., Tapia, Y., & Greenwood, C. (1999). Research to practice: A "blueprint" for closing the gap in local schools. Exceptional Children, 65, 339-352.

Factors related to sustainability include grassroots support, collaboration between researchers and teachers, and teacher participation.

Akerlund, K.M. (2000). Prevention program sustainability: The state's perspective. Journal of Community Psychology, 28(3), 353-362.

Programs are likely to be sustained if they are high-quality, have evaluated and documented success, have strong administrative support, have community ownership, and meet funders' priorities. Groups planning for sustainability should develop a three-year plan, include an advisory board in planning, identify and maintain a list of current and potential funders, consider non-traditional funding sources (e.g., managed care), develop and follow a timeline and management structure, gain the support of potential funders, carefully maintain program records, perform cost-effectiveness/cost-benefit analyses, consider integration with other service providers, look beyond grants, and follow sound business practices.

Altman, D.G. (1995). Sustaining interventions in community systems: On the relationship between researchers and communities. Health Psychology, 14(6), 526-536.

The six phases of the community research cycle are research; transfer to a community base; transition, adaptation, replication, or innovation; regeneration based on community feedback to researchers; empowerment; and community ownership of the program.

Backer, T.E. (2000). The failure of success: Challenges of disseminating effective substance abuse prevention programs. Journal of Community Psychology, 28(3).

Factors related to sustainability include user-friendly communication, user-friendly evaluation, adequate capacity-building, adequate resources, yield (benefits), and community (participant) involvement.

Commins, W.W., & Elias, M.J. (1991). Institutionalization of mental health programs in organizational contexts: The case of elementary schools. Journal of Community Psychology, 19.

Conditions hypothesized to have a positive effect on the institutionalization of educational innovation in the classroom include the following:

  • Disincentives external to the school are at least balanced by incentives.
  • Relationships among user teachers are good, with high morale, good communication patterns, and a sense of ownership of the innovation.
  • The innovation is ambitious and demanding.
  • The innovation appeals to the professional identities of the users and facilitators, and respects their professional capabilities.
  • The innovation is first introduced on a pilot or trial basis.
  • The innovation both allows for and results in mutual adaptation between the innovation and the adopting organization.
  • There is a person, or group of people, who serves as an active innovator from within the school system.
  • The principal provides consistent institutional support for the innovation.
  • The district supports the innovation at the top level, and plans continuation strategies.

Edwards, S.L., & Stern, R.F. (1998). Building and sustaining community partnerships for teen pregnancy prevention: A working paper. Available online: http://aspe.hhs.gov/hsp/teenp/teenpreg/teenpreg.htm.

Factors related to sustainability include monitoring; quality of process and outcome evaluations; involvement of program staff in the evaluation; resources and support (e.g., paid staff, community organizer, trained and experienced staff who are accepted in community); diversity of funding; use of local funding; effective leadership; technical assistance; and ongoing planning.

Elias, M.J., Zins, J.E., Weissberg, R.P., Frey, K.S., Greenberg, M.T., Haynes, N.M., Kessler, R., Schwab-Stone, M.E., & Shriver, T.P. (1997). Promoting social and emotional learning: Guidelines for educators. Alexandria, VA: Association for Supervision and Curriculum Development.

Factors related to long-term implementation of social and emotional learning programs include presence of a designated program coordinator, social development facilitator, or social and emotional development committee; high visibility and recognition; active involvement and commitment of larger community; and adaptability.

Elmore, R.F. (1996). Getting to scale with good educational practice. Harvard Educational Review, 66(1).

Recommendations for bringing innovations to scale include the following: Develop strong professional and social normative structures for good teaching practice that are external to individual teachers and their immediate working environments; evaluate how many teachers use good practice; develop organizational structures that intensify and focus expected student outcomes; create intentional processes for reproduction of successes; and create structures that promote learning of new practices and incentive systems that support them.

Gager, P.J., & Elias, M.J. (1997). Implementing prevention programs in high-risk environments: Application of the resiliency paradigm. American Journal of Orthopsychiatry, 67(3).

Factors related to program institutionalization include an ongoing process of formal and informal training; high visibility in the school; adherence to a regular time schedule; inclusion of special education students as regular program recipients; involvement of recognized community figures to help meet program goals; and support of individuals who carry out the initiative with high shared morale, good communication, and a sense of ownership.

Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities, 33, 445-457.

Factors affecting the sustainability of core teaching strategies include teacher understanding, teachers' willingness to consider new content and pedagogical approaches, teacher efficacy, and membership in a professional community.

Gomez, B.J., Greenberg, M.T., & Feinberg, M. (in press). Sustainability of community coalitions: A study of 20 coalitions under Communities That Care. Prevention Research Center, Pennsylvania State University.

A continuing board provides the best measure of sustainability. Factors statistically associated with the continuing nature of coalition board activity are key leader knowledge of how to select empirically based prevention programs, the coalition's internal functioning, fidelity of implementation, and technical assistance ratings.

Goodman, R.M., & Steckler, A. (1989). A model for institutionalization of health promotion programs. Family and Community Health, 11(4), 63-78.

Factors related to sustainability include awareness of a problem, concern for the problem, organizational receptivity to change, availability of solutions, adequacy of resources and benefits, convergence of aspirations held by various program constituents, presence of an effective program champion, adjustments between the program and the host organization, and organizational fit.

Marek, L.I., Mancini, J.A., & Brock, D.J. (2000). The national youth at risk program sustainability study. Report to the USDA, Washington, DC.

Factors related to sustainability include community support, collaboration, and use of a variety of sustainability mechanisms (e.g., grants, user fees, advisory boards). Obstacles to sustainability include the difficulty of securing longer-term funding, inadequate numbers of staff, and lack of committed staff.

Paine-Andrews, A., Fisher, J., Campuzano, M.K., Fawcett, S.B., & Berkley-Patton, J. (2000). Promoting sustainability of community health initiatives: An empirical case study. Health Promotion Practice, 1(3), 248-258.

Factors related to sustainability include community awareness of the value of the program, a local champion, strong leadership, fit of the project within a lead agency, type or attributes of community changes produced by the project, and strength of alliances between community organizations with similar missions.

Pentz, M.A. (2000). Institutionalizing community-based prevention through policy change. Journal of Community Psychology, 28, 257-270.

Factors leading to policy change include having outside developers share decision-making with community planners; networking with other community leaders; community leader involvement in policy and program implementation; standardized training of community leaders, vendors, and program providers; and use of comprehensive, multi-component community prevention programs. Barriers include shifts in federal priorities, length of time between policy planning and enactment, and variability in completion of policy activities.

Rogers, E.M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

Factors related to program adoption include characteristics of the innovation (relative advantage, complexity, trialability, observability); communication channels; timing; infrastructure factors; opinion leadership; and social system norms. Factors related to sustainability include compatibility of innovation with clients' needs and resources; involvement with the innovation (ownership); process of individual diffusion (knowledge, persuasion, decision, implementation, confirmation); and process of organizational diffusion (agenda-setting, matching, redefining/restructuring, clarifying, routinizing).

Shediac-Rizkallah, M.C., & Bone, L.R. (1998). Planning for the sustainability of community-based health programs: Conceptual frameworks and future directions for research, practice, and policy. Health Education Research, 13(1), 87-108.

Operational indicators of sustainability include maintenance of benefits achieved through the initial program, level of institutionalization of the program within an organization, and changes in the capacity of the targeted community. Three major groups of factors that influence sustainability are project design and implementation, factors within the organizational setting, and factors in the broader community.

Vaughn, S., Klingner, J., & Hughes, M. (2000). Sustainability of research-based practices. Exceptional Children, 66(2), 163-171.

Teacher/researcher issues that affect sustainability include teacher knowledge and adequate opportunity to weave it into the research; teacher attitudes, including beliefs about the effectiveness of the research and the extent to which research findings should or could influence teaching; contextual factors, including the multiple demands on teachers from their environments; researchers who are open to input from teachers and other school staff; and mutual respect, sensitivity, and responsiveness between teachers and researchers. Other challenges to sustaining research-based educational practices occur when the consequences of implementing a particular research-based practice are not immediately apparent, teachers believe that their pre-research practices are moderately effective, or teachers do not believe that there is enough consensus among researchers to warrant a change in their teaching practices.

The resources listed above were collected and summarized by Meg Small, Ph.D., of the Prevention Research Center at Pennsylvania State University, as part of the development of a comprehensive literature review. We also extend our thanks to the Northeast Center for the Application of Prevention Technologies for providing references.

Selected Resources on Program Implementation

Center for Substance Abuse Prevention (2001). 2001 Annual Report of Science-Based Prevention Programs. Rockville, MD: Author. Available at http://www.samhsa.gov/centers/csap/modelprograms/pdfs/2001Annual.pdf.

Dane, A.V., & Schneider, B.H. (1998). Program Integrity in Primary and Early Secondary Prevention: Are Implementation Effects Out of Control? Clinical Psychology Review, 18, 23-45.

Ferrari, J.R., & Durlak, J. (Eds.) (1998). Program Implementation in Preventive Trials [Special issue]. Journal of Prevention and Intervention in the Community, 17(2).

This journal issue includes the following articles:

  • Why Worry About Implementation Procedures: Why Not Just Do It?

  • Why Program Implementation Is Important

  • Intervention Fidelity in the Psychosocial Prevention and Treatment of Adolescent Depression

  • Implementing a Violence Intervention for Inner-city Adolescents: Potential Pitfalls and Suggested Remedies

  • Successful Program Development Using Implementation Evaluation

  • Design and Implementation of Parent Programs for a Community-Wide Adolescent Alcohol Use Prevention Program

  • Some Exemplars of Implementation

Gottfredson, G.D., Gottfredson, D.C., Czeh, E.R., Cantor, D., Crosse, S.B., & Hantman, I. (2000). National Study of Delinquency Prevention in Schools. Ellicott City, MD: Gottfredson Associates, Inc. Available at http://www.gottfredson.com/national.htm.

Graczyk, P.A., Domitrovich, C.E., & Zins, J.E. (in press). Facilitating the Implementation of Evidence-Based Prevention and Mental Health Promotion Efforts in Schools. In M. Weist, S. Evans, & N. Tashman (Eds.), School Mental Health Handbook, a volume in the series Issues in Clinical Child Psychology (M. Roberts, Ed.).

Greenberg, M.T., Domitrovich, C.E., Graczyk, P., & Zins, J. (January 2001). A Conceptual Model of Implementation for School-Based Preventive Interventions: Implications for Research, Practice, and Policy. Report to the Center for Mental Health Services, Substance Abuse and Mental Health Services Administration, US Department of Health and Human Services.

Zins, J.E., Greenberg, M.T., Elias, M.J., & Pruett, M.K. (Eds.) (2000). Issues in the Implementation of Prevention Programs [Special issue]. Journal of Educational and Psychological Consultation, 11(1).

This journal issue includes the following articles:

  • Increasing Implementation Success in Prevention Programs

  • The Role of the Collaborative to Advance Social and Emotional Learning (CASEL) in Supporting the Implementation of Quality School-Based Prevention Programs

  • Moving Prevention from the Fringes into the Fabric of School Improvement

  • Implementation and Diffusion of the Rainbows Program in Rural Communities: Implications for School-Based Prevention Programming

  • Building Full-Service Schools: Lessons Learned in the Development of Interagency Collaboratives

  • You Can Get There From Here: Using a Theory of Change Approach to Plan Urban Education Reform

  • Partnerships for Implementing School and Community Prevention Programs

  • Building an Intervention: A Theoretical and Practical Infrastructure for Planning, Implementing, and Evaluating a Metropolitan-Wide School-To-Career Initiative

Zins, J.E., Greenberg, M.T., Elias, M.J., & Pruett, M.K. (Eds.) (2000). Measurement of Quality of Implementation of Prevention Programs [Special issue]. Journal of Educational and Psychological Consultation, 11(2).

This journal issue includes the following articles:

  • Promoting Quality Implementation in Prevention Programs

  • Community Psychology: Partners in Prevention Program Implementation

  • Applying Comprehensive Quality Programming and Empowerment Evaluation to Reduce Implementation Barriers

  • The Study of Implementation: Current Findings from Effective Programs that Prevent Mental Disorders in School-Aged Children

  • A Model to Measure Program Integrity of Peer-Led Health Promotion Programs in Rural Middle Schools: Assessing the Implementation of the Sixth-Grade Goals for Health Program

  • Voices From the Field: Identifying and Overcoming Roadblocks to Carrying Out Programs in Social and Emotional Learning/Emotional Intelligence

  • Serving Children with Special Social and Emotional Needs: A Practical Approach to Evaluating Prevention Programs in Schools and Community Settings

Selected Resources on Program Evaluation

Andrews, F.M., Klem, L., Davidson, T.N., O'Malley, P.M., and Rodgers, W.L. (1978). A guide for selecting statistical techniques for analyzing social science data. Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan.

This guide uses decision trees to map the choices involved in selecting an appropriate statistical technique for a given analysis. More than 100 different statistics or statistical techniques are included in the guide. Some knowledge of statistics is assumed.

Carmona, M.C., Stewart, K., Gottfredson, D.C., and Gottfredson, G.D. (1998). A guide for evaluating prevention effectiveness, CSAP Technical Report (NCADI Publication No. 98-3237). Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.

This guide provides practitioners with basic evaluation concepts and tools. It describes commonly used research designs and their strengths and weaknesses. Qualitative and quantitative data collection methods used in process and outcome evaluation are described. Basic concepts in data analysis are also discussed.

Flaxman, E. (Ed.) (2001). Evaluating School Violence Programs. New York, NY: ERIC Clearinghouse on Urban Education and Institute for Urban Minority Education.

This monograph provides the information program administrators need to integrate evaluation into their programs naturally, using their own staffs or consultants. Separate essays offer an overview of the evaluation process, tools for measuring the effectiveness of one specific type of violence prevention program, and a comprehensive review of assessment resources currently available in print and over the Internet. The document also includes a table of school violence resource guides and a school violence resource guide content checklist. Ordering information can be obtained online at http://eric-web.tc.columbia.edu.

French, J.F., and Kaufman, N.J. (Eds.) (1981). Handbook for prevention evaluation: Prevention evaluation guidelines. Publication No. ADM81-1145. Washington, DC: National Institute on Drug Abuse, National Institutes of Health.

This handbook was written for evaluator-practitioner teams working to apply their skills in the assessment and improvement of prevention programs. Topics include models of prevention, evaluation design, indicators and measures for process and outcome evaluation, and reporting evaluation results. It contains an extensive appendix on instruments and data sources.

Hawkins, J.D., and Nederhood, B. (1987). Handbook for evaluating drug and alcohol prevention programs: Staff/team evaluation of prevention programs (Publication No. (ADM) 87-1512). Washington, DC: US Department of Health and Human Services.

This handbook provides program managers with a comprehensive tool for guiding their evaluation efforts. It discusses instruments and activities for determining program effectiveness (outcome evaluation) and for documenting and monitoring the delivery of services (process evaluation). The major topics it addresses are evaluation design, measuring outcomes, measuring implementation, data collection, data analysis, and reporting study findings. Worksheets, sample instruments, and a bibliography are included.

Isaac, S., and Michael, W.B. (1983). Handbook in research and evaluation: A collection of principles, methods, and strategies useful in planning, design, and evaluation of studies in education and the behavioral sciences (2nd ed.). San Diego, CA: EdITS Publishers.

This book summarizes basic information on research and evaluation methods. It is intended to help practitioners choose the best technique for a particular study. The major topics include planning evaluation and research studies, research design and methods, instrumentation and measurement, data analysis, and reporting a research study. It contains many tables and worksheets.

W. K. Kellogg Foundation (1998). W. K. Kellogg Foundation Evaluation Handbook. Battle Creek, MI: Collateral Management Company.

This handbook provides a framework for thinking about evaluation as a program tool. It was written for project directors with direct responsibility for the evaluation of Kellogg Foundation-funded projects. It discusses how to prepare for an evaluation (e.g., developing evaluation questions, budgeting for evaluation, selecting an evaluator), how to design and conduct an evaluation (e.g., data collection methods, analyzing and interpreting data), and how to report findings. The handbook contains worksheets, charts, and a bibliography on evaluation. Full text is available online at http://www.wkkf.org/pubs/Pub770.pdf.

Moberg, D.P. (1984). Evaluation of prevention programs: A basic guide for practitioners. Madison, WI: Board of Regents of the University of Wisconsin System for the Wisconsin Clearinghouse.

This guide is intended for practitioners involved in planning and delivering local prevention services. Definitions and uses of program evaluation are described. Recommended steps for planning and implementing a program evaluation are detailed.

Muraskin, L.D. (1993). Understanding evaluation: The way to better prevention programs. Publication No. ED/OESE92-41. Washington, DC: U.S. Department of Education.

This handbook was written to help school and community agency staff carry out the evaluations required under the Drug-Free Schools and Communities Act. Its premise is that many evaluations using simple designs can be conducted without formal training in program evaluation. The author outlines checkpoints in the evaluation process at which practitioners may want to consult with evaluation specialists. Topics discussed include evaluation design, data collection methods and instruments, and interpreting and reporting findings. The handbook also walks through the evaluation of a hypothetical prevention program. This publication can be ordered through ERIC at http://www.ed.gov/pubs/pubdb.html.

Thompson, N.J., and McClintock, H.O. (1998). Demonstrating your program's worth: A primer on evaluation for programs to prevent unintentional injury. Atlanta, GA: National Center for Injury Prevention and Control, Centers for Disease Control and Prevention.

Addressed to program managers, this guide describes the process involved in conducting a simple evaluation (formative, process, impact, and outcome), how to hire an evaluator, and how to incorporate evaluation activities into a prevention program. Appendices include information on sample questionnaire/interview items, events or activities to observe, and types of records to maintain. The guide provides a glossary and a bibliography on evaluation. It also includes sources of information on violence; injuries that take place in the home, on the road, or during leisure activities; acute care, rehabilitation, and disabilities; and general sources on injury control/prevention. Ordering information for this publication is available at http://www.cdc.gov/ncipc/pub-res/demonstr.htm.

You may also want to consult the Northeast CAPT about "Locating, Hiring, and Managing an Evaluator," its training on working with an outside evaluator.

