
CHAPTER ONE

INTRODUCTION

Once upon a time there was a manager who was responsible for starting up a new pedestrian safety program. Because it was new, her boss asked her to evaluate the program to find out how well it worked. Alarm bells rang in her head; she had never done an evaluation and it seemed way beyond her ability. When she discussed this assignment in her regular staff meeting, one of the staff volunteered to take on the responsibility. Greatly relieved, she gave him free rein.

The staff member immediately busied himself designing data collection forms and survey instruments. He wrote instruction manuals for filling out the forms and distributed them to the folks who were involved in publicizing the program. His research designs called for dividing the city into four regions that would each receive different combinations of the program’s components. His weekly project reports were filled with detailed accounts of new forms, focus group protocols, new data collection and analytical procedures, and statistical tests. It seemed that everything was under control.

As the program reached its peak of activity, things took a turn for the worse. Data collectors weren’t filling out the forms correctly, and no one could get a handle on the mountains of data the survey produced. The evaluator spent most of his time analyzing the change in public perception of the program; the difference was statistically significant, but so small as to be practically negligible. The progress reports started documenting why it was impossible to conduct a valid evaluation, with “changes in data definitions” and “confounding variables” leading the list of excuses.

The net result was that more than 20 percent of the project’s resources were spent on evaluation, and no one could answer the simple question “Did it work?” The project manager vowed “Never again!”

The term evaluation evokes similar nightmares for many people working in the public sector. We have all heard stories about expensive evaluation efforts that yield reams of complex data and end up confusing everyone. None of us wants an evaluation like that. We want to document the good parts of our program and find the things that need to be changed.

Evaluation refers to the process by which someone determines the value of something.

Value doesn’t mean only monetary value, so evaluation doesn’t necessarily involve converting something into a dollars-and-cents issue. It is simply examining, appraising, or judging the worth of a particular item or program.

We all conduct evaluations whenever we are contemplating a major purchase. If we are considering buying a new car, we must decide whether the vehicle is worth the asking price. We go through three distinct evaluation processes to make that determination.

  1. We first determine what we need in a car and what we would like to have. (Maybe I want a car that makes me “look good” behind the wheel.)

  2. We then determine if the car we are looking at will meet these needs and wants. (The sassy red convertible definitely fits the bill.)

  3. If it does, we must decide if we are willing to pay the price being asked. (Am I willing to pay $6,000 more than I planned in order to “look good”?)

To conduct an evaluation of a bicycle helmet use campaign, you probably do not have to design a complicated experiment. You really just need to collect helmet use data before and after your program, being careful to follow the exact same procedures both times. Many people make the mistake of assuming that unless a program evaluation involves a complex research design and sophisticated statistical analyses, it can’t be a good evaluation. This is not true. Program evaluations do not have to be full-blown experiments in order to be valid. They just have to be carefully planned and executed.

Once we have purchased the car, we probably continue to evaluate, but we sometimes call it “having second thoughts.” After the purchase is made, we try to determine if we made a good choice. Did the car deliver on the advertising promises? Did it meet our personal needs and wants? Did it actually cost what we planned, or did the car require a lot of expensive maintenance to keep it running? If I had it to do over, would I buy the same car? Would I recommend it to a friend?

When you are implementing a traffic safety program, you should be making the same types of judgments. You build evaluation into your program so that you can determine:

  • The exact nature of the traffic safety problem you are trying to address (10 percent of the 50 traffic-related deaths last year were child bicyclists. None of the children were wearing bike helmets.)

  • Reasonable goals and objectives for reducing this problem (to decrease the number of bicyclist fatalities by increasing bike helmet usage to 80 percent among child bicyclists)

  • How well the program you implemented accomplished your objectives. (Bike helmet usage increased from 45 percent before the program to 85 percent after the program.)

What do you notice about these three bullets? They are specific, focused, and practical.

First, there is a specific problem. (The kids who died were not wearing bicycle helmets.) Next, there is one focused program approach to address this problem. (Increase bicycle helmet use.) Note that there is no mention of how you are going to do this: free helmets, school programs, bike safety events, or whatever. Finally, there is a practical measure of the progress your program made. (Document the change in bicycle helmet use.)
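
If you are curious what “documenting the change” might look like in practice, here is a minimal sketch in Python using hypothetical counts chosen to match the example above (90 of 200 observed riders wearing helmets before the program, 170 of 200 after). The two-proportion z-test at the end is one common way an evaluator might check that an observed change is bigger than chance alone would produce; treat it as an illustration, not a prescription.

    # A minimal before-and-after comparison. All counts are hypothetical.
    from math import sqrt

    before_helmeted, before_total = 90, 200    # 45 percent helmet use before
    after_helmeted, after_total = 170, 200     # 85 percent helmet use after

    p_before = before_helmeted / before_total
    p_after = after_helmeted / after_total
    print(f"Before: {p_before:.0%}  After: {p_after:.0%}  "
          f"Change: {p_after - p_before:+.0%}")

    # Two-proportion z-test: is the change larger than chance alone
    # would produce?
    p_pooled = (before_helmeted + after_helmeted) / (before_total + after_total)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / before_total + 1 / after_total))
    z = (p_after - p_before) / se
    print(f"z = {z:.2f}")  # beyond 1.96 is significant at the usual 5 percent level

    # As the opening story showed, also report the size of the change,
    # not just whether it is statistically significant.

The arithmetic is the easy part; the hard part, as noted earlier, is collecting the before and after data the exact same way both times.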

Why You Want to Read This Guide

A lot has been said over the years about the importance of program evaluation in traffic safety. At various times, program managers have been required to allocate a specified percentage of their program budgets to program evaluation. Training programs have been developed on how to evaluate traffic safety programs using such statistical tools as time series analysis and multiple regression. Yet despite all of this attention, criticism continues to pour in that most traffic safety programs are never actually evaluated. And it is no wonder. Some program managers are convinced that program evaluation is too hot to handle, that it causes nothing but trouble, and that it costs a fortune to boot.

This Guide will convince you otherwise!

It is designed to alleviate your fears about program evaluation and convince you that conducting an appropriate evaluation actually makes your job easier rather than harder. The focus is on what evaluation can do for you, not the other way around.

The Guide provides an overview of the steps involved in program evaluations and gets you thinking about how these steps fit into your implementation plans. It also provides some handy suggestions on how to find and work with an evaluation consultant. And finally, it provides a glossary of evaluation terms and concepts so that you can speak with confidence when the topic turns to “proving results.” (When you encounter an underlined term such as Before and After Design, you can refer to the Glossary for its definition.)

It is equally important that you recognize what this Guide is not. It will not give you detailed, step-by-step instructions on how to evaluate a traffic safety program. Our assumption is that you are already too busy to take on a new career as an evaluation specialist. There are talented individuals in your own community who can help you design and conduct an appropriate evaluation. This Guide will tell you how to find and work with them.

The focus of this Guide is on using limited resources to maximum practical advantage. This means conducting an evaluation that is appropriate to the size and scope of the program you are implementing.

Who the Guide is For

Before we go any further, it’s time to share the assumptions we have made about who you are. If you are a state or local traffic safety project director with at least some curiosity about program evaluation, this Guide is for you. Our assumption is that you do not have a background in experimental design or statistics and have no intention of becoming an evaluation expert. (If you really want to become an expert, you should enroll in some college-level statistics courses; this is not one of those subjects you can teach yourself from a book!) You need to understand:

  • what type of evaluation is reasonable for the type of program you are implementing;
  • what you can do to maximize the success of a program evaluation; and
  • where you can get help.

If that is what you are looking for, this Guide is for you!

How the Rest of the Guide is Organized

The remainder of this Guide is organized into six sections and an appendix. They are:

  1. The Evaluation Mentality—This is where we convince you that program evaluation is always a good idea.

  2. In Search of the Appropriate Evaluation—A discussion of what you can reasonably expect a state or community program evaluation to accomplish.

  3. Evaluation Step-By-Step—A high level overview of the steps involved in program evaluation, from defining your problem to reporting results.

  4. Getting Help—What you should expect from an evaluator, where to find them, and how to work with them.

  5. Closing Comments—A wrap-up of the arguments in support of always evaluating your program efforts.

  6. Glossary of Terms—Some basic evaluation terms defined to increase your comfort level around evaluators.