The Role of Evaluation in Prevention of College Student Drinking Problems
Robert F. Saltz, Ph.D.
April 2002
Prepared for the National Institute on Alcohol Abuse and Alcoholism's National
Advisory Council on Alcohol Abuse and Alcoholism Subcommittee on College Drinking,
Panel 2: Prevention and Treatment.
Newcomers to the topic of college student drinking are often puzzled to learn
that our knowledge of “what works” is exceedingly slim. Those who work on college
campuses or who have been students themselves know that there are any number
of activities sponsored by the school or one of its many affiliates, all done
in the name of preventing student drinking problems. Yet, apart from some recent
and promising interventions aimed at individual drinkers (Larimer and Cronce,
2002), the conscientious program specialist will find little empirical evidence
available to guide his or her choice of interventions aimed at the broader college
population. In their review of the literature on college interventions, Hingson
and colleagues (1998) found only a handful of programs with any appreciable
evaluation over a span of two decades. Useful guides meant to
aid administrators and program managers by identifying “promising practices”
(e.g., Anderson and Milgram, 1997) are unable to find much empirical evidence
of positive impact from those interventions.
The irony is that this failing is observed precisely in those settings (i.e., institutions of
higher education) where, presumably, the commitment to empirical research is high, and expertise
in evaluation is available. How, then, can we account for the lack of a research base? Likely, it
results from a combination of interrelated factors. First, the academic setting may hold evaluation
to a standard equivalent to that of research conducted for publication in academic journals. That
standard is appropriate for some programs (e.g., NIH-funded research projects), but probably not
for programs a campus sponsors on its own. A second, closely related possibility is that
academic researchers may have difficulty accommodating their own interests and expectations to a
project that might have limited promise of future publication. Of course, there is often a fear
that evaluation may threaten favored programs or funding for prevention efforts. Sometimes
evaluation is seen as a drain on funds that “should” be going to direct services. Inertia cannot
be ruled out as a factor, either.
This chapter is aimed at encouraging program evaluation, though not by turning program
specialists or college administrators into evaluators, and not by providing a “primer” on
evaluation methods. Rather, we will lay out a rationale for building some aspect of evaluation
into all interventions (whether they be programs or policies), discuss the barriers to evaluation
and how they might be overcome, and then clarify the role of administrators and program planners
with respect to successful evaluation.
In brief, this chapter hopes to encourage administrators and program managers to address the need
for evaluation, even if only by taking the first steps toward full-scale implementation. In the
sections to follow, we identify something of a “hierarchy” of evaluation,
beginning with the need to clarify program goals and objectives, moving on to identifying and
obtaining data relevant to measuring whether those objectives are being achieved, and ending with
designing evaluations so as to maximize confidence in the validity of the analysis while providing
useful information to guide the future direction of specific prevention programs.
College and university administrators, along with state boards and even legislatures governing
multiple campuses, are in a particularly important position to encourage the development of
evaluation activities on their campuses. They can provide resources for data and evaluation, they
can create a supportive atmosphere that overcomes fears that evaluation might threaten programs or
positions, and they can set a priority for clear program objectives and strategies. It is our hope
that working towards these goals will break the stalemate resulting from insufficient evaluation
and the dominance of conventional wisdom in prevention programming.
To begin, then, evaluation has been described as the application of empirical research methods to
the “conceptualization, design, implementation, and utility of social intervention programs”
(Rossi et al., 1999). Evaluation turns the question of whether, or how well, a program works into a
research topic that can be addressed by a broad array of methods, measures, and analytic
strategies.